Error when executing MavenCli in a loop (maven-embedder) - java

What is the problem when I execute the Maven command in a loop? The goal is to update the version in the pom.xml of each bundle in a list. On the first iteration, Maven executes correctly (pom.xml is updated), but it fails for every item after that.
for (String bundlePath : bundlesToUpdate)
{
    MavenCli cli = new MavenCli();
    String[] arguments = {
        "-Dtycho.mode=maven",
        "org.eclipse.tycho:tycho-versions-plugin:set-version",
        "-DgenerateBackupPoms=false",
        "-DnewVersion=" + version};
    int result = cli.doMain(arguments, bundlePath, System.out, System.err);
}
The same error occurs with this code, where the MavenCli instance is created outside the loop:
MavenCli cli = new MavenCli();
for (String bundlePath : bundlesToUpdate)
{
    String[] arguments = {
        "-Dtycho.mode=maven",
        "org.eclipse.tycho:tycho-versions-plugin:set-version",
        "-DgenerateBackupPoms=false",
        "-DnewVersion=" + version};
    int result = cli.doMain(arguments, bundlePath, System.out, System.err);
}
The first time, it's OK:
[main] INFO org.eclipse.tycho.versions.manipulation.PomManipulator - pom.xml//project/version: 2.2.6-SNAPSHOT => 2.2.7-SNAPSHOT
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - ------------------------------------------------------------------------
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - Reactor Summary:
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger -
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - XXXX project ....................... SUCCESS [ 0.216 s]
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - com.sungard.valdi.bus.fixbroker.client.bnp ........ SKIPPED
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - XXX project Feature ...................... SKIPPED
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - ------------------------------------------------------------------------
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - BUILD SUCCESS
After that, the errors are:
[main] ERROR org.apache.maven.cli.MavenCli - Error executing Maven.
[main] ERROR org.apache.maven.cli.MavenCli - java.util.NoSuchElementException
role: org.apache.maven.eventspy.internal.EventSpyDispatcher
roleHint:
[main] ERROR org.apache.maven.cli.MavenCli - Caused by: null
[main] ERROR org.apache.maven.cli.MavenCli - Error executing Maven.
[main] ERROR org.apache.maven.cli.MavenCli - java.util.NoSuchElementException
role: org.apache.maven.eventspy.internal.EventSpyDispatcher
roleHint:

The solution I found is to use Maven Invoker; it works fine for the same functionality:
public class MavenInvoker {
    public static void main(String[] args) throws IOException, NoHeadException, GitAPIException
    {
        MavenInvoker toto = new MavenInvoker();
        toto.updateVersionMavenInvoker("2.2.8-SNAPSHOT", "TECHNICAL\\WEB");
    }

    private InvocationRequest request = new DefaultInvocationRequest();
    private DefaultInvoker invoker = new DefaultInvoker();

    public InvocationResult updateVersionMavenInvoker(String newVersion, String folderPath)
    {
        InvocationResult result = null;
        request.setPomFile(new File(folderPath + "\\pom.xml"));
        String version = "-DnewVersion=" + newVersion;
        request.setGoals(Arrays.asList("-Dtycho.mode=maven",
            "org.eclipse.tycho:tycho-versions-plugin:set-version",
            "-DgenerateBackupPoms=false",
            version));
        try {
            result = invoker.execute(request);
        } catch (MavenInvocationException e) {
            e.printStackTrace();
        }
        return result;
    }
}
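Note that Maven Invoker shells out to an installed Maven, so the invoker must be able to find the installation. If neither the maven.home system property nor the M2_HOME environment variable is set, you can point the invoker at it explicitly (the path here is only an example):

invoker.setMavenHome(new File("/usr/share/maven"));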

This works for me inside a custom Maven plugin (using Maven 3.5.0):
ClassRealm classRealm = (ClassRealm) Thread.currentThread().getContextClassLoader();
MavenCli cli = new MavenCli(classRealm.getWorld());
cli.doMain( ... );
The plexus Launcher sets the context class loader to its ClassRealm, which has access to the "global" ClassWorld.
Not sure how stable that solution is, but so far looking good.
Used imports:
import org.codehaus.plexus.classworlds.ClassWorld;
import org.codehaus.plexus.classworlds.realm.ClassRealm;
import org.apache.maven.cli.MavenCli;

See the email thread for a more detailed explanation: https://dev.eclipse.org/mhonarc/lists/sisu-users/msg00063.html
It seems the correct way is to give MavenCli a ClassWorld instance on construction so it can maintain proper state across multiple calls.
Example:
final ClassWorld classWorld = new ClassWorld("plexus.core", getClass().getClassLoader());
MavenCli cli = new MavenCli(classWorld);
String[] arguments = {
    "-Dtycho.mode=maven",
    "org.eclipse.tycho:tycho-versions-plugin:set-version",
    "-DgenerateBackupPoms=false",
    "-DnewVersion=" + version};
int result = cli.doMain(arguments, bundlePath, System.out, System.err);
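Putting it together with the loop from the question, a minimal sketch (bundlesToUpdate and version come from the question; the ClassWorld and MavenCli are created once and reused across iterations; in a static context, use the context class loader instead of getClass().getClassLoader()):

final ClassWorld classWorld = new ClassWorld("plexus.core", Thread.currentThread().getContextClassLoader());
MavenCli cli = new MavenCli(classWorld);
for (String bundlePath : bundlesToUpdate) {
    String[] arguments = {
        "-Dtycho.mode=maven",
        "org.eclipse.tycho:tycho-versions-plugin:set-version",
        "-DgenerateBackupPoms=false",
        "-DnewVersion=" + version};
    // a non-zero return code means set-version failed for this bundle
    int result = cli.doMain(arguments, bundlePath, System.out, System.err);
    if (result != 0) {
        System.err.println("set-version failed for " + bundlePath);
    }
}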

Related

Executing Sample Flink Kafka consumer

I'm trying to create a simple Flink Kafka consumer
public class ReadFromKafka {
    public static void main(String[] args) throws Exception {
        // create execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("group.id", "flink_consumer");

        DataStream<String> stream = env
            .addSource(new FlinkKafkaConsumer09<>("test", new SimpleStringSchema(), properties));

        stream.map(new MapFunction<String, String>() {
            private static final long serialVersionUID = -6867736771747690202L;

            @Override
            public String map(String value) throws Exception {
                return "Stream Value: " + value;
            }
        }).print();

        env.execute();
    }
}
It gives me this error:
INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.3.0
16:47:28,448 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: fc1aaa116b661c8a
16:47:28,448 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1563029248441
16:47:28,451 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09 - Trying to get partitions for topic test
16:47:28,775 INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-1, groupId=flink_consumer] Cluster ID: 4rz71KZCS_CSasZMrFBNKw
16:47:29,858 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09 - Got 1 partitions from these topics: [test]
16:47:29,859 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer is going to read the following topics (with number of partitions):
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/api/java/operators/Keys
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:994)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:955)
at myflink.ReadFromKafka.main(ReadFromKafka.java:43)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.api.java.operators.Keys
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 3 more
Process finished with exit code 1
According to your stack trace, Java could not find a class.
Caused by: java.lang.ClassNotFoundException: org.apache.flink.api.java.operators.Keys
This class is in the flink-java_2.11 jar, which you may be missing from your dependencies.
https://www.javadoc.io/doc/org.apache.flink/flink-java_2.11/0.10.2
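If you build with Maven, the dependency would look like the sketch below (the flink-java_2.11 artifact matches the javadoc link above; in more recent Flink releases the artifact is plain flink-java without the Scala suffix, and the version should match the rest of your Flink dependencies):

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>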

btrace didn't print out anything when the specified method was invoked

I'm learning how to use btrace. In order to do that, I created a spring-boot project which contained the following code.
@Controller
public class MainController {
    private static Logger logger = LoggerFactory.getLogger(MainController.class);

    @ResponseBody
    @GetMapping("/testFile")
    public Map<String, Object> testFile() throws IOException {
        File file = new File("/tmp/a");
        if (file.exists()) {
            file.delete();
        }
        file.createNewFile();
        return ImmutableMap.of("success", true);
    }
}
Then I started the project using mvn spring-boot:run, after which I wrote a btrace script, as follows.
import com.sun.btrace.annotations.*;
import com.sun.btrace.BTraceUtils;

@BTrace
public class HelloWorld {
    @OnMethod(clazz = "java.io.File", method = "createNewFile")
    public static void onNewFileCreated(String fileName) {
        BTraceUtils.println("New file is being created");
        BTraceUtils.println(fileName);
    }
}
As you can see, this script should print something when java.io.File#createNewFile is called, which is exactly what the above controller does. Then I attached btrace to the running spring-boot project using the following command.
btrace 30716 HelloWorld.java
30716 is the PID of the running spring-boot project. Then I tried accessing http://localhost:8080/testFile, and I got the following extra output from the running spring-boot project.
objc[30857]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_151.jdk/Contents/Home/bin/java (0x10e2744c0) and /Library/Java/JavaVirtualMachines/jdk1.8.0_151.jdk/Contents/Home/jre/lib/libinstrument.dylib (0x1145e24e0). One of the two will be used. Which one is undefined.
2019-01-04 11:24:49.003 INFO 30857 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2019-01-04 11:24:49.003 INFO 30857 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2019-01-04 11:24:49.019 INFO 30857 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 16 ms
I was expecting it to output New file is being created, but it didn't. Why? Did I do something wrong?
Your trace method, onNewFileCreated(String fileName), cannot be used to intercept java.io.File.createNewFile() as the signatures don't agree (createNewFile() doesn't take any arguments, while onNewFileCreated() has one). If there are arguments in the trace method (unless they have a BTrace annotation), BTrace will attempt to "bind" them to the arguments in the intercepted method. If it can't do so, it will not successfully intercept that method.
Try
@OnMethod(clazz = "java.io.File", method = "createNewFile")
public static void onNewFileCreated() {
    BTraceUtils.println("method createNewFile called");
}
or
@OnMethod(clazz = "java.io.File", method = "createNewFile")
public static void onNewFileCreated(@ProbeMethodName String methodName) {
    BTraceUtils.println("method " + methodName + " called");
}
Update 1:
First, what version of the JDK are you using? BTrace doesn't appear to support JDK > 8 (https://github.com/btraceio/btrace/issues/292).
Second, can you try running this tracing script:
import com.sun.btrace.annotations.*;
import com.sun.btrace.BTraceUtils;

@BTrace
public class TracingScript {
    @OnMethod(clazz = "java.io.File", method = "createNewFile")
    public static void onNewFileCreated(@ProbeMethodName String methodName) {
        BTraceUtils.println("method " + methodName + " called");
    }
}
against a simple test application:
import java.io.File;

public class FileCreator {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 250; i++) {
            File file = new File("C://Temp//file" + i);
            if (file.exists()) {
                file.delete();
            }
            file.createNewFile();
            Thread.sleep(10000);
        }
    }
}
This works for me with BTrace 1.3.11.3 (and via the BTrace Workbench JVisualVM Plugin 0.6.8, which is where I usually use BTrace).

Why is the result of pluginManager.getExtensions empty?

When trying to use PF4J, I created the necessary parts as outlined in
https://github.com/pf4j/pf4j
an Interface that extends the ExtensionPoint
a Plugin
Jar with Manifest
Plugin load and activation
Why is the List of clickHandlers empty?
I have tested this with a JUnit test where I can debug the other parts which seem to work fine. See debug log below.
I have also looked at https://github.com/pf4j/pf4j/issues/21 and activated the Eclipse annotation processing with no positive effect.
1. Interface that extends the Extension Point
public interface ClickHandler extends ExtensionPoint {
...
}
2. a Plugin
public class MBClickHandlerPlugin extends Plugin {
    /**
     * construct me
     * @param wrapper
     */
    public MBClickHandlerPlugin(PluginWrapper wrapper) {
        super(wrapper);
    }

    @Extension
    public static class MBClickHandler implements ClickHandler {
    }
}
3. Jar with Manifest
unzip -q -c target/com.bitplan.mb-0.0.1.jar META-INF/MANIFEST.MF
Manifest-Version: 1.0
Plugin-Dependencies:
Plugin-Id: com.bitplan.mb
Built-By: wf
Plugin-Provider: BITPlan GmbH
Plugin-Version: 0.0.1
Plugin-Class: com.bitplan.mb.MBClickHandlerPlugin
Created-By: Apache Maven 3.5.2
Build-Jdk: 1.8.0_152
4. Plugin load and activation
/**
 * activate the plugins requested on the command line
 */
public void activatePlugins() {
    pluginManager = new DefaultPluginManager();
    for (String plugin : plugins) {
        Path pluginPath = Paths.get(plugin);
        pluginManager.loadPlugin(pluginPath);
    }
    pluginManager.startPlugins();
    List<ClickHandler> clickHandlers = pluginManager
        .getExtensions(ClickHandler.class);
    for (ClickHandler clickHandler : clickHandlers) {
        installClickHandler(clickHandler);
    }
}
Debug log
22 [main] DEBUG org.pf4j.CompoundPluginDescriptorFinder - Try to continue with the next finder
22 [main] DEBUG org.pf4j.CompoundPluginDescriptorFinder - 'org.pf4j.ManifestPluginDescriptorFinder@73d4cc9e' is applicable for plugin '/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar'
24 [main] DEBUG org.pf4j.AbstractPluginManager - Found descriptor PluginDescriptor [pluginId=com.bitplan.mb, pluginClass=com.bitplan.mb.MBClickHandlerPlugin, version=0.0.1, provider=BITPlan GmbH, dependencies=[], description=, requires=*, license=null]
24 [main] DEBUG org.pf4j.AbstractPluginManager - Class 'com.bitplan.mb.MBClickHandlerPlugin' for plugin '/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar'
24 [main] DEBUG org.pf4j.AbstractPluginManager - Loading plugin '/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar'
24 [main] DEBUG org.pf4j.CompoundPluginLoader - 'org.pf4j.DefaultPluginLoader@6366ebe0' is not applicable for plugin '/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar'
24 [main] DEBUG org.pf4j.CompoundPluginLoader - 'org.pf4j.JarPluginLoader@44f75083' is applicable for plugin '/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar'
25 [main] DEBUG org.pf4j.PluginClassLoader - Add 'file:/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar'
25 [main] DEBUG org.pf4j.AbstractPluginManager - Loaded plugin '/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar' with class loader 'org.pf4j.PluginClassLoader@43d7741f'
25 [main] DEBUG org.pf4j.AbstractPluginManager - Creating wrapper for plugin '/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar'
25 [main] DEBUG org.pf4j.AbstractPluginManager - Created wrapper 'PluginWrapper [descriptor=PluginDescriptor [pluginId=com.bitplan.mb, pluginClass=com.bitplan.mb.MBClickHandlerPlugin, version=0.0.1, provider=BITPlan GmbH, dependencies=[], description=, requires=*, license=null], pluginPath=/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar]' for plugin '/Users/wf/Documents/workspace/com.bitplan.mb/target/com.bitplan.mb-0.0.1.jar'
26 [main] DEBUG org.pf4j.DependencyResolver - Graph:
com.bitplan.mb -> []
26 [main] DEBUG org.pf4j.DependencyResolver - Plugins order: [com.bitplan.mb]
27 [main] INFO org.pf4j.AbstractPluginManager - Plugin 'com.bitplan.mb@0.0.1' resolved
27 [main] INFO org.pf4j.AbstractPluginManager - Start plugin 'com.bitplan.mb@0.0.1'
27 [main] DEBUG org.pf4j.DefaultPluginFactory - Create instance for plugin 'com.bitplan.mb.MBClickHandlerPlugin'
28 [main] DEBUG org.pf4j.AbstractExtensionFinder - Finding extensions of extension point 'com.bitplan.uml2mxgraph.ClickHandler'
28 [main] DEBUG org.pf4j.LegacyExtensionFinder - Reading extensions storages from classpath
28 [main] DEBUG org.pf4j.AbstractExtensionFinder - No extensions found
28 [main] DEBUG org.pf4j.LegacyExtensionFinder - Reading extensions storages from plugins
28 [main] DEBUG org.pf4j.LegacyExtensionFinder - Reading extensions storage from plugin 'com.bitplan.mb'
28 [main] DEBUG org.pf4j.LegacyExtensionFinder - Cannot find 'META-INF/extensions.idx'
28 [main] DEBUG org.pf4j.AbstractExtensionFinder - No extensions found
28 [main] DEBUG org.pf4j.AbstractExtensionFinder - Finding extensions of extension point 'com.bitplan.uml2mxgraph.ClickHandler' for plugin 'null'
28 [main] DEBUG org.pf4j.AbstractExtensionFinder - Finding extensions of extension point 'com.bitplan.uml2mxgraph.ClickHandler' for plugin 'com.bitplan.mb'
29 [main] DEBUG org.pf4j.AbstractExtensionFinder - Found 0 extensions for extension point 'com.bitplan.uml2mxgraph.ClickHandler'
Workaround #1
Use a customized PluginManager:
pluginManager = new JarPluginManager(this.getClass().getClassLoader());
passing in the classloader of the class that will use the plugin, to make sure the same classloader is used.
JarPluginManager source code:
import java.nio.file.Path;

import org.pf4j.DefaultPluginManager;
import org.pf4j.JarPluginLoader;
import org.pf4j.ManifestPluginDescriptorFinder;
import org.pf4j.PluginClassLoader;
import org.pf4j.PluginDescriptor;
import org.pf4j.PluginDescriptorFinder;
import org.pf4j.PluginLoader;
import org.pf4j.PluginManager;

/**
 * see https://github.com/pf4j/pf4j/issues/249
 * see https://pf4j.org/doc/class-loading.html
 *
 * @author wf
 */
public class JarPluginManager extends DefaultPluginManager {

    public static class ParentClassLoaderJarPluginLoader extends JarPluginLoader {
        static ClassLoader parentClassLoader;
        static PluginClassLoader pluginClassLoader;

        /**
         * @param pluginManager
         */
        public ParentClassLoaderJarPluginLoader(PluginManager pluginManager) {
            super(pluginManager);
        }

        @Override
        public ClassLoader loadPlugin(Path pluginPath,
            PluginDescriptor pluginDescriptor) {
            if (pluginClassLoader == null) {
                boolean parentFirst = true;
                pluginClassLoader = new PluginClassLoader(pluginManager,
                    pluginDescriptor, parentClassLoader, parentFirst);
            }
            pluginClassLoader.addFile(pluginPath.toFile());
            return pluginClassLoader;
        }
    }

    /**
     * construct me with the given classloader
     * @param classLoader
     */
    public JarPluginManager(ClassLoader classLoader) {
        ParentClassLoaderJarPluginLoader.parentClassLoader = classLoader;
        //System.setProperty("pf4j.mode", RuntimeMode.DEPLOYMENT.toString());
        //System.setProperty("pf4j.mode", RuntimeMode.DEVELOPMENT.toString());
    }

    @Override
    protected PluginLoader createPluginLoader() {
        // load only jar plugins
        return new ParentClassLoaderJarPluginLoader(this);
    }

    @Override
    protected PluginDescriptorFinder createPluginDescriptorFinder() {
        // read plugin descriptor from jar's manifest
        return new ManifestPluginDescriptorFinder();
    }
}
Workaround #2
If the extensions.idx file is not created, there is something wrong with your annotation processing. You might want to fix the source of the problem, but it is also possible to work around it:
https://groups.google.com/forum/#!topic/pf4j/nn20axJHpfI
This thread pointed me to creating the META-INF/extensions.idx file manually and to making sure there is a no-args constructor for the static inner class. With these changes things work.
Watch out for setting the class name correctly in the extensions.idx file - otherwise you'll end up with a null entry in the handler list.
Watch out for having a no-argument constructor - otherwise you'll end up with an exception.
@Extension
public static class MBClickHandler implements ClickHandler {
    /**
     * constructor with no argument
     */
    public MBClickHandler() {
    }
}
src/main/resources/META-INF/extensions.idx
com.bitplan.mb.MBClickHandlerPlugin$MBClickHandler
Code to check:
Correct name for the extensions.idx entry:
MBClickHandler ch = new MBClickHandler();
File extFile = new File("src/main/resources/META-INF/extensions.idx");
String extidx = FileUtils.readFileToString(extFile, "UTF-8");
assertEquals(extidx, ch.getClass().getName());
Checking the extensions:
List<PluginWrapper> startedPlugins = pluginManager.getStartedPlugins();
for (PluginWrapper plugin : startedPlugins) {
    String pluginId = plugin.getDescriptor().getPluginId();
    System.out.println(String.format("Extensions added by plugin '%s':", pluginId));
    Set<String> extensionClassNames = pluginManager.getExtensionClassNames(pluginId);
    for (String extension : extensionClassNames) {
        System.out.println("   " + extension);
    }
}

InvalidServerSideConfigurationException when creating cache using XML

I'm new to Terracotta. I want to create a clustered server cache but have run into some difficulties with the configuration files.
Here is my tc-config-terracotta.xml file (with which I launch the Terracotta server):
<?xml version="1.0" encoding="UTF-8"?>
<tc-config xmlns="http://www.terracotta.org/config"
           xmlns:ohr="http://www.terracotta.org/config/offheap-resource">
  <servers>
    <server host="localhost" name="clustered">
      <logs>/path/log/terracotta/server-logs</logs>
    </server>
  </servers>
  <plugins>
    <config>
      <ohr:offheap-resources>
        <ohr:resource name="primary-server-resource" unit="MB">128</ohr:resource>
        <ohr:resource name="secondary-server-resource" unit="MB">96</ohr:resource>
      </ohr:offheap-resources>
    </config>
  </plugins>
</tc-config>
I used the ehcache-clustered-3.3.1-kit to launch the server.
$myPrompt/some/dir/with/ehcache/clustered/server/bin>./start-tc-server.sh -f /path/to/conf/tc-config-terracotta.xml
The server starts with no problem:
2017-06-01 11:29:14,052 INFO - New logging session started.
2017-06-01 11:29:14,066 INFO - Terracotta 5.2.2, as of 2017-03-29 at 15:26:20 PDT (Revision 397a456cfe4b8188dfe8b017a5c14346f79c2fcf from UNKNOWN)
2017-06-01 11:29:14,067 INFO - PID is 6114
2017-06-01 11:29:14,697 INFO - Successfully loaded base configuration from file at '/path/to/conf/tc-config-terracotta.xml'.
2017-06-01 11:29:14,757 INFO - Available Max Runtime Memory: 1822MB
2017-06-01 11:29:14,836 INFO - Log file: '/path/log/terracotta/server-logs/terracotta-server.log'.
2017-06-01 11:29:15,112 INFO - Becoming State[ ACTIVE-COORDINATOR ]
2017-06-01 11:29:15,129 INFO - Terracotta Server instance has started up as ACTIVE node on 0:0:0:0:0:0:0:0:9510 successfully, and is now ready for work.
Here is the ehcache-terracotta.xml configuration file
<ehcache:config xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
                xmlns:terracotta='http://www.ehcache.org/v3/clustered'
                xmlns:ehcache='http://www.ehcache.org/v3'
                xsi:schemaLocation="http://www.ehcache.org/v3 http://www.ehcache.org/schema/ehcache-core-3.3.xsd
                                    http://www.ehcache.org/v3/clustered http://www.ehcache.org/schema/ehcache-clustered-ext-3.3.xsd">

  <ehcache:service>
    <terracotta:cluster>
      <terracotta:connection url="terracotta://localhost:9510/clustered" />
      <terracotta:server-side-config auto-create="true">
        <terracotta:default-resource from="primary-server-resource" />
      </terracotta:server-side-config>
    </terracotta:cluster>
  </ehcache:service>

  <ehcache:cache alias="myTest">
    <ehcache:key-type>java.lang.String</ehcache:key-type>
    <ehcache:value-type>java.lang.String</ehcache:value-type>
    <ehcache:resources>
      <terracotta:clustered-dedicated unit="MB">10</terracotta:clustered-dedicated>
    </ehcache:resources>
    <terracotta:clustered-store consistency="strong" />
  </ehcache:cache>
</ehcache:config>
I have a class to test the conf:
import java.net.URL;

import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.Configuration;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.xml.XmlConfiguration;

public class TestTerracottaCacheManager
{
    private static TestTerracottaCacheManager cacheManager = null;
    private CacheManager cm;
    private Cache<Object, Object> cache;
    private static final String DEFAULT_CACHE_NAME = "myTest";
    private String cacheName;

    public static TestTerracottaCacheManager getInstance()
    {
        if (cacheManager == null)
        {
            cacheManager = new TestTerracottaCacheManager();
        }
        return cacheManager;
    }

    private TestTerracottaCacheManager()
    {
        // 1. Create a cache manager
        final URL url =
            TestTerracottaCacheManager.class.getResource("/ehcache-terracotta.xml");
        System.out.println(url);
        Configuration xmlConfig = new XmlConfiguration(url);
        cm = CacheManagerBuilder.newCacheManager(xmlConfig);
        cm.init();
        intializeCache();
    }

    private void intializeCache()
    {
        // 2. Get the cache called "myTest", declared in ehcache-terracotta.xml
        cache = cm.getCache(cacheName == null ? DEFAULT_CACHE_NAME : cacheName,
            Object.class, Object.class);
        if (cache == null)
        {
            throw new NullPointerException();
        }
    }

    public void put(Object key, Object value)
    {
        cache.put(key, value);
    }

    public Object get(String key)
    {
        // 5. Print out the element
        Object ele = cache.get(key);
        return ele;
    }

    public boolean isKeyInCache(Object key)
    {
        return cache.containsKey(key);
    }

    public void closeCache()
    {
        // 7. shut down the cache manager
        cm.close();
    }

    public static void main(String[] args)
    {
        TestTerracottaCacheManager testCache = TestTerracottaCacheManager.getInstance();
        testCache.put("titi", "1");
        System.out.println(testCache.get("titi"));
        testCache.closeCache();
    }

    public String getCacheName()
    {
        return cacheName;
    }

    public void setCacheName(String cacheName)
    {
        this.cacheName = cacheName;
    }
}
I get an exception. Here is the stack trace:
14:18:38.978 [main] ERROR org.ehcache.core.EhcacheManager - Initialize failed.
Exception in thread "main" org.ehcache.StateTransitionException: Unable to validate cluster tier manager for id clustered
at org.ehcache.core.StatusTransitioner$Transition.failed(StatusTransitioner.java:235)
at org.ehcache.core.EhcacheManager.init(EhcacheManager.java:587)
at fr.test.cache.TestTerracottaCacheManager.<init>(TestTerracottaCacheManager.java:41)
at fr.test.cache.TestTerracottaCacheManager.getInstance(TestTerracottaCacheManager.java:28)
at fr.test.cache.TestTerracottaCacheManager.main(TestTerracottaCacheManager.java:81)
Caused by: org.ehcache.clustered.client.internal.ClusterTierManagerValidationException: Unable to validate cluster tier manager for id clusteredENS
at org.ehcache.clustered.client.internal.ClusterTierManagerClientEntityFactory.retrieve(ClusterTierManagerClientEntityFactory.java:196)
at org.ehcache.clustered.client.internal.service.DefaultClusteringService.autoCreateEntity(DefaultClusteringService.java:215)
at org.ehcache.clustered.client.internal.service.DefaultClusteringService.start(DefaultClusteringService.java:148)
at org.ehcache.core.internal.service.ServiceLocator.startAllServices(ServiceLocator.java:118)
at org.ehcache.core.EhcacheManager.init(EhcacheManager.java:559)
... 3 more
Caused by: org.ehcache.clustered.common.internal.exceptions.InvalidServerSideConfigurationException: Default resource not aligned. Client: primary-server-resource Server: null
at org.ehcache.clustered.common.internal.exceptions.InvalidServerSideConfigurationException.withClientStackTrace(InvalidServerSideConfigurationException.java:43)
at org.ehcache.clustered.common.internal.exceptions.InvalidServerSideConfigurationException.withClientStackTrace(InvalidServerSideConfigurationException.java:22)
at org.ehcache.clustered.common.internal.messages.ResponseCodec.decode(ResponseCodec.java:197)
at org.ehcache.clustered.common.internal.messages.EhcacheCodec.decodeResponse(EhcacheCodec.java:110)
at org.ehcache.clustered.common.internal.messages.EhcacheCodec.decodeResponse(EhcacheCodec.java:37)
at com.tc.object.EntityClientEndpointImpl$InvocationBuilderImpl$1.getWithTimeout(EntityClientEndpointImpl.java:193)
at com.tc.object.EntityClientEndpointImpl$InvocationBuilderImpl$1.getWithTimeout(EntityClientEndpointImpl.java:175)
at org.ehcache.clustered.client.internal.SimpleClusterTierManagerClientEntity.waitFor(SimpleClusterTierManagerClientEntity.java:184)
at org.ehcache.clustered.client.internal.SimpleClusterTierManagerClientEntity.invokeInternal(SimpleClusterTierManagerClientEntity.java:148)
at org.ehcache.clustered.client.internal.SimpleClusterTierManagerClientEntity.validate(SimpleClusterTierManagerClientEntity.java:120)
at org.ehcache.clustered.client.internal.ClusterTierManagerClientEntityFactory.retrieve(ClusterTierManagerClientEntityFactory.java:190)
... 7 more
Caused by: org.ehcache.clustered.common.internal.exceptions.InvalidServerSideConfigurationException: Default resource not aligned. Client: primary-server-resource Server: null
at org.ehcache.clustered.server.EhcacheStateServiceImpl.checkConfigurationCompatibility(EhcacheStateServiceImpl.java:207)
at org.ehcache.clustered.server.EhcacheStateServiceImpl.validate(EhcacheStateServiceImpl.java:194)
at org.ehcache.clustered.server.ClusterTierManagerActiveEntity.validate(ClusterTierManagerActiveEntity.java:253)
at org.ehcache.clustered.server.ClusterTierManagerActiveEntity.invokeLifeCycleOperation(ClusterTierManagerActiveEntity.java:203)
at org.ehcache.clustered.server.ClusterTierManagerActiveEntity.invoke(ClusterTierManagerActiveEntity.java:147)
at org.ehcache.clustered.server.ClusterTierManagerActiveEntity.invoke(ClusterTierManagerActiveEntity.java:57)
at com.tc.objectserver.entity.ManagedEntityImpl.performAction(ManagedEntityImpl.java:741)
at com.tc.objectserver.entity.ManagedEntityImpl.invoke(ManagedEntityImpl.java:488)
at com.tc.objectserver.entity.ManagedEntityImpl.lambda$processInvokeRequest$2(ManagedEntityImpl.java:319)
at com.tc.objectserver.entity.ManagedEntityImpl$SchedulingRunnable.run(ManagedEntityImpl.java:1048)
at com.tc.objectserver.entity.RequestProcessor$EntityRequest.invoke(RequestProcessor.java:170)
at com.tc.objectserver.entity.RequestProcessor$EntityRequest.run(RequestProcessor.java:161)
at com.tc.objectserver.entity.RequestProcessorHandler.handleEvent(RequestProcessorHandler.java:27)
at com.tc.objectserver.entity.RequestProcessorHandler.handleEvent(RequestProcessorHandler.java:23)
at com.tc.async.impl.StageQueueImpl$HandledContext.runWithHandler(StageQueueImpl.java:502)
at com.tc.async.impl.StageImpl$WorkerThread.run(StageImpl.java:192)
I think it's a problem in the XML files, but I'm not sure. Can someone help, please?
Thanks
What the exception tells you is that the configuration of the clustered bits of your cache manager and cache differ between what the cluster knows and what the client asks for.
The most likely explanation is that you ran your client code once with a different config, realised there was an issue or just wanted to change something, and then tried to run the client again without destroying the cache manager on the cluster or restarting the server.
You simply need to restart your server to drop all clustered state, since you want a different setup.
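For example, from the kit's server/bin directory (assuming the stop-tc-server.sh script that ships alongside start-tc-server.sh; since no persistence is configured, a restart discards the previously created cluster tier manager and the client can auto-create it again with the new configuration):

./stop-tc-server.sh
./start-tc-server.sh -f /path/to/conf/tc-config-terracotta.xml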
I've tried to reproduce your issue in my IDE, copying / pasting your 3 files.
I found an error in intializeCache():
cache = cm.getCache(cacheName == null ? DEFAULT_CACHE_NAME : cacheName,
Object.class, Object.class);
triggered:
Exception in thread "main" java.lang.IllegalArgumentException: Cache 'myTest' type is <java.lang.String, java.lang.String>, but you retrieved it with <java.lang.Object, java.lang.Object>
at org.ehcache.core.EhcacheManager.getCache(EhcacheManager.java:162)
at MyXmlClient.intializeCache(MyXmlClient.java:48)
So please make sure that your XML configuration matches your Java code: you used <String, String> in the XML, so use <String, String> in your Java code:
cache = cm.getCache(cacheName == null ? DEFAULT_CACHE_NAME : cacheName,
String.class, String.class);
Everything else worked fine!
INFO --- [8148202b7ba8914] customer.logger.tsa : Connection successfully established to server at 127.0.0.1:9510
INFO --- [ main] org.ehcache.core.EhcacheManager : Cache 'myTest' created in EhcacheManager.
1
INFO --- [ main] org.ehcache.core.EhcacheManager : Cache 'myTest' removed from EhcacheManager.
INFO --- [ main] o.e.c.c.i.s.DefaultClusteringService : Closing connection to cluster terracotta://localhost:9510
The error you got comes from a mismatch between the Terracotta server offheap resource used in your client and the Terracotta server offheap configuration; make sure they match! (Copying / pasting your example, they did!)
@AnthonyDahanne I am using the ehcache-clustered-3.8.1-kit to launch the server. I have an ehcache.xml; my Spring Boot application picks it up automatically, so I am not explicitly writing a cacheManager.
<ehcache:config
    xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
    xmlns:terracotta='http://www.ehcache.org/v3/clustered'
    xmlns:ehcache='http://www.ehcache.org/v3'
    xsi:schemaLocation="http://www.ehcache.org/v3 http://www.ehcache.org/schema/ehcache-core-3.8.xsd
                        http://www.ehcache.org/v3/clustered http://www.ehcache.org/schema/ehcache-clustered-ext-3.8.xsd">

  <ehcache:service>
    <terracotta:cluster>
      <terracotta:connection url="terracotta://localhost:9410/clustered"/>
      <terracotta:server-side-config auto-create="true">
        <!--<terracotta:default-resource from="default-resource"/>-->
        <terracotta:shared-pool name="shared-pool-expense" unit="MB">100</terracotta:shared-pool>
      </terracotta:server-side-config>
    </terracotta:cluster>
  </ehcache:service>

  <ehcache:cache alias="areaOfCircleCache">
    <ehcache:key-type>java.lang.String</ehcache:key-type>
    <ehcache:value-type>com.db.entity.LogMessage</ehcache:value-type>
    <ehcache:resources>
      <!-- <ehcache:heap unit="entries">100</ehcache:heap>
      <ehcache:offheap unit="MB">10</ehcache:offheap> -->
      <terracotta:clustered-dedicated unit="MB">10</terracotta:clustered-dedicated>
    </ehcache:resources>
  </ehcache:cache>
</ehcache:config>

Read an HDFS File from a HIVE UDF - Execution Error, return code 101 FunctionTask. Could not initialize class

We have been trying to create a simple Hive UDF to mask some fields in a Hive table. We are using an external file (placed on HDFS) to grab a piece of text that is used as a salt in the masking process. It seems we are doing everything right, but when we try to create the function it throws the error:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.FunctionTask. Could not initialize class co.company.Mask
This is our code for the UDF:
package co.company;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.commons.codec.digest.DigestUtils;

@Description(
    name = "masker",
    value = "_FUNC_(str) - mask a string",
    extended = "Example: \n" +
        " SELECT masker(column) FROM hive_table; "
)
public class Mask extends UDF {

    private static final String arch_clave = "/user/username/filename.dat";
    private static String clave = null;

    public static String getFirstLine(String arch) {
        try {
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataInputStream in = fs.open(new Path(arch));
            BufferedReader br = new BufferedReader(new InputStreamReader(in));
            String ret = br.readLine();
            br.close();
            return ret;
        } catch (Exception e) {
            System.out.println("out: Error Message: " + arch + " exc: " + e.getMessage());
            return null;
        }
    }

    public Text evaluate(Text s) {
        clave = getFirstLine(arch_clave);
        Text to_value = new Text(DigestUtils.shaHex(s + clave));
        return to_value;
    }
}
We upload the jar file and create the UDF through HUE's interface (sadly, we don't yet have console access to the Hadoop cluster).
On Hue's Hive Interface, our commands are:
add jar hdfs:///user/my_username/myJar.jar
And then to create the Function we execute:
CREATE TEMPORARY FUNCTION masker as 'co.company.Mask';
Sadly, the error thrown when we try to create the UDF is not very helpful. This is the log for the creation of the UDF. Any help is greatly appreciated. Thank you very much.
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO parse.ParseDriver: Parsing command: CREATE TEMPORARY FUNCTION enmascarar as 'co.bancolombia.analitica.Enmascarar'
14/12/10 08:32:15 INFO parse.ParseDriver: Parse Completed
14/12/10 08:32:15 INFO log.PerfLogger: </PERFLOG method=parse start=1418218335753 end=1418218335754 duration=1 from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO parse.FunctionSemanticAnalyzer: analyze done
14/12/10 08:32:15 INFO ql.Driver: Semantic Analysis Completed
14/12/10 08:32:15 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1418218335754 end=1418218335757 duration=3 from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
14/12/10 08:32:15 INFO log.PerfLogger: </PERFLOG method=compile start=1418218335753 end=1418218335757 duration=4 from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=acquireReadWriteLocks from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO lockmgr.DummyTxnManager: Creating lock manager of type org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
14/12/10 08:32:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=server1.domain:2181,server2.domain.corp:2181,server3.domain:2181 sessionTimeout=600000 watcher=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager$DummyWatcher@2ebe4e81
14/12/10 08:32:15 INFO log.PerfLogger: </PERFLOG method=acquireReadWriteLocks start=1418218335760 end=1418218335797 duration=37 from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO ql.Driver: Starting command: CREATE TEMPORARY FUNCTION enmascarar as 'co.company.Mask'
14/12/10 08:32:15 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1418218335760 end=1418218335798 duration=38 from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=task.FUNCTION.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 ERROR ql.Driver: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.FunctionTask. Could not initialize class co.company.MasK
14/12/10 08:32:15 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1418218335797 end=1418218335800 duration=3 from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 INFO ZooKeeperHiveLockManager: about to release lock for default
14/12/10 08:32:15 INFO ZooKeeperHiveLockManager: about to release lock for colaboradores
14/12/10 08:32:15 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1418218335800 end=1418218335822 duration=22 from=org.apache.hadoop.hive.ql.Driver>
14/12/10 08:32:15 ERROR operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.FunctionTask. Could not initialize class co.company.Mask
at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:147)
at org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:69)
at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
This issue was solved, but it wasn't related to the code. The code above is fine for reading a file in HDFS from a Hive UDF (awfully inefficient, because it reads the file each time the evaluate function is called, but it manages to read the file).
It turns out that when creating a Hive UDF through HUE, you upload the jar and then create the function. However, if you change your function and re-upload the jar, Hive still keeps the previous definition of the function.
We defined the same UDF class in another package in the jar, dropped the original function in Hive, and created the function again (with the new class) through HUE:
add jar hdfs:///user/my_username/myJar2.jar;
drop function if exists masker;
create temporary function masker as 'co.company.otherpackage.Mask';
It seems a bug report is needed for Hive (or HUE? Thrift?); I still need to understand better which part of the system is at fault.
I hope it helps someone in the future.
This will not work, because new Configuration() will be initialized by default with core-default.xml and core-site.xml only; see the sources.
At the same time, you may (and should) have hdfs-site.xml etc.
Unfortunately, I didn't find a reliable way to get the Configuration in a Hive UDF, and it's a long story why.
In general, IMHO, you have to try the following approaches, one by one:
1. public void configure(MapredContext context) on your UDF; nevertheless, it may not be invoked due to a defect with vectorization, the use of engines other than MR, local execution (... limit 5 will trigger the issue), etc.
2. SessionState.get().getConf() if SessionState.get() is not null
3. Initialize a Configuration and add more resources than the defaults (see the list in the Configuration sources)
4. Use the RHive approach and load all .xml files from the Hadoop configuration directory (FSUtils.java):
public static Configuration getConf() throws IOException {
    if (conf != null) return conf;

    conf = new Configuration();
    String hadoopConfPath = System.getProperty("HADOOP_CONF_DIR");
    if (StringUtils.isNotEmpty(hadoopConfPath)) {
        File dir = new File(hadoopConfPath);
        if (!dir.exists() || !dir.isDirectory()) {
            return conf;
        }
        File[] files = dir.listFiles(
            new FilenameFilter() {
                public boolean accept(File dir, String name) {
                    return name.endsWith("xml");
                }
            }
        );
        for (File file : files) {
            try {
                URL url = new URL("file://" + file.getCanonicalPath());
                conf.addResource(url);
            } catch (Exception e) {
            }
        }
    }
    return conf;
}
So, here is the complete solution.
In the UDF, add the setters:
public abstract class BasicUDF extends GenericUDF implements Configurable {

    /**
     * Invocation context
     */
    private MapredContext mapReduceContext = null;

    /**
     * Hadoop Configuration
     */
    private Configuration hadoopConfiguration = null;

    /**
     * Save MR context, if arrived
     */
    @Override
    public void configure(MapredContext context) {
        if (context != null) {
            this.mapReduceContext = context;
            this.propertyReader.addHiveConfigurationSource(context);
            this.resourceFinder.addHiveJobConfiguration(context.getJobConf());
            log.debug("Non-empty MapredContext arrived");
        } else {
            log.error("Empty MapredContext arrived");
        }
    }

    /**
     * Save Hadoop configuration, if arrived
     */
    @Override
    public void setConf(Configuration conf) {
        this.hadoopConfiguration = conf;
        this.propertyReader.addHadoopConfigurationSource(conf);
        this.resourceFinder.addHadoopConfigurationSource(conf);
    }
And then, where you need the configuration:
public Configuration findConfiguration() {
    if (hiveJobConfiguration != null) {
        log.debug("Starting with hiveJobConfiguration");
        return hiveJobConfiguration;
    }
    if (SessionState.get() != null && SessionState.get().getConf() != null) {
        log.debug("Starting with SessionState configuration");
        return SessionState.get().getConf();
    }
    if (hadoopConfiguration != null) {
        log.debug("Starting with hadoopConfiguration");
        return hadoopConfiguration;
    }
    log.debug("No existing configuration found, falling back to manually initialized");
    return createNewConfiguration();
}

private Configuration createNewConfiguration() {
    // load defaults, "core-default.xml" and "core-site.xml"
    Configuration configuration = new Configuration();
    // load expected configuration, mapred-site.xml, mapred-default.xml, hdfs-site.xml, hdfs-default.xml
    configuration.addResource("mapred-default.xml");
    configuration.addResource("mapred-site.xml");
    configuration.addResource("hdfs-default.xml");
    configuration.addResource("hdfs-site.xml");
    // load Hadoop configuration from FS if any and if requested
    if (fallbackReadHadoopFilesFromFS) {
        log.debug("Configured manual read of Hadoop configuration from FS");
        try {
            addFSHadoopConfiguration(configuration);
        } catch (RuntimeException re) {
            log.error("Reading of Hadoop configuration from FS failed", re);
        }
    }
    return configuration;
}

@edu.umd.cs.findbugs.annotations.SuppressFBWarnings(
    value = {"REC_CATCH_EXCEPTION", "SIC_INNER_SHOULD_BE_STATIC_ANON"},
    justification = "Findbugs bug, missed IOException from file.getCanonicalPath(); don't like idea with static anon"
)
private void addFSHadoopConfiguration(Configuration configuration) {
    log.debug("Started addFSHadoopConfiguration to load configuration from FS");
    String hadoopConfPath = System.getProperty("HADOOP_CONF_DIR");
    if (StringUtils.isEmpty(hadoopConfPath)) {
        log.error("HADOOP_CONF_DIR is not set, skipping FS load in addFSHadoopConfiguration");
        return;
    } else {
        log.debug("Found configuration dir, it points to " + hadoopConfPath);
    }
    File dir = new File(hadoopConfPath);
    if (!dir.exists() || !dir.isDirectory()) {
        log.error("HADOOP_CONF_DIR points to invalid place " + hadoopConfPath);
        return;
    }
    File[] files = dir.listFiles(
        new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.endsWith("xml");
            }
        }
    );
    if (files == null) {
        log.error("Configuration dir does not denote a directory, or an I/O error occurred. Dir used " + hadoopConfPath);
        return;
    }
    for (File file : files) {
        try {
            URL url = new URL("file://" + file.getCanonicalPath());
            configuration.addResource(url);
        } catch (Exception e) {
            log.error("Failed to open configuration file " + file.getPath(), e);
        }
    }
}
Works like a charm
