I'm getting a confusing ClassNotFoundException when I try to run ExportSnapshot from my HBase master node. hbase shell and other commands work just fine, and my cluster is fully operational.
This feels like a classpath issue, but I don't know what I'm missing.
$ /usr/bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot ambarismoketest-snapshot -copy-to hdfs://10.0.1.90/apps/hbase/data -mappers 16
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2015-10-13 20:05:02,339 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-10-13 20:05:04,351 INFO [main] util.FSVisitor: No logs under directory:hdfs://cinco-de-nameservice/apps/hbase/data/.hbase-snapshot/impression_event_production_hbase-transfer-to-staging-20151013/WALs
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/Job
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.runCopyJob(ExportSnapshot.java:529)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:646)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:697)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:701)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapreduce.Job
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 5 more
Problem
It turns out the mapreduce2 JARs were not available on the classpath. The classpath itself was configured correctly, but I did not have the mapreduce2 client installed on that node, so there were no JARs for it to pick up. HBase's ExportSnapshot depends on those client JARs when exporting snapshots to another cluster because it copies the snapshot files via a MapReduce job (note the runCopyJob frame in the stack trace above).
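A quick way to confirm this kind of gap (a diagnostic sketch; hbase classpath prints the classpath the bin/hbase launcher assembles) is to look for the MapReduce client JARs in it:

# List the entries HBase will put on its classpath and filter for MR jars;
# if nothing like hadoop-mapreduce-client-core.jar shows up, then
# org.apache.hadoop.mapreduce.Job has nowhere to come from.
hbase classpath | tr ':' '\n' | grep -i mapreduce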
Fix
If you use Ambari:
Load the Ambari UI
Pull up the node where you were running ExportSnapshot and hitting the error above
Under "Components", click "Add"
Click "MapReduce2 Client"
Background
There's a ticket here, https://issues.apache.org/jira/browse/HBASE-9687, titled "ClassNotFoundException is thrown when ExportSnapshot runs against hadoop cluster where HBase is not installed on the same node as resourcemanager". The title implies that installing the ResourceManager is the fix, and that may work; the crux, however, is that you need the Hadoop mapreduce2 JARs on the classpath, and you can get them there by simply installing the mapreduce2 client (or by extending the classpath by hand, as sketched below).
For us specifically, the reason our snapshot exports worked one day and broke the next is that our HBase master failed over because of an unrelated issue we had. The backup HBase master did not have the mapreduce2 client JARs; the original primary master did.
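If Ambari isn't managing the node, a rough alternative is to put the MR2 client JARs on HBase's classpath yourself, for example in hbase-env.sh. This is only a sketch: the bin/hbase launcher honors HBASE_CLASSPATH, but the JAR location below is distribution-specific and assumed.

# Sketch only: adjust the path to wherever your distribution installs
# the hadoop-mapreduce-client-*.jar files.
export HBASE_CLASSPATH="${HBASE_CLASSPATH}:/usr/lib/hadoop-mapreduce/*"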
Related
An Oozie workflow triggers a Hadoop MapReduce job's Java class. I added opencsv-2.3.jar and commons-lang3-3.1.jar as dependencies in my Eclipse project. The project builds successfully; however, when I move it onto the Hadoop cluster I get a ClassNotFoundException even though my project contains the jar.
Since this is a working legacy system, I do not wish to change the environment's dependencies. Hence, I tried different combinations of adding the libraries to the classpath, without success.
Tried the suggestions from: "java.lang.NoClassDefFoundError: au/com/bytecode/opencsv/CSVReader - Upload File Vaadin".
Also checked with an MR client Maven dependency: org.apache.hadoop:hadoop-mapreduce-client-common:2.6.0-cdh5.4.2.
The legacy jar in the production environment runs fine, but the jar compiled from my project throws the following errors:
oozie syslog:
INFO [uber-SubtaskRunner] org.apache.hadoop.mapreduce.Job: Running job: job_123213123123_35305
INFO [communication thread] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1548794054671_35304_m_000000_0 is : 1.0
INFO [uber-SubtaskRunner] org.apache.hadoop.mapreduce.Job: Job job_123213123123_35305 running in uber mode : false
INFO [uber-SubtaskRunner] org.apache.hadoop.mapreduce.Job: map 0% reduce 0%
INFO [uber-SubtaskRunner] org.apache.hadoop.mapreduce.Job: Task Id : attempt_123213123123_35305_m_000001_0, Status : FAILED
oozie stderr:
Error: java.lang.ClassNotFoundException: au.com.bytecode.opencsv.CSVParser
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Please suggest what I might be missing and what I can try.
The opencsv-2.3.jar library had been added via the Eclipse Build Path as an external jar, so it was never packaged with the job. I had to run a clean Maven build instead and then use the "*jar-with-dependencies.jar" from the target folder, which fixed the issue.
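For reference, the "*jar-with-dependencies.jar" artifact typically comes from the maven-assembly-plugin's jar-with-dependencies descriptor. Assuming that plugin is already bound to the package phase in the pom (a common setup, not shown here), the build boils down to:

# Rebuild from scratch; the fat jar bundling opencsv and the other
# dependencies lands in target/.
mvn clean package
ls target/*jar-with-dependencies.jar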
I am starting spark-shell (Spark 2.2) and have added a bunch of jars to the spark-shell command (from the Ignite 2.1 directory).
I still get this error:
Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"
I also followed the recommendation from here:
https://apacheignite.readme.io/v1.2/docs/installation--deployment
# Optionally set IGNITE_HOME here.
# IGNITE_HOME=/path/to/ignite
IGNITE_LIBS="${IGNITE_HOME}/libs/*"
for file in ${IGNITE_HOME}/libs/*
do
if [ -d ${file} ] && [ "${file}" != "${IGNITE_HOME}"/libs/optional ]; then
IGNITE_LIBS=${IGNITE_LIBS}:${file}/*
fi
done
export SPARK_CLASSPATH=$IGNITE_LIBS
I also set logging to ERROR only, but I still get the error:
Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"
java.lang.ClassNotFoundException: org.apache.ignite.logger.java.JavaLoggerFileHandler
java.lang.ClassNotFoundException: org.apache.ignite.logger.java.JavaLoggerFileHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.util.logging.LogManager$5.run(LogManager.java:965)
at java.security.AccessController.doPrivileged(Native Method)
It looks like you are using documentation for the old Ignite version 1.2 while running Ignite 2.1. Check the documentation for the latest version here: https://apacheignite-fs.readme.io/v2.2/docs/installation-deployment
Also, please make sure you have configured IGNITE_HOME in your environment. JavaLoggerFileHandler lives in the ignite-core module, so it looks like the Spark classpath doesn't see any Ignite libs at all.
The documentation describes the issue here:
https://apacheignite-fs.readme.io/v2.2/docs/troubleshooting
This issue appears when you do not have any loggers included in the classpath and Ignite tries to use standard Java logging. By default, Spark loads all user jar files using a separate class loader. The Java logging framework, on the other hand, uses the application class loader to initialize log handlers. To resolve this, you can either add the ignite-log4j module to the list of used jars, so that Ignite uses Log4j as its logging subsystem, or alter the default Spark classpath as described earlier.
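Putting that together, one rough sketch is to hand the Ignite core and log4j JARs to spark-shell explicitly. The paths assume a standard Ignite 2.x layout under $IGNITE_HOME, with ignite-log4j under libs/optional/; adjust them to your installation.

# Build a comma-separated jar list, since spark-shell --jars expects commas.
IGNITE_JARS=$(echo "${IGNITE_HOME}"/libs/*.jar \
              "${IGNITE_HOME}"/libs/optional/ignite-log4j/*.jar | tr ' ' ',')
spark-shell --jars "${IGNITE_JARS}"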
I have a simple MapReduce program that I want to run on a remote cluster. I can do this from the command line simply by running
hadoop jar myjar.jar input output
but when I run a function in my JUnit TestCase class from my IDE that invokes the MR job, I get the following warnings:
WARN org.apache.hadoop.mapreduce.JobSubmitter - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
INFO org.apache.hadoop.mapred.YARNRunner - Job jar is not present. Not adding any jar to the list of resources.
although I have this line set before submitting the MR job:
job.setJarByClass(MyJob.class);
and hence the job fails, as it cannot find the classes it needs to operate (like MyMapKey, the mapper key class):
Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was :java.lang.RuntimeException: java.lang.ClassNotFoundException: Class MyMapKey not found
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:414)
at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Any thoughts on this?
First, you should add the remote Hadoop cluster's config files (i.e. core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, ssl-client.xml) as resources to your Configuration object. Then follow the steps in the link above to see how to manually add the job jar to the classpath on the remote cluster.
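As a minimal sketch in code (the file locations below are placeholders, not real paths), that amounts to something like:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

// Point the client at the remote cluster's configuration.
Configuration conf = new Configuration();
conf.addResource(new Path("/path/to/cluster-conf/core-site.xml"));
conf.addResource(new Path("/path/to/cluster-conf/hdfs-site.xml"));
conf.addResource(new Path("/path/to/cluster-conf/mapred-site.xml"));
conf.addResource(new Path("/path/to/cluster-conf/yarn-site.xml"));

Job job = Job.getInstance(conf, "my-remote-job");
// setJarByClass() only helps when the class was loaded from a jar. Run from
// an IDE, the classes come from a directory, so name a pre-built jar instead:
job.setJar("/path/to/target/myjar.jar");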
When I try to launch Eclipse, the splash screen pops up and then vanishes without any notification.
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
org.eclipse.m2e.logback.configuration: The org.eclipse.m2e.logback.configuration bundle was activated before the state location was initialized. Will retry after the state location is initialized.
org.eclipse.m2e.logback.configuration: Logback config file: /home/jonas/workspace/.metadata/.plugins/org.eclipse.m2e.logback.configuration/logback.1.6.2.20150902-0002.xml
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [bundleresource://474.fwk1710228600:1/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [bundleresource://474.fwk1710228600:2/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
org.eclipse.m2e.logback.configuration: Initializing logback
Exception in thread "LocalReportsHistory STARTING" java.lang.RuntimeException: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/home/jonas/workspace/.metadata/.plugins/org.eclipse.epp.logging.aeri.ide/org.eclipse.epp.logging.aeri.ide.server/local-history/write.lock
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at com.google.common.util.concurrent.AbstractIdleService$2$1.run(AbstractIdleService.java:58)
at com.google.common.util.concurrent.Callables$3.run(Callables.java:93)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/home/jonas/workspace/.metadata/.plugins/org.eclipse.epp.logging.aeri.ide/org.eclipse.epp.logging.aeri.ide.server/local-history/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1108)
at org.eclipse.epp.internal.logging.aeri.ide.server.LocalReportsHistory.createWriter(LocalReportsHistory.java:159)
at org.eclipse.epp.internal.logging.aeri.ide.server.LocalReportsHistory.startUp(LocalReportsHistory.java:76)
at com.google.common.util.concurrent.AbstractIdleService$2$1.run(AbstractIdleService.java:54)
... 2 more
Error when trying to rename log file to backup one.
Error when trying to rename log file to backup one.
Error when trying to rename log file to backup one.
Error when trying to rename log file to backup one.
The Java/Eclipse process keeps running (~50% CPU) but nothing happens. Deleting the .eclipse and workspace directories fixes this once, but at the next launch the problem comes back.
Furthermore, Eclipse can't open windows such as the Marketplace.
I tried reinstalling Eclipse and the JDK, deleting the workspace and .eclipse directories, and also deleting /workspace/.metadata/.plugins/org.eclipse.e4.workbench/workbench.xmi, but without success.
I have been following the HBase installation instructions given on this page:
https://hbase.apache.org/book.html#quickstart
I am using HBase version 1.1.3 (the stable release), have configured it in standalone mode, and have installed OpenJDK 7.
When I try to start HBase, it crashes after a few seconds and I get the following error:
2016-01-29 17:37:04,136 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:143)
at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:219)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:155)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:224)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer
at org.apache.hadoop.hbase.util.Bytes.putInt(Bytes.java:899)
at org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1082)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:652)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:580)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:483)
at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:370)
at org.apache.hadoop.hbase.KeyValue.<clinit>(KeyValue.java:267)
at org.apache.hadoop.hbase.HConstants.<clinit>(HConstants.java:978)
at org.apache.hadoop.hbase.HTableDescriptor.<clinit>(HTableDescriptor.java:1488)
at org.apache.hadoop.hbase.util.FSTableDescriptors.<init>(FSTableDescriptors.java:124)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:570)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:365)
at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:307)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
... 7 more
Can anyone tell me the reason for this error? Please let me know if you need more info.
The exception causing this is a NoClassDefFoundError, which usually means you are missing something from your classpath. In this case you're missing org.apache.hadoop.hbase.util.Bytes, which comes from hbase-common.jar. However, if one jar is missing, there are probably others missing too, which might mean you're starting HBase the wrong way. Have you tried the included start-up scripts?
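Concretely, the quickstart expects you to start standalone HBase with the bundled script, which assembles the classpath for you; hbase classpath lets you verify that hbase-common is on it:

# From the unpacked HBase directory:
bin/start-hbase.sh
# Sanity check: confirm hbase-common is on the classpath the launcher builds.
bin/hbase classpath | tr ':' '\n' | grep hbase-common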