This question already has answers here: Exception in thread "main" java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;) (3 answers). Closed 3 years ago.
I'm using spark-2.4.4-bin-without-hadoop, and I want to run the self-contained example JavaDirectKafkaWordCount.
The official documentation says the application should include the spark-streaming-kafka-0-10_2.12 dependency, so I downloaded spark-streaming-kafka-0-10_2.12-2.4.0.jar into the jars directory.
However, when I run run-example streaming.JavaDirectKafkaWordCount device1:9092 group_id topic, it fails with a NoSuchMethodError:
20/01/13 11:51:12 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1549bba7{/metrics/json,null,AVAILABLE,@Spark}
Exception in thread "main" java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at org.apache.spark.streaming.kafka010.PreferConsistent$.<init>(LocationStrategy.scala:42)
at org.apache.spark.streaming.kafka010.PreferConsistent$.<clinit>(LocationStrategy.scala)
at org.apache.spark.streaming.kafka010.LocationStrategies$.PreferConsistent(LocationStrategy.scala:66)
at org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent(LocationStrategy.scala)
at org.apache.spark.examples.streaming.JavaDirectKafkaWordCount.main(JavaDirectKafkaWordCount.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/01/13 11:51:12 INFO spark.SparkContext: Invoking stop() from shutdown hook
As per the documentation -
You have to compile your streaming application into a JAR. If you are using spark-submit to start the application, then you will not need to provide Spark and Spark Streaming in the JAR. However, if your application uses advanced sources (e.g. Kafka, Flume), then you will have to package the extra artifact they link to, along with their dependencies, in the JAR that is used to deploy the application. For example, an application using KafkaUtils will have to include spark-streaming-kafka-0-10_2.12 and all its transitive dependencies in the application JAR.
Alternatively, you can pass something like --packages org.apache.spark:spark-sql-kafka-0-10_2.12:2.4.0 to the spark-submit command.
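For illustration, a rough sketch of a spark-submit invocation that pulls the Kafka integration in via --packages; the application JAR name is a placeholder, and the artifact and version must match the Scala and Spark versions your distribution was built with:
spark-submit \
  --packages org.apache.spark:spark-streaming-kafka-0-10_2.12:2.4.4 \
  --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount \
  target/my-streaming-app.jar device1:9092 group_id topic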
Related
I am able to start the server from the command line with 'java -jar jarname.jar'.
But while running the main method of the Spring Boot application, the server start fails, saying that a class from an imported dependency project does not exist:
Caused by: java.lang.NoClassDefFoundError: Lcom/jj/db/repositories/KKRepository;
Also there is a warning message in the console:
2021-11-15 11:04:47 WARN WebappClassLoaderBase:173 - - The web application [MM] appears to have started a thread named [RxIoScheduler-1 (Evictor)] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
Can anyone please help?
Instead of a multi-module project, make it a single-module project, or specify the dependent modules appropriately in the manifest.
I am writing simple Jakarta Messaging applications with NetBeans and GlassFish, following "The Jakarta EE 7 Tutorial" step by step. After successfully building all the "simple" examples, I run appclient -client target/producer.jar queue 3 in my Windows terminal, but it can't send the message.
I'm using GlassFish 5.0.1. It looks like I can't use the appclient. Can anyone give me some help?
The Jakarta EE 7 Tutorial, 46.2 Writing Simple JMS Applications
jakartaee-tutorial-examples-master\jakartaee-tutorial-examples-master\jms\simple\producer>appclient -client target/producer.jar queue 3
java.io.FileNotFoundException: C:\Users\?????ó\AppData\Local\Temp\acc7678140812900140496.dat (System cannot find the specified path。)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileReader.<init>(FileReader.java:72)
at org.glassfish.appclient.client.acc.agent.AppClientContainerAgent.optionsValue(AppClientContainerAgent.java:104)
at org.glassfish.appclient.client.acc.agent.AppClientContainerAgent.premain(AppClientContainerAgent.java:83)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:386)
at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:401)
A FileNotFoundException means you either did not specify a correct path, or you gave a path with no file for the InputStream to read.
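Note that the temp path in the stack trace contains characters that could not be printed (the ????? in the user name). If a non-ASCII Windows user name is what breaks the path, one possible workaround is to point the temp directory at an ASCII-only location for the session before running appclient; a sketch, where the directory name is a placeholder:
:: use an ASCII-only temp directory for this command prompt session
mkdir C:\Temp
set TMP=C:\Temp
set TEMP=C:\Temp
appclient -client target/producer.jar queue 3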
I am starting spark-shell (Spark 2.2) and have added a bunch of JARs to the spark-shell command (from the Ignite 2.1 directory).
I am still getting the error:
Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"
I also followed the recommendation from here:
https://apacheignite.readme.io/v1.2/docs/installation--deployment
# Optionally set IGNITE_HOME here.
# IGNITE_HOME=/path/to/ignite
IGNITE_LIBS="${IGNITE_HOME}/libs/*"
for file in ${IGNITE_HOME}/libs/*
do
if [ -d ${file} ] && [ "${file}" != "${IGNITE_HOME}"/libs/optional ]; then
IGNITE_LIBS=${IGNITE_LIBS}:${file}/*
fi
done
export SPARK_CLASSPATH=$IGNITE_LIBS
I also set logging to ERROR only, but I am still getting the error:
Can't load log handler "org.apache.ignite.logger.java.JavaLoggerFileHandler"
java.lang.ClassNotFoundException: org.apache.ignite.logger.java.JavaLoggerFileHandler
java.lang.ClassNotFoundException: org.apache.ignite.logger.java.JavaLoggerFileHandler
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.util.logging.LogManager$5.run(LogManager.java:965)
at java.security.AccessController.doPrivileged(Native Method)
It looks like you are using the documentation for the old Ignite version 1.2, while you are running Ignite 2.1. Check the documentation for the newer version here: https://apacheignite-fs.readme.io/v2.2/docs/installation-deployment
Also, please make sure that you have configured IGNITE_HOME in your environment. JavaLoggerFileHandler is located in the ignite-core module; it looks like the Spark classpath doesn't see any Ignite libs at all.
The documentation describes the issue here:
https://apacheignite-fs.readme.io/v2.2/docs/troubleshooting
This issue appears when you do not have any loggers included in classpath and Ignite tries to use standard Java logging. By default Spark loads all user jar files using separate class loader. Java logging framework, on the other hand, uses application class loader to initialize log handlers. To resolve this, you can either add ignite-log4j module to the list of the used jars so that Ignite would use Log4j as a logging subsystem, or alter default Spark classpath as described
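As an illustration of the first option, a rough sketch of starting spark-shell with the Ignite libs plus the ignite-log4j optional module on the classpath; the IGNITE_HOME path and the location of the optional module are assumptions based on the standard binary layout:
# adjust IGNITE_HOME to your installation
export IGNITE_HOME=/opt/ignite
# build a comma-separated --jars list from the Ignite libs and the ignite-log4j optional module
spark-shell \
  --jars "$(echo ${IGNITE_HOME}/libs/*.jar ${IGNITE_HOME}/libs/optional/ignite-log4j/*.jar | tr ' ' ',')"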
I am getting
Exception in thread "main" java.lang.NoClassDefFoundError: com/linkedin/camus/etl/IEtlKey.
On running the command:
hadoop jar camus-etl-kafka-0.1.0-SNAPSHOT.jar
com.linkedin.camus.etl.kafka.CamusJob -P camus.properties
I am getting the exceptions below:
2016-04-27 11:34:04.622 java[13567:351959] Unable to load realm mapping info from SCDynamicStore
[NativeCodeLoader] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.NoClassDefFoundError: com/linkedin/camus/etl/IEtlKey
at com.linkedin.camus.etl.kafka.CamusJob.run(CamusJob.java:252)
at com.linkedin.camus.etl.kafka.CamusJob.run(CamusJob.java:235)
at com.linkedin.camus.etl.kafka.CamusJob.run(CamusJob.java:691)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.linkedin.camus.etl.kafka.CamusJob.main(CamusJob.java:646)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.ClassNotFoundException: com.linkedin.camus.etl.IEtlKey
I have included camus-example-0.1.0-SNAPSHOT-shaded.jar in the classpath.
Please let me know if I am missing something.
Thanks in Advance
Soumyajit
You should try to include camus-api, which you can find on LinkedIn's "previous generation Kafka to HDFS pipeline" page, since the missing class is contained in that package.
Pay attention to other transitive dependencies that may be required by Camus.
In addition, to be sure that the classes will be found on the classpath when you use hadoop jar from the command line, you can add the -libjars command-line option, as described in "Using the libjars option with Hadoop":
$ export LIBJARS=/path/jar1,/path/jar2
$ hadoop jar my-example.jar com.example.MyTool -libjars ${LIBJARS} -mytoolopt value
It could be useful to know that Camus is going to be superseded by Gobblin:
Camus is being phased out and replaced by Gobblin. For those using or
interested in Camus, we suggest taking a look at Gobblin.
For instructions on Migrating from Camus to Gobblin, please take
a look at Camus Gobblin Migration.
I keep getting an exception because Oozie adds the wrong version of the httpcore JAR to the classpath. I tried different options, such as:
oozie.launcher.mapreduce.task.classpath.user.precedence
oozie.launcher.mapreduce.user.classpath.first
oozie.launcher.mapreduce.task.classpath.user.precedence has no effect at all, and when I use oozie.launcher.mapreduce.user.classpath.first, the application cannot load even one class.
In the classpath I can see two versions of httpcore:
httpcore-4.4.1.jar
httpcore-4.2.4.jar
When the application runs in standalone mode, I do not get that exception.
Exception:
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], main() threw exception, java.lang.NoSuchFieldError: INSTANCE
org.apache.oozie.action.hadoop.JavaMainException: java.lang.NoSuchFieldError: INSTANCE
at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:59)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)
at org.apache.oozie.action.hadoop.JavaMain.main(JavaMain.java:35)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:236)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NoSuchFieldError: INSTANCE
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.<clinit>(SSLConnectionSocketFactory.java:144)
at microsoft.exchange.webservices.data.core.ExchangeServiceBase.createConnectionSocketFactoryRegistry(ExchangeServiceBase.java:244)
at microsoft.exchange.webservices.data.core.ExchangeServiceBase.initializeHttpClient(ExchangeServiceBase.java:198)
at microsoft.exchange.webservices.data.core.ExchangeServiceBase.<init>(ExchangeServiceBase.java:174)
at microsoft.exchange.webservices.data.core.ExchangeServiceBase.<init>(ExchangeServiceBase.java:179)
at microsoft.exchange.webservices.data.core.ExchangeService.<init>(ExchangeService.java:3729)
at com.sonasoft.sonacloud.email.dispatcher.conn.EwsConnection.getConnection(EwsConnection.java:16)
at com.sonasoft.sonacloud.email.dispatcher.conn.EwsConnection.getConnection(EwsConnection.java:10)
at com.sonasoft.sonacloud.email.dispatcher.utils.EwsOperations.<init>(EwsOperations.java:47)
at com.sonasoft.sonacloud.email.dispatcher.utils.EwsOperations.getInstance(EwsOperations.java:53)
at com.sonasoft.sonacloud.email.dispatcher.main.MainClass.main(MainClass.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:56)
... 15 more
Oozie client build version: 4.2.0.2.3.2.0-2950
Any help is appreciated.
We have had this nasty issue with the Hortonworks distro 2.3.2 (shame on them):
the Oozie "launcher" job always gets httpcore and httpclient in
the CLASSPATH as part of the Hadoop client
the Oozie "launcher" job always gets httpcore and httpclient
as bundled in the "Oozie" ShareLib
the Hive/Hive2 Sharelibs contain httpcore and httpclient in a
more recent version
from Hadoop point of view, user.classpath.first applies to both
ShareLibs so it's a 50/50 chance of getting the right order for each
JAR (so a 25/75 chance overall)
Bottom line: we had to
- remove httpcore and httpclient from the "Oozie" ShareLib HDFS dir (duh!)
- raise the oozie.launcher.mapreduce.job.user.classpath.first flag for all actions relying on the Hive JARs (i.e. Hive action, Hive2 action, Shell action calling the JDBC driver somehow, etc.)
Post-scriptum -- the Oozie server keeps an in-memory list of the JARs in each ShareLib, so removing a JAR while the server is running will trigger errors in new jobs. If you don't want to stop the Oozie server, the "proper way" to update a live ShareLib is to (a) create a new version in a new, time-stamped directory [check the documentation...] and (b) tell the server to resync on the newer libs with oozie admin -sharelibupdate, as sketched below.
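A rough sketch of that sequence, assuming the default ShareLib root /user/oozie/share/lib; the timestamps, host name, and JAR names are placeholders:
# copy the current ShareLib into a new time-stamped directory
hdfs dfs -cp /user/oozie/share/lib/lib_20150101000000 /user/oozie/share/lib/lib_20160427120000
# drop the conflicting JARs from the "oozie" sub-directory of the new copy
hdfs dfs -rm /user/oozie/share/lib/lib_20160427120000/oozie/httpcore-*.jar
hdfs dfs -rm /user/oozie/share/lib/lib_20160427120000/oozie/httpclient-*.jar
# tell the running Oozie server to resync on the newest ShareLib
oozie admin -oozie http://oozie-host:11000/oozie -sharelibupdate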
You want to build against your local version of the httpcore JAR, but you don't want it packaged onto your runtime classpath, because Hadoop will provide its own version. In that case you should use the provided scope for the httpcore dependency:
<project>
  ...
  <dependencies>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpcore</artifactId>
      <scope>provided</scope> <!-- this line is important -->
      <version>4.4.1</version>
    </dependency>
  </dependencies>
</project>
From the Maven documentation for provided:
This is much like compile, but indicates you expect the JDK or a container to provide the dependency at runtime.
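If it helps, you can also check which versions of httpcore your build actually pulls in (and through which dependencies) before and after the change; one possible check with the Maven dependency plugin:
mvn dependency:tree -Dincludes=org.apache.httpcomponents:httpcore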