My use case is as follows: I need to be able to call Java methods from Python code.
From PySpark this seems to be very easy.
I start PySpark like this:
./pyspark --driver-class-path /path/to/app.jar
and from the PySpark shell do this:
x=sc._jvm.com.abc.def.App
x.getMessage()
u'Hello'
x.getMessage()
u'Hello'
This works fine.
When working with Spark Job Server, though, things go wrong.
I use the WordCountSparkJob.py example that ships with it:
from sparkjobserver.api import SparkJob, build_problems
from py4j.java_gateway import JavaGateway, java_import

class WordCountSparkJob(SparkJob):

    def validate(self, context, runtime, config):
        if config.get('input.strings', None):
            return config.get('input.strings')
        else:
            return build_problems(['config input.strings not found'])

    def run_job(self, context, runtime, data):
        x = context._jvm.com.abc.def.App
        return x.getMessage()
My python.conf looks like this:
spark {
  jobserver {
    jobdao = spark.jobserver.io.JobSqlDAO
  }

  context-settings {
    python {
      paths = [
        "/home/xxx/SPARK/spark-1.6.0-bin-hadoop2.6/python",
        "/home/xxx/.local/lib/python2.7/site-packages/pyhocon",
        "/home/xxx/SPARK/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip",
        "/home/xxx/SPARK/spark-1.6.0-bin-hadoop2.6/python/lib/py4j-0.9-src.zip",
        "/home/xxx/gitrepos/spark-jobserver/job-server-python/src/python/dist/spark_jobserver_python-NO_ENV-py2.7.egg"
      ]
    }

    dependent-jar-uris = ["file:///path/to/app.jar"]
  }

  home = /home/path/to/spark
}
I get the following error:
[2016-10-08 23:03:46,214] ERROR jobserver.python.PythonJob [] [akka://JobServer/user/context-supervisor/py-context] - From Python: Error while calling 'run_job'TypeError("'JavaPackage' object is not callable",)
[2016-10-08 23:03:46,226] ERROR jobserver.python.PythonJob [] [akka://JobServer/user/context-supervisor/py-context] - Python job failed with error code 4
[2016-10-08 23:03:46,228] ERROR .jobserver.JobManagerActor [] [akka://JobServer/user/context-supervisor/py-context] - Got Throwable
java.lang.Exception: Python job failed with error code 4
at spark.jobserver.python.PythonJob$$anonfun$1.apply(PythonJob.scala:85)
at scala.util.Try$.apply(Try.scala:161)
at spark.jobserver.python.PythonJob.runJob(PythonJob.scala:62)
at spark.jobserver.python.PythonJob.runJob(PythonJob.scala:13)
at spark.jobserver.JobManagerActor$$anonfun$getJobFuture$4.apply(JobManagerActor.scala:288)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-10-08 23:03:46,232] ERROR .jobserver.JobManagerActor [] [akka://JobServer/user/context-supervisor/py-context] - Exception from job 942727f0-dd81-445d-bc64-bd18880eb4bc:
java.lang.Exception: Python job failed with error code 4
at spark.jobserver.python.PythonJob$$anonfun$1.apply(PythonJob.scala:85)
at scala.util.Try$.apply(Try.scala:161)
at spark.jobserver.python.PythonJob.runJob(PythonJob.scala:62)
at spark.jobserver.python.PythonJob.runJob(PythonJob.scala:13)
at spark.jobserver.JobManagerActor$$anonfun$getJobFuture$4.apply(JobManagerActor.scala:288)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-10-08 23:03:46,232] INFO k.jobserver.JobStatusActor [] [akka://JobServer/user/context-supervisor/py-context/$a] - Job 942727f0-dd81-445d-bc64-bd18880eb4bc finished with an error
[2016-10-08 23:03:46,233] INFO r$RemoteDeadLetterActorRef [] [akka://JobServer/deadLetters] - Message [spark.jobserver.CommonMessages$JobErroredOut] from Actor[akka://JobServer/user/context-supervisor/py-context/$a#1919442151] to Actor[akka://JobServer/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
In the python.conf file I have the app.jar as an entry in dependent-jar-uris.
Am I missing something here?
Error "'JavaPackage' object is not callable" probably means that PySpark cannot see your jar or your class in it.
I am trying to iterate through a DataFrame to create another one. In this example I am not using data from the first one; it is just to show what I am trying to do. The idea, however, is to use the first DataFrame to generate a much bigger one based on its data.
Whatever I try inside the VoidFunction, I always get the error in the foreach.
Sample dataframe to iterate:
Dataset<Row> obtencionRents = spark.createDataFrame(Arrays.asList(
        new testRentabilidades("0000A0", "PORTAL", "4-ANUAL", "asdasd", "asdasd"),
        new testRentabilidades("00A00", "PORTAL", "", "asdasd", "sdasd"),
        new testRentabilidades("00A", "PORTAL", "4-ANUAL", "asdasd", "asdasd")
), testRentabilidades.class);
Foreach function to iterate over the sample DataFrame:
obtencionRents.toJavaRDD().foreach(new VoidFunction<Row>() {
    public void call(Row r) throws Exception {
        // add registers to new collection/arraylist/etc.
    }
});
The error I get:
Driver stacktrace:
2021-11-03 17:34:41 INFO DAGScheduler:54 - Job 0 failed: foreach at CargarRentabilidades.java:154, took 0,812094 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.NullPointerException
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:139)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:137)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:73)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:419)
at batchload.proceso.builder.CargarRentabilidades$1.call(CargarRentabilidades.java:157)
at batchload.proceso.builder.CargarRentabilidades$1.call(CargarRentabilidades.java:154)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:351)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:351)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:921)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:921)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:921)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:919)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:919)
at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:351)
at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:45)
at batchload.proceso.builder.CargarRentabilidades.transformacionRentabilidades(CargarRentabilidades.java:154)
at batchload.proceso.builder.CargarRentabilidades.coleccionRentabilidades(CargarRentabilidades.java:78)
at batchload.proceso.builder.CargarRentabilidades.coleccionCargaRentabilidades(CargarRentabilidades.java:52)
at batchload.proceso.MainBatch.init(MainBatch.java:59)
at batchload.BatchloadRentabilidades.main(BatchloadRentabilidades.java:24)
Caused by: java.lang.NullPointerException
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:139)
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:137)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:73)
at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:419)
at batchload.proceso.builder.CargarRentabilidades$1.call(CargarRentabilidades.java:157)
at batchload.proceso.builder.CargarRentabilidades$1.call(CargarRentabilidades.java:154)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:351)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:351)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:921)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:921)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Versions:
mongo-spark-connector_2.11:2.3.0
Java 1.8
IntelliJ IDEA 2021.1.2 Community
Spark libraries built for Scala 2.11
Other dependency versions I am using:
Hadoop 2.7, Spark 2.3.0, Java driver 2.7, spark-catalyst/core/hive/sql all at 2.11:2.3.0, scala-library 2.11.12
I'm stuck with this; any help is more than welcome.
Thanks!
This might be due to a serialization issue: an anonymous inner class keeps a reference to its enclosing instance, so everything reachable from that instance gets pulled into the task closure.
Can you try converting your anonymous Function into a static method of a class?
Whenever I try to read a Spark dataset using PySpark and convert it to a pandas DataFrame for modeling, I get the error java.io.StreamCorruptedException: invalid stream header: 204356EC at the toPandas() step.
I am not a Java coder (hence PySpark), so these errors can be pretty cryptic to me. I tried the following things, but I still have the issue:
Made sure my Spark and PySpark versions matched as suggested here: java.io.StreamCorruptedException when importing a CSV to a Spark DataFrame
Reinstalled Spark using the methods suggested here: Complete Guide to Installing PySpark on MacOS
The logging in the test script below verifies the Spark and PySpark versions are aligned.
test.py:
import logging
from pyspark.sql import SparkSession
from pyspark import SparkContext
import findspark
findspark.init()
logging.basicConfig(
    format='%(asctime)s %(levelname)-8s %(message)s',
    level=logging.INFO,
    datefmt='%Y-%m-%d %H:%M:%S')
sc = SparkContext('local[*]', 'test')
spark = SparkSession(sc)
logging.info('Spark location: {}'.format(findspark.find()))
logging.info('PySpark version: {}'.format(spark.sparkContext.version))
logging.info('Reading spark input dataframe')
test_df = spark.read.csv('./data', header=True, sep='|', inferSchema=True)
logging.info('Converting spark DF to pandas DF')
pandas_df = test_df.toPandas()
logging.info('DF record count: {}'.format(len(pandas_df)))
sc.stop()
Output:
$ python ./test.py
21/05/13 11:54:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2021-05-13 11:54:34 INFO Spark location: /Users/username/server/spark-3.1.1-bin-hadoop2.7
2021-05-13 11:54:34 INFO PySpark version: 3.1.1
2021-05-13 11:54:34 INFO Reading spark input dataframe
2021-05-13 11:54:42 INFO Converting spark DF to pandas DF
21/05/13 11:54:42 WARN package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
21/05/13 11:54:45 ERROR TaskResultGetter: Exception while getting task result12]
java.io.StreamCorruptedException: invalid stream header: 204356EC
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:936)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:394)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.<init>(JavaSerializer.scala:64)
at org.apache.spark.serializer.JavaDeserializationStream.<init>(JavaSerializer.scala:64)
at org.apache.spark.serializer.JavaSerializerInstance.deserializeStream(JavaSerializer.scala:123)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3.$anonfun$run$1(TaskResultGetter.scala:97)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:63)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Traceback (most recent call last):
File "./test.py", line 23, in <module>
pandas_df = test_df.toPandas()
File "/Users/username/server/spark-3.1.1-bin-hadoop2.7/python/pyspark/sql/pandas/conversion.py", line 141, in toPandas
pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
File "/Users/username/server/spark-3.1.1-bin-hadoop2.7/python/pyspark/sql/dataframe.py", line 677, in collect
sock_info = self._jdf.collectToPython()
File "/Users/username/server/spark-3.1.1-bin-hadoop2.7/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/Users/username/server/spark-3.1.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 111, in deco
return f(*a, **kw)
File "/Users/username/server/spark-3.1.1-bin-hadoop2.7/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o31.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Exception while getting task result: java.io.StreamCorruptedException: invalid stream header: 204356EC
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2253)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2202)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2201)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2201)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1078)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1078)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1078)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2440)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2382)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2371)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2202)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2223)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2242)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2267)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:390)
at org.apache.spark.sql.Dataset.$anonfun$collectToPython$1(Dataset.scala:3519)
at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
The issue was resolved for me by ensuring that the serialization option (registered in configuration under spark.serializer) was not incompatible with pyarrow (typically used during the conversion between pandas and PySpark if you have it enabled).
The fix was to remove the often recommended spark.serializer: org.apache.spark.serializer.KryoSerializer from the configuration and rely instead on the potentially slower default.
For context, our setup was an ML version of the Databricks Spark cluster (v7.3).
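As a minimal sketch of that suggestion (assuming a plain local PySpark session rather than Databricks, reusing the CSV path from the question), the point is simply that spark.serializer is never set:

from pyspark.sql import SparkSession

# No .config("spark.serializer", ...) here: Spark falls back to its default
# serializer for task results, which is what the fix above relies on.
spark = (SparkSession.builder
         .appName('toPandas-check')
         .master('local[*]')
         .getOrCreate())

pandas_df = spark.read.csv('./data', header=True, sep='|', inferSchema=True).toPandas()
print(len(pandas_df))
spark.stop()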
I had this exception with the Spark Thrift Server. The driver version and the cluster version were different.
In my case I deleted the following setting, so that the driver's version is used across the whole cluster:
spark.yarn.archive=hdfs:///spark/3.1.1.zip
I am a Scala newbie and have recently started using Spark and Scala. I have a piece of code that simply reads a CSV file and processes the rows, all running locally on my laptop. The code was working just fine and suddenly stopped working. It looks like this:
val spark = SparkSession.builder()
  .appName("testApp")
  .config("spark.master", "local")
  .getOrCreate()
spark.conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val sc = spark.sparkContext
val ERROR_UUID = new UUID(0, 0)

// Read data from input path
val data = spark.read.format("csv")
  .schema(inputSchema)
  .load(inputPath)
  .rdd

data.take(2).foreach(println)

val headerlessRDD = data
  .map {
    case Row(colval: String, colval2: String) => {
      val colval_uuid: UUID =
        Try({
          UUID.fromString(colval)
        }).recoverWith({
          // Just log the exception and keep it as a failure.
          case (ex: Throwable) => malformedRows.add(1); ex.printStackTrace; Failure(ex);
        }).getOrElse(ERROR_UUID)
      (colval_uuid, colval2_uuid, 45)
    }
  }.filter(x => x._1 != ERROR_UUID) // FILTER OUT MALFORMED UUID ROWS

val aggregatedRDD = headerlessRDD.repartition(100)
aggregatedRDD.top(5)
And the exception is:
Caused by: java.lang.NoSuchMethodError: net.jpountz.lz4.LZ4BlockInputStream.<init>(Ljava/io/InputStream;Z)V
at org.apache.spark.io.LZ4CompressionCodec.compressedInputStream(CompressionCodec.scala:122)
at org.apache.spark.serializer.SerializerManager.wrapForCompression(SerializerManager.scala:163)
at org.apache.spark.serializer.SerializerManager.wrapStream(SerializerManager.scala:124)
at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$3.apply(BlockStoreShuffleReader.scala:50)
at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$3.apply(BlockStoreShuffleReader.scala:50)
at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:453)
at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:64)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:30)
at org.spark_project.guava.collect.Ordering.leastOf(Ordering.java:628)
at org.apache.spark.util.collection.Utils$.takeOrdered(Utils.scala:37)
at org.apache.spark.rdd.RDD$$anonfun$takeOrdered$1$$anonfun$32.apply(RDD.scala:1478)
at org.apache.spark.rdd.RDD$$anonfun$takeOrdered$1$$anonfun$32.apply(RDD.scala:1475)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:823)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:823)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
I am using Spark 2.4.5 and can see lz4-java-1.4.0.jar in the jars directory. I've read some similar questions on Stack Overflow where people report the same issue, but they are talking about Kafka, and I am not using Kafka at all. Also, I am running this code from IntelliJ IDEA. As I pointed out earlier, this was working just fine, and I'm not sure why it's throwing an exception now. It's worth mentioning that the problem goes away if I take the repartition step out (which I am not planning to do).
I am currently using Spark 1.5.0 from the Cloudera distribution, and in my Java code I am trying to broadcast a ConcurrentHashMap. In the map() function, when I try to read the broadcast variable, I get a NullPointerException in the resource manager logs. Can anyone please help me out? I am unable to find any resolution for this. The following is my code snippet:
// for broadcasting before calling the mapper
final Broadcast<ConcurrentHashMap<ConstantKeys, Object>> constantmapFinal =
        context.broadcast(constantMap);

.......

// In the map function
JavaRDD<String> outputRDD =
        tempRdd.map(new org.apache.spark.api.java.function.Function() {
            private static final long serialVersionUID = 6104325309455195113L;

            public Object call(final Object arg0) throws Exception {
                ConcurrentHashMap<ConstantKeys, Object> constantMap =
                        constantmapFinal.value(); // line 428
                // ...
            }
        });
The exception from resource manager logs:
2016-11-17 10:40:10 ERROR ApplicationMaster:96 - User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 2.0 failed 4 times, most recent failure: Lost task 1.3 in stage 2.0 (TID 20, ******(server name)): java.io.IOException: java.lang.NullPointerException
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1177)
at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:165)
at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:64)
at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:88)
at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
at com.***.text.fuzzymatch.execute.FuzzyMatchWrapper$2.call(FuzzyMatchWrapper.java:428)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1027)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1109)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1116)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at java.util.HashMap.put(HashMap.java:493)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:135)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:134)
at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:192)
at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:217)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:178)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1174)
... 21 more
This works for smaller map sizes; the map can contain numerous key-value pairs depending on the input request. Can anyone please help me out?
I am trying to read data from Cassandra using Spark.
DataFrame rdf = sqlContext.read().option("keyspace", "readypulse")
        .option("table", "ig_posts")
        .format("org.apache.spark.sql.cassandra").load();

rdf.registerTempTable("cassandra_table");
System.out.println(sqlContext
        .sql("select count(external_id) from cassandra_table")
        .collect()[0].getLong(0));
The task fails with the following error. I am not able to understand why ShuffleMapTask is being invoked, or why casting it to Task is a problem.
16/03/30 02:27:15 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, ip-10-165-180-22.ec2.internal):
java.lang.ClassCastException: org.apache.spark.scheduler.ShuffleMapTask cannot be cast to org.apache.spark.scheduler.Task
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:193)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/03/30 02:27:15 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on executor ip-10-165-180-22.ec2.internal:
java.lang.ClassCastException (org.apache.spark.scheduler.ShuffleMapTask cannot be cast to org.apache.spark.scheduler.Task) [duplicate 1]
16/03/30 02:27:15 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
I am using EMR 4.4, Spark 1.6, Cassandra 2.2 (Datastax Community), and spark-cassandra-connector-java_2.10 1.6.0-M1 (also tried 1.5.0)
I also tried the same thing with the following code, but got the same error.
CassandraJavaRDD<CassandraRow> cjrdd = functions.cassandraTable(
        KEYSPACE, tableName).select(columns);
logger.info("Got rows from cassandra " + cjrdd.count());

JavaRDD<Double> jrdd2 = cjrdd.map(new Function<CassandraRow, Double>() {
    @Override
    public Double call(CassandraRow trainingRow) throws Exception {
        Object fCount = trainingRow.getRaw("follower_count");
        double count = 0;
        if (fCount != null) {
            count = (Long) fCount;
        }
        return count;
    }
});
logger.info("Mapper done : " + jrdd2.count());
logger.info("Mapper done values : " + jrdd2.collect());
I've been encountering a similar problem recently, caused by:
--conf spark.executor.userClassPathFirst=true
Quoting Spark's official documentation:
spark.executor.userClassPathFirst (Experimental) Same functionality as spark.driver.userClassPathFirst, but applied to executor instances.
I think those exceptions were due to a jar version conflict; as the Spark documentation says, "The user's jar should never include Hadoop or Spark libraries, however, these will be added at runtime."
I was also struggling with the same error, but setting userClassPathFirst=true didn't help me. I set "spark.driver.userClassPathFirst=false" and "spark.executor.userClassPathFirst=false", which solved my ClassCastException problem. Here is my command:
spark-submit --conf "spark.driver.userClassPathFirst=false" --conf "spark.executor.userClassPathFirst=false" --deploy-mode client --class org.sdrc.kspmis.dashboardservice.KspDashboardServiceApplication --master spark://192.168.1.95:7077 dashboard-0.0.1-SNAPSHOT-shaded.jar