I am trying to sink Kafka topic records into an S3 bucket using Kafka Connect with camel-kafka-connector 0.9.
The connector loads fine and I can see its consumer connected to Kafka in AKHQ, but it fails immediately, right after trying to commit offsets, with the following exception in the kafka-connect pod:
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:614)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:238)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassCastException: class java.math.BigDecimal cannot be cast to class [B (java.math.BigDecimal and [B are in module java.base of loader 'bootstrap')
at org.apache.camel.kafkaconnector.CamelSinkTask.mapHeader(CamelSinkTask.java:233)
at org.apache.camel.kafkaconnector.CamelSinkTask.put(CamelSinkTask.java:184)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)
... 10 more
I don't understand what's going on under the hood, and there is little room for debugging...
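From the stack trace, the failure seems to boil down to a header cast along these lines (illustration only; I have not checked the connector source, and the assumption that one of the record headers carries a Connect Decimal value is mine):

import java.math.BigDecimal;
import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.Schema;

// Illustration of the failing cast, not the connector's actual code.
// Assumption: a Kafka record header carries a Connect Decimal value, which
// arrives in the sink task already converted to java.math.BigDecimal...
Schema decimalSchema = Decimal.schema(2);           // Connect Decimal logical type, scale 2
Object headerValue = new BigDecimal("123.45");      // what header.value() already holds

// ...but code that still expects the raw encoded bytes then does something
// like this, which throws "class java.math.BigDecimal cannot be cast to class [B":
BigDecimal decoded = Decimal.toLogical(decimalSchema, (byte[]) headerValue);

If that is what is happening, it is a header on the topic records (rather than the payload itself) that trips the sink.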
Kafka Connect connector config:
{
  "name": "connector-name",
  "config": {
    "connector.class": "org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector",
    "topics": "topic-v1-0",
    "camel.sink.endpoint.region": "region",
    "camel.sink.path.bucketNameOrArn": "bucket-name",
    "camel.sink.endpoint.keyName": "incoming-v1-0/${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}",
    "camel.sink.endpoint.useDefaultCredentialsProvider": "true",
    "camel.beans.aggregate": "#class:org.apache.camel.kafkaconnector.aggregator.StringAggregator",
    "camel.aggregation.size": "1000",
    "camel.aggregation.timeout": "5000",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter.schemas.enable": "false",
    "value.converter.schemas.enable": "false"
  }
}
Related
I created Spring Kafka producer and consumer applications that connect to the schema registry. I created a schema group with forward compatibility (Avro) and uploaded an Avro schema. With a single schema version, both applications work fine.
I then added a new field to the schema, so the version increased to 2. The producer application uses version 2 to send messages while the consumer still uses the old version (version 1). While consuming a message, the consumer application throws the error below. I am using an Avro SpecificRecord.
Do I need to specify a version number on the consumer side? How does the consumer application know which version of the schema to use?
java.lang.IllegalStateException: Error deserializing Avro message.
Caused by: java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to class com.test.model.Alternates (org.apache.avro.util.Utf8 and com.test.model.Alternates are in unnamed module of loader org.springframework.boot.loader.LaunchedURLClassLoader @1de0aca6)
I am using the Azure Event Hubs Schema Registry libraries, which in turn use the Apache Avro libraries.
Schema V1 (used by the consumer):
{
  "type": "record",
  "namespace": "com.test",
  "name": "Employee",
  "fields": [
    { "name": "firstName", "type": "string" },
    { "name": "age", "type": "int" }
  ]
}
Schema V2 (used by the producer):
{
  "type": "record",
  "namespace": "com.test",
  "name": "Employee",
  "fields": [
    { "name": "firstName", "type": "string" },
    { "name": "middleName", "type": ["null", "string"], "default": null },
    { "name": "age", "type": "int" }
  ]
}
When the producer sends a message, the message is tagged with the version 2 schema ID, so the consumer resolves version 2 of the schema while deserializing it.
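For comparison, plain Apache Avro handles this by letting the reader supply both schemas, in which case fields are matched by name; whether the Azure deserializer in this beta version exposes a way to pass the reader schema is something I could not confirm. A minimal sketch (schemaV2Json and payloadBytes are placeholders):

import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;

// Sketch of plain Avro writer/reader schema resolution (not the Azure deserializer itself).
// schemaV2Json: the writer schema looked up via the ID embedded in the message.
// payloadBytes: the Avro-encoded message body.
static Employee readWithOldReaderSchema(String schemaV2Json, byte[] payloadBytes) throws IOException {
    Schema writerSchemaV2 = new Schema.Parser().parse(schemaV2Json);
    Schema readerSchemaV1 = Employee.getClassSchema(); // schema the consumer's generated class was built from

    DatumReader<Employee> reader = new SpecificDatumReader<>(writerSchemaV2, readerSchemaV1);
    Decoder decoder = DecoderFactory.get().binaryDecoder(payloadBytes, null);
    return reader.read(null, decoder); // middleName is matched by name and skipped, not applied by position
}

If the deserializer uses only the writer schema, values get pushed into the generated class by field position, which would be consistent with both symptoms here: the Utf8 value landing in the int age slot when middleName sits in the middle, and Invalid index: 2 when it is moved to the end.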
ERROR:
Caused by: java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to class java.lang.Integer (org.apache.avro.util.Utf8 is in unnamed module of loader org.springframework.boot.loader.LaunchedURLClassLoader @1de0aca6; java.lang.Integer is in module java.base of loader 'bootstrap')
at com.aa.opshub.test.Employee.put(Employee.java:110) ~[classes!/:0.0.1-SNAPSHOT]
at org.apache.avro.generic.GenericData.setField(GenericData.java:816) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.specific.SpecificDatumReader.readField(SpecificDatumReader.java:139) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:247) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.specific.SpecificDatumReader.readRecord(SpecificDatumReader.java:123) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:160) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153) ~[avro-1.9.2.jar!/:1.9.2]
at com.azure.data.schemaregistry.avro.AvroSchemaRegistryUtils.decode(AvroSchemaRegistryUtils.java:131) ~[azure-data-schemaregistry-avro-1.0.0-beta.4.jar!/:?]
at com.azure.data.schemaregistry.avro.SchemaRegistryAvroSerializer.lambda$deserializeAsync$1(SchemaRegistryAvroSerializer.java:101) ~[azure-data-schemaregistry-avro-1.0.0-beta.4.jar!/:?]
at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.onNext(FluxHandleFuseable.java:169) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.MonoCallable.subscribe(MonoCallable.java:61) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.Mono.subscribe(Mono.java:3987) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:199) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.Mono.subscribe(Mono.java:3972) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.Mono.block(Mono.java:1678) ~[reactor-core-3.4.0.jar!/:3.4.0]
at com.azure.data.schemaregistry.avro.SchemaRegistryAvroSerializer.deserialize(SchemaRegistryAvroSerializer.java:50) ~[azure-data-schemaregistry-avro-1.0.0-beta.4.jar!/:?]
at com.microsoft.azure.schemaregistry.kafka.avro.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:66) ~[azure-schemaregistry-kafka-avro-1.0.0-beta.4.jar!/:?]
at org.apache.kafka.common.serialization.Deserializer.deserialize(Deserializer.java:60) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1365) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher.access$3400(Fetcher.java:130) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1596) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1700(Fetcher.java:1432) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:684) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:635) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1283) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210) ~[kafka-clients-2.6.0.jar!/:?]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1238) ~[spring-kafka-2.6.3.jar!/:2.6.3]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1133) ~[spring-kafka-2.6.3.jar!/:2.6.3]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1054) ~[spring-kafka-2.6.3.jar!/:2.6.3]
... 3 more
2021.06.02 23:05:56,821 org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1 DEBUG com.azure.core.util.logging.ClientLogger.performLogging(ClientLogger.java:335) - Cache hit for schema id 'eb3549cd9b544e3a89b8d693275a502f'
2021.06.02 23:05:56,821 org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1 ERROR com.azure.core.util.logging.ClientLogger.performLogging(ClientLogger.java:350) - Error deserializing Avro message.
java.lang.IllegalStateException: Error deserializing Avro message.
When middleName is placed at the end of schema V2 and I regenerate the classes in the producer application (no changes on the consumer side, still using schema V1), I get the error below in the consumer application.
Caused by: java.lang.IndexOutOfBoundsException: Invalid index: 2
I have a 3-instance MongoDB replica set, including 1 arbiter, on 3 different EC2 instances. From the mongo console I am able to connect to the replica set.
But when I try to build/deploy my dockerized Spring Boot application on the primary EC2 instance, it throws the exception below:
Caused by: org.springframework.data.mongodb.UncategorizedMongoDbException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='<usrName>', source='<source>', password=<hidden>, mechanismProperties=<hidden>}; nested exception is com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='<usrName>', source='<source>', password=<hidden>, mechanismProperties=<hidden>}
Caused by: com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='<usrName>', source='<source>', password=<hidden>, mechanismProperties=<hidden>}
Caused by: com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' on server <Primary-Host>:27017. The full response is {"operationTime": {"$timestamp": {"t": 1601217500, "i": 1}}, "ok": 0.0, "errmsg": "Authentication failed.", "code": 18, "codeName": "AuthenticationFailed", "$clusterTime": {"clusterTime": {"$timestamp": {"t": 1601217500, "i": 1}}, "signature": {"hash": {"$binary": {"base64": "KSwBAZHnhcqmjdsAy9HHVB8+yZQ=", "subType": "00"}}, "keyId": 6876114453302083588}}}
Spring Data MongoDB properties used while connecting to the replica set:
spring.data.mongodb.uri=mongodb://<usrName>:<password>@<host-primary>:27017,<host-secondary>:27017/<dbName>?<replicaset name>
spring.data.mongodb.auto-index-creation = true
Whereas when I try to build/deploy using the properties below, i.e. a single-node connection, it succeeds:
spring.data.mongodb.host=<Primary-Host>
spring.data.mongodb.port=27017
spring.data.mongodb.database=<database name>
spring.data.mongodb.authentication-database=admin
spring.data.mongodb.username=<user name>
spring.data.mongodb.password=<password>
spring.data.mongodb.auto-index-creation = true
Does the username or password contain an at sign @, colon :, slash /, or percent sign % character?
If yes, check that you are using the correct (percent) encoding for those characters in the URI.
Also try adding authSource to the URI, like so:
?authSource=admin&replicaSet=myRepl
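For example (a sketch with placeholder credentials), percent-encode the pieces before assembling the URI:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Placeholder credentials. URLEncoder handles @ : / and % (note it encodes a space
// as '+', so percent-encode spaces yourself if the password contains one).
String user = URLEncoder.encode("myUser", StandardCharsets.UTF_8);       // Charset overload needs Java 10+
String pass = URLEncoder.encode("p@ss:w/rd%1", StandardCharsets.UTF_8);  // -> p%40ss%3Aw%2Frd%251

String uri = "mongodb://" + user + ":" + pass
        + "@host-primary:27017,host-secondary:27017/mydb"
        + "?replicaSet=myRepl&authSource=admin";
// then use this value as spring.data.mongodb.uri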
I've run into this error when Kafka Streams tries to deserialise the Avro message:
[filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab-StreamThread-1] ERROR org.apache.kafka.streams.KafkaStreams - stream-client [filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab] All stream threads have died. The instance will be in error state and should be closed.
[filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab-StreamThread-1] Shutdown complete
Exception in thread "filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Deserialization exception handler is set to fail upon a deserialization error. If you would rather have the streaming pipeline continue after a deserialization error, please set the default.deserialization.exception.handler appropriately.
at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:80)
at org.apache.kafka.streams.processor.internals.RecordQueue.maybeUpdateTimestamp(RecordQueue.java:160)
The cause exception was:
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 1
Caused by: java.lang.RuntimeException: java.lang.StringIndexOutOfBoundsException: begin 1, end 0, length 1
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1529)
and
Caused by: java.lang.StringIndexOutOfBoundsException: begin 1, end 0, length 1
at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3319)
at java.base/java.lang.String.substring(String.java:1874)
The Avro schema is straightforward:
{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "Publication",
  "fields": [
    { "name": "name", "type": "string" },
    { "name": "title", "type": "string" }
  ]
}
It is from this tutorial: https://kafka-tutorials.confluent.io/filter-a-stream-of-events/kstreams.html. The producer serialises the input string {"name": "George R. R. Martin", "title": "A Dream of Spring"} with no problem, but the Kafka Streams app, which basically just filters the events, fails to deserialise the object before it can apply the Java filtering logic...
Has anyone encountered this problem before? I'd appreciate any suggestions!
Found the issue: a proxy was getting in the way.
The root cause was that the app couldn't connect to the schema registry. Just noting it here in case someone runs into the same problem later.
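To rule this out quickly, a minimal connectivity check against the registry can look like this (the URL is a placeholder for whatever schema.registry.url points to); when a proxy intercepts the call, the client receives something other than the JSON it expects and fails while parsing it:

import java.net.HttpURLConnection;
import java.net.URL;

public class RegistryCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: use the value configured as schema.registry.url.
        URL url = new URL("http://localhost:8081/subjects");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5_000);
        conn.setReadTimeout(5_000);
        System.out.println("Schema registry responded with HTTP " + conn.getResponseCode());
        // If a corporate proxy is intercepting the call, bypassing it for the registry
        // host may help, e.g. -Dhttp.nonProxyHosts="localhost|schema-registry.internal"
    }
}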
We are trying to integrate Spark and Spring Boot; unfortunately we keep running into issues. After resolving most of them, we are now stuck on the exception below:
Job aborted due to stage failure: Task 0 in stage 11.0 failed 4 times, most recent failure: Lost task 0.3 in stage 11.0 (TID 14, xxxxx.ax.internal.cloudapp.net, executor 1): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2233)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1405)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2284)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1954)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:336)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2386)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withNewExecutionId(Dataset.scala:2788)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2385)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2392)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2128)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2127)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2818)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2127)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2342)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
at org.apache.spark.sql.Dataset.show(Dataset.scala:638)
at org.apache.spark.sql.Dataset.show(Dataset.scala:597)
at org.apache.spark.sql.Dataset.show(Dataset.scala:606)
at com.xxx.xxx.spark.Execute.run(Execute.java:46)
at com.xxx.xxx.spark.Loader.process(Loader.java:505)
at com.xxx.xxx.spark.Loader.run(Loader.java:122)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:732)
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:716)
at org.springframework.boot.SpringApplication.afterRefresh(SpringApplication.java:703)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:304)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1118)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1107)
at com.xxx.xxx.spark.AnalyseFec.main(AnalyseFec.java:11)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
This exception is raised when trying to manipulate a transformed dataset (created with a map). The count and collect methods work fine.
The sample below throws the exception on dataf22.show():
StructType schemata = DataTypes.createStructType(
        new StructField[]{
                DataTypes.createStructField("Column1", DataTypes.StringType, false),
                DataTypes.createStructField("Column2", DataTypes.DoubleType, false),
                DataTypes.createStructField("Column3", DataTypes.DoubleType, false),
        });

ExpressionEncoder<Row> encoder = RowEncoder.apply(schemata);

Dataset<Row> dataf2 = session.read()
        .option("header", "true")
        .option("delimiter", separateur)
        // .schema(schemata)
        .csv(csvPath);

dataf2.write().mode(SaveMode.Overwrite).parquet("xxx.parquet");

Dataset<Row> parquetFileDF = session.read().parquet("xxx.parquet");

Dataset<Row> dataf22 = parquetFileDF.map(row -> {
    return RowFactory.create(row.getAs("Column1"),
            Double.parseDouble(row.getAs("Column2").toString().replace(",", ".")),
            Double.parseDouble(row.getAs("Column3").toString().replace(",", ".")));
}, encoder);

dataf22.printSchema();
dataf22.show();

dataf22.groupBy("Column1");
Dataset<Row> ds1 = dataf22.groupBy("Column1").sum("Column2");
ds1.show();
Dataset<Row> ds2 = dataf22.groupBy("Column1").sum("Column3");
ds2.show();
Initially we were packaging with the spring-boot-maven-plugin; spark-submit was calling org.springframework.boot.loader.JarLauncher, which launches our starter class.
When we moved to the maven-shade-plugin (with some modifications to support Spring Boot), the exception above disappeared and we were able to run our program, but only in client mode. In cluster mode the application never actually runs in YARN; after multiple attempts it fails without any error that would help pinpoint the issue.
My feeling is that once the program runs on the executors, the problem comes down to classpath or classloader issues.
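For illustration, one classpath-related setting that seems relevant here (a sketch only, untested in this setup; the jar path is a placeholder) is shipping the shaded application jar explicitly so the executors load the same classes the driver serialized its lambdas against:

import org.apache.spark.sql.SparkSession;

// Sketch, not a verified fix: ship the shaded jar to driver and executors via spark.jars.
SparkSession session = SparkSession.builder()
        .appName("AnalyseFec")
        .config("spark.jars", "/path/to/application-shaded.jar") // jar built with maven-shade-plugin
        .getOrCreate();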
Did you manage to get this integration working? If yes, which Maven plugin did you use, and which extra spark-submit parameters (extraClassPath, ...)?
Thank you
We seem to be out of ideas on how to continue troubleshooting this issue. Suddenly we see the exception listed below being hit every few minutes.
This is a 4-shard MongoDB setup. The 4 mongos servers are proxied by 3 HAProxies. We aren't seeing any visible network issues between our application and the shards, the logs for the shards, config servers and mongos don't show anything out of the ordinary, and the HAProxies seem to be doing their job just fine. Any thoughts and leads on this would be most appreciated!
{"#timestamp":"2016-02-10T05:10:23.780+00:00","#version":1,"message":"unable to process event [ Request Id: [ eb8c702c-b3e7-4605-99a7-c3dcb5a076a9 ] - Event: id [ 49e0ae8d-f16e-448c-9b28-a4af59aa2eb0 ] messageType [ bulk ] operationType: [ create ] xpoint [ 10254910941235296908401 ] ]","logger_name":"com.xyz.event.RabbitMQMessageProcessor","thread_name":"SimpleAsyncTaskExecutor-1","level":"WARN","level_value":30000,"stack_trace":"java.io.EOFException: null\n\tat org.bson.io.Bits.readFully(Bits.java:50) ~[mongo-java-driver-2.12.5.jar:na]\n\tat org.bson.io.Bits.readFully(Bits.java:35)\n\tat org.bson.io.Bits.readFully(Bits.java:30)\n\tat com.mongodb.Response.(Response.java:42)\n\tat com.mongodb.DBPort$1.execute(DBPort.java:141)\n\tat com.mongodb.DBPort$1.execute(DBPort.java:135)\n\tat com.mongodb.DBPort.doOperation(DBPort.java:164)\n\tat com.mongodb.DBPort.call(DBPort.java:135)\n\tat c.m.DBTCPConnector.innerCall(DBTCPConnector.java:289)\n\t... 56 common frames omitted\nWrapped by: c.m.MongoException$Network: Read operation to server prod_mongos.internal.xyz.com:27017 failed on database xyz\n\tat c.m.DBTCPConnector.innerCall(DBTCPConnector.java:297) ~[mongo-java-driver-2.12.5.jar:na]\n\tat c.m.DBTCPConnector.call(DBTCPConnector.java:268)\n\tat c.m.DBCollectionImpl.find(DBCollectionImpl.java:84)\n\tat c.m.DBCollectionImpl.find(DBCollectionImpl.java:66)\n\tat c.m.DBCollection.findOne(DBCollection.java:869)\n\tat c.m.DBCollection.findOne(DBCollection.java:843)\n\tat c.m.DBCollection.findOne(DBCollection.java:789)\n\tat o.s.d.m.c.MongoTemplate$FindOneCallback.doInCollection(MongoTemplate.java:2013) ~[spring-data-mongodb-1.6.3.RELEASE.jar:na]\n\tat o.s.d.m.c.MongoTemplate$FindOneCallback.doInCollection(MongoTemplate.java:1997)\n\tat o.s.d.m.c.MongoTemplate.executeFindOneInternal(MongoTemplate.java:1772)\n\t... 47 common frames omitted\nWrapped by: o.s.d.DataAccessResourceFailureException: Read operation to server prod_mongos.internal.xyz.com:27017 failed on database xyz; nested exception is com.mongodb.MongoException$Network: Read operation to server prod_mongos.internal.xyz.com:27017 failed on database xyz\n\tat o.s.d.m.c.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:59) ~[spring-data-mongodb-1.6.3.RELEASE.jar:na]\n\tat o.s.d.m.c.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:1946)\n\tat o.s.d.m.c.MongoTemplate.executeFindOneInternal(MongoTemplate.java:1776)\n\tat o.s.d.m.c.MongoTemplate.doFindOne(MongoTemplate.j...","HOSTNAME":"prod-node-09","requestId":"eb8c702c-b3e7-4605-99a7-c3dcb5a076a9","WHAT":"ProcessBulkDiscoveredXYZEvent","host":"172.30.31.155:44243","type":"cloud_service","tags":["_grokparsefailure"]}