I created Spring Kafka producer and consumer applications that connect to the schema registry. I created a schema group with forward compatibility (Avro) and uploaded an Avro schema. With a single schema version, both applications work fine.
I then added a new field to the schema, so the version increased to 2. The producer application uses version 2 to send messages, while the consumer still uses the old version (version 1). While consuming a message, the consumer application throws the error below. I am using avroSpecificRecord.
Do I need to specify a version number on the consumer side? How does the consumer application know which version of the schema to use?
java.lang.IllegalStateException: Error deserializing Avro message.
Caused by: java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to class com.test.model.Alternates (org.apache.avro.util.Utf8 and com.test.model.Alternates are in unnamed module of loader org.springframework.boot.loader.LaunchedURLClassLoader #1de0aca6)
I am using the Azure Event Hubs schema registry libraries, but under the hood those also use the Apache Avro libraries.
Schema V1 (used by the consumer):
{
  "type": "record",
  "namespace": "com.test",
  "name": "Employee",
  "fields": [
    { "name": "firstName", "type": "string" },
    { "name": "age", "type": "int" }
  ]
}
Schema V2 (used by the producer):
{
  "type": "record",
  "namespace": "com.test",
  "name": "Employee",
  "fields": [
    { "name": "firstName", "type": "string" },
    { "name": "middleName", "type": ["null", "string"], "default": null },
    { "name": "age", "type": "int" }
  ]
}
When the producer sends a message, the version 2 schema ID is appended to it, so the consumer expects the version 2 schema while deserializing that message.
ERROR:
Caused by: java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to class java.lang.Integer (org.apache.avro.util.Utf8 is in unnamed module of loader org.springframework.boot.loader.LaunchedURLClassLoader #1de0aca6; java.lang.Integer is in module java.base of loader 'bootstrap')
at com.aa.opshub.test.Employee.put(Employee.java:110) ~[classes!/:0.0.1-SNAPSHOT]
at org.apache.avro.generic.GenericData.setField(GenericData.java:816) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.specific.SpecificDatumReader.readField(SpecificDatumReader.java:139) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:247) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.specific.SpecificDatumReader.readRecord(SpecificDatumReader.java:123) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:160) ~[avro-1.9.2.jar!/:1.9.2]
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153) ~[avro-1.9.2.jar!/:1.9.2]
at com.azure.data.schemaregistry.avro.AvroSchemaRegistryUtils.decode(AvroSchemaRegistryUtils.java:131) ~[azure-data-schemaregistry-avro-1.0.0-beta.4.jar!/:?]
at com.azure.data.schemaregistry.avro.SchemaRegistryAvroSerializer.lambda$deserializeAsync$1(SchemaRegistryAvroSerializer.java:101) ~[azure-data-schemaregistry-avro-1.0.0-beta.4.jar!/:?]
at reactor.core.publisher.FluxHandleFuseable$HandleFuseableSubscriber.onNext(FluxHandleFuseable.java:169) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.MonoCallable.subscribe(MonoCallable.java:61) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.Mono.subscribe(Mono.java:3987) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:199) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.Mono.subscribe(Mono.java:3972) ~[reactor-core-3.4.0.jar!/:3.4.0]
at reactor.core.publisher.Mono.block(Mono.java:1678) ~[reactor-core-3.4.0.jar!/:3.4.0]
at com.azure.data.schemaregistry.avro.SchemaRegistryAvroSerializer.deserialize(SchemaRegistryAvroSerializer.java:50) ~[azure-data-schemaregistry-avro-1.0.0-beta.4.jar!/:?]
at com.microsoft.azure.schemaregistry.kafka.avro.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:66) ~[azure-schemaregistry-kafka-avro-1.0.0-beta.4.jar!/:?]
at org.apache.kafka.common.serialization.Deserializer.deserialize(Deserializer.java:60) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1365) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher.access$3400(Fetcher.java:130) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1596) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1700(Fetcher.java:1432) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:684) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:635) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1283) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237) ~[kafka-clients-2.6.0.jar!/:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210) ~[kafka-clients-2.6.0.jar!/:?]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1238) ~[spring-kafka-2.6.3.jar!/:2.6.3]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1133) ~[spring-kafka-2.6.3.jar!/:2.6.3]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1054) ~[spring-kafka-2.6.3.jar!/:2.6.3]
... 3 more
2021.06.02 23:05:56,821 org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1 DEBUG com.azure.core.util.logging.ClientLogger.performLogging(ClientLogger.java:335) - Cache hit for schema id 'eb3549cd9b544e3a89b8d693275a502f'
2021.06.02 23:05:56,821 org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1 ERROR com.azure.core.util.logging.ClientLogger.performLogging(ClientLogger.java:350) - Error deserializing Avro message.
java.lang.IllegalStateException: Error deserializing Avro message.
When middleName is placed at the end of schema v2 and I regenerate the classes in the producer application, with no changes made to the consumer (still using schema v1), I get the error below in the consumer application.
Caused by: java.lang.IndexOutOfBoundsException: Invalid index: 2
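For reference, this is roughly the behaviour I expected from the deserializer, expressed with plain Apache Avro rather than the Azure serializer (a minimal sketch; schemaV2Json and avroPayload are placeholders for the registry's v2 schema text and the raw Avro bytes of the message):
import org.apache.avro.Schema;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;

// Give Avro BOTH schemas: the writer schema (v2, resolved from the schema ID carried in
// the message) and the reader schema (v1, compiled into the consumer's Employee class).
// With both, Avro's schema resolution simply skips the unknown middleName field instead
// of shifting field positions and mis-typing firstName/age.
Schema writerSchema = new Schema.Parser().parse(schemaV2Json);   // schema the producer wrote with
Schema readerSchema = Employee.getClassSchema();                 // v1 schema of the generated class
DatumReader<Employee> reader = new SpecificDatumReader<>(writerSchema, readerSchema);
Decoder decoder = DecoderFactory.get().binaryDecoder(avroPayload, null);
Employee employee = reader.read(null, decoder);                  // middleName is ignored, age stays an int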
Related
I am trying to sink Kafka topic records into an S3 bucket using kafka-connect + camel-kafka-connector 0.9.
The connector loads up fine and I can see it connected to Kafka (the consumer is visible in AKHQ), but it fails immediately after trying to commit offsets, with the following exception from the kafka-connect pod:
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:614)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:238)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassCastException: class java.math.BigDecimal cannot be cast to class [B (java.math.BigDecimal and [B are in module java.base of loader 'bootstrap')
at org.apache.camel.kafkaconnector.CamelSinkTask.mapHeader(CamelSinkTask.java:233)
at org.apache.camel.kafkaconnector.CamelSinkTask.put(CamelSinkTask.java:184)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)
... 10 more
I don't understand what's going on under the hood and there is little room for debugging...
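My best guess from the stack trace, written out as a hedged sketch (this is not the actual camel-kafka-connector code): one of the record headers arrives in Connect as a java.math.BigDecimal, while the header-mapping code assumes every header value is a byte[], so the cast blows up. If that is right, producing that header as plain bytes or a string (or stripping it before the sink) should avoid it.
import java.math.BigDecimal;

// What CamelSinkTask.mapHeader seems to run into: the converter hands it a decoded
// BigDecimal header value, but the value is then treated as raw bytes.
Object headerValue = new BigDecimal("1.00");  // decoded value of one Kafka record header
byte[] raw = (byte[]) headerValue;            // ClassCastException: BigDecimal cannot be cast to [B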
Kafka-connector config:
{
  "name": "connector-name",
  "config": {
    "connector.class": "org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector",
    "topics": "topic-v1-0",
    "camel.sink.endpoint.region": "region",
    "camel.sink.path.bucketNameOrArn": "bucket-name",
    "camel.sink.endpoint.keyName": "incoming-v1-0/${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}",
    "camel.sink.endpoint.useDefaultCredentialsProvider": "true",
    "camel.beans.aggregate": "#class:org.apache.camel.kafkaconnector.aggregator.StringAggregator",
    "camel.aggregation.size": "1000",
    "camel.aggregation.timeout": "5000",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter.schemas.enable": "false",
    "value.converter.schemas.enable": "false"
  }
}
https://pricing.twilio.com/v1/PhoneNumbers/Countries/{countryCode}
The API fetches a country and is expected to give a response like this:
{
  "url": "https://pricing.twilio.com/v1/PhoneNumbers/Countries/US",
  "country": "United States",
  "price_unit": "USD",
  "phone_number_prices": [
    { "number_type": "local", "base_price": "1.00", "current_price": "1.00" },
    { "number_type": "toll free", "base_price": "2.00", "current_price": "2.00" }
  ],
  "iso_country": "US"
}
but I am facing a problem while fetching the country. This call:
Country country = Country.fetcher(countryCode).fetch();
throws the following exception:
Unrecognized field "number_type" (class com.twilio.type.PhoneNumberPrice), not marked as ignorable (5 known properties: "basePrice", "type", "base_price", "currentPrice", "current_price"])
at [Source: (org.apache.http.conn.EofSensorInputStream); line: 1, column: 212] (through reference chain: com.twilio.rest.pricing.v1.phonenumber.Country["phone_number_prices"]->java.util.ArrayList[0]->com.twilio.type.PhoneNumberPrice["number_type"])
How do I resolve this problem?
Twilio developer evangelist here.
What version of the Twilio Java library are you using? This looks like an issue that was fixed back in March 2017.
I recommend you upgrade the version of the Twilio Java library you are using and if that doesn't solve the issue, raise an issue on the library repo.
I've run into this error when Kafka Streams tries to deserialise the Avro message:
[filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab-StreamThread-1] ERROR org.apache.kafka.streams.KafkaStreams - stream-client [filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab] All stream threads have died. The instance will be in error state and should be closed.
[filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab-StreamThread-1] INFO org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab-StreamThread-1] Shutdown complete
Exception in thread "filtering-app-6adef284-11eb-48f8-8ca0-cde7da5224ab-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Deserialization exception handler is set to fail upon a deserialization error. If you would rather have the streaming pipeline continue after a deserialization error, please set the default.deserialization.exception.handler appropriately.
at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:80)
at org.apache.kafka.streams.processor.internals.RecordQueue.maybeUpdateTimestamp(RecordQueue.java:160)
The cause exception was:
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 1
Caused by: java.lang.RuntimeException: java.lang.StringIndexOutOfBoundsException: begin 1, end 0, length 1
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1529)
and
Caused by: java.lang.StringIndexOutOfBoundsException: begin 1, end 0, length 1
at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3319)
at java.base/java.lang.String.substring(String.java:1874)
The Avro configuration is straightforward:
{
  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "Publication",
  "fields": [
    { "name": "name", "type": "string" },
    { "name": "title", "type": "string" }
  ]
}
which is from this tutorial: https://kafka-tutorials.confluent.io/filter-a-stream-of-events/kstreams.html. The producer serialises the input string {"name": "George R. R. Martin", "title": "A Dream of Spring"} with no problem, but the Kafka Streams app, which basically filters the events, fails to deserialise the object to perform the Java filtering logic.
Has anyone encountered this problem before? I'd appreciate any suggestions!
Found the issue: a proxy was getting in the way.
The root cause was that the app couldn't connect to the schema registry. Just noting it here in case someone runs into the same problem later.
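For anyone else hitting this: the serde only talks to one external endpoint, the schema registry URL, and in my case the HTTP call to it was going through the proxy instead of reaching the registry, which is where the substring error above seems to come from. A minimal sketch of where that URL is configured, using Confluent's SpecificAvroSerde as in the tutorial (the localhost address is a placeholder):
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;
import java.util.Map;

// The Avro serde resolves schema IDs against schema.registry.url; make sure the
// Streams app can reach this address directly (for example by excluding it via
// http.nonProxyHosts if a JVM-wide proxy is configured).
SpecificAvroSerde<Publication> publicationSerde = new SpecificAvroSerde<>();
publicationSerde.configure(
        Map.of("schema.registry.url", "http://localhost:8081"),  // placeholder registry address
        false);                                                   // false = value serde, not key serde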
I am trying to post mappings to a remote WireMock server from a Spring application. What I found while debugging is that my JSON gets converted to a StubMapping, and this is where the code fails with the following error:
Error creating bean with name 'wiremockConfig' defined in file [C:\Users\Addy\school-impl-api\target\classes\com\test\school\project\wiremock\WiremockConfig.class]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.test.order.implementation.product.wiremock.WiremockConfig$$EnhancerBySpringCGLIB$$b100848d]: Constructor threw exception; nested exception is com.github.tomakehurst.wiremock.common.JsonException: {
"errors" : [ {
"code" : 10,
"source" : {
"pointer" : "/mappings"
},
"title" : "Error parsing JSON",
"detail" : "Unrecognized field \"mappings\" (class com.github.tomakehurst.wiremock.stubbing.StubMapping), not marked as ignorable"
} ]
}
I got the details for posting to a remote standalone server from the last comment on the following issue:
https://github.com/tomakehurst/wiremock/issues/1138
My code for posting to the remote server is like this:
WireMock wm = new WireMock("https", "wiremock-poc.apps.pcf.sample.int", 443);
wm.loadMappingsFrom("src/main/resources"); // Root dir contains mappings and __files
This gets loaded when I run with the local profile.
Please provide guidance on how to solve this and move forward.
Regards
Update: here is a sample mapping file:
{
  "mappings": [
    {
      "request": {
        "method": "GET",
        "urlPathPattern": "/school/admin/rest/users/([0-9]*)?([a-zA-Z0-9_\\-\\=\\?\\.]*)"
      },
      "response": {
        "status": 200,
        "headers": {
          "Content-Type": "application/json"
        },
        "bodyFileName": "./mockResponses/School-getUser.json"
      }
    }
  ]
}
After a discussion in chat, I found out that keeping each mapping in a separate file is what's supported.
Here's the source code that is responsible for that: RemoteMappingsLoader#load
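If splitting the JSON into one stub per file is not convenient, a hedged alternative sketch is to register the stub against the remote instance programmatically (the host, URL pattern, and body file are taken from the question; the body file must already exist under __files on the remote server):
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlPathMatching;

import com.github.tomakehurst.wiremock.client.WireMock;

WireMock wm = new WireMock("https", "wiremock-poc.apps.pcf.sample.int", 443);

// Registers a single stub on the remote server, avoiding the "mappings" wrapper
// that StubMapping refuses to parse.
wm.register(get(urlPathMatching("/school/admin/rest/users/([0-9]*)?([a-zA-Z0-9_\\-\\=\\?\\.]*)"))
        .willReturn(aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "application/json")
                .withBodyFile("mockResponses/School-getUser.json")));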
I produce messages with the same Avro schema to one topic using different Confluent Schema Registry sources, and I get this error when I consume the topic:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition XXXXX_XXXX_XXX-0 at offset 0. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 7
Caused by: org.apache.kafka.common.errors.SerializationException: Could not find class XXXXX_XXXX_XXX specified in writer's schema whilst finding reader's schema for a SpecificRecord.
How can I ignore the differing Avro message IDs?
Schema:
{
  "type": "record",
  "name": "XXXXX_XXXX_XXX",
  "namespace": "aa.bb.cc.dd",
  "fields": [
    { "name": "ACTION", "type": ["null", "string"], "default": null, "doc": "action" },
    { "name": "EMAIL", "type": ["null", "string"], "default": null, "doc": "email address" }
  ]
}
Produced message:
{"Action": "A", "EMAIL": "xxxx#xxx.com"}
It's not possible to use different Registry URLs in producers and still be able to consume the messages consistently.
The reason is that a different ID will be placed in the topic.
The schema ID lookup cannot be skipped.
If you had used the same registry, the same schema payload would always generate the same ID, which the consumer would then be able to use consistently to read messages.
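To make the ID point concrete, every value written by the Confluent Avro serializer starts with a small header that the consumer reads back before touching Avro at all (a sketch; recordValue stands for the raw bytes of one consumed record):
import java.nio.ByteBuffer;

// Confluent wire format: [magic byte 0][4-byte schema ID][Avro binary payload]
ByteBuffer buffer = ByteBuffer.wrap(recordValue);
byte magicByte = buffer.get();   // always 0 for the Confluent serializer
int schemaId = buffer.getInt();  // assigned by whichever registry the producer talked to
// The deserializer looks this ID up in *its* configured registry; with two different
// registries the same schema text gets different IDs, so the lookup finds nothing
// (or the wrong schema) and deserialization fails.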