I have a Spring Boot application using the Apache Camel Java DSL that reads messages from a Kafka topic.
@Component
public class KafkaTopicService extends RouteBuilder {

    @Override
    public void configure() {
        from("kafka:myTopic?brokers=localhost:9092")
            .log("Message received from Kafka: ${body}");
    }
}
If I stop Kafka, I get an org.apache.kafka.common.errors.DisconnectException.
I looked into onException(...class).handled(true), but I'm not sure how to implement the exception handling in my code. Can someone give me a few implementation examples? What options are available? For example, logging the message or reattempting to read it?
The documentation also mentions Quarkus. Do I need Quarkus to use onException()?
You can do something like the following (I have not tried running it, so please take care of any typos):
@Component
public class KafkaTopicService extends RouteBuilder {

    @Override
    public void configure() {
        onException(org.apache.kafka.common.errors.DisconnectException.class)
            .log("Error connecting to Kafka");

        from("kafka:myTopic?brokers=localhost:9092&bridgeErrorHandler=true")
            .log("Message received from Kafka: ${body}");
    }
}
Please note that I have added bridgeErrorHandler=true. Normally Camel's error handling only covers exceptions thrown by processing steps after from(...); errors raised by the consumer itself happen before routing starts. With bridgeErrorHandler=true those consumer errors are bridged into Camel's routing engine, so onException(...) can handle them as well.
Also note that I have defined onException outside your route, so the exception handling logic you add is global and applies to all routes in this RouteBuilder wherever a DisconnectException is encountered.
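If you also want to retry rather than just log, onException supports redelivery options. A minimal sketch (untested; the retry count, delay and log message are illustrative, and whether redelivery is useful for a consumer-side disconnect depends on your setup, since the Kafka consumer also retries the connection on its own):

onException(org.apache.kafka.common.errors.DisconnectException.class)
    .maximumRedeliveries(3)                       // try the exchange again a few times
    .redeliveryDelay(5000)                        // wait 5 seconds between attempts
    .retryAttemptedLogLevel(LoggingLevel.WARN)    // org.apache.camel.LoggingLevel
    .handled(true)                                // do not propagate the exception further
    .log("Still cannot reach Kafka: ${exception.message}");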
I'm getting a bunch of deserialization failures before my Kafka listener is hit. I was looking into the things Gary Russell built, but I'm having issues getting it to work. All my configuration is in a properties file.
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
spring.kafka.consumer.properties.spring.deserializer.value.delegate.class=io.confluent.kafka.serializers.KafkaAvroDeserializer
So if I add these, my understanding is that the error is wrapped in the headers of the consumer record? My ultimate goal is to have any deserialization exception hit a custom class of mine so I can decide what to do with it, i.e. forward it to my dead-letter handler, which uploads failed data to S3.
I tried adding the errorHandler attribute to the @KafkaListener, but that also didn't do anything.
Updated Property Configuration
I've updated my configuration, but it's still unclear to me whether this is correct. It's not working, so I assume not. None of the custom code is getting called.
spring.kafka.consumer.properties.value.deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2
spring.kafka.consumer.properties.key.deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.kafka.consumer.properties.spring.deserializer.value.function=com.thing.cyclic.service.FailedFooProvider
spring.kafka.consumer.properties.spring.deserializer.key.delegate.class=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.properties.spring.deserializer.value.delegate.class=io.confluent.kafka.serializers.KafkaAvroDeserializer
spring.kafka.consumer.properties.spring.json.trusted.packages=*
spring.kafka.consumer.properties.value.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicNameStrategy
spring.kafka.consumer.properties.specific.avro.reader=true
spring.kafka.consumer.properties.auto.register.schemas=false
spring.kafka.consumer.properties.isolation.level=read_committed
spring.kafka.listener.ack-mode=manual_immediate
BadFoo
public class BadFoo {

    private final FailedDeserializationInfo failedDeserializationInfo;

    public BadFoo(FailedDeserializationInfo failedDeserializationInfo) {
        this.failedDeserializationInfo = failedDeserializationInfo;
    }

    public FailedDeserializationInfo getFailedDeserializationInfo() {
        return this.failedDeserializationInfo;
    }
}
FailedFooProvider
public class FailedFooProvider implements Function<FailedDeserializationInfo, String> {

    @Override
    public String apply(FailedDeserializationInfo info) {
        System.out.println("");
        return "";
    }
}
See the documentation here and here.
Also take a look at the DeadLetterPublishingRecoverer code, which can be used to publish the failed record to another topic. You can model your code after it to obtain the header(s) containing the failed byte[].
https://github.com/spring-projects/spring-kafka/blob/fa5c35e9b15c4cecfc6ea2bbbf9e7745bc5d9f75/spring-kafka/src/main/java/org/springframework/kafka/listener/DeadLetterPublishingRecoverer.java#L169-L178
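For illustration, here is a rough sketch (untested) of pulling the serialized DeserializationException out of the record headers, along the lines of what the recoverer does. The method name is hypothetical; the header constant and exception type are from org.springframework.kafka.support.serializer in spring-kafka 2.2+, and you would also need the usual java.io and Kafka client imports:

private DeserializationException extractValueDeserializationException(ConsumerRecord<?, ?> record) {
    // Header added by ErrorHandlingDeserializer2 when value deserialization fails
    Header header = record.headers().lastHeader(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_EXCEPTION_HEADER);
    if (header == null) {
        return null;
    }
    // The header value is a java-serialized DeserializationException;
    // its getData() returns the original failed byte[]
    try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(header.value()))) {
        return (DeserializationException) ois.readObject();
    }
    catch (IOException | ClassNotFoundException e) {
        throw new IllegalStateException("Could not deserialize the exception header", e);
    }
}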
The recoverer is used in conjunction with a SeekToCurrentErrorHandler.
Configure the error handler as a @Bean and Spring Boot will automatically wire it into the container.
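A minimal sketch of that wiring (untested; the back-off values are illustrative, spring-kafka 2.3+ API assumed):

@Bean
public SeekToCurrentErrorHandler errorHandler(KafkaOperations<Object, Object> template) {
    // Publish the failed record (including the deserialization-exception headers)
    // to the dead-letter topic, then stop retrying once the back-off is exhausted
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    return new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
}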
I am implementing a gRPC server in Quarkus (1.8.3.Final).
My service is written in reactive style (SmallRye Mutiny).
This is my service class:
@Singleton
@Blocking
@Slf4j
public class GrpcService extends MutinyGrpcServicesGrpc.GrpcServicesImplBase {

    @Blocking
    public Uni<MyResponse> executeMyLogic(MyRequest request) {
        System.out.println("grpc thread name " + Thread.currentThread().getName());
        ...
    }
}
Now, the actual logic written inside executeMyLogic is somewhat blocking and was producing blocked-event-loop warnings (and some other errors) from Vert.x.
So, as mentioned in the Quarkus gRPC getting-started guide (https://quarkus.io/guides/grpc-getting-started),
I annotated the method with @Blocking (io.smallrye.common.annotation.Blocking).
Before I added this annotation I got this log on System.out:
grpc thread name vert.x-eventloop-thread-0
which indicates that this logic is being run on a Vert.x event loop, which seems to be causing the issue.
Now, according to my understanding, after adding the @Blocking annotation to executeMyLogic it should run on a worker thread.
But it is still running on the Vert.x event loop.
It seems like this annotation is not being honored by the framework.
Correct me if my understanding is wrong, or else please help me get this working.
As it turns out, this was a bug in the Quarkus framework: earlier versions didn't honor the @Blocking annotation.
It worked after upgrading to 1.10.2.Final.
Here's a link to the PR that fixed it.
Good morning,
I'm new to the Citrus Framework. I'm currently working on a test case that consumes a SOAP web service. I can send the request message from an XML file, and I need to store the response message from the server in another XML file for traceability and auditing.
I have tried some options, but nothing works so far. Can you help me with possible solutions to this requirement?
My test looks like this:
public class DummyIT extends TestNGCitrusTestDesigner {

    @Autowired
    private WebServiceClient DummyClient;

    @Test
    @CitrusTest
    public void dummyTest() {
        soap()
            .client(DummyClient)
            .send()
            .messageType(MessageType.XML)
            .charset("UTF-8")
            .contentType("text/xml")
            .payload(new ClassPathResource("templates/DummyRequest.xml"));

        soap()
            .client(DummyClient)
            .receive()
            .schemaValidation(false);
    }
}
I'm using Citrus Framework version 2.7.2.
Thanks for your help.
You can add a message tracing test listener to the Spring application context. This listener is called with all inbound and/or outbound messages. With a custom implementation you can write the message content to files in an external folder.
There is a default message listener implementation available that is a good starting point. See if this default tracing listener fits your requirements. Otherwise you would have to implement the listener logic on your own.
You can add the default listener to the application context as a bean:
@Bean
public MessageTracingTestListener tracingTestListener() {
    return new MessageTracingTestListener();
}
After that you should see .msgs files in the target/citrus-logs/trace/messages folder containing all exchanged inbound and outbound messages.
Here is the default implementation: https://github.com/citrusframework/citrus/blob/master/modules/citrus-core/src/main/java/com/consol/citrus/report/MessageTracingTestListener.java
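If the default listener's output does not fit your audit requirements, a custom listener is straightforward. A rough sketch (untested), assuming the Citrus 2.7 MessageListener API (onInboundMessage/onOutboundMessage receiving a Message and TestContext); the target directory and class name are illustrative:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import com.consol.citrus.context.TestContext;
import com.consol.citrus.message.Message;
import com.consol.citrus.report.MessageListener;

public class AuditMessageListener implements MessageListener {

    private final Path auditDir = Paths.get("target/audit-messages");

    @Override
    public void onInboundMessage(Message message, TestContext context) {
        write("inbound", message);
    }

    @Override
    public void onOutboundMessage(Message message, TestContext context) {
        write("outbound", message);
    }

    private void write(String direction, Message message) {
        try {
            Files.createDirectories(auditDir);
            Path file = auditDir.resolve(direction + "-" + System.currentTimeMillis() + ".xml");
            Files.write(file, message.getPayload(String.class).getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

Register it as a bean in the Citrus Spring context, just like the tracing listener above.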
I'm trying to run the example from http://www.baeldung.com/spring-remoting-amqp. Even though I set up the connection to a dedicated vhost on my RabbitMQ broker, I can only send the request from the client (I see it in the RabbitMQ UI); I never get the answer from the server.
The server seems to register the service bean (the Impl class), which I can see with getBeanDefinitionNames(), but I definitely do not see those beans on the client side. I use annotations to set up beans, not an XML file.
So the question is: why does my client not see the server beans? I check it more or less in the following way:
@Autowired
private ApplicationContext appContext;

public GetResponse get(String id) {
    Service service = appContext.getBean(Service.class);
    System.out.println(service.ping());
    return new GetResponse();
}
The response I get at the web service level is:
{
    "timestamp": "2018-02-01T10:09:00.809Z",
    "status": 500,
    "error": "Internal Server Error",
    "exception": "org.springframework.remoting.RemoteProxyFailureException",
    "message": "No reply received from 'toString' with arguments '[]' - perhaps a timeout in the template?",
    "path": "/v3/app/r"
}
Service:
public interface Service extends Serializable {
    String ping();
}
Service Impl:
public class ServiceImpl implements Service {

    @Override
    public String ping() {
        System.out.println("ponged");
        return "pong";
    }

    @Override
    public String toString() {
        return "to string";
    }
}
EDITED + BOUNTY
In the link below you can find the extracted modules that I want to connect together. I suppose it is still about 'not seeing' the beans from one module in the second one.
The action can be triggered with GET http://localhost:8081/v3/app/u. The RabbitMQ settings have to be adjusted to your setup.
https://bitbucket.org/herbatnic/springremotingexample/overview
I think you shouldn't set the routing key in your client, in amqpFactoryBean (and the one you set seems invalid):
https://bitbucket.org/herbatnic/springremotingexample/src/b1f08a5398889525a0b1a439b9bb4943f345ffd1/Mod1/src/main/java/simpleremoting/mod1/messaging/Caller.java?at=master&fileviewer=file-view-default
Did you try to run their example?
https://github.com/eugenp/tutorials/tree/master/spring-remoting/remoting-amqp
Just stumbled upon this question 3 years later, trying to run the Baeldung example!
I tried debugging the issue, and as far as I can tell, something internal in the AMQP implementation of Spring Remoting is not using the correct routing key when sending the client message. The payload arrives at the broker but is never put into the queue for processing, and we then time out after 5 s (the default) on the client.
I tried the other answer by Syl, removing the routingKey, but it doesn't seem to allow us to create a binding without one, and even when creating a binding directly on the broker management page (without a routing key) it doesn't route the messages.
I have not managed to make the example work; however, I found a blog post on fatalerrors.org that shows a custom implementation of AmqpProxyFactoryBean with its own handling of the routing key, and that one works.
I've created this gist with the example that is working for me, in case the blog post above goes away.
One other thing to note is that in the Baeldung example they are using a DirectExchange, while here we are using a TopicExchange.
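For reference, a minimal client-side sketch (untested, in the spirit of the Baeldung setup, not the custom factory bean from the blog post). The exchange name, routing key and timeout are illustrative; the essential point is that the template's exchange and routing key must match the binding of the queue the server container listens on:

@Bean
public RabbitTemplate remotingTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    template.setExchange("remoting.exchange");   // must match the server-side binding
    template.setRoutingKey("remoting");          // must match the binding's routing key
    template.setReplyTimeout(5000);
    return template;
}

@Bean
public AmqpProxyFactoryBean serviceProxy(RabbitTemplate remotingTemplate) {
    AmqpProxyFactoryBean proxy = new AmqpProxyFactoryBean();
    proxy.setServiceInterface(Service.class);
    proxy.setAmqpTemplate(remotingTemplate);
    return proxy;
}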
I am currently working on a project that involves consuming messages from a RabbitMQ broker. However, I am still new to Spring Integration, AMQP and RabbitMQ.
I have an issue with malformed messages. When my consumer receives a malformed message, it returns it to the queue, then RabbitMQ delivers it again, which creates an endless cycle.
The Spring Integration documentation mentions some configuration that can prevent this kind of message from being returned to the queue.
However, I could not understand how to implement it.
What I want is to be able to configure some kind of bean that looks like this:
class ExceptionHandler {
    public void handle(Throwable e) {
        Logger.log("Some log ... we don't give a Sh** ... ");
    }
}
I've checked section 3.9, Exception Handling,
and section 3.15.3, Message Listeners and the Asynchronous Case,
but unfortunately I could not understand them.
So if you have example code, or a link to some, please send it; I would be grateful.
Yes, that is one correct solution: throw AmqpRejectAndDontRequeueException when you decide that the message should not be requeued.
There is also defaultRequeueRejected on the SimpleMessageListenerContainer, which is true by default.
You should maybe take a look at the DLX/DLQ solution so you don't lose those malformed messages.
Please share the stack trace which bothers you.
There is code like this in the SimpleMessageListenerContainer:
catch (AmqpRejectAndDontRequeueException rejectEx) {
/*
* These will normally be wrapped by an LEFE if thrown by the
* listener, but we will also honor it if thrown by an
* error handler.
*/
}
After a lot of trial and error I was able to handle the error. However, I then struggled with the exception still being logged; I don't understand why it is implemented this way. I was able to handle the log issue too.
It turns out there is another way to say that you don't want to return the message to the queue: the acknowledge-mode="NONE" attribute. Check out section 10.2, Inbound Channel Adapter. That way you don't even need to throw that ugly exception.
<bean id="handler" class="MessagesErrorHandler"/>

<int-amqp:inbound-channel-adapter
    error-handler="handler"
    id="idActivityAdapter"
    channel="channelName"
    queue-names="activityQueue"/>
import org.springframework.amqp.AmqpRejectAndDontRequeueException;
import org.springframework.util.ErrorHandler;

public class MessagesErrorHandler implements ErrorHandler {

    @Override
    public void handleError(Throwable throwable) {
        System.out.println("YESSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ERROR IS HANDLED !!!!");
        // This is very important so that the message doesn't go back to the queue.
        throw new AmqpRejectAndDontRequeueException(throwable);
    }
}
The AmqpRejectAndDontRequeueException is a signal to the container to reject and not requeue the message; by default, the container requeues the message for any other exception.
Alternatively, you can manually wire up a SimpleMessageListenerContainer bean; set defaultRequeueRejected to false and add it to the adapter using the container attribute. Then, all exceptions will cause messages to be rejected and not requeued.
Also, instead of an error-handler, you can use an error-channel and throw the AmqpRejectAndDontRequeueException from the error flow.
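For illustration, a minimal sketch (untested) of the container-based alternative described above; the bean name is hypothetical and the queue name is taken from the earlier example. The container is then referenced from the adapter via its container attribute (in which case queue-names moves to the container):

@Bean
public SimpleMessageListenerContainer activityContainer(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("activityQueue");
    // Reject (and do not requeue) the message for any listener exception
    container.setDefaultRequeueRejected(false);
    return container;
}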