How to receive what was sent by convertAndSend? - java

I'm reading the Spring Framework reference, the chapter about JMS integration. There are some examples for sending text messages and asynchronously receiving them (via listeners). And there is also an example for the JmsTemplate method convertAndSend, which converts a given object to a message. The reference says:
By using the converter, you and your application code can focus on the business object that is being sent or received via JMS and not be concerned with the details of how it is represented as a JMS message.
But there is no example of receiving such messages. They mention the method receiveAndConvert but, unfortunately, it receives synchronously.
So how am I to receive it asynchronously? Must I be aware that when I convertAndSend a Map, the resulting message will be a MapMessage, and just check in my listener for this type of message and handle it? But they promised I wouldn't have to be concerned with the details of how it is represented as a JMS message.
So is there a better way?

I know it's been a while since this was asked, but I had the same problem, solved it and wanted to give an explicit code example here.
Here's my MessageListener. This implements the onMessage(Message) method to intercept messages asynchronously.
package com.package.amqp;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.support.converter.JsonMessageConverter;

import com.package.model.User;

public class TestListener implements MessageListener {

    public void onMessage(Message message) {
        JsonMessageConverter jmc = new JsonMessageConverter();
        User u = (User) jmc.fromMessage(message);
        System.out.println("received: " + u.getFirstName());
    }
}
The messages are then converted using the standard JsonMessageConverter in my case, as this is the messageConverter I plugged into my rabbitTemplate bean.
<bean id="rabbitConnectionFactory" class="org.springframework.amqp.rabbit.connection.SingleConnectionFactory">
<constructor-arg value="10.10.1.2"/>
<property name="username" value="guest"/>
<property name="password" value="guest"/>
</bean>
<bean class="org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer">
<property name="connectionFactory" ref="rabbitConnectionFactory"/>
<property name="queueName" value="queue.helloWorld"/>
<property name="messageListener" ref="someListener"/>
</bean>
<bean id="someListener" class="com.package.amqp.TestListener"></bean>
<bean id="rabbitTemplate" class="org.springframework.amqp.rabbit.core.RabbitTemplate">
<property name="connectionFactory" ref="rabbitConnectionFactory"/>
<property name="messageConverter">
<bean class="org.springframework.amqp.support.converter.JsonMessageConverter"/>
</property>
</bean>
Hope this helps someone!
Owen

While JmsTemplate provides basic synchronous receive methods, asynchronous reception is a whole lot more complicated, and is beyond the scope of JmsTemplate.
Asynchronous reception of JMS messages is done in Spring using Message Listener Containers, which asynchronously take messages from the JMS destination and pass them to your application. You can plug a MessageConverter in to your message listener container via a MessageListenerAdapter (plug the converter into the adapter, plug your application's listener into the adapter, then plug the adapter into the listener container).
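For illustration, a minimal sketch of that wiring in plain Java (the handler class, method name, and queue name are invented for the example; this uses the javax.jms-era Spring JMS API):

import javax.jms.ConnectionFactory;
import java.util.Map;

import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.jms.listener.adapter.MessageListenerAdapter;
import org.springframework.jms.support.converter.SimpleMessageConverter;

public class MapListenerWiring {

    // Plain POJO: no JMS types in sight. SimpleMessageConverter turns the
    // incoming MapMessage back into a Map before this method is called.
    public static class MapHandler {
        public void handleMap(Map<String, Object> payload) {
            System.out.println("received: " + payload);
        }
    }

    public static DefaultMessageListenerContainer build(ConnectionFactory cf) {
        // Plug the converter and the application's listener into the adapter
        MessageListenerAdapter adapter = new MessageListenerAdapter(new MapHandler());
        adapter.setDefaultListenerMethod("handleMap");
        adapter.setMessageConverter(new SimpleMessageConverter());

        // Plug the adapter into the listener container
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("some.queue");
        container.setMessageListener(adapter);
        return container;
    }
}

This way the code that handles the Map never touches MapMessage, which is exactly what the reference promises.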

Related

Why does KafkaTemplate not close transactional producers?

I have written a simple Kafka app with spring-integration-kafka 3.2.1.RELEASE and kafka-clients 2.5 to learn Kafka transactions.
It receives messages from one topic and sends them to another topic. The beans.xml file is as follows:
<int-kafka:message-driven-channel-adapter
        listener-container="container"
        auto-startup="true"
        send-timeout="30000"
        channel="channelA"/>

<bean id="container" class="org.springframework.kafka.listener.KafkaMessageListenerContainer" parent="kafkaMessageListenerContainerAbstract">
    <constructor-arg>
        <bean class="org.springframework.kafka.listener.ContainerProperties">
            <constructor-arg name="topics" value="test"/>
            <property name="transactionManager" ref="KafkaTransactionManager"/>
        </bean>
    </constructor-arg>
</bean>
.
.
.
<int-kafka:outbound-channel-adapter kafka-template="kafkaTemplate"
        auto-startup="true"
        channel="channelB"
        topic="output"/>

<bean id="dbsenderTemplate" class="org.springframework.kafka.core.KafkaTemplate">
    <constructor-arg>
        <bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
            <constructor-arg>
                <map>
                    <entry key="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>
                    <entry key="key.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>
                    <entry key="bootstrap.servers" value="localhost:9092"/>
                </map>
            </constructor-arg>
            <property name="transactionIdPrefix" value="mytest-"/>
            <property name="producerPerConsumerPartition" value="false"/>
        </bean>
    </constructor-arg>
</bean>
The code that starts the app is as follows:
GenericXmlApplicationContext tempContext = new GenericXmlApplicationContext("beans.xml");
tempContext.close();
// POINT A
try {
    Thread.sleep(60000);
} catch (InterruptedException e) {
    e.printStackTrace();
}
GenericXmlApplicationContext context = new GenericXmlApplicationContext();
context.load("beans.xml");
context.refresh();
// POINT B
At POINT A I closed the context just to check which beans get closed, and put in a 60-second sleep so I had time to check the JMX console. I noticed that even though the context is closed, the producer is still registered in JMX. I then traced the code and noticed that on context close, the KafkaTemplate calls the following code:
public void flush() {
    Producer<K, V> producer = getTheProducer();
    try {
        producer.flush();
    }
    finally {
        closeProducer(producer, inTransaction());
    }
}

protected void closeProducer(Producer<K, V> producer, boolean inTx) {
    if (!inTx) {
        producer.close(this.closeTimeout);
    }
}
This means it creates a producer, but because it is transactional it will not be closed.
As a result, starting the context again at POINT B and sending a message causes javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=producer-mytest-0.
Why does the KafkaTemplate not close these producers?
Another question: what happens to these producers when a new KafkaTemplate is created at POINT B?
The last question: if I change the producerPerConsumerPartition property to true, the app still registers the producer MBean as producer-mytest-0 and does not follow the groupid.topic.partition naming pattern. Is that correct behaviour?
UPDATES:
I now understand what happens when KafkaTemplate's executeInTransaction is called. In the finally block it calls close on the producer, and since this is a logical close, the following code runs on the CloseSafeProducer and puts it in the cache:
if (!this.cache.contains(this)
        && !this.cache.offer(this)) {
    this.delegate.close(closeTimeout);
}
This means that when the context is closed, the destroy method of DefaultKafkaProducerFactory clears the cache and physically closes the producer. In my situation, however, the application context is created and then closed before any message is consumed or produced; only the flush method of KafkaTemplate is called internally, which forces it to create a transactional producer but does not put that producer in the cache. Since I never started a producer myself and KafkaTemplate does it on flush, wouldn't it be better if DefaultKafkaProducerFactory put such producers in the cache before using them?
The producer cannot be closed if this template operation is participating in a transaction that was started outside of the template.
Even when closed, it is only "logically" closed - cached for reuse by another operation.
Is it a correct behaviour?
Yes, for producer-initiated transactions; the alternative name is used when a consumer initiates the transaction.
The InstanceAlreadyExistsException problem is simply because you are creating two application contexts with identical configuration. Why are you doing that?

Spring RabbitMQ SimpleRabbitListenerContainerFactory usage

From the docs, I want to consume from queues while dynamically changing the number of consumers, without restarting the application.
I see that the latest version of Spring RabbitMQ supports this, but there is no clue/example/explanation of how to change it. I couldn't find proper source code for it, or how to pass params like maxConcurrentConsumers.
I am using XML-based configuration of Spring RabbitMQ along with Spring Integration:
<bean id="rabbitListenerContainerFactory"
class="org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory">
<property name="connectionFactory" ref="rabbitConnectionFactory"/>
<property name="concurrentConsumers" value="3"/>
<property name="maxConcurrentConsumers" value="10"/>
<property name="acknowledgeMode" value="AUTO" />
</bean>
<int-amqp:inbound-channel-adapter channel="lowInboundChannel" queue-names="lowLoadQueue" advice-chain="retryInterceptor" acknowledge-mode="AUTO" listener-container="rabbitListenerContainerFactory" />
<int-amqp:inbound-channel-adapter channel="highInboundChannel" queue-names="highLoadQueue" advice-chain="retryInterceptor" acknowledge-mode="AUTO" listener-container="rabbitListenerContainerFactory" />
Can anyone guide me how to dynamically configure the consumers?
First of all, you shouldn't share the same rabbitListenerContainerFactory between different <int-amqp:inbound-channel-adapter>s, because they do this:
protected void onInit() {
    this.messageListenerContainer.setMessageListener(new ChannelAwareMessageListener() {
So, only the last adapter wins.
From other side there is even no reason to have several adapters. You can specify queue-names="highLoadQueue,lowLoadQueue" for a single adapter.
Although, in the case of listener-container, you must specify the queues on the SimpleRabbitListenerContainerFactory.
If you want to change some rabbitListenerContainerFactory options at runtime, you can just inject it into some service and invoke its setters.
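A minimal sketch of that idea, assuming the listener container itself (rather than the factory) is exposed as a bean; the class name and target values here are illustrative:

import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class ConsumerScaler {

    private final SimpleMessageListenerContainer container;

    @Autowired
    public ConsumerScaler(SimpleMessageListenerContainer container) {
        this.container = container;
    }

    // Both setters can be invoked on a running container; it adjusts
    // its number of consumers on the fly.
    public void scale(int min, int max) {
        container.setConcurrentConsumers(min);
        container.setMaxConcurrentConsumers(max);
    }
}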
Let me know if I have missed anything.

DefaultMessageListenerContainer, knowledge about the queue to listen on

Is there any possibility to know inside the onMessage method, which queue the MessageListener is listening to?
My Spring config (part of it):
<bean id="abstractMessageListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer" abstract="true">
<property name="connectionFactory" ref="connectionFactory" />
<property name="maxConcurrentConsumers" value="5"/>
<property name="receiveTimeout" value="10000" />
</bean>
<bean class="org.springframework.jms.listener.DefaultMessageListenerContainer" parent="abstractMessageListenerContainer">
<property name="destinationName" value="MY.QUEUE" />
<property name="messageListener" ref="myMessageListener" />
</bean>
<bean id="myMessageListener" class="my.package.structure.ListenerClass"></bean>
My Listener Class:
public class ListenerClass implements MessageListener {

    public void onMessage(Message msg) {
        // How do I find out which queue the message was written to?
    }
}
Is there any out-of-the-box solution? Or any custom solution to get the queue/destination name?
I need the queue in subsequent batch processing...
Easy, in trivial cases at least:
msg.getJMSDestination() will give you the destination as a javax.jms.Destination object. Typically .toString() returns something like: queue://MYQUEUENAME
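A hedged sketch of that lookup inside the listener (the cast and exception handling are illustrative; what toString() returns is broker-specific, so asking the Queue for its name is more portable than parsing the string):

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;

public class ListenerClass implements MessageListener {

    public void onMessage(Message msg) {
        try {
            Destination dest = msg.getJMSDestination();
            if (dest instanceof Queue) {
                // Portable way to get the name, rather than parsing toString()
                String queueName = ((Queue) dest).getQueueName();
                // hand queueName to the subsequent batch processing
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}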
However, in some JMS implementations there might be multihop queues, such as a static pub/sub setup in WebSphere MQ, where you might write your message to one queue and it will bounce around a route to end up in a completely different queue. You might also have an ESB with logic in the middle that routes the message. In such cases, you will need to think twice before relying too much on the JMSDestination attribute. Otherwise, go ahead.

Spring JMS - Draining Topic on a Timer

I'm fairly new to Spring JMS, and I've found lots of documentation and examples at the Spring site and elsewhere, but my use case doesn't seem to be described anywhere, or at least in a way I can understand. I hope you might be able to help.
I would like to create a publisher of a topic and several durable subscribers to that topic. I'm working on the first subscriber now, and it is intended to run hourly (on a timer) and drain the topic of messages and process them all at once (i.e. to send an email summarizing all messages).
I do not know how to configure this setup in Spring, although I feel like this should be easy. Advice would be tremendously helpful.
My plan, such as it is, was to have the timer invoke the "processBatch" method, which would call receiveAndConvert() in a loop until it timed out, building up its list of messages.
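In code, that plan looks roughly like this (a sketch; it assumes the JmsTemplate configured below with its 2000 ms receiveTimeout, so receiveAndConvert() returns null once the timeout expires):

import java.util.ArrayList;
import java.util.List;

import org.springframework.jms.core.JmsTemplate;

public class Emailer {

    private JmsTemplate jmsTemplate;

    public void setJmsTemplate(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void processBatch() {
        List<Object> batch = new ArrayList<>();
        Object msg;
        // Poll until the receive times out (returns null), collecting the backlog
        while ((msg = jmsTemplate.receiveAndConvert()) != null) {
            batch.add(msg);
        }
        // summarize batch, e.g. send one email
    }
}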
This doesn't seem to work, though, because the consumer isn't really subscribed to the topic. Certainly not before it's run, and potentially not afterward.
How can I configure this using Spring and/or direct ActiveMQ?
I'm not sure whether the XML I ended up with is a useful starting point for this discussion, but I'll provide it in case it is helpful:
<beans>

    <!-- some unrelated beans -->

    <!-- my ActiveMQ connection factory -->
    <bean id="mqConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
        <property name="connectionFactory">
            <bean class="org.apache.activemq.ActiveMQConnectionFactory">
                <property name="brokerURL" value="vm://broker"/>
            </bean>
        </property>
    </bean>

    <!-- my topic -->
    <amq:topic id="completionsTopic" physicalName="completions.topic"/>

    <!-- my subscriber -->
    <bean id="emailer" class="com.j128.Emailer">
        <property name="jmsTemplate">
            <bean class="org.springframework.jms.core.JmsTemplate">
                <property name="connectionFactory" ref="mqConnectionFactory"/>
                <property name="defaultDestination" ref="completionsTopic"/>
                <property name="receiveTimeout" value="2000"/>
            </bean>
        </property>
    </bean>

    <!-- my scheduler and periodic call to the topic drainer -->
    <task:scheduler id="taskScheduler" pool-size="10"/>
    <task:scheduled-tasks>
        <!-- send emails hourly -->
        <task:scheduled ref="emailer" method="processBatch" cron="0 * * * *"/>
    </task:scheduled-tasks>

</beans>
But I'm certain I fundamentally have the wrong strategy and that there's a simple way to configure this.
Thank you for your assistance.
Have a look at how-does-a-queue-compare-to-a-topic: "Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message."
And for a durable topic how-do-durable-queues-and-topics-work: "Durable topics however are different as they must logically persist an instance of each suitable message for every durable consumer - since each durable consumer gets their own copy of the message"
So with a non-durable topic, your plan won't work: the hourly job won't get any messages, because it isn't running when the messages are published. If you set up a durable topic then it might work, but it depends on what you expect to happen when you say your subscriber will "drain the topic of messages". All it can do is read the messages published to it since it last ran; it can't affect the messages going to other subscribers.
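For reference, a hedged sketch of how a durable subscription can be set up with Spring's DefaultMessageListenerContainer (the client ID and subscription name are invented for the example):

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class DurableSubscriberConfig {

    public static DefaultMessageListenerContainer build(ConnectionFactory cf, MessageListener listener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setPubSubDomain(true);             // topic, not queue
        container.setDestinationName("completions.topic");
        container.setSubscriptionDurable(true);      // broker retains messages while we're offline
        container.setClientId("emailer");            // required for a durable subscription
        container.setDurableSubscriptionName("emailer-sub");
        container.setMessageListener(listener);
        return container;
    }
}

Note that this delivers messages as they arrive rather than on a timer; for the hourly-summary use case, the listener itself would accumulate messages and the scheduled task would only send the email.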
For a discussion around durable subscribers (I haven't used them on ActiveMQ), see this

Multiple instances of a bean

I'm implementing a listener class which listens for some events and then processes them. If processing of an event goes well, the event is never notified again; but in case of any exception, the event will be notified to the MyBeanImplementation class again after some time, and it can retry processing it.
The following code works fine, but since event processing may take some time:
1. I want to have multiple listeners.
2. I want to limit the number of calls to the service, perhaps using a thread pool.
How can I have multiple listeners which will each process an event differently? I'm new to Spring and don't have much idea whether this is even possible.
Here is an example:
// Spring configuration:
<bean id="MyBean" class="MyBeanImplementation"/>

// Sample class:
public class MyBeanImplementation implements EventListener {

    @Override
    public void processEvent(Event event) throws EventProcessFailureException {
        try {
            // Validate event
            validateEvent(event);
            // Call another service to store part of the information from this event.
            // This service takes some time to return success.
            boolean success = makeCallToServiceAndStoreInfo(event);
            if (!success) {
                throw new EventProcessFailureException("Error storing event information!");
            }
        } catch (Exception e) {
            throw new EventProcessFailureException(e);
        }
    }
}
Thanks
Basically, you can use the Strategy pattern to introduce different listeners that each react in a different way to a shared event:
<bean id="strategy1Listener" />
<bean id="strategy2Listener" />
<bean id="strategy3Listener" />
Then, you can introduce a composite listener that iterates through the other listeners, passing the event along and allowing each of them to process it:
<bean id="compositeStrategyListener">
<property name="listeners">
<list>
<ref bean="strategy1Listener" />
<ref bean="strategy2Listener" />
<ref bean="strategy3Listener" />
</list>
</property>
</bean>
On the other side of the story, you have an object that generates/publishes the events:
<bean id="eventGenerator">
<property name="eventListener" ref="compositeStrategyListener" />
</bean>
So now eventGenerator publishes the generated event to the compositeStrategyListener, which iterates over the listeners it has and allows each of them to process the event in its own way.
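In Java, the composite could look roughly like this (a sketch against the custom EventListener/Event types from the question; what should happen when one delegate fails is left open):

import java.util.ArrayList;
import java.util.List;

public class CompositeStrategyListener implements EventListener {

    private List<EventListener> listeners = new ArrayList<>();

    // Populated by the <property name="listeners"> list above
    public void setListeners(List<EventListener> listeners) {
        this.listeners = listeners;
    }

    @Override
    public void processEvent(Event event) throws EventProcessFailureException {
        // Each strategy gets the same event and handles it in its own way
        for (EventListener listener : listeners) {
            listener.processEvent(event);
        }
    }
}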
You can also take advantage of Spring Task Execution to configure how you need to run the task of event processing.
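For the thread-pool part of the question, a minimal sketch (pool sizes are arbitrary) that bounds concurrent event processing with Spring's ThreadPoolTaskExecutor:

import org.springframework.core.task.TaskExecutor;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class EventDispatchConfig {

    public static TaskExecutor eventExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);     // steady-state worker threads
        executor.setMaxPoolSize(8);      // upper bound on concurrent service calls
        executor.setQueueCapacity(100);  // events waiting for a free thread
        executor.afterPropertiesSet();
        return executor;
    }
}

The composite listener could then hand each event to the executor with executor.execute(...), wrapping listener.processEvent(event) in its own try/catch since the task runs asynchronously.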
Why not use Google Guava's EventBus library together with Spring? It handles everything needed for listening to events.
