I have set up a standalone HornetQ instance which is running locally. For testing purposes I have created a consumer using the HornetQ core API which receives a message every 500 milliseconds.
I am seeing strange behaviour on the consumer side: my client connects and reads all the messages from the queue, and if I force-shutdown it (without properly closing the session/connection), then the next time I start this consumer it reads the old messages from the queue again. Here is my consumer example:
// HornetQ consumer code
public void readMessage() {
    ClientSession session = null;
    try {
        if (sf != null) {
            session = sf.createSession(true, true);
            ClientConsumer messageConsumer = session.createConsumer(JMS_QUEUE_NAME);
            session.start();
            while (true) {
                ClientMessage messageReceived = messageConsumer.receive(1000);
                if (messageReceived != null && messageReceived.getStringProperty(MESSAGE_PROPERTY_NAME) != null) {
                    System.out.println("Received JMS TextMessage:" + messageReceived.getStringProperty(MESSAGE_PROPERTY_NAME));
                    messageReceived.acknowledge();
                }
                Thread.sleep(500);
            }
        }
    } catch (Exception e) {
        LOGGER.error("Error while receiving message in consumer.", e);
    } finally {
        try {
            if (session != null) {
                session.close();
            }
        } catch (HornetQException e) {
            LOGGER.error("Error while closing consumer session.", e);
        }
    }
}
Can someone tell me why it behaves like this, and what client/server configuration I should use so that once a message has been read by the consumer it is removed from the queue?
You are not committing the session after the acknowledgements are complete, and you are not creating the session with auto-commit for acknowledgements enabled. Therefore, you should do one of the following:
Either explicitly call session.commit() after one or more invocations of acknowledge()
Or enable implicit auto-commit for acknowledgements by creating the session using sf.createSession(true,true) or sf.createSession(false,true) (the boolean which controls auto-commit for acknowledgements is the second one).
Keep in mind that when you enable auto-commit for acknowledgements there is an internal buffer which needs to reach a particular size before the acknowledgements are flushed to the broker. Batching acknowledgements like this can drastically improve performance for certain high-volume use-cases. By default you need to acknowledge 1,048,576 bytes worth of messages in order to flush the buffer and send the acknowledgements to the broker. You can change the size of this buffer by invoking setAckBatchSize on your ServerLocator instance or by using a different createSession method (e.g. sf.createSession(true, true, myAckBatchSize)).
If the acknowledgement buffer isn't flushed and your client crashes then the corresponding messages will still be in the queue when the client comes back. If the buffer hasn't reached its threshold it will still be flushed anyway when the consumer is closed gracefully.
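For illustration, here is a minimal sketch of the explicit-commit approach, reusing the sf and JMS_QUEUE_NAME identifiers from the question (timeouts and error handling omitted for brevity):
// Sketch only: create the session without auto-commit for acknowledgements
// and flush each acknowledgement to the broker explicitly.
ClientSession session = sf.createSession(false, false);
ClientConsumer consumer = session.createConsumer(JMS_QUEUE_NAME);
session.start();
ClientMessage message = consumer.receive(1000);
if (message != null) {
    message.acknowledge();
    session.commit(); // the acknowledgement is now recorded on the broker
}
session.close();
Alternatively, if you keep auto-commit for acknowledgements, passing a smaller batch size via sf.createSession(true, true, myAckBatchSize) (or setAckBatchSize on the ServerLocator) makes acknowledgements reach the broker sooner, at the cost of more network round trips.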
I need exactly-once semantics, so I use the Kafka Transactional API, and I'm trying to understand how to work with the Producer efficiently. As I read in some articles, it is more efficient to use only one Producer per application instance (within one TCP connection) because of its buffering mechanism. On the other hand, when I call producer.commitTransaction() for a single message, it flushes the message immediately without using the message buffer.
Do I need to implement buffering manually and call producer.commitTransaction() for the buffered messages? Or is there another way to use buffering with transactions?
I know that in Spring producers are cached when transactions are enabled, but I don't use Spring and I'm not sure how Spring's producer cache actually works. Maybe I should implement something similar and create a new Producer if another one is busy?
Example of my method:
public void produce(@NotNull T payload) {
    var key = UUID.randomUUID();
    var value = JsonUtils.toJson(payload);
    try {
        ProducerRecord<UUID, String> record = new ProducerRecord<>(topic, key, value);
        producer.beginTransaction();
        producer.send(record);
        producer.commitTransaction();
    } catch (ProducerFencedException e) {
        log.error("Producer with the same transactional id already exists", e);
        producer = KafkaProducerFactory.getInstance().recreateProducer();
    } catch (KafkaException e) {
        log.error("Failed to produce to kafka", e);
        producer.abortTransaction();
    }
    log.info("Message with key {} produced to topic {}", key, topic);
}
Let's begin with the non-transactional Kafka producer. There is a set of configurable properties that control its buffering mechanism:
batch.size
linger.ms
buffer.memory
Basically, Kafka batches records internally according to this configuration.
If linger.ms=0, the producer always sends immediately even if the batch is not full; a non-zero value makes it wait up to that amount of time for the batch to fill.
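As a rough illustration, these are plain producer properties; the values below are arbitrary examples, not recommendations:
// Example-only settings; tune batch.size, linger.ms and buffer.memory for your workload.
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);        // max bytes per partition batch
props.put(ProducerConfig.LINGER_MS_CONFIG, 5);             // wait up to 5 ms for a batch to fill
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L); // total memory for buffered records
Producer<String, String> producer = new KafkaProducer<>(props);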
When it comes to the transactional producer, there are some differences:
commitTransaction() sends the messages in the transaction immediately; it does not wait for the batch to fill up.
This is why a single message is sent immediately in the example above.
If there are multiple producer.send() calls inside the transaction boundary, they will all be part of a single transaction. That grouping does not exist for a non-transactional producer, where delivery depends only on the batching configuration above.
When commitTransaction() is called, it essentially wakes up the sender thread to flush and send the messages.
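So rather than building a buffer yourself, one option is to widen the transaction boundary so that several send() calls share a single commitTransaction(). A hypothetical variant of the method from the question (the produceAll name, the List parameter and the logging are illustrative only):
public void produceAll(@NotNull List<T> payloads) {
    try {
        producer.beginTransaction();
        for (T payload : payloads) {
            // send() is asynchronous; the producer batches these records internally
            producer.send(new ProducerRecord<>(topic, UUID.randomUUID(), JsonUtils.toJson(payload)));
        }
        // flushes any remaining records and commits them all atomically
        producer.commitTransaction();
    } catch (ProducerFencedException e) {
        log.error("Producer with the same transactional id already exists", e);
        producer = KafkaProducerFactory.getInstance().recreateProducer();
    } catch (KafkaException e) {
        log.error("Failed to produce to kafka, aborting transaction", e);
        producer.abortTransaction();
    }
}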
I have a requirement to consume messages from an ActiveMQ topic and persist them in Mongo. I am wondering if there is a way/configuration for consuming messages in batches from the topic instead of reading messages one by one and making a DB call for every message.
I imagine the end solution will do something like:
Consume messages in batches of 100
Use a Mongo bulk insert to save each batch into the DB
Send an ACK to the broker for successfully inserted messages and a NAK for the failed messages.
The JMS API only allows you to receive one message at a time, whether that's via an asynchronous javax.jms.MessageListener or a synchronous call to javax.jms.MessageConsumer#receive() in JMS 1.1 or javax.jms.JMSConsumer#receive() in JMS 2. However, you can batch the receipt of multiple messages using a transacted session. Here's what the javax.jms.Session JavaDoc says about transacted sessions:
A session may be specified as transacted. Each transacted session supports a single series of transactions. Each transaction groups a set of message sends and a set of message receives into an atomic unit of work. In effect, transactions organize a session's input message stream and output message stream into series of atomic units. When a transaction commits, its atomic unit of input is acknowledged and its associated atomic unit of output is sent. If a transaction rollback is done, the transaction's sent messages are destroyed and the session's input is automatically recovered.
So you can receive 100 messages individually using a transacted session, insert that data into Mongo, and commit the transacted session, or if there's a failure you can roll back the transacted session (which essentially acts as a negative acknowledgement). For example:
final int TX_SIZE = 100;
ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = cf.createConnection();
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
Topic topic = session.createTopic("myTopic");
MessageConsumer consumer = session.createConsumer(topic);
connection.start();
while (true) {
    List<Message> messages = new ArrayList<>();
    for (int i = 0; i < TX_SIZE; i++) {
        Message message = consumer.receive(1000);
        if (message != null) {
            messages.add(message);
        } else {
            break; // no more messages available for this batch
        }
    }
    if (messages.size() > 0) {
        try {
            // bulk insert data from the messages List into Mongo
            session.commit();
        } catch (Exception e) {
            e.printStackTrace();
            session.rollback();
        }
    } else {
        break; // no more messages in the subscription
    }
}
It's worth noting that if you are only using JMS transacted sessions and not full XA transactions there's going to be at least some risk of duplicates in Mongo (e.g. if your application crashes after successfully inserting data into Mongo but before committing the transacted session). XA transactions would mitigate this risk for you at the cost of a fair amount of additional complexity depending on your environment.
Lastly, if you run into performance limitations with ActiveMQ "Classic" consider using ActiveMQ Artemis, the next-generation message broker from ActiveMQ.
@Nabeel Ahmad you may be interested in checking out Virtual Topics in ActiveMQ. They provide the ability to use topics on the producer side and queues on the consumer side. They are super helpful when you want to scale consumption, since queues give you more features and observability than topics on the consumer side.
Add this config to activemq.xml
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<virtualTopic name="VT.>" prefix="VQ.*." selectorAware="false"/>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
Then have producers send to: topic://VT.DATA
Then have consumers receive from: queue://VQ.CLIENT1.VT.DATA
As @Justin Bertram mentioned, batching reads can be done using a transacted session and committing every 100 or so messages.
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageConsumer messageConsumer = session.createConsumer(session.createQueue("VQ.CLIENT1.VT.DATA"));
Message message = null;
long count = 0L;
do {
    message = messageConsumer.receive(2000L);
    if (message != null) {
        // check the message and publisher.send() to .DLQ if it is bad
        // if the message is good, send to Mongo
        count++;
        if (count % 100 == 0) {
            // commit every 100 messages on the JMS side
            session.commit();
        }
    }
} while (message != null);
session.commit(); // commit any trailing partial batch
I am receiving messages from an ActiveMQ queue.
Is there a way to receive a number of messages at one time, or does that have to be done with a loop?
Furthermore, I want to take, say, 30 messages, run a procedure, and only if that procedure succeeds call message.acknowledge() for all of them.
I mean I don't want to erase those 30 from the queue if the procedure fails.
Thanks.
You'll have to do it in a loop. Usually it's best to use message-driven beans for consuming messages, but that's not suitable for this case, because they take messages one by one and you cannot specify an exact number. Instead, use a MessageConsumer and manual transactions:
@Resource
UserTransaction utx;
@Resource(mappedName = "jms/yourConnectionFactory")
ConnectionFactory cf;
@Resource(mappedName = "jms/yourQueue")
Queue queue;
..
Connection conn = null;
Session s = null;
MessageConsumer mc = null;
try {
    utx.begin();
    conn = cf.createConnection();
    s = conn.createSession(true, Session.SESSION_TRANSACTED); // TRANSACTED SESSION!
    mc = s.createConsumer(queue);
    conn.start(); // START CONNECTION'S DELIVERY OF INCOMING MESSAGES
    for (int i = 0; i < 30; i++) {
        Message msg = mc.receive();
        // BUSINESS LOGIC
    }
    utx.commit();
} catch (Exception ex) {
    ..
} finally { // CLOSE CONNECTION, SESSION AND MESSAGE CONSUMER
}
I don't have any experience with ActiveMQ, but I think that for queue listeners the basic logic should be the same regardless of the queue implementation.
For your first question, I don't know of any way to retrieve multiple messages from a queue in one call. I think the best way would be to fetch them one by one inside a loop.
For your second question, a message will not be discarded from the queue until the underlying transaction that read the message commits. So you could read a whole bunch of messages in a single transaction and roll it back in case of an error; that shouldn't erase any existing messages from the queue.
May I ask why you need 30 messages to run a procedure? Usually when we use a queue, each message should be able to be processed independently.
I am trying to handle a flow-control situation on the producer end.
I have a queue on a Qpid broker with a max queue size set. I also have flow_stop_count and flow_resume_count set on the queue.
Now the producer keeps continuously producing messages until flow_stop_count is reached. Upon breach of this count, an exception is thrown which is handled by the exception listener.
Some time later the consumer on the queue will catch up and flow_resume_count will be reached. The question is: how does the producer find out about this event?
Here's a sample of the producer code:
Connection connection = connectionFactory.createConnection();
connection.setExceptionListener(new MyExceptionListener());
connection.start();
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
Queue queue = (Queue) context.lookup("Test");
MessageProducer producer = session.createProducer(queue);

while (notStopped) {
    while (suspend) { // --------- how to resume this flag???
        Thread.sleep(1000);
    }
    TextMessage message = session.createTextMessage();
    message.setText("TestMessage");
    producer.send(message);
}
session.close();
connection.close();
and for the exception listener
private class MyExceptionListener implements ExceptionListener {
    public void onException(JMSException e) {
        System.out.println("got exception:" + e.getMessage());
        suspend = true;
    }
}
Now the exception listener is a generic listener for exceptions, so it does not seem like a good idea to suspend the producer flow through that.
What I need is perhaps some method at the producer level, something like producer.isFlowStopped(), which I could use to check before sending a message. Does such functionality exist in the Qpid API?
There is some documentation on the Qpid website which suggests this can be done, but I couldn't find any examples of it being done anywhere.
Is there some standard way of handling this kind of scenario?
From what I have read in the Apache Qpid documentation, it seems that flow_resume_count and flow_stop_count will cause producers to be blocked.
Therefore the only option, software-wise, would be to poll at regular intervals until the messages start flowing again.
Extract from here.
If a producer sends to a queue which is overfull, the broker will respond by instructing the client not to send any more messages. The impact of this is that any future attempts to send will block until the broker rescinds the flow control order.
While blocking the client will periodically log the fact that it is blocked waiting on flow control.
WARN AMQSession - Broker enforced flow control has been enforced
WARN AMQSession - Message send delayed by 5s due to broker enforced flow control
WARN AMQSession - Message send delayed by 10s due to broker enforced flow control
After a set period the send will timeout and throw a JMSException to the calling code.
ERROR AMQSession - Message send failed due to timeout waiting on broker enforced flow control.
This documentation implies that the software managing the producer has to manage the situation itself. So basically, when you receive an exception that the queue is over-full, you need to back off and most likely poll and reattempt to send your messages.
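A rough sketch of that back-off-and-retry approach in plain JMS might look like this (the attempt limit and delays are arbitrary examples, not Qpid recommendations):
// Hypothetical helper: retry a send a few times with a growing delay when
// broker-enforced flow control causes send() to fail with a JMSException.
private void sendWithBackoff(Session session, MessageProducer producer, String text)
        throws JMSException, InterruptedException {
    int attempts = 0;
    while (true) {
        try {
            TextMessage message = session.createTextMessage(text);
            producer.send(message);
            return; // sent successfully
        } catch (JMSException e) {
            attempts++;
            if (attempts >= 5) {
                throw e; // give up and let the caller decide
            }
            Thread.sleep(1000L * attempts); // simple linear back-off before retrying
        }
    }
}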
You can try setting the capacity (the size in bytes at which the queue is considered full) and flowResumeCapacity (the queue size at which producers are un-flowed) properties for a queue.
send() will then block if the size exceeds the capacity value.
You can have a look at this test case file in the repo to get an idea.
Producer flow control is not yet implemented on the JMS client.
See https://issues.apache.org/jira/browse/QPID-3388
I have a client that receives messages from a queue. I currently have a MessageListener that implements onMessage().
Once a message is received, it is processed further and then saved to a database in the onMessage() method; the client then acknowledges receipt of the message.
As long as the database is up there is no problem. But if the DB is down, the client will not acknowledge.
To cater for this, I want the client to send scheduled requests to the queue, at set intervals, for any unacknowledged messages.
As it is, the only way I have of doing this is to restart the client, which is not ideal. Is there a way to trigger the queue to resend an unacknowledged message without a restart?
What I have in onMessage():
// code to connect to the queue
try {
    if (DB is available) {
        // process message
        // save required details to DB
        msg.acknowledge();
    } else {
        // schedule to request the same message later from the queue
    }
} catch (Exception e) {
    // currently ignored
}
I think the standard behavior already does what you want: if the message broker uses the same database and the database is not available, it will not accept the messages, and thus the client will spool them until the message broker is ready again.
If they do not share the same database and the message broker is up, it will spool the message and retry if onMessage throws an exception.
The message broker will retry delivery according to its configurable policy.
After some more research, I have stumbled upon session.recover(), which I can use to trigger redelivery. I have also seen there is a RedeliveryPolicy class which I can use to set message resend options. Now my code looks like:
ConnectionFactory factory = new ActiveMQConnectionFactory(url);
RedeliveryPolicy policy = new RedeliveryPolicy();
policy.setBackOffMultiplier((short) 2);
policy.setRedeliveryDelay(30000);
policy.setInitialRedeliveryDelay(60000);
policy.setUseExponentialBackOff(true);
((ActiveMQConnectionFactory) factory).setRedeliveryPolicy(policy);
final Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
...
// inside onMessage()
try {
    if (DB is available) {
        // process message
        // save required details to DB
        msg.acknowledge();
    } else {
        session.recover(); // triggers redelivery of unacknowledged messages
    }
} catch (Exception e) {
    // currently ignored
}