I have an onMessage() method where I'm receiving an ObjectMessage from the queue and using that information to populate and persist a JPA entity. But when something goes wrong while persisting the entity, onMessage() is re-executed. My guess is the message is pushed back onto the queue, so onMessage() runs again and I end up in an infinite loop. How can I stop onMessage() from executing again, or control the number of times it gets executed? Here is the code I have.
The error occurs at saveAuditData(auditInfo).
public void onMessage(Message inMessage) {
    log.debug("Entering onMessage() Method.");
    AuditInfo auditInfo = null;
    try {
        ObjectMessage om = (ObjectMessage) inMessage;
        auditInfo = (AuditInfo) om.getObject();
        log.debug("Message received : " + auditInfo.getApiUsed());
        log.debug("Calling saveAuditData().");
        saveAuditData(auditInfo);
        log.debug("Leaving onMessage() Method.");
    } catch (Exception e) {
        log.error("Error persisting Audit Info.", e);
        log.debug("Printing Audit Info:");
        log.debug(auditInfo.toString());
    }
}
private void saveAuditData(AuditInfo auditInfo) {
    log.debug("Entering saveAuditData() Method.");
    log.debug("Populating Audit Object.");
    IdmAudit idmAudit = new IdmAudit();
    idmAudit.setApiUsed("API");
    idmAudit.setAppClientIpAddress("localhost");
    idmAudit.setAuditActivity("activity1");
    idmAudit.setAuditData(auditInfo.getAuditData());
    idmAudit.setAuditGroup(AUDIT_GROUP);
    idmAudit.setAuditType("Type");
    idmAudit.setIdmAuditCreationDate(new Date());
    idmAudit.setLocationCd("Location");
    idmAudit.setPurgeDate(null);
    idmAudit.setSubscriberId(new BigDecimal(0));
    idmAudit.setSuccessInd("Y");
    idmAudit.setUserId(new BigDecimal(0));
    idmAudit.setAuditSource("Source");
    idmAudit.setVersionNumber(new BigDecimal(0));
    log.debug("Saving Audit.");
    entityManager.persist(idmAudit);
    entityManager.flush();
    log.debug("Leaving saveAuditData() Method.");
}
When a container-managed transaction is started by the container to process a JMS message, any failure in JDBC connections or exception thrown in the thread results in a rollback of the global XA transaction. The message then goes back to the queue and is retried later according to the queue configuration: the period between retries, and the maximum number of retries before the message is moved to a dead-letter queue.
So you have the following options:
Choose "Bean managed" transaction mode in your MDB deployment descriptor and use a UserTransaction (looked up from java:comp/UserTransaction) to call begin, commit or rollback manually, taking care with your exception handling.
Keep "Container managed" transactions but query the redelivery count property on the JMS message to decide what to do next: either retry the step that can fail, or skip it and save your data in the database. You can get redelivery information from Message.getJMSRedelivered() or Message.getLongProperty("JMSXDeliveryCount"), if your JMS provider supplies it.
Or else, move your saveAuditData() method to a stateless session bean with transaction attribute RequiresNew in the deployment descriptor, so that a new transaction is created and your data is saved regardless of what happens to your MDB transaction. This option can be combined with the previous one.
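A sketch of the second option, assuming the provider supplies the optional JMSXDeliveryCount property (MAX_ATTEMPTS and the give-up handling are illustrative, not part of the original code):

```java
public void onMessage(Message inMessage) {
    try {
        int deliveryCount = 1; // JMSXDeliveryCount is 1 on the first delivery
        try {
            deliveryCount = inMessage.getIntProperty("JMSXDeliveryCount");
        } catch (JMSException e) {
            // Property not supported by this provider; getJMSRedelivered()
            // at least tells you whether this is a redelivery.
        }
        if (deliveryCount > MAX_ATTEMPTS) {
            log.error("Giving up after " + deliveryCount + " deliveries, skipping message.");
            return; // return normally so the container commits and the message is consumed
        }
        AuditInfo auditInfo = (AuditInfo) ((ObjectMessage) inMessage).getObject();
        saveAuditData(auditInfo);
    } catch (Exception e) {
        log.error("Error persisting audit info; message will be redelivered.", e);
        throw new RuntimeException(e); // force rollback so the container redelivers
    }
}
```

The key point is that returning normally consumes the message, while letting an exception propagate (or marking the transaction rollback-only) sends it back to the queue.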
You can simply mark the onMessage method with the TransactionAttribute annotation:
@TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW)
public void onMessage(Message message) {
    .....
}
Related
I'm working on an application which uses Kafka to consume messages from multiple topics, persisting data as it goes.
To that end I use a @Service class, with a couple of methods annotated with @KafkaListener. Consider this:
@Transactional
@KafkaListener(topics = MyFirstMessage.TOPIC, autoStartup = "false", containerFactory = "myFirstKafkaListenerContainerFactory")
public void handleMyFirstMessage(ConsumerRecord<String, MyFirstMessage> record, Acknowledgment acknowledgment) throws Exception {
    MyFirstMessage message = consume(record, acknowledgment);
    try {
        doHandle(record.key(), message);
    } catch (Exception e) {
        TransactionInterceptor.currentTransactionStatus().setRollbackOnly();
    } finally {
        acknowledgment.acknowledge();
    }
}
@Transactional
@KafkaListener(topics = MySecondMessage.TOPIC, autoStartup = "false", containerFactory = "mySecondKafkaListenerContainerFactory")
public void handleMySecondMessage(ConsumerRecord<String, MySecondMessage> record, Acknowledgment acknowledgment) throws Exception {
    MySecondMessage message = consume(record, acknowledgment);
    try {
        doHandle(record.key(), message);
    } catch (Exception e) {
        TransactionInterceptor.currentTransactionStatus().setRollbackOnly();
    } finally {
        acknowledgment.acknowledge();
    }
}
Please disregard the stuff about setRollbackOnly, it's not relevant to this question.
What IS relevant is that the doHandle() methods in each listener perform inserts into a table, which occasionally fail because autogenerated keys turn out to be non-unique once the final commit is done.
What happens is that each doHandle() method increments the key column in its own little transaction, and only one of them will "win" that race. The other fails during commit with a unique-constraint violation.
What is best practice to handle this? How do I "synchronize" transactions so they execute like pearls on a string instead of all at once?
I'm thinking of using some kind of semaphore or lock to serialize things, but that smells like a solution with many pitfalls. If there were a general pattern or framework to help with this problem, I would be much more comfortable implementing it.
See the documentation.
Using @Transactional for the DB and a KafkaTransactionManager in the listener container is similar to using a ChainedKafkaTransactionManager (configured with both TMs) in the container. The DB tx is committed, followed by Kafka, when the listener exits normally.
When the listener throws an exception, both transactions are rolled back in the same order.
The setRollbackOnly is definitely relevant to this question, since you are not rolling back the Kafka transaction when you do that.
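As a sketch of the setup the documentation describes: the container is given the KafkaTransactionManager, while @Transactional (bound to the JDBC/JPA transaction manager) wraps the listener method. The bean names and generic types below are assumptions, and ChainedKafkaTransactionManager itself is deprecated in recent spring-kafka releases in favor of exactly this container-plus-@Transactional arrangement:

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, MyFirstMessage> myFirstKafkaListenerContainerFactory(
        ConsumerFactory<String, MyFirstMessage> consumerFactory,
        KafkaTransactionManager<String, MyFirstMessage> kafkaTransactionManager) {
    ConcurrentKafkaListenerContainerFactory<String, MyFirstMessage> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // The container opens the Kafka transaction; @Transactional on the listener
    // method starts the DB transaction inside it. On a normal exit the DB tx
    // commits first, then the Kafka tx; on an exception both roll back.
    factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
    return factory;
}
```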
I am trying to connect to Solace queues and simply read messages from them. However, I am able to read the messages, but they are not getting removed from the queue.
Below is the code which I tried:
public void clearMessages() throws Exception {
    // Programmatically create the connection factory using default settings
    // Create connection to the Solace router
    SolXAConnectionFactoryImpl connectionFactory = returnConnFactory();
    XAConnection connection = connectionFactory.createXAConnection();
    XASession session = connection.createXASession();
    Queue queue = session.createQueue(QUEUE_NAME);
    connection.start();
    MessageConsumer messageConsumer = session.createConsumer(queue);
    messageConsumer.setMessageListener(new MessageListener() {
        @Override
        public void onMessage(Message message) {
            if (message instanceof SolTextMessage) {
                SolTextMessage solTextMessage = (SolTextMessage) message;
                try {
                    System.out.println("Message cleared is : " + solTextMessage.getText());
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            } else {
                System.out.println("Message Content: " + SolJmsUtility.dumpMessage(message));
            }
            latch.countDown();
        }
    });
    latch.await(120, TimeUnit.SECONDS);
    connection.stop();
    messageConsumer.close();
    session.close();
    connection.close();
}
Here latch is the object of CountDownLatch which is initialized as:
CountDownLatch latch = new CountDownLatch(2);
You need to commit the XA transaction in order for messages to be consumed.
In JMS, the function call is XAResource.commit(xid, true)
Also, is there a reason to use the CountDownLatch?
If you would like to consume messages synchronously, you can choose not to set a message listener and call MessageConsumer.receive()
Solace does provide a basic sample showing how to make use of XA transactions.
Refer to XATransactions.java in the samples directory of the API.
Note that the sample code is manually managing the XA transaction by calling the relevant XAResource methods such as XAResource.commit().
XA transactions are usually used within Java EE application servers that contain a transaction manager to manage the lifecycle of XA transactions.
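To make the commit step concrete, here is a minimal sketch of manual XA demarcation around a synchronous receive (createXid() is an assumed helper along the lines of the Solace XATransactions.java sample; inside an application server the transaction manager drives these calls for you):

```java
// Consume one message inside a manually managed XA transaction.
XASession session = connection.createXASession();
XAResource xaResource = session.getXAResource();
Xid xid = createXid(); // assumed helper producing a unique transaction id

xaResource.start(xid, XAResource.TMNOFLAGS);   // begin the transaction branch
MessageConsumer consumer = session.createConsumer(queue);
Message message = consumer.receive(10000);     // receive inside the transaction
// ... process the message ...
xaResource.end(xid, XAResource.TMSUCCESS);     // finish work on the branch

// One-phase commit (no prepare needed for a single resource);
// only after this commit is the message actually removed from the queue.
xaResource.commit(xid, true);
```

Without the commit(), the receive is still part of an open transaction, which is why the messages in the original code are read but never removed.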
We're using HornetQ, and have a separate Java application which connects to the queue to read from it.
We have a Connection to it, create a JMS Session from the connection,
then obtain a JMS MessageConsumer from the session, and assign a custom MessageListener to the MessageConsumer.
We call message.acknowledge() for every processed message.
This all works very well, but in the case where we kill our Java application in the middle of processing, before hitting acknowledge(), the message is removed from HornetQ yet never processed, and lost forever.
It seems that our current JMS setup works by removing the message from the queue and then rolling back/re-inserting it if something goes wrong.
But if we kill our application, there won't be any rollback or re-insertion, as that happens within our client code.
Is there a setting or delivery mode I can configure in the client that causes HornetQ to treat every message as "not consumed" (and still in the queue) until the client sends the ack, so that a rollback isn't necessary?
EDIT: Also tried using transactional mode, calling:
connection.createSession(true, acknowledgementType)
Our MessageListener:
@Override
public void onMessage(Message message) {
    LOGGER.debug("Message received. Going to handle message.");
    try {
        final TextMessage messageReceived = (TextMessage) message;
        final String text = messageReceived.getText();
        LOGGER.debug(String.format("Message content:\n%s", text));
        final boolean isMessageSent = eventDispatcher.handleEvent(text);
        if (isMessageSent) {
            LOGGER.error("Behold!");
            message.acknowledge();
            session.commit();
        } else {
            session.rollback();
        }
    } catch (JMSException jmse) {
        LOGGER.error("Failed to get text from a message from the events queue.", jmse);
    } catch (JAXBException jaxbe) {
        LOGGER.error("Failed to deserialize contents of the message from the events queue.", jaxbe);
    } catch (Exception ex) {
        LOGGER.error("An error occurred when receiving an event from the events queue:", ex);
    }
}
Basically I have an EJB 3 timer calling another EJB 3 (DAO); this call is wrapped in a catch block. The DAO throws a timeout SQLException when it tries to acquire a connection to the datasource (the exception propagates to the caller). In the logs I see that on this timeout it keeps trying to execute again and again. What options are there to prevent it from retrying?
..
// Timer
@Timeout
public void timeout(Timer timer) { // keeps on coming here
    ...
    try {
        dao.processJob();
    } catch (SQLException e) { // catches the timeout
        log.error("Job failed.", e);
    }
}

// DAO
@Resource(...)
private DataSource ds;

public void process() throws SQLException {
    ds.getConnection(); // throws timeout here
    ..
}
From the documentation:
if a bean cancels a timer within a transaction that gets rolled back, the timer cancellation is rolled back. In this case, the timer's duration is reset as if the cancellation had never occurred.
Therefore, the timer probably doesn't get cancelled. This may be the reason it retries continuously.
You can try to catch the exception in the process method instead of throwing it, and return normally.
Alternatively, you can use TransactionAttributeType.REQUIRES_NEW on the process method, so that it runs as part of a new transaction.
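A sketch of that alternative: moving the DAO call into its own transaction, so a rollback there does not roll back the timer's transaction and trigger a retry (the bean class, datasource JNDI name, and logging are assumptions):

```java
@Stateless
public class JobDao {

    @Resource(name = "jdbc/myDS") // assumed datasource JNDI name
    private DataSource ds;

    // Runs in its own transaction; if it rolls back, only this
    // transaction is affected, not the timer's transaction.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void processJob() {
        try (Connection con = ds.getConnection()) {
            // ... do the work ...
        } catch (SQLException e) {
            log.error("Could not acquire a connection.", e); // swallow so the timer is not retried
        }
    }
}
```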
Requirement: I want messages to persist in the queue until onMessage() executes successfully. If any exception occurs during execution of onMessage() and is not handled, the message should be redelivered to the listener.
I have GlassFish v2 as an application server. I am using OpenMQConnectionFactory and JmsTemplate to send messages to the queue.
Please note that I am not using MDB.
<bean id="openMQConnectionFactory"
      class="com.is.br.util.OpenMqConnectionFactoryBean">
    <property name="imqAddressList" value="mq://localhost:7676" />
    <property name="imqDefaultUsername" value="admin" />
    <property name="imqDefaultPassword" value="admin" />
</bean>
I tried AUTO_ACKNOWLEDGE as the acknowledge mode, but when an exception is thrown in the listener the message is not redelivered.
MessageProducer.java
public void sendMessage(final String responseStream) {
    System.out.println("Enter into IsJmsProducer.sendMessage method");
    try {
        MessageCreator creator = new MessageCreator() {
            public Message createMessage(Session session) {
                ObjectMessage message = null;
                try {
                    message = session.createObjectMessage(responseStream);
                } catch (Exception e) {
                    System.out.println("Unable create a JMSMessage");
                }
                return message;
            }
        };
        System.out.println("Sending message to destination: " + this.destination.toString());
        this.jmsTemplate.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
        this.jmsTemplate.send(this.destination, creator);
        System.out.println("SendMessage to queue successfully.");
    } catch (Exception ex) {
        System.out.println("SendMessage to queue Fail." + ex);
    }
    System.out.println("Exit from IsJmsProducer.sendMessage method");
}
SampleJMSConsumer.java
public class SampleJMSConsumer implements MessageListener {
    @Override
    public void onMessage(Message message) {
        throw new RuntimeException();
    }
}
Then I tried this.jmsTemplate.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE); in the listener I called message.acknowledge(), and in the catch block I called session.recover(), but the message is still not redelivered.
SampleJMSConsumer.java
public class SampleJMSConsumer implements MessageListener {
    @Override
    public void onMessage(Message message) {
        ObjectMessage objectMessage = (ObjectMessage) message;
        Object object;
        try {
            object = objectMessage.getObject();
            if (object instanceof String) {
                System.out.println("Message received - " + object.toString());
                throw new JMSException("JMS exception");
            }
            message.acknowledge();
        } catch (JMSException e) {
            try {
                session.recover();
            } catch (JMSException ignored) {
            }
        }
    }
}
When I run the program in debug mode and send a message to the queue, I can see the number of messages in the broker admin console, but as soon as onMessage() is called the count drops by one. That means the message is consumed and deleted from the queue. Is that message considered "delivered"?
Please help me understand why the message is not redelivered when an exception occurs.
Thanks in advance.
I think this is by design: the message counts as delivered when onMessage() gets called. If you want to do something about the exception, you can handle it using try/catch.
Assuming the message were put back on the queue, you would likely hit the same exception when it is consumed again anyway.
The ack mechanism should, in my opinion, be about assuring correct delivery. Maybe what you are after is a reject mechanism, where you ask the producer side to send a new message?
CLIENT_ACKNOWLEDGE is suited for you. In your onMessage() method, call acknowledge() once processing is over; if there is any exception, don't call acknowledge().
Session.recover() stops and restarts message delivery. Delivery resumes from the last unacknowledged message.
I would suggest verifying what the default session mode for OpenMQ is. It could be that, once you have opened a connection, you can't change it, so it has to be specified when the connection is opened.
The session created in the consumer should set the session mode to AUTO_ACKNOWLEDGE / DUPS_OK_ACKNOWLEDGE. You haven't shared your code for starting the consumer; you are setting the session mode in the producer, but not in the consumer.
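To illustrate the point about setting the mode on the consuming session, here is a minimal plain-JMS consumer sketch (the connection factory and queue name are assumed to come from your existing configuration); the acknowledge mode is fixed when the session is created, so the producer-side JmsTemplate setting has no effect on the consumer:

```java
import javax.jms.*;

public class AckModeConsumer {
    public void listen(ConnectionFactory factory, String queueName) throws JMSException {
        Connection connection = factory.createConnection();
        // The acknowledge mode is fixed here, on the consuming session --
        // setting it on the producer's JmsTemplate does not affect redelivery.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
        consumer.setMessageListener(message -> {
            try {
                // ... process the message ...
                message.acknowledge();  // only ack after successful processing
            } catch (Exception e) {
                try {
                    session.recover();  // redeliver unacknowledged messages
                } catch (JMSException ignored) {
                }
            }
        });
        connection.start();
    }
}
```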