I am using IBM MQ 7.5. I am running a JMS client which connects to a queue manager running on another host.
I want to monitor the TCP connection to the queue manager. How do I get notified if my client's connection to the queue manager is broken? Is there any callback or listener provided in the IBM MQ APIs to detect an interruption of the connection?
E.g., ActiveMQ has http://activemq.apache.org/maven/apidocs/org/apache/activemq/transport/TransportListener.html
Thanks,
Anuj
In terms of connections being dropped, a connection-broken exception will be delivered via the exception listener.
The JMS specification is written such that events like a broken connection are legitimately returned on synchronous calls only. I would therefore recommend setting an exception listener, and also catching exceptions from all messaging operations and taking appropriate action.
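To illustrate the second point, here is a minimal sketch of guarding a synchronous call (consumer stands for an existing MessageConsumer; the timeout value is arbitrary):
// Sketch: connection-broken errors may also surface on synchronous calls,
// so catch JMSException around every messaging operation.
try {
    Message msg = consumer.receive(5000); // wait up to 5 seconds
    if (msg != null) {
        // process the message here
    }
} catch (JMSException je) {
    // the connection may be broken; trigger reconnection or alerting here
}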
Do you want to monitor client application connections at the queue manager end or in the client application?
To get notified of any connection issues, the MQ JMS client has an ExceptionListener that can be attached to an MQConnection. This exception listener is invoked when there is an issue with the connection to the queue manager, for example when the connection is broken. See the documentation of the setExceptionListener method for details. Call setExceptionListener on the MQConnection to register a callback as shown below.
// Requires: javax.jms.ExceptionListener, javax.jms.JMSException,
// com.ibm.mq.jms.MQQueueConnection, com.ibm.mq.jms.MQQueueConnectionFactory
MQQueueConnectionFactory cf = new MQQueueConnectionFactory();

ExceptionListener exceptionListener = new ExceptionListener() {
    @Override
    public void onException(JMSException e) {
        System.out.println(e);
        if (e.getLinkedException() != null)
            System.out.println(e.getLinkedException());
    }
};

MQQueueConnection connection = (MQQueueConnection) cf.createQueueConnection();
connection.setExceptionListener(exceptionListener);
To actively check the health of the connection and session, I am thinking of using the approach below.
/**
 * Checks whether the connection is healthy by creating a
 * session and closing it. mqQueueConnection is the connection
 * whose health we want to check.
 */
protected boolean isConnectionHealthy() {
    try {
        Session session = mqQueueConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        session.close();
    } catch (JMSException e) {
        LOG.warn("Exception occurred while checking health of connection. Not able to create "
                + "new session: " + e.getMessage(), e);
        return false;
    }
    return true;
}
/**
 * Checks whether the session is healthy by creating a
 * consumer and closing it. mqQueueSession is the session
 * whose health we want to check.
 */
protected boolean isSessionHealthy() {
    try {
        MessageConsumer consumer = mqQueueSession.createConsumer(mqQueue);
        consumer.close();
    } catch (JMSException e) {
        LOG.warn("Exception occurred while checking health of the session. Not able to create "
                + "new consumer: " + e.getMessage(), e);
        return false;
    }
    return true;
}
Does this approach look good?
I just have one fear here:
I am creating a test session in isConnectionHealthy() and closing it. Will that affect the already-created session which is actually used for real communication? I mean, will it somehow close the existing session and start a new one?
I made both client and server able to wait for each other to connect, so they can be started independently, in any order.
The solution described here does exactly that, out of the box, BUT the client side keeps printing massive stack traces into our logs, with java.net.ConnectException: Connection refused: connect followed by a 46-line stack trace, until the server is up and the connection succeeds.
This is not ideal.
My question is: how do I apply custom logic to fine-tune what to log and when?
What I have found so far is that the logs are printed by org.springframework.integration.handler.LoggingHandler. This acts as a sort of error channel, and errors are always dispatched there. I cannot find where this is set, so that I could, for example, replace it with my own implementation.
I managed to configure my own default error channel, but that channel was added alongside the preconfigured LoggingHandler channel rather than replacing it.
The other approach could be to set a much longer timeout when the first message is sent. I am struggling with that too. I tried to set it on the outboundGateway, as .handle(Tcp.outboundGateway(clientConnectionFactory).remoteTimeout(1_000_000L)), but that had no effect.
OK, solved.
The problem was not really in LoggingHandler or any error channel. Rather, org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory.createSocket() throws an exception if the server is not immediately ready, and TcpOutboundGateway then logs this exception in an old-fashioned way; only then is the error dispatched to the errorChannel, where it can be reacted on, and the default Spring Integration reaction is to print it again :) That is what I initially didn't notice: the exception is logged twice. The second log can be prevented with a custom error message handler, but not the first one.
TcpNetClientConnectionFactory.createSocket() calls Java's default createSocket(), and there is no option there to set a connect timeout. If the recipient is not ready, the method call fails almost immediately. See JDK enhancement request JDK-4414843.
A possible solution is to override TcpNetClientConnectionFactory.createSocket() to repeat connection attempts to the server until one succeeds.
WaitingTcpNetClientConnectionFactory:
// Requires: java.io.IOException, java.net.ConnectException, java.net.Socket,
// org.apache.logging.log4j.LogManager / Logger,
// org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory
public class WaitingTcpNetClientConnectionFactory extends TcpNetClientConnectionFactory {

    private final SocketConnectionListener socketConnectionListener;
    private final int waitBetweenAttemptsInMs;
    private final Logger log = LogManager.getLogger();

    public WaitingTcpNetClientConnectionFactory(
            String host, int port,
            int waitBetweenAttemptsInMs,
            SocketConnectionListener socketConnectionListener) {
        super(host, port);
        this.waitBetweenAttemptsInMs = waitBetweenAttemptsInMs;
        this.socketConnectionListener = socketConnectionListener;
    }

    @Override
    protected Socket createSocket(String host, int port) throws IOException {
        Socket socket = null;
        while (socket == null) {
            try {
                socket = super.createSocket(host, port);
                socketConnectionListener.onConnectionOpen();
            } catch (ConnectException ce) {
                socketConnectionListener.onConnectionFailure();
                log.warn("server " + host + ":" + port + " is not ready yet ..waiting");
                try {
                    Thread.sleep(waitBetweenAttemptsInMs);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new IOException("interrupted while waiting between connection attempts", ie);
                }
            }
        }
        return socket;
    }
}
As an extra bonus, I also report success or failure to the provided SocketConnectionListener, my very own custom interface, so other parts of the application can synchronize with it; for example, to hold off streaming until the server/peer node is ready.
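For reference, a minimal sketch of what that custom interface could look like (the actual interface in the project may differ):
// Hypothetical sketch of the custom listener interface referenced above.
public interface SocketConnectionListener {
    void onConnectionOpen();     // invoked once a socket is successfully created
    void onConnectionFailure();  // invoked after each failed connection attempt
}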
Use WaitingTcpNetClientConnectionFactory the same way as TcpNetClientConnectionFactory.
HeartbeatClientConfig (only relevant bit):
@Bean
public TcpNetClientConnectionFactory clientConnectionFactory(
        ConnectionStatus connectionStatus) {
    TcpNetClientConnectionFactory connectionFactory =
            new WaitingTcpNetClientConnectionFactory("localhost", 7777, 2000, connectionStatus);
    connectionFactory.setSerializer(new ByteArrayLengthHeaderSerializer());
    connectionFactory.setDeserializer(new ByteArrayLengthHeaderSerializer());
    return connectionFactory;
}
Now it just prints:
INFO [ main] o.b.e.d.s.h.client.HeartbeatClientRun : Started HeartbeatClientRun in 1.042 seconds (JVM running for 1.44)
WARN [ask-scheduler-1] h.c.WaitingTcpNetClientConnectionFactory : server localhost:7777 is not ready yet ..waiting
WARN [ask-scheduler-1] h.c.WaitingTcpNetClientConnectionFactory : server localhost:7777 is not ready yet ..waiting
WARN [ask-scheduler-1] h.c.WaitingTcpNetClientConnectionFactory : server localhost:7777 is not ready yet ..waiting
WARN [ask-scheduler-1] h.c.WaitingTcpNetClientConnectionFactory : server localhost:7777 is not ready yet ..waiting
As usual, full project sources are available on my git; here is the relevant commit.
I have set up a standalone HornetQ instance running locally. For testing purposes I have created a consumer, using the HornetQ core API, which receives a message every 500 milliseconds.
I am facing strange behaviour on the consumer side: my client connects and reads all the messages from the queue, but if I force-shutdown it (without properly closing the session/connection), then the next time I start the consumer it reads the old messages from the queue again. Here is my consumer example:
// HornetQ consumer code
public void readMessage() {
    ClientSession session = null;
    try {
        if (sf != null) {
            session = sf.createSession(true, true);
            ClientConsumer messageConsumer = session.createConsumer(JMS_QUEUE_NAME);
            session.start();
            while (true) {
                ClientMessage messageReceived = messageConsumer.receive(1000);
                if (messageReceived != null && messageReceived.getStringProperty(MESSAGE_PROPERTY_NAME) != null) {
                    System.out.println("Received JMS TextMessage:" + messageReceived.getStringProperty(MESSAGE_PROPERTY_NAME));
                    messageReceived.acknowledge();
                }
                Thread.sleep(500);
            }
        }
    } catch (Exception e) {
        LOGGER.error("Error while consuming message.", e);
    } finally {
        try {
            if (session != null) { // guard against NPE when no session was created
                session.close();
            }
        } catch (HornetQException e) {
            LOGGER.error("Error while closing consumer session.", e);
        }
    }
}
Can someone tell me why it works like this, and what kind of configuration I should use on the client/server side so that a message read by the consumer is deleted from the queue?
You are not committing the session after the acknowledgements are complete, and you are not creating the session with auto-commit for acknowledgements enabled. Therefore, you should do one of the following:
Either explicitly call session.commit() after one or more invocations of acknowledge()
Or enable implicit auto-commit for acknowledgements by creating the session using sf.createSession(true,true) or sf.createSession(false,true) (the boolean which controls auto-commit for acknowledgements is the second one).
Keep in mind that when you enable auto-commit for acknowledgements there is an internal buffer which needs to reach a particular size before the acknowledgements are flushed to the broker. Batching acknowledgements like this can drastically improve performance for certain high-volume use-cases. By default you need to acknowledge 1,048,576 bytes worth of messages in order to flush the buffer and send the acknowledgements to the broker. You can change the size of this buffer by invoking setAckBatchSize on your ServerLocator instance or by using a different createSession method (e.g. sf.createSession(true, true, myAckBatchSize)).
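For instance, a minimal sketch (assuming an existing ServerLocator named locator) that forces every acknowledgement to be flushed immediately by using a batch size of 0:
// Sketch: an ack batch size of 0 sends each acknowledgement to the broker at once.
locator.setAckBatchSize(0);
ClientSessionFactory sf = locator.createSessionFactory();

// Alternatively, pass the batch size when creating the session:
// (autoCommitSends = true, autoCommitAcks = true, ackBatchSize = 0)
ClientSession session = sf.createSession(true, true, 0);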
If the acknowledgement buffer isn't flushed and your client crashes then the corresponding messages will still be in the queue when the client comes back. If the buffer hasn't reached its threshold it will still be flushed anyway when the consumer is closed gracefully.
I have set up a JMS listener for IBM MQ. When the listener runs in a single JVM, like Tomcat on my local machine, it works fine. But when I deploy it to the cloud, where there are 2 VMs, it runs fine and connects to MQ on one of the VMs, but on the other it reports the error below.
Is there any limitation in IBM MQ on using one ID/password from multiple clients to connect to the queue manager?
ERROR> com.ssc.ach.mq.JMSMQReceiver[main]: errorMQJMS2013: invalid security authentication supplied for MQQueueManager
javax.jms.JMSSecurityException: MQJMS2013: invalid security authentication supplied for MQQueueManager
at com.ibm.mq.jms.MQConnection.createQM(MQConnection.java:2050)
at com.ibm.mq.jms.MQConnection.createQMNonXA(MQConnection.java:1532)
at com.ibm.mq.jms.MQQueueConnection.<init>(MQQueueConnection.java:150)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:185)
at com.ibm.mq.jms.MQQueueConnectionFactory.createConnection(MQQueueConnectionFactory.java:1066)
at
I am starting the listener on VM startup using the servlet init method:
public void init(ServletConfig config) throws ServletException {
    logger.info("App Init");
    try {
        boolean isListnerOn = Boolean.parseBoolean(System.getProperty("listner", "false"));
        logger.info(" startReceiver , listner flag is " + isListnerOn);
        if (isListnerOn) {
            if (mqReceive == null) {
                MyMessageListener jmsListner = new MyMessageListener();
                mqReceive = new JMSMQReceiver(jmsListner);
            }
            if (mqReceive != null) {
                try {
                    mqReceive.start();
                } catch (Exception e) {
                    logger.error("Error starting the listner ", e);
                }
            }
        } else {
            logger.info(" listner not started as flag is " + isListnerOn);
        }
    } catch (Exception e) {
        logger.error(e, e);
    }
}
private MQQueueConnectionFactory mqQueueConnectionFactory;

public MQReceiver(MyMessageListener listner) {
    userName = System.getProperty("mqId", "");
    pwd = System.getProperty("mqId", "");
    host = System.getProperty(PREFIX + ".host");
    port = Integer.parseInt(System.getProperty(PREFIX + ".port"));
    qManager = System.getProperty(PREFIX + ".qManager");
    channel = System.getProperty(PREFIX + ".channel");
    queueName = System.getProperty(PREFIX + ".achqueueName");
    logger.info("HOST:" + host + "\tPORT:" + port + "\tqManager:" + qManager + "\tchannel:" + channel + "\tqueueName:" + queueName);
    try {
        mqQueueConnectionFactory = new MQQueueConnectionFactory();
        mqQueueConnectionFactory.setHostName(host);
        mqQueueConnectionFactory.setChannel(channel);       // communications link
        mqQueueConnectionFactory.setPort(port);
        mqQueueConnectionFactory.setQueueManager(qManager); // service provider
        mqQueueConnectionFactory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
        queueConnection = mqQueueConnectionFactory.createConnection(trustUserName, trustID);
        session = queueConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        queue = session.createQueue(queueName);
        ((MQDestination) queue).setTargetClient(WMQConstants.WMQ_CLIENT_NONJMS_MQ);
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(listner);
        logger.info(" Connect MQ successfully.");
    } catch (JMSException e) {
        logger.error("error" + e.getMessage(), e);
    }
}
public void start() {
    logger.info("Starting MQListener... ");
    // mqListener.start();
    try {
        queueConnection.start();
        logger.info("MQListener start successfully");
    } catch (JMSException e) {
        logger.error("error" + e.getMessage(), e);
    }
}
The IBM MQ Classes for JMS client error JMSWMQ2013 can be caused by quite a few issues.
IBM Support Technote "WMQ 7.1 / 7.5 / 8.0 / 9.0 queue manager RC 2035 MQRC_NOT_AUTHORIZED or AMQ4036 or JMSWMQ2013 when using client connection as an MQ Administrator" has a good write up on diagnosing and resolving issues like this.
If you want more specific help, please start by providing the following details, by editing and adding them to your question:
Version of IBM MQ Classes for JMS used by the client application.
Version of IBM MQ installed on the IBM MQ queue manager.
Errors in the queue manager's AMQERR01.LOG that happen at the same time as the error you receive in the IBM MQ Classes for JMS client application on the second VM.
It is strange that it works on the first VM and fails on the second. If the trustUserName and trustID are the same for both VMs, IBM MQ should accept them equally.
If you are using IBM MQ v8 or later native connection authentication, it is possible the OS or PAM is denying the second connection. I have only seen this where pam_tally had a limit of 5 and more than 5 connections hit at the same time. It could be possible there is some sort of login limit of one login per user.
Per your comment, it appears that you had a CHLAUTH ADDRESSMAP rule missing: the first VM's IP was allowed and the second VM's IP was not. Depending on how the queue manager is configured and how the CHLAUTH rule blocks the connection, the queue manager may return MQRC 2035 to the client; an IBM MQ Classes for JMS client reports this as MQJMS2013. This can happen, for instance, if your queue manager uses ADOPTCTX(NO) and has a CHLAUTH rule that maps ADDRESS(*) to an MCAUSER that does not exist (e.g. *NOACCESS), plus other CHLAUTH rules that map connections from specific IPs to a user that does have access.
A more secure setup is to use ADOPTCTX(YES), which tells MQ to set the MCAUSER to the ID that is authenticated by CONNAUTH. You could also have an ADDRESSMAP rule with ADDRESS(*) USERSRC(NOACCESS) to block by default, and then other rules with specific IPs and USERSRC(CHANNEL) to allow the IPs you want to whitelist.
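A sketch of that pattern in MQSC (the channel name and IP addresses are placeholders, not taken from the question):
* Block all client IPs by default on a hypothetical channel APP.SVRCONN
SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS)
* Allow specific client IPs, adopting the channel's authenticated user
SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('10.0.0.1') USERSRC(CHANNEL)
SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('10.0.0.2') USERSRC(CHANNEL)
* Have the queue manager adopt the CONNAUTH-authenticated ID as the MCAUSER
ALTER AUTHINFO('SYSTEM.DEFAULT.AUTHINFO.IDPWOS') AUTHTYPE(IDPWOS) ADOPTCTX(YES)
REFRESH SECURITY TYPE(CONNAUTH)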
com.ssc.ach.mq.JMSMQReceiver[main]: errorMQJMS2013: invalid security authentication supplied for MQQueueManager
javax.jms.JMSSecurityException: MQJMS2013: invalid security authentication supplied for MQQueueManager
is because of this line:
queueConnection = mqQueueConnectionFactory.createConnection(trustUserName, trustID);
If you don't supply a valid user ID and password that can be authenticated by the queue manager, then your connection will be rejected. I have no idea what you are passing into the method, but it is supposed to be a valid user ID and password for the remote system.
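As an aside, in the constructor shown earlier both userName and pwd are read from the same mqId property. A sketch of reading them separately (the property names mq.user and mq.password are illustrative, not from the original code):
// Sketch: read a real user ID and password from two separate properties.
String userName = System.getProperty("mq.user");
String password = System.getProperty("mq.password");
queueConnection = mqQueueConnectionFactory.createConnection(userName, password);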
Also, I wish people would stop using the term 'MQ listener' here, because you are not creating an MQ listener; you are creating a consumer that receives messages.
An MQ listener is a component of MQ that accepts and handles incoming connections. See here.
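For contrast, an actual MQ listener is defined and started on the queue manager itself, e.g. in MQSC (the listener name and port are illustrative):
* Define and start a TCP listener on the queue manager
DEFINE LISTENER('APP.LISTENER') TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER('APP.LISTENER')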
I am using javax.jms.Connection to send and receive JMS messages to/from JBoss 5.0.1. I am also using Connection.setExceptionListener(). I would like to know whether the exception listener needs to be set before the connection is started by Connection.start(). Also, any ideas for reproducing the JBoss connection exception at will, to confirm that the exception listener is invoked?
From the spec:
If a JMS provider detects a serious problem with a Connection object, it informs the Connection object's ExceptionListener, if one has been registered. It does this by calling the listener's onException method, passing it a JMSException argument describing the problem.
An exception listener allows a client to be notified of a problem asynchronously. Some connections only consume messages, so they would have no other way to learn that their connection has failed.
Remember that there is room for vendor-specific behaviour here in how exceptions are handled. Some vendors try to "fix" the situation if possible.
Now, about starting the connection before or after setting the exception listener:
Always set the exception listener BEFORE starting the connection.
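In code, the recommended order is (a minimal sketch; connectionFactory stands for any javax.jms.ConnectionFactory):
// Recommended order: create the connection, register the listener, then start.
Connection connection = connectionFactory.createConnection();
connection.setExceptionListener(new ExceptionListener() {
    @Override
    public void onException(JMSException e) {
        System.err.println("JMS problem: " + e);
    }
});
connection.start();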
And about reproducing, I think you could:
Start a consumer (connection.start() should have been run) and wait for a message.
Shut down JBoss immediately.
Restart JBoss.
Also, using Eclipse or other dev tools to start in debug mode will help; since the debugger shows you the status, you can abort the JBoss server at any specific time and restart it again.
With JBoss 5.0.1, setting the exception listener worked even after starting the connection. As mentioned by "MrSimpleMind", the exception listener serves better when set before starting the connection; in fact, it is best set as soon as the connection is created from the ConnectionFactory.
In the case of JBoss 5.0.1, the exception listener is effective even if the connection is never started.
// Main
try {
    connection = getConnection();
    connection.setExceptionListener(new MyExceptionListener());
    // The exception listener is effective even before the connection is started.
    //connection.start();
    while (true) {
        try {
            Thread.sleep(1000 * 5);
            Log.l("Kill the JMS provider any time now. !! Observe if the JMS listener works.");
        } catch (InterruptedException e) {
            // do nothing
        }
    }
} catch (NamingException e) {
    e.printStackTrace();
} catch (JMSException e) {
    e.printStackTrace();
}
// Exception listener
public class MyExceptionListener implements ExceptionListener {
    @Override
    public void onException(JMSException e) {
        Log.l("Exception listener invoked");
    }
}
To reproduce the scenario where the ExceptionListener gets triggered, I used the JBoss Management Console and stopped the ConnectionFactory via the MBean exposed by the console.
Currently I'm working on a standalone Java app that connects to a WebSphere MQ to send and receive messages.
The flow is asynchronous, which we implemented using a MessageListener to retrieve messages from the queue when they are ready. The code to initialize the consumer with the listener is as follows:
if (connection == null)
    connection = getJmsConnection();
try {
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    if (isTopic) {
        destination = session.createTopic(destinationName);
    } else {
        destination = session.createQueue(destinationName);
    }
    consumer = session.createConsumer(destination);
    consumer.setMessageListener(listener);
} catch (JMSException e) {
    e.printStackTrace();
}
The getJmsConnection() method returns a connection from a pool, implemented using the Apache Commons Pool library.
My question is: will the pooled connection assigned to the listener stay active and tied to that listener for as long as the program is running? Or is the connection used intermittently, so it can be reused by other processes? Our idea is to have the sending and receiving processes reuse connections from the pool, but I'm not sure how the MessageListener deals with the connection it is assigned.
Thank you.
The key object here is the session rather than the connection; the session is the one that does the primary work, with the message consumption (async or otherwise).
It is advisable to share the connection as widely as possible; temporary destinations are scoped at the connection level. So the use of pooling is a good idea, and it is perfectly possible to share that connection around.
However, I would also say that it might be worth considering pooling the sessions. With the code here, a new session will be created each time through that code, which means a new connection to the WebSphere MQ queue manager will be created. It's not clear what the scope of that will be, but if it is closed quickly it could become a bottleneck.
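A sketch of that idea using Apache Commons Pool 2, which the question already uses for connections (names are illustrative; connection is an already-created, shared javax.jms.Connection):
// Sketch: pool sessions on top of a shared connection.
GenericObjectPool<Session> sessionPool = new GenericObjectPool<>(
        new BasePooledObjectFactory<Session>() {
            @Override
            public Session create() throws Exception {
                return connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            }
            @Override
            public PooledObject<Session> wrap(Session session) {
                return new DefaultPooledObject<>(session);
            }
        });

Session session = sessionPool.borrowObject();
try {
    // create producers/consumers from the borrowed session here
} finally {
    sessionPool.returnObject(session);
}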