I need a setup where messages can be transmitted to a queue or topic which is listened to by 2 or more servers.
The consumer is a specific client who will access one of those two servers, and it is not known ahead of time which server the client will check. The message will have an ID on it which correlates to the correct client.
There may be multiple messages at any time waiting to be consumed by various clients accessing these servers.
How can I accomplish this? A queue or a topic? Point-to-point or publish/subscribe? What exact setup would do the trick?
Here's another way to look at the scenario: imagine multiple towns, each with a community mailbox. The residents of these towns do not have specific addresses; rather, they are constantly moving around between the towns. When someone needs to send a message to another person, they create the mail, and it gets copied and routed to each town's mailbox, waiting to be received. When the right person checks a mailbox and finds the message addressed to him, the message is consumed and destroyed in all the other mailboxes, ensuring the same message is not read again.
So the JMS queue or topic is this mailbox, and the clients connecting to these servers (which specifically are web servers in a clustered environment) are the people. Multiple messages addressed to different people can exist at the same time.
What's the best way to do this using JMS?
If you need to address messages to specific clients, you can use consumer-side selectors. Here's a round-trip example:
Server Sends
// producer side: tag each message with the intended recipient's ID
QueueSender queueSender = queueSession.createSender(queue);
queueSender.setDeliveryMode(DeliveryMode.PERSISTENT);
TextMessage message = queueSession.createTextMessage("Hello John!");
message.setObjectProperty("ToAddress", "John-123"); // property the consumer's selector matches on
queueSender.send(message);
Consumer Receives
QueueConnection queueConn = connFactory.createQueueConnection();
QueueSession queueSession = queueConn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
// only messages whose ToAddress property matches the selector are delivered to this receiver
QueueReceiver queueReceiver = queueSession.createReceiver(queue, "ToAddress = 'John-123'");
queueConn.start();
TextMessage message = (TextMessage) queueReceiver.receive();
The client creates a queueReceiver using the selector ToAddress=John-123, so only messages that match that selector are delivered to that client; other messages go to different consumers based on their selector.
If the receiver for 'John-123' is not connected, any messages addressed to him simply accumulate in the queue. If you want to receive messages in real time, the receiver needs to stay connected. To check for messages intermittently (much like checking email a few times a day), there is not much overhead in creating a receiver, checking for messages, and then disconnecting; however, avoid doing that repeatedly (thousands of times or more). If that's the case, just keep the receiver connected all the time.
Hope that helps,
I'm trying to put a message in an MQ queue. Here is my source code:
QueueConnection queueConn;
QueueSession queueSession;
QueueSender queueSender;

queueConn = connectionFactory.getConnection();
queueSession = queueConn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
queueSender = queueSession.createSender(queueSession.createQueue(KEY_CONFIG_QUEUE_NAME));
queueSender.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
TextMessage message = queueSession.createTextMessage(logBase);
queueSender.send(message);
I don't have the source code of the queue consumer, which is the component that sends the messages on to SPLUNK. But in the SPLUNK console I can see that the message is composed of a JMS header plus my text message (logBase).
I'd like the messages without the JMS header. Could someone help me understand where the problem is? Could it be at the consumer? Maybe a wrong or missing SPLUNK config?
Assuming that you cannot change the source code at the consumer, there is a way to do this administratively. You can change the queue definition so that these message properties are not passed to the getting application.
ALTER QLOCAL(q-name) PROPCTL(NONE)
Related Links
PROPCTL queue options
If you are able and willing to change the producer, you could look into the Target Client property of the MQ JMS destination.
This tells the JMS client that the consuming application is not a JMS app, so the extra headers are not added.
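For example, with the IBM MQ classes for JMS, the destination can be flagged as non-JMS either programmatically or via a URI-style destination; the queue name below is just a placeholder:

import com.ibm.mq.jms.MQQueue;
import com.ibm.msg.client.wmq.WMQConstants;

// mark the consuming application as non-JMS so the JMS (MQRFH2) header
// is not added to messages sent to this destination
MQQueue queue = new MQQueue("MY.CONFIG.QUEUE");           // placeholder queue name
queue.setTargetClient(WMQConstants.WMQ_CLIENT_NONJMS_MQ);

// equivalent URI form: queue:///MY.CONFIG.QUEUE?targetClient=1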
I have a Spring application that consumes messages on a specific port (say 9001), restructures them and then forwards to a Rabbit MQ server. The code segment is:
private void send(String routingKey, String message) throws Exception {
    String exchange = applicationConfiguration.getAMQPExchange();
    String exchangeType = applicationConfiguration.getAMQPExchangeType();

    Connection connection = myConnection.getConnection();
    Channel channel = connection.createChannel();
    channel.exchangeDeclare(exchange, exchangeType);
    channel.basicPublish(exchange, routingKey, null, message.getBytes());
    log.debug(" [CORE: AMQP] Sent message with key {} : {}", routingKey, message);
}
If the RabbitMQ server fails (crashes, runs out of RAM, is turned off, etc.) the code above blocks, preventing the upstream service from receiving messages (a bad thing). I am looking for a way of preventing this behaviour while not losing messages, so that at some point in the future they can be resent.
I am not sure how best to address this. One option may be to queue the messages to a disk file and then use a separate thread to read them and forward them to the RabbitMQ server?
If I understand correctly, the issue you are describing is a known JDK socket behaviour when the connection is lost mid-write. See this mailing list thread: http://markmail.org/thread/3vw6qshxsmu7fv6n.
Note that if RabbitMQ is shut down, the TCP connection should be closed in a way that's quickly observable by the client. However, it is true that stale TCP connections can take a while to be detected, which is why RabbitMQ's core protocol has heartbeats. Set the heartbeat interval to a low value (say, 6-8) and the client itself will notice an unresponsive peer within that amount of time.
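As a rough sketch, the heartbeat can be configured on the Java client like this (host and values are placeholders):

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");            // placeholder broker address
factory.setRequestedHeartbeat(8);        // heartbeat interval in seconds
factory.setConnectionTimeout(5000);      // fail fast if the broker is unreachable (milliseconds)
Connection connection = factory.newConnection();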
You need to use publisher confirms [1], but also account for the fact that the app itself can go down right before sending a message. As you rightly point out, having a disk-based WAL (write-ahead log) is a common solution for this problem. Note that it is both quite tricky to get right and still leaves some time window in which your app process shutting down can result in an unpublished and unlogged message.
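A minimal sketch of publisher confirms with the Java client, reusing the exchange, routingKey and message variables from your send() method (the 5-second timeout is an arbitrary choice):

Channel channel = connection.createChannel();
channel.confirmSelect();                          // put the channel into confirm mode
channel.exchangeDeclare(exchange, exchangeType);
channel.basicPublish(exchange, routingKey, null, message.getBytes());
channel.waitForConfirmsOrDie(5000);               // blocks until the broker confirms; throws on failure or timeout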
No promises on the time frame, but the idea of adding a WAL to the Java client has been discussed.
[1] http://www.rabbitmq.com/confirms.html
I haven't been able to figure this one out from Google alone. I am connecting to a non-durable EMS topic, which publishes updates to a set of data. If I skip a few updates, it doesn't matter, as the following update will overwrite it anyway.
The number of messages being published on the EMS topic is quite high, and occasionally, for whatever reason, the consumer lags behind. Is there a way, on the client connection side, to set a 'time to live' for messages? I know there is on other brokers, but specifically on TIBCO EMS I have been unable to figure out whether it's possible or not, only that this parameter can definitely be set on the server side for all clients (which is not an option for me).
I am creating my connection factory and then creating an Apache Camel jms endpoint with the following code:
TibjmsConnectionFactory connectionFactory = new TibjmsConnectionFactory();
connectionFactory.setServerUrl(properties.getProperty(endpoints.getServerUrl()));
connectionFactory.setUserName(properties.getProperty(endpoints.getUsername()));
connectionFactory.setUserPassword(properties.getProperty(endpoints.getPassword()));
JmsComponent emsComponent = JmsComponent.jmsComponent(connectionFactory);
emsComponent.setAsyncConsumer(true);
emsComponent.setConcurrentConsumers(Integer.parseInt(properties.getProperty("jms.concurrent.consumers")));
emsComponent.setDeliveryPersistent(false);
emsComponent.setClientId("MyClient." + ManagementFactory.getRuntimeMXBean().getName() + "." + emsConnectionNumber.getAndIncrement());
return emsComponent;
I am using tibjms-6.0.1, tibjmsufo-6.0.1, and various other tib***-6.0.1.
The JMSExpiration property can be set per message or, more globally, at the destination level (in which case the JMSExpiration of all messages received in this destination is overridden). It cannot be set per consumer.
One option would be to create a bridge from the topic to a custom queue that only your consumer application will listen to, and set the "expiration" property of this queue to 0 (unlimited). All messages published on the topic will then be copied to this queue and won't ever expire, whatever their JMSExpiration value.
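For completeness, per-message expiration in standard JMS is stamped by the producer, so it only helps if you can influence the publishing side (which the question says is not an option here). A sketch with an arbitrary 5-second TTL:

// all messages sent by this producer expire 5 seconds after being sent
MessageProducer producer = session.createProducer(topic);
producer.setTimeToLive(5000);

// or per send: message, delivery mode, priority, time-to-live in milliseconds
producer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, 5000);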
I'm sending JMS requests to a Weblogic 10.3 server through a named JMS queue, and receive a reply back through a temporary queue.
Client (barebone):
//init
Destination replyQueue = session.createTemporaryQueue();
replyConsumer = session.createConsumer(replyQueue);
...
//loop
TextMessage requestMessage = session.createTextMessage();
requestMessage.setText("Some request");
requestMessage.setJMSReplyTo(replyQueue);
requestProducer.send(requestMessage);
Message msg = replyConsumer.receive(5000);
if (msg instanceof TextMessage) {
...
} else { ... }
//loop end
Server MDB (message driven bean):
public void onMessage(Message msg) {
    if (msg instanceof TextMessage) {
        ...
        TextMessage replyMessage = jmsSession.createTextMessage();
        replyMessage.setText("Some response");
        replyMessage.setJMSCorrelationID(msg.getJMSCorrelationID());
        replyProducer.send(replyMessage);
    }
}
The problem is that the very first server reply is often lost! That is, the replyConsumer.receive(5000) call ends with a timeout for roughly every 4th-5th replyConsumer. Once the consumer receives the first answer, it continues to receive all the rest, so the problem is only with the first message sent through the temporary queue after the temp queue has been created.
My question: Do I have to set something special for the temporary queue in order for it to work from the very start after being created? Or any other hints?
Further info:
When testing against my local development machine, the temp queues work without problem. The messages are getting lost only when testing against our clustered Weblogic server. However, I have switched off all cluster members but one instance.
I have verified that the server successfully replies all the requests that the client sends (by counting the sent requests and sent replies). The server replies in the order of milliseconds, even for the lost replies.
When I replace the temporary queue with a regular named queue, the problem disappears! So the problem doesn't seem (to me) to be in my code.
I've also tried to modify the expiration, persistence, delay, etc. of the reply message, but without success. This way I ruled out the scenario in which the response arrives before the client begins to read the queue and then immediately expires, not giving the client a chance to process it.
Edit: Instead of the synchronous replyConsumer.receive(5000) I've also tried to use the asynchronous replyConsumer.setMessageListener(this). The behaviour hasn't changed, first messages are still getting lost for temp queues.
Edit: It seems that there's something wrong with the WebLogic server (or cluster) I am using, because when I deployed the server application to another WebLogic cluster we have, everything began to work correctly! Both clusters should be configured identically, so where's the difference? It scares me that WebLogic signals no error.
Your problem seems to be that sometimes the server receives the published reply and discards it before your consumer has started receiving.
The way around it is to use the asynchronous receive (replyConsumer.setMessageListener) instead of the blocking call you currently have (replyConsumer.receive(5000)), and to register the listener along with the rest of your consumer setup code.
That way, you are already listening for replies before you send out the request.
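A minimal sketch of that ordering, registering the listener on the temporary reply queue before the request goes out (variable names follow the client code above; the connection variable is assumed):

// register the listener first ...
replyConsumer.setMessageListener(new MessageListener() {
    @Override
    public void onMessage(Message msg) {
        if (msg instanceof TextMessage) {
            // handle the reply here
        }
    }
});
connection.start();

// ... and only then send the request
TextMessage requestMessage = session.createTextMessage("Some request");
requestMessage.setJMSReplyTo(replyQueue);
requestProducer.send(requestMessage);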
Hope that helps.
Edit: Just read that you are using a temporary queue, so my first sentence is not correct. However, as an experiment, try the rest of my response to see if it changes the behaviour you are seeing.
Is it possible to send a message to a particular receiver using a JMS queue (HornetQ)?
Among many receivers, I want certain messages to be received only by the receivers that are running on Linux.
Every suggestion is appreciated.
Thanks.
You can set a message property using Message.setObjectProperty(String, Object) and then have your consumers select the messages they are interested in using Session.createConsumer(Destination, String)
Sender example:
Message message = session.createMessage();
message.setObjectProperty("OS", "LINUX");
producer.send(message);
Receiver example:
MessageConsumer consumer = session.createConsumer(destination, "OS = 'LINUX'");
//Use consumer to receive messages.
The receiver in the example will ignore all messages that do not match the selector (they will go to some other receiver). In this case, all messages where the 'OS' property is not 'LINUX' will be ignored by this consumer.
You can set properties of a JMS message: http://download.oracle.com/javaee/1.4/api/javax/jms/TextMessage.html and filter the messages on the client side.
For example,
message.setStringProperty("TARGET_OS", "LINUX") - at the sender
detect the OS at the receivers (see http://www.mkyong.com/java/how-to-detect-os-in-java-systemgetpropertyosname/) and filter messages with the correct TARGET_OS property
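Putting the two together, a hedged sketch where the receiver builds its selector from the OS detected at runtime (queue, session, and producer setup are assumed):

// sender: tag the message with the intended target OS
TextMessage message = session.createTextMessage("linux-only payload");
message.setStringProperty("TARGET_OS", "LINUX");
producer.send(message);

// receiver: detect the local OS and only select matching messages
String os = System.getProperty("os.name").toLowerCase().contains("linux") ? "LINUX" : "OTHER";
MessageConsumer consumer = session.createConsumer(queue, "TARGET_OS = '" + os + "'");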
You can use JMS selectors on the consumer side to look for messages that fit specific criteria.
Not sure if I am missing something, but you could keep things simple by having multiple queues, one specific to each platform; the Linux-based consumers can then listen to the Linux-specific queue alone. Your challenge will probably be routing the messages to the appropriate queue on the producer side, but that should be fairly easy if the routing is based on some attribute of the message, as sketched below.
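A rough sketch of that producer-side routing, assuming hypothetical queue names and a hypothetical helper that inspects the message:

// route each message to a platform-specific queue based on an attribute of the message
String targetOs = extractTargetOs(payload);    // hypothetical helper that inspects the payload
Queue destination = "LINUX".equals(targetOs)
        ? session.createQueue("TASKS.LINUX")   // placeholder queue names
        : session.createQueue("TASKS.OTHER");

MessageProducer producer = session.createProducer(destination);
producer.send(session.createTextMessage(payload));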