I have a Spring application that consumes messages on a specific port (say 9001), restructures them and then forwards to a Rabbit MQ server. The code segment is:
private void send(String routingKey, String message) throws Exception {
    String exchange = applicationConfiguration.getAMQPExchange();
    String exchangeType = applicationConfiguration.getAMQPExchangeType();
    Connection connection = myConnection.getConnection();
    Channel channel = connection.createChannel();
    channel.exchangeDeclare(exchange, exchangeType);
    channel.basicPublish(exchange, routingKey, null, message.getBytes());
    log.debug(" [CORE: AMQP] Sent message with key {} : {}", routingKey, message);
}
If the Rabbit MQ server fails (crashes, runs out of RAM, is turned off, etc.) the code above blocks, preventing the upstream service from receiving messages (a bad thing). I am looking for a way of preventing this behaviour whilst not losing messages, so that at some point in the future they can be resent.
I am not sure how best to address this. One option may be to queue the messages to a disk file and then use a separate thread to read and forward to the Rabbit MQ server?
If I understand correctly, the issue you are describing is a known JDK socket behaviour when the connection is lost mid-write. See this mailing list thread: http://markmail.org/thread/3vw6qshxsmu7fv6n.
Note that if RabbitMQ is shut down, the TCP connection should be closed in a way that is quickly observable by the client. However, it is true that stale TCP connections can take a while to be detected, which is why RabbitMQ's core protocol has heartbeats. Set the heartbeat interval to a low value (say, 6-8 seconds) and the client itself will notice an unresponsive peer within that amount of time.
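For example, with the RabbitMQ Java client the heartbeat is requested on the ConnectionFactory (a minimal sketch; the host and the 6-second interval are just example values):
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");        // example host
factory.setRequestedHeartbeat(6);    // heartbeat interval in seconds
Connection connection = factory.newConnection();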
You need to use Publisher Confirms [1], but also account for the fact that the app itself can go down right before sending a message. As you rightly point out, having a disk-based WAL (write-ahead log) is a common solution for this problem. Note that it is both quite tricky to get right and still leaves some time window in which your app process shutting down can result in an unpublished and unlogged message.
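A minimal sketch of publisher confirms with the Java client, reusing the channel, exchange, routingKey and message variables from the question (the 5-second wait is an arbitrary choice):
channel.confirmSelect();    // put the channel into confirm mode
channel.basicPublish(exchange, routingKey, null, message.getBytes());
// Blocks until the broker confirms the message; throws if it is nacked or the wait times out.
channel.waitForConfirmsOrDie(5000);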
No promises on the time frame but the idea of adding WAL to the Java client has been discussed.
[1] http://www.rabbitmq.com/confirms.html
I'm using javax.websocket API in my app. I send messages from server to client like this:
Future<Void> messageFuture = session.getAsyncRemote().sendText(message);
messageFutures.add(messageFuture); // List<Future<Void>> messageFutures
I use the async API because I really care about performance and cannot make the server wait until each message is delivered, because the server does something like this:
for (i = 1..N) {
    result = doStuff()
    sendMessage(result)
}
So it is impossible to wait for message delivery each iteration.
After I send all the messages I need to wait for all the Futures to finish (i.e. all messages are delivered). And to be safe I need to use some timeout like "if the server sends a message to the client and the client doesn't confirm receipt within 30 seconds, then consider the websocket connection broken" - as far as I understand this should be possible with websockets since they work over TCP.
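Roughly what I have in mind for the waiting part (a sketch using the plain java.util.concurrent Future API; awaitDelivery is just a name I made up, and the 30-second limit is the example value from above):
import java.util.List;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import javax.websocket.Session;

void awaitDelivery(List<Future<Void>> messageFutures, Session session) throws Exception {
    for (Future<Void> f : messageFutures) {
        try {
            f.get(30, TimeUnit.SECONDS);   // wait at most 30s for this send to complete
        } catch (TimeoutException e) {
            session.close();               // took too long: treat the connection as broken
            break;
        }
    }
}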
There is a method session.setMaxIdleTimeout(long):
Set the non-zero number of milliseconds before this session will be
closed by the container if it is inactive, ie no messages are either
sent or received. A value that is 0 or negative indicates the session
will never timeout due to inactivity.
but I'm really not sure if it is what I want (is it?). So how can I set a timeout like the one I described using the javax.websocket API?
The idle timeout could cover your case, but it is not designed to. The idle timeout applies more to the case where a client makes a connection, but is using it only infrequently.
The more precise feature for checking a timeout when sending is setAsyncSendTimeout.
Using both of these allows you to configure for the case where a client may leave a connection idle for minutes at a time, but the server expects relatively quick message acknowledgements.
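For example, both can be set through the standard javax.websocket API (a sketch with arbitrary values, reusing the session variable from the question; on the server side the same async-send setter is available on the ServerContainer):
import javax.websocket.ContainerProvider;
import javax.websocket.WebSocketContainer;

// The async send timeout is configured on the container and applies to getAsyncRemote() sends.
WebSocketContainer container = ContainerProvider.getWebSocketContainer();
container.setAsyncSendTimeout(30_000);    // fail async sends that do not complete within 30 seconds

// The idle timeout is configured per session.
session.setMaxIdleTimeout(5 * 60_000);    // close the session after 5 minutes with no traffic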
In my experience with Spring, the timeout implementation provided by Spring is not actually configurable. See How do you quickly close a nonresponsive websocket in Java Spring Tomcat? I am not sure whether this is applicable to your websocket implementation.
I haven't been able to figure this one out from Google alone. I am connecting to a non-durable EMS topic, which publishes updates to a set of data. If I skip a few updates, it doesn't matter, as the following update will overwrite it anyway.
The number of messages being published on the EMS topic is quite high, and occasionally for whatever reason the consumer lags behind. Is there a way, on the client connection side, to determine a 'time to live' for messages? I know there is on other brokers, but specifically on Tibco I have been unable to figure out whether it's possible or not, only that this parameter can definitely be set on the server side for all clients (this is not an option for me).
I am creating my connection factory and then creating an Apache Camel jms endpoint with the following code:
TibjmsConnectionFactory connectionFactory = new TibjmsConnectionFactory();
connectionFactory.setServerUrl(properties.getProperty(endpoints.getServerUrl()));
connectionFactory.setUserName(properties.getProperty(endpoints.getUsername()));
connectionFactory.setUserPassword(properties.getProperty(endpoints.getPassword()));
JmsComponent emsComponent = JmsComponent.jmsComponent(connectionFactory);
emsComponent.setAsyncConsumer(true);
emsComponent.setConcurrentConsumers(Integer.parseInt(properties.getProperty("jms.concurrent.consumers")));
emsComponent.setDeliveryPersistent(false);
emsComponent.setClientId("MyClient." + ManagementFactory.getRuntimeMXBean().getName() + "." + emsConnectionNumber.getAndIncrement());
return emsComponent;
I am using tibjms-6.0.1, tibjmsufo-6.0.1, and various other tib***-6.0.1.
The JMSExpiration property can be set per message or, more globally, at the destination level (in which case the JMSExpiration of all messages received in this destination is overridden). It cannot be set per consumer.
One option would be to create a bridge from the topic to a custom queue that only your consumer application will listen to, and set the "expiration" property of this queue to 0 (unlimited). All messages published on the topic will then be copied to this queue and won't ever expire, whatever their JMSExpiration value.
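For reference, if you also controlled the publishing side, expiration is set per message (or per producer) with the standard JMS API rather than per consumer. A minimal, not EMS-specific sketch, assuming session, topic and message already exist:
MessageProducer producer = session.createProducer(topic);
producer.setTimeToLive(0);    // default time-to-live for this producer, in ms; 0 means never expire
// ...or per send:
producer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, 60_000);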
I have been developing my first TCP/Socket based application with Apache Mina, it looks great and easy to do things. I just want to ask a question here about Mina.
The server imposes an idle time of 5 seconds, after which it will terminate the socket connection, so we have to send a periodic heartbeat (echo message / keepalive) to make sure the connection stays alive - a sort of keepalive mechanism.
One way is to blindly send an echo/heartbeat message just before every 5 seconds elapse. I am thinking there should be a smarter/more intelligent way, an "idle monitor": if I am sending my business messages and never reach the 5-second idle time, I should not issue a heartbeat message at all. The heartbeat should be sent only if the whole connection is idle, so that we save bandwidth and keep reading and writing on the socket fast.
You can achieve this by using the KeepAliveFilter (already present in MINA).
Alternatively, you can achieve smarter sending of the echo/heartbeat by setting the client's session idle timeout a bit smaller than the server's idle timeout. For example:
For server side
NioSocketAcceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 5);
and for client side it would be
NioSocketConnector.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 3);
Now, if there is no communication for, let's say, 3 seconds, sessionIdle will be triggered at the client side (and it will not be triggered at the server side, as the timeout there is 5 seconds) and you can send an echo. This will keep the session alive. The echo will be sent only if the session is idle.
Note: I am assuming that on session idle the session is being closed at the server side. If it is the other way around, you will need to swap the session idle timeout values (e.g. 3 seconds for the server and 5 seconds for the client) and the echo will be sent from the server.
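A minimal sketch of the client-side handler that sends the echo when the session goes idle (the handler name and the "PING" payload are just placeholders):
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IdleStatus;
import org.apache.mina.core.session.IoSession;

public class HeartbeatHandler extends IoHandlerAdapter {
    @Override
    public void sessionIdle(IoSession session, IdleStatus status) {
        // Fires only after the 3-second idle time configured above, i.e. when no
        // business traffic has kept the connection busy.
        session.write("PING");    // placeholder heartbeat payload
    }
}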
(I hope I'm understanding the question correctly)
I was having trouble keeping my session alive and this question came up on Google search results so I'm hoping someone else will find it useful:
@Test
public void testClientWithHeartBeat() throws Exception {
    SshClient client = SshClient.setUpDefaultClient();
    client.getProperties().put(ClientFactoryManager.HEARTBEAT_INTERVAL, "500");
    client.start();
    ClientSession session = client.connect("localhost", port).await().getSession();
    session.authPassword("smx", "smx").await().isSuccess();
    ClientChannel channel = session.createChannel(ClientChannel.CHANNEL_SHELL);
    int state = channel.waitFor(ClientChannel.CLOSED, 2000);
    assertTrue((state & ClientChannel.CLOSED) == 0);
    channel.close(false);
    client.stop();
}
(Source: https://issues.apache.org/jira/browse/SSHD-185)
In newer versions (e.g. version 2.8.0), enabling heartbeats changed to CoreModuleProperties.HEARTBEAT_INTERVAL.set(client, Duration.ofMillis(500));
I'm not sure I totally understand your question, but you can send a heartbeat in an overridden sessionIdle method of the IoHandlerAdapter. You don't necessarily need to close a session just because Mina on the server side calls sessionIdle. As far as a more intelligent way of maintaining an active connection between a server and client without this type of heartbeat communication, I have never heard of one.
Here is an interesting read on how Microsoft handles their heartbeat in ActiveSync. I personally used this methodology when using Mina in my client/server application. Hope this helps you some.
I am trying to handle flow control situation on producer end.
I have a queue on a qpid-broker with a max queue-size set. Also have flow_stop_count and flow_resume_count set on the queue.
Now the producer keeps on continuously producing messages until this flow_stop_count is reached. Upon breach of this count, an exception is thrown, which is handled by the ExceptionListener.
Now sometime later the consumer on queue will catch up and the flow_resume_count will be reached. The question is how does the producer know of this event.
Here's a sample code of the producer
Connection connection = connectionFactory.createConnection();
connection.setExceptionListener(new MyExceptionListener());
connection.start();
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
Queue queue = (Queue) context.lookup("Test");
MessageProducer producer = session.createProducer(queue);
while (notStopped) {
    while (suspend) { // --------------------------- how to resume this flag???
        Thread.sleep(1000);
    }
    TextMessage message = session.createTextMessage();
    message.setText("TestMessage");
    producer.send(message);
}
session.close();
connection.close();
and for the exception listener
private class MyExceptionListener implements ExceptionListener {
    public void onException(JMSException e) {
        System.out.println("got exception: " + e.getMessage());
        suspend = true;
    }
}
Now the ExceptionListener is a generic listener for exceptions, so it does not seem like a good idea to suspend the producer flow through that.
What I need is perhaps some method at the producer level, something like producer.isFlowStopped(), which I can use to check before sending a message. Does such functionality exist in the Qpid API?
There is some documentation on the Qpid website which suggests this can be done, but I couldn't find any examples of it being done anywhere.
Is there some standard way of handling this kind of scenario?
From what I have read in the Apache Qpid documentation, it seems that flow_stop_count and flow_resume_count will cause producers to be blocked.
Therefore the only option, software-wise, would be to poll at regular intervals until messages start flowing again.
Extract from here.
If a producer sends to a queue which is overfull, the broker will respond by instructing the client not to send any more messages. The impact of this is that any future attempts to send will block until the broker rescinds the flow control order.
While blocking the client will periodically log the fact that it is blocked waiting on flow control.
WARN AMQSession - Broker enforced flow control has been enforced
WARN AMQSession - Message send delayed by 5s due to broker enforced flow control
WARN AMQSession - Message send delayed by 10s due to broker enforced flow control
After a set period the send will timeout and throw a JMSException to the calling code.
ERROR AMQSession - Message send failed due to timeout waiting on broker enforced flow control.
This documentation implies that the software managing the producer has to manage the situation itself. So basically, when you receive an exception saying that the queue is overfull, you will need to back off, and most likely poll and reattempt to send your messages.
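A rough sketch of that back-off-and-retry approach around the send call (sendWithRetry is a hypothetical helper, not part of the Qpid API, and the retry limit and delays are arbitrary):
void sendWithRetry(MessageProducer producer, TextMessage message) throws Exception {
    long delayMs = 1000;
    for (int attempt = 0; attempt < 10; attempt++) {
        try {
            producer.send(message);
            return;                              // sent successfully
        } catch (JMSException e) {
            // Broker-enforced flow control (or send timeout): back off and retry.
            Thread.sleep(delayMs);
            delayMs = Math.min(delayMs * 2, 30_000);
        }
    }
    throw new JMSException("Giving up: queue still flow-controlled after retries");
}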
You can try setting the capacity (the size in bytes at which the queue is considered full) and flowResumeCapacity (the queue size at which producers are unflowed) properties for a queue.
send() will then be blocked if the size exceeds the capacity value.
You can have a look at this test case file in the repo to get an idea.
Producer flow control is not yet implemented on the JMS client.
See https://issues.apache.org/jira/browse/QPID-3388
I'm using the IA92 Java implementation for MQTT, which allows me to connect to a MQTT broker. In order to establish the connection, I'm doing something like this:
// Create connection spec
String mqttConnSpec = "tcp://the_server#the_port";
// Create the client and connect
mqttClient = MqttClient.createMqttClient(mqttConnSpec, null);
mqttClient.connect("the_id", true, 666);
The problem is that sometimes the server takes too much time to send a response, and it throws a timeout exception:
org.apache.harmony.luni.platform.OSNetworkSystem.connectStreamWithTimeoutSocket(OSNetworkSystem.java:130)
at org.apache.harmony.luni.net.PlainSocketImpl.connect(PlainSocketImpl.java:246)
at org.apache.harmony.luni.net.PlainSocketImpl.connect(PlainSocketImpl.java:533)
at java.net.Socket.connect(Socket.java:1055)
at com.ibm.mqtt.j2se.MqttJava14NetSocket.<init>((null):-1)
at com.ibm.mqtt.j2se.MqttJavaNetSocket.setConnection((null):-1)
at com.ibm.mqtt.Mqtt.tcpipConnect((null):-1)
at com.ibm.mqtt.MqttBaseClient.doConnect((null):-1)
at com.ibm.mqtt.MqttBaseClient.connect((null):-1)
at com.ibm.mqtt.MqttClient.connect((null):-1)
at com.ibm.mqtt.MqttClient.connect((null):-1)
What I need to do is setting a timeout manually, instead of letting the mqtt client decide that. The documentation says: There are also methods for setting attributes of the MQ Telemetry Transport connection, such as timeouts and retries.
But, honestly, I haven't found anything about it. I have taken a look at the whole javadoc reference and there's no evidence of timeout configuration. I can't see the source code since it's not open source.
So how can I set the timeout for the Mqtt connection?
If you are unsure, you can look at MqttConnectOptions for the details.
String userName = "Ohelig";
String password = "Pojke";
MqttClient client = new MqttClient("tcp://192.168.1.4:1883", "Sending");
MqttConnectOptions authen = new MqttConnectOptions();
authen.setUserName(userName);
authen.setPassword(password.toCharArray());
authen.setKeepAliveInterval(30);     // keep-alive interval in seconds
authen.setConnectionTimeout(300);    // connection timeout in seconds
client.connect(authen);
I don't know anything about ia92, but I'd imagine that the 666 in the connect() call is what you're trying to set the timeout to?
The timeout the documentation is referring to is probably the keepalive timeout. This is the maximum number of seconds (chosen by the client) that can elapse without communication between the server and client. I think this is what you're most interested in.
Retries on the other hand are most likely to refer to the retrying of messages that seem to have gone astray when sending messages with QoS>0. This will be something handled by the client library code though, rather than the broker. This is something that comes into play only after you've connected though, so I very much doubt it's your problem.
To be sure that the keepalive timeout is being set correctly, I'd try pointing your client at a modified mosquitto broker. You can modify mqtt3_handle_connect() in src/read_handle_server.c to print out the keepalive value when you connect. This will ensure it's doing what you think, but won't help with the actual problem I'm afraid!
What broker do you use? Really Small Message Broker V1.1 Alpha, Mosquitto, or the broker that comes with IBM WebSphere? You need to set this timeout value in your server configuration, because that is how the system works: you set a keep alive value in your broker, the client sends a ping before that interval expires so that the broker does not close the client-server connection, and the process restarts. Actually, even if that interval expires, the server will still not close the connection until the 'grace period' ends. See http://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html#connect