I am using Spring JMS with a concurrency of 50-100, allowing a maximum of 200 connections. Everything works as expected, but problems start when I try to retrieve 100k messages from the queue, i.e. there are 100k messages on my SQS queue and I read them through the normal Spring JMS approach:
@JmsListener
public void process(String message) {
    count++;
    System.out.println(count);
    // code
}
I see all the logs in my console, but after around 17k messages it starts throwing exceptions, something like: AWS SDK exception: port already in use.
Why do I see this exception, and how do I get rid of it? I searched the internet but couldn't find anything.
My settings:
Concurrency: 50-100
Messages per task: 50
Acknowledge mode: CLIENT_ACKNOWLEDGE
timestamp=10:27:57.183, level=WARN , logger=c.a.s.j.SQSMessageConsumerPrefetch, message={ConsumerPrefetchThread-30} Encountered exception during receive in ConsumerPrefetch thread,
javax.jms.JMSException: AmazonClientException: receiveMessage.
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.handleException(AmazonSQSMessagingClientWrapper.java:422)
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.receiveMessage(AmazonSQSMessagingClientWrapper.java:339)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.getMessages(SQSMessageConsumerPrefetch.java:248)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.run(SQSMessageConsumerPrefetch.java:207)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Address already in use: connect
Update : I investigated the problem, and it seems that new sockets keep being created until all available sockets are exhausted.
My Spring JMS version is 4.3.10.
To replicate this problem, use the configuration above with max connections set to 200 and concurrency set to 50-100, and push some 40k messages to the SQS queue. You can use https://github.com/adamw/elasticmq as a local server that replicates Amazon SQS. Once that is set up, comment out the @JmsListener annotation and use SoapUI load testing to call send message and fire many messages; because the @JmsListener annotation is commented out, nothing will consume messages from the queue. Once you see that you have sent 40k messages, stop, uncomment @JmsListener and restart the server.
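For local reproduction, the SQS client can be pointed at ElasticMQ roughly like the sketch below (the endpoint, region string and dummy credentials are illustrative; ElasticMQ listens on 9324 by default):

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

// Build an SQS client against a local ElasticMQ instance with the same max-connections setting.
AmazonSQS amazonSQSclient = AmazonSQSClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:9324", "elasticmq"))
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("x", "x")))
        .withClientConfiguration(new ClientConfiguration().withMaxConnections(200))
        .build();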
Update :
DefaultJmsListenerContainerFactory factory =
new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory);
factory.setDestinationResolver(new DynamicDestinationResolver());
factory.setErrorHandler(Throwable::printStackTrace);
factory.setConcurrency("50-100");
factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
return factory;
Update :
SQSConnectionFactory connectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
Update :
Client configuration details :
Protocol : HTTP
Max connections : 200
Update :
I tried the CachingConnectionFactory class, but I read on Stack Overflow and in the official documentation that CachingConnectionFactory should not be used with DefaultJmsListenerContainerFactory:
https://stackoverflow.com/a/21989895/5871514
It gives the same error that I got before, though.
Update:
My goal is to reach 500 TPS, i.e. I should be able to consume that many messages per second. I tried this approach and it seems I can reach 100-200, but not more than that, and at high concurrency this issue is a blocker. If you have a better solution to achieve this, I am all ears.
Update:
I am using AmazonSQSClient.
Starvation on the Consumer
One possible optimization that JMS clients tend to implement is a message consumption buffer, or "prefetch". This buffer is sometimes tunable via a number of messages or a buffer size in bytes.
The intention is to prevent the consumer from going to the server every single time it receives a message, and instead to pull multiple messages in a batch.
In an environment where you have many "fast consumers" (which is the opinionated view these libraries may take), this prefetch is set to a somewhat high default in order to minimize these round trips.
However, in an environment with slow message consumers, this prefetch can be a problem. A slow consumer holds up consumption of the messages it has prefetched away from faster consumers. In a highly concurrent environment, this can cause starvation quickly.
That being the case, the SQSConnectionFactory has a property for this:
SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
sqsConnectionFactory.setNumberOfMessagesToPrefetch(0);
Starvation on the Producer (i.e. via JmsTemplate)
It is very common for these JMS implementations to expect to be interfaced to the broker via some intermediary. These intermediaries cache and reuse connections, or use a pooling mechanism to reuse them. In the Java EE world, this is usually taken care of by a JCA adapter or another mechanism on a Java EE server.
Because of the way Spring JMS works, it expects an intermediary delegate for the ConnectionFactory to exist to do this caching/pooling. Otherwise, when Spring JMS wants to connect to the broker, it will attempt to open a new connection and session (!) every time you want to do something with the broker.
To solve this, Spring provides a few options. The simplest is the CachingConnectionFactory, which caches a single Connection and allows many Sessions to be opened on that Connection. A simple way to add this to your @Configuration above would be something like:
@Bean
public ConnectionFactory connectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // Doing the following is key!
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setTargetConnectionFactory(sqsConnectionFactory);
    // Set the connectionFactory properties to your liking here...
    return connectionFactory;
}
If you want something fancier as a JMS pooling solution (one that pools Connections and MessageProducers for you in addition to multiple Sessions), you can use the reasonably new PooledJMS project's JmsPoolConnectionFactory, or the like, from their library.
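A minimal sketch of that wiring, assuming the org.messaginghub:pooled-jms dependency is on the classpath (the pool size is illustrative):

import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;

@Bean
public ConnectionFactory pooledConnectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // Pools Connections, Sessions and MessageProducers instead of opening new ones per operation.
    JmsPoolConnectionFactory pooled = new JmsPoolConnectionFactory();
    pooled.setConnectionFactory(sqsConnectionFactory);
    pooled.setMaxConnections(8); // illustrative value, tune for your load
    return pooled;
}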
Related
I'm trying to execute an application under (reasonable) load. What is happening under load is that when trying to place a message onto a queue, the application stalls for about 4 seconds before completing the send. The strange part is that immediately after doing this, the next message takes a matter of milliseconds to place onto the queue. The message is in fact the same message - so the message size isn't a factor.
The application is using Spring Boot 2.1.6, Apache Qpid 0.43.0 as the JMS/AMQP provider.
The message bus being used is Azure ServiceBus, but I have observed the same behaviour using Artemis.
On the Apache Qpid JmsConnectionFactory, I have tried fiddling with the "forceSyncSend" property.
I've tried using the Spring Boot CachingConnectionFactory to cache message producers only. I have increased the default cache size from 1 to 20 without any success.
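For reference, the caching setup was roughly the following (qpidConnectionFactory is the Qpid JMS JmsConnectionFactory; the names and values only illustrate what I described above):

CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(qpidConnectionFactory);
cachingConnectionFactory.setCacheProducers(true);   // cache MessageProducers
cachingConnectionFactory.setCacheConsumers(false);  // producers only
cachingConnectionFactory.setSessionCacheSize(20);   // raised from the default of 1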
I've looked at the JmsTemplate parameters but can't find any parameters in regard to message producers (plenty with listeners but that's another story).
The code doing the sending is quite simple:
private void sendToQueue(Object message, String queueName) {
    jmsTemplate.convertAndSend(queueName, message, (Message jmsMessage) -> {
        jmsMessage.setStringProperty(OBJECT_TYPE_PARAMETER, message.getClass().getSimpleName());
        return jmsMessage;
    });
}
Is there anything obvious to try? Are there any tuning parameters to stop this stalling happening?
The load on the system is not trivial, but it is not excessive (it needs to go a lot higher than where it is at the moment!)
Any ideas?
I need to listen to a queue on two servers. The queue name is the same on both. The first server is the primary, the second is the backup.
When the main server is down, consumption from the backup server's queue should continue.
My class:
@RabbitListener(queues = "to_client")
public class ClientRabbitService {
Now I use RoutingConnectionFactory:
@Bean
@Primary
public ConnectionFactory routingConnectionFactory() {
SimpleRoutingConnectionFactory rcf = new SimpleRoutingConnectionFactory();
Map<Object, ConnectionFactory> map = new HashMap<>();
map.put("[to_kernel]", mainConnectionFactory());
map.put("[to_kernel_reserve]", reserveConnectionFactory());
map.put("[to_client]", mainConnectionFactory());
rcf.setTargetConnectionFactories(map);
return rcf;
}
[to_kernel] and [to_kernel_reserve] - the queues for sending messages only, [to_client] - to receive them.
Any ideas please?
Is the queue on the backup server populated only when the primary server is down? If so, you can simply always listen to both queues (the queue on the secondary server will be empty while the primary is up).
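A minimal sketch of that idea, assuming two SimpleRabbitListenerContainerFactory beans (the factory names here are illustrative) wired to your mainConnectionFactory() and reserveConnectionFactory():

// One listener per broker; each container factory is bound to its own connection factory.
@RabbitListener(queues = "to_client", containerFactory = "mainContainerFactory")
public void fromMain(String message) {
    // handle message from the primary server
}

@RabbitListener(queues = "to_client", containerFactory = "reserveContainerFactory")
public void fromReserve(String message) {
    // handle message from the backup server
}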
Note that your solution would be more reliable if you use RabbitMQ clustering.
Then you connect to the cluster (you specify the addresses of all machines in the cluster).
It is explained in the official documentation: https://docs.spring.io/spring-amqp/reference/htmlsingle/#connections
Alternatively, if running in a clustered environment, use the
addresses attribute.
<rabbit:connection-factory id="connectionFactory" addresses="host1:5672,host2:5672"/>
When using a cluster you will have a single queue (replicated across the cluster). Note that RabbitMQ suffers a significant performance hit when using replication; be sure to read the official documentation on how to configure clustering: https://www.rabbitmq.com/clustering.html
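In Java configuration the equivalent is roughly the following (note this is Spring AMQP's CachingConnectionFactory, not the JMS one):

// The client will try host1 first and fail over to host2 if it is unreachable.
CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
connectionFactory.setAddresses("host1:5672,host2:5672");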
I have had several problems with a Camel producer; each time I tried to solve one, I ran into another.
1) The first implementation created a producer template every time we needed to communicate with an ActiveMQ topic. That resulted in poor memory behaviour and eventually crashed the server after some time.
The solution to the memory problem was to stop() the producer template after each request. That fix corrected the memory issue but caused a latency problem.
2) I read somewhere that it is not necessary to create a producer template each time, so to fix the latency problem I declared a single producer template in my class and used it for every request. It seems to work fine: no memory leak, and the latency problem is gone...
BUT, when we send multiple queries that take a long time (20 seconds each), it looks like we hit a timeout and the component crashes with something like "javax.jms.IllegalStateException: The Session is closed".
Is there a way to do multithreading? Is this caused by using the InOut exchange pattern? How does MAXIMUM_CACHE_POOL_SIZE work? Is my implementation right?
Here is a sample of my component's code:
public void process(Exchange exchange) throws Exception
{
Message in = exchange.getIn();
if (producerTemplate == null) {
CamelContext camelContext = exchange.getContext();
//camelContext.getProperties().put(Exchange.MAXIMUM_CACHE_POOL_SIZE, "50");
producerTemplate = camelContext.createProducerTemplate();
}
...
result = producerTemplate.sendBody(String.format("activemq:%s", camelContext.resolvePropertyPlaceholders("{{channel1}}")), ExchangePattern.InOut, messageToSend).toString();
...
finalResult = producerTemplate.sendBody(String.format("activemq:%s", camelContext.resolvePropertyPlaceholders("{{channel2}}")), ExchangePattern.InOut, result).toString();
...
in.setBody(finalResult);
}
Yes, it is because you use the InOut pattern.
Your route expects a response on the specified reply queue, which is never received, and it therefore hits the default 20-second timeout.
Change the Exchange pattern to InOnly to resolve your issue.
Apart from that, your posted code seems to be fine.
The MAXIMUM_CACHE_POOL_SIZE is used internally in Camel, and thus does not affect the ActiveMQ endpoint settings.
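As an example, the first send from your snippet would become something like the sketch below (note that with InOnly there is no reply to call toString() on):

// Fire-and-forget: no reply queue is created and no reply timeout applies.
producerTemplate.sendBody(
        String.format("activemq:%s", camelContext.resolvePropertyPlaceholders("{{channel1}}")),
        ExchangePattern.InOnly,
        messageToSend);

If you really do need a reply, keep InOut but raise the reply timeout via the requestTimeout option on the endpoint URI (for example ?requestTimeout=60000).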
I have a Spring application that consumes messages on a specific port (say 9001), restructures them and then forwards them to a RabbitMQ server. The code segment is:
private void send(String routingKey, String message) throws Exception {
String exchange = applicationConfiguration.getAMQPExchange();
String exchangeType = applicationConfiguration.getAMQPExchangeType();
Connection connection = myConnection.getConnection();
Channel channel = connection.createChannel();
channel.exchangeDeclare(exchange, exchangeType);
channel.basicPublish(exchange, routingKey, null, message.getBytes());
log.debug(" [CORE: AMQP] Sent message with key {} : {}",routingKey, message);
}
If the RabbitMQ server fails (crashes, runs out of RAM, is turned off, etc.) the code above blocks, preventing the upstream service from receiving messages (a bad thing). I am looking for a way to prevent this behaviour while not losing messages, so that at some point in the future they can be resent.
I am not sure how best to address this. One option may be to queue the messages to a disk file and then use a separate thread to read them and forward them to the RabbitMQ server?
If I understand correctly, the issue you are describing is a known JDK socket behaviour when the connection is lost mid-write. See this mailing list thread: http://markmail.org/thread/3vw6qshxsmu7fv6n.
Note that if RabbitMQ is shut down, the TCP connection should be closed in a way that is quickly observable by the client. However, it is true that stale TCP connections can take a while to be detected; that is why RabbitMQ's core protocol has heartbeats. Set the heartbeat interval to a low value (say, 6-8 seconds) and the client itself will notice an unresponsive peer within roughly that amount of time.
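On the Java client that is a one-liner (the value is illustrative):

com.rabbitmq.client.ConnectionFactory factory = new com.rabbitmq.client.ConnectionFactory();
factory.setRequestedHeartbeat(6); // seconds; a dead peer is noticed within a few heartbeat intervals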
You need to use publisher confirms [1], but also account for the fact that the app itself can go down right before sending a message. As you rightly point out, a disk-based WAL (write-ahead log) is a common solution for this problem. Note that it is both quite tricky to get right and still leaves a time window where your app process shutting down can result in an unpublished and unlogged message.
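Applied to your send() method, publisher confirms look roughly like this sketch (the 5-second timeout is illustrative):

Channel channel = connection.createChannel();
channel.confirmSelect(); // put the channel into confirm mode
channel.exchangeDeclare(exchange, exchangeType);
channel.basicPublish(exchange, routingKey, null, message.getBytes());
// Block until the broker acknowledges the publish; throws on nack or timeout,
// at which point the caller can retry or record the message in the WAL.
channel.waitForConfirmsOrDie(5000);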
No promises on the time frame but the idea of adding WAL to the Java client has been discussed.
[1] http://www.rabbitmq.com/confirms.html
I haven't been able to figure this one out from Google alone. I am connecting to a non-durable EMS topic, which publishes updates to a set of data. If I skip a few updates, it doesn't matter, as the following update will overwrite it anyway.
The number of messages being published on the EMS topic is quite high, and occasionally, for whatever reason, the consumer lags behind. Is there a way, on the client connection side, to set a 'time to live' for messages? I know there is on other brokers, but specifically with Tibco I have been unable to figure out whether it is possible or not, only that this parameter can definitely be set on the server side for all clients (which is not an option for me).
I am creating my connection factory and then creating an Apache Camel jms endpoint with the following code:
TibjmsConnectionFactory connectionFactory = new TibjmsConnectionFactory();
connectionFactory.setServerUrl(properties.getProperty(endpoints.getServerUrl()));
connectionFactory.setUserName(properties.getProperty(endpoints.getUsername()));
connectionFactory.setUserPassword(properties.getProperty(endpoints.getPassword()));
JmsComponent emsComponent = JmsComponent.jmsComponent(connectionFactory);
emsComponent.setAsyncConsumer(true);
emsComponent.setConcurrentConsumers(Integer.parseInt(properties.getProperty("jms.concurrent.consumers")));
emsComponent.setDeliveryPersistent(false);
emsComponent.setClientId("MyClient." + ManagementFactory.getRuntimeMXBean().getName() + "." + emsConnectionNumber.getAndIncrement());
return emsComponent;
I am using tibjms-6.0.1, tibjmsufo-6.0.1, and various other tib***-6.0.1.
The JMSExpiration property can be set per message or, more globally, at the destination level (in which case the JMSExpiration of all messages received in this destination is overridden). It cannot be set per consumer.
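For completeness, on the publishing side a per-message TTL is set with the plain JMS API, roughly as follows (only useful if you control the publisher; the 5-second value is illustrative):

MessageProducer producer = session.createProducer(topic);
producer.setTimeToLive(5000); // every message sent by this producer gets JMSExpiration = send time + 5 s
// ...or per individual send:
producer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, 5000);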
One option would be to create a bridge from the topic to a custom queue that only your consumer application will listen to, and set the "expiration" property of this queue to 0 (unlimited). All messages published on the topic will then be copied to this queue and won't ever expire, whatever their JMSExpiration value.