I'm using a simple test project with Spring's JmsTemplate that sends synchronous messages with:
jmsTemplate.sendAndReceive(...)
This is the code snippet inside JmsTemplate that does this:
Message requestMessage = messageCreator.createMessage(session);
responseQueue = session.createTemporaryQueue();
producer = session.createProducer(destination);
consumer = session.createConsumer(responseQueue);
requestMessage.setJMSReplyTo(responseQueue);
if (logger.isDebugEnabled()) {
logger.debug("Sending created message: " + requestMessage);
}
doSend(producer, requestMessage);
return receiveFromConsumer(consumer, getReceiveTimeout());
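For context, the calling code on my side looks roughly like this (the destination name and payload are just placeholders):
// The reply is null if the receive times out.
Message reply = jmsTemplate.sendAndReceive("request.queue",
        session -> session.createTextMessage("request payload"));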
Everything works fine, but when I go to the Jolokia console I can see all my temporary queues at the address level.
In the standard ActiveMQ console, temporary queues are not shown (deleted?).
Because my application uses many synchronous messages, the list can grow rapidly.
I tried using
<temporary-queue-namespace>temp</temporary-queue-namespace>
with
<address-setting match="temp.#">
<enable-metrics>false</enable-metrics>
</address-setting>
But my temporary queues do not appear under the temp addresses...
Is it possible to hide temporary queues in the console? (When JmsTemplate has received the response or timed out, the consumer is closed and the temporary queue is marked as deleted.)
If not, how can I group them under one address folder, or is there something else useful to achieve this?
My application works with about 30-40 queues, and possibly 1000 or more temporary queues per day. ActiveMQ "Classic" doesn't show temporary queues in its web console, so it's easy to administer the durable queues. We plan to migrate to Artemis, and in my simple test case I see that temporary queues are shown by default in the web console next to all the other queues. If I have 1000 or more temporary queues, I have to scroll for a very long time to reach the queues I actually want to see, and after each refresh the scroll position is reset. So I want to find a way to group all temporary queues into one folder, like a namespace, or some other solution.
There are two main ways to deal with a large number of queues and problems with refreshing the JMX "tree" view.
Use the "Queues" tab to view the queues you're interested in rather than the JMX "tree" view. You can even filter out temporary queues, e.g.:
Disable refresh of the JMX "tree" view via the "Preferences" available by clicking on the user icon in the top right of the web console, e.g.:
It's worth noting that enable-metrics only deals with metrics as they relate to metrics plugins. Setting it to false does not disable the queues' MBeans.
In the future the JMX "tree" will likely be removed from the web console due, in part, to the issues you're observing.
Related
I am currently implementing a Java messaging system with Apache Camel and ActiveMQ. My goal is to dynamically set the priority of a message based on a few attributes the message has.
I already configured my ActiveMQ as explained here. Then I created the following method that sends a TextMessage:
public void send(BaseMessage baseMessage, int jmsPriority) throws JsonProcessingException {
Map<String, Object> messageHeaders = new HashMap<>();
messageHeaders.put(MESSAGING_HEADER_JMS_PRIORITY, jmsPriority);
messageHeaders.put(MESSAGING_HEADER_TYPE, baseMessage.getClass().getSimpleName());
String payload = objectMapper.writeValueAsString(baseMessage);
producerTemplate.sendBodyAndHeaders(payload, messageHeaders);
}
Sending the message works perfectly, and the dynamic type of BaseMessage is properly set in the header of each message. The priority is set as well, but it is ignored: the order of the outgoing messages is still FIFO, as queues usually behave.
So far I have not managed to set the priority of the messages dynamically. I do not want to use Apache Camel's Resequencer, since I would have to create several new queues only for "sorting". From my point of view, ActiveMQ must be able to prioritize and reorder the messages itself.
Any tip is appreciated. Ask me for further details if required.
By default, ActiveMQ disables message priority. This is normal. When doing distributed messaging (sending messages across servers), prioritization does not practically work out, since the broker can only scan so many messages in the queue for higher-priority messages before it starts to slow down all traffic for that queue.
Prioritized messages can work well when embedding a broker and using it for task dispatch, where queue depth generally doesn't exceed the low thousands.
Updated:
Reminder: per the JMS spec, QoS settings must be set on the MessageProducer object, not on the message.
Enable Prioritized Messages
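For reference, here is a minimal plain-JMS sketch of producer-level priority (the broker URL and queue name are placeholders). With Camel, the equivalent is to set the QoS options on the JMS/ActiveMQ endpoint (e.g. explicitQosEnabled=true together with priority) rather than a message header, and the broker-side prioritizedMessages policy still needs to be enabled as described in the link above:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PrioritySendSketch {
    public static void main(String[] args) throws JMSException {
        // Placeholder broker URL and queue name.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("example.queue"));
            TextMessage message = session.createTextMessage("payload");

            // QoS lives on the producer: either set a default priority for all sends...
            producer.setPriority(8);
            producer.send(message);

            // ...or pass deliveryMode, priority and timeToLive explicitly per send.
            producer.send(message, DeliveryMode.PERSISTENT, 8, 0);
        } finally {
            connection.close();
        }
    }
}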
I'm trying to execute an application under (reasonable) load. What is happening under load is that when trying to place a message onto a queue, the application stalls for about 4 seconds before completing the send. The strange part is that immediately after doing this, the next message takes a matter of milliseconds to place onto the queue. The message is in fact the same message - so the message size isn't a factor.
The application is using Spring Boot 2.1.6, Apache Qpid 0.43.0 as the JMS/AMQP provider.
The message bus being used is Azure ServiceBus, but I have observed the same behaviour using Artemis.
On the Apache Qpid JmsConnectionFactory, I've tried fiddling with the "forceSyncSend" property.
I've tried using Spring's CachingConnectionFactory to cache message producers only. I have increased the default cache size from 1 to 20 without any success.
I've looked at the JmsTemplate parameters but can't find anything regarding message producers (plenty for listeners, but that's another story).
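Roughly, that producer-caching attempt looked like the sketch below (the Qpid connection URL is a placeholder and the wiring is simplified):
@Bean
public ConnectionFactory connectionFactory() {
    // Placeholder AMQP URL; in reality this points at Azure Service Bus (or Artemis).
    JmsConnectionFactory qpidFactory = new JmsConnectionFactory("amqps://<namespace>.servicebus.windows.net");
    CachingConnectionFactory cachingFactory = new CachingConnectionFactory(qpidFactory);
    cachingFactory.setCacheProducers(true);    // cache MessageProducers...
    cachingFactory.setCacheConsumers(false);   // ...but not consumers
    cachingFactory.setSessionCacheSize(20);    // raised from the default of 1
    return cachingFactory;
}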
The code doing the sending is quite simple:
private void sendToQueue(Object message, String queueName) {
    jmsTemplate.convertAndSend(queueName, message, (Message jmsMessage) -> {
        jmsMessage.setStringProperty(OBJECT_TYPE_PARAMETER, message.getClass().getSimpleName());
        return jmsMessage;
    });
}
Is there anything obvious to try? Are there any tuning parameters to stop this stalling happening?
The load on the system is not trivial, but it is not excessive (it needs to go a lot higher than where it is at the moment!)
Any ideas?
So, I used concurrency 50-100 in Spring JMS, allowing up to 200 connections. Everything works as expected, but when I try to retrieve 100k messages from the queue, i.e. there are 100k messages on my SQS queue and I read them through the normal Spring JMS approach:
@JmsListener(destination = "...")
public void process(String message) {
    count++;
    System.out.println(count);
    // code
}
I see all the logs in my console, but after around 17k messages it starts throwing exceptions,
something like: AWS SDK exception: port already in use.
Why do I see this exception, and how do I get rid of it?
I tried looking on the internet but couldn't find anything.
My settings:
Concurrency: 50-100
Messages per task: 50
Acknowledge mode: CLIENT_ACKNOWLEDGE
timestamp=10:27:57.183, level=WARN , logger=c.a.s.j.SQSMessageConsumerPrefetch, message={ConsumerPrefetchThread-30} Encountered exception during receive in ConsumerPrefetch thread,
javax.jms.JMSException: AmazonClientException: receiveMessage.
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.handleException(AmazonSQSMessagingClientWrapper.java:422)
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.receiveMessage(AmazonSQSMessagingClientWrapper.java:339)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.getMessages(SQSMessageConsumerPrefetch.java:248)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.run(SQSMessageConsumerPrefetch.java:207)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Address already in use: connect
Update: I looked into the problem and it seems that new sockets keep being created until every socket is exhausted.
My Spring JMS version is 4.3.10.
To replicate this problem, use the above configuration with max connections set to 200 and concurrency set to 50-100, and push some 40k messages to the SQS queue. You can use https://github.com/adamw/elasticmq as a local stack server that replicates Amazon SQS. Then comment out the @JmsListener annotation and use SoapUI load testing to call the send-message endpoint and fire many messages; because the @JmsListener annotation is commented out, nothing will consume messages from the queue. Once you see that you have sent 40k messages, stop, uncomment @JmsListener, and restart the server.
Update:
DefaultJmsListenerContainerFactory factory =
new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory);
factory.setDestinationResolver(new DynamicDestinationResolver());
factory.setErrorHandler(Throwable::printStackTrace);
factory.setConcurrency("50-100");
factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
return factory;
Update:
SQSConnectionFactory connectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
Update:
Client configuration details:
Protocol : HTTP
Max connections : 200
Update:
I used the CachingConnectionFactory class, but I have read on Stack Overflow and in the official documentation not to use CachingConnectionFactory together with DefaultJmsListenerContainerFactory:
https://stackoverflow.com/a/21989895/5871514
It gives the same error that I got before, though.
Update:
My goal is to reach 500 TPS, i.e. I should be able to consume that much. With this approach it seems I can reach 100-200, but not more than that, and it becomes a blocker at high concurrency. If you have a better solution to achieve this, I am all ears.
Update:
I am using AmazonSQSClient.
Starvation on the Consumer
One possible optimization that JMS clients tend to implement is a message consumption buffer, or "prefetch". This buffer is sometimes tunable via a number of messages or a buffer size in bytes.
The intention is to prevent the consumer from going to the server every single time it receives a message; instead, it pulls multiple messages in a batch.
In an environment where you have many "fast consumers" (which is the opinionated view these libraries may take), this prefetch is set to a somewhat high default in order to minimize these round trips.
However, in an environment with slow message consumers, this prefetch can be a problem. A slow consumer holds up consumption of the messages it has prefetched, keeping them away from the faster consumers. In a highly concurrent environment, this can cause starvation quickly.
That being the case, the SQSConnectionFactory has a property for this:
SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
sqsConnectionFactory.setNumberOfMessagesToPrefetch(0);
Starvation on the Producer (i.e. via JmsTemplate)
It's very common for these JMS implementations to expect to be interfaced with the broker via some intermediary. These intermediaries actually cache and reuse connections or use a pooling mechanism to reuse them. In the Java EE world, this is usually taken care of by a JCA adapter or another mechanism on a Java EE server.
Because of the way Spring JMS works, it expects an intermediary delegate for the ConnectionFactory to exist to do this caching/pooling. Otherwise, when Spring JMS wants to connect to the broker, it will attempt to open a new connection and session (!) every time you want to do something with the broker.
To solve this, Spring provides a few options, the simplest being the CachingConnectionFactory, which caches a single Connection and allows many Sessions to be opened on that Connection. A simple way to add this to your @Configuration above would be something like:
@Bean
public ConnectionFactory connectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // Doing the following is key!
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
    cachingConnectionFactory.setTargetConnectionFactory(sqsConnectionFactory);
    // Set the cachingConnectionFactory properties to your liking here...
    return cachingConnectionFactory;
}
If you want something fancier as a JMS pooling solution (which will pool Connections and MessageProducers for you, in addition to multiple Sessions), you can use the reasonably new PooledJMS project's JmsPoolConnectionFactory, or the like, from their library.
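For example, here is a rough sketch, assuming the org.messaginghub:pooled-jms dependency is on the classpath (the pool size is illustrative):
@Bean
public ConnectionFactory pooledConnectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory =
            new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // Pools Connections (and the Sessions/MessageProducers created from them)
    // in front of the SQS connection factory.
    JmsPoolConnectionFactory poolingFactory = new JmsPoolConnectionFactory();
    poolingFactory.setConnectionFactory(sqsConnectionFactory);
    poolingFactory.setMaxConnections(10); // illustrative sizing
    return poolingFactory;
}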
I'm trying to have a singleton cluster configuration with only one messaging consumer route running in the cluster (if it matters, it's a RabbitMQ consumer).
I've configured Quartz and am using its clustered features, which only seem to ensure a single concurrent execution.
Also to note: I've looked at using both SimpleScheduledRoutePolicy and CronRoutePolicy. The issue I'm seeing there is that I don't see a way to set the Quartz endpoint parameters (stateful=true, JobName, GroupName, etc.).
Am I doing something wrong here? I apologize, as I'm a bit new to both Camel and Quartz. Below is the route code outlining what I'm trying to do:
SimpleScheduledRoutePolicy policy = new SimpleScheduledRoutePolicy();
long startTime = System.currentTimeMillis() + 3000L;
policy.setRouteStartDate(new Date(startTime));
policy.setRouteStartRepeatCount(-1);
policy.setRouteStartRepeatInterval(10000);
from("{{consumer.endpoint}}").noAutoStartup().routePolicy(policy).to("log:example?showBody=true&multiline=false");
Maybe you mean something like this:
from("quartz2://myGroup/myTimerName?cron=0+0/5+12-18+?+*+MON-FRI").enrich("rabbitmq://<yourRabbitMQuri>).to("somewhere");
What is the reason behind limiting the application to one message consumer? Each processing step you add to a Camel route should be stateless.
I'm not sure I fully understand your requirements, so here's a general hint.
Camel offers JMS support out of the box, and JMS Queues may be what you are looking for:
JMS queue
A staging area that contains messages that have been sent and are waiting to be read (by only one consumer). Contrary to what the name queue suggests, messages don't have to be received in the order in which they were sent. A JMS queue only guarantees that each message is processed only once.
Your route could be something like:
<route>
<from uri="jms:queue:myqueue" />
<log message="Received message: ${body}" />
<to uri="bean:yourProcessorHere" />
</route>
ActiveMQ is also supported.
You might want to use another Camel route policy. Camel comes with support for a ZooKeeper-based route policy (http://camel.apache.org/zookeeper.html). In such a solution, ZooKeeper would elect the active node.
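For example, a rough sketch using camel-zookeeper's ZooKeeperRoutePolicy (the ZooKeeper address and election path are placeholders); only the node that wins the election under that path keeps the route running:
// Only one of the nodes competing under the given znode path will run the route.
ZooKeeperRoutePolicy zkPolicy =
        new ZooKeeperRoutePolicy("zookeeper:localhost:2181/myapp/rabbit-consumer", 1);
from("rabbitmq://<yourRabbitMQuri>")
        .routePolicy(zkPolicy)
        .to("log:example?showBody=true&multiline=false");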
If you don't want to build up the necessary ZooKeeper infrastructure, you could roll your own route policy and use a database to elect the active node by having your nodes compete to lock a table row; have a look here and here for inspiration. Be aware the code might be outdated, but it could be a starting point.
I haven't been able to figure this one out from Google alone. I am connecting to a non-durable EMS topic which publishes updates to a set of data. If I skip a few updates, it doesn't matter, as the following update will overwrite them anyway.
The number of messages being published on the EMS topic is quite high, and occasionally, for whatever reason, the consumer lags behind. Is there a way, on the client connection side, to set a 'time to live' for messages? I know there is on other brokers, but specifically on TIBCO I have been unable to figure out whether it's possible or not, only that this parameter can definitely be set on the server side for all clients (which is not an option for me).
I am creating my connection factory and then creating an Apache Camel jms endpoint with the following code:
TibjmsConnectionFactory connectionFactory = new TibjmsConnectionFactory();
connectionFactory.setServerUrl(properties.getProperty(endpoints.getServerUrl()));
connectionFactory.setUserName(properties.getProperty(endpoints.getUsername()));
connectionFactory.setUserPassword(properties.getProperty(endpoints.getPassword()));
JmsComponent emsComponent = JmsComponent.jmsComponent(connectionFactory);
emsComponent.setAsyncConsumer(true);
emsComponent.setConcurrentConsumers(Integer.parseInt(properties.getProperty("jms.concurrent.consumers")));
emsComponent.setDeliveryPersistent(false);
emsComponent.setClientId("MyClient." + ManagementFactory.getRuntimeMXBean().getName() + "." + emsConnectionNumber.getAndIncrement());
return emsComponent;
I am using tibjms-6.0.1, tibjmsufo-6.0.1, and various other tib***-6.0.1.
The JMSExpiration property can be set per message or, more globally, at the destination level (in which case the JMSExpiration of all messages received in this destination is overridden). It cannot be set per consumer.
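For the per-message case, the expiration is derived from the time-to-live that the publishing side sets on its producer (or passes per send). A plain-JMS sketch (session, topic and message are assumed to already exist; the 5-second TTL is illustrative):
// JMSExpiration is computed at send time from the producer's time-to-live.
MessageProducer producer = session.createProducer(topic);
producer.setTimeToLive(5000); // every message sent by this producer expires after 5 seconds
producer.send(message);
// Or pass deliveryMode, priority and timeToLive explicitly per send.
producer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, 5000);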
One option would be to create a bridge from the topic to a custom queue that only your consumer application will listen to, and set the "expiration" property of this queue to 0 (unlimited). All messages published on the topic will then be copied to this queue and won't ever expire, whatever their JMSExpiration value.