ActiveMQ not creating queue automatically - Java

I create a destination like this:
Destination destination = session.createQueue("queue_name");
In this case, if the queue named "queue_name" doesn't exist, it will be created.
I want to obtain a destination for a queue, but if the queue doesn't exist, I don't want it to be created.
Is there a way to connect to a queue only if it already exists?

I think you should be able to get a list of the available queues using DestinationSource from your connection. Then you could check whether the queue exists.
I haven't tried it, but I think it looks like this:
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
ActiveMQConnection connection = (ActiveMQConnection) connectionFactory.createConnection();
connection.start(); // the DestinationSource is fed by advisory messages, which need a started connection
DestinationSource ds = connection.getDestinationSource();
Set<ActiveMQQueue> queues = ds.getQueues();
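Building on that, a minimal (untested) sketch of the existence check; "queue_name" is just an example, and the advisory data arrives asynchronously, so the set may need a moment to populate:
boolean exists = false;
for (ActiveMQQueue q : queues) {
    if ("queue_name".equals(q.getPhysicalName())) {
        exists = true;
        break;
    }
}
if (exists) {
    // Only create the session/consumer once we know the queue is there,
    // so nothing gets auto-created on the broker.
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageConsumer consumer = session.createConsumer(new ActiveMQQueue("queue_name"));
} else {
    System.out.println("Queue does not exist; not connecting");
}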

You have to use the security feature in ActiveMQ to limit the users who are allowed to create destinations. You can then configure a set of destinations in the ActiveMQ config which are always created. See the ActiveMQ documentation pages on this subject and on configuring security.

You can either do it through security configuration for your client (consumer/producer).
Alternatively, you can do it programmatically by getting the list of available queues and only connecting if yours is in the list. ActiveMQ provides a class for this, but it's not part of JMS (so you'll be restricted to an ActiveMQ-specific implementation).
http://activemq.apache.org/maven/5.5.0/activemq-core/apidocs/org/apache/activemq/advisory/DestinationSource.html

Related

ActiveMQ Artemis temp queue

I'm using a simple test project with Spring's JmsTemplate that sends synchronous messages with:
jmsTemplate.sendAndReceive(...)
The code inside JmsTemplate that does this:
Message requestMessage = messageCreator.createMessage(session);
responseQueue = session.createTemporaryQueue();
producer = session.createProducer(destination);
consumer = session.createConsumer(responseQueue);
requestMessage.setJMSReplyTo(responseQueue);
if (logger.isDebugEnabled()) {
logger.debug("Sending created message: " + requestMessage);
}
doSend(producer, requestMessage);
return receiveFromConsumer(consumer, getReceiveTimeout());
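For context, the calling side looks roughly like this (the queue name "request.queue" and the text payload are assumptions); each call creates, uses, and then discards one temporary reply queue:
Message reply = jmsTemplate.sendAndReceive("request.queue",
        session -> session.createTextMessage("ping"));
if (reply != null) {
    System.out.println("Reply: " + ((TextMessage) reply).getText());
}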
All works fine, but when I go to the Jolokia console I can see all my temporary queues at the address level.
In the standard ActiveMQ console, temporary queues are not shown (deleted?).
Because my application uses many synchronous messages, the list can grow rapidly.
I tried to use
<temporary-queue-namespace>temp</temporary-queue-namespace>
with
<address-setting match="temp.#">
<enable-metrics>false</enable-metrics>
</address-setting>
But my temporary queues do not end up under the temp addresses...
Is it possible to not show temporary queues in the console? (When JmsTemplate has received the response or timed out, the consumer is closed and the temporary queue is marked as deleted.)
If not, how can I regroup them into one address folder, or is there something else useful to achieve this?
My application works with about 30-40 queues, and possibly 1000 or more temporary queues per day. ActiveMQ "Classic" doesn't show temporary queues in the web console, so it's easy to administer the durable queues. We plan to migrate to Artemis, and during my simple test I saw that temporary queues are shown by default in the web console next to all other queues. With 1000 or more temporary queues I would have to scroll for a very long time to reach the queues I actually want to see, and after each refresh the scroll position is reset. So I want to find a way to regroup all temporary queues under one folder, such as a namespace, or some other solution.
There are two main ways to deal with a large number of queues and problems with refreshing the JMX "tree" view.
Use the "Queues" tab to view the queues you're interested in rather than the JMX "tree" view. You can even filter out temporary queues, e.g.:
Disable refresh of the JMX "tree" view via the "Preferences" available by clicking on the user icon in the top right of the web console, e.g.:
It's worth noting that enable-metrics only deals with metrics as they relate to metrics plugins. Setting it to false does not disable the MBeans.
In the future the JMX "tree" view will likely be removed from the web console due, in part, to the issues you're observing.

Slow message consumption using AmazonSQSClient

I used a concurrency of 50-100 in Spring JMS, allowing up to 200 max connections. Everything works as expected, but then I try to retrieve 100k messages from the queue, i.e. there are 100k messages on my SQS queue and I read them through the normal Spring JMS approach.
@JmsListener
public void process(String message) {
    count++;
    System.out.println(count);
    // code
}
I see all the logs in my console, but after around 17k messages it starts throwing exceptions,
something like: AWS SDK exception: port already in use.
Why do I see this exception and how do I get rid of it?
I tried looking on the internet for it but couldn't find anything.
My settings:
Concurrency: 50-100
Messages per task: 50
Client acknowledge mode
timestamp=10:27:57.183, level=WARN , logger=c.a.s.j.SQSMessageConsumerPrefetch, message={ConsumerPrefetchThread-30} Encountered exception during receive in ConsumerPrefetch thread,
javax.jms.JMSException: AmazonClientException: receiveMessage.
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.handleException(AmazonSQSMessagingClientWrapper.java:422)
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.receiveMessage(AmazonSQSMessagingClientWrapper.java:339)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.getMessages(SQSMessageConsumerPrefetch.java:248)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.run(SQSMessageConsumerPrefetch.java:207)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Address already in use: connect
Update: I looked into the problem and it seems that new sockets are being created until every socket is exhausted.
My Spring JMS version is 4.3.10.
To replicate this problem, use the above configuration with max connections set to 200 and concurrency set to 50-100, and push some 40k messages to the SQS queue. You can use https://github.com/adamw/elasticmq as a local server that replicates Amazon SQS. Then comment out the @JmsListener and use SoapUI load testing to call send message and fire many messages; because the @JmsListener annotation is commented out, nothing is consumed from the queue. Once you have sent 40k messages, stop, uncomment @JmsListener, and restart the server.
Update :
DefaultJmsListenerContainerFactory factory =
new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory);
factory.setDestinationResolver(new DynamicDestinationResolver());
factory.setErrorHandler(Throwable::printStackTrace);
factory.setConcurrency("50-100");
factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
return factory;
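(A listener would then pick this factory up by bean name; the queue name and factory bean name below are illustrative assumptions:)
@JmsListener(destination = "my-queue", containerFactory = "jmsListenerContainerFactory")
public void process(String message) {
    // handle the message
}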
Update :
SQSConnectionFactory connectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
Update :
Client configuration details :
Protocol : HTTP
Max connections : 200
Update:
I used the CachingConnectionFactory class. I read on Stack Overflow and in the official documentation not to use CachingConnectionFactory together with DefaultJmsListenerContainerFactory:
https://stackoverflow.com/a/21989895/5871514
It gives the same error that I got before, though.
Update:
My goal is 500 TPS, i.e. I should be able to consume that many messages per second. With this method I can reach 100-200, but not more than that, and it becomes a blocker at high concurrency. If you have a better solution to achieve this, I am all ears.
Update:
I am using AmazonSQSClient.
Starvation on the Consumer
One possible optimization that JMS clients tend to implement is a message consumption buffer, or "prefetch". This buffer is sometimes tunable via the number of messages or by a buffer size in bytes.
The intention is to prevent the consumer from going to the server every single time it receives a message, instead pulling multiple messages in a batch.
In an environment where you have many "fast consumers" (which is the opinionated view these libraries may take), this prefetch is set to a somewhat high default in order to minimize these round trips.
However, in an environment with slow message consumers, this prefetch can be a problem. A slow consumer holds up the consumption of its prefetched messages, which faster consumers could otherwise have processed. In a highly concurrent environment, this can cause starvation quickly.
That being the case, the SQSConnectionFactory has a property for this:
SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
sqsConnectionFactory.setNumberOfMessagesToPrefetch(0);
Starvation on the Producer (i.e. via JmsTemplate)
It's very common for these JMS implementations to expect to be interfaced to the broker via some intermediary. These intermediaries cache and reuse connections, or use a pooling mechanism to reuse them. In the Java EE world, this is usually taken care of by a JCA adapter or another mechanism on a Java EE server.
Because of the way Spring JMS works, it expects an intermediary delegate for the ConnectionFactory to exist to do this caching/pooling. Otherwise, when Spring JMS wants to talk to the broker, it will open a new connection and session (!) every time you want to do something with the broker.
To solve this, Spring provides a few options. The simplest is the CachingConnectionFactory, which caches a single Connection and allows many Sessions to be opened on that Connection. A simple way to add this to your @Configuration above would be something like:
@Bean
public ConnectionFactory connectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // Doing the following is key!
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setTargetConnectionFactory(sqsConnectionFactory);
    // Set the CachingConnectionFactory properties to your liking here...
    return connectionFactory;
}
If you want something more fancy as a JMS pooling solution (which will pool Connections and MessageProducers for you in addition to multiple Sessions), you can use the reasonably new PooledJMS project's JmsPoolConnectionFactory, or the like, from their library.
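A minimal sketch of that pooled variant, assuming the org.messaginghub:pooled-jms dependency and a pool size chosen purely for illustration:
@Bean
public ConnectionFactory pooledConnectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory =
            new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // Pools Connections (and Sessions/MessageProducers) instead of caching a single Connection
    JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
    pool.setConnectionFactory(sqsConnectionFactory);
    pool.setMaxConnections(10); // assumed value - size this to your workload
    return pool;
}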

How to access state of Producer connection in Route from CamelContext

I have a route which looks like the following:
from("seda:in")
.routeId("aggregation")
.process(filterProcessor)
.aggregate(header("flag", new MyAggregationStrategy())
.completionInterval(10000)
.multicast()
.to(sftpUris);
I'd like to be able to access the producers for each of the URIs in the to clause and check the status of the SFTP connection.
So far I haven't worked out a way of doing this even for a single (non-multicast) producer, so solutions to that would be useful as well.
According to the Camel docs, you should look at the CamelFtpReplyCode header (and perhaps CamelFtpReplyString) to determine what happened with the request. By default, FTP errors do NOT raise an exception.
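As a rough sketch of what that check could look like (the SFTP URI and the processor placement after the producer are assumptions; the header names follow the answer above):
from("seda:in")
    .to("sftp://user@host/inbox")
    .process(exchange -> {
        Integer replyCode = exchange.getIn().getHeader("CamelFtpReplyCode", Integer.class);
        String replyText = exchange.getIn().getHeader("CamelFtpReplyString", String.class);
        if (replyCode == null || replyCode < 200 || replyCode >= 300) {
            // treat anything outside the 2xx range as a failed transfer/connection
            System.err.println("SFTP problem: " + replyCode + " " + replyText);
        }
    });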

Tibco JMS (EMS) TimeToLive per client?

I haven't been able to figure this one out from Google alone. I am connecting to a non-durable EMS topic, which publishes updates to a set of data. If I skip a few updates, it doesn't matter, as the following update will overwrite it anyway.
The number of messages being published on the EMS topic is quite high, and occasionally for whatever reason the consumer lags behind. Is there a way, on the client connection side, to determine a 'time to live' for messages? I know there is on other brokers, but specifically on Tibco I have been unable to figure out whether it's possible or not, only that this parameter can definitely be set on the server side for all clients (this is not an option for me).
I am creating my connection factory and then creating an Apache Camel jms endpoint with the following code:
TibjmsConnectionFactory connectionFactory = new TibjmsConnectionFactory();
connectionFactory.setServerUrl(properties.getProperty(endpoints.getServerUrl()));
connectionFactory.setUserName(properties.getProperty(endpoints.getUsername()));
connectionFactory.setUserPassword(properties.getProperty(endpoints.getPassword()));
JmsComponent emsComponent = JmsComponent.jmsComponent(connectionFactory);
emsComponent.setAsyncConsumer(true);
emsComponent.setConcurrentConsumers(Integer.parseInt(properties.getProperty("jms.concurrent.consumers")));
emsComponent.setDeliveryPersistent(false);
emsComponent.setClientId("MyClient." + ManagementFactory.getRuntimeMXBean().getName() + "." + emsConnectionNumber.getAndIncrement());
return emsComponent;
I am using tibjms-6.0.1, tibjmsufo-6.0.1, and various other tib***-6.0.1.
The JMSExpiration property can be set per message or, more globally, at the destination level (in which case the JMSExpiration of all messages received in this destination is overridden). It cannot be set per consumer.
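For reference, a minimal sketch of the per-message option, which only helps if you control the publisher (the topic name and 30-second TTL are assumptions):
MessageProducer producer = session.createProducer(session.createTopic("market.data"));
// Either set a default time-to-live for everything this producer sends...
producer.setTimeToLive(30000); // milliseconds
// ...or pass it per send; the broker derives JMSExpiration from this value
producer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, 30000);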
One option would be to create a bridge from the topic to a custom queue that only your consumer application will listen to, and set the "expiration" property of this queue to 0 (unlimited). All messages published on the topic will then be copied to this queue and won't ever expire, whatever their JMSExpiration value.

Queue connection

environment.put(Context.INITIAL_CONTEXT_FACTORY, QUEUE_CONTEXT);
System.out.println("QUEUE_URL -> " + QUEUE_URL);
environment.put(Context.PROVIDER_URL, QUEUE_URL);
try {
    ctx = new InitialDirContext(environment);
    String MYCF_LOOKUP_NAME = QUEUE_CONTEXT_FACTORY;
    connectionFactory = (ConnectionFactory) ctx.lookup(MYCF_LOOKUP_NAME);
    connection = ((MQQueueConnectionFactory) connectionFactory)
            .createQueueConnection();
I don't know whether this is correct or not; it gives me a connectivity issue.
In the first program it asks for the queue manager name, but in the second program it doesn't require a queue manager name. I need to replace the first program's code with the second program's. Can anyone help me with this?
You're using JNDI here. JNDI is a store of Java objects; for JMS these will be ConnectionFactory and Destination objects (Queues or Topics).
So you need to put a ConnectionFactory into JNDI, which the code suggests you already have, and also a Queue.
If it's not clear why you need to do this, I suggest searching for a JNDI tutorial and also a JMS one to get the basic background.
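As a rough sketch of that flow, assuming the ConnectionFactory and Queue are bound in JNDI under the illustrative names "myCF" and "myQueue":
Hashtable<String, Object> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, QUEUE_CONTEXT);
env.put(Context.PROVIDER_URL, QUEUE_URL);
Context ctx = new InitialContext(env);

// Look up the administered objects that were bound into JNDI
QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("myCF");
Queue queue = (Queue) ctx.lookup("myQueue");

QueueConnection connection = cf.createQueueConnection();
QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
QueueReceiver receiver = session.createReceiver(queue);
connection.start();
Message msg = receiver.receive(5000); // wait up to 5 seconds (assumed timeout)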
