In-process ActiveMQ producer/consumer example? (Java)

I'm investigating using ActiveMQ as an embedded in-process message queue in my
application, but I'm a bit stuck on how I go about starting such an application
up. I envision it like so (pseudocode, of course):
configureBroker()
broker.start()
createProducer(broker)
producer.start()
for each desired consumer:
    createConsumer(broker)
    consumer.start()
waitForSignal()
signalProducerShutdown()
waitForEmptyQueues()
signalConsumerShutdown()
broker.stop()
I've tried to assemble a simple version of this, but I'm stuck on how to write
the producers and consumers in such a way as to have them work forever, or
until told to quit. What is the best way to do this? I'm speaking specifically about the threading aspect; what do I need/want to spawn off in its own thread, etc...
I'm completely new to message-queue-based applications, so please be verbose with your examples.

When you specify the ActiveMQConnectionFactory, you can specify "vm://name", where name is the intra-VM specific name of your broker, and it will start the broker within the VM.
For example,
String broker = "vm://stackOverflowTest";
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(broker);
Connection amqcon = connectionFactory.createConnection();
amqcon.start();
From there you can create your producers or consumers the same as if they were talking over the network. As long as you use the same broker name, multiple threads and pieces of code can open connections to and talk to the same in-VM broker instance.
This solution only allows communication within the VM; it does not open any external ports. I'm assuming this is what you were looking for, since you said you wanted embedded, in-process queues.
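To flesh out the threading side of the pseudocode in the question, here is a minimal, hedged sketch on top of that vm:// broker: the producer runs on the main thread, and one consumer thread polls until told to quit via a shutdown flag. The queue name, message count, and timeouts are illustrative assumptions, not prescriptions:

import java.util.concurrent.atomic.AtomicBoolean;
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class EmbeddedAmqExample {
    private static final AtomicBoolean running = new AtomicBoolean(true);

    public static void main(String[] args) throws Exception {
        // vm:// auto-starts an embedded broker with this name.
        ConnectionFactory factory = new ActiveMQConnectionFactory("vm://stackOverflowTest");
        Connection connection = factory.createConnection();
        connection.start();

        // One consumer thread; spawn several of these for multiple consumers.
        Thread consumer = new Thread(() -> {
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer mc = session.createConsumer(session.createQueue("example.queue"));
                while (running.get()) {
                    // Poll with a timeout so the loop can notice the shutdown flag.
                    Message m = mc.receive(1000);
                    if (m instanceof TextMessage) {
                        System.out.println("consumed: " + ((TextMessage) m).getText());
                    }
                }
                session.close();
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }, "consumer");
        consumer.start();

        // Produce a few messages from the main thread.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("example.queue"));
        for (int i = 0; i < 10; i++) {
            producer.send(session.createTextMessage("message " + i));
        }
        session.close();

        // Signal shutdown, wait for the consumer to finish, then close up.
        Thread.sleep(2000); // crude stand-in for waitForEmptyQueues()
        running.set(false);
        consumer.join();
        connection.close();
    }
}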

Related

gRPC connection cycling

We are setting up a cluster to handle inferencing (with TensorFlow Serving) over gRPC. We intend to use a layer-7 load balancer (AWS ALB) to distribute the load. For our workload, inferencing will occur many times per minute from each client account. It is my understanding that gRPC holds connection state for each of these channels. As a result, in order for the ALB to do its job, we need to periodically tear down and rebuild the connection on the client instance.
My question: what is the best practice for cycling a connection in Java?
Below is my proposed code, which would be called every couple of minutes on each client channel. I assume that while the first connection is being shut down, we can go ahead and create a new one and immediately issue a request on it; or do we need to wait until the prior channel has shut down first? In our situation, the channel will (very likely) be empty, since the previous request will have been 10 seconds earlier.
if (mChannel != null) {
    mChannel.shutdown();
}
mChannel = ManagedChannelBuilder.forAddress(mHost, mPort).usePlaintext().build();
mStub = PredictionServiceGrpc.newBlockingStub(mChannel);
The best practice is to use Lookaside Load Balancing.
However, you can make a few tweaks to terminate client connections:
var builder = ManagedChannelBuilder.forAddress(mHost, mPort)
.keepAliveTime(15, TimeUnit.SECONDS)
.keepAliveTimeout(5, TimeUnit.SECONDS);
The above config ensures that sticky gRPC connections are terminated periodically, so the AWS ALB can do its job and balance requests uniformly.
There are other options that you can try depending on your use case, e.g. retries. See ManagedChannelBuilder.
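On the question of reusing a new channel while the old one drains: shutdown() is graceful, so in-flight RPCs on the old channel may finish while new RPCs go to the replacement. A hedged sketch of the cycling routine that slots into the question's class, combining its code with the keep-alive settings above (the cycleChannel name and the timeout values are illustrative):

// Called every couple of minutes, per the question.
private void cycleChannel() {
    ManagedChannel old = mChannel;

    // Build the replacement first; new RPCs can use it immediately.
    mChannel = ManagedChannelBuilder.forAddress(mHost, mPort)
            .keepAliveTime(15, TimeUnit.SECONDS)
            .keepAliveTimeout(5, TimeUnit.SECONDS)
            .usePlaintext()
            .build();
    mStub = PredictionServiceGrpc.newBlockingStub(mChannel);

    if (old != null) {
        // shutdown() is graceful: in-flight RPCs on the old channel
        // are allowed to complete before it actually closes.
        old.shutdown();
        try {
            if (!old.awaitTermination(5, TimeUnit.SECONDS)) {
                old.shutdownNow(); // force-close any stragglers
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            old.shutdownNow();
        }
    }
}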

Slow message consumption using AmazonSQSClient

So, I used concurrency 50-100 in Spring JMS, allowing up to 200 max connections. Everything works as expected, but when I try to retrieve 100k messages from the queue, i.e. there are 100k messages on my SQS queue and I am reading them through the normal Spring JMS approach:
@JmsListener
public void process(String message) {
    count++;
    System.out.println(count);
    // code
}
I see all the logs in my console, but after around 17k messages it starts throwing exceptions along the lines of: AWS SDK exception: port already in use.
Why do I see this exception, and how do I get rid of it? I tried looking on the internet but couldn't find anything.
My settings:
Concurrency: 50-100
Messages per task: 50
Acknowledge mode: client acknowledge
timestamp=10:27:57.183, level=WARN , logger=c.a.s.j.SQSMessageConsumerPrefetch, message={ConsumerPrefetchThread-30} Encountered exception during receive in ConsumerPrefetch thread,
javax.jms.JMSException: AmazonClientException: receiveMessage.
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.handleException(AmazonSQSMessagingClientWrapper.java:422)
at com.amazon.sqs.javamessaging.AmazonSQSMessagingClientWrapper.receiveMessage(AmazonSQSMessagingClientWrapper.java:339)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.getMessages(SQSMessageConsumerPrefetch.java:248)
at com.amazon.sqs.javamessaging.SQSMessageConsumerPrefetch.run(SQSMessageConsumerPrefetch.java:207)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Address already in use: connect
Update: I looked into the problem, and it seems that new sockets are being created until every socket is exhausted.
My Spring JMS version is 4.3.10.
To replicate this problem, use the above configuration with max connections set to 200 and concurrency set to 50-100, and push some 40k messages to the SQS queue. You can use https://github.com/adamw/elasticmq as a local server that replicates Amazon SQS. Comment out the @JmsListener annotation so that messages are not consumed from the queue, then use SoapUI load testing to call the send endpoint and fire many messages. Once you have sent 40k messages, stop, uncomment @JmsListener, and restart the server.
Update:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    factory.setErrorHandler(Throwable::printStackTrace);
    factory.setConcurrency("50-100");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    return factory;
}
Update:
SQSConnectionFactory connectionFactory = new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
Update:
Client configuration details:
Protocol: HTTP
Max connections: 200
Update:
I used the CachingConnectionFactory class, but I had read on Stack Overflow and in the official documentation not to use a caching connection factory with the DefaultJmsListenerContainerFactory:
https://stackoverflow.com/a/21989895/5871514
It gives the same error that I got before, though.
Update:
My goal is to reach 500 TPS, i.e. I should be able to consume that many messages per second. I tried this method and it seems I can reach 100-200, but not more than that, and it becomes a blocker at high concurrency. If you have a better solution to achieve this, I am all ears.
Update:
I am using AmazonSQSClient.
Starvation on the Consumer
One possible optimization that JMS clients tend to implement is a message consumption buffer, or "prefetch". This buffer is sometimes tunable by the number of messages or by a buffer size in bytes.
The intention is to prevent the consumer from making a round trip to the server for every single message, by instead pulling multiple messages in a batch.
In an environment where you have many "fast consumers" (which is the opinionated view these libraries may take), this prefetch is set to a somewhat high default in order to minimize these round trips.
However, in an environment with slow message consumers, this prefetch can be a problem. A slow consumer holds up consumption of the messages it has prefetched, messages that a faster consumer could otherwise have processed. In a highly concurrent environment, this can cause starvation quickly.
That being the case, the SQSConnectionFactory has a property for this:
SQSConnectionFactory sqsConnectionFactory = new SQSConnectionFactory( new ProviderConfiguration(), amazonSQSclient);
sqsConnectionFactory.setNumberOfMessagesToPrefetch(0);
Starvation on the Producer (i.e. via JmsTemplate)
It's very common for these JMS implementations to expect to be interfaced with the broker via some intermediary. These intermediaries cache and reuse connections, or use a pooling mechanism to reuse them. In the Java EE world, this is usually taken care of by a JCA adapter or another facility of the Java EE server.
Because of the way Spring JMS works, it expects an intermediary delegate for the ConnectionFactory to exist to do this caching/pooling. Otherwise, when Spring JMS wants to connect to the broker, it will attempt to open a new connection and session (!) every time you want to do something with the broker.
To solve this, Spring provides a few options. The simplest is the CachingConnectionFactory, which caches a single Connection and allows many Sessions to be opened on that Connection. A simple way to add this to your @Configuration above would be something like:
@Bean
public ConnectionFactory connectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory =
            new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);
    // Doing the following is key!
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
    cachingConnectionFactory.setTargetConnectionFactory(sqsConnectionFactory);
    // Set the cachingConnectionFactory properties to your liking here...
    return cachingConnectionFactory;
}
If you want something fancier as a JMS pooling solution (which will pool Connections and MessageProducers for you, in addition to multiple Sessions), you can use the relatively new PooledJMS project's JmsPoolConnectionFactory, or the like, from their library.
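If you go the PooledJMS route instead, a minimal sketch of fronting the SQS factory with org.messaginghub.pooled.jms.JmsPoolConnectionFactory could look like the following, assuming the PooledJMS 1.x API; the bean name and pool sizes are illustrative, not tuned recommendations:

@Bean
public ConnectionFactory pooledConnectionFactory(AmazonSQSClient amazonSQSclient) {
    SQSConnectionFactory sqsConnectionFactory =
            new SQSConnectionFactory(new ProviderConfiguration(), amazonSQSclient);

    // Pool Connections (and Sessions/MessageProducers) in front of SQS.
    JmsPoolConnectionFactory pooled = new JmsPoolConnectionFactory();
    pooled.setConnectionFactory(sqsConnectionFactory);
    pooled.setMaxConnections(8);             // illustrative pool size
    pooled.setMaxSessionsPerConnection(100); // illustrative; cover the 50-100 concurrency
    return pooled;
}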

Netty-TCP-HTTP-MQTT design

I am trying to write a server-side Java application that can accept TCP, HTTP, and MQTT communication (receive and send, with MongoDB as storage). From research, we decided it could be a jar application based on Netty, with Paho for MQTT. We have 3 projects using these three protocols, so I am trying to unify the connection module. Each uses a different protocol style, for example:
-tcp: 0102330123456700
-http: HTTP POST /URL/count {"id":"02","count":"01234567"}
-mqtt: topic /02/count {"count":"01234567"}
Since we are a bit short on time, I am running all three in a silly but quick way: 3 different threads listening on 3 different ports.
public class ServerLauncher {
    public static void main(String[] args) {
        NettyRestServer nettyRestServer = new NettyRestServer();
        MqttServer mqttServer = new MqttServer();
        EchoServer echoServer = new EchoServer();
        new Thread(nettyRestServer, "HTTP Listening").start();
        new Thread(mqttServer, "Mqtt Listening").start();
        new Thread(echoServer, "Socket Listening").start();
    }
}
My questions are:
Since they are all based on TCP, is there a good way to manage them all together without wasting thread resources? Maybe running just one thread listening on one port? I have only found examples of a single protocol.
For data storage, is it an okay design to push all incoming messages into a ConcurrentHashMap shared across all threads/channels, then have another thread run a scheduled task that stores the ConcurrentHashMap into MongoDB and resets it? Or maybe use a queue instead of a ConcurrentHashMap?
If you use Netty for all of these, you can share the same EventLoopGroup for all servers, which means all of them will share the same threads. You don't have to use three threads to start the servers; you can even do all of it in one ServerBootstrap and put the logic in a ChannelHandler.
Netty's ChannelPipeline can dynamically change ChannelHandlers once it sees what kind of connection it got:
ctx.pipeline().addBefore(...)
ctx.pipeline().addAfter(...)
ctx.pipeline().remove(...)
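A hedged sketch of the shared-EventLoopGroup half of this answer: three ServerBootstrap instances bound to three ports, all served by one boss/worker pair, so no extra listener threads are spawned. The ports are illustrative, and the empty ChannelInitializer bodies stand in for your protocol-specific handlers:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class UnifiedServerLauncher {
    public static void main(String[] args) throws InterruptedException {
        // One boss/worker pair shared by all three servers, so every
        // port is served by the same small set of threads.
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup worker = new NioEventLoopGroup();
        try {
            ChannelFuture http = bind(boss, worker, 8080); // illustrative ports
            ChannelFuture mqtt = bind(boss, worker, 1883);
            ChannelFuture tcp = bind(boss, worker, 9000);
            // Block until the channels close.
            http.channel().closeFuture().sync();
            mqtt.channel().closeFuture().sync();
            tcp.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }

    private static ChannelFuture bind(EventLoopGroup boss, EventLoopGroup worker, int port)
            throws InterruptedException {
        return new ServerBootstrap()
                .group(boss, worker)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // Add the protocol-specific handlers for this port here,
                        // e.g. the HTTP codec on 8080, an MQTT codec on 1883.
                    }
                })
                .bind(port)
                .sync();
    }
}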

Using EventLoopGroup in Netty with Multiple Channels

I am new to Netty, but unfortunately there seems to be no detailed documentation/tutorial for beginners.
I have multiple threads, each creating a separate client to connect to a separate channel, using NettyChannelBuilder. The idea is that each channel will send and receive different kinds of messages to/from different hosts. E.g. it looks like this:
class MyServiceClass {
    void executeTasks() {
        ...
        // here multiple tasks are executed in a for loop
        executorService.execute(new Task(new Client()));
        ...
    }
}

class Client {
    ...
    void connect() {
        channel = NettyChannelBuilder.forAddress(host, port).build();
    }
}
In this case, each task has its own client, and the clients build their own channels to receive messages.
Should I create a single EventLoopGroup in executeTasks and give it to the Clients to be used when building their channels?
If so, what is the advantage of using an EventLoopGroup? What exactly is it doing in the background?
I'm not sure what you're asking. EventLoopGroups are just a grouping of threads used by Netty. With Netty, your clients will be on an EventLoopGroup and will be assigned to its threads in a round-robin manner, so some may end up on the same thread.
Personally, I find the docs to be great, but it's definitely not a framework designed for beginners.
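For what it's worth, if you do want all clients to share one group of threads, grpc-netty's NettyChannelBuilder accepts a custom EventLoopGroup. A minimal sketch against the question's code; the constructor wiring and the shared-group field are illustrative assumptions:

import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

class Client {
    private final EventLoopGroup sharedGroup; // one group for all clients
    private final String host;
    private final int port;
    private ManagedChannel channel;

    Client(EventLoopGroup sharedGroup, String host, int port) {
        this.sharedGroup = sharedGroup;
        this.host = host;
        this.port = port;
    }

    void connect() {
        // When supplying a custom eventLoopGroup, set channelType as well,
        // since the builder's defaults assume each other.
        channel = NettyChannelBuilder.forAddress(host, port)
                .eventLoopGroup(sharedGroup)
                .channelType(NioSocketChannel.class)
                .usePlaintext()
                .build();
    }
}

// In executeTasks(): create the group once and hand it to every client, e.g.
// EventLoopGroup shared = new NioEventLoopGroup();
// executorService.execute(new Task(new Client(shared, host, port)));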

How does the Netty threading model work in the case of many client connections?

I intend to use Netty in an upcoming project. This project will act as both client and server. In particular, it will establish and maintain many connections to various servers while at the same time serving its own clients.
Now, the documentation for NioServerSocketChannelFactory specifies the threading model for the server side of things fairly well: each bound listen port requires a dedicated boss thread throughout the process, while connected clients are handled in a non-blocking fashion on worker threads. Specifically, one worker thread is able to handle multiple connected clients.
However, the documentation for NioClientSocketChannelFactory is less specific. This also seems to utilize both boss and worker threads. However, the documentation states:
One NioClientSocketChannelFactory has one boss thread. It makes a connection attempt on request. Once a connection attempt succeeds, the boss thread passes the connected Channel to one of the worker threads that the NioClientSocketChannelFactory manages.
Worker threads seem to function in the same way as for the server case too.
My question is, does this mean that there will be one dedicated boss thread for each connection from my program to an external server? How will this scale if I establish hundreds, or thousands of such connections?
As a side note, are there any adverse side effects of re-using a single Executor (cached thread pool) as both the bossExecutor and workerExecutor for a ChannelFactory? What about re-using it between different client and/or server ChannelFactory instances? This is somewhat discussed here, but I do not find those answers specific enough. Could anyone elaborate on this?
This is not a real answer to your question about how the Netty client thread model works, but you can use the same NioClientSocketChannelFactory to create a single ClientBootstrap with multiple ChannelPipelineFactory instances, and in turn make a large number of connections. Take a look at the example below.
public static void main(String[] args) {
    String host = "localhost";
    int port = 8090;
    ChannelFactory factory = new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(), Executors.newCachedThreadPool());

    MyHandler handler1 = new MyHandler();
    PipelineFactory factory1 = new PipelineFactory(handler1);

    AnotherHandler handler2 = new AnotherHandler();
    PipelineFactory factory2 = new PipelineFactory(handler2);

    ClientBootstrap bootstrap = new ClientBootstrap(factory);
    // At the client side the option is tcpNoDelay; at the server it is child.tcpNoDelay
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);

    for (int i = 1; i <= 50; i++) {
        if (i % 2 == 0) {
            bootstrap.setPipelineFactory(factory1);
        } else {
            bootstrap.setPipelineFactory(factory2);
        }
        ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));
        future.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                future.getChannel().write("SUCCESS");
            }
        });
    }
}
It also shows how different pipeline factories can be set for different connections, so based on the connection you make you can tweak the encoders/decoders in the channel pipeline.
I am not sure your question has been answered. Here's my answer: there's a single boss thread that simultaneously manages all the pending CONNECTs in your app. It uses NIO to process all the current connects in a single (boss) thread, and then hands each successfully connected channel off to one of the workers.
Your question mainly concerns performance. A single boss thread scales very well on the client.
Oh, and Nabble has been closed, but you can still browse the archive there.
