Java Message passing among threads - java

I'm new to Java and have been stuck on an issue with thread message passing.
What I mean is: I have 4 threads. One thread reads messages from the network and, based on the type of message, passes it on to either the parser thread or the database thread. The database thread performs some operation and has to send a message back to the first (network) thread, which puts it into the socket. Similarly, the parser thread also performs some action and, based on the result, has to send a message back to either the network thread or the database thread.
Things I have tried:
I have read about notify()/wait() for thread communication, which does not help in my case because I need one-to-one message passing, not broadcast-to-all.
I have read about concurrent queues and blocking queues. Since this is not an ideal producer-consumer problem, where one thread produces messages and the other threads read from it, I cannot use this directly. Using it would mean I need 5 queues, one for each communication channel:
network->parser,
network->db,
db->network,
parser->network,
parser->db
Is this an efficient way to go about it?
In C++ I was using a messaging mechanism where I would just post a message (a Windows message) to the corresponding thread's message queue, and that thread would fetch it from its queue.
Is there any message-passing mechanism like that in Java which I could use?

one thread reads messages from the network and, based on the type of message, passes it on to ... the database thread. The database thread performs some operation and has to send a message back to the first (network) thread, which puts it into the socket.
You're making the "network" thread responsible for waiting for messages from the network and also for waiting for messages from the "database" thread. That's awkward. You may find it somewhere between mildly difficult and impossible to make that happen in a clean, satisfying way.
My personal opinion is that each long-lived thread in a multi-threaded program should wait for only one thing.
What is the reason for having the database thread "send msg back to the first network thread [to be put] into socket?" Why can't the database thread itself put the message into the socket?
If there's a good reason for the database not to send out the message, then why can't "put the message into the socket" be a task that your database thread submits to a thread pool?
I have read about notify()/wait() for thread communication, which does not help in my case
Would a BlockingQueue help?
I have read about concurrent queues and blocking queues. Since this is not an ideal producer-consumer problem, where one thread produces messages and other threads read from it, I cannot use this. Using this would mean I need 5 queues, one for each communication channel.
And? If adding more queues or more threads to a program makes the work that those threads do simpler or makes the explanation of what those queues are for easier to understand, would that be a Bad Thing?
Note about wait() and notify(): those are low-level methods that are meant to be used in a very specific way to build higher-level mechanisms. I don't know whether the standard Java BlockingQueue implementations actually use wait() and notify(), but it would not be hard to implement a BlockingQueue that did use that mechanism. So, if BlockingQueue solves your problem, then wait() and notify() solve your problem. You just didn't see the solution.
In fact, I would be willing to bet that wait() and notify() can be used to solve any problem that requires one thread to wait for another. It's just a matter of seeing what else you need to build around them.
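To make that concrete, here is a minimal sketch of the "Windows message queue" idea built on a per-thread BlockingQueue; the Mailbox, Message, and DatabaseWorker names are my own placeholders, not anything from the standard library. Each long-lived thread owns exactly one inbox and blocks on it, and any other thread can post into it.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical message type; a real one might carry a payload and a reply-to mailbox.
record Message(String type, Object payload) {}

// One Mailbox per long-lived thread: the owner blocks on take(), any other thread may post().
class Mailbox {
    private final BlockingQueue<Message> queue = new LinkedBlockingQueue<>();

    void post(Message m) { queue.add(m); }            // called by other threads
    Message take() throws InterruptedException {      // called only by the owning thread
        return queue.take();
    }
}

// Example wiring: the database thread loops on its own mailbox and replies to the network thread.
class DatabaseWorker implements Runnable {
    private final Mailbox myInbox;
    private final Mailbox networkInbox;

    DatabaseWorker(Mailbox myInbox, Mailbox networkInbox) {
        this.myInbox = myInbox;
        this.networkInbox = networkInbox;
    }

    public void run() {
        try {
            while (true) {
                Message m = myInbox.take();                              // wait for work
                Message reply = new Message("DB_RESULT", m.payload());   // stand-in for real database work
                networkInbox.post(reply);                                // hand the result back to the network thread
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();                          // exit cleanly on shutdown
        }
    }
}

With this layout you end up with one queue per thread rather than one per channel, which is essentially what the Windows message queue gave you in C++.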

Related

Java NIO multiplexed Server: should I use worker threads to process requests?

Should I accept connections and monitoring clients on a listener thread and then let workers handle the request and answer to the client, or should I do everything on one thread?
Neither.
Ideally, for an NIO-based server, you create a thread pool using something like Executors.newFixedThreadPool(), which you will use to perform all the processing for handling your requests.
But, there should be no assignment of requests to specific threads, because the rest of your system should be asynchronous as well. That means that when a request handler needs to perform some lengthy I/O work or similar, instead of blocking the thread and waiting for it to finish, it starts it asynchronously and arranges for processing to continue when the work is finished by submitting a new task to the thread pool. There's no telling which thread will pick up the work at that point, so the processing for a request could end up being spread across many threads.
You should usually coordinate your asynchronous processing using CompletableFuture the same way that Promise is used in node. Have a look at my answer over here, which tries to explain how to do that: decoupled design for async http request
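As a rough illustration of that style (the handler steps and the pool here are my own placeholders, not the code from the linked answer), the request handler never blocks; it chains the follow-up work onto a CompletableFuture, and whichever pool thread is free picks it up:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncHandlerSketch {
    // One shared pool performs all the processing for all requests.
    private static final ExecutorService pool = Executors.newFixedThreadPool(8);

    static void handleRequest(String request) {
        CompletableFuture
            .supplyAsync(() -> loadFromDatabase(request), pool)       // lengthy I/O started asynchronously
            .thenApplyAsync(AsyncHandlerSketch::render, pool)         // continues on whichever thread is free
            .thenAcceptAsync(AsyncHandlerSketch::writeResponse, pool)
            .exceptionally(ex -> { ex.printStackTrace(); return null; });
    }

    // Placeholder steps standing in for real I/O and rendering.
    static String loadFromDatabase(String request) { return "row-for-" + request; }
    static String render(String row) { return "<html>" + row + "</html>"; }
    static void writeResponse(String body) { System.out.println(body); }
}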
If your request handling is 100% asynchronous, that is, you never wait for anything during request handling, and you're on a single-core system, then it might be slightly better to do everything in the same thread.
If you have a multi-core system or you wait on I/O during request processing, then you should use a thread pool instead.

How to synchronize between executors of an ExecutorService

I have a list of client sockets, usually of size around 2000. These clients are dynamic, they come and go.
I have an ExecutorService with a fixed thread pool of 32 threads handling these clients. This executor service is responsible for decoding messages and sending them to these 2000 clients.
I want to prevent that two (or more) threads of the executor service are processing the same client at the same time.
One approach could be to introduce another bookkeeping thread (so I end up with 32 + 1 threads) which is responsible for calling ExecutorService.submit(message) when the previous message for the same client is done. But I am not sure whether this will introduce a bottleneck, meaning that the newly introduced bookkeeping thread cannot keep up with submitting messages.
Ideally, I don't want to pre-allocate a thread to a set of clients in advance, as the message load is not evenly distributed between the clients. It is also not known in advance.
What are the approaches for this? Does java.util.concurrent offer anything for it?
update
This is a quick summary, as the comments pointed out that there were some misunderstandings:
I don't want a single thread per client, as I would end up with 2000 threads.
Ideally, I don't want to pre-allocate a thread to a set of clients, because message rate is not evenly distributed between all clients and not known in advance.
Message order must be preserved.
I believe it would not be good for thread A to sit idle waiting for thread B because B is already sending a message to the same client. At the same time, only one thread should ever be processing a given client.
When a thread (A) begins processing a message (#1), it needs to register the client id with a shared manager object. For each registered client, there is a queue.
When another thread (B) begins processing a message (#2) for the same client, the registration will detect that thread A is already processing, and will add message #2 to the queue for client. Thread B will then stop and process the next message.
When thread A is done with message #1, it will try to unregister, but since message #2 is queued, thread A will instead begin processing that message. After that, when it tries to unregister again, there are no queued messages and the thread will stop and process the next message.
It is up to the manager code to correctly synchronize access, so a second message is either processed by thread B, or handed off to thread A, without getting lost.
The above logic ensures that thread B will not wait for thread A, i.e. no idle time, and that message #2 is processed as soon as possible, i.e. with minimal delay, without two messages for the same client being processed at the same time.
Message order for each client is retained. Globally, message order is of course not retained, because the processing of message #2 is delayed.
Note: there will be only one queue for each thread, so only 32 queues, and only "duplicate" messages are queued, so all queues will usually remain empty.
UPDATE
Example: For identification here, messages are named clientId.messageId where messageId is global.
Messages are submitted to the Executor (3 threads) in this order:
1.1, 2.2, 1.3, 2.4, 3.5, 1.6
Thread A picks up 1.1 and starts processing.
Thread B picks up 2.2 and starts processing.
Thread C picks up 1.3, adds it to thread A's queue, then returns.
Thread C picks up 2.4, adds it to thread B's queue, then returns.
Thread C picks up 3.5 and starts processing.
Thread A is done with message 1.1 and starts processing 1.3.
Thread C is done with message 3.5 and returns.
Thread C picks up 1.6, adds it to thread A's queue, then returns.
Thread C is now idle.
Thread B is done with message 2.2 and starts processing 2.4.
Thread A is done with message 1.3 and starts processing 1.6.
Thread B is done with message 2.4 and returns.
Thread B is now idle.
Thread A is done with message 1.6 and returns.
Thread A is now idle.
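Here is a minimal sketch of the registration and hand-off logic described above; the ClientSerializer name and its methods are illustrative only, not an existing library class.

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// At most one pool thread processes a given client at a time;
// messages for a busy client are queued behind it, in order.
class ClientSerializer {
    private final Map<Integer, Queue<Runnable>> busy = new HashMap<>();

    // Returns true if the caller now owns the client and must process the task itself;
    // otherwise the task has been queued behind the thread already processing that client.
    synchronized boolean tryAcquire(int clientId, Runnable task) {
        Queue<Runnable> pending = busy.get(clientId);
        if (pending == null) {
            busy.put(clientId, new ArrayDeque<>());
            return true;
        }
        pending.add(task);
        return false;
    }

    // Called by the owning thread when it finishes a task: either hands back the next
    // queued task for the same client, or releases the client entirely.
    synchronized Runnable release(int clientId) {
        Queue<Runnable> pending = busy.get(clientId);
        Runnable next = pending.poll();
        if (next == null) {
            busy.remove(clientId);
        }
        return next;
    }
}

// Typical use inside a pool task:
//   if (serializer.tryAcquire(clientId, work)) {
//       Runnable current = work;
//       while (current != null) {
//           current.run();
//           current = serializer.release(clientId);
//       }
//   }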
Have each thread service its own queue. Number the sockets. Put each request on queue[socketNum % numThreads].
This ensures that requests from a particular socket are handled serially and in sequence.
Unfortunately you won't get load balancing this way.
Alternatively, use a ConcurrentHashMap to track the sockets currently being served. If a thread picks up a request for a socket that is currently being processed, just put the request back in the queue.
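A minimal sketch of the first suggestion, with one single-threaded executor per queue and the socket number deciding where a request lands (the class name is illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One single-threaded executor per "stripe"; all requests for a given socket
// always land on the same stripe, so they run serially and in order.
class StripedDispatcher {
    private final ExecutorService[] stripes;

    StripedDispatcher(int numThreads) {
        stripes = new ExecutorService[numThreads];
        for (int i = 0; i < numThreads; i++) {
            stripes[i] = Executors.newSingleThreadExecutor();
        }
    }

    void submit(int socketNum, Runnable request) {
        stripes[Math.floorMod(socketNum, stripes.length)].submit(request);
    }
}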
You want to process messages for each client sequentially, and at the same time you do not want to allocate a separate thread to each client. This is exactly the use case for the Actor model. Actors are like lightweight threads. They are not as powerful as ordinary threads but fit ideally for repetitive tasks like yours.
If you find the Java actor libraries that Google turns up too heavyweight, you can use the compact actor implementation in my GitHub repository, or look at the extended actor implementation included in my asynchronous library df4j.

UDP and Threads - Java

I'm using UDP to communicate between threads, but I want some kind of variable to know whether the thread waiting for a message has waited too long.
Is there any method on the UDP socket class that I could use?
Or is it a better choice to build my own timekeeper alongside every thread to keep track of the time?
Question: If a thread has waited too long for a message, what should it do?
Answer: Stop waiting!
What you should probably do is to call setSoTimeout(int) on the DatagramSocket to set a timeout before you call receive(DatagramPacket). This will cause the thread that is waiting for a message to get a SocketTimeoutException if it waits for longer than the timeout.
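For example (the port number and buffer size here are arbitrary):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;

public class UdpReceiverSketch {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(4445)) {
            socket.setSoTimeout(5000);                        // give up after 5 seconds
            byte[] buf = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            try {
                socket.receive(packet);                       // blocks until a packet arrives or the timeout expires
                System.out.println("got " + packet.getLength() + " bytes");
            } catch (SocketTimeoutException e) {
                System.out.println("waited too long, no message arrived");
            }
        }
    }
}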
To answer your actual question:
There isn't a built-in method that one thread can call to see how long another thread has been waiting for a message.
Building a separate timekeeper is possible, but rather heavyweight.

Does a ConfirmListener in the Java RabbitMQ Client have to be synchronized?

I want to know what happens when we receive an ACK. Do we receive ACKs in a single thread or in many threads?
Are the handleAck and handleNack methods used by a single thread or by many threads? If they are used by a single thread, then it is OK. But if they are used by several threads, then we have to write our code in a thread-safe manner.
You shouldn't need to make your ConfirmListener code thread-safe, but not because the ack and nack methods won't be called from multiple threads; rather, because you shouldn't share a Channel between threads to begin with.
The documentation specifically calls this out:
While some operations on channels are safe to invoke concurrently, some are not and will result in incorrect frame interleaving on the wire. Sharing channels between threads will also interfere with Publisher Confirms.
When you are publishing to the broker, just don't share the Channel. Channels are lightweight and not that expensive to create. That way you don't need to worry about the confirms either.
If you do share the Channel your confirms will be interfered with as per the above quote.
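A minimal sketch of that advice, assuming a broker on localhost and a queue name of my own choosing: each publishing thread opens its own Channel, enables confirms on it, and waits for them there.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PerThreadChannelSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection()) {
            Runnable publisher = () -> {
                // Each thread gets its own Channel; Channels are cheap, Connections are not.
                try (Channel channel = connection.createChannel()) {
                    channel.confirmSelect();                              // publisher confirms on this channel only
                    channel.queueDeclare("work", true, false, false, null);
                    channel.basicPublish("", "work", null, "hello".getBytes());
                    channel.waitForConfirmsOrDie(5_000);                  // wait for the broker's ack
                } catch (Exception e) {
                    e.printStackTrace();
                }
            };
            Thread t1 = new Thread(publisher);
            Thread t2 = new Thread(publisher);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
        }
    }
}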

How can 2 threads communicate with each other?

Thread A is summing up data passed from 10 clients.
while (true) {
    Socket clientfd = server.accept();
    BufferedReader message = new BufferedReader(new InputStreamReader(clientfd.getInputStream()));
    String val = message.readLine();          // one value per client
    this.sum_data += Integer.parseInt(val);   // add it to the running total
    message.close();
    clientfd.close();
    this.left--;                              // one fewer client to wait for
    if (this.left == 0) {
        System.out.println(this.sum_data);
        break;
    }
}
Thread B is constantly communicating with clients whether they are alive or not (heartbeating technique).
The thing is that clients can sometimes fail, and in that case the thread that is summing up the data should just print out the results from the clients that are still alive. Otherwise, it will never print out the result.
So, if the heartbeat thread notices that one client is not responding, is there a way for it to tell the other thread (or to change the other thread's instance variable this.left)?
Basically, there are two general approaches to thread communication:
Shared memory
Event/queue based
In the shared-memory approach, you might create a synchronized list or a synchronized map that both threads may read from and write to. Typically there is some overhead to making sure reads and writes occur without conflicts; you don't want an object you're reading to be deleted out from under you, for instance. Java provides collections which are well behaved, like Collections.synchronizedMap and Collections.synchronizedList.
In event- or queue-based thread communication, threads have incoming queues and write to other threads' incoming queues. In this scenario, you might have the heartbeat thread load up a queue with clients to read from, and have the other thread poll/take from this queue and do its processing. The heartbeat thread could continually add the clients that are alive to this queue so that the processing thread "knows" to continue processing them.
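For this particular question, a queue-based sketch might look like the following (the class and method names are made up for illustration): the heartbeat thread posts the id of any client that stops responding, and the summing thread drains the queue between rounds of work to adjust its this.left counter.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class HeartbeatQueueSketch {
    // Shared queue: the heartbeat thread writes ids of dead clients, the summing thread reads them.
    static final BlockingQueue<Integer> deadClients = new LinkedBlockingQueue<>();

    // Heartbeat side: when a client stops answering, publish its id.
    static void reportDead(int clientId) {
        deadClients.add(clientId);
    }

    // Summing side: drain the queue and reduce the number of clients still expected
    // (the question's this.left counter).
    static int adjustRemaining(int left) throws InterruptedException {
        Integer dead;
        while ((dead = deadClients.poll(100, TimeUnit.MILLISECONDS)) != null) {
            System.out.println("client " + dead + " dropped");
            left--;
        }
        return left;
    }
}

One caveat: because the summing thread spends most of its time blocked in accept(), it would also need something like ServerSocket.setSoTimeout() so that it regularly regains control and gets a chance to drain the queue.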
