Thread A is summing up data passed from 10 clients.
while (true) {
    Socket clientfd = server.accept();
    BufferedReader message = new BufferedReader(new InputStreamReader(clientfd.getInputStream()));
    String val = message.readLine();
    this.sum_data += Integer.parseInt(val); // parse the line sent by the client and add it to the running sum
    message.close();
    clientfd.close();
    this.left--;
    if (this.left == 0) {
        System.out.println(this.sum_data);
        break;
    }
}
Thread B constantly checks whether the clients are alive (a heartbeat technique).
The thing is that clients can sometimes fail, and in that case the thread that is summing up the data should just print the partial result from the clients that are still alive. Otherwise, it will never print the result.
So, if the heartbeat thread notices that one client is not responding, is there a way for it to tell the other thread (or change the other thread's instance variable this.left)?
Basically, there are two general approaches to thread communication:
Shared memory
Event/queue based
In the shared memory approach, you might create a synchronized list or a synchronized map that both threads can read from and write to. Typically there is some overhead in making sure reads and writes occur without conflicts; you don't want an object you're reading to be deleted while you're reading it, for instance. Java provides collections which are well behaved, like Collections.synchronizedMap and Collections.synchronizedList.
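For this particular question the shared state can be as small as a single counter. Here is a minimal sketch, assuming both threads hold a reference to the same object (the class and method names are made up for illustration); note that the summing thread may still be blocked in accept(), so in practice you would also need to unblock it, for example by closing the ServerSocket from the heartbeat thread.

import java.util.concurrent.atomic.AtomicInteger;

public class Aggregator {
    // Both threads see the same counter; AtomicInteger makes the updates thread-safe.
    private final AtomicInteger left = new AtomicInteger(10);
    private long sumData = 0;

    // Called by the summing thread after it has read one client's value.
    synchronized void addValue(long value) {
        sumData += value;
        finishOne();
    }

    // Called by the heartbeat thread when it detects a dead client.
    void clientDied() {
        finishOne();
    }

    private void finishOne() {
        if (left.decrementAndGet() == 0) {
            synchronized (this) {
                System.out.println(sumData);
            }
        }
    }
}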
In event-based, or queue-based, thread communication, threads have incoming queues and write to other threads' incoming queues. In this scenario, you might have the heartbeat thread load up a queue with clients to read from, and have the other thread poll/take from this queue and do its processing. The heartbeat thread could continually add the clients that are alive to this queue so that the processing thread "knows" to continue processing them.
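A rough sketch of that idea applied here, using a BlockingQueue as the summing thread's inbox for failure notices (the class and method names are hypothetical):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FailureInbox {
    // Inbox owned by the summing thread; the heartbeat thread only writes to it.
    private final BlockingQueue<String> deadClients = new LinkedBlockingQueue<>();

    // Heartbeat thread: report a client that stopped answering.
    public void reportDead(String clientId) {
        deadClients.offer(clientId);
    }

    // Summing thread: between reads, drain any failure notices and shrink `left`.
    public int drainInto(int left) {
        while (deadClients.poll() != null) {
            left--;
        }
        return left;
    }
}

The heartbeat thread calls reportDead(), and the summing thread calls drainInto(left) between reads so the count of clients it is still waiting for shrinks as failures are reported.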
Related
I'm new to Java and have been stuck on an issue with respect to thread message passing.
What I mean is: I have 4 threads. One thread reads messages from the network and, based on the type of message, passes the message on to either the parser thread or the database thread. The database thread performs some operation and has to send a message back to the first (network) thread, which puts it into the socket. Similarly, the parser thread also performs some action and, based on the result, has to send a message back to either the network thread or the database thread.
Things I have tried:
I have read about notify()/wait() for thread communication, which does not help in my case as I need one-to-one message passing, not broadcast to all.
I have read about concurrent queues and blocking queues. Since this is not an ideal producer-consumer problem where one thread produces messages and other threads read from it, I cannot use this.
Using this would mean I need to have 5 queues, one for each communication channel:
network->db,
db->network,
parser->network,
parser->db
Is this an efficient way to go about it?
In C++ I was using a messaging mechanism where I would just post a message (a Windows message) to the corresponding thread's message queue, and that thread would fetch it from its queue.
Is there a mechanism like message passing in Java which I could use?
one thread reads messages from the network and based on the type of message passes the message on to ... the database thread. The database thread performs some operation and has to send a message back to the first network thread, which puts it into the socket.
You're making the "network" thread responsible for waiting for messages from the network and also for waiting for messages from the "database" thread. That's awkward. You may find it somewhere between mildly difficult and impossible to make that happen in a clean, satisfying way.
My personal opinion is that each long-lived thread in a multi-threaded program should wait for only one thing.
What is the reason for having the database thread "send msg back to the first network thread [to be put] into socket?" Why can't the database thread itself put the message into the socket?
If there's a good reason for the database thread not to send out the message itself, then why can't "put the message into the socket" be a task that your database thread submits to a thread pool?
I have read about notify()/wait() for thread communication, which does not help in my case
Would a BlockingQueue help?
I have read about concurrent queues and blocking queues - since this is not an ideal producer-consumer problem where one thread is producing messages and other threads read from it - I cannot use this. Using this would mean I need to have 5 queues, one for each communication channel.
And? If adding more queues or more threads to a program makes the work that those threads do simpler or makes the explanation of what those queues are for easier to understand, would that be a Bad Thing?
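For what it's worth, here is a sketch of that per-thread inbox idea; the Worker name and the String message type are just placeholders:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Worker implements Runnable {
    // Each long-lived thread owns exactly one inbox; other threads post to it.
    private final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

    public void post(String message) {
        inbox.offer(message);
    }

    @Override
    public void run() {
        try {
            while (true) {
                String message = inbox.take(); // waits for only one thing: this inbox
                handle(message);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void handle(String message) {
        System.out.println(Thread.currentThread().getName() + " got: " + message);
    }
}

The network, parser, and database threads would each be one such worker, and "sending a message to a thread" becomes nothing more than calling post() on that worker's inbox.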
Note about wait() and notify(): those are low-level methods that are meant to be used in a very specific way to build higher-level mechanisms. I don't know whether the standard Java BlockingQueue implementations actually use wait() and notify(), but it would not be hard to implement a BlockingQueue that did use that mechanism. So, if a BlockingQueue solves your problem, then that means wait() and notify() solve your problem. You just didn't see the solution.
In fact, I would be willing to bet that wait() and notify() can be used to solve any problem that requires one thread to wait for another. It's just a matter of seeing what else you need to build around them.
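To make that concrete, here is a minimal sketch of an unbounded blocking queue built directly on wait()/notify(); it is only an illustration, not how java.util.concurrent is actually implemented:

import java.util.ArrayDeque;
import java.util.Deque;

public class SimpleBlockingQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();

    // Producer side: add an item and wake up one waiting consumer.
    public synchronized void put(T item) {
        items.addLast(item);
        notify();
    }

    // Consumer side: wait until an item is available, then remove it.
    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {
            wait(); // releases the lock until another thread calls notify()
        }
        return items.removeFirst();
    }
}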
I have the below code in which I am using synchronized on a socket:
public boolean send(final long addr, final byte[] enc, final Socket socket) {
    ZMsg msg = new ZMsg();
    msg.add(enc);
    // using the socket as its own lock while accessing it
    boolean sent;
    synchronized (socket) {
        sent = msg.send(socket);
    }
    msg.destroy();
    retryHolder.put(addr, enc);
    return sent;
}
I want to understand how synchronizing on the socket will work here. I have around 20 threads calling this send method concurrently, and each time the Socket can be different. We have around 60 sockets to choose from, so all 20 threads can pick any one socket out of those 60. It is possible that multiple threads pick the same socket to send data on, or that multiple threads pick a different socket every time. Below are the scenarios I can think of:
All 20 threads picking a different socket each time to send data on, as we have 60 sockets to work with. How does synchronizing on the socket work in this scenario? Will it be fast, or will it block any threads?
Out of 20 threads, some threads randomly picking the same socket to send data on. All those threads will wait for the others before entering the synchronized block, meaning each thread waits for that socket to be freed up? So will it slow anything down?
Any other case I missed?
Basically, I am trying to figure out whether there will be any performance hit from using the socket as the lock (via the synchronized keyword) in all the scenarios I might hit.
This mechanism is in place to prevent different threads from sending data to the same socket at the same time and mucking up your data. Different threads using different sockets won't block, and although the act of synchronizing is not "free", it's negligible compared to sending data over the network.
If each thread has a different socket than the other threads, performance will not be hampered, as no thread will be blocked waiting for a lock on a socket.
Suppose three threads have the same socket. Only one will be allowed to send data on the socket at a time. Once it has finished, the lock will be released, the next thread will get the lock on the socket and start sending data, then the third thread, and so on. So here there will be some performance impact. And since, as you mentioned, the threads are fewer than the sockets, I would advise changing the implementation to assign the same socket to a thread only when all the sockets are already assigned, to avoid any performance problems.
But yes, if the socket class is your own class and you can change the code, it is better to add synchronization on the send method, or preferably on all the methods, so that no other thread can use the socket while it does not hold the lock. And if you cannot change it, you can create a wrapper class or proxy for the Socket class and make the methods in the proxy class synchronized.
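A rough sketch of that wrapper idea, assuming the jeromq Socket and ZMsg types from the question; the class name is made up:

import org.zeromq.ZMQ.Socket;
import org.zeromq.ZMsg;

// Hypothetical wrapper: the lock lives inside the wrapper instead of callers
// synchronizing on the raw socket object.
public class SynchronizedSender {
    private final Socket socket;

    public SynchronizedSender(Socket socket) {
        this.socket = socket;
    }

    // Only one thread at a time can be inside send() for this particular socket.
    public synchronized boolean send(ZMsg msg) {
        return msg.send(socket);
    }
}

Callers then share SynchronizedSender instances instead of raw sockets, and the locking is no longer scattered across call sites.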
I am working on a fairly simple producer/consumer scenario: I have some threads that deliver data to a monitor, and other threads that await data in the monitor, remove it, and deliver the data to another monitor. At a certain point, the producers will all have delivered their last data to the monitor. After the consumers have consumed the last data in the monitor, they need to be told not to await more data from the monitor. To make this run as it should, the consumer threads need to be notified when the last producer thread has produced its last bit of data and there is no more data due. I am sure there are multiple ways to do this. As of now, the monitor counts the number of active producer threads, and when a producer thread finishes, it tells the monitor so. I am very curious, though, what a more elegant approach to this would be.
A simple solution is to let each producer send a poison pill; the consumer keeps a count of the poison pills received so far and compares it with the number of producers.
import java.util.concurrent.BlockingQueue;

class Consumer implements Runnable {
    static final Object POISON_PILL = new Object(); // sentinel each producer enqueues when it is done

    final BlockingQueue<Object> queue;
    final int numOfProducers;
    int poisonPillsReceived;

    Consumer(BlockingQueue<Object> queue, int numOfProducers) {
        this.queue = queue;
        this.numOfProducers = numOfProducers;
    }

    public void run() {
        try {
            while (true) {
                Object obj = queue.take(); // blocks until something is available
                if (obj == POISON_PILL) {
                    poisonPillsReceived++;
                    if (poisonPillsReceived == numOfProducers) {
                        break; // every producer has finished
                    }
                } else {
                    // ... process obj ...
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
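On the producer side, each producer would enqueue the shared sentinel as its very last item; a sketch, assuming `queue` is the same BlockingQueue the consumer reads from and the work items are placeholders:

// Producer: push real work items, then the shared sentinel to signal completion.
Runnable producer = () -> {
    try {
        for (int i = 0; i < 100; i++) {
            queue.put("item-" + i); // placeholder work items
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } finally {
        queue.offer(Consumer.POISON_PILL); // always send the pill, even on interrupt
    }
};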
I have a large number of state machines. Occasionally, a state machine will need to be moved from one state to another, which may be cheap or expensive and may involve DB reads and writes and so on.
These state changes occur because of incoming commands from clients, and can occur at any time.
I want to parallelise the workload. I want a queue saying 'move this machine from this state to this state'. Obviously the commands for any one machine need to be performed in sequence, but I can be moving many machines forward in parallel if I have many threads.
I could have a thread per state machine, but the number of state machines is data-dependent and may be many hundreds or thousands; I don't want a dedicated thread per state machine, I want a pool of some sort.
How can I have a pool of workers but ensure that the commands for each state machine are processed strictly sequentially?
UPDATE: so imagine the Machine instance has a list of outstanding commands. When an executor in the thread pool has finished consuming a command, it puts the Machine back into the thread-pool's task queue if it has more outstanding commands. So the question is, how to atomically put the Machine into the thread pool when you append the first command? And ensure this is all thread safe?
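A sketch of the mechanism this update describes, with hypothetical names; the key point is that only the caller that wins the compareAndSet hands the Machine to the pool, so it can never be queued twice:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicBoolean;

class Machine implements Runnable {
    private final Queue<Runnable> commands = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean scheduled = new AtomicBoolean(false);
    private final ExecutorService pool;

    Machine(ExecutorService pool) {
        this.pool = pool;
    }

    // Called by any thread that wants this machine to change state.
    void submit(Runnable command) {
        commands.add(command);
        // Only the caller that flips the flag from false to true enqueues the machine.
        if (scheduled.compareAndSet(false, true)) {
            pool.execute(this);
        }
    }

    // Executed by a pool worker; drains this machine's commands strictly in order.
    @Override
    public void run() {
        Runnable command;
        while ((command = commands.poll()) != null) {
            command.run();
        }
        scheduled.set(false);
        // A command may have arrived after the last poll but before the flag was
        // cleared; if so, try to reschedule the machine.
        if (!commands.isEmpty() && scheduled.compareAndSet(false, true)) {
            pool.execute(this);
        }
    }
}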
I suggest this approach:
Create a thread pool, probably of some fixed size, with Executors.newFixedThreadPool.
Create some structure (probably a HashMap) which holds one Semaphore for each state machine. Those semaphores will have a value of 1 and should be fair semaphores to preserve ordering.
In the Runnable which does the job, add semaphore.acquire() for the semaphore of its state machine at the beginning and semaphore.release() at the end of the run method, as in the sketch after this list.
With the size of the thread pool you control the level of parallelism.
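A minimal sketch of that suggestion (the machine ids are just strings here, and doTransition is a placeholder for the actual state-change work):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class StateMachinePool {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);
    // One fair semaphore per machine, created lazily and shared by all tasks for that machine.
    private final Map<String, Semaphore> locks = new ConcurrentHashMap<>();

    public void submit(String machineId, Runnable doTransition) {
        Semaphore lock = locks.computeIfAbsent(machineId, id -> new Semaphore(1, true));
        pool.execute(() -> {
            try {
                lock.acquire();          // only one transition per machine at a time
                try {
                    doTransition.run();  // placeholder for the state-change work
                } finally {
                    lock.release();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}

Note that the semaphore only prevents two transitions of the same machine from running concurrently; the order in which waiting pool threads get the fair semaphore still depends on the order in which they reach acquire().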
I suggest another approach. Instead of using a thread pool just to move states in a state machine, use a thread pool for everything, including doing the work. After doing some work that results in a state change, the state-change event should be added to the queue. After the state change is processed, another do-work event should be added to the queue.
Assuming that the state transitions are work-driven, and vice versa, out-of-order processing is not possible.
The idea of storing semaphores in a special map is very dangerous. The map will have to be synchronized (adding/removing objects is not thread-safe), and there is a relatively large overhead in doing the lookups (possibly synchronizing on the map) and then using the semaphore.
Besides, if you want to use a multithreaded architecture in your application, I think you should go all the way. Mixing different architectures may prove troublesome later on.
Have a thread ID per machine. Spawn the desired number of threads. Have all the threads greedily process messages from the global queue. Each thread locks the current message's machine to be used exclusively by itself (until it is done processing the current message and all messages on its queue), and the other threads put messages for that machine on its internal queue.
EDIT: Handling message pseudo-code:
void handle(message)
    targetMachine = message.targetMachine
    if (targetMachine.thread != null)
        targetMachine.thread.addToQueue(message);
    else
        targetMachine.thread = this;
        process(message);
        processAllQueueMessages();
        targetMachine.thread = null;
Handling message Java code: (I may be overcomplicating things slightly, but this should be thread-safe)
/* class ThreadClass */
void handle(Message message) throws InterruptedException
{
    // get targetMachine from message
    targetMachine.mutexInc.acquire(); // blocking
    targetMachine.messages++;
    boolean acquired = targetMachine.mutex.tryAcquire(); // non-blocking
    if (acquired)
        targetMachine.threadID = this.ID;
    targetMachine.mutexInc.release();

    if (!acquired)
        // can put this before release, it may speed things up
        threads[targetMachine.threadID].addToQueue(message);
    else
    {
        process(message);
        targetMachine.messages--;
        while (true)
        {
            while (!queue.empty())
            {
                process(queue.pop());
                targetMachine.messages--;
            }
            targetMachine.mutexInc.acquire(); // blocking
            if (targetMachine.messages > 0)
            {
                targetMachine.mutexInc.release();
                Thread.sleep(1);
            }
            else
                break;
        }
        targetMachine.mutex.release();
        targetMachine.mutexInc.release();
    }
}
I am currently implementing a program that requires me to handle threads and processes.
IDEA:
There are multiple java processes running and each process may have multiple threads.
The current Java implementation is such that thread IDs are unique within a particular process but not across processes. So is there a way I could implement unique thread IDs across multiple processes?
Also, I need to implement an external Java program that monitors these threads. By monitoring I mean that, depending on some logic, I need to notify a particular thread (using its unique thread ID) about an event. Is there a way to access a thread from an external program? If yes, how?
Are there any other solutions for implementing a similar idea?
Thank you in advance.
You could use a concatenation of the process id and the thread id to uniquely identify a thread - for instance, thread 23 in process 7038 could be identified as 7038:23. This has the advantage that given a thread identifier, you can tell which process the thread belongs to.
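For example, a small sketch of building such an identifier from inside the thread itself (ProcessHandle requires Java 9 or later):

public class GlobalThreadId {
    public static void main(String[] args) {
        // Combine the OS process id with the JVM thread id, e.g. "7038:23".
        long pid = ProcessHandle.current().pid();   // Java 9+
        long tid = Thread.currentThread().getId();
        System.out.println(pid + ":" + tid);
    }
}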
I doubt that it is possible for one process to control the threads of another. You probably need to use some form of inter-process communication, such as RMI, named pipes, or TCP. Each process should probably have one thread that waits for an incoming message, parses it, and notifies the appropriate thread based on the contents of the message.
A very simple example of what a TCP-based solution might look like: Every worker process has a thread that listens for TCP connections from the monitoring process; it is expected that when the monitoring process connects, it will write one line containing the id of a thread in this worker process. The worker process must keep e.g. a HashMap that maps thread ids to Thread objects.
ServerSocket welcomeSocket = new ServerSocket(6789);
while (true) {
    Socket connectionSocket = welcomeSocket.accept();
    BufferedReader socketReader = new BufferedReader(new InputStreamReader(
            connectionSocket.getInputStream()));
    String line = socketReader.readLine();
    int threadId = Integer.parseInt(line);
    // Now, use threadId to locate the appropriate thread
    // and send a notification to it.
}
There should probably also be a way for the monitoring process to ask a worker process for all its thread ids. The monitoring process can simply maintain a list of process ids (and which port each process listens on) and, for each process id, a list of the thread ids inside that process.
By the way, as #parsifal said, it would be interesting to know what you are actually trying to achieve.