I have a client end (say Customer) that sends a request (with RequestID 1) to the server end and receives an ack for the sent request. My server end (say SomeStore) processes request 1, sends the result to Customer, and receives an ack (or resends up to three times). I have another thread listening at Customer. Upon receiving the result, Customer's listener thread should update a HashMap at key 1. All I need is to wait for and retrieve this updated value at key 1.
I have a thread from a thread pool to send the request and receive the ack on both ends. I see that both threads do the sending. I also have a thread pool for the listener. After receiving the ack, if I make my main thread wait in a while loop, I don't see the listener's update. (Here I cannot make it work with wait().) I don't understand this behavior. Shouldn't both threads be working?
I tried changing my implementation: I created a separate class for the received value and synchronized it, with this.wait() around myHashMap.get(key) and this.notify() around myHashMap.put(key, value). It works a couple of times but not always. My understanding is that it depends on which thread gets the lock first.
How else do I wait and listen at the same time? Maybe I am overlooking something obvious...
It would be easier to receive a reply instead of an ack, but my request can get lost in the network, hence the ack. I am already using Callable<> for the ack. Any idea is appreciated...
I suspect you are not using thread safe access to the map.
If it's not a ConcurrentHashMap and you are not using synchronization, there is no guarantee you will ever see a change in a HashMap.
Instead of using wait/notify and your own threads, I suggest you use ConcurrentHashMap and ExecutorService and add tasks to perform the update. This will ensure you process and see every update.
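A minimal sketch of that idea, with illustrative names (ReplyMap, requestId, a String value) that are not from the original code: each request id maps to a one-slot blocking queue inside a ConcurrentHashMap, the listener thread puts the value there, and the requesting thread blocks on poll() with a timeout instead of spinning in a while loop. The listener itself can run as a task in an ExecutorService.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Each request id maps to a one-slot queue; the listener fills it, the
// requester blocks on poll() with a timeout instead of spinning on the map.
class ReplyMap {
    private final ConcurrentHashMap<Integer, BlockingQueue<String>> replies =
            new ConcurrentHashMap<>();

    private BlockingQueue<String> slot(int requestId) {
        return replies.computeIfAbsent(requestId, id -> new ArrayBlockingQueue<>(1));
    }

    // called by the listener thread when the value for "requestId" arrives
    public void deliver(int requestId, String value) throws InterruptedException {
        slot(requestId).put(value);
    }

    // called by the requesting thread; waits until the listener delivers (or times out)
    public String await(int requestId, long timeout, TimeUnit unit) throws InterruptedException {
        return slot(requestId).poll(timeout, unit);
    }
}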
I have a theoretical question regarding "message loops"; specifically, returning the result of operations happening in a message loop that runs in a different thread. I have a situation where I have a TCP server listening for incoming messages. For each incoming message the server will authenticate the client who sent the message, and two things may happen:
If the authenticated client has an attached handler the received message will be passed to the handler's message queue.
If the client has no handler, a new one will be created and, as above, the message will be passed to its message queue.
The handler is currently an object implementing the Callable interface, so it runs in a different thread and it's simple enough to get the result of the operation. Now for my problem: each handler can have N messages to process. The handler has "message loop"-like functionality that runs until a timeout occurs; a timeout in this case means the socket's idle time reaching a predefined threshold. What I would like to know is: how can I return a value from within the message loop without actually terminating the thread? Something like the following:
while (true) {
    if (expired(socket))
        break; // the Callable will finish the call() method

    // get the first item from the queue
    message = messageQueue.poll();
    result = process(message);
    // I want to return the result to the caller, which is in a different thread.
}
Now obviously a return statement would stop the message loop, and if the messageQueue contains more messages they'll be lost. Another naive approach would be to use a callback-like mechanism, which requires an extra object, and I'd still need to synchronize the caller with the Callable in the background thread, something like wait & notify, although I have K threads running in the background.
What would be the sophisticated way to handle this situation of returning results of operations from within a message-loop in a different thread, without terminating the thread itself?
Edit:
I'll give a description of the whole process so that it clarifies what is happening here.
A client sends a message (xml string) to the application through tcp sockets.
The application authenticates the client, and if the client has no associated handler it'll create one.
The app will push the message to the queue of the handler.
Each handler runs in a separate thread, waiting for incoming messages from the client it is associated with; it MUST NOT handle messages for other clients.
When the handler picks up a message it'll transform it into a SOAP message and will forward it to another system through TCP socket.
When the handler receives the response, it needs to delegate it back to the caller without terminating its message loop.
So the caller is something like a Dispatcher, dispatching messages to the threads that are running the handlers associated with the sender of each message. It also collects the responses from the handlers and sends them back to the correct clients.
Each handler currently has its own message queue, onto which only the messages that the particular handler has to process are pushed. When a handler starts up, it opens a TCP socket to the target system, to which it forwards the incoming messages after the transformations have been applied. When the handler reaches the maximum allowed idle time (the socket has been open without a request being sent), the socket is closed and the message loop stopped. At this point the handler finishes its execution. The purpose of this is to have a socket for each individual client through which they can send multiple requests without the need for the target system to do another authentication.
A few options/questions come to mind:
Is there a problem with terminating the thread, checking the returned result, and then re-submitting the task to the same thread pool? You would get a result, analyze it, and then resubmit to the pool and continue the work.
As this thread runs, it can submit statuses to a different ("external") queue which is analyzed outside this thread. An independent thread always runs and checks this queue (see the sketch after this list).
That's as far as I could think on how to approach it...
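A rough sketch of the second option, with Strings standing in for the real message and result types and all names invented for illustration: the handler's loop keeps running and publishes each result to a shared queue, which an independent dispatcher thread drains.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// The handler keeps looping and pushes results onto a shared "results" queue
// instead of returning them; a separate dispatcher thread drains that queue.
class HandlerSketch implements Runnable {
    private final BlockingQueue<String> messageQueue = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> results;          // shared with the dispatcher
    private final long idleTimeoutMillis;

    HandlerSketch(BlockingQueue<String> results, long idleTimeoutMillis) {
        this.results = results;
        this.idleTimeoutMillis = idleTimeoutMillis;
    }

    public void submit(String message) {
        messageQueue.add(message);
    }

    @Override
    public void run() {
        try {
            while (true) {
                // poll with a timeout instead of checking the socket's idle time
                String message = messageQueue.poll(idleTimeoutMillis, TimeUnit.MILLISECONDS);
                if (message == null) {
                    break;                                 // idle timeout: end the loop
                }
                results.put("processed: " + message);      // publish without returning
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}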
It depends on...
If you want to return a simple type you can use a thread-safe result queue (global or per caller).
Probably a thread pool will be more suitable in your case.
I believe the most universal way is a callback mechanism.
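A minimal sketch of the callback idea, where ResponseCallback, QueuedMessage, and the field names are invented for illustration: the caller registers a callback together with the message, and the handler invokes it from inside its loop once the result is ready, so the loop never has to return.

// The caller attaches a callback to the message it submits; the handler's loop
// invokes it once the result is ready.
interface ResponseCallback {
    void onResult(String result);
}

class QueuedMessage {
    final String payload;
    final ResponseCallback callback;

    QueuedMessage(String payload, ResponseCallback callback) {
        this.payload = payload;
        this.callback = callback;
    }
}

// inside the handler's message loop:
//   QueuedMessage m = messageQueue.poll();
//   if (m != null) m.callback.onResult(process(m.payload));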
For this school assignment, I need to simulate a client server type application using Java threads (no need for sockets etc). How might I go about doing it?
I need a way for the server to start and wait for clients to call it; it should then return a response. The "API" in my mind was something like:
server.start()
client1.connect(server)
client2.connect(server)
x = client1.getData()
y = client2.getData()
success1 = client1.sendData(1)
success2 = client2.sendData(2)
What might the server's and client's run() methods look like? Assume I can hardcode the method calls for now.
I suggest the following approach:
Have "server" code that works with a BlockingQueue.
A blocking queue is a data structure which is synchronized and lets the thread that reads data from it (the "consumer" thread) wait until there is data in the queue to be read.
The "producer" thread is a thread that "pushes" data onto the queue.
I would recommend you use one of the blocking queue implementations.
I would also suggest you read more about "consumer producer" pattern.
A blocking queue also eliminates the need for a "busy wait", which is not recommended in multithreaded programming.
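A minimal producer/consumer sketch of this, under the simplifying assumption of a single client and with String standing in for the real message type: the client thread is the producer, the server thread is the consumer, and a second queue carries the reply back.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Single-client simplification: replies are not matched to individual callers.
public class InMemoryServer implements Runnable {
    private final BlockingQueue<String> requests = new ArrayBlockingQueue<>(16);
    private final BlockingQueue<String> responses = new ArrayBlockingQueue<>(16);

    // called by the client thread; blocks until the server has answered
    public String call(String request) throws InterruptedException {
        requests.put(request);
        return responses.take();
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String request = requests.take();   // waits until a client sends something
                responses.put("echo: " + request);  // "process" the request
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}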
From the description you have provided, what I can suggest is that you write something like this:
1) Have one queue where all the clients can put messages.
2) The server, running in an infinite loop like while(true), waits for new messages to be put in the queue; when it finds one, it processes it and marks it as processed.
3) The job of the client threads would be to create messages and put them in the queue, and to notify the server that a new message has been added so that the server knows a new message has arrived for processing.
To make this program work, I think you need to learn the wait(), notify(), and notifyAll() methods. So basically, without sockets, what you are looking for is "inter-thread communication". This link can help.
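A hand-rolled sketch of that inter-thread communication, with illustrative names: clients synchronize on the shared object, add a message, and call notifyAll(); the server waits on the same monitor until something arrives.

import java.util.ArrayDeque;
import java.util.Queue;

// A tiny wait()/notifyAll() based message board shared by clients and server.
class MessageBoard {
    private final Queue<String> queue = new ArrayDeque<>();

    // client side: add a message and wake the server up
    public synchronized void post(String message) {
        queue.add(message);
        notifyAll();
    }

    // server side: wait until a message is available, then take it
    public synchronized String next() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();                 // releases the lock while waiting
        }
        return queue.remove();
    }
}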
Hope this helps.
I'm developing a small client-server program in Java.
The client and the server are connected over one tcp-connection. Most parts of the communication are asynchronous (can happen at any time) but some parts I want to be synchronous (like ACKs for a sent command).
I use a thread that reads commands from the socket's InputStream and raises an onCommand() event. The command itself is processed using the Command design pattern.
What would be a best-practice approach (in Java) to wait for an ACK without missing other commands that could arrive at the same time?
con.sendPacket(new Packet("ABC"));
// wait for ABC_ACK
edit1
Think of it like an FTP connection, except that both data and control commands are on the same connection. I want to catch the response to a control command while the data flow keeps running in the background.
edit2
Everything is sent in blocks to enable multiple (different) transmissions over the same TCP connection (multiplexing).
Block:
1 byte - block type
2 bytes - block payload length
n bytes - block payload
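For illustration, reading one such block could look roughly like this (a sketch only; the big-endian encoding of the length field and the class/field names are assumptions, not from the original code):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads one block in the layout above: 1 byte type, 2 bytes length, n bytes payload.
final class Block {
    final int type;
    final byte[] payload;

    Block(int type, byte[] payload) {
        this.type = type;
        this.payload = payload;
    }

    static Block read(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int type = data.readUnsignedByte();      // 1 byte: block type
        int length = data.readUnsignedShort();   // 2 bytes: payload length (big-endian assumed)
        byte[] payload = new byte[length];
        data.readFully(payload);                 // n bytes: payload
        return new Block(type, payload);
    }
}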
In principle, you need a registry of blocked threads (or better, the locks on which they are waiting), keyed with some identifier which will be sent by the remote side.
For asynchronous operation, you simply send the message and proceed.
For synchronous operation, after sending the message, your sending thread (or the thread which initiated the send) creates a lock object, adds it to the registry under some key, and then waits on the lock until notified.
The reading thread, when it receives an answer, looks up the lock object in the registry, adds the answer to it, and calls notify(). Then it goes on to read the next input.
The hard work here is the proper synchronization, to avoid deadlocks as well as missing a notification (because the answer comes back before we have added ourselves to the registry).
I did something like this when I implemented the remote method calling protocol for our Fencing-applet. In principle RMI works the same way, just without the asynchronous messages.
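A sketch of that registry, using a CompletableFuture per pending id in place of a bare lock object (my substitution, not part of the answer above). Completing the future before the sender starts waiting is harmless, which sidesteps the missed-notification race. The String id and answer types are illustrative.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// One pending future per outstanding id; the reading thread completes it,
// the sending thread blocks on it with a timeout.
class AckRegistry {
    private final ConcurrentHashMap<String, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    // sending thread: register the id (before or after the packet goes out) and wait
    public String awaitAck(String id, long timeout, TimeUnit unit) throws Exception {
        CompletableFuture<String> future =
                pending.computeIfAbsent(id, k -> new CompletableFuture<>());
        try {
            return future.get(timeout, unit);
        } finally {
            pending.remove(id);      // clean up whether we got the ack or timed out
        }
    }

    // reading thread: deliver the answer and wake the waiting sender
    public void deliver(String id, String answer) {
        pending.computeIfAbsent(id, k -> new CompletableFuture<>()).complete(answer);
    }
}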
@Paulo's solution is one I have used before. However, there may be a simpler solution.
Say you don't have a background thread reading results from the connection. What you can do instead is use the current thread to read any results.
// Asynchronous call
conn.sendMessage("Async-request");
// server sends no reply.
// Synchronous call.
conn.sendMessage("Sync-request");
String reply = conn.readMessage();
My system consists of a "proxy" class that receives "request" packets, marshals them and sends them over the network to a server, which unmarshals them, processes, and returns some "response packet".
My "submit" method on the proxy side should block until a reply is received to the request (packets have ids for identification and referencing purposes) or until a timeout is reached.
If I was building this in early versions of Java, I would likely implement in my proxy a collection of "pending messages ids", where I would submit a message, and wait() on the corresponding id (with a timeout). When a reply was received, the handling thread would notify() on the corresponding id.
Is there a better way to achieve this using an existing library class, perhaps in java.util.concurrent?
If I went with the solution described above, what is the correct way to deal with the potential race condition where a reply arrives before wait() is invoked?
The simple way would be to have a Callable that talks to the server and returns the Response.
// does not block
Future<Response> response = executorService.submit(makeCallable(request));
// wait for the result (blocks)
Response r = response.get();
Managing the request queue, assigning threads to the requests, and notifying the client code is all hidden away by the utility classes.
The level of concurrency is controlled by the executor service.
Every network call blocks one thread in there.
For better concurrency, one could look into using java.nio as well (but since you are talking to the same server for all requests, a fixed number of concurrent connections, maybe even just one, seems to be sufficient).
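A sketch of that shape, with String standing in for the real Request/Response types and a placeholder RemoteCall interface doing the actual network round trip; all names here are assumptions for illustration.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Placeholder for the marshal/send/receive round trip done by the proxy.
interface RemoteCall {
    String roundTrip(String request) throws Exception;   // send, block, return reply
}

class Proxy {
    private final ExecutorService executor = Executors.newFixedThreadPool(4);
    private final RemoteCall remote;

    Proxy(RemoteCall remote) {
        this.remote = remote;
    }

    // does not block: hands the network call to the pool
    public Future<String> submit(final String request) {
        return executor.submit(() -> remote.roundTrip(request));
    }
}

// usage: Future<String> reply = proxy.submit("ping");
//        String r = reply.get(3, java.util.concurrent.TimeUnit.SECONDS);  // blocks, with timeout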
A little help please.
I am designing a stateless server that will have the following functionality:
Client submits a job to the server.
Client is blocked while the server tries to perform the job.
The server will spawn one or multiple threads to perform the job.
The job either finishes, times out or fails.
The appropriate response (based on the outcome) is created, the client is unblocked and the response is handed off to the client.
Here is what I have thought of so far.
Client submits a job to the server.
The server assigns an ID to the job, places the job on a queue, and then places the client on another queue (where it will be blocked).
Have a thread pool that will execute the job, fetch the result and appropriately create the response.
Based on ID, pick the client out of the queue (thereby unblocking it), give it the response and send it off.
Steps 1, 3 and 4 seem quite straightforward; however, any ideas about how to put the client in a queue and then block it? Also, any pointers that would help me design this puppy would be appreciated.
Cheers
Why do you need to block the client? Seems like it would be easier to return (almost) immediately (after performing initial validation, if any) and give client a unique ID for a given job. Client would then be able to either poll using said ID or, perhaps, provide a callback.
Blocking means you're holding on to a socket which obviously limits the upper number of clients you can serve simultaneously. If that's not a concern for your scenario and you absolutely need to block (perhaps you have no control over client code and can't make them poll?), there's little sense in spawning threads to perform the job unless you can actually separate it into parallel tasks. The only "queue" in that case would be the one held by common thread pool. The workflow would basically be:
1. Create a thread pool (such as ThreadPoolExecutor).
2. For each client request:
   2.1. If you have any parts of the job that you can execute in parallel, delegate them to the pool.
   2.2. And / or do them in the current thread.
   2.3. Wait until pooled job parts complete (if applicable).
   2.4. Return results to the client.
3. Shut down the thread pool.
No IDs are needed per se; though you may need to use some sort of latch for 2.1 / 2.3 above.
Timeouts may be a tad tricky. If you need them to be more or less precise, you'll have to keep your main thread (the one that received the client request) free from work, have it signal the submitted job parts (by flipping a flag) when the timeout is reached, and return immediately. The job parts will have to check said flag periodically and terminate their execution once it's flipped; the pool will then reclaim the thread (sketched below).
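A rough sketch of steps 2.1-2.4 combined with the timeout flag, using invokeAll with a timeout in place of an explicit latch; the class name, the two-part split, and the pretend work are all illustrative.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Parallel job parts go to the pool; the request thread waits on invokeAll
// with a timeout, then flips a flag so still-running parts can stop early.
class JobRunner {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    public String handle(long timeoutMillis) throws InterruptedException {
        final AtomicBoolean cancelled = new AtomicBoolean(false);
        List<Callable<String>> parts = Arrays.asList(
                () -> part("A", cancelled),
                () -> part("B", cancelled));
        // blocks until all parts finish or the timeout expires (unfinished parts are cancelled)
        List<Future<String>> results = pool.invokeAll(parts, timeoutMillis, TimeUnit.MILLISECONDS);
        cancelled.set(true);                       // tell any still-running part to stop
        StringBuilder response = new StringBuilder();
        for (Future<String> f : results) {
            response.append(f.isCancelled() ? "timeout " : safeGet(f));
        }
        return response.toString();
    }

    private String part(String name, AtomicBoolean cancelled) throws InterruptedException {
        for (int step = 0; step < 100; step++) {   // pretend work done in small slices
            if (cancelled.get()) {
                return name + " stopped ";          // deadline passed, give up early
            }
            Thread.sleep(5);
        }
        return name + " done ";
    }

    private String safeGet(Future<String> f) {
        try {
            return f.get();
        } catch (Exception e) {
            return "failed ";
        }
    }
}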
How are you communicating to the client?
I recommend you create an object to represent each job which holds job parameters and the socket (or other communication mechanism) to reach the client. The thread pool will then send the response to unblock the client at the end of job processing.
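A minimal sketch of that suggestion, assuming a plain TCP socket per client and placeholder job parameters: the Job object carries both, and whichever pool thread runs it writes the response back on the socket, which is what unblocks the waiting client.

import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

// One Job per request: holds the parameters and the client's socket, and
// writes the response back on that socket when processing finishes.
class Job implements Runnable {
    private final String parameters;
    private final Socket client;

    Job(String parameters, Socket client) {
        this.parameters = parameters;
        this.client = client;
    }

    @Override
    public void run() {
        String response = "done: " + parameters;           // real work goes here
        try (PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println(response);                          // unblocks the waiting client
        } catch (IOException e) {
            // client went away; nothing left to unblock
        } finally {
            try { client.close(); } catch (IOException ignored) { }
        }
    }
}

// submission side: pool.submit(new Job(params, clientSocket));  where pool is an ExecutorService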
The timeouts will be somewhat tricky and will have hidden gotchas, but the basic design seems straightforward: write a class that takes a Socket in its constructor. On socket.accept() we just instantiate a new socket-processing object, with great foresight and planning for scalability (or treat it as a bench-test experiment). The socket-processing class then goes off to do the data processing; when it returns, you have some sort of boolean or numeric status (a handy place for null, by the way) and it either writes the success to the socket's OutputStream, or informs the client of a timeout, or whatever your business needs are.
If you have to have a scalable, effective design for long-running heavy haulers, go directly to NIO... hand-coded one-off solutions like the one I describe probably won't scale well, but they provide a fundamental conceptual basis for a code-correct NIO design.
( sorry folks, I think directly in code - design patterns are then applied to the code after it is working. What does not hold up gets reworked then, not before )