How to implement blocking request-reply using Java concurrency primitives?

My system consists of a "proxy" class that receives "request" packets, marshals them and sends them over the network to a server, which unmarshals them, processes, and returns some "response packet".
My "submit" method on the proxy side should block until a reply is received to the request (packets have ids for identification and referencing purposes) or until a timeout is reached.
If I was building this in early versions of Java, I would likely implement in my proxy a collection of "pending messages ids", where I would submit a message, and wait() on the corresponding id (with a timeout). When a reply was received, the handling thread would notify() on the corresponding id.
Is there a better way to achieve this using an existing library class, perhaps in java.util.concurrency?
If I went with the solution described above, what is the correct way to deal with the potential race condition where a reply arrives before wait() is invoked?

The simple way would be to have a Callable that talks to the server and returns the Response.
// submit() does not block
Future<Response> futureResponse = executorService.submit(makeCallable(request));
// get() blocks until the result is available
Response response = futureResponse.get();
Managing the request queue, assigning threads to the requests, and notifying the client code is all hidden away by the utility classes.
The level of concurrency is controlled by the executor service.
Every network call blocks one thread in there.
For better concurrency, one could look into using java.nio as well (but since you are talking to the same server for all requests, a fixed number of concurrent connections, maybe even just one, seems sufficient).
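A minimal sketch of this approach, assuming a hypothetical Connection.sendAndReceive(Request) that performs the blocking network round trip; the timed get() also covers the timeout requirement from the question:
import java.util.concurrent.*;

ExecutorService executorService = Executors.newFixedThreadPool(4);

Callable<Response> makeCallable(Request request) {
    return () -> connection.sendAndReceive(request); // blocks one pool thread
}

Response submit(Request request) throws Exception {
    Future<Response> futureResponse = executorService.submit(makeCallable(request));
    try {
        return futureResponse.get(5, TimeUnit.SECONDS); // caller blocks here, with timeout
    } catch (TimeoutException e) {
        futureResponse.cancel(true); // give up on this reply
        throw e;
    }
}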

Related

gRPC Concurrency for Stubs

In gRPC I would like some more information on the way the server handles requests.
Are requests executed sequentially, or does the server spawn a new thread for each request and execute them in parallel? Is there a way to modify this behavior? I understand that in client-streaming RPCs, message order is guaranteed.
If I send Request A followed by Request B to the same RPC, is it guaranteed that A will be executed first, before B begins processing? Or is each handled on its own thread and executed in parallel, with no guarantee that A finishes before B?
Ideally I would like to send a request to the server, have the server acknowledge receipt, add the request to a queue to be processed sequentially, and return a response once it has been processed. An approach I was exploring is to use an external task queue (like RabbitMQ) to queue the work done by the service, but I want to know if there is a better approach.
Also -- on a somewhat related note -- does gRPC have a native retry counter mechanism? I have a particularly error-prone RPC that may have to retry up to 3 times (with an arbitrary delay between retries) before it is successful. This is something that could be implemented with RabbitMQ as well.
grpc-java passes RPCs to the service using the Executor provided by ServerBuilder.executor(Executor), or a cached thread pool if no executor is provided.
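For example (a hedged sketch; MyServiceImpl stands in for your generated service implementation):
import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.util.concurrent.Executors;

Server server = ServerBuilder.forPort(50051)
        .executor(Executors.newFixedThreadPool(16)) // RPC callbacks run on this pool
        .addService(new MyServiceImpl())
        .build()
        .start();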
There is no ordering between simultaneous RPCs. RPCs can arrive in any order.
You could use a server-streaming RPC to allow the server to respond twice, once for acknowledgement and once for completion. You can use a oneof in the response message to allow sending the two different responses.
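A sketch of that two-response pattern on the server side, assuming a JobResponse message whose oneof carries either an ack or a result, and a hypothetical jobQueue that processes jobs sequentially:
@Override
public void submitJob(JobRequest request, StreamObserver<JobResponse> responseObserver) {
    // First response: acknowledge receipt immediately.
    responseObserver.onNext(JobResponse.newBuilder()
            .setAck(Ack.getDefaultInstance())
            .build());
    // Second response: sent once the sequential queue has processed the job.
    jobQueue.submit(request, result -> {
        responseObserver.onNext(JobResponse.newBuilder().setResult(result).build());
        responseObserver.onCompleted();
    });
}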
grpc-java has experimental retry support. gRFC A6 describes the support. The configuration is delivered to the client via the service config. Retries are disabled by default, so overall you would want something like channelBuilder.defaultServiceConfig(serviceConfig).enableRetry(). You can also reference the hedging example, which is very similar to retries.
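A sketch of enabling retries on the client, with the service config expressed as the Map form of the JSON format from gRFC A6 (the service name and backoff values here are placeholders):
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.*;

Map<String, Object> retryPolicy = new HashMap<>();
retryPolicy.put("maxAttempts", 3.0); // numbers are given as Double, as in parsed JSON
retryPolicy.put("initialBackoff", "0.5s");
retryPolicy.put("maxBackoff", "10s");
retryPolicy.put("backoffMultiplier", 2.0);
retryPolicy.put("retryableStatusCodes", Arrays.asList("UNAVAILABLE"));

Map<String, Object> methodConfig = new HashMap<>();
methodConfig.put("name",
        Arrays.asList(Collections.singletonMap("service", "my.ErrorProneService")));
methodConfig.put("retryPolicy", retryPolicy);

Map<String, Object> serviceConfig =
        Collections.singletonMap("methodConfig", Arrays.asList(methodConfig));

ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
        .defaultServiceConfig(serviceConfig)
        .enableRetry()
        .build();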

Calling multiple asynchronous requests to Thrift Service from a single client

For my code it is necessary to make multiple asynchronous requests from the same client to a Thrift service.
So I am using a non-blocking server and asynchronous clients (see the code below) to allow asynchronous calls, meaning that execution continues after the first call of the checkForPrime() method, which I call on the Thrift service.
This seems to work when executing only one call. If I make a second asynchronous call right after the first, I get the following error:
Client is currently executing another method: Interfaces.PrimeCheck$AsyncClient$checkForPrime_call
    at org.apache.thrift.async.TAsyncClient.checkReady(TAsyncClient.java:78)
    at Interfaces.PrimeCheck$AsyncClient.checkForPrime(PrimeCheck.java:110)
    at ThriftClient.main(ThriftClient.java:40)
I need a smart solution to allow multiple calls, but it has to be from the same client. Any suggestions are welcome. Please don't hesitate to ask if you need further information.
org.apache.thrift.protocol.TBinaryProtocol.Factory factory = new TBinaryProtocol.Factory();
TAsyncClientManager manager;
TNonblockingSocket socket;
AsyncClient client;
try {
    manager = new TAsyncClientManager();
    socket = new TNonblockingSocket("localhost", 4711);
    client = new AsyncClient(factory, manager, socket);
    client.checkForPrime(5, resultHandler);
    client.checkForPrime(7, resultHandler); // fails: the first call is still pending
    Thread.sleep(100);
} catch (IOException e2) ....
to allow asynchronous calls, meaning that execution continues after the first call of the checkForPrime() method,
Not quite. Asynchronous only means that the call is completed asynchronously and you don't have to wait for the completion until necessary.
It does not imply that you can use the same client to issue another parallel request. Some implementations may support this, but the current implementation does not.
Multiple outstanding calls require some bookkeeping; otherwise the responses get mixed up:
call 1 made --->
call 2 made --->
response arrives <----
response arrives <----
Now, which call does the first response belong to: call 1 or call 2? Hard to say; it could be either one. Without more information, a multi-call client would have a hard time correlating the data.
The TAsyncClientManager handles that by restricting each client to one pending call at a time.
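One workaround, sketched under the assumption that opening a second connection is acceptable: share one TAsyncClientManager across several AsyncClient instances, each with its own socket and at most one outstanding call.
TAsyncClientManager manager = new TAsyncClientManager(); // one selector thread, shared
TBinaryProtocol.Factory factory = new TBinaryProtocol.Factory();

AsyncClient client1 = new AsyncClient(factory, manager,
        new TNonblockingSocket("localhost", 4711));
AsyncClient client2 = new AsyncClient(factory, manager,
        new TNonblockingSocket("localhost", 4711));

client1.checkForPrime(5, resultHandler); // pending on client1
client2.checkForPrime(7, resultHandler); // pending on client2, so no conflict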
it is necessary to make multiple asynchronous requests from the same client
Why do you think it is necessary?
The client is only a mediator, a means of transport. If you send two emails, do you require that the emails follow the exact same path across the interwebs? No, because the relevant information the other side (the server) should rely on is in the message content, not at the transport level.
If, however, you need to store data at the client, you should store it in a dedicated place outside of the client instance. Either way, the fact that we deal with one or two client instances should not really matter.

A multi-agent system that uses Producer-Consumer pattern?

I am trying to implement a Producer-Consumer pattern that uses multi-agents as workers instead of multi-threads.
As I understand, a typical multi-threaded implementation uses a BlockingQueue where one Producer thread puts information on the Queue and have multiple Consumer threads pull the data and execute some processing functions.
So following the same logic, my design will use a Producer agent that generates data and sends it to multiple Consumer agents. My first thought was to use a shared BlockingQueue between the Consumer agents and have the agents retrieve data from the queue. But I don't know whether this is easy to do, because I don't think agents have any shared memory, and it is much simpler to send the information directly to the Consumer agents as ACL messages.
This is important to consider because my multi-agent design will process a lot of data. So my question is: in JADE, what happens if I send too many ACL messages to a single agent? Will the agent ignore the extra messages?
This post has an answer that suggests: "...Within the JADE framework, agents feature an 'inbox' for ACLMessages, basically a BlockingQueue object that contains a list of received messages. The agent is able to observe its own list and treat the messages as its lifecycle proceeds. Containers do not feature this ability...". Is that statement correct? If it is true, the other messages just wait in the queue, and it would be ideal for my design to send information directly to the Consumer agents; but I didn't see any BlockingQueue on the ACLMessage class.
Yes, the messages will be queued, and the agent will not ignore them.
ACLMessage is just the message object that is sent between agents. Each agent has its own message queue (jade.core.MessageQueue) and several methods for handling communication.
If you check the Agent class documentation, you can find methods like:
receive() - non-blocking receive; returns the first message in the queue, or null if the queue is empty
receive(MessageTemplate pattern) - behaves like the previous one, but you can also specify a pattern for the message, such as a specific sender AID or conversation ID, or combinations of these
blockingReceive() - blocking receive; blocks the agent until a message appears in the queue
blockingReceive(MessageTemplate pattern) - blocking receive with a pattern
There are also blocking-receive variants where you can set a timeout.
It's also important to mention that if you define your agent logic in a Behaviour class, you can block only that behaviour instead of blocking the entire agent:
ACLMessage msg = myAgent.receive();
if (msg != null) {
    // your logic
} else {
    block(); // suspend only this behaviour
}
The difference is that the block() method inside a behaviour just marks the behaviour as blocked and removes it from the agent's active behaviour pool (it is added back to the active pool when a message is received or when the behaviour is restarted via restart()), allowing the agent's other behaviours to execute; blockingReceive() blocks your entire agent and all of its behaviours until a message is received.
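Putting the pieces together, a sketch of a consumer agent whose logic lives in a CyclicBehaviour (the processing step is left as a placeholder):
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

public class ConsumerAgent extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg != null) {
                    // process msg.getContent() here
                } else {
                    block(); // suspend only this behaviour until a message arrives
                }
            }
        });
    }
}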

Java - networking - Best Practice - mixed synchronous / asynchronous commands

I'm developing a small client-server program in Java.
The client and the server are connected over one TCP connection. Most parts of the communication are asynchronous (they can happen at any time), but some parts I want to be synchronous (like ACKs for a sent command).
I use a thread that reads commands from the socket's InputStream and raises an onCommand() event. The command itself is processed via the Command design pattern.
What would be a best-practice approach in Java to enable waiting for an ACK without missing other commands that could arrive at the same time?
con.sendPacket(new Packet("ABC"));
// wait for ABC_ACK
edit1
Think of it like an FTP connection, but with both data and control commands on the same connection. I want to catch the response to a control command while the data flow continues in the background.
edit2
Everything is sent in blocks to enable multiple (different) transmissions over the same TCP connection (multiplexing).
Block:
1 byte - block's type
2 bytes - block's payload length
n bytes - block's payload
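For reference, a sketch of reading one such block off the stream (Block is a hypothetical holder class):
import java.io.DataInputStream;
import java.io.IOException;

Block readBlock(DataInputStream in) throws IOException {
    byte type = in.readByte();            // 1 byte: block type
    int length = in.readUnsignedShort();  // 2 bytes: payload length
    byte[] payload = new byte[length];
    in.readFully(payload);                // n bytes: payload
    return new Block(type, payload);
}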
In principle, you need a registry of blocked threads (or better, of the locks they are waiting on), keyed by some identifier which will be sent by the remote side.
For asynchronous operation, you simply send the message and proceed.
For synchronous operation, after sending the message, the sending thread (or the thread which initiated the operation) creates a lock object, adds it to the registry under some key, and then waits on the lock until notified.
The reading thread, when it receives an answer, looks up the lock object in the registry, attaches the answer to it, and calls notify(). Then it goes on to read the next input.
The hard work here is getting the synchronization right, to avoid deadlocks as well as missed notifications (when the answer comes back before we have added ourselves to the registry).
I did something like this when I implemented the remote-method-calling protocol for our Fencing applet. In principle, RMI works the same way, just without the asynchronous messages.
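A sketch of this registry using a CompletableFuture per request instead of raw wait()/notify(); it sidesteps the missed-notification race because a future completed before get() simply returns immediately (Block, send() and dispatchAsync() are hypothetical):
import java.util.concurrent.*;

private final ConcurrentMap<String, CompletableFuture<Block>> pending =
        new ConcurrentHashMap<>();

// Sending side: register before sending, so an early reply cannot be missed.
Block sendSynchronously(Block request, long timeoutMs) throws Exception {
    CompletableFuture<Block> future = new CompletableFuture<>();
    pending.put(request.id(), future);
    try {
        send(request); // write to the socket
        return future.get(timeoutMs, TimeUnit.MILLISECONDS);
    } finally {
        pending.remove(request.id());
    }
}

// Reading thread: complete the matching future, or treat as an asynchronous command.
void onBlockReceived(Block block) {
    CompletableFuture<Block> future = pending.get(block.id());
    if (future != null) {
        future.complete(block); // wakes the blocked sender
    } else {
        dispatchAsync(block);
    }
}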
@Paulo's solution is one I have used before. However, there may be a simpler one.
Say you don't have a background thread reading results from the connection. What you can do instead is use the current thread to read any results.
// Asynchronous call
conn.sendMessage("Async-request");
// server sends no reply.
// Synchronous call.
conn.sendMessage("Sync-request");
String reply = conn.readMessage();

Stateless Blocking Server Design

A little help please.
I am designing a stateless server that will have the following functionality:
Client submits a job to the server.
Client is blocked while the server tries to perform the job.
The server will spawn one or multiple threads to perform the job.
The job either finishes, times out or fails.
The appropriate response (based on the outcome) is created, the client is unblocked and the response is handed off to the client.
Here is what I have thought of so far.
1. Client submits a job to the server.
2. The server assigns an ID to the job, places the job on a queue, and then places the client on another queue (where it will be blocked).
3. Have a thread pool that will execute the job, fetch the result, and create the appropriate response.
4. Based on the ID, pick the client out of the queue (thereby unblocking it), give it the response, and send it off.
Steps 1, 3 and 4 seem quite straightforward; however, any ideas about how to put the client in a queue and then block it (step 2)? Also, any pointers that would help me design this puppy would be appreciated.
Cheers
Why do you need to block the client? It seems it would be easier to return (almost) immediately (after performing initial validation, if any) and give the client a unique ID for the job. The client would then be able to either poll using said ID or, perhaps, provide a callback.
Blocking means you're holding on to a socket, which obviously limits the upper number of clients you can serve simultaneously. If that's not a concern for your scenario and you absolutely need to block (perhaps you have no control over the client code and can't make it poll?), there's little sense in spawning threads to perform the job unless you can actually separate it into parallel tasks. The only "queue" in that case would be the one held by the common thread pool. The workflow would basically be:
1. Create a thread pool (such as a ThreadPoolExecutor).
2. For each client request:
2.1. If there are parts of the job that you can execute in parallel, delegate them to the pool.
2.2. And/or do them in the current thread.
2.3. Wait until the pooled job parts complete (if applicable).
2.4. Return the results to the client.
3. Shut down the thread pool.
No IDs are needed per se; though you may need to use some sort of latch for 2.1 / 2.3 above.
Timeouts may be a tad tricky. If you need them to be more or less precise, you'll have to keep your main thread (the one that received the client request) free from work, have it signal the submitted job parts (by flipping a flag) when the timeout is reached, and return immediately. The job parts have to check that flag periodically and terminate once it's flipped; the pool will then reclaim their threads.
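A sketch of steps 2.1-2.4 with a CountDownLatch as the "latch" (Request, Response, splitJob() and buildResponse() are hypothetical, and the 30-second timeout is a placeholder):
import java.util.List;
import java.util.concurrent.*;

ExecutorService pool = Executors.newFixedThreadPool(8);

Response handle(Request request) throws InterruptedException {
    List<Runnable> parts = splitJob(request);            // 2.1: parallelisable parts
    CountDownLatch done = new CountDownLatch(parts.size());
    for (Runnable part : parts) {
        pool.submit(() -> {
            try {
                part.run();
            } finally {
                done.countDown();                        // mark this part finished
            }
        });
    }
    if (!done.await(30, TimeUnit.SECONDS)) {             // 2.3: wait, with timeout
        return Response.timedOut();                      // hypothetical failure response
    }
    return buildResponse(request);                       // 2.4: success response
}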
How are you communicating with the client?
I recommend you create an object to represent each job which holds job parameters and the socket (or other communication mechanism) to reach the client. The thread pool will then send the response to unblock the client at the end of job processing.
The timeouts will be somewhat tricky and will have hidden gotchas, but the basic design seems straightforward: write a class that takes a Socket in the constructor. On socket.accept(), instantiate a new socket-processing object (with great foresight and planning for scalability, or just as a bench-test experiment). The socket-processing class then runs the data-processing work; when it returns, you have some sort of boolean or numeric status (a handy place for null, by the way) and either write the success to the socket's OutputStream or inform the client of a timeout, or whatever your business needs are.
If you have to have a scalable, effective design for long-running heavy haulers, go directly to nio ... hand-coded one-off solutions like I describe probably won't scale well, but they would provide a fundamental conceptual basis for an nio design of code-correct work.
(Sorry folks, I think directly in code; design patterns are then applied to the code after it is working. What does not hold up gets reworked then, not before.)
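A sketch of that hand-coded accept loop (handleJob() is a hypothetical stand-in for the data-processing step; the try-with-resources over the socket needs Java 9+):
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

try (ServerSocket server = new ServerSocket(4711)) {
    ExecutorService pool = Executors.newCachedThreadPool();
    while (true) {
        Socket socket = server.accept(); // one handler per connection
        pool.submit(() -> {
            try (socket;
                 DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
                boolean ok = handleJob(socket.getInputStream());
                out.writeUTF(ok ? "OK" : "TIMEOUT");
            } catch (IOException e) {
                // log and drop this connection
            }
        });
    }
}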
