Event Driven server using Java NIO

I'm trying to wrap my head around building an asynchronous (non-blocking) HTTP server using Java NIO. I currently have a thread-pool implementation and would like to convert it to an event-driven design using a single thread.
How exactly does an Event Driven server work?
Do we still need threads?
I've been reading about Java channels, buffers and selectors.
So after I create a ServerSocketChannel and a Selector and start listening for requests, do I need to hand each request over to other threads so that they can process and serve it? If so, how is that any different from a thread-pool implementation?
And if I don't create more threads to process the requests, how can the same thread keep listening for new requests while also processing them? I'm talking SCALABLE, say 1 million requests in total and 1000 coming in concurrently.

I've been reading about Java channels, buffers and selectors. So after I create a ServerSocketChannel and a Selector and start listening for requests, do I need to hand each request over to other threads so that they can process and serve it?
No, the idea is that you process data as it is available, not necessarily using threads.
The complication comes from the need to handle data as it arrives. For instance, you might not get a full request at once. In that case, you need to buffer it somewhere until you have the full request, or process it piecemeal.
Once you have got the request, you need to send the response. Again, the whole response cannot normally be sent at once. You send as much as you can without blocking, then use the selector to wait until you can send more (or another event happens, such as another request coming in).
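To make that concrete, here is a minimal single-threaded selector loop in the spirit of the above. It is only a sketch: the request-completeness check and the canned response are placeholders, not a real HTTP parser.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class SingleThreadedServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                              // block until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept(); // new connection
                    if (client != null) {
                        client.configureBlocking(false);    // never block on this client
                        client.register(selector, SelectionKey.OP_READ, ByteBuffer.allocate(4096));
                    }
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = (ByteBuffer) key.attachment();
                    int n = client.read(buf);               // read whatever is available right now
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else if (requestComplete(buf)) {
                        // whole request buffered: attach a response and switch to write interest
                        byte[] resp = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
                                .getBytes(StandardCharsets.US_ASCII);
                        key.attach(ByteBuffer.wrap(resp));
                        key.interestOps(SelectionKey.OP_WRITE);
                    }
                } else if (key.isWritable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer response = (ByteBuffer) key.attachment();
                    client.write(response);                 // write as much as the socket will take
                    if (!response.hasRemaining()) {
                        client.close();                     // response fully sent
                    }
                }
            }
        }
    }

    // placeholder: a real server would parse HTTP headers to decide this
    private static boolean requestComplete(ByteBuffer buf) {
        String soFar = new String(buf.array(), 0, buf.position(), StandardCharsets.US_ASCII);
        return soFar.contains("\r\n\r\n");
    }
}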

Related

Handle high number of traditional synchronous/blocking HTTP client requests with few threads Java?

I am working with Java. Another software developer has provided me his code performing synchronous HTTP calls and is responsible for maintaining it - he is using com.google.api.client.http. Updating his code to use an asynchronous HTTP client with a callback is not an available option, and I can't contact the developer to make changes to it. But I still want the efficient asynchronous behaviour of attaching a callback to an HTTP request.
(I am working in Spring Boot and my system is built using RabbitMQ AMQP if it has any effect.)
The simple HTTP GET (it is actually an API call) is performed as follows:
HttpResponse<String> response = httpClient.send(request, BodyHandlers.ofString());
The server I'm communicating with over HTTP takes some time to reply... say 3-4 seconds. So my thread of execution is blocked for that duration, waiting for a reply. This scales very poorly: my thread isn't doing anything, it's just waiting for a reply to arrive - that is very wasteful.
Sure, I can increase the number of threads performing this call if I want to send more HTTP requests concurrently, i.e. I can scale that way, but this doesn't sound efficient or correct. If possible, I would really like to get a better ratio than one thread waiting for one HTTP request in this situation.
In other words, I want to send thousands of HTTP requests with 2-3 available threads and handle the response once it arrives; I don't want to incur any significant delay between the execution of each request.
I was wondering: how can I achieve a more scalable solution? How can I handle thousands of these HTTP calls per thread? What should I be looking at, or do I just have no options and I am asking for the impossible?
EDIT: I guess this is another way to phrase my problem. Assume I have 1000 requests to be sent right now, each taking 3-4 seconds, but only 4-5 available threads of execution on which to send them. I would like to send them all at the same time, but that's not possible; if I manage to send them ALL within the span of 0.5s or less and handle their responses via some callback or something like that, I would consider that a great solution. But I can't switch to an asynchronous HTTP client library.
Using an asynchronous HTTP client is not an available option - I can't change my HTTP client library.
In that case, I think you are stuck with non-scalable synchronous behavior on the client side.
The only work-around I can think of is to run your requests as tasks in an ExecutorService with a bounded thread pool. That will limit the number of threads that are used ... but will also limit the number of simultaneous HTTP requests in play. This is replacing one scaling problem with another one: you are effectively rate-limiting your HTTP requests.
But the flip-side is that launching too many simultaneous HTTP requests is liable to overwhelm the target service(s) and / or the client or server-side network links. From that perspective, client-side rate limiting could be a good thing.
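A minimal sketch of that work-around, where callSynchronously() is just a stand-in for the existing blocking client code and the pool size mirrors the 4-5 threads from the question:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedClientPool {

    // placeholder for the colleague's synchronous call; in reality this is the
    // blocking com.google.api.client.http code that cannot be changed
    static String callSynchronously(String request) throws InterruptedException {
        Thread.sleep(3500);            // simulates the 3-4 second server response time
        return "response for " + request;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(5);   // 4-5 worker threads

        List<Future<String>> pending = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            final String request = "request-" + i;
            // submit() returns immediately; the blocking call runs on a pool thread
            pending.add(pool.submit(() -> callSynchronously(request)));
        }

        for (Future<String> f : pending) {
            System.out.println(f.get());   // blocks until that particular call completes
        }
        pool.shutdown();
    }
}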
Assume I have 1000 requests to be sent right now, each taking 3-4 seconds, but only 4-5 available threads of execution on which to send them. I would like to send them all at the same time, but that's not possible; if I manage to send them ALL within the span of 0.5s or less and handle their responses via some callback or something like that, I would consider that a great solution. But I can't switch to an asynchronous HTTP client.
The only way you are going to be able to run > N requests at the same time with N threads is to use an asynchronous client. Period.
And "... callback or something like that ...". That's a feature you will only get with an asynchronous client. (Or more precisely, you can only get real asynchronous behavior via callbacks if there is a real asynchronous client library under the hood.)
So the solution amounts to sending the HTTP requests in a staggered manner, i.e. with some delay between one request and another, where the delay is dictated by the number of available threads? If the delay between requests is not significant, I could find that acceptable, but I assume there would be a rather large delay between requests, since each thread has to wait for a previous one to finish (3-4 s)? In that case, it's not what I want.
With my proposed work-around, the delay between any two requests is difficult to quantify. However, if you are trying to submit a large number of requests at the same time and wait for all of the responses, then the delay between individual requests is not relevant. For that scenario, the relevant measure is the time taken to complete all of the requests. Assuming that nothing else is submitting to the executor, the time taken to complete the requests will be approximately:
nos_requests * average_request_time / nos_worker_threads
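With the numbers from the question - roughly 1000 requests at 3-4 seconds each, spread over 4-5 worker threads - that comes to about 1000 * 3.5 / 5 ≈ 700 seconds to get through the whole batch.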
The other thing to note is that if you did manage to submit a huge number of requests simultaneously, the server delay of 3-4s per request is liable to increase. The server will only have the capacity to process a certain number of requests per second. If that capacity is exceeded, requests will either be delayed or dropped.
But what if there are no other options?
I suppose you could consider changing your server API so that you can submit multiple "requests" in a single HTTP request.
I think that the real problem here is there is a mismatch between what the server API was designed to support, and what you are trying to do with it.
And there is definitely a problem with this:
Another software developer has provided me his code performing synchronous HTTP calls and is responsible of maintaining it - he is using com.google.api.client.http. Updating his code to use an asynchronous HTTP client with a callback is not an available option, and I can't contact the developer to make changes to it.
Perhaps you need to "bite the bullet" and stop using his code. Work out what it is doing and replace it with your own implementation.
There is no magic pixie dust that will give scalable performance from a synchronous HTTP client. Period.

WebSocket async send can result in blocked send once queue filled

I have a pretty simple Jetty-based WebSocket server, responsible for streaming small binary messages to connected clients.
To avoid any blocking on the server side, I was using the sendBytesByFuture method.
After increasing the load from 2 clients to 20, they stopped receiving any data. During troubleshooting I decided to switch to the synchronous send method and finally got a potential reason:
java.lang.IllegalStateException: Blocking message pending 10000 for BLOCKING
at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.lockMsg(WebSocketRemoteEndpoint.java:130)
at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.sendBytes(WebSocketRemoteEndpoint.java:244)
The clients are not doing any calculations upon receiving data, so they can't be slow joiners.
So I'm wondering what I can do to solve this problem.
(using Jetty 9.2.3)
If the error message occurs from a synchronous send, then you have multiple threads attempting to send messages on the same RemoteEndpoint - something that isn't allowed per the protocol. Only 1 message at a time may be sent. (There is essentially no queue for synchronous sends)
If the error message occurs from an asynchronous send, then that means you have messages sitting in a queue waiting to be sent, yet you are still attempting to write more async messages.
Try not to mix synchronous and asynchronous at the same time (it would be very easy to accidentally produce output that becomes an invalid protocol stream).
Using Java Futures:
You'll want to use the Future objects returned by the sendBytesByFuture() and sendStringByFuture() methods to verify whether the message was actually sent (it could have failed with an error), and if enough unsent messages start to queue up, back off on sending more until the remote endpoint can catch up.
Standard Future behavior and techniques apply here.
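As a rough illustration of that back-off idea (the class name, the MAX_PENDING limit and the drop-on-backlog policy are arbitrary choices here, and a single sending thread is assumed):

import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Future;

import org.eclipse.jetty.websocket.api.RemoteEndpoint;

public class ThrottledSender {
    private static final int MAX_PENDING = 100;              // arbitrary back-off threshold
    private final Deque<Future<Void>> pending = new ArrayDeque<>();
    private final RemoteEndpoint remote;

    public ThrottledSender(RemoteEndpoint remote) {
        this.remote = remote;
    }

    public boolean send(ByteBuffer message) {
        // drop futures for sends that have already completed
        while (!pending.isEmpty() && pending.peekFirst().isDone()) {
            pending.pollFirst();
        }
        if (pending.size() >= MAX_PENDING) {
            return false;              // back off: caller can retry, queue, or drop the message
        }
        pending.addLast(remote.sendBytesByFuture(message));
        return true;
    }
}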
Using Jetty Callbacks:
There is also the WriteCallback behavior available in the sendBytes(ByteBuffer,WriteCallback) and sendString(String,WriteCallback) methods, which calls your own code on success or error, at which point you can put some logic around what you send (limit it, send it more slowly, queue it, filter it, drop some messages, prioritize messages, etc. - whatever you need).
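A similar sketch using the callback variant; again the class name and the in-flight limit are made up, and the drop policy could just as well be a queue or a priority scheme:

import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

import org.eclipse.jetty.websocket.api.RemoteEndpoint;
import org.eclipse.jetty.websocket.api.WriteCallback;

public class CallbackSender {
    private static final int MAX_IN_FLIGHT = 100;             // arbitrary limit for illustration
    private final AtomicInteger inFlight = new AtomicInteger();
    private final RemoteEndpoint remote;

    public CallbackSender(RemoteEndpoint remote) {
        this.remote = remote;
    }

    public void send(ByteBuffer message) {
        if (inFlight.get() >= MAX_IN_FLIGHT) {
            return;                                // drop (or queue/filter/prioritize) when backed up
        }
        inFlight.incrementAndGet();
        remote.sendBytes(message, new WriteCallback() {
            @Override
            public void writeSuccess() {
                inFlight.decrementAndGet();
            }

            @Override
            public void writeFailed(Throwable cause) {
                inFlight.decrementAndGet();        // e.g. log the failure and maybe close the session
            }
        });
    }
}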
Using Blocking:
Or you can just use blocking sends to never have too many messages queue up.

Multiple Threads Sharing One Socket

I have a scenario where multiple threads need to communicate with an external system over one socket. Each thread's message can be identified by a unique id.
In this scenario, where all threads share the same socket, can I use BlockingQueues? Since the threads both produce requests and consume responses, can I have a singleton component, say "Socketer", that holds the socket and has two BlockingQueues (incoming and outgoing)? Any message on the outgoing queue is written to the socket, and any message read from the socket is put on the incoming queue. The Socketer also maintains a hashtable of all the producer threads, and as it reads a response, it identifies the corresponding producer and hands the response over to it.
Please suggest whether this is the right design approach, or advise on improvements. My threads are actually web services and I am in a Spring environment.
Thanks
I don't see why you need the hash table, but you do need a response queue per thread. You could embed the correct response queue into the request message.
But are you sure you can't open multiple connections to the external system? It would make your life a lot simpler.
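For illustration, embedding the reply queue in the request could look roughly like this (Request, Response and the "outgoing" queue are made-up names; each request carries a one-slot queue its reply is delivered to, and a per-thread queue reused across requests would work the same way):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class Request {
    final long id;
    final byte[] payload;
    final BlockingQueue<Response> replyTo = new ArrayBlockingQueue<>(1);  // where the reply goes

    Request(long id, byte[] payload) {
        this.id = id;
        this.payload = payload;
    }
}

class Response {
    final long id;
    final byte[] payload;

    Response(long id, byte[] payload) {
        this.id = id;
        this.payload = payload;
    }
}

// Producer (web-service) thread:
//   Request req = new Request(nextId(), body);
//   outgoing.put(req);                                       // hand it to the socket owner
//   Response resp = req.replyTo.poll(30, TimeUnit.SECONDS);  // block until the reply arrives
//
// The socket-owning component takes requests off "outgoing", writes them to the socket,
// remembers the replyTo queue for each in-flight id, and offers each response it reads
// to the queue of the matching request.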

Java - networking - Best Practice - mixed synchronous / asynchronous commands

I'm developing a small client-server program in Java.
The client and the server are connected over one tcp-connection. Most parts of the communication are asynchronous (can happen at any time) but some parts I want to be synchronous (like ACKs for a sent command).
I use a Thread that reads commands from the socket's InputStream and raises an onCommand() event. The command itself is processed using the Command design pattern.
What would be a best-practice approach (in Java) to enable waiting for an ACK without missing other commands that could arrive at the same time?
con.sendPacket(new Packet("ABC"));
// wait for ABC_ACK
edit1
Think of it like an FTP connection, except that both data and control commands are on the same connection. I want to catch the response to a control command while a data transfer is running in the background.
edit2
Everything is sent in blocks to enable multiple (different) transmissions over the same TCP connection (multiplexing).
Block:
1 byte - block type
2 bytes - block payload length
n bytes - block payload
In principle, you need a registry of blocked threads (or better, the locks on which they are waiting), keyed with some identifier which will be sent by the remote side.
For asynchronous operation, you simply send the message and proceed.
For synchronous operation, after sending the message, your sending thread (or the thread which initiated this) creates a lock object, adds it to the registry under some key, and then waits on the lock until notified.
The reading thread, when it receives some answer, looks in the registry for the lock object, adds the answer to it, and calls notify(). Then it goes on to read the next input.
The hard work here is the proper synchronization to avoid deadlocks, as well as not missing a notification (because the answer comes back before we added ourselves to the registry).
I did something like this when I implemented the remote method calling protocol for our Fencing-applet. In principle RMI works the same way, just without the asynchronous messages.
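A rough sketch of such a registry (the class and method names are invented here, and the send Runnable stands in for actually writing the block to the socket):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// PendingReply plays the role of the "lock object": the sender waits on it,
// the reading thread fills it in and notifies.
class PendingReply {
    private Object answer;                        // guarded by this

    synchronized Object await(long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (answer == null) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return null;                      // timed out
            }
            wait(remaining);
        }
        return answer;
    }

    synchronized void complete(Object answer) {
        this.answer = answer;
        notifyAll();
    }
}

class Registry {
    private final Map<String, PendingReply> pending = new ConcurrentHashMap<>();

    // Sender side: register BEFORE sending so a fast reply cannot be missed.
    Object sendAndWait(String id, Runnable send, long timeoutMillis) throws InterruptedException {
        PendingReply reply = new PendingReply();
        pending.put(id, reply);
        try {
            send.run();                           // actually write the block to the socket
            return reply.await(timeoutMillis);
        } finally {
            pending.remove(id);
        }
    }

    // Reader thread: called when an answer block with this id arrives.
    void onAnswer(String id, Object answer) {
        PendingReply reply = pending.get(id);
        if (reply != null) {
            reply.complete(answer);
        }
        // else: nobody is waiting (late or unsolicited answer) - ignore or log it
    }
}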
@Paulo's solution is one I have used before. However, there may be a simpler solution.
Say you don't have a background thread reading results from the connection. What you can do instead is use the current thread to read any results.
// Asynchronous call
conn.sendMessage("Async-request");
// server sends no reply.
// Synchronous call.
conn.sendMessage("Sync-request");
String reply = conn.readMessage();

How to implement blocking request-reply using Java concurrency primitives?

My system consists of a "proxy" class that receives "request" packets, marshals them and sends them over the network to a server, which unmarshals them, processes them, and returns some "response" packet.
My "submit" method on the proxy side should block until a reply is received to the request (packets have ids for identification and referencing purposes) or until a timeout is reached.
If I were building this in early versions of Java, I would likely implement in my proxy a collection of "pending message ids", where I would submit a message and wait() on the corresponding id (with a timeout). When a reply was received, the handling thread would notify() on the corresponding id.
Is there a better way to achieve this using an existing library class, perhaps in java.util.concurrent?
If I went with the solution described above, what is the correct way to deal with the potential race condition where a reply arrives before wait() is invoked?
The simple way would be to have a Callable that talks to the server and returns the Response.
// does not block
Future<Response> response = executorService.submit(makeCallable(request));
// wait for the result (blocks)
Response r = response.get();
Managing the request queue, assigning threads to the requests, and notifying the client code is all hidden away by the utility classes.
The level of concurrency is controlled by the executor service.
Every network call blocks one thread in there.
For better concurrency, one could look into using java.nio as well (but since you are talking to same server for all requests, a fixed number of concurrent connections, maybe even just one, seems to be sufficient).
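A sketch of what such a proxy could look like, with a timeout on the blocking get(); Request, Response and sendAndReceive() are placeholders for the question's marshalling and network code:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class Proxy {
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    public Response submit(Request request, long timeoutMillis) throws Exception {
        Callable<Response> call = () -> sendAndReceive(request);     // blocks one pool thread per call
        Future<Response> future = executor.submit(call);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS); // blocks the caller, with timeout
        } catch (TimeoutException e) {
            future.cancel(true);                                      // give up on this request
            throw e;
        }
    }

    // placeholder: marshal the request, write it to the server, read and unmarshal the reply
    private Response sendAndReceive(Request request) {
        throw new UnsupportedOperationException("network code goes here");
    }

    // placeholder types from the question's description
    public static class Request { }
    public static class Response { }
}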
