Multiple Threads Sharing One Socket - java

I have a scenario where multiple threads need to communicate with an external system over one socket. Each thread's message can be identified by a unique id.
In this scenario where all threads share the same socket, can I use BlockingQueues? Since the threads both produce requests and consume responses, can I have a singleton component, say "Socketer", that holds the socket and has two BlockingQueues (incoming & outgoing)? Any message on the outgoing queue is written to the socket, and any message read from the socket is put on the incoming queue. The Socketer also maintains a hashtable of all the producer threads, and as it reads a response, it identifies the corresponding producer and hands the response over to it.
Please suggest whether this is the right design approach or advise on improvements. My threads are actually web services and I am in a Spring environment.
Thanks

I don't see why you need the hash table, but you do need a response queue per thread. You could embed the correct response queue into the request message.
But are you sure you can't open multiple connections to the external system? It would make your life a lot simpler.
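A minimal sketch of the "embed the response queue in the request" idea, assuming a single I/O thread owns the socket (the Request and Socketer names come from the question; everything else is hypothetical):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

class Request {
    final String id;                                                  // unique correlation id from the question
    final byte[] payload;
    final BlockingQueue<byte[]> response = new SynchronousQueue<>();  // one-shot reply slot carried with the request

    Request(String id, byte[] payload) {
        this.id = id;
        this.payload = payload;
    }
}

class Socketer {
    private final BlockingQueue<Request> outgoing = new LinkedBlockingQueue<>();

    // Called from each web-service thread: enqueue the request, then block on
    // the queue that travels with it until the reader thread delivers the reply.
    byte[] call(Request request) throws InterruptedException {
        outgoing.put(request);
        return request.response.take();
    }

    // A single writer thread drains "outgoing" onto the socket, and a single
    // reader thread matches each reply to its request (by id) and puts it on
    // that request's embedded response queue.
}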

Related

Understanding of Netty internals

In my understanding of Netty, an incoming message is passed to the EventLoop (only one EventLoop, one thread). Next, the EventLoop doesn't process it, but passes it to an ExecutorService (which holds multiple threads in a pool) for execution.
All this happens with the help of NIO. The EventLoop waits for incoming messages and dispatches them via selectors, keys, channels, etc.
Am I right?
Netty 4 is used
As far as I know, Netty uses EventLoopGroups for handling incoming and outgoing data as well as incoming connections.
That shouldn't be as interesting when you start using Netty as the way the data goes through the different classes. When a message is inbound, the first interface where you can intercept it is the decoder (ByteToMessageDecoder), where the still-encoded ByteBuf is available. Then it makes its way through the handlers (ChannelInboundHandler).
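A minimal Netty 4 sketch of that path, with a ByteToMessageDecoder turning raw bytes into messages and an inbound handler processing them afterwards (the toy four-byte-int "protocol" and the port are made up for illustration):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;

public class NettySketch {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);    // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup();  // handles I/O for accepted channels
        try {
            ServerBootstrap b = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline()
                          .addLast(new ByteToMessageDecoder() {
                              @Override
                              protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
                                  // first interception point: raw bytes arrive here
                                  if (in.readableBytes() >= 4) {
                                      out.add(in.readInt());   // toy "protocol": 4-byte ints
                                  }
                              }
                          })
                          .addLast(new SimpleChannelInboundHandler<Integer>() {
                              @Override
                              protected void channelRead0(ChannelHandlerContext ctx, Integer msg) {
                                  // the decoded message flows to the inbound handler next
                                  System.out.println("got " + msg);
                              }
                          });
                    }
                });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}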

How to read SocketChannel in one thread, and write from n threads?

I have a socketChannel (java.nio.channels.SocketChannel) listening for reading requests (from multiple clients). It stores each request in a Request Queue.
Also socketChannel.configureBlocking(false)
Then I want the multiple threads to take one request at a time from the Request Queue and write to the socketChannel
I have read the following from the documentation.
Socket channels are safe for use by multiple concurrent threads. They
support concurrent reading and writing, though at most one thread may
be reading and at most one thread may be writing at any given time.
Since only one thread can be writing at a time, what can I do in the case of multiple writes?
You can use your own lock (synchronized or a ReentrantLock), or queue the messages and have one thread do the actual writes.
The problem with writes is that you can only atomically write one byte at a time. If you write more than one byte, you might send some but not all of the data, in which case another thread can attempt to write its message and you get a corrupted message.
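A hedged sketch of the first option: a lock held around the whole message, so a partial write can never interleave with another thread's data (the GuardedWriter class is hypothetical, and a production non-blocking version would register OP_WRITE with a Selector rather than loop):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.locks.ReentrantLock;

class GuardedWriter {
    private final SocketChannel channel;
    private final ReentrantLock writeLock = new ReentrantLock();

    GuardedWriter(SocketChannel channel) {
        this.channel = channel;
    }

    void writeFully(ByteBuffer message) throws IOException {
        writeLock.lock();
        try {
            while (message.hasRemaining()) {
                // on a non-blocking channel write() may consume only part of
                // the buffer, so keep calling it until everything is gone
                channel.write(message);
            }
        } finally {
            writeLock.unlock();
        }
    }
}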
I have a java.nio.channels.SocketChannel listening for reading requests (from multiple clients).
No you don't. You might have a ServerSocketChannel that listens for connections from multiple clients, but once you have an accepted SocketChannel, it is only connected to one client. All you can get from it is sequential requests from that client.
It stores each request in a Request Queue.
I don't see any need for that.
Also socketChannel.configureBlocking(false)
Then I want the multiple threads to take one request at a time from the Request Queue and write to the socketChannel
Why not just compute the reply as soon as you read it and write it directly back?
I have read the following from the documentation.
Socket channels are safe for use by multiple concurrent threads. They support concurrent reading and writing, though at most one thread may be reading and at most one thread may be writing at any given time.
Since only one thread can be writing at a time, what can I do in the case of multiple writes?
What multiple writes? You only have one client request per channel. You only need to write one response per request. You should not read, let alone process, a new request until you've written the prior response.
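A minimal sketch of the per-channel flow described above for a blocking channel: read a request, write the whole reply, and only then read again. The echo-style handle method is a hypothetical stand-in for your protocol, and the sketch assumes one read() returns one whole request, which real protocols need framing for:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class PerClientHandler {
    void serve(SocketChannel client) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (client.read(buffer) != -1) {         // blocking read of the next request
            buffer.flip();
            ByteBuffer reply = handle(buffer);      // compute the response for this request
            while (reply.hasRemaining()) {
                client.write(reply);                // finish writing before reading again
            }
            buffer.clear();
        }
    }

    private ByteBuffer handle(ByteBuffer request) {
        // hypothetical echo "protocol": send the request bytes straight back
        ByteBuffer reply = ByteBuffer.allocate(request.remaining());
        reply.put(request).flip();
        return reply;
    }
}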

How many threads should I use to send and receive data with java.nio?

I am creating a distributed system with many machines for learning. I need to send and receive data between machines, and I am using java.nio to create that network. On each machine, I use one thread with a ServerSocketChannel to receive data from other machines, and for each package of data I create a new thread to send it. That means one thread for receiving and multiple threads for sending on each machine.
But I face a problem: since one thread handles receiving, many clients will be left pending when connecting.
Should I change it to one thread handles receiving and one thread handles sending?
Thank you
P/s: I don't want to use any 3rd party framework.
If there are more senders than receivers in your network, then obviously some of the senders will end up waiting. If you have more receivers than senders, then obviously some of the receivers will be idle, since nearly all of the time a sender will probably be connected to a receiver, one to one.
I cannot judge on what you "should do" as I don't know what you're trying to accomplish.
Anyway, the two common patterns used on the receiver side are:
One thread handles all
One thread handles just accepting the connection and opening the stream, then delegates the actual work with the stream to another thread (usually from a thread pool to prevent resource exhaustion that could happen if a new thread was created for every connection)
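A hedged sketch of the second pattern, with one thread accepting connections and a bounded pool doing the per-connection work (the port and the handleClient placeholder are assumptions for the example):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AcceptAndDelegate {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(16);  // bounded, to avoid resource exhaustion
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(9000));             // port is arbitrary for the example
            while (true) {
                SocketChannel client = server.accept();           // the single thread only accepts
                pool.submit(() -> handleClient(client));          // actual I/O work happens in the pool
            }
        }
    }

    private static void handleClient(SocketChannel client) {
        // hypothetical: read the client's data and reply, then close the channel
        try (SocketChannel c = client) {
            // ... protocol-specific work ...
        } catch (IOException ignored) {
        }
    }
}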

Java multithreaded stateful server - networking design

I'm trying to implement a stateful, multi-client server application and have some questions about the networking/threading design. The problem I'm currently facing is how to exchange messages between the communication layer and the logic layer.
The server handles multiple clients, where each of them can be active in multiple "channels", where each channel has multiple stages and may have multiple clients acting in it. Think of it to something similar as a chat program with multiple rooms.
I have already implemented the receiving of messages on the server side. Each client has its own thread that blockingly reads the data and decodes it into messages. Now how to proceed? In my opinion, each channel should also have its own thread to easily maintain its state. I could use a BlockingQueue to exchange the received messages with the channel thread, which blocks waiting for new messages on that queue.
But then how do I send messages to the clients? The logic in the channel will handle the message and produce some messages to be sent to one/some/all of the clients. Is it safe to use the channel thread to directly write to the socket? Or should I use another BlockingQueue to transmit the messages to the client handler thread? But how would I wake it then, since it's blocked waiting on the socket to read? Or should I use a separate send thread per client, or even a separate send socket?
BTW: I know I could use existing libraries for the networking layer, but I want to do it from scratch on plain sockets.
Put a send message method on the communication object that wraps the socket. Synchronize this method so that only one thread can be calling it at once. Then it doesn't make any difference how many threads call this method; each message will only be sent one at a time. You also don't have to disturb the thread that's blocking on the read. This send method will be a quick enough operation that you don't have to worry about other threads blocking while a thread sends.
As long as the channel has a reference to the communication objects for each connected client, it can send messages and not worry about it.
If it ever caused problems, you could always modify that send message to enqueue the object to be sent. Then you could have a specific send thread to block on the queue and write the contents to the socket. But from my experience, this won't be necessary.
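A minimal sketch of that synchronized send method, assuming a blocking java.net.Socket per client and simple length-prefixed framing (the ClientConnection name is hypothetical):

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

class ClientConnection {
    private final DataOutputStream out;

    ClientConnection(Socket socket) throws IOException {
        this.out = new DataOutputStream(socket.getOutputStream());
    }

    // Only one thread at a time can be inside send(), so messages from the
    // channel threads never interleave on the wire. The reading thread is
    // untouched because it blocks on the socket's input stream, not on this lock.
    synchronized void send(byte[] message) throws IOException {
        out.writeInt(message.length);   // length-prefixed framing, assumed for the example
        out.write(message);
        out.flush();
    }
}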
What about an event mechanism? When you are done processing the request and there is data available for the client, simply send it as an event to the client's socket handler thread. Since the transmission from the client has ended, you can send the reply normally, if I'm thinking about this correctly.

Implementing asynchronous message queue in java

I have a java server that handles logins from multiple clients. The server creates a thread for each tcp/ip socket listener. Database access is handled by another thread that the server creates.
At the moment the number of clients I have attaching to the server is quite low (<100) so I have no real performance worries, but I am working out how I should handle more clients in the future. My concern is that with lots of clients my server and database threads will get bogged down by constant calls to their methods from the client threads.
Specifically in relation to the database: at the moment each client thread accesses the public database thread on its server parent and executes a data access method. What I think I should do is have some kind of message queue that a client thread can put its data request on, and the database thread will handle it when it gets round to it. If there is data to be returned from the data access call, it can be put on a queue for the client thread to pick up. None of this would hit the main server code or any other client threads.
I therefore think that I want to implement an asynchronous message queue that client threads can put a message on and the database thread will pick up from. Is that the right approach? Any thoughts and links to somewhere I can read up about implementation would be appreciated.
I would not recommend this approach.
JMS was born for this sort of thing. It'll be better than any implementation you'll write from scratch. I'd recommend using a Java EE app server that has JMS built in or something like ActiveMQ or RabbitMQ that you can add to a servlet engine like Tomcat.
I would strongly encourage you to investigate these before writing your own.
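For a feel of what that looks like, here is a rough sketch of a client thread handing a database request to a JMS queue with ActiveMQ, one of the brokers mentioned above (the broker URL and queue name are assumptions):

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DbRequestSender {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");  // assumed broker URL
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("db.requests");            // assumed queue name
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage("data request for client 42");
            producer.send(message);                                      // client thread returns immediately
        } finally {
            connection.close();
        }
    }
}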
What you are describing sounds like an ExecutorCompletionService. This is essentially an async task broker that accepts requests (Runnables or Callables) from one thread, returning a "handle" to the forthcoming result in the form of a Future. The request is then executed in a thread pool (which could be a single-threaded pool) and the result of the request is then delivered back to the calling thread through the Future.
In between the time that the request is submitted and response is supplied, your client thread will simply wait on the Future (with an optional timeout).
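A hedged sketch of that pattern, with a single-threaded executor standing in for your database thread (the DbBroker name and the query plumbing are hypothetical):

import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

class DbBroker {
    private final ExecutorService dbThread = Executors.newSingleThreadExecutor();
    private final CompletionService<String> completion =
            new ExecutorCompletionService<>(dbThread);

    // Called from a client thread: the request is queued for the single DB
    // thread, and the handle to the forthcoming result comes back at once.
    Future<String> submitQuery(String query) {
        return completion.submit(() -> runAgainstDatabase(query));
    }

    private String runAgainstDatabase(String query) {
        // hypothetical placeholder for the real data access call
        return "result of " + query;
    }
}

// In the client thread:
// Future<String> f = broker.submitQuery("...");
// String result = f.get(5, TimeUnit.SECONDS);   // wait, with an optional timeout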
I would advise, however, that if you're expecting a big increase in the number of clients (and therefore client threads), you should evaluate some of the Java NIO Server frameworks out there. This will allow you to avoid allocating one thread per client, especially since you expect all these threads to spend some time waiting on DB requests. If this is the case, I would suggest looking at MINA or Netty.
Cheers.
//Nicholas
It sounds like what you want to do is limit the number of concurrent requests to the database that you allow (to stop it being overloaded).
I suggest you have a limited size connection pool. When too many threads want to use the database they will have to wait until a connection is free. A simple way to do this is with a BlockingQueue with all the connections created in advance.
private final BlockingQueue<Connection> connections = new ArrayBlockingQueue<Connection>(40); {
    // create the 40 connections and add() them to the queue here
}

// to perform a query:
Connection conn = connections.take();    // blocks until a connection is free
try {
    // do something with conn
} finally {
    connections.add(conn);               // always return the connection to the pool
}
This way you can keep your thread design much the same as it is and limit the number of concurrent queries to the database. With some tweaking you can create the connections as needed and provide a timeout if a database connection cannot be obtained quickly.
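For the timeout variant mentioned above, the checkout could use poll instead of take (the two-second limit is just an example):

Connection conn = connections.poll(2, TimeUnit.SECONDS);   // java.util.concurrent.TimeUnit
if (conn == null)
    throw new IllegalStateException("no database connection free within 2 seconds");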
