My Java application has to send messages (multithreaded) to a socket server. The application can send around 100-200 messages a second.
I want to know which is the better approach to do this?
Open a single client socket and send the messages from all threads through this one socket.
Disadvantages: I have to handle the reconnection logic on connection failure, and may lose many messages while reconnection is in progress. What about thread safety and blocking?
Create a new client socket connection for each thread and close it after sending.
Disadvantages: Even though I close the socket, the port will sit in TIME_WAIT for a while afterwards.
Which is the better practical approach?
I would propose a third option:
Open a socket per thread, and reuse the threads (for example via a thread pool). Then handle reconnection inside the thread, or just dispose of the broken socket properly and create a new one. This way you can avoid blocking and synchronisation issues.
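A minimal sketch of that idea follows, assuming a simple line-based protocol; the host, port and class names are placeholders. Each pooled worker lazily opens its own socket and recreates it after a failure.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PerThreadSocketSender {
        private static final String HOST = "localhost"; // hypothetical host/port
        private static final int PORT = 9000;

        // One socket per pooled worker thread, so writes need no synchronisation.
        private final ThreadLocal<Socket> socketPerThread = new ThreadLocal<>();
        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        public void send(String message) {
            pool.submit(() -> {
                try {
                    socketWriter().println(message);
                } catch (IOException e) {
                    discardSocket(); // dispose of the broken connection; the next send reconnects
                }
            });
        }

        private PrintWriter socketWriter() throws IOException {
            Socket s = socketPerThread.get();
            if (s == null || s.isClosed()) {
                s = new Socket(HOST, PORT); // reconnect lazily inside the worker thread
                socketPerThread.set(s);
            }
            return new PrintWriter(s.getOutputStream(), true);
        }

        private void discardSocket() {
            Socket s = socketPerThread.get();
            socketPerThread.remove();
            if (s != null) {
                try { s.close(); } catch (IOException ignored) { }
            }
        }
    }

Because each worker owns its socket, a failed connection only affects the messages handled by that one worker.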
100-200 messages per second isn't that much. I wouldn't re-connect every time as this is expensive. If you re-use your connection, it will be much faster.
If you are worried about losing messages, you can send a batch of messages or one at a time and wait for a confirmation from the server they have been received. You can still send thousands of messages per second this way.
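As a rough illustration of the confirmation idea, assuming the server acknowledges each batch with a single "OK" line (the batch terminator and reply format are invented for this sketch):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.util.List;

    public class BatchSender {
        // Sends a batch over an already-connected socket and waits for one "OK" line back.
        public static boolean sendBatch(Socket socket, List<String> messages) throws IOException {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            for (String m : messages) {
                out.println(m);
            }
            out.println("END-OF-BATCH");       // hypothetical batch terminator
            return "OK".equals(in.readLine()); // block until the server confirms receipt
        }
    }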
For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: a thread will be blocked by a client even if the client doesn't send anything. Possible pseudo-solutions, none of them satisfactory:
Call available() on the InputStream and put the connection back into the queue if it returns 0. The problem: it's impossible to detect whether the client is still connected.
As above, but use socket.isClosed() or socket.isConnected() to figure out whether the client is still connected. The problem: neither method detects a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe whether the client is still there by reading from or writing to it. The problem: reading blocks (i.e. back to the original situation where an inactive client blocks the queue), and writing actually sends something to the client, making the tests fail.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
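For example, something along these lines (the 30-second limit and the class and method names are arbitrary choices for this sketch):

    import java.io.IOException;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    class SlowClientGuard {
        // Drop clients that keep the worker waiting longer than the read timeout.
        static int readWithTimeout(Socket socket) throws IOException {
            socket.setSoTimeout(30_000);               // arbitrary 30 s limit
            try {
                return socket.getInputStream().read(); // blocks for at most 30 s
            } catch (SocketTimeoutException e) {
                socket.close();                        // client was too slow; drop it
                return -1;
            }
        }
    }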
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
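As a rough sketch of the selector approach (the port number and buffer size are arbitrary), a single thread can multiplex all the connections and only read the sockets that actually have data ready:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class SelectorServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(9000));   // arbitrary port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(4096);
            while (true) {
                selector.select();                      // blocks until something is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ); // "is data ready to be read"
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        if (client.read(buffer) == -1) { // remote end closed the connection
                            key.cancel();
                            client.close();
                        } else {
                            buffer.flip();
                            // ... hand the bytes to whatever processes requests ...
                        }
                    }
                }
            }
        }
    }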
Turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for exception to this principle), run available() on the connection before reading. If this returns 0, put the connection back into the queue since there is nothing to read.
Exception to the above principle, in which case available() is not used: if the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SoTimeout that you set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If you read -1, throw the connection away since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or nothing because they were skipped since available() reported nothing to read. If the other end closed its connection, we will detect this within one second, because the timestamp of the last successful read will be too old. In that case we perform an actual read that returns -1, and the socket's isClosed() is updated accordingly. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it takes an additional 100ms to find out that the connection is still there but not ready.
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and to also update the timestamp when getting a SocketTimeoutException.
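Putting those pieces together, the worker's handling of a single connection might look roughly like this (the queue plumbing is left out, and the class and method names are made up):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    class Connection {
        final Socket socket;
        long lastBlockingRead = System.currentTimeMillis(); // refreshed on every blocking read

        Connection(Socket socket) throws IOException {
            this.socket = socket;
            socket.setSoTimeout(100); // a blocking read gives up after 100 ms
        }

        /** Returns true if the connection should go back into the queue, false if it is dead. */
        boolean tryServe() throws IOException {
            InputStream in = socket.getInputStream();
            boolean staleTimestamp = System.currentTimeMillis() - lastBlockingRead > 1000;
            if (!staleTimestamp && in.available() == 0) {
                return true;                       // nothing to read, requeue immediately
            }
            try {
                int b = in.read();                 // blocks for at most 100 ms
                lastBlockingRead = System.currentTimeMillis();
                if (b == -1) {
                    socket.close();                // closed by the remote end, throw it away
                    return false;
                }
                // ... read and answer the rest of the request here ...
                return true;
            } catch (SocketTimeoutException e) {
                lastBlockingRead = System.currentTimeMillis(); // per the EDIT: a timeout also refreshes it
                return true;                       // still connected, just idle; requeue
            }
        }
    }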
No, the only way to discern an inactive client from a client that didn't shut down their socket properly is to send a ping or something to check if they're still there.
Possible solutions I can see are:
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if they're still there. I know you asked for a way to do this without sending anything, but if this is really a problem, i.e. you can't use the above solution, this is probably the best way to do it, depending on the specifics (since it's an exercise you might have to imagine the specifics).
A mix of both, which is probably best: keep track of how long they've been quiet, and after a while send them a ping to see if they're still alive. A sketch of this combination follows below.
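Here is a rough sketch of that combination, with an invented "PING" keep-alive message and an arbitrary idle threshold:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.net.Socket;

    class LivenessTracker {
        private static final long QUIET_LIMIT_MS = 30_000; // assumed idle threshold
        private long lastHeardFrom = System.currentTimeMillis();

        void onDataReceived() {
            lastHeardFrom = System.currentTimeMillis();     // client spoke, reset the clock
        }

        /** Ping a quiet client; a failed write means the connection is probably gone. */
        boolean stillAlive(Socket socket) {
            if (System.currentTimeMillis() - lastHeardFrom < QUIET_LIMIT_MS) {
                return true;                                // not quiet long enough to bother
            }
            try {
                PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                out.println("PING");                        // hypothetical keep-alive message
                return !out.checkError();                   // write failure => assume hangup
            } catch (IOException e) {
                return false;
            }
        }
    }

A write error is only a hint; depending on the protocol you may want to wait for a reply to the ping before trusting the connection.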
I want to write a chat application in Java which can handle many users simultaneously. I read about sockets and thread pools to limit the number of threads, but I can't imagine how to handle e.g. 100 socket connections at the same time without creating 100 new threads. The idea is that a client connects at the beginning and his connection stays open until he leaves the chat. He can send data to the server as well as receive other users' messages.
Reading from a socket is a blocking operation, so would I need to check all users' sockets in a loop, with some timeout, to see whether new data is available on a particular connection? My first idea was to create e.g. 3 threads for handling input from all connected users and 3 threads for outgoing communication from the server to clients, but how can I achieve that? Is there any async API for sockets in Java where I can define thread pools for in/out communication?
Make a Client class that extends Thread. Write all the methods, and in the run() method write the code you want executed when the client connection is made.
On the server side, listen for new connections. Accept a new connection, get the information about it, pass it to the constructor to create a new Client object, add that object to an ArrayList to keep track of all ongoing connections, and call its start() method. All the Client objects then sit in the ArrayList and keep running at the same time.
I made such a chat application about a year ago. And do not forget to close the connection once the client disengages, or else the objects pile up and slow down the application. I learnt that the hard way.
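A compact sketch of that structure (the port number and the broadcast behaviour are invented, and error handling is kept minimal):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.ArrayList;
    import java.util.List;

    class Client extends Thread {
        private final Socket socket;
        private final List<Client> clients;

        Client(Socket socket, List<Client> clients) {
            this.socket = socket;
            this.clients = clients;
        }

        @Override
        public void run() {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    broadcast(line);                             // relay to every connected client
                }
            } catch (IOException ignored) {
            } finally {
                synchronized (clients) { clients.remove(this); } // don't let dead clients pile up
                try { socket.close(); } catch (IOException ignored) { }
            }
        }

        private void broadcast(String message) throws IOException {
            synchronized (clients) {
                for (Client c : clients) {
                    new PrintWriter(c.socket.getOutputStream(), true).println(message);
                }
            }
        }
    }

    class ChatServer {
        public static void main(String[] args) throws IOException {
            List<Client> clients = new ArrayList<>();
            try (ServerSocket server = new ServerSocket(5000)) {   // arbitrary port
                while (true) {
                    Client client = new Client(server.accept(), clients);
                    synchronized (clients) { clients.add(client); }
                    client.start();
                }
            }
        }
    }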
Use Netty, as it provides an NIO (non-blocking IO) framework so that you do not need one thread per connection. It is a little bit (or a lot..) more complicated to write a server using non-blocking IO, but there are performance gains from not requiring a thread per connection.
However, 100 threads is not so many, so you could still create your server using standard IO and one thread per connection, it just depends on how much you need to scale.
For a server setup using Netty, you create a channel pipeline to which new connections are assigned. The pipeline is an ordered series of handlers which process incoming (and outgoing) messages from a connection / client. The handlers themselves all need to be asynchronous, so that when a handler needs to return a message to the client it writes it asynchronously (non-blockingly) to the channel and receives a future back, to which it can attach actions for when the message is actually written.
There is a little bit of a learning curve, but it is not that steep and the overall design of your application will be much better if built the Netty way vs using standard blocking IO.
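For a rough idea of what the Netty way looks like, here is a minimal sketch against the Netty 4 API; the port is arbitrary and a trivial echo handler stands in for real chat logic:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.codec.string.StringDecoder;
    import io.netty.handler.codec.string.StringEncoder;

    public class NettyChatServer {

        // One handler instance per connection; processes decoded String messages.
        static class ChatHandler extends SimpleChannelInboundHandler<String> {
            @Override
            protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                // Non-blocking write; the returned ChannelFuture can have listeners attached.
                ctx.writeAndFlush("echo: " + msg + "\n");
            }
        }

        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
            EventLoopGroup workers = new NioEventLoopGroup(); // handles IO for all connections
            try {
                ServerBootstrap bootstrap = new ServerBootstrap()
                        .group(boss, workers)
                        .channel(NioServerSocketChannel.class)
                        .childHandler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            protected void initChannel(SocketChannel ch) {
                                // Ordered series of handlers for each new connection.
                                ch.pipeline().addLast(new StringDecoder(), new StringEncoder(), new ChatHandler());
                            }
                        });
                bootstrap.bind(9000).sync().channel().closeFuture().sync(); // arbitrary port
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }

The writeAndFlush() call returns a ChannelFuture, which is the future mentioned above that you can attach listeners to.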
I have a simple client-server application using sockets for the communication. One possibility is to close the socket every time the client has sent something to the server.
But my idea is to keep the connection always open, i.e. if a client contacts the server the connection should be put into a queue (e.g. LinkedBlockingQueue) and kept open, this would increase the performance.
How can I check in the server if there is new data available in a socket in the queue? The only thing I can imagine is to constantly iterate over the whole queue and check every socket for new data. But this would be very inefficient, because if I have several threads working on the queue, the queue gets blocked while one thread is scanning over it.
Or is there a possibility to register a callback function on the socket, so that the socket informs the threads that data is ready?
But my idea is to keep the connection always open, i.e. if a client contacts the server the connection should be put into a queue (e.g. LinkedBlockingQueue) and kept open, this would increase the performance.
Keeping connections open will improve performance, though there are scaling issues: an open socket uses kernel resources. (I wouldn't use a queue though ...)
How can I check in the server if there is new data available in a socket in the queue?
If you have a number of sockets to different clients, and you want to process data in (roughly) the order that it arrives, there are two common techniques:
Create a thread per socket, and have each thread simply do a read. This will (naturally) block the thread until data becomes available.
Use the NIO channel selector mechanism (see Selector) which allows you to find out which of a group of I/O channels is ready for a read or write.
Thread per socket tends to be resource hungry (thread stacks), and does not scale well at all if you have multiple threads that are active simultaneously. (Too many context switches, too much load on the thread scheduler.)
By contrast, selectors map onto native syscalls provided by the host operating system, and thus they are efficient and responsive ... if used intelligently.
(You could also obtain non-blocking channels for the sockets, and poll them round-robin fashion. But that isn't going to be either efficient or responsive.)
As you can see, none of these ideas work with a queue. Either you have a number of threads each dealing with one socket, or you have one thread dealing with an array or (array) list of sockets. The queue abstraction is not designed for indexing or iterating.
Or is there a possibility to register a callback function on the socket, so that the socket informs the threads that data is ready?
See @Lolo's answer.
A practical solution would be to use NIO2 AsynchronousSocketChannels to perform asynchronous read operations with a callback that you can specify as a CompletionHandler.
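A small sketch of what that can look like (the buffer size and the processing step are placeholders): the CompletionHandler fires when data arrives and re-arms itself for the next read.

    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousSocketChannel;
    import java.nio.channels.CompletionHandler;

    class AsyncReader {
        // Registers a callback that fires when data arrives, without tying up a thread.
        static void readAsync(AsynchronousSocketChannel channel) {
            ByteBuffer buffer = ByteBuffer.allocate(4096);
            channel.read(buffer, buffer, new CompletionHandler<Integer, ByteBuffer>() {
                @Override
                public void completed(Integer bytesRead, ByteBuffer buf) {
                    if (bytesRead == -1) {
                        return;                   // peer closed the connection
                    }
                    buf.flip();
                    // ... process the received bytes ...
                    buf.clear();
                    channel.read(buf, buf, this); // re-arm the callback for the next chunk
                }

                @Override
                public void failed(Throwable exc, ByteBuffer buf) {
                    // ... log the error and close the channel ...
                }
            });
        }
    }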
I have a client that will connect to a server through a socket. After connecting every event that happens on the server will be sent to all registered clients.
Every client should receive data related to the event.
I just need to implement the client...meaning I need to connect to the server and receive the events' data.
I was thinking of doing something like:
    this.socket = new Socket();
    this.socket.connect(new InetSocketAddress(InetAddress.getByName(host), port), SOCKET_TIMEOUT);
And then launch a thread which reads from the socket's InputStream in a while loop.
But I don't know if this the best way to implement an event driven client through a socket.
Is it?
In an event-driven environment a DatagramSocket will incur lower network overhead, but it will not give you TCP's reliability. Here is a tutorial about writing datagram socket clients and servers.
This is often done by spawning a separate thread for the client that continuously makes blocking calls to read() from the stream - that way, as soon as data becomes available the read() call unblocks and can act on what it received ('the event fires'), then it goes back to blocking waiting for the next event.
You don't necessarily need a thread here unless the client has to respond to some other input like GUI events.
Then, assuming you are talking about TCP, read from the socket in a loop, buffering received data until you have a complete application "event", and call your application "event handler". It's that simple.
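Assuming a line-delimited protocol (purely for illustration), the reader thread could look something like this, with the application supplying the event handler as a callback:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.Socket;
    import java.util.function.Consumer;

    class EventClient {
        // Blocks on readLine(); each received line is treated as one server event.
        static Thread startListening(Socket socket, Consumer<String> onEvent) {
            Thread t = new Thread(() -> {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        onEvent.accept(line);     // "the event fires"
                    }
                } catch (IOException e) {
                    // connection dropped; fall through and let the thread end
                }
            });
            t.start();
            return t;
        }
    }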
I have a TCP socket connection between two Java applications. When one side closes the socket, the other side remains open, but I want it to be closed too. Also, I can't wait on it to see whether it is available or not and then close it. I want some way to close it completely from one side.
What can I do?
TCP doesn't work like this. The OS won't release the resources, namely the file descriptor and thus the port, until the application explicitly closes the socket or dies, even if the TCP stack knows that the other side closed it. There's no callback from kernel to user application on receipt of the FIN from the peer. The OS acknowledges it to the other side but waits for the application to call close() before sending its FIN packet. Take a look at the TCP state transition diagram - you are in the passive close box.
One way to detect a situation like this without dedicating a thread to each socket is to use the select/poll/epoll/kqueue family of functions. The socket being passively closed will be signalled as readable, and a read attempt will return EOF.
Hope this helps.
Both sides have to read from the connection, so they can detect when the peer has closed it. When read returns -1 it means the other end closed the connection, and that's your cue to close your end.
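In code, that check is simply the exit condition of the read loop (a minimal sketch):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    class PeerCloseDetector {
        static void readUntilPeerCloses(Socket socket) throws IOException {
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                // ... handle n bytes of data ...
            }
            socket.close();   // read returned -1: the peer closed, so close our end too
        }
    }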
If you are still reading from your socket, then you will detect the -1 when it closes.
If you are no longer reading from your socket, go ahead and close it.
If it's neither of these, you are probably having a thread wait on an event. This is NOT the way you want to handle thousands of ports! Java will start to get pukey at around 3000 threads on Windows, and far fewer on Linux (I don't know why).
Make sure you are using NIO. Use a single thread to manage all your ports (connection pool). It should just grab the data from a port and forward it to a queue. At that point I think I'd have a thread pool take the data out of the queue and process it, because actually processing the data from a port will take some time.
Attaching a thread to each port will NOT work, and is the biggest reason NIO was needed.
Also, having some kind of a "Close" message as part of your stream to trigger closing the port may make things work faster, but you'll still need to handle the -1 to cover the case of broken streams.
The usual solution is to let the other side know you are going to close the connection, before actually closing it. For instance, in the case of the SMTP protocol, the server will send '221 Bye' before it closes the connection.
You probably want to have a connection pool.