Timing issues with Java socket I/O

I'm trying to understand how Java sockets operate. The question is: what can you do simultaneously when using the Java socket API, and what happens if data is sent and read with some delay?
READ & WRITE at once. If one socket client is connected to one socket server, can they BOTH read and write at the same time? As far as I understand, TCP is full-duplex, so in theory a socket should be able to read and write at once, but we have to create two threads on both the client and the server. Am I right?
WRITE to N clients at once. If several socket clients are connected to one socket server, can the server read from several clients at the same moment, and can the server write to several clients at the same moment?
If the maximum physical speed of the network card is 1 kbyte/sec and 5 clients are connected, at what speed is it possible to write to one client?
How can I implement sequential sending of data in both directions? I mean I want to send N bytes from server to client, then M bytes from client to server, then N bytes from server to client, and so on. The problem is that if either side has written something to the channel, the other side only stops reading that data (read() == -1) once the channel is closed, which means we cannot reuse it and have to open another connection. Or should we instead place readers and writers in different threads which keep doing their job with read() and write() until the connection is closed?
Imagine there is a delay between calling write(); flush() on one side and calling read() on the other side. During the delay, where is the written data stored? Is it transmitted anyway? What is the maximum size of that "delayed" data that can be stored somewhere "in between"?

Correct. If you're using blocking I/O, you'll need a reader thread and a writer thread for each Socket connection.
You could use a single thread to write to N clients at once, but you run the risk of blocking on a write. I won't address the writing speeds here, as they depend on several things, but obviously the cumulative write speed to all clients would be capped at the card's 1 kbyte/sec.
Yes, you'll need 2 threads, you can't do this with a single thread (or you could, but as you said yourself, you'd need to constantly open and close connections).
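For illustration, here is a minimal sketch of that two-thread arrangement over a blocking Socket (the class and method names are invented for this example); the writer thread drains a queue so that application code never blocks on the network directly:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class DuplexConnection {
    private final Socket socket;
    private final BlockingQueue<byte[]> outgoing = new LinkedBlockingQueue<>();

    DuplexConnection(Socket socket) {
        this.socket = socket;
    }

    void start() {
        new Thread(this::readLoop, "reader").start();
        new Thread(this::writeLoop, "writer").start();
    }

    // Called by application code; the writer thread picks the data up.
    void send(byte[] data) {
        outgoing.add(data);
    }

    private void readLoop() {
        try (InputStream in = socket.getInputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                handleIncoming(buf, n);          // application-specific processing
            }
        } catch (IOException e) {
            // connection closed or broken
        }
    }

    private void writeLoop() {
        try {
            OutputStream out = socket.getOutputStream();
            while (!socket.isClosed()) {
                byte[] data = outgoing.take();   // blocks until there is something to send
                out.write(data);
                out.flush();
            }
        } catch (IOException | InterruptedException e) {
            // stop writing
        }
    }

    private void handleIncoming(byte[] data, int length) {
        // process 'length' bytes of 'data' here
    }
}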
It would be stored in a buffer somewhere. Depending on your code it could be in a buffered stream, or in the socket's own send/receive buffers. I believe the default buffer size of BufferedOutputStream is 8K, and the socket's own buffer sizes depend on the environment. It shouldn't really matter, though: the streaming nature of TCP/IP removes the need to think about buffers unless you really need to do fine-tuning.
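If you do want to inspect or tune the socket-level part of that buffering, java.net.Socket exposes it directly; a small sketch (the host and port are placeholders, and the reported sizes are platform-dependent hints):

import java.io.IOException;
import java.net.Socket;

public class BufferSizes {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("example.com", 1234)) {   // placeholder host/port
            // The kernel-level socket buffers are part of the "in between" storage
            // for data that has been written but not yet read.
            System.out.println("send buffer:    " + socket.getSendBufferSize());
            System.out.println("receive buffer: " + socket.getReceiveBufferSize());
            socket.setSendBufferSize(64 * 1024);                  // request a larger send buffer
        }
    }
}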

Related

Can socket.getInputStream().read() read all the data if the sender keeps writing without the receiver calling read()?

I have a network app and one puzzle:
Suppose I send data (socket.getOutputStream().write()) many times without the other side calling socket.getInputStream().read(), and then,
after some minutes,
the other side starts reading: can socket.getInputStream().read() still read all of the data that was sent?
If it can, will some buffer overflow occur if the amount of data sent over minutes or hours is too large?
Yes. Either anything you write to the socket will be read, or the connection will be terminated. If you don't get an error, then you will always read everything you wrote.
If you fill up whatever buffer space is available, then the sender's write call will wait until there's more buffer space. It will not raise an error.
Yes. As long as the socket is still open, because TCP sockets provide reliable transmission.
In practice, the socket might be forced closed. But yes, forcing the server to use a lot of memory buffers is one common vector in a DDOS attack.
Yes, but if you never read from the socket, the sender might block, which might prevent it from reading, which might block your writes.
It isn't a good idea. If the peer is sending responses, read them as the application protocol requires.
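A small loopback experiment that makes the flow-control behaviour described above visible: the accepting side never reads, so after some amount of data the writer's write() call simply blocks instead of failing (this is a demonstration sketch only; it never terminates on its own):

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class FillBuffers {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {          // ephemeral local port
            Socket writerSide = new Socket("localhost", server.getLocalPort());
            Socket readerSide = server.accept();                   // accepted, but never read from

            OutputStream out = writerSide.getOutputStream();
            byte[] chunk = new byte[1024];
            long written = 0;
            while (true) {
                // write() blocks (without raising an error) once the local send buffer and the
                // peer's receive buffer are full, typically after a few hundred kilobytes.
                out.write(chunk);
                written += chunk.length;
                System.out.println("written so far: " + written + " bytes");
            }
        }
    }
}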

Keeping java sockets open - how to check if new data available?

I have a simple client-server application using sockets for the communication. One possibility is to close the socket every time the client has sent something to the server.
But my idea is to keep the connection always open, i.e. if a client contacts the server, the connection should be put into a queue (e.g. a LinkedBlockingQueue) and kept open; this would increase the performance.
How can I check in the server if there is new data available in a socket in the queue? The only thing I can imagine is to constantly iterate over the whole queue and check every socket to see whether it has new data. But this would be very inefficient, because if I have several threads working on the queue, the queue gets blocked while one thread is scanning over it.
Or is there a possibility to register a callback function on the socket, so that the socket informs the threads that data is ready?
But my idea is to keep the connection always open, i.e. if a client contacts the server, the connection should be put into a queue (e.g. a LinkedBlockingQueue) and kept open; this would increase the performance.
Keeping connections open will improve performance, though there are scaling issues: an open socket uses kernel resources. (I wouldn't use a queue though ...)
How can I check in the server if there is new data available in a socket in the queue?
If you have a number of sockets to different clients, and you want to process data in (roughly) the order that it arrives, there are two common techniques:
Create a thread per socket, and have each thread simply do a read. This will (naturally) block the thread until data becomes available.
Use the NIO channel selector mechanism (see Selector), which allows you to find out which of a group of I/O channels is ready for a read or write (a minimal selector loop is sketched below).
Thread-per-socket tends to be resource hungry (thread stacks), and it does not scale well at all if you have a large number of threads that are active simultaneously. (Too many context switches, too much load on the thread scheduler.)
By contrast, selectors map onto native syscalls provided by the host operating system, and thus they are efficient and responsive ... if used intelligently.
(You could also obtain non-blocking channels for the sockets, and poll them round-robin fashion. But that isn't going to be either efficient or responsive.)
As you can see, none of these ideas work with a queue. Either you have a number of threads each dealing with one socket, or you have one thread dealing with an array or (array) list of sockets. The queue abstraction is not designed for indexing or iterating.
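To make the selector option concrete, here is a minimal sketch of a single-threaded accept-and-read loop (the port number and the processing step are placeholders, not part of any existing API):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));                  // placeholder port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                                     // blocks until something is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else {
                        buf.flip();
                        // hand the n bytes to application code here
                    }
                }
            }
        }
    }
}

One thread running this loop can service many mostly-idle connections; the per-connection cost is a registered key rather than a whole thread stack.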
Or is there a possibility to register a callback function on the socket, so that the socket informs the threads that data is ready?
See #Lolo's answer.
A practical solution would be to use NIO2 AsynchronousSocketChannels to perform asynchronous read operations with a callback that you can specify as a CompletionHandler.
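For example, a rough sketch of that callback style (the buffer size and the re-arm-after-each-read strategy are just illustrative choices, not a required pattern):

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

class AsyncReader {
    // Schedule an asynchronous read; the CompletionHandler is invoked when data arrives,
    // so no thread has to block or poll for readability.
    void readAsync(AsynchronousSocketChannel channel) {
        ByteBuffer buf = ByteBuffer.allocate(4096);
        channel.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesRead, ByteBuffer attachment) {
                if (bytesRead == -1) {
                    return;                     // peer closed the connection
                }
                attachment.flip();
                // ... process the received bytes, then re-arm the read ...
                readAsync(channel);
            }

            @Override
            public void failed(Throwable exc, ByteBuffer attachment) {
                // connection error: close the channel / clean up here
            }
        });
    }
}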

From classic multithreaded to java.nio asynchronous/non-blocking server

I'm the main developer of an online game.
Players use dedicated client software that connects to the game server over TCP/IP (TCP, not UDP).
At the moment, the architecture of the server is a classic multithreaded server with one thread per connection.
But in peak hours, when there are often 300 or 400 connected people, the server is getting more and more laggy.
I was wondering whether, by switching to a java.nio.* asynchronous I/O model with a few threads managing many connections, the performance would be better.
Finding example code on the web that covers the basics of such a server architecture is very easy. However, after hours of googling, I didn't find answers to some more advanced questions:
1 - The protocol is text-based, not binary-based. The clients and the server exchange lines of text encoded in UTF-8. A single line of text represents a single command; each line is properly terminated by \n or \r\n.
For the classic multithreaded server, I have this kind of code:
public Connection(Socket sock) throws IOException {
    this.in  = new BufferedReader(new InputStreamReader(sock.getInputStream(), "UTF-8"));
    this.out = new BufferedWriter(new OutputStreamWriter(sock.getOutputStream(), "UTF-8"));
    new Thread(this).start();
}
And then, in run(), data is read line by line with readLine().
In the docs, I found a utility class, Channels, that can create a Reader out of a SocketChannel. But it is stated that the produced Reader won't work if the channel is in non-blocking mode, which contradicts the fact that non-blocking mode is mandatory to use the high-performance channel selection API I want to use. So I suspect that it isn't the right solution for what I would like to do.
The first question is therefore the following: if I can't use that, how do I efficiently and properly take care of splitting lines and converting native Java strings from/to UTF-8 encoded data in the NIO API, with buffers and channels?
Do I have to play with get/put, or work inside the wrapped byte array by hand? How do I go from a ByteBuffer to strings encoded in UTF-8? I admit I don't understand very well how to use the classes in the charset package to do that.
2 - In the asynchronous/non-blocking I/O world, how should I handle consecutive reads/writes that by their nature have to be executed sequentially, one after the other?
For example, the login procedure, which is typically challenge-response based: the server sends a question (a particular computation), the client sends the response, and then the server checks the response given by the client.
The answer, I think, is certainly not to submit the whole login process as a single task to the worker threads, as it is quite long and risks freezing worker threads for too much time. (Imagine this scenario: 10 pool threads, 10 players trying to connect at the same time; tasks related to players already online are delayed until a thread is ready again.)
3 - What happens if two different threads simultaneously call Channel.write(ByteBuffer) on the same Channel?
Might the client receive mixed-up lines? For example, if one thread sends "aaaaa" and another sends "bbbbb", could the client receive "aaabbbbbaa", or am I guaranteed that everything is sent in a consistent order? And am I allowed to modify the buffer I used right after the call returns?
Or, asked differently, do I need additional synchronization to avoid this sort of situation?
If I need additional synchronization, how do I know when to release the locks and so on, i.e. when a write has finished?
I'm afraid the answer isn't as simple as registering for OP_WRITE in the selector. When I tried that, I noticed that I got the write-ready event all the time, and always for all clients, so Selector.select returned early mostly for nothing, since there are only 3 or 4 messages to send per second per client while the selection loop runs hundreds of times per second. So, potentially, a busy wait, which is very bad.
4 - Can multiple threads call Selector.select on the same selector simultaneously without any concurrency problems such as missing an event, scheduling it twice, etc?
5 - In fact, is NIO as good as it is said to be? Would it make sense to stay with the classic multithreaded model but, instead of creating a thread per connection, use fewer threads and loop over the connections to check for data availability using InputStream.available()? Is that idea stupid and/or inefficient?
1) Yes. I think that you need to write your own nonblocking readLine method. Note also that a nonblocking read may be signaled when there are several lines in the buffer, or when there is an incomplete line:
Example: (first read)
USER foo
PASS
(second read)
bar
You will need to store (see 2) the data that was not consumed, until enough information is ready to process it.
// channel was selected for OP_READ
read data from channel
prepend data from previous read
split complete lines
save incomplete line
execute commands
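For instance, one possible shape for that read/prepend/split/save cycle is sketched below (simplified: as the comment notes, a per-channel CharsetDecoder would be needed to handle a multi-byte UTF-8 character that is split across two reads, which this version ignores):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

class LineReader {
    private final StringBuilder leftover = new StringBuilder();   // incomplete line from earlier reads
    private final ByteBuffer buf = ByteBuffer.allocate(4096);

    // Returns the complete lines available so far; keeps any trailing partial line for later.
    List<String> readLines(SocketChannel channel) throws IOException {
        List<String> lines = new ArrayList<>();
        buf.clear();
        int n = channel.read(buf);
        if (n <= 0) {
            return lines;                      // nothing new yet (or EOF, to be handled by the caller)
        }
        buf.flip();
        // NOTE: decoding per read assumes no multi-byte UTF-8 character straddles two reads;
        // a production version would keep a CharsetDecoder per channel instead.
        leftover.append(StandardCharsets.UTF_8.decode(buf));
        int newline;
        while ((newline = leftover.indexOf("\n")) >= 0) {
            String line = leftover.substring(0, newline);
            leftover.delete(0, newline + 1);
            if (line.endsWith("\r")) {
                line = line.substring(0, line.length() - 1);
            }
            lines.add(line);                   // a complete command, ready to execute
        }
        return lines;
    }
}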
2) You will need to keep the state of each client.
Map<SocketChannel,State> clients = new HashMap<SocketChannel,State>();
when a channel is connected, put a fresh state into the map
clients.put(channel,new State());
Or store the current state as the attached object of the SelectionKey.
Then, when executing each command, update the state. You may write it as a monolithic method, or do something more fancy such as polymorphic implementations of State, where each state knows how to deal with some commands (e.g. LoginState expects USER and PASS, then you change the state into a new AuthorizedState).
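As a sketch of that polymorphic variant (all class names here are illustrative, not a fixed design):

import java.nio.channels.SelectionKey;

interface State {
    State handle(String commandLine, SelectionKey key);           // returns the next state
}

class LoginState implements State {
    private String user;

    @Override
    public State handle(String commandLine, SelectionKey key) {
        if (commandLine.startsWith("USER ")) {
            user = commandLine.substring(5);
            return this;                                          // still waiting for PASS
        }
        if (commandLine.startsWith("PASS ") && user != null) {
            // verify the credentials here ...
            return new AuthorizedState(user);
        }
        return this;
    }
}

class AuthorizedState implements State {
    private final String user;                                    // the authenticated user

    AuthorizedState(String user) {
        this.user = user;
    }

    @Override
    public State handle(String commandLine, SelectionKey key) {
        // handle the commands available to a logged-in client
        return this;
    }
}

In the selector loop the current state can ride on the key: key.attach(new LoginState()) when the client connects, then state = (State) key.attachment() and key.attach(state.handle(line, key)) after each complete line.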
3) I don't recall using NIO with many asynchronous writers per channel, but the documentation says it is thread safe (I won't elaborate, since I have no proof of this). About OP_WRITE, note that it signals when the write buffer is not full. In other words, as has been said elsewhere: OP_WRITE is almost always ready, i.e. except when the socket send buffer is full, so you will just cause your Selector.select() method to spin mindlessly.
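A common way to get both guaranteed per-client ordering and a quiet selector is a per-channel outbox that only keeps OP_WRITE registered while something remains to be flushed; a rough sketch, not a drop-in implementation:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Queue;

class Outbox {
    private final Queue<ByteBuffer> pending = new ArrayDeque<>();

    // Any thread may enqueue; messages for this client leave in FIFO order, never interleaved.
    synchronized void enqueue(SelectionKey key, ByteBuffer data) {
        pending.add(data);
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        key.selector().wakeup();               // in case the selector thread is blocked in select()
    }

    // Called from the selector thread when the key reports isWritable().
    synchronized void flush(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        while (!pending.isEmpty()) {
            ByteBuffer buf = pending.peek();
            channel.write(buf);
            if (buf.hasRemaining()) {
                return;                        // send buffer full; keep OP_WRITE and retry later
            }
            pending.poll();
        }
        // Nothing left to send: stop asking for write-readiness so select() goes quiet again.
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }
}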
4) Yes. Selectors are themselves safe for use by multiple concurrent threads (their key sets, however, are not), and Selector.select() performs a blocking selection operation.
5) I think that the most difficult part is switching from a thread-per-client architecture to a different design where reads and writes are decoupled from processing. Once you have done that, working with channels is easier than rolling your own approach with blocking streams.

Non-blocking socket writes in Java versus blocking socket writes

Why would someone prefer blocking writes over non-blocking writes? My understanding is that you would only want a blocking write if you want to make sure the other side received the TCP packet once the write method returns, but I'm not even sure that's possible: you would have to flush, and the flush would have to flush the underlying operating system's socket send buffer. So is there any disadvantage to non-blocking socket writes? Is having a large underlying socket send buffer a bad idea in terms of performance? My understanding is that the smaller the underlying socket send buffer, the more likely you are to hit a slow/buggy client and have to drop/queue packets at the application level while the underlying socket buffer is full and isWritable() is returning false.
My understanding is that you would only want blocking write if you want to make sure the other side got the TCP packet once the write method returned
Your understanding is incorrect. It doesn't ensure that.
Blocking writes block until all the data has been transferred to the socket send buffer, from where it is transferred asynchronously to the network. If the reader is slow, his socket receive buffer will fill up, which will eventually cause your socket send buffer to fill up, which will cause a blocking write to block, blocking the whole thread. Non-blocking I/O gives you a way to detect and handle that situation.
The problem with non-blocking writes is that you may not have anything useful to do if the write is incomplete. You can end up with loops like
// non-blocking write
while(bb.remaining() > 0) sc.write(bb);
OR
// blocking write
sc.write(bb);
The first can burn CPU and the second might be more desirable.
The big problem is reads. Once you decide whether you want blocking or non-blocking reads, your writes have to be the same. Unfortunately there is no way to make them different. If you want non-blocking reads, you have to have non-blocking writes.
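That last point is visible directly in the API: the blocking mode is a property of the channel as a whole, not of reads and writes separately. A quick check (no connection needed):

import java.io.IOException;
import java.nio.channels.SocketChannel;

public class BlockingMode {
    public static void main(String[] args) throws IOException {
        SocketChannel channel = SocketChannel.open();
        // After this call BOTH read() and write() on this channel are non-blocking.
        channel.configureBlocking(false);
        System.out.println("blocking? " + channel.isBlocking());   // prints false
        channel.close();
    }
}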

Java NIO SocketChannel writing problem

I am using Java NIO's SocketChannel to write: int n = socketChannel.write(byteBuffer); Most of the time the data is sent in one or two parts; i.e. if the data could not be sent in one attempt, the remaining data is retried.
The issue is that sometimes the data is not sent completely in one attempt, and when the remaining data is retried, it happens that even after several attempts not a single byte is written to the channel; finally, after some time, the remaining data does get sent. The data is not large, approximately 2000 characters.
What could be the cause of such behaviour? Could external factors such as RAM, the OS, etc. be the hindrance?
Please help me solve this issue. If any other information is required please let me know.
Thanks
EDIT:
Is there a way in NIO to check whether a SocketChannel can accept data before actually writing? The intention is: after attempting to write the complete data, if some of it hasn't been written to the channel, can we check whether the SocketChannel can take any more data before writing the remainder, so that instead of attempting multiple times fruitlessly, the thread responsible for writing could wait or do something else?
TCP/IP is a streaming protocol. There is no guarantee anywhere at any level that the data you send won't be broken up into single-byte segments, or anything in between that and a single segment as you wrote it.
Your expectations are misplaced.
Re your EDIT, write() will return zero when the socket send buffer fills. When you get that, register the channel for OP_WRITE and stop the write loop. When you get OP_WRITE, deregister it (very important) and continue writing. If write() returns zero again, repeat.
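A sketch of that register/deregister sequence, keeping the unfinished buffer as the key's attachment (the class and method names, and the use of the attachment, are illustrative only):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

class WriteHandler {

    // First attempt: write what we can; if anything is left over, ask the selector to tell us
    // when the channel becomes writable again.
    void startWrite(SelectionKey key, ByteBuffer data) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        channel.write(data);                    // may write anything from 0 bytes to all of it
        if (data.hasRemaining()) {
            key.attach(data);                   // remember what is left
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        }
    }

    // Called from the selector loop when key.isWritable() is true.
    void continueWrite(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        ByteBuffer data = (ByteBuffer) key.attachment();
        channel.write(data);
        if (!data.hasRemaining()) {
            // Fully flushed: deregister OP_WRITE so select() stops reporting write-readiness.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
            key.attach(null);
        }
    }
}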
With TCP, we can write to the sender-side socket channel only until the socket buffers are filled up, and not after that. So if the receiver is slow in consuming the data, the sender-side socket buffers fill up and, as you mentioned, write() might return zero.
In any case, when there is data to be sent on the sender side, register the SocketChannel with the selector with OP_WRITE as the interest operation, and when the selector returns the SelectionKey, check key.isWritable() and try writing to that channel. As mentioned by Nilesh above, don't forget to deregister the OP_WRITE bit with the selector after writing the complete data.
