I have a server-client application and I've hit a weird speed bump with "login failure codes" of sorts.
What I want to do is send the code that describes the login's validity and close the OutputStream if necessary.
The problem with this is that the socket is closed before the client can read the response, which is leading to seemingly random and cryptic failures.
Is there a way, aside from using setSoLinger() and the like, to check that the last byte (or more) that was written has actually been read by the client?
Thanks.
In Java, the only way for your server application to know that the client has received something is to build this into the protocol: the client must send some kind of message stating that it received the data (maybe with some kind of checksum if you want), and the server must then wait for this message. Only then can it be sure of anything.
There is no other reliable way to know that the other endpoint received the data: neither .close() nor .flush() nor anything else guarantees reception by the other endpoint.
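For example, a minimal sketch of such an application-level acknowledgement on the server side (the method name, the status code parameter and the single ack byte are my own invention, not from your code):

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Send a login status code, then wait for a one-byte acknowledgement
// from the client before closing the connection.
static void sendLoginResult(Socket client, int statusCode) throws IOException {
    DataOutputStream out = new DataOutputStream(client.getOutputStream());
    out.writeInt(statusCode);                    // e.g. your "login failed" code
    out.flush();

    int ack = client.getInputStream().read();    // blocks until the client sends an ack byte (or closes)
    if (ack == -1) {
        // the client closed without acknowledging; log or handle as appropriate
    }
    client.close();                              // only now do we know the client actually read the code
}

The client side mirrors this: read the status code, write the ack byte, and only then close its own socket.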
Netty Version: 4.0.10.Final
I've written a client and a server using Netty. Here is what the client and the server do.
Server:
Wait for connection from client
Receive messages from client
If a message is bad, write error message (6 bytes), flush it,
close the socket and do not read any unread messages in the socket.
Otherwise continue reading messages. Do nothing with good messages.
Client:
Connect to server.
After writing N good messages, write one bad message and continue
writing M good messages. This process happens in a separate thread.
This thread is started after the channel is active.
If there is any response from the server, log it and close the
socket. (Note that the server responds only when there is an error.)
I've straced both the client and the server. I've found that server is closing connection after writing the error message. Client began seeing broken pipe errors when writing good messages after the bad message. This is because server detected bad message and responded with error message and closed socket. The connection is closed only after the write operation is complete, using a listener. Client is not reading error message from server always. Earlier, step (2) in the client was performed in the I/O thread; this caused the percentage of error messages received over K experiments to be really low (<10%). After moving step (2) to a separate thread, the percentage went up to about 70%. In any case it is not reliable. Does netty trigger channel read if the write fails due to broken pipe?
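For reference, this is roughly how the close-after-write is done (a sketch; the handler method and the response object are placeholders, not my actual code):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;

// Write the 6-byte error response, then close the channel only once
// that write has actually completed.
void writeErrorAndClose(ChannelHandlerContext ctx, ByteBuf errorResponse) {
    ctx.writeAndFlush(errorResponse)
       .addListener(ChannelFutureListener.CLOSE);   // closes the channel after the write finishes
}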
Update 1:
I'm clarifying and answering the questions asked here, so everybody can find the questions and clarifications in one place.
"You're writing a bad message that will cause a reset, followed by good messages that you already know won't get through, and trying to read a response that may have been thrown away. It doesn't make any sense to me whatsoever" - from EJP
-- In the real world the server could treat something as bad for whatever reason, which the client can't know in advance. For simplification, I said the client intentionally sends a bad message that causes a reset from the server. I would like all the good messages to be sent even if there are bad messages among the total messages.
What I'm doing is similar to the protocol implemented by Apple Push Notification Service.
If a message is bad, write error message (6 bytes), flush it, close the socket and do not read any unread messages in the socket. Otherwise continue reading messages.
That will cause a connection reset, which will be seen by the client as a broken pipe in Unix, Linux etc.
After writing N good messages, write one bad message and continue writing M good messages.
That will encounter the broken pipe error just mentioned.
This process happens in a separate thread.
Why? The whole point of NIO and therefore Netty is that you don't need extra threads.
I've found that server is closing connection after writing the error message.
Well that's what you said it does, so it does it.
Client began seeing broken pipe errors when writing good messages after the bad message.
As I said.
This is because server detected bad message and responded with error message and closed socket.
Correct.
Client is not reading error message from server always.
Due to the connection reset. The delivery of pending data ceases after a reset.
Does netty trigger channel read if the write fails due to broken pipe?
No, it triggers a read when data or an EOS arrives.
However your bizarre system design/protocol is making that unpredictable if not impossible. You're writing a bad message that will cause a reset, followed by good messages that you already know won't get through, and trying to read a response that may have been thrown away. It doesn't make any sense to me whatsoever. What are you trying to prove here?
Try a request-response protocol like everybody else.
The APN protocol appears to be quite awkward because it does not acknowledge successful receipt of a notification. Instead it just tells you which notifications it has successfully received when it encounters an error. The protocol is working on the assumption that you will generally send well formed notifications.
I would suggest that you need some sort of expiring cache (a LinkedHashMap might work here), and you need to use the opaque identifier field in the notification as a globally unique, ordered value. A sequence number will work (but you'll need to persist it if your client can be restarted).
Every time you generate an APN
set its identifier to the next sequence number
send it
place it in the LinkedHashMap with a string key of the sequence number concatenated with the current time (e.g. String key = sequenceNumber + "-" + System.currentTimeMillis())
If you receive an error you need to reopen the connection and resend all the APNs in the map with a sequence number higher than the identifier reported in the error. This is relatively easy. Just iterate through the map, removing any APN with a sequence number lower than or equal to the one reported. Then resend the remaining APNs in order, replacing them in the map with the current time (i.e. you remove an APN when you resend it, then re-insert it into the map with the new current time).
You'll need to periodically purge the map of old entries. You need to determine what a reasonable length of time is, based on how long it takes the APN service to return an error if you send a malformed APN. I suspect it'll be a matter of seconds (if not much quicker). If, for example, you're sending 10 APNs / second, and you know that the APN server will definitely respond within 30 seconds, a 30-second expiry time, purging every second, might be appropriate. Just iterate along the map removing any element whose key has a time component less than System.currentTimeMillis() - 30000 (for a 30-second expiry time). You'll need to synchronize threads appropriately.
I would catch any IOExceptions caused by writing and place the APN you were attempting to write in the map and resend.
What you cannot cope with is a genuine network error whereby you do not know if the APN service received the notification (or a bunch of notifications). You'll have to make a decision based on what your service is as to whether you resend the affected APNs immediately, or after some time period, or not at all. If you send after a time period you'll want to give them new sequence numbers at the point you send them. This will allow you to send new APNs in the meantime.
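For illustration, here is a rough sketch of that expiring cache (class and method names are mine, not from any library; payloads are plain byte arrays):

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class ApnResendCache {
    // Keys are "sequenceNumber-sendTimeMillis"; values are the raw notification payloads.
    private final LinkedHashMap<String, byte[]> pending = new LinkedHashMap<>();

    // Remember a notification at the moment it is sent (or resent).
    public synchronized void remember(long sequenceNumber, byte[] payload) {
        pending.put(sequenceNumber + "-" + System.currentTimeMillis(), payload);
    }

    // On an error response: drop everything up to and including the reported identifier,
    // hand back the rest in order. The caller resends each one and calls remember() again,
    // which re-inserts it with the new send time.
    public synchronized LinkedHashMap<Long, byte[]> takeEntriesToResend(long reportedIdentifier) {
        LinkedHashMap<Long, byte[]> toResend = new LinkedHashMap<>();
        for (Iterator<Map.Entry<String, byte[]>> it = pending.entrySet().iterator(); it.hasNext(); ) {
            Map.Entry<String, byte[]> e = it.next();
            long seq = Long.parseLong(e.getKey().split("-")[0]);
            it.remove();                                   // every entry is either dropped or handed back
            if (seq > reportedIdentifier) {
                toResend.put(seq, e.getValue());
            }
        }
        return toResend;
    }

    // Periodically purge entries older than maxAgeMillis (e.g. 30000 for a 30-second expiry).
    public synchronized void purgeOlderThan(long maxAgeMillis) {
        long cutoff = System.currentTimeMillis() - maxAgeMillis;
        for (Iterator<Map.Entry<String, byte[]>> it = pending.entrySet().iterator(); it.hasNext(); ) {
            long sentAt = Long.parseLong(it.next().getKey().split("-")[1]);
            if (sentAt < cutoff) {
                it.remove();
            } else {
                break;   // insertion order roughly matches send order, so stop at the first young entry
            }
        }
    }
}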
For hours now I have been thinking about how I should use Java sockets to send and receive data. I hope you can help me find a good way to do it.
I have built a server/client structure where the client connects to the server and gets its own thread handling a request-response exchange. The information is encapsulated in XML.
At the beginning the server starts a blocking socket read. First the client uses writeInt() to transmit the length of the XML message. After that the server reads that many bytes and parses the message. After the transmission the client goes into the receive state and waits for a response.
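For reference, the framing looks roughly like this (method names are mine):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sender: write the length first, then the XML bytes.
static void sendXml(DataOutputStream out, String xml) throws IOException {
    byte[] bytes = xml.getBytes(StandardCharsets.UTF_8);
    out.writeInt(bytes.length);
    out.write(bytes);
    out.flush();
}

// Receiver: read the length, then exactly that many bytes.
static String readXml(DataInputStream in) throws IOException {
    int length = in.readInt();
    byte[] bytes = new byte[length];
    in.readFully(bytes);               // blocks until the whole message has arrived
    return new String(bytes, StandardCharsets.UTF_8);
}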
That is fine, but after the client authenticates, the server just waits for whatever comes next and blocks.
But what should I do when the server now has information which needs to be transmitted? I could try to break out of the blocking read and send the message to the client. But what happens if the client decides at the same moment that it also has a message and begins to send? At that moment no one will be listening.
For that maybe I could use some sort of buffer but I have the feeling that this is not the way I should go.
I have been searching for a while and found some interesting information but I didn't understand them well. Sadly my book is only about this simple socket model.
Maybe I should use two threads, one sending and one receiving. The server has a database where the messages are stored while waiting for transmission. So when the server receives a message it stores that message in the database, and the answer, like "message received", would also be stored in the database. The sender thread would check whether there are new messages and would send "message received" to the client. That approach would not send the answer within milliseconds, but I could imagine that it would work well.
I hope that I gave you enough information about what I am trying to do. What would you recommend for implementing that communication?
Thanks
But what should I do when the server now has information which needs to be transmitted?
I would make writes from the server synchronized. This will allow you to respond to requests in one thread, but also send other messages as required by another thread. For the client you can have a separate thread doing the receiving.
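A rough sketch of that idea (the wrapper class is hypothetical):

import java.io.DataOutputStream;
import java.io.IOException;

// All server-side writes for one client go through this object, so the
// request-response thread and any "push" thread cannot interleave their bytes.
class ClientWriter {
    private final DataOutputStream out;

    ClientWriter(DataOutputStream out) {
        this.out = out;
    }

    synchronized void sendMessage(byte[] xmlBytes) throws IOException {
        out.writeInt(xmlBytes.length);     // same length-prefixed framing as before
        out.write(xmlBytes);
        out.flush();
    }
}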
A simpler model may be to have two sockets. One socket works as you do now, and when you "login" a unique id is sent to the client. At this point the client opens a second connection with a dedicated thread for listening to asynchronous events. It passes the unique id so the server knows which client the asynchronous messages are for.
This will give you a simple synchronous pattern as you have now and a simple asynchronous pattern. The only downside is you have two socket connections and you have to coordinate them.
I am testing the behaviour of some client software, and need to write some software that emulates router-like functionality, preferably using something simple like UDP sockets. All it needs to do is receive the packet, alter the time to live, and send it back. Is this possible in regular Java? Or do you do something like the following:
Listen on Socket A
For EACH UDP packet received, open a NEW socket, set the time to live on that socket, and send it back (or is this not possible/efficient?)
Receiver gets packet with altered values that appear like it has traversed some hops (but in reality hasn't)
So two approaches may be possible: editing the received packet directly (and then simply sending it back), or constructing a new packet, copying the values from the original one, and setting the appropriate headers/socket options before sending it out.
EDIT: the 'router' does not do any complex routing at all, such as forwarding to other routers... it simply decrements the TTL header field of the received message and sends the message directly back to the client.
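For what it's worth, here is a minimal sketch of the receive-and-send-back loop in plain Java (the port number is arbitrary). As far as I can tell, java.net exposes neither the received packet's TTL nor a per-packet TTL for unicast sends (MulticastSocket.setTimeToLive is documented for multicast), so the TTL manipulation itself would likely need raw sockets or a native library:

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpEchoBack {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(9876);
        byte[] buf = new byte[1500];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);                              // blocks for the next datagram
            // send the same payload straight back to the sender
            socket.send(new DatagramPacket(packet.getData(), packet.getLength(),
                                           packet.getAddress(), packet.getPort()));
        }
    }
}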
Please refer to the API of the Socket and ServerSocket classes. Most server implementations for a variety of protocols accept packets on a standard port like 80 and send the response from an ephemeral port.
Background
My application gathers data from the phone and sends it to a remote server.
The data is first stored in memory (or on file when it's big enough) and every X seconds or so the application flushes that data and sends it to the server.
It's mission critical that every single piece of data is sent successfully; I'd rather send the data twice than not at all.
Problem
As a test I set up the app to send data with a timestamp every 5 seconds, which means that every 5 seconds a new line appears on the server.
If I kill the server I expect the lines to stop, they should now be written to memory instead.
When I enable the server again I should be able to confirm that no events are missing.
The problem, however, is that when I kill the server it takes about 20 seconds for IO operations to start failing, meaning that during those 20 seconds the app happily sends the events and removes them from memory, but they never reach the server and are lost forever.
I need a way to make certain that the data actually reaches the server.
This is possibly one of the more basic TCP questions, but nonetheless I haven't found any solution to it.
Stuff I've tried
Setting Socket.setTcpNoDelay(true)
Removing all buffered writers and just using OutputStream directly
Flushing the stream after every send
Additional info
I cannot change how the server responds meaning I can't tell the server to acknowledge the data (more than mechanics of TCP that is), the server will just silently accept the data without sending anything back.
Snippet of code
Initialization of the class:
socket = new Socket(host, port);
socket.setTcpNoDelay(true);
Where data is sent:
while (!dataList.isEmpty()) {
    String data = dataList.removeFirst();
    inMemoryCount -= data.length();
    try {
        OutputStream os = socket.getOutputStream();
        os.write(data.getBytes());
        os.flush();
    }
    catch (IOException e) {
        inMemoryCount += data.length();
        dataList.addFirst(data);
        socket = null;
        return false;
    }
}
return true;
Update 1
I'll say this again, I cannot change the way the server behaves.
It receives data over TCP and UDP and does not send any data back to confirm receipt. This is a fact, and sure, in a perfect world the server would acknowledge the data, but that will simply not happen.
Update 2
The solution posted by Fraggle works perfectly (closing the socket and waiting for the input stream to be closed).
This however comes with a new set of problems.
Since I'm on a phone I have to assume that the user cannot send an infinite amount of bytes and I would like to keep all data traffic to a minimum if possible.
I'm not worried by the overhead of opening a new socket, those few bytes will not make a difference. What I am worried about however is that every time I connect to the server I have to send a short string identifying who I am.
The string itself is not that long (around 30 characters) but that adds up if I close and open the socket too often.
One solution is to "flush" the data only every X bytes; the problem is I have to choose X wisely: if it's too big, too much duplicate data will be sent when the socket goes down, and if it's too small, the overhead is too big.
Final update
My final solution is to "flush" the socket by closing it every X bytes, and if all didn't go well, those X bytes will be sent again.
This will possibly create some duplicate events on the server but that can be filtered there.
Dan's solution is the one I'd suggest right after reading your question; he's got my up-vote.
Now can I suggest working around the problem? I don't know if this is possible with your setup, but one way of dealing with badly designed software (this is your server, sorry) is to wrap it, or in fancy-design-pattern-talk provide a facade, or in plain-talk put a proxy in front of your pain-in-the-behind server. Design meaningful ack-based protocol, have the proxy keep enough data samples in memory to be able to detect and tolerate broken connections, etc. etc. In short, have the phone app connect to a proxy residing somewhere on a "server-grade" machine using "good" protocol, then have the proxy connect to the server process using the "bad" protocol. The client is responsible for generating data. The proxy is responsible for dealing with the server.
Just another idea.
Edit 0:
You might find this one entertaining: The ultimate SO_LINGER page, or: why is my tcp not reliable.
The bad news: You can't detect a failed connection except by trying to send or receive data on that connection.
The good news: As you say, it's OK if you send duplicate data. So your solution is not to worry about detecting failure in less than the 20 seconds it now takes. Instead, simply keep a circular buffer containing the last 30 or 60 seconds' worth of data. Each time you detect a failure and then reconnect, you can start the session by resending that saved data.
(This could get to be problematic if the server repeatedly cycles up and down in less than a minute; but if it's doing that, you have other problems to deal with.)
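For illustration, a sketch of such a rolling buffer (the window size, class and method names are made up):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Remembers roughly the last 60 seconds of sent lines so they can be
// replayed after a reconnect.
class RecentSendBuffer {
    private static final long WINDOW_MILLIS = 60_000;

    private static final class Entry {
        final long sentAt;
        final String line;
        Entry(long sentAt, String line) { this.sentAt = sentAt; this.line = line; }
    }

    private final Deque<Entry> entries = new ArrayDeque<>();

    // Call this for every line as it is written to the socket.
    synchronized void remember(String line) {
        long now = System.currentTimeMillis();
        entries.addLast(new Entry(now, line));
        while (!entries.isEmpty() && entries.peekFirst().sentAt < now - WINDOW_MILLIS) {
            entries.removeFirst();             // drop anything older than the window
        }
    }

    // Call this right after reconnecting and resend everything it returns.
    synchronized List<String> snapshotForResend() {
        List<String> copy = new ArrayList<>();
        for (Entry e : entries) {
            copy.add(e.line);
        }
        return copy;
    }
}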
See the accepted answer here: Java Sockets and Dropped Connections
socket.shutdownOutput();
wait for inputStream.read() to return -1, indicating the peer has also shutdown its socket
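In code that looks roughly like this (a sketch, assuming the peer closes its side once it has consumed everything and sees your EOF):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

static void finishAndConfirm(Socket socket) throws IOException {
    socket.shutdownOutput();                 // sends FIN; no more writes from our side
    InputStream in = socket.getInputStream();
    while (in.read() != -1) {
        // discard anything the peer still sends; -1 means it has shut down its side too
    }
    socket.close();
}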
Won't work: server cannot be modified
Can't your server acknowledge every message it receives with another packet? The client won't remove the messages that the server did not acknowledge yet.
This will have performance implications. To avoid slowing down you can keep on sending messages before an acknowledgement is received, and acknowledge several messages in one return message.
If you send a message every 5 seconds, and disconnection is not detected by the network stack for 30 seconds, you'll have to store just 6 messages. If 6 sent messages are not acknowledged, you can consider the connection to be down. (I suppose that logic of reconnection and backlog sending is already implemented in your app.)
What about sending UDP datagrams on a separate UDP socket while making the remote host respond to each, and then when the remote host doesn't respond, you kill the TCP connection? It detects a link breakage quickly enough :)
Use HTTP POST instead of a socket connection; then you can send a response to each post. On the client side you only remove the data from memory if the response indicates success.
Sure, it's more overhead, but it gives you what you want 100% of the time.
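A rough sketch of that client-side check (the URL and payload handling are placeholders):

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Only treat the data as delivered once the server answers with a 2xx status;
// otherwise keep it buffered and retry later.
static boolean postData(String data) {
    try {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/ingest").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(data.getBytes(StandardCharsets.UTF_8));
        }
        int status = conn.getResponseCode();     // sends the request and reads the status line
        return status >= 200 && status < 300;
    } catch (IOException e) {
        return false;                            // network error: keep the data and retry
    }
}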
In some circumstances I wish to send an error message from a server to client using non-blocking I/O (SocketChannel.write(ByteBuffer)) and then disconnect the client. Assuming I write the full contents of the message and then immediately disconnect I presume the client may not receive this message as I'm guessing that the OS hasn't actually sent the data at this point.
Is this correct, and if so is there a recommended approach to dealing with this situation?
I was thinking of using a timer whereby if I wish to disconnect a client I send a message and then close their connection after 1-2 seconds.
In non-blocking mode, SocketChannel.write will return the number of bytes that could be sent to the network immediately without blocking. Your question makes me think that you expect the write method to consume the entire buffer and asynchronously try to send the remaining data to the network, but that is not how it works.
If you really need to make sure that the error message is sent to the client before disconnecting the socket, I would simply enable blocking before calling the write method. Using non-blocking mode, you would have to call write in a loop, counting the number of bytes being sent by each invocation and exit the loop when you've succeeded to pass the entire message to the socket (bad solution, I know, unnecessary code, busy wait and so on).
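A sketch of the blocking-mode variant (note: if the channel is still registered with a Selector, its key has to be cancelled before configureBlocking(true) is legal):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

static void sendErrorAndClose(SocketChannel channel, ByteBuffer errorMessage) throws IOException {
    channel.configureBlocking(true);          // the final write now blocks until the buffer is drained
    while (errorMessage.hasRemaining()) {
        channel.write(errorMessage);          // in blocking mode this loop normally runs once
    }
    channel.close();
}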
You may be better off launching a thread and synchronously writing data to the channel. The async API is more geared toward "one thread dispatching multiple channels" and is not really intended for fire-and-forget communications.
The close() method of sockets makes sure that everything written beforehand is actually sent before the socket is really closed. However, this assumes that your write() was able to copy all the data into the TCP stack's output window, which will not always be the case. For solutions to this, see the other answers.