I've recently been writing Java code to send notifications to the Apple Push Notification server. The problem I'm running into is what happens if I create the socket and then disconnect from the network. I've bounced around articles online and most suggest relying on the methods:
socket.setKeepAlive(false);
socket.setSoTimeout(1000);
Specifically the "setSoTimeout" method. But the javadoc states that setSoTimeout will only throw an exception when reading from the InputStream. The Apple Push Notification server never puts any data on the InputStream, so I can never read anything from it. Does anyone have suggestions on how to detect a network disconnect without using the socket's InputStream?
You can only reliably detect that a Socket has been disconnected when you attempt to read from or write to it. Reading is better, because a write often takes a while to detect a failure.
The server doesn't need to write anything for you to attempt a read. If you have a server which never writes, you will either read nothing or detect a failure.
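For what it's worth, here is a rough sketch of that read-based check, assuming a one-second read timeout (the class and method names are mine). Note that a timeout only means "no data yet", so a silently dead link is not caught this way; a -1 or an IOException means the connection is gone.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Minimal sketch: poll the InputStream with a short read timeout.
// A timeout means "nothing to read, socket still looks up";
// -1 or an IOException means the connection has gone away.
public class ConnectionProbe {
    public static boolean isConnectionAlive(Socket socket) {
        try {
            socket.setSoTimeout(1000);          // bound the blocking read
            InputStream in = socket.getInputStream();
            int b = in.read();                  // APNS normally sends nothing
            if (b == -1) {
                return false;                   // orderly close from the peer
            }
            // If a byte did arrive, hand it to whatever parses server responses.
            return true;
        } catch (SocketTimeoutException e) {
            return true;                        // no data within the timeout
        } catch (IOException e) {
            return false;                       // reset, broken pipe, etc.
        }
    }
}
```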
A quick clarification: APNS will return data on your InputStream if you are using the enhanced notification format and some error occurs. You should therefore make sure you do not ignore your InputStream...
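If it helps, the error response in that (legacy) enhanced binary format is, as far as I remember, a 6-byte packet: a 1-byte command (8), a 1-byte status code, and the 4-byte identifier of the rejected notification. A sketch of draining it, with the layout to be verified against Apple's documentation:

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.net.Socket;

// Sketch of reading the APNS error response; the 6-byte layout
// (1-byte command, 1-byte status, 4-byte notification identifier) is from
// memory of the legacy enhanced format, so verify it against Apple's docs.
public class ApnsErrorReader {
    public static void drainErrorResponse(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        try {
            int command = in.readUnsignedByte();   // 8 indicates an error response
            int status = in.readUnsignedByte();    // status code from Apple's error table
            int identifier = in.readInt();         // identifier of the rejected notification
            System.err.printf("APNS error: command=%d status=%d id=%d%n",
                    command, status, identifier);
        } catch (EOFException e) {
            // Apple closed the connection without sending an error packet.
        }
    }
}
```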
Unless you are doing this as a personal learning project, you might want to take a look at existing APNS-specific Java libraries that deal with all the communication details for you. Communicating reliably with APNS is much more difficult than it looks at first, especially when you get to the error management part which involves various vague or undocumented details.
Related
I'm debugging the thread which manages a socket, and I noticed that when I call getOutputStream() the debug session breaks with no exception thrown. I've even added breakpoints inside Socket.getOutputStream() itself, but there is no way to see what's wrong.
The Java server correctly accepts the connection and waits for input (by checking inputStream.available()).
Both Socket and ServerSocket come from SSLSocketFactory and SSLServerSocketFactory.
I'm using Android Studio.
What am I doing wrong?
Edit: I've even tried changing the structure from a Thread to an AsyncTask, but the result is the same. This is frustrating.
Debugging network connections is a bit tricky as time-outs may occur.
I am also unsure whether breakpoints on non-app code (like Socket.getOutputStream()) really work. The SDK code in Android Studio may be different from the code used by your devices, which means that breakpoints (which are set to a specific line) may end up in a totally different method (if they work at all).
Therefore I would suggest changing your code to add log statements and, if necessary, sleep calls to slow down the important parts.
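Something along these lines (the tag, class and method names are mine, purely illustrative): wrap the suspicious call in log statements so the failure point shows up in Logcat even when breakpoints on framework code behave oddly.

```java
import android.util.Log;

import java.io.IOException;
import java.io.OutputStream;

import javax.net.ssl.SSLSocket;

// Illustrative helper: log the socket state around getOutputStream()
// so the last message in Logcat tells you where things stopped.
public class SocketDebugHelper {
    private static final String TAG = "SocketDebug";

    public static OutputStream openOutput(SSLSocket socket) throws IOException {
        Log.d(TAG, "connected=" + socket.isConnected() + " closed=" + socket.isClosed());
        Log.d(TAG, "calling getOutputStream()");
        try {
            OutputStream out = socket.getOutputStream();
            Log.d(TAG, "getOutputStream() returned");
            return out;
        } catch (IOException e) {
            Log.e(TAG, "getOutputStream() failed", e);
            throw e;
        }
    }
}
```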
For SSL traffic I strongly suggest also looking at the transferred data. There are apps that capture the traffic on-device without root permissions. Later you can analyze the capture on a PC using Wireshark and see whether the problem is caused by a communication issue between your client and the server.
This is a similar answer, though it is not exactly what I want. I want to do the following two things:
1. I want to find out whether all the bytes have been sent to the receiver.
2. I also want to know the current remaining capacity of the socket's output buffer, without attempting a write to it.
Taking your numbered points in order:
1. The only way you can find that out is by having the peer application acknowledge the receipt.
2. There isn't such an API in Java. As far as I know there isn't one at the BSD sockets layer either, but I'm not familiar with the outer limits of Linux, where they may have introduced some such thing.
You cannot know. The data is potentially buffered by the OS and TCP/IP stack, and there is no method for determining if it has actually been placed on the wire. Even knowing it was placed on the wire is no guarantee of anything as it could be lost in transit.
For UDP you will never know if the data was received by the destination system unless you write a UDP-based protocol such that the remote system acknowledges the data.
For TCP the protocol stack will ensure that your code is notified if the data is lost in transit, but it may be many seconds before you receive confirmation.
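To make the acknowledgment idea concrete, here is a minimal sketch of an application-level acknowledgment over TCP, assuming a newline-delimited text protocol; the "ACK" reply and the 5-second timeout are conventions invented for this sketch.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Minimal sketch: the sender only considers a message delivered once the
// peer application explicitly answers "ACK" on the same connection.
public class AckedSender {
    public static void sendWithAck(Socket socket, String message) throws IOException {
        PrintWriter out = new PrintWriter(
                new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8), true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));

        out.println(message);          // send and flush
        socket.setSoTimeout(5000);     // don't wait forever for the acknowledgment
        String reply = in.readLine();  // peer must answer "ACK" once it has the data
        if (!"ACK".equals(reply)) {
            throw new IOException("Peer did not acknowledge: " + reply);
        }
    }
}
```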
I'm currently writing something to work around this Java bug:
http://bugs.sun.com/view_bug.do?bug_id=5049299
Basically, I've got a lightweight C server that runs on the same machine as the Java server. I'm adding a feature to the C server where I can request it to fork/run a new process via a socket and pass back stdin/stdout/stderr. On the Java side, I've created something that mimics the behavior of ProcessBuilder and Runtime.exec(), but over the socket.
The problem arises with stderr. Java sockets don't have an error stream, so I'm at a bit of a loss as to how to get it back over. I've come up with two potential solutions:
1. Create a second socket (probably from the C server back to the Java server) over which I just send stderr back.
2. Interleave the output of the process with the stderr of the process and then parse them apart in Java back into separate streams.
Both solutions have inherent problems, so I'd love to hear any feedback anybody has.
BONUS: Give me an easy, guaranteed solution to the Java bug that doesn't involve me doing any of this and I'll be your best friend forever.
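For what it's worth, the second option is usually easier with a small binary framing layer than with textual markers. Here is a rough sketch of the Java-side demultiplexer, assuming the C server prefixes each chunk with a one-byte stream tag and a 4-byte big-endian length (this framing is an invention of mine, not part of either server):

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Demultiplexes frames of the form [1-byte tag][4-byte length][payload]
// coming from the C server into separate stdout/stderr sinks.
public class StreamDemultiplexer {
    public static final int TAG_STDOUT = 1;
    public static final int TAG_STDERR = 2;

    public static void pump(InputStream socketIn, OutputStream stdout, OutputStream stderr)
            throws IOException {
        DataInputStream in = new DataInputStream(socketIn);
        byte[] buf = new byte[8192];
        try {
            while (true) {
                int tag = in.readUnsignedByte();      // which stream this frame belongs to
                int remaining = in.readInt();         // payload length in bytes
                while (remaining > 0) {
                    int n = in.read(buf, 0, Math.min(buf.length, remaining));
                    if (n == -1) throw new EOFException("Truncated frame");
                    (tag == TAG_STDERR ? stderr : stdout).write(buf, 0, n);
                    remaining -= n;
                }
            }
        } catch (EOFException e) {
            // Child process finished and the C server closed the socket.
        }
    }
}
```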
To solve this, I took advantage of the fact that the two servers are running on the same machine. I merely wrote the stderr to a file which I read in the other server. Not the most elegant solution in the world, but quite simple and it works.
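In case it is useful to anyone, a sketch of that file-based stderr channel on the Java side: the C server appends the child's stderr to a file on the shared machine, and the Java side tails that file. The path handling, poll interval, and names here are assumptions, not the actual code.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.util.concurrent.atomic.AtomicBoolean;

// Tails the stderr file written by the C server until told to stop,
// draining whatever is left after the stop flag is set.
public class StderrFileTail {
    public static void tail(String path, OutputStream sink, AtomicBoolean stop)
            throws IOException, InterruptedException {
        long position = 0;
        try (RandomAccessFile in = new RandomAccessFile(path, "r")) {
            while (!stop.get() || in.length() > position) {
                long length = in.length();
                if (length > position) {                 // new stderr output was appended
                    in.seek(position);
                    byte[] chunk = new byte[(int) (length - position)];
                    in.readFully(chunk);
                    sink.write(chunk);
                    position = length;
                } else {
                    Thread.sleep(100);                   // nothing new yet; poll again
                }
            }
        }
    }
}
```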
I have been doing socket programming for many years, but I have never had a missed message using TCP - until now. I have a Java server and a client in C - both on the localhost. They are sending short messages back and forth as strings, with some delays in between. I have one particular case where a message never arrives on the client side. It is reproducible, but oddly machine-dependent.
To give some more details, I can debug the server side and see the send followed by the flush. I can attach to the client and walk through the select calls (in a loop) but it simply never shows up. Has anyone experienced this and is there an explanation other than a coding error?
In other words, if you have a connected socket and do a write on one side and a read on the other, what can happen in the middle to cause something like this?
One other detail - I've used tcpdump on the loopback interface and can see the missed message.
I've seen this happen in SMTP transactions before. Do you have a virus scanner running on that machine? If so try turning it off and see if that makes a difference.
Otherwise, I'd suggest installing Wireshark so you can take a look at what's actually happening.
Finally - after sniffing some more, I found the problem. Two messages were getting sent before a read (sometimes, but rarely...), so both were returned by a single read, but only the first was handled. This is why it seemed as though the second message never arrived: it was buried in the receive buffer.
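The general fix is to treat TCP as a byte stream and loop over every complete message the read returned, instead of assuming one message per read. A sketch of that loop on the Java side, assuming newline-delimited messages (the delimiter and the handler interface are assumptions):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Sketch of stream-oriented reading: accumulate bytes and hand off every
// complete (newline-delimited) message, since one read() may return several
// messages or only part of one.
public class MessageReader {
    public interface MessageHandler {
        void onMessage(String message);
    }

    public static void readLoop(InputStream in, MessageHandler handler) throws IOException {
        StringBuilder pending = new StringBuilder();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            pending.append(new String(buf, 0, n, StandardCharsets.UTF_8));
            int newline;
            while ((newline = pending.indexOf("\n")) != -1) {   // handle EVERY buffered message
                handler.onMessage(pending.substring(0, newline));
                pending.delete(0, newline + 1);
            }
        }
    }
}
```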
I have the following situation: using a "classical" Java server (using ServerSocket), I would like to detect as rapidly as possible when the connection with the client fails unexpectedly (i.e. non-gracefully, without a FIN packet).
The way I'm simulating this is as follows:
I'm running the server on a Linux box
I connect with telnet to the box
After the connection has succeeded I add a "DROP" rule to the box's firewall
What happens is that the sending blocks after ~10k of data. I don't know for how long, but I've waited more than 10 minutes on several occasions. What I've researched so far:
Socket.setSoTimeout - however this affects only reads. If there are only writes, it doesn't have an effect
Checking for errors with PrintWriter.checkError(), since PW swallows the exceptions - however it never returns true
How could I detect this error condition, or at least configure the timeout value? (either at the JVM or at the OS level)
Update: after ~20min checkError returned true on the PrintWriter (using the server JVM 1.5 on a CentOS machine). Where is this timeout value configured?
The ~20 min timeout is because of standard TCP settings in Linux. It's really not a good idea to mess with them unless you know what you're doing. I had a similar project at work, where we were testing connection loss by disconnecting the network cable and things would just hang for a long time, exactly like you're seeing. We tried messing with the following TCP settings, which made the timeout quicker, but it caused side effects in other applications where connections would be broken when they shouldn't, due to small network delays when things got busy.
net.ipv4.tcp_retries2
net.ipv4.tcp_syn_retries
If you check the man page for tcp (man tcp) you can read about what these settings mean and maybe find other settings that might apply. You can either set them directly under /proc/sys/net/ipv4 or use sysctl.conf. These two were the ones we found made the send/recv fail more quickly. Try setting them both to 1 and you'll see the send call fail a lot faster. Make sure to take note of the current settings before changing them.
I will reiterate that you really shouldn't mess with these settings. They can have side effects on the OS and other applications. The best solution is like Kitson says, use a heartbeat and/or application level timeout.
Also look into how to create a non-blocking socket, so that the send call won't block like that. Although keep in mind that sending with a non-blocking socket is usually successful as long as there's room in the send buffer. That's why it takes around 10k of data before it blocks, even though you broke the connection before that.
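If you go down that route, here is a minimal sketch of a non-blocking write with java.nio (the class and method names are mine):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Minimal sketch of a non-blocking write: write() returns the number of
// bytes the OS send buffer accepted (possibly 0) instead of blocking, so
// the application stays responsive while the link is broken.
public class NonBlockingSender {
    public static int trySend(SocketChannel channel, ByteBuffer data) throws IOException {
        channel.configureBlocking(false);   // from now on, writes never block
        return channel.write(data);         // 0 means "send buffer full, try again later"
    }
}
```

In real code you would register the channel with a Selector for OP_WRITE and retry when the buffer drains, rather than polling in a loop.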
The only sure-fire way is to generate application-level "checks" instead of relying on the transport level. For example, a bi-directional heartbeat message, where if either end does not get the expected message, it closes and resets the connection.
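A rough sketch of one side of such a heartbeat, assuming a line-based "PING"/"PONG" exchange (the message names and the 5-second timeout are invented here); the peer would run the mirror image and close the socket when the check fails:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

// Send "PING" and require a "PONG" within a few seconds; otherwise treat
// the connection as dead so the caller can close and reconnect.
public class Heartbeat {
    public static boolean peerIsAlive(Socket socket) {
        try {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
            socket.setSoTimeout(5000);        // how long to wait for the PONG
            out.println("PING");
            return "PONG".equals(in.readLine());
        } catch (SocketTimeoutException e) {
            return false;                     // no answer in time: assume the link is gone
        } catch (IOException e) {
            return false;                     // reset / broken pipe
        }
    }
}
```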