Why does the SSLSocket write operation not have a timeout? - java

In Java, the write operation on the SSLSocket API is blocking, and it does not support a timeout either.
Can someone please explain?
Can there be a situation where the write operation blocks a thread forever? I checked on the Internet and it seems there is a possibility of blocking forever.
How can I add a timeout for the write operation?
My application creates two threads: one for reading and one for writing.

1- Can there be a situation where the write operation blocks a thread forever? I checked on the Internet and it seems there is a possibility of blocking forever.
Yes there can. Though not literally forever :-)
2- Can someone please suggest how we can add a timeout for the write operation?
You cannot do it with Java's implementation of sockets, SSL sockets, etc. Java sockets support connect timeouts and read timeouts, but not write timeouts.
See also: How can I set Socket write timeout in Java?
(Why? Well socket write timeouts were requested in bug ID JDK-4031100 back in 1997 but the bug was closed with status "WontFix". Read the link for the details.)
The alternatives include:
Use a Timer to implement the timeout, and interrupt the thread or close the Socket if the timer goes off (see the sketch after this list). Note that both interrupting and closing will leave you in a state where you need to abandon the socket.
Use NIO selectors and non-blocking I/O.
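As a rough illustration of the Timer approach, here is a minimal sketch, assuming you hold a reference to the Socket (or SSLSocket) being written to; the scheduler closes the socket if the write has not completed in time, which unblocks the writer with an exception, after which the socket must be abandoned:

    import java.io.OutputStream;
    import java.net.Socket;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    public class WriteWithTimeout {

        private static final ScheduledExecutorService WATCHDOG =
                Executors.newSingleThreadScheduledExecutor();

        // Writes data to the socket, closing the socket if the write has not
        // completed within timeoutMillis. The socket is unusable after a timeout.
        static void writeWithTimeout(Socket socket, byte[] data, long timeoutMillis)
                throws Exception {
            // Arm a task that closes the socket if the write takes too long.
            ScheduledFuture<?> killer = WATCHDOG.schedule(() -> {
                try {
                    socket.close();   // unblocks the stuck write with an exception
                } catch (Exception ignored) {
                }
            }, timeoutMillis, TimeUnit.MILLISECONDS);

            try {
                OutputStream out = socket.getOutputStream();
                out.write(data);      // may block; fails once the socket is closed
                out.flush();
            } finally {
                killer.cancel(false); // write finished in time, disarm the watchdog
            }
        }
    }

The same approach works for an SSLSocket, since closing the underlying socket breaks the blocked write; but, as noted above, the connection cannot be reused afterwards.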

Because:
If such a facility is needed at all, it is needed at the TCP level, not just the SSL level.
There is no API for it at the TCP level, and I don't mean just in Java: there is no C level API for it either, except maybe on a couple of platforms.
If you added it at the SSL level, a write timeout event would leave the connection in an indeterminate state which would mean that it had to be closed, because you couldn't know how much data had been transmitted, so you couldn't maintain integrity at the SSL level.
To address your specific questions:
Can there be a situation where the write operation blocks a thread forever? I checked on the Internet and it seems there is a possibility of blocking forever.
Yes. I've seen an application blocked for several days in such a situation. Although not, as @StephenC rightly says, forever. We haven't lived that long yet.
How can I add a timeout for the write operation?
You can do it at the TCP level with non-blocking I/O and a Selector, and you can layer an SSLEngine on top of that to get SSL, but it is a tedious and highly error-prone exercise that many have tried: few have succeeded. Not for the faint-hearted.
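For illustration only, here is a minimal sketch of the plain-TCP part of that approach (the SSLEngine layering is deliberately omitted, since it adds considerable handshake and buffer-management code); it assumes an already-connected SocketChannel:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public class NioWriteTimeout {

        // Writes the buffer fully, or throws if the deadline passes while the
        // channel is not writable.
        static void writeWithDeadline(SocketChannel channel, ByteBuffer buf, long timeoutMillis)
                throws IOException {
            channel.configureBlocking(false);
            long deadline = System.currentTimeMillis() + timeoutMillis;

            try (Selector selector = Selector.open()) {
                SelectionKey key = channel.register(selector, SelectionKey.OP_WRITE);
                while (buf.hasRemaining()) {
                    long remaining = deadline - System.currentTimeMillis();
                    if (remaining <= 0 || selector.select(remaining) == 0) {
                        throw new IOException("write timed out");
                    }
                    selector.selectedKeys().clear();
                    channel.write(buf);   // writes as much as the send buffer allows
                }
                key.cancel();
            }
        }
    }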

Related

What happens to a thread on a server crash?

Let's say I am following a client-server model, and one client, which is actually a thread, blocks on the remote server, which is actually a monitor.
What will happen to that thread, if the server crashes for some reason?
The answer is: it depends:
it is possible that this thread just sits there, waiting forever.
but it is also possible that an exception is thrown at some point, and that thread somehow comes back "alive".
Things that play a role here:
the underlying TCP stack
how you use that stack (for example, it is possible to give timeout values to sockets, which causes exceptions to be thrown in timeout situations)
how exactly your client is coded
In other words: nobody can tell you what your client application will do, because we do not have any information about the implementation and configuration details you are dealing with.
Or, changing perspective: you should make sure that your client uses some form of timeout. As said, this could be done by setting a timeout on the sockets used for communication, or by having another thread that monitors all threads talking to a server and kicks in at some point to prevent them from waiting forever.
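As an illustration of that second idea, here is a minimal watchdog sketch; it assumes each worker thread records its last-activity timestamp in a shared map (the ConnectionWatchdog class and its methods are made up for this example):

    import java.net.Socket;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ConnectionWatchdog {

        // Last-activity timestamp per socket, updated by the worker threads.
        private final Map<Socket, Long> lastActivity = new ConcurrentHashMap<>();
        private final long maxIdleMillis;

        ConnectionWatchdog(long maxIdleMillis) {
            this.maxIdleMillis = maxIdleMillis;
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(this::closeStalledConnections, 1, 1, TimeUnit.SECONDS);
        }

        // Worker threads call this after every successful read or write.
        void touch(Socket socket) {
            lastActivity.put(socket, System.currentTimeMillis());
        }

        // Closes sockets whose worker has made no progress recently; the blocked
        // thread then gets an exception instead of waiting forever.
        private void closeStalledConnections() {
            long now = System.currentTimeMillis();
            lastActivity.forEach((socket, last) -> {
                if (now - last > maxIdleMillis) {
                    try {
                        socket.close();
                    } catch (Exception ignored) {
                    }
                    lastActivity.remove(socket);
                }
            });
        }
    }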
Long story short: if you are serious about such issues, you have to do a lot of research; a good starting point would be the old classic "Release It!" by Michael Nygard.

Thread-per-request tcp server

I am just trying to understand how to write a thread-per-request TCP server in Java.
I have already written a thread-per-connection server that runs serverSocket.accept() and creates a new thread each time a new connection comes in.
How could this be modified into a thread-per-request server?
I suppose the incoming connections could be put into some sort of queue, but how would you know which one has issued a request & is ready for service?
I am suspecting that NIO is necessary here, but not sure.
Thanks.
[edit]
To be clear - The original "server" is just a loop that I have written that waits for a connection and then passes it to a new thread.
The lecturer has mentioned "thread-per-request" architecture, and I was wondering how it worked "under the hood".
My first idea about how it works, may be completely wrong.
You can use a Selector to achieve your goal. Here is a good example you can refer to.
You can use plain I/O, blocking NIO, non-blocking NIO, or asynchronous NIO.2. You can have multiple threads per connection (or a shared worker thread pool), but unless those threads are waiting for slow services like databases, this might not be any faster (it can be much slower if you want low latency).
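A rough sketch of the Selector-based approach follows; it assumes one request is one chunk of text that fits in a single read (a real protocol would need proper framing and per-connection buffering), and hands each ready request to a shared worker pool rather than a dedicated per-connection thread:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;
    import java.util.Iterator;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPerRequestServer {

        public static void main(String[] args) throws IOException {
            ExecutorService workers = Executors.newFixedThreadPool(8);

            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();

                    if (key.isAcceptable()) {
                        // New connection: register it for read-readiness notifications.
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        // A connection has a request: read it here, handle it on a worker.
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        int n = client.read(buf);
                        if (n == -1) {
                            key.cancel();
                            client.close();
                            continue;
                        }
                        String request = new String(buf.array(), 0, n, StandardCharsets.UTF_8);
                        workers.submit(() -> handle(client, request));
                    }
                }
            }
        }

        private static void handle(SocketChannel client, String request) {
            try {
                // Hypothetical handler: echo the request back (partial writes ignored for brevity).
                client.write(ByteBuffer.wrap(("echo: " + request).getBytes(StandardCharsets.UTF_8)));
            } catch (IOException ignored) {
            }
        }
    }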

selecting among multiple sockets that are ready to be read from

I am writing a server-client application. I have a server that holds several sockets that I have got from the accept() method of ServerSocket. I want to read from these sockets but I don't necessarily know which socket is ready to be read from. I need some kind of selector that will select one of the sockets that are ready to be read from, so I can read the data it sends.
Thanks.
You have basically two options to make it work:
Have a dedicated thread per accepted socket (a minimal sketch of this option follows below). This is because 'regular' socket I/O is blocking: you cannot selectively handle multiple sockets using a single thread, and as there is no 'peeking' functionality, you always take the risk of getting blocked when you invoke read. By having a thread per socket you are interested in reading from, a blocking read will not block any other operations (threads).
Use NIO. NIO allows for asynchronous I/O operations and provides basically exactly what you asked for: a Selector.
If you do decide to go the NIO way, I would recommend checking out MINA and Netty. I've found them much easier to work with than plain NIO. Not only do you get a nicer API to work with, but MINA, at least, also had workarounds for some nasty NIO bugs.
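For completeness, here is a minimal sketch of the first option, one blocking reader thread per accepted socket (the port number and line-oriented protocol are just assumptions for the example):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class ThreadPerSocketReader {

        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9090)) {
                while (true) {
                    Socket socket = server.accept();
                    // One dedicated reader thread per socket: a blocking read here
                    // cannot stall the accept loop or the other connections.
                    new Thread(() -> readLoop(socket)).start();
                }
            }
        }

        private static void readLoop(Socket socket) {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(socket.getRemoteSocketAddress() + ": " + line);
                }
            } catch (IOException e) {
                // Connection dropped or closed; let the thread exit.
            }
        }
    }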

Is java.net.Socket.setSoTimeout reliable?

From the JavaDoc for setSoTimeout:
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a read() call on the InputStream associated with this Socket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the Socket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
From a variety of posts on the Internet I have read that SO_TIMEOUT is rather unreliable when using the C socket API (e.g. here).
Hence the question, is it reliable to use setSoTimeout to check for run-away sessions?
If not, what techniques can you recommend to put a time limit on socket sessions?
I don't know of any relevant recent/current operating system on which (stream) socket timeouts do not work as they are supposed to. The post you're linking to is from a rather confused poster who is trying to set a send timeout on a datagram socket, which makes absolutely no sense. Datagrams are either sent immediately or silently discarded.
I am not aware of any modern OS platform whose network stack is so broken that socket timeouts don't work. But if anyone knows of a real-life example, please add it as a comment!
I would not worry about this scenario unless you are actually forced to support your application on such a broken OS. I suspect that it would be a painful exercise.
The link is about SO_RCVTIMEO. The question is about Socket.setSoTimeout(). On the only platform I am aware of where the former doesn't work (some versions of Solaris), the latter is fudged up using select(), which does work. The contract of the method demands it. You don't need to worry about this unless someone actually comes up with a platform where it doesn't work; I've never seen one in 16 years.
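For reference, a minimal sketch of using setSoTimeout to put a time limit on a session's reads (the 30-second value and the process() handler are just placeholders for this example):

    import java.io.InputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class BoundedReadSession {

        static void serve(Socket socket) throws Exception {
            // Any read that blocks longer than 30 seconds throws SocketTimeoutException.
            socket.setSoTimeout(30_000);

            byte[] buf = new byte[4096];
            try (InputStream in = socket.getInputStream()) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    process(buf, n);               // hypothetical request handler
                }
            } catch (SocketTimeoutException e) {
                // Run-away session: no data arrived within the limit, so give up on it.
                socket.close();
            }
        }

        private static void process(byte[] data, int length) {
            // ... application logic ...
        }
    }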
Check out the connectivity classes in Java 6 NIO; they include sockets now and do non-blocking operation, so you can cancel an operation if you want to.
Apache HttpClient core (?) is now able to use the NIO sockets, so it seems they got that concept working. That's all I know about it, though.

How to detect dataloss with Java sockets?

I have the following situation: using a "classical" Java server (using ServerSocket) I would like to detect (as rapidly as possible) when the connection with the client has failed unexpectedly (i.e. non-gracefully / without a FIN packet).
The way I'm simulating this is as follows:
I'm running the server on a Linux box
I connect with telnet to the box
After the connection has succeeded I add a "DROP" rule to the box's firewall
What happens is that the sending blocks after ~10k of data. I don't know for how long, but I've waited more than 10 minutes on several occasions. What I've researched so far:
Socket.setSoTimeout - however, this affects only reads. If there are only writes, it has no effect
Checking for errors with PrintWriter.checkError(), since PrintWriter swallows the exceptions - however, it never returns true
How could I detect this error condition, or at least configure the timeout value? (either at the JVM or at the OS level)
Update: after ~20min checkError returned true on the PrintWriter (using the server JVM 1.5 on a CentOS machine). Where is this timeout value configured?
The ~20 min timeout is because of standard TCP settings in Linux. It's really not a good idea to mess with them unless you know what you're doing. I had a similar project at work, where we were testing connection loss by disconnecting the network cable and things would just hang for a long time, exactly like you're seeing. We tried messing with the following TCP settings, which made the timeout quicker, but it caused side effects in other applications where connections would be broken when they shouldn't, due to small network delays when things got busy.
net.ipv4.tcp_retries2
net.ipv4.tcp_syn_retries
If you check the man page for tcp (man tcp) you can read about what these settings mean and maybe find other settings that might apply. You can either set them directly under /proc/sys/net/ipv4 or use sysctl.conf. These two were the ones we found made the send/recv fail quicker. Try setting them both to 1 and you'll see the send call fail a lot faster. Make sure to take note of the current settings before changing them.
I will reiterate that you really shouldn't mess with these settings. They can have side effects on the OS and other applications. The best solution is, as Kitson says, to use a heartbeat and/or an application-level timeout.
Also look into how to create a non-blocking socket, so that the send call won't block like that. Although keep in mind that sending with a non-blocking socket is usually successful as long as there's room in the send buffer. That's why it takes around 10k of data before it blocks, even though you broke the connection before that.
The only sure-fire way is to generate application-level "checks" instead of relying on the transport level. For example, a bi-directional heartbeat message, where if either end does not get the expected message, it closes and resets the connection.
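A rough sketch of such an application-level heartbeat, assuming a line-oriented protocol in which the "PING"/"PONG" messages are ours to define; if no traffic (heartbeat or otherwise) arrives within the window, the connection is treated as dead and reset:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.net.SocketTimeoutException;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    public class HeartbeatConnection {

        private static final ScheduledExecutorService PINGER =
                Executors.newSingleThreadScheduledExecutor();

        static void run(Socket socket) throws IOException {
            // If neither data nor a heartbeat arrives within 15 seconds, the read times out.
            socket.setSoTimeout(15_000);

            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

            // Send our own heartbeat every 5 seconds so the peer's timer keeps getting reset.
            ScheduledFuture<?> ping =
                    PINGER.scheduleAtFixedRate(() -> out.println("PING"), 5, 5, TimeUnit.SECONDS);

            try {
                String line;
                while ((line = in.readLine()) != null) {
                    if ("PING".equals(line)) {
                        out.println("PONG");       // answer the peer's heartbeat
                    } else if (!"PONG".equals(line)) {
                        handleMessage(line);       // hypothetical application message handler
                    }
                }
            } catch (SocketTimeoutException e) {
                // No heartbeat within the window: the peer is unreachable or dead.
            } finally {
                ping.cancel(false);
                socket.close();
            }
        }

        private static void handleMessage(String message) {
            // ... application logic ...
        }
    }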
