close a socket port on linux [duplicate] - java

I am using socket communication in one of my Java applications. As far as I know, if the program terminates abnormally the listening port does not get closed, and the program cannot be restarted because it reports "Port already open.."
Is there any way to handle this problem? What is the general way to handle this situation?

It sounds like your program is listening on a socket. Normally, when your program exits the OS closes all sockets that might be open (including listening sockets). However, for listening sockets the OS normally reserves the port for some time (several minutes) after your program exits so it can handle any outstanding connection attempts. You may notice that if you shut down your program abnormally, then come back some time later it will start up just fine.
If you want to avoid this delay time, you can use setsockopt() to configure the socket with the SO_REUSEADDR option. This tells the OS that you know it's OK to reuse the same address, and you won't run into this problem.
You can set this option in Java by using the ServerSocket.setReuseAddress(true) method.
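A minimal sketch of that in Java (the port number below is just a placeholder): since SO_REUSEADDR has to be set before the socket is bound, create the ServerSocket unbound, set the option, then bind explicitly.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    public class ReusableServer {
        public static void main(String[] args) throws IOException {
            // Create the socket unbound so SO_REUSEADDR can be set before bind().
            ServerSocket server = new ServerSocket();
            server.setReuseAddress(true);               // must be set before bind()
            server.bind(new InetSocketAddress(12345));  // placeholder port
            System.out.println("Listening on port " + server.getLocalPort());
            // ... accept() loop goes here ...
        }
    }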

You want to set the SO_REUSEADDR flag on the socket.
See setReuseAddress().

The operating system should handle things such as that automatically, when the JVM process has ended. There might be a short delay before the port is closed, though.

As mentioned in Handling abnormal Java program exits, you could set up a Runtime.addShutdownHook() to deal with any special case, if it really needs an explicit operation.
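For example, a sketch of such a hook that closes the listening socket on normal termination or on SIGINT/SIGTERM (it cannot run on SIGKILL or a hard JVM crash; the port number is again a placeholder):

    import java.io.IOException;
    import java.net.ServerSocket;

    public class GracefulServer {
        public static void main(String[] args) throws IOException {
            final ServerSocket server = new ServerSocket(12345); // placeholder port
            // Runs on normal exit and on SIGINT/SIGTERM, but not on SIGKILL or a JVM crash.
            Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
                public void run() {
                    try {
                        server.close();
                    } catch (IOException ignored) {
                        // nothing useful to do while shutting down
                    }
                }
            }));
            // ... accept() loop goes here ...
        }
    }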

Related

Java Application must be closed when in sleep mode

I want my application to stop if the computer goes into sleep mode. I use a thread in my application which performs some task after a specific interval.
Is it possible to stop program execution when the computer sleeps?
If yes, please suggest a solution or the Java classes to use for this.
Not using Java. You'd have to write an operating system hook (in, say, C++) that triggers on whatever signal your operating system sends when it puts the computer to sleep, and then tries to gracefully shut down your Java program (probably by sending it some sort of message on a TCP port that your program listens to).
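The Java side of that idea could look roughly like the sketch below; the control port and the "SLEEP" command are invented purely for illustration. A daemon thread listens on localhost and clears a flag that the worker thread checks.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.InetAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SleepAwareWorker {
        private volatile boolean running = true;

        // Background listener: the OS-level hook connects and sends a command line.
        void listenForControlMessages(final int controlPort) {
            Thread listener = new Thread(new Runnable() {
                public void run() {
                    try {
                        ServerSocket control = new ServerSocket(
                                controlPort, 1, InetAddress.getByName("127.0.0.1"));
                        Socket s = control.accept();
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(s.getInputStream()));
                        if ("SLEEP".equals(in.readLine())) {
                            running = false;   // the worker thread polls this flag
                        }
                        s.close();
                        control.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            });
            listener.setDaemon(true);
            listener.start();
        }
    }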

How can I kill a Java program that is left hanging (in Windows )?

Suppose that in Java I have a simple Echo Client/Server pair. I start up the Server process, but I never actually start the Client process.
What I'd like to do, is have a third program (call it "Parent"), that will automatically kill this Server program after 30 seconds of being idle.
Could I use PowerShell to do this? Or do I need to use C or some other language?
Yes, as long as you have a good way to:
1) uniquely identify the server process
2) determine that the server is idle
If there will never be two (or more) instances of the server process, then you can identify the process by name (make sure your process has a unique name!)
Determining that the server is idle may be tricky (hence the comments to implement the server stopping itself without resorting to a "parent" process). However if the server is memory or CPU intensive when it is active you may be able to take advantage of that to distinguish idle from busy. You can use get-process (gps) to determine the process' current CPU and memory use. The trick will be to know how long it's been idle if it currently looks idle. To do this reliably you will need to poll with gps more frequently than it takes the server to process a request. Otherwise you may poll before the server is busy, then the server is busy, and then you poll again when it's idle. But you'll think it was idle the whole time.
You can avoid the dilemma above by having the server change something when it knows it has been idle for 30 seconds, like the window title. (But if you're doing that why not just have the server terminate itself?)
Once the PS script determines the server is idle, then get-process -name yourServerProcess | stop-process will stop the server process. Specify "yourServerProcess" without the .EXE at the end. If you get a permissions error, then run PS as administrator.
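If you instead let the server stop itself, as suggested above, a sketch in Java (the port is a placeholder and the 30-second threshold comes from the question's scenario) can simply put a timeout on accept():

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class SelfStoppingEchoServer {
        public static void main(String[] args) throws IOException {
            ServerSocket server = new ServerSocket(7007);  // placeholder port
            server.setSoTimeout(30000);                    // accept() gives up after 30 s
            while (true) {
                try {
                    Socket client = server.accept();
                    // ... echo the client's input back here ...
                    client.close();
                } catch (SocketTimeoutException idle) {
                    System.out.println("Idle for 30 seconds, shutting down.");
                    break;
                }
            }
            server.close();
        }
    }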

How can I make a socket close immediately, bypassing the timeout period?

In Java, when you close a socket it stops being usable, but the underlying TCP connection is not fully torn down until a timeout period has passed.
I need to use thousands of sockets, and I want them to be released immediately when I close them, not after the timeout period, which wastes my time and my resources. What can I do?
I found out that by using socket.setReuseAddress(boolean), you can tell the JVM to reuse the port even if it's in the timeout period.
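For the record, that option mainly matters on the client side when you bind to a fixed local port before connecting; a sketch of how it is applied (addresses and ports are placeholders):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class ReconnectingClient {
        public static void main(String[] args) throws IOException {
            Socket socket = new Socket();                 // created unconnected on purpose
            socket.setReuseAddress(true);                 // must be set before bind()
            socket.bind(new InetSocketAddress(50000));    // fixed local port we want to reuse
            socket.connect(new InetSocketAddress("example.com", 80));
            // ... use the socket ...
            socket.close();
        }
    }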
You are probably seeing sockets in TIME_WAIT state. This is the normal state for a socket to enter on the side of the connection that does the 'active close'. TIME_WAIT exists for a very good reason and so you should be careful of simply reusing addresses.
I wrote about TIME_WAIT, why it exists and what you can do about it when writing servers here on my blog: http://www.serverframework.com/asynchronousevents/2011/01/time-wait-and-its-design-implications-for-protocols-and-scalable-servers.html
In summary, if you can, change the protocol so that your clients enter TIME_WAIT.
If you're running a server, then ServerSocket is the proper solution. It will manage things better than you would by hand, through recycling and a host of other optimizations intended for running a server in Java.
Closing the socket disconnects the Java object from the operating system, which means that it isn't taking up any resources outside of the JVM, so it really shouldn't be a problem. But if the minimal overhead from Java's garbage collection/finalization scheme is too big of a burden, then Java isn't a valid solution (since your problem isn't specific to socket programming any more). Although I have to say that an efficient garbage collector is not much worse than explicitly managing memory (and can actually perform better).
'I want them to be released immediately when I close them, not after the timeout period, which wastes my time and my resources.'
No you don't. You want TCP/IP to work correctly, and the TIME_WAIT state is a critically important part of that. If you're worried about the TIME_WAIT state, the quick answer is to be the one who receives the FIN rather than the one who first sends it.

Is java.net.Socket.setSoTimeout reliable?

From the JavaDoc for setSoTimeout
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a read() call on the InputStream associated with this Socket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the Socket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
From a variety of posts on the Internet I have read that SO_TIMEOUT is rather unreliable when using the socket C API (e.g. here).
Hence the question: is it reliable to use setSoTimeout to check for runaway sessions?
If not, what techniques can you recommend to put a time limit on socket sessions?
I don't know of any relevant recent/current operating system on which (stream) socket timeouts do not work as they are supposed to. The post you're linking to is from a rather confused poster who is trying to set a send timeout on a datagram socket, which makes absolutely no sense. Datagrams are either sent immediately or silently discarded.
I am not aware of any modern platform OS platform whose network stack is so broken that socket timeouts don't work. But if anyone knows of a real life example, please add it as a comment!
I would not worry about this scenario unless you are actually forced to support your application on such a broken OS. I suspect that it would be a painful exercise.
The link is about SO_RCVTIMEO; the question is about Socket.setSoTimeout(). On the only platform I am aware of where the former doesn't work (some versions of Solaris), the latter is fudged up using select(), which does work. The contract of the method demands it. You don't need to worry about this unless someone actually comes up with a platform where it doesn't hold; I've never seen one in 16 years.
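For completeness, typical setSoTimeout usage looks like the sketch below (the 30-second value is arbitrary); the read throws SocketTimeoutException and the socket itself stays valid:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class TimedReader {
        // Reads one byte, giving up if the peer sends nothing for 30 seconds.
        static int readWithTimeout(Socket socket) throws IOException {
            socket.setSoTimeout(30000);    // applies to every subsequent read()
            InputStream in = socket.getInputStream();
            try {
                return in.read();
            } catch (SocketTimeoutException e) {
                // The socket is still usable; decide here whether to retry or close.
                socket.close();
                return -1;
            }
        }
    }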
Check out the connectivity classes in Java 6 NIO; they include sockets now and support non-blocking operation, so you can cancel an operation if you want to.
Apache HttpClient core (?) is now able to use the NIO sockets, so it seems they got that concept working. That's all I know about it, though.
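A rough sketch of that non-blocking approach with java.nio.channels (the host and the 5-second wait are placeholders): register a non-blocking SocketChannel with a Selector and abandon the connect if it hasn't completed in time.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public class NonBlockingConnect {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            SocketChannel channel = SocketChannel.open();
            channel.configureBlocking(false);
            channel.connect(new InetSocketAddress("example.com", 80)); // placeholder host
            channel.register(selector, SelectionKey.OP_CONNECT);

            // Wait up to 5 seconds for the connect to complete, then give up.
            if (selector.select(5000) == 0) {
                channel.close();                            // cancels the pending operation
                System.out.println("Connect timed out");
            } else if (channel.finishConnect()) {
                System.out.println("Connected");
            }
            selector.close();
        }
    }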

How to detect dataloss with Java sockets?

I have the following situation: using a "classical" Java server (based on ServerSocket), I would like to detect as rapidly as possible when the connection with the client fails unexpectedly (i.e. non-gracefully, without a FIN packet).
The way I'm simulating this is as follows:
I'm running the server on a Linux box
I connect with telnet to the box
After the connection has succeeded I add a "DROP" rule to the box's firewall
What happens is that the sending blocks after ~10k of data. I don't know for how long, but I've waited more than 10 minutes on several occasions. What I've researched so far:
Socket.setSoTimeout - however this affects only reads. If there are only writes, it doesn't have an effect
Checking for errors with PrintWriter.checkError(), since PW swallows the exceptions - however it never returns true
How could I detect this error condition, or at least configure the timeout value? (either at the JVM or at the OS level)
Update: after ~20min checkError returned true on the PrintWriter (using the server JVM 1.5 on a CentOS machine). Where is this timeout value configured?
The ~20 min timeout is because of standard TCP settings in Linux. It's really not a good idea to mess with them unless you know what you're doing. I had a similar project at work, where we were testing connection loss by disconnecting the network cable and things would just hang for a long time, exactly like you're seeing. We tried messing with the following TCP settings, which made the timeout quicker, but it caused side effects in other applications where connections would be broken when they shouldn't, due to small network delays when things got busy.
net.ipv4.tcp_retries2
net.ipv4.tcp_syn_retries
If you check the man page for tcp (man tcp) you can read about what these settings mean and maybe find other settings that might apply. You can either set them directly under /proc/sys/net/ipv4 or use sysctl.conf. These two were the ones we found made the send/recv fail quicker. Try setting them both to 1 and you'll see the send call fail a lot faster. Make sure to take note of the current settings before changing them.
I will reiterate that you really shouldn't mess with these settings. They can have side effects on the OS and other applications. The best solution is, as Kitson says, to use a heartbeat and/or an application-level timeout.
Also look into how to create a non-blocking socket, so that the send call won't block like that. Although keep in mind that sending with a non-blocking socket is usually successful as long as there's room in the send buffer. That's why it takes around 10k of data before it blocks, even though you broke the connection before that.
The only sure fire way is to generate application level "checks" instead of relying on the transport level. For example, a bi-directional heartbeat message, where if either end does not get the expected message, it closes and resets the connection.
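One way such a heartbeat might look in Java (the "PING"/"PONG" messages and the 5-second limit are made up for the example): each side periodically sends a ping and treats a missing reply as a dead connection.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class Heartbeat {
        // Sends "PING" and expects "PONG" within 5 seconds; anything else
        // means the peer (or the network) is gone and the caller should
        // close and reset the connection.
        static boolean peerAlive(Socket socket) throws IOException {
            socket.setSoTimeout(5000);
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("PING");
            try {
                return "PONG".equals(in.readLine());
            } catch (SocketTimeoutException noReply) {
                return false;
            }
        }
    }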
