I am seeing a strange issue with my automation tests.
The setup is as follows:
Server: CentOS 6
Client1: Windows 7
Client2: CentOS 6
I'm writing a test that simulates a connection disruption to the server by blocking outbound traffic with the server's iptables. However, the socket behaves differently on the Windows client than it does on the Linux client.
One thing to note: in both cases there is a line of Java code that does:
socket.setSoTimeout(0)
Scenario #1 (Windows):
Send the command iptables -A OUTPUT -p tcp --dport XYZ -j DROP to the server over SSH.
After approximately 60 seconds, my console says java.net.SocketTimeoutException: Read timed out.
The connection drops.
Scenario #2 (CentOS):
Send the same command as above.
I tried waiting as long as 10 minutes, but the console never outputs the exception.
So, the question: is there a way to make the socket behavior the same (or approximately the same) on both platforms?
I read that Windows actually does not use SO_TIMEOUT but SO_RCVTIMEO instead.
From setSoTimeout(int timeout):
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a read() call on the InputStream associated with this Socket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the Socket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
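To illustrate what the quoted documentation describes, here is a minimal sketch with an arbitrary finite timeout (the 30-second value is just an example):

// The timeout must be set before entering the blocking read.
socket.setSoTimeout(30000);
try {
    int b = socket.getInputStream().read(); // blocks for at most 30 seconds
} catch (SocketTimeoutException e) {
    // The read timed out, but the socket is still valid and usable.
}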
Are you able to set a realistic timeout value?
After reading David's answer, I observed the socket's behavior to see how it works.
As David stated, SO_TIMEOUT only applies to future read() calls and not the ones that have already been called.
The common pattern for my tests was:
Connect the user (socket)
Exchange some data
Server does not communicate with user for some time
At this point the client has already entered the read() method, so setting SO_TIMEOUT now is useless.
Block the server's OUTBOUND traffic on local port (some random 55000+ port)
Wait for SocketTimeoutException
If I had set SO_TIMEOUT beforehand, my socket would have thrown exceptions like mad. So I was wrong to expect the exception to be thrown in case of a network outage; on the contrary, the exception is thrown in case of a traffic outage.
Could this be an issue with keep-alive? My socket sets it to true.
Solution (empirical):
I measured (on Windows) that it takes approximately 60 seconds for the socket to go down.
So, when I need to sever the connection, I create a thread that checks every 2 seconds whether the current time is greater than (creation time + 60 s). When that time is reached, it invokes socket.close(), which effectively causes a SocketException in the blocked read.
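A minimal sketch of that watchdog thread (the 60-second limit is the empirically measured value, and socket is the connection being monitored):

Thread watchdog = new Thread(() -> {
    long deadline = System.currentTimeMillis() + 60000;
    try {
        // Poll every 2 seconds until the deadline has passed.
        while (System.currentTimeMillis() < deadline) {
            Thread.sleep(2000);
        }
        socket.close(); // a blocked read() now fails with a SocketException
    } catch (InterruptedException | IOException ignored) {
    }
});
watchdog.setDaemon(true);
watchdog.start();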
This solution is by no means optimal but it will have to do, at least for now.
Hope this will be of help to someone else out there.
Related
How can I prevent TCP from making multiple socket connection attempts?
Background
I'm trying to get a rough estimate of the round-trip time to a client. The high-level protocol I have to work with gives no way to determine the RTT, nor does it have any sort of no-op request/response flow. So, I'm attempting to get information directly from the lower layers. In particular, I know that the client will actively reject TCP connection attempts on a particular port.
Me -> Client: SYN
Client -> Me: ACK, RST
Code
long lStartTime = System.nanoTime() / 1000000;
long lEndTime;
// Attempt to connect to the remote party. We don't mind whether this
// succeeds or fails.
try
{
// Connect to the remote system.
lSocket.connect(mTarget, MAX_PING_TIME_MS);
// Record the end time.
lEndTime = System.nanoTime() / 1000000;
// Close the socket.
lSocket.close();
}
catch (IOException lEx) // also covers SocketTimeoutException, a subclass of IOException
{
lEndTime = System.nanoTime() / 1000000;
}
// Calculate the interval.
long lInterval = lEndTime - lStartTime;
System.out.println("Interval = " + lInterval);
Problem
Using Wireshark, I see that the call to lSocket.connect makes three (failed) attempts to connect the socket before giving up - with an apparently arbitrary inter-attempt interval (often ~300ms).
Me -> Client: SYN
Client -> Me: ACK, RST
Me -> Client: SYN
Client -> Me: ACK, RST
Me -> Client: SYN
Client -> Me: ACK, RST
Question
Is there any way to make TCP give up after a single SYN/RST pair?
I've looked through some of the Java code. I wondered if I was on to a winner when the comment on AbstractPlainSocketImpl said...
/**
* The workhorse of the connection operation. Tries several times to
* establish a connection to the given <host, port>. If unsuccessful,
* throws an IOException indicating what went wrong.
*/
...but sadly there's no evidence of looping/retries in that function or any of the other (non-native) functions that I've looked at.
Where does this retry behaviour actually come from? And how can it be controlled?
Alternatives
I may also be open to alternatives, but not...
Using ICMP echo requests (pings). I know that many clients won't respond to them.
Using raw sockets. One of the platforms is Windows, which these days severely limits the ability to use raw sockets. (I also think the Linux network stack gets in the way if an application tries to speak TCP over a raw socket, since the kernel knows nothing about the hand-rolled connection.)
Using JNI, except as a last resort. My code needs to work on at least 2 very different operating systems.
TCP connect retries are a function of the OS's socket implementation. Configuring this depends on the platform. See https://security.stackexchange.com/questions/34607/why-is-the-server-returning-3-syn-ack-packets-during-a-syn-scan for a description of what this is and why it is happening.
On Windows, you should be able to modify the retry count in the registry:
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpMaxConnectRetransmissions
Settings related to the RTT are detailed in that documentation as well.
On Linux, the accepted answer in the linked Security post talks about how to configure this parameter:
On a Linux system, see the special files in /proc/sys/net/ipv4/ called tcp_syn_retries and tcp_synack_retries: they contain the number of times the kernel would emit SYN (respectively SYN+ACK) on a given connection ... this is as simple as echo 3 > tcp_synack_retries ...
Note that this is a system-wide setting.
You can read the current values by reading the registry settings (on Windows) or reading the contents of the special files (on Linux).
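For example, on Linux the current value can be read directly from Java (a sketch assuming Java 11+ for Files.readString; Linux only):

import java.nio.file.Files;
import java.nio.file.Path;

// The kernel exposes the system-wide SYN retry count as a procfs file.
int synRetries = Integer.parseInt(
        Files.readString(Path.of("/proc/sys/net/ipv4/tcp_syn_retries")).trim());
System.out.println("tcp_syn_retries = " + synRetries);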
Also, MSDN has this to say about the TCP connect RTT on Windows:
TCP/IP adjusts the frequency of retransmissions over time. The delay between the original transmission and the first retransmission for each interface is determined by the value of the TcpInitialRTT entry. By default, it is three seconds. This delay doubles after each attempt. After the final attempt, TCP/IP waits for an interval equal to double the last delay, and then it abandons the connection request.
By the way, re: raw sockets - Yes, you would have an extremely difficult time. Also, as of Windows XP SP2, Windows won't actually let you specify TCP protocol numbers for raw sockets under any circumstances (see Limitations).
Also, as an aside: Make sure that the TCP connection is not being blocked by a separate firewall in front of the client, otherwise you only end up measuring round trip time to the firewall.
I'm trying to make a call to a very heavy-duty process.
Its average run time is estimated at 9-10 minutes.
When I execute the process, I set the timeout to a ridiculously huge number: 99999999.
After 2 minutes, I get the following error:
java.net.SocketTimeoutException: Read timed out
I tried to mess with it some more and set the timeout to 3000, and after 3 seconds, as anticipated, I got the same error.
Do you have any idea why socket.setSoTimeout(99999999) caps it at 120000?
I had the same problem, and the solution was to not call socket.shutdownInput() and socket.shutdownOutput() until the last read or write on the socket. Calling them earlier made the socket go into the FIN_WAIT state, so it waited 2 minutes before closing. You can read more about it in this post.
Clearly you aren't setting the timeout you think you're setting, or someone else is changing it afterwards. You'll have to post some code to get further elucidation.
Note that according to W.R. Stevens in TCP/IP Illustrated, Vol II, #17.4, the timeout is held in a short as a number of 1000Hz ticks, so a timeout beyond 11 minutes isn't possible. This applies to the BSD code.
I'm not sure how your application works, but try setting an infinite timeout on the socket:
public void setSoTimeout(int timeout)
throws SocketException
Enable/disable SO_TIMEOUT with the specified timeout, in milliseconds. With this option set to a non-zero timeout, a read() call on the InputStream associated with this Socket will block for only this amount of time. If the timeout expires, a java.net.SocketTimeoutException is raised, though the Socket is still valid. The option must be enabled prior to entering the blocking operation to have effect. The timeout must be > 0. A timeout of zero is interpreted as an infinite timeout.
If you provide more information about your call, I may be able to improve the answer.
I'm having a problem with a library that I am using. It might be the library or it might be me using it wrong!
Basically, when I do this (timeout in milliseconds),
_ignitedHttp.setConnectionTimeout(1); // v short
_ignitedHttp.setSocketTimeout(60000); // 60 seconds
No timeout exception is generated and it works ok, however, when I do the following,
_ignitedHttp.setConnectionTimeout(60000); // 60 seconds
_ignitedHttp.setSocketTimeout(1); // v short
I get a Socket Exception.
So, my question is: why can I not simulate a connection exception? Am I misunderstanding the difference between a socket timeout and a connection timeout? The library is here (not officially released yet).
A connection timeout occurs only when starting the TCP connection. This usually happens if the remote machine does not answer. It means that either the server has been shut down, you used the wrong IP/DNS name or port, or the network connection to the server is down.
A socket timeout is dedicated to monitoring the continuous flow of incoming data. If the data flow is interrupted for longer than the specified timeout, the connection is regarded as stalled/broken. Of course, this only works with connections where data is received all the time.
Setting the socket timeout to 1 would require new data to be received every millisecond (assuming you read the data block-wise and the blocks are large enough)!
If the incoming stream stalls for more than a millisecond, you run into a timeout.
A connection timeout is the maximum amount of time that the program is willing to wait to setup a connection to another process. You aren't getting or posting any application data at this point, just establishing the connection, itself.
A socket timeout is the timeout when waiting for individual packets. It's a common misconception that a socket timeout is the timeout to receive the full response. So if you have a socket timeout of 1 second, and a response comprised of 3 IP packets, where each response packet takes 0.9 seconds to arrive, for a total response time of 2.7 seconds, then there will be no timeout.
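To illustrate the distinction with the standard java.net API (not the library above; the URL and values are placeholders):

import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection conn =
        (HttpURLConnection) new URL("http://example.com/").openConnection();
conn.setConnectTimeout(60000); // bounds establishing the TCP connection
conn.setReadTimeout(1000);     // bounds each wait for data, not the whole response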
I'm trying to load test a Java server by opening a large number of socket connections to the server, authenticating, closing the connection, then repeating. My app runs great for awhile but eventually I get:
java.net.BindException: Address already in use: connect
According to documentation I read, the reason for this is that closed sockets still occupy the local address assigned to them for a period of time after close() was called. This is OS dependent but can be on the order of minutes. I tried calling setReuseAddress(true) on the socket with the hopes that its address would be reusable immediately after close() was called. Unfortunately this doesn't seem to be the case.
My code for socket creation is:
Socket socket = new Socket();
socket.setReuseAddress(true);
socket.connect(new InetSocketAddress(m_host, m_port));
But after a while I still get this error:
java.net.BindException: Address already in use: connect
Is there any other way to accomplish what I'm trying to do? I would like to for instance: open 100 sockets, close them all, open 200 sockets, close them all, open 300, etc. up to a max of 2000 or so sockets.
Any help would be greatly appreciated!
You are exhausting the space of outbound ports by opening that many outbound sockets within the TIME_WAIT period of two minutes. The first question you should ask yourself is: does this represent a realistic load test at all? Is a real client really going to do that? If not, you just need to revise your testing methodology.
BTW SO_LINGER is the number of seconds the application will wait during close() for data to be flushed. It is normally zero. The port will hang around for the TIME_WAIT interval anyway if this is the end that issued the close. This is not the same thing. It is possible to abuse the SO_LINGER option to patch the problem. However that will also cause exceptional behaviour at the peer and again this is not the purpose of a test.
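For completeness, the abuse mentioned would look something like this (a sketch; it forces an abortive close, so the peer sees a connection reset):

// A linger time of 0 makes close() send an RST instead of a FIN,
// so the local port never enters the TIME_WAIT state.
socket.setSoLinger(true, 0);
socket.close();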
Not using bind() but calling setReuseAddress(true) is just weird; I hope you understand the implications of setReuseAddress (and the point of it). 100-2000 is not a great number of sockets to open; however, the server you are attempting to connect to (since it looks like the same addr/port pair) may just drop them, as a normal backlog is 50.
Edit:
If you need to open multiple sockets quickly (ermm, port scan?), I'd very strongly recommend using NIO with connect()/finishConnect() + a Selector, as sketched below. Opening 1000 sockets in the same thread is just plain slow.
I forgot: you may need finishConnect() either way in your code.
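A rough sketch of that approach (host, port, and the connection count are placeholders; error handling omitted):

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

Selector selector =;
for (int i = 0; i < 100; i++) {
    SocketChannel ch =;
    ch.configureBlocking(false);
    ch.connect(new InetSocketAddress(host, port)); // returns immediately
    ch.register(selector, SelectionKey.OP_CONNECT);
}
while (selector.select(1000) > 0) {
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey key =;
        it.remove();
        SocketChannel ch = (SocketChannel);
        if (ch.finishConnect()) { // completes the handshake without blocking
            ch.close();           // connected; close again, per the load test
        }
    }
}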
I think you should plan on the port you want to connect with already being in use. By that I mean: try to connect using the given port; if the connect fails (or, in your case, throws an exception), try to open the connection using the next port number.
Try wrapping the connect statement in a try/catch.
Here's some pseudo-code that conveys what I think will work:
portNumber = x        // where x is the first port number you will try
numConnections = 200  // or however many connections you want to open
while (numConnections > 0) {
    try {
        connect(host, portNumber)
        numConnections--
    } catch () {
        // port in use; fall through and try the next one
    }
    portNumber++
}
This code doesn't cover corner cases such as "what happens when all ports are in use?"
We are doing an FTP connection from our application, which is a Java application.
We have set the timeout for the connection using the Socket.connect(address, timeout) method before calling FTPClient.connect().
While retrieving files from the FTP site over the same connection, we haven't set any timeout. Is it mandatory to call FTPClient.setSoTimeout(timeout) to set an individual timeout for each such interaction over the same connection, or will Socket.connect(address, timeout) set the timeout for each interaction with the FTP site over one connection?
I would also like to know What is the difference between these two methods?
The timeout in Socket.connect() is the connect timeout, which is the time to wait for the TCP handshake to finish. This timeout only applies once per connection.
setSoTimeout() is the socket read timeout, which is how long you wait to read pending bytes from the socket. This applies to every socket read throughout the TCP session.
It's good practice to set both timeout values so you don't rely on system defaults, which may vary. However, a timeout may not work when the call is stuck in native code; for example, the connect timeout is not honored if a firewall silently drops packets.
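A minimal sketch of setting both (assuming Apache Commons Net's FTPClient; the host and values are placeholders):

import;

FTPClient ftp = new FTPClient();
ftp.setConnectTimeout(10000); // connect timeout: bounds the TCP handshake
ftp.connect("");    // throws if the handshake takes longer
ftp.setSoTimeout(30000);      // read timeout: applies to every read; set once connected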