Is there a way to simulate Socket and Connection timeout? - java

I have a certain piece of code that integrates with a third party over an HTTP connection, and it handles socket timeouts and connection timeouts differently.
I have been trying to simulate and test all the scenarios which could arise from the third party. I was able to test the connection timeout by connecting to a port which is blocked by the server's firewall, e.g. port 81.
However, I'm unable to simulate a socket timeout. If my understanding is correct, a socket timeout is associated with the continuous flow of packets once the connection is established, and the connection dropping or going silent in between. Is there a way I can simulate this?

So we are talking about two kinds of timeouts here: one is the time to connect to the server (connect timeout), the other happens when no data is sent or received over the socket for a while (idle timeout).
Node sockets have a socket timeout that can be used to emulate both the connect and the idle timeout. This can be done by setting the socket timeout to the connect timeout and then, once connected, setting it to the idle timeout.
example:
const request = http.request(url, {
  timeout: connectTimeout,
});
request.setTimeout(idleTimeout);
This works because the timeout in the options is set immediately when the socket is created, while the setTimeout call is applied to the socket once it is connected!
Anyway, the question was about how to test the connect timeout. Ok, so let's first park the idle timeout: we can simply test that by not sending any data for some time, which causes the timeout. Check!
The connect timeout is a bit harder to test. The first thing that comes to mind is that we need a place to connect to that will not error, but also not connect; this would cause a timeout. But how the hell do we simulate that in Node?
If we think a little bit outside the box, we might figure out that this timeout is about the time it takes to connect; it does not matter why the connection takes as long as it does. We simply need to delay the time it takes to connect. This is not necessarily a server thing; we could also do it on the client. After all, the client is the part doing the connecting, and if we can delay it there, we can test the timeout.
So how could we delay the connection on the client side? Well, we can use the DNS lookup for that. Before the connection is made, a DNS lookup is done. If we simply delay that by 5 seconds or so we can test for the connect timeout very easily.
This is what the code could look like:
import * as dns from "dns";
import * as http from "http";

const url = new URL("http://localhost:8080");
const request = http.request(url, {
  timeout: 3 * 1000, // connect timeout
  lookup(hostname, options, callback) {
    setTimeout(
      () => dns.lookup(hostname, options, callback),
      5 * 1000,
    );
  },
});
request.setTimeout(10 * 1000); // idle timeout
request.addListener("timeout", () => {
  const message = !request.socket || request.socket.connecting ?
    `connect timeout while connecting to ${url.href}` :
    `idle timeout while connected to ${url.href}`;
  request.destroy(new Error(message));
});
In my projects I usually use an agent that I inject. The agent then has the delayed lookup. Like this:
import * as dns from "dns";
import * as http from "http";

const url = new URL("http://localhost:8080");
const agent = new http.Agent({
  lookup(hostname, options, callback) {
    setTimeout(
      () => dns.lookup(hostname, options, callback),
      5 * 1000,
    );
  },
});
const request = http.request(url, {
  timeout: 3 * 1000, // connect timeout
  agent,
});
request.setTimeout(10 * 1000); // idle timeout
request.addListener("timeout", () => {
  const message = !request.socket || request.socket.connecting ?
    `connect timeout while connecting to ${url.href}` :
    `idle timeout while connected to ${url.href}`;
  request.destroy(new Error(message));
});
Happy coding!

"Connection timeout" determines how long it may take for a TCP connection to be established and this all happens before any HTTP related data is send over the line. By connecting to a blocked port, you have only partially tested the connection timeout since no connection was being made. Typically, a TCP connection on your local network is created (established) very fast. But when connecting to a server on the other side of the world, establishing a TCP connection can take seconds.
"Socket timeout" is a somewhat misleading name - it just determines how long you (the client) will wait for an answer (data) from the server. In other words, how long the Socket.read() function will block while waiting for data.
Properly testing these functions involves creating a server socket or an (HTTP) web server that you can make deliberately slow. Describing how to create and use a server socket for connection timeout testing (if that is even possible) is too much to answer here, but socket timeout testing is a common question - see for example here (I just googled "mock web server for testing timeouts"), which leads to a tool like MockWebServer. MockWebServer might have an option for testing connection timeouts as well (I have not used it), but if not, another tool might.
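As a minimal sketch of the ServerSocket approach: a local throwaway server accepts the connection but never writes anything back, so the client's read blocks until its socket timeout fires. The class name, the 2-second and 60-second values, and the automatically chosen port are all just placeholders for the test.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SocketTimeoutDemo {
    public static void main(String[] args) throws IOException {
        // Server that accepts the connection but never writes a single byte.
        ServerSocket server = new ServerSocket(0); // 0 = pick any free local port
        Thread silentServer = new Thread(() -> {
            try {
                Socket accepted = server.accept();
                Thread.sleep(60_000); // hold the connection open, send nothing
                accepted.close();
            } catch (Exception e) {
                // demo only: ignore
            }
        });
        silentServer.setDaemon(true);
        silentServer.start();

        // Client with a 2 second socket (read) timeout.
        try (Socket client = new Socket("localhost", server.getLocalPort())) {
            client.setSoTimeout(2_000);
            client.getInputStream().read(); // blocks until the timeout fires
        } catch (SocketTimeoutException e) {
            System.out.println("socket timeout, as expected: " + e.getMessage());
        }
    }
}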
On a final note: it is good you are testing your usage of the third-party HTTP library with respect to timeout settings, even if this takes some effort. The worst that can happen is that a socket timeout setting in your code is somehow not used by the library and the default socket timeout of "wait forever" is used. That can result in your application doing absolutely nothing ("hanging") for no apparent reason.

Related

Apache HttpClient: setConnectTimeout() vs. setConnectionTimeToLive() vs. setSocketTimeout()

Can someone please explain what is the difference between these two:
client = HttpClientBuilder.create()
        .setConnectionTimeToLive(1, TimeUnit.MINUTES)
        .build();
and
RequestConfig requestConfig = RequestConfig.custom().setConnectTimeout(30 * 1000).build();
client = HttpClientBuilder
        .create()
        .setDefaultRequestConfig(requestConfig)
        .build();
Is it better to use setSocketTimeout()?
A ConnectTimeout determines the maximum time to wait for the other side to answer "yes, I'm here, let's talk" when creating a new connection (ConnectTimeout eventually calls socket.connect(address, timeout)). The wait time is usually less than a second, unless the other side is really busy with just accepting new incoming connections or you have to go through the great firewall of China. In the latter cases it can be a minute (or more) before the new connection is created. If the connection is not established within the ConnectTimeout, you get an error (1).
setSocketTimeout eventually calls socket.setSoTimeout which is explained in this answer.
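For reference, the two timeouts map onto two different calls on a plain java.net.Socket; the host, port, and values below are only placeholders:

import java.net.InetSocketAddress;
import java.net.Socket;

Socket socket = new Socket();
// connect timeout: how long establishing the TCP connection may take (here 30 s)
socket.connect(new InetSocketAddress("example.com", 80), 30 * 1000);
// socket timeout (SO_TIMEOUT): how long a read may block waiting for data (here 10 s)
socket.setSoTimeout(10 * 1000);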
The ConnectionTimeToLive determines the maximum age of a connection (after which it will be closed), regardless of when the connection was last used. Normally, there is an "idle timeout" to clean up connections, i.e. you or the other side will close a connection that has not been used for a while. Typically, you will close an idle connection before the other side does, to prevent errors. But there are two other cases I can think of where a maximum age for a connection is useful:
Bad network components: count yourself lucky if you have not met them. Some bad routers, firewalls, proxies, etc. will just drop connections (even ones actively in use) after something like 30 minutes. Since you and the other side may not even be aware that a connection was dropped, you can get "connection reset" errors for no obvious reason at weird times.
Cached meta-data: most systems keep some meta-data about a connection in some sort of cache. Some systems manage this cache badly - cache size just grows with the age of the connection.
A note about the ConnectionTimeToLive implementation in Apache HttpClient 4.5.4: I think you must use the PoolingHttpClientConnectionManager for the option to work (it eventually all comes down to a call to this isExpired method). If you do not use this connection manager, test the option to make sure it really works.
(1) Interesting comment from EJP user207421 on this related answer
Connection Timeout:
It is the timeout until a connection with the server is established.
Socket Timeout:
This is the time of inactivity to wait for packets (data) to be received.
setConnectionRequestTimeout:
This one is specific to the connection manager: it is the time to fetch a connection from the connection pool.
The value is the timeout in milliseconds used when requesting a connection from the connection manager; 0 (zero) means an infinite timeout.
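Put side by side for HttpClient 4.x, the three settings look like this (the 5-second values are arbitrary; the fuller start() example below uses the same calls):

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;

RequestConfig requestConfig = RequestConfig.custom()
        .setConnectionRequestTimeout(5 * 1000) // time to obtain a connection from the pool
        .setConnectTimeout(5 * 1000)           // time to establish the TCP connection
        .setSocketTimeout(5 * 1000)            // time to wait for data on an established connection
        .build();
CloseableHttpClient client = HttpClientBuilder.create()
        .setDefaultRequestConfig(requestConfig)
        .build();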
setConnectionTimeToLive
public final HttpClientBuilder setConnectionTimeToLive(long connTimeToLive, TimeUnit connTimeToLiveTimeUnit)
Sets maximum time to live for persistent connections
Please note this value can be overridden by the setConnectionManager(org.apache.http.conn.HttpClientConnectionManager) method.
Since:
4.4
Example: HttpClientStarter.java
@Override
public boolean start() {
    RegistryBuilder<ConnectionSocketFactory> r = RegistryBuilder.<ConnectionSocketFactory> create();
    // Register http and its plain socket factory
    final SocketFactory ss = getLevel().find(SocketFactory.class);
    ConnectionSocketFactory plainsf = new PlainConnectionSocketFactory() {
        @Override
        public Socket createSocket(HttpContext context) throws IOException {
            return ss.createSocket();
        }
    };
    r.register("http", plainsf);
    // Register https
    ConnectionSocketFactory sslfactory = getSSLSocketFactory();
    if (sslfactory != null) {
        r.register("https", getSSLSocketFactory());
    } else {
        log(Level.WARN, "ssl factory not found, won't manage https");
    }
    HttpClientBuilder builder = HttpClientBuilder.create();
    builder.setUserAgent(USERAGENT);
    builder.setConnectionTimeToLive(timeout, TimeUnit.SECONDS);
    builder.evictIdleConnections((long) timeout, TimeUnit.SECONDS);
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager(r.build());
    cm.setMaxTotal(maxConnect * 2);
    cm.setDefaultMaxPerRoute(2);
    cm.setValidateAfterInactivity(timeout * 1000);
    builder.setConnectionManager(cm);
    RequestConfig rc = RequestConfig.custom()
            .setConnectionRequestTimeout(timeout * 1000)
            .setConnectTimeout(timeout * 1000)
            .setSocketTimeout(timeout * 1000)
            .build();
    builder.setDefaultRequestConfig(rc);
    client = builder.build();
    return true;
}
Resource Link:
HttpClientStarter.java
HttpClient 4.x
Timeout
The HTTP specification does not say how long a persistent connection may or should remain active. Some HTTP servers use a non-standard Keep-Alive header to tell clients for how many seconds they intend to keep the connection open on the server side; HttpClient takes advantage of this information if it is available. If the Keep-Alive header is not present in the response, HttpClient assumes the connection remains active indefinitely. However, many real-world HTTP servers are configured to drop persistent connections after a certain period of inactivity to conserve system resources, often without notifying the client.
You can override this with a custom keep-alive strategy; here the fallback is set to 5 seconds:
ConnectionKeepAliveStrategy keepAliveStrategy = new DefaultConnectionKeepAliveStrategy() {
    @Override
    public long getKeepAliveDuration(final HttpResponse response, final HttpContext context) {
        long keepAlive = super.getKeepAliveDuration(response, context);
        if (keepAlive == -1) {
            keepAlive = 5000;
        }
        return keepAlive;
    }
};
Connection eviction policy
One of the major shortcomings of the classic blocking I/O model is that a network socket can react to I/O events only while it is blocked in an I/O operation. When a connection is released back to the manager, it can be kept alive, but it is unable to monitor the status of its socket or react to any I/O events. If the connection is closed on the server side, the client-side connection cannot detect the change in connection state and close its own socket to react properly.
HttpClient tries to mitigate this problem by testing whether a connection is stale (no longer valid because it was already closed on the server side) before using it to execute a request. The stale connection check is not 100% reliable and adds 10 to 30 milliseconds to each request execution. The only feasible solution that does not involve a thread per socket for every idle connection is a dedicated monitoring thread that reclaims connections considered expired due to prolonged inactivity. The monitoring thread can periodically call ClientConnectionManager#closeExpiredConnections() to close all expired connections and evict them from the pool, and it can optionally call ClientConnectionManager#closeIdleConnections() to close all connections that have been idle for longer than a given period.
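Roughly what such a monitoring thread could look like, using the 4.3+ HttpClientConnectionManager interface; the class name and the 5-second/30-second intervals are illustrative, modeled on the example in the HttpClient documentation:

import java.util.concurrent.TimeUnit;
import org.apache.http.conn.HttpClientConnectionManager;

public class IdleConnectionMonitorThread extends Thread {

    private final HttpClientConnectionManager connectionManager;
    private volatile boolean shutdown;

    public IdleConnectionMonitorThread(HttpClientConnectionManager connectionManager) {
        this.connectionManager = connectionManager;
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5000); // run the eviction checks every 5 seconds
                    // close connections whose keep-alive has expired
                    connectionManager.closeExpiredConnections();
                    // optionally close connections idle for longer than 30 seconds
                    connectionManager.closeIdleConnections(30, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException ex) {
            // terminate
        }
    }

    public void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }
}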
Resource Link:
http://dev.dafan.info/detail/513285

Apache Mina Idle Monitor

I have been developing my first TCP/socket-based application with Apache Mina; it looks great and makes things easy. I just want to ask a question here about Mina.
The server imposes an idle time of 5 seconds, after which it terminates the socket connection, so we have to send a periodic heartbeat (echo message / keepalive) to make sure the connection stays alive - a sort of keepalive mechanism.
One way is to blindly send an echo/heartbeat message just before every 5 seconds elapse. But I am thinking there should be a smarter, "idle monitor" way: if I am already sending my business messages and never reach the 5-second idle time, I should not issue a heartbeat at all. The heartbeat should only be sent when the connection as a whole is idle, so that we save bandwidth and keep reads and writes on the socket fast.
You can achieve this by using the KeepAliveFilter (already present in Mina).
Alternatively, you can send the echo/heartbeat in a smarter way by setting the client's session idle time a bit smaller than the server's idle time. For example:
For server side
NioSocketAcceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 5);
and for client side it would be
NioSocketConnector.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 3);
Now, if there is no communication for, let's say, 3 seconds, sessionIdle will be triggered on the client side (and it will not be triggered on the server side, as the timeout there is 5 seconds) and you can send an echo. This keeps the session alive, and the echo is sent only when the session is actually idle.
Note: I am assuming that on session idle the session is being closed on the server side. If it is the other way around, you will need to switch the session idle values (e.g. 3 seconds for the server and 5 seconds for the client) and the echo will be sent from the server.
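A sketch of the client-side handler that would send the echo; the ClientHandler name and the "PING" payload are made up for illustration, and the 3-second idle time comes from the setIdleTime call above:

import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IdleStatus;
import org.apache.mina.core.session.IoSession;

public class ClientHandler extends IoHandlerAdapter {

    @Override
    public void sessionIdle(IoSession session, IdleStatus status) throws Exception {
        // fires after 3 seconds without traffic on the client side,
        // i.e. before the server's 5 second idle timeout can trip
        if (status == IdleStatus.BOTH_IDLE) {
            session.write("PING"); // heartbeat only when the session is really idle
        }
    }
}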
(I hope I'm understanding the question correctly)
I was having trouble keeping my session alive and this question came up on Google search results so I'm hoping someone else will find it useful:
@Test
public void testClientWithHeartBeat() throws Exception {
    SshClient client = SshClient.setUpDefaultClient();
    client.getProperties().put(ClientFactoryManager.HEARTBEAT_INTERVAL, "500");
    client.start();
    ClientSession session = client.connect("localhost", port).await().getSession();
    session.authPassword("smx", "smx").await().isSuccess();
    ClientChannel channel = session.createChannel(ClientChannel.CHANNEL_SHELL);
    int state = channel.waitFor(ClientChannel.CLOSED, 2000);
    assertTrue((state & ClientChannel.CLOSED) == 0);
    channel.close(false);
    client.stop();
}
(Source: https://issues.apache.org/jira/browse/SSHD-185)
In newer versions (e.g. version 2.8.0), enabling heartbeats changed to CoreModuleProperties.HEARTBEAT_INTERVAL.set(client, Duration.ofMillis(500));
I'm not sure I totally understand your question, but you can send a heartbeat in an overridden sessionIdle method of the IoHandlerAdapter. You don't necessarily need to close a session just because Mina on the server side calls sessionIdle. As for a more intelligent way of maintaining an active connection between a server and a client without this kind of heartbeat communication, I have never heard of one.
Here is an interesting read on how Microsoft handles heartbeats in ActiveSync. I personally used this methodology when using Mina in my client/server application. Hope this helps you some.

How to set the timeout for a MQTT client?

I'm using the IA92 Java implementation of MQTT, which allows me to connect to an MQTT broker. In order to establish the connection, I'm doing something like this:
// Create connection spec
String mqttConnSpec = "tcp://the_server#the_port";
// Create the client and connect
mqttClient = MqttClient.createMqttClient(mqttConnSpec, null);
mqttClient.connect("the_id", true, 666);
The problem is that sometimes the server takes too much time to send a response, and it throws a timeout exception:
org.apache.harmony.luni.platform.OSNetworkSystem.connectStreamWithTimeoutSocket(OSNetworkSystem.java:130)
at org.apache.harmony.luni.net.PlainSocketImpl.connect(PlainSocketImpl.java:246)
at org.apache.harmony.luni.net.PlainSocketImpl.connect(PlainSocketImpl.java:533)
at java.net.Socket.connect(Socket.java:1055)
at com.ibm.mqtt.j2se.MqttJava14NetSocket.<init>((null):-1)
at com.ibm.mqtt.j2se.MqttJavaNetSocket.setConnection((null):-1)
at com.ibm.mqtt.Mqtt.tcpipConnect((null):-1)
at com.ibm.mqtt.MqttBaseClient.doConnect((null):-1)
at com.ibm.mqtt.MqttBaseClient.connect((null):-1)
at com.ibm.mqtt.MqttClient.connect((null):-1)
at com.ibm.mqtt.MqttClient.connect((null):-1)
What I need to do is setting a timeout manually, instead of letting the mqtt client decide that. The documentation says: There are also methods for setting attributes of the MQ Telemetry Transport connection, such as timeouts and retries.
But, honestly, I haven't found anything about it. I have taken a look at the whole javadoc reference and there's no evidence of timeout configuration. I can't see the source code since it's not open source.
So how can I set the timeout for the Mqtt connection?
If anything is unclear, you can look at MqttConnectOptions for details.
String userName="Ohelig";
String password="Pojke";
MqttClient client = new MqttClient("tcp://192.168.1.4:1883","Sending");
MqttConnectOptions authen = new MqttConnectOptions();
authen.setUserName(userName);
authen.setPassword(password.toCharArray());
authen.setKeepAliveInterval(30);
authen.setConnectionTimeout(300);
client.connect(authen);
I don't know anything about ia92, but I'd imagine that the 666 in the connect() call is what you're trying to set the timeout to?
The timeout the documentation is referring to is probably the keepalive timeout. This is the maximum number of seconds (chosen by the client) that can elapse without communication between the server and client. I think this is what you're most interested in.
Retries, on the other hand, most likely refer to retrying messages that seem to have gone astray when sending with QoS > 0. This is handled by the client library code rather than the broker, and it only comes into play after you've connected, so I very much doubt it's your problem.
To be sure that the keepalive timeout is being set correctly, I'd try pointing your client at a modified mosquitto broker. You can modify mqtt3_handle_connect() in src/read_handle_server.c to print out the keepalive value when you connect. This will ensure it's doing what you think, but won't help with the actual problem I'm afraid!
What broker do you use? Really Small Message Broker V1.1 Alpha, Mosquitto, the broker that comes with IBM WebSphere? You need to set this timeout value in your server configuration, because that is how the system works: you set a keep-alive value in your broker, the client sends a ping before that interval expires so that the broker does not close the client-server connection, and the process repeats. Actually, even if that interval expires, the server will still not close the connection until the 'grace period' ends. See http://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html#connect

java.net.SocketException Connection timed out error

I am getting the error below when trying to connect to a TCP server. My program tries to open around 300-400 connections using different threads, and this happens around the 250th thread. Each thread uses its own connection to send and receive data.
java.net.SocketException: Connection timed out:could be due to invalid address
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:372)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:233)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:220)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:385)
Here is the code I have that a thread uses to get socket:
socket = new Socket(my_hostName, my_port);
Is there any default limit on the number of connections that a TCP server can have at one time? If not, how can I solve this type of problem?
You could be getting a connection timeout if the server has a ServerSocket bound to the port you are connecting to, but is not accepting the connection.
If it always happens with the 250th connection, maybe the server is set up to only accept 250 connections. Someone has to disconnect so you can connect. Or you can increase the timeout; instead of creating the socket like that, create the socket with the empty constructor and then use the connect() method:
Socket s = new Socket();
s.connect(new InetSocketAddress(my_hostName, my_port), 90000);
The default connection timeout is 30 seconds; the code above waits up to 90 seconds to connect and then throws the exception if the connection cannot be established within that time.
You could also set a lower connection timeout and do something else when you catch that exception...
Why all the connections? Is this a test program? In which case be aware that opening large numbers of connections from a single client stresses the client in ways that aren't exercised by real systems with large numbers of different client hosts, so test results from that kind of client aren't all that valid. You could be running out of client ports, or some other client resource.
If it isn't a test program, same question. Why all the connections? You'd be better off running a connection pool and reusing a much smaller number of connections serially. The network only has so much bandwidth after all; dividing it by 400 isn't very useful.

PDA loses TCP connection to ServerSocket in Suspend Mode

I'm implementing a java TCP/IP Server using ServerSocket to accept messages from clients via network sockets.
Works fine, except for clients on PDAs (a WIFI barcode scanner).
If I have a connection between the server and the PDA, and the PDA goes into suspend (standby) after some idle time, then there will be problems with the connection.
When the PDA wakes up again, I can observe in a TCP monitor that a second connection with a different port is established, but the old one remains established too:
localhost:2000 remotehost:4899 ESTABLISHED (first connection)
localhost:2000 remotehost:4890 ESTABLISHED (connection after wakeup)
And now communication doesn't work, as the client now uses the new connection, but the server still listens on the old one - so the server doesn't receive the messages. But when the server sends a message to the client, it realizes the problem (it receives a SocketException: Connection reset). The server then uses the new connection, and all the messages the client sent in the meantime are received in a single blow!
So I only notice the network problem when the server tries to send a message - until then there are no exceptions or anything. How can I properly react to this problem, so that the new connection is used as soon as it is established (and the old one closed)?
From your description I guess that the server is structured like this:
server_loop
{
    client_socket = server_socket.accept()
    TalkToClientUntilConnectionCloses(client_socket)
}
I'd change it to process incoming connections and established connections in parallel. The simplest approach (from the implementation point of view) is to start a new thread for each client. It is not a good approach in general (it has poor scalability), but if you don't expect a lot of clients and can afford it, just change the server like this:
server_loop
{
    client_socket = server_socket.accept()
    StartClientThread(client_socket)
}
As a bonus, you get an ability to handle multiple clients simultaneously (and all the troubles attached too).
It sounds like the major issue is that you want the server to realize and drop the old connections as they become stale.
Have you considered setting a timeout on the connection on the server-side socket (the connection Socket, not the ServerSocket) so you can close/drop it after a certain period? Perhaps after the SO_TIMEOUT expires on the Socket, you could test it with an echo/keepalive command to verify that the connection is still good.
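A minimal sketch of that idea on the server side, assuming one thread per accepted connection; the 60-second value is arbitrary and port 2000 is taken from the TCP monitor output in the question:

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

ServerSocket serverSocket = new ServerSocket(2000);
while (true) {
    final Socket client = serverSocket.accept();
    new Thread(() -> {
        try {
            client.setSoTimeout(60 * 1000); // give up on a silent client after 60 seconds
            InputStream in = client.getInputStream();
            byte[] buffer = new byte[1024];
            int read;
            while ((read = in.read(buffer)) != -1) {
                // handle the received message ...
            }
        } catch (SocketTimeoutException e) {
            // nothing received for 60 seconds: the PDA is probably suspended;
            // optionally send an echo/keepalive probe here before giving up
        } catch (IOException e) {
            // connection reset etc.
        } finally {
            try {
                client.close(); // drop the stale connection; the next wakeup creates a new one
            } catch (IOException ignored) {
            }
        }
    }).start();
}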
