reactor-netty: using keep-alive HTTP client - java

I use reactor-netty to request a set of URLs. The majority of the URLs belong to the same hosts. reactor-netty seems to make a brand new TCP connection for every URL, even if a connection to the host is already established from the previous URL. Some servers drop new connections or start to respond slowly when hundreds of simultaneous connections are established.
Sample of the code:
Flux.just(...)
    .groupBy(link -> {
        String host = "";
        try {
            host = new URL(link).getHost();
        } catch (MalformedURLException e) {
            LOGGER.warn("Cannot determine host {}", link, e);
        }
        return host;
    })
    .flatMap(group -> {
        HttpClient client = HttpClient.create()
                .keepAlive(true)
                .tcpConfiguration(tcp -> tcp.host(group.key()));
        return group.flatMap(link -> client.get()
                .uri(link)
                .response((resp, cont) -> resp.status().code() == 200 ? cont.aggregate().asString() : Mono.empty())
                .doOnSubscribe(s -> LOGGER.debug("Requesting {}", link))
                .timeout(Duration.ofMinutes(1))
                .doOnError(e -> LOGGER.warn("Cannot get response from {}", link, e))
                .onErrorResume(e -> Flux.empty())
                .collect(Collectors.joining())
                .filter(s -> StringUtils.isNotBlank(s)));
    })
    .blockLast();
In the log I see that the local ports are different for the same remote host, and that the sum of active and inactive connections is far higher than the number of distinct hosts. That's why I think reactor-netty is not reusing already established connections.
DEBUG [2019-04-29 08:15:18,711] reactor-http-nio-10 r.n.r.PooledConnectionProvider: [id: 0xaed18e87, L:/192.168.1.183:56832 - R:capcp2.naad-adna.pelmorex.com/52.242.33.4:80] Releasing channel
DEBUG [2019-04-29 08:15:18,711] reactor-http-nio-10 r.n.r.PooledConnectionProvider: [id: 0xaed18e87, L:/192.168.1.183:56832 - R:capcp2.naad-adna.pelmorex.com/52.242.33.4:80] Channel cleaned, now 1 active connections and 239 inactive connections
...
DEBUG [2019-04-29 08:15:20,158] reactor-http-nio-10 r.n.r.PooledConnectionProvider: [id: 0xd6c6c5db, L:/192.168.1.183:56965 - R:capcp2.naad-adna.pelmorex.com/52.242.33.4:80] Releasing channel
DEBUG [2019-04-29 08:15:20,158] reactor-http-nio-10 r.n.r.PooledConnectionProvider: [id: 0xd6c6c5db, L:/192.168.1.183:56965 - R:capcp2.naad-adna.pelmorex.com/52.242.33.4:80] Channel cleaned, now 0 active connections and 240 inactive connections
Is it possible to request several URLs on the same host using a keep-alive HTTP client through the same TCP connection to the host? If not, how do I restrict the number of simultaneous connections to the same host, or perform requests to the same host sequentially (the next request only after receiving the response to the previous one)?
I use the Californium-SR6 release train.

Yes, Reactor Netty supports keep-alive, connection reuse, and connection pooling.
Note that .flatMap is an asynchronous operation that processes the inner streams in parallel. Therefore, when you call group.flatMap(...), the inner requests are executed in parallel, and since they are executed in parallel, multiple connections need to be established.
If you want to execute requests to the same host sequentially, change your example to use group.concatMap instead of .flatMap.
If you still want to execute them in parallel but limit the number of active requests to an individual host, then change your example to use one of the overloaded versions of .flatMap that take a concurrency parameter.
Also, since you are using HttpClient.create(), your example uses the default global http connection pool. If you want more control over connection pooling, you can specify a different ConnectionProvider via HttpClient.create(ConnectionProvider).
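For example, a rough sketch combining these suggestions (assuming a Reactor Netty 0.8.x / Californium-era API; the pool name, the limit of 50, and the hostOf(...) helper are illustrative, with hostOf standing in for the host extraction shown in the question):
ConnectionProvider provider = ConnectionProvider.fixed("bounded-pool", 50); // illustrative name and limit

Flux.just(/* links */)
    .groupBy(link -> hostOf(link))                  // hostOf(...) = the host extraction from the question, factored out
    .flatMap(group -> {
        HttpClient client = HttpClient.create(provider)
                .keepAlive(true);
        // concatMap: strictly one request at a time per host;
        // group.flatMap(fn, 4) would instead allow at most 4 requests in flight per host.
        return group.concatMap(link -> client.get()
                .uri(link)
                .responseContent()
                .aggregate()
                .asString()
                .onErrorResume(e -> Mono.empty()));
    })
    .blockLast();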

Related

Apache HttpClient Keep-Alive Strategy for active connections

In an Apache HttpClient with a PoolingHttpClientConnectionManager, does the Keep-Alive strategy change the amount of time that an active connection will stay alive until it is removed from the connection pool? Or will it only close idle connections?
For example, if I set my Keep-Alive strategy to return 5 seconds for every request, and I use the same connection to hit a single URL/route once every 2 seconds, will my keep-alive strategy cause this connection to leave the pool? Or will it stay in the pool, because the connection is not idle?
I just tested this and confirmed that the Keep-Alive strategy only removes idle connections from the HttpClient's connection pool after the Keep-Alive duration has passed. In fact, the Keep-Alive duration determines whether or not the connection is considered idle: if the Keep-Alive strategy says to keep connections alive for 10 seconds and we receive responses from the server every 2 seconds, the connection will be kept alive for 10 seconds after the last successful response.
The test that I ran was as follows:
I set up an Apache HttpClient (using a PoolingHttpClientConnectionManager) with the following ConnectionKeepAliveStrategy:
ConnectionKeepAliveStrategy keepAliveStrategy = (httpResponse, httpContext) -> {
    // Honor the server's 'Keep-Alive: timeout=...' header if present
    HeaderElementIterator it = new BasicHeaderElementIterator(
            httpResponse.headerIterator(HTTP.CONN_KEEP_ALIVE));
    while (it.hasNext()) {
        HeaderElement he = it.nextElement();
        String param = he.getName();
        String value = he.getValue();
        if (value != null && param.equalsIgnoreCase("timeout")) {
            try {
                return Long.parseLong(value) * 1000;
            } catch (NumberFormatException ignore) {
            }
        }
    }
    if (keepAliveDuration <= 0) {
        return -1; // the connection will stay alive indefinitely.
    }
    return keepAliveDuration * 1000;
};
I created an endpoint on my application which used the HttpClient to make a GET request to a URL behind a DNS.
I wrote a program to hit that endpoint every 1 second.
I changed my local DNS so that the address the HttpClient was sending GET requests to pointed to a dummy host that would not respond to requests (this was done by editing my /etc/hosts file).
When I had set the keepAliveDuration to -1 seconds, even after changing the DNS to point to the dummy URL, the HttpClient would continuously send requests to the old IP address, despite the DNS change. I kept this test running for 1 hour and it continued to send requests to the old IP address associated with the stale DNS. This would happen indefinitely, as my ConnectionKeepAliveStrategy had been configured to keep the connection to the old URL alive indefinitely.
When I had set the keepAliveDuration to 10, after changing my DNS I kept sending successful requests continuously for about an hour. It wasn't until I turned off my load test and waited 10 seconds that a new connection was created. This means that the ConnectionKeepAliveStrategy removed the connection from the HttpClient's connection pool 10 seconds after the last successful response from the server.
Conclusion
By default, if an HttpClient does not receive a Keep-Alive header in a response it gets from a server, it assumes its connection to that server can be kept alive indefinitely, and will keep that connection in its PoolingHttpClientConnectionManager indefinitely.
If you set a ConnectionKeepAliveStrategy like I did, then it will derive a Keep-Alive duration for each response from the server. Having a Keep-Alive duration on the response will cause the connection to leave the connection pool once that duration has passed since the last successful response from the server. This means that only idle connections are affected by the Keep-Alive duration, and "idle connections" are connections that haven't been used for longer than the Keep-Alive duration.
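For completeness, here is a minimal sketch (not the exact client setup used in the test above) of how such a strategy could be plugged into a pooled client; evictExpiredConnections() is what actually removes connections once their keep-alive has expired:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(cm)
        .setKeepAliveStrategy(keepAliveStrategy) // the strategy shown above
        .evictExpiredConnections()               // background thread evicts connections past their keep-alive
        .build();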

Is there a way to simulate Socket and Connection timeout?

I have a certain piece of code that integrates with a third party using HTTP connection, which handles socket timeout and connection timeout differently.
I have been trying to simulate and test all the scenarios which could arise from the third party. I was able to test the connection timeout by connecting to a port which is blocked by the server's firewall, e.g. port 81.
However, I'm unable to simulate a socket timeout. If my understanding is correct, a socket timeout is associated with the continuous flow of packets, and with the connection dropping in between. Is there a way I can simulate this?
So we are talking about two kinds of timeouts here: one is the time it takes to connect to the server (connect timeout), the other happens when no data is sent or received via the socket for a while (idle timeout).
Node sockets have a socket timeout that can be used to synthesize both the connect and the idle timeout. This can be done by setting the socket timeout to the connect timeout and then, once connected, setting it to the idle timeout.
example:
const request = http.request(url, {
    timeout: connectTimeout,
});
request.setTimeout(idleTimeout);
This works because the timeout in the options is set immediately when the socket is created, while the setTimeout function is applied to the socket once it is connected!
Anyway, the question was about how to test the connect timeout. Ok, so let's first park the idle timeout. We can test that simply by not sending any data for some time; that will cause the timeout. Check!
The connect timeout is a bit harder to test, the first thing that comes to mind is that we need a place to connect to that will not error, but also not connect. This would cause a timeout. But how the hell do we simulate that, in node?
If we think a little bit outside the box, we might figure out that this timeout is about the time it takes to connect. It does not matter why the connection takes as long as it does; we simply need to delay the time it takes to connect. This is not necessarily a server-side thing, we could also do it on the client. After all, the client is the part doing the connecting, so if we can delay it there, we can test the timeout.
So how could we delay the connection on the client side? Well, we can use the DNS lookup for that. Before the connection is made, a DNS lookup is done. If we simply delay that by 5 seconds or so we can test for the connect timeout very easily.
This is what the code could look like:
import * as dns from "dns";
import * as http from "http";

const url = new URL("http://localhost:8080");
const request = http.request(url, {
    timeout: 3 * 1000, // connect timeout
    lookup(hostname, options, callback) {
        setTimeout(
            () => dns.lookup(hostname, options, callback),
            5 * 1000,
        );
    },
});
request.setTimeout(10 * 1000); // idle timeout
request.addListener("timeout", () => {
    const message = !request.socket || request.socket.connecting ?
        `connect timeout while connecting to ${url.href}` :
        `idle timeout while connected to ${url.href}`;
    request.destroy(new Error(message));
});
In my projects I usually use an agent that I inject. The agent then has the delayed lookup. Like this:
import * as dns from "dns";
import * as http from "http";

const url = new URL("http://localhost:8080");
const agent = new http.Agent({
    lookup(hostname, options, callback) {
        setTimeout(
            () => dns.lookup(hostname, options, callback),
            5 * 1000,
        );
    },
});
const request = http.request(url, {
    timeout: 3 * 1000, // connect timeout
    agent,
});
request.setTimeout(10 * 1000); // idle timeout
request.addListener("timeout", () => {
    const message = !request.socket || request.socket.connecting ?
        `connect timeout while connecting to ${url.href}` :
        `idle timeout while connected to ${url.href}`;
    request.destroy(new Error(message));
});
Happy coding!
"Connection timeout" determines how long it may take for a TCP connection to be established and this all happens before any HTTP related data is send over the line. By connecting to a blocked port, you have only partially tested the connection timeout since no connection was being made. Typically, a TCP connection on your local network is created (established) very fast. But when connecting to a server on the other side of the world, establishing a TCP connection can take seconds.
"Socket timeout" is a somewhat misleading name - it just determines how long you (the client) will wait for an answer (data) from the server. In other words, how long the Socket.read() function will block while waiting for data.
Properly testing these functions involves creating a server socket or a (HTTP) web-server that you can modify to be very slow. Describing how to create and use a server socket for connection timeout testing (if that is possible) is too much to answer here, but socket timeout testing is a common question - see for example here (I just googled "mock web server for testing timeouts") which leads to a tool like MockWebServer. "MockWebServer" might have an option for testing connection timeouts as well (I have not used "MockWebServer"), but if not, another tool might have.
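For the socket timeout side, a rough sketch of what such a test could look like with okhttp's MockWebServer (not something verified above; the URL path is illustrative):
MockWebServer server = new MockWebServer();
// Accept the connection and the request, but never send a response,
// which should trip the client's socket/read timeout.
server.enqueue(new MockResponse().setSocketPolicy(SocketPolicy.NO_RESPONSE));
server.start();
String slowUrl = server.url("/slow").toString();
// Point the client under test at slowUrl with a socket timeout of a few
// seconds and assert that the call fails with a timeout instead of hanging.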
On a final note: it is good you are testing your usage of the third-party HTTP library with respect to timeout settings, even if this takes some effort. The worst that can happen is that a socket timeout setting in your code is somehow not used by the library and the default socket timeout of "wait forever" is used. That can result in your application doing absolutely nothing ("hanging") for no apparent reason.

netty client takes very long before broken network is detected

I am using netty.io (4.0.4) in a Java application to implement a TCP client that communicates with an external hardware driver. One of the requirements of this hardware is that the client sends a KEEP_ALIVE (heart-beat) message every 30 seconds; the hardware, however, does not respond to this heart-beat.
My problem is, when the connection is abruptly broken (eg: network cable unplugged) the client is completely unaware of this, and keeps sending the KEEP_ALIVE message for much longer (around 5-10 minutes) before it gets an operation timeout exception.
In other words, from the client side there is no way to tell if it's still connected.
Below is a snippet of my bootstrap setup if it helps
// bootstrap setup
bootstrap = new Bootstrap().group(group)
        .channel(NioSocketChannel.class)
        .option(ChannelOption.SO_KEEPALIVE, true)
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 3000)
        .remoteAddress(ip, port)
        .handler(tcpChannelInitializer);

// part of the pipeline responsible for keep alive messages
pipeline.addLast("idleStateHandler", new IdleStateHandler(0, 0, 30, TimeUnit.SECONDS));
pipeline.addLast("keepAliveHandler", keepAliveMessageHandler);
I would expect that, since the client is sending keep-alive messages and those messages are not received at the other end, a missing acknowledgement would indicate a problem with the connection much earlier?
EDIT
Code from the KeepAliveMessageHandler
public class KeepAliveMessageHandler extends ChannelDuplexHandler
{
    private static final Logger LOGGER = getLogger(KeepAliveMessageHandler.class);
    private static final String KEEP_ALIVE_MESSAGE = "";

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception
    {
        if (!(evt instanceof IdleStateEvent)) {
            return;
        }
        IdleStateEvent e = (IdleStateEvent) evt;
        Channel channel = ctx.channel();
        if (e.state() == IdleState.ALL_IDLE) {
            LOGGER.info("Sending KEEP_ALIVE_MESSAGE");
            channel.writeAndFlush(KEEP_ALIVE_MESSAGE);
        }
    }
}
EDIT 2
I tried to explicitly ensure the keep-alive message is delivered using the code below:
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception
{
    if (!(evt instanceof IdleStateEvent)) {
        return;
    }
    IdleStateEvent e = (IdleStateEvent) evt;
    Channel channel = ctx.channel();
    if (e.state() == IdleState.ALL_IDLE) {
        LOGGER.info("Sending KEEP_ALIVE_MESSAGE");
        channel.writeAndFlush(KEEP_ALIVE_MESSAGE).addListener(future -> {
            if (!future.isSuccess()) {
                LOGGER.error("KEEP_ALIVE message write error");
                channel.close();
            }
        });
    }
}
This also does not work. :( According to this answer this behavior makes sense, but I am still hoping there is some way to figure out whether the write was a "real" success. (Having the hardware ack the heart-beat is not possible.)
You have enabled the TCP Keepalive
.option(ChannelOption.SO_KEEPALIVE, true)
But in your code I can't see anything that ensures a keepalive is sent every 30 seconds.
If a connection has been terminated due to a TCP Keepalive time-out and the other host eventually sends a packet for the old connection, the host that terminated the connection will send a packet with the RST flag set to signal the other host that the old connection is no longer active. This will force the other host to terminate its end of the connection so a new connection can be established.
Typically TCP Keepalives are sent every 45 or 60 seconds on an idle TCP connection, and the connection is dropped after 3 sequential ACKs are missed. This varies by host; e.g., by default Windows PCs send the first TCP Keepalive packet after 7,200,000 ms (2 hours), then send 5 Keepalives at 1000 ms intervals, dropping the connection if there is no response to any of the Keepalive packets.
(taken from http://ltxfaq.custhelp.com/app/answers/detail/a_id/1512/~/tcp-keepalives-explained)
I do understand now that
pipeline.addLast("idleStateHandler", new IdleStateHandler(0, 0, 30, TimeUnit.SECONDS));
pipeline.addLast("keepAliveHandler", keepAliveMessageHandler);
will trigger an idle event every 30 seconds of mutual inactivity, and keepAliveMessageHandler will send a packet to the remote side in that case.
Unfortunately
ChannelFuture future = channel.writeAndFlush(KEEP_ALIVE_MESSAGE);
is considered a success as soon as the message is written to the OS buffers.
It seems that under your conditions you have only 2 options:
1. Sending a command that will get some response from the external device (something that will not cause disruption). But I would assume that this is impossible in your case.
2. Modifying the underlying TCP driver settings. The default OS settings for TCP keepalive are more about conserving system resources to support a large number of applications and connections. Provided that you have a dedicated system, you may set a more aggressive TCP keepalive configuration.
Here is the link on how to make adjustments to linux kernel: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html
This should work on plain installations as well as in VMs and Docker containers.
General information on the topic: https://blog.stephencleary.com/2009/05/detection-of-half-open-dropped.html
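If upgrading Netty and running on Linux is an option, the same kernel keepalive parameters can also be set per socket through the native epoll transport. A hedged sketch (this requires the netty-transport-native-epoll dependency, which is newer than 4.0.4; the values are illustrative, not recommendations):
EventLoopGroup group = new EpollEventLoopGroup();
Bootstrap bootstrap = new Bootstrap().group(group)
        .channel(EpollSocketChannel.class)
        .option(ChannelOption.SO_KEEPALIVE, true)
        .option(EpollChannelOption.TCP_KEEPIDLE, 30)   // start probing after 30s of idle time
        .option(EpollChannelOption.TCP_KEEPINTVL, 10)  // probe every 10s
        .option(EpollChannelOption.TCP_KEEPCNT, 3)     // consider the peer gone after 3 missed probes
        .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 3000)
        .remoteAddress(ip, port)
        .handler(tcpChannelInitializer);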

Play 2.5 WebSocket Connection Build

I have an AWS server (medium) running in EU West with roughly 250 devices connected, which are also constantly reconnecting due to internet connectivity issues. For some reason the number of TCP connections to the server grows until it reaches around 4300, after which no new connections to the server are allowed. I have confirmed that this is isolated to WebSocket requests and not regular HTTP requests.
WebSocket connections are kept per device in a Map with device UUID as key; it sometimes happens that a device will send a request for a new WS connection even though the server has a connection to the device. In this case, the current connection is closed, and an error is returned so that the device can retry the connection request.
Below is the code snippet from the Controller handling the connections using LegacyWebSocket. Connections are closed using out.close() as per https://www.playframework.com/documentation/2.5.x/JavaWebSockets#handling-websockets-using-callbacks
public LegacyWebSocket<String> create(String uuid) {
    logger.debug("NEW WebSocket request from {}, creating new socket...", uuid);
    if (webSocketMap.containsKey(uuid)) {
        logger.debug("WebSocket already exists for {}, closing existing connection", uuid);
        webSocketMap.get(uuid).close();
        logger.debug("Responding forbidden to force WS restart from device {}", uuid);
        return WebSocket.reject(forbidden());
    }
    LegacyWebSocket<String> ws = WebSocket.whenReady((in, out) -> {
        logger.debug("Adding downstream connection to webSocketMap-> {} webSocketMap.size() = {}", uuid, webSocketMap.size());
        webSocketMap.put(uuid, out);
        // For each event received on the socket,
        in.onMessage(message -> {
            if (message.equals("ping")) {
                logger.debug("PING received from {} {}", uuid, message);
                out.write("pong");
            }
        });
        // When the socket is closed.
        in.onClose(() -> {
            logger.debug("onClose, removing for {}", uuid);
            webSocketMap.remove(uuid);
        });
    });
    return ws;
}
How can I ensure that Play Framework closes the TCP connection for closed WS connections?
The call that I use to check the amount of TCP connections is netstat -n -t | wc -l
Looks like a TCP keep-alive issue - i.e. that the TCP connections become stale because of connectivity issues on the client side and the server does not handle or clean up the stale connections in time before the limit is reached.
This link will help you configure the TCP keep-alive on your server to ensure that the stale connections are cleaned up in time.
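Beyond kernel-level keep-alive tuning, an application-level complement could be to track the last ping per device and close sockets that have gone quiet. This is only a sketch, reusing the webSocketMap from the question; the lastSeen map, the 2-minute threshold, and the 60-second interval are illustrative, and lastSeen would need to be updated wherever "ping" is handled in onMessage:
private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
private final ScheduledExecutorService reaper = Executors.newSingleThreadScheduledExecutor();

public void startReaper() {
    reaper.scheduleAtFixedRate(() -> {
        long cutoff = System.currentTimeMillis() - 120_000; // 2 minutes without a ping
        lastSeen.forEach((uuid, ts) -> {
            if (ts < cutoff) {
                WebSocket.Out<String> out = webSocketMap.remove(uuid);
                if (out != null) {
                    out.close(); // ask Play to close the WS (and its underlying TCP connection)
                }
                lastSeen.remove(uuid);
            }
        });
    }, 60, 60, TimeUnit.SECONDS);
}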

Apache HttpClient: setConnectTimeout() vs. setConnectionTimeToLive() vs. setSocketTimeout()

Can someone please explain what is the difference between these two:
client = HttpClientBuilder.create()
        .setConnectionTimeToLive(1, TimeUnit.MINUTES)
        .build();
and
RequestConfig requestConfig = RequestConfig.custom().setConnectTimeout(30 * 1000).build();
client = HttpClientBuilder
        .create()
        .setDefaultRequestConfig(requestConfig)
        .build();
Is it better to use setSocketTimeout()?
A ConnectTimeout determines the maximum time to wait for the other side to answer "yes, I'm here, let's talk" when creating a new connection (ConnectTimeout eventually calls socket.connect(address, timeout)). The wait time is usually less than a second, unless the other side is really busy just accepting new incoming connections or you have to go through the great firewall of China. In the latter cases it can be a minute (or more) before the new connection is created. If the connection is not established within the ConnectTimeout, you get an error (1).
setSocketTimeout eventually calls socket.setSoTimeout which is explained in this answer.
The ConnectionTimeToLive determines the maximum age of a connection (after which it will be closed), regardless of when the connection was last used. Normally, there is an "idle timeout" to clean up connections, i.e. you or the other side will close a connection that has not been used for a while. Typically, you will close an idle connection before the other side does to prevent errors. But there are two other cases I can think of where a maximum age for a connection is useful:
Bad network components: count yourself lucky if you have not met them. Some bad routers, firewalls, proxies, etc. will just drop (actively being used) connections after something like 30 minutes. Since you and the other side may not even be aware that a connection was dropped, you can get "connection reset" errors for no obvious reason at weird times.
Cached meta-data: most systems keep some meta-data about a connection in some sort of cache. Some systems manage this cache badly - cache size just grows with the age of the connection.
A note about the ConnectionTimeToLive implementation in Apache HttpClient 4.5.4: I think you must use the PoolingHttpClientConnectionManager for the option to work (it eventually all comes down to a call to this isExpired method). If you do not use this connection manager, test the option to make sure it really works.
(1) Interesting comment from EJP user207421 on this related answer
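To see these settings side by side, a minimal sketch (the values are illustrative, not recommendations):
RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(30 * 1000)           // max time to establish the TCP connection
        .setSocketTimeout(30 * 1000)            // max inactivity while waiting for data
        .setConnectionRequestTimeout(5 * 1000)  // max wait for a connection from the pool
        .build();

CloseableHttpClient client = HttpClientBuilder.create()
        .setConnectionManager(new PoolingHttpClientConnectionManager())
        .setConnectionTimeToLive(1, TimeUnit.MINUTES) // max age of a pooled connection
        .setDefaultRequestConfig(requestConfig)
        .build();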
Connection Timeout:
It is the timeout until a connection with the server is established.
Socket Timeout:
This is the maximum time of inactivity to wait for a data packet to be received.
setConnectionRequestTimeout:
This one is specific to the connection manager: it is the time to fetch a connection from the connection pool.
It is the timeout in milliseconds used when requesting a connection from the connection manager; 0 (zero) means an infinite timeout.
setConnectionTimeToLive
public final HttpClientBuilder setConnectionTimeToLive(long connTimeToLive, TimeUnit connTimeToLiveTimeUnit)
Sets maximum time to live for persistent connections
Please note this value can be overridden by the setConnectionManager(org.apache.http.conn.HttpClientConnectionManager) method.
Since:
4.4
Example: HttpClientStarter.java
@Override
public boolean start() {
    RegistryBuilder<ConnectionSocketFactory> r = RegistryBuilder.<ConnectionSocketFactory> create();
    // Register http and its plain socket factory
    final SocketFactory ss = getLevel().find(SocketFactory.class);
    ConnectionSocketFactory plainsf = new PlainConnectionSocketFactory() {
        @Override
        public Socket createSocket(HttpContext context) throws IOException {
            return ss.createSocket();
        }
    };
    r.register("http", plainsf);
    // Register https
    ConnectionSocketFactory sslfactory = getSSLSocketFactory();
    if (sslfactory != null) {
        r.register("https", getSSLSocketFactory());
    } else {
        log(Level.WARN, "ssl factory not found, won't manage https");
    }
    HttpClientBuilder builder = HttpClientBuilder.create();
    builder.setUserAgent(USERAGENT);
    builder.setConnectionTimeToLive(timeout, TimeUnit.SECONDS);
    builder.evictIdleConnections((long) timeout, TimeUnit.SECONDS);
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager(r.build());
    cm.setMaxTotal(maxConnect * 2);
    cm.setDefaultMaxPerRoute(2);
    cm.setValidateAfterInactivity(timeout * 1000);
    builder.setConnectionManager(cm);
    RequestConfig rc = RequestConfig.custom()
            .setConnectionRequestTimeout(timeout * 1000)
            .setConnectTimeout(timeout * 1000)
            .setSocketTimeout(timeout * 1000)
            .build();
    builder.setDefaultRequestConfig(rc);
    client = builder.build();
    return true;
}
Resource Link:
HttpClientStarter.java
HttpClient 4.x
Timeout
The HTTP specification does not determine how long a persistent connection may or should remain active. Some HTTP servers use a non-standard header, Keep-Alive, to tell clients the number of seconds they want to stay connected on the server side. HttpClient will take advantage of this if the information is available. If the Keep-Alive header is not present in the response, HttpClient assumes the connection remains active indefinitely. However, many real-world HTTP servers are configured to discard persistent connections after a certain period of inactivity in order to conserve system resources, often without notifying the client.
Here you can override that behavior; in the example below the fallback keep-alive duration is set to 5 seconds:
ConnectionKeepAliveStrategy keepAliveStrategy = new DefaultConnectionKeepAliveStrategy() {
    @Override
    public long getKeepAliveDuration(final HttpResponse response, final HttpContext context) {
        long keepAlive = super.getKeepAliveDuration(response, context);
        if (keepAlive == -1) {
            keepAlive = 5000;
        }
        return keepAlive;
    }
};
Connection eviction policy
The main disadvantage of the classic blocking I/O model is that a network socket can react to I/O events only while it is blocked in an I/O operation. When a connection is released back to the manager, it can be kept alive, but it cannot monitor the status of the socket or respond to any I/O events. If the connection is closed on the server side, the client-side connection cannot detect the change in connection state and close its local socket in response.
HttpClient tries to alleviate this problem by testing, before using a connection to execute an HTTP request, whether the connection is stale, i.e. no longer valid because it was already closed on the server side. The stale-connection check is not 100% reliable and adds 10 to 30 milliseconds to each request execution. The only solution that does not require a thread per idle connection is to use a dedicated monitoring thread to reclaim connections that are considered expired due to prolonged inactivity. The monitoring thread can periodically call the ClientConnectionManager#closeExpiredConnections() method to close all expired connections and evict the closed connections from the connection pool. It can also optionally call the ClientConnectionManager#closeIdleConnections() method to close all connections that have been idle for longer than a given period of time.
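A sketch of that monitoring-thread approach (assuming connectionManager is the PoolingHttpClientConnectionManager behind the client; the 5-second interval and 30-second idle limit are illustrative):
ScheduledExecutorService evictor = Executors.newSingleThreadScheduledExecutor();
evictor.scheduleAtFixedRate(() -> {
    connectionManager.closeExpiredConnections();                  // drop connections past their keep-alive
    connectionManager.closeIdleConnections(30, TimeUnit.SECONDS); // drop connections idle for 30+ seconds
}, 5, 5, TimeUnit.SECONDS);
In HttpClient 4.4+, HttpClientBuilder.evictExpiredConnections() and evictIdleConnections(...) (the latter is used in the example above) set up this kind of background eviction for you.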
Resource Link:
http://dev.dafan.info/detail/513285
