Unable to configure "Keep Alive" in Camel HTTP component - java

I'm having some trouble with the right setup of the HTTP component. Currently a microservice pulls JSON content from a provider, processes it and sends it to the next service for further processing. The main problem is that this microservice creates a ton of CLOSE_WAIT socket connections. I understand that the whole concept of keep-alive is to keep the connection open until I close it, but it's possible that the server drops the connection for some reason and leaves these CLOSE_WAIT sockets behind.
I've created a small service for debugging / testing purposes which makes a GET call to Google, but even this connection stays open until I close the program. I've tried many different solutions:
.setHeader("Connection", constant("Close"))
-Dhttp.keepAlive=false as VM argument
Switching from Camel-Http to Camel-Http4
httpClient.soTimeout=500 (Camel-HTTP), httpClient.socketTimeout=500 and connectionTimeToLive=500 (Camel-HTTP4)
.setHeader("Connection", simple("Keep-Alive")) and
.setHeader("Keep-Alive", simple("timeout=10")) (Camel-HTTP4)
Changing the return value of DefaultConnectionKeepAliveStrategy from -1 (keep alive indefinitely) to a specific value via the debugger in Camel-HTTP4 - that worked, but I was not able to inject my own strategy.
but I had no success. So maybe one of you can help me:
How can I tell Camel-HTTP that it should close a connection after a specific time has passed? For example, the service pulls from the content provider every hour. After 3-4 hours the HttpComponent should close the connection after the pull and reopen it for the next pull. Currently every connection is put back into the MultiThreadedHttpConnectionManager and the socket stays open.
If that's not possible with Camel-HTTP: how can I inject an HttpClientBuilder into the creation of my route? I know it should be possible via the httpClient option, but I don't understand that specific part of the documentation.
Thank you all for your help

Unfortunately none of the proposed answers got rid of the CLOSE_WAIT connection state on my side; the connections remained in that state until the application was finally closed.
I reproduced this problem with the following test case:
public class HttpInvokationTest extends CamelSpringTestSupport {

    private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    @EndpointInject(uri = "mock:success")
    private MockEndpoint successEndpoint;

    @EndpointInject(uri = "mock:failure")
    private MockEndpoint failureEndpoint;

    @Override
    protected AbstractApplicationContext createApplicationContext() {
        return new AnnotationConfigApplicationContext(ContextConfig.class);
    }

    @Configuration
    @Import(HttpClientSpringTestConfig.class)
    public static class ContextConfig extends CamelConfiguration {
        @Override
        public List<RouteBuilder> routes() {
            List<RouteBuilder> routes = new ArrayList<>(1);
            routes.add(new RouteBuilder() {
                @Override
                public void configure() {
                    from("direct:start")
                        .log(LoggingLevel.INFO, LOG, "Invoking external URL: ${header[TEST_URL]}")
                        .setHeader("Connection", constant("close"))
                        .recipientList(header("TEST_URL"))
                        .log(LoggingLevel.DEBUG, "HTTP response code: ${header[" + Exchange.HTTP_RESPONSE_CODE + "]}")
                        .bean(CopyBodyToHeaders.class)
                        .choice()
                            .when(header(Exchange.HTTP_RESPONSE_CODE).isGreaterThanOrEqualTo(300))
                                .to("mock:failure")
                            .otherwise()
                                .to("mock:success");
                }
            });
            return routes;
        }
    }

    @Test
    public void testHttpInvocation() throws Exception {
        successEndpoint.expectedMessageCount(1);
        failureEndpoint.expectedMessageCount(0);

        ProducerTemplate template = context.createProducerTemplate();
        template.sendBodyAndHeader("direct:start", null, "TEST_URL", "http4://meta.stackoverflow.com");

        successEndpoint.assertIsSatisfied();
        failureEndpoint.assertIsSatisfied();

        Exchange exchange = successEndpoint.getExchanges().get(0);
        Map<String, Object> headers = exchange.getIn().getHeaders();
        String body = exchange.getIn().getBody(String.class);
        for (String key : headers.keySet()) {
            LOG.info("Header: {} -> {}", key, headers.get(key));
        }
        LOG.info("Body: {}", body);

        Thread.sleep(120000);
    }
}
and issuing netstat -ab -p tcp | grep 151.101.129.69 calls, where the IP is that of meta.stackoverflow.com.
This gave responses like:
tcp4 0 0 192.168.0.10.52183 151.101.129.69.https ESTABLISHED 37562 2118
tcp4 0 0 192.168.0.10.52182 151.101.129.69.http ESTABLISHED 885 523
right after the invocation, followed by
tcp4 0 0 192.168.0.10.52183 151.101.129.69.https CLOSE_WAIT 37562 2118
tcp4 0 0 192.168.0.10.52182 151.101.129.69.http CLOSE_WAIT 885 523
responses until the application was closed, due to the Connection: keep-alive header, even with a configuration like the one below:
@Configuration
@EnableConfigurationProperties(HttpClientSettings.class)
public class HttpClientSpringTestConfig {

    private final static Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    @Resource
    private HttpClientSettings httpClientSettings;
    @Resource
    private CamelContext camelContext;

    private SocketConfig httpClientSocketConfig() {
        /*
         * socket timeout:
         * Monitors the time passed between two consecutive incoming messages over the connection and
         * raises a SocketTimeoutException if no message was received within the given timeout interval
         */
        LOG.info("Creating a SocketConfig with a socket timeout of {} seconds", httpClientSettings.getSoTimeout());
        return SocketConfig.custom()
            .setSoTimeout(httpClientSettings.getSoTimeout() * 1000)
            .setSoKeepAlive(false)
            .setSoReuseAddress(false)
            .build();
    }

    private RequestConfig httpClientRequestConfig() {
        /*
         * connection timeout:
         * The time span the application will wait for a connection to get established. If the connection
         * is not established within the given amount of time a ConnectionTimeoutException will be raised.
         */
        LOG.info("Creating a RequestConfig with a socket timeout of {} seconds and a connection timeout of {} seconds",
                httpClientSettings.getSoTimeout(), httpClientSettings.getConTimeout());
        return RequestConfig.custom()
            .setConnectTimeout(httpClientSettings.getConTimeout() * 1000)
            .setSocketTimeout(httpClientSettings.getSoTimeout() * 1000)
            .build();
    }

    @Bean(name = "httpClientConfigurer")
    public HttpClientConfigurer httpConfiguration() {
        ConnectionKeepAliveStrategy myStrategy = new ConnectionKeepAliveStrategy() {
            @Override
            public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
                return 5 * 1000;
            }
        };

        PoolingHttpClientConnectionManager conMgr = new PoolingHttpClientConnectionManager();
        conMgr.closeIdleConnections(5, TimeUnit.SECONDS);

        return builder -> builder.setDefaultSocketConfig(httpClientSocketConfig())
            .setDefaultRequestConfig(httpClientRequestConfig())
            .setConnectionTimeToLive(5, TimeUnit.SECONDS)
            .setKeepAliveStrategy(myStrategy)
            .setConnectionManager(conMgr);
    }

    @PostConstruct
    public void init() {
        LOG.debug("Initializing HTTP clients");
        HttpComponent httpComponent = camelContext.getComponent("http4", HttpComponent.class);
        httpComponent.setHttpClientConfigurer(httpConfiguration());
        HttpComponent httpsComponent = camelContext.getComponent("https4", HttpComponent.class);
        httpsComponent.setHttpClientConfigurer(httpConfiguration());
    }
}
or defining the settings directly on the respective HttpComponent.
On examining the respective proposed methods in the HttpClient code, it becomes obvious that these methods are single-shot operations and not configurations that HttpClient internally re-checks every few milliseconds by itself.
The Javadoc of PoolingHttpClientConnectionManager further states:
The handling of stale connections was changed in version 4.4. Previously, the code would check every connection by default before re-using it. The code now only checks the connection if the elapsed time since the last use of the connection exceeds the timeout that has been set. The default timeout is set to 2000ms
which only happens when an attempt is made to re-use a connection. That makes sense for a connection pool, especially when multiple messages are exchanged over the same connection. For single-shot invocations, which should behave more like Connection: close, there may be no re-use of that connection for quite some time, leaving the connection open or half-closed, since no further read is attempted on it and the client therefore never notices that the connection could be closed.
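For reference, a minimal sketch of the stale-connection check the quote above refers to; the 2000 ms default corresponds to setValidateAfterInactivity on PoolingHttpClientConnectionManager (HttpClient 4.4+), and it is only evaluated at the moment a pooled connection is leased again, never in the background:
PoolingHttpClientConnectionManager conMgr = new PoolingHttpClientConnectionManager();
// the connection is only validated when it is about to be re-used, not periodically
conMgr.setValidateAfterInactivity(2000);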
I then remembered that I had already solved such an issue a while back with plain HttpClient and started to port that solution to Camel, which worked out quite easily.
The solution basically consists of registering the HttpClient connection managers with a service that periodically (every 5 seconds in my case) calls closeExpiredConnections() and closeIdleConnections(...).
This logic is kept in a singleton enum, as it actually lives in a library that a couple of applications use, each running in its own JVM.
/**
 * This singleton monitor will check every few seconds for idle and stale connections and perform
 * a cleanup on the connections using the registered connection managers.
 */
public enum IdleConnectionMonitor {

    INSTANCE;

    private final static Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    /** The execution service which runs the cleanup every 5 seconds **/
    private ScheduledExecutorService executorService =
        Executors.newScheduledThreadPool(1, new NamingThreadFactory());
    /** The actual thread which performs the monitoring **/
    private IdleConnectionMonitorThread monitorThread = new IdleConnectionMonitorThread();

    IdleConnectionMonitor() {
        // execute the thread every 5 seconds till the application is shut down (or the shutdown
        // method is invoked)
        executorService.scheduleAtFixedRate(monitorThread, 5, 5, TimeUnit.SECONDS);
    }

    /**
     * Registers a {@link HttpClientConnectionManager} to monitor for stale connections
     */
    public void registerConnectionManager(HttpClientConnectionManager connMgr) {
        monitorThread.registerConnectionManager(connMgr);
    }

    /**
     * Request to stop the monitoring for stale HTTP connections.
     */
    public void shutdown() {
        executorService.shutdown();
        try {
            if (!executorService.awaitTermination(3, TimeUnit.SECONDS)) {
                LOG.warn("Connection monitor shutdown not finished after 3 seconds!");
            }
        } catch (InterruptedException iEx) {
            LOG.warn("Execution service was interrupted while waiting for graceful shutdown");
        }
    }

    /**
     * Upon invocation, the list of registered connection managers will be iterated through and, if a
     * referenced object is still reachable, {@link HttpClientConnectionManager#closeExpiredConnections()}
     * and {@link HttpClientConnectionManager#closeIdleConnections(long, TimeUnit)} will be invoked
     * in order to clean up stale connections.
     * <p/>
     * This runnable implementation holds a weakly referable list of {@link
     * HttpClientConnectionManager} objects. If a connection manager is only reachable by {@link
     * WeakReference}s or {@link PhantomReference}s it gets eligible for garbage collection and thus
     * may return null values. If this is the case, the connection manager will be removed from the
     * internal list of registered connection managers to monitor.
     */
    private static class IdleConnectionMonitorThread implements Runnable {

        // we store only weak references to connection managers in the list, as the lifetime of the
        // thread may exceed the lifespan of a connection manager, thus allowing the garbage
        // collector to collect unused objects as soon as possible
        private List<WeakReference<HttpClientConnectionManager>> registeredConnectionManagers =
            Collections.synchronizedList(new ArrayList<>());

        @Override
        public void run() {
            LOG.trace("Executing connection cleanup");
            Iterator<WeakReference<HttpClientConnectionManager>> conMgrs =
                registeredConnectionManagers.iterator();
            while (conMgrs.hasNext()) {
                WeakReference<HttpClientConnectionManager> weakConMgr = conMgrs.next();
                HttpClientConnectionManager conMgr = weakConMgr.get();
                if (conMgr != null) {
                    LOG.trace("Found connection manager: {}", conMgr);
                    conMgr.closeExpiredConnections();
                    conMgr.closeIdleConnections(30, TimeUnit.SECONDS);
                } else {
                    conMgrs.remove();
                }
            }
        }

        void registerConnectionManager(HttpClientConnectionManager connMgr) {
            registeredConnectionManagers.add(new WeakReference<>(connMgr));
        }
    }

    private static class NamingThreadFactory implements ThreadFactory {

        @Override
        public Thread newThread(Runnable r) {
            Thread t = new Thread(r);
            t.setName("Connection Manager Monitor");
            return t;
        }
    }
}
As mentioned, this singleton service spawns its own thread, which invokes the two above-mentioned methods every 5 seconds. These invocations take care of closing connections that are either expired or have been idle for the stated amount of time.
In order to integrate this service with Camel, an EventNotifierSupport can be used so that Camel takes care of shutting down the monitor thread once it is shutting down itself.
/**
 * This Camel service will take care of the lifecycle management of {@link IdleConnectionMonitor}
 * and invoke {@link IdleConnectionMonitor#shutdown()} once Camel is closing down in order to stop
 * listening for stale connections.
 */
public class IdleConnectionMonitorService extends EventNotifierSupport {

    private final static Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    private IdleConnectionMonitor connectionMonitor;

    @Override
    public void notify(EventObject event) {
        if (event instanceof CamelContextStartedEvent) {
            LOG.info("Start listening for closable HTTP connections");
            connectionMonitor = IdleConnectionMonitor.INSTANCE;
        } else if (event instanceof CamelContextStoppingEvent) {
            LOG.info("Shutting down listener for open HTTP connections");
            connectionMonitor.shutdown();
        }
    }

    @Override
    public boolean isEnabled(EventObject event) {
        return event instanceof CamelContextStartedEvent || event instanceof CamelContextStoppingEvent;
    }

    public IdleConnectionMonitor getConnectionMonitor() {
        return this.connectionMonitor;
    }
}
In order to take advantage of that service, the connection manager used by the HttpClient that Camel uses internally needs to be registered with the service, which is done in the code block below:
private void registerHttpClientConnectionManager(HttpClientConnectionManager conMgr) {
    if (!getIdleConnectionMonitorService().isPresent()) {
        // register the service with Camel so that on a shutdown the monitoring thread will be stopped
        camelContext.getManagementStrategy().addEventNotifier(new IdleConnectionMonitorService());
    }
    IdleConnectionMonitor.INSTANCE.registerConnectionManager(conMgr);
}

private Optional<IdleConnectionMonitorService> getIdleConnectionMonitorService() {
    for (EventNotifier eventNotifier : camelContext.getManagementStrategy().getEventNotifiers()) {
        if (eventNotifier instanceof IdleConnectionMonitorService) {
            return Optional.of((IdleConnectionMonitorService) eventNotifier);
        }
    }
    return Optional.empty();
}
Last but not least, the connection manager defined in httpConfiguration inside HttpClientSpringTestConfig (in my case) needed to be passed to the newly introduced register function:
PoolingHttpClientConnectionManager conMgr = new PoolingHttpClientConnectionManager();
registerHttpClientConnectionManager(conMgr);
This might not be the prettiest solution, but it does close the half-closed connections on my machine.
Edit:
I just learned that you can use a NoConnectionReuseStrategy, which moves the connection state to TIME_WAIT rather than CLOSE_WAIT and therefore removes the connection after a short moment. Unfortunately the request is still issued with a Connection: keep-alive header. This strategy creates a new connection per request, i.e. if you get a 301 Moved Permanently redirect response, the redirect is followed on a new connection.
The httpClientConfigurer bean would need to change to the following in order to make use of the above-mentioned strategy:
@Bean(name = "httpClientConfigurer")
public HttpClientConfigurer httpConfiguration() {
    return builder -> builder.setDefaultSocketConfig(httpClientSocketConfig())
        .setDefaultRequestConfig(httpClientRequestConfig())
        .setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE);
}

This can be done by closing connections once they have been idle for a configured amount of time. You can achieve this by configuring an idle connection timeout for the Camel HTTP component.
Camel HTTP provides an interface to do so:
Retrieve the client connection manager from org.apache.camel.component.http4.HttpComponent and cast it to PoolingHttpClientConnectionManager:
PoolingHttpClientConnectionManager poolingClientConnectionManager =
        (PoolingHttpClientConnectionManager) httpComponent.getClientConnectionManager();
poolingClientConnectionManager.closeIdleConnections(5000, TimeUnit.MILLISECONDS);
See the Javadoc of PoolingHttpClientConnectionManager#closeIdleConnections: http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.html#closeIdleConnections(long, java.util.concurrent.TimeUnit)
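For completeness, a minimal sketch (the executor and the 5-second interval are illustrative, not part of the original answer) of wrapping that call in a scheduled task, since closeIdleConnections(...) only cleans up at the moment it is invoked:
ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();
cleaner.scheduleAtFixedRate(() -> {
    poolingClientConnectionManager.closeExpiredConnections();
    poolingClientConnectionManager.closeIdleConnections(5000, TimeUnit.MILLISECONDS);
}, 5, 5, TimeUnit.SECONDS);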

Firstly, Roman Vottner, your answer and your sheer dedication to finding the issue helped me a truckload. I had been struggling with the CLOSE_WAIT issue for 2 days and your answer was what helped. Here is what I did: I added the following code to my CamelConfiguration class, which essentially customises the CamelContext at startup.
HttpComponent http4 = camelContext.getComponent("https4", HttpComponent.class);
http4.setHttpClientConfigurer(new HttpClientConfigurer() {
    @Override
    public void configureHttpClient(HttpClientBuilder builder) {
        builder.setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE);
    }
});
Worked like a charm.

You can provide your own clientConnectionManager to HTTP4. Generally you should use an instance of org.apache.http.impl.conn.PoolingHttpClientConnectionManager, which you'd configure with your own org.apache.http.config.SocketConfig by passing it to the setDefaultSocketConfig method of the connection manager.
If you're using Spring with Java config, you would have a method:
@Bean
PoolingHttpClientConnectionManager connectionManager() {
    SocketConfig socketConfig = SocketConfig.custom()
        .setSoKeepAlive(false)
        .setSoReuseAddress(true)
        .build();

    PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
    connectionManager.setDefaultSocketConfig(socketConfig);
    return connectionManager;
}
and then you'd just use it in your endpoint definition like so: clientConnectionManager=#connectionManager
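For illustration, a minimal sketch of such an endpoint definition inside a RouteBuilder (the route and target host are made up):
from("direct:start")
    .to("http4://example.com/api?clientConnectionManager=#connectionManager");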

Related

how to configure pooled connection idle timeout in reactor-netty

I am using the reactor-netty HTTP client (0.7.x series) with connection pooling and would like to configure the pooled connections' idle timeout, but don't know where.
More precisely, I need to configure the reactor-netty HTTP client connection pool in such a way that it automatically closes connections that did not see any activity within a configurable timeout. These connections are open, but no bytes were transferred in or out for some (configurable) amount of time.
How can I configure the reactor-netty HTTP client to close idle connections preemptively?
I managed to configure WebClient (via the underlying TcpClient) to remove idle connections from the connection pool on timeout in reactor-netty 0.8.9.
My solution is partially based on the official documentation about IdleStateHandler, extended with my own research on how to properly apply it when creating an instance of HttpClient.
Here is how I did it:
public class IdleCleanupHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(final ChannelHandlerContext ctx, final Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            final IdleState state = ((IdleStateEvent) evt).state();
            if (state == IdleState.ALL_IDLE) { // or READER_IDLE / WRITER_IDLE
                // close the idling channel
                ctx.close();
            }
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
...
public static WebClient createWebClient(final String baseUrl, final int idleTimeoutSec) {
    final TcpClient tcpClient = TcpClient.create(ConnectionProvider.fixed("fixed-pool"))
        .bootstrap(bootstrap -> BootstrapHandlers.updateConfiguration(bootstrap, "idleTimeoutConfig",
            (connectionObserver, channel) -> {
                channel.pipeline()
                    .addLast("idleStateHandler", new IdleStateHandler(0, 0, idleTimeoutSec))
                    .addLast("idleCleanupHandler", new IdleCleanupHandler());
            }));

    return WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient)))
        .baseUrl(baseUrl)
        .build();
}
IMPORTANT UPDATE:
My further testing has indicated that adding handlers during the bootstrap hook disrupts the pool, and sockets (channels) are not reused by Connection.
The right way to add the handlers is:
public static WebClient createWebClient(final String baseUrl, final int idleTimeoutSec) {
    final TcpClient tcpClient = TcpClient.create(ConnectionProvider.fixed("fixed-pool"))
        .doOnConnected(conn -> {
            final ChannelPipeline pipeline = conn.channel().pipeline();
            if (pipeline.context("idleStateHandler") == null) {
                pipeline.addLast("idleStateHandler", new IdleStateHandler(0, 0, idleTimeoutSec))
                        .addLast("idleCleanupHandler", new IdleCleanupHandler());
            }
        });

    return WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient)))
        .baseUrl(baseUrl)
        .build();
}
Note: in reactor-netty 0.9.x there will be a standard way to configure an idle timeout for connections in the connection pool, see this commit: https://github.com/reactor/reactor-netty/pull/792
I was able to accomplish this on the 0.7.x branch by adding Netty write and read timeout handlers to the channel pipeline. However, on 0.8.x this approach no longer works.
HttpClient httpClient = HttpClient
    .create((HttpClientOptions.Builder builder) -> builder
        .host(endpointUrl.getHost())
        .port(endpointUrl.getPort())
        .poolResources(PoolResources.fixed(connectionPoolName, maxConnections, timeoutPool))
        .afterChannelInit(channel -> {
            channel.pipeline()
                // the write and read timeouts serve as generic socket idle state handlers
                .addFirst("write_timeout", new WriteTimeoutHandler(timeoutIdle, TimeUnit.MILLISECONDS))
                .addFirst("read_timeout", new ReadTimeoutHandler(timeoutIdle, TimeUnit.MILLISECONDS));
        })
        .build());
The easiest way to do this in reactor-netty 0.9.x with the TCP client is to use the approach below, which I got from the link referred to by @Vladimir-L. Configure "maxIdleTime" for your question.
TcpClient timeoutClient = TcpClient.create(ConnectionProvider.fixed(connectionPoolName, maxConnections, acquireTimeout, maxIdleTime));
I am currently on reactor-netty 0.8.2 because of spring-boot-starter-webflux and faced the same issue: the connection pool kept connections open for 60 seconds after they were finished.
With this approach you can't configure the timeout, but you can disable keep-alive entirely:
WebClient.builder()
    .clientConnector(new ReactorClientHttpConnector(
        HttpClient.from(TcpClient.create()).keepAlive(false)))
    .build()
    .get()
    .uri("someurl")
    .retrieve()
    .bodyToMono(String.class)
For Reactor Netty version 1 you need to create a reactor.netty.resources.ConnectionProvider which will contain the idle time configuration and then use that when creating the reactor.netty.http.client.HttpClient.
I'm using Spring so I then use that to create a Spring org.springframework.http.client.reactive.ClientHttpConnector as shown below.
ConnectionProvider connectionProvider = ConnectionProvider.builder("Name")
    .maxIdleTime(Duration.ofSeconds(10))
    .build();

HttpClient httpClient = HttpClient.create(connectionProvider)
    .compress(true);

return WebClient.builder()
    .clientConnector(new ReactorClientHttpConnector(httpClient))
    .baseUrl(host)
    .build();

Netty 3.10.5-final "lags"

I'm using Netty 3.10.5-final for my network server. The server has about 100 simultaneous clients.
Sometimes the server starts to "lag": it stops sending packets but continues to accept incoming connections.
This is the code I'm using to start the server:
public class ClientListener {
    /**
     * NIO server that processes requests between login and game servers.
     */
    protected NettyServer gameServerListener;
    /**
     * Client packets executor.
     */
    protected final Executor packetsExecutor =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);

    ExecutorService bossExec = new OrderedMemoryAwareThreadPoolExecutor(1, 400000000, 2000000000, 60, TimeUnit.SECONDS);
    ExecutorService ioExec = new OrderedMemoryAwareThreadPoolExecutor(4, 400000000, 2000000000, 60, TimeUnit.SECONDS);

    private String serverName;
    private String bindIp;
    private int port;

    public ClientListener(String serverName, String bindIp, int port) {
        this.serverName = serverName;
        this.bindIp = bindIp;
        this.port = port;
    }

    public void start() {
        gameServerListener = new NettyServer(serverName, bindIp, port);
        gameServerListener.setChannelFactory(new NioServerSocketChannelFactory(bossExec, ioExec));
        gameServerListener.setPipelineFactory(new ClientPipeline(packetsExecutor));
        gameServerListener.setOption("child.bufferFactory", new HeapChannelBufferFactory(ByteOrder.LITTLE_ENDIAN));
        gameServerListener.setOption("tcpNoDelay", true);
        gameServerListener.setOption("child.tcpNoDelay", true);
        gameServerListener.setOption("child.keepAlive", true);
        gameServerListener.setOption("readWriteFair", true);
        gameServerListener.startServer();
    }
}
The NettyServer class is a simple wrapper around ServerBootstrap.
First of all I thought that maybe the IO executor had reached its event/memory limits, so I replaced the limits with 0, which means no limits at all. That didn't solve the problem.
Then I tried to use different executors for client packets, and that didn't help either.
My channel handler implementation extends SimpleChannelHandler and doesn't have any synchronization inside, so I ruled out that version too.
I have no idea what else could cause these lags; help needed.
Found a solution.
The problem was that I performed database operations in the channelDisconnected method of my ChannelHandler. When the database was running long queries this blocked the IO threads, and the network started to lag.
So, in my case I simply moved all database operations out of the IO threads, and that helped.
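A minimal sketch of that kind of fix, assuming a hypothetical databaseService and a dedicated executor; the point is simply that the Netty 3 IO thread hands the blocking work off and returns immediately:
public class ClientChannelHandler extends SimpleChannelHandler {

    // separate pool for blocking work, so slow queries never stall Netty's IO threads
    private final ExecutorService dbExecutor = Executors.newFixedThreadPool(2);

    @Override
    public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
        final Channel channel = e.getChannel();
        // hand the (hypothetical) database call off instead of running it on the IO thread
        dbExecutor.submit(() -> databaseService.saveDisconnectState(channel));
        super.channelDisconnected(ctx, e);
    }
}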

When should JMS connection be started? In its own Thread?

I have a Java Swing GUI client that communicates with a WildFly server.
standalone-full.xml
<jms-queue name="goReceiveFmSvrQueue">
    <entry name="java:/jboss/exported/jms/goReceiveFmSvrQueue"/>
    <durable>true</durable>
</jms-queue>
<jms-queue name="goSendToSvrQueue">
    <entry name="java:jboss/exported/jms/goSendToSvrQueue"/>
    <durable>true</durable>
</jms-queue>
My client has a Runnable MsgCenterSend class. It instantiates MsgCenterSend, then calls msgCenter.run() to start a connection, then uses msgCenter.sendMsg() to send a message, and msgCenter.stop() to close it when the client shuts down.
Does that make sense?
Or should the client just create a connection, session, destination and producer every time it needs to send a message? And if it does that, should it be done in a separate thread?
public class MsgCenterSend implements Runnable {
    private Connection connection = null;
    private MessageProducer msgProducer = null;
    private Session session = null;

    public void run() {
        try {
            Context ctx = new InitialContext(/* connection properties */);
            HornetQJMSConnectionFactory jmsConnectionFactory =
                    (HornetQJMSConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
            this.connection = jmsConnectionFactory.createConnection("jmsuser", "jmsuser#123");
            this.session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination sendToDestination = (Destination) ctx.lookup("jms/goSendToSvrQueue");
            this.msgProducer = this.session.createProducer(sendToDestination);
            this.connection.start();
        } catch (NamingException | JMSException e) {
            // log the exception
        }
    }

    public boolean sendMsg(/* parameters */) throws JMSException {
        ObjectMessage message = this.session.createObjectMessage();
        // set MessageObject and Properties
        this.msgProducer.send(message);
        return true;
    }

    public void stop() throws JMSException {
        this.connection.stop();
    }
}
The client calls stop() on exit.
For now my message-driven bean looks like this:
@MessageDriven(
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "1"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/goSendToSvrQueue")
    })
public class GoMsgBean implements MessageListener {

    @ApplicationScoped
    @Inject
    JMSContext jmsCtx;

    // This is the queue the client listens to. The server sends replies to it.
    @Resource(name = "java:jboss/exported/jms/goReceiveFmSvrQueue")
    private Queue svrSendQueue;

    public GoMsgBean() {
    }

    @PostConstruct
    public void myInit() {
        System.out.println("XXXXXXXXXX Post Construct - GoMsgBean XXXXXXXXXX");
    }

    @PreDestroy
    public void myDestroy() {
        System.out.println("XXXXXXXXXX Post Destroy - logger XXXXXXXXXX");
    }

    public void onMessage(Message msg) {
        System.out.println("XXXXXXXXXX MessageBean received a Message XXXXXXXXX");
    }
}
Even if sending is infrequent, I don't see a problem with keeping the connection open, unless you have serious resource constraints; messaging protocols are usually lightweight enough to just keep the connection open and not worry about connect/disconnect/reconnect. ActiveMQ's documentation says exactly that, and though I can't find the per-connection memory overhead, it's not a lot. There is also server-side configuration that can help manage large volumes of messages, but again, I wouldn't worry about it.
One disadvantage of ActiveMQ is that it doesn't support true clustering, so if you're really dealing with tens or hundreds of thousands of connections, you're going to have problems.
And in the end, you'll need to do performance analysis on your side to make sure the application behaves well with the server.
If your application sends messages frequently to the same destination, then it is best practice to create the connection, session and producer once and re-use them, because creating connections, sessions etc. are costly operations.
If messages are not sent frequently, then it's better to create all the required objects, send the message and close the objects, as sketched below. This way resources are freed up on the messaging provider.
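As an illustration of that second approach, a minimal JMS 2.0 sketch (the connectionFactory, queue and payload variables are assumed to exist); the try-with-resources block closes the context, and with it the session and producer, as soon as the send completes:
try (JMSContext jms = connectionFactory.createContext("jmsuser", "jmsuser#123")) {
    ObjectMessage message = jms.createObjectMessage(payload);
    jms.createProducer().send(queue, message);
} // everything created from the context is released here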

Broadcasting with Jersey SSE: Detect closed connection

I believe this question is not a duplicate of Server sent event with Jersey: EventOutput is not closed after client drops, but probably related to Jersey Server-Sent Events - write to broken connection does not throw exception.
In chapter 15.4.2 of the Jersey documentation, the SseBroadcaster is described:
However, the SseBroadcaster internally identifies and handles also client disconnects. When a client closes the connection the broadcaster detects this and removes the stale connection from the internal collection of the registered EventOutputs as well as it frees all the server-side resources associated with the stale connection.
I cannot confirm this. In the following test case I see the subclassed SseBroadcaster's onClose() method never being called: not when the EventInput is closed, and not when another message is broadcast.
public class NotificationsResourceTest extends JerseyTest {

    final static Logger log = LoggerFactory.getLogger(NotificationsResourceTest.class);
    final static CountingSseBroadcaster broadcaster = new CountingSseBroadcaster();

    public static class CountingSseBroadcaster extends SseBroadcaster {

        final AtomicInteger connectionCounter = new AtomicInteger(0);

        public EventOutput createAndAttachEventOutput() {
            EventOutput output = new EventOutput();
            if (add(output)) {
                int cons = connectionCounter.incrementAndGet();
                log.debug("Active connection count: " + cons);
            }
            return output;
        }

        @Override
        public void onClose(final ChunkedOutput<OutboundEvent> output) {
            int cons = connectionCounter.decrementAndGet();
            log.debug("A connection has been closed. Active connection count: " + cons);
        }

        @Override
        public void onException(final ChunkedOutput<OutboundEvent> chunkedOutput, final Exception exception) {
            log.trace("An exception has been detected", exception);
        }

        public int getConnectionCount() {
            return connectionCounter.get();
        }
    }

    @Path("notifications")
    public static class NotificationsResource {

        @GET
        @Produces(SseFeature.SERVER_SENT_EVENTS)
        public EventOutput subscribe() {
            log.debug("New stream subscription");
            EventOutput eventOutput = broadcaster.createAndAttachEventOutput();
            return eventOutput;
        }
    }

    @Override
    protected Application configure() {
        ResourceConfig config = new ResourceConfig(NotificationsResource.class);
        config.register(SseFeature.class);
        return config;
    }

    @Test
    public void test() throws Exception {
        // check that there are no connections
        assertEquals(0, broadcaster.getConnectionCount());

        // connect subscriber
        log.info("Connecting subscriber");
        EventInput eventInput = target("notifications").request().get(EventInput.class);
        assertFalse(eventInput.isClosed());

        // now there are connections
        assertEquals(1, broadcaster.getConnectionCount());

        // push data
        log.info("Broadcasting data");
        String payload = UUID.randomUUID().toString();
        OutboundEvent chunk = new OutboundEvent.Builder()
            .mediaType(MediaType.TEXT_PLAIN_TYPE)
            .name("message")
            .data(payload)
            .build();
        broadcaster.broadcast(chunk);

        // read data
        log.info("Reading data");
        InboundEvent inboundEvent = eventInput.read();
        assertNotNull(inboundEvent);
        assertEquals(payload, inboundEvent.readData());

        // close subscription
        log.info("Closing subscription");
        eventInput.close();
        assertTrue(eventInput.isClosed());

        // at this point, the subscriber has disconnected itself,
        // but Jersey doesn't realise that
        assertEquals(1, broadcaster.getConnectionCount());

        // wait, give TCP a chance to close the connection
        log.debug("Sleeping for some time");
        Thread.sleep(10000);

        // push data again, this should really flush out the not-connected client
        log.info("Broadcasting data again");
        broadcaster.broadcast(chunk);
        Thread.sleep(100);

        // there is no subscriber anymore
        assertEquals(0, broadcaster.getConnectionCount()); // FAILS!
    }
}
Maybe JerseyTest is not a good way to test this. In a less clinical setup, where a JavaScript EventSource is used, I see onClose() being called, but only after a message is broadcast on the previously closed connection.
What am I doing wrong?
Why doesn't SseBroadcaster detect the closing of the connection by the client?
Follow-up
I've found JERSEY-2833 which was rejected with Works as designed:
According to the Jersey Documentation in SSE chapter (https://jersey.java.net/documentation/latest/sse.html) in 15.4.1 it's mentioned that Jersey does not explicitly close the connection, it's the responsibility of the resource method or the client.
What does that mean exactly? Should the resource enforce a timeout and kill all active and closed-by-client connections?
In the documentation of the constructor org.glassfish.jersey.media.sse.SseBroadcaster.SseBroadcaster(), it says:
Creates a new instance. If this constructor is called by a subclass, it assumes the reason for the subclass to exist is to implement the onClose(org.glassfish.jersey.server.ChunkedOutput) and onException(org.glassfish.jersey.server.ChunkedOutput, Exception) methods, so it adds the newly created instance as the listener. To avoid this, subclasses may call SseBroadcaster(Class) passing their class as an argument.
So you should not rely on the default constructor; instead implement your own constructor that invokes super with your class:
public CountingSseBroadcaster() {
    super(CountingSseBroadcaster.class);
}
I believe it might be better to set a timeout on your resource and kill only that connection, for example:
@Path("notifications")
public static class NotificationsResource {

    @GET
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput subscribe() {
        log.debug("New stream subscription");
        EventOutput eventOutput = broadcaster.createAndAttachEventOutput();
        new Timer().schedule(new TimerTask() {
            @Override
            public void run() {
                try {
                    eventOutput.close();
                } catch (IOException e) {
                    log.warn("Failed to close event output", e);
                }
            }
        }, 10000); // 10 second timeout
        return eventOutput;
    }
}
I'm wondering if, by subclassing, you may have changed the behaviour.
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
    int cons = connectionCounter.decrementAndGet();
    log.debug("A connection has been closed. Active connection count: " + cons);
}
Here you don't close the ChunkedOutput, so it won't release the connection. Could this be the problem?
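A minimal sketch of what this answer appears to suggest (whether closing the output inside onClose actually helps is exactly what is being questioned here):
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
    try {
        output.close(); // explicitly release the underlying connection
    } catch (IOException e) {
        log.warn("Could not close chunked output", e);
    }
    int cons = connectionCounter.decrementAndGet();
    log.debug("A connection has been closed. Active connection count: " + cons);
}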

Lot of TIME_WAIT connections while using RestTemplate?

I am using Spring RestTemplate to make HTTP calls to my REST service. I am using the RestTemplate from Spring Framework 3.2.8. I cannot upgrade this since our company uses a parent POM that pins Spring Framework to version 3.2.8, so I need to stick to that.
Let's say I have two machines:
machineA: This machine runs my code, which uses RestTemplate as the HTTP client and from which I make HTTP calls to my REST service running on a different machine (machineB). I have wrapped the code below in a multithreaded application so that I can do load and performance testing on my client code.
machineB: On this machine, I am running my RestService.
Now the problem I am seeing is that whenever I run a load and performance test on machineA - meaning my client code makes a lot of HTTP calls to the REST service running on machineB very quickly, since the client code is called in a multithreaded way -
I always see a lot of TIME_WAIT connections on machineA, as shown below:
298 ESTABLISHED
14 LISTEN
2 SYN_SENT
10230 TIME_WAIT
291 ESTABLISHED
14 LISTEN
1 SYN_SENT
17767 TIME_WAIT
285 ESTABLISHED
14 LISTEN
1 SYN_SENT
24055 TIME_WAIT
I don't think it's a good sign that we have a lot of TIME_WAIT connections here.
Problem statement:
What does this high number of TIME_WAIT connections on machineA mean, in simple language?
Is there any reason why this is happening with RestTemplate, or is it just the way I am using RestTemplate? If I am doing anything wrong in the way I am using RestTemplate, what's the right way to use it?
Do I need to set any keep-alive header or Connection: close while using RestTemplate? Any inputs/suggestions are greatly appreciated, as I am confused about what's going on here.
Below is how I am using RestTemplate in my code base, in a simplified way (just to explain the whole idea of how I am using RestTemplate):
public class DataClient implements Client {

    private final RestTemplate restTemplate = new RestTemplate();
    private ExecutorService executor = Executors.newFixedThreadPool(10);

    // for synchronous calls
    @Override
    public String getSyncData(DataKey key) {
        String response = null;
        Future<String> handler = null;
        try {
            handler = getAsyncData(key);
            response = handler.get(100, TimeUnit.MILLISECONDS); // we have a 100 milliseconds timeout value set
        } catch (TimeoutException ex) {
            // log an exception
            handler.cancel(true);
        } catch (Exception ex) {
            // log an exception
        }
        return response;
    }

    // for asynchronous calls
    @Override
    public Future<String> getAsyncData(DataKey key) {
        Future<String> future = null;
        try {
            Task task = new Task(key, restTemplate);
            future = executor.submit(task);
        } catch (Exception ex) {
            // log an exception
        }
        return future;
    }
}
And below is my simple Task class
class Task implements Callable<String> {

    private final RestTemplate restTemplate;
    private final DataKey key;

    public Task(DataKey key, RestTemplate restTemplate) {
        this.key = key;
        this.restTemplate = restTemplate;
    }

    public String call() throws Exception {
        ResponseEntity<String> response = null;
        String url = "some_url_created_by_using_key";
        // handling all try/catch here
        response = restTemplate.exchange(url, HttpMethod.GET, null, String.class);
        return response.getBody();
    }
}
TIME_WAIT is the state that a TCP connection maintains for a configurable amount of time after it has been closed (FIN/FIN reception). This way, a possible delayed packet belonging to one connection cannot be mixed up with a later connection that reuses the same port.
In a high-traffic test it is normal to have a lot of them, but they should disappear a few minutes after the test has finished.
