Netty 3.10.5-final "lags" - java

I'm using Netty 3.10.5-final for my network server. The server has about ~100 simultaneous clients.
Sometimes the server starts to "lag": it stops sending packets but continues to accept incoming connections.
This is the code I'm using to start the server:
public class ClientListener {
/**
* NIO server that processes requests between login and game servers.
*/
protected NettyServer gameServerListener;
/**
* Client packets executor.
*/
protected final Executor packetsExecutor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2);
ExecutorService bossExec = new OrderedMemoryAwareThreadPoolExecutor(1, 400000000, 2000000000, 60, TimeUnit.SECONDS);
ExecutorService ioExec = new OrderedMemoryAwareThreadPoolExecutor(4 , 400000000, 2000000000, 60, TimeUnit.SECONDS);
private String serverName;
private String bindIp;
private int port;
public ClientListener(String serverName, String bindIp, int port) {
this.serverName = serverName;
this.bindIp = bindIp;
this.port = port;
}
public void start() {
gameServerListener = new NettyServer(serverName, bindIp, port);
gameServerListener.setChannelFactory(new NioServerSocketChannelFactory(bossExec, ioExec));
gameServerListener.setPipelineFactory(new ClientPipeline(packetsExecutor));
gameServerListener.setOption("child.bufferFactory", new HeapChannelBufferFactory(ByteOrder.LITTLE_ENDIAN));
gameServerListener.setOption("tcpNoDelay", true);
gameServerListener.setOption("child.tcpNoDelay", true);
gameServerListener.setOption("child.keepAlive", true);
gameServerListener.setOption("readWriteFair", true);
gameServerListener.startServer();
}
The NettyServer class is a simple wrapper around ServerBootstrap.
At first I thought that maybe the I/O executor had reached its event/memory limits, so I replaced the limits with 0, which means no limits at all. This didn't solve the problem.
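A minimal sketch of that change (constructor arguments per Netty 3's OrderedMemoryAwareThreadPoolExecutor; passing 0 disables the corresponding limit):
ExecutorService ioExec = new OrderedMemoryAwareThreadPoolExecutor(
        4,                      // core pool size
        0,                      // maxChannelMemorySize: 0 = no per-channel memory limit
        0,                      // maxTotalMemorySize: 0 = no total memory limit
        60, TimeUnit.SECONDS);  // keep-alive time for idle threads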
Then I tried using different executors for the client packets, but that didn't help either.
My channel implementation extends SimpleChannelHandler and has no synchronization inside, so I ruled that out too.
I have no idea what else could cause these "lags"; any help is appreciated.

Found a solution.
The problem was that I was calling database operations in the channelDisconnected method of my ChannelHandler. When the database was running long queries, this blocked the I/O threads and the network started to lag.
So in my case I simply moved all database operations off the I/O threads, and that helped.
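A minimal sketch of that change (dbExecutor and saveSessionState are illustrative names, not from my actual code):
private final ExecutorService dbExecutor = Executors.newFixedThreadPool(4); // separate pool for DB work

@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    final Channel channel = e.getChannel();
    // hand the slow database call to the separate pool so the NIO worker thread returns immediately
    dbExecutor.execute(new Runnable() {
        public void run() {
            saveSessionState(channel); // hypothetical long-running DB operation
        }
    });
}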

Related

Inconvenient Robot framework test case using websocket and jms

I'm having trouble rewriting Java test cases in Robot Framework.
In order to do this, I need to create new Java keywords, but the way the tests are implemented doesn't make it easy!
This is an example of a script that I need to rewrite in RF:
try
{
ServerSocket server = Utils.startSocketServer();
while(true)
{
Socket socket = server.accept();
ObjectInputStream ois = new ObjectInputStream(socket.getInputStream());
RequestX request = (RequestX) ois.readObject();
if(request.getSource().equals(Strings.INFO))
{
/** do something **/
}
else if(request.getSource().equals(Strings.X))
{
/** do something **/
}
else
{
/** do something **/
}
/** break on condition **/
}
Utils.closeSocketServer(server);
}catch(Exception e)
{
/** do something **/
}
Any suggestions on how I can turn this into an RF test case?
Making the whole script into a single keyword is not an option, because somewhere in that loop, in the "do something" comment, I also need to call keywords.
The main idea is to fragment this script into functions so that I can use them as Java keywords in RF, but I still can't figure this out!
So I did further research and this is what I came up with:
split this code into functions so that I can call and use them as keywords in Robot Framework.
So the code became like this:
public static String SendTask(String taskFile)
{
ServerSocket server = null;
try
{
server = startSocketServer();
if (taskFile != null)
{
Utils.sendJMSWakeUp();
while(true)
{
Socket socket = server.accept();
ObjectInputStream ois = getInputStream(socket);
RequestX request = (RequestX) ois.readObject();
if (getSource(request, Strings.INFO))
{
/** log info **/
}
/** if the current jms queue is Scheduler then send task !*/
else if (getSource(request,Strings.SCHEDULER))
{
/** send task **/
break;
}
}
}
else
{
assertion(false, "Illegal Argument Value null");
}
}catch (Exception e)
{
/** log errors **/
}finally
{
/** close socket server & return a task id **/
}
}
The same goes for every JMS queue that I am listening to:
public static String getTaskAck(String taskId);
public static String getTaskresult(String taskId);
It did work in my case for synchronous task execution. But it is very inconvenient for asynchronous task execution, because each time I would have to wait for the response to one keyword, so the next keyword may fail because the response it is supposed to read was already sent!
I could look into the Process BuiltIn library or the robotframework-async library for parallel keyword execution, but it would be harder to process many asynchronous JMS messages.
After further investigation, I think I will look into robotframework-jmsLibrary. Some development enhancements have to be done, like adding ActiveMQ support.
This way, I can send and consume many asynchronous messages via ActiveMQ and then process every message via robotframework-jmsLibrary.
Example :
RF-jmsLibrary <==> synchronous <==> activeMq <==> asynchronous <==> system

JMS API cannot browse messages, IBM API can

My current application logic uses the depth of a 'PROCESS' WMQ queue to determine whether a job is being processed by an IIB9 workflow. If the workflow is processing a message, the application waits until that workflow is over. Once the workflow is over, the 'PROCESS' queue is emptied using a GET operation and the application sends the other messages in the sequence for processing. I am using JMS selectors to differentiate between multiple messages being processed in parallel by the workflow.
The issue is with determining the depth of the queue. The JMS API gives the depth as 0, while the IBM API gives the depth as 1 (which is expected). Unfortunately, I cannot use the IBM API because my logic uses some complex message selectors.
Has anyone seen this bizarre behaviour? Please note that the IIB9 workflow is in progress while the size check is being made. Is there a setting to be tweaked?
JMS Code (message selector removed for clarity):
public class QDepthJMS {
public static void main(String[] a) throws Exception {
MQConnectionFactory factory = new MQConnectionFactory();
factory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
factory.setQueueManager("QM01");
factory.setHostName("10.10.98.15");
factory.setPort(1414);
factory.setChannel("Java.Clients");
MQConnection connection = (MQConnection) factory.createConnection();
connection.start();
MQSession session = (MQSession) connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MQQueue queue = (MQQueue) session.createQueue("queue:///PROCESS");
MQQueueBrowser browser = (MQQueueBrowser) session.createBrowser(queue);
Enumeration<Message> msgs = browser.getEnumeration();
int count =0;
while (msgs.hasMoreElements()) {
msgs.nextElement();
++count;
}
System.out.println(count);
}
}
IBM API (Check MQ queue depth):
public class QDepth {
private final String host;
private final int port;
private final String channel;
private final String manager;
private final MQQueueManager qmgr;
public QDepth(String host, int port, String channel, String manager) throws MQException {
this.host = host;
this.port = port;
this.channel = channel;
this.manager = manager;
this.qmgr = createQueueManager();
}
public int depthOf(String queueName) throws MQException {
MQQueue queue = qmgr.accessQueue(queueName, MQC.MQOO_INQUIRE | MQC.MQOO_INPUT_AS_Q_DEF, null, null, null);
return queue.getCurrentDepth();
}
@SuppressWarnings("unchecked")
private MQQueueManager createQueueManager() throws MQException {
MQEnvironment.channel = channel;
MQEnvironment.port = port;
MQEnvironment.hostname = host;
MQEnvironment.properties.put(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES);
return new MQQueueManager(manager);
}
public static void main(String[] a) throws Exception {
QDepth qd = new QDepth("10.10.98.15", 1414, "Java.Clients", "QM01");
System.out.println(qd.depthOf("PROCESS"));
}
}
You are not comparing "like with like". The IBM API queries the queue depth, i.e. how many messages are on the queue, but the JMS API is browsing through the messages and counting them. There is a valid reason for them to be different: the usual cause is that someone has put a message under a unit of work (syncpoint) and it has not yet been committed. Therefore, at the point you run the IBM API it will say there is 1 message on the queue (there is...), but it is not gettable/browseable as it is not yet committed.
You can verify this using runmqsc (and probably the GUI): run DIS QSTATUS in runmqsc and look at the UNCOM attribute - see http://www-01.ibm.com/support/docview.wss?uid=swg21636775

Unable to configure "Keep Alive" in Camel HTTP component

I'm having some trouble with the right setup of the HTTP component. Currently a microservice pulls JSON content from a provider, processes it and sends it to the next service for further processing. The main problem is that this microservice creates a ton of CLOSE_WAIT socket connections. I understand that the whole concept of "keep-alive" is to keep the connection open until I close it, but it's possible that the server will drop the connection for some reason, leaving these CLOSE_WAIT sockets behind.
I've created a small service for debugging/testing purposes which makes a GET call to Google, but even this connection stays open until I close the program. I've tried many different solutions:
.setHeader("Connection", constant("Close"))
-Dhttp.keepAlive=false as VM argument
Switching from Camel-Http to Camel-Http4
httpClient.soTimeout=500 (Camel-HTTP), httpClient.socketTimeout=500 and connectionTimeToLive=500 (Camel-HTTP4)
.setHeader("Connection", simple("Keep-Alive")) and
.setHeader("Keep-Alive", simple("timeout=10")) (Camel-HTTP4)
Setting via debugging the response of DefaultConnectionKeepAliveStrategy from -1 (never ending) to a specific value in Camel-HTTP4 - that works but I was not able to inject my own strategy.
but I had no success. So maybe one of you can help me:
How can I tell Camel-HTTP to close a connection after a specific time has passed? For example, the service pulls from the content provider every hour. After 3-4 hours the HttpComponent should close the connection after the pull and reopen it when the next pull happens. Currently every connection is put back into the MultiThreadedHttpConnectionManager and the socket stays open.
If that's not possible with Camel-HTTP: how can I inject an HttpClientBuilder into the creation of my route? I know it should be possible via the httpClient option, but I don't understand that specific part of the documentation.
Thank you all for your help
Unfortunately, none of the proposed answers got rid of the CLOSE_WAIT connection state on my side; the connections remained until the application was finally closed.
I reproduced this problem with the following test case:
public class HttpInvokationTest extends CamelSpringTestSupport {
private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
@EndpointInject(uri = "mock:success")
private MockEndpoint successEndpoint;
@EndpointInject(uri = "mock:failure")
private MockEndpoint failureEndpoint;
@Override
protected AbstractApplicationContext createApplicationContext() {
return new AnnotationConfigApplicationContext(ContextConfig.class);
}
@Configuration
@Import(HttpClientSpringTestConfig.class)
public static class ContextConfig extends CamelConfiguration {
@Override
public List<RouteBuilder> routes() {
List<RouteBuilder> routes = new ArrayList<>(1);
routes.add(new RouteBuilder() {
@Override
public void configure() {
from("direct:start")
.log(LoggingLevel.INFO, LOG, CONFIDENTIAL, "Invoking external URL: ${header[ERPEL_URL]}")
.setHeader("Connection", constant("close"))
.recipientList(header("TEST_URL"))
.log(LoggingLevel.DEBUG, "HTTP response code: ${header["+Exchange.HTTP_RESPONSE_CODE+"]}")
.bean(CopyBodyToHeaders.class)
.choice()
.when(header(Exchange.HTTP_RESPONSE_CODE).isGreaterThanOrEqualTo(300))
.to("mock:failure")
.otherwise()
.to("mock:success");
}
});
return routes;
}
}
@Test
public void testHttpInvocation() throws Exception {
successEndpoint.expectedMessageCount(1);
failureEndpoint.expectedMessageCount(0);
ProducerTemplate template = context.createProducerTemplate();
template.sendBodyAndHeader("direct:start", null, "TEST_URL", "http4://meta.stackoverflow.com");
successEndpoint.assertIsSatisfied();
failureEndpoint.assertIsSatisfied();
Exchange exchange = successEndpoint.getExchanges().get(0);
Map<String, Object> headers = exchange.getIn().getHeaders();
String body = exchange.getIn().getBody(String.class);
for (String key : headers.keySet()) {
LOG.info("Header: {} -> {}", key, headers.get(key));
}
LOG.info("Body: {}", body);
Thread.sleep(120000);
}
}
and issuing netstat -ab -p tcp | grep 151.101.129.69, where the IP is that of meta.stackoverflow.com.
This gave responses like:
tcp4 0 0 192.168.0.10.52183 151.101.129.69.https ESTABLISHED 37562 2118
tcp4 0 0 192.168.0.10.52182 151.101.129.69.http ESTABLISHED 885 523
right after the invocation, followed by
tcp4 0 0 192.168.0.10.52183 151.101.129.69.https CLOSE_WAIT 37562 2118
tcp4 0 0 192.168.0.10.52182 151.101.129.69.http CLOSE_WAIT 885 523
responses until the application was closed due to the Connection: keep-alive header even with a configuration like the one below:
@Configuration
@EnableConfigurationProperties(HttpClientSettings.class)
public class HttpClientSpringTestConfig {
private final static Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
@Resource
private HttpClientSettings httpClientSettings;
@Resource
private CamelContext camelContext;
private SocketConfig httpClientSocketConfig() {
/*
socket timeout:
Monitors the time passed between two consecutive incoming messages over the connection and
raises a SocketTimeoutException if no message was received within the given timeout interval
*/
LOG.info("Creating a SocketConfig with a socket timeout of {} seconds", httpClientSettings.getSoTimeout());
return SocketConfig.custom()
.setSoTimeout(httpClientSettings.getSoTimeout() * 1000)
.setSoKeepAlive(false)
.setSoReuseAddress(false)
.build();
}
private RequestConfig httpClientRequestConfig() {
/*
connection timeout:
The time span the application will wait for a connection to get established. If the connection
is not established within the given amount of time a ConnectionTimeoutException will be raised.
*/
LOG.info("Creating a RequestConfig with a socket timeout of {} seconds and a connection timeout of {} seconds",
httpClientSettings.getSoTimeout(), httpClientSettings.getConTimeout());
return RequestConfig.custom()
.setConnectTimeout(httpClientSettings.getConTimeout() * 1000)
.setSocketTimeout(httpClientSettings.getSoTimeout() * 1000)
.build();
}
@Bean(name = "httpClientConfigurer")
public HttpClientConfigurer httpConfiguration() {
ConnectionKeepAliveStrategy myStrategy = new ConnectionKeepAliveStrategy() {
@Override
public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
return 5 * 1000;
}
};
PoolingHttpClientConnectionManager conMgr =
new PoolingHttpClientConnectionManager();
conMgr.closeIdleConnections(5, TimeUnit.SECONDS);
return builder -> builder.setDefaultSocketConfig(httpClientSocketConfig())
.setDefaultRequestConfig(httpClientRequestConfig())
.setConnectionTimeToLive(5, TimeUnit.SECONDS)
.setKeepAliveStrategy(myStrategy)
.setConnectionManager(conMgr);
}
@PostConstruct
public void init() {
LOG.debug("Initializing HTTP clients");
HttpComponent httpComponent = camelContext.getComponent("http4", HttpComponent.class);
httpComponent.setHttpClientConfigurer(httpConfiguration());
HttpComponent httpsComponent = camelContext.getComponent("https4", HttpComponent.class);
httpsComponent.setHttpClientConfigurer(httpConfiguration());
}
}
or defining the settings directly on the respective HttpComponent.
On examining the respective proposed methods in the HttpClient code, it becomes obvious that these methods are single-shot operations, not configurations that HttpClient internally re-checks every few milliseconds.
PoolingHttpClientConnectionManager states further that:
The handling of stale connections was changed in version 4.4. Previously, the code would check every connection by default before re-using it. The code now only checks the connection if the elapsed time since the last use of the connection exceeds the timeout that has been set. The default timeout is set to 2000ms
which only occurs if an attempt is made to re-use a connection. That makes sense for a connection pool, especially when multiple messages are exchanged via the same connection. For single-shot invocations, which should behave more like Connection: close, there might not be a reuse of that connection for some time, leaving the connection open or half-closed, as no further attempt is made to read from that connection and therefore notice that it could be closed.
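For reference, that re-validation interval is configurable on the pooling connection manager (HttpClient 4.4+); a minimal sketch:
PoolingHttpClientConnectionManager conMgr = new PoolingHttpClientConnectionManager();
// re-validate a pooled connection only if it sat idle for more than 2 seconds before being reused
conMgr.setValidateAfterInactivity(2000);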
I noticed that I already solved such an issue a while back with traditional HttpClients and started to port this solution to Camel, which worked out quite easily.
The solution basically consists of registering HttpClients with a service and then periodically (5 seconds in my case) call closeExpiredConnections() and closeIdleConnections(...).
This logic is kept in a singleton enum, as this is actually in a library that a couple of applications use, each running in their own JVM.
/**
* This singleton monitor will check every few seconds for idle and stale connections and perform
* a cleanup on the connections using the registered connection managers.
*/
public enum IdleConnectionMonitor {
INSTANCE;
private final static Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
/** The execution service which runs the cleanup every 5 seconds **/
private ScheduledExecutorService executorService =
Executors.newScheduledThreadPool(1, new NamingThreadFactory());
/** The actual thread which performs the monitoring **/
private IdleConnectionMonitorThread monitorThread = new IdleConnectionMonitorThread();
IdleConnectionMonitor() {
// execute the thread every 5 seconds till the application is shutdown (or the shutdown method
// is invoked)
executorService.scheduleAtFixedRate(monitorThread, 5, 5, TimeUnit.SECONDS);
}
/**
* Registers a {@link HttpClientConnectionManager} to monitor for stale connections
*/
public void registerConnectionManager(HttpClientConnectionManager connMgr) {
monitorThread.registerConnectionManager(connMgr);
}
/**
* Request to stop the monitoring for stale HTTP connections.
*/
public void shutdown() {
executorService.shutdown();
try {
if (!executorService.awaitTermination(3, TimeUnit.SECONDS)) {
LOG.warn("Connection monitor shutdown not finished after 3 seconds!");
}
} catch (InterruptedException iEx) {
LOG.warn("Execution service was interrupted while waiting for graceful shutdown");
}
}
/**
* Upon invocation, the list of registered connection managers will be iterated through and if a
* referenced object is still reachable {@link HttpClientConnectionManager#closeExpiredConnections()}
* and {@link HttpClientConnectionManager#closeIdleConnections(long, TimeUnit)} will be invoked
* in order to cleanup stale connections.
* <p/>
* This runnable implementation holds a weakly referable list of {@link
* HttpClientConnectionManager} objects. If a connection manager is only reachable by {@link
* WeakReference}s or {@link PhantomReference}s it gets eligible for garbage collection and thus
* may return null values. If this is the case, the connection manager will be removed from the
* internal list of registered connection managers to monitor.
*/
private static class IdleConnectionMonitorThread implements Runnable {
// we store only weak-references to connection managers in the list, as the lifetime of the
// thread may extend the lifespan of a connection manager and thus allowing the garbage
// collector to collect unused objects as soon as possible
private List<WeakReference<HttpClientConnectionManager>> registeredConnectionManagers =
Collections.synchronizedList(new ArrayList<>());
@Override
public void run() {
LOG.trace("Executing connection cleanup");
Iterator<WeakReference<HttpClientConnectionManager>> conMgrs =
registeredConnectionManagers.iterator();
while (conMgrs.hasNext()) {
WeakReference<HttpClientConnectionManager> weakConMgr = conMgrs.next();
HttpClientConnectionManager conMgr = weakConMgr.get();
if (conMgr != null) {
LOG.trace("Found connection manager: {}", conMgr);
conMgr.closeExpiredConnections();
conMgr.closeIdleConnections(30, TimeUnit.SECONDS);
} else {
conMgrs.remove();
}
}
}
void registerConnectionManager(HttpClientConnectionManager connMgr) {
registeredConnectionManagers.add(new WeakReference<>(connMgr));
}
}
private static class NamingThreadFactory implements ThreadFactory {
@Override
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setName("Connection Manager Monitor");
return t;
}
}
}
As mentioned, this singleton service spawns its own thread that invokes the two above-mentioned methods every 5 seconds. These invocations take care of closing connections that have either expired or have been idle for the stated amount of time.
In order to "camelize" this service, EventNotifierSupport can be utilized to let Camel take care of shutting down the monitor thread once Camel itself is closing down.
/**
* This Camel service will take care of the lifecycle management of {@link IdleConnectionMonitor}
* and invoke {@link IdleConnectionMonitor#shutdown()} once Camel is closing down, in order to stop
* listening for stale connections.
*/
public class IdleConnectionMonitorService extends EventNotifierSupport {
private final static Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
private IdleConnectionMonitor connectionMonitor;
@Override
public void notify(EventObject event) {
if (event instanceof CamelContextStartedEvent) {
LOG.info("Start listening for closable HTTP connections");
connectionMonitor = IdleConnectionMonitor.INSTANCE;
} else if (event instanceof CamelContextStoppingEvent){
LOG.info("Shutting down listener for open HTTP connections");
connectionMonitor.shutdown();
}
}
@Override
public boolean isEnabled(EventObject event) {
return event instanceof CamelContextStartedEvent || event instanceof CamelContextStoppingEvent;
}
public IdleConnectionMonitor getConnectionMonitor() {
return this.connectionMonitor;
}
}
In order to take advantage of that service, the connection manager used by the HttpClient that Camel creates internally needs to be registered with the service, which is done in the code block below:
private void registerHttpClientConnectionManager(HttpClientConnectionManager conMgr) {
if (!getIdleConnectionMonitorService().isPresent()) {
// register the service with Camel so that on a shutdown the monitoring thread will be stopped
camelContext.getManagementStrategy().addEventNotifier(new IdleConnectionMonitorService());
}
IdleConnectionMonitor.INSTANCE.registerConnectionManager(conMgr);
}
private Optional<IdleConnectionMonitorService> getIdleConnectionMonitorService() {
for (EventNotifier eventNotifier : camelContext.getManagementStrategy().getEventNotifiers()) {
if (eventNotifier instanceof IdleConnectionMonitorService) {
return Optional.of((IdleConnectionMonitorService) eventNotifier);
}
}
return Optional.empty();
}
Last but not least, the connection manager defined in httpConfiguration inside the HttpClientSpringTestConfig in my case needed to be passed to the introduced register function:
PoolingHttpClientConnectionManager conMgr = new PoolingHttpClientConnectionManager();
registerHttpClientConnectionManager(conMgr);
This might not be the prettiest solution, but it does close the half-closed connections on my machine.
Edit:
I just learned that you can use a NoConnectionReuseStrategy, which changes the connection state to TIME_WAIT rather than CLOSE_WAIT and therefore removes the connection after a short moment. Unfortunately, the request is still issued with a Connection: keep-alive header. This strategy creates a new connection per request, i.e. if you get a 301 Moved Permanently redirect response, the redirect will occur on a new connection.
The httpClientConfigurer bean would need to change to the following in order to make use of the above mentioned strategy:
@Bean(name = "httpClientConfigurer")
public HttpClientConfigurer httpConfiguration() {
return builder -> builder.setDefaultSocketConfig(socketConfig)
.setDefaultRequestConfig(requestConfig)
.setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE);
}
It can be done by closing idle connections once they have been idle for a configured time. You can achieve the same by configuring an idle-connection timeout for the Camel HTTP component.
Camel HTTP provides an interface to do so:
get the client connection manager from org.apache.camel.component.http4.HttpComponent and cast it to PoolingHttpClientConnectionManager:
PoolingHttpClientConnectionManager poolingClientConnectionManager = (PoolingHttpClientConnectionManager) httpComponent
.getClientConnectionManager();
poolingClientConnectionManager.closeIdleConnections(5000, TimeUnit.MILLISECONDS);
See http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.html#closeIdleConnections(long, java.util.concurrent.TimeUnit)
Firstly, Roman Vottner, your answer and just your sheer dedication to finding the issue helped me a truckload. I had been struggling with the CLOSE_WAIT for 2 days, and your answer was what helped. Here is what I did: I added the following code to my CamelConfiguration class, which essentially tampers with the CamelContext at startup.
HttpComponent http4 = camelContext.getComponent("https4", HttpComponent.class);
http4.setHttpClientConfigurer(new HttpClientConfigurer() {
@Override
public void configureHttpClient(HttpClientBuilder builder) {
builder.setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE);
}
});
Worked like a charm.
You can provide your own clientConnectionManager to HTTP4. Generally you should use an instance of org.apache.http.impl.conn.PoolingHttpClientConnectionManager, which you'd configure with your own org.apache.http.config.SocketConfig by passing it to setDefaultSocketConfig method of the connection manager.
If you're using Spring with Java config, you would have a method:
@Bean
PoolingHttpClientConnectionManager connectionManager() {
SocketConfig socketConfig = SocketConfig.custom()
.setSoKeepAlive(false)
.setSoReuseAddress(true)
.build();
PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setDefaultSocketConfig(socketConfig);
return connectionManager;
}
and then you'd just use it in your endpoint definition like so: clientConnectionManager=#connectionManager
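For illustration, such an endpoint definition could look like this (the target URL is just a placeholder):
from("direct:start")
    .to("http4://example.com/api?clientConnectionManager=#connectionManager");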

HttpServer within junit fails with address in use error after first test

I have a Java class for use with JUnit 4.x. Within each @Test method I create a new HttpServer on port 9090. The first invocation works fine, but subsequent ones fail with "Address is already in use: bind".
Here's an example:
@Test
public void testSendNoDataHasValidResponse() throws Exception {
InetSocketAddress address = new InetSocketAddress(9090);
HttpHandler handler = new HttpHandler() {
@Override
public void handle(HttpExchange exchange) throws IOException {
byte[] response = "Hello, world".getBytes();
exchange.sendResponseHeaders(HttpURLConnection.HTTP_OK, response.length);
exchange.getResponseBody().write(response);
exchange.close();
}
};
HttpServer server = HttpServer.create(address, 1);
server.createContext("/me.html", handler);
server.start();
Client client = new Client.Builder(new URL("http://localhost:9090/me.html"), 20, "mykey").build();
client.sync();
server.stop(1);
assertEquals(true, client.isSuccessfullySynchronized());
}
Clearly the HttpServer is held solely within each method and is stopped before the end, so I fail to see what's continuing to hold any sockets open. The first test passes; subsequent ones fail every time.
Any ideas?
EDIT with corrected method:
@Test
public void testSendNoDataHasValidResponse() throws Exception {
server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 1);
HttpHandler handler = new HttpHandler() {
@Override
public void handle(HttpExchange exchange) throws IOException {
byte[] response = "Hello, world".getBytes();
exchange.sendResponseHeaders(HttpURLConnection.HTTP_OK, response.length);
exchange.getResponseBody().write(response);
exchange.close();
}
};
server.createContext("/me.html", handler);
server.start();
InetSocketAddress address = server.getAddress();
String target = String.format("http://%s:%s/me.html", address.getHostName(), address.getPort());
Client client = new Client.Builder(new URL(target), 20, "mykey").build();
client.sync();
server.stop(0);
assertEquals(true, client.isSuccessfullySynchronized());
}
jello's answer is on the money.
Other workarounds:
Reuse the same HttpServer for all your tests (see the sketch after this list). To clean it up between tests, you can remove all its contexts. If you give it a custom executor, you can also wait for or kill off all the worker threads too.
Create each HttpServer on a new port. You can do this by specifying a port number of zero when creating the InetSocketAddress. You can then find the actual port in use by querying the server for its port after creating it, and use that in your tests.
Change the global server socket factory to a custom factory which returns the same server socket every time. That lets you reuse the same actual socket for many tests, without having to reuse the HttpServer.
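A sketch of the first workaround, reusing one server across all tests (assuming JUnit 4 and an ephemeral port):
private static HttpServer server;

@BeforeClass
public static void startServer() throws IOException {
    // port 0 lets the OS pick a free port; query server.getAddress().getPort() in each test
    server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 1);
    server.start();
}

@After
public void cleanUp() {
    // remove the per-test context (assuming the test registered it) so the next test can add its own
    server.removeContext("/me.html");
}

@AfterClass
public static void stopServer() {
    server.stop(0);
}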
There is usually a 2 minute wait before you can rebind to a specific port number. Run netstat to confirm whether your server's connection is in TIME_WAIT. If so, you can get around it by using the SO_REUSEADDR option before binding. Docs are here for Java.
When you create the HttpServer, you specify "the maximum number of queued incoming connections to allow on the listening socket", which here is 1:
server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 1);
link

Java Netty load testing issues

I wrote a server that accepts connections and bombards them with messages (~100 bytes) using a text protocol, and my implementation is able to send about 400K msg/sec over loopback with a 3rd-party client. I picked Netty for this task, on SUSE 11 RealTime with JRockit RTS.
But when I started developing my own client based on Netty, I faced a drastic throughput reduction (down from 400K to 1.3K msg/sec). The client code is pretty straightforward. Could you please give advice or show examples of how to write a much more effective client? I actually care more about latency, but I started with throughput tests, and I don't think it is normal to get 1.5K msg/sec on loopback.
P.S. The client's only purpose is to receive messages from the server and very seldom send heartbeats.
Client.java
public class Client {
private static ClientBootstrap bootstrap;
private static Channel connector;
public static boolean start()
{
ChannelFactory factory =
new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool());
ExecutionHandler executionHandler = new ExecutionHandler( new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));
bootstrap = new ClientBootstrap(factory);
bootstrap.setPipelineFactory( new ClientPipelineFactory() );
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
bootstrap.setOption("receiveBufferSize", 1048576);
ChannelFuture future = bootstrap
.connect(new InetSocketAddress("localhost", 9013));
if (!future.awaitUninterruptibly().isSuccess()) {
System.out.println("--- CLIENT - Failed to connect to server at " +
"localhost:9013.");
bootstrap.releaseExternalResources();
return false;
}
connector = future.getChannel();
return connector.isConnected();
}
public static void main( String[] args )
{
boolean started = start();
if ( started )
System.out.println( "Client connected to the server" );
}
}
ClientPipelineFactory.java
public class ClientPipelineFactory implements ChannelPipelineFactory{
private final ExecutionHandler executionHandler;
public ClientPipelineFactory( ExecutionHandler executionHandle )
{
this.executionHandler = executionHandle;
}
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = pipeline();
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
1024, Delimiters.lineDelimiter()));
pipeline.addLast( "executor", executionHandler);
pipeline.addLast("handler", new MessageHandler() );
return pipeline;
}
}
MessageHandler.java
public class MessageHandler extends SimpleChannelHandler{
long max_msg = 10000;
long cur_msg = 0;
long startTime = System.nanoTime();
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
cur_msg++;
if ( cur_msg == max_msg )
{
System.out.println( "Throughput (msg/sec) : " + max_msg* NANOS_IN_SEC/( System.nanoTime() - startTime ) );
cur_msg = 0;
startTime = System.nanoTime();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
e.getCause().printStackTrace();
e.getChannel().close();
}
}
Update: on the server side there is a periodic thread that writes to the accepted client channel, and the channel soon becomes unwritable.
Update N2: I added an OrderedMemoryAwareExecutor to the pipeline, but the throughput is still very low (about 4K msg/sec).
Fixed: I put the executor in front of the whole pipeline stack and it worked out!
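A sketch of what "executor in front of the whole pipeline stack" looks like in the ClientPipelineFactory above:
@Override
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline pipeline = pipeline();
    // the execution handler comes first, so frame decoding and message handling
    // both run on the executor threads instead of the NIO worker thread
    pipeline.addLast("executor", executionHandler);
    pipeline.addLast("framer", new DelimiterBasedFrameDecoder(1024, Delimiters.lineDelimiter()));
    pipeline.addLast("handler", new MessageHandler());
    return pipeline;
}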
If the server is sending messages with a fixed size (~100 bytes), you can set a ReceiveBufferSizePredictor on the client bootstrap; this will optimize the read:
bootstrap.setOption("receiveBufferSizePredictorFactory",
new AdaptiveReceiveBufferSizePredictorFactory(MIN_PACKET_SIZE, INITIAL_PACKET_SIZE, MAX_PACKET_SIZE));
According to the code segment you have posted, the client's NIO worker thread is doing everything in the pipeline, so it will be busy with decoding and with executing the message handlers. You have to add an execution handler.
You have said that the channel is becoming unwritable from the server side, so you may have to adjust the watermark sizes in the server bootstrap (see the sketch after the utility class below). You can periodically monitor the write buffer size (write queue size) and make sure that the channel is becoming unwritable because messages cannot be written to the network. That can be done with a util class like the one below.
package org.jboss.netty.channel.socket.nio;
import org.jboss.netty.channel.Channel;
public final class NioChannelUtil {
public static long getWriteTaskQueueCount(Channel channel) {
NioSocketChannel nioChannel = (NioSocketChannel) channel;
return nioChannel.writeBufferSize.get();
}
}
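As for the watermarks mentioned above, in Netty 3 they can be tuned via the server bootstrap's child options; a sketch with illustrative values (serverBootstrap being the server's ServerBootstrap):
// once the pending write queue exceeds the high water mark the channel reports itself unwritable,
// and it becomes writable again when the queue drains below the low water mark
serverBootstrap.setOption("child.writeBufferHighWaterMark", 128 * 1024);
serverBootstrap.setOption("child.writeBufferLowWaterMark", 64 * 1024);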
