I have a server which receives requests from clients and, based on the requests, connects to some external website and does some operations.
I am using Apache Commons HttpClient (v 2.0.2) to do these connections (I know it's old, but I have to use it because of other restrictions).
My server is not going to get frequent requests. There may be a lot of requests when it's first deployed, but from then on it's only going to be a few requests a day, with occasional spurts of heavier traffic.
All connections are going to be to one of 3 URLs; they may be HTTP or HTTPS.
I was thinking of using a separate instance of HttpClient for each request.
Is there any need for me to use a common HttpClient object and use it with MultiThreadedHttpConnectionManager for different connections?
How exactly does MultiThreadedHttpConnectionManager help - does it keep the connection open even after you call releaseConnection? How long will it keep it open?
All my connections are going to be GETs and they are going to return 10-20 bytes at most. I am not downloading anything. The reason I am using HttpClient rather than the core Java libraries is that occasionally I may want to use HTTP 1.0 (I don't think the core Java classes support this), and I also may want to follow HTTP redirects automatically.
I think it all depends on what your SLAs are and whether the performance is within the acceptable/expected response times. Your solution will work without any issues, but it is not scalable if your application's demands grow over time.
Using MultiThreadedHttpConnectionManager is a much more elegant/scalable solution than having to manage 3 independent HttpClient objects.
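For example, a minimal sketch of sharing one pooled client with the old Commons HttpClient 2.x/3.x API (the URL is a placeholder):
// One shared, thread-safe client backed by a pooled connection manager.
HttpClient sharedClient = new HttpClient(new MultiThreadedHttpConnectionManager());
GetMethod get = new GetMethod("https://example.com/status"); // placeholder URL
try {
    int status = sharedClient.executeMethod(get);
    String body = get.getResponseBodyAsString(); // your 10-20 byte payload
    // ... act on status/body ...
} finally {
    // Returns the connection to the manager so it can be kept alive and reused.
    get.releaseConnection();
}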
I use a PoolingHttpClientConnectionManager (from the newer HttpClient 4.x) in a heavily multi-threaded environment and it works very well.
Here's an implementation of a Client pool:
public class HttpClientPool {
// Single-element enum to implement Singleton.
private static enum Singleton {
// Just one of me so constructor will be called once.
Client;
// The thread-safe client.
private final CloseableHttpClient threadSafeClient;
// The pool monitor.
private final IdleConnectionMonitor monitor;
// The constructor creates it - thus lazy initialization.
private Singleton() {
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
// Increase max total connection to 200
cm.setMaxTotal(200);
// Increase default max connection per route to 200
cm.setDefaultMaxPerRoute(200);
// Make my builder.
HttpClientBuilder builder = HttpClients.custom()
.setRedirectStrategy(new LaxRedirectStrategy())
.setConnectionManager(cm);
// Build the client.
threadSafeClient = builder.build();
// Start up an eviction thread.
monitor = new IdleConnectionMonitor(cm);
// Start up the monitor.
Thread monitorThread = new Thread(monitor);
monitorThread.setDaemon(true);
monitorThread.start();
}
public CloseableHttpClient get() {
return threadSafeClient;
}
}
public static CloseableHttpClient getClient() {
// The thread safe client is held by the singleton.
return Singleton.Client.get();
}
public static void shutdown() throws InterruptedException, IOException {
// Shutdown the monitor.
Singleton.Client.monitor.shutdown();
}
// Watches for stale connections and evicts them.
private static class IdleConnectionMonitor implements Runnable {
// The manager to watch.
private final PoolingHttpClientConnectionManager cm;
// Use a BlockingQueue to stop everything.
private final BlockingQueue<Stop> stopSignal = new ArrayBlockingQueue<Stop>(1);
IdleConnectionMonitor(PoolingHttpClientConnectionManager cm) {
this.cm = cm;
}
public void run() {
try {
// Holds the stop request that stopped the process.
Stop stopRequest;
// Every 5 seconds.
while ((stopRequest = stopSignal.poll(5, TimeUnit.SECONDS)) == null) {
// Close expired connections
cm.closeExpiredConnections();
// Optionally, close connections that have been idle too long.
cm.closeIdleConnections(60, TimeUnit.SECONDS);
}
// Acknowledge the stop request.
stopRequest.stopped();
} catch (InterruptedException ex) {
// terminate
}
}
// Pushed up the queue.
private static class Stop {
// The return queue.
private final BlockingQueue<Stop> stop = new ArrayBlockingQueue<Stop>(1);
// Called by the process that is being told to stop.
public void stopped() {
// Push me back up the queue to indicate we are now stopped.
stop.add(this);
}
// Called by the process requesting the stop.
public void waitForStopped() throws InterruptedException {
// Wait until the callee acknowledges that it has stopped.
stop.take();
}
}
public void shutdown() throws InterruptedException, IOException {
// Signal the stop to the thread.
Stop stop = new Stop();
stopSignal.add(stop);
// Wait for the stop to complete.
stop.waitForStopped();
// Close the pool.
HttpClientPool.getClient().close();
// Close the connection manager.
cm.close();
}
}
}
All you need to do is call CloseableHttpResponse conversation = HttpClientPool.getClient().execute(request); and, when you've finished with it, just close the response and the connection will be returned to the pool.
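For example, a minimal usage sketch (the URL and the EntityUtils call are illustrative):
HttpGet request = new HttpGet("http://example.com/resource"); // placeholder URL
try (CloseableHttpResponse conversation = HttpClientPool.getClient().execute(request)) {
    // Reading the entity and closing the response returns the connection to the pool.
    String body = EntityUtils.toString(conversation.getEntity());
}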
How should I deal with servers that hang while sending an HTTP response body, using the HTTP client included in Java 11 onwards, when I need to handle the response in a streaming fashion?
Having read the documentation, I'm aware that it's possible to set a timeout on connection and a timeout on the request:
HttpClient httpClient = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(2))
.build();
HttpRequest httpRequest = HttpRequest.newBuilder(URI.create("http://example.com"))
.timeout(Duration.ofSeconds(5))
.build();
HttpResponse<Stream<String>> httpResponse = httpClient
.send(httpRequest, HttpResponse.BodyHandlers.ofLines());
Stream<String> responseLineStream = httpResponse.body();
responseLineStream.count();
In the above code:
If a connection cannot be established within 2 seconds, a timeout exception will be thrown.
If a response is not received within 5 seconds, a timeout exception will be thrown. By experimentation, the timer starts after the connection is established, and for this type of BodyHandler, a response is considered received when the status line and headers have been received.
This means that when the code executes, within 7 seconds either an exception will have been thrown, or we'll have arrived at the last line. However, the last line isn't constrained by any timeout. If the server stops sending the response body, the last line blocks forever.
How can I prevent the last line hanging in this case?
My guess is that this is left to the consumer of the stream, since it is part of the handling logic, so the body handling can still be wrapped in a CompletableFuture:
HttpResponse<Stream<String>> httpResponse = httpClient.send(httpRequest,
HttpResponse.BodyHandlers.ofLines());
Stream<String> responseLineStream = httpResponse.body();
CompletableFuture<Long> future = CompletableFuture.supplyAsync(() -> responseLineStream.count());
long count = future.get(3, TimeUnit.SECONDS);
Or simply use a Future executed by a Java Executor.
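For instance, a sketch using a plain ExecutorService that closes the stream on timeout so the connection is released (the executor and the 3-second value are illustrative):
ExecutorService executor = Executors.newSingleThreadExecutor();
Stream<String> lines = httpResponse.body();
Future<Long> future = executor.submit(lines::count);
try {
    long count = future.get(3, TimeUnit.SECONDS); // overall time allowed for the body
} catch (TimeoutException e) {
    future.cancel(true);
    lines.close(); // releases the underlying connection instead of leaving it hanging
} catch (InterruptedException | ExecutionException e) {
    lines.close();
    throw new RuntimeException("body processing failed", e);
}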
One way to solve this is to set a timeout on the time taken to receive the whole body. That's what M A's solution does. As you've noticed, you should close the stream when the timeout elapses, so the connection is released properly instead of hanging in the background. A more general approach is to implement a BodySubscriber that completes itself exceptionally when it is not completed by upstream within the timeout. This avoids having to spawn a thread just for the timed wait, or having to close the stream yourself. Here's one possible implementation.
class TimeoutBodySubscriber<T> implements BodySubscriber<T> {
private final BodySubscriber<T> downstream;
private final Duration timeout;
private Subscription subscription;
/** Make sure downstream isn't called after we receive an onComplete or onError. */
private boolean done;
TimeoutBodySubscriber(BodySubscriber<T> downstream, Duration timeout) {
this.downstream = downstream;
this.timeout = timeout;
}
@Override
public CompletionStage<T> getBody() {
return downstream.getBody();
}
@Override
public synchronized void onSubscribe(Subscription subscription) {
this.subscription = requireNonNull(subscription);
downstream.onSubscribe(subscription);
// Schedule an error completion to be fired when the timeout elapses
CompletableFuture.delayedExecutor(timeout.toMillis(), TimeUnit.MILLISECONDS)
.execute(this::onTimeout);
}
private synchronized void onTimeout() {
if (!done) {
done = true;
downstream.onError(new HttpTimeoutException("body completion timed out"));
// Cancel subscription to release the connection, so it doesn't keep hanging in background
subscription.cancel();
}
}
@Override
public synchronized void onNext(List<ByteBuffer> item) {
if (!done) {
downstream.onNext(item);
}
}
@Override
public synchronized void onError(Throwable throwable) {
if (!done) {
done = true;
downstream.onError(throwable);
}
}
@Override
public synchronized void onComplete() {
if (!done) {
done = true;
downstream.onComplete();
}
}
static <T> BodyHandler<T> withBodyTimeout(BodyHandler<T> handler, Duration timeout) {
return responseInfo -> new TimeoutBodySubscriber<>(handler.apply(responseInfo), timeout);
}
}
It can be used as follows:
Duration timeout = Duration.ofSeconds(10);
HttpResponse<Stream<String>> httpResponse = httpClient
.send(httpRequest, TimeoutBodySubscriber.withBodyTimeout(HttpResponse.BodyHandlers.ofLines(), timeout));
Another approach is to use a read timeout. This is more flexible, as the response isn't timed out as long as the server remains active (i.e. keeps sending data). You'll need a BodySubscriber that completes itself exceptionally if it doesn't receive its next requested signal within the timeout. This is slightly more complex to implement. You can use Methanol if you're fine with a dependency; it implements read timeouts as described.
Duration timeout = Duration.ofSeconds(3);
HttpResponse<Stream<String>> httpResponse = httpClient
.send(httpRequest, MoreBodyHandlers.withReadTimeout(HttpResponse.BodyHandlers.ofLines(), timeout));
Another strategy is to use a combination of both: time out as soon as the server becomes inactive or the body takes too long to complete.
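For example, a sketch composing the two wrappers shown above (the timeout values are illustrative):
Duration readTimeout = Duration.ofSeconds(3);   // per-read inactivity limit
Duration totalTimeout = Duration.ofSeconds(30); // overall body completion limit
HttpResponse<Stream<String>> httpResponse = httpClient.send(
    httpRequest,
    TimeoutBodySubscriber.withBodyTimeout(
        MoreBodyHandlers.withReadTimeout(HttpResponse.BodyHandlers.ofLines(), readTimeout),
        totalTimeout));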
I am familiar with Netty basics and have used it to build a typical application server running on TCP designed to serve many clients/connections. However, I recently got a requirement to build a server which is designed to handle a handful of clients, or only one client most of the time. But the client is the gateway to many devices and therefore generates substantial traffic to the server I am trying to design.
My questions are:
Is it possible / recommended at all to use Netty for this use case? I have seen the discussion here.
Is it possible to use a multithreaded EventExecutor for the channel handlers in the pipeline, so that instead of the channel's EventLoop, the concurrency is achieved by the EventExecutor thread pool? Will it ensure that one message from the client is handled by one thread through all handlers, while the next message is handled by another thread?
Is there any example implementation available?
According to the documentation of io.netty.channel.oio, you can use it if you don't have lots of clients. In this case, every connection will be handled in a separate thread and will use Java's old blocking IO under the hood. Take a look at OioByteStreamChannel::activate:
/**
* Activate this instance. After this call {@link #isActive()} will return {@code true}.
*/
protected final void activate(InputStream is, OutputStream os) {
if (this.is != null) {
throw new IllegalStateException("input was set already");
}
if (this.os != null) {
throw new IllegalStateException("output was set already");
}
if (is == null) {
throw new NullPointerException("is");
}
if (os == null) {
throw new NullPointerException("os");
}
this.is = is;
this.os = os;
}
As you can see, the old blocking I/O streams are used there.
According to your comment: you can specify an EventExecutorGroup when adding a handler to the pipeline, like this:
// A separate executor group for the handlers (the size 16 is illustrative).
EventExecutorGroup handlerGroup = new DefaultEventExecutorGroup(16);
new ChannelInitializer<Channel>() {
    @Override
    public void initChannel(Channel ch) {
        // Handlers added with an EventExecutorGroup run on its threads instead of the channel's EventLoop.
        ch.pipeline().addLast(handlerGroup, new YourHandler());
    }
}
Let's take a look at the AbstractChannelHandlerContext:
@Override
public EventExecutor executor() {
if (executor == null) {
return channel().eventLoop();
} else {
return executor;
}
}
Here we see that if you don't register your own EventExecutor, it will use the child event loop group you specified while creating the ServerBootstrap.
new ServerBootstrap()
    .group(new OioEventLoopGroup(),  // acceptor group
           new OioEventLoopGroup())  // child group
Here is how reading from the channel is invoked, in AbstractChannelHandlerContext::invokeChannelRead:
static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
EventExecutor executor = next.executor();
if (executor.inEventLoop()) {
next.invokeChannelRead(m);
} else {
executor.execute(new Runnable() { //Invoked by the EventExecutor you specified
@Override
public void run() {
next.invokeChannelRead(m);
}
});
}
}
Even for a few connections I would go with NioEventLoopGroup.
Regarding your question:
Is it possible to use a multithreaded EventExecutor for the channel handlers in the pipeline, so that instead of the channel's EventLoop, the concurrency is achieved by the EventExecutor thread pool? Will it ensure that one message from the client is handled by one thread through all handlers, while the next message is handled by another thread?
Netty's Channel guarantees that all processing of an inbound or outbound message occurs in the same thread. You don't have to plug in an EventExecutor of your own to handle this. If serving inbound messages doesn't require long-lasting processing, your code will look like basic usage of ServerBootstrap. You might find it useful to tune the number of threads in the pool, as in the sketch below.
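A minimal sketch (standard Netty 4.x API; the thread counts, port, and YourHandler are illustrative):
EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts connections
EventLoopGroup workerGroup = new NioEventLoopGroup(4); // 4 I/O threads; the default is 2 * cores
ServerBootstrap bootstrap = new ServerBootstrap()
    .group(bossGroup, workerGroup)
    .channel(NioServerSocketChannel.class)
    .childHandler(new ChannelInitializer<SocketChannel>() {
        @Override
        public void initChannel(SocketChannel ch) {
            ch.pipeline().addLast(new YourHandler());
        }
    });
bootstrap.bind(8080);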
I want to know if I can save my application threads by implementing a Netty client.
I wrote a demo client; please find the code below. I expected that a single thread could connect to different ports and handle them efficiently, but I was wrong: Netty creates a connection per thread.
public class NettyClient {
public static void main(String[] args) {
Runnable runA = new Runnable() {
public void run() {
Connect(5544);
}
};
Thread threadA = new Thread(runA, "threadA");
threadA.start();
try {
Thread.sleep(1000);
} catch (InterruptedException x) {
}
Runnable runB = new Runnable() {
public void run() {
Connect(5544);
}
};
Thread threadB = new Thread(runB, "threadB");
threadB.start();
}
static ClientBootstrap bootstrap = null;
static NettyClient ins = new NettyClient();
public NettyClient() {
bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(), Executors.newCachedThreadPool()));
/*
* ClientBootstrap A helper class which creates a new client-side
* Channel and makes a connection attempt.
*
* NioClientSocketChannelFactory A ClientSocketChannelFactory which
* creates a client-side NIO-based SocketChannel. It utilizes the
* non-blocking I/O mode which was introduced with NIO to serve many
* number of concurrent connections efficiently
*
* There are two types of threads :Boss thread Worker threads Boss
* Thread passes control to worker thread.
*/
// Configure the client.
ChannelGroup channelGroup = new DefaultChannelGroup(NettyClient.class.getName());
// Only 1 thread configured, but it still accepts the threadA and threadB
// connections
OrderedMemoryAwareThreadPoolExecutor pipelineExecutor = new OrderedMemoryAwareThreadPoolExecutor(
1, 1048576, 1073741824, 1, TimeUnit.MILLISECONDS,
new NioDataSizeEstimator(), new NioThreadFactory("NioPipeline"));
bootstrap.setPipelineFactory(new NioCommPipelineFactory(channelGroup,
pipelineExecutor));
// bootstrap.setPipelineFactory(new
// BackfillClientSocketChannelFactory());
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.setOption("child.reuseAddress", true);
bootstrap.setOption("readWriteFair", true);
}
public static NettyClient getins() {
return ins;
}
public static void Connect(int port) {
ChannelFuture future = bootstrap
.connect(new InetSocketAddress("localhost", port));
Channel channel = future.awaitUninterruptibly().getChannel();
System.out.println(channel.getId());
channel.getCloseFuture().awaitUninterruptibly();
}
}
Now I want to know what are the benefits of using Netty client? Does it save Threads?
Netty saves threads. Your NettyClient wastes threads when waiting synchronously for opening and closing of the connections (calling awaitUninterruptibly()).
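Instead of blocking, you could register a listener on the ChannelFuture so no application thread is parked waiting (a sketch using the Netty 3.x API from the question):
bootstrap.connect(new InetSocketAddress("localhost", 5544))
    .addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            if (future.isSuccess()) {
                // Runs on a Netty I/O thread when the connect attempt completes.
                System.out.println("Connected: " + future.getChannel().getId());
            } else {
                future.getCause().printStackTrace();
            }
        }
    });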
By the way, how many connections will your client have? Maybe the classic synchronous one-thread-per-connection approach would suffice? Usually we have to save threads on the server side.
Netty allows you to handle thousands of connections with a handful of threads.
When used in a client application, it allows a handful of threads to make thousands of concurrent connections to server.
You have put sleep() in your thread. We must never block the Netty worker/boss threads. Even if there is a need to perform a one-off blocking operation, it must be off-loaded to another executor (see the sketch below). Netty uses NIO, and the same thread can be used for creating a new connection while an earlier connection receives data in its input buffer.
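For illustration, a sketch of off-loading a blocking call from a handler (Netty 3.x API to match the question; the pool size and doBlockingWork() are hypothetical):
public class OffloadingHandler extends SimpleChannelUpstreamHandler {
    private static final ExecutorService BLOCKING_POOL = Executors.newFixedThreadPool(4);

    @Override
    public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
        BLOCKING_POOL.submit(new Runnable() {
            public void run() {
                Object result = doBlockingWork(e.getMessage()); // slow, one-off operation
                e.getChannel().write(result);                   // safe to call from any thread
            }
        });
    }
}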
I want to create a server to handle socket connections from users, and inside my server I want to have a connection to RabbitMQ, one per user connection. But in the examples provided on their webpage I see only "while" loops used to wait for messages; in that case I would need to create a thread per connection just to process the messages from RabbitMQ.
Is there a way to do this in Java, using Spring or any other framework, where I just create a callback for RabbitMQ instead of using while loops?
I was using node.js, where it is pretty straightforward to do this, and I want to hear some proposals for Java.
You should take a look at the Channel.basicConsume and the DefaultConsumer abstract class: https://www.rabbitmq.com/api-guide.html#consuming
Java concurrency will require a thread for the callback to handle each message, but you can use a thread pool to reuse threads.
static final ExecutorService threadPool;
static {
threadPool = Executors.newCachedThreadPool();
}
Now you need to create a consumer that will handle each delivery by creating a Runnable instance that will be passed to the thread pool to execute.
channel.basicConsume(queueName, false, new DefaultConsumer(channel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
final byte[] msgBody = body; // a 'final' copy of the body that you can pass to the Runnable
final long msgTag = envelope.getDeliveryTag();
Runnable runnable = new Runnable() {
@Override
public void run() {
// handle the message here
doStuff(msgBody);
try {
    channel.basicAck(msgTag, false);
} catch (IOException e) {
    // basicAck throws IOException; log or otherwise handle the failed ack here
}
}
};
threadPool.submit(runnable);
}
});
This shows how you can handle concurrent deliveries on a single connection and channel without a while loop in a single thread blocking on each delivery. For your sanity, you will probably want to factor your Runnable implementation into its own class that accepts the channel, msgBody, msgTag and any other data as parameters, which will then be accessible when the run() method is called, as sketched below.
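A hypothetical sketch of such a class (the name DeliveryTask and its shape are illustrative, not part of the RabbitMQ API):
class DeliveryTask implements Runnable {
    private final Channel channel;
    private final byte[] msgBody;
    private final long msgTag;

    DeliveryTask(Channel channel, byte[] msgBody, long msgTag) {
        this.channel = channel;
        this.msgBody = msgBody;
        this.msgTag = msgTag;
    }

    @Override
    public void run() {
        doStuff(msgBody); // your message handling, as above
        try {
            channel.basicAck(msgTag, false); // acknowledge once processing succeeds
        } catch (IOException e) {
            // log or otherwise handle the failed ack
        }
    }
}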
I am using Jersey to implement an SSE scenario.
The server keeps connections alive and pushes data to clients periodically.
In my scenario there is a connection limit: only a certain number of clients can subscribe to the server at the same time.
So when a new client is trying to subscribe, I do a check (EventOutput.isClosed) to see if any old connections are no longer active, so that they can make room for new connections.
But the result of EventOutput.isClosed is always false unless the client explicitly calls close on its EventSource. This means that if a client drops accidentally (power outage or internet cutoff), it is still hogging the connection, and new clients cannot subscribe.
Is there a work around for this?
@CuiPengFei,
In my travels trying to find an answer to this myself, I stumbled upon a repository that explains how to gracefully clean up the connections from disconnected clients.
They encapsulate all of the SSE EventOutput logic in a Service/Manager. In this they spin up a thread that checks whether the EventOutput has been closed by the client. If so, they formally close the connection (EventOutput#close()). If not, they try to write to the stream. If that throws an Exception, then the client has disconnected without closing and the manager handles closing it. If the write is successful, the EventOutput is returned to the pool as it is still an active connection.
The repo (and the actual class) are available here. I've also included the class without imports below in case the repo is ever removed.
Note that they bind this to a Singleton. The store should be globally unique.
public class SseWriteManager {
private final ConcurrentHashMap<String, EventOutput> connectionMap = new ConcurrentHashMap<>();
private final ScheduledExecutorService messageExecutorService;
private final Logger logger = LoggerFactory.getLogger(SseWriteManager.class);
public SseWriteManager() {
messageExecutorService = Executors.newScheduledThreadPool(1);
messageExecutorService.scheduleWithFixedDelay(new messageProcessor(), 0, 5, TimeUnit.SECONDS);
}
public void addSseConnection(String id, EventOutput eventOutput) {
logger.info("adding connection for id={}.", id);
connectionMap.put(id, eventOutput);
}
private class messageProcessor implements Runnable {
@Override
public void run() {
try {
Iterator<Map.Entry<String, EventOutput>> iterator = connectionMap.entrySet().iterator();
while (iterator.hasNext()) {
boolean remove = false;
Map.Entry<String, EventOutput> entry = iterator.next();
EventOutput eventOutput = entry.getValue();
if (eventOutput != null) {
if (eventOutput.isClosed()) {
remove = true;
} else {
try {
logger.info("writing to id={}.", entry.getKey());
eventOutput.write(new OutboundEvent.Builder().name("custom-message").data(String.class, "EOM").build());
} catch (Exception ex) {
logger.info(String.format("write failed to id=%s.", entry.getKey()), ex);
remove = true;
}
}
}
if (remove) {
// we are removing the eventOutput; close it if it is not already closed.
if (!eventOutput.isClosed()) {
try {
eventOutput.close();
} catch (Exception ex) {
// do nothing.
}
}
iterator.remove();
}
}
} catch (Exception ex) {
logger.error("messageProcessor.run threw exception.", ex);
}
}
}
public void shutdown() {
if (messageExecutorService != null && !messageExecutorService.isShutdown()) {
logger.info("SseWriteManager.shutdown: calling messageExecutorService.shutdown.");
messageExecutorService.shutdown();
} else {
logger.info("SseWriteManager.shutdown: messageExecutorService == null || messageExecutorService.isShutdown().");
}
}
}
Wanted to provide an update on this:
What was happening is that the eventSource on the client side (JS) never got into readyState '1' unless we did a broadcast as soon as a new subscription was added. Even in this state the client could receive data pushed from the server. Adding a call to broadcast a simple "OK" message helped kick the eventSource into readyState 1.
On closing the connection from the client side: to be proactive in cleaning up resources, just closing the eventSource on the client side doesn't help. We must make another AJAX call to the server to force the server to do a broadcast. When the broadcast is forced, Jersey will clean up the connections that are no longer alive and will in turn release resources (connections in CLOSE_WAIT). If not, a connection will linger in CLOSE_WAIT until the next broadcast happens.