We want to migrate all our apache-httpclient-4.x code to java-http-client code to reduce dependencies. While migrating, I ran into the following issue under Java 11:
How to set the socket timeout in Java HTTP Client?
With apache-httpclient-4.x we can set the connection timeout and the socket timeout like this:
DefaultHttpClient httpClient = new DefaultHttpClient();
int timeout = 5; // seconds
HttpParams httpParams = httpClient.getParams();
httpParams.setParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, timeout * 1000);
httpParams.setParameter(CoreConnectionPNames.SO_TIMEOUT, timeout * 1000);
With java-http-client I can only set the connection timeout, like this:
HttpClient httpClient = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(5))
.build();
But I found no way to set the socket timeout. Is there a way to do this, or an open issue to support it in the future?
You can specify it at the HttpRequest.Builder level via the timeout method:
HttpClient httpClient = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(5))
.build();
HttpRequest httpRequest = HttpRequest.newBuilder()
.uri(URI.create("..."))
.timeout(Duration.ofSeconds(5)) // this sets the request timeout
.build();
httpClient.send(httpRequest, HttpResponse.BodyHandlers.ofString());
If you connect successfully but do not receive a response within the given time, a java.net.http.HttpTimeoutException: request timed out is thrown (in contrast to java.net.http.HttpConnectTimeoutException: HTTP connect timed out, which is thrown if the connection itself cannot be established in time).
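For example, a minimal sketch of telling the two apart (HttpConnectTimeoutException is a subclass of HttpTimeoutException, so it has to be caught first; both live in java.net.http):
try {
    HttpResponse<String> response = httpClient.send(httpRequest, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode());
} catch (HttpConnectTimeoutException e) {
    System.err.println("could not connect in time: " + e.getMessage());          // connect timeout
} catch (HttpTimeoutException e) {
    System.err.println("connected, but no response in time: " + e.getMessage()); // request timeout
} catch (IOException | InterruptedException e) {
    throw new RuntimeException(e);
}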
There doesn't seem to be a way to specify a timeout on the flow of packets (a socket timeout) in the Java HTTP Client.
I found an enhancement request on OpenJDK which seems to cover this possibility: https://bugs.openjdk.org/browse/JDK-8258397
Content from the link:
The HttpClient lets you set a connection timeout (HttpClient.Builder) and a request timeout (HttpRequest.Builder). However the request timeout will be cancelled as soon as the response headers have been read. There is currently no timeout covering the reception of the body.
A possibility for the caller is to make use of the CompletableFuture API (get/join will accept a timeout, or CF::orTimeout can be called).
IIRC - in that case, it will still be the responsibility of the caller to cancel the request. We might want to reexamine and possibly change that.
The disadvantage here is that some of our BodyHandlers (ofPublisher, ofInputStream) will return immediately - so the CF API won't help in this case.
This might be a good thing (or not).
Another possibility could be to add a body timeout on HttpRequest.Builder. This would then cover all cases - but do we really want to timeout in the case of ofInputStream or ofPublisher if the caller doesn't read the body fast enough?
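A minimal sketch of the CompletableFuture workaround mentioned in the issue, reusing the httpClient and httpRequest built above (the 10-second deadline is an arbitrary example; as the issue notes, cancelling the in-flight request is still up to the caller):
CompletableFuture<HttpResponse<String>> future =
        httpClient.sendAsync(httpRequest, HttpResponse.BodyHandlers.ofString())
                  .orTimeout(10, TimeUnit.SECONDS); // fail the future if the full response has not arrived by then
HttpResponse<String> response = future.join();     // throws CompletionException wrapping TimeoutException on timeout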
Related
I am using the Java Apache HttpClient to request a resource (B) with a timeout of 10 s. If the timeout is exceeded, a broken pipe is seen at the other application server.
Because of this, the Nginx at application B does not cache the response. How can I gracefully close the connection so that the other app server (B) does not encounter a broken-pipe exception?
If you're using a new enough HttpClient, can't you do something like this? (I just found a snippet someone else had written, but see the line marked ### below.)
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create("http://localhost:8081/test/resource"))
.header("Accept", "application/json")
.POST(HttpRequest.BodyPublishers.ofString("ping!"))
.build();
CompletableFuture<HttpResponse<String>> completableFuture =
client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
completableFuture
.completeOnTimeout(DEFAULT_RESPONSE, 1, TimeUnit.SECONDS) // ### ADD THIS: the underlying request actually continues, but the future completes with the fallback value so the caller can make progress
.thenApplyAsync(HttpResponse::headers)
.thenAcceptAsync(System.out::println);
HttpResponse<String> response = completableFuture.join();
I created an object of HttpRequestBase from the package org.apache.http.client.methods,
and after that I send the object via CloseableHttpClient:
protected CloseableHttpClient httpClient;
HttpRequestBase httpRequest = this.createHttpRequest(request);
this.httpClient.execute(httpRequest, new BasicResponseHandler());
I want to check httpRequest size before I send it. I need it to be limited to a specific number of MB.
How can I check its size?
It's not easy. The header and the body of the request are part of the HTTP application protocol. If your server uses SSL, the certificate is sent as part of the SSL/TLS handshake, before HTTP even starts. Even simply measuring the amount of data in an HTTP request is tricky: a typical HTTP stack does not assemble the entire request message in one place and does not keep a running total of the data sent. Depending on the HTTP stack you use, you could (in theory) use a custom socket factory and socket streams that count the bytes sent.
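That said, if a rough upper bound is enough before calling execute(), you can approximate it yourself. Below is a sketch (the helper class and its name are made up for illustration): it adds up the request line and headers and, for entity-enclosing requests with a repeatable body, measures the body by writing it to a counting stream. Transport-level overhead such as TLS and chunked encoding is not included.
import java.io.IOException;
import java.io.OutputStream;
import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
import org.apache.http.client.methods.HttpRequestBase;

class RequestSizeEstimator {

    static long estimateSize(HttpRequestBase request) throws IOException {
        long size = request.getRequestLine().toString().length() + 2;  // request line + CRLF
        for (Header header : request.getAllHeaders()) {
            size += header.toString().length() + 2;                    // each header + CRLF
        }
        size += 2;                                                     // blank line before the body
        if (request instanceof HttpEntityEnclosingRequestBase) {
            HttpEntity entity = ((HttpEntityEnclosingRequestBase) request).getEntity();
            if (entity != null) {
                if (entity.getContentLength() >= 0) {
                    size += entity.getContentLength();                 // length is already known
                } else if (entity.isRepeatable()) {
                    CountingOutputStream counter = new CountingOutputStream();
                    entity.writeTo(counter);                           // measure by writing it out
                    size += counter.count;
                }
            }
        }
        return size;
    }

    private static final class CountingOutputStream extends OutputStream {
        long count;
        @Override public void write(int b) { count++; }
        @Override public void write(byte[] b, int off, int len) { count += len; }
    }
}
A check such as RequestSizeEstimator.estimateSize(httpRequest) > maxBytes (with maxBytes being your MB limit) could then be used to reject the request before sending it.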
In my test application I execute consecutive HttpGet requests to the same host with Apache HttpClient, but with each subsequent request it turns out that the previous HttpConnection has been closed and a new HttpConnection is created.
I use the same instance of HttpClient and don't close responses. From each entity I get InputStream, read from it with Scanner and then close the Scanner. I have tested KeepAliveStrategy, it returns true. The time between requests doesn't exceed keepAlive or connectionTimeToLive durations.
Can anyone tell me what could be the reason for such behavior?
Updated
I have found the solution. In order to keep the HttpConnection alive it is necessary to set an HttpClientConnectionManager when building the HttpClient. I have used BasicHttpClientConnectionManager.
ConnectionKeepAliveStrategy keepAliveStrat = new DefaultConnectionKeepAliveStrategy() {
    @Override
    public long getKeepAliveDuration(HttpResponse response, HttpContext context)
    {
        long keepAlive = super.getKeepAliveDuration(response, context);
        if (keepAlive == -1)
            keepAlive = 120000; // no Keep-Alive header from the server: default to 2 minutes
        return keepAlive;
    }
};
HttpClientConnectionManager connectionManager = new BasicHttpClientConnectionManager();
try (CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(connectionManager) // without this setting the connection is not kept alive
        .setDefaultCookieStore(store)
        .setKeepAliveStrategy(keepAliveStrat)
        .setConnectionTimeToLive(120, TimeUnit.SECONDS)
        .setUserAgent(USER_AGENT)
        .build())
{
    HttpClientContext context = new HttpClientContext();
    RequestConfig config = RequestConfig.custom()
            .setCookieSpec(CookieSpecs.DEFAULT)
            .setSocketTimeout(10000)
            .setConnectTimeout(10000)
            .build();
    context.setRequestConfig(config);
    HttpGet httpGet = new HttpGet(uri);
    CloseableHttpResponse response = httpClient.execute(httpGet, context);
    HttpConnection conn = context.getConnection();
    HttpEntity entity = response.getEntity();
    try (Scanner in = new Scanner(entity.getContent(), ENC))
    {
        // do something
    }
    System.out.println("open=" + conn.isOpen()); // now open=true
    HttpGet httpGet2 = new HttpGet(uri2); // on the same host with another path
    // and so on
}
Updated 2
In general, checking connections with conn.isOpen() is not the proper way to check the connection state, because: "Internally HTTP connection managers work with instances of ManagedHttpClientConnection acting as a proxy for a real connection that manages connection state and controls execution of I/O operations. If a managed connection is released or gets explicitly closed by its consumer the underlying connection gets detached from its proxy and is returned back to the manager. Even though the service consumer still holds a reference to the proxy instance, it is no longer able to execute any I/O operations or change the state of the real connection either intentionally or unintentionally." (HttpClient Tutorial)
As @oleg has pointed out, the proper way to trace connections is by using the logger.
First of all, you need to make sure the remote server you're working with supports keep-alive connections. Simply check whether the remote server returns a Connection: Keep-Alive or Connection: Close header in each and every response. In the Close case there is nothing you can do about it. You can use an online tool to perform such a check.
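You can also do the same check programmatically with HttpClient itself; a rough sketch (the URL is just a placeholder):
try (CloseableHttpClient client = HttpClients.createDefault();
     CloseableHttpResponse response = client.execute(new HttpGet("https://example.com")))
{
    Header connection = response.getFirstHeader("Connection");
    System.out.println(connection != null ? connection.toString() : "no Connection header");
    EntityUtils.consume(response.getEntity()); // consume the body so the connection can be reused
}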
Next, you need to implement a ConnectionKeepAliveStrategy, as described in section 2.6 of the HttpClient tutorial. Note that since HttpClient version 4.0 you can use the existing DefaultConnectionKeepAliveStrategy, so your HttpClient will be constructed as follows:
HttpClient client = HttpClients.custom()
.setKeepAliveStrategy(DefaultConnectionKeepAliveStrategy.INSTANCE)
.build();
That will ensure your HttpClient instance reuses the same connection via the keep-alive mechanism, provided the server supports it.
Your application must be closing response objects in order to ensure proper resource de-allocation of the underlying connections. Upon response closure HttpClient keeps valid connections alive and returns them back to the connection manager (connection pool).
I suspect your code simply leaks connections, and every request ends up with a newly created connection while all previous connections keep piling up in memory.
From the example at HttpClient website:
// In order to ensure correct deallocation of system resources
// the user MUST call CloseableHttpResponse#close() from a finally clause.
// Please note that if response content is not fully consumed the underlying
// connection cannot be safely re-used and will be shut down and discarded
// by the connection manager.
So, as @oleg said, you need to close the HttpResponse before checking the connection status.
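Applied to the code from the question, the pattern would look roughly like this (the response is closed via try-with-resources before the connection state is inspected):
try (CloseableHttpResponse response = httpClient.execute(httpGet, context))
{
    HttpEntity entity = response.getEntity();
    try (Scanner in = new Scanner(entity.getContent(), ENC))
    {
        // read the body fully
    }
} // closing the response hands the still-valid connection back to the connection manager
Only after that is it meaningful to look at the connection state, e.g. via the connection logging mentioned above.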
HttpClient executes a request 4 times if it times out. If it does not time out, then everything works fine. Is this related to HttpClient?
I found that it is HttpClient's default behaviour to execute a request 4 times if it fails. I am not sure about other kinds of failure, but it does so at least with timeouts.
To disable this behaviour, do this:
DefaultHttpClient client = new DefaultHttpClient();
// Disable default behavior of HttpClient of retrying requests in case of failure
((AbstractHttpClient) client).setHttpRequestRetryHandler(new DefaultHttpRequestRetryHandler(0, false));
Here retry count is set to 0 to disable retry.
I found solution from this blog.
This resolved the issue for me (using HttpClient 4.3 and above):
HttpClientBuilder.create().disableAutomaticRetries().build();
Apache HttpClient automatically retries a request up to 5 times in the case of a transport exception. Here is what the documentation says:
HttpClient will automatically retry up to 5 times those methods that
fail with a transport exception while the HTTP request is still being
transmitted to the target server (i.e. the request has not been fully
transmitted to the server).
To change this behaviour you need to implement the HttpMethodRetryHandler interface.
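For HttpClient 4.x the equivalent hook is HttpRequestRetryHandler; a minimal sketch that allows at most two retries and never retries connect timeouts might look like this (the limits are arbitrary examples, and ConnectTimeoutException is org.apache.http.conn.ConnectTimeoutException):
CloseableHttpClient client = HttpClients.custom()
        .setRetryHandler((exception, executionCount, context) -> {
            if (exception instanceof ConnectTimeoutException) {
                return false;               // never retry connect timeouts
            }
            return executionCount <= 2;     // allow at most 2 retries for other I/O failures
        })
        .build();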
I have a REST web service with some methods.
I'm sending requests to it with Apache HttpClient 4.
When I call a method that is bigger and slower, the client throws a NoHttpResponseException.
After googling, I discovered that the server is cutting off the connection with my client app.
So I tried to disable the timeout this way:
DefaultHttpClient httpclient = null;
HttpParams params = new BasicHttpParams();
HttpConnectionParams.setConnectionTimeout(params, 0);
HttpConnectionParams.setSoTimeout(params, 0);
HttpConnectionParams.setStaleCheckingEnabled(params, true);
httpclient = new DefaultHttpClient(params);
httpclient.execute(httpRequest, httpContext);
But it failed. The request dies in 15 seconds (possible default timeout?)
Does anyone know the best way to do this?
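(As an aside: on HttpClient 4.3+ the same "no timeout" settings can be expressed with the non-deprecated RequestConfig API instead of HttpParams. This is only a sketch, with 0 again meaning an infinite timeout:)
RequestConfig config = RequestConfig.custom()
        .setConnectTimeout(0)
        .setSocketTimeout(0)
        .setConnectionRequestTimeout(0)
        .build();
CloseableHttpClient client = HttpClients.custom()
        .setDefaultRequestConfig(config)
        .build();
client.execute(httpRequest, httpContext);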
I would suggest that you return data to the client before the timeout can occur. This may just be some bytes that say "working" to the client. By trickling the data out, you should be able to keep the client alive.