I have some code that downloads a file using org.apache.http.client.HttpClient. Now my IDE tells me that there is a "Potential resource leak: 'client' may not be closed". The code in question is:
HttpClient client = HttpClientBuilder.create().build();
HttpResponse response = client.execute(request);
HttpEntity entity = response.getEntity();
I did some research and found EntityUtils.consume(entity), but this doesn't solve the resource leak for client.
So my question is, is this really a resource leak and if yes, how do I close it properly?
As I couldn't find any other way and the Eclipse IDE didn't offer any other quick-fix, I tried the only proposed "fix", which was to merge all three lines into one like this:
HttpEntity entity = HttpClientBuilder.create().build().execute(request).getEntity();
I am not sure whether this actually solves the resource leak, but at least Eclipse seems to think so.
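For reference: HttpClientBuilder.create().build() returns a CloseableHttpClient (HttpClient 4.3+), so the usual way to close everything properly is try-with-resources. A minimal sketch, where request stands for the original request object:

try (CloseableHttpClient client = HttpClientBuilder.create().build();
     CloseableHttpResponse response = client.execute(request)) {
    HttpEntity entity = response.getEntity();
    // ... work with the entity here ...
    EntityUtils.consume(entity); // make sure the stream is fully consumed
}

Closing the response releases the underlying connection, and closing the client shuts down its connection manager.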
Related
I'm trying to make a little utility that will synchronise data between two servers. Most of the calls there are REST calls with JSON, so I decided to use Apache HttpClient for this.
There is, however, a section where I need to upload a file. I'm trying to do this as multipart form data with the MultipartEntityBuilder, but I encounter a "Content too long" problem. (I tried to gzip the contents of the file too, but I'm still going over the limit.)
Here's my java code:
HttpPost request = new HttpPost(baseUrl+URL);
MultipartEntityBuilder builder = MultipartEntityBuilder.create();
// create the upload file params
builder.addTextBody("scanName", "Test upload");
builder.addBinaryBody("myfile", f);
HttpEntity params = builder.build();
request.setEntity(params);
request.addHeader("content-type","multipart/form-data");
HttpResponse response = httpClient.execute(request);
Are there better alternatives that I should be using for the file upload part? I'm also going to download files from one of the servers. Will I hit a similar issue when I try to handle those responses?
Is there something I'm doing wrong?
I tried your code and sent a file of about 33 MB successfully, so I think your problem is one of the following:
The HTTP client you created has a request-size limitation; in this case you need to change the client's configuration or use another client.
Somewhere in your code you call the HttpEntity.getContent() method. For multipart requests this method has a 25 kB limitation; in that case you need to use writeTo(OutputStream) instead of getContent() (see the sketch after this list).
In the comments you mentioned Swagger, but I don't understand what that means here. If you use a Swagger-generated API, the problem may be in the generated code and you would need to fix the generation logic (or something like that; I have never used Swagger).
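To illustrate the second point, here is a minimal sketch of serializing a multipart entity with writeTo(OutputStream); the field names and the file are placeholders, not the original poster's values:

HttpEntity entity = MultipartEntityBuilder.create()
        .addTextBody("scanName", "Test upload")
        .addBinaryBody("myfile", new File("scan.dat")) // placeholder file
        .build();
ByteArrayOutputStream out = new ByteArrayOutputStream();
// getContent() throws ContentTooLongException for multipart bodies over 25 kB;
// writeTo(OutputStream) streams the body at any size.
entity.writeTo(out);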
I hope my answer helps you.
These days I'm struggling with a quite weird issue regarding Apache HttpClient and threads.
The point is that I have an HttpClient shared by all the threads, and they use it to execute an HttpPut request to upload a small file (approx. 8 KB). With a small number of threads everything is all right and the times are good (200-600 milliseconds), but when we start increasing the number of concurrent threads the times become awful (8 seconds).
We checked the server to ensure the problem wasn't there. Using JMeter with the same load (1000 threads in a second) we got response times of milliseconds!
The implementation uses a thread-safe connection manager:
PoolingHttpClientConnectionManager httpConnectionManager = new PoolingHttpClientConnectionManager();
httpConnectionManager.setMaxTotal(5000);
httpConnectionManager.setDefaultMaxPerRoute(5000);
HttpClient httpClient = HttpClientBuilder.create()
.setConnectionManager(httpConnectionManager)
.build();
And the threads run the following code:
HttpPut put = new HttpPut(urlStr);
put.setConfig(RequestConfig.custom()
.setExpectContinueEnabled(true)
.setStaleConnectionCheckEnabled(false)
.setRedirectsEnabled(true).build());
put.setEntity(new FileEntity(new File("image.tif")));
put.setHeader("Content-Type", "image/tiff");
put.setHeader("Connection", "keep-alive");
HttpResponse response = httpClient.execute(put, HttpClientContext.create());
It looks as if there were a shared resource that has a huge impact under high load.
Looking at the source code of Apache JMeter, I don't see any relevant differences with respect to this code.
Any idea guys?
You need to turn on debugging on the client side in order to do the following:
Verify that the pool of 5000 is actually being used to any great depth. The logger will display the changing totals for connections remaining available in the pool and the number of the pool entry currently in use.
Verify that you immediately clean up and return each entry to the pool. Remember to close your resources (streams used to access the response, entity objects).
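For the first point, a minimal sketch of enabling connection-manager debug logging, assuming commons-logging's SimpleLog is on the classpath (log4j or logback configuration works analogously):

// Route commons-logging to SimpleLog and turn on connection-manager debug output.
System.setProperty("org.apache.commons.logging.Log",
        "org.apache.commons.logging.impl.SimpleLog");
System.setProperty("org.apache.commons.logging.simplelog.showdatetime", "true");
System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http.impl.conn", "debug");

For the second point, the request handling should look something like this: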
CloseableHttpResponse response;
case PUT:
    HttpPut httpPut = new HttpPut(url);
    httpPut.setProtocolVersion(new ProtocolVersion("HTTP", 1, 1));
    httpPut.setConfig(this.config);
    httpPut.setEntity(new StringEntity(data, ContentType.APPLICATION_JSON));
    response = httpClient.execute(httpPut, context);
    httpPut.releaseConnection();
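With HttpClient 4.3+ an equivalent pattern (a sketch, not the code above) is to consume the entity and close the response, which returns the connection to the pool:

try (CloseableHttpResponse response = httpClient.execute(httpPut, context)) {
    EntityUtils.consume(response.getEntity()); // drain the body so the connection can be reused
}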
I am developing a web app that needs to query an ontology through a REST API.
If I call the API through the browser, it opens a pop-up "Save As" through which I can save the file.
This is because the header of the response contains:
Content-Disposition: attachment; filename = query-result.srx
The problem is that I would like to receive the file within my web-app without using the browser.
The web app is written in Java, and I use Apache HttpClient to send and receive HTTP requests and responses:
CloseableHttpClient httpClient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet(uri);
CloseableHttpResponse httpResponse = httpClient.execute(httpGet);
If I try to get the entity's content:
httpResponse.getEntity().getContent()
it returns a useless value.
Is this something that you can do with this library, or should I use another one?
I found another question similar to mine but no one answered.
java-javascript-read-content-disposition-file-content
Thanks to all who answer me!
I realized that the error was in the query I used with the REST API, so the operations I did in Java were correct. With the call
httpResponse.getEntity().getContent()
you can read the content that is returned, even if the response is described as a file attachment in the Content-Disposition header.
Thanks to @Julian Reschke
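For completeness, a minimal sketch of saving such a response under the filename suggested by Content-Disposition; the uri variable and the fallback name are placeholders:

try (CloseableHttpClient httpClient = HttpClients.createDefault();
     CloseableHttpResponse httpResponse = httpClient.execute(new HttpGet(uri))) {
    String fileName = "query-result.srx"; // fallback name
    Header disposition = httpResponse.getFirstHeader("Content-Disposition");
    if (disposition != null && disposition.getElements().length > 0) {
        NameValuePair param = disposition.getElements()[0].getParameterByName("filename");
        if (param != null) {
            fileName = param.getValue();
        }
    }
    try (InputStream in = httpResponse.getEntity().getContent()) {
        Files.copy(in, Paths.get(fileName), StandardCopyOption.REPLACE_EXISTING);
    }
}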
We have the following code, which we later replaced with the HttpHead method, as we only need to pull back the header info of our web pages. After the change we noticed that, on average, HttpHead took longer to return than HttpGet for the same set of web pages. Is it normal? What could be wrong here?
HttpClient httpclient = new DefaultHttpClient();
// the time it takes to open the TCP connection
httpclient.getParams().setParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, this.timeout);
// timeout when the server does not send data
httpclient.getParams().setParameter(CoreConnectionPNames.SO_TIMEOUT, this.timeout);
// the GET method (the HEAD variant used new HttpHead(url) instead)
HttpGet httpget = new HttpGet(url);
HttpResponse response = httpclient.execute(httpget);
Is it normal?
It certainly seems a bit peculiar.
What could be wrong here?
It is difficult to say. It would seem that the strange behavior is most likely on the server side. I would check the following:
Write a micro-benchmark that repeatedly GETs and HEADs the same page (see the sketch below) to make sure the performance difference is real and not an artifact of the way you measured it.
Use a packet logger to look at what is actually being sent and received.
Check the server logs.
Profile the server code under load using your micro-benchmark.
One possible explanation is that the HEAD is loading data from a (slow) database or file system. The following GET could then be faster because the data has already been cached. (It could be explicit caching in the server code, query caching in the back-end database, or file-system caching.) You could test for this by seeing whether a GET is slower when not preceded by a HEAD.
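A rough sketch of such a micro-benchmark, using the newer HttpClients API (the URL and run count are placeholders):

import java.util.function.Supplier;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpHead;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class GetVsHeadBench {
    public static void main(String[] args) throws Exception {
        String url = "http://example.com/page"; // placeholder
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            System.out.println("GET  avg ms: " + avgMillis(client, () -> new HttpGet(url)));
            System.out.println("HEAD avg ms: " + avgMillis(client, () -> new HttpHead(url)));
        }
    }

    static long avgMillis(CloseableHttpClient client, Supplier<HttpUriRequest> req) throws Exception {
        final int runs = 50; // placeholder run count
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            try (CloseableHttpResponse response = client.execute(req.get())) {
                EntityUtils.consume(response.getEntity()); // null-safe; HEAD responses have no body
            }
        }
        return (System.nanoTime() - start) / runs / 1_000_000;
    }
}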
I used to have this code working with my Tomcat server:
HttpRequestBase targetRequest = ...;
HttpResponse targetResponse = httpclient.execute(targetRequest);
HttpEntity entity = targetResponse.getEntity();
However, when I migrated to Google App Engine, I can't use this code anymore. So I read a bit and found that I need different code to achieve this.
So I have this code:
URLFetchService fetcher = URLFetchServiceFactory.getURLFetchService();
HTTPResponse targetRespose = fetcher.fetch(targetRequest); // Error
HttpEntity entity = targetResponse.getEntity();
However, it's obvious that there's an error in the fetcher.fetch call.
All I need to accomplish is to have the same HttpEntity using the App Engine approach. Any way to work this out?
org.apache.http.HttpRequest and com.google.appengine.api.urlfetch.HTTPRequest are two totally different classes from two different libraries, so you cannot just exchange one for the other.
If you'd like to use Apache HttpClient on GAE, it can be done with some workarounds: see here and here.
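Alternatively, staying entirely within the urlfetch API looks roughly like this (a sketch; the URL is a placeholder, and you get raw bytes rather than an HttpEntity):

import java.net.URL;
import com.google.appengine.api.urlfetch.HTTPMethod;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

URLFetchService fetcher = URLFetchServiceFactory.getURLFetchService();
HTTPRequest targetRequest = new HTTPRequest(new URL("http://example.com/data"), HTTPMethod.GET);
HTTPResponse targetResponse = fetcher.fetch(targetRequest);
byte[] body = targetResponse.getContent(); // raw response body instead of an HttpEntity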