I am building an Android app that will fire multiple HTTP requests (say, one request every second) to a server to fetch data. What best practices should I follow?
Should I create and close the client after each request, like the following?
CloseableHttpClient httpClient = HttpClientBuilder.create().build();
try {
    HttpPost request = new HttpPost("http://yoururl");
    StringEntity params = new StringEntity(json.toString());
    request.addHeader("content-type", "application/json");
    request.setEntity(params);
    httpClient.execute(request);
    // handle response here...
} catch (Exception ex) {
    // handle exception here
} finally {
    httpClient.close();
}
Or should I create a client initially, use it for all requests and then finally close it when I'm done with it?
The idea of closing your HttpClient is about releasing the allocated resources. Therefore, it depends on how often you plan on firing those HTTP requests.
Keep in mind that firing a request every 10 seconds is considered an eternity ;)
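In other words, for a request every second the long-lived client is the one to prefer. Below is a minimal sketch of the create-once-and-reuse pattern, written against the JDK 11 java.net.http client so it is self-contained; the same idea applies unchanged to Apache's CloseableHttpClient, which you would close once on application shutdown. The timeout value is an arbitrary placeholder.

```java
import java.net.http.HttpClient;
import java.time.Duration;

// A minimal sketch of the "create once, reuse everywhere" pattern.
// (Shown with the JDK 11 java.net.http client so the example is
// self-contained; the same idea applies to Apache's CloseableHttpClient.)
public final class SharedHttpClient {
    // One client instance for the whole application. The client manages an
    // internal connection pool, so reusing it avoids a fresh TCP (and
    // possibly TLS) handshake on every request.
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(10))
            .build();

    private SharedHttpClient() {}

    public static HttpClient get() {
        return CLIENT;
    }
}
```

Each request then goes through SharedHttpClient.get(); with the Apache client, the equivalent singleton would be closed explicitly in your shutdown path instead of per request.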
Related
I am using HttpClient within a servlet to make calls to a resource, which I return as the servlet's response after some manipulation.
My HttpClient uses PoolingHttpClientConnectionManager.
I create the client like so:
private CloseableHttpClient getConfiguredHttpClient() {
    return HttpClientBuilder
            .create()
            .setDefaultRequestConfig(config)
            .setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE)
            .setConnectionManagerShared(true)
            .setConnectionManager(connManager)
            .build();
}
I use this client within a try-with-resources block in the servlet's service method, so it is auto-closed. To stop the connection manager from being closed along with it, I set setConnectionManagerShared to true.
I have seen other code samples that do not close the HttpClient. Should I not be closing this resource?
Thanks
For httpcomponents version 4.5.x:
I found that you really do need to close the resource, as shown in the documentation: https://hc.apache.org/httpcomponents-client-4.5.x/quickstart.html
CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet("http://targethost/homepage");
CloseableHttpResponse response1 = httpclient.execute(httpGet);
try {
    System.out.println(response1.getStatusLine());
    HttpEntity entity1 = response1.getEntity();
    EntityUtils.consume(entity1);
} finally {
    response1.close();
}
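Since both CloseableHttpClient and CloseableHttpResponse implement java.io.Closeable, the same cleanup can be written more compactly with try-with-resources, which guarantees close() runs even when an exception is thrown. The sketch below demonstrates the pattern with stand-in classes so it runs without the HttpClient jar on the classpath; with the real API, the two resources in the try header would be HttpClients.createDefault() and httpclient.execute(httpGet).

```java
public class TryWithResourcesDemo {
    // Stand-ins for CloseableHttpClient / CloseableHttpResponse, used only so
    // this sketch compiles without the HttpClient jar. Both real classes
    // implement java.io.Closeable, which is all try-with-resources needs.
    static class FakeResponse implements java.io.Closeable {
        static boolean closed = false;
        String statusLine() { return "HTTP/1.1 200 OK"; }
        @Override public void close() { closed = true; }
    }

    static class FakeClient implements java.io.Closeable {
        static boolean closed = false;
        FakeResponse execute(String url) { return new FakeResponse(); }
        @Override public void close() { closed = true; }
    }

    public static void run() {
        // With the real API this header would read:
        // try (CloseableHttpClient httpclient = HttpClients.createDefault();
        //      CloseableHttpResponse response = httpclient.execute(httpGet)) { ... }
        try (FakeClient httpclient = new FakeClient();
             FakeResponse response = httpclient.execute("http://targethost/homepage")) {
            System.out.println(response.statusLine());
        }
        // Both resources are now closed, in reverse order of creation.
    }
}
```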
For other versions of httpcomponents, see other answers.
For older versions of httpcomponents (http://hc.apache.org/httpcomponents-client-4.2.x/quickstart.html):
You do not need to explicitly close the HttpClient. However (you may be doing this already, but it is worth noting), you should ensure that connections are released after method execution.
Edit: The ClientConnectionManager within the HttpClient is going to be responsible for maintaining the state of connections.
GetMethod httpget = new GetMethod("http://www.url.com/");
try {
    httpclient.executeMethod(httpget);
    Reader reader = new InputStreamReader(
            httpget.getResponseBodyAsStream(), httpget.getResponseCharSet());
    // consume the response entity and do something awesome
} finally {
    httpget.releaseConnection();
}
I am running the following code:
CloseableHttpClient httpclient = HttpClients.createDefault();
HttpGet httpGet = new HttpGet("http://10.0.0.22:8086/db/cadvisorDB/series?u=root&p=root&q=select%20max(memory_usage)%20from%20stats%20where%20container_name%20%3D%27execution_container_"+bench_list+"_"+i+"%27%20and%20memory_usage%20%3C%3E%200%20group%20by%20container_name");
//Thread.sleep(10000);
CloseableHttpResponse requestResponse = httpclient.execute(httpGet);
String response=EntityUtils.toString(requestResponse.getEntity());
System.out.println(response);
Output console:
[]
When I wait 30 seconds before firing the request, it works and I get the complete response (JSON with data points):
Thread.sleep(30000);
Is it possible, using the Apache Java client, to tell the client to wait until it gets a value different from "[]", i.e. a non-empty JSON?
Using timeouts does not solve the problem.
Thank you in advance
If the server simply needs more time to respond, then setting the timeouts will work.
HttpGet request = new HttpGet(url);
// set timeouts as you like
RequestConfig config = RequestConfig.custom()
        .setSocketTimeout(60 * 1000)
        .setConnectTimeout(20 * 1000)
        .setConnectionRequestTimeout(20 * 1000)
        .build();
request.setConfig(config);
To be specific: no, it is not possible simply using HttpClient "to tell the client to wait until getting a value different than" what it gets when the call is over. You have to program this yourself (in a loop or something).
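One way to sketch such a loop: the actual HTTP call is abstracted behind a Supplier&lt;String&gt; (in real code it would wrap httpclient.execute(httpGet) plus EntityUtils.toString(...)); the "[]" sentinel, the attempt count, and the delay are assumptions taken from the question.

```java
import java.util.function.Supplier;

public class PollUntilData {
    // Repeatedly invokes fetch (which would wrap the HttpClient call) until
    // it returns something other than the empty-result sentinel "[]",
    // sleeping between attempts. Returns null if maxAttempts is exhausted.
    public static String poll(Supplier<String> fetch, int maxAttempts, long delayMillis) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            String body = fetch.get();
            if (body != null && !body.equals("[]")) {
                return body;
            }
            try {
                Thread.sleep(delayMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        return null;
    }
}
```

With, say, poll(fetch, 30, 1000), the client would keep retrying for roughly the 30 seconds the question mentions before giving up.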
Does it make a difference if the sleep() is before HttpClients.createDefault() ?
Is it possible that your server at 10.0.0.22:8086 is just not ready when your code is executed? Is this server launched by the same app?
I also had the same issue, but the problem was two HTTP calls being made sequentially, so I put a Thread.sleep(2000) between them and it worked.
Please confirm whether your code makes two REST calls sequentially;
if so, you could place the Thread.sleep just before the second HTTP call.
I have been attempting to connect to a WCF service from an Android device. I have read a lot of blog posts that do not seem to be useful. One of the operations running on my WCF service is
[OperationContract]
[WebGet(UriTemplate = "write", ResponseFormat = WebMessageFormat.Json)]
string write();
This writes one entity to a database. When I enter the URL "10.0.0.14/serv/UserManagement.svc/write" in my phone's browser, I get the relevant message and it writes to the database with no problem. The problem arises when I attempt to consume the WCF service from an Android application. I have jumped between many different solution types and I am currently using
try {
    DefaultHttpClient httpClient = new DefaultHttpClient();
    URI uri = new URI("http://10.0.0.14/serv/UserManagement.svc/write");
    HttpGet httpget = new HttpGet(uri);
    httpget.setHeader("Accept", "application/json");
    httpget.setHeader("Content-type", "application/json; charset=utf-8");
    HttpResponse response = httpClient.execute(httpget);
    HttpEntity responseEntity = response.getEntity();
} catch (Exception e) {
    e.printStackTrace();
}
This does not work. I have added <uses-permission android:name="android.permission.INTERNET"/> to my manifest. In my LogCat there is a NetworkOnMainThreadException. How can I fix the problem?
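The exception means Android refuses to run network I/O on the UI thread, so the request has to move to a background thread. Below is a minimal sketch of just the threading part, with the HTTP call and the UI callback left as placeholders so the example is self-contained: in the real app, `work` would contain the DefaultHttpClient/HttpGet code above, and `callback` would post back to the UI thread (e.g. via runOnUiThread or a Handler).

```java
public class BackgroundCall {
    // Runs `work` off the calling thread and hands its result to `callback`.
    // Both parameters are placeholders: on Android, `work` would perform the
    // HTTP request and `callback` would deliver the result to the UI thread.
    public static Thread run(java.util.concurrent.Callable<String> work,
                             java.util.function.Consumer<String> callback) {
        Thread t = new Thread(() -> {
            try {
                callback.accept(work.call());
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        t.start();
        return t;
    }
}
```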
I'm building a simple web scraper and I need to fetch the same page a few hundred times. There's an attribute in the page that is dynamic and should change on each request. I've built a multithreaded HttpClient-based class to process the requests, and I'm using an ExecutorService to make a thread pool and run the threads. The problem is that the dynamic attribute sometimes doesn't change on each request, and I end up getting the same value in 3 or 4 subsequent threads. I've read a lot about HttpClient and I really can't find where this problem comes from. Could it be something about caching, or something like it?
Update: here is the code executed in each thread:
HttpContext localContext = new BasicHttpContext();
HttpParams params = new BasicHttpParams();
HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
HttpProtocolParams.setContentCharset(params, HTTP.DEFAULT_CONTENT_CHARSET);
HttpProtocolParams.setUseExpectContinue(params, true);

ClientConnectionManager connman = new ThreadSafeClientConnManager();
DefaultHttpClient httpclient = new DefaultHttpClient(connman, params);

HttpHost proxy = new HttpHost(inc_proxy, Integer.valueOf(inc_port));
httpclient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);

HttpGet httpGet = new HttpGet(url);
httpGet.setHeader("User-Agent",
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)");

String iden = null;
int timeoutConnection = 10000;
HttpConnectionParams.setConnectionTimeout(httpGet.getParams(), timeoutConnection);

try {
    HttpResponse response = httpclient.execute(httpGet, localContext);
    HttpEntity entity = response.getEntity();
    if (entity != null) {
        InputStream instream = entity.getContent();
        String result = convertStreamToString(instream);
        // System.out.printf("Result\n %s", result + "\n");
        instream.close();
        iden = StringUtils.substringBetween(result,
                "<input name=\"iden\" value=\"",
                "\" type=\"hidden\"/>");
        System.out.printf("IDEN:%s\n", iden);
        EntityUtils.consume(entity);
    }
} catch (ClientProtocolException e) {
    System.out.println("ClientProtocolException");
} catch (IOException e) {
    System.out.println("IOException");
}
HttpClient does not use a cache by default (when you use the DefaultHttpClient class only). It does so if you use CachingHttpClient, which is an HttpClient interface decorator enabling caching:
HttpClient client = new CachingHttpClient(new DefaultHttpClient(), cacheConfiguration);
It then analyzes the If-Modified-Since and If-None-Match headers in order to decide whether to perform the request to the remote server, or to return the result from its cache.
I suspect that your issue is caused by a proxy server standing between your application and the remote server.
You can test this easily with curl; execute some number of requests omitting the proxy:
#!/bin/bash
for i in {1..50}
do
    echo "*** Performing request number $i"
    curl -D - http://yourserveraddress.com -o $i -s
done
Then execute diff between all the downloaded files. All of them should have the differences you mentioned. After that, add the -x/--proxy <host[:port]> option to curl, execute the script again, and compare the files once more. If some responses are the same as others, then you can be sure that this is a proxy server issue.
Generally speaking, in order to test whether or not HTTP requests are being made over the wire, you can use a "sniffing" tool that analyzes network traffic, for example:
Fiddler ( http://fiddler2.com/fiddler2/ ) - I would start with this
Wireshark ( http://www.wireshark.org/ ) - more low level
I highly doubt HttpClient is performing caching of any sort (this would imply it needs to store pages in memory or on disk - not one of its capabilities).
While this is not an answer, it's a point to ponder: is it possible that the server (or some proxy in between) is returning cached content? If you are performing many requests (simultaneously or near-simultaneously) for the same content, the server may be returning cached content because it has decided that the information has not "expired" yet. In fact, the HTTP protocol provides caching directives for exactly this functionality. Here is a site that provides a high-level overview of the different HTTP caching mechanisms:
http://betterexplained.com/articles/how-to-optimize-your-site-with-http-caching/
I hope this gives you a starting point. If you have already considered these avenues then that's great.
You could try appending some unique dummy parameter to the URL on every request to try to defeat any URL-based caching (in the server, or somewhere along the way). It won't work if caching isn't the problem, or if the server is smart enough to reject requests with unknown parameters, or if the server is caching but only based on parameters it cares about, or if your chosen parameter name collides with a parameter the site actually uses.
If this is the URL you're using
http://www.example.org/index.html
try using
http://www.example.org/index.html?dummy=1
Set dummy to a different value for each request.
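A small helper along those lines, assuming the cache in question is keyed on the full URL; the parameter name "dummy" and the counter are arbitrary placeholders:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CacheBuster {
    private static final AtomicLong COUNTER = new AtomicLong();

    // Appends a unique dummy query parameter so each request has a distinct
    // URL, defeating URL-keyed caches. Handles URLs with or without an
    // existing query string.
    public static String bust(String url) {
        String sep = url.contains("?") ? "&" : "?";
        return url + sep + "dummy=" + COUNTER.incrementAndGet();
    }
}
```

Each request would then be built as, e.g., new HttpGet(CacheBuster.bust(url)).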
Is there any class for reading HTTP pages that returns a java.io.InputStream and whose timeout is reliable?
I tried java.net.URLConnection and it doesn't have a reliable timeout (it takes more time than the set timeout before it gives up). My code is here:
URLConnection con = url.openConnection();
con.setConnectTimeout(2000);
con.setReadTimeout(2000);
InputStream in = con.getInputStream();
I expect that the reason the timeout is not working for you is that you are setting it after the connection has been established, or you are using the wrong setter. It is also possible that you are using a "non-standard" implementation of URLConnection:
"Some non-standard implementation of this method ignores the specified timeout. To see the read timeout set, please call getReadTimeout()." (or getConnectTimeout())
If you posted the relevant part of your actual code, we could give you a better answer.
Alternatively, use the Apache HttpClient library.
You can use Apache HttpClient to read HTTP pages; it also has an HTTP parser. Check this for further reference about HttpClient. You can get an InputStream object using their API like this:
HttpClient httpclient = new DefaultHttpClient();
// Prepare a request object
HttpGet httpget = new HttpGet("http://www.apache.org/");
// Execute the request
HttpResponse response = httpclient.execute(httpget);
// Examine the response status
System.out.println(response.getStatusLine());
// Get hold of the response entity
HttpEntity entity = response.getEntity();
// If the response does not enclose an entity, there is no need
// to worry about connection release
if (entity != null) {
    InputStream instream = entity.getContent();
    // ... read from the stream, then close it to release the connection
    instream.close();
}
As for the timeout part, it largely depends on the network; beyond setting connect and read timeouts, there is not much you can do about it from your Java code.