I'm trying to communicate with a RESTful API over SSL. The whole client application relies on a basic connection method that looks something like this:
URL url = null;
HttpsURLConnection connection = null;
BufferedReader bufferedReader = null;
InputStream is = null;
try {
    url = new URL(TARGET_URL);
    // Trust-all verifier: returning true disables hostname verification for every HTTPS connection
    HostnameVerifier hv = new HostnameVerifier() {
        public boolean verify(String urlHostName, SSLSession session) {
            return true;
        }
    };
    HttpsURLConnection.setDefaultHostnameVerifier(hv);
    connection = (HttpsURLConnection) url.openConnection();
    connection.setRequestMethod(requestType);
    connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    connection.setRequestProperty("Content-Language", "en-US");
    connection.setSSLSocketFactory(sslSocketFactory);
    is = connection.getInputStream();
    bufferedReader = new BufferedReader(new InputStreamReader(is));
    String line;
    StringBuffer lines = new StringBuffer();
    while ((line = bufferedReader.readLine()) != null) {
        lines.append(line).append(LINE_BREAKER);
    }
    return lines.toString();
} catch (Exception e) {
    System.out.println("Exception is :" + e.toString());
    return e.toString();
}
}
This works well, but is there a more efficient way? We've tried Apache HttpClient. It has an awesomely simple API, but when we compared the performance of the above code against Apache HttpClient with YourKit, the latter was creating more objects than the former. How do I optimize this?
I've used HttpClient (but not for HTTPS), but I think this applies to your case. The recommendation is to create a single HttpClient for your server and a new HttpGet object per call. You'll want to specify the multi-threaded client connection manager, with an appropriate number of connections to allocate per host and a max total number of connections, when you initialize your HttpClient.
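For illustration, here is a rough sketch of that setup using the pooling connection manager from HttpClient 4.3+ (the class name ApiClient, the pool sizes, and the URL handling are my own assumptions, not part of the original recommendation):
import java.io.IOException;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class ApiClient {
    // One shared client for the whole application, backed by a connection pool.
    private static final CloseableHttpClient CLIENT;

    static {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(20);           // max total connections across all hosts (illustrative value)
        cm.setDefaultMaxPerRoute(10); // max connections per host (illustrative value)
        CLIENT = HttpClients.custom().setConnectionManager(cm).build();
    }

    // A new HttpGet per call, executed against the shared client.
    public static String get(String url) throws IOException {
        HttpGet get = new HttpGet(url);
        try (CloseableHttpResponse response = CLIENT.execute(get)) {
            return EntityUtils.toString(response.getEntity());
        }
    }
}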
You could also try HttpCore instead of HttpClient. HttpCore is the set of low-level HTTP transport components that HttpClient is built upon. It has been specifically optimized for a low memory footprint as well as performance. It lacks the higher-level HTTP functionality provided by HttpClient (connection management, cookie and state management, authentication) but should be quite efficient in terms of memory utilization.
http://hc.apache.org/httpcomponents-core-ga/examples.html
Related
I want to test whether my application can handle many requests in parallel. To test this, I simply issued the request about 50 times, each request in its own thread.
The run() method looks something like this, with a bit more stuff.
public void run() {
    // sb and url are assumed to be available to the thread (e.g. fields); sb is declared here for completeness
    StringBuilder sb = new StringBuilder();
    try {
        URL getUrl = new URL(url);
        HttpURLConnection con = (HttpURLConnection) getUrl.openConnection();
        con.setRequestMethod("GET");
        BufferedReader br = new BufferedReader(new InputStreamReader(con.getInputStream()));
        String data;
        while ((data = br.readLine()) != null) {
            sb.append(data);
        }
        br.close();
    } catch (IOException e) {
        // run() cannot throw checked exceptions, so handle them here
        e.printStackTrace();
    }
}
The strange thing is that with more requests, the time per request drops significantly, to about 50%. Why is this happening? Does the Java HTTP connection cache the requests?
I'm a client side developer, moving over to server side development.
One common problem I'm encountering is the need to make one API call (say, to get an authentication token) and then a follow-up API call to get the data I want. Sometimes I also need to make two API calls in succession for data without an auth token.
Is there a common design pattern or Java library to address this issue? Or do I need to manually create the string of calls each time I need to do so?
Edit: I'm hoping for something that looks like this
CustomClassBasedOnJson myStuff = callAPI("url", getResponse("authURL"));
This would make a call to the "url" with data received from the "authURL".
The point here is that I'm stringing multiple url calls, using the result of one call to define the next one.
When doing server-side programming, it is acceptable for HTTP calls to be made synchronously.
Therefore the proper pattern is simply to make the first call, receive the result, and then use it in the next line. There is no need to separate the calls into different threads or asynchronous calls unless there is major processing happening between the HTTP calls.
For example:
JsonResponseEntry getJsonReportResponse() throws IOException {
    String sReportURL = "https://someurl.com/v2/report/report?" +
            "startts=" + getDateYesterday("ts") +
            "&endts=" + getDateNow("ts") +
            "&auth=" + getAuthCode();
    URL reportURL = new URL(sReportURL);
    URLConnection conn = reportURL.openConnection();
    BufferedReader buf = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    ObjectMapper mapper = new ObjectMapper();
    JsonNode reportResult = mapper.readTree(buf);
    return convertJSonNodeToJsonResponseEntry(reportResult);
}

String getAuthCode() throws IOException {
    String sReportURL = "https://someurl.com/auth";
    URL reportURL = new URL(sReportURL);
    HttpURLConnection conn = (HttpURLConnection) reportURL.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setDoInput(true);
    conn.connect();
    String urlParameters = "username=myUserName&password=mypassword";
    DataOutputStream wr = new DataOutputStream(conn.getOutputStream());
    wr.writeBytes(urlParameters);
    wr.flush();
    wr.close();
    BufferedReader buf = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    ObjectMapper mapper = new ObjectMapper();
    AuthResponse response = mapper.readValue(buf, AuthResponse.class);
    return response.toString();
}
The function getAuthCode() is called synchronously while building the URL that requires its response.
I am trying to make a GET request to an HTTPS endpoint. I am not sure if any special treatment is needed, but below is my code:
String foursquareURL = "https://api.foursquare.com/v2/venues/search?ll=" + latitude + "," + longitude + "&client_id=" + CLIENT_ID + "&client_secret=" + CLIENT_SECRET;
System.out.println("Foursquare URL is " + foursquareURL);
try {
    Log.v("HttpClient", "Preparing to create a request " + foursquareURL);
    URI foursquareURI = new URI(foursquareURL);
    HttpClient httpclient = new DefaultHttpClient();
    HttpResponse response = httpclient.execute(new HttpGet(foursquareURI));
    content = response.getEntity().getContent();
    BufferedReader br = new BufferedReader(new InputStreamReader(content));
    String strLine;
    String result = "";
    while ((strLine = br.readLine()) != null) {
        result += strLine;
    }
    //editTextShowLocation.setText(result);
    Log.v("result of the parser is", result);
} catch (Exception e) {
    Log.v("Exception", e.getLocalizedMessage());
}
I'm not sure if this approach will work on Android, but we saw this same problem in server-side Java using HttpClient with HTTPS URLs. Here is how we solved the problem:
First, we copied/adapted an implementation of the class EasySSLProtocolSocketFactory into our own code base. You can find the source here:
http://svn.apache.org/viewvc/httpcomponents/oac.hc3x/trunk/src/contrib/org/apache/commons/httpclient/contrib/ssl/EasySSLProtocolSocketFactory.java?view=markup
With that class in place, we then create our new HttpClient instances with:
HttpClient httpClient = new HttpClient();
mHttpClient = httpClient;
Protocol easyhttps = new Protocol("https", new EasySSLProtocolSocketFactory(), 443);
Protocol.registerProtocol("https", easyhttps);
Using the EasySSLProtocolSocketFactory allows your HttpClient to ignore any certificate failures/issues when making the request.
Take a look at AndroidHttpClient. It's essentially an alternative to DefaultHttpClient that registers some commonly used schemes (including HTTPS) for you behind the scenes when you create it.
You should then be able to execute HttpGets using an instance of this client, and it will handle the SSL for you if your URL indicates an 'https' scheme. You shouldn't need to mess around with registering your own SSL protocols/schemes.
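A minimal sketch of that usage (the user-agent string and URL below are placeholders of mine, not from the original answer):
// Create one client, reuse it for requests, and close it when done.
AndroidHttpClient client = AndroidHttpClient.newInstance("MyApp/1.0");
try {
    HttpGet get = new HttpGet("https://api.example.com/resource");
    HttpResponse response = client.execute(get);
    String body = EntityUtils.toString(response.getEntity());
    // ... use body ...
} catch (IOException e) {
    // handle the failure
} finally {
    client.close(); // release the client's resources
}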
I have been looking around at different ways to connect to URLs and there seem to be a few.
My requirements are to do POST and GET queries on a URL and retrieve the result.
I have seen
URL class
DefaultHttpClient class
HttpClient - apache commons
which method is best?
My rule of thumb and recommendation: don't introduce dependencies and third-party libraries if it's fairly easy to get by without them.
In this case I would say: if you need efficiency features such as connection reuse (multiple requests per established connection), session handling, or cookie support, go for HttpClient.
If you only need to perform an HTTP GET, this will suffice:
Getting Text from a URL
try {
    // Create a URL for the desired page
    URL url = new URL("http://hostname:80/index.html");

    // Read all the text returned by the server
    BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
    String str;
    while ((str = in.readLine()) != null) {
        // str is one line of text; readLine() strips the newline character(s)
    }
    in.close();
} catch (MalformedURLException e) {
} catch (IOException e) {
}
Sending a POST Request Using a URL
try {
    // Construct data
    String data = URLEncoder.encode("key1", "UTF-8") + "=" + URLEncoder.encode("value1", "UTF-8");
    data += "&" + URLEncoder.encode("key2", "UTF-8") + "=" + URLEncoder.encode("value2", "UTF-8");

    // Send data
    URL url = new URL("http://hostname:80/cgi");
    URLConnection conn = url.openConnection();
    conn.setDoOutput(true);
    OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream());
    wr.write(data);
    wr.flush();

    // Get the response
    BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    String line;
    while ((line = rd.readLine()) != null) {
        // Process line...
    }
    wr.close();
    rd.close();
} catch (Exception e) {
}
Both methods work great. (I've even done manual gets/posts with cookies.)
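For instance, manual cookie handling with URLConnection looks roughly like this (a sketch of mine, not from the original examples; the cookie value and URL are placeholders):
URLConnection conn = new URL("http://hostname:80/index.html").openConnection();
conn.setRequestProperty("Cookie", "JSESSIONID=abc123"); // send a cookie manually

// Read back any cookies the server sets
List<String> setCookies = conn.getHeaderFields().get("Set-Cookie");
if (setCookies != null) {
    for (String cookie : setCookies) {
        System.out.println("Server set cookie: " + cookie);
    }
}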
HttpClient is the way to go if your needs go past trivial URL connections (e.g. proxy authentication such as NTLM). There is at least a comparison here of the standard HTTP client functionality offered by the libraries provided by the JRE, Apache HttpClient, and others.
If you are using JDK versions up to and including 1.4 and have fairly large data in your POST requests, like large file uploads, the default HttpURLConnection that comes with the JRE is bound to run out of memory at some point, since it buffers the entire body before posting. Additionally, it does not support some advanced HTTP features like chunked transfer encoding.
So I'd recommend it only if your requests are trivial and you are not posting large data, as aioobe did.
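On newer JDKs (5 and later), HttpURLConnection can avoid that buffering if you enable a streaming mode before writing the body. A minimal sketch, assuming a placeholder URL and file name of my own choosing:
// Sketch only: the URL and file name are placeholders, not from the answer above.
HttpURLConnection conn = (HttpURLConnection) new URL("http://hostname/upload").openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setChunkedStreamingMode(8192); // stream the body in 8 KB chunks instead of buffering it all

OutputStream out = conn.getOutputStream();
InputStream in = new FileInputStream("large-upload.bin");
byte[] buffer = new byte[8192];
int n;
while ((n = in.read(buffer)) != -1) {
    out.write(buffer, 0, n);
}
in.close();
out.close();
int status = conn.getResponseCode(); // read the server's response status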
We are using the HttpURLConnection API to invoke a REST API on the same provider often (kind of an aggregation use case). We want to keep a pool of 5 connections always open to the provider host (always the same IP).
What is the proper solution? Here is what we tried:
System.setProperty("http.maxConnections", 5); // set globally only once
...
// everytime we need a connection, we use the following
HttpURLConnection conn = (HttpURLConnection) (new URL(url)).openConnection();
conn.setRequestMethod("GET");
conn.setDoInput(true);
conn.setDoOutput(false);
conn.setUseCaches(true);
...
BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
...
At this point we read the input stream until the BufferedReader returns no more bytes. What do we do after that point if we want to reuse the underlying connection to the provider? We were under the impression that if the input stream is completely read, the connection is then added back to the pool.
It's been working for several weeks this way, but today it stopped working, producing this exception: java.net.SocketException: Too many open files
We found numerous sockets in the CLOSE_WAIT state like this (by running lsof):
java 1814 root 97u IPv6 844702 TCP colinux:58517->123.123.254.205:www (CLOSE_WAIT)
Won't either conn.getInputStream().close() or conn.disconnect() completely close the connection and remove it from the pool?
We also had this problem on Java 5, and our solution was to switch to Apache HttpClient with a pooled connection manager.
The keep-alive implementation of Sun's URL handler for HTTP is very buggy. There is no maintenance thread to close idle connections.
Another, bigger problem with keep-alive is that you need to consume (drain) responses; otherwise the connection will be orphaned as well. Most people don't handle the error stream correctly. Please see my answer to this question for an example of how to read error responses correctly:
HttpURLConnection.getResponseCode() returns -1 on second invocation
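That pattern boils down to draining whichever stream the response has before letting go of the connection. A rough sketch (not the linked answer itself; the variable names, buffer size, and status check are my own assumptions):
// Drain the normal or error stream so the keep-alive connection can be reused.
HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
int status = conn.getResponseCode();
InputStream body = (status >= 400) ? conn.getErrorStream() : conn.getInputStream();
if (body != null) {
    byte[] buffer = new byte[4096];
    while (body.read(buffer) != -1) {
        // discard; we only need to drain the stream
    }
    body.close();
}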
From here:
The current implementation doesn't buffer the response body. Which means that the application has to finish reading the response body or call close() to abandon the rest of the response body, in order for that connection to be reused. Furthermore, current implementation will not try block-reading when cleaning up the connection, meaning if the whole response body is not available, the connection will not be reused.
I read this as saying that your solution should work, but also that you are free to call close() and the connection will still be reused.
The reference cited by disown was what really helped.
We know Apache HttpClient is better, but that would require another jar and we might use this code in an applet.
Calling HttpURLConnection.connect() was unnecessary. I'm not sure if it prevents connection reuse, but we took it out. It is safe to close the stream, but calling disconnect() on the connection will prevent reuse. Also, setting sun.net.http.errorstream.enableBuffering=true helps.
Here is what we ended up using:
System.setProperty("http.maxConnections", String.valueOf(CONST.CONNECTION_LIMIT));
System.setProperty("sun.net.http.errorstream.enableBuffering", "true");
...
int responseCode = -1;
HttpURLConnection conn = null;
BufferedReader reader = null;
try {
conn = (HttpURLConnection) (new URL(url)).openConnection();
conn.setRequestProperty("Accept-Encoding", "gzip");
// this blocks until the connection responds
InputStream in = new GZIPInputStream(conn.getInputStream());
reader = new BufferedReader(new InputStreamReader(in));
StringBuffer sb = new StringBuffer();
char[] buff = new char[CONST.HTTP_BUFFER_SIZE];
int cnt;
while((cnt = reader.read(buff)) > 0) sb.append(buff, 0, cnt);
reader.close();
responseCode = conn.getResponseCode();
if(responseCode != HttpURLConnection.HTTP_OK) throw new IOException("abnormal HTTP response code:"+responseCode);
return sb.toString();
} catch(IOException e) {
// consume error stream, otherwise, connection won't be reused
if(conn != null) {
try {
InputStream in = ((HttpURLConnection)conn).getErrorStream();
in.close();
if(reader != null) reader.close();
} catch(IOException ex) {
log.fine(ex);
}
}
// log exception
String rc = (responseCode == -1) ? "unknown" : ""+responseCode;
log.severe("Error for HttpUtil.httpGet("+url+")\nServer returned an HTTP response code of '"+rc+"'");
log.severe(e);
}