Most of the time, my HTTP requests work with no problem. However, occasionally they will hang.
My code is set up so that if the request succeeds (with a response code of 200 or 201), it calls screen.requestSucceeded(), and if the request fails, it calls screen.requestFailed().
When the request hangs, however, it does so before either of those methods is called. Is there something wrong with my code? Is there a best practice I should be following to prevent hangs?
The following is my code. I would appreciate any help. Thanks!
try {
    HttpConnection connection = (HttpConnection) Connector.open(url
            + connectionParameters);
    connection.setRequestMethod(method);
    connection.setRequestProperty("WWW-Authenticate",
            "OAuth realm=api.netflix.com");
    if (method.equals("POST") && postData != null) {
        connection.setRequestProperty("Content-Type",
                "application/x-www-form-urlencoded");
        connection.setRequestProperty("Content-Length",
                Integer.toString(postData.length));
        OutputStream requestOutput = connection.openOutputStream();
        requestOutput.write(postData);
        requestOutput.close();
    }
    int responseCode = connection.getResponseCode();
    System.out.println("RESPONSE CODE: " + responseCode);
    if (connection instanceof HttpsConnection) {
        HttpsConnection secureConnection = (HttpsConnection) connection;
        String issuer = secureConnection.getSecurityInfo()
                .getServerCertificate().getIssuer();
        UiApplication.getUiApplication().invokeLater(
                new DialogRunner("Secure Connection! Certificate issued by: "
                        + issuer));
    }
    if (responseCode != 200 && responseCode != 201) {
        screen.requestFailed("Unexpected response code: " + responseCode);
        connection.close();
        return;
    }
    String contentType = connection.getHeaderField("Content-type");
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    InputStream responseData = connection.openInputStream();
    byte[] buffer = new byte[20000];
    int bytesRead = 0;
    while ((bytesRead = responseData.read(buffer)) > 0) {
        baos.write(buffer, 0, bytesRead);
    }
    baos.close();
    connection.close();
    screen.requestSucceeded(baos.toByteArray(), contentType);
} catch (IOException ex) {
    screen.requestFailed(ex.toString());
}
Without a trace, I am just shooting in the dark, but try adding these two calls:
System.setProperty("http.keepAlive", "false");
connection.setRequestProperty("Connection", "close");
Keep-alive is a common cause of stale connections; these calls will disable it.
I don't see any issues with the code. It could be that your platform has an intermittent bug, or that the website is causing the connection to hang. Changing connection parameters, such as keep-alive, may help.
But even with a timeout set, sockets can hang indefinitely. A friend aptly demonstrated this to me some years ago by pulling out the network cable: my program just hung there forever, even with SO_TIMEOUT set to 30 seconds.
As a best practice, you can avoid hanging your application by moving all network communication to a separate thread. If you wrap each request up as a Runnable and queue these for execution, you maintain control over timeouts (the synchronization stays in Java code, rather than in a blocking native I/O call). You can interrupt your waiting thread after, say, 30 seconds to avoid stalling your app, and then either inform the user or retry the request. Because the request is a Runnable, you can remove it from the stalled thread's queue and schedule it to execute on another thread.
I see you have code to handle sending a "POST" type request. Make sure the POST data is fully written before connection.getResponseCode() is called. For a "POST" request, the sequence should be the following (see the sketch after these steps):
set the "Content-Length" header
set the "Content-Type" header (which you're doing)
get an OutputStream from the connection using connection.openOutputStream()
write the POST (form) data to the OutputStream
close the OutputStream
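For reference, the whole sequence looks something like this (a sketch against the Java ME HttpConnection API; the postForm helper name is mine, not from the question):

import java.io.IOException;
import java.io.OutputStream;
import javax.microedition.io.HttpConnection;

// All headers and body bytes must be sent before getResponseCode(),
// which is what actually triggers the HTTP exchange.
static int postForm(HttpConnection connection, byte[] postData) throws IOException {
    connection.setRequestMethod(HttpConnection.POST);
    connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    connection.setRequestProperty("Content-Length", Integer.toString(postData.length));
    OutputStream out = connection.openOutputStream();
    out.write(postData);  // write the POST (form) data
    out.close();          // close the stream before reading the response
    return connection.getResponseCode();
}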
I noticed this problem too, on BlackBerry OS 5.0. There is no way to reproduce it reliably. We ended up using an additional thread with wait/notify, along with a TimerTask.
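The idea, roughly, is a watchdog: a TimerTask that closes the connection if the request has not completed within a deadline, which forces the blocked read to fail with an IOException. A sketch of the technique (not our actual code; the 30-second deadline and helper name are mine):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Timer;
import java.util.TimerTask;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

static byte[] fetchWithWatchdog(String url) throws IOException {
    final HttpConnection connection = (HttpConnection) Connector.open(url);
    Timer watchdog = new Timer();
    watchdog.schedule(new TimerTask() {
        public void run() {
            try {
                connection.close(); // unblocks a read stuck in native I/O
            } catch (IOException ignored) {
            }
        }
    }, 30000L);
    try {
        InputStream in = connection.openInputStream();
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) > 0) {
            baos.write(buffer, 0, bytesRead);
        }
        return baos.toByteArray();
    } finally {
        watchdog.cancel();  // request finished (or failed); stop the timer
        connection.close();
    }
}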
Note: the code is in Scala. I'm using a BufferedReader to process a gzipped HTTP stream, iterating through each line to read the incoming data. The problem is that if the connection is ever reset due to a network I/O issue (the provider does weird things sometimes), I can see the connection staying open for up to 15 seconds before it times out, something I'd like to get down to 1 second. For some reason, our office provider resets connections every 11 hours.
Here's how I'm handling the connection:
val connection = getConnection(URL, USER, PASSWORD)
val inputStream = connection.getInputStream()
val reader = new BufferedReader(new InputStreamReader(new StreamingGZIPInputStream(inputStream), GNIP_CHARSET))
var line = reader.readLine()
while (line != null) {
  parseSupervisor ! ParseThis(line)
  line = reader.readLine()
}
throw new ParseStreamCollapseException
and here is getConnection defined:
private def getConnection(urlString: String, user: String, password: String): HttpURLConnection = {
  val url = new URL(urlString)
  val connection = url.openConnection().asInstanceOf[HttpURLConnection]
  connection.setReadTimeout(1000 * KEEPALIVE_TIMEOUT)
  connection.setConnectTimeout(1000 * 1)
  connection.setRequestProperty("Authorization", createAuthHeader(user, password))
  connection.setRequestProperty("Accept-Encoding", "gzip")
  connection
}
To summarize: I'm reading an HTTP stream line by line via java.io.BufferedReader. The keep-alive on the stream is 16 seconds, but to prevent further data loss I'd like to narrow detection down to 1-2 seconds; basically, I need to distinguish a stream that is momentarily quiet from a genuine network I/O failure. Some device in the middle is terminating the connection every 11 hours, and it would be nice to have a meaningful workaround to minimize data loss. The HttpURLConnection does not receive any "termination signal" on the connection.
Thanks!
Unfortunately, unless the network device that's killing the connection is closing it cleanly, you're not going to get any sort of notification that the connection is dead. The reason for this is that there is no way to tell the difference between a remote host that is just taking a long time to respond and a broken connection. Either way the socket is silent.
Again, assuming that the connection is just being severed, your only option to detect the broken connection more quickly is to decrease your timeout.
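For example, something like this (a sketch in Java rather than Scala, with hypothetical names; note that the timeout cannot usefully be shorter than the longest normal gap between lines, here the 16-second keep-alive, or it will fire on perfectly healthy connections):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

static void streamWithReconnect(String urlString, int readTimeoutMs) {
    while (true) {
        HttpURLConnection connection = null;
        try {
            connection = (HttpURLConnection) new URL(urlString).openConnection();
            connection.setConnectTimeout(1000);
            connection.setReadTimeout(readTimeoutMs); // detection limit for a dead peer
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream(), "UTF-8"));
            String line;
            while ((line = reader.readLine()) != null) {
                handleLine(line); // hand each line to the parser
            }
        } catch (SocketTimeoutException e) {
            // No data within readTimeoutMs: assume the connection is dead.
        } catch (IOException e) {
            // Connection reset or similar: fall through and reconnect.
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
        }
    }
}

static void handleLine(String line) { /* parseSupervisor ! ParseThis(line) */ }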
I am currently implementing a web proxy, but I have run into a problem. I can parse the request from the browser and make a new request just fine, but I seem to have a problem with the response: it keeps hanging inside my response loop.
try {
    serveroutput.write(request.getFullRequest());
    // serveroutput.newLine();
    serveroutput.flush();
    // serveroutput.close();
} catch (IOException e) {
    System.out.println("Writing to the server was unsuccessful");
    e.printStackTrace();
}
System.out.println("Write was successful...");
System.out.println("flushed.");
try {
    System.out.println("Getting a response...");
    response = new HttpResponse(serversocket.getInputStream());
} catch (IOException e) {
    System.out.println("Tried to read response from server but failed");
    e.printStackTrace();
}
System.out.println("Response was successful");
// response code
public HttpResponse(InputStream input) {
    busy = true;
    reader = new BufferedReader(new InputStreamReader(input));
    try {
        while (!reader.ready()); // wait for initialization
        String line;
        while ((line = reader.readLine()) != null) {
            fullResponse += "\r\n" + line;
        }
        reader.close();
        fullResponse = "\r\n" + fullResponse.trim() + "\r\n\r\n";
    } catch (IOException e) {
        e.printStackTrace();
    }
    busy = false;
}
You're doing a blocking, synchronous read on a socket. Web servers don't close their connections after sending you a page (if HTTP/1.1 is specified) so it's going to sit there and block until the webserver times out the connection. To do this properly you would need to be looking for the Content-Length header and reading the appropriate amount of data when it gets to the body.
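The fix looks something like this (a sketch; it assumes the status line and headers have already been parsed, and readBody is a hypothetical helper):

import java.io.IOException;
import java.io.InputStream;

// Read exactly contentLength body bytes instead of reading until the
// server closes a connection it intends to keep alive.
static byte[] readBody(InputStream in, int contentLength) throws IOException {
    byte[] body = new byte[contentLength];
    int off = 0;
    while (off < contentLength) {
        int n = in.read(body, off, contentLength - off);
        if (n == -1) {
            throw new IOException("connection closed after " + off + " body bytes");
        }
        off += n;
    }
    return body; // returns immediately; no waiting for a server timeout
}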
You really shouldn't be trying to re-invent the wheel; instead, use either the core Java HttpURLConnection or the Apache HttpClient to make your requests.
while (!reader.ready());
This line busy-waits, thrashing the CPU until the stream is ready for reading. Generally not a good idea.
You are making numerous mistakes here.
Using a spin loop calling ready() instead of just blocking in the subsequent read.
Using a Reader when you don't know that the data is text.
Not implementing the HTTP 1.1 protocol even slightly.
Instead of reviewing your code I suggest you review the HTTP 1.1 RFC. All you need to do to implement a naive proxy for HTTP 1.1 is the following:
Read one line from the client. This should be a CONNECT command naming the host you are to connect to. Read this with a DataInputStream, not a BufferedReader, and yes I know it's deprecated.
Connect to the target. If that succeeded, send an HTTP 200 back to the client. If it didn't, send whatever HTTP status is appropriate and close the client.
If you succeeded at (2), start two threads, one to copy all the data from the client to the target, as bytes, and the other to do the opposite.
When you get EOS reading one of those sockets, call shutdownOutput() on the other one.
If shutdownOutput() hasn't already been called on the input socket of this thread, just exit the thread.
If it has been called already, close both sockets and exit the thread.
Note that you don't have to parse anything except the CONNECT command; you don't have to worry about Content-Length; you just have to transfer the bytes and then propagate EOS correctly.
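Here is a sketch of steps 3 through 6, the byte-pumping half (the CONNECT parsing and status replies from steps 1 and 2 are elided, and the names are mine):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// One thread per direction, copying raw bytes; shutdownOutput() is used
// to propagate end-of-stream across the proxy.
static void tunnel(Socket client, Socket target) {
    pump(client, target);
    pump(target, client);
}

static void pump(final Socket from, final Socket to) {
    new Thread(new Runnable() {
        public void run() {
            byte[] buffer = new byte[8192];
            try {
                InputStream in = from.getInputStream();
                OutputStream out = to.getOutputStream();
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n);
                }
                to.shutdownOutput();            // step 4: forward EOS
                if (from.isOutputShutdown()) {  // step 6: both directions done
                    from.close();
                    to.close();
                }                               // step 5: otherwise just exit
            } catch (IOException e) {
                closeQuietly(from);
                closeQuietly(to);
            }
        }
    }).start();
}

static void closeQuietly(Socket s) {
    try { s.close(); } catch (IOException ignored) {}
}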
I have the following code in Java that sends an HTTP request to a web server and reads the response:
StringBuilder response = new StringBuilder(50000);
URL url2 = new URL(ServiceURL);
connection = (HttpURLConnection)url2.openConnection();
connection.setRequestMethod("POST");
//... (some more connection settings) ...
BufferedWriter wr = new BufferedWriter(new OutputStreamWriter(connection.getOutputStream(), "UTF-8"));
wr.write(Request);
wr.flush ();
wr.close ();
InputStream is = connection.getInputStream();
BufferedReader rd = new BufferedReader(new InputStreamReader(is));
int i = 0;
while ((i = rd.read()) > 0) {
    response.append((char) i);
}
It works for most cases, but I have a problem with one server that returns a rather large XML response (around 500 KB; I guess that is pretty large for just a bunch of text), where I keep getting a read timeout exception.
I believe it's not a network problem, because I tried making the same request using curl and the response arrived all right and pretty quickly, in something like two seconds.
When I looked at what was going on on the network (using Wireshark to capture the packets), I noticed that the TCP receive window on my computer fills up at some point. The TCP stack sometimes survives this; I can see the server sending TCP keep-alives to keep the connection up, but in the end the TCP connection just breaks down.
Could it be that the reading part of the code (appending the received response character-by-character) is slowing my code down? Is there a more efficient way to read an HTTP response?
Reading character by character is quite slow, yes. Try reading chunks at a time into a buffer:
char[] buf = new char[2048];
int charsRead;
while ((charsRead = rd.read(buf, 0, 2048)) > 0) {
    response.append(buf, 0, charsRead);
}
As Phil already said, reading the stream character by character is quite slow. I prefer using the readLine() method of BufferedReader:
StringBuilder response = new StringBuilder();
String line = "";
while ((line = rd.readLine()) != null) {
    response.append(line + System.getProperty("line.separator"));
}
If possible, I would consider using the Apache HTTP Client library. It is easy to use and very powerful in handling HTTP stuff.
http://hc.apache.org/httpcomponents-client-ga/
You should also keep in mind to set the socket and connection timeouts; this way you can control how long a connection is kept open (at least on your side of the connection).
And last but not least, always close your HTTP connections in a finally block after you have received the response; otherwise you may run into a "too many open files" problem.
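Putting those last two points together, something like this (a sketch using HttpURLConnection; the 10-second timeouts are arbitrary):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

static String fetch(String urlString) throws IOException {
    HttpURLConnection connection =
            (HttpURLConnection) new URL(urlString).openConnection();
    connection.setConnectTimeout(10000); // give up connecting after 10s
    connection.setReadTimeout(10000);    // give up if the server goes silent
    try {
        BufferedReader rd = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"));
        StringBuilder response = new StringBuilder();
        char[] buf = new char[2048];
        int charsRead;
        while ((charsRead = rd.read(buf, 0, buf.length)) > 0) {
            response.append(buf, 0, charsRead);
        }
        return response.toString();
    } finally {
        connection.disconnect(); // always release the connection
    }
}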
Hope this helps ;)
I have a problem where an external HTTP server that I need to POST large messages to is having OutOfMemory issues. My HTTP client code is not timing out.
It is possible to reproduce this behaviour by using kill -STOP to pause the HTTP server process (to undo, use kill -CONT).
Using the code below, I have found that if I keep my request small, the entire message is written to the output stream and getResponseCode() times out.
With a large message like the one below, the code ties up in the write to the output stream; I presume I have filled the socket buffer. The code then never times out.
What I am looking for is a way of controlling the timeout when writing the request.
I have tried something similar using Apache HttpClient and got a similar result.
I tried running the Java code below in a different thread and interrupting it myself, but the thread stays running.
I need to keep the streaming behaviour but I would appreciate any ideas into how I might be able to get the client code to time out.
Thanks,
PJ
URL url = new URL("http://unresponsive/path");
HttpURLConnection conn = (HttpURLConnection)url.openConnection();
conn.setDoInput(true);
conn.setDoOutput(true);
conn.setUseCaches(false);
conn.setConnectTimeout(10000);
conn.setFixedLengthStreamingMode(4 * 1000000);
conn.setRequestProperty("Content-Length", "4000000");
conn.setReadTimeout(10000);
conn.setRequestMethod("POST");
OutputStream os = conn.getOutputStream();
for (int i = 0; i < 1000000; i++) {
    if (i % 1000 == 0) {
        System.out.println("write: " + i);
    }
    os.write("test".getBytes("us-ascii"));
}
os.close();
System.out.println("response-code: " + conn.getResponseCode());
InputStream is = conn.getInputStream();
InputStreamReader isr = new InputStreamReader(is);
BufferedReader br = new BufferedReader(isr);
String line;
while ((line = br.readLine()) != null) {
    System.out.println(line);
}
is.close();
It appears that you are opening a connection and writing to the output stream. I think the confusion is about the roles of reading versus writing: you're not reading from an input stream when your code hangs, so the read timeout has no effect on the tie-up.
If you can find a way to time out the write, your code can be fixed that way; one option is sketched below.
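One approach that works in practice (a sketch; the deadline covers the whole request rather than each individual write, and behaviour can vary by platform): a watchdog that calls disconnect() on the connection when the deadline passes, which makes the blocked write fail with an IOException:

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Timer;
import java.util.TimerTask;

// interrupt() won't unblock a write stuck on a full socket buffer, but
// disconnect() from another thread forces the write to throw.
static void postWithDeadline(String urlString, byte[] body, long deadlineMs)
        throws IOException {
    final HttpURLConnection conn =
            (HttpURLConnection) new URL(urlString).openConnection();
    conn.setDoOutput(true);
    conn.setConnectTimeout(10000);
    conn.setReadTimeout(10000);
    conn.setFixedLengthStreamingMode(body.length);
    conn.setRequestMethod("POST");
    Timer watchdog = new Timer(true); // daemon timer thread
    watchdog.schedule(new TimerTask() {
        public void run() {
            conn.disconnect(); // aborts a write stuck on a full buffer
        }
    }, deadlineMs);
    try {
        OutputStream os = conn.getOutputStream();
        os.write(body);
        os.close();
        System.out.println("response-code: " + conn.getResponseCode());
    } finally {
        watchdog.cancel();
    }
}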
I want to recognize the end of a data stream in Java sockets. When I run the code below, it just gets stuck and keeps running (it hangs after printing the value 10).
I also want the program to download binary files, but the last byte is always different, so I don't know how to stop the while loop programmatically.
String host = "example.com";
String path = "/";
Socket connection = new Socket(host, 80);
PrintWriter out = new PrintWriter(connection.getOutputStream());
out.write("GET "+ path +" HTTP/1.1\r\nHost: "+ host +"\r\n\r\n");
out.flush();
int dataBuffer;
while ((dataBuffer = connection.getInputStream().read()) != -1)
    System.out.println(dataBuffer);
out.close();
Thanks for any hints.
Actually, your code is not correct.
In HTTP 1.0, each connection is closed after the response, and as a result the client can detect when the input has ended.
In HTTP 1.1 with persistent connections, the underlying TCP connection remains open, so a client can detect when a response ends in one of the following two ways:
1) The HTTP server sends a Content-Length header indicating the size of the response. The client can use this to tell when the response has been fully read.
2) The response is sent in chunked encoding, meaning that it comes in chunks prefixed with the size of each chunk. Using this information, the client can reconstruct the response from the chunks received from the server.
You should be using an HTTP Client library since implementing a generic HTTP client is not trivial (at all I may say).
To be specific, in the code you posted you should have followed one of the above approaches.
Additionally, you should read the headers line by line, since the header section of HTTP is line-terminated.
I.e. something like:
BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
String s = null;
while ((s = in.readLine()) != null) {
    // Read HTTP header
    if (s.isEmpty()) break; // no more headers
}
Sending a Connection: close header, as suggested by khachik, gets the job done (since closing the connection makes the end of the input detectable), but performance gets worse because each request starts a new connection.
It depends, of course, on what you are trying to do (whether you care about that or not).
You should use existing libraries for HTTP. See here.
Your code works as expected. The server doesn't close the connection, and dataBuffer never becomes -1. This happens because connections are kept alive in HTTP 1.1 by default. Use HTTP 1.0, or put a Connection: close header in your request.
For example:
out.write("GET "+ path +" HTTP/1.1\r\nHost: "+ host +"\r\nConnection: close\r\n\r\n");
out.flush();
int dataBuffer;
while ((dataBuffer = connection.getInputStream().read()) != -1)
    System.out.print((char) dataBuffer);
out.close();