java.net.SocketTimeoutException is not always thrown [duplicate] - java

Here is my code :
URL u = new URL("http://www.google.com");
URLConnection conn = u.openConnection();
conn.setConnectTimeout(3000);
conn.connect();
My network connection is sometimes unstable (I'm connected to the wireless router, but the router itself has no Internet access). When that happens, this code blocks for a long time and finally throws UnknownHostException. Why doesn't setConnectTimeout(3000) work in this case? How can I fix this?
Thanks!
------------update---------------
My guess is that conn.connect() performs a DNS lookup first, and there is no time limit on that operation. I've tried the Socket class and the problem remains; the timeout does not seem to apply to the DNS query.

I found a post that describes a workaround: use another thread to perform the DNS query so you can simulate a timeout:
http://thushw.blogspot.sg/2009/11/resolving-domain-names-quickly-with.html
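For reference, a minimal sketch of that thread-based workaround (not the exact code from the linked post; the host name and the 3-second budget are example values):
import java.net.InetAddress;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Run the DNS lookup on a worker thread so we can bound how long we wait for it.
ExecutorService executor = Executors.newSingleThreadExecutor();
try {
    Future<InetAddress> future = executor.submit(() -> InetAddress.getByName("www.google.com"));
    InetAddress address = future.get(3000, TimeUnit.MILLISECONDS); // throws TimeoutException if DNS hangs
    System.out.println(address);
} finally {
    executor.shutdownNow(); // abandon the lookup thread if it is still blocked
}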

Some non-standard implementation of this method may ignore the specified timeout.
See the documentation for setConnectTimeout.

Related

Java - url.openConnection() and httpsUrlConnection.connect() is running slow when first called

I found an interesting phenomenon when writing Java programs.
I created 3 HTTPS connections to 3 different URLs, and I found that the FIRST calls to url.openConnection() and httpsUrlConnection.connect() took nearly 300 ms and 1 s respectively, while the second and third calls took nearly 0 ms.
Is there a reason for this performance difference?
BTW, is there anything I can do to improve the performance?
FYI, all three of the HttpsURLConnection requests look like this (try-catch not shown):
URL url = new URL("https://www.google.com");
Utils.logTime(logger);
HttpsURLConnection httpsURLConnection = (HttpsURLConnection) url.openConnection();
Utils.logTime(logger);
httpsURLConnection.setRequestMethod("GET");
httpsURLConnection.setConnectTimeout(5 * 1000);
httpsURLConnection.setReadTimeout(5 * 1000);
httpsURLConnection.setRequestProperty(Utils.ACCEPT, Utils.ACCEPT_ALL);
httpsURLConnection.setRequestProperty(Utils.ACCEPT_ENCODING, Utils.GZIP);
httpsURLConnection.setRequestProperty(Utils.USER_AGENT, Utils.MOZILLA);
Utils.addCookiesToConnection(httpsURLConnection, cookieMap);
Utils.logTime(logger);
httpsURLConnection.connect();
Utils.logTime(logger);
As you may assume, Utils and cookieMap are a class and a HashMap I created myself, so they should not be the focus of the solution.
Any ideas? Thanks in advance.
The reason for the time difference could be that, on the first call, a socket connection needs to be established (from the source IP to the target IP and port). Once established, the same TCP connection can be reused; this is normal in network programming.
For better efficiency and more control over connection pooling, I would suggest considering Apache HttpClient.
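If you go the HttpClient route, a minimal sketch with a pooling connection manager might look like this (HttpClient 4.x assumed; the pool limits are arbitrary examples):
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(20);          // total connections kept in the pool
cm.setDefaultMaxPerRoute(5); // connections per target host

CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(cm)
        .build();

// The pooled client reuses an already-established connection for repeated requests to the same host.
try (CloseableHttpResponse response = client.execute(new HttpGet("https://www.google.com"))) {
    EntityUtils.consume(response.getEntity()); // read and release so the connection goes back to the pool
}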

Does setConnectTimeout not affect the gateway timeout?

I'm opening an HttpURLConnection and setting the connection timeout using its inherited setConnectTimeout method, but for one particular URL I'm getting a gateway timeout (a 504). I don't mind getting a gateway timeout as such, but I do object to it taking far longer than the connection timeout that I've set!
Does setConnectTimeout have no impact upon the gateway timeout? I couldn't see another intuitively-named method that I could use.
Thanks in advance.
You should also set the read timeout with setReadTimeout. If you got a 504, the connection itself was established fine; the delay comes from waiting too long to read something from it.
See more here: http://docs.oracle.com/javase/6/docs/api/java/net/URLConnection.html#setReadTimeout(int)
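For reference, a minimal sketch combining both timeouts on an HttpURLConnection (the values are just examples):
import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection conn = (HttpURLConnection) new URL("http://example.com/").openConnection();
conn.setConnectTimeout(3000); // limit on establishing the TCP connection
conn.setReadTimeout(5000);    // limit on waiting for data once connected
conn.connect();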

Timeout for UnknownHostException with established connection but no internet

I have an interesting issue.
I have an application where inside of it, I'm trying to account for the condition where the phone is connected to a router, but that router is not connected to the internet.
I've tried multiple methods of establishing the connection, but NONE of the timeouts account for this condition.
I've tried:
HttpParams httpParameters = new BasicHttpParams();
int timeoutSocket = 1000;
HttpConnectionParams.setSoTimeout(httpParameters, timeoutSocket);
HttpConnectionParams.setConnectionTimeout(httpParameters, timeoutSocket);
I've also tried:
HttpURLConnection huc = (HttpURLConnection)serverAddress.openConnection();
huc.setDoOutput(true);
huc.setRequestMethod("PUT"); // For amazon
//huc.setRequestMethod("POST"); // For regular server.
huc.setRequestProperty("Content-Type", "text/plain");
huc.setRequestProperty("Content-Length", String.valueOf(bytes));
huc.setFixedLengthStreamingMode(bytes);
huc.setConnectTimeout(1000); // Establishing connection timeout
huc.setReadTimeout(1000);
But in BOTH cases, when I execute the request or get the output stream, it takes about 20 seconds before an UnknownHostException is thrown.
I would like that reduced to a maximum of 5 seconds before reaching that conclusion.
Is there any way to do this?
Cheers
Through lots of searching and with the help of this link, I've found a solid solution that seems to be working so far.
My understanding of the conclusion is that when I use methods like:
DataOutputStream wr = new DataOutputStream(huc.getOutputStream());
or
InputStream is = ucon.getInputStream();
BufferedInputStream bis = new BufferedInputStream(is);
(uploading or downloading)
there are a lot of things happening under the hood, including a DNS lookup. With no connectivity, but while connected to a router, this takes about 20 seconds to finally reach an UnknownHostException.
However, if I add this line of code first before the above code is executed:
InetAddress iAddr = InetAddress.getByName("myserverName.com");
then it gives me the proper SocketTimeoutException and responds exactly how I would hope/expect. That line apparently caches the DNS lookup, and the timeouts then work as expected.
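A minimal sketch of that ordering, reusing the hypothetical myserverName.com from above (the URL path, payload, and timeout values are placeholders):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetAddress;
import java.net.URL;

// Resolve the host name up front; per the note above, once the lookup is cached
// the connect/read timeouts below behave as expected.
InetAddress iAddr = InetAddress.getByName("myserverName.com");

HttpURLConnection huc = (HttpURLConnection) new URL("http://myserverName.com/upload").openConnection();
huc.setDoOutput(true);
huc.setConnectTimeout(1000);
huc.setReadTimeout(1000);
try (OutputStream out = huc.getOutputStream()) {
    out.write(new byte[0]); // payload goes here
}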
Also, something to note: once the failure is cached, executing the code above will take as long to fail as the previous code did. (I can't tell you exactly what triggers this.) But if you connect to the internet again and then re-enter the connected-but-no-connectivity state, the earlier success will be cached and the timeouts will again work properly.
This wasn't particularly easy to find or figure out, so I hope this helps somebody.
Cheers,
You could implement a CountDownTimer with a limit of 5000 ms; see http://dewful.com/?p=3

In Java, how close the connection and free the port/socket using HttpURLConnection?

I'm using HttpURLConnection to download pages in Java. I forgot to release the connections, and I think this was causing problems in my system (a web crawler).
Now I've done some tests and I can see that, after disconnecting, some connections still show up as TIME_WAIT in the output of the netstat command on Windows.
How do I free these connections immediately?
Example code:
private HttpURLConnection connection;

boolean openConnection(String url) {
    try {
        URL urlDownload = new URL(url);
        connection = (HttpURLConnection) urlDownload.openConnection();
        connection.setInstanceFollowRedirects(true);
        connection.connect();
        connection.disconnect();
        return true;
    } catch (Exception e) {
        System.out.println(e);
        return false;
    }
}
In some implementations, if you have called getInputStream or getOutputStream, you need to ensure that those streams are closed. Otherwise, the connection can stay open even after calling disconnect.
EDIT:
This is from the J2SE docs for HttpURLConnection [emphasis added]:
Calling the disconnect() method may close the underlying socket if a persistent connection is otherwise idle at that time.
And this is from the Android docs:
To reduce latency, this class may reuse the same underlying Socket for multiple request/response pairs. As a result, HTTP connections may be held open longer than necessary. Calls to disconnect() return the socket to a pool of connected sockets. This behavior can be disabled by setting the "http.keepAlive" system property to "false" before issuing any HTTP requests. The "http.maxConnections" property may be used to control how many idle connections to each server will be held.
I don't know what platform you are using, but it could have similar behavior.
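For illustration, a minimal sketch of draining and closing the stream before disconnecting (example URL; the keep-alive property is only relevant if you want to disable pooling entirely, as the Android docs above describe):
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Optional: disable connection reuse entirely; must be set before the first HTTP request.
System.setProperty("http.keepAlive", "false");

HttpURLConnection connection = (HttpURLConnection) new URL("http://example.com/").openConnection();
connection.connect();
try (InputStream in = connection.getInputStream()) {
    while (in.read() != -1) {
        // drain the response so the underlying socket can be reused or closed cleanly
    }
}
connection.disconnect();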
The TIME_WAIT state is imposed by TCP, not by Java. It typically lasts a couple of minutes. This is normal.

BindException: address already in use on a client socket?

I've got a client-server tiered architecture with the client making RPC-like requests to the server. I'm using Tomcat to host the servlets, and the Apache HttpClient to make requests to it.
My code goes something like this:
private static final HttpConnectionManager CONN_MGR = new MultiThreadedHttpConnectionManager();
final GetMethod get = new GetMethod();
final HttpClient httpClient = new HttpClient(CONN_MGR);
get.getParams().setCookiePolicy(CookiePolicy.IGNORE_COOKIES);
get.getParams().setParameter(HttpMethodParams.USER_AGENT, USER_AGENT);
get.setQueryString(encodedParams);
int responseCode;
try {
    responseCode = httpClient.executeMethod(get);
} catch (final IOException e) {
    ...
}
if (responseCode != 200)
    throw new Exception(...);
String responseHTML;
try {
    responseHTML = get.getResponseBodyAsString(100*1024*1024);
} catch (final IOException e) {
    ...
}
return responseHTML;
It works great in a lightly-loaded environment, but when I'm making hundreds of requests per second I start to see this -
Caused by: java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:336)
at java.net.Socket.bind(Socket.java:588)
at java.net.Socket.<init>(Socket.java:387)
at java.net.Socket.<init>(Socket.java:263)
at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:80)
at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:122)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
Any thoughts on how to fix this? I'm guessing it's something to do with the client trying to reuse the ephemeral client ports, but why is this happening / how can I fix it?
Thanks!
A very good discussion of the problem you are running into can be found here. On the Tomcat side, by default it will use the SO_REUSEADDR option, which will allow the server to reuse sockets which are in TIME_WAIT. Additionally, the Apache http client will by default use keep-alives, and attempt to reuse connections.
Your problem seems to be caused by not calling releaseConnection on the method (GetMethod). This is required for the connection to be reused. Otherwise, the connection remains open until the garbage collector closes it or the server drops the keep-alive; in either case it won't be returned to the pool.
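A minimal sketch of that pattern using the GetMethod from the question (exception handling elided):
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

final GetMethod get = new GetMethod();
final HttpClient httpClient = new HttpClient(CONN_MGR);
try {
    int responseCode = httpClient.executeMethod(get);
    String responseHTML = get.getResponseBodyAsString(100 * 1024 * 1024);
    // ... use responseCode and responseHTML ...
} finally {
    get.releaseConnection(); // returns the connection to the MultiThreadedHttpConnectionManager
}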
With hundreds of connections a second, and without knowing how long your connections take to open, do their thing, close, and get recycled, I suspect this is a problem you're simply going to have. One thing you can do is catch the BindException in your try block, handle the bind-unsuccessful case there, and wrap the whole call in a while loop driven by a flag that indicates whether the request succeeded. Off the top of my head:
boolean hasBound = false;
while (!hasBound) {
    try {
        responseCode = httpClient.executeMethod(get);
        hasBound = true; // only stop retrying once the request actually went through
    } catch (BindException e) {
        // do anything you want in the bind-unsuccessful case
    } catch (final IOException e) {
        ...
    }
}
Update with a question: what are the maximum total and per-host numbers of connections allowed by your MultiThreadedHttpConnectionManager? In your code, that'd be:
CONN_MGR.getParams().getDefaultMaxConnectionsPerHost();
CONN_MGR.getParams().getMaxTotalConnections();
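If those defaults turn out to be too low, they can be raised on the same params object (the numbers below are arbitrary examples):
CONN_MGR.getParams().setDefaultMaxConnectionsPerHost(50);
CONN_MGR.getParams().setMaxTotalConnections(200);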
Thus, you've fired more requests than there are TCP/IP ports available to be opened. I don't use HttpClient, so I can't go into detail about it, but in theory there are three solutions for this particular problem:
Hardware based: add another NIC (network interface card).
Software based: close connections directly after use and/or increase the connection timeout.
Platform based: increase the number of TCP/IP ports which are allowed to be opened. This may be OS-specific and/or NIC-driver-specific. The absolute maximum is 65535, of which several may already be reserved/in use (e.g. port 80).
So it turns out the problem was that one of the other HttpClient instances accidentally wasn't using the MultiThreadedHttpConnectionManager I instantiated, so I effectively had no rate limiting at all. Fixing this problem fixed the exception being thrown.
Thanks for all the suggestions, though!
Even if you invoke HttpClientUtils.closeQuietly(client), if your code reads the content from the HttpResponse entity, e.g. InputStream contentStream = httpResponse.getEntity().getContent(), then you should close that input stream as well; only then is the HttpClient connection closed properly.
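A minimal sketch of that clean-up order with HttpClient 4.x (the target URL is a placeholder):
import java.io.InputStream;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.utils.HttpClientUtils;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

CloseableHttpClient client = HttpClients.createDefault();
CloseableHttpResponse response = client.execute(new HttpGet("http://example.com/"));
try (InputStream contentStream = response.getEntity().getContent()) {
    // ... read the content ...
} finally {
    HttpClientUtils.closeQuietly(response); // close the response first
    HttpClientUtils.closeQuietly(client);   // and finally the client itself
}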
