Would the following be an appropriate way of dealing with a 503 response code in Java networking? What does this code do, specifically the calls to disconnect() and the assignment to null?
URL url = new URL(some url);
HttpURLConnection h = (HttpURLConnection) url.openConnection();
int x = h.getResponseCode();
while (x == 503)
{
    h.disconnect();
    h = null;
    h = (HttpURLConnection) url.openConnection();
    x = h.getResponseCode();
}
The disconnect() closes the underlying TCP socket.
Setting the local variable to null immediately before reassigning it accomplishes nothing whatsoever.
There should be a sleep in that loop, with an interval that increases on every failure, and a limited number of retries.
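A minimal sketch of that advice, with illustrative retry and delay values (the class and helper names are mine, not from the question):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: retry on 503 with exponential backoff and a retry cap.
// MAX_RETRIES and BASE_DELAY_MS are illustrative values, not requirements.
class Retry503 {
    static final int MAX_RETRIES = 5;
    static final long BASE_DELAY_MS = 500;

    // Pure helper: the delay doubles on every failed attempt (0-based).
    static long backoffDelay(int attempt) {
        return BASE_DELAY_MS << attempt;   // 500, 1000, 2000, ...
    }

    static HttpURLConnection openWithRetry(URL url)
            throws IOException, InterruptedException {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            if (conn.getResponseCode() != 503) {
                return conn;               // success or a non-retryable status
            }
            conn.disconnect();             // release the socket before retrying
            Thread.sleep(backoffDelay(attempt));
        }
        throw new IOException("still 503 after " + MAX_RETRIES + " attempts");
    }
}
```

The sleep prevents hammering a server that is already telling you it is overloaded, and the cap guarantees the loop terminates.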
"Appropriate" depends on what you want it to do. To make something fail-safe, it would be better to repeat until success is achieved, rather than only handling the 503 scenario.
Simplest example: loop until a 200 (success) code comes back.
(Better would be to abstract that out into methods and classes, and to use OOP and unit tests where possible.)
private fun downloadAPKStream(): InputStream? {
    val url = URL(this.url)
    val connection = url.openConnection() as HttpURLConnection
    connection.requestMethod = "GET"
    connection.connect()
    connection.connectTimeout = 5000
    fileSize = connection.contentLength
    val inputStream = connection.inputStream
    return inputStream
}
I'm using this method to download an APK file. But if the internet is slow, then due to the 5000 ms timeout the download gets stuck partway through without completing. If I comment out that line and don't set connection.connectTimeout at all, it usually runs fine but sometimes hangs indefinitely. What should I do to make it download files of any size, even over a slow connection?
You've got the meaning of the timeout wrong. It is not the maximum allowed time for the whole (network, in this case) operation, but the maximum allowed time of inactivity, after which the operation is considered stalled and fails. So you should set the timeout to some sane value that makes sense in real life. As the value is in milliseconds, 5000 is not that value, because it's just 5 seconds: any small network hiccup and your connection gets axed. Set it to something higher, like 30 seconds or a minute.
Also note that this is the connection timeout only. It means you must be able to establish the protocol connection to the remote server within that time, but it has nothing to do with the data transfer itself. Data transfer is a separate process that comes next, once the connection is established. For the data-transfer timeout (which should definitely be set higher) you need to use setReadTimeout().
Finally, you must set the connection timeout prior to calling connect(), otherwise it makes no sense, as it is already too late; that is what you have in your code now.
PS: use DownloadManager instead.
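To make the ordering concrete, here is a Java sketch of the same fix (the 30s/60s values are examples, not prescriptions): both timeouts are configured before any connection is attempted.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: configure BOTH timeouts before connect(). openConnection() does not
// touch the network yet, so this is the right place to set them.
class TimeoutSetup {
    static HttpURLConnection configure(URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(30_000); // time allowed to establish the connection
        conn.setReadTimeout(60_000);    // max inactivity between reads, not total time
        // conn.connect() would come only after both timeouts are set
        return conn;
    }
}
```

The read timeout bounds gaps between data chunks, so a slow but steadily progressing download is never cut off by it.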
I am going to read from a socket in Java. Here is what I am going to do:
System.out.println("Start Reading");

/* bab is the socket connector, and readLine() is the method below: */
/*
public String readLine() throws IOException
{
    String a = inStream.readLine();
    return a;
}
*/
for (int j = 0; j < 9; j++)
{
    response = bab.readLine();
    System.out.println(response);
}
I see a long delay (2-3 seconds) between printing "Start Reading" and the first line of the response. But when I request the same URL with Firefox, it responds quickly (about 20 ms). What is the problem? And how can I solve it?
I suspect the reason is that the server doesn't send the line delimiter for some time, so the readLine() method waits. I bet if you just do a readByte() it would be quick.
As Firefox or any other browser wouldn't read line by line, it doesn't affect them.
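If inStream supports byte-level reads, the idea looks like this hypothetical helper (names are mine), which returns data as soon as bytes arrive instead of waiting for a line terminator:

```java
import java.io.IOException;
import java.io.InputStream;

// Sketch: read byte-by-byte. read() returns as soon as a single byte is
// available, whereas readLine() blocks until a line terminator shows up.
class ByteReader {
    static String readAvailable(InputStream in, int max) throws IOException {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < max; i++) {
            int b = in.read();   // blocks only until ONE byte arrives
            if (b == -1) break;  // end of stream
            sb.append((char) b);
        }
        return sb.toString();
    }
}
```

This is only a diagnostic aid; for real parsing you would still buffer, but it shows whether the delay comes from waiting on a delimiter.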
Firefox is probably caching the response and is therefore able to display it very quickly to you. I suggest you clear the cache on Firefox and time it again.
If you are using a domain name for the call then Firefox will also cache the DNS lookup which could save time in Firefox whereas making the call in Java could require a DNS lookup.
If you are using Windows then download Fiddler which will allow you to monitor the HTTP connection and give you a better idea of what is happening.
I'm making an http request. I'm on a platform (android) where network operations often fail because the network connection might not be immediately available. Therefore I'd like to try the same connection N times before completely failing. Was thinking of something like this:
DefaultHttpClient mHttpClient = ...;

public HttpResponse runHttpRequest(HttpRequestBase httpRequest)
    throws IOException
{
    IOException last = null;
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            HttpResponse response = mHttpClient.execute(httpRequest);
            int statusCode = response.getStatusLine().getStatusCode();
            if (statusCode == 200) {
                return response;
            }
        } catch (IOException e) {
            httpRequest.abort();
            last = e;
        }
    }
    throw last;
}
I'm mostly worried about the connection being left in some invalid state on subsequent retries. In other words: do I need to completely recreate httpRequest? Should I avoid calling httpRequest.abort() in the catch block and only call it on the final failure?
Thanks
The documentation does not mention that such a thing will occur, although you'd have to try it. More importantly, though, there are some things that you should consider with your code...
You should probably expose the number of retries, allowing the caller to specify this value.
You should only retry if an exception was thrown; you currently retry unless you get a 200. However, if you get a 404, for example, this doesn't mean your request failed in the sense that the network failed; rather, you made a successful round trip to the server, and the server simply doesn't have the requested resource, so it really doesn't make sense to retry in such a case.
As-is, you might suppress all sorts of different types of exceptions. It might make sense to record all the exceptions that occurred in a List and return some sort of result object which contains the response (possibly null if all attempts failed) in addition to a list of all exceptions. Otherwise, you throw some arbitrary exception from the set of exceptions that occurred, possibly obscuring the real failure.
Right now you just hammer away with the same request, over and over again... if there is congestion, you are just adding to it. And if your IP address was banned for too much activity, you are probably going to be adding to that... any sort of retry logic should have a back-off behavior where there is some amount of waiting between retries and that interval increases with each failure.
An HttpRequestRetryHandler seems like it might be helpful here.
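The points above can be combined in a small generic retry helper. This is a sketch with made-up names (RetryResult, Retrier), not an Apache HttpClient API: it lets the caller pick the attempt count, retries only on exceptions, records every failure, and backs off between attempts.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

// Result object: holds the value (null if every attempt failed) plus
// every exception that occurred, so no failure is silently discarded.
class RetryResult<T> {
    T value;
    final List<Exception> failures = new ArrayList<>();
}

class Retrier {
    static <T> RetryResult<T> run(Callable<T> task, int attempts, long baseDelayMs)
            throws InterruptedException {
        RetryResult<T> result = new RetryResult<>();
        for (int i = 0; i < attempts; i++) {
            try {
                result.value = task.call();
                return result;            // success: stop retrying
            } catch (Exception e) {
                result.failures.add(e);   // keep every failure, not just the last
                if (i < attempts - 1) {
                    Thread.sleep(baseDelayMs << i);  // back-off: delay doubles
                }
            }
        }
        return result;                    // value stays null; failures has details
    }
}
```

The HTTP call itself would be the Callable, so a non-200 status can be handled inside it as a normal (non-retried) result rather than an exception.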
I'd recommend using AOP and Java annotations from jcabi-aspects (I'm one of its developers):
@RetryOnFailure(attempts = 3, delay = 5)
public Object load(URL url) throws IOException {
    return url.openConnection().getContent();
}
I'm trying to connect to a URL that I know will exist, but I don't know when it will become available.
I don't have access to the server, so I can't change anything there to receive an event.
My current code is this:
URL url = new URL(urlName);
for (int j = 0; j < POOLING && disconnected; j++) {
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    int status = connection.getResponseCode();
    if (status == HttpURLConnection.HTTP_OK || status == HttpURLConnection.HTTP_NOT_MODIFIED) {
        // Some work
    } else {
        // wait 3s
        Thread.sleep(3000);
    }
}
Java is not my strongest skill, and I'm not sure whether this code is good from a performance point of view.
Am I opening a new connection every 3 seconds, or is the connection reused?
If I call disconnect(), I ensure that no connections stay open in the loop, but will that hurt performance?
Suggestions? What is the fastest/best way to find out whether a URL exists?
1) Do use disconnect(); you don't want numerous open connections you aren't using. Releasing resources you no longer need is basic practice in any language.
2) I don't know whether opening and closing a new network connection every 3 seconds will pollute system resources; the only way to check is to try.
3) You may want to watch for ConnectException if by "URL [does not] exist" you mean the server is down.
This code is okay from a performance point of view; it will create a new connection each time. Anyway, if you have a Thread.sleep(3000) in your loop, you shouldn't have to worry about performance ;)
If you're concerned about connection usage on the server side, you can look into Apache HttpClient; it has a lot of features, and I think it handles keep-alive connections by default.
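For the "does this URL exist" polling itself, one common trick (my suggestion, not from the answers above) is to issue a HEAD request, so no response body is transferred at all:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: probe a URL cheaply with HEAD. Class and method names are
// illustrative; timeouts are example values.
class UrlProbe {
    // Pure status check, matching the statuses the question accepts.
    static boolean indicatesExists(int status) {
        return status == HttpURLConnection.HTTP_OK
            || status == HttpURLConnection.HTTP_NOT_MODIFIED;
    }

    static boolean exists(URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("HEAD");  // ask for headers only, no body
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(5_000);
            return indicatesExists(conn.getResponseCode());
        } finally {
            conn.disconnect();              // free the socket between polls
        }
    }
}
```

Some servers mishandle HEAD, so if results look wrong, fall back to GET and simply ignore the body.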
private String indexPage(URL currentPage) throws IOException {
    String content = "";
    is = currentPage.openStream();
    content = new Scanner(is).useDelimiter("\\Z").next();
    return content;
}
This is the function with which I'm currently crawling webpages. The line that causes the problem is:
content = new Scanner( is ).useDelimiter( "\\Z" ).next();
If the webpage doesn't answer, or takes a long time to answer, my thread just hangs at the above line. What's the easiest way to abort this function if it takes longer than 5 seconds to fully load that stream?
Thanks in advance!
Instead of struggling with a separate watcher thread, it might be enough for you (although it is not exactly an answer to your requirement) to enable connect and read timeouts on the network connection, e.g.:
URL url = new URL("...");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setConnectTimeout(5000);
conn.setReadTimeout(10000);
InputStream is = conn.getInputStream();
This example will fail if it takes more than 5 seconds (5000ms) to connect to the server or if you have to wait more than 10 seconds (10000ms) between any content chunks which are actually read. It does not however limit the total time you need to retrieve the page.
You can close the stream from another thread.
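A sketch of that approach, using a scheduled watchdog (class and method names are mine): closing the stream from another thread makes the blocked read fail with an IOException instead of hanging forever.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: arrange for the stream to be closed after a deadline. A read()
// or Scanner.next() blocked on that stream then throws rather than hangs.
class StreamWatchdog {
    static void closeAfter(final InputStream is, long delayMs) {
        final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();
        timer.schedule(() -> {
            try {
                is.close();          // makes the blocked reader throw
            } catch (IOException ignored) {
                // nothing useful to do if close itself fails
            } finally {
                timer.shutdown();    // one-shot timer, release its thread
            }
        }, delayMs, TimeUnit.MILLISECONDS);
    }
}
```

In real use you would keep the ScheduledFuture returned by schedule() and cancel it once the read completes normally, so a successful read isn't cut off afterwards.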
Google's recently released guava-libraries have some classes that offer similar functionality:
TimeLimiter:
Produces proxies that impose a time limit on method calls to the proxied object. For example, to return the value of target.someMethod(), but substitute DEFAULT_VALUE if this method call takes over 50 ms, you can use this code ...
Have a look at FutureTask...
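A hedged sketch of that approach (TimedFetch is an illustrative name): submit the blocking call to a worker thread and bound the wait with Future.get, which is what FutureTask gives you under the hood.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: run the blocking read on a worker thread and give up after a
// deadline. The Callable stands in for the Scanner call in the question.
class TimedFetch {
    static String fetchWithTimeout(Callable<String> task, long timeoutMs)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(task);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);   // interrupts the worker thread
            return null;           // or rethrow, as the caller prefers
        } finally {
            pool.shutdownNow();
        }
    }
}
```

One caveat: a thread blocked inside InputStream.read() often does not respond to interruption, so in practice you combine this with read timeouts or with closing the stream, as in the other answers.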
Try interrupting the thread; many blocking calls in Java will return when they receive an interrupt.
In this case, content should be empty and the thread's interrupted status (Thread.currentThread().isInterrupted()) should be true.