I'm making an HTTP request on Android, where network operations often fail because the network connection might not be immediately available. I'd therefore like to try the same request N times before failing completely. I was thinking of something like this:
DefaultHttpClient mHttpClient = ...;

public HttpResponse runHttpRequest(HttpRequestBase httpRequest)
        throws IOException
{
    IOException last = null;
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            HttpResponse response = mHttpClient.execute(httpRequest);
            int statusCode = response.getStatusLine().getStatusCode();
            if (statusCode == 200) {
                return response;
            }
        } catch (IOException e) {
            httpRequest.abort();
            last = e;
        }
    }
    throw last;
}
I'm mostly worried about the connection being left in some state that is invalid on subsequent retries. In other words: do I need to completely recreate httpRequest, and should I avoid calling httpRequest.abort() in the catch block and only call it on the final failure?
Thanks
The documentation does not mention that such a thing will occur, although you'd have to try it to be sure. More importantly, though, there are some things you should reconsider in your code:
You should probably expose the number of retries, allowing the caller to specify this value.
You should only retry if an exception was thrown; currently you retry unless you get a 200. If you get a 404, for example, the request did not fail in the sense that the network failed: you made a successful round trip to the server, and the server simply doesn't have the requested resource, so it doesn't make sense to retry in that case.
As-is, you might suppress all sorts of different types of exceptions. It might make sense to record all the exceptions that occurred in a List and return some sort of result object which contains the response (possibly null if all attempts failed) in addition to a list of all exceptions. Otherwise, you throw some arbitrary exception from the set of exceptions that occurred, possibly obscuring failure.
Right now you hammer away with the same request, over and over again. If there is congestion, you are just adding to it, and if your IP address was banned for too much activity, you are probably making that worse too. Any retry logic should have back-off behavior: wait some amount of time between retries, and increase that interval with each failure.
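For illustration, a minimal back-off sketch building on the method from the question; it assumes the mHttpClient field and at least one attempt, and attempts/baseDelayMs are made-up parameter names, not part of the original code:

public HttpResponse runWithBackoff(HttpRequestBase request, int attempts, long baseDelayMs)
        throws IOException, InterruptedException {
    IOException last = null;
    for (int i = 0; i < attempts; i++) {
        try {
            return mHttpClient.execute(request);
        } catch (IOException e) {
            request.abort();
            last = e;
            if (i < attempts - 1) {
                // wait 1x, 2x, 4x, ... the base delay before the next attempt
                Thread.sleep(baseDelayMs << i);
            }
        }
    }
    throw last;
}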
An HttpRequestRetryHandler seems like it might be helpful here.
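If you are on Apache HttpClient 4.x, such a handler can be plugged into the DefaultHttpClient from the question. A minimal sketch; the factory class name and the 3-attempt default are just illustrative:

import java.io.IOException;
import org.apache.http.client.HttpRequestRetryHandler;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.protocol.HttpContext;

public final class RetryingClientFactory {

    // Builds a client that transparently retries transient I/O failures.
    public static DefaultHttpClient create(final int maxAttempts) {
        DefaultHttpClient client = new DefaultHttpClient();
        client.setHttpRequestRetryHandler(new HttpRequestRetryHandler() {
            @Override
            public boolean retryRequest(IOException exception, int executionCount,
                    HttpContext context) {
                // only I/O failures reach here; non-200 responses are not retried
                return executionCount <= maxAttempts;
            }
        });
        return client;
    }
}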
I'd recommend using AOP and Java annotations from jcabi-aspects (I'm one of its developers):
@RetryOnFailure(attempts = 3, delay = 5)
public String load(URL url) {
    return url.openConnection().getContent().toString();
}
Related
I've got a servlet which returns a PDF file to the client web browser.
We do not want to risk the server being paralyzed when the number of requests gets too high.
We would like an application-level (programmatic) way to set a limit on the number of concurrent requests, and to return an error message to the browser when the limit is reached. We need to do it at the application level because we have different servlet containers in development (Tomcat) and production (WebSphere).
I must emphasize that I want to limit the number of requests, not sessions; a user can send multiple requests to the server with the same session.
Any ideas?
I've thought about using a static counter to keep track of the number of requests, but that would raise a race condition.
I'd suggest writing a simple servlet Filter. Configure it in your web.xml to apply to the path whose concurrent requests you want to limit. The code would look something like this:
public class LimitFilter implements Filter {

    private int limit = 5;
    private int count;
    private final Object lock = new Object();

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        try {
            boolean ok;
            synchronized (lock) {
                ok = count++ < limit;
            }
            if (ok) {
                // let the request through and process as usual
                chain.doFilter(request, response);
            } else {
                // handle limit case, e.g. return status code 429 (Too Many Requests)
                // see https://www.rfc-editor.org/rfc/rfc6585#page-3
                ((HttpServletResponse) response).sendError(429, "Too Many Requests");
            }
        } finally {
            synchronized (lock) {
                count--;
            }
        }
    }
}
Alternatively, you could put this logic directly into your HttpServlet; it's just a bit cleaner and more reusable as a Filter. You might want to make the limit configurable through web.xml rather than hard coding it.
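For example, the Filter could read the limit in init(). A minimal sketch, assuming an init-param named "limit" in the filter's web.xml declaration (the parameter name is just an assumption):

public void init(FilterConfig config) throws ServletException {
    String configured = config.getInitParameter("limit");
    if (configured != null) {
        limit = Integer.parseInt(configured);
    }
}

public void destroy() { }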
Ref.:
Check definition of HTTP status code 429.
You can use RateLimiter. See this article for explanation.
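Assuming the RateLimiter meant here is Guava's com.google.common.util.concurrent.RateLimiter, a minimal sketch of a rate-based variant of the filter (it limits requests per second rather than concurrent requests); the class name and the 50-per-second figure are illustrative:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;
import com.google.common.util.concurrent.RateLimiter;

public class RateLimitFilter implements Filter {

    // roughly 50 requests per second across the whole application
    private final RateLimiter rateLimiter = RateLimiter.create(50.0);

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        if (rateLimiter.tryAcquire()) {
            chain.doFilter(request, response);
        } else {
            ((HttpServletResponse) response).sendError(429, "Too Many Requests");
        }
    }

    public void init(FilterConfig config) { }
    public void destroy() { }
}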
You might want to have a look at Semaphore.
Semaphores are often used to restrict the number of threads that can access some (physical or logical) resource.
Or, even better, try to sort it out with the server settings; that would of course be server-dependent.
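A minimal sketch of the Semaphore approach inside a Filter (the class name and the limit of 5 are illustrative):

import java.io.IOException;
import java.util.concurrent.Semaphore;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class SemaphoreLimitFilter implements Filter {

    // allow at most 5 requests to be processed concurrently
    private final Semaphore permits = new Semaphore(5);

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        if (permits.tryAcquire()) {
            try {
                chain.doFilter(request, response);
            } finally {
                permits.release();
            }
        } else {
            ((HttpServletResponse) response).sendError(429, "Too Many Requests");
        }
    }

    public void init(FilterConfig config) { }
    public void destroy() { }
}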
I've thought about using a static counter to keep track of the number of requests, but that would raise a race condition.
If you use an AtomicInteger for the counter, you will not have race condition problems.
Another way would be to use the Java Executor Framework (available since Java 1.5). It lets you limit the number of running threads and block new ones until a thread is free (see the sketch after the pseudo code below).
But I think the counter would work and be the easiest solution.
Attention: release the counter in a finally block!
// pseudo code: reserve a slot with compare-and-set, release it in a finally block
final AtomicInteger counter;
...
while (true) {
    int v = counter.get();
    if (v >= max) return FAILURE;
    if (counter.compareAndSet(v, v + 1)) break;
}
try {
    doStuff();
} finally {
    counter.decrementAndGet();
}
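And a minimal sketch of the Executor idea mentioned above, using java.util.concurrent: a pool capped at 5 threads with no queue, so anything beyond the limit is rejected immediately (handleRequest() is a placeholder for the real work):

import java.util.concurrent.*;

ExecutorService pool = new ThreadPoolExecutor(
        5, 5, 0L, TimeUnit.MILLISECONDS,
        new SynchronousQueue<Runnable>(),
        new ThreadPoolExecutor.AbortPolicy());

try {
    pool.submit(new Runnable() {
        public void run() {
            handleRequest();   // placeholder for the actual request processing
        }
    });
} catch (RejectedExecutionException e) {
    // over the limit: send the error message back to the browser here
}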
If you are serving static files, it's unlikely that the server will crash. The bottleneck would be the network throughput, and it degrades gracefully: when more requests come in, each still gets served, just a little more slowly.
If you set a hard limit on total requests, remember to set a limit on requests per IP. Otherwise, it's easy for one bad guy to issue N requests, deliberately read the responses very slowly, and totally clog your service. This works even if he's on a dialup and your server network has a vast throughput.
In my code I have a lot of code like the following. I am wondering whether it's a bad thing for my server and whether it will cause the instance to restart.
if (opLoginId == loginId) {
    datastore.delete(key);
    return 0;
} else {
    throw new WebApplicationException(Response.Status.UNAUTHORIZED);
}
Unless these are being caught at some higher level (say, to turn them into the right HTTP response), yes, you're killing your instance.
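One common "higher level" in a JAX-RS application is an ExceptionMapper that turns the exception into the HTTP response it already carries. A minimal sketch, assuming a JAX-RS runtime is in use (class name and registration details may vary):

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class WebApplicationExceptionMapper
        implements ExceptionMapper<WebApplicationException> {

    @Override
    public Response toResponse(WebApplicationException e) {
        // reuse the status the code above put into the exception (e.g. 401)
        return e.getResponse();
    }
}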
What is the correct pattern for handling an OptimisticLockException (OLE) in a (REST) web service? This is what I'm doing now, for example:
protected void doDelete(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    ...
    ...
    ...
    try {
        try {
            em.getTransaction().begin();
            // ... remove the entity
            em.getTransaction().commit();
        } catch (RollbackException e) {
            if (e.getCause() instanceof OptimisticLockException) {
                try {
                    CLog.e("optimistic lock exception, waiting to retry ...");
                    Thread.sleep(1000);
                } catch (InterruptedException ex) {
                }
                doDelete(request, response);
                return;
            }
        }
        // ... write response
    } catch (NoResultException e) {
        response.sendError(HttpServletResponse.SC_NOT_FOUND, e.getMessage());
        return;
    } finally {
        em.close();
    }
}
Any time you see a sleep in the code, there's a good chance it's incorrect. Is there a better way to handle this?
Another approach would be to send the failure back to the client immediately, but I'd rather not make them worry about it; the correct thing seems to be to do whatever is required to make the request succeed on the server, even if it takes a while.
If you get an optimistic locking exception, it means that some other transaction has committed changes to entities you were trying to update/delete. Since the other transaction has committed, retrying immediately might have a good chance to succeed.
I would also make the method fail after N attempts, rather than waiting for a StackOverflowError to happen.
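A minimal sketch of bounding the retries instead of recursing; MAX_ATTEMPTS and the "re-read and remove" placeholder are illustrative, the em field is assumed from the original, and the NoResultException handling and em.close() are omitted for brevity:

private static final int MAX_ATTEMPTS = 3;

protected void doDelete(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
        try {
            em.getTransaction().begin();
            // ... re-read the entity (to pick up the new version) and remove it
            em.getTransaction().commit();
            // ... write response
            return;
        } catch (RollbackException e) {
            if (!(e.getCause() instanceof OptimisticLockException)
                    || attempt == MAX_ATTEMPTS) {
                response.sendError(HttpServletResponse.SC_CONFLICT, e.getMessage());
                return;
            }
            // otherwise loop and retry immediately: the competing transaction has
            // already committed, so the next attempt sees fresh state
        }
    }
}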
The "politically correct" answer in rest, is to return an HTTP 409 (Conflict) witch matches perfectly with the idea of optimistic locking. Your client should manage it, probably by retring a few seconds later.
I wouldn't add the logic to retry in your app, as your client will already handle situations when you return a 40X code.
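A minimal sketch of that approach in the doDelete above: map the optimistic lock failure to 409 and let the client decide when to retry (response and em are the objects from the original code):

try {
    em.getTransaction().begin();
    // ... remove the entity
    em.getTransaction().commit();
} catch (RollbackException e) {
    if (e.getCause() instanceof OptimisticLockException) {
        response.sendError(HttpServletResponse.SC_CONFLICT,
                "Resource was modified by another request; please retry.");
        return;
    }
    throw e;
}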
By the way, catch (InterruptedException e) {} is always a bad idea: the system has asked your computation to cancel, and you are ignoring it. In the context of a web service, an InterruptedException would be another good reason to signal an error to the client.
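If you keep a sleep at all, a minimal sketch of a less harmful handler (it assumes the response object from the surrounding doDelete):

try {
    Thread.sleep(1000);
} catch (InterruptedException ex) {
    Thread.currentThread().interrupt();   // restore the interrupt flag
    response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
            "Request processing was interrupted");
    return;
}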
If you're just going to keep retrying until it works anyway, why not just disable optimistic locking? You should let the caller know that they made a decision based on outdated information! If you're in control of both sides, an appropriate 4xx code can be returned; if it's public, it can be friendlier to arbitrary clients to just return 500. (Of course, then you perpetuate the under-use of appropriate response codes... such a dilemma.)
I've got a client-server tiered architecture with the client making RPC-like requests to the server. I'm using Tomcat to host the servlets, and the Apache HttpClient to make requests to it.
My code goes something like this:
private static final HttpConnectionManager CONN_MGR =
        new MultiThreadedHttpConnectionManager();

final GetMethod get = new GetMethod();
final HttpClient httpClient = new HttpClient(CONN_MGR);
get.getParams().setCookiePolicy(CookiePolicy.IGNORE_COOKIES);
get.getParams().setParameter(HttpMethodParams.USER_AGENT, USER_AGENT);
get.setQueryString(encodedParams);

int responseCode;
try {
    responseCode = httpClient.executeMethod(get);
} catch (final IOException e) {
    ...
}

if (responseCode != 200)
    throw new Exception(...);

String responseHTML;
try {
    responseHTML = get.getResponseBodyAsString(100*1024*1024);
} catch (final IOException e) {
    ...
}

return responseHTML;
It works great in a lightly-loaded environment, but when I'm making hundreds of requests per second I start to see this -
Caused by: java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:336)
at java.net.Socket.bind(Socket.java:588)
at java.net.Socket.<init>(Socket.java:387)
at java.net.Socket.<init>(Socket.java:263)
at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:80)
at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:122)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
Any thoughts on how to fix this? I'm guessing it's something to do with the client trying to reuse the ephemeral client ports, but why is this happening / how can I fix it?
Thanks!
A very good discussion of the problem you are running into can be found here. On the Tomcat side, by default it will use the SO_REUSEADDR option, which will allow the server to reuse sockets which are in TIME_WAIT. Additionally, the Apache http client will by default use keep-alives, and attempt to reuse connections.
Your problem seems to be caused by not calling releaseConnection on the method (the GetMethod). This is required for the connection to be returned to the manager and reused; otherwise it remains open until the garbage collector closes it or the server drops the keep-alive, and in neither case is it returned to the pool.
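A minimal sketch, reusing the names from the question: release the method in a finally block so the connection goes back to the MultiThreadedHttpConnectionManager:

final GetMethod get = new GetMethod();
try {
    int responseCode = httpClient.executeMethod(get);
    String responseHTML = get.getResponseBodyAsString(100 * 1024 * 1024);
    // ... use responseCode / responseHTML ...
} finally {
    get.releaseConnection();   // returns the connection to the manager
}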
With hundreds of connections a second, and without knowing how long your connections take to open, do their thing, close, and get recycled, I suspect this is just a problem you're going to have. One thing you can do is catch the BindException in your try block, use that to do whatever you need in the bind-unsuccessful case, and wrap the whole call in a while loop driven by a flag that indicates whether the bind succeeded. Off the top of my head:
boolean hasBound = false;
while (!hasBound) {
    try {
        hasBound = true;
        responseCode = httpClient.executeMethod(get);
    } catch (BindException e) {
        // do anything you want in the bind-unsuccessful case
        hasBound = false;   // go around the loop and try again
    } catch (final IOException e) {
        ...
    }
}
Update: one curious question: what are the maximum total and per-host numbers of connections allowed by your MultiThreadedHttpConnectionManager? In your code, that'd be:
CONN_MGR.getParams().getDefaultMaxConnectionsPerHost();
CONN_MGR.getParams().getMaxTotalConnections();
Thus, you've fired more requests than there are TCP/IP ports available to be opened. I don't do HttpClient, so I can't go into detail about this, but in theory there are three solutions for this particular problem:
Hardware based: add another NIC (network interface card).
Software based: close connections directly after use and/or increase the connection timeout.
Platform based: increase the amount of TCP/IP ports which are allowed to be opened. May be OS-specific and/or NIC driver-specific. The absolute maximum is 65535, of which several may already be reserved/in use (e.g. port 80).
So it turns out the problem was that one of the other HttpClient instances accidentally wasn't using the MultiThreadedHttpConnectionManager I instantiated, so I effectively had no rate limiting at all. Fixing this problem fixed the exception being thrown.
Thanks for all the suggestions, though!
Even if you invoke HttpClientUtils.closeQuietly(client), if your code reads the content from the HttpResponse entity, e.g. InputStream contentStream = httpResponse.getEntity().getContent(), you should close that InputStream as well; only then is the HttpClient connection released properly.
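A minimal sketch of that, assuming the HttpClient 4.x CloseableHttpResponse API (client and get are placeholders for your own instances):

CloseableHttpResponse httpResponse = client.execute(get);
try {
    HttpEntity entity = httpResponse.getEntity();
    if (entity != null) {
        InputStream contentStream = entity.getContent();
        try {
            // ... read the content ...
        } finally {
            contentStream.close();   // closing the stream releases the connection
        }
    }
} finally {
    httpResponse.close();
}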