I am working on a project in which I have to fetch 4096 bytes of data from the server every 1 to 10 milliseconds. But the request is taking too much time, around 300ms - 700ms, which causes my application to lose data.
I am using the snippet below:
HttpClient client = new DefaultHttpClient();
HttpPost request = new HttpPost("http://192.168.1.40/ping");
HttpResponse response = client.execute(request);
The HttpResponse alone is taking too much time, around 300ms - 700ms.
How can I get the response faster?
What else can I use instead to get a response from the server faster than this?
Please let me know any solution or way to solve it.
I have Googled and gone through other approaches like DataOutputStream and ByteArrayOutputStream, but to no avail; they also take more time than HttpResponse.
Help will be appreciated.
Before you can make the responses faster, you are going to need to investigate and understand why they are currently taking a long time. Roughly speaking, it could be:
the client side taking a long time to create the request and/or process the result (seems unlikely ...)
a slow android network protocol stack
a problem with your local networking (e.g. WiFi) or your telecoms provider
a congested / overloaded server or server-side network, or
something pessimal in the server implementation.
Do things like:
try the request from a web browser on a conventional PC and use the browser's web-developer stuff to try to tease out whether/why the request is taking a long time ...
look in the server-side logs and/or monitoring for request load and timing information
(other suggestions welcome)
Implementing SPDY might help, but it is unlikely to cut response times from the order of 500ms down to a couple of tens of milliseconds. The problem seems more fundamental than "HTTP is old and slow". And the same reasoning applies to all of the other suggestions that people have made.
This is not possible with your current approach: you are recreating the connection every time.
You need to hold a persistent connection to the server. Try creating a persistent HTTP connection.
If that doesn't work, you can try sending raw UDP packets (or anything else). It will be harder, but it will take less time.
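To make the persistent-connection idea concrete, here is a minimal sketch using the JDK's java.net.http.HttpClient (Java 11+), which keeps connections alive and reuses them across requests, so only the first call pays the connection-setup cost. The URL is a placeholder; the original question's Apache HttpClient can achieve the same thing with a shared client and a pooling connection manager.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PersistentClient {
    // One shared client for the whole app: the underlying TCP connections
    // are pooled and reused across requests instead of being recreated
    // on every call.
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static byte[] fetch(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofByteArray()).body();
    }
}
```

The key point is that the client object, not the request, owns the connection pool, so it must outlive individual requests.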
@sheldonCooper's answer is right if the server enables SPDY. You can also add gzip compression. It has been enabled for all requests since Gingerbread, but you can add it for earlier SDK versions: http://android-developers.blogspot.fr/2011/09/androids-http-clients.html
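As a sketch of what handling gzip by hand looks like (an illustration only, assuming a client where you set the Accept-Encoding header yourself; on Android's HttpURLConnection since Gingerbread this is done transparently):

```java
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class GzipFetch {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Advertise gzip support; if the server compressed the body,
    // decompress it before handing it back.
    public static String fetch(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Accept-Encoding", "gzip")
                .build();
        HttpResponse<InputStream> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofInputStream());
        InputStream body = response.body();
        boolean gzipped = response.headers().firstValue("Content-Encoding")
                .map("gzip"::equalsIgnoreCase).orElse(false);
        if (gzipped) {
            body = new GZIPInputStream(body); // decompress transparently
        }
        return new String(body.readAllBytes(), StandardCharsets.UTF_8);
    }
}
```

Note that compression helps bandwidth, not round-trip latency, so for a 4096-byte payload the gain may be small.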
Use SPDY protocol. This would improve your response time.
I think in your case you can use websockets so that you would not have to create a connection each time and the live connection is available every time.
The setup:
We have an https://Main.externaldomain/xmlservlet site, which is authenticating/validating/geo-locating and proxy-ing (slightly modified) requests to http://London04.internaldomain/xmlservlet for example.
There's no direct access to internaldomain exposed to end-users at all. The communication between the sites gets occasionally interrupted and sometimes the internaldomain nodes become unavailable/dead.
The Main site is using org.apache.http.impl.client.DefaultHttpClient (I know it's deprecated; we're gradually upgrading this legacy code) with readTimeout set to 10,000 milliseconds.
The request and response have xml payload/body of variable length and the Transfer-Encoding: chunked is used, also the Keep-Alive: timeout=15 is used.
The problem:
Sometimes London04 actually needs more than 10 seconds (let's say 2 minutes) to execute. Sometimes it non-gracefully crashes. Sometimes other (networking) issues happen.
Sometimes during those 2 minutes the portions of response XML data arrive so steadily that there are no 10-second gaps between them, and therefore the readTimeout is never exceeded;
sometimes there is a 10+ second gap and HttpClient times out...
We could try to increase the timeout on Main side, but that would easily bloat/overload the listener pool (just by regular traffic, not even being DDOSed yet).
We need a way to distinguish between internal-site-still-working-on-generating-the-response and the cases where it really crashed/network_lost/etc.
The best solution seems to be some kind of heartbeat (every 5 seconds) during the communication.
We thought Keep-Alive would save us, but it seems to only cover the gaps between requests (not during a request), and it seems to do no heartbeating during the gap (it just has/waits for the timeout).
We thought chunked encoding might save us by sending some heartbeat (0-byte-sized chunks) to let the other side know we're alive, but there seems to be no default implementation supporting any heartbeat this way, and what's more, a 0-byte-sized chunk is itself the end-of-data indicator...
Question(s):
If we're correct in the assumption that Keep-Alive/chunked encoding won't help us achieve the keptAlive/heartbeat/fastDetectionOfDeadBackend, then:
1) at which layer should such a heartbeat be implemented? HTTP? TCP?
2) any standard framework/library/setting/etc implementing it already? (if possible: Java, REST)
UPDATE
I've also looked into heartbeat implementers for WADL/WSDL, though found none for REST, and checked out WebSockets...
Also looked into TCP keepalives, which seem to be the right feature for the task:
https://en.wikipedia.org/wiki/Keepalive
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html
Socket heartbeat vs keepalive
WebSockets ping/pong, why not TCP keepalive?
BUT according to those I'd have to set up something like:
tcp_keepalive_time=5
tcp_keepalive_intvl=1
tcp_keepalive_probes=3
which seems to run counter to the recommendations (2 hours is the recommended value, 10 minutes is already presented as an odd one; is going down to 5 seconds sane/safe?? If it is, this might be my solution upfront...)
Also, where should I configure this? On London04 alone, or on Main too? (If I set it up on Main, won't it flood the client-->Main frontend communication? Or might the NATs/etc. between the sites easily ruin the keepalive intent/support?)
P.S. any link to an RTFM is welcome - I might just be missing something obvious :)
My advice would be: don't use a heartbeat. Have your external-facing API return a 303 See Other with headers that indicate when and where the desired response might be available.
So you might call:
POST https://public.api/my/call
and get back
303 See Other
Location: https://public.api/my/call/results
Retry-After: 10
To the extent your server can guess how long a response will take to build, it should factor that into the Retry-After value. If a later GET call is made to the new location and the results are not yet done being built, return a response with an updated Retry-After value. So maybe you try 10, and if that doesn't work, you tell the client to wait another 110, which would be two minutes in total.
Alternately, use a protocol that's designed to stay open for long periods of time, such as WebSockets.
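A client loop for this 303/Retry-After pattern might look like the sketch below (a hypothetical illustration: the URL, the default wait, and the decision to stop polling on the first non-303 status are my assumptions, not part of the answer):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PollingClient {
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .followRedirects(HttpClient.Redirect.NEVER) // handle the 303 ourselves
            .build();

    // POST the job, then poll the Location given by the 303 until the
    // result is ready, sleeping for however long Retry-After suggests.
    public static String submitAndWait(String url) throws Exception {
        HttpResponse<String> response = CLIENT.send(
                HttpRequest.newBuilder(URI.create(url))
                        .POST(HttpRequest.BodyPublishers.noBody()).build(),
                HttpResponse.BodyHandlers.ofString());
        while (response.statusCode() == 303) {
            String next = response.headers().firstValue("Location").orElse(url);
            long waitSeconds = response.headers().firstValue("Retry-After")
                    .map(Long::parseLong).orElse(5L); // default wait is an assumption
            Thread.sleep(waitSeconds * 1000);
            response = CLIENT.send(
                    HttpRequest.newBuilder(URI.create(next)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
        }
        return response.body();
    }
}
```

This keeps each HTTP exchange short, so the frontend's listener pool is never tied up waiting on the slow backend.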
Take a look at SSE (Server-Sent Events).
example code:
https://github.com/rsvoboda/resteasy-sse
or vertx event-bus:
https://vertx.io/docs/apidocs/io/vertx/core/eventbus/EventBus.html
I have a simple Spring Boot based web app that downloads data from several APIs. Some of them don't respond in time, since my connectionTimeout is set to around 4 seconds.
As soon as I get rid of the connectionTimeout setting, I get exceptions after 20 or so seconds.
So, my question is: for how long am I able to try to connect to an API, and what does it depend on? Where do those 20 seconds come from? What if an API responds after 40 minutes, and I can't catch that specific moment and just lose the data? I don't want that to happen. What are my options?
Here's the code to set the connection. Nothing special.
HttpComponentsClientHttpRequestFactory clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory(HttpClientBuilder.create().build());
clientHttpRequestFactory.setConnectTimeout(4000);
RestTemplate restTemplate = new RestTemplate(clientHttpRequestFactory);
Then I retrieve the values via:
myObject.setJsonString(restTemplate.getForObject(url, String.class));
Try increasing your timeout. 4 seconds is too little.
The server needs to connect, formulate the data and return it. So 4 seconds is just for connecting; by the time it attempts to return anything, your application has already disconnected.
Set it to 20 seconds to test it. You can set it much longer to give the API enough time to complete. This does not mean your app will use up all of the timeout; it will finish as soon as a result is returned. Also, APIs are not designed to take long: they will perform the task and return the result as fast as possible.
Connection timeout means that your program couldn't connect to the server at all within the time specified.
The timeout can be configured, as, like you say, some systems may take a longer time to connect to, and if this is known in advance, it can be allowed for. Otherwise the timeout serves as a guard to prevent the application from waiting forever, which in most cases doesn't really give a good user experience.
A separate timeout can normally be configured for reading data (socket timeout). They are not inclusive of each other.
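To illustrate the two independent timeouts, here is a sketch using the JDK's HttpURLConnection (the same pair exists on the question's HttpComponentsClientHttpRequestFactory as setConnectTimeout and setReadTimeout); the method name and parameters are mine:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class Timeouts {
    public static String fetch(String url, int connectMs, int readMs) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(connectMs); // max time to establish the connection
        conn.setReadTimeout(readMs);       // max gap while waiting for data once connected
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes());
        } finally {
            conn.disconnect();
        }
    }
}
```

A short connect timeout with a generous read timeout often matches reality best: connecting should be fast, while generating the response may legitimately take a while.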
To solve your problem:
Check that the server is running and accepting incoming connections.
You might want to use curl or depending on what it is simply your browser to try and connect.
If one tool can connect, but the other can't, check your firewall settings and ensure that outgoing connections from your Java program are permitted. The easiest way to test whether this is a problem is to disable anti virus and firewall tools temporarily. If this allows the connection, you'll either need to leave the FW off, or better add a corresponding exception.
Leave the timeout on a higher setting (or try setting it to 0, which is interpreted as infinite) while testing. Once you have it working, you can consider tweaking it to reflect your server spec and usability requirements.
Edit:
I realised that this doesn't necessarily help, as you did ultimately connect. I'll leave the above standing as general info.
for how long am I able to try to connect to an API and what does it depend on?
Most likely the server that the API is hosted on. If it is overloaded, response time may lengthen.
Where do those 20 seconds come from?
Again this depends on the API server. It might be random, or it may be processing each request for a fixed period of time before finding itself in an error state. In this case that might take 20 seconds each time.
What if an API responds after 40 minutes of time and I won't be able to catch that specific moment and just gonna lose data. I don't want that to happen. What are my options?
Use a more reliable API, possibly paying for a service guarantee.
Tweak your connection and socket timeouts to allow for the capabilities of the server side, if known in advance.
If the response is really 40 minutes, it is a really poor service, but moving on with that assumption - if the dataset is that large, explore whether the API offers a streaming callback, whereby you pass in an OutputStream into the API's library methods, to which it will (asynchronously) write the response when it is ready.
Keep in mind that connection and socket timeout are separate things. Once you have connected, the connection timeout becomes irrelevant (socket is established). As long as you begin to receive and continue to receive data (packet to packet) within the socket timeout, the socket timeout won't be triggered either.
Use infinite timeouts (set to 0), but this could lead to poor usability within your applications, as well as resource leaks if a server is in fact offline and will never respond. In that case you will be left with dangling connections.
The default and maximum have nothing to do with the server. They depend on the client platform, but are around a minute. You can decrease the timeout, but not increase it beyond the platform maximum. Four seconds is far too short; it should be measured in tens of seconds in most circumstances.
And absent or longer connection timeouts do not cause server errors of any kind. You are barking up the wrong tree here.
I'm creating a mod that needs to make a GET request to an endpoint.
I don't care about the result; I just want the request to be sent.
Right now I'm using
HttpClient httpClient = HttpClientBuilder.create().build();
HttpGet request = new HttpGet(url);
and it blocks. Because the API takes some time to respond, that's not good.
I saw that there's a library called async-http-client, but I can't add libraries to my project.
I guess I have to create threads in my mod, but that doesn't seem like the best solution to me, as Minecraft mods shouldn't spawn new threads.
Is there any Java package that won't care about the response?
Sending network traffic will always block until it's completed - there's no way around that. In this case it should be perfectly fine to create a new thread to do the actual work - the thread will just block (and not waste CPU resources) for most of the time.
Note that async-http-client will just create its own threads to do its work, so it won't help you get around this restriction.
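A fire-and-forget sketch along these lines, using the JDK's built-in async client (which, consistent with the point above, still runs the request on its own internal background threads; the method and URL are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class FireAndForget {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // sendAsync performs the request on the client's internal executor,
    // so the calling (game) thread returns immediately instead of blocking.
    public static CompletableFuture<Integer> ping(String url) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return CLIENT.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                .thenApply(HttpResponse::statusCode); // caller is free to ignore this
    }
}
```

The caller can simply drop the returned future if it truly doesn't care about the outcome.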
I have a couple of HTTP Requests set up in my Thread Group. I noticed that the first request always takes longer than any other request. I reordered my requests and the problem still persists.
This makes it hard to analyse the response times.
Is this a known problem with JMeter? Is there a workaround?
This is the setup that I have
org.apache.jmeter.threads.ThreadGroup#69bb01
org.apache.jmeter.config.ConfigTestElement#b3600d
org.apache.jmeter.sampler.DebugSampler#67149d
https: 1st request
Query Data:
https: 2nd request
Query Data:
Query Data:
org.apache.jmeter.reporters.ResultCollector#11b53af
org.apache.jmeter.reporters.ResultCollector#11308c7
org.apache.jmeter.reporters.ResultCollector#a5643e
org.apache.jmeter.reporters.ResultCollector#585611
org.apache.jmeter.reporters.Summariser#1e8f4b9
org.apache.jmeter.reporters.ResultCollector#11ad922
org.apache.jmeter.reporters.ResultCollector#1a56999
This could well be because:
Servers usually need a warm-up before they reach their full speed: this is particularly true for the Java platform, where you surely don't want to measure class loading time, JSP compilation time or native compilation time.
http://nico.vahlas.eu/2010/03/30/some-thoughts-on-stress-testing-web-applications-with-jmeter-part-2/
Are you allowing for some warm-up traffic to the servers under measurement first, to allow things to get in cache, JSP pages to compile, the database working set to be in memory, etc?
How do I measure how long a client has to wait for a request?
On the server side it is easy, through a filter for example.
But if we want to take into account the total time, including latency and data transfer, it gets difficult.
Is it possible to access the underlying socket to see when the request is finished?
Or is it necessary to do some JavaScript tricks? Maybe through clock synchronisation between browser and server? Are there any premade JavaScripts for this task?
You could wrap the HttpServletResponse object and the OutputStream returned by the HttpServletResponse. When output starts writing you could set a startDate, and when it stops (or when it's flushed etc) you can set a stopDate.
This can be used to calculate the length of time it took to stream all the data back to the client.
We're using it in our application and the numbers look reasonable.
Edit: you can set the start date in a ServletFilter to get the length of time the client waited. What I described above gives you the length of time it took to write output to the client.
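As a self-contained illustration of the same timing idea, here is a sketch using the JDK's built-in com.sun.net.httpserver.Filter rather than the Servlet API (an assumption for runnability; in a servlet container you would wrap HttpServletResponse and its OutputStream analogously):

```java
import com.sun.net.httpserver.Filter;
import com.sun.net.httpserver.HttpExchange;
import java.io.IOException;

// Records how long each exchange took, from the request's arrival
// until the handler finished writing the response body.
public class TimingFilter extends Filter {
    public volatile long lastElapsedMs; // exposed for logging/inspection

    @Override
    public void doFilter(HttpExchange exchange, Chain chain) throws IOException {
        long start = System.nanoTime();
        try {
            chain.doFilter(exchange); // run the rest of the chain and the handler
        } finally {
            lastElapsedMs = (System.nanoTime() - start) / 1_000_000;
        }
    }

    @Override
    public String description() {
        return "records server-side request handling time";
    }
}
```

As the surrounding answers note, this still measures only the server side: network latency to the client is not included.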
There's no way you can know how long the client had to wait purely from the server side. You'll need some JavaScript.
You don't want to synchronize the client and server clocks, that's overkill. Just measure the time between when the client makes the request, and when it finishes displaying its response.
If the client is AJAX, this can be pretty easy: call new Date().getTime() to get the time in milliseconds when the request is made, and compare it to the time after the result is parsed. Then send this timing info to the server in the background.
For a non-AJAX application, when the user clicks on a request, use JavaScript to send the current timestamp (from the client's point of view) to the server along with the query, and pass that same timestamp back through to the client when the resulting page is reloaded. In that page's onLoad handler, measure the total elapsed time, and then send it back to the server - either using an XmlHttpRequest or tacking on an extra argument to the next request made to the server.
If you want to measure it from your browser, to simulate any client request you can watch the Net tab in Firebug to see how long each piece of the page takes to download, and the download order.
Check out Jiffy-web, developed by Netflix to give them a more accurate view of the total page-to-page rendering time.
I had the same problem, but this JavaOne paper really helped me to solve it. I would request you to go through it; it basically uses JavaScript to calculate the time.
You could set a 0-byte socket send buffer (and I don't exactly recommend this) so that when your blocking call to HttpResponse.send() returns, you have a closer idea as to when the last byte left, although travel time is not included. Ekk - I feel queasy for even mentioning it. You can do this in Tomcat with connector-specific settings (Tomcat 6 Connector documentation).
Or you could come up with some sort of JavaScript timestamp approach, but I would not expect to set the client clock. Multiple calls to the web server would have to be made:
timestamp query
the real request
reporting the data
And this approach would cover latency, although you still have some jitter variance.
Hmmm...interesting problem you have there. :)