Is it possible to send an HTTP request without waiting for a response?
I'm working on an IoT project that requires logging of data from sensors. In every setup, there are many sensors and one central coordinator (which will most likely be implemented on a Raspberry Pi) that gathers data from the sensors and sends it to the server over the internet.
This logging happens every second. Therefore, the sending of data should happen quickly so that the queue does not become too large. If the request doesn't wait for a response (like UDP), it would be much faster.
It is okay if a few packets are dropped every now and then.
Also, please do tell me the best way to implement this. Preferably in Java.
The server side is implemented using PHP.
Thanks in advance!
EDIT:
The sensors are wireless, but the tech they use has very little (or no) latency when sending to the coordinator. This coordinator has to send the data over the internet. But just assume the internet connection is bad, as this is going to be deployed in a remote part of India.
You are looking for an asynchronous HTTP library such as OkHttp. It allows you to specify a Callback that is executed asynchronously (on a second thread).
Your main thread therefore continues execution.
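For illustration, a minimal fire-and-forget sketch with OkHttp 3.x; the endpoint URL and JSON payload are placeholders, not taken from the question:

import okhttp3.*;

import java.io.IOException;

public class AsyncLogger {
    private final OkHttpClient client = new OkHttpClient();

    // Sends the sensor payload and returns immediately; the callback runs on OkHttp's own thread pool.
    public void send(String json) {
        Request request = new Request.Builder()
                .url("http://example.com/log.php") // placeholder endpoint
                .post(RequestBody.create(MediaType.parse("application/json; charset=utf-8"), json))
                .build();

        client.newCall(request).enqueue(new Callback() {
            @Override
            public void onFailure(Call call, IOException e) {
                // A dropped log entry is acceptable here, so just note the failure.
                System.err.println("Log upload failed: " + e.getMessage());
            }

            @Override
            public void onResponse(Call call, Response response) throws IOException {
                response.close(); // release the connection back to the pool
            }
        });
    }
}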
You can set the TCP timeout for a GET request to less than a second, and keep retriggering the access in a thread. Use more threads for more devices.
Something like:
HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
con.setRequestMethod("GET");
con.setConnectTimeout(1000); // connect timeout of 1 second
con.setReadTimeout(1000);    // bound the read as well, so the whole access stays under roughly 1 second
if (con.getResponseCode() == HttpURLConnection.HTTP_OK) {
...
}
Sleep the thread for the remainder of the second if the access takes less than a second. You can consume the results on another thread if you add them to a thread-safe queue. Make sure to handle exceptions.
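A rough sketch of that loop, assuming the results go into a shared BlockingQueue that another thread consumes; the URL and the enqueued value are placeholders:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.BlockingQueue;

class Poller implements Runnable {
    private final String url;
    private final BlockingQueue<String> results; // consumed by another thread

    Poller(String url, BlockingQueue<String> results) {
        this.url = url;
        this.results = results;
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            long start = System.currentTimeMillis();
            try {
                HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
                con.setRequestMethod("GET");
                con.setConnectTimeout(1000);
                con.setReadTimeout(1000);
                if (con.getResponseCode() == HttpURLConnection.HTTP_OK) {
                    results.offer("OK"); // or read the body and enqueue it
                }
            } catch (IOException e) {
                // a dropped request every now and then is acceptable
            }
            long elapsed = System.currentTimeMillis() - start;
            if (elapsed < 1000) {
                try {
                    Thread.sleep(1000 - elapsed); // sleep for the remainder of the second
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }
}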
You can't use UDP with HTTP; HTTP is TCP only.
Related
I have a gRPC client that uses two bidi streams. For reasons unknown at present, when we send a keepAlive ping every hour, onError is called on both streams with a StatusRuntimeException.
To handle the reconnection, I've implemented the following retry mechanism, in java pseudocode. I will clarify anything as necessary in the comments.
The mechanism looks like so:
onError() {
retrySyncStream();
}
void retrySyncStream() {
// capture the current StreamObserver
previousStream = this.streamObserver;
// open a new stream
streamObserver = bidiStub.startStream(responseObserver);
waitForChannelReady(); // <-- simplified version, we use the gRPC notification listener
previousStream.onCompleted(); // <-- called on notify of channel READY
}
Although we attempt to close the old stream, server side we see 2 connections open on 2 HA nodes. I don't have control over anything server side, I just need to handle reconnection logic on the client.
First things first: is it common practice to ditch the old StreamObserver after getting a StatusRuntimeException? The reason I am doing this is that we have a mock Spring Boot server application that we use to test our client against. When I force-shutdown (Ctrl-C) the Spring Boot server app and start it back up again, the client can't use the original StreamObserver; it has to create a new one by calling the gRPC bidi stream API.
From what I've read online, people say not to ditch the managed channel, but how about stream observers, and making sure that multiple streams aren't being opened by mistake?
Thanks.
When the StreamObserver gets an error, the RPC is dead. It is appropriate to ditch it.
When you re-create the stream, consider what would happen if the server is having trouble. Generally you'd have exponential backoff in place somewhere. For bidi streaming, gRPC implementations commonly reset the backoff once the client has received a response from the server.
Since both streams die together, it sounds like the TCP connection was dead. Unfortunately, in TCP you have to send on the connection to learn it is dead. When the client learns the connection is dead, it learns that because it can't send to the HA proxy using that connection. That means HA has to separately discover the connection is dead. Server-side keepalive could help with this, although TCP keepalive at HA is probably also warranted.
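For illustration, a minimal sketch of the exponential backoff mentioned above; the reconnect Runnable would wrap something like the retrySyncStream() from the question, and the 1 s initial delay and 60 s cap are arbitrary choices:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class StreamRetrier {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Runnable reconnect; // e.g. () -> retrySyncStream() from the question
    private long backoffMillis = 1_000;

    StreamRetrier(Runnable reconnect) {
        this.reconnect = reconnect;
    }

    // Call from onError(): schedule the reconnect after the current delay, then double it (capped at 60 s).
    synchronized void scheduleRetry() {
        scheduler.schedule(reconnect, backoffMillis, TimeUnit.MILLISECONDS);
        backoffMillis = Math.min(backoffMillis * 2, 60_000);
    }

    // Call once a response arrives on the re-created stream, per the note above.
    synchronized void resetBackoff() {
        backoffMillis = 1_000;
    }
}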
I am working with Java. Another software developer has provided me his code performing synchronous HTTP calls and is responsible for maintaining it - he is using com.google.api.client.http. Updating his code to use an asynchronous HTTP client with a callback is not an available option, and I can't contact the developer to make changes to it. But I still want the efficient asynchronous behaviour of attaching a callback to an HTTP request.
(I am working in Spring Boot and my system is built using RabbitMQ AMQP, in case that matters.)
The simple HTTP GET (it is actually an API call) is performed as follows:
HttpResponse<String> response = httpClient.send(request, BodyHandlers.ofString());
The server I'm communicating with via HTTP takes some time to reply... say 3-4 seconds. So my thread of execution is blocked for this duration, waiting for a reply. This scales very poorly: my thread isn't doing anything except waiting for the reply to arrive, which is very wasteful.
Sure, I can increase the number of threads performing this call if I want to send more HTTP requests concurrently, i.e. I can scale in that way, but that doesn't sound efficient or correct. If possible, I would really like a better ratio than one thread waiting on one HTTP request.
In other words, I want to send thousands of HTTP requests with 2-3 available threads and handle the response once it arrives; I don't want to incur any significant delay between the execution of each request.
I was wondering: how can I achieve a more scalable solution? How can I handle thousands of these HTTP calls per thread? What should I be looking at, or do I simply have no options and am asking for the impossible?
EDIT: I guess this is another way to phrase my problem. Assume I have 1000 requests to be sent right now, each will last 3-4 seconds, but only 4-5 available threads of execution on which to send them. I would like to send them all at the same time, but that's not possible; if I manage to send them ALL within the span of 0.5 s or less and handle their responses via some callback or something like that, I would consider that a great solution. But I can't switch to an asynchronous HTTP client library.
Using an asynchronous HTTP client is not an available option - I can't change my HTTP client library.
In that case, I think you are stuck with non-scalable synchronous behavior on the client side.
The only work-around I can think of is to run your requests as tasks in an ExecutorService with a bounded thread pool. That will limit the number of threads that are used ... but will also limit the number of simultaneous HTTP requests in play. This is replacing one scaling problem with another one: you are effectively rate-limiting your HTTP requests.
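A minimal sketch of that work-around, reusing the blocking call from the question; the pool size of 5 and the handleResponse() helper are illustrative, not part of the original code:

import java.io.IOException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpResponse.BodyHandlers;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class BoundedSender {
    // At most 5 requests are in flight at any time; the rest queue up inside the executor.
    void sendAll(HttpClient httpClient, List<HttpRequest> requests) {
        ExecutorService pool = Executors.newFixedThreadPool(5);
        for (HttpRequest request : requests) {
            pool.submit(() -> {
                try {
                    HttpResponse<String> response = httpClient.send(request, BodyHandlers.ofString());
                    handleResponse(response); // hypothetical handler for the reply
                } catch (IOException | InterruptedException e) {
                    // log and move on
                }
            });
        }
        pool.shutdown();
    }

    void handleResponse(HttpResponse<String> response) {
        // placeholder for whatever the application does with the reply
    }
}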
But the flip-side is that launching too many simultaneous HTTP requests is liable to overwhelm the target service(s) and / or the client or server-side network links. From that perspective, client-side rate limiting could be a good thing.
Assume I have 1000 requests to be sent right now, each will last 3-4 seconds, but only 4-5 available threads of execution on which to send them. I would like to send them all at the same time, but that's not possible; if I manage to send them ALL within the span of 0.5 s or less and handle their responses via some callback or something like that, I would consider that a great solution. But I can't switch to an asynchronous HTTP client.
The only way you are going to be able to run > N requests at the same time with N threads is to use an asynchronous client. Period.
And "... callback or something like that ...". That's a feature you will only get with an asynchronous client. (Or more precisely, you can only get real asynchronous behavior via callbacks if there is a real asynchronous client library under the hood.)
So the solution amounts to sending the HTTP requests in a staggered manner, i.e. with some delay between one request and another, where the delay is determined by the number of available threads? If the delay between requests is not significant, I can find that acceptable, but I assume there would be a rather large delay between requests, since each thread has to wait for an earlier one to finish (3-4 s)? In that case, it's not what I want.
With my proposed work-around, the delay between any two requests is difficult to quantify. However, if you are trying to submit a large number of requests at the same time and wait for all of the responses, then the delay between individual requests is not relevant. For that scenario, the relevant measure is the time taken to complete all of the requests. Assuming that nothing else is submitting to the executor, the time taken to complete the requests will be approximately:
nos_requests * average_request_time / nos_worker_threads
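For example, with the numbers from the question (1000 requests at roughly 3.5 seconds each, 5 worker threads), that works out to about 1000 * 3.5 / 5 = 700 seconds.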
The other thing to note is that if you did manage to submit a huge number of requests simultaneously, the server delay of 3-4s per request is liable to increase. The server will only have the capacity to process a certain number of requests per second. If that capacity is exceeded, requests will either be delayed or dropped.
But if there really are no other options, I suppose you could consider changing your server API so that you can submit multiple "requests" in a single HTTP request.
I think that the real problem here is there is a mismatch between what the server API was designed to support, and what you are trying to do with it.
And there is definitely a problem with this:
Another software developer has provided me his code performing synchronous HTTP calls and is responsible of maintaining it - he is using com.google.api.client.http. Updating his code to use an asynchronous HTTP client with a callback is not an available option, and I can't contact the developer to make changes to it.
Perhaps you need to "bite the bullet" and stop using his code. Work out what it is doing and replace it with your own implementation.
There is no magic pixie dust that will give scalable performance from a synchronous HTTP client. Period.
For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: a thread will be blocked by a client even if the client doesn't send anything. Possible pseudo-solutions, none of them satisfactory:
Call available() on the inputStream and put the connection back into the queue if it returns 0. The problem: It's impossible to detect if the client is still connected.
As above, but use socket.isClosed() or socket.isConnected() to figure out if the client is still connected. The problem: Neither method detects a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe if the client is still there by reading from or writing to it. The problem: Reading blocks (i.e. back to the original situation where an inactive client blocks the queue) and writing actually sends something to the client, making the tests fail.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
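For a flavour of what that looks like, a bare-bones Selector loop; the port and buffer size are arbitrary, and request processing is left as a comment:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);
                    if (n == -1) {
                        client.close(); // remote end hung up - exactly the case the question asks about
                    } else {
                        // process the request bytes in buf here
                    }
                }
            }
        }
    }
}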
Turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for exception to this principle), run available() on the connection before reading. If this returns 0, put the connection back into the queue since there is nothing to read.
Exception to the above principle, in which case available() is not used: if the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SoTimeout that you set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If you read -1, throw the connection away since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or nothing because they were skipped since there was nothing available(). If the other end closed its connection, we will detect this within one second, since the timestamp of the last successful read is too old; in this case, we perform an actual read that will return -1, and the socket's isClosed() is updated accordingly. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it takes an additional 100 ms to find out that the connection is still there but not ready.
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and also update the timestamp when getting a SocketTimeoutException.
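A condensed sketch of this strategy; the class and method names are illustrative, and the socket is assumed to have been configured with setSoTimeout(100) as described above:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

class ConnectionHandler {
    // Returns true if the connection was fully handled (request processed or socket closed),
    // false if it should go back into the queue. 'lastBlockingRead' is tracked per connection.
    boolean tryHandle(Socket socket, long lastBlockingRead) throws IOException {
        InputStream in = socket.getInputStream();
        boolean stale = System.currentTimeMillis() - lastBlockingRead > 1000;

        if (!stale && in.available() == 0) {
            return false; // nothing to read right now - requeue the connection
        }
        try {
            int b = in.read(); // blocks for at most the 100 ms set via socket.setSoTimeout(100)
            if (b == -1) {
                socket.close(); // remote end closed the connection - discard it
                return true;
            }
            // ... read the rest of the request and process it ...
            return true;
        } catch (SocketTimeoutException e) {
            return false; // still connected but idle - requeue and update the timestamp (per the EDIT)
        }
    }
}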
No, the only way to discern an inactive client from a client that didn't shut down their socket properly is to send a ping or something to check if they're still there.
Possible solutions I can see is
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if they're still there. I know you asked for a way to do this without sending anything, but if the above solution really isn't an option, this is probably the best way to do it, depending on the specifics (since it's an exercise, you might have to imagine the specifics).
A mix of both; actually, this is probably better. Keep track of how long they've been quiet, and after a while send them a ping to see if they're still alive.
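A small sketch of the mixed approach; the idle thresholds and the single ping byte are made-up values:

import java.io.IOException;
import java.net.Socket;

class LivenessChecker {
    // Called periodically per connection; 'lastActivity' is updated on every successful read.
    void check(Socket socket, long lastActivity) throws IOException {
        long idle = System.currentTimeMillis() - lastActivity;
        if (idle > 60_000) {
            socket.close();                       // quiet for too long - assume the client is gone
        } else if (idle > 30_000) {
            socket.getOutputStream().write(0);    // application-level ping; writing to a dead peer
            socket.getOutputStream().flush();     // will eventually fail and reveal the disconnect
        }
    }
}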
I want to write a chat application in Java which can handle many users simultaneously. I read about sockets and thread pools to limit the thread count, but I can't imagine how to handle e.g. 100 socket connections at the same time without creating 100 new threads. The idea is that a client connects at the beginning and his connection stays open until he leaves the chat. He can send data to the server as well as receive other users' messages.
Reading from a socket is a blocking operation, so would I need to check all users' sockets in a loop, with some timeout, to see whether new data is available on a particular connection? My first idea was to create e.g. 3 threads for handling input from all connected users and 3 threads for outgoing communication from the server to clients, but how can I achieve that? Is there any async API for sockets in Java where I can define thread pools for in/out communication?
Make a Client class that extends Thread. Write all the methods and in the void run() method, write the code you want executed when the client connection is made.
On the server side, listen for new connections. Accept a new connection, get the information about the connection, pass it to the constructor to create a new Client object, add it to an ArrayList to keep track of all ongoing connections, and call the start() method. So all the Client objects are in an ArrayList, and they keep running at the same time.
I made such a chat application about a year ago. And do not forget to close the connection once the client disengages, or else the objects pile up and slow down the application. I learnt that the hard way.
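A rough sketch of that structure; the port, the message handling and the broadcast step are placeholders:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

class Client extends Thread {
    private final Socket socket;

    Client(Socket socket) { this.socket = socket; }

    @Override
    public void run() {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // broadcast 'line' to the other clients here
            }
        } catch (IOException ignored) {
        } finally {
            try { socket.close(); } catch (IOException ignored) { } // don't let dead connections pile up
        }
    }
}

class ChatServer {
    public static void main(String[] args) throws IOException {
        List<Client> clients = new ArrayList<>();
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                Client client = new Client(server.accept());
                clients.add(client);   // keep track of ongoing connections
                client.start();
            }
        }
    }
}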
Use Netty as it provides an NIO framework (non-blocking IO) so that you do not need 1 thread per connection. It is a little bit (or a lot..) more complicated to write a server using non-blocking IO, but there are performance gains in regards to not requiring one thread per connection.
However, 100 threads is not so many, so you could still create your server using standard IO and one thread per connection; it just depends on how much you need to scale.
For a server setup using Netty, you create a channel to which new connections are assigned. This channel is an ordered series of handlers which process incoming (and outgoing) messages from a connection / client. The handlers themselves all need to be asynchronous such that when a handler needs to return a message to the client it writes it asynchronously (non-blockingly) to the channel and receives a future back to which it can attach actions for when the message is actually written.
There is a little bit of a learning curve, but it is not that steep and the overall design of your application will be much better if built the Netty way vs using standard blocking IO.
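To give an idea of the shape, a minimal Netty 4.x bootstrap; the port and the handler body are placeholders, and real code would add codecs and further handlers to the pipeline:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyChatServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O for all connections
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                @Override
                                public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                    // decode the message and reply asynchronously via ctx.writeAndFlush(...)
                                }
                            });
                        }
                    });
            bootstrap.bind(5000).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}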
I am developing an Android application communicating with a TCP Java server over a WLAN connection. The Android application is a game with sprites being moved around the screen. Whenever a sprite moves, the AndroidClient sends its coordinates to the Java server, which then sends the data to the other clients (maximum 4 clients). The server handles each client on a separate thread, data updates are sent about every 20 ms, and each packet consists of about 1-10 bytes. I am on a 70 Mbit network (with about 15 Mbit effective on my wireless).
I am having problems with an unstable connection, and I experience latency of about 50-500 ms every 10th-30th packet. I have set tcpNoDelay to true, which stopped the consistent 200 ms latency, although it still lags a lot. As I am quite new to both Android and networking, I don't know whether this is to be expected or not. I am also wondering if UDP could be suitable for my program, as I am interested in sending updates fast rather than having every packet arrive correctly.
I would appreciate any guidance as to how to avoid/work around this latency problem. General tips on how to implement such a client-server architecture would also be applauded.
On a wireless LAN you'll occasionally see dropped packets, which results in a packet retransmission after a delay. If you want to control the delay before retransmission you're almost certainly going to have to use UDP.
You definitely want to use UDP. For a game you don't care if the position of a sprite is incorrect for a short time. So UDP is ideal in this case.
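For reference, a minimal sketch of sending a position update over UDP; the host, port and "x;y" payload format are made up:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpSender {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] data = "42;17".getBytes(StandardCharsets.UTF_8); // e.g. "x;y" sprite coordinates
            DatagramPacket packet = new DatagramPacket(data, data.length,
                    InetAddress.getByName("192.168.0.10"), 4445);
            socket.send(packet); // fire-and-forget: a lost packet is simply never retransmitted
        }
    }
}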
Also, if you have any control over the server code, I would not use separate threads for clients. Threads are useful if you need to make calls to libraries that you don't have control over and that can block (such as because they touch a file or try to perform additional network communication). But they are expensive. They consume a lot of resources and as such they actually make things slower than they could be.
So for a network game server where latency and performance are absolutely critical, I would just use one thread to process a queue of commands that have a state, and then make sure that you never perform an operation that blocks. Each command is processed in order, and its state is evaluated and updated (like a laser blast intersecting with another object). If the command requires blocking (like reading from a file), then you need to perform a non-blocking read and set the state of that command accordingly, so that your command processor never blocks. The key is that the command processor can never, ever block. It would just run in a loop, but you would have to call Thread.sleep(x) appropriately so as not to waste CPU.
As for the client side, when a client submits a command (like firing a laser or some such), the client would generate a response object and insert it into a Map with a sequence id as the key. Then it would send the request with the sequence id, and when the server responds with that id, you just look up the response object in the Map and decode the response into that object. This allows you to perform concurrent operations.
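A sketch of that sequence-id bookkeeping; the Response type and its decode() method are stand-ins for whatever the game protocol actually uses:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class PendingRequests {
    private final AtomicInteger nextId = new AtomicInteger();
    private final Map<Integer, Response> pending = new ConcurrentHashMap<>();

    // Client side: register a response object, then send the command tagged with the returned id.
    int register(Response response) {
        int id = nextId.incrementAndGet();
        pending.put(id, response);
        return id;
    }

    // Network thread: when the server answers with an id, decode into the waiting object.
    void complete(int id, byte[] payload) {
        Response response = pending.remove(id);
        if (response != null) {
            response.decode(payload); // hypothetical decode method
        }
    }
}

// Stand-in for the game's actual response type.
interface Response {
    void decode(byte[] payload);
}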