I am trying to use a Java library to communicate with a car via the serial port using the OBD2 protocol. The protocol is simple: you send an ASCII string (e.g. "01 0d"), and the car answers with an ASCII value. I've found many libraries on the web, but there is one concept I don't understand in the examples. After every send command, the programmer puts a call to sleep. Why is that? For example:
send(pid)
sleep(200)
receive(response)
I don't understand this, because read is a blocking call, so I should be able to wait on read. Why the additional call to sleep?
I did a bunch of work with the (Mitsubishi/Subaru) MUT-II protocol a few years ago, which is based on ISO 9141, and it was the same way: a 200 ms pause after every single request. It was later confirmed by the community/forums that the only pause actually necessary was the one after the initial 5-baud init; once the connection was switched to 10400 baud, no more pauses were necessary.
If you are going via a hardware interface (like OBDKey or a similar ELM327-based device), then the protocol timings are taken care of for you, so that is unlikely to be the cause of the sleep delay.
You are right, read does block. But note that a timeout can be set up in the read mechanism when establishing the COM/serial port parameters. In that case, a call to read returns with some or no data once the timeout expires.
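For instance, with the RXTX (gnu.io) serial API, the timeout lives in the port configuration, so a later read() returns after at most that many milliseconds even if nothing arrived. This is only a minimal sketch; the port name, baud rate, timeout and the "01 0D" request are placeholders, not values from the question:

import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;
import java.io.InputStream;
import java.io.OutputStream;

public class ObdPortSketch {
    public static void main(String[] args) throws Exception {
        // "COM3" is a placeholder -- use whatever port the adapter shows up as.
        CommPortIdentifier id = CommPortIdentifier.getPortIdentifier("COM3");
        SerialPort port = (SerialPort) id.open("obd-sketch", 2000);
        port.setSerialPortParams(38400, SerialPort.DATABITS_8,
                SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);

        // A read() on this port's InputStream now returns after at most 200 ms,
        // possibly with fewer bytes than requested, or none at all.
        port.enableReceiveTimeout(200);

        OutputStream out = port.getOutputStream();
        InputStream in = port.getInputStream();

        out.write("01 0D\r".getBytes("US-ASCII"));
        out.flush();

        byte[] buf = new byte[128];
        int n = in.read(buf);   // blocks until data arrives or the 200 ms timeout expires
        if (n > 0) {
            System.out.println(new String(buf, 0, n, "US-ASCII"));
        }
        port.close();
    }
}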
For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: a thread will be blocked by a client even if the client doesn't send anything. Possible pseudo-solutions, none of them satisfactory:
Call available() on the inputStream and put the connection back into the queue if it returns 0. The problem: It's impossible to detect if the client is still connected.
As above, but use socket.isClosed() or socket.isConnected() to figure out if the client is still connected. The problem: neither method detects a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe if the client is still there by reading from or writing to it. The problem: Reading blocks (i.e. back to the original situation where an inactive client blocks the queue) and writing actually sends something to the client, making the tests fail.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
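A minimal sketch of that registration step, assuming a plain TCP server on an arbitrary port; the request handling itself is left out:

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();   // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    // A passive client simply never shows up in selectedKeys(),
                    // so it costs nothing and never blocks a worker thread.
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // read from (SocketChannel) key.channel() without blocking here
                }
            }
        }
    }
}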
Turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for the exception to this principle), run available() on the connection before reading. If this returns 0, put the connection back into the queue since there is nothing to read.
Exception to the above principle, in which case available() is not used: if the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SoTimeout that you set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If you read -1, throw the connection away since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or nothing because they were skipped since there was nothing available(). If the other end closed its connection, we will detect this within one second, since the timestamp of the last successful read is too old. In that case, we perform an actual read that will return -1, and the socket's isClosed() is updated accordingly. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it takes us an additional 100 ms to find out that the connection is still there but not ready.
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and also update the timestamp when getting a SocketTimeoutException.
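Putting the pieces above together, the per-connection check could look roughly like this; Connection is a hypothetical wrapper around the socket and its timestamp, and the 100 ms / 1 s values are the ones from the description:

import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Hypothetical wrapper: the socket plus the timestamp of its last blocking read.
class Connection {
    final Socket socket;
    long lastBlockingRead = System.currentTimeMillis();

    Connection(Socket socket) throws Exception {
        this.socket = socket;
        socket.setSoTimeout(100);   // a blocking read waits at most 100 ms
    }
}

class Worker {
    /** Returns true if the connection is still alive and should go back into the queue. */
    static boolean service(Connection c) throws Exception {
        InputStream in = c.socket.getInputStream();
        long idle = System.currentTimeMillis() - c.lastBlockingRead;

        if (idle < 1000 && in.available() == 0) {
            return true;   // nothing to read, requeue immediately
        }
        try {
            int b = in.read();   // blocks for at most 100 ms because of SO_TIMEOUT
            c.lastBlockingRead = System.currentTimeMillis();
            if (b == -1) {
                c.socket.close();   // the remote end closed the connection
                return false;
            }
            // 'b' is the first byte of the next request: read and handle the rest here.
            return true;
        } catch (SocketTimeoutException e) {
            c.lastBlockingRead = System.currentTimeMillis();   // the enhancement from the EDIT
            return true;   // still connected, just quiet; requeue
        }
    }
}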
No, the only way to discern an inactive client from a client that didn't shut down their socket properly is to send a ping or something to check if they're still there.
Possible solutions I can see are:
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if they're still there. I know you asked for a way to do this without sending anything, but if this is really a problem, i.e. you can't use the above solution, this is probably the best way to do it, depending on the specifics (since it's an exercise, you might have to imagine the specifics).
A mix of both; actually, this is probably better. Keep track of how long they've been quiet, and after a bit send them a ping to see if they're still alive.
I've been trying to figure out how to go about calling a method when the connection has been forcefully terminated by the client, or if the client just loses connection in general. Currently I have a List<> of all of my online accounts; however, if the player doesn't log out of the server naturally, the account will stay in the list.
I've been looking through the documentation and searching Google, wording my question in dozens of different ways, but I can't find the answer that I'm looking for.
Basically, I need a way to figure out which channel was disconnected, and pass it as a parameter to a method, is this possible? It almost has to be.
I guess this can be done using a thread on both the client and the server side.
Make a Date variable lastActive in the client class, which the client sets every 5 minutes (let's say). Another thread runs on the server side every 10 minutes to check this flag; if lastActive is older than 10 minutes, remove the player from the list. You can change these intervals according to your needs.
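A rough sketch of that server-side check; Player, the list handling and the intervals are illustrative only, and in practice lastActive would be refreshed whenever the server receives anything from that client:

import java.util.concurrent.CopyOnWriteArrayList;

class Player {
    volatile long lastActive = System.currentTimeMillis();   // refreshed on every message
    // ... channel, account data, etc.
}

class IdleReaper implements Runnable {
    private static final long TIMEOUT_MS = 10 * 60 * 1000;   // 10 minutes, adjust as needed
    private final CopyOnWriteArrayList<Player> online;

    IdleReaper(CopyOnWriteArrayList<Player> online) {
        this.online = online;   // the list of online accounts shared with the server
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            long now = System.currentTimeMillis();
            for (Player p : online) {
                if (now - p.lastActive > TIMEOUT_MS) {
                    online.remove(p);   // consider the player disconnected
                }
            }
            try {
                Thread.sleep(60_000);   // re-check once a minute
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}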
Reliably detecting socket disconnects is a common problem and not unique to Netty. The issue as you described is that your peer may not reliably terminate their end of the connection. For example: peer loses power, peer application crashes, peer machine crashes, etc... One common solution is to close the connection if no read activity has been detected for longer than some time interval. Netty provides some utilities to ease this process such as the ReadTimeoutHandler. Setting the time interval is application specific and will depend on your protocol. If your desired interval is sufficiently small you may have to add additional messages to your protocol to serve as a heartbeat message (a simple request/response to indicate each side is talking to each other).
From a Netty-specific point of view, you can register a listener with the Channel's CloseFuture that will notify you when the channel is closed. If you set up the ReadTimeoutHandler as previously described, then you will be notified of close events after your timeout interval passes with no activity detected, or when the channel is closed normally.
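A minimal Netty 4 sketch combining both pieces; the 30-second timeout is arbitrary and MyHandler stands in for your own protocol handler:

import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.ReadTimeoutHandler;

public class DisconnectAwareInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // Closes the channel if nothing is read from it for 30 seconds.
        ch.pipeline().addLast(new ReadTimeoutHandler(30));
        // ch.pipeline().addLast(new MyHandler());   // your own protocol handler goes here

        // Fires on normal close, read timeout and connection reset alike.
        ch.closeFuture().addListener((ChannelFutureListener) future -> {
            // future.channel() is the channel that went away:
            // remove the corresponding account from the online list here.
        });
    }
}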
I'm the main developer of an online game.
Players use a specific client software that connects to the game server with TCP/IP (TCP, not UDP)
At the moment, the architecture of the server is a classic multithreaded server with one thread per connection.
But in peak hours, when there are often 300 or 400 connected people, the server is getting more and more laggy.
I was wondering whether performance would be better if I switched to a java.nio.* asynchronous I/O model, with a few threads managing many connections.
Finding example code on the web that covers the basics of such a server architecture is very easy. However, after hours of googling, I didn't find answers to some more advanced questions:
1 - The protocol is text-based, not binary-based. The clients and the server exchange lines of text encoded in UTF-8. A single line of text represents a single command, and each line is properly terminated by \n or \r\n.
For the classic multithreaded server, I have this kind of code:
public Connection(Socket sock) throws IOException {
    this.in  = new BufferedReader(new InputStreamReader(sock.getInputStream(), "UTF-8"));
    this.out = new BufferedWriter(new OutputStreamWriter(sock.getOutputStream(), "UTF-8"));
    new Thread(this).start();
}
And then in run(), data is read line by line with readLine().
In the docs, I found a utility class, Channels, that can create a Reader out of a SocketChannel. But it is said that the produced Reader won't work if the channel is in non-blocking mode, which contradicts the fact that non-blocking mode is mandatory for the highly performant channel-selection API I want to use. So I suspect that it isn't the right solution for what I would like to do.
The first question is therefore the following: if I can't use that, how do I efficiently and properly take care of breaking lines and converting native Java strings from/to UTF-8 encoded data with the NIO API, using buffers and channels?
Do I have to play with get/put, or manipulate the wrapped byte array by hand? How do I go from a ByteBuffer to strings encoded in UTF-8? I admit I don't understand very well how to use the classes in the charset package to do that.
2 - In the asynchronous/non-blocking I/O world, what about the handling of consecutive reads/writes that by nature have to be executed sequentially, one after the other?
For example, the login procedure, which is typically challenge-response based: the server sends a question (a particular computation), the client sends the response, and then the server checks the response given by the client.
The answer, I think, is certainly not to make a single task for the whole login process and send it to the worker threads, as it is quite long, with the risk of freezing worker threads for too long. (Imagine this scenario: 10 pool threads, 10 players trying to connect at the same time; tasks related to players already online are delayed until a thread is free again.)
3 - What happens if two different threads simultaneously call Channel.write(ByteBuffer) on the same Channel?
Might the client receive mixed-up lines? For example, if one thread sends "aaaaa" and another sends "bbbbb", could the client receive "aaabbbbbaa", or am I assured that everything is sent in a consistent order? Am I allowed to modify the buffer right after the call returns?
Or asked differently, do I need additional synchronization to avoid this sort of situation?
If I need additional synchronization, how do I know when to release the locks and so on, i.e. when a write finishes?
I'm afraid the answer isn't as simple as registering for OP_WRITE in the selector. When I tried that, I noticed that I got the write-ready event all the time, and always for all clients, so Selector.select exited early, mostly for nothing, since there are only 3 or 4 messages to send per second per client, while the selection loop runs hundreds of times per second. So, potentially, a busy wait in perspective, which is very bad.
4 - Can multiple threads call Selector.select on the same selector simultaneously without any concurrency problems such as missing an event, scheduling it twice, etc?
5 - In fact, is NIO as good as it is said to be? Would it be interesting to stay with the classic multithreaded model, but instead of creating a thread per connection, use fewer threads and loop over the connections, checking for data with InputStream.available()? Is that idea stupid and/or inefficient?
1) Yes. I think that you need to write your own nonblocking readLine method. Note also that a nonblocking read may be signaled when there are several lines in the buffer, or when there is an incomplete line:
Example: (first read)
USER foo
PASS
(second read)
bar
You will need to store (see 2) the data that was not consumed, until enough information is ready to process it.
//channel was select for OP_READ
read data from channel
prepend data from previous read
split complete lines
save incomplete line
execute commands
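A rough sketch of that loop as real code, using a CharsetDecoder so that UTF-8 sequences split across reads are handled correctly; buffer sizes are arbitrary and error handling is omitted:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;
import java.util.List;

class LineReader {
    private final CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();
    private final ByteBuffer buf = ByteBuffer.allocate(4096);
    private final StringBuilder pending = new StringBuilder();   // the saved incomplete line

    /** Called when the channel was selected for OP_READ; adds complete lines, returns -1 on EOF. */
    int readLines(SocketChannel channel, List<String> lines) throws Exception {
        int n = channel.read(buf);
        if (n == -1) {
            return -1;   // the remote end closed the connection
        }
        buf.flip();
        // decode() leaves the trailing bytes of an incomplete UTF-8 sequence in 'buf'
        CharBuffer chars = CharBuffer.allocate(buf.remaining());
        decoder.decode(buf, chars, false);
        buf.compact();

        chars.flip();
        pending.append(chars);   // prepend the data left over from the previous read

        // split off every complete line, keep the incomplete tail for next time
        int nl;
        while ((nl = pending.indexOf("\n")) >= 0) {
            String line = pending.substring(0, nl);
            if (line.endsWith("\r")) {
                line = line.substring(0, line.length() - 1);
            }
            lines.add(line);
            pending.delete(0, nl + 1);
        }
        return n;
    }
}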
2) You will need to keep the state of each client.
Map<SocketChannel,State> clients = new HashMap<SocketChannel,State>();
when a channel is connected, put a fresh state into the map
clients.put(channel,new State());
Or store the current state as the attached object of the SelectionKey.
Then, when executing each command, update the state. You may write it as a monolithic method, or do something more fancy such as polymorphic implementations of State, where each state knows how to deal with some commands (e.g. LoginState expects USER and PASS, then you change the state into a new AuthorizedState).
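For example, with polymorphic states attached to the SelectionKey; LoginState and AuthorizedState are the hypothetical names from above, and the actual command handling is only stubbed:

// Hypothetical state machine: each state consumes one line and returns the next state.
interface State {
    State handle(String line);
}

class LoginState implements State {
    private String user;

    @Override
    public State handle(String line) {
        if (line.startsWith("USER ")) {
            user = line.substring(5);
            return this;                        // stay here, wait for PASS
        }
        if (line.startsWith("PASS ") && user != null) {
            return new AuthorizedState(user);   // login complete
        }
        return this;
    }
}

class AuthorizedState implements State {
    private final String user;

    AuthorizedState(String user) { this.user = user; }

    @Override
    public State handle(String line) {
        // handle ordinary commands for the authenticated user here
        return this;
    }
}

// Wiring, when the connection is accepted and whenever a full line arrives:
//     key.attach(new LoginState());
//     key.attach(((State) key.attachment()).handle(line));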
3) I don't recall using NIO with many asynchronous writers per channel, but the documentation says it is thread safe (I won't elaborate, since I have no proof of this). About OP_WRITE, note that it signals when the write buffer is not full. In other words, as said here: OP_WRITE is almost always ready, i.e. except when the socket send buffer is full, so you will just cause your Selector.select() method to spin mindlessly.
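A common way around that spinning (not from the original answer, just the usual pattern) is to keep a per-channel outgoing queue and only set OP_WRITE while that queue is non-empty:

import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class OutgoingQueue {
    private final Queue<ByteBuffer> outgoing = new ConcurrentLinkedQueue<>();

    /** Called from any thread: queue a message and ask for OP_WRITE until it is flushed. */
    void send(SelectionKey key, ByteBuffer data) {
        outgoing.add(data);
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        key.selector().wakeup();   // in case the selector thread is blocked in select()
    }

    /** Called from the selector loop when the key reports isWritable(). */
    void flush(SelectionKey key) throws Exception {
        SocketChannel channel = (SocketChannel) key.channel();
        while (!outgoing.isEmpty()) {
            ByteBuffer buf = outgoing.peek();
            channel.write(buf);
            if (buf.hasRemaining()) {
                return;   // socket send buffer is full, keep OP_WRITE set and try later
            }
            outgoing.remove();
        }
        // Nothing left to write: drop OP_WRITE so select() stops firing for nothing.
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }
}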
4) Yes. Selector.select() performs a blocking selection operation.
5) I think that the most difficult part is switching from a thread-per-client architecture, to a different design where reads and writes are decoupled from processing. Once you have done that, it is easier to work with channels than working your own way with blocking streams.
I am developing an Android application communicating with a TCP Java server over a WLAN connection. The Android application is a game with sprites being moved around the screen. Whenever a sprite moves, the AndroidClient sends its coordinates to the Java server, which then sends the data to the other clients (a maximum of 4 clients). The server handles each client on a separate thread, data updates are sent about every 20 ms, and each packet consists of about 1-10 bytes. I am on a 70 Mbit network (with about 15 Mbit effective on my wireless).
I am having problems with an unstable connection, experiencing latency of about 50-500 ms every 10th-30th packet. I have set tcpNoDelay to true, which stopped the consistent 200 ms latency, although it still lags a lot. As I am quite new to both Android and networking, I don't know whether this is to be expected or not. I am also wondering whether UDP could be suitable for my program, as I am interested in sending updates fast rather than having every packet arrive correctly.
I would appreciate any guidance as to how to avoid/work around this latency problem. General tips on how to implement such a client-server architecture would also be applauded.
On a wireless LAN you'll occasionally see dropped packets, which results in a packet retransmission after a delay. If you want to control the delay before retransmission you're almost certainly going to have to use UDP.
You definitely want to use UDP. For a game you don't care if the position of a sprite is incorrect for a short time. So UDP is ideal in this case.
Also, if you have any control over the server code, I would not use separate threads for clients. Threads are useful if you need to make calls to libraries that you don't have control over and that can block (such as because they touch a file or try to perform additional network communication). But they are expensive. They consume a lot of resources and as such they actually make things slower than they could be.
So for a network game server where latency and performance are absolutely critical, I would just use one thread to process a queue of commands that have a state, and then make sure that you never perform an operation that blocks. So each command is processed in order, its state is evaluated and updated (like a laser blast intersecting with another object). If the command requires blocking (like reading from a file), then you need to perform a non-blocking read and set the state of that command accordingly, so that your command processor never blocks. The key is that the command processor can never, ever block. It would just run in a loop, but you would have to call Thread.sleep(x) in an appropriate way so as not to waste CPU.
As for the client side, when a client submits a command (like firing a laser or some such), the client would generate a response object and insert it into a Map with a sequence id as the key. Then it would send the request with the sequence id, and when the server responds with that id, you just look up the response object in the Map and decode the response into that object. This allows you to perform concurrent operations.
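A sketch of that client-side bookkeeping; Response, the ids and the waiting strategy are placeholders, not part of any particular library:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical pending-response object the caller can wait on (or poll).
class Response {
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile byte[] payload;

    void complete(byte[] data) { payload = data; done.countDown(); }
    byte[] await() throws InterruptedException { done.await(); return payload; }
}

class RequestTracker {
    private final AtomicInteger nextId = new AtomicInteger();
    private final ConcurrentHashMap<Integer, Response> pending = new ConcurrentHashMap<>();

    /** Register a response object before sending; the returned id goes out with the request. */
    int register(Response response) {
        int id = nextId.incrementAndGet();
        pending.put(id, response);
        return id;
    }

    /** Called by the receive loop when a packet carrying this sequence id comes back. */
    void dispatch(int id, byte[] data) {
        Response response = pending.remove(id);
        if (response != null) {
            response.complete(data);   // whoever is waiting on await() wakes up
        }
    }
}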
I'm writing a chess program in Java. The GUI should be able to communicate with a chess engine supporting the Chess Engine Communication Protocol. But I'm having some difficulties reconciling the protocol with Java's I/O facilities.
Because engines that predate protocol version 2 do not send "feature", xboard uses a timeout mechanism: when it first starts your engine, it sends "xboard" and "protover N", then listens for feature commands for two seconds before sending any other commands.
It seems that Java's facilities for interrupting I/O operations are limited. The only option I can find is NIO's InterruptibleChannel, which closes itself when interrupted.
I don't want the stream to close when the timeout occurs -- I just want to interrupt the read. Does anyone know a solution?
I think you may be overthinking the problem. You don't need to abort the read() call after 2 seconds, you just need your backing logic to understand that after 2 seconds it should not expect to receive any "feature" commands. Then your implementation can write the next command, and your read() will return the byte(s) from the response to that command.
That's how I'd approach it anyways, by having generic code that reads in bytes and passes them further up the chain where context-specific processing can be done. Then you don't need to interrupt the read, the upstream code just needs to understand that the data it eventually gets back may be a "feature" command, or it may not be.
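A sketch of that idea; the 2-second window comes from the quoted protocol text, everything else (class and method names) is made up for illustration:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class FeatureWindow {
    private static final long WINDOW_MS = 2000;   // the two seconds from the protocol text
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private volatile boolean expectingFeatures;

    /** Call right after writing "xboard" and "protover 2" to the engine. */
    void start(Runnable sendNextCommands) {
        expectingFeatures = true;
        timer.schedule(() -> {
            expectingFeatures = false;    // window closed: no more "feature" commands expected
            sendNextCommands.run();       // e.g. write "new" and carry on with the game
        }, WINDOW_MS, TimeUnit.MILLISECONDS);
    }

    /** Called by the reader thread for every line from the engine; the read is never interrupted. */
    void onLine(String line) {
        if (expectingFeatures && line.startsWith("feature ")) {
            // record the announced features here
        } else {
            // ordinary engine output (moves, thinking lines, ...) handled as usual
        }
    }
}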
It's not clear to me that you need to do anything much. What you have quoted is the timeout behaviour of the board. You don't have to implement that; it is done at the board, which is the peer, i.e. the other end.