How to detect disconnection of Serial Port connection with javax.comm? - java

When a serial port (RS232) connection is interrupted or disconnected, how can I detect that and report it to the user?
As this can happen at any moment, I suppose I should use a separate thread.
My communication pattern is: send message - receive message.

RS232 does not report the "state" of the physical connection, so I guess it is not possible to directly get an event on connect/disconnect. Probably the only way is to send something and detect that the answer is missing (a heartbeat or similar).

How much control do you have over the hardware or the remote device? You might be able to do something with the flow control lines: for example, if you are not using RTS/CTS flow control but you can make your target device assert/deassert CTS, you might detect a change in CTS when the device is disconnected. Be warned, though: this might work fine on one piece of hardware and not on another due to hardware differences. In the general case, I agree with #MrD above that the most reliable/portable solution is to implement some heartbeat messaging (you send a message and wait for a response, like a TCP/IP ping), and generate a 'disconnect' event if you don't get a response within some timeout.
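A minimal sketch of that approach with javax.comm, assuming the SerialPort is already open, that the remote device answers a probe byte (the 0x05 probe and the 2 s/5 s intervals are arbitrary assumptions), and that the CTS notification is only useful if your hardware actually drives that line:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.TooManyListenersException;

import javax.comm.SerialPort;
import javax.comm.SerialPortEvent;
import javax.comm.SerialPortEventListener;

// Watches an already-open SerialPort in a background thread: sends a probe
// byte periodically and reports a disconnect if no reply arrives in time.
// It also listens for CTS changes, which only helps if the remote device
// drives that line. The probe byte and reply convention are assumptions;
// adapt them to your own protocol.
public class SerialDisconnectWatcher implements Runnable, SerialPortEventListener {

    private final SerialPort port;
    private final OutputStream out;
    private final InputStream in;
    private volatile boolean running = true;

    public SerialDisconnectWatcher(SerialPort port) throws IOException, TooManyListenersException {
        this.port = port;
        this.out = port.getOutputStream();
        this.in = port.getInputStream();
        port.addEventListener(this);
        port.notifyOnCTS(true);           // fires serialEvent() when CTS toggles
    }

    @Override
    public void run() {
        while (running) {
            try {
                out.write(0x05);          // hypothetical "are you there?" probe
                out.flush();
                long deadline = System.currentTimeMillis() + 2000;
                boolean replied = false;
                while (System.currentTimeMillis() < deadline) {
                    if (in.available() > 0) { in.read(); replied = true; break; }
                    Thread.sleep(50);
                }
                if (!replied) {
                    onDisconnected("no heartbeat reply within 2 s");
                    return;
                }
                Thread.sleep(5000);       // heartbeat interval
            } catch (IOException | InterruptedException e) {
                onDisconnected(e.toString());
                return;
            }
        }
    }

    @Override
    public void serialEvent(SerialPortEvent ev) {
        if (ev.getEventType() == SerialPortEvent.CTS && !port.isCTS()) {
            onDisconnected("CTS dropped");
        }
    }

    private void onDisconnected(String reason) {
        running = false;
        System.err.println("Serial link lost: " + reason);   // report to the user here
    }
}
```

Run it on its own thread (new Thread(watcher).start()) so the probing does not block your normal send/receive logic.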

Related

Java - Detect a ping

Is it possible to detect a ping? That is, device 1 pings device 2, and I want code running on device 2 that detects whenever it is pinged by device 1.
The literal message used by the ping utility ("ICMP Echo Request") can be difficult to detect because it is customarily handled, and "eaten," by the network protocol stack.
But if you simply want one computer or process to "broadcast" the fact that it's present (i.e. you don't need to receive a "reply," and you don't strictly care whether the message actually arrives at a particular place), the "UDP" network protocol might be just what you're looking for. Programming examples abound on the Internet. (And, right here, awaiting your "search.")
("UDP" is a datagram protocol, which so-to-speak "tosses a paper airplane out the window", whereas "TCP/IP" is concerned with bidirectional connections that are established, used, then torn-down.)
You can open a TCP or UDP socket on device 2 on a specific port and then try to connect to that port from device 1.
https://docs.oracle.com/javase/tutorial/networking/sockets/readingWriting.html
You can decide what you want to use by reading about TCP and UDP.
https://en.wikipedia.org/wiki/Transmission_Control_Protocol
https://en.wikipedia.org/wiki/User_Datagram_Protocol
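If you prefer the TCP variant, a connect with a timeout doubles as a reachability check; device 2 only needs a ServerSocket accepting on the agreed port (the host and port below are placeholders):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Device 1: a TCP alternative to ICMP ping. If device 2 has a ServerSocket
// listening on the agreed port, a successful connect() means it is reachable.
public class TcpReachabilityCheck {
    public static boolean isReachable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;                 // refused, timed out, or unroutable
        }
    }

    public static void main(String[] args) {
        System.out.println(isReachable("192.168.1.42", 9876, 1000));
    }
}
```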

Netty - Call a method on connection termination

I've been trying to figure out how to call a method when the connection has been forcefully terminated by the client, or when the client simply loses its connection. Currently I have a List<> of all of my online accounts; however, if the player doesn't log out of the server naturally, the account stays in the list.
I've been looking through the documentation and searching Google, wording my question in dozens of different ways, but I can't find the answer I'm looking for.
Basically, I need a way to figure out which channel was disconnected and pass it as a parameter to a method. Is this possible? It almost has to be.
I guess this can be done using a thread on both the client and the server side.
Keep a Date field lastActive in the client class, which the client updates every 5 minutes (say). Another thread runs on the server side every 10 minutes to check this field; if lastActive is older than 10 minutes, remove the player from the list. You can adjust these intervals to suit your needs.
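A sketch of that idea, assuming the server keeps a map of player names to lastActive timestamps that each client refreshes by checking in (the names, the map layout, and the intervals are illustrative):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Server-side sweep: every 10 minutes, drop any player whose lastActive
// timestamp is older than 10 minutes.
public class StalePlayerSweeper {

    private final Map<String, Long> onlinePlayers = new ConcurrentHashMap<>(); // name -> lastActive millis
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Called whenever a client checks in.
    public void touch(String playerName) {
        onlinePlayers.put(playerName, System.currentTimeMillis());
    }

    public void start() {
        scheduler.scheduleAtFixedRate(() -> {
            long cutoff = System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(10);
            Iterator<Map.Entry<String, Long>> it = onlinePlayers.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<String, Long> entry = it.next();
                if (entry.getValue() < cutoff) {
                    System.out.println("Removing inactive player " + entry.getKey());
                    it.remove();
                }
            }
        }, 10, 10, TimeUnit.MINUTES);
    }
}
```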
Reliably detecting socket disconnects is a common problem and not unique to Netty. The issue as you described is that your peer may not reliably terminate their end of the connection. For example: peer loses power, peer application crashes, peer machine crashes, etc... One common solution is to close the connection if no read activity has been detected for longer than some time interval. Netty provides some utilities to ease this process such as the ReadTimeoutHandler. Setting the time interval is application specific and will depend on your protocol. If your desired interval is sufficiently small you may have to add additional messages to your protocol to serve as a heartbeat message (a simple request/response to indicate each side is talking to each other).
From a Netty-specific point of view, you can register a listener with the Channel's CloseFuture that will notify you when the channel is closed. If you set up the ReadTimeoutHandler as previously described, then you will be notified of close events after your timeout interval passes with no activity, or when the channel is closed normally.
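A Netty 4 sketch combining the two (the 60-second timeout and the account-list cleanup are placeholders for your own values and bookkeeping):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.ReadTimeoutHandler;

// Closes idle channels via ReadTimeoutHandler and gets a callback, with the
// channel in hand, whenever a channel closes for any reason.
public class DisconnectAwareInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) {
        // Closes the channel if nothing is read for 60 seconds.
        ch.pipeline().addLast(new ReadTimeoutHandler(60));
        // ... add your protocol codecs and business handlers here ...

        // Fires once the channel is closed, whether by timeout, by the peer,
        // or by a normal logout.
        ch.closeFuture().addListener((ChannelFutureListener) future -> {
            Channel closed = future.channel();
            System.out.println("Channel closed: " + closed.remoteAddress());
            // onlineAccounts.removeBy(closed);   // hypothetical cleanup of your account list
        });
    }
}
```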

How to check if a message is really delivered in Netty using websocket?

I'm developing a websocket application by using Netty. I'd like to know if a message is really delivered from a source to a destination. In particular, let's assume that a client and a server have an open channel and exchange some messages for a while. At a certain point, the client goes down, but the channel is still active in Netty. I tried to use isReachable() before sending the message, but this method seems to be buggy in some scenarios (e.g. a machine with Win7 is up, but isReachable() returns false). Now, my idea is to implement a mechanism using ACKs, namely the server sends the message and the client sends back an ack. To do that, I need a timeout to see if, after a certain interval, the corresponding ack does not arrive. Is there something similar in Netty?
Regarding isReachable() - it's only a best effort API. The documentation points out that it tries to send an ICMP echo request or create a TCP connection to port 7 on the destination host, both of which are highly likely to be blocked by a firewall. Is this happening in your case?
As for the acknowledgement, there's nothing in Netty that provides this as standard, but it shouldn't be too difficult to implement. Firstly, each message needs to be uniquely identifiable by some sort of identifier, possibly a sequence number, though a globally unique identifier means you can potentially recover across disconnections. Then you want to create a combined handler that implements both ChannelInboundHandler and ChannelOutboundHandler (assuming Netty 4). When a message is sent:
add the message to a map indexed by its id
create a timer associated with the message id. Add it to another map indexed by message id
forward the message
When the ACK is received cancel the timer and remove the timer and message from their respective maps. If the timer fires use the associated id to decide what to do with the timer and message (possibly retransmit and reset the timer).
Netty provides a HashedWheelTimer for efficiently managing lots of timers with a resolution suitable for this kind of activity.
You may also want to consider putting a limit on the number of retries so you can stop and raise an error rather than retrying indefinitely.
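A Netty 4 sketch of that bookkeeping, using ChannelDuplexHandler (which combines the inbound and outbound handler interfaces) and a HashedWheelTimer; OutboundMessage and AckMessage are hypothetical protocol types assumed to carry a getId(), and the 5-second timeout is an arbitrary choice:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.util.HashedWheelTimer;
import io.netty.util.Timeout;

// Tracks every outgoing message by id, arms a timer for it, and cancels the
// timer when the matching ACK comes back.
public class AckTrackingHandler extends ChannelDuplexHandler {

    private final HashedWheelTimer timer = new HashedWheelTimer();
    private final Map<String, Object> pendingMessages = new ConcurrentHashMap<>();
    private final Map<String, Timeout> pendingTimeouts = new ConcurrentHashMap<>();

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        if (msg instanceof OutboundMessage) {
            String id = ((OutboundMessage) msg).getId();
            pendingMessages.put(id, msg);
            // Fires if no ACK arrives within 5 seconds.
            Timeout timeout = timer.newTimeout(t -> {
                Object unacked = pendingMessages.remove(id);
                pendingTimeouts.remove(id);
                if (unacked != null) {
                    // Retransmit, count retries, or raise an error here.
                    System.err.println("No ACK for message " + id);
                }
            }, 5, TimeUnit.SECONDS);
            pendingTimeouts.put(id, timeout);
        }
        ctx.write(msg, promise);            // forward the message down the pipeline
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof AckMessage) {
            String id = ((AckMessage) msg).getId();
            Timeout timeout = pendingTimeouts.remove(id);
            if (timeout != null) {
                timeout.cancel();           // ACK arrived in time
            }
            pendingMessages.remove(id);
        } else {
            ctx.fireChannelRead(msg);       // not an ACK, pass it along
        }
    }

    // Hypothetical protocol types, assumed to carry a unique id.
    interface OutboundMessage { String getId(); }
    interface AckMessage { String getId(); }
}
```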

Java TCP latency

I am developing an Android application communicating with a TCP Java server over a WLAN connection. The Android application is a game with sprites being moved around the screen. Whenever a sprite moves, the Android client sends its coordinates to the Java server, which then sends the data to the other clients (maximum 4 clients). The server handles each client on a separate thread, data updates are sent about every 20 ms, and each packet consists of about 1-10 bytes. I am on a 70 Mbit network (with about 15 Mbit effective on my wireless).
I am having problems with an unstable connection, experiencing latency of about 50-500 ms every 10th-30th packet. I have set tcpNoDelay to true, which stopped the consistent 200 ms latency, although it still lags a lot. As I am quite new to both Android and networking, I don't know whether this is to be expected or not. I am also wondering whether UDP could be suitable for my program, as I am interested in sending updates fast rather than in every packet arriving correctly.
I would appreciate any guidance as to how to avoid/work around this latency problem. General tips on how to implement such a client-server architecture would also be applauded.
On a wireless LAN you'll occasionally see dropped packets, which results in a packet retransmission after a delay. If you want to control the delay before retransmission you're almost certainly going to have to use UDP.
You definitely want to use UDP. For a game you don't care if the position of a sprite is incorrect for a short time. So UDP is ideal in this case.
Also, if you have any control over the server code, I would not use separate threads for clients. Threads are useful if you need to make calls to libraries that you don't have control over and that can block (such as because they touch a file or try to perform additional network communication). But they are expensive. They consume a lot of resources and as such they actually make things slower than they could be.
So for a network game server where latency and performance are absolutely critical, I would just use one thread to process a queue of commands that carry state, and then make sure that you never perform an operation that blocks. Each command is processed in order and its state is evaluated and updated (like a laser blast intersecting with another object). If the command requires blocking (like reading from a file), then you need to perform a non-blocking read and set the state of that command accordingly, so that your command processor never blocks. The key is that the command processor can never, ever block. It would just run in a loop, but you would have to call Thread.sleep(x) in an appropriate way so as not to waste CPU.
As for the client side, when a client submits a command (like firing a laser), the client would generate a response object and insert it into a Map with a sequence id as the key. Then it would send the request with the sequence id, and when the server responds with that id, you just look up the response object in the Map and decode the response into that object. This allows you to perform concurrent operations.
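A minimal client-side sketch of that sequence-id map (the actual wire format is left out; sendOverNetwork() is a placeholder for your own socket I/O):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Keeps a map of pending responses keyed by sequence id, so many requests can
// be in flight at once and each reply is routed back to its waiting caller.
public class PendingResponses {

    private final AtomicLong nextId = new AtomicLong();
    private final Map<Long, CompletableFuture<byte[]>> pending = new ConcurrentHashMap<>();

    // Called when the client fires a command, e.g. "laser fired".
    public CompletableFuture<byte[]> send(byte[] payload) {
        long id = nextId.incrementAndGet();
        CompletableFuture<byte[]> future = new CompletableFuture<>();
        pending.put(id, future);            // remember who is waiting for this id
        sendOverNetwork(id, payload);       // placeholder: your UDP/TCP send
        return future;                      // caller reacts when it completes
    }

    // Called by the network read loop when a response with this id arrives.
    public void onResponse(long id, byte[] body) {
        CompletableFuture<byte[]> future = pending.remove(id);
        if (future != null) {
            future.complete(body);          // decode into the waiting object
        }
    }

    private void sendOverNetwork(long id, byte[] payload) {
        // ... write the sequence id plus payload to your socket here ...
    }
}
```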

Can Java ServerSocket and Sockets using ObjectIOStreams lose packets?

I'm using a ServerSocket on my server and Sockets that use ObjectIOStreams to send serializable objects over the network connection. I'm developing an essentially more financial version of Monopoly, so packets must be sent and confirmed as sent/received. Do I need to implement my own packet-loss watcher, or is that already taken care of by (Server)Sockets?
I'm primarily asking about losing packets during network blips or the like, not a full connection error, e.g. a sibling moving a lead plate between my router and my computer's Wi-Fi adapter.
http://code.google.com/p/inequity/source/browse/#svn/trunk/src/network
Code can be found under network->ClientController and network->Server
Theoretically, yes. There is no way of giving a 100% theoretical guarantee that what is sent on the hardware layer is received the same way on the receiving end.
Practically, however, if you use TCP (Transmission Control Protocol) this has already been taken care of; you won't lose any packets. (If you're using UDP (User Datagram Protocol), on the other hand, it's another story, and it may very well be the case that you're losing packets or receiving them out of order.)
I just looked briefly at your code, and it seems you're using multiple threads. If so, you must be very careful with synchronization. It could very well be the case that it looks like a packet has been dropped when in fact it is simply not handled due to a race condition in the program. (Keep in mind that the GUI, for instance, runs in its own thread.)
The best way to solve the synchronization, I think, is to put the network loop in a very small read/put-on-synchronized-queue loop, and pick up the received packets from the queue whenever you're sure no other thread will intervene.
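A sketch of that read-and-enqueue loop, assuming the packets are the serializable objects read off an ObjectInputStream:

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The network thread does nothing but read objects and enqueue them; the game
// (or GUI) thread drains the queue when it is ready, so no game logic ever
// runs on the network thread.
public class PacketReader implements Runnable {

    private final ObjectInputStream in;
    private final BlockingQueue<Object> received = new LinkedBlockingQueue<>();

    public PacketReader(ObjectInputStream in) {
        this.in = in;
    }

    @Override
    public void run() {
        try {
            while (true) {
                Object packet = in.readObject();   // blocks until a packet arrives
                received.put(packet);              // hand it off; no processing here
            }
        } catch (IOException | ClassNotFoundException | InterruptedException e) {
            // Connection closed or thread stopped; the consumer will find the queue empty.
        }
    }

    // Called from the game/GUI thread when it is safe to process input.
    public Object poll() {
        return received.poll();                    // null if nothing is waiting
    }
}
```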
