UDP packet route in Java

I have a question about a UDP packet's life/route. I have a simple client/server UDP scheme with a send call on the client side and a receive call on the server side. Let's say the send method gets called and the packet actually arrives on the other side, BUT the server's code execution hasn't yet reached the receive call. What happens to the packet in that time? I tried stopping execution before the receive call with a simple command-line input prompt, waited a little, let it continue, and noticed that the packet still got received. Can you explain WHY that happens? Is it buffered at a different OSI level?
Thanks in advance.

Every TCP or UDP socket has a send buffer and a receive buffer. Your datagram got queued into the send buffer at the sender, then it was sent, then it was queued into the receive buffer at the receiver, and then you read it from there.
NB The OSI model has nothing to do with it. TCP/IP doesn't obey the OSI model; it has its own, prior model.

The "receive" method call doesn't receive the packet. If there's a UDP socket "open" for that port, it means that there is buffer space allocated, and that's where the NIC+OS put the data. When you call "receive", it just looks there, and if there's anything there, then it pretends to have just received it.
I should add that if the buffer is empty, then the receive call does go into a blocking state, waiting to get notified by the OS that something has arrived.
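A minimal sketch of the experiment described in the question (port 5000 and the loopback address are arbitrary choices, and both sockets run in one process for simplicity): the datagram sits in the OS receive buffer until receive() is finally called, even if that happens seconds after the packet arrived.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpBufferDemo {
    public static void main(String[] args) throws Exception {
        // Receiver: binding the socket is what allocates the OS receive buffer.
        DatagramSocket receiver = new DatagramSocket(5000);
        System.out.println("Receive buffer size: " + receiver.getReceiveBufferSize());

        // Sender: fire the datagram right away.
        DatagramSocket sender = new DatagramSocket();
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        sender.send(new DatagramPacket(payload, payload.length,
                InetAddress.getLoopbackAddress(), 5000));

        // Simulate the server not having reached receive() yet.
        Thread.sleep(3000);

        // The datagram was queued by the OS; receive() just copies it out of the buffer.
        DatagramPacket in = new DatagramPacket(new byte[1500], 1500);
        receiver.receive(in);
        System.out.println(new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8));

        sender.close();
        receiver.close();
    }
}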

Underlying Workings of SelectionKey.interestOps(int ops)

I understand that a server socket channel is registered to listen for OP_ACCEPT; when a connection is accepted, the resulting channel is registered for OP_READ, and once read it is registered for OP_WRITE, all by adding the relevant flags to the SelectionKey's interest set using the interestOps method.
However, what about when we remove some interest ops from a key, e.g. key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
What actually happens here? Does it mean that the server will just not listen for any incoming requests on the channel belonging to this socket, while the source channel remains oblivious to this decision and might keep on sending data to the server? Or will the server somehow inform the channel's source of this decision?
In packet-switching parlance, is the above operation effectively the same as the server receiving packets and just dropping them if the interest ops for the channel these packets belong to have been "unset"?
However, when we remove some interest ops from a key, e.g. key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
What actually happens here?
What literally happens is something like:
public void interestOps(int interestOps)
{
    this.interestOps = interestOps;
}
Does this mean that the server will just not listen for any incoming requests on the channel belonging to this socket
It means that the Selector won't trigger any OP_READ events if data arrives via the socket. It doesn't mean that data won't be received.
and the source channel will be oblivious of this decision by the server and might keep on sending data to the server?
If by 'source channel' you mean the peer, it is not advised in any way, unless the receive buffer fills up at the receiver.
Or will it somehow inform the channel source of this decision.
No.
In packet switching parlance, is the above operation effectively the same as server receiving packets and just dropping the packet if the interestKeys for the channel this packet belong to have been "unset".
No.
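A small sketch of what this means in practice (the helper names are illustrative): toggling the flag changes only what the Selector reports, not what the network does.

import java.nio.channels.SelectionKey;

class InterestOpsToggle {
    // Clearing OP_READ only stops the Selector from reporting read-readiness.
    // The peer is not informed; incoming data keeps queuing in the socket's
    // receive buffer until it fills up and TCP flow control throttles the sender.
    static void pauseReads(SelectionKey key) {
        key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
    }

    static void resumeReads(SelectionKey key) {
        key.interestOps(key.interestOps() | SelectionKey.OP_READ);
        key.selector().wakeup(); // wake the selector if this runs outside the select-loop thread
    }
}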

Why does java.nio.SocketChannel not send data (JDiameter)?

I have created a simple Diameter client and server (link to sources). The client must send 10000 CCR messages, but in Wireshark I see only ~300 CCR messages being sent. The other messages raise timeouts on the client. I run the server and client on different computers with Windows 7. I found the line in the JDiameter sources where JDiameter sends the CCR (line 280), and I think that when the socket's send buffer is full, the CCR is not sent. Before line 280 I added this code:
while(bytes.hasRemaining())
Now the client sends ~9900 CCRs, but very slowly. I tested the client against another Diameter server written in C++; the client (on JDiameter, without my changes) sent ~7000 CCRs, but that server is hosted on Debian.
I don't know how to solve this problem; thanks for any help.
If the sender's write() returns zero, it means the sender's socket send buffer is full, which in turn means the receiver's socket receive buffer is full, which in turn means that the receiver is reading more slowly than the sender is sending.
So speed up the receiver.
NB In non-blocking mode, merely looping around the write() call while it returns zero is not adequate. If write() returns zero you must:
Deregister the channel for OP_READ and register it for OP_WRITE
Return to the select loop.
When OP_WRITE fires, do the write again. This time, if it doesn't return zero, deregister OP_WRITE and (probably, depending on your requirements) re-register OP_READ.
Note that keeping the channel registered for OP_WRITE all the time isn't correct either. A socket channel is almost always writable, meaning there is almost always space in the socket send buffer; what you're interested in is the transition from not-writable to writable, as sketched below.
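A sketch of that pattern (illustrative helper names, assuming a standard selector loop): register OP_WRITE only when a write comes up short, and drop it again once the buffer drains.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

class WriteHelper {
    // Call this when you have data to send; 'key' is the channel's registration.
    static void writeOrWait(SelectionKey key, ByteBuffer buf) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        channel.write(buf);
        if (buf.hasRemaining()) {
            // Send buffer is full: stop reading and ask to be told when the socket is writable again.
            key.interestOps((key.interestOps() | SelectionKey.OP_WRITE) & ~SelectionKey.OP_READ);
        }
    }

    // Call this from the select loop when key.isWritable() is true.
    static void onWritable(SelectionKey key, ByteBuffer buf) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        channel.write(buf);
        if (!buf.hasRemaining()) {
            // Everything flushed: stop watching OP_WRITE and go back to reading.
            key.interestOps((key.interestOps() & ~SelectionKey.OP_WRITE) | SelectionKey.OP_READ);
        }
    }
}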

java socket sending and receiving

For hours I have been thinking about how I should use Java sockets to send and receive data. I hope you can help me find a good way to do it.
I have built a server/client structure where the client connects to the server and gets its own thread handling a request-response exchange. The information is encapsulated in XML.
At the beginning the server starts a blocking socket read. First I use writeInt(), which transmits the length of the XML message. After that the server reads that number of bytes and parses the message. After the transmission the client goes into the receive state and waits for a response.
That is fine, but after the client authenticates, the server waits for whatever comes next and blocks.
But what should I do when the server has no information that needs to be transmitted? I could try to break out of the blocking read and send the message to the client. But what happens if the client decides at the same moment that it also has a message and begins to send? At that moment no one will be listening.
Maybe I could use some sort of buffer for that, but I have the feeling that this is not the way I should go.
I have been searching for a while and found some interesting information, but I didn't understand it well. Sadly my book only covers this simple socket model.
Maybe I should use two threads: one sending and one receiving. The server has a database where the messages are stored while waiting for transmission. So when the server receives a message it stores that message in the database, and the answer, e.g. "message received", would also be stored in the database. The sender thread would look for new messages and would send "message received" to the client. That approach would not send the answer within milliseconds, but I could imagine that it would work well.
I hope I have given you enough information about what I am trying to do. How would you recommend implementing that communication?
Thanks
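For reference, the length-prefixed framing described above typically looks something like this (the method names are illustrative, assuming a DataOutputStream/DataInputStream pair wrapped around the socket streams):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

class XmlFraming {
    // Write one length-prefixed XML message: a 4-byte length, then the UTF-8 body.
    static void writeMessage(OutputStream rawOut, String xml) throws IOException {
        DataOutputStream out = new DataOutputStream(rawOut);
        byte[] body = xml.getBytes(StandardCharsets.UTF_8);
        out.writeInt(body.length);
        out.write(body);
        out.flush();
    }

    // Read one length-prefixed XML message; blocks until the whole body has arrived.
    static String readMessage(InputStream rawIn) throws IOException {
        DataInputStream in = new DataInputStream(rawIn);
        int length = in.readInt();
        byte[] body = new byte[length];
        in.readFully(body);
        return new String(body, StandardCharsets.UTF_8);
    }
}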
But what should I do when the server has no information which needs to be transmitted.
I would make writes from the server synchronized. This allows you to respond to requests in one thread, but also send other messages as required from another thread. For the client you can have a separate thread doing the receiving.
A simpler model may be to have two sockets. One socket works as you do now, and when you "login" a unique id is sent to the client. At this point the client opens a second connection with a dedicated thread for listening for asynchronous events. It passes the unique id so the server knows which client the asynchronous messages are for.
This will give you a simple synchronous pattern, as you have now, and a simple asynchronous pattern. The only downside is that you have two socket connections and you have to co-ordinate them.
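A rough sketch of the synchronized-write idea (the class and method names are made up): the request-response thread and a separate push thread both funnel their writes through one lock, so whole messages never interleave on the wire.

import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

class ClientConnection {
    private final DataOutputStream out;

    ClientConnection(DataOutputStream out) {
        this.out = out;
    }

    // Both the request-response thread and a server-push thread call this method;
    // the lock ensures each length-prefixed message is written atomically, never interleaved.
    synchronized void send(String xml) throws IOException {
        byte[] body = xml.getBytes(StandardCharsets.UTF_8);
        out.writeInt(body.length);
        out.write(body);
        out.flush();
    }
}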

Where do datagram packets go when their destination is offline?

Do those packets simply disappear? Do they wait for the destination? Or does the packet come back and throw an exception?
And in Java, what is the difference between the byte[] buffer and the length in the DatagramPacket constructor?
DatagramPacket dp = new DatagramPacket(new byte[...], length);
From Wikipedia:
UDP is... Unreliable – When a message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission or timeout.
Even if the destination is online, there is no guarantee that the UDP packet will arrive, that it will arrive in the order sent, or that it will not be fragmented. (I believe packets smaller than 532 bytes will not be fragmented.) It is possible to have all three for the same packet: fragmented, out of order and incomplete.
The simplicity and stability of your network will determine how robust UDP packet delivery is, but you have to assume it is unreliable at least some of the time. All you can do is minimise the loss.
It is up to you to decide what to do if a packet is lost and how to detect it.
If you want broadcast, reliable delivery of messages I suggest you look at JMS Topics or Queues, like ActiveMQ.
If you use the UDP protocol, you can't guarantee that your packet is going to be received.
So the answer is: it will be sent even if its destination is not online.
With the TCP protocol, delivery is acknowledged: either the recipient receives the data or the sender eventually gets an error. The data is not queued somewhere and delivered later once an offline peer comes back online.
Do those packets simply disappear? Do they wait for the destination? Or does the packet come back and throw an exception?
What happens depends on the nature of the "offline" status.
If the UDP message reaches the host, but the application is not listening, it will typically be silently discarded. It definitely won't be queued waiting for the application to listen. (That would be pointless, and potentially dangerous.)
If the UDP message cannot get to the host because the host itself is offline, the message will be silently discarded. (If the packets can reach the destination host's local network, then there is nothing apart from the host itself that can tell if the host actually received the packets.)
If the network doesn't know how to route the IP packets to the UDP server (and a few other scenarios), an ICMP "Destination Unreachable" packet may be sent to the sender, and that typically gets reported as a Java exception. However this is not guaranteed. So the possible outcomes are:
the UDP packet is black-holed and the sender gets no indication, or
the UDP packet is black-holed and the sender gets a Java exception.
If the UDP packet is blocked by a firewall, then the behaviour is hard to predict. (Firewalls often "lie" in their responses to unwanted traffic.)
The only situation where you would expect there to be queuing of UDP traffic is when the network is working, the host is working and the application is listening. Limited queuing is then possible if the application is slow in accepting the packets; i.e. it takes too long between successive calls to receive on the datagram socket. But even there, the queueing / buffering is strictly limited, and beyond that the messages will be dropped.
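For the exception case mentioned above, a small sketch (the loopback address and port 49999 are arbitrary; nothing is assumed to be listening there): connecting the DatagramSocket allows a returned ICMP error to surface as a PortUnreachableException on a later send or receive, but, as noted, you cannot rely on getting it.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.PortUnreachableException;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

public class UdpUnreachableDemo {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        // Connecting lets the OS report ICMP errors back on this socket.
        socket.connect(InetAddress.getLoopbackAddress(), 49999);
        socket.setSoTimeout(2000);
        byte[] data = "ping".getBytes(StandardCharsets.UTF_8);
        try {
            socket.send(new DatagramPacket(data, data.length));
            socket.receive(new DatagramPacket(new byte[64], 64));
        } catch (PortUnreachableException e) {
            System.out.println("ICMP port unreachable was reported: " + e);
        } catch (SocketTimeoutException e) {
            System.out.println("No reply and no error: the datagram was silently dropped");
        } finally {
            socket.close();
        }
    }
}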

NIO: Send message and then disconnect immediately

In some circumstances I wish to send an error message from a server to a client using non-blocking I/O (SocketChannel.write(ByteBuffer)) and then disconnect the client. Assuming I write the full contents of the message and then immediately disconnect, I presume the client may not receive this message, as I'm guessing that the OS hasn't actually sent the data at that point.
Is this correct, and if so is there a recommended approach to dealing with this situation?
I was thinking of using a timer whereby if I wish to disconnect a client I send a message and then close their connection after 1-2 seconds.
SocketChannel.write in non-blocking mode returns the number of bytes that could immediately be sent to the network without blocking. Your question makes me think that you expect the write method to consume the entire buffer and asynchronously push the remaining data to the network, but that is not how it works.
If you really need to make sure that the error message is sent to the client before disconnecting the socket, I would simply enable blocking mode before calling the write method. In non-blocking mode, you would have to call write in a loop, counting the number of bytes sent by each invocation, and exit the loop once you have managed to pass the entire message to the socket (a bad solution, I know: unnecessary code, busy-waiting and so on).
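A sketch of that blocking-mode approach (assuming this runs on the selector thread, with 'key' being the channel's registration): cancel the key, flush the cancellation with selectNow(), switch to blocking mode, write the whole message, then close.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

class FinalWrite {
    // Send a last message in blocking mode, then close; run this on the selector thread.
    static void sendErrorAndClose(SelectionKey key, ByteBuffer message) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        key.cancel();                    // a registered channel can't be switched to blocking mode
        key.selector().selectNow();      // process the cancelled-key set so the deregistration takes effect
        channel.configureBlocking(true); // blocking writes consume the whole buffer
        while (message.hasRemaining()) {
            channel.write(message);
        }
        channel.close();                 // the TCP stack keeps trying to deliver buffered data after close
    }
}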
You may be better off launching a thread and synchronously writing the data to the channel. The async API is geared more toward "one thread dispatching multiple channels" and is not really intended for fire-and-forget communications.
The close() method of a socket makes sure that everything passed to write() beforehand is actually sent before the socket is really closed. However, this assumes that your write() was able to copy all the data into the TCP stack's output buffer, which will not always work. For solutions to this, see the other answers.
