Underlying workings of SelectionKey.interestOps(int ops) - Java

I understand that a ServerSocketChannel is registered to listen for OP_ACCEPT; when a connection is accepted, the resulting channel is registered for OP_READ, and once read it is registered for OP_WRITE. This is done by adding the relevant flags to the SelectionKey's interest set using the interestOps method.
However, when we remove some interest ops from a key, for example key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
What actually happens here? Does this mean that the server will simply not listen for any incoming data on the channel belonging to this socket, while the source channel remains oblivious of this decision by the server and might keep on sending data to the server? Or will it somehow inform the channel source of this decision?
In packet-switching parlance, is the above operation effectively the same as the server receiving packets and simply dropping any packet whose channel's interest ops have been "unset"?

However, when we remove some interest ops from a key, for example key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
What actually happens here?
What literally happens is something like:
public void interestOps(int interestOps)
{
    this.interestOps = interestOps;
}
Does this mean that the server will just not listen for any incoming requests on the channel belonging to this socket
It means that the Selector won't trigger any OP_READ events if data arrives via the socket. It doesn't mean that data won't be received.
and the source channel will be oblivious of this decision by the server and might keep on sending data to the server?
If by 'source channel' you mean the peer, it is not informed in any way, unless the receive buffer fills up at the receiver.
Or will it somehow inform the channel source of this decision.
No.
In packet-switching parlance, is the above operation effectively the same as the server receiving packets and just dropping the packet if the interest ops for the channel this packet belongs to have been "unset"?
No.
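The answer above can be demonstrated directly: clearing OP_READ only stops the Selector from reporting readiness; the kernel still accepts the bytes into the socket receive buffer, and they become visible the moment OP_READ is restored. A minimal self-contained sketch (all names are illustrative; it uses a loopback connection to stand in for a real peer):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class InterestOpsDemo {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        SocketChannel peer = server.accept();
        peer.configureBlocking(false);

        Selector selector = Selector.open();
        SelectionKey key = peer.register(selector, SelectionKey.OP_READ);

        // Remove OP_READ from the interest set: the Selector stops
        // reporting read-readiness, but the kernel still receives the bytes.
        key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);

        client.write(ByteBuffer.wrap("hello".getBytes()));
        Thread.sleep(200); // let the data land in the receive buffer

        System.out.println("ready while OP_READ cleared: " + selector.select(200));

        // Restore OP_READ: the buffered data is reported immediately.
        key.interestOps(key.interestOps() | SelectionKey.OP_READ);
        System.out.println("ready after OP_READ restored: " + selector.select(200));

        ByteBuffer buf = ByteBuffer.allocate(16);
        peer.read(buf);
        System.out.println("bytes buffered in the meantime: " + buf.position());

        client.close(); peer.close(); server.close(); selector.close();
    }
}
```

The first select() times out (returns 0) even though data has arrived, because the interest set no longer contains OP_READ; the second select() returns immediately, showing the data was buffered, not dropped.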

Related

NIO Selector.select I/O readiness

In Java NIO, does Selector.select() guarantee that at least one entire UDP datagram's content is available at the SocketChannel, or in theory could the Selector wake when there is less than a datagram, say a couple of bytes?
What happens if the transport protocol is TCP: with regard to Selector.select(), is there a difference from UDP?
From the API:
Selects a set of keys whose corresponding channels are ready for I/O operations.
It doesn't however specify what ready means.
So my questions:
how do incoming datagrams/streams get from the hardware to the Java application's socket (channels)?
when using a UDP or TCP client, should one assume that at least one datagram has been received, or could the Selector wake when only part of a datagram is available?
It doesn't however specify what ready means.
So my questions:
how incoming packages/streams go from hardware to Java application Socket (Channels).
They arrive at the NIC, where they are buffered, then passed to the network protocol stack, and from there to the socket receive buffer. From there they are retrieved when you call read().
when using UDP or TCP client, should one assume that at least one package is received
You mean packet. Actually in the case of UDP you mean datagram. You can assume that an entire datagram has been received in the case of UDP.
or Selector could wake when there is only a part of [packet] available?
In the case of TCP you can assume that either at least one byte or end of stream is available. There is no such thing as a 'package' or 'packet' or 'message' at the TCP level.
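The UDP half of the answer can be checked with a short sketch: a datagram socket is never reported ready with a fragment of a datagram, so the first select() wakeup coincides with a whole datagram being readable. A hedged, self-contained example (class and buffer sizes are illustrative):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class DatagramReadyDemo {
    public static void main(String[] args) throws Exception {
        DatagramChannel receiver = DatagramChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        receiver.configureBlocking(false);
        Selector selector = Selector.open();
        receiver.register(selector, SelectionKey.OP_READ);

        DatagramChannel sender = DatagramChannel.open();
        sender.send(ByteBuffer.wrap(new byte[512]), receiver.getLocalAddress());

        selector.select(); // wakes only once a whole datagram has arrived

        ByteBuffer buf = ByteBuffer.allocate(1024);
        receiver.receive(buf); // delivers the entire datagram in one call
        System.out.println("datagram bytes received: " + buf.position());

        sender.close(); receiver.close(); selector.close();
    }
}
```

With TCP the same select() only promises that at least one byte (or end of stream) is available; a single read() may return any fraction of what the peer wrote.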

Detecting TCP/IP packet loss

I have TCP communication via a socket, with code like:
public void openConnection() throws Exception
{
    socket = new Socket();
    InetAddress iNet = InetAddress.getByName("server");
    InetSocketAddress sock = new InetSocketAddress(iNet, Integer.parseInt(port));
    socket.connect(sock, 0);
    out = new PrintWriter(socket.getOutputStream(), true);
    in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
}
and a send method like:
synchronized void send(String message)
{
    try
    {
        out.println(message);
    }
    catch (Exception e)
    {
        throw new RuntimeException(this.getClass() + ": Error Sending Message: "
                + message, e);
    }
}
And I read from webpages that TCP/IP does not guarantee delivery of packets; it retries, but if the network is too busy, a packet may be dropped ([link]):
Packets can be dropped when transferring data between systems for two key reasons:
Heavy network utilization and resulting congestion
Faulty network hardware or connectors
TCP is designed to be able to react when packets are dropped on a network. When a packet is successfully delivered to its destination, the destination system sends an acknowledgement message back to the source system. If this acknowledgement is not received within a certain interval, that may be either because the destination system never received the packet or because the packet containing the acknowledgement was itself lost. In either case, if the acknowledgement is not received by the source system in the given time, the source system assumes that the destination system never received the message and retransmits it. It is easy to see that if the performance of the network is poor, packets are lost in the first place, and the increased load from these retransmit messages is only increasing the load on the network further, meaning that more packets will be lost. This behaviour can result in very quickly creating a critical situation on the network.
Is there any way that I can detect whether a packet was received successfully by the destination or not? I am not sure that out.println(message); will throw any exception, as this is a non-blocking call: it puts the message in a buffer and returns, letting TCP/IP do its work.
Any help?
TCP is designed to be able to react when packets are dropped on a network.
As your quote says, TCP is designed to react automatically to the events you mention in this text. As such, you don't have anything to do at this level, since this will be handled by the TCP implementation you're using (e.g. in the OS).
TCP has some features that will do some of the work for you, but you are right to wonder about their limitations (many people think of TCP as a guaranteed delivery protocol, without context).
There is an interesting discussion on the Linux Kernel Mailing List ("Re: Client receives TCP packets but does not ACK") about this.
In your use case, practically, this means that you should treat your TCP connection as a stream of data, in each direction (the classic mistake is to assume that if you send n bytes from on end, you'll read n bytes in a single buffer read on the other end), and handle possible exceptions.
Handling java.io.IOException properly (in particular its subclasses in java.net) will cover error cases at the level you describe: if you get one, have a retry strategy (depending on what the application and its user are meant to do). Rely on timeouts too (don't set a socket as blocking forever).
Application protocols may also be designed to send their own acknowledgement when receiving commands or requests.
This is a matter of assigning responsibilities to different layers. The TCP stack implementation will handle the packet loss problems you mention, and throw an error/exception if it can't fix it by itself. Its responsibility is the communication with the remote TCP stack. Since in most cases you want your application to talk to a remote application, there needs to be an extra acknowledgement on top of that. In general, the application protocol needs to be designed to handle these cases. (You can go a number of layers up in some cases, depending on which entity is meant to take responsibility to handle the requests/commands.)
TCP/IP does not drop the packet. The congestion control algorithms inside the TCP implementation take care of retransmission. Assuming that there is a steady stream of data being sent, the receiver will acknowledge which sequence numbers it received back to the sender. The sender can use the acknowledgements to infer which packets need to be resent. The sender holds packets until they have been acknowledged.
As an application, unless the TCP implementation provides mechanisms to receive notification of congestion, the best it can do is establish a timeout for when the transaction can complete. If the timeout occurs before the transaction completes, the application can declare the network to be too congested for the application to succeed.
Code what you need. If you need acknowledgements, implement them. If you want the sender to know the recipient got the information, then have the recipient send some kind of acknowledgement.
From an application standpoint, TCP provides a bi-directional byte stream. You can communicate whatever information you want over that by simply specifying streams of bytes that convey the information you need to communicate.
Don't try to make TCP do anything else. Don't try to "teach TCP" your protocol.
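The application-level acknowledgement the answers above recommend can be sketched in a few lines. This is a minimal illustration, not a complete protocol: a loopback server echoes an "ACK" line for each message, and the sender treats a missing ACK within a timeout as failure (all names and the line-based framing are assumptions):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class AppLevelAck {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress());

        Thread receiver = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                String msg = in.readLine();
                out.println("ACK " + msg); // application-level acknowledgement
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        receiver.start();

        try (Socket s = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            s.setSoTimeout(2000); // don't block forever waiting for the ACK
            out.println("hello");
            String ack = in.readLine(); // null or SocketTimeoutException => retry
            System.out.println("got: " + ack);
        }
        receiver.join();
        server.close();
    }
}
```

A timeout or a null read here is the application's cue to retry or report failure, which is exactly the responsibility TCP cannot take on for you.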

Why does java.nio.SocketChannel not send data (JDiameter)?

I created a simple Diameter client and server (link to sources). The client must send 10000 CCR messages, but in Wireshark I see only ~300 CCR messages being sent. The other messages raise timeouts on the client. I run the server and client on different computers with Windows 7. I found the line in the JDiameter sources where JDiameter sends the CCR (line 280), and I think that when the socket's send buffer is full, the CCR is not sent. Before line 280 I added this code:
while(bytes.hasRemaining())
Now the client sends ~9900 CCRs, but very slowly. I tested the client against another Diameter server, written in C++; the client (on JDiameter without my changes) sent ~7000 CCRs, but that server was hosted on Debian.
I don't know how to solve this problem; thanks for any help.
If the sender's send returns zero, it means the sender's socket send buffer is full, which in turn means the receiver's socket receive buffer is full, which in turn means that the receiver is reading slower than the sender is sending.
So speed up the receiver.
NB In non-blocking mode, merely looping around the write() call while it returns zero is not adequate. If write() returns zero you must:
Deregister the channel for OP_READ and register it for OP_WRITE
Return to the select loop.
When OP_WRITE fires, do the write again. This time, if it doesn't return zero, deregister OP_WRITE, and (probably, according to your requirements) register OP_READ.
Note that keeping the channel registered for OP_WRITE all the time isn't correct either. A socket channel is almost always writable, meaning there is almost always space in the socket send buffer. What you're interested in is the transition between not-writable and writable.
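The register-OP_WRITE-on-zero-write pattern described above can be sketched in a self-contained way. The example below uses a Pipe to stand in for a socket (its sink channel supports OP_WRITE and its buffer fills up just like a socket send buffer); all names are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class OpWriteDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.sink().configureBlocking(false);
        pipe.source().configureBlocking(false);

        Selector selector = Selector.open();
        SelectionKey sinkKey = pipe.sink().register(selector, 0);

        // Fill the pipe until write() returns 0 (the buffer is full).
        ByteBuffer chunk = ByteBuffer.allocate(4096);
        long written = 0;
        while (true) {
            chunk.clear();
            int n = pipe.sink().write(chunk);
            if (n == 0) break;
            written += n;
        }
        System.out.println("write returned 0 after some bytes: " + (written > 0));

        // write() returned 0: register OP_WRITE and return to the select loop.
        sinkKey.interestOps(SelectionKey.OP_WRITE);

        // Simulate the reader catching up.
        ByteBuffer drain = ByteBuffer.allocate(65536);
        while (pipe.source().read(drain) > 0) drain.clear();

        selector.select(2000); // OP_WRITE fires: the channel is writable again
        chunk.clear();
        int n = pipe.sink().write(chunk);
        System.out.println("writable again: " + (n > 0));

        // Once the pending data is flushed, deregister OP_WRITE again.
        sinkKey.interestOps(0);

        pipe.sink().close(); pipe.source().close(); selector.close();
    }
}
```

The key points from the answer appear directly: OP_WRITE is only registered after a zero-length write, and it is deregistered as soon as the pending data goes out.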

UDP Packet route in Java

I have a question concerning a UDP packet's life/route. I have a simple client-server UDP scheme with a send call on the client side and a receive call on the server side. Let's say the send method gets called and the packet actually arrives at the other side, BUT the server's code execution hasn't yet reached the receive method call. What happens to the packet in that time? Now, I tried to stop the execution before the receive call with a simple command input prompt, waited a little, then let it continue, and noticed that the packet was received. Can you explain WHY that happens? Is it buffered at a different OSI level?
Thanks in advance.
Every TCP or UDP socket has a send buffer and a receive buffer. Your datagram got queued into the send buffer at the sender, then it got sent, then it got queued into the receive buffer at the receiver, then you read it from there.
NB OSI has nothing to do with it. TCP/IP doesn't obey the OSI model; it has its own, prior model.
The "receive" method call doesn't receive the packet. If there's a UDP socket "open" for that port, it means that there is buffer space allocated, and that's where the NIC+OS put the data. When you call "receive", it just looks there, and if there's anything there, then it pretends to have just received it.
I should add that if the buffer is empty, then the receive call does go into a blocking state, waiting to get notified by the OS that something has arrived.
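The buffering described above is easy to observe: send a datagram, deliberately delay before calling receive, and the datagram is still there, because the OS queued it in the socket receive buffer the moment it arrived. A minimal sketch (names and the 500 ms delay are illustrative):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpBufferDemo {
    public static void main(String[] args) throws Exception {
        DatagramSocket receiver = new DatagramSocket(0, InetAddress.getLoopbackAddress());
        DatagramSocket sender = new DatagramSocket();

        byte[] payload = "queued".getBytes();
        sender.send(new DatagramPacket(payload, payload.length,
                receiver.getLocalAddress(), receiver.getLocalPort()));

        Thread.sleep(500); // receive() hasn't been called yet; the OS holds the datagram

        DatagramPacket p = new DatagramPacket(new byte[64], 64);
        receiver.setSoTimeout(2000);
        receiver.receive(p); // retrieves it from the socket receive buffer
        System.out.println(new String(p.getData(), 0, p.getLength()));

        sender.close(); receiver.close();
    }
}
```

If the buffer had been empty instead, receive() would have blocked (here, up to the 2-second SO_TIMEOUT) waiting for the OS to signal an arrival.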

IP headers in Java

I am testing the behaviour of some client software, and need to write some software that emulates router-like functionality, preferably using something simple like UDP sockets. All it needs to do is receive the packet, alter the time to live, and send it back. Is this possible in regular Java? Or do you do something like the following:
Listen on Socket A
For EACH UDP packet received, open a NEW socket, set the time to live on that socket, and send it back (or is this not possible/efficient?)
Receiver gets packet with altered values that appear like it has traversed some hops (but in reality hasn't)
So two approaches may be possible: editing the received packet directly (and then simply sending it back), or constructing a new packet, copying the values from the original one, and setting the appropriate headers/socket options before sending it out.
EDIT: the 'router' does not do any complex routing at all, such as forwarding to other routers; it simply decrements the TTL header field of the received message and sends the message directly back to the client.
Please refer to the API of the Socket and ServerSocket classes. Most server implementations, for a variety of protocols, accept packets at a standard port like 80 and send the response using some ephemeral port.
