I have recently tried my hand at socket programming by creating an SSL socket used to stream data live from a server, with no success of course. When I analyze the data packets in Wireshark, I can see that the size of the request data has been magnified many times over in the packet, and hence the request reaches the server in fragments, whereas the actual JSON request is a handful of bytes and should reach the server in a single shot.
Any help would be appreciated.
Wrap a BufferedOutputStream around the SSLSocket's output stream, and don't flush it until you really have to, which is usually not until you're about to read the reply. Otherwise you can be sending one byte at a time to the SSLSocket, which becomes one SSL record per byte, which can expand the data by more than 40x.
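A minimal sketch of that, assuming an already-connected SSLSocket named socket and the complete JSON request already in a byte array jsonBytes:

OutputStream raw = socket.getOutputStream();
BufferedOutputStream out = new BufferedOutputStream(raw, 8192); // buffers small writes in memory
out.write(jsonBytes);   // nothing goes on the wire yet
out.flush();            // one flush, just before reading the reply, so the request leaves as one SSL record
InputStream in = socket.getInputStream();
// ... read the reply from 'in'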
However:
the request reaches the server in fragments
That can happen any time. The server has to be able to cope with receiving data as badly fragmented as one byte at a time.
whereas the actual JSON request is a handful of bytes and should reach the server in a single shot.
There is no such guarantee in TCP.
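To illustrate, a rough sketch of a server-side read loop that copes with arbitrary fragmentation; it assumes a socket variable and that each JSON request is terminated by a newline, which is an application-level convention, not something TCP gives you:

InputStream in = socket.getInputStream();
ByteArrayOutputStream request = new ByteArrayOutputStream();
int b;
while ((b = in.read()) != -1) {   // bytes may dribble in one at a time
    if (b == '\n') break;         // end of one request (assumed delimiter)
    request.write(b);
}
byte[] json = request.toByteArray();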
Say I have ONE SocketChannel on a server waiting for OP_READ to become ready and ONE SocketChannel on a client that uses ONE write call on a buffer to send data to the server. Is it then guaranteed that OP_READ will only be ready once ALL data that the client sent has arrived?
Is it then guaranteed that OP_READ will only be ready once ALL data that the client sent has arrived?
Short answer: no.
Long answer:
The receiving channel will become ready as soon as at least one byte is available to read. Bytes will typically become available more than one at a time, but in general, there is no guarantee how the overall buffer-full of bytes will be split up, and certainly none that all bytes will arrive before the receiving channel signals readiness. The data may be split up at multiple points on both the writing side and the reading side. This has little to do with number of senders, number of receivers, or resource contention.
The details depend to some extent on the underlying network protocol -- for example, pretty much all bets are off for a stream-type protocol such as TCP, but a datagram-type protocol such as UDP should give you the kind of all-at-once behavior you seem to want. UDP in particular, however, does not offer guaranteed delivery, so you have different issues to deal with in that case. The channel abstraction is a better fit to a stream-type protocol.
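For example, a read handler might look roughly like this; it assumes the application already knows the expected message size (say, from a length prefix) and has sized the buffer accordingly, and onReadable/processMessage are illustrative names, not part of any API:

// Called each time the selector reports OP_READ for this channel.
void onReadable(SocketChannel channel, ByteBuffer acc) throws IOException {
    int n = channel.read(acc);        // may deliver as little as one byte
    if (n == -1) {
        channel.close();              // peer closed the connection
    } else if (!acc.hasRemaining()) { // only now has the whole message arrived
        acc.flip();
        processMessage(acc);          // hypothetical application callback
        acc.clear();                  // ready for the next message
    }
}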
I have been given the classical task of transferring files using UDP. In different resources, I have read both that checking the packets for errors (adding a CRC alongside the data in each packet) is necessary AND that UDP already checks for corrupted packets and discards them, so I only need to worry about resending dropped packets.
Which one of them is correct? Do I need to manually perform an integrity check on the arrived packets, or are corrupted ones already discarded?
The language for the project is Java, by the way.
EDIT: Some sources (course books, the internet) say the checksum only covers the header, and therefore ensures the sender and receiver IPs are correct, etc. Some sources say the checksum also covers the data segment. Some sources say the checksum may cover the data segment BUT it's optional and decided by the OS.
EDIT 2: I asked my professors and they say UDP error checking on the data segment is optional in IPv4 and default in IPv6. But I still don't know if it's in the programmer's control, the OS's, or another layer's...
First fact:
UDP has a 16-bit checksum field starting at bit 40 of the packet header. This suffers from (at least) two weaknesses:
The checksum is not mandatory; all bits set to 0 are defined as "no checksum"
It is a 16-bit checksum in the strict sense of the word, so it is susceptible to undetected corruption.
Together this means that UDP's built-in checksum may or may not be reliable enough, depending on your environment.
Second fact:
An even more realistic threat than data corruption in transit is packet loss and reordering: UDP makes no guarantees that
all packets will (eventually) arrive at all
packets will arrive in the same sequence as sent
Indeed, UDP has no built-in mechanism at all to deal with payloads bigger than a single packet, stemming from the fact that it wasn't built for that.
Conclusion:
Appending packet after packet as received, without additional measures, is bound to produce a receive stream that differs from the send stream in all but the most favourable environments, making UDP a less than optimal protocol for direct file transfer.
If you want to or must use UDP to transfer files, you need to build those parts that are integral to TCP but not to UDP into the application. There is a saying, though, that this will most likely result in an inferior reimplementation of TCP.
Successful implementations include many peer-to-peer file-sharing protocols, where protection against connection interruption and packet loss or reordering needs to be part of the application functionality anyway to defeat or mitigate filters.
Implementation recommendations:
What has worked for us is a chunked window implementation: the payload is separated into chunks of a fixed and convenient length (we used 1023 bytes), and a status array of N such chunks is kept on both the sending and receiving ends.
On the sending side:
A UDP message is initiated, containing such a chunk, its sequence number in the stream (included more than once), and a checksum or hash.
The status array marks this chunk as "sent/pending" with a timestamp
Sending stops if the complete status array (send window) is consumed
On the receiving side:
received packets are checked against their checksum,
corrupted packets are negatively acknowledged if all copies of the sequence number agree, and dropped otherwise
OK packets are marked in the status array as "received/pending" with a timestamp
Acknowledgement works by sending an ack packet if either enough chunks have been received to fill an ack packet, or the timestamp of the oldest "received/pending" chunk grows too old (a few ms to a few hundred ms).
Ack packets need checksumming, but no sequencing.
Chunks for which an ack has been sent are marked as "ack/pending" with a timestamp in the status array
On the sending side:
Ack packets are received and checked; corrupted packets are dropped
Chunks for which an ack was received are marked as "ack/done" in the status array
If the first chunk in the status array is marked "ack/done", the status array slides up until its first chunk is again one that is not marked done.
This possibly releases one or more unsent chunks to be sent.
For chunks in status "sent/pending", a timeout on the timestamp triggers a new send of this chunk, as the original chunk might have been lost.
On the receiving side:
Reception of chunk i+N (N being the window width) marks chunk i as ack/done, sliding up the receive window. If not all chunks sliding out of the receive window are marked as "ack/pending", this constitutes an unrecoverable error.
For chunks in status "ack/pending", a timeout on the timestamp triggers a new ack for this chunk, as the original ack message might have been lost.
Obviously there is a need for a special message type from the sending side when the send window slides past the end of the file, to signal reception of an ack without sending chunk N+i; we implemented it by simply sending N more chunks than exist, but without the payload.
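As a rough illustration of the sending side only (not our exact code), a chunk datagram could be assembled along these lines; the field layout, the duplicated sequence number, the choice of CRC-32 (java.util.zip.CRC32) and the names buildChunkPacket/socket/address/port are assumptions following the description above:

// Builds one chunk datagram: seqNo (twice) + CRC-32 of the payload + payload.
byte[] buildChunkPacket(int seqNo, byte[] chunk) {
    CRC32 crc = new CRC32();
    crc.update(chunk);
    ByteBuffer buf = ByteBuffer.allocate(4 + 4 + 8 + chunk.length);
    buf.putInt(seqNo);            // sequence number, first copy
    buf.putInt(seqNo);            // sequence number, second copy
    buf.putLong(crc.getValue());  // checksum over the payload
    buf.put(chunk);               // the chunk itself (e.g. up to 1023 bytes)
    return buf.array();
}
// ... then: socket.send(new DatagramPacket(data, data.length, address, port));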
You can be sure the packets you receive are the same as what was sent (i.e. if you send packet A and receive packet A, you can be sure they are identical). The transport-layer checksum on the packets ensures this. Since UDP does not have guaranteed delivery, however, you need to be sure you received everything that was sent, and you need to make sure you order it correctly.
In other words, if packets A, B, and C were sent in that order you might actually receive only A and B (or none). You might get them out of order, C, B, A. So your checking needs to take care of the guaranteed delivery aspect that TCP provides (verify ordering, ensure all the data is there, and notify the server to resend whatever you didn't receive) to whatever degree you require.
The reason to prefer UDP over TCP is that for some applications neither data ordering nor data completeness matter. For example, when streaming AAC audio packets the individual audio frames are so small that a small amount of them can be safely discarded or played out of order without disrupting the listening experience to any significant degree. If 99.9% of the packets are received and ordered correctly you can play the stream just fine and no one will notice. This works well for some cellular/mobile applications and you don't even have to worry about resending missed frames (note that Shoutcast and some other servers do use TCP for streaming in some cases [to facilitate in-band metadata], but they don't have to).
If you need to be sure all the data is there and ordered correctly, then you should use TCP, which will take care of verifying that data is all there, ordering it correctly, and resending if necessary.
The UDP protocol uses the same strategy for checking packets for errors that the TCP protocol uses - a 16-bit checksum in the packet header.
The UDP packet structure is well known (as is TCP's), so the packet can easily be tampered with if not encrypted; adding another checksum (for instance CRC-32) would also make it more robust. If the purpose is to encrypt data (manually or over an SSL channel), I wouldn't bother adding another checksum.
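For instance, a small sketch of verifying such an application-level CRC-32 on the receiving side, assuming a made-up convention of an 8-byte CRC followed by the payload in each datagram (payloadIntact is an illustrative name):

// Returns true if the payload matches the CRC-32 carried in front of it.
boolean payloadIntact(byte[] packetData, int packetLength) {
    ByteBuffer buf = ByteBuffer.wrap(packetData, 0, packetLength);
    long sentCrc = buf.getLong();              // CRC-32 written by the sender
    byte[] payload = new byte[buf.remaining()];
    buf.get(payload);
    CRC32 crc = new CRC32();                   // java.util.zip.CRC32
    crc.update(payload);
    return crc.getValue() == sentCrc;
}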
Please also take into consideration that a packet can be sent twice. Make sure you deal with that accordingly.
You can check both packet structures on Wikipedia; both have checksums:
Transmission Control Protocol
User Datagram Protocol
You can check the TCP packet structure in more detail to get tips on how to deal with dropped packets. The TCP protocol uses a "Sequence Number" and an "Acknowledgment Number" for that purpose.
I hope this helps, and good luck.
UDP will drop packets that don't match the internal per-packet checksum; CRC checking at the application layer is useful to determine, once a payload appears to be complete, whether what was received is actually complete (no dropped packets) and matches what was sent (no man-in-the-middle or other attacks).
Does anybody know how and why a counterparty would receive TCP packets merged instead of as individual packets? I have already set TCP_NODELAY to true at the socket level, but tcpdump still sees some packets as merged.
After 4 packets successfully sent with a size of 310 bytes, I got 3 x 1400 bytes instead of 15 x 310 bytes. This is causing some significant latency. Thanks.
http://www.2shared.com/photo/_bN9UEqR/tcpdump2.html
s = new Socket(host, port);
s.setTcpNoDelay(true);
s.getOutputStream().write(byteMsg);
s.getOutputStream().flush();
TCP is a stream-based protocol. It doesn't preserve boundaries with respect to send/recv calls. The only thing guaranteed is that the concatenation of send's will be the same as the concatenation of recv's (under normal circumstances).
If you're implementing a custom protocol and need some way to split the data into multiple logical messages, you need an encoding for that.
A simple encoding is to encode each message as a 32-bit unsigned integer denoting the length of the message payload, followed by the actual message payload. Then, on the receiving side, properly decode the input according to this encoding. To do that, you will need a buffer that will store a partially received message. If manipulating raw integers is a problem, you can encode the length some other way, e.g. as a decimal number followed by a newline.
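A blocking-I/O sketch of that encoding (a 4-byte big-endian length prefix written and read with DataOutputStream/DataInputStream; sendMessage/readMessage are illustrative names). A non-blocking reader would instead keep the partially received message in a buffer as described above:

void sendMessage(OutputStream out, byte[] payload) throws IOException {
    DataOutputStream dout = new DataOutputStream(out);
    dout.writeInt(payload.length);   // length prefix
    dout.write(payload);             // message body
    dout.flush();
}

byte[] readMessage(InputStream in) throws IOException {
    DataInputStream din = new DataInputStream(in);
    int len = din.readInt();         // blocks until all 4 prefix bytes arrive
    byte[] payload = new byte[len];
    din.readFully(payload);          // blocks until the whole body arrives
    return payload;
}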
Coalescing can occur in many places:
sender application buffering
sender OS buffering
sender network adapter buffers
router buffers
receiver network adapter buffers
receiver OS buffering
receiver application buffers/queues
It appears from what you have said that there is coalescing between the sender's network adapter and the receiver's OS. (TCP_NODELAY instructs the OS not to buffer, and tcpdump reads before the application.)
You can try to enable the TCP_NODELAY option on the used socket (setTcpNoDelay() method).
It is disabled by default, which means the transmitted data is optimized for a minimum number of packets sent (see Nagle's algorithm).
Does anybody know how and why a counterparty would receive TCP packets merged instead of as individual packets?
Because that is what TCP is specifically designed to do.
I am building a non-blocking server using Java's NIO package, and have a couple of questions about validating received data.
I have noticed that when I call read on a socket channel, it will attempt to fill the byte buffer that it is reading into (as per the documentation). When I send 10 bytes to the server from the client, the server will read those ten bytes into the byte buffer, the rest of the bytes in the byte buffer will stay at zero, and the number returned from the read operation will be the size of my byte buffer even though the client only wrote 10 bytes.
What I am trying to figure out is whether there is a way to get just the number of bytes the client sent to the server when the server reads from a socket channel (in the above case, 10 instead of 1024).
If that doesn't work, I know I can separate all the actual received data from the client from this 'excess' data stored in the byte buffer by using delimiters in conjunction with my 'instruction set headers' and whatnot, but it seems like this should exist, so I have to wonder if I am just missing something obvious, or if there is some low-level reason why this can't be done.
Thanks :)
You probably forgot to call the notorious flip() on your buffer.
buffer.clear();                       // prepare the buffer for a fresh read
int bytesRead = channel.read(buffer); // returns how many bytes were actually read (10 in your case)
buffer.flip();                        // limit = bytes read, position = 0
// now you can read from the buffer
buffer.get...
I need to change my signature to nio.sucks