I have recently tried my hand at socket programming by creating an SSL socket used to stream data live from a server, with no success of course. When I analyze the data packets in Wireshark, I see that the size of the request data has been inflated many times over on the wire, and the request reaches the server in fragments, whereas the actual JSON request is only a handful of bytes and should reach the server in a single shot.
Any help would be appreciated.
Wrap a BufferedOutputStream around the SSLSocket's output stream, and don't flush it until you really have to, which is usually not until you're about to read the reply. Otherwise you can end up sending one byte at a time to the SSLSocket, which becomes one SSL record per byte, which can expand the data by more than 40x.
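A minimal sketch of that buffering, assuming a plain SSLSocketFactory connection (the host, port, buffer size, and JSON payload below are placeholders, not taken from the question):

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.BufferedOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class BufferedSslWrite {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
            // Buffer writes so the request leaves as one (or a few) TLS records
            // instead of one record per write() call.
            OutputStream out = new BufferedOutputStream(socket.getOutputStream(), 8192);
            byte[] request = "{\"type\":\"subscribe\"}".getBytes(StandardCharsets.UTF_8);
            out.write(request);   // accumulates in the buffer
            out.flush();          // the whole request goes out together
            // ... now read the reply from socket.getInputStream()
        }
    }
}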
However:
the request reaches the server in fragments
That can happen any time. The server has to be able to cope with receiving data as badly fragmented as one byte at a time.
whereas the actual JSON request is only a handful of bytes and should reach the server in a single shot.
There is no such guarantee in TCP.
Related
I have read all around Stack Overflow, but I cannot find a working solution to determine the packet size and set the byte array to that length, so that I receive the correct number of bytes for one message at a time.
DataInputStream in = new DataInputStream(socket.getInputStream());
while (true) {
    short myShortStreamSize = in.readShort();
    byte[] payload = new byte[myShortStreamSize];
    in.readFully(payload);
    Log.i("Foo", new String(payload));
}
That doesn't work for me.
My TLS server is made with Node.js. Normally the Node.js TLS module handles the packets in such a way that if I use the same module to connect to the server, it's enough to call socket.write("message"); the Node.js client already understands the sizes, so you don't need to define them yourself, and it will write that message to the console without issues. As I mentioned, my server is TLS. I'm not sure if this has something to do with my issue, but it would be awesome to make it work.
My goal is to receive one message at once without cutting it into pieces. If I send "Good morning, today is a sunny day!" I want my Java/Android client to read it from start to end without mixing it up with other packets.
... size of TLS packet ... determine the packet size ... receive one message at once
There is no "TLS packet". There is a TLS record which has the length in the first 2 bytes. This TLS record might be split into multiple TCP fragments or TCP fragments might also contain multiple TLS records in full, in half or whatever. These TCP fragments then might even be cut into multiple IP packets although TCP tries hard to avoid this.
And your plain text "message" itself might end up being spread over multiple TLS records since the size of such a record might be smaller than the message. Depending on your application and TLS stack it might also be mixed together with other "messages".
In other words: what you want is impossible. Unless you somehow already know the size of your message before reading it, you cannot rely on reading the full message and only this message. Neither TCP nor TLS preserves any message "boundaries" you might have imagined when crafting your writes but which have no actual representation in the payload.
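One common way around this is to add your own framing on top of the stream, for example a length prefix. A minimal sketch, assuming you control both ends so the Node.js server can be changed to write the same 2-byte prefix before each message (the method names here are illustrative):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class Framing {
    // Sender side: prefix every message with a 2-byte big-endian length.
    static void sendMessage(Socket socket, String message) throws Exception {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeShort(payload.length);
        out.write(payload);
        out.flush();
    }

    // Receiver side: read the prefix, then block until exactly that many bytes have
    // arrived, regardless of how TCP/TLS fragmented them on the way.
    static String readMessage(Socket socket) throws Exception {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        int length = in.readUnsignedShort();
        byte[] payload = new byte[length];
        in.readFully(payload);
        return new String(payload, StandardCharsets.UTF_8);
    }
}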
Hi everyone. I am a beginner in network programming. Currently I want to do an experiment which sends a file (~2M bytes) from an Android phone to an Ubuntu server. How can I send it at the highest speed? I have tried things like using a BufferedReader in Java, reading each byte out of the file and sending that single byte to the server through the socket OutputStream write function. This approach seems to cost too much time. I notice that if, under the same network conditions, I send the same file using some instant messenger tools like Skype, it is much quicker than what I did. Does anybody know the API or protocol implementation beneath those instant messenger applications?
Do I need to call some more efficient API other than sockets? I am also trying to read the whole file into a byte array and then call the socket write function to send the huge byte array to the server in just a single call. Although when I receive it at the server side I find a lot of "padding zeros" scattered through my original data, the whole transfer seems to cost less time than the "single byte transfer" method. Does anybody have any advice on this? Thanks a lot!
Thanks for everybody's answers. I think I just made a silly mistake. The real reason for my slow transmission over the TCP socket is that every time I read just a single byte out of the file and called void write(int b) to send this single byte to the server. This method is very time-consuming. Now every time I read 256 bytes out of the file and send those 256 bytes through void write(byte[] b, int off, int len); this way, the transmission is pretty fast. Therefore, it is not a problem with TCP itself; it was my mistake of calling the wrong API. I have not tried UDP yet, but I think it is also a good choice. Thanks again, everybody.
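For illustration, a minimal sketch of that fix, reading the file in blocks and writing only the bytes actually read (the 8192-byte buffer, host, and port are placeholders):

import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.OutputStream;
import java.net.Socket;

public class FileSender {
    static void sendFile(String path, String host, int port) throws Exception {
        try (Socket socket = new Socket(host, port);
             FileInputStream file = new FileInputStream(path);
             OutputStream out = new BufferedOutputStream(socket.getOutputStream())) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = file.read(buffer)) != -1) {
                // Write only 'read' bytes, so no padding zeros end up in the stream.
                out.write(buffer, 0, read);
            }
            out.flush();
        }
    }
}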
Short of reinventing TCP/IP, the fastest way to send will be via UDP. If your connection is good enough (few losses), then you only need to implement packet sequencing: the sender prepends a sequence number to the data packets, and the receiver keeps track of missed packets and requests them again after all the data has been sent. After all data is received, the receiver can reassemble the complete file.
This is a simplistic reimplementation of TCP over UDP.
I have been given the classical task of transferring files using UDP. In different resources, I have read both that checking the packets for errors (adding a CRC alongside the data in the packets) is necessary AND that UDP already checks for corrupted packets and discards them, so I only need to worry about resending dropped packets.
Which one of them is correct? Do I need to manually perform an integrity check on the arrived packets, or are incorrect ones already discarded?
Language for the project is Java by the way.
EDIT: Some sources (course books, the internet) say the checksum only covers the header, and therefore ensures the sender and receiver IPs are correct, etc. Some sources say the checksum also covers the data segment. Some sources say the checksum may cover the data segment BUT it's optional and decided by the OS.
EDIT 2: Asked my professors and they say UDP error checking of the data segment is optional in IPv4, default in IPv6. But I still don't know if it's in the programmer's control, the OS's, or another layer's...
First fact:
UDP has a 16-bit checksum field, occupying the last two bytes (bits 48–63) of the 8-byte packet header. This suffers from (at least) two weaknesses:
the checksum is not mandatory (in IPv4); a checksum field with all bits set to 0 is defined as "no checksum"
it is a 16-bit checksum in the strict sense of the word (a simple sum, not a CRC), so it is susceptible to undetected corruption.
Together this means that UDP's built-in checksum may or may not be reliable enough, depending on your environment.
Second fact:
An even more realistic threat than data corruption in transit is packet loss and reordering: UDP makes no guarantees that
all packets will (eventually) arrive at all
packets will arrive in the same sequence as they were sent
Indeed, UDP has no built-in mechanism at all to deal with payloads bigger than a single packet, stemming from the fact that it wasn't built for that.
Conclusion:
Appending packet after packet as received, without additional measures, is bound to produce a receive stream differing from the send stream in all but the most favourable environments, making UDP a less than optimal protocol for direct file transfer.
If you want to or must use UDP to transfer files, you need to build those parts that are integral to TCP but not to UDP into the application. There is a saying, though, that this will most likely result in an inferior reimplementation of TCP.
Successful implementations include many peer-to-peer file sharing protocols, where protection against connection interruption and packet loss or reordering needs to be part of the application functionality anyway, to defeat or mitigate filters.
Implementation recommendations:
What has worked for us is a chunked window implementation: the payload is separated into chunks of a fixed and convenient length (we used 1023 bytes), and a status array of N such chunks is kept on both the sending and the receiving end.
On the sending side:
A UDP message is initiated, containing such a chunk, its sequence number in the stream (more than once), and a checksum or hash.
The status array marks this chunk as "sent/pending" with a timestamp
Sending stops if the complete status array (send window) is consumed
On the receiving side:
received packets are checked against their checksum,
corrupted packets are negatively acknowledged if all copies of the sequence number agree, and dropped otherwise
OK packets are marked in the status array as "received/pending" with a timestamp
Acknowledgement works by sending an ack packet if either enough chunks have been received to fill an ack packet, or the timestamp of the oldest "received/pending" chunk grows too old (some ms to some 100 ms).
Ack packets need checksumming, but no sequencing.
Chunks for which an ack has been sent are marked as "ack/pending" with a timestamp in the status array
On the sending side:
Ack packets are received and checked, corrupted packets are dropped
Chunks for which an ack was received are marked as "ack/done" in the status array
If the first chunk in the status array is marked "ack/done", the status array slides up until its first chunk is again not marked done.
This possibly releases one or more unsent chunks to be sent.
for chunks in status "sent/pending", a timeout on the timestamp triggers a new send for this chunk, as the original chunk might have been lost.
On the receiving side:
Reception of chunk i+N (N being the window width) marks chunk i as ack/done, sliding up the receive window. If not all chunks sliding out of the receive window are marked as "ack/pending", this constitutes an unrecoverable error.
for chunks in status "ack/pending", a timeout on the timestamp triggers a new ack for this chunk, as the original ack message might have been lost.
Obviously there is a need for a special message type from the sending side once the send window slides past the end of the file, to signal reception of an ack without sending chunk N+i; we implemented it by simply sending N more chunks than exist, but without the payload.
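As a rough sketch of the sending side of such a scheme (the exact layout, the repeated sequence number, and CRC32 as the checksum are assumptions for illustration, not the precise format described above):

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class ChunkSender {
    static final int CHUNK_SIZE = 1023;

    // Builds one chunk datagram: sequence number (written twice, so a corrupted copy
    // can still be negatively acknowledged), a payload checksum, then the chunk itself.
    static DatagramPacket buildChunk(int seq, byte[] chunk, InetAddress addr, int port) {
        CRC32 crc = new CRC32();
        crc.update(chunk);
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + 8 + chunk.length);
        buf.putInt(seq);
        buf.putInt(seq);
        buf.putLong(crc.getValue());
        buf.put(chunk);
        return new DatagramPacket(buf.array(), buf.position(), addr, port);
    }
}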
You can be sure the packets you receive are the same as what was sent (i.e. if you send packet A and receive packet A, you can be sure they are identical). The transport-layer checksum on the packets ensures this. Since UDP does not have guaranteed delivery, however, you need to be sure you received everything that was sent, and you need to make sure you order it correctly.
In other words, if packets A, B, and C were sent in that order you might actually receive only A and B (or none). You might get them out of order, C, B, A. So your checking needs to take care of the guaranteed delivery aspect that TCP provides (verify ordering, ensure all the data is there, and notify the server to resend whatever you didn't receive) to whatever degree you require.
The reason to prefer UDP over TCP is that for some applications neither data ordering nor data completeness matter. For example, when streaming AAC audio packets the individual audio frames are so small that a small amount of them can be safely discarded or played out of order without disrupting the listening experience to any significant degree. If 99.9% of the packets are received and ordered correctly you can play the stream just fine and no one will notice. This works well for some cellular/mobile applications and you don't even have to worry about resending missed frames (note that Shoutcast and some other servers do use TCP for streaming in some cases [to facilitate in-band metadata], but they don't have to).
If you need to be sure all the data is there and ordered correctly, then you should use TCP, which will take care of verifying that data is all there, ordering it correctly, and resending if necessary.
The UDP protocol uses the same strategy for detecting packets with errors as the TCP protocol - a 16-bit checksum in the packet header.
The UDP packet structure is well known (as is the TCP one), so the packet can easily be tampered with if not encrypted; adding another checksum (for instance CRC-32) would make it more robust. If the purpose is to encrypt data (manually or over an SSL channel), I wouldn't bother adding another checksum.
Please also take into consideration that a packet can be sent twice. Make sure you deal with that accordingly.
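One simple way to deal with duplicates, sketched here on the assumption that each packet carries an application-level sequence number, is to remember which numbers have already been accepted:

import java.util.HashSet;
import java.util.Set;

public class DuplicateFilter {
    private final Set<Integer> seen = new HashSet<>();

    // Returns true the first time a sequence number is delivered, false for repeats.
    boolean accept(int sequenceNumber) {
        return seen.add(sequenceNumber);
    }
}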
You can check both packet structures on Wikipedia; both have checksums:
Transmission Control Protocol
User Datagram Protocol
You can check the TCP packet structure in more detail to get tips on how to deal with dropped packets. The TCP protocol uses a "Sequence Number" and an "Acknowledgment Number" for that purpose.
I hope this helps, and good luck.
UDP will drop packets that fail the internal per-packet checksum; CRC checking is useful at the application layer to determine, once a payload appears to be complete, whether what was received is actually complete (no dropped packets) and matches what was sent (no man-in-the-middle or other attacks).
I read this question about the error that I'm getting and I learned that UDP data payloads can't be more than 64k. The suggestions that I've read are to use TCP, but that is not an option in this particular case. I am interfacing with an external system that is transmitting data over UDP, but I don't have access to that external system at this time, so I'm simulating it.
I have data messages that are upwards of 1,400,000 bytes in some instances, and it's a requirement that the UDP protocol is used. I am not able to change protocols (I would much rather use TCP or a reliable protocol built on UDP). Instead, I have to find a way to transmit large payloads over UDP from a test application into the system that I am building, and to read those large payloads in that system for processing. I don't have to worry about dropped packets, either - if I don't get the datagram, I don't care - just wait for the next payload to arrive. If it's incomplete or missing, just throw it all away and continue waiting. I also don't know the size of the datagrams in advance (they range from a few hundred bytes to 1,400,000+ bytes).
I've already set my send and receive buffer sizes large enough, but that's not sufficient. What else can I do?
UDP packets have a 16-bit length field. It has nothing to do with Java; they cannot be bigger, period. If the server you are talking to is immutable, you are stuck with what you can fit into one packet.
If you can change the server and thus the protocol, you can more or less reimplement TCP for yourself. Since UDP is defined to be unreliable, you need the full retransmission mechanism to cope with packets that are dropped in the network somewhere. So, you have to split the 'message' into chunks, send the chunks, and have a protocol for requesting retransmission of lost chunks.
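A minimal sketch of just the splitting and sending step (no retransmission protocol); the 1400-byte chunk size and the two-integer header are assumptions chosen to keep each datagram below a typical 1500-byte MTU:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class LargeMessageSender {
    static final int CHUNK = 1400;

    static void send(byte[] message, InetAddress addr, int port) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            int totalChunks = (message.length + CHUNK - 1) / CHUNK;
            for (int i = 0; i < totalChunks; i++) {
                int off = i * CHUNK;
                int len = Math.min(CHUNK, message.length - off);
                ByteBuffer buf = ByteBuffer.allocate(8 + len);
                buf.putInt(i);              // chunk index
                buf.putInt(totalChunks);    // lets the receiver know when it has everything
                buf.put(message, off, len);
                socket.send(new DatagramPacket(buf.array(), buf.position(), addr, port));
            }
        }
    }
}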
It's a requirement ...
The requirement should also therefore dictate the packetization technique. You need more information about the external system and its protocol. Note that the maximum IPv4 UDP payload is 65535 - 28 = 65507 bytes, and the maximum practical payload is < 1500 bytes once a router gets involved.
I have a question regarding Java's SocketChannel.
Say I have a socket channel opened in blocking mode; after calling the write(ByteBuffer) method, I get an integer describing how many bytes were written. The javadoc says:
"Returns: The number of bytes written, possibly zero"
But what exactly does this mean? Does it mean that this number of bytes has really been delivered to the client (i.e. the sender received a TCP ack making evident how many bytes were received by the server), or does it mean that this number of bytes has been written to the TCP stack (so that some bytes might still be waiting, e.g. in the network card buffer)?
does this mean that the number of bytes really has been delivered to the client
No. It simply means the number of bytes delivered to the local network stack.
The only way to be sure that data has been delivered to the remote application is if you receive an application level acknowledgment for the data.
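A minimal sketch of such an application-level acknowledgment, assuming an invented protocol where the peer replies with a single "OK" line once it has processed the data (nothing in the socket API provides this for you):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class AckedSend {
    static void sendAndConfirm(Socket socket, byte[] data) throws IOException {
        OutputStream out = socket.getOutputStream();
        out.write(data);
        out.flush();   // only guarantees hand-off to the local network stack
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
        String reply = in.readLine();   // wait for the peer to confirm it processed the data
        if (!"OK".equals(reply)) {
            throw new IOException("peer did not acknowledge: " + reply);
        }
    }
}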
The paragraph that confuses you is for non-blocking I/O.
For non-blocking operation, your initial call may indeed not write anything at the time of the call.
Unless otherwise specified, a write operation will return only after writing all of the r requested bytes. Some types of channels, depending upon their state, may write only some of the bytes or possibly none at all. A socket channel in non-blocking mode, for example, cannot write any more bytes than are free in the socket's output buffer.
As you can see, for blocking I/O all bytes will be sent (or an exception thrown in the middle of the send).
Note that there is no guarantee on when the bytes will appear on the receiving side; this is totally up to the underlying socket protocol.
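To make the contrast concrete, a sketch of the loop you need in non-blocking mode, where a single write() may consume only part of the buffer or nothing at all (in blocking mode one call already does this for you); the host and payload are placeholders, and a real non-blocking program would use a Selector rather than spinning:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class ChannelWrite {
    static void writeFully(SocketChannel channel, ByteBuffer buffer) throws Exception {
        while (buffer.hasRemaining()) {
            channel.write(buffer);   // may write zero bytes if the socket send buffer is full
        }
    }

    public static void main(String[] args) throws Exception {
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("example.com", 80))) {
            channel.configureBlocking(false);
            writeFully(channel, ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
        }
    }
}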