Send bytes via TCP/IP with Android phone faster - Java

I am taking pictures with the camera2 API for Android on a Nexus 6. It takes virtually no time at all from when my code reaches the end of onImageAvailable() to when it's called again.
However, it's taking ~700ms to send my picture over TCP/IP.
private ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = mImageReader.acquireLatestImage();
        if (image == null) {
            return;
        }
        try {
            if (image.getPlanes().length == 0) {
                return;
            }
            // For JPEG, all the data is in plane 0
            ByteBuffer buffer = image.getPlanes()[0].getBuffer();
            if (buffer == null) {
                return;
            }
            byte[] pictureBytes = new byte[buffer.remaining()];
            buffer.get(pictureBytes);
            // Length header, then the image bytes
            mOutputStream.write((String.valueOf(pictureBytes.length) + "....").getBytes());
            // FROM HERE
            mOutputStream.write(pictureBytes);
            // TO HERE TAKES ~700ms
            mOutputStream.flush();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            image.close(); // always release the image so the reader doesn't run out of buffers
        }
    }
};
My connection is created in another thread, as Android requires, with the following code:
ServerSocket serverSocket = new ServerSocket(#);
Socket clientSocket = serverSocket.accept();
OutputStream outputStream = clientSocket.getOutputStream();
Note: mOutputStream in the main thread is equal to outputStream. I pass outputStream to the main thread.
I've tried using BufferedOutputStream and that was actually slower.
The link speed between my phone and the device that it's connected to is 130Mbps. With images that are less than 2MB, I should be able to send at least 8 pictures a second.
How do I reduce the 700ms time? Thank you!

Even if you do have a 130 Mbps connection to your other device, if you're using TCP you won't be able to use all of that bandwidth immediately. This is because of a TCP congestion-control mechanism called slow start.
At the beginning of each connection, TCP starts by sending a small amount of data, because the capacity of the link is unknown. The amount of traffic that can be sent before an ACK is received is defined by the congestion window, and depending on the configured MSS (Maximum Segment Size) the initial window can be 2-4 times the MSS. The exact amount is defined in RFC 5681:
If SMSS > 2190 bytes:
    IW = 2 * SMSS bytes and MUST NOT be more than 2 segments
If (SMSS > 1095 bytes) and (SMSS <= 2190 bytes):
    IW = 3 * SMSS bytes and MUST NOT be more than 3 segments
If SMSS <= 1095 bytes:
    IW = 4 * SMSS bytes and MUST NOT be more than 4 segments
After each received ACK, the congestion window grows by the size of one MSS, which means the window roughly doubles every round trip - exponential growth. Even though this exponential growth lets TCP reach the maximum bandwidth quickly, for relatively small transfers it is not fast enough. Even on a 100 Mbps connection you won't be able to send 8 pictures per second, because the transfer time is short and the window size will be too small for a considerable part of that transfer to let you use the full bandwidth available.
Depending on the round-trip time (to the server and back), it may take as much as 500ms to send the first 1MB of data, so there is little chance that you will be able to send more than 2 or maybe 3 images per second.
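To put rough numbers on this, here is a small back-of-the-envelope sketch (an illustration added for this writeup, not code from the answer) that models idealized slow start: the window starts at 3 * MSS per the RFC 5681 rule quoted above and doubles every round trip until the whole image has been handed to the network. The MSS, RTT, and image size are assumed values.
public class SlowStartEstimate {
    public static void main(String[] args) {
        final int mss = 1460;               // assumed Ethernet-sized segment
        final double rttMs = 50.0;          // assumed round-trip time in ms
        final long imageBytes = 2_000_000L; // ~2MB picture, as in the question

        long window = 3L * mss; // initial window: 1095 < MSS <= 2190 => IW = 3 * SMSS
        long sent = 0;
        int roundTrips = 0;
        while (sent < imageBytes) {
            sent += window; // one full congestion window sent per round trip
            window *= 2;    // slow start: the window doubles each round trip
            roundTrips++;
        }
        System.out.printf("~%d round trips, roughly %.0f ms of window growth%n",
                roundTrips, roundTrips * rttMs);
    }
}
With these assumptions the 2MB image needs about 9 round trips (~450 ms) just for the window to open up, which is consistent with the answer's estimate above.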

I switched from Java to C# to read in the bytes. That was the main bottleneck. I'm up to 3.33 fps; this includes the time writing to disk.
But yes, I verified the pictures were less than 2MB.
Thank you everyone!

Related

Fragment UDP/TCP segments in Java

I have to measure the speed of UDP and TCP between a client and a server for a university project, and for that we have to choose how much data to send per message, for example 500 bytes, 750 bytes, 1500 bytes...
In Linux I know how to reduce or increase the MTU of the segment, but I do not know how to do it in Java for my application. Is there any function to do it, or a way to make the socket's packets bigger or smaller and force it to send the amount of data that I want?
Thank you in advance!
The Java socket API is pretty high-level and doesn't give you much fine-grained control.
That said, you can write x bytes to the socket and then flush it.
You should also enable TCP_NODELAY, otherwise the packets may end up being buffered.
As long as the number of bytes is less than the underlying OS MTU, the messages should be sent as separate packets.
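As a minimal sketch of that advice (an illustration; the host, port, chunk size, and loop count are assumptions, not from the answer):
import java.io.OutputStream;
import java.net.Socket;

public class FixedSizeSender {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.168.1.10", 5000)) { // assumed host/port
            socket.setTcpNoDelay(true); // disable Nagle's algorithm so writes aren't coalesced
            OutputStream out = socket.getOutputStream();
            byte[] chunk = new byte[750]; // one of the sizes mentioned in the question
            for (int i = 0; i < 100; i++) {
                out.write(chunk);
                out.flush(); // push each chunk out immediately
            }
        }
    }
}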

Send multiple very small packets or fewer large packets?

I am currently working on a simple application that transfers screenshots across sockets. I get the screenshot by instantiating and using the Robot class as follows:
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

private Robot robot;

public Robot getRobot() {
    if (robot == null) {
        try {
            robot = new Robot();
        } catch (Exception e) {
            e.printStackTrace(); // don't swallow the failure silently
        }
    }
    return robot;
}

public BufferedImage screenshot() {
    return getRobot().createScreenCapture(getScreenRectangle());
}

public byte[] getBytes(BufferedImage image) {
    byte[] data = null;
    try {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(image, "PNG", baos); // the parameter is "image", not "img"
        data = baos.toByteArray();
        baos.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return data;
}
I then use the getBytes method above to convert the BufferedImage to a byte array, which is then written to the socket output stream. The image averages 500KB. Would it be more efficient to split this 500KB into smaller segments of, say, 5KB, or would it be better to keep it in larger chunks of, say, 30KB? My main aim here is speed and accuracy of delivery. I would also appreciate reasoning as to why either way would be more effective in those terms than the other.
Send multiple very small packets or fewer large packets?
This is a very common question in apps where QoS plays an important role. I think there isn't one correct answer, only an implementation that adapts better to your requirements.
A few aspects that you might consider:
Bigger packets reduce the percentage of overhead relative to the data.
Bigger packets have a bigger impact when data is corrupted in receiving or sending.
Smaller packets should be used when the application needs to respond to the user quickly.
In some applications where traffic is heavy and information has to be processed quickly, instead of sending the entire image for each frame, only the changed portions of the image should be sent, using appropriate protocols.
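To make the trade-off concrete, here is a minimal chunked-send sketch (an illustration added here, not code from the answer); the chunk size is the knob being discussed, and the stream is assumed to come from an already-connected socket:
import java.io.IOException;
import java.io.OutputStream;

public final class ChunkedSender {
    // Smaller chunks reach the receiver sooner; larger chunks mean fewer
    // write calls and proportionally less header overhead per byte.
    public static void sendInChunks(OutputStream out, byte[] data, int chunkSize)
            throws IOException {
        for (int offset = 0; offset < data.length; offset += chunkSize) {
            int len = Math.min(chunkSize, data.length - offset);
            out.write(data, offset, len);
            out.flush(); // hand each chunk to the TCP stack immediately
        }
    }
}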
The network packet size is limited by what is called the MTU; you can't send a single packet bigger than the MTU. The MTU is not that big (1500 bytes is the most common MTU for Ethernet). That's the network side; on the Java side, the segment size you choose could depend on the number of images transmitted concurrently (if you send 100 images concurrently, you have 100 segments allocated).
My suggestion is to try a segment size slightly less than the MTU (since you are using TCP/IP, you must account for the TCP and IP header sizes) and see what happens.
Edit: some comments point out (correctly) that from the Java perspective the MTU does not affect how the data should be chunked, since the TCP/IP layer breaks larger chunks of data into smaller units. But the poster wants to know if there is a "best" buffer size, and the response is that beyond the MTU there is no benefit (for network transmission) in increasing the buffer size on the Java side.
Melli has great reasoning in their answer about why you'd choose larger or smaller packets. You're basically trading throughput for responsiveness.
I don't agree that there isn't a correct answer; at least in this case I think there's very obviously a better answer: if you want it to arrive at the other end as fast as possible, send it all at once!
Fewer system calls to send the data
Less overhead, because you're sending fewer TCP packets and fewer redundant headers
Fragmentation is the responsibility of the lower levels of the network stack, not your application
The resulting code will be much simpler
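In that spirit, a minimal length-prefixed sketch (an illustration with assumed names, not code from the answer): the sender writes the whole encoded image in one call, and the receiver reads the 4-byte length first so it knows how many bytes make up one image.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public final class ImageFraming {
    public static void sendImage(DataOutputStream out, byte[] pngBytes) throws IOException {
        out.writeInt(pngBytes.length); // 4-byte length header
        out.write(pngBytes);           // one write; the stack handles segmentation
        out.flush();
    }

    public static byte[] receiveImage(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] data = new byte[length];
        in.readFully(data); // blocks until the whole image has arrived
        return data;
    }
}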

Socket communication between Java and C: Good buffer size

I have to implement socket communication between a server written in Java and a client written in C.
The maximum amount of data that I will have to transmit is 64KB.
Most socket communication tutorials work with buffer sizes of about 1024 bytes or less.
Is it a (performance) problem to set the buffer to 64KB?
The two software parts will run on the same machine or at least in the same local area network.
And if it is a problem: how do you handle messages that are bigger than the buffer in general?
The buffer can be smaller than the message without any problem, as long as the receiver consumes the data as fast as the sender generates it. A bigger buffer gives your receiver more time to process each message, but usually you don't need a giant buffer: for example, when you download software, a file can be more than 1GB, yet your browser/FTP client just reads the buffer repeatedly and stores the data in a file on your local hard disk.
And in general, you can ignore the language used to create the client or the server; only the network protocol matters. Every language has its own libraries to handle sockets with ease.
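For the "bigger than the buffer" case, the usual pattern is a read loop. Here is a minimal Java sketch (an illustration; the 1KB buffer and the known message size are assumptions) that accumulates a full message from a stream using a buffer much smaller than the message:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public final class MessageReader {
    public static byte[] readMessage(InputStream in, int messageSize) throws IOException {
        byte[] buffer = new byte[1024]; // buffer far smaller than the 64KB message
        ByteArrayOutputStream message = new ByteArrayOutputStream(messageSize);
        int remaining = messageSize;
        while (remaining > 0) {
            int n = in.read(buffer, 0, Math.min(buffer.length, remaining));
            if (n == -1) {
                throw new IOException("stream closed before the full message arrived");
            }
            message.write(buffer, 0, n);
            remaining -= n;
        }
        return message.toByteArray();
    }
}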
I suggest a larger buffer, but I suspect you'll see less than a 5% difference whether you use 1 KB or 64 KB.
Note: b = bit and B = byte, k = 1000 and K = 1024, and it is best not to get them confused (not that it is likely to matter here).

Java/Android: UDP - can't receive larger packets (but still < 64k)

I am working on a client-server Android application (two applications - one client and one server). The server is expected to send video to the client over UDP. I am dividing the video into individual frames, each of which ends up being about 50,000 bytes, which is theoretically still less than the maximum for UDP.
I am currently testing on two Android emulators running on the same machine, and using UDP port forwarding in between to connect them.
I have set up the UDP transmission such that if I send a byte array of ~5000 bytes or less, it works fine. If I attempt to send my frame byte arrays (50,000 bytes), the application freezes on the DatagramSocket.receive() call on the client.
Is there any way to set up the UDP transmission to receive a larger byte size?
Thanks for your help.
But it isn't less than the practical maximum for UDP, which is 534 or 576 bytes (I can't remember which at the moment, sorry). Whichever it is, that is the largest packet size that will avoid IP fragmentation. Once a UDP packet is fragmented into N fragments, it is N times as likely to be lost.
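A common workaround is to chunk each frame at the application level so no single datagram risks fragmentation. Below is a minimal sketch (an illustration, not from the answer; the 508-byte chunk size and port are assumptions, and a real protocol would also need sequence numbers and loss handling):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public final class FrameChunker {
    public static void sendFrame(DatagramSocket socket, InetAddress dest, byte[] frame)
            throws Exception {
        final int chunkSize = 508; // conservative payload size that avoids IP fragmentation
        for (int offset = 0; offset < frame.length; offset += chunkSize) {
            int len = Math.min(chunkSize, frame.length - offset);
            // A real protocol would prepend a frame id + chunk index so the
            // client can reassemble frames and detect missing chunks.
            socket.send(new DatagramPacket(frame, offset, len, dest, 6000)); // assumed port
        }
    }
}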

TCP round trip time calculation using Java

When I do traceroute from one Solaris machine to another over a 10-gigabit network interface, it takes 0.073 ms for 40-byte packets.
When I do the same thing in Java, the time is way longer, even after 10K iterations. What could be the reason?
Java: Sender (Snippet)
Socket sendingSocket = new Socket(address, RECEIVER_PORT);
sendingSocket.setTcpNoDelay(true);
OutputStream outputStream = sendingSocket.getOutputStream();
InputStream inputStream = sendingSocket.getInputStream(); // initialized like outputStream
byte[] msg = new byte[64]; // assume that it is populated
for (int i = 0; i < 10000; i++) {
    long start = System.nanoTime();
    outputStream.write(msg, 0, 64);
    outputStream.flush();
    inputStream.read(msg, 0, 64); // wait for the 64-byte echo from the receiver
    long end = System.nanoTime(); // end - start is one measured round trip
}
It takes way longer: ~69 ms, and it does not even depend on the byte size. Even if I reduce it to, say, a 1-byte array, it still takes 69 ms. Any comments/suggestions?
Other observations:
1. OutputStream.write and flush only take 6 microseconds.
2. Similarly, on the other end, the TCPReceiver side that receives and writes back only takes 6 microseconds.
Solution:
Thank you to all who responded to this query.
I found that it was due to the socket buffer size; the defaults set on the Solaris machine were:
Receive buffer size: 49152 bytes.
Send buffer size: 7552 bytes.
I increased the socket buffer sizes and the performance almost matches traceroute.
You are not comparing like with like: ICMP and TCP are very different protocols.
If you want to determine whether the latency is in your code, the JVM, the Solaris TCP stack, or the network, you'll have to start with tcpdump / Wireshark etc.
This is probably due to a number of factors. For starters, establishing a TCP channel takes time: several packets have to be exchanged between the two endpoints to set up the reliable medium. That's not the case with ICMP messages, which are simply single packets. In fact, because you see no difference in the time regardless of the data size, you can likely assume that the time required to actually transmit the data (a very small amount in either case on a 10-gigabit connection) is negligible compared to the time it takes to establish the channel. Also, it is entirely possible that there is some overhead associated with using Java (a bytecode language) rather than something like C or C++ that runs natively on the hardware.
The time it takes to connect can be about 20 ms, so you need to test using an existing connection.
The TCP stack is quite slow going through the kernel; it takes about 50-100 microseconds on many machines. You can reduce this substantially using kernel-bypass drivers/support.
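Since the asker's own fix was the socket buffer size, here is a minimal sketch of setting the buffers explicitly in Java (an illustration; the host, port, and 256KB request are assumptions, while the reported Solaris defaults come from the solution above):
import java.net.Socket;

public class BufferSizeDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.168.1.20", 9000)) { // assumed host/port
            socket.setTcpNoDelay(true);
            // Defaults reported above: receive 49152 bytes, send 7552 bytes.
            // Request larger buffers; the OS may round or cap the values.
            socket.setSendBufferSize(256 * 1024);
            socket.setReceiveBufferSize(256 * 1024);
            System.out.println("send=" + socket.getSendBufferSize()
                    + " recv=" + socket.getReceiveBufferSize());
        }
    }
}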
