Android N - DataOutputStream.writeInt() behaviour changed?

I have a simple class that handles socket connections:
public class SimpleConnection {
    // Socket, input and output streams
    protected Socket mSocket;
    protected DataInputStream mIn;
    protected DataOutputStream mOut;

    public boolean createConnection(String ip, int port) {
        SocketAddress socketAddress = new InetSocketAddress(ip, port);
        mSocket = new Socket();
        try {
            mSocket.connect(socketAddress, 3000);
            mIn = new DataInputStream(mSocket.getInputStream());
            mOut = new DataOutputStream(mSocket.getOutputStream());
        } catch (IOException e) {
            return false;
        }
        return true;
    }

    public boolean sendData(byte[] data) {
        try {
            mOut.writeInt(data.length);
            mOut.write(data);
            mOut.flush();
        } catch (Exception e) {
            e.printStackTrace();
            closeSocket();
            return false;
        }
        return true;
    }
}
This worked until Android N. On Android N, mOut.writeInt(data.length) appears to send four zeros instead of the value of data.length. This causes the server to misinterpret the message and the whole program to stop working.
I was able to "fix" the problem by converting the integer to a byte[4]:
byte[] len = Utilities.intToByteArray(data.length);
mOut.write(len);
intToByteArray is shown here.
My question is: why isn't writeInt working anymore on Android N? On other Android versions this code runs just fine.
I use the latest Android Studio with Java 8, Gradle 2.1.3 and Android build tools 24.0.2.
Edit:
The receiving part looks like this in Qt:
void readData(QTcpSocket* client_) {
    while (client_->bytesAvailable()) {
        int expected_length_;
        QDataStream s(client_);
        s >> expected_length_;
        qLog(Debug) << expected_length_;
        // Read data with expected_length_
        QBuffer buffer_;
        buffer_.write(client_->read(expected_length_));
    }
}
expected_length_ is 0, whereas with the fix it is 15. Interestingly, client_->bytesAvailable() is 1 with the writeInt variant on Android N.
I did another test using nc -p 1234 -l 0.0.0.0 | xxd:
▶ nc -p 1234 -l 0.0.0.0 | xxd
00000000: 0000 000f 0815 1001 aa01 0808 8edb 0110 ................
00000010: 0118 00 ...
This is the output for both variants... so it seems writeInt() works as expected, but why does it work on Android <= 6 and not on Android 7?!
Edit2:
After analyzing the traffic I found out that the integer is split across multiple TCP frames. I changed the server code to check whether client_->bytesAvailable() >= 4 and only then read the integer from the socket. This fixed the problem, and the writeInt() variant now works too.
But why did the behaviour suddenly change?

"After analyzing the traffic I found out that writeInt() flushes the data"
It did exactly what you told it to do. DataOutputStream is not buffered, and there is no BufferedOutputStream under it, so writeInt() wrote four bytes to the network.
"prematurely."
There is no such thing as 'prematurely' in TCP. TCP makes no guarantees about packetization or segmentation. If you want to control this so-called 'premature flush', use mOut = new DataOutputStream(new BufferedOutputStream(mSocket.getOutputStream())); and flush it yourself after writing the data.
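For illustration, a minimal sketch of that buffered variant, reusing the mSocket/mOut fields and the sendData() method from the question (closeSocket() is assumed from the original class):
// In createConnection(): wrap the socket stream so small writes are coalesced
mOut = new DataOutputStream(new BufferedOutputStream(mSocket.getOutputStream()));

// In sendData(): the length prefix and payload now usually leave the socket together
public boolean sendData(byte[] data) {
    try {
        mOut.writeInt(data.length); // 4-byte big-endian length prefix, buffered
        mOut.write(data);           // payload, buffered
        mOut.flush();               // hand everything to the socket in one go
    } catch (IOException e) {
        closeSocket();
        return false;
    }
    return true;
}
Note that even with buffering, TCP still makes no guarantee about segment boundaries, so the receiver must still cope with partial reads.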
"So some frames have only one or two bytes, that's why bytesAvailable is only 1."
This was your main problem. You are misusing available(). It isn't an end of message indicator. See the Javadoc.
You also didn't check to see that you had actually read the four bytes of the length word.
As far as I can see you aren't checking to see that you've received all the bytes of data either.
In general you have to loop until you get everything that is expected.
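On the Java side, a robust receive of such a length-prefixed message could look like this (a sketch; DataInputStream.readInt() and readFully() both block internally until the requested bytes have arrived, whatever the segmentation):
// Sketch: read one length-prefixed message, independent of how TCP split it
public static byte[] readMessage(DataInputStream in) throws IOException {
    int length = in.readInt(); // blocks until all 4 length bytes have arrived
    byte[] payload = new byte[length];
    in.readFully(payload);     // blocks until the full payload has arrived
    return payload;
}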
"The server-side QDataStream reads the int with only one or two bytes available. After adding if (client_->bytesAvailable() < 4) break; and waiting for more data, it works. But I still don't understand why the behaviour changed."
It can change any time. Your code broke because it relied on several invalid assumptions.

Related

Error while reading data through socket communication

Following scenario that explains my problem.
I've a PLC that acts as a server socket program. I've written a Client Java program to communicate through socket communication with the PLC.
Steps that take place in this process are:
1) Every second my client program connects to the PLC, reads the data from the stream, stores it temporarily in a ByteArrayOutputStream, and closes both the input stream and the socket. The following snippet gives the idea:
try {
    socket = new Socket(host, port);
    is = socket.getInputStream();
    outputBuffer = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    int read;
    if ((read = is.read(buffer)) != -1) {
        outputBuffer.write(buffer, 0, read);
    }
} catch (UnknownHostException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    try {
        System.out.println("Before closing the socket");
        try {
            is.close();
            socket.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("After closing the socket");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
2) Processing the stored data according to my requirements is what I'm trying to do. So every second the client program connects to the server, reads the data (if any is present), stores it, closes the socket and processes the data. This has to run for a very long time, probably as long as the server program is up, which may be several weeks.
3) The problem I'm facing is that I'm able to run the above for 1-2 hours, but after that the client program is unable to fetch data from the server program (the PLC in this case), even though both are connected through the socket. That is, 128 bytes of data are present, but the client program isn't able to read them. This starts happening after the program has run successfully for almost two hours.
4) Here is the relevant code:
public class LoggingApplication {
    public static void main(String[] args) throws NumberFormatException {
        if (args.length > 0 && args.length == 2) {
            String ipAddress = mappingService.getIpAddress();
            int portNo = (int) mappingService.getPortNo();
            ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
            execService.schedule(new MyTask(execService, ipAddress, portNo, mappingService), 1000, TimeUnit.MILLISECONDS);
        } else {
            throw new IllegalArgumentException("Please pass IPAddress and port no as arguments");
        }
    }
}
Runnable Code:
public class MyTask implements Runnable {
    public ScheduledExecutorService execService;
    private String ipAddress;
    private int portNo;
    private ConfigurationMappingService mappingService;
    private MySocketSocketUtil mySocketSocketUtil;

    public MyTask(ScheduledExecutorService execService, String ipAddress, int portNo, ConfigurationMappingService mappingService) {
        this.execService = execService;
        this.ipAddress = ipAddress;
        this.portNo = portNo;
        this.mappingService = mappingService;
    }

    public void run() {
        MySocketSocketUtil mySocketSocketUtil = new MySocketSocketUtil(ipAddress, portNo);
        execService.schedule(new MyTask(execService, ipAddress, portNo, mappingService), 1000, TimeUnit.MILLISECONDS);
        mySocketSocketUtil.getData(); // fetches data fine for about 2 hours, then keeps returning empty data
        /*
         * Some code
         */
    }
}
Here's where I'm having the problem:
mySocketSocketUtil.getData(); is able to fetch the data for almost two hours, but from then on it just keeps returning empty data. It's a big question, I know, and I want to understand what might have gone wrong.
Edit: I'm ignoring the end-of-stream check and not closing the socket based on it because I know I'm always going to read only the first 1024 bytes of data. That is why I'm closing the socket in the finally block.
socket = new Socket(host, port);
if(socket != null && socket.isConnected())
It is impossible for socket to be null or socket.isConnected() to be false at this point. Don't write pointless code.
if((read = is.read(buffer)) != -1) {
outputBuffer.write(buffer, 0, read);
};
Here you are ignoring a possible end of stream. If read() returns -1 you must close the socket: once it has returned -1, it will keep returning -1 on every subsequent call. This completely explains your 'empty data':
from then, it's just getting empty data and it's keep on giving empty data from then, and so on
And you should not create a new Socket unless you have received -1 or an exception on the previous socket.
} else {
System.err.println("Socket couldn't be connected");
}
Unreachable: see above. Don't write pointless code.
You should never disconnect from the established connection. Connect once in the LoggingApplication. Once the socket is connected keep it open. Reuse the socket on the next read.
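A minimal sketch of that approach (connect once, keep reading until read() returns -1; host, port and handleData() are placeholders, not names from the question):
try (Socket socket = new Socket(host, port);
     InputStream is = socket.getInputStream()) {
    byte[] buffer = new byte[1024];
    int read;
    while ((read = is.read(buffer)) != -1) {
        handleData(buffer, read); // process only the bytes actually read
    }
    // read == -1: the PLC closed the connection; only now does reconnecting make sense
} catch (IOException e) {
    e.printStackTrace();
}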
I think there are a couple of points you need to fix before getting to the solution of your problem. Please try the following suggestions first.
As @EJP said, this code block is not needed:
if(socket != null && socket.isConnected()) {
Also, you are using a byte array of length 1024 and no while or for loop to read the data stream. Are you expecting only a single block of data that will never exceed 1024 bytes?
byte[] buffer = new byte[1024];
int read;
if((read = is.read(buffer)) != -1) {
This is also not needed as it is unreachable.
} else {
System.err.println("Socket couldn't be connected");
}
Can you explain the data stream behavior you are expecting?
Last but not least, is.read(buffer) is a blocking call, so if there is no data to read yet it will hold the thread at that point.
Please try to answer the questions I have asked.
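To illustrate the read loop being asked about, a sketch of reading a fixed number of bytes (the byte count is only an example; read() may return fewer bytes than requested, so a single call is not enough):
// Sketch: keep calling read() until `expected` bytes have arrived or the stream ends
static int readExactly(InputStream is, byte[] buffer, int expected) throws IOException {
    int total = 0;
    while (total < expected) {
        int read = is.read(buffer, total, expected - total);
        if (read == -1) {
            break; // peer closed the stream before `expected` bytes arrived
        }
        total += read;
    }
    return total; // number of bytes actually read
}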
@KishoreKumarKorada, from your description in the comment section it seems like you are monitoring data changes on the server side. A socket stream works in a read-once fashion. So:
First, you need to request from the server every time, and the server needs to RESEND the data on every request.
Second, the way you presented it, you are operating at the byte level, which is not a very good way to do it unless you have a legitimate reason to do so. A better way is to wrap the data in JSON or XML format and send it over the stream. But to reduce bandwidth consumption you may sometimes need to operate on the raw byte stream; you need to decide on that.
Third, for monitoring data changes, a better way is to use a timestamp: compare when the data last changed on the server side with the timestamp stored on the client side. If they match, the data has not changed; otherwise fetch the data from the server and update the client.
Fourth, when there is data available that you are not able to read, can you debug the is.read(...) statement to see whether it gets executed and whether execution enters the if block or the condition evaluates to false? If it does execute, examine the read value and let me know what you find.
Thanks.

Why a new InputStream will still read what is left over from an old InputStream?

Well please see this question and Jon Skeet's answer first.
This time I have this server:
public class SimpleServer {
    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(8888);
        System.out.println("Server Socket created, waiting for client...");
        Socket accept = serverSocket.accept();
        InputStreamReader inputStreamReader = new InputStreamReader(accept.getInputStream());
        char[] chars = new char[5];
        System.out.println("Client connected, waiting for input");
        while (true) {
            inputStreamReader.read(chars, 0, chars.length);
            for (int i = 0; i < 5; i++) {
                if (chars[i] != '\u0000') {
                    System.out.print(chars[i]);
                }
            }
            inputStreamReader = new InputStreamReader(accept.getInputStream());
            chars = new char[5];
        }
    }
}
And when I send the characters "123456789" from the client, that is exactly what I see in the server's terminal, but shouldn't I be seeing only 12345?
Why the difference in the behaviour?
Your client has been set up to only send 5 characters at a time, and then flush - so even though the InputStreamReader probably asked for more data than that, it received less, and then found that it could satisfy your request for 5 characters with what it had got.
Try changing the code on the server to only read 3 characters at a time instead of 5 (but leave the client sending 5) and you may well see a difference in behaviour. You may not, mind you - it will depend on a lot of different things around the timing of how the data is moving around.
Basically the lesson should be that you don't want to be constructing multiple readers over the same stream - it becomes hard to predict what will happen, due to buffering.
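For comparison, a sketch of the server loop with a single reader that honours the count returned by read() (only the loop changes; the rest of SimpleServer stays as in the question):
InputStreamReader inputStreamReader = new InputStreamReader(accept.getInputStream());
char[] chars = new char[5];
int read;
while ((read = inputStreamReader.read(chars, 0, chars.length)) != -1) {
    // print only the characters actually read in this call
    System.out.print(new String(chars, 0, read));
}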

java socket outputstream and broken pipe

I have to send buffers of dynamic size over the socket stream.
It works correctly, but when I try to send multiple buffers with a size bigger than
int my_buffer_size = 18 * 1024; (this is an indicative value)
I get the following error (for some writes):
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
My code is very simple:
For example, if I want to send a big file, I read the file stream with
byte[] bs = new byte[my_buffer_size];
while (...) {
    fileInputStream.read(bs);
    byte[] myBufferToSend = new byte[sizeBuffer];
    DataOutputStream out = new DataOutputStream(cclient.getOutputStream());
    out.writeInt(myBufferToSend.length);
    out.write(myBufferToSend);
    out.flush();
}
(The file is just a test; the buffer size can vary.)
The SendBufferSize is 146988.
Is there a way to fix the broken pipe error? I've read around but haven't been able to solve the problem.
Thank you, any help is appreciated.
I use the classic ServerSocket serverSocket; and Socket cclient.
'Broken pipe' means that you've written data to a connection that has already been closed by the other end.
Ergo the problem lies at the other end, not in this code. Possibly the other end doesn't really understand your length-word protocol for example, or doesn't implement it correctly.
If it's anything like this code it won't, because you're ignoring the result returned by read() and assuming that it fills the buffer. It isn't specified to do that, only to transfer at least one byte.
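For reference, a sketch of a sending loop that does use the count returned by read() (cclient, fileInputStream and my_buffer_size are taken from the question; the chunked length-prefix framing is only one possible protocol):
DataOutputStream out = new DataOutputStream(cclient.getOutputStream());
byte[] bs = new byte[my_buffer_size];
int read;
while ((read = fileInputStream.read(bs)) != -1) {
    out.writeInt(read);     // length of this chunk
    out.write(bs, 0, read); // only the bytes actually read from the file
}
out.flush();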
In general, receiving huge blocks is not supported well by DataInputStream, because the read method just delegates to the underlying socket input stream, and that stream does not complain about not having read everything. E.g. in Oracle Java 8 you get about 2^16 bytes and the rest is ignored. So when you close the socket after DataInputStream.read has returned, the sender observes a 'broken pipe' while still trying to send the rest of the huge block. The solution is a windowed read. Below is a DataInputStream subclass that does precisely this.
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class HugeDataInputStream extends DataInputStream
{
    int maxBlockLength;

    public HugeDataInputStream(InputStream in)
    {
        this(in, 0x8000);
    }

    public HugeDataInputStream(InputStream in, int maxBlockLength)
    {
        super(in);
        this.maxBlockLength = maxBlockLength;
    }

    // Reads the block in windows of at most maxBlockLength bytes,
    // looping until the whole buffer is filled or the stream ends.
    public int readHuge(byte[] block) throws IOException
    {
        int n = block.length;
        if (n > maxBlockLength)
        {
            int cr = 0;
            while (cr < n)
            {
                int read = super.read(block, cr, Math.min(n - cr, maxBlockLength));
                if (read == -1)
                {
                    break; // end of stream before the whole block arrived
                }
                cr += read;
            }
            return cr;
        }
        else
        {
            return super.read(block);
        }
    }
}
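Usage would be along these lines (a sketch; the socket variable and the 64 KiB block size are illustrative):
HugeDataInputStream in = new HugeDataInputStream(socket.getInputStream());
byte[] block = new byte[64 * 1024]; // larger than maxBlockLength, so readHuge() loops in windows
int received = in.readHuge(block);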

Receiving UDP in Java without dropping packets

I have a library which I need to improve since it is dropping too many packets. I want to receive an RTP stream, but the streamer is sending bursts of 30-40 packets within a millisecond (an MJPEG stream). I can see that the packets are complete when monitoring the traffic in Wireshark, but when trying to receive them in Java I lose a lot of them.
I have already been able to improve the library's behaviour by implementing a ring buffer that is constantly filled whenever a packet is available, and a separate reader thread that reads from this buffer. But I'm still not able to get all the packets from my socket that I can see in Wireshark. Through the RTP sequence numbers I can check in the reader thread whether the packet being processed is the one expected.
The following code is handling packet receiving:
private volatile byte[][] packetBuffer = new byte[1500][BUFFER_SIZE];
private volatile DatagramPacket[] packets = new DatagramPacket[BUFFER_SIZE];
private volatile int writePointer = 0;

public void run() {
    Thread reader = new RTPReaderThread();
    reader.start();
    while (!rtpSession.endSession) {
        // Prepare a packet
        packetBuffer[writePointer] = new byte[1500];
        DatagramPacket packet = new DatagramPacket(packetBuffer[writePointer], packetBuffer[writePointer].length);
        // Wait for it to arrive
        if (!rtpSession.mcSession) {
            // Unicast
            try {
                rtpSession.rtpSock.receive(packet);
            } catch (IOException e) {
                if (!rtpSession.endSession) {
                    e.printStackTrace();
                } else {
                    continue;
                }
            }
        } else {
            // Multicast
            try {
                rtpSession.rtpMCSock.receive(packet);
            } catch (IOException e) {
                if (!rtpSession.endSession) {
                    e.printStackTrace();
                } else {
                    continue;
                }
            }
        }
        packets[writePointer] = packet;
        this.incrementWritePointer();
        synchronized (reader) {
            reader.notify();
        }
    }
}
What I already know:
I know that UDP is allowed to lose packets, but I still want to achieve the best possible result. If Wireshark can see a packet, I want to be able to retrieve it as well, if possible.
I know that the ring buffer is never full while losing packets, so this doesn't make me lose packets either. I tried with BUFFER_SIZEs of 100 and even 1000, but I already lose the first packets before a total of 1000 packets has been sent.
So the question is: what is best practice to receive as many packets as possible from a DatagramSocket? Can I improve handling of packet bursts?
Try setting the SO_RCVBUF size on the datagram socket with rtpSock.setReceiveBufferSize(size). This is only a suggestion to the OS, and the OS may not honor it, especially if it is too large. But I would try setting it to (SIZE_OF_PACKET * 30 * 100), where 30 is for the number of packets in a burst, and 100 is a guess of the number of milliseconds where you will not be able to keep up with the arrival speed.
Note that if your code cannot keep up with processing at the arrival speed in general, the OS has no choice but to drop packets.
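A sketch of that suggestion (the port and sizing factors are illustrative; the granted size should be checked because the OS may clamp the request):
DatagramSocket rtpSock = new DatagramSocket(5004);
rtpSock.setReceiveBufferSize(1500 * 30 * 100);                    // requested SO_RCVBUF
System.out.println("Granted: " + rtpSock.getReceiveBufferSize()); // what the OS actually allowed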

Socket not Receiving Input in Java 7

I have run into an interesting issue trying to upgrade one of my applications from Java 6 to Java 7. It is a simple Java socket program. It sends a command to a COM socket and receives a response. It works perfectly in a Java 6 environment, but when I try to run the same code in a Java 7 environment, the socket appears to receive nothing on the InputStream.
I can confirm that the COM socket it connects to does receive the command and sends the response. This is run on the exact same machine in both cases with the firewall disabled, and it is the exact same code run both times.
Has something changed in Java 7, do I have some deeper flaw, or is this simply a Java bug?
Here is a slightly stripped version of the code.
public static void main(String[] arguments) throws Exception {
    InetAddress server = InetAddress.getByName(serverAddress);
    Socket sock = SSLSocketFactory.getDefault().createSocket(server.getHostAddress(), port);
    InputStream in = sock.getInputStream();
    OutputStream out = sock.getOutputStream();
    out.write(command.getBytes()); // Is valid command
    String token = "";
    responseReader: while (true) {
        try {
            Thread.sleep(1);
        } catch (InterruptedException exception) {}
        byte[] d = new byte[in.available()];
        int avail = in.read(d);
        for (int i = 0; i < avail; i++) {
            if (d[i] == fieldSeperator) {
                token = "";
            } else if (d[i] == commandSeperator) {
                break responseReader;
            } else {
                token += (char) d[i];
            }
        }
    }
}
I've tried as much as I can think of, most of the time knowing it shouldn't matter: using different methods of reading the stream, casting to SSLSocket and making different calls, adding some sleeps.
The code is wrong. You shouldn't use available() like that. If there is no data available you will allocate a zero-length buffer and execute a zero-length read, which will return zero without blocking. Use a constant like 8192 for the buffer size, and allocate the buffer outside the loop. And get rid of the sleep() too.
There are few if any correct uses of available(), and this isn't one of them.
And note that available() always returns zero for an SSLSocket, and has always done so right back to Java 1.3 and the separate JSSE download. So I am unable to accept that the same code worked in Java 6.
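A sketch of the read loop without available() or sleep(), keeping the tokenizing logic from the question (fieldSeperator and commandSeperator as defined there):
byte[] d = new byte[8192]; // fixed-size buffer allocated once, outside the loop
int read;
responseReader: while ((read = in.read(d)) != -1) { // blocks until data arrives
    for (int i = 0; i < read; i++) {
        if (d[i] == fieldSeperator) {
            token = "";
        } else if (d[i] == commandSeperator) {
            break responseReader;
        } else {
            token += (char) d[i];
        }
    }
}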
