Can someone explain to me why this works just fine with in.available()>0 commented out, but when I put it back in it breaks?
mySocket = new Socket("blahblah", 12345);
BufferedInputStream in = new BufferedInputStream(mySocket.getInputStream());
....
char[] result = new char[length];
for (int i = 0; i < length && !mySocket.isClosed() /*&& in.available()>0*/; i++) {
    result[i] = (char) in.read();
}
More specifically: I'm making an Android app where a user can search for a term, that search is sent to some thingy in interspace, I get back results in XML form, and do stuff with them. When the XML I get back is small enough (see "length" in the code above), the code works just fine with in.available()>0 left in. But if the length is large, in.available() returns 0, while with the check commented out everything continues to run smoothly.
Why is that? And is it something I need to worry about and fix?
in.available() lets you know whether you can read data at that moment without blocking. As sockets carry a stream of data, the data may not be available immediately, but only a short time later. E.g. on a 1 Gbit connection, full packets arrive no closer than about 15 microseconds apart, which is a long time for a computer.
I think the reason in.available() == 0 when the data is large is that the sender hasn't had a chance to write it all to your socket yet. You shouldn't need to use in.available(). Also, I wouldn't suggest reading a single char at a time; that will be really slow with a lot of data and VERY chatty over the network. Consider reading into a byte array of size "length".
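For example, a bulk-read loop along these lines (a sketch: in and length come from the question, and decoding the bytes as UTF-8 at the end is my assumption about the XML):
byte[] buffer = new byte[length];
int offset = 0;
// A single read() may return fewer bytes than requested, so loop until
// the buffer is full or the stream ends.
while (offset < length) {
    int count = in.read(buffer, offset, length - offset);
    if (count == -1) {
        break; // stream ended before 'length' bytes arrived
    }
    offset += count;
}
String xml = new String(buffer, 0, offset, "UTF-8");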
Right now, I'm trying to write a GUI-based Java tic-tac-toe game that functions over a network connection. It essentially works at this point; however, I have an intermittent error in which several chars sent over the network connection are lost during gameplay. One case looked like this, when println statements were added to the message sends/reads:
Player 1:
Just sent ROW 14 COLUMN 11 GAMEOVER true
Player 2:
Just received ROW 14 COLUMN 11 GAMEOV
I'm pretty sure the error is happening when I read over the network. The read takes place in its own thread, with a BufferedReader wrapped around the socket's InputStream, and looks like this:
try {
    int input;
    while ((input = dataIn.read()) != -1) {
        char msgChar = (char) input;
        String message = msgChar + "";
        // Keep appending while ready() claims more data is already buffered
        while (dataIn.ready()) {
            msgChar = (char) dataIn.read();
            message += msgChar;
        }
        System.out.println("Just received " + message);
        this.processMessage(message);
    }
    this.sock.close();
}
My sendMessage method is pretty simple (just a write over a DataOutputStream wrapped around the socket's OutputStream), so I don't think the problem is happening there:
try {
    dataOut.writeBytes(message);
    System.out.println("Just sent " + message);
}
Any thoughts would be highly appreciated. Thanks!
As it turns out, the ready() method guarantees only that the next read WON'T block. Consequently, !ready() does not guarantee that the next read WILL block, just that it could.
I believe that the problem here had to do with the TCP stack itself. Being stream-oriented, TCP makes no guarantees about how the bytes written to the socket are grouped into packets: their order is preserved, but message boundaries are not. I suspect that the TCP stack was breaking up the sent string in a way that made sense to it, and that in the process the ready() method saw a gap in the buffered data and returned false, in spite of the fact that more information was still on its way.
I refactored the code to append a newline character to every message sent, and then simply performed a readLine() instead. This makes my network protocol depend on the newline character as a message delimiter rather than on the ready() method. I'm happy to say this fixed the problem.
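A minimal sketch of that scheme, using the same dataOut/dataIn as in the question:
// Sender: terminate every message with a newline so the receiver
// has an explicit delimiter.
dataOut.writeBytes(message + "\n");
dataOut.flush();

// Receiver: readLine() blocks until a whole newline-terminated message
// (or end of stream) has arrived, so no more torn messages.
String line;
while ((line = dataIn.readLine()) != null) {
    this.processMessage(line);
}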
Thanks for all your input!
Try flushing the OutputStream on the sender side. The last bytes might remain in some internal buffer.
It really matters what types of stream objects you use to operate on the data. It seems to me that this trouble is caused by the fact that you use a DataOutputStream for sending info but something else for receiving. Try sending and receiving via DataOutputStream and DataInputStream respectively.
As a matter of fact, if you send something by calling dataOut.writeBoolean(b) but try to receive it by calling, say, dataIn.readUTF(), you will eventually get garbage. DataInputStream and DataOutputStream are type-sensitive; refactor your code with that in mind.
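For example, one matched pair is writeUTF() on the sender and readUTF() on the receiver (a sketch, not the poster's actual code; socket is assumed):
DataOutputStream dataOut = new DataOutputStream(socket.getOutputStream());
dataOut.writeUTF(message); // writes a length-prefixed, modified-UTF-8 string
dataOut.flush();

DataInputStream dataIn = new DataInputStream(socket.getInputStream());
String received = dataIn.readUTF(); // reads back exactly one writeUTF() payload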
Moreover, an InputStream's read() returns a single byte. Here you try to convert that one single byte into a char, while in Java a char consists of two bytes.
msgChar = (char)dataIn.read();
Check whether that is the reason for the data loss.
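If multi-byte characters do turn out to be involved, one way to avoid decoding bytes by hand is to let a Reader do it (a sketch; the UTF-8 charset and the sock variable are assumptions based on the question):
BufferedReader reader = new BufferedReader(
        new InputStreamReader(sock.getInputStream(), "UTF-8"));
int c = reader.read(); // a fully decoded char, or -1 at end of stream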
I've run into mind-twisting bafflement after putting my hands into an old legacy project. The project consists of a Java application and a C++ application, which communicate via sockets. Both applications are designed to work in cross-platform environments, so I'd be happy to keep the code as universal as possible.
I ended up rewriting parts of the communication logic, since the previous implementation had some issues with foreign characters. Now I ran into a problem with endianness, which I hope someone could spell out for me.
The Java software writes messages to the socket with an OutputStreamWriter, using UTF-16LE encoding, as follows.
OutputStream out = _socket.getOutputStream();
outputWriter = new OutputStreamWriter(new BufferedOutputStream(out), "UTF-16LE");
// ... create msg
outputWriter.write(msg, 0, msg.length());
outputWriter.flush();
The C++ program receives the message byte by byte, as follows:
char buf[1];
std::queue<char> q;
std::u16string recUtf16Msg;
do {
    int iResult = recv(socket, buf, 1, 0);
    if (iResult <= 0)
        break; // Error or EOS
    for (int i = 0; i < iResult; i++) {
        q.push(buf[i]);
    }
    while (q.size() >= 2) {
        char firstByte = q.front();
        q.pop();
        char secondByte = q.front();
        q.pop();
        char16_t utf16char = (firstByte << (sizeof(char) * CHAR_BIT)) ^
                             (0x00ff & secondByte);
        // Change endianness, if necessary
        utf16char = ntohs(utf16char);
        recUtf16Msg.push_back(utf16char);
    }
    // ... end of message check removed for clarity
} while (true);
Now the issue I'm really facing is that the code above actually works, but I'm not really sure why. The C++ side is written to receive messages in network byte order (big-endian), but it seems that Java is sending the data little-endian.
On the C++ side we even use the ntohs function to convert to the byte order desired by the host machine. According to the specification, I understand that ntohs is supposed to swap endianness if the host platform uses little-endian byte order. However, ntohs here actually swaps the endianness of the received little-endian characters, which ends up big-endian, and the software works flawlessly.
Maybe someone can point out what exactly is happening? Do I accidentally swap the bytes already when creating utf16char? Why does ntohs make everything work, while it seems to act exactly opposite to the documentation? To compile I'm using Clang with libc++.
I left out parts of the code for clarity, but you should get the general idea. Also, I'm aware that using a queue and a dynamic array may not be the most efficient way of handling the data, but it's clean and performs well enough for this purpose.
How can I make this piece of code extremely quick?
It reads a raw image using a RandomAccessFile (in) and writes it to a file using a DataOutputStream (out):
final int WORD_SIZE = 4;
byte[] singleValue = new byte[WORD_SIZE];
long position = 0;

for (int i = 1; i <= 100000; i++) {
    out.writeBytes(i + " ");
    for (int j = 1; j <= 17; j++) {
        in.seek(position);
        in.read(singleValue);
        String str = Integer.toString(ByteBuffer.wrap(singleValue)
                .order(ByteOrder.LITTLE_ENDIAN).getInt());
        out.writeBytes(str + " ");
        position += WORD_SIZE;
    }
    out.writeBytes("\n");
}
The inner for loop writes a new line to the file every 17 values.
Thanks
I assume that the reason you are asking is that this code runs really slowly. If that is the case, then one reason is that each seek and read call makes a system call; a RandomAccessFile has no buffering. (I'm guessing that singleValue is a byte[] of length 1.)
So the way to make this go faster is to step back and think about what it is actually doing. If I understand it correctly, it is reading every 4th byte in the file, converting those bytes to decimal numbers and outputting them as text, 17 to a line. You could easily do that using a BufferedInputStream like this:
int b = bis.read(); // read a byte
bis.skip(3); // skip 3 bytes.
(with a bit of error checking ....). If you use a BufferedInputStream like this, most of the read and skip calls will operate on data that has already been buffered, and the number of syscalls will reduce to 1 for every N bytes, where N is the buffer size.
UPDATE - my guess was wrong. You are actually reading alternate words, so ...
bis.read(singleValue);
bis.skip(4);
"Every 100000 offsets I have to jump 200000 and then do it again till the end of the file."
Use bis.skip(800000) to do that. It should do a big skip by moving the file position without actually reading any data. One syscall at most. (For a FileInputStream, at least.)
You can also speed up the output side by a roughly equivalent amount by wrapping the DataOutputStream around a BufferedOutputStream.
But System.out is already buffered.
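Putting the buffering suggestions together, a rough sketch of the whole loop (file names are placeholders, and I'm following the sequential access pattern of the code as posted; error handling kept minimal):
import java.io.*;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DumpWords {
    public static void main(String[] args) throws IOException {
        // Buffer both ends so most read/write calls are in-memory copies
        // instead of syscalls.
        BufferedInputStream bis =
                new BufferedInputStream(new FileInputStream("image.raw"));
        DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream("out.txt")));
        byte[] word = new byte[4];
        for (int i = 1; i <= 100000; i++) {
            out.writeBytes(i + " ");
            for (int j = 1; j <= 17; j++) {
                if (bis.read(word) != 4) break; // EOF or short read
                int value = ByteBuffer.wrap(word)
                        .order(ByteOrder.LITTLE_ENDIAN).getInt();
                out.writeBytes(value + " ");
            }
            out.writeBytes("\n");
        }
        out.close();
        bis.close();
    }
}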
I'm making a server that receives packets of 64 KB in size.
int length = 65536;
byte[] bytes = new byte[length];
int pos = 0;
while (pos < length - 1) {
    System.out.println("Before read");
    pos += dis.read(bytes, pos, length - pos);
    System.out.println("" + pos + " >> " + length);
}
This is the code I use to read all the bytes from the socket; dis is an InputStream.
When I run the code, 1 run out of n goes wrong: the code receives only 52964 bytes instead of 65536.
I also checked the C code, and it says it sent 65536 bytes.
Does someone know what I'm doing wrong?
This is yet another case where Jakarta Commons IOUtils is a better choice than writing it yourself. It's one line of code, and it's fully tested. I recommend IOUtils.readFully() in this case.
If it does not read the entire buffer, then you know that you're not sending all the content. Perhaps you're missing a flush on the server side.
InputStream.read() returns the number of bytes read, or -1 if the end of the stream has been reached. You need to check for that condition. Also, I suspect your while(..) loop is the problem. Why are you calling it pos, as in position? You may be terminating prematurely. Also, ensure that your C code, whatever it is doing, is sending properly. You can examine the network traffic with a tool like Wireshark to be sure.
What do you mean it "goes wrong"? What is the output? It can't be exiting the loop before reading the full 64 KB, so what really happens?
Also, it's better to save the return value of the I/O call separately and inspect it before assuming the I/O was successful. If that's DataInputStream.read(), it returns -1 at end of stream.
Your code is incorrect as it doesn't check for -1.
This is a case for using DataInputStream.readFully() rather than coding it yourself and getting it wrong.
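For instance, assuming dis really is a DataInputStream:
byte[] bytes = new byte[65536];
// readFully() loops internally until the buffer is full, and throws
// EOFException if the stream ends first; there is no hand-rolled loop to get wrong.
dis.readFully(bytes);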
Goal: to get values from a text file and store them in variables to load into my SQLite database.
Problem: my method is not efficient, and I need help coming up with an easier way.
As of right now I am parsing my text file, which looks like this:
agency_id,agency_name,agency_url,agency_timezone,agency_lang,agency_phone
1,"NJ TRANSIT BUS","http://www.njtransit.com/",America/New_York,en,""
2,"NJ TRANSIT RAIL","http://www.njtransit.com/",America/New_York,en,""
I am parsing every time I read a comma, then storing that value into a variable, and then I use that variable as my database value.
This method works but is time-consuming: the next text file I have to read in has over 200 lines, and I need to find an easier way.
AgencyString = readText();
tv = (TextView) findViewById(R.id.letter);
tv.setText(readText());
StringTokenizer st = new StringTokenizer(AgencyString, ",");
for (int i = 0; i < AgencyArray.length; i++) {
    size = i; // which value I am targeting in the text file,
              // e.g. 1 would be agency_id, 2 would be agency_name
    AgencyArray[i] = st.nextToken();
}
tv.setText(AgencyArray[size]); // the value I'm going to store as the database value
}
private String readText() {
    InputStream inputStream = getResources().openRawResource(R.raw.agency);
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    int i;
    try {
        i = inputStream.read();
        while (i != -1) {
            byteArrayOutputStream.write(i);
            i = inputStream.read();
        }
        inputStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return byteArrayOutputStream.toString();
}
First, why is this a problem? I don't mean to answer your question with a question, so to speak, but more context is required to understand in what way you need to improve the efficiency of what you're doing. Is there a perceived delay in the application due to the parsing of the file, or do you have a more serious ANR problem because you're running on the UI thread?
Unless there is some bottleneck in other code not shown, I honestly doubt you'd read and tokenise it faster than you're presently doing. Well, actually, no doubt you probably could; however, I believe it's more a case of designing your application so that the delays involved in fetching and parsing large data aren't perceived by, or cause irritation to, the user. My own application parses massive files like this, and while that does take a fraction of a second, it doesn't present a problem because of the design of the overall application and UI.
Also, have you used the profiler to see what's taking the time? And have you run this on a real device, without the debugger attached? Having the debugger attached to a real device, or using the emulator, increases execution time by several orders of magnitude.
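If the UI thread is the concern, here is a minimal sketch of moving the parse off it (it reuses the question's readText() and tv; the rest of the structure is my assumption):
new Thread(new Runnable() {
    @Override
    public void run() {
        // Do the slow file read/parse off the UI thread...
        final String agencyString = readText();
        // ...and hop back onto the UI thread only to touch the views.
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                tv.setText(agencyString);
            }
        });
    }
}).start();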
I am making the assumption that you need to parse this file type after receiving it over a network, as opposed to it being something that is bundled with the application and only needs parsing once.
You could just bundle the SQLite database with your application instead of representing it as a text file. Look at the answer to this question.
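For instance, a minimal sketch of the bundling approach (the agency.db file name, the assets location, and the helper name are all my assumptions):
// Ship a prebuilt agency.db in assets/ and copy it into place on first run.
private void copyBundledDatabase() throws IOException {
    File target = getDatabasePath("agency.db");
    if (target.exists()) return; // already copied on an earlier run
    target.getParentFile().mkdirs();
    InputStream in = getAssets().open("agency.db");
    OutputStream out = new FileOutputStream(target);
    byte[] buffer = new byte[8192];
    int count;
    while ((count = in.read(buffer)) != -1) {
        out.write(buffer, 0, count);
    }
    out.close();
    in.close();
}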