The problem is with a program that produces its output in real time.
For example: getevent
But when I try to read the data coming out of the process, exec delivers it in chunks of at least 4096 bytes!
For example:
if getevent has output 1000 bytes of text, then stdout.available() == 0
if getevent has output 4000 bytes of text, then stdout.available() == 0
if getevent has output 4096 bytes of text, then stdout.available() == 4096
if getevent has output 8192 bytes of text, then stdout.available() == 8192
if getevent has output 10000 bytes of text, then stdout.available() == 8192
If I use stdout.read(), the call blocks until 4096*n bytes have accumulated or until getevent exits.
How do I read the data as it arrives, in real time, instead of waiting for 4096 bytes to accumulate?
Process p = Runtime.getRuntime().exec(new String[]{"su", "-c", "system/bin/sh"});
DataOutputStream stdin = new DataOutputStream(p.getOutputStream());
stdin.writeBytes("getevent\n");
stdin.flush();
InputStream stdout = p.getInputStream();
byte[] buffer = new byte[1];
int read;
StringBuilder out = new StringBuilder();
// read() blocks until at least one byte is available, or returns -1 at end of stream
while ((read = stdout.read(buffer)) != -1) {
    out.append(new String(buffer, 0, read));
    System.out.println("MYLOG: " + new String(buffer, 0, read));
}
I found this buffer mentioned in the documentation!
Copies the InputStream into the OutputStream, until the end of the
stream has been reached. This method uses a buffer of 4096 kbyte.
>> Documentation
The most likely cause of this is that the external application is buffering its output. This is pretty typical for an application that is writing to its "standard output". The solution is to modify the external application so that it "flushes" its output at the appropriate time.
There is nothing in your Java code that will cause it to delay if there is data that is available to be read. In particular, use of a DataOutputStream won't cause this.
It should also be noted that available() does not give reliable information. If you read the API documentation carefully, you will see that a return value of N only means that a simultaneous attempt to read more than N bytes might block. A thread cannot call both available() and read() simultaneously, so by the time you come to use the information supplied by available() it could be out of date.
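If the external application cannot be changed, one workaround worth trying is to launch it with its stdio buffering disabled. This is only a sketch, and it assumes the stdbuf utility from GNU coreutils is present on the device, which is by no means guaranteed on Android; it also only helps if the buffering really does happen inside the child process:
// Hedged workaround: stdbuf asks the C library not to block-buffer stdout
// when it is connected to a pipe. stdbuf is part of GNU coreutils and may
// well be missing on Android.
Process p = Runtime.getRuntime().exec(
        new String[]{"stdbuf", "-o0", "getevent"});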
Related
Goal: Decrypt data from one source and write the decrypted data to a file.
try (FileInputStream fis = new FileInputStream(targetPath.toFile());
ReadableByteChannel channel = newDecryptedByteChannel(path, associatedData))
{
FileChannel fc = fis.getChannel();
long position = 0;
while (position < ???)
{
position += fc.transferFrom(channel, position, CHUNK_SIZE);
}
}
The implementation of newDecryptedByteChannel(Path,byte[]) should not be of interest, it just returns a ReadableByteChannel.
Problem: What is the condition to end the while loop? When is the "end of the byte channel" reached? Is transferFrom the right choice here?
This question might be related (the answer there is to just set the count to Long.MAX_VALUE). Unfortunately this doesn't help me, because the docs say that up to count bytes may be transferred, depending upon the natures and states of the channels.
Another thought was to just check whether the amount of bytes actually transferred is 0 (returned from transferFrom), but this condition may be true if the source channel is non-blocking and has fewer than count bytes immediately available in its input buffer.
It is one of the bizarre features of FileChannel.transferFrom() that it never tells you about end of stream. You have to know the input length independently.
I would just use streams for this: specifically, a CipherInputStream around a BufferedInputStream around a FileInputStream, and a FileOutputStream.
But the code you posted doesn't make any sense anyway. It can't work. You are transferring into the input file, and via a channel that was derived from a FileInputStream, so it is read-only, so transferFrom() will throw an exception.
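For illustration, here is a minimal sketch of that stream-based approach. The names sourceFile, targetFile, and cipher are placeholders, since the question deliberately hides how decryption is configured:
// Sketch only: 'cipher' stands for an initialized javax.crypto.Cipher in
// DECRYPT_MODE; the question does not show how it is set up.
try (InputStream in = new CipherInputStream(
             new BufferedInputStream(new FileInputStream(sourceFile)), cipher);
     OutputStream out = new FileOutputStream(targetFile)) {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) != -1) { // -1 is a real end-of-stream signal here
        out.write(buffer, 0, n);
    }
}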
As @user207421 commented, since you are reading from a ReadableByteChannel, the target channel needs to be derived from a FileOutputStream rather than a FileInputStream. And the condition for ending the loop in your code would have to be the size of the file underlying the ReadableByteChannel, which you cannot obtain from the channel itself unless it happens to be a FileChannel, whose size() method reports it.
The way I found to do the transfer is with a ByteBuffer, as below.
ByteBuffer buf = ByteBuffer.allocate(1024 * 8);
while (readableByteChannel.read(buf) != -1)
{
    buf.flip();      // switch the buffer to draining mode
    fc.write(buf);   // fc is a FileChannel derived from FileOutputStream
    buf.compact();   // keep any unwritten bytes, switch back to filling mode
}
buf.flip();
while (buf.hasRemaining()) // drain whatever is left after end of stream
{
    fc.write(buf);
}
I have Java SSL/TLS server and client sockets. My client simply sends a file to the server, and the server receives it. Here is my code:
My client method:
static boolean writeData(BufferedOutputStream bos, File data) throws IOException {
    FileInputStream fis = new FileInputStream(data);
    BufferedInputStream bis = new BufferedInputStream(fis);
    byte[] bytes = new byte[512];
    int count = 0;
    while ((count = bis.read(bytes, 0, bytes.length)) > 0) {
        System.out.println("Sending file...");
        bos.write(bytes, 0, count);
        System.out.println(count);
    }
    bos.flush();
    System.out.println("File Sent");
    return true;
}
My server method:
static boolean receiveData(BufferedInputStream bis, File data) throws IOException {
    byte[] bytes = new byte[512];
    int count = 0;
    while ((count = bis.read(bytes, 0, bytes.length)) > 0) {
        System.out.println("Receiving file...");
        // Do something..
        System.out.println(count);
    }
    System.out.println("File Received");
    return true;
}
The problem is, the server hangs inside the while loop. It never reaches the "File Received" message.
Even if the file is small, the bis.read() method never returns -1 at the end of file for some reason. I tested the methods with a file size of 16 bytes, and the output is as follows:
Client terminal:
> Sending file...
> 16
> File Sent
Server terminal:
> Receiving file...
> 16
As you can see, the server never reaches the "File Received" message and hangs inside the loop even after all the file data has arrived. Can anyone guess the reason for this?
Thanks
Your server never detects that the file has been fully sent, because read() only returns -1 when the other end has closed the connection.
But you never close the connection, you only flush it.
Replace bos.flush() with bos.close() in the writeData method and it should work.
If you don't want to close the connection, because you want to do more work with it, you have to add a protocol of some sort, because there is no default way to do that.
One thing you could do, which is one of the easier ways to implement this, is to send the length of the file as a 32-bit or 64-bit integer before the file.
Then the server knows how many bytes it should read before it can consider the file fully sent.
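As a rough sketch of that idea, reusing the method parameters from the question (error handling is minimal, and the names are illustrative):
// Sender: prefix the payload with its length as a 64-bit integer.
DataOutputStream out = new DataOutputStream(bos);
out.writeLong(data.length()); // 'data' is the File being sent
out.flush();
// ... then stream the file bytes exactly as before ...

// Receiver: read exactly that many bytes; no end-of-stream marker needed.
DataInputStream in = new DataInputStream(bis);
long remaining = in.readLong();
byte[] buf = new byte[512];
while (remaining > 0) {
    int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
    if (n == -1) throw new EOFException("connection closed mid-file");
    // do something with buf[0..n) ...
    remaining -= n;
}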
If you don't know the length of the file in advance, there are many options. I'm not sure there is a consensus on the most effective way to do this; given that existing protocols take quite different approaches, I suspect there isn't one.
These are just a few suggestions, which you can tune.
Before each piece of data, you send the length of the data you want to send as a 32-bit (signed) integer. So a file is sent as multiple pieces of data. Sending a negative number means that the previous piece was the last one and the file has ended. (If you need to send a piece larger than the maximum you can represent in a signed 32-bit integer, split it into several pieces.)
You think of a random number that is long enough (something like 16 or 32 bytes) that it will never occur in your data. You send that number before the file, and when the file is done, you send it again to indicate that event. This is similar to MIME multipart encoding.
You take a byte or a number of bytes that indicates whether the file has ended (like 0xFF). But to ensure that you can still legitimately send 0xFF as part of the file, you add the rule that 0xFF 0xFF means that the file has ended, but 0xFF 0x00 means "just a literal 0xFF" in the file.
There are many more ways to do it.
I'm programming a little GUI for a file converter in java. The file converter writes its current progress to stdout. Looks like this:
Flow_1.wav: 28% complete, ratio=0,447
I wanted to illustrate this in a progress bar, so I'm reading the process' stdout like this:
ProcessBuilder builder = new ProcessBuilder("...");
builder.redirectErrorStream(true);
Process proc = builder.start();
InputStream stream = proc.getInputStream();
byte[] b = new byte[32];
int length;
while (true) {
length = stream.read(b);
if (length < 0) break;
// processing data
}
Now the problem is that regardless of which byte array size I choose, the stream is read in chunks of 4 KB. So my code executes up to length = stream.read(b); and then blocks for quite a while. Once the process has generated 4 KB of output data, my program gets this chunk and works through it in 32-byte slices. And then it waits again for the next 4 KB.
I tried to force java to use smaller buffers like this:
BufferedInputStream stream = new BufferedInputStream(proc.getInputStream(), 32);
Or this:
BufferedReader reader = new BufferedReader(new InputStreamReader(proc.getInputStream()), 32);
But neither changed anything.
Then I found this: Process source (around line 87)
It seems that the Process class is implemented in such a way that it pipes the process' stdout to a file. So what proc.getInputStream() actually does is return a stream to that file. And this file seems to be written with a 4 KB buffer.
Does anyone know some kind of workaround for this situation? I just want to get the process' output instantly.
EDIT: As suggested by Ian Roberts, I also tried to pipe the converter's output into the stderr stream, since this stream doesn't seem to be wrapped in a BufferedInputStream. Still 4k chunks.
Another interesting thing is: I actually don't get exactly 4096 bytes, but about 5 more. I'm afraid the FileInputStream itself is buffered natively.
Looking at the code you linked to, the process's standard output stream gets wrapped in a BufferedInputStream, but its standard error remains unbuffered. So one possibility might be to execute not the converter directly, but a shell script (or the Windows equivalent if you're on Windows) that sends the converter's stdout to stderr:
ProcessBuilder builder = new ProcessBuilder("/bin/sh", "-c",
"exec /path/to/converter args 1>&2");
Don't call redirectErrorStream(true), and then read from proc.getErrorStream() instead of proc.getInputStream().
It may be the case that your converter is already using stderr for its progress reporting in which case you don't need the script bit, just turn off redirectErrorStream(). If the converter program writes to both stdout and stderr then you'll need to spawn a second thread to consume stdout as well (the script approach gets around this by sending everything to stderr).
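If you do need that second thread, a minimal sketch could look like this (assuming you only want to discard or log the converter's stdout):
// Drain stdout on a background thread so the child process cannot block
// on a full pipe while the main thread parses progress from stderr.
Thread drainer = new Thread(() -> {
    try (InputStream out = proc.getInputStream()) {
        byte[] buf = new byte[1024];
        while (out.read(buf) != -1) {
            // discard, or append to a log if you need the output later
        }
    } catch (IOException ignored) {
        // the stream closes when the process exits
    }
});
drainer.setDaemon(true);
drainer.start();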
In a Java program I am compressing an InputStream like this:
ChannelBufferOutputStream outputStream = new ChannelBufferOutputStream(ChannelBuffers.dynamicBuffer(BUFFER_SIZE));
GZIPOutputStream compressedOutputStream = new GZIPOutputStream(outputStream);
try {
IOUtils.copy(inputStream, compressedOutputStream);
} finally {
// this should print the byte size after compression
System.out.println(outputStream.writtenBytes());
}
I am testing this code with a JSON file that is ~31,000 bytes uncompressed and ~7,000 bytes compressed on disk. Sending an InputStream that wraps the uncompressed JSON file into the code above, outputStream.writtenBytes() returns 10, which would indicate that it compressed down to only 10 bytes. That seems wrong, so I wonder where the problem is. The ChannelBufferOutputStream javadoc says: Returns the number of written bytes by this stream so far. So it should be working.
Try calling the GZIPOutputStream.finish() or flush() methods before counting bytes.
If that does not work, you can create a proxy stream whose job is to count the number of bytes that pass through it.
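A minimal sketch of the first suggestion, reusing the names from the question:
GZIPOutputStream compressedOutputStream = new GZIPOutputStream(outputStream);
try {
    IOUtils.copy(inputStream, compressedOutputStream);
    // finish() writes the deflater's remaining output and the GZIP trailer
    // without closing the underlying stream.
    compressedOutputStream.finish();
} finally {
    System.out.println(outputStream.writtenBytes()); // now the real compressed size
}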
Is there an easy (and therefore quick) way to accomplish this? Basically, just take some input stream, which could be something like socket.getInputStream(), and have the stream's contents automatically redirect to standard out?
There are no easy ways to do it, because InputStream has a pull-style interface, while OutputStream has a push-style one. You need some kind of pumping loop to pull data from the InputStream and push it into the OutputStream. Something like this (run it in a separate thread if necessary):
int size = 0;
byte[] buffer = new byte[1024];
while ((size = in.read(buffer)) != -1) out.write(buffer, 0, size);
It's already implemented in Apache Commons IO as IOUtils.copy()
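For example (assuming commons-io is on the classpath, and using the socket from the question as the source):
// One-liner equivalent of the loop above; blocks until end of stream.
IOUtils.copy(socket.getInputStream(), System.out);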
You need a simple thread which reads from the input stream and writes to standard output. Make sure it yields to other threads.
Since Java 9, you can use InputStream.transferTo
Example
try (InputStream stream = Application.class.getResourceAsStream("/test.txt")) {
    stream.transferTo(System.out);
}