I have named pipe .\pipe\pipe1 on Windows I want to read from with Java.
From the documentation, FileChannel should be interruptible: read should throw a ClosedByInterruptException if the reading thread is interrupted from another thread. This probably works for regular files, but I now have a named pipe.
My situation is like this:
RandomAccessFile raf = new RandomAccessFile("\\\\.\\pipe\\pipe1", "r");
FileChannel fileChannel = raf.getChannel();
// later in another thread "readThread"
fileChannel.read(buffer);
// outside the thread reading
readThread.interrupt();
The problem is that the call to interrupt blocks, and read remains blocked until something is written to the named pipe so that read stops blocking.
I need to be able to abort/cancel the read when nothing is written to the pipe while it is not closed yet.
Why does interrupting with the NIO classes not work? Is there a solution to this problem that does not involve busy-waiting or sleeping combined with ready()? What would be the best solution to this problem, or is there a workaround?
I have not figured out a real solution to the question of how to cancel the read. But I needed to adjust anyway, and I will now explain why. If you have anything to add to the original problem of the blocked read, you can post an additional answer.
A named pipe could be treated like a file and opened separately for reading and writing with classic Java IO streams. However, a named pipe is often used like a socket, and as such it requires a single file open. So one could use Java IO streams like this:
RandomAccessFile raf = new RandomAccessFile("\\\\.\\pipe\\pipe1", "rws");
FileChannel fileChannel = raf.getChannel();
InputStream fis = Channels.newInputStream(fileChannel);
OutputStream fos = Channels.newOutputStream(fileChannel);
BufferedReader br = new BufferedReader(new InputStreamReader(fis));
PrintWriter pw = new PrintWriter(fos, true);
Now a problem one will notice later is this: if you write while reading, things will get locked up. It seems concurrent reading/writing is not possible, which is outlined here.
To solve this, I used a ReentrantLock set to fair to switch between reading and writing. The reading thread checks readiness and can be triggered with interrupt if one finishes writing and expects a response afterwards. If ready, the read buffer is drained. If it is not ready, an interval can be scheduled or simulated for sporadically expected messages. This last part is not optimal, but it actually works very well for my use case.
With this, one can build a solution where all threads can be orchestrated to terminate correctly with no blocking and minimal overhead.
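A minimal sketch of that orchestration, using a PipedWriter/PipedReader pair to stand in for the named pipe (the real code would wrap the FileChannel streams from above): the reading thread drains only when ready(), sleeps between checks, and treats interrupt as "drain once more, then stop":

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PipedReader;
import java.io.PipedWriter;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

class FairPipeAccess {
    // Fair lock so the reading and writing sides take turns instead of starving each other
    static final ReentrantLock lock = new ReentrantLock(true);

    static List<String> demo() throws Exception {
        List<String> received = new ArrayList<>();
        // PipedWriter/PipedReader stand in for the named pipe in this sketch
        PipedWriter pipe = new PipedWriter();
        BufferedReader br = new BufferedReader(new PipedReader(pipe));

        Thread readThread = new Thread(() -> {
            boolean stop = false;
            while (!stop) {
                try {
                    Thread.sleep(100);   // polling interval for sporadically expected messages
                } catch (InterruptedException e) {
                    stop = true;         // interrupt means: drain once more, then exit
                }
                lock.lock();
                try {
                    while (br.ready()) { // drain only what is ready; never block inside the lock
                        received.add(br.readLine());
                    }
                } catch (IOException e) {
                    stop = true;
                } finally {
                    lock.unlock();
                }
            }
        });
        readThread.start();

        lock.lock();
        try {
            pipe.write("hello\n");
            pipe.flush();
        } finally {
            lock.unlock();
        }
        readThread.interrupt();          // wake the reader instead of waiting out the interval
        readThread.join();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

The key point is that nothing ever blocks inside the lock, so the fair lock cleanly alternates between the reading and writing side.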
Related
I am writing to and reading from a Linux file in java, which in reality is a communication port to a hardware device. To do this I use RandomAccessFile (I'll explain why later) and it works well in most cases. But sometimes a byte is lost and then my routine blocks indefinitely since there is no timeout on the read method.
To give some more details on the file: it is a USB receipt printer that creates a file called /dev/usb/lp0 and though I can use a cups driver to print, I still need the low level communication through this file to query the status of the printer.
The reason I use RandomAccessFile is that I can have the same object for both reading and writing.
I tried to make a version with InputStream and OutputStream instead (since that would allow me to use the available() method to implement my timeout). But when I first open the InputStream and then the OutputStream I get an exception when opening the OutputStream since the file is occupied.
I tried writing with the OutputStream and then closing it before opening the InputStream to read, but then I lose some or all of the reply before it has opened the InputStream.
I tried switching to channels instead (Files.newByteChannel()). This also allows me to have just one object, and the documentation says it only reads the bytes available and returns the count (which also allows me to implement a timeout). But it blocks in the read method anyway when there is nothing to read, despite what the documentation says.
I also tried a number of ways to implement timeouts on the RandomAccessFile using threads.
The first approach was to start a separate thread at the same time as starting to read, and if the timeout elapsed in the thread I closed the file from the thread, hoping that this would unlock the read() operation with an exception, but it didn't (it stayed blocked).
I also tried to do the read in a separate thread and brutally kill it with the deprecated Thread.stop() once the time had elapsed. This worked one time, but it was not possible to reopen the file again after that.
The only solution I have made work is to have a separate thread that continuously calls read, and whenever it gets a byte it puts it in a LinkedBlockingQueue, which I can read from with a timeout. This approach works, but the drawback is that I can never close the file (again for the same reasons explained above, I can't unblock a blocked read). And my application requires that I sometimes close this connection to the hardware.
Can anyone think of a way to read from a file with a timeout that would work in my case (one that allows me to have both read and write access open to the file at the same time)?
I am using Java 8, by the way.
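For what it's worth, the queue-based workaround described above can be packaged so the timeout logic is reusable; a sketch (class and method names are illustrative):

```java
import java.io.InputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class TimedReader implements AutoCloseable {
    private final BlockingQueue<Integer> bytes = new LinkedBlockingQueue<>();
    private final Thread pump;

    TimedReader(InputStream in) {
        pump = new Thread(() -> {
            try {
                int b;
                while ((b = in.read()) != -1) {
                    bytes.put(b);       // every byte goes into the queue as it arrives
                }
            } catch (Exception ignored) {
                // stream closed or thread interrupted: just stop pumping
            }
        });
        pump.setDaemon(true);           // let the JVM exit even if read() never unblocks
        pump.start();
    }

    /** Returns the next byte, or -1 if nothing arrived within the timeout. */
    int read(long timeout, TimeUnit unit) throws InterruptedException {
        Integer b = bytes.poll(timeout, unit);
        return b == null ? -1 : b;
    }

    @Override
    public void close() {
        pump.interrupt();               // best effort; a blocked read() dies with the JVM
    }
}
```

The drawback from the question still applies: a blocked read() cannot be cancelled, so close() is best-effort and the pump thread is a daemon so it at least will not keep the JVM alive. Note also that -1 here means "timeout", which is ambiguous with EOF; a richer return type would distinguish the two.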
I am using ProcessBuilder to input and receive information from a C++ program, using Java. After starting the process once, I would like to be able to input new strings, and receive their output, without having to restart the entire process. This is the approach I have taken thus far:
public void getData(String sentence) throws InterruptedException, IOException {
    InputStream stdout = process.getInputStream();
    InputStreamReader isr = new InputStreamReader(stdout);
    OutputStream stdin = process.getOutputStream();
    OutputStreamWriter osr = new OutputStreamWriter(stdin);
    BufferedWriter writer = new BufferedWriter(osr);
    BufferedReader reader = new BufferedReader(isr);
    writer.write(sentence);
    writer.close();
    String ch = reader.readLine();
    preprocessed = "";
    while (ch != null) {
        preprocessed = preprocessed + "~" + ch;
        ch = reader.readLine();
    }
    reader.close();
}
Each time I want to send an input to the running process, I call this method. However, there is an issue: the first time I send an input, it is fine, and the output is received perfectly. However, the second time I call it, I receive the error
java.io.IOException: Stream closed
which is unexpected, as everything is theoretically recreated when the method is called again. Moreover, removing the line that closes the BufferedWriter results in the code halting at the following line, as if the BufferedReader is waiting for the BufferedWriter to be closed.
One final thing - even when I create a NEW BufferedWriter and instruct the method to use that when called for the second time, I get the same exception, which I do not understand at all.
Is there any way this can be resolved?
Thanks a lot!
Your unexpected IOException happens because when Readers and Writers are closed, they close their underlying streams in turn.
When you call your method the first time, everything appears to work. But you close the writer, which closes the process output stream, which closes stdin from the perspective of the process. Not sure what your C++ binary looks like, but probably it just exits happily when it's done with all its input.
So subsequent calls to your method don't work.
There's a separate but similar issue on the Reader side. You call readLine() until it returns null, meaning the Reader has reached the end of the stream. But this only happens when the process is completely done with its stdout.
You need some way of identifying when you're done processing a unit of work (whatever you mean by "sentence") without waiting for the whole entire stream to end. The stream has no concept of the logical pause between outputs. It's just a continuous stream. Reader and Writer are just a thin veneer to buffer between bytes and characters but basically work the same as streams.
Maybe the outputs could have delimiters. Or you could send the length of each chunk of output before actually sending the output and distinguish outputs that way. Or maybe you know in advance how long each response will be?
You only get one shot through streams. So they will have to outlive this method. You can't be opening and closing streams if you want to avoid restarting your process every time. (There are other ways for processes to communicate, e.g. sockets, but that's probably out of scope.)
On an orthogonal note, appending to a StringBuilder is generally more efficient than a big loop of string concatenations when you're accumulating your output.
You might also have some thread check process.exitValue() or otherwise make sure the process is working as intended.
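To make the delimiter idea concrete, here is a sketch that creates the streams once, keeps them open across calls, and treats a sentinel line as end-of-response. The "END" sentinel is an assumption; your C++ program would have to print it after each answer:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

class ProcessTalker implements Closeable {
    private final Process process;
    private final BufferedWriter writer;
    private final BufferedReader reader;

    ProcessTalker(ProcessBuilder pb) throws IOException {
        pb.redirectErrorStream(true); // don't let an unread stderr fill up and stall the child
        process = pb.start();
        writer = new BufferedWriter(new OutputStreamWriter(process.getOutputStream()));
        reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
    }

    /** Sends one sentence, then reads lines until the sentinel "END" appears. */
    String getData(String sentence) throws IOException {
        writer.write(sentence);
        writer.newLine();
        writer.flush();               // flush, never close, between requests
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null && !line.equals("END")) {
            sb.append('~').append(line);
        }
        return sb.toString();
    }

    @Override
    public void close() throws IOException {
        writer.close();               // now the child sees EOF on stdin and can exit
        reader.close();
        process.destroy();
    }
}
```

Because the streams outlive getData, a second call works where the original code got "Stream closed". The StringBuilder also replaces the string-concatenation loop.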
Don't keep trying to create and close your Streams, because once you close it, it's closed for good. Create them once, then in your getData(...) method use the existing Streams. Only close your Streams or their wrapping classes when you're fully done with them.
Note that you should open and close the Streams in the same method, and thus may need additional methods or classes to help you process the Streams. Consider creating a Runnable class for this and then reading from the Streams in another Thread. Also don't ignore the error stream, as that may be sending key information that you will need to fully understand what's going on here.
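A minimal stream-gobbler sketch along those lines: a Runnable that drains a process stream on its own thread and hands each line to a callback.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.function.Consumer;

// Drains a process stream so the child never stalls on a full pipe buffer.
class StreamGobbler implements Runnable {
    private final InputStream in;
    private final Consumer<String> sink;

    StreamGobbler(InputStream in, Consumer<String> sink) {
        this.in = in;
        this.sink = sink;
    }

    @Override
    public void run() {
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            reader.lines().forEach(sink); // every line goes to the callback
        } catch (IOException e) {
            e.printStackTrace();          // closing the reader can fail
        }
    }
}
```

Typical use: `new Thread(new StreamGobbler(process.getErrorStream(), System.err::println)).start();` so the error stream is never ignored.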
consider the following scenario:
Process 1 (Writer) continuously appends a line to a file ( sharedFile.txt )
Process 2 (Reader) continuously reads a line from sharedFile.txt
my questions are:
In java is it possible that :
Reader process somehow crashes Writer process (i.e. breaks the process of Writer)?
Reader somehow knows when to stop reading the file purely based on the file stats (Reader doesn't know if others are writing to the file)?
to demonstrate:
Process one (Writer):
...
while (!done) {
    String nextLine; // process the line
    writeLine(nextLine);
    ...
}
...
Process Two (Reader):
...
while (hasNextLine()) {
    String nextLine = readLine();
    ...
}
...
NOTE:
Writer Process has priority. so nothing must interfere with it.
Since you are talking about processes, not threads, the answer depends on how the underlying OS manages open file handles:
On every OS I'm familiar with, a Reader will never crash a Writer process, as the Reader's file handle only allows reading. On Linux, the system calls a Reader can invoke on the underlying OS -- open(2) with the O_RDONLY flag, lseek(2), and read(2) -- are known not to interfere with the syscalls the Writer is invoking, such as write(2).
Reader most likely won't know when to stop reading on most OSes. More precisely, on some read attempt it will receive zero as the number of bytes read and will treat this as EOF (end of file). At that very moment, there can be a Writer preparing to append some data to the file, but the Reader has no way of knowing it.
If you need a way for two processes to communicate via a file, you can do it using extra files that pass meta-information between Readers and Writers, such as whether a Writer is currently running. Introducing some structure into the file can be useful too (for example, every Writer appends a byte to the file indicating that a write is in progress).
For very fast non-blocking I/O you may want to consider memory-mapped files via Java's MappedByteBuffer.
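One sketch of such coordination: a tail -f style reader that treats EOF as "no data yet" and stops only when a separate flag file (an assumption here; any out-of-band signal works) says the Writer is finished.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.Writer;

class FileFollower {
    static void follow(File data, File doneFlag, Writer out)
            throws IOException, InterruptedException {
        try (BufferedReader br = new BufferedReader(new FileReader(data))) {
            while (true) {
                String line = br.readLine();
                if (line != null) {
                    out.write(line + "\n");
                } else if (doneFlag.exists()) {
                    // final drain: lines may have arrived between the
                    // null read and the flag check
                    while ((line = br.readLine()) != null) {
                        out.write(line + "\n");
                    }
                    return;                 // Writer is done and file is drained
                } else {
                    Thread.sleep(50);       // EOF but Writer still running: retry
                }
            }
        }
    }
}
```

Caveat: readLine() can return a partially written last line if the Writer is caught mid-append, so a robust implementation would also need the structural markers mentioned above.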
The code will not crash. However, the reader will terminate when the end is reached, even if the writer may still be writing. You will have to synchronize somehow!
Concern:
Your reader thread can read a stale value even when you think another writer thread has updated the variable's value.
Even if you write to a file, without synchronization you may see a different value while reading.
Java File IO and plain files were not designed for simultaneous writes and reads. Either your reader will overtake your writer, or your reader will never finish.
JB Nizet provided the answer in his comment. You use a BlockingQueue to hold the writer data while you're reading it. Either the queue will empty, or the reader will never finish. You have the means through the BlockingQueue methods to detect either situation.
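A sketch of that hand-off with a poison-pill sentinel, so the reader knows when the writer is done (the sentinel value here is a made-up assumption and must never collide with real data):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class PoisonPillDemo {
    // Sentinel the writer enqueues last; assumed never to be a real line.
    static final String EOF = "\u0000EOF";

    static List<String> demo() throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        Thread writer = new Thread(() -> {
            try {
                for (String s : new String[] {"one", "two"}) {
                    queue.put(s);
                }
                queue.put(EOF);  // tells the reader there is nothing more to come
            } catch (InterruptedException ignored) {
            }
        });
        writer.start();

        List<String> read = new ArrayList<>();
        String line;
        while (!(line = queue.take()).equals(EOF)) { // blocks until data or EOF arrives
            read.add(line);
        }
        writer.join();
        return read;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

Unlike the file-based version, the reader never sees a premature "end": take() simply waits until the writer produces the next line or the sentinel.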
I implemented a download manager, which works fine except for one thing: sometimes a thread blocks for a while (50 milliseconds up to 10 seconds) when writing to files. I am running this program on Android (Linux based). My guess is that there is some kind of buffer at the OS level that needs to be flushed, that my writes actually go to that buffer, and that if the buffer is full, writing has to wait.
My question is what is the possible reason that could cause the blocking?
IO is well known to be a 'blocking' activity, hence your question should really be 'what should my program do while it is waiting for IO to complete?'
Adopting one of the well-known concurrency strategies and an event-based programming pattern is a good start.
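As one concrete strategy, the download thread can hand completed chunks to a single-threaded executor, so a slow flush stalls only the writer thread (the class and names here are illustrative, not a library API):

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class AsyncWriter implements Closeable {
    private final ExecutorService exec = Executors.newSingleThreadExecutor();
    private final OutputStream out;

    AsyncWriter(OutputStream out) {
        this.out = out;
    }

    /** Queues the chunk and returns immediately; the executor thread absorbs any stall. */
    void write(byte[] chunk) {
        exec.submit(() -> {
            try {
                out.write(chunk); // may block for ms..s; the download thread keeps going
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }

    @Override
    public void close() throws IOException {
        exec.shutdown();          // finish queued writes, then stop
        try {
            exec.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        out.close();
    }
}
```

Because the executor has a single thread, chunks are written in submission order; an unbounded queue trades memory for the download thread never blocking.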
- I have done the writing and reading of files in the following way and never encountered any problems.
Eg:
File f = new File("Path");
FileWriter fw = new FileWriter(f);
BufferedWriter bw = new BufferedWriter(fw);
- You can alternatively try out the NIO package in Java.
I have a small program whereby in the main thread, I ask for input from the user in the console.
System.out.print("Alternatively, enter peer's ID to connect:");
InputStreamReader reader = new InputStreamReader(System.in);
BufferedReader bReader = new BufferedReader(reader);
String peerID = bReader.readLine();
and in a separate thread I listen on my socket's InputStream. If I receive something from this stream, I then try to "unblock" the readLine by calling System.in.close() without waiting for the user's input. The main thread can then proceed to do something with the information obtained either from the socket's read or from the user.
Somehow it seems to work on my Mac, but when I try it on Windows, stepping through the debugger, I've found that System.in.close() blocks and the whole program hangs.
Any idea why, and how should I unblock readLine()? Otherwise, what would be a good way of rewriting the logic?
You could try to close bReader instead, but a sounder approach would be to use interruptible I/O in the nio package, and possibly the Console. I would try using Console.reader().read(CharBuffer) and interrupting the thread. That "should" work. Haven't tested, though...
Update: But a Selector would maybe suit your purpose even better. Listen to both your socket and System.in, and act on the one that provides input?
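If selectable I/O turns out not to fit (System.in is not a SelectableChannel), a poll-based fallback along these lines never blocks in readLine() at all, so there is nothing to unblock (a sketch; the names are made up):

```java
import java.io.BufferedReader;
import java.io.IOException;

class CancellableConsole {
    // The socket thread flips this to abandon the console prompt.
    static volatile boolean cancelled = false;

    /** Polls ready() instead of blocking in readLine(), so cancellation is just a flag. */
    static String readLineOrNull(BufferedReader console)
            throws IOException, InterruptedException {
        while (!cancelled) {
            if (console.ready()) {
                return console.readLine(); // input is buffered: this will not block
            }
            Thread.sleep(50);              // modest poll interval
        }
        return null;                       // cancelled by the socket thread
    }
}
```

Caveat: on a real console, ready() reports buffered input, which typically arrives only when the user presses Enter; behavior can differ between platforms, which is the same portability territory the question ran into.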