I am running very time-consuming analyses and only their (very short) results are output to a text file using a PrintWriter.
Since my computer broke down twice recently and the results were not saved because the process hadn't finished (the file is only saved when printWriter.close() is reached at the end), I was wondering whether there is a way to save the file several times throughout the process, updating the output file each time. That way, if the computer crashes, at least part of the results would still be available and wouldn't have to be recomputed.
Some details:
A process is repeated for n=10 iterations using different (fixed) random seeds. After each iteration, I would like to save the results obtained in the iterations run so far. Thus, the chosen output file would have to be updated and saved after each iteration.
I suspect all you're looking for is calling flush on the PrintWriter.
Sounds like you should potentially look for a new computer, mind you...
You can create the PrintWriter using:
PrintWriter writer = new PrintWriter(new FileWriter("file name"), true);
so that the output buffer is flushed automatically whenever println(), printf(), or format() is called on the writer. Alternatively, you can call writer.flush() manually to flush the output buffer whenever you want.
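For the setup in the question (n iterations, each producing a short result), a minimal sketch might look like this; runIteration and the file name are placeholders, not code from the question:

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class IterativeResults {
    public static void main(String[] args) throws IOException {
        // autoFlush = true: every println() pushes the buffer down to the file
        try (PrintWriter writer = new PrintWriter(new FileWriter("results.txt"), true)) {
            for (int i = 0; i < 10; i++) {
                String result = runIteration(i); // hypothetical: your long-running analysis
                writer.println("iteration " + i + ": " + result);
                // with autoFlush = false you would call writer.flush() here instead
            }
        }
    }

    private static String runIteration(int seed) {
        return "result-for-seed-" + seed; // placeholder
    }
}

After each iteration the line has been handed to the operating system, so an application crash no longer loses it (a power loss can still lose data the OS hasn't written out yet).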
I want to have a central log file in my system, to which a certain application can write and read from.
The writes are for new data, and the reads will be to compare generated data to written data.
I would like this application to run in multiple instances at a time, which means I need to find a way to read diffs from the file as well as write to it.
I have seen this code, but it only handles a single pass over the file, and I don't see it working with multiple instances.
I'm building this app as a command line tool, so I'm thinking about creating a file for each instance and then merging it into the "general" log file.
I'd like to hear inputs from the forum regarding the different approaches to this question.
What I'm worried about is having a few instances reading from and writing to the same file and generating a lock.
This is the code I have found so far:
import java.io.*;

public class Tp {
    public static void main(String[] args) throws IOException {
        File f = new File("/path/to/your/file/filename.txt");
        BufferedWriter bw = new BufferedWriter(new FileWriter(f));
        BufferedReader br = new BufferedReader(new FileReader(f));
        bw.write("Some text");
        bw.flush();
        System.out.println(br.readLine());
        bw.write("Some more text");
        bw.flush();
        bw.close();
        br.close();
    }
}
You seem to be trying to write and read the same file not only in one program but even within one thread. I do not believe this is of any benefit: during the run the program knows when and what it wrote, so you can get rid of the whole read-back I/O logic.
To begin with, try writing two different programs that run as separate processes. If need be, you can still bring them into the same JVM later as separate threads.
Writing for sure is no problem, so the more interesting part is the reading logic. I'd probably implement this algorithm:
Loop until the program is terminated...
open the file, use skip() to jump to the location with new data
consume the existing data
remember how many bytes were read/remember the file size
close the file
wait until file has changed
Waiting for the file to change can be done by monitoring File.lastModified or File.length, or using the WatchService.
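A rough sketch of that reading loop; the file name and polling interval are made up, and lines appended while we are mid-read may be reported twice, so treat it as an illustration only:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class LogTail {
    public static void main(String[] args) throws IOException, InterruptedException {
        File log = new File("general.log"); // hypothetical shared log file
        long position = 0;                  // bytes consumed so far

        while (true) {                      // loop until the program is terminated
            long size = log.length();
            if (size > position) {
                try (FileInputStream in = new FileInputStream(log)) {
                    in.skip(position);      // jump to the location with new data
                    BufferedReader reader = new BufferedReader(new InputStreamReader(in));
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println("new: " + line); // consume the new data
                    }
                }
                position = size;            // remember how far we got
            }
            Thread.sleep(500); // plain polling; a WatchService would avoid busy-waiting
        }
    }
}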
But be aware that if you have multiple applications writing to the same file in parallel, it can break any meaningful structure in the data. Log4j ensures that parallel writes from within one application (multiple threads) go correctly into the file. If you need multiple processes to write in a synchronized fashion, consider logging to a database.
I am using ProcessBuilder to send input to and receive output from a C++ program, using Java. After starting the process once, I would like to be able to send new strings and receive their output without having to restart the entire process. This is the approach I have taken thus far:
public void getData(String sentence) throws InterruptedException, IOException {
    InputStream stdout = process.getInputStream();
    InputStreamReader isr = new InputStreamReader(stdout);
    OutputStream stdin = process.getOutputStream();
    OutputStreamWriter osr = new OutputStreamWriter(stdin);
    BufferedWriter writer = new BufferedWriter(osr);
    BufferedReader reader = new BufferedReader(isr);
    writer.write(sentence);
    writer.close();
    String ch = reader.readLine();
    preprocessed = "";
    while (ch != null) {
        preprocessed = preprocessed + "~" + ch;
        ch = reader.readLine();
    }
    reader.close();
}
Each time I want to send an input to the running process, I call this method. However, there is an issue: the first time I send an input, everything is fine and the output is received perfectly. But the second time I call it, I receive the error
java.io.IOException: Stream closed
which is unexpected, as everything is theoretically recreated when the method is called again. Moreover, removing the line that closes the BufferedWriter results in the code halting at the following line, as if the BufferedReader were waiting for the BufferedWriter to be closed.
One final thing - even when I create a NEW BufferedWriter and instruct the method to use that when called for the second time, I get the same exception, which I do not understand at all.
Is there any way this can be resolved?
Thanks a lot!
Your unexpected IOException happens because when Readers and Writers are closed, they close their underlying streams in turn.
When you call your method the first time, everything appears to work. But you close the writer, which closes the process output stream, which closes stdin from the perspective of the process. Not sure what your C++ binary looks like, but probably it just exits happily when it's done with all its input.
So subsequent calls to your method don't work.
There's a separate but similar issue on the Reader side. You call readLine() until it returns null, meaning the Reader has reached the end of the stream. But this only happens when the process is completely done with its stdout.
You need some way of identifying when you're done processing a unit of work (whatever you mean by "sentence") without waiting for the whole entire stream to end. The stream has no concept of the logical pause between outputs. It's just a continuous stream. Reader and Writer are just a thin veneer to buffer between bytes and characters but basically work the same as streams.
Maybe the outputs could have delimiters. Or you could send the length of each chunk of output before actually sending the output and distinguish outputs that way. Or maybe you know in advance how long each response will be?
You only get one shot through streams. So they will have to outlive this method. You can't be opening and closing streams if you want to avoid restarting your process every time. (There are other ways for processes to communicate, e.g. sockets, but that's probably out of scope.)
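To make that concrete, here is one possible shape: the streams are created once and flushed rather than closed, and each response is terminated by a sentinel line (END below). The sentinel is an assumption; your C++ program would have to be changed to print it after each response:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

public class ProcessClient {
    private final BufferedWriter writer; // created once, reused for every call
    private final BufferedReader reader;

    public ProcessClient(Process process) {
        this.writer = new BufferedWriter(new OutputStreamWriter(process.getOutputStream()));
        this.reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
    }

    // Sends one sentence and reads lines until the sentinel.
    public String getData(String sentence) throws IOException {
        writer.write(sentence);
        writer.newLine();
        writer.flush(); // flush, don't close: closing would end the process's stdin

        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null && !line.equals("END")) {
            sb.append('~').append(line);
        }
        return sb.toString();
    }

    // Call once, when you are completely done with the process.
    public void close() throws IOException {
        writer.close();
        reader.close();
    }
}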
On an orthogonal note, appending to a StringBuilder is generally more efficient than a big loop of string concatenations when you're accumulating your output.
You might also have some thread check process.exitValue() or otherwise make sure the process is working as intended.
Don't keep trying to create and close your Streams, because once you close it, it's closed for good. Create them once, then in your getData(...) method use the existing Streams. Only close your Streams or their wrapping classes when you're fully done with them.
Note that you should open and close the Streams in the same method, and thus may need additional methods or classes to help you process the Streams. Consider creating a Runnable class for this and then reading from the Streams in another Thread. Also don't ignore the error stream, as that may be sending key information that you will need to fully understand what's going on here.
Consider the following scenario:
Process 1 (Writer) continuously appends a line to a file (sharedFile.txt)
Process 2 (Reader) continuously reads a line from sharedFile.txt
My questions are:
In Java, is it possible that:
the Reader process somehow crashes the Writer process (i.e. breaks the Writer)?
the Reader somehow knows when to stop reading the file purely based on the file stats (the Reader doesn't know whether others are writing to the file)?
To demonstrate:
Process One (Writer):
...
while (!done) {
    String nextLine = ...; // produce the next line to write
    writeLine(nextLine);
    ...
}
...
Process Two (Reader):
...
while (hasNextLine()) {
    String nextLine = readLine();
    ...
}
...
NOTE:
The Writer process has priority, so nothing must interfere with it.
Since you are talking about processes, not threads, the answer depends on how the underlying OS manages open file handles:
On every OS I'm familiar with, a Reader will never crash a Writer process, as the Reader's file handle only allows reading. On Linux, the system calls a Reader can potentially invoke on the underlying OS (open(2) with the O_RDONLY flag, lseek(2), and read(2)) are known not to interfere with the syscalls the Writer is invoking, such as write(2).
The Reader most likely won't know when to stop reading on most OSes. More precisely, on some read attempt it will receive zero as the number of bytes read and will treat this as an EOF (end of file). At that very moment a Writer can be preparing to append data to the file, but the Reader has no way of knowing that.
If you need a way for two processes to communicate via a file, you can do it using extra files that pass meta-information between Readers and Writers, such as whether a Writer is currently running. Introducing some structure into the file can be useful too (for example, every Writer appends a byte to the file indicating that a write is in progress).
For very fast non-blocking I/O you may want to consider memory-mapped files via Java's MappedByteBuffer.
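For example, a minimal read-only mapping could look like this; the file name matches the question, and re-mapping when the file grows is left out for brevity:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class MappedReader {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("sharedFile.txt", "r");
             FileChannel channel = file.getChannel()) {
            // Map the current contents read-only; bytes appended later are not
            // visible through this mapping, so re-map when channel.size() grows.
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            byte[] bytes = new byte[buffer.remaining()];
            buffer.get(bytes);
            System.out.print(new String(bytes, StandardCharsets.UTF_8));
        }
    }
}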
The code will not crash. However, the reader will terminate when it reaches the end of the file, even if the writer is still writing. You will have to synchronize somehow!
Concern:
Your reader thread can read a stale value even when you think another writer thread has updated the variable's value.
Even when writing to a file, if no synchronization is in place, the reader may see a different value than the one written.
Java File IO and plain files were not designed for simultaneous writes and reads. Either your reader will overtake your writer, or your reader will never finish.
JB Nizet provided the answer in his comment. You use a BlockingQueue to hold the writer data while you're reading it. Either the queue will empty, or the reader will never finish. You have the means through the BlockingQueue methods to detect either situation.
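A minimal sketch of that handoff, with a made-up poison-pill sentinel so the reader knows when the writer is done:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class WriterReaderQueue {
    private static final String POISON_PILL = "__DONE__"; // assumed end-of-data marker

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

        Thread writer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put("line " + i); // blocks if the queue is full
                }
                queue.put(POISON_PILL);     // tell the reader we are done
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread reader = new Thread(() -> {
            try {
                String line;
                while (!(line = queue.take()).equals(POISON_PILL)) { // blocks until data arrives
                    System.out.println(line);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}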
I am writing an object to a file in a separate thread that runs every minute. Everything works fine, but if the system crashes (the power supply is removed) then the file in which I am writing the object has a size of zero bytes on the next reboot.
My code is:
FileOutputStream fileOut = new FileOutputStream("/sdcard/vis.ser");
ObjectOutputStream out = new ObjectOutputStream(fileOut);
out.writeObject(/*An object*/);
out.close();
The idea is to use a checksum to ensure the file has been written correctly and use renaming as Whity suggests.
However, if you are saving a primitive type, then you can use SharedPreferences, which will avoid your "0 bytes" problem.
This question will give you a broader idea about how to prevent it.
So your worry is that the previous data is destroyed while the new data has not yet been saved?
You could try writing to a temporary file and, if you manage to close it successfully, simply rename it.
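Something along these lines, reusing the paths from the question; getFD().sync() asks the OS to push the bytes to disk before the rename, so after a crash you should find either the old file or the complete new one, not an empty one:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SafeSave {
    public static void save(Object obj) throws IOException {
        File tmp = new File("/sdcard/vis.ser.tmp");
        File target = new File("/sdcard/vis.ser");

        FileOutputStream fileOut = new FileOutputStream(tmp);
        ObjectOutputStream out = new ObjectOutputStream(fileOut);
        out.writeObject(obj);
        out.flush();
        fileOut.getFD().sync(); // force the bytes to disk before renaming
        out.close();

        // Rename only after a complete, synced write; a rename within the
        // same filesystem is atomic on most platforms.
        if (!tmp.renameTo(target)) {
            throw new IOException("rename failed");
        }
    }
}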
I'm trying to run an external program from Java and read its output. The program is a Linux C++ application that runs a data mining algorithm and prints the patterns found to standard output. I want to read that output from my Java app and show the patterns in a table. The problem is that the output is quite big (as a test it produces 6.5 MB in about 30 seconds). I'm using ProcessBuilder and reading the output through an InputStreamReader wrapped in a BufferedReader, as you can see in the following code:
String[] cmd = {"./clogen_periodic", selected, support, "-t 4"};
Process p = new ProcessBuilder(cmd).start();
input = new BufferedReader(new InputStreamReader(p.getInputStream()));
while ((line = input.readLine()) != null) {
    ...
    // process line
    ...
}
The problem is that the output gets corrupted. When I execute the same program on a console the output is correct but when I use the Java app some lines are merged. More precisely output should be like this
TMEmulation log_pseduo_allocation (34985) (2 45 76 89 90)
__divw clock timer (8273) (4 6 67 4 2)
but instead it looks like this:
TMEmulation log_pseduo_allocation (34985) (2__divw 45clock 76timer (89 8273) 904) (6 67 4 2)
Any idea about the possible problem?
Thanks a lot in advance,
Patricia
A few possibilities, all to do with the called program:
1) As #Artefacto says, the C++ program's output might not be consistently buffered, so call setvbuf to make it consistent. That is, the first output may be partially buffered while the second is not, so the first gets flushed after the end of the second. In general, buffering can differ between a program run from the command line and one run from another process.
2) The program is multi-threaded, and its output behaves differently when called from Java, so the output timing differs.
Basically, you need to look at the code of the called program and force all logging/output to go through the same call.
Try calling setvbuf with the _IOLBF option in the C++ program. The end of the pipe exposed to the C++ program is probably unbuffered, while when you run the program from the command line with |, it's line-buffered.
If you're currently doing a System.out.print() or whatever for debugging in every iteration, try putting all lines from all iterations into one String and printing that in one go.
Maybe your output method prints asynchronously. In that case your printed output may be corrupted even though the data you got from the input stream is not.
Just an idea...
You should be reading stdout and stderr in separate threads to avoid blocking issues.
I can't say for sure if that will fix your problem but it should be done anyway to avoid other problems you may hit (your app may deadlock waiting on stdout for example).
Luckily there's a very good example with sample code that walks you through this.
http://www.javaworld.com/javaworld/jw-12-2000/jw-1229-traps.html
The article states (see the bottom of page 2) that you should always read from stderr and stdout, even if you don't need the output, to prevent possible deadlocks.
Because some native platforms only provide limited buffer size for standard input and output streams, failure to promptly write the input stream or read the output stream of the subprocess may cause the subprocess to block, and even deadlock.
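A minimal gobbler in that spirit; the class and tag names are mine, and you would replace the println with whatever fills your table:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class StreamGobbler extends Thread {
    private final InputStream in;
    private final String tag;

    public StreamGobbler(InputStream in, String tag) {
        this.in = in;
        this.tag = tag;
    }

    @Override
    public void run() {
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(tag + "> " + line); // hand each line to your own handler
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

// Usage:
// Process p = new ProcessBuilder(cmd).start();
// new StreamGobbler(p.getInputStream(), "out").start();
// new StreamGobbler(p.getErrorStream(), "err").start();
// int exit = p.waitFor();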