Java BufferedWriter: file deleted from another source - java

I have written a small piece of code that can be summarized as:

new Thread() {
    public void run() {
        // exception handling omitted for brevity
        BufferedWriter fileout = new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream(log, true), "UTF-8"));
        while (true) {
            fileout.write(blockingQueue.take());
        }
    }
}.start();
Now, some other threads produce rows and add them to blockingQueue.
If I remove the file from the console, fileout.write() neither fails nor throws an exception.
I was wondering how I can re-open the file if someone removes it from the filesystem via rm logfile.txt from the console.
The problem is not how to reopen it, but how to detect that the file was removed.
Some options are:
1. take() the next row and save it to a string
2. open the file and write the row to it
But even if I change the code this way, it doesn't guarantee that the file gets written before someone removes it.
The other option is to lock the file, but I don't want to do that.
I don't want to prevent the delete of the file :)

If the file you are writing to can disappear, your best option is not to keep the stream open, but to create a fresh FileOutputStream whenever you need to write something. That will recreate the file too (which I suppose is what you want).
Alternatively, you could check whether the file exists before each write. Performance-wise, I suppose the two methods come to about the same.
If performance is an issue, you could buffer in memory and, when the buffer is full, open the FileOutputStream (and close it again immediately after writing out the buffer).
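A minimal sketch of the reopen-per-write approach described above. ReopeningLogger and appendLine are names of my own invention, not from the question; the asker's queue-draining loop would call appendLine for each row taken from the queue.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: reopen the file for each write, so a deleted
// file is simply recreated on the next write.
public class ReopeningLogger {
    private final String path;

    public ReopeningLogger(String path) {
        this.path = path;
    }

    // Opens the file in append mode, writes one line, and closes it again.
    // If someone has removed the file, FileOutputStream recreates it.
    public synchronized void appendLine(String line) throws IOException {
        try (Writer out = new OutputStreamWriter(
                new FileOutputStream(path, true), StandardCharsets.UTF_8)) {
            out.write(line);
            out.write(System.lineSeparator());
        }
    }
}
```

The cost is one open/close syscall pair per row; if that turns out to matter, the in-memory buffering variant from the answer applies.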

Related

parallel write and read from file

I want to have a central log file in my system, which a certain application can write to and read from.
The writes are for new data, and the reads will be to compare generated data to written data.
I would like this application to run in multiple instances at a time, which means I need to find a way to read diffs from the file, and write.
I have seen this code, but it's good for a single pass over the file, and I don't see it working with multiple instances.
I'm building this app as a command-line tool, so I'm thinking about creating a file for each instance and then migrating it into the "general" log file.
I'd like to hear input from the forum on the different approaches to this question.
What I'm worried about is having a few instances reading and writing from the same file and generating a lock.
This is the code I have found so far:
public class Tp {
    public static void main(String[] args) throws IOException {
        File f = new File("/path/to/your/file/filename.txt");
        BufferedWriter bw = new BufferedWriter(new FileWriter(f));
        BufferedReader br = new BufferedReader(new FileReader(f));
        bw.write("Some text");
        bw.flush();
        System.out.println(br.readLine());
        bw.write("Some more text");
        bw.flush();
        bw.close();
        br.close();
    }
}
You seem to be trying to write and read the same file not only in one program but even within one thread. I do not believe this is of any benefit: during the program you know when and what you wrote, so you can drop the whole I/O logic.
To begin with, try writing two different programs that run as separate processes. If need be, you can still bring them into the same JVM later as separate threads.
Writing is certainly no problem, so the more interesting part is the reading logic. I'd probably implement this algorithm:
Loop until the program is terminated...
open the file, use skip() to jump to the location with new data
consume the existing data
remember how many bytes were read/remember the file size
close the file
wait until file has changed
Waiting for the file to change can be done by monitoring File.lastModified() or File.length(), or by using a WatchService.
But be aware that if you have multiple applications writing to the same file in parallel, it can break any meaningful structure in the data. Log4j ensures that parallel writes from within one application (multiple threads) go correctly into the file. If you need multiple processes writing in a synchronized fashion, consider logging into a database instead.
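The reading loop above could be sketched like this. LogTailer, consumeFrom, and the central.log file name are placeholders of mine, and a real implementation might block on a WatchService event instead of sleeping:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the open/skip/consume/close polling loop described above.
public class LogTailer {

    // Reads everything after 'position' from 'log' into 'sink',
    // returning the new position (total bytes consumed so far).
    static long consumeFrom(File log, long position, OutputStream sink)
            throws IOException {
        try (InputStream in = new FileInputStream(log)) {
            in.skip(position);                 // jump past already-seen data
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                sink.write(buf, 0, n);         // consume the new data
                position += n;
            }
        }
        return position;
    }

    public static void main(String[] args) throws Exception {
        File log = new File("central.log");    // placeholder file name
        long position = 0;
        while (true) {
            if (log.length() > position) {
                position = consumeFrom(log, position, System.out);
            }
            Thread.sleep(1000);                // or wait on a WatchService
        }
    }
}
```

Note that this only handles files that grow; if a writer truncates or rotates the log, the position bookkeeping would need to detect the shrink and reset.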

Save an updated version of a PrintWriter several times throughout a process

I am running very time-consuming analyses, and only their (very short) results are output to a text file using a PrintWriter.
Since my computer broke down twice recently and the results were not saved because the process hadn't finished (the file is only saved when printWriter.close() is reached at the end), I was wondering whether there is a way to save the file several times throughout the process, updating the output file each time. In that case, if the computer crashes, at least part of the results would still be available and wouldn't have to be recomputed.
Some details:
A process is repeated for n=10 iterations using different (fixed) random seeds. After each iteration, I would like to save the results obtained in the iterations run so far. Thus, the chosen output file would have to be updated and saved after each iteration.
I suspect all you're looking for is calling flush() on the PrintWriter.
It sounds like you should potentially look for a new computer, mind you...
You can create the PrintWriter using:
PrintWriter writer = new PrintWriter(new FileWriter("file name"), true);
to get the output buffer flushed automatically whenever println(), format(), or printf() is called on the writer. Or you can manually call writer.flush() to flush the output buffer whenever you like.
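A sketch of the per-iteration pattern, where writeResults, runIteration, and the results.txt file name are stand-ins for the asker's actual analysis:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Hypothetical sketch: flush one result line to disk after each
// iteration, so a crash loses at most the current iteration.
public class IncrementalResults {

    static void writeResults(String path) throws IOException {
        // 'true' enables autoflush: each println() pushes its line to disk
        try (PrintWriter writer = new PrintWriter(new FileWriter(path), true)) {
            for (int seed = 0; seed < 10; seed++) {
                writer.println(runIteration(seed));
            }
        }
    }

    static String runIteration(int seed) {
        return "seed " + seed + ": done"; // stand-in for the real computation
    }

    public static void main(String[] args) throws IOException {
        writeResults("results.txt"); // placeholder output file
    }
}
```

Opening the FileWriter in append mode would additionally survive restarts of the whole program, not just crashes mid-run.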

Necessity of flush() in Java I/O streams (FileOutputStream)

Hi, I need an answer about the necessity of flush() in Java I/O streams. In my program the output is the same with and without flush(): everything is written to the destination file. So why do I need flush()? Does FileOutputStream consume buffer memory?
Below is my simple sample program:
File file = new File("c:/newfile.txt");
FileOutputStream fop = new FileOutputStream("c:/newfile.txt");
// if the file doesn't exist, then create it
if (!file.exists()) {
    file.createNewFile();
}
// get the content in bytes
byte[] contentInBytes = content.getBytes();
fop.write(contentInBytes);
fop.flush();
fop.close();
Even when I comment out flush() and close(), it still writes the contents to the file properly. So why do we need flush()? And does FileOutputStream consume memory?
close() calls flush() on the stream, so flush() is not needed if you are closing the stream anyway.
flush() is useful when you want to make sure the data is saved without closing the stream, e.g. when sending messages over the Internet, or when writing to the console. You may notice that if you write to the console with System.out.print(), the output is not displayed until you call flush(), or until there is a newline in the text (in which case Java calls flush() for you).
See more on this question
In fact, FileOutputStream is not buffered, so the data is written directly to the file.
The abstract OutputStream defines flush() (as an empty method) to accommodate the needs of buffered streams as well, and FileOutputStream simply inherits it.
If you are not certain of the underlying implementation, it is generally good practice to flush streams before closing them.
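To see flush() actually matter, wrap the stream in a BufferedOutputStream. In this sketch (FlushDemo and the method name are mine) the file size is sampled before and after the flush:

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// With a BufferedOutputStream the bytes sit in memory until flush()
// (or close()) is called, unlike with a bare FileOutputStream.
public class FlushDemo {

    // Returns {size before flush, size after flush} for a 5-byte write.
    static long[] sizesBeforeAndAfterFlush(File f) throws IOException {
        long[] sizes = new long[2];
        try (BufferedOutputStream out =
                new BufferedOutputStream(new FileOutputStream(f))) {
            out.write("hello".getBytes());
            sizes[0] = f.length(); // 0: the bytes are still in the buffer
            out.flush();
            sizes[1] = f.length(); // 5: the bytes have reached the file
        }
        return sizes;
    }
}
```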
Also, in your code there is a small error:

File file = new File("c:/newfile.txt");
FileOutputStream fop = new FileOutputStream("c:/newfile.txt");
// Will never happen: new FileOutputStream already creates the file
if (!file.exists()) {
    file.createNewFile();
}
EDIT:
As for the close() part of the question:
When you comment out close(), the close method is called by the finalizer on exiting main() (i.e. before the stream is garbage collected, a JVM thread calls its finalize() method, which in turn calls close()). But you can't sensibly rely on the finalizer: you don't own it and you can't be sure when it runs.
Again, best practice is to call close() explicitly.

File becomes zero bytes if the system crashes (Android)

I am writing an object to a file in a separate thread, and this thread executes every minute. Everything works fine, but if the system crashes (power supply removed), then the file I am writing the object to is zero bytes on the next reboot.
My Code is:
FileOutputStream fileOut = new FileOutputStream("/sdcard/vis.ser");
ObjectOutputStream out = new ObjectOutputStream(fileOut);
out.writeObject(/*An object*/);
out.close();
The idea is to use a checksum to ensure the file has been written correctly, and to use renaming as Whity suggests.
However, if you are saving a primitive type, you can use SharedPreferences, which avoids the "0 bytes" problem altogether.
This question will give you a broader idea of how to prevent it.
So your worry is that the previous data is destroyed while the new data was not yet saved?
Shall you try writing to a tmp file and, if you manage to close it, simply rename it?
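A sketch of that write-to-tmp-then-rename idea, with SafeSave as a hypothetical helper of mine. The atomic move means a crash leaves either the old file or the complete new one on disk, never a half-written one (on filesystems that support it):

```java
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch: serialize to a scratch file first, then replace
// the real file only after the write has fully completed.
public class SafeSave {

    static void save(Serializable obj, Path target) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        try (OutputStream fileOut = Files.newOutputStream(tmp);
             ObjectOutputStream out = new ObjectOutputStream(fileOut)) {
            out.writeObject(obj);
        }
        // Only now does the target file change, in a single rename step.
        Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }
}
```

For extra safety against a power loss between write and rename, an fsync (e.g. via FileChannel.force) on the tmp file before the move would be needed; that is omitted here.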

Have you ever seen a Java File close() throw an exception?

Has anyone ever seen an exception thrown when calling the close() method on any Closeable object?
An IOException will be thrown on close if the final flush fails. Possible causes include:
the file system is full, or the user is over quota,
hard disc errors,
a file system was forcibly unmounted,
a remote file system is unavailable due to networking or other problems,
(possibly) a character encoding error if writing to the file via an OutputStreamWriter or similar,
a device error if the "file" is a device file,
a lost connection if the closeable is a network stream,
a broken pipe if the closeable is a pipe to external process,
and so on.
I have certainly seen some of these. Others are unlikely.
However, if the data you are writing is important, then you should allow for close() failing. For example, if your application is writing out a critical file and the file system fills up, your application had better notice this before it replaces the old copy of the file with the truncated version.
Yes, it's not that rare, IMHO, if you are working with anything other than local disk files.
close() works if at that point your Closeable is still valid and open. Many things like pipes, remote files, etc., can die prematurely.
In addition, I have seen code that ignores errors on open and write and still tries to close (e.g., in a finally block).
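A defensive pattern for the critical-file case above, sketched with a hypothetical writeCritical helper: try-with-resources propagates a failure in close() (e.g. a full disk discovered during the final flush) to the caller instead of silently losing it:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

// Sketch: the implicit close() at the end of the try block can throw,
// and that IOException reaches the caller rather than being swallowed.
public class CriticalWrite {

    static void writeCritical(String path, String data) throws IOException {
        try (BufferedWriter out = new BufferedWriter(new FileWriter(path))) {
            out.write(data);
        } // close() runs here; an IOException from the final flush propagates
    }
}
```

The caller can then decide, for instance, not to replace the old copy of the file when the write did not complete cleanly.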
Not in terms of file I/O, but in terms of sockets: close() will raise an IOException when the other side has aborted the connection. For example, when you fire an HTTP request for a (large) webpage and then immediately navigate away by clicking another link (while the page isn't finished loading), the server side will get an IOException (or a subclass like ClientAbortException in Tomcat and its clones) when the output stream of the HTTP response is flushed/closed.
Old post and long since answered but here's a real example:
The following code will throw an exception when bufferedWriter.close() is called. This happens because the BufferedWriter's underlying Writer (the FileWriter) has already been closed, and when a BufferedWriter closes, it first attempts to flush any data in its buffer to the underlying Writer.
File newFile = new File("newFile.txt");
FileWriter fileWriter = new FileWriter(newFile);
BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
bufferedWriter.write("Hello World");
fileWriter.close();
bufferedWriter.close();
Note: if there's no data in the buffer (comment out the write() line, or add a flush() call before the first close()), then no exception will be generated.
I haven't, but it's possible. Imagine an OutputStream that for some reason hasn't written its data to the file yet. Calling close() will flush out the data, but if the file is locked, an IOException will be raised.
Try yanking a USB drive with an open file on it. If it doesn't give an exception I'd be pretty surprised.
I guess you could try to force this by unplugging the disk your file is on. But on any Closable? I think it would be easy to get something that uses a socket to throw an exception upon closing.
I have - in my unit tests against mocks ;)
