I am writing an object to a file from a separate thread, and this thread executes every minute. Everything works fine, but if the system crashes (power supply removed), then the file I am writing the object to is zero bytes on the next reboot.
My code is:
FileOutputStream fileOut = new FileOutputStream("/sdcard/vis.ser");
ObjectOutputStream out = new ObjectOutputStream(fileOut);
out.writeObject(/*An object*/);
out.close();
The idea is to use a checksum to ensure the file has been written correctly, and to use renaming, as Whity suggests.
However, if you are saving a primitive type, you can use SharedPreferences, which avoids your "0 bytes" problem.
This question will give you a broader idea about how to prevent it.
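For the SharedPreferences route, a minimal sketch (standard Android API; the context variable, key, and value names are illustrative):
SharedPreferences prefs = context.getSharedPreferences("app_state", Context.MODE_PRIVATE);
prefs.edit().putLong("lastResult", value).apply(); // apply() persists asynchronously; use commit() to block until saved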
So your worry is that the previous data is destroyed and the new data has not yet been saved?
Could you try writing to a temporary file and, if you manage to close it successfully, simply rename it?
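A minimal sketch of that temp-file-and-rename idea, combined with forcing the bytes to the device before the rename (file names are illustrative):
File tmp = new File("/sdcard/vis.ser.tmp");
FileOutputStream fileOut = new FileOutputStream(tmp);
ObjectOutputStream out = new ObjectOutputStream(fileOut);
out.writeObject(/*An object*/);
out.flush();
fileOut.getFD().sync(); // force the data to storage before renaming
out.close();
// renameTo only returns false on failure; java.nio.file.Files.move reports errors properly
tmp.renameTo(new File("/sdcard/vis.ser"));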
I have a function whose purpose is to create a directory and copy a CSV file to that directory. This same function gets run multiple times, each time by an object in a different thread. It gets called in the object's constructor, but I have logic in there to only copy the file if it does not already exist (meaning, it checks to make sure that one of the other instances running in parallel did not already create it).
Now, I know that I could simply rearrange the code so that this directory is created and the file is copied before the objects are run in parallel, but that is not ideal for my use case.
I am wondering: will the following code ever fail? That is, could one instance be in the middle of copying the file while another instance attempts to start copying that same file to the same location?
private void prepareGroupDirectory() {
    new File(outputGroupFolderPath).mkdirs();
    String map = "/path/map.csv";
    File source = new File(map);
    String myFile = "/path/test_map.csv";
    File dest = new File(myFile);
    // copy file
    if (!dest.exists()) {
        try {
            Files.copy(source, dest);
        } catch (Exception e) {
            // do nothing
        }
    }
}
To sum it all up: is this function thread-safe in the sense that different threads could all run it in parallel without it breaking? I think yes, but any thoughts would be helpful!
To be clear, I have tested this many many times and it has worked every time. I am asking this question to make sure, that in theory, it will still never fail.
EDIT: Also, this is highly simplified so that I could ask the question in an easy to understand format.
This is what I have now after following comments (I still need to use nio instead), but this is currently working:
private void prepareGroupDirectory() {
    new File(outputGroupFolderPath).mkdirs();
    logger.info("created group directory");
    String map = instance.getUploadedMapPath().toString();
    File source = new File(map);
    String myFile = FilenameUtils.getBaseName(map) + "." + FilenameUtils.getExtension(map);
    File dest = new File(outputGroupFolderPath + File.separator + "results_" + myFile);
    instance.setWritableMapForGroup(dest.getAbsolutePath());
    logger.info("instance details at time of preparing group folder: {} ", instance);
    final ReentrantLock lock = new ReentrantLock();
    lock.lock();
    try {
        // copy file
        if (!dest.exists()) {
            String pathToWritableMap = createCopyOfMap(source, dest);
            logger.info(pathToWritableMap);
        }
    } catch (Exception e) {
        // do nothing
        // thread-safe
    } finally {
        lock.unlock();
    }
}
It isn't.
What you're looking for is the concept of rotate-into-place. The problem with file operations is that almost none of them are atomic.
Presumably you don't just want 'only one' thread to win the race for making this file; you also want that file to either be perfect, or not exist at all: you would not want anybody to be able to observe that CSV file in a half-baked state, and you most certainly wouldn't want a crash halfway through generating the CSV file to mean that the file is there, half-baked, but its mere existence prevents any attempt to write it out properly. You can't use finally blocks or exception catching to address this issue; someone might trip over a power cable.
So, how do you solve all these problems?
You do not write to foo.csv. Instead you write to foo.csv.23498124908.tmp where that number is randomly generated. Because that just isn't the actual CSV file anybody is looking for, you can take all the time in the world to finish it properly. Once it is done, then you do the magic trick:
You rename foo.csv.23498124908.tmp into foo.csv, and do so atomically - one instant in time foo.csv does not exist, the next instant in time it does and it has the complete contents. Also, that rename will only succeed if the file didn't exist before: It is impossible for two separate threads to both rename their foo.csv.23481498.tmp file into foo.csv simultaneously. If you were to try it and get the timing just perfect, one of them (arbitrary which one) 'wins', the other one gets an IOException and doesn't rename anything.
The way to do this is using Files.move(from, to, StandardCopyOption.ATOMIC_MOVE). ATOMIC_MOVE is even kind enough to flat out refuse to execute if somehow the OS/filesystem combination simply does not support atomic moves (they pretty much all do, though).
The second advantage is that this locking mechanism works even if you have multiple entirely different apps running. If they all use ATOMIC_MOVE or the equivalent of this in that language's API, only one can win, whether we're talking 'threads in a JVM' or 'apps on a system'.
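A minimal sketch of rename-into-place with java.nio.file (the target path and contents are placeholders):
import java.io.IOException;
import java.nio.file.*;

static void writeAtomically(Path target, byte[] contents) throws IOException {
    // The temp file must live on the same file system as the target, or the move cannot be atomic.
    Path tmp = Files.createTempFile(target.getParent(), target.getFileName().toString(), ".tmp");
    try {
        Files.write(tmp, contents);
        // One instant target does not exist; the next, it exists with complete contents.
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
    } finally {
        Files.deleteIfExists(tmp); // no-op if the move succeeded
    }
}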
If you want instead to avoid the notion of multiple threads all simultaneously doing the work to make this CSV file, even though only one should do so and the rest should 'wait' until the first thread is done, file system locks are not the answer. You can try a marker file (an empty file whose existence is a sign that some other thread is working on it), and there's even a primitive for that in java's java.nio.file APIs: the CREATE_NEW flag can be used when creating a file, which means: atomically create it, failing if the file already exists, with concurrency guarantees (if multiple processes/threads all run that simultaneously, one succeeds and all others fail, guaranteed). However, CREATE_NEW can only atomically create. It cannot atomically write; nothing can (hence the whole 'rename it into place' trick above).
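A hedged sketch of that atomic-create primitive (the lock-file name and method are illustrative):
import java.io.IOException;
import java.nio.file.*;

/** Attempts to create the lock file atomically; true means this caller won the race. */
static boolean tryAcquire(Path lockFile) throws IOException {
    try {
        Files.createFile(lockFile); // CREATE_NEW semantics: fails if the file already exists
        return true;
    } catch (FileAlreadyExistsException e) {
        return false; // another thread or process got there first
    }
}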
The problem with such locks is twofold:
1. If the JVM crashes, that file doesn't go away. Ever launched a Linux daemon process, such as postgresd, and had it tell you that 'the pid file is still there; if there is no postgres running, please delete it'? Yeah, that problem.
2. There's no way to know when it is done, other than to re-check for that file's existence every few milliseconds. If you wait very few milliseconds you're potentially thrashing the disk (hopefully your OS and disk cache algorithms do a decent job). If you wait a lot you might be waiting around for no reason for a long time.
Hence, you shouldn't do this stuff; just use locks within the process. Use synchronized or a java.util.concurrent.ReentrantLock or whatnot.
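For in-process locking to work here, the lock must be shared between the threads (for example, static), not created fresh on every call as in the snippet above. A minimal sketch, reusing the names and helper from the question's code:
import java.util.concurrent.locks.ReentrantLock;

private static final ReentrantLock COPY_LOCK = new ReentrantLock();

private void prepareGroupDirectory() {
    new File(outputGroupFolderPath).mkdirs();
    COPY_LOCK.lock();
    try {
        // the exists-check and the copy now happen as one unit, per JVM
        if (!dest.exists()) {
            createCopyOfMap(source, dest);
        }
    } finally {
        COPY_LOCK.unlock();
    }
}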
To answer your code snippet specifically: no, that is broken. It is possible for 2 threads to run simultaneously and both get false when they run dest.exists(), thus both entering the copy block, and then they fall all over each other when copying - depending on the file system, usually one thread ends up 'winning', with its copy operation succeeding and the other thread's seemingly lost to the aether (most file systems are ref/node based, meaning the file was written to disk but its 'pointer' was immediately overwritten, and the filesystem considers it garbage, more or less).
Presumably you consider that a failing scenario, and your code does not guarantee that it can't happen.
NB: What API are you using? Files.copy(instanceOfJavaIoFile, anotherInstanceOfJavaIoFile) isn't java. There is java.nio.file.Files.copy(instanceOfjnfPath, anotherInstanceOfjnfPath) - that's the one you want. Perhaps this Files you have is from apache commons? I strongly suggest you don't use that stuff; those APIs are usually obsolete (java itself has better APIs to do the same thing) and badly designed. Ditch java.io.File; it's an outdated API. Use java.nio.file instead. The old API doesn't have ATOMIC_MOVE or CREATE_NEW, and doesn't throw exceptions when things go wrong - it just returns false, which is easily ignored and has no room to explain what went wrong. Hence why you should not use it. One of the major issues with the apache libraries is that they use the anti-pattern of piling a ton of static utility methods into a giant container. Unfortunately, the second take on file stuff in java itself (java.nio.file) is similarly boneheaded API design. I guess in the java world, third time will be the charm. At any rate, a bad core java API with advanced capabilities is still better than a bad apache utility API that wraps around the older API, which simply does not expose the kinds of capabilities you need here.
I have written a small piece of code that can be summarized as:
new Thread(() -> {
    try (BufferedWriter fileout = new BufferedWriter(
            new OutputStreamWriter(new FileOutputStream(log, true), "UTF-8"))) {
        while (true) {
            fileout.write(blockingQueue.take()); // blocks until a row is available
        }
    } catch (IOException | InterruptedException e) {
        // handle or log, then let the thread exit
    }
}).start();
Now, some other threads will produce rows and add them to blockingQueue.
Now, if I remove the file from the console, fileout.write() does not fail or throw an exception.
I was wondering how I can re-open the file if someone removes it from the filesystem via rm logfile.txt from the console.
The problem is not how to reopen it, but how to detect that the file was removed.
Some options are:
1. do take() and save it to a string
2. open the file and write to it
But even if I change the code in this way, it doesn't guarantee that the file gets written before someone removes it.
The other option is to lock the file, but I don't want to do that.
I don't want to prevent the file from being deleted :)
If the file you are writing to can disappear, your best option is not to keep the stream open, but to create a fresh FileOutputStream whenever you need to write something. That will recreate the file, too (which I suppose is what you want).
Or you could check if the file exists before each write. I suppose that performance-wise the two methods come to about the same.
If performance is an issue, you could buffer in memory and when the buffer is full, open the FileOutputStream (and immediately close it again after writing out the buffer).
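A minimal sketch of the reopen-per-write idea (the method and names are illustrative; append mode recreates the file if it was deleted):
import java.io.*;

/** Reopens the log in append mode for each write; if the file was removed, this recreates it. */
static void appendRow(File log, String row) throws IOException {
    try (Writer out = new BufferedWriter(
            new OutputStreamWriter(new FileOutputStream(log, true), "UTF-8"))) {
        out.write(row);
    }
}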
I am running very time-consuming analyses, and only their (very short) results are output to a text file using PrintWriter.
Since my computer broke down twice recently and the results were not saved because the process hadn't finished (the file is only saved when printWriter.close() is reached at the end), I was wondering whether there is a way to save the file several times throughout the process and update the output file each time. In that case, if the computer crashes, at least part of the results would still be available and wouldn't have to be recomputed.
Some details:
A process is repeated for n=10 iterations using different (fixed) random seeds. After each iteration, I would like to save the results obtained in the iterations run so far. Thus, the chosen output file would have to be updated and saved after each iteration.
I suspect all you're looking for is calling flush on the PrintWriter.
Sounds like you should potentially look for a new computer, mind you...
You can create PrintWriter using:
PrintWriter writer = new PrintWriter(new FileWriter("file name"), true);
to get the output buffer flushed automatically whenever println(), format(), or printf() is called on the writer. Or you can call writer.flush() manually to flush the output buffer whenever you want.
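A minimal sketch of saving after each iteration (runIteration is a hypothetical stand-in for the long-running analysis step):
import java.io.*;

static void runAll() throws IOException {
    try (PrintWriter writer = new PrintWriter(new FileWriter("results.txt"), true)) {
        for (int seed = 0; seed < 10; seed++) {
            writer.println(runIteration(seed)); // autoflush pushes the line to the file immediately
        }
    }
}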
I've recently finished a small game and have been trying to add audio to it. Currently the sound system I have is working (basically the same code as the top answer here), but there is a significant stall during every output (~200-300 ms). Since it's a quick game, I'm looking for something significantly quicker. I'm not experienced with threads, but would they be applicable here?
Instead of reading the file every time you wish to play its contents in audio format, read the file once into a byte array and then read the audio from that array of bytes.
public static byte[] getBytes(String file) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
        byte[] bytes = new byte[(int) raf.length()];
        raf.readFully(bytes); // read() may return before the array is full
        return bytes;
    }
}
Then, you could simply alter the playSound method to take a byte array as the parameter, and then write them to the SourceDataLine instance to play the sound (like is done in the original method, but it reads them from the file just before it writes them).
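A hedged sketch of such an altered playSound (assuming the byte array holds a complete audio file in a format AudioSystem can decode):
import java.io.ByteArrayInputStream;
import javax.sound.sampled.*;

static void playSound(byte[] bytes) throws Exception {
    // ByteArrayInputStream supports mark/reset, which getAudioInputStream requires
    try (AudioInputStream in = AudioSystem.getAudioInputStream(new ByteArrayInputStream(bytes))) {
        AudioFormat format = in.getFormat();
        try (SourceDataLine line = AudioSystem.getSourceDataLine(format)) {
            line.open(format);
            line.start();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                line.write(buf, 0, n); // feed decoded bytes to the sound card
            }
            line.drain(); // wait for playback to finish
        }
    }
}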
You could try passing a BufferedInputStream to the overloaded method AudioSystem.getAudioInputStream() instead of passing a File.
The call to drain is a blocking one and it causes the delays that you observe. You do not need to wait there. However, if you let the sound output operate in parallel with your other code, you should also define what happens if there is a lot of sound in your sound buffers and you are queueing more. Learn about the available method and the rest of the API to be able to manage the sound card flexibly and without any "lagging sound" effects.
Threads can also be used for this purpose, but it is not necessary here. The role of the parallel process can be adequately played by the sound driver itself and the single threaded approach will make your application easier to design and easier to debug.
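A hedged fragment of that idea: feed the line only as much as it can accept right now, so write() never blocks (the buffer bookkeeping around this is omitted; names are illustrative):
import javax.sound.sampled.SourceDataLine;

/** Writes as much of pending[offset..] as the line accepts without blocking; returns the new offset. */
static int feed(SourceDataLine line, byte[] pending, int offset) {
    int writable = Math.min(line.available(), pending.length - offset);
    if (writable <= 0) {
        return offset; // the line's buffer is full; try again on the next game tick
    }
    return offset + line.write(pending, offset, writable);
}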
As much as I'd like to accept one of these existing answers, I solved my problem in a simple way. By loading all the referenced File variables during initialization, the delay does not come back at any point during gameplay. However if this is not an adequate solution for anyone else viewing this question, I would also recommend Vulcan's answer.
Short question,
I saw some old code where a ByteArrayInputStream was created like:
new BufferedReader(new InputStreamReader(new ByteArrayInputStream(somebytes)));
And then the BufferedReader is used to read out somebytes line by line.
All working fine, but I noticed that the BufferedReader is never closed.
This is all running in a long-running WebSphere application; somebytes is not terribly big (200k at most), it is only invoked a few times a week, and we're not experiencing any apparent memory leaks. So I expect that all the objects are successfully garbage collected.
I always (once) learned that input/output streams need to be closed, in a finally statement. Are ByteStreams the exception to this rule?
kind regards
Jeroen.
You don't have to close a ByteArrayInputStream; the moment it is no longer referenced by any variable, the garbage collector will release the stream and somebytes (assuming, of course, they aren't referenced somewhere else).
However, it is always good practice to close every stream. In fact, maybe the implementation creating the stream will change in the future, and instead of raw bytes you'll be reading a file? Also, static code analysis tools like PMD or FindBugs (see comments) will most likely complain.
If you are tired of closing the stream and being forced to handle the impossible IOException, you can use IOUtils:
IOUtils.closeQuietly(stream);
It is always good practice to close your readers. However, not closing a ByteArrayInputStream has a much less severe potential downside, because you are not holding a file handle, just a byte array in memory.
As #TomaszNurkiewicz mentioned it's always good to close the opened stream. Another good way to let it do the try block itself. Use try with resource like.......
try ( InputStream inputStream = new ByteArrayInputStream(bytes); Workbook workBook = new XSSFWorkbook(inputStream)) {
here Workbook and InputStream both implements Closeable Interface so once try block completes ( normally or abruptly), stream will be closed for sure.
Resources need to be closed in a finally block (or equivalent). But when you just have some bytes, no, it doesn't matter. Although when writing, be careful to flush in the happy case.