I'm looking for a "trick" or an "hack" to be certain that a file has been persisted on a remote disk, passing through vmware cache, NAS cache, etc.
Flushing and closing a FileOutputStream is not enough, and I suspect FileChannel.force(true) isn't either.
I'm thinking about something like this:
write the file and read it back (see the sketch after this list)
write the file, check the timestamp, rename the file, then check for a different timestamp
write the file with the "wrong" content, overwrite it with the original content, then read it back and check the content
Maybe someone has had the same problem and found a solution.
My requirement is not to lose data. The Java application works in this way:
1. accept a file from a remote source
2. add a digital signature and a certified timestamp, creating a new file; if this file is lost it cannot be recreated in any way
3. write this file to the storage
4. mark the file as signed in the database
5. tell the remote side that everything is OK
Tonight we had a crash and three transactions failed after step 5 but before the data was actually flushed to the remote store. So the database says that everything is fine, the remote side was told the same but 15 seconds of signed data was lost. And this is no good.
The correct solution would be a "sync" mount of the remote file system, but this is not going to happen any time soon. Even then I would not completely trust this scenario, given that the app is running on a VMware server.
So I'd like to have a "best effort" hack to prevent (or at least mitigate) incidents like this one.
Let's start with one assumption: you cannot guarantee any single write to any single disk. There are just too many layers of software and hardware between your write and the disk platter. And even if you could guarantee the write, you cannot guarantee that the data will be readable. It's possible that the disk will crash between the write and the read.
The only solution is redundancy, provided either by a framework (e.g., an RDBMS) or by your app.
When you receive and sign the file, you need to send it to multiple destinations on different physical hosts, and wait for them to reply that they saved the file. One of them might crash. Two of them might crash. How important the data is will determine how many remote hosts you need.
Incidentally, redundancy also applies to your database. The fact that a transaction committed does not mean that you'll be able to recover it after a database crash (although DBMS engineers have a lot more experience than either of us in ensuring writes, all of it depends on a sysadmin who understands things like "logs and datafiles must reside on separate physical drives"). I strongly recommend that you (redundantly) store enough metadata along with the file to be able to reconstruct the database entry.
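To make the fan-out idea concrete, here is a minimal sketch under made-up assumptions: the "destinations" are simply directories on different mounts/hosts, the quorum size is arbitrary, and the sidecar metadata is whatever you need to rebuild the database row:

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class RedundantStore {

    // Hypothetical destinations: directories sitting on different physical hosts/mounts.
    static final List<Path> DESTINATIONS = Arrays.asList(
            Paths.get("/mnt/storeA"), Paths.get("/mnt/storeB"), Paths.get("/mnt/storeC"));

    // Require at least this many successful, fsync'ed copies before acknowledging the sender.
    static final int REQUIRED_COPIES = 2;

    static void storeRedundantly(String fileName, byte[] signedContent, String metadata) throws IOException {
        int copies = 0;
        for (Path dir : DESTINATIONS) {
            try {
                writeAndSync(dir.resolve(fileName), signedContent);
                // Sidecar metadata: enough to rebuild the DB row if the database is lost.
                writeAndSync(dir.resolve(fileName + ".meta"), metadata.getBytes(StandardCharsets.UTF_8));
                copies++;
            } catch (IOException e) {
                // one destination failing is expected; keep going and count the rest
            }
        }
        if (copies < REQUIRED_COPIES) {
            throw new IOException("Only " + copies + " copies written; refusing to acknowledge");
        }
    }

    static void writeAndSync(Path target, byte[] data) throws IOException {
        try (FileOutputStream out = new FileOutputStream(target.toFile())) {
            out.write(data);
            out.getFD().sync();   // ask the OS to push the data to the device
        }
    }
}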
Related
We are using MapDB to store a list of files that have been visited during a long-running process, so that if we need to abort, or if the process crashes, we can resume where we left off.
We want to protect against crashes corrupting our MapDB file store.
So we are using transactions and periodically commit changes to disk.
But then I noticed something interesting: if we crash our process at certain times we still get the error
Header checksum broken. Store was not closed correctly and might be corrupted. Use DBMaker.checksumHeaderBypass() to recover your data. Use clean shutdown or enable transactions to protect the store in the future.
But indeed, setting checksumHeaderBypass makes the error go away. What is the cost of using this checksumHeaderBypass setting?
If you use MapDB from @PostConstruct in a Spring Boot app, it throws this error. Avoid initializing MapDB before the app has started (don't initialize it from @PostConstruct).
Not getting any traffic here because there aren't a whole lot of MapDB people on SO, so I'll post the answer I think is best.
Basically, if you allow the checksum header bypass you can load the MapDB store, but it may contain invalid entries: if the checksum doesn't match, the content isn't what it should be, so you'll likely have some bad data in the store. Depending on how often you commit to storage, that could mean a lot or a little corrupted data.
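To make that concrete, here is a minimal sketch against the MapDB 3.x API (the file name and map name are placeholders): open the store with transactions enabled and commit periodically, and treat checksumHeaderBypass() purely as a recovery path, accepting that entries written after the last commit may be missing or bad:

import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.HTreeMap;
import org.mapdb.Serializer;

import java.io.File;

public class VisitedStore {

    public static void main(String[] args) {
        File storeFile = new File("visited.db");   // placeholder file name

        DB db;
        try {
            // Normal path: the write-ahead log protects the store against crashes.
            db = DBMaker.fileDB(storeFile).transactionEnable().make();
        } catch (Exception headerBroken) {
            // Recovery path after an unclean shutdown: the store opens,
            // but data written after the last consistent state may be invalid.
            db = DBMaker.fileDB(storeFile).checksumHeaderBypass().make();
        }

        HTreeMap<String, Boolean> visited = db
                .hashMap("visited", Serializer.STRING, Serializer.BOOLEAN)
                .createOrOpen();

        visited.put("/some/input/file", true);
        db.commit();     // periodic commit: everything up to here survives a crash
        db.close();
    }
}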
I am looking to force synchronisation to disk after files are written at certain points in my application. Since it runs on Linux, I could get away with just running
Runtime.getRuntime().exec("sync");
However, I would rather not introduce Linux specific system calls and would rather use
java.io.FileDescriptor#sync();
However, I use Apache VFS to perform operations on the local file system and to my knowledge it does not provide access to the underlying file descriptor. But do I need access to the actual file descriptor that was just written to to force synchronization? Could I not just use any FileDescriptor to call sync for the same effect, for example
FileDescriptor.in.sync();
Would that be a valid approach, and would the results match that of calling sync in Linux?
Also, just in case anyone knows if/how it is possible to get access to the underlying FileDescriptor in VFS, that would be useful to know as well.
Edit: it appears that
FileDescriptor.in.sync();
does not want to work on Linux (although it works on my Windows machine when run from Eclipse), but
new FileOutputStream(new File("anyfile")).getFD().sync();
definitely works and the results of calling this match the results of calling the Linux sync command directly. However, it involves opening and closing a redundant file output stream, so it's not exactly ideal. Any other reason this might be a bad idea, as it does seem to work? Is there some other way to get a FileDescriptor that can be used to sync?
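For comparison, here is a minimal sketch that syncs the descriptor of the stream that was actually written, which is the guarantee the API actually documents, rather than relying on a redundant extra stream (the path is a placeholder):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class SyncAfterWrite {

    static void writeDurably(File target, String text) throws IOException {
        try (FileOutputStream out = new FileOutputStream(target)) {
            out.write(text.getBytes(StandardCharsets.UTF_8));
            out.flush();          // flush Java-level buffers (a no-op for FileOutputStream, but harmless)
            out.getFD().sync();   // fsync(2): block until the OS reports this file's data is on the device
        }
    }
}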
I investigated such issues some time ago: Question 1, Question 2.
In Linux, a java.io.FileDescriptor#sync call ensures that the modified data of the file associated with the descriptor is sent to the disk. (That cheap disks tend to skip the write and only place the data in an unreliable (i.e., non-NVRAM) write cache is a different/additional problem.)
It does not guarantee that modified data of other files is also written back. That is simply not part of the contract of sync or of the underlying POSIX fsync function.
However, in certain circumstances (e.g. ext3 in data=ordered mode), an fsync on a file writes back all modified data of the file system. This is really fun, because it may create significant latencies just because some other application has created a ton of dirty blocks.
I have the following problem:
I have a web application that stores data in the database. I would like the clients to be able to extract the data of, e.g., two tables to a file (local to the client).
The database could be arbitrarily big (meaning I have no idea how much data could potentially be in the database; it could be huge).
What is the best approach for this?
Should all the data be SELECTed out of the tables and returned to the client as a single structure to be stored in a file?
Or should the data be retrieved in parts, e.g. the first 100 entries, then the next 100, etc., and the single structure assembled on the client?
Are there any pros and cons to consider here?
I've built something similar - there are some really awkward problems here, especially as the filesize can grow beyond what you can comfortably handle in a browser. As the amount of data grows, the time to generate the file increases; this in turn is not what a web application is good at, so you run the risk of your web server getting unhappy with even a smallish number of visitors all requesting a large file.
What we did was split the application into three parts.
The "file request" was a simple web page, in which authenticated users can request their file. This kicks off the second part outside the context of the web page request:
File generator.
In our case, this was a Windows service which looked at a database table of file requests, picked the latest one, ran the appropriate SQL query, wrote the output to a CSV file, ZIPped that file, moved it to the output directory, and emailed the user a link. It set the state of the record in the database to make sure only one process ran at any one point in time.
FTP/WebDAV site:
The ZIP files were written to a folder which was accessible via FTP and WebDAV - these protocols tend to do better with huge files than a standard HTTP download.
This worked pretty well - users didn't like to wait for their files, but the delay was rarely more than a few minutes.
We have a similar use case with an Oracle cluster containing approx. 40 GB of data. The solution that works best for us is to fetch as much data as possible per select statement, as this reduces DB overhead significantly.
That being said, there are three optimizations which worked very well for us:
1.) We partition the data into 10 roughly equally sized sets and select them from the database in parallel. For our cluster we found that 8 parallel connections work approx. 8 times faster than a single connection. There is some additional speedup up to 12 connections, but that depends on your database and your DBA.
2.) Keep away from Hibernate or other ORMs and use hand-written JDBC once you are dealing with large amounts of data. Use all the optimizations you can get there (e.g. ResultSet.setFetchSize()).
3.) Our data compresses very well, and running it through a gzip stream saves lots of I/O time. In our case it removed I/O from the critical path. Incidentally, this is also true when storing the data to a file. (A sketch combining points 2 and 3 follows this list.)
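A minimal sketch of points 2 and 3 combined, with a made-up connection URL, table, and column names; the fetch size keeps the driver from materializing the whole result set, and the GZIP stream compresses the output on its way to disk:

import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.zip.GZIPOutputStream;

public class Exporter {

    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL and table; replace with your own.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
             PreparedStatement stmt = conn.prepareStatement("SELECT id, payload FROM big_table");
             BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                     new GZIPOutputStream(new FileOutputStream("export.csv.gz")), StandardCharsets.UTF_8))) {

            stmt.setFetchSize(1000);              // stream rows in chunks instead of loading everything
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    out.write(rs.getLong("id") + "," + rs.getString("payload"));
                    out.newLine();
                }
            }
        }
    }
}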
In my Java app, on Linux, I need to periodically read some text files that change often.
(these text files are updated by a separate app).
Do I need to be concerned about the rare case when attempting to read the file at the exact moment it is being updated? If so, how can I guarantee that my reads always return without failing? Does the OS handle this for me, or could I potentially read 1/2 a file?
thanks.
The OS can help you achieve consistent reads, but it requires that both apps are written with this in mind.
In a nutshell, you open the file in your Java app with exclusive read/write access - this ensures that no one else, including your other app, is modifying the file while you are reading it. The FileLock class can help you ensure you have exclusive access to a file.
Your other app will then periodically try to write to the file. If it does this at the same time you are reading the file, access will be denied and the other app should retry. This is the critical part: if that app doesn't expect the file to be unavailable and treats this as a fatal error condition, the write will fail, the app won't save the data, and it may fail or exit, etc.
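Here is a minimal sketch of that scheme, with a placeholder path; note that FileLock is advisory on Linux, so it only works because both apps ask for the lock:

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class CooperativeFileLock {

    static final Path FILE = Paths.get("shared.txt");   // placeholder path

    // Reader side: hold an exclusive lock for the duration of the read.
    // (An exclusive FileLock requires a writable channel, hence READ + WRITE.)
    static void readWhileLocked() throws Exception {
        try (FileChannel channel = FileChannel.open(FILE,
                     StandardOpenOption.READ, StandardOpenOption.WRITE);
             FileLock lock = channel.lock()) {           // blocks until the writer releases its lock
            // ... read the file through this channel ...
        }
    }

    // Writer side (the other app): try for the lock and back off instead of failing hard.
    static boolean tryWrite(byte[] data) throws Exception {
        try (FileChannel channel = FileChannel.open(FILE,
                StandardOpenOption.WRITE, StandardOpenOption.CREATE)) {
            FileLock lock = channel.tryLock();
            if (lock == null) {
                return false;                            // reader holds the lock: retry later
            }
            try {
                channel.write(ByteBuffer.wrap(data));
            } finally {
                lock.release();
            }
            return true;
        }
    }
}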
If the other app must always be able to write to the file, then you have to avoid exclusive reads. Instead, you have to try to detect an inconsistent read, for example by checking the last-modified timestamp when you start reading and again when you finish. If the timestamps are the same, you are good to go and have a consistent read.
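And a minimal sketch of the timestamp check, again with a placeholder path; how reliable it is depends on the file system's timestamp resolution:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;

public class ConsistentRead {

    // Re-read the file until the modification time is the same before and after the read,
    // which suggests no writer touched it in between.
    static byte[] readConsistently(Path file) throws Exception {
        while (true) {
            FileTime before = Files.getLastModifiedTime(file);
            byte[] content = Files.readAllBytes(file);
            FileTime after = Files.getLastModifiedTime(file);
            if (before.equals(after)) {
                return content;
            }
            Thread.sleep(50);   // the writer was active; back off briefly and retry
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] data = readConsistently(Paths.get("shared.txt"));
        System.out.println(data.length + " bytes read consistently");
    }
}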
Yes, you need to worry about this.
No, your reads shouldn't "fail" AFAIK, unless the file is momentarily locked, in which case you can catch the exception and try again after a brief pause. You might certainly, though, get more or less data than you expected.
(If you post code we might be able to comment more accurately on what'll happen.)
In an application I'm working on, I need a write-behind data log. That is, the application accumulates data in memory, and can hold all the data in memory. It must, however, persist the data, tolerate reasonable faults, and allow for backup.
Obviously, I could write to a SQL database; Derby springs to mind for easy embedding. I'm not tremendously fond of dealing with a SQL API (JDBC, however lipsticked), and I don't need any queries, indices, or other decoration. The records go out, and on restart, I need to read them all back.
Are there any other suitable alternatives?
Try using just a simple log file.
As data comes in, store it in memory and write (append) it to a file. A write() followed by an fsync() will guarantee (on most systems; read your system and filesystem docs carefully) that the data is written to persistent storage (disk). These are the same mechanisms any database engine uses to get data into persistent storage.
On restart, reload the log. Occasionally, trim the front of the log file so data usage doesn't grow infinitely. Or, model the log file as a circular buffer the same size as what you can hold in memory.
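A minimal sketch of that pattern, assuming newline-delimited string records and a caller-supplied log file: append, fsync, and replay on startup (trimming/rotation is left out):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SimpleWriteBehindLog implements AutoCloseable {

    private final FileOutputStream out;

    public SimpleWriteBehindLog(File logFile) throws IOException {
        this.out = new FileOutputStream(logFile, true);   // open in append mode
    }

    // Append one newline-delimited record and force it to the device before returning.
    public void append(String record) throws IOException {
        out.write((record + "\n").getBytes(StandardCharsets.UTF_8));
        out.getFD().sync();                               // write() + fsync(), as described above
    }

    // Replay the whole log on restart.
    public static List<String> replay(File logFile) throws IOException {
        List<String> records = new ArrayList<>();
        if (!logFile.exists()) {
            return records;
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream(logFile), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                records.add(line);
            }
        }
        return records;
    }

    @Override
    public void close() throws IOException {
        out.close();
    }
}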
Have you looked at (now Oracle) Berkeley DB for Java? The "Direct Persistence Layer" is actually quite simple to use. Docs here for DPL.
It has different options for backups and comes with a few utilities. It runs embedded.
(Licensing: a form of the BSD License, I believe.)