I have a strange problem: when I try to delete a file created by my application, it gets deleted but is immediately replaced by a junk file of the exact same size. The same thing happens when I try to delete the file manually. Can someone please help me out with this? It beats me.
Are you perhaps using an NFS file system on Linux? In some cases NFS leaves tombstone files (the .nfsXXXX files) behind deleted files.
(Unless you specify your operating system and post some of your code, this is pure guesswork.)
Since deleting the same file manually causes the same behaviour, it's reasonable to assume that this is not an issue with your code specifically.
Some filesystems (FUSE on Linux comes to mind, as well as some network filesystems) present this behaviour when deleting files that are in use by another process.
Related
I can't find anywhere that explains this, but I might have just missed it. Anyway, if you use java.io.File.delete() to delete a file, what happens to the file? I only need an answer for how this works on Windows. Does the file get sent to the Recycle Bin, to a separate designated location for files deleted by Java, or is it completely lost? Again, sorry if this is a duplicate; I couldn't find an answer anywhere. Thanks
Using the Recycle Bin is a bit more complicated than just calling File.delete(). To keep Java working the same way on every platform, File.delete() simply deletes the file for good.
Is it possible with Java to delete to the Recycle Bin?
If you don't want to delete the files for good, why not move "deleted" files into your app's own designated folder and clear that folder periodically?
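As a rough sketch of that idea: on Java 9 or newer, java.awt.Desktop can also move a file to the platform Recycle Bin / Trash where the desktop supports it, with the app-managed folder as a portable fallback. The method name and trash directory below are just illustrative.

import java.awt.Desktop;
import java.awt.GraphicsEnvironment;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeDelete {

    // Try the platform Recycle Bin / Trash first (Java 9+), then fall back to
    // moving the file into an application-managed "trash" directory.
    static void safeDelete(File file, Path appTrashDir) throws IOException {
        if (!GraphicsEnvironment.isHeadless()
                && Desktop.isDesktopSupported()
                && Desktop.getDesktop().isSupported(Desktop.Action.MOVE_TO_TRASH)
                && Desktop.getDesktop().moveToTrash(file)) {
            return; // the OS trash took it
        }
        Files.createDirectories(appTrashDir);
        Files.move(file.toPath(), appTrashDir.resolve(file.getName()),
                StandardCopyOption.REPLACE_EXISTING);
    }
}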
I am trying to programmatically purge log files from a running(!) system consisting of several Java and non-Java servers. I use Java's File.delete() operation and it usually works fine. I am also perfectly fine with log files that are currently in use not being deleted, so I just log a warning whenever File.delete() returns false.
However, log files that are currently still being written to by non-Java applications (Postgres, Apache HTTPD etc.; Java applications might also be affected, but I haven't noticed it yet, and they all use the same logging framework anyway, which seems to be fine) are not actually deleted (which is what I expected), yet File.delete() returns true for them.
Not only do these files still exist on the file system (Windows Explorer and "dir" still show them), they are also inaccessible afterwards: when I try to open them with a text editor I get "access denied" or similar error messages, when I try to copy them with Explorer it also claims that I do not have permission, and when I check their "properties" in Explorer it tells me "You do not have permission to view or edit this object's permissions".
Just to be clear: before I ran the File.delete() operation, I could access or delete these files without any problems, the delete operation "breaks" them. Once I stop the application, the file then disappears, and on restart, the application creates it from scratch and everything is back to normal.
The problem is that if I do NOT restart the application after the log file purge operation, it keeps logging into nirvana.
This behaviour reminds me a bit of file deletion on Linux: if you delete a file that is still held open by an application, it disappears from the file system, but the application, still holding a file handle, will happily continue writing to that file, although you will never be able to access it afterwards. The only difference is that here the files remain visible in the file system, yet are otherwise inaccessible.
I should mention that both my Java program and the applications themselves are running as the "system" user.
I also tried Files.delete(), which allegedly throws an IOException indicating the error... but it seems there is no error.
To work around the problem, I tried to check whether the files are currently locked, using the method described here: https://stackoverflow.com/a/1390669/5837050. However, this only works for some of the files, not for all of them.
I basically need a reliable way (at least for Windows, if it worked also for Linux, that would be great) to determine if a file is still being used by some program, so I could just not delete it.
Any hints appreciated.
I haven't reproduced it, but it looks like expected OS behaviour. Normally different applications run as different users which own their respective log files, but I understand that you want a master purge job in Java which finds the log files that are not in use and deletes them (running with sufficient permissions, of course).
So, considering that the OS behaviour is not going to change, I would suggest configuring your logs with a "rolling file appender" policy and then only touching the files that match that policy.
Check the rolling policies for logback to get an idea:
http://logback.qos.ch/manual/appenders.html#onRollingPolicies
For example, if your appender's rollover policy is "more than one day or more than 1 GB", then only delete files whose last-modified date is older than one day or whose size exceeds 1 GB. With this rule you can be reasonably sure that you only delete log files that are no longer in use.
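The sketch below is only an illustration of that rule, assuming the logs live in a single directory and end in ".log" (both assumptions), not a drop-in solution:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.stream.Stream;

public class LogPurger {

    // Delete *.log files that have not been modified for more than a day,
    // matching the rolling policy described above.
    public static void purge(Path logDir) throws IOException {
        FileTime cutoff = FileTime.from(Instant.now().minus(1, ChronoUnit.DAYS));
        try (Stream<Path> files = Files.list(logDir)) {
            files.filter(p -> p.getFileName().toString().endsWith(".log"))
                 .filter(p -> {
                     try {
                         return Files.getLastModifiedTime(p).compareTo(cutoff) < 0;
                     } catch (IOException e) {
                         return false; // if we can't read the timestamp, leave the file alone
                     }
                 })
                 .forEach(p -> {
                     try {
                         Files.delete(p);
                     } catch (IOException e) {
                         System.err.println("Could not delete " + p + ": " + e.getMessage());
                     }
                 });
        }
    }
}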
Note that with a proper rolling policy you may not even need your purge method; look at this configuration example:
<!-- keep 30 days' worth of history capped at 3GB total size -->
<maxHistory>30</maxHistory>
<totalSizeCap>3GB</totalSizeCap>
I hope this helps a bit!
I was using the latest release of Android Studio yesterday when my PC decided to just turn off (turns out PSUs don't last forever). I took the HDD out until I can find a replacement PSU and put it into another PC. Upon opening my project, I can't open MainActivity in Android Studio anymore. I opened the Java file with a text editor and it comes up with about 8000 lines of zeros.
Does anyone know how to fix this, as I put a lot of work and time into that file?
Your file is corrupted: the disk sectors where the file is stored are effectively damaged or unallocated, because the file was still sitting in the disk cache / being written when the power went out.
Resetting the file from version control
If you use a version control system such as git, mercurial or cvs, you can try checking the file out as it was at your latest successful check-in. If that fails, you can try cloning or checking out the online copy of your repository and see whether it contains the correct version too.
Recovering the file from backups
Everyone SHOULD have a proper backup system; making backups is relatively easy, and even a git repository cloned to multiple computers can already be a decent backup.
Even Jeff Atwood has made this mistake in the past. (yes, he is user 1)
Recovering the file using disk checking applications
Sometimes, you may have more luck using chkdsk on Windows or fsck on Linux.
This may happen because you worked on the file, and the file system decided to move the file to another sector for various reasons. Then, while it was moving the file, the hard disk crashed. This left the file pointer referring to the new location, while the file is still safe at the old location.
On Linux and Mac, files recovered by this technique are stored in the /lost+found directory, while on Windows chkdsk puts recovered fragments into a "FOUND.000" (or similar) folder in the root of the affected drive.
Decompiling your old application
Sometimes the above techniques don't deliver anything resembling your original code. In that case we need to fall back on our ugliest method: decompiling.
There are various Java and Android decompilers you can use; they are easy to find on Google by searching for "java decompiler". I am not going to name them here, to prevent this answer from becoming opinion-based on which decompiler is best.
I have a piece of Java code which reads a few files and keeps them loaded in memory for some time. The file handles are kept open after reading. My problem is that I want to prevent the user from deleting these files with the "DEL" key or the rm command.
I could achieve this on Windows by keeping the file handles open, but on Unix rm does not honour the lock on the files. I even tried FileChannel.lock() but it did not help either.
Any suggestions are appreciated.
As long as you keep the handle open, they can remove the file from the directory, but they can't destroy the file itself, i.e. its contents aren't released until you close the file or your process dies.
I even tried FileChannel.lock() but it did not help either.
That is because it's the directory, not the file, that is being altered, e.g. if they have write access to the file but not to the directory, they cannot delete it.
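To illustrate the point, here is a minimal sketch (the file name is made up): on Linux the data stays readable through an already-open channel even after the file has been unlinked, while on Windows the delete itself would normally fail as long as the handle is open.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class HandleSurvivesUnlink {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get("data.log");      // made-up file name
        Files.write(path, "hello\n".getBytes());

        // Open a handle before the file is removed.
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            // On Linux this only unlinks the directory entry;
            // on Windows it would typically fail while the handle is open.
            Files.delete(path);

            ByteBuffer buf = ByteBuffer.allocate(16);
            ch.read(buf, 0);                    // the open handle still sees the old data
            System.out.println(new String(buf.array(), 0, buf.position()));
        }
    }
}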
You could also look into chattr which can be used to lock the file.
chattr +i filename
Should render the file undeletable. You can then make it deletable again via...
chattr -i filename
There is no pure Java solution to this. In fact, I don't think there is a solution at all that doesn't have potentially nasty consequences. The fundamental problem is that UNIX / Linux doesn't have a way to temporarily place a mandatory lock on a file. (The Linux syscall for locking a file is flock, but flock-style locks are advisory: an application that doesn't bother to flock a file won't be affected by other applications' locks on it.)
The best you can do is to use chattr +i to set the "immutable" attribute on the file. Unfortunately, that has other effects:
The immutable file cannot be written to or linked to either.
If your application crashes without unsetting the attribute, the user is left with a file that he / she mysteriously cannot change or delete. Not even with sudo or su.
I am developing a Java desktop application. This app needs a configuration in order to start. For this I want to ship a defaultConfig.properties or defaultConfig.xml file with the application, so that if the user doesn't select any configuration, the application starts with the help of the defaultConfig file.
But I am afraid my application will crash if the user accidentally edits the defaultConfig file. So is there any mechanism through which I can check, before the application starts, whether the config file has been changed or not?
How do other applications (out in the market) deal with this type of situation, in which the application depends on a configuration file?
If the user edited the config file accidentally or intentionally, the application won't run in the future unless they re-install the application.
I agree with David in that using a MD5 hash is a good and simple way to accomplish what you want.
Basically you would use the MD5 hashing code provided by the JDK (or from somewhere else) to generate a hash code from the default data in Config.xml, and save that hash code to a file (or hard-code it into the function that does the checking). Then, each time your application starts, load the saved hash code, load the Config.xml file and generate a hash code from it again. Compare the saved hash code to the one generated from the loaded config file: if they are the same, the data has not changed; if they differ, it has been modified.
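A minimal sketch of that check (the file names are assumptions; the stored hash is kept here as the raw 16 MD5 bytes, though hex would work just as well):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ConfigIntegrityCheck {

    // Hash the config file with MD5.
    static byte[] md5(Path file) throws IOException, NoSuchAlgorithmException {
        return MessageDigest.getInstance("MD5").digest(Files.readAllBytes(file));
    }

    // Compare the current hash of Config.xml against the hash saved earlier.
    static boolean isUnchanged(Path configFile, Path storedHashFile)
            throws IOException, NoSuchAlgorithmException {
        byte[] expected = Files.readAllBytes(storedHashFile);
        return MessageDigest.isEqual(expected, md5(configFile));
    }
}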
However, as others are suggesting, if the file should not be editable by the user, consider storing the configuration in a form the user cannot easily edit. The easiest thing I can think of is to wrap the OutputStream you use to write the Config.xml file in a GZIPOutputStream. Not only does that make it difficult for the user to edit the configuration file, it also makes Config.xml take up less space.
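As a rough sketch of that idea, assuming the configuration is held in a java.util.Properties object (that part is an assumption; the GZIP wrapping itself is the point):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzippedConfig {

    // Write the configuration through a GZIPOutputStream so the file on disk
    // is not trivially hand-editable (and is smaller).
    static void save(Properties props, File file) throws IOException {
        try (OutputStream out = new GZIPOutputStream(new FileOutputStream(file))) {
            props.storeToXML(out, "default configuration");
        }
    }

    // Read it back through a GZIPInputStream.
    static Properties load(File file) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new GZIPInputStream(new FileInputStream(file))) {
            props.loadFromXML(in);
        }
        return props;
    }
}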
I am not at all sure that this is a good approach, but if you want to go ahead with it you can compute a hash of the configuration file (say, MD5), then recompute and compare it every time the app starts.
Come to think of it, if the user is forbidden to edit the file, why expose it at all? Stick it in a jar file, for example, far away from the user's eyes.
If the default configuration is not supposed to be edited, perhaps you don't really want to store it in a file in the first place? Could you not store the default values of the configuration in the code directly?
Remove write permissions for the file. This way the user gets a warning before trying to change the file.
Add a hash or checksum and verify it before loading the file.
For added security, you can replace the simple hash with a cryptographic signature.
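A sketch of that variant (the key handling and algorithm choice are assumptions): the config is signed offline with a private key and the application ships only the public key, so an edited file no longer verifies.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.GeneralSecurityException;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

public class ConfigSignatureCheck {

    // Verify a detached signature of the config file against a public key
    // embedded in the application as a Base64 string.
    static boolean verify(Path configFile, Path signatureFile, String base64PublicKey)
            throws IOException, GeneralSecurityException {
        byte[] keyBytes = Base64.getDecoder().decode(base64PublicKey);
        PublicKey publicKey = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(keyBytes));

        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(publicKey);
        sig.update(Files.readAllBytes(configFile));
        return sig.verify(Files.readAllBytes(signatureFile));
    }
}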
From what I have found online so far, there seem to be different approaches code-wise, but none appears to be a 100 percent fix. For example:
The DirectoryWatcher implements AbstractResourceWatcher to monitor a specified directory.
Code found here twit88.com develop-a-java-file-watcher
One problem encountered was: if I copy a large file from a remote network source into the local directory being monitored, that file will already show up in the directory listing before the network copy has completed. If I try to do almost anything non-trivial to the file at that moment, like moving it to another directory or opening it for writing, an exception will be thrown, because the file is not really completely there yet and the OS still has a write lock on it.
Found on the same site, further below.
How the program works: it accepts a ResourceListener implementation, in this case a FileListener. If a change is detected, an onAdd, onChange, or onDelete event is fired, passing the affected file to the listener.
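For illustration only, the listener side as described could look roughly like this; the names are guesses and the real classes from twit88.com may differ.

// Hypothetical shapes matching the description above.
interface FileListener {
    void onAdd(java.io.File file);
    void onChange(java.io.File file);
    void onDelete(java.io.File file);
}

// A trivial listener that just logs each event it receives.
class LoggingFileListener implements FileListener {
    @Override public void onAdd(java.io.File file)    { System.out.println("added: "   + file); }
    @Override public void onChange(java.io.File file) { System.out.println("changed: " + file); }
    @Override public void onDelete(java.io.File file) { System.out.println("deleted: " + file); }
}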
I will keep searching for more solutions.