Briefing: I'm developing a system that must grant access to all the files of a project. It has to provide an interface to open, upload, and modify those files. I decided to store all of the files in a zip archive, and to improve response time I decided not to unzip and later rezip the whole contents; instead I modify the zip in place using Java's Zip FileSystem Provider, but I'm facing many troubles because of the lack of documentation. When the user needs a specific file, I decompress only that file so the user can work on it. I monitor the file for changes, and if I detect any change, I upload (replace) the file into the zip.
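For reference, the basic replace-one-entry operation with the Zip FileSystem Provider looks roughly like this (a minimal sketch; the archive name project.zip and the entry notes.txt are made up, and the "create" option just makes the sketch self-contained):

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.HashMap;
import java.util.Map;

public class ZipUpdate {
    public static void main(String[] args) throws Exception {
        Path zip = Paths.get("project.zip");        // hypothetical archive name
        Map<String, String> env = new HashMap<>();
        env.put("create", "true");                  // create the archive if it doesn't exist
        URI uri = URI.create("jar:" + zip.toUri());

        try (FileSystem zfs = FileSystems.newFileSystem(uri, env)) {
            // Replace (or add) one entry without rewriting the rest by hand.
            Files.write(zfs.getPath("/notes.txt"),
                        "edited contents".getBytes(StandardCharsets.UTF_8));
        } // the zip on disk is only rewritten here, when the FileSystem closes

        // Reopen to confirm the change was persisted.
        try (FileSystem zfs = FileSystems.newFileSystem(uri, new HashMap<String, String>())) {
            System.out.println(new String(
                    Files.readAllBytes(zfs.getPath("/notes.txt")), StandardCharsets.UTF_8));
        }
    }
}
```

The try-with-resources boundary is exactly where changes reach disk, which is why the commit problem described below exists: the only checkpoint the provider offers is close-and-reopen.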
The problem is:
Since I'm saving files into the archive on background threads (to prevent the GUI from freezing), the user can open other files and modify them even while another file is being saved. I want the archive to be as up to date as possible, to prevent loss of information in case of a blackout, but there is no method like FileSystem.commit() or FileSystem.flush(), so the changes reach the archive only when I close the file system. Opening another file system takes too long, which leaves my system vulnerable to data loss for the whole time the new file system is being initialized... any idea of how to commit changes, or how to always have a way to save?
Opening two file systems (one as a backup to handle operations while the other is being instantiated) does not work either, because closing a zip file system briefly renames the archive, and during that window the other instance may try to be created and fail because it cannot find the file under its expected name...
Greetings...
The title may not be detailed enough.
I have an application that handles app installation (it creates backup files, and copies and deletes files) - I call it AppUpdater. I also have another application that tracks a certain directory with the java.nio.file.WatchService API - let's call it DirWatcher. I have used this example: https://kodejava.org/how-to-monitor-file-or-directory-changes/
What I really want is to be able to recognize whether a file in that location was updated by the AppUpdater or, for example, manually by a user. Is there any way to register/log some information while modifying a file with AppUpdater, so that I will know the update came from AppUpdater?
Thanks
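One approach (a sketch under assumptions: the log file name, its line format, and SHA-256 as the fingerprint are all made up for illustration): have AppUpdater record a hash of every file it writes; when DirWatcher's WatchService reports a change, recompute the hash and check whether it matches something AppUpdater logged. A mismatch means the change came from somewhere else, e.g. a manual edit.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.security.MessageDigest;

public class UpdateLog {
    static final Path LOG = Paths.get("appupdater.log"); // assumed location/format

    static String sha256(Path file) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Called by AppUpdater right after it modifies a file.
    static void record(Path file) throws Exception {
        Files.write(LOG,
                (file.getFileName() + " " + sha256(file) + "\n").getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Called by DirWatcher when WatchService reports a change.
    static boolean cameFromAppUpdater(Path file) throws Exception {
        if (!Files.exists(LOG)) return false;
        String needle = file.getFileName() + " " + sha256(file);
        return Files.readAllLines(LOG).contains(needle);
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("demo", ".txt");
        Files.write(f, "v2".getBytes(StandardCharsets.UTF_8));
        record(f);                                   // AppUpdater side
        System.out.println(cameFromAppUpdater(f));   // DirWatcher side: true
        Files.write(f, "manual edit".getBytes(StandardCharsets.UTF_8));
        System.out.println(cameFromAppUpdater(f));   // false: hash no longer matches
        Files.delete(f);
        Files.deleteIfExists(LOG);
    }
}
```

The same idea works with timestamps or a simple "files I just touched" set held in shared state, but a content hash also survives the case where the user edits a file back and forth while AppUpdater is running.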
I have an app that reads words from CSV text files. Since they usually do not change, I have placed them inside a .jar file and read them using a .getResourceAsStream call. I really like this approach since I do not have to place a bunch of files on a user's computer - I just have one .jar file.
The problem is that I want to allow an "admin" to add or delete words within the application and then send the new version of the app to other users. This would happen very rarely (99.9% read operations, 0.1% writes). However, I found out that it is not possible to write to text files inside the .jar file. Is there any solution appropriate for what I want? If so, please explain it in detail, as I'm still new to Java.
It is not possible because you can't change the contents of a jar that is currently in use by a JVM.
Better to choose an alternate solution, such as keeping the text file in the same folder as your jar file.
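A common pattern for that alternate solution: keep the bundled resource as a read-only default, and prefer an editable copy next to the jar when one exists. A minimal sketch (the file name words.csv and its location in the working directory are assumptions):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;

public class WordSource {
    // Prefer an editable words.csv on disk; fall back to the copy inside the jar.
    static List<String> loadWords() throws IOException {
        Path external = Paths.get("words.csv");      // hypothetical external file
        if (Files.exists(external)) {
            return Files.readAllLines(external, StandardCharsets.UTF_8);
        }
        try (InputStream in = WordSource.class.getResourceAsStream("/words.csv");
             BufferedReader r = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
            return r.lines().collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate the admin having written an external word list.
        Files.write(Paths.get("words.csv"), List.of("alpha", "beta"));
        System.out.println(loadWords()); // [alpha, beta]
        Files.delete(Paths.get("words.csv"));
    }
}
```

The admin edits the external file; everyone else ships with only the jar and never sees a second file unless one is deliberately provided.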
Is there any way in Java to write out to a temporary file securely?
As far as I can tell, the only way to create a temporary file (createTempFile) doesn't actually open it at the same time, so there's a race condition between file creation and file open. Am I missing something? I couldn't find the C source code behind createFileExclusively(String) in UnixFileSystem.java, but I doubt it can really do anything, since the file open occurs in the Java code after the temp file is created (unless it tries to do something with file locks?).
The problem
Between when the temporary file is created and when you open it, a malicious attacker could unlink that temporary file and put malicious stuff there. For example, an attacker could create a named pipe to read sensitive data. Or, similarly, if you eventually copy the file by reading it, then the named pipe could just ignore everything written and supply malicious content to be read.
I remember reading of numerous examples of temporary file attacks in the past 10+ years that exploit the race condition between when the name appears in the namespace and when the file is actually opened.
Hopefully a mitigating factor is that Java sets the umask correctly, so a less-privileged user can't read/write the file, and typically the /tmp directory restricts permissions properly so that you can't perform an unlink attack.
Of course, if you pass a custom directory for the temporary file that is owned by a less-privileged user who has been compromised, that user could do an unlink attack against you. Hell, with inotify, exploiting the race condition is probably even easier than a brute-force loop that does a directory listing.
http://kurt.seifried.org/2012/03/14/creating-temporary-files-securely/
Java
use java.io.File.createTempFile() – some interesting info at http://www.veracode.com/blog/2009/01/how-boring-flaws-become-interesting/
for directories there is a helpful posting at How to create a temporary directory/folder in Java?
Java 7
for files use java.nio.file.Files.createTempFile()
for directories use java.nio.file.Files.createTempDirectory()
http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html
Since Java 7 we have OpenOption.
An object that configures how to open or create a file.
Objects of this type are used by methods such as newOutputStream, newByteChannel, FileChannel.open, and AsynchronousFileChannel.open when opening or creating a file.
Of particular interest is StandardOpenOption.CREATE_NEW.
Create a new file, failing if the file already exists. The check for the existence of the file and the creation of the file if it does not exist is atomic with respect to other file system operations.
So, you can do something like this:
FileChannel mkstemp() throws IOException {
    // Reserve a unique name, then delete it and re-create it atomically:
    // with CREATE_NEW the open fails if anyone (e.g. an attacker) re-created
    // the path in between, instead of silently opening their file.
    Path path = Files.createTempFile(null, null);
    Files.delete(path);
    return FileChannel.open(path, WRITE, CREATE_NEW);
}
(This assumes imports of java.io.IOException, java.nio.channels.FileChannel, java.nio.file.Files, java.nio.file.Path, and static imports of StandardOpenOption.WRITE and CREATE_NEW.)
Implementing the same retry-on-collision template behaviour is left as an exercise for the reader.
Keep in mind that on many systems, just because a file doesn't have a name doesn't at all mean it's inaccessible. For example, on Linux open file descriptors are available in /proc/<pid>/fd/<fdno>. So you should make sure that your use of temporary files is secure even if someone knows / has a reference to the open file.
You might get a more useful answer if you specify exactly what classes of attacks you are trying to prevent.
Secure against other ordinary userids? Yes, on any properly functioning multi-user system.
Secure against your own userid or the superuser? No.
I am polling the file system for new files, which are uploaded by someone through a web interface.
Now I have to process every new file, but before that I want to ensure that the file I am processing is complete (I mean to say it has been completely transferred through the web interface).
How do I verify that a file is completely uploaded before processing it?
Renaming a file is an atomic action in most (if not all) filesystems. You can make use of this by uploading the file under a recognizable temporary name and renaming it as soon as the upload is complete.
This way you will "see" only those files that have been uploaded completely and are safe for processing.
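A sketch of the upload side in Java (the file names and the *.part convention are made up; the poller would simply ignore anything ending in .part):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AtomicPublish {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("uploads");
        Path partial = dir.resolve("report.csv.part");   // hypothetical names

        // Simulate the upload writing into the temporary name.
        Files.write(partial, "row1\nrow2\n".getBytes(StandardCharsets.UTF_8));

        // Publish atomically: the poller never sees a half-written report.csv.
        Files.move(partial, dir.resolve("report.csv"),
                   StandardCopyOption.ATOMIC_MOVE);

        System.out.println(Files.exists(dir.resolve("report.csv"))); // true
    }
}
```

ATOMIC_MOVE throws AtomicMoveNotSupportedException if the rename can't be done atomically (e.g. across filesystems), which is exactly the failure you want surfaced rather than hidden.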
rsp's answer is very good. If, by any chance, it does not work for you, and if your polling code is running within a process different from the process of the web server which is saving the file, you might want to try the following:
Usually, when a file is being saved, the sharing options are "allow anyone to read" and "allow no-one to write" (exclusive write). Therefore, you can attempt to open the file with exclusive write access as well: if this fails, you know that the web server is still holding the file open and writing to it. If it succeeds, you know the web server is done. Of course, be sure to test this, because I cannot guarantee that this is precisely how the web server chooses to lock the file.
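In Java, that probe can be sketched with FileChannel.tryLock (a sketch, not a guarantee: file locks are advisory on most Unix systems and mandatory on Windows, so whether this detects the server's writer depends on the platform and on how the server opens the file):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.*;

public class WriteProbe {
    // Returns true if we could obtain an exclusive lock on the whole file,
    // i.e. no other lock holder appears to be writing it right now.
    static boolean looksComplete(Path file) {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE);
             FileLock lock = ch.tryLock()) {          // null if another process holds it
            return lock != null;
        } catch (IOException | OverlappingFileLockException e) {
            return false;                             // treat any failure as "still busy"
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("upload", ".bin");
        System.out.println(looksComplete(f)); // true: nobody else has it locked
        Files.delete(f);
    }
}
```

If the probe reports "busy", just leave the file for the next polling pass.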
I'm unsure of the best solution for this but this is what I've done.
I'm using PHP to look into a directory that contains zip files.
These zip files contain text files that are to be loaded into an Oracle database through SQL*Loader (sqlldr).
I want to be able to start more than one PHP process via the command line to load these zip files into the db.
If other 'php loader' processes are running, they shouldn't overlap and try to load the same zip file. I know I could start one process and let it process each zip file, but I'd rather start a new process for incoming zip files so I can load concurrently.
Right now, I've created a class that will 'lock' a zip file, a directory, or a generic text file by creating a file called 'filename.ext.lock'. Other processes that start up will check whether a file has been 'locked' in this way; if it has, they will skip that file and move on to another file for processing.
I've made a class that uses a directory and creates 'process id' files so that each PHP process has an id it can use for logging purposes and for identifying which PHP process has locked the file.
I'm on a windows machine and it isn't in the plan to make this an ubuntu machine, for those of you that might suggest pcntl.
What other solutions do you see? I know that this isn't truly synchronized, because a lock file might be about to be created when a context switch occurs, and then another PHP process 'locks' the file before the first one can create the lock file.
Can you please provide me with some ideas about how I can make this solution better? A java implementation? Erlang?
Also forgot to mention, the PHP process connects to the DB to fetch metadata about the files that it is going to load via SqlLoader. I don't think that is important but just in case.
Quick note: I'm aware that sqlldr locks the table it is loading, and that if multiple processes try to load into the same table it will become a bottleneck. To alleviate this problem I plan on making a directory that will contain files named after the tables currently being loaded. After a table has finished loading, the respective file will be deleted, and other processes will check whether it is safe to load that table.
Extra information : I'm using 7zip to unzip the files and php's exec to perform these commands.
I'm using exec to call sqlldr as well.
The zip files can be huge (1 GB) and loading one table can take up to an hour.
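On the race described above: an atomic create-if-absent closes it, because the existence check and the creation happen as one filesystem operation, so two loaders can never both "win" the same lock file. A Java sketch, since the question mentions being open to a Java implementation (PHP's fopen with mode 'x' gives the same create-exclusive behaviour):

```java
import java.io.IOException;
import java.nio.file.*;

public class ZipClaim {
    // Atomically claim a zip for processing by creating its .lock file.
    // Files.createFile throws FileAlreadyExistsException if the lock exists,
    // and the check-and-create pair is atomic, so only one loader can claim.
    static boolean claim(Path zip) {
        try {
            Files.createFile(Paths.get(zip + ".lock"));
            return true;
        } catch (FileAlreadyExistsException e) {
            return false;            // another loader got there first
        } catch (IOException e) {
            return false;            // treat other I/O trouble as "don't touch it"
        }
    }

    public static void main(String[] args) throws IOException {
        Path zip = Files.createTempFile("batch", ".zip");
        System.out.println(claim(zip)); // true: first loader wins
        System.out.println(claim(zip)); // false: second loader skips the file
        Files.delete(zip);
        Files.delete(Paths.get(zip + ".lock"));
    }
}
```

Releasing the claim is just deleting the .lock file once the load finishes (or in a crash-recovery sweep that removes stale locks older than some threshold).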
Rather than creating a .lock file, you can just rename the zip file when a loader starts to process it, e.g. to "foobar.zip.bar"; a rename should be faster than creating a new file on disk.
However, this doesn't ensure that your next loader starts only after the file rename; you should at least have some control over launching new loaders, in another script.
Also, just a side suggestion: it's possible to emulate threading in PHP using cURL; you might want to try it out.
https://web.archive.org/web/20091014034235/http://www.ibuildings.co.uk/blog/archives/811-Multithreading-in-PHP-with-CURL.html
I don't know if I understand correctly, but I have a suggestion: give the lock files a priority prefix.
Example:
10-script.php started
20-script.php started (enters a loop waiting for 10-foobar.ext.lock)
while 10-foobar.ext.lock has not been released by 10-script.php, it keeps waiting
30-script.php will have to wait for both 10-foobar.ext.lock and 20-example.ext.lock
I tried to find pcntl_fork under Cygwin, but found nothing that works.