Hi, I'm trying to write to a non-existent file:
public static void main(String[] args) throws IOException {
    Path newFile = Paths.get("output.txt");
    Files.write(newFile, "Sample text".getBytes());
}
and everything is OK. But if I add the option
Files.write(newFile, "Sample text".getBytes(), StandardOpenOption.DELETE_ON_CLOSE);
an error appears:
Exception in thread "main" java.nio.file.NoSuchFileException: problem.txt
at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
So to make it work I have to add the option
StandardOpenOption.CREATE_NEW
Why does the second attempt, with StandardOpenOption.DELETE_ON_CLOSE, not work, while the first, without any options, works and creates the file?
I am using Java version 1.7.0_45-b18.
From the documentation for Files.write:
If no options are present then this method works as if the CREATE, TRUNCATE_EXISTING, and WRITE options are present
So, once you start specifying OpenOptions, you have to specify the options you need from those three as well (or as you already noted, CREATE_NEW instead of CREATE).
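For instance, this minimal sketch spells out the implied defaults alongside DELETE_ON_CLOSE (the file name is chosen for illustration); the missing file is created, written, and then deleted when the internal channel closes:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import static java.nio.file.StandardOpenOption.*;

public class WriteOptions {
    public static void main(String[] args) throws IOException {
        Path newFile = Paths.get("output.txt");
        // Spell out the defaults the no-options overload implies,
        // then add DELETE_ON_CLOSE; the missing file is now created first.
        Files.write(newFile, "Sample text".getBytes(),
                CREATE, TRUNCATE_EXISTING, WRITE, DELETE_ON_CLOSE);
        // DELETE_ON_CLOSE removed the file when Files.write closed it.
        System.out.println(Files.exists(newFile)); // false
    }
}
```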
According to the documentation:
public static final StandardOpenOption DELETE_ON_CLOSE
Delete on close. When this option is present then the implementation makes a best effort attempt to delete the file when closed by the appropriate close method. If the close method is not invoked then a best effort attempt is made to delete the file when the Java virtual machine terminates (either normally, as defined by the Java Language Specification, or where possible, abnormally). This option is primarily intended for use with work files that are used solely by a single instance of the Java virtual machine. This option is not recommended for use when opening files that are open concurrently by other entities. Many of the details as to when and how the file is deleted are implementation specific and therefore not specified. In particular, an implementation may be unable to guarantee that it deletes the expected file when replaced by an attacker while the file is open. Consequently, security sensitive applications should take care when using this option.
So how is it supposed to delete a file that does not exist?
I'm working on a project which, in part, displays all the files in a directory in a JTable, including sub-directories. Users can double-click the sub-directories to update the table with that new directory's content. However, I've run into a problem.
My lists of files are generated with file.listFiles(), which pulls up everything: hidden files, locked files, OS files, the whole kit and caboodle, and I don't have access to all of them. For example, I don't have permission to read/write in "C:\Users\user\Cookies\" or "C:\ProgramData\ApplicationData\". That's ok though, this isn't a question about getting access to these. Instead, I don't want the program to display a directory it can't open. However, the directories I don't have access to and the directories I do are behaving almost exactly the same, which is making it very difficult to filter them out.
The only difference in behavior I've found is if I call listFiles() on a locked directory, it returns null.
Here's the block of code I'm using as a filter:
for (File file : folder.listFiles())
    if (!(file.isDirectory() && file.listFiles() == null))
        strings.add(file.getName());
Where 'folder' is the directory I'm looking inside and 'strings' is a list of names of the files in that directory. The idea is a file only gets loaded into the list if it's a file or directory I'm allowed to edit. The filtering aspect works, but there are some directories which contain hundreds of sub-directories, each of which contains hundreds more files, and since listFiles() is O(n), this isn't a feasible solution (list() isn't any better either).
However,
file.isHidden() returns false
canWrite()/canRead()/canExecute() return true
getPath() returns the same as getAbsolutePath() and getCanonicalPath()
createNewFile() returns false for everything, even directories I know are ok. Plus, that's a solution I'd really like to avoid even if that worked.
Is there some method or implementation I just don't know to help me see if this directory is accessible without needing to parse through all of its contents?
(I'm running Windows 7 Professional and I'm using Eclipse Mars 4.5.2, and all instances of File are java.io.File).
The problem you have is that you are dealing with File. By all accounts in 2016, and in fact since 2011 (when Java 7 came out), it has been superseded by JSR 203.
Now, what is JSR 203? It is a totally new API to deal with anything related to file systems and file system objects; and it extends the definition of a "file system" to include what you find on your local machine (the so-called "default filesystem" by the JDK) and other file systems which you may use.
Among the many advantages of this API is that it grants access to metadata which you could not access before; for instance, you specifically mention the case, in a comment, that you want to know which files Windows considers as "system files".
This is how you can do it:
// get the path
final Path path = Paths.get(...);
// get the attributes
final DosFileAttributes attrs = Files.readAttributes(path, DosFileAttributes.class);
// Is this file a "system file"?
final boolean isSystem = attrs.isSystem();
Now, what is Paths.get()? As mentioned previously, the API gives you access to more than one filesystem at a time; a class called FileSystems gives access to all file systems visible to the JDK (including creating new filesystems), and the default file system, which always exists, is given by FileSystems.getDefault().
A FileSystem instance also lets you obtain a Path using FileSystem#getPath.
Combine this and you get that those two are equivalent:
Paths.get(a, b, ...)
FileSystems.getDefault().getPath(a, b, ...)
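A quick sanity check of that equivalence (path elements chosen arbitrarily):

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;

public class EquivalentPaths {
    public static void main(String[] args) {
        Path a = Paths.get("some", "dir", "file.txt");
        // Paths.get is a convenience wrapper around the default filesystem
        Path b = FileSystems.getDefault().getPath("some", "dir", "file.txt");
        System.out.println(a.equals(b)); // true
    }
}
```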
About exceptions: File handles them very poorly. Just two examples:
File#createNewFile will return false if the file cannot be created;
File#listFiles will return null if the contents of the directory pointed by the File object cannot be read for whatever reason.
JSR 203 has none of these drawbacks, and does even more. Let us take the two equivalent methods:
File#createNewFile becomes Files#createFile;
File#listFiles becomes either of Files#newDirectoryStream (or derivatives; see javadoc) or (since Java 8) Files#list.
These methods, and others, have a fundamental difference in behaviour: in the event of a failure, they will throw an exception.
And what is more, you can differentiate what exception this is:
if it is a FileSystemException or derivative, the error is at the filesystem level (for instance, "access denied" is an AccessDeniedException);
if it is an IOException, then the problem is more fundamental.
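Applied to the original question, here is a minimal sketch of an accessibility check built on that behaviour; the helper name isListable is my own:

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ListableCheck {
    // Returns true only if the directory can actually be opened for listing.
    static boolean isListable(Path dir) {
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            return true;
        } catch (AccessDeniedException e) {
            return false; // filesystem-level: no permission
        } catch (IOException e) {
            return false; // not a directory, vanished, or a lower-level failure
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("demo");
        System.out.println(isListable(tmp));                   // true
        System.out.println(isListable(tmp.resolve("absent"))); // false
    }
}
```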
This answer cannot cover each and every use case of JSR 203; the API is vast and very complete, though not without flaws, but it is infinitely better than what File has to offer in any case.
I faced the very same problem with paths like C://users/myuser/cookies.
I already used JSR203, so the above answer kind of didn't help me.
In my case the important attribute of those files was the hidden one.
I ended up using FileSystemView, which excluded those files as I wanted.
File[] files = FileSystemView.getFileSystemView().getFiles(new File(strHomeDirectory), !showHidden);
I am facing this security flaw in my project in multiple places. I don't have any whitelist to check against at every occurrence of this flaw. I want to use an ESAPI call to perform a basic blacklist check on the file name. I have read that we can use the SafeFile object of ESAPI but cannot figure out how and where.
Below are a few options I came up with; please let me know which one will work out:
ESAPI.validator().getValidInput() or ESAPI.validator().getValidFileName()
Blacklists are a no-win scenario. This can only protect you against known threats. Any code scanning tool you use here will continue to report the vulnerability... because a blacklist is a vulnerability. See this note from OWASP:
This strategy, also known as "negative" or "blacklist" validation is a
weak alternative to positive validation. Essentially, if you don't
expect to see characters such as %3f or JavaScript or similar, reject
strings containing them. This is a dangerous strategy, because the set
of possible bad data is potentially infinite. Adopting this strategy
means that you will have to maintain the list of "known bad"
characters and patterns forever, and you will by definition have
incomplete protection.
Also, character encoding and OS makes this a problem too. Let's say we accept an upload of a *.docx file. Here's the different corner-cases to consider, and this would be for every application in your portfolio.
Is the accepting application running on a Linux platform or an NT platform? (File separators are \ in Windows and / in Linux.)
a. Spaces are also treated differently in file/directory paths across systems.
Does the application already account for URL-encoding?
Is the file being sent stored in a database or on the system itself?
Is the file you're receiving executable or not? For example, if I rename netcat.exe to foo.docx does your application actually check to see if the file being uploaded contains the magic numbers for an exe file?
I can go on. But I won't. I could write an encyclopedia.
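The magic-number idea from the *.docx example can be sketched like this; the helper name looksLikeExe and the sample bytes are mine:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class MagicNumberCheck {
    // Windows executables start with the bytes 'M' 'Z',
    // no matter what extension the uploaded file claims.
    static boolean looksLikeExe(InputStream in) throws IOException {
        byte[] header = new byte[2];
        int read = in.read(header);
        return read == 2 && header[0] == 'M' && header[1] == 'Z';
    }

    public static void main(String[] args) throws IOException {
        byte[] renamedExe = {'M', 'Z', 0x00, 0x01}; // netcat.exe posing as foo.docx
        byte[] plainText = "just a document".getBytes();
        System.out.println(looksLikeExe(new ByteArrayInputStream(renamedExe))); // true
        System.out.println(looksLikeExe(new ByteArrayInputStream(plainText)));  // false
    }
}
```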
If this is across multiple applications in your company's portfolio, it is your ethical duty to state this clearly, and then your company needs to come up with an app-by-app whitelist.
As far as ESAPI is concerned, you would use Validator.getValidInput() with a regex that was an OR of all the files you wanted to reject, i.e., in validation.properties you'd do something like: Validator.blackListsAreABadIdea=regex1|regex2|regex3|regex4
Note that the parsing penalty for blacklists is higher too... every input string will have to be run against EVERY regex in your blacklist, which as OWASP points out, can be infinite.
So again, the correct solution is to have every application team in your portfolio construct a whitelist for their application. If this is really impossible (and I doubt that) then you need to make sure that you've stated the risks cited here clearly to management and you refuse to proceed with the blacklist approach until you have written documentation that the company chooses to accept the risk. This will protect you from legal liability when the blacklist fails and you're taken to court.
[EDIT]
The method you're looking for was called HTTPUtilities.safeFileUpload(), listed here as acceptance criteria, but this was most likely never implemented due to the difficulties I posted above. Blacklists are extremely custom to the application. The best you'll get is the method HTTPUtilities.getFileUploads(), which uses a list defined in ESAPI.properties under the key HttpUtilities.ApprovedUploadExtensions
However, the default version needs to be customized as I doubt you want your users uploading .class files and dll to your system.
Also note: This solution is a whitelist and NOT a blacklist.
The following code snippet works to get past the issue CWE ID 73, if the directory path is static and just the filename is externally controlled :
// 'DIRECTORY_PATH' is the directory of the file
// 'filename' variable holds the name of the file
// 'myFile' variable holds a reference to the file object
File dir = new File(DIRECTORY_PATH);
// WildcardFileFilter comes from Apache Commons IO (org.apache.commons.io.filefilter)
FileFilter fileFilter = new WildcardFileFilter(filename);
File[] files = dir.listFiles(fileFilter);
File myFile = null;
// listFiles() returns null when the directory cannot be read
if (files != null && files.length == 1) {
    myFile = files[0];
}
Java's java.nio.file.Files.walkFileTree() executes the visitor's visitFile() method even if a file doesn't exist (a recently-deleted file).
FileUtils.forceDelete(certainFile);
Files.exists(certainFile.toPath()); // Returns false, as expected
MySimpleFileVisitor visitor = new MySimpleFileVisitor(); // Extends SimpleFileVisitor. All it does is override visitFile() so I can see that it visits the deleted file.
Files.walkFileTree(directory, visitor); // Calls visitor.visitFile on certainFile. Not expected!
Is this possible? I am using Windows, and the file is on a network drive.
Files.walkFileTree() calls FileTreeWalker.walk(), which calls Files.newDirectoryStream(). The only explanation I can think of is that Files.newDirectoryStream returns a DirectoryStream that includes the deleted file.
Yes, it is possible.
Let us assume that the Files.walk… methods all use DirectoryStreams to walk the file tree (which, at least as of Java 1.8.0_05, they in fact do) or an internal equivalent. The documentation for DirectoryStream says:
The iterator is weakly consistent. It is thread safe but does not freeze the directory while iterating, so it may (or may not) reflect updates to the directory that occur after the DirectoryStream is created.
Yes, it is possible. In my case, the following conditions had to be met to reproduce the failure:
The file of interest exists in a folder that is indexed by Windows.
The file's type has a Windows Property Handler associated with it.
Windows has time to start indexing the file before it is deleted.
The property handler takes a long time (a few minutes) to release its hold on the file.
I just discovered all of this information, which is why none of it is mentioned in the original question.
Suppose two (or more) concurrently-running Java processes need to check for the existence of a file, create it if it doesn't exist, and then potentially read from that file over the course of their runs. We want to protect ourselves against the possibility of multiple writer processes clobbering each other and/or reader processes reading an incomplete or inconsistent version of the file.
What we're currently doing to arbitrate this situation is to use Java NIO FileLocks. One process manages to acquire an exclusive lock on the file to be created using FileChannel.tryLock() and creates it, while the other concurrently-running processes fail to acquire a lock and fall back to using an in-memory version of the file for their runs.
Locking is causing various problems for us on our compute farm, however, so we're exploring alternatives. So my question to you is: is there a way to do this safely without using file locks?
Could, eg., the processes write to independent temporary files when they find a file doesn't exist, and then more or less "atomically" move the temp file(s) into place after they've been written? In this scenario, we might end up with multiple writer processes, but that would be ok provided that any processes reading from the file always read one version or another, and not a mix of two or more versions. However, I don't think all operating systems guarantee that if you have a file open for reading, you'll continue reading from the original version of the file even if it's overwritten mid-way through the read.
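The temp-file-then-move idea can be sketched with NIO's atomic move (whether ATOMIC_MOVE is supported depends on the file system; the paths here are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicPublish {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Path target = dir.resolve("data.txt");
        // Write the complete contents to a private temp file first...
        Path tmp = Files.createTempFile(dir, "data", ".part");
        Files.write(tmp, "complete contents".getBytes());
        // ...then move it into place in one step, so readers
        // never observe a partially written target.
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        System.out.println(new String(Files.readAllBytes(target)));
    }
}
```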
Any suggestions would be much appreciated!
Suppose two (or more) concurrently-running Java processes need to check for the existence of a file, create it if it doesn't exist, and then potentially read from that file over the course of their runs.
I don't quite understand the create and read part of the question. If you are looking to make sure that you have a unique file then you could use new File(...).createNewFile() and check to make sure that it returns true. To quote from the Javadocs:
Atomically creates a new, empty file named by this abstract pathname if
and only if a file with this name does not yet exist. The check for the
existence of the file and the creation of the file if it does not exist
are a single operation that is atomic with respect to all other
filesystem activities that might affect the file.
This would give you a unique file that only that process (or thread) would then "own". I'm not sure how you were planning on letting the writer know which file to write to however.
If you are talking about creating a unique file that you write to and then move into another directory to be consumed, then the above should work. You would need to create a unique name in the destination directory once you were done as well.
You could use something like the following:
private File getUniqueFile(File dir, String prefix) throws IOException {
    long suffix = System.currentTimeMillis();
    while (true) {
        File file = new File(dir, prefix + suffix);
        // try creating this file; if it returns true then the name is unique
        if (file.createNewFile()) {
            return file;
        }
        // someone already has that suffix, so ++ and try again
        suffix++;
    }
}
As an alternative, you could also create a unique filename using UUID.randomUUID() or something to generate a unique name.
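For instance (directory and prefix chosen arbitrarily):

```java
import java.io.File;
import java.io.IOException;
import java.util.UUID;

public class UniqueName {
    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        // A random UUID makes a name collision practically impossible,
        // so createNewFile() should succeed on the first attempt.
        File file = new File(dir, "work-" + UUID.randomUUID() + ".tmp");
        System.out.println(file.createNewFile()); // true
        file.delete(); // clean up
    }
}
```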
I've been trying to devise a file path and if possible a file name that is impossible for Java to create on Linux/MacOS/Windows using the following code:
File directory = new File(dir);
directory.mkdirs(); // should always fail and not affect an existing file/dir
File file = new File(dir, filename);
file.createNewFile(); // should always fail and not affect an existing file/dir
This kind of path will be used in unit tests to prove certain error conditions are being handled correctly. Assume the tests are being run as root (they aren't, but I want to focus on invalid paths rather than privileges). So far everything I've tried will fail on one platform (usually Windows) but not another (usually Linux).
Suggestions?
PS. I know about mock objects, PowerMock, etc. but really just want to get Java's as-is File class to fail to create the directory/file.
There are many reasons for a file name to be illegal on Linux (and probably similar ones on macOS), and entirely different ones on Windows. What do you want to check? If you hand an illegal name to the Java functions, they will fail. If that is what you want to catch, cook up a name that is illegal for each (the conditions aren't so simple). If you want to check whether your code catches this, I'd just defer the problem to the lower level: try to create; if it fails, complain.
If you want to know what names are illegal for each system, better ask specifically for that.