After years of coding with the old File API, I'm finally ready to hop onto the whole Path/Paths train. For the most part this has gone smoothly; however, I'm stumped on one particular aspect: temporary files.
The documentation on java.nio.file.Files#createTempFile says:
As with the File.createTempFile methods, this method is only part of a temporary-file facility. Where used as a work files, the resulting file may be opened using the DELETE_ON_CLOSE option so that the file is deleted when the appropriate close method is invoked. Alternatively, a shutdown-hook, or the File.deleteOnExit() mechanism may be used to delete the file automatically.
I don't see where the DELETE_ON_CLOSE option is supposed to be specified. Using a shutdown hook is incredibly inconvenient (unless I'm thinking of it wrong). In an effort to avoid mixing Path objects and File objects, I am looking for a solution similar to File.deleteOnExit() for Path objects, but obviously one that doesn't require a Path.toFile().[...].toPath() sort of calling pattern.
What is the correct way to implement "self-destructing" temporary files using the java.nio.Files API?
You set that option when you write, for example:
Path myTempFile = Files.createTempFile(...);
Files.write(myTempFile, ..., StandardOpenOption.DELETE_ON_CLOSE);
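Note that Files.write opens and closes the file internally, so with DELETE_ON_CLOSE the file is gone as soon as the call returns. If you want the file to live for the duration of your own writes, you can pass the option to Files.newOutputStream instead. A minimal sketch (the prefix and contents are made up; deletion on close is best-effort, much like deleteOnExit):

```java
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SelfDestructingTempFile {
    // Returns whether the file still exists after the stream is closed.
    static boolean demo() throws Exception {
        Path tmp = Files.createTempFile("scratch", ".tmp");
        // DELETE_ON_CLOSE is passed when opening the file, not when creating it.
        try (OutputStream out = Files.newOutputStream(tmp, StandardOpenOption.DELETE_ON_CLOSE)) {
            out.write("work data".getBytes());
            // The open stream can keep writing here; on Unix the file name
            // may already be unlinked at this point, but the data is intact.
        }
        return Files.exists(tmp);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // false
    }
}
```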
Is there a way to use the Checkstyle API without providing a java.io.File?
Our app already has the file contents in memory (they aren't read from a local file, but from another source), so it seems inefficient to me to have to create a temporary file and write the in-memory contents to it just to throw it away. I've looked into using in-memory file systems to circumvent this, but it seems java.io.File is always bound to the actual file system. Obviously I have no way of testing whether performance would be better; I just wanted to ask if Checkstyle supports such a use case.
There is no clean way to do this. I recommend creating an issue at Checkstyle expanding more on your process and asking for a way to integrate it with Checkstyle.
Files are needed for our support of caching: we skip reading and processing a file if it is in the cache and has not changed since the last run. The cache process is intertwined with the rest of the pipeline, which is why no non-file route exists. Even without a file, Checkstyle processes file contents through FileText, which again needs a File just as a file-name reference, plus the lines of the file in a List.
I recently started working on a poorly designed and developed web application. I am finding that it uses about 300 properties files, and all of them are read somewhat like this:
Properties prop = new Properties();
FileInputStream fisSubsSysten = new FileInputStream("whatever.properties");
prop.load(fisSubsSysten);
That is, it reads the properties files from the current working directory. Another problem is that the developers chose to repeat the above lines throughout each Java file. For example, if there are 10 methods, each method contains the above code instead of calling one shared method.
This means we can never change the location of the properties files. Currently they sit directly under the WebSphere profiles directory; isn't this ugly? If I move them somewhere else and put that location on the classpath, it does not work.
I tried changing the above lines like this using Spring IO utils library:
Resource resource = new ClassPathResource("whatever.properties");
Properties prop = PropertiesLoaderUtils.loadProperties(resource);
But this application has over 1000 files, and changing each one by hand is impractical. How would you go about refactoring this mess? Is there an easy way around it?
Thanks!
In these cases of "refactoring" I use a simple find-and-replace approach. Notepad++ has a "Find in Files" feature, but there are plenty of similar programs.
Create a class that does the properties loading, with a method that takes the property file's name as a parameter.
This can be a Java singleton or a Spring bean.
Search and replace all "new Properties()" lines with an empty line.
Replace all "load..." lines with a reference to your new class/method. Notepad++ supports regex replacement, so you can use the file name as a parameter.
Once this is done, go to Eclipse and launch a "Clean Up" or "Organize Imports", and fix any remaining compile errors manually.
This approach is quite straightforward and takes no more than 10 minutes if you are lucky, or an hour if you are unlucky, e.g. if the code formatting is way off and each file looks different.
You can make the replacement simpler if you first format the project with a line length of 300 or more, so that each Java statement is on one line. That makes find-and-replace easier, as you don't have newlines to consider.
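The loader class from the first step might look something like this. The class name, the method shape, and the choice to fail fast with an unchecked exception are my own; adapt them to your codebase:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// One central place to load properties from the classpath, replacing the
// scattered FileInputStream code. Name and error policy are illustrative.
public final class PropertyLoader {
    private PropertyLoader() {}

    public static Properties load(String name) {
        Properties props = new Properties();
        try (InputStream in = PropertyLoader.class.getClassLoader().getResourceAsStream(name)) {
            if (in == null) {
                throw new IllegalStateException(name + " not found on the classpath");
            }
            props.load(in);
        } catch (IOException e) {
            throw new IllegalStateException("Could not read " + name, e);
        }
        return props;
    }
}
```

The call sites then shrink to a single line, e.g. `Properties prop = PropertyLoader.load("whatever.properties");`, which is exactly the shape a regex replacement can produce.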
I can only agree that, from your description, your project sounds a bit daunting.
However, how to maintain or improve it is a risk that merely needs to be assessed and prioritised.
Consider building a high-rise and subsequently realising that the bolts holding the infrastructure together have a design flaw. The prospect of replacing them all is daunting as well, so you weigh how to change them and whether few, many, or all of them really need to be replaced.
I assume this is a core system for the company, built by somebody who has probably since left the project, and you are weighing improvement against maintenance. But again, you must assess whether it really is important to move your property files, or whether you could, for instance, just use symbolic links in your file system. Alternatively, do you really need to move them all, or are there just a few that would genuinely benefit from being moved? You could also mark all the relevant places in the code with a to-be-fixed-later marker. I sometimes mark bad classes as deprecated and promise to fix the affected classes, but postpone the work until I have other changes in those classes anyway, until finally the deprecated class can be safely removed.
In any case, you should assess your options (leave the files, replace all, or replace some), provide some estimate of cost and consequences, and ask your manager which course to take.
Just note: always overestimate the solution you don't want to do, as you'd be twice as likely to stop for coffee breaks, and a billboard of told-you-so's is great leverage for decision making :)
On the technology side of your question, regex search and replace is probably the only option. I would normally put configuration files in a place accessible by classpath.
You can try using Eclipse's search feature. For example, if you right-click the load() method of the Properties class and select References -> Project, it will give you all the locations in your project where that method is used.
From there you could also attempt a global regex search and replace.
I'm using a third-party commercial library which seems to be leaking file handles (I verified this on Linux using lsof). Eventually the server (Tomcat) starts getting the infamous "Too many open files" error, and I have to restart the JVM.
I've already contacted the vendor. In the meantime, however, I would like to find a workaround for this. I do not have access to their source code. Is there any way, in Java, to clean up file handles without having access to the original File object (or FileWriter, FileOutputStream, etc.)?
A fun way would be to write a shared library and use LD_PRELOAD to load it into the Java instance you are launching. This library could override the underlying open(2) system call (or use some other logic) to close existing file descriptors of the process before passing the call on to the libc implementation (or the kernel). You need to do some serious accounting and possibly deal with threads, but it can be done, especially if you take hints from /proc/pid/fd/ to figure out whether a close is appropriate for the target fd.
You could, on startup, open a bunch of files and use FileInputStream/FileOutputStream's getFD() to obtain a number of java.io.FileDescriptor objects, then close the streams but hold onto the descriptors. Later you might be able to create new streams from those stored FileDescriptors and close those.
I have not tested this, so I would not be surprised if it did not work on some platforms.
I have a class that does operations on files on a disk. More exactly, it traverses a directory, reads all files with a given suffix, does some operations on the data, and then outputs the results to a new file.
I'm a bit dubious as to how to design a unit test for this class.
I'm thinking of having the setup method create a temporary directory and temporary files in /tmp/somefolder, but I suspect this is a bad idea for a couple of reasons (developers using Windows, file permissions, etc.).
Another idea would be to mock the classes I use to read from and write to the disk, by encapsulating them behind an interface and then providing a mock object, but that seems a bit messy.
What would be the standard way of approaching such a problem?
If you're using JUnit 4.7 or later, you can use the @TemporaryFolder rule to transparently obtain a temporary folder that is automatically cleaned up after each test.
Your strategy is the right one, IMO. Just make sure not to hardcode the temp directory: use System.getProperty("java.io.tmpdir") to get its path, and use a finally block in your test or an @After method to clean up the created files and directories once your test is finished.
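That setup/teardown can be sketched with just the JDK; the helper and method names here are made up for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class TempDirSupport {
    // Create a fresh per-test directory under java.io.tmpdir.
    static Path createTestDir() throws IOException {
        Path base = Paths.get(System.getProperty("java.io.tmpdir"));
        return Files.createTempDirectory(base, "mytest-");
    }

    // Delete the directory and everything in it, children before parents.
    static void deleteRecursively(Path dir) throws IOException {
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> p.toFile().delete());
        }
    }
}
```

A test would call createTestDir() in its setup (or body) and deleteRecursively() in an @After method or a finally block, so a failing test can't leave files behind.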
Mocking everything out is possible, but probably much more effort than it's worth. You can use the temporary directory from System.getProperty("java.io.tmpdir"), which you should be able to write to no matter which system you're on. Stick to short file names and you'll be safe even if running on something ancient.
I have a set of files. The set of files is read-only off a NTFS share, thus can have many readers. Each file is updated occasionally by one writer that has write access.
How do I ensure that:
If the write fails, that the previous file is still readable
Readers cannot hold up the single writer
I am using Java, and my current solution is for the writer to write to a temporary file, then swap it out with the existing file using File.renameTo(). The problem is that on NTFS, renameTo fails if the target file already exists, so you have to delete it yourself. But if the writer deletes the target file and then fails (computer crash), I no longer have a readable file.
nio's FileLock only works within the same JVM, so it is useless to me.
How do I safely update a file with many readers using Java?
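On Java 7 and later, the delete-then-rename dance can be avoided: Files.move with ATOMIC_MOVE maps to rename(2) on POSIX file systems, which replaces an existing target atomically. On NTFS the behaviour when the target exists is implementation specific, so treat this as a sketch rather than a guarantee:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicReplace {
    // Write the new contents beside the target, then move the temp file
    // into place. On POSIX this replaces the target atomically; readers
    // either see the complete old file or the complete new one.
    static void replace(Path target, byte[] contents) throws Exception {
        Path tmp = Files.createTempFile(target.getParent(), "swap", ".tmp");
        Files.write(tmp, contents);
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
    }
}
```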
According to the JavaDoc:
This file-locking API is intended to map directly to the native locking facility of the underlying operating system. Thus the locks held on a file should be visible to all programs that have access to the file, regardless of the language in which those programs are written.
I don't know if this is applicable, but if you are running in a pure Vista/Windows Server 2008 solution, I would use TxF (transactional NTFS) and then make sure you open the file handle and perform the file operations by calling the appropriate file APIs through JNI.
If that is not an option, then I think you need to have some sort of service that all clients access which is responsible to coordinate the reading/writing of the file.
On a Unix system, I'd remove the file and then open it for writing. Anybody who had it open for reading would still see the old one, and once they'd all closed it, it would vanish from the file system. I don't know if NTFS has similar semantics, although I've heard it's loosely based on BSD's file system, so maybe it does.
Something that should always work, no matter what OS etc, is changing your client software.
If this is an option, then you could have a file "settings1.ini" and if you want to change it, you create a file "settings2.ini.wait", then write your stuff to it and then rename it to "settings2.ini" and then delete "settings1.ini".
Your changed client software would simply always check for settings2.ini if it read settings1.ini last, and vice versa.
This way you always have a working copy.
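The alternating scheme above could be sketched like this; the file names come from the answer, the rest is hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AlternatingSettings {
    // Alternate between settings1.ini and settings2.ini so that a complete
    // older version always exists, even if the writer crashes mid-update.
    static void update(Path dir, byte[] newContents) throws IOException {
        Path oldFile = dir.resolve("settings1.ini");
        Path newFile = dir.resolve("settings2.ini");
        if (!Files.exists(oldFile)) {       // flip direction on alternate runs
            Path swap = oldFile; oldFile = newFile; newFile = swap;
        }
        Path waiting = dir.resolve(newFile.getFileName() + ".wait");
        Files.write(waiting, newContents);  // write the new version fully first
        Files.move(waiting, newFile);       // then rename it into place
        Files.deleteIfExists(oldFile);      // the old version goes away last
    }
}
```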
There might be no need for locking. I am not too familiar with the file-system API on Windows, but since NTFS supports both hard links and soft links, AFAIK you can try the following if your setup allows it:
Use a hard or soft link to point to the actual file, and give the actual file a different name. Have everyone access the file through the link's name.
Write the new file under a different name, in the same folder.
Once it is finished, repoint the link to the new file. Ideally, Windows would let you create the new link, replacing the existing link, in one atomic operation; then the link would always identify a valid file, either the old or the new one. At worst, you'd have to delete the old link first and then create the link to the new file; in that case there would be a short window in which a program could not locate the file. (Also, Mac OS X offers an "ExchangeObjects" function that swaps two items atomically; maybe Windows offers something similar.)
This way, any program that already has the old file open will continue to access the old one, and you won't get in its way while creating the new one. Only when an app notices the existence of the new version does it need to close the current file and reopen it, thereby getting access to the new version.
I don't know, however, how to create links in Java; maybe you have to use a native API for that.
I hope this helps anyway.
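This answer predates Java 7; since then, java.nio.file.Files can create hard links (Files.createLink) and symbolic links (Files.createSymbolicLink) directly, with no native code. A sketch of the repointing idea, including the non-atomic delete-then-create window the answer mentions (file names are made up):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class LinkSwap {
    // Repoint the hard link "data.txt" (the name readers use) at a new
    // version file. Not atomic: there is a brief window in which the link
    // does not exist, exactly as described above.
    static void publish(Path dir, Path newVersion) throws Exception {
        Path link = dir.resolve("data.txt");
        Files.deleteIfExists(link);
        Files.createLink(link, newVersion);
    }
}
```

Readers that already hold the old version's file open keep reading it; the hard link merely changes which inode the public name refers to.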
I have been dealing with something similar recently. If you are running Java 5 or later, perhaps you could consider using NIO file locks in conjunction with a ReentrantReadWriteLock? Make sure all code referencing the FileChannel object also references the ReentrantReadWriteLock: that way the NIO lock guards the file at the per-VM level while the reentrant lock guards it at the per-thread level within the VM.
reentrantReadWriteLock.writeLock().lock();  // thread-level lock within this JVM
try {
    FileLock fileLock = fileChannel.lock(position, size, shared);  // VM-level lock
    try { /* do stuff */ } finally { fileLock.release(); }
} finally {
    reentrantReadWriteLock.writeLock().unlock();
}
Of course, some exception handling would be required.