You have:
A process (READER) that opens a text file (TEXTFILE), reads all the lines until the EOF and waits for new lines to appear.
The READER is implemented in Java and the waiting part uses java.nio.file.WatchService, which if I understand correctly on Linux uses inotify. I am not sure which is more relevant to the question.
The implementation is quite simple (exception handling and some ifs left out for brevity):
WatchService watcher = FileSystems.getDefault().newWatchService();
Path logFolder = Paths.get("/p/a/t/h");
logFolder.register(watcher, ENTRY_MODIFY);

BufferedReader reader = Files.newBufferedReader(logFolder.resolve("TEXTFILE"),
        StandardCharsets.US_ASCII);

WatchKey key = watcher.take();
for (WatchEvent<?> event : key.pollEvents()) {
    WatchEvent.Kind<?> kind = event.kind();
    doSomethingWithTheNewLine(reader.readLine());
}
Now, if I run READER and
Open TEXTFILE in an editor, add a line and save it, the result is that the READER doesn't seem to get the new line
If, on the other hand, I do something like this in bash
while true; do echo $(date) ; sleep 2; done >> TEXTFILE
then the READER does get the new lines
EDIT:
As far as I can see, the difference that may matter here is that in the first case the editor loads the content of the file, closes it (I assume), and on saving opens the file again and synchronizes the content with the file system, while the bash line keeps the file open the whole time. How that would make any difference, I am not sure.
I suppose the simple question is why???
The way I understood a scenario like this is that Linux uses some sort of locking when more than one process needs access to the same file on the filesystem at the same time. I also thought that when a process A opens a file descriptor to a file at time t0, it gets, let's say, a snapshot of what the file content was at t0. Even if process A doesn't close the file descriptor (which is what seems to be the case here) and a process B appends to that file at some time t0 + delta, I assumed process A would have to reopen the file descriptor to see the changes; it could not hold on to the same file descriptor and receive data newly appended to the file. Yet what I've observed clearly contradicts that assumption.
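For what it's worth, that snapshot assumption can be tested directly. The sketch below (a minimal, self-contained demo; the temp-file names are made up) shows both behaviors: a reader holding an open descriptor does see bytes appended by another writer, but if the file is atomically replaced by a rename, which is how many editors save, the old descriptor keeps pointing at the old file:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class FdDemo {

    // A reader opened before an append still sees the appended line:
    // the open descriptor tracks the growing file, not a snapshot.
    static String readAfterAppend() throws IOException {
        Path file = Files.createTempFile("fddemo", ".txt");
        try (BufferedReader reader = Files.newBufferedReader(file, StandardCharsets.US_ASCII)) {
            Files.write(file, "appended\n".getBytes(StandardCharsets.US_ASCII),
                    StandardOpenOption.APPEND);
            return reader.readLine(); // "appended"
        }
    }

    // After an atomic replace (what many editors do on save),
    // the old descriptor still refers to the old inode's content.
    static String readAfterReplace() throws IOException {
        Path dir = Files.createTempDirectory("fddemo");
        Path file = dir.resolve("f.txt");
        Files.write(file, "old\n".getBytes(StandardCharsets.US_ASCII));
        try (BufferedReader reader = Files.newBufferedReader(file, StandardCharsets.US_ASCII)) {
            Path replacement = dir.resolve("f.txt.tmp");
            Files.write(replacement, "new\n".getBytes(StandardCharsets.US_ASCII));
            Files.move(replacement, file, StandardCopyOption.REPLACE_EXISTING,
                    StandardCopyOption.ATOMIC_MOVE);
            return reader.readLine(); // still "old" on Linux
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readAfterAppend());
        System.out.println(readAfterReplace());
    }
}
```

The second case is exactly the editor scenario: the WatchService still fires an ENTRY_MODIFY in the directory, but the already-open reader is looking at the old, replaced file.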
Related
This is a very confusing problem.
We have a Java-application (Java8 and running on JBoss 6.4) that is looping a certain amount of objects and writing some rows to a File on each round.
On each round we check whether we received the File object as a parameter, and if we did not, we create a new object and create the physical file:

if (file == null) {
    file = new File(filename); // assign to the existing variable; redeclaring it would shadow the null-checked one
    try {
        file.createNewFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
So the idea is that the file gets created only once; after that the step is skipped and we proceed straight to writing. The variable filename is not a path, just a file name with no path, so the file gets created under jboss_root/tmp/hsperfdata_username/.
edit1. I'll also add the methods used for writing, in case they are relevant:
fw = new FileWriter(indeksiFile, true); // append = true
bw = new BufferedWriter(fw);
out = new PrintWriter(bw);
...
out.println(..)
...
out.flush();
out.close(); // close() flushes as well, so the line above is redundant
So now the problem is that occasionally, though quite rarely, the physical file disappears from the path in the middle of the process. The Java object reference is never lost, but it seems that the file itself disappears, because the code automatically creates the file again at the same path and keeps writing to it. This would not happen unless the condition file == null evaluated to true. The effect, obviously, is that we lose the rows written to the previous file. The Java application does not notice any errors and keeps on working.
So I have three questions, strongly related, for which I was not able to find an answer on Google.
If we call the method File.createNewFile(), is the resulting file a permanent file in the filesystem, or some JVM proxy file?
If it's a permanent file, do you have any idea why it's disappearing? The default behavior in our case is that at some point the file is always deleted from the path. My guess is that some mechanism is deleting the file too early; I just don't know how to control that mechanism.
My best guess is that this is related to the path jboss_root/tmp/hsperfdata_username/, which is a temp-data folder created by the JVM, and probably there is some default cleanup behavior on that path. Am I even close?
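For reference, a bare file name really is relative: it resolves against whatever the process's working directory happens to be, which is how the file can land in the JVM's hsperfdata temp folder. A small sketch of the distinction (the base directory name here is a made-up example):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathDemo {

    // A bare file name is relative: it resolves against the
    // current working directory of the process, wherever that is.
    static boolean isBareName(String filename) {
        return !Paths.get(filename).isAbsolute();
    }

    // Resolving against an explicit, application-owned directory avoids
    // landing in a temp folder by accident. "/var/myapp/data" is hypothetical.
    static Path resolveUnder(String baseDir, String filename) {
        return Paths.get(baseDir).resolve(filename).toAbsolutePath().normalize();
    }

    public static void main(String[] args) {
        System.out.println(isBareName("index.txt"));                      // true
        System.out.println(resolveUnder("/var/myapp/data", "index.txt")); // /var/myapp/data/index.txt
    }
}
```

Passing an absolute path to the File constructor sidesteps the whole question of what cleans the working directory.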
Help appreciated! Thanks!
I have never needed File.createNewFile() in my own code. When you actually write to the file afterwards, the write either creates the file anew or appends to it. In every case there is a race on the file system, and since these are not atomic actions, you might end up with something unstable.
So you want to write to a file, either appending on an existing file, or creating it.
For UTF-8 text:
Path path = Paths.get(filename);
try (PrintWriter out = new PrintWriter(
Files.newBufferedWriter(path, StandardOpenOption.CREATE, StandardOpenOption.APPEND),
false)) {
out.println("Kilroy was here");
}
After comment
Honestly, as you are interested in the cause, it is hard to say. An application restart or I/O exceptions would show up in the logs. Add logging to a dedicated log for appends to these files, and a (logged) periodic check for the files' existence.
Safe-guard
Here we are doing repeated physical access to the file system.
To prevent appending to a file twice at the same time (from which I would expect an exception), one can make a critical section in some form.
// For 16 semaphores:
final int semaphoreCount = 16;
final int semaphoreMask = 0xF;
Semaphore[] semaphores = new Semaphore[semaphoreCount];
for (int i = 0; i < semaphores.length; ++i) {
    semaphores[i] = new Semaphore(1, true); // fair: FIFO
}

int hash = filename.hashCode() & semaphoreMask; // use filename.toLowerCase() on Windows
Semaphore semaphore = semaphores[hash];
semaphore.acquire(); // acquire before try, so finally never releases an unheld permit
try {
    // ... append
} finally {
    semaphore.release();
}
File locks would be a more technical solution, but not one I would like to propose.
The best solution, which you perhaps already have, would be to queue messages per file.
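A minimal sketch of that per-file queue idea, assuming a single JVM (class and method names are mine): route every write for a given file through one single-threaded executor, so appends to the same file are serialized and never race, while different files proceed independently.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.concurrent.*;

public class PerFileWriter {

    // One single-thread executor per file name: writes to the same file
    // run in submission order; writes to different files don't block each other.
    private final ConcurrentMap<String, ExecutorService> queues = new ConcurrentHashMap<>();

    Future<?> append(String filename, String line) {
        ExecutorService queue = queues.computeIfAbsent(
                filename, f -> Executors.newSingleThreadExecutor());
        return queue.submit(() -> {
            try {
                Files.write(Paths.get(filename),
                        (line + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
    }

    void shutdown() {
        queues.values().forEach(ExecutorService::shutdown);
    }

    public static void main(String[] args) throws Exception {
        PerFileWriter writer = new PerFileWriter();
        Path log = Files.createTempDirectory("pfw").resolve("log.txt");
        writer.append(log.toString(), "one").get();
        writer.append(log.toString(), "two").get();
        writer.shutdown();
        System.out.println(Files.readAllLines(log)); // [one, two]
    }
}
```

This only guards against races inside one process; two JVMs appending to the same file would still need file locks or a shared queue.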
Firstly, I have checked java.io.IOException: The process cannot access the file because another process has locked a portion - when using IOUtils.copyLarge() in Windows. My question is not about finding whether a file is open or not.
I have a Java application A that is writing log files to the disk. At a given time, there are a large number of files that have been written and closed by A, and a few files which are still open and being written by A.
I have a second Java application B, which needs to read the logs sometimes, and it does. But the problem is that if the file is open in A, B will throw the error java.io.IOException: The process cannot access the file because another process has locked a portion.
The code that reads the file from B looks like this:
void readFile(Path filePath) throws IOException {
    try (FileInputStream fis = new FileInputStream(filePath.toFile())) {
        byte[] buffer = new byte[1024];
        int len;
        while ((len = fis.read(buffer)) != -1) {
            // do things with the read bytes
        }
    }
}
I have no control over application A's code. Neither can I determine when application A will release the file. It could be 24+ hours. So I need B to read the file while it's open (if it can be done at all).
I can, however change B's code. Is there any way I can change the above code such that B can read the contents of the locked file? Note that B only needs read access to the files.
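I don't know of a way for B to bypass a mandatory region lock on Windows from Java. One pragmatic change to B's code, though, is to retry with a delay: A typically only holds the lock around its own writes, so a later attempt often succeeds. A sketch (the attempt count and delay are arbitrary assumptions):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RetryingReader {

    // Try the read a few times; if another process has locked a portion,
    // the IOException is caught and the read is retried after a pause.
    static byte[] readWithRetry(Path filePath, int attempts, long delayMillis)
            throws IOException, InterruptedException {
        IOException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return Files.readAllBytes(filePath);
            } catch (IOException e) {
                last = e; // e.g. "another process has locked a portion"
                Thread.sleep(delayMillis);
            }
        }
        throw last; // give up: report the most recent failure
    }

    public static void main(String[] args) throws Exception {
        Path demo = Files.createTempFile("retry", ".log");
        Files.write(demo, "hello".getBytes());
        System.out.println(new String(readWithRetry(demo, 3, 100))); // hello
    }
}
```

This reads the whole file per attempt; if the files are large, the same retry wrapper can be put around a chunked read instead.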
The code I'm writing in Java is to close a file left open by a user. Here is what typically happens: a user is editing an Excel file, they save it, leave it open, and then close the lid on their laptop. The file is still kept open and locked, so no one else can edit it. Is there a way to kick them off and unlock the file? While they are using the file, it is "checked out." Here is what shows up:
What checked out looks like: (image)
The following code, interfacing through WinDAV with SharePoint, tells me if a file is locked or not (I know it's not great code, but it works and I've tried several other solutions including Filelock, Apache IO, FileStream, etc.):
String fileName = String.valueOf(node);
File file = new File(fileName);
File renamed = new File(fileName + "_UNLOCK");
boolean replaced;
if (file.renameTo(renamed)) {
    replaced = true;        // file is currently not in use
    renamed.renameTo(file); // rename it back
} else {
    replaced = false;       // file is currently in use
}
So, how would I unlock a file now? The only other solution is PowerShell using SharePoint libraries, but that has a whole lot of other problems...
As per the post, you can use the tool Handle, a CLI tool that finds out which process is locking the file. Once you have the process ID, you can kill that process. I'm not aware of any Java API that would identify the culprit process. For killing the process you can use taskkill, and you can invoke it from Java using Runtime. Both operations require your app to run with Administrator or higher privileges.
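A sketch of that taskkill invocation from Java (the Handle output parsing is omitted; /PID and /F are real taskkill flags, but the PID is of course an example, and this must run elevated on Windows):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class ProcessKiller {

    // Build the taskkill command line: /PID selects the process, /F forces termination.
    static List<String> taskkillCommand(long pid) {
        return Arrays.asList("taskkill", "/PID", Long.toString(pid), "/F");
    }

    // Run it; requires Administrator privileges, Windows only.
    static int kill(long pid) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(taskkillCommand(pid))
                .inheritIO()
                .start();
        return p.waitFor(); // 0 on success
    }

    public static void main(String[] args) {
        System.out.println(taskkillCommand(1234)); // [taskkill, /PID, 1234, /F]
    }
}
```

ProcessBuilder with an argument list is preferable to Runtime.exec with a single command string, since it avoids quoting problems in file names and arguments.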
I have a watch service watching a directory. Once files are created, I'm processing the directory and updating a tree view.
This works fine on ENTRY_DELETE, but sometimes (not always) when a WatchEvent of ENTRY_CREATE occurs, the file has not yet been written to the disk.
I've confirmed this by creating a new File() from the directory the watch service is registered to plus the path of the file, and checking the exists() method; so it seems the OS triggers the create event before the file actually exists.
This question appears to be the same issue, but from the folder's point of view.
Any way I can work around this?
The event is triggered when a file is created. The file needs to be created before it can be written to. A file doesn't simply appear once it is fully written, it appears once it is created.
What you can do is once you get the creation event:
Create a File object to point to the file
Create a java.nio.channels.FileChannel for random access using RandomAccessFile with rw mode (so read & write access)
Lock the channel. This will block until the file is free for read/write access (read the more general Lock method for more info)
Once the lock is acquired, the file has been released by the process that wrote it
A simplified example:
File lockFile = new File("file_to_lock");
try (FileChannel channel = new RandomAccessFile(lockFile, "rw").getChannel()) {
    channel.lock(); // blocks until no other process holds a lock on the file
    // read the file here
}
I had the same issue. I added a few seconds' delay after the event fires, before processing, since the other application writing the file used to take a couple of seconds to flush the content and release the file.
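Instead of a fixed delay, one can poll until the file size stops changing before processing. This is a heuristic, not a guarantee (a writer that pauses longer than the polling interval will fool it), and the interval below is an arbitrary choice:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StableFileWait {

    // Wait until two consecutive size samples agree; returns the final size.
    // Heuristic only: a writer pausing longer than intervalMillis defeats it.
    static long waitUntilStable(Path file, long intervalMillis, int maxSamples)
            throws IOException, InterruptedException {
        long previous = -1;
        for (int i = 0; i < maxSamples; i++) {
            long size = Files.size(file);
            if (size == previous) {
                return size;
            }
            previous = size;
            Thread.sleep(intervalMillis);
        }
        return previous; // gave up waiting; return the last sample
    }

    public static void main(String[] args) throws Exception {
        Path demo = Files.createTempFile("stable", ".dat");
        Files.write(demo, new byte[5]);
        System.out.println(waitUntilStable(demo, 10, 5)); // 5
    }
}
```

The FileChannel.lock() approach above is more robust when the writing process actually holds a lock; size polling is a fallback when it doesn't.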
In case I have the following code:
private PrintWriter m_Writer;
m_Writer = new PrintWriter(new FileWriter(k_LoginHistoryFile));
I am writing to a local file on the server whose name is k_LoginHistoryFile.
Now, as my program runs, it keeps writing to this file, so how can I delete all the file content between writes?
I think it is important, as I don't want to write to a file which will eventually have current, updated information at its beginning plus out-of-date info at its end.
Thanks in advance
This expression:
new FileWriter(k_LoginHistoryFile)
will truncate your file if it exists. It won't just overwrite the start of the file. It's not clear how often this code is executing, but each time it does execute, you'll start a new file (and effectively delete the old contents).
I think it is important, as I don't want to write to a file which will eventually have current, updated information at its beginning plus out-of-date info at its end.
If you want to keep a running output file (and you can't keep the file open), consider this constructor: FileWriter(String, boolean)
If the boolean is true, your updated information will be at the end of the file.
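To see the difference, here is a small demo (the file is a temp file; names are made up). Opening with append = false truncates on every open, so only the last write survives, while append = true keeps the earlier content:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

public class AppendDemo {

    // Open the file with the given append mode and write one line.
    static void write(Path file, String line, boolean append) throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(file.toFile(), append))) {
            out.println(line);
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("appenddemo", ".log");

        write(file, "first", false);   // truncates
        write(file, "second", false);  // truncates again: "first" is gone
        System.out.println(Files.readAllLines(file)); // [second]

        write(file, "third", true);    // appends: "second" survives
        System.out.println(Files.readAllLines(file)); // [second, third]
    }
}
```

So for a running history file that is reopened between writes, the two-argument FileWriter(String, boolean) constructor with true is the one you want.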