I have a watch service watching a directory. Once files are created, I'm processing the directory and updating a tree view.
This works fine on ENTRY_DELETE, but sometimes (not always) when a WatchEvent of ENTRY_CREATE occurs, the file has not yet been written to the disk.
I've confirmed this by constructing a new File() from the directory the watch service is registered on plus the path from the event, and checking its exists() method, so it seems that the OS is triggering the create event before the file is actually created.
This question appears to be the same issue, but from the folder's point of view.
Any way I can work around this?
The event is triggered when a file is created. The file needs to be created before it can be written to. A file doesn't simply appear once it is fully written, it appears once it is created.
What you can do is once you get the creation event:
Create a File object to point to the file
Create a java.nio.channels.FileChannel for random access using RandomAccessFile with rw mode (so read & write access)
Lock the channel. This will block until the file is free for read/write access (see the more general lock(position, size, shared) method for more info)
When the lock is acquired, your file was released by the process that wrote the file
A simplified example:
File lockFile = new File("file_to_lock");
try (RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
     FileChannel channel = raf.getChannel();
     FileLock lock = channel.lock()) {
    // lock acquired: the writing process has released the file; process it here
}
I had the same issue. I added a delay of a few seconds after the event fires, before processing, since another application was writing the file and it took a couple of seconds to flush the content and release it.
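A fixed sleep is fragile, though, because the right delay depends entirely on the writer. A more robust variant of the same idea is to poll until the file size stops changing. A minimal sketch (the class name and quiet period are made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class WaitForFile {
    // Block until the file's size has not changed for one quiet period.
    // This only approximates "writer is done"; the lock approach is stricter.
    static void waitUntilStable(Path file, long quietMillis)
            throws IOException, InterruptedException {
        long last = -1;
        while (true) {
            long size = Files.size(file);
            if (size == last) {
                return; // unchanged across one quiet period
            }
            last = size;
            Thread.sleep(quietMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("watched", ".tmp");
        Files.write(file, new byte[1024]);
        waitUntilStable(file, 50);
        System.out.println("stable at " + Files.size(file) + " bytes");
        Files.delete(file);
    }
}
```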
I have written a custom error parser for Eclipse which needs to mark lines in a (log) file that it writes i.e. all files that are processed by the error parser are written to my synthetic log and the troublesome ones are marked.
This works perfectly the second or subsequent time it is run, i.e. as long as a copy of my_log.txt already exists on the filesystem. But the first time I run it (i.e. when my_log.txt doesn't yet exist), the file isn't marked, though it pops into existence as soon as the build is done. Hence lines like this fail on the first run:
errorResource = ep.findFileName(multilineLog);
where multilineLog is the name of the file (and ep is the ErrorParserManager).
How do I force the my_log.txt into physical existence so it can be found at all times?
I am trying this at the moment (and have tried a few other things) but to no avail:
FileOutputStream fOS = new FileOutputStream(logFile);
fOS.write(line.getBytes());
FileDescriptor fd = fOS.getFD();
fd.sync();
fOS.close();
(And then I delete the file and start writing to a clean one).
I have a file to upload (say abc.pdf). I first want to upload it as a temp file (say abc.pdf.temp). Then, if the file is successfully (fully) transferred, I need to rename it to its original name (abc.pdf). But if the file is not fully transferred, I need to delete the temp file I uploaded initially, since I don't want to keep a corrupted file on the server. Is this achievable with the JSch library? Below is the sample code. Does this code make sense for achieving this?
Sample Code:
originalFile = 'abc.pdf';
tempFile = 'abc.pdf.temp';
fileInputStream = createobject("java", "java.io.FileInputStream").init('C:\abc.pdf');
SftpChannel.put(fileInputStream, tempFile);
// Comparing remote file size with local file size
if (SftpChannel.lstat(tempFile).getSize() NEQ localFileSize) {
    // Resume the file transfer since the file size is different
    SftpChannel.put(fileInputStream, tempFile, SftpChannel.RESUME);
    if (SftpChannel.lstat(tempFile).getSize() NEQ localFileSize) {
        // Still not fully transferred (during RESUME), so delete the
        // temp file since we don't want to keep a corrupted file on the server
        SftpChannel.rm(tempFile);
    }
} else { // assuming the file is fully transferred
    SftpChannel.rename(tempFile, originalFile);
}
It's very unlikely that the file size won't match after put finishes without throwing; it can hardly happen. And even if it does happen, calling RESUME makes little sense: if something catastrophic went wrong that put didn't detect, RESUME is not likely to help.
And even if you do want to try RESUME, it makes no sense to try only once. If you believe retrying makes sense, you have to keep retrying until you succeed, not retry just once.
You should instead catch the exception and resume/delete/whatever. That's the primary recovery mechanism: a transfer that fails with an exception is far more likely than a silent size mismatch after a successful put.
You have:
A process (READER) that opens a text file (TEXTFILE), reads all the lines until the EOF and waits for new lines to appear.
The READER is implemented in Java, and the waiting part uses java.nio.file.WatchService, which, if I understand correctly, uses inotify on Linux. I am not sure which of the two is more relevant to the question.
The implementation is quite simple (exception handling and some ifs left out for brevity):
WatchService watcher = FileSystems.getDefault().newWatchService();
Path logFolder = Paths.get("/p/a/t/h");
logFolder.register(watcher, ENTRY_MODIFY);
BufferedReader reader = Files.newBufferedReader(
        logFolder.resolve("TEXTFILE"), Charset.forName("US-ASCII"));
WatchKey key = watcher.take();
for (WatchEvent<?> event : key.pollEvents()) {
    WatchEvent.Kind<?> kind = event.kind();
    doSomethingWithTheNewLine(reader.readLine());
}
key.reset(); // re-arm the key so further events are delivered
Now, if I run READER and
Open TEXTFILE in an editor, add a line and save it, the result is that the READER doesn't seem to get the new line
If, on the other hand, I do something like this in bash
while true; do echo $(date) ; sleep 2; done >> TEXTFILE
then the READER does get the new lines
EDIT:
As far as I can see, the difference that may matter is that in the first case the editor loads the content of the file, closes it (I assume), and on saving opens the file again and synchronizes the content with the filesystem, while the bash loop keeps the file open. How that would make any difference, I am not sure.
I suppose the simple question is: why?
The way I understood scenarios like this is that Linux uses some sort of locking when more than one process needs access to the same file on the filesystem at the same time. I also thought that when a process A opens a file descriptor to a file at time t0, it gets, let's say, a snapshot of what the file content was at t0. Even if process A doesn't close the file descriptor (which seems to be the case here) and a process B appends to that file at some time t0 + delta, process A would have to reopen the file descriptor to see the changes; it couldn't hold on to the same file descriptor and get the new data being appended. Though it's obvious that what I've observed contradicts that assumption.
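For what it's worth, on regular files a held descriptor is not a snapshot: a reader that keeps its stream open does see bytes appended later through a different handle. (The editor case is different because many editors save by writing a new file and renaming it over the old one, so an open reader keeps reading the old inode, whereas a shell >> append goes to the very inode the reader holds.) A self-contained sketch demonstrating the append case, with made-up file names:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendVisibility {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("tail", ".txt");
        Files.write(file, "first\n".getBytes(StandardCharsets.US_ASCII));
        try (BufferedReader reader =
                Files.newBufferedReader(file, StandardCharsets.US_ASCII)) {
            System.out.println(reader.readLine());
            // Append through a *different* handle while the reader stays open
            Files.write(file, "second\n".getBytes(StandardCharsets.US_ASCII),
                    StandardOpenOption.APPEND);
            // The held descriptor sees the newly appended line
            System.out.println(reader.readLine());
        }
        Files.delete(file);
    }
}
```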
I am submitting a JCL job to allocate a VB dataset in Mainframe. After submitting the job, the dataset gets created successfully.
Then I am running a java program in omvs region of mainframe, to open the file and write some contents into it. When I try to write the data into the file I am getting the below error.
//DD:SYS00011 : fwrite() failed. EDC5024I An attempt was made to close a file that had been opened by another thread.; errno=24 errno2=0xc0640021 last_op=0 errorCode=0x0.
JCL submitted to allocate the dataset:
//USERNAME JOB ABC,,NOTIFY=&SYSUID,CLASS=F,MSGLEVEL=(1,1),MSGCLASS=X
//STEP1 EXEC PGM=IEFBR14
//STEP DD DSN=ASD00T.SM.ULRL,
// DISP=(NEW,CATLG,DELETE),
// UNIT=SYSDA,SPACE=(1,(10,60),RLSE),AVGREC=M,
// DCB=(RECFM=VB),
// DSORG=PS
code to write the file:
zFileIn = new ZFile("//'ASD00T.INPUT.ULRL'", "rb,type=record,noseek");
if (zFileIn.getDsorg() != ZFile.DSORG_PS) {
    throw new IllegalStateException("Input dataset must be DSORG=PS");
}
zFileOut = new ZFile("//'ASD00T.SM.ULRL'",
        "wb,type=record,noseek,recfm=" + zFileIn.getRecfm()
        + ",lrecl=" + zFileIn.getLrecl());
long count = 0;
byte[] recBuf = new byte[zFileIn.getLrecl()];
int nRead;
while ((nRead = zFileIn.read(recBuf)) >= 0) {
    zFileOut.write(recBuf, 0, nRead);
    count++;
}
The heart of your problem is that you need to invoke the ZFile.close() method after you're done writing. Doing so will guarantee that the open, writes and close all happen under the same thread and you should be fine. This is a side-effect of using conventional datasets instead of USS files.
The reason for this is complicated, but it has to do with the fact that in z/OS, "conventional" QSAM/BSAM/VSAM datasets behave slightly differently than do UNIX filesystem files.
If you were writing to a UNIX file (HFS, ZFS, NFS, etc) instead of a conventional dataset, what you're doing would work perfectly fine...this is because USS treats resource ownership differently - file handles are owned at a process level, not a thread. When you open a USS file, that file handle can be used or closed by any thread in the process...this is mandated by the various UNIX standards, so z/OS has no choice but to work this way.
Conventional datasets are a bit different. When you open a conventional dataset, the operating system structures that define the open file are stored in memory anchored to the thread where the file was opened. There's enough information in the file handle that you can do I/O from other threads, but closing the file needs to happen from the thread where the file was opened.
Now, since you don't seem to have a close() in your code, the file stays open until your Java thread ends. When your Java thread ends, the system runtime gets control in order to clean up any resources you might have allocated. It sees the lingering open file and tries to close it, but now it's not running under the thread that opened the file, so you get the failure you're seeing.
Normally, UNIX files and z/OS datasets work almost exactly the same, but this issue is one of the slight differences. IBM gets away with it from a standards compliance perspective since z/OS datasets aren't part of any standard, and generally, the way they can be used interchangeably is a great feature.
By the way, all of this is spelled out in the fine print of the LE (Language Environment) and C Runtime references.
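Based on the explanation above, a sketch of the fix is simply to close both datasets in a finally block, so that open, writes and close all happen on the same thread. This reuses the ZFile calls from the question and is only compilable on z/OS with the JZOS library:

ZFile zFileIn = null;
ZFile zFileOut = null;
try {
    zFileIn = new ZFile("//'ASD00T.INPUT.ULRL'", "rb,type=record,noseek");
    zFileOut = new ZFile("//'ASD00T.SM.ULRL'",
            "wb,type=record,noseek,recfm=" + zFileIn.getRecfm()
            + ",lrecl=" + zFileIn.getLrecl());
    byte[] recBuf = new byte[zFileIn.getLrecl()];
    int nRead;
    while ((nRead = zFileIn.read(recBuf)) >= 0) {
        zFileOut.write(recBuf, 0, nRead);
    }
} finally {
    // close on the same thread that opened the datasets
    if (zFileOut != null) zFileOut.close();
    if (zFileIn != null) zFileIn.close();
}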
My app downloads a zip file from a remote webserver, then extracts it.
The javascript successfully calls FileTransfer, which logs:
FileTransfer Saved file: file:///data/data/com.yadda.yadda/update.zip
As part of the success function, javascript calls my custom update plugin which immediately tests for the file:
Context ctx = this.cordova.getActivity().getBaseContext();
File update = new File(ctx.getFilesDir(),"update.zip");
if(!update.exists()) Log.w("File not found",update.getAbsolutePath());
The log for that last line is:
File Not Found /data/data/com.yadda.yadda/update.zip
Later, in a try/catch block, I create an InputStream variable, and one of the catch blocks is for FileNotFoundException, which fires every time.
Begin edit - more info
The FileNotFoundException has an interesting bit: the file path is wrong, even though I'm passing the same "update" variable to create the FileInputStream
InputStream fileis = new FileInputStream(update);
And the interesting bit of the exception:
Caused by: java.io.FileNotFoundException: /data/data/com.yadda.yadda/files/update.zip
End edit
What is going wrong here? Cordova logs that the file transfer completed and the file was saved, but then the file doesn't exist when I test for it! When I create the FileInputStream, why is the path different, causing the exception?
What am I missing? Everything works fine in the IOS version of the app.
Edit 2: per request, I browsed the device filesystem and found that update.zip does indeed exist in /data/user/0/com.yadda.yadda
OK, somewhere there is a bug. I'm inclined to believe it's a bug in getAbsolutePath(), because I'm seeing consistent behavior elsewhere.
When I create the "File update" variable and immediately log update.getAbsolutePath(), it shows the correct path. But when I attempt to create the FileInputStream, the path is different (with /files added).
So, after a little searching, I found that in order to access the application data directory (without /files) I must pass a different directory to the new File call. Here's what it looks like:
File update = new File(ctx.getApplicationInfo().dataDir,"update.zip");
Obtaining the dir with getFilesDir()
ctx.getFilesDir() = /data/data/com.yadda.yadda/files
Obtaining the correct dir
ctx.getApplicationInfo().dataDir = /data/data/com.yadda.yadda