Testing if a file is open or not - java

I know there is no standard Java mechanism for testing whether a file is open.
Can anyone tell me whether there is a library, or some other way, to achieve this?
This is my scenario: say I've opened a .docx file manually. When I try to upload that same file, the upload fails because the file is open. So if the user tries to upload a file that is open in another application, the page should display a message saying "The file is open".

Using Java
If your client is a Java client (which you don't say in your question), you could try the following code from this answer:
Using the Apache Commons IO library...
import java.io.IOException;

// "yourFile" is the java.io.File you want to check.
boolean isFileUnlocked = false;
try {
    // touch() opens the file for writing to update its timestamp; on Windows
    // this throws an IOException if another program holds the file locked.
    org.apache.commons.io.FileUtils.touch(yourFile);
    isFileUnlocked = true;
} catch (IOException e) {
    isFileUnlocked = false;
}

if (isFileUnlocked) {
    // Do stuff you need to do with a file that is NOT locked.
} else {
    // Do stuff you need to do with a file that IS locked.
}
BTW: This is the very first answer when you search for "java file is open" on Google.
Using JavaScript
However, I assume that you don't have a Java client, only a Java server, and that you want to upload a file via a webpage (so you have an HTML + JS client). I assume this because you don't tell us.
JavaScript in the browser cannot write to the disk, since that would be a security issue (I think Chrome supports some kind of JavaScript File System API that allows limited writes, but the other browsers do not). So you cannot test this via JavaScript either.
In general
The general way to test whether a file is open is to test whether there is a lock on it. This does not guarantee that the file has not been opened without a lock, but in practice it is usually sufficient.
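As an illustration, a minimal sketch of such a lock probe with java.nio (assuming the JVM can see the file directly; FileLock only reflects locks that the operating system actually enforces, so behaviour differs between Windows and Unix, and a clean result is still not a guarantee):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockProbe {
    // Returns true if we could briefly take an exclusive lock on the file,
    // i.e. no other process appears to be holding it open with a lock.
    public static boolean isProbablyUnlocked(String path) {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
             FileChannel channel = raf.getChannel()) {
            FileLock lock = channel.tryLock();
            if (lock == null) {
                return false;   // another process holds a lock
            }
            lock.release();
            return true;
        } catch (IOException e) {
            return false;       // could not even open the file for writing
        }
    }
}

On Windows, merely opening the file for writing will typically already fail while Word has the .docx open, which matches the situation described in the question.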

Related

Downloading a dynamically created text file from Java, write directly to computer or save on server?

When a user clicks a download button (coded in simple JavaScript) on my webpage, it triggers a PHP function that calls a Java program. This Java code connects to the database and writes to a text file.
Currently, the code is in development on my local machine. Here is a snippet of the code, which currently creates a local text file and writes the information to it:
import java.io.File;
import org.beanio.BeanWriter;
import org.beanio.StreamFactory;

StreamFactory sf = StreamFactory.newInstance();
sf.loadResource("mapping.xml");
File file = new File("C:\\MyLocal\\foo.txt");
BeanWriter bw = sf.createWriter("fileExport", file);
// writes beans
bw.write("", "");
...
bw.flush(); // flushes to foo.txt located in C:\MyLocal
My question is: is it possible to write this dynamically created text file to the user's computer instead of to my local machine, and if so, is it good web development practice? The benefit would be that I don't need to store foo.txt on the server where the code will reside. However, I'm not very familiar with web development practices and did not find a concrete rule on this subject.
If it is better to save it to the server and then implement some download code from there, should this be handled in the PHP of the page or in the Java backend functions?
It is not possible to write to the user's filesystem.
This is blocked by the browser's JavaScript security model: if it weren't, a malicious website or web app could write a huge amount of data to the user's hard drive, or write malicious scripts to it.
What the user can do is download a file, but a web page has no direct access to the user's filesystem.
There was a proposed API for doing something like this, but it was discontinued:
https://www.html5rocks.com/en/tutorials/file/filesystem/
Answering your question: it is definitely not good practice to try to write files to the user's filesystem.
You should handle the download with Java on the backend.
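As an illustration only (the question doesn't show the Java server stack, so this servlet is a hypothetical sketch of one way to do it): generate the content on the server and stream it back with a Content-Disposition header, so the browser offers a download and nothing needs to stay on disk.

import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: streams the generated text back as a download
// instead of writing foo.txt anywhere on disk.
public class ExportServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.setHeader("Content-Disposition", "attachment; filename=\"foo.txt\"");
        try (OutputStream out = resp.getOutputStream()) {
            // In the real code this is where the BeanIO output would go.
            out.write("exported data goes here\n".getBytes("UTF-8"));
        }
    }
}

If the BeanIO version you use has a createWriter overload that takes a java.io.Writer, you could even point the BeanWriter straight at the response writer instead of a File.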

file getting corrupt while FTP

I am using the com.enterprisedt.net.ftp jar to FTP files to a remote location. My code looks like this:
try {
    try {
        ftp.mkdir(actDirToSearch);
    } catch (Exception e) {
        Log.addInLog(Log.ERR, e.getMessage());
    }
    ftp.chdir(actDirToSearch);
    try {
        ftp.put(tarStream, fileName);
    } catch (Exception ex) {
        throw new FTPException(ex.getMessage());
    }
} catch (FTPException e) {
    throw e;
} catch (IOException e) {
    throw e;
} finally {
    try {
        if (ftp != null) {
            ftp.quit();
        }
    } catch (Exception e) {
        Log.addInLog(Log.ERR, e.getMessage());
    }
}
I am using this code to upload a tar.gz file to two different remote machines running RHEL 5.4 and 6. Sometimes the upload succeeds, but sometimes the tar.gz file is corrupt after uploading, with a smaller size on the remote machine. While debugging I noticed that if I halt at the if (ftp != null) line and only execute ftp.quit() after some time, the upload always succeeds. Looking through the FTP code, I found no separate thread doing the transfer; it all executes serially. My question is: why does this tar.gz file get corrupted, and why does it succeed while debugging?
vsftpd is running on both machines. Doing the FTP manually from a terminal also succeeds. The Java version is 1.6.
Check your FTP settings: most FTP implementations have a setting for the type of file being transferred (text or binary), or an option to let the implementation determine the type itself.
in response to comment:
I am not familiar with the FTP protocol in great detail, but I know that FTP clients normally have a setting for "text" or "binary" files, and an option for the client to determine which kind of file is being transferred by looking at the first bytes in it. I expect this is something that is communicated by the client to the server, so that the server can do things like translate end-of-line to a set of bytes that is correct for the server's particular OS.
I gather you are not setting that, since you don't seem to know about it, and it could cause these symptoms. Read your library documentation and look for this. You refer to an 'ftp' object, look at the documentation (or the source) for that library and figure out if there isn't a way to set an option for text or binary.
You can also use a hex editor to look at the bytes in the source and the result files to see if you can spot a pattern in the corruption. Do the bytes look all right until they reach a place where the source had an end-of-line character, even though it is a binary file? Is the server stripping off the 8th bit (FTP goes back to the days of commonly used 7-bit ASCII, after all)?
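For illustration, a sketch of forcing binary mode with edtFTPj (com.enterprisedt.net.ftp); this assumes the FTPClient / FTPTransferType API of that library, and the host and credentials are placeholders, so check the method names against the version you actually have:

import java.io.InputStream;
import com.enterprisedt.net.ftp.FTPClient;
import com.enterprisedt.net.ftp.FTPTransferType;

public class BinaryUpload {
    // Assumed edtFTPj usage: setType(FTPTransferType.BINARY) forces binary
    // mode so a tar.gz is not mangled by ASCII end-of-line translation.
    public static void upload(InputStream tarStream, String fileName) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.setRemoteHost("remote.example.com");  // hypothetical host
        ftp.connect();
        ftp.login("user", "password");            // hypothetical credentials
        ftp.setType(FTPTransferType.BINARY);      // the important part
        ftp.put(tarStream, fileName);
        ftp.quit();
    }
}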

File upload with Ajax - not getting complete fileName

It is quite a common question, but I can't find an answer to it.
I have a simple HTML page with a file input (type=file) and a submit button. On clicking the submit button, I call a JS function where I try to get the complete path of the file:
var data = $('#fileName').val();
The issue is that I am not getting the complete path of the file I am uploading. I know that for security reasons Chrome gives me C:\fakepath\filename and Firefox gives me only the file name. But what should I do if I need the complete path?
PS: I will then make an Ajax call and pass that file path to the back-end, which needs it to read the file using FileReader.
You cannot get the complete path; there is no way to do that, even if you are on an intranet and have sufficient permissions.
A workaround is to provide a text field and ask the user to enter the complete path of the file.
In short, you can't have the full path of a file once it is loaded on the server side; you will just have the file name and its content as a raw byte array (among other attributes). This is not a Java issue, nor an issue with other server-side technologies; it comes from the browser implementation (though it seems IE6 had a flaw in this area).
Not directly related to your question, but this caught my attention:
PS: I will then make an Ajax call and pass that file path to the back-end, which needs it to read the file using FileReader.
Usually you can't handle a file upload using Ajax, because it could lead to security holes. Still, some browsers (like Chrome and Firefox) allow you to send a file using XMLHttpRequest, while others (like IE8 and earlier) do not, so you have to use an iframe to make Ajax file uploading work.
In order to avoid handling all these problems, I would advise you to use a third-party JS library that handles the Ajax file upload. An example is blueimp jQuery File Upload, which also has Java server-side examples (DISCLAIMER: I do not work on this project nor am I associated with blueimp in any way). Note that using this plugin requires a fair knowledge of HTML/JavaScript/jQuery/Java server side, so if you're a beginner it may take some time to make it work, but once it does it works pretty well.
I don't know which technology you are using, but you can always get the file name once the file has been uploaded to the server (using PHP or .NET, for example).
Your upload steps should look like this:
1) Upload the file to the server (e.g. /uploadedFiles/...filename).
2) Create a method that fetches the file name from the upload path.
3) Insert only the file name into the database (this gives you the flexibility to change the folder name of the uploaded documents later if required).
Generally file names are not stored as-is, to avoid name conflicts later. It is advisable to rename each file by appending a timestamp (for example minutes and seconds) to its name.
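For example, a small Java sketch of that renaming idea (the timestamp format is just one possibility; anything that keeps names unique will do):

import java.text.SimpleDateFormat;
import java.util.Date;

public class UploadNamer {
    // Builds a storage name like "report_20240131_142355.docx" so two
    // uploads of "report.docx" never collide on disk.
    public static String storageName(String originalName) {
        String stamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
        int dot = originalName.lastIndexOf('.');
        String base = (dot > 0) ? originalName.substring(0, dot) : originalName;
        String ext  = (dot > 0) ? originalName.substring(dot) : "";
        return base + "_" + stamp + ext;
    }

    public static void main(String[] args) {
        System.out.println(storageName("report.docx"));
    }
}

You would typically store both the original name and the generated storage name in the database, so the user still sees the name they uploaded.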
If you have any doubts, do ask.
Hope it helps.
Browsers block file path access from JavaScript for security reasons.
The behaviour makes sense: the server doesn't need to know where the user stores the file on their computer; it is irrelevant to the upload process.

Knowing file is complete or not, before getting the Java file Object

I am polling the file system for new files, which are uploaded by someone through a web interface.
I have to process every new file, but before that I want to ensure that the file I am processing is complete (that is, it has been completely transferred through the web interface).
How do I verify that a file has been completely uploaded before processing it?
Renaming a file is an atomic action in most (if not all) filesystems. You can make use of this by uploading the file under a recognizable temporary name and renaming it as soon as the upload is complete.
This way you will "see" only those files that have been uploaded completely and are safe to process.
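A minimal sketch of that pattern on the receiving side (names are made up; ATOMIC_MOVE requires the temporary file and the final file to live on the same filesystem, so keep them in the same directory):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafePublish {
    // Write the upload to "<name>.part" and rename it only when the stream
    // has been fully copied; the poller simply ignores "*.part" files.
    public static void save(InputStream upload, Path targetDir, String name)
            throws IOException {
        Path tmp = targetDir.resolve(name + ".part");
        Files.copy(upload, tmp, StandardCopyOption.REPLACE_EXISTING);
        Files.move(tmp, targetDir.resolve(name),
                   StandardCopyOption.ATOMIC_MOVE);
    }
}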
rsp's answer is very good. If, by any chance, it does not work for you, and if your polling code is running within a process different from the process of the web server which is saving the file, you might want to try the following:
Usually, when a file is being saved, the sharing options are "allow anyone to read" and "allow no-one to write" (exclusive write). Therefore, you can attempt to open the file with exclusive write access yourself: if this fails, you know that the web server is still holding the file open and writing to it; if it succeeds, you know that the web server is done. Of course, be sure to test it, because I cannot guarantee that this is precisely how your web server chooses to lock the file.

blocking (synchronous) ftp download in java?

I'm currently using the commons-net library for the FTP client in my app. I have to download some files from a remote server, selected by criteria based on the file name. This is a very simplified and reduced version of my actual code (in reality I do some checks and catch all possible exceptions), but the essence is there:
// ftp is an FTPClient object
// ...
files = ftp.listFiles();
for (FTPFile ftpFile : files) {
    String name = ftpFile.getName();
    if (conformsCriteria(name)) {
        String path = outDirectory + File.separatorChar + name;
        os = new FileOutputStream(path);
        ftp.retrieveFile(name, os);
    }
}
Now, what I noticed is that when I run this code, wait a few seconds, and then pull out the network cable, the output directory contains some "empty" files plus the files actually downloaded, which led me to believe that this method works somewhat asynchronously... But then again, some files are fully downloaded (size > 0 KB) while others are empty (size = 0 KB), which suggests the download is still serialized... Also, the retrieveFile() function returns, I quote the documentation:
True if successfully completed, false if not
What I need is a serialized download, because I need to log every unsuccessful download.
What I saw browsing through the commons-net source is that, if I'm not mistaken, a new Socket is created for each retrieveFile() call.
I'm pretty confused about this, so if someone could explain what is actually happening, and offer a solution with this library or recommend another Java FTP library that supports blocking download per file, that would be nice.
Thanks.
You could just use the java.net.URLConnection class, which has been around forever. It knows how to handle FTP URLs just fine. Here is a simple example that should give you the blocking behaviour you are looking for.
The caveat is that you have to manage the input/output streams yourself, but that should be pretty simple.
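A rough sketch of that approach (the URL, credentials, and paths are placeholders; the ";type=i" suffix asks for a binary transfer):

import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;

public class FtpUrlDownload {
    // Blocks until the copy loop reaches end-of-stream, so the file is
    // either fully written or an IOException has been thrown.
    public static void download(String ftpUrl, String localPath) throws Exception {
        URLConnection conn = new URL(ftpUrl).openConnection();
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(localPath)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // hypothetical URL and target path
        download("ftp://user:pass@host/dir/file.tar.gz;type=i", "file.tar.gz");
    }
}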
OK, to briefly answer this so as not to confuse people who might see this question:
Yes, commons-net FTP works as I thought it would; that is, the retrieveFile() method blocks until the download is finished.
It was (of course) my own "mistake" in the code that made me think otherwise.
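For completeness, a sketch of how the blocking retrieveFile() call can drive the per-file logging asked about above (the file-name criterion and the error handling are placeholders):

import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

public class CriteriaDownload {
    // retrieveFile() blocks until the transfer finishes (or fails), so its
    // boolean return value can be checked and logged per file.
    public static void download(FTPClient ftp, File outDirectory) throws Exception {
        for (FTPFile remote : ftp.listFiles()) {
            String name = remote.getName();
            if (!name.endsWith(".txt")) {      // placeholder criterion
                continue;
            }
            File local = new File(outDirectory, name);
            try (OutputStream os = new FileOutputStream(local)) {
                boolean ok = ftp.retrieveFile(name, os);
                if (!ok) {
                    System.err.println("Download failed: " + name
                            + " (reply: " + ftp.getReplyString() + ")");
                }
            }
        }
    }
}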
