Create NIO 2 FileSystem from current Jar - java

I'm trying to create a java.nio.file.FileSystem from the current Jar in order to extract something from inside it. However, I couldn't get the required Jar URI in any way. What is the best way to do that?

My goal was to copy a group of native files from the current Jar to a known location. To do that, I first tried to create a FileSystem object over the current Jar so I could use FileSystem's copy operation for convenience. However, that turned out not to be straightforward, so instead I read each resource as a stream and copied it to the current filesystem. (A sketch of the FileSystem-based approach follows the snippet below.)
for (String nativeFile : nativeFiles) {
    Path nativePath = dataDir.resolve(nativeFile);
    if (Files.notExists(nativePath)) {
        Files.createDirectories(nativePath.getParent());
        // open the resource stream only when needed and close it afterwards
        try (InputStream inputStream = IOHelper.class.getResourceAsStream("/" + nativeFile)) {
            Files.copy(inputStream, nativePath);
        }
    }
}
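For reference, here is a rough sketch of the FileSystem-based approach. It assumes the code really runs from a Jar and that getProtectionDomain().getCodeSource() is accessible, neither of which is guaranteed in every environment:

import java.io.IOException;
import java.net.URISyntaxException;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.List;

public final class JarResourceCopier {

    // Copies the given Jar entries into dataDir, using a zip FileSystem over the running Jar.
    static void copyNativeFiles(List<String> nativeFiles, Path dataDir)
            throws IOException, URISyntaxException {
        // Locate the Jar this class was loaded from (only works when actually packaged as a Jar).
        Path jarPath = Paths.get(JarResourceCopier.class
                .getProtectionDomain().getCodeSource().getLocation().toURI());

        // Open the Jar as its own FileSystem; it is a resource and must be closed when done.
        try (FileSystem jarFs = FileSystems.newFileSystem(jarPath, (ClassLoader) null)) {
            for (String nativeFile : nativeFiles) {
                Path source = jarFs.getPath("/" + nativeFile); // entry inside the Jar
                Path target = dataDir.resolve(nativeFile);     // location on the default filesystem
                Files.createDirectories(target.getParent());
                Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}

With the Jar mounted like this, Files.walk over jarFs paths could also replace the explicit list if a whole directory of entries needs to be copied.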

Related

Getting a resource's path

I have been searching for a way to get a File object from a file in the resources folder. I have read a lot of similar questions on this website, but none fixes my problem exactly.
Link I have already referred to: how-to-get-a-path-to-a-resource-in-a-java-jar-file, which got really close to answering my question:
String path = this.getClass().getClassLoader().getResource(<resourceFileName>)
.toExternalForm()
I am trying to have a resource file that I can write data into and then pass that file object to another part of my program. I know I could technically create a temp file, write data into it, and then pass it along, but I think that approach can take a lot of system resources, since my program will need to create a lot of these temp files.
Is there any way I can reuse one file in the resources folder? All I need is to get its path (and it needs to work in a jar). I have tried this snippet of code I created for testing; I don't really know why it returns false, because in the IDE it returns true.
public File getFile(String fileName) throws FileNotFoundException {
    // Getting file from the resources folder
    ClassLoader classLoader = getClass().getClassLoader();
    URL fileUrl = classLoader.getResource(fileName);
    if (fileUrl == null)
        throw new FileNotFoundException("Cannot find file " + fileName);
    System.out.println("before: " + fileUrl.toExternalForm());
    final String result = fileUrl.toExternalForm()
            .replace("jar:", "")
            .replace("file:", "");
    System.out.println("after: " + result);
    return new File(result);
}
Output:
before: jar:file:/C:/Users/%myuser%/Downloads/Untitlecd.jar!/Recording.wav
after: /C:/Users/%myuser%/Downloads/Untitlecd.jar!/Recording.wav
false
I have been searching for a way to get a file object from a file in the resources folder.
This is flat out impossible. The resources folder ends up jarred into your distribution, and you can't edit jar files; treat them as read-only. (Sensible deployments will generally mark their own code files, including those jars, as read-only to the running process. Even if they don't, editing jar files is extremely heavy and not something you want to do, and on Windows open files can't be edited or replaced without significant headaches.)
The 'resources' folder simply isn't designed for files that are meant to be modified.
The usual strategy is to make a directory someplace (for example under the user's home dir, accessed via System.getProperty("user.home")) and then make/edit files within that dir. If you wish, you can put templates in your resources folder and use them to 'initialize' that dir with a skeleton version.
If you have tens of thousands of files to make, whatever process needs this should be adjusted so it doesn't need it. For example, use a database (H2, perhaps, if you want to ship it with your Java app and keep the footprint as low as possible).
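As a rough illustration of that strategy (the .myapp directory name and the Recording.wav template are placeholders invented for this sketch), the application can create its own directory under the user's home and seed it from a bundled resource on first run:

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class AppDataDir {

    // Hypothetical names, purely for illustration.
    private static final String APP_DIR_NAME = ".myapp";
    private static final String TEMPLATE_RESOURCE = "/Recording.wav";

    // Returns a writable copy of the bundled template, creating it on first run.
    static Path writableCopyOfTemplate() throws IOException {
        Path appDir = Paths.get(System.getProperty("user.home"), APP_DIR_NAME);
        Files.createDirectories(appDir);

        Path target = appDir.resolve("Recording.wav");
        if (Files.notExists(target)) {
            try (InputStream in = AppDataDir.class.getResourceAsStream(TEMPLATE_RESOURCE)) {
                if (in == null) {
                    throw new FileNotFoundException("Missing resource " + TEMPLATE_RESOURCE);
                }
                Files.copy(in, target); // seed the editable file from the read-only template
            }
        }
        return target; // a real file outside the Jar that the program can modify freely
    }
}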

How to copy multiple files atomically from src to dest in java?

In one requirement, I need to copy multiple files from one location to another network location.
Let's assume that I have the following files present in the /src location:
a.pdf, b.pdf, a.doc, b.doc, a.txt and b.txt
I need to copy a.pdf, a.doc and a.txt atomically into the /dest location, all at once.
Currently I am using the java.nio.file.Files API, with code as follows:
Path srcFile1 = Paths.get("/src/a.pdf");
Path destFile1 = Paths.get("/dest/a.pdf");
Path srcFile2 = Paths.get("/src/a.doc");
Path destFile2 = Paths.get("/dest/a.doc");
Path srcFile3 = Paths.get("/src/a.txt");
Path destFile3 = Paths.get("/dest/a.txt");
Files.copy(srcFile1, destFile1);
Files.copy(srcFile2, destFile2);
Files.copy(srcFile3, destFile3);
But with this approach the files are copied one after another.
As an alternative, in order to make the whole process atomic,
I am thinking of zipping all the files, moving the zip to /dest, and unzipping it at the destination.
Is this approach correct for making the whole copy process atomic? Has anyone dealt with a similar problem and resolved it?
You can copy the files to a new temporary directory and then rename the directory.
Before renaming your temporary directory, you need to delete the destination directory
If other files are already in the destination directory that you don't want to overwrite, you can move all files from the temporary directory to the destination directory.
This is not completely atomic, however.
With removing /dest:
String tmpPath = "/tmp/in/same/partition/as/destination";
File tmp = new File(tmpPath);
tmp.mkdirs();
Path srcFile1 = Paths.get("/src/a.pdf");
Path tmpFile1 = Paths.get(tmpPath + "/a.pdf");
Path srcFile2 = Paths.get("/src/a.doc");
Path tmpFile2 = Paths.get(tmpPath + "/a.doc");
Path srcFile3 = Paths.get("/src/a.txt");
Path tmpFile3 = Paths.get(tmpPath + "/a.txt");
Files.copy(srcFile1, tmpFile1);
Files.copy(srcFile2, tmpFile2);
Files.copy(srcFile3, tmpFile3);
delete(new File("/dest"));
tmp.renameTo(new File("/dest")); // only works if tmp and /dest are on the same filesystem
void delete(File f) throws IOException {
    if (f.isDirectory()) {
        for (File c : f.listFiles())
            delete(c);
    }
    if (!f.delete())
        throw new FileNotFoundException("Failed to delete file: " + f);
}
With just overwriting the files:
String tmpPath="/tmp/in/same/partition/as/source";
File tmp=new File(tmpPath);
tmp.mkdirs();
Path srcFile1 = Paths.get("/src/a.pdf");
Path destFile1 = Paths.get("/dest/a.pdf");
Path tmp1 = Paths.get(tmpPath + "/a.pdf");
Path srcFile2 = Paths.get("/src/a.doc");
Path destFile2 = Paths.get("/dest/a.doc");
Path tmp2 = Paths.get(tmpPath + "/a.doc");
Path srcFile3 = Paths.get("/src/a.txt");
Path destFile3 = Paths.get("/dest/a.txt");
Path tmp3 = Paths.get(tmpPath + "/a.txt");
Files.copy(srcFile1, tmp1);
Files.copy(srcFile2, tmp2);
Files.copy(srcFile3, tmp3);
// Start of non-atomic section (it can be redone if necessary)
Files.deleteIfExists(destFile1);
Files.deleteIfExists(destFile2);
Files.deleteIfExists(destFile3);
Files.move(tmp1,destFile1);
Files.move(tmp2,destFile2);
Files.move(tmp3,destFile3);
//end of non-atomic section
Even though the second method contains a non-atomic section, the copy process itself uses a temporary directory, so the destination files are not overwritten until all copies have completed.
If the process aborts during moving the files, it can easily be completed.
See https://stackoverflow.com/a/4645271/10871900 as reference for moving files and https://stackoverflow.com/a/779529/10871900 for recursively deleting directories.
First, there are several ways to copy a file or a directory; Baeldung gives a very nice overview of the different possibilities. Additionally, you can also use FileCopyUtils from Spring. Unfortunately, none of these methods is atomic.
I found an older post and adapted it a little bit. You can try using Spring's low-level transaction management support: you make a transaction out of the method and define what should be done on rollback. There is also a nice article from Baeldung about it.
@Autowired
private PlatformTransactionManager transactionManager;

@Transactional(rollbackOn = IOException.class)
public void copy(List<File> files) throws IOException {
    TransactionDefinition transactionDefinition = new DefaultTransactionDefinition();
    TransactionStatus transactionStatus = transactionManager.getTransaction(transactionDefinition);
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCompletion(int status) {
            if (status == STATUS_ROLLED_BACK) {
                // try to delete created files
            }
        }
    });
    try {
        // copy files
        transactionManager.commit(transactionStatus);
    } catch (RuntimeException | IOException e) {
        transactionManager.rollback(transactionStatus);
        throw e;
    }
}
Or you can use a simple try-catch block: if an exception is thrown, you delete the files created so far.
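A minimal sketch of that try-catch variant, with no Spring involved: copy the files one by one and, if anything fails, make a best-effort attempt to delete whatever was already copied.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public final class BestEffortCopy {

    // Copies sources.get(i) to targets.get(i); on failure, deletes what was copied and rethrows.
    static void copyAllOrNothing(List<Path> sources, List<Path> targets) throws IOException {
        List<Path> copied = new ArrayList<>();
        try {
            for (int i = 0; i < sources.size(); i++) {
                Files.copy(sources.get(i), targets.get(i));
                copied.add(targets.get(i));
            }
        } catch (IOException e) {
            for (Path p : copied) {
                try {
                    Files.deleteIfExists(p); // best-effort cleanup
                } catch (IOException suppressed) {
                    e.addSuppressed(suppressed);
                }
            }
            throw e;
        }
    }
}

This still isn't atomic (a reader can observe a partially copied set for a moment), but it avoids leaving half the files behind after a failure.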
Your question doesn't say what you need the atomicity for. Even unzipping is never atomic; the VM might crash with an OutOfMemoryError right in the middle of inflating the blocks of the second file. Then one file would be complete, a second incomplete, and a third missing entirely.
The only thing I can think of is a two-phase commit, like all the suggestions with a temporary destination that suddenly becomes the real target. This way you can be sure that the second operation either never occurs or creates the final state.
Another approach would be to write a sort of cheap checksum file into the target afterwards. This would make it easy for an external process to listen for the creation of such files and verify their content against the files found.
The latter would be much the same as offering the container/ZIP/archive right away instead of piling files into a directory. Most archives have or support integrity checks.
(Operating systems and file systems also differ in behaviour if directories or folders disappear while being written. Some accept it and write all data to a recoverable buffer. Others still accept writes but don't change anything. Others fail immediately upon first write since the target block on the device is unknown.)
FOR ATOMIC WRITE:
There is no multi-file atomicity concept in standard filesystems, so you need to reduce the work to a single action that is atomic.
Therefore, to write several files atomically, you create a folder with, say, a timestamp in its name and copy the files into this folder.
Then, you can either rename it to the final destination or create a symbolic link.
You can use anything similar to this, like file-based volumes on Linux, etc.
Remember that deleting the existing symbolic link and creating a new one will never be atomic, so you would need to handle that situation in your code and switch to the renamed/linked folder once it's available, instead of removing and re-creating a link. Under normal circumstances, though, removing and creating a new link is a really fast operation.
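A sketch of the staging-folder-plus-rename idea with NIO, assuming the staging directory is created next to (that is, on the same filesystem as) the final destination so the last move can be atomic, and that the destination directory does not exist yet:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;

public final class StagedPublish {

    // Copies the files into a staging directory next to the destination,
    // then moves the whole directory into place in a single operation.
    static void publish(List<Path> sources, Path finalDir) throws IOException {
        Path staging = finalDir.resolveSibling(finalDir.getFileName() + "-staging-" + System.nanoTime());
        Files.createDirectories(staging);

        for (Path src : sources) {
            Files.copy(src, staging.resolve(src.getFileName().toString()));
        }

        // The directory move is the single "commit" step; ATOMIC_MOVE throws if the
        // filesystem cannot perform it atomically, so readers either see no directory
        // or the complete one.
        Files.move(staging, finalDir, StandardCopyOption.ATOMIC_MOVE);
    }
}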
FOR ATOMIC READ:
Well, the problem is not in the code, but at the operating system/filesystem level.
Some time ago, I was in a very similar situation. There was a database engine running and changing several files "at once". I needed to copy the current state, but the second file had already changed before the first one was copied.
There are two different options:
Use a filesystem with support for snapshots. At some moment, you create a snapshot and then copy files from it.
You can lock the filesystem (on Linux) using fsfreeze --freeze, and unlock it later with fsfreeze --unfreeze. When the filesystem is frozen, you can read the files as usual, but no process can change them.
Neither of these options worked for me, as I couldn't change the filesystem type and locking the filesystem wasn't possible (it was the root filesystem).
So I created an empty file, mounted it as a loop device, and formatted it. From that moment on, I could fsfreeze just my virtual volume without touching the root filesystem.
My script first called fsfreeze --freeze /my/volume, then performed the copy action, and then called fsfreeze --unfreeze /my/volume. For the duration of the copy, the files couldn't be changed, so the copied files were all from exactly the same moment in time - for my purpose, it was like an atomic operation.
By the way, be sure not to fsfreeze your root filesystem :-). I did, and a restart was the only solution.
DATABASE-LIKE APPROACH:
Even databases cannot rely on multi-file atomic operations, so they first write the change to a WAL (write-ahead log) and flush it to storage. Once it's flushed, they can apply the change to the data file.
If there is any problem or crash, the database engine first loads the data file, checks whether there are unapplied transactions in the WAL, and applies them if needed.
This is also called journaling, and it's used by some filesystems (ext3, ext4).
I hope this solution is useful. As I understand it, you need to copy the files from one directory to another directory, so my solution is as follows. Thank you!
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class CopyFilesDirectoryProgram {

    public static void main(String[] args) throws IOException {
        String sourcedirectoryName = "/source/path";      // mention your source path
        String targetdirectoryName = "/destination/path"; // mention your destination path
        File sdir = new File(sourcedirectoryName);
        File tdir = new File(targetdirectoryName);
        // Call the method for execution
        copyFileOrDirectory(sdir, tdir);
    }

    // Copies a single file, or recurses into a directory.
    private static void copyFileOrDirectory(File sdir, File tdir) throws IOException {
        if (sdir.isDirectory()) {
            copyFilesFromDirectory(sdir, tdir);
        } else {
            Files.copy(sdir.toPath(), tdir.toPath());
        }
    }

    private static void copyFilesFromDirectory(File source, File target) throws IOException {
        if (!target.exists()) {
            target.mkdirs();
        }
        for (String item : source.list()) {
            copyFileOrDirectory(new File(source, item), new File(target, item));
        }
    }
}

Converting a Jar-URI into a nio.Path

I'm having trouble converting a URI into a nio.Path in the general case. Given a URI with nested schemes (jar:file:...), I wish to create a single nio.Path instance that reflects this URI.
//setup
String jarEmbeddedFilePathString = "jar:file:/C:/Program%20Files%20(x86)/OurSoftware/OurJar_x86_1.0.68.220.jar!/com/our_company/javaFXViewCode.fxml";
URI uri = URI.create(jarEmbeddedFilePathString);
//act
Path nioPath = Paths.get(uri);
//assert --any of these are acceptable
assertThat(nioPath).isEqualTo("C:/Program Files (x86)/OurSoftware/OurJar_x86_1.0.68.220.jar/com/our_company/javaFXViewCode.fxml");
//--or assertThat(nioPath).isEqualTo("/com/our_company/javaFXViewCode.fxml");
//--or assertThat(nioPath).isEqualTo("OurJar_x86_1.0.68.220.jar!/com/our_company/javaFXViewCode.fxml")
//or pretty well any other interpretation of jar'd-uri-to-path any reasonable person would have.
This code currently throws FileSystemNotFoundException on the Paths.get() call.
The actual reason for this conversion is to ask the resulting path about its package location and file name. In other words, as long as the resulting path object preserves the ...com/our_company/javaFXViewCode.fxml portion, it's still very convenient for us to use the NIO Path object.
Most of this information is actually used for debugging, so it would not be impossible for me to retrofit our code to avoid the use of Paths in this particular instance and use URIs or simply strings instead, but that would mean retooling a bunch of methods already conveniently provided by nio.Path.
I've started digging into the file system provider API and have been confronted with more complexity than I wish to deal with for such a small thing. Is there a simple way to convert from a class-loader provided URI to a path object corresponding to OS-understandable traversal in the case of the URI pointing to a non-jar file, and not-OS-understandable-but-still-useful traversal in the case where the path would point to a resource inside a jar (or for that matter a zip or tarball)?
Thanks for any help
A Java Path belongs to a FileSystem. A file system is implemented by a FileSystemProvider.
Java comes with two file system providers: One for the operating system (e.g. WindowsFileSystemProvider), and one for zip files (ZipFileSystemProvider). These are internal and should not be accessed directly.
To get a Path to a file inside a Jar file, you need to get (create) a FileSystem for the content of the Jar file. You can then get a Path to a file in that file system.
First, you'll need to parse the Jar URL, which is best done using the JarURLConnection:
URL jarEntryURL = new URL("jar:file:/C:/Program%20Files%20(x86)/OurSoftware/OurJar_x86_1.0.68.220.jar!/com/our_company/javaFXViewCode.fxml");
JarURLConnection jarEntryConn = (JarURLConnection) jarEntryURL.openConnection();
URL jarFileURL = jarEntryConn.getJarFileURL(); // file:/C:/Program%20Files%20(x86)/OurSoftware/OurJar_x86_1.0.68.220.jar
String entryName = jarEntryConn.getEntryName(); // com/our_company/javaFXViewCode.fxml
Once you have those, you can create a FileSystem and get a Path to the jar'd file. Remember that FileSystem is an open resource and needs to be closed when you are done with it:
Path jarPath = Paths.get(jarFileURL.toURI());
try (FileSystem jarFileSystem = FileSystems.newFileSystem(jarPath, (ClassLoader) null)) {
    Path entryPath = jarFileSystem.getPath(entryName);
    System.out.println("entryPath: " + entryPath); // com/our_company/javaFXViewCode.fxml
    System.out.println("parent: " + entryPath.getParent()); // com/our_company
}
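If going through JarURLConnection feels heavyweight, an alternative sketch (using the jar URI from the question, which of course must point to a Jar that actually exists on the machine) is to hand the jar: URI straight to FileSystems.newFileSystem and let Paths.get resolve the entry once the filesystem is open:

import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;

public final class JarUriToPath {
    public static void main(String[] args) throws Exception {
        URI uri = URI.create("jar:file:/C:/Program%20Files%20(x86)/OurSoftware/"
                + "OurJar_x86_1.0.68.220.jar!/com/our_company/javaFXViewCode.fxml");

        // Mounting the jar: URI registers a zip FileSystem; after that, Paths.get(uri)
        // no longer throws FileSystemNotFoundException.
        try (FileSystem zipFs = FileSystems.newFileSystem(uri, Collections.emptyMap())) {
            Path entryPath = Paths.get(uri);
            System.out.println("entryPath: " + entryPath);               // /com/our_company/javaFXViewCode.fxml
            System.out.println("fileName: " + entryPath.getFileName());  // javaFXViewCode.fxml
            System.out.println("parent: " + entryPath.getParent());      // /com/our_company
        }
    }
}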

creating a virtual file system with JIMFS

I'd like to use Google's JIMFS for creating a virtual file system for testing purposes. I have trouble getting started, though.
I looked at this tutorial: http://www.hascode.com/2015/03/creating-in-memory-file-systems-with-googles-jimfs/
However, when I create the file system, it actually gets created in the existing file system, i. e. I cannot do:
Files.createDirectory("/virtualfolder");
because I am denied access.
Am I missing something?
Currently, my code looks something like this:
Test Class:
FileSystem fs = Jimfs.newFileSystem(Configuration.unix());
Path vTargetFolder = fs.getPath("/Store/homes/linux/abc/virtual");
TestedClass test = new TestedClass(vTargetFolder.toAbsolutePath().toString());
Java class somewhere:
targetPath = Paths.get(targetName);
Files.createDirectory(targetPath);
// etc., creating files and writing them to the target directory
However, I created a separate class just to test JIMFS, and here the creation of the directory doesn't fail, but I cannot create a new file like this:
FileSystem fs = Jimfs.newFileSystem(Configuration.unix());
Path data = fs.getPath("/virtual");
Path dir = Files.createDirectory(data);
Path file = Files.createFile(Paths.get(dir + "/abc.txt")); // throws NoSuchFileException
What am I doing wrong?
The problem is a mix of Default FileSystem and new FileSystem.
Problem 1:
Files.createDirectory("/virtualfolder");
This will actually not compile so I suspect you meant:
Files.createDirectory( Paths.get("/virtualfolder"));
This attempts to create a directory in the root directory of the default filesystem. You need privileges to do that and probably should not do it in a test. I suspect you tried to work around this problem by using strings and ran into
Problem 2:
Let's look at your code, with comments:
FileSystem fs = Jimfs.newFileSystem(Configuration.unix());
// now get path in the new FileSystem
Path data = fs.getPath("/virtual");
// create a directory in the new FileSystem
Path dir = Files.createDirectory(data);
// create a file in the default FileSystem
// with a parent that was never created there
Path file = Files.createFile(Paths.get(dir + "/abc.txt")); // throws NoSuchFileException
Let's look at the last line:
dir + "/abc.txt" >> is the string "/virtual/abc.txt"
Paths.get(dir + "/abc.txt") >> is that string as a path in the default filesystem
Remember: the virtual filesystem lives parallel to the default filesystem.
Paths belong to a filesystem and cannot be used in another filesystem; they are not just names.
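A short demo of that point (assuming Jimfs is on the classpath): the same string produces two unrelated Path objects depending on which filesystem creates it.

import com.google.common.jimfs.Configuration;
import com.google.common.jimfs.Jimfs;

import java.io.IOException;
import java.nio.file.FileSystem;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class FilesystemContextDemo {
    public static void main(String[] args) throws IOException {
        try (FileSystem jimfs = Jimfs.newFileSystem(Configuration.unix())) {
            Path virtual = jimfs.getPath("/virtual/abc.txt"); // lives in the in-memory filesystem
            Path real = Paths.get("/virtual/abc.txt");        // lives in the default (on-disk) filesystem
            System.out.println(virtual.getFileSystem() == real.getFileSystem()); // false
        }
    }
}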
Notes:
When working with virtual filesystems, avoid the Paths class. That class always works in the default filesystem. Files is fine, because you create the path in the correct filesystem first.
If your original plan was to work with a virtual filesystem mounted into the default filesystem, you need a bit more. I have a project where I create a WebDAV server on top of a virtual filesystem and then use the OS's built-in methods to mount that as a volume.
In your shell, try ls /
The output should contain the /virtual directory.
If it does not (which I suspect is the case), then:
The program is masking a:
java.nio.file.AccessDeniedException: /virtual/abc.txt
In reality the code should be failing at Path dir = Files.createDirectory(data);
But for some reason this exception is silent, and the program continues without creating the directory (though it thinks it has), then attempts to write to a directory that doesn't exist,
leaving a misleading java.nio.file.NoSuchFileException.
I suggest you use memoryfilesystem instead. It has a much more complete implementation than Jimfs; in particular, it supports POSIX attributes when creating a "Linux" filesystem etc.
Using it, your code will actually work:
try (
final FileSystem fs = MemoryFileSystemBuilder.newLinux()
.build("testfs");
) {
// create a directory, a file within this directory etc
}
Seems like instead of
Path file = Files.createFile(Paths.get(dir + "/abc.txt"));
You should be doing
Path file = Files.createFile(dir.resolve("abc.txt"));
This way, the context of dir (its filesystem) is not lost, and the file is actually created inside dir.
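Putting the pieces together, a self-contained sketch (again assuming Jimfs is on the classpath) in which every path is created through the in-memory filesystem, so nothing leaks into the default one:

import com.google.common.jimfs.Configuration;
import com.google.common.jimfs.Jimfs;

import java.io.IOException;
import java.nio.file.FileSystem;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;

public final class JimfsDemo {
    public static void main(String[] args) throws IOException {
        try (FileSystem fs = Jimfs.newFileSystem(Configuration.unix())) {
            Path dir = Files.createDirectories(fs.getPath("/virtual"));
            // dir.resolve keeps the Jimfs context; Paths.get would switch to the default filesystem.
            Path file = Files.createFile(dir.resolve("abc.txt"));
            Files.write(file, Collections.singletonList("hello"));
            System.out.println(Files.readAllLines(file)); // prints [hello]
        }
    }
}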

Java fails in moving (renaming) a file when the resulting file is on another filesystem

A program of ours erred when trying to move files from one directory to another. After much debugging, I located the error by writing a small utility program that just moves a file from one directory to another (code below). It turns out that while moving files around on the local filesystem works fine, moving a file to another filesystem fails.
Why is this? The question might be platform specific - we are running Linux on ext3, if that matters.
And the second question: should I have been using something other than the renameTo() method of the File class? It seems as if it only works on local filesystems.
Tests (run as root):
touch /tmp/test/afile
java FileMover /tmp/test/afile /root/
The file move was successful
touch /tmp/test/afile
java FileMover /tmp/test/afile /some_other_disk/
The file move was erroneous
Code:
import java.io.File;

public class FileMover {
    public static void main(String[] arguments) throws Exception {
        boolean success;
        File file = new File(arguments[0]);
        File destinationDir = new File(arguments[1]);
        File destinationFile = new File(destinationDir, file.getName());
        success = file.renameTo(destinationFile);
        System.out.println("The file move was " + (success ? "successful" : "erroneous"));
    }
}
Java 7 and above
Use Files.move(Path source, Path target, CopyOption... opts).
Note that you must not provide the ATOMIC_MOVE option when moving files between file systems.
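For example, here is a sketch of the FileMover utility from the question rewritten with Files.move; the copy-and-delete fallback is applied automatically when the target is on a different filesystem:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class FileMoverNio {
    public static void main(String[] args) throws IOException {
        Path source = Paths.get(args[0]);
        Path targetDir = Paths.get(args[1]);

        // Unlike File.renameTo, Files.move falls back to copy + delete when the target
        // is on a different filesystem, as long as ATOMIC_MOVE is not requested.
        Files.move(source, targetDir.resolve(source.getFileName()),
                StandardCopyOption.REPLACE_EXISTING);

        System.out.println("The file move was successful");
    }
}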
Java 6 and below
From the docs of File.renameTo:
[...] The rename operation might not be able to move a file from one filesystem to another [...]
The obvious workaround would be to copy the file "manually": open a new file, write the content to it, and then delete the old file.
You could also try the FileUtils.moveFile method from Apache Commons.
Javadoc to the rescue:
Many aspects of the behavior of this method are inherently
platform-dependent: The rename operation might not be able to move a
file from one filesystem to another, it might not be atomic, and it
might not succeed if a file with the destination abstract pathname
already exists. The return value should always be checked to make sure
that the rename operation was successful.
Note that the Files class defines the move method to move or rename a
file in a platform independent manner.
From the docs:
Renames the file denoted by this abstract pathname.
If you want to move a file between different filesystems, you can use Apache Commons IO's FileUtils.moveFile.
I think the error is in your destination path:
/some_other_disk/ is not a complete path to the destination, so the destination cannot be found.
Here is an example:
java FileMover D:\Eclipse33_workspace_j2ee\test\src\a\a.txt D:\Eclipse33_workspace_j2ee\test\src
The file move was successful
java FileMover D:\Eclipse33_workspace_j2ee\test\src\a\a.txt \Eclipse33_workspace_j2ee\test\src
The file move was erroneous
The result shows that the error is in the path.
