This question already has answers here:
How to recover deleted files using Java? [closed]
(2 answers)
Closed 4 years ago.
This is my code to delete '.txt' files from a specific folder (/home/user). But once a file is deleted, I don't know how to recover it.
Please show me some example code (if possible) to achieve this.
Is it possible in Java? If so, please help me. (I am happy to use any other language.)
import java.io.File;

class CountFiles
{
    public static void main(String[] args)
    {
        String dirPath = "/home/user";
        File f = new File(dirPath);
        File[] files = f.listFiles();
        if (files == null)
        {
            System.out.println("Not a readable directory: " + dirPath);
            return;
        }
        System.out.println("Number of files in folder: " + files.length);
        for (int i = 0; i < files.length; i++)
        {
            if (files[i].isFile())
            {
                String fileName = files[i].getName();
                System.out.println(fileName);
                if (fileName.endsWith(".txt") || fileName.endsWith(".TXT"))
                {
                    boolean success = files[i].delete();
                    if (!success) throw new IllegalArgumentException("Delete: deletion failed");
                    else System.out.println("file deleted");
                }
            }
        }
    }
}
Yes, it is possible in Java; however, for low-level or OS-specific tasks it may be better to use a native language.
The information below applies to Java and other languages alike:
If you have control of the files before they are deleted then you can simply make a copy/backup of the file before you delete it.
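If you do have control beforehand, the copy-then-delete idea is a one-liner with NIO. A minimal sketch (the backup directory name is my own illustrative assumption, not part of the question):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class BackupThenDelete {
    // Copies the file into backupDir before deleting it, so it can be
    // restored later. backupDir is a hypothetical location of your choosing.
    public static Path backupAndDelete(Path file, Path backupDir) throws IOException {
        Files.createDirectories(backupDir);
        Path backup = backupDir.resolve(file.getFileName());
        Files.copy(file, backup, StandardCopyOption.REPLACE_EXISTING);
        Files.delete(file);
        return backup;
    }
}
```

"Recovering" is then just copying the backup back into place.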
If you want to recover data from a system trash/recycle bin then you need a specific solution for each OS.
However, if you are not able to back up the files first, and do not have access to a trash folder, then you can never be 100% sure that you can recover the data (see the edit below for more details). You can, however, read raw data from the storage device. This is an incredibly complex and advanced subject; if you have to ask the question, you should not be trying to write code for it without first doing a lot of your own research.
Before reading on, refer to this answer showing how you can read raw bytes from a storage device: How to recover deleted files using Java?
After reading the accepted answer you also need to consider:
- You can access the drive sector by sector, but you will have to interpret the data differently for different file systems (FAT, NTFS, HPFS, and more).
- You cannot use a file path to get a deleted file; you need to scan the whole drive and make an educated guess at what to recover, or ask for user input so they can choose what to recover.
- The task can be a long and complex one, and you have to interpret the raw data to see if it is a valid plain text file, as well as find its start and end.
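To give a taste of what "interpret the raw data" means, here is a toy heuristic of my own (not from the linked answer, and far simpler than real file-carving tools) that checks whether a block of raw bytes could plausibly be a plain text file:

```java
public class TextSniffer {
    // Rough heuristic: treat the block as plain text if at least 95% of the
    // bytes are printable ASCII or common whitespace. Real recovery tools
    // use far more sophisticated carving techniques; this is only a sketch.
    public static boolean looksLikeText(byte[] block) {
        if (block.length == 0) return false;
        int printable = 0;
        for (byte b : block) {
            int c = b & 0xFF;
            if ((c >= 0x20 && c <= 0x7E) || c == '\n' || c == '\r' || c == '\t') {
                printable++;
            }
        }
        return printable >= block.length * 0.95;
    }
}
```

You would run something like this over candidate sectors read from the raw device to decide which ones might belong to a deleted .txt file.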
Edit to include the comment from Bill:
Once a file has been deleted, any further action that the computer takes (Including using recovery tools) has the potential to write over the data that you want to recover. The best solution is to force shutdown the PC, and clone the drive so that data recovery can be done on another PC using the clone while keeping the original safe.
Currently, I am downloading a jar file from a website; the classes are further processed, but the resources aren't. This means I do not need to decompress the resources when reading from the URL and recompress them when writing to a file.
However, given a ZipInputStream, there is no method I am aware of to read a zip entry's compressed data and write it directly to a file with NIO. Normally with NIO and plain files I can use the Files#copy function for this, but I am downloading these files from the network, therefore I do not have this luxury.
Essentially, I have a ZipInputStream and an NIO FileSystem for a zip file; how do I copy some (not all) of the data from this input stream to the file without decompressing and recompressing each entry?
It's not clear what you are asking here.
zip to zip
Do you mean: You want to stream a zip file across a network, saving it to the local machine on disk, but only some of the files. You want to do this without actually doing any (de)compression. For example, if the stream contains a zip with 18 files in it, you want to save the 8 files whose name doesn't end in .class, but in a fashion that streams the compressed bytes straight from the network into a zipped file without any de- or recompression.
In that sense it is equivalent to saving the zip file from network to disk and then attempting to efficiently wipe out some of the entries. Except in one go.
This is a bad idea. There are no easy answers here. It is technically possible, with so many caveats that I'm pretty sure you wouldn't want this.
If you need more context as to why that is, scroll down to the end of this answer.
zip to files
If you just mean: "I want to stream a zip from the network, skipping some of them without decompressing the skipped items or saving them to disk at all (compressed or not), and writing the ones I want to keep straight from network to disk, decompressing them on the fly" - that's simple.
Use .getNextEntry() to skip entries. Between calls, treat the ZipInputStream as the stream of a single entry: it EOFs at the end of that entry until you move to the next one, which makes this 'work'.
Here is an example that reads all entries from a zip file, skips all entries that end in .class, and writes all the other ones to disk, uncompressing on the fly:
public void unpackResources(Path zip, Path tgt) throws IOException {
    try (InputStream raw = Files.newInputStream(zip);
         ZipInputStream zin = new ZipInputStream(raw)) {
        for (ZipEntry entry = zin.getNextEntry(); entry != null; entry = zin.getNextEntry()) {
            if (entry.getName().endsWith(".class")) continue;
            Path to = tgt.resolve(entry.getName());
            Files.createDirectories(to.getParent()); // ensure the target directory exists
            try (OutputStream out = Files.newOutputStream(to)) {
                zin.transferTo(out);
            }
        }
    }
}
in.transferTo(out) is the Input/OutputStream equivalent of Files.copy. It reads bytes from in and tosses them straight into out until in says that there are no more bytes to give.
Context: Why is zip-to-stripped-zip not feasible?
Compression is extremely inefficient at times if you treat each file in a batch entirely on its own: After all, then you cannot take advantage of duplicated patterns between files. Imagine compressing a database of common baby names, where the input data consists of 1 file per name, and they just contain the text Name: Joanna, over and over again. You really need to take advantage of those repeated Name: entries to get good compression rates.
If a compression format does it right, then what you want doesn't really work: You'd have a single table (I'm oversimplifying how compression works here) that maps shorter sequences onto longer ones, but it is designed for efficient storage of the entire deal. If you strip out half the files, that table is probably not at all efficient anymore. If you don't copy over the table, the compressed bytes don't mean anything.
Some compression formats do it wrong and treat each file entirely on its own, scoring rather badly on the 'name files' test. ZIP is, unfortunately, such a format. That does mean that, technically, streaming the compressed data straight into a file while stripping out some entries can be done without de/recompressing, assuming a zip file that uses the usual algorithms (ZIP is not so much an algorithm as a container format; however, 99% of the zips out there use one specific algorithm, and many zip readers fail on the others). Encryption is probably also going to cause issues here.
Given that it's a bit odd, generally libraries for compression just don't offer this feature; it can't be done except, specifically, to common zip files.
You'd have to write it yourself, and I'm not sure it is worth doing. De- and recompressing is quite fast (zipping was doable 30 years ago; sprinkle some Moore's law over that and you get a sense of how trivial it is these days; your disk will be the bottleneck, not the CPU, even with fast SSDs).
I have an object which I want to load into memory when the program starts.
My question is:
Is it better to put the objects inside the (JAR) package, or in a folder alongside the program?
Which way is faster for reading the object?
EDIT:
public MapStandard loadFromFileMS(String nameOfFile) {
    MapStandard hm = null;
    /*
    // classpath alternative:
    InputStream inputStream = getClass().getClassLoader()
            .getResourceAsStream("data/" + nameOfFile + ".data");
    */
    try (FileInputStream inputStream = new FileInputStream("C:\\" + nameOfFile + ".data");
         ObjectInputStream is = new ObjectInputStream(inputStream)) {
        hm = (MapStandard) is.readObject();
    } catch (IOException | ClassNotFoundException e) {
        System.out.println("Error: " + e);
    }
    return hm;
}
In theory it is faster to read a file from a directory than from a JAR file. A JAR file is basically a zip file with some metadata (MANIFEST.MF), so reading from a JAR includes unzipping the content.
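For completeness, the two loading variants look like this in code (a sketch; the `data/` resource path follows the question's commented-out code, and `readAllBytes` requires Java 9+):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ObjectSource {
    // Reads the raw bytes behind either variant; deserialization would then
    // wrap the stream in an ObjectInputStream as in the question's code.
    public static byte[] readAll(InputStream in) throws IOException {
        try (InputStream s = in) {
            return s.readAllBytes(); // Java 9+
        }
    }

    // Variant 1: from the classpath (works when the .data file is packed
    // inside the JAR). Returns null if the resource is not on the classpath.
    public static InputStream fromJar(String nameOfFile) {
        return ObjectSource.class.getClassLoader()
                .getResourceAsStream("data/" + nameOfFile + ".data");
    }

    // Variant 2: from a folder on the file system next to the program.
    public static InputStream fromFolder(Path dataDir, String nameOfFile) throws IOException {
        return Files.newInputStream(dataDir.resolve(nameOfFile + ".data"));
    }
}
```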
I don't think that there is a clear answer. Of course, reading a compressed archive requires time to un-compress. But: CPU cycles are VERY cheap. The time it takes to read a smaller archive and extract its content might still be quicker than reading "much more" content directly from the file system. You can do A LOT of computations while waiting for your IO to come in.
On the other hand: do you really think that the loading of this file is a performance bottleneck?
There is an old saying that the root of all evil is premature optimization.
If you or your users complain about bad performance - only then you start analyzing your application; for example using a profiler. And then you can start to fix those performance problems that REALLY cause problems; not those that you "assume" to be problematic.
And finally: if we are talking about such huge dimensions, then you SHOULD not ask for Stack Overflow opinions, but start measuring exact times yourself! We can only assume; you have all the data in front of you, you just have to collect it!
A qualified guess would be that when the program starts the jar file entry will load a lot faster than the external file, but repeated usages will be much more alike.
The reason is that the limiting factor here on modern computers is "how fast can the bytes be retrieved from disk", and for jar files the zip file is already being read by the JVM, so many of the bytes needed are already loaded and do not have to be read again. An external file needs a completely separate "open-read" dance with the operating system. Later, both will be in the disk read cache maintained by the operating system, so the difference is negligible.
Considerations about CPU usage are not really necessary. A modern CPU can do a lot of uncompressing in the time needed to read extra data from disk.
Note that reading through the jar file makes it automatically write protected. If you need to update the contents you need an external file.
I am attempting to create something like an iso of the hard drive of a computer in java, but with no data in the files. Like a file tree, but an iso. This happens on client A.
The point of this is to transfer this ISO file tree over GAE to another client (Let's say client B) who should be able to mount the iso on his computer using windows explorer.
The above is what I want to achieve - I know it is very specific, sorry about this. However, all I want to know is how to create an ISO (or some other mountable image of a hard drive) that contains no data in the files.
No data in the files = the files are still there (I must be able to see their names), but they are empty. You know, open them with Notepad and all you get is "" in the file. Or a space. Whatever. The point is to make the ISO small in size so I can transfer it to client B, instead of transferring the whole hard drive. After this, client B can choose the files he wants to fetch from the other computer, but that's a different story.
The question:
How to create something like an ISO of the hard drive of a computer in Java, but with no data in the files?
Feel free to recommend a solution that has the same functionality but takes a different approach.
Update:
Scrapped the ISO approach. Created an object with lots of trees of files instead. Contact me somehow if you want to do the same.
Just to copy the entire file structure with empty files:
// you may want to actually handle the IOException rather than just throwing it
public static void main(String[] args) throws IOException
{
    makeFileStructure(new File("someDirectory"), "someDestinationDirectory");
}

static void makeFileStructure(File src, String dest) throws IOException
{
    File[] children = src.listFiles();
    if (children == null)
        throw new IOException("Not a readable directory: " + src);
    for (File f : children)
    {
        String newPath = dest + File.separatorChar + f.getName();
        if (f.isDirectory())
        {
            if (!new File(newPath).mkdirs())
                // you may want to handle this better
                throw new IOException("Directory could not be created!");
            makeFileStructure(f, newPath);
        }
        else
            new File(newPath).createNewFile();
    }
}
Just make sure "someDestinationDirectory" isn't a subdirectory of "someDirectory", otherwise things will obviously not go very well.
Pretty sure you'll need an external library for creating an ISO image (if you want to create it). Try Googling it. But it might be easier to just do it with an external application and a batch file (after having run the above code).
How to open file in shared mode in Java to allow other users to read and modify the file?
Thanks
In case you're asking about the Windows platform, where files are locked at filesystem level, here's how to do it with Java NIO:
Files.newInputStream(path, StandardOpenOption.READ)
And a demonstration that it actually works:
File file = new File("<some existing file>");
try (InputStream in = Files.newInputStream(file.toPath(), StandardOpenOption.READ)) {
System.out.println(file.renameTo(new File("<some other name>")));
}
Will print true, because a file open in shared-read mode may be moved.
For more details refer to java.nio.file.StandardOpenOption.
I'm not entirely sure I know what you mean, but if you mean concurrent modification of the file, that is not a simple process. Actually, it's pretty involved and there's no simple way to do that, off the top of my head you'd have to:
Decide whether the file gets refreshed for every user when someone else modifies it, losing all changes or what to do in that case;
Handle diffing & merging, if necessary;
Handle synchronization for concurrent writing to the same file, so that when two users write at the same time the content doesn't end up as gibberish; e.g., if one user wants to write "foo" and another wants to write "bar", the content might end up as "fbaroo" without synchronization.
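For that last synchronization point, java.nio's FileLock gives OS-level advisory locking. A minimal sketch (assuming all writers cooperate by taking the lock before writing):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockedAppend {
    // Appends a line while holding an exclusive lock on the whole file,
    // so two cooperating processes cannot interleave their writes.
    public static void appendLine(Path file, String line) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.APPEND)) {
            try (FileLock lock = ch.lock()) {
                ch.write(ByteBuffer.wrap((line + System.lineSeparator()).getBytes()));
            }
        }
    }
}
```

Note that the lock is advisory on most platforms: it only protects against other processes that also call lock().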
If you just want to open a file in read-only mode, all you gotta do is open it via FileInputStream or something similar, an object that only permits reading operations.
How can I be sure if a file was processed before? There is a remote storage location which is a file source for my application. My program gets files from this location and processes them in a scheduled way. How can I be sure that the next time I fetch only non-processed files? I'm thinking about using file attributes. The archive and modified date can be a solution. But I learned that two bits of file attributes are not used. How can I use these fields in Java? By the way I don't want to use a database.
A common strategy is to use some form of hash function to create a checksum. Record the checksum of the file, and compare the list of processed files identified by checksum against the file in question. If the checksum of the file in question is in the list, you have already processed it.
Protect your list of processed file checksums. If you lose it, or it becomes corrupted, it might be a long, bad day.
To prevent unnecessary network traffic, you might consider preparing 'check' files on the remote repository that contain a checksum that corresponds to a potential input file.
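A checksum of that kind can be computed with java.security.MessageDigest. A sketch (SHA-256 is my choice here; the strategy above does not mandate a specific hash):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Checksums {
    // Returns the SHA-256 digest of a file as a lowercase hex string.
    // For very large files you would feed the digest in chunks instead
    // of reading the whole file into memory.
    public static String sha256Hex(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

You would record these strings in your list of processed files and compare against them before fetching.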
EDIT:
Upon further comment, it is possible to directly interact with file system attributes. Java 7 (NIO.2) introduces file-system-specific attribute views for this. The view you would be interested in is 'DosFileAttributeView'.
Basic use might be something similar to this ('input' is a Java 'Path' to the file; add necessary exception handling):
// import as necessary from java.nio.file and java.nio.file.attribute
DosFileAttributeView view = Files.getFileAttributeView(input, DosFileAttributeView.class);
// check if the file system supports this view
if (view != null)
{
    DosFileAttributes attributes = view.readAttributes();
    // skip any file already marked as an archive
    if (!attributes.isArchive())
    {
        myObject.process(input);
        view.setArchive(true);
    }
}
Can you rename the file (e.g. "filename.archive"), or move it into an "archive" subdirectory?
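That rename can be done atomically where the file system supports it. A sketch with NIO (the ".archive" suffix follows the suggestion above):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ArchiveMarker {
    // Marks a file as processed by renaming it with an ".archive" suffix,
    // so the next scheduled run simply skips anything ending in ".archive".
    public static Path markProcessed(Path file) throws IOException {
        Path target = file.resolveSibling(file.getFileName() + ".archive");
        return Files.move(file, target, StandardCopyOption.ATOMIC_MOVE);
    }
}
```

ATOMIC_MOVE throws if the file system cannot perform the rename atomically; drop the option if that guarantee is not needed.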