I have a use case where multiple threads can write files to a folder. At a given point in time I want to identify the latest file in that folder.
I cannot use the timestamp, since it can be the same for more than one file in the folder. So I want to lock the folder, generate a sequence number by counting the number of files in the folder, write the new file using that sequence number, and release the lock. Is this possible in Java?
Similarly, while reading, take the file with the largest sequence number.
The chance of concurrent writes to the folder is low, so performance won't be an issue.
You can't use FileLock on a directory, so you will have to handle the locking in Java. You could do something like:
private final Object lock = new Object();

public void writeToNext(String dirPath) {
    synchronized (lock) {
        File dir = new File(dirPath);
        // Count only the regular files in the directory.
        List<File> files = Arrays.asList(dir.listFiles(new FileFilter() {
            @Override
            public boolean accept(File pathname) {
                return !pathname.isDirectory();
            }
        }));
        int numFiles = files.size();
        // Path for the new file: its sequence number is the file count + 1.
        String nextFile = dir.getAbsolutePath() + File.separator + (numFiles + 1) + ".txt";
        System.out.println("Writing to " + nextFile);
        // TODO write to file
    }
}
Note
You could implement your solution such that each write increments a counter somewhere and use that counter for the next value; only count the files and look for the last one if the counter hasn't been initialized yet.
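A minimal sketch of that counter idea, assuming a single JVM (the class name and file-name scheme are illustrative):

import java.io.File;

public class SequencedWriter {

    private final File dir;
    private int counter = -1; // -1 means "not initialized yet"

    public SequencedWriter(File dir) {
        this.dir = dir;
    }

    // Hands out the path for the next sequence number; lists the
    // directory only once, on first use, then just increments.
    public synchronized File nextFile() {
        if (counter < 0) {
            File[] existing = dir.listFiles(File::isFile);
            counter = (existing == null) ? 0 : existing.length;
        }
        counter++;
        return new File(dir, counter + ".txt");
    }
}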
Using Java SE 7 or above:
The WatchService API allows tracking file operations (create, modify, and delete) in a specified directory. In this scenario, create a watch service to track new files created in the specific folder. Each time a new file is created, a file-created event is triggered and the process can perform some user-defined action.
The file already has a creation-time attribute (java.nio.file.attribute.BasicFileAttributes). It can be extracted as a java.nio.file.attribute.FileTime, in milliseconds or in a more specific java.util.concurrent.TimeUnit (down to nanosecond precision). This gives a chance to be more specific about which file is the newest.
Also, there is an option to create a custom user-defined file attribute for any file, defined as a key-value pair. Such a unique attribute value can be associated with a file to identify whether it is the latest. The following APIs allow creating and reading a custom file attribute: java.nio.file.attribute.UserDefinedFileAttributeView and Files.getFileAttributeView().
Using the above APIs and methods, one can build an application that tracks the latest files created in a specified folder and performs the required action. Note that no locking mechanism is involved when using these APIs.
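As a small illustration of the attribute-based part (class and method names are mine; creation-time granularity varies by filesystem), picking the newest file by creation time could look like this:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;
import java.nio.file.attribute.FileTime;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

public class NewestFile {

    // Picks the most recently created regular file in the folder,
    // using the creationTime attribute described above.
    static Optional<Path> newest(Path folder) throws IOException {
        try (Stream<Path> files = Files.list(folder)) {
            return files.filter(Files::isRegularFile)
                        .max(Comparator.comparing(NewestFile::creationTime));
        }
    }

    private static FileTime creationTime(Path p) {
        try {
            return Files.readAttributes(p, BasicFileAttributes.class).creationTime();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}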
EDIT:
Using a collection to retrieve the latest file:
A thread-safe collection can be used to store the filenames (or file paths) and retrieve them LIFO (last in, first out). The watch service (or a similar process) stores the filename of the latest file created in the folder in this collection. A read operation simply takes the latest filename from the collection and works with it. Consider java.util.concurrent.ConcurrentLinkedDeque or LinkedBlockingDeque, depending on the requirement.
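A minimal sketch of that collection idea (names are illustrative):

import java.nio.file.Path;
import java.util.concurrent.ConcurrentLinkedDeque;

public class LatestFiles {

    private final ConcurrentLinkedDeque<Path> deque = new ConcurrentLinkedDeque<>();

    // Called by the watch-service thread when a new file appears.
    public void onFileCreated(Path file) {
        deque.addLast(file);
    }

    // Called by readers; returns the most recently created file,
    // or null if nothing has been recorded yet.
    public Path latest() {
        return deque.peekLast();
    }
}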
Use File.createNewFile() in a loop for writing, because it
Atomically creates a new, empty file named by this abstract pathname if and only if a file with this name does not yet exist. The check for the existence of the file and the creation of the file if it does not exist are a single operation that is atomic with respect to all other filesystem activities that might affect the file.
Like this:
import java.io.*;
import java.util.*;

public class FileCreator {

    public static void main(String[] args) throws IOException {
        String creatorId = UUID.randomUUID().toString();
        File dir = new File("dir");
        for (int filesCreated = 0; filesCreated < 1000; filesCreated++) {
            File newFile;
            // Start at the current file count and probe upwards until
            // createNewFile() wins the race for an unused name.
            for (int fileIdx = dir.list().length; ; fileIdx++) {
                newFile = new File(dir, "file-" + fileIdx + ".txt");
                if (newFile.createNewFile()) {
                    break;
                }
            }
            try (PrintWriter pw = new PrintWriter(newFile)) {
                pw.println(creatorId);
            }
        }
    }
}
Another option would be Files.createFile(...). It throws an exception if the file already exists.
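A hedged sketch of the same probing loop built on Files.createFile(...), catching the exception instead of checking a boolean (the file-name pattern follows the example above):

import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NioFileCreator {

    // Keeps trying increasing sequence numbers until a create succeeds.
    static Path createNext(Path dir) throws IOException {
        for (int idx = dir.toFile().list().length; ; idx++) {
            try {
                return Files.createFile(dir.resolve("file-" + idx + ".txt"));
            } catch (FileAlreadyExistsException e) {
                // Another writer grabbed this number first; try the next one.
            }
        }
    }
}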
As for reading:
Similarly while reading take the file with largest sequence number.
What's the question here? Just take it.
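If the files follow the file-<n>.txt pattern from the example above, "just taking it" could look like this sketch (regex-based, assuming that naming scheme):

import java.io.File;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Optional;

public class LatestReader {

    // Extracts the sequence number from names like "file-12.txt"
    // and returns the file with the largest one.
    static Optional<File> latest(File dir) {
        return Arrays.stream(dir.listFiles((d, name) -> name.matches("file-\\d+\\.txt")))
                     .max(Comparator.comparingInt(f ->
                         Integer.parseInt(f.getName().replaceAll("\\D", ""))));
    }
}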
Related
In one requirement, I need to copy multiple files from one location to another network location.
Let's assume that I have the following files present in the /src location:
a.pdf, b.pdf, a.doc, b.doc, a.txt and b.txt
I need to copy a.pdf, a.doc and a.txt atomically into the /dest location at once.
Currently I am using the java.nio.file.Files package, with code as follows:
Path srcFile1 = Paths.get("/src/a.pdf");
Path destFile1 = Paths.get("/dest/a.pdf");
Path srcFile2 = Paths.get("/src/a.doc");
Path destFile2 = Paths.get("/dest/a.doc");
Path srcFile3 = Paths.get("/src/a.txt");
Path destFile3 = Paths.get("/dest/a.txt");
Files.copy(srcFile1, destFile1);
Files.copy(srcFile2, destFile2);
Files.copy(srcFile3, destFile3);
But this way the files are copied one after another.
As an alternative, in order to make the whole process atomic, I am thinking of zipping all the files, moving the zip to /dest, and unzipping it at the destination.
Is this approach correct for making the whole copy process atomic? Has anyone dealt with a similar problem and resolved it?
You can copy the files to a new temporary directory and then rename the directory.
Before renaming the temporary directory, you need to delete the destination directory.
If other files are already in the destination directory that you don't want to overwrite, you can instead move all files from the temporary directory to the destination directory.
This is not completely atomic, however.
With removing /dest:
String tmpPath="/tmp/in/same/partition/as/source";
File tmp=new File(tmpPath);
tmp.mkdirs();
Path srcFile1 = Paths.get("/src/a.pdf");
Path destFile1 = Paths.get(tmpPath+"/dest/a.pdf");
Path srcFile2 = Paths.get("/src/a.doc");
Path destFile2 = Paths.get(tmpPath+"/dest/a.doc");
Path srcFile3 = Paths.get("/src/a.txt");
Path destFile3 = Paths.get(tmpPath+"/dest/a.txt");
Files.copy(srcFile1, destFile1);
Files.copy(srcFile2, destFile2);
Files.copy(srcFile3, destFile3);
delete(new File("/dest"));
tmp.renameTo("/dest");
void delete(File f) throws IOException {
    if (f.isDirectory()) {
        for (File c : f.listFiles())
            delete(c);
    }
    if (!f.delete())
        throw new FileNotFoundException("Failed to delete file: " + f);
}
With just overwriting the files:
String tmpPath = "/tmp/in/same/partition/as/source";
File tmp = new File(tmpPath);
tmp.mkdirs();
Path srcFile1 = Paths.get("/src/a.pdf");
Path destFile1 = Paths.get("/dest/a.pdf");
Path tmp1 = Paths.get(tmpPath + "/a.pdf");
Path srcFile2 = Paths.get("/src/a.doc");
Path destFile2 = Paths.get("/dest/a.doc");
Path tmp2 = Paths.get(tmpPath + "/a.doc");
Path srcFile3 = Paths.get("/src/a.txt");
Path destFile3 = Paths.get("/dest/a.txt");
Path tmp3 = Paths.get(tmpPath + "/a.txt");
Files.copy(srcFile1, tmp1);
Files.copy(srcFile2, tmp2);
Files.copy(srcFile3, tmp3);
// Start of non-atomic section (it can be redone if necessary)
Files.deleteIfExists(destFile1);
Files.deleteIfExists(destFile2);
Files.deleteIfExists(destFile3);
Files.move(tmp1, destFile1);
Files.move(tmp2, destFile2);
Files.move(tmp3, destFile3);
// End of non-atomic section
Even though the second method contains a non-atomic section, the copy process itself uses a temporary directory so the files are not overwritten while being copied.
If the process aborts while moving the files, it can easily be completed.
See https://stackoverflow.com/a/4645271/10871900 as a reference for moving files and https://stackoverflow.com/a/779529/10871900 for recursively deleting directories.
First, there are several possibilities for copying a file or a directory. Baeldung gives a very nice insight into the different options. You can also use FileCopyUtils from Spring. Unfortunately, none of these methods are atomic.
I found an older post and adapted it a little. You can try using Spring's low-level transaction management support: make a transaction out of the method and define what should be done on rollback. There is also a nice article from Baeldung about this.
@Autowired
private PlatformTransactionManager transactionManager;

@Transactional(rollbackFor = IOException.class)
public void copy(List<File> files) throws IOException {
    TransactionDefinition transactionDefinition = new DefaultTransactionDefinition();
    TransactionStatus transactionStatus = transactionManager.getTransaction(transactionDefinition);
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCompletion(int status) {
            if (status == STATUS_ROLLED_BACK) {
                // try to delete created files
            }
        }
    });
    try {
        // copy files
        transactionManager.commit(transactionStatus);
    } catch (IOException e) {
        // rolling back in a finally block would fail after a successful
        // commit, so only roll back when the copy actually failed
        transactionManager.rollback(transactionStatus);
        throw e;
    }
}
Or you can use a simple try-catch-block. If an exception is thrown you can delete the created files.
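A minimal sketch of that try/catch idea, without Spring, tracking what was copied so it can be undone on failure (class and method names are illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class BestEffortAtomicCopy {

    // Copies all files; on any failure, deletes what was already
    // copied so the destination is left unchanged (best effort).
    static void copyAllOrNothing(List<Path> sources, Path destDir) throws IOException {
        List<Path> copied = new ArrayList<>();
        try {
            for (Path src : sources) {
                Path dest = destDir.resolve(src.getFileName());
                Files.copy(src, dest);
                copied.add(dest);
            }
        } catch (IOException e) {
            for (Path dest : copied) {
                Files.deleteIfExists(dest); // roll back the partial copy
            }
            throw e;
        }
    }
}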
Your question doesn't state what goal the atomicity serves. Even unzipping is never atomic; the VM might crash with an OutOfMemoryError right in the middle of inflating the blocks of the second file. That leaves one file complete, a second incomplete, and a third entirely missing.
The only thing I can think of is a two-phase commit, like all the suggestions with a temporary destination that suddenly becomes the real target. This way you can be sure that the second operation either never occurs or creates the final state.
Another approach would be to write a sort of cheap checksum file into the target afterwards. This would make it easy for an external process to listen for the creation of such files and verify their content against the files found.
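As a rough sketch of such a cheap "commit marker" (the file name and format are my assumptions): write the data files first, then a manifest listing their checksums; a consumer only trusts the directory once the manifest exists and matches.

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Manifest {

    // Writes <dir>/MANIFEST.sha256 once all data files are in place;
    // readers treat its presence (and matching sums) as "copy complete".
    static void writeManifest(Path dir) throws IOException, NoSuchAlgorithmException {
        StringBuilder lines = new StringBuilder();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
            for (Path f : files) {
                if (!Files.isRegularFile(f)) {
                    continue; // skip subdirectories
                }
                byte[] hash = MessageDigest.getInstance("SHA-256")
                                           .digest(Files.readAllBytes(f));
                lines.append(toHex(hash)).append("  ").append(f.getFileName()).append('\n');
            }
        }
        Files.write(dir.resolve("MANIFEST.sha256"), lines.toString().getBytes());
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}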
That checksum approach would be much the same as offering the container/ZIP/archive right away instead of piling files into a directory. Most archives have or support integrity checks.
(Operating systems and filesystems also differ in behaviour when directories or folders disappear while being written to. Some accept it and write all data to a recoverable buffer. Others still accept writes but don't change anything. Others fail immediately upon the first write, since the target block on the device is unknown.)
FOR ATOMIC WRITE:
There is no multi-operation atomicity concept for standard filesystems, so you need to perform only a single action, one that is atomic by itself.
Therefore, to write several files in an atomic way, you need to create a folder with, let's say, the timestamp in its name, and copy the files into this folder.
Then you can either rename it to the final destination or create a symbolic link.
You can use anything similar to this, like file-based volumes on Linux, etc.
Remember that deleting the existing symbolic link and creating a new one will never be atomic, so you would need to handle that situation in your code and switch to the renamed/linked folder once it's available, instead of removing and recreating the link. However, under normal circumstances, removing and creating a new link is a really fast operation.
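A sketch of the symbolic-link variant (paths are illustrative; creating symbolic links may require extra privileges on Windows):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AtomicPublish {

    // Writers fill a timestamped folder, then point a well-known
    // symbolic link at it. As noted above, delete + create of the link
    // is two steps, so readers must tolerate the link briefly missing
    // (or retry) rather than assuming it always exists.
    static void publish(Path versionedDir, Path link) throws IOException {
        Files.deleteIfExists(link);
        Files.createSymbolicLink(link, versionedDir);
    }
}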
FOR ATOMIC READ:
Well, the problem is not in the code but at the operating-system/filesystem level.
Some time ago, I got into a very similar situation. There was a database engine running and changing several files "at once". I needed to copy the current state, but the second file had already changed before the first one was copied.
There are two different options:
Use a filesystem with support for snapshots. At some moment, you create a snapshot and then copy files from it.
You can lock the filesystem (on Linux) using fsfreeze --freeze, and unlock it later with fsfreeze --unfreeze. When the filesystem is frozen, you can read the files as usual, but no process can change them.
Neither of these options worked for me, as I couldn't change the filesystem type, and locking the filesystem wasn't possible (it was the root filesystem).
So I created an empty file, mounted it as a loop filesystem, and formatted it. From that moment on, I could fsfreeze just my virtual volume without touching the root filesystem.
My script first called fsfreeze --freeze /my/volume, then performed the copy action, and then called fsfreeze --unfreeze /my/volume. For the duration of the copy, the files couldn't be changed, so the copied files were all from exactly the same moment in time; for my purpose, it was like an atomic operation.
By the way, be sure not to fsfreeze your root filesystem :-). I did, and a restart was the only solution.
DATABASE-LIKE APPROACH:
Even databases cannot rely on multi-file atomic operations, so they first write the change to the WAL (write-ahead log) and flush it to storage. Once it's flushed, they can apply the change to the data file.
If there is any problem or crash, the database engine first loads the data file and checks whether there are unapplied transactions in the WAL, and eventually applies them.
This is also called journaling, and it's used by some filesystems (ext3, ext4).
I hope this solution is useful: as per my understanding, you need to copy files from one directory to another.
My solution is as follows:
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class CopyFilesDirectoryProgram {

    public static void main(String[] args) throws IOException {
        String sourceDirectoryName = "//mention your source path";
        String targetDirectoryName = "//mention your destination path";
        File sdir = new File(sourceDirectoryName);
        File tdir = new File(targetDirectoryName);
        // Call the method for execution
        copyDirectory(sdir, tdir);
    }

    // Copies a file directly, or recurses into a directory.
    private static void copyDirectory(File sdir, File tdir) throws IOException {
        if (sdir.isDirectory()) {
            copyFilesFromDirectory(sdir, tdir);
        } else {
            Files.copy(sdir.toPath(), tdir.toPath());
        }
    }

    private static void copyFilesFromDirectory(File source, File target) throws IOException {
        if (!target.exists()) {
            target.mkdir();
        }
        // The original only recursed when the target already existed, so a
        // freshly created target stayed empty; copy the contents either way.
        for (String item : source.list()) {
            copyDirectory(new File(source, item), new File(target, item));
        }
    }
}
I have some simple code to generate temp files and store some values in them (I don't want to store the files in the normal storage area).
Later, I want to use such a file again and get the data from it (it's not a problem if the user manually deletes the files).
But I don't want the files deleted automatically. When I read this link, I got the information that temp files are generally not deleted unless you explicitly call deleteOnExit(), yet when my JVM finishes its work the temp file is deleted automatically.
// Create a temp file
File temp = File.createTempFile("demo_", ".txt");
String path = temp.getParent();

// Count the files whose names start with "demo"
File f = new File(path);
File[] matchingFiles = f.listFiles(new FilenameFilter() {
    public boolean accept(File dir, String name) {
        return name.startsWith("demo") && name.endsWith(".txt");
    }
});

// Print the count
System.out.println("Length : " + matchingFiles.length + " ");
I never call deleteOnExit() here, but the file is deleted automatically. Does the JVM delete the file automatically? If so, is it possible to avoid deleting the file? Or are there any other ways to meet my requirement?
File.createTempFile only creates files with unique names; other than that they are just regular files. They are not deleted automatically. This is explained in the File.createTempFile API docs: "This method provides only part of a temporary-file facility. To arrange for a file created by this method to be deleted automatically, use the deleteOnExit method."
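If the default temp directory feels too volatile (some systems clean it on reboot), a variant of File.createTempFile lets you pick the parent directory yourself; a small sketch (the directory name is illustrative):

import java.io.File;
import java.io.IOException;

public class KeepTempFiles {

    public static void main(String[] args) throws IOException {
        // "app-data" is a directory of your choosing; files created here
        // are never auto-deleted by the JVM (unless you call deleteOnExit()).
        File dir = new File("app-data");
        dir.mkdirs();
        File f = File.createTempFile("demo_", ".txt", dir);
        System.out.println("Created " + f.getAbsolutePath());
    }
}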
I am working on part of a proof-of-concept program in Java for an antivirus idea I had. Right now I'm still just kicking the idea around and the details aren't important, but I want the program I'm writing to get the file paths of every file within a certain range of each other (say, 5 levels apart) in the directory and write them to a text file.
What I have right now (my code is below) can do this to a limited extent, by checking whether there are files in a given folder, writing their file paths to a text file, and then going down another level and doing it again. I have it set up to do 2 levels of the directory currently, and it sort of works. But it only works if there is only one item in the given level of the directory. If there is one text file, it will write that file path to the other text file and then terminate. But if there's a text file and a folder, it ignores the text file, goes down to the next level of the directory, and records the file path of whatever text file it finds there. If there are two or more folders, it will always choose one in particular over the other or others.
I realize now that it's doing that because I used the wrong conditional. I used if else and should have done something else, but I'm not sure which one I should have used. However I have to do it, I want to fix it so that it branches out with each level. For example, I start the program and give it starting directory C:/Users/"Name"/Desktop/test/. Test has 2 folders and a text file in it. Working the way I want it to, it would then record the file path of the .txt, go down a level into both folders, record any .txts or other files it found there, and then go down another level into each folder it found in those two folders, record what it found there, and so on until it finished the pre-determined number of levels to go through.
EDIT: To clarify confusion over what the problem is, I'll sum it up. I want the program to write the file paths of any files it finds in each level of the directory it goes through in another text file. It will do this, but only if there is one file in a given level of directory. If there is just one .txt for example, it will write the file path of that .txt to the other text file. But if there are multiple files in that level of directory(for example, two .txts) it will only write the file path of one of them and ignore the other. If there's a .txt and a folder, it ignores the .txt and enters the folder to go to the next level of directory. I want it to record all files in a given location and then branch into all the folders in that same location.
EDIT 2: I got the part of my code that gets the file paths from this question (Read all files in a folder) and the section that writes to my other text file from this one (How do I create a file and write to it in Java?).
EDIT 3: How can I edit my code to have recursion, as @horatius pointed out that I need?
EDIT 4: How can I edit my code so that it doesn't need a hard-coded starting file path to work, and can instead detect the location of the executable .jar and use that as its starting directory?
Here is my code:
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class ScanFolder {
    private static final int LEVELS = 5;
    private static final String START_DIR = "C:/Users/Joe/Desktop/Test-Level1/";
    private static final String REPORT_FILE = "C:/Users/Joe/Desktop/reports.txt";

    public static void main(String[] args) throws IOException {
        try (PrintWriter writer = new PrintWriter(REPORT_FILE, "UTF-8");
             Stream<Path> pathStream = Files.walk(Paths.get(START_DIR), LEVELS)) {
            pathStream.filter(Files::isRegularFile).forEach(writer::println);
        } catch (Exception e) {
            e.printStackTrace(System.err);
        }
    }
}
Thanks in advance
If you are using Files.walk(...) it does all the recursion for you.
Opening and writing to the PrintWriter will truncate your output file each time it is opened/written to, leaving just the last filename written.
I think something like the example below is what you are after. As you progress, rather than writing to a file, you may want to put the found Path objects into an ArrayList<Path> or similar for easier later processing, but it's not clear from your question what requirements you have here.
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Walk {
    public static void main(String[] args) throws IOException {
        try (PrintWriter writer = new PrintWriter("C:/Users/Joe/Desktop/reports.txt", "UTF-8")) {
            Files.walk(Paths.get("C:/Users/Joe/Desktop/test")).forEach(filePath -> {
                if (Files.isRegularFile(filePath)) {
                    writer.println(filePath);
                }
            });
        }
    }
}
Here is an improved example that you can use to limit the depth. It also properly closes the Stream returned by Files.walk(...), which the previous example did not, and is a little more streams/lambda idiomatic:
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class Walk {
    // Can use Integer.MAX_VALUE for all levels
    private static final int LEVELS = 2;
    private static final String START_DIR = "C:/Users/Joe/Desktop/test";
    private static final String REPORT_FILE = "C:/Users/Joe/Desktop/reports.txt";

    public static void main(String[] args) {
        try (PrintWriter writer = new PrintWriter(REPORT_FILE, "UTF-8");
             Stream<Path> pathStream = Files.walk(Paths.get(START_DIR), LEVELS)) {
            pathStream.filter(Files::isRegularFile).forEach(writer::println);
        } catch (Exception e) {
            e.printStackTrace(System.err);
        }
    }
}
I wrote a program that reads a CSV file and puts it into a TableModel. Now I want to extend the program so that if the CSV file is changed from outside, my TableModel gets updated with the new values.
I would program a scheduler so the thread sleeps for about a minute and checks every minute whether the timestamp of the file has changed; if so, it would read the file again. But I don't know what happens to the whole program if I use a scheduler, because this little piece of software will be part of a much, much bigger application running on JDK 6. So I'm searching for a performant solution, independent of the bigger software, to get the changes into the TableModel.
Can someone help out?
The java.nio.file package now contains the Watch Service API, which effectively does the following:
This API enables you to register a directory (or directories) with the watch service. When registering, you tell the service which types of events you are interested in: file creation, file deletion, or file modification. When the service detects an event of interest, it is forwarded to the registered process. The registered process has a thread (or a pool of threads) dedicated to watching for any events it has registered for. When an event comes in, it is handled as needed.
See reference here.
Oh! Note that this API is only available from JDK 7 onwards.
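For completeness, on JDK 7+ a minimal registration loop might look like this sketch (directory and file names are illustrative):

import java.nio.file.*;

public class CsvWatcher {

    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("data");                  // folder containing the csv
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = watcher.take();             // blocks until an event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = (Path) event.context(); // file name relative to dir
                if (changed.toString().endsWith(".csv")) {
                    // re-read the csv and update the TableModel here
                    System.out.println(changed + " changed, reloading");
                }
            }
            key.reset();                               // re-arm the key
        }
    }
}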
OpenCSV is a good way to read a CSV file in Java.
If you are using Maven, you can add its dependency, or download the jar from the web.
@SuppressWarnings({"rawtypes", "unchecked"})
public void readCsvFile() {
    CSVReader csvReader;
    CsvToBean csv;
    File fileEntry;
    try {
        fileEntry = new File("path of your file");
        csv = new CsvToBean();
        csvReader = new CSVReader(new FileReader(fileEntry), ',', '"', 1);
        List list = csv.parse(setColumMapping(), csvReader);
        // List of LabReportSampleData objects
    } catch (IOException e) {
        e.printStackTrace();
    }
}

// The function below maps your csv file to your mapping object:
// index 0 of each csv row maps to the "degree" field of the mapping class, and so on.
@SuppressWarnings({"rawtypes", "unchecked"})
private static ColumnPositionMappingStrategy setColumMapping() {
    ColumnPositionMappingStrategy strategy = new ColumnPositionMappingStrategy();
    strategy.setType(LabReportSampleData.class);
    String[] columns =
        new String[] {"degree", "radian", "shearStress", "shearingStrain", "sourceUnit"};
    strategy.setColumnMapping(columns);
    return strategy;
}
I have a cluster of machines, each running a Java app.
These Java apps need to access a unique resource.txt file concurrently.
I need to atomically rename a temp.txt file to resource.txt in Java, even if resource.txt already exists.
Deleting resource.txt and renaming temp.txt doesn't work, as it's not atomic (it creates a small timeframe where resource.txt doesn't exist).
And it should be cross-platform...
For Java 1.7+, use java.nio.file.Files.move(Path source, Path target, CopyOption... options) with CopyOptions "REPLACE_EXISTING" and "ATOMIC_MOVE".
See API documentation for more information.
For example:
Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING);
On Linux (and I believe Solaris and other UNIX operating systems), Java's File.renameTo() method will overwrite the destination file if it exists, but this is not the case under Windows.
To be cross platform, I think you'd have to use file locking on resource.txt and then overwrite the data.
The behavior of the file lock is platform-dependent. On some platforms, the file lock is advisory, which means that unless an application checks for a file lock, it will not be prevented from accessing the file. On other platforms, the file lock is mandatory, which means that a file lock prevents any application from accessing the file.
try {
    // Get a file channel for the file
    File file = new File("filename");
    FileChannel channel = new RandomAccessFile(file, "rw").getChannel();

    // Use the file channel to create a lock on the file.
    // This method blocks until it can retrieve the lock.
    FileLock lock = channel.lock();

    // Try acquiring the lock without blocking. This method returns
    // null or throws an exception if the file is already locked.
    try {
        lock = channel.tryLock();
    } catch (OverlappingFileLockException e) {
        // File is already locked in this thread or virtual machine
    }

    // Release the lock
    lock.release();

    // Close the file
    channel.close();
} catch (Exception e) {
    // Handle/log the error
}
Linux, by default, uses advisory locking, while Windows enforces mandatory locking. Maybe you could detect the OS, and use renameTo() under UNIX with some locking code for Windows?
There's also a way to turn on mandatory locking under Linux for specific files, but it's kind of obscure: you have to set the mode bits just right.
Linux, following System V (see System V Interface Definition (SVID) Version 3), lets the sgid bit for files without group execute permission mark the file for mandatory locking.
Here is a related discussion: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4017593
As stated here, it looks like older versions of Windows don't even support an atomic file rename. It's very likely you'll have to use some manual locking mechanism or some kind of transaction. For that, you might want to take a look at the Apache Commons Transaction package.
If this should be cross-platform, I suggest two options:
Implement an intermediary service that is responsible for all the file accesses. Here you can use several mechanisms for synchronizing the requests. Each client Java app accesses the file only through this service.
Create a control file each time you need to perform synchronized operations (see the sketch below). Each Java app that accesses the file is responsible for checking for the control file and waiting while it exists, almost like a semaphore. The process doing the delete/rename operation is responsible for creating/deleting the control file.
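A rough sketch of the second option, using atomic file creation as the "semaphore" (the control-file path is illustrative):

import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ControlFileLock {

    private static final Path CONTROL = Paths.get("/shared/resource.lock");

    // Spin until we manage to create the control file; creation is
    // atomic, so only one process "acquires" it at a time.
    static void acquire() throws IOException, InterruptedException {
        while (true) {
            try {
                Files.createFile(CONTROL);
                return;
            } catch (FileAlreadyExistsException e) {
                Thread.sleep(100); // someone else holds it; wait and retry
            }
        }
    }

    static void release() throws IOException {
        Files.deleteIfExists(CONTROL);
    }
}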
If the purpose of the rename is to replace resource.txt on the fly and you have control over all the programs involved, and the frequency of replacement is not high, you could do the following.
To open/read the file:
Open "resource.txt", if that fails
Open "resource.old.txt", if that fails
Open "resource.txt" again, if that fails
You have an error condition.
To replace the file:
Rename "resource.txt" to "resource.old.txt", then
Rename "resource.new.txt" to "resource.txt", then
Delete "resource.old.txt".
This ensures all your readers always find a valid file.
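The writer side of that scheme is just the three steps above in code; a sketch, with error handling omitted:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Replacer {

    static void replace() throws IOException {
        Path current = Paths.get("resource.txt");
        Path old     = Paths.get("resource.old.txt");
        Path fresh   = Paths.get("resource.new.txt");
        Files.move(current, old);   // step 1: park the live file
        Files.move(fresh, current); // step 2: promote the new file
        Files.delete(old);          // step 3: clean up
    }
}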
But, easier, would be to simply try your opening in a loop, like:
InputStream inp = null;
StopWatch tmr = new StopWatch(); // made-up class, not standard Java
IOException err = null;
while (inp == null && tmr.elapsed() < 5000) { // or some appropriate length of time
    try {
        inp = new FileInputStream("resource.txt");
    } catch (IOException thr) {
        err = thr;
        sleep(100); // or some appropriate length of time
    }
}
if (inp == null) {
    // handle error here - file did not turn up after required elapsed time
    throw new IOException("Could not obtain data from resource.txt file");
}
... carry on
You might get some traction by establishing a FileChannel lock on the file before renaming it (and deleting the file you're going to overwrite once you have the lock).
I solved this with a simple rename function.
Calling it:
File newPath = new File("...");
newPath = checkName(newPath);
Files.copy(file.toPath(), newPath.toPath(), StandardCopyOption.REPLACE_EXISTING);
The checkName function checks whether the file exists.
If it exists, it appends a number in brackets, e.g. (1), to the end of the filename.
Functions:
private static File checkName(File newPath) {
    if (Files.exists(newPath.toPath())) {
        String extractRegExSubStr = extractRegExSubStr(newPath.getName(), "\\([0-9]+\\)");
        if (extractRegExSubStr != null) {
            extractRegExSubStr = extractRegExSubStr.replaceAll("\\(|\\)", "");
            int parseInt = Integer.parseInt(extractRegExSubStr);
            int parseIntPlus = parseInt + 1;
            newPath = new File(newPath.getAbsolutePath().replace("(" + parseInt + ")", "(" + parseIntPlus + ")"));
            return checkName(newPath);
        } else {
            newPath = new File(newPath.getAbsolutePath().replace(".pdf", " (" + 1 + ").pdf"));
            return checkName(newPath);
        }
    }
    return newPath;
}

private static String extractRegExSubStr(String row, String patternStr) {
    Pattern pattern = Pattern.compile(patternStr);
    Matcher matcher = pattern.matcher(row);
    if (matcher.find()) {
        return matcher.group(0);
    }
    return null;
}
EDIT: This only works for PDF files. For other types, replace the .pdf suffix or add an extension parameter for it.
NOTE: If the file name contains additional numbers in brackets, it may mess up your file names.