I have two Java applications that coordinate through a file-existence check: one application waits until a file is deleted and then creates a new file once the deletion happens, to manage concurrency. If this check is not process-safe, my application fails.
The pseudocode:
    if file exists:
        do something with it
It's not concurrency-safe, as nothing ensures the file does not get deleted between the first and the second line.
The safest way would be to use a FileLock. If you are planning to react to file creation/deletion events on Linux, I'd recommend an inotify-based solution.
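For the JVM side, here is a minimal sketch of the FileLock approach (the file name is just an example); on Linux, java.nio.file.WatchService is the JDK's inotify-backed way to receive the creation/deletion events:

    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class FileLockDemo {
        public static void main(String[] args) throws Exception {
            Path path = Paths.get("shared.dat"); // example file name
            try (FileChannel channel = FileChannel.open(path,
                    StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE);
                 FileLock lock = channel.lock()) {
                // The lock is held OS-wide: the other JVM blocks in channel.lock()
                // until this try-with-resources block releases it, so the
                // exists-then-use race from the pseudocode cannot occur here.
            }
        }
    }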
Related
I have a Java application that creates multiple threads. There is 1 producer thread which reads from a 10 GB file, parses that information, creates objects from it, and puts them into multiple blocking queues (5 queues).
The 5 consumer threads each read from their own BlockingQueue. Each consumer thread then writes to an individual file, so 5 files in total get created. It takes around 30 minutes to create all the files.
The problem:
The threads write to an external mounted directory on a Linux box. We've experienced problems where other Linux mounts went down and applications crashed, so I want to prevent that in this application.
What I would like to do is keep checking whether the mount (directory) exists before writing to it. I'm assuming that if the directory goes down, a write will throw a FileNotFoundException. If that is the case, I want the application to keep checking whether the directory is there for about 10-20 minutes before completely crashing. Because I don't want to have to read the 10 GB file again, I want the consumer threads to be able to pick up from where they last left off.
What I'm not sure is best practice:
Is it best to check whether the directory exists in the main class before creating the threads, or to check in each consumer thread?
If I keep checking whether the directory exists in each consumer thread, it seems like repeated code. I could check in the main class, but it takes 30 minutes to create these files; if the mount goes down during those 30 minutes and I'm only checking in the main class, the application will crash. Or, if I'm already writing to a directory, is it impossible for an external directory to go down? Does it get locked?
Thank you.
We have something similar in our application, but in our case we are running a web app, and if our mounted file system goes down we just throw an exception. We want to do something more elegant, like you do...
I would recommend using a combination of the following patterns: State, CircuitBreaker (which I believe is a more specific version of the State pattern), and Observer/Observable.
These would work in the following way...
Create something that represents your file system, maybe a class called MountedFileSystem. Make all your write calls through this particular class.
This class will catch every FileNotFoundException, and when one occurs, the CircuitBreaker gets triggered. This change works like the State pattern: one state is when things are working 'fine', the other state is when things aren't working 'fine', meaning the mount has gone away.
Then, in the background, I would have a task running on a thread that checks whether the actual underlying file system is back. When it is, change the state in the MountedFileSystem and fire an event (Observer/Observable) to try writing the files to disk again.
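A minimal sketch of that idea (the class and method names are mine, not a prescribed API): all writes funnel through one object, an IOException trips the breaker, and a scheduled background probe closes it again when the mount reappears:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class MountedFileSystem {
        private final Path mountDir;
        private volatile boolean available = true; // circuit state

        MountedFileSystem(Path mountDir) {
            this.mountDir = mountDir;
            ScheduledExecutorService probe = Executors.newSingleThreadScheduledExecutor();
            probe.scheduleAtFixedRate(() -> {
                // Background task: when the mount is reachable again, close the
                // circuit (here you would also fire an event to retry writes).
                if (!available && Files.isDirectory(mountDir)) {
                    available = true;
                }
            }, 10, 10, TimeUnit.SECONDS);
        }

        void write(String fileName, byte[] data) throws IOException {
            if (!available) {
                throw new IOException("mount is down, write rejected by circuit breaker");
            }
            try {
                Files.write(mountDir.resolve(fileName), data);
            } catch (IOException e) {
                available = false; // trip the breaker
                throw e;
            }
        }
    }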
And as yuan quigfei stated, I am fairly certain you're going to have to rewrite those files. I just don't see how you could resume writing to them, but perhaps someone else has an idea.
1. Write a method that detects whether the folder exists.
2. Call this method before actually writing.
3. Create the 5 threads based on step 2. Once you detect that the directory no longer exists, you seem to have no choice but to rewrite. Of course, you don't need to re-read if all your content is already in memory (big memory).
I have a situation where two Java applications are watching a directory for incoming files. Say there is a directory DIR that is being watched by two JVM processes for any files with the extension .SGL.
The problem we face here is that sometimes both nodes are notified about the new files and both try to process the same file.
Usually we handle these situations using a database: each node tries to insert into a table with a unique file-name column, only one insert succeeds, and that node continues processing.
But for this situation, we don't have database.
What is the best way to handle this kind of problem? Can we depend on file-renaming solutions? Is file renaming an atomic operation?
For such a situation Spring Integration suggests FileSystemPersistentAcceptOnceFileListFilter: https://docs.spring.io/spring-integration/reference/html/files.html#file-reading
Stores "seen" files in a MetadataStore to survive application restarts.
The default key is 'prefix' plus the absolute file name; value is the timestamp of the file.
Files are deemed as already 'seen' if they exist in the store and have the
same modified time as the current file.
When you have a shared, persistent MetadataStore for all your application instances, only one of them will process a given file. All the others will just filter it out.
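A rough wiring sketch, assuming the shared store is Redis via spring-integration-redis (the directory, prefix, and connection factory are placeholders):

    import java.io.File;

    import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
    import org.springframework.integration.file.FileReadingMessageSource;
    import org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter;
    import org.springframework.integration.redis.metadata.RedisMetadataStore;

    public class SharedDirectoryPoller {
        public static void main(String[] args) {
            // Both JVMs point at the same Redis, so a file marked as "seen" by
            // one instance is filtered out by the other.
            RedisMetadataStore store = new RedisMetadataStore(new JedisConnectionFactory());

            FileReadingMessageSource source = new FileReadingMessageSource();
            source.setDirectory(new File("/path/to/DIR"));
            source.setFilter(new FileSystemPersistentAcceptOnceFileListFilter(store, "sgl-"));
            // In a real application this source would be wired as a polled
            // inbound channel adapter; each *.SGL file is then delivered to
            // exactly one instance.
        }
    }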
Every watcher (even two in the same JVM) should always be notified of the new File being added.
If you want to divide the work, you can either
- use one JVM to run twice as many threads and divide the work via a queue, or
- use an operation which will only succeed for one JVM, e.g.
  - file rename
  - create a lock file
  - lock the file itself
Is file renaming an atomic operation?
Yes: only one process can successfully rename a file, even if both attempt to rename it to the same name.
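A small sketch of claiming a file by rename with java.nio (the .inprogress suffix is my own convention): Files.move with ATOMIC_MOVE either succeeds or throws, so exactly one of the two JVMs wins:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class ClaimByRename {
        static boolean tryClaim(Path file) {
            Path claimed = file.resolveSibling(file.getFileName() + ".inprogress");
            try {
                // Atomic on POSIX filesystems: the rename either happens or it doesn't.
                Files.move(file, claimed, StandardCopyOption.ATOMIC_MOVE);
                return true;  // this JVM owns the file now
            } catch (IOException e) {
                return false; // the other JVM renamed it first
            }
        }
    }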
If I have multiple threads that use log4j to write to a single log file, and I want another thread to read it back out, is there a way to safely read (line by line) those logs such that I always read a full line?
EDIT:
The reason for this is that I need to upload all logs to a central location, and they might be logs that are days old or ones that are just being written.
You should use a read-write lock.
Read locks can be held by multiple readers as long as no one is writing to the file, but a write lock can only be held by one thread at a time, no matter what.
Just make sure that when your writing thread is done writing, it releases the write lock to allow the reading threads in. Likewise, always release the read lock when the readers are done reading, so log4j can continue to write.
Check out
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/ReadWriteLock.html
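A minimal in-JVM sketch of that suggestion (note that a ReentrantReadWriteLock only coordinates threads inside one process, and you would have to route both log4j's appends and your reader through it, e.g. via a custom appender):

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class GuardedLogAccess {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        public void write(Runnable appendCall) {
            lock.writeLock().lock();       // exclusive: no readers while writing
            try {
                appendCall.run();          // e.g. the actual log4j append
            } finally {
                lock.writeLock().unlock(); // always release so readers proceed
            }
        }

        public void read(Runnable readCall) {
            lock.readLock().lock();        // shared: many readers at once
            try {
                readCall.run();            // read whole lines here
            } finally {
                lock.readLock().unlock();
            }
        }
    }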
However, come to think of it, what is your purpose for this? If you simply want to monitor your logs, you should use a different solution rather than having a monitor thread within the same application; that seems to make little sense. If the data is available within the application/service, why write it off to a file just to read it right back in?
It is going to be a pain to implement what you are describing, especially since you have to deal with file rolling.
For your specific requirement, there are better choices:
If the location you are backing up to can be written directly (i.e. it is mounted in your file system), it is better to simply set your file rolling to write to that backup directory; or
Make use of log management tools like Splunk to monitor and manage your log files (so that you don't even need to copy them to that backup directory); or
Even if you need to do the backup all by yourself, you don't need to (and have no reason to) do it in a separate thread. Try writing a shell script that monitors your log directory and uses tools like rsync (or similar logic you write yourself) to upload only the files that differ between the local and remote locations.
I am writing a Java application which should (among other things) generate a sequence of integers, starting with a given number (such as 900, 901, 902, 903, ... - the 900 is given as a parameter).
The current sequence value should persist when the application gets shut down and then started again.
When multiple instances of the application are running at the same time, they should share the same sequence (e.g. the union of the sequences generated by all instances should be the same as the sequence generated by a single instance, when running alone).
The administrator should be able to shut down the application and reset the current sequence value manually.
If the application crashes, the file should always stay accessible for other instances so that they can continue to work.
It was decided that the application would use a plain text file containing just the current number. When the application starts, it checks whether the file already exists and, if not, creates it and writes the initial number into it. Every time the application is about to generate a new number, it should read the current value from the file, use it as the current sequence value, and then increment the number in the file.
I would like to know how to do these two things atomically (with regard to other running instances of the same application):
check whether a file exists and, if not, create it and write a number into it
read the current content of a file and then change it
Suggestions on how to achieve the listed goals in other ways are appreciated as well.
Using a database sequence would be a simple and solid solution, but you've decided it will be a file, so you'll need to manage the distributed synchronization yourself. There are systems offering that, like Terracotta or Hazelcast. I would definitely use one of them instead of implementing something new based on locking a file. Why not a database?
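For illustration, a sketch of the Hazelcast route (assuming Hazelcast 4.x and that all application instances join the same cluster; the lock name is arbitrary):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.cp.lock.FencedLock;

    public class DistributedSequence {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            FencedLock lock = hz.getCPSubsystem().getLock("sequence-lock");
            lock.lock(); // held cluster-wide: only one instance enters at a time
            try {
                // read the current value from the file, use it, write value + 1 back
            } finally {
                lock.unlock();
            }
        }
    }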
I would create a lock file when a client writes the file and delete that lock file immediately when the write process is done.
When the lock file is present, other clients will not read or write the db file and will wait until the lock file is deleted; simultaneous reads are allowed.
Your questions:
1. Use a shutdown hook.
2. This is solved by the lock-file mechanism.
3. Every client could create an ID file beside the db file, and when that file is deleted by the admin, the client shuts down.
4. It depends: if the shutdown hook is respected this should not be a problem, but if the client is killed immediately you don't have any chance to clean up.
Problems:
If too many clients try to write the db file, you cannot make sure that the first client will be served first.
What happens if a client crashes during the write process and is not able to clean up the lock file?
What happens if two clients try to create the lock file at the same time? I think this depends on the OS and filesystem.
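On the last question: at the java.nio level the creation race is solvable, because StandardOpenOption.CREATE_NEW (like Files.createFile) is an atomic check-and-create. A sketch covering both of the asker's operations (the file name and the initial value 900 are just the examples from the question):

    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.FileAlreadyExistsException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class FileSequence {
        public static void main(String[] args) throws Exception {
            Path file = Paths.get("sequence.txt");

            // 1. Check-and-create atomically: CREATE_NEW fails if the file
            //    exists, so exactly one instance writes the initial number.
            try {
                Files.write(file, "900".getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE_NEW);
            } catch (FileAlreadyExistsException e) {
                // another instance created it first; nothing to do
            }

            // 2. Read-then-increment while holding an OS-level exclusive lock
            //    across both steps, so no other instance can interleave.
            try (FileChannel ch = FileChannel.open(file,
                    StandardOpenOption.READ, StandardOpenOption.WRITE);
                 FileLock lock = ch.lock()) {
                ByteBuffer buf = ByteBuffer.allocate(32);
                ch.read(buf);
                long current = Long.parseLong(
                        new String(buf.array(), 0, buf.position(), StandardCharsets.UTF_8).trim());
                ch.truncate(0).position(0);
                ch.write(ByteBuffer.wrap(
                        Long.toString(current + 1).getBytes(StandardCharsets.UTF_8)));
                System.out.println("sequence value: " + current);
            }
        }
    }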
We have an application that reads files from a particular folder, processes them, and copies them (with some business logic) to another folder.
The problem here is that when there is a very large number of files to be processed, running a single instance of the application or a single thread is no longer enough to process these files.
One approach we have for this is to start multiple instances of the application (I feel something is wrong with this approach; suggest an alternative if there is one).
Whether spawning threads or starting multiple instances of the application, care should be taken that if one thread reads a file and starts processing it, another thread does not pick it up.
We are trying to achieve this by having a database table with the list of file names in the folder, so that when a thread first reads a file name from the table, we change its status to in-process or completed and pessimistically lock the table so that other threads cannot read it.
Is there any better solution to the problem?
You can use most of your existing implementation as a front-end processor that feeds file streams to worker threads, which you can start and stop as demand dictates. Only the front-end thread opens files, so there is no possibility of one worker interfering with another.
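As a sketch of that front-end idea (directory and pool size are placeholders): one thread lists and assigns the files, the pool does the processing:

    import java.io.File;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class FrontEndProcessor {
        public static void main(String[] args) {
            ExecutorService workers = Executors.newFixedThreadPool(4); // size as demand dictates
            File[] files = new File("/path/to/input").listFiles();
            if (files != null) {
                for (File f : files) {
                    // Only this front-end thread assigns files, so no two
                    // workers can ever pick up the same one.
                    workers.submit(() -> process(f));
                }
            }
            workers.shutdown();
        }

        static void process(File f) {
            // business logic: read, transform, copy to the output folder
        }
    }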
EDIT: Added the word 'no' as it changes the meaning quite a bit...
Also have a look at JDK 7. It has a new file I/O API and a fork/join framework which might help.
Take a look at Apache Camel (http://camel.apache.org) and its File component (http://camel.apache.org/file2.html). Using Camel allows you to very easily define a set of processing instructions to consume files in a directory atomically, and also to configure a thread pool to deal with multiple files at the same time. Camel in Action is a great book to get you started.
What you describe reminds me of the classical style of development on UNIX.
In this classical style, you would move a file to a work-in-progress directory so that other processes do not pick it up. In general, you could use one directory per processing state and then move files from state to state.
This works essentially because file moves are atomic (at least under Unix systems and on NTFS).
What is nice about this approach is that it is pretty easy to handle problematic situations like crashes, and it automatically comes with a nice management interface everyone is familiar with (the filesystem GUI, ls, Windows Explorer, ...).
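A tiny sketch of one state transition in that style (directory names are examples), relying on the same atomic-move guarantee:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class StateDirectories {
        // e.g. advance(file, inProgressDir) when picked up, then advance(file, doneDir)
        static Path advance(Path file, Path stateDir) throws IOException {
            // The move either fully happens or not at all, so a crash never
            // leaves the file visible in two states at once.
            return Files.move(file, stateDir.resolve(file.getFileName()),
                    StandardCopyOption.ATOMIC_MOVE);
        }
    }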