Synchronizing process execution in a cluster with 2 nodes in Java - java

I have a cluster with 2 nodes and a shared file system. Each of these nodes runs a Java process that executes periodically. That process accesses the shared file system, handles some files, and deletes them after processing.
The problem here is that only one of the scheduled processes should access the files. The other process should skip its execution if the first process is already running.
My first attempt to solve this was to create a hidden .lock file. When the first process starts, it moves the file into another folder and begins handling the files. When the other scheduled process starts, it first checks whether the .lock file is present; if it isn't, it skips the execution. When the first process finishes, it moves the .lock file back to its original folder. I was using the Files.move() method with the ATOMIC_MOVE option, but after a certain amount of time I got unexpected behaviour.
My second attempt was to use a distributed lock such as Hazelcast. I ran some tests and it seems to work, but this solution feels overly complicated for a task this simple.
My question is: Is there any other smarter/simpler solution for this problem, or is my only option to use Hazelcast? How would you solve this issue?
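For the record, one JDK-only alternative is a java.nio.channels.FileLock taken on a file in the shared file system. Below is a minimal sketch; note that file locks are advisory, and their behaviour over network file systems such as NFS varies, so this would need to be tested on the actual mount before relying on it:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedFsLock {
    /**
     * Tries to acquire an exclusive lock on a lock file on the shared
     * file system. Runs the task and returns true if the lock was free;
     * returns false (skipping the run) if another node already holds it.
     */
    public static boolean runExclusively(Path lockFile, Runnable task) throws IOException {
        try (FileChannel channel = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock = channel.tryLock();   // null if held by another process
            if (lock == null) {
                return false;                    // the other node is processing
            }
            try {
                task.run();
                return true;
            } finally {
                lock.release();
            }
        }
    }
}
```

The lock is released automatically if the holding JVM dies, which avoids the stale-.lock-file problem of the move-based approach.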

Related

Issue with concurrent Ant calls to DITA-OT

We have a multi-threaded application and an integration with DITA-OT through Ant, which is called from Java.
We have started to face an issue with multiple concurrent Ant calls to DITA-OT to run transformations: when two or more threads make the Ant call from Java to DITA-OT, it randomly starts to generate an error reading the build_preprocess file.
It seems that while one thread is trying to read build_preprocess, another thread is deleting it; build_preprocess is generated in the folder DITA-OT\plugins\org.dita.base.
Is there a way to fix the issue, or to have DITA-OT support concurrent requests to run transformations?
This problem:
Failed to read job file: Content is not allowed in trailing section.
might occur if the same temporary files folder is used by two parallel processes.
So just make sure the "dita.temp.dir" and "output.dir" parameters are set to distinct values for the parallel processes, so that they do not share a temporary-files folder or an output folder.
https://www.dita-ot.org/dev/parameters/parameters-base.html#ariaid-title1
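One way to guarantee distinct values is to generate a fresh temporary and output directory per invocation when building the Ant command line from Java. The layout below (bin/ant, build.xml, the args.input property) is an assumption about a typical local DITA-OT install, not a verified recipe:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class DitaCommand {
    /**
     * Builds an Ant command line for a DITA-OT transformation, giving
     * each invocation its own temporary and output directory so that
     * parallel runs cannot clobber each other's build_preprocess files.
     */
    public static List<String> build(Path ditaOtDir, Path inputMap) throws IOException {
        Path tempDir = Files.createTempDirectory("dita-temp-");  // unique per call
        Path outDir  = Files.createTempDirectory("dita-out-");   // unique per call
        return List.of(
                ditaOtDir.resolve("bin/ant").toString(),
                "-f", ditaOtDir.resolve("build.xml").toString(),
                "-Dargs.input=" + inputMap,
                "-Ddita.temp.dir=" + tempDir,
                "-Doutput.dir=" + outDir);
    }
}
```

The resulting list can be handed to a ProcessBuilder; two threads calling build() concurrently will always get disjoint dita.temp.dir values.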

What would be best practice if I am trying to constantly check whether a directory exists? (Java)

I have a Java application that creates multiple threads. There is 1 producer thread which reads from a 10 GB file, parses that information, creates objects from it, and puts them into multiple blocking queues (5 queues).
The remaining 5 consumer threads each read from a BlockingQueue (each consumer thread has its own). The consumer threads then each write to an individual file, so 5 files in total get created. It takes around 30 minutes to create all the files.
The problem:
The threads are writing to an external mount directory on a Linux box. We've experienced problems where other Linux mounts have gone down and applications crash, so I want to prevent that in this application.
What I would like to do is keep checking whether the mount (directory) exists before writing to it. I'm assuming that if the directory goes down it will throw a FileNotFoundException. If that is the case, I want it to keep checking whether the directory is there for about 10-20 minutes before completely crashing. Because I don't want to have to read the 10 GB file again, I want the consumer threads to be able to pick up from where they last left off.
What I'm not sure about, best-practice-wise, is:
Is it best to check whether the directory exists in the main class before creating the threads, or to check in each consumer thread?
If I keep checking in each consumer thread it seems like repeated code. I can check in the main class, but it takes 30 minutes to create these files. What if the mount goes down during those 30 minutes? If I'm only checking in the main class, the application will crash. Or, if I'm already writing to a directory, is it impossible for an external directory to go down? Does it get locked?
thank you
We have something similar in our application, but in our case we are running a web app and if our mounted file system goes down we just throw an exception, but we want to do something more elegant, like you do...
I would recommend using a combination of the following patterns: State, Circuit Breaker (which I believe is a more specific version of the State pattern), and Observer/Observable.
These would work in the following way...
Create something that represents your file system. Maybe a class called MountedFileSystem. Make all your write calls to this particular class.
This class will catch all FileNotFoundExceptions, and once one occurs, the circuit breaker gets triggered. This change works like the State pattern: one state is when things are working 'fine', the other state is when things aren't working 'fine', meaning that the mount has gone away.
Then, in the background, I would have a task that starts on a thread and checks the actual underlying file system to see if it is back. When the file system is back, change the state in the MountedFileSystem, and fire an Event (Observer/Observable) to try writing the files again to disk.
And as yuan quigfei stated, I am fairly certain you're going to have to rewrite those files. I just don't see being able to restart writing to them, but perhaps someone else has an idea.
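A minimal sketch of the MountedFileSystem idea described above (the class name comes from the answer; the AtomicBoolean breaker and the probe method are illustrative, not a full circuit-breaker or Observer implementation):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.atomic.AtomicBoolean;

/** Wrapper around the mounted directory; all writes go through it. */
public class MountedFileSystem {
    private final Path mountDir;
    private final AtomicBoolean available = new AtomicBoolean(true);

    public MountedFileSystem(Path mountDir) { this.mountDir = mountDir; }

    public boolean isAvailable() { return available.get(); }

    /** Writes a line; a failed write trips the breaker. */
    public void appendLine(String fileName, String line) throws IOException {
        try {
            Files.writeString(mountDir.resolve(fileName),
                    line + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            available.set(false);   // breaker open: consumers should pause
            throw e;
        }
    }

    /** Background probe: closes the breaker when the mount reappears. */
    public void probe() {
        if (!available.get() && Files.isDirectory(mountDir)) {
            available.set(true);    // fire an event to resume writers here
        }
    }
}
```

Consumer threads would check isAvailable() before writing, and a scheduled task would call probe() periodically while the breaker is open.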
1. Write a method to detect whether the folder exists.
2. Call this method before actually writing.
3. Create the 5 threads based on step 2. Once you detect that the file no longer exists, you seem to have no choice but to rewrite it. Of course, you don't need to re-read the input if all your content is already in memory (given enough memory).
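The "keep checking for 10-20 minutes" requirement from the question could be wrapped in a small polling helper along these lines (the timeout and poll interval are illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;

public class DirectoryGuard {
    /**
     * Polls until the directory exists or the timeout elapses.
     * Returns true if the directory showed up in time.
     */
    public static boolean waitForDirectory(Path dir, Duration timeout, Duration pollInterval)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadline) {
            if (Files.isDirectory(dir)) {
                return true;
            }
            Thread.sleep(pollInterval.toMillis());  // back off between checks
        }
        return Files.isDirectory(dir);              // one final check
    }
}
```

A consumer thread that hits an IOException would call waitForDirectory(mount, Duration.ofMinutes(20), Duration.ofSeconds(30)) and either resume or give up.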

Is the file creation process safe among different processes at the OS level (Ubuntu)?

I have two Java applications that cooperate through a file-existence-check mechanism: one application waits until a file is deleted, then creates a file upon deletion to manage concurrency. If these operations are not process-safe, my application fails.
The pseudocode:
if file exists:
do something with it
It's not concurrency-safe, as nothing ensures the file does not get deleted between the first and the second line.
The safest way would be to use a FileLock. If you are planning to react to file creation/deletion events on Linux, I'd recommend to use some inotify based solution.
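Besides FileLock, the create step itself can be made race-free by relying on atomic creation: Files.createFile throws FileAlreadyExistsException when the file already exists, so exactly one process wins the race. A sketch:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AtomicCreate {
    /**
     * Attempts to create the marker file atomically. Exactly one
     * process can succeed; any other concurrent caller gets false.
     */
    public static boolean tryAcquire(Path marker) throws IOException {
        try {
            Files.createFile(marker);   // check-and-create is a single atomic step
            return true;
        } catch (FileAlreadyExistsException e) {
            return false;               // another process created it first
        }
    }
}
```

This replaces the unsafe "if exists → act" pattern with a single operation whose outcome tells you who won.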

Resuming a Multithreaded program that parses a file after crashing

I have a single-threaded program that parses the contents of a file and gives me an output. For the single-threaded program, I'm creating a dump file and updating it regularly for each line read, so that even if the system crashes the program will resume from the last execution point. Now I want to implement this as a multi-threaded program, but I'm confused as to what to do in the case of a crash. Since multiple threads will be running in parallel, how can I resume from the last execution point? Any suggestions?
Using multiple threads to read a file is not a good idea. If your processing logic is what costs the time, you can use one thread to read content from the file and forward it to a pool of processing threads.
Crash recovery is inherently a long operation (Oracle can need several hours to recover), so trying to use multithreading to speed up the recovery procedure itself is not worthwhile.
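The one-reader-plus-worker-pool layout suggested above can be sketched as follows (the per-line work is a placeholder; checkpointing the reader's position would restore the crash-recovery behaviour of the single-threaded version):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ReaderWithPool {
    /**
     * One thread feeds lines in order; a pool of workers does the
     * expensive per-line processing. Returns the number of lines handled.
     */
    public static int process(List<String> lines, int workers) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        AtomicInteger handled = new AtomicInteger();
        for (String line : lines) {            // the single "reader"
            pool.submit(() -> {
                // expensive parsing of `line` would go here
                handled.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return handled.get();
    }
}
```

Because only the reader advances through the file, a single checkpoint (the last line number handed out) is enough to resume after a crash.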

Run a process and find out that it was started

I need to run an .exe file and, once the process is loaded and running (but before it has terminated), execute some script. The script will throw an exception if the .exe is not fully loaded.
How can I know that the .exe file is fully loaded?
I could use a Timer to schedule the script execution for some later time, but that is not a good idea, because the .exe file may still not be launched when the scheduled time arrives.
Check out Apache Commons Exec which handles a lot of the pain of process launching. In particular look at the DefaultExecuteResultHandler, which will get a callback when the launched process exits. So long as you don't receive that callback your launched process is still running.
Note (in case it's not clear) that if your Java process launches an executable it will get an immediate callback when that process dies, provided Process.waitFor() has been called.
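If pulling in Commons Exec is not an option, the same callback idea is available in the plain JDK since Java 9 via Process.onExit(): until that future completes, isAlive() reports the process as still running. A sketch (the usage in the note below assumes a Unix-like system with a sleep command):

```java
import java.io.IOException;
import java.util.concurrent.CompletableFuture;

public class LaunchWatcher {
    /**
     * Starts the command and returns the Process. onExit() completes
     * only when the process terminates, so "no callback yet" means
     * the launched process is still running.
     */
    public static Process launch(String... command) throws IOException {
        Process process = new ProcessBuilder(command).start();
        CompletableFuture<Process> exited = process.onExit();
        exited.thenAccept(p ->
                System.out.println("process " + p.pid() + " exited with " + p.exitValue()));
        return process;
    }
}
```

For example, launch("sleep", "5").isAlive() is true while the child runs, and the onExit callback fires when it finishes.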
As the last thing in your exe file's duty, create a file in a shared location, and check for that file from your other code. That shared file will act as the lock.
You could check the list of running processes regularly until you find the executable in the list - once you see it you know it has been launched.
Note: if the executable ends after 1 second for example, and you check the processes every 10 seconds, you might miss it.
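Since Java 9, scanning the process list can be done without external tools via ProcessHandle.allProcesses(). Note that info().command() may be empty for processes you lack permission to inspect, so matches are best-effort:

```java
import java.util.Optional;

public class ProcessScanner {
    /**
     * Scans the system process table for a live process whose
     * command path contains the given name.
     */
    public static Optional<ProcessHandle> findByName(String name) {
        return ProcessHandle.allProcesses()
                .filter(ProcessHandle::isAlive)
                .filter(ph -> ph.info().command()
                        .map(cmd -> cmd.contains(name))
                        .orElse(false))
                .findFirst();
    }
}
```

Polled in a loop, this implements the "check the list of running processes regularly" idea, subject to the same sampling caveat about short-lived processes.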
