I've designed a server-client app in Java, and more than one user can connect to the server.
The server provides some features, such as:
downloading files
creating files
writing/appending to files, etc.
I've spotted some operations that need to be synchronized when two or more users send the same request.
For example: when two users want to download the same file at the same time, how can I synchronize this action using synchronized blocks or any other method?
//main (connecting 2 users in the server)
ServerSocket server= new ServerSocket(8080, 50);
MyThread client1=new MyThread(server);
MyThread client2=new MyThread(server);
client1.start();
client2.start();
Here is the method I would like to synchronize:
//outstream = new BufferedWriter(new OutputStreamWriter(sock.getOutputStream())); // output to client
//instream = new BufferedReader(new InputStreamReader(sock.getInputStream()));    // input from client
public void downloadFile(File file, String name) throws FileNotFoundException, IOException {
    synchronized (this) {
        if (file.exists()) {
            BufferedReader readfile = new BufferedReader(new FileReader(name + ".txt"));
            String newpath = "../Socket/" + name + ".txt";
            BufferedWriter socketfile = new BufferedWriter(new FileWriter(newpath));
            String line;
            while ((line = readfile.readLine()) != null) {
                outstream.write(line + "\n");
                outstream.flush();
                socketfile.write(line);
            }
            readfile.close();   // close the reader once the file has been streamed
            outstream.write("EOF\n");
            outstream.flush();
            socketfile.close();
            outstream.write("Downloaded\n");
            outstream.flush();
        } else {
            outstream.write("FAIL\n");
        }
        outstream.flush();
    }
}
Note: this method is in a class that extends Thread and is called from the overridden run() method when a client wants to "download" a file.
Does this example ensure that when two users want to download the same file, one of them will have to wait, and that the other one will still get it? Thanks for your time!
Locking in concurrent programming is used to provide mutual exclusion for some piece of code. For locking you can use either synchronized blocks or unstructured locks such as ReentrantLock and others.
The main goal of any lock is to provide mutual exclusion for the piece of code placed inside it, which means that this piece will be executed by only one thread at a time. The section inside the lock is called the critical section.
To achieve proper locking it is not enough just to place the critical code there. You also have to make sure that the variables modified inside the critical section are modified only there. If you lock some piece of code, but references to the variables used inside it are also passed to some concurrently executing thread that does not take the lock, the lock won't save you and you will get a data race. Locks protect only the execution of the critical section: they only guarantee that the code placed in the critical section is executed by one thread at a time.
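As a minimal illustration (a sketch, not taken from the question's code): a field updated inside a synchronized block is still unprotected if other threads read or write it without taking the same lock.

class Counter {
    private final Object lock = new Object();
    private int count;

    void safeIncrement() {
        synchronized (lock) {      // critical section
            count++;
        }
    }

    int unsafeRead() {
        return count;              // data race: access without holding the same lock
    }

    int safeRead() {
        synchronized (lock) {      // every access must go through the same lock
            return count;
        }
    }
}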
//outstream = new BufferedWriter(new OutputStreamWriter(sock.getOutputStream())); //output to client
//instream = new BufferedReader(new InputStreamReader(sock.getInputStream())); //input from client
public void downloadFile(File file, String name) throws FileNotFoundException, IOException {
    synchronized(this)
    {
Who owns this method? The client thread? If so, then it won't work. You should lock on the same object, and that object should be shared by all threads that require locking. In your case every client has its own lock, and the threads know nothing about each other's locks. You can lock on the class object instead; that will work, as sketched below.
synchronize(this) vs synchronize(MyClass.class)
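As a minimal sketch of that idea (the class name matches the question, the rest is illustrative): replace the per-instance monitor with one that every client thread shares, e.g. a dedicated static lock or the class object itself.

import java.io.File;
import java.io.IOException;

public class MyThread extends Thread {
    // one lock object shared by every client thread
    private static final Object FILE_LOCK = new Object();

    public void downloadFile(File file, String name) throws IOException {
        synchronized (FILE_LOCK) {   // all clients block on the same monitor
            // ... the existing download/streaming code goes here ...
        }
    }

    // locking on the class object is an equivalent alternative:
    // synchronized (MyThread.class) { ... }
}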
After doing that you will have proper locking for reading (downloading) the file. But what about writes? Imagine that during a read some other thread wants to modify that file. You have locks only around reading: you are reading the beginning of the file while the other thread is modifying its end. The writing thread will succeed, and you will get a logically corrupted file, with the beginning of one version and the end of another. Of course, file systems and the standard Java library try to take care of such cases (by using locks in readers/writers, locking file offsets, etc.), but in general it is a possible scenario. So you will also need the same lock around writes, and the read and write methods should share and use the same lock.
And now we have arrived at correct behavior but low performance. That is the tradeoff, but we can do better. Right now we use the same lock for every read and write method, which means we can read or write only one file at a time, across all files. That is stricter than necessary, because we can read or modify different files without any possible corruption. A better approach is to associate a lock with a file rather than with the whole method, and here NIO comes to help you; see the sketch after the links below.
How can I lock a file using java (if possible)
https://docs.oracle.com/javase/7/docs/api/java/nio/channels/FileLock.html
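For per-file locking inside a single JVM, one common sketch (the class and method names here are illustrative, not from the question) is to keep a shared map of monitor objects keyed by file name, so different files can be processed in parallel while access to the same file is serialized:

import java.util.concurrent.ConcurrentHashMap;

public class FileLocks {
    private static final ConcurrentHashMap<String, Object> LOCKS = new ConcurrentHashMap<>();

    // always returns the same monitor object for a given file name
    static Object lockFor(String fileName) {
        return LOCKS.computeIfAbsent(fileName, k -> new Object());
    }
}

// usage in both the read and the write method:
// synchronized (FileLocks.lockFor(name)) {
//     ... read or write the file ...
// }

For locking against other processes, FileLock from java.nio.channels (linked above) is the tool.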
And actually you can read a file concurrently if the offsets are different. For obvious physical reasons, reading the same part of a file concurrently won't give you any speedup, but concurrent reads of different regions are possible. Still, concurrent reading with careful offset handling seems like a huge overhead from my point of view, and I'm not sure you will need it. Anyway, here is some info: Concurrent reading of a File (java preferred)
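A minimal sketch of reading different regions concurrently with positional reads (the file name and offsets are made up): FileChannel's positional read does not touch the channel's current position, so separate threads can each read their own region.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PositionalRead {
    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Paths.get("data.txt"), StandardOpenOption.READ)) {
            ByteBuffer first = ByteBuffer.allocate(4096);
            ch.read(first, 0);        // e.g. thread A reads the region starting at offset 0
            ByteBuffer second = ByteBuffer.allocate(4096);
            ch.read(second, 4096);    // e.g. thread B reads the region starting at offset 4096
        }
    }
}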
Related
I want to have a central log file in my system, to which a certain application can write and read from.
The writes are for new data, and the reads will be to compare generated data to written data.
I would like this application to run in multiple instances at a time, which means I need to find a way to read diffs from the file, and write.
I have seen the code below, but it's only good for a single pass over the file, and I don't see it working with multiple instances.
I'm building this app as a command line tool, so I'm thinking about creating a file for each instance and then migrating it to the "general" log file.
I'd like to hear input from the forum regarding the different approaches to this question.
What I'm worried about is having a few instances reading from and writing to the same file and running into locking problems.
This is the code I have found so far:
import java.io.*;

public class Tp {
    public static void main(String[] args) throws IOException {
        File f = new File("/path/to/your/file/filename.txt");
        BufferedWriter bw = new BufferedWriter(new FileWriter(f));
        BufferedReader br = new BufferedReader(new FileReader(f));

        bw.write("Some text");
        bw.flush();
        System.out.println(br.readLine());
        bw.write("Some more text");
        bw.flush();
        bw.close();
        br.close();
    }
}
You seem to be trying to write and read the same file not only in one program but even within one thread. I don't believe this is of any benefit: during the program you already know when and what you wrote, so you can get rid of the whole I/O logic.
In the beginning try to write two different programs that run as separate processes. If need be, you can still try to bring them into the same JVM as separate threads.
Writing for sure is no problem, so the more interesting part is the reading logic. I'd probably implement this algorithm:
Loop until the program is terminated...
open the file, use skip() to jump to the location with new data
consume the existing data
remember how many bytes were read/remember the file size
close the file
wait until file has changed
Waiting for the file to change can be done by monitoring File.lastModified or File.length, or using the WatchService.
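A minimal WatchService sketch for the "wait until file has changed" step (the directory path is a placeholder; production code would also filter for the specific log file):

import java.nio.file.*;

public class WaitForChange {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/path/to/log/dir");   // placeholder directory
        try (WatchService watcher = dir.getFileSystem().newWatchService()) {
            dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE,
                                  StandardWatchEventKinds.ENTRY_MODIFY);
            while (true) {
                WatchKey key = watcher.take();   // blocks until something in the directory changes
                for (WatchEvent<?> event : key.pollEvents()) {
                    System.out.println(event.kind() + ": " + event.context());
                    // here: re-open the file, skip() to the remembered offset, consume the new data
                }
                key.reset();                     // re-arm the key for further events
            }
        }
    }
}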
But be aware that if you have multiple applications writing to the same file in parallel, it can break any meaningful structure you have in the data. Log4j ensures that parallel writes from within one application/multiple threads go into the file correctly. If you need multiple processes doing synchronized writes, consider logging to a database instead.
I have this method:
GenericDatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<GenericRecord>(schema);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(baos, null);

public void WriteToFile(Record record) {
    this.baos.reset();
    try (FileOutputStream fileOut = new FileOutputStream(avroFile, true)) {
        datumWriter.write(record, encoder);
        encoder.flush();
        fileOut.write("RecordStart\n".getBytes());
        baos.writeTo(fileOut);
        fileOut.write("\nRecordEnd\n".getBytes());
        this.baos.flush();
    } catch (IOException e) {
        logger.error("Error while writing: ", e);
    }
}
The above method is being called by multiple threads, and each thread writes a record between RecordStart and RecordEnd. There may be cases where the output gets interleaved, i.e. we will not get our record cleanly between RecordStart and RecordEnd.
To avoid this situation, one solution is to use synchronized, but this will cause a performance issue since we are making threads wait.
So I want some suggestions on how we can avoid multiple threads writing to the same file at the same time, which may cause interleaving of the output.
You can only benefit from parallel processing when your operations can be parallelized. By that I mean:
If you are writing to a file, this specific step of the computation must be done synchronously, be that via synchronized or via file lock, or else you'll get scrambled data.
What you can do to improve performance is reduce the synchronized/locked block to the minimum possible, leaving only the very last step (the actual write) inside the synchronized or locked block, as sketched below. Other than that, you can write to multiple files.
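A sketch of that idea applied to the method above (the class name and constructor are illustrative, and unlike the original, each call gets its own buffer and encoder so the expensive Avro encoding runs outside the lock; only the append to the shared file is serialized):

import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class RecordFileWriter {
    private final GenericDatumWriter<GenericRecord> datumWriter;
    private final String avroFile;
    private final Object fileLock = new Object();   // one lock shared by all writer threads

    public RecordFileWriter(Schema schema, String avroFile) {
        this.datumWriter = new GenericDatumWriter<GenericRecord>(schema);
        this.avroFile = avroFile;
    }

    public void writeToFile(GenericRecord record) throws IOException {
        // per-call buffer and encoder: encoding happens outside any lock
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(buffer, null);
        datumWriter.write(record, encoder);
        encoder.flush();

        // only the actual append to the shared file is serialized
        synchronized (fileLock) {
            try (FileOutputStream fileOut = new FileOutputStream(avroFile, true)) {
                fileOut.write("RecordStart\n".getBytes());
                buffer.writeTo(fileOut);
                fileOut.write("\nRecordEnd\n".getBytes());
            }
        }
    }
}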
I would prefer a file lock because it keeps the method more general, in case you ever decide to expand it so it can write multiple files. It also prevents other processes (other than your program) from using the file in the meantime.
Take a look at this question.
Answering the specific question:
So I want some suggestions on how we can avoid multiple threads writing to the same file at the same time, which may cause interleaving of the output?
Without losing performance... I don't think there is a way. The very nature of writing to a file demands that it be sequential.
Most of the systems I've seen that write all the logs to a single file use a queue, plus a single writer that keeps writing record by record while the queue has something to offer, so everything gets written eventually, as long as the system is not constantly receiving more records than the disk can handle.
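A sketch of that queue approach (the class and field names are illustrative): producer threads only enqueue already-encoded records; one dedicated thread drains the queue and is the only writer of the file.

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SingleWriter {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

    // producer threads call this; it never touches the file
    public void submit(byte[] encodedRecord) {
        queue.add(encodedRecord);
    }

    // exactly one thread runs this loop and does all file writes
    public void writerLoop(String path) {
        try (FileOutputStream out = new FileOutputStream(path, true)) {
            while (!Thread.currentThread().isInterrupted()) {
                byte[] record = queue.take();   // blocks until a record is available
                out.write("RecordStart\n".getBytes());
                out.write(record);
                out.write("\nRecordEnd\n".getBytes());
                out.flush();
            }
        } catch (IOException | InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}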
I have an occasional hard to replicate bug where one of my threads hangs.
A web spider thread dumps html files into a directory.
A file processing thread reads the files in the directory, processes them one by one and moves them.
Since the file processor can (by logical necessity) only move a file that is already in the directory, the file processor's reading of files is asynchronous with respect to the spider and unlikely to lead to a hang.
HOWEVER, the file processor thread also scans the directory, and this can happen while the web spider thread is saving a file into the directory.
QUESTION:
If a file is saved into this directory while the following read-directory method is running, will it cause a hang? (Frankly, I don't see how it could, but maybe that is why I have the bug.)
If yes, then how do I resolve the issue?
private void listFiles(Path path)
{
    Log.getLogger().debug("started ......");
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(path))
    {
        for (Path entry : stream)
        {
            if (Files.isDirectory(entry))
            {
                listFiles(entry);
            }
            else
            {
                files.add(entry);
            }
        }
    }
    catch (Exception e)
    {
        Log.getLogger().error(e.getMessage(), e);
    }
    Log.getLogger().debug("done");
}
To prevent threads from interfering with each other's work, a semaphore (or, in its simplest form, a mutex) should be used. A semaphore is acquired by a thread in order to run the code in the so-called critical section. The code in this section could, for example, access an ArrayList. If multiple threads access that list and add and remove elements to and from it, you will eventually get a ConcurrentModificationException. In other cases you will not get an exception at all, but your program might do unexpected things (see the lost-update problem, for example).
If, however, you acquire a lock each time you enter the critical section, other threads won't be able to access the shared resource (the list in this example, or the directory in your case) at the same time.
To achieve this behaviour you can either use classes that implement the Lock interface, or create an object and use it as a lock like so:
Object lock = new Object();
synchronized (lock) {
    // do critical work here
}
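For the first option, a class from java.util.concurrent.locks that implements the Lock interface can be used like this (a minimal sketch; the lock instance must be shared by the threads involved):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

Lock lock = new ReentrantLock();
lock.lock();
try {
    // do critical work here
} finally {
    lock.unlock();   // always release, even if an exception is thrown
}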
A third, probably the least efficient (but simplest) way is to declare your methods with the synchronized keyword. Of the instance methods declared as synchronized on the same object, only one can run at a time, because they all lock on that object.
I came across this scenario and did not understand why it is happening. Can someone please help me understand the behaviour of the NIO file lock?
I opened a file using FileOutputStream and, after acquiring an exclusive lock using NIO FileLock, I wrote some data into the file and did not release the lock. I then opened another FileOutputStream on the same file, intending to acquire a lock and do a write operation, and expected this to fail. But opening the second FileOutputStream overwrote the already-locked file, which had data written into it, even before I tried to get the second lock. Is this expected? My understanding was that acquiring an exclusive lock would prevent any changes to the locked file. How can I prevent my file from being overwritten while trying to get another lock (e.g. if another process tries to lock the same file from a different VM)?
Sample program I tried:
File fileToWrite = new File("C:\\temp\\myfile.txt");
FileOutputStream fos1 = new FileOutputStream(fileToWrite);
FileOutputStream fos2 = null;
FileLock lock1, lock2 = null;
lock1 = fos1.getChannel().tryLock();
if (lock1 != null) {
    // wrote data to myfile.txt after acquiring lock
    fos1.write(data.getBytes());
    // opened myfile.txt again and this replaced the file
    fos2 = new FileOutputStream(fileToWrite);
    // got an OverlappingFileLockException here
    lock2 = fos2.getChannel().tryLock();
    fos2.write(newdata.getBytes());
}
lock1.release();
fos1.close();
if (lock2 != null)
    lock2.release();
fos2.close();
I also tried splitting the above into two programs: I executed the first and started the second while the first was waiting. The file locked by program 1 got overwritten by program 2. Samples below:
Program1:
File fileToWrite = new File("C:\\temp\\myfile.txt");
FileOutputStream fos1 = new FileOutputStream(fileToWrite);
FileLock lock1 = null;
lock1 = fos1.getChannel().tryLock();
if (lock1 != null) {
    // wrote data to myfile.txt after acquiring lock
    fos1.write(data.getBytes());
    System.out.println("wrote data and waiting");
    // start the other program during the sleep
    Thread.sleep(10000);
    System.out.println("finished wait");
}
lock1.release();
fos1.close();
Program2:
File fileToWrite = new File("C:\\temp\\myfile.txt");
System.out.println("opening 2nd out stream");
// this overwrote the file
FileOutputStream fos2 = new FileOutputStream(fileToWrite);
FileLock lock2 = null;
lock2 = fos2.getChannel().tryLock();
// lock is null here
System.out.println("lock2=" + lock2);
if (lock2 != null) {
    // wrote data to myfile.txt after acquiring lock
    System.out.println("writing NEW data");
    fos2.write(newdata.getBytes());
}
if (lock2 != null)
    lock2.release();
fos2.close();
Thanks
When you acquire a FileLock, you acquire it for the entire JVM. That's why creating more FileOutputStreams and overwriting the same file within the same JVM will never be prevented by a FileLock: the JVM already owns the lock. Thus, the OverlappingFileLockException is not meant to tell you that the lock isn't available (that would be signaled by tryLock returning null); it is meant to tell you that there is a programming error, namely an attempt to acquire a lock that you already own.
When trying to access the same file from a different JVM, you stumble across the fact that the locking doesn't necessarily prevent other processes from writing into the locked region; it just prevents them from locking that region. And since you are using the constructor which truncates existing files, that truncation might happen before your attempt to acquire the lock.
One solution is to use new FileOutputStream(fileToWrite, true) to avoid truncating the file. This works regardless of whether you open the file within the same JVM or from a different process.
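For example (a sketch reusing the variables from the question's code):

// append mode: opening the stream no longer truncates the existing file
FileOutputStream fos = new FileOutputStream(fileToWrite, true);
FileLock lock = fos.getChannel().tryLock();
if (lock != null) {
    fos.write(data.getBytes());   // write only once the lock was actually acquired
    lock.release();
}
fos.close();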
However, maybe you don't want to append to the file. I guess you want to overwrite it in the case where you successfully acquired the lock. In this case, the constructors of FileOutputStream don't help you, as they force you to choose between truncating and appending.
The solution is to abandon the old API and open the FileChannel directly (requires at least Java 7). Then you have plenty of standard open options where truncating and appending are distinct. Omitting both allows overwriting without eagerly truncating the file:
try (FileChannel fch = FileChannel.open(fileToWrite.toPath(),
        StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
    try (FileLock lock = fch.tryLock()) {
        if (lock != null) {
            // you can directly write into the channel
            // but in case you really need an OutputStream:
            OutputStream fos = Channels.newOutputStream(fch);
            fos.write(testData.getBytes());
            // you may explicitly truncate the file to the actually written content:
            fch.truncate(fch.position());
            System.out.println("waiting while holding lock...");
            LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(5));
        } else {
            System.out.println("couldn't acquire lock");
        }
    }
}
Since it requires Java 7 anyway, you can use automatic resource management for the cleanup. Note that this code uses CREATE, which implies the already familiar behavior of creating the file if it doesn't exist, in contrast to CREATE_NEW, which would require that the file doesn't exist.
Due to the specified options, the open operation may create the file but not truncate it. All subsequent operations are only performed when acquiring the lock succeeded.
File locks are only specified to work against other file locks.
From the Javadoc:
Whether or not a lock actually prevents another program from accessing the content of the locked region is system-dependent and therefore unspecified. The native file-locking facilities of some systems are merely advisory, meaning that programs must cooperatively observe a known locking protocol in order to guarantee data integrity. On other systems native file locks are mandatory, meaning that if one program locks a region of a file then other programs are actually prevented from accessing that region in a way that would violate the lock. On yet other systems, whether native file locks are advisory or mandatory is configurable on a per-file basis. To ensure consistent and correct behavior across platforms, it is strongly recommended that the locks provided by this API be used as if they were advisory locks.
To put it simply: a Swing app uses sqlitejdbc as a backend. Currently there's no problem launching multiple instances that work with the same database file, and there should be one.
The file is locked (I can't delete it while the app is running), so the check should be trivial. It turns out it isn't.
File f = new File("/path/to/file/db.sqlite");
FileChannel channel = new RandomAccessFile(f, "rw").getChannel();
System.out.println(channel.isOpen());
System.out.println(channel.tryLock());
results in
true
sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid]
No matter whether the app is running or not. Am I missing the point?
TIA.
FileLocks are exclusive to the JVM, not an individual thread. So if you ran that code inside the same process as your Swing app, you would get the lock because it is shared by the JVM.
If your Swing app is not running, no other process is contending for the lock, so you will obtain it there as well.
A File System level lock interacts with other applications. You get one of these from FileChannel. So what you do in your example code will make the file seem locked to another process, for example vi.
However, other Java threads or processes within the JVM will NOT see the lock. The key sentence is "File locks are held on behalf of the entire Java virtual machine. They are not suitable for controlling access to a file by multiple threads within the same virtual machine." You are not seeing the lock, so you are running sqlitejdbc from within the same JVM as your application.
So the question is: how do you see whether your JVM has already acquired a lock on a file (assuming you don't control the code acquiring the lock)? One suggestion is to try to acquire an exclusive lock on a different subset of the file, for example with this code:
fc.tryLock(0L, 1L, false)
If there is already a lock you should get an OverlappingFileLockException. This is a bit hacky but might work.
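Spelled out a bit more (a sketch, assuming fc is the already-open FileChannel from the code above):

try {
    FileLock probe = fc.tryLock(0L, 1L, false);   // exclusive lock on the first byte only
    if (probe != null) {
        probe.release();   // nothing in this JVM held that region
        System.out.println("file is not locked from within this JVM");
    }
} catch (OverlappingFileLockException e) {
    // another thread in this JVM already holds a lock overlapping this region
    System.out.println("file is already locked within this JVM");
} catch (IOException e) {
    e.printStackTrace();
}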
Can you do a little experiment? Run two copies of this program (just your code with a sleep):
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class Main {
    public static void main(String[] args) throws Exception {
        File f = new File("/path/to/file/db.sqlite");
        FileChannel channel = new RandomAccessFile(f, "rw").getChannel();
        System.out.println(channel.isOpen());
        System.out.println(channel.tryLock());
        Thread.sleep(60000);
    }
}
If this doesn't lock you know that tryLock() isn't working on your OS/drive/JVM. If this does lock then something else is wrong with your logic. Let us know the result in a comment.