Are RandomAccessFile writes asynchronous? - java

Looking at the constructor of RandomAccessFile, for the mode 'rws' it says: "The file is opened for reading and writing. Every change of the file's content or metadata must be written synchronously to the target device."
Does this imply that the mode 'rw' is asynchronous? Do I need to include the 's' if I need to know when the file write is complete?

Are RandomAccessFile writes asynchronous?
The synchronous / asynchronous distinction refers to the guarantee that the data / metadata has been safely written to disk before the write call returns. Without the guarantee of synchronous mode, it is possible that the data you wrote is still only in memory at the point that the write system call completes. (The data will be written to disk eventually ... typically within a few seconds ... unless the operating system crashes or the machine dies due to a power failure or some such.)
Synchronous mode output is (obviously) slower than asynchronous mode output.
Does this imply that the mode 'rw' is asynchronous?
Yes, it is, in the sense above.
Do I need to include the 's' if I need to know when the file write is complete?
Yes, if by "complete" you mean "written to disc".
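A minimal sketch of the difference (the file name is made up). With "rw" you can still force the bytes out yourself via getFD().sync() at the points where you need the guarantee; with "rws" every write call already carries it:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class SyncModes {
        public static void main(String[] args) throws IOException {
            // "rw": the write() call may return before the bytes reach the disk;
            // the OS flushes them later unless you force it yourself.
            try (RandomAccessFile raf = new RandomAccessFile("data.bin", "rw")) {
                raf.writeBytes("buffered by the OS");
                raf.getFD().sync();   // explicit point at which data is forced to the device
            }

            // "rws": every write returns only after content *and* metadata
            // have been pushed to the target device (slower, but durable).
            try (RandomAccessFile raf = new RandomAccessFile("data.bin", "rws")) {
                raf.writeBytes("synchronous write");
            }
        }
    }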

That is true for RandomAccessFile and the java.io classes even when using multiple threads. The mode "rw" gives you asynchronous reads and writes, but you can use one of the synchronous modes ("rws" or "rwd") when the writes must be forced to the device.

Related

How to set a timeout when reading from a Java RandomAccessFile

I am writing to and reading from a Linux file in java, which in reality is a communication port to a hardware device. To do this I use RandomAccessFile (I'll explain why later) and it works well in most cases. But sometimes a byte is lost and then my routine blocks indefinitely since there is no timeout on the read method.
To give some more details on the file: it is a USB receipt printer that creates a file called /dev/usb/lp0 and though I can use a cups driver to print, I still need the low level communication through this file to query the status of the printer.
The reason I use RandomAccessFile is that I can have the same object for both reading and writing.
I tried to make a version with InputStream and OutputStream instead (since that would allow me to use the available() method to implement my timeout). But when I first open the InputStream and then the OutputStream I get an exception when opening the OutputStream since the file is occupied.
I tried writing with the OutputStream and then closing it before opening the InputStream to read, but then I lose some or all of the reply before it has opened the InputStream.
I tried switching to channels instead (Files.newByteChannel()). This also allows me to have just one object, and the documentation says it only reads the bytes available and returns the count (which also allows me to implement a timeout). But it blocks in the read method anyway when there is nothing to read, despite what the documentation says.
I also tried a number of ways to implement timeouts on the RandomAccessFile using threads.
The first approach was to start a separate thread at the same time as starting to read, and if the timeout elapsed in the thread I closed the file from the thread, hoping that this would unlock the read() operation with an exception, but it didn't (it stayed blocked).
I also tried to do the read in a separate thread and brutally kill it with the deprecated Thread.stop() once the time had elapsed. This worked one time, but it was not possible to reopen the file again after that.
The only solution I have made work is to have a separate thread that continuously calls read, and whenever it gets a byte it puts it in a LinkedBlockingQueue, which I can read from with a timeout. This approach works, but the drawback is that I can never close the file (again for the same reasons explained above, I can't unblock a blocked read). And my application requires that I sometimes close this connection to the hardware.
Can anyone think of a way to read from a file with a timeout that would work in my case (one that allows me to have both read and write access to the file open at the same time)?
I am using Java 8, by the way.
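A rough sketch of the reader-thread/LinkedBlockingQueue workaround described above (the class name and device path parameter are made up); it keeps the same limitation that the blocked read() cannot be cancelled, so the port can never be closed cleanly:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    // One thread blocks on read() forever and feeds a queue;
    // callers poll the queue with a timeout.
    public class TimedPortReader {
        private final RandomAccessFile port;
        private final LinkedBlockingQueue<Byte> queue = new LinkedBlockingQueue<>();

        public TimedPortReader(String device) throws IOException {
            port = new RandomAccessFile(device, "rw");
            Thread reader = new Thread(() -> {
                try {
                    int b;
                    while ((b = port.read()) != -1) {   // blocks until a byte arrives
                        queue.put((byte) b);
                    }
                } catch (IOException | InterruptedException ignored) {
                    // the blocked read cannot be cancelled, so this thread
                    // lives as long as the JVM does
                }
            });
            reader.setDaemon(true);
            reader.start();
        }

        public void write(byte[] command) throws IOException {
            port.write(command);
        }

        /** Returns the next byte, or null if nothing arrived within the timeout. */
        public Byte read(long timeout, TimeUnit unit) throws InterruptedException {
            return queue.poll(timeout, unit);
        }
    }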

Java concurrent writes from multiple threads to a single text file?

I have a multi-threaded Java 7 program (a jar file) which uses JDBC to perform work (it uses a fixed thread pool).
The program works fine and it logs things as it progresses to the command shell console window (System.out.printf()) from multiple concurrent threads.
In addition to the console output I also need to add the ability for this program to write to a single plain ASCII text log file - from multiple threads.
The volume of output is low; the file will be relatively small as it's a log file, not a data file.
Can you please suggest a good and relatively simple design/approach to get this done using Java 7 features (I don't have Java 8 yet)?
Any code samples would also be appreciated.
thank you very much
EDIT:
I forgot to add: in Java 7, the Files.newOutputStream() static factory method is stated to be thread-safe, according to the official Java documentation. Is this the simplest option for writing a single shared text log file from multiple threads?
If you want to log output, why not use a logging library such as log4j2? This will allow you to tailor your log to your specific needs, and it can log without synchronizing your threads on stdout (you do know that System.out.print involves locking on System.out?).
Edit: For the latter, if the things you log are thread-safe, and you are OK with adding the LMAX Disruptor jar to your build, you can configure async loggers (just add "async") that have a logging thread take care of the whole message formatting and writing (and keeping your log messages in order) while allowing your threads to run on without a hitch.
Given that you've said the volume of output is low, the simplest option would probably be to just write a thread-safe writer which uses synchronization to make sure that only one thread can actually write to the file at a time.
If you don't want threads to block each other, you could have a single thread dedicated to the writing, using a BlockingQueue - threads add write jobs (in whatever form they need to - probably just as strings) to the queue, and the single thread takes the values off the queue and writes them to the file.
Either way, it would be worth abstracting the details away behind a class dedicated to this purpose (ideally implementing an interface, for testability and flexibility reasons). That way you can change the actual underlying implementation later on - for example, starting off with the synchronized approach and moving to the producer/consumer queue later if you need to.
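A minimal sketch of the synchronized variant, kept to Java 7 features (the LogSink/FileLogSink names are made up); the producer/consumer version would hide a BlockingQueue and a single writer thread behind the same interface:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    interface LogSink {
        void log(String message);
    }

    class FileLogSink implements LogSink {
        private final PrintWriter out;

        FileLogSink(String path) throws IOException {
            // append to the file, autoflush on each printf so lines are not lost
            this.out = new PrintWriter(new FileWriter(path, true), true);
        }

        @Override
        public synchronized void log(String message) {
            // the synchronized keyword is the single choke point in front of the file
            out.printf("[%s] %s%n", Thread.currentThread().getName(), message);
        }

        public synchronized void close() {
            out.close();
        }
    }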
Keep a common PrintStream reference that you write to (instead of System.out directly), and point it at System.out or channel it through to a FileOutputStream depending on what you want.
Your code won't change much (barely at all) and PrintStream is already synchronized too.
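A minimal sketch of that idea (the class and method names are placeholders for whatever your code already uses):

    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    import java.io.PrintStream;

    public class Log {
        // Shared sink; flip between console and file without touching the call sites.
        private static volatile PrintStream sink = System.out;

        public static void toFile(String path) throws FileNotFoundException {
            sink = new PrintStream(new FileOutputStream(path, true), true); // append + autoflush
        }

        public static void printf(String format, Object... args) {
            sink.printf(format, args);   // PrintStream methods are internally synchronized
        }
    }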

Flush on Java application exit

Suppose a Java application writes to a file using BufferedWriter API (and does not call flush after every write). I guess that if the application exits with System.exit the buffer is not flushed and so the file might be corrupted.
Suppose also that the application component, which decides to exit, is not aware of the component, which writes to the file.
What is the easiest and correct way to solve the "flush problem" ?
You may use the Runtime.addShutdownHook method, which adds a JVM shutdown hook. This is basically an unstarted Thread which executes on shutdown of the Java Virtual Machine.
So if you have a handle to the file available in that thread, you can try to close the stream and flush the output.
Note: Although this seems feasible, I believe there will be implementation challenges, because your file handle may already be stale by the time the shutdown hook is called. So the better approach is to close your streams gracefully using finally blocks in the code where the file operations are done.
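A minimal sketch of the shutdown-hook approach (the file name is made up); note that the hook only runs on a normal JVM shutdown, including System.exit(), not on a crash or kill -9:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    public class ShutdownFlush {
        public static void main(String[] args) throws IOException {
            final BufferedWriter writer = new BufferedWriter(new FileWriter("report.txt"));

            // Runs when the JVM shuts down normally, including via System.exit().
            Runtime.getRuntime().addShutdownHook(new Thread() {
                @Override
                public void run() {
                    try {
                        writer.close();   // close() flushes the remaining buffer
                    } catch (IOException e) {
                        // nowhere sensible to report this during shutdown
                    }
                }
            });

            writer.write("buffered content that must survive System.exit()");
            System.exit(0);
        }
    }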
You can add a shutdown hook, but you need to have a reference to each of these BufferedWriter or other Flushable or Closeable objects. You won't gain anything from it. You should perform close() and flush() directly in the code that is manipulating the object.
Think of the Information Expert GRASP pattern, the code manipulating the BufferedWriter is the place that has the information about when an operation is finished and should be flushed, so that's where that logic should go.
If some application component is calling System.exit when things aren't done, I would consider that an abnormal exit, should not return 0 and therefore shouldn't guarantee that streams are flushed.

How to compute the size of a file which is already opened and being read by another thread, using Java

I want to compute the size of a file which is already open and being written to by another thread, using Java.
You can use FileChannel from the Java API:
A channel for reading, writing, mapping, and manipulating a file.
File channels are safe for use by multiple concurrent threads. The close method may be invoked at any time, as specified by the Channel interface. Only one operation that involves the channel's position or can change its file's size may be in progress at any given time; attempts to initiate a second such operation while the first is still in progress will block until the first operation completes.
If you want to know the size of a file, use File.length(). Who is writing to the file doesn't matter.
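Both options in a short sketch (the file name is made up):

    import java.io.File;
    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class SizeCheck {
        public static void main(String[] args) throws IOException {
            // Simplest: length() just asks the file system,
            // regardless of who else has the file open.
            long bytes = new File("shared.log").length();
            System.out.println("File.length(): " + bytes);

            // Equivalent via NIO, if you already hold a channel to the file.
            try (FileChannel ch = FileChannel.open(Paths.get("shared.log"),
                    StandardOpenOption.READ)) {
                System.out.println("FileChannel.size(): " + ch.size());
            }
        }
    }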

Why use Java's AsynchronousFileChannel?

I can understand why network apps would use multiplexing (to not create too many threads), and why programs would use async calls for pipelining (more efficient). But I don't understand the efficiency purpose of AsynchronousFileChannel.
Any ideas?
It's a channel that you can use to read files asynchronously, i.e. the I/O operations are done on a separate thread, so that the thread you're calling it from can do other things while the I/O operations are happening.
For example: The read() methods of the class return a Future object to get the result of reading data from the file. So, what you can do is call read(), which will return immediately with a Future object. In the background, another thread will read the actual data from the file. Your own thread can continue doing things, and when it needs the read data, you call get() on the Future object. That will then return the data (if the background thread hasn't completed reading the data, it will make your thread block until the data is ready). The advantage of this is that your thread doesn't have to wait the whole length of the read operation; it can do some other things until it really needs the data.
See the documentation.
Note that AsynchronousFileChannel will be a new class in Java SE 7, which is not released yet.
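A minimal sketch of the Future-style usage described above (the file name and the doOtherWork() placeholder are made up):

    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.Future;

    public class AsyncRead {
        public static void main(String[] args) throws Exception {
            try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                    Paths.get("input.dat"), StandardOpenOption.READ)) {

                ByteBuffer buf = ByteBuffer.allocate(4096);
                Future<Integer> pending = ch.read(buf, 0);   // returns immediately

                doOtherWork();                               // this thread is free meanwhile

                int bytesRead = pending.get();               // blocks only if the read isn't done yet
                System.out.println("read " + bytesRead + " bytes");
            }
        }

        private static void doOtherWork() {
            // placeholder for whatever the thread does while the I/O runs in the background
        }
    }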
I've just come across another, somewhat unexpected reason for using AsynchronousFileChannel. When performing random record-oriented writes across large files (exceeding physical memory so caching isn't helping everything) on NTFS, I find that AsynchronousFileChannel performs over twice as many operations, in single-threaded mode, versus a normal FileChannel.
My best guess is that because the asynchronous I/O boils down to overlapped I/O on Windows 7, the NTFS file system driver is able to update its own internal structures faster when it doesn't have to create a sync point after every call.
I micro-benchmarked against RandomAccessFile to see how it would perform (results are very close to FileChannel, and still half of the performance of AsynchronousFileChannel).
Not sure what happens with multi-threaded writes. This is on Java 7, on an SSD (the SSD is an order of magnitude faster than magnetic, and another order of magnitude faster on smaller files that fit in memory).
Will be interesting to see if the same ratios hold on Linux.
The main reason I can think of to use asynchronous IO is to better utilize the processor. Imagine you have some application which does some sort of processing on a file. And also let's assume you can process the data contained in the file in chunks. If you don't make use of asynchronous IO then your application will probably behave something like this:
1. Read a block of data. There is no processor utilization at this point, as you're blocked waiting for the data to be read.
2. Process the data you just read. At this point your application starts consuming CPU cycles as it processes the data.
3. If there is more data to read, go to step 1.
The processor utilization goes up, then drops to zero, then up again, then zero, and so on. Ideally you don't want to be idle if you want your application to be efficient and process the data as fast as possible. A better approach would be:
1. Issue an async read.
2. When the read completes, issue the next async read and then process the data.
The first step is the bootstrapping. You have no data yet so you have to issue a read. From then on, when you get notified a read has completed you issue another async read and then process the data. The benefit here is that by the time you finish processing the chunk of data the next read has probably finished, so you always have data available to process and thus you're more efficiently using the processor. If your processing finishes before the read has finished you might need to issue multiple asynchronous reads so that you have more data to process.
Nick
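A rough sketch of that pattern using the CompletionHandler variant of read() (the file name, buffer size and process() placeholder are made up); the next read is issued before the current chunk is processed, so the I/O overlaps with the processing:

    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.channels.CompletionHandler;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.CountDownLatch;

    public class PipelinedReader {
        public static void main(String[] args) throws Exception {
            final AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                    Paths.get("big.dat"), StandardOpenOption.READ);
            final CountDownLatch done = new CountDownLatch(1);
            final ByteBuffer buf = ByteBuffer.allocate(64 * 1024);

            ch.read(buf, 0, 0L, new CompletionHandler<Integer, Long>() {
                @Override
                public void completed(Integer bytesRead, Long position) {
                    if (bytesRead == -1) {            // end of file
                        done.countDown();
                        return;
                    }
                    buf.flip();
                    ByteBuffer chunk = ByteBuffer.allocate(bytesRead);
                    chunk.put(buf).flip();            // copy the chunk so buf can be reused
                    buf.clear();

                    long next = position + bytesRead;
                    ch.read(buf, next, next, this);   // 1. issue the next read first ...
                    process(chunk);                   // 2. ... then process the chunk just read
                }

                @Override
                public void failed(Throwable exc, Long position) {
                    exc.printStackTrace();
                    done.countDown();
                }
            });

            done.await();
            ch.close();
        }

        private static void process(ByteBuffer chunk) {
            // placeholder for the per-chunk work described in the answer
        }
    }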
Here's something no one has mentioned:
A plain FileChannel implements InterruptibleChannel so it, as well as anything that uses it such as the OutputStream returned by Files.newOutputStream(), has the unfortunate[1][2] behaviour that performing any blocking operation on it (e.g. read() and write()) in a thread in interrupted state will cause the Channel itself to close with java.nio.channels.ClosedByInterruptException.
If this is a problem, using AsynchronousFileChannel instead is a possible alternative.
[1] http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6608965
[2] https://bugs.openjdk.java.net/browse/JDK-4469683
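A small demo of the behaviour (the file name is made up): interrupting a thread that is writing through a plain FileChannel closes the channel for every user of it:

    import java.nio.ByteBuffer;
    import java.nio.channels.ClosedByInterruptException;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class InterruptCloses {
        public static void main(String[] args) throws Exception {
            final FileChannel ch = FileChannel.open(Paths.get("victim.dat"),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE);

            Thread writer = new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        while (true) {
                            // keep writing the same 8 KB block at offset 0
                            ch.write(ByteBuffer.wrap(new byte[8192]), 0);
                        }
                    } catch (ClosedByInterruptException e) {
                        // The interrupt did not just stop this write:
                        // it closed the whole channel.
                        System.out.println("channel still open? " + ch.isOpen());
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            writer.start();
            Thread.sleep(100);
            writer.interrupt();   // triggers ClosedByInterruptException in the writer thread
            writer.join();
        }
    }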
