I wrote a method which replaces some lines in a file (that's not the purpose of this question). Everything works fine, but I'm wondering whether the file is closed for reading when I start writing. I'd like to ensure that my solution is safe. Here's what I've done:
private void replaceCodeInTranslationFile(File file, String code) {
    if (file.exists()) {
        try (Stream<String> lines = Files.lines(Paths.get(file.getAbsolutePath()), Charset.defaultCharset())) {
            String output = this.getLinesWithUpdatedCode(lines, code);
            this.replaceFileWithContent(file, output); // is it safe?
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
Method replaceFileWithContent() looks like this:
private void replaceFileWithContent(File file, String content) throws IOException {
    try (FileOutputStream fileOut = new FileOutputStream(file.getAbsolutePath())) {
        fileOut.write(content.getBytes(Charset.defaultCharset()));
    }
}
I think that try-with-resources closes the resource at the end of the statement, so this code could potentially be a source of problems. Am I correct?
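For reference, here is a sketch of the same method restructured so that the reading stream is guaranteed to be closed before the file is reopened for writing (names as in the question; getLinesWithUpdatedCode is assumed to fully consume the stream and return a String):

private void replaceCodeInTranslationFile(File file, String code) {
    if (file.exists()) {
        final String output;
        try (Stream<String> lines = Files.lines(file.toPath(), Charset.defaultCharset())) {
            output = this.getLinesWithUpdatedCode(lines, code);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        // the read stream is closed at this point
        try {
            this.replaceFileWithContent(file, output);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}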
Read/write lock implementations may be helpful for this kind of scenario, to ensure thread-safe operations.
See http://tutorials.jenkov.com/java-concurrency/read-write-locks.html for more.
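A minimal sketch of that idea, assuming a single shared lock guards all access to the translation file (ReadWriteLock and ReentrantReadWriteLock are from java.util.concurrent.locks; the field and method names are illustrative):

private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

private void replaceCodeInTranslationFileSafely(File file, String code) {
    rwLock.writeLock().lock(); // blocks until all readers have released the read lock
    try {
        replaceCodeInTranslationFile(file, code);
    } finally {
        rwLock.writeLock().unlock();
    }
}

Readers would similarly take rwLock.readLock() around their file access. Note this only coordinates threads within one JVM; it does not protect against other processes.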
Related
I am writing a piece of code:
OutputStream outputStream = new FileOutputStream(createdFile);
GZIPOutputStream gzipOutputStream = new GZIPOutputStream(outputStream);
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(gzipOutputStream));
Do I need to close every stream or writer like the following?
gzipOutputStream.close();
bw.close();
outputStream.close();
Or will just closing the last stream be fine?
bw.close();
Assuming all the streams get created okay, yes, just closing bw is fine with those stream implementations; but that's a big assumption.
I'd use try-with-resources (tutorial) so that any exceptions thrown while constructing the subsequent streams don't leave the previous streams hanging, and so that you don't have to rely on the stream implementation closing the underlying stream:
try (
    OutputStream outputStream = new FileOutputStream(createdFile);
    GZIPOutputStream gzipOutputStream = new GZIPOutputStream(outputStream);
    OutputStreamWriter osw = new OutputStreamWriter(gzipOutputStream);
    BufferedWriter bw = new BufferedWriter(osw)
) {
    // ...
}
Note you no longer call close at all.
Important note: To have try-with-resources close them, you must assign the streams to variables as you open them; you cannot use nesting. If you use nesting, an exception during construction of one of the later streams (say, GZIPOutputStream) will leave any stream constructed by the nested calls inside it open. From JLS §14.20.3:
A try-with-resources statement is parameterized with variables (known as resources) that are initialized before execution of the try block and closed automatically, in the reverse order from which they were initialized, after execution of the try block.
Note the word "variables" (my emphasis).
E.g., don't do this:
// DON'T DO THIS
try (BufferedWriter bw = new BufferedWriter(
        new OutputStreamWriter(
            new GZIPOutputStream(
                new FileOutputStream(createdFile))))) {
    // ...
}
...because an exception from the GZIPOutputStream(OutputStream) constructor (which says it may throw IOException, and writes a header to the underlying stream) would leave the FileOutputStream open. Since some resources have constructors that may throw and others don't, it's a good habit to just list them separately.
We can double-check our interpretation of that JLS section with this program:
public class Example {
    private static class InnerMost implements AutoCloseable {
        public InnerMost() throws Exception {
            System.out.println("Constructing " + this.getClass().getName());
        }

        @Override
        public void close() throws Exception {
            System.out.println(this.getClass().getName() + " closed");
        }
    }

    private static class Middle implements AutoCloseable {
        private AutoCloseable c;

        public Middle(AutoCloseable c) {
            System.out.println("Constructing " + this.getClass().getName());
            this.c = c;
        }

        @Override
        public void close() throws Exception {
            System.out.println(this.getClass().getName() + " closed");
            c.close();
        }
    }

    private static class OuterMost implements AutoCloseable {
        private AutoCloseable c;

        public OuterMost(AutoCloseable c) throws Exception {
            System.out.println("Constructing " + this.getClass().getName());
            throw new Exception(this.getClass().getName() + " failed");
        }

        @Override
        public void close() throws Exception {
            System.out.println(this.getClass().getName() + " closed");
            c.close();
        }
    }

    public static final void main(String[] args) {
        // DON'T DO THIS
        try (OuterMost om = new OuterMost(
                new Middle(
                    new InnerMost()
                )
            )
        ) {
            System.out.println("In try block");
        }
        catch (Exception e) {
            System.out.println("In catch block");
        }
        finally {
            System.out.println("In finally block");
        }
        System.out.println("At end of main");
    }
}
...which has the output:
Constructing Example$InnerMost
Constructing Example$Middle
Constructing Example$OuterMost
In catch block
In finally block
At end of main
Note that there are no calls to close there.
If we fix main:
public static final void main(String[] args) {
    try (
        InnerMost im = new InnerMost();
        Middle m = new Middle(im);
        OuterMost om = new OuterMost(m)
    ) {
        System.out.println("In try block");
    }
    catch (Exception e) {
        System.out.println("In catch block");
    }
    finally {
        System.out.println("In finally block");
    }
    System.out.println("At end of main");
}
then we get the appropriate close calls:
Constructing Example$InnerMost
Constructing Example$Middle
Constructing Example$OuterMost
Example$Middle closed
Example$InnerMost closed
Example$InnerMost closed
In catch block
In finally block
At end of main
(Yes, two calls to InnerMost#close is correct; one is from Middle, the other from try-with-resources.)
You can close just the outermost stream; in fact, you don't need to retain references to all the wrapped streams, and you can use the Java 7 try-with-resources statement.
try (BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(
        new GZIPOutputStream(new FileOutputStream(createdFile))))) {
    // write to the buffered writer
}
If you subscribe to YAGNI ("you ain't gonna need it"), you should only add code you actually need. You shouldn't add code you imagine you might need but that in reality doesn't do anything useful.
Take this example and imagine what could possibly go wrong if you didn't do this, and what the impact would be.
try (
    OutputStream outputStream = new FileOutputStream(createdFile);
    GZIPOutputStream gzipOutputStream = new GZIPOutputStream(outputStream);
    OutputStreamWriter osw = new OutputStreamWriter(gzipOutputStream);
    BufferedWriter bw = new BufferedWriter(osw)
) {
    // ...
}
Let's start with FileOutputStream, which calls open to do all the real work.
/**
 * Opens a file, with the specified name, for overwriting or appending.
 * @param name name of file to be opened
 * @param append whether the file is to be opened in append mode
 */
private native void open(String name, boolean append)
    throws FileNotFoundException;
If the file is not found, there is no underlying resource to close, so closing it won't make any difference. If the file exists, it shouldn't be throwing a FileNotFoundException. So there is nothing to be gained by trying to close the resource from this line alone.
The reason you need to close the file is when the file is opened successfully, but you later get an error.
Let's look at the next stream, GZIPOutputStream. There is code here which can throw an exception:
private void writeHeader() throws IOException {
    out.write(new byte[] {
        (byte) GZIP_MAGIC,        // Magic number (short)
        (byte) (GZIP_MAGIC >> 8), // Magic number (short)
        Deflater.DEFLATED,        // Compression method (CM)
        0,                        // Flags (FLG)
        0,                        // Modification time MTIME (int)
        0,                        // Modification time MTIME (int)
        0,                        // Modification time MTIME (int)
        0,                        // Modification time MTIME (int)
        0,                        // Extra flags (XFLG)
        0                         // Operating system (OS)
    });
}
This writes the header of the file. Now, it would be very unusual to be able to open a file for writing but not be able to write even these 10 header bytes to it, but let's imagine that could happen and that we don't close the file afterwards. What does happen to a file which is not closed?
Any unflushed writes are discarded; in this case no bytes were successfully written to the stream, which isn't buffered at this point anyway. But a file which is not closed doesn't stay open forever; instead, FileOutputStream has
protected void finalize() throws IOException {
    if (fd != null) {
        if (fd == FileDescriptor.out || fd == FileDescriptor.err) {
            flush();
        } else {
            /* if fd is shared, the references in FileDescriptor
             * will ensure that finalizer is only called when
             * safe to do so. All references using the fd have
             * become unreachable. We can call close()
             */
            close();
        }
    }
}
If you don't close a file at all, it gets closed anyway, just not immediately (and, as I said, data left in a buffer would be lost this way, but there is none at this point).
What is the consequence of not closing the file immediately? Under normal conditions, you potentially lose some data and you potentially run out of file descriptors. But if you have a system where you can create files but can't write anything to them, you have a bigger problem; i.e. it's hard to imagine why you would repeatedly try to create this file despite the fact that you keep failing.
Neither OutputStreamWriter nor BufferedWriter throws IOException in its constructor, so it's not clear what problem they would cause. In the case of BufferedWriter, you could get an OutOfMemoryError; that would immediately trigger a GC which, as we have seen, will close the file anyway.
If all of the streams have been instantiated then closing only the outermost is just fine.
The documentation of the Closeable interface states that the close method:
Closes this stream and releases any system resources associated with it.
Releasing system resources includes closing underlying streams.
It also states that:
If the stream is already closed then invoking this method has no effect.
So if you close them explicitly afterwards, nothing wrong will happen.
I'd rather use the try-with-resources syntax (Java 7), e.g.
try (OutputStream outputStream = new FileOutputStream(createdFile)) {
    ...
}
It will be fine if you only close the last stream; the close call will be sent to the underlying streams, too.
No, the topmost level Stream or reader will ensure that all underlying streams / readers are closed.
Check the close() method implementation of your topmost level stream.
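For instance, JDK wrapper streams typically extend FilterOutputStream, whose close() delegates roughly like this (a simplified sketch, not the exact JDK source):

@Override
public void close() throws IOException {
    try {
        flush();
    } finally {
        out.close(); // 'out' is the wrapped underlying stream
    }
}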
In Java 7, there is the try-with-resources feature. You don't need to explicitly close your streams; it will take care of that.
Imagine that MyOpenedFile is something wrapping a file with opened streams. Then consider this code:
// method in a Util class
static void safeClose(MyOpenedFile f) {
    if (f != null) {
        try {
            f.close();
        } catch (IOException ex) { /* add logging or console output */ }
    }
}
Actual method for the question:
void doSomeFileOperation(...) throws IOException, ... {
    MyOpenedFile f1 = null;
    MyOpenedFile f2 = null;
    try {
        /* method's "business logic" code begins */
        f1 = new MyOpenedFile(...);
        // do stuff
        f2 = new MyOpenedFile(...);
        // do stuff
        f1.close(); f1 = null;
        // do stuff with f1 closed
        f2.close(); f2 = null;
        // do stuff with f2 closed
        /* method's "business logic" code ends */
    } finally {
        Util.safeClose(f1); f1 = null;
        Util.safeClose(f2); f2 = null;
    }
}
Now this is quite messy and especially error-prone (some code in the finally block might be very hard to get called in unit tests, for example). In C++, for example, a destructor would take care of the cleanup (either called by a scoped pointer's destructor or directly) and the code would be much cleaner.
So, is there better/nicer/cleaner way to wrap above piece of business logic code, so that any exceptions get propagated but both files f1 and f2 get closed (or at least close is attempted on both, even if it fails)?
Also answers pointing to any open source libraries such as Apache Commons, providing nice wrappers are welcome.
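For what it's worth, if MyOpenedFile implements AutoCloseable (or Closeable), Java 7's try-with-resources expresses this directly. A sketch, assuming the "do stuff with fN closed" steps can be moved after the corresponding block:

void doSomeFileOperation() throws IOException {
    try (MyOpenedFile f1 = new MyOpenedFile(/* ... */)) {
        // do stuff with f1 open
    }
    // do stuff with f1 closed
    try (MyOpenedFile f2 = new MyOpenedFile(/* ... */)) {
        // do stuff with f2 open
    }
    // do stuff with f2 closed
}

Close failures propagate (or are attached as suppressed exceptions to the primary one) instead of being silently swallowed, which is usually what you want.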
A File is a wrapper for a String containing a file name, which may or may not refer to an existing file. It is stateless, so you don't need to close it.
A resource you do need to close is a FileInputStream or BufferedReader, and you can close these implicitly with ARM (automatic resource management) in Java 7:
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
}
This will close br when the block exits.
http://www.oracle.com/technetwork/articles/java/trywithresources-401775.html
Take a look at The try-with-resources Statement which will close resources after the try-block ends.
The File class you use does not seem to be java.io.File, because that class does not have any close() method. In that case, make sure that your own File class implements Closeable to make it work with ARM.
try (FileInputStream f1 = new FileInputStream("test1.txt");
     FileInputStream f2 = new FileInputStream("test2.txt")) {
    // Some code
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
You don't need to close File objects (which are representations of files on the file system), as mentioned here:
Do I need to close files I perform File.getName() on?
I assume you are asking more about the file streams/readers?
In which case, Java 7 has a nice new feature:
http://www.vineetmanohar.com/2011/03/java-7-try-with-auto-closable-resources/
If you are working on an older version of Java, I'd just keep it simple with this:
void doSomeFileOperation(...) throws IOException, ... {
    FileInputStream f1 = null;
    FileInputStream f2 = null;
    try {
        // do stuff
    } finally {
        Util.safeClose(f1);
        Util.safeClose(f2);
    }
}
An option that comes immediately to mind: separate the code that handles files from the code that does any processing. That way you can encapsulate the nasty code that does the open, close, and exception handling.
The other point is that the sample you have does a lot of extra, unneeded steps:
void doSomeFileOperation(...) throws IOException, ... {
    File f1 = null;
    File f2 = null;
    try {
        f1 = new File(...);
        f2 = new File(...);
        // callback to another class / method that does the real work
    } finally {
        Util.safeClose(f1);
        Util.safeClose(f2);
    }
}
You don't need to set the File instances to null. If you try to use them, you'll get an exception.
I do wonder what File object you're using, though. The standard File class in Java doesn't have a close() method.
I serialize an object and save it as a file on my HDD. When I read it back, it throws an EOFException on only some occasions. After a couple of hours of debugging, I am not able to find the problem.
Here is my code:
public void serialize(MyClass myClass, String path) {
    FileOutputStream foStream = null;
    ObjectOutputStream ooStream = null;
    try {
        File file = new File(path);
        if (!file.exists()) {
            file.createNewFile();
        }
        foStream = new FileOutputStream(file);
        ooStream = new ObjectOutputStream(foStream);
        ooStream.writeObject(myClass);
    } catch (Throwable t) {
        log.error(t);
    } finally {
        if (ooStream != null) {
            try {
                ooStream.flush();
                ooStream.close();
            } catch (IOException e) {
                log.error(e);
            }
        }
    }
}
For reading the object back:
public MyClass deSerialize(String path) {
    MyClass myClass = null;
    FileInputStream fiStream = null;
    ObjectInputStream oiStream = null;
    String errorMessage = "";
    try {
        File file = new File(path);
        if (!file.exists()) {
            return null;
        }
        fiStream = new FileInputStream(path);
        oiStream = new ObjectInputStream(fiStream);
        Object o = oiStream.readObject();
        myClass = (MyClass) o;
    } catch (Throwable t) {
        log.warn(t);
    } finally {
        if (oiStream != null) {
            try {
                oiStream.close();
            } catch (IOException e) {
                log.error(e);
            }
        }
    }
    return myClass;
}
Stacktrace:
java.io.EOFException
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2498)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1273)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
    at java.util.LinkedList.readObject(LinkedList.java:776)
    at sun.reflect.GeneratedMethodAccessor583.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:946)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1809)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1832)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
Question:
Is my serialized object now corrupted, and therefore rubbish?
This object is responsible for rendering UI state saved by the user. When the user logs in, it should render the previously saved state of the UI. However, for some users the file cannot be deserialized.
EOFException means you are trying to read past the end of the file. Normally you don't have any way of knowing whether there are more objects to read, other than trying it, so you shouldn't regard EOFException as a problem in the first place. If it is thrown in a situation where you think you know there are more objects in the file, e.g. when you have prefixed an object count to the file, it indicates a problem with the code that wrote the file, or possibly corruption of the file itself. Another example is a zero-length file that shouldn't be zero length. Whatever the problem is, it can't be solved by the reading end; it is already too late.
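For example, a common pattern when reading an unknown number of objects is to treat EOFException as the normal end-of-stream signal (a sketch, not the asker's code):

// Sketch: read objects until EOFException marks the end of the file.
static List<Object> readAll(String path) throws IOException, ClassNotFoundException {
    List<Object> objects = new ArrayList<Object>();
    ObjectInputStream in = new ObjectInputStream(new FileInputStream(path));
    try {
        while (true) {
            objects.add(in.readObject()); // throws EOFException when the file is exhausted
        }
    } catch (EOFException endOfStream) {
        // expected: no more objects to read
    } finally {
        in.close();
    }
    return objects;
}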
I cannot see any problem with the writing and reading of the file.
So my best guess is that the problem is at the file level. For example:
you could be writing one file and reading a different one, or
you could be reading the file before the file write completes, or
something else could be clobbering the file in between the running of your write code and read code.
I suggest that you add some tracing code that uses File.length() to find out what the file size is after you've written it and before you read it.
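That tracing might look something like this (a hedged sketch; the path variable is illustrative):

// writer side, right after closing the stream
System.out.println("after write: " + new File(path).length() + " bytes");
// reader side, right before opening the stream
System.out.println("before read: " + new File(path).length() + " bytes");

If the two lengths differ, something is truncating or rewriting the file in between.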
A couple of other possibilities:
the writer and reader code is using different versions of MyClass (or a dependent class) with incompatible representations and the same serialVersionUID values, or
you could be using custom readObject and writeObject methods that are incompatible.
In my case, the EOFException was solved by ensuring the reads and writes to the file were thread-safe. As Stephen C answered above, if you try to write to a file that you are also trying to read from, say from another thread, you may be stepping on the ObjectInputStream, which will throw an EOFException in this case.
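A minimal sketch of that fix, assuming serialize and deSerialize live on the same object (the lock field is illustrative):

private final Object fileLock = new Object();

public void serialize(MyClass myClass, String path) {
    synchronized (fileLock) {
        // ... existing write code, unchanged ...
    }
}

public MyClass deSerialize(String path) {
    synchronized (fileLock) {
        // ... existing read code, unchanged ...
        return null; // placeholder for the existing return
    }
}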
I have the following code in a java Web Service:
public boolean makeFile(String fileName, String audio) {
    if (makeUserFolder()) {
        File file = new File(getUserFolderPath() + fileName + amr);
        FileOutputStream fileOutputStream = null;
        try {
            file.createNewFile();
            fileOutputStream = new FileOutputStream(file);
            fileOutputStream.write(Base64.decode(audio));
            return true;
        } catch (FileNotFoundException ex) {
            return false;
        } catch (IOException ex) {
            return false;
        } finally {
            try {
                fileOutputStream.close();
                convertFile(fileName);
            } catch (IOException ex) {
                Logger.getLogger(FileUtils.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    } else {
        return false;
    }
}
public boolean convertFile(String fileName) {
    Process ffmpeg;
    String filePath = this.userFolderPath + fileName;
    try {
        ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", filePath + amr, filePath + mp3);
        pb.redirectErrorStream(true); // must pass true; the no-arg overload is just a getter
        ffmpeg = pb.start();
    } catch (IOException ex) {
        return false;
    }
    return true;
}
It used to work, and now it simply won't execute the ffmpeg conversion for some reason. I thought it was a problem with my file, but after running the command from the terminal no errors are thrown or anything. I thought it was maybe a permissions issue, but all the permissions have been granted in the folder I'm saving the files to. I noticed that the input BufferedReader is being set to null after running the process. Any idea what's happening?
First of all, a small nitpick with your code: when you create the FileOutputStream, you create it using a string rather than a File, even though you have already created the File. You might as well reuse that rather than force the FileOutputStream to instantiate the File itself.
Another small nitpick: when you are writing out the audio file, you should enclose the write in a try block and close the output stream in a finally block. If you are allowed to add a new library to your project, you might use Guava, which has a method Files.write(byte[], File) that will take care of all the dirty resource management for you.
The only thing that I can see that looks like a definite bug is the fact that you are ignoring the error stream of ffmpeg. If you are blocking waiting for input on the stdout of ffmpeg, then it will not work.
The easiest way to take care of this bug is to use ProcessBuilder instead of Runtime:
ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", filePath + amr, filePath + mp3);
pb.redirectErrorStream(true); // makes both stdout and stderr be redirected to process.getInputStream()
ffmpeg = pb.start();
Note that you must pass true here; the no-argument redirectErrorStream() is just the getter and changes nothing.
If you start it this way, then your current code will be able to read both input streams fully. It is possible that the stderr was hiding some error that you were not able to see due to not reading it.
If that was not your problem, I would recommend using absolute paths with ffmpeg...in other words:
int lastdot = file.getName().lastIndexOf('.');
File mp3file = new File(file.getParentFile(),file.getName().substring(0,lastdot)+".mp3");
ProcessBuilder pb = new ProcessBuilder("ffmpeg","-i",file.getAbsolutePath(),mp3file.getAbsolutePath());
// ...
If that doesn't work, I would change ffmpeg to be an absolute path as well (in order to rule out path issues).
Edit: Further suggestions.
I would personally refactor the writing code into its own method, so that you can use it elsewhere if necessary. In other words:
public static boolean write(byte[] content, File to) {
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(to); // may throw FileNotFoundException, an IOException
        fos.write(content);
    } catch (IOException io) {
        // logging code here
        return false;
    } finally {
        closeQuietly(fos);
    }
    return true;
}
public static void closeQuietly(Closeable toClose) {
    if (toClose == null) { return; }
    try {
        toClose.close();
    } catch (IOException e) {
        // logging code here
    }
}
The reason that I made the closeQuietly(Closeable) method is due to the fact that if you do not close it in that way, there is a possibility that an exception will be thrown by the close() method, and that exception will obscure the exception that was thrown originally. If you put these in a utility class (although looking at your code, I assume that the class that it is currently in is named FileUtils), then you will be able to use them throughout your application whenever you need to deal with file output.
This will allow you to rewrite the block as:
File file = new File(getUserFolderPath() + fileName + amr);
file.createNewFile();
write(Base64.decode(audio),file);
convertFile(fileName);
I don't know whether or not you should do this; however, if you want to be sure that the ffmpeg process has completed, you should call ffmpeg.waitFor(). If you do that, then you should examine ffmpeg.exitValue() to make sure that it completed successfully.
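A hedged sketch of that check (output draining elided; if ffmpeg writes a lot to its merged output stream, you must also consume ffmpeg.getInputStream(), or waitFor() can block forever on a full pipe):

ffmpeg = pb.start();
// ... consume ffmpeg.getInputStream() here ...
int exitCode = ffmpeg.waitFor(); // block until ffmpeg finishes
if (exitCode != 0) {
    return false; // non-zero exit code: the conversion failed
}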
Another thing that you might want to do is once it has completed, write what it output to a log file so you have a record of what happened, just in case something happens.
I have an application that writes information to file. This information is used post-execution to determine pass/failure/correctness of the application. I'd like to be able to read the file as it is being written so that I can do these pass/failure/correctness checks in real time.
I assume it is possible to do this, but what are the gotchas involved when using Java? If the reading catches up to the writing, will it just wait for more writes until the file is closed, or will the read throw an exception at this point? If the latter, what do I do then?
My intuition is currently pushing me towards BufferedStreams. Is this the way to go?
I could not get the example to work using FileChannel.read(ByteBuffer), because it isn't a blocking read. I did, however, get the code below to work:
// fields of a Thread subclass
boolean running = true;
BufferedInputStream reader = new BufferedInputStream(new FileInputStream("out.txt"));

public void run() {
    while (running) {
        try {
            if (reader.available() > 0) {
                System.out.print((char) reader.read());
            } else {
                sleep(500);
            }
        } catch (InterruptedException ex) {
            running = false;
        } catch (IOException ex) {
            running = false; // stop on read errors as well
        }
    }
}
Of course the same thing would work as a timer instead of a thread, but I leave that up to the programmer. I'm still looking for a better way, but this works for me for now.
Oh, and I'll caveat this with: I'm using Java 1.4.2. Yes, I know I'm still in the stone age.
If you want to read a file while it is being written, and only read the new content, the following will help you achieve the same.
To run this program, launch it from a command prompt/terminal window and pass the name of the file to read. It will read the file until you kill the program.
java FileReader c:\myfile.txt
As you type a line of text and save the file from Notepad, you will see the text printed in the console.
public class FileReader {

    public static void main(String args[]) throws Exception {
        if (args.length > 0) {
            File file = new File(args[0]);
            System.out.println(file.getAbsolutePath());
            if (file.exists() && file.canRead()) {
                long fileLength = file.length();
                readFile(file, 0L);
                while (true) {
                    if (fileLength < file.length()) {
                        readFile(file, fileLength);
                        fileLength = file.length();
                    }
                }
            }
        } else {
            System.out.println("no file to read");
        }
    }

    public static void readFile(File file, Long fileLength) throws IOException {
        String line = null;
        BufferedReader in = new BufferedReader(new java.io.FileReader(file));
        in.skip(fileLength);
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
You might also take a look at the Java NIO FileChannel for locking part of a file:
http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html
This method of FileChannel might be a start:
lock(long position, long size, boolean shared)
An invocation of this method will block until the region can be locked.
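A hedged sketch of acquiring a shared lock over a whole file before reading it (the file name is illustrative):

RandomAccessFile raf = new RandomAccessFile("out.txt", "r");
FileChannel channel = raf.getChannel();
// shared = true: other readers may hold the lock too; the call blocks until available
FileLock lock = channel.lock(0L, Long.MAX_VALUE, true);
try {
    // read from the channel while the region is locked
} finally {
    lock.release();
    raf.close();
}

Note that file locks are held on behalf of the whole JVM, and their interaction with other processes is platform-dependent.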
I totally agree with Joshua's response; Tailer is fit for the job in this situation. Here is an example: it writes a line to a file every 150 ms, while reading that very same file every 2500 ms.
public class TailerTest
{
    public static void main(String[] args)
    {
        File f = new File("/tmp/test.txt");
        MyListener listener = new MyListener();
        Tailer.create(f, listener, 2500);
        try
        {
            FileOutputStream fos = new FileOutputStream(f);
            int i = 0;
            while (i < 200)
            {
                fos.write(("test" + ++i + "\n").getBytes());
                Thread.sleep(150);
            }
            fos.close();
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    private static class MyListener extends TailerListenerAdapter
    {
        @Override
        public void handle(String line)
        {
            System.out.println(line);
        }
    }
}
The answer seems to be "no" ... and "yes". There seems to be no real way to know whether a file is open for writing by another application. So reading from such a file will just progress until the content is exhausted. I took Mike's advice and wrote some test code:
Writer.java writes a string to a file and then waits for the user to hit Enter before writing another line. The idea is that it can be started up, and then a reader can be started to see how it copes with the "partial" file. The reader I wrote is in Reader.java.
Writer.java
public class Writer extends Object
{
    Writer() {
    }

    public static String[] strings =
    {
        "Hello World",
        "Goodbye World"
    };

    public static void main(String[] args)
            throws java.io.IOException {
        java.io.PrintWriter pw =
            new java.io.PrintWriter(new java.io.FileOutputStream("out.txt"), true);
        for (String s : strings) {
            pw.println(s);
            System.in.read();
        }
        pw.close();
    }
}
Reader.java
public class Reader extends Object
{
    Reader() {
    }

    public static void main(String[] args)
            throws Exception {
        java.io.FileInputStream in = new java.io.FileInputStream("out.txt");
        java.nio.channels.FileChannel fc = in.getChannel();
        java.nio.ByteBuffer bb = java.nio.ByteBuffer.allocate(10);
        while (fc.read(bb) >= 0) {
            bb.flip();
            while (bb.hasRemaining()) {
                System.out.println((char) bb.get());
            }
            bb.clear();
        }
        System.exit(0);
    }
}
No guarantees that this code is best practice.
This leaves the option suggested by Mike of periodically checking whether there is new data to be read from the file. This then requires user intervention to close the file reader once it is determined that the reading is complete. Or the reader needs to be made aware of the content of the file and be able to determine an end-of-write condition. If the content were XML, the end of the document could be used to signal this.
There is an open-source Java graphic tail that does this.
https://stackoverflow.com/a/559146/1255493
public void run() {
    try {
        while (_running) {
            Thread.sleep(_updateInterval);
            long len = _file.length();
            if (len < _filePointer) {
                // Log must have been jibbled or deleted.
                this.appendMessage("Log file was reset. Restarting logging from start of file.");
                _filePointer = len;
            } else if (len > _filePointer) {
                // File must have had something added to it!
                RandomAccessFile raf = new RandomAccessFile(_file, "r");
                raf.seek(_filePointer);
                String line = null;
                while ((line = raf.readLine()) != null) {
                    this.appendLine(line);
                }
                _filePointer = raf.getFilePointer();
                raf.close();
            }
        }
    } catch (Exception e) {
        this.appendMessage("Fatal error reading log file, log tailing has stopped.");
    }
    // dispose();
}
On some platforms (notably Windows), you may not be able to read a file which is opened by another process using FileInputStream, FileReader or RandomAccessFile.
But using FileChannel directly will work:
private static byte[] readSharedFile(File file) throws IOException {
    byte buffer[] = new byte[(int) file.length()];
    final FileChannel fc = FileChannel.open(file.toPath(), EnumSet.of(StandardOpenOption.READ));
    final ByteBuffer dst = ByteBuffer.wrap(buffer);
    fc.read(dst);
    fc.close();
    return buffer;
}
This is not Java per se, but you may run into issues where you have written something to a file but it hasn't actually been written yet; it might be in a cache somewhere, and reading from the same file may not give you the new information.
Short version: use flush() or whatever the relevant system call is to ensure that your data is actually written to the file.
Note I am not talking about the OS-level disk cache; if your data gets into there, it should appear in a read() after this point. It may be that the language itself caches writes, waiting until a buffer fills up or the file is flushed/closed.
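For example, with a buffered writer the data may sit in a Java-level buffer until you flush (a sketch; the file name is illustrative):

BufferedWriter out = new BufferedWriter(new FileWriter("out.txt"));
out.write("partial results");
out.flush(); // push buffered data down to the OS so a concurrent reader can see it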
I've never tried it, but you should write a test case to see whether reading from a stream after you have hit the end will work, regardless of whether more data is written to the file.
Is there a reason you can't use a piped input/output stream? Is the data being written and read from the same application (if so, you already have the data, so why do you need to read from the file)?
Otherwise, maybe read till end of file, then monitor for changes, seek to where you left off, and continue... though watch out for race conditions.
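If both ends are in the same JVM, a piped pair looks roughly like this (a sketch; thread handling is omitted):

PipedOutputStream producerEnd = new PipedOutputStream();
PipedInputStream consumerEnd = new PipedInputStream(producerEnd);
// a writer thread writes to producerEnd; a reader thread reads from
// consumerEnd, blocking until data arrives instead of polling a file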