Is there any good example to demonstrate a file descriptor leak in Android? I read somewhere that it occurs if we don't close streams, for example FileInputStream or FileOutputStream, but I could not find a good reference example which demonstrates it.
Please share a blog or code snippet. Thank you!
Because Dalvik's FileInputStream will close itself when it is garbage collected (this is also true for OpenJDK/Oracle), it is less common than you'd think to actually leak file descriptors. Of course, the file descriptors will be "leaked" until the GC runs, so depending on your program it could take a while before they are reclaimed.
To accomplish a more permanent leak you will have to prevent the stream from being garbage collected by keeping a reference to it somewhere in memory.
Here's a short example that loads a properties file once a second and keeps track of every time it has changed:
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.time.ZonedDateTime;
import java.util.LinkedList;
import java.util.Properties;

public class StreamLeak {

    /**
     * A revision of the properties.
     */
    public static class Revision {
        final ZonedDateTime time = ZonedDateTime.now();
        final PropertiesFile file;

        Revision(PropertiesFile file) {
            this.file = file;
        }
    }

    /**
     * Container for {@link Properties} that implements lazy loading.
     */
    public static class PropertiesFile {
        private final InputStream stream;
        private Properties properties;

        PropertiesFile(InputStream stream) {
            this.stream = stream;
        }

        Properties getProperties() {
            if (this.properties == null) {
                properties = new Properties();
                try {
                    properties.load(stream);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            return properties;
        }

        @Override
        public boolean equals(Object o) {
            if (o instanceof PropertiesFile) {
                return ((PropertiesFile) o).getProperties().equals(getProperties());
            }
            return false;
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        URL url = new URL(args[0]);
        LinkedList<Revision> revisions = new LinkedList<>();
        // Loop indefinitely
        while (true) {
            // Load the file
            PropertiesFile pf = new PropertiesFile(url.openStream());
            // See if the file has changed
            if (revisions.isEmpty() || !revisions.getLast().file.equals(pf)) {
                // Store the new revision
                revisions.add(new Revision(pf));
                System.out.println(url + " has changed, total revisions: " + revisions.size());
            }
            Thread.sleep(1000);
        }
    }
}
Because of the lazy loading, the InputStream is kept inside the PropertiesFile, which in turn is kept alive by every new Revision, and since we never close the stream we leak a file descriptor each time the file is loaded.
Now, these open file descriptors will be closed by the OS when the program terminates, but as long as the program is running it will continue to leak file descriptors, as can be seen by using lsof:
$ lsof | grep pf.properties | head -n 3
java 6938 raniz 48r REG 252,0 0 262694 /tmp/pf.properties
java 6938 raniz 49r REG 252,0 0 262694 /tmp/pf.properties
java 6938 raniz 50r REG 252,0 0 262694 /tmp/pf.properties
$ lsof | grep pf.properties | wc -l
431
And if we force the GC to run we can see that most of these are returned:
$ jcmd 6938 GC.run
6938:
Command executed successfully
$ lsof | grep pf.properties | wc -l
2
The remaining two descriptors belong to the streams held by the stored Revisions.
I ran this on my Ubuntu machine but the output would look similar if run on Android.
InputStream in = null;
try {
    // socket is assumed to be a connected java.net.Socket
    in = new BufferedInputStream(socket.getInputStream());
    // Do your stuff with the input stream
} catch (Exception e) {
    // Handle your exception
} finally {
    // Close the stream here
    if (in != null) {
        try {
            in.close();
        } catch (IOException e) {
            Log.e(TAG, "Unable to close stream: " + e);
        }
    }
}
The idea is to close your file descriptor in the finally block. Whether you finish successfully or an exception occurs, the file descriptor will be properly closed.
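On Java 7+ (Android API 19+), try-with-resources achieves the same thing with less boilerplate; a minimal sketch, assuming socket is a connected java.net.Socket:
// try-with-resources closes the stream automatically, even when an
// exception is thrown inside the block
try (InputStream in = new BufferedInputStream(socket.getInputStream())) {
    // Do your stuff with the input stream
} catch (IOException e) {
    // Handle your exception
}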
Now, if you're looking for something to demonstrate how to NOT do this properly, just wrap this code in a while(1) loop, comment out the in.close() line, and put a break; in your catch block so that when it blows up you'll break out of your infinite loop.
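For instance, a deliberately broken sketch along those lines, using a FileInputStream so each iteration really opens a fresh descriptor (the path is illustrative):
// DON'T do this: each iteration opens a descriptor that is never closed
while (true) {
    try {
        InputStream in = new FileInputStream("/tmp/pf.properties"); // illustrative path
        in.read(); // do some work with the stream
        // in.close() deliberately omitted -> the fd leaks
    } catch (IOException e) {
        // Without a GC run, open() eventually fails with "Too many open files"
        break;
    }
}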
InputStream in;
try {
    in = new FileInputStream(new File("abc"));
    in.read(); // Do some stuff with the open FileInputStream
    // If an exception is thrown above, the stream is never closed:
    // execution jumps straight to the catch block, skipping the
    // close() below and leaking the fd assigned to file "abc"
    in.close();
} catch (Exception e) {
    // Handle your exception
}
Using a JarURLConnection I am able to read a file (e.g. version.txt) from a JAR hosted on Dropbox using the following code structure:
public static void checkForUpdates() {
    JarURLConnection jarConn = null;
    try {
        System.out.println("Checking for updates..");
        URL updateURL = new URL("jar:https://www.dropbox.com/s/.../foo.jar?dl=1!/version.txt");
        jarConn = (JarURLConnection) updateURL.openConnection();
        JarFile jarFile = jarConn.getJarFile();
        InputStream inputStream = jarFile.getInputStream(jarConn.getJarEntry());
        BufferedReader versionTXT = new BufferedReader(new InputStreamReader(inputStream));
        /* Version comparing left out */
        // If there is an update:
        System.out.println("Update found!");
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (jarConn != null) {
            try {
                // This doesn't seem to work
                jarConn.getInputStream().close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
It works correctly when I call this method for the first time; you can see a delay between the "checking for updates"-message and the "result"-message.
When I upload a new foo.jar to Dropbox and run the checkForUpdates() method again (without restarting the JVM), it uses the 'old' jar, and there is no delay between the checking & result messages. When I do restart the JVM, it uses the 'new' jar and shows a delay between the messages.
Is there any way to close the JarURLConnection, other than closing the InputStream (which doesn't seem to work)?
I have tried the following things:
Closing the JarURLConnection's OutputStream -> Throws an error saying that the connection doesn't have an OutputStream.
Closing the URLConnection's input and output streams (by creating a new variable before I cast it to a JarURLConnection) -> Closing the InputStream doesn't seem to do anything, and closing the OutputStream throws the same error.
Closing the BufferedReader -> No effect.
If it's not possible to close the JarURLConnection, is it possible to create a new one that reconnects? Restarting the JVM apparently does something that makes it reconnect; is it possible to simulate that without restarting the JVM?
Thanks in advance.
JarURLConnection uses a cache for jar files. Therefore you see no delay in the second attempt.
So simply turn off the cache before you access the Jar file:
JarURLConnection con = ...;
con.setUseCaches(false);
JarFile jarFile = con.getJarFile();
This might answer your question:
URLConnection cache prevents updating JARs with the JarArchiveRepository
The only workaround I found is to disable JarURLConnection caching, and it then works as expected:
urlConnection.setDefaultUseCaches(false);
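Note that setDefaultUseCaches(false) changes the default for all URLConnections created afterwards, whereas setUseCaches(false) affects only the one connection; a sketch (the URL and variable names are illustrative):
URL url = new URL("jar:https://example.com/foo.jar!/version.txt"); // illustrative URL
JarURLConnection conn = (JarURLConnection) url.openConnection();
conn.setUseCaches(false);           // this connection only: skip the jar cache
// conn.setDefaultUseCaches(false); // or: change the default for future connections
JarFile jar = conn.getJarFile();    // re-fetches instead of reusing the cached jar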
You can see the Sun code here:
public void connect() throws IOException {
    if (!connected) {
        /* the factory call will do the security checks */
        jarFile = factory.get(getJarFileURL(), getUseCaches());

        /* we also ask the factory the permission that was required
         * to get the jarFile, and set it as our permission.
         */
        if (getUseCaches()) {
            jarFileURLConnection = factory.getConnection(jarFile);
        }

        if ((entryName != null)) {
            jarEntry = (JarEntry) jarFile.getEntry(entryName);
            if (jarEntry == null) {
                try {
                    if (!getUseCaches()) {
                        jarFile.close();
                    }
                } catch (Exception e) {
                }
                throw new FileNotFoundException("JAR entry " + entryName +
                        " not found in " + jarFile.getName());
            }
        }
        connected = true;
    }
}
I'm writing a wrapper program in Java that's supposed to pass arguments to other processes by writing to their standard input streams and reading the responses from their standard output streams. However, when the String I try to pass in is too large, PrintWriter.print simply blocks. No error, it just freezes. Is there a good workaround for this?
Relevant code
public class Wrapper {
    Process process;
    PrintWriter writer;

    public Wrapper(String command) {
        start(command);
    }

    public void call(String args) {
        writer.println(args); // Blocks here
        writer.flush();
        // Other code
    }

    public void start(String command) {
        try {
            ProcessBuilder pb = new ProcessBuilder(command.split(" "));
            pb.redirectErrorStream(true);
            process = pb.start();
            // STDIN of the process.
            writer = new PrintWriter(new OutputStreamWriter(process.getOutputStream(), "UTF-8"));
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Process ended catastrophically.");
        }
    }
}
If I try using
writer.print(args);
writer.print("\n");
it can handle a larger string before freezing, but still ultimately locks up.
Is there maybe a buffered stream way to fix this? Does print block on the process's stream having enough space, or something?
Update
In response to some answers and comments, I've included more information.
Operating System is Windows 7
Using a BufferedWriter slowed things down, but didn't stop it from blocking eventually.
Strings can get very long, as large as 100,000 characters.
The process's input is consumed, but line by line, i.e. Scanner.nextLine();
Test code
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeoutException;

import ProcessRunner.Wrapper;

public class test {
    public static void main(String[] args) {
        System.out.println("Building...");
        Wrapper w = new Wrapper("java echo");
        System.out.println("Calling...");
        String market = "aaaaaa";
        for (int i = 0; i < 1000; i++) {
            try {
                System.out.println(w.call(market, 1000));
            } catch (InterruptedException | ExecutionException
                    | TimeoutException e) {
                System.out.println("Timed out");
            }
            market = market + market;
            System.out.println("Size = " + market.length());
        }
        System.out.println("Stopping...");
        try {
            w.stop();
        } catch (IOException e) {
            e.printStackTrace();
            System.out.println("Stop failed :(");
        }
    }
}
Test Process:
You have to compile this file first, and make sure the .class is in the same folder as the test's .class file.
import java.util.Scanner;

public class echo {
    public static void main(String[] args) {
        // Create the Scanner once; re-creating it inside the loop can
        // drop buffered input
        Scanner stdIn = new Scanner(System.in);
        while (true) {
            System.out.println(stdIn.nextLine());
        }
    }
}
I suspect that what is happening here is that the external process is writing to its standard output. Since your Java code doesn't read it, it eventually fills up the external process's standard out (or err) pipe buffer. That blocks the external process, which means it stops reading from its input pipe .... and your Java process freezes.
If this is the problem, then using a buffered writer won't fix it. You either need to read the external process's output or redirect it to a file (e.g. "/dev/null" on Linux).
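A minimal sketch of that fix: drain the child's stdout on a background thread so its output pipe never fills up (the command and class name are illustrative):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;

public class NoDeadlockWrapper {
    public static void main(String[] args) throws IOException {
        ProcessBuilder pb = new ProcessBuilder("java", "echo"); // illustrative command
        pb.redirectErrorStream(true);
        Process process = pb.start();

        // Consume the child's output concurrently so it never blocks
        // on a full stdout pipe (which would stop it reading stdin).
        Thread drainer = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(process.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println("child: " + line);
                }
            } catch (IOException ignored) {
                // Child exited; nothing more to read.
            }
        });
        drainer.setDaemon(true);
        drainer.start();

        // Large writes to the child's stdin can no longer deadlock.
        PrintWriter writer = new PrintWriter(
                new OutputStreamWriter(process.getOutputStream(), "UTF-8"));
        StringBuilder big = new StringBuilder();
        for (int i = 0; i < 100_000; i++) {
            big.append('a');
        }
        writer.println(big);
        writer.flush();
    }
}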
Writing to any pipe or socket by any means in java.io blocks if the peer is reading more slowly than you are writing.
There's nothing you can do about it.
I have the following Java code which iterates through all the files in a directory and deletes them.
for (File file : tmpDir.listFiles())
{
    file.delete();
}
It does not, however, delete all the files. Some, usually 20-30 out of a couple of thousand, are left behind when I do this. Is it possible to fix this, or have I stumbled upon some Java voodoo that is best left alone?
File.delete() returns a boolean value; you should check it. From the JavaDoc:
Returns:
true if and only if the file or directory is successfully deleted; false otherwise
If it returns false, it may well be that you do not have permission to delete the file.
In that case you can check whether the file is writable by the application and, if not, attempt to make it writable; this also returns a boolean. If that succeeds, you can try deleting again.
You could use a utility method:
private void deleteFile(final File f) throws IOException {
    if (f.delete()) {
        return;
    }
    if (!f.canWrite() && !f.setWritable(true)) {
        throw new IOException("No write permissions on file '" + f + "' and cannot set writeable.");
    }
    if (!f.delete()) {
        throw new IOException("Failed to delete file '" + f + "' even after setting writeable; file may be locked.");
    }
}
I would also take their advice in the JavaDoc:
Note that the Files class defines the delete method to throw an
IOException when a file cannot be deleted. This is useful for error
reporting and to diagnose why a file cannot be deleted.
Provided that you are using Java 7 that is. That method throws a number of exceptions that you can handle:
try {
    Files.delete(path);
} catch (NoSuchFileException x) {
    System.err.format("%s: no such" + " file or directory%n", path);
} catch (DirectoryNotEmptyException x) {
    System.err.format("%s not empty%n", path);
} catch (IOException x) {
    // File permission problems are caught here.
    System.err.println(x);
}
Example taken from the Oracle tutorial page.
Forcing the garbage collector to run using System.gc(); made all the files deletable.
Make sure that you don't have any open streams, like BufferedReader/Writer, FileReader/Writer, etc., on the file. Close them first, then you should be able to delete it.
One more point: if you open a BufferedReader via another reader, like a FileReader, closing the outermost reader closes the wrapped one too, but it is safest to keep references to both so you can still close the underlying reader if constructing the wrapper fails.
So instead of this:
BufferedReader reader = new BufferedReader(new FileReader(new File(filePath)));
do this:
BufferedReader bufferedReader = null;
FileReader fileReader = null;
try {
    fileReader = new FileReader(readFile);
    bufferedReader = new BufferedReader(fileReader);
    // ... read from the file ...
} catch (IOException e) {
    e.printStackTrace();
}
...
try {
    bufferedReader.close();
    fileReader.close();
    readFile.delete();
} catch (IOException e) {
    e.printStackTrace();
}
I have the following code in a java Web Service:
public boolean makeFile(String fileName, String audio) {
    if (makeUserFolder()) {
        File file = new File(getUserFolderPath() + fileName + amr);
        FileOutputStream fileOutputStream = null;
        try {
            file.createNewFile();
            fileOutputStream = new FileOutputStream(file);
            fileOutputStream.write(Base64.decode(audio));
            return true;
        } catch (FileNotFoundException ex) {
            return false;
        } catch (IOException ex) {
            return false;
        } finally {
            try {
                if (fileOutputStream != null) {
                    fileOutputStream.close();
                }
                convertFile(fileName);
            } catch (IOException ex) {
                Logger.getLogger(FileUtils.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    } else {
        return false;
    }
}

public boolean convertFile(String fileName) {
    Process ffmpeg;
    String filePath = this.userFolderPath + fileName;
    try {
        ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", filePath + amr, filePath + mp3);
        pb.redirectErrorStream(true);
        ffmpeg = pb.start();
    } catch (IOException ex) {
        return false;
    }
    return true;
}
It used to work, and now it simply won't execute the ffmpeg conversion for some reason. I thought it was a problem with my file, but after running the command from the terminal no errors are thrown or anything. I thought it was maybe a permissions issue, but all the permissions have been granted in the folder I'm saving the files to. I noticed that the input BufferedReader is being set to null after running the process; any idea what's happening?
First of all, a small nitpick with your code...when you create the FileOutputStream you create it using a string rather than a File, when you have already created the File before, so you might as well recycle that rather than force the FileOutputStream to instantiate the File itself.
Another small nitpick is the fact that when you are writing out the audio file, you should enclose that in a try block and close the output stream in a finally block. If you are allowed to add a new library to your project, you might use Guava which has a method Files.write(byte[],File), which will take care of all the dirty resource management for you.
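For reference, the Guava call mentioned above would look something like this (assuming Guava is on the classpath):
// Guava handles opening, writing, and closing the stream internally
com.google.common.io.Files.write(Base64.decode(audio), file);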
The only thing that I can see that looks like a definite bug is the fact that you are ignoring the error stream of ffmpeg. If you are blocking waiting for input on the stdout of ffmpeg, then it will not work.
The easiest way to take care of this bug is to use ProcessBuilder instead of Runtime.
ProcessBuilder pb = new ProcessBuilder("ffmpeg","-i",filePath+amr,filePath+mp3);
pb.redirectErrorStream(); // This will make both stdout and stderr be redirected to process.getInputStream();
ffmpeg = pb.start();
If you start it this way, then your current code will be able to read both input streams fully. It is possible that the stderr was hiding some error that you were not able to see due to not reading it.
If that was not your problem, I would recommend using absolute paths with ffmpeg...in other words:
int lastdot = file.getName().lastIndexOf('.');
File mp3file = new File(file.getParentFile(), file.getName().substring(0, lastdot) + ".mp3");
ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", file.getAbsolutePath(), mp3file.getAbsolutePath());
// ...
If that doesn't work, I would change ffmpeg to be an absolute path as well (in order to rule out path issues).
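For example (the install location is an assumption; check with `which ffmpeg` on your system):
// Illustrative absolute path to the ffmpeg binary
ProcessBuilder pb = new ProcessBuilder("/usr/bin/ffmpeg", "-i",
        file.getAbsolutePath(), mp3file.getAbsolutePath());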
Edit: Further suggestions.
I would personally refactor the writing code into its own method, so that you can use it elsewhere as necessary. In other words:
public static boolean write(byte[] content, File to) {
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(to);
        fos.write(content);
    } catch (IOException io) {
        // logging code here
        return false;
    } finally {
        closeQuietly(fos);
    }
    return true;
}

public static void closeQuietly(Closeable toClose) {
    if (toClose == null) {
        return;
    }
    try {
        toClose.close();
    } catch (IOException e) {
        // logging code here
    }
}
I made the closeQuietly(Closeable) method because, if you do not close the stream that way, there is a possibility that an exception thrown by close() will obscure the exception that was thrown originally. If you put these in a utility class (although, looking at your code, I assume the class it is currently in is named FileUtils), you will be able to use them throughout your application whenever you need to deal with file output.
This will allow you to rewrite the block as:
File file = new File(getUserFolderPath() + fileName + amr);
file.createNewFile();
write(Base64.decode(audio), file);
convertFile(fileName);
I don't know whether or not you should do this, however if you want to be sure that the ffmpeg process has completed, then you should say ffmpeg.waitFor(); to be sure that it has completed. If you do that, then you should examine ffmpeg.exitValue(); to make sure that it completed successfully.
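For instance (a sketch continuing from the ffmpeg variable above):
try {
    int exitCode = ffmpeg.waitFor(); // blocks until ffmpeg terminates
    if (exitCode != 0) {
        // A non-zero exit value usually means the conversion failed
        System.err.println("ffmpeg exited with code " + exitCode);
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}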
Another thing that you might want to do is once it has completed, write what it output to a log file so you have a record of what happened, just in case something happens.
I have an application that writes information to file. This information is used post-execution to determine pass/failure/correctness of the application. I'd like to be able to read the file as it is being written so that I can do these pass/failure/correctness checks in real time.
I assume it is possible to do this, but what are the gotcha's involved when using Java? If the reading catches up to the writing, will it just wait for more writes up until the file is closed, or will the read throw an exception at this point? If the latter, what do I do then?
My intuition is currently pushing me towards BufferedStreams. Is this the way to go?
I could not get the example to work using FileChannel.read(ByteBuffer) because it isn't a blocking read. I did, however, get the code below to work:
boolean running = true;
BufferedInputStream reader = new BufferedInputStream(new FileInputStream("out.txt"));

public void run() {
    while (running) {
        if (reader.available() > 0) {
            System.out.print((char) reader.read());
        } else {
            try {
                sleep(500);
            } catch (InterruptedException ex) {
                running = false;
            }
        }
    }
}
Of course the same thing would work as a timer instead of a thread, but I leave that up to the programmer. I'm still looking for a better way, but this works for me for now.
Oh, and I'll caveat this with: I'm using 1.4.2. Yes I know I'm in the stone ages still.
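For what it's worth, the timer variant mentioned above might look like this (a sketch using java.util.Timer and assuming the same reader field):
Timer timer = new Timer(true); // daemon timer
timer.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        try {
            while (reader.available() > 0) {
                System.out.print((char) reader.read());
            }
        } catch (IOException e) {
            cancel(); // stop polling on error
        }
    }
}, 0, 500);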
If you want to read a file while it is being written, and only read the new content, the following will help you achieve that.
To run this program, launch it from a command prompt/terminal window and pass the file name to read. It will keep reading the file until you kill the program.
java FileReader c:\myfile.txt
As you type a line of text in Notepad and save it, you will see the text printed in the console.
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;

public class FileReader {

    public static void main(String args[]) throws Exception {
        if (args.length > 0) {
            File file = new File(args[0]);
            System.out.println(file.getAbsolutePath());
            if (file.exists() && file.canRead()) {
                long fileLength = file.length();
                readFile(file, 0L);
                while (true) {
                    // Busy-wait; a short sleep here would reduce CPU usage
                    if (fileLength < file.length()) {
                        readFile(file, fileLength);
                        fileLength = file.length();
                    }
                }
            }
        } else {
            System.out.println("no file to read");
        }
    }

    public static void readFile(File file, Long fileLength) throws IOException {
        String line = null;
        BufferedReader in = new BufferedReader(new java.io.FileReader(file));
        in.skip(fileLength);
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
You might also take a look at the Java FileChannel API for locking a part of a file.
http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html
This function of the FileChannel might be a start:
lock(long position, long size, boolean shared)
An invocation of this method will block until the region can be locked
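A small sketch of that call (the file name is illustrative; FileLock is AutoCloseable since Java 7):
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class LockDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("out.txt", "rw"); // illustrative file
             FileLock lock = raf.getChannel().lock(0, 100, false)) {
            // An exclusive lock on the first 100 bytes is held here;
            // lock() blocked until the region became available.
            System.out.println("Locked: " + lock.isValid());
        } // lock and file released automatically
    }
}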
I totally agree with Joshua's response: Tailer is fit for the job in this situation. Here is an example:
It writes a line to a file every 150 ms, while reading that very same file every 2500 ms:
import java.io.File;
import java.io.FileOutputStream;

import org.apache.commons.io.input.Tailer;
import org.apache.commons.io.input.TailerListenerAdapter;

public class TailerTest
{
    public static void main(String[] args)
    {
        File f = new File("/tmp/test.txt");
        MyListener listener = new MyListener();
        Tailer.create(f, listener, 2500);
        try
        {
            FileOutputStream fos = new FileOutputStream(f);
            int i = 0;
            while (i < 200)
            {
                fos.write(("test" + ++i + "\n").getBytes());
                Thread.sleep(150);
            }
            fos.close();
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }

    private static class MyListener extends TailerListenerAdapter
    {
        @Override
        public void handle(String line)
        {
            System.out.println(line);
        }
    }
}
The answer seems to be "no" ... and "yes". There seems to be no real way to know whether a file is open for writing by another application. So reading from such a file will just progress until the content is exhausted. I took Mike's advice and wrote some test code:
Writer.java writes a string to file and then waits for the user to hit enter before writing another line to file. The idea being that it could be started up, then a reader can be started to see how it copes with the "partial" file. The reader I wrote is in Reader.java.
Writer.java
public class Writer extends Object
{
    Writer() {
    }

    public static String[] strings =
    {
        "Hello World",
        "Goodbye World"
    };

    public static void main(String[] args)
            throws java.io.IOException {
        java.io.PrintWriter pw =
                new java.io.PrintWriter(new java.io.FileOutputStream("out.txt"), true);
        for (String s : strings) {
            pw.println(s);
            System.in.read();
        }
        pw.close();
    }
}
Reader.java
public class Reader extends Object
{
    Reader() {
    }

    public static void main(String[] args)
            throws Exception {
        java.io.FileInputStream in = new java.io.FileInputStream("out.txt");
        java.nio.channels.FileChannel fc = in.getChannel();
        java.nio.ByteBuffer bb = java.nio.ByteBuffer.allocate(10);
        while (fc.read(bb) >= 0) {
            bb.flip();
            while (bb.hasRemaining()) {
                System.out.println((char) bb.get());
            }
            bb.clear();
        }
        System.exit(0);
    }
}
No guarantees that this code is best practice.
This leaves the option suggested by Mike of periodically checking whether there is new data to be read from the file. This then requires user intervention to close the file reader when it is determined that the reading is complete. Or, the reader needs to be made aware of the content of the file and be able to determine an end-of-write condition. If the content were XML, the end of the document could be used to signal this.
There is an open-source Java graphical tail that does this.
https://stackoverflow.com/a/559146/1255493
public void run() {
    try {
        while (_running) {
            Thread.sleep(_updateInterval);
            long len = _file.length();
            if (len < _filePointer) {
                // Log must have been jibbled or deleted.
                this.appendMessage("Log file was reset. Restarting logging from start of file.");
                _filePointer = len;
            } else if (len > _filePointer) {
                // File must have had something added to it!
                RandomAccessFile raf = new RandomAccessFile(_file, "r");
                raf.seek(_filePointer);
                String line = null;
                while ((line = raf.readLine()) != null) {
                    this.appendLine(line);
                }
                _filePointer = raf.getFilePointer();
                raf.close();
            }
        }
    } catch (Exception e) {
        this.appendMessage("Fatal error reading log file, log tailing has stopped.");
    }
    // dispose();
}
On Windows, you can't read a file which is opened by another process using FileInputStream, FileReader or RandomAccessFile.
But using FileChannel directly will work:
private static byte[] readSharedFile(File file) throws IOException {
    byte[] buffer = new byte[(int) file.length()];
    final FileChannel fc = FileChannel.open(file.toPath(), EnumSet.of(StandardOpenOption.READ));
    final ByteBuffer dst = ByteBuffer.wrap(buffer);
    fc.read(dst);
    fc.close();
    return buffer;
}
Not Java per se, but you may run into issues where you have written something to a file, but it hasn't actually been written yet - it might be in a cache somewhere, and reading from the same file may not actually give you the new information.
Short version - use flush() or whatever the relevant system call is to ensure that your data is actually written to the file.
Note I am not talking about the OS-level disk cache - if your data gets into that, it should appear in a read() after this point. It may be that the language itself caches writes, waiting until a buffer fills up or the file is flushed/closed.
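A sketch of making sure data actually leaves the library buffers (the file name is illustrative):
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        FileOutputStream fos = new FileOutputStream("out.txt"); // illustrative file
        BufferedOutputStream out = new BufferedOutputStream(fos);
        out.write("status=OK\n".getBytes("UTF-8"));
        out.flush();        // push the library-level buffer down to the OS
        fos.getFD().sync(); // optionally force the OS to write to the device
        out.close();
    }
}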
I've never tried it, but you should write a test case to see whether reading from a stream after you have hit the end will work, regardless of whether more data is later written to the file.
Is there a reason you can't use a piped input/output stream? Is the data being written and read from the same application (if so, you have the data, why do you need to read from the file)?
Otherwise, maybe read until the end of the file, then monitor for changes, seek to where you left off, and continue... though watch out for race conditions.