Good uses of the finalize() method [duplicate] - java

This question already has answers here:
Why would you ever implement finalize()?
(21 answers)
Closed 5 years ago.
This is mostly out of curiosity.
I was wondering if anyone has encountered any good usage for Object.finalize() except for debugging/logging/profiling purposes?
If you haven't encountered any, what would you say a good usage would be?

If your Java object uses JNI to instruct native code to allocate native memory, you need to use finalize() to make sure that memory gets freed.
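A minimal sketch of that pattern (the class and its native methods are hypothetical, not a real API):

public class NativeBuffer {
    private long handle;                         // pointer handed back by native code

    public NativeBuffer(int size) {
        handle = allocateNative(size);
    }

    private native long allocateNative(int size);   // hypothetical JNI methods
    private native void freeNative(long handle);

    @Override
    protected void finalize() throws Throwable {
        try {
            if (handle != 0) {
                freeNative(handle);              // last-chance release of the native memory
                handle = 0;
            }
        } finally {
            super.finalize();
        }
    }
}

An explicit free()/close() method that does the same thing is still preferable; the finalizer is only the backstop.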

Late to the party here, but I thought I would still chime in:
One of the best uses I have found for finalizers is to call explicit termination methods which, for whatever reason, were not called. When this occurs, we also log the issue, because it is a BUG!
Because:
There is no guarantee that finalizers will be executed promptly (or technically at all), per the language specification
Execution is largely dependent on the JVM implementation
Execution can sometimes be delayed if the GC has a lower thread priority
This leaves only a handful of tasks that they can address without much risk.
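A rough sketch of that safety-net pattern (class and method names are made up for illustration):

public class Resource implements AutoCloseable {
    private boolean closed;

    @Override
    public void close() {                        // the explicit termination method
        closed = true;
        // release whatever the object holds here
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            if (!closed) {
                // Reaching this point means close() was never called: log it as a bug.
                System.err.println("BUG: Resource was not closed explicitly");
                close();
            }
        } finally {
            super.finalize();
        }
    }
}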

close external connections (DB, socket, etc.)
close open files, maybe even try to write some additional information
logging
if the class runs external processes that should exist only while the object exists, you can try to kill them here
But this is just a fallback that is used if the "normal" mechanism did not work. The normal mechanism should be initiated explicitly.

Release resources that should be released manually in normal circumstances, but were not released for some reason. Perhaps also write a warning to the log.

I use it to write back data to a database when using soft references for caching database-backed objects.

I see one good use for finalize(): freeing resources that are available in large amounts and are not exclusive.
For example, by default there are 1024 file handles available to a Linux process and about 10000 on Windows. That is quite a lot, so for most applications if you open a file, you don't have to call .close() (and use the ugly try...finally blocks) and you'll be OK: finalize() will free it for you some time later. However, for some pieces of code (like intensive server applications), releasing resources with .close() is a must, otherwise finalize() may be called too late for you and you may run out of file handles.
The same technique is used by Swing: operating system resources for displaying windows and drawing aren't released by any .close() method, but only by finalize(), so you don't have to worry about all the .close() or .dispose() methods as you do in SWT, for example.
However, when there is a very limited number of resources, or you must 'lock' a resource to use it, remember to 'unlock' it as well. For example, if you create a file lock on a file, remember also to remove this lock, otherwise nobody else will be able to read or write this file, and this can lead to deadlocks. You can't rely on finalize() to remove this lock for you; you must do it manually at the right place.
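For instance, a file lock should be released explicitly in a finally block rather than left to finalize() (a minimal sketch; the file name is made up):

import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class LockExample {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("shared.dat", "rw")) {
            FileLock lock = raf.getChannel().lock();     // exclusive lock on the file
            try {
                raf.writeBytes("exclusive update");
            } finally {
                lock.release();                          // unlock at a well-defined point
            }
        }
    }
}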

Related

What exactly are resource leaks?

I have heard people use the term "resource leak" many times. I'm sure this is a global phenomenon, though for the purpose of this question I will stick to resource leaks in Java. Take, for example, the following code:
public void append(String text) throws IOException
{
    BufferedWriter buffWriter = new BufferedWriter(new FileWriter("tf2rocks.imnotkidding", true));
    buffWriter.write(text);
    buffWriter.close();
}
In the above snippet there is a potential resource leak: if an IOException is thrown by write(), close() will never be called.
Now my question is: what exactly is a resource leak? How can it cause harm to me?
If each Java program is executed in its own instance of the JVM, in an enclosed environment, how exactly can these "resource leaks" cause me harm? Is it possible for other malicious programs to take advantage of this?
Classes implementing java.io.Closeable (since JDK 1.5) and java.lang.AutoCloseable (since JDK 1.7) are considered to represent external resources, which should be closed using the close() method when they are no longer needed. All operating systems have limits on the number of sockets, file handles, etc. that can be open at any given time. If you don't close these resources, you keep them open unnecessarily, and if you continue to open more and more resources without closing them, then after some time the operating system will no longer be able to allocate new ones.
You are right with your example. Think from the OS perspective: there is a predefined number of file/socket handles that can be created, i.e. only so many files can be open at a time. If you keep opening files, you may exceed the number of files that can be open at once.
So although it is the JVM, the JVM alone cannot run without help from the OS. File handles/descriptors are just one example.
If it's memory, you don't have to worry, as it will be handled by the GC automatically, but the GC won't take care of these other resources.
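For completeness, a possible fix for the snippet above (assuming Java 7+): try-with-resources guarantees that close() runs even if write() throws, so the file handle is not leaked.

public void append(String text) throws IOException {
    try (BufferedWriter buffWriter =
             new BufferedWriter(new FileWriter("tf2rocks.imnotkidding", true))) {
        buffWriter.write(text);
    }
}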

How to deal with memory leaks from external library

I have a small Java application running a set of computationally heavy tasks. For processing the tasks, I use an external library which does most of the computation via native methods and some C code. Unfortunately, after solving one task, the library suffers from heavy memory leaks and can therefore only solve one task per application execution.
The memory problem is known to the developers of the library, but it is not fixed yet and maybe never will be (it has something to do with the Java garbage collector not working properly with the native interface). Since there is no alternative to this particular library, I am looking for options to solve the tasks through sequential application executions.
Currently, I have a bash wrapper script, which gets a list of tasks that should be executed and for each task the script calls the application with just this single task to execute.
Since tasks often need the results from previous tasks, this involves serializing and deserializing execution results to files. This does not seem to be good practice to me, also because the user has basically no way to interact with the program control flow.
Does anybody have an idea how I can do this sequential task execution inside one single Java application? I guess this would involve starting a new JVM for each task execution, hopefully transferring only the task result, and not the memory leaks, from the new JVM to my application.
Edit providing further information:
Changing the root of the problem: Unfortunately, the library is not open source and I have access to neither the native methods nor the Java interface API.
New processes / JVMs: Is that the same thing in this context? I don't have much experience with the Java process API or starting new JVMs. My assumption is that this would involve starting a separate Java program with its own main method using ProcessBuilder.start()?
Exchange of data: It is only a couple of kilobytes, so performance is not an issue. Still, a solution without files would be preferable, but if I understand correctly, memory-mapped files also use local files. Sockets, on the other hand, do sound promising.
Funnily enough, I've faced the same issue. You have to accept that nothing will be best practice or nice when you are forced to use a faulty library that you cannot fix or upgrade.
The solution we came up with was to isolate calls to the library in its own process. This process was a child of a master process: the master process contains the good code and the child the bad. We were then able to keep track of the number of invocations of the child process and tear it down once it reached a certain number. We knew that we could get away with X invocations before the child process was corrupt.
Because of the nature of our problem, bringing up a fresh process enabled us to have another X invocations before repeating.
Any state was returned to the master process on a successful invocation. Any state gathered during an unsuccessful invocation was discarded and we started again.
Again, none of the above is "nice" but it worked for us.
For what it's worth, if I did this again, I'd use Akka and remote actors which would make all the sub-process, remoting etc far simpler.
That depends. Do you have the source code of this external library, i.e. can you recompile it? The easiest approach is obviously to fix the leak at its root. This might, however, be impractical. If the library, as you say, is implemented via native methods and some C code, I do not think that the problem has anything to do with the Java garbage collector not working properly. Native methods and C code do not normally store their data on the JVM's heap and are therefore not garbage collected, i.e. it is the job of the library to clean up after itself.
If the leak is indeed in the bit of Java code that the library exposes, then there is a way. Memory leaks in Java occur by forgetting about references; e.g. consider the following example:
class Foo {
    private ExpensiveObject eo;

    Foo(ExpensiveObject eo) {
        this.eo = eo;
    }
}
The ExpensiveObject stays alive (at least) as long as the Foo instance referencing it. If you (or your library) do not isolate instance life-cycles well enough, you get into trouble. If you do not have a chance to refactor, you can however use reflection to clean up the biggest mess from another place in your code:
void release(Foo foo) throws ReflectiveOperationException {
    // uses java.lang.reflect.Field; both reflection calls throw checked exceptions
    Field f = Foo.class.getDeclaredField("eo");
    f.setAccessible(true);
    f.set(foo, null);
}
This should however be considered a last-resort as it is quite a hack.
Alternatively, a better approach is normally to fork another instance of a JVM to do the dirty work. It seems like you are doing something similar already. By forking a JVM, you isolate the use of memory on a process level. Once the process dies, all memory is released by the OS. The problem with this approach is normally platform compatibility but as you already use a native library, this does not worsen your situation.
You say that you currently use files to communicate between these different processes. Why do you need to store data in a file? Rather consider using sockets or memory-mapped files (NIO), if performance is important for this matter.
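A rough sketch of the fork-per-task idea (WorkerMain is a hypothetical child entry point; the parent reads the task result from the child's stdout, so no intermediate files are needed, and any leaked native memory dies with the child process):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class TaskRunner {
    static String runTaskInFreshJvm(String taskArg) throws Exception {
        String javaBin = System.getProperty("java.home") + "/bin/java";
        Process worker = new ProcessBuilder(
                javaBin, "-cp", System.getProperty("java.class.path"),
                "WorkerMain", taskArg)
            .redirectErrorStream(true)
            .start();

        StringBuilder result = new StringBuilder();
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(worker.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                result.append(line).append('\n');
            }
        }
        int exit = worker.waitFor();
        if (exit != 0) {
            throw new IllegalStateException("worker exited with code " + exit);
        }
        return result.toString();
    }
}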

Are resources used by a Java application freed when it terminates?

Java applications may use IO streams, sockets or database connections that should be closed when they are no longer needed.
However, applications may be terminated (e.g. by killing the process). Will all used resources be freed in this case? Who will free them: OS or JRE?
The JVM will release all active resources upon termination; however, this does not ensure that the other end will free the resource too, so explicitly closing resources is in every programmer's best interest.
An alternative to explicitly closing streams exists since Java 7, called the try-with-resources statement, which is the equivalent of closing a resource in a finally block after a try block.
If your software doesn't take care of resource management properly, the following will happen:
at runtime: the JVM will attempt, during garbage collection cycles, to close open streams that are seemingly unused;
at your program's termination point: the JVM should close open streams of all kinds left open by your program;
at the JVM's process termination point: the operating system will take care of releasing anything that hasn't been properly released by the JVM when it exits (hopefully, or that OS has some serious issues...).
As mentioned by Vulcan, none of this ensures that they are properly dealt with on the other end, obviously.
Note that the 3rd bullet point is a rather generic thing: most operating systems will take care of this, and it doesn't relate to the Java Platform's internals. It's about the OS managing its processes and resources on its own.
More Info
See also:
Cleanup of resources on an abnormal Exit, also on StackOverflow
How do I implement graceful termination in Java
Java Theory and Practice: Good Housekeeping Practices, at the IBM Developer Series

What are some examples of non-critical resources?

A quote from Effective Java states that:
A second legitimate use of finalizers concerns objects with native peers. A native peer is a native object to which a normal object delegates via native methods. Because a native peer is not a normal object, the garbage collector doesn't know about it and can't reclaim it when its Java peer is reclaimed. A finalizer is an appropriate vehicle for performing this task, assuming the native peer holds no critical resources.
I've not done C++ before, though I'm vaguely aware that file handles and database connections are critical resources. But what exactly does it mean for a resource to be non-critical?
Or rather, what are some examples of non-critical resources?
“Non-critical resources” don’t exist. The quote isn’t talking about non-critical resources, it’s merely talking about the absence of critical resources.
In a way, you could say that memory is a non-critical resource in a garbage-collected system. However, I’m not convinced that this would be correct (quite the opposite, in fact: managed resources can still be critical if they run out), and I’ve never heard this being said.
I don't think it's really the resource that's critical, despite the phrase used. I think it's recovering the resource that may or may not be critical, and the quote could be rephrased, "assuming it is not critical that the resource is freed".
If it's critical that the resource is freed by a particular point in program execution, after the object is unreachable but before the resource is needed for some other purpose, then a finalizer is inadequate. Instead you need some program logic to make sure it happens.
So, file handles or DB connections are critical if you're worried that you might run out; they're not critical otherwise. If you've reached some limit of open DB connections because the finalizers that would close your old ones haven't run yet, and you try to open another DB connection, chances are it'll fail. The situation with memory is rather better, since if you've run out of memory because of unreachable objects and try to create a new object, the GC will at least make an effort to find something to finalize and free.
Thus, file handles and db connections should have a close() function that the user can call to free all resources in cases where the program logic is able to determine that the object will not be used again. Expecting the GC to close the connection via a finalizer isn't reliable enough. It also doesn't deal well with the possibility of a flush or commit failing, although that's a separate issue.

Can I use Thread.stop() in Java if I really need it?

I need to use the deprecated stop() because I need to run Runnable classes which were developed by other programmers, and I can't use while (isRunning == true) inside the run method.
The question is: is it safe enough to use the stop() method? The threads don't work with any resources (like files, a DB, or Internet connections). But I want to be sure that the JVM won't be corrupted after I stop a dozen threads with the stop() method.
P.S.: Yes, I can write some code to test it, but I hope somebody knows the answer.
Sort of. There's nothing inherently "corrupting" about Thread.stop(). The problem is that it can leave objects in a damaged state, when the thread executing them suddenly stops. If the rest of your program has no visibility to those objects, then it's alright. On the other hand, if some of those objects are visible to the rest of the program, you might run into problems that are hard to diagnose.
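A small sketch of what that "damaged state" can look like (illustrative only): if Thread.stop() lands between the two assignments below, the ThreadDeath error unwinds the stack and releases the monitor while the invariant high - low == 10 is broken, so every other thread then sees the corrupted object.

class Range {
    private int low = 0;
    private int high = 10;

    synchronized void shift(int delta) {
        low += delta;
        // Thread.stop() can cause ThreadDeath to be thrown right here,
        // leaving high un-updated but the lock released.
        high += delta;
    }

    synchronized String snapshot() {
        return low + ".." + high;
    }
}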
If you use Thread.stop() you'll probably get away with it, assuming you have few users. It is exceptionally hard to test for: it can cause an exception anywhere in the executing code, so you can't test all possible situations. On your machine, in your setup, you might never find a problem; come the next JRE update, your program might start failing with a highly obscure intermittent bug.
An example problem case is if the thread is loading a class at the time. The class fails to load and the load will not be retried. Your program is broken.
The JVM won't be corrupt, but read the javadocs closely to make sure that you don't meet their conditions for "disaster."
You'll need to take a close look at any synchronization monitors that the thread is holding onto. You mentioned files and sockets as resources being hung onto, but you'll also need to consider any shared data structures. Also make sure your exception handling doesn't catch RuntimeExceptions (see stop()).
