With its built-in garbage collection, Java allows developers to create new objects without worrying explicitly about memory allocation and deallocation, because the garbage collector automatically reclaims memory for reuse.
AFAIK the garbage collector usually runs when your app runs out of memory. It maintains a graph that represents the links between objects, and isolated objects can be freed.
We do have System.gc(), but if you write System.gc() in your code, the Java VM may or may not decide at runtime to do a garbage collection at that point, as explained by this post: System.gc() in Java.
So I have some doubts regarding the garbage collection process in Java.
I wonder if there is a method in Java like free() in the C language, that we could invoke when we explicitly want to free the memory allocated by the new operator.
Also, does new perform the same operation as malloc() or calloc()?
Are there any alternatives to delete(), free(), malloc(), calloc() and sizeof() in Java?
No, there aren't. Java is not C, and you're not supposed to manage memory explicitly.
AFAIK the garbage collector usually runs when your app runs out of memory.
I slightly disagree with that. No, it runs asynchronously and collects the unreferenced objects.
I wonder if there is a method in Java like free() in the C language, that we could invoke when we explicitly want to free the memory allocated by the new operator.
Again, System.gc() is the call to make then, but there is no guarantee that the memory is cleared immediately.
Also, does new perform the same operation as malloc() or calloc()?
If you mean allocating memory, then yes, for that object.
Are there any alternatives to delete(), free(), malloc(), calloc() and sizeof() in Java?
AFAIK there are no direct functions to do so.
Off the top of my head, you need not worry about such things; modern JVMs are smart enough to handle them.
Here is an interesting thread found on SO about when the GC decides to run. Hope that helps.
I haven't worked with this in particular, but I have read about it while enhancing my knowledge of Java NIO.
In NIO we have ByteBuffer, which seemed to me to be a Java version of malloc.
A buffer is essentially a block of memory into which you can write data, which you can then later read again. This memory block is wrapped in a NIO Buffer object, which provides a set of methods that makes it easier to work with the memory block.
Syntax:
ByteBuffer buf = ByteBuffer.allocate(24);
For further reading, see ByteBuffer.
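As a minimal sketch (the buffer size and values are just for illustration), here is the write/flip/read cycle the description above refers to:

import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        // Allocate a 24-byte block of memory wrapped in a Buffer object
        ByteBuffer buf = ByteBuffer.allocate(24);

        // Write some data into the memory block
        buf.putInt(42);
        buf.putDouble(3.14);

        // Switch from writing to reading
        buf.flip();

        System.out.println(buf.getInt());    // 42
        System.out.println(buf.getDouble()); // 3.14
    }
}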
In Java, we have System.gc(), which is basically used to request garbage collection explicitly. But one should generally avoid calling it, since it is only a request and gives no guarantee that the memory will actually be reclaimed.
You can probably have a look at this: stackoverflow
However, Java performs garbage collection itself when the system runs low on memory. All you can do at the application level is assign null to variables you no longer need, which makes the objects they referred to unreachable and allows the JVM to collect them.
Related
Is there a way to free memory in Java, similar to C's free() function? Or is setting the object to null and relying on GC the only option?
Java uses managed memory, so the only way you can allocate memory is by using the new operator, and the only way you can deallocate memory is by relying on the garbage collector.
This memory management whitepaper (PDF) may help explain what's going on.
You can also call System.gc() to suggest that the garbage collector run immediately. However, the Java Runtime makes the final decision, not your code.
According to the Java documentation,
Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the Java Virtual Machine has made a best effort to reclaim space from all discarded objects.
No one seems to have mentioned explicitly setting object references to null, which is a legitimate technique for "freeing" memory that you may want to consider.
For example, say you'd declared a List<String> at the beginning of a method which grew in size to be very large, but was only required until half-way through the method. You could at this point set the List reference to null to allow the garbage collector to potentially reclaim this object before the method completes (and the reference falls out of scope anyway).
Note that I rarely use this technique in reality but it's worth considering when dealing with very large data structures.
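A minimal sketch of that idea (the method and list contents are hypothetical):

import java.util.ArrayList;
import java.util.List;

public class EarlyRelease {
    static long summarize() {
        List<String> lines = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            lines.add("row " + i);        // grows very large
        }
        long count = lines.size();        // last use of the list

        lines = null;                     // GC may now reclaim it

        doOtherLongRunningWork();         // rest of the method no longer needs the list
        return count;
    }

    static void doOtherLongRunningWork() { /* ... */ }
}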
System.gc();
Runs the garbage collector.
Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the Java Virtual Machine has made a best effort to reclaim space from all discarded objects.
Not recommended.
Edit: I wrote the original response in 2009. It's now 2015.
Garbage collectors have gotten steadily better in the ~20 years Java's been around. At this point, if you're manually calling the garbage collector, you may want to consider other approaches:
If you're forcing GC on a limited number of machines, it may be worth having a load balancer point away from the current machine, waiting for it to finish serving to connected clients, timeout after some period for hanging connections, and then just hard-restart the JVM. This is a terrible solution, but if you're looking at System.gc(), forced-restarts may be a possible stopgap.
Consider using a different garbage collector. For example, the (new in the last six years) G1 collector is a low-pause model; it uses more CPU overall, but does its best to never force a hard-stop on execution. Since server CPUs now almost all have multiple cores, this is A Really Good Tradeoff to have available.
Look at your flags tuning memory use. Especially in newer versions of Java, if you don't have that many long-term running objects, consider bumping up the size of newgen in the heap. newgen (young) is where new objects are allocated. For a webserver, everything created for a request is put here, and if this space is too small, Java will spend extra time upgrading the objects to longer-lived memory, where they're more expensive to kill. (If newgen is slightly too small, you're going to pay for it.) For example, in G1:
-XX:G1NewSizePercent (defaults to 5; probably doesn't matter.)
-XX:G1MaxNewSizePercent (defaults to 60; probably raise this.)
Consider telling the garbage collector you're not okay with a longer pause. This will cause more-frequent GC runs, to allow the system to keep within the rest of its constraints. In G1:
-XX:MaxGCPauseMillis (defaults to 200.)
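As a hedged illustration (the jar name and values are placeholders, and on some JDK versions the G1 size flags also require -XX:+UnlockExperimentalVMOptions), these flags might be combined on the command line like this:

java -XX:+UseG1GC \
     -XX:+UnlockExperimentalVMOptions -XX:G1MaxNewSizePercent=70 \
     -XX:MaxGCPauseMillis=100 \
     -jar my-server.jar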
*"I personally rely on nulling variables as a placeholder for future proper deletion. For example, I take the time to nullify all elements of an array before actually deleting (making null) the array itself."
This is unnecessary. The way the Java GC works is that it finds objects that have no references to them, so if I have an object x with a reference (i.e. a variable) a that points to it, the GC won't delete it, because there is a reference to that object:
a -> x
If you null a then this happens:
a -> null
x
So now x doesn't have a reference pointing to it and will be deleted. The same thing happens when you set a to reference a different object than x.
So if you have an array arr that references objects x, y and z, and a variable a that references the array, it looks like this:
a -> arr -> x
-> y
-> z
If you null a then this happens:
a -> null
arr -> x
-> y
-> z
So the GC finds arr as having no reference set to it and deletes it, which gives you this structure:
a -> null
x
y
z
Now the GC finds x, y and z and deletes them as well. Nulling each reference in the array won't make anything better; it will just use up CPU time and space in the code (that said, it won't hurt beyond that: the GC will still be able to perform the way it should).
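In code, a minimal sketch of the two approaches (the objects are just placeholders) looks like this; the extra loop in the second version buys nothing:

public class ArrayNulling {
    public static void main(String[] args) {
        Object[] arr = { new Object(), new Object(), new Object() };

        // Sufficient: drop the only reference to the array.
        // The array and its elements all become unreachable together.
        arr = null;

        Object[] arr2 = { new Object(), new Object(), new Object() };

        // Unnecessary extra work: nulling each element first
        // just burns CPU time; the GC result is the same.
        for (int i = 0; i < arr2.length; i++) {
            arr2[i] = null;
        }
        arr2 = null;
    }
}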
To extend upon the answer and comment by Yiannis Xanthopoulos and Hot Licks (sorry, I cannot comment yet!), you can set VM options like this example:
-XX:+UseG1GC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=30
In my jdk 7 this will then release unused VM memory if more than 30% of the heap becomes free after GC when the VM is idle. You will probably need to tune these parameters.
While I didn't see it emphasized in the link below, note that some garbage collectors may not obey these parameters and by default java may pick one of these for you, should you happen to have more than one core (hence the UseG1GC argument above).
VM arguments
Update: For java 1.8.0_73 I have seen the JVM occasionally release small amounts with the default settings. It appears to only do so if ~70% of the heap is unused, though; I don't know if it would release more aggressively if the OS were low on physical memory.
A valid reason for wanting to free memory from any program (Java or not) is to make more memory available to other programs at the operating system level. If my Java application is using 250 MB, I may want to force it down to 1 MB and make the 249 MB available to other apps.
I have done experimentation on this.
It's true that System.gc(); only suggests to run the Garbage Collector.
But calling System.gc() after setting all references to null will improve performance and memory occupation.
If you really want to allocate and free a block of memory you can do this with direct ByteBuffers. There is even a non-portable way to free the memory.
However, as has been suggested, just because you have to free memory in C doesn't mean it's a good idea to have to do this.
If you feel you really have a good use case for free(), please include it in the question so we can see what you are trying to do; it is quite likely there is a better way.
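A minimal sketch of that approach (the buffer size is arbitrary; the explicit-free part relies on an internal, non-portable JDK API and may not exist on every JVM):

import java.nio.ByteBuffer;

public class DirectAlloc {
    public static void main(String[] args) {
        // Allocate 1 MB of off-heap memory, roughly analogous to malloc(1 << 20)
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);

        buf.putLong(0, 12345L);              // use the memory
        System.out.println(buf.getLong(0));

        // Non-portable explicit free on pre-Java-9 Sun/Oracle JDKs (internal API,
        // shown only as a comment):
        // ((sun.nio.ch.DirectBuffer) buf).cleaner().clean();

        // Portable alternative: drop the reference and let the GC release it eventually
        buf = null;
    }
}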
Entirely from javacoffeebreak.com/faq/faq0012.html
A low priority thread takes care of garbage collection automatically for the user. During idle time, the thread may be called upon, and it can begin to free memory previously allocated to an object in Java. But don't worry - it won't delete your objects on you!

When there are no references to an object, it becomes fair game for the garbage collector. Rather than calling some routine (like free in C++), you simply assign all references to the object to null, or assign a new class to the reference.
Example:

public static void main(String args[])
{
    // Instantiate a large memory using class
    MyLargeMemoryUsingClass myClass = new MyLargeMemoryUsingClass(8192);

    // Do some work
    for ( .............. )
    {
        // Do some processing on myClass
    }

    // Clear reference to myClass
    myClass = null;

    // Continue processing, safe in the knowledge
    // that the garbage collector will reclaim myClass
}
If your code is about to request a large amount of memory, you may want to request the garbage collector begin reclaiming space, rather than allowing it to do so as a low-priority thread. To do this, add the following to your code:

System.gc();

The garbage collector will attempt to reclaim free space, and your application can continue executing, with as much memory reclaimed as possible (memory fragmentation issues may apply on certain platforms).
In my case, since my Java code is meant to be ported to other languages in the near future (Mainly C++), I at least want to pay lip service to freeing memory properly so it helps the porting process later on.
I personally rely on nulling variables as a placeholder for future proper deletion. For example, I take the time to nullify all elements of an array before actually deleting (making null) the array itself.
But my case is very particular, and I know I'm taking performance hits when doing this.
* "For example, say you'd declared a List at the beginning of a
method which grew in size to be very large, but was only required
until half-way through the method. You could at this point set the
List reference to null to allow the garbage collector to potentially
reclaim this object before the method completes (and the reference
falls out of scope anyway)." *
This is correct, but this solution may not be generalizable. While setting a List object reference to null -will- make memory available for garbage collection, this is only true for a List object of primitive types. If the List object instead contains reference types, setting the List object = null will not dereference -any- of the reference types contained -in- the list. In this case, setting the List object = null will orphan the contained reference types whose objects will not be available for garbage collection unless the garbage collection algorithm is smart enough to determine that the objects have been orphaned.
Although Java provides automatic garbage collection, sometimes you will want to know how large the heap is and how much of it is free. To check memory programmatically, use Runtime r = Runtime.getRuntime(); and obtain the amount of free memory with mem1 = r.freeMemory();. To request that memory be freed, call the r.gc() method and then call freeMemory() again.
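A minimal sketch of that measurement (the 10 MB allocation is only there to make the numbers move):

public class MemoryProbe {
    public static void main(String[] args) {
        Runtime r = Runtime.getRuntime();

        long before = r.freeMemory();
        byte[] block = new byte[10 * 1024 * 1024];   // allocate 10 MB
        long after = r.freeMemory();

        System.out.println("Free before: " + before);
        System.out.println("Free after:  " + after);

        block = null;
        r.gc();                                      // request a collection
        System.out.println("Free after gc: " + r.freeMemory());
    }
}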
The recommendation from the Java documentation is to assign null.
From https://docs.oracle.com/cd/E19159-01/819-3681/abebi/index.html
Explicitly assigning a null value to variables that are no longer needed helps the garbage collector to identify the parts of memory that can be safely reclaimed. Although Java provides memory management, it does not prevent memory leaks or using excessive amounts of memory.
An application may induce memory leaks by not releasing object references. Doing so prevents the Java garbage collector from reclaiming those objects, and results in increasing amounts of memory being used. Explicitly nullifying references to variables after their use allows the garbage collector to reclaim memory.
One way to detect memory leaks is to employ profiling tools and take memory snapshots after each transaction. A leak-free application in steady state will show a steady active heap memory after garbage collections.
I am using JCUDA and would like to know if the JNI objects are smart enough to deallocate when they are garbage collected? I can understand why this may not work in all situations, but I know it will work in my situation, so my followup question is: how can I accomplish this? Is there a "mode" I can set? Will I need to build a layer of abstraction? Or maybe the answer really is "no don't ever try that" so then why not?
EDIT: I'm referring only to native objects created via JNI, not Java objects. I am aware that all Java objects are treated equally W.R.T. garbage collection.
Usually, such libraries do not deallocate memory due to garbage collection. Particularly: JCuda does not do this, and has no option or "mode" where this can be done.
The reason is quite simple: It does not work.
You'll often have a pattern like this:
void doSomethingWithJCuda()
{
    CUdeviceptr data = new CUdeviceptr();
    cuMemAlloc(data, 1000);
    workWith(data);
    // *(See notes below)
}
Here, native memory is allocated, and the Java object serves as a "handle" to this native memory.
At the last line, the data object goes out of scope. Thus, it becomes eligible for garbage collection. However, there are two issues:
1. The garbage collector will only destroy the Java object, and not free the memory that was allocated with cuMemAlloc or any other native call.
So you'll usually have to free the native memory, by explicitly calling
cuMemFree(data);
before leaving the method.
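Not part of the original pattern above, but one common way to make that explicit call reliable is to pair the allocation with a try/finally block; a sketch using the same JCuda calls:

void doSomethingWithJCuda()
{
    CUdeviceptr data = new CUdeviceptr();
    cuMemAlloc(data, 1000);
    try
    {
        workWith(data);
    }
    finally
    {
        // Always release the native GPU memory, even if workWith throws
        cuMemFree(data);
    }
}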
2. You don't know when the Java object will be garbage collected - or whether it will be garbage collected at all.
A common misconception is that an object becomes garbage collected when it is no longer reachable, but this is not necessarily true.
As bmargulies pointed out in his answer:
One means is to have a Java object with a finalizer that makes the necessary JNI call to free native memory.
It may look like a viable option to simply override the finalize() method of these "handle" objects, and do the cuMemFree(this) call there. This has been tried, for example, by the authors of JavaCL (a library that also allows using the GPU with Java, and thus, is conceptually somewhat similar to JCuda).
But it simply does not work: Even if a Java object is no longer reachable, this does not mean that it will be garbage collected immediately.
You simply don't know when the finalize() method will be called.
This can easily cause nasty errors: When you have 100 MB of GPU memory, you can use 10 CUdeviceptr objects, each allocating 10MB. Your GPU memory is full. But for Java, these few CUdeviceptr objects only occupy a few bytes, and the finalize() method may not be called at all during the runtime of the application, because the JVM simply does not need to reclaim these few bytes of memory. (Omitting discussions about hacky workarounds here, like calling System.gc() or so - the bottom line is: It does not work).
So answering your actual question: JCuda is a very low-level library. This means that you have the full power, but also the full responsibilities of manual memory management. I know that this is "inconvenient". When I started creating JCuda, I originally intended it as a low-level backend for an object-oriented wrapper library. But creating a robust, stable and universally applicable abstraction layer for a complex general-purpose library like CUDA is challenging, and I did not dare to tackle such a project - last but not least because of the complexities that are implied by ... things like garbage collection...
Java objects created in JNI are equal to all other Java objects, and are garbage collected and destroyed when their time comes. To keep such objects from being destroyed too early, we often use the JNI function env->NewGlobalRef() (but its usage is by no means limited to objects created in native code).
On the other hand, native objects are not subject to garbage collection.
There are two cases here.
Native code allocates Java Objects. These objects are GC's like all other Java objects. If the native goofs up and holds strong references, it can prevent GC.
Native code allocates Native memory. The GC knows nothing about it; it's up to the library to arrange to free it. One means is to have a Java object with a finalizer that makes the necessary JNI call to free native memory.
I understand that when a DirectByteBuffer is allocated, it's not subject to garbage collection, but what I'm wondering is whether the wrapping object is garbage collected.
For example, if I allocated a new DirectByteBuffer dbb, and then duplicated (shallow copied) it using dbb.duplicate(), I'd have two wrappers around the same chunk of memory.
Are those wrappers subject to garbage collection? If I did
while (true) {
    DirectByteBuffer dbb2 = dbb.duplicate();
}
Would I eventually OOM myself?
In the Sun JDK, a java.nio.DirectByteBuffer—created by ByteBuffer#allocateDirect(int)—has a field of type sun.misc.Cleaner, which extends java.lang.ref.PhantomReference.
When this Cleaner (remember, a subtype of PhantomReference) gets collected and is about to move into the associated ReferenceQueue, the collection-related thread running through the nested type ReferenceHandler has a special case treatment of Cleaner instances: it downcasts and calls on Cleaner#clean(), which eventually makes its way back to calling on DirectByteBuffer$Deallocator#run(), which in turn calls on Unsafe#freeMemory(long). Wow.
It's rather circuitous, and I was surprised to not see any use of Object#finalize() in play. The Sun developers must have had their reasons for tying this in even closer to the collection and reference management subsystem.
In short, you won't run out of memory by virtue of abandoning references to DirectByteBuffer instances, so long as the garbage collector has a chance to notice the abandonment and its reference handling thread makes progress through the calls described above.
A direct ByteBuffer object is just like any other object: it can be garbage-collected.
The memory used by a direct buffer will be released when the ByteBuffer object is GC'd (this is not explicitly stated for ByteBuffer, but is implied by the documentation of MappedByteBuffer).
Where things get interesting is when you've filled your virtual memory space with direct buffers, but still have lots of room in the heap. It turns out that (at least on the Sun JVM), running out of virtual space when allocating a direct buffer will trigger a GC of the Java heap. Which may collect unreferenced direct buffers and free their virtual memory commitment.
If you're running on a 64-bit machine, you should use -XX:MaxDirectMemorySize, which puts an upper limit on the total amount of direct buffer memory that you can allocate (and also triggers GC when you hit that limit).
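For illustration (the class name and the 512m value are placeholders), the flag is passed on the command line like this:

java -XX:MaxDirectMemorySize=512m MyApp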
Looking at the source code to DirectByteBuffer it just returns a new instance, so no, you won't OOM yourself.
As long as the rest of your code doesn't hold onto a reference to the original dbb, that object will get garbage collected as normal. The extra dbb2 objects will similarly get garbage collected when there is no longer any reference to them (i.e., at the end of the while loop).
Both the Java object and the native memory are freed at the same time by the garbage collector.
However, note that because the garbage collector doesn’t work well at cleaning up direct buffers, it’s best to allocate and reuse long-lived direct buffers instead of creating and abandoning new ones.
when a DirectByteBuffer is allocated, it's not subject to garbage collection
Where did you get that idea? It's not correct. Are you getting them mixed up with MappedByteBuffers?
Effective Java says :
There is a severe performance penalty for using finalizers.
Why is it slower to destroy an object using the finalizers?
Because of the way the garbage collector works. For performance, most Java GCs use a copying collector, where short-lived objects are allocated into an "eden" block of memory, and when it's time for that generation of objects to be collected, the GC just needs to copy the objects that are still "alive" to a more permanent storage space, and then it can wipe (free) the entire "eden" memory block at once. This is efficient because most Java code will create many thousands of instances of objects (boxed primitives, temporary arrays, etc.) with lifetimes of only a few seconds.
When you have finalizers in the mix, though, the GC can't simply wipe an entire generation at once. Instead, it needs to figure out all the objects in that generation that need to be finalized, and queue them on a thread that actually executes the finalizers. In the meantime, the GC can't finish cleaning up the objects efficiently. So it either has to keep them alive longer than they should be, or it has to delay collecting other objects, or both. Plus you have the arbitrary wait time of actually executing the finalizers.
All these factors add up to a significant runtime penalty, which is why deterministic finalization (using a close() method or similar to explicitly finalize the object's state) is usually preferred.
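As a hedged sketch of what deterministic finalization looks like in practice (the resource class here is made up), the idiomatic Java form is an AutoCloseable used with try-with-resources rather than a finalizer:

// Hypothetical resource that wraps something needing explicit cleanup
class NativeHandle implements AutoCloseable {
    void use() { /* ... */ }

    @Override
    public void close() {
        // release the underlying resource deterministically
    }
}

class Demo {
    public static void main(String[] args) {
        try (NativeHandle h = new NativeHandle()) {
            h.use();
        }   // close() is called here, at a known point, with no GC involvement
    }
}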
Having actually run into one such problem:
In the Sun HotSpot JVM, finalizers are processed on a thread that is given a fixed, low priority. In a high-load application, it's easy to create finalization-required objects faster than the low-priority finalization thread can process them. Meanwhile, the space on the heap used by the finalization-pending objects is unavailable for other uses. Eventually, your application may spend all of its time garbage collecting, because all of the available memory is in use by objects pending finalization.
This is, of course, in addition to the other many reasons to not use finalizers that are described in Effective Java.
I just picked up my copy of Effective Java off my desk to see what he's referring to.
If you read Chapter 2, Section 6, he goes into good detail about the various performance hits.
You can't know when the finalizer will run, or even if it will at all. Because those resources may never be claimed, you will have to run with fewer resources.
I would recommend reading the entirety of the section - it explains things much better than I can parrot here.
If you read the documentation of finalize() closely, you will notice that finalizers enable an object to prevent being collected by the GC.
If no finalizer is present, the object simply can be removed and does not need any more attention. But if there is a finalizer, it needs to be checked afterwards, if the object didn't become "visible" again.
Without knowing exactly how the current Java garbage collection is implemented (actually, because there are different Java implementations out there, there are also different GCs), you can assume that the GC has to do some additional work if an object has a finalizer, because of this feature.
My thought is this:
Java is a garbage collected language, which deallocates memory based on its own internal algorithms. Every so often, the GC scans the heap, determines which objects are no longer referenced, and de-allocates the memory.
A finalizer interrupts this and forces the deallocation of memory outside of the GC cycle, potentially causing inefficiencies.
I think best practices are to use finalizers only when ABSOLUTELY necessary such as freeing file handles or closing DB connections which should be done deterministically.
One reason I can think of is that explicit memory cleanup is unnecessary if your resources are all Java Objects, and not native code.
After reading this question, I was reminded of when I was taught Java and told never to call finalize() or run the garbage collector because "it's a big black box that you never need to worry about". Can someone boil the reasoning for this down to a few sentences? I'm sure I could read a technical report from Sun on this matter, but I think a nice, short, simple answer would satisfy my curiosity.
The short answer: Java garbage collection is a very finely tuned tool. System.gc() is a sledge-hammer.
Java's heap is divided into different generations, each of which is collected using a different strategy. If you attach a profiler to a healthy app, you'll see that it very rarely has to run the most expensive kinds of collections because most objects are caught by the faster copying collector in the young generation.
Calling System.gc() directly, while technically not guaranteed to do anything, in practice will trigger an expensive, stop-the-world full heap collection. This is almost always the wrong thing to do. You think you're saving resources, but you're actually wasting them for no good reason, forcing Java to recheck all your live objects “just in case”.
If you are having problems with GC pauses during critical moments, you're better off configuring the JVM to use the concurrent mark/sweep collector, which was designed specifically to minimise time spent paused, than trying to take a sledgehammer to the problem and just breaking it further.
The Sun document you were thinking of is here: Java SE 6 HotSpot™ Virtual Machine Garbage Collection Tuning
(Another thing you might not know: implementing a finalize() method on your object makes garbage collection slower. Firstly, it will take two GC runs to collect the object: one to run finalize() and the next to ensure that the object wasn't resurrected during finalization. Secondly, objects with finalize() methods have to be treated as special cases by the GC because they have to be collected individually, they can't just be thrown away in bulk.)
Don't bother with finalizers.
Switch to incremental garbage collection.
If you want to help the garbage collector, null off references to objects you no longer need. Fewer paths to follow = more obviously garbage.
Don't forget that (non-static) inner class instances keep references to their parent class instance. So an inner class thread keeps a lot more baggage than you might expect.
In a very related vein, if you're using serialization, and you've serialized temporary objects, you're going to need to clear the serialization caches, by calling ObjectOutputStream.reset() or your process will leak memory and eventually die.
Downside is that non-transient objects are going to get re-serialized.
Serializing temporary result objects can be a bit more messy than you might think!
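A minimal sketch of that reset pattern (the file name and payload are placeholders):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class StreamReset {
    public static void main(String[] args) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream("data.bin"))) {
            for (int i = 0; i < 100_000; i++) {
                out.writeObject(makeTemporaryResult(i));
                out.reset();   // drop the stream's back-reference cache so the
                               // temporary object can be garbage collected
            }
        }
    }

    static Serializable makeTemporaryResult(int i) {
        return "result " + i;   // stand-in for a real temporary result object
    }
}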
Consider using soft references. If you don't know what soft references are, have a read of the javadoc for java.lang.ref.SoftReference
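For example, a soft reference can serve as a memory-sensitive cache entry; a sketch (the data-loading method is hypothetical):

import java.lang.ref.SoftReference;

class SoftCacheEntry {
    private SoftReference<byte[]> cached;

    byte[] getData() {
        byte[] data = (cached == null) ? null : cached.get();
        if (data == null) {
            data = loadExpensively();              // recompute if the GC cleared it
            cached = new SoftReference<>(data);    // GC may clear this under memory pressure
        }
        return data;
    }

    private byte[] loadExpensively() {
        return new byte[1024 * 1024];              // placeholder for the real work
    }
}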
Steer clear of Phantom references and Weak references unless you really get excitable.
Finally, if you really can't tolerate the GC use Realtime Java.
No, I'm not joking.
The reference implementation is free to download, and Peter Dibble's book from Sun is really good reading.
As far as finalizers go:
They are virtually useless. They aren't guaranteed to be called in a timely fashion, or indeed, at all (if the GC never runs, neither will any finalizers). This means you generally shouldn't rely on them.
Finalizers are not guaranteed to be idempotent. The garbage collector takes great care to guarantee that it will never call finalize() more than once on the same object. With well-written objects, it won't matter, but with poorly written objects, calling finalize multiple times can cause problems (e.g. double release of a native resource ... crash).
Every object that has a finalize() method should also provide a close() (or similar) method. This is the function you should be calling. e.g., FileInputStream.close(). There's no reason to be calling finalize() when you have a more appropriate method that is intended to be called by you.
Assuming finalizers are similar to their .NET namesake then you only really need to call these when you have resources such as file handles that can leak. Most of the time your objects don't have these references so they don't need to be called.
It's bad to try to collect the garbage because it's not really your garbage. You have told the VM to allocate some memory when you created objects, and the garbage collector is hiding information about those objects. Internally the GC is performing optimisations on the memory allocations it makes. When you manually try to collect the garbage you have no knowledge about what the GC wants to hold onto and get rid of; you are just forcing its hand. As a result you mess up internal calculations.
If you knew more about what the GC was holding internally then you might be able to make more informed decisions, but then you've missed the benefits of GC.
The real problem with closing OS handles in finalize is that finalizers are executed in no guaranteed order. But if you have handles to things that block (think e.g. sockets), your code can potentially get into a deadlock situation (not trivial at all).
So I'm for explicitly closing handles in a predictable orderly manner. Basically code for dealing with resources should follow the pattern:
SomeStream s = null;
...
try {
    s = openStream();
    ....
    s.io();
    ...
} finally {
    if (s != null) {
        s.close();
        s = null;
    }
}
It gets even more complicated if you write your own classes that work via JNI and open handles. You need to make sure handles are closed (released) and that it happens only once. A frequently overlooked OS handle in desktop J2SE is Graphics[2D]. Even BufferedImage.getGraphics() can potentially return a handle that points into a video driver (actually holding the resource on the GPU). If you don't release it yourself and leave it to the garbage collector to do the work, you may find strange OutOfMemory and similar situations where you have run out of video-card-mapped bitmaps but still have plenty of memory. In my experience it happens rather frequently in tight loops working with graphics objects (extracting thumbnails, scaling, sharpening, you name it).
Basically, the GC does not take over the programmer's responsibility for correct resource management. It only takes care of memory and nothing else. A Stream.finalize() that calls close() would, IMHO, be better implemented as throwing something like new RuntimeException("garbage collecting a stream that is still open"). It would save hours and days of debugging and cleaning up code after sloppy amateurs have left the ends loose.
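A small sketch of releasing such a graphics handle explicitly (the image size and drawing calls are arbitrary):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ThumbnailSketch {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();   // may hold a driver/GPU resource
        try {
            g.fillRect(0, 0, 64, 64);          // drawing work goes here
        } finally {
            g.dispose();                       // release the handle explicitly,
                                               // don't wait for the GC
        }
    }
}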
Happy coding.
Peace.
The GC does a lot of optimization on when to properly finalize things.
So unless you're familiar with how the GC actually works and how it tags generations, manually calling finalize or starting a GC will probably hurt performance more than help.
Avoid finalizers. There is no guarantee that they will be called in a timely fashion. It could take quite a long time before the Memory Management system (i.e., the garbage collector) decides to collect an object with a finalizer.
Many people use finalizers to do things like close socket connections or delete temporary files. By doing so you make your application behaviour unpredictable and tied to when the JVM is going to GC your object. This can lead to "out of memory" scenarios, not due to the Java Heap being exhausted, but rather due to the system running out of handles for a particular resource.
One other thing to keep in mind is that introducing calls to System.gc() or similar hammers may show good results in your environment, but they won't necessarily translate to other systems. Not everyone runs the same JVM; there are many: Sun, IBM J9, BEA JRockit, Harmony, OpenJDK, etc. These JVMs all conform to the JCK (those that have been officially tested, that is), but have a lot of freedom when it comes to making things fast. GC is one of those areas everyone invests in heavily. Using a hammer will oftentimes destroy that effort.