Where are exceptions stored? The stack, or the heap?
How is memory allocated and deallocated for Exceptions?
Now, if you have more than one exception that needs to be handled, are objects created for all of these exceptions?
I would assume that memory for exceptions is allocated the same way as for all other objects (on the heap).
This used to be a problem: you cannot allocate memory for an OutOfMemoryError once the heap is exhausted, which is why there was no stack trace until Java 1.6. Now the JVM pre-allocates space for the stack trace as well.
If you are wondering where the reference to the exception is stored while it is being thrown: the JVM keeps the reference internally while it unwinds the call stack to find the exception handler, which then receives the reference (on its stack frame, just like any other local variable).
There cannot be two exceptions being thrown at the same time (on the same thread). They can be nested, but then you have only one "active" exception with a reference to the nested exception.
When all references to the exception disappear (e.g. after the exception handler is finished), the exception gets garbage-collected like everything else.
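A small sketch of that lifecycle: one active exception per thread, with a nested exception carried along as the cause, and both objects becoming eligible for garbage collection once the handler finishes (the class and messages are made up for illustration):

```java
public class ExceptionFlow {
    static void inner() {
        throw new IllegalStateException("inner failure");
    }

    public static void main(String[] args) {
        try {
            try {
                inner();
            } catch (IllegalStateException e) {
                // Only one exception is "active" on this thread;
                // the original one rides along as the cause.
                throw new RuntimeException("outer failure", e);
            }
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());            // the active exception
            System.out.println(e.getCause().getMessage()); // the nested one
        }
        // Both exception objects are now unreachable and GC-eligible.
    }
}
```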
For C++ it's not defined where exceptions are stored, but most compilers use a special stack.
For Java, as somebody wrote before, they're stored on the heap.
As exceptions are objects in C++ as well as in Java, they're allocated and deallocated like any other object in the respective language.
In both languages there is only ever one active exception per thread.
For the most complete information on exceptions, we can go directly to the source: Chapter 11: Exceptions of The Java Language Specification, Second Edition.
Exceptions are indeed objects. They are a subclass of Throwable:
Every exception is represented by an instance of the class Throwable or one of its subclasses; such an object can be used to carry information from the point at which an exception occurs to the handler that catches it.
Therefore, it would probably be safe to assume that, as with any other object in Java, it is allocated on the heap.
In terms of having multiple Exception objects, that probably wouldn't be the case: once an exception occurs, the Java Virtual Machine starts looking for an exception handler.
During the process of throwing an exception, the Java virtual machine abruptly completes, one by one, any expressions, statements, method and constructor invocations, initializers, and field initialization expressions that have begun but not completed execution in the current thread. This process continues until a handler is found that indicates that it handles that particular exception by naming the class of the exception or a superclass of the class of the exception.
For more information on how exceptions are to be handled at runtime, Section 11.3 Handling of an Exception has the details.
Exceptions in Java are objects, and as such, they are stored on the heap.
When an exception is thrown, the JVM looks for a matching exception handler in the code. This applies even when there are multiple exceptions that something could throw.
Where are exceptions stored? Stack, Heap. How is memory allocated and deallocated for Exceptions?
Exceptions are objects like any other in this regard. Allocated on the heap via new, deallocated by the garbage collector.
Now if you have more than one exception which needs to be handled are there objects of all these exceptions created?
Not sure what you mean by this. Exceptions are created via new, one at a time. There can be more than one alive at once when you use exception chaining, and there's actually nothing keeping you from creating thousands of exceptions and putting them in a List somewhere; it just doesn't make much sense.
In Java, Exception extends Throwable which extends Object. i.e. from a memory point of view its an object just like any other.
It has been suggested for Java 7 that local variables could be placed on the stack, using escape analysis to find candidates for stack allocation. However, exceptions are typically thrown from a method to a caller (or its caller, etc.), so it wouldn't be very useful to place the exception on the stack.
Related
I can declare a 2D array of size 10^9 * 10^9, but a test case that actually requires this size throws Exception in thread "main" java.lang.OutOfMemoryError: Java heap space. If such an array would exhaust all memory anyway, why are we able to declare it at all?
java.lang.OutOfMemoryError is a run-time error. Exceptions and errors of this kind are reserved for situations when it is not possible for the compiler to decide if an operation is possible or not.
The decision to throw out-of-memory error is made by the running JVM at the time when you attempt to instantiate the object. If you run the same program on a system with more memory, or let your running JVM use more memory by increasing heap size (how?), the program may run to completion without throwing the error.
One aspect worth mentioning: some checking does take place at compile time. You can only use int values when declaring the array size; the compiler will reject a long dimension expression. A negative literal size, however, still compiles; it only fails at runtime with a NegativeArraySizeException.
But as the other answer says: the compiler accepting your code isn't the same as reality at runtime accepting your input.
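A minimal sketch of that difference between compile-time and runtime checks (the variable names are illustrative):

```java
public class ArraySize {
    public static void main(String[] args) {
        // long n = 1L << 33;
        // int[] a = new int[n]; // compile error: dimension expression must be int

        try {
            int size = -1;
            int[] b = new int[size]; // compiles fine, fails only at runtime
            System.out.println(b.length);
        } catch (NegativeArraySizeException e) {
            System.out.println("caught " + e.getClass().getSimpleName());
        }
    }
}
```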
I see references to pre-allocated JVM exceptions here:
- http://www.oracle.com/technetwork/java/javase/relnotes-139183.html
- http://dev.clojure.org/display/community/Project+Ideas+2016
But looking for that, I only find information about missing stack traces.
What are JVM pre-allocated exceptions? It seems like an optimisation.
How does it work, and what are its trade-offs?
These are exceptions which are preallocated when the JVM starts.
Preallocated exceptions are implicit: they are thrown by the JVM itself, not by throw new ..., when an unexpected condition occurs: dereferencing a null pointer, accessing an array with a negative index, etc.
When a method starts to throw one of these implicit exceptions too frequently, the JVM will notice that and replace the allocation of a new exception on every throw with throwing an already preallocated exception without a stack trace.
This mechanism is implementation-dependent, so if we're talking about HotSpot, you can find the list of these exceptions in graphKit.cpp:
NullPointerException
ArithmeticException
ArrayIndexOutOfBoundsException
ArrayStoreException
ClassCastException
The rationale is pretty simple: the most expensive part of throwing an exception is not the actual throw and stack unwinding, but creating the stack trace in the exception (a relatively slow call into the VM that happens in the exception constructor via Throwable#fillInStackTrace). For concrete numbers and relative costs, you can read an awesome article by a HotSpot performance engineer about exceptional performance.
Some people use exceptions for regular control flow (please don't do that) or for performance's sake (which is usually misguided; for example, see this kinda popular connection pool framework), so HotSpot makes this [probably bad] code a little bit faster by throwing an already-created exception without a stack trace, thus eliminating the most expensive part of the throw.
The downside of this approach is that you now have stack-trace-less exceptions. It's usually not a big deal: if these implicit exceptions are thrown that frequently, you probably don't use their stack traces anyway. But if that assumption is incorrect, you will have exceptions without traces in your logs. To prevent this, you can use -XX:-OmitStackTraceInFastThrow.
The release notes you posted explain the feature: "The compiler in the server VM now provides correct stack backtraces for all "cold" built-in exceptions. For performance purposes, when such an exception is thrown a few times, the method may be recompiled. After recompilation, the compiler may choose a faster tactic using preallocated exceptions that do not provide a stack trace."
Yes, it is an optimization.
To increase the speed of exception handling, exceptions that are thrown often may be preallocated.
This removes the need to create a new exception object every time one occurs, at the price of losing stack trace information.
You can disable this behaviour with the flag -XX:-OmitStackTraceInFastThrow.
Recently we have got pinned object overflow error in production environment, e.g.
Caused by: java.lang.InternalError: pinned object overflow!
Could you please explain:
1) What is a pinned object?
2) Does the JVM do it internally, or can it be done programmatically as well?
3) What are the possible cases in which a pinned object overflow can happen?
OK, we will assume that you are working with JRockit.
1) What is a pinned object?
A pinned object is one that is not allowed to move. Normally, an object might be moved from one address to another if it is being promoted or as part of compaction. But if an object is pinned, the GC will not try to move it until it is unpinned. This basically means that someone has a pointer to the memory address of an object, and the JVM has to keep the object in place.
2) Does the JVM do it internally, or can it be done programmatically as well?
As far as I know it can only be done programmatically. For example, the following JNI function allows direct access to the data held by the JVM:
(*env)->GetPrimitiveArrayCritical().
JRockit also has a performance optimization: pinning a buffer during an I/O operation, which allows its address to be handed directly to the operating system. This optimization is used implicitly when calling any method in *InputStream or *OutputStream (see details here).
3) What are the possible cases in which a pinned object overflow can happen?
There are a lot of possible causes: issues in JNI calls, bad exception handling in I/O calls. To be more precise, we would need heap dumps or profiling results (JRockit Mission Control). The first things to look at are the number of threads blocked in I/O and the number of *InputStream instances.
Once this exception is thrown, is the object that caused it thrown out of memory? Does that happen right away?
In other words, if I am adding objects to a list, at some point this can no longer happen and an OOM error is thrown. At that time, does anything happen to the list itself?
java.lang.OutOfMemoryError: Java heap space
This is thrown when a new object could not be created. The current object will continue to exist.
However, due to the nature of an error like this, the current code will stop executing, and it's likely that the current object will be garbage-collected soon. It depends on the structure of your code and whether references are still being held to your object.
From documentation:
Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.
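A sketch of the point above, with the allocation failure simulated by an explicitly thrown OutOfMemoryError rather than real heap exhaustion (so it runs safely anywhere); the class name is made up:

```java
import java.util.ArrayList;
import java.util.List;

public class OomSurvival {
    public static void main(String[] args) {
        List<int[]> list = new ArrayList<>();
        list.add(new int[8]); // this allocation succeeded

        try {
            // Simulate the failed allocation; a real OOM would be
            // thrown by 'new' when the heap cannot grow any further.
            throw new OutOfMemoryError("Java heap space (simulated)");
        } catch (OutOfMemoryError e) {
            // The list and the objects already stored in it are untouched.
        }

        System.out.println(list.size()); // the list still holds its element
    }
}
```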
According to http://docs.oracle.com/javase/6/docs/api/, an OutOfMemoryError is:
Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.
So it happens as soon as the JVM sees that it does not have enough heap space to allocate memory for the new object you're trying to create. The object never gets created, because there isn't enough memory for it.
I have a misbehaving application that seems to leak. After a brief profiler investigation, most memory (80%) is held by java.lang.ref.Finalizer instances. I suspect that finalizers fail to run.
A common cause of this seems to be exceptions thrown from the finalizer. However, the javadoc for the finalize method of the Object class (see here for instance) seems to contradict itself: it states
If an uncaught exception is thrown by the finalize method, the exception is ignored and finalization of that object terminates.
but later, it also states that
Any exception thrown by the finalize method causes the finalization of this object to be halted, but is otherwise ignored.
What should I believe (i.e., is finalization halted or not?), and do you have any tips on how to investigate such apparent leaks?
Thanks
Both quotes say:
An exception will cause finalization of this object to be halted/terminated.
Both quotes also say:
The uncaught exception is ignored (i.e. not logged or handled by the VM in any way)
So that answers the first half of your question. I don't know enough about Finalizers to give you advice on tracking down your memory leak though.
EDIT: I found this page which might be of use. It has advice such as setting fields to null manually in finalizers to allow the GC to reclaim them.
EDIT2: Some more interesting links and quotes:
From Anatomy of a Java Finalizer
Finalizer threads are not given maximum priorities on systems. If a "Finalizer" thread cannot keep up with the rate at which higher priority threads cause finalizable objects to be queued, the finalizer queue will keep growing and cause the Java heap to fill up. Eventually the Java heap will get exhausted and a java.lang.OutOfMemoryError will be thrown.
and also
it's not guaranteed that any objects that have a finalize() method are garbage collected.
EDIT3: Upon reading more of the Anatomy link, it appears that throwing exceptions in the Finalizer thread really slows it down, almost as much as calling Thread.yield(). You appear to be right that the Finalizer thread will eventually flag the object as able to be GC'd even if an exception is thrown. However, since the slowdown is significant it is possible that in your case the Finalizer thread is not keeping up with the object-creation-and-falling-out-of-scope rate.
My first step would be to establish whether this is a genuine memory leak or not.
The points raised in the previous answers all relate to the speed at which objects are collected, not the question of whether your objects are collected at all. Only the latter is a genuine memory leak.
We had a similar predicament on my project, and ran the application in "slow motion" mode to figure out if we had a real leak. We were able to do this by slowing down the stream of input data.
If the problem disappears when you run in "slow motion" mode, then the problem is probably one of the ones suggested in the previous answers, i.e. the Finalizer thread can't process the finalizer queue fast enough.
If that is the problem, it sounds like you might need to do some non-trivial refactoring as described in the page Bringer128 linked to, e.g.
Now let's look at how to write classes that require postmortem cleanup so that their users do not encounter the problems previously outlined. The best way to do so is to split such classes into two -- one to hold the data that need postmortem cleanup, the other to hold everything else -- and define a finalizer only on the former
Item 7 of Effective Java, Second Edition, is "Avoid finalizers". I strongly recommend that you read it. Here is an extract that may help you:
"Explicit termination methods are typically used in combination with try-finally construct to ensure termination"
I had the same issue. In our case, it was because an object called wait(0) in its finalize() method and never got notified, which blocked the java.lang.ref.Finalizer$FinalizerThread. More references:
The Secret Life Of The Finalizer
java/lang/ref/Finalizer.java
I once saw a similar problem: the finalizer thread could not keep up with the rate at which finalizable objects were generated.
My solution was to build a closed control loop: using MemoryMXBean.getObjectPendingFinalizationCount() and a PD (proportional and differential) control algorithm, we throttled the rate at which finalizable objects were created. Since we had a single entry point for creating them, we just sleep for the duration computed by the PD algorithm. It works well, though you need to tune the PD parameters.
Hope it helps.
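A rough sketch of that throttling idea using the standard MemoryMXBean; the threshold and back-off time are made-up tuning parameters, and the PD controller is reduced to a fixed sleep for brevity:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FinalizerThrottle {
    // Hypothetical tuning parameters; a real PD controller would
    // compute the back-off from the queue length and its rate of change.
    private static final int QUEUE_THRESHOLD = 10_000;
    private static final long BACKOFF_MILLIS = 50;

    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();

        // Approximate number of objects waiting in the finalizer queue.
        int pending = mem.getObjectPendingFinalizationCount();
        System.out.println(pending >= 0); // queue length is never negative

        if (pending > QUEUE_THRESHOLD) {
            // Back off before creating more finalizable objects,
            // giving the finalizer thread a chance to drain the queue.
            Thread.sleep(BACKOFF_MILLIS);
        }
    }
}
```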