I see references to pre-allocated JVM exceptions there:
- http://www.oracle.com/technetwork/java/javase/relnotes-139183.html
- http://dev.clojure.org/display/community/Project+Ideas+2016
But searching for that, I only find information about missing stacktraces.
What are JVM pre-allocated exceptions? It seems like an optimisation.
How does it work, and what are its trade-offs?
These are exceptions which are preallocated at JVM startup.
Preallocated exceptions are implicit: they are thrown by the JVM itself, not by throw new ..., when an unexpected condition occurs: dereferencing a null pointer, accessing an array with a negative index, etc.
When a method starts to (implicitly) throw one of these exceptions too frequently, the JVM notices this and, instead of allocating a new exception on every throw, throws an already preallocated exception without a stacktrace.
This mechanism is implementation-dependent, so in the case of HotSpot you can find the list of these exceptions in graphKit.cpp:
NullPointerException
ArithmeticException
ArrayIndexOutOfBoundsException
ArrayStoreException
ClassCastException
The rationale is pretty simple: the most expensive part of throwing an exception is not the actual throwing and stack unwinding, but creating the stacktrace in the exception (it's a relatively slow call into the VM and happens in the exception constructor via Throwable#fillInStackTrace). For concrete numbers and relative costs, you can read the excellent article about exceptional performance by a HotSpot performance engineer.
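As an aside (not part of the original answer): library code that wants cheap exceptions for exactly this reason typically skips the stacktrace work, either by overriding fillInStackTrace or, since Java 7, through Throwable's protected constructor that takes a writableStackTrace flag. A minimal sketch, with a made-up class name:

class CheapException extends RuntimeException {
    CheapException(String message) {
        // message, cause, enableSuppression = false, writableStackTrace = false
        super(message, null, false, false);
    }
}

Constructing such an exception costs little more than an ordinary object allocation, and getStackTrace() simply returns an empty array.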
Some people use exceptions for regular control flow (please don't do that) or for performance's sake (which is usually misguided; for example, see this fairly popular connection pool framework), so HotSpot makes this [probably bad] code a little faster by throwing an already-created exception without a stacktrace (so the most expensive part of throwing is eliminated).
The downside of this approach is that you now have stacktrace-less exceptions. It's usually not a big deal: if these implicit exceptions are thrown that frequently, you probably aren't using their stacktraces. But if that assumption is incorrect, you will have exceptions without traces in your logs. To prevent this you can use -XX:-OmitStackTraceInFastThrow.
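To see the fast-throw behaviour for yourself, here is a minimal sketch (the class and field names are made up, and the iteration count needed varies by JVM and flags). On a typical HotSpot run the implicit NPE eventually loses its stacktrace; with -XX:-OmitStackTraceInFastThrow it keeps it:

public class FastThrowDemo {
    static Object maybeNull = null;   // always null, so calling a method on it throws an implicit NPE

    static void poke() {
        maybeNull.hashCode();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1_000_000; i++) {
            try {
                poke();
            } catch (NullPointerException e) {
                if (e.getStackTrace().length == 0) {
                    System.out.println("preallocated NPE observed at iteration " + i);
                    return;
                }
            }
        }
        System.out.println("every NPE still carried a stacktrace");
    }
}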
The release notes you posted explain the feature:
"The compiler in the server VM now provides correct stack backtraces for all "cold" built-in exceptions. For performance purposes, when such an exception is thrown a few times, the method may be recompiled. After recompilation, the compiler may choose a faster tactic using preallocated exceptions that do not provide a stack trace."
Yes, it is an optimization.
To increase the speed of exception handling, exceptions that are thrown often may be preallocated.
This removes the need to continuously create new exceptions every time one occurs, at the price of losing stack trace information.
You can disable this behaviour using the flag -XX:-OmitStackTraceInFastThrow.
Related
This question deals with cases where the JIT compiler decides it will no longer generate stacktraces because it has already done so a certain number of times. I understand this is known as "fast throw" or "preallocated" exceptions.
Generally speaking, if one encounters such a preallocated exception, the missing stacktrace should be findable at least once at some earlier point in the JVM's life, before the JIT deemed it worth compiling out.
My question is whether the mapping from a reported occurrence of a preallocated exception back to at least one instance of an earlier exception is guaranteed to be deterministic, and if not, whether there is any way to avoid this ambiguity short of disabling the optimization altogether with -XX:-OmitStackTraceInFastThrow.
A simplistic example:
The shortened/preallocated exception that gets reported is something generic such as NullPointerException.
If there was only one kind of stacktrace ending in an NPE earlier in the JVM's life, then there is no problem. But what if there was already more than one NPE from various points in the code? The JVM gives no indication of which, if any, of the earlier stacks have been compiled out, so how could you deterministically establish what the stacktrace would otherwise have been?
Could this circumstance actually arise, or is the modern Hotspot JIT clever enough to avoid creating such an ambiguity?
The JVM itself does not try to map these preallocated exceptions to any previously thrown exception.
You, as a developer, can try to guess where these preallocated exceptions come from but there are no guarantees.
If you are trying to debug something and see stacktrace-less exceptions, the safest thing is to disable the optimization using -XX:-OmitStackTraceInFastThrow while you try to find the source of the problem.
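One low-tech aid (the helper and its name are my own, not a standard API): when logging caught exceptions, flag the ones with an empty stacktrace so you know the fast-throw optimization has kicked in and a rerun with the flag is needed to see the real origin:

class FastThrowAwareLogging {
    static void log(Throwable t) {
        if (t.getStackTrace().length == 0) {
            // the JIT replaced this throw with a preallocated, trace-less exception
            System.err.println(t + " (stacktrace omitted; rerun with -XX:-OmitStackTraceInFastThrow)");
        } else {
            t.printStackTrace();
        }
    }
}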
See this question where everyone talks about how "obviously" performance will suffer, or exceptions should be avoided when performance is an issue, etc.
But I haven't seen a good explanation as to why throwing exceptions is bad for performance; everyone in that question seems to take it for granted.
The reason I ask this, is that I'm attempting to optimize an application and have noticed that several hundred exceptions are thrown and swallowed on certain actions, such as clicking a button to load a new page.
First, of course, it's simply bad design, because "exception" has a semantic meaning ("some circumstance prevented this method from fulfilling its contract"), and using exceptions for anything else abuses the feature in a surprising way.
In the case of Java, creating exception objects (specifically, filling in stack traces) is extremely expensive because it involves walking the stack, lots of object allocations and string manipulations, and so on. Actually throwing the exception isn't where the main performance penalty is.
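A rough sketch of how one could see this for oneself (not a rigorous benchmark; use a harness like JMH for real numbers, and expect results to vary by JVM): construct exceptions normally versus with fillInStackTrace overridden to do nothing.

public class ExceptionCostSketch {
    static class NoTraceException extends RuntimeException {
        @Override
        public synchronized Throwable fillInStackTrace() {
            return this;   // skip the stack walk entirely
        }
    }

    static volatile Object sink;   // keeps the JIT from discarding the allocations

    public static void main(String[] args) {
        final int n = 1_000_000;
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink = new RuntimeException("with trace");
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink = new NoTraceException();
        }
        long t2 = System.nanoTime();
        System.out.printf("with stacktrace:    %d ms%n", (t1 - t0) / 1_000_000);
        System.out.printf("without stacktrace: %d ms%n", (t2 - t1) / 1_000_000);
    }
}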
As we all know, there are multiple possible causes of OutOfMemoryError (see the first answer). Why is there only one error covering all these cases, instead of multiple fine-grained ones inheriting from OutOfMemoryError?
I'd expect it's because you really can't do anything else when that happens: it almost doesn't matter WHY you ran out, since you're screwed regardless. Perhaps the additional info would be nice, but...
I know Tomcat tries to do this "Out Of Memory Parachute" thing, where they hold onto a chunk of memory and try to release it, but I'm not sure how well it works.
The garbage collection process is deliberately very vaguely described to allow the greatest possible freedom for the JVM-implementors.
Hence the classes you mention are not provided in the API, but only in the implementation.
If you relied on them, your program would crash if running on a JVM without these vendor-specific sub-classes, so you don't do that.
You only need to subclass an exception if applications need to be able to catch and deal with the different cases differently. But you shouldn't be catching and attempting to recover from these cases at all ... so the need should not arise.
... but yeah I would still like to have a more descriptive reason for dying on me.
The exception message tells you which of the OOME sub-cases has occurred. If you are complaining that the messages are too brief, it is not the role of Java exception messages to give a complete explanation of the problem they are reporting. That's what the javadocs and other documentation are for.
@Thorbjørn presents an equally compelling argument. Basically, the different subcases are all implementation-specific. Making them part of the standard API risks constraining JVM implementations to do things in suboptimal ways to satisfy the API requirements. And this approach risks creating unnecessary application portability barriers when new subclasses are created for new implementation-specific subcases.
(For instance the hypothetical UnableToCreateNativeThreadError 1) assumes that the thread creation failed because of memory shortage, and 2) that the memory shortage is qualitatively different from a normal out of memory. 2) is true for current Oracle JVMs, but not for all JVMs. 1) is possibly not even true for current Oracle JVMs. Thread creation could fail because of an OS-imposed limit on the number of native threads.)
If you are interested in why it is a bad idea to try to recover from OOME's, see these Questions:
Catching java.lang.OutOfMemoryError?
Can the JVM recover from an OutOfMemoryError without a restart
Is it possible to catch out of memory exception in java? (my answer).
IMO there is no definite answer to this question, and it all boils down to the design decisions made at the time. This question is very similar to something like "why isn't the Date class immutable" or "why does Properties extend Hashtable". As pointed out by another poster, subclasses really wouldn't matter, since you are screwed anyway. Plus the descriptive error messages are good enough to start troubleshooting with.
Mostly because computing anything smart would require allocating memory at some point, so you have to throw the OutOfMemoryError without doing any more computation.
This is not a big deal anyway, because your program is already screwed. At most, what you can do is return an error code to the system with System.exit(ERROR_LEVEL); you can't even log, because that would require allocating memory or using memory that is possibly corrupted.
This is because all 4 are fatal errors that are impossible to recover from (except perhaps running out of heap space, but even then you would still be close to the point of failure).
Where are exceptions stored? Stack, or heap?
How is memory allocated and deallocated for exceptions?
Now, if you have more than one exception which needs to be handled, are objects created for all of these exceptions?
I would assume that memory for exceptions is allocated the same way as for all other objects (on the heap).
This used to be a problem for OutOfMemoryError, because you could not allocate memory for the error itself, which is why there was no stack trace until Java 1.6. Now the JVM pre-allocates space for the stacktrace as well.
If you are wondering where the reference to the exception is stored while it is being thrown, the JVM keeps the reference internally while it unwinds the call stack to find the exception handler, who then gets the reference (on its stack frame, just like any other local variable).
There cannot be two exceptions being thrown at the same time (on the same thread). They can be nested, but then you have only one "active" exception with a reference to the nested exception.
When all references to the exception disappear (e.g. after the exception handler is finished), the exception gets garbage-collected like everything else.
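A tiny sketch (not from the answer above) of the nesting case mentioned two paragraphs up, using the usual Java idiom of wrapping the original exception as the cause:

public class ChainingDemo {
    public static void main(String[] args) {
        try {
            try {
                throw new IllegalStateException("original problem");
            } catch (IllegalStateException original) {
                // the new exception becomes the single "active" one on this thread;
                // the old one is only reachable as its cause
                throw new RuntimeException("wrapped", original);
            }
        } catch (RuntimeException e) {
            System.out.println(e.getMessage() + ", caused by: " + e.getCause());
        }
        // once no references remain, both objects are eligible for garbage collection
    }
}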
For C++ it's not defined where to store exceptions, but most compilers use a special stack.
For Java, as somebody wrote before, they're stored on the heap.
As exceptions are objects in C++ as well as in Java, they're allocated and deallocated as objects in the specific language.
There is always only one exception active per thread in both languages.
For the most complete information on exceptions, we can go directly to the source: Chapter 11: Exceptions of The Java Language Specification, Second Edition.
Exceptions are indeed objects. They are a subclass of Throwable:
"Every exception is represented by an instance of the class Throwable or one of its subclasses; such an object can be used to carry information from the point at which an exception occurs to the handler that catches it."
Therefore, it would probably be safe to assume that, as with any other Object in Java, it will be allocated on the heap.
In terms of having multiple Exception objects, that probably wouldn't be the case, as once an exception occurs, the Java Virtual Machine will start to find an exception handler.
"During the process of throwing an exception, the Java virtual machine abruptly completes, one by one, any expressions, statements, method and constructor invocations, initializers, and field initialization expressions that have begun but not completed execution in the current thread. This process continues until a handler is found that indicates that it handles that particular exception by naming the class of the exception or a superclass of the class of the exception."
For more information on how exceptions are to be handled at runtime, Section 11.3 Handling of an Exception has the details.
Exceptions in Java are objects, and as such, they are stored on the heap.
When an exception is thrown, the JVM looks for a matching exception handler in the code. This applies even when there are multiple exceptions that something could throw.
"Where are exceptions stored? Stack, or heap? How is memory allocated and deallocated for exceptions?"
Exceptions are objects like any other in this regard. Allocated on the heap via new, deallocated by the garbage collector.
"Now, if you have more than one exception which needs to be handled, are objects created for all of these exceptions?"
Not sure what you mean by this. They're created via new, one at a time. There can be more than one around when you use exception chaining - and there's actually nothing keeping you from creating thousands of exceptions and putting them in a List somewhere - it just doesn't make much sense.
In Java, Exception extends Throwable, which extends Object; i.e. from a memory point of view it's an object just like any other.
It has been suggested that in Java 7 local variables could be placed on the stack, using escape analysis to find candidates for stack allocation. However, exceptions are typically thrown from a method to its caller (or its caller, etc.), so it wouldn't be very useful to place the Exception on the stack.
This may be a strange question, but do 'try-catch' blocks add any more to memory in a server environment than just running a particular block of code? For example, if I print a stack trace, does the JVM hold on to more information? Or is more information retained on the heap?
try {
    doSomething();   // placeholder for the real work
} catch (Exception e) {
    e.printStackTrace();
}

// versus just running it directly:
doSomething();
The exception will have a reference to the stack trace. printStackTrace will allocate more memory as it formats that stack trace into something pretty.
The try/catch block itself will likely result in some largely static code/data, but not in runtime memory allocations.
The important thing here is that as soon as the exception variable 'e' is no longer reachable (i.e., out of scope), it becomes eligible for garbage collection.
Technically the answer to your question is probably no. There are lots of reasons to avoid throwing Exceptions whenever possible, but memory isn't really a concern.
The real reason to only throw exceptions for truly exceptional conditions is that it's SLOW. Generating an exception involves carefully examining the stack; it's not a fast operation at all. If you're doing it as part of your regular flow of execution, it will noticeably affect your speed. I once wrote a logging system that I thought was extremely clever because it automatically figured out which class had invoked it by generating an Exception and examining the stack in that manner. Eventually I had to go back and take that part out, because it was noticeably slowing everything else down.
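For what it's worth, the pattern described there usually looks something like the sketch below (a reconstruction, not the original code); it works, but every call pays the full cost of filling in a stacktrace, which is why it had to go:

public class CallerAwareLogger {
    public static void log(String message) {
        StackTraceElement[] stack = new Throwable().getStackTrace();   // the expensive stack walk
        // stack[0] is log() itself; stack[1] is whoever called it
        String caller = stack.length > 1 ? stack[1].getClassName() : "unknown";
        System.out.println("[" + caller + "] " + message);
    }

    public static void main(String[] args) {
        log("hello");   // prints "[CallerAwareLogger] hello"
    }
}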
The stack trace is built when the exception is created. Printing the stack trace doesn't do anything more memory intensive than printing anything else.
The try/catch block might have some performance overhead, but not in the form of increased memory requirements.
For the most part, don't worry about memory/performance when exceptions happen. If you have an exception on a common code path, that suggests you are misusing exceptions.
If your question is more for academic purposes, then I don't know the full extent of what is going on there in terms of heap/memory space. However, Joshua Bloch in "Effective Java" mentions that the catch block of the try catch block is often relatively unoptimized by most JVM implementations.
While not directly related to memory consumption, there was a thread here a while back discussing How slow are the Java exceptions? It is worth a look, in my opinion.
I also had this link in my bookmarks. As far as I can recall, it gave an example of speed up possible when stack trace generation is skipped on exception throw, but the site seems down now.