See this question where everyone talks about how "obviously" performance will suffer, or exceptions should be avoided when performance is an issue, etc.
But I haven't seen a good explanation as to why throwing exceptions is bad for performance; everyone in that question seems to take it for granted.
The reason I ask this, is that I'm attempting to optimize an application and have noticed that several hundred exceptions are thrown and swallowed on certain actions, such as clicking a button to load a new page.
First, of course, it's simply bad design, because "exception" has a semantic meaning ("some circumstance prevented this method from fulfilling its contract"), and using exceptions for ordinary outcomes abuses the feature in a way that surprises whoever reads the code later.
In the case of Java, creating exception objects (specifically, filling in stack traces) is extremely expensive because it involves walking the stack, lots of object allocations and string manipulations, and so on. Actually throwing the exception isn't where the main performance penalty is.
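If creation cost is the issue and you control the exception type, one common trick (just a sketch here, not something the answer above prescribes) is to skip stack-trace capture entirely using the four-argument Throwable constructor available since Java 7:

// Sketch: an exception that never fills in its stack trace.
// Passing writableStackTrace = false means fillInStackTrace() is skipped.
public class FastException extends RuntimeException {
    public FastException(String message) {
        super(message, null, false, false); // message, cause, enableSuppression, writableStackTrace
    }
}

Throwing such an exception in a hot path is far cheaper, at the obvious cost of never having a stack trace to look at.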
Related
I see references to pre-allocated JVM exceptions there:
- http://www.oracle.com/technetwork/java/javase/relnotes-139183.html
- http://dev.clojure.org/display/community/Project+Ideas+2016
But looking for that I see only information about missing stacktraces.
What are JVM allocated exceptions? It seems like an optimisation.
How does it work, and what are its trade-offs?
These are exceptions that are preallocated when the JVM starts.
Preallocated exceptions are implicit: they are thrown by the JVM itself, not by throw new ..., when an unexpected condition occurs: dereferencing a null pointer, accessing an array with a negative index, etc.
When a method starts to throw (implicitly) one of these exceptions too frequently, the JVM notices and, instead of allocating a fresh exception on every throw, throws the already preallocated exception without a stack trace.
This mechanism is implementation-dependent, so if we're talking about HotSpot, you can find the list of these exceptions in graphKit.cpp:
NullPointerException
ArithmeticException
ArrayIndexOutOfBoundsException
ArrayStoreException
ClassCastException
The rationale is pretty simple: the most expensive part of throwing an exception is not the actual throw and stack unwinding, but creating the stack trace in the exception (a relatively slow call into the VM that happens in the exception constructor via Throwable#fillInStackTrace). For concrete numbers and relative costs, read the excellent article by a HotSpot performance engineer about exceptional performance.
Some people use exceptions for regular control flow (please don't do that) or for performance's sake (which is usually misguided; for example, see this fairly popular connection pool framework), so HotSpot makes this [probably bad] code a little bit faster by throwing the already created exception without a stack trace (thus eliminating the most expensive part of throwing).
The downside of this approach is that you now end up with stack-trace-less exceptions. That's usually not a big deal: if these implicit exceptions are thrown that frequently, you probably aren't using their stack traces. But if that assumption is wrong, you will see exceptions without traces in your logs. To prevent this, you can use -XX:-OmitStackTraceInFastThrow.
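To see the mechanism for yourself, here is a small sketch (iteration counts and JIT behaviour vary by JVM version and flags, so treat it as illustrative): it triggers implicit NPEs in a hot method until HotSpot starts handing out the preallocated, stack-trace-less instance. Run it again with -XX:-OmitStackTraceInFastThrow and the message should never appear.

public class FastThrowDemo {
    static Object target = null;

    static int poke() {
        return target.hashCode(); // implicit NullPointerException thrown by the JVM
    }

    public static void main(String[] args) {
        for (int i = 0; i < 500_000; i++) {
            try {
                poke();
            } catch (NullPointerException e) {
                if (e.getStackTrace().length == 0) {
                    System.out.println("preallocated NPE (no stack trace) after " + i + " throws");
                    return;
                }
            }
        }
        System.out.println("never saw a stack-trace-less NPE");
    }
}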
The release notes you posted explain the feature: "The compiler in the server VM now provides correct stack backtraces for all "cold" built-in exceptions. For performance purposes, when such an exception is thrown a few times, the method may be recompiled. After recompilation, the compiler may choose a faster tactic using preallocated exceptions that do not provide a stack trace."
Yes, it is an optimization.
To increase the speed of exception handling, exceptions that are thrown often may be preallocated.
This removes the need to continuously create new exceptions every time one occurs, at the price of losing stack trace information.
You can disable this behaviour with the flag -XX:-OmitStackTraceInFastThrow.
This question deals with cases where the JIT compiler decides to stop generating stack traces for an exception once it has already done so a certain number of times. I understand this is known as "fast throw" or "preallocated" exceptions.
Generally speaking, if one encounters such a preallocated exception, the missing stack trace should have appeared at least once earlier in the JVM's life, before the JIT deemed it worth compiling out.
My question is whether the mapping from a reported occurrence of a preallocated exception back to at least one earlier, fully traced exception can be guaranteed to be deterministic, and if not, whether there is any way to avoid this being a source of ambiguity short of disabling the optimization altogether with -XX:-OmitStackTraceInFastThrow.
A simplistic example:
The shortened/preallocated exception that gets reported is something generic such as NullPointerException.
If there was only one kind of stack trace topping out with an NPE in the earlier life of the JVM, then no problem. But what if there was more than one NPE already, from various points in the code? The JVM gives no indication of which, if any, of the earlier stacks have been compiled out, so how could you deterministically establish what the stack trace would otherwise have been?
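To make that concrete, here is a hypothetical sketch (the method names are invented for illustration): two unrelated call sites both throw implicit NPEs, and once fast throw kicks in, a logged stack-trace-less NullPointerException could have originated from either one.

static String describeUser(Object user) {
    return user.toString();   // NPE site A when user is null
}

static int firstOrder(int[] orderIds) {
    return orderIds[0];       // NPE site B when orderIds is null
}

// Once both sites have thrown often enough in compiled code, catch blocks
// higher up see the same preallocated NullPointerException with an empty
// stack trace, and nothing in the exception distinguishes site A from site B.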
Could this circumstance actually arise, or is the modern Hotspot JIT clever enough to avoid creating such an ambiguity?
The JVM itself does not try to map these preallocated exceptions to any previously thrown exception.
You, as a developer, can try to guess where these preallocated exceptions come from but there are no guarantees.
If you are trying to debug something and see stacktrace-less exceptions, the safest thing is to disable the optimization using -XX:-OmitStackTraceInFastThrow while you try to find the source of the problem.
As we all know, there are multiple reasons for OutOfMemoryError (see the first answer). Why is there only one exception covering all these cases instead of multiple fine-grained ones inheriting from OutOfMemoryError?
I'd expect it's because you really can't do anything else when that happens: it almost doesn't matter WHY you ran out, since you're screwed regardless. Perhaps the additional info would be nice, but...
I know Tomcat tries to do this "Out Of Memory Parachute" thing, where they hold onto a chunk of memory and try to release it, but I'm not sure how well it works.
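For what it's worth, the parachute idea can be sketched roughly like this (a simplified illustration, not Tomcat's actual implementation): reserve a buffer up front, and drop it when an OutOfMemoryError is caught so there is enough headroom left to log and shut down in an orderly way.

public class OomParachute {
    // Reserved headroom; the size here is arbitrary for the sketch.
    private static byte[] parachute = new byte[512 * 1024];

    public static void run(Runnable task) {
        try {
            task.run();
        } catch (OutOfMemoryError oom) {
            parachute = null;   // release the reserve so logging/cleanup can allocate
            System.err.println("Out of memory, shutting down: " + oom.getMessage());
            System.exit(1);
        }
    }
}

Whether the freed block is actually available again in time is JVM-dependent, which is part of why the technique is best-effort at most.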
The garbage collection process is deliberately very vaguely described to allow the greatest possible freedom for the JVM-implementors.
Hence the classes you mention are not provided in the API, but only in the implementation.
If you relied on them, your program would crash if running on a JVM without these vendor-specific sub-classes, so you don't do that.
You only need to subclass an exception if applications need to be able to catch and deal with the different cases differently. But you shouldn't be catching and attempting to recover from these cases at all ... so the need should not arise.
... but yeah I would still like to have a more descriptive reason for dying on me.
The exception message tells you which of the OOME sub-cases has occurred. If you are complaining that the messages are too brief, it is not the role of Java exception messages to give a complete explanation of the problem they are reporting. That's what the javadocs and other documentation are for.
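As an illustration of "the message tells you the sub-case" (the exact strings are HotSpot-specific and not guaranteed by any API), code that really must distinguish the cases ends up doing something like this:

static String classify(OutOfMemoryError oom) {
    String msg = String.valueOf(oom.getMessage());
    if (msg.contains("Java heap space"))            return "heap exhausted";
    if (msg.contains("Metaspace"))                  return "class metadata exhausted";
    if (msg.contains("GC overhead limit exceeded")) return "GC thrashing";
    if (msg.contains("unable to create") && msg.contains("thread")) return "native thread creation failed";
    return "other: " + msg;
}

which is exactly the kind of fragile, implementation-coupled code the standard API is better off not encouraging.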
@Thorbjørn presents an equally compelling argument. Basically, the different sub-cases are all implementation-specific. Making them part of the standard API risks constraining JVM implementations to do things in suboptimal ways to satisfy the API requirements. And this approach risks creating unnecessary application portability barriers when new subclasses are created for new implementation-specific sub-cases.
(For instance the hypothetical UnableToCreateNativeThreadError 1) assumes that the thread creation failed because of memory shortage, and 2) that the memory shortage is qualitatively different from a normal out of memory. 2) is true for current Oracle JVMs, but not for all JVMs. 1) is possibly not even true for current Oracle JVMs. Thread creation could fail because of an OS-imposed limit on the number of native threads.)
If you are interested in why it is a bad idea to try to recover from OOME's, see these Questions:
Catching java.lang.OutOfMemoryError?
Can the JVM recover from an OutOfMemoryError without a restart
Is it possible to catch out of memory exception in java? (my answer).
IMO there is no definite answer to this question, and it all boils down to the design decisions made at the time. This question is very similar to something like "why isn't the Date class immutable" or "why does Properties extend Hashtable". As pointed out by another poster, subclasses really wouldn't matter, since you are screwed anyway. Plus, the descriptive error messages are good enough to start troubleshooting with.
Mostly because computing something smart would require allocating memory at some point, so you have to throw OutOfMemoryError without doing any more computation.
This is not a big deal anyway, because your program is already screwed. At most, what you can do is return an error code to the system with System.exit(ERROR_LEVEL); you can't even log, because that would require allocating memory or using memory that is possibly corrupted.
This is because all four are fatal errors that are impossible to recover from (except perhaps running out of heap space, but even then you would still be near the edge of the failure point).
Disclaimer: I know how classes are loaded in the JVM and how and when they are unloaded. The question is not about the current behaviour; the question is why the JVM does not support "forced" class/classloader unloading.
It could have the following semantics: when a classloader is "force-unloaded", all classes it loaded are marked as "unloaded", meaning no new instances will be created (an exception will be thrown, like "ClassUnloadedException"). Then, all instances of such unloaded classes are marked as "unloaded" too, so every access to them will throw InstanceUnloadedException (just like NullPointerException).
Implementation: I think this could be done during garbage collection. For example, a compacting collector moves live objects anyway, so it could check whether the class of the current object was "unloaded" and, instead of moving the object, change the reference to a guarded page of memory (accessing it would throw the above-mentioned InstanceUnloadedException). This would effectively make the object garbage, too. Or perhaps it could be done during the "mark" phase of GC. Anyway, I think this is technically possible, with little overhead when no "unloading" occurs.
The rationale: Such "hardcore" mechanism could be useful for runtimes where a lot of dynamic code reloading occurs and failure of particular application or part of it is tolerable whereas failure of whole JVM is undesirable. For example, application servers and OSGi runtimes.
In my experience, dynamic redeployment in 9 cases out of 10 leads to running out of PermGen space due to references not being cleaned up correctly (like a ThreadLocal in a static field, filled in by a long-lived thread, etc.). Also, having an explicit exception instead of a hard-to-debug leak could help polish the code so that no references leak into long-lived scope uncontrolled.
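A minimal sketch of that kind of leak (a hypothetical class; assume remember() is called from a container-managed pool thread and the application is later redeployed):

public class LeakyHolder {
    static final class Payload { final byte[] data = new byte[1_000_000]; }

    // Static ThreadLocal populated from a long-lived container thread.
    private static final ThreadLocal<Payload> CACHE = new ThreadLocal<>();

    public static void remember() {
        CACHE.set(new Payload()); // never removed
    }
    // The Payload instance stays reachable from the pool thread's ThreadLocalMap,
    // keeps Payload.class alive, and through it the whole web-app classloader,
    // so the old classes can never be unloaded after a redeploy.
    // Fix: call CACHE.remove() on the same thread in an undeploy/shutdown hook.
}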
What do you think?
This feature would just cause havoc and confusion. Forcing the unload of a class would bring a lot of problems, like the deprecated Thread.stop() did, except that it would be many times worse.
Just for comparison, Thread.stop() tends to leave a lot of objects in inconsistent states due to the abrupt interruption of the thread, and the thread could be executing any type of code. Coding against that is impossible in practice, or at least a tremendous effort. It is considered somewhere between almost impossible and completely impossible to write correct multithreaded code in that scenario.
In your case, that sort of feature would have similar bad side effects, but on a much worse scale. The code could get the exception anywhere, unexpectedly, so in practice it would be very difficult or impossible to code defensively against it or handle it. Suppose that you have a try block doing some IO, and then a class is abruptly unloaded. The code will throw a ClassUnloadedException in an unexpected place, potentially leaving objects in inconsistent states. If you try to defend your code against it, the code responsible for that defense might fail as well due to another unexpected ClassUnloadedException. If you have a finally block that tries to close a resource and a ClassUnloadedException is thrown inside that block, the resource might never be closed. And again, handling it would be very hard, because the handler could get a ClassUnloadedException too.
By the way, NullPointerException is completely predictable. You got a pointer to something null and tried to dereference it. Normally it is a programming error, but the behaviour is completely predictable. ClassCastException, IllegalStateException, IllegalArgumentException and other RuntimeExceptions occur (or at least should occur) under predictable conditions. Exceptions which are not RuntimeExceptions may happen unexpectedly sometimes (like IOException), but the compiler forces you to handle or rethrow them.
On the other hand, StackOverflowError, OutOfMemoryError, ExceptionInInitializerError and NoClassDefFoundError are unpredictable things that may happen anywhere in the code, and very rarely is there anything you can do to handle or recover from them. When a program hits one of these, it normally just goes erratically crazy. The few programs that try to handle them limit themselves to warning the user that the program must be terminated immediately, maybe trying to save unsaved data. Your ClassUnloadedException is the typical thing that would be a ClassUnloadedError instead. It would manifest itself like an ExceptionInInitializerError or NoClassDefFoundError, which in 99% of cases means simply that your application is completely broken, except that it would be much worse, because it lacks fail-fast behaviour, adding still more randomness and unpredictability.
Dynamic redeployment is, by its very nature, one of the ugliest hacks that can happen in a JVM/container, since it abruptly changes the code of something that is already running, which tends to produce very erratic, random, buggy behavior. But it has its value, since it helps a lot in debugging. So, the defense against erratic behavior that the container implements is to create a new set of classes for the running program, sharing memory with the older one. If the new and the old parts of your program don't communicate directly (i.e., only by compatible serialization or by no communication at all), you are fine. You are normally safe too if no structural changes occur and no living object depends on some specific implementation detail that changed. If you follow these rules, no ClassUnloadedError will show up. However, there are some situations where you may not follow these rules and still be safe, like fixing a bug in a method and changing its parameters when no live object depends on the bug (i.e., such objects never existed or they are all dead).
But if you really want a ClassUnloadedError to be thrown whenever an object from the older part is accessed, since that behaviour flags that one of those isolation rules was broken, then you are bringing everything down anyway. In that case there is no point in having the new and old parts of the program around at the same time; it would be simpler to just redeploy completely.
And about implementing this in the GC: it does not work. A dead object is dead, no matter whether its class object is dead too or alive. The living objects of unloaded classes can't be garbage collected, because they are still reachable from other objects, no matter whether the implementation of every method would magically change to something that always throws an Exception/Error. Backtracking the references to the object would be a very expensive operation in any implementation, and multithreading this would be still worse, possibly with a severe performance hit on perfectly healthy objects and classes.
Further, dynamic class reloading is not intended for production use, just for developer testing. So it is not worth buying all that trouble and complexity for this feature.
Concluding, in practice your idea creates something that combines something similar to Thread.stop() with something similar to NoClassDefFoundError, but is stronger than the sum of the two. Like a queen in chess is a combination of a bishop and a rook, but stronger than the sum of the two. It is a really bad idea.
This may be a strange question, but do try-catch blocks add any more to memory in a server environment than just running a particular block of code? For example, if I print a stack trace, does the JVM hold on to more information? Or is more information retained on the heap?
try {
    doSomething();
} catch (Exception e) {
    e.printStackTrace();
}

// versus just running it without the try-catch:
doSomething();
The exception will have a reference to the stack trace. printStackTrace will allocate more memory as it formats that stack trace into something pretty.
The try/catch block itself will mostly show up in the static code/data segment, not as run-time memory allocations.
The important thing here is that as soon as the exception variable 'e' is no longer reachable (i.e., out of scope), it becomes eligible for garbage collection.
Technically the answer to your question is probably no. There are lots of reasons to avoid throwing Exceptions whenever possible, but memory isn't really a concern.
The real reason to only throw Exceptions for truly exceptional conditions is that it's SLOW. Generating an exception involves carefully examining the stack. It's not a fast operation at all. If you're doing it as part of your regular flow of execution, it will noticeably affect your speed. I once wrote a logging system that I thought was extremely clever because it automatically figured out which class had invoked it by generating an Exception and examining the stack in that manner. Eventually I had to go back and take that part out because it was noticeably slowing everything else down.
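That caller-detection trick looks roughly like this (a generic reconstruction, not the logging system described above); creating the Throwable is what forces the slow stack walk:

static String callerClassName() {
    StackTraceElement[] stack = new Throwable().getStackTrace();
    // stack[0] is this method, stack[1] is whoever called it
    return stack.length > 1 ? stack[1].getClassName() : "unknown";
}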
The stack trace is built when the exception is created. Printing the stack trace doesn't do anything more memory intensive than printing anything else.
The try/catch block might have some performance overhead, but not in the form of increased memory requirements.
For the most part, don't worry about memory or performance when exceptions happen. If you have an exception on a common code path, then that suggests you are misusing exceptions.
If your question is more for academic purposes, then I don't know the full extent of what is going on there in terms of heap/memory space. However, Joshua Bloch in "Effective Java" mentions that the catch block of the try catch block is often relatively unoptimized by most JVM implementations.
While not directly related to memory consumption, there was a thread here a while back discussing How slow are the Java exceptions? It is worth a look, in my opinion.
I also had this link in my bookmarks. As far as I can recall, it gave an example of speed up possible when stack trace generation is skipped on exception throw, but the site seems down now.