I see references to pre-allocated JVM exceptions there:
- http://www.oracle.com/technetwork/java/javase/relnotes-139183.html
- http://dev.clojure.org/display/community/Project+Ideas+2016
But looking for that I see only information about missing stacktraces.
What are JVM pre-allocated exceptions? It seems like an optimisation.
How does it work, and what are its trade-offs?
These are exceptions that are preallocated when the JVM starts.
Preallocated exceptions apply only to implicit throws: they are thrown by the JVM itself, not by your own throw new ..., when an unexpected condition occurs: dereferencing a null pointer, accessing an array with a negative index, and so on.
When a method starts to throw one of these exceptions (implicitly) too frequently, the JVM notices and, instead of allocating a fresh exception on every throw, throws an already-preallocated exception that carries no stacktrace.
This mechanism is implementation-dependent; for HotSpot, you can find the list of these exceptions in graphKit.cpp:
NullPointerException
ArithmeticException
ArrayIndexOutOfBoundsException
ArrayStoreException
ClassCastException
The rationale is pretty simple: the most expensive part of throwing an exception is not the actual throw and stack unwinding, but creating the stacktrace inside the exception (a relatively slow call into the VM that happens in the exception constructor via Throwable#fillInStackTrace). For concrete numbers and relative costs, you can read the excellent article on exceptional performance by a HotSpot performance engineer.
Some people use exceptions for regular control flow (please don't do that) or for performance's sake (which is usually misguided; for an example, see this rather popular connection pool framework), so HotSpot makes this [probably bad] code a little faster by throwing an already-created exception without a stacktrace, eliminating the most expensive part of the throw.
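As an aside, libraries that deliberately use exceptions for control flow sometimes sidestep the stacktrace cost themselves instead of relying on the JIT. A minimal sketch of that trick (the FastException name is invented, and this is not any particular framework's code):

// A sketch of a "cheap" exception intended for control flow
public class FastException extends RuntimeException {
    public FastException(String message) {
        // The four-argument Throwable constructor (Java 7+) disables the writable
        // stack trace, so the expensive fillInStackTrace call never happens.
        super(message, null, false, false);
    }
}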
The downside of the JVM's fast-throw optimization is that you now have stacktraceless exceptions. Usually it's not a big deal: if these implicit exceptions are thrown that frequently, you probably aren't using their stacktraces. But if that assumption is wrong, you will see exceptions without traces in your logs. To prevent this, you can use -XX:-OmitStackTraceInFastThrow.
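To see the optimization in action, here is a minimal sketch (the class name is invented; the exact number of throws needed before HotSpot switches to the preallocated exception, and whether it happens at all, depends on the JVM version and JIT thresholds). Run it as is, then again with -XX:-OmitStackTraceInFastThrow to see the traces preserved:

public class FastThrowDemo {
    // A field the JIT cannot trivially prove to be null at the call site
    static Object target = null;

    public static void main(String[] args) {
        for (int i = 0; i < 500_000; i++) {
            try {
                target.hashCode();   // implicit NullPointerException raised by the JVM
            } catch (NullPointerException e) {
                // Once the method is recompiled, the preallocated exception carries no frames
                if (e.getStackTrace().length == 0) {
                    System.out.println("stackless NPE after " + i + " throws");
                    return;
                }
            }
        }
        System.out.println("every NPE still had a stack trace");
    }
}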
The release notes you posted explain the feature: "The compiler in the server VM now provides correct stack backtraces for all "cold" built-in exceptions. For performance purposes, when such an exception is thrown a few times, the method may be recompiled. After recompilation, the compiler may choose a faster tactic using preallocated exceptions that do not provide a stack trace."
Yes, it is an optimization.
To increase the speed of exception handling, exceptions that are thrown often may be preallocated.
This removes the need to continuously create new exceptions every time one occurs, at the price of losing stack trace information.
You can disable this behaviour using the flag -XX:-OmitStackTraceInFastThrow.
This question deals with cases where the JIT compiler decides to stop generating stacktraces for an exception once it has been thrown some number of times. I understand this is known as "fast throw" or "preallocated" exceptions.
Generally speaking, if one encounters such a preallocated exception, the missing stacktrace should be findable at least once at some earlier point in the JVM's life, before the JIT deemed the throw worth compiling out.
My question is whether mapping a reported occurrence of a preallocated exception back to at least one earlier, fully-traced instance can be guaranteed to be deterministic, and if not, whether there is any way to avoid this being a source of ambiguity, short of disabling the optimization altogether using -XX:-OmitStackTraceInFastThrow.
A simplistic example:
The shortened/preallocated exception that gets reported is something generic such as NullPointerException.
If there was only one type of stacktrace ending in an NPE earlier in the life of the JVM, then no problem. But what if there was more than one NPE already, from various points in the code? The JVM gives no indication of which, if any, of the earlier stacks have been compiled out, so how could you deterministically establish what the stacktrace would otherwise have been?
Could this circumstance actually arise, or is the modern Hotspot JIT clever enough to avoid creating such an ambiguity?
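To make the ambiguity concrete, here is a hypothetical sketch (all names invented): two unrelated call sites both raise implicit NPEs, and once the fast-throw optimization kicks in, the bare NullPointerException in the logs no longer tells you whether it came from parseHeader or parseBody.

public class AmbiguousNpe {
    static String header;   // sometimes null
    static String body;     // sometimes null

    static int parseHeader() {
        return header.length();   // implicit NPE, call site A
    }

    static int parseBody() {
        return body.length();     // implicit NPE, call site B
    }

    public static void main(String[] args) {
        for (int i = 0; i < 200_000; i++) {
            try { parseHeader(); } catch (NullPointerException e) { /* stackless once hot */ }
            try { parseBody();   } catch (NullPointerException e) { /* stackless once hot */ }
        }
    }
}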
The JVM itself does not try to map these preallocated exceptions to any previously thrown exception.
You, as a developer, can try to guess where these preallocated exceptions come from but there are no guarantees.
If you are trying to debug something and see stacktrace-less exceptions, the safest thing is to disable the optimization using -XX:-OmitStackTraceInFastThrow while you try to find the source of the problem.
As we all know, there are multiple causes of OutOfMemoryError (see the first answer). Why is there only one exception covering all these cases instead of multiple fine-grained ones inheriting from OutOfMemoryError?
I'd expect it's because you really can't do anything else when that happens: it almost doesn't matter WHY you ran out, since you're screwed regardless. Perhaps the additional info would be nice, but...
I know Tomcat tries to do this "Out Of Memory Parachute" thing, where they hold onto a chunk of memory and try to release it, but I'm not sure how well it works.
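For what it's worth, a rough sketch of that parachute idea (names invented, not Tomcat's actual code): reserve a chunk of heap up front and drop it the moment an OutOfMemoryError is caught, so there is enough free memory left to log and shut down in an orderly way.

public class OomParachute {
    // The reserve size is a guess and entirely application-specific
    private static byte[] reserve = new byte[1024 * 1024];

    public static void runGuarded(Runnable task) {
        try {
            task.run();
        } catch (OutOfMemoryError oom) {
            reserve = null;   // release the parachute so the lines below can allocate
            System.err.println("Out of memory, shutting down: " + oom.getMessage());
            System.exit(1);
        }
    }
}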
The garbage collection process is deliberately very vaguely described to allow the greatest possible freedom for the JVM-implementors.
Hence the classes you mention are not provided in the API, but only in the implementation.
If you relied on them, your program would crash if running on a JVM without these vendor-specific sub-classes, so you don't do that.
You only need to subclass an exception if applications need to be able to catch and deal with the different cases differently. But you shouldn't be catching and attempting to recover from these cases at all ... so the need should not arise.
... but yeah I would still like to have a more descriptive reason for dying on me.
The exception message tells you which of the OOME sub-cases have occurred. If you are complaining that the messages are too brief, it is not the role of Java exception messages to give a complete explanation of the problem they are reporting. That's what the javadocs and other documentation is for.
@Thorbjørn presents an equally compelling argument. Basically, the different subcases are all implementation specific. Making them part of the standard API risks constraining JVM implementations to do things in suboptimal ways to satisfy the API requirements. And this approach risks creating unnecessary application portability barriers when new subclasses are created for new implementation-specific subcases.
(For instance the hypothetical UnableToCreateNativeThreadError 1) assumes that the thread creation failed because of memory shortage, and 2) that the memory shortage is qualitatively different from a normal out of memory. 2) is true for current Oracle JVMs, but not for all JVMs. 1) is possibly not even true for current Oracle JVMs. Thread creation could fail because of an OS-imposed limit on the number of native threads.)
If you are interested in why it is a bad idea to try to recover from OOME's, see these Questions:
Catching java.lang.OutOfMemoryError?
Can the JVM recover from an OutOfMemoryError without a restart
Is it possible to catch out of memory exception in java? (my answer).
IMO there is no definite answer to this question, and it all boils down to the design decisions made at the time. This question is very similar to something like "why isn't the Date class immutable" or "why does Properties extend Hashtable". As pointed out by another poster, subclasses really wouldn't matter since you are screwed anyway. Plus, the descriptive error messages are good enough to start troubleshooting with.
Mostly because computing something smart would require allocating memory at some point. So you have to throw the OutOfMemoryError without doing any more computation.
This is not a big deal anyway, because your program is already screwed. At most you can return an error code to the system with System.exit(ERROR_LEVEL); you can't even log, because that would require allocating memory or using memory that is possibly corrupted.
This is because all four are fatal errors that are impossible to recover from (except perhaps running out of heap space, but even then you would still be near the edge of the failure point).
I've always thought that using a "if" is way better (in performance terms) than catching an exception. For example, doing this:
User u = Users.getUser("Michael Jordan");
if (u != null)
    System.out.println(u.getAge());
vs.
User u = Users.getUser("Michael Jordan");
try {
    System.out.println(u.getAge());
} catch (Exception e) {
    // Do something with the exception
}
If we benchmark this, it's quite obvious that the first snippet is faster than the second one. That's what I've always thought.
But, yesterday, a guy told me something like this:
Have you ever considered what happens in thousands of executions of your program? Every time your execution goes through your "if" you have a little (really little, but still something) performance cost. With exceptions it doesn't happen, because the condition might never arise.
To make it clear: thousands of times executing an "if" vs one exception catch.
I think it makes sense, but I don't have any proof.
Can you help me?
Thanks!!
No. Not once the JIT kicks in. Tell him to read up on trace caches.
A dynamic trace ("trace path") contains only instructions whose results are actually used, and eliminates instructions following taken branches (since they are not executed); a dynamic trace can be a concatenation of multiple basic blocks. This allows the instruction fetch unit of a processor to fetch several basic blocks, without having to worry about branches in the execution flow.
Basically, if the same branches are taken every time through a loop body, then that body ends up as a single trace in the trace cache. The processor will not incur the cost of fetching the extra branch instructions (unless that pushes the basic block over the limit of what can be stored in the trace cache, unlikely) and does not have to wait for the result of the test before starting to execute following instructions.
Do not ever, EVER, sacrifice your code quality for performance until you've proven beyond a reasonable doubt it's actually called for. I highly doubt that the performance of an if() statement will ever become the bottleneck of your program. If it does, you should re-write it in C. In Java land, 99% of the time the bottleneck is I/O - disk and/or network.
Except for maybe machine exceptions, any exception you've caught has been preceded by some kind of if conditional. Even a NullPointerException was thrown following an if (something == null) check down in the JVM. Don't worry about the performance of an if statement. Don't worry about the performance of try/catch either, since presumably an error should occur much less frequently than successful execution. I don't think you need to optimize how fast your errors are thrown. :-)
You have no proof because you have no case. It is certainly an interesting abstract question: if you had a piece of code that was a bottleneck, and you could eliminate an if because one side of the condition was very rare, then it might mean something. But in the abstract, doing that with exceptions makes code harder to read and maintain, so it should not be done without a real-world problem in front of you. In the real world, the exception may be thrown 50% of the time. You don't know until you have a real-world scenario.
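If you really want numbers for your own case, a rough micro-benchmark sketch follows (all names invented; a proper harness like JMH would be more trustworthy, and the result depends heavily on JIT warm-up and on how often the value is actually null):

public class IfVsCatchBench {
    public static void main(String[] args) {
        String[] values = new String[1_000_000];
        // Roughly 1% nulls; change this ratio to model your real data
        for (int i = 0; i < values.length; i++) {
            values[i] = (i % 100 == 0) ? null : "user";
        }

        long sum = 0;
        long t0 = System.nanoTime();
        for (String v : values) {
            if (v != null) sum += v.length();   // explicit null check
        }
        long ifNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (String v : values) {
            try {
                sum += v.length();              // rely on the implicit NPE instead
            } catch (NullPointerException e) {
                // swallow, mirroring the question's catch block
            }
        }
        long catchNanos = System.nanoTime() - t1;

        System.out.println("if: " + ifNanos + " ns, catch: " + catchNanos + " ns (sum=" + sum + ")");
    }
}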
I have to serialize around a million items and I get the following exception when I run my code:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOfRange(Unknown Source)
    at java.lang.String.<init>(Unknown Source)
    at java.io.BufferedReader.readLine(Unknown Source)
    at java.io.BufferedReader.readLine(Unknown Source)
    at org.girs.TopicParser.dump(TopicParser.java:23)
    at org.girs.TopicParser.main(TopicParser.java:59)
How do I handle this?
I know that the official Java answer is "Oh noes! Out of memories! I give in!". This is all rather frustrating for anyone who has programmed in environments where running out of memory is not allowed to be a fatal error (for example, writing an OS, or writing apps for non-protected OSes).
The willingness to surrender is necessary - you can't control every aspect of Java memory allocation, so you can't guarantee that your program will succeed in low-memory conditions. But that doesn't mean you must go down without a fight.
Before fighting, though, you could look for ways to avoid the need. Perhaps you can avoid Java serialization, and instead define your own data format which does not require significant memory allocation to create. Serialization allocates a lot of memory because it keeps a record of objects it has seen before, so that if they occur again it can reference them by number instead of outputting them again (which could lead to an infinite loop). But that's because it needs to be general-purpose: depending on your data structure, you might be able to define some text/binary/XML/whatever representation which can just be written to a stream with very little need to store extra state. Or you might be able to arrange that any extra state you need is stored in the objects all along, not created at serialization time.
If your application does one operation which uses a lot of memory, but mostly uses much less, and especially if that operation is user-initiated, and if you can't find a way to use less memory or make more memory available, then it might be worth catching OutOfMemory. You could recover by reporting to the user that the problem is too big, and inviting them to trim it down and try again. If they've just spent an hour setting up their problem, you do not want to just bail out of the program and lose everything - you want to give them a chance to do something about it. As long as the Error is caught way up the stack, the excess memory will be unreferenced by the time the Error is caught, giving the VM at least a chance to recover. Make sure you catch the error below your regular event-handling code (catching OutOfMemory in regular event handling can result in busy loops, because you try to display a dialog to the user, you're still out of memory, and you catch another Error). Catch it only around the operation which you've identified as the memory-hog, so that OutOfMemoryErrors you can't handle, that come from code other than the memory hog, are not caught.
Even in a non-interactive app, it might make sense to abandon the failed operation, but for the program itself to carry on running, processing further data. This is why web servers manage multiple processes such that if one page request fails for lack of memory, the server itself doesn't fall over. As I said at the top, single-process Java apps can't make any such guarantees, but they can at least be made a bit more robust than the default.
That said, your particular example (serialization) may not be a good candidate for this approach. In particular, the first thing the user might want to do on being told there's a problem is save their work: but if it's serialization which is failing, it may be impossible to save. That's not what you want, so you might have to do some experiments and/or calculations, and manually restrict how many million items your program permits (based on how much memory it is running with), before the point where it tries to serialize.
This is more robust than trying to catch the Error and continue, but unfortunately it's difficult to work out the exact bound, so you would probably have to err on the side of caution.
If the error is occurring during deserialization then you're on much firmer ground: failing to load a file should not be a fatal error in an application if you can possibly avoid it. Catching the Error is more likely to be appropriate.
Whatever you do to handle lack of resources (including letting the Error take down the app), if you care about the consequences then it's really important to test it thoroughly. The difficulty is that you never know exactly what point in your code the problem will occur, so there is usually a very large number of program states which need to be tested.
Ideally, restructure your code to use less memory. For example, perhaps you could stream the output instead of holding the whole thing in memory.
Alternatively, just give the JVM more memory with the -Xmx option.
You should not handle it in code. OutOfMemoryError should not be caught and handled. Instead, start your JVM with a bigger heap space:
java -Xmx512M
should do the trick.
See here for more details
Everyone else has already covered how to give Java more memory, but because "handle" could conceivably mean catch, I'm going to quote what Sun has to say about Errors:
An Error is a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch. Most such errors are abnormal conditions.
(emphasis mine)
You get an OutOfMemoryError because your program requires more memory than the JVM has available. There is nothing you can specifically do at runtime to help this.
As noted by krosenvold, your application may be making sensible demands for memory but it just so happens that the JVM is not being started with enough (e.g. your app will have a 280MB peak memory footprint but the JVM only starts with 256MB). In this case, increasing the size allocated will solve this.
If you feel that you are supplying adequate memory at start up, then it is possible that your application is either using too much memory transiently, or has a memory leak. In the situation you have posted, it sounds like you are holding references to all of the million items in memory at once, even though potentially you are dealing with them sequentially.
Check what your references are like for items that are "done" - you should drop those references as soon as possible so that they can be garbage collected. If you're adding a million items to a collection and then iterating over that collection, for example, you'll need enough memory to store all of those object instances. See if you can instead take one object at a time, serialise it and then discard the reference.
If you're having trouble working this out, posting a pseudo-code snippet would help.
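As a rough illustration of that one-object-at-a-time idea (the class and file names are placeholders, not code from the question): produce the items lazily and write each one straight to the ObjectOutputStream, calling reset() occasionally so the stream's back-reference table does not grow without bound.

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.Iterator;

public class StreamingSerializer {
    // Assumes items can be produced lazily, e.g. from a file or database cursor
    public static void serializeAll(Iterator<?> items, String fileName) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(fileName))) {
            int count = 0;
            while (items.hasNext()) {
                out.writeObject(items.next());   // only one item is strongly referenced here
                if (++count % 10_000 == 0) {
                    out.reset();   // clear the handle table so it cannot hold on to every object
                }
            }
        }
    }
}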
In addition to some of the tips that have already been given to you, such as reviewing memory leaks and starting the JVM with more memory (-Xmx512M):
It looks like you get the OutOfMemoryError because your TopicParser is reading a line that is probably pretty big (which is what you should avoid). Instead, you can use a FileReader (or, if the encoding is an issue, an InputStreamReader wrapping a FileInputStream). Use its read(char[]) method with a reasonably sized char[] array as a buffer.
Finally, to investigate why the OutOfMemoryError happens, you can use the
-XX:+HeapDumpOnOutOfMemoryError
flag on the JVM to get a heap dump written to disk.
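A minimal sketch of the buffered read(char[]) approach suggested above (the file name and buffer size are placeholders):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class ChunkedRead {
    public static void main(String[] args) throws IOException {
        try (Reader in = new InputStreamReader(new FileInputStream("topics.txt"), StandardCharsets.UTF_8)) {
            char[] buffer = new char[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                // process buffer[0..n) here instead of accumulating one huge String
            }
        }
    }
}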
Good luck!
Interesting - you are getting an out of memory on a readLine. At a guess, you are reading in a big file without linebreaks.
Instead of using readLine to get the stuff out of the file as one single big long string, write something that understands the input a bit better and handles it in chunks.
If you simply must have the whole file in a single big long string ... well, get better at coding. In general, trying to handle multimegabyte data by stuffing it all into a single array of bytes (or whatever) is a good way to lose.
Go have a look at CharSequence.
Use the transient keyword to mark fields in the serialized classes which can be generated from existing data.
Implement writeObject and readObject to help with reconstructing transient data.
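A hedged sketch of that combination (the Topic class and its fields are invented for illustration): mark derivable data transient so it is never written, and rebuild it in readObject after the regular fields are restored.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;

public class Topic implements Serializable {
    private static final long serialVersionUID = 1L;

    private String rawText;               // serialized as usual
    private transient String[] words;     // derivable from rawText, so not serialized

    public Topic(String rawText) {
        this.rawText = rawText;
        this.words = rawText.split("\\s+");
    }

    // Rebuild the transient field after the non-transient fields have been read
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        this.words = rawText.split("\\s+");
    }
}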
After you follow the suggestion of increasing heap space (via -Xmx), be sure to use either JConsole or JVisualVM to profile your application's memory usage. Make sure that memory usage does not grow continuously; if it does, you'll still get the OutOfMemoryError, it will just take longer.
You can increase the amount of memory Java uses with the -Xmx option, for instance:
java -Xmx512M -jar myapp.jar
Better is to reduce the memory-footprint of your app. You serialize millions of items? Do you need to keep all of them in memory? Or can you release some of them after using them? Try to reduce the used objects.
Start java with a larger value for option -Xmx, for instance -Xmx512m
There's no real way of handling it nicely. Once it happens, you are in unknown territory. You can tell by the name - OutOfMemoryError. And it is described as:
Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.
Usually OutOfMemoryError indicates that there is something seriously wrong with the system/approach (and it's hard to point to the particular operation that triggered it).
Quite often it simply has to do with running out of heap space. Using -verbose:gc and the previously mentioned -XX:+HeapDumpOnOutOfMemoryError should help.
You can find a nice and concise summary of the problem at javaperformancetuning
Before taking any dangerous, time-consuming or strategic actions, you should establish exactly what in your program is using up so much of the memory. You may think you know the answer, but until you have evidence in front of you, you don't. There's the possibility that memory is being used by something you weren't expecting.
Use a profiler. It doesn't matter which one; there are plenty of them. First, find out how much memory is being used up by each object. Second, step through iterations of your serializer, compare memory snapshots and see what objects or data are created.
The answer will most likely be to stream the output rather than building it in memory. But get evidence first.
I have discovered an alternative. While respecting all the other views that we should not try to catch an out-of-memory error, this is what I've learned recently:
try {
    // ... the work that might exhaust memory ...
} catch (Throwable ex) {
    if (!(ex instanceof ThreadDeath)) {
        ex.printStackTrace(System.err);
    }
}
for your reference: OutOfMemoryError
Any feedback is welcome.
Avishek Arang