I would like to provide my system with a way of detecting whether an OutOfMemoryError has occurred. The aim of this exercise is to expose such a flag through JMX and act on it accordingly (e.g. by configuring a relevant alert on the monitoring system), as otherwise these errors sit unnoticed for days.
A naive approach would be to set an uncaught exception handler for every thread, check whether the raised exception is an instance of OutOfMemoryError, and set a relevant flag. However, this approach isn't realistic, for the following reasons:
The error can occur anywhere, including in third-party libraries. There is nothing I can do to prevent them from catching Throwable and keeping it to themselves.
Libraries can spawn their own threads and I have no way of enforcing uncaught exception handlers for these threads.
One possible approach I see is bytecode manipulation (e.g. attaching some sort of aspect on top of OutOfMemoryError), but I am not sure whether that's the right approach or whether it is doable at all.
We have -XX:+HeapDumpOnOutOfMemoryError enabled, but I don't see this as a solution for this problem as it was designed for something else - and it provides no Java callback when this happens.
Has anyone done this? How would you solve it or suggest solving it? Any ideas are welcome.
You could use an out-of-memory warning system; this OutOfMemoryError Warning System can be an inspiration. You could configure a listener which is invoked after a certain memory threshold (say 80%) is breached, and use that invocation to start taking corrective measures.
We use something similar: we suspend the component's service when its memory usage reaches the 80% threshold and start the clean-up action; the component comes back only when the used memory drops below another configurable threshold.
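For illustration, here is a minimal sketch of such a listener built on MemoryPoolMXBean; the 80% figure and the onLowMemory callback are placeholders you would replace with your own flag-setting or corrective logic.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class LowMemoryWarning {
    public static void install(final Runnable onLowMemory) {
        // Arm an 80% usage threshold on every heap pool that supports one.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * 0.8));
                }
            }
        }
        // Threshold-exceeded notifications are delivered by the MemoryMXBean.
        NotificationEmitter emitter = (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener(new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                    onLowMemory.run(); // e.g. set the JMX flag, suspend the service
                }
            }
        }, null, null);
    }
}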
There is an article based on the post that Scorpion has already linked to.
The technique is again based on using MemoryPoolMXBean and subscribing to the "memory threshold exceeded" event, but it's slightly different from what was described in the original post.
The author states that when you subscribe to the plain "memory threshold exceeded" event, there is a possibility of a "false alarm": memory consumption may be above the threshold even though a garbage collection is about to run and will free a lot of that memory afterwards. In fact, that situation is quite common in real-world applications.
Fortunately, there is another threshold, "collection usage threshold", and a corresponding event, which is fired based on memory consumption right after garbage collection. When you receive that event, you can be much more confident you're running out of memory.
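A minimal sketch of that variant, assuming the same listener wiring as in the earlier example; only the armed threshold and the matched notification type change:

// Arm the collection usage threshold instead, so the event fires only if
// usage is still above 80% right after a garbage collection.
for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
    if (pool.getType() == MemoryType.HEAP && pool.isCollectionUsageThresholdSupported()) {
        long max = pool.getUsage().getMax();
        if (max > 0) {
            pool.setCollectionUsageThreshold((long) (max * 0.8));
        }
    }
}
// ...and in the listener, match MemoryNotificationInfo.MEMORY_COLLECTION_THRESHOLD_EXCEEDED.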
We have -XX:+HeapDumpOnOutOfMemoryError enabled, but I don't see this as a solution for this problem as it was designed for something else - and it provides no Java callback when this happens.
This flag should be all that you need. Set the output directory of the resulting heap dump file (via -XX:HeapDumpPath) to some known location that you check regularly. Having a callback would be of no use to you: if you are out of memory, you can't guarantee that the callback code has enough memory to execute! All you can do is collect the data and use an external program to analyze why you ran out of memory. Any attempt at recovering in-process can create bigger problems.
Bytecode instrumentation is possible, but hard. HPjmeter's monitoring tool has the ability to predict future OOMs (with caveats), but only on HP-UX/Itanium-based systems. You could dedicate a daemon thread to calculating used memory in-process and trigger an alert when a limit is exceeded, but really you're not solving the problem.
You can catch any and all uncaught exceptions with the static Thread.setDefaultUncaughtExceptionHandler. Of course, it doesn't help if someone is catching all Throwables. (I don't think anything will, though with an OOME I'd suspect you'd get a cascading effect until something outside the offending try block blew up.) Hopefully the thread would have released enough memory for the exception handler to work; OOM errors do tend to multiply as you try to deal with them.
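As a minimal sketch of that handler (the OOM_OCCURRED flag is a hypothetical field you would expose through JMX):

import java.util.concurrent.atomic.AtomicBoolean;

public class OomFlag {
    static final AtomicBoolean OOM_OCCURRED = new AtomicBoolean(false);

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            public void uncaughtException(Thread t, Throwable e) {
                if (e instanceof OutOfMemoryError) {
                    OOM_OCCURRED.set(true); // avoid allocating here; the heap may be exhausted
                }
            }
        });
    }
}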
Related
I have a misbehaving application that seems to leak. After a brief profiler investigation, most memory (80%) is held by java.lang.ref.Finalizer instances. I suspect that finalizers fail to run.
A common cause of this seems to be exceptions thrown from the finalizer. However, the javadoc for the finalize method of the Object class (see here for instance) seems to contradict itself: it states
If an uncaught exception is thrown by the finalize method, the exception is ignored and finalization of that object terminates.
but later, it also states that
Any exception thrown by the finalize method causes the finalization of this object to be halted, but is otherwise ignored.
What should I believe (i.e., is finalization halted or not?), and do you have any tips on how to investigate such apparent leaks?
Thanks
Both quotes say:
An exception will cause finalization of this object to be halted/terminated.
Both quotes also say:
The uncaught exception is ignored (i.e. not logged or handled by the VM in any way)
So that answers the first half of your question. I don't know enough about Finalizers to give you advice on tracking down your memory leak though.
EDIT: I found this page which might be of use. It has advice such as setting fields to null manually in finalizers to allow the GC to reclaim them.
EDIT2: Some more interesting links and quotes:
From Anatomy of a Java Finalizer
Finalizer threads are not given maximum priorities on systems. If a "Finalizer" thread cannot keep up with the rate at which higher priority threads cause finalizable objects to be queued, the finalizer queue will keep growing and cause the Java heap to fill up. Eventually the Java heap will get exhausted and a java.lang.OutOfMemoryError will be thrown.
and also
it's not guaranteed that any objects that have a finalize() method are garbage collected.
EDIT3: Upon reading more of the Anatomy link, it appears that throwing exceptions in the Finalizer thread really slows it down, almost as much as calling Thread.yield(). You appear to be right that the Finalizer thread will eventually flag the object as able to be GC'd even if an exception is thrown. However, since the slowdown is significant it is possible that in your case the Finalizer thread is not keeping up with the object-creation-and-falling-out-of-scope rate.
My first step would be to establish whether this is a genuine memory leak or not.
The points raised in the previous answers all relate to the speed at which objects are collected, not the question of whether your objects are collected at all. Only the latter is a genuine memory leak.
We had a similar predicament on my project, and ran the application in "slow motion" mode to figure out if we had a real leak. We were able to do this by slowing down the stream of input data.
If the problem disappears when you run in "slow motion" mode, then the problem is probably one of the ones suggested in the previous answers, i.e. the Finalizer thread can't process the finalizer queue fast enough.
If that is the problem, it sounds like you might need to do some non-trivial refactoring as described in the page Bringer128 linked to, e.g.
Now let's look at how to write classes that require postmortem cleanup so that their users do not encounter the problems previously outlined. The best way to do so is to split such classes into two -- one to hold the data that need postmortem cleanup, the other to hold everything else -- and define a finalizer only on the former
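As a rough sketch of that split, under the assumption that only a small holder object actually needs postmortem cleanup (all names here are illustrative):

import java.io.Closeable;
import java.io.IOException;

public class Resource {
    // Only this small holder is finalizable, so the finalizer never pins
    // the large outer object or its buffers.
    private static final class Cleaner {
        private final Closeable io; // the state that needs postmortem cleanup
        Cleaner(Closeable io) { this.io = io; }
        @Override protected void finalize() throws Throwable {
            try {
                io.close();
            } catch (IOException ignored) {
            } finally {
                super.finalize();
            }
        }
    }

    private final Cleaner cleaner;
    private final byte[] bigBuffer = new byte[1 << 20]; // reclaimable without finalization

    public Resource(Closeable io) {
        this.cleaner = new Cleaner(io);
    }
}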
Item 7 of Effective Java, Second Edition, is "Avoid finalizers". I strongly recommend you read it. Here is an extract that may help you:
"Explicit termination methods are typically used in combination with try-finally construct to ensure termination"
I had the same issue. In our case, it was because an object called wait(0) in its finalize() and never got notified, which blocked java.lang.ref.Finalizer$FinalizerThread. More references:
The Secret Life Of The Finalizer
java/lang/ref/Finalizer.java
I once saw a similar problem: the finalizer thread could not keep up with the rate at which finalizable objects were generated.
My solution was to build a closed control loop, using MemoryMXBean.getObjectPendingFinalizationCount() and a PD (proportional and differential) control algorithm to govern the speed at which we generate finalizable objects. Since we had a single entry point for creating them, we just slept for the amount of time the PD algorithm produced. It worked well, though you need to tune the parameters of the PD algorithm.
Hope it helps.
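A minimal sketch of that throttle; the gains and target backlog are illustrative and would need tuning for your workload:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FinalizerBackpressure {
    private static final MemoryMXBean MEM = ManagementFactory.getMemoryMXBean();
    private static final double KP = 0.5, KD = 0.1; // proportional/differential gains
    private static final int TARGET_BACKLOG = 1000; // acceptable queue of pending finalizers
    private static int lastError = 0;

    // Call from the single entry point that creates finalizable objects.
    static synchronized void throttle() throws InterruptedException {
        int backlog = MEM.getObjectPendingFinalizationCount();
        int error = backlog - TARGET_BACKLOG;
        long sleepMs = (long) Math.max(0, KP * error + KD * (error - lastError));
        lastError = error;
        if (sleepMs > 0) {
            Thread.sleep(sleepMs); // slow down producers until the queue drains
        }
    }
}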
As we all know, there are multiple causes of OutOfMemoryError (see the first answer). Why is there only one exception covering all these cases instead of multiple fine-grained ones inheriting from OutOfMemoryError?
I'd expect it's because you really can't do anything else when that happens: it almost doesn't matter WHY you ran out, since you're screwed regardless. Perhaps the additional info would be nice, but...
I know tomcat tries to do this "Out Of Memory Parachute" thing, where they hold onto a chunk of memory and try and release it, but I'm not sure how well it works.
The garbage collection process is deliberately very vaguely described to allow the greatest possible freedom for the JVM-implementors.
Hence the classes you mention are not provided in the API, but only in the implementation.
If you relied on them, your program would crash if running on a JVM without these vendor-specific sub-classes, so you don't do that.
You only need to subclass an exception if applications need to be able to catch and deal with the different cases differently. But you shouldn't be catching and attempting to recover from these cases at all ... so the need should not arise.
... but yeah I would still like to have a more descriptive reason for dying on me.
The exception message tells you which of the OOME sub-cases have occurred. If you are complaining that the messages are too brief, it is not the role of Java exception messages to give a complete explanation of the problem they are reporting. That's what the javadocs and other documentation is for.
@Thorbjørn presents an equally compelling argument. Basically, the different subcases are all implementation-specific. Making them part of the standard API risks constraining JVM implementations to do things in suboptimal ways to satisfy the API requirements. And this approach risks creating unnecessary application portability barriers when new subclasses are created for new implementation-specific subcases.
(For instance the hypothetical UnableToCreateNativeThreadError 1) assumes that the thread creation failed because of memory shortage, and 2) that the memory shortage is qualitatively different from a normal out of memory. 2) is true for current Oracle JVMs, but not for all JVMs. 1) is possibly not even true for current Oracle JVMs. Thread creation could fail because of an OS-imposed limit on the number of native threads.)
If you are interested in why it is a bad idea to try to recover from OOME's, see these Questions:
Catching java.lang.OutOfMemoryError?
Can the JVM recover from an OutOfMemoryError without a restart
Is it possible to catch out of memory exception in java? (my answer).
IMO there is no definite answer to this question, and it all boils down to the design decisions made at the time. This question is very similar to something like "why isn't the Date class immutable" or "why does Properties extend Hashtable". As pointed out by another poster, subclasses really wouldn't matter, since you are screwed anyway. Plus, the error messages are descriptive enough to start troubleshooting with.
Mostly because computing something smart would require allocating memory at some point, so you have to throw the OutOfMemoryError without doing any more computation.
This is not a big deal anyway, because your program is already screwed up. At most, what you can do is return an error to the system with System.exit(ERROR_LEVEL); you can't even log, because that would require allocating memory or using memory that is possibly corrupted.
This is because all four are fatal errors that are impossible to recover from (except perhaps heap-space exhaustion, but even then you would remain near the edge of the failure point).
I'm profiling a program using sampling profiling in YourKit and JProfiler, and also "manually" (I launch it and press Ctrl-Break several times to get thread dumps).
All three methods give me extremely strange results: tens of percent of time spent in a 3-line method that does not do any allocation or synchronization and has no loops, etc. Moreover, after I made this method into a NOP and even removed its invocation completely, the observable program performance didn't change at all (although the program acquired a negligible memory leak, since it was a method for freeing a cheap resource).
I'm thinking that this might be because of the constraints that JVM puts on the moments at which a thread's stacktrace may be taken, and it somehow turns out that in my program it is exactly the moments where this method is invoked, although there is absolutely nothing special about it or the context in which it is invoked.
What can be the explanation for this phenomenon?
What are the aforementioned constraints?
What further measurements can I take to clarify the situation?
To my mind, these results only show that this method gets called a huge number of times. Since its code is quite small, and it may be called as a leaf of the invocation tree, its real impact is probably negligible even though it dominates your profiling results. I have seen that kind of weird result many times.
Some 3rd-party libraries cause the heap dumps to go completely haywire due to unexpected usage patterns; for example, if cglib is used, it will mask the actual cause of the issue and instead show a lot of Proxy objects (if I remember correctly) filling up the VM.
So in short, code generation and reflection may cause the stats to go wrong.
When you did the Ctrl-Break several times and got thread dumps, I'm curious what you saw. The call stacks are, to my mind, the most useful information. If your 3-liner is on the stack a large % of time, then you can see why by looking at where it is called from, and where that is called from, etc. Those call sites are just as responsible for the time being spent as the 3-liner is.
If the stack traces seem to make no sense, it may be because they are being delayed until after something completes. If that is so, look up the stack to see what has just completed, because the break could have occurred within that.
I am running a JBoss server with my own classes deployed. I then perform some operations on my application, and I would like to know the memory used by my application before and after performing those operations. Please support me in this regard.
By using MemoryMXBean (retrieved by calling ManagementFactory.getMemoryMXBean()), as well as Runtime.getRuntime()'s methods: totalMemory(), maxMemory() and freeMemory().
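A rough sketch of the before/after measurement (doOperation() stands in for your own code):

Runtime rt = Runtime.getRuntime();
long before = rt.totalMemory() - rt.freeMemory();
doOperation(); // hypothetical: the operation you want to measure
long after = rt.totalMemory() - rt.freeMemory();
System.out.println("approx. bytes used by operation: " + (after - before));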
Note that this is not an exact art: while creating a new object, other temporary ones may be allocated, which will not give you an accurate measurement. And, as we know, Java garbage collection is not guaranteed to run on request, so you can't reliably force a collection to eliminate dead objects before measuring.
If you research, you'll see that most code that attempts to do these measurements will have loops of Runtime.gc() calls and sleeps etc to try and ensure that the measurement is accurate. And this will only work on certain JVM implementations...
On an app server/deployed application, you will likely only get gross measurements/usage changes as the heap is allocated and the gc fires, but it should be enough. [I'm presuming that you wouldn't implement gc()'s and sleeps in production code :)]
Get the free memory before doing the operation with Runtime.getRuntime().freeMemory(), then again after finishing the operation, and the difference is the memory used by your operation.
You may find the results you get are inconclusive. The GC will clean up used memory at random points in the background, so you might find that if you run the same operations many times you will get different results. You can even appear to have more memory free after performing an operation.
I have to serialize around a million items and I get the following exception when I run my code:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Unknown Source)
at java.lang.String.<init>(Unknown Source)
at java.io.BufferedReader.readLine(Unknown Source)
at java.io.BufferedReader.readLine(Unknown Source)
at org.girs.TopicParser.dump(TopicParser.java:23)
at org.girs.TopicParser.main(TopicParser.java:59)
How do I handle this?
I know that the official Java answer is "Oh noes! Out of memories! I give in!". This is all rather frustrating for anyone who has programmed in environments where running out of memory is not allowed to be a fatal error (for example, writing an OS, or writing apps for non-protected OSes).
The willingness to surrender is necessary - you can't control every aspect of Java memory allocation, so you can't guarantee that your program will succeed in low-memory conditions. But that doesn't mean you must go down without a fight.
Before fighting, though, you could look for ways to avoid the need. Perhaps you can avoid Java serialization, and instead define your own data format which does not require significant memory allocation to create. Serialization allocates a lot of memory because it keeps a record of objects it has seen before, so that if they occur again it can reference them by number instead of outputting them again (which could lead to an infinite loop). But that's because it needs to be general-purpose: depending on your data structure, you might be able to define some text/binary/XML/whatever representation which can just be written to a stream with very little need to store extra state. Or you might be able to arrange that any extra state you need is stored in the objects all along, not created at serialization time.
If your application does one operation which uses a lot of memory, but mostly uses much less, and especially if that operation is user-initiated, and if you can't find a way to use less memory or make more memory available, then it might be worth catching OutOfMemory. You could recover by reporting to the user that the problem is too big, and inviting them to trim it down and try again. If they've just spent an hour setting up their problem, you do not want to just bail out of the program and lose everything - you want to give them a chance to do something about it.

As long as the Error is caught well up the stack, the excess memory will be unreferenced by the time the Error is caught, giving the VM at least a chance to recover. Make sure you catch the Error below your regular event-handling code (catching OutOfMemory in regular event handling can result in busy loops, because you try to display a dialog to the user, you're still out of memory, and you catch another Error). Catch it only around the operation which you've identified as the memory hog, so that OutOfMemoryErrors you can't handle, that come from code other than the memory hog, are not caught.
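As a minimal sketch of that placement (runBigOperation, problem and reportProblemTooBig are hypothetical names):

try {
    runBigOperation(problem); // the identified memory hog, and only that
} catch (OutOfMemoryError e) {
    // By the time we get here, the operation's references are out of scope,
    // so the VM has a chance to reclaim memory before we touch the UI.
    reportProblemTooBig(); // ask the user to trim the problem and retry
}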
Even in a non-interactive app, it might make sense to abandon the failed operation, but for the program itself to carry on running, processing further data. This is why web servers manage multiple processes such that if one page request fails for lack of memory, the server itself doesn't fall over. As I said at the top, single-process Java apps can't make any such guarantees, but they can at least be made a bit more robust than the default.
That said, your particular example (serialization) may not be a good candidate for this approach. In particular, the first thing the user might want to do on being told there's a problem is save their work: but if it's serialization which is failing, it may be impossible to save. That's not what you want, so you might have to do some experiments and/or calculations, and manually restrict how many million items your program permits (based on how much memory it is running with), before the point where it tries to serialize.
This is more robust than trying to catch the Error and continue, but unfortunately it's difficult to work out the exact bound, so you would probably have to err on the side of caution.
If the error is occurring during deserialization then you're on much firmer ground: failing to load a file should not be a fatal error in an application if you can possibly avoid it. Catching the Error is more likely to be appropriate.
Whatever you do to handle lack of resources (including letting the Error take down the app), if you care about the consequences then it's really important to test it thoroughly. The difficulty is that you never know exactly what point in your code the problem will occur, so there is usually a very large number of program states which need to be tested.
Ideally, restructure your code to use less memory. For example, perhaps you could stream the output instead of holding the whole thing in memory.
Alternatively, just give the JVM more memory with the -Xmx option.
You should not handle it in code. An OutOfMemoryError should not be caught and handled. Instead, start your JVM with a bigger heap space:
java -Xmx512M
should do the trick.
See here for more details
Everyone else has already covered how to give Java more memory, but because "handle" could conceivably mean catch, I'm going to quote what Sun has to say about Errors:
An Error is a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch. Most such errors are abnormal conditions.
(emphasis mine)
You get an OutOfMemoryError because your program requires more memory than the JVM has available. There is nothing you can specifically do at runtime to help this.
As noted by krosenvold, your application may be making sensible demands for memory but it just so happens that the JVM is not being started with enough (e.g. your app will have a 280MB peak memory footprint but the JVM only starts with 256MB). In this case, increasing the size allocated will solve this.
If you feel that you are supplying adequate memory at start up, then it is possible that your application is either using too much memory transiently, or has a memory leak. In the situation you have posted, it sounds like you are holding references to all of the million items in memory at once, even though potentially you are dealing with them sequentially.
Check what your references are like for items that are "done" - you should drop these references as soon as possible so that the objects can be garbage collected. If you're adding a million items to a collection and then iterating over that collection, for example, you'll need enough memory to store all of those object instances. See if you can instead take one object at a time, serialise it and then discard the reference.
If you're having trouble working this out, posting a pseudo-code snippet would help.
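A minimal sketch of that one-at-a-time approach (the iterator and output path are placeholders); note the reset() call, which clears ObjectOutputStream's back-reference table so written items can be collected:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.Iterator;

public class StreamingSerializer {
    static void dump(Iterator<?> items, String path) throws IOException {
        ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path));
        try {
            while (items.hasNext()) {
                out.writeObject(items.next());
                out.reset(); // forget back-references so each item can be GC'd
            }
        } finally {
            out.close();
        }
    }
}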
In addition to some of the tips that have already been given to you: review the memory leaks, and also start the JVM with more memory (-Xmx512M).
It looks like you get the OutOfMemoryError because your TopicParser is reading a line that is probably pretty big (and that is what you should avoid). You can use a FileReader (or, if the encoding is an issue, an InputStreamReader wrapping a FileInputStream) and its read(char[]) method with a reasonably sized char[] array as a buffer.
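For instance, a sketch of that chunked reading (the file name and the processing step are placeholders):

import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;

public class ChunkedReader {
    public static void main(String[] args) throws IOException {
        Reader in = new FileReader("topics.txt"); // hypothetical input file
        try {
            char[] buf = new char[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                process(buf, n); // handle each chunk as it arrives
            }
        } finally {
            in.close();
        }
    }

    private static void process(char[] buf, int len) {
        // placeholder: parse or serialize the chunk here
    }
}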
Finally, to investigate why the OutOfMemoryError occurs, you can use the
-XX:+HeapDumpOnOutOfMemoryError
JVM flag to get heap dump information written to disk.
Good luck!
Interesting - you are getting an out of memory on a readline. At a guess, you are reading in a big file without linebreaks.
Instead of using readline to get the stuff out of the file as one single big long string, write stuff that understands the input a bit better, and handles it in chunks.
If you simply must have the whole file in a single big long string ... well, get better at coding. In general, trying to handle multimegabyte data by stuffing it all into a single array of bytes (or whatever) is a good way to lose.
Go have a look at CharSequence.
Use the transient keyword to mark fields in the serialized classes which can be generated from existing data.
Implement writeObject and readObject to help with reconstructing transient data.
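A small sketch of that pattern (the field names are illustrative): the derived field is marked transient so it is skipped during serialization, and readObject() rebuilds it on the way back in.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;

public class Item implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String raw;
    private transient String normalized; // derivable from raw, so not serialized

    public Item(String raw) {
        this.raw = raw;
        this.normalized = raw.trim().toLowerCase();
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        normalized = raw.trim().toLowerCase(); // reconstruct the transient data
    }
}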
After you follow the suggestion of increasing heap space (via -Xmx), be sure to use either JConsole or JVisualVM to profile your application's memory usage. Make sure that memory usage does not grow continuously; if it does, you'll still get the OutOfMemoryError, it'll just take longer.
You can increase the size of the memory Java uses with the -Xmx option, for instance:
java -Xmx512M -jar myapp.jar
Better is to reduce the memory footprint of your app. You serialize millions of items; do you need to keep all of them in memory, or can you release some of them after using them? Try to reduce the number of live objects.
Start java with a larger value for option -Xmx, for instance -Xmx512m
There's no real way of handling it nicely. Once it happens, you are in unknown territory. You can tell by the name - OutOfMemoryError. And it is described as:
Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.
Usually OutOfMemoryError indicates that there is something seriously wrong with the system/approach (and it's hard to point a particular operation that triggered it).
Quite often it has to do with ordinary exhaustion of heap space. Using -verbose:gc and the previously mentioned -XX:+HeapDumpOnOutOfMemoryError should help.
You can find a nice and concise summary of the problem at javaperformancetuning
Before taking any dangerous, time-consuming or strategic actions, you should establish exactly what in your program is using up so much of the memory. You may think you know the answer, but until you have evidence in front of you, you don't. There's the possibility that memory is being used by something you weren't expecting.
Use a profiler. It doesn't matter which one; there are plenty of them. First find out how much memory is used by each object. Second, step through iterations of your serializer, compare memory snapshots and see what objects or data are created.
The answer will most likely be to stream the output rather than building it in memory. But get evidence first.
I have discovered an alternative. With respect to all the other views that we should not try to catch an OutOfMemoryError, this is what I've learned recently:
try {
    // ... the operation that may exhaust memory ...
} catch (Throwable ex) {
    // Log everything except ThreadDeath, including OutOfMemoryError.
    if (!(ex instanceof ThreadDeath)) {
        ex.printStackTrace(System.err);
    }
}
for your reference: OutOfMemoryError
Any feedback is welcome.