Java Heap size issue [duplicate] - java

Documentation for java.lang.Error says:
An Error is a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch
But as java.lang.Error is a subclass of java.lang.Throwable, I can catch this type of Throwable.
I understand why it's not a good idea to catch this sort of exception. As far as I understand, if we decide to catch it, the catch handler should not allocate any memory by itself. Otherwise OutOfMemoryError will be thrown again.
So, my question is:
Are there any real world scenarios when catching java.lang.OutOfMemoryError might be a good idea?
If we decide to catch java.lang.OutOfMemoryError, how can we make sure the catch handler doesn't allocate any memory by itself (any tools or best practices)?

There are a number of scenarios where you may wish to catch an OutOfMemoryError, and in my experience (on Windows and Solaris JVMs) it is only very infrequently the death knell of a JVM.
There is only one good reason to catch an OutOfMemoryError, and that is to close down gracefully, cleanly releasing resources and logging the reason for the failure as best you can (if it is still possible to do so).
In general, an OutOfMemoryError occurs due to a memory allocation that cannot be satisfied with the remaining resources of the heap.
When the Error is thrown, the heap contains the same allocated objects as before the unsuccessful allocation, and now is the time to drop references to run-time objects to free memory that may be required for cleanup. In these cases it may even be possible to continue, but that would definitely be a bad idea, as you can never be 100% certain that the JVM is in a reparable state.
Demonstration that OutOfMemoryError does not mean that the JVM is out of memory in the catch block:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

private static final int MEGABYTE = 1024 * 1024;

public static void runOutOfMemory() {
    MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
    for (int i = 1; i <= 100; i++) {
        try {
            byte[] bytes = new byte[MEGABYTE * 500];
        } catch (Exception e) {
            e.printStackTrace();
        } catch (OutOfMemoryError e) {
            MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
            long maxMemory = heapUsage.getMax() / MEGABYTE;
            long usedMemory = heapUsage.getUsed() / MEGABYTE;
            System.out.println(i + " : Memory Use :" + usedMemory + "M/" + maxMemory + "M");
        }
    }
}
Output of this code:
1 : Memory Use :0M/247M
..
..
..
98 : Memory Use :0M/247M
99 : Memory Use :0M/247M
100 : Memory Use :0M/247M
If running something critical, I usually catch the Error, log it to stderr, then log it using my logging framework of choice, then proceed to release resources and close down in a clean fashion. What's the worst that can happen? The JVM is dying (or already dead) anyway, and by catching the Error there is at least a chance of cleanup.
The caveat is that you have to target the catching of these types of errors only in places where cleanup is possible. Don't blanket catch(Throwable t) {} everywhere or nonsense like that.
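A minimal sketch of this close-down-gracefully pattern (the task and the simulated OOME are illustrative; a real task would exhaust the heap rather than throw deliberately):

```java
public class GracefulShutdown {
    // Hypothetical task that dies of heap exhaustion; simulated here
    // so the sketch is deterministic.
    static void runCriticalTask() {
        throw new OutOfMemoryError("simulated heap exhaustion");
    }

    // Returns false if the task died of an OOME, after logging and
    // releasing resources as best it could.
    static boolean runAndCleanUp() {
        try {
            runCriticalTask();
            return true;
        } catch (OutOfMemoryError e) {
            // Keep the handler allocation-light: System.err and a
            // constant message avoid building new strings.
            System.err.println("Fatal: out of memory, shutting down");
            // ... release file handles, sockets, locks here ...
            return false;
        }
    }

    public static void main(String[] args) {
        if (!runAndCleanUp()) {
            System.out.println("clean shutdown after OOME");
        }
    }
}
```

In a real application the `catch` block would typically end with `System.exit(1)` once cleanup and logging are done.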

You can recover from it:
package com.stackoverflow.q2679330;

public class Test {
    public static void main(String... args) {
        int size = Integer.MAX_VALUE;
        int factor = 10;
        while (true) {
            try {
                System.out.println("Trying to allocate " + size + " bytes");
                byte[] bytes = new byte[size];
                System.out.println("Succeed!");
                break;
            } catch (OutOfMemoryError e) {
                System.out.println("OOME .. Trying again with 10x less");
                size /= factor;
            }
        }
    }
}
But does it make sense? What else would you want to do? Why would you allocate that much memory in the first place? Would less memory also be OK? Why don't you make use of less anyway? Or, if that's not possible, why not just give the JVM more memory from the beginning?
Back to your questions:
1: Are there any real-world scenarios when catching java.lang.OutOfMemoryError may be a good idea?
None comes to mind.
2: If we catch java.lang.OutOfMemoryError, how can we make sure the catch handler doesn't allocate any memory by itself (any tools or best practices)?
It depends on what caused the OOME. If the allocation happened outside the try block and the heap filled up step by step, then your chances are slim. You may want to reserve some memory space beforehand:
private static byte[] reserve = new byte[1024 * 1024]; // Reserves 1MB.
and then set it to zero during OOME:
} catch (OutOfMemoryError e) {
    reserve = new byte[0];
    // Ha! 1MB free!
}
Of course none of this really makes sense ;) Just give the JVM as much memory as your application requires. Run a profiler if necessary.

In general, it is a bad idea to try to catch and recover from an OOM.
An OOME could also have been thrown on other threads, including threads that your application doesn't even know about. Any such threads will now be dead, and anything that was waiting on a notify could be stuck for ever. In short, your app could be terminally broken.
Even if you do successfully recover, your JVM may still be suffering from heap starvation and your application will perform abysmally as a result.
The best thing to do with an OOME is to let the JVM die.
(This assumes that the JVM does die. For instance, OOMs on a Tomcat servlet thread do not kill the JVM, and this leads to Tomcat going into a catatonic state where it won't respond to any requests ... not even requests to restart.)
EDIT
I am not saying that it is a bad idea to catch OOM at all. The problems arise when you then attempt to recover from the OOME, either deliberately or by oversight. Whenever you catch an OOM (directly, or as a subtype of Error or Throwable) you should either rethrow it, or arrange that the application / JVM exits.
Aside: This suggests that for maximum robustness in the face of OOMs an application should use Thread.setDefaultUncaughtExceptionHandler() to set a handler that will cause the application to exit in the event of an OOME, no matter what thread the OOME is thrown on. I'd be interested in opinions on this ...
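A sketch of that Thread.setDefaultUncaughtExceptionHandler() idea (the worker thread and the simulated OOME are illustrative; in a real application the handler would call Runtime.getRuntime().halt(1) rather than just record a flag):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ExitOnOome {
    static final AtomicBoolean sawOome = new AtomicBoolean(false);

    public static void install() {
        // Any thread that dies of an OOME is reported centrally,
        // instead of leaving the process limping along half-broken.
        Thread.setDefaultUncaughtExceptionHandler((t, e) -> {
            if (e instanceof OutOfMemoryError) {
                System.err.println("OOME on thread " + t.getName());
                sawOome.set(true);
                // In a real application: Runtime.getRuntime().halt(1);
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        install();
        Thread worker = new Thread(() -> {
            throw new OutOfMemoryError("simulated"); // stand-in for real exhaustion
        }, "worker");
        worker.start();
        worker.join();
        System.out.println("saw OOME: " + sawOome.get());
    }
}
```

Note the handler itself concatenates strings, i.e. allocates; a hardened version would pre-build its message.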
The only other scenario is when you know for sure that the OOM has not resulted in any collateral damage; i.e. you know:
what specifically caused the OOME,
what the application was doing at the time, and that it is OK to simply discard that computation, and
that a (roughly) simultaneous OOME cannot have occurred on another thread.
There are applications where it is possible to know these things, but for most applications you cannot know for sure that continuation after an OOME is safe. Even if it empirically "works" when you try it.
(The problem is that a formal proof is required to show that the consequences of "anticipated" OOMEs are safe, and that "unanticipated" OOMEs cannot occur within the scope of a try/catch of an OOME.)

Yes, there are real-world scenarios. Here's mine: I need to process data sets of very many items on a cluster with limited memory per node. A given JVM instance goes through many items one after the other, but some of the items are too big to process on the cluster: I can catch the OutOfMemoryError and take note of which items are too big. Later, I can re-run just the large items on a computer with more RAM.
(Because it's a single multi-gigabyte allocation of an array that fails, the JVM is still fine after catching the error and there's enough memory to process the other items.)
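A minimal sketch of this per-item pattern (class and method names are made up; the size check simulates the failing multi-gigabyte allocation so the example stays deterministic):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchProcessor {
    // Hypothetical per-item work: a single big allocation stands in
    // for the real processing. The explicit size check simulates what
    // `new byte[...]` would do for real on a small heap.
    static void processItem(long itemBytes) {
        if (itemBytes > Runtime.getRuntime().maxMemory()) {
            throw new OutOfMemoryError("item too large for this heap");
        }
        byte[] work = new byte[(int) itemBytes];
        // ... real processing of `work` would go here ...
    }

    // Processes every item it can; returns the indices of items whose
    // single big allocation failed, to re-run on a larger machine.
    static List<Integer> findOversized(long[] itemSizes) {
        List<Integer> tooBig = new ArrayList<>();
        for (int i = 0; i < itemSizes.length; i++) {
            try {
                processItem(itemSizes[i]);
            } catch (OutOfMemoryError e) {
                // The failed allocation left the heap intact, so it is
                // safe to note the item and continue with the next one.
                tooBig.add(i);
            }
        }
        return tooBig;
    }

    public static void main(String[] args) {
        long[] sizes = {1_000, 10_000, Long.MAX_VALUE, 2_000};
        System.out.println("re-run with more RAM: " + findOversized(sizes));
    }
}
```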

There are definitely scenarios where catching an OOME makes sense. IDEA catches them and pops up a dialog to let you change the startup memory settings (and then exits when you are done). An application server might catch and report them. The key to doing this is to do it at a high level on the dispatch so that you have a reasonable chance of having a bunch of resources freed up at the point where you are catching the exception.
Besides the IDEA scenario above, in general the catching should be of Throwable, not just OOM specifically, and should be done in a context where at least the thread will be terminated shortly.
Of course most times memory is starved and the situation is not recoverable, but there are ways that it makes sense.

I came across this question because I was wondering whether it is a good idea to catch OutOfMemoryError in my case. I'm answering here partially to show yet another example when catching this error can make sense to someone (i.e. me) and partially to find out whether it is a good idea in my case indeed (with me being an uber junior developer I can never be too sure about any single line of code I write).
Anyway, I'm working on an Android application which can be run on different devices with different memory sizes. The dangerous part is decoding a bitmap from a file and displaying it in an ImageView instance. I don't want to restrict the more powerful devices in terms of the size of the decoded bitmap, nor can I be sure that the app won't be run on some ancient, very-low-memory device I've never come across. Hence I do this:
BitmapFactory.Options bitmapOptions = new BitmapFactory.Options();
bitmapOptions.inSampleSize = 1;
boolean imageSet = false;
while (!imageSet) {
    try {
        image = BitmapFactory.decodeFile(filePath, bitmapOptions);
        imageView.setImageBitmap(image);
        imageSet = true;
    } catch (OutOfMemoryError e) {
        bitmapOptions.inSampleSize *= 2;
    }
}
This way I manage to provide for more and less powerful devices according to their, or rather their users' needs and expectations.

I have an application that needs to recover from OutOfMemoryError failures, and in single-threaded programs it always works, but sometimes doesn't in multi-threaded programs. The application is an automated Java testing tool that executes generated test sequences to the maximum possible depth on test classes. Now, the UI must be stable, but the test engine can run out of memory while growing the tree of test cases. I handle this by the following kind of code idiom in the test engine:
boolean isOutOfMemory = false; // flag used for reporting
try {
    SomeType largeVar;
    // Main loop that allocates more and more to largeVar;
    // may terminate OK, or raise OutOfMemoryError
} catch (OutOfMemoryError ex) {
    // largeVar is now out of scope, so it is garbage
    System.gc(); // clean up largeVar data
    isOutOfMemory = true; // flag available for use
}
// program tests flag to report recovery
This works every time in single-threaded applications. But I recently put my test engine into a separate worker-thread from the UI. Now, the out of memory may occur arbitrarily in either thread, and it is not clear to me how to catch it.
For example, I had the OOME occur while the frames of an animated GIF in my UI were being cycled by a proprietary thread that is created behind-the-scenes by a Swing class that is out of my control. I had thought that I had allocated all the resources needed in advance, but clearly the animator is allocating memory every time it fetches the next image. If anyone has an idea about how to handle OOMEs raised in any thread, I would love to hear.

Yes, the real question is "what are you going to do in the exception handler?" For almost anything useful, you'll allocate more memory. If you'd like to do some diagnostic work when an OutOfMemoryError occurs, you can use the -XX:OnOutOfMemoryError=<cmd> hook supplied by the HotSpot VM. It will execute your command(s) when an OutOfMemoryError occurs, and you can do something useful outside of Java's heap. You really want to keep the application from running out of memory in the first place, so figuring out why it happens is the first step. Then you can increase the heap size or the MaxPermSize as appropriate. Here are some other useful HotSpot hooks:
-XX:+PrintCommandLineFlags
-XX:+PrintConcurrentLocks
-XX:+PrintClassHistogram
See the full list here

An OOME can be caught, but it is generally going to be useless, depending on whether the JVM is able to garbage-collect some objects when reaching the catch block, and on how much heap memory is left by that time.
Example: in my JVM, this program runs to completion:
import java.util.LinkedList;
import java.util.List;

public class OOMErrorTest {
    public static void main(String[] args) {
        List<Long> ll = new LinkedList<Long>();
        try {
            long l = 0;
            while (true) {
                ll.add(new Long(l++));
            }
        } catch (OutOfMemoryError oome) {
            System.out.println("Error caught!!");
        }
        System.out.println("Test finished");
    }
}
However, just adding a single line in the catch block will show you what I'm talking about:
import java.util.LinkedList;
import java.util.List;

public class OOMErrorTest {
    public static void main(String[] args) {
        List<Long> ll = new LinkedList<Long>();
        try {
            long l = 0;
            while (true) {
                ll.add(new Long(l++));
            }
        } catch (OutOfMemoryError oome) {
            System.out.println("Error caught!!");
            System.out.println("size:" + ll.size());
        }
        System.out.println("Test finished");
    }
}
The first program runs fine because when reaching the catch block, the JVM detects that the list isn't going to be used anymore (this detection can also be an optimization made at compile time). So when we reach the print statement, the heap memory has been freed almost entirely, so we now have a wide margin of maneuver to continue. This is the best case.
However, if the code is arranged such that the list ll is used after the OOME has been caught, the JVM is unable to collect it. This happens in the second snippet. The OOME, triggered by a new Long creation, is caught, but soon we're creating a new object (a String in the System.out.println line), and the heap is almost full, so a new OOME is thrown. This is the worst-case scenario: we tried to create a new object, we failed, we caught the OOME, yes, but now the first instruction requiring new heap memory (e.g. creating a new object) will throw a new OOME. Think about it: what else can we do at this point with so little memory left? Probably just exit, hence I said it's useless.
Among the reasons the JVM isn't garbage-collecting resources, one is really scary: a resource shared with other threads that are still making use of it. Anyone can see how dangerous catching OOME can be if added to some non-experimental app of any kind.
I'm using a 32-bit Windows x86 JVM (JRE6). The default memory for each Java app is 64MB.

The only reason I can think of why catching OOM errors could be good is that you have some massive data structures you're not using anymore, which you can set to null to free up some memory. But (1) that means you're wasting memory, and you should fix your code rather than just limping along after an OOME, and (2) even if you caught it, what would you do? OOM can happen at any time, potentially leaving everything half done.

For the question 2 I already see the solution I would suggest, by BalusC.
Are there any real-world scenarios when catching java.lang.OutOfMemoryError may be a good idea?
I think I just came across a good example. When an AWT application is dispatching messages, an uncaught OutOfMemoryError is printed on stderr and the processing of the current message stops. But the application keeps running! The user may still issue other commands, unaware of the serious problems happening behind the scenes, especially when he cannot or does not observe the standard error. So catching the OOM error and providing (or at least suggesting) an application restart is something desired.

I just have a scenario where catching an OutOfMemoryError seems to make sense and seems to work.
Scenario: in an Android App, I want to display multiple bitmaps in highest possible resolution, and I want to be able to zoom them fluently.
Because of fluent zooming, I want to have the bitmaps in memory. However, Android has limitations in memory which are device dependent and which are hard to control.
In this situation, there may be OutOfMemoryError while reading the bitmap. Here, it helps if I catch it and then continue with lower resolution.

Depends on how you define "good". We do that in our buggy web application and it does work most of the time (thankfully, OutOfMemory now doesn't happen due to an unrelated fix). However, even if you catch it, it still might have broken some important code: if you have several threads, memory allocation can fail in any of them. So, depending on your application, there is still a 10-90% chance of it being irreversibly broken.
As far as I understand, heavy stack unwinding on the way will invalidate so many references and thus free so much memory you shouldn't care about that.
EDIT: I suggest you try it out. Say, write a program that recursively calls a function that allocates progressively more memory. Catch OutOfMemoryError and see if you can meaningfully continue from that point. According to my experience, you will be able to, though in my case it happened under WebLogic server, so there might have been some black magic involved.
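A sketch of that experiment (the cap simulates heap exhaustion deterministically; a real run would drop the cap and let the JVM throw for real):

```java
public class RecursiveAlloc {
    static final long CAP = 64L * 1024 * 1024; // pretend 64 MB of heap for the demo

    // Recursively allocates progressively larger blocks, as the
    // experiment suggests. The cap keeps the demo bounded.
    static void fill(long allocated, int blockSize) {
        if (allocated + blockSize > CAP) {
            throw new OutOfMemoryError("simulated: cap reached");
        }
        byte[] block = new byte[blockSize];
        fill(allocated + blockSize, blockSize * 2);
    }

    public static void main(String[] args) {
        try {
            fill(0, 1024);
        } catch (OutOfMemoryError e) {
            // Unwinding the stack dropped every frame's `block` reference,
            // so the memory is reclaimable and we can meaningfully continue.
            System.out.println("recovered: " + e.getMessage());
        }
    }
}
```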

You can catch anything under Throwable; generally speaking you should only catch subclasses of Exception, excluding RuntimeException (though a large portion of developers also catch RuntimeException... but that was never the intent of the language designers).
If you were to catch OutOfMemoryError what on earth would you do? The VM is out of memory, basically all you can do is exit. You probably cannot even open a dialog box to tell them you are out of memory since that would take memory :-)
The VM throws an OutOfMemoryError when it is truly out of memory (indeed all Errors should indicate unrecoverable situations) and there should really be nothing you can do to deal with it.
The things to do are find out why you are running out of memory (use a profiler, like the one in NetBeans) and make sure you don't have memory leaks. If you don't have memory leaks then increase the memory that you allocate to the VM.

Related

Correct place to catch out of memory error

I'm experiencing a problem with a producer-consumer setup for a local bot competition (think Scalatron, but with more languages allowed, and using pipes to connect with stdin and stdout). The items are produced fine, and handled correctly by the consumer, however, the consumer's task in this setting is to call other pieces of software that might take up too much memory, hence the out of memory error.
I've got a Python script (i.e. the consumer) continuously calling other pieces of code using subprocess.call. These are all submitted by other people for evaluation, however, sometimes one of these submitted pieces use so much memory, the engine produces an OutOfMemoryError, which causes the entire script to halt.
There are three layers in the used setup:
Consumer (Python)
Game engine (Java)
Players' bots (languages differ)
The consumer calls the game engine using two bots as arguments:
subprocess.call(['setsid', 'sudo', '-nu', 'botrunner', '/opt/bots/sh/run_bots.sh', bot1, bot2]).
Inside the game engine a loop runs pitting the bots against each other, and afterwards all data is saved in a database so players can review their bots. The idea is, should a bot cause an error, to log the error and hand victory to the opponent.
What is the correct place to catch this, though? Should this be done on the "highest" (i.e. consumer) level, or in the game engine itself?
The correct place to catch any Exception or Error in Java is the place where you have a mechanism to handle it and perform some recovery steps. In the case of OutOfMemoryError, you should catch the error ONLY when you are able to close things down gracefully, cleanly releasing resources and logging the reason for the failure, if possible.
OutOfMemoryError occurs due to a memory allocation that cannot be satisfied with the remaining resources of the heap. Whenever OutOfMemoryError is thrown, the heap contains exactly the same allocated objects as before the unsuccessful allocation attempt. This is the time to catch the OutOfMemoryError and attempt to drop references to run-time objects in order to free the memory that may be required for cleanup.
If the JVM is in a reparable state (which you can never determine through the program), it is even possible to recover and continue after the error. But this is generally considered bad design because, as I said, you can never determine that through the program.
If you see the documentation of java.lang.Error, it says
An Error is a subclass of Throwable that indicates serious problems
that a reasonable application should not try to catch.
If you are catching any error on purpose, please remember NOT to blanket catch(Throwable t) {...} everywhere in your code.
More details here.
You can catch and attempt to recover from OutOfMemoryError (OOM) exceptions, BUT IT IS PROBABLY A BAD IDEA ... especially if your aim is for the application to "keep going".
There are a number of reasons for this:
As pointed out, there are better ways to manage memory resources than explicitly freeing things; e.g. using SoftReference and WeakReference for objects that could be freed if memory is short.
If you wait until you actually run out of memory before freeing things, your application is likely to spend more time running the garbage collector. Depending on your JVM version and on your GC tuning parameters, the JVM can end up running the GC more and more frequently as it approaches the point at which it will throw an OOM. The slowdown (in terms of the application doing useful work) can be significant. You probably want to avoid this.
If the root cause of your problem is a memory leak, then the chances are that catching and recovering from the OOM will not reclaim the leaked memory. Your application will keep going for a bit, then OOM again, and again, and again at ever reducing intervals.
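The SoftReference approach mentioned in the first point above can be sketched like this (a minimal cache, class name made up): the GC is allowed to clear soft references under memory pressure before throwing an OOME, so the cache shrinks itself instead of the application having to catch the error.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    public void put(K key, V value) {
        // The value is reachable only softly, so the GC may reclaim
        // it when the heap is nearly full, before any OOME is thrown.
        map.put(key, new SoftReference<>(value));
    }

    // Returns null if the entry was never cached, or was reclaimed;
    // the caller then recomputes or reloads the value.
    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }
}
```

The caller must always be prepared for a null result, treating the cache as an optimization rather than a store of record.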
So my advice is NOT attempt to keep going from an OOM ... unless you know:
where and why the OOM happened,
that there won't have been any "collateral damage", and
that your recovery will release enough memory to continue.
There is probably at least one good time to catch an OutOfMemoryError, when you are specifically allocating something that might be way too big:
public static int[] decode(InputStream in, int len) throws IOException {
    int[] result;
    try {
        result = new int[len];
    } catch (OutOfMemoryError e) {
        throw new IOException("Result too long to read into memory: " + len);
    } catch (NegativeArraySizeException e) {
        throw new IOException("Cannot read negative length: " + len);
    }
    // ... read len ints from in into result ...
    return result;
}

'Catching' OutOfMemoryError completely solves out-of-memory issue?

I was getting OutOfMemoryError messages in LogCat and my app was crashing because the errors were uncaught.
Two things were causing the OutOfMemoryError:
1. Reading a large text file into a string.
2. Sending that string to my TextView.
Simply adding a catch to these two things not only catches the OutOfMemoryError but appears to completely solve the out-of-memory problem.
No more crashing and no more error messages in LogCat. The app just runs perfectly.
How is this possible? What exactly is happening?
With this code, I was getting the error messages & app crashing:
try {
    myString = new Scanner(new File(myFilePath)).useDelimiter("\\A").next();
} catch (FileNotFoundException e) {
    e.printStackTrace();
}
myTextView.setText(myString);
Just by 'catching' the OutOfMemoryError, no more error messages in LogCat and no crashing:
try {
    myString = new Scanner(new File(myFilePath)).useDelimiter("\\A").next();
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (OutOfMemoryError e) {
}

try {
    myTextView.setText(myString);
} catch (OutOfMemoryError e) {
}
I guess your string isn't loaded completely, or even if it is (it may throw the error just after adding the text), what happens depends on the current memory available for your app so catching OutOfMemoryError isn't a viable solution here.
If you really want to load a large string file and display it in an EditText, I recommend you to load only a small part of the file (let's say 50kB), and implement some kind of paging, like a button which loads the next 50kB. You can also load additional lines as the user scrolls through the EditText.
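A sketch of that paging idea (the class name and the 50 kB page size are illustrative): read only one page of the file at a time instead of the whole thing, so even a huge file never needs more than one page of heap.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class ChunkedReader {
    static final int PAGE_SIZE = 50 * 1024; // 50 kB per page, as suggested

    // Reads page `pageIndex` of the file; returns "" past end-of-file.
    static String readPage(String path, int pageIndex) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            long offset = (long) pageIndex * PAGE_SIZE;
            if (offset >= file.length()) {
                return "";
            }
            int toRead = (int) Math.min(PAGE_SIZE, file.length() - offset);
            byte[] buffer = new byte[toRead];
            file.seek(offset);
            file.readFully(buffer);
            return new String(buffer, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readPage(args[0], 0));
    }
}
```

A "load next page" button (or a scroll listener) would then call readPage with an incremented index.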
If you catch the OutOfMemoryError, the garbage collector tries to free up the memory previously used, and thus the application can carry on if it lets the garbage collector do its job (i.e. the application no longer holds a reference to that large string of yours).
However, catching an OutOfMemoryError is far from fool-proof. See Catching java.lang.OutOfMemoryError?.
When you catch the error, the JVM tries to recover from it by running the garbage collector and scrapping the objects that are no longer used.
This might solve the problem in your case. But imagine that the problem appears because of bad coding and memory leaks all over your code. Catching will not solve the problem, because the GC will not collect any objects. The GC will kick in more and more frequently, and the performance of your application will drop until it becomes unusable.
Basically, this error happens when the JVM cannot allocate more memory on the heap for new objects. Catching the error and letting the GC clean up and release memory might be a solution, but you are never absolutely sure that you are in a recoverable state. I would use the catch block to recover from the error, log it, and close the application. If you want to solve the memory problem in this case, do it properly and initialize the JVM with more memory (using the java argument -Xmx).

out of memory exception when compiling files

CompileMib.this.compileThread = new Thread() {
    @Override
    public void run() {
        try {
            synchronized (this) {
                Application.getDBHandler().setAutoCommit(false);
                MIBParserUtils.getDefaultMibsMap();
                compileSelectedFiles();
                Application.getDBHandler().CommitTrans();
                Application.getDBHandler().setAutoCommit(true);
            }
        } catch (OutOfMemoryError exp) {
            JOptionPane.showMessageDialog(null, "Compilation Stopped.. Insufficient Memory!!!");
            CompileMib.this.compileThread.interrupt();
            System.gc();
            dispose();
            NmsLogger.writeDebugLog(exp);
        } finally {
        }
    }
};
I tried to compile some files within a thread. The UI selects more than 200 files to compile. During compilation an OutOfMemoryError occurred due to insufficient memory in Eclipse. I want to stop the thread, display a message box, and dispose of the compile window in my application. I wrote the code above but it's not working. Can I catch the error in a catch block and handle it, or is there a better solution?
You can certainly catch an OOME. But successfully recovering is another thing entirely. This Answer discusses some of the issues: https://stackoverflow.com/a/1692421/139985.
Another thing to consider is that the OOME might be being thrown on a different thread:
The compileSelectedFiles() method or one of the other methods could be doing the work on another thread and throwing OOME there.
The OOME could be being thrown on one of Eclipse's background threads.
In either case, that catch obviously won't catch it.
It is worth noting that calling System.gc() after an OOME is a waste of time. I can guarantee that it won't release any memory that wouldn't be released anyway. All you are doing is suggesting that the JVM waste time on something that won't help. If you are lucky, the JVM will ignore the suggestion.
My advice would be to just increase Eclipse's heap size by altering the -Xmx JVM parameter in the eclipse.ini file.
There is almost always no reliable way to recover from an OOM, as anything you try to put into the catch block can itself require more memory, which is not available. And the GC has already tried its best before the OOM is thrown, so there is no point in asking it again.
As always, you can either increase the amount of memory available to your application via the -Xmx option, or fix your application so it does not require that much memory.
One more possible source of the error is a memory leak. In that case there is only one course of action: find it and fix it. Plumbr can help with that.
Have you tried adding the following to your eclipse.ini (located in same folder as eclipse.exe):
-Xmx1024m
This increases the heap space available to Eclipse. If your issue is during compilation, this may solve it. It gives 1GB of memory as heap space limit. Try -Xmx512m if you don't want to allocate quite so much space though.

Java - Avoiding repetitive manual Garbage Collection - mstor and javaxmail OutOfMemoryError

I'm using the mstor library to parse an mbox mail file. Some of the files exceed a gigabyte in size. As you can imagine, this can cause some heap space issues.
There's a loop that, for each iteration, retrieves a particular message. The getMessage() call is what is trying to allocate heap space when it runs out. If I add a call to System.gc() at the top of this loop, the program parses the large files without error, but I realize that collecting garbage 40,000 times has to be slowing the program down.
My first attempt was to make the call look like if (i % 500 == 0) System.gc() to make the call happen every 500 records. I tried raising and lowering this number, but the results are inconsistent and generally return an OutOfMemory error.
My second, more clever attempt looks like this:
try {
    message = inbox.getMessage(i);
} catch (OutOfMemoryError e) {
    if (firstTry) {
        i--;
        firstTry = false;
    } else {
        firstTry = true;
        System.out.println("Message " + i + " skipped.");
    }
    System.gc();
    continue;
}
The idea is to only call the garbage collector if an OutOfMemory error is thrown, and then decrement the count to try again. Unfortunately, after parsing several thousand e-mails the program just starts outputting:
Message 7030 skipped.
Message 7031 skipped.
....
and so on for the rest of them.
I'm just confused as to how hitting the collector for each iteration would return different results than this. From my understanding, garbage is garbage, and all this should be changing is how much is collected at a given time.
Can anyone explain this odd behavior? Does anyone have recommendations for other ways to call the collector less frequently? My heap space is maxed out.
You should not rely on System.gc(), as it can be ignored by the VM. If you get an OutOfMemory error, it means the VM has already tried to run the GC. You can try increasing the heap size, changing the sizes of the generations in the heap (say most of your objects end up in the old generation; then you don't need much memory for the young generation), and reviewing your code to make sure you are not holding references to resources you don't need.
Calling System.gc() is a waste of time in the general sense: it doesn't guarantee to do anything at any time, it is a suggestion at best, and in most cases it is ignored. Calling it after an OutOfMemoryError is even more useless, because the JVM has already tried to reclaim memory before the error was thrown.
The only thing you can do if you are using third party code you can't control is increase the JVM heap allocation at the command line to the most that your particular machine can handle.
Get started with java JVM memory (heap, stack, -xss -xms -xmx -xmn...)
Here are my suggestions:
Increase heap space. This is probably the easiest thing to do. You can do this with the -Xmx parameter.
See if the API to load messages provides a "streaming" option. Perhaps you don't need to load the entire message into memory at once.
Calling System.gc() won't do you any good because it doesn't guarantee that the GC will be called. In effect, it is a sure sign of bad code. If you're depending on System.gc() for your code to work, then your code is probably broken. In this case you seem to be relying on it for performance's sake and that is a sign that your code is definitely broken.
You can never be sure that the JVM will honor your request, and you can't tell how it will perform the garbage collection either. The JVM may decide to ignore your request completely (i.e., it is not a guarantee). Whether System.gc() will do what it's supposed to is pretty iffy. Since its behavior is not guaranteed, it is better not to use it at all.
Finally, you can disable explicit calls to System.gc() by using the -XX:DisableExplicitGC option, which means that again, it is not guaranteed that your System.gc() call will run because it might be running on a JVM that has been configured to ignore that explicit call.
By default mstor will cache messages retrieved from a folder in an ehcache cache for faster access. This caching may be disabled however, and I would recommend disabling it for large folders.
You can disable caching by creating a text file called 'mstor.properties' in the root of your classpath with the following content:
mstor.cache.disabled=true
You can also set this value as a system property:
java -Dmstor.cache.disabled=true SomeProgram
The mstor library wasn't handling the caching of messages well. After doing some research I found that if you call Folder.close() (inbox is my folder object above), mstor and JavaMail release all of the messages that were cached as a result of the getMessage() method.
I made the try/catch block look like this:
try {
    message = inbox.getMessage(i);
    // moved all of my calls to message.getFrom(),
    // message.getAllRecipients(), etc. inside this try/catch.
} catch (OutOfMemoryError e) {
    if (firstTry) {
        i--;               // retry the same message once
        firstTry = false;
    } else {
        firstTry = true;
        System.out.println("Message " + i + " skipped.");
    }
    inbox.close(false);    // releases the cached messages
    System.gc();
    inbox.open(Folder.READ_ONLY);
    continue;
}
firstTry = true;
Each time the catch statement is hit, it takes 40-50 ms to manually clear the cached messages and re-open the folder.
When the garbage collector was called on every iteration, it took 57 minutes to parse a 1.6 gigabyte file. With this logic, it takes only 18 minutes to parse the same file.
Update - Another important aspect in lowering the amount of memory used by mstor is in the cache properties. Somebody else already mentioned setting "mstor.cache.disabled" to true, and this helped. Today I discovered another important property that greatly reduced the number of OOM catches for even larger files.
Properties props = new Properties();
props.setProperty("mstor.mbox.metadataStrategy", "none");
props.setProperty("mstor.cache.disabled", "true");
props.setProperty("mstor.mbox.cacheBuffers", "false"); // most important

Behaviour of JVM during out of memory error?

List s = new ArrayList<String>();

try {
    for (;;) {
        s.add("Pradeep");
    }
} finally {
    System.out.println("In Finally");
}
In the try block the JVM runs out of memory, so how is the JVM executing the finally block when it has no memory?
Output:
In Finally
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
Presumably the System.out.println call requires less memory than the s.add("Pradeep") call.
If s is an ArrayList, for instance, the s.add call may cause the list to attempt to double its capacity. That can be quite a memory-demanding operation, so it is not very surprising that the JVM can continue executing even though it can't perform such a relatively expensive task.
Here is simpler code that demonstrates what happens (note that OutOfMemoryError is an Error, not an Exception, so a catch (Exception e) clause would not catch it):

try {
    int[] a = new int[Integer.MAX_VALUE];
} catch (OutOfMemoryError e) {
    e.printStackTrace();
}
The allocation of the array fails, but that doesn't mean Java has no free memory left. If you add items to a list, the list grows in jumps. At some point, the list will need more than half of the memory of the VM (about 64 MB by default). The next add will try to allocate an array that is too big. But that means the VM still has about 30 MB of unused heap left for other tasks.
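You can see that a failed allocation doesn't exhaust the heap by catching the OutOfMemoryError and then allocating something small afterwards. This is a minimal sketch; it assumes a HotSpot-style JVM, where a request of Integer.MAX_VALUE elements fails immediately because it exceeds the VM's maximum array length, without consuming the rest of the heap:

```java
public class AfterOome {
    public static void main(String[] args) {
        try {
            // this request exceeds HotSpot's array length limit,
            // so it fails without using up the heap
            int[] huge = new int[Integer.MAX_VALUE];
        } catch (OutOfMemoryError e) {
            System.out.println("caught: " + e.getMessage());
        }
        // a small allocation still succeeds afterwards
        int[] small = new int[1000];
        System.out.println("small allocation succeeded: " + small.length);
    }
}
```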
If you want to get the VM into real trouble, use a LinkedList, because it grows linearly. When the last allocation fails, there will be only very little memory left to handle the error (again, catch OutOfMemoryError, not Exception):

LinkedList<Integer> list = new LinkedList<Integer>();
try {
    for (;;) {
        list.add(0);
    }
} catch (OutOfMemoryError e) {
    e.printStackTrace();
}
That program takes longer to terminate but it still terminates with an error. Maybe Java sets aside part of the heap for error handling or error handling doesn't need to allocate memory (allocation happens before the code is executed).
In the try block the JVM runs out of memory, so how is the JVM executing the finally block when it has no memory?
The JVM "runs out of memory" and throws an OOME when an attempt to allocate an object or array fails because there is not enough space available for that object. That doesn't mean that everything has to stop instantly:
The JVM can happily keep doing things that don't entail creating any new objects. I'm pretty sure that this is what happens in this case. (The String literal already exists, and the implementation of the println method is most likely copying characters into a buffer that was previously allocated.)
The JVM could potentially allocate objects that are smaller than the one that triggered the OOME.
The propagation of the OOME may cause variables containing object references to go out of scope, and the objects that they refer to may then become unreachable. A subsequent new may then trigger the GC that reclaims said objects, making space for new ones.
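The third point, that dropping references makes objects collectible again, can be illustrated with a WeakReference. This is a minimal sketch; since System.gc() is only a hint, the loop is bounded rather than assuming the collector runs on the first request:

```java
import java.lang.ref.WeakReference;

public class ReclaimDemo {
    public static void main(String[] args) {
        Object obj = new Object();
        WeakReference<Object> ref = new WeakReference<>(obj);

        obj = null; // drop the only strong reference
        // request collection until the weak reference is cleared (bounded)
        for (int i = 0; i < 100 && ref.get() != null; i++) {
            System.gc();
        }
        System.out.println(ref.get() == null ? "reclaimed" : "still reachable");
    }
}
```

Once the strong reference is gone, any subsequent full collection can reclaim the object and make its space available for new allocations.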
Note: the JLS does not specify that the String object that represents that literal must be created when the class is loaded. However, it certainly says that it may be created at class load time ... and that is what happens in any decent JVM.
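The claim that the literal already exists can be checked directly: every occurrence of the same string literal resolves to the same interned String object, so the println in the finally block does not need to allocate a new one. A minimal sketch:

```java
public class LiteralDemo {
    public static void main(String[] args) {
        String a = "In Finally";
        String b = "In Finally";
        // both literals resolve to the same object in the string pool,
        // so the identity comparison (not just equals) holds
        System.out.println(a == b);
        System.out.println(a == "In Finally".intern());
    }
}
```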
Another answer said this:
Maybe Java sets aside part of the heap for error handling or error handling doesn't need to allocate memory (allocation happens before the code is executed).
I think this is right. However, I think that this special heap region is only used while instantiating the OOME exception, and filling in the stack trace information. Once that has happened, the JVM goes back to using the regular heap. (It would be easy to get some evidence for this. Just retry the add call in the finally block. If it succeeds, that is evidence that something has made more memory available for general use.)
The JVM isn't really out of memory and unable to proceed. The error says that a particular allocation failed, and so it did not happen. That might mean memory is very low, but here what failed was resizing the collection's internal array, which is huge. There is plenty of memory left, just not enough to double a large array, so the JVM can proceed just fine with the finally block.
The error is thrown when an allocation cannot be satisfied within the heap limit set by the -Xmx flag, and execution cannot continue as normal. The error propagates, but it does not immediately cause the JVM to shut down (if the JVM simply exited in such cases, there would be no point in having an OOM error, as it could never be caught).
As the JVM has not exited, it will try to execute the finally block, as the language spec requires.
Finally executes almost always.
When the exception was thrown, the JVM collected as much memory as possible, which, reading your code, probably meant that it collected the whole s collection.
When the finally is reached, it only has to use the string literal "In Finally", which is already in the string pool; no additional memory is required, and there is no problem since space was freed up before.
Try printing s.size() in the finally block; you'll see how it is not able to do it. (EDIT: if there's a line of code in catch, finally, or after the try block that uses the s collection, the garbage collector is unable to collect it at the moment the OOME is thrown. In that case the heap memory will be almost full, so any new object allocation may throw another OOME. It is difficult to predict without seeing the complete code.)
