'Catching' OutOfMemoryError completely solves out-of-memory issue? - java

I was getting OutOfMemoryError messages in LogCat and my app was crashing because the errors were uncaught.
Two things were causing the OutOfMemoryError:
1. Reading a large text file into a string.
2. Sending that string to my TextView.
Simply adding a catch to these two things not only catches the OutOfMemoryError but appears to completely solve the out-of-memory problem.
No more crashing and no more error messages in LogCat. The app just runs perfectly.
How is this possible? What exactly is happening?
With this code, I was getting the error messages & app crashing:
try
{
    myString = new Scanner(new File(myFilePath)).useDelimiter("\\A").next();
}
catch (FileNotFoundException e)
{
    e.printStackTrace();
}
myTextView.setText(myString);
Just by 'catching' the OutOfMemoryError, no more error messages in LogCat and no crashing:
try
{
    myString = new Scanner(new File(myFilePath)).useDelimiter("\\A").next();
}
catch (FileNotFoundException e)
{
    e.printStackTrace();
}
catch (OutOfMemoryError e)
{
}

try
{
    myTextView.setText(myString);
}
catch (OutOfMemoryError e)
{
}

I guess your string isn't loaded completely, and even if it is, the error may be thrown just after the text is added. Either way, what happens depends on the memory currently available to your app, so catching OutOfMemoryError isn't a viable solution here.
If you really want to load a large string file and display it in an EditText, I recommend you to load only a small part of the file (let's say 50kB), and implement some kind of paging, like a button which loads the next 50kB. You can also load additional lines as the user scrolls through the EditText.
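A paging loader along those lines might be sketched as follows (a sketch, not the asker's code; the 50 kB chunk size and the class name are illustrative):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class ChunkedReader {

    static final int CHUNK_SIZE = 50 * 1024; // one "page" of text, 50 kB

    // Reads one chunk of the file starting at the given byte offset,
    // so only a bounded amount of the file is ever held in memory.
    static String readChunk(File file, long offset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            long remaining = raf.length() - offset;
            if (remaining <= 0) {
                return ""; // past end of file: nothing left to page in
            }
            byte[] buf = new byte[(int) Math.min(CHUNK_SIZE, remaining)];
            raf.seek(offset);
            raf.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        }
    }
}
```

A "next page" button would then call `readChunk(file, offset)` and advance the offset by the number of bytes returned. (Note that a chunk boundary can split a multi-byte UTF-8 character; a real implementation would need to handle that.)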

If you catch the OutOfMemoryError, the garbage collector tries to free up the memory previously used, and the application can thus carry on, provided it lets the garbage collector do its job (i.e. the application no longer holds a reference to that large string of yours).
However, catching an OutOfMemoryError is far from fool-proof. See Catching java.lang.OutOfMemoryError?.

When you catch the error, the JVM tries to recover from it by calling the garbage collector and scrapping the objects that are no longer used.
This might solve the problem in your case. But imagine that the problem arises from bad coding and memory leaks all over your code. Catching will not solve the problem, because the GC will not be able to collect any of those objects. The GC will kick in more and more frequently, and the performance of your application will drop until it becomes unusable.
Basically, this error happens when the JVM cannot allocate any more heap memory for new objects. Catching the error and letting the GC clean up and release memory might be a solution, but you are never absolutely sure that you are in a recoverable state. I would use the catch block to recover from the error, log it, and close the application. If you want to solve the memory problem in this case, do it properly and initialize the JVM with more memory (using the -Xmx argument).
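That log-and-close pattern could be sketched like this (a minimal sketch; `runGuarded` and its exit strategy are illustrative, not a standard API):

```java
public class GuardedRunner {

    // Runs the task; on OutOfMemoryError it logs what it can and reports
    // failure so the caller can shut the application down cleanly.
    static boolean runGuarded(Runnable task) {
        try {
            task.run();
            return true;
        } catch (OutOfMemoryError e) {
            // Keep this path allocation-light: the heap may still be nearly full.
            System.err.println("OutOfMemoryError caught; shutting down");
            return false;
        }
    }
}
```

The caller would check the return value and call `System.exit(...)` after releasing resources.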

Java Heap size issue [duplicate]

Documentation for java.lang.Error says:
An Error is a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch
But as java.lang.Error is a subclass of java.lang.Throwable, I can catch this type of Throwable.
I understand why it's not a good idea to catch this sort of exception. As far as I understand, if we decide to catch it, the catch handler should not allocate any memory by itself. Otherwise OutOfMemoryError will be thrown again.
So, my question is:
Are there any real world scenarios when catching java.lang.OutOfMemoryError might be a good idea?
If we decide to catch java.lang.OutOfMemoryError, how can we make sure the catch handler doesn't allocate any memory by itself (any tools or best practices)?
There are a number of scenarios where you may wish to catch an OutOfMemoryError, and in my experience (on Windows and Solaris JVMs), only very infrequently is OutOfMemoryError the death-knell of a JVM.
There is only one good reason to catch an OutOfMemoryError, and that is to close down gracefully, cleanly releasing resources and logging the reason for the failure as best you can (if it is still possible to do so).
In general, the OutOfMemoryError occurs due to a block memory allocation that cannot be satisfied with the remaining resources of the heap.
When the Error is thrown, the heap contains the same set of allocated objects as before the unsuccessful allocation, and now is the time to drop references to run-time objects in order to free memory that may be required for cleanup. In these cases it may even be possible to continue, but that would definitely be a bad idea, as you can never be 100% certain that the JVM is in a reparable state.
Demonstration that OutOfMemoryError does not mean that the JVM is out of memory in the catch block:
private static final int MEGABYTE = (1024 * 1024);

public static void runOutOfMemory() {
    MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
    for (int i = 1; i <= 100; i++) {
        try {
            byte[] bytes = new byte[MEGABYTE * 500];
        } catch (Exception e) {
            e.printStackTrace();
        } catch (OutOfMemoryError e) {
            MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
            long maxMemory = heapUsage.getMax() / MEGABYTE;
            long usedMemory = heapUsage.getUsed() / MEGABYTE;
            System.out.println(i + " : Memory Use :" + usedMemory + "M/" + maxMemory + "M");
        }
    }
}
Output of this code:
1 : Memory Use :0M/247M
..
..
..
98 : Memory Use :0M/247M
99 : Memory Use :0M/247M
100 : Memory Use :0M/247M
If running something critical, I usually catch the Error, log it to syserr, then log it using my logging framework of choice, then proceed to release resources and close down in a clean fashion. What's the worst that can happen? The JVM is dying (or already dead) anyway and by catching the Error there is at least a chance of cleanup.
The caveat is that you have to target the catching of these types of errors only in places where cleanup is possible. Don't blanket catch(Throwable t) {} everywhere or nonsense like that.
You can recover from it:
package com.stackoverflow.q2679330;

public class Test {

    public static void main(String... args) {
        int size = Integer.MAX_VALUE;
        int factor = 10;
        while (true) {
            try {
                System.out.println("Trying to allocate " + size + " bytes");
                byte[] bytes = new byte[size];
                System.out.println("Succeed!");
                break;
            } catch (OutOfMemoryError e) {
                System.out.println("OOME .. Trying again with 10x less");
                size /= factor;
            }
        }
    }
}
But does it make sense? What else would you like to do? Why would you allocate that much memory in the first place? Is less memory also OK? Why don't you make use of less anyway? Or, if that's not possible, why not just give the JVM more memory from the beginning?
Back to your questions:
1: Are there any real world scenarios when catching java.lang.OutOfMemoryError might be a good idea?
None comes to mind.
2: If we decide to catch java.lang.OutOfMemoryError, how can we make sure the catch handler doesn't allocate any memory by itself (any tools or best practices)?
Depends on what caused the OOME. If the allocation happened outside the try block, or the heap filled up step by step, then your chances are slim. You may want to reserve some memory space beforehand:
private static byte[] reserve = new byte[1024 * 1024]; // Reserves 1MB.
and then set it to zero during OOME:
} catch (OutOfMemoryError e) {
    reserve = new byte[0];
    // Ha! 1MB free!
}
Of course this all makes no sense ;) Just give the JVM as much memory as your application requires. Run a profiler if necessary.
In general, it is a bad idea to try to catch and recover from an OOM.
An OOME could also have been thrown on other threads, including threads that your application doesn't even know about. Any such threads will now be dead, and anything that was waiting on a notify could be stuck for ever. In short, your app could be terminally broken.
Even if you do successfully recover, your JVM may still be suffering from heap starvation and your application will perform abysmally as a result.
The best thing to do with an OOME is to let the JVM die.
(This assumes that the JVM does die. For instance OOMs on a Tomcat servlet thread do not kill the JVM, and this leads to the Tomcat going into a catatonic state where it won't respond to any requests ... not even requests to restart.)
EDIT
I am not saying that it is a bad idea to catch OOM at all. The problems arise when you then attempt to recover from the OOME, either deliberately or by oversight. Whenever you catch an OOM (directly, or as a subtype of Error or Throwable) you should either rethrow it, or arrange that the application / JVM exits.
Aside: This suggests that for maximum robustness in the face of OOMs an application should use Thread.setDefaultUncaughtExceptionHandler() to set a handler that will cause the application to exit in the event of an OOME, no matter what thread the OOME is thrown on. I'd be interested in opinions on this ...
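Such a default handler might be sketched like this (the class name and exit code are arbitrary):

```java
public class OomExitHandler {

    // Installs a last-resort handler so that an OOME on *any* thread brings
    // the whole JVM down instead of silently killing a single thread.
    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            if (error instanceof OutOfMemoryError) {
                // Avoid allocating here: the heap may still be exhausted.
                // halt() skips shutdown hooks and finalizers.
                Runtime.getRuntime().halt(13);
            }
        });
    }
}
```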
The only other scenario is when you know for sure that the OOM has not resulted in any collateral damage; i.e. you know:
what specifically caused the OOME,
what the application was doing at the time, and that it is OK to simply discard that computation, and
that a (roughly) simultaneous OOME cannot have occurred on another thread.
There are applications where it is possible to know these things, but for most applications you cannot know for sure that continuation after an OOME is safe. Even if it empirically "works" when you try it.
(The problem is that a formal proof is required to show that the consequences of "anticipated" OOMEs are safe, and that "unanticipated" OOMEs cannot occur within the control of a try/catch OOME.)
Yes, there are real-world scenarios. Here's mine: I need to process data sets of very many items on a cluster with limited memory per node. A given JVM instance goes through many items one after the other, but some of the items are too big to process on the cluster: I can catch the OutOfMemoryError and take note of which items are too big. Later, I can re-run just the large items on a computer with more RAM.
(Because it's a single multi-gigabyte allocation of an array that fails, the JVM is still fine after catching the error and there's enough memory to process the other items.)
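That batch pattern might be sketched like this (class and method names are illustrative; a simulated OOME stands in for the real oversized allocation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchRunner {

    // Processes each item in turn; items whose (single, large) allocation
    // fails are recorded so they can be re-run later with more RAM.
    static List<String> processAll(List<String> items, Consumer<String> process) {
        List<String> tooBig = new ArrayList<>();
        for (String item : items) {
            try {
                process.accept(item);
            } catch (OutOfMemoryError e) {
                tooBig.add(item); // take note and move on to the next item
            }
        }
        return tooBig;
    }
}
```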
There are definitely scenarios where catching an OOME makes sense. IDEA catches them and pops up a dialog to let you change the startup memory settings (and then exits when you are done). An application server might catch and report them. The key to doing this is to do it at a high level on the dispatch so that you have a reasonable chance of having a bunch of resources freed up at the point where you are catching the exception.
Besides the IDEA scenario above, in general the catching should be of Throwable, not just OOM specifically, and should be done in a context where at least the thread will be terminated shortly.
Of course most times memory is starved and the situation is not recoverable, but there are ways that it makes sense.
I came across this question because I was wondering whether it is a good idea to catch OutOfMemoryError in my case. I'm answering here partially to show yet another example when catching this error can make sense to someone (i.e. me) and partially to find out whether it is a good idea in my case indeed (with me being an uber junior developer I can never be too sure about any single line of code I write).
Anyway, I'm working on an Android application which can run on different devices with different memory sizes. The dangerous part is decoding a bitmap from a file and displaying it in an ImageView instance. I don't want to restrict the more powerful devices in terms of the size of the decoded bitmap, nor can I be sure that the app won't be run on some ancient device with very low memory that I've never come across. Hence I do this:
BitmapFactory.Options bitmapOptions = new BitmapFactory.Options();
bitmapOptions.inSampleSize = 1;
boolean imageSet = false;
while (!imageSet) {
    try {
        image = BitmapFactory.decodeFile(filePath, bitmapOptions);
        imageView.setImageBitmap(image);
        imageSet = true;
    }
    catch (OutOfMemoryError e) {
        bitmapOptions.inSampleSize *= 2;
    }
}
This way I manage to provide for more and less powerful devices according to their, or rather their users' needs and expectations.
I have an application that needs to recover from OutOfMemoryError failures, and in single-threaded programs it always works, but sometimes doesn't in multi-threaded programs. The application is an automated Java testing tool that executes generated test sequences to the maximum possible depth on test classes. Now, the UI must be stable, but the test engine can run out of memory while growing the tree of test cases. I handle this by the following kind of code idiom in the test engine:
boolean isOutOfMemory = false; // flag used for reporting
try {
    SomeType largeVar;
    // Main loop that allocates more and more to largeVar;
    // may terminate OK, or raise OutOfMemoryError
}
catch (OutOfMemoryError ex) {
    // largeVar is now out of scope, so is garbage
    System.gc(); // clean up largeVar data
    isOutOfMemory = true; // flag available for use
}
// program tests flag to report recovery
This works every time in single-threaded applications. But I recently moved my test engine into a separate worker thread from the UI. Now the out-of-memory condition may occur arbitrarily in either thread, and it is not clear to me how to catch it.
For example, I had the OOME occur while the frames of an animated GIF in my UI were being cycled by a proprietary thread that is created behind-the-scenes by a Swing class that is out of my control. I had thought that I had allocated all the resources needed in advance, but clearly the animator is allocating memory every time it fetches the next image. If anyone has an idea about how to handle OOMEs raised in any thread, I would love to hear.
Yes, the real question is "what are you going to do in the exception handler?" For almost anything useful, you'll allocate more memory. If you'd like to do some diagnostic work when an OutOfMemoryError occurs, you can use the -XX:OnOutOfMemoryError=&lt;cmd&gt; hook supplied by the HotSpot VM. It will execute your command(s) when an OutOfMemoryError occurs, and you can do something useful outside of Java's heap. You really want to keep the application from running out of memory in the first place, so figuring out why it happens is the first step. Then you can increase the heap size or the MaxPermSize as appropriate. Here are some other useful HotSpot hooks:
-XX:+PrintCommandLineFlags
-XX:+PrintConcurrentLocks
-XX:+PrintClassHistogram
See the full list here
An OOME can be caught, but doing so is generally going to be useless; whether anything can be done depends on whether the JVM is able to garbage-collect some objects on reaching the catch block, and on how much heap memory is left by that time.
Example: in my JVM, this program runs to completion:
import java.util.LinkedList;
import java.util.List;

public class OOMErrorTest {

    public static void main(String[] args) {
        List<Long> ll = new LinkedList<Long>();
        try {
            long l = 0;
            while (true) {
                ll.add(new Long(l++));
            }
        } catch (OutOfMemoryError oome) {
            System.out.println("Error caught!!");
        }
        System.out.println("Test finished");
    }
}
However, just adding a single line in the catch block will show you what I'm talking about:
import java.util.LinkedList;
import java.util.List;

public class OOMErrorTest {

    public static void main(String[] args) {
        List<Long> ll = new LinkedList<Long>();
        try {
            long l = 0;
            while (true) {
                ll.add(new Long(l++));
            }
        } catch (OutOfMemoryError oome) {
            System.out.println("Error caught!!");
            System.out.println("size:" + ll.size());
        }
        System.out.println("Test finished");
    }
}
The first program runs fine because when reaching the catch block, the JVM detects that the list isn't going to be used anymore (this detection can also be an optimization made at compile time). So when we reach the print statement, the heap memory has been freed almost entirely, and we now have plenty of room to continue. This is the best case.
However, if the code is arranged such that the list ll is used after the OOME has been caught, the JVM is unable to collect it. This happens in the second snippet. The OOME, triggered by a new Long creation, is caught, but soon we're creating a new object (a String in the System.out.println line), and the heap is almost full, so a new OOME is thrown. This is the worst-case scenario: we tried to create a new object, we failed, we caught the OOME, yes, but now the first instruction requiring new heap memory (e.g. creating a new object) will throw a new OOME. Think about it: what else can we do at this point with so little memory left? Probably just exit, hence I said catching is useless.
Among the reasons the JVM may be unable to garbage-collect resources, one is really scary: a resource shared with other threads that are also making use of it. Anyone can see how dangerous catching OOME can be if it is added to any kind of non-experimental app.
I'm using a Windows x86 32bits JVM (JRE6). Default memory for each Java app is 64MB.
The only reason I can think of why catching OOM errors could be useful is that you may have some massive data structures you're no longer using, which you can set to null to free up some memory. But (1) that means you're wasting memory, and you should fix your code rather than just limping along after an OOME, and (2) even if you caught it, what would you do? OOM can happen at any time, potentially leaving everything half done.
For the question 2 I already see the solution I would suggest, by BalusC.
Are there any real world scenarios when catching java.lang.OutOfMemoryError may be a good idea?
I think I just came across a good example. When an AWT application is dispatching messages, an uncaught OutOfMemoryError is printed on stderr and the processing of the current message stops. But the application keeps running! The user may continue issuing commands, unaware of the serious problems happening behind the scenes, especially if he cannot or does not observe the standard error. So catching the OOM error and offering (or at least suggesting) an application restart is something desirable.
I just have a scenario where catching an OutOfMemoryError seems to make sense and seems to work.
Scenario: in an Android App, I want to display multiple bitmaps in highest possible resolution, and I want to be able to zoom them fluently.
Because of fluent zooming, I want to have the bitmaps in memory. However, Android has limitations in memory which are device dependent and which are hard to control.
In this situation, there may be OutOfMemoryError while reading the bitmap. Here, it helps if I catch it and then continue with lower resolution.
Depends on how you define "good". We do that in our buggy web application, and it does work most of the time (thankfully, the OutOfMemory no longer happens thanks to an unrelated fix). However, even if you catch it, it still might have broken some important code: if you have several threads, memory allocation can fail in any of them. So, depending on your application, there is still a 10-90% chance of it being irreversibly broken.
As far as I understand it, the heavy stack unwinding on the way will invalidate so many references, and thus free so much memory, that you shouldn't need to worry about that.
EDIT: I suggest you try it out. Say, write a program that recursively calls a function that allocates progressively more memory. Catch the OutOfMemoryError and see whether you can meaningfully continue from that point. In my experience you will be able to, though in my case it happened under a WebLogic server, so there may have been some black magic involved.
You can catch anything under Throwable. Generally speaking, you should only catch subclasses of Exception, excluding RuntimeException (though a large portion of developers also catch RuntimeException... but that was never the intent of the language designers).
If you were to catch OutOfMemoryError, what on earth would you do? The VM is out of memory; basically all you can do is exit. You probably cannot even open a dialog box to tell the user you are out of memory, since that would take memory :-)
The VM throws an OutOfMemoryError when it is truly out of memory (indeed all Errors should indicate unrecoverable situations) and there should really be nothing you can do to deal with it.
The things to do are find out why you are running out of memory (use a profiler, like the one in NetBeans) and make sure you don't have memory leaks. If you don't have memory leaks then increase the memory that you allocate to the VM.

Correct place to catch out of memory error

I'm experiencing a problem with a producer-consumer setup for a local bot competition (think Scalatron, but with more languages allowed, and using pipes to connect with stdin and stdout). The items are produced fine and handled correctly by the consumer; however, the consumer's task in this setting is to call other pieces of software that might take up too much memory, hence the out-of-memory error.
I've got a Python script (i.e. the consumer) continuously calling other pieces of code using subprocess.call. These are all submitted by other people for evaluation; however, sometimes one of the submitted pieces uses so much memory that the engine produces an OutOfMemoryError, which causes the entire script to halt.
There are three layers in the used setup:
Consumer (Python)
Game engine (Java)
Players' bots (languages differ)
The consumer calls the game engine using two bots as arguments:
subprocess.call(['setsid', 'sudo', '-nu', 'botrunner', '/opt/bots/sh/run_bots.sh', bot1, bot2]).
Inside the game engine a loop runs pitting the bots against each other, and afterwards all data is saved in a database so players can review their bots. The idea is, should a bot cause an error, to log the error and hand victory to the opponent.
What is the correct place to catch this, though? Should this be done on the "highest" (i.e. consumer) level, or in the game engine itself?
The correct place to catch any Exception or Error in Java is the place where you have a mechanism to handle it and perform some recovery steps. In the case of OutOfMemoryError, you should catch the error ONLY when you are able to close things down gracefully, cleanly releasing resources and logging the reason for the failure, if possible.
OutOfMemoryError occurs due to a memory allocation that cannot be satisfied with the remaining resources of the heap. Whenever OutOfMemoryError is thrown, the heap contains exactly the same set of allocated objects as before the unsuccessful allocation attempt. That is the point at which you should catch the OutOfMemoryError and attempt to drop references to run-time objects, to free memory that may be required for cleanup.
If the JVM is in a reparable state (which you can never determine from within the program), it is even possible to recover and continue after the error. But this is generally considered bad design because, as I said, you can never determine that through the program.
If you see the documentation of java.lang.Error, it says
An Error is a subclass of Throwable that indicates serious problems
that a reasonable application should not try to catch.
If you are catching any error on purpose, please remember NOT to blanket catch(Throwable t) {...} everywhere in your code.
More details here.
You can catch and attempt to recover from an OutOfMemoryError (OOM), BUT IT IS PROBABLY A BAD IDEA ... especially if your aim is for the application to "keep going".
There are a number of reasons for this:
As pointed out, there are better ways to manage memory resources than explicitly freeing things; i.e. using SoftReference and WeakReference for objects that could be freed if memory is short.
If you wait until you actually run out of memory before freeing things, your application is likely to spend more time running the garbage collector. Depending on your JVM version and your GC tuning parameters, the JVM can end up running the GC more and more frequently as it approaches the point at which it will throw an OOM. The slowdown (in terms of the application doing useful work) can be significant. You probably want to avoid this.
If the root cause of your problem is a memory leak, then the chances are that catching and recovering from the OOM will not reclaim the leaked memory. Your application will keep going for a bit, then OOM again, and again, and again at ever-decreasing intervals.
So my advice is NOT to attempt to keep going after an OOM ... unless you know:
where and why the OOM happened,
that there won't have been any "collateral damage", and
that your recovery will release enough memory to continue.
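The SoftReference idea from the first point above can be sketched as a cache whose values the GC is free to reclaim before it ever throws an OOME (a minimal illustration, not a production cache):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Values are held only softly, so the GC may clear them under memory
// pressure instead of the JVM throwing an OutOfMemoryError.
public class SoftCache<K, V> {

    private final Map<K, SoftReference<V>> map = new HashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    // Returns null if the entry was never cached or has been reclaimed.
    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }
}
```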
There is probably at least one good time to catch an OutOfMemoryError: when you are specifically allocating something that might be way too big:
public static int[] decode(InputStream in, int len) throws IOException {
    int[] result;
    try {
        result = new int[len];
    } catch (OutOfMemoryError e) {
        throw new IOException("Result too long to read into memory: " + len);
    } catch (NegativeArraySizeException e) {
        throw new IOException("Cannot read negative length: " + len);
    }
    // ... read len ints from in into result ...
    return result;
}

Free up memory after using scanner on large text file?

If I run the Scanner code below once, it runs flawlessly.
If I run it a second time, my app crashes and I get an "Out of memory" error in LogCat.
How do I go about freeing up the memory used by the initial run so that the app won't crash on the second run?
Any suggestions would be much appreciated
try
{
    myString = new Scanner(new File(myFilePath)).useDelimiter("\\A").next();
}
catch (FileNotFoundException e)
{
    e.printStackTrace();
}
Additional misc. info:
The purpose of the code is to load the entire contents of a large (1.5MB) text file into a string.
The exact error message in LogCat is: Out of memory on a 4194320-byte allocation
The code is being run in an AsyncTask background thread.
The try/catch was added automatically by Eclipse. I don't know if it's formatted properly or not.
I tried emptying myString to free memory before the second run, but that didn't help.
I've tried using other methods to load the file into a string (including the often-recommended Apache Utils methods) and settled on this method because it's incredibly fast compared to the others I've tried.

out of memory exception when compiling files

CompileMib.this.compileThread = new Thread() {
    @Override
    public void run() {
        try {
            synchronized (this) {
                Application.getDBHandler().setAutoCommit(false);
                MIBParserUtils.getDefaultMibsMap();
                compileSelectedFiles();
                Application.getDBHandler().CommitTrans();
                Application.getDBHandler().setAutoCommit(true);
            }
        }
        catch (OutOfMemoryError exp) {
            JOptionPane.showMessageDialog(null, "Compilation Stopped.. Insufficient Memory!!!");
            CompileMib.this.compileThread.interrupt();
            System.gc();
            dispose();
            NmsLogger.writeDebugLog(exp);
        }
        finally {
        }
    }
};
I tried to compile some files within a thread. The UI selects more than 200 files to compile. During compilation, an OutOfMemoryError occurs due to insufficient memory in Eclipse. I want to stop the thread, display a message box, and dispose of the compile window in my application. I wrote the code above, but it's not working. Can I catch the error and handle it, or is there a better solution?
Can I handle the error in the catch block?
You can certainly catch an OOME. But successfully recovering is another thing entirely. This Answer discusses some of the issues: https://stackoverflow.com/a/1692421/139985.
Another thing to consider is that the OOME might be being thrown on a different thread:
The compileSelectedFiles() method or one of the other methods could be doing the work on another thread and throwing OOME there.
The OOME could be being thrown on one of Eclipse's background threads.
In either case, that catch obviously won't catch it.
It is worth noting that calling System.gc() after an OOME is a waste of time. I can guarantee that it won't release any memory that wouldn't be released anyway. All you are doing is suggesting that the JVM waste time on something that won't help. If you are lucky, the JVM will ignore the suggestion.
My advice would be to just increase Eclipse's heap size by altering the -Xmx JVM parameter in the eclipse.ini file.
There is almost no reliable way to recover from an OOM, as anything you try to put into the catch block can itself require more memory, which is not available. And the GC has already tried its best before the OOM is thrown, so there is no point in asking it again.
As always, you can either increase the amount of memory available to your application via the -Xmx option, or fix your application so that it doesn't require that much memory.
One more possible source of the error is a memory leak. In that case there is only one course of action: find it and fix it. Plumbr can help with that.
Have you tried adding the following to your eclipse.ini (located in the same folder as eclipse.exe)?
-Xmx1024m
This increases the heap space available to Eclipse. If your issue occurs during compilation, this may solve it. It raises the heap space limit to 1GB of memory. Try -Xmx512m if you don't want to allocate quite so much space.

Determining the point of OutOfMemory Error

In Java, is it possible to accurately find the point at which java.lang.OutOfMemoryError occurred?
I am looking to better understand exactly how much memory my application had claimed before failing.
You could catch the OutOfMemoryError and ask the runtime, as shown below.
try {
    //...
} catch (OutOfMemoryError er) {
    // this will tell you how much you have used
    long heapSize = Runtime.getRuntime().totalMemory();
    System.err.println("memory used " + heapSize);
}
(An OutOfMemoryError should occur when the heap runs out of memory.)
If you have no idea where in your code it will fail, you could try registering a shutdown hook and printing the heap size there.
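Such a shutdown hook might be sketched like this (the output format is arbitrary):

```java
public class HeapLogger {

    // Builds a one-line summary of current heap usage.
    static String heapSummary() {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        return "heap used " + usedMb + "M of " + maxMb + "M";
    }

    // Prints a final heap snapshot when the JVM shuts down,
    // including after an uncaught OutOfMemoryError on any thread.
    public static void installHeapReporter() {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> System.err.println(heapSummary())));
    }
}
```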
