Documentation for java.lang.Error says:
An Error is a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch.
But as java.lang.Error is a subclass of java.lang.Throwable, I can catch this type of Throwable.
I understand why it's not a good idea to catch this sort of exception. As far as I understand, if we decide to catch it, the catch handler should not allocate any memory by itself. Otherwise OutOfMemoryError will be thrown again.
So, my question is:
Are there any real world scenarios when catching java.lang.OutOfMemoryError might be a good idea?
If we decide to catch java.lang.OutOfMemoryError, how can we make sure the catch handler doesn't allocate any memory by itself (any tools or best practices)?
There are a number of scenarios where you may wish to catch an OutOfMemoryError and in my experience (on Windows and Solaris JVMs), only very infrequently is OutOfMemoryError the death-knell to a JVM.
There is only one good reason to catch an OutOfMemoryError, and that is to close down gracefully, cleanly releasing resources and logging the reason for the failure as best you can (if it is still possible to do so).
In general, an OutOfMemoryError occurs due to a block memory allocation that cannot be satisfied with the remaining resources of the heap.
When the Error is thrown, the heap contains the same amount of allocated objects as before the unsuccessful allocation, and now is the time to drop references to run-time objects to free even more memory that may be required for cleanup. In these cases it may even be possible to continue, but that would definitely be a bad idea, as you can never be 100% certain that the JVM is in a repairable state.
Demonstration that OutOfMemoryError does not mean that the JVM is out of memory in the catch block:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

private static final int MEGABYTE = (1024 * 1024);

public static void runOutOfMemory() {
    MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
    for (int i = 1; i <= 100; i++) {
        try {
            byte[] bytes = new byte[MEGABYTE * 500];
        } catch (Exception e) {
            e.printStackTrace();
        } catch (OutOfMemoryError e) {
            MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
            long maxMemory = heapUsage.getMax() / MEGABYTE;
            long usedMemory = heapUsage.getUsed() / MEGABYTE;
            System.out.println(i + " : Memory Use :" + usedMemory + "M/" + maxMemory + "M");
        }
    }
}
Output of this code:
1 : Memory Use :0M/247M
..
..
..
98 : Memory Use :0M/247M
99 : Memory Use :0M/247M
100 : Memory Use :0M/247M
If running something critical, I usually catch the Error, log it to syserr, then log it using my logging framework of choice, then proceed to release resources and close down in a clean fashion. What's the worst that can happen? The JVM is dying (or already dead) anyway and by catching the Error there is at least a chance of cleanup.
The caveat is that you have to target the catching of these types of errors only in places where cleanup is possible. Don't blanket catch(Throwable t) {} everywhere or nonsense like that.
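As a sketch of that "catch, clean up, report" pattern (the class and method names here, and the simulated error, are made up purely for illustration):

```java
public class OomeShutdown {
    /** Runs the task; on OOME, logs with pre-existing strings and reports failure. */
    static boolean runWithOomeGuard(Runnable task) {
        try {
            task.run();
            return true;
        } catch (OutOfMemoryError e) {
            // The failed allocation was rolled back, so the heap is no fuller
            // than before; log and begin an orderly shutdown.
            System.err.println("OutOfMemoryError caught, shutting down");
            return false;
        }
    }

    public static void main(String[] args) {
        boolean ok = runWithOomeGuard(() -> {
            throw new OutOfMemoryError("simulated");
        });
        System.out.println(ok ? "completed" : "failed, cleaned up");
    }
}
```

In a real application the catch branch would release resources and then exit, rather than return to the caller.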
You can recover from it:
package com.stackoverflow.q2679330;

public class Test {
    public static void main(String... args) {
        int size = Integer.MAX_VALUE;
        int factor = 10;
        while (true) {
            try {
                System.out.println("Trying to allocate " + size + " bytes");
                byte[] bytes = new byte[size];
                System.out.println("Succeeded!");
                break;
            } catch (OutOfMemoryError e) {
                System.out.println("OOME .. Trying again with 10x less");
                size /= factor;
            }
        }
    }
}
But does it make sense? What else would you want to do? Why would you allocate that much memory in the first place? Is less memory also OK? If so, why not just use less memory anyway? Or if that's not possible, why not give the JVM more memory from the beginning?
Back to your questions:
1: Are there any real-world scenarios when catching java.lang.OutOfMemoryError may be a good idea?
None comes to mind.
2: If we catch java.lang.OutOfMemoryError, how can we make sure the catch handler doesn't allocate any memory by itself (any tools or best practices)?
Depends on what has caused the OOME. If the allocation happened outside the try block, or memory filled up gradually, then your chances are slim. You may want to reserve some memory space beforehand:
private static byte[] reserve = new byte[1024 * 1024]; // Reserves 1MB.
and then set it to zero during OOME:
} catch (OutOfMemoryError e) {
    reserve = new byte[0];
    // Ha! 1MB free!
}
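Put together, the reserve idea might look like this runnable sketch (the 1MB size and all names are arbitrary assumptions):

```java
public class MemoryReserve {
    // Pre-allocated "rainy day" buffer, dropped when an OOME is caught so
    // the handler has some heap to work with.
    private static byte[] reserve = new byte[1024 * 1024]; // 1MB

    /** Attempts an allocation; on OOME, releases the reserve and reports. */
    static String allocateOrReport(int size) {
        try {
            byte[] data = new byte[size];
            return "allocated " + data.length;
        } catch (OutOfMemoryError e) {
            reserve = null; // about 1MB becomes reclaimable for the handler
            return "failed, reserve released";
        }
    }

    public static void main(String[] args) {
        System.out.println(allocateOrReport(16));
    }
}
```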
Of course, none of this makes much sense ;) Just give the JVM as much memory as your application requires. Run a profiler if necessary.
In general, it is a bad idea to try to catch and recover from an OOM.
An OOME could also have been thrown on other threads, including threads that your application doesn't even know about. Any such threads will now be dead, and anything that was waiting on a notify could be stuck for ever. In short, your app could be terminally broken.
Even if you do successfully recover, your JVM may still be suffering from heap starvation and your application will perform abysmally as a result.
The best thing to do with an OOME is to let the JVM die.
(This assumes that the JVM does die. For instance, OOMs on a Tomcat servlet thread do not kill the JVM, and this leads to Tomcat going into a catatonic state where it won't respond to any requests ... not even requests to restart.)
EDIT
I am not saying that it is a bad idea to catch OOM at all. The problems arise when you then attempt to recover from the OOME, either deliberately or by oversight. Whenever you catch an OOM (directly, or as a subtype of Error or Throwable) you should either rethrow it, or arrange that the application / JVM exits.
Aside: This suggests that for maximum robustness in the face of OOMs an application should use Thread.setDefaultUncaughtExceptionHandler() to set a handler that will cause the application to exit in the event of an OOME, no matter what thread the OOME is thrown on. I'd be interested in opinions on this ...
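A minimal sketch of that suggestion (for demonstration the handler below only records and prints the OOME; a real application would halt, as the comment notes):

```java
public class OomeExitHandler {
    static volatile boolean sawOome = false;

    public static void main(String[] args) throws InterruptedException {
        // Install a default handler that fires no matter which thread the
        // OOME is thrown on.
        Thread.setDefaultUncaughtExceptionHandler((thread, err) -> {
            if (err instanceof OutOfMemoryError) {
                sawOome = true;
                System.err.println("OOME on thread " + thread.getName());
                // A real application would now exit rather than limp on:
                // Runtime.getRuntime().halt(13); // skips shutdown hooks,
                // which might themselves need memory we no longer have
            }
        });
        Thread worker = new Thread(() -> {
            throw new OutOfMemoryError("simulated");
        }, "worker");
        worker.start();
        worker.join();
        System.out.println("handler ran: " + sawOome);
    }
}
```

halt() rather than exit() is the usual choice here, precisely because shutdown hooks may allocate.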
The only other scenario is when you know for sure that the OOM has not resulted in any collateral damage; i.e. you know:
what specifically caused the OOME,
what the application was doing at the time, and that it is OK to simply discard that computation, and
that a (roughly) simultaneous OOME cannot have occurred on another thread.
There are applications where it is possible to know these things, but for most applications you cannot know for sure that continuation after an OOME is safe. Even if it empirically "works" when you try it.
(The problem is that a formal proof is required to show that the consequences of "anticipated" OOMEs are safe, and that "unanticipated" OOMEs cannot occur within the control of a try/catch OOME.)
Yes, there are real-world scenarios. Here's mine: I need to process data sets of very many items on a cluster with limited memory per node. A given JVM instance goes through many items one after the other, but some of the items are too big to process on the cluster: I can catch the OutOfMemoryError and take note of which items are too big. Later, I can re-run just the large items on a computer with more RAM.
(Because it's a single multi-gigabyte allocation of an array that fails, the JVM is still fine after catching the error and there's enough memory to process the other items.)
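A sketch of that per-item guard (the class and method names are made up for illustration; the oversized item in main relies on HotSpot refusing arrays near Integer.MAX_VALUE elements):

```java
import java.util.ArrayList;
import java.util.List;

public class SkipOversizedItems {

    /** Tries to allocate each item's buffer; records the sizes that fail. */
    static List<Integer> process(int[] itemSizes) {
        List<Integer> tooBig = new ArrayList<>();
        for (int size : itemSizes) {
            try {
                byte[] buffer = new byte[size]; // one big allocation per item
                // ... process buffer here ...
            } catch (OutOfMemoryError e) {
                // The single failed allocation left the heap as it was,
                // so it is safe to note the item and move on.
                tooBig.add(size);
            }
        }
        return tooBig;
    }

    public static void main(String[] args) {
        // The middle item is far too big for any ordinary heap.
        System.out.println(process(new int[] {16, Integer.MAX_VALUE, 32}));
    }
}
```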
There are definitely scenarios where catching an OOME makes sense. IDEA catches them and pops up a dialog to let you change the startup memory settings (and then exits when you are done). An application server might catch and report them. The key to doing this is to do it at a high level on the dispatch so that you have a reasonable chance of having a bunch of resources freed up at the point where you are catching the exception.
Besides the IDEA scenario above, in general the catching should be of Throwable, not just OOM specifically, and should be done in a context where at least the thread will be terminated shortly.
Of course most times memory is starved and the situation is not recoverable, but there are ways that it makes sense.
I came across this question because I was wondering whether it is a good idea to catch OutOfMemoryError in my case. I'm answering here partially to show yet another example when catching this error can make sense to someone (i.e. me) and partially to find out whether it is a good idea in my case indeed (with me being an uber-junior developer, I can never be too sure about any single line of code I write).
Anyway, I'm working on an Android application which can be run on different devices with different memory sizes. The dangerous part is decoding a bitmap from a file and displaying it in an ImageView instance. I don't want to restrict the more powerful devices in terms of the size of the decoded bitmap, nor can I be sure that the app won't be run on some ancient device I've never come across with very low memory. Hence I do this:
// 'image' and 'imageView' are fields of the enclosing class
BitmapFactory.Options bitmapOptions = new BitmapFactory.Options();
bitmapOptions.inSampleSize = 1;
boolean imageSet = false;
while (!imageSet) {
    try {
        image = BitmapFactory.decodeFile(filePath, bitmapOptions);
        imageView.setImageBitmap(image);
        imageSet = true;
    } catch (OutOfMemoryError e) {
        // Decoding failed: halve the resolution and try again
        bitmapOptions.inSampleSize *= 2;
    }
}
This way I manage to provide for more and less powerful devices according to their, or rather their users' needs and expectations.
I have an application that needs to recover from OutOfMemoryError failures, and in single-threaded programs it always works, but sometimes doesn't in multi-threaded programs. The application is an automated Java testing tool that executes generated test sequences to the maximum possible depth on test classes. Now, the UI must be stable, but the test engine can run out of memory while growing the tree of test cases. I handle this by the following kind of code idiom in the test engine:
boolean isOutOfMemory = false; // flag used for reporting
try {
    SomeType largeVar;
    // Main loop that allocates more and more to largeVar;
    // may terminate OK, or raise OutOfMemoryError
} catch (OutOfMemoryError ex) {
    // largeVar is now out of scope, so is garbage
    System.gc(); // clean up largeVar data
    isOutOfMemory = true; // flag available for use
}
// program tests flag to report recovery
This works every time in single-threaded applications. But I recently put my test engine into a separate worker thread from the UI. Now, the out-of-memory condition may occur arbitrarily in either thread, and it is not clear to me how to catch it.
For example, I had the OOME occur while the frames of an animated GIF in my UI were being cycled by a proprietary thread that is created behind the scenes by a Swing class that is out of my control. I had thought that I had allocated all the resources needed in advance, but clearly the animator is allocating memory every time it fetches the next image. If anyone has an idea about how to handle OOMEs raised in any thread, I would love to hear it.
Yes, the real question is "what are you going to do in the exception handler?" For almost anything useful, you'll allocate more memory. If you'd like to do some diagnostic work when an OutOfMemoryError occurs, you can use the -XX:OnOutOfMemoryError=<cmd> hook supplied by the HotSpot VM. It will execute your command(s) when an OutOfMemoryError occurs, and you can do something useful outside of Java's heap. You really want to keep the application from running out of memory in the first place, so figuring out why it happens is the first step. Then you can increase the heap size or MaxPermSize as appropriate. Here are some other useful HotSpot hooks:
-XX:+PrintCommandLineFlags
-XX:+PrintConcurrentLocks
-XX:+PrintClassHistogram
See the full list here
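For example, a launch line using the hook might look like this (the application name is hypothetical; %p is expanded by the JVM to its own pid, and jmap must be on the PATH):

```shell
java -Xmx512m -XX:OnOutOfMemoryError="jmap -histo %p; kill -9 %p" com.example.MyApp
```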
An OOME can be caught, but it is generally going to be useless, depending on whether the JVM is able to garbage-collect some objects when reaching the catch block, and on how much heap memory is left by that time.
Example: in my JVM, this program runs to completion:
import java.util.LinkedList;
import java.util.List;

public class OOMErrorTest {
    public static void main(String[] args) {
        List<Long> ll = new LinkedList<Long>();
        try {
            long l = 0;
            while (true) {
                ll.add(new Long(l++));
            }
        } catch (OutOfMemoryError oome) {
            System.out.println("Error caught!!");
        }
        System.out.println("Test finished");
    }
}
However, just adding a single line in the catch block will show you what I'm talking about:
import java.util.LinkedList;
import java.util.List;

public class OOMErrorTest {
    public static void main(String[] args) {
        List<Long> ll = new LinkedList<Long>();
        try {
            long l = 0;
            while (true) {
                ll.add(new Long(l++));
            }
        } catch (OutOfMemoryError oome) {
            System.out.println("Error caught!!");
            System.out.println("size: " + ll.size());
        }
        System.out.println("Test finished");
    }
}
The first program runs fine because when reaching the catch block, the JVM detects that the list isn't going to be used anymore (this detection can also be an optimization made at compile time). So when we reach the print statement, the heap memory has been freed almost entirely, and we now have a wide margin of maneuver to continue. This is the best case.
However, if the code is arranged such that the list ll is used after the OOME has been caught, the JVM is unable to collect it. This happens in the second snippet. The OOME, triggered by a new Long creation, is caught, but soon we're creating a new object (a String, in the System.out.println line), and the heap is almost full, so a new OOME is thrown. This is the worst-case scenario: we tried to create a new object, we failed, we caught the OOME, yes, but now the first instruction requiring new heap memory (e.g. creating a new object) will throw a new OOME. Think about it: what else can we do at this point with so little memory left? Probably just exit, hence I said it's useless.
Among the reasons the JVM may be unable to garbage-collect resources, one is really scary: a resource shared with other threads that are also making use of it. Anyone can see how dangerous catching OOME can be if added to a non-experimental app of any kind.
I'm using a Windows x86 32-bit JVM (JRE6). Default memory for each Java app is 64MB.
The only reason I can think of why catching OOM errors could be useful is that you have some massive data structures you're not using anymore, which you can set to null to free up some memory. But (1) that means you're wasting memory, and you should fix your code rather than just limping along after an OOME, and (2) even if you caught it, what would you do? OOM can happen at any time, potentially leaving everything half done.
For question 2, I already see the solution I would suggest, posted by BalusC.
Are there any real-world scenarios when catching java.lang.OutOfMemoryError may be a good idea?
I think I just came across a good example. When an AWT application is dispatching messages, an uncaught OutOfMemoryError is displayed on stderr and the processing of the current message is stopped. But the application keeps running! The user may still issue other commands, unaware of the serious problems happening behind the scenes, especially when they cannot or do not observe the standard error. So catching the OOM error and providing (or at least suggesting) an application restart is desirable.
I just have a scenario where catching an OutOfMemoryError seems to make sense and seems to work.
Scenario: in an Android App, I want to display multiple bitmaps in highest possible resolution, and I want to be able to zoom them fluently.
Because of fluent zooming, I want to keep the bitmaps in memory. However, Android has memory limitations which are device-dependent and hard to control.
In this situation, there may be an OutOfMemoryError while reading the bitmap. Here it helps if I catch it and then continue with a lower resolution.
Depends on how you define "good". We do that in our buggy web application, and it does work most of the time (thankfully, OutOfMemory now doesn't happen, due to an unrelated fix). However, even if you catch it, it still might have broken some important code: if you have several threads, memory allocation can fail in any of them. So, depending on your application, there is still a 10-90% chance of it being irreversibly broken.
As far as I understand, heavy stack unwinding on the way will invalidate so many references, and thus free so much memory, that you shouldn't care about that.
EDIT: I suggest you try it out. Say, write a program that recursively calls a function that allocates progressively more memory. Catch OutOfMemoryError and see if you can meaningfully continue from that point. According to my experience, you will be able to, though in my case it happened under WebLogic server, so there might have been some black magic involved.
You can catch anything under Throwable. Generally speaking, you should only catch subclasses of Exception, excluding RuntimeException (though a large portion of developers also catch RuntimeException... but that was never the intent of the language designers).
If you were to catch OutOfMemoryError, what on earth would you do? The VM is out of memory; basically all you can do is exit. You probably cannot even open a dialog box to tell the user you are out of memory, since that would take memory :-)
The VM throws an OutOfMemoryError when it is truly out of memory (indeed all Errors should indicate unrecoverable situations) and there should really be nothing you can do to deal with it.
The things to do are find out why you are running out of memory (use a profiler, like the one in NetBeans) and make sure you don't have memory leaks. If you don't have memory leaks then increase the memory that you allocate to the VM.
I was getting OutOfMemoryError messages in LogCat and my app was crashing because the errors were uncaught.
Two things were causing the OutOfMemoryError:
1. Reading a large text file into a string.
2. Sending that string to my TextView.
Simply adding a catch to these two things not only catches the OutOfMemoryError but appears to completely solve the out-of-memory problem.
No more crashing and no more error messages in LogCat. The app just runs perfectly.
How is this possible? What exactly is happening?
With this code, I was getting the error messages & app crashing:
try {
    myString = new Scanner(new File(myFilePath)).useDelimiter("\\A").next();
} catch (FileNotFoundException e) {
    e.printStackTrace();
}
myTextView.setText(myString);
Just by 'catching' the OutOfMemoryError, no more error messages in LogCat and no crashing:
try {
    myString = new Scanner(new File(myFilePath)).useDelimiter("\\A").next();
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (OutOfMemoryError e) {
    // ignored
}

try {
    myTextView.setText(myString);
} catch (OutOfMemoryError e) {
    // ignored
}
I guess your string isn't loaded completely, or even if it is (it may throw the error just after adding the text), what happens depends on the memory currently available to your app, so catching OutOfMemoryError isn't a viable solution here.
If you really want to load a large string file and display it in an EditText, I recommend loading only a small part of the file (say, 50kB) and implementing some kind of paging, like a button which loads the next 50kB. You can also load additional lines as the user scrolls through the EditText.
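A rough sketch of such paging in plain Java (Android's UI wiring is omitted; the PAGE_SIZE value and class name are assumptions):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class PagedTextLoader {
    static final int PAGE_SIZE = 50 * 1024; // 50kB pages, as suggested above

    /** Reads one fixed-size page of the file instead of the whole thing. */
    static String readPage(String path, int pageIndex) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            long offset = (long) pageIndex * PAGE_SIZE;
            if (offset >= file.length()) {
                return ""; // past end of file: nothing left to page in
            }
            file.seek(offset);
            byte[] buf = new byte[(int) Math.min(PAGE_SIZE, file.length() - offset)];
            file.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo with a small temporary file standing in for the large one.
        File tmp = File.createTempFile("paged", ".txt");
        tmp.deleteOnExit();
        try (FileWriter w = new FileWriter(tmp)) {
            w.write("hello paging");
        }
        System.out.println(readPage(tmp.getPath(), 0));
    }
}
```

A "load next page" button would simply call readPage with pageIndex + 1 and append the result.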
If you catch the OutOfMemoryError, the garbage collector tries to free up the memory previously used, and thus the application can carry on, provided the application lets the garbage collector do its job (i.e. the application no longer holds a reference to that large string of yours).
However, catching an OutOfMemoryError is far from fool-proof. See Catching java.lang.OutOfMemoryError?.
When you catch the error, the JVM tries to recover from it by calling the garbage collector and scrapping the objects that are no longer used.
This might solve the problem in your case. But imagine the problem appears because of bad coding and memory leaks all over your code. Then catching will not solve the problem, because the GC will not be able to collect any objects. The GC will kick in more and more frequently, and the performance of your application will drop until it becomes unusable.
Basically, this error happens when the JVM cannot allocate more heap memory for new objects. Catching the error and letting the GC clean up and release memory might be a solution, but you are never absolutely sure that you are in a recoverable state. I would use the catch block to log the error and close the application gracefully. If you want to solve the memory problem in this case, do it properly and initialize the JVM with more memory (using the -Xmx argument).
CompileMib.this.compileThread = new Thread() {
    @Override
    public void run() {
        try {
            synchronized (this) {
                Application.getDBHandler().setAutoCommit(false);
                MIBParserUtils.getDefaultMibsMap();
                compileSelectedFiles();
                Application.getDBHandler().CommitTrans();
                Application.getDBHandler().setAutoCommit(true);
            }
        } catch (OutOfMemoryError exp) {
            JOptionPane.showMessageDialog(null, "Compilation Stopped.. Insufficient Memory!!!");
            CompileMib.this.compileThread.interrupt();
            System.gc();
            dispose();
            NmsLogger.writeDebugLog(exp);
        } finally {
        }
    }
};
I tried to compile some files within a thread. The UI selects more than 200 files to compile. During compilation an OutOfMemoryError occurred due to insufficient memory in Eclipse. I want to stop the thread, display a message box, and dispose of the compile window in my application. I wrote the above code but it's not working. Can I catch the error and handle it, or is there a better solution?
Can I handle the error in the catch block?
You can certainly catch an OOME, but successfully recovering is another thing entirely. This answer discusses some of the issues: https://stackoverflow.com/a/1692421/139985.
Another thing to consider is that the OOME might be being thrown on a different thread:
The compileSelectedFiles() method or one of the other methods could be doing the work on another thread and throwing OOME there.
The OOME could be being thrown on one of Eclipse's background threads.
In either case, that catch obviously won't catch it.
It is worth noting that calling System.gc() after an OOME is a waste of time. I can guarantee that it won't release any memory that wouldn't be released anyway. All you are doing is suggesting that the JVM waste time on something that won't help. If you are lucky, the JVM will ignore the suggestion.
My advice would be to just increase Eclipse's heap size by altering the -Xmx JVM parameter in the eclipse.ini file.
There is almost never a reliable way to recover from an OOM, as anything you try to put into the catch block can require more memory, which is not available. And the GC has already tried its best before the OOM is thrown, so there is no point in asking it again.
As always, you can either increase the amount of memory available to your application via the -Xmx option, or fix your application so that it doesn't require that much memory.
One more possible source of the error is a memory leak. In that case there is only one course of action: find it and fix it. Plumbr can help with that.
Have you tried adding the following to your eclipse.ini (located in same folder as eclipse.exe):
-Xmx1024m
This increases the heap space available to Eclipse. If your issue occurs during compilation, this may solve it. It sets the heap space limit to 1GB of memory. Try -Xmx512m if you don't want to allocate quite so much space, though.
try {
    for (;;) {
        s.add("Pradeep");
    }
} finally {
    System.out.println("In Finally");
}
In the try block the JVM runs out of memory, so how is the JVM executing the finally block when it has no memory?
Output:
In Finally
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
Presumably the System.out.println call requires less memory than the s.add("Pradeep") call.
If s is an ArrayList, for instance, the s.add call may cause the list to attempt to grow its capacity substantially. This is possibly a quite memory-demanding operation, so it is not very surprising that the JVM can continue executing even though it can't perform such a relatively expensive task.
Here is simpler code that demonstrates better what happens (note that the catch must name OutOfMemoryError, which is not a subclass of Exception):

try {
    int[] a = new int[Integer.MAX_VALUE];
} catch (OutOfMemoryError e) {
    e.printStackTrace();
}
The allocation of the array fails, but that doesn't mean Java has no free memory anymore. If you add items to a list, the list grows in jumps. At some point, the list will need more than half of the memory of the VM (about 64MB by default). The next add will try to allocate an array that is too big.
But that means the VM still has about 30MB of unused heap left for other tasks.
If you want to get the VM into trouble, use a LinkedList, because it grows linearly. When the last allocation fails, there will be only very little memory left to handle the error:

LinkedList<Integer> list = new LinkedList<Integer>();
try {
    for (;;) {
        list.add(0);
    }
} catch (OutOfMemoryError e) {
    e.printStackTrace();
}
That program takes longer to terminate, but it still terminates with an error. Maybe Java sets aside part of the heap for error handling, or error handling doesn't need to allocate memory (allocation happens before the code is executed).
In the try block the JVM runs out of memory, so how is the JVM executing the finally block when it has no memory?
The JVM "runs out of memory" and throws an OOME when an attempt to allocate an object or array fails because there is not enough space available for that object. That doesn't mean that everything has to stop instantly:
The JVM can happily keep doing things that don't entail creating any new objects. I'm pretty sure that this is what happens in this case. (The String literal already exists, and the implementation of the println method is most likely copying characters into a buffer that was previously allocated.)
The JVM could potentially allocate objects that are smaller than the one that triggered the OOME.
The propagation of the OOME may cause variables containing object references to go out of scope, and the objects that they refer to may then become unreachable. A subsequent new may then trigger the GC that reclaims said objects, making space for new ones.
Note: the JLS does not specify that the String object that represents that literal must be created when the class is loaded. However, it certainly says that it may be created at class load time ... and that is what happens in any decent JVM.
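The third point above can be demonstrated with a small experiment (the 64MB figure and class name are assumptions; this presumes a default heap of at least a couple hundred megabytes):

```java
public class UnwindFreesMemory {
    static int reallocateAfterUnwind() {
        try {
            byte[] big = new byte[64 * 1024 * 1024]; // 64MB, live only inside the try
            throw new OutOfMemoryError("simulated mid-computation failure");
        } catch (OutOfMemoryError e) {
            // The stack has unwound, 'big' is out of scope, and its 64MB
            // can be reclaimed by a GC triggered by the next allocation.
            byte[] again = new byte[64 * 1024 * 1024];
            return again.length / (1024 * 1024);
        }
    }

    public static void main(String[] args) {
        System.out.println("reallocated " + reallocateAfterUnwind() + " MB after unwinding");
    }
}
```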
Another answer said this:
Maybe Java sets aside part of the heap for error handling or error handling doesn't need to allocate memory (allocation happens before the code is executed).
I think this is right. However, I think that this special heap region is only used while instantiating the OOME exception, and filling in the stack trace information. Once that has happened, the JVM goes back to using the regular heap. (It would be easy to get some evidence for this. Just retry the add call in the finally block. If it succeeds, that is evidence that something has made more memory available for general use.)
The JVM isn't really out of memory and unable to proceed. This error says that a particular allocation failed, and so it did. That might mean memory is very low, but here what failed was resizing the collection's internal array, which is huge. There's a lot of memory left, just not enough to double a large array. So execution can proceed just fine with the finally block.
The error is thrown when heap usage exceeds the limit set by the -Xmx flag and an allocation cannot proceed as normal. The error propagates, but it does not immediately cause the JVM to shut down (if the JVM exited in such cases, there would be no point in the OOM error, as it could never be caught).
As the JVM has not exited, it will try, according to the language spec, to execute the finally block.
The finally block almost always executes.
When the error was thrown, the JVM freed as much memory as possible, which, reading your code, probably means that it collected the whole s collection.
When the finally block is reached, it only has to reference the "In Finally" string from the string pool; no additional memory is required, and there are no problems since space was freed up before.
Try printing s.size() in the finally block; you'll see that it is not able to do it. (EDIT: if in the catch, the finally, or after the try block there's a line of code using the s collection, the garbage collector is unable to collect it at the moment the OOME is thrown. The heap memory will then be almost full, so any new object allocation may throw another OOME. It is difficult to predict without seeing the complete code.)
This may be a strange question, but do try/catch blocks add any memory overhead in a server environment compared to just running a particular block of code? For example, if I do a print stack trace, does the JVM hold on to more information? Is more information retained on the heap?
try {
    ... do something()
} catch (Exception e) {
    e.printStackTrace();
}
... do something()
The exception will have a reference to the stack trace. printStackTrace will allocate more memory as it formats that stack trace into something pretty.
The try/catch block will likely result in a largely static code/data segment, but not in run-time memory allocations.
The important thing here is that as soon as the exception variable e is no longer reachable (i.e., out of scope), it becomes eligible for garbage collection.
Technically the answer to your question is probably no. There are lots of reasons to avoid throwing Exceptions whenever possible, but memory isn't really a concern.
The real reason to only throw Exceptions for truly exceptional conditions is that it's SLOW. Generating an exception involves carefully examining the stack. It's not a fast operation at all. If you're doing it as part of your regular flow of execution, it will noticeably affect your speed. I once wrote a logging system that I thought was extremely clever because it automatically figured out which class had invoked it by generating an Exception and examining the stack in that manner. Eventually I had to go back and take that part out, because it was noticeably slowing everything else down.
The stack trace is built when the exception is created. Printing the stack trace doesn't do anything more memory intensive than printing anything else.
The try/catch block might have some performance overhead, but not in the form of increased memory requirements.
For the most part, don't worry about memory/performance when exceptions happen. If you have an exception in a common code path, that suggests you are misusing exceptions.
If your question is more for academic purposes, then I don't know the full extent of what is going on there in terms of heap/memory space. However, Joshua Bloch in "Effective Java" mentions that the catch block of the try catch block is often relatively unoptimized by most JVM implementations.
While not directly related to memory consumption, there was a thread here a while back discussing How slow are the Java exceptions? It is worth a look, in my opinion.
I also had this link in my bookmarks. As far as I can recall, it gave an example of the speedup possible when stack-trace generation is skipped on exception throw, but the site seems to be down now.