So today while learning Java, I finally encountered this particular error. It seems to be fairly common, and trying to recover from it has drawn mixed reactions: some think it is useful in certain scenarios and therefore a good thing to know, some have used it in their projects, others vehemently oppose the idea of catching this error, and plenty more are just as confused as I am.
Edit: By the way, this is the error I have encountered. I wish to catch these errors and reduce incrementor to 1/10th of its value each time, until another error is found (and so on, until I find the upper bound).
Since I'm taking my baby steps in Java and cannot find anything much specific on this topic, I'd like to ask your help on this particular example:
public class SpeedDemoClass {
    static int iterations = 0;
    static int incrementor = 10000000;

    public static void main(String[] args) {
        while (incrementor > 1) {
            try {
                iterations = iterations + incrementor;
                iterator(iterations);
            }
            catch (Exception e) {
                System.out.println("So this is the limiting error: " + e);
                iterations = iterations - incrementor;
                incrementor = incrementor / 10;
                iterations = iterations + incrementor;
            }
        }
        System.out.println("The upper limit of iterations is: " + iterations);
    }

    public static void iterator(int a) {
        long start_time = System.currentTimeMillis();
        StringBuilder sbuild = new StringBuilder("JAVA");
        for (int i = 0; i < a; i++) {
            sbuild.append("JAVA");
        }
        System.out.println("Performing " + a + " append operations;"
                + "process completed in :"
                + (System.currentTimeMillis() - start_time) + "ms");
    }
}
Did you try compiling it?
Of course it does not work.
Here's a short description of what I'm trying to do!
I am trying to do the following:
1. Initialize incrementor=10000000 and iterations=0, add incrementor to iterations, and pass the result to iterator().
2. If there is an Error, the program should catch it, decrease iterations by incrementor, divide incrementor by 10, and then try again.
3. Repeat steps 1 and 2 until it starts working.
The goal is to find the value of iterations at which the error appears, stay just below that level after each pass, and keep narrowing until the upper limit of iterations is found (i.e. when incrementor becomes 0).
With my own manual tests, I have found this value to be 92,274,686. That's the magic number. I don't know why it is so, or whether it is specific to my own computer. It would be awesome if someone could post modified code that spits out this result.
You catch Exception, but OutOfMemoryError is an Error. Note that Error and Exception are two different classes in Java! Both extend Throwable, though.
The catch clause restricts the type of Throwable classes to be caught. As Error is not an Exception, your code won't catch OutOfMemoryError.
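For illustration, here is a minimal sketch of the hierarchy at work (the class name CatchDemo and the oversized allocation are just mine for the demo):

public class CatchDemo {
    public static void main(String[] args) {
        try {
            // deliberately request a huge allocation
            int[] big = new int[Integer.MAX_VALUE - 8];
            System.out.println(big.length);
        } catch (Exception e) {
            // never reached for an OutOfMemoryError: Error does not extend Exception
            System.out.println("Caught Exception: " + e);
        } catch (OutOfMemoryError e) {
            // this is the branch that matches: OutOfMemoryError extends Error extends Throwable
            System.out.println("Caught Error: " + e);
        }
    }
}

The second catch block is the one that fires, because OutOfMemoryError extends Error, which extends Throwable directly and bypasses Exception entirely.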
That "magic number" will change if you allocate extra objects that use up memory, use a PC with less memory, and so on, or if you allocate memory more cleverly.
Trying to estimate memory consumption brute-force is a bad idea!
Instead, understand how Java memory allocations work, how the memory layout of a string builder is, and how to control this. Here, you should study the method ensureCapacityInternal.
Look at the error you get - in particular the classes and line numbers.
I bet that if you replace this line:
StringBuilder sbuild= new StringBuilder("JAVA");
by
StringBuilder sbuild= new StringBuilder(500000000);
then you get much further before seeing an OOM error. Catching this error is not a very clever approach. There are nicer ways to observe memory consumption, but you really should use them only once you have understood Java's memory organization, so you don't draw the wrong conclusions.
Your PC probably has 8 GB of RAM? Try using java -Xmx7500m. You may be able to go 3x as far. Because by default, Java only uses 25% of your memory simply to allow other processes to use some memory, too. If you want Java to use all your memory, you have to tell it that this is desired.
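If you want to see what ceiling the JVM is actually working with, a small sketch like this (no special flags assumed) prints the limits via the standard Runtime API:

public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx (or the roughly-a-quarter-of-RAM default)
        System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}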
To catch the OutOfMemoryError, just replace Exception in the catch clause with OutOfMemoryError:
catch(OutOfMemoryError e)
Out of memory generally means you can't do much else; the VM is already crashing. While it might seem you can do something, like deallocate memory, Java does all memory management for you. You can't deallocate anything; you can drop references to objects and then trigger garbage collection, but at the moment the error occurs it's too late. Almost anything you could do requires memory allocation, and you have none left.
In general all errors that are thrown by the VM cannot be recovered from. Usually because the VM itself is unstable, and you don't have access to the internals of the VM to fix it. Not only that, but it would be the rare case that your software could fix it.
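One partial workaround you sometimes see (a sketch of the idea, not a recommendation; the names and sizes are mine) is to pre-allocate a small reserve buffer up front and drop it inside the catch block, so the handler itself has a little memory left to work with:

import java.util.ArrayList;
import java.util.List;

public class ReserveDemo {
    // a small pre-allocated reserve, dropped only when the handler needs headroom
    private static byte[] reserve = new byte[1024 * 1024];

    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<>();
        try {
            while (true) {
                hog.add(new byte[1024 * 1024]); // fill the heap, 1 MB at a time
            }
        } catch (OutOfMemoryError e) {
            reserve = null; // give the handler a little memory to work with
            hog.clear();    // drop the references we were holding
            System.out.println("Recovered enough to log: " + e);
        }
    }
}

Even then, all you can realistically do is log and shut down cleanly.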
I was going to post this solution, but somebody beat me to it. Earlier today I had overlooked a solution presented in one of the similar questions I mentioned at the beginning of this question.
Well, just as a general courtesy, so that people in the future do not overlook a right-in-your-face solution like I did, I'm posting an answer as easy as they come*. Here are the little modifications that made it happen. It seems it is possible to catch java.lang.OutOfMemoryError, although I do not yet know the implications of what I have really done.
* Please feel free to correct me and read the end of this answer :)
I'll update this answer in the future to reflect anything I discover later (discrepancies and such).
So without further ado, here's the code in its raw glory:
public class SpeedDemoClass {
    static int iterations = 0;
    static int incrementor = 10000000;

    public static void main(String[] args) {
        while (incrementor > 0) {
            try {
                iterations = iterations + incrementor;
                int a = iterations;
                long start_time = System.currentTimeMillis();
                StringBuilder sbuild = new StringBuilder("JAVA");
                for (int i = 0; i < a; i++) {
                    sbuild.append("JAVA");
                }
                System.out.println("Performing " + a + " append operations;"
                        + "process completed in :"
                        + (System.currentTimeMillis() - start_time) + "ms");
            }
            catch (OutOfMemoryError e) {
                System.out.println("OutOfMemory bound reached beyond this point with error: " + e
                        + "\nReverting back, and Changing the value of incrementor by 1/10th...");
                iterations = iterations - incrementor;
                incrementor = incrementor / 10;
                iterations = iterations + incrementor;
            }
        }
        System.out.println("The upper limit of iterations is: " + iterations);
    }
}
Changelog:
If it is not yet apparent, I have made some tiny changes to the original code.
(1) I ditched the method iterator() because I now realize it made it harder for people to see the intent of the question.
(2) I changed catch(Exception e) to catch(OutOfMemoryError e). It turns out there is a solution available, and I had overlooked it earlier.
OUTPUT:
It seems to work perfectly for me, and now I have found the magic number. I welcome any comments pointing out whether this answer is wrong, and why. :)
I'm reading a blog post and trying to understand what's going on.
This is the blog post.
It has this code:
if (validation().hasErrors())
throw new IllegalArgumentException(validation().errorMessage());
In the validation() method we have some object initialization and calculations, so let's say it's an expensive call. Is it going to be executed twice? Or will the compiler optimize it into something like this?
var validation = validation();
if (validation.hasErrors())
throw new IllegalArgumentException(validation.errorMessage());
Thanks!
The validation method will be called twice, and it will do the same work each time. First, the method is relatively big, and so it won't get inlined. Without being inlined, the compiler doesn't know what it does. Therefore, it safely assumes that the method has side effects, and so it cannot optimize away the second call.
Even if the method was inlined, and the compiler could examine it, it would see that there are in fact side effects. Calling LocalDate.now() returns a different result each time. For this reason, the code that you linked to is defective, although it's not likely to experience a problem in practice.
It's safer to capture the validation result in a local variable not for performance reasons, but for stability reasons. Imagine the odd case in which the initial validation call fails, but the second call passes. You'd then throw an exception with no message in it.
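To make that concrete, here is a tiny sketch; validation() below is a hypothetical stand-in for the blog's method, with a counter playing the role of the LocalDate.now() side effect:

public class ValidationDemo {
    static int calls = 0;

    // hypothetical stand-in for the blog's validation(); the counter mimics a side effect
    static String validation() {
        calls++;
        return "result #" + calls;
    }

    public static void main(String[] args) {
        // written like the blog post: the expensive method runs twice
        if (!validation().isEmpty()) {
            System.out.println(validation()); // prints "result #2"
        }

        // captured once: one call, one consistent value
        String v = validation();
        if (!v.isEmpty()) {
            System.out.println(v); // prints "result #3"
        }
    }
}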
The Java-to-bytecode compiler has a limited set of optimization techniques (e.g. 9*9 in a condition would be folded into 81).
The real optimization happens in the JIT (Just In Time) compiler. This compiler is the result of over a decade and a half of extensive research, and there is no simple answer as to what it is capable of in every scenario.
That being said, as a good practice, I always handle repeated identical method calls by storing their result before entering any loop structure where that result is needed. Example:
int[] grades = new int[500];
int countOfGrades = grades.length;
for (int i = 0; i < countOfGrades; i++) {
    // Some code here
}
For your code (which is only run twice), you shouldn't worry much about such optimization. But if you're looking for the ultimate, guaranteed optimization at the cost of a fraction of space (which is cheap), then you're better off using a variable to store the result of any method call that is needed more than once:
var validation = validation();
if (validation.hasErrors())
throw new IllegalArgumentException(validation.errorMessage());
However, I must simply question ... "these days," does it even actually matter anymore? Simply write the source-code "in the most obvious manner available," as the original programmer certainly did.
"Microseconds" really don't matter anymore. But, "clarity still does." To me, the first version of the code is frankly more understandable than the second, and "that's what matters to me most." Please don't bother to try to "out-smart" the compiler, if it results in source-code that is in any way harder to understand.
I have come across two samples of code from a legacy system, and I'm at a loss to understand why someone would code like this. The app is in Java and is about 10-15 years old.
Written like this, it seems so inefficient and hard to understand.
if (condition) {
    String[] hdtTmp = { "Range (Brand):", "Build Region:", "Design:", "Plan (Size):", "Facade:", "Region:", "Internal Colour", "External Colour" };
    hdt = hdtTmp;
    String[] hddTmp = { p.RangeName, brName, p.HomeName, p.Name, f.Name, "North", "Red", "Blue" };
    hdd = hddTmp;
    hddTmp = null;
    hdtTmp = null;
}
I do not understand why you would not just assign the arrays to the fields in the first place. And since hdtTmp and hddTmp are local to the block, why set them to null?
max = hdt.length - 1;
for (int i = 0; ; i++) {
    // do some stuff here
    if (i == max)
        break;
}
Again, it seems the original programmer didn't know how for loops work.
They never taught this when I did my degree, so my question is: why would anyone write code like this?
Setting local variables to null... Well, back in the 90s some old Sun documentation suggested that this would help, but it's been so long that the information is probably no longer available, as it is no longer correct. I have run into this in a lot of old code, and at this point it does nothing, since local variables lose their reference to the object as soon as the method exits and the GC is smart enough to figure that out.
The for() loop question is more a case of someone putting the exit criterion inside the loop body. It's just unfortunate coding style, probably written by a relatively junior developer.
This code definitely looks like someone writing Java after learning C/C++. I have seen plenty of it: C/C++ people were taught to clean up after every allocation, and assigning null was something that made them happy (at least the ones I worked with back in the day). Another thing they did was override the finalize method, which is bad form in Java but was the closest thing they had to destructors.
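For what it's worth, here is a rough sketch of the modern idiom those habits were reaching for: deterministic cleanup via AutoCloseable and try-with-resources instead of finalize (the class name is made up):

public class NativeHandle implements AutoCloseable {
    public NativeHandle() {
        System.out.println("resource acquired");
    }

    @Override
    public void close() {
        // deterministic cleanup instead of relying on finalize()
        System.out.println("resource released");
    }

    public static void main(String[] args) {
        // close() runs automatically when the block exits, even if an exception is thrown
        try (NativeHandle handle = new NativeHandle()) {
            System.out.println("using " + handle);
        }
    }
}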
You may also see infinite loops with stopping condition inside like:
for (;;) {
    // Do stuff
    if (something_happened)
        break;
}
Old (bad) habits die hard.
There is one narrow use case for nulling variables, even in otherwise ordinary code.
If the code between assigning null and the end of the variable's scope (often the end of the method) is long-running, nulling gives the GC an earlier opportunity to collect the object previously referenced.
On the surface this is a micro optimisation, but it can have security benefits: if a memory dump or other similar snapshot (eg by a profiler) was to occur it could (very slightly) reduce the chance that sensitive data was part of the dump.
These "benefits" are pretty tenuous though.
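To make that narrow case concrete, it looks roughly like this (the names and sizes are invented):

public class NullingSketch {
    public static void main(String[] args) throws InterruptedException {
        byte[] sensitive = new byte[50 * 1024 * 1024]; // large, possibly sensitive buffer
        process(sensitive);

        // the buffer is no longer needed, but this method keeps running for a while;
        // nulling the local gives the GC an earlier chance to reclaim it
        sensitive = null;

        Thread.sleep(60_000); // stand-in for the "longish running" remainder of the method
    }

    private static void process(byte[] data) {
        System.out.println("processing " + data.length + " bytes");
    }
}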
The code in the question however is utterly useless.
This question already has answers here:
Can the JVM recover from an OutOfMemoryError without a restart
(7 answers)
Closed 6 years ago.
It says here that it is impossible to recover from errors.
I am not sure what that means, because I can catch an Error just like an Exception.
Eg:
public static void main(String args[]) {
    while (true) {
        try {
            throw new StackOverflowError("stackoverflow");
        } catch (Throwable throwable) {
            // do something
        }
        // the program recovers and continues to execute
    }
}
The above program executes normally, and it seems that it's possible to recover from errors. Can anyone help?
UPDATE:
The StackOverflowError example is a little puzzling; only an idiot would want to recover from a stack overflow. Here is another example, about OutOfMemoryError:
import java.util.HashMap;
import java.util.Map;

public class Main {
    public static Map<Long, Object> cache = null;

    public static void main(String args[]) {
        while (true) {
            try {
                if (cache == null) {
                    cache = new HashMap<>();
                }
                for (long i = 0; i < Long.MAX_VALUE; ++i) {
                    cache.put(i, i);
                }
            } catch (OutOfMemoryError error) {
                cache.clear(); // delete some unused entries or clear everything
                cache = null;
                System.gc();
                System.out.println("release memory");
            }
        }
    }
}
It's a simple HashMap cache. Execute the code using java -Xmx1m Main and we will see an OutOfMemoryError soon; then the memory is released manually and the program continues to execute: insert -> OutOfMemoryError -> release -> insert...
See? The program has recovered from an OutOfMemoryError, hasn't it? And I think it's meaningful. So why do people still say a program can't recover from an OutOfMemoryError?
Obviously, the mere ability to "catch" an exception does not mean that you will (always ...) be able to "recover from" it ...
... i.e.: "to sally forth, entirely unscathed, as though 'such an inconvenient event' never had occurred in the first place."
I think that the essential point of the original author is more-or-less that: Even though the poor souls on the Titanic might have "caught" the fact that their ship was sinking, there was nothing that they could do to "recover" the ship and to sail it on unscathed to New York.
--- Edit:
"Now, let me add one more thought!" (As other Answerers have also done here.) Sometimes, an Error Exception is intentionally "thrown" in some block of code that has intentionally been surrounded by a catch-block.
Maybe the error (which is itself "an object") is of a particular user-defined type that the catcher will recognize. (He can "re-throw" any that he does not recognize, or simply decline to catch it.) This can be a very elegant and efficient way to handle, well, "the exceptions to the rule." (Hence the name...)
Whenever a block of code encounters a situation that "only happens once in a blue moon, but it just did," it can throw an exception high into the air, knowing that the nearest catcher will catch it. This catcher, in turn, can recognize the thing that just landed into his baseball-mitt, and respond accordingly. There can be any number of "catchers" out there, looking for different things.
This strategy is not "an Error," but rather an intentional alternative flow-of-control within the program.
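A rough sketch of that kind of intentional flow-of-control, using a user-defined unchecked exception (all names here are mine, not from the question):

public class FlowControlSketch {
    // user-defined type that a specific catcher can recognize
    static class BlueMoonEvent extends RuntimeException {
        BlueMoonEvent(String message) {
            super(message);
        }
    }

    static void deepInsideSomeWork(int value) {
        if (value == 42) {
            // the rare case: hand control to whoever is prepared for it
            throw new BlueMoonEvent("once in a blue moon: " + value);
        }
        System.out.println("normal handling of " + value);
    }

    public static void main(String[] args) {
        for (int i = 40; i <= 44; i++) {
            try {
                deepInsideSomeWork(i);
            } catch (BlueMoonEvent e) {
                // this catcher recognizes the type and responds accordingly
                System.out.println("special handling: " + e.getMessage());
            }
        }
    }
}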
Consider the following code which actually throws a StackOverflowError
public class Code
{
    public static void main(String[] args)
    {
        try
        {
            f();
        }
        catch (Throwable t)
        {
            System.out.println("OK I'm recovered, let me try that again");
            try
            {
                f();
            }
            catch (Throwable t2)
            {
                System.out.println("What's going on here??");
                // let me try one more time..
                try
                {
                    f();
                }
                catch (Throwable t3)
                {
                    System.out.println("Why can't I really recover");
                }
            }
        }
    }

    // Bad recursion...
    private static void f()
    {
        f();
    }
}
Do you really want to continue execution knowing that your code has such an issue? What if you need to call f() again somewhere down the line? Would you want code in production that tries to continue even though a method keeps throwing a StackOverflowError?
You're just catching the Throwable here, before it ever gets to the point where it's an error. Throwable is the superclass of both errors and exceptions. Therefore, a Throwable is not comparable to an error itself, much less a StackOverflowError, and your program is not recovering from an error.
An exception is sort of like a misdemeanor charge. It's not a fantastic thing to happen to you, or rather, your program, but it definitely won't land you in jail for extended periods of time. However, an error is dangerous - it is on the level of a felony, to continue this analogy. Java is considered safer than, say, C, because it will tell you when you are doing something that is unsafe. C, on the other hand, will allow you to access data that is beyond the scope of an array, for example, and you'll get awful bugs that are very difficult to debug that way.
The analogy to crime ends after that point. If you had scrolled down a little further on the page that you linked to, it continues to delineate the reasons why exceptions and errors are different, namely, that "errors are caused by the environment in which the application is running," and therefore, because your application is only in charge of itself, and not the actual environment, it cannot of its own volition recover from an error.
The page also gives the example of an OutOfMemoryError. While you, as the programmer, can plan ahead to try to prevent an OutOfMemoryError by making your code as efficient as you can, you won't be able to prevent such an error if the code is run on a machine that has, say, 1 byte of RAM. Errors generally cannot be recovered from because they are (mostly) out of the hands of the application itself. It's up to the programmer, as in Michael Markidis' example, not to cause an error and to avoid error-causing programming techniques (such as the "bad recursion" above) that will crash your program.
For a homework exercise involving exception handling, I need to produce an OutOfMemoryError exception so I can write a try-catch and catch it:
"13.10 (OutOfMemoryError) Write a program that causes the JVM to throw an
OutOfMemoryError and catches and handles this error."
I searched the Java API and couldn't find anything on OutOfMemoryError exceptions in the exception list. What is an OutOfMemoryError exception, and how can I produce one for my assignment?
Much simpler (and guaranteed to work) than creating millions of objects:
public static void throwOomE() {
throw new well, you can just search for the rest of the answer
}
The easiest way I can think of:
for (String x = "x";; x += x);
You might want to get a bit creative to crash it even faster.
OutOfMemoryError is thrown when the JVM can't allocate enough memory to complete the requested action.
To produce one, just allocate a bunch of memory in a loop or something similar, but remember to keep references to the objects you have already allocated, preferably in an ArrayList or similar, or else the garbage collector might reclaim the space and free up some memory.
Something like
ArrayList<Object> list = new ArrayList<Object>();
try {
    while (true)
        list.add(new Object());
} catch (OutOfMemoryError e) {
    // And we are done...
}
You might want to replace Object with a class that takes up some considerable amount of space, or else this might take some time.
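For instance, something roughly like this (the 1 MB chunk size is arbitrary) hits the limit much sooner:

import java.util.ArrayList;

public class OomSketch {
    public static void main(String[] args) {
        ArrayList<byte[]> list = new ArrayList<>();
        try {
            while (true) {
                list.add(new byte[1024 * 1024]); // 1 MB per iteration, references are kept
            }
        } catch (OutOfMemoryError e) {
            list.clear(); // drop the references so we can safely print
            System.out.println("Caught: " + e);
        }
    }
}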
Reference:
http://docs.oracle.com/javase/1.4.2/docs/api/java/lang/OutOfMemoryError.html
Much like I do with every java related question, the first thing I did was google:
java 6 <class-name>
In other words:
java 6 OutOfMemoryError
I got this informative link as the first result:
http://docs.oracle.com/javase/6/docs/api/java/lang/OutOfMemoryError.html
My favorite way to do this is:
byte[] crasher = new byte[Integer.MAX_VALUE];
An OutOfMemoryError is thrown by the VM when it runs out of memory for allocating new objects. Thus you can cause it to happen by creating too many objects.
To do so, start creating objects that will not be collected by the garbage collector (for example, link them to each other) in an infinite loop -- the VM will eventually run out of free memory for the next object to be allocated, throwing the exception. When you catch it, be careful though as you won't be able to do much -- the VM does not have (much) memory left for any operations.
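As a sketch of "objects linked to each other so the collector cannot reclaim them", an ever-growing chain of nodes does the job (the names are illustrative):

public class LinkedOom {
    // each node keeps a reference to the previous one, so nothing is collectable
    static class Node {
        Node previous;
        byte[] payload = new byte[1024];

        Node(Node previous) {
            this.previous = previous;
        }
    }

    public static void main(String[] args) {
        Node head = null;
        try {
            while (true) {
                head = new Node(head);
            }
        } catch (OutOfMemoryError e) {
            head = null; // drop the chain before doing anything else
            System.out.println("Caught: " + e);
        }
    }
}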
An OutOfMemoryError appears when your heap has no free space left for new objects.
You also need to keep references to the objects somewhere, otherwise the garbage collector will take care of them.
A binary crash (the number of bytes needed doubles on each iteration :)
String x = "1";
while (true) {
    x = (x + x);
}
** read all references for better understanding
This question already has answers here:
What are the effects of exceptions on performance in Java?
(18 answers)
Closed 9 years ago.
Do you know how expensive exception throwing and handling in Java is?
We have had several discussions about the real cost of exceptions in our team. Some avoid them as much as possible, while others say the performance loss from using exceptions is overrated.
Today I found the following piece of code in our software:
private void doSomething()
{
    try
    {
        doSomethingElse();
    }
    catch (DidNotWorkException e)
    {
        log("A Message");
    }
    goOn();
}

private void doSomethingElse()
{
    if (isSoAndSo())
    {
        throw new DidNotWorkException();
    }
    goOnAgain();
}
How is the performance of this compared to
private void doSomething()
{
    doSomethingElse();
    goOn();
}

private void doSomethingElse()
{
    if (isSoAndSo())
    {
        log("A Message");
        return;
    }
    goOnAgain();
}
I don't want to discuss code aesthetics or anything; this is just about runtime behaviour!
Do you have real experiences/measurements?
Exceptions are not free... so they are expensive :-)
The book Effective Java covers this in good detail.
Item 39: Use exceptions only for exceptional conditions.
Item 40: Use exceptions for recoverable conditions.
The author found that exceptions resulted in the code running 70 times slower for his test case, on his machine, with his particular VM and OS combo.
The slowest part of throwing an exception is filling in the stack trace.
If you pre-create your exception and re-use it, the JIT may optimize it down to "a machine level goto."
All that having been said, unless the code from your question is in a really tight loop, the difference will be negligible.
The slow part about exceptions is building the stack trace (in the constructor of java.lang.Throwable), which depends on stack depth. Throwing in itself is not slow.
Use exceptions to signal failures. The performance impact then is negligible and the stack trace helps to pin-point the failure's cause.
If you need exceptions for control flow (not recommended), and profiling shows that exceptions are the bottleneck, then create an Exception subclass that overrides fillInStackTrace() with an empty implementation. Alternatively (or additionally) instantiate only one exception, store it in a field and always throw the same instance.
The following demonstrates exceptions without stack traces by adding one simple method to the micro benchmark (albeit flawed) in the accepted answer:
public class DidNotWorkException extends Exception {
    public Throwable fillInStackTrace() {
        return this;
    }
}
Running it using the JVM in -server mode (version 1.6.0_24 on Windows 7) results in:
Exception:99ms
Boolean:12ms
Exception:92ms
Boolean:11ms
The difference is small enough to be ignorable in practice.
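And for the "instantiate only one exception and re-use it" variant, something roughly like this (a sketch, not taken from the answer above):

public class ReusedException {
    // one shared instance, created once; its stack trace is suppressed
    private static final DidNotWorkException DID_NOT_WORK = new DidNotWorkException();

    static class DidNotWorkException extends Exception {
        @Override
        public synchronized Throwable fillInStackTrace() {
            return this; // skip the expensive stack walk
        }
    }

    static void doSomethingElse(boolean soAndSo) throws DidNotWorkException {
        if (soAndSo) {
            throw DID_NOT_WORK; // throwing the pre-built instance costs very little
        }
    }

    public static void main(String[] args) {
        try {
            doSomethingElse(true);
        } catch (DidNotWorkException e) {
            System.out.println("caught shared instance: " + (e == DID_NOT_WORK));
        }
    }
}

Note that the re-used instance carries no stack trace and no call-specific state, so it is only suitable when the exception is used purely as a signal.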
I haven't read up on exceptions in depth, but from a very quick test with some modified code of yours I conclude that the exception case is quite a lot slower than the boolean case.
I got the following results:
Exception:20891ms
Boolean:62ms
From this code:
public class Test {
    public static void main(String args[]) {
        Test t = new Test();
        t.testException();
        t.testBoolean();
    }

    public void testException() {
        long start = System.currentTimeMillis();
        for (long i = 0; i <= 10000000L; ++i)
            doSomethingException();
        System.out.println("Exception:" + (System.currentTimeMillis() - start) + "ms");
    }

    public void testBoolean() {
        long start = System.currentTimeMillis();
        for (long i = 0; i <= 10000000L; ++i)
            doSomething();
        System.out.println("Boolean:" + (System.currentTimeMillis() - start) + "ms");
    }

    private void doSomethingException() {
        try {
            doSomethingElseException();
        } catch (DidNotWorkException e) {
            // Msg
        }
    }

    private void doSomethingElseException() throws DidNotWorkException {
        if (!isSoAndSo()) {
            throw new DidNotWorkException();
        }
    }

    private void doSomething() {
        if (!doSomethingElse())
            ; // Msg
    }

    private boolean doSomethingElse() {
        if (!isSoAndSo())
            return false;
        return true;
    }

    private boolean isSoAndSo() { return false; }

    public class DidNotWorkException extends Exception {}
}
I foolishly didn't read my code well enough and previously had a bug in it (how embarrassing). If someone could triple-check this code I'd very much appreciate it, just in case I'm going senile.
My specification is:
Compiled and run on 1.5.0_16
Sun JVM
WinXP SP3
Intel Centrino Duo T7200 (2.00 GHz, 977 MHz)
2.00 GB RAM
In my opinion, you should note that the non-exception methods don't log the error in doSomethingElse but instead return a boolean, so the calling code can deal with the failure. If there are multiple places where this can fail, then logging an error inside, or throwing an exception, might be needed.
This is inherently JVM specific, so you should not blindly trust whatever advice is given, but actually measure in your situation. It shouldn't be hard to create a "throw a million Exceptions and print out the difference of System.currentTimeMillis" to get a rough idea.
For the code snippet you list, I would personally require the original author to thoroughly document why he used exception throwing here as it is not the "path of least surprises" which is crucial to maintaining it later.
(Whenever you do something in a convoluted way, you cause unnecessary work for the reader, who has to understand why you did it that way instead of the usual way. In my opinion, that work must be justified by the author carefully explaining why it was done like that, as there MUST be a reason.)
Exceptions are a very, very useful tool, but should only be used when necessary :)
I have no real measurements, but throwing an exception is more expensive.
Ok, this is a link regarding the .NET framework, but I think the same applies to Java as well:
exceptions & performance
That said, you should not hesitate to use them when appropriate. That is: do not use them for flow control, but use them when something exceptional happened, something you didn't expect to happen.
I think if we stick to using exceptions where they are needed (exceptional conditions), the benefits far outweigh any performance penalty you might be paying. I say might since the cost is really a function of the frequency with which exceptions are thrown in the running application.
In the example you give, it looks like the failure is not unexpected or catastrophic, so the method should really be returning a bool to signal its success status rather than using exceptions, thus making them part of regular control flow.
In the few performance improvement efforts I have been involved in, the cost of exceptions has been fairly low. You would spend far more time improving the complexity of common, highly repeated operations.
Thank you for all the responses.
I finally followed Thorbjørn's suggestion and wrote a little test program, measuring the performance myself. The result: no difference between the two variants (in terms of performance).
Even though I didn't ask about code aesthetics or the like, i.e. what exceptions are intended for, most of you addressed that topic as well. But in reality things are not always that clear... In the case under consideration the code was born a long time ago, when the situation in which the exception is thrown seemed to be an exceptional one. Today the library is used differently, the behaviour and usage of the various applications have changed, test coverage is not very good, but the code still does its job, just a little bit too slowly (that's why I asked about performance!). In that situation, I think, there should be a good reason for changing from A to B, which, in my opinion, can't be "That's not what exceptions were made for!".
It turned out that the logging ("A Message") is (compared to everything else happening) very expensive, so I think I'll get rid of it.
EDIT:
The test code is exactly like the one in the original post, called by a method testPerformance() in a loop surrounded by System.currentTimeMillis() calls to get the execution time... but:
I reviewed the test code, turned off everything else (the log statement), looped 100 times more than before, and it turns out that you save 4.7 seconds per million calls when using B instead of A from the original post. As Ron said, fillInStackTrace is the most expensive part (+1 for that), and you can save nearly the same (4.5 seconds) if you override it (in case you don't need it, like me). All in all it's still a near-zero difference in my case, since the code is called 1000 times an hour and the measurements show I can save 4.5 milliseconds in that time...
So the first part of my answer above was a little misleading, but what I said about balancing the cost and benefit of a refactoring remains true.
I think you're asking this from slightly the wrong angle. Exceptions are designed to be used to signal exceptional cases, and as a program flow mechanism for those cases. So the question you should be asking is, does the "logic" of the code call for exceptions.
Exceptions are generally designed to perform well enough for the use for which they are intended. If they're used in such a way that they become a bottleneck, then above all that's probably an indication that they're being used for the wrong thing, full stop; i.e. what you have underneath is a program design problem rather than a performance problem.
Conversely, if the exception appears to be being "used for the right thing", then that probably means it'll also perform OK.
Let's say an exception won't occur when executing statements 1 and 2. Is there ANY performance difference between those two code samples?
If not, what if the DoSomething() method has to do a huge amount of work (loads of calls to other methods, etc.)?
1:
try
{
    DoSomething();
}
catch (...)
{
    ...
}
2:
DoSomething();