Do you know how expensive exception throwing and handling in Java is?
We had several discussions about the real cost of exceptions in our team. Some avoid them wherever possible, some say the performance loss from using exceptions is overrated.
Today I found the following piece of code in our software:
private void doSomething()
{
try
{
doSomethingElse();
}
catch(DidNotWorkException e)
{
log("A Message");
}
goOn();
}
private void doSomethingElse()
{
if(isSoAndSo())
{
throw new DidNotWorkException();
}
goOnAgain();
}
How is the performance of this compared to
private void doSomething()
{
doSomethingElse();
goOn();
}
private void doSomethingElse()
{
if(isSoAndSo())
{
log("A Message");
return;
}
goOnAgain();
}
I don't want to discuss code aesthetics or anything like that, it's just about runtime behaviour!
Do you have real experiences/measurements?
Exceptions are not free... so they are expensive :-)
The book Effective Java covers this in good detail.
Item 39: Use exceptions only for exceptional conditions.
Item 40: Use exceptions for recoverable conditions.
The author found that exceptions resulted in the code running 70 times slower for his test case on his machine with his particular VM and OS combo.
The slowest part of throwing an exception is filling in the stack trace.
If you pre-create your exception and re-use it, the JIT may optimize it down to "a machine level goto."
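For illustration, a minimal sketch of that reuse pattern might look like this (requires Java 7+ for the four-argument Throwable constructor; the class and field names are invented for this example):
// Pre-built, shared exception with no stack trace. Only reach for this when
// profiling shows exception construction is the bottleneck, since the missing
// trace makes debugging harder.
public class QuickDidNotWorkException extends Exception {

    public static final QuickDidNotWorkException INSTANCE = new QuickDidNotWorkException();

    private QuickDidNotWorkException() {
        // message, cause, enableSuppression, writableStackTrace
        super("did not work", null, false, false);
    }
}

// Usage: throw QuickDidNotWorkException.INSTANCE; instead of new DidNotWorkException()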
All that having been said, unless the code from your question is in a really tight loop, the difference will be negligible.
The slow part about exceptions is building the stack trace (in the constructor of java.lang.Throwable), which depends on stack depth. Throwing in itself is not slow.
Use exceptions to signal failures. The performance impact then is negligible and the stack trace helps to pin-point the failure's cause.
If you need exceptions for control flow (not recommended), and profiling shows that exceptions are the bottleneck, then create an Exception subclass that overrides fillInStackTrace() with an empty implementation. Alternatively (or additionally) instantiate only one exception, store it in a field and always throw the same instance.
The following demonstrates exceptions without stack traces by adding one simple method to the micro benchmark (albeit flawed) in the accepted answer:
public class DidNotWorkException extends Exception {
    @Override
    public Throwable fillInStackTrace() {
        return this; // skip the expensive stack-trace capture
    }
}
Running it using the JVM in -server mode (version 1.6.0_24 on Windows 7) results in:
Exception:99ms
Boolean:12ms
Exception:92ms
Boolean:11ms
The difference is small enough to be ignorable in practice.
I haven't bothered to read up on exceptions, but doing a very quick test with some modified code of yours I came to the conclusion that the exception case is quite a lot slower than the boolean case.
I got the following results:
Exception:20891ms
Boolean:62ms
From this code:
public class Test {
public static void main(String args[]) {
Test t = new Test();
t.testException();
t.testBoolean();
}
public void testException() {
long start = System.currentTimeMillis();
for(long i = 0; i <= 10000000L; ++i)
doSomethingException();
System.out.println("Exception:" + (System.currentTimeMillis()-start) + "ms");
}
public void testBoolean() {
long start = System.currentTimeMillis();
for(long i = 0; i <= 10000000L; ++i)
doSomething();
System.out.println("Boolean:" + (System.currentTimeMillis()-start) + "ms");
}
private void doSomethingException() {
try {
doSomethingElseException();
} catch(DidNotWorkException e) {
//Msg
}
}
private void doSomethingElseException() throws DidNotWorkException {
if(!isSoAndSo()) {
throw new DidNotWorkException();
}
}
private void doSomething() {
if(!doSomethingElse())
;//Msg
}
private boolean doSomethingElse() {
if(!isSoAndSo())
return false;
return true;
}
private boolean isSoAndSo() { return false; }
public class DidNotWorkException extends Exception {}
}
I foolishly didn't read my code well enough and previously had a bug in it (how embarrassing); if someone could triple-check this code I'd very much appreciate it, just in case I'm going senile.
My specification is:
Compiled and run on 1.5.0_16
Sun JVM
WinXP SP3
Intel Centrino Duo T7200 (2.00 GHz, 977 MHz)
2.00 GB Ram
In my opinion you should notice that the non-exception methods don't log the error in doSomethingElse but instead return a boolean so that the calling code can deal with the failure. If there are multiple places where this can fail, then logging an error inside the method or throwing an exception might be needed.
This is inherently JVM specific, so you should not blindly trust whatever advice is given, but actually measure in your situation. It shouldn't be hard to create a "throw a million Exceptions and print out the difference of System.currentTimeMillis" to get a rough idea.
For the code snippet you list, I would personally require the original author to thoroughly document why he used exception throwing here as it is not the "path of least surprises" which is crucial to maintaining it later.
(Whenever you do something in a convoluted way, you cause unnecessary work for the reader, who has to understand why you did it like that instead of just the usual way. In my opinion that work must be justified by the author carefully explaining why it was done like that, as there MUST be a reason.)
Exceptions are a very, very useful tool, but should only be used when necessary :)
I have no real measurements, but throwing an exception is more expensive.
Ok, this is a link regarding the .NET framework, but I think the same applies to Java as well:
exceptions & performance
That said, you should not hesitate to use them when appropriate. That is: do not use them for flow control, but use them when something exceptional happened; something that you didn't expect to happen.
I think if we stick to using exceptions where they are needed (exceptional conditions), the benefits far outweigh any performance penalty you might be paying. I say might since the cost is really a function of the frequency with which exceptions are thrown in the running application.
In the example you give, it looks like the failure is not unexpected or catastrophic, so the method should really be returning a bool to signal its success status rather than using exceptions, thus making them part of regular control flow.
In the few performance-improvement efforts that I have been involved in, the cost of exceptions has been fairly low. You would be far better off spending your time improving the complexity of common, highly repeated operations.
Thank you for all the responses.
I finally followed Thorbjørn's suggestion and wrote a little test program, measuring the performance myself. The result is: no difference between the two variants in terms of performance.
Even though I didn't ask about code aesthetics or the like, i.e. what the intention of exceptions was etc., most of you also addressed that topic. But in reality things are not always that clear... In the case under consideration, the code was born a long time ago, when the situation in which the exception is thrown seemed to be an exceptional one. Today the library is used differently, the behaviour and usage of the different applications have changed, test coverage is not very good, but the code still does its job, just a little bit too slowly (that's why I asked about performance!). In that situation I think there should be a good reason for changing from A to B, which, in my opinion, can't just be "That's not what exceptions were made for!".
It turned out that the logging ("A Message") is (compared to everything else happening) very expensive, so I think I'll get rid of it.
EDIT:
The test code is exactly like the one in the original post, called by a method testPerformance() in a loop which is surrounded by System.currentTimeMillis() calls to get the execution time... but:
I reviewed the test code now, turned off everything else (the log statement) and looped 100 times more than before, and it turns out that you save 4.7 sec for a million calls when using B instead of A from the original post. As Ron said, fillInStackTrace is the most expensive part (+1 for that) and you can save nearly the same (4.5 sec) if you override it (in the case you don't need it, like me). All in all it's still a nearly-zero difference in my case, since the code is called 1000 times an hour and the measurements show I can save 4.5 millis in that time...
So, my 1st answer part above was a little misleading, but what I said about balancing the cost-benefit of a refactoring remains true.
I think you're asking this from slightly the wrong angle. Exceptions are designed to be used to signal exceptional cases, and as a program flow mechanism for those cases. So the question you should be asking is, does the "logic" of the code call for exceptions.
Exceptions are generally designed to perform well enough for the use for which they are intended. If they're used in such a way that they're a bottleneck, then above all, that's probably an indication that they're just being used for "the wrong thing", full stop; i.e. what you have underneath is a program design problem rather than a performance problem.
Conversely, if the exception appears to be being "used for the right thing", then that probably means it'll also perform OK.
Let's say an exception won't occur when trying to execute statements 1 and 2. Are there ANY performance differences between those two code samples?
If no, what if the DoSomething() method has to do a huuuge amount of work (loads of calls to other methods, etc.)?
1:
try
{
DoSomething();
}
catch (...)
{
...
}
2:
DoSomething();
I'm trying to implement a validation module used for handling events. The validation module is based on a simple interface:
public interface Validator {
Optional<ValidationException> validate(Event event);
}
The existing code base in my team relies on this exception-wrapping mechanism - I cannot really change it.
I have encountered problems when implementing a new validator that is responsible for validating a single event, in two respects.
Assume the event is PlayWithDogEvent, and it contains Toys a dog can play with.
Flow of validation of such event:
For each toy,
check if it's a ball;
if it's a ball, it should not be too large.
If any of the toys is either not a ball/too big ball, my validate(Event event) method should return Optional.of(new ValidationException("some msg")).
I have implemented my validator the following way:
public class ValidBallsOnlyValidator implements Validator {

    @Override
    public Optional<ValidationException> validate(Event event) {
        try {
            event.getToys().forEach(this::validateSingleToy);
            return Optional.empty();
        } catch (InvalidToyException ex) {
            return Optional.of(new ValidationException(ex.getMessage()));
        }
    }

    private void validateSingleToy(Toy toy) {
        // In real code the optional here is kinda mandatory
        Optional<Toy> potentialBall = castToyToBall(toy);
        // I'm using Java 8
        if (potentialBall.isPresent()) {
            checkIfBallIsOfValidSize(potentialBall.get(), "exampleSize");
        } else {
            throw new InvalidToyException("The toy is not a ball!");
        }
    }

    private void checkIfBallIsOfValidSize(Toy toy, String size) {
        if (toyTooLarge(toy, size)) throw new InvalidToyException("The ball is too big!");
    }
}
The piece seems to work just fine, but I'm uncomfortable with the way it looks. My biggest concern is whether it is good practice to place the whole stream processing inside a single try. Moreover, I don't think such mixing of exception-catching and returning Optionals is elegant.
I could use some advice and/or best practices for such scenarios.
but I'm uncomfortable with the way it looks.
The API you're working against is crazy design. The approach to dealing with silly APIs is generally the same:
Try to fix it 'upstream': Make a pull request, talk to the team that made it, etc.
If and only if that option has been exhausted, then [A] write whatever ugly hackery you have to, to make it work, [B] restrict the ugliness to as small a snippet of code as you can; this may involve writing a wrapper that 'contains' the ugly, and finally [C] do not worry about code elegance within the restricted 'ugly is okay here' area.
The reason the API is bizarre is that it is both getting validation wrong, and not capitalizing on the benefits of their mistake (as in, if I'm wrong about their approach being wrong, then at least they aren't doing the best job at their approach).
Specifically, an exception is a return value, in the sense that it is a way to return from a method. Why isn't that interface:
public interface Validator {
void validate(Event event) throws ValidationException;
}
More generally, validation is not a 'there is at most one thing wrong' situation, and that goes towards your problem with 'it feels weird to write a try/catch around the whole thing'.
Multiple things can be wrong. There could be 5 toys, one of which is a ball but too large, and one of which is a squeaky toy. It is weird to report only one error (and presumably, an arbitrarily chosen one).
If you're going to go with the route of not throwing validation exceptions but returning validation issues, then the issues should presumably not be exceptions in the first place, but some other object, and, you should be working with a List<ValidationIssue> and not with an Optional<ValidationIssue>. You've gotten rid of an optional, which is always a win, and you now can handle multiple issues in one go. If the 'end point' that processes all this is fundamentally incapable of dealing with more than one problem at the time, that's okay: They can just treat that list as an effective optional, with list.isEmpty() serving as the 'all is well' indicator, and list.get(0) otherwise used to get the first problem (that being the only problem this one-error-at-a-time system can deal with).
This goes to code elegance, the only meaningful way to define that word 'elegance': It's code that is easier to test, easier to understand, and more flexible. It's more flexible: If later on the endpoint code that deals with validation errors is updated to be capable of dealing with more than one, you can now do that without touching the code that makes validation issue objects.
Thus, rewrite it all. Either:
Make the API design such that the point is to THROW that exception, not to shove it into an optional, -or-
Make the API list-based, also get rid of optional (yay!) and probably don't work with a validation issue object that extends SomeException. If you're not gonna throw it, don't make it a throwable.
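A rough sketch of that second, list-based option (ValidationIssue and ListValidator are invented names; Event, Toy, castToyToBall and toyTooLarge are assumed to be the same types and helpers as in the question):
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// A plain value object for one problem -- deliberately not a Throwable.
final class ValidationIssue {
    private final String message;
    ValidationIssue(String message) { this.message = message; }
    String getMessage() { return message; }
}

interface ListValidator {
    List<ValidationIssue> validate(Event event);
}

class ValidBallsOnlyListValidator implements ListValidator {

    // Empty list means "all is well"; otherwise one entry per problem found.
    @Override
    public List<ValidationIssue> validate(Event event) {
        List<ValidationIssue> issues = new ArrayList<>();
        for (Toy toy : event.getToys()) {
            // castToyToBall and toyTooLarge are the helpers from the question.
            Optional<Toy> potentialBall = castToyToBall(toy);
            if (!potentialBall.isPresent()) {
                issues.add(new ValidationIssue("The toy is not a ball!"));
            } else if (toyTooLarge(potentialBall.get(), "exampleSize")) {
                issues.add(new ValidationIssue("The ball is too big!"));
            }
        }
        return issues; // a one-problem-at-a-time endpoint can just use issues.get(0)
    }
}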
If that's not okay, mostly just don't worry about elegance so much - elegance is off the table once you're forced to work with badly designed APIs.
However, there's of course almost always some style notes to provide for any code.
return Optional.of(new ValidationException(ex.getMessage()));
Ordinarily, this is extremely bad exception handling and your linter tool SHOULD be flagging this down as unacceptable. If wrapping exceptions, you want the cause to remain to preserve both the stack trace and any exception-type-specific information. You're getting rid of all that by ignoring everything about ex, except for its message. Ordinarily, this should be new ValidationException("Some string that adds appropriate context", ex) - thus preserving the chain. If there is no context to add / it is hard to imagine what this might be, then you shouldn't be wrapping at all, and instead throwing the original exception onwards.
However, given that exceptions are being abused here, perhaps this code is okay - this again goes to the central point: Once you're committed to working with a badly designed API, rules of thumb on proper code style go right out the window.
private void checkIfBallIsOfValidSize(Toy toy, String size) {
if (toyTooLarge(toy, size)) throw new InvalidToyException("The ball is too big!");
}
Yes, this is a good idea - whilst the API expects you not to throw exceptions but to wrap them in optionals, that part is bad, and you should usually not perpetuate a mistake even if that means your code starts differing in style.
event.getToys().forEach(this::validateSingleToy);
Generally speaking, using the forEach method directly, or .stream().forEach(), is a code smell. forEach should be used in only two cases:
It's the terminal on a bunch of stream ops (.stream().filter().flatMap().map()....forEach - that'd be fine).
You already have a Consumer<T> object and want it to run for each element in a list.
You have neither. This code is best written as:
for (Toy toy : event.getToys()) validateSingleToy(toy);
Lambdas have 3 downsides (which turn into upsides if using lambdas as they were fully intended, namely as code that may run in some different context):
Not control flow transparent.
Not mutable local var transparent.
Not checked exception type transparent.
3 things you lose, and you gain nothing in return. When there are 2 equally succinct and clear ways to do the same thing, but one of the two is applicable in a strict superset of scenarios, always write it in the superset style, because code consistency is a worthwhile goal, and that leads to more consistency (it's worthwhile in that it reduces style friction and lowers learning curves).
That rule applies here.
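To make the third point concrete, imagine (hypothetically) that InvalidToyException were a checked exception:
// Plain loop: the checked exception just propagates (or is declared on the method).
for (Toy toy : event.getToys()) {
    validateSingleToy(toy); // throws InvalidToyException
}

// forEach: the lambda cannot throw the checked exception, so you are forced to wrap it.
event.getToys().forEach(toy -> {
    try {
        validateSingleToy(toy);
    } catch (InvalidToyException e) {
        throw new RuntimeException(e);
    }
});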
Returning exceptions instead of throwing them is weird, but whatever. (Why not return a ValidationResult object instead? Exceptions are usually intended to be thrown and caught.)
But you could change your private methods to also return Optional instances which would make it easier to combine them. It would also avoid mixing throwing and returning and streams. Not sure if that is what you are looking for?
public class ValidBallsOnlyValidator implements Validator {

    @Override
    public Optional<ValidationException> validate(Event event) {
        return event.getToys()
                .stream()
                .map(this::validateSingleToy)
                .filter(Optional::isPresent)
                .map(Optional::get)
                .findFirst()
                .map(ex -> new ValidationException(ex.getMessage()));
    }

    private Optional<InvalidToyException> validateSingleToy(Toy toy) {
        // In real code the optional here is kinda mandatory
        Optional<Toy> potentialBall = castToyToBall(toy);
        if (potentialBall.isPresent()) {
            return checkIfBallIsOfValidSize(potentialBall.get(), "exampleSize");
        } else {
            return Optional.of(new InvalidToyException("The toy is not a ball!"));
        }
    }

    private Optional<InvalidToyException> checkIfBallIsOfValidSize(Toy toy, String size) {
        if (toyTooLarge(toy, size)) return Optional.of(new InvalidToyException("The ball is too big!"));
        return Optional.empty();
    }
}
So today while learning Java, I finally encountered this particular error. It seems that this error is pretty prevalent, and trying to recover from it has garnered mixed reactions. Some think it is useful in certain scenarios so it's a good 'good to know', some have used it in their projects, while others vehemently oppose the idea of catching this error, and then a lot of others are just as confused as me.
Edit: Oh btw, this is the error that I have encountered. I wish to catch these errors and decrease the value of incrementor to 1/10th next time until another error is found (and so on... until I find the upper bound).
Since I'm taking my baby steps in Java and cannot find anything much specific on this topic, I'd like to ask your help on this particular example:
public class SpeedDemoClass {
static int iterations=0;
static int incrementor=10000000;
public static void main(String[] args) {
while(incrementor>1){
try{
iterations=iterations+incrementor;
iterator(iterations);
}
catch(Exception e){
System.out.println("So this is the limiting error: "+e);
iterations=iterations-incrementor;
incrementor=incrementor/10;
iterations=iterations+incrementor;
}
}
System.out.println("The upper limit of iterations is: "+iterations);
}
public static void iterator(int a){
long start_time= System.currentTimeMillis();
StringBuilder sbuild= new StringBuilder("JAVA");
for(int i=0;i<a;i++){
sbuild.append("JAVA");
}
System.out.println("Performing "+a+" append operations;"
+"process completed in :"
+(System.currentTimeMillis()-start_time)+"ms");
}
}
Did you try compiling it?
Of course it does not work.
Here's a short description on what I'm trying to do!
I am trying to do the following:
Initialize incrementor=10000000 and iterations=0, and pass the result of iterations = iterations + incrementor to iterator().
If there is an Error, the program should catch it, then decrease iterations by incrementor, then divide incrementor by 10, then try this again.
Repeat steps 1 to 2 until it starts working.
The goal is to find the value of iterations at which it produces an error and to stay below that level after each iteration, until the upper limit of iterations is found (i.e. when incrementor becomes 0).
With my own manual tests, I have found out the value of this to be 92,274,686. That's the magic number. I don't know why it is so, and whether or not it is that value for my own computer only. It would be awesome if someone could post a modified code that could spit out this result.
You catch Exception, but OutOfMemoryError is an Error. Note that Error and Exception are two different classes in Java! Both extend Throwable, though.
The catch clause restricts the type of Throwable classes to be caught. As Error is not an Exception, your code won't catch OutOfMemoryError.
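A minimal sketch of that hierarchy point (mightExhaustMemory is a made-up placeholder):
try {
    mightExhaustMemory();
} catch (Exception e) {
    // never entered for an OutOfMemoryError: Error is not a subclass of Exception
} catch (OutOfMemoryError e) {
    // entered: OutOfMemoryError extends Error, which extends Throwable
}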
That "magic number" will change if you allocate some extra objects that use up memory, use a PC with less memory, etc., or allocate memory more cleverly.
Trying to estimate memory consumption by brute force is a bad idea!
Instead, understand how Java memory allocation works, what the memory layout of a StringBuilder is, and how to control this. Here, you should study the method ensureCapacityInternal.
Look at the error you get - in particular the classes and line numbers.
I bet that if you replace this line:
StringBuilder sbuild= new StringBuilder("JAVA");
by
StringBuilder sbuild= new StringBuilder(500000000);
then you get much further before seeing an OOM error. Catching this error is not a very clever approach. There are some nicer ways to observe memory consumption, but you really should use them only once you have understood the memory organization of Java, so you don't draw wrong conclusions.
Your PC probably has 8 GB of RAM? Try using java -Xmx7500m; you may be able to go about 3x as far, because by default Java only uses 25% of your memory, simply to allow other processes to use some memory too. If you want Java to use all your memory, you have to tell it that this is desired.
To catch the OutOfMemoryError, just replace Exception in the catch with OutOfMemoryError:
catch(OutOfMemoryError e)
Out of memory generally means you can't do much else; the VM is already crashing. While it might seem you can do something, like deallocate memory, Java does all memory management for you. You can't deallocate anything; you can dereference objects and then trigger garbage collection, but at the moment the error occurs it's too late. Almost anything you could do requires memory allocation, and you have none.
In general all errors that are thrown by the VM cannot be recovered from. Usually because the VM itself is unstable, and you don't have access to the internals of the VM to fix it. Not only that, but it would be the rare case that your software could fix it.
I was going to post this solution but somebody beat me to it. Earlier today, I seem to have overlooked a solution presented in one of the similar questions that I mentioned at the beginning of this question.
Well, just for general courtesy, so that people in the future do not overlook a right-in-your-face solution like I have, I'm posting an answer as easy as they come*. Here are the little modifications I've made that made it happen. It seems it is possible to catch java.lang.OutOfMemoryError, although I do not yet know the implications of what I have really done.
* Please feel free to correct me and read the end of this answer :)
I'll update this answer in the future, reflecting on things that I might discover later (like discrepancies and such).
So without further ado, here's the code in its raw glory:
public class SpeedDemoClass {
static int iterations=0;
static int incrementor=10000000;
public static void main(String[] args) {
while(incrementor>0){
try{
iterations=iterations+incrementor;
int a = iterations;
long start_time= System.currentTimeMillis();
StringBuilder sbuild= new StringBuilder("JAVA");
for(int i=0;i<a;i++){
sbuild.append("JAVA");
}
System.out.println("Performing "+a+" append operations;"
+"process completed in :"
+(System.currentTimeMillis()-start_time)+"ms");
}
catch(OutOfMemoryError e){
System.out.println("OutOfMemory bound reached beyond this point with error: "+e
+"\nReverting back, and Changing the value of incrementor by 1/10th...");
iterations=iterations-incrementor;
incrementor=incrementor/10;
iterations=iterations+incrementor;
}
}
System.out.println("The upper limit of iterations is: "+iterations);
}
}
Changelog:
If it is not yet apparent, I have made some tiny changes to the original code.
(1) I ditched the method iterator() because I realize now that it made it harder for people to see the intent of the question.
(2) I changed catch(Exception e) to catch(OutOfMemoryError e). Turns out there is a native solution available, and I had overlooked it earlier.
OUTPUT:
It seems to work perfectly for me, and now I have found the magic number. I welcome all comments that can indicate whether this answer is wrong and why. :)
Let's say you have a method that checks whether the argument (an Answer) is correct and checks whether the question already has an answer in the list that is also correct:
public void addAnswer(Answer answer) {
if (answer.isCorrect()) {
...
}
}
However, I only want one answer to be correct in the list. I have multiple options. I could throw an exception, I could ignore it, I could return some boolean value from the addAnswer that tells me if the operation was ok or not. How are you supposed to think in such scenarios?
The rule is pretty simple: Use exceptions on exceptional, erroneous, unpredicted failures. Don't use exceptions when you expect something to happen or when something happens really often.
In your case it's not an error or something truly rare that an answer is not correct. It's part of your business logic. You can throw an exception, but only as part of some validation (assertion) if you expect an answer at a given point to always be correct and suddenly it's not (a precondition failure).
And of course, if some failure occurs while checking correctness (database connection lost, wrong array index), exceptions are desired.
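A rough sketch of that advice applied to the question (the field names and the internal List are assumptions made for this example):
import java.util.ArrayList;
import java.util.List;

public class Question {
    private final List<Answer> answers = new ArrayList<>();
    private boolean hasCorrectAnswer;

    // "A correct answer already exists" is part of the business logic,
    // so it is reported through the return value, not an exception.
    public boolean addAnswer(Answer answer) {
        if (answer.isCorrect() && hasCorrectAnswer) {
            return false;
        }
        hasCorrectAnswer |= answer.isCorrect();
        return answers.add(answer);
    }
}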
This entirely depends on what you want to achieve. Should the caller of your method already have made sure that it doesn't add two correct answers? Is it a sign of a programming error if that happens? Then throw an exception, but definitely an unchecked exception.
If your method's purpose is to relieve the caller from enforcing the one-true-answer invariant (I doubt that, though), then you can just arrange to signal via a boolean return value, which makes it only an optional information channel for the caller.
If there is no way to know in advance whether there are other correct answers—for example, the answers are added concurrently from several threads or even processes (via a database)—then it would be meaningful to throw a checked exception.
Bottom line: there is no one-size-fits-all best practice, but there is a best practice for every scenario you want to accomplish.
The exception police will be down on you like a ton of bricks, and me for this answer, with statements like "don't use exceptions for flow control" and "don't use exceptions for normal conditions".
The trouble with the first statement is that exceptions are a form of flow control. This makes the argument self-contradictory, and therefore invalid.
The trouble with the second statement is that it seems to inevitably go along with endlessly redefining exceptional conditions as normal. You will find examples in this very site: for example, a lively discussion where the police insisted that EOF was 'normal' and therefore that EOFException shouldn't be caught, despite the existence of dozens of Java APIs that don't give you any choice in the matter. Travel far enough down this path and you can end up with nothing that is exceptional whatsoever, and therefore no occasion to use them at all.
These are not logical arguments. These are unexamined dogmas.
The original and real point, back in about 1989 when it was first formulated, was that you shouldn't throw exceptions to yourself, to be handled in the same method: in other words, don't treat it as a GOTO. This principle continues to have validity.
The point about checked exceptions is that you force the caller to do something about handling them. If you believe, on your own analysis, that this is what you want, use an exception. Or, if you are using an API that forces you to catch them, catch them, at the appropriate level (whatever that is: left as an exercise for the reader).
In other words, like most things in the real world, it is up to your discretion and judgment. The feature is there to be used, or abused, like anything else.
@Exception police: you will find me in the telephone book. But be prepared for an argument.
An exception thrown from a method enforces the callers to take some action in the anticipation of the exception occurring for some inputs. A return value doesn't enforce the same and so it is up to the caller to capture it and take some action.
If you want the callers to handle the scenario to take some corrective action, then you should throw a checked exception (sub class of java.lang.Exception).
The problem here is that your API is error prone. I'd use the following scheme instead:
public class Question {
private List<Answer> answers;
private int mCorrect;
// you may want a List implementation without duplicates
public void setAnswers(List<Answer> answers, int correct) {
this.answers = answers;
// check if int is between bounds
mCorrect = correct;
}
public boolean isCorrect(Answer answer) {
return answers.indexOf(answer) == mCorrect;
}
}
because an Answer by itself is simply a statement, and usually cannot be true or false without being associated with a Question. This API makes it impossible to have zero or more than one correct answer, and forces the user to supply the correct one when he adds answers, so your program is always in a consistent state and simply can't fail.
Before deciding how to signal errors, it's always better to design the API so that errors are as rare as possible. With your current implementation, you have to make checks on your side, and the client programmer must check on his side as well. With the suggested design no check is needed, and you'll have correct, concise and fluent code on both sides.
Regarding when to use a boolean and when to use Exceptions, I often see boolean used to mirror the underlying API (mostly low level C-code).
I agree with Tomasz Nurkiewicz's response. I can't comment on it because I'm a new user. I would also recommend that if the addAnswer() method is not always going to add the answer (because a correct one already exists), you name it to suggest this behaviour; "add" suggests normal collection behaviour.
public boolean submitAnswer(Answer answer); // returns true if the answer is accepted
Your exact solution may depend on the bigger picture about your application that we don't know about. Maybe you do want to throw an Exception but also make it the responsibility of the caller to check whether adding the Answer is valid.
It's all a rich tapestry.
I would implement it in this way:
public class Question {
private int questionId;
private final Set<Answer> options = new HashSet<Answer>();
private final Set<Answer> correctAnswers = new HashSet<Answer>();
public boolean addAnswer(Answer answer) throws WrongAnswerForThisQuestionException {
if(!answer.isValid(questionId)) {
throw new WrongAnswerForThisQuestionException(answer, this);
}
if (answer.isCorrect(questionId)) {
correctAnswers.add(answer);
}
return options.add(answer);
}
}
I've heard that using exceptions for control flow is bad practice. What do you think of this?
public static StringMatch findStringMatch(String g0, String g1) {
int g0Left = -1;
int g0Right = -1;
int g1Left = -1;
int g1Right = -1;
//if a match is found, set the above ints to the proper indices
//...
//if not, the ints remain -1
try {
String gL0 = g0.substring(0, g0Left);
String gL1 = g1.substring(0, g1Left);
String g0match = g0.substring(g0Left, g0Right);
String g1match = g1.substring(g1Left, g1Right);
String gR0 = g0.substring(g0Right);
String gR1 = g1.substring(g1Right);
return new StringMatch(gL0, gR0, g0match, g1match, gL1, gR1);
}
catch (StringIndexOutOfBoundsException e) {
return new StringMatch(); //no match found
}
}
So, if no match has been found, the ints will be -1. This will cause an exception when I try to take the substring g0.substring(0, -1). Then the function just returns an object indicating that no match is found.
Is this bad practice? I could just check each index manually to see if they're all -1, but that feels like more work.
UPDATE
I have removed the try-catch block and replaced it with this:
if (g0Left == -1 || g0Right == -1 || g1Left == -1 || g1Right == -1) {
return new StringMatch();
}
Which is better: checking if each variable is -1, or using a boolean foundMatch to keep track and just check that at the end?
Generally, exceptions are expensive operations and, as the name would suggest, meant for exceptional conditions. So using them to control the flow of your application is indeed considered bad practice.
Specifically in the example you provided, you would need to do some basic validation of the inputs you are providing to the StringMatch constructor. If it were a method that returns an error code in case some basic parameter validation fails you could avoid checking beforehand, but this is not the case.
I've done some testing on this. On modern JVMs, it actually doesn't impact runtime performance much (if at all). If you run with debugging turned on, then it does slow things down considerably.
See the following for details
(I should also mention that I still think this is a bad practice, even if it doesn't impact performance. More than anything, it reflects a possibly poor algorithm design that is going to be difficult to test)
Yes, this is a bad practice, especially when you have a means to avoid an exception (check the string length before trying to index into it). Try and catch blocks are designed to partition "normal" logic from "exceptional" and error logic. In your example, you have spread "normal" logic into the exceptional/error block (not finding a match is not exceptional). You are also misusing substring so you can leverage the error it produces as control flow.
Program flow should be in as straight a line as possible (since even then applications get pretty complex), and should utilize standard control-flow structures. The next developer to touch the code may not be you and may (rightly) misunderstand the non-standard way you are using exceptions instead of conditionals to determine control flow.
I am fighting a slightly different slant on this problem right now during some legacy code refactoring.
The largest issue that I find with this approach is that using the try/catch breaks normal programmatic flow.
In the application I am working on (and this is different from the sample you have supplied), exceptions are used to communicate, from within a method call, that a given outcome (for instance looking for an account number and not finding it) occurred. This creates spaghetti code on the client side, since the calling method (during a non-exceptional event, or a normal use-case event) breaks out of whatever code it was executing before the call and into the catch block. This is repeated in some very long methods many times over, making the code very easy to misread.
For my situation, a method should return a value per its signature for all but truly exceptional events. The exception-handling mechanism is intended to take another path when the exception occurs (try to recover from within the method so you can still return normally).
To my mind you could do this if you scope your try/catch blocks very tightly; but I think it is a bad habit and can lead to code that is very easy to misinterpret, since the calling code will interpret any thrown exception as a 'GOTO' type message, altering program flow. I fear that although this case does not fall into this trap, doing this often could result in a coding habit leading to the nightmare that I am living right now.
And that nightmare is not pleasant.
I currently have a technical point of difference with an acquaintance. In a nutshell, it's the difference between these two basic styles of Java exception handling:
Option 1 (mine):
try {
...
} catch (OneKindOfException e) {
...
} catch (AnotherKind e) {
...
} catch (AThirdKind e) {
...
}
Option 2 (his):
try {
...
} catch (AppException e) {
switch(e.getCode()) {
case Constants.ONE_KIND:
...
break;
case Constants.ANOTHER_KIND:
...
break;
case Constants.A_THIRD_KIND:
...
break;
default:
...
}
}
His argument -- after I used copious links about user input validation, exception handling, assertions and contracts, etc. to back up my point of view -- boiled down to this:
"It’s a good model. I've used it since me and a friend of mine came up with it in 1998, almost 10 years ago. Take another look and you'll see that the compromises we made to the academic arguments make a lot of sense."
Does anyone have a knock-down argument for why Option 1 is the way to go?
When you have a switch statement, you're less object-oriented. There are also more opportunities for mistakes: forgetting a "break;" statement, or forgetting to add a case when you add a new kind of exception that is thrown.
I also find your way of doing it to be MUCH more readable, and it's the standard idiom that all developers will immediately understand.
For my taste, the amount of boilerplate in your acquaintance's method, the amount of code that has nothing to do with actually handling the exceptions, is unacceptable. The more boilerplate code there is around your actual program logic, the harder the code is to read and to maintain. And using an uncommon idiom makes code more difficult to understand.
But the deal breaker, as I said above, is that when you modify the called method so that it throws an additional Exception, you will automatically know you have to modify your code because it will fail to compile. However, if you use your acquaintance's method and you modify the called method to throw a new variety of AppException, your code will not know there is anything different about this new variety and your code may silently fail by going down an inappropriate error-handling leg. This is assuming that you actually remembered to put in a default so at least it's handled and not silently ignored.
the way option 2 is coded, any unexpected exception type will be swallowed! (this can be fixed by re-throwing in the default case, but that is arguably an ugly thing to do - much better/more efficient to not catch it in the first place)
option 2 is a manual recreation of what option 1 most likely does under the hood, i.e. it ignores the preferred syntax of the language to use older constructs best avoided for maintenance and readability reasons. In other words, option 2 is reinventing the wheel using uglier syntax than that provided by the language constructs.
clearly, both ways work; option 2 is merely obsoleted by the more modern syntax supported by option 1
I don't know if I have a knock down argument but initial thoughts are
Option 2 works until you're trying to catch an Exception that doesn't implement getCode().
Option 2 encourages the developer to catch general exceptions; this is a problem because if you don't implement a case statement for a given subclass of AppException, the compiler will not warn you. Of course you could run into the same problem with option 1, but at least option 1 does not actively encourage this.
With option 1, the caller has the option of selecting exactly which exception to catch, and to ignore all others. With option 2, the caller has to remember to re-throw any exceptions not explicitly caught.
Additionally, there's better self-documentation with option 1, as the method signature needs to specify exactly which exceptions are thrown, rather than a single over-riding exception.
If there's a need to have an all-encompassing AppException, the other exception types can always inherit from it.
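For example (invented names, doSomethingRisky is a placeholder), the specific types can extend the umbrella type, so callers keep the precision of option 1 while still being able to catch the base class when they don't care:
class AppException extends Exception { }
class OneKindOfException extends AppException { }
class AnotherKindOfException extends AppException { }

void caller() throws AppException {
    try {
        doSomethingRisky(); // hypothetical method declared as: throws AppException
    } catch (OneKindOfException e) {
        // handle just this case here
    }
    // any other AppException subtype simply keeps propagating
}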
The knock-down argument would be that it breaks encapsulation, since I now have to know something about the subclass of Exception's public interface in order to handle exceptions by it. A good example of this "mistake" in the JDK is java.sql.SQLException, exposing getErrorCode and getSQLState methods.
It looks to me like you're overusing exceptions in either case. As a general rule, I try to throw exceptions only when both of the following are true:
An unexpected condition has occurred that cannot be handled here.
Somebody will care about the stack trace.
How about a third way? You could use an enum for the type of error and simply return it as part of the method's type. For this, you would use, for example, Option<A> or Either<A, B>.
For example, you would have:
enum Error { ONE_ERROR, ANOTHER_ERROR, THIRD_ERROR };
and instead of
public Foo mightError(Bar b) throws OneException
you will have
public Either<Error, Foo> mightError(Bar b)
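Java has no built-in Either, so a home-grown sketch might look like this (minimal, and only for illustration; somethingWrong and the Foo constructor are assumed):
// Holds exactly one of an error (left) or a success value (right).
public final class Either<L, R> {
    private final L left;
    private final R right;

    private Either(L left, R right) {
        this.left = left;
        this.right = right;
    }

    public static <L, R> Either<L, R> left(L value) {
        return new Either<L, R>(value, null);
    }

    public static <L, R> Either<L, R> right(R value) {
        return new Either<L, R>(null, value);
    }

    public boolean isError() { return left != null; }
    public L getError() { return left; }
    public R getValue() { return right; }
}

// Usage: return somethingWrong(b) ? Either.left(Error.ONE_ERROR) : Either.right(new Foo(b));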
Throw/catch is a bit like goto/comefrom. Easy to abuse. See Go To Statement Considered Harmful. and Lazy Error Handling
I think it depends on the extent to which this is used. I certainly wouldn't have "one exception to rule them all" which is thrown by everything. On the other hand, if there is a whole class of situations which are almost certainly going to be handled the same way, but you may need to distinguish between them for (say) user feedback purposes, option 2 would make sense just for those exceptions. They should be very narrow in scope - so that wherever it makes sense for one "code" to be thrown, it should probably make sense for all the others to be thrown too.
The crucial test for me would be: "would it ever make sense to catch an AppException with one code, but want to let another code remain uncaught?" If so, they should be different types.
Each checked Exception is an, um, exception condition that must be handled for the program behavior to be defined. There's no need to go into contractual obligations and whatnot, it's a simple matter of meaningfulness. Say you ask a shopkeeper how much something costs and it turns out the item is not for sale. Now, if you insist you'll only accept non-negative numerical values for an answer, there is no correct answer that could ever be provided to you. This is the point with checked exceptions, you ask that some action be performed (perhaps producing a response), and if your request cannot be performed in a meaningful manner, you'll have to plan for that reasonably. This is the cost of writing robust code.
With Option 2 you are completely obscuring the meaning of the exception conditions in your code. You should not collapse different error conditions into a single generic AppException unless they will never need to be handled differently. The fact that you're branching on getCode() indicates otherwise, so use different Exceptions for different exceptions.
The only real merit I can see with Option 2 is for cleanly handling different exceptions with the same code block. This nice blog post talks about this problem with Java. Still, this is a style vs. correctness issue, and correctness wins.
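(As an aside: since Java 7 the language itself covers that one merit with multi-catch, so the same block can handle several exception types without collapsing them into one class; doWork and the exception names below are just placeholders.)
try {
    doWork();
} catch (OneKindOfException | AnotherKind e) {
    // one handler for both types, no switch on a code needed
}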
I'd support option 2 if it was:
default:
throw e;
It's a bit uglier syntax, but the ability to execute the same code for multiple exceptions (ie cases in a row) is much better. The only thing that would bug me is producing a unique id when making an exception, and the system could definitely be improved.
With option 2 you unnecessarily have to know the codes and declare constants for exceptions that could have stayed abstract with option 1.
The second option (I guess) will fall back to the traditional form (as in option 1) when there is only one specific exception to catch, so I see an inconsistency there.
Use both.
The first for most of the exceptions in your code.
The second for those very "specific" exceptions you've create.
Don't struggle with little things like this.
BTW 1st is better.