I'm currently reviewing for my OCPJP 6 using the Sierra & Bates book. I stumbled upon a question regarding an endless loop not throwing a StackOverflowError. As far as I've learned, it should eventually throw one.
Please refer to this PDF for the question: https://java.net/downloads/jfjug/SCJP%20Sun%20Certified%20Programmer%20for%20Java%206-0071591060.pdf
Question I'm referring to is from Self Test Chapter 5 Question 9 (page 455 of the PDF).
I answered C, D, and F. The correct answer, according to the book, was D and F. The explanation given there is that case 0 initiates an endless loop, not a StackOverflowError.
True, it does initiate an endless loop, but wouldn't it eventually end in a StackOverflowError? Answer C stated it "might throw a StackOverflowError", so I figured C was correct.
If I'm wrong, can anybody explain why?
Since, in that loop, you're not actually calling methods that need to call other methods (a la recursion), you're not adding more calls to the stack. You're merely repeating the same steps each time through.
Since a StackOverflowError is only thrown under certain conditions - namely, calling another method (which in turn calls more methods) or allocating more data on the stack - there's really no way that this particular loop could cause such an error.
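To make the contrast concrete, here is a minimal sketch (not the book's code; the class name is made up): the loop below runs forever in a single stack frame, so it hangs rather than throwing a StackOverflowError.
public class EndlessLoop {
    public static void main(String[] args) {
        int i = 0;
        while (true) {   // repeats forever in the same stack frame
            i++;         // no method call here, so no new frame is ever pushed
        }
    }
}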
A stack overflow is commonly caused by excessively deep or infinite recursion.
In simple terms, for example: a method calling itself (directly or indirectly).
public class StackOverflowDemo {

    // Every call pushes a new frame onto the call stack and never returns,
    // so the stack eventually overflows.
    public static void proneToStackOverFlow() {
        proneToStackOverFlow();
    }

    public static void main(String[] args) {
        proneToStackOverFlow(); // throws StackOverflowError
    }
}
Related
I am taking a Java class in school. We had this assignment to design a class to function as a menu, with several sub menus.
The structure is kinda like this (sort of pseudocode below, just to show structure):
public static void mainMenu() {
    switch (integer variable) {
        case 1: submenu1();
                break;
        case 2: submenu2();
                break;
    }
}

public static void submenu1() {
    switch (integer variable) {
        case 1: subsubmenu1();
                break;
        case 2: subsubmenu2();
                break;
        default: mainMenu();
    }
}

public static void subsubmenu1() {
    switch (integer variable) {
        case 1: anothersubmenu1();
                break;
        case 2: anothersubmenu2();
                break;
        default: submenu1();
    }
}
My question is: my teacher said this is wrong, because the JVM stores in memory the path that the program takes from one place to another if I do it this way, and in the long run this would cause a stack overflow. He didn't quite explain it; he just said that I should surround the whole thing with a while loop controlled by a boolean variable, adding an option to flip that boolean to exit the loop, because that way Java wouldn't be storing the path the program takes from one method to another.
Again, he didn't explain it in detail, and the way he explained it sounded very confusing (I tried to restate it as clearly as I could from what he gave me). I have been looking online for the last 3 hours for anything that resembles what he told me, and I couldn't find anything... so I decided to ask the experts.
Could you guys help me out?
When the computer executes a method/function call, it has to:
Remember what the calling function is doing -- the values of local variables, and where to resume execution when the called function completes;
Transfer control to the called function.
When the called function is finished, it:
Returns control to the remembered position in the calling function, where it
Restores the values of local variables, etc.; and
Continues processing with the value returned from the called function, if any.
The problem with your system of functions is that they can just keep calling and never return:
mainMenu -> submenu1 -> mainMenu -> submenu1 -> ... and so on.
If your functions never return, then it just has to keep remembering more and more stuff each time you make a new call, possibly at some point exceeding the amount of (stack) memory that is available for storing that stuff, resulting in a stack overflow error.
Some languages implement an optimization called "tail call optimization" that avoids storing those things when calling another function is the last thing your function does. In that case the function won't need the values of its local variables again, and it doesn't need to remember where to resume, because the called function can return directly to the place the current function would have returned to, which is already remembered.
In languages like that, your code can actually be OK... but java is not one of those languages.
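For reference, here's a rough sketch of the loop-based structure your teacher is suggesting (the menu text and input handling are made up for illustration): each sub-menu call returns before the next pass of the loop, so the stack never grows.
import java.util.Scanner;

public class MenuLoop {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        boolean running = true;
        while (running) {                       // one main-menu frame, reused every pass
            System.out.println("1) submenu 1   2) submenu 2   0) quit");
            switch (in.nextInt()) {
                case 1: submenu1(in); break;    // returns here when the sub-menu is done
                case 2: submenu2(in); break;
                case 0: running = false; break; // flip the flag to leave the loop
            }
        }
    }

    static void submenu1(Scanner in) { /* show options, then simply return */ }
    static void submenu2(Scanner in) { /* show options, then simply return */ }
}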
Yes, your teacher is (partly) correct.
The critical part is that you potentially call mainMenu() from within submenu1() and you call submenu1() from within subsubmenu1().
If you call mainMenu() every time you are in submenu1() and call submenu1() whenever you are in mainMenu(), your program will crash.
For every function call, the underlying system needs to reserve memory for the function's local variables and such. That's the so-called stack frame. It is called recursion when you call a function from within itself (directly or indirectly). Recursion needs to return at some point; if it doesn't, you get a stack overflow because the memory runs out.
I've been playing with recursive constructors in Java. The following class is accepted by the compiler, but it crashes with a StackOverflowError at runtime, using Java 1.7.0_25 and Eclipse Juno (Version: Juno Service Release 2, Build id: 20130225-0426).
class MyList<X> {
    public X hd;
    public MyList<X> tl;

    // Every MyList constructs another MyList for its tail,
    // so construction never terminates.
    public MyList() {
        this.hd = null;
        this.tl = new MyList<X>();
    }
}
The error message makes sense, but I'm wondering if the compiler should catch it. A counterexample might be a list of integers with a constructor that takes an int as an argument and sets this.tl to null if the argument is less than zero. This seems reasonable to allow in the same way that recursive methods are allowed, but on the other hand I think constructors ought to terminate. Should a constructor be allowed to call itself?
So I'm asking a higher authority before submitting a Java bug report.
EDIT: I'm advocating for a simple check, like prohibiting a constructor from calling itself or whatever the Java developers did to address https://bugs.openjdk.java.net/browse/JDK-1229458. A wilder solution would be to check that the arguments to recursive constructor calls are decreasing with respect to some well-founded relation, but the point of the question is not "should Java determine whether all constructors terminate?" but rather "should Java use a stronger heuristic when compiling constructors?".
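For reference, a sketch of the terminating counterexample I mean (the class name is just illustrative): the argument strictly decreases, so construction bottoms out instead of recursing forever.
class MyIntList {
    public int hd;
    public MyIntList tl;

    // Recursion stops because each nested call gets a smaller argument,
    // and the tail is set to null once the argument drops below zero.
    public MyIntList(int n) {
        this.hd = n;
        this.tl = (n < 0) ? null : new MyIntList(n - 1);
    }
}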
You could even have several constructors with different parameters calling each other with this(...). In general, computer science tells us that termination of code cannot always be guaranteed. Some intelligence for simple cases like this one would be nice to have, but one may not require a compiler error - a bit like unreachable code. In my eyes, however, there is no difference between a constructor and a normal method here.
I wouldn't see any reason why a constructor should more need to terminate than any other kind of function. But, as with any other kind of function, the compiler cannot infer in the general case whether such function ever terminates (halting problem).
Now whether there's generally much need for a recursive constructor is debatable, but it certainly is not a bug, unless the Java specification explicitly stated that recursive constructor calls must result in an error.
And finally, it's important to differentiate between recursive calls to constructor(s) of the same object, which is a common pattern for instance to overcome the lack of default parameters, and calling the constructor of the same class to create another object, as done in your example.
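To illustrate the difference (a sketch with made-up names): delegating with this(...) is the usual substitute for default parameters, and the compiler already rejects circular this(...) chains with a "recursive constructor invocation" error, whereas creating a new object of the same class inside a constructor, as in the question, is only caught at runtime.
class Point {
    final int x;
    final int y;

    // Delegation via this(...): stands in for a default parameter.
    // A circular this(...) chain would be a compile-time error.
    Point() {
        this(0, 0);
    }

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}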
Although this specific situation seems quite obvious, determining whether or not code terminates is an impossible question to answer.
If you try to configure compiler warnings for infinite recursion, you run into the Halting Problem:
"Given a description of an arbitrary computer program, decide whether
the program finishes running or continues to run forever."
Alan Turing proved in 1936 that a general algorithm to solve the
halting problem for all possible program-input pairs cannot exist.
I'm making a card game in Swing (Java).
The user has to wait his turn, take a card, and press confirm. When it's not his turn, he can't take any card.
It starts this way:
this.cardTaken = false;
board.canTakeCards(!cardTaken);
Then in board class it comes the next action:
public void canTakeCards(boolean can) {
    if (can) {
        this.btnConfirm.setEnabled(false);
        this.pnlCards.setCanTake(true);
    } else {
        this.btnConfirm.setEnabled(true);
        this.pnlCards.setCanTake(false);
    }
}
(the else happens when the user takes a card).
So, I got "Comparison method violates its general contract" at the line board.canTakeCards(!cardTaken);.
That only happened one time, and I've "tested" my game about 8 times. I'm really confused and worried about this.
One of my theories is that I call this function from two different parts of the code at the same execution time, and it receives true and false at the same time. But I revised my code and I think that's impossible.
Any advice? Thanks
This message text is included in an exception thrown from Java 7 sorted collections, indicating that the object in question has an inconsistent implementation of compareTo, which basically means it is not imposing a total ordering on the objects. Prior to Java 7 this was silently ignored. Revise your Comparable classes.
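As an illustration (not your code), the classic way to break that contract is a comparison based on int subtraction, which can overflow and flip the sign, making compare(a, b) inconsistent with compare(b, a):
import java.util.Arrays;
import java.util.Comparator;

public class ContractDemo {
    public static void main(String[] args) {
        Integer[] values = { Integer.MAX_VALUE, -1, 0, Integer.MIN_VALUE };

        // Broken: the subtraction overflows for values far apart, so the
        // ordering is inconsistent and TimSort (Java 7+) may throw
        // "Comparison method violates its general contract".
        Comparator<Integer> broken = new Comparator<Integer>() {
            public int compare(Integer a, Integer b) { return a - b; }
        };

        // Fixed: Integer.compare (Java 7+) never overflows.
        Comparator<Integer> fixed = new Comparator<Integer>() {
            public int compare(Integer a, Integer b) { return Integer.compare(a, b); }
        };

        Arrays.sort(values, fixed);
        System.out.println(Arrays.toString(values));
        // Arrays.sort(values, broken); // risks the exception on larger inputs
    }
}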
I have been having a lot of trouble with ConcurrentModificationException errors. With the help of the community I managed to fix my last error, however I think this one is more puzzling.
I am now getting a ConcurrentModificationException while painting, and I am worried that it occurs in the paint function itself. Unlike my last error, I am not removing an entity; in that case I completely understood what was going on, just not how to fix it. This error, on the other hand, I do not understand.
TowerItr = activeTowers.iterator();
while (TowerItr.hasNext()) {
    try {
        theTower = (ArcherTower) TowerItr.next();
        g.drawImage(tower, theTower.y * Map.blockSize, theTower.x * Map.blockSize, this);
    } catch (ConcurrentModificationException e) {
    }
}
The line that throws the exception is this one:
theTower = (ArcherTower) TowerItr.next();
There are always two sides to a ConcurrentModificationException (CME), and only one side reports the error.
In this case, your code looks perfectly fine. You are looping through the members of the activeTowers.
The stack trace will show you nothing that you don't already know.... you have a CME.
What you need to find out is where you are adding/removing data from the activeTowers collection.
In many cases, there are reasonably easy fixes. A probable fix is to synchronize the access to the activeTowers array in every place it is used. This may be quite hard to do.
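A rough sketch of that first option, assuming activeTowers is a plain List and using the list itself as the lock (newTower is just a placeholder for whatever you add elsewhere):
// In the paint code: hold the lock for the whole iteration.
synchronized (activeTowers) {
    TowerItr = activeTowers.iterator();
    while (TowerItr.hasNext()) {
        theTower = (ArcherTower) TowerItr.next();
        g.drawImage(tower, theTower.y * Map.blockSize, theTower.x * Map.blockSize, this);
    }
}

// Wherever towers are added or removed, take the same lock:
synchronized (activeTowers) {
    activeTowers.add(newTower); // newTower is a placeholder
}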
Another option is to use a collection from the java.util.concurrent.* package, and to use the toArray(...) methods to get a snapshot of the collection, and then you can iterate that snapshot in your while loop.
// activeTowers must be a collection from java.util.concurrent.*
for (ArcherTower theTower : activeTowers.toArray(new ArcherTower[0])) {
    g.drawImage(tower, theTower.y * Map.blockSize, theTower.x * Map.blockSize, this);
}
I should add that if you use a java.util.concurrent.* collection, then the iterator() will return a 'stable' iterator too (will not throw CME, and may (or may not) reflect changes in the collection)
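For example, a sketch of the declaration change (assuming you can change where activeTowers is created):
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// The iterator of a CopyOnWriteArrayList is a snapshot: it never throws a
// CME, and writes copy the underlying array, which is fine for a list that
// is painted every frame but modified rarely.
List<ArcherTower> activeTowers = new CopyOnWriteArrayList<ArcherTower>();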
Bottom line is that a CME only tells you half the story.... and only your code will tell you the rest....
Carrying on from what our esteemed members have suggested:
1) This error is generally seen when the List is being modified after an Iterator has been extracted from it.
2) I am looking at your code and it seems alright, except for the this being passed to the drawImage() method. That may potentially be altering your List, because this has direct access to class members / class variables.
3) This can also happen if multiple threads are accessing the List concurrently, and one of the threads tries to alter the List from some other method which shares the same instance of the List. I say some other method because concurrent read access should not create trouble if multiple threads were only accessing this method.
Note: Please post all possible code and the stack trace so the exact problem can be debugged.
Do you know how expensive exception throwing and handling is in Java?
We had several discussions about the real cost of exceptions in our team. Some avoid them as often as possible, some say the loss of performance by using exceptions is overrated.
Today I found the following piece of code in our software:
private void doSomething()
{
    try
    {
        doSomethingElse();
    }
    catch (DidNotWorkException e)
    {
        log("A Message");
    }
    goOn();
}

private void doSomethingElse()
{
    if (isSoAndSo())
    {
        throw new DidNotWorkException();
    }
    goOnAgain();
}
How is the performance of this compared to
private void doSomething()
{
    doSomethingElse();
    goOn();
}

private void doSomethingElse()
{
    if (isSoAndSo())
    {
        log("A Message");
        return;
    }
    goOnAgain();
}
I don't want to discuss code aesthetics or anything; it's just about runtime behaviour!
Do you have real experiences/measurements?
Exceptions are not free... so they are expensive :-)
The book Effective Java covers this in good detail.
Item 39 Use exceptions only for exceptional conditions.
Item 40 Use exceptions for recoverable conditions
The author found that exceptions resulted in the code running 70 times slower for his test case on his machine with his particular VM and OS combo.
The slowest part of throwing an exception is filling in the stack trace.
If you pre-create your exception and re-use it, the JIT may optimize it down to "a machine level goto."
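A sketch of that pre-created, reused exception (the class name is made up; the four-argument constructor that disables the stack trace exists since Java 7, or you can override fillInStackTrace() as shown further down):
// Reused, stack-trace-free exception for hot paths; only worth it when
// profiling shows exception construction is actually the bottleneck.
public class FastFailException extends Exception {
    public static final FastFailException INSTANCE = new FastFailException();

    private FastFailException() {
        // message, cause, enableSuppression, writableStackTrace
        super(null, null, false, false);
    }
}

// usage: throw FastFailException.INSTANCE;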
All that having been said, unless the code from your question is in a really tight loop, the difference will be negligible.
The slow part about exceptions is building the stack trace (in the constructor of java.lang.Throwable), which depends on stack depth. Throwing in itself is not slow.
Use exceptions to signal failures. The performance impact then is negligible and the stack trace helps to pin-point the failure's cause.
If you need exceptions for control flow (not recommended), and profiling shows that exceptions are the bottleneck, then create an Exception subclass that overrides fillInStackTrace() with an empty implementation. Alternatively (or additionally) instantiate only one exception, store it in a field and always throw the same instance.
The following demonstrates exceptions without stack traces by adding one simple method to the micro benchmark (albeit flawed) in the accepted answer:
public class DidNotWorkException extends Exception {
    @Override
    public Throwable fillInStackTrace() {
        return this;
    }
}
Running it using the JVM in -server mode (version 1.6.0_24 on Windows 7) results in:
Exception:99ms
Boolean:12ms
Exception:92ms
Boolean:11ms
The difference is small enough to be ignorable in practice.
I haven't bothered to read up on exceptions, but doing a very quick test with some modified code of yours I come to the conclusion that the exception case is quite a lot slower than the boolean case.
I got the following results:
Exception:20891ms
Boolean:62ms
From this code:
public class Test {
    public static void main(String args[]) {
        Test t = new Test();
        t.testException();
        t.testBoolean();
    }

    public void testException() {
        long start = System.currentTimeMillis();
        for (long i = 0; i <= 10000000L; ++i)
            doSomethingException();
        System.out.println("Exception:" + (System.currentTimeMillis() - start) + "ms");
    }

    public void testBoolean() {
        long start = System.currentTimeMillis();
        for (long i = 0; i <= 10000000L; ++i)
            doSomething();
        System.out.println("Boolean:" + (System.currentTimeMillis() - start) + "ms");
    }

    private void doSomethingException() {
        try {
            doSomethingElseException();
        } catch (DidNotWorkException e) {
            // Msg
        }
    }

    private void doSomethingElseException() throws DidNotWorkException {
        if (!isSoAndSo()) {
            throw new DidNotWorkException();
        }
    }

    private void doSomething() {
        if (!doSomethingElse())
            ; // Msg
    }

    private boolean doSomethingElse() {
        if (!isSoAndSo())
            return false;
        return true;
    }

    private boolean isSoAndSo() { return false; }

    public class DidNotWorkException extends Exception {}
}
I foolishly didn't read my code well enough and previously had a bug in it (how embarrassing); if someone could triple-check this code I'd very much appreciate it, just in case I'm going senile.
My specification is:
Compiled and run on 1.5.0_16
Sun JVM
WinXP SP3
Intel Centrino Duo T7200 (2.00 GHz, 977 MHz)
2.00 GB RAM
In my opinion you should note that the non-exception methods don't log the error in doSomethingElse but instead return a boolean so that the calling code can deal with the failure. If there are multiple areas in which this can fail, then logging an error inside or throwing an exception might be needed.
This is inherently JVM specific, so you should not blindly trust whatever advice is given, but actually measure in your situation. It shouldn't be hard to create a "throw a million Exceptions and print out the difference of System.currentTimeMillis" to get a rough idea.
For the code snippet you list, I would personally require the original author to thoroughly document why he used exception throwing here as it is not the "path of least surprises" which is crucial to maintaining it later.
(Whenever you do something in a convoluted way, you cause unnecessary work for the reader in order to understand why you did it like that instead of the usual way - that work must, in my opinion, be justified by the author carefully explaining why it was done like that, as there MUST be a reason.)
Exceptions are a very, very useful tool, but should only be used when necessary :)
I have no real measurements, but throwing an exception is more expensive.
Ok, this is a link regarding the .NET framework, but I think the same applies to Java as well:
exceptions & performance
That said, you should not hesitate to use them when appropriate. That is: do not use them for flow control, but use them when something exceptional happened; something that you didn't expect to happen.
I think if we stick to using exceptions where they are needed (exceptional conditions), the benefits far outweigh any performance penalty you might be paying. I say might since the cost is really a function of the frequency with which exceptions are thrown in the running application.
In the example you give, it looks like the failure is not unexpected or catastrophic, so the method should really be returning a bool to signal its success status rather than using exceptions, thus making them part of regular control flow.
In the few performance improvement efforts that I have been involved in, the cost of exceptions has been fairly low. You would spend far more time improving the complexity of common, highly repeated operations.
Thank you for all the responses.
I finally followed Thorbjørn's suggestion and wrote a little test program, measuring the performance myself. The result: no difference between the two variants (in terms of performance).
Even though I didn't ask about code aesthetics or the like, i.e. what the intention of exceptions was etc., most of you also addressed that topic. But in reality things are not always that clear... The code under consideration was born a long time ago, when the situation in which the exception is thrown seemed to be an exceptional one. Today the library is used differently, the behaviour and usage of the different applications have changed, test coverage is not very good, but the code still does its job, just a little bit too slowly (that's why I asked about performance!!). In that situation, I think, there should be a good reason for changing from A to B, which, in my opinion, can't be "That's not what exceptions were made for!".
It turned out that the logging ("A message") is (compared to everything else happening) very expensive, so I think, I'll get rid of this.
EDIT:
The test code is exactly like the one in the original post, called by a testPerformance() method in a loop which is surrounded by System.currentTimeMillis() calls to get the execution time... but:
I reviewed the test code, turned off everything else (the log statement) and looped 100 times more than before, and it turns out that you save 4.7 seconds for a million calls when using B instead of A from the original post. As Ron said, fillInStackTrace is the most expensive part (+1 for that), and you can save nearly the same (4.5 seconds) if you override it (in case you don't need it, like me). All in all it's still a nearly-zero difference in my case, since the code is called 1000 times an hour and the measurements show I can save 4.5 milliseconds in that time...
So, the first part of my answer above was a little misleading, but what I said about balancing the cost-benefit of a refactoring remains true.
I think you're asking this from slightly the wrong angle. Exceptions are designed to be used to signal exceptional cases, and as a program flow mechanism for those cases. So the question you should be asking is, does the "logic" of the code call for exceptions.
Exceptions are generally designed to perform well enough for the use they are intended for. If they're used in such a way that they become a bottleneck, then above all that's probably an indication that they're being used for "the wrong thing", full stop - i.e. what you have underneath is a program design problem rather than a performance problem.
Conversely, if the exception appears to be being "used for the right thing", then that probably means it'll also perform OK.
Let's say an exception won't occur when trying to execute statements 1 and 2. Is there ANY performance difference between these two code samples?
If not, what if the DoSomething() method has to do a huge amount of work (loads of calls to other methods, etc.)?
1:
try
{
    DoSomething();
}
catch (...)
{
    ...
}
2:
DoSomething();