Assertions in recursive call? - java

This is code to check whether a graph is bipartite. My question is about assertions.
I want a check that validates the graph is not null. Effective Java encourages parameter checks even in private methods. Let's say I add an assert graph != null; it would be evaluated as many times as the recursive function is called, which appears inefficient. If the check is done only before the recursive function is first called, then we violate the guideline from Effective Java that every method should validate its parameters. Is there some best practice / tradeoff here? Thanks.
private void dfsBipartiteDetector(Graph graph, int vertex, int i) {
    assert graph != null; // <--------- appears inefficient for recursive call.
    visited[vertex] = true;
    vertexSets.get(i).add(vertex);
    final List<Integer> adjList = graph.adj(vertex);
    for (int v : adjList) {
        if (!visited[v]) {
            dfsBipartiteDetector(graph, v, i == 0 ? 1 : 0);
        } else {
            if (vertexSets.get(i).contains(v)) {
                isBipartite = false;
            }
        }
    }
}

Trading efficiency for safety in debug-only code is good practice.
It's pretty common to add quite complex debug-only sanity-checking code, to check the integrity of a whole data structure for instance.
Only if the code slows down so much that it gets in the way of your development process should you think about reducing the amount of such checking.
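One way to keep the per-call cost out of the recursion while still validating at the boundary is to do the real check once in a public entry point and leave only an assert in the private recursive helper. A minimal sketch, reusing the Graph type and fields from the question (the public checkBipartite entry point and its name are illustrative, not part of the original code; Objects is java.util.Objects):
// Public entry point: validate the argument once, up front.
public boolean checkBipartite(Graph graph, int startVertex) {
    Objects.requireNonNull(graph, "graph must not be null");
    dfsBipartiteDetector(graph, startVertex, 0);
    return isBipartite;
}

// Private recursive helper: the assert documents the invariant and costs
// nothing when assertions are disabled (the default), so the recursion
// pays no validation overhead in production.
private void dfsBipartiteDetector(Graph graph, int vertex, int i) {
    assert graph != null;
    // ... body unchanged from the question ...
}
This keeps the fail-fast check at the public boundary without paying for a full check on every recursive call.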

assert isn't on by default. The check only actually runs if you explicitly enable it by starting the JVM with the -ea option. The idea is to enable assertions during development and disable them in production, which addresses the very tradeoff you mention.
Having said that, I find it useful to have such checks enabled in production as well, which is why I prefer Guava's Preconditions over the assert keyword: checks written with the former always run. The performance cost of this kind of check is usually negligible compared to the rest of your code, and it can help with bugs that are otherwise hard to track down.
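For instance (a standalone sketch, assuming Guava is on the classpath; the class and method names are made up for illustration), checkNotNull throws a NullPointerException whether or not the JVM was started with -ea:
import static com.google.common.base.Preconditions.checkNotNull;

import java.util.List;

public final class PreconditionsDemo {
    // Unlike assert, this check runs in production builds too.
    static int firstElement(List<Integer> values) {
        checkNotNull(values, "values must not be null");
        return values.get(0);
    }

    public static void main(String[] args) {
        System.out.println(firstElement(List.of(42))); // prints 42
        firstElement(null); // throws NullPointerException: values must not be null
    }
}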

Can the java compiler optimize loops to return early?

I'm working with an external library that decided to handle collections on its own. Not using it, or updating it, is outside my control. To work with elements of this third-party "collection", it only exposes iterators.
A question came up during a code review about having multiple returns in the code to gain performance. We all agree (within the team) the code is more readable with a single return, but some are worried about optimizations.
I'm aware premature optimization is bad. That is a topic for another day.
I believe the JIT compiler can handle this and skip the unneeded iterations, but could not find any info to back this up. Is JIT capable of such a thing?
A code sample of the issue at hand:
public boolean contains(MyThings things, String valueToFind) {
    Iterator<Thing> thingIterator = things.iterator();
    boolean valueFound = false;
    while (thingIterator.hasNext()) {
        Thing thing = thingIterator.next();
        if (valueToFind.equals(thing.getValue())) {
            valueFound = true;
        }
    }
    return valueFound;
}
VS
public boolean contains(MyThings things, String valueToFind) {
    Iterator<Thing> thingIterator = things.iterator();
    while (thingIterator.hasNext()) {
        Thing thing = thingIterator.next();
        if (valueToFind.equals(thing.getValue())) {
            return true;
        }
    }
    return false;
}
We all agree the code is more readable with a single return.
Not really. This is just old school structured programming when functions were typically not kept small and the paradigms of keeping values immutable weren't popular yet.
Although subject to debate, there is nothing wrong with having very small methods (a handful of lines of code), which return at different points. For example, in recursive methods, you typically have at least one base case which returns immediately, and another one which returns the value returned by the recursive call.
Often you will find that creating an extra result variable just to hold the return value, and then making sure no other part of the function overwrites it when you already know you could simply return, just adds noise that makes the code less readable, not more. The reader has to carry the extra cognitive load of checking that the result is not modified further down, and during debugging this makes things even more painful.
I don't think your example is premature optimisation. It is a logical and critical part of your search algorithm; that is why you can break out of loops or, in your case, just return the value. I don't think the JIT can easily work out that it should break out of the loop. It doesn't know whether you intend to set the variable back to false if a later element in the collection fails the test (and I don't think it is smart enough to prove that valueFound is never set back to false).
In my opinion, your second example is not only more readable (the valueFound variable is just extra noise) but also faster, because it returns as soon as it has done its job. The first example would be just as fast if you put a break after setting valueFound = true. If you don't, and you have a million items to check and the item you need is the first one, you will compare all the others for nothing.
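For reference, the break variant described above would look something like this (a sketch reusing the MyThings and Thing types from the question): a single return, but no wasted iterations once a match is found.
public boolean contains(MyThings things, String valueToFind) {
    Iterator<Thing> thingIterator = things.iterator();
    boolean valueFound = false;
    while (thingIterator.hasNext()) {
        Thing thing = thingIterator.next();
        if (valueToFind.equals(thing.getValue())) {
            valueFound = true;
            break; // stop scanning as soon as the value is found
        }
    }
    return valueFound;
}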
The Java compiler cannot do an optimization like that, because doing so in the general case would change the logic of the program.
Specifically, adding an early return would change the number of invocations of thingIterator.hasNext(), because your first code block continues iterating over the collection to the end.
Java could potentially replace a break with an early return, but that would not have any effect on the timing of the program.

Is using return at the start of method bad coding practice?

I have found myself using the following practice, but something inside me kind of cringes every time I use it. Basically, it's a precondition test on the parameters to determine whether the actual work should be done.
public static void doSomething(List<String> things)
{
    if (things == null || things.size() <= 0)
        return;
    // ...snip... do actual work
}
It is good practice to return at the earliest opportunity.
That way the least amount of code gets executed and evaluated.
Code that does not run cannot be in error.
Furthermore it makes the function easier to read, because you do not have to deal with all the cases that do not apply anymore.
Compare the following code
private Date someMethod(Boolean test) {
    Date result;
    if (null == test) {
        result = null;
    } else {
        result = test ? something : other;
    }
    return result;
}
vs
private Date someMethod(Boolean test) {
    if (null == test) {
        return null;
    }
    return test ? something : other;
}
The second one is shorter, does not need an else and does not need the temp variable.
Note that in Java the return statement exits the function right away; in other languages (e.g. Pascal) the almost equivalent code result:= something; does not return.
Because of this fact it is customary to return at many points in Java methods.
Calling this bad practice is ignoring the fact that that particular train has long since left the station in Java.
If you are going to exit a function at many points anyway, it's best to exit at the earliest opportunity.
It's a matter of style and personal preference. There's nothing wrong with it.
To the best of my understanding - no.
For the sake of easier debugging there should be only one return/exit point in a subroutine, method or function.
With such an approach your program may become longer and less readable, but while debugging you can put a breakpoint at the exit and always see the state of what you return. For example, you can log the state of all local variables; that can be really helpful for troubleshooting.
It looks like there are two "schools": one says "return as early as possible", whereas the other says "there should be only one return/exit point in a program".
I am a proponent of the first one, though in practice I sometimes follow the second one, just to save time.
Also, do not forget about exceptions. Very often the fact that you have to return from a method early means that you are in an exceptional situation. In your example I think throwing an exception is more appropriate.
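A sketch of that exception-based variant, using the standard IllegalArgumentException rather than a custom type (the message text is just illustrative):
public static void doSomething(List<String> things) {
    if (things == null || things.isEmpty()) {
        throw new IllegalArgumentException("things must be a non-empty list");
    }
    // ...snip... do actual work
}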
PMD seems to think so, and that you should always let your methods run to the end. However, for certain quick sanity checks, I still use early return statements.
It does impair the readability of the method a little, but in some cases that can be better than adding yet another if statement or other means by which to run the method to the end for all cases.
There's nothing inherently wrong with it, but if it makes you cringe, you could throw an IllegalArgumentException instead. In some cases that's more accurate. It could, however, result in a bunch of code that looks like this whenever you call doSomething:
try {
    doSomething(myList);
} catch (IllegalArgumentException e) {}
There is no correct answer to this question, it is a matter of taste.
In the specific example above there may be better ways of enforcing a pre-condition, but I view the general pattern of multiple early returns as akin to guards in functional programming.
I personally have no issue with this style - I think it can result in cleaner code. Trying to contort everything to have a single exit point can increase verbosity and reduce readability.
It's good practice. So continue with your good work.
There is nothing wrong with it. Personally, I would use an else statement to execute the rest of the function and let it return naturally.
If you want to avoid the return in your method, you could use your own subclass of Exception and handle it at the call site.
For example:
public static void doSomething(List<String> things) throws MyExceptionIfThingsIsEmpty {
    if (things == null || things.size() <= 0)
        throw new MyExceptionIfThingsIsEmpty(1, "Error, the list is empty!");
    // ...snip... do actual work
}
Edit:
If you don't want to use the return statement, you could invert the condition in the if():
if (things != null && things.size() > 0) {
    // do your things
}
If a function is long (say, 20 lines or more), it is good to return early for a few error conditions at the beginning, so that the reader can focus on the logic when reading the rest of the function. If a function is small (say, 5 lines or less), return statements at the beginning can be distracting for the reader.
So the decision should be based primarily on whether the function becomes more readable or less readable.
Java good practices say that, as often as possible, return statements should be unique and written at the end of the method; to control what you return, use a variable. However, for returning from a void method, like in your example, what I'd do is perform the check in an intermediate method used only for that purpose. Anyway, don't take this too seriously - keywords like continue should never be used according to Java good practices, but they're there, within your reach.

Recursion or Looping [duplicate]

I have this method that calculates some statistics:
public void calculateAverage(int hour) {
    if (hour != 20) {
        int data = 0;
        int times = 0;
        for (CallQueue cq : queues) {
            data += cq.getCallsByTime().get(hour);
            times++;
        }
        averageData.add((double) data / times);
        calculateAverage(hour + 1);
    }
}
Now I am very proud that I have created a recursive method but I know that this could have been solved with a loop.
My question is: is it better to solve these kinds of problems recursively or with a loop?
Recursion in general
In general, recursion is more expensive, because a new stack frame with copies of the local variables has to be set up each time the function recurses.
A return address and the local state need to be saved so that the recursive procedure can resume in the right state after each call returns.
Iteration is better where possible; use recursion when iteration just won't cut it, or would result in much more complicated code.
Code Maintenance
From a maintenance perspective, debugging iterative code is a lot easier than debugging recursive procedures, because it is easier to understand what the state is at any particular iteration than to reason about what is happening at a particular depth of recursion.
Your code
The procedure calls itself, but each run has nothing to do with the results of the previous run. Runs being independent of each other is usually the biggest give-away that recursion is not necessary there.
In my opinion, the calculateAverage(hour + 1) call should be moved out of the method and replaced with a loop, as that would also make it clearer to someone reading your code that each hour is processed independently.
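A loop-based version of the same calculation might look like this (a sketch, assuming the same queues and averageData fields and the same end-of-day bound of 20 as in the original):
public void calculateAverage(int startHour) {
    for (int hour = startHour; hour < 20; hour++) {
        int data = 0;
        int times = 0;
        for (CallQueue cq : queues) {
            data += cq.getCallsByTime().get(hour);
            times++;
        }
        // average number of calls across all queues for this hour
        averageData.add((double) data / times);
    }
}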
In Java, C, and Python, recursion is fairly expensive compared to iteration (in general) because it requires the allocation of a new stack frame. In some C compilers, one can use a compiler flag to eliminate this overhead, which transforms certain types of recursion (actually, certain types of tail calls) into jumps instead of function calls. (source)
For this particular problem there isn't too much of a runtime difference. I personally would rather use iteration, I think it would be more simple and easier to understand, but to each his own I suppose.
Now, some recursive functions (like the naive recursive Fibonacci, for example) should be done by iteration instead, simply because the recursion can have exponential growth.
Generally, I don't use recursion unless it would actually make my problem easier to understand.
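To illustrate that point (a sketch; the method names are made up), the naive recursive Fibonacci recomputes the same subproblems over and over, while the iterative version does a single linear pass:
// Naive recursion: the calls for fib(n - 1) and fib(n - 2) overlap heavily,
// so the number of calls grows exponentially with n.
static long fibRecursive(int n) {
    return n < 2 ? n : fibRecursive(n - 1) + fibRecursive(n - 2);
}

// Iteration: O(n) time, O(1) extra space.
static long fibIterative(int n) {
    long prev = 0, curr = 1;
    for (int i = 0; i < n; i++) {
        long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return prev;
}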
You should also consider the boundary circumstances: for deep recursion the stack might overflow, which is a point in favour of loops.
I'm not sure which one runs faster, but that is relatively easy to measure, taking the JIT and other factors into consideration.
Code maintenance aspect: it is much easier for most of us to understand and fix loops than recursion. A developer's time is usually more important than minor performance differences.
It depends on the context. For example, if you have a tree of Composite objects (in SWT) and you wish to traverse it, the easiest way is to use recursion, like this:
private boolean checkControlParent(Composite comp) {
    boolean ret = false;
    if (comp != null) {
        if (this.equals(comp)) {
            ret = true;
        } else {
            ret = checkControlParent(comp.getParent());
        }
    }
    return ret;
}
Otherwise, if performance is important, be advised that recursive calls are in most cases slower than simple loops because of the function/method call overhead.
So the main thing is that if you need to iterate through objects where recursion is a natural solution and you don't risk a StackOverflowError go ahead and use recursion. Otherwise you'll probably better off with a loop.
One more thing: recursive methods sometimes tend to be harder to read, understand and debug.

Java: Exceptions as control flow?

I've heard that using exceptions for control flow is bad practice. What do you think of this?
public static StringMatch findStringMatch(String g0, String g1) {
    int g0Left = -1;
    int g0Right = -1;
    int g1Left = -1;
    int g1Right = -1;
    // if a match is found, set the above ints to the proper indices
    // ...
    // if not, the ints remain -1
    try {
        String gL0 = g0.substring(0, g0Left);
        String gL1 = g1.substring(0, g1Left);
        String g0match = g0.substring(g0Left, g0Right);
        String g1match = g1.substring(g1Left, g1Right);
        String gR0 = g0.substring(g0Right);
        String gR1 = g1.substring(g1Right);
        return new StringMatch(gL0, gR0, g0match, g1match, gL1, gR1);
    } catch (StringIndexOutOfBoundsException e) {
        return new StringMatch(); // no match found
    }
}
So, if no match has been found, the ints will be -1. This will cause an exception when I try to take the substring g0.substring(0, -1). Then the function just returns an object indicating that no match is found.
Is this bad practice? I could just check each index manually to see if they're all -1, but that feels like more work.
UPDATE
I have removed the try-catch block and replaced it with this:
if (g0Left == -1 || g0Right == -1 || g1Left == -1 || g1Right == -1) {
    return new StringMatch();
}
Which is better: checking if each variable is -1, or using a boolean foundMatch to keep track and just check that at the end?
Generally, exceptions are expensive operations and, as the name suggests, are meant for exceptional conditions. So using them to control the flow of your application is indeed considered bad practice.
Specifically, in the example you provided, you should do some basic validation of the inputs before calling substring and the StringMatch constructor. If it were a method that returned an error code when basic parameter validation fails, you could avoid checking beforehand, but that is not the case here.
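Putting that together with your update, a guard clause up front keeps the validation and the normal path explicit (a sketch reusing the indices and the StringMatch constructor from the question):
public static StringMatch findStringMatch(String g0, String g1) {
    int g0Left = -1, g0Right = -1, g1Left = -1, g1Right = -1;
    // ... set the indices if a match is found ...

    // Guard clause: no match found, return the empty result without relying on an exception.
    if (g0Left == -1 || g0Right == -1 || g1Left == -1 || g1Right == -1) {
        return new StringMatch();
    }

    return new StringMatch(g0.substring(0, g0Left), g0.substring(g0Right),
            g0.substring(g0Left, g0Right), g1.substring(g1Left, g1Right),
            g1.substring(0, g1Left), g1.substring(g1Right));
}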
I've done some testing on this. On modern JVMs, it actually doesn't impact runtime performance much (if at all). If you run with debugging turned on, then it does slow things down considerably.
See the following for details
(I should also mention that I still think this is a bad practice, even if it doesn't impact performance. More than anything, it reflects a possibly poor algorithm design that is going to be difficult to test)
Yes, this is a bad practice, especially when you have a means to avoid an exception (check the string length before trying to index into it). Try and catch blocks are designed to partition "normal" logic from "exceptional" and error logic. In your example, you have spread "normal" logic into the exceptional/error block (not finding a match is not exceptional). You are also misusing substring so you can leverage the error it produces as control flow.
Program flow should be as straight a line as possible (since even then applications get pretty complex) and should use standard control-flow structures. The next developer to touch the code may not be you and may (understandably) misread the non-standard way you are using exceptions instead of conditionals to determine control flow.
I am fighting a slightly different slant on this problem right now during some legacy code refactoring.
The largest issue that I find with this approach is that using the try/catch breaks normal programmatic flow.
In the application I am working on (and this is different from the sample you posted), exceptions are used to communicate from within a method call that a given outcome (for instance, looking for an account number and not finding it) occurred. This creates spaghetti code on the client side, since the calling method (during a non-exceptional, normal use-case event) breaks out of whatever code it was executing before the call and jumps into the catch block. This is repeated many times over in some very long methods, making the code very easy to misread.
For my situation, a method should return a value per its signature for all but truly exceptional events. The exception-handling mechanism is intended to take another path when an exception occurs (try to recover from within the method so you can still return normally).
To my mind you could do this if you scope your try/catch blocks very tightly; but I think it is a bad habit and can lead to code that is very easy to misinterpret, since the calling code will treat any thrown exception as a 'GOTO'-type message, altering program flow. I fear that, although this case does not fall into that trap, doing this often could turn into a coding habit leading to the nightmare that I am living right now.
And that nightmare is not pleasant.

Should we assert every object creation in java?

Sounds like a stupid question with an obvious answer :)
Still I've ventured to ask just be doubly sure.
We are indeed using asserts like given below
ArrayList alProperties = new ArrayList();
assert alProperties != null : "alProperties is null";
The problem is that making a small and simple document on asserts for others to follow is difficult. There are many books on asserts, but ideally I'd like to give a new programmer very simple guidelines on using something like asserts. By the way, does some tool like PMD check for proper usage of asserts?
Thanks in advance.
There's no sane reason to use asserts like that. If the object isn't created for some reason, your assert won't even be reached (because an exception was thrown or the VM exited, for example).
There are some fairly concise guidelines on using assertions in Sun's Programming with Assertions. That article advises that asserts should be used for things like Internal Invariants, Control-Flow Invariants, and Preconditions, Postconditions, and Class Invariants.
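For instance, a control-flow invariant in that style asserts that a branch you believe to be unreachable is in fact never taken. A small sketch (the enum and method are made up for illustration):
enum Suit { CLUBS, DIAMONDS, HEARTS, SPADES }

static String color(Suit suit) {
    switch (suit) {
        case CLUBS:
        case SPADES:
            return "black";
        case DIAMONDS:
        case HEARTS:
            return "red";
        default:
            // Control-flow invariant: with the enum above, this branch should be unreachable.
            assert false : "Unknown suit: " + suit;
            return null;
    }
}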
No, you don't want to check object creation.
If the object creation fails, the JVM will throw an OutOfMemoryError, and if that happens you're likely to be screwed beyond repair anyway.
That's like not trusting the JVM. Concerning what you take as a given, you have to draw a line somewhere...
This assert only clutters your code; it would be equivalent to this assert:
boolean a = true;
assert a : "a should be true";
You shouldn't be testing your JVM, unless that's the point of your program (say, it's a test suite for a JVM you are making). Instead you should be testing your pre-conditions, post-conditions and invariants. Sometimes these tests are too basic or too expensive.
Preconditions probably should only appear at the start of a method (if you have very long methods, then you should break them into small parts, even if they are all private).
Postconditions should make it clear what you have returned to the caller. You don't test that the sqrt function just returned the square root, but you might test that the result is positive to make it clear what you are expecting (perhaps later code uses complex numbers and yours is not tested for that); otherwise, leave a comment at the bottom.
Invariants also often can't be tested: you can't test that your current solution is the correct partial solution (see below) -- though this is one of the nice things about writing things with tail recursion. Instead, you declare the invariant with a comment.
If you are calling code externally, you would also use an assert; for instance, in your example, if you had ArrayList.Create(), you might choose to assert that the result is not null - but only because you don't trust the other code. If you wrote that code yourself, you could put the assertion (comment or otherwise) in the factory method itself.
int max(int[] a, int n) {
    assert n <= a.length : "N should not exceed the bounds of the array";
    assert n > 0 : "N should be at least one";

    // invariant: m is the maximum of a[0..i]
    int m = a[0];
    for (int i = 1; i < n; i++) {
        if (m < a[i])
            m = a[i];
    }
    // if these were not basic types, we might assert that we found
    // something sensible here, such as m != null
    return m;
}
In Java, each call to new either returns a non-null reference to the new object or throws an Exception or an Error. In the first case your assert is true; in the second case the assert is never reached, because you end up in the nearest matching catch block.
This assert tests whether your Java implementation is broken, and in that case you can't even rely on the assert itself. So I would not write such asserts. Use assert for restrictions on objects that aren't enforced by the language (for instance, if your method is passed a parameter that is null but shouldn't be).
I'm not sure I completely understand your question, but I think assertions of that kind aren't necessary.
When you create an instance, if the program flow continues, the instance isn't a null reference.
You want ASSERTS to check properties or invariants of your program. A good document to teach this should encourage the programmer to think about such properties in a systematic/methodical manner.
If the assert fails, believe me, you're going to have bigger problems than just dealing with the assert.
If that assert fails, I think it's time to look for another job, because the computer is not behaving the way it's supposed to, and when that happens all hell is going to break loose!
