I have found myself using the following practice, but something inside me kind of cringes every time I use it. Basically, it's a precondition test on the parameters to determine whether the actual work should be done.
public static void doSomething(List<String> things)
{
if(things == null || things.size() <= 0)
return;
//...snip... do actual work
}
It is good practice to return at the earliest opportunity.
That way the least amount of code gets executed and evaluated.
Code that does not run cannot be in error.
Furthermore it makes the function easier to read, because you do not have to deal with all the cases that do not apply anymore.
Compare the following code
private Date someMethod(Boolean test) {
Date result;
if (null == test) {
result = null;
} else {
result = test ? something : other;
}
return result;
}
vs
private Date someMethod(Boolean test) {
if (null == test) {
return null;
}
return test ? something : other;
}
The second one is shorter, does not need an else and does not need the temp variable.
Note that in Java the return statement exits the function right away; in other languages (e.g. Pascal) the almost equivalent code result:= something; does not return.
Because of this fact it is customary to return at many points in Java methods.
Calling this bad practice is ignoring the fact that that particular train has long since left the station in Java.
If you are going to exit a function at many points anyway, it's best to exit at the earliest opportunity.
It's a matter of style and personal preference. There's nothing wrong with it.
To the best of my understanding - no.
For the sake of easier debugging there should be only one return/exit point in a subroutine, method or function.
With such an approach your program may become longer and less readable, but while debugging you can put a breakpoint at the exit and always see the state of what you return. For example, you can log the state of all local variables - it may be really helpful for troubleshooting.
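As an illustration, here is a minimal sketch of that single-exit style (the method and its classification logic are made up for the example):
private static String classify(Integer value) {
    String result;                        // single result variable
    if (value == null) {
        result = "missing";
    } else if (value > 0) {
        result = "positive";
    } else {
        result = "non-positive";
    }
    // a single breakpoint or log statement here sees every outcome
    System.out.println("classify returns: " + result);
    return result;                        // the one and only exit point
}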
It looks like there are two "schools" - one says "return as early as possible", whereas the other says "there should be only one return/exit point in a program".
I am a proponent of the first one, though in practice sometimes follow the second one, just to save time.
Also, do not forget about exceptions. Very often the fact that you have to return from a method early means that you are in an exceptional situation. In your example I think throwing an exception is more appropriate.
PMD seems to think so, and that you should always let your methods run to the end, however, for certain quick sanity checks, I still use premature return statements.
It does impair the readability of the method a little, but in some cases that can be better than adding yet another if statement or other means by which to run the method to the end for all cases.
There's nothing inherently wrong with it, but if it makes you cringe, you could throw an IllegalArgumentException instead. In some cases, that's more accurate. It could, however, result in a bunch of code that look this whenever you call doSomething:
try {
doSomething(myList);
} catch (IllegalArgumentException e) {}
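For reference, the throwing variant of the method might look something like this (just a sketch, not necessarily what the original poster needs):
public static void doSomething(List<String> things) {
    if (things == null || things.isEmpty()) {
        throw new IllegalArgumentException("things must be a non-empty list");
    }
    //...snip... do actual work
}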
There is no correct answer to this question, it is a matter of taste.
In the specific example above there may be better ways of enforcing a pre-condition, but I view the general pattern of multiple early returns as akin to guards in functional programming.
I personally have no issue with this style - I think it can result in cleaner code. Trying to contort everything to have a single exit point can increase verbosity and reduce readability.
It's good practice. So continue with your good work.
There is nothing wrong with it. Personally, I would use an else statement to execute the rest of the function and let it return naturally.
If you want to avoid the "return" in your method, you could use your own subclass of Exception and handle it where the method is called.
For example :
public static void doSomething(List<String> things) throws MyExceptionIfThingsIsEmpty {
if(things == null || things.size() <= 0)
throw new MyExceptionIfThingsIsEmpty(1, "Error, the list is empty !");
//...snip... do actual work
}
Edit :
If you don't want to use the "return" statement, you could do the opposite in the if() :
if(things != null && things.size() > 0)
// do your things
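Put back into the original method, that inversion would look roughly like this:
public static void doSomething(List<String> things) {
    if (things != null && things.size() > 0) {
        //...snip... do actual work
    }
    // otherwise there is nothing to do and the method simply runs to its end
}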
If the function is long (say, 20 lines or more), then it is good to return early for a few error conditions at the beginning so that the reader can focus on the logic when reading the rest of the function. If the function is small (say, 5 lines or less), then return statements at the beginning can be distracting for the reader.
So the decision should be based primarily on whether the function becomes more readable or less readable.
Java good practices say that, as often as possible, return statements should be unique and written at the end of the method; to control what you return, use a variable. However, for returning from a void method, like the example you use, what I'd do would be to perform the check in an intermediate method used only for that purpose. Anyway, don't take this too seriously - keywords like continue should never be used according to Java good practices either, but they're there, within your reach.
I've heard that using exceptions for control flow is bad practice. What do you think of this?
public static StringMatch findStringMatch(String g0, String g1) {
int g0Left = -1;
int g0Right = -1;
int g1Left = -1;
int g1Right = -1;
//if a match is found, set the above ints to the proper indices
//...
//if not, the ints remain -1
try {
String gL0 = g0.substring(0, g0Left);
String gL1 = g1.substring(0, g1Left);
String g0match = g0.substring(g0Left, g0Right);
String g1match = g1.substring(g1Left, g1Right);
String gR0 = g0.substring(g0Right);
String gR1 = g1.substring(g1Right);
return new StringMatch(gL0, gR0, g0match, g1match, gL1, gR1);
}
catch (StringIndexOutOfBoundsException e) {
return new StringMatch(); //no match found
}
So, if no match has been found, the ints will be -1. This will cause an exception when I try to take the substring g0.substring(0, -1). Then the function just returns an object indicating that no match is found.
Is this bad practice? I could just check each index manually to see if they're all -1, but that feels like more work.
UPDATE
I have removed the try-catch block and replaced it with this:
if (g0Left == -1 || g0Right == -1 || g1Left == -1 || g1Right == -1) {
return new StringMatch();
}
Which is better: checking if each variable is -1, or using a boolean foundMatch to keep track and just check that at the end?
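For comparison, a sketch of the boolean-flag alternative (foundMatch is a hypothetical local that the matching code would set):
boolean foundMatch = false;
// ... matching logic: set foundMatch = true at the point where all four indices are assigned ...
if (!foundMatch) {
    return new StringMatch();   // single, explicit "no match" path
}
// from here on the indices are known to be valid substring bounds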
Generally, exceptions are expensive operations and, as the name suggests, meant for exceptional conditions. So using them to control the flow of your application is indeed considered bad practice.
Specifically in the example you provided, you would need to do some basic validation of the inputs you are providing to the StringMatch constructor. If it were a method that returns an error code in case some basic parameter validation fails you could avoid checking beforehand, but this is not the case.
I've done some testing on this. On modern JVMs, it actually doesn't impact runtime performance much (if at all). If you run with debugging turned on, then it does slow things down considerably.
See the following for details
(I should also mention that I still think this is a bad practice, even if it doesn't impact performance. More than anything, it reflects a possibly poor algorithm design that is going to be difficult to test)
Yes, this is a bad practice, especially when you have a means to avoid an exception (check the string length before trying to index into it). Try and catch blocks are designed to partition "normal" logic from "exceptional" and error logic. In your example, you have spread "normal" logic into the exceptional/error block (not finding a match is not exceptional). You are also misusing substring so you can leverage the error it produces as control flow.
Program flow should be in as straight a line as possible (since even then applications get pretty complex), and utilize standard control flow structures. The next developer to touch the code may not be you and may (rightly) misunderstand the non-standard way you are using exceptions instead of conditionals to determine control flow.
I am fighting a slightly different slant on this problem right now during some legacy code refactoring.
The largest issue that I find with this approach is that using the try/catch breaks normal programmatic flow.
In the application I am working on (and this is different from the sample you have applied), exceptions are used to communicate from within a method call that a given outcome (for instance, looking for an account number and not finding it) occurred. This creates spaghetti code on the client side, since the calling method (during a non-exceptional event, or a normal use-case event) breaks out of whatever code it was executing before the call and into the catch block. This is repeated many times over in some very long methods, making the code very easy to misread.
For my situation, a method should return a value per its signature for all but truly exceptional events. The exception handling mechanism is intended to take another path when the exception occurs (try and recover from within the method so you can still return normally).
To my mind you could do this if you scope your try/catch blocks very tightly; but I think it is a bad habit and can lead to code that is very easy to misinterpret, since the calling code will interpret any thrown exception as a 'GOTO' type message, altering program flow. I fear that although this case does not fall into this trap, doing this often could result in a coding habit leading to the nightmare that I am living right now.
And that nightmare is not pleasant.
Sounds like a stupid question with an obvious answer :)
Still, I've ventured to ask just to be doubly sure.
We are indeed using asserts like given below
ArrayList alProperties = new ArrayList();
assert alProperties != null : "alProperties is null";
The problem is that making a small and simple document on asserts that people can follow is difficult. There are many books on asserts, but ideally I'd like to give a new programmer very simple guidelines on using something like asserts. By the way, does some tool like PMD check for proper usage of asserts?
Thanks in advance.
There's no sane reason to use asserts like that. If the object won't be created for some reason, your assert won't even be reached (because an exception was thrown or the VM exited, for example)
There are some fairly concise guidelines on using assertions in Sun's Programming with Assertions. That article advises that asserts should be used for things like Internal Invariants, Control-Flow Invariants, and Preconditions, Postconditions, and Class Invariants.
No, you don't want to check object creation.
If the object creation fails, the JVM will throw an OutOfMemoryError, and if that happens you're likely to be screwed beyond repair anyway.
That's like not trusting the JVM. Concerning what you take as a given, you've got to draw a line somewhere...
This assert only clutters your code, it would be equivalent to this assert:
boolean a = true;
assert a : "A should be true";
You shouldn't be testing your JVM, unless that's the point of your program (say, it's a test suite for a JVM you are making). Instead you should be testing your pre-conditions, post-conditions and invariants. Sometimes these tests are too basic or too expensive.
Pre-conditions probably should only appear at the start of a method (if your have very long methods, then you should break that method into small parts, even if they are all private).
Post-conditions should make it clear what you are returning to the caller: you don't test that the sqrt function just returned the square root, but you might assert that the result is positive to make it clear what you are expecting (perhaps later code uses complex numbers and yours is not tested for that). Otherwise, leave a comment at the bottom.
Invariants also often can't be tested, you can't test that your current solution is the correct partial solution (see below) -- though this is one of the nice things about writing things with tail-recursion. Instead, you declare the invariant with a comment.
If you are calling things externally, you would also use an assert, for instance in your example if you had ArrayList.Create(), then you might choose the assertion check for null. But only because you don't trust the other code. If you wrote that code, you could put the assertion (comment or otherwise) in the factory method itself.
int max(int[] a, int n) {
assert n <= a.length : "N should not exceed the bounds of the array";
assert n > 0 : "N should be at least one";
// invariant: m is the maximum of a[0..i]
int m = a[0];
for( int i = 1; i < n; i++ ) {
if( m < a[i] )
m = a[i];
}
// if these were not basic types, we might assert that we found
// something sensible here, such as m != null
return m;
}
In Java each call to new returns either a non-null reference to the new object or raises an Exception or an Error. In the first case your assert is true, in the second case the assert will not be reached, because you end in the next matching catch-block.
This assert tests if your Java-implementation is broken and in this case you can't even rely on the assert. So I would not make such asserts. Use assert for restrictions on objects, that aren't enforced by the language (for instance, if your method is passed a parameter that is null but shouldn't be).
I'm not sure I completely understand your question, but I think that assertions of that kind aren't necessary.
When you create an instance, if the program flow continues, the instance isn't a null reference.
You want ASSERTS to check properties or invariants of your program. A good document to teach this should encourage the programmer to think about such properties in a systematic/methodical manner.
If the assert fails, believe me, you're going to have bigger problems than just dealing with the assert.
If that assert fails I think it's time I look for another job because the computer is not behaving how it's supposed to and when that happens all hell is going to break loose!
In the same spirit of other platforms, it seemed logical to follow up with this question: What are common non-obvious mistakes in Java? Things that seem like they ought to work, but don't.
I won't give guidelines as to how to structure answers, or what's "too easy" to be considered a gotcha, since that's what the voting is for.
See also:
Perl - Common gotchas
.NET - Common gotchas
"a,b,c,d,,,".split(",").length
returns 4, not 7 as you might (and I certainly did) expect. split ignores all trailing empty Strings returned. That means:
",,,a,b,c,d".split(",").length
returns 7! To get what I would think of as the "least astonishing" behaviour, you need to do something quite astonishing:
"a,b,c,d,,,".split(",",-1).length
to get 7.
Comparing equality of objects using == instead of .equals() -- which behaves completely differently for primitives.
This gotcha ensures newcomers are befuddled when "foo" == "foo" but new String("foo") != new String("foo").
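A quick demonstration of that befuddlement (string literals are interned, strings created with new are not):
String a = "foo";
String b = "foo";
System.out.println(a == b);                                   // true: both refer to the interned literal
System.out.println(new String("foo") == new String("foo"));   // false: two distinct objects
System.out.println(new String("foo").equals("foo"));          // true: equals compares content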
I think a very sneaky one is the String.substring method. This re-uses the same underlying char[] array as the original string with a different offset and length.
This can lead to very hard-to-see memory problems. For example, you may be parsing extremely large files (XML perhaps) for a few small bits. If you have converted the whole file to a String (rather than used a Reader to "walk" over the file) and use substring to grab the bits you want, you are still carrying around the full file-sized char[] array behind the scenes. I have seen this happen a number of times and it can be very difficult to spot.
In fact this is a perfect example of why interface can never be fully separated from implementation. And it was a perfect introduction (for me) a number of years ago as to why you should be suspicious of the quality of 3rd party code.
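On the JDK versions where substring shares the parent's char[] (this was changed in later releases), the usual workaround was to copy the small piece explicitly; a sketch, with readWholeFileAsString being a hypothetical helper:
String huge = readWholeFileAsString(file);          // hypothetical: loads the whole file into one String
String tiny = new String(huge.substring(10, 20));   // the copy keeps only the characters we need
huge = null;                                        // the file-sized char[] behind 'huge' can now be collected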
Overriding equals() but not hashCode()
It can have really unexpected results when using maps, sets or lists.
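A minimal illustration of what goes wrong (the Point class here is made up for the example):
import java.util.HashSet;
import java.util.Set;

class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
    }
    // hashCode() deliberately NOT overridden

    public static void main(String[] args) {
        Set<Point> set = new HashSet<Point>();
        set.add(new Point(1, 2));
        // Almost certainly prints false: an "equal" point is in the set,
        // but the two instances hash to different buckets.
        System.out.println(set.contains(new Point(1, 2)));
    }
}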
SimpleDateFormat is not thread safe.
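A common workaround is to give each thread its own instance, for example via ThreadLocal (just one possible approach):
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHolder {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        new ThreadLocal<SimpleDateFormat>() {
            @Override
            protected SimpleDateFormat initialValue() {
                return new SimpleDateFormat("yyyy-MM-dd");
            }
        };

    public static String format(Date date) {
        return FORMAT.get().format(date);   // each thread formats with its own instance
    }
}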
There are two that annoy me quite a bit.
Date/Calendar
First, the Java Date and Calendar classes are seriously messed up. I know there are proposals to fix them, I just hope they succeed.
Calendar.get(Calendar.DAY_OF_MONTH) is 1-based
Calendar.get(Calendar.MONTH) is 0-based
Auto-boxing preventing thinking
The other one is Integer vs int (this goes for any primitive version of an object). This is specifically an annoyance caused by not thinking of Integer as different from int (since you can treat them the same much of the time due to auto-boxing).
int x = 5;
int y = 5;
Integer z = new Integer(5);
Integer t = new Integer(5);
System.out.println(5 == x); // Prints true
System.out.println(x == y); // Prints true
System.out.println(x == z); // Prints true (auto-boxing can be so nice)
System.out.println(5 == z); // Prints true
System.out.println(z == t); // Prints false
Since z and t are created with new, they are different objects even though they hold the same value. What you really meant is:
System.out.println(z.equals(t)); // Prints true
This one can be a pain to track down. You go debugging something, everything looks fine, and you finally end up finding that your problem is that 5 != 5 when both are objects.
Being able to say
List<Integer> stuff = new ArrayList<Integer>();
stuff.add(5);
is so nice. It made Java so much more usable to not have to put all those "new Integer(5)"s and "((Integer) list.get(3)).intValue()" lines all over the place. But those benefits come with this gotcha.
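A related wrinkle: autoboxing goes through Integer.valueOf, which caches small values, so == on boxed Integers sometimes appears to work, which makes the bug even harder to spot. A short demonstration:
Integer a = 127, b = 127;
Integer c = 128, d = 128;
System.out.println(a == b);      // true: values in -128..127 are cached by Integer.valueOf
System.out.println(c == d);      // false on a default JVM: outside the cache, distinct objects
System.out.println(c.equals(d)); // true: always compare boxed values with equals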
Try reading Java Puzzlers which is full of scary stuff, even if much of it is not stuff you bump into every day. But it will destroy much of your confidence in the language.
List<Integer> list = new java.util.ArrayList<Integer>();
list.add(1);
list.remove(1); // throws IndexOutOfBoundsException: remove(int) removes by index, not by value
The old APIs were not designed with boxing in mind, so overload with primitives and objects.
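To remove the value 1 rather than the element at index 1, you have to force the Object overload:
list.remove(Integer.valueOf(1)); // removes the first element equal to 1 (the Object overload)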
This one I just came across:
double[] aList = new double[400];
List l = Arrays.asList(aList);
//do intense stuff with l
Anyone see the problem?
What happens is, Arrays.asList() expects an array of object types (Double[], for example). It'd be nice if it just threw an error for the previous code. However, asList() can also take arguments like so:
Arrays.asList(1, 9, 4, 4, 20);
So what the code does is create a List with one element - a double[].
I should've figured when it took 0ms to sort a 750000 element array...
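One way around it is to box the elements into a Double[] first (a sketch):
double[] aList = new double[400];
Double[] boxed = new Double[aList.length];
for (int i = 0; i < aList.length; i++) {
    boxed[i] = aList[i];               // box each element explicitly
}
List<Double> l = Arrays.asList(boxed); // now really a 400-element List<Double>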
This one has tripped me up a few times, and I've seen quite a few experienced Java devs waste a lot of time on it.
ClassNotFoundException --- you know that the class is in the classpath BUT you are NOT sure why the class is NOT getting loaded.
Actually, the class has a static block. There was an exception in the static block, and someone ate the exception when they should NOT have; the failure should surface as an ExceptionInInitializerError. So always look for static blocks that might trip you up. It also helps to move code from static blocks into static methods, so that the code is much easier to step through with a debugger.
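A tiny reproduction of that situation (the Config class and its system property are made up for the example):
class Config {
    static {
        if (System.getProperty("config.file") == null) {
            throw new IllegalStateException("config.file not set");
        }
    }
}

public class Main {
    public static void main(String[] args) {
        // The first use of Config runs its static block; if the block throws,
        // you get an ExceptionInInitializerError rather than a ClassNotFoundException.
        Config config = new Config();
        System.out.println(config);
    }
}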
Floats
I don't know how many times I've seen
floata == floatb
where the "correct" test should be
Math.abs(floata - floatb) < 0.001
I really wish BigDecimal with a literal syntax was the default decimal type...
Not really specific to Java, since many (but not all) languages implement it this way, but the % operator isn't a true modulo operator, as it works with negative numbers. This makes it a remainder operator, and can lead to some surprises if you aren't aware of it.
The following code would appear to print either "even" or "odd" but it doesn't.
public static void main(String[] args)
{
String a = null;
int n = "number".hashCode();
switch( n % 2 ) {
case 0:
a = "even";
break;
case 1:
a = "odd";
break;
}
System.out.println( a );
}
The problem is that the hash code for "number" is negative, so the n % 2 operation in the switch is also negative. Since there's no case in the switch to deal with the negative result, the variable a never gets set. The program prints out null.
Make sure you know how the % operator works with negative numbers, no matter what language you're working in.
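One common fix is to normalize the remainder (Java 8 and later also provide Math.floorMod):
int n = "number".hashCode();
int bucket = ((n % 2) + 2) % 2;        // always 0 or 1, even when n is negative
int bucket2 = Math.floorMod(n, 2);     // Java 8+: same result, clearer intent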
Manipulating Swing components from outside the event dispatch thread can lead to bugs that are extremely hard to find. This is something even we (seasoned programmers with 3 and 6 years of Java experience, respectively) forget frequently! Sometimes these bugs sneak in after having written code right and refactoring carelessly afterwards...
See this tutorial why you must.
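The usual cure is to hand the update to the event dispatch thread; a minimal sketch (status here stands for some existing JLabel):
// called from a background thread; 'status' is a JLabel created elsewhere on the EDT
SwingUtilities.invokeLater(new Runnable() {
    public void run() {
        status.setText("done");   // the actual component update now runs on the EDT
    }
});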
Immutable strings, which means that certain methods don't change the original object but instead return a modified object copy. When starting with Java I used to forget this all the time and wondered why the replace method didn't seem to work on my string object.
String text = "foobar";
text.replace("foo", "super");
System.out.print(text); // still prints "foobar" instead of "superbar"
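The fix is simply to keep the returned copy:
String text = "foobar";
text = text.replace("foo", "super"); // replace returns a new String
System.out.print(text);              // now prints "superbar"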
I think a big gotcha that would always stump me when I was a young programmer was the ConcurrentModificationException you get when removing from a collection that you are iterating over:
List list = new ArrayList();
list.add("a");
list.add("b");
list.add("c");
Iterator it = list.iterator();
while(it.hasNext()){
    it.next();      //some code that does some stuff
    list.remove(0); //BOOM! ConcurrentModificationException on the next it.next()
}
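The safe way is to remove through the iterator itself (or to collect the items and remove them after the loop); a sketch:
List<String> list = new ArrayList<String>();
list.add("a");
list.add("b");
list.add("c");
Iterator<String> it = list.iterator();
while (it.hasNext()) {
    String s = it.next();
    if ("a".equals(s)) {
        it.remove();   // modifying only via the iterator avoids the exception
    }
}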
If you have a method with the same name as the class BUT with a return type: although this method looks like a constructor (to a noob), it is NOT.
Passing arguments to the main method - it takes some time for noobs to get used to.
Passing . as the classpath argument for executing a program in the current directory.
Realizing that the name of an array of Strings is not obvious.
hashCode and equals : a lot of java developers with more than 5 years experience don't quite get it.
Set vs List
Until JDK 6, Java did not have NavigableSet to let you easily iterate through a Set or Map.
Integer division
1/2 == 0 not 0.5
Using the ? generics wildcard.
People see it and think they have to, e.g. use a List<?> when they want a List they can add anything to, without stopping to think that a List<Object> already does that. Then they wonder why the compiler won't let them use add(), because a List<?> really means "a list of some specific type I don't know", so the only thing you can do with that List is get Object instances from it.
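A short illustration of the difference:
List<Object> anything = new ArrayList<Object>();
anything.add("works");                      // fine: the list is declared to hold any Object

List<?> unknown = Arrays.asList("some", "strings");
// unknown.add("nope");                     // compile error: the element type is unknown
Object o = unknown.get(0);                  // reading elements back as Object is all you can do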
(un)Boxing and Long/long confusion. Contrary to pre-Java 5 experience, you can get a NullPointerException on the 2nd line below.
Long msec = getSleepMsec();
Thread.sleep(msec);
If getSleepMsec() returns null, the unboxing throws a NullPointerException.
The default hash is non-deterministic, so if used for objects in a HashMap, the ordering of entries in that map can change from run to run.
As a simple demonstration, the following program can give different results depending on how it is run:
public static void main(String[] args) {
System.out.println(new Object().hashCode());
}
How much memory is allocated to the heap, or whether you're running it within a debugger, can both alter the result.
When you create a duplicate or slice of a ByteBuffer, it does not inherit the value of the order property from the parent buffer, so code like this will not do what you expect:
ByteBuffer buffer1 = ByteBuffer.allocate(8);
buffer1.order(ByteOrder.LITTLE_ENDIAN);
buffer1.putInt(2, 1234);
ByteBuffer buffer2 = buffer1.duplicate();
System.out.println(buffer2.getInt(2));
// Output is "-771489792", not "1234" as expected
Among the common pitfalls, well known but still biting occasionally programmers, there is the classical if (a = b) which is found in all C-like languages.
In Java, it can work only if a and b are boolean, of course. But too often I see newbies writing tests like if (a == true) (while if (a) is shorter, more readable and safer...) and occasionally writing if (a = true) by mistake, wondering why the test doesn't work.
For those not getting it: the last statement first assigns true to a, then does the test, which always succeeds!
-
One that bites a lot of newbies, and even some distracted more experienced programmers (I found it in our code), is the if (str == "foo"). Note that I always wondered why Sun overrode the + sign for strings but not the == one, at least for simple cases (case sensitive).
For newbies: == compares references, not the content of the strings. You can have two strings of same content, stored in different objects (different references), so == will be false.
Simple example:
final String F = "Foo";
String a = F;
String b = F;
assert a == b; // Works! They refer to the same object
String c = "F" + F.substring(1); // Still "Foo"
assert c.equals(a); // Works
assert c == a; // Fails
-
And I also saw if (a == b & c == d) or something like that. It works (curiously), but we lose the short-circuit behaviour of the logical operator (don't try to write if (r != null & r.isSomething())!).
For newbies: when evaluating a && b, Java doesn't evaluate b if a is false. In a & b, Java evaluates both parts and then does the operation; so the second part can fail (for example with a NullPointerException).
[EDIT] Good suggestion from J Coombs, I updated my answer.
The non-unified type system contradicts the object orientation idea. Even though everything doesn't have to be heap-allocated objects, the programmer should still be allowed to treat primitive types by calling methods on them.
The generic type system implementation with type erasure is horrible, and throws most students off when they learn about generics for the first time in Java: why do we still have to typecast if the type parameter is already supplied? Yes, they ensured backward compatibility, but at a rather silly cost.
Going first, here's one I caught today. It had to do with Long/long confusion.
public void foo(Object obj) {
if (grass.isGreen()) {
Long id = grass.getId();
foo(id); // overload resolution picks foo(Object) - i.e. this method - not foo(long)
}
}
private void foo(long id) {
Lawn lawn = bar.getLawn(id);
if (lawn == null) {
throw new IllegalStateException("grass should be associated with a lawn");
}
}
Obviously, the names have been changed to protect the innocent :)
Another one I'd like to point out is the (too prevalent) drive to make APIs generic. Using well-designed generic code is fine. Designing your own is complicated. Very complicated!
Just look at the sorting/filtering functionality in the new Swing JTable. It's a complete nightmare. It's obvious that you are likely to want to chain filters in real life but I have found it impossible to do so without just using the raw typed version of the classes provided.
System.out.println(Calendar.getInstance(TimeZone.getTimeZone("Asia/Hong_Kong")).getTime());
System.out.println(Calendar.getInstance(TimeZone.getTimeZone("America/Jamaica")).getTime());
The output is the same: getTime() returns a java.util.Date, which is just an instant in time with no time zone attached, so both calls hand back the same instant and it prints in the JVM's default zone.
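To actually see the two zones you have to format with an explicit time zone; a sketch:
SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm z");
fmt.setTimeZone(TimeZone.getTimeZone("Asia/Hong_Kong"));
System.out.println(fmt.format(new Date()));   // Hong Kong wall-clock time

fmt.setTimeZone(TimeZone.getTimeZone("America/Jamaica"));
System.out.println(fmt.format(new Date()));   // Jamaica wall-clock time, same instant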
I had some fun debugging a TreeSet once, as I was not aware of this information from the API:
Note that the ordering maintained by a set (whether or not an explicit comparator is provided) must be consistent with equals if it is to correctly implement the Set interface. (See Comparable or Comparator for a precise definition of consistent with equals.) This is so because the Set interface is defined in terms of the equals operation, but a TreeSet instance performs all key comparisons using its compareTo (or compare) method, so two keys that are deemed equal by this method are, from the standpoint of the set, equal. The behavior of a set is well-defined even if its ordering is inconsistent with equals; it just fails to obey the general contract of the Set interface.
http://download.oracle.com/javase/1.4.2/docs/api/java/util/TreeSet.html
Objects with correct equals/hashcode implementations were being added and never seen again as the compareTo implementation was inconsistent with equals.
IMHO
1. Using vector.add(collection) instead of vector.addAll(collection). The first adds the collection object itself to the vector; the second adds the contents of the collection.
2. Though not related to programming exactly, the use of XML parsers that come from multiple sources like Xerces and JDOM. Relying on different parsers and having their jars in the classpath is a nightmare.
I often see code like:
Iterator i = list.iterator();
while(i.hasNext()) {
...
}
but I write that (when Java 1.5 isn't available or for each can't be used) as:
for(Iterator i = list.iterator(); i.hasNext(); ) {
...
}
because
It is shorter
It keeps i in a smaller scope
It reduces the chance of confusion. (Is i used outside the while? Where is i declared?)
I think code should be as simple to understand as possible so that I only have to make complex code to do complex things. What do you think? Which is better?
From: http://jamesjava.blogspot.com/2006/04/iterating.html
I prefer the for loop because it also sets the scope of the iterator to just the for loop.
There are appropriate uses for the while, the for, and the foreach constructs:
while - Use this if you are iterating and the deciding factor for looping or not is based merely on a condition. In this loop construct, keeping an index is only a secondary concern; everything should be based on the condition
for - Use this if you are looping and your primary concern is the index of the array/collection/list. It is more useful to use a for if you are most likely to go through all the elements anyway, and in a particular order (e.g., going backwards through a sorted list).
foreach - Use this if you merely need to go through your collection regardless of order.
Obviously there are exceptions to the above, but that's the general rule I use when deciding to use which. That being said I tend to use foreach more often.
Why not use the for-each construct? (I haven't used Java in a while, but this exists in C# and I'm pretty sure Java 1.5 has this too):
List<String> names = new ArrayList<String>();
names.add("a");
names.add("b");
names.add("c");
for (String name : names)
System.out.println(name.charAt(0));
I think scope is the biggest issue here, as you have pointed out.
In the "while" example, the iterator is declared outside the loop, so it will continue to exist after the loop is done. This may cause issues if this same iterator is used again at some later point. E. g. you may forget to initialize it before using it in another loop.
In the "for" example, the iterator is declared inside the loop, so its scope is limited to the loop. If you try to use it after the loop, you will get a compiler error.
If you're only going to use the iterator once and throw it away, the second form is preferred; otherwise you must use the first form.
IMHO, the for loop is less readable in this scenario if you look at the code from the perspective of the English language. I am working on code where the author abuses the for loop, and it ain't pretty. Compare the following:
for (; (currUserObjectIndex < _domainObjectReferences.Length) && (_domainObjectReferences[currUserObjectIndex].VisualIndex == index); ++currUserObjectIndex)
++currNumUserObjects;
vs
while (currUserObjectIndex < _domainObjectReferences.Length && _domainObjectReferences[currUserObjectIndex].VisualIndex == index)
{
++currNumUserObjects;
++currUserObjectIndex;
}
I would agree that the "for" loop is clearer and more appropriate when iterating.
The "while" loop is appropriate for polling, or where the number of loops to meet exit condition will change based on activity inside the loop.
Not that it probably matters in this case, but compilers, VMs and CPUs normally have special optimization techniques they use under the hood that make for loops perform better (and in the near future, in parallel); in general they don't do that with while loops (because it's harder to determine how the loop is actually going to run). But in most cases code clarity should trump optimization.
Using a for loop you can reuse a single counter variable cleanly, because the loop header resets and updates it in one place (and, if you declare it in the header, limits its scope to that loop). With a while loop the counter has to be managed by hand outside the header.
For example:
int i;
for (i = 0; i < n1; i++) { /* do something */ }
for (i = 0; i < n2; i += 2) { /* do something else */ }
So after the first loop i equals n1, and the second loop's header simply resets it to 0.
However:
int i = 0;
while (i < limit) { /* do something */; i++; }
Here i equals limit at the end, so to reuse the same i in another while loop you have to remember to reset it yourself.
Either is fine. I use for () myself, and I don't know if there are compile issues. I suspect they both get optimized down to pretty much the same thing.
I agree that the for loop should be used whenever possible but sometimes there's more complex logic that controls the iterator in the body of the loop. In that case you have to go with while.
I use the for loop for clarity, and the while loop when faced with some nondeterministic condition.
Both are fine, but remember that sometimes access to the Iterator directly is useful (such as if you are removing elements that match a certain condition - you will get a ConcurrentModificationException if you do collection.remove(o) inside a for(T o : collection) loop).
I prefer to write the for(blah : blah) [foreach] syntax almost all of the time because it seems more naturally readable to me. The concept of iterators in general doesn't really have parallels outside of programming.
Academia tends to prefer the while-loop as it makes for less complicated reasoning about programs. I tend to prefer the for- or foreach-loop structures as they make for easier-to-read code.
Although both are really fine, I tend to use the first example because it is easier to read.
There are fewer operations happening on each line with the while() loop, making it easier for someone new to the code to understand what's going on.
That type of construct also allows me to group initializations in a common location (at the top of the method) which also simplifies commenting for me, and conceptualization for someone reading it for the first time.