Consider some code where I need to find the square of the sum of the x largest elements in an array. (This is NOT a "which data structure" question, so please don't post replies recommending a heap etc.)
I initially code it up:
OPTION1
singleFunction {
// code to sort
// code to sum
// code to square
return;
}
Soon I realize I could break the work into helper functions.
OPTION 2
getFinalAnswer() {
// sort;
return sumAndSquare();
}
sumAndSquare() {
// sum
return square();
}
square() {
// return square.
}
Now I realize sort, sum and square can be used as utility methods rather than simply helper methods.
Now I break the functionality down into 3 functions: (1) sort, (2) sum of the x largest, (3) square.
OPTION3
long someFunction(int[] arr, int x) {
    sort(arr);
    long b = sumOfLastXElements(arr, x);
    long c = square(b);
    return c;
}
Now questions:
Option 3 looks like the best of the lot, yet so many times we find one function calling another. What is the advantage of option 2 over option 3?
A method is, by definition, supposed to have a single task/responsibility, but someFunction is doing 3 different things. What are such functions called?
First, being strict, I must say that Java has no functions, only methods due to its OO nature.
1) Option 3 looks like the best of the lot, yet so many times we find one function calling another. What is the advantage of option 2 over option 3?
As you said, the sort, sum and square methods each hold a single responsibility, so there's no need for a single monster method that does all three. Also, each one can be reused later in other methods.
Option 2 has a sumAndSquare method that may or may not be reusable. That depends heavily on your needs. You'll know you need such a method if this pattern shows up a lot in your code (and by a lot, I mean at least 10 times, in different methods):
long theSum = sum(array);
long theSquare = square(theSum);
2) A method is, by definition, supposed to have a single task/responsibility, but someFunction is doing 3 different things. What are such functions called?
Its task or responsibility is:
sort a list of numbers (I guess?)
sum the x largest numbers
square the sum
So, the method is doing its task as expected. IMO you can even split the sumOfLastXElements into two methods: int[] findLastXElements(array) and long sum(array).
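For illustration, here is a hedged sketch of that fully split version. The names findLastXElements and sum come from the suggestion above; the squareOfSumOfLargest wrapper, the method bodies and the in-place sort are my own assumptions:
// Hypothetical sketch only; names other than findLastXElements and sum are invented.
static long squareOfSumOfLargest(int[] arr, int x) {
    java.util.Arrays.sort(arr);                 // sorts the caller's array in place, ascending
    int[] largest = findLastXElements(arr, x);  // the x largest values are now at the end
    return square(sum(largest));
}

static int[] findLastXElements(int[] arr, int x) {
    return java.util.Arrays.copyOfRange(arr, arr.length - x, arr.length);
}

static long sum(int[] values) {
    long total = 0;
    for (int value : values) total += value;
    return total;
}

static long square(long n) {
    return n * n;
}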
To answer this: What are such functions called? There's no specific or special name; they are just methods. But the process of going from option 1 to option 3 is called Code Refactoring.
Answer to your first question:
Number two has slightly more overhead, since each nested call adds another method invocation.
Option 2 has more flexibility, and if you're going to use those individual pieces all the time, might be worth your time. However, if they are only used in code together, consider grouping them together.
As a wise professor of mine once said, separate what changes from what stays the same.
If they are only used together, no need to have individual methods/functions.
Answer to your second question:
Difference between a method and a function
(don't get too hung up on terminology, IMHO.)
Hope this helps.
I commonly find myself writing code like this:
private List<Foo> fooList = new ArrayList<Foo>();
public Foo findFoo(FooAttr attr) {
    for (Foo foo : fooList) {
        if (foo.getAttr().equals(attr)) {
            return foo;
        }
    }
    return null; // no match found
}
However, assuming I properly guard against null input, I could also express the loop like this:
for (Foo foo : fooList) {
    if (attr.equals(foo.getAttr())) {
        return foo;
    }
}
I'm wondering if one of the above forms has a performance advantage over the other. I'm well aware of the dangers of premature optimization, but in this case, I think the code is equally legible either way, so I'm looking for a reason to prefer one form over another, so I can build my coding habits to favor that form. I think given a large enough list, even a small performance advantage could amount to a significant amount of time.
In particular, I'm wondering if the second form might be more performant because the equals() method is called repeatedly on the same object, instead of different objects? Maybe branch prediction is a factor?
I would offer 2 pieces of advice here:
Measure it
If nothing else points you in any given direction, prefer the form which makes most sense and sounds most natural when you say it out loud (or in your head!)
I think that considering branch prediction is worrying about efficiency at too low of a level. However, I find the second example of your code more readable because you put the consistent object first. Similarly, if you were comparing this to some other object, I would put this first.
Of course, equals is defined by the programmer, so it could be asymmetric. You should make equals an equivalence relation, so this shouldn't be the case. Even if you have an equivalence relation, the order could matter. Suppose that attr is an instance of a superclass of the various foo.getAttr() values, and the first test of each equals method checks whether the other object is an instance of its own class. Then attr.equals(foo.getAttr()) will pass that first check, but foo.getAttr().equals(attr) will fail it.
However, worrying about efficiency at this level seldom has benefits.
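To make that concrete, here is a hedged sketch of such an asymmetric pair of equals methods; the Attr/TimedAttr classes are invented for the example:
public class AsymmetricEqualsDemo {
    static class Attr {
        final String name;
        Attr(String name) { this.name = name; }
        @Override public boolean equals(Object other) {
            if (!(other instanceof Attr)) return false;      // a TimedAttr passes this check
            return name.equals(((Attr) other).name);
        }
        @Override public int hashCode() { return name.hashCode(); }
    }

    static class TimedAttr extends Attr {
        final long timestamp;
        TimedAttr(String name, long timestamp) { super(name); this.timestamp = timestamp; }
        @Override public boolean equals(Object other) {
            if (!(other instanceof TimedAttr)) return false; // a plain Attr fails this check
            TimedAttr that = (TimedAttr) other;
            return name.equals(that.name) && timestamp == that.timestamp;
        }
        @Override public int hashCode() { return name.hashCode() * 31 + (int) timestamp; }
    }

    public static void main(String[] args) {
        Attr plain = new Attr("size");
        Attr timed = new TimedAttr("size", 42L);
        System.out.println(plain.equals(timed)); // true
        System.out.println(timed.equals(plain)); // false: the order changed the answer
    }
}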
This depends on the implementation of the equals methods. In this situation I assume that both objects are instances of the same class, which means both calls dispatch to the same equals implementation. That makes no performance difference.
If both objects are of the same type, then they should perform the same. If not, then you can't really know in advance what's going to happen, but usually it will be stopped quite quickly (with an instanceof or something else).
For myself, I usually start the method with a non-null check on the given parameter and I then use the attr.equals(foo.getAttr()) since I don't have to check for null in the loop. Just a question of preference I guess.
The only code which doesn't affect performance is code which does nothing.
In some cases you have code which is much the same or the difference is so small it just doesn't matter. This is the case here.
Where it is useful to swap the .equals() around is when you have a known value which cannot be null (this doesn't appear to be the case here), or when the type you are calling it on is known.
e.g.
Object o = (Integer) 123;
String s = "Hello";
o.equals(s); // the runtime type of o is unknown, so a virtual table lookup might be required
s.equals(o); // the type is known and String is final, so the call can be bound directly
The difference is so small I wouldn't worry about it.
DEVENTER (n) A decision that's very hard to make because so little depends on it, such as which way to walk around a park
-- The Deeper Meaning of Liff by Douglas Adams and John Lloyd.
The performance should be the same, but in terms of safety, it's usually best to have the left operand be something that you are sure is not null, and have your equals method deal with null values.
Take for instance:
String s1 = null;
s1.equals("abc");
"abc".equals(s1);
The two calls to equals are not equivalent as one would issue a NullPointerException (the first one), and the other would return false.
The latter form is generally preferred for comparing with string constants for exactly this reason.
When you're designing the API for a code library, you want it to be easy to use well, and hard to use badly. Ideally you want it to be idiot proof.
You might also want to make it compatible with older systems that can't handle generics, like .Net 1.1 and Java 1.4. But you don't want it to be a pain to use from newer code.
I'm wondering about the best way to make things easily iterable in a type-safe way... Remembering that you can't use generics so Java's Iterable<T> is out, as is .Net's IEnumerable<T>.
You want people to be able to use the enhanced for loop in Java (for Item i : items), and the foreach / For Each loop in .Net, and you don't want them to have to do any casting. Basically you want your API to be now-friendly as well as backwards compatible.
The best type-safe option that I can think of is arrays. They're fully backwards compatible and they're easy to iterate in a typesafe way. But arrays aren't ideal because you can't make them immutable. So, when you have an immutable object containing an array that you want people to be able to iterate over, to maintain immutability you have to provide a defensive copy each and every time they access it.
In Java, doing (MyObject[]) myInternalArray.clone(); is super-fast. I'm sure that the equivalent in .Net is super-fast too. If you have like:
class Schedule {
private Appointment[] internalArray;
public Appointment[] appointments() {
return (Appointment[]) internalArray.clone();
}
}
people can do like:
for (Appointment a : schedule.appointments()) {
a.doSomething();
}
and it will be simple, clear, type-safe, and fast.
But they could do something like:
for (int i = 0; i < schedule.appointments().length; i++) {
Appointment a = schedule.appointments()[i];
}
And then it would be horribly inefficient because the entire array of appointments would get cloned twice for every iteration (once for the length test, and once to get the object at the index). Not such a problem if the array is small, but pretty horrible if the array has thousands of items in it. Yuk.
Would anyone actually do that? I'm not sure... I guess that's largely my question here.
You could call the method toAppointmentArray() instead of appointments(), and that would probably make it less likely that anyone would use it the wrong way. But it would also make it harder for people to find when they just want to iterate over the appointments.
You would, of course, document appointments() clearly, to say that it returns a defensive copy. But a lot of people won't read that particular bit of documentation.
Although I'd welcome suggestions, it seems to me that there's no perfect way to make it simple, clear, type-safe, and idiot proof. Have I failed if a minority of people are unwittingly cloning arrays thousands of times, or is that an acceptable price to pay for simple, type-safe iteration for the majority?
NB I happen to be designing this library for both Java and .Net, which is why I've tried to make this question applicable to both. And I tagged it language-agnostic because it's an issue that could arise for other languages too. The code samples are in Java, but C# would be similar (albeit with the option of making the Appointments accessor a property).
UPDATE: I did a few quick performance tests to see how much difference this made in Java. I tested:
cloning the array once, and iterating over it using the enhanced for loop
iterating over an ArrayList using the enhanced for loop
iterating over an unmodifiable List (from Collections.unmodifiableList) using the enhanced for loop
iterating over the array the bad way (cloning it repeatedly in the length check and when getting each indexed item).
For 10 objects, the relative speeds (doing multiple repeats and taking the median) were like:
clone once + enhanced for: 1,000
ArrayList + enhanced for: 1,300
unmodifiable List + enhanced for: 1,300
array the bad way (repeated cloning): 5,000
For 100 objects:
clone once + enhanced for: 1,300
ArrayList + enhanced for: 4,900
unmodifiable List + enhanced for: 6,300
array the bad way (repeated cloning): 85,500
For 1000 objects:
clone once + enhanced for: 6,400
ArrayList + enhanced for: 51,700
unmodifiable List + enhanced for: 56,200
array the bad way (repeated cloning): 7,000,300
For 10000 objects:
clone once + enhanced for: 68,000
ArrayList + enhanced for: 445,000
unmodifiable List + enhanced for: 651,000
array the bad way (repeated cloning): 655,180,000
Rough figures for sure, but enough to convince me of two things:
Cloning, then iterating is definitely not a performance issue. In fact it's consistently faster than using a List. (This is why Java's enum.values() method returns a defensive copy of an array instead of an immutable list.)
If you repeatedly call the method, repeatedly cloning the array unnecessarily, performance becomes more and more of an issue the larger the arrays in question. It's pretty horrible. No surprises there.
clone() is fast, but not what I would describe as super fast.
If you don't trust people to write loops efficiently, I would not let them write a loop (which also avoids the need for a clone())
interface AppointmentHandler {
    public void onAppointment(Appointment appointment);
}

class Schedule {
    private Appointment[] internalArray;

    public void forEachAppointment(AppointmentHandler ah) {
        for (Appointment a : internalArray)
            ah.onAppointment(a);
    }
}
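A hedged usage sketch of that callback style, assuming the AppointmentHandler and forEachAppointment names above:
// The caller never sees the internal array, so no defensive copy is needed.
schedule.forEachAppointment(new AppointmentHandler() {
    public void onAppointment(Appointment appointment) {
        appointment.doSomething();
    }
});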
Since you can't really have it both ways, I would suggest that you create a pre-generics and a generics version of your API. Ideally, the underlying implementation can be mostly the same, but the fact is, if you want it to be easy to use for anyone using Java 1.5 or later, they will expect the usage of generics and Iterable and all the newer language features.
I think the usage of arrays should be non-existent. It does not make for an easy to use API in either case.
NOTE: I have never used C#, but I would expect the same holds true.
As far as failing a minority of the users, those that would call the same method to get the same object on each iteration of the loop would be asking for inefficiency regardless of API design. I think as long as that's well documented, it's not too much to ask that the users obey some semblance of common sense.
Sometimes I extract boolean checks into local variables to achieve better readability.
What do you think?
Any disadvantages?
Does the compiler inline the variable or something if it isn't used anywhere else? I also thought about reducing its scope with an additional block "{}".
if (person.getAge() > MINIMUM_AGE && person.getTall() > MAXIMUM_SIZE && person.getWeight() < MAXIMUM_WEIGHT) {
// do something
}
final boolean isOldEnough = person.getAge() > MINIMUM_AGE;
final boolean isTallEnough = person.getTall() > MAXIMUM_SIZE;
final boolean isNotToHeavy = person.getWeight() < MAXIMUM_WEIGHT;
if (isOldEnough && isTallEnough && isNotToHeavy) {
// do something
}
I do this all the time. The code is much more readable that way. The only reason for not doing this is that it inhibits the runtime from doing shortcut optimisation, although a smart VM might figure that out.
The real risk in this approach is that it loses responsiveness to changing values.
Yes, people's age, weight, and height don't change very often, relative to the runtime of most programs, but they do change, and if, for example, the age changes while the object your snippet reads from is still alive, your final isOldEnough could now hold a wrong answer.
And yet I don't believe putting isEligible into Person is appropriate either, since the knowledge of what constitutes eligibility seems to be of a larger scope. One must ask: eligible for what?
All in all, in a code review, I'd probably recommend that you add methods in Person instead.
boolean isOldEnough (int minimumAge) { return (this.getAge() > minimumAge); }
And so on.
Your two blocks of code are inequivalent.
There are many cases that could be used to show this but I will use one. Suppose that person.getAge() > MINIMUM_AGE were false and person.getTall() threw an exception.
In the first case, the && short-circuits and the if block is simply skipped, while in the second case the exception is thrown as soon as isTallEnough is computed. In computability theory, the result of a thrown exception is called 'the bottom element'. It has been shown that if a program terminates (does not resolve to bottom) under eager evaluation semantics (as in your second example), then it is also guaranteed to terminate under a lazy evaluation strategy (your first example). This is an important tenet of programming. Notice that you cannot write Java's && as an ordinary method yourself.
While it is unlikely that your getTall() method will throw an exception, you cannot apply your reasoning to the general case.
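A minimal sketch of that difference, reusing the question's person example (the throwing getTall() is contrived for illustration):
// Suppose the age check is false and getTall() would throw for this person.

// Inline && version: short-circuits on the false age check,
// so getTall() is never called and the block is simply skipped.
if (person.getAge() > MINIMUM_AGE && person.getTall() > MAXIMUM_SIZE) {
    // do something
}

// Extracted-variable version: getTall() is evaluated eagerly,
// so the second line throws before the if statement is ever reached.
final boolean isOldEnough = person.getAge() > MINIMUM_AGE;    // false
final boolean isTallEnough = person.getTall() > MAXIMUM_SIZE; // throws here
if (isOldEnough && isTallEnough) {
    // do something
}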
I think the checks probably belong in the Person class. You could pass in the min/max values, but calling person.isEligible() would be a better solution in my opinion.
You could go one step further and create subtypes of the Person:
Teenager extends Person
ThirdAgePerson extends Person
Kid extends Person
Subclasses will be overriding Person's methods in their own way.
One advantage to the latter case is that you will have the isOldEnough, isTallEnough, and isNotToHeavy (sic) variables available for reuse later in the code. It is also more easily readable.
You might want to consider abstracting those boolean checks into their own methods, or combining the check into a method. For example a person.isOldEnough() method which would return the value of the boolean check. You could even give it an integer parameter that would be your minimum age, to give it more flexible functionality.
I think this is a matter of personal taste. I find your refactoring quite readable.
In this particular case I might refactor the whole test into an
isThisPersonSuitable()
method.
If there were much such code I might even create a PersonInterpreter (maybe inner) class which holds a person and answers questions about their eligibility.
Generally I would tend to favour readability over any minor performance considerations.
The only possible negative is that you lose the benefits of the AND being short-circuited. But in reality this is only really of any significance if any of your checks is largely more expensive than the others, for example if person.getWeight() was a significant operation and not just an accessor.
I have nothing against your construct, but it seems to me that in this case the readability gain could be achieved by simply putting in line breaks, i.e.
if (person.getAge() > MINIMUM_AGE
&& person.getTall() > MAXIMUM_SIZE
&& person.getWeight() < MAXIMUM_WEIGHT)
{
// do something
}
The bigger issue that other answers brought up is whether this belongs inside the Person object. I think the simple answer to that is: If there are several places where you do the same test, it belongs in Person. If there are places where you do similar but different tests, then they belong in the calling class.
Like, if this is a system for a site that sells alcohol and you have many places where you must test if the person is of legal drinking age, then it makes sense to have a Person.isLegalDrinkingAge() function. If the only factor is age, then having a MINIMUM_DRINKING_AGE constant would accomplish the same result, I guess, but once there's other logic involved, like different legal drinking ages in different legal jurisdictions or there are special cases or exceptions, then it really should be a member function.
On the other hand, if you have one place where you check if someone is over 18 and somewhere else where you check if he's over 12 and somewhere else where you check if he's over 65 etc etc, then there's little to be gained by pushing this function into Person.
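For instance, a hedged sketch of what such a member function might look like once jurisdiction logic is involved; the Jurisdiction type and the use of java.time are assumptions for illustration:
class Person {
    private final java.time.LocalDate birthDate;

    Person(java.time.LocalDate birthDate) {
        this.birthDate = birthDate;
    }

    int getAge() {
        return java.time.Period.between(birthDate, java.time.LocalDate.now()).getYears();
    }

    // Hypothetical: the legal age depends on where the sale happens, so the
    // rule lives in Person rather than being re-derived at every call site.
    boolean isLegalDrinkingAge(Jurisdiction jurisdiction) {
        return getAge() >= jurisdiction.minimumDrinkingAge();
    }
}

interface Jurisdiction {
    int minimumDrinkingAge(); // e.g. 18 or 21, depending on the jurisdiction
}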
In the same spirit as other platforms, it seemed logical to follow up with this question: What are common non-obvious mistakes in Java? Things that seem like they ought to work, but don't.
I won't give guidelines as to how to structure answers, or what's "too easy" to be considered a gotcha, since that's what the voting is for.
See also:
Perl - Common gotchas
.NET - Common gotchas
"a,b,c,d,,,".split(",").length
returns 4, not 7 as you might (and I certainly did) expect. split ignores all trailing empty Strings returned. That means:
",,,a,b,c,d".split(",").length
returns 7! To get what I would think of as the "least astonishing" behaviour, you need to do something quite astonishing:
"a,b,c,d,,,".split(",",-1).length
to get 7.
Comparing equality of objects using == instead of .equals() -- which behaves completely differently for primitives.
This gotcha ensures newcomers are befuddled when "foo" == "foo" but new String("foo") != new String("foo").
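A short sketch of why: string literals are interned, while an explicit new String is always a distinct object.
String a = "foo";
String b = "foo";                // same interned literal as a
String c = new String("foo");    // a distinct object with the same contents

System.out.println(a == b);      // true  (same object, thanks to interning)
System.out.println(a == c);      // false (different objects)
System.out.println(a.equals(c)); // true  (same contents; this is the comparison you usually want)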
I think a very sneaky one is the String.substring method. This re-uses the same underlying char[] array as the original string with a different offset and length.
This can lead to very hard-to-see memory problems. For example, you may be parsing extremely large files (XML perhaps) for a few small bits. If you have converted the whole file to a String (rather than used a Reader to "walk" over the file) and use substring to grab the bits you want, you are still carrying around the full file-sized char[] array behind the scenes. I have seen this happen a number of times and it can be very difficult to spot.
In fact this is a perfect example of why interface can never be fully separated from implementation. And it was a perfect introduction (for me) a number of years ago as to why you should be suspicious of the quality of 3rd party code.
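On the JDKs where this behaviour applied, the classic workaround was to force a trimmed copy of just the piece you need; a sketch (readWholeFile() is an assumed helper):
String hugeXml = readWholeFile();            // the whole file as one String
String id = hugeXml.substring(1000, 1040);   // shares hugeXml's char[] behind the scenes
String idCopy = new String(id);              // forces a small copy, letting hugeXml be collected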
Overriding equals() but not hashCode()
It can have really unexpected results when using maps, sets or lists.
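A minimal sketch of that surprise; the Point class below is invented for the example:
import java.util.HashSet;
import java.util.Set;

class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
    }
    // hashCode() deliberately NOT overridden: it still comes from Object

    public static void main(String[] args) {
        Set<Point> set = new HashSet<Point>();
        set.add(new Point(1, 2));
        // Very likely prints false: the two Points have different default hash codes,
        // so the lookup goes to a different bucket and equals() is never even consulted.
        System.out.println(set.contains(new Point(1, 2)));
    }
}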
SimpleDateFormat is not thread safe.
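One common way around it is to give each thread its own instance, for example via ThreadLocal; a hedged sketch:
import java.text.SimpleDateFormat;
import java.util.Date;

class Formats {
    // Each thread gets its own SimpleDateFormat, so there is no shared mutable state.
    private static final ThreadLocal<SimpleDateFormat> ISO_DATE =
        new ThreadLocal<SimpleDateFormat>() {
            @Override
            protected SimpleDateFormat initialValue() {
                return new SimpleDateFormat("yyyy-MM-dd");
            }
        };

    static String format(Date date) {
        return ISO_DATE.get().format(date);
    }
}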
There are two that annoy me quite a bit.
Date/Calendar
First, the Java Date and Calendar classes are seriously messed up. I know there are proposals to fix them, I just hope they succeed.
Calendar.get(Calendar.DAY_OF_MONTH) is 1-based
Calendar.get(Calendar.MONTH) is 0-based
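A quick illustration (Calendar.JANUARY is 0, so December is month 11):
Calendar cal = new GregorianCalendar(2009, Calendar.DECEMBER, 31);
System.out.println(cal.get(Calendar.DAY_OF_MONTH)); // 31 -- day of month is 1-based
System.out.println(cal.get(Calendar.MONTH));        // 11, not 12 -- months are 0-based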
Auto-boxing preventing thinking
The other one is Integer vs int (this goes for any primitive version of an object). This is specifically an annoyance caused by not thinking of Integer as different from int (since you can treat them the same much of the time due to auto-boxing).
int x = 5;
int y = 5;
Integer z = new Integer(5);
Integer t = new Integer(5);
System.out.println(5 == x); // Prints true
System.out.println(x == y); // Prints true
System.out.println(x == z); // Prints true (auto-boxing can be so nice)
System.out.println(5 == z); // Prints true
System.out.println(z == t); // Prints false
Since z and t are separate objects created with new, even though they hold the same value, z == t is false. What you really meant is:
System.out.println(z.equals(t)); // Prints true
This one can be a pain to track down. You go debugging something, everything looks fine, and you finally end up finding that your problem is that 5 != 5 when both are objects.
Being able to say
List<Integer> stuff = new ArrayList<Integer>();
stuff.add(5);
is so nice. It made Java so much more usable to not have to put all those "new Integer(5)"s and "((Integer) list.get(3)).intValue()" lines all over the place. But those benefits come with this gotcha.
Try reading Java Puzzlers which is full of scary stuff, even if much of it is not stuff you bump into every day. But it will destroy much of your confidence in the language.
List<Integer> list = new java.util.ArrayList<Integer>();
list.add(1);
list.remove(1); // throws...
The old APIs were not designed with boxing in mind, so they overload on both primitives and objects.
This one I just came across:
double[] aList = new double[400];
List l = Arrays.asList(aList);
//do intense stuff with l
Anyone see the problem?
What happens is, Arrays.asList() expects an array of object types (Double[], for example). It'd be nice if it just threw an error for the previous code. However, asList() can also take arguments like so:
Arrays.asList(1, 9, 4, 4, 20);
So what the code does is create a List with one element - a double[].
I should've figured when it took 0ms to sort a 750000 element array...
This one has stumped me a few times, and I've seen quite a few experienced Java devs waste a lot of time on it.
ClassNotFoundException --- you know that the class is in the classpath BUT you are NOT sure why the class is NOT getting loaded.
Actually, the class has a static block. There was an exception in the static block, and someone caught and swallowed it. They should NOT: they should let it propagate, where it would surface as an ExceptionInInitializerError. So always check whether a static block is tripping you up. It also helps to move any code in static blocks into static methods, so that debugging with a debugger is much easier.
Floats
I don't know how many times I've seen
floata == floatb
where the "correct" test should be
Math.abs(floata - floatb) < 0.001
I really wish BigDecimal with a literal syntax was the default decimal type...
Not really specific to Java, since many (but not all) languages implement it this way, but the % operator isn't a true modulo operator, as it works with negative numbers. This makes it a remainder operator, and can lead to some surprises if you aren't aware of it.
The following code would appear to print either "even" or "odd" but it doesn't.
public static void main(String[] args)
{
String a = null;
int n = "number".hashCode();
switch( n % 2 ) {
case 0:
a = "even";
break;
case 1:
a = "odd";
break;
}
System.out.println( a );
}
The problem is that the hash code for "number" is negative, so the n % 2 operation in the switch is also negative. Since there's no case in the switch to deal with the negative result, the variable a never gets set. The program prints out null.
Make sure you know how the % operator works with negative numbers, no matter what language you're working in.
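One hedged way to defend against it is to normalise the remainder before switching on it; on Java 8+, Math.floorMod does the same thing:
int n = "number".hashCode();
int key = ((n % 2) + 2) % 2;      // always 0 or 1, even when n is negative
// or, on Java 8 and later:
int key2 = Math.floorMod(n, 2);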
Manipulating Swing components from outside the event dispatch thread can lead to bugs that are extremely hard to find. This is a thing even we (as seasoned programmers with 3 and 6 years of Java experience, respectively) forget frequently! Sometimes these bugs sneak in after having written the code right and then refactoring carelessly afterwards...
See this tutorial why you must.
Immutable strings, which means that certain methods don't change the original object but instead return a modified object copy. When starting with Java I used to forget this all the time and wondered why the replace method didn't seem to work on my string object.
String text = "foobar";
text.replace("foo", "super");
System.out.print(text); // still prints "foobar" instead of "superbar"
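Since String is immutable, you have to keep the returned copy yourself:
String text = "foobar";
text = text.replace("foo", "super"); // reassign to keep the modified copy
System.out.print(text);              // now prints "superbar"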
I think a big gotcha that would always stump me when I was a young programmer was the ConcurrentModificationException you get when removing from a collection you are iterating over:
List<String> list = new ArrayList<String>(Arrays.asList("a", "b", "c"));
Iterator<String> it = list.iterator();
while (it.hasNext()) {
    it.next();
    // some code that does some stuff
    list.remove(0); // BOOM! the next call to it.next() throws ConcurrentModificationException
}
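The usual fix is to remove through the iterator itself (shouldRemove below is a hypothetical predicate); on Java 8+, removeIf does the same job:
Iterator<String> it = list.iterator();
while (it.hasNext()) {
    String s = it.next();
    if (shouldRemove(s)) {
        it.remove(); // safe: the iterator stays consistent with the list
    }
}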
If you have a method that has the same name as the constructor BUT has a return type, then although it looks like a constructor (to a noob), it is NOT.
passing arguments to the main method -- it takes some time for noobs to get used to.
passing . as the argument to classpath for executing a program in the current directory.
Realizing that the name of an Array of Strings is not obvious
hashCode and equals: a lot of Java developers with more than 5 years' experience don't quite get it.
Set vs List
Until JDK 6, Java did not have NavigableSet/NavigableMap to let you easily iterate through a Set or Map.
Integer division
1/2 == 0 not 0.5
Using the ? generics wildcard.
People see it and think they have to, e.g. use a List<?> when they want a List they can add anything to, without stopping to think that a List<Object> already does that. Then they wonder why the compiler won't let them use add(), because a List<?> really means "a list of some specific type I don't know", so the only thing you can do with that List is get Object instances from it.
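A short sketch of the difference:
List<Object> anything = new ArrayList<Object>();
anything.add("a string");   // fine: a List<Object> really does accept anything
anything.add(42);

List<?> unknown = anything; // "a list of some specific type I don't know"
for (Object o : unknown) {  // reading elements as Object is all you can do
    System.out.println(o);
}
// unknown.add("another");  // compile error: cannot add to a List<?>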
(un)Boxing and Long/long confusion. Contrary to pre-Java 5 experience, you can get a NullPointerException on the 2nd line below.
Long msec = getSleepMsec();
Thread.sleep(msec);
If getSleepMsec() returns null, the unboxing on the second line throws a NullPointerException.
The default hash is non-deterministic, so if used for objects in a HashMap, the ordering of entries in that map can change from run to run.
As a simple demonstration, the following program can give different results depending on how it is run:
public static void main(String[] args) {
System.out.println(new Object().hashCode());
}
How much memory is allocated to the heap, or whether you're running it within a debugger, can both alter the result.
When you create a duplicate or slice of a ByteBuffer, it does not inherit the value of the order property from the parent buffer, so code like this will not do what you expect:
ByteBuffer buffer1 = ByteBuffer.allocate(8);
buffer1.order(ByteOrder.LITTLE_ENDIAN);
buffer1.putInt(2, 1234);
ByteBuffer buffer2 = buffer1.duplicate();
System.out.println(buffer2.getInt(2));
// Output is "-771489792", not "1234" as expected
Among the common pitfalls, well known but still biting occasionally programmers, there is the classical if (a = b) which is found in all C-like languages.
In Java, it can work only if a and b are boolean, of course. But I see too often newbies testing like if (a == true) (while if (a) is shorter, more readable and safer...) and occasionally writing by mistake if (a = true), wondering why the test doesn't work.
For those not getting it: the last statement first assigns true to a, then does the test, which always succeeds!
-
One that bites lot of newbies, and even some distracted more experienced programmers (found it in our code), the if (str == "foo"). Note that I always wondered why Sun overrode the + sign for strings but not the == one, at least for simple cases (case sensitive).
For newbies: == compares references, not the content of the strings. You can have two strings of same content, stored in different objects (different references), so == will be false.
Simple example:
final String F = "Foo";
String a = F;
String b = F;
assert a == b; // Works! They refer to the same object
String c = "F" + F.substring(1); // Still "Foo"
assert c.equals(a); // Works
assert c == a; // Fails
-
And I also saw if (a == b & c == d) or something like that. It works (curiously) but we lose the short-circuit behaviour (don't try to write if (r != null & r.isSomething())!).
For newbies: when evaluating a && b, Java doesn't evaluate b if a is false. In a & b, Java evaluates both parts and then does the operation; but the second part can fail.
[EDIT] Good suggestion from J Coombs, I updated my answer.
The non-unified type system contradicts the object orientation idea. Even though not everything has to be a heap-allocated object, the programmer should still be allowed to treat primitive types by calling methods on them.
The generic type system implementation with type erasure is horrible, and throws most students off when they learn about generics for the first time in Java: why do we still have to typecast if the type parameter is already supplied? Yes, they ensured backward compatibility, but at a rather silly cost.
Going first, here's one I caught today. It had to do with Long/long confusion.
public void foo(Object obj) {
if (grass.isGreen()) {
Long id = grass.getId();
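// note: id is a Long, so foo(id) below matches this foo(Object) overload, not foo(long) -- the private overload never runs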
foo(id);
}
}
private void foo(long id) {
Lawn lawn = bar.getLawn(id);
if (lawn == null) {
throw new IllegalStateException("grass should be associated with a lawn");
}
}
Obviously, the names have been changed to protect the innocent :)
Another one I'd like to point out is the (too prevalent) drive to make APIs generic. Using well-designed generic code is fine. Designing your own is complicated. Very complicated!
Just look at the sorting/filtering functionality in the new Swing JTable. It's a complete nightmare. It's obvious that you are likely to want to chain filters in real life but I have found it impossible to do so without just using the raw typed version of the classes provided.
System.out.println(Calendar.getInstance(TimeZone.getTimeZone("Asia/Hong_Kong")).getTime());
System.out.println(Calendar.getInstance(TimeZone.getTimeZone("America/Jamaica")).getTime());
The output is the same, because getTime() returns a java.util.Date, which represents an instant in time with no time zone of its own; printing it uses the default time zone, so the Calendar's time zone is ignored.
I had some fun debugging a TreeSet once, as I was not aware of this information from the API:
Note that the ordering maintained by a set (whether or not an explicit comparator is provided) must be consistent with equals if it is to correctly implement the Set interface. (See Comparable or Comparator for a precise definition of consistent with equals.) This is so because the Set interface is defined in terms of the equals operation, but a TreeSet instance performs all key comparisons using its compareTo (or compare) method, so two keys that are deemed equal by this method are, from the standpoint of the set, equal. The behavior of a set is well-defined even if its ordering is inconsistent with equals; it just fails to obey the general contract of the Set interface.
http://download.oracle.com/javase/1.4.2/docs/api/java/util/TreeSet.html
Objects with correct equals/hashcode implementations were being added and never seen again as the compareTo implementation was inconsistent with equals.
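A hedged sketch of the kind of class that triggers this; the Member class and its fields are invented:
import java.util.TreeSet;

class Member implements Comparable<Member> {
    final String name;
    final int score;
    Member(String name, int score) { this.name = name; this.score = score; }

    // compareTo only looks at score...
    public int compareTo(Member other) {
        return Integer.compare(score, other.score);
    }

    // ...but equals looks at name and score, so compareTo is inconsistent with equals.
    @Override
    public boolean equals(Object o) {
        return o instanceof Member
            && name.equals(((Member) o).name)
            && score == ((Member) o).score;
    }

    @Override
    public int hashCode() { return name.hashCode() * 31 + score; }

    public static void main(String[] args) {
        TreeSet<Member> set = new TreeSet<Member>();
        set.add(new Member("alice", 10));
        // Not equal to alice by equals(), but compareTo() says 0, so TreeSet silently drops it.
        set.add(new Member("bob", 10));
        System.out.println(set.size()); // 1, not 2
    }
}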
IMHO
1. Using vector.add(Collection) instead of vector.addAll(Collection). The first adds the collection object itself to the vector; the second adds the contents of the collection.
2. Though not related to programming exactly: the use of XML parsers that come from multiple sources, like Xerces and JDOM. Relying on different parsers and having their jars in the classpath is a nightmare.