I am trying to do something like:
t.set(field("ColumnName"), select(max(field("ColumnName"))).from("TableName"));
But I am getting the following compile error:
Ambiguous method call, Both
set(Field,Object) in InsertSetStep and
set(Field,Select<? extends Record1>) in InsertSetStep match
I have tried to resolve the ambiguity with casting, but I still receive the same error
Select<? extends Record1> sq = select(max(field("ColumnName"))).from("TableName");
t.set( field("ColumnName"), (Select<? extends Record1>)sq );
I have a couple of questions:
Why does casting not resolve the ambiguity in this scenario? (I have tried casting to (Object) and that does resolve the ambiguity)
Is there a way for me to resolve the ambiguity?
This is very unfortunate, but specified, behaviour of the Java language and the various compilers. Before generics, the method taking a Select type would have been the more specific one for your particular call; once generics are involved, it no longer is. The rationale can be seen here.
There's not much a heavily generic and overloaded API like jOOQ can do about it, but you can, as a user. You have to avoid binding <T> to Object in such cases, either by using the code generator or by passing data types manually:
// Assuming this is an INTEGER type
t.set(
field("ColumnName", SQLDataType.INTEGER),
select(max(field("ColumnName", SQLDataType.INTEGER))).from("TableName"));
Or you can start storing your column references in static variables like:
Field<Integer> COLUMN_NAME = field("ColumnName", SQLDataType.INTEGER);
// And then:
t.set(COLUMN_NAME, select(max(COLUMN_NAME)).from("TableName"));
Using the code generator
Note that this is hardly ever a problem when you're using the code generator, because then you have specific type information bound to the <T> types of your Field<T> references, and the two overloads are no longer both applicable.
I really recommend you use the code generator for this reason (and for many others).
Related
I would like to ask whether it is possible in Java 8+ to declare a generic bound T so that it extends a superclass/superinterface U (which could be Object, or Serializable) but breaks compilation if T extends L (which must itself extend U).
I found this problem using filter range objects: one of my developers invoked the wrong method and spent a lot of time wondering why it was giving inconsistent results. So I wanted to help her change the method signature, somehow, to detect early that she was using the wrong code.
I will display my example case in a very simplified way. We are talking about tables and dynamic filters.
# Displays a text "[field name] is equal to [value]"
# Value (T) must be Object
# Internally uses Object::toString
# Null shows blank string
public static <T> String localizeEq(Localizable fieldName, T value);
localize(forI18nLabel("DATE_OF_BIRTH_LABEL"), dateOfBirth)
"Date of birth equals 01/01/1900" (en)
"syntymäaika on 01/01/1990" (fi)
# Additional displays for "ge, gte, le..."
# Overloaded methods not displayed
# SimpleFilter is {op:"ge|gte|eq...", value:""}
# The effective display depends on the op attribute
# Example: "[field name] is [operator] [value]"
# Example: "[field name] is less or equal than [upper]"
# If filter != null but filter.op == null || filter.value == null, the method returns null
public static <T> String localize(Localizable fieldName, SimpleFilter<T> filter)
#localize(forI18nLabel("SALARY"),salaryFilter)
#salaryFilter = {op:"lt",value:10000}
#Salary is less than 10000 (en)
Now the problem is that the upper bound U of my generics is Serializable, and a developer inadvertently invoked localizeEq, which accepts atomic values, with a parameter of type SimpleFilter<?>, which extends Serializable. The method localizeEq builds the filter text "[field name] is equal to {op:null,value:null}".
The main issue is the null check. Methods that operate on atomic values (e.g. localizeEq, localizeNe) check if the parameter is null. Methods that operate on complex filters check that either the filter parameter is null or its value is null before going on.
That is the reason for the question. Obviously I can (and will amend my code to) inspect the type of the value parameter when the method is invoked, but that has three drawbacks:
Developers find it only at runtime
Developers find the problem only when value is not null
Nobody in my company runs automated tests, so they will find out only when starting the entire application and setting a non-null value on the filter. Once a manual test is done, it is never repeated
[Edit]
For my specific case there is another trick, but it involves creating more than a dozen overloaded deprecated methods:
@Deprecated
public static String localize[Eq|Ne...](Localizable fieldName, SimpleFilter<?> value) { throw new UnsupportedOperationException("Wrong method"); }
[Edit 3]
The code is on Gist. Please note that in the repository code we statically import the SimpleFilter.filter or LocalDateRangeFilter.filter methods. In the question it is assumed that localize(Localizable, SimpleFilter) is part of the same class as the other methods. And please note there are a few other *RangeFilter classes in our repository to support Joda Time, java.util.Date and NumericRange. They all suffer from the same issue.
I would like to focus anyway on the scope of the question: forbidding extension in generics, which seems not to be possible under the JLS.
I would like to ask if it is possible in Java 8+ to declare a generic
bound T so that it extends superclass/superinterface U (which could be
Object, or Serializable) but breaks compilation if T extends L (which
must extend U first).
The T in your pseudocode seems to be a type parameter, not a bound. Bounds are something different, and in fact, putting a bound on T seems to be what you are asking about. Indeed, without one -- in particular, without a lower bound -- your localizeEq() method is not gaining anything from being generic. As it stands, that method would be clearer if you just got rid of T altogether, and declared the second parameter to be of type Object (which would be equivalent to the current code) or Serializable or whatever.
I take it that the method was made generic in hopes of somehow using its type parameter to exclude arguments of certain subtypes, but that is not possible in Java, because
lower type bounds are inclusive, not exclusive
lower type bounds limit types meeting the bound to a single line of inheritance, which seems inconsistent with your intent
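These two points can be seen in a short sketch (the names below are hypothetical, not from the question's code base): Java only allows lower bounds on wildcards (? super X), never on type parameters, and such a bound always includes X itself, so "anything above Number but not Number" cannot be expressed:

```java
import java.util.ArrayList;
import java.util.List;

public class LowerBounds {
    // Lower bounds exist only on wildcards: a declaration like
    // "static <T super Number> void f(T t)" does not compile.
    static void addNumbers(List<? super Number> target) {
        target.add(42); // any Number can go into a list of some supertype of Number
    }

    public static void main(String[] args) {
        List<Object> objects = new ArrayList<>();
        List<Number> numbers = new ArrayList<>();
        addNumbers(objects);  // OK: Object is a supertype of Number
        addNumbers(numbers);  // also OK: the bound is inclusive of Number itself
        System.out.println(objects.size() + numbers.size()); // 2
    }
}
```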
Now the problem is that the the upper bound U of my generics is
Serializable and developer inadvertently invoked localizeEq, which
accepts atomic values, with a parameter of type SimpleFilter<?> that
extends Serializable. The method localizeEq builds a filter text
"[field name] is equal to {op:null,value:null}".
If one is not supposed to pass a SimpleFilter to the localizeEq() method, then I'd say that you have a design flaw here. You could catch violations at runtime, of course, but the type system does not provide a way to express the compile-time constraint you're looking for.
For my specific case there is another trick, but it involves creating more than a dozen overloaded deprecated methods:
Indeed, overloading is probably the best available solution, but I would suggest approaching it from the other direction. Instead of adding overloads for localizeEq, localizeNe, etc., deprecate the existing versions of those methods and instead overload localize with a version or versions that provide the wanted behavior for arguments that are not SimpleFilters.
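A rough sketch of how such overloads could look, using hypothetical stand-ins for Localizable and SimpleFilter (not the actual code from the question): because the SimpleFilter overload is more specific than the generic one, the compiler selects it whenever a filter is passed by mistake, producing a deprecation warning at compile time and failing fast at runtime:

```java
import java.io.Serializable;

public class Localizer {
    // Hypothetical stand-ins for the question's types
    public interface Localizable { String label(); }
    public static class SimpleFilter<T> implements Serializable {
        public String op;
        public T value;
    }

    // Intended atomic-value method
    public static <T extends Serializable> String localizeEq(Localizable field, T value) {
        return value == null ? "" : field.label() + " is equal to " + value;
    }

    /** @deprecated a SimpleFilter passed here is almost certainly a mistake; use localize */
    @Deprecated
    public static String localizeEq(Localizable field, SimpleFilter<?> filter) {
        throw new UnsupportedOperationException("Wrong method: use localize(field, filter)");
    }

    // Intended filter method
    public static <T> String localize(Localizable field, SimpleFilter<T> filter) {
        if (filter == null || filter.op == null || filter.value == null) return null;
        return field.label() + " is " + filter.op + " " + filter.value;
    }
}
```

Calling localizeEq with a SimpleFilter argument resolves to the deprecated overload, so the mistake is at least flagged by the compiler and surfaces immediately at runtime instead of producing a nonsense label.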
IMPORTANT:
the code I currently have is working per my expectations. It does what I want it to do. My question is about whether the WAY in which I have made it work is wrong. The reason I am asking is that I've seen plenty of Stack Overflow results about raw types and how they should basically NEVER be used.
What I'm doing and Why I used raw types
Currently I am dynamically creating a concrete subclass of a generic interface, where the interface takes in parameters when the class is constructed. When I make an instance of this class and use its returned object to call various methods, I use raw types, because that works for what I'm trying to do. Here is an example from my functioning code where the raw types are used. This code is in top-down order, i.e. there is no code between the code blocks.
Loading properties file
Properties prop = new Properties();
try {
prop.load(ObjectFactory.class.getResourceAsStream("config.properties"));
This is the file parser that implements FileParserImplementation, takes in the data, and puts it into an array. This code gets the Class type and then makes an instance of that type dynamically.
Class<? extends FileParserImplementation> parser = null;
parser = Class.forName(prop.getProperty("FileParserImplementation")).asSubclass(FileParserImplementation.class);
FileParserImplementation ParserInstance = (FileParserImplementation) parser.getDeclaredConstructors()[0].newInstance();
These two classes and their instances are the two separate DataParsers implementing DataParserImplementation. They take in the array of Strings that the FileParser gives, create objects, and manipulate the data into whatever is needed, putting out a Collection of this data. The FileParser dependency is passed in through constructor injection. This can be configured through the properties file at runtime.
Class<? extends DataParserImplementation> dataset1 = Class.forName(prop.getProperty("DataParserImplementation_1")).asSubclass(DataParserImplementation.class);
Class<? extends DataParserImplementation> dataset2 = Class.forName(prop.getProperty("DataParserImplementation_2")).asSubclass(DataParserImplementation.class);
DataParserImplementation Dataset1Instance = (DataParserImplementation) dataset1.getDeclaredConstructors()[0].newInstance(ParserInstance);
DataParserImplementation Dataset2Instance = (DataParserImplementation) dataset2.getDeclaredConstructors()[0].newInstance(ParserInstance);
This is the CrossReferencer class that implements CrossReferenceImplementation. It takes in the two datasets and cross-references them in whatever way the actual concrete reflected class desires. This also can be configured at runtime. It outputs a Map in this main.
The map serves as the final collection for the data (I might change that later).
Class<? extends CrossReferenceImplementation> crossreferencer = Class.forName(prop.getProperty("CrossReferenceImplementation")).asSubclass(CrossReferenceImplementation.class);
CrossReferenceImplementation crossReferencerInstance =
(CrossReferenceImplementation) crossreferencer.getDeclaredConstructors()[0].newInstance();
Getting the Map result from calling a method on our reflected instance; the contents of this map are then printed out. Currently it seems the map's type parameters are picked up as well, because the objects inside the map properly use their toString methods when reflectiveFinalMap.get(key).toString() is called.
This leads me to believe it works as I intend.
Map reflectiveFinalMap = (Map)
crossReferencerInstance.CrossReference(Dataset1Instance.Parse(), Dataset2Instance.Parse());
for (Object key:reflectiveFinalMap.keySet()) {
System.out.println(key + " { " +
reflectiveFinalMap.get(key).toString() + " }");
}
return reflectiveFinalMap;
}
//catch block goes here
Notice that each time I reflectively create an instance of a class that implements one of my interfaces, I use the interface as the raw type. My hope is that the reflection then sees the parameterized type of this raw type when it creates the concrete subclass, because that's where the parameter types are actually specified. The point is to let any class that implements those interfaces be generic to the point where it can take in just about anything and return just about anything.
Things I tried in order to avoid raw types
I've tried to actually obtain the parameterized type of CrossReferenceImplementation from the reflected crossreferencer Class that I currently get, by calling:
Class arrayparametertype = (Class)((ParameterizedType)crossreferencer.getClass().getGenericSuperclass()).getActualTypeArguments()[0];
And then I tried to pass in that arrayparametertype when creating an instance of crossreferencer, like this:
CrossReferenceImplementation crossReferencer = (CrossReferenceImplementation<<arrayparametertype>>) crossreferencer.getDeclaredConstructors()[0].newInstance();
That didn't work, since variable type parameters apparently aren't a thing.
I tried to manually specify the specific parameter of the concrete reflected class (I DON'T want this anyway, because it breaks the whole point of the reflection here: decoupling the classes from each other by being able to use anything that implements the appropriate interface). This caused the following warning to appear, and the code did not actually run the methods it was supposed to:
//how the parameters were specified. Messy and breaks the reflection.
CrossReferenceImplementation<Map<String, SalesRep>,Map<String, SalesRep>,Map<String, SalesRep>> crossReferencer = (CrossReferenceImplementation) crossreferencer.getDeclaredConstructors()[0].newInstance();
//where the warning occurred
Map reflectiveFinalMap = (Map) crossReferencer.CrossReference(Dataset1.Parse(), Dataset2.Parse());
The Warning:
"Dataset1 has raw type so result of Parse is erased".
Note that SalesRep here is the object in which the data is held, as fields of that object. This object gets manipulated and put into various collections. It too is accessed via reflection in the many methods of the DataParserImplementations.
A similar error message and problem occurred when specifying the parameter type of the Map (AGAIN, I DON'T want this, because it makes the reflection pointless; I want the map return result to be generic and specified by the implementing class).
//where the parameterized type was specified
Map reflectiveFinalMap = (Map<String,SalesRep>) crossReferencer.CrossReference(Dataset1.Parse(), Dataset2.Parse());
When specifying the actual parameterized type of the map result the error message was:
"crossReferencer has raw type so result of CrossReference is erased".
Running the code did indeed confirm for me that the CrossReference method's results were erased, while everything else ran fine.
What internet searches I tried before asking here
So I used the raw types for both operations, as can be seen in the main code, and everything worked fine. But I have seen so much "Don't use raw types". And this is why I ask: is this an appropriate use of raw types? Should I do it a different way that DOESN'T break the reflection? It breaks the reflection because manually specifying the type parameter not only makes my code not run, it also means ONLY that concrete class can be used. I reflected so that I could use anything that implements the generic interface; I don't want to be able to use only specific concrete instances.
I've tried searching Stack Overflow for what's in my title and other similar things. I think this might be related to type erasure, but I'm honestly not sure of that. Nothing else really addressed this problem, because nothing talked about generics, parameterized types and reflection all at once (the crux of my problem). I have been told generics and reflection don't play well together, but this code works anyway, and works the way I want it to. It works well. I just want to make sure I'm not doing something TERRIBLY wrong.
The Goal.
To gain an understanding of my current usage of raw types so I know I'm doing it the right way. By 'right' I mean the opposite of what I define as the 'wrong' way below. An example of the 'understanding' I seek is:
To understand why pseudocode along the lines of:
ConcreteClass forname(myPropertiesFileObject.get(ConcreteClassname)) as subClass of (MyGenericInterface);
MyRAWGenericInterfaceType ConcreteClassInstance = (MyRAWGenericInterfaceType) ConcreteClass.newInstance( Insert generic Type constructor arguments here);
RAWCollectionType someCollection = RAWCollectionType concreteClassInstance.CallingAMethod(Insert generic Type method arguments here);
uses raw types, where RAW is contained in the interface or collection type name. This is as opposed to doing it in some way that doesn't use raw types but doesn't break the point of the reflection: decoupling the interactions between these classes. Specifying the parameters in hard code would 'break the reflection' in this case.
Additionally, I'd like to understand why specifying parameters for these RAW types in the pseudocode above (even if I know that's not what I'm going to do) causes the errors listed earlier in the question. Namely, why is the result of CallingAMethod erased when supplying the actual parameters to the RAWCollectionType that the method returns? The root problem is that when I supply type parameters to RAWCollectionType when I declare it, it refuses to be updated by what CallingAMethod returns, and I don't understand why. It takes the return value, but if the body of CallingAMethod has the returned value passed in as an argument, updated inside the method and then returned, the return that I receive doesn't have the updates. CallingAMethod in this example would be like if I had a list like:
[1,2,3]
and inside the method I had something like:
foreach(thing in list){
thing += 1
}
and then I returned the list, the return I'd get when specifying parameters would be [1,2,3], and when using raw types it would be [2,3,4], as I desire. I'm asking this because I've heard bad things about using raw types.
Additionally, I want to make sure that my use of raw types is not horribly wrong and that it works because it's SUPPOSED to work. Maybe I've just gotten good at the whole reflection-and-generics thing and found a valid use for raw types, or I could be doing something so horrible it warrants my arrest. That's what I intend to find out. To clarify, by wrong I mean:
bad design (should use a different way to call my methods reflectively and also use reflective classes that use generic interfaces)
inefficient design (time-complexity-wise, code-line-wise or maintainability-wise)
there is a better way, you shouldn't even be doing this in the first place
If any of those reasons, or something I missed, popped out at you when you read this code, then TELL ME. Otherwise, please explain why my use of raw types is valid and isn't a violation of this question: What is a raw type and why shouldn't we use it?
Java has type erasure, so your Map<A,B> at runtime is just a Map; likewise, CrossReferenceImplementation<Map<String, SalesRep>,Map<String, SalesRep>,Map<String, SalesRep>> is just a CrossReferenceImplementation.
This also means that you can cast any map to a raw Map and put any objects you want into it, so you can have a Map<String, Long> that is actually storing the entries of a Map<Cookie, Fish>; this is why you need to be careful with raw types and reflection.
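This can be demonstrated in a few lines; the raw put compiles with only an unchecked warning, and the damage surfaces later, as a ClassCastException at the use site rather than where the pollution happened:

```java
import java.util.HashMap;
import java.util.Map;

public class HeapPollution {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        Map<String, Long> typed = new HashMap<>();
        Map raw = typed;              // raw view of the same map object
        raw.put(42, "not a Long");    // compiles (unchecked warning only)

        try {
            // The loop implicitly casts each key to String and blows up on 42
            for (String key : typed.keySet()) {
                System.out.println(key);
            }
        } catch (ClassCastException e) {
            System.out.println("polluted map detected: " + e.getMessage());
        }
    }
}
```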
You can't really use reflection and generics together in the normal way; you will always have some unchecked code then, but you can limit it to a minimum and make it reasonably type-safe anyway.
For example, you can create your own method to get a field (this is a bit of pseudocode; I will skip all possible exceptions, etc.):
public class FieldAccessor<O, T> {
    final Field field;

    private FieldAccessor(Field field) { this.field = field; }

    @SuppressWarnings("unchecked")
    public T get(O object) throws IllegalAccessException {
        return (T) field.get(object); // unsafe, but we validated this before constructing the accessor
    }

    public static <O, T> FieldAccessor<O, T> create(Class<? super O> definingClass,
            Class<? super T> fieldClass, String fieldName) throws NoSuchFieldException {
        Field field = definingClass.getDeclaredField(fieldName);
        if (field.getType() != fieldClass) {
            throw new IllegalArgumentException(fieldName + " is not of type " + fieldClass);
        }
        return new FieldAccessor<>(field);
    }
}
Then you have all the needed validation before you need to use that field, and it will already return valid type. So you can get some value of valid type and add it to normal generic Map instance.
FieldAccessor<X, A> keyAccessor = FieldAccessor.create(X.class, A.class, "someProperty");
FieldAccessor<Y, B> valueAccessor = FieldAccessor.create(Y.class, B.class, "someOtherProperty");
Map<A, B> myMap = new HashMap<>();
myMap.put(keyAccessor.get(myXValue), valueAccessor.get(myYValue));
This way you have type-safe code that still works via reflection. It might still fail at runtime if you provide invalid types, but at least you always know where it will fail, as FieldAccessor already checks all the types at runtime to ensure that you will not do something stupid like adding an Integer to a Map<String, Long>, which might be hard to debug later. (Unless someone uses this accessor as a raw type, as .get isn't validated; but you can add that by passing definingClass to the constructor and checking the object instance in the get method.)
You can do similar things for methods and fields that use generic types (like a field of type Map<X, Y>; this FieldAccessor would only let you check that it is some kind of Map), but it would be much harder, as the API for generics is still a bit "empty": there is no built-in way to create your own instances of generic types or to check whether types are assignable. (Libraries like Gson do that so they can deserialize maps and other generic types; they have their own implementations of the Java generic type representation interfaces, like ParameterizedType, and their own methods to check whether given types are assignable.)
When you are using reflection you just need to always remember and understand that you are the one responsible for validating types, as the compiler can't help you here. That unsafe and raw-typed code is fine as long as you have logic that validates that the code will never do something really unsafe (like passing the wrong type to a generic method, such as an Integer into a map of Longs).
Just don't throw raw types and reflection into the middle of otherwise normal code; add some abstraction around it, so such code and the project are easier to maintain.
I hope this somewhat answers your question.
I am trying to update a Postgres daterange. Whatever I try, it isn't working. Currently I get
Error:(51, 17) java: reference to set is ambiguous
both method set(org.jooq.Field,T) in org.jooq.UpdateSetStep and method set(org.jooq.Field,org.jooq.Field) in org.jooq.UpdateSetStep match
this is my code
ctx.update(AT_PREFERENCS)
.set(AT_PREFERENCS.DIRECTION, preferences.direction)
.set(AT_PREFERENCS.START_END, (Field<Object>) DSL.field("daterange(?, ?)", Object.class, preferences.start, preferences.end))
.where(AT_PREFERENCS.USER.eq(userId))
.execute();
How can I update daterange with jOOQ?
This is due to the very unfortunate Java language design problem (a major flaw, in my opinion) documented in this question here. jOOQ should have worked around this problem, but given that jOOQ predates Java 8 and the language design regression was introduced in Java 8, this cannot be fixed easily and backwards-compatibly in the jOOQ API right now.
There are several workarounds for this:
Create a data type binding
This might be the most robust solution if you plan on using this range type more often, in case of which you should define a custom data type binding. It's a bit of extra work up front, but once you have that specified, you will be able to write:
.set(AT_PREFERENCES.START_END, new MyRangeType(preferences.start, preferences.end))
Where AT_PREFERENCES.START_END would be a Field<MyRangeType>.
Cast to raw types and bind an unchecked, explicit type variable that isn't Object
This is a quick workaround if you're using this type only once or twice. It has no impact on the runtime, just tweaks the compiler into believing that this is correct.
.<Void>set(
(Field) AT_PREFERENCES.START_END,
(Field) DSL.field("daterange(?, ?)", Object.class, preferences.start, preferences.end))
Cast to raw types and then back to some other Field<T> type
Same as before, but this lets type inference do the job of inferring <Void> for <T>:
.set(
(Field<Void>) (Field) AT_PREFERENCES.START_END,
(Field<Void>) (Field) DSL.field("daterange(?, ?)", Object.class,
preferences.start, preferences.end))
Explicitly bind to the "wrong" API method
jOOQ internally handles all method calls in those rare cases where type safety breaks and the "wrong" overload is called. So, you could also simply call this:
.set(
AT_PREFERENCES.START_END,
(Object) DSL.field("daterange(?, ?)", Object.class,
preferences.start, preferences.end))
With this cast, only the set(Field<T>, T) method is applicable, and you no longer rely on the Java compiler finding the most specific method among the applicable overloads (which no longer works since Java 8).
jOOQ will run an instanceof check on the T argument to see if it is really of type Field, in which case it internally re-routes to the intended API method set(Field<T>, Field<T>).
I wrote some code using generics and I got into the following situation I didn't manage to understand:
I have the interface IpRange, and the following class:
public class Scope<T extends IP> {
    List<IpRange<T>> rangesList;
    public List<IpRange<T>> getRangesList() { return rangesList; }
}
Now from some test class if i write the following:
Scope<Ipv4> myScope = new Scope<Ipv4>();
myScope.getRangesList().get(0)
I get an object of IpRange type, but if I use a raw type and do this:
Scope myScope = new Scope();
myScope.getRangesList().get(0)
I get Object, and I can't use the IpRange methods unless I explicitly cast it to IpRange.
If it had been List<T> I would get it: since I used a raw type, the compiler has no way to know the actual type of the list items. But in this case the items will always be of IpRange type, so why am I getting Object?
The thing is that when I create the scope I don't necessarily know the actual range type. Consider this constructor: public Scope(String rangeStringList); for all I know, the string could be "16.59.60.80" or "fe80::10d9:159:f:fffa%". But what I do know is that I passed in some IpRange object, and I would expect to be able to use this interface whether it is IPv4 or IPv6. And since the compiler can know for sure that this is an IpRange even when I use a raw type, I wonder why Java chose to do it this way.
People have pointed out that all generic type information is stripped when using raw types, and hinted that this is to do with backwards compatibility. I imagine this might not be satisfactory without an explanation, so I'll try to explain how such a problem might be encountered with code like yours.
First of all, imagine the code you have written there is part of an old library, and you're in the process of upgrading the library by adding generics. Perhaps it's a popular library and lots of people have used the old code.
Someone may have done something like this using the classes from your library:
private void someMethod(Scope scope, Object object) {
scope.getRangesList().add(object);
}
Now, looking at this we know that Object might not be of the type IpRange, but this is a private method, so let's assume that type checking is effectively performed by whatever methods call someMethod. This might not be good code, but without generics it does compile and it might work just fine.
Imagine that the person who wrote this upgraded to the new version of your library for some new features or unrelated bug fixes; along with those, they now have access to more type safety with your generic classes. They might not want to use it, though; there is too much legacy code, like the extract above, using raw types.
What you are effectively suggesting is that even though 'scope' is a raw type, the List returned from getRangesList() must always be of type List<IpRange<? extends IP>>, so the compiler should notice this.
If this were the case though, the legacy code above which adds an Object to the list will no longer compile without being edited. This is one way backwards compatibility would be broken without disregarding all available generic type information for raw types.
Yes, if you use raw types, all generics are "turned off" in the rest of that method, and all generic types become raw types instead, even if they would otherwise not be affected by the missing generic parameter of the raw type.
If you use a raw type, all generic type information is stripped from the class, including static methods if called on the instance.
The reason this was done was backward compatibility with Java 1.4.
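A short illustration of how far this stripping goes; in the hypothetical Box class below, the names member does not mention T at all, yet through a raw reference it is still seen as a raw List:

```java
import java.util.ArrayList;
import java.util.List;

public class RawStripping {
    static class Box<T> {
        // Note: this member's type does not involve T at all
        List<String> names = new ArrayList<>();
    }

    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        Box<Integer> typed = new Box<>();
        typed.names.add("ok");
        // typed.names.add(42);   // does not compile: names is List<String>

        Box raw = typed;          // raw reference to the same object
        raw.names.add(42);        // compiles! names is now seen as a raw List
        System.out.println(((Box<?>) raw).names); // [ok, 42]
    }
}
```

This is exactly the "all generic type information is stripped" rule from JLS §4.8: the type of every member of a raw type is the erasure of its declared type, even members whose types never referred to the class's type parameters.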
I've read the whole SCJP 6 book by Sierra and Bates and scored 88% on the exam.
But still, I have never heard of how this kind of code works, as it's not explained in the generics chapter:
Collections.<TimeUnit>reverseOrder()
What is this kind of generics usage?
I discovered it in some code but never read anything about it.
It seems to me it permits giving some help to type inference.
I've tried to search for it, but it's not so easy to find (and it's not even in the SCJP book/exam!).
So can someone give me a proper explanation of how it works, and what all the use cases are?
Thanks
Edit
Thanks for the answers, but I expected more details :) so if someone wants to add some extra information:
What about more complex cases like
Using a type declared in a class: can I do something like Collections.<T>reverseOrder(), for example?
Using extends, super?
Using ?
Giving the compiler only partial help (i.e. O.manyTypesMethod<?,MyHelpTypeNotInfered,?,?,?,?,?>())
It is explicit type specification of a generic method. You can always do it, but in most cases it's not needed. However, it is required in some cases if the compiler is unable to infer generic type on its own.
See an example towards the end of the tutorial page.
Update: only the first of your examples is valid. The explicit type argument must be, well, explicit, so no wildcards, extends or super is allowed there. Moreover, either you specify each type argument explicitly or none of them; i.e. the number of explicit type arguments must match the number of type parameters of the called method. A type parameter such as T is allowed if it is well defined in the current scope, e.g. as a type parameter of the enclosing class.
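A sketch illustrating those rules with Collections.reverseOrder():

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.concurrent.TimeUnit;

public class ExplicitTypeArgs {
    // T is well defined in this scope, so it may be used as an explicit type argument
    static <T extends Comparable<? super T>> Comparator<T> reversedNatural() {
        return Collections.<T>reverseOrder();
    }

    public static void main(String[] args) {
        Comparator<TimeUnit> c = Collections.<TimeUnit>reverseOrder();      // OK
        // Collections.<? extends TimeUnit>reverseOrder(); // wildcards: not allowed
        // Collections.<TimeUnit, String>reverseOrder();   // wrong arity: not allowed
        System.out.println(c.compare(TimeUnit.SECONDS, TimeUnit.MINUTES) > 0); // true
    }
}
```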
You are 100% correct, it is to help with type inference. Most of the time you don't need to do this in Java, as it can infer the type (even from the left hand side of an assignment, which is quite cool). This syntax is covered in the generics tutorial on the Java website.
Just a small addition to the other responses.
When you get the corresponding compiler error, note the following:
While the "traditional" casting approach
(Comparator<TimeUnit>) Collections.reverseOrder()
looks similar to the generics approach
Collections.<TimeUnit>reverseOrder()
the casting approach is of course not type-safe (a possible runtime exception), while the generics approach would produce a compilation error if there is an issue. Thus the generics approach is preferred, of course.
As the other answers have clarified, it's to help the compiler figure out what generic type you want. It's usually needed when using the Collections utility methods that return something of a generic type and do not receive parameters.
For example, consider the Collections.empty* methods, which return an empty collection. If you have a method that expects a Map<String, String>:
public static void foo(Map<String, String> map) { }
You cannot directly pass Collections.emptyMap() to it. The compiler will complain even if it knows that it expects a Map<String, String>:
// This won't compile.
foo(Collections.emptyMap());
You have to explicitly declare the type you want in the call, which I think looks quite ugly:
foo(Collections.<String, String>emptyMap());
Or you can omit that type declaration in the method call if you assign the emptyMap return value to a variable before passing it to the function, which I think is quite ridiculous, because it seems unnecessary, and it shows that the compiler is really inconsistent: it sometimes does type inference on generic methods with no parameters, but sometimes it doesn't:
Map<String, String> map = Collections.emptyMap();
foo(map);
It may not seem like a very important thing, but when the generic types start getting more complex (e.g. Map<String, List<SomeOtherGenericType<Blah>>>), one starts wishing that Java had more intelligent type inference (but, as it doesn't, one will probably start writing new classes where they're not needed, just to avoid all those ugly <> =D).
In this case it is a way of telling the reverseOrder method what kind of ordering should be imposed on the objects, based on the type you specify. The comparator needs specific information about how to order things.