I am trying to update a Postgres daterange. Whatever I try, it isn't working. Currently I get:
Error:(51, 17) java: reference to set is ambiguous
both method set(org.jooq.Field,T) in org.jooq.UpdateSetStep and method set(org.jooq.Field,org.jooq.Field) in org.jooq.UpdateSetStep match
This is my code:
ctx.update(AT_PREFERENCES)
    .set(AT_PREFERENCES.DIRECTION, preferences.direction)
    .set(AT_PREFERENCES.START_END, (Field<Object>) DSL.field("daterange(?, ?)", Object.class, preferences.start, preferences.end))
    .where(AT_PREFERENCES.USER.eq(userId))
    .execute();
How can I update a daterange with jOOQ?
This is due to a very unfortunate Java language design problem (a major flaw in my opinion) documented in this question here. jOOQ should have worked around this problem, but given that jOOQ predates Java 8 and the language design regression was introduced in Java 8, this cannot easily be fixed in the jOOQ API in a backwards-compatible way right now.
There are several workarounds for this:
Create a data type binding
This might be the most robust solution if you plan on using this range type more often, in which case you should define a custom data type binding. It's a bit of extra work up front, but once you have it specified, you will be able to write:
.set(AT_PREFERENCES.START_END, new MyRangeType(preferences.start, preferences.end))
Where AT_PREFERENCES.START_END would be a Field<MyRangeType>.
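For illustration, here is a minimal sketch of such a binding, following the pattern of the data type binding example in the jOOQ manual. MyRangeType is the hypothetical value class from above (assumed to have a parse() factory and a toString() that renders a PostgreSQL range literal such as "[2018-01-01,2018-12-31)"); only the Binding and Converter SPI methods are actual jOOQ API:

import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.sql.Types;
import java.util.Objects;
import org.jooq.*;

public class MyRangeBinding implements Binding<Object, MyRangeType> {

    // Converts between the JDBC-level representation (a range literal String)
    // and the user type MyRangeType
    @Override
    public Converter<Object, MyRangeType> converter() {
        return new Converter<Object, MyRangeType>() {
            @Override
            public MyRangeType from(Object db) {
                // MyRangeType.parse() is a hypothetical factory method
                return db == null ? null : MyRangeType.parse(db.toString());
            }
            @Override
            public Object to(MyRangeType user) {
                return user == null ? null : user.toString();
            }
            @Override
            public Class<Object> fromType() { return Object.class; }
            @Override
            public Class<MyRangeType> toType() { return MyRangeType.class; }
        };
    }

    // Render the bind variable with an explicit cast to daterange
    // (inlined parameters are not handled in this sketch)
    @Override
    public void sql(BindingSQLContext<MyRangeType> ctx) throws SQLException {
        ctx.render().sql("?::daterange");
    }

    @Override
    public void register(BindingRegisterContext<MyRangeType> ctx) throws SQLException {
        ctx.statement().registerOutParameter(ctx.index(), Types.VARCHAR);
    }

    @Override
    public void set(BindingSetStatementContext<MyRangeType> ctx) throws SQLException {
        ctx.statement().setString(ctx.index(),
            Objects.toString(ctx.convert(converter()).value(), null));
    }

    @Override
    public void get(BindingGetResultSetContext<MyRangeType> ctx) throws SQLException {
        ctx.convert(converter()).value(ctx.resultSet().getString(ctx.index()));
    }

    @Override
    public void get(BindingGetStatementContext<MyRangeType> ctx) throws SQLException {
        ctx.convert(converter()).value(ctx.statement().getString(ctx.index()));
    }

    // Not needed for plain JDBC usage
    @Override
    public void set(BindingSetSQLOutputContext<MyRangeType> ctx) throws SQLException {
        throw new SQLFeatureNotSupportedException();
    }

    @Override
    public void get(BindingGetSQLInputContext<MyRangeType> ctx) throws SQLException {
        throw new SQLFeatureNotSupportedException();
    }
}

The binding would then typically be registered with the code generator through a forced type configuration, after which AT_PREFERENCES.START_END is generated as a Field<MyRangeType> and the type-safe set() call above compiles without ambiguity.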
Cast to raw types and bind an unchecked, explicit type variable that isn't Object
This is a quick workaround if you're using this type only once or twice. It has no impact at runtime; it just tricks the compiler into believing that this is correct.
.<Void>set(
(Field) AT_PREFERENCES.START_END,
(Field) DSL.field("daterange(?, ?)", Object.class, preferences.start, preferences.end))
Cast to raw types and then back to some other Field<T> type
Same as before, but this lets type inference do the job of inferring <Void> for <T>.
.set(
(Field<Void>) (Field) AT_PREFERENCES.START_END,
(Field<Void>) (Field) DSL.field("daterange(?, ?)", Object.class,
preferences.start, preferences.end))
Explicitly bind to the "wrong" API method
jOOQ internally handles all method calls in those rare cases where type safety breaks and the "wrong" overload is called. So, you could also simply call this:
.set(
AT_PREFERENCES.START_END,
(Object) DSL.field("daterange(?, ?)", Object.class,
preferences.start, preferences.end))
With this cast, only the set(Field<T>, T) method is applicable and you no longer rely on the Java compiler finding the most specific method among the applicable overloads (which no longer works since Java 8).
jOOQ will run an instanceof check on the T argument to see if it is really of type Field, in which case it internally re-routes to the intended API method set(Field<T>, Field<T>).
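Put together, the original UPDATE statement from the question with this last workaround applied would look something like this (same table and field names as above):

ctx.update(AT_PREFERENCES)
    .set(AT_PREFERENCES.DIRECTION, preferences.direction)
    // the (Object) cast makes only set(Field<T>, T) applicable;
    // jOOQ detects the Field argument at runtime and re-routes internally
    .set(AT_PREFERENCES.START_END,
        (Object) DSL.field("daterange(?, ?)", Object.class, preferences.start, preferences.end))
    .where(AT_PREFERENCES.USER.eq(userId))
    .execute();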
Related
I am trying to do something like:
t.set(field("ColumnName"), select(max(field("ColumnName"))).from("TableName"));
But I am getting the following compile error:
Ambiguous method call, Both
set(Field,Object) in InsertSetStep and
set(Field,Select<? extends Record1>) in InsertSetStep match
I have tried to resolve the ambiguity with casting, but I still receive the same error:
Select<? extends Record1> sq = select(max(field("ColumnName"))).from("TableName");
t.set(field("ColumnName"), (Select<? extends Record1>) sq);
I have a couple of questions:
Why does casting not resolve the ambiguity in this scenario? (I have tried casting to (Object) and that does resolve the ambiguity)
Is there a way for me to resolve the ambiguity?
This is a very unfortunate, but specified behaviour of the Java language and the various compilers. While pre-generics, the method taking a Select type would have been more specific for your particular call, that is no longer the case once generics are involved. The rationale can be seen here.
There's not much a heavily generic and overloaded API like jOOQ can do, but you, as a user, can. You have to avoid binding <T> to Object in such cases, either by using the code generator, or by passing data types manually:
// Assuming this is an INTEGER type
t.set(
field("ColumnName", SQLDataType.INTEGER),
select(max(field("ColumnName", SQLDataType.INTEGER))).from("TableName"));
Or, you start storing your column references in some static variables like:
Field<Integer> COLUMN_NAME = field("ColumnName", SQLDataType.INTEGER);
// And then:
t.set(COLUMN_NAME, select(max(COLUMN_NAME)).from("TableName"));
Using the code generator
Note that this is hardly ever a problem when you're using the code generator, in which case you have specific type information bound to the <T> types of your Field<T> references, so the overloads won't both be applicable anymore.
I really recommend you use the code generator for this reason (and for many others).
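For illustration, assuming a hypothetical generated table TABLE_NAME whose COLUMN_NAME column is of SQL type INTEGER, the generated Field<Integer> reference binds <T> to Integer and the ambiguity disappears:

// COLUMN_NAME is generated as Field<Integer>, so the Object overload
// is no longer applicable and no cast is needed:
t.set(TABLE_NAME.COLUMN_NAME, select(max(TABLE_NAME.COLUMN_NAME)).from(TABLE_NAME));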
I am looking at the changes introduced in Java 5, and the following piece of documentation looks unclear to me.
<T extends Annotation> T getAnnotation(Class<T> annotationType);
This is a generic method. It infers the value of its type parameter T from its argument, and returns an appropriate instance of T, as illustrated by the following snippet:
Author a = Othello.class.getAnnotation(Author.class);
Prior to generics, you would have had to cast the result to Author. Also you would have had no way to make the compiler check that the actual parameter represented a subclass of Annotation.
I would still have been able to make the compiler check that the parameter represented a subclass of Annotation by using Annotation as the parameter type. What am I missing here? How is the compile-time check changed with the introduction of generics?
I agree that I will not need to cast the result now though.
"...represented a subclass..." does not mean instance of a subclass. In that case you could use Annotation as a parameter type. It instead means an instance of Class that corresponds to a subclass of Annotation.
Without generics:
Annotation getAnnotation(Class annotationType);
You could pass any Class to the method. For instance:
SomeType.class.getAnnotation(Object.class);
This compiles, even though Object is not actually a subtype of Annotation.
But with generics, you have a type bound, and the Class itself has a generic parameter that is the type it encodes.
With generics, passing Object.class, which has the type Class<Object>, produces a compile error, since T would be Object, and Object does not conform to the bound <T extends Annotation>.
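Using the Author/Othello types from the quoted documentation, the difference is visible directly (a sketch, assuming those types exist):

// T is inferred as Author from the Class<Author> argument: no cast needed
Author a = Othello.class.getAnnotation(Author.class);

// Does not compile: T would be Object, violating <T extends Annotation>
// Annotation x = Othello.class.getAnnotation(Object.class);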
What am I missing here? How is the compile-time check changed with the introduction of generics?
First thing: annotations were introduced in Java 5 too.
But let's assume that they existed before then. The signature of the (hypothetical) pre-Java 5 version of getAnnotation would have to be:
Annotation getAnnotation(Class annotationType);
so getting the Author annotation would need to be written as:
Author a = (Author) Othello.class.getAnnotation(Author.class);
The type signature of getAnnotation only allows the compiler to know that an Annotation is returned. It does not know that a specific subtype (Author) of Annotation is returned. Therefore the explicit typecast[1] is necessary to perform the runtime check.
Now, you could also hypothesize that they could have made a special case for getAnnotation (since this is a system class). However, the example is to illustrate a point about normal Java type checking. And besides, there are other examples in Java 4 and earlier where they could have added similar special cases, and didn't. (Thank heavens!)
[1] In fact, if you examine the generated bytecode for the call sequence in the generic case, you will see that the compiler has inserted an implicit typecast for the result assignment. This is necessary because it is possible for a generic method to use unchecked conversions and return an object that violates the type constraint that connects the argument and result types.
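A minimal sketch (with a deliberately broken, hypothetical method) of why that compiler-inserted cast matters:

import java.lang.annotation.Annotation;

public class CheckcastDemo {
    // Hypothetical method that abuses an unchecked conversion to lie about T
    @SuppressWarnings("unchecked")
    static <T extends Annotation> T broken(Class<T> type) {
        return (T) (Object) "not an annotation"; // no runtime check happens here
    }

    public static void main(String[] args) {
        // The compiler inserts a checkcast to Override at this assignment,
        // which is where the ClassCastException is actually thrown:
        Override o = broken(Override.class);
    }
}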
I know the difference between a Collection<?>, Collection<Object> and Collection.
The first can hold elements of some single unknown type, the second must allow all Objects, and the third is unchecked.
But for a single instance (non-Collection), e.g. ScheduledFuture<?>, ScheduledFuture<Object> and ScheduledFuture, what is the difference? It seems that they all allow everything.
The Java Language Specification writes:
The use of raw types is allowed only as a concession to compatibility of legacy code. The use of raw types in code written after the introduction of generics into the Java programming language is strongly discouraged. It is possible that future versions of the Java programming language will disallow the use of raw types.
That leaves the other two. As ScheduledFuture only uses its type parameter in a method return type, ScheduledFuture<?> (which is the same as ScheduledFuture<? extends Object>) is equivalent to ScheduledFuture<Object> to calling code.
However, code that actually creates an instance of a ScheduledFuture needs to work with a subclass of ScheduledFuture, and may care a great deal about the type parameter that subclass implements ScheduledFuture with.
When declaring a method that returns a ScheduledFuture, you will therefore want to use the type ScheduledFuture<?> as that signature is easier to implement for the producer of the ScheduledFuture, but equally useful to its consumer. (This is simply a special case of the PECS rule.)
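The JDK itself follows this advice: ScheduledExecutorService.schedule(Runnable, long, TimeUnit) is declared to return ScheduledFuture<?>, and the caller loses nothing by it:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class WildcardDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();

        // The producer only promises "a future of something"...
        ScheduledFuture<?> f =
            ses.schedule(() -> System.out.println("tick"), 1, TimeUnit.SECONDS);

        // ...but the consumer can still wait on it; get() returns Object
        // (null for a Runnable task)
        Object result = f.get();
        ses.shutdown();
    }
}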
They are not interchangeable. Collection without a type specifier is a raw type, and thus its API methods will return Object by default. See this question for a good explanation of the difference between Collection<?> and Collection<Object>: What is the difference between ? and Object in Java generics?
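A short self-contained sketch of that Collection difference, for contrast with the single-instance case:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class WildcardVsObject {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        strings.add("hello");

        Collection<?> unknown = strings;      // any Collection<T> is a Collection<?>
        // unknown.add("world");              // compile error: the element type is unknown
        Object o = unknown.iterator().next(); // reading as Object is always allowed

        Collection<Object> objects = new ArrayList<>();
        objects.add("anything");              // OK: must accept every Object
        // Collection<Object> bad = strings;  // compile error: generics are not covariant
    }
}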
I wrote some code using generics, and I got into the following situation that I didn't manage to understand:
I have the interface IpRange, and the following class:
public class Scope<T extends IP> {
    List<IpRange<T>> rangesList;
    public List<IpRange<T>> getRangesList() { return rangesList; }
}
Now, from some test class, if I write the following:
Scope<Ipv4> myScope = new Scope<Ipv4>();
myScope.getRangesList().get(0);
I get an object of IpRange type, but if I use a raw type and do this:
Scope myScope = new Scope();
myScope.getRangesList().get(0);
I get Object, and I can't use the IpRange methods unless I explicitly cast it to IpRange.
If it had been List<T> I would get it: since I used a raw type, the compiler has no way to know the actual type of the list items. But in this case the items will always be of IpRange type, so why am I getting Object?
The thing is that when I create the scope I don't necessarily know the actual range type. Consider this constructor: public Scope(String rangeStringList); for all I know, the string could be "16.59.60.80" or "fe80::10d9:159:f:fffa%". But what I do know is that I passed some IpRange object, and I would expect to be able to use this interface whether it is IPv4 or IPv6. Since the compiler can know for sure that this is an IpRange even if I used a raw type, I wonder why Java chose to do it this way.
People have pointed out that all generic type information is stripped when using raw types, and hinted that this is to do with backwards compatibility. I imagine this might not be satisfactory without an explanation, so I'll try to explain how such a problem might be encountered with code like yours.
First of all, imagine the code you have written there is part of an old library, and you're in the process of upgrading the library by adding generics. Perhaps it's a popular library and lots of people have used the old code.
Someone may have done something like this using the classes from your library:
private void someMethod(Scope scope, Object object) {
scope.getRangesList().add(object);
}
Now, looking at this, we know that object might not be of type IpRange, but this is a private method, so let's assume that type checking is effectively performed by whatever methods call someMethod. This might not be good code, but without generics it does compile and it might work just fine.
Imagine that the person who wrote this upgraded to the new version of your library for some new features or unrelated bug fixes; along with this, they now have access to more type safety with your generic classes. They might not want to use it, though; there's too much legacy code like the extract above using raw types.
What you are effectively suggesting is that even though 'scope' is a raw type, the List returned from getRangesList() must always be of type List<IpRange<? extends IP>>, so the compiler should notice this.
If this were the case though, the legacy code above which adds an Object to the list will no longer compile without being edited. This is one way backwards compatibility would be broken without disregarding all available generic type information for raw types.
Yes, if you use raw types, all generics are "turned off" in the rest of that method, and all generic types become raw types instead, even if they would otherwise not be affected by the missing generic parameter of the raw type.
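A sketch of that effect using the question's own types (Scope, IpRange and Ipv4 as the asker defined them):

// Raw type: even the return type of getRangesList() is erased to a raw List
Scope raw = new Scope();
Object o = raw.getRangesList().get(0);           // element type is just Object
IpRange<?> r1 = (IpRange<?>) o;                  // explicit cast required

// Parameterized type: full generic information is available
Scope<Ipv4> typed = new Scope<Ipv4>();
IpRange<Ipv4> r2 = typed.getRangesList().get(0); // no cast needed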
If you use a raw type, all generic type information is stripped from the class, including static methods if called on the instance.
The reason this was done was for backward compatibility with Java 1.4.
I read from an interview with Neal Gafter:
"For example, adding function types to the programming language is much more difficult with Erasure as part of Generics."
EDIT:
Another place where I've met similar statement was in Brian Goetz's message in Lambda Dev mailing list, where he says that lambdas are easier to handle when they are just anonymous classes with syntactic sugar:
But my objection to function types was not that I don't like function types -- I love function types -- but that function types fought badly with an existing aspect of the Java type system, erasure. Erased function types are the worst of both worlds. So we removed this from the design.
Can anyone explain these statements? Why would I need runtime type information with lambdas?
The way I understand it is that they decided that, thanks to erasure, it would be messy to go the way of 'function types', e.g. delegates in C#, so they could only use lambda expressions, which are just a simplification of the single abstract method class syntax.
Delegates in C#:
public delegate void DoSomethingDelegate(Object param1, Object param2);
...
// now assign some method to the function type variable (delegate)
DoSomethingDelegate f = DoSomething; // DoSomething is a method with a matching signature
f(new Object(), new Object());
(another sample here
http://geekswithblogs.net/joycsharp/archive/2008/02/15/simple-c-delegate-sample.aspx)
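For comparison, here is a rough Java sketch of the same idea expressed with a nominal functional interface rather than a structural delegate type (the interface name is made up to mirror the C# sample):

// The Java counterpart is a nominal type; a lambda is just a shorthand
// for implementing its single abstract method
@FunctionalInterface
interface DoSomethingDelegate {
    void invoke(Object param1, Object param2);
}

public class DelegateDemo {
    public static void main(String[] args) {
        DoSomethingDelegate f = (p1, p2) -> System.out.println(p1 + " / " + p2);
        f.invoke(new Object(), new Object());
    }
}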
One argument they put forward in Project Lambda docs:
Generic types are erased, which would expose additional places where developers are exposed to erasure. For example, it would not be possible to overload methods m(T->U) and m(X->Y), which would be confusing.
section 2 in:
http://cr.openjdk.java.net/~briangoetz/lambda/lambda-state-3.html
(The final lambda expression syntax will be a bit different from the above document:
http://mail.openjdk.java.net/pipermail/lambda-dev/2011-September/003936.html)
(x, y) -> { System.out.printf("%d + %d = %d%n", x, y, x+y); }
All in all, my best understanding is that only part of the syntax that could have been used actually will be.
What Neal Gafter most likely meant was that not being able to use delegates would make standard APIs more difficult to adapt to a functional style, rather than that the javac/JVM update itself would be more difficult to do.
If someone understands this better than I do, I will be happy to read their account.
Goetz expands on the reasoning in State of the Lambda 4th ed.:
An alternative (or complementary) approach to function types, suggested by some early proposals, would have been to introduce a new, structural function type. A type like "function from a String and an Object to an int" might be expressed as (String,Object)->int. This idea was considered and rejected, at least for now, due to several disadvantages:
It would add complexity to the type system and further mix structural and nominal types.
It would lead to a divergence of library styles—some libraries would continue to use callback interfaces, while others would use structural function types.
The syntax could be unwieldy, especially when checked exceptions were included.
It is unlikely that there would be a runtime representation for each distinct function type, meaning developers would be further exposed to and limited by erasure. For example, it would not be possible (perhaps surprisingly) to overload methods m(T->U) and m(X->Y).
So, we have instead chosen to take the path of "use what you know"—since existing libraries use functional interfaces extensively, we codify and leverage this pattern.
To illustrate, here are some of the functional interfaces in Java SE 7 that are well-suited for being used with the new language features; the examples that follow illustrate the use of a few of them.
java.lang.Runnable
java.util.concurrent.Callable
java.util.Comparator
java.beans.PropertyChangeListener
java.awt.event.ActionListener
javax.swing.event.ChangeListener
...
Note that erasure is just one of the considerations. In general, the Java lambda approach goes in a different direction from Scala, not just on the typing question. It's very Java-centric.
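To make the quoted "use what you know" point concrete, here is a minimal sketch of one interface from that list paired with the new syntax:

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class UseWhatYouKnow {
    public static void main(String[] args) {
        // Comparator existed long before lambdas; the lambda merely
        // implements its single abstract method, compare(T, T)
        Comparator<String> byLength = (a, b) -> Integer.compare(a.length(), b.length());

        List<String> words = Arrays.asList("generics", "type", "erasure");
        words.sort(byLength);
        System.out.println(words); // [type, erasure, generics]
    }
}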
Maybe because what you'd really want would be a type Function<R, P...>, which is parameterised with a return type and some sequence of parameter types. But because of erasure, you can't have a construct like P..., because it could only turn into Object[], which is too loose to be much use at runtime.
This is pure speculation. I am not a type theorist; I haven't even played one on TV.
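For what it's worth, what the JDK later shipped is indeed a family of fixed-arity interfaces (Function, BiFunction, and friends) rather than a single variadic Function<R, P...>:

import java.util.function.BiFunction;
import java.util.function.Function;

public class FixedArity {
    public static void main(String[] args) {
        // One interface per arity, because a variadic type parameter
        // like P... is not expressible:
        Function<String, Integer> length = String::length;
        BiFunction<String, String, Boolean> eq = String::equalsIgnoreCase;

        System.out.println(length.apply("erasure"));  // 7
        System.out.println(eq.apply("Java", "java")); // true
    }
}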
I think what he means in that statement is that at runtime Java cannot tell the difference between these two function definitions:
void doIt(List<String> strings) {...}
void doIt(List<Integer> ints) {...}
Because the information about what type of data the List contains is erased at compile time, the runtime environment wouldn't be able to determine which function you wanted to call.
Trying to compile both of these methods in the same class will produce the following compile error:
doIt(List<String>) clashes with doIt(List<Integer>); both methods have the same erasure
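The erased, runtime-level view can be observed directly; a minimal sketch:

import java.util.ArrayList;
import java.util.List;

public class ErasedLists {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // Both type arguments were erased at compile time, so at runtime
        // the two lists have exactly the same class:
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}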