I stumbled upon a function looking like this:
public void function(Class<?> clazz) {...}
What are the pros/cons of changing the method to:
public <T> void function(Class<T> clazz) {...}
Edit: what are the compile-time / runtime differences?
todd.run is totally right on, but that's only half the answer. There are also use cases for choosing <T> over <?> (or vice versa) that apply when you don't add a type parameter to the class that encloses the method. For example, consider the difference between
public boolean add(List<E> j) {
    boolean t = true;
    for (JLabel b : j) {
        if (b instanceof JLabel) {
            t = t && labels.add(b);
        }
    }
    return t;
}
and
public boolean add(List<? extends JLabel> j) {
    boolean t = true;
    for (JLabel b : j) {
        if (b instanceof JLabel) {
            t = t && labels.add(b);
        }
    }
    return t;
}
The first method will actually not compile UNLESS you add an appropriate type parameter to the enclosing class (for example, by declaring the class with <E extends JLabel>), whereas the second method WILL compile regardless of whether the enclosing class has a type parameter. If you do not use <?>, then you are locally responsible for telling the compiler how to acquire the type that will be filled in for the letter used in its place. You frequently encounter this problem - needing to use ? rather than T - when attempting to write generic methods that use or need "extends" and "super." A better but more elaborate treatment of this issue is on page 18 of Gilad Bracha's Generics Tutorial (PDF). Also see this Stack Overflow question, whose answer illuminates these issues.
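For the "extends"/"super" point, here is a minimal sketch (the class and method names are mine, not from the answer): wildcards let each list argument vary independently and need no declaration, while a named type variable is what ties two uses of the same type together.

import java.util.List;

class WildcardSketch {
    // PECS: the source is only read from ("? extends"), the destination is
    // only written to ("? super"); no type variable has to be declared.
    static void copyNumbers(List<? super Number> dst, List<? extends Number> src) {
        for (Number n : src) {
            dst.add(n);
        }
    }

    // A named type variable is needed when two positions must share
    // exactly the same element type.
    static <T> void copyExact(List<T> dst, List<T> src) {
        dst.addAll(src);
    }
}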
Check out this stack overflow link for information about your second question: Java generics - type erasure - when and what happens. While I don't know the answer to your question about the compile time difference between <?> and <T>, I'm pretty sure the answer can be found at this FAQ that erickson mentioned in that post.
Using "?" is the same as "any", whereas "T" means "a specific type". So, compare these interfaces:
public interface StrictClass<T> {
    public T doFunction(Class<T> clazz);
}

public interface EasyClass<T> {
    public T doFunction(Class<?> clazz);
}
Now, we can create classes:
public class MyStrictClass implements StrictClass<String> {
    public String doFunction(Class<String> stringClass) {
        // do something here that returns a String
        return null; // placeholder
    }
}

public class MyEasyClass implements EasyClass<String> {
    public String doFunction(Class<?> anyClass) {
        // do something here that returns a String
        return null; // placeholder
    }
}
Hope that helps!
Basically, they are equivalent. You can use the first syntax where you don't need to declare anything of type T.
UPDATE: oh, and T can be used to bind types together: if Class<T> is used in different parts of the function, it will refer to the same class, whereas each Class<?> is unrelated to the others.
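A short sketch of that binding (my own example, not from the post): with T the compiler ties the parameter, the cast and the return type to one and the same class, while each ? is captured independently.

class BindingSketch {
    // T relates the Class token, the cast and the return type to one another.
    static <T> T castTo(Class<T> type, Object raw) {
        return type.cast(raw);
    }

    // With wildcards, the two Class parameters are unrelated to each other.
    static void unrelated(Class<?> a, Class<?> b) {
        // nothing ties a and b together
    }
}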
A good resource might be this: http://sites.google.com/site/io/effective-java-reloaded
The interesting part related to your question starts around the 5th minute.
Just in addition to what previous users said.
Hope that helps :]
Related
Could anybody explain what would be the difference in using the following methods:
<T extends Object> void method(T input){
}
and just
void method(Object input){
}
As far as I understand, in both cases we have the Object type at runtime.
What is the benefit of using bounded generic?
Why do you assume there is a benefit? The first is just a more verbose form of the second one, like 1+1+1+1+1 is a more verbose way of expressing 5.
There is no difference the way you have it.
But the construct T extends Object is used in the context of some other "feature". I am forced to use it at times when I refactor some code. Imagine this method (yes, we had code like this in production at some point):
public static <T> int sizeOfList(T obj) {
    if (obj instanceof List) {
        return ((List) obj).size();
    }
    throw new IllegalArgumentException("only list is supported");
}
It's bad. I can just move the runtime check to a compile-time check, so naively you could do:
public static <T extends List<?>> int sizeOfList(T obj) {
    return obj.size();
}
There is a subtle problem here, though. The erasure of T in the first method is Object, while the erasure in the second method is List. So existing callers (if they are not recompiled) are in for a nasty surprise called java.lang.NoSuchMethodError.
To get away from that (and people hating me after I refactor), I do:
public static <T extends Object & List<?>> int sizeOfList(T obj) {
    return obj.size();
}
The compile-time safety is still there, but the erasure is now the first bound, Object, the same as it was before.
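To make the binary-compatibility point concrete, here is a small self-contained sketch (the class name is mine, and the descriptors in the comments are my own illustration; they can be checked with javap -s):

import java.util.Arrays;
import java.util.List;

class ErasureDemo {
    // Same shape as the final version above; erases to sizeOfList(Ljava/lang/Object;)I,
    // exactly like the original <T> version, so old callers keep linking.
    public static <T extends Object & List<?>> int sizeOfList(T obj) {
        return obj.size();
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("a", "b", "c");
        // Erased descriptors of the three variants discussed above:
        //   <T> int sizeOfList(T)                          -> (Ljava/lang/Object;)I
        //   <T extends List<?>> int sizeOfList(T)          -> (Ljava/util/List;)I
        //   <T extends Object & List<?>> int sizeOfList(T) -> (Ljava/lang/Object;)I
        System.out.println(sizeOfList(names)); // prints 3
    }
}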
I don't remember the correct name of this, that's why I can't Google it, so I'll ask this using an example, so it will immediately ring a bell.
This is for Java.
I'm using abstract classes and interfaces with bounded type parameters, just like this:
public interface DBInterface<T, E> {
    public int delete(T element);
}
My problem is that when I try to overload a function like this:
public interface DBInterface<T, E> {
    public int delete(T element);
    public int delete(E example);
}
The IDE complains that T may be the same as E, so it thinks I'm declaring the exact same method twice (because, in fact, T could be the same as E in that declaration).
My questions are:
How is this called? I want to rewrite the question using proper terminology so it becomes more useful for others in the future.
How can I declare T and E in the interface to guarantee they will not be the same type?
Something like this:
public interface DBInterface<T, E != T> {
    public int delete(T element);
}
The way I'm doing it right now is by restricting the legal types like this. Is this the correct way? Is there a way to allow ANY object EXCEPT the other one provided?
public interface DBInterface<T extends DatabaseObject, E> {
}
There's a problem because the compiler has to use erasures of parameterized types for your methods. As T and E don't make it to the runtime, the compiler replaces your method parameter types in a systematic way.
As per the tutorial (check this and this), unbounded type parameters are replaced with Object, and bounded type parameters are replaced with their first bound.
This is to say that your compiled interface method signatures are like this:
public interface DBInterface {
    public int delete(Object element);
    public int delete(Object example);
}
And your second version erases to:
public interface DBInterface {
    public int delete(DatabaseObject element);
    public int delete(Object example);
}
I believe this makes the problem obvious for your first code snippet.
How can I declare T and E in the interface to guarantee they will not be the same type?
This question can't be given an unqualified answer; it depends on what T and E actually mean. For example, suggesting that these type parameters simply be bounded differently may make no sense in some cases.
With that said, one thing that you should consider is renaming methods: I can't think of a DBInterface with two similarly typed parameters that don't play different roles. For example, the methods can be:
public int deleteById(T element);
public int deleteByKey(E key);
In other words, if we were to forget about single-letter generic type names convention, what would you name your type parameters?
public interface DBInterface<IdColumnType, PKColumnType> {}
I don't see why the same shouldn't be applied to your method names. After all, you have to acknowledge that you've allowed your users to pass the same class for both type arguments (that is to say, you'd technically still have a problem even if the compiler didn't do type erasure).
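Putting the two suggestions together, a sketch of what the interface could look like (names taken from the snippets above):

// Descriptive type-parameter names plus distinct method names: the two
// erased signatures no longer clash, and each parameter's role is explicit.
public interface DBInterface<IdColumnType, PKColumnType> {
    int deleteById(IdColumnType element);   // erases to deleteById(Object)
    int deleteByKey(PKColumnType key);      // erases to deleteByKey(Object)
}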
I'm wondering if it's possible to introduce type variables within the scope of a method in Java. That is, limiting their scope within the method body.
However, rather than trying to describe the problem in the abstract, let me illustrate with my concrete problem. I have a couple of classes that look a bit like this:
public class Settings {
    public static abstract class Setting<T> {
        public T value;
        public abstract T fallbackvalue();
    }

    private final List<Setting<?>> settings;
}
I now want to write a function, in Settings, for setting the value of all the Settings to their fallback values as provided by the abstract method. My first thought would be to do it like this:
public void reset() {
    for (Setting<?> setting : settings)
        setting.value = setting.fallbackvalue();
}
However, on second thought it is rather obvious why this does not work; the capture of <?> for setting.value is not the same capture as for setting.fallbackvalue(). Ergo, I need some way to unify the captures. It is possible to solve it like this:
private static <T> void reset1(Setting<T> s) {
    s.value = s.fallbackvalue();
}

public void reset() {
    for (Setting<?> setting : settings)
        reset1(setting);
}
The explicit type variable <T> for reset1 unifies the captures conveniently, but it's obviously the ugliest thing in the world to introduce this function, pollute the namespace, clutter the screen and make the code less readable merely to satisfy the type system.
Is there no way I can do this within the body of reset? What I'd like to do is simply something like this:
public void reset() {
    for (Setting<?> setting : settings) {
        <T> {
            Setting<T> foo = setting;
            foo.value = foo.fallbackvalue();
        }
    }
}
It's not the prettiest thing in the world, but at least to my eyes it is far less of a strain than the variant above. Alas, it's not possible; but what is possible, then?
There's no way to do what you're asking without changing other aspects of your code. However (to solve your particular issue), you can write a reset method in the inner Setting class:
public void reset() {
    value = fallbackvalue();
}
Then your loop (in the reset method of the Settings class) would simply be:
for (Setting<?> setting : settings)
    setting.reset();
No...
Although wildcard capture does introduce new type variables, they are only available to the compiler; there's no way for the programmer to access them directly.
Currently, only a class or a method can introduce type variables. Therefore the only way to convert an expression whose type contains wildcards to a type without wildcards is to pass the expression through a method (or through a constructor with diamond inference, new Foo<>(setting), which is essentially the same mechanism).
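A sketch expanding that constructor idea, reusing the Foo name from the sentence above (assume it is declared inside Settings, next to the Setting class from the question):

private static final class Foo<T> {
    private final Setting<T> setting;

    Foo(Setting<T> setting) {
        this.setting = setting;   // T is the captured element type
    }

    void reset() {
        setting.value = setting.fallbackvalue();
    }
}

public void reset() {
    for (Setting<?> setting : settings) {
        new Foo<>(setting).reset();   // the diamond captures the wildcard
    }
}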
Your reset1 thing is the generally accepted way to do it. It's called a "capture helper", and is an often-cited pattern in generics. It usually comes up in a situation like the following:
public void swap(List<?> list, int i, int j) { // swap elements i and j of the list
    // how to write?
}
In this case, you need to both get something out of the parameterized type, and put something back in of that type. A wildcard just won't let you do that. This is just like your case, because you are also getting something out and putting something in. We know that the type must be the same, but wildcards are too weak to enforce that. Only an explicit type variable can allow us to do it:
public <T> void swap(List<T> list, int i, int j) {
    T tmp = list.get(i);
    list.set(i, list.get(j));
    list.set(j, tmp);
}
However, we don't want this extraneous <T> that is used in only one place in the argument list. To the outside world, swap(List<?> list, int i, int j) should work perfectly fine. That we need to use this <T> type parameter is an implementation detail that nobody needs to know. So to hide it, we wrap the generic function with the function that takes the wildcard:
private <T> void swap_private(List<T> list, int i, int j) { ... }

public void swap(List<?> list, int i, int j) {
    swap_private(list, i, j);
}
It does seem a waste, but that's how it is.
Given the analogy between your case and this one, and the fact that the capture helper is the canonical solution in this situation, I can tell you with confidence that, no, there is no better way to do it.
public class Test<T> {
    public boolean isMember(T item) {
        if (item instanceof Test) {
            return true;
        }
        return false;
    }
}
Is this the correct way to check if the item is an instance of the class?
I went through some searches and it seems that for a generic class, this will not work.
It's unclear what you're trying to test here, but here are a few possibilities:
Is item a T? Yes. Otherwise, it presumably couldn't be passed into the isMember method. The compiler would disallow it. (See Alex's caveat in the comments below.)
Is item a Test? Your isMember method as it is written would test this, but I'm sensing a code smell here. Why would you expect a T to also be a Test, but only some of the time? You may want to reconsider how you're organizing your classes. Also, if this is really what you want, then your method could be written as:
public boolean isMember(T item) {
    return (item instanceof Test);
}
Which begs the question: why have a method like this in the first place? Which is easier to write?
if(obj instanceof Test) {...}
or
if(Test<Something>.isMember(obj)) {...}
I would argue that the first one is simpler, and most Java developers will understand what it means more readily than a custom method.
Is item a Test<T>? There is no way to know this at run time because Java implements generics using erasure. If this is what you want, you'll have to modify the method signature to be like Mike Myers's example.
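I have not reproduced that linked example here, but the usual shape of the class-token approach is a sketch like the following (the constructor parameter is my own addition):

public class Test<T> {
    private final Class<T> type;

    public Test(Class<T> type) {
        this.type = type;
    }

    // Runtime check against the actual type argument, which erasure
    // otherwise throws away.
    public boolean isMember(Object item) {
        return type.isInstance(item);
    }
}

// Usage: new Test<>(String.class).isMember("hello") returns true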
T is not a variable but a placeholder for a type that is fixed at compile time. Generics are a compile-time feature: they add checks at compile time which may not have any meaning at runtime. At runtime we can only check the type of the referenced object, which may appear as a superclass type in the code. If we want to pass the type T as a parameter to the method, we have to do it explicitly, like the following:
void genericMethod(Class<T> tClass) {
    if (String.class.isAssignableFrom(tClass)) {
        // T is String or a subtype of String
    }
}
or
void genericMethod(Class<T> tClass, T tArg) {
    // the class token and the value are passed together
}
Note that the two types need not be exactly the same, as we can see here:
genericMethod(Number.class, 1); // tClass is Class<Number>, tArg is the Integer 1
public class LinkList<T> {
    public boolean isMemberOfClass(T item) {
        if (item instanceof LinkList) {
            return true;
        }
        return false;
    }
}
I'm sorry, I'm not supposed to post a question as an answer.
The class LinkList is a generic class.
The function is meant to check whether the item belongs to the class, i.e. whether they have the same T.
We have been simplifying some definitions and usages of generics in our code.
Now we have hit an interesting case; take this example:
public class MyWeirdClass {

    public void entryPoint() {
        doSomethingWeird();
    }

    @SuppressWarnings("unchecked")
    private <T extends A & B> T getMyClass() {
        if (System.currentTimeMillis() % 2 == 0) {
            return (T) new MyClass_1();
        } else {
            return (T) new MyClass_2();
        }
    }

    private <T extends A & B> void doSomethingWeird() {
        T obj = getMyClass();
        obj.methodFromA();
        obj.methodFromB();
    }

    static interface A {
        void methodFromA();
    }

    static interface B {
        void methodFromB();
    }

    static class MyClass_1 implements A, B {
        public void methodFromA() {}
        public void methodFromB() {}
    }

    static class MyClass_2 implements A, B {
        public void methodFromA() {}
        public void methodFromB() {}
    }
}
Now look at the method doSomethingWeird() in MyWeirdClass:
This code compiles correctly with the Eclipse JDT compiler, but it fails with the Oracle compiler. Since the JDT is able to produce working bytecode, the code is valid at the JVM level, and it is 'only' the Oracle compiler that refuses to compile such dirty(!?) stuff.
We understand that Oracle's compiler won't accept the line T obj = getMyClass(); since T is not a real, existing type. However, since we know that the returned object implements A and B, why not allow it? (The JDT compiler and the JVM do.)
Note also that since the generic code is used only internally in private methods, we do not want to expose the type parameters at class level, polluting external code with generics definitions that we are not interested in (from outside the class).
The schoolbook solution would be to create an interface AB extends A, B. However, since we have a large number of interfaces that are used in different combinations and come from different modules, creating shared interfaces for all the combinations would significantly increase the number of 'dummy' interfaces and ultimately make the code less readable. In theory it would require a combinatorial number of wrapper interfaces to cover all the cases.
The 'business-oriented-engineer' (other people call it the 'lazy-engineer') solution would be to leave the code this way and start using only the JDT to compile it.
Edit: it's a bug in Oracle's javac 6; the code also works without problems on Oracle's javac 7.
What do you think? Are there any hidden dangers in adopting this 'strategy'?
An addition, in order to avoid discussion of points that are (for me) not relevant:
I am not asking why the code above does not compile with Oracle's compiler. I know the reason, and I do not want to modify this kind of code without a very good reason if it works perfectly when using another compiler.
Please concentrate on the definition and usage (without giving a specific type) of the method 'doSomethingWeird()'.
Is there a good reason why we should not use only the JDT compiler, which allows writing and compiling this code, and stop compiling with Oracle's compiler, which will not accept the code above?
(Thanks for input)
Edit: the code above compiles correctly with Oracle javac 7 but not with javac 6. It is a javac 6 bug. So this means that there is nothing wrong with our code and we can stick with it.
The question is answered, and I'll mark it as such after the two-day timeout on my own answer.
Thanks everybody for the constructive feedback.
In Java, a generic method makes sense when the generic type is used in a parameter or in the return type of the method signature. In your sample, doSomethingWeird() declares a generic type T but never uses it in its signature.
See the following sample:
class MyWeirdClass
{
    public void entryPoint()
    {
        doSomethingWeird(new MyClass_1());
    }

    private <T extends A & B> T getMyClass()
    {
        if (System.currentTimeMillis() % 2 == 0)
        {
            return (T) new MyClass_1();
        }
        else
        {
            return (T) new MyClass_2();
        }
    }

    private <T extends A & B> void doSomethingWeird(T a)
    {
        T obj = getMyClass();
        obj.methodFromA();
        obj.methodFromB();
    }
}
This code works fine.
The JLS (Java Language Specification) says in the section on generic methods:
Type parameters of generic methods need not be provided explicitly when a generic method is invoked. Instead, they are almost always inferred as specified in §15.12.2.7.
Based on this quotation: when you don't use T in the doSomethingWeird method signature, how would you specify the actual type of T at invocation time (in the entryPoint method)?
I did not check the code (compile it with both compilers). There is a lot of weird stuff in the language specification at an even more basic level (well, check array declarations...). However, I believe that the design above is a little "over-engineered", and if I understand the need correctly, the required functionality can be achieved with the Factory pattern or, if you are using an IoC framework (Spring?), lookup method injection can do the magic for you. I think the code will be more intuitive and easier to read and maintain.
I think the cause is different. It's not true that the type T on the line "T obj = getMyClass();" is unknown. In fact, because of the definition "T extends A & B", its erasure is A. This is called multiple bounds, and the following applies: "When a multiple bound is used, the first type mentioned in the bound is used as the erasure of the type variable."
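A small illustration of that rule (my own sketch, using the A and B interfaces from the question): only the first bound survives erasure, so calls through the second bound get a synthetic cast inserted by the compiler.

// <T extends A & B> erases to A (the first bound).
static <T extends A & B> void use(T obj) {
    obj.methodFromA();  // invoked directly on the erased type A
    obj.methodFromB();  // compiled roughly as ((B) obj).methodFromB()
}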
Based on MJM's answer, I suggest you update the code as below. Then your code will not rely on type inference.
public void entryPoint() {
    doSomethingWeird(getMyClass());
}

private <T extends A & B> T getMyClass() {
    if (System.currentTimeMillis() % 2 == 0) {
        return (T) new MyClass_1();
    } else {
        return (T) new MyClass_2();
    }
}

private <T extends A & B> void doSomethingWeird(T t) {
    t.methodFromA();
    t.methodFromB();
}
I would still go for creating what you called a "dummy" interface AB.
First of all, I don't find it dummy at all. There are two classes with the same common method definitions, and in one place you need to use one of them regardless of which one it actually is. That is exactly what inheritance is for, so the interface AB fits perfectly here, and generics are the wrong solution. Generics were never meant to implement inheritance.
Second, defining the interface will remove all the generics stuff in your code, and will make it much more readable. Actually adding an interface (or class) never makes your code less readable. Otherwise, it would be better to put all the code in a single class.
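A sketch of how that looks with the names from the question (assuming MyClass_1 and MyClass_2 are changed to implement AB):

interface AB extends A, B { }

// With the shared interface, the type variables, the unchecked casts and
// the @SuppressWarnings all disappear.
private AB getMyClass() {
    return (System.currentTimeMillis() % 2 == 0) ? new MyClass_1() : new MyClass_2();
}

private void doSomethingWeird() {
    AB obj = getMyClass();
    obj.methodFromA();
    obj.methodFromB();
}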
This is what the OpenJDK guys answered to my question:
These failures are caused by the fact that JDK 6 compiler doesn't
implement type-inference correctly. A lot of effort has been put into
JDK 7 compiler in order to get rid of all these problems (your program
compiles fine in JDK 7). However, some of those inference improvements
require source incompatible changes, which is why we cannot backport
these fixes in the JDK 6 release.
So this means for us: there is absolutely nothing wrong with our code, and it is officially supported by Oracle as well. We can stick to this kind of code and use javac 7 with target=1.6 for our Maven builds, while development in Eclipse will guarantee that we do not use Java 7 APIs :D yaaahyyy!!!
Your approach is questionable, because the unchecked casts sacrifice runtime type safety. Consider this example:
interface A {
    void methodFromA();
}

interface B {
    void methodFromB();
}

class C implements A { // but not B!
    @Override
    public void methodFromA() {
        // do something
    }
}

class D implements A, B {
    @Override
    public void methodFromA() {
        // TODO implement
    }

    @Override
    public void methodFromB() {
        // do something
    }
}

class Factory {
    @SuppressWarnings("unchecked")
    public static <T extends A & B> T getMyClass() {
        if (System.currentTimeMillis() % 2 == 0) {
            return (T) new C();
        } else {
            return (T) new D();
        }
    }
}

public class Innocent {
    public static <T extends A & B> void main(String[] args) {
        T t = Factory.getMyClass();
        // Sometimes this line throws a ClassCastException.
        // Really weird, there isn't even a cast here!
        //    (the maintenance programmer)
        t.methodFromB();
    }
}
(You may have to run the program several times to see what confused the maintenance programmer.)
Yes, in this simple program the error is rather obvious, but what if the object is passed around half your program until its interface is missed? How would you find out where the bad object came from?
If that didn't convince you, what about this:
class NotQuiteInnocent {
    public static void main(String[] args) {
        // Sometimes this line throws a ClassCastException
        D d = Factory.getMyClass();
    }
}
Is eliminating a couple of interface declarations really worth that?