For an implementation of a command system, I wanted users to be able to declare an array of Class types that they want as their command arguments, and to get a method accepting those types that they can use with the arguments already parsed.
Ex.
A player wants to create a statistics command that lists player statistics.
The command would have syntax /info PLAYER_NAME STATISTIC_NAME
On the code end, I want the user to be able to extend my Command class and have access to a method
public void resolve(S arg1, T arg2);
Here we now need generics, which is a problem if I have a long list of arguments. Also, S and T must implement some interface so they can be converted from String to their type. And there is no class array, only a number of overloaded methods, which doesn't ensure 100% compatibility: for example, if I want an Integer argument, I can't just modify Integer to help me out here. My attempted solution was to create an argument wrapper (ArgType) and have S and T extend ArgType. This works fine until I tried to make ArgType a pseudo-singleton by using a Manager to store a Class-to-instance map. This was a problem because
Map<Class<? extends ArgType<T>>, ArgType<T>>
is not a valid statement because T is not defined. Is there a way to make this map work without casting a lot, or is there a better way to do this entirely?
There's no way to do this with generics without casting. Effectively, what you've got here is a variant of the typesafe heterogeneous container pattern, as discussed in this answer, only it's a typesafe heterogeneous argument list, not a container.
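As a sketch of how that pattern can look for argument parsing (ArgParserRegistry and the registered parsers below are hypothetical names, not anything from your code), you can keep a single Class-keyed registry so the only runtime cast goes through Class.cast and stays checked:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: a Class-keyed parser registry in the spirit of the
// typesafe heterogeneous container pattern. register() ties each Class token
// to a parser producing that type, and parse() recovers the type via Class.cast.
class ArgParserRegistry {
    private final Map<Class<?>, Function<String, ?>> parsers = new HashMap<>();

    <T> void register(Class<T> type, Function<String, T> parser) {
        parsers.put(type, parser);
    }

    <T> T parse(Class<T> type, String raw) {
        Function<String, ?> parser = parsers.get(type);
        if (parser == null) {
            throw new IllegalArgumentException("No parser registered for " + type);
        }
        return type.cast(parser.apply(raw)); // runtime-checked cast via the Class token
    }
}

Usage would look something like registry.register(Integer.class, Integer::valueOf) followed by registry.parse(Integer.class, "42"). The argument list itself still has to be handled positionally (for example as a Class<?>[] that the command declares), which is where the heterogeneous part remains unavoidable without some casting.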
For what it's worth, if you didn't mind switching languages, you could look into Scala tuples, which are basically the typesafe generic list you're looking for. Even Scala, under the hood, has a million (well, 22) Tuple implementation classes, though: Tuple1[T], Tuple2[T1, T2], Tuple3[T1, T2, T3], and so on.
IMPORTANT:
The code I currently have is working per my expectations. It does what I want it to do. My question is about whether the WAY in which I have made it work is wrong. The reason I am asking is that I've seen plenty of Stack Overflow results about raw types and how they should basically NEVER be used.
What I'm doing and Why I used raw types
Currently I am dynamically creating an instance of a concrete class that implements a generic interface, where the concrete class takes in parameters when it is constructed. When I make an instance of this class and use the returned object to call various methods, I use raw types because it works for what I'm trying to do. Here is an example from my functioning code where the raw types are used. This code is in top-down order, i.e. between code blocks there is no omitted code.
Loading properties file
Properties prop = new Properties();
try {
prop.load(ObjectFactory.class.getResourceAsStream("config.properties"));
This is the File parser that implements FileParserImplementation and takes in the data and puts it into an array. This code gets the Class type and then makes an instance of that type dynamically.
Class<? extends FileParserImplementation> parser = null;
parser = Class.forName(prop.getProperty("FileParserImplementation")).asSubclass(FileParserImplementation.class);
FileParserImplementation ParserInstance = (FileParserImplementation) parser.getDeclaredConstructors()[0].newInstance();
These two classes and their instances are the two separate DataParsers implementing DataParserImplementation. They take in the array of Strings that the FileParser gives and create objects/manipulate the data into whatever is needed, putting out a Collection of this data. The FileParser dependency is passed in through constructor injection. This can be configured through the properties file at runtime.
Class<? extends DataParserImplementation> dataset1 = Class.forName(prop.getProperty("DataParserImplementation_1")).asSubclass(DataParserImplementation.class);
Class<? extends DataParserImplementation> dataset2 = Class.forName(prop.getProperty("DataParserImplementation_2")).asSubclass(DataParserImplementation.class);
DataParserImplementation Dataset1Instance = (DataParserImplementation) dataset1.getDeclaredConstructors()[0].newInstance(ParserInstance);
DataParserImplementation Dataset2Instance = (DataParserImplementation) dataset2.getDeclaredConstructors()[0].newInstance(ParserInstance);
This is the CrossReferencer class that implements CrossReferenceImplementation. It takes in the two datasets and cross-references them in whatever way is desired by the actual concrete reflected class. This also can be configured at runtime. It outputs a Map in this main.
The map serves as the final collection for the data (I might change that later).
Class<? extends CrossReferenceImplementation> crossreferencer = Class.forName(prop.getProperty("CrossReferenceImplementation")).asSubclass(CrossReferenceImplementation.class);
CrossReferenceImplementation crossReferencerInstance =
(CrossReferenceImplementation) crossreferencer.getDeclaredConstructors()[0].newInstance();
Here we get the Map result from calling a method on our reflected instance, and then print out the contents of that map. Currently it seems the map's type parameters are picked up as well, because the objects inside the map properly use their toString methods when reflectiveFinalMap.get(key).toString() is called.
This leads me to believe it works as I intend.
Map reflectiveFinalMap = (Map)
crossReferencerInstance.CrossReference(Dataset1Instance.Parse(), Dataset2Instance.Parse());
for (Object key:reflectiveFinalMap.keySet()) {
System.out.println(key + " { " +
reflectiveFinalMap.get(key).toString() + " }");
}
return reflectiveFinalMap;
}
//catch block goes here
Notice that each time I reflectively create an instance of a class that implements one of my interfaces, I use the interface as the raw type. My hope is that the reflection then sees the parameterized type of this raw type when it creates the concrete subclass, because that's where the parameter types are actually specified. The point is to let any class that implements those interfaces be generic to the point where it can take in just about anything and return just about anything.
Things I tried to not use raw types.
I've tried to actually obtain the parameterized type of CrossReferenceImplementation in the reflected crossreferencer Class that I get right now by calling
Class arrayparametertype = (Class)((ParameterizedType)crossreferencer.getClass().getGenericSuperclass()).getActualTypeArguments()[0];
And then I tried to pass in that arrayparametertype when creating an instance of crossreferencer like this:
CrossReferenceImplementation crossReferencer = (CrossReferenceImplementation<<arrayparametertype>>) crossreferencer.getDeclaredConstructors()[0].newInstance();
That didn't work since variable parameter types apparently aren't a thing.
I tried to manually specify the specific parameter of the concrete reflected class (I DON'T want this anyway, because it breaks the whole point of reflection here: decoupling the classes from each other by being able to use anything that implements the appropriate interface). This caused the following warning to appear and the code to not actually run the methods it was supposed to:
//how the parameters were specified. Messy and breaks the reflection.
CrossReferenceImplementation<Map<String, SalesRep>,Map<String, SalesRep>,Map<String, SalesRep>> crossReferencer = (CrossReferenceImplementation) crossreferencer.getDeclaredConstructors()[0].newInstance();
//where the warning occured
Map reflectiveFinalMap = (Map) crossReferencer.CrossReference(Dataset1.Parse(), Dataset2.Parse());
The Warning:
"Dataset1 has raw type so result of Parse is erased".
Note that SalesRep here is the object in which the data is held as fields. This object gets manipulated and put into various collections. It too is accessed via reflection in the many methods of the DataParserImplementations.
A similar error message and problem occurred when specifying the parameter type of the Map (AGAIN, I DON'T want this because it makes the reflection pointless; I want the map return result to be generic and be specified by the implementing class).
//where the parameterized type was specified
Map reflectiveFinalMap = (Map<String,SalesRep>) crossReferencer.CrossReference(Dataset1.Parse(), Dataset2.Parse());
When specifying the actual parameterized type of the map result the error message was:
"crossReferencer has raw type so result of CrossReference is erased".
Running the code did indeed confirm for me that .CrossReference method's results were erased while everything else ran fine.
What internet searches I tried before asking here
So I used the raw types for both operations, as can be seen in the main code, and everything worked fine. But I have seen so much "Don't use raw types", and this is why I ask: is this an appropriate use of raw types? Should I do it a different way that DOESN'T break the reflection? It breaks the reflection because manually specifying the type parameter not only makes my code not run, it also means ONLY that concrete class can be used. I reflected so that I could use anything that implements the generic interface; I don't want to only be able to use specific concrete instances. I've tried searching Stack Overflow for what's in my title and other similar things. I think this might be related to type erasure, but I'm honestly not sure. Nothing else really addressed this problem, because nothing talked about generics, parameterized types and reflection all at once (the crux of my problem). I have been told generics and reflection don't play well together, but this code works anyway, and works the way I want it to. It works well. I just want to make sure I'm not doing something TERRIBLY wrong.
The Goal.
To gain an understanding of my current usage of raw types so I know I'm doing it the right way. By 'right' I mean the opposite of what I define as the 'wrong' way below. An example of the understanding I seek:
To understand why pseudocode along the lines of:
ConcreteClass forname(myPropertiesFileObject.get(ConcreteClassname)) as subClass of (MyGenericInterface);
MyRAWGenericInterfaceType ConcreteClassInstance = (MyRAWGenericInterfaceType) ConcreteClass.newInstance( Insert generic Type constructor arguments here);
RAWCollectionType someCollection = RAWCollectionType concreteClassInstance.CallingAMethod(Insert generic Type method arguments here);
uses raw types, where RAW is contained in the interface or collection type name. This is as opposed to doing it in some way that doesn't use raw types but also doesn't break the point of the reflection, which is to decouple the interactions between these classes. Specifying the parameters with hard code would 'break the reflection' in this case. Additionally, I'd like to understand why specifying parameters (even if I know that's not what I'm going to do) for these RAW types in the pseudocode above causes the errors listed above in the question. Namely, why is the result of CallingAMethod erased when supplying the actual parameters to the RAWCollectionType that the method returns? The root problem is that when I supply type parameters to RAWCollectionType when I declare it, it refuses to be updated by what CallingAMethod returns, and I don't understand why. It takes the return value, but if the body of CallingAMethod has the returned value passed in as an argument, updated inside the method and then returned, the return that I receive doesn't have the updates. CallingAMethod in this example would be like if I had a list like:
[1,2,3]
and inside the method I had something like:
foreach(thing in list){
thing += 1
}
and then I returned the list, the return I'd get when specifying parameters would be [1,2,3] and when using raw types it would be [2,3,4] like I desire. I'm asking this because I've heard bad things about using raw types.
Additionally, I want to make sure that my use of raw types is not horribly wrong and that it works because it's SUPPOSED to work. Maybe I've just gotten good at the whole reflection and generics thing and found a valid use for raw types, or I could be doing something so horrible it warrants my arrest. That's what I intend to find out. To clarify, by wrong I mean:
bad design (should use a different way to call my methods reflectively and also use reflective classes that use generic interfaces)
inefficient design (time-complexity-wise, code-line-wise, or maintainability-wise)
there is a better way, you shouldn't even be doing this in the first place
If any of those reasons, or something I missed, popped out when you read this code, then TELL ME. Otherwise, please explain why my use of raw types is valid and isn't a violation of this question: [link] What is a raw type and why shouldn't we use it?
Java has type erasure, so your Map<A,B> at runtime is just a Map; the same goes for CrossReferenceImplementation<Map<String, SalesRep>, Map<String, SalesRep>, Map<String, SalesRep>>, which is just a CrossReferenceImplementation.
This also means that you can cast any map to a raw Map and put any objects you want in it, so you can have a Map<String, Long> that is actually storing Cookie keys and Fish values, and this is why you need to be careful with raw types and reflection.
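A tiny, self-contained illustration of that pitfall (class and variable names are made up for the example):

import java.util.HashMap;
import java.util.Map;

public class RawTypePitfall {
    public static void main(String[] args) {
        Map<String, Long> typed = new HashMap<>();
        Map raw = typed;                  // raw view of the very same map
        raw.put("key", "not a Long");     // compiles with only an unchecked warning
        Long value = typed.get("key");    // ClassCastException thrown here, far from the faulty put
    }
}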
You can't really use reflection and generics together in the normal way - you will always have some unchecked code - but you can limit it to a minimum and make it reasonably type-safe anyway.
For example, you can create your own accessor to get a field (sketched below with only minimal error handling):
import java.lang.reflect.Field;

public class FieldAccessor<O, T> {
    private final Field field;

    private FieldAccessor(Field field) { this.field = field; }

    @SuppressWarnings("unchecked") // unsafe cast, but we validated the type before constructing this accessor
    public T get(O object) throws IllegalAccessException { return (T) field.get(object); }

    public static <O, T> FieldAccessor<O, T> create(Class<? super O> definingClass, Class<? super T> fieldClass, String fieldName) throws NoSuchFieldException {
        Field field = definingClass.getDeclaredField(fieldName);
        if (field.getType() != fieldClass) {
            throw new IllegalArgumentException(fieldName + " is not of type " + fieldClass);
        }
        field.setAccessible(true);
        return new FieldAccessor<>(field);
    }
}
Then you have all the needed validation before you use that field, and it will already return a valid type. So you can get a value of a valid type and add it to a normal generic Map instance.
FieldAccessor<X, A> keyAccessor = FieldAccessor.create(X.class, A.class, "someProperty");
FieldAccessor<Y, B> valueAccessor = FieldAccessor.create(Y.class, B.class, "someOtherProperty");
Map<A, B> myMap = new HashMap<>();
myMap.put(keyAccessor.get(myXValue), valueAccessor.get(myYValue));
This way you have type-safe code that still works via reflection - it might still fail at runtime if you provide invalid types, but at least you always know where it will fail, as FieldAccessor already checks all the types at runtime to ensure you don't do something like adding an Integer to a Map<String, Long>, which might be hard to debug later. (Unless someone uses this accessor as a raw type, as .get isn't validated - but you can add that by passing definingClass to the constructor and checking the object instance in the get method.)
You can do similar things for methods and for fields that use generic types (like a field of type Map<X, Y>; this FieldAccessor would only let you check that it is some kind of Map) - but it is much harder, as the API for generics is still a bit "empty": there is no built-in way to create your own instances of generic types or to check whether they are assignable. (Libraries like Gson do that so they can deserialize maps and other generic types; they have their own implementations of Java's generic type representation interfaces, like ParameterizedType, and implement their own methods to check whether given types are assignable.)
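For the simpler case of declared generic types, reflection does expose them. A rough sketch, assuming some class with a field declared as Map<String, Long> (the class and field names here are hypothetical):

import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.util.Arrays;

class GenericFieldInspector {
    // Prints the declared type arguments of a field, e.g. [class java.lang.String, class java.lang.Long]
    // for a field declared as Map<String, Long>. Only the declaration survives erasure this way;
    // the actual runtime contents of the map are still not checked.
    static void printTypeArguments(Class<?> owner, String fieldName) throws NoSuchFieldException {
        Field field = owner.getDeclaredField(fieldName);
        if (field.getGenericType() instanceof ParameterizedType) {
            ParameterizedType type = (ParameterizedType) field.getGenericType();
            System.out.println(Arrays.toString(type.getActualTypeArguments()));
        }
    }
}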
When you are using reflection you always need to remember and understand that you are the one responsible for validating types, as the compiler can't help you here. So unsafe, raw-typed code is fine as long as you have logic that validates that the code will never do something really unsafe (like passing the wrong type to a generic method, e.g. an Integer into a map of Longs).
Just don't throw raw types and reflection into the middle of otherwise normal code; add some abstraction around it, so the code and the project are easier to maintain.
I hope this somewhat answers your question.
If I am creating a java class to be generic, such as:
public class Foo<T>
How can one determine internally to that class, what 'T' ended up being?
public ???? Bar()
{
//if its type 1
// do this
//if its type 2
// do this
//if its type 3
// do this
//if its type 4
// do this
}
I've poked around the Java API and played with the Reflection stuff, instanceof, getClass, .class, etc, but I can't seem to make heads or tails of them. I feel like I'm close and just need to combine a number of calls, but keep coming up short.
To be more specific, I am attempting to determine whether the class was instantiated with one of 3 possible types.
I've used a similar solution to what he explains here for a few projects and found it pretty useful.
http://blog.xebia.com/2009/02/07/acessing-generic-types-at-runtime-in-java/
The gist of it is using the following:
public Class returnedClass() {
ParameterizedType parameterizedType = (ParameterizedType)getClass()
.getGenericSuperclass();
return (Class) parameterizedType.getActualTypeArguments()[0];
}
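One caveat worth adding (assuming Foo declares the returnedClass() method above): getGenericSuperclass() only carries a type argument if some subclass fixes it in its extends clause, so this trick is usually used with a concrete or anonymous subclass:

// Works: the anonymous subclass records Foo<String> as its generic superclass.
Foo<String> works = new Foo<String>() {};
System.out.println(works.returnedClass()); // prints: class java.lang.String

// Does not work: for a plain instance the generic superclass is just Object,
// so the cast to ParameterizedType inside returnedClass() throws ClassCastException.
Foo<String> plain = new Foo<String>();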
In contrast to .NET, Java generics are implemented by a technique called "type erasure".
What this means is that the compiler will use the type information when generating the class files, but not transfer this information to the byte code. If you look at the compiled classes with javap or similar tools, you will find that a List<String> is a simple List (of Object) in the class file, just as it was in pre-Java-5 code.
Code accessing the generic List will be "rewritten" by the compiler to include the casts you would have to write yourself in earlier versions. In effect the following two code fragments are identical from a byte code perspective once the compiler is done with them:
Java 5:
List<String> stringList = new ArrayList<String>();
stringList.add("Hello World");
String hw = stringList.get(0);
Java 1.4 and before:
List stringList = new ArrayList();
stringList.add("Hello World");
String hw = (String)stringList.get(0);
When reading values from a generic class in Java 5 the necessary cast to the declared type parameter is automatically inserted. When inserting, the compiler will check the value you try to put in and abort with an error if it is not a String.
The whole thing was done to keep old libraries and new generified code interoperable without any need to recompile the existing libs. This is a major advantage over the .NET way where generic classes and non-generic ones live side-by-side but cannot be interchanged freely.
Both approaches have their pros and cons, but that's the way it is in Java.
To get back to your original question: You will not be able to get at the type information at runtime, because it simply is not there anymore, once the compiler has done its job. This is surely limiting in some ways and there are some cranky ways around it which are usually based on storing a class-instance somewhere, but this is not a standard feature.
Because of type erasure, there is no way to do this directly. What you could do, though, is pass a Class<T> into the constructor and hold onto it inside your class. Then you can check it against the three possible Class types that you allow.
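A minimal sketch of that approach, assuming the three allowed types happen to be String, Integer and Date (adjust to whatever your real types are):

import java.util.Date;

public class Foo<T> {
    private final Class<T> type; // the type token passed in at construction time

    public Foo(Class<T> type) {
        this.type = type;
    }

    public void bar() {
        if (type == String.class) {
            // handle the String case
        } else if (type == Integer.class) {
            // handle the Integer case
        } else if (type == Date.class) {
            // handle the Date case
        } else {
            throw new IllegalArgumentException("Unsupported type: " + type);
        }
    }
}

Construction then looks like new Foo<>(Date.class), and the class can check the token wherever it needs to branch on the type.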
However, if there are only three possible types, you might want to consider refactoring into an enum instead.
The problem is that most of the generic information will disappear during compilation.
One common solution is to save the type during the creation of the object.
For a short introduction to Java's type erasure behaviour, read this page.
If you know a few specific types that are meaningful, you should create subclasses of your generic type with the implementation.
So
public class Foo<T>
public ???? Bar()
{
//else condition goes here
}
And then
public class DateFoo extends Foo<Date>
public ???? Bar()
{
//Whatever you would have put in if(T == Date) would go here.
}
The whole point of a generic class is that you don't need to know the type that is being used....
It looks like what you want is in fact not a Generic class, but an interface with a number of different implementations. But maybe it would become clearer if you stated your actual, concrete goal.
I agree with Visage. Generics is for compile-time validation, not runtime dynamic typing. Sounds like what you need is really just the factory pattern. But if your "do this" isn't instantiation, then a simple Enum will probably work just as well. Like what Michael said, if you have a slightly more concrete example, you'll get better answers.
This is a follow-up thread on How to get rid of instanceof in this Builder implementation
There are still some problems with this design. Every time a new parameter is introduced, one must create a new ConcreteParameter class.
That's not a problem in itself. But one must also add a method append(ConcreteParameter) to the CommandBuilder, and I'm not quite liking that dependency.
To summarize
Commands can be configured with parameters. Not every command can receive the same parameters, so some have to be ignored when applied to a command (in this implementation this is achieved by throwing an UnsupportedOperationException).
Parameters that can be applied to certain classes are used differently in those classes (for example, FTPCommand and HTTPCommand might use IpParameter in different ways).
In the future new Commands and Parameters might be introduced
Update
The implementation as it is now works. But isn't it overkill that, if I have about 30 parameters, I have to have a separate method for every parameter?
If it is, what is a cleaner and more flexible way/pattern to achieve this?
What is a parameter for you, and what is a parameter type? If you really have different kinds of objects as parameters, with different operations you may perform on them, then you cannot avoid having different classes to handle them. If your parameters only differ in how the commands interpret them, but otherwise are mostly String and Integer or whatever, then having extra classes for each possible meaning is surely overkill. And if your parameters are some form of key-value pair, then I'd represent them as such: a single class (or perhaps one for each reasonable value type) to contain the name and the value of the parameter.
If you can use the above to reduce the number of parameter types, you might want to consider reflection for the actual command building. You could have an annotation @Parameter which you use to decorate setter methods of your command classes. E.g. @Parameter void setIP(String) would mean that the command accepts a String parameter and will interpret it as an IP address. If you use key-value parameters, you can either derive the key from the method name, or add a value to the annotation, or both. Using such a framework, you could have a single command builder which would take care of feeding parameters to the appropriate setters.
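A rough sketch of that idea, with hypothetical annotation and builder names (here the parameter key is stored in the annotation rather than derived from the method name):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Map;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Parameter {
    String value(); // the parameter key this setter accepts
}

class ReflectiveCommandBuilder {
    // Feeds each known parameter to the matching annotated setter; keys the command
    // doesn't declare a setter for are simply ignored, which replaces the
    // UnsupportedOperationException approach.
    static void configure(Object command, Map<String, Object> parameters) throws Exception {
        for (Method method : command.getClass().getMethods()) {
            Parameter annotation = method.getAnnotation(Parameter.class);
            if (annotation != null && parameters.containsKey(annotation.value())) {
                method.invoke(command, parameters.get(annotation.value()));
            }
        }
    }
}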
Even though there is an accepted answer, I feel you need to be aware of another option.
I would use a Map as a context object, and pass the context to the execute method of your command. The command will simply pull the parameters it needs out of the Map by String.
public interface Command {
    public void execute(Map<String, Object> context);
}

class OneCommandImpl implements Command {
    public void execute(Map<String, Object> context) {
        context.get("p1");
        context.get("p2");
    }
}
The advantages of this approach are that it's simple, and there is no need for reflection. You can build any command you want, that requires any number of arguments, using this one interface. The primary disadvantage is the type of value in the Map is not specific.
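A possible usage, with hypothetical keys and values (whatever OneCommandImpl actually expects):

// Assumes java.util.HashMap is imported alongside java.util.Map.
Map<String, Object> context = new HashMap<>();
context.put("p1", "some value");
context.put("p2", 42);

Command command = new OneCommandImpl();
command.execute(context); // the command pulls out only the keys it cares about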
When I was programming a Form Validator in PHP, when creating new methods, I needed to increase the number of arguments in old methods.
When I was learning Java and read that extends exists so that you don't touch previously tested, working code, I thought I shouldn't have increased the number of arguments in the old methods, but should instead have overridden the old methods with new ones.
Imagine you need to verify whether a field is empty in one part of the form, in another, and in yet another.
If the arguments are different, you'll overload isEmpty; but if the arguments are the same, is it right to use isEmpty, isEmpty2, isEmpty3, or three classes with one isEmpty per class, or, if both are wrong, what should I have done?
So the question is:
If I need different behaviors for a method isEmpty which receives the same number of arguments, what should I do?
Use different names? ( isEmpty, isEmpty2, isEmpty3 )
Have three classes with a single isEmpty method?
Other?
If that's the question, then I think:
Use different names when the methods belong to the same logical unit (they are the same sort of validation), but don't use numbers as versions; it's better to name them after what they do: isEmptyUser, isEmptyAddress, isEmptyWhatever.
Use separate classes when the validator object can be computed in one place and passed around during the program's lifecycle. Say Validator v = Validator.getInstance( ... ); and then use it as validator.isEmpty() and let polymorphism do its job.
Alternatively, you could pack the arguments into one class and pass it to the isEmpty method, although you'll end up with pretty much the same naming problem. Still, it's easier to refactor from there and have the new class do the validation for you.
isEmpty( new Arguments(a,b,c ) ); => arguments.isEmpty();
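A rough sketch of that refactoring (the field names are just placeholders):

class Arguments {
    private final String name;
    private final String address;

    Arguments(String name, String address) {
        this.name = name;
        this.address = address;
    }

    boolean isEmpty() {
        // The emptiness rule now lives with the data it applies to.
        return (name == null || name.isEmpty()) && (address == null || address.isEmpty());
    }
}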
The Open/Closed Principle [usually attributed to Bertrand Meyer] says that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification". This might be the principle that you came across in your Java days. In real life this applies to completed code where the cost of modification, re-testing and re-certification outweighs the benefit of the simplicity gained by making a direct change.
If you are changing a method because it needs an additional argument, you might choose to use the following steps:
Copy the old method.
Remove the implementation from the copy.
Change the signature of the original method to add the new argument.
Update the implementation of the original method to use the new argument.
Implement the copy in terms of the new method with a default value for the argument.
If your implementation language doesn't support method overloading then the principle is the same but you need to find a new name for the new method signature.
The advantage of this approach is that you have added the new argument to the method, and your existing client code will continue to compile and run.
This works well if there is an obvious default for the new argument, and less well if there isn't.
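As a concrete sketch of those steps for the isEmpty example (the extra ignoreWhitespace argument is made up for illustration):

public class FieldValidator {
    // New signature: takes the additional argument.
    public boolean isEmpty(String value, boolean ignoreWhitespace) {
        String effective = (ignoreWhitespace && value != null) ? value.trim() : value;
        return effective == null || effective.isEmpty();
    }

    // Old signature: kept so existing callers keep compiling; delegates with a default.
    public boolean isEmpty(String value) {
        return isEmpty(value, false);
    }
}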
Since Java 5 you can use a variable-length list of arguments, as in void foo(Object... params).
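For instance, a small sketch of an empty check using varargs:

// Returns true only if every supplied value is null or blank.
static boolean isEmpty(String... values) {
    for (String value : values) {
        if (value != null && !value.trim().isEmpty()) {
            return false;
        }
    }
    return true;
}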
You will need to come up with creative names for your methods, since you can't overload methods that have the same type and number of arguments (or overload based on return type). I actually personally prefer this to overloading anyway. So you can have isEmpty and isEmptyWhenFoo and isEmptyWhenIHaveTheseArguments (well, maybe not the last one :)
Not sure if this actually answers your question, but the best way to think about OO in "real life" is to think of the Nygaard Classification:
ObjectOrientedProgramming. A program execution is regarded as a physical model, simulating the behavior of either a real or imaginary part of the world.
So how would you build a physical device to do what you are trying to do in code? You'd probably have some kind of "Form" object, and the form object would have little tabs or bits connected to it to represent the different Form variables, and then you would build a Validator object that would take the Form object in a slot and then flash one light if the form was valid and another if it was invalid. Or your Validator could take a Form object in one slot and return a Form object out (possibly the same one), but modified in various ways (that only the Validator understood) to make it "valid". Or maybe a Validator is part of a Form, and so the Form has this Validator thingy sticking out of it...
My point is, try to imagine what such a machine would look like and how it would work. Then think of all of the parts of that machine, and make each one an object. That's how "object-oriented" things work in "real life", right?
With that said, what is meant by "extending" a class? Well, a class is a "template" for objects -- each object instance is made by building it from a class. A subclass is simply a class that "inherits" from a parent class. In Java at least, there are two kinds of inheritance: interface inheritance and implementation inheritance. In Java, you are allowed to inherit implementation (actual method code) from at most one class at a time, but you can inherit many interfaces -- which are basically just collections of attributes that someone can see from outside your class.
Additionally, a common way of thinking about OO programming is to think about "messages" instead of "method calls" (in fact, this is the original term invented by Alan Kay for Smalltalk, which was the first language to actually be called "object-oriented"). So when you send an isEmpty message to the object, how do you want it to respond? Do you want to be able to send different arguments with the isEmpty message and have it respond differently? Or do you want to send the isEmpty message to different objects and have them respond differently? Either are appropriate answers, depending on the design of your code.
Instead of having one class providing multiple versions of isEmpty with differing names, try breaking your model down into finer-grained pieces that can be put together in more flexible ways.
Create an interface called Empty with one method isEmpty(String value);
Create implementations of this interface, like EmptyIgnoreWhiteSpace and EmptyIgnoreZero
Create a FormField class that has validation methods which delegate to implementations of Empty.
Your Form object will have instances of FormField, which will know how to validate themselves.
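A minimal sketch of this design, with hypothetical implementation details:

interface Empty {
    boolean isEmpty(String value);
}

class EmptyIgnoreWhiteSpace implements Empty {
    public boolean isEmpty(String value) {
        return value == null || value.trim().isEmpty();
    }
}

class EmptyIgnoreZero implements Empty {
    public boolean isEmpty(String value) {
        return value == null || value.isEmpty() || value.equals("0");
    }
}

class FormField {
    private final String value;
    private final Empty emptyCheck;

    FormField(String value, Empty emptyCheck) {
        this.value = value;
        this.emptyCheck = emptyCheck;
    }

    boolean validate() {
        return !emptyCheck.isEmpty(value); // delegate the emptiness rule to the Empty implementation
    }
}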
Now you have a lot of flexibility: you can combine your Empty implementation classes to make new classes like EmptyIgnoreWhiteSpaceAndZero, and you can use them in other places that have nothing to do with form field validation.
You don't have to have multiple similarly named methods polluting your object model.