Shouldn't this code produce a ClassCastException? - java

The following code compiles and runs successfully without any exception:
import java.util.ArrayList;

class SuperSample {}

class Sample extends SuperSample {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        try {
            ArrayList<Sample> sList = new ArrayList<Sample>();
            Object o = sList;
            ArrayList<SuperSample> ssList = (ArrayList<SuperSample>) o;
            ssList.add(new SuperSample());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Shouldn't the line ArrayList<SuperSample> ssList = (ArrayList<SuperSample>) o; produce a ClassCastException?
While the following code produces a compile-time error to prevent heap pollution, shouldn't the code above have a similar prevention at runtime?
ArrayList<Sample> sList = new ArrayList<Sample>();
ArrayList<SuperSample> ssList = (ArrayList<SuperSample>) sList;
EDIT:
If Type Erasure is the reason behind this, shouldn't there be additional mechanisms to prevent an invalid object from being added to the List? For instance,
String[] iArray = new String[5];
Object[] iObject = iArray;
iObject[0]= 5.5; // throws ArrayStoreException
then why is
ssList.add(new SuperSample());
not made to throw any exception?

No, it should not: at run time both lists have the same type, ArrayList. This is called erasure. Generic type parameters are not part of the compiled class; they are all erased during compilation. From the JVM's perspective your code is equivalent to:
public static void main(String[] args) {
    try {
        ArrayList sList = new ArrayList();
        Object o = sList;
        ArrayList ssList = (ArrayList) o;
        ssList.add(new SuperSample());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Basically, generics only simplify development by producing compile-time errors and warnings; they don't affect execution at all.
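To make the "doesn't affect execution" point concrete, here is a minimal, self-contained sketch (it redeclares SuperSample and Sample from the question so it compiles on its own): the unchecked cast and the add both succeed, and the ClassCastException only appears when the element is read back through the ArrayList<Sample> view.
import java.util.ArrayList;

class SuperSample {}

class Sample extends SuperSample {}

class DeferredFailureDemo {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        ArrayList<Sample> sList = new ArrayList<Sample>();
        Object o = sList;
        ArrayList<SuperSample> ssList = (ArrayList<SuperSample>) o; // no runtime check
        ssList.add(new SuperSample()); // succeeds: at runtime it is just an ArrayList
        Sample s = sList.get(0);       // ClassCastException here, at the read
        System.out.println(s);
    }
}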
EDIT:
Well, the base concept behind this is the reifiable type. I'd strongly recommend reading this manual:
A reifiable type is a type whose type information is fully available at runtime. This includes primitives, non-generic types, raw types, and invocations of unbound wildcards.
Non-reifiable types are types where information has been removed at compile-time by type erasure.
To be short: arrays are reifiable and generic collections are not. So when you store something in an array, the type is checked by the JVM, because the array's type is present at runtime. An array represents just a piece of memory, while a collection is an ordinary class, which might have any sort of implementation. For example, it could store data in a database or on disk under the hood. If you'd like to go deeper, I suggest reading the book Java Generics and Collections.

In your code example,
class SuperSample { }
class Sample extends SuperSample { }
...
ArrayList<Sample> sList = new ArrayList<Sample>();
Object o = sList;
ArrayList<SuperSample> ssList = (ArrayList<SuperSample>)o;
Shouldn't the last line produce a ClassCastException?
No. That exception is thrown by the JVM when it detects incompatible types being cast at runtime. As others have noted, this is because of erasure of generic types. That is, generic types are known only to the compiler. At the JVM level, the variables are all of type ArrayList (the generics having been erased) so there is no ClassCastException at runtime.
As an aside, instead of assigning to an intermediate local variable of type Object, a more concise way to do this assignment is to cast through raw:
ArrayList<SuperSample> ssList = (ArrayList)sList;
where a "raw" type is the erased version of a generic type.
Shouldn't there be additional mechanisms to prevent an invalid object from being added to the List?
Yes, there are. The first mechanism is compile-time checking. In your own answer you found the right location in the Java Language Specification where it describes heap pollution which is the term for an invalid object occurring in the list. The money quote from that section, way down at the bottom, is
If no operation that requires a compile-time unchecked warning to be issued takes place, and no unsafe aliasing occurs of array variables with non-reifiable element types, then heap pollution cannot occur.
So the mechanism you're looking for is in the compiler, and the compiler notifies you of this via compilation warnings. However, you've disabled this mechanism by using the @SuppressWarnings annotation. If you were to remove this annotation, you'd get a compiler warning at the offending line. If you absolutely want to prevent heap pollution, don't use @SuppressWarnings, and add the options -Xlint:unchecked -Werror to your javac command line.
The second mechanism is runtime checking, which requires use of one of the checked wrappers. Replace the initialization of sList with the following:
List<Sample> sList = Collections.checkedList(new ArrayList<Sample>(), Sample.class);
This will cause a ClassCastException to be thrown at the point where a SuperSample is added to the list.
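For context, a minimal sketch of that wrapper in use; it assumes the SuperSample and Sample classes from the question are on the classpath:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class CheckedListDemo {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // SuperSample and Sample are the classes from the question.
        List<Sample> sList = Collections.checkedList(new ArrayList<Sample>(), Sample.class);
        Object o = sList;
        List<SuperSample> ssList = (List<SuperSample>) o;
        ssList.add(new SuperSample()); // ClassCastException thrown here, at the add
    }
}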

The key to answering your question here is type erasure in Java.
You get a warning at compile time in your first case, rather than the compile error of the second, because the indirection through an Object variable prevents the compiler from rejecting the cast outright (I'm guessing the error is only raised when casting one parameterized type directly to another, which is not what happens in your first case; if anyone can confirm that, I would be glad to hear about it).
And your code runs because, in the end, sList, ssList and o are all just ArrayList.

I think this can't produce a ClassCastException because of a backward compatibility concern in Java.
Generic information is not included in the bytecode (the compiler gets rid of it during compilation).
Imagine a scenario where your project uses some old legacy code (an old library written for Java 1.4) and you pass a generic List to some method in that legacy code.
You can do this.
In time before generics legacy code was allowed to put anything at all (except primitives) into a collection.
So this legacy code can't get a ClassCastException even if it tries to put a String into a List<Integer>.
From the legacy code perspective it is just List.
So this strange behaviour is a consequence of type erasure and of allowing backward compatibility in Java.
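A small sketch of that scenario; legacyFill here is a made-up stand-in for a pre-generics method that only sees the raw List:
import java.util.ArrayList;
import java.util.List;

class LegacyDemo {
    // Hypothetical pre-Java-5 style method: it only knows the raw List type.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static void legacyFill(List list) {
        list.add("not a number"); // legal for a raw List; no exception here
    }

    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<Integer>();
        legacyFill(numbers);        // fine: from the legacy code's view it is just a List
        Integer n = numbers.get(0); // ClassCastException only surfaces here
        System.out.println(n);
    }
}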
EDIT:
You get an ArrayStoreException for arrays because at runtime the JVM KNOWS the type of an array, and you don't get any exception for collections because of type erasure and this backward compatibility concern: the JVM doesn't know the type of a collection at runtime.
You can read about this topic in "SCJP Sun® Certified Programmer for Java™ 6 Study Guide" book in chapter 7 "Generics and Collections"

From the JLS (§4.12.2):
It is possible that a variable of a parameterized type refers to an object that is not of that parameterized type. This situation is known as heap pollution. This situation can only occur if the program performed some operation that would give rise to an unchecked warning at compile-time.
For example, the code:
List l = new ArrayList<Number>();
List<String> ls = l; // unchecked warning
gives rise to an unchecked warning, because it is not possible to ascertain, either at compile-time (within the limits of the compile-time type checking rules) or at run-time, whether the variable l does indeed refer to a List<String>.
If the code above is executed, heap pollution arises, as the variable ls, declared to be a List<String>, refers to a value that is not in fact a List<String>.
The problem cannot be identified at run-time because type variables are not reified, and thus instances do not carry any information at run-time regarding the actual type parameters used to create them.


Java raw type declaration

Well, there have been many questions on this site regarding raw types and generics in Java, even questions about why the next line of code comes up with a warning:
List<String> list = new ArrayList();
And it has been answered many times: since new ArrayList() is a raw type, the compiler raises a warning because list is now not "type safe", and the option to write this line of code exists solely for backward compatibility.
What I don't understand, and didn't find questions about, is why. Since the compiler compiles the Java code by "looking" only at the static types, how come there is a difference at compile time between writing new ArrayList(); and new ArrayList<>();?
For example, writing this code:
List<String> list = new ArrayList(); // 1
list.add("A string"); // 2
list.add(new Object()); // 3
results in a compilation warning on line 1, no compilation problem on line 2, but a type-safety compilation error on line 3.
Therefore, adding the generic type to the first line (new ArrayList<>();) results only in the removal of the compiler warning.
I understand it's a bad habit to use raw types, but my question is really: what is the difference (other than the compilation warning) in writing the right-hand side as a raw type?
Thanks!
The compiler does not care what mechanism created the object that your variable list refers to. In fact, it could also refer to null. Or it could be a call to a method. Example:
void yourMethod() {
List<String> list = createStringList();
...
}
List<String> createStringList() {
return new ArrayList(); // raw type here
}
When you have a properly typed variable (one not declared with a raw type), all usages of this variable are checked against the generic type.
It is another matter if the variable itself is declared with a raw type. Example:
List list = new ArrayList();
list.add("A string");
list.add(new Object());
This compiles fine, but the warning should alert you because things may break later!
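For instance, a sketch of how such code breaks later (the warnings are suppressed here only so the example compiles quietly; in real code you should heed them):
import java.util.ArrayList;
import java.util.List;

class RawListDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List list = new ArrayList();
        list.add("A string");
        list.add(new Object());      // accepted: the variable is a raw List

        List<String> strings = list; // unchecked assignment, warning only
        for (String s : strings) {   // ClassCastException when the Object element is reached
            System.out.println(s);
        }
    }
}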
Suppose you have another class where the constructor parameters depend on the type parameter:
class Foo<T> {
Foo(T obj) { }
}
Then the compiler checks the parameter type when you create it with a type parameter or diamond operator:
Foo<String> bar = new Foo<>(42); // doesn't compile
But a raw type turns off generics checking:
Foo<String> bar = new Foo(42); // does compile but causes heap pollution
so a warning is necessary.
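A runnable variant of the Foo example above, with a get method added purely for illustration, showing what the raw constructor call lets through and where it finally fails:
class Foo<T> {
    private final T obj;
    Foo(T obj) { this.obj = obj; }
    T get() { return obj; }
}

class FooDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        Foo<String> bar = new Foo(42); // compiles, warning only: heap pollution
        String s = bar.get();          // ClassCastException surfaces here
        System.out.println(s);
    }
}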

varargs heap pollution : what's the big deal?

I was reading about varargs heap pollution and I don't really get how varargs or non-reifiable types would be responsible for problems that do not already exist without generics. Indeed, I can very easily replace
public static void faultyMethod(List<String>... l) {
Object[] objectArray = l; // Valid
objectArray[0] = Arrays.asList(42);
String s = l[0].get(0); // ClassCastException thrown here
}
with
public static void faultyMethod(String... l) {
Object[] objectArray = l; // Valid
objectArray[0] = 42; // ArrayStoreException thrown here
String s = l[0];
}
The second one simply uses the covariance of arrays, which is really the problem here. (Even if List<String> was reifiable, I guess it would still be a subclass of Object and I would still be able to assign any object to the array.) Of course I can see there's a little difference between the two, but this code is faulty whether it uses generics or not.
What do they mean by heap pollution (it makes me think about memory usage, but the only problem they talk about is potential type unsafety), and how is it different from any type violation using array covariance?
You're right that the common (and fundamental) problem is with the covariance of arrays. But of those two examples you gave, the first is more dangerous, because it can modify your data structures and put them into a state that will break much later on.
Consider if your first example hadn't triggered the ClassCastException:
public static void faultyMethod(List<String>... l) {
Object[] objectArray = l; // Valid
objectArray[0] = Arrays.asList(42); // Also valid
}
And here's how somebody uses it:
List<String> firstList = Arrays.asList("hello", "world");
List<String> secondList = Arrays.asList("hello", "dolly");
faultyMethod(firstList, secondList);
return secondList.isEmpty()
? firstList
: secondList;
So now we have a List<String> that actually contains an Integer, and it's floating around, safely. At some point later — possibly much later, and if it's serialized, possibly much later and in a different JVM — someone finally executes String s = theList.get(0). This failure is so far distant from what caused it that it could be very difficult to track down.
Note that the ClassCastException's stack trace doesn't tell us where the error really happened; it just tells us who triggered it. In other words, it doesn't give us much information about how to fix the bug; and that's what makes it a bigger deal than an ArrayStoreException.
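Here is a compilable sketch of that scenario; the method returns the polluted element so the pollution actually reaches the caller, and the exception is reported in the caller's frame (names are made up for illustration):
import java.util.Arrays;
import java.util.List;

class VarargsDemo {
    // This declaration compiles with a "possible heap pollution" warning - which is the point.
    static List<String> faultyMethod(List<String>... l) {
        Object[] objectArray = l;           // valid: arrays are covariant
        objectArray[0] = Arrays.asList(42); // no ArrayStoreException: the array is really a List[]
        return l[0];                        // statically a List<String>, actually a List<Integer>
    }

    public static void main(String[] args) {
        List<String> polluted = faultyMethod(Arrays.asList("hello", "world"));
        // Possibly much later, and far away from faultyMethod:
        String s = polluted.get(0);         // the ClassCastException is reported here, in main
        System.out.println(s);
    }
}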
The difference between an array and a List is that the array checks its references, e.g.
Object[] array = new String[1];
array[0] = new Integer(1); // fails at runtime.
however
List list = new ArrayList<String>();
list.add(new Integer(1)); // doesn't fail.
From the linked document, I believe what Oracle means by "heap pollution" is to have data values that are technically allowed by the JVM specification, but are disallowed by the rules for generics in the Java programming language.
To give you an example, let's say we define a simple List container like this:
class List<E> {
    Object[] values;
    int len = 0;

    List() { values = new Object[10]; }
    void add(E obj) { values[len++] = obj; }
    E get(int i) { return (E) values[i]; }
}
This is an example of code that is generic and safe:
List<String> lst = new List<String>();
lst.add("abc");
This is an example of code that uses raw types (bypassing generics) but still respects type safety at a semantic level, because the value we added has a compatible type:
String x = (String)lst.values[0];
The twist - now here is code that works with raw types and does something bad, causing "heap pollution":
lst.values[lst.len++] = new Integer("3");
The code above works because the array is of type Object[], which can store an Integer. Now when we try to retrieve the value, it'll cause a ClassCastException - at retrieval time (which is way after the corruption occurred), instead of at add time:
String y = lst.get(1); // ClassCastException for Integer(3) -> String
Note that the ClassCastException happens in our current stack frame, not even in List.get(), because the cast in List.get() is a no-op at run time due to Java's type erasure system.
Basically, we inserted an Integer into a List<String> by bypassing generics. Then when we tried to get() an element, the list object failed to uphold its promise that it must return a String (or null).
Prior to generics, there was absolutely no possibility that an object's runtime type is inconsistent with its static type. This is obviously a very desirable property.
We can cast an object to an incorrect runtime type, but the cast would fail immediately, at the exact site of casting; the error stops there.
Object obj = "string";
((Integer)obj).intValue();
// we are not gonna get an Integer object
With the introduction of generics, along with type erasure (the root of all evils), it is now possible that a method returns String at compile time, yet returns Integer at runtime. This is messed up, and we should do everything we can to stop it at the source. That is why the compiler is so vocal at every sight of an unchecked cast.
The worst thing about heap pollution is that the runtime behavior is undefined! Different compilers/runtimes may execute the program in different ways. See case1 and case2.
They are different because ClassCastException and ArrayStoreException are different.
The compile-time type-checking rules for generics should ensure that it is impossible to get a ClassCastException at a place where you did not write an explicit cast, unless your code (or some code you called, or code that called you) did something unsafe at compile time; and in that case, whoever did the unsafe thing should have received a compile-time warning about it.
ArrayStoreException, on the other hand, is a normal part of how arrays work in Java, and pre-dates Generics. It is not possible for compile-time type checking to prevent ArrayStoreException because of the way the type system for arrays is designed in Java.
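A small sketch of that contrast (the class name is made up): an array store is checked against the array's runtime component type, while an ordinary reference is only checked at an explicit cast.
class StoreVsCastDemo {
    public static void main(String[] args) {
        // Arrays: the store itself is checked against the array's runtime type.
        Object[] objects = new String[1];    // legal: arrays are covariant
        try {
            objects[0] = Integer.valueOf(1);
        } catch (ArrayStoreException e) {
            System.out.println("checked at the store: " + e);
        }

        // References: only an explicit cast is checked, at the cast site.
        Object o = Integer.valueOf(1);
        try {
            String s = (String) o;
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("checked at the cast: " + e);
        }
    }
}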

Why can't I create an array of a type parameter in Java?

Well, I have read a lot of answers to this question, but I have a more specific one. Take the following snippet of code as an example.
public class GenericArray<E>{
E[] s= new E[5];
}
After type erasure, it becomes
public class GenericArray{
Object[] s= new Object[5];
}
This snippet of code seems to work well. Why does it cause a compile-time error?
In addition, I have learned from other answers that the following code works for the same purpose.
public class GenericArray<E>{
E[] s= (E[])new Object[5];
}
I've read some comments saying that the piece of code above is unsafe, but why is it unsafe? Could anyone provide me with a specific example where the above piece of code causes an error?
In addition, the following code is wrong as well. But why? It seems to work well after erasure, too.
public class GenericArray<E>{
E s= new E();
}
Array declarations are required to have a reifiable type, and generics are not reifiable.
From the documentation: the only type you can place on an array is one that is reifiable, that is:
It refers to a non-generic class or interface type declaration.
It is a parameterized type in which all type arguments are unbounded wildcards (§4.5.1).
It is a raw type (§4.8).
It is a primitive type (§4.2).
It is an array type (§10.1) whose element type is reifiable.
It is a nested type where, for each type T separated by a ".", T itself is reifiable.
This means that the only legal declaration for a "generic" array would be something like List<?>[] elements = new ArrayList[10];. But that's definitely not a generic array, it's an array of List of unknown type.
The main reason that Java complains about you performing the cast to E[] is that it's an unchecked cast. That is, you're going from a checked type explicitly to an unchecked one; in this case, a checked generic type E to an unchecked type Object. However, this is the only way to create an array that is generic, and it is generally considered safe if you have to use arrays.
In general, the advice to avoid a scenario like that is to use generic collections where and when you can.
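As a sketch of that advice (GenericStore is a made-up name): backing the storage with a List<E> instead of an E[] removes the unchecked cast entirely, and a wrongly typed add is rejected at compile time.
import java.util.ArrayList;
import java.util.List;

class GenericStore<E> {
    private final List<E> items = new ArrayList<E>();

    void add(E item) { items.add(item); }
    E get(int i) { return items.get(i); }
}
For example, new GenericStore<String>() compiles without any warnings, and store.add(42) would be a compile-time error.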
This snippet of code seems to work well. Why does it cause a compile-time error?
First, because it would violate type safety (i.e. it is unsafe - see below), and in general code that can be statically determined to do this is not allowed to compile.
Remember that, due to type erasure, the type E is not known at run-time. The expression new E[5] could at best create an array of the erased type, in this case Object, rendering your original statement:
E[] s= new E[5];
Equivalent to:
E[] s= new Object[5];
Which is certainly not legal. For instance:
String[] s = new Object[10];
... is not compilable, for basically the same reason.
You argued that after erasure, the statement would be legal, implying that you think this means that the original statement should also be considered legal. However this is not right, as can be shown with another simple example:
ArrayList<String> l = new ArrayList<Object>();
The erasure of the above would be ArrayList l = new ArrayList();, which is legal, while the original is clearly not.
Coming at it from a more philosophical angle, type erasure is not supposed to change the semantics of the code, but it would do so in this case - the array created would be an array of Object rather than an array of E (whatever E might be). Storing a non-E object reference in it would then be possible, whereas if the array were really an E[], it should instead generate an ArrayStoreException.
why is it unsafe?
(Bearing in mind we are now talking about the case where E[] s= new E[5]; has been replaced with E[] s = (E[]) new Object[5];)
It is unsafe (which in this instance is short for type unsafe) because it creates at run-time a situation in which a variable (s) holds a reference to an object instance which is not a sub-type of the variable's declared type (Object[] is not a subtype of E[], unless E==Object).
Could anyone provide me with a specific example where the above piece of code causes an error?
The essential problem is that it is possible to put non-E objects into an array that you create by performing a cast (as in (E[]) new Object[5]). For example, say there is a method foo which takes an Object[] parameter, defined as:
void foo(Object [] oa) {
oa[0] = new Object();
}
Then take the following code:
String [] sa = new String[5];
foo(sa);
String s = sa[0]; // If this line was reached, s would
// definitely refer to a String (though
// with the given definition of foo, this
// line won't be reached...)
The array definitely contains String objects even after the call to foo. On the other hand:
E[] ea = (E[]) new Object[5];
foo(ea);
E e = ea[0]; // e may now refer to a non-E object!
The foo method might have inserted a non-E object into the array. So even though the third line looks safe, the first (unsafe) line has violated the constraints that guarantee that safety.
A full example:
class Foo<E> {
    void foo(Object[] oa) {
        oa[0] = new Object();
    }

    public E get() {
        E[] ea = (E[]) new Object[5];
        foo(ea);
        return ea[0]; // returns the wrong type
    }
}

class Other {
    public void callMe() {
        Foo<String> f = new Foo<>();
        String s = f.get(); // ClassCastException on *this* line
    }
}
The code generates a ClassCastException when run, and it is not safe. Code without unsafe operations such as casts, on the other hand, cannot produce this type of error.
In addition, the following code is wrong as well. But why? It seems to work well after erasure, too.
The code in question:
public class GenericArray<E>{
E s= new E();
}
After erasure, this would be:
Object s = new Object();
While this line itself would be fine, to treat the lines as being the same would introduce the semantic change and safety issue that I have described above, which is why the compiler won't accept it. As an example of why it could cause a problem:
public <E> E getAnE() {
return new E();
}
... because after type erasure, 'new E()' would become 'new Object()' and returning a non-E object from the method clearly violates its type constraints (it is supposed to return an E) and is therefore unsafe. If the above method were to compile, and you called it with:
String s = this.<String>getAnE();
... then you would get a type error at runtime, since you would be attempting to assign an Object to a String variable.
Further notes / clarification:
Unsafe (which is short for "type unsafe") means that it could potentially cause a run-time type error in code that would otherwise be sound. (It actually means more than this, but this definition is enough for purposes of this answer).
It's possible to cause a ClassCastException or ArrayStoreException or other exceptions with "safe" code, but these exceptions only occur at well-defined points. That is, you can normally only get a ClassCastException when you perform a cast, an operation that inherently carries this risk. Similarly, you can only get an ArrayStoreException when you store a value into an array.
The compiler doesn't verify that such an error will actually occur before it complains that an operation is unsafe. It just knows that certain operations are potentially able to cause problems, and warns about those cases.
That you can't create a new instance of (or an array of) a type parameter is both a language feature designed to preserve safety and probably also a reflection of the implementation restrictions posed by the use of type erasure. That is, new E() might be expected to produce an instance of the actual type parameter, when in fact it could only produce an instance of the erased type. To allow it to compile would be unsafe and potentially confusing. In general you can use E in place of an actual type with no ill effect, but that is not the case for instantiation.
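A common workaround, sketched here rather than taken from the answers above, is to pass the element class in explicitly so the created array's runtime component type really is E:
import java.lang.reflect.Array;

class TypedArray<E> {
    private final E[] elements;

    @SuppressWarnings("unchecked")
    TypedArray(Class<E> elementType, int size) {
        // Array.newInstance creates an array whose runtime type is elementType[]
        this.elements = (E[]) Array.newInstance(elementType, size);
    }

    void set(int i, E value) { elements[i] = value; }
    E get(int i) { return elements[i]; }
}
With new TypedArray<>(String.class, 5) the backing array really is a String[], so even unchecked misuse that sneaks a non-String in is caught at the store with an ArrayStoreException instead of polluting the heap.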
A compiler can use a variable of type Object to do anything a variable of type Cat can do. The compiler may have to add a typecast, but such typecast will either throw an exception or yield a reference to an instance of Cat. Because of this, the generated code for a SomeCollection<T> doesn't have to actually use any variables of type T; the compiler can replace T with Object and cast things like function return values to T where necessary.
A compiler cannot use an Object[], however, to do everything a Cat[] can do. If a SomeCollection<T> had an array of type T[], it would not be able to create an instance of that array type without knowing the type of T. It could create an instance of Object[] and store references to instances of T in it without knowing the type of T, but any attempt to cast such an array to T[] would be guaranteed to fail unless T happened to be Object.
Let's say generic arrays were allowed in Java. Because of erasure, a new T[2] would really be created as an Object[2], so code like the following would go through unchecked:
Object[] myStrs = new Object[2];
myStrs[0] = 100;  // fine for an Object[]
myStrs[1] = "hi"; // also fine for an Object[] - nothing stops the mix
If users were allowed to create generic arrays, they could end up with heterogeneous contents like this without the compiler or the runtime store check catching it. That defeats the purpose of arrays (arrays are meant to hold elements of one homogeneous type, remember?). You can always use an array of a class/struct if you want a heterogeneous array.
More info here.

Is it possible to have Runtime type mismatch in Java?

I am reading up on OCaml, and the wiki says:
*its static type system renders runtime type mismatches impossible*
I understand why, but then I think, why is this so special in OCaml (and FP)? How do you cause a runtime type mismatch in, say, Java? e.g.
boolean a = true;
int b = a + 1;
will produce an error at compile time.
EDIT 1:
Haskell
func :: Int -> Bool
func i = if i > 0 then True else False
Java
boolean func (int i) {
if (i > 0) return true; else return false;
}
Isn't it the case that both will guarantee the argument type when the function is called?
In Java, you can cause a Runtime type mismatch like this:
Object i = Integer.valueOf(6);
String s = (String) i;
System.out.println(s);
This will compile, because the compile-time type of i (Object) is allowed to be cast to String; however, at runtime, the actual value of i (6, as an Integer) is incompatible with String.
Given this, a ClassCastException is thrown.
Consider the following code using arrays:
// create an array of strings
String[] strings = new String[10];
// cast it to an array of objects
Object[] objects = strings;
// insert an object into the array
objects[0] = new Object(); // Run-time error occurs here
Java allows this to compile, despite the fact that casting an array of strings to an array of objects introduces the possibility of run-time errors. The last line demonstrates this, causing a run-time exception of a type created specifically for this situation: java.lang.ArrayStoreException: java.lang.Object.
See Java generics and type erasure
That wiki is discussing static type systems in general, and contrasting them with dynamically-typed languages rather than other statically-typed languages. There's nothing specific to OCaml or Haskell about runtime type mismatches that doesn't apply to all statically-typed languages.
Note that impossible is a little disingenuous. Pretty much all statically-typed languages give you the ability to also do runtime typing in a limited way, because certain tasks are extremely difficult without it. In fact, the very paragraph you're quoting lists a couple of those cases, like serialization. Other answers here have provided some good examples in Java. However, the vast majority of your code should be able to easily avoid runtime type mismatches.
In Java, type mismatches are possible. For example, the following throws a ClassCastException.
Object o = 1;
String s = (String) o;
However, the arguments passed to a method are checked by the compiler.
It is impossible to invoke a method with signature
boolean func (int i)
unless i is an int, and it is impossible to invoke a method with signature
boolean func2 (String s)
unless s is a String or null.
Therefore you will never get a ClassCastException at runtime within the body of func2 because s is not a String.
In Java, it is impossible to have a type mismatch between reifiable types (primitive types, non-generic reference types, raw types, types parameterized by all wildcards, or array types whose element type is reifiable).
If you have a variable of a reifiable type, then the value it holds at any point in time is guaranteed to be a value of that type. For reference types, this means that the reference is either null or points to an object whose runtime class is a subtype of the variable's type. This is guaranteed because Java requires a cast when storing a value whose type is not a subtype of the variable's type, and casts to reifiable types are checked casts, which means they are checked at runtime, and if the type is not compatible it will throw an exception rather than let there be a type mismatch.
On the other hand, for non-reifiable types (e.g. parameterized types), it is possible to have a type mismatch (which is called "heap pollution" in Java terminology). This is because casts to non-reifiable types are unchecked casts.
List<String> foo = new ArrayList<String>();
foo.add("hi");
List<?> bar = foo;
List<Integer> baz = (List<Integer>)bar; // unchecked cast
// now there is a type mismatch
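Wrapping that snippet up so it runs, with one extra read to show where the mismatch finally surfaces (the class name is made up):
import java.util.ArrayList;
import java.util.List;

class HeapPollutionDemo {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        List<String> foo = new ArrayList<String>();
        foo.add("hi");
        List<?> bar = foo;
        List<Integer> baz = (List<Integer>) bar; // unchecked cast, no runtime check
        Integer i = baz.get(0);                  // ClassCastException surfaces here
        System.out.println(i);
    }
}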

No error with this collection declared with generics?

I have the below code snippet and it works fine. Shouldn't it throw a compile-time error, because I have defined c as an ArrayList that should contain String objects, but I am adding an Integer object? So why does it not throw a compile-time or runtime error?
Collection c = new ArrayList<String>();
c.add(123);
I know the snippet below will throw a compile-time error, but why not the one above? What's the logical difference between these two code snippets?
Collection<String> c = new ArrayList();
c.add(123);
The first code snippet does not result in a compile-time error, because at the line
c.add(123)
the compiler inspects the type of c. Since you declared c as a raw Collection, the compiler treats it as such. Since the raw Collection offers a method add(Object), it is perfectly legal to add any object to c, including an Integer. Note, however, that this program will result in a runtime error if you attempt to read back the collection's values as Strings.
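For instance, a minimal sketch of that read-back failure (warnings suppressed only to keep the focus on the runtime behaviour; the class name is made up):
import java.util.ArrayList;
import java.util.Collection;

class RawCollectionDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        Collection c = new ArrayList<String>(); // the raw-typed variable from the question
        c.add(123);                             // compiles; a warning at most

        Collection<String> strings = (Collection<String>) c; // unchecked cast
        for (String s : strings) {              // ClassCastException: 123 is not a String
            System.out.println(s);
        }
    }
}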
In your second code snippet you provide more information for the compiler to work with. In this snippet it knows that the Collection it deals with is a Collection<String>, which can only accept Strings. Thus, there is no method add(int) or add(Object), only add(String). This leads to a compile-time error.
Why did it not throw a compile-time error?
Because it's not syntactically or semantically invalid, it's just unwise.
Note that most modern IDEs (e.g. Eclipse) can be configured to warn you about the unparameterised Collection c, and optionally to fail to compile.
In the first example, the collection is "raw". This will usually result in a warning but not an error (depending on your exact set-up). This is primarily in order to be able to compile all the pre-Java-5 legacy code around.
In the second example, you assign a "raw" object to a parameterized variable, which again only produces a warning; the compile-time error comes from calling add(123) on a Collection<String>.
1) What is the logical difference?
Above: A Collection can be declared without a generic type. This is called a raw type. Such a variable can then refer to any kind of collection. Since, with a raw-typed collection, you might at runtime use a collection of strings as a collection of integers and cause a runtime exception, the compiler will usually emit a warning. Since you have not typed the collection in the above example, the compiler cannot prevent these runtime exceptions. The warning can be ignored if you know what it is for and know what you are doing.
Below: A variable declared as a Collection<String>, on the other hand, cannot refer to any kind of collection. It has to be a collection of Strings. It is strongly typed, and the compiler is correct to see adding an Integer to it as an error.
2) Why does the above snippet not cause a compiler error?
Java is strongly typed, which ensures type safety. The above snippet is not type safe, but it is allowed by Java nonetheless. This is probably for historical reasons: generics were only introduced in Java 1.5, so if the above snippet had caused a compile error, most Java 1.4 code would have broken under the Java 1.5 compiler.
Not every programming language evolves in such a backward compatible manner (PHP for instance). Apparently backward compatibility was valued over type safety when introducing Java 1.5.
