I have the below code snippet and it works fine. Shouldn't it throw a compile-time error? I defined c as an ArrayList that should contain String objects, but I am adding an Integer object. So why does it not throw a compile-time or runtime error?
Collection c = new ArrayList<String>();
c.add(123);
I know the snippet below will throw a compile-time error, but why not the one above? What's the logical difference between these two code snippets?
Collection<String> c = new ArrayList();
c.add(123);
The first code snippet does not result in a compile time error, because at the line
c.add(123)
the compiler inspects the type of c. Since you declared c as a raw Collection, the compiler treats it as such. Since Collection offers a method add(Object), it is perfectly legal to add any object to c, including an Integer. Note, however, that this program will result in a runtime error if you attempt to read the collection's values back as Strings.
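For example, a minimal sketch of that failure (the same snippet, with the value read back):
Collection c = new ArrayList<String>();
c.add(123);                   // compiles: raw Collection exposes add(Object)
for (Object o : c) {
    String s = (String) o;    // ClassCastException: an Integer is not a String
}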
In your second code snippet you provide more information for the compiler to work with. In that snippet it knows that the collection it deals with is a Collection<String>, which can only accept Strings. Thus there is no method add(int) or add(Object), only add(String). This leads to a compile-time error.
why it did not throw compile time error?
Because it's not syntactically or semantically invalid, it's just unwise.
Note that most modern IDEs (e.g. Eclipse) can be configured to warn you about the unparameterised Collection c, and optionally to fail to compile.
In the first example, the collection is "raw". This will usually result in a warning but not an error (depending on your exact set-up). This is primarily so that all the pre-Java 5 legacy code out there can still be compiled.
In the second example, you assign a raw ArrayList to a parameterized Collection<String>. That assignment itself only earns an unchecked warning, but the parameterized type of c then makes c.add(123) a compile-time error, since a Collection<String> only accepts Strings.
1) What is the logical difference?
Above: A Collection can be declared without a generic type. This is called a raw type. The variable can then hold any kind of collection. With a raw-typed collection you might, at runtime, use a collection of Strings as a collection of Integers, causing a runtime exception, so the compiler will usually issue a warning. Since you have not typed the collection in the above example, the compiler cannot prevent these runtime exceptions. The warning can be ignored if you know what it is for and know what you are doing.
Below: But a variable declared as a Collection<String> cannot hold any kind of collection. It has to be a collection of Strings. It is strongly typed, and the compiler is correct to see this as an error.
2) Why does the above snippet not cause a compiler error?
Java is strongly typed, which ensures type safety. The above snippet is not type safe, but Java allows it nonetheless. This is probably for historical reasons: generics were only introduced in Java 1.5, so if the above snippet had caused a compile error, most Java 1.4 code would have broken under the Java 1.5 compiler.
Not every programming language evolves in such a backward compatible manner (PHP for instance). Apparently backward compatibility was valued over type safety when introducing Java 1.5.
Related
I've been working on studying for the OCJA8 Java exam and started reading about Exceptions, especially about ClassCastException. I realized I have some trouble in identifying whether it's a good cast, a ClassCastException or a compilation error with the message "incompatible types".
As far as I understand, the "incompatible types" compilation error results when trying to cast between unrelated classes (for example, from String to Integer: String is neither a subclass nor a superclass of Integer, so they are unrelated). Such a cast does, indeed, result in a compilation error.
Regarding ClassCastException, I'm not sure when it actually happens. Tried reading about it in Boyarsky and Selikoff's OCJA8 book, but still don't have a proper idea of when it happens.
What I know, for sure, is that when I'm trying to cast from a subclass to a superclass, it works. I thought that might happen because the subclass inherits every method/variable of the superclass, so no issues will happen.
I'm still confused about when ClassCastException happens, compared to the "incompatible types" compilation error. Shouldn't this code also result in a runtime exception?
class A {}
class B extends A {}

public class Main {
    public static void main(String[] args) {
        A a = new A();
        B b = a;
    }
}
It doesn't, though; I get a compilation error. It seems I don't know when each one happens, and I can't find it explained anywhere.
The cast operator looks like this: (Type) expression.
It is used for 3 completely unrelated things, and due to the way java works, effectively a 4th and 5th thing, though it's not the cast operation itself that causes those; they're merely a side effect. A real guns-and-grandmas situation. Just like + in java means 2 entirely unrelated things: either numeric addition, or string concatenation.
Hence, you shouldn't ever call it 'casting' unless you mean specifically writing 'parens, type, close parens, expression', which should rarely come up in normal conversation. Instead, call it what the effect of the cast operator actually is, which depends entirely on what you're writing.
The 5 things are:
Primitive conversion. Requires Type to be primitive and expression to also be primitive.
Type coercion. Requires Type to be non-primitive and expression to be non-primitive, and is only about the part that is not in <> (so not the generics part).
Type assertion. Requires Type to be non-primitive and contain generics, and is specifically about the generics part.
Boxing/Unboxing. Java automatically wraps a primitive into its boxed type, or unwraps the value out of a boxed type, as needed, depending on context. Casting is one way to create this context.
Lambda/MethodRef selection. Lambdas/methodrefs are a compiler error unless, from context, the compiler can figure out what functional interface type the lambda/methodref is an implementation for. Casts are one way to establish this context.
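A quick sketch of those last two, where the cast exists only to give the compiler context:
Object boxed = (Object) 5;  // boxing: the cast target makes the compiler box 5 into an Integer
Object task = (Runnable) () -> System.out.println("run"); // lambda selection: the cast names the functional interface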
The space you're currently playing in is the Type Coercion part. Note that neither type coercion nor type assertion does any conversion: a type assertion does nothing at all at runtime, and a type coercion, at runtime, either throws a ClassCastException or does nothing. No conversion ever takes place. This doesn't work:
Number n = 5;
String s = (String) n;
One might think this results in the string "5". That's not how casting works; in fact, this particular cast doesn't even compile ("incompatible types"), since no object can possibly be both a Number and a String.
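What would actually produce "5" is a conversion method, not a cast:
Number n = 5;
String s = String.valueOf(n); // "5": a conversion, not a cast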
What is type coercion
Type coercion casting does 2 completely separate things:
Changes the type of an expression
In java, when you invoke a method, the compiler must figure out which exact method you mean and encodes that in the bytecode. If the compiler can't figure out which one you want, it won't compile. The lookup is based on a combination of the method name as well as the parameter types - specifically, the compile time type of them.
Number n = 5;
foo(n); // prints 'Number', not 'Integer'!
void foo(Number n) { System.out.println("Number variant"); }
void foo(Integer n) { System.out.println("Integer variant"); }
Hence, the type of the expression itself, as the compiler thinks of it, is important for this sort of thing. Casting changes the compile-time type. foo((Integer) n) would print 'Integer variant'.
Check if it's actually true
The second thing type coercion does is generate bytecode that checks the claim. Given:
Number n = getNumber();
Integer i = (Integer) n;
Number getNumber() {
    return new Double(5.5); // a double!
}
Then clearly we can tell: that type cast is not going to work out; n is not, in fact, pointing at an instance of Integer at all. However, at compile time we can't be sure: we'd have to go through the code of getNumber to know, and given the halting problem, it's not possible for arbitrary code to be analysed like this. Even if it were, maybe tomorrow this code changes; signatures are set, but implementations can change.
Thus, the compiler will just let you write this, but will insert code that checks. This is the CHECKCAST bytecode instruction. That instruction does nothing if the cast holds (the value is indeed pointing at an object of the required type), or, if the object it is pointing at isn't, then a ClassCastException is thrown. Which should probably be called TypeCoercionException instead, and the bytecode should probably be called CHECKTYPE.
compiler error 'incompatible types' vs ClassCastEx
A type coercion cast comes in 3 flavours. That 'change the compile time type of the expression' thing is common to all 3. But for the 'check if it's actually true' part, you have 3 options:
It is always true
This seems pointless:
Integer i = 5;
Number n = (Number) i;
And it is - any linting tool worth its salt will point out this cast does absolutely nothing at all. The compiler knows it does nothing (all integers are also numbers, doing a runtime check is useless), and doesn't even generate the CHECKCAST bytecode. However, sometimes you do this solely for the fact that the type changes:
Integer i = 5;
foo((Number) i); // would print 'Number variant', even though it's an Integer.
Point is, this cast, while usually pointless, is technically legal; java just lets it happen and doesn't even generate the CHECKCAST. It cannot possibly throw anything at runtime.
It is always false
Integer i = 5;
Double d = (Double) i;
At compile time the compiler already knows this is never going to work: no type exists that is both an Integer and a Double. Technically null would work, but nevertheless the java spec dictates that the compiler must reject this code and fail with an 'incompatible types' compiler error. There are other ways to make the compiler emit this error message; this is just one of them.
The check may be true or false
In which case the compiler compiles it and adds a CHECKCAST bytecode instruction so that at runtime the type is checked. This could result in a ClassCastException.
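For example, a minimal sketch of the 'may be true or false' case:
Object a = "hello";
Object b = Integer.valueOf(42);
String s1 = (String) a; // CHECKCAST passes: a really points at a String
String s2 = (String) b; // CHECKCAST fails: ClassCastException at runtime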
The other way to get CCEx
Generics are entirely a compile-time affair. The runtime has no idea what they mean. That means that this code:
List<String> list = getListOfStrings();
list.get(0).toLowerCase();
is compiled to:
List list = getListOfStrings();
((String) list.get(0)).toLowerCase();
The compiler injects a cast, and since the erased List's get method returns Object, the check could pass or fail: a CHECKCAST bytecode instruction is generated, which could throw a ClassCastException. This means you can get ClassCastExceptions on lines with no casts, but then it means someone messed up their generics and ignored a compile-time warning. This method would do the job:
public List<String> getListOfStrings() {
    var broken = new ArrayList<Number>();
    broken.add(5);             // not a string
    List raw = broken;         // raw type
    return (List<String>) raw;
}
A B is an A, but an A is not a B.
Below is an excerpt from the Java documentation here
Because of type erasure, List<Number> and List<String> both become List. Consequently, the compiler allows the assignment of the object l, which has a raw type of List, to the object ls.
Also from the same documentation
Consider the following example:
List l = new ArrayList<Number>();
List<String> ls = l; // unchecked warning
l.add(0, new Integer(42)); // another unchecked warning
String s = ls.get(0); // ClassCastException is thrown
During type erasure, the types ArrayList<Number> and List<String> become ArrayList and List, respectively.
Why can't the compiler report this as an error instead of just a warning? Could someone kindly give an example where the warning is needed, i.e. a case that would break (or make certain useful patterns impossible) if the warning became an error?
Java gained type variables in java 1.5. Before java 1.5, they did not exist.
Java also strongly dislikes releasing new versions where 'upgrading' is more involved than 'simply compile your existing code with the new compiler and it all just works exactly like it did before'. Also, code compiled with the javac from java 1.4 runs fine on the java.exe from 1.5.
Why? Well, because otherwise you get a split community, and because most projects use 500 dependencies, that is hugely detrimental. You can't 'upgrade' from java 1.4 to 1.5 until each and every single last one of those 500 deps has upgraded.
Combine these 2 facts and generics makes sense:
Why erasure at all? Because how else could it work? Class files stemming from javac 1.4 cannot possibly have the generics info, given that generics did not exist when javac 1.4 was released!
Why 'raw types'? Because pre-generics code by its nature has them (List<T> would be a compiler error on java 1.4), and by its nature there is no sanity, typevar-wise. You could write this:
List foo = new ArrayList();
foo.add(5);
foo.add("hello");
and it compiles just fine on javac 1.4. Therefore it MUST also compile on javac 1.5. As a compromise, if you attempt to compile that on javac from 1.5, it works and produces a class file that acts identically, but you do get warnings.
java 1.5 is now 20 years old or whatnot. So it all seems weird and archaic now. But you're asking 'why'. This is why.
The following code compiles and runs successfully without any exception
import java.util.ArrayList;

class SuperSample {}

class Sample extends SuperSample {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        try {
            ArrayList<Sample> sList = new ArrayList<Sample>();
            Object o = sList;
            ArrayList<SuperSample> ssList = (ArrayList<SuperSample>) o;
            ssList.add(new SuperSample());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
shouldn't the line ArrayList<SuperSample> ssList = (ArrayList<SuperSample>)o; produce a ClassCastException?
And while the following code produces a compile-time error to prevent heap pollution, shouldn't the code mentioned above have a similar protection at runtime?
ArrayList<Sample> sList = new ArrayList<Sample>();
ArrayList<SuperSample> ssList = (ArrayList<SuperSample>) sList;
EDIT:
If type erasure is the reason behind this, shouldn't there be additional mechanisms to prevent an invalid object from being added to the List? For instance:
String[] iArray = new String[5];
Object[] iObject = iArray;
iObject[0] = 5.5; // throws ArrayStoreException
then why,
ssList.add(new SuperSample());
is not made to throw any Exception?
No, it should not: at run time both lists have the same type, ArrayList. This is called erasure. Generic parameters are not part of the compiled class; they are all erased during compilation. From the JVM's perspective your code is equal to:
public static void main(String[] args) {
    try {
        ArrayList sList = new ArrayList();
        Object o = sList;
        ArrayList ssList = (ArrayList) o;
        ssList.add(new SuperSample());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Basically, generics only simplify development by producing compile-time errors and warnings; they don't affect execution at all.
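You can see this directly: after erasure, differently parameterized lists share a single runtime class. A minimal check:
System.out.println(new ArrayList<String>().getClass()
        == new ArrayList<Integer>().getClass()); // prints true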
EDIT:
Well, the base concept behind this is the Reifiable Type. I'd strongly recommend reading this manual:
A reifiable type is a type whose type information is fully available at runtime. This includes primitives, non-generic types, raw types, and invocations of unbound wildcards.

Non-reifiable types are types where information has been removed at compile-time by type erasure.
To be short: arrays are reifiable and generic collections are not. So when you store something in an array, the type is checked by the JVM, because the array's type is present at runtime. An array represents just a piece of memory, while a collection is an ordinary class which might have any sort of implementation; for example, it could store data in a database or on disk under the hood. If you'd like to go deeper, I suggest reading the Java Generics and Collections book.
In your code example,
class SuperSample { }
class Sample extends SuperSample { }
...
ArrayList<Sample> sList = new ArrayList<Sample>();
Object o = sList;
ArrayList<SuperSample> ssList = (ArrayList<SuperSample>)o;
Shouldn't the last line produce a ClassCastException?
No. That exception is thrown by the JVM when it detects incompatible types being cast at runtime. As others have noted, this is because of erasure of generic types. That is, generic types are known only to the compiler. At the JVM level, the variables are all of type ArrayList (the generics having been erased) so there is no ClassCastException at runtime.
As an aside, instead of assigning to an intermediate local variable of type Object, a more concise way to do this assignment is to cast through raw:
ArrayList<SuperSample> ssList = (ArrayList)sList;
where a "raw" type is the erased version of a generic type.
Shouldn't there be additional mechanisms to prevent an invalid object from being added to the List?
Yes, there are. The first mechanism is compile-time checking. In your own answer you found the right location in the Java Language Specification where it describes heap pollution, which is the term for an invalid object occurring in the list. The money quote from that section, way down at the bottom, is
If no operation that requires a compile-time unchecked warning to be issued takes place, and no unsafe aliasing occurs of array variables with non-reifiable element types, then heap pollution cannot occur.
So the mechanism you're looking for is in the compiler, and the compiler notifies you of this via compilation warnings. However, you've disabled this mechanism by using the @SuppressWarnings annotation. If you were to remove this annotation, you'd get a compiler warning at the offending line. If you absolutely want to prevent heap pollution, don't use @SuppressWarnings, and add the options -Xlint:unchecked -Werror to your javac command line.
The second mechanism is runtime checking, which requires use of one of the checked wrappers. Replace the initialization of sList with the following:
List<Sample> sList = Collections.checkedList(new ArrayList<Sample>(), Sample.class);
This will cause a ClassCastException to be thrown at the point where a SuperSample is added to the list.
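Applied to the question's code (same Sample and SuperSample classes), a minimal sketch:
List<Sample> sList = Collections.checkedList(new ArrayList<Sample>(), Sample.class);
Object o = sList;
List<SuperSample> ssList = (List<SuperSample>) o; // still only an unchecked warning
ssList.add(new SuperSample());                    // ClassCastException thrown here, at add time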
The key to answering your question here is type erasure in Java.
You get only a warning at compile time in your first case, rather than the error of the second, because the indirection through an Object prevents the compiler from seeing that the cast is between incompatible parameterized types. (I'm guessing that this warning is raised when casting a parameterized type to another one, which is not what happens in your second case; if anyone can confirm that, I would be glad to hear about it.)
And your code runs because, in the end, sList, ssList and o are all ArrayList.
I think this can't produce a ClassCastException because of backward-compatibility concerns in Java.
Generic information is not included in the bytecode (the compiler gets rid of it during compilation).
Imagine a scenario where your project uses some old legacy code (an old library written in Java 1.4) and you pass a generic List to some method in that legacy code.
You can do this.
In the time before generics, legacy code was allowed to put anything at all (except primitives) into a collection.
So this legacy code can't get a ClassCastException even if it tries to put a String into a List<Integer>.
From the legacy code perspective it is just List.
So this strange behaviour is a consequence of type erasure, done to allow backward compatibility in Java.
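A small sketch of that scenario (Legacy and legacyAdd are illustrative names standing in for the old library):
import java.util.*;

class Legacy {
    // Pre-generics library code sees only the raw List
    static void legacyAdd(List list) {
        list.add("hello");           // no runtime check, thanks to erasure
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<Integer>();
        legacyAdd(ints);             // compiles with an unchecked warning, runs fine
        Integer i = ints.get(0);     // the ClassCastException surfaces here instead
    }
}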
EDIT:
You get an ArrayStoreException for arrays because at runtime the JVM knows the type of an array, and you don't get any exception for collections because of type erasure and that same backward-compatibility issue: the JVM doesn't know the type of a collection at runtime.
You can read about this topic in the "SCJP Sun® Certified Programmer for Java™ 6 Study Guide" book, chapter 7, "Generics and Collections".
From the JLS (4.12.2)
It is possible that a variable of a parameterized type refers to an object that is not of that parameterized type. This situation is known as heap pollution. This situation can only occur if the program performed some operation that would give rise to an unchecked warning at compile-time.
For example, the code:
List l = new ArrayList<Number>();
List<String> ls = l; // unchecked warning
gives rise to an unchecked warning, because it is not possible to ascertain, either at compile-time (within the limits of the compile-time type checking rules) or at run-time, whether the variable l does indeed refer to a List<String>.

If the code above is executed, heap pollution arises, as the variable ls, declared to be a List<String>, refers to a value that is not in fact a List<String>.

The problem cannot be identified at run-time because type variables are not reified, and thus instances do not carry any information at run-time regarding the actual type parameters used to create them.
I have the following code
String innerText = null;
innerText = this.getException(detail.getChildElements());
causing this warning
Type safety: The expression of type Iterator needs unchecked conversion to conform to Iterator<OMElementImpl>
The referenced method is
private String getException(Iterator<OMElementImpl> iterator) { ... }
The other method, getChildElements(), is in a JAR file that I can't touch. There are no other warnings or errors.
From Googling, it seems like the usual way to get rid of this sort of warning is
#SuppressWarnings("unchecked")
String innerText = this.getException(detail.getChildElements());
because the compiler can't guarantee safety ahead of time, but I'd prefer to avoid using SuppressWarnings if possible... is there a better way?
EDIT: getChildElements() is documented here
You can suppress the warning, but if you do so, you are relying 100% on the third-party library, and discarding the assurance of Java generic types: that any ClassCastException raised at runtime will occur right at an explicit cast.
Our coding standard is to suppress warnings only when we can prove the code is type safe; we treat any calls outside the package as a black box and don't rely on any comments about the contents of a raw collection. So suppression is extremely rare. Usually, if the code is type safe, the compiler can determine it, although sometimes we have to give it some help. The few exceptions involve arrays of generic type that don't "escape" from a private context.
If you don't fully trust the third-party library, create a new collection and add the contents after casting them to OMElementImpl. That way, if there is a bug in the library, you find out about it right away, rather than having some code far distant in time and space blow up with a ClassCastException.
For example:
Iterator<?> tmp = detail.getChildElements();
Collection<OMElementImpl> elements = new ArrayList<OMElementImpl>();
while (tmp.hasNext())
    elements.add((OMElementImpl) tmp.next()); /* Any type errors found here! */
String innerText = getException(elements.iterator());
Remember, generics were not invented to make code look pretty and require less typing! The promise of generics is this: Your code is guaranteed to be type-safe if it compiles without warnings. That is it. When warnings are ignored or suppressed, code without a cast operator can mysteriously raise a ClassCastException.
Update: In this case especially, it seems extremely risky to assume that the result of getChildElements is an iterator of OMElementImpl. At best, you might assume the elements are OMElement, and even that is only implied by the class, not by anything on the method in particular.
Is there any overhead when we cast objects of one type to another? Or the compiler just resolves everything and there is no cost at run time?
Is this a general thing, or are there different cases?
For example, suppose we have an array of Object[], where each element might have a different type. But we always know for sure that, say, element 0 is a Double, element 1 is a String. (I know this is a wrong design, but let's just assume I had to do this.)
Is Java's type information still kept around at run time? Or is everything forgotten after compilation, so that if we do (Double)elements[0], we just follow the pointer and interpret those 8 bytes as a Double, whatever they happen to be?
I'm very unclear about how types are handled in Java. If you have any recommendations for books or articles, thanks too.
There are 2 types of casting:
Implicit casting, when you cast from a type to a wider type, which is done automatically and there is no overhead:
String s = "Cast";
Object o = s; // implicit casting
Explicit casting, when you go from a wider type to a narrower one. In this case, you must use an explicit cast like this:
Object o = someObject;
String s = (String) o; // explicit casting
In this second case there is runtime overhead, because the two types must be checked; if the cast is not feasible, the JVM must throw a ClassCastException.
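If the cast might fail, a common pattern (a small sketch, reusing someObject from above) is to guard it with instanceof, which makes the runtime check explicit:
Object o = someObject;
if (o instanceof String) {
    String s = (String) o; // cannot throw: the instanceof check guarantees it
}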
Taken from JavaWorld: The cost of casting
Casting is used to convert between types -- between reference types in particular, for the type of casting operation in which we're interested here.

Upcast operations (also called widening conversions in the Java Language Specification) convert a subclass reference to an ancestor class reference. This casting operation is normally automatic, since it's always safe and can be implemented directly by the compiler.

Downcast operations (also called narrowing conversions in the Java Language Specification) convert an ancestor class reference to a subclass reference. This casting operation creates execution overhead, since Java requires that the cast be checked at runtime to make sure that it's valid. If the referenced object is not an instance of either the target type for the cast or a subclass of that type, the attempted cast is not permitted and must throw a java.lang.ClassCastException.
For a reasonable implementation of Java:
Each object has a header containing, amongst other things, a pointer to the runtime type (for instance Double or String, but never CharSequence or AbstractList). Assuming the runtime compiler (generally HotSpot in Sun's case) cannot determine the type statically, some checking needs to be performed by the generated machine code.
First that pointer to the runtime type needs to be read. This is necessary for calling a virtual method in a similar situation anyway.
For casting to a class type, it is known exactly how many superclasses there are until you hit java.lang.Object, so the type can be read at a constant offset from the type pointer (actually the first eight in HotSpot). Again this is analogous to reading a method pointer for a virtual method.
Then the read value just needs to be compared against the expected static type of the cast. Depending upon the instruction set architecture, another instruction will need to branch (or fault) on a mismatch. ISAs such as 32-bit ARM have conditional instructions and may be able to have the sad path pass through the happy path.
Interfaces are more difficult due to multiple inheritance of interfaces. Generally the last two casts to interfaces are cached in the runtime type. In the very early days (over a decade ago), interfaces were a bit slow, but that is no longer relevant.
Hopefully you can see that this sort of thing is largely irrelevant to performance. Your source code is more important. In terms of performance, the biggest hit in your scenario is liable to be cache misses from chasing object pointers all over the place (the type information will of course be common).
For example, suppose we have an array of Object[], where each element might have a different type. But we always know for sure that, say, element 0 is a Double, element 1 is a String. (I know this is a wrong design, but let's just assume I had to do this.)
The compiler does not note the types of the individual elements of an array. It simply checks that the type of each element expression is assignable to the array element type.
Is Java's type information still kept around at run time? Or everything is forgotten after compilation, and if we do (Double)elements[0], we'll just follow the pointer and interpret those 8 bytes as a double, whatever that is?
Some information is kept around at run time, but not the static types of the individual elements. You can tell this from looking at the class file format.
It is theoretically possible that the JIT compiler could use "escape analysis" to eliminate unnecessary type checks in some assignments. However, doing this to the degree you are suggesting would be beyond the bounds of realistic optimization. The payoff of analysing the types of individual elements would be too small.
Besides, people should not write application code like that anyway.
The byte code instruction for performing casting at runtime is called checkcast. You can disassemble Java code using javap to see what instructions are generated.
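For example, given this small class (the class name is illustrative, and constant-pool indices vary):
public class Cast {
    static String f(Object o) {
        return (String) o; // this cast compiles to a checkcast instruction
    }
}
Running javap -c Cast then shows, in the body of f, something like checkcast #7 // class java/lang/String just before the areturn.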
For arrays, Java keeps type information at runtime. Most of the time, the compiler will catch type errors for you, but there are cases where you will run into an ArrayStoreException when trying to store an object in an array, but the type does not match (and the compiler didn't catch it). The Java language spec gives the following example:
class Point { int x, y; }
class ColoredPoint extends Point { int color; }

class Test {
    public static void main(String[] args) {
        ColoredPoint[] cpa = new ColoredPoint[10];
        Point[] pa = cpa;
        System.out.println(pa[1] == null);
        try {
            pa[0] = new Point();
        } catch (ArrayStoreException e) {
            System.out.println(e);
        }
    }
}
Point[] pa = cpa is valid since ColoredPoint is a subclass of Point, but pa[0] = new Point() is not valid.
This is opposed to generic types, where there is no type information kept at runtime. The compiler inserts checkcast instructions where necessary.
This difference in typing between generic types and arrays often makes it unwise to mix the two.
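This is also why Java rejects creating arrays of concrete parameterized types: the runtime store check could only ever see the erased element type. A short sketch:
// List<String>[] lsa = new List<String>[10]; // does not compile: generic array creation
List<?>[] ok = new List<?>[10];               // unbounded wildcard is reifiable, so this is allowed
ok[0] = new ArrayList<Integer>();             // the store check can only verify "is a List"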
In theory, there is overhead introduced.
However, modern JVMs are smart.
Each implementation is different, but it is not unreasonable to assume that an implementation's JIT could optimize away cast checks when it can guarantee they will never fail.
As for which specific JVMs offer this, I couldn't tell you. I must admit I'd like to know the specifics of JIT optimization myself, but these are for JVM engineers to worry about.
The moral of the story is to write understandable code first. If you're experiencing slowdowns, profile and identify your problem.
Odds are good that it won't be due to casting.
Never sacrifice clean, safe code in an attempt to optimize it UNTIL YOU KNOW YOU NEED TO.