Does Java casting introduce overhead? Why? - java

Is there any overhead when we cast objects of one type to another? Or does the compiler just resolve everything, so there is no cost at run time?
Is this a general thing, or are there different cases?
For example, suppose we have an array of type Object[], where each element might have a different type. But we always know for sure that, say, element 0 is a Double and element 1 is a String. (I know this is bad design, but let's assume I had to do it.)
Is Java's type information still kept around at run time? Or is everything forgotten after compilation, and if we do (Double)elements[0], we'll just follow the pointer and interpret those 8 bytes as a double, whatever that is?
I'm very unclear about how types are handled in Java. If you have any recommendations for books or articles, thanks too.

There are 2 types of casting:
Implicit casting, when you cast from a type to a wider type; this is done automatically and there is no overhead:
String s = "Cast";
Object o = s; // implicit casting
Explicit casting, when you go from a wider type to a narrower one. In this case, you must use an explicit cast like this:
Object o = someObject;
String s = (String) o; // explicit casting
In this second case, there is overhead at runtime, because the two types must be checked and, if the cast is not feasible, the JVM must throw a ClassCastException.
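For instance, a minimal sketch of a downcast that fails at runtime (hypothetical values):

Object o = Integer.valueOf(42);
String s = (String) o; // compiles, but at runtime throws
                       // java.lang.ClassCastException: Integer cannot be cast to String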
Taken from JavaWorld: The cost of casting

Casting is used to convert between types -- between reference types in particular, for the type of casting operation in which we're interested here.

Upcast operations (also called widening conversions in the Java Language Specification) convert a subclass reference to an ancestor class reference. This casting operation is normally automatic, since it's always safe and can be implemented directly by the compiler.

Downcast operations (also called narrowing conversions in the Java Language Specification) convert an ancestor class reference to a subclass reference. This casting operation creates execution overhead, since Java requires that the cast be checked at runtime to make sure that it's valid. If the referenced object is not an instance of either the target type for the cast or a subclass of that type, the attempted cast is not permitted and must throw a java.lang.ClassCastException.

For a reasonable implementation of Java:
Each object has a header containing, amongst other things, a pointer to the runtime type (for instance Double or String, but it could never be CharSequence or AbstractList). Assuming the runtime compiler (generally HotSpot in Sun's case) cannot determine the type statically, some checking needs to be performed by the generated machine code.
First, that pointer to the runtime type needs to be read. This is necessary for calling a virtual method in a similar situation anyway.
For casting to a class type, it is known exactly how many superclasses there are until you hit java.lang.Object, so the type can be read at a constant offset from the type pointer (this covers the first eight levels of the hierarchy in HotSpot). Again, this is analogous to reading a method pointer for a virtual method.
Then the read value just needs a comparison against the expected static type of the cast. Depending on the instruction set architecture, another instruction will need to branch (or fault) if the comparison fails. ISAs such as 32-bit ARM have conditional instructions and may be able to have the sad path pass through the happy path.
Interfaces are more difficult due to multiple inheritance of interfaces. Generally, the last two casts to interfaces are cached in the runtime type. In the very early days (over a decade ago), interfaces were a bit slow, but that is no longer relevant.
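As a conceptual model only - this is not actual HotSpot code, just a Java-level sketch of the check the generated machine code performs for a cast to a class type such as (Double):

static Object checkCastToDouble(Object ref) {
    if (ref != null) { // null always passes a reference cast
        Class<?> runtimeType = ref.getClass(); // read the type pointer from the object header
        if (!Double.class.isAssignableFrom(runtimeType)) { // compare against the expected type
            throw new ClassCastException(runtimeType.getName());
        }
    }
    return ref;
}

In the generated machine code the isAssignableFrom step collapses to the single load-and-compare described above.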
Hopefully you can see that this sort of thing is largely irrelevant to performance. Your source code is more important. In terms of performance, the biggest hit in your scenario is liable to be cache misses from chasing object pointers all over the place (the type information will of course be common).

For example, suppose we have an array of type Object[], where each element might have a different type. But we always know for sure that, say, element 0 is a Double and element 1 is a String. (I know this is bad design, but let's assume I had to do it.)
The compiler does not note the types of the individual elements of an array. It simply checks that the type of each element expression is assignable to the array element type.
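A short sketch of what that means in the Object[] scenario (hypothetical variable names):

Object[] elements = new Object[2];
elements[0] = Double.valueOf(1.0); // OK: Double is assignable to Object
elements[1] = "text";              // OK: String is assignable to Object
Double d = (Double) elements[0];   // the compiler can't prove this, so a runtime check is emitted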
Is Java's type information still kept around at run time? Or is everything forgotten after compilation, and if we do (Double)elements[0], we'll just follow the pointer and interpret those 8 bytes as a double, whatever that is?
Some information is kept around at run time, but not the static types of the individual elements. You can tell this from looking at the class file format.
It is theoretically possible that the JIT compiler could use "escape analysis" to eliminate unnecessary type checks in some assignments. However, doing this to the degree you are suggesting would be beyond the bounds of realistic optimization. The payoff of analysing the types of individual elements would be too small.
Besides, people should not write application code like that anyway.

The byte code instruction for performing casting at runtime is called checkcast. You can disassemble Java code using javap to see what instructions are generated.
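For example, compiling a trivial cast and disassembling it might look like this (the class name is made up for illustration, and exact javap output varies by compiler version):

class Cast {
    static String f(Object o) {
        return (String) o;
    }
}

// javap -c Cast prints something along these lines:
//   static java.lang.String f(java.lang.Object);
//     Code:
//        0: aload_0
//        1: checkcast     #7    // class java/lang/String
//        4: areturn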
For arrays, Java keeps type information at runtime. Most of the time, the compiler will catch type errors for you, but there are cases where you will run into an ArrayStoreException when trying to store an object in an array whose element type does not match (and the compiler didn't catch it). The Java language spec gives the following example:
class Point { int x, y; }
class ColoredPoint extends Point { int color; }

class Test {
    public static void main(String[] args) {
        ColoredPoint[] cpa = new ColoredPoint[10];
        Point[] pa = cpa;
        System.out.println(pa[1] == null);
        try {
            pa[0] = new Point();
        } catch (ArrayStoreException e) {
            System.out.println(e);
        }
    }
}
Point[] pa = cpa is valid since ColoredPoint is a subclass of Point, but pa[0] = new Point() compiles and then fails at runtime with an ArrayStoreException, because the runtime type of the array is ColoredPoint[].
This is opposed to generic types, where there is no type information kept at runtime. The compiler inserts checkcast instructions where necessary.
This difference in typing between generic types and arrays often makes it unsuitable to mix the two.
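A short sketch of the contrast, reusing the Point/ColoredPoint classes from above (the array assignment compiles and each store is checked at runtime; the equivalent generic assignment is rejected at compile time because generics are invariant):

Point[] pa = new ColoredPoint[10];                 // compiles; each store checked at runtime
// List<Point> pl = new ArrayList<ColoredPoint>(); // does not compile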

In theory, there is overhead introduced.
However, modern JVMs are smart.
Each implementation is different, but it is not unreasonable to assume that there exist implementations whose JIT optimizes away casting checks when it can guarantee that there will never be a conflict.
As for which specific JVMs do this, I couldn't tell you. I must admit I'd like to know the specifics of JIT optimization myself, but these are details for JVM engineers to worry about.
The moral of the story is to write understandable code first. If you're experiencing slowdowns, profile and identify your problem.
Odds are good that it won't be due to casting.
Never sacrifice clean, safe code in an attempt to optimize it UNTIL YOU KNOW YOU NEED TO.

Related

ClassCastException vs "Incompatible types" in Java

I've been working on studying for the OCJA8 Java exam and started reading about Exceptions, especially about ClassCastException. I realized I have some trouble in identifying whether it's a good cast, a ClassCastException or a compilation error with the message "incompatible types".
As far as I understood, an "incompatible types" compilation error results when trying to cast from a class to an unrelated class (for example, from String to Integer: String is neither a subclass nor a superclass of Integer, so they are unrelated). Such a cast does, indeed, result in a compilation error.
Regarding ClassCastException, I'm not sure when it actually happens. I tried reading about it in Boyarsky and Selikoff's OCJA8 book, but still don't have a proper idea of when it happens.
What I know for sure is that when I cast from a subclass to a superclass, it works. I thought that might be because the subclass inherits every method/variable of the superclass, so no issues will arise.
I'm still confused about when ClassCastException happens, compared to the "incompatible types" compilation error. Shouldn't this code also result in a runtime exception?
class A {}
class B extends A {}

public class Main {
    public static void main(String[] args) {
        A a = new A();
        B b = a;
    }
}
It doesn't, though. I receive a compilation error. It seems I don't know what happens when, and I can't find it explained anywhere.
The cast operator looks like this: (Type) expression.
It is used for 3 completely unrelated things, and due to the way Java works, effectively a 4th and 5th thing, though it's not the cast operation itself that causes those; they're merely a side effect. A real guns-and-grandmas situation. Just like + in Java means 2 entirely unrelated things: either numeric addition or string concatenation.
Hence, you shouldn't ever call it 'casting' unless you specifically mean writing 'parens, type, close parens, expression', which should rarely come up in normal conversation. Instead, call it what the effect of the cast operator actually is, which depends entirely on what you're writing.
The 5 things (each sketched in code after this list) are:
Primitive conversion. Requires Type to be primitive and expression to also be primitive.
Type coercion. Requires Type to be non-primitive and expression to be non-primitive, and is only about the part that is not in <> (so not the generics part).
Type assertion. Requires Type to be non-primitive and contain generics, and is specifically about the generics part.
Boxing/Unboxing. Java automatically wraps a primitive into its boxed type, or unwraps the value out of a boxed type, as needed, depending on context. Casting is one way to create this context.
Lambda/MethodRef selection. Lambdas/methodrefs are a compiler error unless, from context, the compiler can figure out what functional interface type the lambda/methodref is an implementation for. Casts are one way to establish this context.
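A minimal sketch of all five uses in one place (the class and variable names are made up; the unchecked cast is there deliberately to show a type assertion):

import java.util.ArrayList;
import java.util.List;

class CastKinds {
    @SuppressWarnings("unchecked")
    static void demo() {
        List raw = new ArrayList<String>();    // raw list, used below
        double d = (double) 42;                // 1: primitive conversion (int -> double)
        CharSequence cs = (CharSequence) "hi"; // 2: type coercion (runtime-checked)
        List<String> ls = (List<String>) raw;  // 3: type assertion (generics part only; unchecked)
        int x = (int) Integer.valueOf(5);      // 4: unboxing triggered by the cast context
        Runnable r = (Runnable) () -> {};      // 5: lambda target-type selection
    }
}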
The space you're currently playing in is the Type Coercion part. Note that neither type coercion nor type assertion does any conversion. They do nothing at all at runtime (type assertion), or mostly nothing at all (type coercion, at runtime, either throws ClassCastException or does nothing). No conversion ever takes place. This doesn't work:
Number n = 5;
String s = (String) n;
One might think this results in the string "5". That's not how casting works.
What is type coercion
Type coercion casting does 2 completely separate things:
Changes the type of an expression
In Java, when you invoke a method, the compiler must figure out which exact method you mean and encode that into the bytecode. If the compiler can't figure out which one you want, it won't compile. The lookup is based on a combination of the method name and the parameter types - specifically, their compile-time types.
Number n = 5;
foo(n); // prints 'Number variant', not 'Integer variant'!
void foo(Number n) { System.out.println("Number variant"); }
void foo(Integer n) { System.out.println("Integer variant"); }
Hence, the type of the expression itself, as the compiler thinks of it, is important for this sort of thing. Casting changes the compile-time type. foo((Integer) n) would print 'Integer variant'.
Check if it's actually true
The second thing type coercion does, is generate bytecode that checks the claim. Given:
Number n = getNumber();
Integer i = (Integer) n;
Number getNumber() {
    return new Double(5.5); // a double!
}
Then clearly we can tell: that type cast is not going to work out; n is not, in fact, pointing at an instance of Integer at all. However, at compile time we can't be sure: we'd have to go through the code of getNumber to know, and given the halting problem, arbitrary code can't be analysed like this. Even if it could be, maybe tomorrow this code changes - signatures are set, but implementations can change.
Thus, the compiler will just let you write this, but will insert code that checks it. This is the CHECKCAST bytecode instruction. That instruction does nothing if the cast holds (the value is indeed pointing at an object of the required type); if it isn't, a ClassCastException is thrown. Which should probably have been called TypeCoercionException instead, and the bytecode CHECKTYPE.
compiler error 'incompatible types' vs ClassCastEx
A type coercion cast comes in 3 flavours. The 'change the compile-time type of the expression' part is common to all 3. But for the 'check if it's actually true' part, you have 3 options:
It is always true
This seems pointless:
Integer i = 5;
Number n = (Number) i;
And it is - any linting tool worth its salt will point out that this cast does absolutely nothing at all. The compiler knows it does nothing (all Integers are also Numbers, so a runtime check is useless) and doesn't even generate the CHECKCAST bytecode. However, sometimes you do this solely because the type changes:
Integer i = 5;
foo((Number) i); // would print 'Number variant', even though it's an Integer.
Point is, this cast, while usually pointless, is technically legal; Java just lets it happen and doesn't even generate the CHECKCAST. It cannot possibly throw anything at runtime.
It is always false
Integer i = 5;
Double d = (Double) i;
At compile time the compiler already knows this is never going to work. No type exists that is both Integer and Double. (Technically, null would work, but nevertheless the Java spec dictates that the compiler must reject this code and fail with an 'incompatible types' compiler error.) There are other ways to make the compiler emit this error message; this is just one of them.
The check may be true or false
In which case the compiler compiles it and adds a CHECKCAST bytecode instruction so that at runtime the type is checked. This could result in a ClassCastException.
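A sketch of such a case (assume this sits inside main(String[] args), so the branch taken is unknowable at compile time):

Object o = (args.length > 0) ? "a string" : Integer.valueOf(1);
String s = (String) o; // CHECKCAST emitted; throws ClassCastException on the Integer branch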
The other way to get CCEx
Generics are entirely a compile-time affair. The runtime has no idea what they mean. That means that this code:
List<String> list = getListOfStrings();
list.get(0).toLowerCase();
is compiled to:
List list = getListOfStrings();
((String) list.get(0)).toLowerCase();
The compiler injects a cast (and since the erased List's get method returns Object, the check could pass or fail, so a CHECKCAST bytecode instruction is generated, which could throw ClassCastEx). This means you can get ClassCastExceptions on lines with no casts in sight, but it does mean someone messed up their generics and ignored a compile-time warning. This method would do the job:
public List<String> getListOfStrings() {
    var broken = new ArrayList<Number>();
    broken.add(5); // not a string
    List raw = broken; // raw type
    return (List<String>) raw;
}
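And to see where the exception actually surfaces when calling that broken method (a sketch):

List<String> list = getListOfStrings(); // no exception here
String s = list.get(0);                 // ClassCastException here - a line with no visible cast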
A B is an A, but an A is not necessarily a B.

Why are new java.util.Arrays methods in Java 8 not overloaded for all the primitive types?

I'm reviewing the API changes for Java 8 and I noticed that the new methods in java.util.Arrays are not overloaded for all primitives. The methods I noticed are:
parallelSetAll
parallelPrefix
spliterator
stream
Currently these new methods only handle int, long, and double primitives.
int, long, and double are probably the most widely used primitives so it makes sense that if they had to limit the API that they would choose those three, but why did they have to limit the API?
To address the questions as a whole, and not just this particular scenario, I think we all want to know...
Why There's Interface Pollution in Java 8
For instance, in a language like C#, there is a set of predefined function types accepting any number of arguments with an optional return type (Func and Action, each going up to 16 parameters of different types T1, T2, T3, ..., T16), but in JDK 8 what we have is a set of different functional interfaces, with different names and different method names, whose abstract methods represent a subset of well-known function arities (i.e. nullary, unary, binary, ternary, etc.). And then we have an explosion of cases dealing with primitive types, and there are even other scenarios causing an explosion of yet more functional interfaces.
The Type Erasure Issue
So, in a way, both languages suffer from some form of interface pollution (or delegate pollution in C#). The only difference is that in C# they all have the same name. In Java, unfortunately, due to type erasure, there is no difference between Function<T1,T2> and Function<T1,T2,T3> or Function<T1,T2,T3,...Tn>, so evidently, we couldn't simply name them all the same way and we had to come up with creative names for all possible types of function combinations. For further reference on this, please refer to How we got the generics we have by Brian Goetz.
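A quick sketch of why erasure forces the different names (hypothetical declarations; the second one fails to compile because, with the generics stripped, both types would be the same type Function):

interface Function<T, R> { R apply(T t); }
// interface Function<T, U, R> { R apply(T t, U u); } // error: duplicate class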
Don't think the expert group did not struggle with this problem. In the words of Brian Goetz in the lambda mailing list:
[...] As a single example, let's take function types. The lambda strawman offered at devoxx had function types. I insisted we remove them, and this made me unpopular. But my objection to function types was not that I don't like function types -- I love function types -- but that function types fought badly with an existing aspect of the Java type system, erasure. Erased function types are the worst of both worlds. So we removed this from the design.

But I am unwilling to say "Java never will have function types" (though I recognize that Java may never have function types.) I believe that in order to get to function types, we have to first deal with erasure. That may, or may not be possible. But in a world of reified structural types, function types start to make a lot more sense [...]
An advantage of this approach is that we can define our own interface types with methods accepting as many arguments as we would like, and we could use them to create lambda expressions and method references as we see fit. In other words, we have the power to pollute the world with yet even more new functional interfaces. Also, we can create lambda expressions even for interfaces in earlier versions of the JDK or for earlier versions of our own APIs that defined SAM types like these. And so now we have the power to use Runnable and Callable as functional interfaces.
However, these interfaces become more difficult to memorize since they all have different names and methods.
Still, I am one of those wondering why they didn't solve the problem as in Scala, defining interfaces like Function0, Function1, Function2, ..., FunctionN. Perhaps, the only argument I can come up with against that is that they wanted to maximize the possibilities of defining lambda expressions for interfaces in earlier versions of the APIs as mentioned before.
Lack of Value Types Issue
So, evidently, type erasure is one driving force here. But if you are one of those wondering why we also need all these additional functional interfaces with similar names and method signatures whose only difference is the use of a primitive type, then let me remind you that in Java we also lack value types like those in a language like C#. This means that the generic types used in our generic classes can only be reference types, not primitive types.
In other words, we can't do this:
List<int> numbers = asList(1,2,3,4,5);
But we can indeed do this:
List<Integer> numbers = asList(1,2,3,4,5);
The second example, though, incurs the cost of boxing and unboxing the wrapped objects back and forth from/to primitive types. This can become really expensive in operations dealing with collections of primitive values. So, the expert group decided to create this explosion of interfaces to deal with the different scenarios. To make things "less worse" they decided to deal only with three basic types: int, long and double.
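As an illustration of the resulting primitive specializations (a minimal sketch; the boxed version allocates an Integer per element, the primitive one does not):

import java.util.stream.IntStream;
import java.util.stream.Stream;

class BoxingCost {
    public static void main(String[] args) {
        int boxedSum = Stream.of(1, 2, 3, 4, 5)           // Stream<Integer>: boxed elements
                             .mapToInt(Integer::intValue) // unbox to reach the int domain
                             .sum();
        int primitiveSum = IntStream.of(1, 2, 3, 4, 5).sum(); // no boxing anywhere
        System.out.println(boxedSum + " " + primitiveSum);
    }
}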
Quoting the words of Brian Goetz in the lambda mailing list:
[...] More generally: the philosophy behind having specialized primitive streams (e.g., IntStream) is fraught with nasty tradeoffs. On the one hand, it's lots of ugly code duplication, interface pollution, etc. On the other hand, any kind of arithmetic on boxed ops sucks, and having no story for reducing over ints would be terrible. So we're in a tough corner, and we're trying to not make it worse.

Trick #1 for not making it worse is: we're not doing all eight primitive types. We're doing int, long, and double; all the others could be simulated by these. Arguably we could get rid of int too, but we don't think most Java developers are ready for that. Yes, there will be calls for Character, and the answer is "stick it in an int." (Each specialization is projected to add ~100K to the JRE footprint.)

Trick #2 is: we're using primitive streams to expose things that are best done in the primitive domain (sorting, reduction) but not trying to duplicate everything you can do in the boxed domain. For example, there's no IntStream.into(), as Aleksey points out. (If there were, the next question(s) would be "Where is IntCollection? IntArrayList? IntConcurrentSkipListMap?") The intention is many streams may start as reference streams and end up as primitive streams, but not vice versa. That's OK, and that reduces the number of conversions needed (e.g., no overload of map for int -> T, no specialization of Function for int -> T, etc.) [...]
We can see that this was a difficult decision for the expert group. I think few would agree that this is elegant, but most of us would most likely agree it was necessary.
For further reference on the subject you may want to read The State of Value Types by John Rose, Brian Goetz, and Guy Steele.
The Checked Exceptions Issue
There was a third driving force that could have made things even worse: the fact that Java supports two types of exceptions, checked and unchecked. The compiler requires that we handle or explicitly declare checked exceptions, but it requires nothing for unchecked ones. This creates an interesting problem, because the method signatures of most of the functional interfaces do not declare any thrown exceptions. So, for instance, this is not possible:
Writer out = new StringWriter();
Consumer<String> printer = s -> out.write(s); //oops! compiler error
It cannot be done because the write operation throws a checked exception (i.e. IOException), but the signature of the Consumer method does not declare that it throws any exception at all. So, the only solution to this problem would have been to create even more interfaces, some declaring exceptions and some not (or to come up with yet another mechanism at the language level for exception transparency). Again, to make things "less worse", the expert group decided to do nothing in this case.
In the words of Brian Goetz in the lambda mailing list:
[...] Yes, you'd have to provide your own exceptional SAMs. But then lambda conversion would work fine with them.

The EG discussed additional language and library support for this problem, and in the end felt that this was a bad cost/benefit tradeoff.

Library-based solutions cause a 2x explosion in SAM types (exceptional vs not), which interact badly with existing combinatorial explosions for primitive specialization.

The available language-based solutions were losers from a complexity/value tradeoff. Though there are some alternative solutions we are going to continue to explore -- though clearly not for 8 and probably not for 9 either.

In the meantime, you have the tools to do what you want. I get that you prefer we provide that last mile for you (and, secondarily, your request is really a thinly-veiled request for "why don't you just give up on checked exceptions already"), but I think the current state lets you get your job done. [...]
So, it's up to us, the developers, to craft yet more interface explosions to deal with these on a case-by-case basis:
interface IOConsumer<T> {
    void accept(T t) throws IOException;
}

static <T> Consumer<T> exceptionWrappingBlock(IOConsumer<T> b) {
    return e -> {
        try { b.accept(e); }
        catch (Exception ex) { throw new RuntimeException(ex); }
    };
}
In order to do:
Writer out = new StringWriter();
Consumer<String> printer = exceptionWrappingBlock(s -> out.write(s));
Probably, in the future, when we get support for value types and reification in Java, we will be able to get rid of (or at least no longer need to use) some of these multiple interfaces.
In summary, we can see that the expert group struggled with several design issues. The need, requirement or constraint to keep backward compatibility made things difficult; on top of that, there were other important constraints like the lack of value types, type erasure and checked exceptions. If Java had the first and lacked the other two, the design of JDK 8 would probably have been different. So, we all must understand that these were difficult problems with lots of tradeoffs, and the EG had to draw a line somewhere and make decisions.

Performance of Object Typecasting

How costly is Object Typecasting in terms of performance?
Should I try to avoid Typecasting when possible?
It is cheap enough that it falls into the category of premature optimization. Don't waste time even thinking or asking questions about it unless you have profiled your application and determined that it's a problem, and most importantly: don't compromise your design to avoid it.
JavaWorld: The cost of casting

Casting is used to convert between types -- between reference types in particular, for the type of casting operation in which we're interested here.

Upcast operations (also called widening conversions in the Java Language Specification) convert a subclass reference to an ancestor class reference. This casting operation is normally automatic, since it's always safe and can be implemented directly by the compiler.

Downcast operations (also called narrowing conversions in the Java Language Specification) convert an ancestor class reference to a subclass reference. This casting operation creates execution overhead, since Java requires that the cast be checked at runtime to make sure that it's valid. If the referenced object is not an instance of either the target type for the cast or a subclass of that type, the attempted cast is not permitted and must throw a java.lang.ClassCastException.
It depends on what you mean by typecasting. There is "upcasting", which costs you nothing, and there is "downcasting", which costs you a lot. The answer to the second question also begins with "it depends". Usually I avoid downcasting in my code because, from my experience, if it is overused, it means the design is bad. On the other hand, that does not necessarily mean it should never be used at all.
Typecasting will have a cost because the runtime type information has to be checked to ensure the cast will work. Compared to everything else, I doubt this will be significant, but you could try and measure it.
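A very rough way to measure it (a naive sketch; this ignores JIT warmup and dead-code elimination, so for real numbers use a proper harness such as JMH):

class CastBench {
    public static void main(String[] args) {
        Object[] data = new Object[1_000_000];
        java.util.Arrays.fill(data, "x");
        long t0 = System.nanoTime();
        long total = 0;
        for (Object o : data) {
            total += ((String) o).length(); // one checkcast per iteration
        }
        long t1 = System.nanoTime();
        System.out.println(total + " chars in " + (t1 - t0) + " ns");
    }
}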
More generally, typecasting is (IMHO) a sign that something is not right in the design. Sure, sometimes you can't avoid it (working with legacy collections, for example), but I would definitely see if I could remove it.
No, it shouldn't affect performance significantly enough to matter.

Casting and Generics, Any performance difference?

I have been coding for Android a lot lately. Though I am comfortable in Java, I am missing some ideas about the core concepts used there.
I am interested to know whether there is any performance difference between these two pieces of code.
First Method:
//Specified as member variable.
ArrayList<String> myList = new ArrayList<String>();
and using it as String temp = myList.get(1);
Second Method:
//Specified as member variable.
ArrayList myList = new ArrayList();
and using it as String temp1 = myList.get(1).toString();
I know it's about casting. Does the first method have a big advantage over the second? Most of the time in real coding I have to use the second method because an ArrayList can hold different data types, so I end up specifying
ArrayList<Object> myList = new ArrayList<Object>();
or something similarly generic.
In short, there's no performance difference worth worrying about, if it exists at all. Generic information isn't stored at runtime anyway, so there's not really anything else happening to slow things down - and as pointed out by other answers it may even be faster (though even if it hypothetically were slightly slower, I'd still advocate using generics.) It's probably good to get into the habit of not thinking about performance so much on this level. Readability and code quality are generally much more important than micro-optimisations!
In short, generics would be the preferred option since they guarantee type safety and make your code cleaner to read.
In terms of the fact you're storing completely different object types (i.e. not related from some inheritance hierarchy you're using) in an arraylist, that's almost definitely a flaw with your design! I can count the times I've done this on one hand, and it was always a temporary bodge.
Generics aren't reified, which means they go away at runtime. Using generics is preferred for several reasons:
It makes your code clearer, as to which classes are interacting
It keeps it type safe: you can't accidentally add, say, an Integer to a List<String> (see the sketch after this list)
It's faster: casting requires the JVM to test type castability at runtime, in case it needs to throw a ClassCastException. With Generics, the compiler knows what types things must be, and so it doesn't need to check them.
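A small sketch of the type-safety point (hypothetical snippet):

List<String> safe = new ArrayList<>();
// safe.add(42);                   // compile error: caught immediately
List unsafe = new ArrayList();     // raw type
unsafe.add(42);                    // compiles fine...
String s = (String) unsafe.get(0); // ...but throws ClassCastException here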
There is a performance difference in that code: the second method is actually slower.
The reason why: generics don't require casting/conversion (your code uses a conversion method, not a cast); the type is already correct. So when you call the toString() method, it is an extra call with extra operations that are unnecessary when using the generic version.
There wouldn't be a problem with casting, as you are using the toString() method. But you could accidentally add an incorrect object (such as an array of Strings). The toString() method would work properly and not throw an exception, but you would get odd results.
As Android is used on mobiles and handheld devices where resources are limited, you have to be careful while coding.
Casting can be an overhead if you are storing String data in an ArrayList.
So in my opinion you should use the first method and be specific.
There is no runtime performance difference because of "type erasure".
But if you are using Java 1.5 or above, you SHOULD use generics and not the weakly typed counterparts.
Advantages of generics --
* The flexibility of dynamic binding, with the advantage of static type-checking. Compiler-detected errors are less expensive to repair than those detected at runtime.
* There is less ambiguity between containers, so code reviews are simpler.
* Using fewer casts makes code cleaner.

Is there any runtime cost for Casting in Java?

Would there be any performance differences between these two chunks?
public void doSomething(Supertype input)
{
    Subtype foo = (Subtype) input;
    foo.methodA();
    foo.methodB();
}
vs.
public void doSomething(Supertype input)
{
    ((Subtype) input).methodA();
    ((Subtype) input).methodB();
}
Any other considerations or recommendations between these two?
Well, the compiled code probably includes the cast twice in the second case - so in theory it's doing the same work twice. However, it's very possible that a smart JIT will work out that you're doing the same cast on the same value, so it can cache the result. But it does have to do the work at least once - after all, it needs to decide whether to allow the cast to succeed or throw an exception.
As ever, you should test and profile your code if you care about the performance - but I'd personally use the first form anyway, just because it looks more readable to me.
Yes. Checks must be done with each cast along with the actual mechanism of casting, so casting multiple times will cost more than casting once. However, that's the type of thing that the compiler would likely optimize away. It can clearly see that input hasn't changed its type since the last cast and should be able to avoid multiple casts - or at least avoid some of the casting checks.
In any case, if you're really that worried about efficiency, I'd wonder whether Java is the language that you should be using.
Personally, I'd say to use the first one. Not only is it more readable, but it makes it easier to change the type later. You'll only have to change it in one place instead of every time that you call a function on that variable.
I agree with Jon's comment, do it once, but for what it's worth in the general question of "is casting expensive", from what I remember: Java 1.4 improved this noticeably with Java 5 making casts extremely inexpensive. Unless you are writing a game engine, I don't know if it's something to fret about anymore. I'd worry more about auto-boxing/unboxing and hidden object creation instead.
According to this article, there is a cost associated with casting.
Note that the article is from 1999, and it is up to the reader to decide whether the information is still trustworthy!
In the first case:
Subtype foo = (Subtype) input;
it is determined at compile time, so there is no cost at runtime.
In the second case:
((Subtype) input).methodA();
it is determined at run time, because the compiler cannot know. The JVM has to check whether the value can be converted to a reference of Subtype and, if not, throw a ClassCastException, etc. So there will be some cost.
