Why in "java" when you declare a "parameter" of an "annotation" you have to put "pair of parentheses" after the parameter, annotation are anyway "very different" form "interface" syntactically, so why this weird syntax...I know it has something to do with, that annotation are managed using interface behind the scenes or something, but what exactly?
This is what a normal interface declaration would look like:
public interface Example {
String method();
}
Note that when annotations were released, the java feature 'default methods for interfaces' wasn't around yet. The default keyword already existed (and has existed since java 1.0), solely as a thing you can put in a switch block, as the 'case' for 'it didn't match any of the cases'.
This is what an annotation interface definition looks like:
public @interface Example {
String method();
}
or, if defaults are involved:
public @interface Example {
String method() default "";
}
Note how, parsing-wise, there is no difference between these two other than the '@' symbol. In the 'default' case, yeah, that's entirely new, but looking solely at that bit it's not weird looking. The parentheses are, but not that bit.
The reason it's done this way is to not 'explode' parsing. Since the introduction of module-info in java9, if you want to write a parser for java, you mostly DO need an 'on the fly switching modes' parser; the "language specification" for a module file is so very different.
But that is a big step; the vast majority of parser libraries out there don't deal with this well, they can't switch grammars in the middle of parsing a source file.
Even if I can snap my fingers and break java (which, to be clear, java does not generally do: updating the language so that existing code now no longer compiles or means something else is a big step that is very rarely taken for obvious reasons. It restricts language design, though. That's the cost of being the world's most popular language*)... there are advantages here.
The way annotations work is that, if you at runtime obtain one, it acts like an object that is an instance of the annotation interface:
Example foo = Class.forName("foo.bar.Baz").getAnnotation(Example.class);
System.out.println(foo.method());
Note how it's not foo.method. It's foo.method(). And that has a reason: fields, in java, are second-rate citizens. You can't refer to them with method references (ClassName::methodName is valid java; there's no such thing for fields), they don't inherit, and you can't meaningfully put them in interfaces (fields in interfaces are automatically public, final, and static, i.e. constants; they don't decree a requirement to any implementing class, unlike methods in interfaces). That means fields, as a general point of principle, aren't used in public APIs in java. It'd be weird if in this instance they were.
So, given that the params act like args-less method calls, it's convenient in that sense that you declare them that way as well in an @interface definition.
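For completeness, a minimal runnable sketch of both sides (the names are made up, and the @Retention meta-annotation is needed for the annotation to be visible at runtime at all):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
@interface Example {
    String method() default "";
}

@Example(method = "hello")
class Demo {
    public static void main(String[] args) {
        // The annotation instance acts like an implementation of the interface:
        Example e = Demo.class.getAnnotation(Example.class);
        System.out.println(e.method()); // prints "hello"
    }
}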
*) Give or take a few spots.
Related
Looking into j.l.r.Executable class I've found a method called hasRealParameterData() and from its name and code context I assume that it tells whether a particular method has 'real' or 'synthetic' params.
If I take e.g. method Object.wait(long, int) and call hasRealParameterData() it turns out that it returns false which is confusing to me, as the method is declared in Object class along with its params.
From this I've got a couple of questions:
What are 'real' and 'synthetic' Method parameters and why Java believes that params of Object.wait(long, int) are not 'real'?
How can I define a method with 'real' params?
Preamble - don't do this.
As I mentioned in the comments as well: This is a package private method. That means:
[A] It can change at any time, and code built based on assuming it is there will need continuous monitoring; any new java release means you may have to change things. You probably also need a framework if you want your code to be capable of running on multiple different VM versions. Maybe it'll never meaningfully change, but you have no guarantee so you're on the hook to investigate each and every JVM version released from here on out.
[B] It's undocumented by design. It may return weird things.
[C] The java module system restriction stuff is getting tighter every release; calling this method is hard, and will become harder over time.
Whatever made you think this method is the solution to some problem you're having - unlikely. If it does what you want at all, there are probably significantly better solutions available. I strongly advise you take one step backwards and ask a question about the problem you're trying to solve, instead of asking questions about this particular solution you've come up with.
Having gotten that out of the way...
Two different meanings
The problem here is that 'synthetic' means two utterly unrelated things, and the docs interchange the meanings. The 4 terms in play here are:
SYNTHETIC, the JVM flag. This term is in the JLS.
'real', a slang term used to indicate anything that is not marked with the JVM SYNTHETIC flag. This term is, as far as I know, not official. There isn't an official term other than simply 'not SYNTHETIC'.
Synthetic, as in, the parameter name (and other data not guaranteed to be available in class files) are synthesised.
Real, as in, not the previous bullet point's synthetic. The parameter is fully formed solely on the basis of what the class file contains.
The 'real' in hasRealParameterData is referring to the 4th bullet, not the second. But, all 4 bullet point meanings are used in various comments in the Executable.java source file!
The official meaning - the SYNTHETIC flag
The JVM has the notion of the synthetic flag.
This means it wasn't in the source code but javac had to make this element in order to make stuff work. This is done to paper over mismatches between java-the-language and java-the-VM-definition, as in, differences between .java and .class. Trivial example: At least until the nestmates concept, the notion of 'an inner class' simply does not exist at the class file level. There is simply no such thing. Instead, javac fakes it: It turns:
class Outer {
private static int foo() {
return 5;
}
class Inner {
void example() {
Outer.foo();
}
}
}
Into 2 seemingly unrelated classes, one named Outer, and one named Outer$Inner, literally like that. You can trivially observe this: Compile the above file and look at that - 2 class files, not one.
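You can also observe the flattened naming from within java itself; a tiny sketch (class names invented):

public class Outer {
    class Inner { }

    public static void main(String[] args) {
        // The binary name reveals the separate class the compiler produced.
        System.out.println(Inner.class.getName()); // prints Outer$Inner
    }
}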
This leaves one problem: The JLS claims that inner classes get to call private members from their outer class. However, at the JVMS (class file) level, we turned these 2 classes into separate things, and thus, Outer$Inner cannot call foo. Now what? Well, javac generates a 'bridger' method. It basically compiles this instead:
class Outer {
private static int foo() {
return 5;
}
/* synthetic */ static int foo$() {
return foo();
}
}
class Outer$Inner {
private /* synthetic */ Outer enclosingInstance;
void example() {
Outer.foo$();
}
}
javac can generate fields and extra overload methods. For example, if you write class MyClass implements List<String> {}, you will write e.g. add(String x), but .add(Object x) still needs to exist to cater to erasure; that method is generated by javac, and will be marked with the SYNTHETIC modifier.
One effect of the SYNTHETIC modifier is that javac acts as if these methods do not exist: if you attempt to actually write Outer.foo$() in java code, it won't compile; javac pretends the method does not exist, even though it does. If you use bytebuddy or a hex editor to clear that flag in the class file, then javac will compile that code just fine.
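To make the erasure example concrete, here is a small runnable sketch (the Box class is invented); reflection can see the synthetic bridge method even though javac pretends it isn't there:

import java.lang.reflect.Method;

class Box implements Comparable<Box> {
    public int compareTo(Box other) { return 0; }
}

public class BridgeDemo {
    public static void main(String[] args) {
        for (Method m : Box.class.getDeclaredMethods()) {
            // javac emits a second compareTo(Object) marked as a synthetic bridge
            System.out.println(m + " bridge=" + m.isBridge() + " synthetic=" + m.isSynthetic());
        }
    }
}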
Generating parameter names
In the original v1.0 Java Language Spec, parameter types were, obviously, a required part of a method's signature, and are naturally encoded in class files. You can write Integer.class.getMethods(), loop through until you find the static parseInt method, and then ask the j.l.r.Method instance about its parameter types, which will dutifully report: the first param's type is String. You can even ask it for its annotations.
But, weirdly enough, as per JLS 1.0 you cannot ask for its name, simply because it is not there. There was no actual need to know it, it does take up space, and java wanted to be installable on tiny devices (I'm just guessing at the reasons here), so the info is not there. You can add it as debug info, via the -g parameter, because having the names of things is convenient.
However, in later days this was deemed too annoying, and more recent compilers CAN stuff the param names in a class file: javac does so if you pass the -parameters flag, independently of the -g param to 'include debug symbol info'.
Which leaves one final question: java17 can still load classes produced by javac 1.1. So what is it supposed to do when you ask for the name of param1 of such a method? The name simply cannot be figured out, it simply isn't there in the class file. It can fall back to looking at the debug symbol table (and it does), but if that isn't there - then you're just out of luck.
What the JVM does is make that name arg0, arg1, etc. You may have seen this in decompiler outputs.
THAT is what the hasRealParameterData() method is referring to as 'real' - arg0 is 'synthesized', and in contrast, foo (the actual name of the param) is 'real'.
So how would one have a method with 'real' data in that sense (the 4th bullet)? Compile it with the -parameters flag, which tells javac to record the parameter names in the class file. Compile without it (or run the class through an obfuscator that strips this data), and you'll get non-real params, as per hasRealParameterData().
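A quick way to observe the difference from code; a minimal sketch (the greet method is invented, and isNamePresent() reports whether the class file actually carries the name):

import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParamDemo {
    static void greet(String firstName, int times) { }

    public static void main(String[] args) throws Exception {
        Method m = ParamDemo.class.getDeclaredMethod("greet", String.class, int.class);
        for (Parameter p : m.getParameters()) {
            // Compiled with -parameters: firstName/times, real=true.
            // Compiled without: arg0/arg1, real=false (names are synthesized).
            System.out.println(p.getName() + " real=" + p.isNamePresent());
        }
    }
}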
EDIT
Even though I use a pseudo-Java syntax below for illustration, this question is NOT limited to any 1 programming language. Please feel free to post an idiom or language-provided mechanism from your favorite programming language.
When attempting to reuse an existing class, Old, via composition instead of inheritance, it is very tedious to first manually create a new interface out of the existing class, and then write forwarding functions in New. The exercise becomes especially wasteful if Old has tons of public methods and you need to override only a handful of them.
Ignoring IDEs like Eclipse, which can help with this process but still cannot reduce the resulting verbosity of the code that one has to read and maintain, it would greatly help to have a couple of language mechanisms to...
automatically extract the public methods of Old, say, via an interfaceOf operator; and
by default forward all automatically generated interface methods of Old, say, via a forwardsTo operator, to a composed instance of Old, with you only providing definitions for the handful of methods you wish to override in New.
An example:
// A hypothetical, Java-like language
class Old {
public void a() { }
public void b() { }
public void c() { }
private void d() { }
protected void e() { }
// ...
}
class New implements interfaceOf Old {
public New() {
// This would auto-forward all Old methods to _composed
// except the ones overridden in New.
Old forwardsTo _composed;
}
// The only method of Old that is being overridden in New.
public void b() {
_composed.b();
}
private Old _composed;
}
My question is:
Is this possible at the code level (say, via some reusable design pattern, or idiom), so that the result is minimal verbosity in New and classes like New?
Are there any other languages where such mechanisms are provided?
EDIT
Now, I don't know these languages in detail but I'm hoping that 'Lispy' languages like Scheme, Lisp, Clojure won't disappoint here... for Lisp after all is a 'programmable programming language' (according to Paul Graham and perhaps others).
EDIT 2
I may not be the author of Old or may not want to change its source code, effectively wanting to use it as a blackbox.
This could be done in languages that allow you to specify a catch-all magic method (e.g. __call() in PHP). You could catch any function call here that you have not specifically overridden, check if it exists in class Old, and if it does, just forward the call.
Something like this:
public function __call($name, $args)
{
    // $this->old is the composed Old instance; forward anything not overridden here.
    if (method_exists($this->old, $name))
    {
        return call_user_func_array([$this->old, $name], $args);
    }
}
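For comparison, Java can approximate the same catch-all forwarding with a dynamic proxy, as long as an interface is involved; a minimal sketch (overriding size() is an invented example):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ForwardingDemo {
    public static void main(String[] args) {
        List<String> composed = new ArrayList<>();
        // Forward every List call to 'composed', except the one we override.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            if (method.getName().equals("size")) {
                return 42; // the single "overridden" method
            }
            return method.invoke(composed, methodArgs);
        };
        @SuppressWarnings("unchecked")
        List<String> forwarding = (List<String>) Proxy.newProxyInstance(
                List.class.getClassLoader(), new Class<?>[] { List.class }, handler);
        forwarding.add("hello");                // forwarded to composed
        System.out.println(forwarding.size());  // 42 (overridden)
        System.out.println(composed);           // [hello]
    }
}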
First, to answer the design question in the context of "OOP" (class-oriented) languages:
If you really need to replace Old with its complete interface IOld everywhere you use it, just to make New, which implements IOld, behave like you want, then you actually should use inheritance.
If you only need a small part of IOld for New, then you should only put that part into the interface ICommon and let both Old and New implement it. In this case, you would only replace Old by ICommon where both Old and New make sense.
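A minimal sketch of that second option, with invented names; only the shared part lives in the interface, and New forwards just that:

// Extract only the part both classes share into a small interface.
interface ICommon {
    void b();
}

class Old implements ICommon {
    public void a() { }
    public void b() { }
}

class New implements ICommon {
    private final Old composed = new Old();
    // The one method New cares about, forwarded to the composed instance.
    public void b() { composed.b(); }
}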
Second, what can Common Lisp do for you in such a case?
Common Lisp is very different from Java and other class-oriented languages.
Just a few pointers:
In Common Lisp, objects are primarily used to structure and categorize data, not code. You won't find "one class per file", "one file per class", or "package names completely correspond to directory structure" here.
Methods do not "belong" to classes but to generic functions, whose sole responsibility is to dispatch according to the classes of their arguments (which has the nice side effect of enabling seamless multiple dispatch).
There is multiple inheritance.
There are no interfaces as such.
There is a much stronger tendency to use packages for modularity instead of just organizing classes. Which symbols are exported ("public" in Java parlance) is defined per package, not per class (which would not make sense with the above, obviously).
I think that your problem would either completely disappear in a Common Lisp environment because your code is not forced into a class structure, or be quite naturally solved or expressed in terms of multiple dispatch and/or (maybe multiple) inheritance.
One would need at least a complete example and large parts of the surrounding system to even attempt a translation into Common Lisp idioms. You just write code so differently that it would not make any sense to try a one-to-one translation of a few forms.
I think Go has such a mechanism: a struct can embed another struct, and the embedded struct's methods are promoted to the embedding one. Take a look here. This could be what you are asking in your second question.
Let's say I have:
class A {
Integer b;
void c() {}
}
Why does Java have this syntax: A.class, and doesn't have a syntax like this: b.field, c.method?
Is there any use that is so common for class literals?
The A.class syntax looks like a field access, but in fact it is a result of a special syntax rule in a context where normal field access is simply not allowed; i.e. where A is a class name.
Here is what the grammar in the JLS says:
Primary:
ParExpression
NonWildcardTypeArguments (ExplicitGenericInvocationSuffix | this Arguments)
this [Arguments]
super SuperSuffix
Literal
new Creator
Identifier { . Identifier }[ IdentifierSuffix]
BasicType {[]} .class
void.class
Note that there is no equivalent syntax for field or method.
(Aside: The grammar allows b.field, but the JLS states that b.field means the contents of a field named "field" ... and it is a compilation error if no such field exists. Ditto for c.method, with the addition that a field c must exist. So neither of these constructs mean what you want them to mean ... )
Why does this limitation exist? Well, I guess because the Java language designers did not see the need to clutter up the language syntax / semantics to support convenient access to the Field and Method objects. (See * below for some of the problems of changing Java to allow what you want.)
Java reflection is not designed to be easy to use. In Java, it is best practice to use static typing where possible. It is more efficient, and less fragile. Limit your use of reflection to the few cases where static typing simply won't work.
This may irk you if you are used to programming to a language where everything is dynamic. But you are better off not fighting it.
Is there any use that is so common for class literals?
I guess the main reason they supported this for classes is that it avoids programs calling Class.forName("some horrible string") each time you need to do something reflectively. You could call it a compromise / small concession to usability for reflection.
I guess the other reason is that the <type>.class syntax didn't break anything, because class was already a keyword. (IIRC, the syntax was added in Java 1.1.)
* If the language designers tried to retrofit support for this kind of thing there would be all sorts of problems:
The changes would introduce ambiguities into the language, making compilation and other parser-dependent tasks harder.
The changes would undoubtedly break existing code, whether or not method and field were turned into keywords.
You cannot treat b.field as an implicit object attribute, because it doesn't apply to objects. Rather b.field would need to apply to field / attribute identifiers. But unless we make field a reserved word, we have the anomalous situation that you can create a field called field but you cannot refer to it in Java sourcecode.
For c.method, there is the problem that there can be multiple visible methods called c. A second issue is that if there is a field called c and a method called c, then c.method could be a reference to a field called method on the object referred to by the c field.
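For reference, this is how you get at the Field and Method objects that the hypothetical b.field / c.method syntax would denote, using the asker's class A (a minimal sketch):

import java.lang.reflect.Field;
import java.lang.reflect.Method;

class A {
    Integer b;
    void c() {}
}

public class ReflectDemo {
    public static void main(String[] args) throws Exception {
        // The reflective equivalents of the hypothetical b.field and c.method:
        Field f = A.class.getDeclaredField("b");
        Method m = A.class.getDeclaredMethod("c");
        System.out.println(f.getType()); // class java.lang.Integer
        System.out.println(m.getName()); // c
    }
}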
I take it you want this info for logging and such. It is most unfortunate that such information is not available, although the compiler has full access to it.
With a little creativity you can get the information using reflection. I can't provide any examples, as there are few requirements to follow and I'm not in the mood to completely waste my time :)
I'm not sure if I fully understand your question. You are being unclear in what you mean by A.class syntax. You can use the reflection API to get the class of a given object:
A a = new A();
Class c = a.getClass();
or
Class c = A.class;
Then do some things using c.
The reflection API is mostly used for debugging tools: since Java has support for polymorphism, you can only know the actual Class of an object at runtime, so the reflection API was developed to help debug problems (a sub-class given when super-class behavior is expected, etc.).
The reason there is no b.field or c.method is because they would have no meaning and no functional purpose in Java. You cannot create a reference to a method, and a field cannot change its type at runtime; these things are set at compile-time. Java is a very rigid language, without much in the way of runtime flexibility (unless you use dynamic class loading, but even then you need some information on the loaded objects). If you have come from a flexible language like Ruby or JavaScript, then you might find Java a little controlling for your tastes.
However, having the compiler help you figure out potential problems in your code is very helpful.
In Java, not everything is an object.
You can have
A a = new A();
Class cls = a.getClass();
or directly from the class
A.class
With this you get the Class object for the class.
With reflection you can get methods and fields, but this gets complicated, since not everything is an object. This is not a language like Scala or Ruby, where everything is an object.
Reflection tutorial : http://download.oracle.com/javase/tutorial/reflect/index.html
BTW: You did not specify public/private/protected, so by default your members are declared package-private. This is package-level access: http://download.oracle.com/javase/tutorial/java/javaOO/accesscontrol.html
I’m a huge believer in consistency, and hence conventions.
However, I’m currently developing a framework in Java where these conventions (specifically the get/set prefix convention) seem to get in the way of readability. For example, some classes will have id and name properties and using o.getId() instead of o.id() seems utterly pointless for a number of reasons:
The classes are immutable so there will (generally) be no corresponding setter,
there is no chance of confusion,
the get in this case conveys no additional semantics, and
I use this get-less naming schema quite consistently throughout the library.
I am getting some reassurance from the Java Collection classes (and other classes from the Java Platform library) which also violate JavaBean conventions (e.g. they use size instead of getSize etc.).
To get this concern out of the way: the component will never be used as a JavaBean since they cannot be meaningfully used that way.
On the other hand, I am not a seasoned Java user and I don’t know what other Java developers expect of a library. Can I follow the example of the Java Platform classes in this or is it considered bad style? Is the violation of the get/set convention in Java library classes deemed a mistake in retrospect? Or is it completely normal to ignore the JavaBean conventions when not applicable?
(The Sun code conventions for Java don’t mention this at all.)
If you follow the appropriate naming conventions, then 3rd-party tools can easily integrate with and use your library. They will expect getX(), isX() etc. and try to find these through reflection.
Although you say that these won't be exposed as JavaBeans currently, I would still follow the conventions. Who knows what you may want to do further down the line ? Or perhaps at a later stage you'll want to extract an interface to this object and create a proxy that can be accessed via other tools ?
I actually hate this convention. I would be very happy if it were replaced by a real java tool that would provide the accessor/modifier methods.
But I do follow this convention in all my code. We don't program alone, and even if the whole team agrees on a special convention right now, you can be assured that future newcomers, or a future team that will maintain your project, will have a hard time at the beginning... I believe the inconvenience for get/set is not as big as the inconvenience from being non-standard.
I would like to raise another concern: often, java software uses too many accessors and modifiers (get/set). We should apply the "Tell, don't ask" advice much more. For example, replace the getters on B with a "real" method:
class A {
B b;
String c;
void a() {
String c = b.getC();
String d = b.getD();
// algorithm with b, c, d
}
}
by
class A {
B b;
String c;
void a() {
b.a(c); // Class B has the algorithm.
}
}
Many good properties are obtained by this refactor:
B can be made immutable (excellent for thread-safety)
Subclasses of B can modify the computation, so B might not require another property for that purpose.
The implementation is simpler in B than it would have been in A, because you don't have to use the getters for external access to the data; you are inside B and can take advantage of implementation details (checking for errors, special cases, using cached values...).
Because the algorithm is located in B, to which it is more coupled (it uses two properties of B versus one of A), chances are that refactoring A will not impact it, while a refactoring of B may be an opportunity to improve the algorithm. So maintenance is reduced.
The violation of the get/set convention in the Java library classes is most certainly a mistake. I'd actually recommend that you follow the convention, to avoid the complexity of knowing why/when the convention isn't followed.
Josh Bloch actually sides with you in this matter in Effective Java, where he advocates the get-less variant for things which aren't meant to be used as beans, for readability's sake. Of course, not everyone agrees with Bloch, but it shows there are cases for and against dumping the get. (I think it's easier to read, and so if YAGNI, ditch the get.)
Concerning the size() method from the collections framework; it seems unlikely it's just a "bad" legacy name when you look at, say, the more recent Enum class which has name() and ordinal(). (Which probably can be explained by Bloch being one of Enum's two attributed authors. ☺)
The get-less schema is used in a language like Scala (and other languages), via the Uniform Access Principle:
Scala keeps field and method names in the same namespace, which means we can’t name the field count if a method is named count. Many languages, like Java, don’t have this restriction, because they keep field and method names in separate namespaces.
Since Java is not meant to offer UAP for "properties", it is best to refer to those properties with the get/set conventions.
UAP means:
Foo.bar and Foo.bar() are the same, and refer to reading the property, or to a read method for the property.
Foo.bar = 5 and Foo.bar(5) are the same and refer to setting the property, or to a write method for the property.
In Java, you cannot achieve UAP because Foo.bar and Foo.bar() are in two different namespaces.
That means to access the read method, you will have to call Foo.bar(), which is no different than calling any other method.
So this get-set convention can help to differentiate that call from the others (not related to properties), since "All services (here "just reading/setting a value, or computing it") offered by a module cannot be available through a uniform notation".
It is not mandatory, but is a way to recognize a service related to get/set or compute a property value, from the other services.
If UAP were available in Java, that convention would not be needed at all.
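To illustrate the namespace point in Java terms, a contrived sketch (names invented); a field and a method may legally share a name precisely because they live in different namespaces:

class Foo {
    int bar = 5;                   // the field 'bar'
    int bar() { return bar * 2; }  // a method also named 'bar'

    public static void main(String[] args) {
        Foo f = new Foo();
        System.out.println(f.bar);   // 5  - field access
        System.out.println(f.bar()); // 10 - method call
    }
}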
Note: the size() instead of getSize() is probably a legacy bad name, preserved for the sake of Java's mantra of 'backwards compatible, always'.
Consider this: lots of frameworks can be told to reference a property of an object by a name such as "name". Under the hood, the framework turns "name" into "setName", figures out the type from that method's single parameter, and then forms either "getName" or "isName".
If you don't provide such a well-documented, sensible accessor/mutator mechanism, your framework/library just won't work with the majority of other libraries/frameworks out there.
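For instance, the standard JavaBeans Introspector performs exactly this kind of name-based discovery; a small sketch with an invented Person bean:

import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

class Person {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class BeanDemo {
    public static void main(String[] args) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(Person.class, Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            // "name" is discovered only because getName/setName follow the convention.
            System.out.println(pd.getName() + " -> " + pd.getReadMethod() + " / " + pd.getWriteMethod());
        }
    }
}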
I work on a team of Java programmers. One of my co-workers suggests from time-to-time that I do something like "just add a type field" (usu. "String type"). Or code will be committed laden with "if (foo instanceof Foo){...} else if( foo instanceof Bar){...}".
Josh Bloch's admonition that "tagged classes are a wan imitation of a proper class hierarchy" notwithstanding, what is my one-line response to this sort of thing? And then how do I elaborate the concept more seriously?
It's clear to me that - the context being Java - the type of Object under consideration is right in front of our collective faces - IOW: The word right after the "class", "enum" or "interface", etc.
But aside from the difficult-to-demonstrate or -quantify (on the spot) "it makes your code more complicated", how do I say that "duck-typing in a (more or less) strongly-typed language is a stupid idea that suggests a much deeper design pathology"?
Actually, you said it reasonably well right there.
The truth is that the "instanceof" comb is almost always a bad idea (the exception happening, for example, when you're marshaling or serializing, when for a short interval you may not have all the type information at hand). As Josh says, it's a sign of a bad class hierarchy otherwise.
The way that you know it's a bad idea is that it makes the code brittle: if you use that, and the type hierarchy changes, then it probably breaks that instance-of comb everywhere it occurs. What's more, you then lose the benefit of strong typing; the compiler can't help you by catching errors ahead of time. (This is somewhat analogous to the problems caused by typecasts in C.)
Update
Let me extend this a bit, since from a comment it appears I wasn't quite clear. The reason you use a typecast in C, or instanceof, is that you want to say "as if": use this foo as if it were a bar. Now, in C, there is no run-time type information around at all, so you're just working without a net: if you typecast something, the generated code is going to treat that address as if it contained a particular type no matter what, and you should only hope that it will cause a run-time error instead of silently corrupting something.
Duck typing just raises that to a norm; in a dynamically typed language like Ruby or Python or Smalltalk, everything is an untyped reference; you shoot messages at it at runtime and see what happens. If it understands a particular message, it "walks like a duck" -- it handles it.
This can be very handy and useful, because it allows marvelous hacks like assigning a generator expression to a variable in Python, or a block to a variable in Smalltalk. But it does mean you're vulnerable to errors at runtime that a strongly typed language can catch at compile time.
In a strongly-typed language like Java, you can't really, strictly, have duck typing at all: you must tell the compiler what type you're going to treat something as. You can get something like duck typing by using type casts, so that you can do something like
Object x; // A reference to an Object, analogous to a void * in C
// Some code that assigns something to x
((FoodDispenser)x).dropPellet(); // [1]
// Some more code
((MissleController)x).launchAt("Moon"); // [2]
Now at run time, you're fine as long as x is a kind of FoodDispenser at [1] or MissleController at [2]; otherwise boom. Or unexpectedly, no boom.
In your description, you protect yourself by using a comb of else if and instanceof
Object x ;
// code code code
if(x instanceof FoodDispenser)
((FoodDispenser)x).dropPellet();
else if (x instanceof MissleController )
((MissleController)x).launchAt("Moon");
else if ( /* something else...*/ ) // ...
else // error
Now, you're protected against the run-time error, but you've got the responsibility of doing something sensible later, at the else.
But now imagine you make a change to the code, so that 'x' can take the types 'FloorWax' and 'DessertTopping'. You now must go through all the code and find all the instances of that comb and modify them. Now the code is "brittle" -- changes in the requirements mean lots of code changes. In OO, you're striving to make the code less brittle.
The OO solution is to use polymorphism instead, which you can think of as a kind of limited duck typing: you're defining all the operations that something can be trusted to perform. You do this by defining a superior class, probably abstract, that has all the methods of the inferior classes. In Java, a class like that is best expressed as an "interface", but it has all the type properties of a class. In fact, you can see an interface as a promise that a particular class can be trusted to act "as if" it were another class.
public interface VeebleFeetzer { /* ... */ };
public class FoodDispenser implements VeebleFeetzer { /* ... */ }
public class MissleController implements VeebleFeetzer { /* ... */ }
public class FloorWax implements VeebleFeetzer { /* ... */ }
public class DessertTopping implements VeebleFeetzer { /* ... */ }
All you have to do now is use a reference to a VeebleFeetzer, and the compiler figures it out for you. If you happen to add another class that's a subtype of VeebleFeetzer, the compiler will select the method and check the arguments into the bargain.
VeebleFeetzer x; // A reference to anything
// that implements VeebleFeetzer
// Some code that assigns something to x
x.dropPellet();
// Some more code
x.launchAt("Moon");
This isn't so much duck typing as it is just proper object-oriented style; indeed, being able to subclass class A as class B, call the same method on a B, and have it do something else is the entire point of inheritance in OO languages.
If you're constantly checking the type of an object, then you're either being too clever (though I suppose it's this cleverness that duck typing aficionados enjoy, except in a less brittle form) or you're not embracing the basics of object-oriented programming.
hmmm...
correct me if I am wrong, but tagged classes and duck-typing are two different concepts, though not necessarily mutually exclusive.
When one has the urge to use tags in a class to define the type, then one should, IMHO, revise their class hierarchy, as it is a clear sign of conceptual bleed where an abstract class needs to know the implementation details that the class parenthood tries to hide. Are you using the correct pattern? In other words, are you trying to coerce behaviour into a pattern that does not naturally support it?
Whereas duck-typing is the ability to loosely define a type, where a method can accept any type so long as the necessary methods on the parameter instance are defined. The method will then use the parameter and call the necessary methods without too much bother about the parenthood of the instance.
So here... the smelly hint is, as Charlie pointed out, the use of instanceof. Much like static or other smelly keywords, whenever they appear one must ask "Am I doing the right thing here?"; not that they are inherently wrong, but they are often used to hack through a bad or ill-fitting OO design.
My one line response would be that you lose one of the main benefits of OOP: polymorphism. This reduces the time to develop new code (developers love to develop new code, so that should help your argument :-)
If, when adding a new type to an existing system, you have to add logic aside from figuring out which instance to construct, then, in Java, you are doing something wrong (assuming that the new class should simply be a drop-in replacement for another).
Generally, the appropriate way to handle this in Java is to keep the code polymorphic and make use of interfaces. So anytime they find themselves wanting to add another variable or do an instanceof they should probably be implementing an interface instead.
If you can convince them to change the code it is pretty easy to retrofit interfaces into the existing code base. For that matter, I'd take the time to take a piece of code with instanceof and refactor it to be polymorphic. It is much easier for people to see the point if they can see the before and after versions and compare them.
You might want to point your co-worker to the Liskov substitution principle, one of the five pillars in SOLID.
Links:
Wikipedia entry
Article written by Uncle Bob
When you say "duck typing in strongly-typed languages" you actually mean "imitating (subtype) polymorphism in statically-typed languages".
It's not that bad when you have data objects (DTOs) that don't contain any behaviour. When you do have a full-blown OO model (ask yourself if this is really the case) then you should use the polymorphism offered by the language where appropriate.
Although I'm generally a fan of duck-typed languages like python, I can see your problem with it in java.
If you are writing all the classes that will ever be used with this code, then you don't need to duck-type, because you don't need to allow for cases where code can't directly inherit from (or implement) an interface or other unifying abstraction.
A downside of duck-typing is that you have an extra class of unit tests to run on your code: a new class could return a different type than expected, and subsequently cause the rest of the code to fail. So although duck-typing allows backward-flexibility, it requires a lot of forward thinking for tests.
In short you have a catch-all (hard) instead of a catch-few (easy). I think that's the pathology.
Why "imitate a class hierarchy" instead of designing and using it? One of the refactoring methods is replacing "switch"es (chained ifs are almost the same) with polymorphism. Why use switches where polymorphism would lead to cleaner code?
This isn't duck typing, it is just a bad way to simulate polymorphism in a language that has (more or less) real polymorphism.
Two arguments to answer the titled question:
1) Java is supposed to be "write once, run anywhere," so code that was written for one hierarchy shouldn't throw RuntimeExceptions when we change the environment somewhere. (Of course, there are exceptions -- pun -- to this rule.)
2) The Java JIT performs very aggressive optimizations that rely on knowing that a given symbol must be of one type and one type only. The only way to work around this is to cast.
As others have mentioned, your "instanceof" example doesn't quite match the question in the title. Anything with any typing, duck or static, may have the issue you described. There are better OOP ways to deal with it.
Instead of instanceof you can use the Method pattern and the Strategy pattern; mixed together, the code looks much better than before...
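A sketch of what that might look like with a strategy map (all names invented; each entry replaces one instanceof branch):

import java.util.Map;

interface Handler<T> {
    void handle(T value);
}

public class StrategyDemo {
    // One strategy per concrete type, instead of an if/instanceof comb.
    private static final Map<Class<?>, Handler<Object>> HANDLERS = Map.of(
            String.class, v -> System.out.println("string: " + v),
            Integer.class, v -> System.out.println("int: " + v));

    static void dispatch(Object value) {
        Handler<Object> h = HANDLERS.get(value.getClass());
        if (h != null) h.handle(value);
        else System.out.println("no handler for " + value.getClass());
    }

    public static void main(String[] args) {
        dispatch("hello"); // string: hello
        dispatch(42);      // int: 42
    }
}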