Java 8 interface/class loader changes?

I have found some puzzling behavior that differs between Java 1.7.0_51 and Java 1.8.0_20.
The initial situation:
One Interface:
interface InterfaceA {
    public void doSomething();
}
Two classes:
public class ClassA implements InterfaceA {
    public void doSomething() {
        System.out.println("Hello World!");
    }
}
public class ClassB {
    public static void main(String[] args) {
        ClassA a = new ClassA();
        a.doSomething();
    }
}
Next I compiled the classes with Java 1.8: javac *.java
After the compiler finished, I removed the InterfaceA.java and InterfaceA.class files.
Now I tried to compile only ClassB.java again, and got this error message:
ClassB.java:4: error: cannot access InterfaceA
a.doSomething();
class file for InterfaceA not found
1 error
I tried the same with Java 1.7: javac *.java
After the compiler finished, I removed the InterfaceA.java and InterfaceA.class files.
But this time I got no error message.
Can someone explain this to me?

The formal specification describes the process of finding the target method of an invocation expression as first searching all applicable methods and then selecting the most specific one, succeeding if there is no ambiguity.
Compare JLS 15.12.2.1. Identify Potentially Applicable Methods
The class or interface determined by compile-time step 1 (§15.12.1) is searched for all member methods that are potentially applicable to this method invocation; members inherited from superclasses and superinterfaces are included in this search.
In your case it is possible to deduce that the method found in ClassA is an exact match for which the compiler can't find a more specific method in InterfaceA. However, the specification does not mandate that the compiler stop at this point and short-circuit the search. That's an optimization a compiler might implement, but performing the search exactly as formally specified, i.e. searching the entire type hierarchy first and choosing afterwards, is just as correct.
Given how subtle and complex the process has become with all the new Java 8 features and type inference, it is understandable that the current implementation is more conservative rather than optimized.
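To illustrate why the whole-hierarchy search can require InterfaceA.class even though ClassA declares doSomething() itself, here is a hedged variation of the question's code; the String overload is hypothetical, added purely to show how a method living only in the interface can participate in overload resolution:
// InterfaceA.java -- a (hypothetical) overload that exists only in the interface
interface InterfaceA {
    void doSomething();
    default void doSomething(String who) {
        System.out.println("Hello " + who + "!");
    }
}

// ClassA.java
public class ClassA implements InterfaceA {
    public void doSomething() {
        System.out.println("Hello World!");
    }
}

// ClassB.java -- choosing between doSomething() and doSomething(String)
// forces javac to search ClassA's entire hierarchy, including InterfaceA
public class ClassB {
    public static void main(String[] args) {
        new ClassA().doSomething("Java");
    }
}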

I can think of two possible explanations:
Maybe the addition of default methods, or type annotations, or something else to Java 8 meant that the compiler needed to be changed to load the classfiles for indirectly referenced interfaces.
Maybe it was just a harmless side-effect of some other restructuring of the compiler.
Either way, it doesn't necessarily make any difference to what happens at runtime. And the "fix" at compile time is to not remove the interface classfile like that.

Related

Why method reference produces java.lang.BootstrapMethodError [duplicate]

I have some code with a method reference that compiles fine and fails at runtime.
The exception is this:
Caused by: java.lang.invoke.LambdaConversionException: Invalid receiver type class redacted.BasicEntity; not a subtype of implementation type interface redacted.HasImagesEntity
at java.lang.invoke.AbstractValidatingLambdaMetafactory.validateMetafactoryArgs(AbstractValidatingLambdaMetafactory.java:233)
at java.lang.invoke.LambdaMetafactory.metafactory(LambdaMetafactory.java:303)
at java.lang.invoke.CallSite.makeSite(CallSite.java:289)
The class triggering the exception:
class ImageController<E extends BasicEntity & HasImagesEntity> {
    void doTheThing(E entity) {
        Set<String> filenames = entity.getImages().keySet().stream()
                .map(entity::filename)
                .collect(Collectors.toSet());
    }
}
The exception is thrown trying to resolve entity::filename. filename() is declared in HasImagesEntity. As far as I can tell, I get the exception because the erasure of E is BasicEntity and the JVM doesn't (can't?) consider other bounds on E.
When I rewrite the method reference as a trivial lambda, everything is fine. It seems really fishy to me that one construct works as expected and its semantic equivalent blows up.
Could this possibly be in the spec? I'm trying very hard to find a way for this not to be a problem in the compiler or runtime, and haven't come up with anything.
Here is a simplified example which reproduces the problem and uses only core Java classes:
import java.io.Serializable;
import java.util.Optional;

public class Repro {  // wrapper class added so the snippet compiles standalone
    public static void main(String[] argv) {
        System.out.println(dummy("foo"));
    }

    static <T extends Serializable & CharSequence> int dummy(T value) {
        return Optional.ofNullable(value).map(CharSequence::length).orElse(0);
    }
}
Your assumption is correct: the JRE-specific implementation receives the target method as a MethodHandle, which has no information about generic types. Therefore the only thing it sees is that the raw types mismatch.
Like with a lot of generic constructs, there is a type cast required on the byte code level which doesn’t appear in the source code. Since LambdaMetafactory explicitly requires a direct method handle, a method reference which encapsulates such a type cast cannot be passed as a MethodHandle to the factory.
There are two possible ways to deal with it.
The first solution would be to change the LambdaMetafactory to trust the MethodHandle if the receiver type is an interface, and to insert the required type cast itself in the generated lambda class instead of rejecting it. After all, it already does something similar for parameter and return types.
Alternatively, the compiler would be in charge of creating a synthetic helper method encapsulating the type cast and method call, just as if you had written a lambda expression. This is not a unique situation: if you use a method reference to a varargs method, or an array creation such as String[]::new, it can't be expressed as a direct method handle and ends up in a synthetic helper method.
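As a minimal sketch of that last point (the class and variable names are illustrative, and the explicit lambda is only a model of what the synthetic helper would contain):
import java.util.function.IntFunction;

public class ArrayRefDemo {
    public static void main(String[] args) {
        // An array-constructor reference has no direct method handle behind it...
        IntFunction<String[]> byRef = String[]::new;
        // ...so it behaves as if the compiler had produced this helper lambda:
        IntFunction<String[]> byLambda = size -> new String[size];

        System.out.println(byRef.apply(3).length);     // prints 3
        System.out.println(byLambda.apply(3).length);  // prints 3
    }
}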
In either case, we can consider the current behavior a bug. But obviously, compiler and JRE developers must agree on which way it should be handled before we can say on which side the bug resides.
I've just fixed this issue in JDK9 and JDK8u45. See this bug. The change will take a little while to percolate into promoted builds.
Dan just pointed me at this Stack Overflow question, so I'm adding this note: when you find bugs, please do submit them.
I addressed this by having the compiler create a bridge, as is the approach for many cases of complex method references. We are also examining spec implications.
This bug is not entirely fixed. I just ran into a LambdaConversionException in 1.8.0_72 and saw that there are open bug reports in Oracle's bug tracking system: link1, link2.
(Edit: The linked bugs are reported to be closed in JDK 9 b93)
As a simple workaround I avoid method references. So instead of
.map(entity::filename)
I do
.map(entity -> entity.filename())
Here is the code for reproducing the problem on Debian 3.11.8-1 x86_64.
import java.awt.Component;
import java.util.Collection;
import java.util.Collections;
public class MethodHandleTest {
    public static void main(String... args) {
        new MethodHandleTest().run();
    }

    private void run() {
        ComponentWithSomeMethod myComp = new ComponentWithSomeMethod();
        new Caller<ComponentWithSomeMethod>().callSomeMethod(Collections.singletonList(myComp));
    }

    private interface HasSomeMethod {
        void someMethod();
    }

    static class ComponentWithSomeMethod extends Component implements HasSomeMethod {
        @Override
        public void someMethod() {
            System.out.println("Some method");
        }
    }

    class Caller<T extends Component & HasSomeMethod> {
        public void callSomeMethod(Collection<T> components) {
            components.forEach(HasSomeMethod::someMethod); // <-- crashes
            // components.forEach(comp -> comp.someMethod()); <-- works fine
        }
    }
}
I found that a workaround for this was swapping the order of the generic bounds. For instance, use class A<T extends B & C> where you need to access a B method, or class A<T extends C & B> if you need to access a C method. Of course, if you need access to methods from both types, this won't work. I found it useful when one of the interfaces was a marker interface like Serializable, as shown in the sketch below.
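Applied to the earlier self-contained repro, a hedged sketch of this reordering (the class name is mine; the key point is that the erasure of T becomes its first bound):
import java.io.Serializable;
import java.util.Optional;

public class SwappedBounds {
    // With the bounds reordered, the erasure of T is CharSequence, so the
    // receiver type of CharSequence::length matches and no cast is needed.
    static <T extends CharSequence & Serializable> int dummy(T value) {
        return Optional.ofNullable(value).map(CharSequence::length).orElse(0);
    }

    public static void main(String[] args) {
        System.out.println(dummy("foo")); // prints 3
    }
}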
As for fixing this in the JDK, the only info I could find was some bugs on OpenJDK's bug tracker that are marked as resolved in version 9, which is rather unhelpful.
For the record, the Eclipse compiler of Eclipse 2021-09 (4.21.0) still seems to have this (or a very similar) bug, which I've reported here: https://bugs.eclipse.org/bugs/show_bug.cgi?id=577466
So, if you're developing with Eclipse, this error may still show up while developing (using the Eclipse compiler), even when it is absent at build time (using javac via Maven or Gradle, etc.).

How does java compiler choose correct methods and variables in inheritance

I am a bit confused about what happens when inheritance and type casting are mixed. I want to understand the rules the Java compiler follows when choosing the correct methods and variables in inheritance.
I have read something like
Variables are bound at compile time and methods are bound at run time.
The second one is from Stack Overflow (@Jon Skeet):
Overload resolution (which method signature is called) is determined at compile-time, based on the compile-time types of both the method target and the argument expressions
The implementation of that method signature (overriding) is based on the actual type of the target object at execution time.
But the problem is that they are explanations of specific situations and do not give the general process to be followed when other factors (like exception handling) are taken into consideration.
This may be bad practice, but assume that both methods and variables are overridden (hidden, in the case of static variables).
Now, if the Java compiler has to choose at compile time which method/variable to invoke, what algorithm does it follow? Similarly, at run time, what algorithm does the JVM use (based on the actual type of the object whose reference is being used)?
All method signatures and variables are verified at compile time, but the actual method calls are resolved at run time.
For example:
class A {
    int i = 5;
    public void doSomething() {
        System.out.println("in A");
    }
}

class B extends A {
    int i = 10;
    public void doSomething() {
        System.out.println("in B");
    }

    public static void main(String[] args) {
        A a = new B();
        a.doSomething();
    }
}
Now, when you call a.doSomething(), during compilation the compiler just checks whether doSomething() is defined for class A (the type of the left-hand-side reference). It does not even check whether the method is also defined for class B; even if the method were not present in B, the program would compile fine.
Next, during runtime, the JVM dynamically decides which method to call based on type of Object (B in your case.).
So, "in B" is printed.
Now, coming back to fields: access to fields is resolved at compile time, so if a field doesn't exist at compile time, compilation fails. Fields are accessed based on the reference type, so a.i will print 5 (A's value of i) because the field was resolved at compile time. Thus the difference is: method calls are resolved at runtime and only their signatures are checked at compile time, whereas fields are checked and resolved entirely at compile time.
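A short follow-up sketch using the A and B classes from the example above makes the contrast concrete (the class name FieldVsMethod is mine):
class FieldVsMethod {
    public static void main(String[] args) {
        A a = new B();
        a.doSomething();         // prints "in B": method chosen by runtime type
        System.out.println(a.i); // prints 5: field chosen by compile-time type A
    }
}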

How to restrict that subclass cannot be generic?

Compile time error: The generic class may not subclass java.lang.Throwable
public class TestGenericClass<E> extends Exception {
    /* The line above gives a compile error: the generic class
       TestGenericClass<E> may not subclass java.lang.Throwable */
    public TestGenericClass(String msg) {
        super(msg);
    }
}
The above compile-time error occurs for the reason given in JLS §8.1.2, quoted below and explained in this question:
It is a compile-time error if a generic class is a direct or indirect subclass of Throwable(§11.1.1).
This restriction is needed since the catch mechanism of the Java Virtual Machine works only with non-generic classes.
Question:
How is it enforced that a subclass of java.lang.Throwable cannot be a generic class?
Or, as a more general question: how can you restrict subclasses of any class from being generic?
How is it enforced that a subclass of java.lang.Throwable cannot be a generic class?
Here's how OpenJDK compiler performs the check:
import com.sun.tools.javac.code.Symbol.*;

private void attribClassBody(Env<AttrContext> env, ClassSymbol c) {
    ....
    // Check that a generic class doesn't extend Throwable
    if (!c.type.allparams().isEmpty() && types.isSubtype(c.type, syms.throwableType))
        log.error(tree.extending.pos(), "generic.throwable");
As you can see, the forbidden type is effectively hardcoded, so you can't use the same technique for your own class without customizing the compiler.
Full source code
How is it enforced that a subclass of java.lang.Throwable cannot be a generic class?
This was a decision to write a special case into the compiler itself. The reason why is detailed in this question. Basically, it has to do with a type being reifiable; you can read about this term here. In short, a type is reifiable if its type information is fully available at runtime. Generic types are not reifiable, because their type arguments are removed by type erasure. The types appearing in a catch block need to be reifiable.
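As a hedged illustration of why catch parameters must be reifiable: the JVM matches a thrown object against the catch type with what amounts to an instanceof test, and erased type arguments cannot take part in such a test.
public class CatchDemo {
    public static void main(String[] args) {
        try {
            throw new IllegalStateException("boom");
        } catch (IllegalStateException e) { // effectively: thrown instanceof IllegalStateException
            System.out.println("caught: " + e.getMessage());
        }
        // A hypothetical catch (MyException<String> e) could never be tested
        // this way, because the <String> argument no longer exists at runtime.
    }
}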
Or, the more general question: how can you restrict subclasses of any class from being generic?
Well, there are a few options...
At present, there is no way within the normal bounds of Java to do this; the language has no modifier that prevents genericity from being applied to subclasses. The closest you can get, as explained in the comments, is to extend the compiler and add a rule specifically for your class. This solution sends shivers down my spine; dodge it. It means your code will only behave with your version of Java, and anyone else who wants to use your code will have to install the same customized version.
Obviously, the other option is to extend Throwable, but this really isn't a good idea. It adds a whole bunch of functionality and a lot of new methods to the interface of your class that you will never use. From an OOP point of view, you're sacrificing the integrity of your class for the benefit of this one feature.
If you're willing to delay the error until run time, you can use use reflection in the superclass constructor(s) to see if the subclass declares any type parameters.
public class Example {
    public static class ProhibitsGenericSubclasses {
        public ProhibitsGenericSubclasses() {
            if (getClass().getTypeParameters().length > 0)
                throw new AssertionError("ProhibitsGenericSubclasses prohibits generic subclasses (see docs)");
        }
    }

    public static class NonGenericSubclass extends ProhibitsGenericSubclasses {}
    public static class GenericSubclass<T> extends ProhibitsGenericSubclasses {}

    public static void main(String[] args) {
        System.out.println(new NonGenericSubclass());
        System.out.println(new GenericSubclass<Object>());
    }
}
This code prints
Example$NonGenericSubclass@15db9742
Exception in thread "main" java.lang.AssertionError: ProhibitsGenericSubclasses prohibits generic subclasses (see docs)
at Example$ProhibitsGenericSubclasses.<init>(Example.java:12)
at Example$GenericSubclass.<init>(Example.java:17)
at Example.main(Example.java:21)
If you want to prohibit type parameters of all classes in the hierarchy, not just the most-derived class, you'll need to walk up the hierarchy using Class#getSuperclass().
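A hedged sketch of that hierarchy walk, extending the constructor check from the example above (the class name is mine):
public class ProhibitsGenericHierarchy {
    public ProhibitsGenericHierarchy() {
        // Reject type parameters anywhere between the runtime class and this base class.
        for (Class<?> c = getClass(); c != ProhibitsGenericHierarchy.class; c = c.getSuperclass()) {
            if (c.getTypeParameters().length > 0)
                throw new AssertionError(c.getName() + " declares type parameters (see docs)");
        }
    }
}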

Difference between Reflection and Late Binding in java with real time examples

While studying Java tutorials, Reflection and Late Binding have confused me. Some tutorials say that they are the same and that there isn't any difference between Reflection and Late Binding; others say that there is a difference.
I am confused, so can someone please explain what Reflection and Late Binding are in Java and, if possible, give me some real-world examples of both?
Thanks.
Java uses late binding to support polymorphism, which means the decision about which of several methods should be called is deferred until runtime.
Take the case of N classes implementing an abstract method of an interface (or of an abstract class, for that matter).
public interface IMyInterface {
    public void doSomething();
}

public class MyClassA implements IMyInterface {
    public void doSomething() { ... }
}

public class MyClassB implements IMyInterface {
    public void doSomething() { ... }
}

public class Caller {
    public void doCall(IMyInterface i) {
        // which implementation of doSomething is called?
        // it depends on the runtime type of i: this is late binding
        i.doSomething();
    }
}
Reflection, by contrast, describes code that is able to inspect other code: to discover which methods or attributes are available in a class, to call a method (or load a class) by name, and to do a lot of very interesting things at runtime.
A very nice explanation of reflection is here: What is reflection and why is it useful?
Late binding (also known as dynamic dispatch) does not need reflection -- it still needs to know which member to dynamically bind to at compile-time (i.e. the signature of the member is known at compile-time), even though the binding to overridden members happens at run-time.
When doing reflection, you don't even know which member you're using (not even the name is known at compile-time, let alone the signature) -- everything happens at run-time, so it's a lot slower.
Real world examples:
If you build your project against jdesktop 0.8 but ship with jdesktop 0.9, your code will still use the 0.9 features, because it takes advantage of late binding: the code your code calls is the version loaded by the class loader, irrespective of the version it was compiled against. (This is as opposed to linkers, which embed the compile-time version of the called code into the application.)
For reflection, let's say you are targeting both Java 1.5 and 1.6, but want to use tab components in 1.6 if they are available. You can check for their presence by using reflection on the JTabbedPane class to find the setTabComponentAt method. In this case you're building against Java 1.5, which doesn't have those features at all, so you can't call them directly or the compile will fail. However, if on the end-user's system you find yourself running against 1.6 (late binding comes into play here), you can use reflection to call the methods that didn't exist in 1.5.
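A minimal sketch of that probing pattern, using the real Java 6 method JTabbedPane.setTabComponentAt(int, Component); the class and helper names here are mine:
import java.awt.Component;
import java.lang.reflect.Method;
import javax.swing.JLabel;
import javax.swing.JTabbedPane;

public class TabComponentProbe {
    static void setCloseLabel(JTabbedPane pane, int index) {
        try {
            // Only present on Java 6+; when compiling against 1.5 it must be
            // looked up reflectively or the build would fail.
            Method m = JTabbedPane.class.getMethod("setTabComponentAt", int.class, Component.class);
            m.invoke(pane, index, new JLabel("x"));
        } catch (Exception e) {
            // Running on Java 1.5: the feature is unavailable, fall back gracefully.
        }
    }
}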
They are related; many uses of reflection rely on late binding to be useful, but they are fundamentally different aspects of the language and its implementation.
One important issue addressed by late binding is polymorphism, i.e. the call of the proper overridden method along your class hierarchy is determined at run time, not during compilation. Reflection is the feature for gathering and manipulating information about your objects at run time. E.g., you can get all attributes or method names of an object via its Class object at runtime and call those methods or manipulate its attributes.
In the following code you can dynamically create a new object by means of reflection (see how the constructor is retrieved and accessed through a Class object, instead of simply using something like Object obj = new MyClass("MyInstance")). In a similar way it is possible to access other constructor forms, methods and attributes. For more information about reflection in Java visit: http://java.sun.com/developer/technicalArticles/ALT/Reflection/
... in some method of some class ...
Class c = getClass();
Constructor ctor = c.getConstructor(String.class);
Object obj = ctor.newInstance("MyInstance");
I have to disagree with most of the responses here.
Everyone calls what Java does in terms of zeroing in on a method implementation at runtime "late binding", but in my opinion it's not correct to use that term for what Java does.
Late binding implies absolutely no checks on a method call at compile time, and no compilation errors if the method does not exist.
Java, however, will throw a compile error if the method does not exist somewhere in the type hierarchy of the type qualifying the method call (I'm being somewhat approximate in describing the behavior here). This is not pure, traditional late binding.
What Java does in a normal non-private, non-final, non-static method call would be better termed dynamic dispatch.
However if we use reflection in Java, then Java does perform pure late binding as the compiler simply cannot verify if the called method exists or not.
Here is an example:
class A
{
public void foo()
{
System.out.println("Foo from A");
}
}
class B extends A
{
public void foo()
{
System.out.println("Foo from B");
}
}
public class C
{
public static void main(String [] args)
{
A a=new A();
B b=new B();
A ref=null;
Class ref1 = null;
ref1 = b.getClass();
ref.foo1();//will not compile because Java in this normal method
//call does some compile time checks for method and method
//signature existence. NOT late binding in its pure form.
try {
ref1.getMethod("foo1").invoke(null); //will throw a
//NoSuchMethodException at runtime, but compiles perfectly even
//though foo1 does not exist. This is pure late binding.
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}

Does Java support dynamic method invocation?

class A {
    void F() { System.out.println("a"); }
}

class B extends A {
    void F() { System.out.println("b"); }
}

public class X {
    public static void main(String[] args) {
        A objA = new B();
        objA.F();
    }
}
Here, F() is being invoked dynamically, isn't it?
This article says:
... the Java bytecode doesn't support dynamic method invocation. There are four supported invocation modes: invokestatic, invokespecial, invokeinterface and invokevirtual. These modes allow calling methods with known signatures; we are talking about a strongly typed language, which makes it possible to perform some checks directly at compile time.
On the other side, dynamic languages use dynamic types, so we can call a method that is unknown at compile time, but that's completely impossible with the Java bytecode.
What am I missing?
You are confusing dynamic invocation with dynamic binding.
The first one allows the type checker to accept programs in which you are not sure whether a method will be present on an object at run time, while dynamic binding just chooses the right implementation according to the runtime type of the object, while maintaining static type checking.
What does it mean?
It means that in your example, Java will call the implementation of object B because the runtime type of the objA variable is B; and it will compile because it knows that a B is an A, so the method invocation won't fail at runtime (objA will have an F implementation for sure).
With dynamic invocation, instead, it won't be checked at compile time that the type of the object on which you are calling F contains that method; of course an exception will be raised if, during execution, the method isn't available on the specified object.
Just for trivia: the invokedynamic feature will be added in Java 7 because many scripting languages have been written to work on top of the JVM, and the lack of a dynamic invocation feature forced the developers of these languages to add a middle layer between the script and the real JVM that takes care of dynamic invocation using reflection. Of course this approach causes a lot of overhead (think of Groovy's MetaClass), which is why Sun decided to give them a hand.
In your example the correct method is called because, polymorphically, the instance of B appears like an instance of A. The method can be located by examining the runtime type of the object, that is, B, as opposed to the compile-time type of the object reference, A. The other important part is the signature of the method: these must always match (polymorphically, of course).
This differs from dynamic languages because in those there is essentially no compile-time for the object - and everything must be resolved at runtime.
In fact, what you're missing is that this is part of invokevirtual, which is explained in the article.
You're simply overriding the method, and a virtual method table is used to invoke the correct one.
I wouldn't call your example "dynamic", rather virtual. At compile time the method name and signature are known (and their existence is checked by the compiler). The only thing resolved at runtime is the concrete implementation to be used for that method.
A more proper example of "dynamic" method invocation would involve reflection (see the Method class). That way, methods whose existence is unknown at compile time can be invoked at runtime (this is extensively used by frameworks, not so much by application code).
The article you mention seems a little misleading in that respect. But it's true that the signatures of the methods you explicitly call must be known/checked at compile time, and so, in that sense, Java is not dynamic.
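To make that concrete, here is a minimal sketch of reflective invocation, where the method name is only a string (the class name DynamicCall is mine):
import java.lang.reflect.Method;

public class DynamicCall {
    public static void main(String[] args) throws Exception {
        Object target = "hello";
        // The method name is just a string: nothing about it is checked at compile time.
        Method m = target.getClass().getMethod("length");
        System.out.println(m.invoke(target)); // prints 5
    }
}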
You can make functional interfaces.
class Logger {
    private BiConsumer<Object, Integer> logger = null;
    // ... (akkaLogger field, log-level constants and akka* helper methods elided)

    private Logger(Object logger) {
        this.akkaLogger = (LoggingAdapter) logger;
        this.logger = (message, level) -> {
            switch (level) {
                case INFO:  akkaInfo(message);
                            break;
                case DEBUG: akkaDebug(message);
                            break;
                case ERROR: akkaError(message);
                            break;
                case WARN:  akkaWarn(message);
                            break;
            }
        };
    }

    private Logger() {
        this.logger = (message, level) -> System.out.println(message);
    }

    // ...
}
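For completeness, here is a self-contained hedged sketch of the same dispatch-through-a-functional-interface idea, without the elided Akka-specific parts (all names are illustrative):
import java.util.function.BiConsumer;

public class LoggerSketch {
    static final int INFO = 0, ERROR = 2;

    public static void main(String[] args) {
        // The behavior behind the functional interface is chosen at runtime.
        BiConsumer<Object, Integer> logger =
                (message, level) -> System.out.println(level + ": " + message);
        logger.accept("started", INFO);  // prints "0: started"
        logger.accept("failed", ERROR);  // prints "2: failed"
    }
}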
