Why do Java's dynamic proxies need reflection?

Java's dynamic proxy documentation describes these classes as follows:
A dynamic proxy class is a class that implements a list of interfaces specified at runtime such that a method invocation through one of the interfaces on an instance of the class will be encoded and dispatched to another object through a uniform interface. Thus, a dynamic proxy class can be used to create a type-safe proxy object for a list of interfaces without requiring pre-generation of the proxy class.
Now, while everything in this sentence is accurate, all of this information is in fact present at compile time. In Java, when you create a proxy, you specify the exact interface you want to proxy in your code.
The first thing that really confuses me is why the bytecode here needs to be generated at runtime. All of this information is present at compile time... (and you don't even have to deal with type erasure)
P.S.: I am not sure if this is still the case; I am basing this on a rather dated accepted answer here: How does Java's Dynamic Proxy actually work?
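For reference, here is a minimal sketch of how such a proxy is created (the Greeter interface and its lambda implementation are invented for this example; Proxy and InvocationHandler are the real java.lang.reflect API). Note how the interface is named in source, at compile time, yet the proxy class itself is generated by the JVM at runtime:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

public class ProxyDemo {
    public static void main(String[] args) {
        Greeter target = name -> "Hello, " + name;

        // Every call on the proxy is funneled through this handler,
        // which dispatches to the target object reflectively.
        InvocationHandler handler = (proxy, method, methodArgs) ->
                method.invoke(target, methodArgs);

        Greeter proxied = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);

        System.out.println(proxied.greet("world"));  // prints "Hello, world"
    }
}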
The next step in getting this to work is type checking/type inference. I am not sure how Java actually handles this, but you need to be able to use Proxy<A> interchangeably with A. In order to pull this off you need the following:
∀ method m ∈ A, m ∈ Proxy<A>
This means that:
you need your proxy to have the same structure, and
you need some kind of delegation (i.e. dynamic dispatch).
Once you start writing out the inference rules, this gives us something very familiar: structural typing.
Now, Java doesn't have structural typing, but one could easily add the few inference rules needed (as TypeScript does), especially given that Java has boxed primitives (Scala, for example, doesn't, which makes it very difficult to introduce structural inference).
The Actual Question
Reflection is hard, unsafe, and not particularly performant. My question is: why and how do Java's proxies use reflection? It seems like most of this functionality could be implemented using other features already present in the language.

Related

How to detect java local variables by an interface type and then find methods called on them?

I have some (maybe) strange requirements: I want to detect definitions of local (method) variables of a given interface type. When finding such a variable, I would like to detect which methods (set*/get*) are called on it.
I tried Javassist without luck, and now I am taking a deeper look at ASM, but I am not sure whether what I want is possible.
The reason for this is that I would like to generate a dependency graph with GraphViz of beans that depend on the same data structure.
If this is possible, could somebody please give me a hint on how it could be done? Maybe there are other frameworks that could do it?
Update 01.09.2015
To make things clearer:
The interface is self-written. The goal is first to create the dependency graph automatically; later on, a graphical editor based on those dependencies should be implemented.
I wonder how FindBugs/PMD work, because they also use the bytecode and detect, for example, null-pointer calls (a variable is not initialized and a method is called on it). So I thought I could implement my idea the same way. The whole codebase is Spring-based; maybe that opens up another solution? Last but not least, I could work on a source JAR?
While thinking about the problem: would it be possible via ASM/Javassist to detect all available methods of the interface and find calls to them in the other classes?
I'm afraid what you want to do is not possible. In compiled Java code, there are no local variables in the form you have in the source code. Methods use stack frames which have memory reserved for local variables, addressed by a numerical index. The type is implied by the instructions that write to it and may change throughout the method's code, as the memory may get reused for different variables having disjoint scopes. The names, on the other hand, are completely irrelevant.
When bytecode gets verified, the effect of all instructions to the stack frame will get modeled to infer the type of each stack frame slot at each point of the execution so that the validity of all operations can be checked. Starting with class file version 50, there will be StackMapTable attributes aiding the process by containing explicit type information, but only for code with branches. For sequential code, the type of variables still has to be derived by inference.
These inferred types are not necessarily the declared types. E.g., on the byte code level, there will be no difference between
CharSequence cs="foo";
cs.charAt(0);
and
String s="foo";
((CharSequence)s).charAt(0);
In both cases, a String constant will be stored into a local variable, followed by the invocation of an interface method. The inferred type will be String in both cases, and the invocation of a CharSequence method is considered valid, as String implements CharSequence.
This disproves the idea of detecting that there is a local variable declared using the CharSequence (interface) type, as the actual declared type is irrelevant and not stored in the regular bytecode.
There are, however, debugging attributes containing information about the local variables (see the LocalVariableTable attribute), and libraries like ASM will tell you about the declarations if such information is present. But you can't rely on this optional information; for example, Oracle's JRE libraries are by default shipped without it.
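As a sketch of how one could read that optional debug information with ASM (using the ASM 9 API; this only prints local variable names for classes compiled with -g, i.e. with a LocalVariableTable):

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.Label;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class LocalVariablePrinter {
    public static void main(String[] args) throws Exception {
        // args[0] is the binary name of a class on the classpath
        ClassReader reader = new ClassReader(args[0]);
        reader.accept(new ClassVisitor(Opcodes.ASM9) {
            @Override
            public MethodVisitor visitMethod(int access, String name,
                    String desc, String signature, String[] exceptions) {
                System.out.println("method " + name + desc);
                return new MethodVisitor(Opcodes.ASM9) {
                    @Override
                    public void visitLocalVariable(String name, String desc,
                            String signature, Label start, Label end, int index) {
                        // only invoked when a LocalVariableTable is present
                        System.out.println("  local " + index + ": " + name + " " + desc);
                    }
                };
            }
        }, 0);
    }
}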
I'm not sure I understood exactly what you want, but you could use an interface for this.
Every object that has a getter could implement an interface called, say, Getable.
Then you could operate only on objects that have the methods you pulled into the Getable interface; see the sketch after the link below.
https://docs.oracle.com/javase/tutorial/java/IandI/createinterface.html
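For instance (Getable, Bean, and getValue are names invented for this sketch):

interface Getable {
    Object getValue();
}

class Bean implements Getable {
    private final Object value;
    Bean(Object value) { this.value = value; }
    @Override public Object getValue() { return value; }
}

class Demo {
    // Operate only on objects that implement Getable
    static void printIfGetable(Object o) {
        if (o instanceof Getable) {
            System.out.println(((Getable) o).getValue());
        }
    }
}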

rJava generics type

I have been playing with the rJava package, but since it seems that rJava is not aware of Java generic types, I have difficulties creating a Java object with generic type parameters. If I have a Java class like:
public class A<T> {
    private B<T> b;

    public A(B<T> b) {
        this.b = b;
    }
}
I would like to create an A object from an R session using .jnew() by passing a B object already created (with an instantiated type parameter), but rJava always gives an error:
java.lang.NoSuchMethodError: <init>
Is there any workaround for this?
There are a lot of moving parts in this question. Digging through the documentation for the various parts, I think that you need to do this on the line that broke:
gesinstance = .jnew("edu/cmu/tetrad/search/Ges", .jcast(dataset, "edu/cmu/tetrad/data/DataSet"))
The key difference is the call to .jcast on the second argument. (I don't have R installed, so I could not test this; if it doesn't work, I will update my answer based on any feedback you can provide on new error messages.)
So then the question is "why that?" The answer seems to be:
On the Java side, DataReader.parseTabularData returns an object with type DataSet as you noted, but DataSet is an interface not a class. That necessarily means that the actual object returned is of some class that implements the DataSet interface.
For reasons that aren't immediately clear to me, the rJava package does not really handle polymorphism well. It requires that you call methods with an "exact" signature match to the objects that you are passing. In this case, you will need to "up-cast" from whatever specific class you got to the interface DataSet. See the documentation for .jnew (https://www.rforge.net/doc/packages/rJava/html/jnew.html), especially for the arguments that they denote by "...". This refers you to the corresponding part of the documentation for .jcall (https://www.rforge.net/doc/packages/rJava/html/jcall.html), which then explains the requirement to call .jcast (https://www.rforge.net/doc/packages/rJava/html/jcast.html) with some examples.
The error that you got, java.lang.NoSuchMethodError: <init>, was telling you that the JVM could not find the constructor that you called. This was mysterious-looking in the example that you posted in the comments. (It might be good to edit your question, by the way, and include that information up there for posterity.) The code certainly looks right, and, knowing Java, I intuitively expected the interface to respect Java's polymorphism. Given that (for whatever reason) the interface to R does "exact" type matching without considering inheritance, it's clear that it will not find the constructor, due to reason #1 above.
Finally, I didn't actually encounter any Java classes using generics in my limited exploration of Tetrad. As it turns out, that was a complete red herring, though. Should it be an issue in the future, you'll probably want to check out "Type Erasure" (https://docs.oracle.com/javase/tutorial/java/generics/erasure.html). If you were interfacing between Java and C, C++, Fortran, or any other language that Java considers "native", then you'd deal with generics in the native code by working with the type-erased forms. The rJava interface may be different, though, since this seems to fall into the same general type of structure that tripped you up on your current problem. (Maybe worthy of its own bounty later!)

java generics vs dynamically loading class using Class.forName()

Assume I am making a class called Government. Government has members like officers, ministers, departments, etc. For each of those members I create an interface, and any specific government defines them as it likes.
The main method in the Government class is called Serve(Request req). Assume that the query rate is very large (1000+ queries per second).
To create the government, I can:
1) Use Java generics to write Government<Minister, Officer, ...>, where any specific government implementation creates its own Government object in Java code, plus a main() to have a deployable JAR.
2) Have a configuration file which specifies the class names of officers, ministers, etc., and whenever Serve() is called, use Class.forName() and Class.newInstance() to create an object of the class. Any new government just needs to write classes for its members and the configuration file. There is one single main() for all governments.
From a purely performance point of view - which is better and why? My main concerns are:
a) Does forName() execute a costly search every time? Assume a very large universe of classes.
b) Do we miss out on the compiler optimizations that might be performed in case 1 but not in case 2 on the dynamic classes?
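For concreteness, option 2 boils down to something like the following sketch (the property key and class name are invented). Note that the Class lookup can be done once and reused across calls, which matters for the performance question:

import java.util.Properties;

public class GovernmentFactory {
    private final Class<?> ministerClass;  // resolved once, then reused

    public GovernmentFactory(Properties config) throws Exception {
        // e.g. the config file contains: minister.class=com.example.DefaultMinister
        ministerClass = Class.forName(config.getProperty("minister.class"));
    }

    public Object newMinister() throws Exception {
        // getDeclaredConstructor().newInstance() is the non-deprecated
        // replacement for Class.newInstance()
        return ministerClass.getDeclaredConstructor().newInstance();
    }
}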
As long as you reuse your government object, there is no difference at runtime. The difference is only at object creation time.
1 & 2 differ in concept: 1 is hardwired, while 2 is dynamic (you may even use DI containers like Spring, Guice, or Pico; basically, you proposed to write your own).
As for forName() performance: it depends on the classloader (and also on the container). Most of them cache name-resolution results and look them up in a map, but I cannot speak for all of them.
As for optimisations: there are compiler optimisations, and also aggressive runtime optimisations from JIT compilers; the latter matter more.
I don't get it. Those two are not alternatives; they are pretty much orthogonal: Generics is a compile-time construct. It is erased and does not translate to anything at runtime. On the other hand, loading classes by calling forName is a runtime thing. One does not affect the other.
If you are using generics, then that means you don't need the class object at runtime, since in generics you don't have access to the class object, unless you pass it in explicitly. If you don't need the class object at runtime, that means you don't need to load it with forName, so that is inconsistent with forName. If you do pass the class object in explicitly, then that means you already have the class object and don't need to load it, also inconsistent with forName.
Your description kind of reads like this to me: "I want to use dependency injection, should I roll my own?"
Look into Spring (or Guice by Google). I'll assume Spring.
Create interfaces for stuff and configure which implementation to use for each in Spring.
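A minimal Java-config sketch of that idea (the Minister interface and DefaultMinister class are invented; @Configuration and @Bean are the real Spring annotations):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

interface Minister {
    void serve();
}

class DefaultMinister implements Minister {
    @Override public void serve() { /* handle the request */ }
}

@Configuration
class GovernmentConfig {
    // Swap the implementation here without touching any calling code
    @Bean
    Minister minister() {
        return new DefaultMinister();
    }
}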

What's the conception behind: Type - Element - Mirror

I'm working with Java 6's annotation processing, i.e. what can be found within javax.annotation.processing (not Java 5's APT).
I wonder what the conceptual difference between the various Element, Type, and Mirror classes is. As I don't really understand this, it's hard to program an annotation processor efficiently. There are various methods that 'convert' between these notions, but I'm not really sure what I'm doing when using them.
So, for example, let me have an instance of AnnotationMirror.
When I call getAnnotationType() I get an instance of DeclaredType (which implements TypeMirror for whatever reason).
Then I can call asElement() on this one and obtain an instance of Element.
What has happened?
There is indeed an overlap between these concepts.
Element models the static structure of the program, i.e. packages, classes, methods, and variables. Just think of everything you see in the package explorer of Eclipse.
Type models the statically defined type constraints of the program, i.e. types, generic type parameters, and generic type wildcards. Just think of everything that is part of Java's type declarations.
Mirror is an alternative concept to reflection by Gilad Bracha and Dave Ungar, initially developed for Self, a prototype-based Smalltalk dialect. The basic idea is to separate queries about the structure of code (and also runtime manipulation of the structure, alas not available in Java) from the domain objects. So to query an object about its methods, instead of calling #getClass you would ask the system for a mirror through which you can see the reflection of the object. Thanks to that separation, you can also mirror classes that are not loaded (as is the case during annotation processing) or even classes in a remote image. For example, V8 (Google's JavaScript engine) uses mirrors for debugging JavaScript code that runs in another object space.
This paper may help understanding the design of Java 6 annotation processing:
Gilad Bracha and David Ungar. Mirrors: Design Principles for Meta-level Facilities of Object-Oriented Programming Languages. In Proc. of the ACM Conf. on Object-Oriented Programming, Systems, Languages and Applications, October 2004.
The object of type javax.lang.model.element.AnnotationMirror represents an annotation in your code.
The declared type represents the annotation class.
Its element is the generic class (see http://java.sun.com/javase/6/docs/api/javax/lang/model/element/TypeElement.html for more information on that matter). The element might be the generic version of a class, like List, whereas the declared type is the parameterized version, for instance List<String>. However, I'm not sure it is possible for annotation classes to use generics, so the distinction might be irrelevant in that context.
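In code, the round trip described in the question (AnnotationMirror, then getAnnotationType(), then asElement()) looks like this minimal sketch using the real javax.lang.model API:

import javax.lang.model.element.AnnotationMirror;
import javax.lang.model.element.Element;
import javax.lang.model.type.DeclaredType;

class MirrorDemo {
    static void inspect(Element annotated) {
        for (AnnotationMirror mirror : annotated.getAnnotationMirrors()) {
            // the type of the annotation, e.g. org.junit.Test
            DeclaredType type = mirror.getAnnotationType();
            // the declaration (a TypeElement) of that annotation class
            Element decl = type.asElement();
            System.out.println("use: " + mirror + ", declared by: " + decl);
        }
    }
}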
For instance, let's say you have the following JUnit 4 method:
@Test(expected = MyException.class)
public void myTest() {
    // do some tests on some class...
}
The AnnotationMirror represents @Test(expected = MyException.class). The declared type is the org.junit.Test class. The element is more or less the same, as there are no generics involved.

Can we take advantage of the type system to make programs more secure?

This question is inspired from Joel's "Making Wrong Code Look Wrong"
http://www.joelonsoftware.com/articles/Wrong.html
Sometimes you can use types to enforce semantics on objects beyond their interfaces. For example, the Java interface Serializable does not actually define methods, but the fact that an object implements Serializable says something about how it should be used.
Can we have UnsafeString and SafeString interfaces/subclasses in, say, Java, that are used in much the same way as Joel's Hungarian notation and Java's Serializable, so that wrong code doesn't just look bad, it doesn't compile?
Is this feasible in Java/C/C++ or are the type systems too weak or too dynamic?
Also, beyond input sanitization, what other security functions can be implemented in this manner?
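To make the proposal concrete, here is one sketch of what such a type could look like (SafeString and its sanitize method are invented names; the point is that the constructor is private, so the type system guarantees every SafeString went through sanitization):

public final class SafeString {
    private final String value;

    private SafeString(String value) { this.value = value; }

    // The only way to obtain a SafeString is through sanitization
    public static SafeString sanitize(String raw) {
        // placeholder sanitization: strip angle brackets
        return new SafeString(raw.replaceAll("[<>]", ""));
    }

    @Override public String toString() { return value; }
}

class Renderer {
    // Accepts only SafeString; passing a raw String will not compile
    static void render(SafeString s) {
        System.out.println(s);
    }
}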
The type system already enforces a huge number of such safety features. That is essentially what it's for.
For a very simple example, it prevents you from treating a float as an int. That's one aspect of safety: it guarantees that the types you're working with are going to behave as expected. It guarantees that only string methods are called on a string. Assembly doesn't have that safeguard, for example.
It's also the job of the type system to ensure that you don't call private functions on a class. That's another safety feature.
Java's type system is too anemic to enforce a lot of interesting constraints effectively, but in many other languages (including C++), the type system can be used to enforce far more wide-ranging rules.
In C++, template metaprogramming gives you a lot of tools for prohibiting "bad" code. For example:
class myclass : boost::noncopyable {
...
};
enforces at compile time that the class cannot be copied. The following will produce compile errors:
myclass m;
myclass m2(m); // copy construction isn't allowed
myclass m3;
m3 = m; // assignment also not allowed
Likewise, we can ensure at compile time that a template function only gets called on types which fulfill certain criteria (say, they must be random-access iterators, while bidirectional ones aren't allowed; or they must be POD types; or they must not be any kind of integer type (char, short, int, long), while all other types are legal).
A textbook example of template metaprogramming in C++ implements a library for computing with physical units. It allows you to multiply a value of type "meter" by another value of the same type, and automatically determines that the result must be of type "square meter". Or divide a value of type "mile" by a value of type "hour" and get a unit of type "miles per hour".
Again, this is a safety feature that prevents you from accidentally getting your units mixed up. You'll get a compile error if you compute a value and try to assign it to the wrong type: trying to divide, say, liters by meters^2 and assigning the result to a value of, say, kilograms will result in a compile error.
Most of this requires some manual work to set up, certainly, but the language gives you the tools you need to basically build the type-checks you want. Some of this could be better supported directly in the language, but the more creative checks would have to be implemented manually in any case.
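The same idea can be approximated in Java with distinct wrapper types, though without operator overloading it is clumsier. A minimal sketch with invented names:

final class Meters {
    final double value;
    Meters(double value) { this.value = value; }
    MetersPerSecond per(Seconds s) {
        return new MetersPerSecond(value / s.value);
    }
}

final class Seconds {
    final double value;
    Seconds(double value) { this.value = value; }
}

final class MetersPerSecond {
    final double value;
    MetersPerSecond(double value) { this.value = value; }
}

class UnitsDemo {
    public static void main(String[] args) {
        MetersPerSecond speed = new Meters(100).per(new Seconds(9.58));
        System.out.println(speed.value);
        // Seconds s = new Meters(3);  // mixing units does not compile
    }
}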
Yes, you can do such a thing. I don't know about Java, but in C++ it isn't customary and there is no direct support for this, so you have to do some manual work. It is customary in some other languages, Ada for example, which have the equivalent of a typedef that introduces a new type which can't be implicitly converted into the original one (this new type "inherits" some basic operations from the one it is created from, so it stays useful).
By the way, in general, inheritance isn't a good way to introduce new types, as even if there is no implicit conversion in one direction, there is one in the other.
You can do a certain amount of this out of the box in Ada. For example, you can make integer types that cannot implicitly interoperate with each other, and Ada enumerations are not compatible with any integer type. You can still convert between them, but you have to do it explicitly, which calls attention to what you are doing.
You could do the same with present-day C++, but you'd have to wrap all your integers and enums in classes, which is just way too much work for something that should be simple (or better yet, the default way of doing things).
I understand the next version of C++ is going to fix at least the enumeration issue.
In C++, I suppose you could use typedef to create a synonym for a primitive type. Your synonym could imply something about the content of that variable, replacing the function of Apps Hungarian notation. (Note, though, that a typedef is only an alias; the compiler will not stop you from mixing synonyms.)
IntelliSense will report the synonym you used during declaration, so if you don't like using actual Hungarian notation, it does save you from scrolling about (or using Go To Definition).
I guess you are thinking of something along the lines of Perl's "tainting" analysis.
In Java, it should be possible to use custom annotations and an annotation processor to implement this. Not necessarily easy though.
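A minimal sketch of the annotation side (the annotation name is invented; enforcing it requires writing the processor or using a pluggable type checker, and the Checker Framework's Tainting Checker is an existing implementation of this idea):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks values that have been sanitized; a checker would flag any
// flow of unannotated values into @Untainted positions.
@Target({ElementType.PARAMETER, ElementType.FIELD, ElementType.METHOD})
@Retention(RetentionPolicy.CLASS)
@interface Untainted {}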
You can't have an UnsafeString subclass of String in Java, since java.lang.String is final.
In general, you cannot provide any kind of security on the source level - if you want to protect against evil code, you must do that on the binary level (e.g. Java bytecode). That's why private/protected can't be used as a security mechanism in C++: it is possible to bypass that with pointer manipulations.
