Where to patch back the information gathered during program analysis - java

I'm new to compiler design and have a few years of experience with Java.
Using this and the paper, it looks like class hierarchy analysis and rapid type analysis give us the information needed to do devirtualization. But where do we patch that information back: into the source code, or into the bytecode? And how do we check the results?
I'm trying to understand how things really happen, but I'm stuck here.
For example, here is a program taken from the paper mentioned above:
public class MyProgram {
    public static void main(String[] args) {
        EUCitizen citizen = getCitizen();
        citizen.hasRightToVote(); // Call site 1
        Estonian estonian = getEstonian();
        estonian.hasRightToVote(); // Call site 2
    }

    private static EUCitizen getCitizen() {
        return new Estonian();
    }

    private static Estonian getEstonian() {
        return new Estonian();
    }
}
Using the class hierarchy method we can conclude that, since none of the subclasses override hasRightToVote(), the dynamic method invocation can be replaced with a static procedure call to Estonian#hasRightToVote(). But where do we apply that replacement, and how? How do we tell (feed) the JVM the information we have gathered during analysis?
We can't change the source code and put it there, can we? Could anyone provide an example, so I can start trying new ways to do analysis and still be able to patch that information back?
Thanks.

Class hierarchy analysis is an optimization done by the virtual machine itself at runtime; you do not have to tell the VM anything. It simply does the analysis by itself, based on the information available in the class files.

What generally happens is that analysis results are stored in some kind of association with a program representation, or are used immediately to effect the optimization, so "nothing" needs to be stored.
You are right: there is generally no good way to annotate the source code with an analysis result (Java annotations are one way). But the compiler has already read the source code and isn't going to read it again.
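For instance, one could imagine recording an analysis result with a custom annotation (a sketch; @Monomorphic and resolvedTarget are names invented here, and nothing in javac or the JVM would actually consume them):

import java.lang.annotation.*;

// Hypothetical annotation for recording a devirtualization result.
// Java annotations cannot mark individual call sites, so at best this
// could be attached to the method containing the call site.
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.METHOD)
@interface Monomorphic {
    String resolvedTarget(); // e.g. "Estonian#hasRightToVote"
}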
In general, the program is parsed and a variety of compiler-like structures are built (ASTs, symbol tables, control flow graphs, data flow arcs, ...) by the compiler pretty much before any serious analysis/optimization begins. A low-level model of the program (data flow over the operators) is normally what gets analyzed, and the optimization analyzer will either decorate this structure with its opinions, or often just directly modify the structure to achieve the effect of the optimization.
With Java, there are two opportunities to do this: in javac, and in the JITter. My understanding (probably wrong, and probably varying across javac implementations) is that not much optimization occurs in javac at all; it just generates naive JVM bytecode, and all the real work is done in the JITter. The JITter doesn't have source code, but it can do all the same kinds of analysis (control flow, data flow, ...) on the bytecode that one can do on classic compiler structures, and thus achieve the same effect.

I had some doubts about the same thing, and Rohan Padhey cleared them up.
In Java, I don't think there is a way to specify monomorphism of virtual method calls in bytecode. Devirtualization analysis usually happens in the JIT compiler, which compiles bytecode to native code, and it does so using dynamic analysis.
Why Patching Is a Problem:
In Java bytecode, the only method call instructions are invokestatic, invokedynamic, invokevirtual, invokeinterface and invokespecial (the last is used for constructors, private methods and super calls). The only general-purpose call that does not involve a virtual method table lookup is invokestatic, since static methods cannot be overridden and used polymorphically on objects.
Hence, while there is no way to specify the target method at compile time, you can replace virtual calls with static calls. How? Consider an object "x" with a method "foo", and a call site:
x.foo(arg1, arg2, ...)
If you know for sure that "x" is of the class "A", then you can transform this to:
A.static_foo(x, arg1, arg2, ...)
where "static_foo" is a newly created static method in class A whose body contains exactly everything that the body of "foo()" in "A" would have done, except that references to "this" inside the body should now be replaced by the first parameter, whatever you may call it.
That is exactly what the Whole-Jimple-Optimization-Pack (WJOP) in Soot does.
As regards static analysis using Soot, there is an optimization pack that does devirtualization using a work-around: https://github.com/Sable/soot/wiki/Whole-program-Devirtualization-Optimizations
But that's just a hack.
Why the JIT Does It Better:
The JIT does this better because static analysis has to be sound: to do the transformation you need to be sure that the target of the virtual call will be one class 100% of the time. With JIT compilation you can find more opportunities for optimization, because even if the target is a single class 90% of the time but not the other 10%, you can just-in-time compile the code to take the most frequent route and fall back to the bytecode in the 10% of cases where the prediction turns out wrong, since the mistake can be checked dynamically. While the fall-back is expensive, correct predictions in the common case lead to an overall benefit. With a static transformation you have to decide up front whether or not to optimize, and that decision had better be sound.
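Conceptually, the speculative JIT version of the same idea is a guarded fast path. The sketch below expresses it as ordinary Java for illustration, reusing the A/static_foo names from above; a real JIT emits the guard in machine code and deoptimizes on failure rather than writing an explicit if:

// Speculative devirtualization: fast path when the 90% prediction holds,
// ordinary virtual dispatch as the fallback for the other 10%.
static int callFoo(A x, int arg) {
    if (x.getClass() == A.class) {
        return A.static_foo(x, arg); // predicted target, static call
    }
    return x.foo(arg);               // fallback: normal virtual dispatch
}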

Related

Swig behavior for methods that accept rvalue reference as a parameter

Swig documentation specifies that it intentionally ignores move constructors. However, it does not specify what it does with methods that accept an rvalue reference, for example:
class AAA {
public:
    AAA() {};                                  // generates java code
    AAA(const AAA& a) {};                      // generates java code
    AAA(AAA&& a) {};                           // ignored by swig, no java code generated
    virtual void move(const std::string& str); // generates java code
    virtual void move(std::string&& str);      // also generates java code, but accepts SWIGTYPE_p_std__string as a parameter instead of String
};
Would these methods behave differently? Is there a go-to way of working with rvalue parameters in swig? Or should I just not use them?
Generally, for well-written C++ code, you'd expect the behavior of two such overloaded functions or constructors to be equivalent, with the caveat that the rvalue reference variant will most likely be destructive to its input.
So when making an interface in another language, which probably doesn't have any neat 1:1 mapping to the concept that rvalues are trying to express, the vast majority of the time the right answer is to simply ignore these overloads. Internally, in some cases, the SWIG-generated wrappers might end up using these overloads, or could be written to do so with some effort on your part, but it's just an optimization. And if you're jumping across a JNI boundary, that's probably unlikely to be your biggest performance bottleneck. (Measure it to be sure before doing anything.)
In some languages (e.g. Python, Lua) you might be tempted to use the fact that your objects are reference counted to your advantage and (transparently to these languages) write a typemap that picks which overload to use based on whether we're the only ones still holding a reference to an object. I'd argue that even this is wrong (and also premature optimization), because:
There's a race condition around weak references if the language supports such a construct. It's hard to spot and avoid this.
Even though you have only one reference to your (e.g.) Python object there could still be multiple references to the same C++ object, for instance:
struct a {};

a& blah() {
    static a a;
    return a;
}
With
import test
a=test.blah()
b=test.blah()
c=test.blah()
a, b, and c each hold only one reference, yet moving any of them would clearly be wrong. And it's almost impossible to prove when it would be safe.
So what I'm saying is: ignore them unless you have no choice. Sometimes you might come across cases where the only option is to call a function that takes an rvalue reference. You can wrap those functions, but it needs to be done on a case-by-case basis, as you need to understand the semantics of the function/constructor being wrapped.

Memory/Performance differences of declaring variable for return result of method call versus inline method call

Are there any performance or memory differences between the two snippets below? I tried to profile them using visualvm (is that even the right tool for the job?) but didn't notice a difference, probably due to the code not really doing anything.
Does the compiler optimize both snippets down to the same bytecode? Is one preferable over the other for style reasons?
boolean valid = loadConfig();
if (valid) {
    // OK
} else {
    // Problem
}
versus
if (loadConfig()) {
    // OK
} else {
    // Problem
}
The real answer here: it doesn't even matter much what javap tells you about how the corresponding bytecode looks!
If that piece of code is executed like "once"; then the difference between those two options would be in the range of nanoseconds (if at all).
If that piece of code is executed like "zillions of times" (often enough to "matter"); then the JIT will kick in. And the JIT will optimize that bytecode into machine code; very much dependent on a lot of information gathered by the JIT at runtime.
Long story short: you are spending time on a detail so subtle that it doesn't matter in practical reality.
What matters in practical reality: the quality of your source code. In that sense: pick that option that "reads" the best; given your context.
Given the comment: I think in the end this is (almost) a pure style question. With the first way, it might be easier to trace information (assuming the variable isn't a boolean but something more complex). In that sense there is no inherently better version. Of course, option 2 is one line shorter and uses one variable less; and typically, when one option is as readable as the other and one of the two is shorter, I would prefer the shorter version.
If you are going to use the variable only once, then the compiler/optimizer will eliminate the explicit declaration anyway.
Another thing is the code quality. There is a very similar rule in sonarqube that describes this case too:
Local Variables should not be declared and then immediately returned or thrown
Declaring a variable only to immediately return or throw it is a bad practice.
Some developers argue that the practice improves code readability, because it enables them to explicitly name what is being returned. However, this variable is an internal implementation detail that is not exposed to the callers of the method. The method name should be sufficient for callers to know exactly what will be returned.
https://jira.sonarsource.com/browse/RSPEC-1488
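As a minimal sketch of the pattern the rule describes (loadConfig here stands in for any method call):

class ConfigCheck {
    boolean loadConfig() { return true; } // stand-in for the real work

    // Noncompliant per RSPEC-1488: declared, then immediately returned.
    boolean isValidNoncompliant() {
        boolean valid = loadConfig();
        return valid;
    }

    // Compliant: return the result of the call directly.
    boolean isValidCompliant() {
        return loadConfig();
    }
}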

Will java compiler optimize static functions which are conditionalized based on static variable?

For an Android application, I am using a logging function which looks like this:
public static void logData(String str) {
    if (BuildConfig.DEBUG) Log.d("MYTAG", str);
}
Here, BuildConfig.DEBUG is a static variable which is set to true when compiling the code in debug mode and to false when compiling for release.
I know if I use if(BuildConfig.DEBUG) Log.d("MYTAG", msg); anywhere in my code directly, the compiler would optimize and strip the call completely in release mode.
I would like to know if a function like logData, which depends entirely on a single static variable, would also get optimized and its calls removed completely by the compiler. Or would the compiler only make logData an empty function and keep all the calls?
No, the compiler won't remove the calls to your logging method, though the if block inside it would get optimized away.
This is called the wrapper approach to logging, and its biggest drawback is that any arguments passed are still evaluated and allocated; on top of that, any String processing that you do, like
LogHelper.logData("varName is " + varName);
would incur a slight performance hit, as a StringBuilder is still created even though the arguments are never actually used for logging.
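One way to avoid that cost, building on the question's own observation that a direct if (BuildConfig.DEBUG) check gets stripped in release mode, is to guard the expensive argument construction at the call site (varName is just the placeholder from the snippet above):

// The whole block, including the string concatenation, disappears in
// release builds because BuildConfig.DEBUG is a compile-time constant.
if (BuildConfig.DEBUG) {
    LogHelper.logData("varName is " + varName);
}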
Optimizations are mostly dependent on the target JVM implementation, because it's not the javac compiler that optimizes most of your code but the JIT compiler that translates the bytecode to machine code.
That's because the JIT has access to runtime stats, knows which classes are loaded or not, and even knows the actual target platform, which javac doesn't. But in the logging case, since BuildConfig.DEBUG is a static final constant, javac can safely optimize out the if block.
For runtime optimizations, you can take a look at the whitepaper The Java HotSpot Performance Engine Architecture from Oracle, especially Chapter 3, The Java HotSpot Compilers. To check compile-time optimizations, you can use the javap disassembler or any Java decompiler to look at the generated bytecode.
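For example, assuming the wrapper lives in a class named LogHelper as in the snippet above, you could inspect its compiled bytecode with:
javap -c LogHelper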
I think if BuildConfig.DEBUG is declared static final, it is optimized to the declared value; a plain static field is not optimized.

Is this a bug? (recursive constructors in Java)

I've been playing with recursive constructors in Java. The following class is accepted by the compiler. It crashes with a StackOverflowError at runtime using Java 1.7.0_25 and Eclipse Juno (Version: Juno Service Release 2, Build id: 20130225-0426).
class MyList<X> {
    public X hd;
    public MyList<X> tl;

    public MyList() {
        this.hd = null;
        this.tl = new MyList<X>();
    }
}
The error message makes sense, but I'm wondering if the compiler should catch this. A counterexample might be a list of integers with a constructor that takes an int as an argument and sets this.tl to null if the argument is less than zero (sketched below). That seems reasonable to allow in the same way that recursive methods are allowed, but on the other hand I think constructors ought to terminate. Should a constructor be allowed to call itself?
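That counterexample might look like this (a sketch; IntList is a name made up to match the description):

class IntList {
    public int hd;
    public IntList tl;

    // Recursive constructor that terminates: the recursion bottoms out
    // as soon as the argument drops below zero.
    public IntList(int n) {
        this.hd = n;
        this.tl = (n < 0) ? null : new IntList(n - 1);
    }
}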
So I'm asking a higher authority before submitting a Java bug report.
EDIT: I'm advocating for a simple check, like prohibiting a constructor from calling itself or whatever the Java developers did to address https://bugs.openjdk.java.net/browse/JDK-1229458. A wilder solution would be to check that the arguments to recursive constructor calls are decreasing with respect to some well-founded relation, but the point of the question is not "should Java determine whether all constructors terminate?" but rather "should Java use a stronger heuristic when compiling constructors?".
You could even have several constructors with different parameters calling each other via this(...). In general, computer science tells us that termination of code cannot always be guaranteed. Some intelligence, as in this simple case, would be nice to have, but one cannot require a compiler error; it's a bit like unreachable code. In my eyes, however, there is no difference between a constructor and a normal method here.
I don't see any reason why a constructor should need to terminate any more than any other kind of function. And, as with any other kind of function, the compiler cannot infer in the general case whether such a function ever terminates (halting problem).
Now, whether there's generally much need for a recursive constructor is debatable, but it certainly is not a bug, unless the Java specification were to explicitly state that recursive constructor calls must result in an error.
And finally, it's important to differentiate between recursive calls to constructor(s) of the same object, which is a common pattern, for instance, to overcome the lack of default parameters, and calling the constructor of the same class to create another object, as done in your example.
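A minimal sketch of that difference (Point is an invented example class):

class Point {
    final int x, y;

    // Delegation via this(...): a call to another constructor of the
    // SAME object, commonly used to fake default parameters. Direct
    // this(...) cycles are rejected by javac, so this cannot recurse.
    Point(int x) {
        this(x, 0);
    }

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}

// By contrast, MyList's constructor uses "new" to build ANOTHER object
// of the same class, and it is that construction that recurses forever.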
Although this specific situation seems quite obvious, determining whether or not code terminates is an impossible question to answer.
If you try to configure compiler warnings for infinite recursion, you run into the Halting Problem:
"Given a description of an arbitrary computer program, decide whether
the program finishes running or continues to run forever."
Alan Turing proved in 1936 that a general algorithm to solve the
halting problem for all possible program-input pairs cannot exist.

Is there a zero-time, startup (no recompilation) switchable condition flag in Java?

I'm looking for a way to provide the fastest possible on/off flag for an if condition (I mean zero time: resolved at compilation/classloading/JIT time). Of course this condition will change only once per application run, at startup.
I know that compile-time-constant if conditions can be conditionally compiled, and the whole condition can be removed from the code. But what is the fastest (and possibly simplest) alternative that doesn't require recompiling the sources?
Can I move the condition to a separate .jar with a single class and method containing the condition, produce two versions of that .jar, and switch between those versions on the classpath at application startup? Will the JIT remove the call to a method in a separate .jar if it discovers that the method is empty?
Can I do it by providing two classes in the classpath implementing "ClassWithMyCondition", where one of those classes has the real implementation and the second just an empty method, and instantiating one of them by Class.forName and .newInstance()? Will the JIT remove the call to the empty method from my primary method full of nested loops?
What would be the simplest bytecode-manipulation solution to this problem?
A standard way to do this sort of logic is to create an interface for the functionality you want, and then to create two (or more) implementations for that functionality. Only one of the implementations will be loaded in your runtime, and that implementation can make the assumptions it needs to in order to avoid the if condition entirely.
This has the advantage that each implementation is mutually exclusive, and things like the JIT compiler can ignore all the useless code for this particular run.
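A minimal sketch of that pattern, combined with the Class.forName idea from the question (the names Condition, CheckOn and CheckOff are invented here):

// One interface, two mutually exclusive implementations. Only the
// implementation chosen at startup is ever loaded, so the JIT sees a
// monomorphic call site and can inline it, making the "flag" free.
interface Condition {
    void run();
}

class CheckOn implements Condition {
    public void run() { /* real implementation */ }
}

class CheckOff implements Condition {
    public void run() { /* intentionally empty */ }
}

class Startup {
    public static void main(String[] args) throws Exception {
        String impl = System.getProperty("conditionImpl", "CheckOff");
        Condition c = (Condition) Class.forName(impl)
                .getDeclaredConstructor().newInstance();
        c.run(); // with only one implementation loaded, this devirtualizes
    }
}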
The simplest solution works here. Don't overcomplicate things for yourself.
Just put a static final boolean that isn't a compile-time constant (as defined in the JLS) somewhere and reference it wherever you want the "conditional" compilation. The JVM will evaluate it the first time it sees it, and by the time the code gets JIT'ed, the JVM will know that the value won't change, so it can remove the check and, if the value is false, the whole block.
Some sources: Oracle has a wiki page on performance techniques which says to use constants when possible (note that in this context, the compiler is the JVM/JIT, and therefore a final field counts as a constant even if it isn't a compile-time constant by JLS standards). That page links to an index of performance tactics the JIT takes, which mentions techniques such as constant folding and flow-sensitive rewrites, including dead code removal.
You can pass custom values in the command line, and then check for that value once. So in your code, have something like this:
final static boolean customProp = "true".equalsIgnoreCase(System.getProperty("customProp"));
Depending on your command line parameters, the static final value will change. This will set the value to true:
java -DcustomProp="true" -jar app.jar
While this will set the value to false:
java -jar app.jar
This gives you the benefits of a static final boolean, but allows the value to be altered without recompiling.
[Edit]
As indicated in the comments, this approach does not allow for optimizations at compile time. The value of the static final boolean is set on classload, and is unchanged from there. "Normal" execution of the bytecode will likely need to evaluate every if (customProp). However, JIT happens at runtime, compiling bytecode down to native code. At this point, since the bytecode has the runtime value, more aggressive optimizations like inlining or excluding code are possible. Note that you cannot predict exactly if or when the JIT will kick in, though.
You should load the value from a properties file so that you can avoid having to recompile each time it changes. Simply update the text file, and on the next program run it uses the new value. Here's an example I wrote a long time ago:
https://github.com/SnakeDoc/JUtils/blob/master/src/net/snakedoc/jutils/Config.java
The JIT recompiles the code every time you run it; you are doing this already, whether you know it or not. This means that if you have a field which the JIT believes does not change (it doesn't even have to be final), the value will be inlined and the check and dead code optimised away.
Trying to outsmart the JIT is getting harder over time.
