Can Eclipse's Java Format/Cleanup Operations Change Runtime Behavior?

I am about to clean up and format over 700 Java files at work using Eclipse's format and cleanup operations. My bosses are worried that all of this cleanup/formatting may change runtime behavior.
As far as I am aware, the only cleanup/format preference that can change runtime behavior is "Clean Up/Code Organizing/Members/Sort Members/Sort all members", and Eclipse warns you about this.
So my question is: aside from the preference mentioned above, are all other Eclipse cleanup and format preferences safe? Or has anyone ever come across a situation where performing cleanup/format changed their program's runtime behavior?
Thanks for your time.

Although unlikely, it is possible that changing the order of a field declaration or initializer could change behavior at runtime. Take this example:
public class MyClass {
    private static int I;
    private static int J = I + 1; // runs before the static block below: I is still 0, so J = 1
    static {
        I = 2;
    }
}
If you reorder the declarations like this, J ends up initialized to 3 instead of 1, because the static block now runs before J's initializer:
public class MyClass {
    private static int I;
    static {
        I = 2;
    }
    private static int J = I + 1; // I is already 2 here, so J = 3
}
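To see the difference at runtime, here is a minimal sketch (it assumes J is given package-private visibility, or a getter, so it can be printed):
public class InitOrderDemo {
    public static void main(String[] args) {
        // Prints 1 with the first ordering above, 3 with the reordered version
        System.out.println(MyClass.J);
    }
}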

Advantages/Disadvantages of code location when accessing Resource file

Here are 3 options I have seen when accessing a resource file.
Option 1 is probably the least recommended due to the potential for exceptions, so the question really pertains to Options 2 and 3: which is the preferred or recommended implementation?
Option 1 - done in the field declaration area. Very generic. Doesn't capture potential exceptions.
class MyClass
{
    static ResourceBundle bundle = ResourceBundle.getBundle("MyFile");
    float value1 = Float.parseFloat(bundle.getString("myValue1"));
    float value2 = Float.parseFloat(bundle.getString("myValue2"));
}
Option 2 - access the resources within the constructor. Since the values won't be dynamic, it seems a waste to access them every time the class is instantiated, as this class is heavily used.
class MyClass
{
    static ResourceBundle bundle = ResourceBundle.getBundle("MyFile");
    float value1;
    float value2;

    public MyClass()
    {
        try
        {
            value1 = Float.parseFloat(bundle.getString("myValue1"));
            value2 = Float.parseFloat(bundle.getString("myValue2"));
        }
        catch (Exception e)
        {
            // Do something
        }
    }
}
Option 3 - an instance initializer block within the attributes section of the class. I like this as it's only accessed once if the class is already in memory, but since all the attributes tend to be at the top of the class, it does make the code appear cluttered with try/catch and extra code.
class MyClass
{
    static ResourceBundle bundle = ResourceBundle.getBundle("MyFile");
    float value1;
    float value2;

    {
        try
        {
            value1 = Float.parseFloat(bundle.getString("myValue1"));
            value2 = Float.parseFloat(bundle.getString("myValue2"));
        }
        catch (Exception e)
        {
            // Do something
        }
    }
}
It seems like it's more a matter of preference than anything else, as I don't expect the overhead difference to be measurable even if there were 20-30 resources being accessed.
Your Options 2 and 3 produce the same bytecode, so the difference is purely aesthetic: the compiler copies instance initializer blocks into every constructor.
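If you want to verify this yourself, the JDK's javap disassembler will show that the Option 3 initializer block has been copied into the constructor, yielding the same instructions as Option 2:
javap -c MyClass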

Static variables within an interceptor class in Java

I've got a problem which is kind of obvious, though I'm not sure how to solve it.
I've got two classes, one of which is an interceptor.
@Stateless
@Interceptors(AutoId.class)
public class TestClass {
    private static final Logger LOG = Logger.getLogger(RepositoryBean.class.getName());

    public void executeUpdate() {
        int k = 0;
        for (int i = 0; i < 1000000; i++) {
            for (int j = 0; j < 100000; j++) {
                for (int r = 0; r < 1000000; r++) {
                    k = 1;
                }
            }
        }
        getLogger().log(Level.INFO, "Current time some time ago was " + AutoId.MyTime / 1000);
    }

    private Logger getLogger() {
        return Logger.getLogger(getClass().getCanonicalName());
    }
}
and here is the interceptor class:
public class AutoId {
    public static Long MyTime;

    @AroundInvoke
    public Object addLog(InvocationContext context) throws Exception {
        MyTime = System.currentTimeMillis();
        return context.proceed();
    }
}
An obvious problem is that if I run this application (deployed on a GlassFish server) and then, a couple of seconds later, run another copy of it, the second run will overwrite the MyTime variable with the new time and, as a result, both programs will print the same time.
One obvious solution is to make a variable inside executeUpdate which saves the value of MyTime, BUT this is not good for the real project I'm working on.
I was told that I might want to do something with ContextResolver and @Context.
Any thoughts on how I can solve this?
Thanks.
EDIT
I found one solution, though I don't think it is the best:
public class AutoId {
    private static Long[] MyTime = new Long[1000];

    @AroundInvoke
    public Object addLog(InvocationContext context) throws Exception {
        MyTime[(int) Thread.currentThread().getId()] = System.currentTimeMillis();
        return context.proceed();
    }

    public static Long MyTime() {
        return MyTime[(int) Thread.currentThread().getId()];
    }
}
Naming the array the same way as the method keeps the changes in the main class minimal: just add () after AutoId.MyTime -> AutoId.MyTime().
That's still not the best idea, though it no longer causes the variable to be overwritten.
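A more conventional way to express this per-thread workaround (a sketch, not from the original post) is a ThreadLocal, which removes the assumption that thread IDs stay below 1000:
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class AutoId {
    // One slot per thread instead of a fixed-size array indexed by thread id
    private static final ThreadLocal<Long> MY_TIME = new ThreadLocal<>();

    @AroundInvoke
    public Object addLog(InvocationContext context) throws Exception {
        MY_TIME.set(System.currentTimeMillis());
        return context.proceed();
    }

    public static Long MyTime() {
        // Assumes the bean method runs on the same thread as the interceptor,
        // which holds for a standard @AroundInvoke invocation chain
        return MY_TIME.get();
    }
}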
EDIT2
Please don't mind all the code in the executeUpdate() procedure. It is just written in a way that takes some time to finish, so that I can run one more copy of it and print out AutoId.MyTime. The value of this variable is the only thing that matters.
Also, it's quite obvious that if I wasn't using an interceptor and instead created an AutoId variable within the class to call before any other procedure (which is what interceptors are for), this error wouldn't appear, since every copy of the program would easily have its own id. That's not an option, though: interceptors are required here for authorization before executing any procedure. Hope that explains everything I haven't told before :)
You could use @Produces for logger creation and then use @Inject to inject your logger into your class and interceptor. This way you should log different times.
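A sketch of what that wiring might look like (the producer class and its names are illustrative assumptions, not from the original answer):
import java.util.logging.Logger;
import javax.enterprise.inject.Produces;
import javax.enterprise.inject.spi.InjectionPoint;

public class LoggerProducer {
    @Produces
    public Logger createLogger(InjectionPoint ip) {
        // Name the logger after the class receiving the injection
        return Logger.getLogger(ip.getMember().getDeclaringClass().getName());
    }
}
Both TestClass and AutoId could then declare @Inject Logger log; instead of calling Logger.getLogger themselves.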

Why does a String need to be initialized even if the assignment will happen later?

I get the "The local variable string may not have been initialized" error with the following code. The code itself doesn't make sense; it was written just for the sake of exercise.
public class StringExercise
{
    public static void main(String[] args)
    {
        String string; // initializing here fixes the issue
        for (int i = 0; i < 10; ++i)
        {
            if ((i % 4) == 2)
            {
                string = "Number: " + i;
            }
        }
        System.out.println(string); // this is marked as wrong by Eclipse
    }
}
To get it working, it is sufficient to initialize the String as described in the comment above.
My question is: why is this needed? The method println will never be given null, and initialization will happen the first time the condition in the loop evaluates to true. Am I doing something wrong, or is it just Java being overcautious about programmer errors? If the latter, how is it justified from a theoretical point of view?
My question is why is it needed?
Because even though your code is "logically" written so that string will indeed be initialized in the loop, the compiler doesn't know that. All it sees is:
for (loop; elements; here)
    if (someCondition)
        string = something;
In short: the compiler does not check the logic of your code; its definite-assignment analysis is deliberately conservative, and an assignment that happens only conditionally inside a loop does not count as guaranteed.
And as Java requires that all local variables be definitely assigned before use, you get this compiler error.
The compiler can't guarantee that string = "Number: " + i; will be executed within your for and if.
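For contrast, here is a minimal sketch of a shape the compiler does accept: definite assignment is satisfied when every path assigns the variable before use, with no initializer needed:
public class DefiniteAssignment
{
    public static void main(String[] args)
    {
        String string; // no initializer needed here
        if (args.length > 0)
        {
            string = "Number: " + args.length;
        }
        else
        {
            string = "Number: 0";
        }
        System.out.println(string); // compiles: both branches assign string
    }
}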

Why does Java identify unreachable code only in the case of a while loop? [duplicate]

This question already has answers here: if(false) vs. while(false): unreachable code vs. dead code
If I have code like
public static void main(String args[]) {
    int x = 0;
    while (false) { x = 3; } // will not compile
}
the compiler will complain that x=3 is unreachable code, but if I have code like
public static void main(String args[]) {
    int x = 0;
    if (false) { x = 3; }
    for (int i = 0; i < 0; i++) x = 3;
}
then it compiles correctly even though the code inside the if statement and the for loop is unreachable. Why is this inconsistency not detected by the compiler? Is there a use case?
As described in the Java Language Specification, this behavior is reserved for "conditional compilation".
An example, described in the JLS, is that you may have a constant
static final boolean DEBUG = false;
and the code that uses this constant
if (DEBUG) { x=3; }
The idea is to make it possible to flip DEBUG between true and false without making any other changes to the code, which would not be possible if the above code gave a compilation error.
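To see the boundary of this carve-out, a short sketch based on the JLS unreachability rules: the exemption applies to if statements only, so the same constant in a while condition is still rejected:
public class DebugFlag {
    static final boolean DEBUG = false;

    void demo() {
        if (DEBUG) { System.out.println("debug"); }        // compiles: if is exempt
        // while (DEBUG) { System.out.println("debug"); }  // error: unreachable statement
    }
}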
The use case with the if condition is debugging. AFAIK the spec explicitly allows this for if statements (not for loops), to permit code like this:
class A {
    final boolean debug = false;

    void foo() {
        if (debug) {
            System.out.println("bar!");
        }
        ...
    }
}
You can later change the value of debug (recompiling the class) to get the output.
EDIT
As Christian pointed out in his comment, an answer linking to the spec can be found here.
Regarding the for loop, I think it's just that it's not as easy to detect as the use of a false constant inside a while loop.
Regarding the if, that was a deliberate choice to allow it, in order to make it possible to remove debugging code from the bytecode at compilation time:
private static final boolean DEBUG = false; // or true
...
if (DEBUG) {
    ...
}

Is defaulting to an empty lambda better or worse than checking for a potentially null lambda?

I'm working on a small scene graph implementation in Java 8. The basic scene node looks something like this:
public class SceneNode {
    private final List<SceneNode> children = new ArrayList<>();

    protected Runnable preRender;
    protected Runnable postRender;
    protected Runnable render;

    public final void render() {
        preRender.run();
        render.run();
        for (SceneNode child : children) {
            child.render();
        }
        postRender.run();
    }
}
This works fine if the Runnables default to () -> {}. Alternatively, I could allow them to be null, but then the render() method has to look like this:
public final void render() {
    if (null != preRender) { preRender.run(); }
    if (null != render) { render.run(); }
    for (SceneNode child : children) {
        child.render();
    }
    if (null != postRender) { postRender.run(); }
}
So my question is: is the implicit cost of the branching introduced by the null check likely to be more or less than whatever the JVM ends up compiling an empty lambda into? It seems like the null check should cost more, because a potential branch limits optimization, while presumably the Java compiler or JVM is smart enough to compile an empty lambda into a no-op.
Interestingly, it seems that checking for null is a little bit faster than calling an empty lambda or an empty anonymous class when the JVM is run with the -client argument. When running with -server, the performance is the same for all approaches.
I have done a micro benchmark with Caliper to test this.
Here is the test class (the latest Caliper from git is necessary to compile it):
@VmOptions("-client")
public class EmptyLambdaTest {
    public Runnable emptyLambda = () -> {};
    public Runnable emptyAnonymousType = new Runnable() {
        @Override
        public void run() {}
    };
    public Runnable nullAbleRunnable;

    @Benchmark
    public int timeEmptyLambda(int reps) {
        int dummy = 0;
        for (int i = 0; i < reps; i++) {
            emptyLambda.run();
            dummy |= i;
        }
        return dummy;
    }

    @Benchmark
    public int timeEmptyAnonymousType(int reps) {
        int dummy = 0;
        for (int i = 0; i < reps; i++) {
            emptyAnonymousType.run();
            dummy |= i;
        }
        return dummy;
    }

    @Benchmark
    public int timeNullCheck(int reps) {
        int dummy = 0;
        for (int i = 0; i < reps; i++) {
            if (nullAbleRunnable != null) {
                nullAbleRunnable.run();
            }
            dummy |= i;
        }
        return dummy;
    }
}
And here are the benchmark results:
[Benchmark result charts omitted: "Running with -client" and "Running with -server"]
Is defaulting to an empty lambda better or worse than checking for a potentially null lambda?
This is essentially the same as asking if it is better to test for a null String parameter or try to substitute an empty String.
The answer is that it depends on whether you want to treat the null as a programming error ... or not.
My personal opinion is that unexpected nulls should be treated as programming errors, and that you should allow the program to crash with an NPE. That way, the problem will come to your attention earlier and will be easier to track down and fix ... than if you substituted some "make good" value to stop the NPE from being thrown.
But of course, that doesn't apply for expected null values; i.e. when the API javadocs say that a null is a permissible value, and say what it means.
This also relates to how you design your APIs. In this case, the issue is whether your API spec (i.e. the javadoc!) should insist on the programmer providing a no-op lambda, or treat null as meaning the same thing. That boils down to a compromise between:
API client convenience,
API implementor work, and
robustness; e.g. when using the value of an incorrectly initialized variable ...
I'm more concerned about the implications of the runtime performance of using an empty lambda vs using a null and having to do a null check.
My intuition is that testing for null would be faster, but any difference in performance will be small, and that the chances are that it won't be significant to the overall performance of the application.
(UPDATE - Turns out that my intuition is "half right" according to @Balder's micro-benchmarking. For a -client mode JVM, null checking is a bit faster, but not enough to be concerning. For a -server mode JVM, the JIT compiler is apparently optimizing both cases to native code with identical performance.)
I suggest that you treat this as you would (or at least should) treat any potential optimization problem:
1. Put off any optimization until your application is working.
2. Benchmark the application to see if it is already fast enough.
3. Profile the application to see where the real hotspots are.
4. Develop and test a putative optimization.
5. Rerun the benchmarks to see if it improved things.
6. Go to step 2.
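Applied to the original question, one way to keep render() branch-free while still tolerating null from callers is to normalize at the setter boundary. A sketch (the setter is an illustrative addition, not part of the original class):
import java.util.ArrayList;
import java.util.List;

public class SceneNode {
    private static final Runnable NO_OP = () -> {};

    private final List<SceneNode> children = new ArrayList<>();
    private Runnable preRender = NO_OP;  // fields default to the shared no-op
    private Runnable render = NO_OP;
    private Runnable postRender = NO_OP;

    public void setPreRender(Runnable r) {
        // Callers may pass null; it is converted once here, not checked per frame
        this.preRender = (r != null) ? r : NO_OP;
    }

    public final void render() {
        preRender.run(); // never null, so no branch is needed here
        render.run();
        for (SceneNode child : children) {
            child.render();
        }
        postRender.run();
    }
}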
