Java: time complexity of basic operations

I'm going to investigate dividing large array/matrix computations among multiple threads, but I need to know the relative cost of basic Java operations.
For instance:
int a = 23498234;
int b = -34234;
int[] array = new int[10000];
int c = a + b; // 1
int c = array[234]; // 2
Statement 1 (the sum of two integers) is 10+ times faster than statement 2 (the memory access),
or (i & 1) == 0 is 10+ times faster than i % 2 == 0.
Question: can you estimate the relative timings of the following operations:
the +, * and / operators (assume int operands)
memory access
starting new thread

For performance timing, there are many confounding factors. Rather than try to get exact timings, it's better to understand what's going on and measure what you can.
The time utility will give you detailed stats on an executable, but keep in mind you're timing the JVM which is running the code, not just your code.
You might try the javap disassembler too -- ultimately you'll want to know how your individual operations break down into java bytecode, and the amount of time it takes to execute certain key bits.
Example source code:
public class T {
    public static void main(String[] args) {
        int x = 2;
        int y = 3;
        int z = x + y;
        System.out.println("" + x);
    }
}
Compiled, then disassembled:
$ javap -c T
Compiled from "T.java"
public class T {
public T();
Code:
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_2
1: istore_1
2: iconst_3
3: istore_2
4: iload_1
5: iload_2
6: iadd
7: istore_3
8: getstatic #2 // Field java/lang/System.out:Ljava/io/PrintStream;
11: new #3 // class java/lang/StringBuilder
14: dup
15: invokespecial #4 // Method java/lang/StringBuilder."<init>":()V
18: ldc #5 // String
20: invokevirtual #6 // Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
23: iload_1
24: invokevirtual #7 // Method java/lang/StringBuilder.append:(I)Ljava/lang/StringBuilder;
27: invokevirtual #8 // Method java/lang/StringBuilder.toString:()Ljava/lang/String;
30: invokevirtual #9 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
33: return
}
Look at offset 6 (iadd) - that's where the actual addition is happening.
One thing you need to establish is how the operations you're interested in turn into bytecode.
Within the JVM itself, you can use System.currentTimeMillis() for timing, but it won't give you sub-millisecond resolution. You can also use System.nanoTime() to get higher-resolution time (sub-millisecond), but its accuracy isn't guaranteed to match its resolution.
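As a rough illustration of measuring what you can (this is only a sketch, not a rigorous benchmark - warm-up, JIT compilation and dead-code elimination all skew such numbers, and the class name, loop counts and volatile sink below are arbitrary choices):
public class RoughTiming {
    // volatile sink so the JIT is less likely to eliminate the loops entirely
    static volatile long sink;

    public static void main(String[] args) throws InterruptedException {
        int a = 23498234, b = -34234;
        int[] array = new int[10000];

        // integer addition, 100 million times
        long t0 = System.nanoTime();
        long acc = 0;
        for (int rep = 0; rep < 100_000_000; rep++) {
            acc += a + b;
        }
        sink = acc;
        System.out.println("additions:    " + (System.nanoTime() - t0) + " ns");

        // array (memory) access, 100 million reads
        t0 = System.nanoTime();
        acc = 0;
        for (int rep = 0; rep < 10_000; rep++) {
            for (int i = 0; i < array.length; i++) {
                acc += array[i];
            }
        }
        sink = acc;
        System.out.println("array reads:  " + (System.nanoTime() - t0) + " ns");

        // starting (and joining) a single thread
        t0 = System.nanoTime();
        Thread t = new Thread(() -> sink++);
        t.start();
        t.join();
        System.out.println("thread start: " + (System.nanoTime() - t0) + " ns");
    }
}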

Related

Java constant expression related code optimization on compile

Here is a simple question about Java compile optimisation.
Is
final int CONSTANT_NUMBER="Foo Bar".length();
equal to
final int CONSTANT_NUMBER=7;
after compilation, or generally in terms of performance?
No, the Java compiler doesn't evaluate "Foo Bar".length() at compile time.
Consider these classes
public class ConstantCheck {
    final int CONSTANT_NUMBER = "Foo Bar".length();
}
and
public class ConstantCheck {
    final int CONSTANT_NUMBER = 7;
}
Using javap -v on the compiled .class file you can see that the .length() call is kept:
The former results in
...
final int CONSTANT_NUMBER;
descriptor: I
flags: ACC_FINAL
public text.ConstantCheck();
descriptor: ()V
flags: ACC_PUBLIC
Code:
stack=2, locals=1, args_size=1
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: aload_0
5: ldc #2 // String Foo Bar
7: invokevirtual #3 // Method java/lang/String.length:()I
10: putfield #4 // Field CONSTANT_NUMBER:I
13: return
...
the latter in
...
final int CONSTANT_NUMBER;
descriptor: I
flags: ACC_FINAL
ConstantValue: int 7
public text.ConstantCheck();
descriptor: ()V
flags: ACC_PUBLIC
Code:
stack=2, locals=1, args_size=1
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: aload_0
5: bipush 7
7: putfield #2 // Field CONSTANT_NUMBER:I
10: return
....
In the first case the .length call is present
7: invokevirtual #3 // Method java/lang/String.length:()I
in the second case it's just a constant that is written to the field
5: bipush 7
7: putfield #2 // Field CONSTANT_NUMBER:I
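For contrast, expressions that the language treats as compile-time constant expressions are folded by javac; a call like "Foo Bar".length() simply isn't one of them. A small illustration (my own example, not from the question):
public class ConstantFolding {
    final int SUM = 3 + 4;               // folded: compiles exactly like = 7 above
    final String NAME = "Foo" + "Bar";   // folded into the single literal "FooBar"
    final int LEN = "Foo Bar".length();  // not a constant expression: the call is kept
}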
Judging from the compiled code, the second is definitely going to be a little faster, as the first involves a method call. However, machines these days are way too fast for this to be noticeable or significant.
For performance, modern compilers should be smart enough to see that "Foo Bar" is a constant and replace the expression with its length. If, however, you change the string into a variable, you may be able to fool the compiler into calling the method every time, thereby making the first version slightly slower when run in a big loop.
I tested this by running both in a big loop, and the second method gave slightly better performance. I guess my compiler isn't smart enough to replace the method call with a constant.
So, short answer: performance-wise, the direct int is better than the method call on my machine, but it may differ with a different compiler.
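For reference, the kind of loop test described above might look roughly like this (the class name and loop count are made up for the sketch; without a proper harness such as JMH the numbers are only indicative):
public class LengthVsLiteral {
    static volatile int sink; // keeps the JIT from discarding the loop bodies

    public static void main(String[] args) {
        final int rounds = 100_000_000;

        long t0 = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            sink = "Foo Bar".length();   // the method-call variant
        }
        long withCall = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            sink = 7;                    // the plain-constant variant
        }
        long withLiteral = System.nanoTime() - t0;

        System.out.println("length(): " + withCall + " ns, literal: " + withLiteral + " ns");
    }
}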

Java memory allocation

I have a question that I have been pondering for a while. Take, for instance, this particular class:
class A {
    private static ArrayList<String> listOne;

    public static ArrayList<String> getList()
    {
        return listOne;
    }
}
Let's say I have a class B with a method that reads through listOne. To loop through the ArrayList, I first need to get the size of the list so my code knows where the ArrayList ends. There are two ways in which I can do so, one being
int listSize = A.getList().size();
for (int count = 0; count < listSize; count++)
{
    // code to read through the ArrayList
}
or I can achieve the same thing with
for (int count = 0; count < A.getList().size(); count++)
{
    // code to read through the ArrayList
}
In terms of memory and efficiency, which method is better? Furthermore, let's say I am reading through a very large list recursively. For simplicity's sake, let's assume that recursively reading through this list would cause a stack overflow exception. In this situation, would the first method theoretically cause the stack overflow to happen earlier than the second method, seeing that each recursive call's stack frame has to keep the state of the variable "listSize"?
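To make the recursive case concrete, here is roughly what I mean (a simplified sketch; the method name is made up and the actual reading code doesn't matter):
// variant 1: the size is cached, so every frame carries an extra int
void readFrom(int index, int listSize) {
    if (index >= listSize) return;
    String s = A.getList().get(index); // read the element
    readFrom(index + 1, listSize);
}

// variant 2: the size is re-read from the list on every call
void readFrom(int index) {
    if (index >= A.getList().size()) return;
    String s = A.getList().get(index);
    readFrom(index + 1);
}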
Have a look at the result of javap -verbose:
0: invokestatic #2 // Method A.getList:()Ljava/util/ArrayList;
3: invokevirtual #3 // Method java/util/ArrayList.size:()I
6: istore_1
7: iconst_0
8: istore_2
9: iload_2
10: iload_1
11: if_icmpge 27
14: getstatic #4 // Field java/lang/System.out:Ljava/io/PrintStream;
17: iload_2
18: invokevirtual #5 // Method java/io/PrintStream.println:(I)V
21: iinc 2, 1
24: goto 9
27: iconst_0
28: istore_2
29: iload_2
30: invokestatic #2 // Method A.getList:()Ljava/util/ArrayList;
33: invokevirtual #3 // Method java/util/ArrayList.size:()I
36: if_icmpge 52
39: getstatic #4 // Field java/lang/System.out:Ljava/io/PrintStream;
42: iload_2
43: invokevirtual #5 // Method java/io/PrintStream.println:(I)V
46: iinc 2, 1
49: goto 29
52: return
First case is:
9: iload_2
10: iload_1
11: if_icmpge 27
14: getstatic #4 // Field java/lang/System.out:Ljava/io/PrintStream;
17: iload_2
18: invokevirtual #5 // Method java/io/PrintStream.println:(I)V
21: iinc 2, 1
24: goto 9
And the second one:
29: iload_2
30: invokestatic #2 // Method A.getList:()Ljava/util/ArrayList;
33: invokevirtual #3 // Method java/util/ArrayList.size:()I
36: if_icmpge 52
39: getstatic #4 // Field java/lang/System.out:Ljava/io/PrintStream;
42: iload_2
43: invokevirtual #5 // Method java/io/PrintStream.println:(I)V
46: iinc 2, 1
49: goto 29
As you can see, it will get the list and its size during each loop iteration.
But, this might be optimized by JIT, so the result is not obvious from just the compiled bytecode.
Created from:
import java.io.*;
import java.util.*;

public class Z {
    public static void main(String[] args) throws Exception {
        int listSize = A.getList().size();
        for (int count = 0; count < listSize; count++) {
            System.out.println(count);
        }
        for (int count = 0; count < A.getList().size(); count++) {
            System.out.println(count);
        }
    }
}

class A {
    private static ArrayList<String> listOne = new ArrayList<>(Arrays.asList("1", "2", "3"));

    public static ArrayList<String> getList()
    {
        return listOne;
    }
}
Both loops are the same. The second one is better coding as it reduces the lines of code.
Since you mentioned that your need is to traverse the list, it is much better to use the enhanced (for-each) for loop.
What are the Advantages of Enhanced for loop and Iterator in Java?
why is enhanced for loop efficient than normal for loop
Regarding which of the methods is more efficient, I think they won't have any noticeable difference. And depending on the JVM, as Germann has said, the compiler may even optimize this. So just don't worry about this negligible difference.
I personally will use the second method since it has fewer lines of code and I'm lazy...
However, why use either of them at all?
There's a super cool alternative, and its name is... the enhanced for loop.
Let's compare it with a normal for loop:
Normal:
for (int i = 0; i < A.getList().size(); i++) {
}
Enhanced:
for (String item : A.getList()) {
// Instead of using A.getList().get(i) to access the items, just use "item"!
}
Look how nice it is!
The major differences between these two for loops are:
The normal for loop is essentially a while loop with initialization and increment.
The enhanced for loop obtains an Iterator via .iterator() and then calls hasNext() and next() on it to loop.
You need to know the size of the list to use a normal for loop.
Your collection just needs to implement Iterable (and thus provide an Iterator) to use an enhanced for loop. No size is needed.
An enhanced for loop has the following limitations:
You can't loop through two arrays at the same time
It's better to use a normal for loop (or keep a separate counter, as in the sketch below) if you want to know the index in the list, since calling indexOf a lot of times is not very efficient.
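As a sketch, if you do want the index while using the enhanced loop, you can simply keep your own counter instead of calling indexOf:
int index = 0;
for (String item : A.getList()) {
    System.out.println(index + ": " + item);
    index++;
}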

Why does Java have an IINC bytecode instruction?

Why does Java have an IINC bytecode instruction?
There is already an IADD bytecode instruction that can be used to accomplish the same.
So why does IINC exist?
Only the original designers of Java can answer why they made particular design decisions. However, we can speculate:
IINC does not let you do anything that can't already be accomplished by an ILOAD/SIPUSH/IADD/ISTORE combo. The difference is that IINC is a single instruction, which only takes 3 or 6 bytes, while the four-instruction sequence is obviously longer. So IINC slightly reduces the size of bytecode that uses it.
Apart from that, early versions of Java used an interpreter, where every instruction has overhead during execution. In this case, using a single IINC instruction could be faster than the equivalent alternative bytecode sequence. Note that JITting has made this largely irrelevant, but IINC dates back to the original version of Java.
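To put rough numbers on the size difference (byte counts per the JVM instruction encodings; this is a hand-written comparison, not javap output):
iinc 1, 5     // one instruction, 3 bytes: opcode + local variable index + signed byte constant
versus
iload_1       // 1 byte
sipush 5      // 3 bytes: opcode + 16-bit constant
iadd          // 1 byte
istore_1      // 1 byte, for 6 bytes in total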
As already pointed out, a single iinc instruction is shorter than the iload, sipush, iadd, istore sequence. There is also evidence that common-case code size reduction was an important motivation.
There are specialized instructions for dealing with the first four local variables, e.g. aload_0 does the same as aload 0 and is often used for loading the this reference onto the operand stack. There's an ldc instruction that can only refer to one of the first 255 constant pool items, whereas all of them could be handled by ldc_w; branch instructions use two bytes for offsets, so only overly large methods have to resort to goto_w; and iconst_<n> instructions for -1 to 5 exist even though these values could all be handled by bipush, whose values could in turn all be handled by sipush, which itself could be superseded by ldc.
So asymmetric instructions are the norm. In typical applications there are a lot of small methods with only a few local variables, and smaller numbers are more common than larger numbers. iinc is a direct equivalent of stand-alone i++ or i += smallConstantNumber expressions (applied to local variables), which often occur within loops. By being able to express common code idioms as more compact code without losing the ability to express all code, you get great savings in overall code size.
As also already pointed out, there is only a slight opportunity for faster execution in interpreted executions which is irrelevant for compiled/optimized code execution.
Looking at the JVM's bytecode instruction table, there are a couple of important differences.
iinc: increment local variable #index by signed byte const
iinc operates directly on a local variable slot instead of the operand stack.
iinc can only increment by a signed byte value. If you want to add a value in [-128, 127] to a local int, you can use iinc, but as soon as you want to add a number outside that range you need to use isub, iadd, or multiple iinc instructions.
Edit 1:
TL;DR
I was basically right, except that the limit is signed short values (16 bits, [-32768, 32767]). There's a wide bytecode instruction which modifies iinc (and a couple of other instructions) to use 16-bit operands instead of 8-bit ones.
Additionally, consider adding two variables together. If one of the variables is not constant, the compiler will never be able to inline its value to bytecode, so it cannot use iinc; it will have to use iadd.
package SO37056714;

public class IntegerIncrementTest {
    public static void main(String[] args) {
        int i = 1;
        i += 5;
    }
}
I'm going to experiment with the above piece of code. As it is, it uses iinc, as expected.
$ javap -c IntegerIncrementTest.class
Compiled from "IntegerIncrementTest.java"
public class SO37056714.IntegerIncrementTest {
public SO37056714.IntegerIncrementTest();
Code:
0: aload_0
1: invokespecial #8 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_1
1: istore_1
2: iinc 1, 5
5: return
}
i += 127 uses iinc as expected.
$ javap -c IntegerIncrementTest.class
Compiled from "IntegerIncrementTest.java"
public class SO37056714.IntegerIncrementTest {
public SO37056714.IntegerIncrementTest();
Code:
0: aload_0
1: invokespecial #8 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_1
1: istore_1
2: iinc 1, 127
5: return
}
i += 128 does not use iinc anymore, but instead iinc_w:
$ javap -c IntegerIncrementTest.class
Compiled from "IntegerIncrementTest.java"
public class SO37056714.IntegerIncrementTest {
public SO37056714.IntegerIncrementTest();
Code:
0: aload_0
1: invokespecial #8 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_1
1: istore_1
2: iinc_w 1, 128
8: return
}
i -= 601 also uses iinc_w:
$ javap -c IntegerIncrementTest.class
Compiled from "IntegerIncrementTest.java"
public class SO37056714.IntegerIncrementTest {
public SO37056714.IntegerIncrementTest();
Code:
0: aload_0
1: invokespecial #8 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_1
1: istore_1
2: iinc_w 1, -601
8: return
}
The _w suffix refers to the wide bytecode, which allows for constants up to 16 bits ([-32768, 32767]).
If we try i += 32768, we will see what I predicted above:
$ javap -c IntegerIncrementTest.class
Compiled from "IntegerIncrementTest.java"
public class SO37056714.IntegerIncrementTest {
public SO37056714.IntegerIncrementTest();
Code:
0: aload_0
1: invokespecial #8 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_1
1: istore_1
2: iload_1
3: ldc #16 // int 32768
5: iadd
6: istore_1
7: return
}
Additionally, consider the case where we are adding another variable to i (i += c). The compiler doesn't know if c is constant or not, so it cannot inline c's value to bytecode. It will use iadd for this case too:
int i = 1;
byte c = 3;
i += c;
$ javap -c IntegerIncrementTest.class
Compiled from "IntegerIncrementTest.java"
public class SO37056714.IntegerIncrementTest {
public SO37056714.IntegerIncrementTest();
Code:
0: aload_0
1: invokespecial #8 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_1
1: istore_1
2: iconst_3
3: istore_2
4: iload_1
5: iload_2
6: iadd
7: istore_1
8: return
}

Java increase or decrease? A performance evaluation [duplicate]

This question already has answers here:
Performance difference between post- and pre- increment operators? [closed]
(2 answers)
Closed 7 years ago.
Is there a performance difference in Java between i++; and i--;?
I'm not able to evaluate bytecode for this, and I think that simple benchmarks are not reliable because of dependence on a specific algorithm.
"I'm not able to evaluate bytecode"
Besides the duplicate which I linked, and which shows some general things to consider when asking performance-related questions:
Given the following sample code (System.err.println is essentially necessary so that the compiler does not optimize away the unused variable):
public class IncDec {
    public static void main(String[] args) {
        int i = 5;
        i++;
        System.err.println(i);
        i--;
        System.err.println(i);
    }
}
Disassembled code:
> javap -c IncDec
Compiled from "IncDec.java"
public class IncDec {
public IncDec();
Code:
0: aload_0
1: invokespecial #8 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_5
1: istore_1 // int i = 5
2: iinc 1, 1 // i++
5: getstatic #16 // Field java/lang/System.err:Ljava/io/PrintStream;
8: iload_1
9: invokevirtual #22 // Method java/io/PrintStream.println:(I)V
12: iinc 1, -1 // i--
15: getstatic #16 // Field java/lang/System.err:Ljava/io/PrintStream;
18: iload_1
19: invokevirtual #22 // Method java/io/PrintStream.println:(I)V
22: return
}
So, no, there is no performance difference in this particular case at the bytecode level - both statements compile to the same instruction, iinc, just with the constants 1 and -1.
The JIT compiler could be free to do any additional optimization though.
In Java, there isn't a difference in speed between the two. At the most basic level, subtraction is simply addition: you take the two's complement of the operand and add it.
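A quick way to see that from Java itself (just an illustration; all three lines print the same value):
int i = 42;
System.out.println(i - 1);        // plain subtraction
System.out.println(i + (-1));     // addition of the negated value
System.out.println(i + (~1 + 1)); // -1 written out as the two's complement of 1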

Java JITC native code generation/execution example?

I'm trying to understand the 'native code generation and execution' part of the Java JIT compiler, but I'm having a hard time visualizing exactly what happens. E.g., say I have the following class:
class Foo
{
    private int x;

    public void incX()
    {
        x++;
    }
}
javac generates the following bytecode for the method:
public void incX();
Code:
Stack=3, Locals=1, Args_size=1
0: aload_0
1: dup
2: getfield #17; //Field x:I
5: iconst_1
6: iadd
7: putfield #17; //Field x:I
10: return
LineNumberTable:
line 33: 0
line 34: 10
LocalVariableTable:
Start Length Slot Name Signature
0 11 0 this LFoo;
When JITC converts this into native code, what exactly happens? And how is this native code executed by JVM?
When the method gets called sufficiently often to pass the JVM's compilation threshold, the JIT compiles the bytecode into native code, and sets it up so that calls to the function go directly to the natively compiled method.
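One way to watch this happen (assuming a HotSpot JVM; the flag and its output format are implementation details, and the driver class below is made up) is to call the method in a hot loop and run with -XX:+PrintCompilation:
public class JitDemo {
    public static void main(String[] args) {
        Foo foo = new Foo();
        // call incX often enough to cross the JIT's compilation threshold
        for (int i = 0; i < 1_000_000; i++) {
            foo.incX();
        }
    }
}
Running java -XX:+PrintCompilation JitDemo prints a line for each method as it gets compiled; once incX becomes hot you should see an entry for Foo::incX, after which calls go to the generated native code.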
