JMH does not run all benchmarks

In my JMH benchmark I have 8 benchmark methods, 4 of which are not run. What is going on there?
My snippet:
@Benchmark
@Warmup(iterations = 1, time = 2, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 1)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(0)
@Group(value = "g1")
@GroupThreads(1)
public void Benchmark_1A_profileAddLastRoddeList(
        IndexedLinkedListState state) {
    profileAddLast(state.list,
                   ADD_LAST_OPERATIONS,
                   state.random);
}
// 3 more similar benchmarks sit here.
@Benchmark
@Warmup(iterations = 1, time = 2, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 1)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(0)
@Group(value = "g1")
@GroupThreads(1)
public void Benchmark_1B_profileGetRoddeList(IndexedLinkedListState state,
                                             Blackhole blackhole) {
    profileGet(state.list,
               GET_OPERATIONS,
               state.random,
               blackhole);
}
// 3 more similar benchmark methods sit here.
More precisely, the methods that add data to the state list are run, but not the get (access) benchmarks. My entire project is here.


run method for a given time - or until a condition is met

I want a method that receives a TimeUnit object (any) and a long timeout.
The method should return true if something happened during that time (what exactly is not relevant), or false otherwise.
The method should not sleep for the given time and then test whether the condition was met; it should keep testing the condition until either the required time has passed or the condition is met.
public static boolean checkTimeOut(TimeUnit unit, long timeout) {
    // wait based on TimeUnit and timeout
    if (/* condition met */) {
        return true;
    }
    return false;
}
public static void main(String[] args) {
    TimeUnit testingMinutes = TimeUnit.valueOf("MINUTES");
    TimeUnit testingSeconds = TimeUnit.valueOf("SECONDS");
    TimeUnit testingMilli = TimeUnit.valueOf("MILLISECONDS");
    checkTimeOut(testingMinutes, 1);
    checkTimeOut(testingSeconds, 200);
    checkTimeOut(testingMilli, 500);
}
This code is used only as an example of the general structure; it is not part of the actual project.
Without further details, a while loop seems to be the tool for this job:
public static boolean checkTimeOut(TimeUnit unit, long timeout) {
    long limit = unit.toMillis(timeout);
    long now = Clock.systemDefaultZone().millis();
    while (Clock.systemDefaultZone().millis() <= now + limit) {
        if (/* condition */) return true;
    }
    return false;
}
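Here is a minimal runnable sketch of the same idea, assuming the condition is passed in as a BooleanSupplier (that parameter and the class name are my additions, not part of the original question). It uses System.nanoTime() because it is monotonic, unlike wall-clock time:
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class TimeoutPoller {

    // Polls the condition until it becomes true or the timeout elapses.
    public static boolean checkTimeOut(TimeUnit unit, long timeout,
                                       BooleanSupplier condition) {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.onSpinWait(); // Java 9+: hint that we are in a busy-wait loop
        }
        return false;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // A condition that becomes true after roughly 200 ms.
        boolean met = checkTimeOut(TimeUnit.MILLISECONDS, 500,
                () -> System.currentTimeMillis() - start >= 200);
        System.out.println("condition met: " + met); // true
    }
}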
By the way, instead of using valueOf(), you could simply use one of the available enum constants, so that
public static void main(String[] args) {
    TimeUnit testingMinutes = TimeUnit.MINUTES;
    TimeUnit testingSeconds = TimeUnit.SECONDS;
    TimeUnit testingMilli = TimeUnit.MILLISECONDS;
    checkTimeOut(testingMinutes, 1);
    checkTimeOut(testingSeconds, 200);
    checkTimeOut(testingMilli, 500);
}

Does ScheduledExecutorService guarantee order when pool size is one?

I have a ScheduledExecutorService with a pool size of one thread.
If I schedule many tasks using that service with the same delay, is the order of scheduling preserved during the execution?
Yes, the order is preserved. From the javadocs:
Delayed tasks execute no sooner than they are enabled, but without any real-time guarantees about when, after they are enabled, they will commence. Tasks scheduled for exactly the same execution time are enabled in first-in-first-out (FIFO) order of submission.
You can see this in action, too:
public static void main(String args[]) {
    ScheduledExecutorService e = Executors.newScheduledThreadPool(1);
    e.schedule(delay("delay for 1 second", 10), 1, TimeUnit.SECONDS);
    e.schedule(delay("delay for 5 second", 0), 5, TimeUnit.SECONDS);
    e.schedule(delay("delay for 3 second", 0), 3, TimeUnit.SECONDS);
    e.schedule(delay("delay for 7 second", 0), 7, TimeUnit.SECONDS);
    e.schedule(delay("delay for 2 second", 0), 2, TimeUnit.SECONDS);
}

private static Runnable delay(String message, int initialDelay) {
    return () -> {
        try {
            // Thread.sleep must be wrapped: a Runnable cannot throw InterruptedException
            Thread.sleep(initialDelay);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        System.out.println(message);
    };
}
prints
delay for 1 second
delay for 2 second
delay for 3 second
delay for 5 second
delay for 7 second
Yes, as long as the scheduler implementation follows the interface specification. For example, new ScheduledThreadPoolExecutor(1) will use a DelayedWorkQueue, which preserves the order.
As per the javadoc, all ScheduledExecutorService implementations should preserve order:
Tasks scheduled for exactly the same execution time are enabled in first-in-first-out (FIFO) order of submission.
One can test the implementation with the example below:
import com.google.code.tempusfugit.concurrency.IntermittentTestRunner;
import com.google.code.tempusfugit.concurrency.annotations.Intermittent;
import org.junit.Test;
import org.junit.runner.RunWith;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import static org.assertj.core.api.Assertions.assertThat;

@RunWith(IntermittentTestRunner.class)
public class ScheduledExecutorServiceTest {

    @Test
    @Intermittent(repetition = 20)
    public void preservesOrderOfTasksScheduledWithSameDelay() throws InterruptedException {
        ScheduledExecutorService scheduledExecutorService = new ScheduledThreadPoolExecutor(1);
        AtomicInteger atomicInteger = new AtomicInteger(0);
        int numTasks = 1_000;
        CountDownLatch countDownLatch = new CountDownLatch(numTasks);
        for (int i = 0; i < numTasks; i++) {
            int finalI = i;
            scheduledExecutorService.schedule(() -> {
                // Succeeds only if tasks run in submission order (0, 1, 2, ...).
                atomicInteger.compareAndSet(finalI, finalI + 1);
                countDownLatch.countDown();
            }, 10, TimeUnit.MILLISECONDS);
        }
        countDownLatch.await();
        assertThat(atomicInteger.get()).isEqualTo(numTasks);
    }
}

MethodHandles.lookup().lookupClass() vs getClass()

Can anyone tell me the (subtle) differences between
version 1:
protected final Logger log = Logger.getLogger(getClass());
vs
version 2:
protected final Logger log = Logger.getLogger(MethodHandles.lookup().lookupClass());
Is version 2 in general faster than version 1?
I guess version 1 uses reflection (at runtime) to determine the current class, while version 2 does not need reflection (or is the check done at build time)?
There is no reflection involved in your first case: Object#getClass() is mapped to a native JVM method.
Your second case is not a drop-in replacement for Object#getClass(); it is used to look up method handles.
So the subtle difference is that they are used for completely different purposes.
These are entirely different things. The documentation of lookupClass specifically says:
Tells which class is performing the lookup. It is this class against which checks are performed for visibility and access permissions
So it's the class that performs the lookup, which is not necessarily the class where you call MethodHandles.lookup(). What I mean is that this:
Class<?> c = MethodHandles.privateLookupIn(String.class, MethodHandles.lookup()).lookupClass();
System.out.println(c);
will print class java.lang.String and not the class where this code is defined.
The only "advantage" (besides confusing every reader of this code), is that if you copy/paste that log creation line across various source files, it will use the proper class, if you, by accident, don't edit it (which probably happens).
Also notice that:
protected final Logger log = Logger.getLogger(getClass());
should usually be a static field, and you can't call getClass() in a static context.
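That is the one legitimate use case: in a static initializer, MethodHandles.lookup().lookupClass() still resolves to the surrounding class, while getClass() is unavailable. A minimal sketch (using java.util.logging rather than the Logger API above, just to keep it self-contained; the class name is hypothetical):
import java.lang.invoke.MethodHandles;
import java.util.logging.Logger;

public class ServiceA {

    // Works in a static context, and survives copy/paste into another class,
    // because lookupClass() resolves to whatever class contains this line.
    private static final Logger LOG =
            Logger.getLogger(MethodHandles.lookup().lookupClass().getName());

    // private static final Logger LOG = Logger.getLogger(getClass().getName());
    // would not compile: there is no 'this' in a static context.

    public static void main(String[] args) {
        LOG.info("logger name: " + LOG.getName()); // prints ServiceA's fully qualified name
    }
}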
A JMH test shows that there is no performance gain that would justify obfuscating your code that much:
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 2, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 2, timeUnit = TimeUnit.SECONDS)
public class LookupTest {

    private static final MethodHandles.Lookup LOOKUP = MethodHandles.lookup();

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder().include(LookupTest.class.getSimpleName())
                                          .verbosity(VerboseMode.EXTRA)
                                          .build();
        new Runner(opt).run();
    }

    @Benchmark
    @Fork(3)
    public Class<?> getClassCall() {
        return getClass();
    }

    @Benchmark
    @Fork(3)
    public Class<?> methodHandlesInPlaceCall() {
        return MethodHandles.lookup().lookupClass();
    }

    @Benchmark
    @Fork(3)
    public Class<?> methodHandlesCall() {
        return LOOKUP.lookupClass();
    }
}
results:
Benchmark                            Mode  Cnt  Score   Error  Units
LookupTest.getClassCall              avgt   15  2.264 ± 0.044  ns/op
LookupTest.methodHandlesCall         avgt   15  2.262 ± 0.030  ns/op
LookupTest.methodHandlesInPlaceCall  avgt   15  4.890 ± 0.783  ns/op

JMH - why does the JIT not eliminate my dead code?

I wrote two benchmarks to demonstrate that the JIT can be a problem when writing a good benchmark (please ignore that I don't use @State here):
@Fork(value = 1)
@Warmup(iterations = 2, time = 10)
@Measurement(iterations = 3, time = 2)
@BenchmarkMode(Mode.AverageTime)
public class DeadCodeTraps {

    @Benchmark
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public static void summaryStatistics_standardDeviationForFourNumbers() {
        final SummaryStatistics summaryStatistics = new SummaryStatistics();
        summaryStatistics.addValue(10.0);
        summaryStatistics.addValue(20.0);
        summaryStatistics.addValue(30.0);
        summaryStatistics.addValue(40.0);
        summaryStatistics.getStandardDeviation();
    }

    @Benchmark
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public static void summaryStatistics_standardDeviationForTenNumbers() {
        final SummaryStatistics summaryStatistics = new SummaryStatistics();
        summaryStatistics.addValue(10.0);
        summaryStatistics.addValue(20.0);
        summaryStatistics.addValue(30.0);
        summaryStatistics.addValue(40.0);
        summaryStatistics.addValue(50.0);
        summaryStatistics.addValue(60.0);
        summaryStatistics.addValue(70.0);
        summaryStatistics.addValue(80.0);
        summaryStatistics.addValue(90.0);
        summaryStatistics.addValue(100.0);
        summaryStatistics.getStandardDeviation();
    }
}
I thought that the JIT would eliminate the dead code, so the two methods would take the same time. But in the end, I get:
DeadCodeTraps.summaryStatistics_standardDeviationForFourNumbers  0.158 ± 0.046
DeadCodeTraps.summaryStatistics_standardDeviationForTenNumbers   0.359 ± 0.294
Why does the JIT not optimize it away? The result of summaryStatistics.getStandardDeviation() is not used anywhere outside the method, nor is it returned.
(I am using OpenJDK build 10.0.2+13-Ubuntu-1ubuntu0.18.04.4.)
If you're talking about the Apache Commons Math SummaryStatistics class, then it's a massive class. Its construction will most certainly not be inlined. To see why, run with -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining -XX:-BackgroundCompilation
Dead code elimination happens after inlining.
Unused objects will back-propagate, but the non-inlined constructor will break the chain because the JIT optimizer can no longer be sure there are no side effects.
In other words, the code you expect to be eliminated is too big.
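For contrast, here is a hedged sketch (class and method names are mine, modeled on the official JMH dead-code sample): when the unused result comes from a tiny, easily inlined method such as Math.log, the JIT can eliminate it, which is exactly why JMH recommends returning the value or sinking it into a Blackhole.
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Fork(1)
public class DeadCodeContrast {

    private double x = Math.PI;

    @Benchmark
    public void deadCode() {
        // Math.log is small enough to be inlined; after inlining, the JIT can
        // prove the unused result has no side effects and drop the whole call.
        Math.log(x);
    }

    @Benchmark
    public double returned() {
        // Returning the value makes JMH consume it, preventing elimination.
        return Math.log(x);
    }

    @Benchmark
    public void blackholed(Blackhole bh) {
        // Equivalent protection via an explicit Blackhole sink.
        bh.consume(Math.log(x));
    }
}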

How to run JMH from inside JUnit tests?

How can I run JMH benchmarks inside my existing project using JUnit tests? The official documentation recommends making a separate project, using Maven shade plugin, and launching JMH inside the main method. Is this necessary and why is it recommended?
I've been running JMH inside my existing Maven project using JUnit with no apparent ill effects. I cannot answer why the authors recommend doing things differently. I have not observed a difference in results. JMH launches a separate JVM to run benchmarks to isolate them. Here is what I do:
Add the JMH dependencies to your POM:
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.21</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.21</version>
    <scope>test</scope>
</dependency>
Note that I've placed them in scope test.
In Eclipse, you may need to configure the annotation processor manually. NetBeans handles this automatically.
Create your JUnit and JMH class. I've chosen to combine both into a single class, but that is up to you. Notice that OptionsBuilder.include is what actually determines which benchmarks will be run from your JUnit test!
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import org.junit.Test;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.*;

public class TestBenchmark {

    @Test
    public void launchBenchmark() throws Exception {
        Options opt = new OptionsBuilder()
                // Specify which benchmarks to run.
                // You can be more specific if you'd like to run only one benchmark per test.
                .include(this.getClass().getName() + ".*")
                // Set the following options as needed
                .mode(Mode.AverageTime)
                .timeUnit(TimeUnit.MICROSECONDS)
                .warmupTime(TimeValue.seconds(1))
                .warmupIterations(2)
                .measurementTime(TimeValue.seconds(1))
                .measurementIterations(2)
                .threads(2)
                .forks(1)
                .shouldFailOnError(true)
                .shouldDoGC(true)
                //.jvmArgs("-XX:+UnlockDiagnosticVMOptions", "-XX:+PrintInlining")
                //.addProfiler(WinPerfAsmProfiler.class)
                .build();
        new Runner(opt).run();
    }

    // The JMH samples are the best documentation for how to use it
    // http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/
    @State(Scope.Thread)
    public static class BenchmarkState {
        List<Integer> list;

        @Setup(Level.Trial)
        public void initialize() {
            Random rand = new Random();
            list = new ArrayList<>();
            for (int i = 0; i < 1000; i++)
                list.add(rand.nextInt());
        }
    }

    @Benchmark
    public void benchmark1(BenchmarkState state, Blackhole bh) {
        List<Integer> list = state.list;
        for (int i = 0; i < 1000; i++)
            bh.consume(list.get(i));
    }
}
JMH's annotation processor seems to not work well with compile-on-save in NetBeans. You may need to do a full Clean and Build whenever you modify the benchmarks. (Any suggestions appreciated!)
Run your launchBenchmark test and watch the results!
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running com.Foo
# JMH version: 1.21
# VM version: JDK 1.8.0_172, Java HotSpot(TM) 64-Bit Server VM, 25.172-b11
# VM invoker: /usr/lib/jvm/java-8-jdk/jre/bin/java
# VM options: <none>
# Warmup: 2 iterations, 1 s each
# Measurement: 2 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 2 threads, will synchronize iterations
# Benchmark mode: Average time, time/op
# Benchmark: com.Foo.benchmark1
# Run progress: 0.00% complete, ETA 00:00:04
# Fork: 1 of 1
# Warmup Iteration 1: 4.258 us/op
# Warmup Iteration 2: 4.359 us/op
Iteration 1: 4.121 us/op
Iteration 2: 4.029 us/op
Result "benchmark1":
4.075 us/op
# Run complete. Total time: 00:00:06
REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.
Benchmark Mode Cnt Score Error Units
Foo.benchmark1 avgt 2 4.075 us/op
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.013 sec
Runner.run even returns RunResult objects on which you can do assertions, etc.
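For example (a hedged sketch: the threshold and the assertion are mine, but Runner.run() really does return a Collection<RunResult>):
// import java.util.Collection;
// import org.openjdk.jmh.results.RunResult;
// import static org.junit.Assert.assertTrue;
Collection<RunResult> results = new Runner(opt).run();
for (RunResult result : results) {
    double score = result.getPrimaryResult().getScore();
    // Fail the JUnit test if a benchmark regressed past an agreed budget
    // (the 50 us/op threshold here is purely illustrative).
    assertTrue("benchmark too slow: " + score + " us/op", score < 50.0);
}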
A declarative approach using annotations:
@State(Scope.Benchmark)
@Threads(1)
public class TestBenchmark {

    @Param({"10", "100", "1000"})
    public int iterations;

    @Setup(Level.Invocation)
    public void setupInvokation() throws Exception {
        // executed before each invocation of the benchmark
    }

    @Setup(Level.Iteration)
    public void setupIteration() throws Exception {
        // executed before each iteration of the benchmark
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Fork(warmups = 1, value = 1)
    @Warmup(batchSize = -1, iterations = 3, time = 10, timeUnit = TimeUnit.MILLISECONDS)
    @Measurement(batchSize = -1, iterations = 10, time = 10, timeUnit = TimeUnit.MILLISECONDS)
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    public void test() throws Exception {
        Thread.sleep(ThreadLocalRandom.current().nextInt(0, iterations));
    }

    @Test
    public void benchmark() throws Exception {
        String[] argv = {};
        org.openjdk.jmh.Main.main(argv);
    }
}
Another variant, combining a main method for a quick sanity check, a JUnit entry point, and the benchmarks in one class:
@State(Scope.Benchmark)
@Threads(1)
@Fork(1)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 5, time = 1)
@BenchmarkMode(Mode.All)
public class ToBytesTest {

    public static void main(String[] args) {
        ToBytesTest test = new ToBytesTest();
        System.out.println(test.string()[0] == test.charBufferWrap()[0]
                && test.charBufferWrap()[0] == test.charBufferAllocate()[0]);
    }

    @Test
    public void benchmark() throws Exception {
        org.openjdk.jmh.Main.main(new String[]{ToBytesTest.class.getName()});
    }

    char[] chars = new char[]{'中', '国'};

    @Benchmark
    public byte[] string() {
        return new String(chars).getBytes(StandardCharsets.UTF_8);
    }

    @Benchmark
    public byte[] charBufferWrap() {
        return StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars)).array();
    }

    @Benchmark
    public byte[] charBufferAllocate() {
        CharBuffer cb = CharBuffer.allocate(chars.length).put(chars);
        cb.flip();
        return StandardCharsets.UTF_8.encode(cb).array();
    }
}
