This is my code. I am trying to test the average time it takes to call the getLocationIp method with an ipAddress. What I did is generate some random IP addresses, pass each one to getLocationIp, and calculate the time difference. I then put each difference into a HashMap along with its count, and afterwards I print the map to see
the actual counts. Is this the right way to test this, or is there some other way? In my case I am not sure whether my generateIPAddress method generates a random IP address every time. I also take a start_total time before entering the loop and an end_total time after everything completes. From those, can I calculate the average time?
long total = 10000;
long found = 0;
long found_country = 0;
long runs = total;
Map<Long, Long> histogram = new HashMap<Long, Long>();
try {
    long start_total = System.nanoTime();
    while (runs > 0) {
        String ipAddress = generateIPAddress();
        long start_time = System.nanoTime();
        resp = GeoLocationService.getLocationIp(ipAddress);
        long end_time = System.nanoTime();
        long difference = (end_time - start_time) / 1000000;
        Long count = histogram.get(difference);
        if (count != null) {
            count++;
            histogram.put(Long.valueOf(difference), count);
        } else {
            histogram.put(Long.valueOf(difference), Long.valueOf(1L));
        }
        runs--;
    }
    long end_total = System.nanoTime();
    long finalTotal = (end_total - start_total) / 1000000;
    float avg = (float) finalTotal / total;
    Set<Long> keys = histogram.keySet();
    for (Long key : keys) {
        Long value = histogram.get(key);
        System.out.println("$$$GEO OPTIMIZE SVC MEASUREMENT$$$, HG data, " + key + ":" + value);
    }
} catch (Exception e) {
    e.printStackTrace();
}
This is my generateIPAddress method:
private String generateIPAddress() {
Random r = new Random();
String s = r.nextInt(256) + "." + r.nextInt(256) + "." + r.nextInt(256) + "." + r.nextInt(256);
return s;
}
Any suggestions will be appreciated.
Generally when you benchmark functions you want to run them multiple times and average the results. That gives you a clearer indication of the actual time your program will spend in them, considering that you rarely care about the performance of something run only once.
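As a minimal sketch of that approach (workUnderTest is a placeholder for the call being measured, e.g. your getLocationIp; the run counts are arbitrary):

```java
// Sketch: run the call many times, discard a warm-up phase, average the rest.
public class AverageBenchmark {
    // Placeholder workload standing in for the real call under test.
    static long workUnderTest() {
        long sum = 0;
        for (int i = 0; i < 1_000; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        final int warmup = 10_000, measured = 100_000;
        long sink = 0;
        // Warm-up: let the JIT compile the hot path before measuring.
        for (int i = 0; i < warmup; i++) sink += workUnderTest();
        long start = System.nanoTime();
        for (int i = 0; i < measured; i++) sink += workUnderTest();
        long elapsed = System.nanoTime() - start;
        System.out.println("avg ns/call: " + (elapsed / (double) measured));
        // Printing the sink prevents the JIT from eliminating the loop as dead code.
        System.out.println("sink: " + sink);
    }
}
```

For anything beyond a rough estimate, a harness like JMH handles warm-up, dead-code elimination, and statistical reporting for you.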
I want to practice avoiding static methods/variables when they're not needed, because I've heard/seen/been told that you should avoid them when you can. I decided to make a simple password cracker in Java:
import java.util.Random;
public class PasswordCracker
{
public static void main(String args[])
{
PasswordCracker pwcSimulation = new PasswordCracker();
long totalTimeSpentCracking = 0;
int numSimulations = 100;
for(int i = 0; i < numSimulations; i++)
{
System.out.println(pwcSimulation.PasswordCrackingSimulation());
}
}
long PasswordCrackingSimulation()
{
long startTime = System.currentTimeMillis();
int upperBound = 999999;
Random rand = new Random();
int randomPassword = rand.nextInt(upperBound);
int passwordGuess;
for(int i = 0; i <= upperBound; i++)
{
passwordGuess = i;
if(passwordGuess == randomPassword)
{
System.out.println("password Guessed correctly, the password was: " + randomPassword);
break;
}
/*else
{
System.out.println("Your inputted password is incorrect, please try again.");
}*/
}
long endTime = System.currentTimeMillis();
long timeSpentCracking = (endTime - startTime);
System.out.println("The program took " + timeSpentCracking + "ms OR ~" + ((timeSpentCracking/1000) % 60) + " seconds to complete");
return timeSpentCracking;
}
}
I first instantiated a new class (hopefully I did this the way you should?) to avoid having to use a static method for PasswordCrackingSimulation. Now I'm having trouble returning a value from the method. The println in the loop always prints 0, so I know it isn't getting the returned value from the method. Any help would be lovely :) just trying to learn
No, you're doing everything correctly.
You're returning how long it takes in milliseconds to crack that password.
The answer is less than 1 millisecond. That 0 you see? That's because your method is returning 0. It is doing that because endTime - startTime is zero.
Just write return 1 to test this out yourself - you'll see your print loop print 1 instead.
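A minimal sketch of that difference, timing the same search with both clocks (crack is a stripped-down stand-in for your loop):

```java
// Sketch: System.currentTimeMillis() has millisecond granularity, so
// sub-millisecond work rounds down to 0 ms; System.nanoTime() can resolve it.
public class TimerResolution {
    // Same brute-force search as the cracker loop, without the printing.
    static int crack(int target) {
        for (int i = 0; i <= 999_999; i++) {
            if (i == target) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        long msStart = System.currentTimeMillis();
        long nsStart = System.nanoTime();
        int found = crack(123_456);
        long nsElapsed = System.nanoTime() - nsStart;
        long msElapsed = System.currentTimeMillis() - msStart;
        // msElapsed will very likely print 0; nsElapsed typically will not.
        System.out.println("found " + found + " in " + msElapsed + " ms / " + nsElapsed + " ns");
    }
}
```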
I have this code:
public class MainActivity extends AppCompatActivity implements SensorEventListener {
long start_time;
int record_state;
@Override
public void onSensorChanged(SensorEvent event) {
Long time = System.currentTimeMillis();
if (record_state == 1)
{
start_time = time;
record_state = 0;
}
if(Ax.size() == N_SAMPLES && Ay.size() == N_SAMPLES && Az.size() == N_SAMPLES) //assuming this gets executed
{
Toast.makeText(MainActivity.this, "Size of Ax: " + Integer.toString(Ax.size()) +
"\nSize of Ay: " + Integer.toString(Ay.size()) +
"\nSize of Az: " + Integer.toString(Az.size()) + "\n" + Long.toString((time)) + "\n" + Long.toString((start_time)) + "\nrecord state: " + Integer.toString((record_state)), Toast.LENGTH_LONG).show();
}
}
}
But it appears that time and start_time always have the same value. I want start_time to record the time at the very beginning (or freeze the value of time at one instant) only. How can I do this? What is wrong with this code?
If the code below does not work as you expected it to, then I suspect you have an issue with when and how "record_state" is being set somewhere in your code.
Fields with class-wide scope:
int record_state = 1;
int iterations = 0;
long start_time;
When your onSensorChanged triggers call checkTimeDiff
private void checkTimeDiff(){
iterations++;
long time = SystemClock.elapsedRealtime();
if (record_state == 1)
{
start_time = SystemClock.elapsedRealtime();
record_state = 0;
}
long diff = time - start_time;
Log.e("My Timer", "Time difference = " + diff + " number of iterations = " + iterations);
if (diff >= 4000)
{
record_state = 1;
Toast.makeText(MainActivity.this, Long.toString(time) + "\n" + Long.toString(start_time), Toast.LENGTH_SHORT).show();
}
}
It appears as if you are using "record_state" as a digital flag. In that case a boolean would be more elegant. But I will use your code as much as possible.
Use lowercase 'long'. You are using objects currently and pointing them both to the same reference, so when you change the value of one it affects the other.
Because this code is going to be used in onSensorChanged (referring to your comments), these variables should be declared at class level and used throughout your class, not inside the single method; then the problem will disappear. So please declare these outside any method, as fields:
int record_state = 1;
Long start_time;
Code and logic improvements
Use long instead of Long
Use >= instead of == because it's too hard to hit the exact value!
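Both improvements can be sketched together; checkTimeDiff here is a stand-in driven by explicit timestamps rather than the real sensor callback (names and the plain-method driver are mine):

```java
// Sketch: record the window start on the first call, then use >= (not ==)
// to detect that the window has elapsed. Primitive long throughout.
public class TimingWindow {
    static boolean recording = false;  // boolean flag instead of an int record_state
    static long startTime;

    static boolean checkTimeDiff(long now, long windowMs) {
        if (!recording) {              // first call: remember the window start
            recording = true;
            startTime = now;
        }
        if (now - startTime >= windowMs) {  // >= : hitting the exact value is unlikely
            recording = false;              // re-arm for the next window
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(checkTimeDiff(1_000L, 4_000L)); // false: window just opened
        System.out.println(checkTimeDiff(5_001L, 4_000L)); // true: 4001 ms elapsed
    }
}
```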
This question already has answers here:
How do I write a correct micro-benchmark in Java?
(11 answers)
Closed 5 years ago.
I have this following code snippet
// List of persons with name and age
List<Person> persons = new ArrayList<>();
// Adding 10,000 objects
for(int i = 0 ; i < 10000 ; i ++) {
Person p = new Person();
p.setName("Person " + i);
p.setAge(i);
persons.add(p);
}
long time1 = System.nanoTime();
System.out.println("Time before stream.reduce() " + time1);
Optional<Person> o1 = persons.stream().reduce(BinaryOperator.maxBy(Comparator.comparingInt(p -> p.getAge())));
long time2 = System.nanoTime();
System.out.println(o1.get() + "\nTime after stream.reduce() " + time2);
System.out.println("**** Rough execution time for stream.reduce() : " + (time2 - time1) + " nano secs");
long time3 = System.nanoTime();
System.out.println("Time before stream.max() " + time3);
Optional<Person> o2 = persons.stream().max((p01, p02) -> p01.getAge() - p02.getAge());
long time4 = System.nanoTime();
System.out.println(o2.get() + "\nTime after stream.max() " + time4);
System.out.println("**** Rough execution time for stream.max() : " + (time4 - time3) + " nano secs");
While this might not be the ideal way to figure out execution time, basically what I am trying to do here is find the oldest Person and print out the time it took to find it out using stream.reduce() vs stream.max().
Output
Time before stream.reduce() 8834253431112
[ Person 9999, 9999]
Time after stream.reduce() 8834346269743
**** Rough execution time for stream.reduce() : 92838631 nano secs
Time before stream.max() 8834346687875
[ Person 9999, 9999]
Time after stream.max() 8834350117000
**** Rough execution time for stream.max() : 3429125 nano secs
P.S. I have run this code multiple times, changing the order of stream.max() and stream.reduce(), and found that stream.reduce() takes significantly more time to produce the output than stream.max().
So is stream.max() always faster than stream.reduce()? If yes then, when should we use stream.reduce()?
The ReferencePipeline implementation of max looks as follows:
public final Optional<P_OUT> max(Comparator<? super P_OUT> comparator) {
return reduce(BinaryOperator.maxBy(comparator));
}
So any performance difference that you observe is just an artifact of the approach that you use for measuring the performance.
Or, more clearly: The answer is No, it is not "always faster".
Edit: Just for reference, here is a slightly adjusted version of your code. It does run the test for different numbers of elements, repeatedly. This is still not a real, reliable (Micro) Benchmark, but more reliable than running the whole thing only once:
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
public class MaxReducePerformance
{
public static void main(String[] args)
{
for (int n=500000; n<=5000000; n+=500000)
{
List<Person> persons = new ArrayList<>();
for (int i = 0; i < n; i++)
{
Person p = new Person();
p.setName("Person " + i);
p.setAge(i);
persons.add(p);
}
System.out.println("For " + n);
long time1 = System.nanoTime();
Optional<Person> o1 = persons.stream().reduce((p01, p02) ->
{
if (p01.getAge() < p02.getAge())
return p02;
return p01;
});
long time2 = System.nanoTime();
double d0 = (time2 - time1) / 1e9;
System.out.println("Reduce: "+d0+" seconds, " + o1);
long time3 = System.nanoTime();
Optional<Person> o2 =persons.stream().max(
(p01, p02) -> p01.getAge() - p02.getAge());
long time4 = System.nanoTime();
double d1 = (time4 - time3) / 1e9;
System.out.println("Max : "+d1+" seconds, " + o2);
}
}
}
class Person
{
String name;
int age;
void setName(String name)
{
this.name = name;
}
void setAge(int age)
{
this.age = age;
}
int getAge()
{
return age;
}
}
The output should show that the durations are basically equal.
Your reduce function evaluates getAge twice in each iteration; that is why the result might be slower, depending on compiler optimizations. Restructure your code and check the result.
Also, Stream.max might benefit from built-in VM optimizations, so you should generally stick with built-in functions instead of implementing equivalent ones.
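Since max delegates to reduce, giving both paths the identical comparator makes the work per element the same. A minimal sketch (the Person record and method names are stand-ins for the question's class; requires Java 16+ for records):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.function.BinaryOperator;

public class SameComparator {
    record Person(String name, int age) {}

    // One comparator shared by both variants, so neither does extra work.
    static final Comparator<Person> BY_AGE = Comparator.comparingInt(Person::age);

    static Optional<Person> viaReduce(List<Person> ps) {
        return ps.stream().reduce(BinaryOperator.maxBy(BY_AGE));
    }

    static Optional<Person> viaMax(List<Person> ps) {
        return ps.stream().max(BY_AGE);
    }

    public static void main(String[] args) {
        List<Person> persons = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) persons.add(new Person("Person " + i, i));
        System.out.println(viaReduce(persons).get().age()); // 9999
        System.out.println(viaMax(persons).get().age());    // 9999
    }
}
```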
I am writing a Java program that uses a for loop to find the biggest value of long. However, nothing is printed when I run the program. Why?
Here's my code:
class LongMaxMin {
public static void main(String args[]) {
long i = 0L;
long result = 0L;
for (; ; ) {
result = i++;
if (i<0)
break;
}
System.out.println("The biggest integer:" + result);
    }
}
Mostly because of time.
A long has a max of about ~9.22 quintillion. You're starting at zero and incrementing up. That means you need to go through 9 quintillion loop iterations before it wraps over and breaks. I just tried to run 2 billion operations in my JavaScript console and it was still going after a couple of minutes, at which point I force quit.
If you sit there and let it run long enough, you'll get your output. Alternatively, start i at something close to the max already, like 9,223,372,036,854,700,000, and see if it still gives you the same issue. Since Java 7, underscores are allowed in numeric literals, so initializing i to something like 9_223_372_036_854_700_000L will give you a result in a more timely manner.
The max long is significantly high, at 9.223372e+18. For specifics, 9,223,372,036,854,775,807 is the number in question. This also contributes to that whole "this works, it'll just take WAY too long" theory.
I was curious how long it would take so I wrote a class to do the same thing. Wrote it with a separate thread to update results to the console every 1 second.
"int" results
1,343,211,433 37.4518434691484288634492200 % left
Max Value: 2,147,483,647
Time Taken (seconds): **1.588**
"long" results
1,220,167,357 99.9999999867709190074470400 % left
2,519,937,368 99.9999999726787843108699600 % left
3,881,970,343 99.9999999579115932059510100 % left
5,210,983,861 99.9999999435023997711689800 % left
6,562,562,290 99.9999999288485570811055300 % left
7,853,387,353 99.9999999148534037050721500 % left
9,137,607,100 99.9999999009298653086103000 % left
10,467,975,104 99.9999998865059865071902600 % left
11,813,910,300 99.9999998719133278719112300 % left
13,183,196,499 99.9999998570674971548090400 % left
...it continues on and on...
1,362,032,975 - difference between the 2nd and 3rd values (1 second)
6,771,768,529 seconds - how many seconds it would take to reach long's max value (Long.MAX_VALUE / 2nd3rdDifference)
6,771,768,529 seconds = 214.73 years (per conversion by google search)
So if my calculations are correct... you'd be dead of old age by the time an average computer calculated the max value of long by incrementing and checking whether it has overflowed. Your children would be dead too. Your grandchildren, they might be around when it finished...
Code for Max Value Calculation
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.text.NumberFormat;
public class MainLongMaxTest {
// /*
public static final long MAX_VALUE = Long.MAX_VALUE;
public static long value = 0;
public static long previousValue = 0;
// */
/*
public static final int MAX_VALUE = Integer.MAX_VALUE;
public static int value = 0;
public static int previousValue = 0;
*/
public static boolean done;
public static BigDecimal startTime;
public static BigDecimal endTime;
public static void main(String[] args) {
Runnable task = new StatusPrinterRunnable();
new Thread(task).start(); // code waits 1 second before result printing loop
done = false;
startTime = new BigDecimal(System.currentTimeMillis());
while(value >= 0) {
previousValue = value;
value += 1;
}
endTime = new BigDecimal(System.currentTimeMillis());
done = true;
}
}
class StatusPrinterRunnable implements Runnable {
public static final NumberFormat numberFormat = NumberFormat.getNumberInstance();
private static long SLEEP_TIME = 1000;
@Override
public void run() {
try { Thread.sleep(SLEEP_TIME); } catch (InterruptedException e) { throw new RuntimeException(e); }
while(!MainLongMaxTest.done) {
long value = MainLongMaxTest.value;
//long valuesLeft = MAX_VALUE - value;
BigDecimal maxValueBd = new BigDecimal(MainLongMaxTest.MAX_VALUE);
BigDecimal valueBd = new BigDecimal(value);
BigDecimal differenceBd = maxValueBd.subtract(valueBd);
BigDecimal percentLeftBd = differenceBd.divide(maxValueBd, 25, RoundingMode.HALF_DOWN);
percentLeftBd = percentLeftBd.multiply(new BigDecimal(100));
String numberAsString = numberFormat.format(value);
String percentLeftAsString = percentLeftBd.toString();
String message = "" + numberAsString + "\t" + percentLeftAsString + " % left";
System.out.println(message);
try { Thread.sleep(SLEEP_TIME); } catch (InterruptedException e) { throw new RuntimeException(e); }
}
BigDecimal msTaken = MainLongMaxTest.endTime.subtract(MainLongMaxTest.startTime);
BigDecimal secondsTaken = msTaken.divide(new BigDecimal("1000"));
System.out.println();
System.out.println("Max Value: " + numberFormat.format(MainLongMaxTest.previousValue));
System.out.println("Time Taken (seconds): " + secondsTaken);
}
}
I think your logic is correct; it will just take a very long time to reach that value.
The maximum value a long can hold is Long.MAX_VALUE, which is 9223372036854775807L.
To speed up the logic, I modified the program as below and got the expected result.
public static void main(String args[]) {
long i = 9223372036854775806L;
long result = 0L;
for (; ; ) {
result = i++;
if (i<0) {
System.out.println("result"+result);
System.out.println("i"+i);
break;
}
}
System.out.println("The biggest integer: is" + result);
}
Output:
result9223372036854775807
i-9223372036854775808
The biggest integer: is9223372036854775807
result holds the maximum value; after that, i wraps around to its minimum value.
You can get the result in one step if you take advantage of binary algebra by:
result = -1L >>> 1;
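A quick check of that trick: -1L has all 64 bits set, and the unsigned right shift clears just the sign bit, leaving exactly Long.MAX_VALUE:

```java
// Sketch: 0xFFFF...F (that is, -1L) shifted right unsigned by one
// becomes 0x7FFF...F, the largest positive long.
public class MaxViaShift {
    public static void main(String[] args) {
        long result = -1L >>> 1;
        System.out.println(result);                    // 9223372036854775807
        System.out.println(result == Long.MAX_VALUE);  // true
    }
}
```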
According to its documentation, System.nanoTime returns
nanoseconds since some fixed but arbitrary origin time. However, on all x64 machines I tried the code below, there were time jumps, moving that fixed origin time around. There may be some flaw in my method to acquire the correct time using an alternative method (here, currentTimeMillis). However, the main purpose of measuring relative times (durations) is negatively affected, too.
I came across this problem trying to measure latencies when comparing different queues to LMAX's Disruptor where I got very negative latencies sometimes. In those cases, start and end timestamps were created by different threads, but the latency was computed after those threads had finished.
My code here takes time using nanoTime, computes the fixed origin in currentTimeMillis time, and compares that origin between calls. And since I must ask a question here: What is wrong with this code? Why does it observe violations of the fixed origin contract? Or does it not?
import java.text.*;
/**
* test coherency between {@link System#currentTimeMillis()} and {@link System#nanoTime()}
*/
public class TimeCoherencyTest {
static final int MAX_THREADS = Math.max( 1, Runtime.getRuntime().availableProcessors() - 1);
static final long RUNTIME_NS = 1000000000L * 100;
static final long BIG_OFFSET_MS = 2;
static long startNanos;
static long firstNanoOrigin;
static {
initNanos();
}
private static void initNanos() {
long millisBefore = System.currentTimeMillis();
long millisAfter;
do {
startNanos = System.nanoTime();
millisAfter = System.currentTimeMillis();
} while ( millisAfter != millisBefore);
firstNanoOrigin = ( long) ( millisAfter - ( startNanos / 1e6));
}
static NumberFormat lnf = DecimalFormat.getNumberInstance();
static {
lnf.setMaximumFractionDigits( 3);
lnf.setGroupingUsed( true);
};
static class TimeCoherency {
long firstOrigin;
long lastOrigin;
long numMismatchToLast = 0;
long numMismatchToFirst = 0;
long numMismatchToFirstBig = 0;
long numChecks = 0;
public TimeCoherency( long firstNanoOrigin) {
firstOrigin = firstNanoOrigin;
lastOrigin = firstOrigin;
}
}
public static void main( String[] args) {
Thread[] threads = new Thread[ MAX_THREADS];
for ( int i = 0; i < MAX_THREADS; i++) {
final int fi = i;
final TimeCoherency tc = new TimeCoherency( firstNanoOrigin);
threads[ i] = new Thread() {
@Override
public void run() {
long start = getNow( tc);
long firstOrigin = tc.lastOrigin; // get the first origin for this thread
System.out.println( "Thread " + fi + " started at " + lnf.format( start) + " ns");
long nruns = 0;
while ( getNow( tc) < RUNTIME_NS) {
nruns++;
}
final long runTimeNS = getNow( tc) - start;
final long originDrift = tc.lastOrigin - firstOrigin;
nruns += 3; // account for start and end call and the one that ends the loop
final long skipped = nruns - tc.numChecks;
System.out.println( "Thread " + fi + " finished after " + lnf.format( nruns) + " runs in " + lnf.format( runTimeNS) + " ns (" + lnf.format( ( double) runTimeNS / nruns) + " ns/call) with"
+ "\n\t" + lnf.format( tc.numMismatchToFirst) + " different from first origin (" + lnf.format( 100.0 * tc.numMismatchToFirst / nruns) + "%)"
+ "\n\t" + lnf.format( tc.numMismatchToLast) + " jumps from last origin (" + lnf.format( 100.0 * tc.numMismatchToLast / nruns) + "%)"
+ "\n\t" + lnf.format( tc.numMismatchToFirstBig) + " different from first origin by more than " + BIG_OFFSET_MS + " ms"
+ " (" + lnf.format( 100.0 * tc.numMismatchToFirstBig / nruns) + "%)"
+ "\n\t" + "total drift: " + lnf.format( originDrift) + " ms, " + lnf.format( skipped) + " skipped (" + lnf.format( 100.0 * skipped / nruns) + " %)");
}};
threads[ i].start();
}
try {
for ( Thread thread : threads) {
thread.join();
}
} catch ( InterruptedException ie) {};
}
public static long getNow( TimeCoherency coherency) {
long millisBefore = System.currentTimeMillis();
long now = System.nanoTime();
if ( coherency != null) {
checkOffset( now, millisBefore, coherency);
}
return now - startNanos;
}
private static void checkOffset( long nanoTime, long millisBefore, TimeCoherency tc) {
long millisAfter = System.currentTimeMillis();
if ( millisBefore != millisAfter) {
// disregard since thread may have slept between calls
return;
}
tc.numChecks++;
long nanoMillis = ( long) ( nanoTime / 1e6);
long nanoOrigin = millisAfter - nanoMillis;
long oldOrigin = tc.lastOrigin;
if ( oldOrigin != nanoOrigin) {
tc.lastOrigin = nanoOrigin;
tc.numMismatchToLast++;
}
if ( tc.firstOrigin != nanoOrigin) {
tc.numMismatchToFirst++;
}
if ( Math.abs( tc.firstOrigin - nanoOrigin) > BIG_OFFSET_MS) {
tc.numMismatchToFirstBig ++;
}
}
}
Now I made some small changes. Basically, I bracket the nanoTime call between two currentTimeMillis calls to see if the thread has been rescheduled (which should take more than the currentTimeMillis resolution); in that case, I disregard the loop cycle. Actually, if we know that nanoTime is sufficiently fast (as on newer architectures like Ivy Bridge), we could instead bracket the currentTimeMillis call between two nanoTime calls.
Now the long >10 ms jumps are gone. Instead, we count how often we get more than 2 ms away from the first origin, per thread. On the machines I have tested, for a runtime of 100 s, there are always close to 200,000 jumps between calls. It is for those cases that I think either currentTimeMillis or nanoTime may be inaccurate.
As has been mentioned, computing a new origin each time means you are subject to error.
// ______ delay _______
// v v
long origin = (long)(System.currentTimeMillis() - System.nanoTime() / 1e6);
// ^
// truncation
If you modify your program so that you also compute the origin difference, you'll find it is very small. I measured about 200 ns on average, which is about right for the delay between the two calls.
Using multiplication instead of division (which should be OK without overflow for another couple hundred years), you'll also find that the number of computed origins that fail the equality check is much larger, about 99%. If the reason for the error is the time delay, they would only pass when the delay happens to be identical to the previous one.
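To make the truncation and the overflow remark concrete, here is a minimal sketch of both origin computations (the method names are mine):

```java
// Sketch: the division-based origin truncates nanoTime to whole milliseconds,
// losing up to ~1 ms per computation; multiplying the millis up to nanoseconds
// keeps full precision, and millis * 1_000_000 stays within long range for
// a couple hundred more years of epoch time.
public class OriginCheck {
    static long originFromDivision(long millis, long nanos) {
        return millis - nanos / 1_000_000L;   // truncates the sub-ms part of nanos
    }

    static long originNanosFromMultiplication(long millis, long nanos) {
        return millis * 1_000_000L - nanos;   // exact, expressed in nanoseconds
    }

    public static void main(String[] args) {
        long millis = System.currentTimeMillis();
        long nanos = System.nanoTime();
        System.out.println("origin (ms, truncated): " + originFromDivision(millis, nanos));
        System.out.println("origin (ns, exact):     " + originNanosFromMultiplication(millis, nanos));
    }
}
```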
A much simpler test is to accumulate elapsed time over some number of subsequent calls to nanoTime and see if it checks out with the first and last calls:
public class SimpleTimeCoherencyTest {
public static void main(String[] args) {
final long anchorNanos = System.nanoTime();
long lastNanoTime = System.nanoTime();
long accumulatedNanos = lastNanoTime - anchorNanos;
long numCallsSinceAnchor = 1L;
for(int i = 0; i < 100; i++) {
TestRun testRun = new TestRun(accumulatedNanos, lastNanoTime);
Thread t = new Thread(testRun);
t.start();
try {
t.join();
} catch(InterruptedException ie) {}
lastNanoTime = testRun.lastNanoTime;
accumulatedNanos = testRun.accumulatedNanos;
numCallsSinceAnchor += testRun.numCallsToNanoTime;
}
System.out.println(numCallsSinceAnchor);
System.out.println(accumulatedNanos);
System.out.println(lastNanoTime - anchorNanos);
}
static class TestRun
implements Runnable {
volatile long accumulatedNanos;
volatile long lastNanoTime;
volatile long numCallsToNanoTime;
TestRun(long acc, long last) {
accumulatedNanos = acc;
lastNanoTime = last;
}
@Override
public void run() {
long lastNanos = lastNanoTime;
long currentNanos;
do {
currentNanos = System.nanoTime();
accumulatedNanos += currentNanos - lastNanos;
lastNanos = currentNanos;
numCallsToNanoTime++;
} while(currentNanos - lastNanoTime <= 100000000L);
lastNanoTime = lastNanos;
}
}
}
That test does indicate the origin is the same (or at least the error is zero-mean).
As far as I know, System.currentTimeMillis() does indeed sometimes jump, depending on the underlying OS. I have observed this behaviour myself.
So your code gives me the impression that you compute the offset between System.nanoTime() and System.currentTimeMillis() repeatedly. You should rather observe this offset by calling System.currentTimeMillis() only once; only then can you say that System.nanoTime() sometimes jumps.
By the way, I will not pretend that the spec (the javadoc describes System.nanoTime() as relative to some fixed point) is always perfectly implemented. You can look at this discussion, where multi-core CPUs or changes in CPU frequency can negatively affect the required behaviour of System.nanoTime(). But one thing is certain: System.currentTimeMillis() is far more subject to arbitrary jumps.