I have a class that creates a random string based on BigInteger. It all works fine and efficiently when run standalone (Windows, 22 ms).
private SecureRandom random = new SecureRandom();

public String testMe() {
    return new BigInteger(130, random).toString(30);
}
When this code is put into a library (jar) and called from ColdFusion (9.0.2), it hangs for 1 to 1.5 minutes (on my Linux server). It is called from a CFC:
<cfset myTest = CreateObject("java", "com.acme.MyTest")>
<cffunction name="runTest" access="public">
<cfset var value = myTest.testMe()/>
</cffunction>
What am I missing?
I am just astonished that the difference was not noticeable on my Windows box.
There are different SecureRandom seeding strategies. On Windows it could be using a random seed based on the host name, which can wander off to a DNS server for a reverse lookup the first time it is used. That lookup can time out the request after a minute or so.
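If you want to see which strategy your JVM actually picked, a small diagnostic like the following (my own sketch, not part of the original code) prints the selected algorithm, its provider, and the seed source configured in java.security:

import java.security.SecureRandom;
import java.security.Security;

public class SecureRandomInfo {
    public static void main(String[] args) {
        SecureRandom sr = new SecureRandom();
        // Which implementation was selected (e.g. SHA1PRNG or NativePRNG) and from which provider
        System.out.println(sr.getAlgorithm() + " from provider " + sr.getProvider().getName());
        // The seed source configured in the JRE's java.security file (e.g. file:/dev/urandom)
        System.out.println("securerandom.source = " + Security.getProperty("securerandom.source"));
    }
}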
I would ensure you have a recent update of Java, because I believe this is a problem which was fixed in some update of Java 6. (Not to do with SecureRandom, but with the first network operation being incredibly slow.)
BTW this was tested on a Windows 7 box, and the first time it hung for a few seconds, but not after that.
If your code is hanging for 60 to 90 seconds, it is not due to this method; far more likely you are performing a GC, and this method is pausing because it allocates memory.
While BigInteger is slow, SecureRandom is much, much slower. If you want this to be faster, use plain Random.
It would be slightly faster if you used fewer bits.
BTW I would use base 36 (the maximum), rather than base 30.
static volatile String dontOptimiseAway = null;

public static void testRandomBigInteger(Random random) {
    long start = System.nanoTime();
    int runs = 10000;
    for (int i = 0; i < runs; i++) {
        dontOptimiseAway = new BigInteger(130, random).toString(36);
    }
    long time = System.nanoTime() - start;
    System.out.printf("%s took %.1f micro-seconds on average%n", random.getClass().getSimpleName(), time / runs / 1e3);
}

public static void main(String... ignored) {
    for (int i = 0; i < 10; i++) {
        testRandomBigInteger(new Random());
        testRandomBigInteger(new SecureRandom());
    }
}
prints
Random took 1.7 micro-seconds on average
SecureRandom took 2.1 micro-seconds on average
The time to generate the string is significant, but still nowhere near enough to cause a multi-second delay.
Related
I was confused by the following code:
public static void test() {
    long currentTime1 = System.currentTimeMillis();
    final int iBound = 10000000;
    final int jBound = 100;
    for (int i = 1; i <= iBound; i++) {
        int a = 1;
        int tot = 10;
        for (int j = 1; j <= jBound; j++) {
            tot *= a;
        }
    }
    long updateTime1 = System.currentTimeMillis();
    System.out.println("i:" + iBound + " j:" + jBound + "\nIt costs " + (updateTime1 - currentTime1) + " ms");
}
That's the first version, it costs 443ms on my computer.
first version result
public static void test() {
    long currentTime1 = System.currentTimeMillis();
    final int iBound = 100;
    final int jBound = 10000000;
    for (int i = 1; i <= iBound; i++) {
        int a = 1;
        int tot = 10;
        for (int j = 1; j <= jBound; j++) {
            tot *= a;
        }
    }
    long updateTime1 = System.currentTimeMillis();
    System.out.println("i:" + iBound + " j:" + jBound + "\nIt costs " + (updateTime1 - currentTime1) + " ms");
}
The second version costs 832ms.
second version result
The only difference is that I simply swapped the i and j bounds.
This result is incredible to me; I tested the same code in C and the difference there is not nearly as large.
Why do these two similar pieces of code perform so differently in Java?
My JDK version is OpenJDK 14.0.2.
TL;DR - This is just a bad benchmark.
I did the following:
Create a Main class with a main method.
Copy in the two versions of the test as test1() and test2().
In the main method do this:
while (true) {
    test1();
    test2();
}
Here is the output I got (Java 8).
i:10000000 j:100
It costs 35 ms
i:100 j:10000000
It costs 33 ms
i:10000000 j:100
It costs 33 ms
i:100 j:10000000
It costs 25 ms
i:10000000 j:100
It costs 0 ms
i:100 j:10000000
It costs 0 ms
i:10000000 j:100
It costs 0 ms
i:100 j:10000000
It costs 0 ms
i:10000000 j:100
It costs 0 ms
i:100 j:10000000
It costs 0 ms
i:10000000 j:100
It costs 0 ms
....
So as you can see, when I run two versions of the same method alternately in the same JVM, the times for each method are roughly the same.
But more importantly, after a small number of iterations the time drops to ... zero! What has happened is that the JIT compiler has compiled the two methods and (probably) deduced that their loops can be optimized away.
It is not entirely clear why people are getting different times when the two versions are run separately. One possible explanation is that the first time it is run, the JVM executable is being read from disk, and the second time it is already cached in RAM. Or something like that.
Another possible explanation is that JIT compilation kicks in earlier [1] with one version of test(), so the proportion of time spent in the slower interpreted (pre-JIT) phase is different between the two versions. (It may be possible to tease this out using JIT logging options.)
But it is immaterial really ... because the performance of a Java application while the JVM is warming up (loading code, JIT compiling, growing the heap to its working size, loading caches, etc) is generally speaking not important. And for the cases where it is important, look for a JVM that can do AOT compilation; e.g. GraalVM.
[1] This could be because of the way that the interpreter gathers stats. The general idea is that the bytecode interpreter accumulates statistics on things like branches until it has "enough". Then the JVM triggers the JIT compiler to compile the bytecodes to native code. When that is done, the code typically runs 10 or more times faster. The different looping patterns might make it reach "enough" earlier in one version than in the other. NB: I am speculating here. I offer zero evidence ...
The bottom line is that you have to be careful when writing Java benchmarks because the timings can be distorted by various JVM warmup effects.
For more information read: How do I write a correct micro-benchmark in Java?
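For illustration only, a minimal JMH version of this benchmark might look roughly like the sketch below (it assumes the JMH annotations and annotation processor are on the classpath; the class and method names are mine):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class LoopBenchmark {
    // non-final field so the JIT cannot treat the bound as a compile-time constant
    int jBound = 100;

    @Benchmark
    public int innerLoop() {
        int a = 1;
        int tot = 10;
        for (int j = 1; j <= jBound; j++) {
            tot *= a;
        }
        return tot; // returning the result lets JMH consume it, so the loop can't be optimized away
    }
}

JMH then takes care of the warm-up iterations and dead-code issues that distort hand-rolled timings like the ones above.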
I tested it myself and I get the same kind of difference (around 16 ms and 4 ms).
After testing, I found that:
Declaring a variable 1M times takes less time than multiplying by 1 1M times.
How?
I wrote a loop whose body does 100 multiplications by 1:
final int nb = 100000000;
for (int i = 1; i <= nb; i++) {
    i *= 1;
    i *= 1;
    [... written 20 times]
    i *= 1;
    i *= 1;
}
And one whose body does 100 declarations:
final int nb = 100000000;
for (int i = 1; i <= nb; i++) {
    int a = 0;
    int aa = 0;
    [... written 20 times]
    int aaaaaaaaaaaaaaaaaaaaaa = 0;
    int aaaaaaaaaaaaaaaaaaaaaaa = 0;
}
And I get 8 ms and 3 ms respectively, which seems to correspond to what you get.
You may get different results if you have a different processor.
You can find the answer in the first chapter of algorithm books:
The cost of producing and assigning a value is 1. So in the first version you have 2 declarations and assignments performed 10000000 times, and in the second version you perform them only 100 times, so you reduce the time...
In the first version:
5 operations in the main loop and 3 in the inner loop -> the inner loop is 3 * 100 = 300,
then 300 + 5 = 305, and 305 * 10000000 = 3050000000.
In the second version:
3 * 10000000 = 30000000 -> (30000000 + 5) * 100 = 3000000500.
So the second one is faster in theory, but I think it comes back to multi-core CPUs... they can do 10000000 parallel jobs in the first but only 100 parallel jobs in the second... so the first one became faster.
The short code below isolates the problem. Basically I'm timing the method addToStorage. I start by executing it one million times and I'm able to get its time down to around 723 nanoseconds. Then I do a short pause (using a busy-spinning method so as not to release the CPU core) and time the method again N times, at a different code location. To my surprise I find that the smaller N is, the bigger the addToStorage latency.
For example:
If N = 1 then I get 3.6 micros
If N = 2 then I get 3.1 and 2.5 micros
If N = 5 then I get 3.7, 1.8, 1.7, 1.5 and 1.5 micros
Does anyone know why this is happening and how to fix it? I would like my method to consistently perform at the fastest time possible, no matter where I call it.
Note: I would not think it is thread-related since I'm not using Thread.sleep. I've also tested using taskset to pin my thread to a CPU core, with the same results.
import java.util.ArrayList;
import java.util.List;

public class JvmOdd {

    private final StringBuilder sBuilder = new StringBuilder(1024);
    private final List<String> storage = new ArrayList<String>(1024 * 1024);

    public void addToStorage() {
        sBuilder.setLength(0);
        sBuilder.append("Blah1: ").append(System.nanoTime()).append('\n');
        sBuilder.append("Blah2: ").append(System.nanoTime()).append('\n');
        sBuilder.append("Blah3: ").append(System.nanoTime()).append('\n');
        sBuilder.append("Blah4: ").append(System.nanoTime()).append('\n');
        sBuilder.append("Blah5: ").append(System.nanoTime()).append('\n');
        sBuilder.append("Blah6: ").append(System.nanoTime()).append('\n');
        sBuilder.append("Blah7: ").append(System.nanoTime()).append('\n');
        sBuilder.append("Blah8: ").append(System.nanoTime()).append('\n');
        sBuilder.append("Blah9: ").append(System.nanoTime()).append('\n');
        sBuilder.append("Blah10: ").append(System.nanoTime()).append('\n');
        storage.add(sBuilder.toString());
    }

    public static long mySleep(long t) {
        long x = 0;
        for (int i = 0; i < t * 10000; i++) {
            x += System.currentTimeMillis() / System.nanoTime();
        }
        return x;
    }

    public static void main(String[] args) throws Exception {
        int warmup = Integer.parseInt(args[0]);
        int mod = Integer.parseInt(args[1]);
        int passes = Integer.parseInt(args[2]);
        int sleep = Integer.parseInt(args[3]);
        JvmOdd jo = new JvmOdd();

        // first warm up
        for (int i = 0; i < warmup; i++) {
            long time = System.nanoTime();
            jo.addToStorage();
            time = System.nanoTime() - time;
            if (i % mod == 0) System.out.println(time);
        }

        // now see how fast the method is:
        while (true) {
            System.out.println();
            // Thread.sleep(sleep);
            mySleep(sleep);
            long minTime = Long.MAX_VALUE;
            for (int i = 0; i < passes; i++) {
                long time = System.nanoTime();
                jo.addToStorage();
                time = System.nanoTime() - time;
                if (i > 0) System.out.print(',');
                System.out.print(time);
                minTime = Math.min(time, minTime);
            }
            System.out.println("\nMinTime: " + minTime);
        }
    }
}
Executing:
$ java -server -cp . JvmOdd 1000000 100000 1 5000
59103
820
727
772
734
767
730
726
840
736
3404
MinTime: 3404
There is so much going on in here that I don't know where to start. But let's start here....
long time = System.nanoTime();
jo.addToStorage();
time = System.nanoTime() - time;
The latency of addToStorage() cannot be measured using this technique. It simply runs too quickly, meaning you're likely below the resolution of the clock. Without running this, my guess is that your measurements are dominated by clock edge counts. You'll need to bulk up the unit of work to get a measurement with lower levels of noise in it.
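As a rough sketch of what "bulking up" could look like (reusing the jo instance from the question; the batch size is an arbitrary choice of mine):

// Time a batch of calls and divide, so each sample is far above the clock's resolution
int batch = 10000;
long start = System.nanoTime();
for (int i = 0; i < batch; i++) {
    jo.addToStorage();
}
long avgNanos = (System.nanoTime() - start) / batch;
System.out.println("~" + avgNanos + " ns per call, averaged over " + batch + " calls");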
As for what is happening? There are a number of call-site optimizations, the most important being inlining. Inlining would totally eliminate the call site, but it's a path-specific optimization. If you call the method from a different place, that call would follow the slow path of performing a virtual method lookup followed by a jump to that code. So to see the benefits of inlining from a different path, that path would also have to be "warmed up".
I would strongly recommend that you look at JMH (an OpenJDK project). It has facilities such as Blackhole which will help with the effects of CPU clocks winding down. You might also want to evaluate the quality of the benchmark with the help of tools like JITWatch (an AdoptOpenJDK project), which will take logs produced by the JIT and help you interpret them.
There is so much to this subject, but the bottom line is that you can't write a simplistic benchmark like this and expect it to tell you anything useful. You will need to use JMH.
I suggest watching this: https://www.infoq.com/presentations/jmh about microbenchmarking and JMH
There's also a chapter on microbenchmarking & JMH in my book: http://shop.oreilly.com/product/0636920042983.do
Java internally uses a JIT (Just-In-Time) compiler. Based on the number of times the same method executes, it optimizes the instructions so the method performs better. For small call counts the method runs in the normal interpreted mode, which is not optimized and therefore shows a higher execution time. When the same method is called more times, the JIT kicks in and it executes in less time because of the optimized instructions generated for that method.
My general experience with Java 7 tells me that it is faster than Java 6. However, I've run into enough information that makes me believe that this is not always the case.
The first bit of information comes from Minecraft Snooper data found here. My intention was to look at that data to determine the effects of the different switches used to launch Minecraft. For example I wanted to know if using -Xmx4096m had a negative or positive effect on performance. Before I could get there I looked at the different version of Java being used. It covers everything from 1.5 to a developer using 1.8. In general as you increase the java version you see an increase in fps performance. Throughout the different versions of 1.6 you even see this gradual trend up. I honestly wasn't expecting to see as many different versions of java still in the wild but I guess people don't run the updates like they should.
Some time around the later versions of 1.6 you get the highest peaks. 1.7 performs about 10 fps on average below the later versions of 1.6, but still higher than the early versions of 1.6. On a sample from my own system it's almost impossible to see the difference, but when looking at the broader sample it's clear.
To control for the possibility that someone might have found a magic switch for Java, I control by only looking at the data with no switches being passed. That way I'd have a reasonable control before I started looking at the different flags.
I dismissed most of what I was seeing as this could be some Magic Java 6 that someone's just not sharing with me.
Now I've been working on another project that requires me to pass an array as an InputStream to be processed by another API. Initially I used a ByteArrayInputStream because it would work out of the box. When I looked at the code for it I noticed that every function was synchronized. Since this was unnecessary for this project, I rewrote one with the synchronization stripped out. I then decided that I wanted to know what the general cost of synchronization was for me in this situation.
I mocked up a simple test just to see. I timed everything with System.nanoTime() and used Java 1.6_20 x86, 1.7.0-b147 AMD64, and 1.7_15 AMD64, also using the -server flag. I expected the AMD64 versions to outperform based on architecture alone and to have any Java 7 advantages. I also looked at the 25th, 50th, and 75th percentiles (blue, red, green). However 1.6 with no -server beat the pants off of every other configuration.
So my question is:
What is in the 1.6 -server option that is impacting performance that is also defaulted to on in 1.7?
I know most of the speed enhancement in 1.7 came from defaulting some of the more radical performance options in 1.6 to on, but one of them is causing a performance difference. I just don't know which ones to look at.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ByteInputStream extends InputStream {

    public static void main(String args[]) throws IOException {
        String song = "This is the song that never ends";
        byte[] data = song.getBytes();
        byte[] read = new byte[data.length];
        ByteArrayInputStream bais = new ByteArrayInputStream(data);
        ByteInputStream bis = new ByteInputStream(data);
        long startTime, endTime;
        for (int i = 0; i < 10; i++) {
            /*code for ByteInputStream*/
            /*
            startTime = System.nanoTime();
            for (int ctr = 0; ctr < 1000; ctr++) {
                bis.mark(0);
                bis.read(read);
                bis.reset();
            }
            endTime = System.nanoTime();
            System.out.println(endTime - startTime);
            */

            /*code for ByteArrayInputStream*/
            startTime = System.nanoTime();
            for (int ctr = 0; ctr < 1000; ctr++) {
                bais.mark(0);
                bais.read(read);
                bais.reset();
            }
            endTime = System.nanoTime();
            System.out.println(endTime - startTime);
        }
    }

    private final byte[] array;
    private int pos;
    private int min;
    private int max;
    private int mark;

    public ByteInputStream(byte[] array) {
        this(array, 0, array.length);
    }

    public ByteInputStream(byte[] array, int offset, int length) {
        min = offset;
        max = offset + length;
        this.array = array;
        pos = offset;
    }

    @Override
    public int available() {
        return max - pos;
    }

    @Override
    public boolean markSupported() {
        return true;
    }

    @Override
    public void mark(int limit) {
        mark = pos;
    }

    @Override
    public void reset() {
        pos = mark;
    }

    @Override
    public long skip(long n) {
        pos += n;
        if (pos > max) {
            pos = max;
        }
        return pos;
    }

    @Override
    public int read() throws IOException {
        if (pos >= max) {
            return -1;
        }
        return array[pos++] & 0xFF;
    }

    @Override
    public int read(byte b[], int off, int len) {
        if (pos >= max) {
            return -1;
        }
        if (pos + len > max) {
            len = max - pos;
        }
        if (len <= 0) {
            return 0;
        }
        System.arraycopy(array, pos, b, off, len);
        pos += len;
        return len;
    }

    @Override
    public void close() throws IOException {
    }
}// end class
I think, as the others are saying, that your tests are too short to see the core issues - the graph is showing nanoTime, and that implies the core section being measured completes in 0.0001 to 0.0006s.
Discussion
The key difference in -server and -client is that -server expects the JVM to be around for a long time and therefore expends effort early on for better long-term results. -client aims for fast startup times and good-enough performance.
In particular, HotSpot runs with more optimizations, and these take more CPU to execute. In other words, with -server, you may be seeing the cost of the optimizer outweighing any gains from the optimization.
See Real differences between "java -server" and "java -client"?
Alternatively, you may also be seeing the effects of tiered compilation where, in Java 7, hotspot doesn't kick in so fast. With only 1000 iterations, the full optimization of your code won't be done until later, and the benefits will therefore be lesser.
You might get some insight if you run java with the -Xprof option: the JVM will dump some data about the time spent in various methods, both interpreted and compiled. It should give an idea about what was compiled, and the ratio of (CPU) time before HotSpot kicked in.
However, to get a true picture, you really need to run this much longer - minutes, not milliseconds - to allow Java and the OS to warm up. It would be even better to loop the test in main (so you have a loop containing your instrumented main test loop) so that you can ignore the warm-up.
EDIT Changed seconds to minutes to ensure that hotspot, the jvm and the OS are properly 'warmed up'
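A sketch of that kind of wrapping (runOnce() is a placeholder for your instrumented test body, and the counts are arbitrary):

// Repeat the whole instrumented test and only trust the later, warmed-up runs
for (int run = 0; run < 20; run++) {
    long start = System.nanoTime();
    runOnce();                         // placeholder for the original test
    long elapsed = System.nanoTime() - start;
    if (run >= 10)                     // discard the first half as warm-up
        System.out.println("run " + run + ": " + elapsed + " ns");
}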
My mini benchmark:
import java.math.*;
import java.util.*;
import java.io.*;

public class c
{
    static Random rnd = new Random();

    public static String addDigits(String a, int n)
    {
        if (a == null) return null;
        if (n <= 0) return a;
        for (int i = 0; i < n; i++)
            a += rnd.nextInt(10);
        return a;
    }

    public static void main(String[] args) throws IOException
    {
        int n = 10000; // number of iterations
        int k = 10;    // number of digits added at each iteration
        BigInteger a;
        BigInteger b;
        String as = "";
        String bs = "";
        as += rnd.nextInt(9) + 1;
        bs += rnd.nextInt(9) + 1;
        a = new BigInteger(as);
        b = new BigInteger(bs);
        FileWriter fw = new FileWriter("c.txt");
        long t1 = System.nanoTime();
        a.multiply(b);
        long t2 = System.nanoTime();
        //fw.write("1,"+(t2-t1)+"\n");
        if (k > 0) {
            as = addDigits(as, k - 1);
            bs = addDigits(as, k - 1);
        }
        for (int i = 0; i < n; i++)
        {
            a = new BigInteger(as);
            b = new BigInteger(bs);
            t1 = System.nanoTime();
            a.multiply(b);
            t2 = System.nanoTime();
            fw.write(((i + 1) * k) + "," + (t2 - t1) + "\n");
            if (i < n - 1)
            {
                as = addDigits(as, k);
                bs = addDigits(as, k);
            }
            System.out.println((i + 1) * k);
        }
        fw.close();
    }
}
It measures the multiplication time of n-digit BigIntegers.
Result:
You can easily see the trend, but why is there so much noise above 50000 digits?
Is it because of the garbage collector, or is there something else that affects my results?
When performing the test, there were no other applications running.
Result from test with only odd digits. The test was shorter (n=1000, k=100)
Odd digits (n=10000, k=10)
As you can see there is huge noise between 65000 and 70000. I wonder why...
Odd digits (n=10000, k=10), System.gc() every 1000 iterations
Results in noise between 50000-70000
I also suspect this is a JVM warmup effect. Not warmup involving classloading or the JIT compiler, but warmup of the heap.
Put a (java) loop around the whole benchmark, and run it a number of times. (If this gives you the same graphs as before ... you will have evidence that this is not a warmup effect. Currently you don't have any empirical evidence one way or the other.)
Another possibility is that the noise is caused by your benchmark's interactions with the OS and/or other stuff running on the machine.
You are writing your timing data to an unbuffered stream. That means LOTS of syscalls, and (potentially) lots of fine-grained disc writes.
You are making LOTS of calls to nanoTime(), and that might introduce noise.
If something else is running on your machine (e.g. you are web browsing) that will slow down your benchmark for a bit and introduce noise.
There could be competition over physical memory ... if you've got too much running on your machine for the amount of RAM.
Finally, a certain amount of noise is inevitable, because each of those multiply calls generates garbage, and the garbage collector is going to need to work to deal with it.
Finally finally, if you manually run the garbage collector (or increase the heap size) to "smooth out" the data points, what you are actually doing is concealing one of the costs of multiply calls. The resulting graphs look nice, but they are misleading:
The noisiness reflects what will happen in real life.
The true cost of the multiply actually includes the amortized cost of running the GC to deal with the garbage generated by the call.
To get measurements that reflect the way that BigInteger behaves in real life, you need to run the test a large number of times, calculate average times and fit a curve to the average data-points.
Remember, the real aim of the game is to get scientifically valid results ... not a smooth curve.
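As a sketch of what that averaging could look like inside the existing loop from the question (the repeat count is an arbitrary choice of mine; this would replace the single timed multiply):

// Average several multiplications per digit count instead of logging one noisy sample
int repeats = 20;
long total = 0;
for (int r = 0; r < repeats; r++) {
    t1 = System.nanoTime();
    a.multiply(b);
    total += System.nanoTime() - t1;
}
fw.write(((i + 1) * k) + "," + (total / repeats) + "\n");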
If you do a microbenchmark, you must "warm up" the JVM first to let the JIT optimize the code, and then you can measure the performance. Otherwise you are measuring the work done by the JIT and that can change the result on each run.
The "noise" happens probably because the cache of the CPU is exceeded and the performance starts degrading.
Not sure if this question should be here or on Server Fault, but it's Java-related so here it is:
I have two servers, with very similar technology:
server1 is Oracle/Sun x86 with dual x5670 CPU (2.93 GHz) (4 cores each), 12GB RAM.
server2 is Dell R610 with dual x5680 CPU (3.3 GHz) (6 cores each), 16GB RAM.
both are running Solaris x86, with exact same configuration.
both have turbo-boost enabled, and no hyper-threading.
server2 should therefore be SLIGHTLY faster than server1.
I'm running the following short test program on the two platforms.
import java.io.*;

public class TestProgram {

    public static void main(String[] args) {
        new TestProgram();
    }

    public TestProgram() {
        try {
            PrintWriter writer = new PrintWriter(new FileOutputStream("perfs.txt", true), true);
            for (int i = 0; i < 10000; i++) {
                long t1 = System.nanoTime();
                System.out.println("0123456789qwertyuiop0123456789qwertyuiop0123456789qwertyuiop0123456789qwertyuiop");
                long t2 = System.nanoTime();
                writer.println((t2 - t1));
                //try {
                //    Thread.sleep(1);
                //}
                //catch(Exception e) {
                //    System.out.println("thread sleep exception");
                //}
            }
        }
        catch (Exception e) {
            e.printStackTrace(System.out);
        }
    }
}
I'm opening perfs.txt and averaging the results, I get:
server1: average = 1664 , trim 10% = 1615
server2: average = 1510 , trim 10% = 1429
which is a somewhat expected result (server2 perfs > server1 perfs).
Now, I uncomment the "Thread.sleep(1)" part and test again; the results are now:
server1: average = 27598 , trim 10% = 26583
server2: average = 52320 , trim 10% = 39359
This time server2 perfs < server1 perfs.
That doesn't make any sense to me...
Obviously I'm looking for a way to improve server2's performance in the second case. There must be some kind of configuration that is different, and I don't know which one.
The OS is identical and the Java version is identical.
Could it be linked to the number of cores?
Maybe it's a BIOS setting? Although the BIOSes are different (AMI vs Dell), the settings seem pretty similar.
I'll update the Dell's BIOS soon and retest, but I would appreciate any insight...
Thanks.
I would try a different test program; try running something like this.
public class Timer implements Runnable
{
    public void startTimer()
    {
        time = 0;
        running = true;
        thread = new Thread(this);
        thread.start();
    }

    public int stopTimer()
    {
        running = false;
        return time;
    }

    public void run()
    {
        try
        {
            while (running)
            {
                Thread.sleep(1);
                time++;
            }
        } catch (Exception e) { e.printStackTrace(); }
    }

    private int time;
    private Thread thread;
    private boolean running;
}
That's the timer; now here's the main:
public class Main
{
    public static void main(String args[])
    {
        Timer timer = new Timer();
        timer.startTimer();

        for (int x = 0; x < 1000; x++)
            System.out.println("Testing!!");

        System.out.println("\n\nTime Taken: " + timer.stopTimer());
    }
}
I think this is a good way to test which system is truly running faster. Try this and let me know how it goes.
Ok, I have a theory: the Thread.sleep() prevents the HotSpot compiler from kicking in. Because you have a sleep, it assumes the loop isn't "hot", i.e. that it doesn't matter too much how efficient the code in the loop is (because, after all, your sleep's only purpose could be to slow things down).
Hence, you add a Thread.sleep() inside the loop, and the other stuff in the loop also runs slower.
I wonder if it might make a difference if you have a loop inside a loop and measure the performance of the inner loop? (and only have the Thread.sleep() in the outer loop). In this case the compiler might optimize the inner loop (if there are enough iterations).
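Something along these lines, just as a sketch (the counts are arbitrary, and it reuses the writer and the enclosing try/catch from the question's code):

// Sleep only in the outer loop; the inner loop being timed stays "hot"
for (int outer = 0; outer < 100; outer++) {
    Thread.sleep(1);
    long t1 = System.nanoTime();
    for (int inner = 0; inner < 10000; inner++) {
        System.out.println("0123456789qwertyuiop0123456789qwertyuiop0123456789qwertyuiop0123456789qwertyuiop");
    }
    long t2 = System.nanoTime();
    writer.println((t2 - t1) / 10000); // average time per println
}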
(Brings up a question: if this code is a test case extracted from production code, why does the production code sleep?)
I actually updated the BIOS on the Dell R610 and ensured all BIOS CPU parameters are adjusted for best low-latency performance (no hyper-threading, etc...).
It solved it. The performance with and without the Thread.sleep makes sense, and the overall performance of the R610 in both cases is much better than the Sun's.
It appears the original BIOS did not make correct or full use of the Nehalem capabilities (while the Sun's did).
You are testing how fast the console updates. This is entirely OS and window dependent. If you run this in your IDE it will be much slower than running in an xterm. Even which font you use and how big your window is will make a big difference to performance. If your window is closed while you run the test, this will improve performance.
Here is how I would run the same test. This test is self contained and does the analysis you need.
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Arrays;

public class TestProgram {
    public static void main(String... args) throws IOException {
        File file = new File("out.txt");
        file.deleteOnExit();
        PrintWriter out = new PrintWriter(new FileWriter(file), true);

        int runs = 100000;
        long[] times = new long[runs];
        for (int i = -10000; i < runs; i++) {
            long t1 = System.nanoTime();
            out.println("0123456789qwertyuiop0123456789qwertyuiop0123456789qwertyuiop0123456789qwertyuiop");
            long t2 = System.nanoTime();
            if (i >= 0)
                times[i] = t2 - t1;
        }
        out.close();

        Arrays.sort(times);
        System.out.printf("Median time was %,d ns, the 90%%tile was %,d ns%n", times[times.length / 2], times[times.length * 9 / 10]);
    }
}
prints on a 2.6 GHz Xeon Windows Vista box
Median time was 3,213 ns, the 90%tile was 3,981 ns