I have a nested loop which iterates over all combinations of two elements from an array. However, if the sum of the two values is too large, I want to skip to the next x.
Here's the Java code snippet:
/* Let array be an array of integers
 * and size be equal to its length.
 */
for (int a = 0; a < size; a++)
{
    int x = array[a];
    for (int b = 0; b < size; b++)
    {
        int y = array[b];
        if ((x + y) < MAX)
        {
            // do stuff with x and y
        }
        else
        {
            // x + y is too big; skip to next x
            break;
        }
    }
}
This works exactly as expected.
However, if I replace the break statement with b = size;, it surprisingly runs about 20% faster. Note that by setting b = size;, the inner for conditional becomes false and execution continues to the next iteration of the outer a loop.
Why would this happen? It seems like break should be faster, as I would have thought it saves an assignment, jump, and compare. Though clearly it does not.
"Why would this happen? It seems like break should be faster ..."
IMO, the most likely explanation is some kind of JVM warmup effect, especially since the overall times (120 ms versus 74 ms) are so small. If you wrapped that loop in another one, so that you could perform the time measurements repeatedly in the same run, this anomaly would most likely go away.
(Just increasing the array sizes isn't necessarily going to help. The best way to be sure that you have accounted for JVM warmup anomalies is to use a benchmarking framework; e.g. Caliper. But, failing that, put the "snippet" into a method and call it repeatedly.)
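For illustration, here is a minimal sketch of that approach (the class name, array contents, and the MAX stand-in are mine, purely illustrative). Timing each call separately makes the slow early passes, where the JVM is still interpreting or JIT-compiling, stand out from the later steady-state passes:

import java.util.Random;

public class WarmupCheck {
    // The question's nested loop, moved into a method so it can be timed repeatedly
    static long runOnce(int[] array, int max) {
        long count = 0;
        for (int a = 0; a < array.length; a++) {
            int x = array[a];
            for (int b = 0; b < array.length; b++) {
                int y = array[b];
                if ((x + y) < max) {
                    count++; // stands in for "do stuff with x and y"
                } else {
                    break;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] array = new Random(42).ints(2_000).toArray();
        for (int pass = 0; pass < 20; pass++) {
            long start = System.nanoTime();
            long result = runOnce(array, 0);
            System.out.printf("pass %d: %d ms (result %d)%n",
                    pass, (System.nanoTime() - start) / 1_000_000, result);
        }
    }
}

On a typical JVM the first few passes are noticeably slower than the rest; comparing break against b = size; only makes sense on the steady-state numbers.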
"... as I would have thought it saves an assignment, jump, and compare. Though clearly it does not."
It is not clear at all. Your Java code gets compiled to bytecodes by javac (or your IDE). When you run the code, it starts out interpreting the bytecodes, and after a while they are compiled to native code by the JIT compiler. Two things follow from that:
- The JIT compilation takes time that is (probably) included in your time measurements ... and is one source of warmup anomalies.
- The code produced by the JIT compiler is influenced by statistics gathered while interpreting. One of the things typically measured is whether branches (e.g. if tests) go one way or the other. This is used to make branch predictions ... which, when correct, make the test-and-branch instruction sequences a lot faster (a small demo follows).
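To make the branch-prediction point concrete, here is a small self-contained demo (the sizes and the 128 threshold are arbitrary choices of mine): the same loop runs over sorted data, where the branch is highly predictable, and over shuffled data, where it is not. Depending on the JIT and CPU (the compiler may emit a branch-free conditional move), you may or may not see a large gap:

import java.util.Arrays;
import java.util.Random;

public class BranchDemo {
    static long sumAbove(int[] data) {
        long sum = 0;
        for (int v : data) {
            if (v >= 128) { // predictable on sorted input, unpredictable on random input
                sum += v;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] shuffled = new Random(1).ints(10_000_000, 0, 256).toArray();
        int[] sorted = shuffled.clone();
        Arrays.sort(sorted);
        for (int pass = 0; pass < 5; pass++) { // repeat so the JIT can warm up
            for (int[] d : new int[][] { sorted, shuffled }) {
                long t0 = System.nanoTime();
                long sum = sumAbove(d);
                System.out.printf("%s: %d ms (sum %d)%n",
                        d == sorted ? "sorted  " : "shuffled",
                        (System.nanoTime() - t0) / 1_000_000, sum);
            }
        }
    }
}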
Related
int y = 0; // per the edit below: y starts at 0, and height is always 200
for (int x = 0; x < height; x++) {
    map[x][y] = new Plot(x, y, "map");
    if (x == 199 && y < 199) {
        x = -1; // so the loop's x++ brings us back to 0 for the next row
        y++;
    }
}
I have this code here that I set up to create a 2D array of 200x200 objects for a map, and I would like to know whether it is the same speed as the equivalent pair of nested loops, or whether it indeed runs faster. I'm trying to optimize the array creation.
Thanks!
EDIT: assuming y starts at 0, and height is always 200
EDIT 2: Thank you to everybody who answered :D Yes, I could've created something to test it, but eh
The generated code will not be faster - the instructions generated will be almost identical. In fact, with compiler optimisations turned on, the optimiser might even struggle with the single-loop version, because the embedded reset keeps it from being recognised as a simple loop.
The vast majority of the execution time will be spent allocating the new memory and writing pointers into the map arrays. In fact, one potential improvement does leap out: at the moment, you're accessing the map arrays like this:
map[0][0]
map[1][0]
map[2][0]
...
map[0][1]
map[1][1]
map[2][1]
...
map[0][2]
and so on.
This is undesirable, because the addresses are far apart in memory. It is always better to access memory in such a way that addresses that are close to each other are accessed near to each other in time, because this is much friendlier to the cache.
So if you were to swap the order of iteration round (e.g. [0][0], [0][1], [0][2], ... [1][0], [1][1], [1][2]...) then you might find your code runs quicker - or you might not; it all depends on the architecture of the machine.
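For instance, a sketch of the swapped order (assuming, per the question's edit, a 200x200 map and a Plot(x, y, String) constructor). With x fixed in the outer loop, consecutive writes land in the same row array map[x], whose slots are adjacent in memory:

for (int x = 0; x < 200; x++) {
    for (int y = 0; y < 200; y++) { // row-by-row: [0][0], [0][1], ..., [0][199], [1][0], ...
        map[x][y] = new Plot(x, y, "map");
    }
}

As the answer says, whether this actually helps depends on the machine; in Java only the row arrays themselves are contiguous, not the Plot objects they point to.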
Just think about it: if you have, say, a 200x200 array and you want to put a new instance into each of its cells, you have to do it for every single "cell" = 40,000 cells. You can't do better than that, no matter the optimization.
Even if you don't use a for loop and write it out by hand:
x[0][0] = ...
x[0][1] = ...
you still have to execute 40,000 assignments.
I am wondering how to reach a compromise between fast cancel responsiveness and performance for my threads, whose bodies look similar to this loop:
for (int i = 0; i < HUGE_NUMBER; ++i) {
    // some easy computation, like adding numbers
    // which are the result of the previous iteration of this loop
}
If the computation in the loop body is quite easy, then adding a simple check-and-react to each iteration:
if (Thread.currentThread().isInterrupted()) {
    throw new InterruptedException("Cancelled");
}
may slow down execution of the code.
Even if I change the above condition to:
if (i % 100 == 0 && Thread.currentThread().isInterrupted()) {
    throw new InterruptedException("Cancelled");
}
Then the compiler cannot just precompute the values of i and perform the check only at specific iterations, since HUGE_NUMBER is a variable and can take different values.
So I'd like to ask if there's any smart way of adding such a check to the presented code, knowing that:
- HUGE_NUMBER is a variable and can take different values
- the loop body consists of code that is easy to compute but relies on previous computations.
What I want to say is that one iteration of the loop is quite fast, but a HUGE_NUMBER of iterations can take a lot more time, and that accumulated cost is what I want to avoid.
First of all, use Thread.interrupted() instead of Thread.currentThread().isInterrupted() in that case (it checks the current thread and also clears its interrupt status, which is appropriate when you convert the interrupt into an exception).
You should think about whether checking the interruption flag really slows down your calculation too much! On the one hand, if the loop body is VERY simple, even a huge number of iterations (the upper limit is Integer.MAX_VALUE) will run in a few seconds. Even if checking the interruption flag results in an overhead of 20 or 30%, this will not add very much to the total runtime of your algorithm.
On the other hand, if the loop body is not that simple and so it runs longer anyway, testing the interruption flag will not be noticeable overhead, I think.
Don't do tricks like if (i % 10000 == 0), as this will slow down calculation much more than a 'short' Thread.interrupted().
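In other words, the recommended version is just the question's loop with the per-iteration check switched to Thread.interrupted() (a sketch; HUGE_NUMBER and the loop body are the question's placeholders):

for (int i = 0; i < HUGE_NUMBER; ++i) {
    if (Thread.interrupted()) {
        // flag is checked and cleared in one cheap static call
        throw new InterruptedException("Cancelled");
    }
    // some easy computation using the previous iteration's result
}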
There is one small trick that you could use - but think twice because it makes your code more complex and less readable:
Whenever you have a loop like that:
for (int i = 0; i < max; i++) {
    // loop-body using i
}
you can split up the total range of i into several intervals of size INTERVAL_SIZE:
int start = 0;
while (start < max) {
    final int next = Math.min(start + INTERVAL_SIZE, max);
    for (int i = start; i < next; i++) {
        // loop-body using i
    }
    start = next;
}
Now you can add your interruption check right before or after the inner loop!
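Putting the two together might look like this (a sketch; an INTERVAL_SIZE of one million matches the benchmark below):

final int INTERVAL_SIZE = 1_000_000;
int start = 0;
while (start < max) {
    // one interruption check per interval instead of one per iteration
    if (Thread.interrupted()) {
        throw new InterruptedException("Cancelled");
    }
    final int next = Math.min(start + INTERVAL_SIZE, max);
    for (int i = start; i < next; i++) {
        // loop-body using i
    }
    start = next;
}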
I've done some tests on my system (JDK 7) using the following loop-body
if (i % 2 == 0) x++;
and Integer.MAX_VALUE / 2 iterations. The results are as follows (after warm-up):
- Simple loop without any interruption checks: 1,949 ms
- Simple loop with check per iteration: 2,219 ms (+14%)
- Simple loop with check per 1 million-th iteration using modulo: 3,166 ms (+62%)
- Simple loop with check per 1 million-th iteration using bit-mask: 2,653 ms (+36%)
- Interval-loop as described above with check in outer loop: 1,972 ms (+1.1%)
So even if the loop body is as simple as above, the overhead of a per-iteration check is only 14%! The recommendation is therefore to not do any tricks but simply check the interruption flag via Thread.interrupted() in every iteration!
Make your calculation an Iterator.
Although this does not sound terribly useful, the benefit here is that you can then quite easily write filter iterators that can be surprisingly flexible. They can be added and removed simply - even through configuration if you wish. There are a number of benefits - try it.
You can then add a filtering Iterator that watches the time and checks for interrupt on a regular basis - or something even more flexible.
You can even add further filtering without compromising the original calculation with interspersed, brittle status checks.
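A minimal sketch of such a filter (the class name and polling interval are mine, not from the answer): it wraps any Iterator and polls the interrupt flag every few thousand elements, leaving the wrapped calculation untouched.

import java.util.Iterator;
import java.util.concurrent.CancellationException;

class CancellableIterator<T> implements Iterator<T> {
    private static final int CHECK_INTERVAL = 4096; // illustrative polling rate
    private final Iterator<T> delegate;
    private int seen = 0;

    CancellableIterator(Iterator<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public boolean hasNext() {
        // CancellationException is unchecked, so it fits the Iterator interface
        if (++seen % CHECK_INTERVAL == 0 && Thread.currentThread().isInterrupted()) {
            throw new CancellationException("Cancelled");
        }
        return delegate.hasNext();
    }

    @Override
    public T next() {
        return delegate.next();
    }
}

Wrapping the calculation's iterator is then a one-line change, and further filters (time limits, progress reporting) stack the same way.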
This is the context of my program.
A function has a 50% chance to do nothing and a 50% chance to call itself twice.
What is the probability that the program will finish?
I wrote this piece of code, and it works great apparently. The answer, which may not be obvious to everyone, is that this program has a 100% chance to finish. But there is a StackOverflowError (how convenient ;) ) when I run this program, occurring in Math.random(). Could someone point out to me where it comes from, and tell me if maybe my code is wrong?
static int bestDepth = 0;
static int numberOfPrograms = 0;

@Test
public void testProba() {
    for (int i = 0; i < 1000; i++) {
        long time = System.currentTimeMillis();
        bestDepth = 0;
        numberOfPrograms = 0;
        loop(0);
        LOGGER.info("Best depth: " + bestDepth + " in " + (System.currentTimeMillis() - time) + "ms");
    }
}

public boolean loop(int depth) {
    numberOfPrograms++;
    if (depth > bestDepth) {
        bestDepth = depth;
    }
    if (proba()) {
        return true;
    } else {
        return loop(depth + 1) && loop(depth + 1);
    }
}

public boolean proba() {
    return Math.random() > 0.5;
}
...
java.lang.StackOverflowError
    at java.util.Random.nextDouble(Random.java:394)
    at java.lang.Math.random(Math.java:695)
...
I suspect the stack size, and hence the number of calls that fit in it, is limited, but I don't really see the problem here.
Any advice or clue is obviously welcome.
Fabien
EDIT: Thanks for your answers, I ran it with java -Xss4m and it worked great.
Whenever a function is called or a local variable is created, the stack is used to reserve space for it.
Now, you are recursively calling the loop function. Each call places its argument and a return address on the stack, so a lot of information is being placed on the stack.
However, the stack is limited. The CPU and OS have built-in mechanics that protect against data being pushed past the stack's reserved bounds and eventually overwriting other data (as the stack grows down); on x86 this is called a General Protection Fault. When that protection triggers, the OS notifies the currently running task, and the JVM surfaces this as the StackOverflowError.
This seems to be happening in Math.random() only because that happens to be the deepest call on the stack at the moment the limit is exceeded; the real cause is the recursion in loop().
In order to handle your problem, I suggest you increase the stack size using the -Xss option of Java.
As you said, the loop function recursively calls itself. Now, tail-recursive calls can be rewritten into loops by a compiler without occupying any stack space (this is called tail-call optimization, TCO). Unfortunately, the Java compiler does not do that, and your loop is not tail-recursive anyway. Your options here are:
1. Increase the stack size, as suggested by the other answers. Note that this will just defer the problem: no matter how large your stack is, its size is still finite. You just need a longer chain of recursive calls to break out of the space limit.
2. Rewrite the function in terms of loops.
3. Use a language whose compiler performs TCO. You will still need to either:
3.1. rewrite the function to be tail-recursive, or
3.2. rewrite it with trampolines (only minor changes are needed). A good paper explaining trampolines and generalizing them further is "Stackless Scala with Free Monads".
To illustrate point 3.2, here's how the rewritten function would look:
// Scala sketch; Trampoline and done() come from a trampoline library
// (e.g. scalaz's scalaz.Free.Trampoline and scalaz.Trampoline.done)
def loop(depth: Int): Trampoline[Boolean] = {
  numberOfPrograms = numberOfPrograms + 1
  if (depth > bestDepth) {
    bestDepth = depth
  }
  if (proba()) done(true)
  else for {
    r1 <- loop(depth + 1)
    r2 <- loop(depth + 1)
  } yield r1 && r2
}
And the initial call would be loop(0).run.
Increasing the stack size is a nice temporary fix. However, as proved by this post, even though the loop() function is guaranteed to return eventually, the average stack depth it requires is infinite. Thus, no matter how much you increase the stack, your program will eventually run out of memory and crash.
There is nothing we can do to prevent this for certain; we always need to encode the stack in memory somehow, and we'll never have infinite memory. However, there is a way to reduce the amount of memory you're using by about 2 orders of magnitude. This should give your program a significantly higher chance of returning, rather than crashing.
We can do this by noticing that, at each layer in the stack, there's really only one piece of information we need to run your program: whether we still need to call loop() again after returning. Thus, we can emulate the recursion using a stack of bits. Each emulated stack frame will require only one bit of memory (right now a real frame requires 64-96 times that, depending on whether you're running in 32- or 64-bit mode).
The code would look something like this (though I don't have a Java compiler right now so I can't test it):
import java.util.BitSet;

static int bestDepth = 0;
static int numLoopCalls = 0;

public void emulateLoop() {
    // Our fake stack. A set bit means this point on the stack still needs a
    // second call to loop(); a clear bit means it doesn't.
    BitSet fakeStack = new BitSet();
    // BitSet is indexed by int, so the depth counter must be an int, not a long
    int currentDepth = 0;
    numLoopCalls = 0;

    while (currentDepth >= 0) {
        numLoopCalls++;
        if (proba()) {
            // "Return" from the current function, going up the call stack until
            // we hit a point where loop() still needs to be called a second time
            fakeStack.clear(currentDepth);
            while (!fakeStack.get(currentDepth)) {
                currentDepth--;
                if (currentDepth < 0) {
                    return;
                }
            }
            // At this point, we've hit a point where loop() needs to be called
            // a second time. Mark it as called, and call it.
            fakeStack.clear(currentDepth);
            currentDepth++;
        } else {
            // Need to call loop() twice, so we push a 1 and continue the while-loop
            fakeStack.set(currentDepth);
            currentDepth++;
            if (currentDepth > bestDepth) {
                bestDepth = currentDepth;
            }
        }
    }
}
This will probably be slightly slower, but it will use about 1/100th the memory. Note that the BitSet is stored on the heap, so there is no longer any need to increase the stack-size to run this. If anything, you'll want to increase the heap-size.
The downside of recursion is that it starts filling up your stack, which will eventually cause a StackOverflowError if the recursion is too deep. If you want to ensure that the test ends, you can increase your stack size using the answers given in the following Stack Overflow thread:
How to increase the Java stack size?
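Besides the -Xss flag mentioned above, one option from that thread is the Thread constructor that takes an explicit stack size, so only the recursive work gets the big stack. A sketch (the 64 MB figure and the ProbaTest class name are illustrative, not from the question):

public static void main(String[] args) throws InterruptedException {
    // run the deep recursion on a dedicated thread with a larger stack;
    // the stackSize argument is a suggestion the JVM may round or ignore
    Thread t = new Thread(null, () -> new ProbaTest().testProba(),
                          "deep-recursion", 64L * 1024 * 1024);
    t.start();
    t.join();
}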
I have a variable that gets read and updated thousands of times a second. It needs to be reset regularly, but "half" the time the value is already the reset value. Is it a good idea to check the value first (to see if it needs resetting) before resetting (a write operation), or should I just reset it regardless? The main goal is to optimize the code for performance.
To illustrate:
Random r = new Random();
int val = Integer.MAX_VALUE;
for (int i = 0; i < 100000000; i++) {
    if (i % 2 == 0)
        val = Integer.MAX_VALUE;
    else
        val = r.nextInt();
    if (val != Integer.MAX_VALUE) // skip this check?
        val = Integer.MAX_VALUE;
}
I tried to use the above program to test the two scenarios (by un/commenting the second if line), but any difference is masked by the natural variance of the run duration.
Thanks.
Don't check it.
It's more execution steps = more cycles = more time.
As an aside, you are breaking one of the basic golden rules of software: "Don't optimise early". Unless you have hard evidence that this piece of code is a performance problem, you shouldn't be looking at it. (Note that doesn't mean you code without performance in mind; you still follow normal best practice, but you don't add any special code whose only purpose is "performance related".)
The check has no actual performance impact. We'd be talking about a single clock cycle or something, which is usually not relevant in a Java program (as hard-core number crunching usually isn't done in Java).
Instead, base the decision on readability. Think of the maintainer who's going to change this piece of code five years on.
In the case of your example, using my rationale, I would skip the check.
Most likely the JIT will optimise the code away because it doesn't do anything.
Rather than worrying about performance, it is usually better to worry about what is
- simpler to understand
- cleaner to implement
In both cases, you might remove the code as it doesn't do anything useful and it could make the code faster as well.
Even if it did make the code a little slower, the difference would be very small compared to the cost of calling r.nextInt(), which is not cheap.
I had a challenge to print out multiples of 7 (non-negative) to the 50th multiple in the simplest way humanly possible using for loops.
I came up with this (ignoring the data types):
for (int i = 0; i <= 350; i += 7) {
    System.out.println(i);
}
The other guy came up with this:
for (int i = 0; i <= 50; i++) {
    System.out.println(7 * i);
}
However, I feel the two code snippets could be further optimized. If they actually can, please tell me. And what are the advantages/disadvantages of one over the other?
If you really want to optimize it, do this:
System.out.print("0\n7\n14\n21\n28\n35\n42\n49\n56\n63\n70\n77\n84\n91\n98\n105\n112\n119\n126\n133\n140\n147\n154\n161\n168\n175\n182\n189\n196\n203\n210\n217\n224\n231\n238\n245\n252\n259\n266\n273\n280\n287\n294\n301\n308\n315\n322\n329\n336\n343\n350");
and it's O(1) :)
The first one technically performs fewer operations (no multiplication).
The second one is slightly more readable (50 multiples of 7 vs. multiples of 7 up to 350).
Probably can't be optimized any further.
Unless you're willing to optimize away multiple println calls by doing:
StringBuilder s = new StringBuilder();
for (int i = 0; i <= 350; i += 7) s.append(i).append(", ");
System.out.println(s.toString());
(IIRC printlns are relatively expensive.)
This is getting to the point where you gain a tiny bit of optimization at the expense of simplicity.
In theory, your code is faster, since it needs one less multiplication instruction per loop iteration.
However, the multiple calls to System.out.println (and the integer-to-string conversion) will dwarf the runtime the multiplication takes. To optimize, aggregate the Strings with a StringBuilder and output the whole result (or output the result when memory becomes a problem).
However, in real-world code, this is extremely unlikely to be the bottleneck. Profile, then optimize.
The second function is the best you can get:
O(n)