Concurrently adding a value in loop - java

I am trying to repeatedly add the value 1.12, starting from a Min value, until a Max value is reached:
Min = 1.3
Max = 6.9
i.e.
1.3 + 1.12 = 2.42
2.42 + 1.12 = 3.54
and so on, until the sum reaches the max value.
What I did is:
double min = 1.3, max = 6.9;
double sum = 0, BucWidth = 1.12;
sum = min + BucWidth;
while (sum != max) {
    sum = sum + BucWidth;
    System.out.println("SUmmmmm" + sum);
}
But it is not stopping when sum reaches max. Am I doing anything wrong? Please suggest.

while (sum <= max) {
    sum = sum + BucWidth; // or: sum += BucWidth;
    System.out.println("SUmmmmm" + sum);
}
You should check whether the sum is less than or equal to the limit, not whether it is unequal to it, in your while condition, since you want to exit the loop once the sum reaches the limit.

In general, comparing floating-point numbers for exact equality is asking for trouble unless you have a deep understanding of exactly where and when roundoff will occur. It's safer to use < rather than !=, since the value may never exactly match the one you're expecting.
(This annoyance is one of many reasons that programming languages have int and float as separate datatypes.)
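As an illustration (my sketch, not part of the original answer), the standard alternative to == is a tolerance comparison; the epsilon below is an assumption and should be chosen to match the magnitude of your data:
// Sketch: compare doubles within a tolerance instead of with ==.
// EPSILON is an assumed value; tune it to your data's scale.
public class ApproxEquals {
    static final double EPSILON = 1e-9;

    static boolean nearlyEqual(double a, double b) {
        return Math.abs(a - b) < EPSILON;
    }

    public static void main(String[] args) {
        double sum = 1.3 + 1.12 + 1.12 + 1.12 + 1.12 + 1.12; // "should" be 6.9
        System.out.println(sum == 6.9);            // may well print false
        System.out.println(nearlyEqual(sum, 6.9)); // prints true
    }
}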

For floating-point types (float and double), a mathematical operation such as adding one value to another may not produce exactly the value you would assume, the way integer addition does. With integers:
int sum = 6;
while (sum != 12) {
    sum += sum;
}
This iterates exactly once: 6 + 6 is exactly 12 in integer arithmetic, so the != comparison terminates as expected.
Consider this:
double a = 7.4564;
double b = 7.4567;
if (a == b) {
    System.out.println("Both are equal");
} else {
    System.out.println("Both are unequal");
}
Output: Both are unequal
This is because a digit somewhere after the decimal point can differ, and operators like != and == require the numbers to match exactly, in every decimal place, or the logic won't work as intended. So in your case it is better to use <= instead of != when comparing sum and max.
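To make the failure visible, here is a small sketch (my addition, not part of the original answer) that prints the exact value the running sum actually holds at each step; whether it ever compares equal to 6.9 depends on accumulated rounding, which is exactly why != is fragile:
import java.math.BigDecimal;

// Sketch: new BigDecimal(double) shows the exact stored binary value,
// not the rounded string that println produces.
public class SumTrace {
    public static void main(String[] args) {
        double min = 1.3, max = 6.9, bucWidth = 1.12;
        double sum = min;
        for (int i = 0; i < 5; i++) {
            sum += bucWidth;
            System.out.println(new BigDecimal(sum));
        }
        System.out.println("sum == max? " + (sum == max)); // quite possibly false
    }
}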

Related

Adding 1/3 in java results in 1.0, while it shouldn't

Note: question still not answered thoroughly! This question does not deal with the issue of truncation of floating-point parts!
In Java I have this simple code:
double sum = 0.0;
for (int i = 1; i <= n; i++) {
    sum += 1.0/n;
}
System.out.println("Sum should be: 1");
System.out.println("The result is: " + sum);
Here n can be any integer. For values like 7 or 9, I expect the sum to differ from 1 in its last digits, and indeed the result is something like 0.999999999998; but when I use 3, the output is exactly 1.0.
If you add 1/3 3 times, you would expect a number close to 1, but I get exactly 1.0.
Why?
This happens when the division is done in integer arithmetic. 1/n always gives 0 for n > 1 (only 1/1 gives 1), so apart from that case you only ever add zeros and end up with sum = 0.
Try 1.0 / n instead.
If you add 1/3 3 times, you would expect a number close to 1, but I get exactly 1.0.
Actually a normal person uncontaminated by programming experience would expect n * 1 / n to equal 1, but we're not normal here.
I can't reproduce your problem exactly, I get
groovy:000> def foo(n) {
groovy:001> sum = 0.0
groovy:002> for (int i = 0; i < n; i++) {
groovy:003> sum += 1.0 / n
groovy:004> }
groovy:005> sum
groovy:006> }
===> true
groovy:000> foo(3)
===> 0.9999999999
There may be two issues here; at the least, you will want to be aware of them both.
One is that doubles are not exact. They cannot represent some values precisely, and you just have to expect results to be off by a little bit; your goal isn't 100% accuracy, it's to keep the error within acceptable bounds. (Peter Lawrey has an interesting article on doubles that you might want to check out.) If that's not OK for you, avoid doubles; for a lot of uses BigDecimal is good enough. If you want a library where the division problems in your question give accurate answers, check out the answers to this question.
The other issue is that System.out.println doesn't tell you the exact value of a double; it fudges a bit. If you add a line like:
System.out.println(new java.math.BigDecimal(sum));
then you will get an accurate view of what the double contains.
I'm not sure whether this will help clarify things, because I'm not sure what you consider to be the problem.
Here is a test program that uses BigDecimal, as previously suggested, to display the values of the intermediate answers. At the final step, adding the third copy of 1.0/3 to the sum of two copies, the exact answer lands exactly halfway between 1.0 and the next double below it. In that situation the round-to-even rounding rule picks 1.0.
Given that, I think it should round to 1.0, contradicting the question title.
Test program:
import java.math.BigDecimal;

public class Test {
    public static void main(String[] args) {
        final double oneThirdD = 1.0/3;
        final BigDecimal oneThirdBD = new BigDecimal(oneThirdD);
        final double twoThirdsD = oneThirdD + oneThirdD;
        final BigDecimal twoThirdsBD = new BigDecimal(twoThirdsD);
        final BigDecimal exact = twoThirdsBD.add(oneThirdBD);
        final double nextLowerD = Math.nextAfter(1.0, 0);
        final BigDecimal nextLowerBD = new BigDecimal(nextLowerD);
        System.out.println("1.0/3: " + oneThirdBD);
        System.out.println("1.0/3+1.0/3: " + twoThirdsBD);
        System.out.println("Exact sum: " + exact);
        System.out.println("Rounding error rounding up to 1.0: " + BigDecimal.ONE.subtract(exact));
        System.out.println("Largest double that is less than 1.0: " + nextLowerBD);
        System.out.println("Rounding error rounding down to next lower double: " + exact.subtract(nextLowerBD));
    }
}
Output:
1.0/3: 0.333333333333333314829616256247390992939472198486328125
1.0/3+1.0/3: 0.66666666666666662965923251249478198587894439697265625
Exact sum: 0.999999999999999944488848768742172978818416595458984375
Rounding error rounding up to 1.0: 5.5511151231257827021181583404541015625E-17
Largest double that is less than 1.0: 0.99999999999999988897769753748434595763683319091796875
Rounding error rounding down to next lower double: 5.5511151231257827021181583404541015625E-17
An int divided by an int will always produce another int, and int has no place to store the fractional part of the result, so it is discarded. Keep in mind that it is discarded, not rounded.
Therefore 1 / 3 gives 0: the fractional part of 0.3333... is simply dropped.
If you write either operand as a double (by including a decimal point, e.g. 1. or 1.0), the result will be a double (Java automatically converts the int operand to a double) and the fractional part will be preserved.
In your updated question, you are setting i to 1.0, but i is still an int. So that 1.0 is truncated to 1, and for further calculations it is still an int. You need to change the type of i to double as well; otherwise the code behaves no differently.
Alternatively you can use sum += 1.0/n
This will have the effect of converting n to a double before performing the calculation
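A quick sketch (my addition) contrasting the two divisions:
// Sketch: integer division vs. floating-point division with the same operands.
public class DivisionDemo {
    public static void main(String[] args) {
        int n = 3;
        System.out.println(1 / n);    // 0 -- int/int truncates toward zero
        System.out.println(1.0 / n);  // 0.3333333333333333 -- n is promoted to double
        double sum = 0.0;
        for (int i = 1; i <= n; i++) {
            sum += 1.0 / n;           // accumulate three thirds
        }
        System.out.println(sum);      // 1.0 for n = 3, per the rounding analysis above
    }
}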

Fast calculation of RMS gives NaNs in Java - floating point error?

I'm getting a perplexing result doing math with floats. I have code that should never produce a negative number producing a negative number, which causes NaNs when I try to take the square root.
This code appears to work very well in tests. However, when operating on real-world numbers (potentially very small, on the order of 1e-7 or 1e-8), sum eventually becomes negative, leading to the NaNs. In theory, the subtraction step only ever removes a number that has already been added to the sum; is this a floating-point error problem? Is there any way to fix it?
The code:
public static float[] getRmsFast(float[] data, int halfWindow) {
    int n = data.length;
    float[] result = new float[n];
    float sum = 0.000000000f;
    for (int i = 0; i < 2*halfWindow; i++) {
        float d = data[i];
        sum += d * d;
    }
    result[halfWindow] = calcRms(halfWindow, sum);
    for (int i = halfWindow+1; i < n-halfWindow; i++) {
        float oldValue = data[i-halfWindow-1];
        float newValue = data[i+halfWindow-1];
        sum -= (oldValue*oldValue);
        sum += (newValue*newValue);
        float rms = calcRms(halfWindow, sum);
        result[i] = rms;
    }
    return result;
}

private static float calcRms(int halfWindow, float sum) {
    return (float) Math.sqrt(sum / (2*halfWindow));
}
For some background:
I am trying to optimize a function that calculates a rolling root mean square (RMS) over signal data. The optimization is pretty important; it's a hot-spot in our processing. The basic equation is simple (http://en.wikipedia.org/wiki/Root_mean_square): sum the squares of the data over the window, divide the sum by the size of the window, then take the square root.
The original code:
public static float[] getRms(float[] data, int halfWindow) {
    int n = data.length;
    float[] result = new float[n];
    for (int i = halfWindow; i < n - halfWindow; i++) {
        float sum = 0;
        for (int j = -halfWindow; j < halfWindow; j++) {
            sum += (data[i + j] * data[i + j]);
        }
        result[i] = calcRms(halfWindow, sum);
    }
    return result;
}
This code is slow because it reads the entire window from the array at each step, instead of taking advantage of the overlap in the windows. The intended optimization was to use that overlap, by removing the oldest value and adding the newest.
I've checked the array indices in the new version pretty carefully. It seems to be working as intended, but I could certainly be wrong in that area!
Update:
With our data, it was enough to change the type of sum to a double; I don't know why that didn't occur to me, but I left the negative check in. And FWIW, I was also able to implement a solution where fully recomputing the sum every 400 samples gave great run-time and enough accuracy. Thanks.
is this a floating-point error problem?
Yes it is. Due to rounding, you could well get negative values after subtracting a previous summand.
For example:
float sum = 0f;
sum += 1e10;
sum += 1e-10;
sum -= 1e10;
sum -= 1e-10;
System.out.println(sum);
On my machine, this prints
-1.0E-10
even though mathematically, the result is exactly zero.
This is the nature of floating point: 1e10f + 1e-10f gives exactly the same value as 1e10f.
As far as mitigation strategies go:
1. You could use double instead of float for enhanced precision.
2. From time to time, you could fully recompute the sum of squares to reduce the effect of rounding errors (see the sketch after this list).
3. When the sum goes negative, you could either do a full recalculation as in (2) above, or simply set the sum to zero. The latter is safe, since you know that you'll be pushing the sum towards its true value, and never away from it.
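Here is a minimal sketch of strategy (2), my own illustration rather than code from the question or answers; the recompute interval of 400 samples is borrowed from the asker's update, but the right value depends on your data:
// Sketch: rolling sum of squares with periodic full recomputation to bound
// accumulated rounding error. Indexing mirrors the question's window layout,
// but treat it as an assumption and verify against your own bounds.
public static float[] getRmsRecompute(float[] data, int halfWindow) {
    final int RECOMPUTE_EVERY = 400; // assumed interval, per the asker's update
    int n = data.length;
    int window = 2 * halfWindow;
    float[] result = new float[n];
    double sum = 0; // a double accumulator also applies strategy (1)
    for (int i = halfWindow; i < n - halfWindow; i++) {
        if ((i - halfWindow) % RECOMPUTE_EVERY == 0) {
            // Full recomputation of the current window, discarding drift.
            sum = 0;
            for (int j = i - halfWindow; j < i + halfWindow; j++) {
                sum += (double) data[j] * data[j];
            }
        } else {
            // Rolling update: drop the oldest sample, add the newest.
            double oldV = data[i - halfWindow - 1];
            double newV = data[i + halfWindow - 1];
            sum += newV * newV - oldV * oldV;
        }
        result[i] = (float) Math.sqrt(Math.max(sum, 0) / window);
    }
    return result;
}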
Try checking your indices in the second loop. The last value of i will be n-halfWindow-1, and n-halfWindow-1+halfWindow-1 is n-2, so the last element, data[n-1], is never read.
You may need to change the loop to for (int i=halfWindow+1; i<n-halfWindow+1; i++).
You are running into issues with floating point numbers because you believe that they are just like mathematical real numbers. They are not, they are approximations of real numbers, mapped into discrete numbers, with a few special rules added into the mix.
Take the time to read up on what every programmer should know about floating point numbers, if you intend to use them often. Without some care the differences between floating point numbers and real numbers can come back and bite you in the worst ways.
Or, just take my word for it and know that every floating-point number is "pretty close" to the requested value: some are "dead on" accurate, but most are only "mostly" accurate. This means you need to account for that measurement error and keep it in mind after the calculations, or you risk believing you have an exact result at the end of the computation (which you don't).

How can I accurately determine if a double is an integer? [duplicate]

This question already has answers here:
How to test if a double is an integer
(18 answers)
Closed 9 years ago.
Specifically in Java, how can I determine if a double is an integer? To clarify, I want to know how I can determine that the double does not in fact contain any fractions or decimals.
I am concerned essentially with the nature of floating-point numbers. The methods I thought of (and the ones I found via Google) follow basically this format:
double d = 1.0;
if ((int)d == d) {
    // do stuff
} else {
    // ...
}
I'm certainly no expert on floating-point numbers and how they behave, but I am under the impression that, because the double stores only an approximation of the number, the if() branch will only be entered some of the time (perhaps even a majority of the time). But I am looking for a method which is guaranteed to work 100% of the time, regardless of how the double value is stored in the system.
Is this possible? If so, how and why?
double can store an exact representation of certain values, such as small integers and (negative or positive) powers of two.
If it does indeed store an exact integer, then ((int)d == d) works fine. And indeed, for any 32-bit integer i, (int)((double)i) == i since a double can exactly represent it.
Note that for very large numbers (greater than about 2^52 in magnitude), a double will always appear to be an integer, as it can no longer store any fractional part. This has implications if you are trying to cast to a Java long, for instance.
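To see that threshold in action, a small sketch (my addition):
// Sketch: above 2^52 the spacing between adjacent doubles reaches 1,
// so every representable double up there is a whole number.
public class BigDoubles {
    public static void main(String[] args) {
        double below = Math.pow(2, 52) - 0.5;  // still exactly representable
        double above = Math.pow(2, 53) + 0.5;  // rounds to a whole number
        System.out.println(below % 1);                  // 0.5 -- fraction survives
        System.out.println(above % 1);                  // 0.0 -- fraction is gone
        System.out.println(Math.ulp(Math.pow(2, 53)));  // 2.0 -- spacing at this scale
    }
}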
How about
if(d % 1 == 0)
This works because all integers are 0 modulo 1.
Edit: to all those who object to this on the grounds of it being slow: I profiled it, and found it to be about 3.5 times slower than casting. Unless this is in a tight loop, I'd say it is a preferable way of working it out, because it's extremely clear what you're testing and requires no thought about the semantics of integer casting.
I profiled it by running time on the following two programs:
class modulo {
    public static void main(String[] args) {
        long successes = 0;
        for (double i = 0.0; i < Integer.MAX_VALUE; i += 0.125) {
            if (i % 1 == 0)
                successes++;
        }
        System.out.println(successes);
    }
}
vs.
class cast {
    public static void main(String[] args) {
        long successes = 0;
        for (double i = 0.0; i < Integer.MAX_VALUE; i += 0.125) {
            if ((int)i == i)
                successes++;
        }
        System.out.println(successes);
    }
}
Both printed 2147483647 at the end.
Modulo took 189.99s on my machine; cast took 54.75s.
if (new BigDecimal(d).scale() <= 0) {
    // do stuff
}
(The BigDecimal(double) constructor captures the exact value the double stores, and its scale() is non-positive only when that exact value has no fractional digits.)
Your method of using if((int)d == d) should always work for any 32-bit integer. To make it work up to 64 bits, you can use if((long)d == d), which is effectively the same except that it accounts for larger-magnitude numbers. If d is greater than the maximum long value (or less than the minimum), then it is guaranteed to be an exact integer. A function that tests whether d is an integer can then be constructed as follows:
boolean isInteger(double d) {
    if (d > Long.MAX_VALUE || d < Long.MIN_VALUE) {
        return true;
    } else if ((long)d == d) {
        return true;
    } else {
        return false;
    }
}
If a floating point number is an integer, then it is an exact representation of that integer.
Doubles are a binary fraction with a binary exponent. You cannot be certain that an integer can be exactly represented as a double, especially not if it has been calculated from other values.
Hence the normal way to approach this is to say that it needs to be "sufficiently close" to an integer value, where sufficiently close typically means "within X%" (where X is rather small).
I.e. if X is 1, then 1.98 and 2.02 would both be considered close enough to be 2. If X is 0.01, then the value needs to be between 1.9998 and 2.0002 to be close enough.
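A minimal sketch of that "sufficiently close" test (my addition; the tolerance is an assumption you would tune):
// Sketch: treat d as an integer if it lies within a relative tolerance of
// the nearest whole number. relTol is an assumed, tunable parameter.
public class NearInteger {
    static boolean isNearlyInteger(double d, double relTol) {
        double nearest = Math.rint(d); // nearest whole number, ties to even
        return Math.abs(d - nearest) <= relTol * Math.max(1.0, Math.abs(d));
    }

    public static void main(String[] args) {
        System.out.println(isNearlyInteger(2.02, 0.01));   // true: within 1%
        System.out.println(isNearlyInteger(2.02, 0.0001)); // false: outside 0.01%
    }
}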

Are these two approaches to the smallest Double values in Java equivalent?

Alternative wording: When will adding Double.MIN_VALUE to a double in Java not result in a different Double value? (See Jon Skeet's comment below)
This SO question about the minimum Double value in Java has some answers which seem to me to be equivalent. Jon Skeet's answer no doubt works but his explanation hasn't convinced me how it is different from Richard's answer.
Jon's answer uses the following:
double d = // your existing value;
long bits = Double.doubleToLongBits(d);
bits++;
d = Double.longBitsToDouble(bits);
Richard's answer mentions the JavaDoc for Double.MIN_VALUE:
A constant holding the smallest positive nonzero value of type double, 2^-1074. It is equal to the hexadecimal floating-point literal 0x0.0000000000001P-1022 and also equal to Double.longBitsToDouble(0x1L).
My question is, how is Double.longBitsToDouble(0x1L) different from Jon's bits++?
Jon's comment focuses on the basic floating point issue.
There's a difference between adding Double.MIN_VALUE to a double value, and incrementing the bit pattern representing a double. They're entirely different operations, due to the way that floating point numbers are stored. If you try to add a very little number to a very big number, the difference may well be so small that the closest result is the same as the original. Adding 1 to the current bit pattern, however, will always change the corresponding floating point value, by the smallest possible value which is visible at that scale.
I don't see how Jon's approach of incrementing the long ("bits++") differs from adding Double.MIN_VALUE. When will they produce different results?
I wrote the following code to test the differences. Maybe someone could provide more/better sample double numbers or use a loop to find a number where there is a difference.
double d = 3.14159269123456789; // sample double
long bits = Double.doubleToLongBits(d);
long bitsBefore = bits;
bits++;
long bitsAfter = bits;
long bitsDiff = bitsAfter - bitsBefore;
long bitsMinValue = Double.doubleToLongBits(Double.MIN_VALUE);
long bitsSmallValue = Double.doubleToLongBits(Double.longBitsToDouble(0x1L));
if (bitsMinValue == bitsSmallValue) {
    System.out.println("Double.doubleToLongBits(Double.longBitsToDouble(0x1L)) is same as Double.doubleToLongBits(Double.MIN_VALUE)");
}
if (bitsDiff == bitsMinValue) {
    System.out.println("bits++ increments the same amount as Double.MIN_VALUE");
}
if (d + Double.MIN_VALUE != d) { // does adding MIN_VALUE actually change d?
    d = d + Double.MIN_VALUE;
    System.out.println("Using Double.MIN_VALUE");
} else {
    d = Double.longBitsToDouble(bits);
    System.out.println("Using doubleToLongBits/bits++");
}
System.out.println("bits before: " + bitsBefore);
System.out.println("bits after: " + bitsAfter);
System.out.println("bits diff: " + bitsDiff);
System.out.println("bits Min value: " + bitsMinValue);
System.out.println("bits Small value: " + bitsSmallValue);
OUTPUT:
Double.doubleToLongBits(Double.longBitsToDouble(0x1L)) is same as Double.doubleToLongBits(Double.MIN_VALUE)
bits++ increments the same amount as Double.MIN_VALUE
Using doubleToLongBits/bits++
bits before: 4614256656636814345
bits after: 4614256656636814346
bits diff: 1
bits Min value: 1
bits Small value: 1
Okay, let's imagine it this way, sticking with decimal numbers. Suppose you have a floating decimal-point type which lets you represent five decimal digits, plus an exponent between 0 and 3, to multiply the result by 1, 10, 100 or 1000.
So the smallest non-zero value is just 1 (mantissa=00001, exponent=0). The largest value is 99999000 (mantissa=99999, exponent=3).
Now, what happens when you add 1 to 50000000? You can't represent 50000001... the next representable number after 50000000 is 50001000. So if you try to add them together, the result is just going to be the closest representable value to the "true" result, which is still 50000000. That's like adding Double.MIN_VALUE to a large double.
My version (converting to bits, incrementing and then converting back) is like taking that 50000000, splitting it into mantissa and exponent (m=50000, e=3), incrementing the mantissa by the smallest amount to (m=50001, e=3), and then reassembling to 50001000.
Do you see how they're different?
Now here's a concrete example:
public class Test {
    public static void main(String[] args) {
        double before = 100000000000000d;
        double after = before + Double.MIN_VALUE;
        System.out.println(before == after);

        long bits = Double.doubleToLongBits(before);
        bits++;
        double afterBits = Double.longBitsToDouble(bits);
        System.out.println(before == afterBits);
        System.out.println(afterBits - before);
    }
}
This tries both approaches with a large number. The output is:
true
false
0.015625
Going through the output, that means:
1. Adding Double.MIN_VALUE didn't have any effect.
2. Incrementing the bit pattern did have an effect.
3. The difference between afterBits and before is 0.015625, which is much bigger than Double.MIN_VALUE. No wonder the simple addition had no effect!
It's exactly as Jon said: "If you try to add a very little number to a very big number, the difference may well be so small that the closest result is the same as the original."
For example:
// True:
(Double.MAX_VALUE + Double.MIN_VALUE) == Double.MAX_VALUE
// False:
Double.longBitsToDouble(Double.doubleToLongBits(Double.MAX_VALUE) + 1) == Double.MAX_VALUE
MIN_VALUE is the smallest representable positive double, but that certainly does not imply that adding it to an arbitrary double results in an unequal one.
In contrast, adding 1 to the underlying bits results in a new bit pattern, and thus does result in an unequal double.
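For what it's worth, the gap between adjacent doubles at a given magnitude is exposed directly by Math.ulp; a small sketch (my addition) that reproduces the 0.015625 seen in the output above:
// Sketch: Math.ulp(d) is the distance from d to the next larger double --
// the amount bits++ adds. Double.MIN_VALUE is only the ulp near zero.
public class UlpDemo {
    public static void main(String[] args) {
        double d = 100000000000000d; // 1e14, the value from the example above
        System.out.println(Math.ulp(d));         // 0.015625
        System.out.println(Math.ulp(0.0));       // 4.9E-324, i.e. Double.MIN_VALUE
        System.out.println(d + Math.ulp(d) > d); // true: a guaranteed change
    }
}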

How to alter a float by its smallest increment in Java?

I have a double value d and would like a way to nudge it very slightly larger (or smaller) to get a new value that will be as close as possible to the original but still strictly greater than (or less than) the original.
It doesn't have to be close down to the last bit—it's more important that whatever change I make is guaranteed to produce a different value and not round back to the original.
(This question has been asked and answered for C, C++)
The reason I need this is that I'm mapping from Double to (something), and I may have multiple items with the same double 'value', but they all need to go individually into the map.
My current code (which does the job) looks like this:
private void putUniqueScoreIntoMap(TreeMap<Double, A> map, Double score, A entry) {
    int exponent = 15;
    while (map.containsKey(score)) {
        Double newScore = score;
        while (newScore.equals(score) && exponent != 0) {
            newScore = score + (1.0d / (10 * exponent));
            exponent--;
        }
        if (exponent == 0) {
            throw new IllegalArgumentException("Failed to find unique new double value");
        }
        score = newScore;
    }
    map.put(score, entry);
}
In Java 1.6 and later, the Math.nextAfter(double, double) method is the cleanest way to get the next double value after a given double value.
The second parameter is the direction you want. Alternatively, you can use Math.nextUp(double) (Java 1.6 and later) to get the next larger number, and since Java 1.8 you can also use Math.nextDown(double) to get the next smaller number. These two methods are equivalent to using nextAfter with positive or negative infinity as the direction.
Specifically, Math.nextAfter(score, Double.MAX_VALUE) will give you the answer in this case.
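Applied to the question's map problem, a minimal sketch (my illustration; the class and method names here are hypothetical, not from the answer):
import java.util.TreeMap;

// Sketch: nudge the key up by the smallest representable step until it is free.
public class UniqueKeys<A> {
    void putUniqueScore(TreeMap<Double, A> map, double score, A entry) {
        while (map.containsKey(score)) {
            score = Math.nextUp(score); // strictly greater, smallest possible step
        }
        map.put(score, entry);
    }
}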
Use Double.doubleToLongBits and Double.longBitsToDouble:
double d = // your existing value;
long bits = Double.doubleToLongBits(d);
bits++;
d = Double.longBitsToDouble(bits);
The way IEEE 754 works, that will give you exactly the next representable double, i.e. the smallest possible amount greater than the existing value.
(Eventually it'll hit NaN and probably stay there, but it should work for sensible values.)
Have you considered using a data structure which would allow multiple values stored under the same key (e.g. a binary tree) instead of trying to hack the key value?
What about using Double.MIN_VALUE?
d += Double.MIN_VALUE
(or -= if you want to take away)
Use Double.MIN_VALUE.
The javadoc for it:
A constant holding the smallest positive nonzero value of type double, 2^-1074. It is equal to the hexadecimal floating-point literal 0x0.0000000000001P-1022 and also equal to Double.longBitsToDouble(0x1L).
