I have a double value d and would like a way to nudge it very slightly larger (or smaller) to get a new value that will be as close as possible to the original but still strictly greater than (or less than) the original.
It doesn't have to be close down to the last bit—it's more important that whatever change I make is guaranteed to produce a different value and not round back to the original.
(This question has been asked and answered for C, C++)
The reason I need this is that I'm mapping from Double to (something), and I may have multiple items with the same double 'value', but they all need to go individually into the map.
My current code (which does the job) looks like this:
private void putUniqueScoreIntoMap(TreeMap<Double, A> map, Double score,
        A entry) {
    int exponent = 15;
    while (map.containsKey(score)) {
        Double newScore = score;
        while (newScore.equals(score) && exponent != 0) {
            newScore = score + (1.0d / (10 * exponent));
            exponent--;
        }
        if (exponent == 0) {
            throw new IllegalArgumentException("Failed to find unique new double value");
        }
        score = newScore;
    }
    map.put(score, entry);
}
In Java 1.6 and later, the Math.nextAfter(double, double) method is the cleanest way to get the next double value after a given double value.
The second parameter is the direction you want. Alternatively, you can use Math.nextUp(double) (Java 1.6 and later) to get the next larger number, and since Java 1.8 you can also use Math.nextDown(double) to get the next smaller number. These two methods are equivalent to calling nextAfter with Double.POSITIVE_INFINITY or Double.NEGATIVE_INFINITY as the direction argument.
Specifically, Math.nextAfter(score, Double.MAX_VALUE) will give you the answer in this case.
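For illustration, a minimal runnable sketch of these methods (the class name is just for the demo; printed values assume IEEE 754 doubles, which Java guarantees):

public class NudgeDemo {
    public static void main(String[] args) {
        double d = 1.0;
        // Next representable double above d
        System.out.println(Math.nextUp(d));                      // 1.0000000000000002
        // Next representable double below d (Java 1.8+)
        System.out.println(Math.nextDown(d));                    // 0.9999999999999999
        // Explicit-direction form, as suggested above
        System.out.println(Math.nextAfter(d, Double.MAX_VALUE)); // 1.0000000000000002
    }
}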
Use Double.doubleToLongBits and Double.longBitsToDouble:
double d = // your existing value;
long bits = Double.doubleToLongBits(d);
bits++;
d = Double.longBitsToDouble(bits);
The way IEEE 754 works, that will give you exactly the next representable double, i.e. the smallest possible amount greater than the existing value. (Note that this holds for positive values; for a negative value, incrementing the bit pattern moves it toward negative infinity instead.)
(Eventually it'll hit NaN and probably stay there, but it should work for sensible values.)
Have you considered using a data structure which would allow multiple values stored under the same key (e.g. a binary tree) instead of trying to hack the key value?
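For example, here's a minimal sketch of that idea, keeping a list of entries per score (ScoreIndex is just an illustrative name, with A standing in for your entry type):

import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

class ScoreIndex<A> {
    // Each key maps to every entry that shares that score.
    private final TreeMap<Double, List<A>> map = new TreeMap<>();

    void put(double score, A entry) {
        map.computeIfAbsent(score, k -> new ArrayList<>()).add(entry);
    }
}

This keeps the keys honest: equal scores stay equal, and no artificial nudging is needed.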
What about using Double.MIN_VALUE?
d += Double.MIN_VALUE
(or -= if you want to take away)
Use Double.MIN_VALUE.
The javadoc for it:
A constant holding the smallest positive nonzero value of type double, 2^-1074. It is equal to the hexadecimal floating-point literal 0x0.0000000000001P-1022 and also equal to Double.longBitsToDouble(0x1L).
Specifically in Java, how can I determine if a double is an integer? To clarify, I want to know how I can determine that the double does not in fact contain any fractional part.
I am concerned essentially with the nature of floating-point numbers. The methods I thought of (and the ones I found via Google) follow basically this format:
double d = 1.0;
if ((int) d == d) {
    // do stuff
} else {
    // ...
}
I'm certainly no expert on floating-point numbers and how they behave, but I am under the impression that because the double stores only an approximation of the number, the if() conditional will only enter some of the time (perhaps even a majority of the time). But I am looking for a method which is guaranteed to work 100% of the time, regardless of how the double value is stored in the system.
Is this possible? If so, how and why?
double can store an exact representation of certain values, such as small integers and (negative or positive) powers of two.
If it does indeed store an exact integer, then ((int)d == d) works fine. And indeed, for any 32-bit integer i, (int)((double)i) == i since a double can exactly represent it.
Note that for very large numbers (greater than about 2^52 in magnitude), a double will always appear to be an integer, as it can no longer store any fractional part. This has implications if you are trying to cast to a Java long, for instance.
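A quick sketch of that effect (the values are chosen purely for illustration):

double big = Math.pow(2, 53);         // 9007199254740992.0
System.out.println(big + 1.0 == big); // true: 2^53 + 1 rounds back to 2^53
System.out.println(big % 1.0 == 0.0); // true: every double this large is an integer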
How about
if(d % 1 == 0)
This works because all integers are 0 modulo 1.
Edit: To all those who object to this on the grounds of it being slow, I profiled it, and found it to be about 3.5 times slower than casting. Unless this is in a tight loop, I'd say this is a preferable way of working it out, because it's extremely clear what you're testing, and doesn't require any thought about the semantics of integer casting.
I profiled it by running the two programs below under the Unix time command, after compiling with javac:
class modulo {
    public static void main(String[] args) {
        long successes = 0;
        for (double i = 0.0; i < Integer.MAX_VALUE; i += 0.125) {
            if (i % 1 == 0)
                successes++;
        }
        System.out.println(successes);
    }
}
vs.
class cast {
    public static void main(String[] args) {
        long successes = 0;
        for (double i = 0.0; i < Integer.MAX_VALUE; i += 0.125) {
            if ((int) i == i)
                successes++;
        }
        System.out.println(successes);
    }
}
Both printed 2147483647 at the end.
Modulo took 189.99s on my machine - Cast took 54.75s.
if (new BigDecimal(d).scale() <= 0) {
    // do stuff
}
Your method of using if((int)d == d) should always work for any 32-bit integer. To make it work up to 64 bits, you can use if((long)d == d), which is effectively the same except that it accounts for larger magnitude numbers. If d is greater than the maximum long value (or less than the minimum), then it is guaranteed to be an exact integer. A function that tests whether d is an integer can then be constructed as follows:
boolean isInteger(double d) {
    if (d > Long.MAX_VALUE || d < Long.MIN_VALUE) {
        return true;
    } else if ((long) d == d) {
        return true;
    } else {
        return false;
    }
}
If a floating point number is an integer, then it is an exact representation of that integer.
Doubles are a binary fraction with a binary exponent. You cannot be certain that an integer can be exactly represented as a double, especially not if it has been calculated from other values.
Hence the normal way to approach this is to say that it needs to be "sufficiently close" to an integer value, where sufficiently close typically means "within X%" (where X is rather small).
I.e. if X is 1 then 1.98 and 2.02 would both be considered to be close enough to be 2. If X is 0.01 then it needs to be between 1.9998 and 2.0002 to be close enough.
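A minimal sketch of that tolerance-based check (the method name and tolerance handling are illustrative, not a standard API):

// Returns true if d is within relTolPercent percent of the nearest integer.
// Note: when the nearest integer is 0, this degenerates to an exact test.
static boolean isNearInteger(double d, double relTolPercent) {
    double nearest = Math.rint(d); // nearest mathematical integer, as a double
    double tolerance = Math.abs(nearest) * relTolPercent / 100.0;
    return Math.abs(d - nearest) <= tolerance;
}

// isNearInteger(2.02, 1.0)  -> true  (within 1% of 2)
// isNearInteger(2.02, 0.01) -> false (outside 0.01% of 2)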
I need to increment a float value atomically. I get its int value by calling Float.floatToIntBits on it. If I just do an i++ and convert it back to float, it does not give me the expected value. So how would I go about it?
(I'm trying to create an AtomicFloat through AtomicInteger, hence this question).
EDIT: here's what I did:
Float f = 1.25f;
int i = Float.floatToIntBits(f);
i++;
f = Float.intBitsToFloat(i);
I wanted 2.25, but got 1.2500001 instead.
The reason is that the bits you get from floatToIntBits represent
sign
exponent
mantissa
laid out like this:
Repr: Sign Exponent Mantissa
Bit: 31 30......23 22.....................0
Incrementing the integer storing these fields with 1 won't increment the float value it represents by 1.
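For illustration, a small sketch that extracts those three fields (the masks follow the IEEE 754 single-precision layout shown above):

float f = 1.25f;
int bits = Float.floatToIntBits(f);
int sign     = bits >>> 31;          // bit 31
int exponent = (bits >>> 23) & 0xFF; // bits 30..23, biased by 127
int mantissa = bits & 0x7FFFFF;      // bits 22..0
System.out.printf("sign=%d exponent=%d mantissa=0x%06X%n", sign, exponent, mantissa);
// For 1.25f this prints: sign=0 exponent=127 mantissa=0x200000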
I'm trying to create an AtomicFloat through AtomicInteger, hence this question
I did precisely this in an answer to this question:
Java: is there no AtomicFloat or AtomicDouble?
To add functionality to increment the float by one, you could copy the code of incrementAndGet from AtomicInteger (and change from int to float):
public final float incrementAndGet() {
    for (;;) {
        float current = get();
        float next = current + 1;
        if (compareAndSet(current, next))
            return next;
    }
}
(Note that if you want to increment the float by the smallest possible value, you take the above code and change current + 1 to current + Math.ulp(current).)
The atomic part can be implemented atop compareAndSet for a wrapper class as shown in the link of aioobe. The increment operators of AtomicInteger are implemented like that.
The increment part is a completely different problem. Depending on what you mean by "increment a float", it either requires you to add one to the number, or increment it by one ULP. For the latter, in Java 6, the Math.nextUp method is what you are looking for. For decrement by one ULP, the Math.nextAfter method is useful.
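Putting the two parts together, here is a minimal sketch of such a wrapper (a simplified take on the approach in the linked answer, not a drop-in library class):

import java.util.concurrent.atomic.AtomicInteger;

class AtomicFloat {
    // Store the float's bit pattern in an AtomicInteger so compareAndSet works on it.
    private final AtomicInteger bits;

    AtomicFloat(float initialValue) {
        bits = new AtomicInteger(Float.floatToIntBits(initialValue));
    }

    float get() {
        return Float.intBitsToFloat(bits.get());
    }

    // Atomically adds delta and returns the updated value.
    float addAndGet(float delta) {
        while (true) {
            int currentBits = bits.get();
            float current = Float.intBitsToFloat(currentBits);
            float next = current + delta;
            if (bits.compareAndSet(currentBits, Float.floatToIntBits(next))) {
                return next;
            }
        }
    }
}

addAndGet(1.0f) gives the "add one" behaviour; for a one-ULP step you would compute next = Math.nextUp(current) inside the loop instead of adding a fixed delta.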
When I call Math.ceil(5.2) the return is the double 6.0. My natural inclination was to think that Math.ceil(double a) would return a long. From the documentation:
ceil(double a)
Returns the smallest (closest to negative infinity) double value
that is not less than the argument and is equal to a mathematical
integer.
But why return a double rather than a long when the result is an integer? I think understanding the reason behind it might help me understand Java a bit better. It also might help me figure out if I'll get myself into trouble by casting to a long, e.g. is
long b = (long)Math.ceil(a);
always what I think it should be? I fear there could be some boundary cases that are problematic.
The range of double is greater than that of long. For example:
double x = Long.MAX_VALUE;
x = x * 1000;
x = Math.ceil(x);
What would you expect the last line to do if Math.ceil returned long?
Note that at very large values (positive or negative) the representable numbers are distributed very sparsely, so the next representable number greater than x won't be x + 1, if you see what I mean.
A double can be larger than Long.MAX_VALUE. If you call Math.ceil() on such a value, you would expect it to return that same value; if it returned a long instead, the result would be incorrect.
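As for the long b = (long) Math.ceil(a) worry: casting a too-large double to long doesn't overflow, it saturates, which is worth knowing about. A quick sketch:

double fits = 123.4;
System.out.println((long) Math.ceil(fits));   // 124, as expected

double tooBig = 1e19;                         // greater than Long.MAX_VALUE
System.out.println((long) Math.ceil(tooBig)); // 9223372036854775807 (Long.MAX_VALUE)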
Alternative wording: When will adding Double.MIN_VALUE to a double in Java not result in a different Double value? (See Jon Skeet's comment below)
This SO question about the minimum Double value in Java has some answers which seem to me to be equivalent. Jon Skeet's answer no doubt works but his explanation hasn't convinced me how it is different from Richard's answer.
Jon's answer uses the following:
double d = // your existing value;
long bits = Double.doubleToLongBits(d);
bits++;
d = Double.longBitsToDouble(bits);
Richard's answer mentions the JavaDoc for Double.MIN_VALUE:
A constant holding the smallest positive nonzero value of type double, 2^-1074. It is equal to the hexadecimal floating-point literal 0x0.0000000000001P-1022 and also equal to Double.longBitsToDouble(0x1L).
My question is, how is Double.longBitsToDouble(0x1L) different from Jon's bits++;?
Jon's comment focuses on the basic floating point issue.
There's a difference between adding Double.MIN_VALUE to a double value, and incrementing the bit pattern representing a double. They're entirely different operations, due to the way that floating point numbers are stored. If you try to add a very little number to a very big number, the difference may well be so small that the closest result is the same as the original. Adding 1 to the current bit pattern, however, will always change the corresponding floating point value, by the smallest possible value which is visible at that scale.
I don't see any difference between Jon's approach of incrementing the bits ("bits++") and adding Double.MIN_VALUE. When will they produce different results?
I wrote the following code to test the differences. Maybe someone could provide more/better sample double numbers or use a loop to find a number where there is a difference.
double d = 3.14159269123456789; // sample double
long bits = Double.doubleToLongBits(d);
long bitsBefore = bits;
bits++;
long bitsAfter = bits;
long bitsDiff = bitsAfter - bitsBefore;
long bitsMinValue = Double.doubleToLongBits(Double.MIN_VALUE);
long bitsSmallValue = Double.doubleToLongBits(Double.longBitsToDouble(0x1L));
if (bitsMinValue == bitsSmallValue)
{
    System.out.println("Double.doubleToLongBits(Double.longBitsToDouble(0x1L)) is same as Double.doubleToLongBits(Double.MIN_VALUE)");
}
if (bitsDiff == bitsMinValue)
{
    System.out.println("bits++ increments the same amount as Double.MIN_VALUE");
}
if (d + Double.MIN_VALUE != d)
{
    d = d + Double.MIN_VALUE;
    System.out.println("Using Double.MIN_VALUE");
}
else
{
    d = Double.longBitsToDouble(bits);
    System.out.println("Using doubleToLongBits/bits++");
}
System.out.println("bits before: " + bitsBefore);
System.out.println("bits after: " + bitsAfter);
System.out.println("bits diff: " + bitsDiff);
System.out.println("bits Min value: " + bitsMinValue);
System.out.println("bits Small value: " + bitsSmallValue);
OUTPUT:
Double.doubleToLongBits(Double.longBitsToDouble(0x1L)) is same as Double.doubleToLongBits(Double.MIN_VALUE)
bits++ increments the same amount as Double.MIN_VALUE
Using doubleToLongBits/bits++
bits before: 4614256656636814345
bits after: 4614256656636814346
bits diff: 1
bits Min value: 1
bits Small value: 1
Okay, let's imagine it this way, sticking with decimal numbers. Suppose you have a floating decimal point type which allows you to represent 5 decimal digits, and a number between 0 and 3 for the exponent, to multiply the result by 1, 10, 100 or 1000.
So the smallest non-zero value is just 1 (i.e. mantissa=00001, exponent=0). The largest value is 99999000 (mantissa=99999, exponent=3).
Now, what happens when you add 1 to 50000000? You can't represent 50000001... the next representable number after 50000000 is 50001000. So if you try to add them together, the result is just going to be the closest value to the "true" result, which is still 50000000. That's like adding Double.MIN_VALUE to a large double.
My version (converting to bits, incrementing and then converting back) is like taking that 50000000, splitting into mantissa and exponent (m=50000, e=3) then incrementing it the smallest amount, to (m=50001, e=3) and then reassembling to 50001000.
Do you see how they're different?
Now here's a concrete example:
public class Test {
    public static void main(String[] args) {
        double before = 100000000000000d;
        double after = before + Double.MIN_VALUE;
        System.out.println(before == after);

        long bits = Double.doubleToLongBits(before);
        bits++;
        double afterBits = Double.longBitsToDouble(bits);
        System.out.println(before == afterBits);
        System.out.println(afterBits - before);
    }
}
This tries both approaches with a large number. The output is:
true
false
0.015625
Going through the output, that means:
Adding Double.MIN_VALUE didn't have any effect
Incrementing the bit pattern did have an effect
The difference between afterBits and before is 0.015625, which is much bigger than Double.MIN_VALUE. No wonder the simple addition had no effect!
It's exactly as Jon said:
"If you try to add a very little number to a very big number, the difference may well be so small that the closest result is the same as the original."
For example:
// True:
(Double.MAX_VALUE + Double.MIN_VALUE) == Double.MAX_VALUE
// False:
Double.longBitsToDouble(Double.doubleToLongBits(Double.MAX_VALUE) + 1) == Double.MAX_VALUE
MIN_VALUE is the smallest representable positive double, but that certainly does not imply that adding it to an arbitrary double results in an unequal one.
In contrast, adding 1 to the underlying bits results in a new bit pattern, and thus does result in an unequal double.
I have an array of ints, e.g. [1,2,3,4,5]. Each element corresponds to a decimal place, so the 5 is the 1's place, the 4 is the 10's, the 3 is the 100's, which gives a value of 12345 that I calculate and store as a long.
This is the function :
public long valueOf(int[] x) {
    int multiplier = 1;
    value = 0;
    for (int i = x.length - 1; i >= 0; i--) {
        value += x[i] * multiplier;
        multiplier *= 10;
    }
    return value;
}
Now I would like to check that the value of another int[] does not exceed the range of long before I calculate its value with valueOf(). How do I check that?
Should I use the array's length, or maybe convert it to a String and pass it to the Long(String s) constructor?
Or maybe just throw an exception from the valueOf() function?
I hope you know that this is a horrible way to store large integers: just use BigInteger.
But if you really want to check for exceeding some value, just make sure the length of the array is less than or equal to 19. Then you could compare the digits one by one against those of Long.MAX_VALUE. Or you could just use BigInteger.
Short answer: every 18-digit number fits in a long. So if you know that there are no leading zeros, then just check x.length <= 18. If you might have leading zeros, you'll have to loop through the array to count how many and adjust accordingly.
A flaw to this is that some 19-digit numbers are valid longs, namely those no greater than 9223372036854775807 (Long.MAX_VALUE). So if you wanted to be truly precise, you'd have to say length > 19 is bad, length < 19 is good, and for length == 19 you'd have to check digit by digit. Depending on what you're up to, rejecting a subset of numbers that would really work might be acceptable.
As others have implied, the bigger question is: Why are you doing this? If this is some sort of data conversion where you're getting numbers as a string of digits from some external source and need to convert this to a long, cool. If you're trying to create a class to handle numbers bigger than will fit in a long, what you're doing is both inefficient and unnecessary. Inefficient because you could pack much more than one decimal digit into an int, and doing so would give all sorts of storage and performance improvements. Unnecessary because BigInteger already does this. Why not just use BigInteger?
Of course if it's a homework problem, that's a different story.
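For what it's worth, here is a minimal sketch of the BigInteger route (the method name is just for illustration):

import java.math.BigInteger;

// Builds the number from its decimal digits (most significant first),
// then range-checks it against long.
static long valueOfChecked(int[] digits) {
    BigInteger v = BigInteger.ZERO;
    for (int digit : digits) {
        v = v.multiply(BigInteger.TEN).add(BigInteger.valueOf(digit));
    }
    // longValueExact() (Java 8+) throws ArithmeticException if v doesn't fit in a long.
    return v.longValueExact();
}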
Are you guaranteed that every value of x will be nonnegative?
If so, you could do this:
public long valueOf(int[] x) {
    long multiplier = 1; // long, so it can scale up to the highest decimal place
    long value = 0;      // Note that you need the type here, which you did not have
    for (int i = x.length - 1; i >= 0; i--) {
        long nextVal = x[i] * multiplier;
        if (Long.MAX_VALUE - nextVal < value) {
            // Error-handling code here, however you
            // want to handle this case.
        } else {
            value += nextVal;
        }
        multiplier *= 10;
    }
    return value;
}
Of course, BigInteger would make this much simpler. But I don't know what your problem specs are.