I made a class Sum that extends RecursiveAction. The task is to calculate the sum of 1 / a[i].
public class Sum extends RecursiveAction {
    int[] items;
    double result;
    int min = 100000;
    int from, to;

    Sum(int[] items, int from, int to) {
        this.items = items;
        this.from = from;
        this.to = to;
    }

    @Override
    protected void compute() {
        if (to - from <= min) {
            for (int i = from; i < to; i++) {
                result += 1d / items[i];
            }
        } else {
            var mid = (from + to) / 2;
            var left = new Sum(items, from, mid);
            var right = new Sum(items, mid, to);
            invokeAll(left, right);
            result += left.result + right.result;
        }
    }
}
Results:
Single: 1.3180710500108106E8
Total time: 0.612
Parallel: 1.3180710501986596E8
Total time: 0.18
The numbers are very close but differ by a small amount. What could this be related to? I noticed that if I remove the 1 / and just sum a[i] directly, the result is calculated exactly.
I'm guessing that you might be trying to sum a list of millions of numbers. You wrote a multi-threaded, divide-and-conquer routine that stops dividing when the list gets to be less than 100,000 long (int min = 100000;), so if it's worth splitting the list into chunks of that size, there must be at least a few of those chunks, right?
So, here's the issue: Let's say you want to add up a million numbers that are around the same order of magnitude. Maybe they're all readings from the same sensor. Let's say that the arithmetic mean of the whole list is X. If you were to simply run down that list, from beginning to end, accumulating numbers,...
...The expected value of the first sum is X+X,
...of the next sum, X+2X
...
...of the last sum, X+999999X
OK, but 999999X is six orders of magnitude greater than X. In binary floating point, the exponent of 999999X is going to be greater than the exponent of X by about 20. That is to say, the binary value of 999999X is approximately the value of X shifted left by 20 bits.
In order to do the addition both numbers must have the same exponent, and the way that is accomplished is to denormalize X. If you shift the mantissa of X to the right by 20 bits, and then you add 20 to its exponent, it should, in theory, still represent the same number. Only problem is, you've just shifted away the 20 least-significant bits.
If you're using double, the original X had 53 bits of precision, but the de-normalized X that you can use in the addition only has 33 bits. If you're using float,* the original X had 24 bits, and the denormalized X has only four bits of precision.
Your goal in writing your "divide-and-conquer" algorithm was to break the problem into tasks that could be given to different threads. But a side effect was that you also got a more accurate answer. More accurate because, for each chunk, the last step is to compute X + 99999X. The exponent mismatch there is only 16 or 17 bits instead of 19 or 20 bits. You threw away three fewer bits of precision.
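You can see the effect directly. Here's a small demo (the values are illustrative, and float is used to exaggerate the loss): summing a million identical floats into one accumulator drifts further from the true total than summing two halves separately and combining them, which is essentially what your chunked version does.

float[] data = new float[1_000_000];
java.util.Arrays.fill(data, 0.1f);
float sequential = 0f;
for (float v : data) sequential += v;               // one big accumulator swamps each 0.1f
float half1 = 0f, half2 = 0f;
for (int i = 0; i < 500_000; i++) half1 += data[i];
for (int i = 500_000; i < 1_000_000; i++) half2 += data[i];
System.out.println(sequential);                     // drifts noticeably from 100000
System.out.println(half1 + half2);                  // typically closer to 100000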
To get the best possible precision, start by sorting the list, smallest numbers first. (NOTE: smallest means closest to zero, i.e. least absolute value.) Then, remove the first two numbers from the list, add them, and insert the sum back into the list, in the correct place to keep the list sorted. (Those inserts go a lot faster if you use a linked list.) Finally, repeat those steps until the list contains only one number, and that's the most accurate sum you can get without using a wider data type.
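Here's a minimal sketch of that procedure (the method name is mine), using a PriorityQueue ordered by absolute value in place of a hand-maintained sorted linked list:

static double sortedSum(double[] values) {
    // The queue keeps the entries closest to zero at the head
    java.util.PriorityQueue<Double> queue =
            new java.util.PriorityQueue<>(java.util.Comparator.comparingDouble(Math::abs));
    for (double v : values) queue.add(v);
    while (queue.size() > 1) {
        double a = queue.poll();   // the two smallest-magnitude numbers
        double b = queue.poll();
        queue.add(a + b);          // reinsert the partial sum in sorted position
    }
    return queue.isEmpty() ? 0.0 : queue.poll();
}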
Wider data type!! That's what you really want. If you can accumulate your sum in an IEEE quad float, you've got 113 bits of precision to work with. Adding up a million numbers? Lost 20 bits of precision? No problem! The 93 bits you've got at the end are still more than the 53 bits in the doubles that you started with. You could literally add lists of a trillion numbers before you started to lose precision as compared to double floats.
Using a wider data type, if you've got one, will give you far better performance than the crazy sorted-list algorithm I gave you above.
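Java has no quad type built in, but here's a sketch of the same wider-accumulator idea using BigDecimal (the method name is mine). Every double converts to BigDecimal exactly, so nothing is lost during accumulation, and you round only once at the end:

static double wideSum(double[] values) {
    java.math.BigDecimal acc = java.math.BigDecimal.ZERO;
    for (double v : values) {
        acc = acc.add(new java.math.BigDecimal(v));   // exact; no rounding here
    }
    return acc.doubleValue();                         // a single rounding at the end
}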
* Don't do math on float values. The only use for float is to save space in huge binary files and huge arrays. If you've got an array of float and you want to do math on them, convert to double, do the math, and convert back to float when you're done.
The Babylonian (aka Heron's) method seems to be one of the faster algorithms for finding the square root of a number n. How fast it converges depends on how far off your initial guess is.
Now as the number n increases, its root x becomes a smaller percentage of n:
root(10) : 10 - about 31%
root(100) : 100 - 10%
root(1000) : 1000 - about 3%
root(10000) : 10000 - 1%
So basically, for each digit in the number beyond the first, multiply by around 0.3 (i.e. divide by around 3), then use that as your initial guess. Such as:
public static double root(double n) {
    // If the number is 0 or negative, return it unchanged
    if (n <= 0)
        return n;
    // Call to a helper method that counts the digits of n
    int num = numDigits(n);
    double guess = n;
    // Multiply by 0.3 for every digit from the second digit onwards
    for (int i = 0; i < num - 1; ++i)
        guess = guess * 0.3;
    // Repeat until it converges to within the margin of error
    while (!(n - (guess * guess) <= 0.000001 && n - (guess * guess) >= 0)) {
        double divide = n / guess;
        guess = Math.abs(0.5 * (divide + guess));
    }
    return Math.abs(guess);
}
Does this help optimize the algorithm? And is this O(n)?
Yes. What works even better is to exploit the floating point representation, by dividing the binary exponent approximately by two, because operating on the floating-point bits is very fast. See Optimized low-accuracy approximation to `rootn(x, n)`.
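Here's a minimal sketch of that trick (the method name and iteration count are mine; it assumes a positive, normal input and does no special-case handling). Halving the unbiased exponent in the bit pattern gives a guess within roughly a factor of two of the true root, after which a handful of Heron steps converge:

static double fastSqrt(double n) {
    long bits = Double.doubleToLongBits(n);
    long exp = ((bits >>> 52) & 0x7FF) - 1023;           // unbiased binary exponent
    long guessBits = ((exp >> 1) + 1023) << 52;          // halve it, re-bias, rebuild
    double guess = Double.longBitsToDouble(guessBits);   // roughly 2^(exp/2)
    for (int i = 0; i < 6; i++) {                        // each step about doubles the correct bits
        guess = 0.5 * (guess + n / guess);
    }
    return guess;
}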
My belief is that the complexity of an algorithm is independent of the input provided (complexity is a general characteristic of the algorithm; we cannot say that algorithm X has complexity O1 for input I1 and complexity O2 for input I2). Thus, no matter what initial value you provide, it should not improve the complexity. It may reduce the number of iterations for that particular case, but that's a different thing. Reducing the number of iterations by half still means the same complexity. Keep in mind that n, 2n, and n/3 all fit into the O(n) class.
Now, with regard to the actual complexity, I read on Wikipedia (https://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Babylonian_method) that
This is a quadratically convergent algorithm, which means that the number of correct digits of the approximation roughly doubles with each iteration.
This means you need as many iterations as the number of precision decimals that you expect. Which is constant. If you need 10 exact decimals, 10 is a constant, being totally independent of n.
But in Wikipedia's example, they chose from the very beginning a candidate with the same order of magnitude as the correct answer (600 compared to 354). However, if your initial guess is way off (by orders of magnitude), you will need some extra iterations to cut down/add the necessary digits, which adds complexity. Suppose the correct answer is 10000, while your initial guess is 10. The difference is 4 orders of magnitude, and I think in this case the extra work needed to reach the correct magnitude is proportional to the difference between the number of digits of your guess and the number of digits of the correct answer. Since the number of digits is approximately log(n), the extra complexity here is |log(correct_answer) - log(initial_guess)|.
To avoid this, pick a number that has the right number of digits, which is generally half the number of digits of your initial number. My best choice would be picking the first half of the number as a candidate (from 123456, keep 123; from 1234567, either 123 or 1234). In Java, you could use byte operations to keep the first half of a number/string/whatever is kept in memory. Thus you need no extra iterations, just an operation with constant complexity.
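A decimal sketch of that idea (the helper name is mine; a real implementation would work on the binary representation instead):

static double halfDigitsGuess(double n) {
    // For n >= 1: keep roughly the first half of the decimal digits
    int digits = (int) Math.floor(Math.log10(n)) + 1;    // digit count of n
    int keep = (digits + 1) / 2;                         // first half, rounded up
    return Math.floor(n / Math.pow(10, digits - keep));  // e.g. 123456 -> 123
}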
For n ≥ 4, sqrt(n) = 2 sqrt(n / 4). For n < 1, sqrt(n) = 1/2 sqrt(n × 4). So you can always multiply or divide by 4 to normalize n on the range [1, 4).
Once you do that, take sqrt(4) = 2 as the starting point for Heron's algorithm, since that is the geometric mean and will yield the greatest possible improvement per iteration, and unroll the loop to perform the needed number of iterations for the desired accuracy.
Finally, multiply or divide by all the factors of 2 that you removed at the beginning. Note that multiplying and dividing by 2 or 4 is easy and fast for binary computers.
I discuss this algorithm at my blog.
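A sketch of that recipe (the method name and iteration count are mine, not the author's exact code):

static double sqrtByNormalization(double n) {
    if (n <= 0) return (n == 0) ? 0 : Double.NaN;
    double scale = 1.0;
    while (n >= 4.0) { n /= 4.0; scale *= 2.0; }   // sqrt(4m) = 2*sqrt(m)
    while (n < 1.0)  { n *= 4.0; scale /= 2.0; }   // sqrt(m/4) = sqrt(m)/2
    double guess = 2.0;                            // geometric mean of [1, 4)
    // About five unrolled Heron steps reach double precision on [1, 4)
    guess = 0.5 * (guess + n / guess);
    guess = 0.5 * (guess + n / guess);
    guess = 0.5 * (guess + n / guess);
    guess = 0.5 * (guess + n / guess);
    guess = 0.5 * (guess + n / guess);
    return scale * guess;
}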
I am modelling a system which contains a specific amount of energy between 0 and 1. This is stored in a double called storedEnergy.
storedEnergy should count from 0.0 to 1.0 over 21 minutes, but instead counts to 1.0000000000000004
Every iteration storedEnergy increases by onStoredEnergyIncreasePerMinute which is calculated like this:
onSlackMinutesIncreasePerMinute = (double) time1to0Energy / (double) time0to1Energy;
onStoredEnergyIncreasePerMinute = 1d / (double) time0to1Energy;
offSlackMinutesDecreasePerMinute = 1d;
offStoredEnergyDecreasePerMinute = 1d / (double) time1to0Energy;
Here is the stored energy increasing:
while (storedEnergy < target) {
    storedEnergy += onStoredEnergyIncreasePerMinute;
}
A similar thing happens with slack minutes: there are 24.999999999999993 slack minutes at 1.0 energy, when there should be 25.
It may be relevant to mention that I will then count back down to 0 using offStoredEnergyDecreasePerMinute for 25 minutes
I don't know why this is happening (though I presume it's something to do with doubles not being able to represent fractions properly), or what I should do to resolve it. Do I have to use some sort of fraction class?
I presume it's something to do with doubles not being able to represent fractions properly
That's right, floating-point numbers can only represent binary fractions exactly, and up to a certain limit of precision.
You should carefully consider what set of numbers your computation uses. If it's just rational numbers, then you can relatively simply implement that with two integers. If you take square roots or similar, then no numerical representation will be exact.
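For this particular loop, here is a minimal sketch of the two-integer idea, reusing the question's variable names: count minutes as an int and derive the energy from the ratio only when you need it, so no rounding error can accumulate.

int time0to1Energy = 21;                  // minutes to go from 0.0 to 1.0
int elapsedMinutes = 0;
while (elapsedMinutes < time0to1Energy) {
    elapsedMinutes++;                     // exact integer arithmetic
}
double storedEnergy = (double) elapsedMinutes / time0to1Energy;   // exactly 1.0 here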
Looks like you're over-adding here:
while (storedEnergy < target) {
    // this goes above 1.0 on the final iteration
    storedEnergy += onStoredEnergyIncreasePerMinute;
}
You can fix it by clamping to the target:
while (storedEnergy < target) {
    storedEnergy += onStoredEnergyIncreasePerMinute;
    if (storedEnergy > target)
        storedEnergy = target;
}
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic to understand why these errors occur. Try setting the precision you want your variables to have, or use rounding functions, to bypass the representation problem.
When I assign an int to a float, I thought the float would allow more precision and so would not lose or change the value assigned, but what I am seeing is something quite different. What is going on here?
for (int i = 63000000; i < 63005515; i++) {
    int a = i;
    float f = 0;
    f = a;
    System.out.print(java.text.NumberFormat.getInstance().format(a) + " : ");
    System.out.println(java.text.NumberFormat.getInstance().format(f));
}
Some of the output:
...
63,005,504 : 63,005,504
63,005,505 : 63,005,504
63,005,506 : 63,005,504
63,005,507 : 63,005,508
63,005,508 : 63,005,508
Thanks!
A float has the same number of bits as an int: 32. But it allows for a far greater range of values than int. However, the precision is fixed at 24 bits (23 "mantissa" bits, plus 1 implied leading bit). At values around 63,000,000, the gap between consecutive representable float values is greater than 1. You can verify this with the Math.ulp method, which gives the difference between two consecutive values.
The code
System.out.println(Math.ulp(63000000.0f));
prints
4.0
You can use double values for a far greater (yet still limited) precision:
System.out.println(Math.ulp(63000000.0));
prints
7.450580596923828E-9
However, you can just use ints here, because your values, at about 63 million, are still well below the maximum possible int value, which is about 2 billion.
A float in Java is a number in IEEE 754 floating-point representation; even though it can represent values from ±1.40129846432481707e-45 to ±3.40282346638528860e+38, it has only 6 or 7 significant decimal digits.
A simple solution would be to use a double, which has about 15 significant decimal digits and can cover all the values of an int without any issue.
However, if it is accuracy you're looking for, stay away from native floating-point representations and go for classes like BigInteger and BigDecimal.
No, they are not necessarily the same value. An int and a float are each 32 bits, but in a float some of those bits are used for the floating-point part of the number, so fewer whole numbers can be represented exactly in a float than in an int. Depending on what your application is doing with these numbers, you may not care about these differences, or you may want to look at using something like BigDecimal.
Floats don't allow more precision; floats allow a wider range of numbers.
We've got 2^32 possible values for integers in the range (approximately) -2 * 10^9 to 2 * 10^9. Floats are also 32-bit, so the number of possible values is at most the same as for integers.
Out of these 32 bits, some are reserved for the mantissa and the rest for the exponent. The resulting number represented by the float is then calculated (for simplicity I'll use base 10) as mantissa * 10^exponent.
Obviously, the maximum precision is limited by the number of bits assigned to the mantissa. So you can represent some integers exactly, but larger ones won't fit into the mantissa, and the least significant bits are thrown away, as in your case.
Float has a greater range of values but lower precision.
Int has a smaller range of values but full precision.
An int is exact to within 1, while a float at this magnitude is only exact to within 4.
So if you are dealing with very large numbers and don't care about an error of +/- 4, then use float. But if you need the last digit to be precise, you need to use int.
Here is what I tried:
public class LongToDoubleTest {
    public static void main(String... args) {
        System.out.println(Long.MAX_VALUE);
        System.out.println(Long.MAX_VALUE / 2);
        System.out.println(Math.floor(Long.MAX_VALUE / 2));
        System.out.println(new Double(Math.floor(Long.MAX_VALUE / 2)).longValue());
    }
}
Here is the output:
9223372036854775807
4611686018427387903
4.6116860184273879E18
4611686018427387904
I was initially trying to figure out whether it is possible to keep half of Long.MAX_VALUE in a double without losing data, so I ran the test with all those lines except the last one. It appeared that I was right: the trailing 3 was missing. Then, just to double-check, I added the last line, and not a 3 but a 4 appeared. So my question is: where did that 4 come from, and why is it 4 and not 3? Because 4 is actually an incorrect value here.
P.S. My knowledge of IEEE 754 is very poor, so maybe the behaviour I found is absolutely correct, but the 4 still looks like a wrong value here.
You need to understand that not every long can be exactly represented as a double - after all, there are 2^64 long values, and at most that many double values (although lots of those are reserved for "not a number" values etc). Given that there are also double values which clearly aren't long values (e.g. 0.5 - any non-integer, for a start) that means there can't possibly be a double value for every long value.
That means if you start with a long value that can't be represented, convert it to a double and then back to a long, it's entirely reasonable to get back a different number.
The absolute difference between adjacent double values increases as the magnitude of the numbers gets larger. So when the numbers are very small, the difference between two numbers is really tiny (very very small indeed) - but when the numbers get bigger - e.g. above the range of int - the gap between numbers becomes greater... greater even than 1. So adjacent double values near Long.MAX_VALUE can be quite a distance apart. That means several long values will map to the same nearest double.
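You can watch that gap grow with Math.ulp:

System.out.println(Math.ulp(1.0));                     // 2.220446049250313E-16
System.out.println(Math.ulp(1.0e9));                   // 1.1920928955078125E-7
System.out.println(Math.ulp((double) Long.MAX_VALUE)); // 2048.0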
The arithmetic here is completely predictable.
The Java double format uses one bit for sign and eleven bits for exponent. That leaves 52 bits to encode the significand (the fraction portion of a floating-point number).
For normal numbers, the significand has a leading 1 bit, followed by a binary point, followed by the 52 bits of the encoding.
When Long.MAX_VALUE/2, 4611686018427387903, is converted to double, it must be rounded to fit in these bits. 4611686018427387903 is 0x3fffffffffffffff. There are 62 significant bits there (two leading zeroes that are insignificant, then 62 bits). Since not all 62 bits fit in the 53 bits available, we must round them. The last nine bits, which we must eliminate by rounding, are 111111111₂. We must either round them down to zero (producing 0x3ffffffffffffe00) or up to 1000000000₂ (which carries into the next higher bit and produces 0x4000000000000000). The latter change (adding 1) is smaller than the former change (subtracting 111111111₂). We want a smaller error, so we choose the latter and round up. Thus, we round 0x3fffffffffffffff up to 0x4000000000000000. This is 2^62, which is 4611686018427387904.
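You can see the carry directly in hex:

long half = Long.MAX_VALUE / 2;
double d = half;                                 // rounds to the nearest double
System.out.println(Long.toHexString(half));      // 3fffffffffffffff
System.out.println(Long.toHexString((long) d));  // 4000000000000000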
This method returns true. Why?
public static boolean f() {
    double val = Double.MAX_VALUE / 10;
    double save = val;
    for (int i = 1; i < 1000; i++) {
        val -= i;
    }
    return (val == save);
}
You're subtracting quite a small value (less than 1000) from a huge value. The small value is so much smaller than the large value that the closest representable value to the theoretical result is still the original value.
Basically it's a result of the way floating point numbers work.
Imagine we had some decimal floating point type (just for simplicity) which only stored 5 significant digits in the mantissa, and an exponent in the range 0 to 1000.
Your example is like writing 10^999 - 1000... think about what the result of that would be, when rounded to 5 significant digits. Yes, the exact result is 99999.....9000 (with 999 digits), but if you can only represent values with 5 significant digits, the closest result is 10^999 again.
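A quick way to see the same thing with your actual values:

double val = Double.MAX_VALUE / 10;
System.out.println(Math.ulp(val));        // about 2.5E291, utterly dwarfing 1000
System.out.println(val - 1000 == val);    // true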
When you set val to Double.MAX_VALUE/10, it is set to a value approximately equal to 1.7976931348623158 * 10^307. Subtracting values like 1000 from that would require more precision than the double representation allows, so it basically leaves val unchanged.
Depending on your needs, you may use BigDecimal instead of double.
Double.MAX_VALUE is so big that the JVM cannot tell the difference between it and Double.MAX_VALUE - 1000.
If you subtract a number smaller than about half of 1.9958403095347198E292 (the gap between Double.MAX_VALUE and the next double below it) from Double.MAX_VALUE, the result is still Double.MAX_VALUE.
System.out.println(
    new BigDecimal(Double.MAX_VALUE).equals(
        new BigDecimal(Double.MAX_VALUE - 2E291)));
System.out.println(
    new BigDecimal(Double.MAX_VALUE).equals(
        new BigDecimal(Double.MAX_VALUE - 2E292)));
Output:
true
false
A double does not have enough precision to perform the calculation you are attempting, so the result is the same as the initial value.
It has nothing to do with the == operator.
val is a big number, and when subtracting 1 (or even 1000) from it, the result cannot be expressed properly as a double value. The representation of the number x and of x-1 is the same, because a double has only a limited number of bits to represent an unlimited set of numbers.
Double.MAX_VALUE is a huge number compared to 1 or 1000. Double.MAX_VALUE - 1 is equal to Double.MAX_VALUE. So your code does roughly nothing when subtracting 1 or 1000 from Double.MAX_VALUE/10.
Always remember that:
doubles and floats are just approximations of real numbers; they are rationals, and they are not equally distributed among the reals
be very careful with arithmetic operators between doubles or floats whose magnitudes are far apart (there are many other rules like this...)
in general, never use doubles or floats if you need arbitrary precision
Because double is a floating point numeric type, which is a way of approximating numeric values. Floating point representations encode numbers so that we can store numbers much larger or smaller than we normally could. However, not all numbers can be represented in the given space, so multiple numbers get rounded to the same floating point value.
As a simplified example, we might want to store values ranging from -1000 to 1000 in some small amount of space where we could normally only store -10 to 10. So we could round every value to the nearest hundred and encode it in the small space: -1000 gets encoded as -10, -900 gets encoded as -9, 1000 gets encoded as 10. But what if we want to store -999? The closest value we can encode is -1000, so we have to encode -999 as the same value as -1000: -10.
In reality, floating point schemes are much more complicated than the example above, but the concept is similar. Floating point representations of numbers can only represent some of all the possible numbers, so when we have a number that can't be represented as part of the scheme, we have to round it to the closest representable value.
In your code, all values within 1000 of Double.MAX_VALUE / 10 automatically get rounded to Double.MAX_VALUE / 10, which is why the computer thinks (Double.MAX_VALUE / 10) - 1000 == Double.MAX_VALUE / 10.
The result of a floating point calculation is the closest representable value to the exact answer. This program:
public class Test {
    public static void main(String[] args) throws Exception {
        double val = Double.MAX_VALUE / 10;
        System.out.println(val);
        System.out.println(Math.nextAfter(val, 0));
    }
}
prints:
1.7976931348623158E307
1.7976931348623155E307
The first of these numbers is your original val. The second is the largest double that is less than it.
When you subtract 1000 from 1.7976931348623158E307, the exact answer is between those two numbers, but very, very much closer to 1.7976931348623158E307 than to 1.7976931348623155E307, so the result will be rounded to 1.7976931348623155E307, leaving val unchanged.