long mynum = Long.parseLong("7660142319573120");
long ans = (long) Math.sqrt(mynum); // output = 87522239
long ans_ans = ans * ans;
In this case I am getting ans_ans > mynum, where it should be <= mynum. Why does this happen? I tried this with Node.js as well, and the result is the same there.
Math.sqrt operates on doubles, not longs, so mynum gets converted to a double first. This is a 64-bit floating-point number, which has "15–17 decimal digits of precision" (Wikipedia).
Your input number has 16 digits, so you may be losing precision on the input already. You may also be losing precision on the output.
If you really need an integer square root of long numbers, or generally numbers that are too big for accurate representation as a double, look into integer square root algorithms.
You can also use LongMath.sqrt() from the Guava library.
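If you prefer to avoid a dependency, here is a minimal sketch of my own (not Guava's implementation): it uses Math.sqrt only as an estimate and then corrects the candidate so that the returned value r always satisfies r * r <= n. The name floorSqrt and the overflow-avoiding n / r comparisons are my choices; the loops run at most a few times because the estimate is already very close.

    // Possible integer square root for non-negative longs (a sketch, not Guava's code).
    static long floorSqrt(long n) {
        if (n < 0) throw new IllegalArgumentException("negative input: " + n);
        long r = (long) Math.sqrt(n);        // close, but may be one too high or too low
        while (r > 0 && r > n / r) r--;      // r > n/r implies r*r > n: step down
        while (r + 1 <= n / (r + 1)) r++;    // (r+1)*(r+1) <= n: step up
        return r;                            // guarantees r * r <= n
    }

For the question's input, floorSqrt(7660142319573120L) returns 87522238, whose square (7660142144528644) is indeed <= the input.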
You are calling Math.sqrt with a long.
As the JavaDoc points out, it returns a "correctly rounded value".
Since the exact square root is not a whole number (87522238.999999994...), the result is rounded up to your output 87522239.
After that, the square of that value is naturally larger than mynum, since you are squaring a number that is larger than the true root!
double ans = (double)Math.sqrt(15);
System.out.println("Double : " + ans);
double ans_ans = ans * ans;
System.out.println("Double : " + ans_ans);
long ans1 = (long)Math.sqrt(15);
System.out.println("Long : " + ans1);
long ans_ans1 = ans1 * ans1;
System.out.println("Long : " + ans_ans1);
Results :
Double : 3.872983346207417
Double : 15.000000000000002
Long : 3
Long : 9
I hope this makes it clear.
The answer is: rounding.
The result of (long) Math.sqrt(7660142319573120) is 87522239, but the mathematical result is 87522238.999999994287166259537761....
If you square the ans value, which was rounded up in order to be stored as a whole number, you get a bigger number than squaring the exact result.
You do not need the long type; all the numbers involved are representable as double, and Math.sqrt first converts its argument to double and then computes the square root via an FPU instruction (on a standard PC).
This situation occurs for numbers b=a^2-1 where a is an integer in the range
67108865 <= a <= 94906265
The square root of b has a series expansion starting with
a - 1/(2*a) - 1/(8*a^3) + ...
If the relative error 1/(2*a^2) falls below the machine epsilon, the closest representable double number is a itself.
On the other hand, for this trick to work one also needs a*a - 1.0 to be exactly representable in double, which gives the conditions
1/(2*a^2) < mu = 2^(-53) < 1/(a^2)
or
2^52 < a^2 < 2^53
2^26+1=67108865 <= a <= floor(sqrt(2)*2^26)=94906265
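To see the effect concretely, here is a small Java check of my own (not part of the answer) for one value of a in that range; it reproduces the numbers from the question:

    public class SqrtEdgeCase {
        public static void main(String[] args) {
            long a = 87522239L;                  // lies in the range 67108865..94906265
            long b = a * a - 1;                  // 7660142319573120, the number from the question
            long root = (long) Math.sqrt(b);     // Math.sqrt(b) rounds up to exactly a
            System.out.println(root);            // 87522239
            System.out.println(root * root > b); // true: the "square root" squared exceeds b
        }
    }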
We can easily get random floating point numbers within a desired range [X,Y) (note that X is inclusive and Y is exclusive) with the function listed below since Math.random() (and most pseudorandom number generators, AFAIK) produce numbers in [0,1):
function randomInRange(min, max) {
return Math.random() * (max-min) + min;
}
// Notice that we can get "min" exactly but never "max".
How can we get a random number in a desired range inclusive to both bounds, i.e. [X,Y]?
I suppose we could "increment" our value from Math.random() (or equivalent) by "rolling" the bits of an IEEE-754 double-precision floating point number to put the maximum possible value at 1.0 exactly, but that seems like a pain to get right, especially in languages poorly suited for bit manipulation. Is there an easier way?
(As an aside, why do random number generators produce numbers in [0,1) instead of [0,1]?)
[Edit] Please note that I have no need for this and I am fully aware that the distinction is pedantic. Just being curious and hoping for some interesting answers. Feel free to vote to close if this question is inappropriate.
I believe there is a much better solution, but this one should work :)
function randomInRange(min, max) {
return Math.random() < 0.5 ? ((1-Math.random()) * (max-min) + min) : (Math.random() * (max-min) + min);
}
First off, there's a problem in your code: Try randomInRange(0,5e-324) or just enter Math.random()*5e-324 in your browser's JavaScript console.
Even without overflow/underflow/denorms, it's difficult to reason reliably about floating point ops. After a bit of digging, I can find a counterexample:
>>> a=1.0
>>> b=2**-54
>>> rand=a-2*b
>>> a
1.0
>>> b
5.551115123125783e-17
>>> rand
0.9999999999999999
>>> (a-b)*rand+b
1.0
It's easier to explain why this happens with a=2^53 and b=0.5: 2^53-1 is the next representable number down. The default rounding mode ("round to nearest even") rounds 2^53-0.5 up (because 2^53 is "even" [LSB = 0] and 2^53-1 is "odd" [LSB = 1]), so you subtract b and get 2^53, multiply to get 2^53-1, and add b to get 2^53 again.
To answer your second question: because the underlying PRNG almost always generates a random number in the interval [0, 2^n - 1], i.e. it generates random bits. It's very easy to pick a suitable n (the bits of precision in your floating-point representation), divide by 2^n, and get a predictable distribution. Note that there are some numbers in [0,1) that you will never generate using this method (anything in (0, 2^-53) with IEEE doubles).
It also means that you can do a[Math.floor(Math.random()*a.length)] and not worry about overflow (homework: In IEEE binary floating point, prove that b < 1 implies a*b < a for positive integer a).
The other nice thing is that you can think of each random output x as representing an interval [x, x+2^-53) (the not-so-nice thing is that the average value returned is slightly less than 0.5). If you return in [0,1], do you return the endpoints with the same probability as everything else, or should they only have half the probability, because they only represent half the interval of everything else?
To answer the simpler question of returning a number in [0,1], the method below effectively generates an integer in [0, 2^n] (by generating an integer in [0, 2^(n+1) - 1] and throwing it away if it's too big) and divides by 2^n:
function randominclusive() {
// Generate a random "top bit". Is it set?
while (Math.random() >= 0.5) {
// Generate the rest of the random bits. Are they zero?
// If so, then we've generated 2^n, and dividing by 2^n gives us 1.
if (Math.random() == 0) { return 1.0; }
// If not, generate a new random number.
}
// If the top bits are not set, just divide by 2^n.
return Math.random();
}
The comments imply base 2, but I think the assumptions are thus:
0 and 1 should be returned equiprobably (i.e. the Math.random() doesn't make use of the closer spacing of floating point numbers near 0).
Math.random() >= 0.5 with probability 1/2 (should be true for even bases)
The underlying PRNG is good enough that we can do this.
Note that random numbers are always generated in pairs: the one in the while (a) is always followed by either the one in the if or the one at the end (b). It's fairly easy to verify that it's sensible by considering a PRNG that returns either 0 or 0.5:
a=0 b=0 : return 0
a=0 b=0.5: return 0.5
a=0.5 b=0 : return 1
a=0.5 b=0.5: loop
Problems:
The assumptions might not be true. In particular, a common PRNG is to take the top 32 bits of a 48-bit LCG (Firefox and Java do this). To generate a double, you take 53 bits from two consecutive outputs and divide by 2^53, but some outputs are impossible (you can't generate 2^53 outputs with 48 bits of state!). I suspect some of them never return 0 (assuming single-threaded access), but I don't feel like checking Java's implementation right now.
Math.random() is called twice for every potential output as a consequence of needing to get the extra bit, but this places more constraints on the PRNG (requiring us to reason about four consecutive outputs of the above LCG).
Math.random() is called on average about four times per output. A bit slow.
It throws away results deterministically (assuming single-threaded access), so is pretty much guaranteed to reduce the output space.
My solution to this problem has always been to use the following in place of your upper bound.
Math.nextAfter(upperBound,upperBound+1)
or
upperBound + Double.MIN_VALUE
So your code would look like this:
double myRandomNum = Math.random() * Math.nextAfter(upperBound,upperBound+1) + lowerBound;
or
double myRandomNum = Math.random() * (upperBound + Double.MIN_VALUE) + lowerBound;
This simply increments your upper bound by the smallest double (Double.MIN_VALUE) so that your upper bound will be included as a possibility in the random calculation.
This is a good way to go about it because it does not skew the probabilities in favor of any one number.
The only case this wouldn't work is where your upper bound is equal to Double.MAX_VALUE
Just pick your half-open interval slightly bigger, so that your chosen closed interval is a subset. Then, keep generating the random variable until it lands in said closed interval.
Example: If you want something uniform in [3,8], then repeatedly regenerate a uniform random variable in [3,9) until it happens to land in [3,8].
function randomInRangeInclusive(min,max) {
var ret;
for (;;) {
ret = min + ( Math.random() * (max-min) * 1.1 );
if ( ret <= max ) { break; }
}
return ret;
}
Note: The number of times you generate the half-open random variable is random and potentially unbounded, but you can otherwise make the expected number of calls as close to 1 as you like, and I don't think there exists a solution that can't potentially call infinitely many times.
Given the "extremely large" number of values between 0 and 1, does it really matter? The chances of actually hitting 1 are tiny, so it's very unlikely to make a significant difference to anything you're doing.
What would be a situation where you would NEED a floating-point value to be inclusive of the upper bound? For integers I understand, but for a float, the difference between inclusive and exclusive is, what, like 1.0e-32?
Think of it this way. If you imagine that floating-point numbers have arbitrary precision, the chances of getting exactly min are zero. So are the chances of getting max. I'll let you draw your own conclusion on that.
This 'problem' is equivalent to getting a random point on the real line between 0 and 1. There is no 'inclusive' and 'exclusive'.
The question is akin to asking: what is the floating-point number right before 1.0? There is such a floating-point number, but it is only one part in 2^24 (for an IEEE float) or one part in 2^53 (for a double) away from 1.0.
The difference is negligible in practice.
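For reference, here is a tiny Java check of my own (not part of these answers) showing the double right below 1.0 and how far it is from 1.0:

    public class JustBelowOne {
        public static void main(String[] args) {
            double belowOne = Math.nextDown(1.0);  // largest double that is < 1.0
            System.out.println(belowOne);          // 0.9999999999999999
            System.out.println(1.0 - belowOne);    // 1.1102230246251565E-16, i.e. 2^-53
        }
    }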
private static double random(double min, double max) {
    // Math.random() is uniform on [0, 1); reflecting the upper half of that
    // interval around 1.0 (r >= 0.5 maps to 1.5 - r, i.e. (0.5, 1.0]) makes
    // the value 1.0 reachable, so the scaled result can reach max as well.
    final double r = Math.random();
    return (r >= 0.5d ? 1.5d - r : r) * (max - min) + min;
}
Math.round() will help to include the bound value. If you have 0 <= value < 1 (1 is exclusive), then Math.round(value * 100) / 100 returns 0 <= value <= 1 (1 is inclusive). A note here: the value now has only 2 digits after the decimal point. If you want 3 digits, try Math.round(value * 1000) / 1000, and so on. The following function has one more parameter, the number of digits after the decimal point, which I call precision:
function randomInRange(min, max, precision) {
return Math.round(Math.random() * Math.pow(10, precision)) /
Math.pow(10, precision) * (max - min) + min;
}
How about this?
function randomInRange(min, max){
var n = Math.random() * (max - min + 0.1) + min;
return n > max ? randomInRange(min, max) : n;
}
If you get stack overflow on this I'll buy you a present.
--
EDIT: never mind about the present. I got wild with:
randomInRange(0, 0.0000000000000000001)
and got stack overflow.
I am fairly inexperienced, so I am also looking for solutions myself.
This is my rough thought:
Random number generators produce numbers in [0,1) instead of [0,1]
because [0,1) is a unit-length interval that can be followed by [1,2) and so on without overlapping.
For random[x, y],
You can do this:
float randomInclusive(float x, float y){
    float MIN = smallest_value_above_zero;
    float result;
    do{
        result = random(x, (y + MIN));
    } while(result > y);
    return result;
}
Here all values in [x, y] have the same probability of being picked, and you can now reach y.
Generating a "uniform" floating-point number in a range is non-trivial. For example, the common practice of multiplying or dividing a random integer by a constant, or by scaling a "uniform" floating-point number to the desired range, have the disadvantage that not all numbers a floating-point format can represent in the range can be covered this way, and may have subtle bias problems. These problems are discussed in detail in "Generating Random Floating-Point Numbers by Dividing Integers: a Case Study" by F. Goualard.
Just to show how non-trivial the problem is, the following pseudocode generates a random "uniform-behaving" floating-point number in the closed interval [lo, hi], where the number is of the form FPSign * FPSignificand * FPRADIX^FPExponent. The pseudocode below was reproduced from my section on floating-point number generation. Note that it works for any precision and any base (including binary and decimal) of floating-point numbers.
METHOD RNDRANGE(lo, hi)
  losgn = FPSign(lo)
  hisgn = FPSign(hi)
  loexp = FPExponent(lo)
  hiexp = FPExponent(hi)
  losig = FPSignificand(lo)
  hisig = FPSignificand(hi)
  if lo > hi: return error
  if losgn == 1 and hisgn == -1: return error
  if losgn == -1 and hisgn == 1
    // Straddles negative and positive ranges
    // NOTE: Changes negative zero to positive
    mabs = max(abs(lo),abs(hi))
    while true
      ret=RNDRANGE(0, mabs)
      neg=RNDINT(1)
      if neg==0: ret=-ret
      if ret>=lo and ret<=hi: return ret
    end
  end
  if lo == hi: return lo
  if losgn == -1
    // Negative range
    return -RNDRANGE(abs(lo), abs(hi))
  end
  // Positive range
  expdiff=hiexp-loexp
  if loexp==hiexp
    // Exponents are the same
    // NOTE: Automatically handles
    // subnormals
    s=RNDINTRANGE(losig, hisig)
    return s*1.0*pow(FPRADIX, loexp)
  end
  while true
    ex=hiexp
    while ex>MINEXP
      v=RNDINTEXC(FPRADIX)
      if v==0: ex=ex-1
      else: break
    end
    s=0
    if ex==MINEXP
      // Has FPPRECISION or fewer digits
      // and so can be normal or subnormal
      s=RNDINTEXC(pow(FPRADIX,FPPRECISION))
    else if FPRADIX != 2
      // Has FPPRECISION digits
      s=RNDINTEXCRANGE(
        pow(FPRADIX,FPPRECISION-1),
        pow(FPRADIX,FPPRECISION))
    else
      // Has FPPRECISION digits (bits), the highest
      // of which is always 1 because it's the
      // only nonzero bit
      sm=pow(FPRADIX,FPPRECISION-1)
      s=RNDINTEXC(sm)+sm
    end
    ret=s*1.0*pow(FPRADIX, ex)
    if ret>=lo and ret<=hi: return ret
  end
END METHOD
Why do these simple double comparisons return true?
System.out.println(Double.MAX_VALUE == (Double.MAX_VALUE - 99 * Math.pow(10, 290)));
System.out.println(new Double(Double.MAX_VALUE).equals(new Double(Double.MAX_VALUE - 99 * Math.pow(10, 290))));
I know it's probably a IEEE 754 precision problem, but I can't figure out what the exact problem is here.
Very large floating-point numbers are very imprecise: the gaps between adjacent representable values become enormous. What you are seeing is rounding error.
We could demonstrate floating point in base 10 (scientific notation), say with 1 digit for the exponent and 4 digits for the significand:
1234*10^1 == 1234
1234*10^-1 == 123.4
1234*10^9 == 1,234,000,000,000
1235*10^9 == 1,235,000,000,000
It follows that:
1234*10^9 - 1234*10^1 == 1234*10^9
You can see that as the numbers get larger, we lose precision. Lots of it.
For kicks, you can test it:
double d = 10;
while(Double.MAX_VALUE - d == Double.MAX_VALUE)
d *= 10;
System.out.println(d);
We can determine that the gap between Double.MAX_VALUE and the next representable value below it (that gap is called an ULP) is somewhere around 10^292, which is a very large gap.
We can determine its exact value with the help of Math#ulp:
long lng = Double.doubleToRawLongBits(Double.MAX_VALUE);
double nextMax = Double.longBitsToDouble(lng - 1);
System.out.println(Math.ulp(nextMax));
Which is about 1.99584*10^292, or 2^971.
The larger floating-point numbers get, the larger the gap between one floating-point number and the next one down or up. This is because an error of 0.5 is pretty important in a measurement of value 1, but utterly insignificant in a measurement of value 1000000000000.
Double.MAX_VALUE is huge, and the numbers you're subtracting from it are less huge. In fact, the difference is big enough that the numbers you're subtracting are less huge than the gap between Double.MAX_VALUE and the next double down, so Double.MAX_VALUE is the closest representable number to the exact result. Thus, you just get Double.MAX_VALUE back.
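As a quick sanity check, here is a sketch of my own (not part of the original answer) that compares the amount being subtracted against the ULP of Double.MAX_VALUE; 99 * 10^290 is less than half of that ULP, so the subtraction rounds back to Double.MAX_VALUE:

    public class MaxValueGap {
        public static void main(String[] args) {
            double ulp = Math.ulp(Double.MAX_VALUE);  // about 1.99584e292, i.e. 2^971
            double delta = 99 * Math.pow(10, 290);    // about 9.9e291
            System.out.println(delta < ulp / 2);      // true: delta is below half an ULP
            System.out.println(Double.MAX_VALUE - delta == Double.MAX_VALUE); // true
        }
    }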
I think the problem is in this line:
new Double(Double.MAX_VALUE).equals(new Double(Double.MAX_VALUE - 99999999999999999999999D))
Double.MAX_VALUE - 99999999999999999999999D evaluates to Double.MAX_VALUE,
and you're comparing it with
new Double(Double.MAX_VALUE - 99999999999999999999999D)
which I think is just the same, so it will always return true.
I'd like to round manually, without the round() method.
So I can tell my program: this is my number, and this is the position at which I want it rounded.
Let me give you some examples:
Input number: 144
Input rounding: 2
Output rounded number: 140
Input number: 123456
Input rounding: 3
Output rounded number: 123500
And as a little add-on, maybe rounding after the decimal point:
Input number: 123.456
Input rounding: -1
Output rounded number: 123.460
I don't know how to start programming that...
Does anyone have a clue how I can get started on this problem?
Thanks for helping me :)
I'd like to get better at programming, so I don't want to use round(); I want to write my own so that I can understand it better :)
A simple way to do it is:
Multiply the number by a power of ten (use a negative power for integer positions)
Round it by any desired method
Divide the result by the same power of ten from step 1
Let me show you an example:
You want to round the number 1234.567 to two decimal positions (the desired result is 1234.57).
x = 1234.567;
p = 2;
x = x * pow(10, p); // x = 123456.7
x = floor(x + 0.5); // x = floor(123456.7 + 0.5) = floor(123457.2) = 123457
x = x / pow(10,p); // x = 1234.57
return x;
Of course you can compact all these steps in one. I made it step-by-step to show you how it works. In a compact java form it would be something like:
public double roundItTheHardWay(double x, int p) {
return Math.floor(x * Math.pow(10, p) + 0.5) / Math.pow(10, p);
}
As for the integer positions, you can easily check that this also works (with p < 0).
Hope this helps
If you need some advice on how to start:
step by step, write down the calculations you need to do to get from 144 (rounding 2) --> 140
then replace your math with Java statements; that should be easy, but if you have problems, just look here and here
public static int round (int input, int places) {
int factor = (int)java.lang.Math.pow(10, places);
return (input / factor) * factor;
}
Basically, what this does is dividing the input by your factor, then multiplying again. When dividing integers in languages like Java, the remainder of the division is dropped from the results.
edit: the code was faulty, fixed it. Also, the java.lang.Math.pow is so that you get 10 to the n-th power, where n is the value of places. In the OP's example, the number of places to consider is upped by one.
Re-edit: as pointed out in the comments, the above will give you the floor, that is, the result of rounding down. If you don't want to always round down, you must also keep the modulus in another variable. Like this:
int mod = input % factor;
If you want to always get the ceiling, that is, rounding up, check whether mod is zero. If it is, leave it at that. Otherwise, add factor to the result.
int ceil = (input / factor) * factor + (mod == 0 ? 0 : factor);
If you want to round to nearest, then get the floor if mod is smaller than factor / 2, or the ceiling otherwise.
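Putting the pieces together, a minimal sketch of my own (building on the code above, not part of the original answer) for integer round-to-nearest might look like this; roundToNearest is a name I chose, and as noted above the places parameter counts the digits to drop, so the OP's "rounding: 2" corresponds to places = 1 here:

    public static int roundToNearest(int input, int places) {
        int factor = (int) Math.pow(10, places);  // e.g. places = 1 -> factor = 10
        int floor = (input / factor) * factor;    // round down: 144 -> 140
        int mod = input % factor;                 // remaining digits: 144 -> 4
        // Round up only when the remainder is at least half of the factor.
        return mod < factor / 2 ? floor : floor + factor;
    }

For example, roundToNearest(144, 1) gives 140 and roundToNearest(123456, 2) gives 123500, matching the OP's examples.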
Divide (for a positive rounding position) or multiply (for a negative one) by 10 * ("input rounding" - 1), here 144 / (10 * (2 - 1)); this gives the same thing in this instance. Get the remainder, the last digit (4). Determine whether it is greater than or equal to 5 (here it is less than). Depending on that answer, either make it equal to 0 or add 10. Then multiply/divide back by the same 10 * ("input rounding" - 1). This should give you your value.
If this is for homework. The purpose is to teach you to think for yourself. I may have given you the answer, but you still need to write the code by yourself.
Next time, you should write your own code and ask what is wrong
For integers, one way would be to use a combination of the mod operator, which is the percent symbol %, and the divide operator. In your first example, you would compute 144 % 10, resulting in 4. And compute 144 / 10, which gives 14 (as an integer). You can compare the result of the mod operation to half of the denominator, to find out if you should round the 14 up to 15 or not (in this case not), and then multiply back by the denominator to get your answer.
In pseudocode, assuming n is the number to round and p is the power of 10 representing the position of the significant digits:
denom = power(10, p)
remainder = n % denom
dividend = n / denom
if (remainder < denom/2)
return dividend * denom
else
return (dividend + 1) * denom
Note: the question is still not answered thoroughly! This question does not deal with the issue of truncation of floating-point parts!!!
In Java I have this simple code:
double sum = 0.0;
for(int i = 1; i <= n; i++){
sum += 1.0/n;
}
System.out.println("Sum should be: 1");
System.out.println("The result is: " + sum);
Here n can be any integer. For numbers like 7 or 9, sum is expected to differ in its last digits, and the result is 0.999999999998 or something, but the output when I use 3 is 1.0.
If you add 1/3 3 times, you would expect a number close to 1, but I get exactly 1.0.
Why?
This is because the division is done in integer arithmetic.
1/n always gives 0 for n > 1.
Therefore, you always end up with sum = 0 + 1/1 + 0 + 0...
Try with 1.0 / n
If you add 1/3 3 times, you would expect a number close to 1, but I
get exactly 1.0.
Actually a normal person uncontaminated by programming experience would expect n * 1 / n to equal 1, but we're not normal here.
I can't reproduce your problem exactly, I get
groovy:000> def foo(n) {
groovy:001> sum = 0.0
groovy:002> for (int i = 0; i < n; i++) {
groovy:003> sum += 1.0 / n
groovy:004> }
groovy:005> sum
groovy:006> }
===> true
groovy:000> foo(3)
===> 0.9999999999
There may be 2 issues here, at least you will want to be aware of them.
One is that doubles are not exact, they cannot represent some values exactly, and you just have to expect stuff to be off by a little bit. Your goal isn't 100% accuracy, it's to keep the error within acceptable bounds. (Peter Lawrey has an interesting article on doubles that you might want to check out.) If that's not ok for you, you'll want to avoid doubles. For a lot of uses BigDecimal is good enough. If you want a library where the division problems in your question give accurate answers you might check out the answers to this question.
The other issue is that System.out.println doesn't tell you the exact value of a double, it fudges a bit. If you add a line like:
System.out.println(new java.math.BigDecimal(sum));
then you will get an accurate view of what the double contains.
I'm not sure whether this will help clarify things, because I'm not sure what you consider to be the problem.
Here is a test program that uses BigDecimal, as previously suggested, to display the values of the intermediate answers. At the final step, adding the third copy of 1.0/3 to the sum of two copies, the exact answer is half way between 1.0 and the next double lower than it. In that situation the round-to-even rounding rule picks 1.0.
Given that, I think it should round to 1.0, contradicting the question title.
Test program:
import java.math.BigDecimal;
public class Test {
public static void main(String[] args) {
final double oneThirdD = 1.0/3;
final BigDecimal oneThirdBD = new BigDecimal(oneThirdD);
final double twoThirdsD = oneThirdD + oneThirdD;
final BigDecimal twoThirdsBD = new BigDecimal(twoThirdsD);
final BigDecimal exact = twoThirdsBD.add(oneThirdBD);
final double nextLowerD = Math.nextAfter(1.0, 0);
final BigDecimal nextLowerBD = new BigDecimal(nextLowerD);
System.out.println("1.0/3: "+oneThirdBD);
System.out.println("1.0/3+1.0/3: "+twoThirdsBD);
System.out.println("Exact sum: "+exact);
System.out.println("Rounding error rounding up to 1.0: "+BigDecimal.ONE.subtract(exact));
System.out.println("Largest double that is less than 1.0: "+nextLowerBD);
System.out.println("Rounding error rounding down to next lower double: "+exact.subtract(nextLowerBD));
}
}
Output:
1.0/3: 0.333333333333333314829616256247390992939472198486328125
1.0/3+1.0/3: 0.66666666666666662965923251249478198587894439697265625
Exact sum: 0.999999999999999944488848768742172978818416595458984375
Rounding error rounding up to 1.0: 5.5511151231257827021181583404541015625E-17
Largest double that is less than 1.0: 0.99999999999999988897769753748434595763683319091796875
Rounding error rounding down to next lower double: 5.5511151231257827021181583404541015625E-17
An int divided by an int will always produce another int. An int has no place to store the fractional part of the number, so it is discarded. Keep in mind that it is discarded, not rounded.
Therefore 1 / 3, which is mathematically 0.3333333..., has its fractional part discarded and becomes 0.
If you specify the number as a double (by including the decimal point, e.g. 1. or 1.0) then the result will be a double (because Java automatically widens the int to a double) and the fractional part will be preserved.
In your updated question, you are setting i to 1.0 but i is still an int. So that 1.0 is getting truncated to 1 and for further calculations, it is still an int. You need to change the type of i to double as well otherwise there will be no difference in the code.
Alternatively you can use sum += 1.0/n
This will have the effect of converting n to a double before performing the calculation.
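Here is a small sketch of my own (not from the answer) contrasting the two forms: with int/int division the sum stays 0.0, while promoting the numerator to double gives the expected near-1 result.

    public class IntVsDoubleDivision {
        public static void main(String[] args) {
            int n = 7;
            double intDiv = 0.0, doubleDiv = 0.0;
            for (int i = 1; i <= n; i++) {
                intDiv += 1 / n;      // int / int truncates to 0, so nothing is added
                doubleDiv += 1.0 / n; // 1.0 promotes n to double, so each term is ~0.142857
            }
            System.out.println(intDiv);    // 0.0
            System.out.println(doubleDiv); // very close to, but typically not exactly, 1.0
        }
    }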