long mynum = Long.parseLong("7660142319573120");
long ans = (long) Math.sqrt(mynum); // output = 87522239
long ans_ans = ans * ans;
In this case, I am getting ans_ans > mynum, where it should be <= mynum. Why does this happen? I tried this with Node.js as well, and the result is the same there.
Math.sqrt operates on doubles, not longs, so mynum gets converted to a double first. A double is a 64-bit floating-point number, which has "15–17 decimal digits of precision" (Wikipedia).
Your input number has 16 digits, so you may be losing precision on the input already. You may also be losing precision on the output.
If you really need an integer square root of long numbers, or generally numbers that are too big for accurate representation as a double, look into integer square root algorithms.
You can also use LongMath.sqrt() from the Guava library.
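If you'd rather avoid a library dependency, here is a minimal sketch of an integer square root for long values (my own illustration, not from any particular library): it uses Math.sqrt only as a starting guess and then corrects the guess with exact long arithmetic.
// Sketch: exact floor(sqrt(n)) for a non-negative long. The double result of
// Math.sqrt is only a starting guess; the overflow-safe division checks below
// nudge it to the exact integer square root.
static long isqrt(long n) {
    if (n < 0) throw new IllegalArgumentException("negative argument: " + n);
    if (n == 0) return 0;
    long x = (long) Math.sqrt(n);      // may be off by one due to rounding
    while (x > n / x) x--;             // x > n/x   <=>  x*x > n (no overflow)
    while (x + 1 <= n / (x + 1)) x++;  // x+1 <= n/(x+1)  <=>  (x+1)^2 <= n
    return x;
}
With the question's input, isqrt(7660142319573120L) returns 87522238, whose square is less than or equal to mynum, as expected.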
You are calling Math.sqrt with a long.
As the JavaDoc points out, it returns a "correctly rounded value".
Since the true square root is not a whole number (87522238.999999994...), the result is rounded up to 87522239.
After that, squaring this value naturally gives something larger than mynum, since you are multiplying a number larger than the exact root!
double ans = (double)Math.sqrt(15);
System.out.println("Double : " + ans);
double ans_ans = ans * ans;
System.out.println("Double : " + ans_ans);
long ans1 = (long)Math.sqrt(15);
System.out.println("Long : " + ans1);
long ans_ans1 = ans1 * ans1;
System.out.println("Long : " + ans_ans1);
Results:
Double : 3.872983346207417
Double : 15.000000000000002
Long : 3
Long : 9
I hope this makes it clear.
The answer is: rounding.
The result of (long) Math.sqrt(7660142319573120L) is 87522239, but the mathematical result is 87522238.999999994287166259537761...
If you square the ans value, which was rounded up so it could be stored as a whole number, you get a bigger number than you would get by squaring the exact root.
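You can see this directly by printing the double before the cast (a quick check I added; it relies on Math.sqrt being correctly rounded):
long mynum = 7660142319573120L;
double root = Math.sqrt(mynum);
System.out.printf("%.6f%n", root);             // 87522239.000000 -- no fraction left
System.out.println((long) root * (long) root); // 7660142319573121 = mynum + 1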
You do not need the long type here: your number is below 2^53, so it is exactly representable as a double, and Math.sqrt first converts its argument to double and then computes the square root via an FPU instruction (on a standard PC).
This situation occurs for numbers b=a^2-1 where a is an integer in the range
67108865 <= a <= 94906265
The square root of b has a series expansion starting with
a - 1/(2*a) - 1/(8*a^3) - ...
If the relative error 1/(2*a^2) falls below the machine epsilon, the closest representable double number is a.
On the other hand for this trick to work one needs that a*a-1.0 is exactly representable in double, which gives the conditions
1/(2*a^2) < mu = 2^(-53) < 1/(a^2)
or
2^52 < a^2 < 2^53
2^26+1=67108865 <= a <= floor(sqrt(2)*2^26)=94906265
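A quick sanity check of the two boundary values (my own snippet; it assumes Math.sqrt is correctly rounded, which IEEE 754 and Java require):
// At both ends of the derived range, a*a - 1 is exactly representable
// and its square root rounds up to exactly a.
for (long a : new long[] { 67108865L, 94906265L }) {
    long b = a * a - 1;
    System.out.println(a + ": " + (Math.sqrt(b) == (double) a)); // true
}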
We can easily get random floating point numbers within a desired range [X,Y) (note that X is inclusive and Y is exclusive) with the function listed below since Math.random() (and most pseudorandom number generators, AFAIK) produce numbers in [0,1):
function randomInRange(min, max) {
  return Math.random() * (max - min) + min;
}
// Notice that we can get "min" exactly but never "max".
How can we get a random number in a desired range inclusive to both bounds, i.e. [X,Y]?
I suppose we could "increment" our value from Math.random() (or equivalent) by "rolling" the bits of an IEEE 754 double-precision floating-point number to put the maximum possible value at 1.0 exactly, but that seems like a pain to get right, especially in languages poorly suited for bit manipulation. Is there an easier way?
(As an aside, why do random number generators produce numbers in [0,1) instead of [0,1]?)
[Edit] Please note that I have no need for this and I am fully aware that the distinction is pedantic. Just being curious and hoping for some interesting answers. Feel free to vote to close if this question is inappropriate.
I believe there is a much better solution, but this one should work :)
function randomInRange(min, max) {
  return Math.random() < 0.5
    ? (1 - Math.random()) * (max - min) + min
    : Math.random() * (max - min) + min;
}
First off, there's a problem in your code: Try randomInRange(0,5e-324) or just enter Math.random()*5e-324 in your browser's JavaScript console.
Even without overflow/underflow/denorms, it's difficult to reason reliably about floating point ops. After a bit of digging, I can find a counterexample:
>>> a=1.0
>>> b=2**-54
>>> rand=a-2*b
>>> a
1.0
>>> b
5.551115123125783e-17
>>> rand
0.9999999999999999
>>> (a-b)*rand+b
1.0
It's easier to explain why this happens with a = 2^53 and b = 0.5: 2^53 - 1 is the next representable number down. The default rounding mode ("round to nearest even") rounds 2^53 - 0.5 up (because 2^53 is "even" [LSB = 0] and 2^53 - 1 is "odd" [LSB = 1]), so you subtract b and get 2^53, multiply to get 2^53 - 1, and add b to get 2^53 again.
To answer your second question: Because the underlying PRNG almost always generates a random number in the interval [0, 2^n - 1], i.e. it generates random bits. It's very easy to pick a suitable n (the bits of precision in your floating point representation), divide by 2^n, and get a predictable distribution. Note that there are some numbers in [0,1) that you will never generate using this method (anything in (0, 2^-53) with IEEE doubles).
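As a concrete illustration in Java (Java's Random.nextDouble() is documented to use this very construction), here is a minimal sketch of turning raw PRNG bits into a uniform double in [0,1):
import java.util.Random;

public class BitsToDouble {
    public static void main(String[] args) {
        Random rng = new Random();
        // Keep 53 random bits (a double's significand precision) and scale
        // by 2^-53; the result is uniform in [0, 1).
        long bits = rng.nextLong() >>> 11;
        double d = bits * 0x1.0p-53;
        // The largest possible value is (2^53 - 1) / 2^53, so 1.0 never occurs.
        System.out.println(d);
    }
}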
It also means that you can do a[Math.floor(Math.random()*a.length)] and not worry about overflow (homework: In IEEE binary floating point, prove that b < 1 implies a*b < a for positive integer a).
The other nice thing is that you can think of each random output x as representing an interval [x, x + 2^-53) (the not-so-nice thing is that the average value returned is slightly less than 0.5). If you return in [0,1], do you return the endpoints with the same probability as everything else, or should they only have half the probability because they only represent half the interval of everything else?
To answer the simpler question of returning a number in [0,1], the method below effectively generates an integer in [0, 2^n] (by generating an integer in [0, 2^(n+1) - 1] and throwing it away if it's too big) and divides by 2^n:
function randominclusive() {
  // Generate a random "top bit". Is it set?
  while (Math.random() >= 0.5) {
    // Generate the rest of the random bits. Are they zero?
    // If so, then we've generated 2^n, and dividing by 2^n gives us 1.
    if (Math.random() == 0) { return 1.0; }
    // If not, generate a new random number.
  }
  // If the top bit is not set, just divide by 2^n.
  return Math.random();
}
The comments imply base 2, but I think the assumptions are thus:
0 and 1 should be returned equiprobably (i.e. the Math.random() doesn't make use of the closer spacing of floating point numbers near 0).
Math.random() >= 0.5 with probability 1/2 (should be true for even bases)
The underlying PRNG is good enough that we can do this.
Note that random numbers are always generated in pairs: the one in the while (a) is always followed by either the one in the if or the one at the end (b). It's fairly easy to verify that it's sensible by considering a PRNG that returns either 0 or 0.5:
a=0 b=0 : return 0
a=0 b=0.5: return 0.5
a=0.5 b=0 : return 1
a=0.5 b=0.5: loop
Problems:
The assumptions might not be true. In particular, a common PRNG is to take the top 32 bits of a 48-bit LCG (Firefox and Java do this). To generate a double, you take 53 bits from two consecutive outputs and divide by 2^53, but some outputs are impossible (you can't generate 2^53 outputs with 48 bits of state!). I suspect some of them never return 0 (assuming single-threaded access), but I don't feel like checking Java's implementation right now.
Math.random() is called twice for every potential output as a consequence of needing the extra bit, but this places more constraints on the PRNG (requiring us to reason about four consecutive outputs of the above LCG).
Math.random() is called on average about four times per output. A bit slow.
It throws away results deterministically (assuming single-threaded access), so is pretty much guaranteed to reduce the output space.
My solution to this problem has always been to use the following in place of your upper bound.
Math.nextAfter(upperBound,upperBound+1)
or
upperBound + Double.MIN_VALUE
So your code would look like this:
double myRandomNum = Math.random() * Math.nextAfter(upperBound,upperBound+1) + lowerBound;
or
double myRandomNum = Math.random() * (upperBound + Double.MIN_VALUE) + lowerBound;
The first form moves the upper bound up to the next representable double, so the original upper bound becomes a possible result of the random calculation. (Note: adding Double.MIN_VALUE, as in the second form, only actually changes the bound when the bound is subnormal or zero; for ordinary magnitudes the addition is absorbed by rounding, so prefer the Math.nextAfter form.)
This is a good way to go about it because it does not skew the probabilities in favor of any one number.
The only case where this wouldn't work is when your upper bound is equal to Double.MAX_VALUE.
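For what it's worth, the absorption caveat mentioned above is easy to check (my own snippet):
// Adding Double.MIN_VALUE to a normal double is absorbed by rounding,
// whereas Math.nextUp / Math.nextAfter really returns the next double.
System.out.println(1.0 + Double.MIN_VALUE == 1.0);  // true
System.out.println(Math.nextUp(1.0));               // 1.0000000000000002
System.out.println(Math.nextAfter(1.0, 2.0));       // 1.0000000000000002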
Just pick your half-open interval slightly bigger, so that your chosen closed interval is a subset. Then, keep generating the random variable until it lands in said closed interval.
Example: If you want something uniform in [3,8], then repeatedly regenerate a uniform random variable in [3,9) until it happens to land in [3,8].
function randomInRangeInclusive(min, max) {
  var ret;
  for (;;) {
    ret = min + ( Math.random() * (max - min) * 1.1 );
    if ( ret <= max ) { break; }
  }
  return ret;
}
Note: The number of times you generate the half-open random variable is itself random and potentially infinite, but you can make the expected number of calls as close to 1 as you like, and I don't think there exists a solution that avoids the possibility of infinitely many calls.
Given the "extremely large" number of values between 0 and 1, does it really matter? The chances of actually hitting 1 are tiny, so it's very unlikely to make a significant difference to anything you're doing.
What would be a situation where you would NEED a floating point value to be inclusive of the upper bound? For integers I understand, but for a float, the difference between inclusive and exclusive is what, like 1.0e-32?
Think of it this way. If you imagine that floating-point numbers have arbitrary precision, the chances of getting exactly min are zero. So are the chances of getting max. I'll let you draw your own conclusion on that.
This 'problem' is equivalent to getting a random point on the real line between 0 and 1. There is no 'inclusive' and 'exclusive'.
The question is akin to asking, what is the floating point number right before 1.0? There is such a floating point number, but it is one in 2^24 (for an IEEE float) or one in 2^53 (for a double).
The difference is negligible in practice.
private static double random(double min, double max) {
    final double r = Math.random();
    // Fold the upper half of [0,1) back down: r in [0.5, 1) maps to (0.5, 1.0],
    // making the upper bound max a reachable result.
    return (r >= 0.5d ? 1.5d - r : r) * (max - min) + min;
}
Math.round() can help to include the bound value. If you have 0 <= value < 1 (1 exclusive), then Math.round(value * 100) / 100 returns 0 <= value <= 1 (1 inclusive). Note that the value then has only two digits after the decimal point; if you want three digits, use Math.round(value * 1000) / 1000, and so on. The following function takes one more parameter, the number of digits after the decimal point, which I call precision:
function randomInRange(min, max, precision) {
  return Math.round(Math.random() * Math.pow(10, precision)) /
         Math.pow(10, precision) * (max - min) + min;
}
How about this?
function randomInRange(min, max) {
  var n = Math.random() * (max - min + 0.1) + min;
  return n > max ? randomInRange(min, max) : n;
}
If you get stack overflow on this I'll buy you a present.
--
EDIT: never mind about the present. I got wild with:
randomInRange(0, 0.0000000000000000001)
and got stack overflow.
I am fairly inexperienced, so I am also looking for solutions myself.
This is my rough thought:
Random number generators produce numbers in [0,1) instead of [0,1] because [0,1) is a unit-length interval that can be followed by [1,2) and so on without overlapping.
For random[x, y], you can do this:
static double randomInclusive(double x, double y) {
    // Use the next representable double above y as the exclusive upper bound
    // (the idea of "y + smallest value above zero" made concrete).
    double yExclusive = Math.nextUp(y);
    double result;
    do {
        result = x + Math.random() * (yExclusive - x);
    } while (result > y);
    return result;
}
All values in [x, y] have the same chance of being picked, and you can now reach y.
Generating a "uniform" floating-point number in a range is non-trivial. For example, the common practice of multiplying or dividing a random integer by a constant, or by scaling a "uniform" floating-point number to the desired range, have the disadvantage that not all numbers a floating-point format can represent in the range can be covered this way, and may have subtle bias problems. These problems are discussed in detail in "Generating Random Floating-Point Numbers by Dividing Integers: a Case Study" by F. Goualard.
Just to show how non-trivial the problem is, the following pseudocode generates a random "uniform-behaving" floating-point number in the closed interval [lo, hi], where the number is of the form FPSign * FPSignificand * FPRADIX^FPExponent. The pseudocode below was reproduced from my section on floating-point number generation. Note that it works for any precision and any base (including binary and decimal) of floating-point numbers.
METHOD RNDRANGE(lo, hi)
  losgn = FPSign(lo)
  hisgn = FPSign(hi)
  loexp = FPExponent(lo)
  hiexp = FPExponent(hi)
  losig = FPSignificand(lo)
  hisig = FPSignificand(hi)
  if lo > hi: return error
  if losgn == 1 and hisgn == -1: return error
  if losgn == -1 and hisgn == 1
    // Straddles negative and positive ranges
    // NOTE: Changes negative zero to positive
    mabs = max(abs(lo),abs(hi))
    while true
      ret=RNDRANGE(0, mabs)
      neg=RNDINT(1)
      if neg==0: ret=-ret
      if ret>=lo and ret<=hi: return ret
    end
  end
  if lo == hi: return lo
  if losgn == -1
    // Negative range
    return -RNDRANGE(abs(lo), abs(hi))
  end
  // Positive range
  expdiff=hiexp-loexp
  if loexp==hiexp
    // Exponents are the same
    // NOTE: Automatically handles
    // subnormals
    s=RNDINTRANGE(losig, hisig)
    return s*1.0*pow(FPRADIX, loexp)
  end
  while true
    ex=hiexp
    while ex>MINEXP
      v=RNDINTEXC(FPRADIX)
      if v==0: ex=ex-1
      else: break
    end
    s=0
    if ex==MINEXP
      // Has FPPRECISION or fewer digits
      // and so can be normal or subnormal
      s=RNDINTEXC(pow(FPRADIX,FPPRECISION))
    else if FPRADIX != 2
      // Has FPPRECISION digits
      s=RNDINTEXCRANGE(
        pow(FPRADIX,FPPRECISION-1),
        pow(FPRADIX,FPPRECISION))
    else
      // Has FPPRECISION digits (bits), the highest
      // of which is always 1 because it's the
      // only nonzero bit
      sm=pow(FPRADIX,FPPRECISION-1)
      s=RNDINTEXC(sm)+sm
    end
    ret=s*1.0*pow(FPRADIX, ex)
    if ret>=lo and ret<=hi: return ret
  end
END METHOD
The algorithm below identifies a factor of a small number, but fails completely when given a large one such as 7534534523.0:
double result = 7; // 7534534523.0;
double divisor = 1;
for (int i = 2; i < result; i++) {
    double r = result / (double) i;
    if (Math.floor(r) == r) {
        divisor = i;
        break;
    }
}
System.out.println(result + "/" + divisor + "=" + (result / divisor));
The number 7534534523.0 divided by 2 on a calculator can give a decimal part or round it (losing the 0.5). How can I perform such a check on large numbers? Do I have to use BigDecimal for this? Or is there another way?
If your goal is to represent a number with exactly n significant figures to the right of the decimal, BigDecimal is the class to use.
Immutable, arbitrary-precision signed decimal numbers. A BigDecimal consists of an arbitrary precision integer unscaled value and a 32-bit integer scale. If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. The value of the number represented by the BigDecimal is therefore (unscaledValue × 10^-scale).
Additionally, you can have a better control over scale manipulation, rounding and format conversion.
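A tiny illustration of the unscaled value and scale (my own example):
import java.math.BigDecimal;

BigDecimal d = new BigDecimal("1.23"); // stored as unscaled value 123 with scale 2, i.e. 123 x 10^-2
System.out.println(d.unscaledValue()); // 123
System.out.println(d.scale());         // 2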
I don't see what the problem is in your code. It works exactly like it should.
When I run your code I get this output:
7.534534523E9/77359.0=97397.0
That may have confused you, but its perfectly fine. It's just using scientific notation, but there is nothing wrong with that.
7.534534523E9 = 7.534534523 × 10^9 = 7,534,534,523
If you want to see it in normal notation, you can use System.out.format to print the result:
System.out.format("%.0f/%.0f=%.0f\n", result, divisor, result / divisor);
Shows:
7534534523/77359=97397
But you don't need double or BigDecimal to check if a number is divisible by another number. You can use the modulo operator on integral types to check if one number is divisible by another. As long as your numbers fit in a long, this works, otherwise you can move on to a BigInteger:
long result = 7534534523L;
long divisor = 1;
// Note: the loop variable must be a long; an int counter would overflow
// before reaching values this large.
for (long i = 2; i < result; i++) {
    if (result % i == 0) {
        divisor = i;
        break;
    }
}
System.out.println(result + "/" + divisor + "=" + (result / divisor));
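And if the numbers no longer fit in a long, the same trial division works with BigInteger (a sketch; trial division becomes slow for genuinely large inputs):
import java.math.BigInteger;

BigInteger result = new BigInteger("7534534523");
BigInteger divisor = BigInteger.ONE;
for (BigInteger i = BigInteger.valueOf(2);
        i.compareTo(result) < 0; i = i.add(BigInteger.ONE)) {
    if (result.mod(i).equals(BigInteger.ZERO)) { // divisible?
        divisor = i;
        break;
    }
}
System.out.println(result + "/" + divisor + "=" + result.divide(divisor));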
BigDecimal is the way to go for preserving high precision in numbers.
Do NOT use the constructor BigDecimal(double val), as rounding is performed and the output is not always the same. This is mentioned in the documentation itself:
The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double (or, for that matter, as a binary fraction of any finite length). Thus, the value that is being passed in to the constructor is not exactly equal to 0.1, appearances notwithstanding.
ALWAYS try to use the constructor BigDecimal(String val), as it preserves precision and gives the same output each time.
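To see the difference in practice (the 0.1 example comes straight from the documentation quoted above; BigDecimal.valueOf(double) is a third option that goes through Double.toString):
import java.math.BigDecimal;

System.out.println(new BigDecimal(0.1));
// 0.1000000000000000055511151231257827021181583404541015625
System.out.println(new BigDecimal("0.1"));
// 0.1
System.out.println(BigDecimal.valueOf(0.1));
// 0.1 (uses Double.toString(0.1) internally)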
I am writing a small physics app and need to round numbers. The issue is that it is not a fixed rounding, but rather a variable rounding that depends on the values of the decimal digits. Let me explain:
I always need to keep the whole integer part (if any) and the first five decimal digits (if any).
half up rounding is always used.
21.1521421056 becomes 21.15214
34.1521451056 becomes 34.15215
If the result consists of only decimal digits then:
If the first five digits include non zero digits then keep them.
0.52131125 becomes 0.52131
0.21546874 becomes 0.21547
0.00120012 becomes 0.0012
If the first five decimal digits are all zero (0.00000), then go down to the first five digits that include nonzero digits.
0.0000051234 becomes 0.0000051234
0.000000000000120006130031 becomes 0.00000000000012001
I need to perform this rounding while working with BigDecimal, as that is a requirement for my application.
I think this will work, based on experimentation, if I understand correctly what you want. If d is a BigDecimal that contains the number:
BigDecimal rounded = d.round(new MathContext(
        d.scale() - d.precision() < 5
                ? d.precision() - d.scale() + 5
                : 5));
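Here is how the expression behaves on a few of the question's examples (my own test harness; note that very small results print in scientific notation, which denotes the same value):
import java.math.BigDecimal;
import java.math.MathContext;

for (String s : new String[] { "21.1521421056", "0.00120012",
        "0.000000000000120006130031" }) {
    BigDecimal d = new BigDecimal(s);
    // MathContext's default rounding mode is HALF_UP, as the question requires.
    BigDecimal rounded = d.round(new MathContext(
            d.scale() - d.precision() < 5
                    ? d.precision() - d.scale() + 5
                    : 5));
    System.out.println(s + " -> " + rounded);
}
// 21.1521421056 -> 21.15214
// 0.00120012 -> 0.00120
// 0.000000000000120006130031 -> 1.2001E-13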
Is this what you are looking for?
public static void main(String[] args) {
    double d = 0.000000000000120006130031;
    System.out.println(round(d, 5));
}

private static double round(double d, int precision) {
    double factor = Math.pow(10D, precision);
    long value = (long) d;      // integer part
    double re = d - value;      // fractional part
    // If the first `precision` decimals are all zero (re * factor < 1),
    // shift the factor until the first nonzero digits come into view.
    if (re * factor < 1 && re != 0) {
        while (re * factor <= 0.1) {
            factor *= 10;
        }
        factor *= Math.pow(10D, precision);
    }
    // Math.round gives the required half-up rounding (a plain cast would truncate).
    re = Math.round(re * factor) / factor + value;
    return re;
}
(sorry, it's a little quick & dirty, but you can improve it, if you want)
EDIT:
make it <= in the conditions, this should work better
How, in Java, would you generate a random number that is skewed toward a specific number? For example, I want to generate a number between 1 and 100 inclusive, but skewed toward, say, 75. I still want every number in the range to be possible, but I want a greater chance of getting numbers close to 75 rather than a uniform spread across the whole range. Thanks
Question is a bit old, but if anyone wants to do this without the special case handling, you can use a function like this:
final static public Random RANDOM = new Random(System.currentTimeMillis());

static public double nextSkewedBoundedDouble(double min, double max, double skew, double bias) {
    double range = max - min;
    double mid = min + range / 2.0;
    double unitGaussian = RANDOM.nextGaussian();
    double biasFactor = Math.exp(bias);
    double retval = mid + (range * (biasFactor / (biasFactor + Math.exp(-unitGaussian / skew)) - 0.5));
    return retval;
}
The parameters do the following:
min - the minimum skewed value possible
max - the maximum skewed value possible
skew - the degree to which the values cluster around the mode of the distribution; higher values mean tighter clustering
bias - the tendency of the mode to approach the min, max or midpoint value; positive values bias toward max, negative values toward min
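A hypothetical usage (the parameter values here are illustrative, not tuned): with a positive bias, the returned values cluster above the midpoint of the range.
// Sketch: values stay strictly inside (1, 100) because the logistic term
// is always in (0, 1); positive bias shifts the cluster toward max.
for (int i = 0; i < 5; i++) {
    System.out.println(nextSkewedBoundedDouble(1, 100, 1.0, 0.8));
}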
Try http://download.oracle.com/javase/6/docs/api/java/util/Random.html#nextGaussian()
Random random = new Random();
int value = (int) Math.max(1, Math.min(100, 75 + random.nextGaussian() * stddev));
Pick a stddev like 10 and play around until you get the distribution you want. There are going to be slightly more at 1 and 100 though than at 2 or 99. If you want to change the rate at which it drops off, you can raise the gaussian to a power.