The factorial of 42 goes beyond the maximum value of the long data type in Java. That's why I can't find the digits.
42!
The factorial of 42 has 52 digits, while the maximum value of the long data type in Java is 9,223,372,036,854,775,807, i.e. only 19 digits. But don't worry, Java has a BigInteger class to store large numbers such as 100!. It's a bit slower than primitive data types such as int and long because it stores integers as arrays, but it has no fixed upper limit. There are many ways to use the BigInteger class, but here's the most common one. This code calculates the factorial of 42 and prints it:
// Java program to find large factorials using BigInteger
import java.math.BigInteger;

public class Factorial
{
    // Returns the factorial of N
    static BigInteger factorial(int N)
    {
        BigInteger fact = new BigInteger("1"); // Or BigInteger.ONE
        // Multiply fact by 2, 3, ... N
        for (int i = 2; i <= N; i++)
            fact = fact.multiply(BigInteger.valueOf(i));
        return fact;
    }

    public static void main(String args[])
    {
        int N = 42;
        System.out.println(factorial(N));
    }
}
Output:
1405006117752879898543142606244511569936384000000000
Explanation
We have to import the BigInteger class, which lives in the java.math package. I have named my file Factorial.java, so my class name is Factorial.
In this solution I've put the calculation in its own method; if you want the code without a method, just comment below. Now, look at this statement:
BigInteger fact = new BigInteger("1");
This creates fact as a BigInteger equal to 1. In the for loop, i starts at 2, since multiplying by 1 would change nothing (1*1 = 1).
fact = fact.multiply(BigInteger.valueOf(i));
This is the syntax for multiplying BigIntegers: it multiplies the BigInteger fact by i and assigns the result back to fact.
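One gotcha worth a quick illustration (my example, not part of the original code): BigInteger is immutable, so multiply returns a new object and the result must be assigned back.
BigInteger f = BigInteger.ONE;
f.multiply(BigInteger.TEN);     // result is discarded; f is still 1
f = f.multiply(BigInteger.TEN); // f is now 10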
Have a look at this GeeksforGeeks article- https://www.geeksforgeeks.org/biginteger-class-in-java/
If you only care about the number of digits, I would recommend taking a more mathematical approach. There are ways to compute this number without actually computing the factorial itself. This would not require so big a variable and would be a lot faster.
You could think of it this way:
Digits(n!) = floor(log10(n!)) + 1 = floor(log10(n * (n-1) * ... * 1)) + 1 = floor(\sum_{i=1}^{n} log10(i)) + 1
This would still require iteration, but it deals with much smaller numbers.
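For instance, here is a small Java sketch of that summation (my code, not part of the original answer):
// Count the digits of n! by summing log10(i) -- no BigInteger needed.
static int factorialDigits(int n) {
    double sum = 0;
    for (int i = 2; i <= n; i++) {
        sum += Math.log10(i);
    }
    return (int) Math.floor(sum) + 1;
}
// factorialDigits(42) returns 52.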
If you still want O(1) complexity for this task, you can go with a pretty good approximation I've just tried.
Digits(n!) ~ floor(\int_{1}^{n} log10(x) dx) + 1 = floor(\frac{n*ln(n) - n + 1}{ln(10)}) + 1
Of course, the latter is not absolutely exact, since we are now integrating a continuous function rather than summing. However, it is probably worth implementing.
Digits(42!) = floor(50.37...) + 1 = 50 + 1 = 51, whereas the exact count is 52. The approximation never overestimates (the integral of an increasing function never exceeds the corresponding sum), so it can only be off on the low side.
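If it helps, a Java sketch of the closed-form version (again my code, with the same off-by-one caveat):
// Approximate the digit count of n! from the integral of log10(x).
static int factorialDigitsApprox(int n) {
    double integral = (n * Math.log(n) - n + 1) / Math.log(10);
    return (int) Math.floor(integral) + 1;
}
// factorialDigitsApprox(42) returns 51; the exact count is 52.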
I wrote this simple code just out of curiosity and encountered some behavior of the nextInt() method from the Java Random class that I don't quite understand. Can anyone help me figure it out?
The program simulates simple coin flipping. As far as I understand, nextInt(101) should produce values less than or equal to 49 and values greater than 49 with equal probability.
But as I increase the number of iterations, the balance tends to be positive; for example, after 100,000 iterations I never got a negative number. Why does this happen?
public static void main(String[] args) {
    int balance = 0;
    for (int i = 0; i < 100000; i++) {
        Random random = new Random();
        int result = random.nextInt(101);
        if (result > 49) {
            balance++;
        } else {
            balance--;
        }
    }
    System.out.println("Player's balance = " + balance);
}
You call int result = random.nextInt(101) which creates uniformly distributed integers in [0,100], which can take 101 different values. If you check if (result > 49) then you have 51 possible values ([50,100]) and in the else case you have only 50 values ([0,49]). Thus the result is more likely to be in the upper part. To fix it you can do int result = random.nextInt(100).
You are testing 51 possibilities for a positive outcome and only 50 possibilities for a negative outcome:
50 through 100 → 51 possibilities
0 through 49 → 50 possibilities
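A minimal corrected version of the program (my sketch, using nextBoolean() so there is no off-by-one at all):
import java.util.Random;

public class FairCoin {
    public static void main(String[] args) {
        Random random = new Random(); // create once, outside the loop
        int balance = 0;
        for (int i = 0; i < 100000; i++) {
            if (random.nextBoolean()) { // a fair 50/50 coin
                balance++;
            } else {
                balance--;
            }
        }
        System.out.println("Player's balance = " + balance);
    }
}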
If you try random.nextInt(99) instead, the result will be different; I got a negative value many times.
The reason behind this lies in the random.nextInt(int) method itself. Quoting from the JavaDoc:
The algorithm is slightly tricky. It rejects values that would result
in an uneven distribution (due to the fact that 2^31 is not divisible
by n). The probability of a value being rejected depends on n. The
worst case is n=2^30+1, for which the probability of a reject is 1/2,
and the expected number of iterations before the loop terminates is 2.
Please see here Random.java#nextInt(int)
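For illustration, the non-power-of-two path of nextInt(bound) boils down to a rejection loop roughly like this (a simplified paraphrase, not the verbatim JDK source; next(31) stands for the generator's raw 31-bit output):
int nextIntBounded(int bound) {
    int u, r;
    do {
        u = next(31);      // raw 31-bit random value
        r = u % bound;     // candidate result
    } while (u - r + (bound - 1) < 0); // overflows only when u fell into the
                                       // truncated final block: reject and retry
    return r;
}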
We can easily get random floating point numbers within a desired range [X,Y) (note that X is inclusive and Y is exclusive) with the function listed below, since Math.random() (and most pseudorandom number generators, AFAIK) produces numbers in [0,1):
function randomInRange(min, max) {
  return Math.random() * (max - min) + min;
}
// Notice that we can get "min" exactly but never "max".
How can we get a random number in a desired range inclusive to both bounds, i.e. [X,Y]?
I suppose we could "increment" our value from Math.random() (or equivalent) by "rolling" the bits of an IEEE 754 double-precision floating point number to put the maximum possible value at 1.0 exactly, but that seems like a pain to get right, especially in languages poorly suited for bit manipulation. Is there an easier way?
(As an aside, why do random number generators produce numbers in [0,1) instead of [0,1]?)
[Edit] Please note that I have no need for this and I am fully aware that the distinction is pedantic. Just being curious and hoping for some interesting answers. Feel free to vote to close if this question is inappropriate.
I believe there is a much better solution, but this one should work :)
function randomInRange(min, max) {
  return Math.random() < 0.5
    ? (1 - Math.random()) * (max - min) + min
    : Math.random() * (max - min) + min;
}
First off, there's a problem in your code: Try randomInRange(0,5e-324) or just enter Math.random()*5e-324 in your browser's JavaScript console.
Even without overflow/underflow/denorms, it's difficult to reason reliably about floating point ops. After a bit of digging, I can find a counterexample:
>>> a=1.0
>>> b=2**-54
>>> rand=a-2*b
>>> a
1.0
>>> b
5.551115123125783e-17
>>> rand
0.9999999999999999
>>> (a-b)*rand+b
1.0
It's easier to explain why this happens with a = 2^53 and b = 0.5: 2^53 - 1 is the next representable number down. The default rounding mode ("round to nearest even") rounds 2^53 - 0.5 up (because 2^53 is "even" [LSB = 0] and 2^53 - 1 is "odd" [LSB = 1]), so you subtract b and get 2^53, multiply to get 2^53 - 1, and add b to get 2^53 again.
To answer your second question: because the underlying PRNG almost always generates a random number in the interval [0, 2^n - 1], i.e. it generates random bits. It's very easy to pick a suitable n (the bits of precision in your floating point representation) and divide by 2^n to get a predictable distribution. Note that there are some numbers in [0,1) that you will never generate using this method (anything in (0, 2^-53) with IEEE doubles).
It also means that you can do a[Math.floor(Math.random()*a.length)] and not worry about overflow (homework: In IEEE binary floating point, prove that b < 1 implies a*b < a for positive integer a).
The other nice thing is that you can think of each random output x as representing an interval [x, x + 2^-53) (the not-so-nice thing is that the average value returned is slightly less than 0.5). If you return in [0,1], do you return the endpoints with the same probability as everything else, or should they only have half the probability, because they only represent half the interval that everything else does?
To answer the simpler question of returning a number in [0,1], the method below effectively generates an integer in [0, 2^n] (by generating an integer in [0, 2^(n+1) - 1] and throwing it away if it's too big) and divides by 2^n:
function randominclusive() {
  // Generate a random "top bit". Is it set?
  while (Math.random() >= 0.5) {
    // Generate the rest of the random bits. Are they zero?
    // If so, then we've generated 2^n, and dividing by 2^n gives us 1.
    if (Math.random() == 0) { return 1.0; }
    // If not, generate a new random number.
  }
  // If the top bit is not set, just divide by 2^n.
  return Math.random();
}
The comments imply base 2, but I think the assumptions are thus:
- 0 and 1 should be returned equiprobably (i.e. the Math.random() doesn't make use of the closer spacing of floating point numbers near 0).
- Math.random() >= 0.5 with probability 1/2 (should be true for even bases).
- The underlying PRNG is good enough that we can do this.
Note that random numbers are always generated in pairs: the one in the while (a) is always followed by either the one in the if or the one at the end (b). It's fairly easy to verify that it's sensible by considering a PRNG that returns either 0 or 0.5:
a=0 b=0 : return 0
a=0 b=0.5: return 0.5
a=0.5 b=0 : return 1
a=0.5 b=0.5: loop
Problems:
The assumptions might not be true. In particular, a common PRNG is to take the top 32 bits of a 48-bit LCG (Firefox and Java do this). To generate a double, you take 53 bits from two consecutive outputs and divide by 2^53, but some outputs are impossible (you can't generate 2^53 outputs with 48 bits of state!). I suspect some of them never return 0 (assuming single-threaded access), but I don't feel like checking Java's implementation right now.
Math.random() is called twice for every potential output as a consequence of needing to get the extra bit, but this places more constraints on the PRNG (requiring us to reason about four consecutive outputs of the above LCG).
Math.random() is called on average about four times per output. A bit slow.
It throws away results deterministically (assuming single-threaded access), so is pretty much guaranteed to reduce the output space.
My solution to this problem has always been to use the following in place of your upper bound.
Math.nextAfter(upperBound,upperBound+1)
or
upperBound + Double.MIN_VALUE
So your code would look like this:
double myRandomNum = Math.random() * Math.nextAfter(upperBound,upperBound+1) + lowerBound;
or
double myRandomNum = Math.random() * (upperBound + Double.MIN_VALUE) + lowerBound;
This advances your upper bound to the next representable double (one ULP up), so that your upper bound will be included as a possibility in the random calculation. One caveat: upperBound + Double.MIN_VALUE only actually changes the bound when upperBound is subnormal; at ordinary magnitudes the addition is absorbed by rounding, so the Math.nextAfter form is the reliable one.
This is a good way to go about it because it does not skew the probabilities in favor of any one number.
The only case this wouldn't work is where your upper bound is equal to Double.MAX_VALUE
Just pick your half-open interval slightly bigger, so that your chosen closed interval is a subset. Then, keep generating the random variable until it lands in said closed interval.
Example: If you want something uniform in [3,8], then repeatedly regenerate a uniform random variable in [3,9) until it happens to land in [3,8].
function randomInRangeInclusive(min, max) {
  var ret;
  for (;;) {
    ret = min + ( Math.random() * (max - min) * 1.1 );
    if ( ret <= max ) { break; }
  }
  return ret;
}
Note: the number of times you generate the half-open R.V. is random and potentially unbounded, but you can make the expected number of calls as close to 1 as you like, and I don't think there exists a solution that never risks calling infinitely many times.
Given the "extremely large" number of values between 0 and 1, does it really matter? The chances of actually hitting 1 are tiny, so it's very unlikely to make a significant difference to anything you're doing.
What would be a situation where you would NEED a floating point value to be inclusive of the upper bound? For integers I understand, but for a float, the difference between inclusive and exclusive is what, like 1.0e-32?
Think of it this way. If you imagine that floating-point numbers have arbitrary precision, the chances of getting exactly min are zero. So are the chances of getting max. I'll let you draw your own conclusion on that.
This 'problem' is equivalent to getting a random point on the real line between 0 and 1. There is no 'inclusive' and 'exclusive'.
The question is akin to asking, what is the floating point number right before 1.0? There is such a floating point number, but it is one in 2^24 (for an IEEE float) or one in 2^53 (for a double).
The difference is negligible in practice.
private static double random(double min, double max) {
    final double r = Math.random();
    return (r >= 0.5d ? 1.5d - r : r) * (max - min) + min;
}
Math.round() will help to include the bound value. If you have 0 <= value < 1 (1 is exclusive), then Math.round(value * 100) / 100 returns 0 <= value <= 1 (1 is inclusive). A caveat is that the value now has only 2 digits after the decimal point. If you want 3 digits, try Math.round(value * 1000) / 1000 and so on. The following function has one more parameter, the number of digits after the decimal point, which I call precision:
function randomInRange(min, max, precision) {
  return Math.round(Math.random() * Math.pow(10, precision)) /
         Math.pow(10, precision) * (max - min) + min;
}
How about this?
function randomInRange(min, max) {
  var n = Math.random() * (max - min + 0.1) + min;
  return n > max ? randomInRange(min, max) : n;
}
If you get stack overflow on this I'll buy you a present.
--
EDIT: never mind about the present. I got wild with:
randomInRange(0, 0.0000000000000000001)
and got stack overflow.
I am fairly inexperienced, so I am also looking for solutions myself.
This is my rough thought:
Random number generators produce numbers in [0,1) instead of [0,1] because [0,1) is a unit-length interval that can be followed by [1,2) and so on without overlapping.
For random[x, y], you can do this:
float randomInclusive(x, y){
    float MIN = smallest_value_above_zero;
    float result;
    do {
        result = random(x, (y + MIN));
    } while (result > y);
    return result;
}
Where every value in [x, y] has the same probability of being picked, and y can now be reached.
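In Java, the same idea can be written with Math.nextUp, which plays the role of "y + smallest value above zero" (my adaptation, not the poster's code):
// Draw from the half-open range [x, nextUp(y)) and reject the rare overshoot,
// so that y itself becomes reachable.
static double randomInclusive(double x, double y) {
    double r;
    do {
        r = x + Math.random() * (Math.nextUp(y) - x);
    } while (r > y);
    return r;
}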
Generating a "uniform" floating-point number in a range is non-trivial. For example, the common practice of multiplying or dividing a random integer by a constant, or by scaling a "uniform" floating-point number to the desired range, have the disadvantage that not all numbers a floating-point format can represent in the range can be covered this way, and may have subtle bias problems. These problems are discussed in detail in "Generating Random Floating-Point Numbers by Dividing Integers: a Case Study" by F. Goualard.
Just to show how non-trivial the problem is, the following pseudocode generates a random "uniform-behaving" floating-point number in the closed interval [lo, hi], where the number is of the form FPSign * FPSignificand * FPRADIX^FPExponent. The pseudocode below was reproduced from my section on floating-point number generation. Note that it works for any precision and any base (including binary and decimal) of floating-point numbers.
METHOD RNDRANGE(lo, hi)
  losgn = FPSign(lo)
  hisgn = FPSign(hi)
  loexp = FPExponent(lo)
  hiexp = FPExponent(hi)
  losig = FPSignificand(lo)
  hisig = FPSignificand(hi)
  if lo > hi: return error
  if losgn == 1 and hisgn == -1: return error
  if losgn == -1 and hisgn == 1
    // Straddles negative and positive ranges
    // NOTE: Changes negative zero to positive
    mabs = max(abs(lo), abs(hi))
    while true
      ret = RNDRANGE(0, mabs)
      neg = RNDINT(1)
      if neg == 0: ret = -ret
      if ret >= lo and ret <= hi: return ret
    end
  end
  if lo == hi: return lo
  if losgn == -1
    // Negative range
    return -RNDRANGE(abs(lo), abs(hi))
  end
  // Positive range
  expdiff = hiexp - loexp
  if loexp == hiexp
    // Exponents are the same
    // NOTE: Automatically handles
    // subnormals
    s = RNDINTRANGE(losig, hisig)
    return s * 1.0 * pow(FPRADIX, loexp)
  end
  while true
    ex = hiexp
    while ex > MINEXP
      v = RNDINTEXC(FPRADIX)
      if v == 0: ex = ex - 1
      else: break
    end
    s = 0
    if ex == MINEXP
      // Has FPPRECISION or fewer digits
      // and so can be normal or subnormal
      s = RNDINTEXC(pow(FPRADIX, FPPRECISION))
    else if FPRADIX != 2
      // Has FPPRECISION digits
      s = RNDINTEXCRANGE(
        pow(FPRADIX, FPPRECISION - 1),
        pow(FPRADIX, FPPRECISION))
    else
      // Has FPPRECISION digits (bits), the highest
      // of which is always 1 because it's the
      // only nonzero bit
      sm = pow(FPRADIX, FPPRECISION - 1)
      s = RNDINTEXC(sm) + sm
    end
    ret = s * 1.0 * pow(FPRADIX, ex)
    if ret >= lo and ret <= hi: return ret
  end
END METHOD
I have a question about this problem, and any help would be great!
Write a program that takes one integer N as an
argument and prints out its truncated binary logarithm [log2 N]. Hint: [log2 N] = l is the largest integer l such that
2^l <= N.
I got this much down:
int N = Integer.parseInt(args[0]);
double l = Math.log(N) / Math.log(2);
double a = Math.pow(2, l);
But I can't figure out how to truncate l while keeping 2^l <= N
Thanks
This is what I have now:
int N = Integer.parseInt(args[0]);
int i = 0; // loop control counter
int v = 1; // current power of two
while (Math.pow(2, i) <= N) {
    i = i + 1;
    v = 2 * v;
}
System.out.println(Integer.highestOneBit(N));
This prints out the integer that is equal to 2^i, which is less than or equal to N. My test still comes out false, and I think that is because the question asks to print the largest i rather than the N. So when I do
Integer.highestOneBit(i)
the correct i does not print out. For example, if I set N = 38, then the highest i should be 5, but instead it prints out 4.
Then I tried this:
int N = Integer.parseInt(args[0]);
int i; // loop control counter
for (i= 0; Math.pow(2 , i) == N; i++) {
}
System.out.println(Integer.highestOneBit(i));
where if I make N = 2, i should come out as 1, but instead it prints 0.
I've tried a bunch of things on top of that, but can't figure out what I am doing wrong. Help would be greatly appreciated. Thanks
I believe the answer you're looking for here is based on the underlying notion of how a number is actually stored in a computer, and how that can be used to your advantage in a problem such as this.
Numbers in a computer are stored in binary - a series of ones and zeros where each column represents a power of 2:
[Image: binary place values, from http://www.mathincomputers.com/binary.html - see for more info on binary]
The zeroth power of 2 is over on the right. So, 01001, for example, represents the decimal value 2^0 + 2^3; 9.
This storage format, interestingly, gives us some additional information about the number. We can see that 2^3 is the highest power of 2 that 9 is made up of. Let's imagine it's the only power of two it contains, by chopping off all the other 1's except the highest. This is a truncation, and results in this:
01000
You'll now notice this value represents 8, or 2^3. Taking it down to basics, let's now look at what log base 2 really represents: it's the number you raise 2 to the power of to get the thing you're finding the log of. log2(8) is 3. Can you see the pattern emerging here?
The position of the highest bit can be used as an approximation to its log base 2 value.
2^3 is the 3rd bit over in our example, so a truncated approximation to log base 2 of 9 is 3.
So the truncated binary logarithm of 9 is 3. 2^3 is less than 9; this is where the "less than" comes from, and the algorithm to find its value simply involves finding the position of the highest bit that makes up the number.
Some more examples:
12 = 1100. Position of the highest bit = 3 (starting from zero on the right). Therefore the truncated binary logarithm of 12 = 3. 2^3 is <= 12.
38 = 100110. Position of the highest bit = 5. Therefore the truncated binary logarithm of 38 = 5. 2^5 is <= 38.
This level of pushing bits around is known as bitwise operations in Java.
Integer.highestOneBit(n) returns essentially the truncated value. So if n was 9 (1001), highestOneBit(9) returns 8 (1000), which may be of use.
A simple way of finding the position of that highest bit of a number involves doing a bitshift until the value is zero. Something a little like this:
// Input number - 1001:
int n = 9;
// Start at -1 so that 'position' ends up as the zero-based index
// of the highest set bit (the loop runs once per bit).
int position = -1;
// Cache the input number - the loop destroys it.
int originalN = n;
while (n != 0) {
    position++; // Also position = position + 1;
    n = n >> 1; // Shift the bits over one spot (overwriting n).
    // 1001 becomes 0100, then 0010, then 0001, then 0000 on each iteration.
    // Hopefully you can then see that n is zero when we've
    // pushed all the bits off.
}
// Position is now the index of the highest bit that was set.
// In your case, this is also the value of your truncated binary log:
// for n = 9 it prints 3.
System.out.println("Binary log of " + originalN + " is " + position);
This question was asked in my interview.
random(0,1) is a function that generates the integers 0 and 1 at random.
Using this function, how would you design a function that takes two integers a, b as input and generates random integers between a and b, inclusive?
I have no idea how to solve this.
We can do this easily with bit logic (e.g., a = 4, b = 10):
Calculate the difference b - a (for the given example, 6).
Now calculate ceil(log2(b - a + 1)), i.e. the number of bits required to represent all numbers between a and b.
Now call random(0,1) once for each bit (for the given example, the result will be between 000 and 111).
Repeat step 3 until the number (say num) is between 000 and 110 inclusive; we need only 7 states, since b - a + 1 is 7, so the possible results are a, a+1, a+2, ..., a+6, which is b.
Return num + a.
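Here is a Java sketch of those steps (my code; the names are illustrative):
import java.util.Random;

public class RandomInRange {
    static final Random RNG = new Random();

    // Stand-in for the interview's random(0,1) primitive.
    static int random01() {
        return RNG.nextInt(2);
    }

    // Uniform integer in [a, b] built only from random01().
    static int random(int a, int b) {
        int n = b - a + 1;                                   // size of the range
        int bits = 32 - Integer.numberOfLeadingZeros(n - 1); // ceil(log2(n))
        int num;
        do {
            num = 0;
            for (int i = 0; i < bits; i++) {
                num = (num << 1) | random01();  // step 3: one call per bit
            }
        } while (num >= n);                     // step 4: reject out-of-range values
        return a + num;                         // step 5
    }

    public static void main(String[] args) {
        System.out.println(random(4, 10)); // the answer's example: a=4, b=10
    }
}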
I hate this kind of interview question, because there are answers that technically fulfill it, but the interviewer will be pretty mad if you use them. For example:
Call random,
if you obtain 0, output a
if you obtain 1, output b
A more sophisticated answer, and probably what the interviewer wants, is:
init(a, b){
    n = b - a + 1            // size of the target range
    d = ceil(log2(n))        // so we know how many bits we need to cover both a and b
}

Random(){
    int r;
    do {
        r = 0;
        for(int i = 0; i < d; i++)
            r = (r << 1) | Random01();   // build a d-bit random number
    } while(r >= n);                     // reject values outside the range
    return a + r;
}
You can generate random strings of 0 and 1 by successively calling the sub function.
So we have randomBit() returning 0 or 1 independently, uniformly at random and we want a function random(a, b) that returns a value in the range [a,b] uniformly at random. Let's actually make that the range [a, b) because half-open ranges are easier to work with and equivalent. In fact, it is easy to see that we can just consider the case where a == 0 (and b > 0), i.e. we just want to generate a random integer in the range [0, b).
Let's start with the simple answer suggested elsewhere. (Forgive me for using c++ syntax, the concept is the same in Java)
int random2n(int n) {
    return n ? randomBit() + (random2n(n - 1) << 1) : 0;
}

int random(int b) {
    int n = ceil(log2(b)), v;
    while ((v = random2n(n)) >= b);
    return v;
}
That is, it is easy to generate a value in the range [0, 2^n) given randomBit(). So to get a value in [0, b), we repeatedly generate something in the range [0, 2^ceil(log2(b))) until we get something in the correct range. It is rather trivial to show that this selects from the range [0, b) uniformly at random.
As stated before, the worst case expected number of calls to randomBit() for this is (1 + 1/2 + 1/4 + ...) ceil(log2(b)) = 2 ceil(log2(b)). Most of those calls are a waste, we really only need log2(n) bits of entropy and so we should try to get as close to that as possible. Even a clever implementation of this that calculates the high bits early and bails out as soon as it exits the wanted range has the same expected number of calls to randomBit() in the worst case.
We can devise a more efficient (in terms of calls to randomBit()) method quite easily. Let's say we want to generate a number in the range [0, b). With a single call to randomBit(), we should be able to approximately cut our target range in half. In fact, if b is even, we can do that. If b is odd, we will have a (very) small chance that we have to "re-roll". Consider the function:
int random(int b) {
    if (b < 2) return 0;
    int mid = (b + 1) / 2, ret = b;
    while (ret == b) {
        ret = (randomBit() ? mid : 0) + random(mid);
    }
    return ret;
}
This function essentially uses each random bit to select between two halves of the wanted range and then recursively generates a value in that half. While the function is fairly simple, the analysis of it is a bit more complex. By induction one can prove that this generates a value in the range [0, b) uniformly at random. Also, it can be shown that, in the worst case, this is expected to require ceil(log2(b)) + 2 calls to randomBit(). When randomBit() is slow, as may be the case for a true random generator, this is expected to waste only a constant number of calls rather than a linear amount as in the first solution.
function randomBetween(a, b){
    var x = b - a; // assuming a is smaller than b
    var rand = random(); // uniform in [0,1)
    return a + Math.floor(rand * (x + 1)); // each of the x+1 integers in [a, b] is equally likely
}