I was curious to know why random_number=a+rand()%b; in C produces a random number between a and b (non-inclusive of b), but in Java this code will not work. I understand that the correct way to do this in Java is random_number=a+(Math.random()*(b-a));, but why is there a difference? Isn't it the same operation mathematically? I also understand that the return types of the two random functions are different, but how does that explain the difference in output? Sorry if this seems like a trivial question, but I was curious nonetheless.
The difference lies in what each random generator does.
In Java, Math.random() returns a pseudo-random number in the range 0 (inclusive) through 1 (exclusive). So it must be scaled up by b - a, the width of the range, then shifted by a, the start of the range.
In C, rand() returns numbers in the range 0 to RAND_MAX (my RAND_MAX is 32767), so %b is used to control the width of the range, and the start of the range is shifted by a.
One can, of course, use java.util.Random's nextInt() or nextInt(int n) methods to more closely mimic the C scheme. nextInt() is almost exactly the same as rand() (though hopefully better distributed), while nextInt(int n) effectively subsumes the % n calculation.
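For example, here is a minimal sketch (not from the original post) of the C-style idiom written with java.util.Random; a and b are just placeholder values:

import java.util.Random;

Random rnd = new Random();
int a = 5, b = 10;                      // example values only
int randomNumber = a + rnd.nextInt(b);  // nextInt(b) is uniform over [0, b), so this lands in [a, a + b)
System.out.println(randomNumber);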
In Java it is nextInt():
int nextInt(int n)
This function returns a random number between 0 (inclusive) and n (exclusive).
In C it is rand()
int rand(void)
This function returns an integer value between 0 and RAND_MAX (inclusive).
I have code that raises a number to an exponent taken from input:
BigInteger a = new BigInteger("2", 10);
BigInteger b;
b = a.pow(9999999999);
It works when the exponent has fewer digits. For example:
BigInteger a = new BigInteger("2", 10);
BigInteger b;
b = a.pow(1234567);
Is it possible to have a 10-digit exponent with my code, or is it not possible at all?
I'm using JDK 1.8.
BigInteger.pow() only takes an int parameter, so you can't raise to a power bigger than Integer.MAX_VALUE in a single call.
Those numbers would also be incredibly big (as in "rapidly approaching and passing the number of particles in the observable universe") if you could compute them, and there are very few uses for this.
Note that the "power with modulus" operation which is often used in cryptography is implemented using BigInteger.modPow() which does take BigInteger arguments and can therefore handle effectively arbitrarily large values.
pow's parameter is an int. The range of int is -2147483648 to 2147483647, so the answer is: it depends on which 10 digits you use. If those 10 digits are 1234567890, that's fine; it's in range (though you may get a really, really big BigInteger that pushes your memory limits). If they're 9999999999 as in your question, that's not fine; it's out of range.
E.g., this compiles:
int a = 1234567890;
This does not:
int b = 9999999999;
^--------------- error: integer number too large: 9999999999
I was wondering how to have inclusive random numbers in Clojure. I came up with this:
(import java.util.Random)
(def rnd
  (Random.))

(defn random-float
  [min max rnd]
  (+ min (* (.nextDouble rnd) (- max min))))
It seems that Java only has inclusive/exclusive random functions. According to the documentation:
The general contract of nextDouble is that one double value, chosen
(approximately) uniformly from the range 0.0d (inclusive) to 1.0d
(exclusive), is pseudorandomly generated and returned.
Is there a way to have an inclusive/inclusive version of this function? I was thinking about increasing the maximum value by a tiny bit (0.0000000000000001), but I'm not sure what the impact would be if I did so.
Would this work?
(random-float 0.0 1.0000000000000001 rnd)
How important is the double precision for your case?
You could use integers for everything with clojure.core's rand-int, e.g.:
for a number between 0-100: (rand-int 101)
for a number between 0-1000: (rand-int 1001)
etc.
I'd also note that there are other Clojure rand-style functions, such as random-sample and rand; if rand-int isn't what you're looking for, you still probably won't need to fall back to Java.
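If you do want to stay with doubles, here is a minimal Java sketch of the asker's own nudge-the-upper-bound idea, using Math.nextUp rather than a hand-written epsilon (the Clojure translation is direct; this is an assumption about what "inclusive" should mean, not a canonical recipe):

import java.util.Random;

// Treat Math.nextUp(max) as the exclusive upper bound so that max itself can (rarely) be returned;
// the added bias is vanishingly small for typical uses.
double randomInclusive(Random rnd, double min, double max) {
    double candidate = min + rnd.nextDouble() * (Math.nextUp(max) - min);
    return Math.min(candidate, max); // guard against rounding pushing past max
}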
How do I safely generate a random integer value in a specific range?
I know many people have asked this before, this post for example, but the method doesn't seem to be safe. Let me explain:
The 'Math' library has Math.random() which generates a random value in the range [0, 1). Using that, one can construct an algorithm like
int randomInteger = (int) Math.floor(Math.random() * (Integer.MAX_VALUE - Integer.MIN_VALUE + 1) + Integer.MIN_VALUE);
to generate a random number between Integer.MIN_VALUE and Integer.MAX_VALUE. However, Integer.MAX_VALUE - Integer.MIN_VALUE will overflow.
The goal is not merely to generate random numbers but to generate them uniformly, meaning 1 has the same probability of appearing as Integer.MAX_VALUE. I know there are workarounds, such as casting to long, but then the problem becomes how to generate a long value from Long.MIN_VALUE to Long.MAX_VALUE.
I'm also not sure about other pre-written algorithms, as they can overflow too and skew the probability distribution. So my question is whether there is a mathematical expression that uses only ints (no casting to long anywhere) and Math.random() to generate random numbers from Integer.MIN_VALUE to Integer.MAX_VALUE, or whether anyone knows of a random generator that doesn't overflow internally.
The 'Math' library has Math.random(), which generates a random value in the [0, 1) range.
So don't use Math.random() - use this:
Random r = new Random();
int i = r.nextInt();
The docs for nextInt say:
nextInt() - Returns the next pseudorandom, uniformly distributed int value from this random number generator's sequence. All 2^32 possible int values are produced with (approximately) equal probability.
It appears I misread the question slightly, and you need long not int - luckily the contract is the same.
long l = r.nextLong();
This will quite literally take two ints and jam them together to make a long.
Still better might be to use
java.security.SecureRandom which is a cryptographically strong random number generator (RNG).
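For instance (a small sketch; SecureRandom extends Random, so the calls look the same):

import java.security.SecureRandom;

SecureRandom sr = new SecureRandom();
int i = sr.nextInt();    // uniformly distributed over all 2^32 int values
long l = sr.nextLong();  // uniformly distributed over all 2^64 long values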
Random x = new Random(1000); // 1000 is the seed, so every run produces the same sequence
int i = x.nextInt();
I know that in Java (and probably other languages), Math.pow is defined on doubles and returns a double. I'm wondering why on earth the folks who wrote Java didn't also write an int-returning pow(int, int) method, which seems to this mathematician-turned-novice-programmer like a forehead-slapping (though obviously easily fixable) omission. I can't help but think that there's some behind-the-scenes reason based on the intricacies of CS that I just don't know, because otherwise... huh?
On a similar topic, ceil and floor by definition return integers, so how come they don't return ints?
Thanks to all for helping me understand this. It's totally minor, but has been bugging me for years.
java.lang.Math is just a port of what the C math library does.
For C, I think it comes down to the fact that CPUs have special instructions to do Math.pow for floating point numbers (but not for integers).
Of course, the language could still add an int implementation. BigInteger has one, in fact. It makes sense there, too, because pow tends to result in rather big numbers.
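For instance (a quick sketch, not from the original answer):

import java.math.BigInteger;

BigInteger big = BigInteger.valueOf(2).pow(100);
System.out.println(big); // 1267650600228229401496703205376 - exact, no overflow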
ceil and floor by definition return integers, so how come they don't return ints
Floating point numbers can represent integers outside of the range of int. So if you take a double argument that is too big to fit into an int, there is no good way for floor to deal with it.
From a mathematical perspective, you're going to overflow your int if the result is larger than 2^31 - 1, and overflow your long if it's larger than 2^63 - 1. It doesn't take much to overflow it, either.
Doubles are nice in that they can represent numbers from ~10^-308 to ~10^308 with 53 bits of precision. There may be some fringe conversion issues (such as the next full integer in a double may not exactly be representable), but by and large you're going to get a much larger range of numbers than you would if you strictly dealt with integers or longs.
On a similar topic, ceil and floor by definition return integers, so how come they don't return ints?
For the same reason outlined above - overflow. If I have an integral value that's larger than what I can represent in a long, I'd have to use something that could represent it. A similar thing occurs when I have an integral value that's smaller than what I can represent in a long.
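A small illustration of both kinds of limits (a sketch, not from the original answer):

System.out.println(Integer.MAX_VALUE + 1); // -2147483648: int arithmetic silently wraps around
double big = (double) (1L << 53);          // 2^53, the last point where every integer is exactly representable
System.out.println(big + 1 == big);        // true: 2^53 + 1 has no exact double representation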
Optimal implementations of integer pow() and floating-point pow() are very different, and C's math library was probably developed around the time when floating-point coprocessors were a consideration. The optimal floating-point implementation is to shift the number closer to 1 (to force quicker convergence of the power series) and then shift the result back. For an integer power, a more accurate result can be had in O(log(p)) time by doing something like this:
// p is a positive integer power set somewhere above; n is the number to raise to the power p
int result = 1;
while (p != 0) {
    if ((p & 1) != 0) {  // if the lowest remaining bit of p is set, fold this power of n into the result
        result *= n;
    }
    n = n * n;           // square n for the next bit of the exponent
    p = p >> 1;          // drop the bit just processed
}
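Wrapped up as a method, with the caveat that int multiplication overflows silently for large results (a sketch with hypothetical names, not part of any standard library):

// Exponentiation by squaring; assumes p >= 0 and that the true result fits in an int.
static int intPow(int n, int p) {
    int result = 1;
    while (p != 0) {
        if ((p & 1) != 0) {
            result *= n;
        }
        n *= n;
        p >>= 1;
    }
    return result;
}
// e.g. intPow(3, 5) == 243, intPow(2, 10) == 1024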
Because all ints can be upcast to a double without loss, and the pow function on a double is no less efficient than that on an int.
The reason lies in the implementation of Math.pow() (the default implementation delegates to native code via JNI). The CPU has an exponentiation module which works with doubles as input and output. Why should Java convert that for you when you have much better control over this yourself?
For floor and ceil the reasons are the same, but note that:
(int) Math.floor(d) == (int) d; // d > 0
(int) Math.ceil(d) == -(int)(-d); // d < 0
These hold for most cases (no warranty at or beyond Integer.MAX_VALUE or Integer.MIN_VALUE).
Java leaves you with
(int) Math.pow(a,b)
because the result of Math.pow may even be NaN or Infinity depending on input.
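A few cases worth knowing about before casting the result (illustrative only):

System.out.println(Math.pow(-8, 1.0 / 3)); // NaN: negative base with a non-integer exponent
System.out.println(Math.pow(10, 1000));    // Infinity: overflows the double range
System.out.println((int) Math.pow(2, 40)); // 2147483647: the cast clamps to Integer.MAX_VALUE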
I have a scenario where I'm working with large integers (e.g. 160-bit), and am trying to create, at run time, the biggest possible unsigned integer that can be represented with an n-bit number. The exact value of n isn't known until the program has begun executing and read it from a configuration file. So, for example, n might be 160, or 128, or 192, etc.
Initially what I was thinking was something like:
BigInteger.valueOf((long)Math.pow(2, n));
but then I realized the conversion to long defeats the purpose, given that a long doesn't have enough bits to store the result in the first place. Any suggestions?
On the largest n-bit unsigned number
Let's first take a look at what this number is, mathematically.
In an unsigned binary representation, the largest n-bit number would have all bits set to 1. Let's take a look at some examples:
1(2) = 1 = 2^1 - 1
11(2) = 3 = 2^2 - 1
111(2) = 7 = 2^3 - 1
...
1…1(2) (n ones) = 2^n - 1
Note that this is analogous in decimal too. The largest 3 digit number is:
10^3 - 1 = 1000 - 1 = 999
Thus, a subproblem of finding the largest n-bit unsigned number is computing 2^n.
On computing powers of 2
Modern digital computers can compute powers of two efficiently, due to the following pattern:
2^0 = 1(2)
2^1 = 10(2)
2^2 = 100(2)
2^3 = 1000(2)
...
2^n = 10…0(2) (n zeros)
That is, 2^n is simply a number having its bit n set to 1, and everything else set to 0 (remember that bits are numbered with zero-based indexing).
Solution
Putting the above together, we get this simple solution using BigInteger for our problem:
final int N = 5;
BigInteger twoToN = BigInteger.ZERO.setBit(N);
BigInteger maxNbits = twoToN.subtract(BigInteger.ONE);
System.out.println(maxNbits); // 31
If we were using long instead, then we can write something like this:
// for the 64-bit signed long version, N < 64
System.out.println((1L << N) - 1); // 31
There is no "set bit n" operation defined for long, so traditionally bit shifting is used instead. In fact, a BigInteger analog of this shifting technique is also possible:
System.out.println(BigInteger.ONE.shiftLeft(N).subtract(BigInteger.ONE)); // 31
See also
Wikipedia/Binary numeral system
Bit Twiddling Hacks
Additional BigInteger tips
BigInteger does have a pow method to compute a non-negative power of any arbitrary number. If you're working in a modular ring, there are also modPow and modInverse.
You can individually setBit, flipBit or just testBit. You can get the overall bitCount, perform bitwise and with another BigInteger, and shiftLeft/shiftRight, etc.
As bonus, you can also compute the gcd or check if the number isProbablePrime.
ALWAYS remember that BigInteger, like String, is immutable. You can't invoke a method on an instance and expect that instance to be modified. Instead, always assign the result returned by the method to your variable.
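A quick illustration of that last point (just a sketch):

import java.math.BigInteger;

BigInteger x = BigInteger.ONE;
x.shiftLeft(10);     // wrong: the returned value is discarded, x is still 1
x = x.shiftLeft(10); // right: reassign the result, x is now 1024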
Just to clarify, you want the largest n-bit number (i.e., the one with all n bits set). If so, the following will do that for you:
BigInteger largestNBitInteger = BigInteger.ZERO.setBit(n).subtract(BigInteger.ONE);
This is mathematically equivalent to 2^n - 1. Your question shows how to compute 2^n, which is actually the smallest (n+1)-bit number. You can of course do that with:
BigInteger smallestNPlusOneBitInteger = BigInteger.ZERO.setBit(n);
I think there is a pow method directly in BigInteger. You can use it for your purpose.
The quickest way I can think of doing this is by using the constructor for BigInteger that takes a byte[].
BigInteger(byte[] val) constructs the BigInteger object from an array of bytes. You are, however, dealing with bits, and so creating a byte[] that might consist of {127, 255, 255, 255, 255} for a 39-bit integer representing 2^39 - 1 might be a little tedious.
You could also use the constructor BigInteger(String val, int radix) - which might make it more readily apparent what's going on in your code, if you don't mind a performance hit for parsing a String. Then you could generate a string like val = "111111111111111111111111111111111111111" and then call BigInteger myInt = new BigInteger(val, 2); - resulting in the same 39-bit integer.
The first option will require some thinking about how to represent your number. That particular constructor expects a two's-complement, big-endian representation of the number. The second will likely be marginally slower, but much clearer.
EDIT: Corrected numbers. I thought you meant represent 2^n, and didn't correctly read the largest value n bits could store.
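For what it's worth, here is a small sketch of both constructions for n = 39; the (byte) casts are needed because Java bytes are signed:

import java.math.BigInteger;

byte[] bytes = {(byte) 0x7F, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF};
BigInteger viaBytes = new BigInteger(bytes);    // two's-complement, big-endian

StringBuilder sb = new StringBuilder();
for (int i = 0; i < 39; i++) sb.append('1');    // 39 ones
BigInteger viaString = new BigInteger(sb.toString(), 2);

System.out.println(viaBytes.equals(viaString)); // true: both are 2^39 - 1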