I have made a program in Java that calculates powers of two, but it seems very inefficient. For smaller powers (2^4000, say), it does it in less than a second. However, I am looking at calculating 2^43112609, which is one greater than the largest known prime number. With over 12 million digits, it will take a very long time to run. Here's my code so far:
import java.io.*;

public class Power
{
    private static byte x = 2;
    private static int y = 43112609;
    private static byte[] a = {x};
    private static byte[] b = {1};
    private static byte[] product;
    private static int size = 2;
    private static int prev = 1;
    private static int count = 0;
    private static int delay = 0;

    public static void main(String[] args) throws IOException
    {
        File f = new File("number.txt");
        FileOutputStream output = new FileOutputStream(f);
        for (int z = 0; z < y; z++)
        {
            product = new byte[size];
            for (int i = 0; i < a.length; i++)
            {
                for (int j = 0; j < b.length; j++)
                {
                    product[i+j] += (byte) (a[i] * b[j]);
                    checkPlaceValue(i + j);
                }
            }
            b = product;
            for (int i = product.length - 1; i > product.length - 2; i--)
            {
                if (product[i] != 0)
                {
                    size++;
                    if (delay >= 500)
                    {
                        delay = 0;
                        System.out.print(".");
                    }
                    delay++;
                }
            }
        }
        String str = "";
        for (int i = (product[product.length-1] == 0) ?
                product.length - 2 : product.length - 1; i >= 0; i--)
        {
            System.out.print(product[i]);
            str += product[i];
        }
        output.write(str.getBytes());
        output.flush();
        output.close();
        System.out.println();
    }

    public static void checkPlaceValue(int placeValue)
    {
        if (product[placeValue] > 9)
        {
            byte remainder = (byte) (product[placeValue] / 10);
            product[placeValue] -= 10 * remainder;
            product[placeValue + 1] += remainder;
            checkPlaceValue(placeValue + 1);
        }
    }
}
This isn't for a school project or anything; just for the fun of it. Any help as to how to make this more efficient would be appreciated! Thanks!
Kyle
P.S. I failed to mention that the output should be in base-10, not binary.
The key here is to notice that:
2^2 = 4
2^4 = (2^2)*(2^2)
2^8 = (2^4)*(2^4)
2^16 = (2^8)*(2^8)
2^32 = (2^16)*(2^16)
2^64 = (2^32)*(2^32)
2^128 = (2^64)*(2^64)
... and in a total of 25 steps ...
2^33554432 = (2^16777216)*(2^16777216)
Then since:
2^43112609 = (2^33554432) * (2^9558177)
you can find the remaining (2^9558177) using the same method, and since (2^9558177 = 2^8388608 * 2^1169569), you can find 2^1169569 using the same method, and since (2^1169569 = 2^1048576 * 2^120993), you can find 2^120993 using the same method, and so on...
EDIT: previously there was a mistake in this section, now it's fixed:
Also, further simplification and optimization by noticing that:
2^43112609 = 2^(0b10100100011101100010100001)
2^43112609 =
(2^(1*33554432))
* (2^(0*16777216))
* (2^(1*8388608))
* (2^(0*4194304))
* (2^(0*2097152))
* (2^(1*1048576))
* (2^(0*524288))
* (2^(0*262144))
* (2^(0*131072))
* (2^(1*65536))
* (2^(1*32768))
* (2^(1*16384))
* (2^(0*8192))
* (2^(1*4096))
* (2^(1*2048))
* (2^(0*1024))
* (2^(0*512))
* (2^(0*256))
* (2^(1*128))
* (2^(0*64))
* (2^(1*32))
* (2^(0*16))
* (2^(0*8))
* (2^(0*4))
* (2^(0*2))
* (2^(1*1))
Also note that 2^(0*n) = 2^0 = 1
Using this algorithm, you can calculate the table of 2^1, 2^2, 2^4, 2^8, 2^16 ... 2^33554432 in 25 multiplications. Then you can convert 43112609 into its binary representation, and easily find 2^43112609 using less than 25 multiplications. In total, you need to use less than 50 multiplications to find any 2^n where n is between 0 and 67108864.
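A minimal Java sketch of this scheme (my own illustration, not the asker's code), using BigInteger: repeated squaring produces the table entries 2^(2^k), and the set bits of the exponent select which of them get multiplied into the result:

```java
import java.math.BigInteger;

public class Pow2 {
    // Compute 2^n by repeated squaring: walk the bits of n from least
    // significant to most, squaring a running "table entry" 2^(2^k)
    // and multiplying it in whenever the corresponding bit of n is set.
    static BigInteger pow2(int n) {
        BigInteger result = BigInteger.ONE;
        BigInteger entry = BigInteger.valueOf(2); // 2^(2^0)
        while (n > 0) {
            if ((n & 1) == 1) result = result.multiply(entry);
            entry = entry.multiply(entry);        // next table entry
            n >>= 1;
        }
        return result;
    }
}
```

For n = 43112609 this performs well under 50 big multiplications, exactly as counted above; the expensive part then becomes converting the result to decimal.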
Displaying it in binary is easy and fast - as quickly as you can write to disk! 100000...... :D
Let n = 43112609.
Assumption: You want to print 2^n in decimal.
While filling a bit vector that represents 2^n in binary is trivial, converting that number to decimal notation will take a while. For instance, the implementation of java.math.BigInteger.toString takes O(n^2) operations. And that's probably why
BigInteger.ONE.shiftLeft(43112609).toString()
still hasn't terminated after an hour of execution time ...
Let's start with an asymptotic analysis of your algorithm. Your outer loop will execute n times. For each iteration, you'll do another O(n^2) operations. That is, your algorithm is O(n^3), so poor scalability is expected.
You can reduce this to O(n^2 log n) by making use of
x^64 = x^(2*2*2*2*2*2) = (((((x^2)^2)^2)^2)^2)^2
(which requires only 6 multiplications) rather than the 63 multiplications of
x^64 = x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x
(Generalizing to arbitrary exponents is left as an exercise for you. Hint: write the exponent as a binary number - or look at Lie Ryan's answer.)
For speeding up multiplication, you might employ the Karatsuba Algorithm, reducing the overall runtime to O(n^((log 3)/(log 2)) log n).
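For illustration, here is a deliberately naive BigInteger sketch of Karatsuba's three-multiplication split (an assumption-laden toy of my own, not production code; real implementations work on digit arrays and fall back to schoolbook multiplication below a threshold):

```java
import java.math.BigInteger;

public class Karatsuba {
    // x*y via Karatsuba, for non-negative inputs: split each operand
    // at `half` bits, then combine three recursive products instead
    // of the four that the schoolbook split would need.
    static BigInteger karatsuba(BigInteger x, BigInteger y) {
        int n = Math.max(x.bitLength(), y.bitLength());
        if (n <= 64) return x.multiply(y);           // schoolbook base case
        int half = n / 2;
        BigInteger xh = x.shiftRight(half);
        BigInteger xl = x.subtract(xh.shiftLeft(half));
        BigInteger yh = y.shiftRight(half);
        BigInteger yl = y.subtract(yh.shiftLeft(half));
        BigInteger a = karatsuba(xh, yh);            // high * high
        BigInteger c = karatsuba(xl, yl);            // low * low
        BigInteger b = karatsuba(xh.add(xl), yh.add(yl))
                .subtract(a).subtract(c);            // cross terms
        return a.shiftLeft(2 * half).add(b.shiftLeft(half)).add(c);
    }
}
```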
As mentioned, powers of two correspond to binary digits. Binary is base 2, so each digit is double the value of the previous one.
For example:
1 = 2^0 = b1
2 = 2^1 = b10
4 = 2^2 = b100
8 = 2^3 = b1000
...
Binary is base 2 (that's why it's called "base 2": 2 is the base of the exponents). The shift operator ('<<' in most languages) shifts each binary digit to the left, each shift being equivalent to a multiplication by two.
For example:
1 << 6 = 2^6 = 64
Being such a simple binary operation, most processors can do this extremely quickly for numbers which can fit in a register (8 - 64 bits, depending on the processor). Doing it with larger numbers requires some type of abstraction (Bignum for example), but it still should be an extremely quick operation. Nevertheless, doing it to 43112609 bits will take a little work.
To give you a little context, 2 << 4311260 (missing the last digit of the exponent) is 1,297,819 digits long. Make sure you have enough RAM to hold the output number; if you don't, your computer will be swapping to disk, which will cripple your execution speed.
Since the program is so simple, also consider switching to a language which compiles directly into assembly, such as C.
In truth, generating the value is trivial (we already know the answer, a one followed by 43112609 zeros). It will take quite a bit longer to convert it into decimal.
As #John SMith suggests, you can try 2^4000:
System.out.println(new BigInteger("1").shiftLeft(4000));
EDIT: Turning a binary number into a decimal one is an O(n^2) problem. When you double the number of bits, you double the length of each operation and you double the number of digits produced.
2^100,000 takes 0.166 s
2^1,000,000 takes 11.7 s
2^10,000,000 should take 1200 seconds.
NOTE: The time taken is entirely in the toString(), not the shiftLeft, which takes < 1 ms even for 10 million.
The other key thing to notice is that your CPU is much faster at multiplying ints and longs than you are at doing digit-by-digit long multiplication in Java. Get that number split up into long (64-bit) chunks, and multiply and carry the chunks instead of individual digits. Coupled with the previous answer (using squaring instead of sequential multiplication by 2), this will probably speed it up by a factor of 100x or more.
Edit
I attempted to write a chunking and squaring method and it runs slightly slower than BigInteger (13.5 seconds vs 11.5 seconds to calculate 2^524288). After doing some timings and experiments, the fastest method seems to be repeated squaring with the BigInteger class:
public static String pow3(int n) {
    // Note: repeated squaring like this computes 2^n exactly only
    // when n is a power of two (as in the timings below).
    BigInteger bigint = new BigInteger("2");
    while (n > 1) {
        bigint = bigint.pow(2);
        n /= 2;
    }
    return bigint.toString();
}
Some timing results for power of 2 exponents (2^(2^n) for some n)
131072 - 0.83 seconds
262144 - 3.02 seconds
524288 - 11.75 seconds
1048576 - 49.66 seconds
At this rate of growth, it would take approximately 77 hours to calculate 2^33554432, let alone the time to store and multiply all the powers together to make the final result of 2^43112609.
Edit 2
Actually, for really large exponents, the BigInteger.shiftLeft method is the fastest. I estimate that for 2^33554432 with shiftLeft, it would take approximately 28-30 hours. I wonder how fast a C or assembly version would be...
Because one actually wants all the digits of the result (unlike, e.g., RSA, where one is only interested in the residue mod a number that's much smaller than the numbers we have here), I think the best approach is to extract nine decimal digits at a time using long division implemented via multiplication. Start with a residue of zero, and apply the following to each 32-bit word in turn (MSB first):
residue = (residue SHL 32)+data
result = 0
temp = (residue >> 30)
temp += (temp*316718722) >> 32
result += temp;
residue -= temp * 1000000000;
while (residue >= 1000000000) /* I don't think this loop ever runs more than twice */
{
result ++;
residue -= 1000000000;
}
Then store the result in the 32 bits just read, and loop through each lower word. The residue after the last step will be the nine bottom decimal digits of the result. Since the computation of a power of two in binary will be quick and easy, I think dividing out to convert to decimal may be the best approach.
BTW, this computes 2^640000 in about 15 seconds in VB.NET, so 2^43112609 should take about five hours to compute all 12,978,188 digits.
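A sketch of the overall conversion, with one simplification: plain long division by 10^9 instead of the reciprocal-multiplication trick above (an assumption made for clarity, not the answer's exact inner loop). Each pass over the word array peels off the next nine low-order decimal digits:

```java
public class BinToDec {
    // Convert a big binary number (32-bit words, most significant first)
    // to its decimal string by repeated division by 10^9.
    static String toDecimal(int[] words) {
        int[] n = words.clone();
        StringBuilder sb = new StringBuilder();
        boolean zero = false;
        while (!zero) {
            long residue = 0;
            zero = true;
            for (int i = 0; i < n.length; i++) {
                residue = (residue << 32) + (n[i] & 0xFFFFFFFFL);
                n[i] = (int) (residue / 1_000_000_000L);
                residue %= 1_000_000_000L;
                if (n[i] != 0) zero = false;
            }
            // residue now holds the next nine low-order decimal digits
            sb.insert(0, String.format("%09d", residue));
        }
        int k = 0; // strip leading zeros, keeping at least one digit
        while (k < sb.length() - 1 && sb.charAt(k) == '0') k++;
        return sb.substring(k);
    }
}
```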
I need to generate a random double between -0.10 and 0.25 inclusive of -0.10 and 0.25. I don't completely understand random number generators with java yet so I'm not quite sure how to do this. I know that the code below generates a number within that range, but I'm almost 100% certain it's not inclusive yet. How would I change it to be inclusive of the min and max of the range?
public double randDouble() {
//formula: (Math.random() * (max - min) + min
//in this case, min = -0.10 and max = 0.25
return (Math.random() * 0.35) - 0.10;
}
My question is different from the one #javaguy is talking about, because nowhere on that thread does someone say how to make this inclusive on both ends. And I've tested this code but haven't seen an output of -0.10 or 0.25, so unless my tests just weren't big enough, I can't see how it's inclusive of both ends.
Since you want values between -0.10 and 0.25, both inclusive, I would suggest a different approach that controls the granularity of your random values. Math.random() returns values between 0.0 (inclusive) and 1.0 (exclusive), so with your approach you will never get the value 0.25. Instead, use the Random class and work with integers to get both ends inclusive.
Random randomValue = new Random();
int randomInt = randomValue.nextInt(3501) - 1000; //Will get integer values between -1000 and 2500
double rangedValue = randomInt / 10000.0; // Get values between -0.10 and 0.25
Alternatively, you can get more decimal places by increasing the magnitude of the value passed into randomValue.nextInt() and altering the divisor in the last line accordingly.
Your plan to obtain random double values inclusive of the limits is ill-conceived, since there is no guarantee that you will ever receive a value which is equal to either limit.
That's due to the huge precision of double, which means that the possibility of obtaining any exact given random double is astronomically slim.
You can have that random number issued billions of times, and you may get thousands of values that are very close, extremely close to the limit, and yet none of them may ever happen to be equal to the limit.
Therefore, you cannot have any logic that depends on a random double number being issued which will be equal to the limit, because that random number may never be yielded.
So, the solution to your problem is very simple: stop worrying about inclusive vs. exclusive, because all you can ever hope for is exclusive. That should make things simpler for you.
Your current formula includes the lower bound but excludes the upper bound. To fix this, you can add Double.MIN_VALUE to the upper bound, creating a new max. This slightly changes your bounds so that you actually want to exclude the new max. Your old max is included in [min, newMax).
public double randDouble() {
//formula: Math.random() * (max + Double.MIN_VALUE - min) + min
//in this case, min = -0.10 and max = 0.25
return (Math.random() * (0.35 + Double.MIN_VALUE)) - 0.10;
}
If you can live with float and with ~8 million steps, you can simply throw away almost half of the numbers (one less than half of the numbers):
private static Random rnd=new Random();
public static float next0to1() {
float f;
do {
f=rnd.nextFloat();
} while(f>0.5);
return f*2;
}
this function will generate random floats between 0 and 1, both ends included.
A test snippet like
long start=System.currentTimeMillis();
int tries=0;
while(next0to1()!=0)
tries++;
System.out.println(tries);
while(next0to1()!=1)
tries++;
System.out.println(tries);
System.out.println(System.currentTimeMillis()-start);
or a longer one with your actual numbers and some extra checks
long start=System.currentTimeMillis();
int tries=0;
float min=-0.1f;
float max=0.25f;
tries=0;
float current;
do {
tries++;
current=min+next0to1()*(max-min);
if(current<min)
throw new RuntimeException(current+"<"+min);
if(current>max)
throw new RuntimeException(current+">"+max);
} while(current!=min);
System.out.println(tries);
do {
tries++;
current=min+next0to1()*(max-min);
if(current<min)
throw new RuntimeException(current+"<"+min);
if(current>max)
throw new RuntimeException(current+">"+max);
} while(current!=max);
System.out.println(tries);
System.out.println(System.currentTimeMillis()-start);
will typically show that it takes a few tens of millions of tries to generate both a 0 and a 1, and it completes in less than a second for me on a 5-year-old laptop.
Side remark: it's normal that it usually takes more than 8 million tries: while nextFloat() generates 24 bits, which is reduced to ~23 by throwing away almost half of the numbers, the generator itself works on 48 bits.
The best you can do with Random is still nextInt() as shown in Sasang's answer. The usable range is 2^30:
static double next0to1() {
return rnd.nextInt(0x40000001)/(double)0x40000000;
}
This one takes far more time (minutes) and tries (billions, tries is better changed to long for this one), to generate both 0 and 1.
Random.nextDouble(), or "cracking" random
nextDouble() needs more precision than what the generator can produce in a single step, so it puts together a 26-bit and a 27-bit number instead:
public double nextDouble() {
return (((long)next(26) << 27) + next(27))
/ (double)(1L << 53);
}
where next() is described as
seed = (seed * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1)
return (int)(seed >>> (48 - bits))
(but it can be actually found too, like https://github.com/openjdk-mirror/jdk7u-jdk/blob/master/src/share/classes/java/util/Random.java#L183)
which implies that in order to generate 0, an a=next(26) and a consecutive b=next(27) both have to return 0, so
seeda=00000000 00000000 00000000 00xxxxxx xxxxxxxx xxxxxxxx (binary, 48 bits)
seedb=00000000 00000000 00000000 000xxxxx xxxxxxxx xxxxxxxx (binary, 48 bits)
and from the update:
seedb = (seeda * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1)
it can be brute-forced in a moment (4 million possibilities) what seeda has to be:
long mask = ((long) ((1 << 27) - 1)) << 21;
System.out.println(Long.toBinaryString(mask));
for (long i = 0; i < 1 << 22; i++)
if (((i * 0x5DEECE66DL + 0xBL) & mask) == 0)
System.out.println(i);
where the loop runs in the lower 22 bits (as we know that the rest is zero) and mask is 11111111 11111111 11111111 11100000 00000000 00000000 for checking if the relevant 27 bits of the next seed are zeros.
So seeda=0.
The next question is if there exists a previous seedx to generate seeda, so
seeda = (seedx * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1)
just this time we don't know the bits, so brute-forcing won't help. But this kind of equation is an actual thing, called a congruence relation, and is solvable.
0 "=" seedx * 0x5DEECE66DL + 0xBL (mod 2^48)
WolframAlpha needs it in the form of Ax "=" B (mod C), and in decimal, so the inputs are
25214903917x <- 0x5DEECE66D
281474976710656 <- 2^48
-11 <- 0xB, went to the other side
One possible solution is 107048004364969. Knowing that Random XORs the initial seed with the magic number, this can be tested:
double low=-0.1,
high=0.25;
Random rnd=new Random(0x5DEECE66DL ^ 107048004364969L);
System.out.println(low+(high-low)*rnd.nextDouble()==low);
will result in true. So yes, Random.nextDouble() can generate an exact 0.
The next part is 0.5:
seeda=10000000 00000000 00000000 00xxxxxx xxxxxxxx xxxxxxxx (binary, 48 bits)
seedb=00000000 00000000 00000000 000xxxxx xxxxxxxx xxxxxxxx (binary, 48 bits)
seeda has its 48th digit set to 1.
Comes the brute-forcing loop:
long mask = ((long) ((1 << 27) - 1)) << 21;
System.out.println(Long.toBinaryString(mask));
for (long i = 0; i < 1 << 22; i++)
if ((((i + 0x800000000000L) * 0x5DEECE66DL + 0xBL) & mask) == 0)
System.out.println(i);
Oops, there is no solution. 0x5DEECE66D has its lowest bit set (it's an odd number), so when we have exactly one bit set to 1, as in 0x800000000000, that bit remains 1 after the multiplication (and of course this also applies if we try moving that single bit to the right).
TL;DR: Random.nextDouble() (and consequently Math.random()) will never generate an exact 0.5. (or 0.25, 0.125, etc.)
As I ran into a similar problem, getting random float values between a min and a max, and found no ready-to-use solution, I would like to post mine. Thank you, Sasang, for providing me with the right idea!
The min and max values are inclusive, although with a large range or precision they are unlikely to actually occur. You could also make the precision a parameter if you have different use cases for the function.
public class MyRandom {
    private static Random randomValue = new Random();

    public static double randomFloat(double min, double max) {
        double precision = 1000000D;
        double number = randomValue.nextInt((int) ((max - min) * precision + 1)) + min * precision;
        return number / precision;
    }
}
example:
Called with min = -0.10, max = 0.25 and precision 1000000, it generated output like:
-0.116965
0.067249
0.246948
-0.180695
-0.033533
0.08214
-0.053864
0.216388
-0.158086
0.05963
0.168015
0.119533
You can also use streams, but the range is [fromInclusive, toExclusive).
Check the following sample, one using Streams and another one using Random:
Edited: (I wasn't careful reading the docs, all versions are fromInclusive-toExclusive)
public class MainClass {
    public static void main(String[] args) {
        generateWithStreams(-0.10, 0.25);
        generateWithoutStreams(-0.10, 0.25);
    }

    private static void generateWithStreams(double fromInclusive, double toExclusive) {
        new Random()
                .doubles(fromInclusive, toExclusive)
                // limit the stream to 100 doubles
                .limit(100)
                .forEach(System.out::println);
    }

    private static void generateWithoutStreams(double fromInclusive, double toExclusive) {
        Random random = new Random();
        // generating 100 doubles
        for (int index = 0; index < 100; index++) {
            System.out.println(random.nextDouble() * (toExclusive - fromInclusive) + fromInclusive);
        }
    }
}
The formula you used will give you random values inclusive of lower bound of range but exclusive of upper bound, because:
Math.random() returns a pseudorandom double greater than or equal to 0.0 and less than 1.0
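If you still want the upper bound to be formally reachable, one idiomatic option (a sketch, and only one of several ways) is ThreadLocalRandom.nextDouble(origin, bound), whose range is [origin, bound); passing Math.nextUp(max) as the bound makes max itself a possible value, though, as discussed above, still an astronomically unlikely one:

```java
import java.util.concurrent.ThreadLocalRandom;

public class InclusiveRandom {
    // Returns a double in [min, max], with max included in principle:
    // nextDouble's bound is exclusive, so we pass the next representable
    // double above max as the bound.
    static double randDouble(double min, double max) {
        return ThreadLocalRandom.current().nextDouble(min, Math.nextUp(max));
    }
}
```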
I have a question where I have to add the numbers from 1 to N that have exactly 2 set bits. For N = 5 the result should be 8, since 3 and 5 each have 2 bits set to one. I am implementing this in Java. I get the correct output for int values, but for long values it either takes a lot of time or freezes, and when I submit it to code judge sites, I get a "run time exceeded" message. Please guide me in how to optimise my code to run faster, thanks :)
public static void main(String[] args)
{
    long n = 1000000L;
    long sum = 0;
    long start = System.currentTimeMillis();
    for (long i = 1L; i <= n; i++)
    {
        if (Long.bitCount(i) == 2)
        {
            sum += i;
        }
    }
    long end = System.currentTimeMillis();
    System.out.println(sum);
    System.out.println("time=" + (end - start));
}
As #hbejgel notes, there is no point in looping over all numbers and checking their bit count. You can simply construct numbers with 2 bits and add them up.
You can construct a number with 2 bits by picking two different bit positions in the long, the "higher" bit and the "lower" bit:
long i = (1L << higher) + (1L << lower);
So, you can simply loop over all such numbers, until the value you have constructed exceeds your limit:
long sum = 0;
outer:
for (int higher = 1; higher < 63; ++higher) {
    for (int lower = 0; lower < higher; ++lower) {
        long i = (1L << higher) + (1L << lower); // 1L to avoid int overflow
        if (i <= n) {
            sum += i;
        }
        if (i >= n) break outer;
    }
}
Let's say we know the closest number x, equal to or lower than N, with 2 set bits. Then we can use the formula for a power series to quickly sum over all positions of the two set bits. For example, if x = b11000, we sum
4*2^0 + S(4)
+ 3*2^1 + S(4) - S(1)
+ 2*2^2 + S(4) - S(2)
+ x
where S(n) = 2 * (1 - 2^n) / (1 - 2)
= 2 + 2^2 + 2^3 ... + 2^n
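The geometric series S(n) used here has the closed form 2^(n+1) - 2, which a quick check confirms:

```java
public class GeomSum {
    // S(n) = 2 + 2^2 + ... + 2^n = 2^(n+1) - 2
    static long S(int n) {
        return (1L << (n + 1)) - 2;
    }

    // Brute-force version, for comparison
    static long sSlow(int n) {
        long sum = 0;
        for (int k = 1; k <= n; k++) sum += 1L << k;
        return sum;
    }
}
```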
With numbers encoded 2 out of 5, exactly two bits are set in every one-digit number. The sum is 45, with the exception of N×(N-1)/2 for 0≤N<9.
I think the question is supposed to make you discover the pattern.

Fast forward: given a number N, you can tell the largest number that should be counted by the bitmask formed from its first two set bits. So you have a smaller number M.

Skip to the next counted number: given any number with two bits set, the next largest such number is obtained by shifting the second bit by one, until it underflows.

Skip to the next order: when underflow happens on the second bit, shift the highest bit by one, and also set the bit on its right.

You don't really need a loop over N, only over the bits it has.

Next question: can you answer for a large number, where N > 100,000,000?

Next next question: can you answer the same question for X bits, when X > 2?
As the title says, given a stock of digits 0-9, what is the last number I can write before I run out of some digit?
So if I'm given a stock of, say, 10 of every digit from 0 to 9, what is the last number I can write before I run out of some digit? For example, with a stock of 2 I can write the numbers 1 ... 10:
1 2 3 4 5 6 7 8 9 10
at this point my stock for ones is 0, and I cannot write 11.
Also note that if I was given a stock of 3, I could still write only numbers 1 ... 10, because 11 would cost me 2 ones, which would leave my stock for ones at -1.
What I have come up so far:
public class Numbers {
    public static int numbers(int stock) {
        int[] t = new int[10];
        for (int k = 1; ; k++) {
            int x = k;
            while (x > 0) {
                if (t[x % 10] == stock) return k-1;
                t[x % 10]++;
                x /= 10;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(numbers(4));
    }
}
With this I can get the correct answer for fairly big stock sizes. With a stock size of 10^6 the code completes in ~2 seconds, and with a stock of 10^7 numbers it takes a whole 27 seconds. This is not good enough, since I'm looking for a solution that can handle stock sizes of as big as 10^16, so I probably need a O(log(n)) solution.
This is a homework-like assignment, so I didn't come here without wrestling with this pickle for quite some time. I have failed to come up with anything similar by googling, and Wolfram Alpha doesn't recognize any kind of pattern this gives.
What I have concluded so far is that ones will always run out first. I have no proof, but it is so.
Can anyone come up with any piece of advice? Thanks a lot.
EDIT:
I have come up with and implemented an efficient way of finding the cost of the numbers 1...n, thanks to btilly's pointers (see his post and comments below, also marked as the solution). I will elaborate further after I have implemented the binary search for finding the last number you can write with the given stock later today.
EDIT 2: The Solution
I had completely forgotten about this post, so my apologies for not editing in my solution earlier. I won't copy the actual implementation, though.
My code for finding the cost of a number does the following:
First, let us choose a number, e.g. 9999. Now we get the cost by summing the contribution of each digit position like so:
9 9 9 9
^ ^ ^ ^
^ ^ ^ roof(9999 / 10^1) * 10^0 = 1000
^ ^ roof(9999 / 10^2) * 10^1 = 1000
^ roof(9999 / 10^3) * 10^2 = 1000
roof(9999 / 10^4) * 10^3 = 1000
Thus the cost for 9999 is 4000.
the same for 256:
2 5 6
^ ^ ^
^ ^ roof(256 / 10^1) * 10^0 = 26
^ roof(256 / 10^2) * 10^1 = 30
roof(256 / 10^3) * 10^2 = 100
Thus the cost for 256 is 156.
Implementing this idea directly would make the program work only with numbers that have no digits 1 or 0, which is why further logic is needed. Let's call the method explained above C(n, d), where n is the number for which we are getting the cost, and d is the d'th digit of n that we are currently working with. Let's also define a method D(n, d) that returns the d'th digit of n. Then we apply the following logic:
sum = C(n, d)
if D(n, d) is 1:
for each k < d, k >= 0 :
sum -= ( 9 - D(n, k) ) * 10^(k-1);
else if D(n, d) is 0:
sum -= 10^(d-1)
With this the program will calculate the correct cost for a number efficiently. After this we simply apply a binary search for finding the number with the correct cost.
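Since ones run out first, the cost being computed is the count of 1-digits used when writing 1...n; as a cross-check, the same count can be computed with the classic per-position formula (a sketch of my own, not the exact C(n, d) above):

```java
public class DigitOnes {
    // Count occurrences of the digit 1 in the decimal representations
    // of 1..n: for each position p, the higher digits contribute full
    // cycles of p ones, plus a partial cycle depending on the current digit.
    static long countOnes(long n) {
        long count = 0;
        for (long p = 1; p <= n; p *= 10) {
            long higher = n / (p * 10);
            long cur = (n / p) % 10;
            long lower = n % p;
            count += higher * p;
            if (cur == 1) count += lower + 1;
            else if (cur > 1) count += p;
        }
        return count;
    }
}
```

For n = 9999 this gives 4000, matching the worked example above.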
Step 1. Write an efficient function to calculate how much stock needs to be used to write all of the numbers up to N. (Hint: calculate everything that was used to write out the numbers in the last digit with a formula, and then use recursion to calculate everything that was used in the other digits.)
Step 2. Do a binary search to find the last number you can write with your amount of stock.
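Step 2 is a standard monotone binary search; a sketch assuming some cost function from step 1 (the name, signature, and upper bound here are hypothetical):

```java
import java.util.function.LongUnaryOperator;

public class StockSearch {
    // Largest n with cost(n) <= stock, assuming cost is non-decreasing.
    // `hi` must be an upper bound on the answer (caller-supplied assumption).
    static long lastWritable(long stock, long hi, LongUnaryOperator cost) {
        long lo = 0;
        while (lo < hi) {
            long mid = lo + (hi - lo + 1) / 2; // round up to guarantee progress
            if (cost.applyAsLong(mid) <= stock) lo = mid;
            else hi = mid - 1;
        }
        return lo;
    }
}
```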
We can calculate the answer directly. A recursive formula can determine how much stock is needed to get from 1 to numbers that are powers of ten minus 1:
f(n, power, target){
    if (power == target)
        return 10 * n + power;
    else
        return f(10 * n + power, power * 10, target);
}
f(0,1,1) = 1 // a stock of 1 is needed for the numbers from 1 to 9
f(0,1,10) = 20 // a stock of 20 is needed for the numbers from 1 to 99
f(0,1,100) = 300 // a stock of 300 is needed for the numbers from 1 to 999
f(0,1,1000) = 4000 // a stock of 4000 is needed for the numbers from 1 to 9999
Where it gets complicated is accounting for the extra 1's needed when our calculation lands after the first multiple of any of the above coefficients; for example, on the second multiple of 10 (11-19) we need an extra 1 for each number.
JavaScript code:
function f(stock){
  var cs = [0];
  var p = 1;
  function makeCoefficients(n,i){
    n = 10*n + p;
    if (n > stock){
      return;
    } else {
      cs.push(n);
      p *= 10;
      makeCoefficients(n,i*10);
    }
  }
  makeCoefficients(0,1);
  var result = -1;
  var numSndMul = 0;
  var c;
  while (stock > 0){
    if (cs.length == 0){
      return result;
    }
    c = cs.pop();
    var mul = c + p * numSndMul;
    if (stock >= mul){
      stock -= mul;
      result += p;
      numSndMul++;
      if (stock == 0){
        return result;
      }
    }
    var sndMul = c + p * numSndMul;
    if (stock >= sndMul){
      stock -= sndMul;
      result += p;
      numSndMul--;
      if (stock == 0){
        return result;
      }
      var numMul = Math.floor(stock / mul);
      stock -= numMul * mul;
      result += numMul * p;
    }
    p = Math.floor(p/10);
  }
  return result;
}
Output:
console.log(f(600));
1180
console.log(f(17654321));
16031415
console.log(f(2147483647));
1633388154
I am working with the basic Knuth 4.3.1 Algorithm M to do arbitrary precision multiplication on the natural numbers. My implementation in Java is below. The problem is that it is generating leading zeroes, seemingly as a side effect of the algorithm not knowing whether a given result has two places or one. For example, 2 x 3 = 6 (one digit), but 4 x 7 = 28 (two digits). The algorithm seems to always reserve two digits which results in leading zeroes.
My question is two-fold: (1) Is my algorithm a correct implementation of M, or am I doing something wrong which is unnecessarily creating leading zeroes, and (2) If it is an unavoidable side effect of M that it produces leading zeroes, then how can we adjust or use an improved algorithm to avoid leading zeroes.
// Knuth M algorithm 4.3.1
final public static void multiplyDecimals( int[] decimalM1, int[] decimalN1, int[] result, int radix ){
    Arrays.fill( result, 0 );
    int lenM = decimalM1[0];
    int lenN = decimalN1[0];
    result[0] = lenM + lenN;
    int iStepM = lenM;
    while( iStepM > 0 ){
        int iStepN = lenN;
        int iCarry = 0;
        while( iStepN > 0 ){
            int iPartial = decimalM1[iStepM] * decimalN1[iStepN] + result[iStepM + iStepN] + iCarry;
            result[iStepM + iStepN] = iPartial % radix;
            iCarry = iPartial / radix;
            iStepN--;
        }
        result[iStepM] = iCarry;
        iStepM--;
    }
    return;
}
Output of the algorithm generating factorials, which shows the leading zeroes:
1 01
2 002
3 0006
4 00024
5 000120
6 0000720
7 00005040
8 000040320
9 0000362880
10 000003628800
11 00000039916800
12 0000000479001600
13 000000006227020800
14 00000000087178291200
15 0000000001307674368000
16 000000000020922789888000
17 00000000000355687428096000
18 0000000000006402373705728000
19 000000000000121645100408832000
20 00000000000002432902008176640000
The algorithm isn't allocating any leading zeros at all. You are. You're providing the output array, and filling it with zeros too. Knuth Algorithm M doesn't do that.
In addition:
You should certainly skip all the leading zeros in both numbers. This can have a massive effect on performance, as it's an O(MN) algorithm. The sum of the final M and N is nearly the correct number of output digits; the final step after multiplication is to remove possibly one leading zero.
You can also skip the inner loop if the current M digit is zero. This is Knuth's step M2. Note that a zero digit occurs more frequently in naturally occurring numbers than 1/10 of the time: there's a law about this (Benford's law) that says each leading digit 1,2,3,4,5,6,7,8,9 is successively less likely.
Each individual multiplication allocates enough space for the worst case input. Allocating for the worst case is the right thing to do here, because in general you won't know for sure if your result has a leading zero until you've finished doing your multiplication!
To prevent the cascading effect of redundant leading zeros in your question, check for leading zeroes after you have performed the multiplication, and reduce the length accordingly. Note that, if neither of your inputs has any leading zeroes, the result of their multiplication should have no more than one. However, this is not true for, say, subtraction (which can obviously generate as many leading zeroes as you like!).
I figured out how to solve the problem. The program needs to be modified as follows:
final public static int multiplyDecimals( int[] decimalM1, int[] decimalN1, int[] result, int radix ){
    Arrays.fill( result, 0 );
    int lenM = decimalM1[0];
    int lenN = decimalN1[0];
    result[0] = lenM + lenN;
    int iStepM = lenM;
    while( iStepM > 0 ){
        int iStepN = lenN;
        int iCarry = 0;
        while( iStepN > 0 ){
            int iPartial = decimalM1[iStepM] * decimalN1[iStepN] + result[iStepM + iStepN] + iCarry;
            result[iStepM + iStepN] = iPartial % radix;
            iCarry = iPartial / radix;
            iStepN--;
        }
        result[iStepM] = iCarry;
        iStepM--;
    }
    int xFirstDigit = 1;
    while( result[xFirstDigit] == 0 ) xFirstDigit++;
    if( xFirstDigit > 1 ){
        int ctDigits = result[0] - xFirstDigit + 1;
        for( int xDigit = 1; xDigit <= ctDigits; xDigit++ ) result[xDigit] = result[xDigit + xFirstDigit - 1];
        result[0] = ctDigits;
    }
    return result[0];
}
I need to work out a very large power modulo (2^32), i.e. I want the result of:
y = (p^n) mod (2^32)
p is a prime number
n is a large integer
Is there a trick to doing this efficiently in Java?
Or am I stuck with doing it in a loop with n iterations?
The simple way to mod 2^32 is to use & 0xFFFFFFFFL. Also, there happens to be a type which naturally keeps the lowest 32 bits, called int ;) If you use that, you don't even need to perform the & until you have the result (so that the answer is unsigned); for this reason you only need to keep the last 32 bits of the answer. To speed up the ^n you can repeatedly square: compute the square, its square, and so on. E.g. if n is 0b11111, then you need to multiply p^16 * p^8 * p^4 * p^2 * p.
In short, you can use plain int, as you only need 32 bits of accuracy, at a cost of O(log n) multiplications, where n is the power.
int prime = 2106945901;
for (int i = 0; i < 10; i++) {
    long start = System.nanoTime();
    long answer1 = BigInteger.valueOf(prime)
            .modPow(
                BigInteger.valueOf(prime),
                BigInteger.valueOf(2).pow(32)).longValue();
    long mid = System.nanoTime();
    int answer2 = 1;
    int p = prime;
    for (int n = prime; n > 0; n >>>= 1) {
        if ((n & 1) != 0)
            answer2 *= p;
        p *= p;
    }
    long end = System.nanoTime();
    System.out.printf("True answer %,d took %.3f ms, quick answer %,d took %.3f ms%n",
            answer1, (mid - start) / 1e6, answer2 & 0xFFFFFFFFL, (end - mid) / 1e6);
}
finally prints
True answer 4,169,684,317 took 0.233 ms, quick answer 4,169,684,317 took 0.002 ms
You can utilize exponentiation by squaring. Firstly, break it down into powers of two for your given n. Since p^n (mod x) == p^(k1) (mod x) . p^(k2) (mod x) . ... p^(kn) (mod x) where sum k_i = n, you can utilize this and successive powers of two to calculate this in O(log n) steps.
In addition to the other answers, you can use some elementary number theory to reduce the time needed to compute a^n mod 2^32 for an odd integer a to O(1). The Euler phi function together with Euler's theorem allows you to discard all but the low-order 31 bits of n.
φ(2^32) = 2^31, and a^φ(2^32) = 1 mod 2^32.
Thus if n = q*(2^31) + r, 0 <= r < 2^31, then a^n mod 2^32 = a^r mod 2^32.
r is simply the low-order 31 bits of n, i.e. n & 0x7fffffff. In fact, by Carmichael's theorem you can do a bit better (literally), and you only need to consider the low-order 30 bits of n, i.e. n & 0x3fffffff. You can precompute these once and store them in a table of size 4 GB for a given base a. Here is some Java code as an example.
import java.math.BigInteger;

public class PowMod2_32 {

    private static final long MASK32 = 0xffffffffL;

    public static long pow32(final int a, final int exponent)
    {
        int prod = 1;
        for (int i = 29; i >= 0; i--) {
            prod *= prod; // square
            if (((exponent >> i) & 1) == 1) {
                prod *= a; // multiply
            }
        }
        return prod & MASK32;
    }

    public static long pow32(BigInteger a, BigInteger exponent) {
        return pow32(a.intValue(), exponent.intValue());
    }
}
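A quick cross-check of that 30-bit exponent reduction against BigInteger.modPow (valid only for odd bases a, per Carmichael's theorem above; the helper names here are mine):

```java
import java.math.BigInteger;

public class CarmichaelCheck {
    static final BigInteger MOD = BigInteger.ONE.shiftLeft(32);

    // a^n mod 2^32 computed the straightforward way...
    static long full(long a, long n) {
        return BigInteger.valueOf(a).modPow(BigInteger.valueOf(n), MOD).longValue();
    }

    // ...and with the exponent reduced to its low-order 30 bits first
    // (only valid when a is odd).
    static long reduced(long a, long n) {
        return BigInteger.valueOf(a)
                .modPow(BigInteger.valueOf(n & 0x3fffffff), MOD).longValue();
    }
}
```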
There are no tricks in Java that I know of, but there are some tricks in maths.
If you implement these as an algorithm, it should speed up the computation.
Look at 5 and 6. Look at 4 also, if the power of two is always even.
Use the class BigInteger. Here's an example of how to work with pow on it:
public String higherPow() {
    BigInteger i = new BigInteger("2");
    // doing a power (2^32)
    i = i.pow(32);
    // after 2^32 was made, do mod 100
    i = i.mod(new BigInteger("100"));
    return i.toString();
}