What's the difference between those statements? - java

These days I'm working on some ACM-ICPC problems (I already graduated; this is just for fun). Yesterday I nearly went crazy because an online judge kept answering "WRONG ANSWER" to the code I'd written. Finally, after more than ten horrible hours, I realized that the following statement was the reason.
int target = (int)((double)(M * 100) / N) + 1; // RIGHT!!
int target = (int)((double) M / N * 100) + 1; // WRONG!!
I can't see exactly how and why the first statement behaves differently from the second one. Because I'm not allowed to see the test cases used by the judge, it's hard for me to understand when the code can go wrong. Can anybody explain this to me? Thank you.
* I'm using Java.

As far as I can tell, the result of these two expressions
(double)(M * 100) / N
(double) M / N * 100
is the same EXCEPT for floating point precision errors (and also for possible overflows, but let us ignore them here since both lines are susceptible to them, albeit in different ways). These errors COULD cause one value to land slightly above or exactly on an integer, and the other slightly below the same integer, which would cause
(int)((double)(M * 100) / N)
(int)((double) M / N * 100)
to differ by one. In general, when dealing with floating point, you have more chances to get closer to the "real" value if you leave the division as the last operation.
There is one further consideration, which can get quite tricky: you have no parentheses around (double)M/N in your second line. This MIGHT give additional freedom to an optimizer, which could make the result dependent on the optimization level. I don't know whether this can happen in Java.
As for the order of operations, I tried out this particular case in C (because that's quicker for me):
#include <stdio.h>

int main(void) {
    int i, j, k;
    for (i = 1; i <= 100; i++) {
        j = (int)(((double)i / 100) * 100);
        if (i != j) {
            printf("%d -> %d\n", i, j);
        }
        k = (int)(((double)i * 100) / 100);
        if (i != k) {
            printf("%d ?? %d\n", i, k);
        }
    }
    return 0;
}
and the output on my machine is
29 -> 28
57 -> 56
58 -> 57
Replacing 100 with 10000000 yields 587200 lines of the same kind (i.e., an error rate of 5.872%).
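Since the question is about Java, here is the same experiment as a Java sketch (my own translation; on IEEE-754 doubles it should show the same pattern, though the exact lines printed may vary):

public class DivisionOrder {
    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            int j = (int) (((double) i / 100) * 100); // division first, then scaling
            if (i != j) {
                System.out.println(i + " -> " + j);
            }
            int k = (int) (((double) i * 100) / 100); // division as the last operation
            if (i != k) {
                System.out.println(i + " ?? " + k);
            }
        }
    }
}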

Related

reverse bits in Java - O(n)

I'm trying to understand this code which reverses bits in O(n) time. I understand the time complexity, but I'm not able to understand the logic behind this code.
public static long reverse(long a) {
    long result = 0;
    int i = 31;
    while (a > 0) {
        result += (a % 2) * Math.pow(2, i);
        i--;
        a = a / 2;
    }
    return result;
}
To keep it simple, for example, if I take 12 (1100) and only 4 bits (set i = 3), my output will be 3 (0011). I get that and I'm able to derive the answer as well.
But can someone explain the logic behind this code? Thanks!
That code is:
- broken for half the possible bit patterns (all the negative numbers),
- O(n), not O(log n), where n is the number of bits in a,
- very inefficient, and
- confusingly written.
The algorithm works only for positive numbers and does the following:
- extract the rightmost bit from a,
- set the corresponding bit from the left end,
- shift a one position to the right.
It repeats as long as a > 0. If the value of a has some leading zero bits, this algorithm will be a little better than O(n).
Inefficiency results from using remainder and division for bit extraction, when masking and shifting would be much faster, although a modern compiler should be able to convert a/2 to a >> 1 and a%2 to a & 0x00000001. However, I don't know whether it would recognize Math.pow(2, i) as 0x00000001 << i.
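For comparison, a mask-and-shift version might look something like this (a sketch of my own, treating a as a 32-bit pattern, as the i = 31 in the original suggests):

// Reverse the low 32 bits of a using shifts and masks instead of
// division, remainder and Math.pow. Because the value is treated as
// a bit pattern rather than a number, negative inputs are no problem.
public static long reverse32(long a) {
    long result = 0;
    for (int i = 31; i >= 0; i--) {
        result |= (a & 1) << i; // take the lowest bit and place it at position i
        a >>>= 1;               // unsigned shift: no sign extension
    }
    return result;
}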
Here's the explanation
i = 31 // index of the highest bit in a 32-bit value
The loop body has two parts:
result += (a % 2) * Math.pow(2, i);
(a % 2) extracts the last (units) bit.
Multiplying by a positive power of 2 has the effect of shifting the bits left. (Math.pow(2, i) shifts to the left i times.)
So we take the units-place bit and put it at position i from the units place, which is (31 - i) from the left end; as i counts down, each bit's position is mirrored, which effectively reverses the bits.
and finally
i--; //move to next bit
a = a/2; //chop the unit place bit to proceed to next.
That's it.

Algorithm to efficiently determine the [n][n] element in a matrix

This is a question regarding a piece of coursework so would rather you didn't fully answer the question but rather give tips to improve the run time complexity of my current algorithm.
I have been given the following information:
A function g(n) is given by g(n) = f(n,n), where f may be defined recursively by f(0,0) = 0; f(i,0) = f(0,j) = 1 for i,j > 0; and f(i,j) = (f(i-1,j) + f(i-1,j-1) + f(i,j-1)) / 3 otherwise.
I have implemented this algorithm recursively with the following code:
public static double f(int i, int j)
{
    if (i == 0 && j == 0) {
        return 0;
    }
    if (i == 0 || j == 0) {
        return 1;
    }
    return (f(i-1, j) + f(i-1, j-1) + f(i, j-1)) / 3;
}
This algorithm gives the results I am looking for, but it is extremely inefficient and I am now tasked to improve the run time complexity.
I then wrote an algorithm that creates an n*n matrix, computes every element up to the [n][n] element, and returns it; for example, f(1,1) returns 0.6 recurring, because it is the result of (1 + 0 + 1)/3.
I have also created a spreadsheet of the results from f(0,0) to f(7,7) (not reproduced here).
Now although this is much faster than my recursive algorithm, it has the huge overhead of creating an n*n matrix; a sketch of this iterative version is below.
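For concreteness, the iterative matrix version described above probably looks something like this (a hypothetical reconstruction, not the asker's actual code):

// Fill an (n+1) x (n+1) table bottom-up and return the [n][n] entry.
public static double g(int n) {
    double[][] m = new double[n + 1][n + 1];
    for (int x = 0; x <= n; x++) {
        for (int y = 0; y <= n; y++) {
            if (x == 0 && y == 0) {
                m[x][y] = 0;
            } else if (x == 0 || y == 0) {
                m[x][y] = 1;
            } else {
                m[x][y] = (m[x - 1][y] + m[x - 1][y - 1] + m[x][y - 1]) / 3;
            }
        }
    }
    return m[n][n];
}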
Any suggestions to how I can improve this algorithm will be greatly appreciated!
I can now see that it is possible to make the algorithm O(n) complexity, but is it possible to work out the result without creating an [n][n] 2D array?
I have created a solution in Java that runs in O(n) time and O(n) space and will post the solution after I have handed in my coursework to stop any plagiarism.
This is another one of those questions where it's better to examine it, before diving in and writing code.
The first thing I'd say you should do is look at a grid of the numbers, and represent them not as decimals but as fractions.
The first thing that should then be obvious is that the denominator of each entry is just a measure of the distance from the origin: the denominator at (i,j) is 3^(i+j-1).
If you look at the grid in this way, you can get all of the denominators:
1/3    1    3    9   27
  1    3    9   27   81
  3    9   27   81  243
  9   27   81  243  729
 27   81  243  729 2187
Note that the first row and column are not all 1s - they've been chosen to follow the pattern, and the general formula which works for all of the other squares.
The numerators are a little bit more tricky, but still doable. As with most problems like this, the answer is related to combinations, factorials, and then some more complicated things. Typical entries here include Catalan numbers, Stirling's numbers, Pascal's triangle, and you will nearly always see Hypergeometric functions used.
Unless you do a lot of maths, it's unlikely you're familiar with all of these, and there is a hell of a lot of literature. So I have an easier way to find out the relations you need, which nearly always works. It goes like this:
1. Write a naive, inefficient algorithm to get the sequence you want.
2. Copy a reasonably large amount of the numbers into Google.
3. Hope that a result from the Online Encyclopedia of Integer Sequences pops up.
3b. If one doesn't, then look at some differences in your sequence, or some other sequence related to your data.
4. Use the information you find to implement said sequence.
So, following this logic, here are the numerators (0 at the origin, powers of 3 along the axes, and n(i,j) = n(i-1,j) + n(i,j-1) + 3n(i-1,j-1) inside):
  0    1    3    9   27
  1    2    8   26   80
  3    8   22   72  230
  9   26   72  210  656
 27   80  230  656 1942
Now, unfortunately, googling those yielded nothing. However, there are a few things you can notice about them, the main one being that the first row/column are just powers of 3, and that the second row/column are one less than powers of three. This kind of boundary is exactly the same as in Pascal's triangle and a lot of related sequences.
Here is the matrix of differences between the denominators and numerators, where we've decided that the f(0,0) element shall just follow the same pattern. These numbers already look much simpler. Note too, rather interestingly, that these differences follow the same rule as the initial numbers, except that the first number is one (and they are offset by a column and a row): T(i,j) = T(i-1,j) + T(i,j-1) + 3*T(i-1,j-1). Written as a triangle:
1
1 1
1 5 1
1 9 9 1
1 13 33 13 1
1 17 73 73 17 1
1 21 129 245 129 21 1
1 25 201 593 593 201 25 1
This looks more like the sequences you see a lot in combinatorics.
If you google numbers from this matrix, you do get a hit.
And then, following the link to the raw data, you get sequence A081578, which is described as a "Pascal-(1,3,1) array". That makes sense exactly: if you rotate the matrix so that the (0,0) element is at the top and the elements form a triangle, then each entry is 1 times the left element, plus 3 times the element above, plus 1 times the right element.
The question now is implementing the formulae used to generate the numbers.
Unfortunately, this is often easier said than done. For example, the formula given on the page:
T(n,k)=sum{j=0..n, C(k,j-k)*C(n+k-j,k)*3^(j-k)}
is wrong, and it takes a fair bit of reading the paper (linked on the page) to work out the correct formula. The sections you want are Proposition 26 and Corollary 28. The sequence is mentioned in Table 2 after Proposition 13. Note that r = 4.
The correct formula is given in Proposition 26, but there is also a typo there :/. The k=0 in the sum should be a j=0; corrected (and as implemented in the code below), it reads
T(n,k) = sum{j=0..n-k, C(k,j) * C(n-k,j) * 4^j}
where T is the triangular matrix containing the coefficients.
The OEIS page does give a couple of implementations to calculate the numbers, but neither of them are in java, and neither of them can be easily transcribed to java:
There is a mathematica example:
Table[ Hypergeometric2F1[-k, k-n, 1, 4], {n, 0, 10}, {k, 0, n}] // Flatten
which, as always, is ridiculously succinct. And there is also a Haskell version, which is equally terse:
a081578 n k = a081578_tabl !! n !! k
a081578_row n = a081578_tabl !! n
a081578_tabl = map fst $ iterate
    (\(us, vs) -> (vs, zipWith (+) (map (* 3) ([0] ++ us ++ [0])) $
                       zipWith (+) ([0] ++ vs) (vs ++ [0]))) ([1], [1, 1])
I know you're doing this in Java, but I could not be bothered to transcribe my answer to Java (sorry). Here's a Python implementation:
from __future__ import division
import math

#
# Helper functions
#
def cache(function):
    cachedResults = {}
    def wrapper(*args):
        if args in cachedResults:
            return cachedResults[args]
        else:
            result = function(*args)
            cachedResults[args] = result
            return result
    return wrapper

@cache
def fact(n):
    return math.factorial(n)

@cache
def binomial(n,k):
    if n < k: return 0
    return fact(n) / ( fact(k) * fact(n-k) )

def numerator(i,j):
    """
    Naive way to calculate numerator
    """
    if i == j == 0:
        return 0
    elif i == 0 or j == 0:
        return 3**(max(i,j)-1)
    else:
        return numerator(i-1,j) + numerator(i,j-1) + 3*numerator(i-1,j-1)

def denominator(i,j):
    return 3**(i+j-1)

def A081578(n,k):
    """
    http://oeis.org/A081578
    """
    total = 0
    for j in range(n-k+1):
        total += binomial(k, j) * binomial(n-k, j) * 4**(j)
    return int(total)

def diff(i,j):
    """
    Difference between the numerator and the denominator.
    Answer will then be 1 - diff/denom.
    """
    if i == j == 0:
        return 1/3
    elif i==0 or j==0:
        return 0
    else:
        return A081578(j+i-2,i-1)

def answer(i,j):
    return 1 - diff(i,j) / denominator(i,j)

# And a little bit at the end to demonstrate it works.
N, M = 10,10

for i in range(N):
    row = "%10.5f"*M % tuple([numerator(i,j)/denominator(i,j) for j in range(M)])
    print row
print ""
for i in range(N):
    row = "%10.5f"*M % tuple([answer(i,j) for j in range(M)])
    print row
So, for a closed form:
f(n,n) = 1 - (1 / 3^(2n-1)) * sum{j=0..n-1, C(n-1,j)^2 * 4^j}
where the C(n-1,j) are just binomial coefficients.
Here's the result (the printed grids are omitted here).
One final addition: if you are looking to do this for large numbers, then you're going to need to compute the binomial coefficients a different way, as you'll overflow the integers. Your answers are all floating point though, and since you're apparently interested in large g(n) = f(n,n), I guess you could use Stirling's approximation or something.
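If you do want a Java version after all, the two-row recurrence from the Haskell snippet above transcribes fairly directly; here is a hypothetical sketch:

// One step of the A081578 triangle: given the row before last (us) and
// the last row (vs), the next row is 3*us[k-1] + vs[k-1] + vs[k], with
// out-of-range terms taken as 0. Start from us = {1}, vs = {1, 1}.
static long[] nextRow(long[] us, long[] vs) {
    long[] next = new long[vs.length + 1];
    for (int k = 0; k < next.length; k++) {
        long diag  = (k >= 1 && k - 1 < us.length) ? 3 * us[k - 1] : 0;
        long left  = (k >= 1) ? vs[k - 1] : 0;
        long right = (k < vs.length) ? vs[k] : 0;
        next[k] = diag + left + right;
    }
    return next;
}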
Well for starters here are some things to keep in mind:
This condition can only occur once, yet you test it every time through every loop.
if (x == 0 && y == 0) {
    matrix[x][y] = 0;
}
You should instead set matrix[0][0] = 0; right before you enter your first loop, and start x at 1. Since you know x will never be 0, you can remove the x == 0 part of your second condition:
for (int x = 1; x <= i; x++)
{
    for (int y = 0; y <= j; y++)
    {
        if (y == 0) {
            matrix[x][y] = 1;
        }
        else
            matrix[x][y] = (matrix[x-1][y] + matrix[x-1][y-1] + matrix[x][y-1]) / 3;
    }
}
There is no point in declaring separate row and column variables since you only use them once: double[][] matrix = new double[i+1][j+1];
This algorithm has a minimum complexity of Ω(n) because you just need to multiply the values in the first column and row of the matrix with some factors and then add them up. The factors stem from unwinding the recursion n times.
However, you therefore need to do the unwinding of the recursion, which itself has a complexity of O(n^2). But by balancing unwinding against evaluation of the recursion, you should be able to reduce the complexity to O(n^x) where 1 <= x <= 2. This is somewhat similar to matrix-matrix multiplication, where the naive algorithm has a complexity of O(n^3) but Strassen's algorithm, for example, is O(n^2.807).
Another point is the fact that the original formula uses a factor of 1/3. Since this is not exactly representable in fixed point or IEEE 754 floating point, the error grows as the recursion is evaluated step by step. Unwinding the recursion could therefore give you higher accuracy as a nice side effect.
For example, when you unwind the recursion sqrt(n) times, you have complexity O((sqrt(n))^2 + (n/sqrt(n))^2). The first part is for the unwinding and the second part is for evaluating a new matrix of size n/sqrt(n). That new complexity actually simplifies to O(n).
To describe time complexity we usually use a big O notation. It is important to remember that it only describes the growth given the input. O(n) is linear time complexity, but it doesn't say how quickly (or slowly) the time grows when we increase input. For example:
n=3 -> 30 seconds
n=4 -> 40 seconds
n=5 -> 50 seconds
This is O(n), we can clearly see that every increase of n increases the time by 10 seconds.
n=3 -> 60 seconds
n=4 -> 80 seconds
n=5 -> 100 seconds
This is also O(n): even though for every n we need twice as much time, and the increase is 20 seconds for each step of n, the time complexity still grows linearly.
So if you have O(n*n) time complexity and you will half the number of operations you perform, you will get O(0.5*n*n) which is equal to O(n*n) - i.e. your time complexity won't change.
This is theory, in practice the number of operations sometimes makes a difference. Because you have a grid n by n, you need to fill n*n cells, so the best time complexity you can achieve is O(n*n), but there are a few optimizations you can do:
Cells on the edges of the grid could be filled in separate loops. Currently in majority of the cases you have two unnecessary conditions for i and j equal to 0.
Your grid has a line of symmetry, so you could compute only half of it and then copy the results onto the other half: for every i and j, grid[i][j] = grid[j][i].
On a final note, the clarity and readability of the code is much more important than performance; if you can read and understand the code, you can change it, but if the code is so ugly that you cannot understand it, you cannot optimize it. That's why I would do only the first optimization (it also increases readability), but not the second one, which would make the code much harder to understand.
As a rule of thumb, don't optimize the code, unless the performance is really causing problems. As William Wulf said:
More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity.
EDIT:
I think it may be possible to implement this function with O(1) complexity. Although it gives no benefits when you need to fill entire grid, with O(1) time complexity you can instantly get any value without having a grid at all.
A few observations:
denominator is equal to 3 ^ (i + j - 1)
if i = 2 or j = 2, numerator is one less than denominator
EDIT 2:
The numerator can be expressed with the following function:
public static int n(int i, int j) {
    if (i == 1 || j == 1) {
        return 1;
    } else {
        return 3 * n(i - 1, j - 1) + n(i - 1, j) + n(i, j - 1);
    }
}
Very similar to original problem, but no division and all numbers are integers.
If the question is about how to output all values of the function for 0 <= i < N, 0 <= j < N, here is a solution in time O(N²) and space O(N). The time behavior is optimal.
Use a temporary array T of N numbers, and set it to all ones except for the first element.
Then, row by row:
- use a temporary element TT and set it to 1,
- then, column by column, assign simultaneously T[I-1], TT = TT, (TT + T[I-1] + T[I])/3.
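In Java, that scheme might look like this (a sketch; here T has n+1 entries so indices run 0..n, and T[0] starts at 0 for f(0,0)):

// O(n) space: t holds the previous row; tt is the value just computed
// in the current row. Returns f(n,n).
public static double g(int n) {
    double[] t = new double[n + 1];
    for (int j = 1; j <= n; j++) t[j] = 1.0; // row 0: f(0,0)=0, f(0,j)=1
    for (int i = 1; i <= n; i++) {
        double tt = 1.0;                     // f(i,0) = 1
        for (int k = 1; k <= n; k++) {
            double next = (tt + t[k - 1] + t[k]) / 3.0;
            t[k - 1] = tt;                   // previous row's entry no longer needed
            tt = next;
        }
        t[n] = tt;
    }
    return t[n];
}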
Thanks to will's (first) answer, I had this idea:
Consider that any positive contribution to the solution comes only from the 1's along the x and y axes. Each recursive call to f divides each component of the solution by 3, which means we can count, combinatorially, how many ways each 1 features as a component of the solution, and weight it by its "distance" (measured as how many calls of f it is from the target) as a negative power of 3.
JavaScript code:
function f(n){
    var result = 0;
    for (var d = n; d < 2*n; d++){
        var temp = 0;
        for (var NE = 0; NE < 2*n - d; NE++){
            temp += choose(n, NE);
        }
        result += choose(d - 1, d - n) * temp / Math.pow(3, d);
    }
    return 2 * result;
}

function choose(n, k){
    if (k == 0 || n == k){
        return 1;
    }
    var product = n;
    for (var i = 2; i <= k; i++){
        product *= (n + 1 - i) / i;
    }
    return product;
}
Output:
for (var i = 1; i < 8; i++){
    console.log("F(" + i + "," + i + ") = " + f(i));
}
F(1,1) = 0.6666666666666666
F(2,2) = 0.8148148148148148
F(3,3) = 0.8641975308641975
F(4,4) = 0.8879743941472337
F(5,5) = 0.9024030889600163
F(6,6) = 0.9123609205913732
F(7,7) = 0.9197747256986194

Compute the remainder of multiplying two long numbers?

I am writing code for a crypto method to compute x^d modulo n using repeated squaring:
public static long repeatedSquaring(long x, long d, long n){
    x = x % n;
    boolean dj = d % 2 == 1;
    long c = dj ? x : 1;
    d = d / 2;
    while (d > 0){
        dj = d % 2 == 1;
        x = x * x % n; // Here
        if (dj)
            c = c * x % n; // and here..
        d = d / 2;
    }
    return c;
}
This code works fine when n is small, but when n > sqrt(Long.MAX_VALUE) it gives unexpected results: with x ≈ n, we can have x*x > Long.MAX_VALUE, so the multiplication overflows and the modulo operator assigns an incorrect value to x (or c).
So, my question is: how can we compute (A * B) % N (all of type long) using only arithmetic on primitives?
I don't want to use BigInteger (BigA.multiply(BigB).remainder(BigN) or we can use BigX.modPow(BigD, BigN) directly for the big problem).
I think that plain long arithmetic should run faster than BigInteger computation, and in my problem all the temporary values fit in a long.
I also want the solution to work in the worst case: A, B, N close to Long.MAX_VALUE.
Multiplication can be done in O(log B) time, similar to exponentiation by squaring:
multiply(a, b) mod N =
    (a + multiply(2*a mod N, (b-1)/2)) mod N   if b is odd
    multiply(2*a mod N, b/2) mod N             otherwise
This works as long as N <= Long.MAX_VALUE / 2.
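An iterative Java rendering of that idea (a sketch, assuming 0 <= a, b < n and n <= Long.MAX_VALUE / 2 so the additions and doublings below cannot overflow):

// Russian-peasant multiplication mod n: add the doubled values of a
// that correspond to the set bits of b, reducing mod n at every step.
static long mulMod(long a, long b, long n) {
    long result = 0;
    a %= n;
    while (b > 0) {
        if ((b & 1) == 1) {
            result = (result + a) % n; // both operands < n, so no overflow
        }
        a = (a * 2) % n;               // 2*a < 2n <= Long.MAX_VALUE
        b >>= 1;
    }
    return result;
}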
http://en.wikipedia.org/wiki/Montgomery_reduction might be more efficient.
Really, the short answer is that you need to use BigInteger, even if you don't want to. As you've discovered, the approach you're currently taking will overflow the bounds of a long; even if you improve the algorithm, you still can't get more than 64 bits into the answer with a long.
You say you're using this for crypto; but 64-bit public key crypto is so weak that it is worse than not having it (because it gives a false sense of security). Even 1024 bits is not enough these days for public key, and 64 bits could be cracked more or less instantaneously.
Note that this is not the same as symmetric crypto, where the keys can be much smaller. (But even there, 64 bits is not enough to stop even an amateur hacker.)
See this question, where it was pointed out that 64-bit RSA can be cracked in a fraction of a second... and that was four years ago!

For loop won't add terms for some reason

So, I was putting my knowledge of for loops to the test by attempting to approximate the mathematical constant π using a series with user-defined accuracy:
public double pi(int accuracy) {
    for (int i = 1; i <= accuracy; i++) {
        rawPi += 1 / (i * i);
    }
    return Math.sqrt(rawPi * 6);
}
Now, you would think that this would get closer and closer to π as int accuracy shoots up, but it doesn't. It just stays at the square root of 6, meaning that private double rawPi only ever reaches 1 and never goes any higher, so no further terms are being added in my series (represented as a for loop), and I have absolutely no idea what the problem could be. Any ideas?
Try to change this:
rawPi += 1 / (i * i);
to
rawPi += 1.0 / (i * i);
or as commented by "Patricia Shanahan" , use this for better accuracy and to avoid integer overflow on i*i:
1/((double)i*i)
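Putting that together, the fixed method might look like this (a sketch; rawPi is made a local variable here so repeated calls start from zero):

public double pi(int accuracy) {
    double rawPi = 0.0;
    for (int i = 1; i <= accuracy; i++) {
        rawPi += 1 / ((double) i * i); // double division; also avoids int overflow in i*i
    }
    return Math.sqrt(rawPi * 6);
}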

generate ill-conditioned data for testing floating point summation

I have implemented a Kahan floating point summation algorithm in Java. I want to compare it against the built-in floating point addition in Java and infinite precision addition in Mathematica. However the data set I have is not good for testing, because the numbers are close to each other. (Condition number ~= 1)
Running Kahan on my data set gives almost the same result as the built-in +.
Could anyone suggest how to generate a large amount of data that can potentially cause serious rounding off error?
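For reference, the Kahan summation under test looks something like this (a standard formulation, not necessarily the asker's exact code):

// Compensated summation: c carries the low-order bits that the naive
// running sum would otherwise lose at each step.
public static double kahanSum(double[] values) {
    double sum = 0.0;
    double c = 0.0;
    for (double v : values) {
        double y = v - c;   // subtract the error carried from the last step
        double t = sum + y; // low-order bits of y may be lost here
        c = (t - sum) - y;  // recover exactly what was lost
        sum = t;
    }
    return sum;
}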
However the data set I have is not good for testing, because the numbers are close to each other.
It sounds like you already know what the problem is. Get to it =)
There are a few things that you will want:
- numbers of wildly different magnitudes, so that most of the precision of the smaller number is lost with naive summation;
- numbers with different signs and nearly equal (or equal) magnitudes, such that catastrophic cancellation occurs;
- numbers that have some low-order bits set, to increase the effects of rounding.
To get you started, you could try some simple three-term sums, which should show the effect clearly:
1.0 + 1.0e-20 - 1.0
Evaluated with simple summation, this will give 0.0; clearly incorrect. You might also look at sums of the form:
a0 + a1 + a2 + ... + an - b
Where b is the sum a0 + ... + an evaluated naively.
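A hypothetical Java generator along those lines (the magnitude range and names are my own choices, not from the question):

import java.util.Random;

// n terms of wildly mixed magnitudes, followed by the negation of their
// naive sum: the exact total is tiny, so naively re-summing the array
// returns mostly accumulated rounding error.
static double[] illConditioned(int n, long seed) {
    Random rng = new Random(seed);
    double[] data = new double[n + 1];
    double naiveSum = 0.0;
    for (int i = 0; i < n; i++) {
        data[i] = rng.nextDouble() * Math.pow(10, rng.nextInt(20) - 10);
        naiveSum += data[i];
    }
    data[n] = -naiveSum;
    return data;
}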
You want a heap of high precision numbers? Try this:
double[] nums = new double[SIZE];
for (int i = 0; i < SIZE; i++)
    nums[i] = Math.random();
Are we talking about number pairs or sequences?
If pairs, start with 1 for both numbers, then in every iteration divide one by 3 and multiply the other by 3. It's easy to calculate the theoretical sums of those pairs, and you'll get a whole host of rounding errors. (Some come from the division and some from the addition. If you don't want division errors, use 2 instead of 3.) A quick sketch of the idea is below.
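Here is that pair generator as a small Java illustration (my own sketch of the idea):

// Pairs (3^k, 3^-k): the true sum is known in closed form, but adding
// the tiny term to the huge one discards its low-order bits, and the
// 3^-k values themselves already carry division rounding error.
for (int k = 0; k < 30; k++) {
    double big = Math.pow(3, k);
    double small = Math.pow(3, -k);
    System.out.println(k + ": " + ((big + small) - big)); // ideally equals small
}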
By experiment, I found the following pattern:
public static void main(String[] args) {
    System.out.println(1.0 / 3 - 0.01 / 3);
    System.out.println(1.0 / 7 - 0.01 / 7);
    System.out.println(1.0 / 9 - 0.001 / 9);
}
I've subtracted close negative powers of prime numbers (which have no exact representation in binary form). However, there are cases where such an expression evaluates correctly, for example
System.out.println(1.0 / 9 - 0.01 / 9);
You can automate this approach by iterating over the power of the subtrahend and stopping when multiplying by the appropriate value doesn't yield an integer, for example:
System.out.println((1.0 / 9 - 0.001 / 9) * 9000);
if (1000 - (1.0 / 9 - 0.001 / 9) * 9000 > 1.0)
    System.out.println("Found it!");
Scalacheck might be something for you. Here is a short sample:
cat DoubleSpecification.scala
import org.scalacheck._

object DoubleSpecification extends Properties("Doubles") {
    /*
       (a/1000 + b/1000) = (a+b) / 1000
       (a/x    + b/x   ) = (a+b) / x
    */
    property("distributive") = Prop.forAll { (a: Int, b: Int, c: Int) =>
        (c == 0 || a*1.0/c + b*1.0/c == (a+b) * 1.0 / c)
    }
}

object Runner {
    def main(args: Array[String]) {
        DoubleSpecification.check
        println("...done")
    }
}
To run it, you need Scala and the Scalacheck jar. I used version 2.8 (needless to say, your classpath will vary):
scalac -cp /opt/scala/lib/scalacheck.jar:. DoubleSpecification.scala
scala -cp /opt/scala/lib/scalacheck.jar:. DoubleSpecification
! Doubles.distributive: Falsified after 6 passed tests.
> ARG_0: 28 (orig arg: 1030341)
> ARG_1: 9 (orig arg: 2147483647)
> ARG_2: 5
Scalacheck takes some random values (the orig args) and, if the test fails, tries to simplify them in order to find a minimal failing example.
