Is there any java built-in method to get parity of Integer? - java

I was working on bit manipulation lessons, and I found the GCC built-in functions such as __builtin_clz(), __builtin_popcount(), and __builtin_parity().
I found Java alternatives for counting set bits and trailing zeroes, but I didn't find any method for parity.
I did something like this.
int val = 0b100011;
System.out.println(Integer.bitCount(val)%2==0?"Even":"ODD");
Is there any efficient way to do this?

There is no built-in method to get the parity of an integer.
I'm not sure what you mean by efficient, but in terms of time complexity your solution is O(1).
Another solution with constant time complexity is something like this (similar to the technique in the book Hacker's Delight):
static boolean hasEvenParity(int x)
{
    int y = x ^ (x >> 1);
    y = y ^ (y >> 2);
    y = y ^ (y >> 4);
    y = y ^ (y >> 8);
    y = y ^ (y >> 16);
    // Rightmost bit of y holds the parity value:
    // if (y & 1) is 1 then parity is odd, else even
    return (y & 1) == 0;
}
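A quick usage check that this agrees with the bitCount approach, using the value from the question (illustrative only):

int val = 0b100011; // three bits set, so parity is odd
System.out.println(hasEvenParity(val));             // false
System.out.println(Integer.bitCount(val) % 2 == 0); // false as well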

You can use the following method to find the parity, as there is no built-in function for it. A number has odd parity if it contains an odd number of 1 bits; otherwise it has even parity.
public static String findParity(int x) {
    boolean isOdd = false;
    while (x != 0) {
        isOdd = !isOdd;
        x = x & (x - 1); // clears the lowest set bit
    }
    return isOdd ? "odd" : "even";
}

You can also do it using a BitSet. And you can also pass in Strings.
int v = 292229202;
BitSet bs = BitSet.valueOf(new long[] { v });
// number of bits set to 1
System.out.println(bs.cardinality());
String s = "To be or not to be that is the question.";
BitSet bs1 = BitSet.valueOf(s.getBytes());
System.out.println(bs1.cardinality());
String parity = (bs1.cardinality() & 1) == 1 ? "odd" : "even";
And then there's the basic technique of iteration.
int sum = 0;
for (byte b : s.getBytes()) {
    int x = b & 0xFF; // mask to an int to avoid sign extension on negative bytes
    while (x != 0) {
        sum += (x & 1);
        x >>>= 1;
    }
}
System.out.println((sum & 1) == 1 ? "odd" : "even");

Related

Check if square root is a perfect integer in Java [duplicate]

I'm looking for the fastest way to determine if a long value is a perfect square (i.e. its square root is another integer):
I've done it the easy way, by using the built-in Math.sqrt() function, but I'm wondering if there is a way to do it faster by restricting yourself to the integer-only domain.
Maintaining a lookup table is impractical (since there are about 2^31.5 integers whose square is less than 2^63).
Here is the very simple and straightforward way I'm doing it now:
public final static boolean isPerfectSquare(long n)
{
    if (n < 0)
        return false;
    long tst = (long)(Math.sqrt(n) + 0.5);
    return tst*tst == n;
}
Note: I'm using this function in many Project Euler problems. So no one else will ever have to maintain this code. And this kind of micro-optimization could actually make a difference, since part of the challenge is to do every algorithm in less than a minute, and this function will need to be called millions of times in some problems.
I've tried the different solutions to the problem:
After exhaustive testing, I found that adding 0.5 to the result of Math.sqrt() is not necessary, at least not on my machine.
The fast inverse square root was faster, but it gave incorrect results for n >= 410881. However, as suggested by BobbyShaftoe, we can use the FISR hack for n < 410881.
Newton's method was a good bit slower than Math.sqrt(). This is probably because Math.sqrt() uses something similar to Newton's Method, but implemented in the hardware so it's much faster than in Java. Also, Newton's Method still required use of doubles.
A modified Newton's method, which used a few tricks so that only integer math was involved, required some hacks to avoid overflow (I want this function to work with all positive 64-bit signed integers), and it was still slower than Math.sqrt().
Binary chop was even slower. This makes sense because the binary chop will on average require 16 passes to find the square root of a 64-bit number.
According to John's tests, using or statements is faster in C++ than using a switch, but in Java and C# there appears to be no difference between or and switch.
I also tried making a lookup table (as a private static array of 64 boolean values). Then instead of either switch or or statement, I would just say if(lookup[(int)(n&0x3F)]) { test } else return false;. To my surprise, this was (just slightly) slower. This is because array bounds are checked in Java.
I figured out a method that works ~35% faster than your 6bits+Carmack+sqrt code, at least with my CPU (x86) and programming language (C/C++). Your results may vary, especially because I don't know how the Java factor will play out.
My approach is threefold:
First, filter out obvious answers. This includes negative numbers and looking at the last 4 bits. (I found looking at the last six didn't help.) I also answer yes for 0. (In reading the code below, note that my input is int64 x.)
if( x < 0 || (x&2) || ((x & 7) == 5) || ((x & 11) == 8) )
    return false;
if( x == 0 )
    return true;
Next, check if it's a square modulo 255 = 3 * 5 * 17. Because that's a product of three distinct primes, only about 1/8 of the residues mod 255 are squares. However, in my experience, calling the modulo operator (%) costs more than the benefit one gets, so I use bit tricks involving 255 = 2^8-1 to compute the residue. (For better or worse, I am not using the trick of reading individual bytes out of a word, only bitwise-and and shifts.)
int64 y = x;
y = (y & 4294967295LL) + (y >> 32);
y = (y & 65535) + (y >> 16);
y = (y & 255) + ((y >> 8) & 255) + (y >> 16);
// At this point, y is between 0 and 511. More code can reduce it farther.
To actually check if the residue is a square, I look up the answer in a precomputed table.
if( bad255[y] )
    return false;
// However, I just use a table of size 512
Finally, try to compute the square root using a method similar to Hensel's lemma. (I don't think it's applicable directly, but it works with some modifications.) Before doing that, I divide out all powers of 2 with a binary search:
if((x & 4294967295LL) == 0)
    x >>= 32;
if((x & 65535) == 0)
    x >>= 16;
if((x & 255) == 0)
    x >>= 8;
if((x & 15) == 0)
    x >>= 4;
if((x & 3) == 0)
    x >>= 2;
At this point, for our number to be a square, it must be 1 mod 8.
if((x & 7) != 1)
    return false;
The basic structure of Hensel's lemma is the following. (Note: untested code; if it doesn't work, try t=2 or 8.)
int64 t = 4, r = 1;
t <<= 1; r += ((x - r * r) & t) >> 1;
t <<= 1; r += ((x - r * r) & t) >> 1;
t <<= 1; r += ((x - r * r) & t) >> 1;
// Repeat until t is 2^33 or so. Use a loop if you want.
The idea is that at each iteration, you add one bit onto r, the "current" square root of x; each square root is accurate modulo a larger and larger power of 2, namely t/2. At the end, r and t/2-r will be square roots of x modulo t/2. (Note that if r is a square root of x, then so is -r. This is true even modulo numbers, but beware, modulo some numbers, things can have even more than 2 square roots; notably, this includes powers of 2.) Because our actual square root is less than 2^32, at that point we can actually just check if r or t/2-r are real square roots. In my actual code, I use the following modified loop:
int64 r, t, z;
r = start[(x >> 3) & 1023];
do {
    z = x - r * r;
    if( z == 0 )
        return true;
    if( z < 0 )
        return false;
    t = z & (-z);
    r += (z & t) >> 1;
    if( r > (t >> 1) )
        r = t - r;
} while( t <= (1LL << 33) );
The speedup here is obtained in three ways: precomputed start value (equivalent to ~10 iterations of the loop), earlier exit of the loop, and skipping some t values. For the last part, I look at z = x - r * r, and set t to be the largest power of 2 dividing z with a bit trick. This allows me to skip t values that wouldn't have affected the value of r anyway. The precomputed start value in my case picks out the "smallest positive" square root modulo 8192.
Even if this code doesn't work faster for you, I hope you enjoy some of the ideas it contains. Complete, tested code follows, including the precomputed tables.
typedef signed long long int int64;
int start[1024] =
{1,3,1769,5,1937,1741,7,1451,479,157,9,91,945,659,1817,11,
1983,707,1321,1211,1071,13,1479,405,415,1501,1609,741,15,339,1703,203,
129,1411,873,1669,17,1715,1145,1835,351,1251,887,1573,975,19,1127,395,
1855,1981,425,453,1105,653,327,21,287,93,713,1691,1935,301,551,587,
257,1277,23,763,1903,1075,1799,1877,223,1437,1783,859,1201,621,25,779,
1727,573,471,1979,815,1293,825,363,159,1315,183,27,241,941,601,971,
385,131,919,901,273,435,647,1493,95,29,1417,805,719,1261,1177,1163,
1599,835,1367,315,1361,1933,1977,747,31,1373,1079,1637,1679,1581,1753,1355,
513,1539,1815,1531,1647,205,505,1109,33,1379,521,1627,1457,1901,1767,1547,
1471,1853,1833,1349,559,1523,967,1131,97,35,1975,795,497,1875,1191,1739,
641,1149,1385,133,529,845,1657,725,161,1309,375,37,463,1555,615,1931,
1343,445,937,1083,1617,883,185,1515,225,1443,1225,869,1423,1235,39,1973,
769,259,489,1797,1391,1485,1287,341,289,99,1271,1701,1713,915,537,1781,
1215,963,41,581,303,243,1337,1899,353,1245,329,1563,753,595,1113,1589,
897,1667,407,635,785,1971,135,43,417,1507,1929,731,207,275,1689,1397,
1087,1725,855,1851,1873,397,1607,1813,481,163,567,101,1167,45,1831,1205,
1025,1021,1303,1029,1135,1331,1017,427,545,1181,1033,933,1969,365,1255,1013,
959,317,1751,187,47,1037,455,1429,609,1571,1463,1765,1009,685,679,821,
1153,387,1897,1403,1041,691,1927,811,673,227,137,1499,49,1005,103,629,
831,1091,1449,1477,1967,1677,697,1045,737,1117,1737,667,911,1325,473,437,
1281,1795,1001,261,879,51,775,1195,801,1635,759,165,1871,1645,1049,245,
703,1597,553,955,209,1779,1849,661,865,291,841,997,1265,1965,1625,53,
1409,893,105,1925,1297,589,377,1579,929,1053,1655,1829,305,1811,1895,139,
575,189,343,709,1711,1139,1095,277,993,1699,55,1435,655,1491,1319,331,
1537,515,791,507,623,1229,1529,1963,1057,355,1545,603,1615,1171,743,523,
447,1219,1239,1723,465,499,57,107,1121,989,951,229,1521,851,167,715,
1665,1923,1687,1157,1553,1869,1415,1749,1185,1763,649,1061,561,531,409,907,
319,1469,1961,59,1455,141,1209,491,1249,419,1847,1893,399,211,985,1099,
1793,765,1513,1275,367,1587,263,1365,1313,925,247,1371,1359,109,1561,1291,
191,61,1065,1605,721,781,1735,875,1377,1827,1353,539,1777,429,1959,1483,
1921,643,617,389,1809,947,889,981,1441,483,1143,293,817,749,1383,1675,
63,1347,169,827,1199,1421,583,1259,1505,861,457,1125,143,1069,807,1867,
2047,2045,279,2043,111,307,2041,597,1569,1891,2039,1957,1103,1389,231,2037,
65,1341,727,837,977,2035,569,1643,1633,547,439,1307,2033,1709,345,1845,
1919,637,1175,379,2031,333,903,213,1697,797,1161,475,1073,2029,921,1653,
193,67,1623,1595,943,1395,1721,2027,1761,1955,1335,357,113,1747,1497,1461,
1791,771,2025,1285,145,973,249,171,1825,611,265,1189,847,1427,2023,1269,
321,1475,1577,69,1233,755,1223,1685,1889,733,1865,2021,1807,1107,1447,1077,
1663,1917,1129,1147,1775,1613,1401,555,1953,2019,631,1243,1329,787,871,885,
449,1213,681,1733,687,115,71,1301,2017,675,969,411,369,467,295,693,
1535,509,233,517,401,1843,1543,939,2015,669,1527,421,591,147,281,501,
577,195,215,699,1489,525,1081,917,1951,2013,73,1253,1551,173,857,309,
1407,899,663,1915,1519,1203,391,1323,1887,739,1673,2011,1585,493,1433,117,
705,1603,1111,965,431,1165,1863,533,1823,605,823,1179,625,813,2009,75,
1279,1789,1559,251,657,563,761,1707,1759,1949,777,347,335,1133,1511,267,
833,1085,2007,1467,1745,1805,711,149,1695,803,1719,485,1295,1453,935,459,
1151,381,1641,1413,1263,77,1913,2005,1631,541,119,1317,1841,1773,359,651,
961,323,1193,197,175,1651,441,235,1567,1885,1481,1947,881,2003,217,843,
1023,1027,745,1019,913,717,1031,1621,1503,867,1015,1115,79,1683,793,1035,
1089,1731,297,1861,2001,1011,1593,619,1439,477,585,283,1039,1363,1369,1227,
895,1661,151,645,1007,1357,121,1237,1375,1821,1911,549,1999,1043,1945,1419,
1217,957,599,571,81,371,1351,1003,1311,931,311,1381,1137,723,1575,1611,
767,253,1047,1787,1169,1997,1273,853,1247,413,1289,1883,177,403,999,1803,
1345,451,1495,1093,1839,269,199,1387,1183,1757,1207,1051,783,83,423,1995,
639,1155,1943,123,751,1459,1671,469,1119,995,393,219,1743,237,153,1909,
1473,1859,1705,1339,337,909,953,1771,1055,349,1993,613,1393,557,729,1717,
511,1533,1257,1541,1425,819,519,85,991,1693,503,1445,433,877,1305,1525,
1601,829,809,325,1583,1549,1991,1941,927,1059,1097,1819,527,1197,1881,1333,
383,125,361,891,495,179,633,299,863,285,1399,987,1487,1517,1639,1141,
1729,579,87,1989,593,1907,839,1557,799,1629,201,155,1649,1837,1063,949,
255,1283,535,773,1681,461,1785,683,735,1123,1801,677,689,1939,487,757,
1857,1987,983,443,1327,1267,313,1173,671,221,695,1509,271,1619,89,565,
127,1405,1431,1659,239,1101,1159,1067,607,1565,905,1755,1231,1299,665,373,
1985,701,1879,1221,849,627,1465,789,543,1187,1591,923,1905,979,1241,181};
bool bad255[512] =
{0,0,1,1,0,1,1,1,1,0,1,1,1,1,1,0,0,1,1,0,1,0,1,1,1,0,1,1,1,1,0,1,
1,1,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,1,0,1,1,1,1,0,1,1,1,
0,1,0,1,1,0,0,1,1,1,1,1,0,1,1,1,1,0,1,1,0,0,1,1,1,1,1,1,1,1,0,1,
1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,1,1,1,0,0,1,1,1,1,1,1,
1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,1,0,1,1,0,1,1,1,1,1,
1,1,1,1,1,1,0,1,1,0,1,0,1,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,1,1,
1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,
1,0,1,1,1,0,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,1,
0,0,1,1,0,1,1,1,1,0,1,1,1,1,1,0,0,1,1,0,1,0,1,1,1,0,1,1,1,1,0,1,
1,1,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,1,1,0,1,1,1,1,0,1,1,1,
0,1,0,1,1,0,0,1,1,1,1,1,0,1,1,1,1,0,1,1,0,0,1,1,1,1,1,1,1,1,0,1,
1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,0,1,1,1,0,1,1,1,1,0,0,1,1,1,1,1,1,
1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,1,0,1,1,0,1,1,1,1,1,
1,1,1,1,1,1,0,1,1,0,1,0,1,1,0,1,1,1,1,1,1,1,1,1,1,1,0,1,1,0,1,1,
1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,
1,0,1,1,1,0,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,0,1,1,1,1,1,1,1,1,
0,0};
inline bool square( int64 x ) {
    // Quickfail
    if( x < 0 || (x&2) || ((x & 7) == 5) || ((x & 11) == 8) )
        return false;
    if( x == 0 )
        return true;
    // Check mod 255 = 3 * 5 * 17, for fun
    int64 y = x;
    y = (y & 4294967295LL) + (y >> 32);
    y = (y & 65535) + (y >> 16);
    y = (y & 255) + ((y >> 8) & 255) + (y >> 16);
    if( bad255[y] )
        return false;
    // Divide out powers of 4 using binary search
    if((x & 4294967295LL) == 0)
        x >>= 32;
    if((x & 65535) == 0)
        x >>= 16;
    if((x & 255) == 0)
        x >>= 8;
    if((x & 15) == 0)
        x >>= 4;
    if((x & 3) == 0)
        x >>= 2;
    if((x & 7) != 1)
        return false;
    // Compute sqrt using something like Hensel's lemma
    int64 r, t, z;
    r = start[(x >> 3) & 1023];
    do {
        z = x - r * r;
        if( z == 0 )
            return true;
        if( z < 0 )
            return false;
        t = z & (-z);
        r += (z & t) >> 1;
        if( r > (t >> 1) )
            r = t - r;
    } while( t <= (1LL << 33) );
    return false;
}
I'm pretty late to the party, but I hope to provide a better answer; shorter and (assuming my benchmark is correct) also much faster.
long goodMask; // 0xC840C04048404040 computed below
{
    for (int i=0; i<64; ++i) goodMask |= Long.MIN_VALUE >>> (i*i);
}
public boolean isSquare(long x) {
    // This tests if the 6 least significant bits are right.
    // Moving the bit to be tested into the highest position saves us masking.
    if (goodMask << x >= 0) return false;
    final int numberOfTrailingZeros = Long.numberOfTrailingZeros(x);
    // Each square ends with an even number of zeros.
    if ((numberOfTrailingZeros & 1) != 0) return false;
    x >>= numberOfTrailingZeros;
    // Now x is either 0 or odd.
    // In binary, each odd square ends with 001.
    // Postpone the sign test until now; handle zero in the branch.
    if ((x & 7) != 1 | x <= 0) return x == 0;
    // Do it in the classical way.
    // The correctness is not trivial, as the conversion from long to double is lossy!
    final long tst = (long) Math.sqrt(x);
    return tst * tst == x;
}
The first test catches most non-squares quickly. It uses a 64-item table packed in a long, so there's no array access cost (indirection and bounds checks). For a uniformly random long, there's an 81.25% probability of ending here.
The second test catches all numbers having an odd number of twos in their factorization. The method Long.numberOfTrailingZeros is very fast as it gets JIT-ed into a single x86 instruction.
After dropping the trailing zeros, the third test handles numbers ending with 011, 101, or 111 in binary, which are not perfect squares. It also takes care of negative numbers and handles 0.
The final test falls back to double arithmetic. As double has only a 53-bit mantissa, the conversion from long to double involves rounding for big values. Nonetheless, the test is correct (unless the proof is wrong).
Trying to incorporate the mod255 idea wasn't successful.
You'll have to do some benchmarking. The best algorithm will depend on the distribution of your inputs.
Your algorithm may be nearly optimal, but you might want to do a quick check to rule out some possibilities before calling your square root routine. For example, look at the last digit of your number in hex by doing a bit-wise "and". Perfect squares can only end in 0, 1, 4, or 9 in base 16, so for 75% of your inputs (assuming they are uniformly distributed) you can avoid a call to the square root in exchange for some very fast bit twiddling.
Kip benchmarked the following code implementing the hex trick. When testing numbers 1 through 100,000,000, this code ran twice as fast as the original.
public final static boolean isPerfectSquare(long n)
{
    if (n < 0)
        return false;
    switch((int)(n & 0xF))
    {
    case 0: case 1: case 4: case 9:
        long tst = (long)Math.sqrt(n);
        return tst*tst == n;
    default:
        return false;
    }
}
When I tested the analogous code in C++, it actually ran slower than the original. However, when I eliminated the switch statement, the hex trick once again made the code twice as fast.
int isPerfectSquare(int n)
{
    int h = n & 0xF; // h is the last hex "digit"
    if (h > 9)
        return 0;
    // Use lazy evaluation to jump out of the if statement as soon as possible
    if (h != 2 && h != 3 && h != 5 && h != 6 && h != 7 && h != 8)
    {
        int t = (int) floor( sqrt((double) n) + 0.5 );
        return t*t == n;
    }
    return 0;
}
Eliminating the switch statement had little effect on the C# code.
I was thinking about the horrible times I've spent in my Numerical Analysis course.
And then I remembered there was this function circling around the 'net from the Quake source code:
float Q_rsqrt( float number )
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * ( long * ) &y;                       // evil floating point bit level hacking
    i  = 0x5f3759df - ( i >> 1 );               // wtf?
    y  = * ( float * ) &i;
    y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
    // y  = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed

    #ifndef Q3_VM
    #ifdef __linux__
    assert( !isnan(y) ); // bk010122 - FPE?
    #endif
    #endif
    return y;
}
This basically calculates an inverse square root, using Newton's approximation method (can't remember the exact name).
It should be usable and might even be faster; it's from one of id Software's phenomenal games!
It's written in C, but it should not be too hard to reuse the same technique in Java once you get the idea:
I originally found it at: http://www.codemaestro.com/reviews/9
Newton's method explained at wikipedia: http://en.wikipedia.org/wiki/Newton%27s_method
You can follow the link for more explanation of how it works, but if you don't care much, then this is roughly what I remember from reading the blog and from taking the Numerical Analysis course:
the * (long*) &y is basically a fast convert-to-long function so integer operations can be applied on the raw bytes.
the 0x5f3759df - (i >> 1); line is a pre-calculated seed value for the approximation function.
the * (float*) &i converts the value back to floating point.
the y = y * ( threehalfs - ( x2 * y * y ) ) line basically iterates the value over the function again.
The approximation function gives more precise values the more you iterate it over the result. In Quake's case, one iteration is "good enough", but if it isn't for you... then you can add as many iterations as you need.
This should be faster because it reduces the number of division operations done in naive square rooting down to a simple divide by 2 (actually a * 0.5F multiply operation) and replaces them with a fixed number of multiplication operations instead.
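For reference, here is what a direct port of that routine to Java could look like (a sketch; Float.floatToRawIntBits and Float.intBitsToFloat stand in for the pointer casts):

static float qRsqrt(float number) {
    float x2 = number * 0.5f;
    int i = Float.floatToRawIntBits(number); // reinterpret the float's bits as an int
    i = 0x5f3759df - (i >> 1);               // magic constant gives the initial guess
    float y = Float.intBitsToFloat(i);       // reinterpret the bits back as a float
    y = y * (1.5f - (x2 * y * y));           // one Newton-Raphson iteration
    return y;                                // approximately 1/sqrt(number)
}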
I'm not sure if it would be faster, or even accurate, but you could use John Carmack's Magical Square Root algorithm to solve the square root faster. You could probably easily test this for all possible 32 bit integers, and validate that you actually got correct results, as it's only an approximation. However, now that I think about it, using doubles is approximating also, so I'm not sure how that would come into play.
If you do a binary chop to try to find the "right" square root, you can fairly easily detect if the value you've got is close enough to tell:
(n+1)^2 = n^2 + 2n + 1
(n-1)^2 = n^2 - 2n + 1
So having calculated n^2, the options are:
n^2 = target: done, return true
n^2 + 2n + 1 > target > n^2 : you're close, but it's not perfect: return false
n^2 - 2n + 1 < target < n^2 : ditto
target < n^2 - 2n + 1 : binary chop on a lower n
target > n^2 + 2n + 1 : binary chop on a higher n
(Sorry, this uses n for your current guess, and target for the parameter. Apologies for the confusion!)
I don't know whether this will be faster or not, but it's worth a try.
EDIT: The binary chop doesn't have to take in the whole range of integers, either (2^x)^2 = 2^(2x), so once you've found the top set bit in your target (which can be done with a bit-twiddling trick; I forget exactly how) you can quickly get a range of potential answers. Mind you, a naive binary chop is still only going to take up to 31 or 32 iterations.
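To make the idea concrete, here is a minimal Java sketch of such a binary chop (my code, not the answer's; the constant 3037000499 is floor(sqrt(Long.MAX_VALUE)) and keeps n * n from overflowing):

static boolean isPerfectSquareChop(long target) {
    if (target < 0) return false;
    if (target < 2) return true;
    // Bound the root via the top set bit: if 2^b <= target < 2^(b+1),
    // the root lies in [2^(b/2), 2^(b/2+1)).
    int b = 63 - Long.numberOfLeadingZeros(target);
    long low = 1L << (b / 2);
    long high = Math.min(1L << (b / 2 + 1), 3037000499L);
    while (low <= high) {
        long n = low + (high - low) / 2;
        long sq = n * n;
        if (sq == target) return true;
        if (sq < target) {
            // n^2 < target <= n^2 + 2n means n^2 < target < (n+1)^2: close, but not perfect
            if (target - sq <= 2 * n) return false;
            low = n + 1;
        } else {
            high = n - 1;
        }
    }
    return false;
}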
I ran my own analysis of several of the algorithms in this thread and came up with some new results. You can see those old results in the edit history of this answer, but they're not accurate, as I made a mistake, and wasted time analyzing several algorithms which aren't close. However, pulling lessons from several different answers, I now have two algorithms that crush the "winner" of this thread. Here's the core thing I do differently than everyone else:
// This is faster because a number is divisible by 2^4 or more only 6% of the time
// and more than that a vanishingly small percentage.
while((x & 0x3) == 0) x >>= 2;
// This is effectively the same as the switch-case statement used in the original
// answer.
if((x & 0x7) != 1) return false;
This simple line, which most of the time adds one or two very fast instructions, greatly simplifies the switch-case statement into a single if statement. However, it can add to the runtime if many of the tested numbers have significant power-of-two factors.
The algorithms below are as follows:
Internet - Kip's posted answer
Durron - My modified answer using the one-pass answer as a base
DurronTwo - My modified answer using the two-pass answer (by @JohnnyHeggheim), with some other slight modifications.
Here is a sample runtime if the numbers are generated using Math.abs(java.util.Random.nextLong())
0% Scenario{vm=java, trial=0, benchmark=Internet} 39673.40 ns; σ=378.78 ns @ 3 trials
33% Scenario{vm=java, trial=0, benchmark=Durron} 37785.75 ns; σ=478.86 ns @ 10 trials
67% Scenario{vm=java, trial=0, benchmark=DurronTwo} 35978.10 ns; σ=734.10 ns @ 10 trials
benchmark us linear runtime
Internet 39.7 ==============================
Durron 37.8 ============================
DurronTwo 36.0 ===========================
vm: java
trial: 0
And here is a sample runtime if it's run on the first million longs only:
0% Scenario{vm=java, trial=0, benchmark=Internet} 2933380.84 ns; σ=56939.84 ns @ 10 trials
33% Scenario{vm=java, trial=0, benchmark=Durron} 2243266.81 ns; σ=50537.62 ns @ 10 trials
67% Scenario{vm=java, trial=0, benchmark=DurronTwo} 3159227.68 ns; σ=10766.22 ns @ 3 trials
benchmark ms linear runtime
Internet 2.93 ===========================
Durron 2.24 =====================
DurronTwo 3.16 ==============================
vm: java
trial: 0
As you can see, DurronTwo does better for large inputs, because it gets to use the magic trick very very often, but gets clobbered compared to the first algorithm and Math.sqrt because the numbers are so much smaller. Meanwhile, the simpler Durron is a huge winner because it never has to divide by 4 many many times in the first million numbers.
Here's Durron:
public final static boolean isPerfectSquareDurron(long n) {
    if(n < 0) return false;
    if(n == 0) return true;
    long x = n;
    // This is faster because a number is divisible by 16 only 6% of the time
    // and more than that a vanishingly small percentage.
    while((x & 0x3) == 0) x >>= 2;
    // This is effectively the same as the switch-case statement used in the original
    // answer.
    if((x & 0x7) == 1) {
        long sqrt;
        if(x < 410881L)
        {
            int i;
            float x2, y;
            x2 = x * 0.5F;
            y = x;
            i = Float.floatToRawIntBits(y);
            i = 0x5f3759df - ( i >> 1 );
            y = Float.intBitsToFloat(i);
            y = y * ( 1.5F - ( x2 * y * y ) );
            sqrt = (long)(1.0F/y);
        } else {
            sqrt = (long) Math.sqrt(x);
        }
        return sqrt*sqrt == x;
    }
    return false;
}
And DurronTwo
public final static boolean isPerfectSquareDurronTwo(long n) {
    if(n < 0) return false;
    // Needed to prevent infinite loop
    if(n == 0) return true;
    long x = n;
    while((x & 0x3) == 0) x >>= 2;
    if((x & 0x7) == 1) {
        long sqrt;
        if (x < 41529141369L) {
            int i;
            float x2, y;
            x2 = x * 0.5F;
            y = x;
            i = Float.floatToRawIntBits(y);
            // using the magic number from
            // http://www.lomont.org/Math/Papers/2003/InvSqrt.pdf
            // since it is more accurate
            i = 0x5f375a86 - (i >> 1);
            y = Float.intBitsToFloat(i);
            y = y * (1.5F - (x2 * y * y));
            y = y * (1.5F - (x2 * y * y)); // Newton iteration, more accurate
            sqrt = (long) ((1.0F/y) + 0.2);
        } else {
            // Carmack hack gives incorrect answer for n >= 41529141369.
            sqrt = (long) Math.sqrt(x);
        }
        return sqrt*sqrt == x;
    }
    return false;
}
And my benchmark harness: (Requires Google caliper 0.1-rc5)
public class SquareRootBenchmark {

    public static class Benchmark1 extends SimpleBenchmark {
        private static final int ARRAY_SIZE = 10000;
        long[] trials = new long[ARRAY_SIZE];

        @Override
        protected void setUp() throws Exception {
            Random r = new Random();
            for (int i = 0; i < ARRAY_SIZE; i++) {
                trials[i] = Math.abs(r.nextLong());
            }
        }

        public int timeInternet(int reps) {
            int trues = 0;
            for(int i = 0; i < reps; i++) {
                for(int j = 0; j < ARRAY_SIZE; j++) {
                    if(SquareRootAlgs.isPerfectSquareInternet(trials[j])) trues++;
                }
            }
            return trues;
        }

        public int timeDurron(int reps) {
            int trues = 0;
            for(int i = 0; i < reps; i++) {
                for(int j = 0; j < ARRAY_SIZE; j++) {
                    if(SquareRootAlgs.isPerfectSquareDurron(trials[j])) trues++;
                }
            }
            return trues;
        }

        public int timeDurronTwo(int reps) {
            int trues = 0;
            for(int i = 0; i < reps; i++) {
                for(int j = 0; j < ARRAY_SIZE; j++) {
                    if(SquareRootAlgs.isPerfectSquareDurronTwo(trials[j])) trues++;
                }
            }
            return trues;
        }
    }

    public static void main(String... args) {
        Runner.main(Benchmark1.class, args);
    }
}
UPDATE: I've made a new algorithm that is faster in some scenarios and slower in others; I've gotten different benchmarks based on different inputs. If we calculate modulo 0xFFFFFF = 3 x 3 x 5 x 7 x 13 x 17 x 241, we can eliminate 97.82% of numbers that cannot be squares. This can be (sort of) done in one line, with 5 bitwise operations:
if (!goodLookupSquares[(int) ((n & 0xFFFFFFl) + ((n >> 24) & 0xFFFFFFl) + (n >> 48))]) return false;
The resulting index is either 1) the residue, 2) the residue + 0xFFFFFF, or 3) the residue + 0x1FFFFFE. Of course, we need a lookup table for residues modulo 0xFFFFFF, which is about a 3 MB file (in this case stored as ASCII text decimal numbers - not optimal, but clearly improvable with a ByteBuffer and so forth; since that is precalculation it doesn't matter so much). You can find the file here (or generate it yourself):
public final static boolean isPerfectSquareDurronThree(long n) {
    if(n < 0) return false;
    if(n == 0) return true;
    long x = n;
    while((x & 0x3) == 0) x >>= 2;
    if((x & 0x7) == 1) {
        if (!goodLookupSquares[(int) ((n & 0xFFFFFFl) + ((n >> 24) & 0xFFFFFFl) + (n >> 48))]) return false;
        long sqrt;
        if(x < 410881L)
        {
            int i;
            float x2, y;
            x2 = x * 0.5F;
            y = x;
            i = Float.floatToRawIntBits(y);
            i = 0x5f3759df - ( i >> 1 );
            y = Float.intBitsToFloat(i);
            y = y * ( 1.5F - ( x2 * y * y ) );
            sqrt = (long)(1.0F/y);
        } else {
            sqrt = (long) Math.sqrt(x);
        }
        return sqrt*sqrt == x;
    }
    return false;
}
I load it into a boolean array like this:
private static boolean[] goodLookupSquares = null;

public static void initGoodLookupSquares() throws Exception {
    Scanner s = new Scanner(new File("24residues_squares.txt"));
    // 3 * 0xFFFFFF entries, so that residue + 0x1FFFFFE stays in bounds
    goodLookupSquares = new boolean[0x2FFFFFD];
    while(s.hasNextLine()) {
        int residue = Integer.valueOf(s.nextLine());
        goodLookupSquares[residue] = true;
        goodLookupSquares[residue + 0xFFFFFF] = true;
        goodLookupSquares[residue + 0x1FFFFFE] = true;
    }
    s.close();
}
Example runtime. It beat Durron (version one) in every trial I ran.
0% Scenario{vm=java, trial=0, benchmark=Internet} 40665.77 ns; σ=566.71 ns @ 10 trials
33% Scenario{vm=java, trial=0, benchmark=Durron} 38397.60 ns; σ=784.30 ns @ 10 trials
67% Scenario{vm=java, trial=0, benchmark=DurronThree} 36171.46 ns; σ=693.02 ns @ 10 trials
benchmark us linear runtime
Internet 40.7 ==============================
Durron 38.4 ============================
DurronThree 36.2 ==========================
vm: java
trial: 0
It should be much faster to use Newton's method to calculate the Integer Square Root, then square this number and check, as you do in your current solution. Newton's method is the basis for the Carmack solution mentioned in some other answers. You should be able to get a faster answer since you're only interested in the integer part of the root, allowing you to stop the approximation algorithm sooner.
Another optimization that you can try: if the digital root of a number isn't 1, 4, 7, or 9, the number is not a perfect square. This can be used as a quick way to eliminate 60% of your inputs before applying the slower square root algorithm.
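A sketch of that filter (my code; for positive n the digital root can be computed without a digit loop as 1 + (n - 1) % 9):

static boolean digitalRootMayBeSquare(long n) {
    // Perfect squares have a digital root of 1, 4, 7, or 9.
    int dr = (int) (1 + (n - 1) % 9); // assumes n > 0
    return dr == 1 || dr == 4 || dr == 7 || dr == 9;
}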
"I want this function to work with all positive 64-bit signed integers"
Math.sqrt() works with a double as input parameter, so you won't get accurate results for integers bigger than 2^53.
An integer problem deserves an integer solution. Thus
Do binary search on the (non-negative) integers to find the greatest integer t such that t**2 <= n. Then test whether t**2 == n exactly. This takes time O(log n).
If you don't know how to binary search the positive integers because the set is unbounded, it's easy. You start by computing your increasing function f (above, f(t) = t**2 - n) on powers of two. When you see it turn positive, you've found an upper bound. Then you can do standard binary search.
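A minimal sketch of that scheme (my code; the cap at 3037000499 = floor(sqrt(Long.MAX_VALUE)) keeps t * t from overflowing during the doubling phase):

static boolean isPerfectSquareBinarySearch(long n) {
    if (n < 0) return false;
    if (n < 2) return true;
    // Exponential phase: double t until t*t >= n, giving an upper bound.
    long high = 1;
    while (high < 3037000499L && high * high < n) high *= 2;
    high = Math.min(high, 3037000499L);
    long low = high / 2; // the previous power of two was still below the root
    // Standard binary search for t with t*t == n.
    while (low <= high) {
        long t = low + (high - low) / 2;
        long sq = t * t;
        if (sq == n) return true;
        if (sq < n) low = t + 1; else high = t - 1;
    }
    return false;
}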
Just for the record, another approach is to use the prime decomposition. If every prime in the decomposition appears to an even power, then the number is a perfect square. So what you want is to see if a number can be decomposed as a product of squares of prime numbers. Of course, you don't need to obtain such a decomposition, just to see if it exists.
First build a table of squares of prime numbers which are lower than 2^32. This is far smaller than a table of all integers up to this limit.
A solution would then be like this:
boolean isPerfectSquare(long number)
{
    if (number < 0) return false;
    if (number < 2) return true;

    for (int i = 0; ; i++)
    {
        long square = squareTable[i];
        if (square > number) return false;
        while (number % square == 0)
        {
            number /= square;
        }
        if (number == 1) return true;
    }
}
I guess it's a bit cryptic. What it does is check at every step whether the square of a prime number divides the input number. If it does, it divides the number by that square as long as possible, to remove the square from the prime decomposition.
If by this process we come down to 1, then the input number was a product of squares of prime numbers. If the square ever becomes larger than the number itself, then there is no way this square, or any larger square, can divide it, so the number cannot be a product of squares of primes.
Given that sqrt is done in hardware nowadays and prime numbers need to be computed here, I guess this solution is way slower. But it should give better results than the sqrt solution, which won't work above 2^54, as mrzl says in his answer.
It's been pointed out that the last d digits of a perfect square can only take on certain values. The last d digits (in base b) of a number n are the same as the remainder when n is divided by b^d, i.e. in C notation n % pow(b, d).
This can be generalized to any modulus m, i.e. n % m can be used to rule out some percentage of numbers from being perfect squares. The modulus you are currently using is 64, which allows 12, i.e. 19%, of remainders as possible squares. With a little coding I found the modulus 110880, which allows only 2016, i.e. 1.8%, of remainders as possible squares. So depending on the cost of a modulus operation (i.e. division) and a table lookup versus a square root on your machine, using this modulus might be faster.
By the way if Java has a way to store a packed array of bits for the lookup table, don't use it. 110880 32-bit words is not much RAM these days and fetching a machine word is going to be faster than fetching a single bit.
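As a sketch of how such a table could be built and used (my code; the answer itself gives none), mark which residues modulo m are actually attained by squares, then reject any n whose residue is never attained:

static final int MOD = 110880;
static final boolean[] squareResidue = new boolean[MOD];
static {
    for (long i = 0; i < MOD; i++) {
        squareResidue[(int) (i * i % MOD)] = true; // residues that squares can take
    }
}

static boolean mayBeSquare(long n) { // assumes n >= 0
    return squareResidue[(int) (n % MOD)]; // false means definitely not a square
}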
The following simplification of maaartinus's solution appears to shave a few percentage points off the runtime, but I'm not good enough at benchmarking to produce a benchmark I can trust:
long goodMask; // 0xC840C04048404040 computed below
{
    for (int i=0; i<64; ++i) goodMask |= Long.MIN_VALUE >>> (i*i);
}

public boolean isSquare(long x) {
    // This tests if the 6 least significant bits are right.
    // Moving the bit to be tested into the highest position saves us masking.
    if (goodMask << x >= 0) return false;
    // Remove an even number of trailing zeros, leaving at most one.
    x >>= (Long.numberOfTrailingZeros(x) & -2);
    // Repeat the test on the 6 least significant remaining bits.
    if (goodMask << x >= 0 | x <= 0) return x == 0;
    // Do it in the classical way.
    // The correctness is not trivial, as the conversion from long to double is lossy!
    final long tst = (long) Math.sqrt(x);
    return tst * tst == x;
}
It would be worth checking how omitting the first test,
if (goodMask << x >= 0) return false;
would affect performance.
For performance, you very often have to make some compromises. Others have expressed various methods; however, you noted that Carmack's hack is faster up to certain values of N. You should check n, and if it is less than that number N, use Carmack's hack; otherwise use one of the other methods described in the answers here.
This is the fastest Java implementation I could come up with, using a combination of techniques suggested by others in this thread.
Mod-256 test
Inexact mod-3465 test (avoids integer division at the cost of some false positives)
Floating-point square root, round and compare with input value
I also experimented with these modifications but they did not help performance:
Additional mod-255 test
Dividing the input value by powers of 4
Fast Inverse Square Root (to work for high values of N it needs 3 iterations, enough to make it slower than the hardware square root function.)
public class SquareTester {

    public static boolean isPerfectSquare(long n) {
        if (n < 0) {
            return false;
        } else {
            switch ((byte) n) {
            case -128: case -127: case -124: case -119: case -112:
            case -111: case -103: case -95: case -92: case -87:
            case -79: case -71: case -64: case -63: case -60:
            case -55: case -47: case -39: case -31: case -28:
            case -23: case -15: case -7: case 0: case 1:
            case 4: case 9: case 16: case 17: case 25:
            case 33: case 36: case 41: case 49: case 57:
            case 64: case 65: case 68: case 73: case 81:
            case 89: case 97: case 100: case 105: case 113:
            case 121:
                long i = (n * INV3465) >>> 52;
                if (! good3465[(int) i]) {
                    return false;
                } else {
                    long r = round(Math.sqrt(n));
                    return r*r == n;
                }
            default:
                return false;
            }
        }
    }

    private static int round(double x) {
        return (int) Double.doubleToRawLongBits(x + (double) (1L << 52));
    }

    /** 3465<sup>-1</sup> modulo 2<sup>64</sup> */
    private static final long INV3465 = 0x8ffed161732e78b9L;

    private static final boolean[] good3465 = new boolean[0x1000];

    static {
        for (int r = 0; r < 3465; ++ r) {
            int i = (int) ((r * r * INV3465) >>> 52);
            good3465[i] = good3465[i+1] = true;
        }
    }
}
You should get rid of the 2-power part of N right from the start.
2nd Edit
The magical expression for m below should be
m = N - (N & (N-1));
and not as written
End of 2nd edit
m = N & (N-1); // the lowest bit of N
N /= m;
byte = N & 0x0F;
if ((m % 2) || (byte != 1 && byte != 9))
    return false;
1st Edit:
Minor improvement:
m = N & (N-1); // the lowest bit of N
N /= m;
if ((m % 2) || ((N & 0x07) != 1))
    return false;
End of 1st edit
Now continue as usual. This way, by the time you get to the floating point part, you have already gotten rid of all the numbers whose 2-power part is odd (about half), and then you only consider 1/8 of what's left. I.e., you run the floating point part on 6% of the numbers.
Project Euler is mentioned in the tags and many of the problems in it require checking numbers >> 2^64. Most of the optimizations mentioned above don't work easily when you are working with an 80 byte buffer.
I used java BigInteger and a slightly modified version of Newton's method, one that works better with integers. The problem was that exact squares n^2 converged to (n-1) instead of n because n^2-1 = (n-1)(n+1) and the final error was just one step below the final divisor and the algorithm terminated. It was easy to fix by adding one to the original argument before computing the error. (Add two for cube roots, etc.)
One nice attribute of this algorithm is that you can immediately tell if the number is a perfect square - the final error (not correction) in Newton's method will be zero. A simple modification also lets you quickly calculate floor(sqrt(x)) instead of the closest integer. This is handy with several Euler problems.
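A sketch of that approach (my reconstruction with a standard stopping rule, not the author's exact code with the +1 adjustment):

import java.math.BigInteger;

static boolean isPerfectSquare(BigInteger n) {
    if (n.signum() < 0) return false;
    if (n.bitLength() < 2) return true; // 0 and 1
    // Newton iteration x -> (x + n/x) / 2, starting strictly above the root;
    // the sequence decreases monotonically until it reaches floor(sqrt(n)).
    BigInteger x = BigInteger.ONE.shiftLeft(n.bitLength() / 2 + 1);
    BigInteger prev;
    do {
        prev = x;
        x = x.add(n.divide(x)).shiftRight(1);
    } while (x.compareTo(prev) < 0);
    // prev is floor(sqrt(n)); the final error is zero exactly for perfect squares.
    return prev.multiply(prev).equals(n);
}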
The sqrt call is not perfectly accurate, as has been mentioned, but it's interesting and instructive that it doesn't blow away the other answers in terms of speed. After all, the sequence of assembly language instructions for a sqrt is tiny. Intel has a hardware instruction, which isn't used by Java I believe because it doesn't conform to IEEE.
So why is it slow? Because Java is actually calling a C routine through JNI, and it's actually slower to do so than to call a Java subroutine, which itself is slower than doing it inline. This is very annoying, and Java should have come up with a better solution, i.e. building in floating point library calls if necessary. Oh well.
In C++, I suspect all the complex alternatives would lose on speed, but I haven't checked them all.
What I did, and what Java people will find useful, is a simple hack, an extension of the special case testing suggested by A. Rex. Use a single long value as a bit array, which isn't bounds checked. That way, you have a 64-bit boolean lookup.
typedef unsigned long long UVLONG;

UVLONG pp1, pp2;

void init2() {
    for (int i = 0; i < 64; i++) {
        for (int j = 0; j < 64; j++)
            if (isPerfectSquare(i * 64 + j)) {
                pp1 |= (1ULL << j); // 1ULL: shifting a plain int by up to 63 is undefined
                pp2 |= (1ULL << i);
                break;
            }
    }
    cout << "pp1=" << pp1 << "," << pp2 << "\n";
}

inline bool isPerfectSquare5(UVLONG x) {
    return pp1 & (1ULL << (x & 0x3F)) ? isPerfectSquare(x) : false;
}
The routine isPerfectSquare5 runs in about 1/3 the time on my core2 duo machine. I suspect that further tweaks along the same lines could reduce the time further on average, but every time you check, you are trading off more testing for more eliminating, so you can't go too much farther on that road.
Certainly, rather than having a separate test for negative, you could check the high 6 bits the same way.
Note that all I'm doing is eliminating possible squares, but when I have a potential case I have to call the original, inlined isPerfectSquare.
The init2 routine is called once to initialize the static values of pp1 and pp2.
Note that in my implementation in C++, I'm using unsigned long long, so since you're signed, you'd have to use the >>> operator.
There is no intrinsic need to bounds check the array, but Java's optimizer has to figure this stuff out pretty quickly, so I don't blame them for that.
I like the idea to use an almost correct method on some of the input. Here is a version with a higher "offset". The code seems to work and passes my simple test case.
Just replace your:
if(n < 410881L){...}
code with this one:
if (n < 11043908100L) {
    // John Carmack hack, converted to Java.
    // See: http://www.codemaestro.com/reviews/9
    int i;
    float x2, y;
    x2 = n * 0.5F;
    y = n;
    i = Float.floatToRawIntBits(y);
    // using the magic number from
    // http://www.lomont.org/Math/Papers/2003/InvSqrt.pdf
    // since it is more accurate
    i = 0x5f375a86 - (i >> 1);
    y = Float.intBitsToFloat(i);
    y = y * (1.5F - (x2 * y * y));
    y = y * (1.5F - (x2 * y * y)); // Newton iteration, more accurate
    sqrt = Math.round(1.0F / y);
} else {
    // Carmack hack gives incorrect answer for n >= 11043908100.
    sqrt = (long) Math.sqrt(n);
}
Considering general bit lengths (though I have used a specific type here), I tried to design a simplistic algorithm as below. A simple and obvious check for 0, 1, 2 or < 0 is required initially.
The following is simple in the sense that it doesn't try to use any existing maths functions. Most of the operators can be replaced with bit-wise operators. I haven't tested with any benchmark data though. I'm neither an expert at maths nor computer algorithm design in particular; I would love to see you point out problems. I know there are lots of chances for improvement.
int main()
{
    unsigned int c1 = 0, c2 = 0;
    unsigned int x = 0;
    unsigned int p = 0;
    int k1 = 0;
    scanf("%u", &p);
    if (p % 2 == 0) {
        x = p/2;
    }
    else {
        x = (p/2) + 1;
    }
    while (x)
    {
        if ((x*x) > p) {
            c1 = x;
            x = x/2;
        } else {
            c2 = x;
            break;
        }
    }
    if ((p%2) != 0)
        c2++;
    while (c2 < c1)
    {
        if ((c2 * c2) == p) {
            k1 = 1;
            break;
        }
        c2++;
    }
    if (k1)
        printf("\n Perfect square for %u", c2);
    else
        printf("\n Not perfect but nearest to :%u :", c2);
    return 0;
}
This is a rework from decimal to binary of the old Marchant calculator algorithm (sorry, I don't have a reference), in Ruby, adapted specifically for this question:
def isexactsqrt(v)
  value = v.abs
  residue = value
  root = 0
  onebit = 1
  onebit <<= 8 while (onebit < residue)
  onebit >>= 2 while (onebit > residue)
  while (onebit > 0)
    x = root + onebit
    if (residue >= x) then
      residue -= x
      root = x + onebit
    end
    root >>= 1
    onebit >>= 2
  end
  return (residue == 0)
end
Here's a workup of something similar (there may be coding style/smells or clunky O/O - it's the algorithm that counts, and C++ is not my home language). In this case, we're looking for residue == 0:
#include <iostream>
using namespace std;

typedef unsigned long long int llint;

class ISqrt {           // Integer Square Root
    llint value;        // Integer whose square root is required
    llint root;         // Result: floor(sqrt(value))
    llint residue;      // Result: value - root*root
    llint onebit, x;    // Working bit, working value

public:
    ISqrt(llint v = 2) {    // Constructor
        Root(v);            // Take the root
    };

    llint Root(llint r) {   // Resets and calculates new square root
        value = r;          // Store input
        residue = value;    // Initialise for subtracting down
        root = 0;           // Clear root accumulator
        onebit = 1;                     // Calculate start value of counter
        onebit <<= (8*sizeof(llint)-2); // Set up counter bit as greatest odd power of 2
        while (onebit > residue) { onebit >>= 2; }; // Shift down until just < value
        while (onebit > 0) {
            x = root ^ onebit;      // Will check root+1bit (root bit corresponding to onebit is always zero)
            if (residue >= x) {     // Room to subtract?
                residue -= x;       // Yes - deduct from residue
                root = x + onebit;  // and step root
            };
            root >>= 1;
            onebit >>= 2;
        };
        return root;
    };

    llint Residue() {   // Returns residue from last calculation
        return residue;
    };
};

int main() {
    llint big, i, q, r, v, delta;
    big = 0; big = (big-1);             // Kludge for "big number"
    ISqrt b;                            // Make q sqrt generator
    for ( i = big; i > 0 ; i /= 7 ) {   // for several numbers
        q = b.Root(i);                  // Get the square root
        r = b.Residue();                // Get the residue
        v = q*q+r;                      // Recalc original value
        delta = v-i;                    // And diff, hopefully 0
        cout << i << ": " << q << " ++ " << r << " V: " << v << " Delta: " << delta << "\n";
    };
    return 0;
};
I checked all of the possible results when the last n bits of a square is observed. By successively examining more bits, up to 5/6th of inputs can be eliminated. I actually designed this to implement Fermat's Factorization algorithm, and it is very fast there.
public static boolean isSquare(final long val) {
    if ((val & 2) == 2 || (val & 7) == 5) {
        return false;
    }
    if ((val & 11) == 8 || (val & 31) == 20) {
        return false;
    }
    if ((val & 47) == 32 || (val & 127) == 80) {
        return false;
    }
    if ((val & 191) == 128 || (val & 511) == 320) {
        return false;
    }
    // if ((val & a) == b || (val & c) == d) {
    //     return false;
    // }
    if (!modSq[(int) (val % modSq.length)]) {
        return false;
    }
    final long root = (long) Math.sqrt(val);
    return root * root == val;
}
The last bit of pseudocode can be used to extend the tests to eliminate more values. The tests above are for k = 0, 1, 2, 3
a is of the form (3 << 2k) - 1
b is of the form (2 << 2k)
c is of the form (2 << 2k + 2) - 1
d is of the form (2 << 2k - 1) * 10
It first tests whether the number is a square residue modulo powers of two, then it tests based on a final modulus, and then it uses Math.sqrt to do a final test. I came up with the idea from the top post, and attempted to extend upon it. I appreciate any comments or suggestions.
Update: Using the test by a modulus, (modSq) and a modulus base of 44352, my test runs in 96% of the time of the one in the OP's update for numbers up to 1,000,000,000.
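The modSq table itself isn't shown; here is one way it could be initialized for the modulus base 44352 mentioned above (my reconstruction, following the same residue-marking idea):

private static final boolean[] modSq = new boolean[44352];
static {
    for (long i = 0; i < modSq.length; i++) {
        modSq[(int) (i * i % modSq.length)] = true; // residues that squares can take
    }
}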
Here is a divide and conquer solution.
If the square root of a natural number (number) is a natural number (solution), you can easily determine a range for solution based on the number of digits of number:
number has 1 digit: solution in range = 1 - 4
number has 2 digits: solution in range = 3 - 10
number has 3 digits: solution in range = 10 - 40
number has 4 digits: solution in range = 30 - 100
number has 5 digits: solution in range = 100 - 400
Notice the repetition?
You can use this range in a binary search approach to see if there is a solution for which:
number == solution * solution
Here is the code
Here is my class SquareRootChecker
public class SquareRootChecker {

    private long number;
    private long initialLow;
    private long initialHigh;

    public SquareRootChecker(long number) {
        this.number = number;

        initialLow = 1;
        initialHigh = 4;
        if (Long.toString(number).length() % 2 == 0) {
            initialLow = 3;
            initialHigh = 10;
        }
        for (long i = 0; i < Long.toString(number).length() / 2; i++) {
            initialLow *= 10;
            initialHigh *= 10;
        }
        if (Long.toString(number).length() % 2 == 0) {
            initialLow /= 10;
            initialHigh /= 10;
        }
    }

    public boolean checkSquareRoot() {
        return findSquareRoot(initialLow, initialHigh, number);
    }

    private boolean findSquareRoot(long low, long high, long number) {
        long check = low + (high - low) / 2;
        if (high >= low) {
            if (number == check * check) {
                return true;
            }
            else if (number < check * check) {
                high = check - 1;
                return findSquareRoot(low, high, number);
            }
            else {
                low = check + 1;
                return findSquareRoot(low, high, number);
            }
        }
        return false;
    }
}
And here is an example on how to use it.
long number = 1234567;
long square = number * number;
SquareRootChecker squareRootChecker = new SquareRootChecker(square);
System.out.println(square + ": " + squareRootChecker.checkSquareRoot()); //Prints "1524155677489: true"
long notSquare = square + 1;
squareRootChecker = new SquareRootChecker(notSquare);
System.out.println(notSquare + ": " + squareRootChecker.checkSquareRoot()); //Prints "1524155677490: false"
Newton's Method with integer arithmetic
If you wish to avoid non-integer operations you could use the method below. It basically uses Newton's Method modified for integer arithmetic.
/**
 * Test if the given number is a perfect square.
 * @param n Must be greater than 0 and less
 *          than Long.MAX_VALUE.
 * @return <code>true</code> if n is a perfect
 *         square, or <code>false</code> otherwise.
 */
public static boolean isSquare(long n)
{
    long x1 = n;
    long x2 = 1L;

    while (x1 > x2)
    {
        x1 = (x1 + x2) / 2L;
        x2 = n / x1;
    }

    return x1 == x2 && n % x1 == 0L;
}
This implementation can not compete with solutions that use Math.sqrt. However, its performance can be improved by using the filtering mechanisms described in some of the other posts.
Square root of a number, given that the number is a perfect square.
The complexity is O(√n), since one odd number is added per iteration.
/**
 * Calculate the square root if the given number is a perfect square.
 *
 * Approach: the sum of the first n odd numbers equals n*n, given
 * that n*n is a perfect square.
 *
 * @param number
 * @return squareRoot
 */
public static int calculateSquareRoot(int number) {
    int sum = 1;
    int count = 1;
    int squareRoot = 1;

    while (sum < number) {
        count += 2;
        sum += count;
        squareRoot++;
    }
    return squareRoot;
}
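To use the same identity as a perfect-square test rather than assuming the input already is one, a small variant (my sketch) checks whether the running sum lands exactly on the number:

public static boolean isPerfectSquareBySumOfOdds(int number) {
    int sum = 0;
    for (int odd = 1; sum < number; odd += 2) {
        sum += odd; // 1 + 3 + 5 + ... lands exactly on the perfect squares
    }
    return sum == number;
}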
Here is the simplest and most concise way, although I do not know how it compares in terms of CPU cycles. This works great if you only wish to know if the root is a whole number. If you really care if it is an integer, you can also figure that out. Here is a simple (and pure) function:
private static final MathContext precision = new MathContext(20);

private static final Function<Long, Boolean> isRootWhole = (n) -> {
    long digit = n % 10;
    if (digit == 2 || digit == 3 || digit == 7 || digit == 8) {
        return false;
    }
    return new BigDecimal(n).sqrt(precision).scale() == 0;
};
If you do not need micro-optimization, this answer is better in terms of simplicity and maintainability. If you will be calculating negative numbers, you will need to handle that accordingly, and send the absolute value into the function. I have included a minor optimization because no perfect squares have a last digit of 2, 3, 7, or 8, due to quadratic residues mod 10.
On my CPU, a run of this algorithm on 0 - 10,000,000 took an average of 1000 - 1100 nanoseconds per calculation.
If you are performing a lesser number of calculations, the earlier calculations take a bit longer.
I had a negative comment that my previous edit did not work for large numbers. The OP mentioned Longs, and the largest perfect square that is a Long is 9223372030926249001, so this method works for all Longs.
This question got me wondering, so I did some simple coding and I'm presenting it here because I think it's interesting and relevant, though I don't know how useful. There's a simple algorithm
a_n+1 = (a_n + x/a_n)/2
for calculating square roots, but it's meant to be used for decimals. I wondered what would happen if I just coded the same algorithm using integer maths. Would it even converge on the right answer? I didn't know, so I wrote a program...
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <math.h>

_Bool isperfectsquare(uint64_t x, uint64_t *isqrtx) {
    // NOTE: isqrtx approximate for non-squares. (benchmarked at 162ns 3GHz i5)
    uint64_t ai;

    ai = 1 + ((x & 0xffff000000000000) >> 32) + ((x & 0xffff00000000) >> 24) + ((x & 0xffff0000) >> 16);
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = (ai + x/ai)/2;
    ai = ai & 0xffffffff;
    if (isqrtx != NULL) isqrtx[0] = ai;
    return ai*ai == x;
}

int main(void) {
    uint64_t isqrtx;
    uint64_t i;

    for (i = 1; i < 0x100000000; i++) {
        if (!isperfectsquare(i*i, &isqrtx)) {
            printf("Failed at %lu", i);
            exit(1);
        }
    }
    printf("All OK.\n");
    return 0;
}
So, it turns out that 12 iterations of the formula is enough to give correct results for all 64 bit unsigned longs that are perfect squares, and of course, non-squares will return false.
simon#simon-Inspiron-N5040:~$ time ./isqrt.bin
All OK.
real 11m37.096s
user 11m35.053s
sys 0m0.272s
So 697s/2^32 is approx 162ns. As it is, the function will have the same runtime for all inputs. Some of the measures detailed elsewhere in the discussion could speed it up for non-squares by checking the last four bits etc. Hope someone finds this interesting as I did.
If speed is a concern, why not partition off the most commonly used set of inputs and their values to a lookup table and then do whatever optimized magic algorithm you have come up with for the exceptional cases?
"I'm looking for the fastest way to determine if a long value is a perfect square (i.e. its square root is another integer)."
The answers are impressive, but I failed to see a simple check:
check whether the last decimal digit of the long is a member of the set (0, 1, 4, 5, 6, 9). If it is not, then it cannot possibly be a 'perfect square'.
eg.
4567 - cannot be a perfect square.
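In code, that check could look like this (my sketch):

static boolean lastDigitMayBeSquare(long n) {
    int d = (int) (n % 10); // last decimal digit, for n >= 0
    return d == 0 || d == 1 || d == 4 || d == 5 || d == 6 || d == 9;
}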
It ought to be possible to pack the "cannot be a perfect square if the last X digits are N" check much more efficiently than that! I'll use Java 32-bit ints, and produce enough data to check the last 16 bits of the number - that's 2048 hexadecimal int values.
...
Ok. Either I have run into some number theory that is a little beyond me, or there is a bug in my code. In any case, here is the code:
public static void main(String[] args) {
    final int BITS = 16;

    BitSet foo = new BitSet();

    for(int i = 0; i < (1<<BITS); i++) {
        int sq = (i*i);
        sq = sq & ((1<<BITS)-1);
        foo.set(sq);
    }

    System.out.println("int[] mayBeASquare = {");

    for(int i = 0; i < 1<<(BITS-5); i++) {
        int kk = 0;
        for(int j = 0; j < 32; j++) {
            if(foo.get((i << 5) | j)) {
                kk |= 1<<j;
            }
        }
        System.out.print("0x" + Integer.toHexString(kk) + ", ");
        if(i%8 == 7) System.out.println();
    }
    System.out.println("};");
}
and here are the results:
(ed: elided for poor performance in prettify.js; view revision history to see.)

How do I determine number of bytes needed to represent a given integer?

I need a function in Java that gives me the number of bytes needed to represent a given integer. When I pass 2 it should return 1, 400 -> 2, 822222 -> 3, etc.
Edit: For now I'm stuck with this:
numOfBytes = Integer.highestOneBit(integer) / 8
I don't know exactly what highestOneBit() does, but I have also tried this:
numOfBytes = (int) (Math.floor(Math.log(integer)) + 1);
which I found on some website.
static int byteSize(long x) {
    if (x < 0) throw new IllegalArgumentException();
    int s = 1;
    while (s < 8 && x >= (1L << (s * 8))) s++;
    return s;
}
Integer.highestOneBit(arg) returns only the highest set bit, in the original place. For example, Integer.highestOneBit(12) is 8, not 3. So you probably want to use Integer.numberOfTrailingZeros(Integer.highestOneBit(12)), which does return 3. Here is the Integer API.
Some sample code:
numOfBytes = (Integer.numberOfTrailingZeros(Integer.highestOneBit(integer)) + 8) / 8;
The + 8 is for proper rounding up to a whole byte.
The lazy/inefficient way to do this is with Integer#toBinaryString. It will remove all leading zeros from positive numbers for you; all you have to do is call String#length and divide by 8, rounding up.
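For example (a sketch; note the rounding up):

static int byteSize(int n) {
    // toBinaryString emits no leading zeros for positive n
    return (Integer.toBinaryString(n).length() + 7) / 8;
}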
Think about how to solve the same problem using normal decimal numbers. Then apply the same principle to binary / byte representation i.e. use 256 where you would use 10 for decimal numbers.
static int byteSize(long number, int bitsPerByte) {
    long base = 1L << bitsPerByte; // e.g. 256 for 8-bit bytes, where decimal would use 10
    int count = 1;                 // one "digit" covers 0 .. base-1
    while (number >= base) {
        number /= base;
        count++;
    }
    return count;
}
For positive values: 0 and 1 need 1 digit, and each additional digit doubles the maximum value you can represent. So a recursive solution is to divide:

public static int binaryLength(long l) {
    if (l < 2) return 1;
    else return 1 + binaryLength(l / 2L);
}

but shifting works too:

public static int binaryLength(long l) {
    if (l < 2) return 1;
    else return 1 + binaryLength(l >> 1);
}
Negative values have a leading 1 bit, so the question doesn't make much sense for them. If we assume binary 1 is decimal 1, binary 1 can't also be -1. But what shall -1 be? b11? That is 3.
Why wouldn't you do something simple like this:

private static int byteSize(int val) {
    int size = 0;
    while (val > 0) {
        val = val >> 8;
        size++;
    }
    return size;
}
int numOfBytes = (Integer.SIZE >> 3) - (Integer.numberOfLeadingZeros(n) >> 3);
This implementation is compact enough while performance-friendly, since it doesn't involve any floating point operation nor any loop.
It is derived from the form:
int numOfBytes = (int) Math.ceil((Integer.SIZE - Integer.numberOfLeadingZeros(n)) / (double) Byte.SIZE);
The magic number 3 in the optimized form comes from the assumption that Byte.SIZE equals 8 (2^3), so shifting right by 3 divides by 8.

Bitwise operator for simply flipping all bits in an integer?

I have to flip all bits in a binary representation of an integer. Given:
10101
The output should be
01010
What is the bitwise operator to accomplish this when used with an integer? For example, if I were writing a method like int flipBits(int n);, what would go in the body? I need to flip only what's already present in the number, not all 32 bits in the integer.
The ~ unary operator is bitwise negation. If you need fewer bits than what fits in an int then you'll need to mask it with & after the fact.
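A minimal sketch of that mask-after-negation idea, assuming the caller chooses k (1 to 31) low bits to keep; the name and signature are mine:
static int flipLowBits(int n, int k) {
    return ~n & ((1 << k) - 1); // negate, then keep only the k least significant bits
}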
Simply use the bitwise not operator ~.
int flipBits(int n) {
return ~n;
}
To use the k least significant bits, convert it to the right mask.
(I assume you want at least 1 bit of course, that's why mask starts at 1)
int flipBits(int n, int k) {
int mask = 1;
for (int i = 1; i < k; ++i)
mask |= mask << 1;
return ~n & mask;
}
As suggested by Lưu Vĩnh Phúc, one can create the mask as (1 << k) - 1 instead of using a loop.
int flipBits2(int n, int k) {
int mask = (1 << k) - 1;
return ~n & mask;
}
There are a number of ways to flip all the bits using operations:
x = ~x; // has been mentioned and the most obvious solution.
x = -x - 1; or x = -1 * (x + 1);
x ^= -1; or x = x ^ ~0;
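A quick check, with my own test value, that the three forms agree:
int x = 0b10101;             // 21
System.out.println(~x);      // -22
System.out.println(-x - 1);  // -22
System.out.println(x ^ -1);  // -22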
Well, since so far there's only one solution that gives the "correct" result, and that's... really not a nice solution (using a string to count leading zeros? that'll haunt me in my dreams ;) )
So here we go with a nice clean solution that should work - I haven't tested it thoroughly, but you get the gist. Really, Java not having an unsigned type is extremely annoying for this kind of problem, but the following should be quite efficient nonetheless (and, if I may say so, MUCH more elegant than creating a string out of the number).
private static int invert(int x) {
if (x == 0) return 0; // edge case; otherwise returns -1 here
int nlz = nlz(x);
return ~x & (0xFFFFFFFF >>> nlz);
}
private static int nlz(int x) {
// Replace with whatever number leading zero algorithm you want - I can think
// of a whole list and this one here isn't that great (large immediates)
if (x < 0) return 0;
if (x == 0) return 32;
int n = 0;
if ((x & 0xFFFF0000) == 0) {
n += 16;
x <<= 16;
}
if ((x & 0xFF000000) == 0) {
n += 8;
x <<= 8;
}
if ((x & 0xF0000000) == 0) {
n += 4;
x <<= 4;
}
if ((x & 0xC0000000) == 0) {
n += 2;
x <<= 2;
}
if ((x & 0x80000000) == 0) {
n++;
}
return n;
}
A faster and simpler solution:
/* inverts all bits of n, with a binary length of the return equal to the length of n
k is the number of bits in n, eg k=(int)Math.floor(Math.log(n)/Math.log(2))+1
if n is a BigInteger : k= n.bitLength();
*/
int flipBits2(int n, int k) {
int mask = (1 << k) - 1;
return n ^ mask;
}
One Line Solution
int flippingBits(int n) {
    return n ^ ((1 << 31) - 1); // note: this flips the low 31 bits, not just the significant ones
}
I'd have to see some examples to be sure, but you may be getting unexpected values because of two's complement arithmetic. If the number has leading zeros (as it would in the case of 26), the ~ operator would flip these to make them leading ones - resulting in a negative number.
One possible workaround would be to use the Integer class:
int flipBits(int n){
    // toBinaryString already strips leading zeros for positive numbers
    String bitString = Integer.toBinaryString(n);
    StringBuilder flipped = new StringBuilder();
    for (int i = 0; i < bitString.length(); i++){
        flipped.append(bitString.charAt(i) == '0' ? '1' : '0');
    }
    // parse the flipped string back as a base-2 number
    return Integer.parseInt(flipped.toString(), 2);
}
I don't have a Java environment set up right now to test it on, but that's the general idea. Basically just convert the number to a string, flip the bits, and convert it back to a number; Integer.parseInt(s, 2) parses a binary string back into an int. It probably isn't the most efficient way to do it, but it produces the correct result.
Edit: polygenlubricants' answer to this question may also be helpful
I have another way to solve this case,
public static int complementIt(int c){
    // floor(log2(c)) + 1 ensures exact powers of two also get a full mask of ones
    return c ^ (int)(Math.pow(2, Math.floor(Math.log(c)/Math.log(2)) + 1) - 1);
}
It uses XOR to get the complement: to flip a bit we XOR it with 1, for example:
101 XOR 111 = 010
(111 is the 'key'; it is all ones up to the highest set bit of the data, generated from the base-2 logarithm.)
If you use ~ (complement) instead, the result depends on the variable's type; with an int it is processed as 32 bits.
As we are only required to flip the minimum bits required for the integer (say 50 is 110010 and when inverted, it becomes 001101 which is 13), we can invert individual bits one at a time from the LSB to MSB, and keep shifting the bits to the right and accordingly apply the power of 2. The code below does the required job:
int invertBits(int n) {
    int pow2 = 1, bit = 0;
    int newnum = 0;
    while (n > 0) {
        bit = (n & 1);
        if (bit == 0)
            newnum += pow2;
        n = n >> 1;
        pow2 *= 2;
    }
    return newnum;
}
import java.math.BigInteger;
import java.util.Scanner;
public class CodeRace1 {
public static void main(String[] s) {
long input;
BigInteger num,bits = new BigInteger("4294967295");
Scanner sc = new Scanner(System.in);
input = sc.nextInt();
sc.nextLine();
while (input-- > 0) {
num = new BigInteger(sc.nextLine().trim());
System.out.println(num.xor(bits));
}
}
}
The implementation from openJDK, Integer.reverse():
public static int reverse(int i) {
i = (i & 0x55555555) << 1 | (i >>> 1) & 0x55555555;
i = (i & 0x33333333) << 2 | (i >>> 2) & 0x33333333;
i = (i & 0x0f0f0f0f) << 4 | (i >>> 4) & 0x0f0f0f0f;
i = (i << 24) | ((i & 0xff00) << 8) |
((i >>> 8) & 0xff00) | (i >>> 24);
return i;
}
Based on my experiments on my laptop, the implementation below was faster:
public static int reverse2(int i) {
i = (i & 0x55555555) << 1 | (i >>> 1) & 0x55555555;
i = (i & 0x33333333) << 2 | (i >>> 2) & 0x33333333;
i = (i & 0x0f0f0f0f) << 4 | (i >>> 4) & 0x0f0f0f0f;
i = (i & 0x00ff00ff) << 8 | (i >>> 8) & 0x00ff00ff;
i = (i & 0x0000ffff) << 16 | (i >>> 16) & 0x0000ffff;
return i;
}
I'm not sure of the reason behind it; it may depend on how the Java code is compiled into machine code...
If you just want to flip the bits which are "used" in the integer, try this:
public int flipBits(int n) {
int mask = (Integer.highestOneBit(n) << 1) - 1;
return n ^ mask;
}
public static int findComplement(int num) {
return (~num & (Integer.highestOneBit(num) - 1));
}
int findComplement(int num) {
    int i = 0, ans = 0;
    while (num != 0) {
        if ((num & 1) == 0) {
            ans += (1 << i);
        }
        i += 1;
        num >>= 1;
    }
    return ans;
}
Binary 10101 == Decimal 21
Flipped Binary 01010 == Decimal 10
One-liner (in JavaScript; you could convert it to your favorite programming language):
10 == ~21 & (1 << (Math.floor(Math.log2(21))+1)) - 1
Explanation:
10 == ~21 & mask
mask: filters out all the leading bits above the significant bit count (nBits, see below)
How to calculate the significant bit counts ?
Math.floor(Math.log2(21))+1 => Returns how many significant bits are there (nBits)
Ex:
0000000001 returns 1
0001000001 returns 7
0000010101 returns 5
(1 << nBits) - 1 => 1111111111.....nBits times = mask
It can be done in a simple way: just subtract the number from the value obtained when all its bits are set to 1.
For example:
Number: the given number
Value: a number with all bits set up to the highest bit of Number
Flipped number = Value - Number
Example:
Number = 23, binary form: 10111
Value: 11111 = 31
Flipped number: 31 - 23 = 8, which is binary 01000
We can find the most significant set bit in O(1) time for a fixed size integer. For
example below code is for a 32-bit integer.
int setBitNumber(int n)
{
n |= n>>1;
n |= n>>2;
n |= n>>4;
n |= n>>8;
n |= n>>16;
n = n + 1;
return (n >> 1);
}
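Putting the two steps together, a minimal sketch of my own, assuming n > 0 and reusing setBitNumber from above:
static int flipBits(int n) {
    int msb = setBitNumber(n);   // highest power of two <= n
    int value = (msb << 1) - 1;  // all ones up to the MSB, e.g. 11111 for 23
    return value - n;            // 31 - 23 = 8 == 01000
}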

Fast way to find exponent of nearest superior power of 2

If I have a number a, I want the value of x in b=2^x, where b is the next power of 2 greater than a.
In case you missed the tag, this is Java, and a is an int. I'm looking for the fastest way to do this. My solution thus far is to use bit-twiddling to get b, then do (int)(log(b)/log(2)), but I feel like there has to be a faster method that doesn't involve dividing two floating-point numbers.
What about a == 0 ? 0 : 32 - Integer.numberOfLeadingZeros(a - 1)? That avoids floating point entirely. If you know a is never 0, you can leave off the first part.
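Wrapped in a method, with a couple of sample values (my own sketch, not from the original answer):
static int nextPow2Exponent(int a) {
    return a == 0 ? 0 : 32 - Integer.numberOfLeadingZeros(a - 1);
}
// nextPow2Exponent(5) == 3 (2^3 = 8), nextPow2Exponent(17) == 5 (2^5 = 32)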
If anyone is looking for some "bit-twiddling" code that Andy mentions, that could look something like this: (if people have better ways, you should share!)
public static int nextPowerOf2(final int a)
{
int b = 1;
while (b < a)
{
b = b << 1;
}
return b;
}
Not necessarily faster, but a one-liner:
int nextPowerOf2(int num)
{
return num == 1 ? 1 : Integer.highestOneBit(num - 1) * 2;
}
If you need an answer that works for integers or floating point, both of these should work:
I would think that Math.floor(Math.log(a) * 1.4426950408889634073599246810019) + 1 would be your best bet if you don't want to do bit twiddling.
If you do want to bit-twiddle, you can use Double.doubleToRawLongBits(a) and then just extract the exponent. I'm thinking ((Double.doubleToRawLongBits(a) >>> 52) & 0x7ff) - 1022 should do the trick.
just do the following:
extract the highest bit by using this method (modified from hdcode):
int msb(int x) {
    if (isPow2(x)) return x;
    x = x | (x >> 1);
    x = x | (x >> 2);
    x = x | (x >> 4);
    x = x | (x >> 8);
    x = x | (x >> 16);
    return x - (x >> 1);
}
boolean isPow2(int n) {
    return (n & (n - 1)) == 0;
}
combining both functions with this log2 function gives the exponent of the highest set bit of a given number 'a'; the next power of 2 'b' is then 1 << (log2(a) + 1):
int log2(int x) {
int pow = 0;
if(x >= (1 << 16)) { x >>= 16; pow += 16;}
if(x >= (1 << 8 )) { x >>= 8; pow += 8;}
if(x >= (1 << 4 )) { x >>= 4; pow += 4;}
if(x >= (1 << 2 )) { x >>= 2; pow += 2;}
if(x >= (1 << 1 )) { x >>= 1; pow += 1;}
return pow;
}
How about divide-and-conquer? As in:
b = 0;
if (a >= 65536){a /= 65536; b += 16;}
if (a >= 256){a /= 256; b += 8;}
if (a >= 16){a /= 16; b += 4;}
if (a >= 4){a /= 4; b += 2;}
if (a >= 2){a /= 2; b += 1;}
Assuming a is unsigned, the divides should just be bit-shifts.
A binary IF-tree with 32 leaves should be even faster, getting the answer in 5 comparisons. Something like:
if (a >= (1<<0x10)){
if (a >= (1<<0x18)){
if (a >= (1<<0x1C)){
if (a >= (1<<0x1E)){
if (a >= (1<<0x1F)){
b = 0x1F;
} else {
b = 0x1E;
}
} else {
if (a >= (1<<0x1D)){
b = 0x1D;
} else {
b = 0x1C;
}
}
etc. etc.
Java provides a function that rounds down to the nearest power of 2. Thus a!=Integer.highestOneBit(a)?2*Integer.highestOneBit(a):a is a slightly prettier solution, assuming a is positive.
Storing Integer.highestOneBit(a) in a variable may further improve performance and readability.
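Spelled out with that variable stored, a sketch assuming a is positive:
static int nextPowerOf2(int a) {
    int highest = Integer.highestOneBit(a); // largest power of two <= a
    return (a == highest) ? a : highest << 1;
}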
To add to Jeremiah Willcock's answer, if you want the value of the power of 2 itself, then you will want:
(int) Math.pow(2, (a == 0) ? 0 : 32 - Integer.numberOfLeadingZeros(a - 1));
Here is my code for the same. Will this be faster?
int a,b,x,y;
Scanner inp = new Scanner(System.in);
a = inp.nextInt();
y = (int) (Math.log(a)/Math.log(2));
x = y +1;
System.out.println(x);

How can I perform multiplication without the '*' operator?

I was just going through some basic stuff as I am learning C. I came upon a question to multiply a number by 7 without using the * operator. Basically it's like this
(x << 3) - x;
Now I know about basic bit manipulation operations, but I can't get how do you multiply a number by any other odd number without using the * operator? Is there a general algorithm for this?
Think about how you multiply in decimal using pencil and paper:
12
x 26
----
72
24
----
312
What does multiplication look like in binary?
0111
x 0101
-------
0111
0000
0111
-------
100011
Notice anything? Unlike multiplication in decimal, where you need to memorize the "times table," when multiplying in binary you are always multiplying one of the terms by either 0 or 1 before writing it down in the list of addends. There's no times table needed. If the digit of the second term is 1, you add in the first term. If it's 0, you don't. Also note how the addends are progressively shifted over to the left.
If you're unsure of this, do a few binary multiplications on paper. When you're done, convert the result back to decimal and see if it's correct. After you've done a few, I think you'll get the idea how binary multiplication can be implemented using shifts and adds.
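A direct transcription of this paper method into code; a sketch of mine that assumes nonnegative inputs:
static int multiply(int a, int b) {
    int product = 0;
    while (b != 0) {
        if ((b & 1) != 0) product += a; // digit is 1: add the shifted first term
        a <<= 1;                        // shift the addend left, as on paper
        b >>>= 1;                       // move to the next binary digit
    }
    return product;
}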
Everyone is overlooking the obvious. No multiplication is involved:
10^(log10(A) + log10(B))
The question says:
multiply a number by 7 without using * operator
This doesn't use *:
number / (1 / 7)
Edit:
This compiles and works fine in C:
int number,result;
number = 8;
result = number / (1. / 7);
printf("result is %d\n",result);
An integer left shift is multiplying by 2, provided it doesn't overflow. Just add or subtract as appropriate once you get close.
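For example, multiplying by 7 with one shift and one subtraction (a sketch; overflow is not handled):
static int times7(int x) {
    return (x << 3) - x; // shift to 8x, the nearest power of two above 7, then subtract x once
}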
int multiply(int multiplicand, int factor)
{
if (factor == 0) return 0;
int product = multiplicand;
for (int ii = 1; ii < abs(factor); ++ii) {
product += multiplicand;
}
return factor >= 0 ? product : -product;
}
You wanted multiplication without *, you got it, pal!
It's easy to avoid the '*' operator:
mov eax, 1234h
mov edx, 5678h
imul edx
No '*' in sight. Of course, if you wanted to get into the spirit of it, you could also use the trusty old shift and add algorithm:
mult proc
; Multiplies eax by ebx and places result in edx:ecx
xor ecx, ecx
xor edx, edx
mul1:
test ebx, 1
jz mul2
add ecx, eax
adc edx, 0
mul2:
shr ebx, 1
shl eax, 1
test ebx, ebx
jnz mul1
done:
ret
mult endp
Of course, with modern processors, all (?) have multiplication instructions, but back when the PDP-11 was shiny and new, code like this saw real use.
Mathematically speaking, multiplication distributes over addition. Essentially, this means:
x * (a + b + c ...) = (x * a) + (x * b) + (x * c) ...
Any integer (in your case 7) can be represented as a series of additions (such as 8 + (-1), since subtraction is really just addition going the wrong way). This allows you to represent any single multiplication statement as an equivalent series of multiplication statements, which will come up with the same result:
x * 7
= x * (8 + (-1))
= (x * 8) + (x * (-1))
= (x * 8) - (x * 1)
= (x * 8) - x
The bitwise shift operator essentially just multiplies or divides a number by a power of 2. So long as your equation is only dealing with such values, bit shifting can be used to replace all occurrence of the multiplication operator.
(x * 8) - x = (x * 2^3) - x = (x << 3) - x
A similar strategy can be used on any other integer, and it makes no difference whether it's odd or even.
It is the same as x*8-x = x*(8-1) = x*7
Any number, odd or even, can be expressed as a sum of powers of two. For example,
1 2 4 8
------------------
1 = 1
2 = 0 + 2
3 = 1 + 2
4 = 0 + 0 + 4
5 = 1 + 0 + 4
6 = 0 + 2 + 4
7 = 1 + 2 + 4
8 = 0 + 0 + 0 + 8
11 = 1 + 2 + 0 + 8
So, you can multiply x by any number by performing the right set of shifts and adds.
1x = x
2x = 0 + x<<1
3x = x + x<<1
4x = 0 + 0 + x<<2
5x = x + 0 + x<<2
11x = x + x<<1 + 0 + x<<3
When it comes down to it, multiplication by a positive integer can be done like this:
int multiply(int a, int b) {
int ret = 0;
for (int i=0; i<b; i++) {
ret += a;
}
return ret;
}
Efficient? Hardly. But it's correct (factoring in limits on ints and so forth).
So using a left-shift is just a shortcut for multiplying by 2. But once you get to the highest power-of-2 under b you just add a the necessary number of times, so:
int multiply(int a, int b) {
    int ret = a;
    int mult = 1;
    // double until the next doubling would overshoot b
    while ((mult << 1) <= b) {
        ret <<= 1;
        mult <<= 1;
    }
    // then add a for the remaining (b - mult) times
    while (mult < b) {
        ret += a;
        mult++;
    }
    return ret;
}
or something close to that.
To put it another way, to multiply by 7:
Left shift by 2 (times 4); a left shift by 3 would be times 8, which is > 7.
Then add the number 3 more times.
One evening, I found that I was extremely bored, and cooked this up:
#include <iostream>
typedef unsigned int uint32;
uint32 add(uint32 a, uint32 b) {
    do {
        uint32 s = a ^ b;  // sum without carries
        uint32 c = a & b;  // carry bits
        a = s;
        b = c << 1;        // carries move one place left
    } while (a & b);
    return (a | b);
}
uint32 mul(uint32 a, uint32 b) {
    uint32 total = 0;
    do {
        uint32 s1 = a & (-(b & 1)); // a if the low bit of b is set, else 0
        b >>= 1; a <<= 1;
        total = add(s1, total);
    } while (b);
    return total;
}
int main(void) {
using namespace std;
uint32 a, b;
cout << "Enter two numbers to be multiplied: ";
cin >> a >> b;
cout << "Total: " << mul(a,b) << endl;
return 0;
}
The code above should be quite self-explanatory, as I tried to keep it as simple as possible. It should work, more or less, the way a CPU might perform these operations. The only bug I'm aware of is that a is not permitted to be greater than 32,767 and b is not permitted to be large enough to overflow a (that is, multiply overflow is not handled, so 64-bit results are not possible). It should even work with negative numbers, provided the inputs are appropriately reinterpret_cast<>.
O(log(b)) method
public int multiply_optimal(int a, int b) {
if (a == 0 || b == 0)
return 0;
if (b == 1)
return a;
if ((b & 1) == 0)
return multiply_optimal(a + a, b >> 1);
else
return a + multiply_optimal(a + a, b >> 1);
}
The recursive code works as follows:
Base case:
if either of the numbers is 0, the product is 0.
if b = 1, the product = a.
If b is even:
ab can be written as 2a(b/2)
2a(b/2) = (a+a)(b/2) = (a+a)(b>>1), where '>>' is the arithmetic right shift operator in Java.
If b is odd:
ab can be written as a + a(b-1)
a + a(b-1) = a + 2a(b-1)/2 = a + (a+a)((b-1)/2) = a + (a+a)((b-1)>>1)
Since b is odd, (b-1)/2 = b/2 = b>>1
So ab = a + (2a * (b>>1))
NOTE: in each recursive call b is halved => O(log(b))
unsigned int Multiply(unsigned int m1, unsigned int m2)
{
unsigned int numBits = sizeof(unsigned int) * 8; // Not part of the core algorithm
unsigned int product = 0;
unsigned int mask = 1;
for(int i =0; i < numBits; ++i, mask = mask << 1)
{
if(m1 & mask)
{
product += (m2 << i);
}
}
return product;
}
@Wang, that's a good generalization. Here is a slightly faster version, though it assumes no overflow and that a is non-negative.
int mult(int a, int b){
int p=1;
int rv=0;
for(int i=0; a >= p && i < 31; i++){
if(a & p){
rv += b;
}
p = p << 1;
b = b << 1;
}
return rv;
}
It will loop at most 1+log_2(a) times. Could be faster if you swap a and b when a > b.
import java.math.BigInteger;
public class MultiplyTest {
public static void main(String[] args) {
BigInteger bigInt1 = new BigInteger("5");
BigInteger bigInt2 = new BigInteger("8");
System.out.println(bigInt1.multiply(bigInt2));
}
}
Shift and add doesn't work (even with sign extension) when the multiplicand is negative. Signed multiplication has to be done using Booth encoding:
Starting from the LSB, a change from 0 to 1 is -1; a change from 1 to 0 is 1, otherwise 0. There is also an implicit extra bit 0 below the LSB.
For example, the number 5 (0101) will be encoded as: (1)(-1)(1)(-1). You can verify this is correct:
5 = 2^3 - 2^2 + 2 -1
This algorithm also works with negative numbers in 2's complement form:
-1 in 4-bit 2's complement is 1111. Using the Booth algorithm: (1)(0)(0)(0)(-1), where there is no space for the leftmost bit 1 so we get: (0)(0)(0)(-1) which is -1.
/* Multiply two signed integers using the Booth algorithm */
int booth(int x, int y)
{
int prev_bit = 0;
int result = 0;
while (x != 0) {
int current_bit = x & 0x1;
if (prev_bit & ~current_bit) {
result += y;
} else if (~prev_bit & current_bit) {
result -= y;
}
prev_bit = current_bit;
x = static_cast<unsigned>(x) >> 1;
y <<= 1;
}
if (prev_bit)
result += y;
return result;
}
The above code does not check for overflow. Below is a slightly modified version that multiplies two 16 bit numbers and returns a 32 bit number so it never overflows:
/* Multiply two 16-bit signed integers using the Booth algorithm */
/* Returns a 32-bit signed integer */
int32_t booth(int16_t x, int16_t y)
{
int16_t prev_bit = 0;
int16_t sign_bit = (x >> 16) & 0x1;
int32_t result = 0;
int32_t y1 = static_cast<int32_t>(y);
while (x != 0) {
int16_t current_bit = x & 0x1;
if (prev_bit & ~current_bit) {
result += y1;
} else if (~prev_bit & current_bit) {
result -= y1;
}
prev_bit = current_bit;
x = static_cast<uint16_t>(x) >> 1;
y1 <<= 1;
}
if (prev_bit & ~sign_bit)
result += y1;
return result;
}
unsigned int Multiply( unsigned int a, unsigned int b )
{
int ret = 0;
// For each bit in b
for (int i=0; i<32; i++) {
// If that bit is not equal to zero
if (( b & (1 << i)) != 0) {
// Add it to our return value
ret += a << i;
}
}
return ret;
}
I avoided the sign bit because it's kind of not the subject of the post. This is an implementation of what Wayne Conrad said, basically. Here is another problem if you want to try more low-level math operations. Project Euler is cool!
If you can use the log function:
public static final long multiplyUsingShift(int a, int b) {
int absA = Math.abs(a);
int absB = Math.abs(b);
//Find the 2^bits which is larger than "b", which turns out to be the
//ceiling of (log base 2 of b) == number of digits to shift
double logBase2 = Math.log(absB) / Math.log(2);
long bits = (long)Math.ceil(logBase2);
//Get the value of 2^bits
long biggerInteger = (int)Math.pow(2, bits);
//Find the difference of the bigger integer and "b"
long difference = biggerInteger - absB;
//Shift "bits" places to the left
long result = absA<<bits;
//Subtract the "difference" "a" times
int diffLoop = Math.abs(a);
while (diffLoop>0) {
result -= difference;
diffLoop--;
}
return (a>0&&b>0 || a<0&&b<0)?result:-result;
}
If you cannot use the log function:
public static final long multiplyUsingShift(int a, int b) {
int absA = Math.abs(a);
int absB = Math.abs(b);
//Get the number of bits for a 2^(b+1) larger number
int bits = 0;
int bitInteger = absB;
while (bitInteger>0) {
bitInteger /= 2;
bits++;
}
//Get the value of 2^bit
long biggerInteger = (int)Math.pow(2, bits);
//Find the difference of the bigger integer and "b"
long difference = biggerInteger - absB;
//Shift "bits" places to the left
long result = absA<<bits;
//Subtract the "difference" "a" times
int diffLoop = absA;
while (diffLoop>0) {
result -= difference;
diffLoop--;
}
return (a>0&&b>0 || a<0&&b<0)?result:-result;
}
I found this to be more efficient:
public static final long multiplyUsingShift(int a, int b) {
int absA = Math.abs(a);
int absB = Math.abs(b);
long result = 0L;
while (absA>0) {
if ((absA&1)>0) result += absB; //Is odd
absA >>= 1;
absB <<= 1;
}
return (a>0&&b>0 || a<0&&b<0)?result:-result;
}
and yet another way.
public static final long multiplyUsingLogs(int a, int b) {
int absA = Math.abs(a);
int absB = Math.abs(b);
long result = Math.round(Math.pow(10, (Math.log10(absA)+Math.log10(absB))));
return (a>0&&b>0 || a<0&&b<0)?result:-result;
}
In C#:
private static string Multi(int a, int b)
{
if (a == 0 || b == 0)
return "0";
bool isnegative = false;
if (a < 0 || b < 0)
{
isnegative = true;
a = Math.Abs(a);
b = Math.Abs(b);
}
int sum = 0;
if (a > b)
{
for (int i = 1; i <= b; i++)
{
sum += a;
}
}
else
{
for (int i = 1; i <= a; i++)
{
sum += b;
}
}
if (isnegative == true)
return "-" + sum.ToString();
else
return sum.ToString();
}
Java: Considering the fact that every number can be split into powers of two:
1 = 2 ^ 0
2 = 2 ^ 1
3 = 2 ^ 1 + 2 ^ 0
...
We want to get x where:
x = n * m
So we can achieve that by doing following steps:
1. While m is greater than 0:
1.1 find the biggest pow such that 2^pow is less than or equal to m
1.2 multiply n * 2^pow and decrease m to m - 2^pow
2. Sum the results.
Sample implementation using recursion:
long multiply(int n, int m) {
int pow = 0;
while (m >= (1 << ++pow)) ;
pow--;
if (m == 1 << pow) return (n << pow);
return (n << pow) + multiply(n, m - (1 << pow));
}
I got this question in my last job interview and this answer was accepted.
EDIT: solution for positive numbers
This is the simplest C99/C11 solution for positive numbers:
unsigned multiply(unsigned x, unsigned y) { return sizeof(char[x][y]); }
Another thinking-outside-the-box answer:
BigDecimal a = new BigDecimal(123);
BigDecimal b = new BigDecimal(2);
BigDecimal result = a.multiply(b);
System.out.println(result.intValue());
public static int multiply(int a, int b)
{
int temp = 0;
if (b == 0) return 0;
for (int ii = 0; ii < abs(b); ++ii) {
temp = temp + a;
}
return b >= 0 ? temp : -temp;
}
public static int abs(int val) {
return val>=0 ? val : -val;
}
public static void main(String[] args) {
System.out.print("Enter value of A -> ");
Scanner s=new Scanner(System.in);
double j=s.nextInt();
System.out.print("Enter value of B -> ");
Scanner p=new Scanner(System.in);
double k=p.nextInt();
double m=(1/k);
double l=(j/m);
System.out.print("Multiplication of A & B=> "+l);
}
package com.amit.string;
// Here I am passing two values, 7 and 3, and the method getResult() will
// return 21 without using any operator except the increment operator, ++.
//
public class MultiplyTwoNumber {
public static void main(String[] args) {
int a = 7;
int b = 3;
System.out.println(new MultiplyTwoNumber().getResult(a, b));
}
public int getResult(int i, int j) {
int result = 0;
// The nested loops increment result i*j times (7 * 3 = 21 here)
for (int k = 0; k < i; k++) {
for (int p = 0; p < j; p++) {
result++;
}
}
return result;
}
}
Loop it. Run a loop seven times, adding the number you are multiplying by seven on each pass.
Pseudocode:
total = 0
multiply = 34
i = 0
loop while i < 7
    total = total + multiply
    i = i + 1
endloop
A JavaScript approach for positive numbers
function recursiveMultiply(num1, num2){
const bigger = num1 > num2 ? num1 : num2;
const smaller = num1 <= num2 ? num1 : num2;
const indexIncrement = 1;
const resultIncrement = bigger;
return recursiveMultiplyHelper(bigger, smaller, 0, indexIncrement, resultIncrement)
}
function recursiveMultiplyHelper(num1, num2, index, indexIncrement, resultIncrement){
let result = 0;
if (index === num2){
return result;
}
if ((index+indexIncrement+indexIncrement) >= num2){
indexIncrement = 1;
resultIncrement = num1;
} else{
indexIncrement += indexIncrement;
resultIncrement += resultIncrement;
}
result = recursiveMultiplyHelper(num1, num2, (index+indexIncrement), indexIncrement, resultIncrement);
result += resultIncrement;
console.log(num1, num2, index, result);
return result;
}
Think about the normal multiplication method we use:

      1101    => 13
    x 0101    => 5
   ----------
      1101
     0000
    1101
   0000
   ----------
   1000001    => 65
Writing the same idea in code:
#include<stdio.h>
int multiply(int a, int b){
int res = 0,count =0;
while(b>0) {
if(b & 0x1)
res = res + (a << count);
b = b>>1;
count++;
}
return res;
}
int main() {
printf("Sum of x+y = %d", multiply(5,10));
return 0;
}
Very simple, pal... Each time you left shift a number you are multiplying it by 2, which means the answer is (x << 3) - x.
To multiply two numbers without the * operator:
int mul(int a,int b) {
int result = 0;
if(b > 0) {
for(int i=1;i<=b;i++){
result += a;
}
}
return result;
}
