I am learning recursion and understand most of it, but this specific one baffles me. It is rather basic, but I don't get which statement is the base case; I think it's the print line, but I could obviously be wrong. I know what the net result is, but I can't seem to follow how it gets there step by step.
Code:
private static final String DIGIT_TABLE = "0123456789abcdef";

public static void printIt(long n, int base) {
    if (n >= base)
        printIt(n / base, base);
    System.out.print(DIGIT_TABLE.charAt((int) (n % base)));
}
The base case is when n<base, or when the remaining number can be represented as a single digit in base base.
Here's an example of how the program would execute if the base were 16:
n_1: 0x1a5
n_2: 0x1a
n_3: 0x1
****
print n_3 % 16 -> 1
print n_2 % 16 -> a
print n_1 % 16 -> 5
At the point marked ****, the condition evaluates to false, so it doesn't go into infinite recursion.
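To watch that trace happen, here is a minimal, self-contained demo (the class and main method are my own scaffolding, not part of the original code):

public class PrintItDemo {
    private static final String DIGIT_TABLE = "0123456789abcdef";

    public static void printIt(long n, int base) {
        if (n >= base)               // recursive case: handle the leading digits first
            printIt(n / base, base);
        System.out.print(DIGIT_TABLE.charAt((int) (n % base))); // this call's digit
    }

    public static void main(String[] args) {
        printIt(0x1a5, 16); // prints "1a5": the deepest call prints '1' first
        System.out.println();
    }
}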
If you are only looking for the base case or termination condition of the recurrence relation, then try this.
public static long printIt(long n, int base) {
    // Explicit termination: n == 0 needs no more digits, and a base of
    // 0 or 1 would never reduce n, so bail out immediately.
    if (n == 0 || base == 0 || base == 1) return 0;
    if (n >= base)
        printIt(n / base, base);
    // Note: charAt returns a char, so ln holds the character code of the digit.
    long ln = DIGIT_TABLE.charAt((int) (n % base));
    return ln;
}
In an n-base number system, all the digits from 0 to n-1 can be represented in single-digit notation. For example, in a binary (base-2) system, 0 and 1 can be represented as single digits, and in a hexadecimal (base-16) system, all numbers from 0 to 15 can be represented as single digits (a=10, b=11, ... f=15). Your program reduces the long n by dividing it by base until it becomes less than base; the deepest call then prints the most significant digit of the hexadecimal representation of n, and the remaining digits are printed as the recursion unwinds. I assume the base here is 16, since your string goes up to 'f'. Note that if you instead collect the remainders of n % base iteratively, you get the hexadecimal representation reversed.
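As a sketch of that last point (the method name is mine, not from the original code), an iterative version collects the remainders and reverses them at the end:

// Iterative counterpart: collect remainders (least significant digit first),
// then reverse the result. Method name is illustrative only.
public static String toBase(long n, int base) {
    StringBuilder sb = new StringBuilder();
    do {
        sb.append(DIGIT_TABLE.charAt((int) (n % base))); // digits come out reversed
        n /= base;
    } while (n > 0);
    return sb.reverse().toString();
}

For example, toBase(0x1a5, 16) collects '5', 'a', '1' and returns "1a5".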
The factorial of 42 goes beyond the limit of the long data type in Java; that's why I can't get its digits.
42!
The factorial of 42 has 52 digits, while the maximum value of the long datatype in Java is 9,223,372,036,854,775,807, i.e. only 19 digits. But don't worry: Java has a BigInteger class to store large numbers such as 100!. It is a bit slower than primitive data types such as int and long, because it stores integers in the form of arrays. There are many ways to use the BigInteger class, but here's the most common one. This code calculates the factorial of 42 and prints it:
// Java program to find large factorials using BigInteger
import java.math.BigInteger;

public class Factorial
{
    // Returns factorial of N
    static BigInteger factorial(int N)
    {
        BigInteger fact = new BigInteger("1"); // Or BigInteger.ONE
        // Multiply fact by 2, 3, ... N
        for (int i = 2; i <= N; i++)
            fact = fact.multiply(BigInteger.valueOf(i));
        return fact;
    }

    public static void main(String args[])
    {
        int N = 42;
        System.out.println(factorial(N));
    }
}
Output:
1405006117752879898543142606244511569936384000000000
Explanation
We have to import the BigInteger class, which is in the java.math package. I have named my file Factorial.java, so my class name is Factorial.
I've put the logic in a function; if you want the code without a function, just comment below. Now look at this line:
BigInteger fact = new BigInteger("1");
This declares fact as a BigInteger equal to 1. In the for loop,
i starts at 2, since multiplying by 1 changes nothing (1*1 = 1).
fact = fact.multiply(BigInteger.valueOf(i));
This is the syntax for multiplying BigIntegers: it multiplies the BigInteger fact by i.
Have a look at this GeeksforGeeks article- https://www.geeksforgeeks.org/biginteger-class-in-java/
If you only care about the number of digits, I would recommend taking a more mathematical approach. There are ways to compute this number without actually computing the factorial itself. This would not require so big a variable and would be a lot faster.
You could think of it this way:
Digits(n!) = floor(log10(n!)) + 1 = floor(log10(n * (n-1) * ... * 1)) + 1 = floor(sum_{i=1}^{n} log10(i)) + 1
This would still require iteration, but it deals with much smaller numbers.
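A small Java sketch of that sum (the method name is my own):

// Digit count of n! via the sum of base-10 logs; never computes n! itself.
static int factorialDigits(int n) {
    double sum = 0.0;
    for (int i = 2; i <= n; i++)
        sum += Math.log10(i);
    return (int) Math.floor(sum) + 1;
}

For example, factorialDigits(42) returns 52.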
If you still want O(1) complexity for this task, you can go with a pretty good approximation I've just tried.
Digits(n!) ~ floor(integral from 1 to n of log10(x) dx) + 1 = floor((n*ln(n) - n + 1) / ln(10)) + 1
Of course, the latter is not absolutely exact, since we are now integrating a continuous function, but it may still be worth implementing.
Digits(42!) ~ floor(50.37...) + 1 = 50 + 1 = 51, one short of the true count of 52 (the exact sum above gives floor(51.14...) + 1 = 52).
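As a one-liner in Java (again, the method name is mine):

// O(1) approximation of the digit count of n!, from the integral above.
// Can be off by one (e.g. it underestimates for n = 42).
static int factorialDigitsApprox(int n) {
    return (int) Math.floor((n * Math.log(n) - n + 1) / Math.log(10)) + 1;
}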
Sorry for a possible duplicate post; I saw many similar topics here, but none was exactly what I needed. Before actually posting the question I want to explicitly state that this is NOT HOMEWORK.
So the question is: how do you convert a large integer number into binary representation? The integer is too large to fit in a primitive type (a Java long cannot be used). The input might be represented as a string or as an array of digits. Disclaimer: this is not going to be a production-level solution, so I don't want to use the BigInteger class. Instead, I want to implement the algorithm myself.
So far I ended up with the following approach:
Input and output values are represented as strings. If the last digit of the input is even, I prepend "0" to the output; otherwise, "1". After that, I replace the input with the input divided by 2, using another method, divideByTwo, for the arithmetic division. This runs in a loop until the input becomes "0" or "1". Finally, I prepend the input to the output. Here's the code:
Helper Method
/**
 * @param s input integer value in string representation
 * @return the input divided by 2 in string representation
 **/
static String divideByTwo(String s)
{
    String result = "";
    int dividend = 0;
    int quotient = 0;
    boolean dividendIsZero = false;
    while (s.length() > 0)
    {
        int i = 1;
        dividend = Character.getNumericValue(s.charAt(0));
        while (dividend < 2 && i < s.length())
        {
            if (dividendIsZero) { result += "0"; }
            dividend = Integer.parseInt(s.substring(0, ++i));
        }
        quotient = dividend / 2;
        dividend -= quotient * 2;
        dividendIsZero = (dividend == 0);
        result += Integer.toString(quotient);
        s = s.substring(i);
        if (!dividendIsZero && s.length() != 0)
        {
            s = Integer.toString(dividend) + s;
        }
    }
    return result;
}
Main Method
/**
 * @param s the integer in string representation
 * @return the binary integer in string representation
 **/
static String integerToBinary(String s)
{
    if (!s.matches("[0-9]+"))
    {
        throw new IllegalArgumentException(s + " cannot be converted to integer");
    }
    String result = "";
    while (!s.equals("0") && !s.equals("1"))
    {
        int lastDigit = Character.getNumericValue(s.charAt(s.length() - 1));
        result = lastDigit % 2 + result; // if last digit is even prepend 0, otherwise 1
        s = divideByTwo(s);
    }
    return (s + result).replaceAll("^0*", "");
}
As you can see, the runtime is O(n^2): the loop in integerToBinary runs O(n) times, and each divideByTwo call inside it costs O(n). Is there a way to achieve a better runtime? Thanks in advance!
Try this:
new BigInteger("12345678901234567890123456789012345678901234567890").toString(2);
Edit:
For making a big-number class, you may want to have a look at my post about this a week ago. Ah, the question was by you, never mind.
The conversion between different number systems in principle is a repeated "division, remainder, multiply, add" operation. Let's look at an example:
We want to convert 123 from decimal to a base 3 number. What do we do?
Take the remainder modulo 3 - prepend this digit to the result.
Divide by 3.
If the number is bigger than 0, continue with this number at step 1
So it looks like this:
123 % 3 == 0. ==> The last digit is 0.
123 / 3 == 41.
41 % 3 == 2 ==> The second last digit is 2.
41 / 3 == 13
13 % 3 == 1 ==> The third-last digit is 1.
13 / 3 == 4
4 % 3 == 1 ==> The fourth-last digit is 1 again.
4 / 3 == 1
1 % 3 == 1 ==> The fifth (leading) digit is 1.
So, we have 11120 as the result.
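In Java, that loop looks something like this (a sketch using a primitive long, so ordinary division is available; the method name and digit table are mine):

// Repeated divide-and-take-remainder, as walked through above.
// Handles any radix from 2 to 16 with this digit table.
static String convert(long n, int radix) {
    final String DIGITS = "0123456789abcdef";
    StringBuilder result = new StringBuilder();
    do {
        result.insert(0, DIGITS.charAt((int) (n % radix))); // prepend the digit
        n /= radix;
    } while (n > 0);
    return result.toString();
}

convert(123, 3) returns "11120", matching the walkthrough.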
The problem is that this requires you to already have division by 3 for your decimal-format number, which is usually not available if you don't implement your number in a decimal-based format (like I did in the answer to your last question linked above).
But it works for converting from your internal number format to any external format.
So, let's look at how we would do the inverse calculation, from 11120 (base 3) to its decimal equivalent. (Base 3 here is the placeholder for an arbitrary radix, base 10 the placeholder for your internal radix.) In principle, this number can be written as:
1*3^4 + 1*3^3 + 1*3^2 + 2*3^1 + 0*3^0
A better way (faster to calculate) is this:
((((1 * 3) + 1) * 3 + 1) * 3 + 2) * 3 + 0
Evaluating it step by step from the inside out gives these intermediate values:
1
1 * 3  =   3;   3 + 1 =   4
4 * 3  =  12;  12 + 1 =  13
13 * 3 =  39;  39 + 2 =  41
41 * 3 = 123; 123 + 0 = 123
(This is known as Horner's scheme, normally used for evaluating polynomials.)
You can implement this in the number scheme you are implementing, if you know how to represent the input radix (and the digits) in your target system.
(I just added such a calculation to my DecimalBigInt class, but you may want to do the calculations directly in your internal data structure instead of creating a new object (or even two) of your BigNumber class for every decimal digit to be input.)
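For primitive types the same idea takes only a few lines (a sketch with a long accumulator and a method name of my own; a big-number class would perform the same two operations on its internal representation instead):

// Horner-scheme parsing: fold the digits left to right,
// multiplying the accumulator by the radix at each step.
static long parseDigits(int[] digits, int radix) {
    long value = 0;
    for (int d : digits)
        value = value * radix + d; // one multiply, one add per digit
    return value;
}

parseDigits(new int[] {1, 1, 1, 2, 0}, 3) returns 123, as in the example above.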
Among the simple methods there are two possible approaches (all numbers that appear here are decimal):
work in decimal and divide by 2 in each step, as you outlined in the question
work in binary and multiply by 10 in each step, for example 123 = ((1 * 10) + 2) * 10 + 3
If you are working on a binary computer, approach 2 may be easier.
See for example this post for a more in-depth discussion of the topic.
Wikipedia says:
For very large numbers, these simple methods are inefficient because they perform a large number of multiplications or divisions where one operand is very large. A simple divide-and-conquer algorithm is more effective asymptotically: given a binary number, it is divided by 10^k, where k is chosen so that the quotient roughly equals the remainder; then each of these pieces is converted to decimal and the two are concatenated. Given a decimal number, it can be split into two pieces of about the same size, each of which is converted to binary, whereupon the first converted piece is multiplied by 10^k and added to the second converted piece, where k is the number of decimal digits in the second, least-significant piece before conversion.
I have tried it; this method is faster than the conventional one for numbers longer than about 10,000 digits.
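A sketch of that idea in Java, in the binary-to-decimal direction (it leans on BigInteger for the arithmetic purely to keep the example short; the splitting strategy is the point, the method name is mine, and String.repeat needs Java 11+):

import java.math.BigInteger;

// Divide-and-conquer radix conversion: split n by a power of 10 chosen so
// the two halves are about the same size, convert each half recursively,
// and concatenate. Real implementations also cache the powers of 10.
static String toDecimal(BigInteger n) {
    if (n.bitLength() <= 63) {                           // small enough for long
        return Long.toString(n.longValueExact());
    }
    int decimalDigits = (int) (n.bitLength() * 0.30103); // log10(2) ~ 0.30103
    int k = decimalDigits / 2;
    BigInteger[] qr = n.divideAndRemainder(BigInteger.TEN.pow(k));
    String low = toDecimal(qr[1]);
    // The low half must be zero-padded to exactly k digits.
    return toDecimal(qr[0]) + "0".repeat(k - low.length()) + low;
}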
I have a question about this problem, and any help would be great!
Write a program that takes one integer N as an argument and prints out its truncated binary logarithm [log2 N]. Hint: [log2 N] = l is the largest integer l such that 2^l <= N.
I got this much down:
int N = Integer.parseInt(args[0]);
double l = Math.log(N) / Math.log(2);
double a = Math.pow(2, l);
But I can't figure out how to truncate l while keeping 2^l <= N
Thanks
This is what I have now:
int N = Integer.parseInt(args[0]);
int i = 0; // loop control counter
int v = 1; // current power of two
while (Math.pow(2, i) <= N) {
    i = i + 1;
    v = 2 * v;
}
System.out.println(Integer.highestOneBit(N));
This prints out the largest power of two that is less than or equal to N. My test still comes out false, and I think that is because the question asks me to print the largest i rather than the power of two itself. So when I do
Integer.highestOneBit(i)
the correct i does not print out. For example, if N = 38, the highest i should be 5, but instead it prints out 4.
Then I tried this:
int N = Integer.parseInt(args[0]);
int i; // loop control counter
for (i= 0; Math.pow(2 , i) == N; i++) {
}
System.out.println(Integer.highestOneBit(i));
Where if I make N = 2, it should print 1, but instead it is printing 0.
I've tried a bunch of things on top of that, but can't figure out what I am doing wrong. Help would be greatly appreciated. Thanks
I believe the answer you're looking for here is based on the underlying notion of how a number is actually stored in a computer, and how that can be used to your advantage in a problem such as this.
Numbers in a computer are stored in binary - a series of ones and zeros where each column represents a power of 2:
(The original answer included a place-value diagram here, from http://www.mathincomputers.com/binary.html - see that page for more info on binary.)
The zeroth power of 2 is over on the right. So, 01001, for example, represents the decimal value 2^0 + 2^3; 9.
This storage format, interestingly, gives us some additional information about the number. We can see that 2^3 is the highest power of 2 that 9 is made up of. Let's imagine it's the only power of two it contains, by chopping off all the other 1's except the highest. This is a truncation, and results in this:
01000
You'll now notice this value represents 8, or 2^3. Taking it down to basics, let's now look at what log base 2 really represents. It's the number you raise 2 to the power of to get the thing you're finding the log of. log2(8) is 3. Can you see the pattern emerging here?
The position of the highest bit can be used as an approximation to its log base 2 value.
2^3 is the 3rd bit over in our example, so a truncated approximation to log base 2(9) is 3.
So the truncated binary logarithm of 9 is 3. 2^3 is less than 9; this is where the "less than" comes from, and the algorithm to find its value simply involves finding the position of the highest bit that makes up the number.
Some more examples:
12 = 1100. Position of the highest bit = 3 (starting from zero on the right). Therefore the truncated binary logarithm of 12 = 3. 2^3 is <= 12.
38 = 100110. Position of the highest bit = 5. Therefore the truncated binary logarithm of 38 = 5. 2^5 is <= 38.
This level of pushing bits around is known as bitwise operations in Java.
Integer.highestOneBit(n) returns essentially the truncated value. So if n was 9 (1001), highestOneBit(9) returns 8 (1000), which may be of use.
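A loop-free alternative (my addition, not from the original answer) uses the JDK's bit-counting helper:

// Position of the highest set bit = truncated binary log (for n > 0).
int n = 38;
int log2 = 31 - Integer.numberOfLeadingZeros(n); // 5, since 2^5 <= 38 < 2^6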
A simple way of finding the position of that highest bit involves shifting the number right until only its highest bit is left, counting the shifts. Something a little like this:
// Input number - 1001:
int n = 9;
int position = 0;
// Cache the input number - the loop destroys it.
int originalN = n;
while (n > 1) {
    position++; // Also position = position + 1;
    n = n >> 1; // Shift the bits over one spot (overwriting n).
    // 1001 becomes 0100, then 0010, then 0001 on each iteration.
    // We stop when only the highest bit is left; the number of
    // shifts taken is the position of that bit.
}
// Position is now the index of the highest set bit.
// In your case, this is also the value of your truncated binary log.
System.out.println("Binary log of " + originalN + " is " + position);
I will explain first what I mean by "complementing an integer value excluding the leading zero bits" (from now on I will call it the Non-Leading-Zero complement, or NLZ-complement, for brevity).
For example, take the integer 92; its binary representation is 1011100. If we perform a normal bitwise NOT or complement, the result is -93 (signed integer), or 11111111111111111111111110100011 in binary. That's because the leading zero bits are complemented too.
For the NLZ-complement, the leading zero bits are not complemented, so the NLZ-complement of 92 (1011100) is 35 (100011 in binary). The operation is performed by XORing the input value with a sequence of 1 bits as long as the non-leading-zero part. The illustration:
92: 1011100
    1111111 (xor)
    -------
    0100011 => 35
I made the Java algorithm like this:
public static int nonLeadingZeroComplement(int n) {
    if (n == 0) {
        return ~n;
    }
    if (n == 1) {
        return 0;
    }
    // Find the count of non-leading-zero (NLZ) bits.
    // This operation is the same as: ceil(log2(n))
    int binaryBitsCount = Integer.SIZE - Integer.numberOfLeadingZeros(n - 1);
    // Use the NLZ bits count to generate a sequence of that many 1 bits as the
    // complementer, via a shift-left trick equivalent to 2^binaryBitsCount.
    // The long literal 1L is used because binaryBitsCount can be 32
    // (if the input is -1, for example), and 2^32 cannot be represented
    // in Java's signed int type.
    int oneBitsSequence = (int) ((1L << binaryBitsCount) - 1);
    // XOR the input value with the sequence of 1 bits.
    return n ^ oneBitsSequence;
}
I need advice on how to optimize the above algorithm, especially the line that generates the sequence of 1 bits (oneBitsSequence), or perhaps someone can suggest a better algorithm?
UPDATE: I would also like to know the established term for this non-leading-zero complement.
You can get the highest one bit through the Integer.highestOneBit(i) method, shift this one step left, and then subtract 1. This gets you the correct length of 1s:
private static int nonLeadingZeroComplement(int i) {
    int ones = (Integer.highestOneBit(i) << 1) - 1;
    return i ^ ones;
}
For example,
System.out.println(nonLeadingZeroComplement(92));
prints
35
Obviously @keppil has provided the shortest solution. Another solution could be like this:
private static int integerComplement(int n) {
    String binaryString = Integer.toBinaryString(n);
    String temp = "";
    for (char c : binaryString.toCharArray()) {
        if (c == '1') {
            temp += "0";
        } else {
            temp += "1";
        }
    }
    int base = 2;
    int complement = Integer.parseInt(temp, base);
    return complement;
}
For example,
System.out.println(integerComplement(92));
prints 35.
I've been experimenting with Python as a beginner for the past few hours. I wrote a recursive function that returns recurse(x) = x! in both Python and Java, to compare the two. The two pieces of code are identical in logic, but for some reason the Python one works, whereas the Java one does not. In Python, I wrote:
x = int(raw_input("Enter: "))

def recurse(num):
    if num != 0:
        num = num * recurse(num - 1)
    else:
        return 1
    return num

print recurse(x)
Where variable num multiplies itself by num-1 until it reaches 0, and outputs the result. In Java, the code is very similar, only longer:
import java.util.Scanner;

public class Default {
    static Scanner input = new Scanner(System.in);

    public static void main(String[] args) {
        System.out.print("Enter: ");
        int x = input.nextInt();
        System.out.print(recurse(x));
    }

    public static int recurse(int num) {
        if (num != 0) {
            num = num * recurse(num - 1);
        } else {
            return 1;
        }
        return num;
    }
}
If I enter 25, the Python code returns 1.5511x10^25, which is the correct answer, but the Java code returns 2,076,180,480, which is not, and I'm not sure why.
Both codes go about the same process:
Check if num is zero
If num is not zero
num = num multiplied by the recursion of num - 1
If num is zero
Return 1, ending that stack of recurse calls, and causing every returned num to begin multiplying
return num
There are no braces in Python; I thought that somehow changed things, so I removed the braces from the Java code, but nothing changed. Changing the condition (num != 0) to (num > 0) didn't change anything either. Adding an if statement to the else provided more context, but the value was still the same.
Printing the values of num at every point gives an idea of how the function goes wrong:
Python:
1
2
6
24
120
720
5040
40320
362880
3628800
39916800
479001600
6227020800
87178291200
1307674368000
20922789888000
355687428096000
6402373705728000
121645100408832000
2432902008176640000
51090942171709440000
1124000727777607680000
25852016738884976640000
620448401733239439360000
15511210043330985984000000
15511210043330985984000000
A steady increase. In the Java version:
1
2
6
24
120
720
5040
40320
362880
3628800
39916800
479001600
1932053504
1278945280
2004310016
2004189184
-288522240
-898433024
109641728
-2102132736
-1195114496
-522715136
862453760
-775946240
2076180480
2076180480
Not a steady increase. In fact, the function is returning negative numbers, even though num should never get below zero.
Both Python and Java codes are going about the same procedure, yet they are returning wildly different values. Why is this happening?
Two words - integer overflow
While I'm not an expert in Python, I assume it expands the size of the integer type according to its needs.
In Java, however, the size of the int type is fixed at 32 bits, and since int is signed, we actually have only 31 bits to represent positive numbers. Once a result is bigger than the maximum, it overflows the int - that is, there is no room to represent the whole number.
While in the C language the behavior in such a case is undefined, in Java it is well defined: the result simply keeps the least significant 4 bytes.
For example:
System.out.println(Integer.MAX_VALUE + 1);
// Integer.MAX_VALUE = 0x7fffffff
results in:
-2147483648
// 0x7fffffff + 1 = 0x80000000 = -2147483648 as a signed int
Edit
Just to make it clearer, here is another example. The following code:
int a = 0x12345678;
int b = 0x12345678;
System.out.println("a*b as int multiplication (overflown) [DECIMAL]: " + (a*b));
System.out.println("a*b as int multiplication (overflown) [HEX]: 0x" + Integer.toHexString(a*b));
System.out.println("a*b as long multiplication (overflown) [DECIMAL]: " + ((long)a*b));
System.out.println("a*b as long multiplication (overflown) [HEX]: 0x" + Long.toHexString((long)a*b));
outputs:
a*b as int multiplication (overflown) [DECIMAL]: 502585408
a*b as int multiplication (overflown) [HEX]: 0x1df4d840
a*b as long multiplication (overflown) [DECIMAL]: 93281312872650816
a*b as long multiplication (overflown) [HEX]: 0x14b66dc1df4d840
And you can see that the int result (0x1df4d840) is exactly the least significant 4 bytes of the full long result (0x14b66dc1df4d840).
Unlike Java, Python has built-in support for integers of unlimited precision. In Java, an int is limited to 32 bits and will overflow.
As others already wrote, you get an overflow; the numbers simply won't fit within Java's int representation. Python has a built-in bignum capability, whereas Java's primitive types do not.
Try some smaller values and you will see your Java code works fine.
Java's int range
int
4 bytes, signed (two's complement). -2,147,483,648 to 2,147,483,647. Like all numeric types ints may be cast into other numeric types (byte, short, long, float, double). When lossy casts are done (e.g. int to byte) the conversion is done modulo the length of the smaller type.
As you can see, the range of int is limited.
The problem is very simple: in Java the maximum value of an integer is 2147483647, which you can print with System.out.println(Integer.MAX_VALUE); the minimum is given by System.out.println(Integer.MIN_VALUE);
Because in the Java version you store the number as an int, which is 32 bits. Consider the biggest (unsigned) number you can store with two bits in binary: 11, which is 3 in decimal. The biggest number that can be stored with four bits is 1111, which is 15 in decimal. A 32-bit (signed) number cannot store anything bigger than 2,147,483,647. When you try to store a number bigger than this, it wraps back around and starts counting up from the negative numbers. This is called overflow.
If you want to store bigger numbers, try long - though note that 25! overflows even a long, so for this input you really need BigInteger.
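A minimal sketch of the same recursion rewritten with BigInteger (my adaptation, not code from the original answers):

import java.math.BigInteger;

// Same structure as recurse(int), but BigInteger never overflows.
public static BigInteger recurse(int num) {
    if (num == 0) {
        return BigInteger.ONE; // base case
    }
    return BigInteger.valueOf(num).multiply(recurse(num - 1));
}

With this, recurse(25) returns 15511210043330985984000000, matching the Python result.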