I need to find the number of trailing zeros of the factorial of a given number.
For smaller inputs it works correctly, but for larger inputs it gives wrong results.
long input = s.nextInt();
long mul = 1;
int count = 0;
while (input != 1 && input < 1000000000) {
    log.debug("mul=" + mul + " Input=" + input + " count=" + count);
    mul = mul * input;
    input--;
    while (mul % 10 == 0) {
        count++;
        mul = mul / 10;
    }
    mul = mul % 100;
}
System.out.println(count);
log.debug(count);
count = 0;
For inputs:

6       // working correctly
3       // working correctly
60      // working correctly
100     // working correctly
1024    // working correctly
23456   // wrong
8735373 // wrong

Expected output:

0
14
24
253
5861    // my output is 5858
2183837 // my output is 2182992
First, I really like the solution provided by questioner and answerer. But that does not answer what is wrong with your code.
It took me a while to figure it out, but I think there is a very subtle mathematical assumption you are making that doesn't actually hold true. Your assumption is that you can reduce modulo 100 every iteration and nothing is lost. That assumption turns out to be false.
Reduction modulo 100 will not work in some cases. 100 = 5^2 * 2^2 . The problem is that you are losing 5s and 2s that might ultimately contribute to more 0s, which means the answer provided by your program may be less than the real answer.
For example, if at some iteration the result is 125, then after you reduce modulo 100, you get 25. If at the next iteration you multiply by a number like 72, then the result would be (25*72) = 1800 in your program, which means 2 zeros. Now step back and look at what the result would be if you multiplied 125 by 72: (125*72) = 9000. That's 3 zeros. So your program missed the 3rd zero because reducing modulo 100 turned the number 125 = 5^3 into 25 = 5^2 (i.e. it lost a multiple of 5).
If you do not follow the mathematical argument, here's what you can do to see proof that I am right: change your mod reduction by 100 to mod reduction by 1000. I bet it gets closer to the right answer. And so long as you are careful about overflows, you can even try 10000 to get even closer.
But ultimately, this solution approach is flawed because your modulo reduction will be losing multiples of 5s and 2s for high enough numbers. Conclusion: use an approach like questioner and answerer, but make sure you can prove that it works! (He already sketched the proof for you!)
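The 125-versus-25 example above is easy to check directly (a tiny sketch of my own):

```java
public class Mod100Demo {
    public static void main(String[] args) {
        // Multiplying the full value 125 by 72 produces three trailing zeros...
        System.out.println(125 * 72);          // 9000
        // ...but after reducing modulo 100 first, one zero is lost.
        System.out.println((125 % 100) * 72);  // 1800
    }
}
```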
You are losing zeroes due to the truncation of your number to mod 100.
125 will add 3 zeroes. 625 adds 4. 3125 adds 5.
But you retain just 2 digits after the zeroes are trimmed.
(That it works for 100 and 1024 is coincidental.)
However, when 25 or greater powers of 5 come in, you may lose a couple of zeros, as a multiple of 8 may become a mere multiple of 2 due to truncation of the hundreds digit.
Instead of doing mul = mul % 100, you should keep more digits, depending on the number itself.
How many? As many as the highest power of 5 less than the number.
Instead of actually computing the factorial, which will take far too long and possibly overflow your integers, just count the factors of 5 in the numbers being multiplied.
This works because the number of trailing zeroes is determined by the number of factors of 5: in a factorial there are always more factors of 2 than factors of 5, so the number of factors of 5 in n! equals the number of trailing zeroes.
int answer = 0;
for (int i = 5; i <= input; i += 5) {   // i starts at 5 because we do not count 0
    for (int n = i; n % 5 == 0; n /= 5)
        answer++;                       // 25 contributes two 5s, 125 three, etc.
}
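The same count can be computed in O(log n) with the well-known closed form, summing floor(n/5) + floor(n/25) + floor(n/125) + ... (a sketch; the method name is mine):

```java
public class TrailingZeros {
    // Trailing zeros of n! = number of factors of 5 in n!:
    // every multiple of 5 contributes one, every multiple of 25 one more, etc.
    static long trailingZeros(long n) {
        long count = 0;
        for (long p = 5; p <= n; p *= 5)
            count += n / p;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(trailingZeros(100));     // 24
        System.out.println(trailingZeros(23456));   // 5861
        System.out.println(trailingZeros(8735373)); // 2183837
    }
}
```

Note that this matches the expected outputs from the question, including the two cases the original code got wrong.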
I'm solving a fraction problem:
if the fraction part is repeating, print only the repeating digits as a String.
if the fraction part is not repeating, print up to 4 digits after the decimal point.
if the fraction output is simple, print it directly as a string.
E.g.
a) 1/3 = 0.3333..., here 3 is recurring, so I need to print only 0.3.
b) 5/2 = 2.5 -- simple math.
c) 22/7 = 3.142857142857..., here 142857 is repeating, so I need to print 3.142857.
Can you please help? The solution should have O(k) time and space complexity.
public String fractionToDecimal(int numerator, int denominator) {
    StringBuilder tem = new StringBuilder();
    long q = numerator / denominator;
    return tem.append(q).toString();
}
This question is quite complicated. You need to know quite a few advanced concepts to fully understand why it is so.
Binary vs. Decimal
The notion '1 divided by 3 repeats the 3 endlessly' only works if you presuppose a decimal rendering system. Think about it: Why does '10' come after '9'? As in, why did 'we' decide not to have a specific symbol for the notion 'ten', and instead this is the exact point on the number line 'we' decided to go for two digits, the first digit and the zero digit written next to each other? That's an arbitrary choice, and if you delve into history, humans made this choice because we have 10 fingers. Not all humans made this choice, to be clear: Sumerians had unique digits all the way up to 60, and this explains why there are 60 seconds in a minute, for example. There are remote tribes using 6, 3, and even weirder number systems.
If you want to spend some fun math time, go down the rabbit hole on Wikipedia reading about exotic number systems. It's mesmerizing stuff, a fine way to spend an hour (or two, or ten!)
Imagine a number system for aliens that had only 3 fingers total. They'd count:
Human   Alien
0       0
1       1
2       2
3       10
4       11
5       12
6       20
7       21
8       22
9       100
10      101
Their number system isn't "weird" or "bad". Just different.
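If you want to play with this, Java will happily render numbers in other bases for you. A quick sketch using the standard `Integer.toString(int value, int radix)`:

```java
public class Base3Demo {
    public static void main(String[] args) {
        // Print the "3-finger alien" (base-3) rendering of 0 through 10.
        for (int i = 0; i <= 10; i++)
            System.out.println(i + " -> " + Integer.toString(i, 3));
    }
}
```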
However, that number concept goes both ways: When you write, say, "1 divided by 4 = 0.25", that 25 is also in 'decimal' (the name for a number system that has 10 digits, like what most humans on the planet earth like to use).
In human, 1 divided by 10 is 0.1. Easy, right?
Well, in '3-finger alien', one divided by three is... 0.1.
Not 0.1 repeating. Nono. Just 0.1. It fits perfectly. In their number system, one divided by ten is actually quite complicated, where it is ridiculously simple in ours.
Computers are aliens too. They have 2 fingers. Just 0 and 1, that's all. A computer counts: 0 1 10 11 100 101 110 111 and so on.
An a / b operation that repeats in decimal may not repeat in binary. Or it may. Or a number that doesn't repeat in decimal may repeat in binary (1/5 repeats endlessly in binary, in decimal, it's just 0.2, easy).
Given that computers don't like counting in decimal, any basic operation is an instant 'loser' - you can no longer answer the question if you even write double or float anywhere in your code here.
But it requires knowledge of binary and some fairly fundamental math to even know that.
Solution strategy 1: BigDecimal
NOTE: I don't think this is the best way to go about it, I'd go with strategy 2, but for completeness...
Java has a class baked into the core library called java.math.BigDecimal that is intended to be used when you don't want any losses. double and float are [A] binary based, so trying to use them to figure out repeating strides is completely impossible, and [B] silently round numbers all the time to the nearest representable number. You get rounding errors. Even 0.1 + 0.2 isn't quite 0.3 in double math.
For these reasons, BigDecimal exists, which is decimal, and 'perfect'.
The problem is, well, it's perfect. In basis, dividing 1 by 3 in BigDecimal math is impossible - an exception occurs. You need to understand the quite complicated API of BigDecimal to know about how to navigate this issue. You can tell BigDecimal about exactly how much precision you're okay with.
So, what you can do:
Convert your divider and dividend into BigDecimal numbers.
Configure them for 100 digits of precision after the decimal point.
Divide one by the other.
Convert the result to a string.
Analyse the string to find the repeating stride.
That algorithm is technically still incorrect - you can have inputs that have a repetition stride that is longer than 100 characters, or a division operation that appears to have a repeating stride when it actually doesn't.
Still, for probably every combination of numbers up to 100 or so you care to throw at it, the above would work. You can also choose to go further (more than 100 digits), or to write an algorithm that tries to find a repeat stride with 100 digits, and if it fails, that it just uses a while loop to start over, ever incrementing the # of digits used until you do find that repeating stride in the input.
You'll be using many methods of BigDecimal and doing some fairly tricky operation on the resulting string to attempt to find the repetition stride properly.
It's one way to solve this problem. If you would like to try, then read this and have fun!
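A minimal sketch of the steps above (the variable names are mine; `MathContext` is how you tell BigDecimal how much precision you're okay with, so 1/3 no longer throws an exception):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class BigDecimalDivide {
    public static void main(String[] args) {
        BigDecimal numerator = BigDecimal.ONE;
        BigDecimal denominator = BigDecimal.valueOf(3);
        // Ask for 100 significant digits instead of exactness.
        BigDecimal quotient = numerator.divide(denominator, new MathContext(100));
        // Convert to a string; the repetition-stride analysis would start here.
        String digits = quotient.toPlainString();
        System.out.println(digits); // 0.3333... (scan this for the repeating stride)
    }
}
```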
Solution Strategy 2: Use math
You can use a mathematical algorithm to derive the next decimal digit given a divisor and dividend. This isn't so much computer science, it's purely a mathematical exercise, and hence, don't search for 'java' or 'programming' or whatnot online when looking for this.
The basic gist is this:
1/4 becomes the number 0.25, how would one derive that? Well, had you multiplied the input by 10 first, i.e. calculate 10/4, then all you really need to do is calculate in integral math. 4 fits into 10 twice, with some left over. That's where the 2 comes from.
Then to derive the 5: Take the left-over (4 fits into 10 twice, and that leaves 2), and multiply it again by 10. Now calculate 20/4. That is 5, with nothing left over. Great, that's where the 5 comes from, and we get to conclude that there is no need to continue. It's zeroes from here on out.
You can write this algorithm in java code. Never should it ever mention double or float (you immediately fail the exercise if you do this). a / b, if both a and b are integral, does exactly what you want: Calculates how often b fits into a, tossing any remainder. You can then obtain the remainder with some more simple math:
int dividend = 1;
int divisor = 4;
List<Integer> digits = new ArrayList<>();
// you're going to have to think about when to end this loop
System.out.print("The digits are: 0.");
while (dividend != 0 && digits.size() < 100) {
    dividend *= 10;
    int digit = dividend / divisor;
    dividend -= (digit * divisor);
    digits.add(digit);
    System.out.print(digit);
}
I'll leave the code you'd need to write to find repetitions to you. You can be certain it repeats 'cleanly' when your dividend ends up being a value you've seen before. For example, when doing 1/3, going through this algorithm:
First loop through:
dividend (1) is multiplied by 10, becomes 10.
dividend is now integer-divided by the divisor (3), producing the digit 3.
We determine what's left: the digit times the divisor is 9, so 9 of those 10 have been used up, leaving the 1. We set dividend to 1.
As you can see, nothing actually changed: The dividend is still 1 just like it was at the start, therefore all loops through go like this, producing an endless stream of 3 values, which is indeed the correct answer.
You can maintain a list of dividends you've already seen. e.g. by storing them in a Set<Integer>. Once you hit a number you've already seen, you can just stop printing: You've started the repetition.
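Putting the division loop and the seen-remainders idea together, a sketch (the method name and output format are my own choices; it prints each repeating block exactly once, as in example (a) of the question):

```java
import java.util.HashSet;
import java.util.Set;

public class FractionExpansion {
    // Expands numerator/denominator digit by digit and stops as soon as a
    // remainder repeats, so a repeating block appears exactly once.
    static String expand(int numerator, int denominator) {
        StringBuilder sb = new StringBuilder();
        sb.append(numerator / denominator);
        int remainder = numerator % denominator;
        if (remainder == 0)
            return sb.toString();          // e.g. 10/2 -> "5"
        sb.append('.');
        Set<Integer> seen = new HashSet<>();
        while (remainder != 0 && seen.add(remainder)) {
            remainder *= 10;
            sb.append(remainder / denominator);
            remainder %= denominator;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(expand(1, 3));  // 0.3
        System.out.println(expand(1, 4));  // 0.25
        System.out.println(expand(22, 7)); // 3.142857
    }
}
```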
This algorithm has the considerable benefit of always being correct.
So what do I do?
I think your teacher wants you to figure out the second one, and not to delve into the BigDecimal API.
This is an awesome exercise, but more about math than programming.
The Babylonian (aka Heron's) method seems to be one of the faster algorithms for finding the square root of a number n. How fast it converges depends on how far off your initial guess is.
Now, as a number n increases, its root x becomes a smaller and smaller percentage of n:
root(10) : 10 - about 31%
root(100) : 100 - 10%
root(1000) : 1000 - about 3%
root(10000) : 10000 - 1%
So basically, for each digit in the number beyond the first, multiply by around 0.3 (i.e. divide by around 3). Then use that as your initial guess. Such as:
public static double root(double n) {
    // If the number is 0 or negative, return it unchanged
    if (n <= 0)
        return n;
    // Find the number of digits
    int num = numDigits(n);
    double guess = n;
    // Multiply by 0.3 for every digit from the second digit onwards
    for (int i = 0; i < num - 1; ++i)
        guess = guess * 0.3;
    // Repeat until it converges to within the margin of error
    while (!(n - (guess * guess) <= 0.000001 && n - (guess * guess) >= 0)) {
        double divide = n / guess;
        guess = Math.abs(0.5 * (divide + guess));
    }
    return Math.abs(guess);
}

// Helper assumed above: number of digits in the integer part of n
private static int numDigits(double n) {
    return (int) Math.log10(n) + 1;
}
Does this help to optimize the algorithm? And is this O(n)?
Yes. What works even better is to exploit the floating point representation, by dividing the binary exponent approximately by two, because operating on the floating-point bits is very fast. See Optimized low-accuracy approximation to `rootn(x, n)`.
My belief is that the complexity of an algorithm is independent of the input provided (complexity is a general characteristic of that algorithm; we cannot say that algorithm x has complexity O1 for input I1 and complexity O2 for input I2). Thus, no matter what initial value you provide, it should not improve the complexity. It may improve the number of iterations for that particular case, but that's a different thing. Reducing the number of iterations by half still means the same complexity. Keep in mind that n, 2*n, and n/3 all fit into the O(n) class.
Now, with regard to the actual complexity, I read on Wikipedia (https://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Babylonian_method) that
This is a quadratically convergent algorithm, which means that the number of correct digits of the approximation roughly doubles with each iteration.
Since the number of correct digits roughly doubles with each iteration, the number of iterations you need grows only with the number of precision decimals that you expect. Which is constant: if you need 10 exact decimals, 10 is a constant, totally independent of n.
But in Wikipedia's example, they chose from the very beginning a candidate with the same order of magnitude as the correct answer (600 compared to 354). However, if your initial guess is way off (by orders of magnitude), you will need some extra iterations to cut down to, or build up to, the necessary digits. That adds complexity. Suppose the correct answer is 10000 while your initial guess is 10: the difference is 4 orders of magnitude, and I think in this case the extra work needed to reach the correct magnitude is proportional to the difference between the number of digits of your guess and the number of digits of the correct answer. Since the number of digits is approximately log(n), the extra complexity is |log(correct_answer) - log(initial_guess)|.
To avoid this, pick a number that has the right number of digits, which is generally half the number of digits of your initial number. My best choice would be picking the first half of the number as a candidate (from 123456, keep 123; from 1234567, either 123 or 1234). In Java, you could use string or byte operations to keep the first half of a number, whatever is kept in memory. Thus you need no extra iterations, just an operation with constant complexity.
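The "keep the first half of the digits" guess can be sketched like this (the method name is mine):

```java
public class HalfDigitsGuess {
    // Rough initial guess for sqrt(n): keep the first half of n's digits,
    // rounding the digit count up for odd lengths.
    static long initialGuess(long n) {
        String s = Long.toString(n);
        return Long.parseLong(s.substring(0, (s.length() + 1) / 2));
    }

    public static void main(String[] args) {
        System.out.println(initialGuess(123456));  // 123
        System.out.println(initialGuess(1234567)); // 1234
    }
}
```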
For n ≥ 4, sqrt(n) = 2 sqrt(n / 4). For n < 1, sqrt(n) = 1/2 sqrt(n × 4). So you can always multiply or divide by 4 to normalize n on the range [1, 4).
Once you do that, take sqrt(4) = 2 as the starting point for Heron's algorithm, since that is the geometric mean and will yield the greatest possible improvement per iteration, and unroll the loop to perform the needed number of iterations for the desired accuracy.
Finally, multiply or divide by all the factors of 2 that you removed at the beginning. Note that multiplying and dividing by 2 or 4 is easy and fast for binary computers.
I discuss this algorithm at my blog.
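The normalize-then-iterate scheme above might look like this (my sketch; the fixed iteration count of six is an assumption that is comfortably enough for double precision when starting from 2 on [1, 4)):

```java
public class NormalizedSqrt {
    static double sqrtNormalized(double n) {
        if (n <= 0) return n;
        // Normalize n into [1, 4) by factors of 4, remembering the scale:
        // sqrt(n) = scale * sqrt(normalized n).
        double scale = 1.0;
        while (n >= 4.0) { n /= 4.0; scale *= 2.0; }
        while (n < 1.0)  { n *= 4.0; scale /= 2.0; }
        // Start Heron's iteration from 2, the geometric mean of 1 and 4.
        double guess = 2.0;
        for (int i = 0; i < 6; i++)
            guess = 0.5 * (guess + n / guess);
        // Undo the normalization.
        return guess * scale;
    }

    public static void main(String[] args) {
        System.out.println(sqrtNormalized(9.0));  // approximately 3.0
        System.out.println(sqrtNormalized(0.25)); // approximately 0.5
    }
}
```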
This is the code I don't understand:
// Divide n by two until small enough for nextInt. On each
// iteration (at most 31 of them but usually much less),
What? With my trivial simulation for a randomly chosen n I get up to 32 iterations and 31 is the average.
// randomly choose both whether to include high bit in result
// (offset) and whether to continue with the lower vs upper
// half (which makes a difference only if odd).
This makes sense.
long offset = 0;
while (n >= Integer.MAX_VALUE) {
int bits = next(2);
Get two bits for the two decisions, makes sense.
long half = n >>> 1;
long nextn = ((bits & 2) == 0) ? half : n - half;
Here, n is n/2.0 rounded up or down, nice.
if ((bits & 1) == 0) offset += n - nextn;
n = nextn;
I'm lost.
}
return offset + nextInt((int) n);
I can see that it generates a properly sized number, but it looks pretty complicated and rather slow, and I definitely can't see why the result should be uniformly distributed.¹
¹ It can't really be uniformly distributed, as the state is only 48 bits, so it can generate no more than 2^48 different numbers. The vast majority of longs can't be generated, but an experiment demonstrating this would probably take years.
I think you're somewhat mistaken.... let me try to describe the algorithm the way I see it:
First, assume that nextInt(big) (nextInt, not nextLong) is correctly able to generate a well distributed range of values between 0 (inclusive) and big (exclusive).
This nextInt(int) function is used as part of nextLong(long)
So, the algorithm works by looping until the value is less than Integer.MAX_VALUE, at which point it uses nextInt(int). The more interesting thing is what it does before that...
Mathematically, if we take half of a number, then half of the half, and so on, the halves tend toward zero. Conversely, if we add up half of the number, plus half of the remaining half, and so on, the sum tends toward the original number.
What the algorithm does here is it takes half of the number. By doing integer division, if the number is odd, then there's a 'big' half and a 'small' half. The algorithm 'randomly' chooses one of those halves (either big or small).
Then, it randomly chooses to add that half, or not add that half to the output.
It keeps halving the number and (possibly) adding the half until the half is less than Integer.MAX_VALUE. At that point it simply computes the nextInt(half) value and adds that to the result.
Assuming the initial long limit was much greater than Integer.MAX_VALUE then the final result will get all the benefit of nextInt(int) for a large int value which is at least 32 bits of state, as well as 2 bits of state for all the higher bits above Integer.MAX_VALUE.
The larger the original limit is (the closer it is to Long.MAX_VALUE), the more times the loop will iterate. At worst it will go 32 times, but for smaller limits it will go through fewer iterations. In the worst case, for very large limits, you will consume 64 bits of randomness in the loops (2 bits per iteration), plus whatever nextInt(half) needs.
EDIT: walkthrough added. Working out the number of outcomes is harder, but all values of long from 0 to Long.MAX_VALUE - 1 are possible outcomes. As 'proof', nextLong(0x4000000000000000) is a neat example to start from, because all the halving processes will be even and it has bit 63 set.
Because bit 63 is set (the most significant bit available, because bit 64 would make the number negative, which is illegal), it means that we will halve the value 32 times before the value is <= Integer.MAX_VALUE (which is 0x000000007fffffff, and our half will be 0x0000000040000000 when we get there). Because halving and bit-shifting are the same process, it holds that there are as many halvings to do as the difference between the highest set bit and bit 31. 63 - 31 is 32, so we halve things 32 times, and thus we do 32 loops in the while loop. The initial start value of 0x4000000000000000 means that as we halve the value, there will only be one bit set in the half, and it will 'walk' down the value, shifting 1 to the right each time through the loop.
Because I chose the initial value carefully, it is apparent that in the while loop the logic is essentially deciding whether to set each bit or not. It takes half of the input value (which is 0x2000000000000000) and decides whether to add that to the result or not. Let's assume for the sake of argument that all our loops decide to add the half to the offset, in which case, we start with an offset of 0x0000000000000000, and then each time through the loop we add a half, which means each time we add:
0x2000000000000000
0x1000000000000000
0x0800000000000000
0x0400000000000000
.......
0x0000000100000000
0x0000000080000000
0x0000000040000000 (this is added because it is the last 'half' calculated)
At this point our loop has run 32 times, it has 'chosen' to add the value 32 times, and thus has at least 32 states in the value (64 if you count the big/little half decision). The actual offset is now 0x3fffffffc0000000 (all bits from 62 down to 31 are set).
Then, we call nextInt(0x40000000) which, as luck would have it, produces the result 0x3fffffff, using 31 bits of state to get there. We add this value to our offset and get the result:
0x3fffffffffffffff
With a 'perfect' distribution of nextInt(0x40000000) results we would have had perfect coverage of the values 0x3fffffffc0000000 through 0x3fffffffffffffff with no gaps. With perfect randomness in our while loop, our high bits would have been a perfect distribution of 0x0000000000000000 through 0x3fffffffc0000000. Combined, there is full coverage from 0 through to our limit (exclusive) of 0x4000000000000000.
With the 32 bits of state from the high bits, and the (assumed) minimum 31 bits of state from nextInt(0x40000000) we have 63bits of state (more if you count the big/little half decision), and full coverage.
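For experimentation, the loop can be wrapped into a standalone method. This is a sketch, not the JDK's code: I substitute the public nextInt(4) for the protected next(2) to obtain the two random bits, which is an assumption on my part.

```java
import java.util.Random;

public class BoundedNextLong {
    // Halve n (randomly choosing the big or small half), randomly folding the
    // discarded part into 'offset', until n fits in an int; finish with nextInt.
    static long nextLong(Random rnd, long n) {
        long offset = 0;
        while (n >= Integer.MAX_VALUE) {
            int bits = rnd.nextInt(4);                        // two random bits
            long half = n >>> 1;
            long nextn = ((bits & 2) == 0) ? half : n - half; // small or big half
            if ((bits & 1) == 0) offset += n - nextn;         // keep upper range?
            n = nextn;
        }
        return offset + rnd.nextInt((int) n);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        long bound = 0x4000000000000000L;
        for (int i = 0; i < 1000; i++) {
            long v = nextLong(rnd, bound);
            if (v < 0 || v >= bound) throw new AssertionError(v);
        }
        System.out.println("all sampled values were in [0, bound)");
    }
}
```

At every step the reachable range is exactly [offset, offset + n), which is why the union over both random choices keeps covering the original [0, bound).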
I am looking at this line of code and I cannot make sense of it. This particular code is JavaScript, but I eventually would like to make a Java Android app.
$("#TxtHalfDot").val(Math.round((60000/bpm)*3*1000)/1000);
//bpm being a user entered value
I understand the process of the math and have been through it with a calculator many times. However, I cannot make sense of the *1000 followed by /1000.
My question
Is this a strange behavior of the Math.round function, or is it simply not needed? I have seen it a lot, but when I look at it I feel it can be omitted. But I am not a computer...
(60000/bpm) * 3 gives the same result as ((60000/bpm) * 3 * 1000) / 1000 only when nothing happens in between.
If you look carefully, you'll find that the whole term is divided by 1000 after rounding.
So it is not just x * 1000 / 1000.
Math.round(a*1000)/1000 results in number a rounded with 3 decimals.
Ex: Math.round(1234.123456 * 1000)/1000 = 1234.123
How this works is like this:
Suppose the number a has x decimals (in our example 6). You multiply the number by 10 to the power of n (in our example n = 3), effectively moving the decimal point n digits to the right. Then you round the number, dropping all remaining decimals. Then you divide by 10 to the power of n, moving the decimal point back.
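In Java the same scale-round-unscale trick looks like this (note that Math.round returns a long, so you divide by 1000.0 to get back to floating point):

```java
public class RoundScale {
    public static void main(String[] args) {
        double a = 1234.123456;
        // Shift three decimals left, round, shift back: rounds to 3 decimals.
        double rounded = Math.round(a * 1000) / 1000.0;
        System.out.println(rounded); // 1234.123
    }
}
```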
It has to do with the parentheses.
Math.round((60000/bpm)*3*1000)/1000
In full it reads:
Divide 60000 by bpm, then multiply by 3 and by 1000, then perform Math.round, then divide by 1000.
You are rounding a possible float before dividing it by 1000.
I'm reading through Chapter 3 of Joshua Bloch's Effective Java. In Item 8: Always override hashCode when you override equals, the author uses the following combining step in his hashing function:
result = 37 * result + c;
He then explains why 37 was chosen (emphasis added):
The multiplier 37 was chosen because it is an odd prime. If it was even and
the multiplication overflowed, information would be lost because multiplication
by two is equivalent to shifting. The advantages of using a prime number are less
clear, but it is traditional to use primes for this purpose.
My question is why does it matter that the combining factor (37) is odd? Wouldn't multiplication overflow result in a loss of information regardless of whether the factor was odd or even?
Consider what happens when a positive value is repeatedly multiplied by two in a base-2 representation -- all the set bits eventually march off the end, leaving you with zero.
An even multiplier would result in hash codes with less diversity.
Odd numbers, on the other hand, may result in overflow, but without loss of diversity.
The purpose of a hashCode is to produce random-looking bits based on the input (especially the lower bits, as these are used the most).
When you multiply by 2, the lowest bit of the result can only be 0, which lacks randomness. If you multiply by an odd number, the lowest bit can be 0 or 1, depending on the input.
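You can see the lowest-bit effect directly. x*2 always ends in a 0 bit, while an odd factor like 37 preserves the input's lowest bit:

```java
public class LowBitDemo {
    public static void main(String[] args) {
        for (int x = 1; x <= 4; x++)
            System.out.println((x * 2 & 1) + " vs " + (x * 37 & 1));
        // left column is always 0; right column alternates 1, 0, 1, 0
    }
}
```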
A similar question is what do you get here
public static void main(String... args) {
    System.out.println(factorial(66));
}

public static long factorial(int n) {
    long product = 1;
    for (; n > 1; n--)
        product *= n;
    return product;
}
prints
0
Every second number is even, every fourth is a multiple of 4, and so on.
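The zero is no accident: 66! contains exactly 64 factors of two (33 + 16 + 8 + 4 + 2 + 1), so the 64-bit product is 0 modulo 2^64. A quick check:

```java
public class TwosInFactorial {
    public static void main(String[] args) {
        long twos = 0;
        for (long p = 2; p <= 66; p *= 2)
            twos += 66 / p;       // multiples of 2, of 4, of 8, ...
        System.out.println(twos); // 64
    }
}
```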
The solution lies in number theory and the greatest common divisor (GCD) of your multiplier and your modulus.
An example may help. Let's say that instead of 32 bits you only have 2 bits to represent a number. So you have 4 numbers (classes): 0, 1, 2 and 3.
An overflow in the CPU is the same as a modulo operation.
Class | x2 | mod 4 | x2 | mod 4
0     |  0 |   0   |  0 |   0
1     |  2 |   2   |  4 |   0
2     |  4 |   0   |  0 |   0
3     |  6 |   2   |  4 |   0
After 2 operations you only have 1 possible number (class) left. So you have 'lost' information.
Class | x3 | mod 4 | x3 | mod 4 ...
0     |  0 |   0   |  0 |   0
1     |  3 |   3   |  9 |   1
2     |  6 |   2   |  6 |   2
3     |  9 |   1   |  3 |   3
This can go on forever and you still have all 4 classes, so you don't lose information.
The key is that the GCD of your multiplier and your modulus is 1. That holds true for all odd numbers, because the modulus here is always a power of 2. The multipliers don't have to be prime, and they don't have to be 37 specifically. But information loss is just one criterion for why 37 is picked; other criteria are the distribution of values, etc.
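The two tables can be reproduced mechanically (a sketch of my own; 2 bits means working modulo 4):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class MultiplierClasses {
    public static void main(String[] args) {
        int mod = 4;
        for (int factor : new int[] {2, 3}) {
            // Start with all four classes and apply "multiply, then mod 4" twice.
            Set<Integer> classes = new HashSet<>(Arrays.asList(0, 1, 2, 3));
            for (int step = 0; step < 2; step++) {
                Set<Integer> next = new HashSet<>();
                for (int c : classes)
                    next.add(c * factor % mod);
                classes = next;
            }
            System.out.println("x" + factor + ": " + classes.size() + " class(es) left");
        }
        // x2 collapses everything to {0}; x3 keeps all four classes.
    }
}
```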
Non-math simple version of why...
Prime numbers are used for hashing to keep diversity.
Perhaps diversity is more important because of Set and Map implementations. These implementations use the last bits of object hash values to index their internal arrays of entries.
For example, in a HashMap with an internal table (array) of size 8, the last 3 bits of a hash value are used to address a table entry.
static int indexFor(int h, int length) {
    return h & (length - 1);
}
In fact Integer's hashCode is just the value itself, but if it were instead
hash = 4 * number;
most of the table elements would be empty, while some would contain too many entries. This would lead to extra iterations and comparison operations while searching for a particular entry.
I guess the main concern of Joshua Bloch was to distribute hash integers as evenly as possible to optimize the performance of collections by spreading objects evenly across Maps and Sets. Prime numbers intuitively seem to be a good factor for distribution.
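A quick check of the hash = 4 * number scenario above: with a table of length 8, indexFor keeps only the low 3 bits, so only indices 0 and 4 are ever used:

```java
import java.util.Set;
import java.util.TreeSet;

public class IndexForDemo {
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        // Collect every table index produced by hash = 4 * number.
        Set<Integer> used = new TreeSet<>();
        for (int number = 0; number < 100; number++)
            used.add(indexFor(4 * number, 8));
        System.out.println(used); // [0, 4]
    }
}
```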
Prime numbers aren't strictly necessary to ensure diversity; what's necessary is that the factor be relatively prime to the modulus.
Since the modulus for binary arithmetic is always a power of two, any odd number is relatively prime, and would suffice. If you were to take a modulus other than by overflow, though, a prime number would continue to ensure diversity (assuming you didn't choose the same prime...).