This question already has answers here: How does addition work in Computers? (2 answers). Closed 6 years ago.
I believe the computer must be achieving this with the help of exclusive OR together with the bitwise left-shift operator. Is that correct?
Here is the implementation in Java:
public class TestAddWithoutPlus {
    public static void main(String[] args) {
        int result = addNumberWithoutPlus(6, 5);
        System.out.println("result is " + result);
    }

    public static int addNumberWithoutPlus(int a, int b) {
        if (a == 0) {
            return b;
        } else if (b == 0) {
            return a;
        }
        int result = 0;
        int carry = 0;
        while (b != 0) {
            result = a ^ b;     // sum of two bits is a XOR b
            carry = a & b;      // carry is a AND b
            carry = carry << 1; // shift the carry left by one bit position
            a = result;
            b = carry;
        }
        return result;
    }
}
I'll answer for a typical bit-parallel processor as would be seen in personal computers, microcontrollers, etc. This does not apply to a bit-serial architecture, which is more often seen in specialized situations such as certain types of DSP and certain FPGA designs.
Typically this is not the case: for a narrow width such as 32 or 64 bits, an adder circuit is more efficient than the serial addition you show, because it can complete an addition asynchronously rather than over multiple clock cycles.
However, the principle is the same for a basic ripple-carry adder: the adder for the least-significant bit calculates a bit of the result and a carry bit, which is passed into the full adder for the next bit as its carry-in, and so on, as shown in this image:
Source: Wikimedia Commons, user cburnett, under Creative Commons 3.0 Share-alike
In practice, however, the fact that a carry coming from the LSB adder may need to propagate all the way to the MSB adder poses a limitation on performance (due to propagation delays) so various lookahead schemes may be used.
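The ripple-carry chain described above can be sketched bit by bit in software. This is purely an illustration of the hardware scheme, not how any real ALU is implemented; the class and method names are mine:

```java
public class RippleCarryAdder {
    // Adds two 32-bit ints one bit position at a time, the way a chain of
    // hardware full adders would: each stage consumes a carry-in and produces
    // a sum bit plus a carry-out for the next stage.
    static int add(int a, int b) {
        int sum = 0;
        int carry = 0; // carry-in of the current bit position (0 or 1)
        for (int i = 0; i < 32; i++) {
            int bitA = (a >>> i) & 1;
            int bitB = (b >>> i) & 1;
            int s = bitA ^ bitB ^ carry;                     // full-adder sum bit
            carry = (bitA & bitB) | (carry & (bitA ^ bitB)); // full-adder carry-out
            sum |= s << i;
        }
        return sum; // the carry out of bit 31 is discarded, as in int arithmetic
    }

    public static void main(String[] args) {
        System.out.println(add(6, 5));  // 11
        System.out.println(add(-3, 7)); // 4 (two's complement falls out for free)
    }
}
```

Note that negative operands need no special handling: two's complement representation is designed precisely so that the same adder works for signed and unsigned values.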
The first for loop in the code below does not find the maximum correctly, due to an overflow; the second for loop does. I used godbolt.com to look at the bytecode for this program, which showed that, to determine which number is greater, the first loop uses an isub and the second uses an if_icmple. Makes sense. However, why is if_icmple able to do this comparison successfully, since it too must at some point do a subtraction (which I would expect to produce an overflow)?
public class Overflow {
    public static void main(String[] args) {
        int[] nums = {3, -2147483648, 5, 7, 27, 9};
        int curMax = nums[0];
        for (int num : nums) {
            int diff = curMax - num;
            if (diff < 0) {
                curMax = num;
            }
        }
        System.out.println("1) max is " + curMax);

        curMax = nums[0];
        for (int num : nums) {
            if (num > curMax) {
                curMax = num;
            }
        }
        System.out.println("2) max is " + curMax);
    }
}
The output is
1) max is -2147483648
2) max is 27
Let's say that comparison is implemented using subtraction. Contrary to various other opinions here, I'd say that is highly likely. E.g. cmp on x86 is just a subtraction that does not update its destination register, only the flags. Various other (but maybe not all) processors that have a flags register also work that way. In the rest of this answer I'll use x86 as a representative processor for examples.
However, there is an incorrect assumption made implicitly by your code: a comparison is not equivalent to a subtraction followed by checking the sign, it is equivalent to a subtraction followed by checking some combination of the Zero, Sign, and Overflow flags. For example, if you implement if (num > curMax) using some cmp followed by jle (to skip the body of the if when the condition is false), then jle does this:
Jump short if less or equal (ZF=1 or SF ≠ OF).
Expressing the condition SF ≠ OF directly in Java is not so easy. But the JVM itself has no such problem: it can use a comparison (or, equivalently, a subtraction) followed by exactly the right kind of conditional jump.
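Not so easy, but it can be done with a bit-twiddling identity (this one appears in Hacker's Delight): the sign bit of (a ^ b) & (a ^ d), where d = a - b, is exactly the overflow flag of the subtraction, and the sign bit of d is the sign flag. The sketch below, with names of my own choosing, reproduces SF XOR OF in pure Java:

```java
public class FlagCompare {
    // Signed a < b computed the way the hardware does it: subtract,
    // then combine the Sign flag with the Overflow flag.
    static boolean lessThan(int a, int b) {
        int d = a - b;                    // may wrap around; that's fine
        int overflow = (a ^ b) & (a ^ d); // sign bit set iff the subtraction overflowed
        return (d ^ overflow) < 0;        // SF XOR OF
    }

    public static void main(String[] args) {
        // The naive "a - b < 0" test gets this one wrong; lessThan does not.
        System.out.println(lessThan(3, Integer.MIN_VALUE)); // false
        System.out.println(lessThan(Integer.MIN_VALUE, 3)); // true
    }
}
```

This agrees with the < operator for every pair of ints, including the overflow cases that break the plain sign check.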
There are some less-fortunate processors that do not have as full a set of conditional jumps as x86 has, but even in that case, the JVM has a lot more options than you do.
The option that the JVM does not have though, is implementing comparison incorrectly.
However, why is the if_icmple able to successfully do this comparison since it too at some point must do a subtraction (which I would expect to produce an overflow)?
It doesn't need to do a subtraction. It just needs to do a comparison. Comparisons don't have to involve subtraction, certainly not in the CPU.
Of course, we can also take alternate steps to deal with overflow. Here is one simple approach:
int cmp(int a, int b) {
    boolean aNeg = (a >>> 31) != 0;
    boolean bNeg = (b >>> 31) != 0;
    if (aNeg != bNeg) {
        return aNeg ? -1 : 1;
    }
    int diff = a - b; // subtracting two numbers with the same sign can't overflow
    return (diff == 0) ? 0 : (diff < 0) ? -1 : 1;
}
All Java has to do is compile if_icmple to something like this, or whatever other instruction is appropriate on the target CPU. Using bytecode means Java can leave that up to the runtime to get right for the target CPU, whatever's fastest in that environment: using the overflow bit, doing something like this, or whatever else works.
OK, I think this is impossible. If you think the same, you do not need to post an answer. I have read a few lines of Chapter 5, Conversions and Promotions, and it seems that chapter makes no mention of disabling conversions and promotions in Java.
Here is my motivation:
long uADD(long a, long b) {
    try {
        long c = 0;
        boolean carry = false; // carry flag; true: need to carry, false: no need to carry
        for (int i = 0; i < 64; ++i) { // i loops from 0 to 63
            // calculate the ith digit of the sum
            if (((((a >>> i) & 1) ^ ((b >>> i) & 1)) != 0) ^ carry) {
                c += (1 << i);
            }
            // calculate the carry flag, which will be used for the (i+1)th digit
            if (((((a >>> i) & 1) & ((b >>> i) & 1)) != 0)
                    || (carry && ((((a >>> i) & 1) ^ ((b >>> i) & 1)) != 0))) {
                carry = true;
            } else {
                carry = false;
            }
        }
        if (carry) { // check if there is a final carry
            throw new ArithmeticException(); // throw arithmetic exception if so
        }
        return c;
    } catch (ArithmeticException arithmExcep) {
        throw new ArithmeticException("Unsigned Long integer Overflow during Addition");
    }
}
So basically, I am writing a method that does unsigned addition of long integers and throws an ArithmeticException on overflow. The code above is not very readable, so let me explain it.
First, there is a for loop where i loops from 0 to 63.
Then, the first if statement acts as the sum output of a full adder: it uses the ith bit of a, the ith bit of b, and the carry flag to calculate the ith bit of the sum (i = 0 being the least-significant bit). If that bit is 1, it adds 1 << i to c, where c is initially 0.
After that, the second if statement acts as the carry output of the full adder: it uses the same two bits and the current carry flag to calculate the carry into position i + 1, and sets the new carry flag accordingly.
Finally, after exiting the for loop, check whether the carry flag is still set. If it is, throw an ArithmeticException.
However, the above code does not work. After debugging, it turns out the problem occurs at
c += (1 << i);
The correct code should be:
c += (1L << i);
because 1 << i is evaluated as an int (so the shift distance is taken mod 32), and only afterwards does Java widen the result to long and add it to c, showing no warning to me.
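The pitfall is easy to reproduce in isolation. Per the Java Language Specification, when the left operand of << is an int, only the low five bits of the shift distance are used, so the shift count wraps at 32:

```java
public class ShiftPromotion {
    public static void main(String[] args) {
        // 1 << 40 is computed as an int: the shift count is taken mod 32,
        // so this is really 1 << 8 == 256, widened to long only afterwards.
        long wrong = 1 << 40;
        long right = 1L << 40; // long shift: the count is taken mod 64
        System.out.println(wrong); // 256
        System.out.println(right); // 1099511627776
    }
}
```

Writing the literal as 1L makes the whole expression a long shift, which uses the low six bits of the count and produces the intended power of two.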
I have several questions regarding this.
Is it possible to disable automatic promotion of one data type to another?
How often does automatic promotion cause problems for you?
Is it possible to tweak the IDE so that it shows a warning to me when automatic promotion occurs? (I am using NetBeans IDE 7.3.1 at the moment.)
Sorry for the many questions and the hard-to-read code. I will be studying CS in September, so I am writing some code in Java to familiarize myself with it.
Is it possible to disable automatic promotion of one data type to another
No: as you already discovered, the Java Language Specification mandates that numeric promotion occur; any compiler omitting it would (by definition) not be a valid Java compiler.
How often does automatic promotion cause problems for you?
Perhaps once a year (and I code in Java for a living)?
Is it possible to tweak the IDE so that it shows a warning to me when automatic promotion occurs? (I am using NetBeans IDE 7.3.1 at the moment.)
It is worth noting that such a warning would not detect all cases where an explicit promotion is needed. For instance, consider:
boolean isNearOrigin(int x, int y, int r) {
    return x * x + y * y < r * r;
}
Even though there is no automatic promotion, the multiplications may overflow, which can make the method return incorrect results, and one should probably write
return (long) x * x + (long) y * y < (long) r * r;
instead.
It's also worth noting that your proposed warning would also appear for correct code. For instance:
int x = ...;
foo(x);
would warn about automatic promotion if foo is declared with parameter type long, even though that promotion cannot have any adverse effects. Since such innocent situations are quite frequent, your warning would probably be so annoying that everybody would turn it off. I'd therefore be quite surprised to find any Java compiler emit such a warning.
In general, the compiler can not detect that an operation will overflow, and even finding likely candidates for overflow is complex. Given the rarity of overflow-related problems, such an imperfect detection seems a dubious benefit, which is probably why Java compilers and IDEs do not implement it. It therefore remains the responsibility of the programmer to verify, for each arithmetic operation, that the value set afforded by the operand types is suitable. This includes specifying suitable type suffixes for any numeric literals used as operands.
PS: Though I am impressed that you got your ripple-carry adder working, I think your uAdd method could be more easily implemented as follows:
long uAdd(long a, long b) {
    long sum = a + b;
    if (uLess(sum, a)) {
        throw new ArithmeticException("Overflow");
    } else {
        return sum;
    }
}

/** @return whether a < b, treating both a and b as unsigned longs */
boolean uLess(long a, long b) {
    long signBit = 1L << 63;
    return (signBit ^ a) < (signBit ^ b);
}
To see why this is correct, let < denote the less than relation for the signed interpretation (which is equivalent to the Java operator), and ≪ denote the less than relation for the unsigned values. Let a and b be any bit pattern, from which a' and b' are obtained by flipping the sign bit. By the definition of signed integers, we then have:
If sign(a) = sign(b), we have (a ≪ b) = (a' ≪ b') = (a' < b')
If sign(a) ≠ sign(b), we have (a ≪ b) = (b' ≪ a') = (a' < b')
Therefore, (a ≪ b) = (a' < b').
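As a quick sanity check of the sign-bit-flip trick, the sketch below compares uLess against Long.compareUnsigned (available since Java 8) on the boundary values; the class name is mine:

```java
public class UnsignedLess {
    // Unsigned a < b via the sign-bit flip described above.
    static boolean uLess(long a, long b) {
        long signBit = 1L << 63; // Long.MIN_VALUE: flips the interpretation of the top bit
        return (signBit ^ a) < (signBit ^ b);
    }

    public static void main(String[] args) {
        long[] samples = {0L, 1L, Long.MAX_VALUE, Long.MIN_VALUE, -1L};
        for (long a : samples) {
            for (long b : samples) {
                if (uLess(a, b) != (Long.compareUnsigned(a, b) < 0)) {
                    throw new AssertionError(a + " vs " + b);
                }
            }
        }
        System.out.println("uLess agrees with Long.compareUnsigned");
    }
}
```

For instance, uLess(Long.MAX_VALUE, Long.MIN_VALUE) is true, because as unsigned values these are 2^63 - 1 and 2^63.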
This question already has answers here: Multiplication of two ints overflowing to result in a negative number (5 answers). Closed 9 years ago.
static int fn = 0;
static int sn = 0;
static boolean running = false;

public static void run() {
    while (running == true) {
        fn = numbers[0];
        sn = numbers[1];
        if (sign == 0) {
            input.setText(String.valueOf(fn));
        }
    }
}

static class one implements ActionListener {
    public void actionPerformed(ActionEvent e) {
        if (Display.sign == 0) {
            Display.numbers[0] = Display.numbers[0] * 10;
            Display.numbers[0] = Display.numbers[0] + 1;
        }
    }
}
This is part of the code for a calculator that I am programming (not all of it, of course). This is the part where I display the number on the screen, which works, but weirdly only up to 10 characters.
So after I get the program to display 1111111111, I try to add one more digit and it gives me this weird number, -1773790777. I am confused about how the program comes up with this. As you can see above, Display.numbers[] is the array I store the two numbers in. To move over a place, I multiply the number in the array by 10 and then add 1. How does this give me a negative number, and what can I do to solve the problem?
Is your number overflowing?
You can check by comparing against Integer.MAX_VALUE (assuming you are using an int). If you go over that, you will get weird results like this. See http://javapapers.com/core-java/java-overflow-and-underflow/ for more details.
It's overflowing!
1111111111 * 10 + 1 = 11111111111, which is 0x2964619C7 in hexadecimal. It's a 34-bit value, which can't be stored in a 32-bit int.
In Java arithmetic operations wrap around by default, so if the result overflowed then it'll be wrapped back to the other end of the value range. See How does Java handle integer underflows and overflows and how would you check for it?
However, due to the use of two's complement, the result will be the lower 32 bits: 11111111111 mod 2^32 = 2521176519 = 0x964619C7, which interpreted as a 32-bit int is -1773790777; that's why you see that number. It's worth reading more about binary, since it's the basis of how today's computers work.
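You can verify the arithmetic directly: casting the 64-bit result down to int keeps only the low 32 bits, which is exactly what wrapping int arithmetic produces.

```java
public class WrapDemo {
    public static void main(String[] args) {
        long exact = 1111111111L * 10 + 1;       // 11111111111, needs 34 bits
        System.out.println(exact);               // 11111111111
        System.out.println((int) exact);         // -1773790777: only the low 32 bits survive
        System.out.println(1111111111 * 10 + 1); // -1773790777: same value via int wraparound
    }
}
```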
In Java 8 you'll have an easier way to detect overflow with the new *Exact methods
The platform uses signed two's complement integer arithmetic with int and long primitive types. The developer should choose the primitive type to ensure that arithmetic operations consistently produce correct results, which in some cases means the operations will not overflow the range of values of the computation. The best practice is to choose the primitive type and algorithm to avoid overflow. In cases where the size is int or long and overflow errors need to be detected, the methods addExact, subtractExact, multiplyExact, and toIntExact throw an ArithmeticException when the results overflow. For other arithmetic operations such as divide, absolute value, increment, decrement, and negation overflow occurs only with a specific minimum or maximum value and should be checked against the minimum or maximum as appropriate.
https://docs.oracle.com/javase/8/docs/api/java/lang/Math.html
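With those Math.*Exact methods, the calculator's bug would surface as an exception instead of a silently wrapped value. A minimal sketch of the "append a digit" step, checked:

```java
public class ExactDemo {
    public static void main(String[] args) {
        int n = 1111111111; // ten 1s, near the top of the int range
        try {
            // the calculator's "shift left one decimal place, add 1" step,
            // but with overflow checking instead of silent wraparound
            n = Math.addExact(Math.multiplyExact(n, 10), 1);
            System.out.println(n);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected"); // this branch runs
        }
    }
}
```

At that point the program can report the error, switch to long, or refuse further input, rather than display a garbage value.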
This question already has answers here. Possible duplicate of: How can I check if multiplying two numbers in Java will cause an overflow? Closed 10 years ago.
Suppose I have a Java class method, which uses * and + operations.
int foo(int a, int b) {
    ... // some calculations with + and *
}
How to make sure that no overflow occurs in foo?
I guess I can either use BigDecimal or replace all + and * with "wrappers" like:
int sum(int a, int b) {
    int c = a + b;
    if (a > 0 && b > 0 && c < 0)
        throw new MyOverflowException(a, b);
    return c;
}

int prod(int a, int b) {
    int c = a * b;
    if (a > 0 && b > 0 && c < 0)
        throw new MyOverflowException(a, b);
    return c;
}
Are there better ways to make sure that no int overflow occurs in a Java method ?
One way to check for overflow is to promote the operands to a larger type (double the original bit width), perform the operation there, and then see whether the result still fits in the original type, e.g.:
int sum(int a, int b) {
    long r = (long) a + b;
    if (r != (int) r) { // the low 32 bits, sign-extended, must reproduce r
        throw new MyOverflowException(a, b);
    }
    return (int) r;
}
If your original type is a long, you'd have to use BigInteger as that larger type.
It is a difficult problem from an engineering perspective.
The Secure Coding site recommends:
use of preconditions; i.e. range-check the inputs so that overflow is impossible,
doing each individual arithmetic operation using the next larger primitive integer type and explicitly checking for overflow, or
using BigInteger.
This Dr Dobbs article suggests creating a library of primitive arithmetic methods that do each primitive operation with an explicit overflow check. (You could view this as an implementation of bullet point #2 above.) But the authors go further by suggesting that you use bytecode rewriting to replace arithmetic bytecodes with calls to the equivalent methods which incorporate overflow checks.
Unfortunately, there is no way to enable overflow checking natively in Java. (But the same applies in lots of other languages; e.g. C, C++ ... )
Sum: check whether b is larger than the difference between the maximum value an int can store and the value of a. If a and/or b can be negative, you must (i) be careful that the difference check itself does not overflow, and (ii) perform a similar check against the minimum.
Product: that's more difficult. I would split each integer into two half-length integers (i.e. if int is 32 bits, split it into two 16-bit halves using bit masking and shifting), do the multiplication on the parts, and then check whether the result fits into 32 bits.
All of this assumes that you do not want to simply use long for the temporary result.
If a and b are both positive or both negative, and the sign of a + b differs from the sign of a and b, then overflow has happened. You can use this rule to detect overflow and throw an exception; when you catch that exception, you can deal with it using the methods mentioned in the previous answers.
Another method is to do the operation in a larger type that cannot overflow: e.g. use long for operations between ints.
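The sign rule above can be sketched as a checked addition; this hypothetical checkedAdd helper is equivalent in spirit to Java 8's Math.addExact:

```java
public class CheckedAdd {
    // Overflow in a + b can only occur when a and b have the same sign,
    // and in that case the wrapped result has the opposite sign. The bit
    // test below is true exactly when both operands disagree in sign
    // with the result.
    static int checkedAdd(int a, int b) {
        int sum = a + b;
        if (((a ^ sum) & (b ^ sum)) < 0) {
            throw new ArithmeticException("int overflow: " + a + " + " + b);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(checkedAdd(3, 4)); // 7
        checkedAdd(Integer.MAX_VALUE, 1);     // throws ArithmeticException
    }
}
```

Unlike the sum wrapper in the question, this also catches negative overflow (e.g. Integer.MIN_VALUE + (-1)), because it compares signs rather than assuming both operands are positive.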
This question was asked in my interview.
random(0,1) is a function that generates integers 0 and 1 randomly.
Using this function, how would you design a function that takes two integers a, b as input and generates random integers between a and b, inclusive?
I have No idea how to solve this.
We can do this with bit logic (e.g. a = 4, b = 10):
1. Calculate the difference b - a (6 for the example).
2. Calculate ceil(log2(b - a + 1)), i.e. the number of bits required to represent all numbers between a and b (3 for the example).
3. Call random(0,1) once for each bit (for the example, the results range over 000 to 111).
4. Repeat step 3 until the number (say num) is between 000 and 110 inclusive; we need only 7 values, since b - a + 1 = 7, giving the 7 possible states a, a + 1, ..., a + 6 = b.
5. Return num + a.
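The steps above translate to Java roughly as follows. randomBit() is a stand-in for the given random(0,1) primitive (here backed by java.util.Random purely for demonstration), and the class name is mine:

```java
import java.util.Random;

public class RandomRange {
    static final Random RNG = new Random();

    // Stand-in for the given random(0,1) primitive.
    static int randomBit() {
        return RNG.nextInt(2);
    }

    // Uniform integer in [a, b] using only randomBit(), by rejection sampling.
    static int randomBetween(int a, int b) {
        int range = b - a + 1; // number of possible values
        if (range == 1) return a;
        // ceil(log2(range)): number of bits needed to cover 0 .. range-1
        int bits = 32 - Integer.numberOfLeadingZeros(range - 1);
        while (true) {
            int num = 0;
            for (int i = 0; i < bits; i++) {
                num = (num << 1) | randomBit(); // one random bit per position
            }
            if (num < range) return a + num;   // otherwise reject and retry
        }
    }
}
```

Each accepted draw is uniform over the range because every in-range bit pattern is equally likely, and out-of-range patterns are simply discarded.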
I hate this kind of interview question, because there are answers that technically fulfill it but will make the interviewer pretty mad if you use them. For example:
Call random,
if you obtain 0, output a
if you obtain 1, output b
A more sophisticated answer, and probably what the interviewer wants, is:
init(a, b) {
    c = Max(a, b)
    d = log2(c) // so we know how many bits we need to cover both a and b
}

Random() {
    int r = 0;
    for (int i = 0; i < d; i++)
        r = (r << 1) | Random01();
    return r;
}
You can generate random strings of 0 and 1 by successively calling the sub function.
So we have randomBit() returning 0 or 1 independently, uniformly at random and we want a function random(a, b) that returns a value in the range [a,b] uniformly at random. Let's actually make that the range [a, b) because half-open ranges are easier to work with and equivalent. In fact, it is easy to see that we can just consider the case where a == 0 (and b > 0), i.e. we just want to generate a random integer in the range [0, b).
Let's start with the simple answer suggested elsewhere. (Forgive me for using C++ syntax; the concept is the same in Java.)
int random2n(int n) {
    return n ? randomBit() + (random2n(n - 1) << 1) : 0;
}

int random(int b) {
    int n = ceil(log2(b)), v;
    while ((v = random2n(n)) >= b)
        ;
    return v;
}
That is, it is easy to generate a value in the range [0, 2^n) given randomBit(). So to get a value in [0, b), we repeatedly generate something in the range [0, 2^ceil(log2(b))) until we get something in the correct range. It is rather trivial to show that this selects from the range [0, b) uniformly at random.
As stated before, the worst-case expected number of calls to randomBit() for this is (1 + 1/2 + 1/4 + ...) * ceil(log2(b)) = 2 * ceil(log2(b)). Most of those calls are a waste; we really only need log2(b) bits of entropy, so we should try to get as close to that as possible. Even a clever implementation that calculates the high bits early and bails out as soon as it exits the wanted range has the same expected number of calls to randomBit() in the worst case.
We can devise a more efficient (in terms of calls to randomBit()) method quite easily. Let's say we want to generate a number in the range [0, b). With a single call to randomBit(), we should be able to approximately cut our target range in half. In fact, if b is even, we can do that. If b is odd, we will have a (very) small chance that we have to "re-roll". Consider the function:
int random(int b) {
    if (b < 2) return 0;
    int mid = (b + 1) / 2, ret = b;
    while (ret == b) {
        ret = (randomBit() ? mid : 0) + random(mid);
    }
    return ret;
}
This function essentially uses each random bit to select between two halves of the wanted range and then recursively generates a value in that half. While the function is fairly simple, the analysis of it is a bit more complex. By induction one can prove that this generates a value in the range [0, b) uniformly at random. Also, it can be shown that, in the worst case, this is expected to require ceil(log2(b)) + 2 calls to randomBit(). When randomBit() is slow, as may be the case for a true random generator, this is expected to waste only a constant number of calls rather than a linear amount as in the first solution.
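A direct Java port of that halving scheme may make it easier to experiment with; randomBit() is assumed to return 0 or 1 uniformly (backed by java.util.Random here only for demonstration), and the class name is mine:

```java
import java.util.Random;

public class HalvingRandom {
    static final Random RNG = new Random();

    // Stand-in for a uniform random bit source.
    static int randomBit() {
        return RNG.nextInt(2);
    }

    // Uniform integer in [0, b): each random bit selects a half of the
    // remaining range; only the single out-of-range value b (possible
    // when b is odd) forces a re-roll.
    static int random(int b) {
        if (b < 2) return 0;
        int mid = (b + 1) / 2;
        int ret = b;
        while (ret == b) {
            ret = (randomBit() == 1 ? mid : 0) + random(mid);
        }
        return ret;
    }
}
```

Note that when b is even, mid + random(mid) can never reach b, so no re-roll ever happens at that level; the retry loop only fires on odd-sized ranges.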
int randomBetween(int a, int b) {
    int x = b - a; // assuming a is smaller than b
    double rand = Math.random(); // uniform in [0, 1)
    return a + (int) (rand * (x + 1)); // truncation maps [0, 1) uniformly onto 0..x
}