Reading long binary numbers in Java

I am trying to write a program which converts binary numbers into decimal, however as soon as I have a binary number longer than 10 digits I get a java.lang.NumberFormatException. I was wondering how I should rewrite my code in order to handle long binary numbers:
try {
    // will throw an exception if the user's input contains a non-integer
    int inputNumber = Integer.parseInt(returnEnterNumber());
    // when our user wants to convert from binary to decimal
    if (binaryToDecimal.isSelected()) {
        // checks if number is binary
        int checkNumber = inputNumber;
        while (checkNumber != 0) {
            if (checkNumber % 10 > 1) {
                throw new InvalidBinaryException();
            }
            checkNumber = checkNumber / 10;
        }
        // converts from binary and outputs result
        int n = Integer.parseInt(returnEnterNumber(), 2);
        displayConvertedNumber(Integer.toString(n));
    }
}
catch (Exception e) {
    displayConvertedNumber("WRONG INPUT! - TRY again");
}
Edit: I understand why the code fails, seeing as how it takes the number as a decimal and overflows. I am not sure how to rewrite the code to take the input as a binary straight away.

That's not a valid way to check a binary number. You're converting the input to an int in base 10, then checking that each of its base-10 digits is zero or one. The conversion itself will fail on long enough input strings, and even when it doesn't, the check can fail as well.
You shouldn't be converting it at all; you should be checking the input string itself.
EDIT: Actually you don't have to do anything. Integer.parseInt() will check it for you and throw NumberFormatException if the string isn't a number in the specified radix.

You are parsing your binary digit string as a decimal integer first. If it has more than 10 significant digits (or a 10-digit value above Integer.MAX_VALUE), its decimal interpretation is too big to fit in an int, so the decimal conversion fails.
Since you are going to parse the digit string as a binary number anyway, simply avoid parsing it as a decimal one first. For instance, most of what you posted could be reduced to this:
int inputNumber = Integer.parseInt(returnEnterNumber(),
binaryToDecimal.isSelected() ? 2 : 10);
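A minimal, self-contained sketch of that idea (the convert method and the returned error string are illustrative stand-ins for the question's GUI calls, not any real API): Integer.parseInt with radix 2 already rejects any digit other than 0 or 1 by throwing NumberFormatException, so the manual digit-checking loop and InvalidBinaryException are no longer needed.

```java
public class RadixParse {
    // Parses the user's input in the radix matching the selected
    // conversion: 2 when binary input is expected, otherwise 10.
    static String convert(String input, boolean binarySelected) {
        try {
            int n = Integer.parseInt(input, binarySelected ? 2 : 10);
            return Integer.toString(n);
        } catch (NumberFormatException e) {
            // thrown for empty input, overflow, or digits invalid in the radix
            return "WRONG INPUT! - TRY again";
        }
    }
}
```

Note this still overflows for binary strings longer than 31 bits, since the result must fit in an int; for arbitrarily long input, BigInteger is the way out.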

Take a look at Integer's MAX and MIN values:
public class MainClass {
    public static void main(String[] arg) {
        System.out.println(Integer.MAX_VALUE);
        System.out.println(Integer.MIN_VALUE);
    }
}
the output will be:
2147483647
-2147483648
This means that once your input has more than 10 decimal digits (or a 10-digit value above 2147483647), it no longer fits in the Integer data type.
Try using BigInteger for your binary value, or consider keeping it as a String.
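To illustrate (a sketch; the parseBinary helper is just a name chosen here, not a JDK method): BigInteger accepts the same (String, radix) style of parsing as Integer.parseInt, but has no fixed bit width, so long binary strings are no problem.

```java
import java.math.BigInteger;

public class LongBinary {
    // Parses a binary string of arbitrary length; BigInteger has
    // no fixed bit width, so nothing overflows.
    static BigInteger parseBinary(String bits) {
        return new BigInteger(bits, 2);
    }

    public static void main(String[] args) {
        String bits = "1".repeat(40); // 40 one-bits: far too wide for an int
        System.out.println(parseBinary(bits)); // prints 2^40 - 1 in decimal
    }
}
```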

Here is one line of code that will accomplish what you are looking for:
System.out.println(new BigInteger("10101010101010111101010101001101010101010101001010101010101001011101010101010101",2).toString());

Related

Issues Converting to Binary While Using a While Loop

I'm currently trying to solve the binary gap problem in Java, and started off by first trying to convert the decimal into binary using a while loop. I was testing it with different decimal inputs, but noticed after stepping through it that on the final loop I'm getting integer overflow instead of appending a 1 (or at least I think I am; it goes from 100010000 to 411065418, I'm assuming because it multiplies the 100010000 by 10).
This is my code currently:
public class BinaryGap {
    public static void main(String[] args) {
        // write your code in Java SE 8
        int decimal = 529;
        int ans = 0;
        // returns the number in binary but in big endian form
        while (decimal != 0) {
            ans += (decimal % 2);
            ans *= 10;
            decimal /= 2;
        }
    }
}
Any help in telling me where my line of thinking is wrong would be greatly appreciated
Your code is conceptually working fine, but the variable ans hits the limit of int, 2147483647. When the final ans *= 10 pushes it past that value, the variable overflows and wraps around.
To overcome it you can use the type String for ans, with small adjustments to the code. Note that each new remainder is the next more significant bit, so it should go in front of the digits collected so far (appending would produce the bits in reverse order):
int decimal = 529;
String ans = "";
// builds the number in binary, most significant bit first
while (decimal != 0) {
    ans = (decimal % 2) + ans;
    decimal /= 2;
}
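A self-contained way to sanity-check the string version is to compare it against the JDK's own Integer.toBinaryString. A sketch, with each remainder prepended so the most significant bit ends up first:

```java
public class BinaryString {
    // Converts a non-negative int to its binary representation.
    // Each remainder is the next more significant bit, so it is
    // prepended to the digits collected so far.
    static String toBinary(int decimal) {
        if (decimal == 0) return "0";
        String ans = "";
        while (decimal != 0) {
            ans = (decimal % 2) + ans;
            decimal /= 2;
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(toBinary(529));               // 1000010001
        System.out.println(Integer.toBinaryString(529)); // same result
    }
}
```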

How to convert binary to decimal using BigInteger?

I am trying to print a decimal number from a binary number using the BigInteger class.
I am using the BigInteger(String val, int radix) constructor of the BigInteger class to convert a given binary value to decimal, but it is printing the exact binary value which was passed to the constructor. What is the error?
The code is as follows:
System.out.print("Enter the decimal number:");
s=String.valueOf(sc.nextInt());
BigInteger i=new BigInteger(s,10);
System.out.println(i);
Actually you are not printing the decimal value of a binary number.
The value which is printed is the exact decimal representation of the String val.
With the radix that you put in this statement, it will act like this:
BigInteger i = new BigInteger("11101", 10); // 11101
The output 11101 here is actually a decimal number, not a binary one.
In order to get the expected result you should change the radix value to 2; after that it will print the decimal value of the binary number:
BigInteger i = new BigInteger("11101", 2);
System.out.println(i);
Output:
29
If you are trying to parse the input String as a binary, you should write:
BigInteger i=new BigInteger(s,2);
When you print it
System.out.println(i);
it will be displayed in decimal by default.
It doesn't make sense, however, to read the number as an int and then convert it to a String for the BigInteger constructor, since that limits the range of numbers that would work.
try:
s = sc.nextLine();
BigInteger i=new BigInteger(s,2);
System.out.println(i);
To parse the input in binary, use Scanner.nextBigInteger() with a radix of 2:
System.out.print("Enter the decimal number:");
BigInteger i = sc.nextBigInteger(2);
System.out.println(i);
The output will be in decimal by default.

Need help: Program to convert binary to decimals (Java)

I would like some help with my code. I'm doing a method to convert binary numbers to decimals. This is my code:
public double decimal(double number){
    String num = (number + "");
    char charac;
    double n;
    double cont = 0;
    double exp;
    double res = 0;
    for (int i = num.length() - 1; i >= 0; i--) {
        charac = num.charAt(i); // obtains digits 1 by 1 and converts to char
        n = (Character) charac; // converts the digit into a number (double)
        exp = Math.pow(2, cont); // makes the exponential function to multiply to n
        n = n * exp; // multiplies exponential with the digit
        res += n; // adds digit to accumulator
        cont++;
    }
    return res;
}
The problem I'm having is that the numbers get all messed up for whatever reason, like n being assigned 48 in the first iteration of the for loop.
I tried using n as an int instead and it seemed to be working well, but on the second iteration of the loop it was assigned -2 somehow and that ruined the addition.
Change
n=(Character)charac;
to use Character.digit(char, int), otherwise you get the ASCII value (not the digit value):
n=Character.digit(charac, 10);
By assigning a char to a double, it is in fact passing on the ASCII value of that digit, e.g. 48 is returned for the char '0'.
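Putting that fix together, here is a sketch of the whole method, under the assumption that the binary digits arrive as a String of '0'/'1' characters rather than packed into a double (which also avoids the stray ".0" that converting a double to a string produces):

```java
public class BinaryToDecimal {
    // Corrected version of the questioner's method, assuming the
    // binary digits arrive as a String of '0' and '1' characters.
    static double decimal(String num) {
        double res = 0;
        double cont = 0; // bit position, counted from the right
        for (int i = num.length() - 1; i >= 0; i--) {
            // Character.digit returns the numeric value (0 or 1),
            // not the ASCII code (48 or 49)
            double n = Character.digit(num.charAt(i), 2);
            res += n * Math.pow(2, cont); // weight the bit by 2^position
            cont++;
        }
        return res;
    }
}
```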
I think this answer from How to convert binary string value to decimal will help you:
String c = "110010"; // as binary
int decimalValue = Integer.parseInt(c, 2);
System.out.println(decimalValue);
result: 50

Neat way to find the number of significant digits in a BigDecimal?

I do not want to limit the number of significant digits in a BigDecimal. I only want to find the number of significant digits that number has.
Is there a way to do this without converting the number to a string and counting the characters?
I believe you want a combination of stripTrailingZeros, precision and scale, as demonstrated here:
import java.math.*;

public class Test {
    public static void main(String[] args) {
        test("5000");     // 4
        test("5000.00");  // 4
        test("5000.12");  // 6
        test("35000");    // 5
        test("35000.00"); // 5
        test("35000.12"); // 7
        test("35000.120"); // 7
        test("0.0034");   // 2
        test("1.0034");   // 5
        test("1.00340");  // 5
    }

    private static void test(String input) {
        System.out.println(input + " => " +
            significantDigits(new BigDecimal(input)));
    }

    private static int significantDigits(BigDecimal input) {
        input = input.stripTrailingZeros();
        return input.scale() < 0
            ? input.precision() - input.scale()
            : input.precision();
    }
}
The call to stripTrailingZeros is required as otherwise it's entirely possible for a BigDecimal to be stored in a "non-normalized" form. For example, new BigDecimal(5000) has a precision of 4, not 1.
The call to scale() is used to handle cases where the normalized form has trailing zeroes before the decimal point, but nothing after the decimal point. In this case, the scale will always be negative, and indicates the number of trailing zeroes.
EDIT: Cases with trailing zeroes but no decimal point are inherently ambiguous - there's no definite number of significant digits to "5000" for example. The above code treats all trailing zeroes before the decimal point as significant.
The following modification of Jon's answer returns the results that seem correct to me:
private static int significantDigits(BigDecimal input) {
    return input.scale() <= 0
        ? input.precision() + input.stripTrailingZeros().scale()
        : input.precision();
}
(Note that input.stripTrailingZeros().scale() appears to always be negative in these tests.)
Also, as I noted above, BigDecimal isn't capable of distinguishing between, say, a "5000" with one significant digit and a "5000" with two, for example. Furthermore, according to the definitions, "5000." (with a trailing decimal point) should have exactly four significant digits, but BigDecimal isn't capable of handling that. (See http://en.wikipedia.org/wiki/Significant_figures for the definitions I'm using.)
Jon's answer is correct in most cases, except for numbers in exponential notation:
private static int significantDigits(BigDecimal input) {
    return input.scale() <= 0
        ? input.precision() + input.stripTrailingZeros().scale()
        : input.precision();
}
Let's say the input is 1.230000E17: the function returns 18, however the correct number of significant digits should be 7.

Removing a decimal point in java

I want to store an integer named Amount, and I want it to be stored in pence, so if the user entered 11.45 it would be stored as 1145. What is the best way to remove the decimal point? Should I be using DecimalFormat in Java?
Edit:
It is entered in string format; I was going to convert it to an int. I will give one of your solutions a go and let you know if it works, but I'm not sure which one would be the best. Thanks everyone.
Times it by 100 and cast to int. Use decimal formatting if double/float are too inaccurate, which they may be for money.
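That caveat about inaccuracy is worth spelling out: a bare cast truncates, and some amounts multiply to just under the exact result. A small sketch of the difference between casting and rounding:

```java
public class PenceCast {
    public static void main(String[] args) {
        double pounds = 0.29;
        // 0.29 has no exact double representation; 0.29 * 100 comes out
        // slightly below 29, so a bare cast truncates to 28.
        System.out.println((int) (pounds * 100));           // 28
        // Rounding instead of truncating gives the intended value.
        System.out.println((int) Math.round(pounds * 100)); // 29
    }
}
```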
If the user input is in the form of a string (and the format has been verified), then you can strip out the decimal point and interpret the result as an integer (or leave it as a string without the decimal point).
String input = "11.45";
String stripped = input.replace(".", ""); // becomes "1145"
int value = Integer.parseInt(stripped);
If it's a float already, then just multiply by 100 and cast, as #user1281385 suggests.
What about converting to float, multiplying by 100 and then converting to int?
String pound = "10.45"; // user-entered string
int pence = (int)Math.round(Float.parseFloat(pound) * 100);
This might be also useful: Best way to parseDouble with comma as decimal separator?
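If the amount is available as a String anyway, another option (a sketch; the toPence name is illustrative) is BigDecimal, which parses the decimal text exactly and can shift the point two places without ever touching float or double:

```java
import java.math.BigDecimal;

public class PoundsToPence {
    // Shifts the decimal point two places right; intValueExact throws
    // ArithmeticException if a fractional penny would be lost, or if
    // the result does not fit in an int.
    static int toPence(String pounds) {
        return new BigDecimal(pounds).movePointRight(2).intValueExact();
    }
}
```

The exact failure on inputs like "11.455" is usually what you want for money, rather than silently dropping part of a penny.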
Tested and works. Even if the user enters a number without a decimal, it will keep it as such.
double x = 11.45; // number inputted
String s = String.valueOf(x); // String value of the number inputted
int index = s.indexOf("."); // find where the decimal is located
int amount = (int) x; // initialize it to the number inputted, in case it's an int
if (amount != x) // if the number inputted isn't an int (contains a decimal)
    // multiply it by 10 ^ (the number of digits after the decimal place)
    amount = (int) (x * Math.pow(10, (s.length() - 1 - index)));
System.out.print(amount); // output is 1145
// if x was 11.4500, the output is 1145 as well
// if x was 114500, the output is 114500
