I have recently stumbled upon another question on Stack Overflow where one person suggests using Short.parseShort(String s) and another Short.valueOf(String s).
I have tried both myself and found no difference in functionality, and the official documentation didn't really help me either:
Short.parseShort:
Parses the string argument as a signed decimal short. The characters in the string must all be decimal digits, except that the first character may be an ASCII minus sign '-' ('\u002D') to indicate a negative value or an ASCII plus sign '+' ('\u002B') to indicate a positive value. The resulting short value is returned, exactly as if the argument and the radix 10 were given as arguments to the parseShort(java.lang.String, int) method.
Short.valueOf:
Returns a Short object holding the value given by the specified String. The argument is interpreted as representing a signed decimal short, exactly as if the argument were given to the parseShort(java.lang.String) method. The result is a Short object that represents the short value specified by the string.
Both have overloads that accept an additional radix parameter, and both throw a NumberFormatException.
They seem to be identical, but if that is the case, why do both exist?
valueOf uses parseShort internally and additionally wraps the result in the boxed type Short:
public static Short valueOf(String s, int radix)
        throws NumberFormatException {
    return valueOf(parseShort(s, radix));
}
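In practice this means parseShort is the right choice when you only need a primitive, while valueOf saves a boxing step when you need an object. A minimal sketch (the class name is made up for illustration):

public class ShortParsingDemo {
    public static void main(String[] args) {
        short primitive = Short.parseShort("42"); // primitive short
        Short boxed = Short.valueOf("42");        // boxed java.lang.Short

        // A boxed Short is needed wherever an object is required,
        // e.g. as an element of a collection.
        java.util.List<Short> list = new java.util.ArrayList<>();
        list.add(boxed);
        list.add(Short.valueOf("17"));

        System.out.println(primitive + ", " + list); // 42, [42, 17]
    }
}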
I'm practicing with some tasks of a Java course and I came across this variable
int x = 0b1000_1100_1010;
I know that a "f" and a "d" beside a number means that the number is a float or a double, respectively. But what about this "b" between the number?
I saw here that this is related to bytes, but I didn't quite understand how it works.
My question also applies to the "x" between numbers that I just saw on that link.
Thanks!
This is a binary literal.
It's notation indicating that the number is written in binary (base 2).
It's like when you use the hexadecimal notation: 0xF9. In Java, you can use 0b1111_1001 to represent the same number, which is 249 in decimal.
It's not related to bytes, but to bits. You can clearly see which bits are set and which are not. By default a number starting with 0b is an int, but you can write a long like this 0b1010L (note the trailing L).
The b can be lowercase or uppercase. So this is also valid: 0B1111. Note that since the 0b prefix indicates a binary representation, you're not allowed to use any character other than 0 and 1 (and _ to mark a separation).
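A short runnable sketch pulling these rules together (the class name is invented for illustration):

public class BinaryLiteralDemo {
    public static void main(String[] args) {
        int hex = 0xF9;          // hexadecimal literal
        int bin = 0b1111_1001;   // the same value as a binary literal
        long big = 0b1010L;      // a binary literal typed as long

        System.out.println(hex == bin); // true
        System.out.println(bin);        // 249
        System.out.println(big);        // 10
    }
}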
Java allows you to use _ in numeric literals, as shown below:
public class Main {
    public static void main(String[] args) {
        int x = 1_2_3;
        System.out.println(x + 100);
    }
}
Output:
223
The byte data type is an 8-bit signed two's complement integer. It has a minimum value of -128 and a maximum value of 127 (inclusive).
The byte data type can be useful for saving memory in large arrays, where the memory savings actually matters. They can also be used in place of int where their limits help to clarify your code; the fact that a variable's range is limited can serve as a form of documentation.
https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
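A tiny illustration of that 8-bit range (class name is hypothetical):

public class ByteRangeDemo {
    public static void main(String[] args) {
        byte min = Byte.MIN_VALUE; // -128
        byte max = Byte.MAX_VALUE; // 127

        // Narrowing 128 to a byte wraps around to -128 (two's complement)
        byte wrapped = (byte) 128;

        System.out.println(min + " " + max + " " + wrapped); // -128 127 -128
    }
}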
The binary string is "10101010101010101010101010101010", 32 bits
I get a NumberFormatException when I try
Integer.parseInt("10101010101010101010101010101010", 2);
But the same string with a "0b" prefix added,
System.out.print(0b10101010101010101010101010101010);
prints -1431655766.
Is this a valid binary string?
Integer.parseInt() is the wrong method here, because it only accepts signed numbers.
With Java 8, use:
Integer.parseUnsignedInt("10101010101010101010101010101010", 2);
Before Java 8, use:
(int) Long.parseLong("10101010101010101010101010101010", 2);
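Both approaches produce the same bit pattern; a minimal runnable sketch (the class name is invented):

public class UnsignedBinaryDemo {
    public static void main(String[] args) {
        String s = "10101010101010101010101010101010";

        // Java 8+: parse the 32 bits as an unsigned value
        int viaUnsigned = Integer.parseUnsignedInt(s, 2);

        // Before Java 8: parse into a long, then narrow to int
        int viaLong = (int) Long.parseLong(s, 2);

        System.out.println(viaUnsigned); // -1431655766 (two's-complement view)
        System.out.println(viaLong);     // -1431655766
    }
}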
The value you want to convert exceeds the int range, so use BigInteger (from java.math) instead:
BigInteger b = new BigInteger("10101010101010101010101010101010", 2);
Integer.parseInt("10101010101010101010101010101010", 2);
Note that this would be 2863311530, which would cause an overflow, as it is above Integer.MAX_VALUE.
System.out.print(0b10101010101010101010101010101010);
This, however, uses the internal representation of an int, which is two's-complement form. That's why it is negative. Integer.parseInt(), on the other hand, treats the input as a signed number, and since 2863311530 is above Integer.MAX_VALUE it throws the exception.
It is important to understand the different bit representations.
Edit: If you want your input to be interpreted as two's complement, use this:
Integer.parseUnsignedInt("10101010101010101010101010101010", 2);
I get the following exception:
java.lang.NumberFormatException: For input string: "3693019260"
while calling
Integer.parseInt(s);
And I don't know why I get it.
3693019260 is smaller than 2^32
3693019260 is clearly a natural number
the String is cleared from all non-digit elements with s.replaceAll("[^0-9]", "")
So why do I get this exception?
From the little debugging I did, I saw that the number dips under multmin, but I don't know what that variable does or how to interpret this observation.
While 3693019260 fits into a 32-bit unsigned integer, you are trying to parse it into a plain int, which is a signed integer. Signed simply means that it supports negative values using -.
With signed numbers, half of the value range is reserved for negatives, so your number must fit within 2^32 / 2 − 1 = 2147483647 instead of simply 2^32.
The simplest fix is to parse the value as a long instead of an int. Long numbers are 64 bits and support many more digits in the string.
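A minimal sketch of that fix, plus Java 8's unsigned parser for when the 32-bit pattern itself is wanted (the class name is invented):

public class ParseLargeDemo {
    public static void main(String[] args) {
        String s = "3693019260";

        // Integer.parseInt(s) would throw: the value exceeds
        // Integer.MAX_VALUE (2147483647).

        long asLong = Long.parseLong(s);              // 3693019260
        int asUnsigned = Integer.parseUnsignedInt(s); // same bits, prints -601948036

        System.out.println(asLong);
        System.out.println(asUnsigned);
    }
}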
Long story short, I was messing around with some basic genetic algorithm stuff in Java. I was using a long to store my genes, but I was using binary strings for readability while debugging. I came across an odd situation where I couldn't parse some binary strings that start with a 1 (I don't know if this is always the case, but it seems to be consistent with strings of 64 characters in length).
I was able to replicate this with the following example:
String binaryString = Long.toBinaryString(Long.MIN_VALUE);
long smallestLongPossibleInJava = Long.parseLong(binaryString, 2);
which throws, producing the following stack trace:
Exception in thread "main" java.lang.NumberFormatException: For input string: "1000000000000000000000000000000000000000000000000000000000000000"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:592)
at com.company.Main.main(Main.java:25)
Given that I have a correctly formatted binary string of sixty four characters in length, why can't I parse some strings to a long? Most of the time, my strings are randomly generated, but in the instance above this should work (seeing as Long.MIN_VALUE is definitely a valid long in Java).
Quoting Long.toBinaryString(i) Javadoc (emphasis mine):
Returns a string representation of the long argument as an unsigned integer in base 2.
And quoting Long.parseLong(s, radix) (emphasis mine):
Parses the string argument as a signed long in the radix specified by the second argument.
The problem comes from the fact that toBinaryString returns an unsigned representation, whereas parseLong expects a signed value.
You should use Long.parseUnsignedLong(s, radix) instead:
String binaryString = Long.toBinaryString(Long.MIN_VALUE);
long smallestLongPossibleInJava = Long.parseUnsignedLong(binaryString, 2);
Note that this is actually explicitly stated in the toBinaryString Javadoc:
The value of the argument can be recovered from the returned string s by calling Long.parseUnsignedLong(s, 2).
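A quick round-trip sketch (class name invented):

public class BinaryRoundTripDemo {
    public static void main(String[] args) {
        String binaryString = Long.toBinaryString(Long.MIN_VALUE);
        // "1000000000000000000000000000000000000000000000000000000000000000"

        long recovered = Long.parseUnsignedLong(binaryString, 2);
        System.out.println(recovered == Long.MIN_VALUE); // true
    }
}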
The following expressions are valid in Java, as expected,
int a = -0;
int b = +0;
and so are the following.
Integer c = new Integer(-0);
int d = Integer.parseInt("-0");
BigDecimal e = new BigDecimal("-0");
The following statements, however, are invalid.
Integer f = new Integer("+0"); //Leading + sign.
int g = Integer.parseInt("+0"); //Leading + sign.
Both of them throw a NumberFormatException.
The following statement with BigDecimal however compiles and runs without causing an exception to be thrown.
BigDecimal bigDecimal = new BigDecimal("+0"); //Leading + sign.
Why is a leading + sign valid with BigDecimal here which however doesn't appear to be the case with the other datatypes available in Java?
According to the documentation, negative values need a leading minus sign, but positive values do not take a plus sign.
public static int parseInt(String s)
The characters in the string must all be decimal digits, except that the first character may be an ASCII minus sign '-' ('\u002D') to indicate a negative value. The resulting integer value is returned, exactly as if the argument and the radix 10 were given as arguments to the parseInt(java.lang.String, int) method.
Then for the constructor:
public Integer(String s)
The string is converted to an int value in exactly the manner used by the parseInt method for radix 10.
The real answer is most likely that the inconsistent behaviour between new Integer("+0") and new BigDecimal("+0") is the result of a design mistake in one or the other. Unfortunately, the mistake got "baked in" when the relevant class was publicly released, and Sun / Oracle were unwilling to fix it because:
the respective implementations do conform to their respective specifications,
the inconsistency is a relatively minor issue with a simple work-around, and
fixing it is likely to break forwards and backwards compatibility.
(And this explanation is supported by the evaluation section of Java Bug #4296955 that #rlay3 found!!)
Note that I have excluded your Java expression examples from consideration. That is because the context for Java expression syntax and converting text strings are sufficiently different that (IMO) you should not expect them to behave the same. (And in the same way, you should not expect a String reader to do something special with any \ characters that it encounters ...)
UPDATE
#ADTC has observed that they actually did change this in Java 7 and that Integer.parseInt now does accept a leading + sign.
The corresponding Java bug for this enhancement is #5017980. (And if you look at the linked bugs, the first one seems to imply that the change has been backported to OpenJDK6.)
However, Oracle didn't mention this change in the Java 7 compatibility / upgrade documents ... which is strange given that Sun had previously rejected the change because of compatibility concerns!!
This is all rather peculiar ...
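A quick check of the Java 7+ behaviour (class name invented; run on Java 7 or later):

import java.math.BigDecimal;

public class LeadingPlusDemo {
    public static void main(String[] args) {
        // Accepted since Java 7; threw NumberFormatException before that
        int g = Integer.parseInt("+0");

        // Always accepted
        BigDecimal bd = new BigDecimal("+0");

        System.out.println(g);  // 0
        System.out.println(bd); // 0
    }
}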