I am new to Java. I have a problem with numeric literals. Here is my problem:
float rank = 1050.86F;
System.out.println(rank);
The output is: 1050.86
double rank1 = 1050.86D;
double rank2 = 1050.86F;
System.out.println(rank1);
System.out.println(rank2);
The output of rank1 is: 1050.86
The output of rank2 is: 1050.8599853515625
My questions are:
(i) Why are the outputs of rank1 and rank2 different, and how can I calculate that?
(ii) Why do I need to use L, D, F before the semicolon? As we have already used the keywords double, int, why do we need to use L, D, F on the values as well?
Please help me. I am new to programming.
To answer your first question, it has to do with how computers store numbers internally. Computers use floating point to store these numbers. Double precision ("double") stores more "information" in each number than float does: 64 bits are stored for a double, and 32 bits for a float. See how 64 is twice as many bits as 32? That's why it's called a double!
Both doubles and floats can only store a limited amount of information, though, so they're not perfectly precise, and you can expect small inaccuracies like that to happen all the time in programming. It's nothing to worry about; it's very common to run into floating-point errors like the one you describe. There is no real fix other than using doubles when you want more precise decimal values. Explaining how to deal with floating-point error would take a while, so I will link one of my favorite explanations of it here.
To answer your second question: if you don't specify, Java will assume you're using a double value (see here), so you have to specify that you want a float instead. This is because you will most often want a double (like I said above, it stores more information), so it makes sense that double is the default and you only add a suffix when you want a float instead.
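For example (a short sketch; the exact compiler messages may vary):

// Without a suffix, a decimal literal is a double, so this line would not compile:
// float bad = 1050.86;     // error: possible lossy conversion from double to float
float ok = 1050.86F;        // the F suffix marks the literal as a float
double d = 1050.86;         // no suffix needed: floating-point literals default to double
long big = 10000000000L;    // L is needed because 10000000000 is too large for an int literal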
Related
We are solving a numeric-precision-related bug. Our system collects some numbers and outputs their sum.
The issue is that the system does not retain the numeric precision, e.g. 300.7 + 400.9 = 701.599..., while the expected result would be 701.6. The precision is supposed to adapt to the input values, so we cannot just round results to a fixed precision.
The problem is obvious: we use double for the values, and addition accumulates the error from the binary representation of the decimal values.
The path of the data is as follows:
XML file, type xsd:decimal
Parse into a Java primitive double. Its ~15 significant decimal digits should be enough; we expect values no longer than 10 digits total, 5 of them fraction digits.
Store into the DB (MySQL 5.5), type double
Load via Hibernate into a JPA entity, i.e. still a primitive double
Sum a bunch of these values
Print the sum into another XML file
Now, I assume the optimal solution would be converting everything to a decimal format. Unsurprisingly, there is pressure to go with the cheapest solution. It turns out that converting the doubles to BigDecimal just before adding a couple of numbers works, as case B in the following example shows:
import java.math.BigDecimal;
public class Arithmetic {
    public static void main(String[] args) {
        double a = 0.3;
        double b = -0.2;
        // A
        System.out.println(a + b); // 0.09999999999999998
        // B
        System.out.println(BigDecimal.valueOf(a).add(BigDecimal.valueOf(b))); // 0.1
        // C
        System.out.println(new BigDecimal(a).add(new BigDecimal(b))); // 0.099999999999999977795539507496869191527366638183593750
    }
}
More about this:
Why do we need to convert the double into a string, before we can convert it into a BigDecimal?
Unpredictability of the BigDecimal(double) constructor
I am worried that such a workaround would be a ticking bomb.
First, I am not so sure that this arithmetic is bulletproof for all cases.
Second, there is still some risk that someone in the future might implement some changes and change B to C, because this pitfall is far from obvious and even a unit test may fail to reveal the bug.
I would be willing to live with the second point but the question is: Would this workaround provide correct results? Could there be a case where somehow
Double.valueOf("12345.12345").toString().equals("12345.12345")
is false? According to its javadoc, Double.toString prints just the digits needed to uniquely represent the underlying double value, so when parsed again it gives back the same double value. Isn't that sufficient for this use case, where I only need to add the numbers and print the sum, relying on this magical Double.toString(double d) method? To be clear, I do prefer what I consider the clean solution, using BigDecimal everywhere, but I am somewhat short of arguments to sell it; ideally I would like an example where conversion to BigDecimal before addition fails to do the job described above.
If you can't avoid parsing into a primitive double or storing as a double, you should convert to BigDecimal as early as possible.
double can't exactly represent decimal fractions. The value in double x = 7.3; will never be exactly 7.3, but something very, very close to it, with a difference visible from about the 16th significant digit onward (if you wrote out all 50 or so decimal places). Don't be misled by the fact that printing might give exactly "7.3": printing already does some rounding and doesn't show the number exactly.
If you do lots of computations with double numbers, the tiny differences will eventually sum up until they exceed your tolerance. So using doubles in computations where decimal fractions are needed, is indeed a ticking bomb.
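A tiny illustration of that drift, using nothing but repeated addition of 0.1:

double sum = 0.0;
for (int i = 0; i < 10; i++) {
    sum += 0.1;
}
System.out.println(sum); // prints 0.9999999999999999, not 1.0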
[...] we expect values no longer than 10 digits total, 5 fraction digits.
I read that assertion to mean that all numbers you deal with are exact multiples of 0.00001, without any further digits. You can convert doubles to such BigDecimals with
BigDecimal.valueOf(Math.round(doubleVal * 100000), 5)
This will give you an exact representation of a number with 5 decimal fraction digits, the 5-fraction-digits one that's closest to the input doubleVal. This way you correct for the tiny differences between the doubleVal and the decimal number that you originally meant.
If you'd simply use BigDecimal.valueOf(double val), you'd go through the string representation of the double you're using, which can't guarantee that it's what you want. It depends on a rounding process inside the Double class which tries to represent the double-approximation of 7.3 (being maybe 7.30000000000000123456789123456789125) with the most plausible number of decimal digits. It happens to result in "7.3" (and, kudos to the developers, quite often matches the "expected" string) and not "7.300000000000001" or "7.3000000000000012" which both seem equally plausible to me.
That's why I recommend not to rely on that rounding, but to do the rounding yourself by decimal shifting 5 places, then rounding to the nearest long, and constructing a BigDecimal scaled back by 5 decimal places. This guarantees that you get an exact value with (at most) 5 fractional decimal places.
Then do your computations with the BigDecimals (using the appropriate MathContext for rounding, if necessary).
When you finally have to store the number as a double, use BigDecimal.doubleValue(). The resulting double will be close enough to the decimal that the above-mentioned conversion will surely give you the same BigDecimal that you had before (unless you have really huge numbers, like 10 digits before the decimal point - then you're lost with double anyway).
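Putting the pieces together, here is a minimal sketch of that round-trip (class and method names are made up for illustration; the input values are the ones from the question):

import java.math.BigDecimal;

public class DecimalSum {
    // Convert a double that is meant to carry at most 5 fraction digits
    // into an exact BigDecimal, as described above.
    static BigDecimal toDecimal(double val) {
        return BigDecimal.valueOf(Math.round(val * 100000), 5);
    }

    public static void main(String[] args) {
        BigDecimal sum = toDecimal(300.7).add(toDecimal(400.9));
        System.out.println(sum);                    // 701.60000

        double storedAgain = sum.doubleValue();     // if you must go back to double
        System.out.println(toDecimal(storedAgain)); // still 701.60000
    }
}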
P.S. Be sure to use BigDecimal only if decimal fractions are relevant to you - there were times when the British Shilling currency consisted of twelve Pence. Representing fractional Pounds as BigDecimal would give a disaster much worse than using doubles.
It depends on the database you are using. If you are using SQL Server, you can use the data type numeric(12, 8), where 12 is the precision (total number of digits) and 8 is the scale (digits after the decimal point). Similarly, for MySQL you can use DECIMAL(5,2).
You won't lose any precision if you use the above-mentioned data types.
Java Hibernate class:
You can define
private double latitude;
Database:
Let's say, using Java, I write
double number;
If I need to use very big or very small values, how accurate can they be?
I tried to read how doubles and floats work, but I don't really get it.
For my term project in intro to programming, I might need to use different numbers with big ranges of value (many orders of magnitude).
Let's say I create a while loop,
while (number[i-1] - number[i] > ERROR) {
//does stuff
}
Does the limitation of ERROR depend on the size of number[i]? If so, how can I determine how small ERROR can be in order to quit the loop?
I know my teacher explained it at some point, but I can't seem to find it in my notes.
Does the limitation of ERROR depend on the size of number[i]?
Yes.
If so, how can I determine how small ERROR can be in order to quit the loop?
You can get the "next largest" double using Math.nextUp (or the "next smallest" using Math.nextDown), e.g.
double nextLargest = Math.nextUp(number[i-1]);
double difference = nextLargest - number[i-1];
As Radiodef points out, you can also get the difference directly using Math.ulp:
double difference = Math.ulp(number[i-1]);
(but I don't think there's an equivalent method for "next smallest")
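To see how strongly that gap depends on the magnitude of the number, compare (illustrative values):

System.out.println(Math.ulp(1.0));   // 2.220446049250313E-16
System.out.println(Math.ulp(1.0e9)); // 1.1920928955078125E-7 -- a much coarser grid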
If you don't tell us what you want to use it for, then we cannot answer anything more than what is standard knowledge: a double in Java has about 16 significant decimal digits, and the smallest possible positive value is 4.9 × 10⁻³²⁴. That's in all likelihood far higher precision than you will need.
The epsilon value (what you call "ERROR") in your question varies depending on your calculations, so there is no standard answer for it, but if you are using doubles for simple stuff as opposed to highly demanding scientific stuff, just use something like 1 × 10⁻⁹ and you will be fine.
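For instance, a fixed-epsilon comparison along those lines might look like this (the 1e-9 tolerance is the suggestion above, not a universal constant):

public class Tolerance {
    static boolean closeEnough(double a, double b) {
        return Math.abs(a - b) < 1e-9;
    }

    public static void main(String[] args) {
        System.out.println(closeEnough(0.1 + 0.2, 0.3)); // true, even though 0.1 + 0.2 is 0.30000000000000004
    }
}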
Both the float and double primitive types are limited in terms of the amount of data they can store. However, if you want to know the maximum values of the two types, then run the code below with your favourite IDE.
System.out.println(Float.MAX_VALUE);
System.out.println(Double.MAX_VALUE);
The double data type is a double-precision 64-bit IEEE 754 floating point (precision is roughly 15 to 17 significant decimal digits).
The float data type is a single-precision 32-bit IEEE 754 floating point (precision is roughly 6 to 9 significant decimal digits).
After running the code above, if you're not satisfied with their ranges, then I would recommend using BigDecimal, as this type doesn't have a fixed limit (rather, your RAM is the limit).
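As a quick demonstration of the difference (assuming java.math.BigDecimal is imported):

System.out.println(Double.MAX_VALUE * 2); // Infinity: the result overflows a double
System.out.println(new BigDecimal(Double.MAX_VALUE).multiply(BigDecimal.valueOf(2))); // the exact ~3.6e308 value, no overflow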
I'm developing a chemistry app, and I need to include Avogadro's number:
(602200000000000000000000)
I don't really know if I can use scientific notation to represent it as 6.022 x 10^23 (I can't type the exponent as a superscript here).
I first used double, then long, and now java.math.BigInteger.
But it still says it's too big. What can I do, or is this just too much for the system?
Pass it to the BigInteger constructor as a String, and it works just fine.
BigInteger a = new BigInteger("602200000000000000000000");
a = a.multiply(new BigInteger("2"));
System.out.println(a);
Output: 1204400000000000000000000
First of all, you need to check your physics / chemistry text book.
Avogadro's number is not 602,200,000,000,000,000,000,000. It is approximately 6.022 × 10²³. The key word is "approximately". As of 2019, the precise value is 6.02214076 × 10²³ mol⁻¹.
(In 2015, when I originally wrote this reply, the current best approximation for Avogadro's number was 6.022140857(74) × 10²³ mol⁻¹, and the relative error was ±1.2 × 10⁻⁸. In 2019, the SI redefined the mole / Avogadro's number to be the precise value above. Source: Wikipedia)
My original (2015) answer was that since the number only needed 8 decimal digits precision, the Java double type was an appropriate type to represent it. Hence, I recommended:
final double AVOGADROS_CONSTANT = 6.02214076E23;
Clearly, neither int nor long can represent this number. A float could, but not with enough precision (assuming we use the best available measured value).
Now (post 2019) the BigInteger is the simplest correct representation.
Now to your apparent problems with declaring the constant as (variously) a double, a long and a BigInteger.
I expect you did something like this:
double a = 602200000000000000000000;
and so on. That isn't going to work, but the reason it won't work needs to be explained. The problem is that the number is being supplied as an int literal. An int cannot be that big. The largest possible int value is 2³¹ - 1 ... which is a little bit bigger than 2 × 10⁹.
That is what the Java compiler was complaining about. The literal is too big to be an int.
It is too big for a long literal as well. (Do the math.)
But it is not too big for a double literal ... provided that you write it correctly.
The solution using BigInteger(String) works because it side-steps the problem of representing the number as a numeric literal by using a string instead, and parsing it at runtime. That's OK from the perspective of the language, but (IMO) wrong because the extra precision is an illusion.
You can use E notation to write the scientific notation:
double a = 6.022e23;
The problem is with how you're trying to create it (most likely), not because it can't fit.
If you have just a number literal in your code (even if you try to assign it to a double or long), this is first treated as an integer (before being converted to the type it needs to be), and the number you have can't fit into an integer.
// Even though this number can fit into a long, it won't compile, because it's first treated
// as an integer.
long l = 123456788901234;
To create a long, you can add L to your number, so 602200000000000000000000L, although it won't fit into a long either - the max value is 2⁶³ - 1.
To create a double, you can add .0 to your number, so 602200000000000000000000.0 (or 6.022e23 as Guffa suggested), although you should not use this if you want precise values, as you may lose some accuracy because of the way it stores the value.
To create a BigInteger, you can use the constructor taking a string parameter:
new BigInteger("602200000000000000000000");
Most probably you are using a long to initialize the BigInteger. Since long can only represent 64-bit numbers, your number would be too big to fit into a long. Using a String would help.
I have the below code somewhere in my app
private float myMethod(float c) {
    return (float) (c + 273.15);
}
When "c" gets the value something like -273.1455 the result is something very near to zero like 0.0044.
But when it gets the value -273.15 i get this instead of zero: 6.1035157E-6
Why does this happen?
The problem is that 273.15 is a double, not a float, and neither type can represent 273.15 exactly. However, since they have different precision, they actually store slightly different numbers. When the addition is done, c is converted to a double, and that conversion preserves exactly the float value that c already holds. So now you are adding two doubles that are almost, but not exactly, negatives of each other, and the difference is non-zero.
To get "more predictable" result, use 273.15f to ensure you have floats through the calculations. That should solve this problem but what you need to do is to read up on binary floating point arithmetics and how that differs from decimal arithmetic that we are taught in school.
Wiki on floating point is a good place to start.
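A small sketch that reproduces both behaviours (the first printed value is the one from the question):

float c = -273.15f;
System.out.println((float) (c + 273.15)); // float mixed with a double literal: 6.1035157E-6
System.out.println(c + 273.15f);          // all-float arithmetic: 0.0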
Floating-point calculations in computers are not exact. You should read up on floating-point arithmetic to understand and prevent such errors.
The problem is not with the value, but with the display to the user.
I'm assuming you are converting it into a String. The way this is done is detailed in http://docs.oracle.com/javase/1.4.2/docs/api/java/lang/Double.html#toString(double)
To display the value the way the user expects, use the NumberFormat class: http://docs.oracle.com/javase/1.4.2/docs/api/java/text/NumberFormat.html
Example :
import java.text.NumberFormat;

NumberFormat formatter = NumberFormat.getNumberInstance();
formatter.setMaximumFractionDigits(4);
System.out.println(formatter.format(myMethod(-273.15f)));
Now you should get 0.
In my Java program there is code like this:
int f_part = (int) ((f_num - num) * 100);
f_num is a double and num is a long. I just want to take the fractional part out and assign it to f_part. But sometimes f_part's value is one less than it should be; for example, if f_num = 123.55 and num = 123, f_part equals 54. And it happens only when f_num and num are greater than 100. I don't know why this is happening. Can someone please explain why this happens and how to correct it?
This is due to the limited precision in doubles.
The root of your problem is that the literal 123.55 actually represents the value 123.54999....
It may seem like it holds the value 123.55 if you print it:
System.out.println(123.55); // prints 123.55
but in fact, the printed value is an approximation. This can be revealed by creating a BigDecimal out of it (which provides arbitrary precision) and printing the BigDecimal:
System.out.println(new BigDecimal(123.55)); // prints 123.54999999999999715...
You can solve it by going via Math.round, but then you have to know how many decimals the source double is actually meant to carry. Alternatively, you can go through the string representation of the double, which in fact uses a fairly intricate algorithm to choose the digits it prints.
If you're working with currencies, I strongly suggest you either
Let prices etc. be represented by BigDecimal, which allows you to store numbers such as 0.1 exactly, or
Let an int store the number of cents (as opposed to having a double store the number of dollars).
Both ways are perfectly acceptable and used in practice.
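A minimal sketch of the second option, with purely illustrative names:

int priceInCents = 12355;                       // $123.55, stored exactly as a whole number of cents
int dollars = priceInCents / 100;               // 123
int cents = priceInCents % 100;                 // 55 -- no fractional arithmetic, so no rounding surprises
System.out.printf("%d.%02d%n", dollars, cents); // prints 123.55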
From The Floating-Point Guide:
internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
It looks like you're calculating money values. double is a completely inappropriate format for this. Use BigDecimal instead.
int f_part = (int) Math.round(((f_num - num) * 100));
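// Math.round takes 54.999... to the nearest whole number, 55, instead of letting the (int) cast truncate it to 54.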
This is one of the most often asked (and answered) questions. Floating-point arithmetic cannot produce exact results, because it's impossible to fit the infinity of real numbers into 64 bits. Use BigDecimal if you need arbitrary precision.
Floating point arithmetic is not as simple as it may seem and there can be precision issues.
See Why can't decimal numbers be represented exactly in binary? and What Every Computer Scientist Should Know About Floating-Point Arithmetic for details.
If you need absolutely sure precision, you might want to use BigDecimal.
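A minimal illustration of the difference:

import java.math.BigDecimal;

public class ExactSum {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);                                        // 0.30000000000000004
        System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // 0.3
    }
}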