Java: double vs float

In another Bruce Eckel exercise, the code I've written uses a method to change a value in another class. Here is my code:
class Letter {
    char c;
}

class Big {
    float b;
}

public class PassObject {
    static void f(Letter y) {
        y.c = 'z';
    } // end f()

    static void g(Big z) {
        z.b = 2.2;
    }

    public static void main(String[] args) {
        Big t = new Big();
        t.b = 5.6;
        System.out.println("1: t.b: " + t.b);
        g(t);
        System.out.println("2: t.b: " + t.b);
    } // end main
} // end class
It's throwing an error saying "possible loss of precision":

PassObject.java:13: possible loss of precision
found   : double
required: float
        z.b = 2.2;
PassObject.java:20: possible loss of precision
found   : double
required: float
        t.b = 5.6;
Can't doubles be floats as well?

Yes, but you have to specify that they are floats, otherwise they are treated as doubles:
z.b = 2.2f;
The 'f' at the end of the number makes it a float instead of a double.
Java won't automatically narrow a double to a float.

No, floats can be automatically upcast to doubles, but doubles can never be floats without explicit casting because doubles have the larger range.
float range is 1.40129846432481707e-45 to 3.40282346638528860e+38
double range is 4.94065645841246544e-324d to 1.79769313486231570e+308d
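
Those bounds match the constants exposed on Java's wrapper classes; a quick sketch to print them yourself:

System.out.println("float:  " + Float.MIN_VALUE + " to " + Float.MAX_VALUE);   // 1.4E-45 to 3.4028235E38
System.out.println("double: " + Double.MIN_VALUE + " to " + Double.MAX_VALUE); // 4.9E-324 to 1.7976931348623157E308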

By default, Java will treat a decimal (e.g. "4.3") as a double unless you otherwise specify a float by adding an f after the number (e.g. "4.3f").
You're having the same problem on both lines. First, the decimal literal is interpreted as a double by the compiler. It then attempts to assign it to b, which is of type float. Since a double is 64 bits and a float is only 32 bits (see Java's primitives documentation), Java gives you an error indicating that the float cannot fit inside the double. The solution is to add an f to your decimal literals.
If you were trying to do the opposite (i.e. assign a float to a double), that would be okay since you can fit a float's 32 bits within a double's 64.
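A minimal sketch of both fixes, plus the opposite (widening) direction, using illustrative local variables:

// float b = 2.2;        // won't compile: 2.2 is a double literal, and double -> float is narrowing
float b1 = 2.2f;         // fix 1: make the literal a float with the f suffix
float b2 = (float) 2.2;  // fix 2: cast explicitly, accepting any loss of precision
double d = 2.2f;         // fine: float -> double is a widening conversion, no cast needed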

Don't use float. There is almost never a good reason to use it and hasn't been for more than a decade. Just use double.

can't doubles be floats as well?
No. Each value or variable has exactly one type (double, float, int, long, etc...). The Java Language Specification states exactly what happens when you try to assign a value of one type to a variable of another type. Generally, assignments of a "smaller" value to a "larger" type are allowed and done implicitly, but assignments where information could be lost because the target type is too "small" to hold all values of the origin type are not allowed by the compiler, even if the concrete value does fit into the target type.
That's why the compiler complains that assigning a double value (which the literal implicitly is) to a float variable could lose information, and you have to placate it by either making the value a float, or by casting explicitly.
One area that often causes confusion is calculations, because these are implicitly "widened" to int for technical reasons. So if you multiply two shorts and try to assign the result to a short, the compiler will complain because the result of the calculation is an int.
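For example, a small sketch of the short-arithmetic case:

short a = 3;
short b = 4;
// short c = a * b;         // won't compile: a * b is promoted to int
short c = (short) (a * b);  // the explicit cast narrows the int result back to short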

The float range is smaller than the double range, so a float can always be represented as a double, but the reverse is not possible: if we take a value that is outside the float range, then during the conversion we would lose the exact data.

Related

Why do you need to place an 'f' at the end of the decimal number when declaring a float variable?

I have seen this question asked before. However, I am not satisfied with the answers that are given. Typical responses are that Java sees the number as a double, since that is the default in Java, and gives a mismatch error on compiling.
This behavior is apparently totally ignoring my use of a float declaration.
My question is: If I am declaring a variable type as float, why does Java think it is a double? I am specifically declaring:
float x = 3.14;
I have told Java that I want this variable to be a float type by using the float keyword. Why is this required?
float x = 3.14f;
If I want a double type variable, I use the double keyword specifically declaring the variable as a double.
double x = 3.14;
Why does Java not recognize this when declaring a variable and insists the literal needs to be cast from a double to a float by adding the 'f' at the end? Shouldn't Java recognize the use of the keyword float and properly assign the type to the literal?
Size matters
32-bit float → float x = 3.14 ; ← 64-bit double
The default for a fractional numeric literal in Java is the 64-bit primitive double type, as shown in the Answer by Mark Rotteveel. 3.14 is a double, as is 3.14d and 3.14D.
The primitive float type is smaller, 32-bit.
If Java were to infer that what you said is a double but what you meant is a float, Java would have to reduce the 64-bit value to a 32-bit value, discarding half the data, 32-bits. This could involve data loss for larger numbers. And this would definitely mean a loss of capacity.
So the designers of Java decided to be conservative in this matter. They decided to not be so presumptive as to throw away half your data, nor to reduce your specified capacity by half. They decided to take you at your word — if you said 64-bit double, they give you a double.
If you then go on to say you want to cram that 64-bit number into a 32-bit container, they flag the error, as 64-bits does not fit into 32-bits. Square peg, round hole.
round hole, 32-bit float → float x = 3.14 ; ← 64-bit double, square peg
Java does support the converse, a widening conversion. You can put a 32-bit float into a 64-bit double. And you can put a 32-bit int into a 64-bit long. These operations are safe, because every float value can be represented exactly as a double, and every int as a long. So Java is willing and able to support these operations.
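
A brief sketch of those widening conversions, which compile without any suffix or cast:

float f = 1.5f;
double d = f;    // widening: every float value is exactly representable as a double
int i = 123;
long l = i;      // widening: every int value fits in a long
// float g = d;  // the reverse direction is narrowing and needs an explicit cast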
Be explicit
I suggest you make a habit of being explicit. Do not rely on the default. If you mean a 32-bit float, append the f or F to your literal. If you mean a 64-bit double, append the d or D to your literal.
Doing so not only quiets the compiler, it makes your intentions clear to the humans reading your code.
float pi = 3.14F ;
double pi = 3.14d ;
BigDecimal
And, by the way, if you care about accuracy rather than performance, you’ll be using BigDecimal rather than float/double. In which case this issue is moot.
BigDecimal pi = new BigDecimal( "3.14" ) ;
The reason is that the Java Language Specification specifies that a floating-point literal without the f or F suffix is a double. From Floating-Point Literals in the Java 15 Language Specification:
A floating-point literal is of type float if it is suffixed with an ASCII letter F or f; otherwise its type is double and it can optionally be suffixed with an ASCII letter D or d.
Java doesn't try to infer the type of the literal based on the type of the variable you're assigning to. A double-literal is simply always a double. In other words, semantically, it behaves the same as if your code did:
double temp = 3.14;
float x = temp;
Which will also fail to compile, because assigning a double to a float can result in loss of information, if the magnitude of the value is too large to fit in a float. This is known as narrowing primitive conversion, and requires an explicit cast to tell the compiler you're aware of the risk and you accept it.
In some cases, it may seem as if Java will perform inference of type from the variable (e.g. when assigning an int or long literal to a double or float variable), but that is because the entire range of int and long can be stored in a float or double (possibly with loss of precision). This is known as widening primitive conversion.
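A small sketch of that asymmetry (assuming Java's standard assignment-conversion rules): an int literal is accepted for a float variable even though precision can be lost, while a double literal is rejected because that direction is narrowing.

float a = 42;          // compiles: int -> float is a widening conversion
float b = 16777217;    // also compiles, but the nearest float is 16777216.0 -- precision is lost, not range
// float c = 42.0;     // does not compile: double -> float is a narrowing conversion
System.out.println(b); // prints 1.6777216E7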

Why can't I implicitly convert a double to an int?

You can implicitly convert an int to a double: double x = 5;
You can explicitly convert an int to a double: double x = (double) 5;
You can explicitly convert a double to an int: int x = (int) 5.0;
Why can't you implicitly convert a double to an int?: int x = 5.0;
The range of double is wider than int. That's why you need explicit cast. Because of the same reason you can't implicitly cast from long to int:
long l = 234;
int x = l; // error
Implicit casting is only available when you're guaranteed that a loss of data will not occur as a result of a conversion. So you could cast an integer to a double, or an integer to a long data type. Casting from a double to an integer however, runs the risk of you losing data.
Console.WriteLine ((int)5.5);
// Output > 5
This is why Microsoft didn't write an implicit cast for this specific conversion. The .NET framework instead forces you to use an explicit cast to ensure that you're making an informed decision by explicitly casting from one type to another.
The implicit keyword is used to declare an implicit user-defined type conversion operator. Use it to enable implicit conversions between a user-defined type and another type, if the conversion is guaranteed not to cause a loss of data.
Source > MSDN
C# follows the lead of Java's type system, which ranks types int->long->float->double, and specifies that any type which appears to the left of another may be cast to it. I personally think ranking types in this fashion was a bad idea, since it means that code like long l = getSomeValue(); float f=l; double d=f; will compile cleanly without a squawk despite a severe loss of precision compared with storing l directly to d; it has some other unfortunate consequences as well.
I suspect Gosling ranked the types as he did in order to ensure that passing a float and a double to a method which is overloaded for (float,float) and (double,double) would use the latter. Unfortunately, his ranking has the side-effect that passing an int or long to a method which is only overloaded for float and double will cause the method to use the float overload rather than the double overload.
C# and .NET follows Java's lead in preferring a long-to-float conversion over a long-to-double; the one thing which "saves" them from some of the consequent method-overloading horrors is that nearly all the .NET Framework methods with overloads for float and double also have an overload for Decimal, and no implicit conversions exist between Decimal and the other floating-point types. As a consequence, attempting to pass a long to a method which has overloads for float, double, and Decimal but not long will result in a compilation error rather than a conversion to float.
Incidentally, even if the people choosing which implicit conversions to allow had given the issue more thought, it's unlikely that implicit conversions from floating-point types (e.g. double) to discrete types (e.g. int) would have been permitted. Among other things, the best integer representation for the result of a computation that yielded 2.9999999999994 would in some cases be 2, and in other cases it would be 3. In cases where a conversion might sometimes need to be done one way and sometimes another, it's generally good for the language to require that the programmer indicate the actual intention.
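In Java, for instance, the two possible intentions are spelled out explicitly rather than chosen silently by the compiler:

double almostThree = 2.9999999999994;
System.out.println((int) almostThree);        // 2 -- a cast truncates toward zero
System.out.println(Math.round(almostThree));  // 3 -- rounds to the nearest whole number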
Imagine that you have this piece of code:
double x = pi/6;
double y = sin(x);
y would then be 0.5.
Now suppose that the language implicitly converted the result to an integer:
int x = pi/6;
double y = sin(x);
In this case x would be truncated to 0, and then the sin(0) would be taken, i.e. y = 0.
This is the primary reason why implicit conversion from double to int is not implemented in a lot of strong-typed languages.
The conversion of int to double actually increases the amount of information in the type, and can thus always be done safely.
When you convert from double to int implicitly, you are trying to store a value of a larger type in a variable of a smaller type (downcasting):
double d = 4.5;
int x = d; // error or warning
This can be dangerous, because you may lose information, such as the fractional part of the double, when storing it in an integer.
That is not the case when storing an int value in a double variable (upcasting):
int x = 2;
double d = x; // correct
So the compiler does not allow an implicit conversion from double to int (or storing a double value in an int), because someone might do it unknowingly and expect no loss of data. But if you cast explicitly, you are telling the compiler that you understand the danger and will manage it.

Why is a float to double cast imprecise, but double to float not?

I know that floating point variables lose precision when cast. But what I don't understand is why a cast from a smaller primitive to a bigger one is imprecise, while the reverse is not. I would understand if it happened from double to float, but it's the other way around. Why is this so?
See the results from these two tests:
@Test
public void castTwoPrimitiveDecimalsUnpreciseToPrecise()
{
    float var1 = 6.2f;
    double var2 = var1;
    assertThat(var2, is(6.2d)); // false, because it's 6.199999809265137
}

@Test
public void castTwoPrimitiviesDecimalsPreciseToUnpresice()
{
    double var1 = 7.6d;
    float var2 = (float) var1;
    assertThat(var2, is(7.6f)); // true
}
The precision issue is in the initialization of your variables, not in the conversion.
In the first case, you start with a number that is only the float approximation to decimal 6.2. The conversion to double gets a double with exactly the same value as that float approximation. You then compare it against the much closer double approximation, so of course it does not match.
In the second case, you start with the double approximation to decimal 7.6. You then convert it to float. That will round the double to a float. Rounding twice, on the original conversion to double and on the cast to float, could conceivably produce a different answer from directly converting a number to float, but usually you will get the float approximation. You then compare it to the float approximation, so it is not surprising you get a match.
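One way to see the two approximations involved is to print the exact values they hold; a small sketch using the BigDecimal(double) constructor (java.math.BigDecimal), which exposes the exact binary value:

System.out.println((double) 6.2f == 6.2);          // false: the widened float is not the double closest to 6.2
System.out.println((float) 7.6 == 7.6f);           // true: rounding the double down lands on the float approximation
System.out.println(new BigDecimal((double) 6.2f)); // 6.19999980926513671875 (the exact value stored in the float)
System.out.println(new BigDecimal(6.2));           // 6.20000000000000017763568394002504646778106689453125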
Suppose someone measures a peg with a ruler and determines that it's 3/8" in diameter. Another person measures a hole with calipers and finds that it's 9.5267mm. Will the peg fit in the hole?
If one converts the cruder measurement to the higher-precision form, one finds that the hole appears to be 0.0017mm (i.e. about 1/15000") larger than the peg. If one converts them both to the lower-precision form, one will find that the values are indistinguishable.
If the peg that was measured as 3/8" is in fact precisely 9.525mm [the exact metric equivalent], such a measurement may be converted to a higher-resolution form without conversion loss. If, however, it simply means "something that's closer to the 3/8" mark than to 23/64" or 25/64", such conversion will cause a value which would generally be expected to be an approximation to be interpreted as being more precise than it really is.
Some people would regard the fact that the peg and hole are regarded as different sizes as being a good thing. Personally, I would think it would be better to regard them as being indistinguishable. With the measurements as described, there's no particular reason to believe with any certainty that the peg is bigger than the hole; it probably isn't exactly equal, but calling them indistinguishable is more accurate than saying the peg is bigger.
Personally, I detest language rules that require values to be written as or cast to [single-precision] float merely to shut up the compiler. If one wants to set a float equal to the value closest to 4.7, one should be able to simply say:
float f=4.7;
and achieve that effect. A person who sees:
static final float coef = 4.7f;
... some time later
float f = coef;
double d = coef;
will have no way of knowing whether the goal was to set d to the double value equal to the float value closest to 4.7, or whether the goal was to set d to the double value closest to 4.7. Unfortunately, Java provides no means by which one can declare a constant which can be silently assigned to float without an explicit cast, but which when assigned to double will use the precision available in that type.
Incidentally, if my goal was to set some double variable v equal the value of the float closest to the specified numeric value of coef, rather than the value of the double closest to that numeric value, I'd probably code it very explicitly:
v = (double)(float)coef;
That would make clear and unambiguous the fact that the programmer was expecting and intending that a double would be assigned a value which had been previously rounded to float precision. Absent the (double) typecast, one would have to consider the possibilities that the programmer might have expected v to be a float (e.g. because when the code was written, it was a float, but since then it was changed to double). Absent the (float) typecast, one would have to consider the possibility that the programmer was expecting coef to be a double (e.g. because it had been, but someone changed it to a float to allow it to be assigned to variables of float type without the compiler squawking).

Why is the letter f used at the end of a float number?

I just wanted some information about this.
float n = 2.99944323200023f;
What does the f at the end of the literal do? What is it called? Why is it used?
The f indicates it's a floating point literal, not a double literal (which it would implicitly be otherwise.) It hasn't got a particular technical name that I know of - I tend to call it the "letter suffix" if I need to refer to it specifically, though that's somewhat arbitrary!
For instance:
float f = 3.14f; //Compiles
float f = 3.14; //Doesn't compile, because you're trying to put a double literal in a float without a cast.
You could of course do:
float f = (float)3.14;
...which accomplishes near enough the same thing, but the F is a neater, more concise way of showing it.
Why was double chosen as the default rather than float? Well, these days the memory requirements of a double over a float aren't an issue in 99% of cases, and the extra accuracy they provide is beneficial in a lot of cases - so you could argue that's the sensible default.
Note that you can explicitly show a decimal literal as a double by putting a d at the end also:
double d = 3.14d;
...but because it's a double value anyway, this has no effect. Some people might argue for it, advocating that it makes clearer what literal type you mean, but personally I think it just clutters code (unless perhaps you have a lot of float literals hanging around and you want to emphasise that this literal is indeed meant to be a double, and the omission of the f isn't just a bug).
The default type Java uses for a decimal literal is double. So even if you declare a variable as float, the compiler would actually have to assign a double value to a float variable, which is not allowed. To tell the compiler to treat the value as a float, the 'f' is used.
In Java, when you type a decimal number such as 3.6, it's interpreted as a double. double is a 64-bit precision IEEE 754 floating point, while float is a 32-bit precision IEEE 754 floating point. As a float is less precise than a double, the conversion cannot be performed implicitly.
If you want to create a float, you should end your number with f (i.e.: 3.6f).
For more explanation, see the primitive data types definition of the Java tutorial.
It's to distinguish single-precision floating point numbers from double-precision ones. The latter needs no suffix.
You need to put the 'f' at the end; otherwise Java will assume it's a double.
From the Oracle Java Tutorial, section Primitive Data Types under Floating-Point Literals
A floating-point literal is of type float if it ends with the letter F or f; otherwise its type is double and it can optionally end with the letter D or d.
It means that it's a single precision floating point literal rather than double precision. Otherwise, you'd have to write float n = (float)2.99944323200023; to cast the double to single.
When you write 1.0, it's ambiguous to a reader whether you intend the literal to be a float or a double. By writing 1.0f, you're telling Java that you intend the literal to be a float, while using 1.0d specifies that it should be a double (which is also the default if you do not specify it explicitly).
If f is not specified at the end, the value is considered to be a double.
And a double takes more bytes in memory than a float.
In C, the conversion from a double value to a float variable (an assignment) is done implicitly by the compiler, without needing the 'F':
float fraction1 = 1337;
float fraction2 = 1337.0;
float fraction3 = 1337.0F;
printf("%f, %f, %f", fraction1, fraction2, fraction3);
output (GNU C): 1337.000000, 1337.000000, 1337.000000

Why is f placed after float values?

I don't know why f or F is placed after float values in Java and other languages. For instance:
float fVariable = 12.3f;
Does it have any purpose other than indicating that this is a float value?
By default, 12.3 is a double literal. To tell the compiler to treat it as a float explicitly, use f or F.
See tutorial page on the primitive types.
Seeing as there are only so many ways to represent a number in your program, the designers of Java had to cherry pick and assign each form to the most common use case. For those forms selected as default, the suffix that denotes the exact type is optional.
For integer literals (int, long), the default is int, for obvious reasons.
For floating-point literals (float, double), the default is double, because using double potentially allows safer arithmetic on the stored values.
So, when you type 12 in your program, that's an int literal, as opposed to 12L, which is a long.
And when you type 12.3, that's a double literal, as opposed to 12.3F, which is a float.
So where is this relevant? Primarily in handling downcasts, or narrowing conversions. Whenever you downcast a long to an int, or a double to a float, the possibility for data loss exists. So, the compiler will force you to indicate that you really want to perform the narrowing conversion, by signaling a compile error for something like this:
float f = 12.3;
Because 12.3 represents a double, you have to explicitly cast it to a float (basically signing off on the narrowing conversion). Otherwise, you can indicate that the number is really a float by using the correct suffix:
float f = 12.3f;
So, to summarize, having to specify a suffix for longs and floats is a compromise the language designers chose in order to balance the need to specify what exactly a number is with the flexibility of converting numbers from one storage type to another.
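A short sketch of how the same rule plays out for the integer and floating-point types (values chosen only for illustration):

int i = 2000000000;             // fits in an int, no suffix needed
// long tooBig = 3000000000;    // won't compile: the literal itself is an out-of-range int
long big = 3000000000L;         // the L suffix makes it a long literal

double d = 12.3;                // double is the default for decimal literals
// float f = 12.3;              // won't compile: double -> float is a narrowing conversion
float f = 12.3f;                // the f suffix makes it a float literal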
float and double can only provide approximate representations for some values, e.g. 12.3 or 0.1.
The difference is that float is not as accurate (it has less precision, because it is smaller).
e.g.
System.out.println("0.1f == 0.1 is " + (0.1f == 0.1));
System.out.println("0.1f is actually " + new BigDecimal(0.1f));
System.out.println("0.1 is actually " + new BigDecimal(0.1));
prints
0.1f == 0.1 is false
0.1f is actually 0.100000001490116119384765625
0.1 is actually 0.1000000000000000055511151231257827021181583404541015625
So 0.1 is the closest representation in double and 0.1f is the closest representation in float
float fVariable = 12.3f; is fine, but when you use a bare float value (without the suffix) in an expression, you need to tell the compiler that the value is a float, hence the suffix "f" after the value. For example:
float fl = 13f / 5f;
Here 13f and 5f are float values, so the division is done in floating point (2.6f) rather than as the integer division 13 / 5 = 2.
In Java we have many different primitive types, so to keep things general the language has some defaults. If we write an input such as 16.02, it is automatically taken as a double. So if you want to specify it as a float, you mention the 'f' after the number, or you can simply use:
float f = (float) 16.02;
or
float f = 16.02f;
In the same way, we have to write 16L if we want the number to be stored as a long type; otherwise it will automatically get the default type, i.e. int.
During compilation, all floating point numbers (numbers with decimal point) default to double.
Therefore, if you don't want your number to be a double and just want a float, you have to explicitly tell the compiler by adding an f or F at the end of the literal constant.
If you do not use f, it will be interpreted as a double, which is the default in Java.
You can also write it like this:
float f = (float) 32.5956;
or
float f = 32.5956f;
Float is single-precision 32-bit IEEE 754 floating point and double is double-precision 64-bit IEEE 754 floating point. When you use a value with a decimal point and you don't specify it as, say, 0.23f (specifically a float), Java identifies it as a double.
For decimal values, the double data type is generally the default choice taken by Java.
[10 years after the original post]
why f or F is placed after float values
With the f, the initialization occurs with the closest float value. Without the f, it might differ.
Many code values like 12.3 are not exactly representable as a double or a float. Instead a nearby value is used.
One rounding
Code 12.3 converted to the closest float: 12.30000019073486328125
float fVariable1 = 12.3f;
Two roundings
Code 12.3 converted to the closest double 12.300000000000000710542735760100185871124267578125 and then to the nearest float: 12.30000019073486328125.
float fVariable2 = (float) 12.3;
Sometimes that 2-step approach makes for a different value due to double rounding.
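A hedged example of such a value (assuming the usual correctly rounded, round-half-to-even literal conversion): the decimal below lies just above the midpoint between 1.0f and the next float up, so the one-step and two-step paths disagree.

float oneStep = 1.0000000596046447753906250000001f;         // rounds directly up to the next float, 1.0000001f
float twoStep = (float) 1.0000000596046447753906250000001;  // first rounds to the double exactly 1 + 2^-24,
                                                            // a tie that then rounds down (ties-to-even) to 1.0f
System.out.println(oneStep);            // 1.0000001
System.out.println(twoStep);            // 1.0
System.out.println(oneStep == twoStep); // false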
