Why can't I implicitly convert a double to an int? - java

You can implicitly convert an int to a double: double x = 5;
You can explicitly convert an int to a double: double x = (double) 5;
You can explicitly convert a double to an int: int x = (int) 5.0;
Why can't you implicitly convert a double to an int?: int x = 5.0;

The range of double is wider than that of int; that's why you need an explicit cast. For the same reason, you can't implicitly convert a long to an int:
long l = 234;
int x = l; // error
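With an explicit cast the narrowing conversion compiles (a small illustration of my own):
long l = 234;
int x = (int) l; // OK: the explicit cast acknowledges the possible loss of data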

Implicit conversion is only available when you're guaranteed that no data will be lost as a result of the conversion. So you can convert an int to a double, or an int to a long. Converting a double to an int, however, runs the risk of losing data. (This answer happens to use C#, but Java makes exactly the same choice.)
Console.WriteLine((int) 5.5);
// Output: 5
This is why Microsoft didn't write an implicit cast for this specific conversion. The .NET Framework instead forces you to use an explicit cast, to ensure that you're making an informed decision when converting from one type to another.
The implicit keyword is used to declare an implicit user-defined type conversion operator. Use it to enable implicit conversions between a user-defined type and another type, if the conversion is guaranteed not to cause a loss of data.
Source: MSDN

C# follows the lead of Java's type system, which ranks types int->long->float->double, and specifies that any type which appears to the left of another may be cast to it. I personally think ranking types in this fashion was a bad idea, since it means that code like long l = getSomeValue(); float f=l; double d=f; will compile cleanly without a squawk despite a severe loss of precision compared with storing l directly to d; it has some other unfortunate consequences as well.
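For instance (an illustration of mine, with values picked to make the loss visible, not taken from the answer):
long l = (1L << 40) + 1;  // 1099511627777
float f = l;              // implicit long -> float: rounds to 2^40 = 1099511627776
double viaFloat = f;      // 1.099511627776E12: the +1 was already lost in the float step
double direct = l;        // 1.099511627777E12: long -> double keeps this value exactly
System.out.println(viaFloat);
System.out.println(direct);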
I suspect Gosling ranked the types as he did in order to ensure that passing a float and a double to a method which is overloaded for (float,float) and (double,double) would use the latter. Unfortunately, that ranking has the side effect that passing an int or long to a method which is overloaded only for float and double will cause the float overload to be chosen rather than the double overload.
C# and .NET follow Java's lead in preferring a long-to-float conversion over a long-to-double; the one thing which "saves" them from some of the consequent method-overloading horrors is that nearly all the .NET Framework methods with overloads for float and double also have an overload for Decimal, and no implicit conversions exist between Decimal and the other floating-point types. As a consequence, attempting to pass a long to a method which has overloads for float, double, and Decimal but not long results in a compilation error rather than a silent conversion to float.
Incidentally, even if the people choosing which implicit conversions to allow had given the issue more thought, it's unlikely that implicit conversions from floating-point types (e.g. double) to discrete types (e.g. int) would have been permitted. Among other things, the best integer representation for the result of a computation that yielded 2.9999999999994 would in some cases be 2, and in other cases it would be 3. In cases where a conversion might sometimes need to be done one way and sometimes another, it's generally good for the language to require that the programmer indicate the actual intention.
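For example (a small illustration of my own), a cast and a rounding call give different "right" answers for the same value:
double nearlyThree = 2.9999999999994;        // e.g. the result of an inexact computation
System.out.println((int) nearlyThree);       // 2: a cast truncates toward zero
System.out.println(Math.round(nearlyThree)); // 3: rounding to nearest gives the other answer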

Imagine that you have this piece of code:
double x = Math.PI / 6;
double y = Math.sin(x);
y would then be approximately 0.5.
Now suppose that x is declared as an int instead:
int x = Math.PI / 6;   // does not compile as written, but imagine the conversion were implicit
double y = Math.sin(x);
In this case x would be truncated to 0, and then sin(0) would be taken, i.e. y = 0.
This is the primary reason why implicit conversion from double to int is not implemented in a lot of strongly typed languages.
The conversion from int to double, on the other hand, loses no information (every int is exactly representable as a double), and can thus always be done safely.

When you convert from double to int implicitly, you are trying to store a value of a wider type (64-bit double) in a narrower one (32-bit int), which is a narrowing conversion:
double d = 4.5;
int x = d; // compile error
This can be dangerous, because you lose information: the fractional part of the double is discarded when it is stored in an int.
That is not the case when storing an int value in a double variable (a widening conversion):
int x = 2;
double d = x; // correct
So the compiler doesn't allow you to implicitly convert a double to an int (or store a double value in an int), because someone might do it unknowingly and expect no loss of data. If you cast explicitly, you are telling the compiler: I know the risk, go ahead, I will manage.
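For illustration (example values mine), the explicit cast compiles and simply drops the fractional part:
double d = 4.5;
int x = (int) d;       // compiles: you've told the compiler you accept the loss
System.out.println(x); // 4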

Related

Exact conversion from float to int

I want to convert float value to int value or throw an exception if this conversion is not exact.
I've found the following suggestion: use Math.round to convert and then use == to check whether those values are equal. If they're equal, then conversion is exact, otherwise it is not.
But I've found an example which does not work. Here's code demonstrating this example:
String s = "2147483648";
float f = Float.parseFloat(s);
System.out.printf("f=%f\n", f);
int i = Math.round(f);
System.out.printf("i=%d\n", i);
System.out.printf("(f == i)=%s\n", (f == i));
It outputs:
f=2147483648.000000
i=2147483647
(f == i)=true
I understand that 2147483648 does not fit into the int range, but I'm surprised that == returns true for those values. Is there a better way to compare a float and an int? I guess it's possible to convert both values to strings, but that would be extremely slow for such a primitive operation.
floats are rather imprecise. They are also mostly pointless unless you're running on rather old hardware at this point, or interacting specifically with systems and/or protocols that work in floats or have 'use a float' hardcoded in their spec. That may be true for you, but if it isn't, stop using float and start using double - unless you have a fairly large float[], there is essentially no memory or performance difference; floats are just less accurate.
Your algorithm cannot fail when using int vs double - all ints are perfectly representable as double.
Let's first explain your code snippet
The underlying issue here is the notion of 'silent casting' and the intentional liberties Java took there.
In computer systems in general, you can only compare like with like. It's easy to put in exact terms of bits and machine code what it means to determine whether a == b is true or false if a and b are of the exact same type. It is not at all clear when a and b are different things. Same thing applies to pretty much any operator; a + b, if both are e.g. an int, is a clear and easily understood operation. But if a is a char and b is, say, a double, that's not clear at all.
Hence, in Java, binary operators that involve two different types are, strictly speaking, not defined: there is no bytecode to directly compare a float and a double, for example, or to add a String to an int.
However, there is syntax sugar: When you write a == b where a and b are different types, and java determines that one of two types is 'a subset' of the other, then java will simply silently convert the 'smaller' type to the 'larger' type, so that the operation can then succeed. For example:
int x = 5;
long y = 5;
System.out.println(x == y);
This works - because java realizes that converting an int to a long value is not ever going to fail, so it doesn't bother you with explicitly specifying that you intended the code to do this. In JLS terms, this is called a widening conversion. In contrast, any attempt to convert a 'larger' type to a 'smaller' type isn't legal, you have to explicitly cast:
long x = 5;
int y = x; // does not compile
int y = (int) x; // but this does.
The point is simply this: when you write the reverse of the above (int x = 5; long y = x;), the code is identical; it's just that the compiler silently injects the (long) cast for you, on the basis that no loss will occur. The same thing happens here:
int x = 5;
long y = 10;
long z = x + y;
That compiles because javac adds some syntax sugar for you, specifically, that is compiled as if it says: long z = ((long) x) + y;. The 'type' of the expression x + y there is long.
Here's the key trick: Java also considers converting an int or a long to a float, and a long to a double, to be widening conversions.
As in, javac will just assume it can do that safely without any loss, and therefore will not force the programmer to acknowledge it with an explicit cast. However, int->float, long->float and long->double are not actually entirely safe.
floats can represent every integral value between -2^24 and +2^24, and doubles can represent every integral value between -2^53 and +2^53 (source). But int can represent every integral value between -2^31 and +2^31-1, and long every integral value between -2^63 and +2^63-1. That means that at the edges (very large negative/positive numbers), integral values exist that are representable in an int but not in a float, or in a long but not in a double (all ints are representable in a double, fortunately; int -> double conversion is entirely safe). But Java doesn't 'acknowledge' this, which means these silent widening conversions can nevertheless toss out data (introduce rounding) silently.
That is what happens here: (f == i) is syntax sugared into (f == ((float) i)) and the conversion from int to float introduces the rounding.
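One practical consequence (a small check of my own, not from the answer): force the comparison into double and it no longer lies, because both int and float convert to double exactly.
float f = Float.parseFloat("2147483648");
int i = Math.round(f);               // clamps to Integer.MAX_VALUE = 2147483647
System.out.println(f == i);          // true: i is silently widened (and rounded) to float
System.out.println((double) f == i); // false: both operands are compared as double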
The solution
Mostly, when using doubles and floats and nevertheless wishing for exact numbers, you've already messed up. These types fundamentally aren't exact, and that exactness cannot be bolted on afterwards by trying to account for error bands, because the errors introduced by the rounding behaviour of float and double cannot be tracked (not easily, at any rate). You should not be using float/double in that case. Either find an atomic unit and represent amounts in terms of int/long, or use BigDecimal. (Example: to write bookkeeping software, do not store monetary amounts as a double. Store them as 'cents' (or satoshis or yen or pennies or whatever the atomic unit is in that currency) in a long, or use BigDecimal if you really know what you are doing.)
I want an answer anyway
If you're absolutely positive that using float (or even double) here is acceptable and you still want exactness, we have a few solutions.
Option 1 is to employ the power of BigDecimal:
new BigDecimal(someDouble).intValueExact()
This works and is 100% reliable (float-to-double conversion is exact, so nothing is lost on the way in), and it throws an ArithmeticException when the value has a fractional part or does not fit in an int. It's also very slow.
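Applied to the value from the question (a small sketch of mine), the exception fires because 2147483648 is outside the int range:
float f = Float.parseFloat("2147483648");
try {
    int i = new java.math.BigDecimal(f).intValueExact(); // the float widens to double exactly first
    System.out.printf("i=%d%n", i);
} catch (ArithmeticException e) {
    System.out.println("no exact int value for " + f);
}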
An alternative is to employ our knowledge of how the IEEE floating point standard works.
A really simple answer is to run your algorithm as you wrote it, but add an additional check: if the int you end up with is below -2^24 or above +2^24, it probably isn't correct. However, there are still a smattering of numbers beyond that range that are perfectly representable in both float and int; it's just no longer every number at that point. If you want an algorithm that will accept those exact numbers as well, it gets much more complicated. My advice is not to delve into that cesspool: if you have a process where you end up with a float anywhere near such extremes, and you want to turn it into an int only if that is possible without loss, you've arrived at a crazy question, and you should rewire the parts that got you there instead!
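A rough sketch of that simple check (the helper name and the conservative threshold are mine, not the answer's; it deliberately rejects some large values that would in fact convert exactly):
static int floatToIntExactish(float f) {
    int i = Math.round(f);
    // reject if rounding changed the value, or if we are outside float's exact-integer range
    if ((float) i != f || Math.abs(i) > (1 << 24)) {
        throw new ArithmeticException("not exactly convertible to int: " + f);
    }
    return i;
}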
If you really do need to handle those edge values, instead of trying to number-crunch the float yourself, I suggest the BigDecimal(...).intValueExact() trick above.

Why do you need to place an 'f' at the end of the decimal number when declaring a float variable?

I have seen this question asked before. However, I am not satisfied with the answers given. Typical responses are that Java sees the number as a double, since that is the default in Java, and reports a type-mismatch error at compile time.
This behavior apparently ignores my use of a float declaration entirely.
My question is: If I am declaring a variable type as float, why does Java think it is a double? I am specifically declaring:
float x = 3.14;
I have told Java that I want this variable to be a float type by using the float keyword. Why is this required?
float x = 3.14f;
If I want a double type variable, I use the double keyword specifically declaring the variable as a double.
double x = 3.14;
Why does Java not recognize this when declaring a variable and insists the literal needs to be cast from a double to a float by adding the 'f' at the end? Shouldn't Java recognize the use of the keyword float and properly assign the type to the literal?
Size matters
32-bit float → float x = 3.14 ; ← 64-bit double
The default for a fractional numeric literal in Java is the 64-bit primitive double type, as shown in the Answer by Mark Rotteveel. 3.14 is a double, as is 3.14d and 3.14D.
The primitive float type is smaller, 32-bit.
If Java were to infer that what you said is a double but what you meant is a float, Java would have to reduce the 64-bit value to a 32-bit value, discarding half the data, 32-bits. This could involve data loss for larger numbers. And this would definitely mean a loss of capacity.
So the designers of Java decided to be conservative in this matter. They decided to not be so presumptive as to throw away half your data, nor to reduce your specified capacity by half. They decided to take you at your word — if you said 64-bit double, they give you a double.
If you then go on to say you want to cram that 64-bit number into a 32-bit container, they flag the error, as 64-bits does not fit into 32-bits. Square peg, round hole.
round hole, 32-bit float → float x = 3.14 ; ← 64-bit double, square peg
Java does support the converse, a widening conversion. You can put a 32-bit float into a 64-bit double. And you can put a 32-bit int into a 64-bit long. These operations are safe, because every value of the smaller type can be represented in the larger one without loss. So Java is willing and able to support these operations.
Be explicit
I suggest you make a habit of being explicit. Do not rely on the default. If you mean a 32-bit float, append the f or F to your literal. If you mean a 64-bit double, append the d or D to your literal.
Doing so not only quiets the compiler, it makes your intentions clear to the humans reading your code.
float pi = 3.14F ;
double pi = 3.14d ;
BigDecimal
And, by the way, if you care about accuracy rather than performance, you’ll be using BigDecimal rather than float/double. In which case this issue is moot.
BigDecimal pi = new BigDecimal( "3.14" ) ;
The reason is that the Java Language Specification specifies that a floating point literal without the f or F suffix, is a double. From Floating-Point Literals in the Java 15 Language Specification:
A floating-point literal is of type float if it is suffixed with an ASCII letter F or f; otherwise its type is double and it can optionally be suffixed with an ASCII letter D or d.
Java doesn't try to infer the type of the literal based on the type of the variable you're assigning to. A double-literal is simply always a double. In other words, semantically, it behaves the same as if your code did:
double temp = 3.14;
float x = temp;
Which will also fail to compile, because assigning a double to a float can result in loss of information, if the magnitude of the value is too large to fit in a float. This is known as narrowing primitive conversion, and requires an explicit cast to tell the compiler you're aware of the risk and you accept it.
In some cases, it may seem as if Java will perform inference of type from the variable (e.g. when assigning an int or long literal to a double or float variable), but that is because the entire range of int and long can be stored in a float or double (possibly with loss of precision). This is known as widening primitive conversion.
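For example (my own illustration of the two conversions, not from the answer):
float a = 3;            // OK: the int literal 3 widens to float (value 3.0f)
double b = 3;           // OK: int widens to double
float c = 3.14;         // error: 3.14 is a double literal; narrowing needs a cast
float d = (float) 3.14; // OK: explicit narrowing
float e = 3.14f;        // OK: a float literal in the first place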

JAVA SE7 questions about explicit casting

I'm preparing Java OCA exam.
And here is a questions about casting.
I understand that for primitive data types, if we try to assign an int to a long, we should be fine, since the conversion can be done automatically.
And if we try to assign a long to an int, it causes a compile error, right?
So, the first question:
when explicit casting is needed and I didn't do it in my code, the code won't compile. Is there any case where the code will still compile?
And the second question:
The book I'm reading actually has a switch-case structure like:
int num = 10;
switch (num) {
    case 10/3: // do something...
}
and the author says that, in this case, the decimal result will be chopped down to 3...
But there is no explicit casting here; I think this should be a compile error...
As for the first question: if explicit casting is needed, the code won't compile. That's what "needed" means.
As for the second question, try this:
double x = 10/3;
x will be equal to 3.0 too. It's not a cast; it's the standard behaviour of the / operator: both operands are ints, so integer division is performed before the assignment.
First question - generally, any conversion that causes a loss of data or accuracy must be made explicit.
For example:
int y = 3;
double x = y; // ok
This results in the value 3.0 being stored in x and is perfectly legal. However:
double x = 3.0;
int y = x;       // compile error
int y = (int) x; // fine: explicit cast required
Second question - think of operators as functions: the division "function" takes two operands, and its behaviour depends on their types.
So 10/3 equals 3, an int, because it's a division of two ints, whereas
10.0/3 equals 3.333... because the compiler cannot convert the double to an int (without explicitly being told to do so); instead it converts the int 3 to a double, performs the calculation on doubles, and returns a double.
So the result type of an operator depends on its operand types, and the automatic conversion always goes upward (towards the wider, more precise type) where possible.
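To see this in code (example of mine):
System.out.println(10 / 3);          // 3: int / int is integer division, the fraction is dropped
System.out.println(10.0 / 3);        // 3.3333333333333335: the int 3 is promoted to double
System.out.println((double) 10 / 3); // 3.3333333333333335: same, via an explicit cast
double x = 10 / 3;                   // the division happens in int first, so x == 3.0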

float/int implicit conversion

I'm doing multiplication and division of floats and ints and I forget the implicit conversion rules (and the words in the question seem too vague to google more quickly than asking here).
If I have two ints, but I want to do floating-point division, do I need only to cast one or both of the operands? How about for multiplication — if I multiply a float and an int, is the answer a float?
You can't assign the result of dividing a float by an int (or an int by a float) to an int without a cast.
So the answers are:
If I have two ints, but I want to do floating point division…?
One cast is enough.
If I multiply a float and an int, is the answer a float?
Yes it is.
float f = 1000f;
int i = 3;
// each line below considered on its own, with f = 1000f and i = 3:
f = i;     // OK (f becomes 3.0f)
i = f;     // error
f = i / f; // OK, 0.003
f = f / i; // OK, 333.3333...
i = i / f; // error
i = f / i; // error
To demonstrate:
int i1 = 5;
float f = 0.5f;
int i2 = 2;
System.out.println(i1 * f);
System.out.println(i1 / i2);
System.out.println(((float) i1) / i2);
Result:
2.5
2
2.5
In order to perform any sort of floating-point arithmetic with integers, you need to convert (read: cast) at least one of the operands to a float type.
If at least one of the operands to a binary operator is of floating-point type, then the operation is a floating-point operation, even if the other is integral.
(Source: Java Language Specification, §4.2.4)
if I multiply a float and an int, is the answer a float?
System.out.println(((Object)(1f*1)).getClass());//class java.lang.Float
(If you use DrJava, you can simply type ((Object)(1f*1)).getClass() into the interactions panel. There's a plugin for DrJava for Eclipse too.)
The simple answer is that Java will perform widening conversions automatically but not narrowing conversions. So for example int->float is automatic but float->int requires a cast.
Java ranks primitive types in the order int < long < float < double. If an operator is used with operands of different primitive types, the type which appears earlier in that list is implicitly converted to the later one, without any compiler diagnostic, even in cases where this causes a loss of precision. For example, 16777217 - 1.0f yields 16777215.0f (one less than the correct value). In many cases, operations between a float and an int outside the range +/-16777216 should be performed by casting the int to double, performing the operation, and then, if necessary, casting the result back to float. I find the requirement for the double cast annoying, but the typecasting rules in Java require that one either use it or suffer the loss of precision.
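To illustrate (a small example of my own; 16777216 is 2^24, the largest value up to which float represents every integer exactly):
int i = 16777217;                      // 2^24 + 1, not exactly representable as a float
System.out.println(i - 1.0f);          // 1.6777215E7: i is first rounded to 16777216.0f
System.out.println((double) i - 1.0f); // 1.6777216E7: widening i to double keeps it exact
float f = (float) ((double) i - 1.0f); // do the math in double, then narrow explicitly
System.out.println(f);                 // 1.6777216E7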

Java: double vs float

In another Bruce Eckel exercise, the code I've written passes an object to a method and changes a value in another class. Here is my code:
class Big {
    float b;
}

public class PassObject {
    static void f(Letter y) {
        y.c = 'z';
    } // end f()

    static void g(Big z) {
        z.b = 2.2;
    }

    public static void main(String[] args) {
        Big t = new Big();
        t.b = 5.6;
        System.out.println("1: t.b : " + t.b);
        g(t);
        System.out.println("2: t.b: " + t.b);
    } // end main
} // end class
The compiler reports an error saying "possible loss of precision":
PassObject.java:13: possible loss of precision
found   : double
required: float
        z.b = 2.2;
PassObject.java:20: possible loss of precision
found   : double
required: float
        t.b = 5.6;
Can't doubles be floats as well?
Yes, but you have to specify that they are floats, otherwise they are treated as doubles:
z.b = 2.2f;
The 'f' at the end of the number makes it a float instead of a double.
Java won't automatically narrow a double to a float.
No; floats can be automatically widened to doubles, but a double can never be assigned to a float without an explicit cast, because double has the larger range.
float range is 1.40129846432481707e-45 to 3.40282346638528860e+38
double range is 4.94065645841246544e-324d to 1.79769313486231570e+308d
By default, Java will treat a decimal (e.g. "4.3") as a double unless you otherwise specify a float by adding an f after the number (e.g. "4.3f").
You're having the same problem on both lines. First, the decimal literal is interpreted as a double by the compiler. It then attempts to assign it to b, which is of type float. Since a double is 64 bits and a float is only 32 bits (see Java's primitives documentation), Java gives you an error indicating that the double may not fit inside the float. The solution is to add an f to your decimal literals.
If you were trying to do the opposite (i.e. assign a float to a double), that would be okay since you can fit a float's 32 bits within a double's 64.
Don't use float. There is almost never a good reason to use it and hasn't been for more than a decade. Just use double.
can't doubles be floats as well?
No. Each value or variable has exactly one type (double, float, int, long, etc...). The Java Language Specification states exactly what happens when you try to assign a value of one type to a variable of another type. Generally, assignments of a "smaller" value to a "larger" type are allowed and done implicitly, but assignments where information could be lost because the target type is too "small" to hold all values of the origin type are not allowed by the compiler, even if the concrete value does fit into the target type.
That's why the compiler complains that assigning a double value (which the literal implicitly is) to a float variable could lose information, and you have to placate it by either making the value a float, or by casting explicitly.
One area that often causes confusions is calculations, because these are implicitly "widened" to int for technical reasons. So if you multiply two shorts and try to assign the result to a short, the compiler will complain because the result of the calculation is an int.
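For example (a small illustration of mine, not from the answer):
short a = 2, b = 3;
short c = a * b;           // error: a * b is promoted to int, and int does not fit in short
short d = (short) (a * b); // OK: explicit cast back down to short
int e = a * b;             // OK: the natural type of the result is int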
float has a smaller range (and precision) than double, so a float can easily be represented as a double, but the reverse is not possible: take a value that is out of float's range, and the conversion will lose the exact data.
