New Java programmers are often confused by compilation error messages like:
"incompatible types: possible lossy conversion from double to int"
for this line of code:
int squareRoot = Math.sqrt(i);
In general, what does the "possible lossy conversion" error message mean, and how do you fix it?
First of all, this is a compilation error. If you ever see it in an exception message at runtime, it is because you have run a program with compilation errors¹.
The general form of the message is this:
"incompatible types: possible lossy conversion from <type1> to <type2>"
where <type1> and <type2> are both primitive numeric types; i.e. one of byte, char, short, int, long, float or double.
This error happens when your code attempts to do an implicit conversion from <type1> to <type2> but the conversion could be lossy.
In the example in the question:
int squareRoot = Math.sqrt(i);
the sqrt method produces a double, but a conversion from double to int is potentially lossy.
What does "potentially lossy" mean?
Well, let's look at a couple of examples.
A conversion of a long to an int is a potentially lossy conversion because there are long values that do not have a corresponding int value. For example, any long value that is greater than 2^31 - 1 is too large to be represented as an int. Similarly, any number less than -2^31 is too small.
A conversion of an int to a long is NOT a lossy conversion because every int value has a corresponding long value.
A conversion of a float to a long is a potentially lossy conversion because there are float values that are outside of the range that can be represented as long values. Such numbers are (lossily) converted into Long.MAX_VALUE or Long.MIN_VALUE, as are NaN and Infinity values.
A conversion of a long to a float is NOT a lossy conversion because every long value has a corresponding float value. (The converted value may be less precise, but "lossiness" doesn't mean that ... in this context.)
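Here is a minimal sketch illustrating both kinds of behaviour (the variable names are just for illustration; the commented values follow the JLS conversion rules):
long big = 2_147_483_648L;   // 2^31: one more than Integer.MAX_VALUE
int i = (int) big;           // lossy: keeps the low 32 bits, giving -2147483648
long back = i;               // not lossy: every int value fits in a long

float huge = 1.0e30f;        // far outside the long range
long l = (long) huge;        // lossy: clamps to Long.MAX_VALUE
float f = Long.MAX_VALUE;    // allowed implicitly: less precise, but not "lossy" here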
These are all the conversions that are potentially lossy:
short to byte or char
char to byte or short
int to byte, short or char
long to byte, short, char or int
float to byte, short, char, int or long
double to byte, short, char, int, long or float.
How do you fix the error?
The way to make the compilation error go away is to add a typecast. For example:
int i = 47;
int squareRoot = Math.sqrt(i); // compilation error!
becomes
int i = 47;
int squareRoot = (int) Math.sqrt(i); // no compilation error
But is that really a fix? Consider that the square root of 47 is 6.8556546004 ... but squareRoot will get the value 6. (The conversion will truncate, not round.)
And what about this?
byte b = (byte) 512;
That results in b getting the value 0. Converting from a larger int type to a smaller int type is done by masking out the high order bits, and the low-order 8 bits of 512 are all zero.
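You can see the masking directly; a small illustrative snippet:
System.out.println((byte) 512);  // 0: 512 is 0b10_0000_0000, low 8 bits all zero
System.out.println((byte) 513);  // 1: low 8 bits are 0000_0001
System.out.println((byte) 130);  // -126: low 8 bits reinterpreted as signed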
In short, you should not simply add a typecast, because it might not do the correct thing for your application.
Instead, you need to understand why your code needs to do a conversion:
Is this happening because you have made some other mistake in your code?
Should the <type1> be a different type, so that a lossy conversion isn't needed here?
If a conversion is necessary, is the silent lossy conversion that the typecast will do the correct behavior?
Or should your code be doing some range checks and dealing with incorrect / unexpected values by throwing an exception? (A checked-conversion sketch follows below.)
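For that last option, a checked conversion can be sketched like this (Math.toIntExact has been available since Java 8):
long count = 9_000_000_000L;           // does not fit in an int

// Throws ArithmeticException instead of silently truncating:
int size = Math.toIntExact(count);

// Equivalent manual range check:
if (count < Integer.MIN_VALUE || count > Integer.MAX_VALUE) {
    throw new IllegalArgumentException("count out of int range: " + count);
}
int size2 = (int) count;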
"Possible lossy conversion" when subscripting.
First example:
for (double d = 0; d < 10.0; d += 1.0) {
    System.out.println(array[d]); // <<-- possible lossy conversion
}
The problem here is that an array index value must be an int, so d has to be converted from double to int. In general, using a floating-point value as an index doesn't make sense. Either someone is under the impression that Java arrays work like (say) Python dictionaries, or they have overlooked the fact that floating-point arithmetic is often inexact.
The solution is to rewrite the code to avoid using a floating point value as an array index. (Adding a type cast is probably an incorrect solution.)
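For instance, the loop above is better written with an int loop variable:
for (int i = 0; i < 10; i++) {
    System.out.println(array[i]);  // index is an int: no conversion needed
}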
Second example:
for (long l = 0; l < 10; l++) {
    System.out.println(array[l]); // <<-- possible lossy conversion
}
This is a variation of the previous problem, and the solution is the same. The difference is that the root cause is that Java arrays are limited to 32-bit indexes. If you want an "array-like" data structure with more than 2^31 - 1 elements, you need to define or find a class to do it.
"Possible lossy conversion" in method or constructor calls
Consider this:
public class User {
    String name;
    short age;
    int height;

    public User(String name, short age, int height) {
        this.name = name;
        this.age = age;
        this.height = height;
    }

    public static void main(String[] args) {
        User user1 = new User("Dan", 20, 190);
    }
}
Compiling the above with Java 11 gives the following:
$ javac -Xdiags:verbose User.java
User.java:20: error: constructor User in class User cannot be applied to given types;
User user1 = new User("Dan", 20, 190);
^
required: String,short,int
found: String,int,int
reason: argument mismatch; possible lossy conversion from int to short
1 error
The problem is that the literal 20 is an int, and the corresponding parameter in the constructor is declared as a short. Converting an int to a short is lossy.
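Two possible fixes, sketched below: cast the argument (safe here, because the constant 20 certainly fits in a short), or change the parameter's type.
User user1 = new User("Dan", (short) 20, 190);  // option 1: cast the constant

// Option 2 (often simpler): declare the constructor parameter as int
// instead of short, so no conversion is needed at the call site.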
"Possible lossy conversion" in a return statement.
Example:
public int compute() {
    long result = 42L;
    return result; // <<-- possible lossy conversion
}
A return (with a value / expression) could be thought of as an "assignment to the return value". But no matter how you think about it, it is necessary to convert the value supplied to the actual return type of the method. Possible solutions are adding a typecast (which says "I acknowledge the lossiness") or changing the method's return type.
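Both fixes sketched (they are alternatives; the second is renamed compute2 just so the two sketches can coexist):
public int compute() {
    long result = 42L;
    return (int) result;   // acknowledges the potentially lossy conversion
}

public long compute2() {   // or: widen the return type so no conversion is needed
    long result = 42L;
    return result;
}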
"Possible lossy conversion" due to promotion in expressions
Consider this:
byte b1 = 0x01;
byte mask = 0x0f;
byte result = b1 & mask; // <<-- possible lossy conversion
This will tell you that there is a "possible lossy conversion from int to byte". This is actually a variation of the first example. The potentially confusing thing is understanding where the int comes from.
The answer is that it comes from the & operator. In fact, all of the arithmetic and bitwise operators for integer types produce an int or a long, depending on the operands. So in the above example, b1 & mask actually produces an int, but we are trying to assign that to a byte.
To fix this example we must type-cast the expression result back to a byte before assigning it.
byte result = (byte) (b1 & mask);
"Possible lossy conversion" when assigning literals
Consider this:
int a = 21;
byte b1 = a; // <<-- possible lossy conversion
byte b2 = 21; // OK
What is going on? Why is one version allowed but the other one isn't? (After all they "do" the same thing!)
First of all, the JLS states that 21 is a numeric literal whose type is int. (There are no byte or short literals.) So in both cases we are assigning an int to a byte.
In the first case, the reason for the error is that not all int values will fit into a byte.
In the second case, the compiler knows that 21 is a value that will always fit into a byte.
The technical explanation is that in an assignment context, it is permissible to perform a primitive narrowing conversion to a byte, char or short if the following are all true:
The value is the result of a compile time constant expression (which includes literals).
The type of the expression is byte, short, char or int.
The constant value being assigned is representable (without loss) in the domain of the "target" type.
Note that this only applies with assignment statements, or more technically in assignment contexts. Thus:
Byte b4 = new Byte(21); // incorrect
gives a compilation error.
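A few examples of where those rules draw the line (the lines marked // error will not compile):
byte ok = 127;        // OK: constant expression, fits in a byte
byte bad = 128;       // error: 128 is outside the byte range
final int c = 100;
byte fromConst = c;   // OK: c is a compile-time constant in range
int n = 100;
byte fromVar = n;     // error: n is not a constant expression
Byte boxed = 21;      // OK: narrowing then boxing, in an assignment context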
1 - For instance, the Eclipse IDE has an option which allows you to ignore compilation errors and run the code anyway. If you select this, the IDE's compiler will create a .class file where the method with the error will throw an unchecked exception if it is called. The exception message will mention the compilation error message.
Related
You cannot convert from int to char, so this would be illegal:
int i = 88; char c = i;
However, this is allowed:
char c = 88;
Isn't a plain number an int literal? How is this allowed?
char is effectively an unsigned 16-bit integer type in Java.
Like other integer types, you can perform an assignment conversion from an integer constant to any integer type so long as it's in the appropriate range. That's why
byte b = 10;
works too.
From the JLS, section 5.2:
In addition, if the expression is a constant expression (§15.28) of type byte, short, char, or int:

A narrowing primitive conversion may be used if the type of the variable is byte, short, or char, and the value of the constant expression is representable in the type of the variable.

A narrowing primitive conversion followed by a boxing conversion may be used if the type of the variable is:
Byte and the value of the constant expression is representable in the type byte.
Short and the value of the constant expression is representable in the type short.
Character and the value of the constant expression is representable in the type char.
Actually, converting from int to char is legal, it just requires an explicit cast because it can potentially lose data:
int i = 88;
char c = (char) i;
However, with the literal, the compiler knows whether it will fit into a char without losing data and only complains when you use a literal that is too big to fit into a char:
char c = 70000; // compiler error
It's because integer literals have type int (there are no byte or short literals), and arithmetic on byte, short and char values also produces an int. Understand it through the following example.
code:
byte a = 10;     // compiles fine
byte b = 11;     // compiles fine
byte c = a + b;  // compiler error: the result of a + b is an int
The same happens for division, multiplication and the other arithmetic operations, so cast the result to get a value of the desired type:
byte c = (byte)(a+b);
That is the same reason why an int value needs an explicit primitive cast to be converted to a char.
Hope this makes some sense.
I have overloaded methods such as:
public int sum1(int a, int b)
{
    int c = a + b;
    System.out.println("The method1");
    return c;
}

public float sum1(int a, float b)
{
    float c = a + b;
    System.out.println("The method2");
    return c;
}

public double sum1(float a, float b)
{
    double c = (double) a + b;
    System.out.println("The method3");
    return c;
}
From the main method, suppose we have
double x=10.10f;
double y=10.20f;
The apparent type of x and y is double, but the actual type is float. When I call
System.out.println(" the output is :"+cc.sum1(x,y));
I get a compile-time error:
The method sum1(int, int) in the type Class is not applicable for the arguments (double, double).
whereas I expected it to go to sum1 (i.e. method3) by casting double to float.
TL;DR version of this answer:
Variables of primitive types never have a different type at execution time than at compile time. (A double is always a double, never a float, etc.)
Overload resolution (picking which method signature is used) is performed using the compile-time types of expressions.
Which implementation of the picked signature runs is determined by the execution-time type of the target of the method call.
the apparent type for x and y is double, but the actual type is float
No, it's not. You've just got a conversion from the assigned float literals to double values. The actual values are double.
Primitives don't work like objects - there's no idea of an int value still being an int inside a double variable, for example.
It's simpler to take an example with integer types. Suppose we have:
byte b = 100;
int x = b;
The value of x is the 32-bit integer representing the value 100. It doesn't "know" that it was originally assigned from a byte value... there just happened to be a conversion from byte to int at the point of assignment.
Compare that with reference types:
String x = "hello";
Object y = x;
Here, the value of y really is a reference to a String object. That type information is preserved precisely because there's a whole object that can contain it, and because the value of the variable itself is only a reference. (The bits themselves don't need to change as part of the assignment - in a simple VM at least, they'll be the exact same bits in x and y, because they're referring to the same object.)
Even in that case, however, overload resolution occurs based on the compile-time type of the arguments, not their actual values at execution time. The only way that the execution-time type of a variable gets involved is with overriding based on the target of a method call. So if we have:
Foo f = new X();
Foo g = new Y();
Foo h = new Z();
f.someMethod(g, h);
... then the compiler will look for a method in Foo which has two Foo parameters (or Object or other superclasses/interfaces) - the actual types of the objects involved are irrelevant. At execution time, however, if that method has been overridden in X, then that override will be called due to the execution-time type of the object f's value refers to.
Casting double to float may cause loss of data, since it's a narrowing conversion, and is therefore not done automatically by the compiler. You'll have to cast the variables to float explicitly if you want it to take place.
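Applied to the code in the question (assuming cc is an instance of the class declaring the overloads), that looks like:
System.out.println("the output is: " + cc.sum1((float) x, (float) y));  // picks method3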
No, the actual type of the variables is double. The type of the constants that you're assigning to that double variable, which get promoted on assignment, is float.
Try adding a new method: public double sum1(double a, double b). It will solve your problem.
Note also that casting double to float can cause loss of data.
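A sketch of that suggested overload, in the same style as the question's methods ("The method4" is just an illustrative label):
public double sum1(double a, double b)
{
    double c = a + b;
    System.out.println("The method4");
    return c;
}
With this overload present, cc.sum1(x, y) resolves directly to the double version, with no casts needed.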
Floats are funny that way... they widen to doubles automatically whenever a double is expected. This code compiles, with floats at every turn.
public class wood {
    public static void main(String[] args) {
        float x = 10.1f;
        float y = 10.2f;
        System.out.println("Sum of x and y = " + sum1((float) x, (float) y));
    }

    public static float sum1(float x, float y) {
        return (float) ((float) x + (float) y);
    }
}
Edit: note that a cast operator outside parentheses is applied after what is inside the parentheses has been computed. So
System.out.println((int)(50.5 + 50.7));
will print out 101.
Within java, some data conversions are automatic, and others require cast operators.. simply put, java will automatically make widening conversions, while you will have to use a cast operator for narrowing conversions.
The hierarchy of primitive data types is as follows:
byte   // 1 byte  (-128 through 127)
short  // 2 bytes (-32,768 through 32,767)
int    // 4 bytes (a bit over +/- 2 billion)
long   // 8 bytes (a bit over +/- 9 quintillion, i.e. 10^18)
float  // 4 bytes (about 7 significant decimal digits)
double // 8 bytes (about 15 significant decimal digits)
Java will not make narrowing conversions automatically, because then data is at risk of being lost. In arithmetic, both byte and short values are promoted to int automatically.
short s = 5;
byte b = 5;
short sum = s + b; //this will cause an error!
s and b automatically make a widening conversion to an int, and an int cannot be assigned to a short without a cast operator.
short sum = (short)(s + b);
would be needed.
There is a question in a past Java exam paper that bothers me:
With implicit conversion of primitive data types, you can lose precision and get incorrect results.
A True, B False
The key to the answer is A: True
I think it will neither lose precision nor get incorrect results. I know that an explicit conversion can lose precision and give incorrect results, but not an implicit one.
For example:
int i = 9;
short s = 3;
i = s; // implicit conversion; neither loses
       // precision nor gives incorrect results
s = i; // compile error; do we call this implicit conversion?
       // if yes, then the answer to question 3 is True,
       // but I don't think this is an implicit conversion,
       // so I think the answer is false.
As stated in the notes:
Implicit type conversion: The programmer does not make any attempt to convert the type, rather the type is automatically converted by the system under certain circumstances.
Could anyone please advise?
Many thanks.
Answer = A
float f = Long.MAX_VALUE;
System.out.println(Long.MAX_VALUE);
System.out.printf("%.0f", f);
output
9223372036854775807
9223372036854776000
There are some cases where the compiler will allow an implicit conversion, but you may still lose precision. For example:
long a = Long.MAX_VALUE; // 9223372036854775807
double b = a; // 9223372036854776000
See the JLS for more details on this.
There are implicit conversions in assignment operators. These can lose precision or cause an overflow. For regular assignments, an implicit conversion only happens when the compiler knows it is safe. It can still lose precision, but not cause an overflow.
e.g.
final int six = 6;
byte b = six; // compiler uses constant propagation and value is in range.
int five = 5;
byte b2 = five; // fails to compile
double d = 5.5;
five += d; // compiles fine, even though implicit conversion drops the 0.5
// five == 10 not 10.5
five += Double.NaN; // five is now 0 ;)
I was developing the code below:
static String m(float i) {
    return "float";
}

static String m(double i) {
    return "double";
}

public static void main(String[] args) {
    int a1 = 1;
    long b1 = 2;
    System.out.print(m(a1) + "," + m(b1));
}
Both result in the output float,float. What is the reason behind that, and how can I call the double version? Please advise, thanks a lot.
Short answer: Java can automatically widen ints and longs to both floats and doubles, but Java will choose to widen to a float because it is smaller (in terms of memory footprint) than a double. You can call the double version of the method by explicitly casting the argument to a double:
System.out.print(m((double)a1) + "," + m((double)b1));
Long answer: Each primitive data type in Java has a size (measured in bytes), which determines how much information (or rather, what range of values) a given primitive can hold. The table below shows the sizes of some of Java's primitive data types:
byte 1 byte
short 2 bytes
char 2 bytes
int 4 bytes
float 4 bytes
long 8 bytes
double 8 bytes
Java will automatically "widen" values for you in certain situations, following some well-defined rules from the Java Language Specification. The following widening primitive conversions are performed by Java automatically:
byte to short, int, long, float, or double
short to int, long, float, or double
char to int, long, float, or double
int to long, float, or double
long to float or double
float to double
The other rules that are applicable to this situation are:
Widening does not lose information about the overall magnitude of the numeric value.
Widening from an integral type to another integral type does not lose any information at all; the numeric value is preserved exactly.
Widening an int or a long to a float, or a long to a double, may result in a loss of precision.
For example, Java can safely widen an int to a long without changing the numeric value at all, because both are integral types and a long is larger than an int (see rule 2). Java can also widen an int to a float, but there might be a loss of precision. The sample code below demonstrates this loss of precision.
public static void foo(float f) {
    System.out.println(f);
}

public static void main(String[] args) {
    int a = 123;
    int b = 1234567890;
    foo(a);
    foo(b);
}
The first call to foo(float) prints "123.0" as expected. The int value "123" is widened to the float value "123.0". The second call prints "1.23456794E9", which makes sense when you take rule 3 into account. The value 1.23456794E9 has the same magnitude as, but is less precise than, 1234567890.
The other piece of information that is key here: when multiple overloaded methods are present, Java determines which method to call by choosing the method with the smallest parameter type (in terms of memory footprint) such that the given argument is capable of being widened to it. In other words, the number you pass must be capable of being widened to the method's parameter type, and if multiple overloads satisfy that requirement, Java chooses the one whose parameter type has the smallest size.
In your case, you are passing an int and a long to one of two overloaded methods. Java will look at methods m(float) and m(double) to determine which method to call. An int can be widened to a float and a double, but a float (4 bytes) is smaller than a double (8 bytes), so Java will choose to call m(float). The same is true when you call the method with a long argument: float is chosen because it's the smallest data type that a long can be widened to.
If you want to call the double version, make it explicitly a double: m((double)a1)
Take a look at the JLS §8.4.9 - Method Overloading
Try
System.out.print(m(1f)+","+ m(2d));
Why are you creating an int and a long to test it? Well, what is actually happening is that both arguments are converted to float by default.
how can I call double?
Use a double variable
double d = 3.3;
m(d);
import java.lang.Math;

class Squr
{
    public static void main (String[] args)
    {
        Squr square = new Squr();
        System.out.println("square root of 10 : " + square.mysqrt(10));
        System.out.println("Square root of 10.4 : " + square.mysqrt(10.4));
    }

    int mysqrt (int x)
    {
        return Math.sqrt(x); // <<-- error: possible loss of precision
    }

    double mysqrt (double y)
    {
        return Math.sqrt(y);
    }
}
When we compile it, we get this error:
possible loss of precision
found   : double
required: int
I have written this program to calculate the square root of int or double values using method overloading.
How can I fix my error so I can find the square root of an int and a double?
I don't think this has anything to do with method overloading - it's actually quite simple.
Math.sqrt() returns a double, but in your first mysqrt method you're trying to return that directly from a method declared to return int. That would be a lossy conversion, and you'd need to perform it explicitly.
In particular - when you call mysqrt(5), which integer do you expect the answer to be? 2? 3? Some "warning, not a whole number" value like -1? I suspect that perhaps you meant to return double for both methods, and differ only in the type of the arguments.
My Eclipse gives:
Type mismatch: cannot convert from double to int
Whatever the message is, you fix this by: return (int) Math.sqrt(x);
This has nothing to do with overloading though.
By the way, the square root of an integer can be a double, so by having a return type of int you are truncating possibly important information.
Hence I'd advise for simply using Math.sqrt(..) without making any additional methods. If you need the whole part, you can use Math.floor(..) or Math.round(..)
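For example (note that Math.round(double) returns a long, while Math.floor returns a double):
double root = Math.sqrt(10.4);      // 3.2249...
long rounded = Math.round(root);    // 3
double floored = Math.floor(root);  // 3.0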
http://download.oracle.com/javase/6/docs/api/java/lang/Math.html#sqrt%28double%29
returns a double. Casting to int will lose precision. Don't you think so?
The JLS has something to say on conversions. See http://java.sun.com/docs/books/jls/third_edition/html/conversions.html
The following 22 specific conversions on primitive types are called the narrowing primitive conversions:
* short to byte or char
* char to byte or short
* int to byte, short, or char
* long to byte, short, char, or int
* float to byte, short, char, int, or long
* double to byte, short, char, int, long, or float
Narrowing conversions may lose information about the overall magnitude of a numeric value and may also lose precision.
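A short demonstration of both kinds of loss, magnitude and precision:
long big = 3_000_000_000L;
System.out.println((int) big);        // -1294967296: magnitude lost

double huge = 1.0e300;
System.out.println((float) huge);     // Infinity: outside the float range

double precise = 1.23456789012345;
System.out.println((float) precise);  // 1.2345679: precision lost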