I am multiplying two int values and storing the result in a long, but my code-quality tool flags the line with this error:
Cast one of the operands of this multiplication operation to a long. How do I solve this?
private int milisecPerSecond = 1000;
/**
* The update interval in seconds.
*/
// Update frequency in seconds
private int updateIntervalInSec = 5;
/**
* The update interval.
*/
// Update frequency in milliseconds
private long updateInterval = milisecPerSecond * updateIntervalInSec;
First, this will not result in a compiler error. Multiplying two int values and storing the result in an int is the usual case.
A signed 32-bit int can hold values up to approximately 2 billion. If your code does what its names say, that allows an update interval of up to 2 million seconds, or about 23 days.
If you really want to force a long-based calculation, just cast one of its factors:
long updateInterval = milisecPerSecond * (long) updateIntervalInSec;
The other factor will then be converted to long automatically.
Keep in mind that casting only the result will not be enough:
private long updateInterval = (long) (milisecPerSecond * updateIntervalInSec);
// THIS WON'T DO AS EXPECTED
as it still performs the problematic calculation in int arithmetic and merely casts the (possibly already wrong) result to long afterwards.
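To make the difference concrete, here is a small sketch (values made up by me) comparing the two cast placements:
int bigFactorA = 1_000_000;
int bigFactorB = 10_000;
long wrong = (long) (bigFactorA * bigFactorB); // int overflow happens before the cast
long right = (long) bigFactorA * bigFactorB;   // multiplication performed in 64 bits
System.out.println(wrong); // 1410065408 (the already-overflowed int, widened too late)
System.out.println(right); // 10000000000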
Casting is something that's described in the first few pages of every book on Java. If you're serious enough about code quality to run code-quality tools, then have someone with knowledge of the language sit beside you. A trained pair of fresh eyes doing a code review is a great resource.
I have the following piece of code:
long[] blocks = new long[(someClass.getMemberArray().length - 1) / 64 + 1];
Basically, someClass.getMemberArray() can return an array much larger than 64 elements, and the code tries to determine how many blocks of length 64 are needed for subsequent processing.
I am confused about the logic and how it works. It seems to me that just doing:
long[] blocks = new long[(int) Math.ceil(someClass.getMemberArray().length / 64.0)];
should work too and looks simpler.
Can someone help me understanding the -1 and +1 reasoning in the original snippet, how it works and if the ceil would fail in some cases?
As you correctly commented, the -1/+1 is required to get the correct number of blocks, including only partially filled ones. It effectively rounds up.
(But it has something that could be considered a bug: if the array has length 0, which would require 0 blocks, it returns 1. This is because integer division truncates toward zero on most systems, i.e. rounds up for negative numbers, so (0 - 1)/64 yields 0. However, this may be a feature if zero blocks are not allowed for some reason. It definitely deserves a comment, though.)
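A few quick spot checks of the formula (my own examples, not from the question):
int[] lengths = {0, 1, 64, 65, 128};
for (int len : lengths) {
    System.out.println(len + " -> " + ((len - 1) / 64 + 1) + " block(s)");
}
// prints: 0 -> 1, 1 -> 1, 64 -> 1, 65 -> 2, 128 -> 2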
The reasoning for the original line is that it uses only integer arithmetic, which translates to just a few basic and fast machine instructions on most computers.
The second solution involves floating-point arithmetic and casting. Traditionally, floating-point arithmetic was much slower on most processors, which was probably the reasoning behind the first solution. However, on modern CPUs with integrated floating-point support, performance depends more on other things like cache lines and pipelining.
Personally, I don't really like either solution, as it's not immediately obvious what they do. So I would suggest the following solution:
int arrayLength = someClass.getMemberArray().length;
int blockCount = ceilDiv(arrayLength, 64);
long[] blocks = new long[blockCount];
//...
/**
* Integer division, rounding up.
* @return the quotient a/b, rounded up.
*/
static int ceilDiv(int a, int b) {
    assert b > 0 : b; // Doesn't work for zero or negative divisors.
    // Java's division truncates toward zero, which already rounds up for negative a.
    int quotient = a / b;
    // If a is positive and not a multiple of b, round up.
    if (a > 0 && a % b != 0) {
        quotient++;
    }
    return quotient;
}
It's wordy, but at least it's clear what should happen, and it provides a general solution that works for all integers (except zero or negative divisors). Unfortunately, most languages don't provide an elegant round-up integer division out of the box.
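A few sanity checks for this helper (my own, under the assert above):
System.out.println(ceilDiv(0, 64));  // 0
System.out.println(ceilDiv(64, 64)); // 1
System.out.println(ceilDiv(65, 64)); // 2
System.out.println(ceilDiv(-5, 64)); // 0 (truncation already rounds negative quotients up)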
I don't see why the -1 would be necessary, but the +1 is likely there to rectify the cases where integer division rounds the result down, which is every case except those where the result has no fractional part.
I am storing a long value that comes from a backend system, and I had to convert it to float because one of the third-party APIs only understands float.
Now when I read the float back and try to convert it back to long, I get a bigger number (sometimes I've seen a smaller one as well) than what I had.
My question is: how can I get the original number back as a long? I am happy to add rounding logic if required.
public class Main {
    public static void main(String[] args) {
        long item = 1502028000;
        System.out.println(String.format("item -> %d", item));
        float floatItem = item;
        System.out.println(String.format("floatItem -> %f", floatItem));
        long resultItem = (long) floatItem;
        System.out.println(String.format("resultItem -> %d", resultItem));
    }
}
Below is the output. I need my original number back, which is 1502028000 in this case.
item -> 1502028000
floatItem -> 1502028032.000000
resultItem -> 1502028032
The short answer is that a float value is only precise to about 7 decimal digits. In terms of the long value you are converting, that means the float can only retain the first 7 digits of 1502028000. Those 7 digits are what the float holds and what you get back when you convert to a long.
If you would like to read more about float precision, take a look at this. Just as an exercise, you could do the same thing with a double and see that more digits are retained.
A float value only has ~24 significant bits, while a long has 64 bits. So many different long values necessarily map to the same float value, and the original can't be recovered.
You said that your longs are always multiples of 1000. So of the many longs that correspond to your given float, only those divisible by 1000 are valid for you. That's what your rounding suggestion Math.round(value/1000)*1000 will exploit. But that only works until more than 1000 longs map to one float, which happens around the 10-billion magnitude. Your example is just a factor of ten below that, so this won't help reliably if the numbers can grow.
If you REALLY MUST map the float value back to the original long (ask yourself: why is it necessary - can I do my job another way?), you might maintain a Map from Float values to the original Long values wherever you pass a long to the third-party library. If the long values are sparse, chances are that within the valid long values you actually used, you can often map back from float without ambiguity.
But that's a nasty kludge. Be prepared to deal with the ambiguity if it arises.
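A minimal sketch of that kludge (class and method names are my own invention):
import java.util.HashMap;
import java.util.Map;

class FloatRoundTrip {
    // Remember every long handed to the third-party API, keyed by its float form.
    private final Map<Float, Long> originals = new HashMap<>();

    float toThirdParty(long value) {
        float f = value;          // lossy above ~2^24
        originals.put(f, value);  // last writer wins if two longs collide
        return f;
    }

    long fromThirdParty(float f) {
        Long original = originals.get(f);
        if (original == null) {
            throw new IllegalStateException("float was never produced here: " + f);
        }
        return original;          // ambiguous if distinct longs collided on f
    }
}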
It appears that float doesn't have enough precision to hold a long of this size, so the conversion loses precision.
To resolve this, you can use double instead of float. A double has 53 significant bits, enough to represent every long up to 2^53 exactly, so it will retain your value through the conversion you are doing:
long item = 1502028000;
double floatItem = (double)item;
long resultItem = (long)floatItem;
System.out.println(String.format("resultItem -> %d",resultItem));
This prints :
resultItem -> 1502028000
Here's the sample code from Item 9:
public final class PhoneNumber {
    private final short areaCode;
    private final short prefix;
    private final short lineNumber;

    @Override
    public int hashCode() {
        int result = 17;
        result = 31 * result + areaCode;
        result = 31 * result + prefix;
        result = 31 * result + lineNumber;
        return result;
    }
}
Pg 48 states: "the value 31 was chosen because it is an odd prime. If it were even and the multiplication overflowed, information would be lost, as multiplication by 2 is equivalent to shifting."
I understand the concept of multiplication by 2 being equivalent to bit shifting. I also know that we'll still get an overflow (hence information loss) when we multiply a large number by a large odd prime number. What I don't get is why information loss arising from multiplication by large odd primes is preferable to information loss arising from multiplication by large even numbers.
With an even multiplier, the least significant bit after the multiplication is always zero. With an odd multiplier, the least significant bit is either one or zero depending on the previous value of result. Hence the even multiplier loses information about the low bit, while the odd multiplier preserves it.
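Here's a quick demonstration of that information loss under int overflow (my own example, not from the book):
int evenHash = 123_456_789;
int oddHash = 123_456_789;
for (int i = 0; i < 32; i++) {
    evenHash *= 2;  // each step shifts one bit off the top, filling with zeros
    oddHash *= 31;  // odd multipliers are invertible mod 2^32, so nothing is lost
}
System.out.println(evenHash); // 0 - every bit of the input has been shifted out
System.out.println(oddHash);  // nonzero; the input is still recoverable in principle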
There is no such thing as a large even prime - the only even prime is 2.
That aside - the general point of using a medium-sized prime rather than a small one like 3 or 5 is to minimize the chance that two objects will end up with the same hash value, overflow or not.
The risk of overflow is not the issue per se; the real issue is how well distributed the hash code values will be for the set of objects being hashed. Because hash codes are used in data structures like HashSet and HashMap, you want to minimize the number of objects that could potentially share the same hash code, to optimize lookup times on those collections.
In my example, X is already a long and Y is a long as well, so I am not casting anything.
I really just want to divide by a cubed number (using native libraries).
These numbers are extremely large. If I convert them to floats and do the division, the value comes out as Infinity...
System.out.println(formatter.format("%20d", (X/(Y*Y*Y))));
Y is an extremely large number, it is not 0. X is a measurement of time in milliseconds.
I will post the exact code in a short while if this question doesn't get closed... I don't have access to it right this minute.
Context: I am dealing with a Big-O notation calculation for O(n^3).
Error: "Exception in thread "main" java.lang.ArithmeticException: / by zero"
Update: one answer (quoted in full below) suggests that Y * Y * Y is greater than 2^31 and overflowing with a lower part of 0, and that this would only happen if Y is a multiple of 2^11 (2048).
This is the case for me: Y is a multiple of 2048. Hopefully this helps with finding a solution.
// Algorithm 3
for (int n = 524288; n <= 5000000; n *= 2) {
    int alg = 3;
    long timing;
    maxSum = maxSubSum3(a);
    timing = getTimingInfo(n, alg);
    System.out.println(fmt.format("%20s %20d %20d %20d %20d %20s%n",
            "Alg. 3", n, timing, timing, timing / (n * n), "time/(n*log(n))"));
}
Assuming you didn't really mean the quotes, the likely reason is that Y * Y * Y is greater than 2^31. It's overflowing, with a lower part of 0.
I believe this would only happen if Y is a multiple of 2^11 (2048) - but I'm not certain. (For Y^3 to be divisible by 2^32, Y must contribute at least ⌈32/3⌉ = 11 factors of two.)
It can be avoided by making sure that the computation of Y^3 is done using a datatype that can hold it. If Y is less than about 2 million (2^21), you can use a long. If not, you'll have to use either a double or a BigInteger. Given that your other value is in milliseconds, I'd guess that floating point would be fine. So you'd end up with:
System.out.println(formatter.format("%20d", (int)(X/((double)Y*Y*Y))));
You may want to use floating point for the output as well - I assumed not.
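For the long-based route, a sketch with made-up values (assuming Y < 2^21):
long x = 123_456_789_000L;    // X, a time in milliseconds (made up)
int y = 2048;                 // Y, the problematic multiple of 2048
long cube = (long) y * y * y; // the first cast promotes the whole chain to long
System.out.println(String.format("%20d", x / cube)); // 14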
Surely you don't mean to pass "(X/(Y*Y*Y))" as a string literal? That's a string containing your expression, not compilable Java code expressing a computation Java will perform. So that's problem #1: remove those quotes.
Second, the formatter has nothing to do with dividing numbers, so it is neither relevant nor your problem.
Third, casting has nothing to do with this. Your problem is exactly what it says: you're dividing by zero. I assume you don't want to do that, so the divisor Y*Y*Y must be 0.
Fourth, nothing here uses native libraries. It's all Java - that is what you mean, right?
You may want to use BigInteger to perform math on very large values that overflow a long. But, that will not make division by zero somehow not be division by zero.
Perhaps you should try, either with long or float conversions:
( ( X / Y ) / Y ) / Y
If Y is a high enough power of 2 (2^22 or more), then Y^3 will be a power of 2 of at least 2^66, and a long only has 64 bits in Java.
I need to store an exact audio position in a database, namely SQLite. I could store the frame position (sample offset / channels) as an integer, but this would cause extra data maintenance in case of certain file conversions.
So I'm thinking about storing the position as an 8 byte real value in seconds, that is a double, and so as a REAL in SQLite. That makes the database structure more consistent.
But, given a maximum samplerate of 192kHz, is the double precision sufficient so that I can always recover the exact frame position when multiplying the value by the samplerate?
Is there a certain maximum position above which an error may occur? What is this maximum position?
PS: this is about SQLite REAL, but also about the C and Java double type which may hold the position value at various stages.
Update:
Since the discussions now focus on the risks related to conversion and rounding, here's the C method that I'm planning to use:
// Given these types:
int samplerate;
long long framepos;
double position;
// First compute the position in seconds from the framepos:
position = (double) framepos / samplerate;
// Now store the position in an SQLite REAL column, and retrieve it later
// Then compute the framepos back from position, with rounding:
framepos = position * samplerate + 0.5;
Is this safe and symmetrical?
A double has 52 bits of mantissa (53 significant bits). Depending on the exponent, some of these bits represent whole numbers (seconds in your case), the others fractions of a second.
At 48 kHz, a minimum of 16 bits is required to make the sub-second part precise enough (more if rounding is not optimal). That leaves 36 bits for the seconds, which span over two thousand years.
So even if you need an extra bit or two for the sub-second part to guard against rounding, and even if SQL loses a bit or two of precision converting it to decimal and back here and there, you aren't anywhere near losing sample precision with your double-precision number. Do make sure your rounding works correctly - C truncates toward zero when converting to integer, so even an infinitesimally small conversion error could throw you off by 1.
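As a quick sanity check of the proposed round trip, here's a small Java sketch (values made up):
int samplerate = 192000;
long framepos = 123_456_789_012L;                  // an arbitrary large frame position
double position = (double) framepos / samplerate;  // what would go into the REAL column
long back = (long) (position * samplerate + 0.5);  // recover with round-to-nearest
System.out.println(back == framepos);              // true while framepos stays below ~2^52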
I would store it as a (64-bit) integer representing microseconds (10^6 ≈ 2^20). This avoids floating-point hardware/software issues, is readily understood by all, and gives you a range of 0..2^44 seconds, which is over 550 thousand years.
As an alternative, use a readable fixed-precision decimal representation (20 digits should be enough), right-justified with leading zeros. The cost of conversion is negligible compared to DB accesses anyway.
One advantage of these options is that any database will trivially know how to order them, not necessarily obvious for floating point values.
As the answer by Matthias Wandel explains, there's probably nothing to worry about. OTOH, by using integers you would get fixed precision regardless of the magnitude, which might be useful.
Say, use a 64-bit integer, and store the time as microseconds. That gives you an equivalent sampling precision of 1 MHz and a range of almost 300000 years (if my quick calculation is correct).
Edit: Even when taking into account the need for timestamp * sample_rate to fit into a 64-bit integer, you still have a range of 1.5 years (2^63/1e6/3600/24/365/192e3), assuming a max sample rate of 192kHz.
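A minimal Java sketch of that integer-microseconds scheme (helper names are mine):
static long toMicros(long framepos, int samplerate) {
    // Round to the nearest microsecond. The round trip is exact while the
    // sample rate stays below 1 MHz and framepos stays below ~2^43 (no overflow).
    return (framepos * 1_000_000L + samplerate / 2) / samplerate;
}

static long toFrames(long micros, int samplerate) {
    return (micros * samplerate + 500_000L) / 1_000_000L;
}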