According to IEEE 754, the following double formats exist:
              Mantissa   Exponent
double 64bit: 52 bit     11 bit
double 80bit: 64 bit     15 bit
In Java, only the 64-bit double can be stored directly in an instance variable. I would like, for whatever reason, to work with 80-bit floats as defined above in Java. I am interested in the full set of arithmetic functions, I/O and trigonometric functions. How could I do that?
One could of course do something along the
following lines:
public class DoubleExt {
private long mantissa;
private short exponent;
}
And then make a package that interfaces with some of the known C libs for 80-bit floats. But would this be considered best practice? And what about supporting multiple platforms and architectures?
Bye
I'm pretty sure primitives won't get you there, but the BigDecimal class is as good as it gets (for everything except trigonometry).
For trigonometric functions, however, you will have to resort to an external library, like APFloat (see this previous question).
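As a rough illustration of the BigDecimal route, a minimal sketch might look like the following. The 21-digit MathContext is only an assumption, chosen to roughly cover a 64-bit mantissa (about 19-20 decimal digits), and trigonometry is still missing.

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class ExtendedPrecision {
    // Assumption: 21 significant decimal digits roughly cover a 64-bit mantissa
    static final MathContext MC = new MathContext(21, RoundingMode.HALF_EVEN);

    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.000000000000000000001");
        BigDecimal b = new BigDecimal("3");
        System.out.println(a.multiply(a, MC)); // arithmetic at the chosen precision
        System.out.println(a.divide(b, MC));   // division needs an explicit MathContext
    }
}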
Perhaps BigDecimal is an adequate option for you, but I believe it doesn't provide the full set of mathematical functions.
http://download.oracle.com/javase/1.5.0/docs/api/java/math/BigDecimal.html
The question is already 5 years old. Time to look around for new candidates, and maybe draw inspiration from other languages.
In Racket we find a BigFloat data type:
https://docs.racket-lang.org/math/bigfloat.html
The underlying library is GNU MPFR, a Java interface was started:
https://github.com/kframework/mpfr-java
Is this the only interface so far?
Related
I'm new here, so please excuse my noob mistakes. I'm currently working on a little project of mine that has me dealing with numbers forty thousand digits long and beyond.
I'm currently using BigInteger to handle these values, and I need something that performs faster. I've read that BigInteger uses an array of integers in its implementation, and what I need to know is whether BigInteger uses each index in this array to represent a single decimal digit, 0-9, or something more efficient.
I ask this because I already have an implementation in mind that uses bit operations, which would make it more efficient, memory- and processing-wise.
So the final question is: is BigInteger already efficient enough, and should I just rely on it? It would be better to know this than to put it to the test unnecessarily, which would take a lot of time.
Thank you.
At least with Oracle's Java 8 and OpenJDK 8, it doesn't store one decimal digit per int. It stores full 32-bit portions per 32-bit int in the int[], which can be seen with its source code.
Bit operations are fast for it, since it's a sign-magnitude value and the magnitude is stored packed just as you'd expect. Just make sure that you use the relevant BigInteger bitwise methods rather than implementing your own.
If you still need more speed, try something like GMP, though be aware that it is licensed under the LGPL or GPL, and it is also better used from outside of Java.
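For illustration, here is a small sketch using the built-in bitwise methods mentioned above (the method names are the actual BigInteger API; the values are just made up for the example):

import java.math.BigInteger;

public class BigIntegerBitsDemo {
    public static void main(String[] args) {
        BigInteger x = BigInteger.ONE.shiftLeft(40000);        // 2^40000, a 40001-bit number
        BigInteger y = x.setBit(3).or(BigInteger.valueOf(10)); // set a few low bits

        System.out.println(y.testBit(3));        // true
        System.out.println(y.bitLength());       // 40001
        System.out.println(y.bitCount());        // 3 set bits: 40000, 3 and 1
        System.out.println(y.and(x).equals(x));  // true: bit 40000 is still set
    }
}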
I'm currently writing a "new language" at school, and I have to implement a Math class.
One of the specifications is to implement an ulp method, like Math.ulp() in Java. We are working on float types.
I found many interesting sources, but I'm still not able to calculate the ulp of a float.
AFAIK, if a float is written in normalized form as x = 1.b1 b2 ... bn * 2^e (so the significand has n+1 bits), then ulp(x) = 2^(e-n).
But how can I get this normalized form for a float without any lib?
And how can I get the parameters e and n+1?
Thank you for your help,
Best regards,
I'm not sure Java has the possibility of aliasing to get the bit pattern of a floating-point number (there is a close enough alternative, as the second part of this answer shows). If it does, then e is bits 23 through 30 (bit 31 is the sign) minus some constant, which, as in the Wikipedia description of the format, is 127 unless the number is subnormal, while n is fixed (in this case 23 bits, or 24 if it includes the implicit 1).
It's recommended you use a lib that does this job for you properly.
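If you do want to extract the fields by hand, a minimal sketch of the idea above could look like this (it assumes a normal, finite float; Float.floatToIntBits and Math.scalb are standard java.lang methods):

static float ulpFromExponent(float x) {
    int bits = Float.floatToIntBits(x);
    int e = ((bits >> 23) & 0xFF) - 127;  // unbiased exponent; bit 31 is the sign
    return Math.scalb(1.0f, e - 23);      // ulp = 2^(e - n) with n = 23 explicit mantissa bits
}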
Another option, which I've been made aware of in the comments (rather indirectly), involves converting the float bits to an int. I'll write the snippet of code directly. I'm not fully versed in Java, so the code may not work immediately due to missing package/class specifiers (floatToIntBits and intBitsToFloat are static methods of the class java.lang.Float). This is different from the suggestion above since it's a bit unorthodox, but it performs better than the approach suggested in the question itself.
float ulp(float x) {
    if (Float.isNaN(x)) return Float.NaN;  // special handling, to be safe
    int repr = Float.floatToIntBits(x);
    repr++;                                // step to the adjacent bit pattern
    float next = Float.intBitsToFloat(repr);
    return Math.abs(next - x);             // magnitude of one ulp, independent of sign
}
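For reference, the standard library already exposes this as Math.ulp(float) (since Java 5), so a quick sanity check of the sketch could be:

float x = 1.5f;
System.out.println(ulp(x));      // sketch above
System.out.println(Math.ulp(x)); // built-in; should agree for normal values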
I work on a product called Oracle RPAS as a consultant, and the application is developed in C++. In Oracle RPAS, JNI calls can be made to custom-developed Java programs to process the data.
The data stored in Oracle RPAS for reals is always double (8 bytes), and the same is available in Java. Unfortunately, Oracle RPAS uses a hard-coded value of 0.0000000001 (1e-09) as the epsilon for comparing doubles and for other calculations, even though the data is stored in the database as double.
In Java I have not been able to find a way to align my code with this hard-coded value. I need to compare the data set in a similar way. These are the mathematical operations I mostly need to perform:
min(double x, double y), max(double x, double y), round(x * y) / y, ceil, floor, etc.
I need help in understanding how an epsilon works in Java.
Java version used for development is 1.6
I am not an expert in Java, though I have some coding experience in it, and any starting point for tackling this problem would be helpful.
The whole idea of an epsilon should be based on your application logic. If you are going to deal with data that you expect to have values to the 3rd decimal place, then use an epsilon like 0.0001. Similarly, if you know your application is going to deal with numbers up to a billion (1,000,000,000), then you know a double can be accurate to at most the 5th-6th decimal place (roughly 15 significant figures minus 10 digits before the decimal point). Hence choosing 0.000001 as the epsilon is reasonable.
As far as I know, Java does not do any internal epsilon handling during double comparison. It needs to be done explicitly, just like when you are writing C/C++.
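To give a rough idea, explicit epsilon handling in Java could be sketched like this (the epsilon value and helper names are placeholders, not anything RPAS-specific):

public class EpsilonCompare {
    // Placeholder epsilon: substitute whatever value your application standardizes on
    static final double EPS = 1e-9;

    static boolean almostEqual(double a, double b) {
        return Math.abs(a - b) <= EPS;
    }

    static double epsMin(double a, double b) {
        if (almostEqual(a, b)) return a;  // within EPS: treat as equal
        return Math.min(a, b);
    }

    public static void main(String[] args) {
        System.out.println(almostEqual(0.1 + 0.2, 0.3)); // true: the difference is ~5.6e-17
        System.out.println(epsMin(1.0000000001, 1.0));   // 1.0000000001, treated as equal to 1.0
    }
}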
Of course, you may also consider BigDecimal as an alternative. However, you still need to define the precision you want to keep when doing BigDecimal arithmetic, with reasoning somewhat similar to choosing an epsilon for double.
In Java you use BigDecimal instead of Double and then you don't need to use any epsilon at all.
What algorithms does Java's BigInteger class employ for multiplication and division, and what are their respective time complexities?
What primitive type does Java's BigInteger class use - byte, short, int, etc. - and why?
How does Java's BigInteger class handle the fact that its primitive type is signed? If the answer is "it just does it and it's really messy", that's all I need/want to know. What I'm really getting at is: does it cheat in the same way some Python libraries cheat, in that they're not written in Python?
I looked at the source code to BigInteger here. Here's what I found.
BigInteger does not "cheat". In Java "cheating" is accomplished through the use of what are known as "native" functions. See java.lang.Math for a rather extensive list of these.
BigInteger uses int to represent its data.
private transient int[] words;
And yes, it is pretty messy. Lots of bit crunching and the like.
Oracle's java.math.BigInteger class has undergone some extensive improvements from Java 7 to Java 8. See for yourself by examining the source on grepcode.com. It doesn't cheat, it's all pure java.
Internally, it uses a sign-magnitude representation of the integer, using an array of int values to store the magnitude. Recall that a java int is a 32-bit value. All 32-bits are used without regard to the sign. This size is also convenient since the product of two ints fits into a java long.
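To illustrate the point about the product of two ints fitting into a java long, here is a sketch of the widening multiply at the heart of schoolbook multiplication (an illustration of the idea only, not BigInteger's actual code):

int a = 0xFFFFFFFF;                                    // largest unsigned 32-bit word
int b = 0xFFFFFFFF;
long product = (a & 0xFFFFFFFFL) * (b & 0xFFFFFFFFL); // treat both as unsigned; result fits in 64 bits
int low  = (int) product;                              // low word of the partial product
int high = (int) (product >>> 32);                     // carry into the next word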
Beginning in Java 8 the BigInteger class added some advanced algorithms such as Karatsuba and Toom-Cook multiplication to improve the performance for integers of thousands of bits.
It uses int.
Reason: It's the fastest for most platforms.
This is an interview question: how do you count the set bits in a float in Java? I assume I shouldn't need to know the bit representation of a float to answer this question. Can I just convert a float to a byte array somehow? I could use Java serialization, but that seems like overkill.
The Float class has an:
public static int floatToIntBits(float value)
method that returns a representation of the specified floating-point value according to the IEEE 754 floating-point "single format" bit layout.
So you can get the bits with this method, then count which ones are set.
http://download.oracle.com/javase/7/docs/api/
You can use:
Integer.bitCount(Float.floatToIntBits(value))
Having said that, it's a very bad interview question. It seems to rely more on knowing specifics of the Java API, which is not what you should be using as a basis for hiring. You want someone who understands the principles and knows how to look up the details when needed, not someone who just memorises APIs.
As well as the Java API methods, there are various bit hacks hiding here that you could use, although how well they translate to the Java world I don't know (or, for those that do, whether they're any more efficient than calling the APIs). See the 'Counting bits set' section.
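For example, one of the classic hacks (repeatedly clearing the lowest set bit) translates to Java directly. This is just a sketch; Integer.bitCount is likely at least as fast:

static int countSetBits(float value) {
    int bits = Float.floatToIntBits(value);
    int count = 0;
    while (bits != 0) {
        bits &= bits - 1; // clear the lowest set bit
        count++;
    }
    return count;
}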