I am trying to implement a Matrix.pow(int exp) method for exp >= 0.
My current implementation repeatedly calls a Matrix.mul(Matrix m) method, which works well for small exponents.
For large exponents, however, the result becomes skewed. The Matrix class uses doubles internally, and I suspect the repeated multiplications accumulate a loss of precision.
I was looking at http://en.wikipedia.org/wiki/Exponentiation_by_squaring, which will certainly help, but I think it will still be an issue for larger exponents.
Is there a better way to exponentiate a Matrix?
Consider using BigDecimal instead of double. If you are using a Java API, make sure the methods you call work with BigDecimal rather than Double. You could also take an open-source project and modify it, like the one here.
In most cases repeated squaring is the best solution to your problem. Try it out; the accuracy is far better than with repeated multiplication. If the accuracy is still not sufficient, there are ways to improve the numerical stability depending on the type of matrix. E.g. if you have a symmetric matrix, you can diagonalize it upfront and raise the eigenvalues to the given power; that way a singular matrix remains singular. There are other methods for other special types of matrices; I can elaborate on that if you tell us what sort of matrices you have.
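For reference, a minimal sketch of exponentiation by squaring, assuming a Matrix class with a mul method, a size() accessor, and a static identity(n) factory -- the latter two names are hypothetical, adapt them to your class:

    public Matrix pow(int exp) {
        if (exp < 0) throw new IllegalArgumentException("exp must be >= 0");
        Matrix result = Matrix.identity(size()); // hypothetical identity factory
        Matrix base = this;
        while (exp > 0) {
            if ((exp & 1) == 1) {
                result = result.mul(base); // fold in the current power of two
            }
            base = base.mul(base);         // square
            exp >>= 1;                     // move to the next bit of exp
        }
        return result;
    }

This needs only O(log exp) multiplications instead of exp - 1, which also means far fewer rounding steps.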
I was wondering how to replace common trigonometric values in an expression. To put this into more context, I am making a calculator that needs to be able to evaluate user inputs such as "sin(Math.PI)", or "sin(6 * math.PI/2)". The problem is that floating point values aren't accurate and when I input sin(Math.PI), the calculator ends up with:
1.2245457991473532E-16
But I want it to return 0. I know I could try replacing all occurrences of sin(Math.PI) and other common expressions with 0, 1, etc., except I would have to check all multiples of Math.PI/2. Can any of you give me some guidance on how to return the proper values to the user?
You're running into the problem that it's not possible to express a number like pi exactly in a fixed number of bits, so with the available machine precision the computation gives you a small but non-zero number. Math.PI is in any case only an approximation of pi, which is an irrational number. To clean up your answer for display purposes, one possibility is to use rounding. You could also try adding 1 and then subtracting 1: adding 1 rounds the tiny residue away, so the result comes back as exactly zero.
This question here may help you further:
Java Strange Behavior with Sin and ToRadians
Your problem is that 1.2245457991473532E-16 is, for many purposes, in fact zero. What about simply rounding the result yielded by sin? With enough rounding, you can achieve what you want and even recover important sin values such as 0.5 and -0.5 relatively easily.
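A minimal sketch of that rounding idea, assuming ten decimal places are enough for the calculator's display (the helper name is made up):

    public static double cleanSin(double x) {
        double raw = Math.sin(x);
        double scale = 1e10;                    // keep 10 decimal places
        return Math.round(raw * scale) / scale;
    }

With this, cleanSin(Math.PI) returns 0.0 and cleanSin(Math.PI / 6) returns 0.5.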
If you really want to replace those functions, as your title suggests, then you can't do that in Java. Your best bet would be to create an SPI specification for common functions that could either fall back to the standard Java implementation or use your own implementation in its place.
Users of your solution would then need to retrieve one of the implementations using dependency injection or explicit references to a factory method.
I need to evaluate a logarithm of any base, it does not matter, to some precision. Is there an algorithm for this? I program in Java, so I'm fine with Java code.
How to find a binary logarithm very fast? (O(1) at best) might be able to answer my question, but I don't understand it. Can it be clarified?
Use this identity:
log_b(n) = log_e(n) / log_e(b)
where log can be a logarithm function in any base, n is the number, and b is the base. For example, in Java this will find the base-2 logarithm of 256:
Math.log(256) / Math.log(2)
=> 8.0
Math.log() uses base e, by the way. And there's also Math.log10(), which uses base 10.
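Wrapped as a tiny helper (the method name is arbitrary):

    static double logBase(double base, double n) {
        return Math.log(n) / Math.log(base); // change-of-base identity
    }

logBase(2, 256) returns 8.0.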
I know this is extremely late, but this may prove useful to some, since the matter here is precision. One way of doing this is essentially to implement a root-finding algorithm that uses, from the ground up, whatever high-precision types you want, built out of simple + - * / operations.
I would recommend implementing Newton's method, since it requires relatively few iterations and converges very quickly. For this sort of application, specifically, I believe it's fair to say it will always provide the correct result provided good input validation is implemented.
Consider a simple constant a, where

a = log_b(x)

is what we want to solve for; a must therefore obey

f(a) = b^a - x = 0.

We can use the Newton method iteratively to find a within any specified tolerance, where each i-th iterate can be computed by

a_(i+1) = a_i - f(a_i) / f'(a_i)

and the denominator is

f'(a_i) = b^(a_i) * ln(b),

because that's the first derivative of the function, as necessary for the Newton method. Once this is solved for, a is the direct answer to the a = log_b(x) problem, obtainable by simple + - * / operations, so you're already good to go. "Wait, but there's a power there?" Yes. If you can rely on your power function as being accurate enough, then there are no issues with going ahead and using it there. Otherwise, you can break the power operation down further into a series of other + - * / operations: rewrite whatever decimal number is in the exponent as a ratio p/q of two integers, so that b^(p/q) = (b^p)^(1/q) becomes two integer power operations, the first of which can be computed easily with a series of multiplications. This process will eventually leave you with nth roots to solve for, which you can also find with the Newton method. If you do go down that road, for r = c^(1/n) you can use this Newton step:

r_(k+1) = ((n - 1) * r_k + c / r_k^(n - 1)) / n

which, as you can see, also has to be solved iteratively.
Phew, but yeah, that's it. This is how you can solve the problem while making sure you use high-precision types the whole way, with only + - * / operations. I did a quick implementation of this in Excel to solve log_2(3) and compared it with the solution given by the software's built-in function. You can keep refining a until you reach the tolerance you want by monitoring what the optimization function gives you. I used a = 2 as the initial guess, which should be fine for most cases.
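A minimal Java sketch of the iteration above, in plain doubles for illustration (a true arbitrary-precision version would swap in BigDecimal with its own pow; Math.log appears here only because the derivative of b^a is b^a * ln(b)):

    static double logNewton(double b, double x, double tol) {
        double a = 2.0;               // initial guess, as suggested above
        double lnB = Math.log(b);     // constant factor in f'(a) = b^a * ln(b)
        double step;
        do {
            double fa = Math.pow(b, a) - x;     // f(a) = b^a - x
            step = fa / (Math.pow(b, a) * lnB); // Newton step f(a) / f'(a)
            a -= step;
        } while (Math.abs(step) > tol);
        return a;
    }

logNewton(2, 3, 1e-12) converges to about 1.5849625007 in a handful of iterations.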
I am working on probabilistic models, and when doing inference on those models, the estimated probabilities can become very small. In order to avoid underflow, I am currently working in the log domain (I store the log of the probabilities). Multiplying probabilities is equivalent to an addition, and summing is done by using the formula:
log(exp(a) + exp(b)) = log(exp(a - m) + exp(b - m)) + m
where m = max(a, b).
I use some very large matrices, and I have to take the element-wise exponential of those matrices to compute matrix-vector multiplications. This step is quite expensive, and I was wondering if there exist other methods to deal with underflow, when working with probabilities.
Edit: for efficiency reasons, I am looking for a solution using primitive types and not objects storing arbitrary-precision representation of real numbers.
Edit 2: I am looking for a faster solution than the log domain trick, not a more accurate solution. I am happy with the accuracy I currently get, but I need a faster method. Particularly, summations happen during matrix-vector multiplications, and I would like to be able to use efficient BLAS methods.
Solution: after a discussion with Jonathan Dursi, I decided to factor every matrix and vector by its largest element, and to store that factor in the log domain. Multiplications are straightforward. Before an addition, I have to rescale one of the two operands by the ratio of their factors. I renormalize the factors every ten operations.
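A minimal sketch of that scheme for vectors (all names here are made up; the same idea extends to matrices): the stored doubles stay near 1, and the overall magnitude lives in a log-domain factor.

    static class ScaledVector {
        double[] v;       // scaled entries; true value of entry i is v[i] * exp(logScale)
        double logScale;  // log-domain scale factor
    }

    static void renormalize(ScaledVector s) {
        double max = 0.0;
        for (double x : s.v) max = Math.max(max, Math.abs(x));
        if (max == 0.0) return;                      // all-zero vector: nothing to do
        for (int i = 0; i < s.v.length; i++) s.v[i] /= max;
        s.logScale += Math.log(max);                 // move the magnitude into the factor
    }

    static ScaledVector add(ScaledVector a, ScaledVector b) {
        double m = Math.max(a.logScale, b.logScale); // rescale to the larger factor
        double fa = Math.exp(a.logScale - m);        // both ratios are <= 1 ...
        double fb = Math.exp(b.logScale - m);        // ... so no overflow here
        ScaledVector r = new ScaledVector();
        r.v = new double[a.v.length];
        r.logScale = m;
        for (int i = 0; i < r.v.length; i++) r.v[i] = a.v[i] * fa + b.v[i] * fb;
        return r;
    }

Because the entries themselves stay ordinary doubles, the inner loops can still be handed to BLAS-style routines; only the occasional renormalization touches the log domain.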
This issue has come up recently on the Computational Science Stack Exchange site as well; although there the immediate worry was overflow, the issues are more or less the same.
Transforming into log space is certainly one reasonable approach. Whatever space you're in, there are a couple of methods you can use to do a large number of sums more accurately. Compensated summation approaches, most famously Kahan summation, keep both a sum and what's effectively a "remainder"; this gives you some of the advantages of higher-precision arithmetic without all of the cost (and using only primitive types). The remainder term also gives you some indication of how well you're doing.
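A minimal sketch of Kahan summation, using only primitive doubles as the question requires:

    static double kahanSum(double[] values) {
        double sum = 0.0;
        double c = 0.0;             // running compensation for lost low-order bits
        for (double v : values) {
            double y = v - c;       // apply the correction to the incoming term
            double t = sum + y;     // low-order bits of y may be lost here...
            c = (t - sum) - y;      // ...but this algebraically recovers them
            sum = t;
        }
        return sum;
    }

The final value of c also gives you a rough idea of how much error a plain loop would have accumulated.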
In addition to improving the actual mechanics of your addition, changing the order in which you add your terms can make a big difference. Sorting your terms so that you sum from smallest to largest can help, since then you're less often adding terms of very different magnitudes (which can cause significant roundoff problems); in some cases, doing log2(N) levels of repeated pairwise sums can also be an improvement over a straight linear sum, depending on what your terms look like.
The usefulness of all these approaches depends a lot on the properties of your data. Arbitrary-precision math libraries, while enormously expensive in compute time (and possibly memory), have the advantage of being a fairly general solution.
I ran into a similar problem years ago. The solution was to develop an approximation of log(1+exp(-x)). The range of the approximation does not need to be all that large (x from 0 to 40 will more than suffice), and at least in my case the accuracy didn't need to be particularly high, either.
In your case, it looks like you need to compute log(1 + exp(-x1) + exp(-x2) + ...). Throw out the terms with large negative arguments. For example, suppose a, b, and c are three log probabilities, with 0 > a > b > c. You can ignore c if a - c > 38: it's not going to contribute to your joint log probability at all, at least not if you are working with doubles.
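Combining the max trick from the question with that cutoff gives a log-sum-exp along these lines (38 is the threshold from above, beyond which a term cannot affect a double):

    static double logSumExp(double[] logs) {
        double m = Double.NEGATIVE_INFINITY;
        for (double x : logs) m = Math.max(m, x);   // the dominant log probability
        double sum = 0.0;
        for (double x : logs) {
            if (m - x < 38) {                       // skip terms that cannot contribute
                sum += Math.exp(x - m);
            }
        }
        return m + Math.log(sum);
    }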
Option 1: Commons Math - The Apache Commons Mathematics Library
Commons Math is a library of lightweight, self-contained mathematics and statistics components addressing the most common problems not available in the Java programming language or Commons Lang.
Note: The API protects the constructors to force a factory pattern while naming the factory DfpField (rather than the somewhat more intuitive DfpFac or DfpFactory). So you have to use
new DfpField(numberOfDigits).newDfp(myNormalNumber)
to instantiate a Dfp, then you can call .multiply or whatever on this. I thought I'd mention this because it's a bit confusing.
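Put together, a short sketch (the digit count and values here are arbitrary):

    import org.apache.commons.math3.dfp.Dfp;
    import org.apache.commons.math3.dfp.DfpField;

    public class DfpDemo {
        public static void main(String[] args) {
            DfpField field = new DfpField(50);      // about 50 decimal digits
            Dfp a = field.newDfp("1.23456789012345678901234567890");
            Dfp b = field.newDfp(3);
            System.out.println(a.multiply(b));      // high-precision product
        }
    }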
Option 2: GNU Scientific Library or Boost C++ Libraries.
In these cases you should use JNI in order to call these native libraries.
Option 3: If you are free to use other programs and/or languages, you could consider using programs/languages for numerical computations such as Octave, Scilab, and similar.
Option 4: Java's BigDecimal.
Rather than storing values in logarithmic form, I think you'd probably be better off using the same concept as doubles, namely, floating-point representation. For example, you might store each value as two longs, one for sign-and-mantissa and one for the exponent. (Real floating-point has a carefully tuned design to support lots of edge cases and avoid wasting a single bit; but you probably don't need to worry so much about any of those, and can focus on designing it in a way that's simple to implement.)
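To make the idea concrete, here is a rough, incomplete sketch under those assumptions -- the class name, the 31-bit normalization bound, and the multiply-only API are all invented, and addition (which needs exponent alignment) is left out:

    static final class LongFloat {
        final long mantissa;   // signed mantissa
        final long exponent;   // power-of-two exponent

        LongFloat(long mantissa, long exponent) {
            this.mantissa = mantissa;
            this.exponent = exponent;
        }

        // Shift the mantissa down to at most 31 bits so products fit in a long.
        LongFloat normalized() {
            long m = mantissa, e = exponent;
            while (m >= (1L << 31) || m <= -(1L << 31)) {
                m >>= 1;
                e++;
            }
            return new LongFloat(m, e);
        }

        LongFloat multiply(LongFloat other) {
            LongFloat a = normalized(), b = other.normalized();
            return new LongFloat(a.mantissa * b.mantissa, a.exponent + b.exponent);
        }
    }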
This formula is simpler and works for the same reason:
c = a + log(1 + exp(b - a))
where c = log(exp(a) + exp(b)). It follows by factoring exp(a) out of the sum: log(exp(a) + exp(b)) = log(exp(a) * (1 + exp(b - a))) = a + log(1 + exp(b - a)). Taking a to be the larger of the two values keeps exp(b - a) <= 1, so nothing overflows.
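In Java, Math.log1p computes log(1 + z) accurately even for tiny z, so a direct sketch is:

    static double logAdd(double a, double b) {
        if (b > a) {                  // ensure a is the larger value
            double tmp = a; a = b; b = tmp;
        }
        return a + Math.log1p(Math.exp(b - a));
    }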
OK, so I'm trying to use the Apache Commons Math library to compute a double integral, but both integrals run from negative infinity (to around 1), and it's taking ages to compute. Are there other ways of doing such operations in Java? Or should it run "faster" (I mean, fast enough that I could actually see the result some day before I die) and I'm doing something wrong?
EDIT: OK, thanks for the answers. What I've been trying to compute is the Gaussian copula
C(u, v) = Phi_rho(Phi^-1(u), Phi^-1(v)),
i.e. a standard bivariate normal cumulative distribution function (Phi_rho) whose arguments are two inverse standard normal cumulative distribution functions (Phi^-1), and I need integrals to compute those (I know there's an Apache Commons Math function for the standard normal cumulative distribution, but I failed to find the inverse and bivariate versions).
EDIT 2: as my friend once said, "ahh yes, the beauty of Java: no matter what you want to do, someone has already done it". I found everything I needed at http://www.iro.umontreal.ca/~simardr/ssj/ , a very nice library for probability etc.
There are two problems with infinite integrals: convergence and value of convergence. That is, does the integral even converge, and if so, to what value? There are integrals which are guaranteed to converge but whose value cannot be determined exactly (try the integral from 1 to infinity of e^(-x^2)). If the value can't be determined exactly, then an exact answer is not possible mathematically, which leaves only approximation. Apache Commons uses several different approximation schemes, but all of them require finite bounds for correctness.
The best way to get an appropriate answer is to repeatedly evaluate finite integrals, with ever increasing bounds, and compare the results. In pseudo-code, it would look something like this:
double DELTA = 1e-6;        // your error threshold here
double STEP_SIZE = 10.0;
double lowerBound = -10;    // or whatever you want to start with --
                            // for (-infinity, 1), I'd start with something like -10
double upperBound = 1;
double oldValue;
double newValue = integrate(lowerBound, upperBound);
do {
    oldValue = newValue;
    lowerBound -= STEP_SIZE;                      // widen the interval
    newValue = integrate(lowerBound, upperBound); // perform your integration method here
} while (Math.abs(newValue - oldValue) > DELTA);
Eventually, if the integral converges, then you will get enough of the important stuff in that widening the bounds further will not produce meaningful information.
A word to the wise though: this kind of thing can be explosively bad if the integral doesn't converge. In that case, one of two situations can occur: Either your termination condition is never satisfied and you fall into an infinite loop, or the value of the integral oscillates indefinitely around a value, which may cause your termination condition to be incorrectly satisfied (giving incorrect results).
To avoid the first, the best way is to put in some maximum number of steps to take before returning--doing this should stop the potentially infinite loop that can result.
To avoid the second, hope it doesn't happen or prove that the integral must converge (three cheers for Calculus 2, anyone? ;-)).
To answer your question formally: no, there are no other such ways to perform your computation in Java. In fact, there are no guaranteed ways of doing it in any language, with any algorithm; the mathematics just don't work out the way we want them to. In practice, however, a lot (though by no means all!) of practical integrals do converge; it's been my experience that only about ~20 iterations will give an approximation of reasonable accuracy, and Apache should be fast enough to handle that without taking absurdly long.
Suppose you are integrating f(x) from -infinity to 1. Substitute x = 2 - 1/(1-t); then x runs from 1 (at t = 0) to -infinity (as t approaches 1), dx = -dt/(1-t)^2, and the integral becomes the finite integral of f(2 - 1/(1-t)) / (1-t)^2 for t from 0 to 1.
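To make the suggestion concrete, here is a hedged sketch using Commons Math's SimpsonIntegrator on the transformed interval; the integrand e^(-x^2) and the cutoff just short of the singular endpoint t = 1 are illustrative choices, not part of the original suggestion:

    import org.apache.commons.math3.analysis.UnivariateFunction;
    import org.apache.commons.math3.analysis.integration.SimpsonIntegrator;

    public class TransformedIntegral {
        public static void main(String[] args) {
            UnivariateFunction f = x -> Math.exp(-x * x);  // example integrand
            UnivariateFunction g = t -> {
                double u = 1 - t;
                return f.value(2 - 1 / u) / (u * u);       // f(x(t)) * |dx/dt|
            };
            SimpsonIntegrator integrator = new SimpsonIntegrator();
            // integrate over [0, 1) -- stop just short of the singular endpoint
            System.out.println(integrator.integrate(1_000_000, g, 0, 1 - 1e-9));
        }
    }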
The result of a numerical integration where one of the bounds is infinite has a good chance of being infinite as well, and it will take infinite time to prove it ;)
So you either find an equivalent formula (using real math) that can be computed, or you replace the lower bound with a reasonably large negative value and check whether that gives a good estimate of the integral.
If Apache Commons Math could do numerical integration for integrals with infinite bounds in finite time, they wouldn't give it away for free ;-)
Maybe it's your algorithm.
If you're doing something naive like Simpson's rule it's likely to take a very long time.
If you're using Gaussian or log quadrature you might have better luck.
What's the function you're trying to integrate, and what's the algorithm you're using?
I'm facing a scenario in which I'll have to compute some huge math expressions. The expressions themselves are simple, i.e. they follow just the conventional BODMAS fundamentals, but the numbers that occur as operands are very large, to the tune of 1000-digit numbers. I do know of the BigInteger class in the java.math package, but am looking for a different way so that the computation can also occur speedily. I'm a guy still finding his feet in Java, so any pointers or advice in this regard would be of great help.
Regards
p1nG
Try it with BigInteger, profile the results with some test calculations, and see if it will work for you, before you look for something more optimized.
Since you say you are new to Java, I would have to suggest you use BigInteger and BigDecimal unless you want to write your own arbitrarily large number handlers. BigInteger and BigDecimal are fast enough for most uses of them. The only time I've had speed issues with them is when dealing with numbers on the order of a million digits.
That is unless you have a specific need for not using BigInteger.
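For a sense of scale, a small sketch with ~1000-digit operands (3322 bits is about 1000 decimal digits; the seed and timing are illustrative only):

    import java.math.BigInteger;
    import java.util.Random;

    public class BigIntegerDemo {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            BigInteger a = new BigInteger(3322, rnd);  // random ~1000-digit number
            BigInteger b = new BigInteger(3322, rnd);

            long start = System.nanoTime();
            BigInteger product = a.multiply(b);        // ~2000-digit result
            long micros = (System.nanoTime() - start) / 1_000;

            System.out.println(product.toString().length() + " digits in " + micros + " us");
        }
    }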
First write the program correctly (using BigFoo) and then determine if optimization is appropriate.
BigInteger/BigDecimal will be about the most optimized implementation of generalized math that you can possibly get.
If you want it faster, you MIGHT be able to write assembly to use bit-shifting patterns for specialized math (for example, dividing by 2 reduces to a simple right shift), but if you are doing more than a few different types of equations, that will be very impractical.
BigInteger is only slow in comparison to int, but it is probably the best you are going to get for operations on numbers of more than 64 bits or so without switching to another language--and even then you probably won't see much improvement unless that other language is assembly...
I am surprised that expressions with 1000-digit numbers have a practical application (except perhaps in encryption).
Could you explain what you are doing and what your speed requirements are?