I'm trying to find a way to compute the roots of a polynomial with complex coefficients in Java (i.e., the equivalent of what roots() does so easily in MATLAB).
I'm ready to recode a root-finding algorithm that builds the companion matrix and then uses an eigenvalue decomposition to find the roots, but for this I would need a library that handles complex-valued matrix operations.
I browsed for a while and nothing convincing seems to be available out there, which I find rather odd. So I'd like to ask you:
Do you know a (stable) Java library that performs root finding on polynomials with COMPLEX coefficients?
Do you know a (stable) Java library that performs EVD, SVD, inversion, etc. on COMPLEX-valued matrices?
Note: I already looked at JAMA (doesn't handle complex), Michael Thomas Flanagan's Java Scientific Library (not available anymore), Colt (doesn't seem to handle complex), Efficient Java Matrix Library (no complex either), DDogleg Numerics (does not handle polynomials with complex coefficients), JScience (not clear if EVD is available) and commons-math from Apache (not clear if they allow complex matrices, and if yes, whether EVD is available).
The Durand-Kerner method also works for complex coefficients and does not rely on matrix computations.
It's quite simple to implement; you could google an implementation (Stack Overflow won't let me link the one I found) or write one of your own. You could use the JScience library for the complex data types, not for the algorithm itself.
EDIT: Didn't see that you need EVD too; never mind my mention of JScience as an option for the complex matrix math.
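For illustration, here is a minimal Durand-Kerner iteration in Java. It uses Apache Commons Math's Complex type purely for the complex arithmetic (any complex number class would do), the usual starting values (powers of 0.4 + 0.9i), and no special handling of multiple roots; treat it as a sketch, not a robust solver.

    import org.apache.commons.math3.complex.Complex;

    /**
     * Minimal Durand-Kerner (Weierstrass) iteration.
     * coeffs[0] is the leading coefficient, coeffs[n] the constant term.
     */
    public class DurandKerner {

        static Complex eval(Complex[] coeffs, Complex z) {
            Complex result = Complex.ZERO;
            for (Complex c : coeffs) {           // Horner evaluation
                result = result.multiply(z).add(c);
            }
            return result;
        }

        static Complex[] roots(Complex[] coeffs, int maxIter, double tol) {
            int n = coeffs.length - 1;
            // Normalize to a monic polynomial.
            Complex[] monic = new Complex[coeffs.length];
            for (int i = 0; i < coeffs.length; i++) {
                monic[i] = coeffs[i].divide(coeffs[0]);
            }
            // Standard starting values: powers of (0.4 + 0.9i).
            Complex seed = new Complex(0.4, 0.9);
            Complex[] z = new Complex[n];
            Complex p = Complex.ONE;
            for (int i = 0; i < n; i++) {
                z[i] = p;
                p = p.multiply(seed);
            }
            for (int iter = 0; iter < maxIter; iter++) {
                double maxChange = 0.0;
                for (int i = 0; i < n; i++) {
                    Complex denom = Complex.ONE;
                    for (int j = 0; j < n; j++) {
                        if (j != i) denom = denom.multiply(z[i].subtract(z[j]));
                    }
                    Complex delta = eval(monic, z[i]).divide(denom);
                    z[i] = z[i].subtract(delta);
                    maxChange = Math.max(maxChange, delta.abs());
                }
                if (maxChange < tol) break;
            }
            return z;
        }

        public static void main(String[] args) {
            // p(z) = z^2 + 1, roots +i and -i
            Complex[] coeffs = { Complex.ONE, Complex.ZERO, Complex.ONE };
            for (Complex r : roots(coeffs, 200, 1e-12)) {
                System.out.println(r);
            }
        }
    }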
If one wants to keep it real, use the Bairstow method. If the polynomial has odd degree, first use Newton's method to find a real root and reduce the polynomial to even degree. This avoids an odd singularity of the Bairstow method where it converges towards a quadratic polynomial that has infinity as one root. Information of good quality can be found at the usual places, some of it written or edited by yours truly.
Determine an inner root radius r and use z^2 - 2*r*cos(phi)*z + r^2 with a random angle phi as the initial factor for Bairstow's method. It produces in each step a quadratic factor, always with real coefficients, containing either a pair of real roots or a conjugate pair of complex roots.
Check in each step for the speed of convergence and restart with a different initial point if necessary. Find new roots after deflation, and polish the roots or quadratic factors by running the method on the original polynomial with the factors as starting points.
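To make the mechanics concrete, here is a bare-bones sketch of the core Bairstow step (synthetic division plus a 2x2 Newton update), without the convergence monitoring, restarts, deflation and polishing described above. It uses the divisor convention x^2 - r*x - s, so the suggested initial factor with root radius rho corresponds to r = 2*rho*cos(phi), s = -rho^2.

    /**
     * Toy Bairstow iteration: extracts one real quadratic factor x^2 - r*x - s
     * from a real-coefficient polynomial. a[i] is the coefficient of x^i,
     * a[n] != 0, degree n >= 2. Sketch only, no safeguards.
     */
    public class Bairstow {

        static double[] quadraticFactor(double[] a, int n, double r, double s) {
            double[] b = new double[n + 3];   // b[n+1], b[n+2] stay 0
            double[] c = new double[n + 3];   // c[n+1], c[n+2] stay 0
            for (int iter = 0; iter < 200; iter++) {
                for (int i = n; i >= 0; i--) {       // synthetic division by x^2 - r*x - s
                    b[i] = a[i] + r * b[i + 1] + s * b[i + 2];
                }
                for (int i = n; i >= 1; i--) {       // partial derivatives of the remainder
                    c[i] = b[i] + r * c[i + 1] + s * c[i + 2];
                }
                double det = c[2] * c[2] - c[3] * c[1];
                if (det == 0.0) { r += 0.1; s += 0.1; continue; }   // crude restart
                double dr = (-b[1] * c[2] + b[0] * c[3]) / det;     // Newton step for (r, s)
                double ds = (-b[0] * c[2] + b[1] * c[1]) / det;
                r += dr;
                s += ds;
                if (Math.abs(dr) + Math.abs(ds) < 1e-14) break;
            }
            return new double[] { r, s };
        }

        public static void main(String[] args) {
            // p(x) = x^4 - 1, whose real quadratic factors are x^2 - 1 and x^2 + 1
            double[] a = { -1, 0, 0, 0, 1 };
            double[] rs = quadraticFactor(a, 4, 0.5, 0.5);
            double r = rs[0], s = rs[1];
            double disc = r * r + 4 * s;                 // discriminant of x^2 - r*x - s
            if (disc >= 0) {
                System.out.println("roots: " + (r + Math.sqrt(disc)) / 2
                        + ", " + (r - Math.sqrt(disc)) / 2);
            } else {
                System.out.println("roots: " + r / 2 + " +- " + Math.sqrt(-disc) / 2 + "i");
            }
        }
    }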
I'm trying to create an application in Java which does several matrix operations, like calculating inverses and determinants.
Now I would also like to include the option for the application to calculate the eigenvalues and eigenvectors of matrices.
Since the only 'solid' way to calculate eigenvalues, to my knowledge, is by using the characteristic equation:
det(A - λI) = 0
where A is an n×n matrix and λ a scalar (the eigenvalue).
To my knowledge, there is no simple way (maybe no way at all) to do symbolic algebra in Java. Also, I would like to program this myself, so I'd rather not use external packages like Jama or others.
Can someone explain how I can program this equation in Java, or maybe tell me another way of doing it?
One way you could do it is to have a look at Jama, see how it is calculated in there, and interpret that. And don't just copy and paste :P we all know how tempting that can be.
Finding eigenvalues and eigenvectors is a bit tricky, and there are many algorithms with different trade-offs. I'll suggest a few that are quite good and that are not that difficult to implement.
First, compute the characteristic polynomial and then find its roots with a root-finding algorithm; those roots are the eigenvalues. Then, for each eigenvalue λ, you can solve the linear system (A - λI)v = 0 to find the corresponding eigenvectors.
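As a minimal illustration of that route (a toy sketch, not production code), here is the 2x2 case, where the characteristic polynomial is just a quadratic:

    /**
     * "Characteristic polynomial" route for a 2x2 matrix:
     * det(A - lambda*I) = lambda^2 - trace(A)*lambda + det(A) = 0.
     * For larger matrices this approach becomes numerically fragile, which is
     * why libraries use iterative methods (e.g. QR iteration) instead.
     */
    public class Eigen2x2 {
        public static void main(String[] args) {
            double[][] a = { { 2, 1 }, { 1, 2 } };   // symmetric, eigenvalues 1 and 3

            double trace = a[0][0] + a[1][1];
            double det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
            double disc = trace * trace - 4 * det;

            if (disc >= 0) {
                double l1 = (trace + Math.sqrt(disc)) / 2;
                double l2 = (trace - Math.sqrt(disc)) / 2;
                System.out.println("eigenvalues: " + l1 + ", " + l2);
                // Eigenvector for l1: any non-zero solution of (A - l1*I)v = 0,
                // e.g. v = (a01, l1 - a00) when a01 != 0.
                System.out.println("eigenvector for " + l1 + ": ("
                        + a[0][1] + ", " + (l1 - a[0][0]) + ")");
            } else {
                System.out.println("complex pair: " + (trace / 2) + " +- "
                        + (Math.sqrt(-disc) / 2) + "i");
            }
        }
    }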
Let's say I want to build a function that would properly schedule three bus drivers to drive in a week with the following constraints:
Each driver must not drive more than five times per week
There must be two drivers driving everyday
They will rest one day each week (their rest days must not clash with the other drivers' rest days)
What kind of algorithm would be used to solve a problem like this?
I looked through several sites and I found these:
1) Backtracking algorithm (brute force)
2) Genetic algorithm
3) Constraint programming
Frankly, these are all a bit of a "culture shock" for me, as I have never learnt any kind of linear programming in the past. There are a few things I want to know:
1) Which algorithm will best suit the case scenario above?
2) What would be the simplest algorithm to solve this problem?
3) Please suggest any other algorithms I can look into to solve the above problem.
1) I agree brute force is bad.
2) Your problem is an integer problem. Such problems can still be solved with linear programming techniques.
3) You can distinguish two different approaches: heuristics and exact approaches.
Heuristics provide good solutions in reasonable computation time. They are used when there are strict requirements on the computation time or when the problem is too hard to solve to optimality. Genetic algorithms are one such heuristic.
As your problem is comparatively simple, you would probably go with an exact approach.
4) The standard way to solve this exactly is to embed a linear program in a branch & bound search tree. There is lots of literature on it. The procedure can be outlined as follows (a toy sketch in code follows after the outline):
Solve the Linear Program with the Simplex-Algorithm
Find a fractional variable for branching, e.g. x = 1.5
Create two new nodes and add the constraints x<=1 and x>=2 respectively
Go into one node (selected by some strategy)
Go to point 1
Additionally, at every node in the tree, after step 1, the algorithm checks whether the node can be pruned, i.e. whether to stop searching 'deeper' from this node, because
a) the problem has become infeasible,
b) a better solution already exists,
c) an integer solution is found. The objective value of this solution is then used for check b).
The procedure finishes when all nodes are pruned.
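To see the pattern in code, here is a toy branch & bound on a 0/1 knapsack rather than on the scheduling problem, because its relaxation (the fractional knapsack) can be solved greedily in a few lines; in the scheduling case that role would be played by a simplex solver. The data and problem are purely illustrative.

    import java.util.Arrays;
    import java.util.Comparator;

    /**
     * Branch & bound illustrated on a 0/1 knapsack (maximization).
     * The "relaxation" at each node is the fractional knapsack, solved greedily.
     */
    public class KnapsackBB {
        static double[] value = { 60, 100, 120 };
        static double[] weight = { 10, 20, 30 };
        static double capacity = 50;
        static Integer[] order;          // item indices sorted by value/weight ratio
        static double best = 0;

        // Upper bound: take remaining items fractionally in ratio order.
        static double bound(int depth, double curValue, double curWeight) {
            double b = curValue, w = curWeight;
            for (int k = depth; k < order.length; k++) {
                int i = order[k];
                double take = Math.min(weight[i], capacity - w);
                if (take <= 0) break;
                b += value[i] * take / weight[i];
                w += take;
            }
            return b;
        }

        // Branch on the next item: include it (x = 1) or exclude it (x = 0).
        static void search(int depth, double curValue, double curWeight) {
            if (curWeight > capacity) return;                       // prune: infeasible (a)
            best = Math.max(best, curValue);                        // integer solution (c)
            if (depth == order.length) return;
            if (bound(depth, curValue, curWeight) <= best) return;  // prune: dominated (b)
            int i = order[depth];
            search(depth + 1, curValue + value[i], curWeight + weight[i]);
            search(depth + 1, curValue, curWeight);
        }

        public static void main(String[] args) {
            order = new Integer[value.length];
            for (int i = 0; i < order.length; i++) order[i] = i;
            Arrays.sort(order, Comparator.comparingDouble((Integer i) -> -value[i] / weight[i]));
            search(0, 0, 0);
            System.out.println("optimal value: " + best);           // 220 for this instance
        }
    }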
Luckily, as Nicolas stated, there are free implementations that do just this. All you have to do is create your model: code its objective and constraints in some tool and let the solver do the work.
First of all, this is a discrete optimization problem, so plain linear programming is probably not a good idea (since it is meant for continuous optimization). You can still solve it via linear programming (it becomes an integer or mixed-integer program), but that is exponentially hard in the worst case (if your input size is small, it is fine).
Now back to the comparison:
Brute force : worst.
Genetic: cannot guarantee optimality; the algorithm may not manage to solve the problem at all.
Constraint programming: definitely the best in this case (and in many discrete optimization problems). There is a very efficient implementation of it in the IBM ILOG CPLEX solver (it is not free, though it is free for academia and for testing).
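Just to make the constraints of this tiny instance concrete (plain enumeration, not a recommendation against the solvers above), one reading of the problem is: on each day choose the single driver who does not drive, so exactly two drive; "at most five driving days" is then "off on at least two days"; and since only one driver is off per day, picking any off day as a rest day means rest days can never clash. A CP or MIP solver would let you state the same constraints declaratively and search far more cleverly.

    /**
     * Tiny enumeration of the bus-driver instance, just to make the
     * constraints explicit. off[d] = the driver who does not drive on day d.
     */
    public class DriverSchedule {
        public static void main(String[] args) {
            int days = 7, drivers = 3;
            int[] off = new int[days];
            for (int code = 0; code < Math.pow(drivers, days); code++) {
                int c = code;
                int[] offCount = new int[drivers];
                for (int d = 0; d < days; d++) {
                    off[d] = c % drivers;
                    offCount[off[d]]++;
                    c /= drivers;
                }
                boolean ok = true;
                for (int k = 0; k < drivers; k++) {
                    if (days - offCount[k] > 5) ok = false;   // driver k would drive > 5 days
                }
                if (ok) {
                    System.out.println(java.util.Arrays.toString(off));   // one feasible plan
                    return;
                }
            }
        }
    }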
I have some function (for example, double function(double value)) and some range (for example, from A to B). I need to calculate the maximum value of the function in this range. Are there existing libraries for this? Please give me advice.
If the function needs to handle arbitrary floating-point inputs, you're going to have to use something like golden section search. Note that for this specific method there are significant limitations regarding the functions that can be handled (specifically, the function must be unimodal on the interval). There are some adjustments you can make to the algorithm which extend it to a broader class of continuous functions.
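For reference, a minimal golden-section search for a maximum might look like this (the function in main is a placeholder; the method assumes unimodality on [a, b], and re-evaluates both interior points each round for simplicity):

    import java.util.function.DoubleUnaryOperator;

    /** Golden-section search for the maximum of a unimodal function on [a, b]. */
    public class GoldenSection {
        static final double INV_PHI = (Math.sqrt(5) - 1) / 2;   // ~0.618

        static double maximize(DoubleUnaryOperator f, double a, double b, double tol) {
            double c = b - INV_PHI * (b - a);
            double d = a + INV_PHI * (b - a);
            while (b - a > tol) {
                if (f.applyAsDouble(c) > f.applyAsDouble(d)) {
                    b = d;               // maximum lies in [a, d]
                } else {
                    a = c;               // maximum lies in [c, b]
                }
                c = b - INV_PHI * (b - a);
                d = a + INV_PHI * (b - a);
            }
            return (a + b) / 2;          // location of the maximum
        }

        public static void main(String[] args) {
            // f(x) = -(x - 2)^2 + 3, maximum at x = 2
            double x = maximize(v -> -(v - 2) * (v - 2) + 3, 0, 5, 1e-9);
            System.out.println("argmax ~ " + x);
        }
    }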
Is this a continuous function, or a set of discrete values? If discrete values, then you can either iterate over all values, and set max/min flags as 808sound suggests, or you can load all values into an array.
If it's a continuous function, then you can either populate an array with the function's value at discrete inputs and find the max as above (see the sketch below), or, if it's differentiable, use basic calculus to find the points at which df(x)/dx is 0. The latter case is a little more abstract, and probably more complicated than you want, though.
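For the sampling route, a sketch could be as simple as this (the function, range and sample count are placeholders):

    import java.util.function.DoubleUnaryOperator;

    // Crude but general: sample the function on a grid and keep the best point.
    public class GridMax {
        public static void main(String[] args) {
            DoubleUnaryOperator f = x -> Math.sin(x) / (1 + x * x);
            double a = 0, b = 10;
            int samples = 100_000;

            double bestX = a, bestY = f.applyAsDouble(a);
            for (int i = 1; i <= samples; i++) {
                double x = a + (b - a) * i / samples;
                double y = f.applyAsDouble(x);
                if (y > bestY) { bestY = y; bestX = x; }
            }
            System.out.println("max ~ " + bestY + " at x ~ " + bestX);
        }
    }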
A quick google search led me to this:
http://code.google.com/p/javacalculus/
But I've never used it myself, so I don't know if that implements the required functionality. It does differential equations, though, so I assume they'd have "baby stuff" like basic differentiation.
I do not know if there are any libraries in Java for your problem.
But I know you can easily do that with MATLAB (or Octave, the open-source equivalent).
If you do not have any indication of what the function's inner workings are (i.e. the function is a black box that accepts an input and produces an output), there is no "easy" way to find the global maximum.
There are an infinite number of points to choose for your input (technically), so "iterating over all possible inputs" is not feasible mathematically.
There are various algorithms that will give you an estimated maximum value of a function like this:
The hill climbing algorithm and the firefly algorithm are two, but there are many more. This is a fairly well documented/studied computer science problem and there is a lot of material online for you to look at. I suggest starting with the hill climbing algorithm, and maybe expanding out to other global optimization algorithms.
Note: these algorithms do not guarantee that the result is the global maximum; they only provide an estimate of its value.
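For example, a rough random-restart hill climber for a one-dimensional black-box function might look like this (the function and bounds are placeholders; the result is only an estimate of the global maximum):

    import java.util.Random;
    import java.util.function.DoubleUnaryOperator;

    /** Random-restart hill climbing: each restart climbs to a local maximum. */
    public class HillClimb {
        public static void main(String[] args) {
            DoubleUnaryOperator f = x -> Math.sin(3 * x) + Math.sin(x);   // placeholder
            double lo = 0, hi = 10;
            Random rnd = new Random(42);

            double bestX = lo, bestY = Double.NEGATIVE_INFINITY;
            for (int restart = 0; restart < 50; restart++) {
                double x = lo + rnd.nextDouble() * (hi - lo);   // random start
                double step = (hi - lo) / 10;
                while (step > 1e-9) {
                    double left = Math.max(lo, x - step);
                    double right = Math.min(hi, x + step);
                    // Move to a better neighbour; shrink the step when stuck.
                    if (f.applyAsDouble(left) > f.applyAsDouble(x)) x = left;
                    else if (f.applyAsDouble(right) > f.applyAsDouble(x)) x = right;
                    else step /= 2;
                }
                if (f.applyAsDouble(x) > bestY) { bestY = f.applyAsDouble(x); bestX = x; }
            }
            System.out.println("estimated max ~ " + bestY + " at x ~ " + bestX);
        }
    }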
I'm writing a biological evolution simulator. Currently, all of my code is written in Python. For the most part, this is great and everything works sufficiently well. However, there are two steps in the process which take a long time and which I'd like to rewrite in Scala.
The first problem area is sequence evolution. Imagine you're given a phylogenetic tree which relates a large set of proteins. The length of each branch represents the evolutionary distance between the parent and child. The root of the tree is seeded with a single sequence, and then an evolutionary model (e.g. http://en.wikipedia.org/wiki/Models_of_DNA_evolution) is used to evolve the sequence along the tree structure, taking into account the branch lengths. PyCogent takes a long time to perform this step, and I believe that a reasonable Java/Scala implementation would be significantly faster. Do you know of any libraries that implement this type of functionality? I want to write the application in Scala, so, due to interoperability, any Java library will suffice.
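(For context, the per-branch step I mean is essentially the following, here sketched in Java under the simplest Jukes-Cantor model; this is not how PyCogent implements it, and real simulators add rate variation, richer models like GTR, indels, etc.)

    import java.util.Random;

    /**
     * Evolve a nucleotide sequence along one branch under Jukes-Cantor (JC69).
     * branchLength is in expected substitutions per site. Applying this
     * recursively from the root down every branch gives the leaf sequences.
     */
    public class Jc69 {
        static final char[] BASES = { 'A', 'C', 'G', 'T' };

        static char[] evolve(char[] parent, double branchLength, Random rnd) {
            // Probability that a site ends up as each specific *different* base.
            double pChange = 0.25 * (1.0 - Math.exp(-4.0 * branchLength / 3.0));
            char[] child = parent.clone();
            for (int i = 0; i < child.length; i++) {
                double u = rnd.nextDouble();
                for (char b : BASES) {
                    if (b == parent[i]) continue;
                    if (u < pChange) { child[i] = b; break; }
                    u -= pChange;
                }
            }
            return child;
        }

        public static void main(String[] args) {
            Random rnd = new Random(1);
            char[] root = "ACGTACGTACGTACGT".toCharArray();
            char[] leaf = evolve(root, 0.3, rnd);
            System.out.println(new String(root));
            System.out.println(new String(leaf));
        }
    }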
The second problem area is the comparison of the generated sequences. The problem is: given a set of sequences for the proteins in a number of different extant species, attempt to use those sequences to reconstruct the phylogenetic tree which relates the species. This problem is inherently computationally demanding, because one must basically do a pairwise comparison between all sequences in the extant species. Here again, however, I feel that a Java/Scala implementation would perform significantly faster than a Python one, if for nothing else than the unfortunately slow speed of looping in Python. This part I could write from scratch more easily than the sequence evolution part, but I'd be willing to use a library for it as well if a good one exists.
Thanks,
Rob
For the second problem, why not make use of an existing program for comparing sequences and inferring phylogenetic trees, like RAxML or MrBayes, and call that? Maximum likelihood and Bayesian inference are very sophisticated models for these problems, and using them seems a far better idea than implementing it yourself. Something like a maximum parsimony or a neighbour-joining tree, which probably could be written from scratch for such a project, is not sufficient for evolutionary analysis, unless you just want a very quick and dirty topology (and trees inferred via MP or NJ are really often quite wrong), in which case you can probably use something like this.
I have a bunch of sets of data (between 50 to 500 points, each of which can take a positive integral value) and need to determine which distribution best describes them. I have done this manually for several of them, but need to automate this going forward.
Some of the sets are completely modal (every datum has the value 15), some are strongly modal or bimodal, some are bell curves (often skewed and with differing degrees of kurtosis/pointiness), some are roughly flat, and there are any number of other possible distributions (Poisson, power-law, etc.). I need a way to determine which distribution best describes the data and (ideally) also provides me with a fitness metric so that I know how confident I am in the analysis.
Existing open-source libraries would be ideal, followed by well documented algorithms that I can implement myself.
Looking for a distribution that fits is unlikely to give you good results in the absence of some a priori knowledge. You may find a distribution that coincidentally is a good fit but is unlikely to be the underlying distribution.
Do you have any metadata available that would hint at what the data means? E.g., "this is open-ended data sampled from a natural population, so it's some sort of normal distribution", vs. "this data is inherently bounded at 0 and discrete, so check for the best-fitting Poisson".
I don't know of any distribution solvers for Java off the top of my head, and I don't know of any that will guess which distribution to use. You could examine some statistical properties (skew/etc.) and make some guesses here--but you're more likely to end up with an accidentally good fit which does not adequately represent the underlying distribution. Real data is noisy and there are just too many degrees of freedom if you don't even know what distribution it is.
This may be above and beyond what you want to do, but it seems the most complete approach (and it allows access to the wealth of statistical knowledge available inside R):
use JRI to communicate with the R statistical language
use R, internally, as indicated in this thread
Look at Apache commons-math.
What you're looking for comes under the general heading of "goodness of fit." You could search on "goodness of fit test."
Donald Knuth describes a couple of popular goodness-of-fit tests in Seminumerical Algorithms: the chi-squared test and the Kolmogorov-Smirnov test. But you've got to have some idea first of what distribution you want to test. For example, if you have bell-curve data, you might try normal or Cauchy distributions.
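The Kolmogorov-Smirnov statistic itself is easy to compute by hand if you just want to rank candidate distributions; turning it into a p-value requires the K-S distribution (or a library). A sketch, with a uniform CDF as a stand-in for whatever candidate you are testing:

    import java.util.Arrays;
    import java.util.function.DoubleUnaryOperator;

    /**
     * One-sample Kolmogorov-Smirnov statistic D = sup |F_n(x) - F(x)|:
     * the largest gap between the empirical CDF of the sample and the CDF of
     * the candidate distribution. Smaller D means a better fit.
     */
    public class KsStatistic {
        static double ksStatistic(double[] sample, DoubleUnaryOperator cdf) {
            double[] x = sample.clone();
            Arrays.sort(x);
            int n = x.length;
            double d = 0;
            for (int i = 0; i < n; i++) {
                double f = cdf.applyAsDouble(x[i]);
                d = Math.max(d, Math.max((i + 1.0) / n - f, f - (double) i / n));
            }
            return d;
        }

        public static void main(String[] args) {
            double[] sample = { 0.1, 0.25, 0.3, 0.55, 0.7, 0.9 };
            // Candidate: uniform distribution on [0, 1], CDF(x) = x.
            System.out.println(ksStatistic(sample, v -> Math.min(1, Math.max(0, v))));
        }
    }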
If all you really need the distribution for is to model the data you have sampled, you can make your own distribution based on the data you have:
1. Create a histogram of your sample: One method for selecting the bin size is here. There are other methods for selecting bin size, which you may prefer.
2. Derive the sample CDF: Think of the histogram as your PDF, and just compute the integral. It's probably best to scale the height of the bins so that the CDF has the right characteristics ... namely that the value of the CDF at +Infinity is 1.0.
To use the distribution for modeling purposes:
3. Draw X from your distribution: Make a draw Y from U(0,1). Use a reverse lookup on your CDF of the value Y to determine the X such that CDF(X) = Y. Since the CDF is invertible, X is unique.
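For integer-valued samples (as in the question) the three steps above collapse to a few lines; the sample data below is a placeholder:

    import java.util.Arrays;
    import java.util.Random;

    /**
     * Steps 1-3: build a histogram, turn it into a CDF, then draw new values
     * by inverse-CDF lookup on a uniform random number.
     */
    public class EmpiricalDistribution {
        public static void main(String[] args) {
            int[] sample = { 3, 5, 5, 6, 6, 6, 7, 7, 9, 15 };   // placeholder data

            // 1. Histogram (bin size 1, since the data are integers).
            int max = Arrays.stream(sample).max().getAsInt();
            double[] pdf = new double[max + 1];
            for (int v : sample) pdf[v] += 1.0 / sample.length;

            // 2. CDF: running sum of the histogram, so that CDF(max) = 1.
            double[] cdf = new double[max + 1];
            double running = 0;
            for (int v = 0; v <= max; v++) { running += pdf[v]; cdf[v] = running; }

            // 3. Draw X: Y ~ U(0,1), find the smallest X with CDF(X) >= Y.
            Random rnd = new Random();
            for (int k = 0; k < 5; k++) {
                double y = rnd.nextDouble();
                int x = 0;
                while (x < max && cdf[x] < y) x++;
                System.out.println("draw: " + x);
            }
        }
    }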
I've heard of a package called Eureqa that might fill the bill nicely. I've only downloaded it; I haven't tried it myself yet.
You can proceed with a three steps approach, using the SSJ library:
Fit each distribution separately using maximum likelihood estimation (MLE). Using SSJ, this can be done with the static method getInstanceFromMLE(double[] x, int n) available on each distribution.
For each distribution you have obtained, compute its goodness of fit against the real data, for example using Kolmogorov-Smirnov: static void kolmogorovSmirnov(double[] data, ContinuousDistribution dist, double[] sval, double[] pval). Note that you don't need to sort the data before calling this function.
Pick the distribution with the highest p-value as your best-fit distribution.
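A sketch of how those three steps might be wired together. The package names and the sval/pval array layout are assumptions based on the SSJ documentation and may differ between SSJ versions, so treat this as an outline rather than copy-paste code:

    // Sketch only: package names and array layout may vary with the SSJ version
    // (older releases use umontreal.iro.lecuyer.* instead of umontreal.ssj.*).
    import umontreal.ssj.gof.GofStat;
    import umontreal.ssj.probdist.ContinuousDistribution;
    import umontreal.ssj.probdist.ExponentialDist;
    import umontreal.ssj.probdist.GammaDist;
    import umontreal.ssj.probdist.NormalDist;

    public class BestFit {
        public static void main(String[] args) {
            double[] data = { 1.2, 0.7, 2.4, 1.9, 0.3, 1.1 };   // placeholder observations
            int n = data.length;

            // Step 1: fit each candidate distribution by maximum likelihood.
            ContinuousDistribution[] candidates = {
                NormalDist.getInstanceFromMLE(data, n),
                ExponentialDist.getInstanceFromMLE(data, n),
                GammaDist.getInstanceFromMLE(data, n)
            };

            // Steps 2 and 3: Kolmogorov-Smirnov p-value of each fit, keep the best.
            ContinuousDistribution best = null;
            double bestP = -1.0;
            for (ContinuousDistribution dist : candidates) {
                double[] sval = new double[3];   // KS statistics (assumed layout: D+, D-, D)
                double[] pval = new double[3];   // corresponding p-values
                GofStat.kolmogorovSmirnov(data, dist, sval, pval);
                double p = pval[2];              // p-value of the two-sided statistic D
                if (p > bestP) { bestP = p; best = dist; }
            }
            System.out.println("Best fit: " + best + " (p = " + bestP + ")");
        }
    }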