Spline Interpolation Performance in Scala vs Java

I translated this spline interpolation algorithm from Apache Commons Math from Java into Scala in the most straightforward way I could think of (see below). The resulting function runs 2 to 3 times slower than the original Java code. My guess is that the overhead comes from the extra loops introduced by the calls to Array.fill, but I can't see a straightforward way to get rid of them. Any suggestions on how to make this code perform better? (It would also be nice to write it in a more concise and/or functional way; suggestions on that front would be appreciated as well.)
type Real = Double
def mySplineInterpolate(x: Array[Real], y: Array[Real]) = {
  if (x.length != y.length)
    throw new DimensionMismatchException(x.length, y.length)
  if (x.length < 3)
    throw new NumberIsTooSmallException(x.length, 3, true)
  // Number of intervals. The number of data points is n + 1.
  val n = x.length - 1
  // Differences between knot points
  val h = Array.tabulate(n)(i => x(i+1) - x(i))
  var mu: Array[Real] = Array.fill(n)(0)
  var z: Array[Real] = Array.fill(n+1)(0)
  var i = 1
  while (i < n) {
    val g = 2.0 * (x(i+1) - x(i-1)) - h(i-1) * mu(i-1)
    mu(i) = h(i) / g
    z(i) = (3.0 * (y(i+1) * h(i-1) - y(i) * (x(i+1) - x(i-1)) + y(i-1) * h(i)) /
      (h(i-1) * h(i)) - h(i-1) * z(i-1)) / g
    i += 1
  }
  // cubic spline coefficients -- b is linear, c quadratic, d is cubic (original y's are constants)
  var b: Array[Real] = Array.fill(n)(0)
  var c: Array[Real] = Array.fill(n+1)(0)
  var d: Array[Real] = Array.fill(n)(0)
  var j = n - 1
  while (j >= 0) {
    c(j) = z(j) - mu(j) * c(j + 1)
    b(j) = (y(j+1) - y(j)) / h(j) - h(j) * (c(j+1) + 2.0 * c(j)) / 3.0
    d(j) = (c(j+1) - c(j)) / (3.0 * h(j))
    j -= 1
  }
  Array.tabulate(n)(i => Polynomial(Array(y(i), b(i), c(i), d(i))))
}

You can get rid of all the Array.fill since a new array is always initialized with 0 or null, depending on whether it is a value or a reference (booleans are initialized with false, and characters with \0).
You might be able to simplify the loops by zipping arrays, but that will only make it slower. The only way functional programming (on the JVM, anyway) is going to make this faster is if you make it non-strict, such as with a Stream or a view, and then avoid consuming all of it.
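For comparison, the Java original that the question translates boils down to the tridiagonal sweep below. This is a sketch, not the exact Apache Commons source: the exception types are simplified and the Polynomial result is replaced by a plain per-interval coefficient array. Note the `new double[n]` allocations, which give the zero-initialization mentioned above for free.

```java
// Sketch of the natural cubic spline fit discussed above, in plain Java.
// Returns per-interval coefficients {y_i, b_i, c_i, d_i}.
public class SplineSketch {
    static double[][] interpolate(double[] x, double[] y) {
        if (x.length != y.length) throw new IllegalArgumentException("dimension mismatch");
        if (x.length < 3) throw new IllegalArgumentException("need at least 3 points");
        int n = x.length - 1;                      // number of intervals
        double[] h = new double[n];                // knot spacings
        for (int i = 0; i < n; i++) h[i] = x[i + 1] - x[i];
        double[] mu = new double[n];               // new arrays are zero-filled in Java
        double[] z = new double[n + 1];
        for (int i = 1; i < n; i++) {              // forward elimination
            double g = 2.0 * (x[i + 1] - x[i - 1]) - h[i - 1] * mu[i - 1];
            mu[i] = h[i] / g;
            z[i] = (3.0 * (y[i + 1] * h[i - 1] - y[i] * (x[i + 1] - x[i - 1]) + y[i - 1] * h[i])
                    / (h[i - 1] * h[i]) - h[i - 1] * z[i - 1]) / g;
        }
        double[] b = new double[n], c = new double[n + 1], d = new double[n];
        for (int j = n - 1; j >= 0; j--) {         // back substitution
            c[j] = z[j] - mu[j] * c[j + 1];
            b[j] = (y[j + 1] - y[j]) / h[j] - h[j] * (c[j + 1] + 2.0 * c[j]) / 3.0;
            d[j] = (c[j + 1] - c[j]) / (3.0 * h[j]);
        }
        double[][] coeffs = new double[n][];
        for (int i = 0; i < n; i++) coeffs[i] = new double[]{y[i], b[i], c[i], d[i]};
        return coeffs;
    }

    // Evaluate piece i at offset t = xq - x[i]
    static double eval(double[] p, double t) {
        return p[0] + t * (p[1] + t * (p[2] + t * p[3]));
    }
}
```

A quick sanity check is that each cubic piece reproduces the knot values at both ends of its interval, which follows directly from the formula for b.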

Related

Calculating the intersection point of a 3D line with a single axis

I'm trying to work out how to calculate where a 3D line (two 3D points) and an axis meet.
In my case I'm trying to calculate the intersection of a 3D line with the Z axis.
Let's say I give the line the coordinates
Point1.X = 0
Point1.Y = 0
Point1.Z = 0
Point2.X = 50
Point2.Y = 50
Point2.Z = 50
And I want to calculate the intersection at Z = 15. How would I accomplish this?
I've thought about using interpolation, but it seems really inefficient here. I've looked at other posts and mathematical equations, but I don't really understand them well enough to turn them into usable code.
Suppose you have vector A (like your point1) and B, then a straight line L through both points can be defined by
L(k) = A + k(B - A)
such that L(0) = A and L(1) = B. Suppose you want to find where
L(k).z = z_0
then you need to solve
A.z + k(B.z - A.z) = z_0
so the line L intersects the plane z = z_0 at
k = (z_0 - A.z) / (B.z - A.z)
If (B.z - A.z) is zero, either it does not intersect the plane z = z_0 anywhere, or it is in the plane everywhere.
lambda = ( Point1.Z - Z ) / ( Point1.Z - Point2.Z )
Point3.X = Point1.X + lambda * (Point2.X - Point1.X)
Point3.Y = Point1.Y + lambda * (Point2.Y - Point1.Y)
Point3.Z = Z
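The formulas above translate directly into code; a minimal Java sketch (the point arrays and method names are just illustrative):

```java
// Intersection of the line through p1 and p2 with the plane z = z0,
// by linear interpolation along the segment, following the formulas above.
public class LineZIntersect {
    // p1, p2 are {x, y, z} triples
    static double[] intersectAtZ(double[] p1, double[] p2, double z0) {
        double dz = p2[2] - p1[2];
        if (dz == 0.0)
            throw new ArithmeticException("line is parallel to the plane z = z0");
        double k = (z0 - p1[2]) / dz;          // k = (z_0 - A.z) / (B.z - A.z)
        return new double[]{
            p1[0] + k * (p2[0] - p1[0]),
            p1[1] + k * (p2[1] - p1[1]),
            z0
        };
    }
}
```

For the example in the question, the line from (0,0,0) to (50,50,50) crossed at z = 15 gives the point (15, 15, 15).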

FFT division Complex, Java

I came across this in a MATLAB file, freqz.m:
h = dividenowarn(fft([b zeros(1,s*nfft-nb)]),...
fft([a zeros(1,s*nfft-na)])).';
example:
x = fft([1.5,0,1,0,0,0,1,3]')
x =
6.5000
3.6213 + 2.1213i
-0.5000 + 3.0000i
-0.6213 + 2.1213i
0.5000
-0.6213 - 2.1213i
-0.5000 - 3.0000i
3.6213 - 2.1213i
now
y = fft([1,1,2,3,1,0,9,3]')
y =
20.0000
0.7071 + 6.2929i
-9.0000 + 5.0000i
-0.7071 - 7.7071i
6.0000
-0.7071 + 7.7071i
-9.0000 - 5.0000i
0.7071 - 6.2929i
The FFTs themselves don't really matter; I need to know how to perform this operation:
z = (x./y)
z =
0.3250
0.3968 - 0.5309i
0.1840 - 0.2311i
-0.2656 - 0.1050i
0.0833
-0.2656 + 0.1050i
0.1840 + 0.2311i
0.3968 + 0.5309i
I need an algorithm (not MATLAB code): either Java, or a step-by-step calculation...
» a
a =
1.0000 + 2.0000i 3.0000 + 4.0000i 5.0000 + 6.0000i 0
» b
b =
5.0000 + 2.0000i 1.0000 - 2.0000i 0 0
» c = a./b
Warning: Divide by zero.
c =
0.3103 + 0.2759i -1.0000 + 2.0000i Inf + Infi NaN - NaNi
»
The ./ operator performs element-wise division. You can tell it's an element-wise operator from the . before the division sign. This means that the result will be a vector with elements that are obtained using the rule x[i] / y[i].
If you want to do this in Java you will either need to implement your own Complex number class writing the division code yourself, or you can use the Apache commons math Complex class.
Assuming you use apache commons, the element-wise division in Java would look like this:
List<Complex> elementWiseDivision(List<Complex> dividend, List<Complex> divisor)
{
    if (dividend.size() != divisor.size())
    {
        throw new IllegalArgumentException("Must have same size");
    }
    List<Complex> result = new ArrayList<>();
    // using iterators to get O(n) with both LinkedList and ArrayList inputs
    for (Iterator<Complex> xit = dividend.iterator(), yit = divisor.iterator(); xit.hasNext();)
    {
        result.add(xit.next().divide(yit.next()));
    }
    return result;
}
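If you'd rather not pull in a library, the "step by step" calculation for one element is the standard conjugate trick: (a+bi)/(c+di) = ((ac+bd) + (bc−ad)i)/(c²+d²). A minimal sketch working on parallel real/imaginary arrays (the class and parameter names are just illustrative):

```java
// Element-wise complex division without any library:
// (a+bi)/(c+di) = ((a*c + b*d) + (b*c - a*d)i) / (c*c + d*d)
public class ComplexDivide {
    // xRe/xIm and yRe/yIm hold the real and imaginary parts of each element
    static double[][] divide(double[] xRe, double[] xIm, double[] yRe, double[] yIm) {
        int n = xRe.length;
        double[] zRe = new double[n], zIm = new double[n];
        for (int i = 0; i < n; i++) {
            double denom = yRe[i] * yRe[i] + yIm[i] * yIm[i];
            zRe[i] = (xRe[i] * yRe[i] + xIm[i] * yIm[i]) / denom;  // real part
            zIm[i] = (xIm[i] * yRe[i] - xRe[i] * yIm[i]) / denom;  // imaginary part
        }
        return new double[][]{zRe, zIm};
    }
}
```

A zero divisor produces Inf/NaN components, matching the MATLAB "Divide by zero" transcript above: (1+2i)/(5+2i) comes out as 9/29 + (8/29)i ≈ 0.3103 + 0.2759i, as in the example.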

Identification of the intersection point of two lines

Here are the calculations
L1 = Line joining points A(x1,y1) and B(x2,y2)
L2 = Line joining points c(x3,y3) and D(x4,y4)
For Line L1
Line Equation : y = m1*x + c1
slope m1 : (y2-y1)/(x2-x1)
Y intercept : c1 = (y1 - m1*x1)
For Line L2
Line Equation : y = m2*x + c2
slope m2 : (y4-y3)/(x4-x3)
Y intercept : c2 = (y3 - m2*x3)
For Point of Intersection
Solving the above equations we get
x = (c2 -c1)/(m1-m2)
y = (c1*m2 - c2*m1)/(m2-m1)
These above calculations are used to calculate the intersection points in my java program.
The Problem
The problem occurs when the two lines are parallel to x-axis and y-axis respectively. For example if the connecting points are as follows
L1 = A(34,112) B(34,180) ...(x-coordinate value remains constant)
L2 = C(72,100) D(88,100) ...(y-coordinate value remains constant)
Now m1 becomes Infinity and m2 becomes 0; accordingly, c1 becomes Infinity and c2 = y3. As a result, computing the intersection point with the formulas below gives a strange (NaN) result, although lines L1 and L2 clearly meet at (34,100).
x = (c2 -c1)/(m1-m2)
y = (c1*m2 - c2*m1)/(m2-m1)
Why does this problem occur? How can it be handled mathematically so that it can be implemented in a program?
A line which is parallel to the y-axis cannot be expressed as y = ax + b. You need to use the general equation of a line ax + by + c = 0. Determine the coefficients of the equations of your two lines and solve the linear system of their intersection. Make sure the determinant of the system is different from 0, otherwise there is no solution, or an infinity (which you can consider being another case of no solution).
You can obtain the a and b coefficients quite easily by considering the normal vector of your segment (if vect(AB) = (x,y) then normal(AB) = (-y,x) = (a,b)). You then determine c by substituting the coordinates of A into the equation: c = -a*x_A - b*y_A.
You now have a linear system to solve :
(S) : { a1*x + b1*y + c1 = 0
{ a2*x + b2*y + c2 = 0
If det = a1*b2 - a2*b1 = 0 (be careful with loss of precision; make your comparison epsilon-wise), then the system has no unique solution. Otherwise, you can find the inverse of the matrix of the system:
M = (a1 b1), M^(-1) = 1/det * ( b2 -b1)
(a2 b2) (-a2 a1)
Now you just need to compute
M^(-1) * (-c1) = 1/det * (-b2*c1 + b1*c2)
(-c2) ( a2*c1 - a1*c2)
And that's it, you have your solution !
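Putting the answer together in Java (a sketch; class and variable names are illustrative), using the normal-vector construction for the coefficients and Cramer's rule for the solve:

```java
// Intersection of two lines, each given by two points, using the general
// form a*x + b*y + c = 0 (handles vertical and horizontal lines alike).
public class LineIntersect {
    // Returns {x, y}, or null when det == 0 (parallel or coincident lines)
    static double[] intersect(double x1, double y1, double x2, double y2,
                              double x3, double y3, double x4, double y4) {
        // normal of AB = (-(y2-y1), x2-x1) gives (a1, b1); c1 from point A
        double a1 = -(y2 - y1), b1 = x2 - x1, c1 = -a1 * x1 - b1 * y1;
        double a2 = -(y4 - y3), b2 = x4 - x3, c2 = -a2 * x3 - b2 * y3;
        double det = a1 * b2 - a2 * b1;
        if (Math.abs(det) < 1e-12) return null;   // epsilon-wise comparison
        // Cramer's rule on  a*x + b*y = -c  (matches the inverse-matrix formula above)
        return new double[]{(-c1 * b2 + c2 * b1) / det, (-a1 * c2 + a2 * c1) / det};
    }
}
```

With the problematic example from the question, L1 through A(34,112), B(34,180) and L2 through C(72,100), D(88,100), this returns (34, 100) with no Infinity or NaN along the way.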

Levenberg-Marquardt minimization in Java

I generally code in MATLAB, but for various reasons I decided to switch to a Java approach.
The question is quite simple: I'd like to understand how to translate the following MATLAB code into working Java.
Within MATLAB I have a target function called findZ0:
function F = findZ0(V, Z, Latitude, TI, x)
%%% Inputs
% V = Average Wind Speed at Hub Height
% Z = Hub Height;
% Latitude = Specific Site Latitude (default value equal to 50 deg);
% x = Tryout Roughness length;
% TI = Target Turbulent Intensity;
%%% Outputs
% F = Roughness Length tuned to match Target Turbulent Intensity
Latitude = deg2rad(Latitude);
omega = 72.9E-06;
f = 2*omega*sin(Latitude);
ustar = ( 0.4*V - 34.5*f*Z)/log(Z/x);
mu = 1 - ((6*f*Z)/(ustar));
p = mu^(16);
sigmaTarget = (V*TI)/100;
F = sigmaTarget - (( 7.5*mu*ustar*((0.538 + .09*log(Z/x))^p) )/(1 + .156*log(ustar/(f*x))));
end
I then call it with these lines:
Uhub = 8;
HubHt = 90;
Latitude = 50;
x_trial = 0.01;
TI_target = 24;
find_z0 = @(x) findZ0(Uhub,HubHt,Latitude,TI_target, x);
z0 = fsolve(find_z0,x_trial,{'fsolve','Jacobian','on','levenberg-marquardt',.005,'MaxIter',15000,'TolX',1e-07,'TolFun',1E-07,'Display','off'});
I am aware that Fortran packages have been ported to Java, but I don't really have a clue how to achieve my goal with the tools mentioned. Hence, I'd welcome any suggestion on how to overcome this problem.
I'd suggest using an existing solution like Apache Commons - it's a robust library containing a lot of tools you may find helpful.
The LM optimization method is implemented by its LevenbergMarquardtOptimizer class - you can either use it directly or just look at it for inspiration.
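For a single scalar unknown you don't strictly need Levenberg-Marquardt at all: a direct translation of findZ0 plus a simple bisection already does what fsolve is used for here. A sketch (the bracket [1e-3, 10] is an assumption that happens to enclose the root for these example inputs; bisection is a deliberate substitute for LM, not the same algorithm):

```java
// Direct Java translation of the MATLAB findZ0 objective, plus a plain
// bisection root-finder hard-coded to the example inputs above.
public class FindZ0 {
    static double findZ0(double V, double Z, double latitudeDeg, double TI, double x) {
        double latitude = Math.toRadians(latitudeDeg);   // deg2rad(Latitude)
        double omega = 72.9e-6;
        double f = 2.0 * omega * Math.sin(latitude);
        double ustar = (0.4 * V - 34.5 * f * Z) / Math.log(Z / x);
        double mu = 1.0 - (6.0 * f * Z) / ustar;
        double p = Math.pow(mu, 16);
        double sigmaTarget = V * TI / 100.0;
        return sigmaTarget
                - (7.5 * mu * ustar * Math.pow(0.538 + 0.09 * Math.log(Z / x), p))
                / (1.0 + 0.156 * Math.log(ustar / (f * x)));
    }

    // Bisection: assumes findZ0 changes sign on [lo, hi]
    static double solve(double lo, double hi, double tol) {
        double flo = findZ0(8, 90, 50, 24, lo);
        for (int it = 0; it < 200 && hi - lo > tol; it++) {
            double mid = 0.5 * (lo + hi);
            double fmid = findZ0(8, 90, 50, 24, mid);
            if (flo * fmid <= 0) { hi = mid; } else { lo = mid; flo = fmid; }
        }
        return 0.5 * (lo + hi);
    }
}
```

Bisection is slower than LM but needs no Jacobian and cannot diverge once a sign-changing bracket is found, which makes it a reasonable first port of a one-dimensional fsolve call.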

Simulated Binary Crossover (SBX) crossover operator in Scala genetic algorithm (GA) library

I work on a very small research team that creates/adapts a Genetic Algorithm library in Scala for distributed computation with a Scientific Workflow System; in our case we use the open-source OpenMole software (http://www.openmole.org/).
Recently, I have been trying to understand and re-implement the SBX crossover operator from the JMetal metaheuristics library (http://jmetal.sourceforge.net/) in order to adapt it to a functional style in our Scala library.
I have written some code, but I need your advice on, or validation of, the SBX defined in the Java library, because the source code (src in svn) doesn't appear to match the original equation written here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.7291&rep=rep1&type=pdf at page 30, in Annex A.
First question: I don't understand the Java version in JMetal. Why do they use two different beta values?
beta1, which in the equation uses the first argument of min[(y1 - yL), ...], and
beta2, which uses the second argument of min[..., (yu - y2)].
Beta1 and beta2 are used to compute the alpha value, so (here and in JMetal) we also have two different alpha values, alpha1 and alpha2...
Same problem/question: in JMetal there are two computations for betaq (Java code), whereas Deb's equation has one, the result of:
Second question: what is the meaning of the symbol used in procedures (2) and (3) of the SBX pseudo-algorithm, and how does it differ from plain beta? Especially when we want to compute the children/offspring of the crossover parents, like here:
Edit
Correct a no-op if/else block
The author of the JMetal code gave me the link to the original source code of the NSGA-II algorithm, and explained that Deb's description of SBX differs from his implementation :/
http://www.iitk.ac.in/kangal/codes.shtml
I don't understand the difference between the description, the implementation in JMetal, and the original source code. Do you have an explanation?
Correct if/else return for map
Start of the translation into Scala:
class SBXBoundedCrossover[G <: GAGenome, F <: GAGenomeFactory[G]](rate: Random => Double = _.nextDouble) extends CrossOver[G, F] {

  def this(rate: Double) = this(_ => rate)

  def crossOver(genomes: IndexedSeq[G], factory: F)(implicit aprng: Random) = {
    val g1 = genomes.random
    val g2 = genomes.random
    val crossoverRate = rate(aprng)
    val EPS = 1.0e-14
    val numberOfVariables = g1.wrappedValues.size
    val distributionIndex = 2
    val variableToMutate = (0 until g1.wrappedValues.size).map { x => !(aprng.nextDouble < 0.5) }
    // crossover probability
    val offspring = {
      if (aprng.nextDouble < crossoverRate) {
        (variableToMutate zip (g1.wrappedValues zip g2.wrappedValues)) map {
          case (b, (g1e, g2e)) =>
            if (b) {
              if (abs(g1e - g2e) > EPS) {
                val y1 = min(g1e, g2e)
                val y2 = max(g2e, g1e)
                var yL = 0.0 // g1e.getLowerBound
                var yu = 1.0 // g1e.getUpperBound
                var rand = aprng.nextDouble // ui
                var beta1 = 1.0 + (2.0 * (y1 - yL) / (y2 - y1))
                var alpha1 = 2.0 - pow(beta1, -(distributionIndex + 1.0))
                var betaq1 = computebetaQ(alpha1, distributionIndex, rand)
                // compute offspring 1 using betaq1; corresponds to beta-bar
                var c1 = 0.5 * ((y1 + y2) - betaq1 * (y2 - y1))
                // -----------------------------------------------
                var beta2 = 1.0 + (2.0 * (yu - y2) / (y2 - y1))
                var alpha2 = 2.0 - pow(beta2, -(distributionIndex + 1.0))
                var betaq2 = computebetaQ(alpha2, distributionIndex, rand)
                // compute offspring 2 using betaq2
                var c2 = 0.5 * ((y1 + y2) + betaq2 * (y2 - y1))
                if (c1 < yL) c1 = yL
                if (c1 > yu) c1 = yu
                if (c2 < yL) c2 = yL
                if (c2 > yu) c2 = yu
                if (aprng.nextDouble <= 0.5) (c2, c1) else (c1, c2)
              } else {
                (g1e, g2e)
              }
            } else {
              (g2e, g1e)
            }
        }
      } else {
        // not so good here ...
        (g1.wrappedValues zip g2.wrappedValues)
      }
    }
    (factory.buildGenome(offspring.map { _._1 }), factory.buildGenome(offspring.map { _._2 }))
  }

  def computebetaQ(alpha: Double, distributionIndex: Double, rand: Double): Double = {
    if (rand <= (1.0 / alpha))
      pow(rand * alpha, 1.0 / (distributionIndex + 1.0))
    else
      pow(1.0 / (2.0 - rand * alpha), 1.0 / (distributionIndex + 1.0))
  }
}
Thanks a lot for your advice or help with this problem.
SR
Reyman64, your question is the answer I was looking for. Thank you.
I took the paper you linked and the code of Deb's implementation and tried to understand both. For that, I commented almost every line of the code. They differ only in the calculation of beta.
Since Deb used this code in his implementation of the NSGA-II, I'll stick to this version of the algorithm.
If anyone is in the same situation I was (not understanding how to implement SBX), I left my commentaries in the following gist, take a look.
https://gist.github.com/Tiagoperes/1779d5f1c89bae0cfdb87b1960bba36d
I did an implementation of the SBX (it is called Simulated Binary Crossover btw) for HeuristicLab (C#). You can take a look at the implementation of our SimulatedBinaryCrossover. I took the description from a different reference however (paper title: "Simulated binary crossover for continuous search space" from 1995). The full citation is given in the source code.
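To make the discussion concrete, here is the per-variable SBX step from the JMetal-style code above, stripped to plain Java. This is a sketch: bounds and the distribution index are passed in as parameters rather than hard-coded (the Scala version uses placeholders 0.0/1.0 and 2), and the class and method names are illustrative.

```java
import java.util.Random;

// One-variable Simulated Binary Crossover step, following the JMetal-style
// computation discussed above: two betas (one per boundary), hence two
// alphas and two betaq values, then clamping into [yL, yU].
public class SbxStep {
    static double betaq(double alpha, double eta, double u) {
        if (u <= 1.0 / alpha) return Math.pow(u * alpha, 1.0 / (eta + 1.0));
        return Math.pow(1.0 / (2.0 - u * alpha), 1.0 / (eta + 1.0));
    }

    // Returns the two children for parent genes p1, p2 within bounds [yL, yU]
    static double[] crossover(double p1, double p2, double yL, double yU,
                              double eta, Random rng) {
        final double EPS = 1.0e-14;
        if (Math.abs(p1 - p2) <= EPS) return new double[]{p1, p2};
        double y1 = Math.min(p1, p2), y2 = Math.max(p1, p2);
        double u = rng.nextDouble();                       // one u for both children
        double beta1 = 1.0 + 2.0 * (y1 - yL) / (y2 - y1);  // lower-boundary beta
        double alpha1 = 2.0 - Math.pow(beta1, -(eta + 1.0));
        double c1 = 0.5 * ((y1 + y2) - betaq(alpha1, eta, u) * (y2 - y1));
        double beta2 = 1.0 + 2.0 * (yU - y2) / (y2 - y1);  // upper-boundary beta
        double alpha2 = 2.0 - Math.pow(beta2, -(eta + 1.0));
        double c2 = 0.5 * ((y1 + y2) + betaq(alpha2, eta, u) * (y2 - y1));
        c1 = Math.min(Math.max(c1, yL), yU);               // clamp into bounds
        c2 = Math.min(Math.max(c2, yL), yU);
        return rng.nextDouble() <= 0.5 ? new double[]{c2, c1} : new double[]{c1, c2};
    }
}
```

This mirrors the bounded variant: the two betas exist precisely so that each child's spread is scaled to the distance from its nearest bound, which is why the code (unlike the single-beta description in the paper) computes alpha and betaq twice.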
