I am trying to create a Spark application that takes a dataset of (lat, long, timestamp) points and increments a cell's count when a point falls inside that grid cell. The grid is made up of 3D cells, with longitude and latitude on the horizontal axes and time on the z-axis.
Now I have completed the application and it does what it's supposed to, but it takes hours to scan the whole dataset (~9 GB). My cluster consists of 3 nodes with 4 cores and 8 GB RAM each, and I am currently using 6 executors with 1 core and 2 GB each.
I am guessing that I can optimize the code quite a bit, but is there a big mistake in my code that causes this delay?
//Create a JavaPairRDD with tuple elements. For each String line of lines we split the string
//and assign latitude, longitude and timestamp of each line to sdx,sdy and sdt. Then we check if the data point of
//that line is contained in a cell of the centroids list. If it is then a new tuple is returned
//with key the latitude, Longitude and timestamp (split by ",") of that cell and value 1.
JavaPairRDD<String, Integer> pairs = lines.mapToPair(x -> {
    String sdx = x.split(" ")[2];
    String sdy = x.split(" ")[3];
    String sdt = x.split(" ")[0];
    double dx = Double.parseDouble(sdx);
    double dy = Double.parseDouble(sdy);
    int dt = Integer.parseInt(sdt);
    List<Integer> t = brTime.getValue();
    List<Point2D.Double> p = brCoo.getValue();
    double dist = brDist.getValue();
    int dur = brDuration.getValue();
    for (int timeCounter = 0; timeCounter < t.size(); timeCounter++) {
        for (int cooCounter = 0; cooCounter < p.size(); cooCounter++) {
            double cx = p.get(cooCounter).getX();
            double cy = p.get(cooCounter).getY();
            int ct = t.get(timeCounter);
            String scx = Double.toString(cx);
            String scy = Double.toString(cy);
            String sct = Integer.toString(ct);
            if (dx > (cx - dist) && dx <= (cx + dist)) {
                if (dy > (cy - dist) && dy <= (cy + dist)) {
                    if (dt > (ct - dur) && dt <= (ct + dur)) {
                        return new Tuple2<String, Integer>(scx + "," + scy + "," + sct, 1);
                    }
                }
            }
        }
    }
    return new Tuple2<String, Integer>("Out Of Bounds", 1);
});
Try to use mapPartitions, it's faster; see this example link. Another thing to do is to move that part of the code outside the timeCounter loop.
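For reference, here is a rough sketch of what a mapPartitionsToPair variant could look like. This is not the original code: it assumes Spark 2.x (where the partition function returns an Iterator) and that java.util.List/ArrayList are imported. The broadcast values are read once per partition instead of once per record, and each line is split only once.
// Hypothetical rewrite using mapPartitionsToPair: broadcast values are read once
// per partition, each record is split once, and results are collected into a list.
JavaPairRDD<String, Integer> pairs = lines.mapPartitionsToPair(iter -> {
    List<Integer> t = brTime.getValue();
    List<Point2D.Double> p = brCoo.getValue();
    double dist = brDist.getValue();
    int dur = brDuration.getValue();
    List<Tuple2<String, Integer>> out = new ArrayList<>();
    while (iter.hasNext()) {
        String[] parts = iter.next().split(" ");
        double dx = Double.parseDouble(parts[2]);
        double dy = Double.parseDouble(parts[3]);
        int dt = Integer.parseInt(parts[0]);
        String key = "Out Of Bounds";
        outer:
        for (int ct : t) {
            if (dt <= ct - dur || dt > ct + dur) continue;
            for (Point2D.Double c : p) {
                if (dx > c.getX() - dist && dx <= c.getX() + dist
                        && dy > c.getY() - dist && dy <= c.getY() + dist) {
                    key = c.getX() + "," + c.getY() + "," + ct; // build the key only on a match
                    break outer;
                }
            }
        }
        out.add(new Tuple2<>(key, 1));
    }
    return out.iterator();
});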
One of the biggest factors that may contribute to costs in running a Spark map like this is data access outside of the RDD context, which means driver interaction. In your case, there are at least 4 variable accesses where this occurs: brTime, brCoo, brDist, and brDuration. It also appears that you're parsing each line with three separate String#split calls instead of splitting it once. Finally, scx, scy, and sct are all calculated on every loop iteration, even though they're only returned if their numeric counterparts pass a series of checks, which means wasted CPU cycles and extra GC.
Without actually reviewing the job plan, it's tough to say whether the above will make performance reach an acceptable level. Check out your history server application logs and see if there are any stages which are eating up your time - once you've identified a culprit there, that's what actually needs optimizing.
I tried mapPartitionsToPair and also moved the calculations of scx, scy and sct so that they are calculated only if the point passes the conditions. The speed of the application has improved dramatically: it now takes only 17 minutes! I believe mapPartitionsToPair was the biggest factor. Thanks a lot Mks and bsplosion!
I am currently working on my Bachelor's thesis on the advantages/disadvantages of GraalVM Native Image compared to a jar running on the JVM.
During one of the tests I call a certain method which allocates and populates an array of size 10^6. After that, the method loops over the array and performs arithmetic operations (it is a variant of the Ackley function). The usual runtime of this method was between 3 and 4 seconds, but sometimes the method would complete after just 50 ms (both when running as the Native Image and as the jar file on the JVM).
Since the array is populated using the Math.random() function, I don't think this is due to caching, and the Native Image rules out JIT compilation as the source of these outliers.
The endpoint looks like this, where dtno is the Data Transfer Object containing the "range" variable:
@PostMapping(path = "/ackley")
public static @ResponseBody long calculateackley(@RequestBody dtno d) {
    long start = System.nanoTime();
    double res = ackley(d.range);
    long end = System.nanoTime();
    System.out.println("Ackley function took: " + res);
    return end - start;
}
The ackley function looks like this:
public static long ackley(int range) {
    long start = System.nanoTime();
    if (range != 0) {
        double[] a = new double[range];
        int counter = 0;
        for (int i = -range/2; i < range/2; i++) {
            a[counter++] = Math.random()*range*i;
        }
        double sum1 = 0.0;
        double sum2 = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum1 += a[i]*a[i];
            sum2 += (Math.cos(2*Math.PI*a[i]));
        }
        double result = -20.0*Math.exp(-0.2*Math.sqrt(sum1 / ((double) a.length))) + 20
                - Math.exp(sum2 / ((double) a.length)) + Math.exp(1.0);
    }
    long end = System.nanoTime();
    return end - start;
}
As already mentioned, the range variable in the test was 10^6. What I also suspect is that, since result and the sums are never actually used to calculate the return value, the program decides to skip everything between the for loops and the declaration of "end".
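One way to test that suspicion (this is not part of the original benchmark, just a sketch) is to make the method's observable output depend on the computed result, so that neither the JIT nor the Native Image's ahead-of-time compiler can legally drop the loops as dead code:
// Hypothetical variant of ackley() that consumes the computed result, so the
// arithmetic cannot be eliminated as dead code. Everything else mirrors the
// original method; printing the result is just one way of consuming it.
public static long ackleyChecked(int range) {
    long start = System.nanoTime();
    double result = 0.0;
    if (range != 0) {
        double[] a = new double[range];
        int counter = 0;
        for (int i = -range / 2; i < range / 2; i++) {
            a[counter++] = Math.random() * range * i;
        }
        double sum1 = 0.0;
        double sum2 = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum1 += a[i] * a[i];
            sum2 += Math.cos(2 * Math.PI * a[i]);
        }
        result = -20.0 * Math.exp(-0.2 * Math.sqrt(sum1 / a.length)) + 20
                - Math.exp(sum2 / a.length) + Math.exp(1.0);
    }
    long end = System.nanoTime();
    System.out.println("ackley result = " + result); // result is now observably used
    return end - start;
}
If the 50 ms runs disappear with this version, dead-code elimination was indeed the cause; if they remain, the explanation lies elsewhere.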
In the graph from the JMeter test you can see that these fast execution times were all during ramp-up and at the very end of the test run.
[Image: Test Results of the Performance Test]
In the summary Report you can see the huge deviation from the average runtime.
[Image: Performance Test Summary Report]
If anyone could give me a hint, or a good source where I could find out what is going on, I would be very thankful.
My question might cause some confusion, so please see the Description first; it might help to identify my problem. I will add my code at the end of the question (any suggestions regarding my code structure/implementation are also welcome).
Thank you for any help in advance!
My question:
How to define multiple sinks in Flink Batch processing without having it get data from one source repeatedly?
What is the difference between createCollectionsEnvironment() and getExecutionEnvironment()? Which one should I use in a local environment?
What is the use of env.execute()? My code outputs the result without this statement; if I add it, it throws an exception:
Exception in thread "main" java.lang.RuntimeException: No new data sinks have been defined since the last execution. The last execution refers to the latest call to 'execute()', 'count()', 'collect()', or 'print()'.
at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:940)
at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:922)
at org.apache.flink.api.java.CollectionEnvironment.execute(CollectionEnvironment.java:34)
at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:816)
at MainClass.main(MainClass.java:114)
Description:
New to programming. Recently I needed to process some data (grouping data, calculating standard deviations, etc.) using Flink batch processing.
However, I came to a point where I need to output two DataSets.
The structure was something like this:
From Source(Database) -> DataSet 1 (add index using zipWithIndex())-> DataSet 2 (do some calculation while keeping index) -> DataSet 3
First I output DataSet 2; the index runs e.g. from 1 to 10000.
Then I output DataSet 3 and the index becomes 10001 to 20000, although I did not change the values in any function.
My guess is that when outputting DataSet 3, instead of reusing the previously calculated DataSet 2, Flink starts over by reading the data from the database again and then performs the calculation.
Because of the zipWithIndex() function, this not only gives the wrong index numbers but also increases the number of connections to the database.
I guess this is related to the execution environment: when I use
ExecutionEnvironment env = ExecutionEnvironment.createCollectionsEnvironment();
I get the "wrong" index numbers (10001-20000),
and with
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
I get the correct index numbers (1-10000).
The time taken and the number of database connections are different, and the order of printing is reversed.
OS, DB, other environment details and versions:
IntelliJ IDEA 2017.3.5 (Community Edition)
Build #IC-173.4674.33, built on March 6, 2018
JRE: 1.8.0_152-release-1024-b15 amd64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Windows 10 10.0
My Test code(Java):
public static void main(String[] args) throws Exception {
ExecutionEnvironment env = ExecutionEnvironment.createCollectionsEnvironment();
//Table is used to calculate the standard deviation as I figured that there is no such calculation in DataSet.
BatchTableEnvironment tableEnvironment = TableEnvironment.getTableEnvironment(env);
//Get Data from a mySql database
DataSet<Row> dbData =
env.createInput(
JDBCInputFormat.buildJDBCInputFormat()
.setDrivername("com.mysql.cj.jdbc.Driver")
.setDBUrl($database_url)
.setQuery("select value from $table_name where id =33")
.setUsername("username")
.setPassword("password")
.setRowTypeInfo(new RowTypeInfo(BasicTypeInfo.DOUBLE_TYPE_INFO))
.finish()
);
// Add index for assigning group (group capacity is 5)
DataSet<Tuple2<Long, Row>> indexedData = DataSetUtils.zipWithIndex(dbData);
// Replace index(long) with group number(int), and convert Row to double at the same time
DataSet<Tuple2<Integer, Double>> rawData = indexedData.flatMap(new GroupAssigner());
//Using groupBy() to combine individual data of each group into a list, while calculating the mean and range in each group
//put them into a POJO named GroupDataClass
DataSet<GroupDataClass> groupDS = rawData.groupBy("f0").combineGroup(new GroupCombineFunction<Tuple2<Integer, Double>, GroupDataClass>() {
@Override
public void combine(Iterable<Tuple2<Integer, Double>> iterable, Collector<GroupDataClass> collector) {
Iterator<Tuple2<Integer, Double>> it = iterable.iterator();
Tuple2<Integer, Double> var1 = it.next();
int groupNum = var1.f0;
// Using max and min to calculate range, using i and sum to calculate mean
double max = var1.f1;
double min = max;
double sum = 0;
int i = 1;
// The list is to store individual value
List<Double> list = new ArrayList<>();
list.add(max);
while (it.hasNext())
{
double next = it.next().f1;
sum += next;
i++;
max = next > max ? next : max;
min = next < min ? next : min;
list.add(next);
}
//Store group number, mean, range, and 5 individual values within the group
collector.collect(new GroupDataClass(groupNum, sum / i, max - min, list));
}
});
//print because if no sink is created, Flink will not even perform the calculation.
groupDS.print();
// Get the max group number and range in each group to calculate average range
// if group number start with 1 then the maximum of group number equals to the number of group
// However, because this is the second sink, data will flow from source again, which will double the group number
DataSet<Tuple2<Integer, Double>> rangeDS = groupDS.map(new MapFunction<GroupDataClass, Tuple2<Integer, Double>>() {
@Override
public Tuple2<Integer, Double> map(GroupDataClass in) {
return new Tuple2<>(in.groupNum, in.range);
}
}).max(0).andSum(1);
// collect and print as if no sink is created, Flink will not even perform the calculation.
Tuple2<Integer, Double> rangeTuple = rangeDS.collect().get(0);
double range = rangeTuple.f1/ rangeTuple.f0;
System.out.println("range = " + range);
}
public static class GroupAssigner implements FlatMapFunction<Tuple2<Long, Row>, Tuple2<Integer, Double>> {
@Override
public void flatMap(Tuple2<Long, Row> input, Collector<Tuple2<Integer, Double>> out) {
// index 1-5 will be assigned to group 1, index 6-10 will be assigned to group 2, etc.
int n = new Long(input.f0 / 5).intValue() + 1;
out.collect(new Tuple2<>(n, (Double) input.f1.getField(0)));
}
}
It's fine to connect a source to multiple sinks; the source gets executed only once and the records get broadcast to the multiple sinks. See this question: Can Flink write results into multiple files (like Hadoop's MultipleOutputFormat)?
getExecutionEnvironment is the right way to get the environment when you want to run your job. createCollectionsEnvironment is a good way to play around and test. See the documentation.
The exception's error message is very clear: whenever you call print or collect, your data flow gets executed. So you have two choices:
Either you call print/collect at the end of your data flow, and it gets executed and printed. That's good for testing. Bear in mind you can only call collect/print once per data flow, otherwise it gets executed several times while it is not completely defined;
Or you add a sink at the end of your data flow and call env.execute(). That's what you want to do once your flow is in a more mature shape.
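As a minimal sketch of the second option, built on the variables from the code above (the output paths are placeholders; writeAsText and env.execute() are standard DataSet API calls):
// Hypothetical sketch: replace print()/collect() with real sinks on both data sets,
// then trigger a single execution. The paths below are placeholders.
groupDS.writeAsText("file:///tmp/groups");
rangeDS.writeAsText("file:///tmp/ranges");
// One execute() call runs the whole plan; the JDBC source is read once and feeds both sinks.
env.execute("group-and-range-job");
Defined this way, both results come out of one job, so zipWithIndex() should only see the data once and the indices should stay in the 1-10000 range.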
I have a basic framework for a neural network to recognize numeric digits, but I'm having some problems training it. My back-propagation works for small data sets, but when I have more than 50 data points, the return value starts converging to 0. And when I have data sets in the thousands, I get NaNs for the costs and returns.
Basic structure: 3 layers: 784 : 15 : 1
784 is the number of pixels per data set, 15 neurons in my hidden layer, and one output neuron which returns a value from 0 to 1 (when you multiply by 10 you get a digit).
public class NetworkManager {
int inputSize;
int hiddenSize;
int outputSize;
public Matrix W1;
public Matrix W2;
public NetworkManager(int input, int hidden, int output) {
inputSize = input;
hiddenSize = hidden;
outputSize = output;
W1 = new Matrix(inputSize, hiddenSize);
W2 = new Matrix(hiddenSize, output);
}
Matrix z2, z3;
Matrix a2;
public Matrix forward(Matrix X) {
z2 = X.dot(W1);
a2 = sigmoid(z2);
z3 = a2.dot(W2);
Matrix yHat = sigmoid(z3);
return yHat;
}
public double costFunction(Matrix X, Matrix y) {
Matrix yHat = forward(X);
Matrix cost = yHat.sub(y);
cost = cost.mult(cost);
double returnValue = 0;
int i = 0;
while (i < cost.m.length) {
returnValue += cost.m[i][0];
i++;
}
return returnValue;
}
Matrix yHat;
public Matrix[] costFunctionPrime(Matrix X, Matrix y) {
yHat = forward(X);
Matrix delta3 = (yHat.sub(y)).mult(sigmoidPrime(z3));
Matrix dJdW2 = a2.t().dot(delta3);
Matrix delta2 = (delta3.dot(W2.t())).mult(sigmoidPrime(z2));
Matrix dJdW1 = X.t().dot(delta2);
return new Matrix[]{dJdW1, dJdW2};
}
}
There's the code for the network framework. I pass double arrays of length 784 into the forward method.
int t = 0;
Matrix[] dJdW;
while (t < 10000) {
    dJdW = Nn.costFunctionPrime(X, y);
    Nn.W1 = Nn.W1.sub(dJdW[0].scalar(3));
    Nn.W2 = Nn.W2.sub(dJdW[1].scalar(3));
    t++;
}
I call this to adjust the weights. With small sets, the cost converges to 0 pretty well, but larger sets don't (the cost associated with 100 characters always converges to 13). And if the set is too large, the first adjustment works (and the costs go down), but after the second all I get is NaN.
Why does this implementation fail with larger data sets (specifically during training), and how can I fix it? I tried a similar structure with 10 outputs instead of 1, where each would return a value near 0 or 1, acting like boolean values, but the same thing was happening.
I'm also doing this in Java, by the way, and I'm wondering if that has something to do with the problem. I wondered if it was a problem with running out of space, but I haven't been getting any heap-space messages. Is there a problem with how I'm back-propagating, or is something else happening?
EDIT: I think I know what's happening. I think my backpropagation function is getting caught in local minima. Sometimes the training succeeds and sometimes it fails for large data sets. Because I'm starting with random weights, I get random initial costs. What I've noticed is that when the cost initially exceeds a certain amount (it depends on the number of data sets involved), the costs converge to a clean number (sometimes 27, others 17.4) and the outputs converge to 0 (which makes sense).
I was warned about relative minima in the cost function when I began, and I'm beginning to realize why. So now the question becomes: how do I go about my gradient descent so that I'll actually find the global minimum? I'm working in Java, by the way.
This seems like a problem with weight initialization.
As far as I can see, you never initialize the weights to any specific values; therefore the network diverges. You should at least use random initialization.
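A minimal sketch of small random initialization with plain arrays, independent of the custom Matrix class from the question (the 1/sqrt(fanIn) scale is a common heuristic, not something taken from your code):
import java.util.Random;

// Fill a fanIn x fanOut weight matrix with small random values centered at zero.
public class WeightInit {
    public static double[][] randomWeights(int fanIn, int fanOut, long seed) {
        Random rng = new Random(seed);
        double scale = 1.0 / Math.sqrt(fanIn);
        double[][] w = new double[fanIn][fanOut];
        for (int i = 0; i < fanIn; i++) {
            for (int j = 0; j < fanOut; j++) {
                w[i][j] = (rng.nextDouble() * 2.0 - 1.0) * scale; // uniform in (-scale, scale)
            }
        }
        return w;
    }
}
Your Matrix class appears to expose its backing array as m (as used in costFunction), so values like these could be copied into W1 and W2 in the NetworkManager constructor.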
If your backprop works on a small dataset, that's a reasonably good sign that there isn't a problem with it. If you're suspicious about it, you can try your BP on the XOR problem.
Do your units have bias terms?
I once discussed this with someone doing exactly the same thing: handwritten digit recognition with 15 units in the hidden layer. I have seen a network that does this task well. Its topology was:
Input: 784
First hidden: 500
Second hidden: 500
Third hidden: 2000
Output: 10
You have a set of images, and you nonlinearly transform the 784 pixels of each image into 15 numbers in the <0, 1> interval, and you do this for all images of your set. You hope that you can correctly separate the digits based on these 15 numbers. From my point of view, 15 hidden units is too few for such a task, assuming you have a dataset with thousands of examples. Please try, for example, 500 hidden units.
The learning rate also influences backprop and can cause problems with convergence.
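For instance, the training loop in the question updates the weights with a fixed factor of 3. Here is a sketch of the same loop with the learning rate pulled out as a variable; it reuses the Matrix methods sub() and scalar() exactly as in the question (assuming scalar() accepts a double factor), and the suggested smaller values are just a starting point to try when the cost blows up to NaN:
// Hypothetical tweak of the original training loop: make the learning rate a
// variable and experiment with much smaller values (e.g. 0.01 - 0.5).
double learningRate = 0.1;
Matrix[] dJdW;
for (int t = 0; t < 10000; t++) {
    dJdW = Nn.costFunctionPrime(X, y);
    Nn.W1 = Nn.W1.sub(dJdW[0].scalar(learningRate));
    Nn.W2 = Nn.W2.sub(dJdW[1].scalar(learningRate));
}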
I have a bunch of sensors and I really just want to reconstruct the input.
So what I want is this:
after I have trained my model I will pass in my feature matrix
get the reconstructed feature matrix back
I want to investigate which sensor values are completely different from the reconstructed value
Therefore I thought an RBM would be the right choice, and since I am used to Java, I tried deeplearning4j. But I got stuck very early. If you run the following code, I am facing 2 problems.
The result is far away from a correct prediction; most of the values are simply [1.00, 1.00, 1.00].
I would expect to get back 4 values (the number of inputs that are supposed to be reconstructed).
So what do I have to tune to get a) a better result and b) get the reconstructed inputs back?
public static void main(String[] args) {
// Customizing params
Nd4j.MAX_SLICES_TO_PRINT = -1;
Nd4j.MAX_ELEMENTS_PER_SLICE = -1;
Nd4j.ENFORCE_NUMERICAL_STABILITY = true;
final int numRows = 4;
final int numColumns = 1;
int outputNum = 3;
int numSamples = 150;
int batchSize = 150;
int iterations = 100;
int seed = 123;
int listenerFreq = iterations/5;
DataSetIterator iter = new IrisDataSetIterator(batchSize, numSamples);
// Loads data into generator and format consumable for NN
DataSet iris = iter.next();
iris.normalize();
//iris.scale();
System.out.println(iris.getFeatureMatrix());
NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
// Gaussian for visible; Rectified for hidden
// Set contrastive divergence to 1
.layer(new RBM.Builder()
.nIn(numRows * numColumns) // Input nodes
.nOut(outputNum) // Output nodes
.activation("tanh") // Activation function type
.weightInit(WeightInit.XAVIER) // Weight initialization
.lossFunction(LossFunctions.LossFunction.XENT)
.updater(Updater.NESTEROVS)
.build())
.seed(seed) // Locks in weight initialization for tuning
.iterations(iterations)
.learningRate(1e-1f) // Backprop step size
.momentum(0.5) // Speed of modifying learning rate
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT) // ^^ Calculates gradients
.build();
Layer model = LayerFactories.getFactory(conf.getLayer()).create(conf);
model.setListeners(Arrays.asList((IterationListener) new ScoreIterationListener(listenerFreq)));
model.fit(iris.getFeatureMatrix());
System.out.println(model.activate(iris.getFeatureMatrix(), false));
}
For b), when you call activate(), you get a list of "nlayers" arrays. Every array in the list is the activation for one layer. The array itself is composed of rows: 1 row per input vector; each column contains the activation for every neuron in this layer and this observation (input).
Once all layers have been activated with some input, you can get the reconstruction with the RBM.propDown() method.
As for a), I'm afraid it's very tricky to train an RBM correctly.
So you really want to play with every parameter and, more importantly,
monitor various metrics during training that will give you some hint about whether it's training correctly or not. Personally, I like to plot:
The score() on the training corpus, which is the reconstruction error after every gradient update; check that it decreases (a small monitoring sketch follows this list).
The score() on another development corpus: useful to be warned when overfitting occurs;
The norm of the parameter vector: it has a large impact on the score
Both activation maps (= XY rectangular plots of the activated neurons of one layer over the corpus), just after initialization and after N steps: this helps detect unreliable training (e.g. when everything is black/white, or when a large fraction of the neurons are never activated, etc.).
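As a rough sketch of the first point, one low-tech way to watch the training score without extra tooling is to fit in several passes and print the model's score after each one. This assumes the Layer/Model API from the question exposes score() as in deeplearning4j's Model interface; it is not taken from the original code:
// Hypothetical monitoring loop: run fit() in several passes over the same data
// and log the score after each pass; it should trend downwards if training works.
for (int pass = 0; pass < 20; pass++) {
    model.fit(iris.getFeatureMatrix());
    System.out.println("pass " + pass + " score = " + model.score());
}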
I've built a model of the solar system in Java. In order to determine the position of a planet, it does a whole lot of computations which give a very exact value. However, I am often satisfied with an approximate position, if that makes it faster. Because I'm using it in a simulation, speed is important, as the position of a planet will be requested millions of times.
Currently I try to cache the position of a planet throughout its orbit and then use those coordinates over and over. If a position in between two values is requested I perform a linear interpolation. This is how I store values:
for(int t=0; t<tp; t++) {
listCoordinates[t]=super.coordinates(ti+t);
}
interpolator = new PlanetOrbit(listCoordinates,tp);
PlanetOrbit has the interpolation code:
package cometsim;
import org.apache.commons.math3.util.FastMath;
public class PlanetOrbit {
final double[][] coordinates;
double tp;
public PlanetOrbit(double[][] coordinates, double tp) {
this.coordinates = coordinates;
this.tp = tp;
}
public double[] coordinates(double julian) {
double T = julian % FastMath.floor(tp);
if(coordinates.length == 1 || coordinates.length == 0) return coordinates[0];
if(FastMath.round(T) == T) return coordinates[(int) T];
int floor = (int) FastMath.floor(T);
if(floor>=coordinates.length) floor=coordinates.length-5;
double[] f = coordinates[floor];
double[] c = coordinates[floor+1];
double[] retval = f;
retval[0] += (T-FastMath.floor(T))*(c[0]-f[0]);
retval[1] += (T-FastMath.floor(T))*(c[1]-f[1]);
retval[2] += (T-FastMath.floor(T))*(c[2]-f[2]);
return retval;
}
}
You can think of FastMath as Math but faster. However, this code is not much of a speed improvement over calculating the exact value every time. Do you have any ideas for how to make it faster?
There are a few issues I can see; the main ones are as follows:
PlanetOrbit#coordinates actually changes the values in the coordinates array. As this method is supposed to only interpolate, I expect that your orbit will corrupt slightly every time you run through it (because it is a linear interpolation, the orbit will actually degrade towards its centre).
You do the same thing several times; most clearly, T-FastMath.floor(T) occurs three separate times in the code.
Not a question of efficiency or accuracy, but the variable and method names are very opaque; use real words for variable names.
My proposed method would be as follows
public double[] getInterpolatedCoordinates(double julian){ //julian calendar? This variable name needs to be something else, like day, or time, or whatever it actually means
int startIndex=(int)julian;
int endIndex=(startIndex+1>=coordinates.length?1:startIndex+1); //wrap around
double nonIntegerPortion=julian-startIndex;
double[] start = coordinates[startIndex];
double[] end = coordinates[endIndex];
double[] returnPosition= new double[3];
for(int i=0;i< start.length;i++){
returnPosition[i]=start[i]*(1-nonIntegerPortion)+end[i]*nonIntegerPortion;
}
return returnPosition;
}
This avoids corrupting the coordinates array and avoids repeating the same floor operation several times (1-nonIntegerPortion is still computed several times and could be hoisted if need be, but I expect profiling will show it isn't significant). However, it does create a new double[] each time, which may be inefficient if you only need the array temporarily. This can be corrected using a store object (an object you used previously but no longer need, usually from the previous loop iteration):
public double[] getInterpolatedCoordinates(double julian, double[] store){
int startIndex=(int)julian;
int endIndex=(startIndex+1>=coordinates.length?1:startIndex+1); //wrap around
double nonIntegerPortion=julian-startIndex;
double[] start = coordinates[startIndex];
double[] end = coordinates[endIndex];
double[] returnPosition= store;
for(int i=0;i< start.length;i++){
returnPosition[i]=start[i]*(1-nonIntegerPortion)+end[i]*nonIntegerPortion;
}
return returnPosition; //store is returned
}