Parallelize a collection with Spark - java

I'm trying to parallelize a collection with Spark and the example in the documentation doesn't seem to work:
List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);
JavaRDD<Integer> distData = sc.parallelize(data);
I'm creating a list of LabeledPoints from records, each of which contains data points (double[]) and a label (defaulted: true/false).
public List<LabeledPoint> createLabeledPoints(List<ESRecord> records) {
    List<LabeledPoint> points = new ArrayList<>();
    for (ESRecord rec : records) {
        points.add(new LabeledPoint(
            rec.defaulted ? 1.0 : 0.0, Vectors.dense(rec.toDataPoints())));
    }
    return points;
}

public void test(List<ESRecord> records) {
    SparkConf conf = new SparkConf().setAppName("SVM Classifier Example");
    SparkContext sc = new SparkContext(conf);
    List<LabeledPoint> points = createLabeledPoints(records);
    JavaRDD<LabeledPoint> data = sc.parallelize(points);
    ...
}
The signature of parallelize no longer takes a single parameter; here is how it looks in spark-mllib_2.11 v1.3.0: sc.parallelize(seq, numSlices, evidence$1)
So any ideas on how to get this working?

In Java, you should use JavaSparkContext. The extra parameters you are seeing (numSlices and evidence$1) are a default argument and an implicit ClassTag supplied by the Scala compiler; JavaSparkContext.parallelize takes just the list.
https://spark.apache.org/docs/0.6.2/api/core/spark/api/java/JavaSparkContext.html
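A minimal sketch of that change, reusing the createLabeledPoints method and the records list from the question (all names are the question's own):
// JavaSparkContext wraps SparkContext and exposes the single-argument parallelize
// shown in the documentation, so no ClassTag or numSlices arguments are needed.
SparkConf conf = new SparkConf().setAppName("SVM Classifier Example");
JavaSparkContext jsc = new JavaSparkContext(conf);

List<LabeledPoint> points = createLabeledPoints(records);
JavaRDD<LabeledPoint> data = jsc.parallelize(points);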

Related

Apache Ignite updating previously trained ML model

I have a dataset that is used for training a KNN model. Later I'd like to update the model with new training data. What I'm seeing is that the updated model only takes the new training data into account, ignoring what was previously trained.
Vectorizer vec = new DummyVectorizer<Integer>(1, 2).labeled(0);
DatasetTrainer<KNNClassificationModel, Double> trainer = new KNNClassificationTrainer();
KNNClassificationModel model;
KNNClassificationModel modelUpdated;
Map<Integer, Vector> trainingData = new HashMap<Integer, Vector>();
Map<Integer, Vector> trainingDataNew = new HashMap<Integer, Vector>();
Double[][] data1 = new Double[][] {
{0.136,0.644,0.154},
{0.302,0.634,0.779},
{0.806,0.254,0.211},
{0.241,0.951,0.744},
{0.542,0.893,0.612},
{0.334,0.277,0.486},
{0.616,0.259,0.121},
{0.738,0.585,0.017},
{0.124,0.567,0.358},
{0.934,0.346,0.863}};
Double[][] data2 = new Double[][] {
{0.300,0.236,0.193}};
Double[] observationData = new Double[] { 0.8, 0.7 };
// fill dataset (in cache)
for (int i = 0; i < data1.length; i++)
    trainingData.put(i, new DenseVector(data1[i]));
// first training / prediction
model = trainer.fit(trainingData, 1, vec);
System.out.println("First prediction : " + model.predict(new DenseVector(observationData)));
// new training data
for (int i = 0; i < data2.length; i++)
    trainingDataNew.put(data1.length + i, new DenseVector(data2[i]));
// second training / prediction
modelUpdated = trainer.update(model, trainingDataNew, 1, vec);
System.out.println("Second prediction: " + modelUpdated.predict(new DenseVector(observationData)));
As an output I get this:
First prediction : 0.124
Second prediction: 0.3
It looks like the second prediction only used data2, which necessarily leads to a prediction of 0.3.
How does the model update work? If I have to add data2 to data1 and then train on data1 again, what would be the difference compared to a completely new training on all the combined data?
How does model update work?
For KNN specifically:
Add data2 to data1 and call the trainer's update method on the combined data (see the sketch at the end of this answer).
See this test as an example: https://github.com/apache/ignite/blob/635dafb7742673494efa6e8e91e236820156d38f/modules/ml/src/test/java/org/apache/ignite/ml/knn/KNNClassificationTest.java#L167
Follow the instructions in that test:
set up your trainer:
KNNClassificationTrainer trainer = new KNNClassificationTrainer()
.withK(3)
.withDistanceMeasure(new EuclideanDistance())
.withWeighted(false);
Then set up your vectorizer (note how the labeled coordinate is created):
model = trainer.fit(
trainingData,
parts,
new DoubleArrayVectorizer<Integer>().labeled(Vectorizer.LabelCoordinate.LAST)
);
Then call update on the trainer as needed:
KNNClassificationModel updatedOnData = trainer.update(
originalMdlOnEmptyDataset,
newData,
parts,
new DoubleArrayVectorizer<Integer>().labeled(Vectorizer.LabelCoordinate.LAST)
);
docs for KNN classification: https://ignite.apache.org/docs/latest/machine-learning/binary-classification/knn-classification
KNN Classification example: https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/knn/KNNClassificationExample.java
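Applied to the question's own objects (trainingData, trainingDataNew, trainer, vec, model, observationData), a minimal sketch of "update on the combined data" could look like this:
// Merge the original and the new training data, then update the model on the full set.
Map<Integer, Vector> combined = new HashMap<>(trainingData);
combined.putAll(trainingDataNew);

KNNClassificationModel modelOnAllData = trainer.update(model, combined, 1, vec);
System.out.println("Prediction on combined data: "
        + modelOnAllData.predict(new DenseVector(observationData)));
As for the second part of the question: KNN is an instance-based learner, so the model essentially is the training data, and updating on the combined set should behave the same as a completely new training on all of it.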

The input column features must be either string or numeric type, but got org.apache.spark.ml.linalg.VectorUDT

I am very new to Spark machine learning (a three-day-old novice) and I'm basically trying to predict some data using the logistic regression algorithm in Spark via Java. I have referred to a few sites and the documentation, came up with the code below, and am facing an issue when I try to execute it.
I have pre-processed the data, used VectorAssembler to club all the relevant columns into one, and the issue occurs when I try to fit the model.
public class Sparkdemo {

    static SparkSession session = SparkSession.builder().appName("spark_demo")
            .master("local[*]").getOrCreate();

    @SuppressWarnings("empty-statement")
    public static void getData() {
        Dataset<Row> inputFile = session.read()
                .option("header", true)
                .format("csv")
                .option("inferschema", true)
                .csv("C:\\Users\\WildJasmine\\Downloads\\NKI_cleaned.csv");
        inputFile.show();

        String[] columns = inputFile.columns();
        int beg = 16, end = columns.length - 1;
        String[] featuresToDrop = new String[end - beg + 1];
        System.arraycopy(columns, beg, featuresToDrop, 0, featuresToDrop.length);
        System.out.println("rows are\n " + Arrays.toString(featuresToDrop));

        Dataset<Row> dataSubset = inputFile.drop(featuresToDrop);
        String[] arr = {"Patient", "ID", "eventdeath"};
        Dataset<Row> X = dataSubset.drop(arr);
        X.show();
        Dataset<Row> y = dataSubset.select("eventdeath");
        y.show();

        //Vector Assembler concept for merging all the cols into a single col
        VectorAssembler assembler = new VectorAssembler()
                .setInputCols(X.columns())
                .setOutputCol("features");
        Dataset<Row> dataset = assembler.transform(X);
        dataset.show();

        StringIndexer labelSplit = new StringIndexer().setInputCol("features").setOutputCol("label");
        Dataset<Row> data = labelSplit.fit(dataset)
                .transform(dataset);
        data.show();

        Dataset<Row>[] splitsX = data.randomSplit(new double[]{0.8, 0.2}, 42);
        Dataset<Row> trainingX = splitsX[0];
        Dataset<Row> testX = splitsX[1];

        LogisticRegression lr = new LogisticRegression()
                .setMaxIter(10)
                .setRegParam(0.3)
                .setElasticNetParam(0.8);

        LogisticRegressionModel lrModel = lr.fit(trainingX);

        Dataset<Row> prediction = lrModel.transform(testX);
        prediction.show();
    }

    public static void main(String[] args) {
        getData();
    }
}
Below is an image of my dataset (screenshot not reproduced here).
Error message:
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: The input column features must be either string type or numeric type, but got org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7.
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.ml.feature.StringIndexerBase$class.validateAndTransformSchema(StringIndexer.scala:86)
at org.apache.spark.ml.feature.StringIndexer.validateAndTransformSchema(StringIndexer.scala:109)
at org.apache.spark.ml.feature.StringIndexer.transformSchema(StringIndexer.scala:152)
at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:74)
at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:135)
The end result I need is a predicted value based on the features column.
Thanks in advance.
That error occurs when the input field of your dataframe to which you want to apply the StringIndexer transformation is a Vector. As the Spark documentation https://spark.apache.org/docs/latest/ml-features#stringindexer shows, the input column must be a string: the transformer takes the distinct values of that column and creates a new column of integers, one per distinct string value. It does not work on vectors.
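A minimal sketch of the usual fix, reusing dataSubset and X from the question and assuming eventdeath is the class label you want to predict: index the raw label column, not the assembled vector.
// Assemble the feature vector from X's columns, transforming dataSubset so that
// eventdeath stays available alongside the "features" column.
VectorAssembler assembler = new VectorAssembler()
        .setInputCols(X.columns())
        .setOutputCol("features");
Dataset<Row> assembled = assembler.transform(dataSubset);

// Index the label column (a string/numeric column), not the "features" vector.
StringIndexer labelIndexer = new StringIndexer()
        .setInputCol("eventdeath")
        .setOutputCol("label");
Dataset<Row> data = labelIndexer.fit(assembled).transform(assembled);

// "data" now has a numeric "label" column and a vector "features" column,
// which is what LogisticRegression expects by default.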

How to pass csv mapped bean class to Dataset

I wrote code to read a CSV file and map all the columns to a bean class.
Now I'm trying to load these values into a Dataset and am getting an error.
7/08/30 16:33:58 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.IllegalArgumentException: object is not an instance of declaring class
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
If I set the values manually, it works fine.
public void run(String t, String u) throws FileNotFoundException {

    JavaRDD<String> pairRDD = sparkContext.textFile("C:/temp/L1_result.csv");

    JavaPairRDD<String, String> rowJavaRDD = pairRDD.mapToPair(new PairFunction<String, String, String>() {
        public Tuple2<String, String> call(String rec) throws FileNotFoundException {
            String[] tokens = rec.split(";");
            String[] vals = new String[tokens.length];
            for (int i = 0; i < tokens.length; i++) {
                vals[i] = tokens[i];
            }
            return new Tuple2<String, String>(tokens[0], tokens[1]);
        }
    });

    ColumnPositionMappingStrategy cpm = new ColumnPositionMappingStrategy();
    cpm.setType(funds.class);
    String[] csvcolumns = new String[]{"portfolio_id", "portfolio_code"};
    cpm.setColumnMapping(csvcolumns);

    CSVReader csvReader = new CSVReader(new FileReader("C:/temp/L1_result.csv"));
    CsvToBean csvtobean = new CsvToBean();
    List csvDataList = csvtobean.parse(cpm, csvReader);
    for (Object dataobject : csvDataList) {
        funds fund = (funds) dataobject;
        System.out.println("Portfolio:" + fund.getPortfolio_id() + " code:" + fund.getPortfolio_code());
    }

    /* funds b0 = new funds();
    b0.setK("k0");
    b0.setSomething("sth0");

    funds b1 = new funds();
    b1.setK("k1");
    b1.setSomething("sth1");

    List<funds> data = new ArrayList<funds>();
    data.add(b0);
    data.add(b1); */

    System.out.println("Portfolio:" + rowJavaRDD.values());

    // manual set works fine
    // Dataset<Row> fundDf = SQLContext.createDataFrame(data, funds.class);

    Dataset<Row> fundDf = SQLContext.createDataFrame(rowJavaRDD.values(), funds.class);
    fundDf.printSchema();
    fundDf.write().option("mergeschema", true).parquet("C:/test");
}
The line below, using rowJavaRDD.values(), is giving the issue:
Dataset<Row> fundDf = SQLContext.createDataFrame(rowJavaRDD.values(), funds.class);
What is the resolution to this? Whatever values I'm column-mapping should be passed here, but how does that need to be done? Any idea would really help me.
Dataset<Row> fundDf = SQLContext.createDataFrame(csvDataList, funds.class);
Passing the list worked!
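In context, a minimal sketch of this approach inside the question's run method, assuming SQLContext there refers to an actual SQLContext (or SparkSession) instance, since createDataFrame is an instance method:
// Parse the CSV into beans with opencsv, then build the DataFrame from the bean list
// instead of from rowJavaRDD.values().
List csvDataList = csvtobean.parse(cpm, csvReader);

Dataset<Row> fundDf = SQLContext.createDataFrame(csvDataList, funds.class);
fundDf.printSchema();
fundDf.write().option("mergeschema", true).parquet("C:/test");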

Building a decision tree pipeline for Kryo-encoded Datasets in Spark 2.0.2 with Java

I'm trying to build a version of the decision tree classification example from Spark 2.0.2, org.apache.spark.examples.ml.JavaDecisionTreeClassificationExample. I can't use it directly because it uses libsvm-encoded data. I need to avoid libsvm (undocumented, AFAIK) to classify ordinary datasets more easily, so I'm trying to adapt the example to use a Kryo-encoded dataset instead.
The issue originates in the map call below, particularly the consequences of using Encoders.kryo as the encoder, as instructed by "SparkML feature vectors and Spark 2.0.2 Encoders in Java".
public SMLDecisionTree(Dataset<Row> incomingDS, final String label, final String[] features)
{
    this.incomingDS = incomingDS;
    this.label = label;
    this.features = features;
    this.mapSet = new StringToDoubleMapperSet(features);

    this.sdlDS = incomingDS
            .select(label, features)
            .filter(new FilterFunction<Row>()
            {
                public boolean call(Row row) throws Exception
                {
                    return !row.getString(0).equals(features[0]); // header
                }
            })
            .map(new MapFunction<Row, LabeledFeatureVector>()
            {
                public LabeledFeatureVector call(Row row) throws Exception
                {
                    double labelVal = mapSet.addValue(0, row.getString(0));
                    double[] featureVals = new double[features.length];
                    for (int i = 1; i < row.length(); i++)
                    {
                        Double val = mapSet.addValue(i, row.getString(i));
                        featureVals[i - 1] = val;
                    }
                    return new LabeledFeatureVector(labelVal, Vectors.dense(featureVals));
                }
                // https://stackoverflow.com/questions/36648128/how-to-store-custom-objects-in-a-dataset
            }, Encoders.kryo(LabeledFeatureVector.class));

    Dataset<LabeledFeatureVector>[] splits = sdlDS.randomSplit(new double[] { 0.7, 0.3 });
    this.trainingDS = splits[0];
    this.testDS = splits[1];
}
This impacts the StringIndexer and VectorIndexer from the original Spark example, which are unable to handle the resulting Kryo-encoded dataset. Here is the pipeline-building code taken from the Spark decision tree example:
public void run() throws IOException
{
    sdlDS.show();

    StringIndexerModel labelIndexer = new StringIndexer()
            .setInputCol("label")
            .setOutputCol("indexedLabel")
            .fit(df);

    VectorIndexerModel featureIndexer = new VectorIndexer()
            .setInputCol("features")
            .setOutputCol("indexedFeatures")
            .setMaxCategories(4) // treat features with > 4 distinct values as continuous.
            .fit(df);

    DecisionTreeClassifier classifier = new DecisionTreeClassifier()
            .setLabelCol("indexedLabel")
            .setFeaturesCol("indexedFeatures");

    IndexToString labelConverter = new IndexToString()
            .setInputCol("prediction")
            .setOutputCol("predictedLabel")
            .setLabels(labelIndexer.labels());

    Pipeline pipeline = new Pipeline().setStages(new PipelineStage[]
            { labelIndexer, featureIndexer, classifier, labelConverter });
This code apparently expects a dataset with "label" and "features" columns, where the label is a double and the features are a Vector of doubles. The problem is that Kryo produces a single column named "values" that seems to hold a byte array. I know of no documentation on how to convert this to what the original StringIndexer and VectorIndexer expect. Can someone help? Java please.
Don't use the Kryo encoder in the first place. It is very limited in general and not applicable here at all. The simplest solution is to drop the custom class and use a Row encoder. First you'll need a bunch of imports:
import org.apache.spark.sql.catalyst.encoders.RowEncoder;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.ml.linalg.*;
and a schema:
List<StructField> fields = new ArrayList<>();
fields.add(DataTypes.createStructField("label", DataTypes.DoubleType, false));
fields.add(DataTypes.createStructField("features", new VectorUDT(), false));
StructType schema = DataTypes.createStructType(fields);
Encoder can be defined like this:
Encoder<Row> encoder = RowEncoder.apply(schema);
and use as shown below:
Dataset<Row> inputDs = spark.read().json(sc.parallelize(Arrays.asList(
        "{\"label\": 1.0, \"features\": \"foo\"}"
)));

inputDs.map(new MapFunction<Row, Row>() {
    public Row call(Row row) {
        return RowFactory.create(1.0, Vectors.dense(1.0, 2.0));
    }
}, encoder);
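Applying that pattern to the question's own constructor (reusing incomingDS, label, features and mapSet, plus the schema and encoder defined above; the header filter is omitted for brevity), a hedged sketch would replace the Kryo-encoded map with one that emits plain Rows:
// Produce plain "label"/"features" Rows instead of a Kryo-serialized custom class.
Dataset<Row> sdlDS = incomingDS
        .select(label, features)
        .map(new MapFunction<Row, Row>()
        {
            public Row call(Row row) throws Exception
            {
                double labelVal = mapSet.addValue(0, row.getString(0));
                double[] featureVals = new double[features.length];
                for (int i = 1; i < row.length(); i++)
                {
                    featureVals[i - 1] = mapSet.addValue(i, row.getString(i));
                }
                return RowFactory.create(labelVal, Vectors.dense(featureVals));
            }
        }, encoder);
The resulting Dataset<Row> exposes "label" and "features" columns directly, so the StringIndexer and VectorIndexer stages from the original example can consume it without further conversion.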

Extract Aggregator values in Batch Execution

Is there any way to programmatically extract the final value of the aggregators after a Dataflow batch execution?
Based on the DirectPipelineRunner class, I wrote the following method. It seems to work, but for dynamically created counters it gives different values than the values shown in the console output.
PS. If it helps, I'm assuming that aggregators are based on Long values, with a sum combining function.
public static Map<String, Object> extractAllCounters(Pipeline p, PipelineResult pr)
{
    AggregatorPipelineExtractor aggregatorExtractor = new AggregatorPipelineExtractor(p);
    Map<String, Object> results = new HashMap<>();

    for (Map.Entry<Aggregator<?, ?>, Collection<PTransform<?, ?>>> e :
            aggregatorExtractor.getAggregatorSteps().entrySet()) {
        Aggregator agg = e.getKey();
        try {
            results.put(agg.getName(), pr.getAggregatorValues(agg).getTotalValue(agg.getCombineFn()));
        } catch (AggregatorRetrievalException | IllegalArgumentException aggEx) {
            //System.err.println("Can't extract " + agg.getName() + ": " + aggEx.getMessage());
        }
    }
    return results;
}
The values of aggregators should be available in the PipelineResult. For example:
CountOddsFn countOdds = new CountOddsFn();
pipeline
.apply(Create.of(1, 3, 5, 7, 2, 4, 6, 8, 10, 12, 14, 20, 42, 68, 100))
.apply(ParDo.of(countOdds));
PipelineResult result = pipeline.run();
// Here you may need to use the BlockingDataflowPipelineRunner
AggregatorValues<Integer> values =
result.getAggregatorValues(countOdds.aggregator);
Map<String, Integer> valuesAtSteps = values.getValuesAtSteps();
// Now read the values from the step...
Example DoFn that reports the aggregator:
private static class CountOddsFn extends DoFn<Integer, Void> {
    Aggregator<Integer, Integer> aggregator =
            createAggregator("odds", new SumIntegerFn());

    @Override
    public void processElement(ProcessContext c) throws Exception {
        if (c.element() % 2 == 1) {
            aggregator.addValue(1);
        }
    }
}
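To then read the totals programmatically, as the original question asks, here is a short sketch that sums the per-step values from the valuesAtSteps map above (plain Java, no additional APIs assumed):
// Sum the values reported for the "odds" aggregator across all steps of the pipeline.
int totalOdds = 0;
for (Integer stepValue : valuesAtSteps.values()) {
    totalOdds += stepValue;
}
System.out.println("odds counted in total: " + totalOdds);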
