I have a project where I want to load a given shapefile, pick out the polygons above a certain size, and write the results to a new shapefile. It may not be the most efficient approach, but I have code that successfully does all of that, right up to the point where it is supposed to write the shapefile. I get no errors, but the resulting shapefile has no usable data in it. I've followed as many tutorials as possible, but I'm still coming up blank.
The first bit of code is where I read in a shapefile, pick out the polygons I want, and put them into a feature collection. This part seems to work fine as far as I can tell.
public class ShapefileTest {
public static void main(String[] args) throws MalformedURLException, IOException, FactoryException, MismatchedDimensionException, TransformException, SchemaException {
File oldShp = new File("Old.shp");
File newShp = new File("New.shp");
//Get data from the original ShapeFile
Map<String, Object> map = new HashMap<String, Object>();
map.put("url", oldShp.toURI().toURL());
//Connect to the dataStore
DataStore dataStore = DataStoreFinder.getDataStore(map);
//Get the typeName from the dataStore
String typeName = dataStore.getTypeNames()[0];
//Get the FeatureSource from the dataStore
FeatureSource<SimpleFeatureType, SimpleFeature> source = dataStore.getFeatureSource(typeName);
SimpleFeatureCollection collection = (SimpleFeatureCollection) source.getFeatures(); //Get all of the features - no filter
//Start creating the new Shapefile
final SimpleFeatureType TYPE = createFeatureType(); //Calls a method that builds the feature type - tested and works.
DefaultFeatureCollection newCollection = new DefaultFeatureCollection(); //To hold my new collection
try (FeatureIterator<SimpleFeature> features = collection.features()) {
while (features.hasNext()) {
SimpleFeature feature = features.next(); //Get next feature
SimpleFeatureBuilder fb = new SimpleFeatureBuilder(TYPE); //Create a new SimpleFeature based on the original
Integer level = (Integer) feature.getAttribute(1); //Get the level for this feature
MultiPolygon multiPoly = (MultiPolygon) feature.getDefaultGeometry(); //Get the geometry collection
//First count how many new polygons we will have
int numNewPoly = 0;
for (int i = 0; i < multiPoly.getNumGeometries(); i++) {
double area = getArea(multiPoly.getGeometryN(i));
if (area > 20200) {
numNewPoly++;
}
}
//Now build an array of the larger polygons
Polygon[] polys = new Polygon[numNewPoly]; //Array of new geometries
int iPoly = 0;
for (int i = 0; i < multiPoly.getNumGeometries(); i++) {
double area = getArea(multiPoly.getGeometryN(i));
if (area > 20200) { //Write the new data
polys[iPoly] = (Polygon) multiPoly.getGeometryN(i);
iPoly++;
}
}
GeometryFactory gf = new GeometryFactory(); //Create a geometry factory
MultiPolygon mp = new MultiPolygon(polys, gf); //Create the MultiPolygon
fb.add(mp); //Add the geometry collection to the feature builder
fb.add(level);
fb.add("dBA");
SimpleFeature newFeature = SimpleFeatureBuilder.build( TYPE, new Object[]{mp, level,"dBA"}, null );
newCollection.add(newFeature); //Add it to the collection
}
At this point I have a collection that looks right - it has the correct bounds and everything. The next bit of code is where I put it into a new Shapefile.
//Time to put together the new Shapefile
Map<String, Serializable> newMap = new HashMap<String, Serializable>();
newMap.put("url", newShp.toURI().toURL());
newMap.put("create spatial index", Boolean.TRUE);
DataStore newDataStore = DataStoreFinder.getDataStore(newMap);
newDataStore.createSchema(TYPE);
String newTypeName = newDataStore.getTypeNames()[0];
SimpleFeatureStore fs = (SimpleFeatureStore) newDataStore.getFeatureSource(newTypeName);
Transaction t = new DefaultTransaction("add");
fs.setTransaction(t);
fs.addFeatures(newCollection);
t.commit();
ReferencedEnvelope env = fs.getBounds();
}
}
I put in the very last line of code to check the bounds of the FeatureStore fs, and it comes back null. Obviously, when I load the newly created shapefile (which DOES get created and is about the right size), nothing shows up.
The solution actually had nothing to do with the code I posted - it had everything to do with my FeatureType definition. I did not include "the_geom" in my polygon feature type, so nothing was getting written to the file.
I believe you are missing the step to finalize/close the file. Try adding this after the t.commit() line.
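For reference, here is a minimal sketch of what the corrected feature type could look like. Only the geometry attribute (named "the_geom") was the missing piece; the type name "Contours" and the attribute names "level" and "units" are just placeholders for the three values added to each feature above.
// Minimal sketch of a feature type that includes the geometry attribute.
// "Contours", "level" and "units" are placeholder names.
private static SimpleFeatureType createFeatureType() {
    SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
    builder.setName("Contours");
    builder.setCRS(DefaultGeographicCRS.WGS84);
    builder.add("the_geom", MultiPolygon.class); // the geometry attribute that was missing
    builder.add("level", Integer.class);
    builder.length(10).add("units", String.class);
    return builder.buildFeatureType();
}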
t.close();
As an expedient alternative, you might try out the Shapefile dumper utility mentioned in the Shapefile DataStores docs. Using that may simplify your second code block into two or three lines.
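A rough sketch of that alternative, assuming the ShapefileDumper utility from the gt-shapefile module (check the exact package and method signatures against your GeoTools version); the output directory is a placeholder:
// Rough sketch using ShapefileDumper; "outputDir" is a placeholder directory.
ShapefileDumper dumper = new ShapefileDumper(new File("outputDir"));
dumper.dump(newCollection); // writes the feature collection out as a shapefile in outputDir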
Related
I am working on the classification of the ImageNet dataset with the AlexNet architecture. I am working on distributed systems for data streams, and I am using the DeepLearning4j library. I have a problem with loading the ImageNet data from a path on our HPC. My current, normal data-loading method is:
FileSplit fileSplit= new FileSplit(new File("/scratch/imagenet/ILSVRC2012/train"), NativeImageLoader.ALLOWED_FORMATS);
int imageHeightWidth = 224; //224x224 pixel input
int imageChannels = 3; //RGB
PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
ImageRecordReader rr = new ImageRecordReader(imageHeightWidth, imageHeightWidth, imageChannels, labelMaker);
System.out.println("initialization");
rr.initialize(fileSplit);
System.out.println("iterator");
DataSetIterator iter = new RecordReaderDataSetIterator.Builder(rr, minibatch)
.classification(1, 1000)
.preProcessor(new ImagePreProcessingScaler()) //For normalization of image values 0-255 to 0-1
.build();
System.out.println("data list creator");
List<DataSet> dataList = new ArrayList<>();
while (iter.hasNext()){
dataList.add(iter.next());
}
And this is my attempt to load the dataset via Spark. The labels list contains all the labels of the ImageNet dataset, but I didn't copy them all here:
JavaSparkContext sc = SparkContext.initSparkContext(useSparkLocal);
//load data just one time
System.out.println("load data");
List<String> labelsList = Arrays.asList("kit fox, Vulpes macrotis " , "English setter " , "Australian terrier ");
String folder= "/scratch/imagenet/ILSVRC2012/train/*";
File f = new File(folder);
String path = f.getPath();
path=folder+"/*";
JavaPairRDD<String, PortableDataStream> origData = sc.binaryFiles(path);
int imageHeightWidth = 224; //224x224 pixel input
int imageChannels = 3; //RGB
PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
ImageRecordReader rr = new ImageRecordReader(imageHeightWidth, imageHeightWidth, imageChannels, labelMaker);
System.out.println("initialization");
rr.setLabels(labelsList);
RecordReaderFunction rrf = new org.datavec.spark.functions.RecordReaderFunction(rr);
JavaRDD<List<Writable>> rdd = origData.map(rrf);
JavaRDD<DataSet> data = rdd.map(new DataVecDataSetFunction(1, 1000, false));
List<DataSet> collected = data.collect();
By the way, in the train directory there are 1000 folders (n01440764, n01755581, n02012849, n02097658 ...) in which we find the images.
I need this parallelization since loading the data itself took around 26 hours, which is not efficient. So could you help me correct my approach?
For Spark I would recommend pre-vectorizing all of the data and just loading the ndarrays themselves directly. We cover this approach in our examples: https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-distributed-training-examples/
I would recommend this approach: load the pre-created datasets using a map call, where ideally you set up the batches relative to the number of workers available. DataSet has save(..) and load(..) methods you can use.
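As a rough sketch of that idea (all paths are placeholders; DataSet.save(File) and DataSet.load(File) are the ND4J methods referred to above):
// Rough sketch of the pre-vectorize-then-load idea; all paths are placeholders.
// One-off pre-vectorization step: save each minibatch DataSet to a shared filesystem, e.g.
//   dataSet.save(new File("/scratch/imagenet/vectorized/batch_" + i + ".bin"));

// Later, load the pre-created DataSets in parallel with a map call:
JavaRDD<String> dataSetPaths = sc.textFile("/scratch/imagenet/vectorized/paths.txt");
JavaRDD<DataSet> data = dataSetPaths.map(p -> {
    DataSet ds = new DataSet();
    ds.load(new File(p)); // reads a DataSet previously written with DataSet.save(File)
    return ds;
});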
In order to implement this consider using:
SparkDataUtils.createFileBatchesSpark(JavaRDD<String> filePaths, final String rootOutputDir, final int batchSize, @NonNull final org.apache.hadoop.conf.Configuration hadoopConfig)
This takes in file paths, an output directory on HDFS, a pre-configured batch size, and a Hadoop configuration for accessing your cluster.
Here is a snippet from the relevant java doc to get you started on some of the concepts:
{@code
* JavaSparkContext sc = ...
* SparkDl4jMultiLayer net = ...
* String baseFileBatchDir = ...
* JavaRDD<String> paths = org.deeplearning4j.spark.util.SparkUtils.listPaths(sc, baseFileBatchDir);
*
* //Image record reader:
* PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
* ImageRecordReader rr = new ImageRecordReader(32, 32, 1, labelMaker);
* rr.setLabels(<labels here>);
*
* //Create DataSetLoader:
* int batchSize = 32;
* int numClasses = 1000;
* DataSetLoader loader = new RecordReaderFileBatchLoader(rr, batchSize, 1, numClasses);
*
* //Fit the network
* net.fitPaths(paths, loader);
* }
I wrote WEKA Java code to train 4 classifiers. I saved the classifier models and want to use them to predict new unseen instances (think of it as someone who wants to test whether a tweet is positive or negative).
I used the StringToWordVector filter on the training data. To avoid the "Src and Dest differ in # of attributes" error, I used the following code to train the filter on the training data before applying the filter to the new instance, to try to predict whether the new instance is positive or negative. I just can't get it right.
Classifier cls = (Classifier) weka.core.SerializationHelper.read("models/myModel.model"); //reading one of the trained classifiers
BufferedReader datafile = readDataFile("Tweets/tone1.ARFF"); //read training data
Instances data = new Instances(datafile);
data.setClassIndex(data.numAttributes() - 1);
Filter filter = new StringToWordVector(50);//keep 50 words
filter.setInputFormat(data);
Instances filteredData = Filter.useFilter(data, filter);
// rebuild classifier
cls.buildClassifier(filteredData);
String testInstance= "Text that I want to use as an unseen instance and predict whether it's positive or negative";
System.out.println(">create test instance");
FastVector attributes = new FastVector(2);
attributes.addElement(new Attribute("text", (FastVector) null));
// Add class attribute.
FastVector classValues = new FastVector(2);
classValues.addElement("Negative");
classValues.addElement("Positive");
attributes.addElement(new Attribute("Tone", classValues));
// Create dataset with initial capacity of 100, and set index of class.
Instances tests = new Instances("test instance", attributes, 100);
tests.setClassIndex(tests.numAttributes() - 1);
Instance test = new Instance(2);
// Set value for message attribute
Attribute messageAtt = tests.attribute("text");
test.setValue(messageAtt, messageAtt.addStringValue(testInstance));
test.setDataset(tests);
Filter filter2 = new StringToWordVector(50);
filter2.setInputFormat(tests);
Instances filteredTests = Filter.useFilter(tests, filter2);
System.out.println(">train Test filter using training data");
Standardize sfilter = new Standardize(); //Match the number of attributes between src and dest.
sfilter.setInputFormat(filteredData); // initializing the filter with training set
filteredTests = Filter.useFilter(filteredData, sfilter); // create new test set
ArffSaver saver = new ArffSaver(); //save test data to ARFF file
saver.setInstances(filteredTests);
File unseenFile = new File ("Tweets/unseen.ARFF");
saver.setFile(unseenFile);
saver.writeBatch();
When I try to standardize the input data using the filtered training data, I get a new ARFF file (unseen.ARFF), but it has 2000 instances (the same number as the training data) where most of the values are negative. I don't understand why, or how to remove those instances.
System.out.println(">Evaluation"); //without the following 2 lines I get ArrayIndexOutOfBoundException.
filteredData.setClassIndex(filteredData.numAttributes() - 1);
filteredTests.setClassIndex(filteredTests.numAttributes() - 1);
Evaluation eval = new Evaluation(filteredData);
eval.evaluateModel(cls, filteredTests);
System.out.println(eval.toSummaryString("\nResults\n======\n", false));
Printing the evaluation results, I want to see, for example, a percentage of how positive or negative this instance is, but instead I get the following. I also want to see 1 instance instead of 2000. Any help on how to do this would be great.
> Results
======
Correlation coefficient 0.0285
Mean absolute error 0.8765
Root mean squared error 1.2185
Relative absolute error 409.4123 %
Root relative squared error 121.8754 %
Total Number of Instances 2000
Thanks
Use eval.predictions(). It is a java.util.ArrayList<Prediction>. Then you can use the Prediction.weight() method to get how positive or negative your test variable is.
cls.distributionForInstance(newInst) returns the probability distribution for an instance. Check the docs
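For example, a minimal sketch using the code above (it assumes cls and filteredTests as defined there, and that the class values are ordered Negative, Positive as in the class attribute definition):
// Minimal sketch: class probabilities for a single test instance.
// Assumes cls and filteredTests from the code above; index order follows
// the class attribute definition (0 = "Negative", 1 = "Positive").
Instance inst = filteredTests.firstInstance();
double[] dist = cls.distributionForInstance(inst);
System.out.println("P(Negative) = " + dist[0] + ", P(Positive) = " + dist[1]);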
I have reached a good solution and here I share my code with you. This trains a classifier using WEKA Java code and then uses it to predict new unseen instances. Some parts - like paths - are hardcoded, but you can easily modify the method to take parameters.
/**
* This method performs classification of unseen instance.
* It starts by training a model using a selection of classifiers, then classifies new unlabeled instances.
*/
public static void predict() throws Exception {
//start by providing the paths for your training and testing ARFF files; make sure both files have the same structure and the exact same classes in the header
//initialise classifier
Classifier classifier = null;
System.out.println("read training arff");
Instances train = new Instances(new BufferedReader(new FileReader("Train.arff")));
train.setClassIndex(0);//in my case the class was the first attribute, thus zero; otherwise it's the number of attributes - 1
System.out.println("read testing arff");
Instances unlabeled = new Instances(new BufferedReader(new FileReader("Test.arff")));
unlabeled.setClassIndex(0);
// training using a collection of classifiers (NaiveBayes, SMO (AKA SVM), KNN and Decision trees.)
String[] algorithms = {"nb","smo","knn","j48"};
for(int w=0; w<algorithms.length;w++){
if(algorithms[w].equals("nb"))
classifier = new NaiveBayes();
if(algorithms[w].equals("smo"))
classifier = new SMO();
if(algorithms[w].equals("knn"))
classifier = new IBk();
if(algorithms[w].equals("j48"))
classifier = new J48();
System.out.println("==========================================================================");
System.out.println("training using " + algorithms[w] + " classifier");
Evaluation eval = new Evaluation(train);
//perform 10 fold cross validation
eval.crossValidateModel(classifier, train, 10, new Random(1));
String output = eval.toSummaryString();
System.out.println(output);
String classDetails = eval.toClassDetailsString();
System.out.println(classDetails);
classifier.buildClassifier(train);
}
Instances labeled = new Instances(unlabeled);
// label instances (use the trained classifier to classify new unseen instances)
for (int i = 0; i < unlabeled.numInstances(); i++) {
double clsLabel = classifier.classifyInstance(unlabeled.instance(i));
labeled.instance(i).setClassValue(clsLabel);
System.out.println(clsLabel + " -> " + unlabeled.classAttribute().value((int) clsLabel));
}
//save the model for future use
ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("myModel.dat"));
out.writeObject(classifier);
out.close();
System.out.println("===== Saved model =====");
}
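To reuse the saved model later, here is a minimal sketch (it assumes you reload your test ARFF into an Instances object named unlabeled, as in the method above):
// Minimal sketch: load the model saved above and classify one unlabeled instance.
ObjectInputStream in = new ObjectInputStream(new FileInputStream("myModel.dat"));
Classifier loaded = (Classifier) in.readObject();
in.close();
double label = loaded.classifyInstance(unlabeled.instance(0));
System.out.println("Predicted: " + unlabeled.classAttribute().value((int) label));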
I am trying to take a very long file of strings and convert it to XML according to a schema I was given. I used JAXB to create classes from that schema. Since the file is very large I created a thread pool to improve performance, but since then it only processes one line of the file per thread and marshals it to the XML file.
Below is my home class where I read from the file. Each line is a record of a transaction; for every new user encountered, a list is made to store all of that user's transactions, and each list is put into a HashMap. I made it a ConcurrentHashMap because multiple threads will work on the map simultaneously - is this the correct thing to do?
After the lists are created, a thread is made for each user. Each thread runs the ProcessCommands method below and receives from home the list of transactions for its user.
public class home{
public static File XMLFile = new File("LogFile.xml");
Map<String,List<String>> UserMap= new ConcurrentHashMap<String,List<String>>();
String[] UserNames = new String[5000];
int numberOfUsers = 0;
try{
BufferedReader reader = new BufferedReader(new FileReader("test.txt"));
String line;
while ((line = reader.readLine()) != null)
{
String[] parsed = line.split(",|\\s+");
if(!parsed[2].equals("./testLOG")){
if(Utilities.checkUserExists(parsed[2], UserNames) == false){ //User does not already exist
System.out.println("New User: " + parsed[2]);
UserMap.put(parsed[2],new ArrayList<String>()); //Create list of transactions for new user
UserMap.get(parsed[2]).add(line); //Add First Item to new list
UserNames[numberOfUsers] = parsed[2]; //Add new user
numberOfUsers++;
}
else{ //User Already Existed
UserMap.get(parsed[2]).add(line);
}
}
}
reader.close();
} catch (IOException x) {
System.err.println(x);
}
//get start time
long startTime = new Date().getTime();
int tCount = numberOfUsers;
ExecutorService threadPool = Executors.newFixedThreadPool(tCount);
for(int i = 0; i < numberOfUsers; i++){
System.out.println("Starting Thread " + i + " for user " + UserNames[i]);
Runnable worker = new ProcessCommands(UserMap.get(UserNames[i]), UserNames[i], XMLFile);
threadPool.execute(worker);
}
threadPool.shutdown();
while(!threadPool.isTerminated()){
}
System.out.println("Finished all threads");
}
Here is the ProcessCommands class. The thread receives the list for its user and creates a marshaller. From what I understand, marshalling is not thread safe, so it is best to create one for each thread - is this the best way to do that?
When I create the marshallers, I know that each one (from each thread) will want to access the created file, causing conflicts; I used synchronized, is that correct?
As the thread iterates through its list, each line calls for a certain case. There are a lot, so I just made pseudo-cases for clarity. Each case calls the function below.
public class ProcessCommands implements Runnable{
private static final boolean DEBUG = false;
private List<String> list = null;
private String threadName;
private File XMLfile = null;
public Thread myThread;
public ProcessCommands(List<String> list, String threadName, File XMLfile){
this.list = list;
this.threadName = threadName;
this.XMLfile = XMLfile;
}
public void run(){
Date start = null;
int transactionNumber = 0;
String[] parsed = new String[8];
String[] quoteParsed = null;
String[] universalFormatCommand = new String[9];
String userCommand = null;
Connection connection = null;
Statement stmt = null;
Map<String, UserObject> usersMap = null;
Map<String, Stack<BLO>> buyMap = null;
Map<String, Stack<SLO>> sellMap = null;
Map<String, QLO> stockCodeMap = null;
Map<String, BTO> buyTriggerMap = null;
Map<String, STO> sellTriggerMap = null;
Map<String, USO> usersStocksMap = null;
String SQL = null;
int amountToAdd = 0;
int tempDollars = 0;
UserObject tempUO = null;
BLO tempBLO = null;
SLO tempSLO = null;
Stack<BLO> tempStBLO = null;
Stack<SLO> tempStSLO = null;
BTO tempBTO = null;
STO tempSTO = null;
USO tempUSO = null;
QLO tempQLO = null;
String stockCode = null;
String quoteResponse = null;
int usersDollars = 0;
int dollarAmountToBuy = 0;
int dollarAmountToSell = 0;
int numberOfSharesToBuy = 0;
int numberOfSharesToSell = 0;
int quoteStockInDollars = 0;
int shares = 0;
Iterator<String> itr = null;
int transactionCount = list.size();
System.out.println("Starting "+threadName+" - listSize = "+transactionCount);
//UO dollars, reserved
usersMap = new HashMap<String, UserObject>(3); //userName -> UO
//USO shares
usersStocksMap = new HashMap<String, USO>(); //userName+stockCode -> shares
//BLO code, timestamp, dollarAmountToBuy, stockPriceInDollars
buyMap = new HashMap<String, Stack<BLO>>(); //userName -> Stack<BLO>
//SLO code, timestamp, dollarAmountToSell, stockPriceInDollars
sellMap = new HashMap<String, Stack<SLO>>(); //userName -> Stack<SLO>
//BTO code, timestamp, dollarAmountToBuy, stockPriceInDollars
buyTriggerMap = new ConcurrentHashMap<String, BTO>(); //userName+stockCode -> BTO
//STO code, timestamp, dollarAmountToBuy, stockPriceInDollars
sellTriggerMap = new HashMap<String, STO>(); //userName+stockCode -> STO
//QLO timestamp, stockPriceInDollars
stockCodeMap = new HashMap<String, QLO>(); //stockCode -> QLO
//create user object and initialize stacks
usersMap.put(threadName, new UserObject(0, 0));
buyMap.put(threadName, new Stack<BLO>());
sellMap.put(threadName, new Stack<SLO>());
try {
//Marshaller marshaller = getMarshaller();
synchronized (this){
Marshaller marshaller = init.jc.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.setProperty(Marshaller.JAXB_FRAGMENT, true);
marshaller.marshal(LogServer.Root,XMLfile);
marshaller.marshal(LogServer.Root,System.out);
}
} catch (JAXBException M) {
M.printStackTrace();
}
Date timing = new Date();
//universalFormatCommand = new String[8];
parsed = new String[8];
//iterate through workload file
itr = this.list.iterator();
while(itr.hasNext()){
userCommand = (String) itr.next();
itr.remove();
parsed = userCommand.split(",|\\s+");
transactionNumber = Integer.parseInt(parsed[0].replaceAll("\\[", "").replaceAll("\\]", ""));
universalFormatCommand = Utilities.FormatCommand(parsed, parsed[0]);
if(transactionNumber % 100 == 0){
System.out.println(this.threadName + " - " +transactionNumber+ " - "+(new Date().getTime() - timing.getTime())/1000);
}
/*System.out.print("UserCommand " +transactionNumber + ": ");
for(int i = 0;i<8;i++)System.out.print(universalFormatCommand[i]+ " ");
System.out.print("\n");*/
//switch for user command
switch (parsed[1].toLowerCase()) {
case "One"
*Do Stuff"
LogServer.create_Log(universalFormatCommand, transactionNumber, CommandType.ADD);
break;
case "Two"
*Do Stuff"
LogServer.create_Log(universalFormatCommand, transactionNumber, CommandType.ADD);
break;
}
}
}
The function create_Log has multiple cases, so as before, for clarity I just left one. The case "QUOTE" only calls one object-creation function, but other cases can create multiple objects. The type 'log' is a complex XML type that defines all the other object types, so in each call to create_Log I create a log type called Root. The class 'log' generated by JAXB includes a function to create a list of objects. The statement:
Root.getUserCommandOrQuoteServerOrAccountTransaction().add(quote_QuoteType);
takes the root element I created, creates a list, and adds the newly created object 'quote_QuoteType' to that list. Before I added threading, this method successfully created a list of as many objects as I wanted and then marshalled them. So I'm pretty positive the bit in class 'LogServer' is not the issue. It is something to do with the marshalling and synchronization in the ProcessCommands class above.
public class LogServer{
public static log Root = new log();
public static QuoteServerType Log_Quote(String[] input, int TransactionNumber){
ObjectFactory factory = new ObjectFactory();
QuoteServerType quoteCall = factory.createQuoteServerType();
//**Populate the QuoteServerType object called quoteCall**
return quoteCall;
}
public static void create_Log(String[] input, int TransactionNumber, CommandType Command){
System.out.print("TRANSACTION "+TransactionNumber + " is " + Command + ": ");
for(int i = 0; i<input.length;i++) System.out.print(input[i] + " ");
System.out.print("\n");
switch(input[1]){
case "QUOTE":
System.out.print("QUOTE CASE");
QuoteServerType quote_QuoteType = Log_Quote(input,TransactionNumber);
Root.getUserCommandOrQuoteServerOrAccountTransaction().add(quote_QuoteType);
break;
}
}
So you wrote a lot of code, but have you tried whether it actually works? After a quick look I doubt it. You should test your code's logic part by part, not go all the way to the end. It seems you are just starting with Java. I would recommend practising first on simple single-threaded applications. Sorry if I sound harsh, but I will try to be constructive as well:
Per convention, class names start with a capital letter and variable names with a lowercase letter; you have it the other way around.
You should make a method in your home (Home) class rather than putting all your code in the static block.
You are reading the whole file into memory; you do not process it line by line. After home is initialized, literally the whole content of the file will be held in the UserMap variable. If the file is really large you will run out of heap memory. If you assume a large file then you cannot do this, and you have to redesign your app to store partial results somewhere. If your file is smaller than memory you could keep it like that (but you said it is large).
No need for UserNames; UserMap.containsKey will do the job.
Your thread pool size should be in the range of your core count, not the number of users, or you will get thread thrashing (if you have blocking operations in your code make tCount = 2 * processors; if not, keep it at the number of processors). Once one ProcessCommands finishes, the executor will start another one until all are done, and you will be using all your processor cores efficiently.
DO NOT use while(!threadPool.isTerminated()); this line will completely consume one processor because it checks constantly. Call awaitTermination instead (see the sketch after this list).
Your ProcessCommands has a few map variables that will only ever hold one entry because, as you said, each thread processes data from one user.
The synchronized(this) in ProcessCommands will not work, as each thread synchronizes on a different object (a different instance of ProcessCommands).
I believe creating a marshaller is thread safe (check it), so there is no need for synchronization at all.
You save your log (whatever it is) before you have done the actual processing of the transaction lists.
The marshalling will overwrite the content of the file with the current state of LogServer.Root. If it is shared between your ProcessCommands instances (it seems so), what is the point of saving it in each thread? Do it once you are finished.
You don't need itr.remove().
The log class (for the Root variable!) needs to be thread-safe, as all the threads will call operations on it (so the list inside the log class must be a concurrent list, etc.).
And so on.....
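A minimal sketch of the pool-sizing, awaitTermination, and shared-lock points above (names such as ProcessCommands, UserMap, UserNames and XMLFile follow the question's code; everything else is illustrative):
// Size the pool to the available cores, not to the number of users.
int cores = Runtime.getRuntime().availableProcessors();
ExecutorService threadPool = Executors.newFixedThreadPool(cores);
for (int i = 0; i < numberOfUsers; i++) {
    threadPool.execute(new ProcessCommands(UserMap.get(UserNames[i]), UserNames[i], XMLFile));
}
threadPool.shutdown();
try {
    // Block until all tasks finish instead of spinning on isTerminated()
    threadPool.awaitTermination(1, TimeUnit.HOURS);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
// If file access really must be serialized, synchronize on one shared object,
// for example a class-level lock, rather than on "this":
// synchronized (ProcessCommands.class) { ... marshal ... }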
I would recommend to:
Start with a simple single-threaded version that actually works.
Deal with processing line by line (store results for each user in a different file; you can keep a cache of transactions for recently used users so you are not writing to disk all the time - see Guava cache).
Process each user's transactions into your user log objects with multiple threads (again, if there are a lot you have to save them to disk rather than keep them all in memory).
Write code that combines the logs from different users into one (again, you may want to do this multithreaded), though it will be mostly IO operations, so there is not much to gain and it is trickier to do.
Good luck
I am very new to GeoTools. I would like to create a hex grid and save it to a SHP file. But something goes wrong along the way (the saved SHP file is empty). In debug mode I found that the grid is correctly created and contains a bunch of polygons that make sense. Writing those to a shapefile proves to be difficult. I followed the tutorial on the GeoTools website, but that does not quite do it yet. I suspect TYPE to be incorrectly defined, but could not find out how to define it correctly.
Any help on how to store the grid in a SHP file is highly appreciated.
ReferencedEnvelope gridBounds = new ReferencedEnvelope(xMin, xMax, yMin, yMax, DefaultGeographicCRS.WGS84);
// length of each hexagon edge
double sideLen = 0.5;
// max distance between vertices
double vertexSpacing = sideLen / 20;
SimpleFeatureSource grid = Grids.createHexagonalGrid(gridBounds, sideLen, vertexSpacing);
/*
* We use the DataUtilities class to create a FeatureType that will describe the data in our
* shapefile.
*
* See also the createFeatureType method below for another, more flexible approach.
*/
final SimpleFeatureType TYPE = createFeatureType();
/*
* Get an output file name and create the new shapefile
*/
File newFile = new File("D:/test/shape.shp");
ShapefileDataStoreFactory dataStoreFactory = new ShapefileDataStoreFactory();
Map<String, Serializable> params = new HashMap<String, Serializable>();
params.put("url", newFile.toURI().toURL());
params.put("create spatial index", Boolean.TRUE);
ShapefileDataStore newDataStore = (ShapefileDataStore) dataStoreFactory.createNewDataStore(params);
newDataStore.createSchema(TYPE);
/*
* You can comment out this line if you are using the createFeatureType method (at end of
* class file) rather than DataUtilities.createType
*/
newDataStore.forceSchemaCRS(DefaultGeographicCRS.WGS84);
/*
* Write the features to the shapefile
*/
Transaction transaction = new DefaultTransaction("create");
String typeName = newDataStore.getTypeNames()[0];
SimpleFeatureSource featureSource = newDataStore.getFeatureSource(typeName);
if (featureSource instanceof SimpleFeatureStore) {
SimpleFeatureStore featureStore = (SimpleFeatureStore) featureSource;
featureStore.setTransaction(transaction);
try {
featureStore.addFeatures(grid.getFeatures());
transaction.commit();
} catch (Exception problem) {
problem.printStackTrace();
transaction.rollback();
} finally {
transaction.close();
}
} else {
System.out.println(typeName + " does not support read/write access");
}
private static SimpleFeatureType createFeatureType() {
SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
builder.setName("Location");
builder.setCRS(DefaultGeographicCRS.WGS84); // <- Coordinate reference system
// add attributes in order
builder.add("Polygon", Polygon.class);
builder.length(15).add("Name", String.class); // <- 15 chars width for name field
// build the type
final SimpleFeatureType LOCATION = builder.buildFeatureType();
return LOCATION;
}
I trained an IBk classifier with some training data that I created manually as follows:
ArrayList<Attribute> atts = new ArrayList<Attribute>();
ArrayList<String> classVal = new ArrayList<String>();
classVal.add("C1");
classVal.add("C2");
atts.add(new Attribute("a"));
atts.add(new Attribute("b"));
atts.add(new Attribute("c"));
atts.add(new Attribute("d"));
atts.add(new Attribute("##class##", classVal));
Instances dataRaw = new Instances("TestInstances", atts, 0);
dataRaw.setClassIndex(dataRaw.numAttributes() - 1);
double[] instanceValue1 = new double[]{3,0,1,0,0};
dataRaw.add(new DenseInstance(1.0, instanceValue1));
double[] instanceValue2 = new double[]{2,1,1,0,0};
dataRaw.add(new DenseInstance(1.0, instanceValue2));
double[] instanceValue3 = new double[]{2,0,2,0,0};
dataRaw.add(new DenseInstance(1.0, instanceValue3));
double[] instanceValue4 = new double[]{1,3,0,0,1};
dataRaw.add(new DenseInstance(1.0, instanceValue4));
double[] instanceValue5 = new double[]{0,3,1,0,1};
dataRaw.add(new DenseInstance(1.0, instanceValue5));
double[] instanceValue6 = new double[]{0,2,1,1,1};
dataRaw.add(new DenseInstance(1.0, instanceValue6));
Then I build up the classifier:
IBk ibk = new IBk();
try {
ibk.buildClassifier(dataRaw);
} catch (Exception e) {
e.printStackTrace();
}
I want to create a new instance with an unlabeled class and classify it. I tried the following with no luck.
IBk ibk = new IBk();
try {
ibk.buildClassifier(dataRaw);
double[] values = new double[]{3,1,0,0,-1};
DenseInstance newInst = new DenseInstance(1.0,values);
double classif = ibk.classifyInstance(newInst);
System.out.println(classif);
} catch (Exception e) {
e.printStackTrace();
}
I just get the following error:
weka.core.UnassignedDatasetException: DenseInstance doesn't have access to a dataset!
at weka.core.AbstractInstance.classAttribute(AbstractInstance.java:98)
at weka.classifiers.AbstractClassifier.classifyInstance(AbstractClassifier.java:74)
at TextCategorizationTest.instancesWithDoubleValues(TextCategorizationTest.java:136)
at TextCategorizationTest.main(TextCategorizationTest.java:33)
Looks like I am doing something wrong while creating a new instance. How can I create an unlabeled instance exactly?
Thanks in Advance
You will see this error when you classify a new instance which is not associated with a dataset. You have to associate every new instance you create with an Instances object using setDataset.
//Make a place holder Instances
//If you already have access to one, you can skip this step
Instances dataset = new Instances("testdata", attr, 1);
dataset.setClassIndex(classIdx);
DenseInstance newInst = new DenseInstance(1.0,values);
//To associate your instance with Instances object, in this case dataset
newInst.setDataset(dataset);
After this you can classify newly created instance.
double classif = ibk.classifyInstance(newInst);
http://www.cs.tufts.edu/~ablumer/weka/doc/weka.core.Instance.html
Detailed Implementation Link
The problem is with this line:
double classif = ibk.classifyInstance(newInst);
When you try to classify newInst, Weka throws an exception because newInst has no Instances object (i.e., dataset) associated with it - thus it does not know anything about its class attribute.
You should first create a new Instances object similar to the dataRaw, add your unlabeled instance to it, set class index, and only then try classifying it, e.g.:
Instances dataUnlabeled = new Instances("TestInstances", atts, 0);
dataUnlabeled.add(newInst);
dataUnlabeled.setClassIndex(dataUnlabeled.numAttributes() - 1);
double classif = ibk.classifyInstance(dataUnlabeled.firstInstance());
See pages 203 - 204 of the WEKA documentation. That helped me a lot! (The Weka Manual is a PDF file located in your WEKA installation folder. Just open documentation.html and it will point you to the PDF manual.)
Copy-pasting some snippets of the code listings of Chapter 17 (Using the WEKA API / Creating datasets in memory) should help you solve the task.