I am very new to GeoTools. I would like to create a hex grid and save it to a SHP file, but something goes wrong along the way (the saved SHP file is empty). In debug mode I found that the grid is correctly created and contains a bunch of polygons that make sense; writing those to a shapefile is what proves difficult. I followed the tutorial on the GeoTools website, but that does not quite do it yet. I suspect that TYPE is incorrectly defined, but I could not find out how to define it correctly.
Any help on how to store the grid in a SHP file is highly appreciated.
ReferencedEnvelope gridBounds = new ReferencedEnvelope(xMin, xMax, yMin, yMax, DefaultGeographicCRS.WGS84);
// length of each hexagon edge
double sideLen = 0.5;
// max distance between vertices
double vertexSpacing = sideLen / 20;
SimpleFeatureSource grid = Grids.createHexagonalGrid(gridBounds, sideLen, vertexSpacing);
/*
* We use the DataUtilities class to create a FeatureType that will describe the data in our
* shapefile.
*
* See also the createFeatureType method below for another, more flexible approach.
*/
final SimpleFeatureType TYPE = createFeatureType();
/*
* Get an output file name and create the new shapefile
*/
File newFile = new File("D:/test/shape.shp");
ShapefileDataStoreFactory dataStoreFactory = new ShapefileDataStoreFactory();
Map<String, Serializable> params = new HashMap<String, Serializable>();
params.put("url", newFile.toURI().toURL());
params.put("create spatial index", Boolean.TRUE);
ShapefileDataStore newDataStore = (ShapefileDataStore) dataStoreFactory.createNewDataStore(params);
newDataStore.createSchema(TYPE);
/*
* You can comment out this line if you are using the createFeatureType method (at end of
* class file) rather than DataUtilities.createType
*/
newDataStore.forceSchemaCRS(DefaultGeographicCRS.WGS84);
/*
* Write the features to the shapefile
*/
Transaction transaction = new DefaultTransaction("create");
String typeName = newDataStore.getTypeNames()[0];
SimpleFeatureSource featureSource = newDataStore.getFeatureSource(typeName);
if (featureSource instanceof SimpleFeatureStore) {
SimpleFeatureStore featureStore = (SimpleFeatureStore) featureSource;
featureStore.setTransaction(transaction);
try {
featureStore.addFeatures(grid.getFeatures());
transaction.commit();
} catch (Exception problem) {
problem.printStackTrace();
transaction.rollback();
} finally {
transaction.close();
}
} else {
System.out.println(typeName + " does not support read/write access");
}
private static SimpleFeatureType createFeatureType() {
SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
builder.setName("Location");
builder.setCRS(DefaultGeographicCRS.WGS84); // <- Coordinate reference system
// add attributes in order
builder.add("Polygon", Polygon.class);
builder.length(15).add("Name", String.class); // <- 15 chars width for name field
// build the type
final SimpleFeatureType LOCATION = builder.buildFeatureType();
return LOCATION;
}
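For what it's worth, here is a minimal sketch (my own assumption, not the tutorial's code) of an alternative that sidesteps hand-writing TYPE: reuse the schema that the grid feature source already carries, so the shapefile's schema matches the features returned by grid.getFeatures().
private static void writeGrid(SimpleFeatureSource grid, File newFile) throws Exception {
    // Sketch: build the shapefile schema from the grid's own feature type so the
    // features added below match the store's schema exactly.
    Map<String, Serializable> params = new HashMap<String, Serializable>();
    params.put("url", newFile.toURI().toURL());
    params.put("create spatial index", Boolean.TRUE);

    ShapefileDataStoreFactory factory = new ShapefileDataStoreFactory();
    ShapefileDataStore store = (ShapefileDataStore) factory.createNewDataStore(params);
    store.createSchema(grid.getSchema()); // reuse the grid's schema instead of a hand-built TYPE

    String typeName = store.getTypeNames()[0];
    SimpleFeatureStore featureStore = (SimpleFeatureStore) store.getFeatureSource(typeName);
    Transaction transaction = new DefaultTransaction("create");
    featureStore.setTransaction(transaction);
    try {
        featureStore.addFeatures(grid.getFeatures());
        transaction.commit();
    } finally {
        transaction.close();
    }
}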
Related
I am working on classification of the ImageNet dataset with the AlexNet architecture. I am working on distributed systems for data streams, using the DeepLearning4j library. I have a problem loading the ImageNet data from a path on our HPC. My current (non-distributed) data-loading method is:
FileSplit fileSplit= new FileSplit(new File("/scratch/imagenet/ILSVRC2012/train"), NativeImageLoader.ALLOWED_FORMATS);
int imageHeightWidth = 224; //224x224 pixel input
int imageChannels = 3; //RGB
PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
ImageRecordReader rr = new ImageRecordReader(imageHeightWidth, imageHeightWidth, imageChannels, labelMaker);
System.out.println("initialization");
rr.initialize(fileSplit);
System.out.println("iterator");
DataSetIterator iter = new RecordReaderDataSetIterator.Builder(rr, minibatch)
.classification(1, 1000)
.preProcessor(new ImagePreProcessingScaler()) //For normalization of image values 0-255 to 0-1
.build();
System.out.println("data list creator");
List<DataSet> dataList = new ArrayList<>();
while (iter.hasNext()){
dataList.add(iter.next());
}
And this is my attempt to load the dataset via Spark. The labels list contains all the labels of the ImageNet dataset, but I didn't copy them all here:
JavaSparkContext sc = SparkContext.initSparkContext(useSparkLocal);
//load data just one time
System.out.println("load data");
List<String> labelsList = Arrays.asList("kit fox, Vulpes macrotis " , "English setter " , "Australian terrier ");
String folder= "/scratch/imagenet/ILSVRC2012/train/*";
File f = new File(folder);
String path = f.getPath();
path=folder+"/*";
JavaPairRDD<String, PortableDataStream> origData = sc.binaryFiles(path);
int imageHeightWidth = 224; //224x224 pixel input
int imageChannels = 3; //RGB
PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
ImageRecordReader rr = new ImageRecordReader(imageHeightWidth, imageHeightWidth, imageChannels, labelMaker);
System.out.println("initialization");
rr.setLabels(labelsList);
RecordReaderFunction rrf = new org.datavec.spark.functions.RecordReaderFunction(rr);
JavaRDD<List<Writable>> rdd = origData.map(rrf);
JavaRDD<DataSet> data = rdd.map(new DataVecDataSetFunction(1, 1000, false));
List<DataSet> collected = data.collect();
By the way, in the train directory there are 1000 folders (n01440764, n01755581, n02012849, n02097658 ...) in which the images are found.
I need this parallelization since loading the data by itself took around 26 hours, which is not efficient. So could you help me correct my attempt?
For Spark I would recommend pre-vectorizing all of the data and loading the NDArrays themselves directly. We cover this approach in our examples: https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-distributed-training-examples/
I would recommend this approach: just load the pre-created datasets using a map call after that, where ideally you set up the batches relative to the number of workers available. DataSets have save(..) and load(..) methods you can use.
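As a rough sketch of that save/load idea (the helper class, file naming, and paths are my own illustration, not code from the examples repo):
import java.io.File;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class PreVectorize {

    // Serialize every minibatch produced by an iterator to its own file (done once, up front).
    public static void saveBatches(DataSetIterator iter, File outDir) {
        outDir.mkdirs();
        int i = 0;
        while (iter.hasNext()) {
            iter.next().save(new File(outDir, "batch_" + (i++) + ".bin"));
        }
    }

    // Reload one saved minibatch; this is roughly what a Spark map call over file paths would do.
    public static DataSet loadBatch(File batchFile) {
        DataSet ds = new DataSet();
        ds.load(batchFile);
        return ds;
    }
}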
In order to implement this, consider using:
SparkDataUtils.createFileBatchesSpark(JavaRDD<String> filePaths, final String rootOutputDir, final int batchSize, @NonNull final org.apache.hadoop.conf.Configuration hadoopConfig)
This takes in file paths, an output directory on HDFS, a preconfigured batch size, and a Hadoop configuration for accessing your cluster.
Here is a snippet from the relevant Javadoc to get you started on some of the concepts:
JavaSparkContext sc = ...
SparkDl4jMultiLayer net = ...
String baseFileBatchDir = ...
JavaRDD<String> paths = org.deeplearning4j.spark.util.SparkUtils.listPaths(sc, baseFileBatchDir);

// Image record reader:
PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
ImageRecordReader rr = new ImageRecordReader(32, 32, 1, labelMaker);
rr.setLabels(<labels here>);

// Create DataSetLoader:
int batchSize = 32;
int numClasses = 1000;
DataSetLoader loader = new RecordReaderFileBatchLoader(rr, batchSize, 1, numClasses);

// Fit the network
net.fitPaths(paths, loader);
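Putting those pieces together, a hedged sketch of what the file-batch creation call itself might look like for your layout (the output directory and batch size are placeholders of mine; SparkDataUtils comes from the DL4J Spark module):
// Sketch: group the raw ImageNet files into serialized batches on HDFS once, up front.
JavaRDD<String> filePaths = org.deeplearning4j.spark.util.SparkUtils.listPaths(sc, "/scratch/imagenet/ILSVRC2012/train");
SparkDataUtils.createFileBatchesSpark(filePaths, "/scratch/imagenet/fileBatches", 32, sc.hadoopConfiguration());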
I am using the RTFTemplate API to create a report (.doc) from an .rtf template.
My .rtf template file has multiple text fields, apart from the dynamic cells. These text fields are supposed to be filled in manually after the doc file is generated.
A few of the text fields have the "Regular Text" format and the other text fields have the "Number" format.
If I keep the number format of the Number text fields as #,##0.00 (refer: Corrupt Number Format), then the output file gets corrupted.
But if I change the number format to 0.00, then the output comes out correctly.
Following is my Java code:
public class TestEncore {
private static Map<String, RTFDocument> transformedDocumentMap = new HashMap<String, RTFDocument>();
/**
* @param args
*/
public static void main(String[] args) throws Exception {
try {
String baseDirectory = "./";
String outDirectory = baseDirectory+"out"+File.separator;
String rtfSource = baseDirectory+"out2"+File.separator + "model.rtf";
String rtfTarget = outDirectory + "/sortie2.rtf";
// Create out Directory
File out = new File(outDirectory);
out.mkdirs();
HashMap<String,Object> valueMap=new HashMap<String,Object> ();
valueMap.put("date", Calendar.getInstance().getTime());
valueMap.put("name", "ABC DEF XYZ");
valueMap.put("age", 24);
/**
* Execute transformed RTFDocument
* and put it the instance of transformed RTFDocument
* into the Map (into cache)
*/
executeTransformedDocument(rtfSource,valueMap);
// 1. Get default RTFtemplateBuilder
RTFTemplateBuilder builder = RTFTemplateBuilder.newRTFTemplateBuilder();
// 2. Get RTFtemplate with default Implementation of template engine (Velocity)
RTFTemplate rtfTemplate = builder.newRTFTemplate();
// 3. Get the transformed document
// and NOT the rtf source
RTFDocument transformedDocument = (RTFDocument)transformedDocumentMap.get(rtfSource);
rtfTemplate.setTransformedDocument(transformedDocument);
VelocityTemplateEngineImpl templateEngine = new VelocityTemplateEngineImpl();
VelocityEngine velocityEngine = new VelocityEngine();
velocityEngine.setProperty("input.encoding ", "UTF-8");
velocityEngine.setProperty("output.encoding", "UTF-8");
velocityEngine.setProperty ("response.encoding", "UTF-8");
templateEngine.setVelocityEngine(velocityEngine);
rtfTemplate.setTemplateEngine(templateEngine);
OutputStreamWriter fileout = new OutputStreamWriter(new FileOutputStream(rtfTarget), "UTF-8");
rtfTemplate.merge(fileout);
} catch (Exception e) {
e.printStackTrace();
}
}
private static void executeTransformedDocument(String rtfSourceFile,Map<String,Object> templateValues) throws Exception {
// 1. Get default RTFtemplateBuilder
RTFTemplateBuilder builder = RTFTemplateBuilder.newRTFTemplateBuilder();
// 2. Get RTFtemplate with default Implementation of template engine (Velocity)
RTFTemplate rtfTemplate = builder.newRTFTemplate();
// 3 Load XML fields available and set it to the RTFTemplate
if(templateValues!=null&& !templateValues.isEmpty()){
Set<Entry<String,Object>> entrySet=templateValues.entrySet();
Iterator<Entry<String, Object>> iterator=entrySet.iterator();
while(iterator.hasNext()){
Entry<String, Object> entry=iterator.next();
rtfTemplate.put(entry.getKey(), entry.getValue());
}
}
// 3. Set the RTF model source
rtfTemplate.setTemplate(new File(rtfSourceFile));
// 4. Transform to get transformed RTFDocument
RTFDocument transformedDocument = rtfTemplate.transform();
// 5. Cache the transformed document into map
transformedDocumentMap.put(rtfSourceFile, transformedDocument);
}
}
Has anyone faced this issue?
Could it be related to the MS Office version? I am using MS Office 2013.
I have a project where I want to load in a given shapefile, and pick out polygons above a certain size before writing the results to a new shapefile. Maybe not the most efficient, but I've got code that successfully does all of that, right up to the point where it is supposed to write the shapefile. I get no errors, but the resulting shapefile has no usable data in it. I've followed as many tutorials as possible, but still I'm coming up blank.
The first bit of code is where I read in a shapefile, pick out the polygons I want, and put them into a feature collection. This part seems to work fine as far as I can tell.
public class ShapefileTest {
public static void main(String[] args) throws MalformedURLException, IOException, FactoryException, MismatchedDimensionException, TransformException, SchemaException {
File oldShp = new File("Old.shp");
File newShp = new File("New.shp");
//Get data from the original ShapeFile
Map<String, Object> map = new HashMap<String, Object>();
map.put("url", oldShp.toURI().toURL());
//Connect to the dataStore
DataStore dataStore = DataStoreFinder.getDataStore(map);
//Get the typeName from the dataStore
String typeName = dataStore.getTypeNames()[0];
//Get the FeatureSource from the dataStore
FeatureSource<SimpleFeatureType, SimpleFeature> source = dataStore.getFeatureSource(typeName);
SimpleFeatureCollection collection = (SimpleFeatureCollection) source.getFeatures(); //Get all of the features - no filter
//Start creating the new Shapefile
final SimpleFeatureType TYPE = createFeatureType(); //Calls a method that builds the feature type - tested and works.
DefaultFeatureCollection newCollection = new DefaultFeatureCollection(); //To hold my new collection
try (FeatureIterator<SimpleFeature> features = collection.features()) {
while (features.hasNext()) {
SimpleFeature feature = features.next(); //Get next feature
SimpleFeatureBuilder fb = new SimpleFeatureBuilder(TYPE); //Create a new SimpleFeature based on the original
Integer level = (Integer) feature.getAttribute(1); //Get the level for this feature
MultiPolygon multiPoly = (MultiPolygon) feature.getDefaultGeometry(); //Get the geometry collection
//First count how many new polygons we will have
int numNewPoly = 0;
for (int i = 0; i < multiPoly.getNumGeometries(); i++) {
double area = getArea(multiPoly.getGeometryN(i));
if (area > 20200) {
numNewPoly++;
}
}
//Now build an array of the larger polygons
Polygon[] polys = new Polygon[numNewPoly]; //Array of new geometies
int iPoly = 0;
for (int i = 0; i < multiPoly.getNumGeometries(); i++) {
double area = getArea(multiPoly.getGeometryN(i));
if (area > 20200) { //Write the new data
polys[iPoly] = (Polygon) multiPoly.getGeometryN(i);
iPoly++;
}
}
GeometryFactory gf = new GeometryFactory(); //Create a geometry factory
MultiPolygon mp = new MultiPolygon(polys, gf); //Create the MultiPolygonyy
fb.add(mp); //Add the geometry collection to the feature builder
fb.add(level);
fb.add("dBA");
SimpleFeature newFeature = SimpleFeatureBuilder.build( TYPE, new Object[]{mp, level,"dBA"}, null );
newCollection.add(newFeature); //Add it to the collection
}
At this point I have a collection that looks right - it has the correct bounds and everything. The next bit of code is where I put it into a new shapefile.
//Time to put together the new Shapefile
Map<String, Serializable> newMap = new HashMap<String, Serializable>();
newMap.put("url", newShp.toURI().toURL());
newMap.put("create spatial index", Boolean.TRUE);
DataStore newDataStore = DataStoreFinder.getDataStore(newMap);
newDataStore.createSchema(TYPE);
String newTypeName = newDataStore.getTypeNames()[0];
SimpleFeatureStore fs = (SimpleFeatureStore) newDataStore.getFeatureSource(newTypeName);
Transaction t = new DefaultTransaction("add");
fs.setTransaction(t);
fs.addFeatures(newCollection);
t.commit();
ReferencedEnvelope env = fs.getBounds();
}
}
I put in the very last line to check the bounds of the FeatureStore fs, and it comes back null. And indeed, when I load the newly created shapefile (which DOES get created and is about the right size), nothing shows up.
The solution actually had nothing to do with the code I posted - it had everything to do with my FeatureType definition. I did not include "the_geom" in my polygon feature type, so nothing was getting written to the file.
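For illustration, a sketch of what the corrected feature type could look like (the non-geometry attribute names and lengths are placeholders I chose for the level and "dBA" values used above; the point is naming the geometry attribute "the_geom"):
private static SimpleFeatureType createFeatureType() {
    SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
    builder.setName("FilteredPolygons");
    builder.setCRS(DefaultGeographicCRS.WGS84); // use the CRS of the source shapefile here
    builder.add("the_geom", MultiPolygon.class); // geometry attribute, named as shapefiles expect
    builder.add("LEVEL", Integer.class);
    builder.length(8).add("UNITS", String.class);
    return builder.buildFeatureType();
}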
I believe you are missing the step to finalize/close the file. Try adding this after the t.commit() line.
fs.close();
As an expedient alternative, you might try out the Shapefile dumper utility mentioned in the Shapefile DataStores docs. Using that may simplify your second code block into two or three lines.
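A rough sketch of that alternative, under the assumption that the ShapefileDumper API is as described in the docs (the target directory is a placeholder):
import java.io.File;
import java.io.IOException;
import org.geotools.data.shapefile.ShapefileDumper;
import org.geotools.data.simple.SimpleFeatureCollection;

public class DumpExample {
    public static void dumpToDirectory(SimpleFeatureCollection newCollection, File targetDir) throws IOException {
        targetDir.mkdirs();
        // Writes a shapefile (named after the feature type) into targetDir.
        ShapefileDumper dumper = new ShapefileDumper(targetDir);
        dumper.dump(newCollection);
    }
}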
I want to edit jpg files' properties like: comments, title, date taken, camera maker, etc.
I have found libraries to read these data. But I need a free library with examples to edit them.
I'm aware of Apache's imaging library (Sanselan), but I was not able to edit data with it. If you have previously used it yourself, I'd accept that as an answer only if you provide example code other than the one on their website, because even when I use their example I was not able to edit any property other than the GPS data. After I run the code, the file's Properties > Details still show the same values.
Thanks !
Note: I also tried JHeader (https://sourceforge.net/projects/jheader/), but running it as a process with the -cl option still did not change the properties list.
Apache Commons Imaging works for me.
I have extended the sample provided here.
So my client code looks like this:
public static void main(String[] args) throws ImageWriteException, ImageReadException, IOException {
new WriteExifMetadataExample().changeExifMetadata(new File("somefilename.jpg"), new File("result_file.jpg"));
}
and the extended method in WriteExifMetadataExample
public void changeExifMetadata(final File jpegImageFile, final File dst)
throws IOException, ImageReadException, ImageWriteException {
OutputStream os = null;
boolean canThrow = false;
try {
TiffOutputSet outputSet = null;
// note that metadata might be null if no metadata is found.
final ImageMetadata metadata = Imaging.getMetadata(jpegImageFile);
final JpegImageMetadata jpegMetadata = (JpegImageMetadata) metadata;
if (null != jpegMetadata) {
// note that exif might be null if no Exif metadata is found.
final TiffImageMetadata exif = jpegMetadata.getExif();
if (null != exif) {
// TiffImageMetadata class is immutable (read-only).
// TiffOutputSet class represents the Exif data to write.
//
// Usually, we want to update existing Exif metadata by
// changing
// the values of a few fields, or adding a field.
// In these cases, it is easiest to use getOutputSet() to
// start with a "copy" of the fields read from the image.
outputSet = exif.getOutputSet();
}
}
// if file does not contain any exif metadata, we create an empty
// set of exif metadata. Otherwise, we keep all of the other
// existing tags.
if (null == outputSet) {
outputSet = new TiffOutputSet();
}
{
// Example of how to add a field/tag to the output set.
//
// Note that you should first remove the field/tag if it already
// exists in this directory, or you may end up with duplicate
// tags. See above.
//
// Certain fields/tags are expected in certain Exif directories;
// Others can occur in more than one directory (and often have a
// different meaning in different directories).
//
// TagInfo constants often contain a description of what
// directories are associated with a given tag.
//
final TiffOutputDirectory exifDirectory = outputSet
.getOrCreateExifDirectory();
// make sure to remove old value if present (this method will
// not fail if the tag does not exist).
exifDirectory
.removeField(ExifTagConstants.EXIF_TAG_APERTURE_VALUE);
exifDirectory.add(ExifTagConstants.EXIF_TAG_APERTURE_VALUE,
new RationalNumber(3, 10));
}
{
// Example of how to add/update GPS info to output set.
// New York City
final double longitude = -74.0; // 74 degrees W (in Degrees East)
final double latitude = 40 + 43 / 60.0; // 40 degrees N (in Degrees
// North)
outputSet.setGPSInDegrees(longitude, latitude);
}
final TiffOutputDirectory exifDirectory = outputSet
.getOrCreateRootDirectory();
exifDirectory
.removeField(ExifTagConstants.EXIF_TAG_SOFTWARE);
exifDirectory.add(ExifTagConstants.EXIF_TAG_SOFTWARE,
"SomeKind");
os = new FileOutputStream(dst);
os = new BufferedOutputStream(os);
new ExifRewriter().updateExifMetadataLossless(jpegImageFile, os,
outputSet);
canThrow = true;
} finally {
IoUtils.closeQuietly(canThrow, os);
}
}
Please pay attention to the lines where I add the additional tag:
final TiffOutputDirectory exifDirectory = outputSet
.getOrCreateRootDirectory();
exifDirectory
.removeField(ExifTagConstants.EXIF_TAG_SOFTWARE);
exifDirectory.add(ExifTagConstants.EXIF_TAG_SOFTWARE,
"SomeKind");
As a result, the EXIF tag was properly added.
To change the comments tag you can do the following:
final TiffOutputDirectory exifDirectory = outputSet.getOrCreateRootDirectory();
exifDirectory.removeField(MicrosoftTagConstants.EXIF_TAG_XPCOMMENT);
exifDirectory.add(MicrosoftTagConstants.EXIF_TAG_XPCOMMENT, "SomeKind");
The full list of available constants is in the package:
org.apache.commons.imaging.formats.tiff.constants
Would an example like this work for you?
I'd assume using packages like org.apache.commons.imaging.util.IoUtils and org.apache.commons.imaging.Imaging would be of great help to you here.
You need to use the Microsoft tag constants (MicrosoftTagConstants) to edit these tags.
I am exporting a set of points to KML using the GeoTools KML encoder.
It mostly works: my points are expressed as lat, lon, alt and are exported as such, but Google Earth presents them clamped to the surface.
I read here that I need to set the altitudeMode attribute. How would I do that with the GeoTools encoder?
Here's my code:
/**
* Converts the given point array to KML.
* @param points The array of points to be converted.
* @param os The target output stream where the resulting KML is to be
* written. This method does not close the stream.
*/
public static void toKml(Point[] points, OutputStream os) {
SimpleFeatureTypeBuilder sftb = new SimpleFeatureTypeBuilder();
sftb.setName("points");
sftb.add("geomtery", Point.class, DefaultGeographicCRS.WGS84_3D);
SimpleFeatureType type = sftb.buildFeatureType();
DefaultFeatureCollection features = new DefaultFeatureCollection();
SimpleFeatureBuilder builder = new SimpleFeatureBuilder(type);
for (int i = 0; i < points.length; i++) {
builder.add(points[i]);
features.add(builder.buildFeature(Integer.toString(i)));
}
Encoder encoder = new Encoder(new KMLConfiguration());
encoder.setIndenting(true);
try {
encoder.encode(features, KML.kml, os);
} catch (IOException e) {
throw new RuntimeException("While converting " +
Arrays.toString(points) + " to KML.", e);
}
}