How can I set altitudeMode in KML encoding with GeoTools? - java

I am exporting a set of points to KML using the GeoTools KML encoder.
It works fine: my points are expressed as lat, lon, alt and are exported as such, but Google Earth presents them clamped to the surface.
I read here that I need to set the altitudeMode attribute. How would I do that with the GeoTools encoder?
Here's my code:
/**
* Converts the given point array to KML.
* @param points The array of points to be converted.
* @param os The target output stream where the resulting KML is to be
* written. This method does not close the stream.
*/
public static void toKml(Point[] points, OutputStream os) {
SimpleFeatureTypeBuilder sftb = new SimpleFeatureTypeBuilder();
sftb.setName("points");
sftb.add("geometry", Point.class, DefaultGeographicCRS.WGS84_3D);
SimpleFeatureType type = sftb.buildFeatureType();
DefaultFeatureCollection features = new DefaultFeatureCollection();
SimpleFeatureBuilder builder = new SimpleFeatureBuilder(type);
for (int i = 0; i < points.length; i++) {
builder.add(points[i]);
features.add(builder.buildFeature(Integer.toString(i)));
}
Encoder encoder = new Encoder(new KMLConfiguration());
encoder.setIndenting(true);
try {
encoder.encode(features, KML.kml, os);
} catch (IOException e) {
throw new RuntimeException("While converting " +
Arrays.toString(points) + " to KML.", e);
}
}
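If the GeoTools encoder turns out not to expose altitudeMode directly, one pragmatic workaround is to post-process the generated KML string and inject the element into each Point. This is only a sketch: it assumes the encoder emits un-namespaced `<Point>` tags, and relies on `<altitudeMode>absolute</altitudeMode>` being the standard KML element that tells Google Earth to honour the altitude component:

```java
public class KmlPostProcess {
    // Hypothetical workaround: inject altitudeMode into every Point element
    // of an already-encoded KML string. "absolute" makes Google Earth use
    // the altitude component instead of clamping points to the ground.
    public static String addAltitudeMode(String kml) {
        return kml.replace("<Point>",
                "<Point><altitudeMode>absolute</altitudeMode>");
    }
}
```

The same idea works with a DOM parse if the encoder namespaces its output; plain string replacement is just the shortest illustration.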


Load ImageNet Data via Spark for AlexNet

I am working on the classification of the ImageNet dataset with the AlexNet architecture, on distributed systems for data streams, using the DeepLearning4j library. I have a problem loading the ImageNet data from a path on our HPC. My current (non-distributed) data loading method is:
FileSplit fileSplit= new FileSplit(new File("/scratch/imagenet/ILSVRC2012/train"), NativeImageLoader.ALLOWED_FORMATS);
int imageHeightWidth = 224; //224x224 pixel input
int imageChannels = 3; //RGB
PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
ImageRecordReader rr = new ImageRecordReader(imageHeightWidth, imageHeightWidth, imageChannels, labelMaker);
System.out.println("initialization");
rr.initialize(fileSplit);
System.out.println("iterator");
DataSetIterator iter = new RecordReaderDataSetIterator.Builder(rr, minibatch)
.classification(1, 1000)
.preProcessor(new ImagePreProcessingScaler()) //For normalization of image values 0-255 to 0-1
.build();
System.out.println("data list creator");
List<DataSet> dataList = new ArrayList<>();
while (iter.hasNext()){
dataList.add(iter.next());
}
And this is my attempt to load the dataset via Spark. The labels list contains all the labels of the ImageNet dataset, but I didn't copy them all here:
JavaSparkContext sc = SparkContext.initSparkContext(useSparkLocal);
//load data just one time
System.out.println("load data");
List<String> labelsList = Arrays.asList("kit fox, Vulpes macrotis " , "English setter " , "Australian terrier ");
String folder= "/scratch/imagenet/ILSVRC2012/train/*";
File f = new File(folder);
String path = f.getPath();
path=folder+"/*";
JavaPairRDD<String, PortableDataStream> origData = sc.binaryFiles(path);
int imageHeightWidth = 224; //224x224 pixel input
int imageChannels = 3; //RGB
PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
ImageRecordReader rr = new ImageRecordReader(imageHeightWidth, imageHeightWidth, imageChannels, labelMaker);
System.out.println("initialization");
rr.setLabels(labelsList);
RecordReaderFunction rrf = new org.datavec.spark.functions.RecordReaderFunction(rr);
JavaRDD<List<Writable>> rdd = origData.map(rrf);
JavaRDD<DataSet> data = rdd.map(new DataVecDataSetFunction(1, 1000, false));
List<DataSet> collected = data.collect();
By the way, in the train directory there are 1000 folders (n01440764, n01755581, n02012849, n02097658 ...) in which we find the images.
I need this parallelization since loading the data itself took around 26 h, which is not efficient. So could you help me correct my attempt?
For Spark I would recommend pre-vectorizing all of the data and just loading the NDArrays themselves directly. We cover this approach in our examples: https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-distributed-training-examples/
I would recommend this approach: load the pre-created datasets using a map call after that, where ideally you set up the batches relative to the number of workers available. DataSet has save(..) and load(..) methods you can use.
In order to implement this consider using:
SparkDataUtils.createFileBatchesSpark(JavaRDD filePaths, final String rootOutputDir, final int batchSize, @NonNull final org.apache.hadoop.conf.Configuration hadoopConfig)
This takes in file paths, an output directory on HDFS, a pre-configured batch size and a Hadoop configuration for accessing your cluster.
Here is a snippet from the relevant javadoc to get you started on some of the concepts:
{@code
* JavaSparkContext sc = ...
* SparkDl4jMultiLayer net = ...
* String baseFileBatchDir = ...
* JavaRDD<String> paths = org.deeplearning4j.spark.util.SparkUtils.listPaths(sc, baseFileBatchDir);
*
* //Image record reader:
* PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
* ImageRecordReader rr = new ImageRecordReader(32, 32, 1, labelMaker);
* rr.setLabels(<labels here>);
*
* //Create DataSetLoader:
* int batchSize = 32;
* int numClasses = 1000;
* DataSetLoader loader = new RecordReaderFileBatchLoader(rr, batchSize, 1, numClasses);
*
* //Fit the network
* net.fitPaths(paths, loader);
* }
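The batching step that createFileBatchesSpark performs amounts, conceptually, to grouping file paths into fixed-size chunks. A plain-Java illustration of that idea (a hypothetical helper, not the DL4J/Spark implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class PathBatcher {
    // Group paths into batches of batchSize; the last batch may be smaller.
    // In the Spark version each batch becomes one file-batch on HDFS.
    public static List<List<String>> batchPaths(List<String> paths, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < paths.size(); i += batchSize) {
            batches.add(paths.subList(i, Math.min(i + batchSize, paths.size())));
        }
        return batches;
    }
}
```

Sizing batchSize relative to the number of workers, as suggested above, keeps every executor busy without producing many tiny remainder batches.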

WAVE file unexpected behaviour

I am currently trying to make a .wav file that will play SOS in Morse code.
The way I went about this: I have a byte array that contains one wave of a beep, which I repeated until I had the desired length.
After that I inserted those bytes into a new array and put bytes containing 00 (in hexadecimal) in between to separate the beeps.
If I add 1 beep to a WAVE file, it creates the file correctly (i.e. I get a beep of the desired length).
Here is a picture of the waves zoomed in (I opened the file in Audacity):
And here is a picture of the entire wave part:
The problem now is that when I add a second beep, the second one becomes completely distorted:
So this is what the entire file looks like now:
If I add another beep, it will be the correct beep again, If I add yet another beep it's going to be distorted again, etc.
So basically, every other wave is distorted.
Does anyone know why this happens?
Here is a link to a .txt file I generated containing the audio data of the wave file I created: byteTest19.txt
And here is a link to a .txt file that I generated using fileformat.info that is a hexadecimal representation of the bytes in the .wav file I generated containing 5 beeps (with two of them, the even beeps, being distorted): test3.txt
You can tell when a new beep starts because it is preceded by a lot of 00's.
As far as I can see, the bytes of the second beep do not differ from those of the first one, which is why I am asking this question.
If anyone knows why this happens, please help me. If you need more information, don't hesitate to ask. I hope I explained well what I'm doing, if not, that's my bad.
EDIT
Here is my code:
// First I calculate the byte array for a single beep.
// This file is just a single wave of the audio (up and down).
// (See below for the fileToAudioByteArray method. In my
// actual code I only take in half of the wave and then
// invert it, but I didn't want to make this too complicated;
// I'll put the full code below.)
final byte[] wave = fileToAudioByteArray(new File("path to my wav file"));
// This is how long that audio fragment is in seconds
final double secondsPerWave = 0.0022195;
// This is the amount of seconds a beep takes up (e.g. in the second picture)
double secondsPerBeep = 0.25;
final int amountWaveInBeep = (int) Math.ceil((secondsPerBeep/secondsPerWave));
// this is the byte array containing the audio data of
// 1 beep (see below for the repeatArray method)
final byte[] beep = repeatArray(wave, amountWaveInBeep);
// Now for the silence between the beeps
final byte silenceByte = 0x00;
// The amount of seconds a silence byte takes up
final double secondsPerSilenceByte = 0.00002;
// The amount of silence bytes I need to make one second
final int amountOfSilenceBytesForOneSecond = (int) (Math.ceil((1/secondsPerSilenceByte)));
// The space between 2 beeps will be 0.25 * secondsPerBeep
double amountOfBeepsEquivalent = 0.25;
// This is the amount of bytes of silence I need
// between my beeps
final int amntSilenceBytesPerSpaceBetween = (int) Math.ceil(secondsPerBeep * amountOfBeepsEquivalent * amountOfSilenceBytesForOneSecond);
final byte[] spaceBetweenBeeps = new byte[amntSilenceBytesPerSpaceBetween];
for (int i = 0; i < amntSilenceBytesPerSpaceBetween; i++) {
spaceBetweenBeeps[i] = silenceByte;
}
WaveFileBuilder wavBuilder = new WaveFileBuilder(WaveFileBuilder.AUDIOFORMAT_PCM, 1, 44100, 16);
// Adding all the beeps and silence to the WAVE file (test3.wav)
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(nextChar);
File outputFile = new File("path/test3.wav");
wavBuilder.saveFile(outputFile);
These are the 2 methods I used in the beginning:
/**
* Converts a wav file to a byte array containing its audio data
* @param file the wav file you want to convert
* @return the data part of a wav file in byte form
*/
public static byte[] fileToAudioByteArray(File file) throws UnsupportedAudioFileException, IOException {
AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
AudioFormat audioFormat = audioInputStream.getFormat();
int bytesPerSample = audioFormat.getFrameSize();
if (bytesPerSample == AudioSystem.NOT_SPECIFIED) {
bytesPerSample = -1;
}
long numSamples = audioInputStream.getFrameLength();
int numBytes = (int) (numSamples * bytesPerSample);
byte[] audioBytes = new byte[numBytes];
int numBytesRead;
while((numBytesRead = audioInputStream.read(audioBytes)) != -1);
return audioBytes;
}
/**
* Repeats an array into a new array x times
* @param array the array you want to copy x times
* @param repeat the amount of times you want to copy the array into the new array
* @return an array containing the content of {@code array} {@code repeat} times.
*/
public static byte[] repeatArray(byte[] array, int repeat) {
byte[] result = new byte[array.length * repeat];
for (int i = 0; i < result.length; i++) {
result[i] = array[i % array.length];
}
return result;
}
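An equivalent way to write repeatArray, copying whole chunks with System.arraycopy instead of a per-byte modulo loop (same behavior, just easier to read and a bit faster):

```java
public class ArrayRepeat {
    // Copy the source array 'repeat' times into one result array,
    // one whole-array chunk per iteration.
    public static byte[] repeatArray(byte[] array, int repeat) {
        byte[] result = new byte[array.length * repeat];
        for (int i = 0; i < repeat; i++) {
            System.arraycopy(array, 0, result, i * array.length, array.length);
        }
        return result;
    }
}
```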
Now for my WaveFileBuilder class:
/**
* <p> Constructs a WavFileBuilder which can be used to create wav files.</p>
*
* <p>The builder takes care of the subchunks based on the parameters that are given in the constructor.</p>
*
* <h3>Adding audio to the wav file</h3>
* There are 2 methods that can be used to add audio data to the WavFile.
* One is {@link #addBytes(byte[]) addBytes} which lets you directly inject bytes
* into the data section of the wav file.
* The other is {@link #addAudioFile(File) addAudioFile} which lets you add the audio
* data of another wav file to the wav file's audio data.
*
* @param audioFormat The audio format of the wav file {@link #AUDIOFORMAT_PCM PCM} = 1
* @param numChannels The number of channels the wav file will have {@link #NUM_CHANNELS_MONO MONO} = 1,
* {@link #NUM_CHANNELS_STEREO STEREO} = 2
* @param sampleRate The sample rate of the wav file in Hz (e.g. 22050, 44100, ...)
* @param bitsPerSample The amount of bits per sample. If 16 bits, the audio sample will contain 2 bytes per
* channel. (e.g. 8, 16, ...). This is important to take into account when using the
* {@link #addBytes(byte[]) addBytes} method to insert data into the wav file.
*/
public WaveFileBuilder(int audioFormat, int numChannels, int sampleRate, int bitsPerSample) {
this.audioFormat = audioFormat;
this.numChannels = numChannels;
this.sampleRate = sampleRate;
this.bitsPerSample = bitsPerSample;
// Subchunk 1 calculations
this.byteRate = this.sampleRate * this.numChannels * (this.bitsPerSample / 8);
this.blockAlign = this.numChannels * (this.bitsPerSample / 8);
}
/**
* Contains the audio data for the wav file that is being constructed
*/
byte[] audioBytes = null;
// For debug purposes
int counter = 0;
/**
* Adds audio data to the wav file from bytes
* <p>See the "see also" for the structure of the "Data" part of a wav file</p>
* @param audioBytes audio data
* @see Wave PCM Soundfile Format
*/
public void addBytes(byte[] audioBytes) throws IOException {
// This is all debug code that I used to make byteTest19.txt
// which I have linked in my question
String test1;
try {
test1 = (temp.bytesToHex(this.audioBytes, true));
} catch (NullPointerException e) {
test1 = "null";
}
File file = new File("/Users/jonaseveraert/Desktop/Morse Sound Test/debug/byteTest" + counter + ".txt");
file.createNewFile();
counter++;
BufferedWriter writer = new BufferedWriter(new FileWriter(file));
writer.write(test1);
writer.close();
// This is where the actual code starts //
if (this.audioBytes != null)
this.audioBytes = ArrayUtils.addAll(this.audioBytes, audioBytes);
else
this.audioBytes = audioBytes;
// End of code //
// This is for debug again
String test2 = (temp.bytesToHex(this.audioBytes, true));
File file2 = new File("/Users/jonaseveraert/Desktop/Morse Sound Test/debug/byteTest" + counter + ".txt");
file2.createNewFile();
counter++;
BufferedWriter writer2 = new BufferedWriter(new FileWriter(file2));
writer2.write(test2);
writer2.close();
}
/**
* Saves the file to the location of the {@code outputFile}.
* @param outputFile The file that will be outputted (not created yet), contains the path
* @return true if the file was created and written to successfully. Else false.
* @throws IOException If an I/O error occurred
*/
public boolean saveFile(File outputFile) throws IOException {
// subchunk2 calculations
//int numBytesInData = data.length()/2;
int numBytesInData = audioBytes.length;
int numSamples = numBytesInData / (2 * numChannels);
subchunk2Size = numSamples * numChannels * (bitsPerSample / 8);
// chunk calculation
chunkSize = 4 + (8 + subchunk1Size) + (8 + subchunk2Size);
// convert everything to hex string //
// Chunk descriptor
String f_chunkID = asciiStringToHexString(chunkID);
String f_chunkSize = intToLittleEndianHexString(chunkSize, 4);
String f_format = asciiStringToHexString(format);
// fmt subchunck
String f_subchunk1ID = asciiStringToHexString(subchunk1ID);
String f_subchunk1Size = intToLittleEndianHexString(subchunk1Size, 4);
String f_audioformat = intToLittleEndianHexString(audioFormat, 2);
String f_numChannels = intToLittleEndianHexString(numChannels, 2);
String f_sampleRate = intToLittleEndianHexString(sampleRate, 4);
String f_byteRate = intToLittleEndianHexString(byteRate, 4);
String f_blockAlign = intToLittleEndianHexString(blockAlign, 2);
String f_bitsPerSample = intToLittleEndianHexString(bitsPerSample, 2);
// data subchunk
String f_subchunk2ID = asciiStringToHexString(subchunk2ID);
String f_subchunk2Size = intToLittleEndianHexString(subchunk2Size, 4);
// data is stored in audioData
// Combine all hex data into one String (except for the
// audio data, which is passed in as a byte array)
final String AUDIO_BYTE_STREAM_STRING = f_chunkID + f_chunkSize + f_format
+ f_subchunk1ID + f_subchunk1Size + f_audioformat + f_numChannels + f_sampleRate + f_byteRate + f_blockAlign + f_bitsPerSample
+ f_subchunk2ID + f_subchunk2Size;
// Convert the hex data to a byte array
final byte[] BYTES = hexStringToByteArray(AUDIO_BYTE_STREAM_STRING);
// Create & write file
if (outputFile.createNewFile()) {
// Combine byte arrays
// This array now contains the full WAVE file
byte[] audioFileBytes = ArrayUtils.addAll(BYTES, audioBytes);
try (FileOutputStream fos = new FileOutputStream(outputFile)) {
fos.write(audioFileBytes); // Write the bytes into a file
}
catch (IOException e) {
logger.log(Level.SEVERE, "IOException occurred");
logger.log(Level.SEVERE, null, e);
return false;
}
logger.log(Level.INFO, "File created: " + outputFile.getName());
return true;
} else {
//System.out.println("File already exists.");
logger.log(Level.WARNING, "File already exists.");
}
return false;
}
}
// Aiding methods
/**
* Converts a string containing hexadecimal to bytes
* @param s e.g. 00014F
* @return an array of bytes e.g. {00, 01, 4F}
*/
private byte[] hexStringToByteArray(String s) {
int len = s.length();
byte[] bytes = new byte[len / 2];
for (int i = 0; i < len; i+= 2) {
bytes[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4) + Character.digit(s.charAt(i+1), 16));
}
return bytes;
}
/**
* Converts an int to a hexadecimal string in the little-endian format
* @param input an integer number
* @param numberOfBytes The number of bytes the integer is stored in
* @return The integer as a hexadecimal string in little-endian byte ordering
*/
private String intToLittleEndianHexString(int input, int numberOfBytes) {
String hexBigEndian = Integer.toHexString(input);
StringBuilder hexLittleEndian = new StringBuilder();
int amountOfNumberProcessed = 0;
for (int i = 0; i < hexBigEndian.length()/2f; i++) {
int endIndex = hexBigEndian.length() - (i * 2);
try {
hexLittleEndian.append(hexBigEndian.substring(endIndex-2, endIndex));
} catch (StringIndexOutOfBoundsException e ) {
hexLittleEndian.append(0).append(hexBigEndian.charAt(0));
}
amountOfNumberProcessed++;
}
while (amountOfNumberProcessed != numberOfBytes) {
hexLittleEndian.append("00");
amountOfNumberProcessed++;
}
return hexLittleEndian.toString();
}
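For reference, the same little-endian conversion can be written with java.nio.ByteBuffer, which handles the byte order and zero-padding directly. A sketch that matches the output of the method above (lowercase hex, padded to numberOfBytes) for non-negative values that fit in numberOfBytes:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LittleEndianHex {
    // Lay the int out little-endian and print each byte as two lowercase
    // hex digits, zero-padded up to numberOfBytes.
    public static String intToLittleEndianHexString(int input, int numberOfBytes) {
        ByteBuffer buf = ByteBuffer.allocate(Math.max(4, numberOfBytes))
                .order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(input);
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < numberOfBytes; i++) {
            hex.append(String.format("%02x", buf.get(i)));
        }
        return hex.toString();
    }
}
```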
/**
* Converts a string containing ascii to its hexadecimal notation
* @param input The string that has to be converted
* @return The string as hexadecimal in big-endian byte ordering
*/
private String asciiStringToHexString(String input) {
byte[] bytes = input.getBytes(StandardCharsets.US_ASCII);
StringBuilder hex = new StringBuilder();
for (byte b : bytes) {
String hexChar = String.format("%02X", b);
hex.append(hexChar);
}
return hex.toString().trim();
}
And lastly: if you want the full code, replace
final byte[] wave = fileToAudioByteArray(new File("path to my wav file")); in the beginning of my code with:
File morse_half_wave_file = new File("/Users/jonaseveraert/Desktop/Morse Sound Test/morse_audio_fragment.wav");
final byte[] half_wave = temp.fileToAudioByteArray(morse_half_wave_file);
final byte[] half_wave_inverse = temp.invertByteArray(half_wave);
// Then the wave byte array becomes:
final byte[] wave = ArrayUtils.addAll(half_wave, half_wave_inverse); // This ArrayUtils.addAll comes from the Apache Commons lang3 library
// And this is the invertByteArray method
/**
* Inverts the bits of each byte, e.g. 000101 becomes 111010
*/
public static byte[] invertByteArray(byte[] bytes) {
if (bytes == null) {
return null;
// TODO: throw empty byte array exception
}
byte[] outputArray = new byte[bytes.length];
for(int i = 0; i < bytes.length; i++) {
outputArray[i] = (byte) ~bytes[i];
}
return outputArray;
}
P.S. Here is the morse_audio_fragment.wav: morse_audio_fragment.wav
Thanks in advance,
Jonas
The problem
Your .wav file is Signed 16 bit Little Endian, Rate 44100 Hz, Mono, which means that each sample in the file is 2 bytes long and describes a signed amplitude. So you can copy-and-paste chunks of samples without any problems, as long as their lengths are divisible by 2 (your block size). Your silences are likely of odd length, so that instead of the expected
0x65 0x05 // 1st and 2nd byte of beep: actual expected sample
the 1st sample after a silence is interpreted as
0x00 0x65 // last byte of silence, 1st byte of actual beep: weird
and all subsequent pairs of bytes are read wrongly (the 2nd byte of each sample paired with the 1st byte of the next sample) due to this initial misalignment, until the next odd-length silence, when everything suddenly gets re-aligned correctly again.
How to fix it
Do not allow calls to addBytes that would add a number of bytes that is not a multiple of the block size.
public class WaveFileBuilder {
byte[] audioBytes = null;
// ... other attributes, methods, constructor
public void addBytes(byte[] audioBytes) throws IOException {
// ... debug code above, handle empty
// THIS SHOULD CHECK audioBytes IS MULTIPLE OF blockSize
this.audioBytes = ArrayUtils.addAll(this.audioBytes, audioBytes);
// ... debug code below
}
public boolean saveFile(File outputFile) throws IOException {
// ... prepare headers
// concatenate header (BYTES) and contents
byte[] audioFileBytes = ArrayUtils.addAll(BYTES, audioBytes);
// ... write out bytes
try (FileOutputStream fos = new FileOutputStream(outputFile)) {
fos.write(audioFileBytes);
}
// ...
}
}
First, you could have avoided some confusion by using different names for the attribute and the parameter. Second, you are constantly growing an array over and over; this is wasteful, turning code that could run in O(n) into O(n^2), because you are calling it like this:
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(spaceBetweenBeeps);
wavBuilder.addBytes(beep);
wavBuilder.addBytes(nextChar);
Instead, I propose the following:
public class WaveFileBuilder {
List<byte[]> chunks = new ArrayList<>();
// ... other attributes, methods, constructor
public void addBytes(byte[] audioBytes) throws IOException {
if ((audioBytes.length % blockAlign) != 0) {
throw new IllegalArgumentException("Trying to add a chunk that does not fit evenly; this would cause un-aligned blocks");
}
chunks.add(audioBytes);
}
public boolean saveFile(File outputFile) throws IOException {
// ... prepare headers
// ... write out bytes
try (FileOutputStream fos = new FileOutputStream(outputFile)) {
for (byte[] chunk : chunks) fos.write(chunk);
}
return true;
}
}
This version uses no concatenation at all, and should be much faster and easier to test. It also requires less memory, because it is not copying all those arrays around to concatenate them to each other.
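Relatedly, the odd-length silences disappear at the source if silence is generated as a whole number of samples rather than as a hand-computed byte count. A sketch, with an assumed helper name that is not part of the original WaveFileBuilder:

```java
public class Silence {
    // Build a silence of the requested duration as a whole number of
    // samples, so its byte length is always a multiple of blockAlign
    // (bytes per frame, e.g. 2 for 16-bit mono).
    public static byte[] silence(double seconds, int sampleRate, int blockAlign) {
        int samples = (int) Math.ceil(seconds * sampleRate);
        return new byte[samples * blockAlign]; // Java zero-fills new arrays
    }
}
```

For 44100 Hz 16-bit mono (blockAlign = 2), any duration then yields an even byte count, so addBytes can never break sample alignment.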

Corrupt output .doc is generated using net.sourceforge.rtf.RTFTemplate;

I am using RTFTemplate API to create report (.doc) using .rtf template.
My .rtf template file has multiple textfields, apart from the dynamic cells. These textfields are supposed to be filled in manually after the doc file is generated.
A few of the textfields have the "Regular Text" format and the other textfields have a "Number" format.
If I keep the number format of the Number textfields as #,##0.00 (refer: Corrupt Number Format), the output file gets corrupted.
But if I change the number format to 0.00, then the output comes out correctly.
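As a sanity check on the pattern itself: #,##0.00 is a standard java.text.DecimalFormat pattern (grouping separator plus two decimals), so the pattern is well-formed and the difference from 0.00 is only the grouping commas. A small demonstration (US symbols fixed so the output is locale-independent):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class PatternDemo {
    // Show what the two field formats from the question produce for a value.
    public static String format(String pattern, double value) {
        DecimalFormat df = new DecimalFormat(pattern,
                DecimalFormatSymbols.getInstance(Locale.US));
        return df.format(value);
    }
}
```

Since only the grouped pattern corrupts the file, the comma (which RTF may treat specially when the field is serialized) is the natural suspect, though that is an assumption rather than a confirmed RTFTemplate behavior.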
Following is my java code:
public class TestEncore {
private static Map<String, RTFDocument> transformedDocumentMap = new HashMap<String, RTFDocument>();
/**
* @param args
*/
public static void main(String[] args) throws Exception {
try {
String baseDirectory = "./";
String outDirectory = baseDirectory+"out"+File.separator;
String rtfSource = baseDirectory+"out2"+File.separator + "model.rtf";
String rtfTarget = outDirectory + "/sortie2.rtf";
// Create out Directory
File out = new File(outDirectory);
out.mkdirs();
HashMap<String,Object> valueMap=new HashMap<String,Object> ();
valueMap.put("date", Calendar.getInstance().getTime());
valueMap.put("name", "ABC DEF XYZ");
valueMap.put("age", 24);
/**
* Execute transformed RTFDocument
* and put it the instance of transformed RTFDocument
* into the Map (into cache)
*/
executeTransformedDocument(rtfSource,valueMap);
// 1. Get default RTFtemplateBuilder
RTFTemplateBuilder builder = RTFTemplateBuilder.newRTFTemplateBuilder();
// 2. Get RTFtemplate with default Implementation of template engine (Velocity)
RTFTemplate rtfTemplate = builder.newRTFTemplate();
// 3. Get the transformed document
// and NOT the rtf source
RTFDocument transformedDocument = (RTFDocument)transformedDocumentMap.get(rtfSource);
rtfTemplate.setTransformedDocument(transformedDocument);
VelocityTemplateEngineImpl templateEngine = new VelocityTemplateEngineImpl();
VelocityEngine velocityEngine = new VelocityEngine();
velocityEngine.setProperty("input.encoding", "UTF-8");
velocityEngine.setProperty("output.encoding", "UTF-8");
velocityEngine.setProperty("response.encoding", "UTF-8");
templateEngine.setVelocityEngine(velocityEngine);
rtfTemplate.setTemplateEngine(templateEngine);
OutputStreamWriter fileout = new OutputStreamWriter(new FileOutputStream(rtfTarget), "UTF-8");
rtfTemplate.merge(fileout);
} catch (Exception e) {
e.printStackTrace();
}
}
private static void executeTransformedDocument(String rtfSourceFile,Map<String,Object> templateValues) throws Exception {
// 1. Get default RTFtemplateBuilder
RTFTemplateBuilder builder = RTFTemplateBuilder.newRTFTemplateBuilder();
// 2. Get RTFtemplate with default Implementation of template engine (Velocity)
RTFTemplate rtfTemplate = builder.newRTFTemplate();
// 3 Load XML fields available and set it to the RTFTemplate
if(templateValues!=null&& !templateValues.isEmpty()){
Set<Entry<String,Object>> entrySet=templateValues.entrySet();
Iterator<Entry<String, Object>> iterator=entrySet.iterator();
while(iterator.hasNext()){
Entry<String, Object> entry=iterator.next();
rtfTemplate.put(entry.getKey(), entry.getValue());
}
}
// 3. Set the RTF model source
rtfTemplate.setTemplate(new File(rtfSourceFile));
// 4. Transform to get transformed RTFDocument
RTFDocument transformedDocument = rtfTemplate.transform();
// 5. Cache the transformed document into map
transformedDocumentMap.put(rtfSourceFile, transformedDocument);
}
}
Has anyone faced such issue?
Can it be related to the MSOffice version? I am using MSOffice 2013.

Building GeoTools Geometry "segments" from route coordinates

From a set of coordinates that define a route, I want to draw a Geometry that mimics a theoretical highway along that track, given an arbitrary width in meters (e.g. 20).
I don't know if GeoTools provides tools for constructing a Geometry from such inputs, so my initial idea is to split the track coordinates (several thousand) into pairs (coord0, coord1), (coord1, coord2), ..., (coordN-1, coordN) and, for each pair, build a rectangle assuming that the two points are the midpoints of its 20 m wide sides (as in Knowing two points of a rectangle, how can I figure out the other two?), then join all the resulting geometries.
Maybe it's overkill, but I haven't found a cheaper way to do this.
Any ideas would be greatly appreciated!
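For reference, the midpoint-rectangle construction described in the question reduces to offsetting both endpoints along the segment's unit normal. A planar sketch in plain Java (hypothetical helper; real lon/lat coordinates would first need projecting to a metric CRS, as the buffering answer below also does):

```java
public class SegmentRectangle {
    // Given segment endpoints (x0,y0)-(x1,y1) treated as midpoints of the
    // two short sides, return the 4 corners of a rectangle of the given
    // width, ordered so they form a closed ring.
    public static double[][] corners(double x0, double y0,
                                     double x1, double y1, double width) {
        double dx = x1 - x0, dy = y1 - y0;
        double len = Math.sqrt(dx * dx + dy * dy);
        double nx = -dy / len * (width / 2); // unit normal scaled to half-width
        double ny = dx / len * (width / 2);
        return new double[][] {
            {x0 + nx, y0 + ny}, {x0 - nx, y0 - ny},
            {x1 - nx, y1 - ny}, {x1 + nx, y1 + ny}
        };
    }
}
```

In practice the buffer approach below is simpler and also rounds the joints between segments, which the per-segment rectangles do not.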
The easy way to do this is to use a 20 m buffer around a line created from your points. So, some code like this to create a line from the points:
String[] wkt = {
"Point (-0.13666168754467312 50.81919869153657743)",
"Point (-0.13622277073931291 50.82205165077141373)",
"Point (-0.13545466632993253 50.82512406840893959)",
"Point (-0.13457683271921211 50.82687973563037787)",
"Point (-0.13413791591385191 50.82907431965718104)",
"Point (-0.13951464677951447 50.8294035072611976)",
"Point (-0.14346489802775639 50.83082998687861931)",
"Point (-0.14697623247063807 50.83072025767727808)",
"Point (-0.15004865010815954 50.83390240451614517)",
"Point (-0.15740050659794308 50.8349996965295432)",
"Point (-0.16486209228906662 50.83741373895902171)",
"Point (-0.17276259478555042 50.83894994777778464)",
"Point (-0.18549118214099652 50.8387304893751022)"
};
//build line
WKTReader2 reader = new WKTReader2();
GeometryFactory gf = new GeometryFactory();
Coordinate[] points = new Coordinate[wkt.length];
int i=0;
for(String w:wkt) {
Point p;
try {
p = (Point) reader.read(w);
points[i++]=p.getCoordinate();
} catch (ParseException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
LineString line = gf.createLineString(points);
SimpleFeatureBuilder builder = new SimpleFeatureBuilder(schema);
builder.set("locations", line);
SimpleFeature feature = builder.buildFeature("1");
And then a BufferLine method like:
public SimpleFeature bufferFeature(SimpleFeature feature, Measure<Double, Length> distance) {
// extract the geometry
GeometryAttribute gProp = feature.getDefaultGeometryProperty();
CoordinateReferenceSystem origCRS = gProp.getDescriptor().getCoordinateReferenceSystem();
Geometry geom = (Geometry) feature.getDefaultGeometry();
Geometry pGeom = geom;
MathTransform toTransform, fromTransform = null;
// reproject the geometry to a local projection
if (!(origCRS instanceof ProjectedCRS)) {
Point c = geom.getCentroid();
double x = c.getCoordinate().x;
double y = c.getCoordinate().y;
String code = "AUTO:42001," + x + "," + y;
// System.out.println(code);
CoordinateReferenceSystem auto;
try {
auto = CRS.decode(code);
toTransform = CRS.findMathTransform(DefaultGeographicCRS.WGS84, auto);
fromTransform = CRS.findMathTransform(auto, DefaultGeographicCRS.WGS84);
pGeom = JTS.transform(geom, toTransform);
} catch (MismatchedDimensionException | TransformException | FactoryException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
// buffer
Geometry out = buffer(pGeom, distance.doubleValue(SI.METER));
Geometry retGeom = out;
// reproject the geometry to the original projection
if (!(origCRS instanceof ProjectedCRS)) {
try {
retGeom = JTS.transform(out, fromTransform);
} catch (MismatchedDimensionException | TransformException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
// return a new feature containing the geom
SimpleFeatureType schema = feature.getFeatureType();
SimpleFeatureTypeBuilder ftBuilder = new SimpleFeatureTypeBuilder();
ftBuilder.setCRS(origCRS);
// ftBuilder.setDefaultGeometry("buffer");
ftBuilder.addAll(schema.getAttributeDescriptors());
ftBuilder.setName(schema.getName());
SimpleFeatureType nSchema = ftBuilder.buildFeatureType();
SimpleFeatureBuilder builder = new SimpleFeatureBuilder(nSchema);
List<Object> atts = feature.getAttributes();
for (int i = 0; i < atts.size(); i++) {
if (atts.get(i) instanceof Geometry) {
atts.set(i, retGeom);
}
}
SimpleFeature nFeature = builder.buildFeature(null, atts.toArray());
return nFeature;
}
/**
* create a buffer around the geometry, assumes the geometry is in the same
* units as the distance variable.
*
* @param geom
* a projected geometry.
* @param dist
* a distance for the buffer in the same units as the projection.
* @return the buffered geometry
*/
private Geometry buffer(Geometry geom, double dist) {
Geometry buffer = geom.buffer(dist);
return buffer;
}
The tricky part is reprojecting into a locally flat CRS so that you can use metres for the buffer size. If you know of a locally good projection you could just use that (in this case we could have used OSGB (EPSG:27700) for better results).
This gives the following map:

GeoTools: Saving grid to shp file

I am very new to GeoTools. I would like to create a hex grid and save it to a SHP file, but something goes wrong along the way (the saved SHP file is empty). In debug mode I found that the grid is correctly created and contains a bunch of polygons that make sense; writing them to a shapefile proves to be difficult. I followed the tutorial on the GeoTools website, but that does not quite do it yet. I suspect TYPE is incorrectly defined, but I could not find out how to define it correctly.
Any help of how to store the grid into a SHP file is highly appreciated.
ReferencedEnvelope gridBounds = new ReferencedEnvelope(xMin, xMax, yMin, yMax, DefaultGeographicCRS.WGS84);
// length of each hexagon edge
double sideLen = 0.5;
// max distance between vertices
double vertexSpacing = sideLen / 20;
SimpleFeatureSource grid = Grids.createHexagonalGrid(gridBounds, sideLen, vertexSpacing);
/*
* We use the DataUtilities class to create a FeatureType that will describe the data in our
* shapefile.
*
* See also the createFeatureType method below for another, more flexible approach.
*/
final SimpleFeatureType TYPE = createFeatureType();
/*
* Get an output file name and create the new shapefile
*/
File newFile = new File("D:/test/shape.shp");
ShapefileDataStoreFactory dataStoreFactory = new ShapefileDataStoreFactory();
Map<String, Serializable> params = new HashMap<String, Serializable>();
params.put("url", newFile.toURI().toURL());
params.put("create spatial index", Boolean.TRUE);
ShapefileDataStore newDataStore = (ShapefileDataStore) dataStoreFactory.createNewDataStore(params);
newDataStore.createSchema(TYPE);
/*
* You can comment out this line if you are using the createFeatureType method (at end of
* class file) rather than DataUtilities.createType
*/
newDataStore.forceSchemaCRS(DefaultGeographicCRS.WGS84);
/*
* Write the features to the shapefile
*/
Transaction transaction = new DefaultTransaction("create");
String typeName = newDataStore.getTypeNames()[0];
SimpleFeatureSource featureSource = newDataStore.getFeatureSource(typeName);
if (featureSource instanceof SimpleFeatureStore) {
SimpleFeatureStore featureStore = (SimpleFeatureStore) featureSource;
featureStore.setTransaction(transaction);
try {
featureStore.addFeatures(grid.getFeatures());
transaction.commit();
} catch (Exception problem) {
problem.printStackTrace();
transaction.rollback();
} finally {
transaction.close();
}
} else {
System.out.println(typeName + " does not support read/write access");
}
private static SimpleFeatureType createFeatureType() {
SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
builder.setName("Location");
builder.setCRS(DefaultGeographicCRS.WGS84); // <- Coordinate reference system
// add attributes in order
builder.add("Polygon", Polygon.class);
builder.length(15).add("Name", String.class); // <- 15 chars width for name field
// build the type
final SimpleFeatureType LOCATION = builder.buildFeatureType();
return LOCATION;
}
