I have an arbitrary number of coordinates for a polygon, taken from a shapefile:
-119.00072399999999 35.36158, -118.99903 35.361576, -118.999026 35.362579, -118.999023 35.363482, -118.999019 35.36432, -118.999408 35.364847999999995, -118.999406 35.365564, -118.999402 35.366516, -118.999398 35.367467999999995, -118.999394 35.368438, -118.999256 35.368438, -118.998232 35.368441
I now have to check if a point (33.63705, -112.17563) is inside this polygon.
My concern is that my coordinates don't fit into an int datatype.
Here is what I have tried:
import java.awt.Polygon;
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.commons.lang3.StringUtils;
import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;
import org.geotools.data.simple.SimpleFeatureCollection;
import org.geotools.data.simple.SimpleFeatureIterator;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.feature.DefaultFeatureCollection;
import org.geotools.feature.simple.SimpleFeatureBuilder;
import org.geotools.feature.simple.SimpleFeatureTypeBuilder;
import org.geotools.geometry.jts.JTSFactoryFinder;
import org.geotools.referencing.crs.DefaultGeographicCRS;
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.Point;
import org.opengis.feature.simple.SimpleFeature;
import org.opengis.feature.simple.SimpleFeatureType;
public class ReadShapeFile {
public static void main(String[] args) {
File file = new File("D:\\shapefile201806\\tl_2018_06_bg.shp");
try {
Map<String, String> connect = new HashMap<String, String>();
connect.put("url", file.toURI().toString());
DataStore dataStore = DataStoreFinder.getDataStore(connect);
String[] typeNames = dataStore.getTypeNames();
String typeName = typeNames[0];
System.out.println("Reading content : " + typeName);
SimpleFeatureSource featureSource = dataStore.getFeatureSource(typeName);
SimpleFeatureCollection collection = featureSource.getFeatures();
SimpleFeatureIterator iterator = collection.features();
try {
while (iterator.hasNext()) {
SimpleFeature feature = iterator.next();
String featureString = feature.toString();
List<String> polygonList = new ArrayList<String>();
String polygonCoordinates = StringUtils.substringBetween(featureString, "(((", ")))");
System.out.println(polygonCoordinates);
polygonList = Arrays.asList(polygonCoordinates.split(","));
SimpleFeatureTypeBuilder b = new SimpleFeatureTypeBuilder();
b.setName("MyFeatureType");
b.setCRS(DefaultGeographicCRS.WGS84);
b.add("location", Point.class);
final SimpleFeatureType TYPE = b.buildFeatureType();
SimpleFeatureBuilder featureBuilder = new SimpleFeatureBuilder(TYPE);
GeometryFactory geometryFactory = JTSFactoryFinder.getGeometryFactory();
SimpleFeature pointFeature = featureBuilder.buildFeature(null);
DefaultFeatureCollection featureCollection = new DefaultFeatureCollection("internal", TYPE);
featureCollection.add(pointFeature);
try {
Polygon polygon = new Polygon();
for (int i = 0; i < polygonList.size(); i++) {
String[] splitAxis = (polygonList.get(i).split("\\s+"));
polygon.addPoint(Integer.valueOf(splitAxis[0]), Integer.valueOf(splitAxis[1]));
}
boolean isInside = polygon.contains(33.63705, -112.17563);
System.out.println(isInside);
} catch (Exception e) {
e.printStackTrace();
}
}
} finally {
iterator.close();
}
} catch (Throwable e) {
}
}
}
I know that converting a double to a String and back to an Integer is not going to work anyway.
How can I check whether a point is inside the polygon when the coordinates are negative decimal values? Please help.
Using your SimpleFeature, you can call getDefaultGeometry() and get a Geometry object. Once you cast it to Geometry, there is a contains method that takes a Point.
Also, you don't want to use the java.awt.Polygon class. Instead, you'll want to use the org.locationtech.jts geometry classes.
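For example, here is a minimal sketch of that approach (reusing the shapefile path from the question and keeping error handling minimal): build a JTS Point with double coordinates and test it directly against each feature's default geometry, instead of parsing the feature's toString().
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;
import org.geotools.data.simple.SimpleFeatureIterator;
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.Point;
import org.opengis.feature.simple.SimpleFeature;
public class PointInPolygonCheck {
    public static void main(String[] args) throws Exception {
        File file = new File("D:\\shapefile201806\\tl_2018_06_bg.shp");
        Map<String, String> connect = new HashMap<String, String>();
        connect.put("url", file.toURI().toString());
        DataStore dataStore = DataStoreFinder.getDataStore(connect);
        // Shapefile geometries store x = longitude and y = latitude, so the test point
        // (lat 33.63705, lon -112.17563) becomes the coordinate (-112.17563, 33.63705).
        GeometryFactory geometryFactory = new GeometryFactory();
        Point point = geometryFactory.createPoint(new Coordinate(-112.17563, 33.63705));
        String typeName = dataStore.getTypeNames()[0];
        SimpleFeatureIterator iterator = dataStore.getFeatureSource(typeName).getFeatures().features();
        try {
            while (iterator.hasNext()) {
                SimpleFeature feature = iterator.next();
                // getDefaultGeometry() returns Object; the shapefile reader hands back a JTS Geometry.
                Geometry geometry = (Geometry) feature.getDefaultGeometry();
                if (geometry != null && geometry.contains(point)) {
                    System.out.println("Point is inside feature " + feature.getID());
                }
            }
        } finally {
            iterator.close();
            dataStore.dispose();
        }
    }
}
This avoids the java.awt.Polygon and int conversion entirely, since JTS works with double coordinates.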
Related
I am trying to generate an SVG from an external graphic link, but the output SVG image is not rendering properly. I generate the SVG using the code below:
package org.geotools.tutorial.quickstart;
import org.apache.batik.svggen.SVGGeneratorContext;
import org.apache.batik.svggen.SVGGraphics2D;
import org.geotools.factory.CommonFactoryFinder;
import org.geotools.feature.DefaultFeatureCollection;
import org.geotools.feature.simple.SimpleFeatureBuilder;
import org.geotools.feature.simple.SimpleFeatureTypeBuilder;
import org.geotools.geometry.jts.JTSFactoryFinder;
import org.geotools.geometry.jts.ReferencedEnvelope;
import org.geotools.map.FeatureLayer;
import org.geotools.map.Layer;
import org.geotools.map.MapContent;
import org.geotools.renderer.lite.StreamingRenderer;
import org.geotools.styling.*;
import org.locationtech.jts.geom.*;
import org.locationtech.jts.geom.Polygon;
import org.opengis.feature.simple.SimpleFeature;
import org.opengis.feature.simple.SimpleFeatureType;
import org.opengis.filter.FilterFactory2;
import org.opengis.filter.expression.Expression;
import org.w3c.dom.Document;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.transform.TransformerException;
import java.awt.*;
import java.awt.Dimension;
import java.io.*;
import java.net.URISyntaxException;
import java.net.URL;
public class SvgRendering {
static StyleFactory styleFactory = CommonFactoryFinder.getStyleFactory();
static FilterFactory2 filterFactory = CommonFactoryFinder.getFilterFactory2();
public static void main(String[] args) throws Exception {
Coordinate[] listOfPoints = new Coordinate[5];
listOfPoints[0] = new Coordinate(-73.82, 41.24);
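// listOfPoints[1] through listOfPoints[4] are presumably set in the real code; only the first coordinate is shown here.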
setupSVG(listOfPoints);
}
public static DefaultFeatureCollection createBoundingBox(Coordinate[] listofP){
SimpleFeatureTypeBuilder b = new SimpleFeatureTypeBuilder();
b.setName("MyFeatureType");
b.add("location", Polygon.class);
final SimpleFeatureType TYPE = b.buildFeatureType();
SimpleFeatureBuilder featureBuilder = new SimpleFeatureBuilder(TYPE);
GeometryFactory geometryFactory = JTSFactoryFinder.getGeometryFactory();
Polygon polygon = geometryFactory.createPolygon(listofP);
featureBuilder.add(polygon);
SimpleFeature feature = featureBuilder.buildFeature("polygon");
DefaultFeatureCollection featureCollection = new DefaultFeatureCollection("internal", TYPE);
featureCollection.add(feature); //Add feature 1
return featureCollection;
}
private static void setupSVG(Coordinate[] listofP)
throws IOException, ParserConfigurationException, URISyntaxException, TransformerException {
//URL url = new URL("http://localhost:8080/");
URL url = new URL("file:///C:/poc/mapreport/Map_marker.svg");
//File file = new File(baseFilePath + "mapreport\\Map_marker.svg");
//URL url = file.toURI().toURL();
PointSymbolizer symb = styleFactory.createPointSymbolizer();
ExternalGraphic eg = styleFactory.createExternalGraphic(url, "image/svg+xml");
symb.getGraphic().graphicalSymbols().add(eg);
Expression size = filterFactory.literal(54);
symb.getGraphic().setSize(size);
Rule rule = styleFactory.createRule();
rule.symbolizers().add(symb);
FeatureTypeStyle fts = styleFactory.createFeatureTypeStyle(rule);
Style style = styleFactory.createStyle();
style.featureTypeStyles().add(fts);
MapContent mc = new MapContent();
DefaultFeatureCollection boundingbox = createBoundingBox(listofP);
Layer layer = new FeatureLayer(boundingbox, style);
mc.addLayer(layer);
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
// Create an instance of org.w3c.dom.Document
Document document = db.getDOMImplementation().createDocument(null, "svg", null);
// Set up the map
SVGGeneratorContext ctx1 = SVGGeneratorContext.createDefault(document);
SVGGeneratorContext ctx = ctx1;
ctx.setComment("Generated by GeoTools2 with Batik SVG Generator");
SVGGraphics2D g2d = new SVGGraphics2D(ctx, true);
Dimension canvasSize = new Dimension(1024, 1024);
g2d.setSVGCanvasSize(canvasSize);
StreamingRenderer renderer = new StreamingRenderer();
renderer.setMapContent(mc);
Rectangle outputArea = new Rectangle(g2d.getSVGCanvasSize());
ReferencedEnvelope dataArea = mc.getMaxBounds();
dataArea.expandBy(5); // some of these have 0 size
renderer.paint(g2d, outputArea, dataArea);
File fileToSave = new File("C:\\poc\\markers.svg");
OutputStreamWriter osw = null;
try {
OutputStream out = new FileOutputStream(fileToSave);
osw = null;
osw = new OutputStreamWriter(out, "UTF-8");
g2d.stream(osw);
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} finally {
if (osw != null)
try {
osw.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
mc.dispose();
}
}
Expected output: (screenshot omitted)
Final output: (screenshot omitted)
In the final output, the SVG marker is getting cut off on the right-hand side.
Help would be appreciated.
Thanks in advance
This is different from the previous question.
This happens when the marker sits near the bounds of the layer, so the symbol extends past the rendered area and gets clipped. You can enlarge the layer's bounds manually in the layer configuration.
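If you are rendering the map yourself, as in the posted code, rather than through a server's layer configuration, a sketch of the same idea is to grow the envelope passed to StreamingRenderer by a margin before painting, so the 54-pixel marker no longer touches the edge of the rendered area. The 20% margin below is an arbitrary illustration value, not something GeoTools prescribes.
import org.geotools.geometry.jts.ReferencedEnvelope;
import org.geotools.map.MapContent;
final class RenderAreaUtil {
    private RenderAreaUtil() {
    }
    // Returns the map's full bounds expanded by a relative margin so symbols drawn
    // at the very edge of the data are not clipped by the output rectangle.
    static ReferencedEnvelope expandedDataArea(MapContent mc, double marginFraction) {
        ReferencedEnvelope dataArea = mc.getMaxBounds();
        double marginX = Math.max(dataArea.getWidth() * marginFraction, 0.01);
        double marginY = Math.max(dataArea.getHeight() * marginFraction, 0.01);
        dataArea.expandBy(marginX, marginY); // expandBy is inherited from the JTS Envelope class
        return dataArea;
    }
}
In setupSVG this would replace the mc.getMaxBounds() / dataArea.expandBy(5) lines, for example: renderer.paint(g2d, outputArea, RenderAreaUtil.expandedDataArea(mc, 0.2));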
I'm currently working on a speed test CLI and have successfully implemented it in Java with Jython. The issue is that the internet speed is always calculated on the server side, and I want the test to consume the user's (client-side) network resources instead. Does anybody know the correct way to implement this?
The Java code is as follows:
package com.speedtest.serviceimpl;
import java.io.InputStream;
import java.io.StringWriter;
import javax.servlet.http.HttpServletRequest;
import org.json.JSONException;
import org.json.JSONObject;
import org.python.util.PythonInterpreter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import com.speedtest.dto.SpeedTestResponse;
import com.speedtest.service.SpeedTestJythonService;
@Service
public class JythonService implements SpeedTestJythonService {
private static final Logger Log =LoggerFactory.getLogger(JythonService.class);
PythonInterpreter pythonInterpreter;
public JythonService() {
pythonInterpreter = new PythonInterpreter();
}
@Override
public SpeedTestResponse execMethodInPyClass(InputStream inputStream,HttpServletRequest request) {
try {
StringWriter stringWriter = new StringWriter();
Double upload_speed = new Double(0.0);
Double download_speed = new Double(0.0);
Double download_bytes = new Double(0.0);
Double upload_bytes = new Double(0.0);
String service_provider_name = new String();
pythonInterpreter.setOut(stringWriter);
pythonInterpreter.execfile(inputStream);
System.out.println("Before Response==> "+stringWriter.toString());
String res = stringWriter.toString().replace("|", "");
System.out.println("Aftr Response==> "+res);
JSONObject obj = new JSONObject(res);
Double download = obj.getDouble("download");
Double upload = obj.getDouble("upload");
Double ping = obj.getDouble("ping");
Long bytes_sent = obj.getLong("bytes_sent");
Long bytes_received = obj.getLong("bytes_received");
JSONObject json_server = obj.getJSONObject("server");
String name = json_server.getString("name");
String country_name = json_server.getString("country");
JSONObject json_client = obj.getJSONObject("client");
String isp = json_client.getString("isp");
String ip = json_client.getString("ip");
String country_code = json_client.getString("country");
String isprating = json_client.getString("isprating");
Log.info("obj", obj.toString());
Log.info("download", download.toString());
Log.info("upload", upload.toString());
Log.info("ping", ping.toString());
Log.info("bytes_sent", bytes_sent.toString());
Log.info("bytes_received", bytes_received.toString());
download_speed = (download / 1000.0 / 1000.0);
upload_speed = (upload / 1000.0 / 1000.0);
download_bytes = bytes_received.doubleValue();
upload_bytes = bytes_sent.doubleValue();
SpeedTestResponse response = new SpeedTestResponse();
response.setDownload(download_speed);
response.setIspName(isp);
response.setPing(ping);
response.setUpload(upload_speed);
return response;
} catch (JSONException e) {
e.printStackTrace();
throw new RuntimeException(e.getMessage());
}
}
}
The Python code is SpeedTest.py (not included here).
I am facing an issue where some search keywords are not highlighted in Chinese documents. Due to confidentiality concerns I am not providing the actual PDF. The search keywords are 1) 亿元或 and 2) 收入亿来源. Please find the PDF document I tested at the pdfpath link, and the actual result at the ActualResult link. I have already posted about this issue in the following link, but some of the keywords are still not highlighted properly in a few Chinese documents. Kindly provide your inputs on highlighting the search keywords I mentioned.
import java.awt.Color;
import java.awt.Desktop;
import java.awt.geom.Rectangle2D;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.net.URL;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.io.BufferedInputStream;
import java.io.File;
import org.pdfclown.documents.Page;
import org.pdfclown.documents.contents.ITextString;
import org.pdfclown.documents.contents.TextChar;
import org.pdfclown.documents.contents.colorSpaces.DeviceRGBColor;
import org.pdfclown.documents.interaction.annotations.TextMarkup;
import org.pdfclown.documents.interaction.annotations.TextMarkup.MarkupTypeEnum;
import org.pdfclown.files.SerializationModeEnum;
import org.pdfclown.util.math.Interval;
import org.pdfclown.util.math.geom.Quad;
import org.pdfclown.tools.TextExtractor;
public class pdfclown2 {
private static int count;
public static void main(String[] args) throws IOException {
highlight("ebook.pdf","C:\\Users\\Downloads\\6.pdf");
System.out.println("OK");
}
private static void highlight(String inputPath, String outputPath) throws IOException {
URL url = new URL(inputPath);
InputStream in = new BufferedInputStream(url.openStream());
org.pdfclown.files.File file = null;
try {
file = new org.pdfclown.files.File("C:\\Users\\Desktop\\pdf\\test123.pdf");
Map<String, String> m = new HashMap<String, String>();
m.put("亿元或","hi");
m.put("收入亿来","hi");
System.out.println("map size"+m.size());
long startTime = System.currentTimeMillis();
// 2. Iterating through the document pages...
TextExtractor textExtractor = new TextExtractor(true, true);
for (final Page page : file.getDocument().getPages()) {
Map<Rectangle2D, List<ITextString>> textStrings = textExtractor.extract(page);
for (Map.Entry<String, String> entry : m.entrySet()) {
Pattern pattern;
String searchKey = entry.getKey();
final String translationKeyword = entry.getValue();
/*
if ((searchKey.contains(")") && searchKey.contains("("))
|| (searchKey.contains("(") && !searchKey.contains(")"))
|| (searchKey.contains(")") && !searchKey.contains("(")) || searchKey.contains("?")
|| searchKey.contains("*") || searchKey.contains("+")) {
pattern = Pattern.compile(Pattern.quote(searchKey), Pattern.CASE_INSENSITIVE);
}
else*/
pattern = Pattern.compile(searchKey, Pattern.CASE_INSENSITIVE);
// 2.1. Extract the page text!
//System.out.println(textStrings.toString().indexOf(entry.getKey()));
// 2.2. Find the text pattern matches!
final Matcher matcher = pattern.matcher(TextExtractor.toString(textStrings));
// 2.3. Highlight the text pattern matches!
textExtractor.filter(textStrings, new TextExtractor.IIntervalFilter() {
public boolean hasNext() {
// System.out.println(matcher.find());
// if(key.getMatchCriteria() == 1){
if (matcher.find()) {
return true;
}
/*
* } else if(key.getMatchCriteria() == 2) { if
* (matcher.hitEnd()) { count++; return true; } }
*/
return false;
}
public Interval<Integer> next() {
return new Interval<Integer>(matcher.start(), matcher.end());
}
public void process(Interval<Integer> interval, ITextString match) {
// Defining the highlight box of the text pattern
// match...
System.out.println(match);
/* List<Quad> highlightQuads = new ArrayList<Quad>();
{
Rectangle2D textBox = null;
for (TextChar textChar : match.getTextChars()) {
Rectangle2D textCharBox = textChar.getBox();
if (textBox == null) {
textBox = (Rectangle2D) textCharBox.clone();
} else {
if (textCharBox.getY() > textBox.getMaxY()) {
highlightQuads.add(Quad.get(textBox));
textBox = (Rectangle2D) textCharBox.clone();
} else {
textBox.add(textCharBox);
}
}
}
textBox.setRect(textBox.getX(), textBox.getY(), textBox.getWidth(), textBox.getHeight());
highlightQuads.add(Quad.get(textBox));
}*/
List<Quad> highlightQuads = new ArrayList<Quad>();
List<TextChar> textChars = match.getTextChars();
Rectangle2D firstRect = textChars.get(0).getBox();
Rectangle2D lastRect = textChars.get(textChars.size()-1).getBox();
Rectangle2D rect = firstRect.createUnion(lastRect);
highlightQuads.add(Quad.get(rect));
// subtype can be Highlight, Underline, StrikeOut, Squiggly
new TextMarkup(page, highlightQuads, translationKeyword, MarkupTypeEnum.Highlight);
}
public void remove() {
throw new UnsupportedOperationException();
}
});
}
}
SerializationModeEnum serializationMode = SerializationModeEnum.Standard;
file.save(new java.io.File(outputPath), serializationMode);
System.out.println("file created");
long endTime = System.currentTimeMillis();
System.out.println("seconds take for execution is:"+(endTime-startTime)/1000);
} catch (Exception e) {
e.printStackTrace();
}
finally{
in.close();
}
}
}
Indeed, when searching for "亿元或", the resulting highlight is somewhat wrong (see the screenshot, omitted here).
The cause is a PDF Clown bug. When it parses a composite font (aka Type 0 font), it expects the DW (default width) entry in the Type 0 font base dictionary while it is specified to be in the CIDFont subdictionary!
In case of the document at hand the widths of most characters, in particular of the Chinese characters, are not given explicitly and, therefore, default to that DW value. As this value cannot be determined properly due to the bug mentioned above, an average over the explicitly given widths is used, and this average happens to be merely ¾ of the correct value. Thus, the highlighted area is too short.
You can fix this bug in the CompositeFont class (package org.pdfclown.documents.contents.fonts) at the end of the method onLoad. Simply replace
PdfInteger defaultWidthObject = (PdfInteger)getBaseDataObject().get(PdfName.DW);
by
PdfInteger defaultWidthObject = (PdfInteger)getCIDFontDictionary().get(PdfName.DW);
With this change, the highlighting now covers the full match (result screenshot omitted).
This is my class to fetch the list of OHLC (open, high, low, close) values, plus the volume and date. I keep a separate ArrayList of each of these for every stock symbol, and I use my local API to fetch the data. To perform all these calculations I have used the ta4j library for Java.
package com.infodev.util;
import com.google.gson.Gson;
import com.infodev.Model.Data;
import com.infodev.Model.Find;
import com.infodev.Pojo.RequestForTechnicalCalculation;
import org.apache.log4j.Logger;
import org.json.simple.JSONObject;
import org.springframework.http.*;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.web.client.RestTemplate;
import java.math.BigDecimal;
public class ApiTestData {
static Logger logger = Logger.getLogger(ApiTestData.class);
private static Data[] a;
public ApiTestData(RequestForTechnicalCalculation requestForTechnicalCalculation) throws Exception {
//setting request body
JSONObject jsonObject = new JSONObject();
jsonObject.put("sectorId", requestForTechnicalCalculation.getSectorId());
//setting request headers
HttpHeaders httpHeaders = new HttpHeaders();
httpHeaders.setContentType(MediaType.APPLICATION_JSON);
//setting httpEntity as the request for server post request
HttpEntity<?> httpEntity = new HttpEntity<>(jsonObject.toString(), httpHeaders);
//installing restTemplate
RestTemplate restTemplate = new RestTemplate();
restTemplate.getMessageConverters().add(new MappingJackson2HttpMessageConverter());
ResponseEntity<Find> returnedObject = restTemplate.exchange("http://localhost:8081/pull365", HttpMethod.POST, httpEntity, Find.class);
a = returnedObject.getBody().getData();
logger.info("ApiData " + new Gson().toJson(a));
}
public int getDataSize() {
return a.length;
}
public BigDecimal[] getOpen(int index) {
return a[index].getOpen();
}
public BigDecimal[] getHigh(int index) {
return a[index].getHigh();
}
public BigDecimal[] getLow(int index) {
return a[index].getLow();
}
public BigDecimal[] getClose(int index) {
return a[index].getClose();
}
public BigDecimal[] getVolume(int index) {
return a[index].getVolume();
}
public String[] getDates(int index) {
return a[index].getDates();
}
public String getSymbols(int index) {
logger.info("stock name " +new Gson().toJson(a[index].getStockName()));
return a[index].getStockName();
}
}
This is my calculation part that gets the RSI values. I have calculated other indicators as well, and they are exactly correct according to my manual calculations, but the problem seems to be in the RSI calculation.
package com.infodev.Services.Indicators;
import com.infodev.Pojo.RequestForTechnicalCalculation;
import com.infodev.util.ApiTestData;
import eu.verdelhan.ta4j.Decimal;
import eu.verdelhan.ta4j.Tick;
import eu.verdelhan.ta4j.TimeSeries;
import eu.verdelhan.ta4j.indicators.candles.LowerShadowIndicator;
import eu.verdelhan.ta4j.indicators.helpers.*;
import eu.verdelhan.ta4j.indicators.oscillators.CMOIndicator;
import eu.verdelhan.ta4j.indicators.oscillators.PPOIndicator;
import eu.verdelhan.ta4j.indicators.oscillators.StochasticOscillatorDIndicator;
import eu.verdelhan.ta4j.indicators.oscillators.StochasticOscillatorKIndicator;
import eu.verdelhan.ta4j.indicators.simple.*;
import eu.verdelhan.ta4j.indicators.statistics.*;
import eu.verdelhan.ta4j.indicators.trackers.*;
import eu.verdelhan.ta4j.indicators.trackers.bollinger.BollingerBandWidthIndicator;
import eu.verdelhan.ta4j.indicators.trackers.bollinger.BollingerBandsLowerIndicator;
import eu.verdelhan.ta4j.indicators.trackers.bollinger.BollingerBandsMiddleIndicator;
import eu.verdelhan.ta4j.indicators.trackers.bollinger.BollingerBandsUpperIndicator;
import eu.verdelhan.ta4j.indicators.volatility.MassIndexIndicator;
import eu.verdelhan.ta4j.indicators.volume.ChaikinMoneyFlowIndicator;
import eu.verdelhan.ta4j.indicators.volume.OnBalanceVolumeIndicator;
import org.apache.log4j.Logger;
import org.joda.time.DateTime;
import org.joda.time.Period;
import org.springframework.stereotype.Service;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.math.BigDecimal;
import java.text.DecimalFormat;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
@Service
public class IndicatorServiceImpl implements IndicatorService {
static Logger logger = Logger.getLogger(IndicatorServiceImpl.class);
private static DecimalFormat df = new DecimalFormat("###,###.##");
private static final SimpleDateFormat DATE_FORMAT = new SimpleDateFormat("yyyy-MM-dd");
List<Tick> ticks;
List<Tick> tickList;
TimeSeries series;
ClosePriceIndicator closePrice;
SMAIndicator shortSma;
SMAIndicator longSma;
EMAIndicator shortEma;
EMAIndicator longEma;
RSIIndicator rsi;
StandardDeviationIndicator sd;
MACDIndicator macd;
BollingerBandsMiddleIndicator bbm;
BollingerBandsLowerIndicator bbl;
BollingerBandsUpperIndicator bbh;
BollingerBandWidthIndicator bbw;
ApiTestData apiData;
String symbol;
String[] date;
BigDecimal[] volume;
BigDecimal[] close;
BigDecimal[] low;
BigDecimal[] high;
BigDecimal[] open;
@Override
public List<Map<Object, Object>> getIndicators(RequestForTechnicalCalculation requestForTechnicalCalculation) {
System.out.println("service state");
List<Map<Object, Object>> finalList = new ArrayList<>();
try {
apiData = new ApiTestData(requestForTechnicalCalculation);
logger.info("----" + apiData.getDataSize());
for (int i = 0; i < apiData.getDataSize(); i++) {
logger.info("----" + i);
// getting the symbol from the api
symbol = apiData.getSymbols(i);
date = apiData.getDates(i);
volume = apiData.getVolume(i);
close = apiData.getClose(i);
low = apiData.getLow(i);
high = apiData.getHigh(i);
open = apiData.getOpen(i);
if (date.length == 0 || volume.length == 0 || close.length == 0 ||
low.length == 0 || high.length == 0 || open.length == 0) {
finalList.add(makeEmptyObject());
} else {
makeCalculation(i);
finalList.add(makeIndicatorObject());
}
}
//return finalList;
} catch (Exception e) {
e.printStackTrace();
finalList.add(makeEmptyObject());
}
return finalList;
}
private void makeCalculation(int ii) throws ParseException {
//instating tick to change the ohlc to Tick class array
ticks = new ArrayList<>();
logger.info("----" + ticks.size());
for (int i = 0; i < close.length; i++) {
this.ticks.add(new Tick(new DateTime(DATE_FORMAT.parse(date[i])), open[i].doubleValue(), high[i].doubleValue()
, low[i].doubleValue(), close[i].doubleValue(), volume[i].doubleValue()));
}
//converting the array to the list of tick
//generating the time Series of the sample data
series = new TimeSeries(apiData.getSymbols(ii), ticks);
if (series == null) {
throw new IllegalArgumentException("Series cannot be null");
} else {
//close price indicator
closePrice = new ClosePriceIndicator(this.series);
logger.info("ClosePrice: " + closePrice.getValue(series.getEnd()));
// Simple moving averages
shortSma = new SMAIndicator(closePrice, 5);
logger.info("shortSMA: " + shortSma.getValue(series.getEnd()));
longSma = new SMAIndicator(closePrice, 20);
logger.info("longSMA: " + longSma.getValue(series.getEnd()));
// Exponential moving averages
shortEma = new EMAIndicator(closePrice, 5);
logger.info("shortEMA: " + shortEma.getValue(series.getEnd()));
longEma = new EMAIndicator(closePrice, 20);
logger.info("longEMA: " + longEma.getValue(series.getEnd()));
rsi = new RSIIndicator(closePrice, 14);
series.getLastTick().addTrade(100, rsi.getValue(series.getEnd()).toDouble());
//newTick.addTrade(100, rsi.getValue(series.getEnd()).toDouble());
logger.info("RsiIndicator: " + rsi.getValue(series.getEnd()));
// Standard deviation
sd = new StandardDeviationIndicator(closePrice, 20);
logger.info("StandardDeviationIndicator: " + sd.getValue(series.getEnd()));
//macd indicator
macd = new MACDIndicator(closePrice, 12, 26);
logger.info("MACD indicator: " + macd.getValue(series.getEnd()));
//bollingerbandsmiddle indicator
bbm = new BollingerBandsMiddleIndicator(longSma);
logger.info("Bollinger Bands Middle Indicator :" + bbm.getValue(series.getEnd()));
bbl = new BollingerBandsLowerIndicator(bbm, sd);
logger.info("Bollinger bands lower indicator :" + bbl.getValue(series.getEnd()));
bbh = new BollingerBandsUpperIndicator(bbm, sd);
logger.info("Bollinger bands upper indicator :" + bbh.getValue(series.getEnd()));
bbw = new BollingerBandWidthIndicator(bbh, bbm, bbl);
logger.info("Bollinger band width :" + bbw.getValue(series.getEnd()));
StringBuilder sb = new StringBuilder("timestamp,close,typical,variation,sma8,sma20,ema8,ema20,ppo,roc,rsi,williamsr,atr,sd\n");
/**
* Adding indicators values
*/
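// NOTE: typicalPrice, priceVariation, ppo, roc, williamsR and atr are used below
// but are not declared or instantiated anywhere in the posted code.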
final int nbTick = series.getTickCount();
for (int i = 0; i < nbTick; i++) {
sb.append(series.getTick(i).getEndTime()).append(',')
.append(closePrice.getValue(i)).append(',')
.append(typicalPrice.getValue(i)).append(',')
.append(priceVariation.getValue(i)).append(',')
.append(shortSma.getValue(i)).append(',')
.append(longSma.getValue(i)).append(',')
.append(shortEma.getValue(i)).append(',')
.append(longEma.getValue(i)).append(',')
.append(ppo.getValue(i)).append(',')
.append(roc.getValue(i)).append(',')
.append(rsi.getValue(i)).append(',')
.append(williamsR.getValue(i)).append(',')
.append(atr.getValue(i)).append(',')
.append(sd.getValue(i)).append('\n');
}
/**
* Writing CSV file
*/
BufferedWriter writer = null;
try {
writer = new BufferedWriter(new FileWriter("C:\\Users\\Administrator\\Desktop\\fafa\\indicators.csv"));
writer.write(sb.toString());
} catch (IOException ioe) {
System.out.println(ioe);
} finally {
try {
if (writer != null) {
writer.close();
}
} catch (IOException ioe) {
}
}
}
}
private Map<Object, Object> makeIndicatorObject() {
// Map for indicator values.
try {
logger.info("map state of make indicator");
Map<Object, Object> indicators = new LinkedHashMap<>();
indicators.put("symbol", symbol);
indicators.put("ClosePrice", formatBigDecimal(closePrice.getValue(series.getEnd()).toDouble()));
indicators.put("ShortSMA", formatBigDecimal(shortSma.getValue(series.getEnd()).toDouble()));
indicators.put("LongSMA", formatBigDecimal(longSma.getValue(series.getEnd()).toDouble()));
indicators.put("ShortEMA", formatBigDecimal(shortEma.getValue(series.getEnd()).toDouble()));
indicators.put("LongEMA", formatBigDecimal(longEma.getValue(series.getEnd()).toDouble()));
indicators.put("RSI", formatBigDecimal(rsi.getValue(series.getEnd()).toDouble()));
indicators.put("SD", formatBigDecimal(sd.getValue(series.getEnd()).toDouble()));
indicators.put("MACD", formatBigDecimal(macd.getValue(series.getEnd()).toDouble()));
indicators.put("BBM", formatBigDecimal(bbm.getValue(series.getEnd()).toDouble()));
indicators.put("BBL", formatBigDecimal(bbl.getValue(series.getEnd()).toDouble()));
indicators.put("BBH", formatBigDecimal(bbh.getValue(series.getEnd()).toDouble()));
indicators.put("BBW", formatBigDecimal(bbw.getValue(series.getEnd()).toDouble()));
return indicators;
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
private BigDecimal formatBigDecimal(double value) {
try {
return new BigDecimal(df.format(value));
} catch (Exception e) {
return new BigDecimal(0);
}
}
private Map<Object, Object> makeEmptyObject() {
logger.info("map state of empty object");
Map<Object, Object> indicators = new LinkedHashMap<>();
indicators.put("symbol", symbol);
indicators.put("ClosePrice", new BigDecimal(0));
indicators.put("ShortSMA", new BigDecimal(0));
indicators.put("LongSMA", new BigDecimal(0));
indicators.put("ShortEMA", new BigDecimal(0));
indicators.put("LongEMA", new BigDecimal(0));
indicators.put("RSI", new BigDecimal(0));
indicators.put("SD", new BigDecimal(0));
indicators.put("MACD", new BigDecimal(0));
indicators.put("BBM", new BigDecimal(0));
indicators.put("BBL", new BigDecimal(0));
indicators.put("BBH", new BigDecimal(0));
indicators.put("BBW", new BigDecimal(0));
return indicators;
}
}
This is the JSON output from the local API that is used in the first class (ApiTestData); the output itself is omitted here.
I have code to read an Excel file and place its rows in a list of Hashtables. The code that I have is:
package tests;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.List;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;
import jxl.Cell;
import jxl.Sheet;
import jxl.Workbook;
import jxl.read.biff.BiffException;
public class ReadExcelTest {
public static Hashtable htable=new Hashtable();
public static List<Hashtable<String,String>> exceldata = new ArrayList<Hashtable<String,String>>();
public static void main(String[] args) throws IOException{
readexcel();
}
@Test
public static void readexcel() throws IOException
{
String inputFile=System.getProperty("user.dir")+"//src//config//TestData.xls";
File inputWorkbook = new File(inputFile);
Workbook w;
Cell firstrowelement = null;
try
{
w = Workbook.getWorkbook(inputWorkbook);
Sheet sheet = w.getSheet(0);
for (int j = 0; j <sheet.getRows(); j++)
{
for (int i = 0; i < sheet.getColumns(); i++)
{
firstrowelement = sheet.getCell(i, 0);
Cell cell = sheet.getCell(i, j);
htable.put(firstrowelement.getContents(),cell.getContents());
System.out.print(firstrowelement.getContents()+"->"+cell.getContents());
}
System.out.println(firstrowelement.getContents());
exceldata.add(j,htable);
}
//printing the list
for(Hashtable hash :exceldata)
{
Enumeration e = hash.keys();
while (e.hasMoreElements()) {
String key = (String) e.nextElement();
System.out.println(key + " : " + hash.get(key));
}
}
}
catch (BiffException e)
{
e.printStackTrace();
}
}
}
The output I am getting shows that all the Hashtables in the list contain the same data, instead of each row's data.
I'm not exactly sure how to overcome this issue. Any help is much appreciated.
You need to make a new htable for every row.
As it is, you keep reusing and updating a single Hashtable, so every entry in the list points to the same object.
(By the way, prefer HashMap over Hashtable.)
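A minimal sketch of that fix, keeping the jxl API and the file path from the question (skipping the header row is an assumption about the sheet layout): the map is created inside the row loop, so each entry added to the list is an independent object.
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import jxl.Sheet;
import jxl.Workbook;
import jxl.read.biff.BiffException;
public class ReadExcelFixed {
    // Reads the first sheet into one map per data row, keyed by the header row.
    public static List<HashMap<String, String>> readexcel(String inputFile) throws IOException, BiffException {
        List<HashMap<String, String>> exceldata = new ArrayList<HashMap<String, String>>();
        Workbook w = Workbook.getWorkbook(new File(inputFile));
        try {
            Sheet sheet = w.getSheet(0);
            // Start at row 1 so the header row itself is not stored as data.
            for (int j = 1; j < sheet.getRows(); j++) {
                // A fresh map per row: this is the actual fix for the "same data everywhere" symptom.
                HashMap<String, String> row = new HashMap<String, String>();
                for (int i = 0; i < sheet.getColumns(); i++) {
                    String header = sheet.getCell(i, 0).getContents();
                    String value = sheet.getCell(i, j).getContents();
                    row.put(header, value);
                }
                exceldata.add(row);
            }
        } finally {
            w.close();
        }
        return exceldata;
    }
    public static void main(String[] args) throws IOException, BiffException {
        String inputFile = System.getProperty("user.dir") + "//src//config//TestData.xls";
        for (HashMap<String, String> row : readexcel(inputFile)) {
            System.out.println(row);
        }
    }
}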