I am trying to convert the coordinate reference system of some land parcel geometries in Australia using the Java GeoTools and OpenGIS libraries, e.g. WGS84 (EPSG:4326) to GDA2020 / MGA zone 50 (EPSG:7850), or WGS84 (EPSG:4326) to GDA2020 / PCG2020 (EPSG:8031). At the moment the converted coordinates deviate from the values they are supposed to have. My requirement is to perform the Conformal + Distortion transformation described in this article, which is more accurate: https://www.icsm.gov.au/datum/gda-transformation-products-and-tools/transformation-grids
However, I am not sure what changes the current code needs to do the above. I googled for code examples but couldn't find what I wanted. Any help would be highly appreciated.
Query query = new Query();
DataStore dataStore = getDataStore();
FeatureSource<SimpleFeatureType, SimpleFeature> source = dataStore.getFeatureSource(featureTypeName);
query.setFilter(ECQL.toFilter("land_id='" + landID + "'"));
FeatureCollection collection = source.getFeatures(query);
CoordinateReferenceSystem sourceCRS = collection.getSchema().getCoordinateReferenceSystem(); // WGS 84
CoordinateReferenceSystem targetCRS = getCRS(targetEPSGCode); // GDA2020 / MGA zone 50
// Look the transform up once, outside the loop
MathTransform mathTransform = CRS.findMathTransform(sourceCRS, targetCRS);
List<Geometry> geometryList = new ArrayList<>();
try (FeatureIterator iterator = collection.features())
{
    while (iterator.hasNext())
    {
        Feature feature = (Feature) iterator.next();
        GeometryAttribute geom = feature.getDefaultGeometryProperty();
        Object geomVal = geom.getValue();
        if (geomVal instanceof Geometry)
        {
            Geometry transformedGeometry = JTS.transform((Geometry) geomVal, mathTransform);
            geometryList.add(transformedGeometry);
        }
    }
}
// Use geometryList for further work
You will need to download the required NTv2 grid file and save it to your project's src/main/resources/org/geotools/referencing/factory/gridshift folder, then recompile the project; GeoTools will pick the grid up automatically when it looks up the transform. There is more discussion, and some example code to help you check it is being picked up, in this answer over on https://gis.stackexchange.com
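One quick way to confirm the grid file actually made it into the build is to look it up as a classpath resource. This is a minimal sketch; the file name used here is an assumption based on the ICSM download for the conformal + distortion grid, so adjust it if yours is named differently:

```java
public class GridCheck {
    public static void main(String[] args) {
        // GeoTools scans this classpath location for NTv2 grid shift files.
        // The file name is an assumption based on the ICSM download; adjust as needed.
        String resource = "/org/geotools/referencing/factory/gridshift/"
                + "GDA94_GDA2020_conformal_and_distortion.gsb";
        java.net.URL url = GridCheck.class.getResource(resource);
        System.out.println(url != null ? "grid found at " + url : "grid NOT on classpath");
    }
}
```

If this prints "grid NOT on classpath" after a rebuild, the file is in the wrong folder and GeoTools will silently fall back to the less accurate transform.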
I'm quite new to Spark and I need to use the Java API. Our goal is to serve predictions on the fly, where the user provides a few of the variables but, of course, not the label or goal variable.
The model, however, seems to need the data split into training data and test data for training and validation.
How can I get the prediction and the RMSE for the out-of-sample data that the user will query on the fly?
Dataset<Row>[] splits = df.randomSplit(new double[] {0.9, 0.1});
Dataset<Row> trainingData = splits[0];
Dataset<Row> testData = df_p; // df_p holds the out-of-sample data
My out-of-sample data has the following format (where the 0s are data the user cannot provide):
IMO,PORT_ID,DWT,TERMINAL_ID,BERTH_ID,TIMESTAMP,label,OP_ID
0000000,1864,80000.00,5689,6060,2020-08-29 00:00:00.000,1,2
'label' is the result I want to predict.
This is how I used the models:
// Train a GBT model.
GBTRegressor gbt = new GBTRegressor()
.setLabelCol("label")
.setFeaturesCol("features")
.setMaxIter(10);
// Chain the assembler, GBT and discretizer in a Pipeline.
Pipeline pipeline = new Pipeline().setStages(new PipelineStage[] {assembler, gbt, discretizer});
// Train the model. This also runs the earlier pipeline stages.
PipelineModel model = pipeline.fit(trainingData);
// Make predictions.
Dataset<Row> predictions = model.transform(testData);
// Select example rows to display.
predictions.select("prediction", "label", "weekofyear", "dayofmonth", "month", "year", "features").show(150);
// Select (prediction, true label) and compute test error.
RegressionEvaluator evaluator = new RegressionEvaluator()
.setLabelCol("label")
.setPredictionCol("prediction")
.setMetricName("rmse");
double rmse = evaluator.evaluate(predictions);
System.out.println("Root Mean Squared Error (RMSE) on test data = " + rmse);
How can I get the travel duration in minutes from point A to B in Google Maps? Below I attach my code fragment:
Location locationB = new Location("point B");
locationB.setLatitude(location.getLatitude());
locationB.setLongitude(location.getLongitude());
float distance = locationA.distanceTo(locationB); // metres
double kms = distance / 1000;
String distanceKm = new DecimalFormat("#.0").format(kms);
Log.v("log", "distance " + distance);
tv3.setText(distanceKm + " Kms");
Thanks for any help.
Refer to the link below: Google provides the Directions API, which gives all the information for getting from a source place to a destination. Read through it and you will find the solution to your question.
https://developers.google.com/maps/documentation/directions/start
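To illustrate, here is a minimal sketch of building a Directions API request and converting the duration.value field (in seconds) from the JSON response into minutes. The endpoint and parameter names come from the linked documentation; YOUR_API_KEY is a placeholder, and a real app should parse the response with a proper JSON library:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DirectionsUrl {
    // Builds a Directions API request URL. The travel duration in seconds
    // appears in the JSON response under routes[0].legs[0].duration.value.
    static String buildUrl(String origin, String destination, String apiKey) {
        return "https://maps.googleapis.com/maps/api/directions/json"
                + "?origin=" + URLEncoder.encode(origin, StandardCharsets.UTF_8)
                + "&destination=" + URLEncoder.encode(destination, StandardCharsets.UTF_8)
                + "&key=" + apiKey;
    }

    // Converts duration.value (seconds) to whole minutes.
    static long secondsToMinutes(long seconds) {
        return Math.round(seconds / 60.0);
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("-31.95,115.86", "-32.05,115.89", "YOUR_API_KEY"));
        System.out.println(secondsToMinutes(754) + " mins"); // 754 s rounds to 13 mins
    }
}
```

Location.distanceTo() only gives the straight-line distance; only the Directions API knows the road network, which is why the duration has to come from the response rather than from a local calculation.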
Does anyone have an example of making a call to Watson Natural Language Understanding using Java? The API docs only show Node.js. There is a class in the SDK to support it, but no documentation on how to construct the required Features, AnalyzeOptions, or Builder input.
Here's a snippet that throws 'Features cannot be null'; I'm just fumbling in the dark at this point:
String response = docConversionService.convertDocumentToHTML(doc).execute();
Builder b = new AnalyzeOptions.Builder();
b.html(response);
AnalyzeOptions ao = b.build();
nlu.analyze(ao);
Until the API reference is published, have you tried looking at the tests on GitHub? See NaturalLanguageUnderstandingIT.
I've gotten it working with a text string, and looking at the above test, it shouldn't take much to get it working with a URL or HTML (changing the AnalyzeOptions builder call from text() to html(), for example).
Code example:
final NaturalLanguageUnderstanding understanding =
new NaturalLanguageUnderstanding(
NaturalLanguageUnderstanding.VERSION_DATE_2017_02_27);
understanding.setUsernameAndPassword(serviceUsername, servicePassword);
understanding.setEndPoint(url);
understanding.setDefaultHeaders(getDefaultHeaders());
final String testString =
"In remote corners of the world, citizens are demanding respect"
+ " for the dignity of all people no matter their gender, or race, or religion, or disability,"
+ " or sexual orientation, and those who deny others dignity are subject to public reproach."
+ " An explosion of social media has given ordinary people more ways to express themselves,"
+ " and has raised people's expectations for those of us in power. Indeed, our international"
+ " order has been so successful that we take it as a given that great powers no longer"
+ " fight world wars; that the end of the Cold War lifted the shadow of nuclear Armageddon;"
+ " that the battlefields of Europe have been replaced by peaceful union; that China and India"
+ " remain on a path of remarkable growth.";
final ConceptsOptions concepts =
new ConceptsOptions.Builder().limit(5).build();
final Features features =
new Features.Builder().concepts(concepts).build();
final AnalyzeOptions parameters = new AnalyzeOptions.Builder()
.text(testString).features(features).returnAnalyzedText(true).build();
final AnalysisResults results =
understanding.analyze(parameters).execute();
System.out.println(results);
Make sure you populate your NLU service with default headers (setDefaultHeaders()). I pulled these from WatsonServiceTest (I'd post the link, but my rep is too low; just use the Find File option on the WDC GitHub):
final Map<String, String> headers = new HashMap<String, String>();
headers.put(HttpHeaders.X_WATSON_LEARNING_OPT_OUT, String.valueOf(true));
headers.put(HttpHeaders.X_WATSON_TEST, String.valueOf(true));
return headers;
I am trying to get the RSI indicator from the Yahoo Finance API.
So far I can get quotes in CSV format, but there seems to be no API for a specific indicator such as RSI.
Does anyone know how?
Thanks.
You have all the data needed to calculate the RSI. http://www.investopedia.com/terms/r/rsi.asp
import numpy as np
from urllib.request import urlopen

# The stock to fetch
stock = 'AMD'
# Yahoo's chart API
urlToVisit = 'http://chartapi.finance.yahoo.com/instrument/1.0/' + stock + '/chartdata;type=quote;range=1y/csv'
stockFile = []
# Fetch the stock info from the Yahoo API
try:
    sourceCode = urlopen(urlToVisit).read().decode('utf-8')
    splitSource = sourceCode.split('\n')
    for eachLine in splitSource:
        splitLine = eachLine.split(',')
        if len(splitLine) == 6 and 'values' not in eachLine:
            stockFile.append(eachLine)
except Exception as e:
    print(str(e), 'failed to pull and organize price data')

date, closep, highp, lowp, openp, volume = np.loadtxt(stockFile, delimiter=',', unpack=True)
def rsiFunc(prices, n=14):
    # Returns an RSI array
    deltas = np.diff(prices)
    seed = deltas[:n+1]
    up = seed[seed >= 0].sum()/n
    down = -seed[seed < 0].sum()/n
    rs = up/down
    rsi = np.zeros_like(prices)
    rsi[:n] = 100. - 100./(1. + rs)
    for i in range(n, len(prices)):
        delta = deltas[i-1]
        if delta > 0:
            upval = delta
            downval = 0.
        else:
            upval = 0.
            downval = -delta
        up = (up*(n-1) + upval)/n
        down = (down*(n-1) + downval)/n
        rs = up/down
        rsi[i] = 100. - 100./(1. + rs)
    return rsi
# Let's see what we got here
rsi = rsiFunc(closep)
n = 0
for i in date:
    print('Date stamp:', i, 'RSI', rsi[n])
    n += 1
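If, like the other questions in this thread, you need this in Java, the same Wilder smoothing can be sketched as follows. This is a minimal port of the idea above, seeded from the first n price changes; the closing prices in main are made-up illustration values, not market data:

```java
public class Rsi {
    // Wilder's RSI: seed the average gain/loss from the first n price changes,
    // then smooth with avg = (prevAvg * (n - 1) + current) / n.
    static double[] rsi(double[] prices, int n) {
        double[] out = new double[prices.length];
        double up = 0.0, down = 0.0;
        for (int i = 1; i <= n; i++) {
            double d = prices[i] - prices[i - 1];
            if (d >= 0) up += d; else down -= d;
        }
        up /= n;
        down /= n;
        // The first n slots only carry the seeded value
        java.util.Arrays.fill(out, 0, n, 100.0 - 100.0 / (1.0 + up / down));
        for (int i = n; i < prices.length; i++) {
            double d = prices[i] - prices[i - 1];
            up = (up * (n - 1) + Math.max(d, 0.0)) / n;
            down = (down * (n - 1) + Math.max(-d, 0.0)) / n;
            out[i] = 100.0 - 100.0 / (1.0 + up / down);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] closes = {44.34, 44.09, 44.15, 43.61, 44.33, 44.83, 45.10,
                           45.42, 45.84, 46.08, 45.89, 46.03, 45.61, 46.28, 46.28};
        System.out.println(java.util.Arrays.toString(rsi(closes, 14)));
    }
}
```

Note that if the window contains only gains, down is zero and the division yields Infinity, which correctly pins the RSI at 100 in double arithmetic.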
There is no such API in Yahoo Finance. I found an interesting API that seems to do what you are looking for (https://www.stockvider.com/). It's brand new and the API does not provide a lot of features yet, but it aims to cover the most common technical indicators. So far you can get your data in XML format only.
For instance, you can get RSI values for Apple stock like this : https://api.stockvider.com/data/NASDAQ/AAPL/RSI?start_date=2015-05-20&end_date=2015-07-20
You can get the quotes and compute the indicators you want with R packages; see the examples for quantmod and TTR.
For example:
library(quantmod)
getSymbols('F',src='yahoo',return.class='ts')
fpr <- Cl(F)
rsi <- RSI(fpr)
tail(cbind(Cl(F),rsi),10)
I'm trying to parse RSS/Atom feeds with the ROME library. I am new to Java, so I am not in tune with many of its intricacies.
Does ROME automatically use its modules to handle different feeds as it comes across them, or do I have to ask it to use them? If so, any direction on this would be appreciated.
How do I get to the correct 'source'? I was trying to use item.getSource(), but it is giving me fits; I guess I am using the wrong interface. Some direction would be much appreciated.
Here is the meat of what I have for collecting my data.
I noted two areas where I am having problems, both revolving around getting the source information of the feed. By source, I mean CNN, or Fox News, or whoever, not the author.
Judging from my reading, .getSource() is the correct method.
List<String> feedList = theFeeds.getFeeds();
List<FeedData> feedOutput = new ArrayList<FeedData>();
for (String site : feedList) {
    URL feedUrl = new URL(site);
    SyndFeedInput input = new SyndFeedInput();
    SyndFeed feed = input.build(new XmlReader(feedUrl));
    List<SyndEntry> entries = feed.getEntries();
    for (SyndEntry item : entries) {
        String title = item.getTitle();
        String link = item.getUri();
        Date date = item.getPublishedDate();
        SyndEntry source = item.getSource(); // <-- Problem here
        String description;
        if (item.getDescription() == null) {
            description = "";
        } else {
            description = item.getDescription().getValue();
        }
        String cleanDescription = description.replaceAll("\\<.*?>", "").replaceAll("\\s+", " ");
        FeedData feedData = new FeedData();
        feedData.setTitle(title);
        feedData.setLink(link);
        feedData.setSource(link); // <-- And here
        feedData.setDate(date);
        feedData.setDescription(cleanDescription);
        String preview = createPreview(cleanDescription);
        feedData.setPreview(preview);
        feedOutput.add(feedData);
        // Let's print out the pieces.
        System.out.println("Title: " + title);
        System.out.println("Date: " + date);
        System.out.println("Text: " + cleanDescription);
        System.out.println("Preview: " + preview);
        System.out.println("*****");
    }
}
getSource() is definitely wrong: it returns the SyndFeed to which the entry in question belongs. Perhaps what you want is getContributors()?
As far as modules go, they should be selected automatically. You can even write your own and plug it in, as described here.
What about trying to regex the source from the URL, without using the API?
That was my first thought. Anyway, I checked the RSS standardized format itself to get an idea of whether this option is actually available at that level, and then tried to trace its implementation upwards.
In RSS 2.0 I found the source element; however, it appears that it doesn't exist in previous versions of the spec, which is not good news for us!
<source> is an optional sub-element of <item>.
Its value is the name of the RSS channel that the item came from, derived from its <title>. It has one required attribute, url, which links to the XMLization of the source.
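If you do fall back to deriving the source from the feed URL itself, the host name is usually enough to identify the publisher. A minimal sketch of that idea (naive: it takes the label just before the TLD, so multi-part TLDs such as .co.uk would need special handling):

```java
public class FeedSource {
    // Derives a crude publisher name from a feed URL's host,
    // e.g. "http://rss.cnn.com/rss/cnn_topstories.rss" -> "cnn".
    // Naive: takes the label just before the TLD, so multi-part
    // TLDs such as .co.uk would need special handling.
    static String sourceFromUrl(String feedUrl) {
        String host = java.net.URI.create(feedUrl).getHost(); // e.g. rss.cnn.com
        String[] parts = host.split("\\.");
        return parts.length >= 2 ? parts[parts.length - 2] : host;
    }

    public static void main(String[] args) {
        System.out.println(sourceFromUrl("http://rss.cnn.com/rss/cnn_topstories.rss")); // cnn
    }
}
```

This works regardless of which RSS version the feed uses, since it never touches the XML at all.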