I am trying to get the RSI indicator from the Yahoo Finance API.
So far I can get quotes in CSV format, but there seems to be no API for a specific indicator such as RSI.
Does anyone know how?
Thanks
You have all the data needed to calculate the RSI. http://www.investopedia.com/terms/r/rsi.asp
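In short, the calculation that page describes (the code below uses n = 14 periods and Wilder's smoothing of the averages) is:
RS_t = \frac{\text{AvgGain}_t}{\text{AvgLoss}_t}, \qquad RSI_t = 100 - \frac{100}{1 + RS_t}, \qquad \text{Avg}_t = \frac{(n-1)\,\text{Avg}_{t-1} + \text{current}_t}{n}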
import numpy as np
from urllib.request import urlopen
# The stock to fetch
stock = 'AMD'
# Yahoo's API
urlToVisit = 'http://chartapi.finance.yahoo.com/instrument/1.0/'+stock+'/chartdata;type=quote;range=1y/csv'
stockFile = []
# Fetch the stock info from Yahoo API
try:
    try:
        sourceCode = urlopen(urlToVisit).read().decode('utf-8')
        splitSource = sourceCode.split('\n')
        for eachLine in splitSource:
            splitLine = eachLine.split(',')
            if len(splitLine) == 6:
                if 'values' not in eachLine:
                    stockFile.append(eachLine)
    except Exception as e:
        print(str(e), 'failed to organize pulled data')
except Exception as e:
    print(str(e), 'failed to pull price data')
date, closep, highp, lowp, openp, volume = np.loadtxt(stockFile, delimiter=',', unpack=True)
def rsiFunc(prices, n=14):
    # Returns an RSI array
    deltas = np.diff(prices)
    seed = deltas[:n+1]
    up = seed[seed >= 0].sum()/n
    down = -seed[seed < 0].sum()/n
    rs = up/down
    rsi = np.zeros_like(prices)
    rsi[:n] = 100. - 100./(1.+rs)
    for i in range(n, len(prices)):
        delta = deltas[i-1]
        if delta > 0:
            upval = delta
            downval = 0.
        else:
            upval = 0.
            downval = -delta
        up = (up*(n-1)+upval)/n
        down = (down*(n-1)+downval)/n
        rs = up/down
        rsi[i] = 100. - 100./(1.+rs)
    return rsi
# Let's see what we got here
rsi = rsiFunc(closep)
n = 0
for i in date:
    print('Date stamp:', i, 'RSI', rsi[n])
    n += 1
There is no such API for Yahoo Finance. I found an interesting API that seems to do what you are looking for (https://www.stockvider.com/). It's brand new; the API does not provide a lot of features yet, but it aims to cover the most common technical indicators. So far you can get your data in XML format only.
For instance, you can get RSI values for Apple stock like this: https://api.stockvider.com/data/NASDAQ/AAPL/RSI?start_date=2015-05-20&end_date=2015-07-20
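For example, in Python you could pull and inspect that endpoint like this (just a sketch; the response schema is not documented here, so it simply walks whatever XML comes back):
from urllib.request import urlopen
import xml.etree.ElementTree as ET

url = ('https://api.stockvider.com/data/NASDAQ/AAPL/RSI'
       '?start_date=2015-05-20&end_date=2015-07-20')
xml_data = urlopen(url).read().decode('utf-8')

# Walk the XML generically and print every element
root = ET.fromstring(xml_data)
for element in root.iter():
    print(element.tag, element.attrib, element.text)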
You can get the quotes and compute the indicators you want with R packages. See the examples in quantmod and TTR.
For example:
library(quantmod)
getSymbols('F',src='yahoo',return.class='ts')
fpr <- Cl(F)
rsi <- RSI(fpr)
tail(cbind(Cl(F),rsi),10)
Related
I am trying to pull records from a view which emits in the following way:
DefaultViewRow{id=0329a6ac-84cb-403e-9d1d, key=["X","Y","0329a6ac-84cb-403e-9d1d","31816552700"], value=1}
As we have millions of records, we are trying to implement pagination to pull 500 records per page, do some processing, and then get the next 500 records.
I have implemented the following code with the Java client:
def cluster = CouchbaseCluster.create(host)
def placesBucket = cluster.openBucket("pass", "pass")
def startKey = JsonArray.from(X, "Y")
def endKey = JsonArray.from(X, "Y", new JsonObject())
hasRow = true
rowPerPage = 500
page = 0
currentStartkey = ""
startDocId = ""
def viewResult
def counter = 0
while (hasRow) {
    hasRow = false
    def skip = page == 0 ? 0 : 1
    page = page + 1
    viewResult = placesBucket.query(ViewQuery.from("design", "view")
            .startKey(startKey)
            .endKey(endKey)
            .reduce(false)
            .inclusiveEnd()
            .limit(rowPerPage)
            .stale(Stale.FALSE)
            .skip(skip)
            .startKeyDocId(startDocId)
    )
    def runResult = viewResult.allRows()
    for (ViewRow row : runResult) {
        hasRow = true
        println(row)
        counter++
        startDocId = row.document().id()
    }
    println("Page NUMBER " + page)
}
println("total " + counter)
After execution, I am getting a few repeated rows, and even though the total number of records is around 1000 for this particular small scenario, I get around 3000+ rows in the response and it keeps going.
Can someone please tell me if I am doing something wrong? PS: My start key value will be the same for each run, as I am trying to get each unique doc _id.
Please help.
I am trying to convert the geometry coordinate reference system of some land parcels in Australia into another one using the Java GeoTools API and OpenGIS libraries, i.e. WGS84 (EPSG:4326) to GDA2020 / MGA zone 50 (EPSG:7850), or WGS84 (EPSG:4326) to GDA2020 / PCG2020 (EPSG:8031). So far the converted coordinates have some deviation from the coordinates they are supposed to be. Now my requirement is to perform the Conformal + Distortion transformation explained in this article, which is more accurate: https://www.icsm.gov.au/datum/gda-transformation-products-and-tools/transformation-grids
However, I am not very sure what changes I need to make to the current code to do the above. I googled for some code examples but couldn't find what I wanted. Any help would be highly appreciated.
Query query = new Query();
DataStore dataStore = getDataStore();
FeatureSource<SimpleFeatureType, SimpleFeature> source = dataStore.getFeatureSource(featureTypeName);
query.setFilter(ECQL.toFilter("land_id='" + landID + "'"));
FeatureCollection collection = source.getFeatures(query);
FeatureIterator iterator = collection.features();
CoordinateReferenceSystem sourceCRS = collection.getSchema().getCoordinateReferenceSystem(); // WGS 84
List<Geometry> geometryList = new ArrayList<Geometry>();
CoordinateReferenceSystem targetCRS = getCRS(targetEPSGCode); // GDA2020 / MGA zone 50
while (iterator.hasNext())
{
    Feature feature = (Feature) iterator.next();
    GeometryAttribute geom = feature.getDefaultGeometryProperty();
    Object geomVal = geom.getValue();
    if (geomVal instanceof Geometry)
    {
        MathTransform mathTransform = CRS.findMathTransform(sourceCRS, targetCRS);
        Geometry transformedGeometry = JTS.transform((Geometry) geomVal, mathTransform);
        geometryList.add(transformedGeometry);
    }
}
// Use geometryList for further stuff
You will need to download the required NTv2 grid file and save it to your project's src/main/resources/org/geotools/referencing/factory/gridshift folder, then recompile the project. There is more discussion, and some example code to help you check the grid is being picked up, in this answer over on https://gis.stackexchange.com
I have a JSON file whose records have the structure {"time","currentStop","lat","lon","speed"}; here is an example:
[
{"time":"2015-06-09 23:59:59","currentStop":"xx","lat":"22.264856","lon":"113.520450","speed":"25.30"},
{"time":"2015-06-09 21:00:49","currentStop":"yy","lat":"22.263","lon":"113.52","speed":"34.5"},
{"time":"2015-06-09 21:55:49","currentStop":"zz","lat":"21.3","lon":"113.521","speed":"13.7"}
]
And I want to get a JSON result with the structure [{"hour","value":["currentStop","lat","lon","speed"]}]. The result shows the hourly data of distinct ("currentStop","lat","lon","speed") tuples. Here is the result for the example (skipping some empty values):
[
{"hour":0,"value":[]},
{"hour":1,"value":[]},
......
{"hour":21,"value":[{"currentStop":"yy","lat":"22.263","lon":"113.52","speed":"34.5"},{"currentStop":"zz","lat":"21.3","lon":"113.521","speed":"13.7"}]}
{"hour":23, "value": [{"currentStop":"xx","lat":22.264856,"lon":113.520450,"speed":25.30}]},
]
Is it possible to achieve this using a Spark SQL query?
I use Spark with the Java API, and with loops I can get what I want, but this way is really inefficient and costly.
Here is my code:
Dataset<Row> bus_ic = spark.read().json(file);
bus_ic.createOrReplaceTempView("view");
StringBuilder text = new StringBuilder("[");
bus_ic.select(bus_ic.col("currentStop"),
        bus_ic.col("lon").cast("double"), bus_ic.col("speed").cast("double"),
        bus_ic.col("lat").cast("double"), bus_ic.col("LINEID"),
        bus_ic.col("time").cast("timestamp"))
        .createOrReplaceTempView("view");
StringBuilder sqlString = new StringBuilder();
for (int i = 0; i < 24; i++) {
    sqlString.delete(0, sqlString.length());
    sqlString.append("select currentStop, speed, lat, lon from view where hour(time) = ")
            .append(i)
            .append(" group by currentStop, speed, lat, lon");
    Dataset<Row> t = spark.sql(sqlString.toString());
    text.append("{")
            .append("\"h\":").append(i)
            .append(",\"value\":")
            .append(t.toJSON().collectAsList().toString())
            .append("}");
    if (i != 23) text.append(",");
}
text.append("]");
There must be some other way to solve this problem. How can I write an efficient SQL query to achieve this goal?
You can write your code in a much more concise way (Scala code):
val bus_comb = bus_ic
.groupBy(hour(to_timestamp(col("time"))).as("hour"))
.agg(collect_set(struct(
col("currentStop"), col("lat"), col("lon"), col("speed")
)).alias("value"));
bus_comb.toJSON.show(false);
// +--------------------------------------------------------------------------------------------------------------------------------------------------------+
// |value |
// +--------------------------------------------------------------------------------------------------------------------------------------------------------+
// |{"hour":23,"value":[{"currentStop":"xx","lat":"22.264856","lon":"113.520450","speed":"25.30"}]} |
// |{"hour":21,"value":[{"currentStop":"yy","lat":"22.263","lon":"113.52","speed":"34.5"},{"currentStop":"zz","lat":"21.3","lon":"113.521","speed":"13.7"}]}|
// +--------------------------------------------------------------------------------------------------------------------------------------------------------+
But with only 24 grouping records there is no opportunity for scaling out here. It might be an interesting exercise, but it is not something you can really apply to a large dataset, where using Spark makes sense.
You can add missing hours by joining with range:
spark.range(0, 24).toDF("hour").join(bus_comb, Seq("hour"), "leftouter")
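If you would rather stay with Python than Scala, roughly the same pipeline can be written in PySpark. This is only a sketch; the file path and the multiLine option are assumptions about how the JSON array above is stored:
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# multiLine is needed when the input is a pretty-printed JSON array like the one above
bus_ic = spark.read.option("multiLine", True).json("bus.json")

bus_comb = (bus_ic
            .groupBy(F.hour(F.to_timestamp("time")).alias("hour"))
            .agg(F.collect_set(F.struct("currentStop", "lat", "lon", "speed")).alias("value")))

# Left join against hours 0..23 to keep hours that have no rows
full_hours = spark.range(0, 24).toDF("hour").join(bus_comb, ["hour"], "left_outer")

for line in full_hours.toJSON().collect():
    print(line)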
I'm trying to run MaxEnt in biomod2 but I keep getting a java error...
This error seems to be related to a certain "status 1" that I can't understand.
This is my workflow:
# Defining MAXENT modelling options
myBiomodOption <- BIOMOD_ModelingOptions(
  MAXENT.Phillips = list(
    path_to_maxent.jar = "C:/Users/thai/Documents/ORCHIDACEAE/Ecologicos/w2/biomod_1",
    maximumiterations = 5000,
    memory_allocated = 1024,
    visible = FALSE,
    linear = TRUE,
    quadratic = TRUE,
    product = TRUE,
    threshold = FALSE,
    hinge = TRUE,
    lq2lqptthreshold = 80,
    l2lqthreshold = 10,
    hingethreshold = 15,
    beta_threshold = -1,
    beta_categorical = -1,
    beta_lqp = -1,
    beta_hinge = -1,
    defaultprevalence = 0.5))
This runs with no problems. Then I run the code to compute the models:
# Computing the models
myBiomodModelOut <- BIOMOD_Modeling(
  myBiomodData,
  models = c('MAXENT.Phillips'), # 'SRE','RF',
  models.options = myBiomodOption,
  NbRunEval = 1,
  DataSplit = 80,
  Yweights = NULL,
  VarImport = 3,
  models.eval.meth = c('TSS','ROC'),
  SaveObj = TRUE,
  rescal.all.models = TRUE)
I get the running log up to the end. But then the following message comes, in red:
Warning message: running command 'java' had status 1
The model appears to run despite this "status 1" error. When I evaluate the model, there is indeed an evaluation:
> get_evaluations(myBiomodModelOut)
, , MAXENT.Phillips, RUN1, AllData
Testing.data Cutoff Sensitivity Specificity
TSS 0.982 939.0 100 98.202
ROC 0.988 946.5 100 98.494
, , MAXENT.Phillips, Full, AllData
Testing.data Cutoff Sensitivity Specificity
TSS 0.774 687.0 83.333 94.035
ROC 0.926 689.5 83.333 94.085
Well, this is a model evaluation, so a model was made, right?
But when I try to project, I get:
# Project our models over studied area
myBiomomodProj <- BIOMOD_Projection(
  modeling.output = myBiomodModelOut,
  new.env = myExpl,
  proj.name = 'current',
  selected.models = 'all',
  binary.meth = 'TSS',
  compress = 'xz',
  clamping.mask = F,
  output.format = '.grd')
*** in setMethod('BinaryTransformation', signature(data='RasterLayer')
*** in setMethod('BinaryTransformation', signature(data='RasterLayer')
> Projecting Anguloa.virginalis_AllData_RUN1_MAXENT.Phillips ...
Error in .local(.Object, ...) :
In addition: Warning message:
running command 'java -mx1024m -cp "C:/Users/thai/Documents/ORCHIDACEAE/Ecologicos/w2/biomod_1/maxent.jar" density.Project "Anguloa.virginalis/models/1506620550/Anguloa.virginalis_AllData_RUN1_MAXENT.Phillips_outputs/Anguloa.virginalis_AllData_RUN1.lambdas" "Anguloa.virginalis/./m_46332684/part1" "Anguloa.virginalis/./m_46332684/part1/projMaxent.asc" doclamp=false visible=false autorun nowarnings notooltips' had status 1
Error in .rasterObjectFromFile(x, band = band, objecttype = "RasterLayer", :
Cannot create a RasterLayer object from this file. (file does not exist)
You see? That "had status 1" part makes me believe that a problem is being carried over from the modelling step.
I did a lot of research and found that some people got MaxEnt to work with biomod2 by moving maxent.jar to a folder with no spaces in its name, but that is not my case. Also, some people got it working by changing the memory allocation; I tried several different memory amounts too, and I just get different errors, but I still can't get projections.
Any tips?
My guess is that it is due to a memory issue. I suspect you are working with high-resolution environmental data over a (quite) large area.
One thing to try would be to:
split the environmental data into smaller pieces/areas
make the projections on each area independently
stitch the projections back together
I am doing a project using LIBSVM and I am preparing my data for the library. How can I convert a CSV file to LIBSVM-compatible data?
CSV File:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/datasets/data/iris.csv
In the frequently asked questions:
How to convert other data formats to LIBSVM format?
It depends on your data format. A simple way is to use libsvmwrite in the libsvm matlab/octave interface. Take a CSV (comma-separated values) file in UCI machine learning repository as an example. We download SPECTF.train. Labels are in the first column. The following steps produce a file in the libsvm format.
matlab> SPECTF = csvread('SPECTF.train'); % read a csv file
matlab> labels = SPECTF(:, 1); % labels from the 1st column
matlab> features = SPECTF(:, 2:end);
matlab> features_sparse = sparse(features); % features must be in a sparse matrix
matlab> libsvmwrite('SPECTFlibsvm.train', labels, features_sparse);
The transformed data are stored in SPECTFlibsvm.train.
Alternatively, you can use convert.c to convert CSV format to libsvm format.
But I don't want to use MATLAB; I use Python.
I also found a solution using Java.
Can anyone recommend a way to tackle this problem?
You can use csv2libsvm.py to convert CSV to libsvm data:
python csv2libsvm.py iris.csv libsvm.data 4 True
where 4 is the target (label) column index and True means the CSV has a header.
Finally, you can get libsvm.data as
0 1:5.1 2:3.5 3:1.4 4:0.2
0 1:4.9 2:3.0 3:1.4 4:0.2
0 1:4.7 2:3.2 3:1.3 4:0.2
0 1:4.6 2:3.1 3:1.5 4:0.2
...
from iris.csv
150,4,setosa,versicolor,virginica
5.1,3.5,1.4,0.2,0
4.9,3.0,1.4,0.2,0
4.7,3.2,1.3,0.2,0
4.6,3.1,1.5,0.2,0
...
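If you want to stay in Python and avoid MATLAB entirely, scikit-learn can also write the libsvm/svmlight format directly. A minimal sketch for the iris.csv layout shown above (one header row, numeric label in the last column):
import numpy as np
from sklearn.datasets import dump_svmlight_file

# Skip the header row; the last column holds the numeric class label
data = np.loadtxt("iris.csv", delimiter=",", skiprows=1)
X, y = data[:, :-1], data[:, -1]

# zero_based=False gives 1-based feature indices, matching the output above
dump_svmlight_file(X, y, "libsvm.data", zero_based=False)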
csv2libsvm.py does not work with Python 3, and it also does not support string (label) targets, so I have slightly modified it. Now it should work with Python 3 as well as with string targets.
I am very new to Python, so my code may not follow best practices, but I hope it is good enough to help someone.
#!/usr/bin/env python
"""
Convert CSV file to libsvm format. Works only with numeric variables.
Put -1 as label index (argv[3]) if there are no labels in your file.
Expecting no headers. If present, headers can be skipped with argv[4] == 1.
"""
import sys
import csv
import operator
from collections import defaultdict
def construct_line(label, line, labels_dict):
    new_line = []
    if label.isnumeric():
        if float(label) == 0.0:
            label = "0"
        new_line.append(label)  # keep numeric labels as they are
    else:
        # map string labels to numeric ids
        if label in labels_dict:
            new_line.append(labels_dict.get(label))
        else:
            label_id = str(len(labels_dict))
            labels_dict[label] = label_id
            new_line.append(label_id)
    for i, item in enumerate(line):
        if item == '' or float(item) == 0.0:
            continue
        elif item == 'NaN':
            item = "0.0"
        new_item = "%s:%s" % (i + 1, item)
        new_line.append(new_item)
    new_line = " ".join(new_line)
    new_line += "\n"
    return new_line
# ---
input_file = sys.argv[1]
try:
    output_file = sys.argv[2]
except IndexError:
    output_file = input_file + ".out"
try:
    label_index = int(sys.argv[3])
except IndexError:
    label_index = 0
try:
    skip_headers = sys.argv[4]
except IndexError:
    skip_headers = 0

i = open(input_file, 'rt')
o = open(output_file, 'wb')

reader = csv.reader(i)
if skip_headers:
    headers = next(reader)

labels_dict = {}
for line in reader:
    if label_index == -1:
        label = '1'
    else:
        label = line.pop(label_index)
    new_line = construct_line(label, line, labels_dict)
    o.write(new_line.encode('utf-8'))
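Usage stays the same as for the original script; for the iris example above, something like
python csv2libsvm.py iris.csv libsvm.data 4 1
should skip the header row and use the fifth column as the label.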