biomod2 Warning message: running command 'java' had status 1

I'm trying to run MaxEnt in biomod2, but I keep getting a Java error...
The error seems to be related to a certain "status 1" that I can't understand.
This is my workflow:
# Defining MAXENT modelling options
myBiomodOption <- BIOMOD_ModelingOptions(
  MAXENT.Phillips = list(
    path_to_maxent.jar = "C:/Users/thai/Documents/ORCHIDACEAE/Ecologicos/w2/biomod_1",
    maximumiterations = 5000,
    memory_allocated = 1024,
    visible = FALSE,
    linear = TRUE,
    quadratic = TRUE,
    product = TRUE,
    threshold = FALSE,
    hinge = TRUE,
    lq2lqptthreshold = 80,
    l2lqthreshold = 10,
    hingethreshold = 15,
    beta_threshold = -1,
    beta_categorical = -1,
    beta_lqp = -1,
    beta_hinge = -1,
    defaultprevalence = 0.5))
This runs with no problems. Then, I run a code for computing the models:
# Computing the models
myBiomodModelOut <- BIOMOD_Modeling(
  myBiomodData,
  models = c('MAXENT.Phillips'), # 'SRE', 'RF',
  models.options = myBiomodOption,
  NbRunEval = 1,
  DataSplit = 80,
  Yweights = NULL,
  VarImport = 3,
  models.eval.meth = c('TSS', 'ROC'),
  SaveObj = TRUE,
  rescal.all.models = TRUE)
I get the running log up to the end. But then the following message appears, in red:
Warning message: running command 'java' had status 1
The model appears to run despite this "status 1" error. When I evaluate the model, there is indeed an evaluation:
> get_evaluations(myBiomodModelOut)
, , MAXENT.Phillips, RUN1, AllData

    Testing.data Cutoff Sensitivity Specificity
TSS        0.982  939.0         100      98.202
ROC        0.988  946.5         100      98.494

, , MAXENT.Phillips, Full, AllData

    Testing.data Cutoff Sensitivity Specificity
TSS        0.774  687.0      83.333      94.035
ROC        0.926  689.5      83.333      94.085
Well, this is a model evaluation, so a model was made, right?
But when I try to project, I get:
# Project our models over the studied area
myBiomodProj <- BIOMOD_Projection(
  modeling.output = myBiomodModelOut,
  new.env = myExpl,
  proj.name = 'current',
  selected.models = 'all',
  binary.meth = 'TSS',
  compress = 'xz',
  clamping.mask = FALSE,
  output.format = '.grd')
*** in setMethod('BinaryTransformation', signature(data='RasterLayer')
*** in setMethod('BinaryTransformation', signature(data='RasterLayer')
> Projecting Anguloa.virginalis_AllData_RUN1_MAXENT.Phillips ...
Error in .local(.Object, ...) :
In addition: Warning message:
running command 'java -mx1024m -cp "C:/Users/thai/Documents/ORCHIDACEAE/Ecologicos/w2/biomod_1/maxent.jar" density.Project "Anguloa.virginalis/models/1506620550/Anguloa.virginalis_AllData_RUN1_MAXENT.Phillips_outputs/Anguloa.virginalis_AllData_RUN1.lambdas" "Anguloa.virginalis/./m_46332684/part1" "Anguloa.virginalis/./m_46332684/part1/projMaxent.asc" doclamp=false visible=false autorun nowarnings notooltips' had status 1
Error in .rasterObjectFromFile(x, band = band, objecttype = "RasterLayer", :
Cannot create a RasterLayer object from this file. (file does not exist)
You see? That "had status 1" part makes me believe a problem has been carried along since the modelling step.
I did a lot of research and found that some people got MaxEnt to work with biomod2 by moving maxent.jar to a folder whose path contains no spaces... but that is not my case. Also, some people solved it by changing the memory allocation; I tried several different memory amounts as well and just get different errors, but I still can't get projections.
Any tips?

My guess is that it is due to a memory issue. I suspect you are working with high-resolution environmental data over a (quite) large area.
One thing to try would be to:
- split the environmental data into smaller pieces/areas,
- make the projections on each area independently,
- stitch the projections back together (see the sketch below).
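For instance, with the raster package the idea could look roughly like this. An untested sketch: the tile extents are invented, and the get_predictions()/merge() stitching assumes the projections come back as raster stacks:

library(raster)

# 1. Split the environmental data into smaller areas
#    (extents are purely illustrative: xmin, xmax, ymin, ymax)
tiles <- list(extent(-80, -70, -20, -10),
              extent(-70, -60, -20, -10))

# 2. Project the models on each area independently
proj_parts <- lapply(seq_along(tiles), function(i) {
  BIOMOD_Projection(modeling.output = myBiomodModelOut,
                    new.env = crop(myExpl, tiles[[i]]),
                    proj.name = paste0('current_part', i),
                    selected.models = 'all',
                    binary.meth = 'TSS',
                    compress = 'xz',
                    clamping.mask = FALSE,
                    output.format = '.grd')
})

# 3. Stitch the per-area projections back together
pred_stacks <- lapply(proj_parts, get_predictions)
full_pred <- do.call(merge, pred_stacks)

Each Java call then only has to hold one tile in memory, which is often enough to make this kind of "status 1" failure go away.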

Related

Couchbase view pagination with Java client

I am trying to pull records from a view which emits rows in the following way:
DefaultViewRow{id=0329a6ac-84cb-403e-9d1d, key=["X","Y","0329a6ac-84cb-403e-9d1d","31816552700"], value=1}
As we have millions of records, we are trying to implement pagination to pull 500 records per page, do some processing, and then get the next 500 records.
I have implemented the following code with the Java client:
def cluster = CouchbaseCluster.create(host)
def placesBucket = cluster.openBucket("pass", "pass")
def startKey = JsonArray.from(X, "Y")
def endKey = JsonArray.from(X, "Y", new JsonObject())
hasRow = true
rowPerPage = 500
page = 0
currentStartkey = ""
startDocId = ""
def viewResult
def counter = 0
while (hasRow) {
    hasRow = false
    def skip = page == 0 ? 0 : 1
    page = page + 1
    viewResult = placesBucket.query(ViewQuery.from("design", "view")
            .startKey(startKey)
            .endKey(endKey)
            .reduce(false)
            .inclusiveEnd()
            .limit(rowPerPage)
            .stale(Stale.FALSE)
            .skip(skip)
            .startKeyDocId(startDocId))
    def runResult = viewResult.allRows()
    for (ViewRow row : runResult) {
        hasRow = true
        println(row)
        counter++
        startDocId = row.document().id()
    }
    println("Page NUMBER " + page)
}
println("total " + counter)
After execution, I am getting a few repetitive rows, and even though the total number of records is around 1000 for a particular small scenario, I get around 3000+ rows in the response and it keeps going.
Can someone please tell me if I am doing something wrong? PS: My start key value will be the same for each run, as I am trying to get each unique doc _id.
Please help.

Why doesn't dynamic (real-time) update work? Why can changes be seen only after the application restarts?

For example:
An attribute value in the domain is changed (a string or boolean value goes from 'true' to 'false'). Everything happens in one session, and we are waiting for the next call, which should be getting the new data, but it does not seem to update.
This is often caused by incorrect domain usage in the Hyperon runtime.
Compare these two usages (incorrect and correct):
1) Incorrect - the domain element is reused (it should not be):
// incorrect:
HyperonDomainObject lob = engine.getDomain("GROUP", "LOB[GROUP]");
HyperonDomainObject trm = lob.getChild("PRODUCT", "PRD3");
HyperonDomainObject adb = trm.getChild("RIDER", "ADB");
// adb is a domain element snapshot
while (true) {
    log.info("code = {}", adb.getAttrString("CODE", ctx));
    // sleep 3 sec
    ...
}
// == console == (in the meantime the user changes attribute CODE from "A" to "BBB")
// but adb is frozen - it is a snapshot
code = A
code = A
code = A
code = A
...
2) Correct usage - always get fresh domain objects:
while (true) {
    HyperonDomainObject lob = engine.getDomain("GROUP", "LOB[GROUP]");
    HyperonDomainObject trm = lob.getChild("PRODUCT", "PRD3");
    HyperonDomainObject adb = trm.getChild("RIDER", "ADB");
    log.info("code = {}", adb.getAttrString("CODE", ctx));
    // sleep 3 sec
    ...
}
// == console == (in the meantime the user changes attribute CODE from "A" to "BBB")
// adb is always fresh
code = A
code = A
code = BBB
code = BBB
...
Remember, engine.getDomain() returns a domain object snapshot, and this object is frozen.
Treat engine.getDomain() as a cache, as it finds objects in an in-memory structure. This structure is refreshed whenever a user modifies the domain in Hyperon Studio.

SpagoBI multi value parameter

I'm trying to create a multi-value parameter in SpagoBI.
Here is my data set query whose last line appears to be causing an issue.
select C."CUSTOMERNAME", C."CITY", D."YEAR", P."NAME"
from "CUSTOMER" C, "DAY" D, "PRODUCT" P, "TRANSACTIONS" T
where C."CUSTOMERID" = T."CUSTOMERID"
and D."DAYID" = T."DAYID"
and P."PRODUCTID" = T."PRODUCTID"
and _CITY_
I created a beforeOpen script in my dataset which looks like this:
this.queryText = this.queryText.replace(_CITY_, " CUSTOMER.CITY in ( "+params["cp"].value+" ) ");
My parameter is set as a string, display type dynamic list box.
When I run the report I get the following error:
org.eclipse.birt.report.engine.api.EngineException: There are errors evaluating script "
this.queryText = this.queryText.replace(_CITY_, " CUSTOMER.CITY in ( "+params["cp"].value+" ) ");
":
Fail to execute script in function __bm_beforeOpen(). Source:
Could anyone please help me?
Hello, I managed to solve the problem. Here is my code:
var substring = "" ;
var strParamValsSelected=reportContext.getParameterValue("citytext");
substring += "?," + strParamValsSelected ;
this.queryText = this.queryText.replace("'xxx'",substring);
As you can see, the "?" is necessary before my parameter. Maybe it will help somebody. Thank you so much for your comments.
If you are using SpagoBI Server and Highcharts (JFreeChart engine) / JSChart engine, you can just use ($P{param_url}) in the query,
or build a dynamic query using JavaScript / Groovy script,
so your query could also be:
select C."CUSTOMERNAME", C."CITY", D."YEAR", P."NAME"
from "CUSTOMER" C, "DAY" D, "PRODUCT" P, "TRANSACTIONS" T
where C."CUSTOMERID" = T."CUSTOMERID"
and D."DAYID" = T."DAYID"
and P."PRODUCTID" = T."PRODUCTID"
and CUSTOMER."CITY" in ('$P{param_url}')

Fetching technical indicators from the Yahoo API

I am trying to get the RSI indicator from the Yahoo Finance API.
So far I can get quotes in CSV format, but there seems to be no API for a specific indicator such as RSI.
Does anyone know how?
Thanks
You have all the data needed to calculate the RSI. http://www.investopedia.com/terms/r/rsi.asp
import numpy as np
from urllib.request import urlopen

# The stock to fetch
stock = 'AMD'
# Yahoo's API
urlToVisit = 'http://chartapi.finance.yahoo.com/instrument/1.0/' + stock + '/chartdata;type=quote;range=1y/csv'
stockFile = []

# Fetch the stock info from the Yahoo API
try:
    sourceCode = urlopen(urlToVisit).read().decode('utf-8')
    try:
        splitSource = sourceCode.split('\n')
        for eachLine in splitSource:
            splitLine = eachLine.split(',')
            # Keep only the data rows (6 comma-separated fields, no header)
            if len(splitLine) == 6 and 'values' not in eachLine:
                stockFile.append(eachLine)
    except Exception as e:
        print(str(e), 'failed to organize pulled data')
except Exception as e:
    print(str(e), 'failed to pull price data')

date, closep, highp, lowp, openp, volume = np.loadtxt(stockFile, delimiter=',', unpack=True)

def rsiFunc(prices, n=14):
    # Returns an RSI array
    deltas = np.diff(prices)
    seed = deltas[:n+1]
    up = seed[seed >= 0].sum()/n
    down = -seed[seed < 0].sum()/n
    rs = up/down
    rsi = np.zeros_like(prices)
    rsi[:n] = 100. - 100./(1. + rs)
    for i in range(n, len(prices)):
        delta = deltas[i-1]
        if delta > 0:
            upval = delta
            downval = 0.
        else:
            upval = 0.
            downval = -delta
        # Wilder's smoothing of average gains and losses
        up = (up*(n-1) + upval)/n
        down = (down*(n-1) + downval)/n
        rs = up/down
        rsi[i] = 100. - 100./(1. + rs)
    return rsi

# Let's see what we got here
rsi = rsiFunc(closep)
n = 0
for i in date:
    print('Date stamp:', i, 'RSI', rsi[n])
    n += 1
There is no such API for Yahoo Finance. I found an interesting API that seems to do what you are looking for (https://www.stockvider.com/). It's brand new; the API does not provide a lot of features yet, but it aims to cover the most common technical indicators. So far you can get your data in XML format only.
For instance, you can get RSI values for the Apple stock like this: https://api.stockvider.com/data/NASDAQ/AAPL/RSI?start_date=2015-05-20&end_date=2015-07-20
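If it helps, here is a minimal R sketch with the httr package; the endpoint path and date parameters are simply copied from the example URL above, so treat the whole call as an assumption about their API:

library(httr)

# Fetch RSI values for AAPL; path and query parameters are taken verbatim
# from the example URL above (untested -- the service is brand new)
resp <- GET("https://api.stockvider.com/data/NASDAQ/AAPL/RSI",
            query = list(start_date = "2015-05-20", end_date = "2015-07-20"))
rsi_xml <- content(resp, as = "text")  # the API returns XML only so far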
You can get the quotes and compute the indicators you want with R packages; see the examples in quantmod and TTR.
For example:
library(quantmod)
getSymbols('F',src='yahoo',return.class='ts')
fpr <- Cl(F)
rsi <- RSI(fpr)
tail(cbind(Cl(F),rsi),10)

Rhadoop basic task on a single machine

I'm running the following code in Rhadoop:
Sys.setenv(HADOOP_HOME="/home/ashkan/Downloads/hadoop-1.0.3/")
Sys.setenv(HADOOP_BIN="/home/ashkan/Downloads/hadoop-1.0.3/bin/")
Sys.setenv(HADOOP_CONF_DIR="/home/ashkan/Downloads/hadoop-1.0.3/conf")
Sys.setenv(HADOOP_CMD="/home/ashkan/Downloads/hadoop-1.0.3/bin/hadoop")
library(Rhipe)
library(rhdfs)
library(rmr2)
hdfs.init()
small.ints = to.dfs(1:10)
mapreduce(
  input = small.ints,
  map = function(k, v) {
    lapply(seq_along(v), function(r){
      x <- runif(v[[r]])
      keyval(r, c(max(x), min(x)))
    })
  })
However, I get the following error:
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
Does anyone know what the problem is?
Thanks a lot.
To fix the problem you'll have to set the HADOOP_STREAMING environment variable. The code below worked fine for me. Note that your code does not use Rhipe, so there is no need to load it.
R code (I'm using Hadoop 2.4.0):
Sys.setenv("HADOOP_CMD"="/usr/local/hadoop/bin/hadoop")
Sys.setenv("HADOOP_STREAMING"="/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar")
library(rhdfs)
library(rmr2)
# Initialise HDFS access
hdfs.init()
small.ints = to.dfs(1:10)
mapreduce(
  input = small.ints,
  map = function(k, v) {
    lapply(seq_along(v), function(r){
      x <- runif(v[[r]])
      keyval(r, c(max(x), min(x)))
    })
  })
I'm guessing that your Hadoop streaming path will be as below:
Sys.setenv("HADOOP_STREAMING"="/home/ashkan/Downloads/hadoop-1.0.3/contrib/streaming/hadoop-streaming-1.0.3.jar")
Hope this helps.
