ENCOG values in output file incorrectly denormalized? - java

The following was produced using the most recent version of encog-workbench (3.2.0).
I was wondering if this is a bug or if I do not grasp the purpose of the output file.
When I run the [sunspot example][1] in the Encog workbench, without segregation, I expect the output file to contain the fitted values from the model. When I create the validation chart, it presents me with the figure found in the tutorial, so this seems correct.
But when I go to the sunspots_output.csv output file, I get the following output:
ssn(t-29) ssn(t+1) Output:ssn(t+1)
... first thirty values have output Null ...
-0.600472813 -0.947202522 null
-0.477541371 -1 8.349050184
-0.528762805 -0.976359338 8.334476431
-0.814814815 -0.986603625 8.314903157
-0.817178881 -0.892040977 8.292847897
...
All the output values are around 8 for the rest of the file.
Now when I go back to the validation chart, there is a Data tab, which contains the following columns:
Ideal Result
-0.477541371 -0.52449577
-0.528762805 -0.526507195
-0.814814815 -0.535029097
-0.817178881 -0.653884012
If I denormalize the values in these columns, I get the following:
66.3 60.3414868
59.8 60.08623701
23.5 59.00480764
23.2 43.92211894
These seem to be the correct actual values (if I compare them with the original data), and thus these should be the predicted values in the output column.
Is this a bug, or do the values in the Output:ssn(t+1) column mean something else?
I copied these values to Excel and denormalized them by typing in the formula for the (-1, 1) range; a code sketch of that formula is below.
I was hoping not to have to do this every time I run an experiment.
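For reference, a minimal Java sketch of that (-1, 1) denormalization; the data bounds dataLow = 0.0 and dataHigh = 253.8 are assumptions inferred from the numbers quoted above, not values read out of the workbench:

public final class Denormalize {
    // Map a value from the normalized range [normLow, normHigh]
    // back to the original data range [dataLow, dataHigh].
    static double denormalize(double norm, double normLow, double normHigh,
                              double dataLow, double dataHigh) {
        return (norm - normLow) / (normHigh - normLow) * (dataHigh - dataLow) + dataLow;
    }

    public static void main(String[] args) {
        // With the assumed bounds, -0.477541371 maps back to roughly 66.3
        System.out.println(denormalize(-0.477541371, -1.0, 1.0, 0.0, 253.8));
    }
}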
I am going to move to code eventually; I just wanted to get some preliminary results with the workbench first. Using segregation results in the same problem, by the way.
If it's a bug, I'll report it on the Encog website.
Thanks for your answers,
Florian
UPDATE
Hey Jef, I downloaded your zip and reproduced the problem using my workbench.
The problem only arises when I do not segregate, which I do not want to.
There are some clear differences in the .ega file created by workbench-executable 3.2.0.
When I use your .ega file and remove the segregate section, it works.
When I use mine, it doesn't. That's why I uploaded my project [here][2].
Maybe you can discover whether something new interferes with outputting the correct values.
Hope it helps!
Update 3:
My actual goal is to build a forecaster; the project can be found here:
http://wikisend.com/download/477372/Myproject.rar
I was wondering if you could tell me whether I am doing something obviously wrong, because currently my output is total rubbish.
Thanks again.

I tried to reproduce the error, but when I ran my own sunspots prediction I did get predicted values close to the expected range. You might try running the zipped version of the example, found here:
http://www.heatonresearch.com/dload/encog/example/workbench/SunspotExample.zip
You should be able to run the EGA file and it will produce an output file. Some of my data are as follows:
"year" "mon" "ssn" "dev" "Output:ssn(t+1)"
1948 5 174.0 69.3 156.3030108771
1948 6 167.8 26.6 168.4791037592
1948 7 142.2 28.3 208.1090604116
1948 8 157.9 35.3 186.0234029962
1948 9 143.3 55.9 131.5008296846
1948 10 136.3 44.9 93.0720770479
1948 11 95.8 21.8 89.8269594386
Perhaps compare the EGA file from the above zip to your EGA file; the difference may point to what is going wrong.

Why is ANTLR not printing set of tokens correctly?

I am testing whether ANTLR 4.7.1 is working properly by using a sample provided by my professor, trying to match these results for the same printed set of tokens:
% java -jar ./antlr-4.7.1-complete.jar HelloExample.g4
% javac -cp antlr-4.7.1-complete.jar HelloExample*.java
% java -cp .:antlr-4.7.1-complete.jar org.antlr.v4.gui.TestRig HelloExample greeting helloworld.greeting -tokens
[#0,0:4='Hello',<1>,1:0]
[#1,6:10='World',<3>,1:6]
[#2,12:12='!',<2>,1:12]
[#3,14:13='<EOF>',<-1>,2:0]
(greeting Hello World !)
However, after getting to the 3rd command, my output was instead:
[#0,0:4='Hello',<'Hello'>,1:0]
[#1,6:10='World',<Name>,1:6]
[#2,12:12='!',<'!'>,1:12]
[#3,13:12='<EOF>',<EOF>,1:13]
In my output, there are no numbers inside < >, which I believe should come from the HelloExample.tokens file, which contains:
Hello=1
Bang=2
Name=3
WS=4
'Hello'=1
'!'=2
I get no error information, and ANTLR seems to have generated all the files I need, so I don't know where to look to resolve this. I'm not sure if it will be of use, but my working directory started with helloworld.greeting and HelloExample.g4, and the final directory now contains:
helloworld.greeting
HelloExample.g4
HelloExample.interp
HelloExample.tokens
HelloExampleBaseListener.class
HelloExampleBaseListener.java
HelloExampleLexer.class
HelloExampleLexer.interp
HelloExampleLexer.java
HelloExampleLexer.tokens
HelloExampleListener.class
HelloExampleListener.java
HelloExampleParser$GreetingContext.class
HelloExampleParser.class
HelloExampleParser.java
As rici already pointed out in the comments, getting the actual rule names instead of their numbers in the token output is a feature and shouldn't worry you.
In order to get the (greeting Hello World !) output at the end, you'll want to add the -tree flag after -tokens.
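For example, the third command from your transcript becomes:
java -cp .:antlr-4.7.1-complete.jar org.antlr.v4.gui.TestRig HelloExample greeting helloworld.greeting -tokens -tree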

JVM Error While Writing Data Frame to Oracle Database using parLapply

I want to parallelize my data writing process. I am writing a data frame to Oracle Database. This data has 4 million rows and 8 columns. It takes 6.5 hours without parallelizing.
When I try to go parallel, I get the error
Error in checkForRemoteErrors(val) :
7 nodes produced errors; first error: No running JVM detected. Maybe .jinit() would help.
I know this error, and I can solve it when I work in a single process. But I do not know how to tell the other nodes the location of Java. Here is my code:
Sys.setenv(JAVA_HOME = 'C:/Program Files/Java/jre1.8.0_181')
library(rJava)
library(RJDBC)
library(DBI)
library(compiler)
library(dplyr)
library(data.table)
jdbcDriver <- JDBC("oracle.jdbc.OracleDriver",
                   classPath = "C:/Program Files/directory/ojdbc6.jar",
                   identifier.quote = "\"")
jdbcConnection <- dbConnect(jdbcDriver, "jdbc:oracle:thin:@//XXXXX", "YYYYY", "ZZZZZ")
By using Sys.setenv(JAVA_HOME = 'C:/Program Files/Java/jre1.8.0_181') I solve the same problem for a single core, but when I go parallel:
library(parallel)
no_cores <- detectCores() - 1
cl <- makeCluster(no_cores)
clusterExport(cl, varlist = list("jdbcConnection", "brand3.merge.u"))
clusterEvalQ(cl, .libPaths("C:/Users/onur.boyar/Documents/R/win-library/3.5"))
clusterEvalQ(cl, library(RJDBC))
clusterEvalQ(cl, library(rJava))

# brand3.merge.u is the data frame I am trying to write
parLapply(cl, 1:length(brand3.merge.u$CELL_PH_NUM), function(x)
  dbSendUpdate(jdbcConnection,
               "INSERT INTO xxnvdw.an_cust_analytics VALUES(?,?,?,?,?,?,?,?)",
               brand3.merge.u[x, 1], brand3.merge.u[x, 2], brand3.merge.u[x, 3],
               brand3.merge.u[x, 4], brand3.merge.u[x, 5], brand3.merge.u[x, 6],
               brand3.merge.u[x, 7], brand3.merge.u[x, 8]))
I get the above error and I do not know how to set my Java location for other nodes.
I want to use parLapply since it is faster than foreach. Any help would be appreciated. Thanks!
JAVA_HOME environment variable
If the problem really is the location of Java, you could set the environment variable in your .Renviron file, which is likely located at ~/.Renviron. Add a line to that file and it will be propagated to all R sessions that run under your user:
JAVA_HOME='C:/Program Files/Java/jre1.8.0_181'
Alternatively, you can just add that location to your PATH environment variable.
JVM Initialization via rJava
On the other hand, the error message may point to a JVM simply not being initialized on the workers, which you can solve with .jinit. A minimal example:
library(parallel)
cl <- makeCluster(detectCores())
parallel::parLapply(cl, 1:5, function(x) {
  # each worker is a fresh R process, so the JVM must be initialized there
  rJava::.jinit()
  rJava::.jnew(class = "java/lang/Integer", x)$toString()
})
Working around Java use
This was not specifically asked, but you can also work around the Java dependency entirely by using ODBC drivers, which Oracle provides for its database:
con <- DBI::dbConnect(
  odbc::odbc(),
  Driver = "[your driver's name]",
  ...
)

Problems generating '.pas' file with Java2OP and compiling it

I have 2 JAR files coming from an SDK that I have to use.
Generation problem
I succeeded in generating the first .pas file, but Java2OP fails to generate the second .pas I need, with the message:
Access violation at address 0042AF4A in module 'Java2OP.exe'. Read of address 09D00000
Could this come from a known issue? There are no other hints about what causes the problem in the SDK .jar.
I'm using the Java2OP located in C:\Program Files (x86)\Embarcadero\Studio\18.0\bin\converters\java2op, but I first had to apply the solutions from Delphi 10.1 Berlin - Java2OP: class or interface expected before generating the 1st file.
Compilation problem
Anyway, I tried to generate a .hpp file from the generated .pas.
I don't know much about Delphi. Does the problem come from the SDK itself, or the generation of the .pas file?
1st issue solved
Java2OP included Androidapi.JNI.Java.Util and not Androidapi.JNI.JavaUtil. I had to import Androidapi.JNI.JavaUtil myself, though it is present in the /Program Files (x86)/Embarcadero/... folders.
2nd issue
The same 4 compilation errors happen multiple times across the .pas file on parts using the word this.
Do I have to replace every use of this with self?
Errors
E2023 Function needs result type: Line 4
E2147 Property 'this' does not exist in base class: Line 5
E2029 ',' or ':' expected but identifier 'read' found: Line 5
E2029 ',' or ':' expected but number found: Line 5
[JavaSignature('com/hsm/barcode/DecoderConfigValues$SymbologyFlags')]
JDecoderConfigValues_SymbologyFlags = interface(JObject)
  ['{BCF30FD2-B650-433C-8A4E-8B638A508487}']
  function _Getthis$0: JDecoderConfigValues; cdecl;
  property this$0: JDecoderConfigValues read _Getthis$0;
end;

[JavaSignature('com/hsm/barcode/ExposureValues$ExposureSettingsMinMax')]
JExposureValues_ExposureSettingsMinMax = interface(JObject)
  ['{A576F85F-A021-475C-9741-06D92DBC205F}']
  function _Getthis$0: JExposureValues; cdecl;
  property this$0: JExposureValues read _Getthis$0;
end;

apache PIG with datafu: Cannot resolve UDFs

I'm trying the quickstart from here: http://datafu.incubator.apache.org/docs/datafu/getting-started.html
I tried nearly everything, but I'm sure it must be my fault somewhere. I already tried:
exporting PIG_HOME, CLASSPATH, PIG_CLASSPATH
starting pig with -cp datafu-pig-incubating-1.3.0.jar
registering datafu-pig-incubating-1.3.0.jar locally and in HDFS => both successful (at least no error shown)
Nothing helped.
Trying this in Pig:
register datafu-pig-incubating-1.3.0.jar
DEFINE Median datafu.pig.stats.StreamingMedian();
data = load '/user/hduser/numbers.txt' using PigStorage() as (val:int);
data2 = FOREACH (GROUP data ALL) GENERATE Median(data);
or directly
data2 = FOREACH (GROUP data ALL) GENERATE datafu.pig.stats.StreamingMedian(data);
I get this name-resolve error:
2016-06-04 17:22:22,734 [main] ERROR org.apache.pig.tools.grunt.Grunt
- ERROR 1070: Could not resolve datafu.pig.stats.StreamingMedian using imports: [, java.lang., org.apache.pig.builtin.,
org.apache.pig.impl.builtin.] Details at logfile:
/home/hadoop/pig_1465053680252.log
When I look into datafu-pig-incubating-1.3.0.jar, it looks OK, everything is in place. I also tried some Bag functions and got the same error.
I think it's kind of a noob error which I just don't see (as I did not find specific answers for DataFu on SO or Google), so thanks in advance for shedding some light on this.
The Pig script is correct; the only thing that could break is that some class dependencies could not be met while registering DataFu.
Try to run locally (pig -x local) and look at the detailed log, for example with the command below.
Also check the version of Pig: it should be newer than 0.14.0.
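For example, assuming the script above is saved as median_test.pig (the file name here is just for illustration):
pig -x local median_test.pig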

Run ABCL code that uses cl-ppcre

With reference to my previous question,
Executing a lisp function from Java
I was able to call Lisp code from Java using ABCL.
But the problem is that the already existing Lisp code uses the CL-PPCRE package.
I cannot compile the code, as it says 'CL-PPCRE not found'.
I have tried different approaches to add that package, including:
1) how does one compile a clisp program which uses cl-ppcre?
2) https://groups.google.com/forum/#!topic/cl-ppcre/juSfOhEDa1k
Neither works!
Another thing is that executing (compile-file "aima.asd") works perfectly fine, although it also requires cl-ppcre:
(defpackage #:aima-asd
  (:use :cl :asdf))

(in-package :aima-asd)

(defsystem aima
  :name "aima"
  :version "0.1"
  :components ((:file "defpackage")
               (:file "main" :depends-on ("defpackage")))
  :depends-on (:cl-ppcre))
The final Java code is:
interpreter.eval("(load \"aima/asdf.lisp\")");
interpreter.eval("(compile-file \"aima/aima.asd\")");
interpreter.eval("(compile-file \"aima/defpackage.lisp\")");
interpreter.eval("(in-package :aima)");
interpreter.eval("(load \"aima/aima.lisp\")");
interpreter.eval("(aima-load 'all)");
The error message is
Error loading C:/Users/Administrator.NUIG-1Z7HN12/workspace/aima/probability/domains/edit-nets.lisp at line 376 (offset 16389)
#<THREAD "main" {3A188AF2}>: Debugger invoked on condition of type READER-ERROR
The package "CL-PPCRE" can't be found.
[1] AIMA(1):
Can anyone help me?
You need to load cl-ppcre before you can use it. You can do that by using (asdf:load-system :aima), provided that you put both aima and cl-ppcre into locations that your ASDF searches.
I used Quicklisp to add cl-ppcre (because nothing else worked for me). Here is what I did:
(load "~/QuickLisp.lisp")
(quicklisp-quickstart:install)
(load "~/quicklisp/setup.lisp")
(ql:quickload :cl-ppcre)
The first two lines are one-time steps; once Quicklisp is installed, you can start from line 3.
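Since everything here is driven from Java through ABCL's interpreter, here is a minimal sketch of running those same steps from the Java side; the class name is made up, and the two one-time install lines should be removed after the first run:

import org.armedbear.lisp.Interpreter;

public class QuicklispBootstrap {
    public static void main(String[] args) {
        Interpreter interpreter = Interpreter.createInstance();
        // One-time bootstrap: fetch and install Quicklisp.
        interpreter.eval("(load \"~/QuickLisp.lisp\")");
        interpreter.eval("(quicklisp-quickstart:install)");
        // Every run afterwards: load Quicklisp, then pull in CL-PPCRE.
        interpreter.eval("(load \"~/quicklisp/setup.lisp\")");
        interpreter.eval("(ql:quickload :cl-ppcre)");
    }
}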
