In our existing application we are using Esper version 5.3.
We have added a few addPlugInSingleRowFunction() calls so the functions can be used in EPL, as below:
final Configuration cepConfiguration = new Configuration();
cepConfiguration.addPlugInSingleRowFunction("toNumber", Double.class.getName(), "parseDouble");
cepConfiguration.addPlugInSingleRowFunction("toBoolean", Boolean.class.getName(), "parseBoolean");
This was working fine in version 5.3.
After upgrading to 8.3, the above code was changed as per the Esper documentation:
cepConfiguration.getCompiler().addPlugInSingleRowFunction("toNumber", Double.class.getName(), "parseDouble");
cepConfiguration.getCompiler().addPlugInSingleRowFunction("toBoolean", Boolean.class.getName(), "parseBoolean");
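For reference, in 8.x the statements are compiled with this configuration and then deployed into the runtime; below is a minimal self-contained sketch of that flow. The EPL text and the MyEvent type are illustrative, not our actual code.

import com.espertech.esper.common.client.EPCompiled;
import com.espertech.esper.common.client.configuration.Configuration;
import com.espertech.esper.compiler.client.CompilerArguments;
import com.espertech.esper.compiler.client.EPCompilerProvider;
import com.espertech.esper.runtime.client.EPRuntime;
import com.espertech.esper.runtime.client.EPRuntimeProvider;

public class CompileAndDeploySketch {
    // Illustrative event type; our real events differ.
    public static class MyEvent {
        private final String strField;
        public MyEvent(String strField) { this.strField = strField; }
        public String getStrField() { return strField; }
    }

    public static void main(String[] args) throws Exception {
        Configuration cepConfiguration = new Configuration();
        cepConfiguration.getCommon().addEventType(MyEvent.class);
        cepConfiguration.getCompiler().addPlugInSingleRowFunction(
                "toNumber", Double.class.getName(), "parseDouble");

        // The plug-in function is resolved at compile time, so the same
        // configuration object must be handed to the compiler.
        String epl = "@name('stmt-0') select toNumber(strField) as n from MyEvent";
        EPCompiled compiled = EPCompilerProvider.getCompiler()
                .compile(epl, new CompilerArguments(cepConfiguration));

        EPRuntime runtime = EPRuntimeProvider.getDefaultRuntime(cepConfiguration);
        runtime.getDeploymentService().deploy(compiled);
        runtime.getEventService().sendEventBean(new MyEvent("1.5"), "MyEvent");
    }
}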
But once the sendEventBean() method is called to send an event to the runtime, we see the exception below every time.
Surprisingly, events are still getting matched by the statements present in the runtime even while these exceptions occur, though we are not sure whether some events are failing to match.
Can someone please help with this?
applog.cls=com.espertech.esper.common.internal.epl.expression.dot.core.ExprDotNodeForgeStaticMethodEval,applog.mthd=staticMethodEvalHandleInvocationException,applog.line=228,applog.msg=Invocation exception when invoking method 'parseDouble' of class 'java.lang.Double' passing parameters [null] for statement 'stmt-0': NullPointerException : null
exc.stack=java.lang.NullPointerException
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1838)
    at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
    at java.lang.Double.parseDouble(Double.java:538)
    at generated.StatementAIFactoryProvider_a4bd241445010f45474e4598e34521ca1b2836db_stmt450.m8(StatementAIFactoryProvider_a4bd241445010f45474e4598e34521ca1b2836db_stmt450.java:161)
    at generated.StatementAIFactoryProvider_a4bd241445010f45474e4598e34521ca1b2836db_stmt450$2.get(ANONYMOUS.java:148)
    at com.espertech.esper.runtime.internal.filtersvcimpl.FilterParamIndexEquals.matchEvent(FilterParamIndexEquals.java:32)
    at com.espertech.esper.runtime.internal.filtersvcimpl.FilterHandleSetNode.matchEvent(FilterHandleSetNode.java:100)
    at com.espertech.esper.runtime.internal.filtersvcimpl.EventTypeIndex.matchType(EventTypeIndex.java:178)
    at com.espertech.esper.runtime.internal.filtersvcimpl.EventTypeIndex.matchEvent(EventTypeIndex.java:124)
    at com.espertech.esper.runtime.internal.filtersvcimpl.FilterServiceBase.retryableMatchEvent(FilterServiceBase.java:179)
    at com.espertech.esper.runtime.internal.filtersvcimpl.FilterServiceBase.evaluateInternal(FilterServiceBase.java:96)
    at com.espertech.esper.runtime.internal.filtersvcimpl.FilterServiceLockCoarse.evaluate(FilterServiceLockCoarse.java:52)
    at com.espertech.esper.runtime.internal.kernel.service.EPEventServiceImpl.processMatches(EPEventServiceImpl.java:610)
    at com.espertech.esper.runtime.internal.kernel.service.EPEventServiceImpl.processWrappedEvent(EPEventServiceImpl.java:450)
    at com.espertech.esper.runtime.internal.kernel.thread.InboundUnitSendEvent.run(InboundUnitSendEvent.java:43)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
You could turn on compiler logging (config.getCompiler().getLogging().setEnableCode(true);) and make sure you have INFO-level logging. You can then inspect "StatementAIFactoryProvider_a4bd241445010f45474e4598e34521ca1b2836db_stmt450.m8" at line 161 to see what the problem is. It sounds like a null value gets passed to Double.parseDouble, but since I don't have the complete code it's hard to say.
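If the underlying field can legitimately be null, one option is to register a null-tolerant wrapper of your own instead of pointing the plug-in function at Double.parseDouble directly. A minimal sketch; the class and method names here are mine, not an Esper API:

// Hypothetical null-tolerant wrappers; registering these instead of the JDK
// methods keeps filter evaluation from throwing when the field is null.
public final class SafeConversions {
    private SafeConversions() {}

    public static Double toNumber(String s) {
        return s == null ? null : Double.parseDouble(s);
    }

    public static Boolean toBoolean(String s) {
        return s == null ? null : Boolean.parseBoolean(s);
    }
}

// The registration then becomes, for example:
// cepConfiguration.getCompiler().addPlugInSingleRowFunction(
//         "toNumber", SafeConversions.class.getName(), "toNumber");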
I'm trying to fix errors reported by forbiddenapis. I had this line:
paramMap.put(Config.TITLEBOOST.toUpperCase(), titleBoost);
So it was reported as an error, as expected. I've tried this:
paramMap.put(Config.TITLEBOOST.toUpperCase(Locale.getDefault()), titleBoost);
and this:
paramMap.put(Config.TITLEBOOST.toUpperCase(Locale.ROOT), titleBoost);
and also this:
paramMap.put(Config.TITLEBOOST.toUpperCase(Locale.ENGLISH), titleBoost);
However, none of them fixed the error:
[forbiddenapis] Forbidden method invocation:
java.lang.String#toUpperCase() [Uses default locale]
What am I missing?
Double-check that the bytecode you are analyzing is actually your most recent build output, and that you're looking at the same line forbiddenapis is :). This looks to me like your source, bytecode, and analysis falling out of sync; the relevant rule shouldn't flag an error on String.toUpperCase(Locale).
Disclaimer: I haven't used forbiddenapis myself; I wrote this answer based on the repo and on a blog post I found.
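For what it's worth, the underlying rule is sound. A small self-contained demonstration of why the default locale matters (the classic Turkish-i problem):

import java.util.Locale;

public class LocaleDemo {
    public static void main(String[] args) {
        // Simulate running on a machine whose default locale is Turkish.
        Locale.setDefault(new Locale("tr", "TR"));
        // Locale-sensitive: "i" upper-cases to dotted capital İ (U+0130) here.
        System.out.println("title".toUpperCase());            // prints TİTLE
        // Locale-independent: stable for config keys and identifiers.
        System.out.println("title".toUpperCase(Locale.ROOT)); // prints TITLE
    }
}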
I am using com.cloudera.crunch version: '0.3.0-3-cdh-5.2.1'.
I have a small program that reads some AVROs and filters out invalid data based on some criteria. I am using pipeline.write(PCollection, AvroFileTarget) to write the invalid data output. It works fine in production runs.
For unit testing this piece of code, I use MemPipeline instance.
But it fails while writing the output in that case.
I get this error:
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:428)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:197)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:78)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
at java.io.DataOutputStream.writeBytes(DataOutputStream.java:276)
at com.cloudera.crunch.impl.mem.MemPipeline.write(MemPipeline.java:159)
Any idea what's wrong?
The Hadoop environment variables should be configured properly, along with hadoop.dll and winutils.exe (needed on Windows).
Also pass this JVM argument while executing the MR job/application:
-Djava.library.path=HADOOP_HOME/lib/native
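If the failure shows up in unit tests on Windows, the Hadoop home can also be set programmatically before any filesystem code runs. A sketch with an illustrative path; note that java.library.path is read once at JVM start, so it still has to be passed as the -D argument above rather than set in code:

import org.junit.BeforeClass;

public class MemPipelineWriteTest {

    @BeforeClass
    public static void configureHadoopHome() {
        // Illustrative path; its bin/ directory must contain winutils.exe and hadoop.dll.
        System.setProperty("hadoop.home.dir", "C:/hadoop");
    }
}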
I'm trying the quickstart from here: http://datafu.incubator.apache.org/docs/datafu/getting-started.html
I have tried nearly everything, but I'm sure it must be my fault somewhere. What I have already tried:
exporting PIG_HOME, CLASSPATH, PIG_CLASSPATH
starting pig with -cp datafu-pig-incubating-1.3.0.jar
registering datafu-pig-incubating-1.3.0.jar locally and in HDFS => both successful (at least no error shown)
Nothing helped.
Trying this in Pig:
register datafu-pig-incubating-1.3.0.jar
DEFINE Median datafu.pig.stats.StreamingMedian();
data = load '/user/hduser/numbers.txt' using PigStorage() as (val:int);
data2 = FOREACH (GROUP data ALL) GENERATE Median(data);
or directly
data2 = FOREACH (GROUP data ALL) GENERATE datafu.pig.stats.StreamingMedian(data);
I get this name-resolve error:
2016-06-04 17:22:22,734 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve datafu.pig.stats.StreamingMedian using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.] Details at logfile: /home/hadoop/pig_1465053680252.log
When I look into the datafu-pig-incubating-1.3.0.jar it looks OK, everything in place. I also tried some Bag functions, same error then.
I think it's the kind of newbie error I just don't see (I did not find specific answers for DataFu on SO or Google), so thanks in advance for shedding some light on this.
The Pig script is correct; the only thing that could break is that some class dependencies could not be met while DataFu was being registered.
Try running locally (pig -x local) and look at the detailed log.
Also check the version of Pig: it should be newer than 0.14.0.
With reference to my previous question,
Executing a lisp function from Java
I was able to call lisp code from Java using ABCL.
But the problem is that the already existing Lisp code uses the CL-PPCRE package.
I cannot compile the code, as it says 'CL-PPCRE not found'.
I have tried different approaches to add that package, including:
1) how does one compile a clisp program which uses cl-ppcre?
2) https://groups.google.com/forum/#!topic/cl-ppcre/juSfOhEDa1k
Neither works!
Another odd thing is that executing (compile-file "aima.asd") works perfectly fine, although it also requires cl-ppcre:
(defpackage #:aima-asd
(:use :cl :asdf))
(in-package :aima-asd)
(defsystem aima
:name "aima"
:version "0.1"
:components ((:file "defpackage")
(:file "main" :depends-on ("defpackage")))
:depends-on (:cl-ppcre))
The final java code is
interpreter.eval("(load \"aima/asdf.lisp\")");
interpreter.eval("(compile-file \"aima/aima.asd\")");
interpreter.eval("(compile-file \"aima/defpackage.lisp\")");
interpreter.eval("(in-package :aima)");
interpreter.eval("(load \"aima/aima.lisp\")");
interpreter.eval("(aima-load 'all)");
The error message is
Error loading C:/Users/Administrator.NUIG-1Z7HN12/workspace/aima/probability/domains/edit-nets.lisp at line 376 (offset 16389)
#<THREAD "main" {3A188AF2}>: Debugger invoked on condition of type READER-ERROR
The package "CL-PPCRE" can't be found.
[1] AIMA(1):
Can anyone help me?
You need to load cl-ppcre before you can use it. You can do that by using (asdf:load-system :aima), provided that you put both aima and cl-ppcre into locations that your ASDF searches.
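Since you are driving ABCL from Java, that setup can go through the same interpreter as in your previous code. A sketch; the registry paths are illustrative, and ASDF generally wants absolute directory pathnames there:

// Make both system definitions visible to ASDF, then load the top-level system.
interpreter.eval("(require :asdf)");
interpreter.eval("(push #p\"/path/to/cl-ppcre/\" asdf:*central-registry*)");
interpreter.eval("(push #p\"/path/to/aima/\" asdf:*central-registry*)");
interpreter.eval("(asdf:load-system :aima)");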
I used QuickLisp to add cl-ppcre (because nothing else worked for me).
Here is what I did:
(load \"~/QuickLisp.lisp\")")
(quicklisp-quickstart:install)
(load "~/quicklisp/setup.lisp")
(ql:quickload :cl-ppcre)
The first two lines are one-time-only steps. Once Quicklisp is installed, you can start from line 3.
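Wired through the same Java interpreter as in the question, the sequence looks roughly like this (the first two evals are the one-time install):

interpreter.eval("(load \"~/QuickLisp.lisp\")");    // one-time install, step 1
interpreter.eval("(quicklisp-quickstart:install)"); // one-time install, step 2
interpreter.eval("(load \"~/quicklisp/setup.lisp\")");
interpreter.eval("(ql:quickload :cl-ppcre)");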
I'm using the owl2java plugin to generate Java code from an ontology file, but I always get the same error.
Exception in thread "main" com.hp.hpl.jena.ontology.ConversionException: Cannot convert node http://www.w3.org/2002/07/owl#bottomObjectProperty to TransitiveProperty
at com.hp.hpl.jena.ontology.impl.TransitivePropertyImpl$1.wrap(TransitivePropertyImpl.java:66)
at com.hp.hpl.jena.enhanced.EnhNode.convertTo(EnhNode.java:142)
at com.hp.hpl.jena.enhanced.EnhNode.convertTo(EnhNode.java:22)
at com.hp.hpl.jena.enhanced.Polymorphic.asInternal(Polymorphic.java:54)
at com.hp.hpl.jena.enhanced.EnhNode.viewAs(EnhNode.java:92)
at com.hp.hpl.jena.enhanced.EnhGraph.getNodeAs(EnhGraph.java:135)
at com.hp.hpl.jena.ontology.impl.OntModelImpl$SubjectNodeAs.map1(OntModelImpl.java:3040)
at com.hp.hpl.jena.ontology.impl.OntModelImpl$SubjectNodeAs.map1(OntModelImpl.java:3033)
at com.hp.hpl.jena.util.iterator.Map1Iterator.next(Map1Iterator.java:35)
at com.hp.hpl.jena.util.iterator.WrappedIterator.next(WrappedIterator.java:68)
at com.hp.hpl.jena.util.iterator.UniqueExtendedIterator.nextIfNew(UniqueExtendedIterator.java:61)
at com.hp.hpl.jena.util.iterator.UniqueExtendedIterator.hasNext(UniqueExtendedIterator.java:69)
at com.hp.hpl.jena.util.iterator.NiceIterator.asList(NiceIterator.java:185)
at com.hp.hpl.jena.util.iterator.NiceIterator.toList(NiceIterator.java:159)
at de.incunabulum.owl2java.core.generator.OwlReader.handleProperties(OwlReader.java:862)
at de.incunabulum.owl2java.core.generator.OwlReader.generateJModel(OwlReader.java:457)
at de.incunabulum.owl2java.core.JenaGenerator.generate(JenaGenerator.java:65)
at onto.main.main(main.java:99)
I have no idea what I'm doing wrong. Any ideas?
Thank you a lot.
I looked at the top line of your exception and saw com.hp.hpl.jena.ontology.impl.TransitivePropertyImpl.
Googling for that leads to a version of the source code. It may not be exactly the same version as you're using, but it is probably close enough to be informative. Reading the code leads to these questions:
Does your Model have a profile? It must.
Does the profile support Transitivity? It must.
Are you combining Transitive with something else that it's incompatible with?
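To test the first two questions directly, you can build the model yourself with an explicit profile before handing the ontology to the generator. A sketch, with an illustrative file name; setStrictMode(false) relaxes the polymorphic type checking that raises ConversionException, which can help while diagnosing:

import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class ProfileCheck {
    public static void main(String[] args) {
        // OWL_MEM carries the full OWL profile, with no inference.
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        model.read("file:ontology.owl"); // illustrative file name
        // The profile must define TRANSITIVE_PROPERTY for the conversion to work.
        System.out.println(model.getProfile().TRANSITIVE_PROPERTY());
        // Fail soft instead of throwing ConversionException while diagnosing.
        model.setStrictMode(false);
    }
}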