I get "java.lang.ClassCastException: Cannot cast org.apache.uima.jcas.tcas.Annotation to java.lang.String" exception after updating the ruta version from 2.5.0 to 2.6.1 (Installtion Details attached). How to resolve this exception ? The exception occurs while executing the below script. Initializing Process taking more time.
Script:
FOREACH(hlev) Headinglevel{}
{
Document{->tagClass="",onlyClass=true};
hlev{-> ASSIGN(tagClass,tagName + "." + className), Headinglevel.class = tagClass}
<-{TagName{->MATCHEDTEXT(tagName)} # ClassName{->MATCHEDTEXT(className)};};
CssDefinition{->onlyClass=false, family = CssDefinition.fontfamily, size = CssDefinition.fontsize, color = CssDefinition.fontcolor, bold = CssDefinition.bold, italic = CssDefinition.italic, underline = CssDefinition.underline, case = CssDefinition.case}
<-{CssStyles{PARSE(cssStylesStr),IF(contains(cssStylesStr,tagClass))};};
}
Stacktrace:
Aug 02, 2018 12:07:25 PM org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl callAnalysisComponentProcess(434)
SEVERE: Exception occurred
org.apache.uima.analysis_engine.AnalysisEngineProcessException: Annotator processing failed.
at org.apache.uima.ruta.engine.RutaEngine.process(RutaEngine.java:563)
at org.apache.uima.analysis_component.JCasAnnotator_ImplBase.process(JCasAnnotator_ImplBase.java:48)
at org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.callAnalysisComponentProcess(PrimitiveAnalysisEngine_impl.java:401)
at org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.processAndOutputNewCASes(PrimitiveAnalysisEngine_impl.java:318)
at org.apache.uima.analysis_engine.impl.AnalysisEngineImplBase.process(AnalysisEngineImplBase.java:269)
at org.apache.uima.ruta.ide.launching.RutaLauncher.processFile(RutaLauncher.java:242)
at org.apache.uima.ruta.ide.launching.RutaLauncher.main(RutaLauncher.java:191)
Caused by: java.lang.ClassCastException: Cannot cast org.apache.uima.jcas.tcas.Annotation to java.lang.String
at java.lang.Class.cast(Class.java:3369)
at org.apache.uima.ruta.RutaEnvironment.getVariableValue(RutaEnvironment.java:866)
at org.apache.uima.ruta.expression.string.StringVariableExpression.getStringValue(StringVariableExpression.java:38)
at org.apache.uima.ruta.rule.RutaLiteralMatcher.getMatchingAnnotations(RutaLiteralMatcher.java:51)
at org.apache.uima.ruta.rule.RutaLiteralMatcher.getMatchingAnnotations(RutaLiteralMatcher.java:33)
at org.apache.uima.ruta.rule.RutaRuleElement.getAnchors(RutaRuleElement.java:51)
at org.apache.uima.ruta.rule.RutaRuleElement.startMatch(RutaRuleElement.java:59)
at org.apache.uima.ruta.rule.ComposedRuleElement.startMatch(ComposedRuleElement.java:76)
at org.apache.uima.ruta.rule.RutaRule.apply(RutaRule.java:63)
at org.apache.uima.ruta.rule.RutaRule.apply(RutaRule.java:54)
at org.apache.uima.ruta.rule.RutaRule.apply(RutaRule.java:36)
at org.apache.uima.ruta.block.ForEachBlock.apply(ForEachBlock.java:92)
at org.apache.uima.ruta.block.RutaScriptBlock.apply(RutaScriptBlock.java:67)
at org.apache.uima.ruta.RutaModule.apply(RutaModule.java:56)
at org.apache.uima.ruta.engine.RutaEngine.process(RutaEngine.java:561)
...
As a workaround, I used the annotation type name directly instead of the FOREACH variable, which was apparently being resolved as a string expression:
FOREACH(hlev) Headinglevel{}
{
Document{->tagClass="",onlyClass=true};
Headinglevel{-> ASSIGN(tagClass,tagName + "." + className), Headinglevel.class = tagClass}
<-{TagName{->MATCHEDTEXT(tagName)} # ClassName{->MATCHEDTEXT(className)};};
CssDefinition{->onlyClass=false, family = CssDefinition.fontfamily, size = CssDefinition.fontsize, color = CssDefinition.fontcolor, bold = CssDefinition.bold, italic = CssDefinition.italic, underline = CssDefinition.underline, case = CssDefinition.case}
<-{CssStyles{PARSE(cssStylesStr),IF(contains(cssStylesStr,tagClass))};};
}
I am getting org.apache.solr.common.SolrException: java.lang.IllegalArgumentException: Invalid path string "//10.52.21.4:8021/solr" caused by empty node name specified #1 while trying to connect to the Solr client.
Full Logs:
org.apache.solr.common.SolrException: java.lang.IllegalArgumentException: Invalid path string "//10.52.21.4:8021/solr" caused by empty node name specified #1
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:171)
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:117)
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:107)
at org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:277)
at org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:140)
at org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:383)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:805)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
I am getting the error while trying to execute the below block of code:
String solrUrl = "http://10.52.21.4:8021/solr/";
ArrayList<String> fields = new ArrayList<String>();
// `request` is a CollectionAdminRequest built elsewhere in the class.
CollectionAdminResponse response = (CollectionAdminResponse) request
        .process(new CloudSolrClient.Builder().withZkHost(solrUrl).build());
fields = (ArrayList<String>) response.getResponse().get("collections");
Can anybody please help with this problem?
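For what it is worth, a minimal sketch of the two usual ways to build the client: CloudSolrClient.Builder#withZkHost expects a ZooKeeper host:port rather than the Solr HTTP URL, which is why the URL's "//" produces the empty-node-name error. The ZooKeeper address "zk1:2181" below is a placeholder, not taken from the question.

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class SolrClientSketch {
    public static void main(String[] args) {
        // Option 1: SolrCloud client, built from the ZooKeeper ensemble
        // address (placeholder value; adjust to the real ZK host:port).
        CloudSolrClient cloudClient =
                new CloudSolrClient.Builder().withZkHost("zk1:2181").build();

        // Option 2: plain HTTP client against the Solr base URL from the question.
        HttpSolrClient httpClient = new HttpSolrClient.Builder()
                .withBaseSolrUrl("http://10.52.21.4:8021/solr")
                .build();
    }
}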
I tried to compress the Inputmask.js file. Here is the code I used:
import java.io.FileWriter;
import java.util.logging.Level;

import com.google.javascript.jscomp.CheckLevel;
import com.google.javascript.jscomp.CompilationLevel;
import com.google.javascript.jscomp.CompilerOptions;
import com.google.javascript.jscomp.JSError;
import com.google.javascript.jscomp.SourceFile;
import com.google.javascript.jscomp.WarningLevel;

public class JSFileMinifyTest {

    public static void main(String[] args) throws Exception {
        String sourceFileName = "D:\\temp\\jquery.inputmask.bundle.js";
        String outputFilename = "D:\\temp\\combined.min.js";

        com.google.javascript.jscomp.Compiler.setLoggingLevel(Level.INFO);
        com.google.javascript.jscomp.Compiler compiler = new com.google.javascript.jscomp.Compiler();

        CompilerOptions options = new CompilerOptions();
        CompilationLevel.WHITESPACE_ONLY.setOptionsForCompilationLevel(options);
        options.setAggressiveVarCheck(CheckLevel.OFF);
        options.setRuntimeTypeCheck(false);
        options.setCheckRequires(CheckLevel.OFF);
        options.setCheckProvides(CheckLevel.OFF);
        options.setReserveRawExports(false);
        WarningLevel.VERBOSE.setOptionsForWarningLevel(options);

        // To get the complete set of externs, the logic in
        // CompilerRunner.getDefaultExterns() should be used here.
        SourceFile extern = SourceFile.fromCode("externs.js", "function alert(x) {}");
        SourceFile jsFile = SourceFile.fromFile(sourceFileName);
        compiler.compile(extern, jsFile, options);

        for (JSError message : compiler.getWarnings()) {
            System.err.println("Warning message: " + message.toString());
        }
        for (JSError message : compiler.getErrors()) {
            System.err.println("Error message: " + message.toString());
        }

        FileWriter outputFile = new FileWriter(outputFilename);
        outputFile.write(compiler.toSource());
        outputFile.close();
    }
}
But an error occurred while compressing this JS file.
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.parsing.ParserRunner parse
INFO: Error parsing D:\temp\jquery.inputmask.bundle.js: Compilation produced 3 syntax errors. (D:\temp\jquery.inputmask.bundle.js#1)
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: D:\temp\jquery.inputmask.bundle.js:1002: ERROR - Parse error. identifier is a reserved word
static || null !== test.fn && void 0 !== testPos.input ? static && null !== test.fn && void 0 !== testPos.input && (static = !1,
^
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: D:\temp\jquery.inputmask.bundle.js:1003: ERROR - Parse error. syntax error
maskTemplate += "</span>") : (static = !0, maskTemplate += "<span class='im-static''>");
^
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: D:\temp\jquery.inputmask.bundle.js:1010: ERROR - Parse error. missing variable name
var maskTemplate = "", static = !1;
^
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager printSummary
WARNING: 3 error(s), 0 warning(s)
Error message: JSC_PARSE_ERROR. Parse error. identifier is a reserved word at D:\temp\jquery.inputmask.bundle.js line 1002 : 16
Error message: JSC_PARSE_ERROR. Parse error. syntax error at D:\temp\jquery.inputmask.bundle.js line 1003 : 29
Error message: JSC_PARSE_ERROR. Parse error. missing variable name at D:\temp\jquery.inputmask.bundle.js line 1010 : 39
Is there any way to skip syntax validation, or to ignore the errors and continue compressing?
By default, the compiler parses the code as strict mode ('use strict'), and 'static' is a reserved word in strict mode. You can either change the language mode or simply rename the variable from 'static' to something else.
The strict mode reserve words are documented here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode#Paving_the_way_for_future_ECMAScript_versions
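For reference, a minimal sketch of the first option: CompilerOptions exposes a language-mode setter, and selecting non-strict ES5 input lets the parser accept 'static' as an ordinary identifier (whether a given Closure Compiler release still offers a non-strict ES5 mode depends on its version).

// In the options setup from the question, before compiler.compile(...):
// parse the input as non-strict ES5 so 'static' is treated as a plain identifier.
options.setLanguageIn(CompilerOptions.LanguageMode.ECMASCRIPT5);

Renaming the variable in the source avoids depending on the parser's language mode at all.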
I have a question regarding h2o.stackedEnsemble in R. When I try to create an ensemble from GLM models (or any other models combined with a GLM), I get the following error:
DistributedException from localhost/127.0.0.1:54321: 'null', caused by java.lang.NullPointerException
DistributedException from localhost/127.0.0.1:54321: 'null', caused by java.lang.NullPointerException
at water.MRTask.getResult(MRTask.java:478)
at water.MRTask.getResult(MRTask.java:486)
at water.MRTask.doAll(MRTask.java:390)
at water.MRTask.doAll(MRTask.java:396)
at hex.StackedEnsembleModel.predictScoreImpl(StackedEnsembleModel.java:123)
at hex.StackedEnsembleModel.doScoreMetricsOneFrame(StackedEnsembleModel.java:194)
at hex.StackedEnsembleModel.doScoreOrCopyMetrics(StackedEnsembleModel.java:206)
at hex.ensemble.StackedEnsemble$StackedEnsembleDriver.computeMetaLearner(StackedEnsemble.java:302)
at hex.ensemble.StackedEnsemble$StackedEnsembleDriver.computeImpl(StackedEnsemble.java:231)
at hex.ModelBuilder$Driver.compute2(ModelBuilder.java:206)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1263)
at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Caused by: java.lang.NullPointerException
Error: DistributedException from localhost/127.0.0.1:54321: 'null', caused by java.lang.NullPointerException
The error does not occur when I stack any other models; it only appears with the GLM. Of course, I use the same folds for cross-validation.
Some sample code for training the model and the ensemble:
glm_grid <- h2o.grid(algorithm = "glm",
family = 'binomial',
grid_id = "glm_grid",
x = predictors,
y = response,
seed = 1,
fold_column = "fold_assignment",
training_frame = train_h2o,
keep_cross_validation_predictions = TRUE,
hyper_params = list(alpha = seq(0, 1, 0.05)),
lambda_search = TRUE,
search_criteria = search_criteria,
balance_classes = TRUE,
early_stopping = TRUE)
glm <- h2o.getGrid("glm_grid",
sort_by="auc",
decreasing=TRUE)
ensemble <- h2o.stackedEnsemble(x = predictors,
y = response,
training_frame = train_h2o,
model_id = "ens_1",
base_models = glm@model_ids[1:5])
This is a bug, and you can track the progress of the fix here (this should be fixed in the next release, but it might be fixed sooner and available on the nightly releases).
I was going to suggest training the GLMs in a loop or an apply function (instead of using h2o.grid()) as a temporary workaround, but unfortunately the same error happens.
In my project, I need to use NER annotation, so I used NERDemo.java.
It works fine when I create a new project and have only this code, but when I add it to my project I keep getting errors. I have edited the path in my code to the specific location of the classifiers.
I added the JAR files.
This is the code:
package stanfordposcode;

import java.util.List;

import edu.stanford.nlp.ie.NERClassifierCombiner;
import edu.stanford.nlp.io.IOUtils;
import edu.stanford.nlp.ling.CoreLabel;

public class MultipleNERs {

    public static void main(String[] args) throws Exception {
        String serializedClassifier = "/Users/ha/stanford-ner-2014-10-26/classifiers/english.all.3class.distsim.crf.ser.gz";
        String serializedClassifier2 = "/Users/ha/stanford-ner-2014-10-26/classifiers/english.muc.7class.distsim.crf.ser.gz";
        if (args.length > 0) {
            serializedClassifier = args[0];
        }
        NERClassifierCombiner classifier =
                new NERClassifierCombiner(false, false, serializedClassifier, serializedClassifier2);
        String fileContents = IOUtils.slurpFile("/Users/ha/NetBeansProjects/StanfordPOSCode/src/stanfordposcode/input.txt");
        List<List<CoreLabel>> out = classifier.classify(fileContents);
        int i = 0;
        for (List<CoreLabel> lcl : out) {
            i++;
            int j = 0;
            for (CoreLabel cl : lcl) {
                j++;
                System.out.printf("%d:%d: %s%n", i, j,
                        cl.toShorterString("Text", "CharacterOffsetBegin", "CharacterOffsetEnd", "NamedEntityTag"));
            }
        }
    }
}
But I got this error:
run:
Loading classifier from /Users/ha/stanford-ner-2014-10-26/classifiers/english.all.3class.distsim.crf.ser.gz ... Loading distsim lexicon from /u/nlp/data/pos_tags_are_useless/egw4-reut.512.clusters ... java.lang.RuntimeException: java.io.FileNotFoundException: /u/nlp/data/pos_tags_are_useless/egw4-reut.512.clusters (No such file or directory)
at edu.stanford.nlp.objectbank.ReaderIteratorFactory$ReaderIterator.setNextObject(ReaderIteratorFactory.java:225)
at edu.stanford.nlp.objectbank.ReaderIteratorFactory$ReaderIterator.<init>(ReaderIteratorFactory.java:161)
at edu.stanford.nlp.objectbank.ReaderIteratorFactory.iterator(ReaderIteratorFactory.java:98)
at edu.stanford.nlp.objectbank.ObjectBank$OBIterator.<init>(ObjectBank.java:404)
at edu.stanford.nlp.objectbank.ObjectBank.iterator(ObjectBank.java:242)
at edu.stanford.nlp.ie.NERFeatureFactory.initLexicon(NERFeatureFactory.java:471)
at edu.stanford.nlp.ie.NERFeatureFactory.init(NERFeatureFactory.java:379)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.reinit(AbstractSequenceClassifier.java:171)
at edu.stanford.nlp.ie.crf.CRFClassifier.loadClassifier(CRFClassifier.java:2630)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1620)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1736)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1679)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1662)
at edu.stanford.nlp.ie.crf.CRFClassifier.getClassifier(CRFClassifier.java:2851)
at edu.stanford.nlp.ie.ClassifierCombiner.loadClassifierFromPath(ClassifierCombiner.java:189)
at edu.stanford.nlp.ie.ClassifierCombiner.loadClassifiers(ClassifierCombiner.java:173)
at edu.stanford.nlp.ie.ClassifierCombiner.<init>(ClassifierCombiner.java:125)
at edu.stanford.nlp.ie.NERClassifierCombiner.<init>(NERClassifierCombiner.java:52)
at stanfordposcode.MultipleNERs.main(MultipleNERs.java:24)
Caused by: java.io.FileNotFoundException: /u/nlp/data/pos_tags_are_useless/egw4-reut.512.clusters (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:131)
at edu.stanford.nlp.io.EncodingFileReader.<init>(EncodingFileReader.java:78)
at edu.stanford.nlp.objectbank.ReaderIteratorFactory$ReaderIterator.setNextObject(ReaderIteratorFactory.java:192)
... 18 more
Loading classifier from /Users/ha/stanford-ner-2014-10-26/classifiers/english.all.3class.distsim.crf.ser.gz ... Exception in thread "main" java.io.FileNotFoundException
at edu.stanford.nlp.ie.ClassifierCombiner.loadClassifierFromPath(ClassifierCombiner.java:199)
at edu.stanford.nlp.ie.ClassifierCombiner.loadClassifiers(ClassifierCombiner.java:173)
at edu.stanford.nlp.ie.ClassifierCombiner.<init>(ClassifierCombiner.java:125)
at edu.stanford.nlp.ie.NERClassifierCombiner.<init>(NERClassifierCombiner.java:52)
at stanfordposcode.MultipleNERs.main(MultipleNERs.java:24)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to edu.stanford.nlp.classify.LinearClassifier
at edu.stanford.nlp.ie.ner.CMMClassifier.loadClassifier(CMMClassifier.java:1070)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1620)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1736)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1679)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1662)
at edu.stanford.nlp.ie.ner.CMMClassifier.getClassifier(CMMClassifier.java:1116)
at edu.stanford.nlp.ie.ClassifierCombiner.loadClassifierFromPath(ClassifierCombiner.java:195)
... 4 more
Java Result: 1
BUILD SUCCESSFUL (total time: 1 second)
You are mixing and matching the code from version 3.4 and the models from version 3.5. I suggest upgrading everything to the latest version.
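As a generic Java check (not part of the original answer), printing where the NER classes are loaded from helps confirm that the code on the classpath matches the downloaded model release:

// Prints the JAR that CRFClassifier was actually loaded from, so the
// stanford-ner/corenlp version on the classpath can be compared with the
// version of the serialized classifiers being loaded.
System.out.println(edu.stanford.nlp.ie.crf.CRFClassifier.class
        .getProtectionDomain().getCodeSource().getLocation());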
I am trying to update an index using an update script; if the indexed object's value contains double quotes, I get an exception.
Using the following code:
Employee employee = new Employee();
employee.setId(16661L);
employee.setEmployeeId(11026L);
// The name contains a double quote, which later breaks the script.
employee.setEmployeeName("Ashok\"s Kumar");
employee.setEmailId("ashokkumar#yahoo.com");

final StringBuilder updateScript = new StringBuilder("ctx._source.employees.add("
        + employee + ");");
final UpdateRequestBuilder request = CLIENT.prepareUpdate(indexName, String.valueOf("88"), "14344");
final UpdateResponse response = request.setScript(updateScript.toString()).execute().actionGet();
While executing this, I get the following exception:
Exception in thread "main" org.elasticsearch.ElasticSearchIllegalArgumentException: failed to execute script
at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:124)
at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:60)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:187)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:183)
at org.elasticsearch.action.update.TransportUpdateAction.shardOperation(TransportUpdateAction.java:63)
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$1.run(TransportInstanceSingleOperationAction.java:191)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: [Error: unterminated string literal]
[Near : {... umar","emailId":"ashokkumar#yahoo.com"}); ....}]
^
[Line: 1, Column: 250]
at org.elasticsearch.common.mvel2.util.ParseTools.balancedCapture(ParseTools.java:1395)
Does anyone have a solution for this?
Please try escaping the double quotes in the value before building the script, by replacing:
employeeName = employeeName.replace("\"", "\\\"");
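Applied to the values from the question, a minimal sketch of that workaround (the surrounding script construction is assumed, since the original builds it from Employee.toString()):

// Escape embedded double quotes before the value is concatenated into the
// MVEL update script, so the generated string literal stays balanced.
String rawName = "Ashok\"s Kumar";
String safeName = rawName.replace("\"", "\\\"");   // becomes Ashok\"s Kumar
employee.setEmployeeName(safeName);
// ... then build updateScript from `employee` exactly as in the question ...

If the Elasticsearch version in use supports script parameters, passing the value as a parameter instead of concatenating it into the script avoids the need for manual escaping altogether.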