In previous versions of Java I was able to use a fragment with a host of system.bundle in order to provide classes to the boot classloader. In my particular case this was to support using JacORB in Eclipse. This all worked fine prior to Java 7u55.
I created an OSGi fragment that contains all the JARs for JacORB. The manifest looks like this:
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: org.jacorb.systemFragment
Bundle-SymbolicName: org.jacorb.systemFragment
Bundle-Version: 3.3.0.20140422-1108
Bundle-ClassPath: jars/slf4j-jdk14-1.6.4.jar,
jars/slf4j-api-1.6.4.jar,
jars/jacorb-3.3.jar
Fragment-Host: system.bundle; extension:=framework
Export-Package: org.jacorb.config;version="3.3.0", ....
I also specify the following VM args:
-Dorg.omg.CORBA.ORBClass=org.jacorb.orb.ORB
-Dorg.omg.CORBA.ORBSingletonClass=org.jacorb.orb.ORBSingleton
-Dorg.omg.PortableInterceptor.ORBInitializerClass.standard_init=org.jacorb.orb.standardInterceptors.IORInterceptorInitializer
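For reference, a minimal sketch of how these properties come into play: the no-arg ORB.init() consults org.omg.CORBA.ORBSingletonClass, while the 2-arg ORB.init() consults org.omg.CORBA.ORBClass (class names as in the VM args above):
```java
import org.omg.CORBA.ORB;

public class OrbProps {
    public static void main(String[] args) {
        // The no-arg init() resolves org.omg.CORBA.ORBSingletonClass;
        // the 2-arg init() resolves org.omg.CORBA.ORBClass.
        ORB singleton = ORB.init();          // TypeCode factory singleton
        ORB orb = ORB.init(args, null);      // full per-application ORB
        System.out.println(singleton.getClass() + " / " + orb.getClass());
    }
}
```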
When I run my Eclipse application on Java 7u51, I am able to call ORB.init() successfully.
When I run the same application on Java 7u55, I get the following:
Caused by: java.lang.ClassNotFoundException: org.jacorb.orb.ORBSingleton
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.omg.CORBA.ORB.create_impl_with_systemclassloader(ORB.java:306)
If I add the following VM arg, it works:
-Djava.endorsed.dirs=${jacorb/lib}
I confirmed that this affects Java 7u55, Java 6u30, and Java 8u5.
I didn't need to do this before. Any ideas why?
--- EDIT 04/30 ---
I did some more digging and found a commit to ORB.java that is causing the issue.
changeset: 817:a8d27c3fc4e4
tag: jdk7u55-b05
user: msheppar
date: Tue Jan 21 12:46:58 2014 +0000
summary: 8025005: Enhance CORBA initializations
This commit changed the way the ORB singleton class is created. Instead of using the thread context class loader, it is now hard-coded to use the system class loader.
- singleton = create_impl(className);
+ singleton = create_impl_with_systemclassloader(className);
}
}
return singleton;
}
+ private static ORB create_impl_with_systemclassloader(String className) {
+
+ try {
+ ReflectUtil.checkPackageAccess(className);
+ ClassLoader cl = ClassLoader.getSystemClassLoader();
+ Class<org.omg.CORBA.ORB> orbBaseClass = org.omg.CORBA.ORB.class;
+ Class<?> singletonOrbClass = Class.forName(className, true, cl).asSubclass(orbBaseClass);
+ return (ORB)singletonOrbClass.newInstance();
+ } catch (Throwable ex) {
+ SystemException systemException = new INITIALIZE(
+ "can't instantiate default ORB implementation " + className);
+ systemException.initCause(ex);
+ throw systemException;
+ }
+ }
I've tried to log a ticket with Oracle about this problem. In the meantime, is there a way to override the ORB.java that comes with the JVM via some sort of fragment?
I have the same problem (and I have seen many others having it too), but with a CORBA-based Web Start application.
The problem with this change is that the system class loader, which is forced on applications by the change in u55, doesn't know how to load the ORB and ORBSingleton classes specified through the mentioned properties, as they are part of the application classpath - in my case loaded by the JNLPClassLoader.
I guess there is a similar constellation in your case.
One way is to replace the JDK's version of org.omg.CORBA, which you've done already by specifying -Djava.endorsed.dirs=${jacorb/lib/}. This replaces the JDK's org.omg.CORBA package with the one provided by JacORB, which uses the thread context class loader instead (the same as the pre-u55 code did).
Another option is to use e.g. -Xbootclasspath/p:${jacorb/lib/jar-containing-omg-api.jar}, or to copy the JARs containing JacORB's version of org.omg.CORBA to <jre-home>/lib/endorsed.
Unfortunately this didn't help with my Web Start application problem.
Do you need the system-wide/singleton ORB to be the JacORB ORB? If not, then the simplest solution here might be to just drop -Dorg.omg.CORBA.ORBSingletonClass from the command line. Remember that the singleton ORB is just the TypeCode factory; your call to the 2-arg ORB.init will still give a JacORB ORB because you have org.omg.CORBA.ORBClass set to select it.
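A minimal sketch of that approach (assuming JacORB is on the classpath; the property is set programmatically here instead of via -D):
```java
import java.util.Properties;
import org.omg.CORBA.ORB;

public class JacOrbInit {
    public static void main(String[] args) {
        // Select JacORB for the per-application ORB only; the JDK default
        // singleton (just a TypeCode factory) stays in place, so the u55
        // system-class-loader lookup never has to find a JacORB class.
        Properties props = new Properties();
        props.setProperty("org.omg.CORBA.ORBClass", "org.jacorb.orb.ORB");
        ORB orb = ORB.init(args, props);
        System.out.println(orb.getClass().getName()); // org.jacorb.orb.ORB
        orb.shutdown(true);
    }
}
```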
The (recently updated, as this information wasn't there before) release notes mentioned by user3054250 (thank you for that) point to another possible workaround.
Specifying only the ORB property but omitting ORBSingleton works (in a short test) in my CORBA/Web Start application together with JacORB 3.4.
It doesn't work with OpenORB (as OpenORB checks for the "right" instance of ORBSingleton), so I have to upgrade my application to JacORB, but it is a solution.
Related
I am facing issues while using ZetaSQL in the Apache Beam framework (2.17.0-SNAPSHOT). After going through the Apache Beam documentation, I am not able to find any sample for ZetaSQL.
I tried to set the planner:
options.setPlannerName("org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLQueryPlanner");
But I am still facing the issue; a snippet is added below for reference.
```
String sql =
"SELECT CAST (1243 as INT64), "
+ "CAST ('2018-09-15 12:59:59.000000+00' as TIMESTAMP), "
+ "CAST ('string' as STRING);";
ZetaSQLQueryPlanner zetaSQLQueryPlanner = new ZetaSQLQueryPlanner();
BeamRelNode beamRelNode = zetaSQLQueryPlanner.convertToBeamRel(sql);
PCollection<Row> stream = BeamSqlRelUtils.toPCollection(p, beamRelNode);
p.run();
```
I understand we need the snippet below, but I failed to create the config:
```
Frameworks.newConfigBuilder()
```
While running the code I found the exception below:
Exception in thread "main" java.util.ServiceConfigurationError: com.google.zetasql.ClientChannelProvider: Provider com.google.zetasql.JniChannelProvider could not be instantiated
at java.util.ServiceLoader.fail(Unknown Source)
at java.util.ServiceLoader.access$100(Unknown Source)
at java.util.ServiceLoader$LazyIterator.nextService(Unknown Source)
Update: as of 06/23/2020, Beam ZetaSQL is supported on macOS as well (not all versions, but at least the most recent ones)!
====
I think it is related to your OS. Beam is a unified framework, but your exception looks like it comes from one of its dependencies: the ZetaSQL parser. If you change to a newer version of Linux, I think your code snippet should work.
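For reference, the planner is normally selected through BeamSqlPipelineOptions rather than by building a Frameworks config by hand; a minimal sketch (assuming Beam's SQL ZetaSQL extension is on the classpath):
```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.extensions.sql.impl.BeamSqlPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class ZetaSqlPlannerExample {
    public static void main(String[] args) {
        BeamSqlPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(BeamSqlPipelineOptions.class);
        // Switch the SQL planner from the default Calcite planner to ZetaSQL.
        options.setPlannerName(
            "org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLQueryPlanner");
        Pipeline p = Pipeline.create(options);
        // ... build the rest of the pipeline, then p.run();
    }
}
```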
I'm trying to use MultiClassFLDA from the discriminant analysis package, but I always get an error when running the code and defining a new instance of the MultiClassFLDA class:
Exception in thread "main" java.lang.NoClassDefFoundError: no/uib/cipr/matrix/Vector
at assignment2.face.tryLDA(face.java:141)
at assignment2.Assignment2.main(Assignment2.java:106)
Caused by: java.lang.ClassNotFoundException: no.uib.cipr.matrix.Vector
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
It seems to be linked to some dynamic class loading in newer versions of Weka, presumably within the Weka Package Manager: that class is defined in mtj.jar, which is bundled inside weka.jar. In that other question, an answer suggested extracting mtj.jar and adding it to your classpath.
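A quick way to verify whether the MTJ classes are visible before touching Weka at all (a small sketch):
```java
public class CheckMtj {
    public static void main(String[] args) {
        try {
            // Succeeds only if MTJ is reachable from the application classpath.
            Class.forName("no.uib.cipr.matrix.Vector");
            System.out.println("MTJ is on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("MTJ is missing: extract mtj.jar from weka.jar "
                + "and add it to the classpath");
        }
    }
}
```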
As I didn't have this problem with other Filters, my guess is that MultiClassFLDA is not implemented correctly. What I found out is that if you run another Filter first, that specific class gets loaded:
// Run a dummy Filter first for correct initialization
// (Filter, Standardize, Instances come from weka.filters and weka.core)
Filter f = new Standardize();
f.setInputFormat(data);
// "params" is the ArrayList<Attribute> describing the dataset's attributes
Filter.useFilter(new Instances("", params, 0), f); // dummy run on an empty dataset
// Now run the MultiClassFLDA
f = new MultiClassFLDA();
f.setInputFormat(data);
data = Filter.useFilter(data, f);
N.B. that is a really ugly hack! I used it to be able to keep working. I'll edit my answer when I find the appropriate way to do it (other than extracting the jar from Weka itself).
I had Hudson running successfully on Windows Server and needed to restart the Hudson service. After the restart I am getting the error below. Any ideas, or has anybody experienced this issue?
org.jvnet.hudson.reactor.ReactorException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException
at org.jvnet.hudson.reactor.Reactor.execute(Reactor.java:246)
at hudson.model.Hudson.executeReactor(Hudson.java:719)
at hudson.model.Hudson.<init>(Hudson.java:616)
at org.eclipse.hudson.init.InitialRunnable.run(InitialRunnable.java:51)
at java.lang.Thread.run(Thread.java:619)
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2263)
at com.google.common.cache.LocalCache.get(LocalCache.java:4000)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4004)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
at hudson.model.TopLevelItemsCache.get(TopLevelItemsCache.java:78)
at hudson.model.LazyTopLevelItem.item(LazyTopLevelItem.java:144)
at hudson.model.LazyTopLevelItem.hasPermission(LazyTopLevelItem.java:271)
at hudson.model.Hudson.getItems(Hudson.java:1303)
at hudson.model.Hudson.getItems(Hudson.java:223)
at hudson.model.Hudson.getAllItems(Hudson.java:1367)
at hudson.model.DependencyGraph.<init>(DependencyGraph.java:78)
at hudson.model.Hudson.rebuildDependencyGraph(Hudson.java:3626)
at hudson.model.Hudson$12.run(Hudson.java:2415)
at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:146)
at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:259)
at hudson.model.Hudson$4.runTask(Hudson.java:699)
at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:187)
at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:94)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
... 1 more
Caused by: java.lang.NullPointerException
at hudson.model.RunMap.recalcLastStable(RunMap.java:469)
at hudson.model.RunMap.recalcMarkers(RunMap.java:209)
at hudson.model.RunMap.setBuilds(RunMap.java:199)
at hudson.model.RunMap.putAllRunValues(RunMap.java:225)
at hudson.model.RunMap.reset(RunMap.java:292)
at hudson.model.RunMap.load(RunMap.java:640)
at hudson.model.AbstractProject.onLoad(AbstractProject.java:329)
at hudson.model.BaseBuildableProject.onLoad(BaseBuildableProject.java:91)
at hudson.model.TopLevelItemsCache$1.load(TopLevelItemsCache.java:64)
at hudson.model.TopLevelItemsCache$1.load(TopLevelItemsCache.java:57)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2257)
... 20 more
I would greatly appreciate any help!
I had a similar problem. There was a power outage and some files got corrupted. It was possible to start Hudson without any jobs (I moved all jobs to a different directory).
So I went through the most recently modified jobs in $HUDSON_HOME/jobs and deleted:
the empty nextBuildNumber file
the empty builds/_runmap.xml file
whole builds with empty build.xml and/or changelog.xml files (the builds/yyyy-MM-dd_HH-mm-ss directory and the builds/xxx link)
Here's a quick hack we used when we ran into the same situation:
#!/usr/bin/env python
"""Remove references to missing builds from all _runmap.xml files"""
from xml.dom.minidom import parse
import os
import glob

for buildRoot in glob.glob("/var/lib/hudson/jobs/*/builds/"):
    runmapFilename = buildRoot + "/_runmap.xml"
    if not os.path.exists(runmapFilename):
        continue
    dom = parse(runmapFilename)
    builds = dom.getElementsByTagName("builds")[0]
    changed = False
    for build in builds.getElementsByTagName("build"):
        buildDir = build.getElementsByTagName("buildDir")[0].childNodes[0].data
        # drop the <build> entry if its build.xml no longer exists on disk
        if not os.path.exists(buildRoot + "/" + buildDir + "/build.xml"):
            changed = True
            print "missing", buildRoot, buildDir
            builds.removeChild(build)
    if changed:
        # keep a backup of the original runmap, then write the cleaned one
        os.rename(runmapFilename, runmapFilename + ".org")
        f = open(runmapFilename, "w")
        f.write(dom.toxml())
        f.close()
Unfortunately it seems that the history is lost for the jobs that are corrupted.
In order to get Hudson back online I did the following:
removed the builds folder entirely
removed the lastS* links
set nextBuildNumber to 1
I hope this helps.
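For reference, the same cleanup scripted in Java (a rough sketch; the job path is hypothetical, so adjust it to your installation and back everything up first):
```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.stream.Stream;

public class ResetHudsonJob {
    public static void main(String[] args) throws IOException {
        Path job = Paths.get("/var/lib/hudson/jobs/my-job"); // hypothetical path
        Path builds = job.resolve("builds");
        if (Files.exists(builds)) {
            try (Stream<Path> walk = Files.walk(builds)) {
                // delete children before their parent directories
                walk.sorted(Comparator.reverseOrder())
                    .forEach(p -> p.toFile().delete());
            }
        }
        // remove the lastStable / lastSuccessful links
        Files.deleteIfExists(job.resolve("lastStable"));
        Files.deleteIfExists(job.resolve("lastSuccessful"));
        // restart build numbering from 1
        Files.write(job.resolve("nextBuildNumber"), "1\n".getBytes());
    }
}
```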
I have a problem while transforming XSLT to PDF in Java; I am following the same process posted in this link:
"Java Transformation to PDF".
Error:
Caused by: java.lang.NoSuchMethodError: org.apache.xmlgraphics.java2d.GraphicContext.<init>(Lorg/apache/xmlgraphics/java2d/GraphicContext;)V
at org.apache.fop.render.intermediate.IFGraphicContext.<init>(IFGraphicContext.java:50)
at org.apache.fop.render.intermediate.IFGraphicContext.clone(IFGraphicContext.java:56)
at org.apache.fop.render.intermediate.IFRenderer.saveGraphicsState(IFRenderer.java:632)
at org.apache.fop.render.intermediate.IFRenderer.startViewport(IFRenderer.java:885)
at org.apache.fop.render.intermediate.IFRenderer.startVParea(IFRenderer.java:878)
at org.apache.fop.render.AbstractRenderer.renderRegionViewport(AbstractRenderer.java:289)
at org.apache.fop.render.intermediate.IFRenderer.renderRegionViewport(IFRenderer.java:731)
at org.apache.fop.render.AbstractRenderer.renderPageAreas(AbstractRenderer.java:249)
at org.apache.fop.render.AbstractRenderer.renderPage(AbstractRenderer.java:230)
at org.apache.fop.render.intermediate.IFRenderer.renderPage(IFRenderer.java:580)
at org.apache.fop.area.RenderPagesModel.addPage(RenderPagesModel.java:114)
at org.apache.fop.layoutmgr.AbstractPageSequenceLayoutManager.finishPage(AbstractPageSequenceLayoutManager.java:312)
at org.apache.fop.layoutmgr.PageSequenceLayoutManager.finishPage(PageSequenceLayoutManager.java:167)
at org.apache.fop.layoutmgr.PageSequenceLayoutManager.activateLayout(PageSequenceLayoutManager.java:109)
at org.apache.fop.area.AreaTreeHandler.endPageSequence(AreaTreeHandler.java:238)
at org.apache.fop.fo.pagination.PageSequence.endOfNode(PageSequence.java:120)
at org.apache.fop.fo.FOTreeBuilder$MainFOHandler.endElement(FOTreeBuilder.java:349)
at org.apache.fop.fo.FOTreeBuilder.endElement(FOTreeBuilder.java:177)
at net.sf.saxon.event.ContentHandlerProxy.endElement(ContentHandlerProxy.java:391)
at net.sf.saxon.event.ProxyReceiver.endElement(ProxyReceiver.java:174)
at net.sf.saxon.event.NamespaceReducer.endElement(NamespaceReducer.java:213)
at net.sf.saxon.event.ComplexContentOutputter.endElement(ComplexContentOutputter.java:417)
at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:301)
at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:409)
at net.sf.saxon.instruct.Instruction.process(Instruction.java:94)
at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:298)
at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:175)
at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:343)
at net.sf.saxon.Controller.transformDocument(Controller.java:1736)
at net.sf.saxon.Controller.transform(Controller.java:1560)
at mypackage.v2.business.pdf.XMLtoPDF.convertXMLPDF(XMLtoPDF.java:103)
... 51 more
java.lang.ClassCastException: java.lang.NoSuchMethodError cannot be cast to java.lang.Exception
Please let me know what could be the problem.
As described in the documentation:
Thrown if an application tries to call a specified method of a class (either static or instance), and that class no longer has a definition of that method.
Here's how this can happen. Imagine you built some code that used a library with a method called foo(). You compiled your project and then later decided to upgrade the library. You do this by overwriting the old jar file with the new one. You don't recompile your code.
But what you didn't know is that the new library removed the foo() method. Now if you run your code, it will throw this exception because your compiled code calls a method that is no longer there.
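A tiny illustration (hypothetical classes, two files compiled at different times):
```java
// Library.java, version 1 -- what App was compiled against
public class Library {
    public static void foo() {}
}

// App.java -- compiled once, never recompiled
public class App {
    public static void main(String[] args) {
        Library.foo(); // resolved by javac against v1
    }
}

// If Library v2 (with foo() removed) is later dropped onto the classpath,
// running App throws: java.lang.NoSuchMethodError: Library.foo()V
```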
In your specific case, the problem isn't necessarily in your own code. It could be that one library you use depends on another library and the problem is there (for example, Library X uses version 2 of Library Y, but you only have Library Y version 1 on your classpath). If this is happening, you need to make sure the libraries get the versions they expect.
For your specific problem, you need to find the version of the jar that contains org.apache.xmlgraphics.java2d.GraphicContext with a constructor that takes another GraphicContext. The descriptor (Lorg/apache/xmlgraphics/java2d/GraphicContext;)V means exactly that: a single GraphicContext parameter and a void return, i.e. a copy constructor.
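One way to find out which jar actually supplies the class at runtime is a small diagnostic like this (a sketch; the code source can be null for classes bundled with the JRE):
```java
import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) throws Exception {
        Class<?> gc = Class.forName("org.apache.xmlgraphics.java2d.GraphicContext");
        // prints the jar URL the class was loaded from
        CodeSource src = gc.getProtectionDomain().getCodeSource();
        System.out.println(src == null ? "loaded from the JRE/bootstrap" : src.getLocation());
    }
}
```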
I have some problems creating a new model for the Stanford Parser.
I have downloaded the latest version from Stanford:
http://nlp.stanford.edu/software/lex-parser.shtml
And also the GENIA corpus in two formats, XML and PTB (Penn Treebank).
The Stanford Parser can train with PTB files, so I downloaded the GENIA corpus because I want to work with biomedical text:
http://categorizer.tmit.bme.hu/~illes/genia_ptb/ (link no longer available) (genia_ptb.tar.gz)
Then I have a short main class to get the dependency representation of one biomedical sentence:
// op is an edu.stanford.nlp.parser.lexparser.Options instance
Options op = new Options();
String treebankPath = "/stanford-parser-2012-05-22/genia_ptb/GENIA_treebank_v1/ptb";
Treebank tr = op.tlpParams.diskTreebank();
tr.loadPath(treebankPath);
LexicalizedParser lpc = LexicalizedParser.trainFromTreebank(tr, op);
I have tried different ways, but I always get the same result: an error on the last line. This is my output:
Currently Fri Jun 01 15:02:57 CEST 2012
Options parameters:
useUnknownWordSignatures 2
smoothInUnknownsThreshold 100
smartMutation false
useUnicodeType false
unknownSuffixSize 1
unknownPrefixSize 1
flexiTag true
useSignatureForKnownSmoothing false
parserParams edu.stanford.nlp.parser.lexparser.EnglishTreebankParserParams
forceCNF false
doPCFG true
doDep false
freeDependencies false
directional true
genStop true
distance true
coarseDistance false
dcTags false
nPrune false
Train parameters: smooth=false PA=true GPA=false selSplit=true (400.0; deleting [VP^SQ, VP^VP, VP^SINV, VP^NP]) mUnary=1 mUnaryTags=false sPPT=false tagPA=true tagSelSplit=false (0.0) rightRec=true leftRec=false collinsPunc=false markov=true mOrd=2 hSelSplit=true (10) compactGrammar=3 postPA=false postGPA=false selPSplit=false (0.0) tagSelPSplit=false (0.0) postSplitWithBase=false fractionBeforeUnseenCounting=0.5 openClassTypesThreshold=50 preTransformer=null taggedFiles=null
Using EnglishTreebankParserParams splitIN=4 sPercent=true sNNP=0 sQuotes=false sSFP=false rbGPA=false j#=false jJJ=false jNounTags=false sPPJJ=false sTRJJ=false sJJCOMP=false sMoreLess=false unaryDT=true unaryRB=true unaryPRP=false reflPRP=false unaryIN=false sCC=1 sNT=false sRB=false sAux=2 vpSubCat=false mDTV=2 sVP=3 sVPNPAgr=false sSTag=0 mVP=false sNP%=0 sNPPRP=false dominatesV=1 dominatesI=false dominatesC=false mCC=0 sSGapped=4 numNP=false sPoss=1 baseNP=1 sNPNNP=0 sTMP=1 sNPADV=1 cTags=true rightPhrasal=false gpaRootVP=false splitSbar=0 mPPTOiIN=0
Binarizing trees...done. Time elapsed: 141 ms
Extracting PCFG...done. Time elapsed: 56 ms
Compiling grammar...done Time elapsed: 1 ms
Extracting Lexicon...Exception in thread "main" edu.stanford.nlp.util.ReflectionLoading$ReflectionLoadingException: edu.stanford.nlp.util.MetaClass$ClassCreationException: java.lang.ClassNotFoundException: edu.stanford.nlp.parser.lexparser.EnglishUnknownWordModelTrainer
at edu.stanford.nlp.util.ReflectionLoading.loadByReflection(ReflectionLoading.java:39)
at edu.stanford.nlp.parser.lexparser.BaseLexicon.initializeTraining(BaseLexicon.java:335)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromTreebank(LexicalizedParser.java:800)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.trainFromTreebank(LexicalizedParser.java:226)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.trainFromTreebank(LexicalizedParser.java:237)
at ABravoDemo.main(ABravoDemo.java:35)
Caused by: edu.stanford.nlp.util.MetaClass$ClassCreationException: java.lang.ClassNotFoundException: edu.stanford.nlp.parser.lexparser.EnglishUnknownWordModelTrainer
at edu.stanford.nlp.util.MetaClass.createFactory(MetaClass.java:353)
at edu.stanford.nlp.util.MetaClass.createInstance(MetaClass.java:370)
at edu.stanford.nlp.util.ReflectionLoading.loadByReflection(ReflectionLoading.java:37)
... 5 more
Caused by: java.lang.ClassNotFoundException: edu.stanford.nlp.parser.lexparser.EnglishUnknownWordModelTrainer
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:303)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:316)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
at edu.stanford.nlp.util.MetaClass$ClassFactory.construct(MetaClass.java:119)
at edu.stanford.nlp.util.MetaClass$ClassFactory.<init>(MetaClass.java:192)
at edu.stanford.nlp.util.MetaClass$ClassFactory.<init>(MetaClass.java:53)
at edu.stanford.nlp.util.MetaClass.createFactory(MetaClass.java:349)
... 7 more
How could I create a new model with this corpus?
As andrucz stated in his comment, the real cause of your problem seems to stem from a missing class.
Try checking whether you correctly imported your library, and make sure that it contains the class EnglishUnknownWordModelTrainer in edu.stanford.nlp.parser.lexparser.
(If you're using Maven, verify that you correctly added the dependency - a quick Google search brought this up: Stanford Parser Maven Repo.)
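If in doubt, you can list the matching entries in the parser jar directly (a sketch; the jar path is an assumption, adjust it to your download):
```java
import java.util.jar.JarFile;

public class CheckStanfordJar {
    public static void main(String[] args) throws Exception {
        try (JarFile jar = new JarFile("stanford-parser.jar")) { // adjust path
            jar.stream()
               .map(e -> e.getName())
               .filter(n -> n.contains("UnknownWordModelTrainer"))
               .forEach(System.out::println); // no output = class is missing
        }
    }
}
```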
Did the NLP library install correctly?
Check the logs to verify there are no errors. Most of the time this issue arises when the Stanford NLP library does not install correctly.
A quick way to check is by running the GUI to try out the parser: if that runs successfully, the library installed correctly; otherwise, if it throws errors, you know the installation went wrong.
The Stanford website also mentions this; take a look:
If you're new to parsing, you can start by running the GUI to try out the parser. Scripts are included for linux (lexparser-gui.sh) and Windows (lexparser-gui.bat).
Take a look at the Javadoc lexparser package documentation and LexicalizedParser class documentation. (Point your web browser at the index.html file in the included javadoc directory and navigate to those items.)
Look at the parser FAQ for answers to common questions.
If none of that helps, please see our email guidelines for instructions on how to reach us for further assistance.
Check whether you have correctly imported the library, make sure that it contains the class EnglishUnknownWordModelTrainer, and also make sure that the version you downloaded works properly with the GENIA corpus.