BaseX-Exception (Interrupted) while loading large XML File - java

I am trying to query a large XML file like this:
ClientSession session = DatabaseConnection.getConnection();
byte[] result = session.execute(new XQuery("doc('path/dataset.xml')")).getBytes();
I get the following exception:
Exception in thread "main" org.basex.core.BaseXException: Interrupted.
at org.basex.api.client.ClientSession.receive(ClientSession.java:191)
at org.basex.api.client.ClientSession.execute(ClientSession.java:160)
at org.basex.api.client.ClientSession.execute(ClientSession.java:165)
at org.basex.api.client.Session.execute(Session.java:36)
at testing.Main.main(Main.java:124)
I tried to increase the Java heap space as well as the -Xmx value in the
basexserver script, but it did not help.
What else could be causing this exception?
Files with the same structure can be loaded; it seems the dataset is just too big.
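For what it's worth, one possibility worth checking (an assumption based on general BaseX behaviour, not something confirmed in this thread) is the server-side TIMEOUT option: queries that exceed it are aborted and reported to the client as "Interrupted.". Below is a minimal sketch that disables the timeout and parses the file into a database once, instead of re-reading it with doc() at query time; the host, port, credentials, and database name are placeholders:

import org.basex.api.client.ClientSession;
import org.basex.core.cmd.CreateDB;
import org.basex.core.cmd.XQuery;

public class LargeXmlQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for a local BaseX server.
        try (ClientSession session =
                new ClientSession("localhost", 1984, "admin", "admin")) {
            // Disable the server-side query timeout (0 = unlimited);
            // queries killed by the timeout surface as "Interrupted.".
            session.execute("SET TIMEOUT 0");
            // Parse the large file once into a database instead of
            // re-reading it with doc() on every query.
            session.execute(new CreateDB("dataset", "path/dataset.xml"));
            String result = session.execute(new XQuery("count(db:open('dataset')//*)"));
            System.out.println(result);
        }
    }
}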

Related

How to load a saved model created by the Weka GUI into my Java application and browse the predicted results

My model was built using the "FilteredClassifier" algorithm, with SMO
("weka.classifiers.functions.SMO") as its "classifier" parameter.
I tried to load the model in Java using this code, but it does not work:
SupportVector SOM = (SupportVector) SerializationHelper.read(new FileInputStream("C:\\Users\\HP\\Desktop\\SOM.model"));
and this code
FilteredClassifier SOM = (FilteredClassifier) SerializationHelper.read(new FileInputStream("C:\\Users\\HP\\Desktop\\SOM.model"));
Neither works.
I also want to browse the data that was used to build this model (actual value and predicted value).
How can I do that? Once I have created the model, do I need to load the dataset again?
This is the error
Exception in thread "main" java.lang.ClassCastException: weka.classifiers.meta.FilteredClassifier cannot be cast to weka.core.pmml.jaxbbindings.SupportVector
at weka.api.Model.main(Model.java:28)
weka.classifiers.meta.FilteredClassifier cannot be cast to weka.core.pmml.jaxbbindings.SupportVector
pmml and jaxb are XML-related classes; you appear to have imported the wrong package. The serialized object is a weka.classifiers.meta.FilteredClassifier, so cast it to that class instead.
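A minimal sketch of loading the model with the matching class and browsing actual vs. predicted values; the model path is the asker's, while the dataset path, class name, and variable names are hypothetical. Note that the serialized model does not carry its training data, so the dataset has to be loaded again:

import weka.classifiers.functions.SMO;
import weka.classifiers.meta.FilteredClassifier;
import weka.core.Instances;
import weka.core.SerializationHelper;
import weka.core.converters.ConverterUtils.DataSource;

public class LoadModel {
    public static void main(String[] args) throws Exception {
        // Cast to the class that was actually serialized.
        FilteredClassifier fc = (FilteredClassifier)
            SerializationHelper.read("C:\\Users\\HP\\Desktop\\SOM.model");
        // The SMO sits inside the FilteredClassifier (assuming that is
        // the wrapped classifier, as the question states).
        SMO smo = (SMO) fc.getClassifier();

        // Reload the dataset (hypothetical path); the model itself does
        // not store the instances it was trained on.
        Instances data = DataSource.read("C:\\Users\\HP\\Desktop\\dataset.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Print actual vs. predicted class values.
        for (int i = 0; i < data.numInstances(); i++) {
            double actual = data.instance(i).classValue();
            double predicted = fc.classifyInstance(data.instance(i));
            System.out.println(actual + " -> " + predicted);
        }
    }
}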

How to fix Google DataFlow Pipeline (args) null pointer exception?

I'm trying to run a really simple Dataflow job: take some data from BigQuery, process it a bit, and put it in a new BigQuery table.
Pipeline p = Pipeline.create(
        PipelineOptionsFactory.fromArgs(args).withValidation().create());
p.apply(BigQueryIO.Read.fromQuery("SELECT * FROM realtime.status_6_output_11"));
p.run();
However, whenever I run it I get the following rather unhelpful NullPointerException:
Exception in thread "main" java.lang.NullPointerException
at java.util.regex.Matcher.getTextLength(Matcher.java:1283)
at java.util.regex.Matcher.reset(Matcher.java:309)
at java.util.regex.Matcher.<init>(Matcher.java:229)
at java.util.regex.Pattern.matcher(Pattern.java:1093)
at com.google.cloud.dataflow.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:174)
at com.google.cloud.dataflow.sdk.io.BigQueryIO$Read$Bound.apply(BigQueryIO.java:553)
at com.google.cloud.dataflow.sdk.io.BigQueryIO$Read$Bound.apply(BigQueryIO.java:387)
at com.google.cloud.dataflow.sdk.runners.PipelineRunner.apply(PipelineRunner.java:74)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.apply(DirectPipelineRunner.java:247)
at com.google.cloud.dataflow.sdk.Pipeline.applyInternal(Pipeline.java:367)
at com.google.cloud.dataflow.sdk.Pipeline.applyTransform(Pipeline.java:274)
at com.google.cloud.dataflow.sdk.values.PBegin.apply(PBegin.java:47)
at com.google.cloud.dataflow.sdk.Pipeline.apply(Pipeline.java:156)
at com.noraway.conductor.NormalizedPipeline.main(NormalizedPipeline.java:42)
I think there's a problem with my command-line arguments (I don't pass any right now), but I'm not sure what they should be.
It looks like the --tempLocation option that BigQuery needs is missing. The obscure error message is fixed as part of https://github.com/GoogleCloudPlatform/DataflowJavaSDK/issues/313.
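A minimal sketch of supplying that option, assuming the 1.x Dataflow SDK where tempLocation is defined on DataflowPipelineOptions; the bucket name, path, and class name are hypothetical:

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.BigQueryIO;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;

public class StatusPipeline {
    public static void main(String[] args) {
        // Either pass --tempLocation=gs://my-bucket/tmp on the command
        // line, or set it programmatically (bucket name is hypothetical).
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
            .withValidation().as(DataflowPipelineOptions.class);
        options.setTempLocation("gs://my-bucket/tmp");

        Pipeline p = Pipeline.create(options);
        p.apply(BigQueryIO.Read.fromQuery("SELECT * FROM realtime.status_6_output_11"));
        p.run();
    }
}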

java.lang.ClassCastException for TDBFactory

I built a TDB index with Jena for the data I have.
In order to refer to the data while querying, I tried using TDBFactory and Model (both statements are given below). I get the same exception for both, so it seems to be independent of which statement I write.
Dataset dataset = TDBFactory.createDataset(directory);
and
Model model = FileManager.get().loadModel(directory);
The runtime exception is:
Exception in thread "main" java.lang.ExceptionInInitializerError
at com.hp.hpl.jena.rdf.model.impl.RDFReaderFImpl.reset(RDFReaderFImpl.java:81)
at com.hp.hpl.jena.rdf.model.impl.RDFReaderFImpl.<clinit>(RDFReaderFImpl.java:74)
at com.hp.hpl.jena.rdf.model.impl.ModelCom.<clinit>(ModelCom.java:54)
at com.hp.hpl.jena.rdf.model.ModelFactory.createDefaultModel(ModelFactory.java:114)
at com.hp.hpl.jena.vocabulary.OWL.<clinit>(OWL.java:36)
at com.hp.hpl.jena.sparql.graph.NodeConst.<clinit>(NodeConst.java:29)
at com.hp.hpl.jena.sparql.engine.optimizer.reorder.ReorderFixed.<clinit>(ReorderFixed.java:23)
at com.hp.hpl.jena.sparql.engine.optimizer.reorder.ReorderLib.fixed(ReorderLib.java:53)
at com.hp.hpl.jena.tdb.sys.SystemTDB.<clinit>(SystemTDB.java:187)
at com.hp.hpl.jena.tdb.TDB.<clinit>(TDB.java:90)
at com.hp.hpl.jena.tdb.setup.DatasetBuilderStd.<clinit>(DatasetBuilderStd.java:64)
at com.hp.hpl.jena.tdb.StoreConnection.make(StoreConnection.java:227)
at com.hp.hpl.jena.tdb.transaction.DatasetGraphTransaction.<init>(DatasetGraphTransaction.java:75)
at com.hp.hpl.jena.tdb.sys.TDBMaker._create(TDBMaker.java:57)
at com.hp.hpl.jena.tdb.sys.TDBMaker.createDatasetGraphTransaction(TDBMaker.java:45)
at com.hp.hpl.jena.tdb.TDBFactory._createDatasetGraph(TDBFactory.java:104)
at com.hp.hpl.jena.tdb.TDBFactory.createDatasetGraph(TDBFactory.java:73)
at com.hp.hpl.jena.tdb.TDBFactory.createDataset(TDBFactory.java:52)
at com.hp.hpl.jena.tdb.TDBFactory.createDataset(TDBFactory.java:48)
Caused by: java.lang.ClassCastException: org.apache.xerces.dom.DeferredTextImpl cannot be cast to org.w3c.dom.Element
at sun.util.xml.PlatformXmlPropertiesProvider.importProperties(PlatformXmlPropertiesProvider.java:118)
at sun.util.xml.PlatformXmlPropertiesProvider.load(PlatformXmlPropertiesProvider.java:90)
at java.util.Properties$XmlSupport.load(Properties.java:1201)
at java.util.Properties.loadFromXML(Properties.java:881)
at com.hp.hpl.jena.util.Metadata.read(Metadata.java:76)
at com.hp.hpl.jena.util.Metadata.addMetadata(Metadata.java:54)
at com.hp.hpl.jena.util.Metadata.<init>(Metadata.java:48)
at com.hp.hpl.jena.JenaRuntime.<clinit>(JenaRuntime.java:34)
The jar files that I am using are:
arq-2.8.7.jar,
commons-cli-1.2.jar,
commons-codec-1.6.jar,
commons-collections-3.2.1.jar,
commons-csv-1.0.jar,
commons-io-2.4.jar,
commons-lang3-3.1.jar,
commons-math3-3.0.jar,
httpclient-4.2.6.jar,
httpclient-cache-4.2.6.jar,
httpcore-4.2.5.jar,
jackson-annotations-2.3.0.jar,
jackson-core-2.3.3.jar,
jackson-databind-2.3.3.jar,
jcl-over-slf4j-1.7.6.jar,
jena-arq-2.12.1.jar,
jena-core-2.12.1.jar,
jena-iri-1.1.1.jar,
jena-sdb-1.5.1.jar,
jena-tdb-1.1.1.jar,
jgraph.jar,
jsonld-java-0.5.0.jar,
libthrift-0.9.1.jar,
log4j-1.2.17.jar,
slf4j-api-1.7.6.jar,
slf4j-log4j12-1.7.6.jar,
xercesImpl-2.11.0.jar,
xml-apis-1.4.01.jar
How do I fix this?

H2 - General error: "java.lang.NullPointerException" [50000-182]

I have a fairly big (>2.5 GB) H2 database file. The driver version is 1.4.182. Everything worked fine, but recently the DB stopped working with this exception:
Błąd ogólny: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-182] HY000/50000 (Help)
org.h2.jdbc.JdbcSQLException: Błąd ogólny: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-182]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.message.DbException.convert(DbException.java:295)
at org.h2.engine.Database.openDatabase(Database.java:297)
at org.h2.engine.Database.<init>(Database.java:260)
at org.h2.engine.Engine.openSession(Engine.java:60)
at org.h2.engine.Engine.openSession(Engine.java:167)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:145)
at org.h2.engine.Engine.createSession(Engine.java:128)
at org.h2.engine.Engine.createSession(Engine.java:26)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:347)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:108)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:92)
at org.h2.Driver.connect(Driver.java:72)
at org.h2.server.web.WebServer.getConnection(WebServer.java:750)
at org.h2.server.web.WebApp.test(WebApp.java:895)
at org.h2.server.web.WebApp.process(WebApp.java:221)
at org.h2.server.web.WebApp.processRequest(WebApp.java:170)
at org.h2.server.web.WebThread.process(WebThread.java:137)
at org.h2.server.web.WebThread.run(WebThread.java:93)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.NullPointerException
at org.h2.mvstore.db.ValueDataType.compare(ValueDataType.java:102)
at org.h2.mvstore.MVMap.compare(MVMap.java:741)
at org.h2.mvstore.Page.binarySearch(Page.java:388)
at org.h2.mvstore.MVMap.put(MVMap.java:179)
at org.h2.mvstore.MVMap.put(MVMap.java:133)
at org.h2.mvstore.db.TransactionStore.rollbackTo(TransactionStore.java:491)
at org.h2.mvstore.db.TransactionStore$Transaction.rollback(TransactionStore.java:785)
at org.h2.mvstore.db.MVTableEngine$Store.initTransactions(MVTableEngine.java:223)
at org.h2.engine.Database.open(Database.java:736)
at org.h2.engine.Database.openDatabase(Database.java:266)
... 17 more
The problem occurs both in my application and in the H2 web frontend.
I have tried the solution from a similar question, but I cannot downgrade H2 to 1.3.x, as it cannot read 1.4.x DB files.
My questions are:
How do I handle this? Is it possible to make the database work again? I tried downgrading H2 to 1.4.177, but it didn't help.
Is there any way to at least recover the data to another format? I could use another DB (SQLite, etc.), but I would need a way to get at the data.
EDIT: updated stacktrace
EDIT 2: Result of using the Recover tool:
$ java -cp h2-1.4.182.jar org.h2.tools.Recover
Exception in thread "main" java.lang.IllegalStateException: Unknown tag 50 [1.4.182/6]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:762)
at org.h2.mvstore.type.ObjectDataType.read(ObjectDataType.java:222)
at org.h2.mvstore.db.TransactionStore$ArrayType.read(TransactionStore.java:1792)
at org.h2.mvstore.db.TransactionStore$ArrayType.read(TransactionStore.java:1759)
at org.h2.mvstore.Page.read(Page.java:843)
at org.h2.mvstore.Page.read(Page.java:230)
at org.h2.mvstore.MVStore.readPage(MVStore.java:1813)
at org.h2.mvstore.MVMap.readPage(MVMap.java:769)
at org.h2.mvstore.Page.getChildPage(Page.java:252)
at org.h2.mvstore.MVMap.getFirstLast(MVMap.java:351)
at org.h2.mvstore.MVMap.firstKey(MVMap.java:218)
at org.h2.mvstore.db.TransactionStore.init(TransactionStore.java:169)
at org.h2.mvstore.db.TransactionStore.<init>(TransactionStore.java:117)
at org.h2.mvstore.db.TransactionStore.<init>(TransactionStore.java:81)
at org.h2.tools.Recover.dumpMVStoreFile(Recover.java:593)
at org.h2.tools.Recover.process(Recover.java:331)
at org.h2.tools.Recover.runTool(Recover.java:192)
at org.h2.tools.Recover.main(Recover.java:155)
I also noticed that two other files (.txt and .sql) were created, but they don't seem to contain any data.
I ran into the same situation last week with a JIRA database. It took me a few hours of googling, but none of the answers I found resolved the situation.
I decided to take a look at the H2 source code, and I was able to recover the whole database with very simple code. I may not understand the whole picture (e.g. the root cause, or under which conditions it happens).
However, the cause is this: when you connect to an H2 file, the engine looks into the undoLog and rolls back the in-progress transactions. Some entries have an unknown type (id 17), and H2 fails to roll back due to an exception while recognizing that type.
My code is simple: add the H2 lib to your build path, then manually connect to the file and clear the undoLog. I think it is just a log, so clearing it should not have a big impact (someone may correct me).
Hopefully, you can resolve your problem as well.
import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

public static void main(final String[] args) {
    // Open the store directly, bypassing the JDBC layer that fails
    // during the startup rollback (in-memory if fileName is null).
    final MVStore store = MVStore.open("C:\\temp\\h2db.mv.db");
    // Clear the undo log that H2 cannot roll back.
    final MVMap<Object, Object> openMap = store.openMap("undoLog");
    openMap.clear();
    // Close the store (this persists the change).
    store.close();
}
Another solution for this problem:
Go to your home folder (~ on Linux).
Move all files named *.mv.db to a backup with a different name, for example: mv xyz.mv.db xyz.mv.db.backup
Restart your database.
This seems to clear out the MVStore metadata used for H2's undo features and resolves the NPE from the MVStore compare.
I had a similar problem:
[HY000][50000] Allgemeiner Fehler: "java.lang.NullPointerException"
General error: "java.lang.NullPointerException" [50000-176]
java.lang.NullPointerException
I tried to connect to an H2 file DB with IntelliJ. The H2 driver version was 1.3.176, but the DB file had version 1.3.161. Downgrading the driver to 1.3.161 in IntelliJ solved the problem completely.

AtomCache function in BioJava

Hi everyone, I'm fairly new to BioJava and I'm trying to run this piece of code:
AtomCache cache = new AtomCache();
cache.setPath("/tmp/");
FileParsingParameters params = cache.getFileParsingParams();
params.setLoadChemCompInfo(true);
StructureIO.setAtomCache(cache);
Structure structure = StructureIO.getStructure("4HHB");
After executing these lines I get the following error message:
Exception in thread "main" java.lang.NoSuchFieldError: lineSplit
at org.biojava.bio.structure.align.util.UserConfiguration.(UserConfiguration.java:87)
at org.biojava.bio.structure.align.util.AtomCache.(AtomCache.java:115)
at protein_structure.main(protein_structure.java:27)
Java Result: 1
I can't figure out the reason for this error. I downloaded the PDB files for the proteins I'm working with (in this case "4HHB") into the /tmp/ directory, but the same error still shows up. Can anyone tell me how the AtomCache function works? Thanks
