Pipeline fails with Invalid namespace string: '//' - java

Context: a streaming pipeline using Dataflow 2.6.0, making use of windows and GroupByKey stages.
What happens: after a short time, the pipeline keeps throwing the following error:
java.lang.RuntimeException: Invalid namespace string: '//'
Looking at the stack trace, this happens because StateNamespaces (https://github.com/apache/beam/blob/master/runners/core-java/src/main/java/org/apache/beam/runners/core/StateNamespaces.java) passes an empty namespace string to CoderUtils.decodeFromBase64, which results in an uncaught EOFException, which in turn is wrapped in a CoderException.
However, this only happens in the cloud. Indeed, the stack trace reveals that Dataflow's proprietary worker code is involved here (in particular, the direct runner never uses StateNamespaces).
Here is the stack trace (full stack trace here: https://pastebin.com/xwh82pYx):
java.lang.RuntimeException: Invalid namespace string: '//'
org.apache.beam.runners.core.StateNamespaces.fromString(StateNamespaces.java:270)
com.google.cloud.dataflow.worker.WindmillTimerInternals.windmillTimerToTimerData(WindmillTimerInternals.java:264)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$StepContext.lambda$getNextFiredTimer$1(StreamingModeExecutionContext.java:535)
com.google.cloud.dataflow.worker.repackaged.com.google.common.collect.Iterators$7.transform(Iterators.java:750)
com.google.cloud.dataflow.worker.repackaged.com.google.common.collect.TransformedIterator.next(TransformedIterator.java:47)
com.google.cloud.dataflow.worker.StreamingModeExecutionContext$StepContext.getNextFiredTimer(StreamingModeExecutionContext.java:543)
com.google.cloud.dataflow.worker.SimpleParDoFn.processTimers(SimpleParDoFn.java:445)
com.google.cloud.dataflow.worker.SimpleParDoFn.processTimers(SimpleParDoFn.java:343)
com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.finish(ParDoOperation.java:51)
com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:83)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1227)
com.google.cloud.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:136)
com.google.cloud.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:966)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.coders.CoderException: java.io.EOFException
org.apache.beam.sdk.coders.InstantCoder.decode(InstantCoder.java:70)
org.apache.beam.sdk.coders.InstantCoder.decode(InstantCoder.java:34)
...
How can I fix this error?
Some investigation:
There seems to be no documentation available on similar errors.
This could be related to a similar issue in the Flink runner; however, that was supposed to be fixed in Beam 2.5.0.
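For reference, here is a minimal sketch of the kind of pipeline described above (streaming input, fixed windows feeding a GroupByKey). The Pub/Sub topic, window size and key extraction are placeholders, not taken from the actual job:
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.joda.time.Duration;

public class WindowedGroupByKeyPipeline {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
    p.apply(PubsubIO.readStrings().fromTopic("projects/my-project/topics/my-topic"))
        // Assign each element to a fixed one-minute window.
        .apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))))
        // Key each element (here: by its first comma-separated field).
        .apply(MapElements
            .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.strings()))
            .via((String line) -> KV.of(line.split(",")[0], line)))
        // Group per key and per window; the error above appears while the
        // Dataflow worker processes window timers (see processTimers in the trace).
        .apply(GroupByKey.<String, String>create());
    p.run();
  }
}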

Related

I have set up all the basic plugins and drivers but get an ERROR when I run the feature file

I'm running this from IntelliJ IDEA.
I have the necessary plugins and drivers, but I am unable to figure out why I am getting this error.
Please find the code in the image.
Exception in thread "main" cucumber.runtime.CucumberException: Error parsing feature file C:/Users/IBM_ADMIN/IdeaProjects/redbusservices/src/test/resources/feature/Login.feature
at cucumber.runtime.FeatureBuilder.parse(FeatureBuilder.java:133)
at cucumber.runtime.model.CucumberFeature.loadFromFeaturePath(CucumberFeature.java:104)
at cucumber.runtime.model.CucumberFeature.load(CucumberFeature.java:54)
at cucumber.runtime.model.CucumberFeature.load(CucumberFeature.java:34)
at cucumber.runtime.RuntimeOptions.cucumberFeatures(RuntimeOptions.java:239)
at cucumber.runtime.Runtime.run(Runtime.java:111)
at cucumber.api.cli.Main.run(Main.java:36)
at cucumber.api.cli.Main.main(Main.java:18)
Caused by: gherkin.lexer.LexingError: Lexing error on line 1: 'Feature : Scenarios for login into Redbus
There is a space after the Feature keyword (i.e., before the colon) in the feature file.
Remove it, as shown below, and the error will go away.
Feature: Scenarios for login into Redbus
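For completeness, a minimal JUnit runner (Cucumber-JVM 1.x, matching the cucumber.api/cucumber.runtime packages in the stack trace) that loads that feature file; the package and glue path are illustrative, not taken from the question:
package redbusservices;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// Runs Login.feature; parsing happens in FeatureBuilder.parse, which is where
// the LexingError above is thrown if the Gherkin syntax (e.g. "Feature :"
// instead of "Feature:") is invalid.
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/feature/Login.feature",
    glue = "redbusservices.stepdefs")
public class LoginFeatureTest {
}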

Error when running the WordCount example pipeline on Dataflow with Eclipse

When trying to run the WordCount example pipeline using Dataflow from the Eclipse IDE, I get the following error:
Exception in thread "main" java.lang.RuntimeException: Failed to construct instance from factory method DataflowRunner#fromOptions(interface org.apache.beam.sdk.options.PipelineOptions)
at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:233)
at org.apache.beam.sdk.util.InstanceBuilder.build(InstanceBuilder.java:162)
at org.apache.beam.sdk.PipelineRunner.fromOptions(PipelineRunner.java:55)
at org.apache.beam.sdk.Pipeline.create(Pipeline.java:150)
at com.google.cloud.dataflow.examples.WordCount.main(WordCount.java:178)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:222)
... 4 more
Caused by: java.lang.IllegalArgumentException: Missing object or bucket in path: 'gs://mysite-ga-datastreaming-196008-my-bucket/', did you mean: 'gs://some-bucket/mysite-ga-datastreaming-196008-my-bucket'?
at org.apache.beam.sdks.java.extensions.google.cloud.platform.core.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:383)
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.verifyPath(GcsPathValidator.java:77)
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.validateOutputFilePrefixSupported(GcsPathValidator.java:60)
at org.apache.beam.runners.dataflow.DataflowRunner.fromOptions(DataflowRunner.java:246)
... 9 more
Some people suggest that the error is due to the Java version, as Beam apparently does not work well with Java 9; however, I am using Java 8. Others say that the error occurs because you have to provide a subfolder under your bucket as the storage location. I have tried that, but it still does not work.
If anyone has faced this issue before or can provide any advice on the error, it would be appreciated.
You should create the bucket gs://mysite-ga-datastreaming-196008-my-bucket/ in Google Cloud Storage before using the pipeline.
Create mysite-ga-datastreaming-196008-my-bucket in the GCP project.
To create the bucket:
Go to the GCP console, select Storage, and click the Create Bucket button. Enter the bucket name mysite-ga-datastreaming-196008-my-bucket and click OK.
Then run the pipeline again.
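Once the bucket exists, here is a minimal sketch of how the pipeline options might be wired up so that the staging/temp locations point at an object path inside the bucket rather than at the bare bucket root (the project ID and folder names are assumptions):
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class WordCountLauncher {
  public static void main(String[] args) {
    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
    options.setRunner(DataflowRunner.class);
    options.setProject("mysite-ga-datastreaming-196008");  // assumed project ID
    // Use an object prefix inside the (existing) bucket, not the bucket root,
    // which appears to be what GcsPathValidator complains about in the trace.
    options.setGcpTempLocation("gs://mysite-ga-datastreaming-196008-my-bucket/temp");
    options.setStagingLocation("gs://mysite-ga-datastreaming-196008-my-bucket/staging");

    Pipeline p = Pipeline.create(options);
    // ... add the WordCount transforms here ...
    p.run();
  }
}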

Docker container running on one computer and not another

I'm trying to deploy a WebSphere Liberty application via Docker, using Apache Struts for the UI. When deploying on my local machine I have no problems, but when the container runs on seemingly any other machine, it throws an error saying the struts2 filter cannot be loaded. No classes appear to be missing.
Why would this container work on one machine and not another?
Stack Trace:
[ERROR ] SRVE0321E: The [struts2] filter did not load during start up.
Filter [struts2]: could not be initialized
[ERROR ] SRVE0315E: An exception occurred: java.lang.Throwable: javax.servlet.ServletException: Filter [struts2]: could not be initialized
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:5027)
at [internal classes]
Caused by: javax.servlet.ServletException: Filter [struts2]: could not be initialized
at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.init(FilterInstanceWrapper.java:163)
... 1 more
Caused by: Unable to create SAX parser - Class: com.icl.saxon.aelfred.SAXParserFactoryImpl
File: SAXParserFactoryImpl.java
Method: newSAXParser
Line: 34 - com/icl/saxon/aelfred/SAXParserFactoryImpl.java:34:-1
at com.opensymphony.xwork2.config.providers.XmlConfigurationProvider.loadConfigurationFiles(XmlConfigurationProvider.java:835)
at com.opensymphony.xwork2.config.providers.XmlConfigurationProvider.loadDocuments(XmlConfigurationProvider.java:131)
at com.opensymphony.xwork2.config.providers.XmlConfigurationProvider.init(XmlConfigurationProvider.java:100)
at com.opensymphony.xwork2.config.impl.DefaultConfiguration.reload(DefaultConfiguration.java:130)
at com.opensymphony.xwork2.config.ConfigurationManager.getConfiguration(ConfigurationManager.java:52)
at org.apache.struts2.dispatcher.Dispatcher.init_PreloadConfiguration(Dispatcher.java:395)
at org.apache.struts2.dispatcher.Dispatcher.init(Dispatcher.java:452)
at org.apache.struts2.dispatcher.FilterDispatcher.init(FilterDispatcher.java:201)
at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.init(FilterInstanceWrapper.java:149)
... 1 more
Caused by: Unable to create SAX parser - Class: com.icl.saxon.aelfred.SAXParserFactoryImpl
File: SAXParserFactoryImpl.java
Method: newSAXParser
Line: 34 - com/icl/saxon/aelfred/SAXParserFactoryImpl.java:34:-1
at com.opensymphony.xwork2.util.DomHelper.parse(DomHelper.java:111)
at com.opensymphony.xwork2.config.providers.XmlConfigurationProvider.loadConfigurationFiles(XmlConfigurationProvider.java:830)
... 9 more
Caused by: javax.xml.parsers.ParserConfigurationException: AElfred parser is non-validating
at com.icl.saxon.aelfred.SAXParserFactoryImpl.newSAXParser(SAXParserFactoryImpl.java:34)
at com.opensymphony.xwork2.util.DomHelper.parse(DomHelper.java:109)
... 10 more
Caused by: javax.xml.parsers.ParserConfigurationException: AElfred parser is non-validating
Struts2 requires a validating parser. Since this parser is non-validating, it should be removed from the classpath.
The offending parser can be found in saxon.jar.
Thanks to Roman I was able to diagnose this problem more precisely as a Saxon XML parser issue. I tried simply replacing my JAR, and that actually worked for a few tests but later broke.
This forum post ultimately solved the problem: http://grokbase.com/t/tomcat/users/031xc9jye7/i-cant-use-saxon-xml-parser-in-my-web-app-please-help
My web server (WebSphere Liberty) was trying to use Saxon as the XML parser; however, Saxon's parser is non-validating, so this was failing, particularly in the Docker environment where I was running it.
To fix this I had to remove the javax.xml.parsers.SAXParserFactory service file from the JAR, and then the server ran correctly.
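As a quick way to see the difference between the two machines, here is a small diagnostic (a sketch; the class name is illustrative) that prints which SAXParserFactory implementation JAXP resolves on the current classpath and whether it can produce a validating parser:
import javax.xml.parsers.SAXParserFactory;

public class SaxFactoryCheck {
  public static void main(String[] args) throws Exception {
    // On the failing machine this is expected to print
    // com.icl.saxon.aelfred.SAXParserFactoryImpl, resolved from the
    // services entry in saxon.jar.
    SAXParserFactory factory = SAXParserFactory.newInstance();
    System.out.println("SAXParserFactory: " + factory.getClass().getName());

    // Struts2 needs a validating parser; the AElfred factory fails here
    // with "AElfred parser is non-validating".
    factory.setValidating(true);
    factory.newSAXParser();
    System.out.println("Validating SAX parser created successfully");
  }
}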

I can't run Scada-LTS: WebappClassLoaderBase and ViewGraphicLoader errors

When I tried to run Scada-LTS on Tomcat 8.0.24 I got the following error:
23-Sep-2016 11:03:27.228 INFO [Timer-0] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [java.util.ListIterator]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [java.util.ListIterator]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1335)
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1321)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1203)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1164)
at org.apache.commons.pool.impl.GenericObjectPool.evict(GenericObjectPool.java:981)
at org.apache.commons.pool.impl.GenericObjectPool$Evictor.run(GenericObjectPool.java:1112)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
and a lot of errors like:
WARN 2016-09-23 11:03:17,125 (com.serotonin.mango.view.ViewGraphicLoader.loadViewGraphics:58) - Failed to load image set at C:\Users\xxx\Desktop\GitHubProjects\Scada-LTS\out\artifacts\scadabr_1_1_0_RC_war_exploded\graphics\Weather
java.lang.Exception: Unable to derive image dimensions
Does anybody know why I am getting these errors?
Please run Scada-LTS on Tomcat 7.
Scada-LTS has not been migrated to Tomcat 8 yet (the migration is in progress).
The problem is with paths in the Scada-LTS app.

Neo4j store upgrade error

I have created a large graph using Neo4j 2.2.0-M02's import tool.
Now I want to use the same database with the embedded version of 2.2.0-RC01. I get the following error in Java when I initialize the database:
Exception in thread "main" java.lang.RuntimeException: Error starting org.neo4j.kernel.EmbeddedGraphDatabase, D:\Neo4j\data\test3.db
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:331)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:59)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.newDatabase(GraphDatabaseFactory.java:103)
at org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:90)
at org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:176)
at RCNeo4j.initDB(RCNeo4j.java:419)
at RCNeo4j.main(RCNeo4j.java:46)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.transaction.state.DataSourceManager#2c7e0aa0' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:513)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:115)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:326)
... 6 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.NeoStoreDataSource#37b86b14' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:513)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:115)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:117)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:507)
... 8 more
Caused by: org.neo4j.kernel.impl.storemigration.StoreUpgrader$UnexpectedUpgradingStoreVersionException: 'neostore.nodestore.db' has a store version number that we cannot upgrade from. Expected 'v0.A.3' but file is version 'NodeStore v0.A.4'.
at org.neo4j.kernel.impl.storemigration.UpgradableDatabase.checkUpgradeable(UpgradableDatabase.java:88)
at org.neo4j.kernel.impl.storemigration.StoreMigrator.needsMigration(StoreMigrator.java:157)
at org.neo4j.kernel.impl.storemigration.StoreUpgrader.getParticipantsEagerToMigrate(StoreUpgrader.java:259)
at org.neo4j.kernel.impl.storemigration.StoreUpgrader.migrateIfNeeded(StoreUpgrader.java:134)
at org.neo4j.kernel.NeoStoreDataSource.upgradeStore(NeoStoreDataSource.java:562)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:471)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:507)
... 11 more
message.log inside the database directory doesn't seem to show any exception either. I get the same error when I try to move from 2.1.7 to RC01.
Also, on a different note, I would like to know whether it is possible to use the database generated by 2.2.0-M02 in 2.1.7 (kind of like a downgrade), because I would prefer a more stable version for my analysis.
Neo4j does not provide an upgrade path between milestone releases, so there is no direct way to upgrade 2.2.0-M02 to 2.2.0-RC1. Upgrades are only supported from one stable to another stable version. Downgrades are not supported at all in the product.
However, there is a potential way to do it. Use Michael's store-utils (https://github.com/jexp/store-utils) and change the code to use classloader separation, so that the store you are reading from and the one you are writing to use separate classloaders with different Neo4j versions.
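For reference, here is a minimal sketch (Neo4j 2.x embedded API) of how a supported stable-to-stable upgrade is normally enabled; as explained above, this does not work for a milestone store such as 2.2.0-M02:
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.graphdb.factory.GraphDatabaseSettings;

public class StoreUpgradeSketch {
  public static void main(String[] args) {
    // Opens the store and upgrades it in place if the old version is a
    // supported (stable) upgrade source; otherwise startup fails with the
    // UnexpectedUpgradingStoreVersionException shown above.
    GraphDatabaseService db = new GraphDatabaseFactory()
        .newEmbeddedDatabaseBuilder("D:\\Neo4j\\data\\test3.db")
        .setConfig(GraphDatabaseSettings.allow_store_upgrade, "true")
        .newGraphDatabase();
    try {
      // ... run queries / analysis here ...
    } finally {
      db.shutdown();
    }
  }
}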
