For JUnit tests in our application, I'm attempting to initialize DataNucleus to use an SQLite database. However, when I try to get a PersistenceManager, it fails while trying to create a DELETEME******** table (I'm not yet sure why it needs this table, either).
It appears to fail in truncate() because of an identifier length issue. I have tried tweaking various DataNucleus configuration properties, to no avail.
Can anyone explain why DataNucleus needs to create these DELETEME**** tables, and what might cause this truncation failure with an SQLite (org.sqlite.JDBC) database but not with MySQL?
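For context, the tests obtain the PersistenceManager roughly like this (a minimal sketch; the property values and the SQLite file path are assumptions, not the exact test configuration):
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;

Properties props = new Properties();
props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
        "org.datanucleus.jdo.JDOPersistenceManagerFactory");
props.setProperty("javax.jdo.option.ConnectionDriverName", "org.sqlite.JDBC");
props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:sqlite:target/test.db");
props.setProperty("datanucleus.autoCreateSchema", "true");

PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
PersistenceManager pm = pmf.getPersistenceManager(); // fails during schema initialisation, see log below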
Log:
Apr 12, 2011 1:30:16 PM org.datanucleus.store.rdbms.table.AbstractTable create
INFO: Creating table DELETEME1302640216142
Apr 12, 2011 1:32:10 PM org.datanucleus.store.rdbms.RDBMSStoreManager <init>
SEVERE: Failed initialising database. Please check that your database JDBC driver is accessible, and the database URL and username/password are correct. Exception : The length argument (=-3) is less than HASH_LENGTH(=4)!
java.lang.IllegalArgumentException: The length argument (=-3) is less than HASH_LENGTH(=4)!
Full stack trace:
java.lang.IllegalArgumentException: The length argument (=-3) is less than HASH_LENGTH(=4)!
at org.datanucleus.store.mapped.identifier.AbstractIdentifierFactory.truncate(AbstractIdentifierFactory.java:314)
at org.datanucleus.store.mapped.identifier.AbstractIdentifierFactory.newPrimaryKeyIdentifier(AbstractIdentifierFactory.java:661)
at org.datanucleus.store.rdbms.key.PrimaryKey.<init>(PrimaryKey.java:37)
at org.datanucleus.store.rdbms.table.TableImpl.getPrimaryKey(TableImpl.java:128)
at org.datanucleus.store.rdbms.table.TableImpl.getSQLCreateStatements(TableImpl.java:1264)
at org.datanucleus.store.rdbms.table.AbstractTable.create(AbstractTable.java:419)
at org.datanucleus.store.rdbms.RDBMSStoreManager.initialiseSchema(RDBMSStoreManager.java:676)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:350)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:597)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
at org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:227)
at org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:591)
at org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:293)
at org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
I am trying to connect to Google BigQuery using Spark in Java, but I am unable to find accurate documentation for this.
I tried: https://cloud.google.com/dataproc/docs/tutorials/bigquery-connector-spark-example
and
https://github.com/GoogleCloudPlatform/spark-bigquery-connector#compiling-against-the-connector
My code:
sparkSession.conf().set("credentialsFile", "/path/OfMyProjectJson.json");
Dataset<Row> dataset = sparkSession.read().format("bigquery")
        .option("table", "myProject.myBigQueryDb.myBigQueryTable")
        .load();
dataset.printSchema();
But this throws an exception:
Exception in thread "main" java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider com.google.cloud.spark.bigquery.BigQueryRelationProvider could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:614)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:190)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
at com.mySparkConnector.getDataset(BigQueryFetchClass.java:12)
Caused by: java.lang.IllegalArgumentException: A project ID is required for this service but could not be determined from the builder or the environment. Please set a project ID using the builder.
at com.google.cloud.spark.bigquery.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.ServiceOptions.<init>(ServiceOptions.java:285)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryOptions.<init>(BigQueryOptions.java:91)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryOptions.<init>(BigQueryOptions.java:30)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryOptions$Builder.build(BigQueryOptions.java:86)
at com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryOptions.getDefaultInstance(BigQueryOptions.java:159)
at com.google.cloud.spark.bigquery.BigQueryRelationProvider$.$lessinit$greater$default$2(BigQueryRelationProvider.scala:29)
at com.google.cloud.spark.bigquery.BigQueryRelationProvider.<init>(BigQueryRelationProvider.scala:40)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
... 15 more
My JSON file contains the project_id.
I have searched for possible solutions but couldn't find any, so please help me find a solution to this exception, or point me to documentation on how to connect to BigQuery from Spark.
I've got exactly the same error with the DataProcPySparkOperator in Airflow. The fix was to provide
dataproc_pyspark_jars='gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar'
instead of
dataproc_pyspark_jars='gs://spark-lib/bigquery/spark-bigquery-latest.jar'
I guess in your case it should be passed as a command-line argument, like
--jars=gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar
Recently a PR handling this issue was merged into the spark-bigquery-connector; a new version of the connector will be released soon.
A simple solution for now is to add the environment variable GOOGLE_APPLICATION_CREDENTIALS=/path/OfMyProjectJson.json to the Spark runtime.
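Alternatively, the credentials and billing project can be passed directly as connector options rather than through sparkSession.conf(). A minimal sketch: "credentialsFile" and "parentProject" are documented spark-bigquery-connector options, while the path and IDs are the question's placeholders.
// Sketch: pass credentials and project explicitly per read.
Dataset<Row> dataset = sparkSession.read()
        .format("bigquery")
        .option("credentialsFile", "/path/OfMyProjectJson.json")
        .option("parentProject", "myProject") // project used for billing/quota
        .option("table", "myProject.myBigQueryDb.myBigQueryTable")
        .load();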
I am using Hazelcast v3.3. The Hazelcast server runs a few MapStore implementations. I have sporadically seen the following warning in the logs and need to understand what might be causing it and whether it could lead to data loss in Hazelcast. I have a fairly small cluster at the moment for testing (4 VMs running Ubuntu 13, 2 GB RAM each).
2014-09-14 18:55:21,886 WARN c.h.s.i.BasicInvocation [main] [xxx.xxx.xxx.xxx]:5701 [testApp] [3.3] While asking 'is-executing': BasicInvocationFuture{invocation=BasicInvocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.PartitionIteratingOperation#285583d4, partitionId=-1, replicaIndex=0, tryCount=10, tryPauseMillis=300, invokeCount=1, callTimeout=60000, target=Address[xxx.xxx.xxx.xxx]:5701}, done=false}
java.util.concurrent.TimeoutException: Call BasicInvocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.IsStillExecutingOperation#5b202ff, partitionId=-1, replicaIndex=0, tryCount=0, tryPauseMillis=0, invokeCount=1, callTimeout=5000, target=Address[xxx.xxx.xxx.xxx]:5701} encountered a timeout
at com.hazelcast.spi.impl.BasicInvocationFuture.resolveApplicationResponse(BasicInvocationFuture.java:321)
at com.hazelcast.spi.impl.BasicInvocationFuture.resolveApplicationResponseOrThrowException(BasicInvocationFuture.java:289)
at com.hazelcast.spi.impl.BasicInvocationFuture.get(BasicInvocationFuture.java:181)
at com.hazelcast.spi.impl.BasicInvocationFuture.isOperationExecuting(BasicInvocationFuture.java:390)
at com.hazelcast.spi.impl.BasicInvocationFuture.waitForResponse(BasicInvocationFuture.java:228)
at com.hazelcast.spi.impl.BasicInvocationFuture.get(BasicInvocationFuture.java:180)
at com.hazelcast.spi.impl.BasicInvocationFuture.get(BasicInvocationFuture.java:160)
at com.hazelcast.spi.impl.BasicOperationService$InvokeOnPartitions.awaitCompletion(BasicOperationService.java:489)
at com.hazelcast.spi.impl.BasicOperationService$InvokeOnPartitions.invoke(BasicOperationService.java:458)
at com.hazelcast.spi.impl.BasicOperationService$InvokeOnPartitions.access$600(BasicOperationService.java:430)
at com.hazelcast.spi.impl.BasicOperationService.invokeOnAllPartitions(BasicOperationService.java:293)
at com.hazelcast.map.proxy.MapProxySupport.size(MapProxySupport.java:616)
at com.hazelcast.map.proxy.MapProxyImpl.size(MapProxyImpl.java:72)
at com.akkadian.wildmetrix.StartHcastServer.main(StartHcastServer.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
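One knob worth experimenting with (a sketch, not a confirmed fix): if slow MapStore calls keep operations running past the 60-second invocation timeout, raising hazelcast.operation.call.timeout.millis may reduce these warnings. The property name exists in Hazelcast 3.x; the value below is an arbitrary assumption.
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class StartServerSketch {
    public static void main(String[] args) {
        Config config = new Config();
        // Default is 60000 ms in 3.x; 120000 is an assumption for slow MapStore backends.
        config.setProperty("hazelcast.operation.call.timeout.millis", "120000");

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        IMap<String, Object> map = hz.getMap("testMap");
        System.out.println("map size = " + map.size()); // the call that timed out in the trace above
    }
}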
Hibernate 4 uses JDBC 4, where the signature of the method setBinaryStream(int, InputStream, int) was changed to setBinaryStream(int, InputStream, long). c3p0 does not support this new method.
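For reference, the two java.sql.PreparedStatement declarations in question (the JDBC 3 variant older c3p0 proxies implement, and the JDBC 4 variant Hibernate 4 calls):
// JDBC 3 (Java 5) - implemented by older c3p0 proxies:
void setBinaryStream(int parameterIndex, java.io.InputStream x, int length) throws SQLException;

// JDBC 4 (Java 6) - overload that Hibernate 4 calls, hence the AbstractMethodError:
void setBinaryStream(int parameterIndex, java.io.InputStream x, long length) throws SQLException;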
So calling saveOrUpdate(myObjWithBlob) results in:
java.lang.AbstractMethodError: com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.setBinaryStream(ILjava/io/InputStream;J)V
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.hibernate.engine.jdbc.internal.proxy.AbstractStatementProxyHandler.continueInvocation(AbstractStatementProxyHandler.java:122)
at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
at $Proxy75.setBinaryStream(Unknown Source)
So what can I do now?
1) Not use c3p0 and switch to DBCP, BoneCP, or no connection pool at all. This is not really the option I want.
2) Somehow make Hibernate avoid calling this new method? Is that possible?
3) Switch back to Hibernate 3, which is also not really good for me.
Please upgrade to c3p0 0.9.2-pre8 (or wait a few days for 0.9.2 final). This issue has been resolved in recent releases of the library.
Update: c3p0 0.9.2 is now released. It does resolve this issue.
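Assuming a Maven build (a sketch, not part of the original answer), the upgrade is a one-line dependency change; note that as of 0.9.2 the groupId moved to com.mchange:
<dependency>
    <groupId>com.mchange</groupId>
    <artifactId>c3p0</artifactId>
    <version>0.9.2</version>
</dependency>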
We are developing a Vaadin application and running it on OC4J 10.1.3.
After implementing basic authentication, the user's session gets invalidated automatically, well within the timeout (which is the default).
So, a user opens the Vaadin application, and at some point data should be set as session attributes for another servlet. However, at that point we get an exception saying that the session is already invalidated. invalidate() is never called explicitly; maybe OC4J does it?
It was working fine before authentication was implemented.
Aug 13, 2012 4:21:27 PM com.vaadin.Application terminalError
SEVERE: Terminal error:
com.vaadin.event.ListenerMethod$MethodException: Invocation of method buttonClick in com.mycompany.myapp$1 failed.
at com.vaadin.event.ListenerMethod.receiveEvent(ListenerMethod.java:530)
at com.vaadin.event.EventRouter.fireEvent(EventRouter.java:164)
at com.vaadin.ui.AbstractComponent.fireEvent(AbstractComponent.java:1219)
at com.vaadin.ui.Button.fireClick(Button.java:567)
at com.vaadin.ui.Button.changeVariables(Button.java:223)
at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.changeVariables(AbstractCommunicationManager.java:1460)
at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.handleVariableBurst(AbstractCommunicationManager.java:1404)
at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.handleVariables(AbstractCommunicationManager.java:1329)
at com.vaadin.terminal.gwt.server.AbstractCommunicationManager.doHandleUidlRequest(AbstractCommunicationManager.java:761)
at com.vaadin.terminal.gwt.server.CommunicationManager.handleUidlRequest(CommunicationManager.java:318)
at com.vaadin.terminal.gwt.server.AbstractApplicationServlet.service(AbstractApplicationServlet.java:501)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64)
at com.mycompany.myapp.RewriteFilter.doFilter(RewriteFilter.java:44)
at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:644)
at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:391)
at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:908)
at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:458)
at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:313)
at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:199)
at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)
Caused by: java.lang.IllegalStateException: Session was invalidated
at com.evermind.server.http.EvermindHttpSession.setAttribute(EvermindHttpSession.java:158)
at com.evermind.server.http.EvermindHttpSession.setAttribute(EvermindHttpSession.java:137)
at com.mycompany.myapp$1.buttonClick(MyPage.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.vaadin.event.ListenerMethod.receiveEvent(ListenerMethod.java:510)
... 22 more
After implementing an HttpSessionListener, I've found that the session itself is valid. Something must be wrong on the Vaadin side, because when I set the session as a variable in my Application implementation from the HttpServletRequestListener.onRequestStart method, that session instance is okay.
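For reference, a minimal sketch of that workaround (Vaadin 6 API; the class and field names are placeholders, not the actual application code):
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

import com.vaadin.Application;
import com.vaadin.terminal.gwt.server.HttpServletRequestListener;

public class MyApplication extends Application implements HttpServletRequestListener {

    private transient HttpSession currentSession;

    @Override
    public void init() {
        // build the UI; button click listeners can use currentSession.setAttribute(...)
    }

    @Override
    public void onRequestStart(HttpServletRequest request, HttpServletResponse response) {
        // Capture the container's session on every request; this instance
        // stays valid even when the Vaadin-held reference appears invalidated.
        currentSession = request.getSession();
    }

    @Override
    public void onRequestEnd(HttpServletRequest request, HttpServletResponse response) {
        // nothing to do in this sketch
    }
}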
I tried using the following query:
Query q = getPersistenceManager().newQuery(
getPersistenceManager().getExtent(ICommentItem.class, false)
);
but got:
org.datanucleus.exceptions.NoPersistenceInformationException: The class "com.sampleapp.data.dataobjects.ICommentItem" is required to be persistable yet no Meta-Data/Annotations can be found for this class. Please check that the Meta-Data/annotations is defined in a valid file location.
I saw in the DataNucleus forum that somebody suggested (a few years ago) using:
<interface name="IComment"/>
I tried that, but it didn't create any table when I ran schema-update. Is that tag still relevant? I couldn't see anything in the docs about it.
I also tried:
<class name="IComment"/>
But that gave this error when running schema-create:
SEVERE: Error thrown enhancing with ASMClassEnhancer
java.lang.NullPointerException
at org.datanucleus.enhancer.asm.method.DefaultConstructor.execute(DefaultConstructor.java:63)
at org.datanucleus.enhancer.asm.JdoClassAdapter.visitEnd(JdoClassAdapter.java:317)
at org.objectweb.asm.ClassReader.accept(Unknown Source)
at org.objectweb.asm.ClassReader.accept(Unknown Source)
at org.datanucleus.enhancer.asm.ASMClassEnhancer.enhance(ASMClassEnhancer.java:388)
at org.datanucleus.enhancer.DataNucleusEnhancer.enhanceClass(DataNucleusEnhancer.java:1035)
at org.datanucleus.enhancer.DataNucleusEnhancer.enhance(DataNucleusEnhancer.java:609)
at org.datanucleus.enhancer.DataNucleusEnhancer.main(DataNucleusEnhancer.java:1316)
Oct 23, 2010 6:46:33 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: ERROR (PersistenceCapable) : com.sampleapp.data.dataobjects.ICommentItem
Oct 23, 2010 6:46:33 PM org.datanucleus.enhancer.asm.ASMClassEnhancer enhance
INFO: Class "com.sampleapp.data.dataobjects.Article" is already enhanced.
Oct 23, 2010 6:46:33 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
SEVERE: DataNucleus Enhancer completed with an error. Please review the enhancer log for full details. Some classes may have been enhanced but some caused errors
Failure during enhancement of classes - see the log for details
org.datanucleus.exceptions.NucleusException: Failure during enhancement of classes - see the log for details
at org.datanucleus.enhancer.DataNucleusEnhancer.enhance(DataNucleusEnhancer.java:620)
at org.datanucleus.enhancer.DataNucleusEnhancer.main(DataNucleusEnhancer.java:1316)
Turns out this is not supported at this time, but it is planned to be added in version 2.2.0M3.
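Until then, a possible interim workaround (an untested sketch) is to query a concrete persistable implementation of the interface instead; CommentItem below is a hypothetical class name:
// CommentItem is a hypothetical concrete class implementing ICommentItem.
Query q = getPersistenceManager().newQuery(
        getPersistenceManager().getExtent(CommentItem.class, false)
);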