This is my last attempt to configure Apache Ignite 2.0 to work with Cassandra as the persistence layer and ODBC as the query layer.
The ODBC configuration is fine: I am able to put and get data in the cache with SQL. But when I plug in Cassandra (version 3.9, via the Docker image) as the persistence layer, I get this:
java.lang.ClassNotFoundException: Unknown pair [platformId=0, typeId=1262449073]
I tried googling this exception but found no useful hints.
Here's my Ignite configuration:
boolean persistence = true;

IgniteConfiguration cfg = new IgniteConfiguration();

CacheConfiguration<String, ValueClass> configuration = new CacheConfiguration<String, ValueClass>();
configuration.setName("test-cache");
configuration.setIndexedTypes(String.class, ValueClass.class);

if (persistence) {
    // Configuring Cassandra's persistence
    DataSource dataSource = new DataSource();
    dataSource.setContactPoints("172.17.0.2");

    RoundRobinPolicy robinPolicy = new RoundRobinPolicy();
    dataSource.setLoadBalancingPolicy(robinPolicy);
    dataSource.setReadConsistency("ONE");
    dataSource.setWriteConsistency("ONE");

    String persistenceSettingsXml = FileUtils.readFileToString(new File(persistenceSettingsConfig), "utf-8");
    KeyValuePersistenceSettings persistenceSettings = new KeyValuePersistenceSettings(persistenceSettingsXml);

    CassandraCacheStoreFactory cacheStoreFactory = new CassandraCacheStoreFactory();
    cacheStoreFactory.setDataSource(dataSource);
    cacheStoreFactory.setPersistenceSettings(persistenceSettings);

    configuration.setCacheStoreFactory(cacheStoreFactory);
    configuration.setWriteThrough(true);
    configuration.setReadThrough(true);
    configuration.setWriteBehindEnabled(true);
}

// Setting cache configuration
cfg.setCacheConfiguration(configuration);

// Configuring ODBC
OdbcConfiguration odbcConfig = new OdbcConfiguration();
odbcConfig.setMaxOpenCursors(100);
cfg.setOdbcConfiguration(odbcConfig);

// Starting Ignite
Ignite ignite = Ignition.start(cfg);
ValueClass:
public class ValueClass implements Serializable {
    @QuerySqlField
    private Integer numberOne;
    @QuerySqlField
    private Integer numberTwo;

    public Integer getNumberOne() { return numberOne; }
    public Integer getNumberTwo() { return numberTwo; }

    public void setNumberOne(Integer value) { numberOne = value; }
    public void setNumberTwo(Integer value) { numberTwo = value; }
}
Persistence configuration:
<persistence keyspace="ignite" table="odbc_test" ttl="86400">
    <keyspaceOptions>
        REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 1}
        AND DURABLE_WRITES = true
    </keyspaceOptions>
    <tableOption>
        comment = 'Cache test'
        AND read_repair_chance = 0.2
    </tableOption>
    <keyPersistence class="java.lang.String" strategy="PRIMITIVE" column="key" />
    <valuePersistence class="com.riccamini.ignite.ValueClass" strategy="POJO">
        <value name="numberOne" column="number_one"/>
        <value name="numberTwo" column="number_two"/>
    </valuePersistence>
</persistence>
Complete stack trace:
SEVERE: <test-cache> Unexpected exception during cache update
class org.apache.ignite.IgniteException: Runtime failure on search row: org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$SearchRow@6380d269
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1615)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:925)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:326)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1693)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2386)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1792)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1630)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:480)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:440)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1162)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:651)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2345)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.putIfAbsent(GridCacheAdapter.java:2720)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.doInsert(DmlStatementsProcessor.java:829)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.executeUpdateStatement(DmlStatementsProcessor.java:369)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:164)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsTwoStep(DmlStatementsProcessor.java:198)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1659)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:1659)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:1657)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2103)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:1657)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:806)
at org.apache.ignite.internal.processors.odbc.OdbcRequestHandler.executeQuery(OdbcRequestHandler.java:213)
at org.apache.ignite.internal.processors.odbc.OdbcRequestHandler.handle(OdbcRequestHandler.java:108)
at org.apache.ignite.internal.processors.odbc.OdbcNioListener.onMessage(OdbcNioListener.java:124)
at org.apache.ignite.internal.processors.odbc.OdbcNioListener.onMessage(OdbcNioListener.java:33)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException: Unknown pair [platformId=0, typeId=1262449073]
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:701)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1745)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:794)
at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:142)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:273)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:161)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:148)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1730)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:555)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:4404)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4226)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:3966)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:2966)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2860)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invokeDown(BPlusTree.java:1696)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1585)
... 37 more
Caused by: java.lang.ClassNotFoundException: Unknown pair [platformId=0, typeId=1262449073]
at org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:385)
at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:335)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:692)
... 53 more
Jun 28, 2017 10:02:41 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to execute SQL query [reqId=2, req=OdbcQueryExecuteRequest [cacheName=test-cache, sqlQry=INSERT INTO valueclass (_key, numberone, numbertwo) VALUES ('testkey2', 10, 10), args=[]]]
javax.cache.CacheException: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to execute DML statement [stmt=INSERT INTO valueclass (_key, numberone, numbertwo) VALUES ('testkey2', 10, 10), params=[]]
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:818)
at org.apache.ignite.internal.processors.odbc.OdbcRequestHandler.executeQuery(OdbcRequestHandler.java:213)
at org.apache.ignite.internal.processors.odbc.OdbcRequestHandler.handle(OdbcRequestHandler.java:108)
at org.apache.ignite.internal.processors.odbc.OdbcNioListener.onMessage(OdbcNioListener.java:124)
at org.apache.ignite.internal.processors.odbc.OdbcNioListener.onMessage(OdbcNioListener.java:33)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to execute DML statement [stmt=INSERT INTO valueclass (_key, numberone, numbertwo) VALUES ('testkey2', 10, 10), params=[]]
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1662)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:1659)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:1657)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2103)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:1657)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:806)
... 12 more
Caused by: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [testkey2]
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:250)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1885)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1630)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:480)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:440)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1162)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:651)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2345)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.putIfAbsent(GridCacheAdapter.java:2720)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.doInsert(DmlStatementsProcessor.java:829)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.executeUpdateStatement(DmlStatementsProcessor.java:369)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:164)
at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsTwoStep(DmlStatementsProcessor.java:198)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1659)
... 18 more
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to update keys on primary node.
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKeys(UpdateErrors.java:124)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKeys(GridNearAtomicUpdateResponse.java:342)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1883)
... 32 more
Suppressed: class org.apache.ignite.IgniteException: Runtime failure on search row: org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$SearchRow@6380d269
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1615)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:925)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:326)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1693)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2386)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1792)
... 32 more
Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException: Unknown pair [platformId=0, typeId=1262449073]
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:701)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1745)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:794)
at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:142)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:273)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:161)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:148)
at org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1730)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:555)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:4404)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4226)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:3966)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:2966)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2860)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invokeDown(BPlusTree.java:1696)
at org.apache.ignite.internal.processors.cache.database.tree.BPlusTree.invoke(BPlusTree.java:1585)
... 37 more
Caused by: java.lang.ClassNotFoundException: Unknown pair [platformId=0, typeId=1262449073]
at org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:385)
at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:335)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:692)
... 53 more
Any suggestion is really appreciated.
Please try turning on storeKeepBinary in the cache settings, like this; note the last line:
if (persistence) {
    // Configuring Cassandra's persistence
    DataSource dataSource = new DataSource();
    // ...here go the rest of your settings as they appear now...
    configuration.setWriteBehindEnabled(true);
    configuration.setStoreKeepBinary(true);
}
This setting forces Ignite to avoid binary deserialization when working with the underlying cache store.
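If you then read entries back through the Java API, you may also want to stay in binary form there. A minimal sketch (my addition, assuming the ignite instance and the test-cache name from the question):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

// Sketch: with storeKeepBinary(true), entries can be handled as BinaryObject,
// so ValueClass never has to be deserialized on the server side.
IgniteCache<String, BinaryObject> binCache =
        ignite.<String, ValueClass>cache("test-cache").withKeepBinary();

BinaryObject row = binCache.get("testkey2");
if (row != null) {
    Integer numberOne = row.field("numberOne"); // field access without deserialization
}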
The DataStream job:
public static final FunctionType DEVICE = new FunctionType("com.github.f1xman.era.anomalydetection.device", "DeviceFunction");

public static void main(String[] args) throws Exception {
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StatefulFunctionsConfig statefunConfig = StatefulFunctionsConfig.fromEnvironment(env);
    statefunConfig.setFactoryType(MessageFactoryType.WITH_KRYO_PAYLOADS);

    DataStreamSource<String> names = env.addSource(new NamesSourceFunction());
    DataStream<RoutableMessage> namesIngress = names.map(name -> RoutableMessageBuilder.builder()
            .withTargetAddress(DEVICE, name)
            .withMessageBody(name)
            .build());

    StatefulFunctionDataStreamBuilder.builder("example")
            .withDataStreamAsIngress(namesIngress)
            .withRequestReplyRemoteFunction(
                    requestReplyFunctionBuilder(DEVICE, URI.create("http://localhost:8080/statefun"))
            )
            .withConfiguration(statefunConfig)
            .build(env);

    env.execute("Flink Streaming Java API Skeleton");
}
When a String value is passed to .withMessageBody(...), the following exception occurs:
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
at akka.dispatch.OnComplete.internal(Future.scala:300)
at akka.dispatch.OnComplete.internal(Future.scala:297)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
at scala.concurrent.impl.CallbackRunnable.run$$$capture(Promise.scala:60)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala)
at org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:621)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:24)
at akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:23)
at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run$$$capture(Promise.scala:60)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
at java.util.concurrent.ForkJoinTask.doExec$$$capture(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:252)
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:242)
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:233)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:684)
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:444)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:316)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:314)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at akka.actor.Actor.aroundReceive(Actor.scala:537)
at akka.actor.Actor.aroundReceive$(Actor.scala:535)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
at akka.actor.ActorCell.receiveMessage$$$capture(ActorCell.scala:580)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala)
at akka.actor.ActorCell.invoke(ActorCell.scala:548)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
... 5 more
Caused by: org.apache.flink.statefun.flink.core.functions.StatefulFunctionInvocationException: An error occurred when attempting to invoke function FunctionType(com.github.f1xman.era.anomalydetection.device, DeviceFunction).
at org.apache.flink.statefun.flink.core.functions.StatefulFunction.receive(StatefulFunction.java:50)
at org.apache.flink.statefun.flink.core.functions.ReusableContext.apply(ReusableContext.java:74)
at org.apache.flink.statefun.flink.core.functions.LocalFunctionGroup.processNextEnvelope(LocalFunctionGroup.java:60)
at org.apache.flink.statefun.flink.core.functions.Reductions.processEnvelopes(Reductions.java:164)
at org.apache.flink.statefun.flink.core.functions.Reductions.apply(Reductions.java:149)
at org.apache.flink.statefun.flink.core.functions.FunctionGroupOperator.processElement(FunctionGroupOperator.java:90)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
at org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.sendDownstream(FeedbackUnionOperator.java:180)
at org.apache.flink.statefun.flink.core.feedback.FeedbackUnionOperator.processElement(FeedbackUnionOperator.java:86)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:496)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:809)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:761)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.flink.statefun.sdk.reqreply.generated.TypedValue
at org.apache.flink.statefun.flink.core.reqreply.RequestReplyFunction.invoke(RequestReplyFunction.java:118)
at org.apache.flink.statefun.flink.core.functions.StatefulFunction.receive(StatefulFunction.java:48)
... 25 more
Sending a String value to an embedded function works fine, though. The workaround I've found is to wrap the value in a TypedValue:
.withMessageBody(TypedValue.newBuilder()
.setValue(ByteString.copyFrom(name, StandardCharsets.UTF_8))
.setHasValue(true)
.setTypename("example/Name")
.build()
)
This approach requires the receiver function to unwrap the TypedValue and deserialize the ByteString, which looks too low-level for this kind of API. I believe this is the wrong way to use the Stateful Functions SDK for Flink DataStream integration. What is the correct way to implement interoperability between a Flink DataStream and remote Stateful Functions?
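For illustration, here is a sketch of what that unwrapping could look like on the remote-function side. This is my own sketch, not from the official examples; it assumes the function is built with the StateFun Java SDK request-reply API and reuses the example/Name typename from the workaround above:

import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.statefun.sdk.java.Context;
import org.apache.flink.statefun.sdk.java.StatefulFunction;
import org.apache.flink.statefun.sdk.java.TypeName;
import org.apache.flink.statefun.sdk.java.message.Message;
import org.apache.flink.statefun.sdk.java.types.SimpleType;
import org.apache.flink.statefun.sdk.java.types.Type;

public class DeviceFunction implements StatefulFunction {
    // Custom type matching the "example/Name" typename set on the ingress side.
    static final Type<String> NAME_TYPE = SimpleType.simpleImmutableTypeFrom(
            TypeName.typeNameFromString("example/Name"),
            s -> s.getBytes(StandardCharsets.UTF_8),
            bytes -> new String(bytes, StandardCharsets.UTF_8));

    @Override
    public CompletableFuture<Void> apply(Context context, Message message) {
        if (message.is(NAME_TYPE)) {
            String name = message.as(NAME_TYPE);
            // ...function logic for the received name...
        }
        return context.done();
    }
}

This at least keeps the ByteString handling in one place, but it still does not answer whether the wrapping on the DataStream side should be necessary at all.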
The job is inspired by the official examples.
How do I fix the following serialization error when using a DataStreamer with a StreamReceiver? I am guessing that it is not able to find the class when deserializing the RowStreamReceiver.
Error: SEVERE: Failure in Java callback class org.apache.ignite.IgniteException: Platform error:Apache.Ignite.Core.Binary.BinaryObjectException: Unknown pair [platformId=1, typeId=113114]
I'm using Apache Ignite 2.0, and I am trying to employ the same kind of code demonstrated in this test:
https://github.com/apache/ignite/blob/master/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Dataload/DataStreamerTest.cs#L436
I've tried adding the assembly to the configuration, but that didn't seem to help:
config.Assemblies.Add(typeof(RowStreamReceiver).Assembly.FullName);
Here's the relevant code:
DataStreamer:
using (var ds = m_ignite.GetDataStreamer<string, IBinaryObject>(CacheName)) {
    ds.AllowOverwrite = true;
    ds.Receiver = new RowStreamReceiver(); // If I comment this out, the error goes away

    Parallel.ForEach(rows.Select((r, i) => new KeyValuePair<long, string>(i, r)), r => {
        var pair = BuildRow(r.Key, r.Value);
        ds.AddData(pair);
    });
}
StreamReceiver:
[Serializable]
public class RowStreamReceiver : IStreamReceiver<string, IBinaryObject> {
    public void Receive(ICache<string, IBinaryObject> cache, ICollection<ICacheEntry<string, IBinaryObject>> entries) {
        var bin = cache.Ignite.GetBinary();
        cache.PutAll(entries.ToDictionary(x => x.Key, x => {
            var builder = bin.GetBuilder(x.Value);
            SetColumnFields(builder);
            return builder.Build();
        }));
    }

    private static void SetColumnFields(IBinaryObjectBuilder builder) {
        /* logic to set fields */
    }
}
Stack Trace:
Jul 07, 2017 11:25:22 AM java.util.logging.LogManager$RootLogger log
SEVERE: Failure in Java callback class org.apache.ignite.IgniteException: Platform error:Apache.Ignite.Core.Binary.BinaryObjectException: Unknown pair [platformId=1, typeId=113114] ---> Apache.Ignite.Core.Common.JavaException: class org.apache.ignite.binary.BinaryObjectException: Unknown pair [platformId=1, typeId=113114]
at org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:119)
at org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutStream(PlatformTargetProxyImpl.java:155)
at org.apache.ignite.internal.processors.platform.callback.PlatformCallbackUtils.inLongLongLongObjectOutLong(Native Method)
at org.apache.ignite.internal.processors.platform.callback.PlatformCallbackGateway.dataStreamerStreamReceiverInvoke(PlatformCallbackGateway.java:464)
at org.apache.ignite.internal.processors.platform.datastreamer.PlatformStreamReceiverImpl.receive(PlatformStreamReceiverImpl.java:100)
at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:137)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:382)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:301)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:58)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:88)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1257)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:885)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$2100(GridIoManager.java:114)
at org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:802)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Unknown pair [platformId=1, typeId=113114]
at org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:385)
at org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:113)
... 16 more
--- End of inner exception stack trace ---
at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.Error(Void* target, Int32 errType, SByte* errClsChars, Int32 errClsCharsLen, SByte* errMsgChars, Int32 errMsgCharsLen, SByte* stackTraceChars, Int32 stackTraceCharsLen, Void* errData, Int32 errDataLen)
at Apache.Ignite.Core.Impl.Unmanaged.IgniteJniNativeMethods.TargetInStreamOutStream(Void* ctx, Void* target, Int32 opType, Int64 inMemPtr, Int64 outMemPtr)
at Apache.Ignite.Core.Impl.PlatformTarget.DoOutInOp[TR](Int32 type, Action`1 outAction, Func`2 inAction)
at Apache.Ignite.Core.Impl.Binary.BinaryProcessor.GetType(Int32 id)
at Apache.Ignite.Core.Impl.Binary.Marshaller.GetDescriptor(Boolean userType, Int32 typeId, Boolean requiresType)
at Apache.Ignite.Core.Impl.Binary.BinaryReader.ReadFullObject[T](Int32 pos)
at Apache.Ignite.Core.Impl.Binary.BinaryReader.TryDeserialize[T](T& res)
at Apache.Ignite.Core.Impl.Binary.BinaryReader.Deserialize[T]()
at Apache.Ignite.Core.Impl.Binary.BinaryReader.ReadBinaryObject[T](Boolean doDetach)
at Apache.Ignite.Core.Impl.Binary.BinaryReader.TryDeserialize[T](T& res)
at Apache.Ignite.Core.Impl.Binary.BinaryReader.Deserialize[T]()
at Apache.Ignite.Core.Impl.Datastream.StreamReceiverHolder.InvokeReceiver[TK, TV](IStreamReceiver`2 receiver, Ignite grid, IUnmanagedTarget cache, IBinaryStream stream, Boolean keepBinary)
at Apache.Ignite.Core.Impl.Datastream.StreamReceiverHolder.Receive(Ignite grid, IUnmanagedTarget cache, IBinaryStream stream, Boolean keepBinary)
at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.DataStreamerStreamReceiverInvoke(Int64 memPtr, Int64 unused, Int64 unused1, Void* cache)
at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.InLongLongLongObjectOutLong(Void* target, Int32 type, Int64 val1, Int64 val2, Int64 val3, Void* arg)
at org.apache.ignite.internal.processors.platform.PlatformProcessorImpl.loggerLog(PlatformProcessorImpl.java:497)
at org.apache.ignite.internal.processors.platform.callback.PlatformCallbackUtils.inLongLongLongObjectOutLong(Native Method)
at org.apache.ignite.internal.processors.platform.callback.PlatformCallbackGateway.dataStreamerStreamReceiverInvoke(PlatformCallbackGateway.java:464)
at org.apache.ignite.internal.processors.platform.datastreamer.PlatformStreamReceiverImpl.receive(PlatformStreamReceiverImpl.java:100)
at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:137)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:382)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:301)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:58)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:88)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1257)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:885)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$2100(GridIoManager.java:114)
at org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:802)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
The "Unknown pair" issue is caused by not using GetDataStreamer correctly. My cache was originally created like this:
ignite.GetOrCreateCache<string, object>(cacheConfig)
so when getting the DataStreamer, I needed to use the same types and then use KeepWithBinary:
var ds = m_ignite.GetDataStreamer<string, object>(CacheName).KeepWithBinary<string, IBinaryObject>()
typeId=113114 corresponds to the Row class name. It looks like somewhere in RowStreamReceiver you try to deserialize such an object, but the class can't be found.
Can you attach a debugger to the server node and see where the exception is thrown?
I am trying to write a Parquet read/write class for a certain class type using DataFrames/Datasets.
Class schema:
class A {
    long count;
    List<B> listOfValues;
}

class B {
    String id;
    long count;
}
Code:
String path = "some path";
List<A> entries = somerandomAentries();
JavaRDD<A> rdd = sc.parallelize(entries, 1);
DataFrame df = sqlContext.createDataFrame(rdd, A.class);
df.write().parquet(path);
DataFrame newDataDF = sqlContext.read().parquet(path);
newDataDF.show();
When I try to run this, it throws an error. What am I missing here? Do I need to provide a schema for the whole class when creating the DataFrame?
Error:
Caused by: scala.MatchError: B(Id=abc, count=0) (of class B)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:255)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$ArrayConverter.toCatalystImpl(CatalystTypeConverters.scala:169)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$ArrayConverter.toCatalystImpl(CatalystTypeConverters.scala:153)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
at org.apache.spark.sql.SQLContext$$anonfun$org$apache$spark$sql$SQLContext$$beansToRows$1$$anonfun$apply$1.apply(SQLContext.scala:1358)
at org.apache.spark.sql.SQLContext$$anonfun$org$apache$spark$sql$SQLContext$$beansToRows$1$$anonfun$apply$1.apply(SQLContext.scala:1358)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.sql.SQLContext$$anonfun$org$apache$spark$sql$SQLContext$$beansToRows$1.apply(SQLContext.scala:1358)
at org.apache.spark.sql.SQLContext$$anonfun$org$apache$spark$sql$SQLContext$$beansToRows$1.apply(SQLContext.scala:1356)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:263)
... 8 more
You are getting this error because nested JavaBeans are not supported in Spark 1.6. Please see https://spark.apache.org/docs/1.6.0/sql-programming-guide.html#inferring-the-schema-using-reflection:
Currently, Spark SQL does not support JavaBeans that contain nested or contain complex types such as Lists or Arrays.
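As a possible workaround (a sketch of my own, not from the linked guide; it assumes the fields of A and B are accessible to the mapping code), you can declare the schema explicitly and convert the beans to Rows yourself, so Spark never has to reflect over the nested bean:

import java.util.stream.Collectors;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

// Explicit schemas for B and A, mirroring the class definitions above.
StructType bSchema = DataTypes.createStructType(new StructField[] {
    DataTypes.createStructField("id", DataTypes.StringType, true),
    DataTypes.createStructField("count", DataTypes.LongType, false)
});
StructType aSchema = DataTypes.createStructType(new StructField[] {
    DataTypes.createStructField("count", DataTypes.LongType, false),
    DataTypes.createStructField("listOfValues", DataTypes.createArrayType(bSchema), true)
});

// Convert each A (and its nested Bs) to Rows matching the schema.
JavaRDD<Row> rows = rdd.map(a -> RowFactory.create(
    a.count,
    a.listOfValues.stream()
        .map(b -> RowFactory.create(b.id, b.count))
        .collect(Collectors.toList())
));

DataFrame df = sqlContext.createDataFrame(rows, aSchema);
df.write().parquet(path);

(If upgrading is an option, later Spark versions relaxed this restriction on nested beans.)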
I am relatively new to Scala and to Couchbase, but I need to learn both fast. Recently, while trying to run a sample application using the Java Couchbase SDK through Scala, I ran into the following problem.
[cb-core-3-2] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Request Events RequestEvent{request=null}
java.lang.IndexOutOfBoundsException: Index: 1854, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at com.couchbase.client.core.config.DefaultCouchbaseBucketConfig.nodeIndexForMaster(DefaultCouchbaseBucketConfig.java:135)
at com.couchbase.client.core.node.locate.KeyValueLocator.calculateNodeId(KeyValueLocator.java:165)
at com.couchbase.client.core.node.locate.KeyValueLocator.locateForCouchbaseBucket(KeyValueLocator.java:124)
at com.couchbase.client.core.node.locate.KeyValueLocator.locateAndDispatch(KeyValueLocator.java:84)
at com.couchbase.client.core.RequestHandler.dispatchRequest(RequestHandler.java:219)
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:176)
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:71)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
[error] (run-main-0) java.lang.RuntimeException: java.util.concurrent.TimeoutException
java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:71)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:354)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:349)
at App$.main(Application.scala:28)
at App.main(Application.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:71)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:354)
at com.couchbase.client.java.CouchbaseBucket.upsert(CouchbaseBucket.java:349)
at App$.main(Application.scala:28)
at App.main(Application.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
[trace] Stack trace suppressed: run last compile:run for the full output.
[cb-core-3-1] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Response Events null
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
at com.couchbase.client.deps.com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
at com.couchbase.client.deps.com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:124)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
[cb-core-3-2] WARN com.couchbase.client.core.CouchbaseCore - Exception while Handling Request Events RequestEvent{request=null}
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
at com.couchbase.client.deps.com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
at com.couchbase.client.deps.com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:124)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
And this is the code that generated the error:
import com.couchbase.client.java._
import com.couchbase.client.core.time._
import com.couchbase.client.java.document._
import com.couchbase.client.java.document.json._
import com.couchbase.client.java.query._
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment
object App {
  def main(args: Array[String]): Unit = {
    // Initialize the connection (connects to localhost)
    val env = DefaultCouchbaseEnvironment.builder()
      .connectTimeout(5000)
      .bootstrapCarrierEnabled(false)
      .build()
    val cluster = CouchbaseCluster.create(env, "127.0.0.1")

    // Opens the "default" bucket
    val bucket = cluster.openBucket("default")

    // Create a JSON document
    val user: JsonObject = JsonObject.create()
      .put("firstname", "Walter")
      .put("lastname", "White")
      .put("job", "chemistry teacher")
      .put("age", 50)
    val stored: JsonDocument = bucket.upsert(JsonDocument.create("walter", user))

    // Load the document and print it (prints content and metadata of the stored document)
    println(bucket.get("walter"))

    // Just close a single bucket
    bucket.close()
    // Disconnect and close all buckets
    cluster.disconnect()
  }
}
EDIT:
I am a little new to this, but here is what I managed to get out of the debugger:
this = {DefaultCouchbaseBucketConfig#2385} "DefaultCouchbaseBucketConfig{name='testBucket', locator=VBUCKET, uri='/pools/default/buckets/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4', streamingUri='/pools/default/bucketsStreaming/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4', nodeInfo=[NodeInfo{, hostname=localhost/127.0.0.1, configPort=0, directServices={CONFIG=8091, QUERY=8093, VIEW=8092, BINARY=11210}, sslServices={CONFIG=18091, QUERY=18093, VIEW=18092, BINARY=11207}}], partitionInfo=PartitionInfo{numberOfReplicas=1, partitionHosts=[localhost], partitions=[], tainted=false}, tainted=false, rev=23}"
partitionInfo = {CouchbasePartitionInfo#2391} "PartitionInfo{numberOfReplicas=1, partitionHosts=[localhost], partitions=[], tainted=false}"
numberOfReplicas = 1
partitionHosts = {String[1]#2428}
partitions = {ArrayList#2390} size = 0
forwardPartitions = null
tainted = false
partitionHosts = {ArrayList#2396} size = 1
0 = {DefaultNodeInfo#2425} "NodeInfo{, hostname=localhost/127.0.0.1, configPort=8091, directServices={CONFIG=8091, BINARY=11210, VIEW=8092}, sslServices={}}"
nodesWithPrimaryPartitions = {HashSet#2397} size = 0
tainted = false
rev = 23
name = "testBucket"
value = {char[10]#2422}
hash = 1241531676
password = ""
value = {char[0]#2421}
hash = 0
locator = {BucketNodeLocator#2400} "VBUCKET"
name = "VBUCKET"
ordinal = 0
uri = "/pools/default/buckets/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4"
value = {char[78]#2411}
hash = 0
streamingUri = "/pools/default/bucketsStreaming/testBucket?bucket_uuid=54c1356c57dea1d640837c678f87d5e4"
value = {char[87]#2420}
hash = 0
nodeInfo = {ArrayList#2403} size = 1
0 = {DefaultNodeInfo#2408} "NodeInfo{, hostname=localhost/127.0.0.1, configPort=0, directServices={CONFIG=8091, QUERY=8093, VIEW=8092, BINARY=11210}, sslServices={CONFIG=18091, QUERY=18093, VIEW=18092, BINARY=11207}}"
enabledServices = 15
partition = 7620
useFastForward = false
EDIT 2:
I had a look at the Couchbase Console log and I am constantly getting the following:
Service 'memcached' exited with status 1. Restarting. Messages: Failed to open library "/Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.so": dlopen(/Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.dylib, 6): image not found
Unable to load extension /Users/luishreis/Downloads/couchbase-server-enterprise_4/Couchbase Server.app/Contents/Resources/couchbase-core/lib/memcached/stdin_term_handler.so using the config
Any help on the matter would be appreciated.
Many thanks.
The error I am getting is as follows:
" Aug 25, 2015 1:47:41 PM com.orientechnologies.common.log.OLogManager log
INFO: OrientDB auto-config DISKCACHE=4,161MB (heap=1,776MB os=7,985MB disk=416,444MB)
Aug 25, 2015 1:47:41 PM com.orientechnologies.common.log.OLogManager log
WARNING: segment file 'database.ocf' was not closed correctly last time
Exception in thread "main" com.orientechnologies.common.exception.OException: Error on creation of shared resource
at com.orientechnologies.common.concur.resource.OSharedContainerImpl.getResource(OSharedContainerImpl.java:55)
at com.orientechnologies.orient.core.metadata.OMetadataDefault.init(OMetadataDefault.java:175)
at com.orientechnologies.orient.core.metadata.OMetadataDefault.load(OMetadataDefault.java:77)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.initAtFirstOpen(ODatabaseDocumentTx.java:2633)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:254)
at arss.db.main(db.java:17)
Caused by: com.orientechnologies.orient.core.exception.ORecordNotFoundException: The record with id '#0:1' not found
at com.orientechnologies.orient.core.record.ORecordAbstract.reload(ORecordAbstract.java:266)
at com.orientechnologies.orient.core.record.impl.ODocument.reload(ODocument.java:665)
at com.orientechnologies.orient.core.type.ODocumentWrapper.reload(ODocumentWrapper.java:91)
at com.orientechnologies.orient.core.type.ODocumentWrapperNoClass.reload(ODocumentWrapperNoClass.java:73)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.load(OSchemaShared.java:786)
at com.orientechnologies.orient.core.metadata.OMetadataDefault$1.call(OMetadataDefault.java:180)
at com.orientechnologies.orient.core.metadata.OMetadataDefault$1.call(OMetadataDefault.java:175)
at com.orientechnologies.common.concur.resource.OSharedContainerImpl.getResource(OSharedContainerImpl.java:53)
... 5 more
Caused by: com.orientechnologies.orient.core.exception.ODatabaseException: Error on retrieving record #0:1 (cluster: internal)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeReadRecord(ODatabaseDocumentTx.java:1605)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.loadRecord(OTransactionNoTx.java:80)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.reload(ODatabaseDocumentTx.java:1453)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.reload(ODatabaseDocumentTx.java:117)
at com.orientechnologies.orient.core.record.ORecordAbstract.reload(ORecordAbstract.java:260)
... 12 more
Caused by: java.lang.NoSuchMethodError: com.orientechnologies.common.concur.lock.ONewLockManager.tryAcquireSharedLock(Ljava/lang/Object;J)Z
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.acquireReadLock(OAbstractPaginatedStorage.java:1301)
at com.orientechnologies.orient.core.tx.OTransactionAbstract.lockRecord(OTransactionAbstract.java:120)
at com.orientechnologies.orient.core.id.ORecordId.lock(ORecordId.java:282)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.lockRecord(OAbstractPaginatedStorage.java:1784)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.readRecord(OAbstractPaginatedStorage.java:1424)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.readRecord(OAbstractPaginatedStorage.java:697)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeReadRecord(ODatabaseDocumentTx.java:1572)
... 16 more"
The code that I use is:
package arss;
import com.orientechnologies.orient.core.config.OGlobalConfiguration;
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;
import com.orientechnologies.orient.core.serialization.serializer.record.ORecordSerializerFactory;
import com.orientechnologies.orient.core.serialization.serializer.record.binary.ORecordSerializerBinary;
import com.orientechnologies.orient.core.serialization.serializer.record.string.ORecordSerializerSchemaAware2CSV;
public class db {
    public static void main(String[] args) {
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:C:/AR/AR/Newfolder/orientdb-community-2.0.3_S/databases/GratefulDeadConcerts").open("admin", "admin");
        try {
            // CREATE A NEW DOCUMENT AND FILL IT
            ODocument doc = new ODocument("Person");
            doc.field("name", "Luke");
            doc.field("surname", "Skywalker");
            doc.field("city", new ODocument("City").field("name", "Rome").field("country", "Italy"));
            // SAVE THE DOCUMENT
            doc.save();
        } finally {
            // Close the database once, in the finally block
            db.close();
        }
    }
}
I am not sure whether you need .flush with ODocuments; you should look that up (or whether save is equivalent and that's okay).
From these two error lines:
WARNING: segment file 'database.ocf' was not closed correctly last time
and
com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeReadRecord(ODatabaseDocumentTx.java:1572) ... 16 more
I think it has something to do with the database.ocf file itself. I don't know if this helps, but try opening it manually, preferably with/without admin, and closing it again. ("Have you tried turning it off and on again?")
If there is still an error, check whether it is a different one.