ClassCastException with Stacktrace Hazelcast version 4.2.5 using ReplicatedMap - java

Using Hazelcast version 4.2.5 in a webapp deployed on Tomcat on Kubernetes. We're frequently (every 5 seconds) seeing a ClassCastException with a stack trace in the application logs.
Here's the ClassCastException:
27-Oct-2022 22:57:56.357 WARNING [hz.rogueUsers.cached.thread-2] com.hazelcast.internal.metrics.impl.MetricsCollectionCycle.null Collecting metrics from source com.hazelcast.replicatedmap.impl.ReplicatedMapService failed
java.lang.ClassCastException: class java.lang.String cannot be cast to class com.hazelcast.internal.serialization.impl.HeapData (java.lang.String is in module java.base of loader 'bootstrap'; com.hazelcast.internal.serialization.impl.HeapData is in unnamed module of loader org.apache.catalina.loader.ParallelWebappClassLoader @2f04993d)
    at com.hazelcast.replicatedmap.impl.LocalReplicatedMapStatsProvider.getLocalReplicatedMapStats(LocalReplicatedMapStatsProvider.java:85)
    at com.hazelcast.replicatedmap.impl.ReplicatedMapService.getLocalReplicatedMapStats(ReplicatedMapService.java:197)
    at com.hazelcast.replicatedmap.impl.ReplicatedMapService.getStats(ReplicatedMapService.java:357)
    at com.hazelcast.replicatedmap.impl.ReplicatedMapService.provideDynamicMetrics(ReplicatedMapService.java:387)
    at com.hazelcast.internal.metrics.impl.MetricsCollectionCycle.collectDynamicMetrics(MetricsCollectionCycle.java:88)
    at com.hazelcast.internal.metrics.impl.MetricsRegistryImpl.collect(MetricsRegistryImpl.java:316)
    at com.hazelcast.internal.metrics.impl.MetricsService.collectMetrics(MetricsService.java:160)
    at com.hazelcast.internal.metrics.impl.MetricsService.collectMetrics(MetricsService.java:154)
    at com.hazelcast.spi.impl.executionservice.impl.DelegateAndSkipOnConcurrentExecutionDecorator$DelegateDecorator.run(DelegateAndSkipOnConcurrentExecutionDecorator.java:77)
    at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
    at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
    at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
Here's how we're setting up Hazelcast.
private static HazelcastInstance setupHazelcastConfig() {
    Config config = new Config();
    config.setInstanceName("rogueUsers");

    NetworkConfig network = config.getNetworkConfig();
    network.setPort(5701).setPortCount(20);
    network.setPortAutoIncrement(true);

    JoinConfig join = network.getJoin();
    join.getMulticastConfig().setEnabled(true);
    // join.getTcpIpConfig()
    //     .setEnabled(true);

    HazelcastInstance hz = Hazelcast.getOrCreateHazelcastInstance(config);

    ReplicatedMapConfig replicatedMapConfig = config.getReplicatedMapConfig("rogueUsers");
    replicatedMapConfig.setInMemoryFormat(InMemoryFormat.BINARY);
    replicatedMapConfig.setAsyncFillup(true);
    replicatedMapConfig.setStatisticsEnabled(true);
    replicatedMapConfig.setSplitBrainProtectionName("splitbrainprotection-name");

    ReplicatedMap<String, String> map = hz.getReplicatedMap("rogueUsers");
    map.addEntryListener(new RogueEntryListener());
    return hz;
}
Is this a configuration issue? How do I fix this?
Thanks very much.

The exception is being thrown from the following line:
if (isBinary) {
    memoryUsage += ((HeapData) record.getValueInternal()).getHeapCost(); // <-- exception thrown here
}
which is line 85 of the com.hazelcast.replicatedmap.impl.LocalReplicatedMapStatsProvider class (the innermost frame of the stack trace above). The condition being checked is the following:
boolean isBinary = (replicatedMapConfig.getInMemoryFormat() == InMemoryFormat.BINARY);
So the failure is tied to the in-memory format used to store the data (in the config above you chose BINARY).
However, the values actually stored in the map do not match that format, because your setup code declares ReplicatedMap<String, String> map = hz.getReplicatedMap("rogueUsers"); and puts plain String values into it.
From the Javadoc of the com.hazelcast.internal.serialization.Data class:
Data is basic unit of serialization. It stores binary form of an object serialized by SerializationService.toData(Object).
Therefore, try editing your config to this:
ReplicatedMap<Data, Data> map = hz.getReplicatedMap("rogueUsers");
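As a side note, here is a minimal sketch of an alternative (an assumption on my part, not something confirmed by the Hazelcast documentation): because a map's configuration is only picked up when the map is created, registering the ReplicatedMapConfig on the Config object before calling getOrCreateHazelcastInstance should make the BINARY in-memory format actually apply to "rogueUsers", so the records really are stored as HeapData:
private static HazelcastInstance setupHazelcastConfig() {
    Config config = new Config();
    config.setInstanceName("rogueUsers");

    // Register the map config BEFORE the instance (and therefore the map) is created.
    ReplicatedMapConfig replicatedMapConfig = config.getReplicatedMapConfig("rogueUsers");
    replicatedMapConfig.setInMemoryFormat(InMemoryFormat.BINARY);
    replicatedMapConfig.setAsyncFillup(true);
    replicatedMapConfig.setStatisticsEnabled(true);

    HazelcastInstance hz = Hazelcast.getOrCreateHazelcastInstance(config);

    // The map is created here and should now use the BINARY in-memory format.
    ReplicatedMap<String, String> map = hz.getReplicatedMap("rogueUsers");
    return hz;
}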

Related

When doing a redeploy of JBoss WAR with Apache Ignite, Failed to marshal custom event: StartRoutineDiscoveryMessage

I am trying to make it so I can redeploy a JBoss 7.1.0 cluster with a WAR that uses Apache Ignite.
I am starting the cache like this:
System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");
igniteConfiguration = new IgniteConfiguration();
int failureDetectionTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT", "60000"));
igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
String igniteVmIps = getProperty("IGNITE_VM_IPS");
List<String> addresses = Arrays.asList("127.0.0.1:47500");
if (StringUtils.isNotBlank(igniteVmIps)) {
    addresses = Arrays.asList(igniteVmIps.split(","));
}
int networkTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_NETWORK_TIMEOUT", "60000"));
boolean failureDetectionTimeoutEnabled = Boolean.parseBoolean(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT_ENABLED", "true"));
int tcpDiscoveryLocalPort = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT", "47500"));
int tcpDiscoveryLocalPortRange = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT_RANGE", "0"));
TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
tcpDiscoverySpi.setLocalPort(tcpDiscoveryLocalPort);
tcpDiscoverySpi.setLocalPortRange(tcpDiscoveryLocalPortRange);
tcpDiscoverySpi.setNetworkTimeout(networkTimeout);
tcpDiscoverySpi.failureDetectionTimeoutEnabled(failureDetectionTimeoutEnabled);
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(addresses);
tcpDiscoverySpi.setIpFinder(ipFinder);
igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
Ignite ignite = Ignition.start(igniteConfiguration);
ignite.cluster().active(true);
Then I am stopping the cache when the application undeploys:
ignite.close();
When I try to redeploy, I get the following error during initialization.
org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.internal.cluster.ClusterGroupAdapter$CachesFilter@7385a997, clsName=null, depInfo=null, hnd=org.apache.ignite.internal.GridEventConsumeHandler@2aec6952, bufSize=1, interval=0, autoUnsubscribe=true], keepBinary=false, deserEx=null, routineId=bbe16e8e-2820-4ba0-a958-d5f644498ba2]
If I fully restart the server, it starts up fine.
Am I missing some magic in the shutdown process?
I see what I did wrong, and it was code I omitted from the ticket.
ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey())).remoteListen(locLsnr, rmtLsnr,
EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
When this listener registration ran a second time on redeploy, it caused that strange error.
I put a try/catch-and-ignore around it for now and things seem to be OK.
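For reference, a minimal sketch of that workaround (the names ignite, cacheConfig, locLsnr and rmtLsnr come from the snippet above; catching IgniteException here is my assumption about the failure type):
try {
    ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey()))
          .remoteListen(locLsnr, rmtLsnr,
                  EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
} catch (IgniteException e) {
    // The listener is presumably still registered from the previous deployment; ignore.
}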

How to get cached data from Apache Ignite

I am new to Apache Ignite. I am trying to populate a cache and read from it. I have created two Java projects: one populates Apache Ignite caches and the other prints the cached data. However, the project that prints the cache gives an error.
Here is the code that I use to populate the caches:
public void run(String... arg0) throws Exception {
    try (Ignite ignite = Ignition.start("ignite.xml")) {
        int iteration = 0;
        while (true) {
            iteration++;
            IgniteCache<Object, Object> cache = ignite.getOrCreateCache("test cache " + iteration);
            System.out.println("" + 100);
            System.out.println("Caching started for iteration " + iteration);
            printMemory();
            for (int i = 0; i < 100; i++) {
                cache.put(i, new CacheObject(i, "Cached integer " + i));
                System.out.println(i);
                Thread.sleep(100);
            }
            //cache.destroy();
            System.out.println("**************************************" + cache.size());
        }
    }
}
This is the code that I use to print the cached data:
Ignition.setClientMode(true);
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true);
TcpDiscoveryMulticastIpFinder discoveryMulticastIpFinder = new TcpDiscoveryMulticastIpFinder();
Set<String> set = new HashSet<>();
set.add("serverhost:47500..47509");
discoveryMulticastIpFinder.setAddresses(set);
TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setIpFinder(discoveryMulticastIpFinder);
cfg.setDiscoverySpi(discoverySpi);
cfg.setPeerClassLoadingEnabled(true);
cfg.setIncludeEventTypes(EVTS_CACHE);
Ignite ignite = Ignition.start(cfg);
System.out.println("***************************************************\n"+ignite.cacheNames()+"\n****************************");
CacheConfiguration<String, BinaryObject> cacheConfiguration = new CacheConfiguration<>(CACHE_NAME);
IgniteCache<String, BinaryObject> cache = ignite.getOrCreateCache(cacheConfiguration).withKeepBinary();
These two pieces of code are in different projects, so while the first project is populating the cache, I am trying to reach the cache from the other project.
The error below occurs when I try to reach the cached data. This is the error returned by the code that reads the cache and prints it:
Aug 01, 2018 9:25:25 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, reconCnt=10, maxAckTimeout=600000, forceSrvMode=false, clientReconnectDisabled=false]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:258)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:660)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:917)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1688)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1547)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1003)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:534)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:515)
at org.apache.ignite.Ignition.start(Ignition.java:322)
at test.App.main(App.java:76)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node's marshaller differs from remote node's marshaller (to make sure all nodes in topology have identical marshaller, configure marshaller explicitly in configuration) [locMarshaller=org.apache.ignite.internal.binary.BinaryMarshaller, rmtMarshaller=org.apache.ignite.marshaller.optimized.OptimizedMarshaller, locNodeAddrs=[192.168.1.71/0:0:0:0:0:0:0:1%lo, /127.0.0.1, /192.168.1.71], locPort=0, rmtNodeAddr=[192.168.1.71/0:0:0:0:0:0:0:1%lo, /127.0.0.1, /192.168.1.71], locNodeId=b41f0d09-5a7f-424b-b3b5-420a5e1acdf6, rmtNodeId=ff436f20-5d4b-477e-aade-837d59b1eaa7]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1647)
at org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1460)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node's marshaller differs from remote node's marshaller (to make sure all nodes in topology have identical marshaller, configure marshaller explicitly in configuration) [locMarshaller=org.apache.ignite.internal.binary.BinaryMarshaller, rmtMarshaller=org.apache.ignite.marshaller.optimized.OptimizedMarshaller
It looks like you've explicitly set a marshaller in ignite.xml. Please check the <property name="marshaller"> element in the XML config file. You should have the same marshaller configured for all nodes in the cluster, or they won't be able to communicate.
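For illustration, a sketch of the Java-side equivalent on the client. This assumes the server's ignite.xml declares OptimizedMarshaller, as the rmtMarshaller value in the error message suggests; the safest option is simply to use the same marshaller setting (or none at all) on every node:
IgniteConfiguration cfg = new IgniteConfiguration();
// Match whatever the server's ignite.xml declares under <property name="marshaller">;
// OptimizedMarshaller is assumed here based on the error message above.
cfg.setMarshaller(new org.apache.ignite.marshaller.optimized.OptimizedMarshaller());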

Kafka Streamer: Issue with user defined 'Serdes'

I am using Confluent 3.2.1 as a Kafka streamer. I am trying to aggregate my KGroupedStream<String, MyClass1> into KTable<Windowed<String>, MsgAggr>. For the aggregation I am also using TimeWindows.of(TimeUnit.SECONDS.toMillis(5)), and I am passing a user-defined Serde as an argument to the aggregation. The code for the user-defined Serde is:
Map<String, Object> serdeProps = new HashMap<>();
final Serializer<MsgAggr> pageViewSerializer = new JsonPOJOSerializer<>();
serdeProps.put("JsonPOJOClass", MsgAggr.class);
pageViewSerializer.configure(serdeProps, false);
final Deserializer<MsgAggr> pageViewDeserializer = new JsonPOJODeserializer<>();
serdeProps.put("JsonPOJOClass", MsgAggr.class);
pageViewDeserializer.configure(serdeProps, false);
final Serde<MsgAggr> pageViewSerde = Serdes.serdeFrom(pageViewSerializer, pageViewDeserializer);
The code for the streaming part is:
KGroupedStream<String, MyClass1> msg_grp = message
.groupByKey();
KTable<Windowed<String>,MsgAggr> msg_win = msg_grp
//.reduce(new Reduced(), arg1, arg2);
.aggregate(new Init(),
new Aggr(),
TimeWindows.of(TimeUnit.SECONDS.toMillis(5)),
pageViewSerde,
"MySample_out");
When I run the code, I get the following errors:
[2017-05-23 18:16:45,648] ERROR stream-thread [StreamThread-1] Streams application error during processing: (org.apache.kafka.streams.processor.internals.StreamThread:249)
java.lang.ClassCastException: my.kafka.strm.MyClass1 cannot be cast to java.lang.String
at org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:24)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:64)
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:82)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
at org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
at org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:43)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:66)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:180)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:436)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
Exception in thread "StreamThread-1" java.lang.ClassCastException: my.kafka.strm.MyClass1 cannot be cast to java.lang.String
at org.apache.kafka.common.serialization.StringSerializer.serialize(StringSerializer.java:24)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:64)
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:82)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
at org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
at org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:43)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:66)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:180)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:436)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
The problem is with message.groupByKey(): it is using the String Serde for your custom class MyClass1. Please implement a custom serializer and deserializer for MyClass1 and use them in the overloaded version of groupByKey - https://kafka.apache.org/0102/javadoc/org/apache/kafka/streams/kstream/KStream.html#groupByKey(org.apache.kafka.common.serialization.Serde,%20org.apache.kafka.common.serialization.Serde)
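A short sketch of what that could look like, reusing the JsonPOJOSerializer/JsonPOJODeserializer helpers from the question (the exact serializer implementation is up to you; this is just one possibility):
Map<String, Object> props = new HashMap<>();
props.put("JsonPOJOClass", MyClass1.class);

final Serializer<MyClass1> myClass1Serializer = new JsonPOJOSerializer<>();
myClass1Serializer.configure(props, false);
final Deserializer<MyClass1> myClass1Deserializer = new JsonPOJODeserializer<>();
myClass1Deserializer.configure(props, false);
final Serde<MyClass1> myClass1Serde = Serdes.serdeFrom(myClass1Serializer, myClass1Deserializer);

// Tell groupByKey which serdes to use for the repartition topic instead of the defaults.
KGroupedStream<String, MyClass1> msg_grp = message
        .groupByKey(Serdes.String(), myClass1Serde);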

How to read and write a custom class from a Parquet file

I am trying to write a Parquet read/write class for a certain class type using DataFrames/Datasets.
class schema:
class A {
    long count;
    List<B> listOfValues;
}
class B {
    String id;
    long count;
}
Code:
String path = "some path";
List<A> entries = somerandomAentries();
JavaRDD<A> rdd = sc.parallelize(entries, 1);
DataFrame df = sqlContext.createDataFrame(rdd, A.class);
df.write().parquet(path);
DataFrame newDataDF = sqlContext.read().parquet(path);
newDataDF.show();
When I try to run this, it throws an error. What am I missing here? Do I need to provide a schema for the whole class while creating the DataFrame?
error:
Caused by: scala.MatchError: B(Id=abc, count=0) (of class B)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:255)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$ArrayConverter.toCatalystImpl(CatalystTypeConverters.scala:169)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$ArrayConverter.toCatalystImpl(CatalystTypeConverters.scala:153)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
at org.apache.spark.sql.SQLContext$$anonfun$org$apache$spark$sql$SQLContext$$beansToRows$1$$anonfun$apply$1.apply(SQLContext.scala:1358)
at org.apache.spark.sql.SQLContext$$anonfun$org$apache$spark$sql$SQLContext$$beansToRows$1$$anonfun$apply$1.apply(SQLContext.scala:1358)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.sql.SQLContext$$anonfun$org$apache$spark$sql$SQLContext$$beansToRows$1.apply(SQLContext.scala:1358)
at org.apache.spark.sql.SQLContext$$anonfun$org$apache$spark$sql$SQLContext$$beansToRows$1.apply(SQLContext.scala:1356)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:263)
... 8 more
You are getting the error because nested JavaBeans are not supported in Spark 1.6. Please see https://spark.apache.org/docs/1.6.0/sql-programming-guide.html#inferring-the-schema-using-reflection
Currently, Spark SQL does not support JavaBeans that contain nested or contain complex types such as Lists or Arrays.
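One possible workaround, sketched under the assumption that you stay on Spark 1.6 and keep the A/B classes from the question (and that their fields are accessible or have getters): describe the nested structure yourself with an explicit StructType and map each bean to a Row, instead of relying on JavaBean reflection. It reuses rdd, sqlContext and path from the question and needs org.apache.spark.sql.Row, RowFactory, DataTypes and java.util.{Arrays, ArrayList, List}.
// Explicit schemas matching class B and class A from the question.
StructType schemaB = DataTypes.createStructType(Arrays.asList(
        DataTypes.createStructField("id", DataTypes.StringType, true),
        DataTypes.createStructField("count", DataTypes.LongType, false)));
StructType schemaA = DataTypes.createStructType(Arrays.asList(
        DataTypes.createStructField("count", DataTypes.LongType, false),
        DataTypes.createStructField("listOfValues", DataTypes.createArrayType(schemaB), true)));

// Convert each A into a Row whose nested list becomes an array of Rows.
JavaRDD<Row> rows = rdd.map(a -> {
    List<Row> bRows = new ArrayList<>();
    for (B b : a.listOfValues) {
        bRows.add(RowFactory.create(b.id, b.count));
    }
    return RowFactory.create(a.count, bRows.toArray(new Row[0]));
});

DataFrame df = sqlContext.createDataFrame(rows, schemaA);
df.write().parquet(path);
sqlContext.read().parquet(path).show();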

Unknown pdx type=4

My distributed Geode system was working well before, but recently it is not, and I don't know whether it's related to the company's power cut last night. It seems that all the Geode data persisted earlier can no longer be deserialized.
Here's the error information:
Caused by: java.lang.IllegalStateException: Unknown pdx type=4
at com.gemstone.gemfire.internal.InternalDataSerializer.readPdxSerializable(InternalDataSerializer.java:3162)
at com.gemstone.gemfire.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2979)
at com.gemstone.gemfire.DataSerializer.readObject(DataSerializer.java:3210)
at com.gemstone.gemfire.internal.util.BlobHelper.deserializeBlob(BlobHelper.java:101)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:1554)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:1546)
at com.gemstone.gemfire.internal.cache.PreferBytesCachedDeserializable.getDeserializedValue(PreferBytesCachedDeserializable.java:67)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.getOldValue(EntryEventImpl.java:723)
at com.gemstone.gemfire.internal.cache.LocalRegion.validatedDestroy(LocalRegion.java:1146)
at com.gemstone.gemfire.internal.cache.LocalRegion.destroy(LocalRegion.java:1130)
at com.gemstone.gemfire.internal.cache.AbstractRegion.destroy(AbstractRegion.java:315)
at com.gemstone.gemfire.internal.cache.LocalRegion.remove(LocalRegion.java:9372)
at com.igola.datahub.wwl.geode.WWLResultsDAO.removeByKey(WWLResultsDAO.java:276)
My project is built on the Play Framework, and my configuration of the Geode cache and region is like this:
cache.setIsServer(true);
diskStorage = configuration.getString("geode.storage.name");
String fileStorage = configuration.getString("geode.storage.path");
cache.createDiskStoreFactory()
.setDiskDirs(new File[]{new File(fileStorage)})
.setDiskUsageWarningPercentage(0.8f)
.setAutoCompact(true)
.create(diskStorage);
geodeCache.getCache()
.<DataKey, When2GoData>createRegionFactory(PARTITION_REDUNDANT_PERSISTENT_OVERFLOW)
.setStatisticsEnabled(true)
.setEntryIdleTimeout(new ExpirationAttributes(TIMEOUT_LONG, ExpirationAction.DESTROY))
.setDiskStoreName(geodeCache.getDiskStorage())
.setPartitionAttributes(new PartitionAttributesFactory<>()
.setRedundantCopies(REDUNDANT_COPIES)
.setPartitionResolver(new DataKey())
.create())
.create("xxx");
cache = new CacheFactory()
.set("locators", configuration.getString("geode.locator"))
.set("name", configuration.getString("geode.name")+ "-"+ uuid)
.set("mcast-port", "0")
.set("log-level", "error")
.setPdxPersistent(true)
.setPdxReadSerialized(true)
.create();
Here's the relevant code from the readPdxSerializable method:
PdxType pdxType = gfc.getPdxRegistry().getType(typeId);
if (logger.isTraceEnabled(LogMarker.SERIALIZER)) {
    logger.trace(LogMarker.SERIALIZER, "readPdxSerializable pdxType={}", pdxType);
}
if (pdxType == null) {
    throw new IllegalStateException("Unknown pdx type=" + typeId);
}
So it looks like you have something in your cache that you can no longer deserialize, due (I think) to the fact that its type is no longer registered in the PDX type registry.
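If the registry was lost because the PDX metadata wasn't kept in a persistent disk store, one thing to consider for the future is pointing the PDX registry at the same named, persisted disk store as the data. This is a sketch based on an assumption about the cause; it will not recover type ids that are already gone, and diskStorage is the disk store name created in your configuration above:
cache = new CacheFactory()
        .set("locators", configuration.getString("geode.locator"))
        .set("mcast-port", "0")
        .setPdxPersistent(true)
        // Keep the PDX type registry in the named, persisted disk store so the
        // registered type ids survive a full cluster restart or power loss.
        .setPdxDiskStore(diskStorage)
        .setPdxReadSerialized(true)
        .create();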
