I am new to Apache Ignite. I am trying to populate a cache and read from it. I have created two Java projects: one populates Apache Ignite caches, and the other prints the cached data. However, the printing project fails with an error.
Here is the code I use to populate the caches:
public void run(String... arg0) throws Exception
{
    try (Ignite ignite = Ignition.start("ignite.xml"))
    {
        int iteration = 0;
        while (true)
        {
            iteration++;
            IgniteCache<Object, Object> cache = ignite.getOrCreateCache("test cache " + iteration);
            System.out.println("" + 100);
            System.out.println("Caching started for iteration " + iteration);
            printMemory();
            for (int i = 0; i < 100; i++)
            {
                cache.put(i, new CacheObject(i, "Cached integer " + i));
                System.out.println(i);
                Thread.sleep(100);
            }
            //cache.destroy();
            System.out.println("**************************************" + cache.size());
        }
    }
}
This is the code I use to print the cached data:
Ignition.setClientMode(true);
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true);
TcpDiscoveryMulticastIpFinder discoveryMulticastIpFinder = new TcpDiscoveryMulticastIpFinder();
Set<String> set = new HashSet<>();
set.add("serverhost:47500..47509");
discoveryMulticastIpFinder.setAddresses(set);
TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setIpFinder(discoveryMulticastIpFinder);
cfg.setDiscoverySpi(discoverySpi);
cfg.setIncludeEventTypes(EVTS_CACHE);
Ignite ignite = Ignition.start(cfg);
System.out.println("***************************************************\n" + ignite.cacheNames() + "\n****************************");
CacheConfiguration<String, BinaryObject> cacheConfiguration = new CacheConfiguration<>(CACHE_NAME);
IgniteCache<String, BinaryObject> cache = ignite.getOrCreateCache(cacheConfiguration).withKeepBinary();
These two pieces of code are in different projects, so while the first project is populating the cache, I try to reach the cache from the other project.
The error below occurs when I try to reach the cached data. This is the error returned by the code that reads the cache and prints it:
Aug 01, 2018 9:25:25 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, reconCnt=10, maxAckTimeout=600000, forceSrvMode=false, clientReconnectDisabled=false]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:258)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:660)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:917)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1688)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1547)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1003)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:534)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:515)
at org.apache.ignite.Ignition.start(Ignition.java:322)
at test.App.main(App.java:76)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node's marshaller differs from remote node's marshaller (to make sure all nodes in topology have identical marshaller, configure marshaller explicitly in configuration) [locMarshaller=org.apache.ignite.internal.binary.BinaryMarshaller, rmtMarshaller=org.apache.ignite.marshaller.optimized.OptimizedMarshaller, locNodeAddrs=[192.168.1.71/0:0:0:0:0:0:0:1%lo, /127.0.0.1, /192.168.1.71], locPort=0, rmtNodeAddr=[192.168.1.71/0:0:0:0:0:0:0:1%lo, /127.0.0.1, /192.168.1.71], locNodeId=b41f0d09-5a7f-424b-b3b5-420a5e1acdf6, rmtNodeId=ff436f20-5d4b-477e-aade-837d59b1eaa7]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1647)
at org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1460)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node's marshaller differs from remote node's marshaller (to make sure all nodes in topology have identical marshaller, configure marshaller explicitly in configuration) [locMarshaller=org.apache.ignite.internal.binary.BinaryMarshaller, rmtMarshaller=org.apache.ignite.marshaller.optimized.OptimizedMarshaller
It looks like you have explicitly set a marshaller in ignite.xml. Please check the <property name="marshaller"> entry in the XML config file. All nodes in the cluster must have the same marshaller configured, or they won't be able to communicate.
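If you would rather align the nodes programmatically than in XML, here is a minimal sketch of forcing the client to match the server. This example makes assumptions: it presumes an Ignite version in which OptimizedMarshaller is still public API (the remote node in the log above reports org.apache.ignite.marshaller.optimized.OptimizedMarshaller), so substitute whatever marshaller your ignite.xml actually declares.

IgniteConfiguration cfg = new IgniteConfiguration();
// Must match the <property name="marshaller"> entry used by the server nodes.
cfg.setMarshaller(new org.apache.ignite.marshaller.optimized.OptimizedMarshaller());
Ignite ignite = Ignition.start(cfg);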
We are using Hazelcast version 4.2.5 in a webapp deployed on Tomcat on Kubernetes. We're frequently (every 5 seconds) seeing a ClassCastException with a stack trace in the application logs.
Here's the ClassCastException:
java.lang.ClassCastException: class java.lang.String cannot be cast to class com.hazelcast.internal.serialization.impl.HeapData (java.lang.String is in module java.base of loader 'bootstrap'; com.hazelcast.internal.serialization.impl.HeapData is in unnamed module of loader org.apache.catalina.loader.ParallelWebappClassLoader #2f04993d)
27-Oct-2022 22:57:56.357 WARNING [hz.rogueUsers.cached.thread-2] com.hazelcast.internal.metrics.impl.MetricsCollectionCycle.null Collecting metrics from source com.hazelcast.replicatedmap.impl.ReplicatedMapService failed
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at java.base/java.lang.Thread.run(Thread.java:834)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217)
at com.hazelcast.spi.impl.executionservice.impl.DelegateAndSkipOnConcurrentExecutionDecorator$DelegateDecorator.run(DelegateAndSkipOnConcurrentExecutionDecorator.java:77)
at com.hazelcast.internal.metrics.impl.MetricsService.collectMetrics(MetricsService.java:154)
at com.hazelcast.internal.metrics.impl.MetricsService.collectMetrics(MetricsService.java:160)
at com.hazelcast.internal.metrics.impl.MetricsRegistryImpl.collect(MetricsRegistryImpl.java:316)
at com.hazelcast.internal.metrics.impl.MetricsCollectionCycle.collectDynamicMetrics(MetricsCollectionCycle.java:88)
at com.hazelcast.replicatedmap.impl.ReplicatedMapService.provideDynamicMetrics(ReplicatedMapService.java:387)
at com.hazelcast.replicatedmap.impl.ReplicatedMapService.getStats(ReplicatedMapService.java:357)
at com.hazelcast.replicatedmap.impl.ReplicatedMapService.getLocalReplicatedMapStats(ReplicatedMapService.java:197)
at com.hazelcast.replicatedmap.impl.LocalReplicatedMapStatsProvider.getLocalReplicatedMapStats(LocalReplicatedMapStatsProvider.java:85)
Here's how we're setting up Hazelcast:
private static HazelcastInstance setupHazelcastConfig() {
    Config config = new Config();
    config.setInstanceName("rogueUsers");

    NetworkConfig network = config.getNetworkConfig();
    network.setPort(5701).setPortCount(20);
    network.setPortAutoIncrement(true);
    JoinConfig join = network.getJoin();
    join.getMulticastConfig().setEnabled(true);
    // join.getTcpIpConfig()
    //     .setEnabled(true);

    HazelcastInstance hz = Hazelcast.getOrCreateHazelcastInstance(config);

    ReplicatedMapConfig replicatedMapConfig = config.getReplicatedMapConfig("rogueUsers");
    replicatedMapConfig.setInMemoryFormat(InMemoryFormat.BINARY);
    replicatedMapConfig.setAsyncFillup(true);
    replicatedMapConfig.setStatisticsEnabled(true);
    replicatedMapConfig.setSplitBrainProtectionName("splitbrainprotection-name");

    ReplicatedMap<String, String> map = hz.getReplicatedMap("rogueUsers");
    map.addEntryListener(new RogueEntryListener());

    return hz;
}
Is this a configuration issue? How do I fix this?
Thanks very much.
The exception is being thrown from the following line:
if (isBinary) {
    memoryUsage += ((HeapData) record.getValueInternal()).getHeapCost(); // <-- exception
}
which is line 85 of the com.hazelcast.replicatedmap.impl.LocalReplicatedMapStatsProvider class. The condition being checked is the following:
boolean isBinary = (replicatedMapConfig.getInMemoryFormat() == InMemoryFormat.BINARY);
So basically, it is related to the format in which you are saving the data (in the config above you have chosen BINARY).
However, I don't think you are using it correctly, since you declare ReplicatedMap<String, String> map = hz.getReplicatedMap("rogueUsers"); in your setup.
From the Javadoc of the com.hazelcast.internal.serialization.Data class:
Data is basic unit of serialization. It stores binary form of an object serialized by SerializationService.toData(Object).
Therefore, try editing your config to this:
ReplicatedMap<Data, Data> map = hz.getReplicatedMap("rogueUsers");
I am trying to make it possible to redeploy, on a JBoss 7.1.0 cluster, a WAR that uses Apache Ignite.
I am starting the cache like this:
System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");
igniteConfiguration = new IgniteConfiguration();
int failureDetectionTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT", "60000"));
igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
String igniteVmIps = getProperty("IGNITE_VM_IPS");
List<String> addresses = Arrays.asList("127.0.0.1:47500");
if (StringUtils.isNotBlank(igniteVmIps)) {
    addresses = Arrays.asList(igniteVmIps.split(","));
}
int networkTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_NETWORK_TIMEOUT", "60000"));
boolean failureDetectionTimeoutEnabled = Boolean.parseBoolean(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT_ENABLED", "true"));
int tcpDiscoveryLocalPort = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT", "47500"));
int tcpDiscoveryLocalPortRange = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT_RANGE", "0"));
TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
tcpDiscoverySpi.setLocalPort(tcpDiscoveryLocalPort);
tcpDiscoverySpi.setLocalPortRange(tcpDiscoveryLocalPortRange);
tcpDiscoverySpi.setNetworkTimeout(networkTimeout);
tcpDiscoverySpi.failureDetectionTimeoutEnabled(failureDetectionTimeoutEnabled);
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(addresses);
tcpDiscoverySpi.setIpFinder(ipFinder);
igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
Ignite ignite = Ignition.start(igniteConfiguration);
ignite.cluster().active(true);
Then I am stopping the cache when the application undeploys:
ignite.close();
When I try to redeploy, I get the following error during initialization:
org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.internal.cluster.ClusterGroupAdapter$CachesFilter#7385a997, clsName=null, depInfo=null, hnd=org.apache.ignite.internal.GridEventConsumeHandler#2aec6952, bufSize=1, interval=0, autoUnsubscribe=true], keepBinary=false, deserEx=null, routineId=bbe16e8e-2820-4ba0-a958-d5f644498ba2]
If I fully restart the server, it starts up fine.
Am I missing some magic in the shutdown process?
I see what I did wrong, and it was code I omitted from the ticket.
ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey())).remoteListen(locLsnr, rmtLsnr,
EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
When this listener was being registered a second time, it caused that strange error.
I put a try-catch around it that ignores the failure for now, and things seem to be OK.
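For reference, a sketch of that workaround; locLsnr and rmtLsnr stand in for the listeners from the code omitted above:

try {
    ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey()))
        .remoteListen(locLsnr, rmtLsnr,
            EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
} catch (IgniteException e) {
    // Ignored: the listener is presumably still registered from the previous deployment.
}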
public class GemfireTest {
    public static void main(String[] args) throws NameResolutionException, TypeMismatchException, QueryInvocationTargetException, FunctionDomainException {
        ServerLauncher serverLauncher = new ServerLauncher.Builder()
            .setMemberName("server1")
            .setServerPort(40404)
            .set("start-locator", "127.0.0.1[9090]")
            .build();
        serverLauncher.start();
        String queryString = "SELECT * FROM /gemregion";
        ClientCache cache = new ClientCacheFactory().create();
        QueryService queryService = cache.getQueryService();
        Query query = queryService.newQuery(queryString);
        SelectResults results = (SelectResults) query.execute();
        int size = results.size();
        System.out.println(size);
    }
}
I am trying to run a locator and a server inside my Java application, and I am getting the exception below:
Exception in thread "main" java.lang.IllegalStateException: A
connection to a distributed system already exists in this VM. It has
the following configuration: ack-severe-alert-threshold="0"
ack-wait-threshold="15" archive-disk-space-limit="0"
archive-file-size-limit="0" async-distribution-timeout="0"
async-max-queue-size="8" async-queue-timeout="60000"
bind-address="" cache-xml-file="cache.xml"
cluster-configuration-dir="" cluster-ssl-ciphers="any"
cluster-ssl-enabled="false" cluster-ssl-keystore=""
cluster-ssl-keystore-password="" cluster-ssl-keystore-type=""
cluster-ssl-protocols="any"
cluster-ssl-require-authentication="true" cluster-ssl-truststore=""
cluster-ssl-truststore-password="" conflate-events="server"
conserve-sockets="true" delta-propagation="true"
deploy-working-dir="C:\Users\Saranya\IdeaProjects\Gemfire"
disable-auto-reconnect="false" disable-tcp="false"
distributed-system-id="-1" distributed-transactions="false"
durable-client-id="" durable-client-timeout="300"
enable-cluster-configuration="true"
enable-network-partition-detection="true"
enable-time-statistics="false" enforce-unique-host="false"
gateway-ssl-ciphers="any" gateway-ssl-enabled="false"
gateway-ssl-keystore="" gateway-ssl-keystore-password=""
gateway-ssl-keystore-type="" gateway-ssl-protocols="any"
gateway-ssl-require-authentication="true" gateway-ssl-truststore=""
gateway-ssl-truststore-password="" groups=""
http-service-bind-address="" http-service-port="7070"
http-service-ssl-ciphers="any" http-service-ssl-enabled="false"
http-service-ssl-keystore="" http-service-ssl-keystore-password=""
http-service-ssl-keystore-type="" http-service-ssl-protocols="any"
http-service-ssl-require-authentication="false"
http-service-ssl-truststore=""
http-service-ssl-truststore-password="" jmx-manager="false"
jmx-manager-access-file="" jmx-manager-bind-address=""
jmx-manager-hostname-for-clients="" jmx-manager-http-port="7070"
jmx-manager-password-file="" jmx-manager-port="1099"
jmx-manager-ssl-ciphers="any" jmx-manager-ssl-enabled="false"
jmx-manager-ssl-keystore="" jmx-manager-ssl-keystore-password=""
jmx-manager-ssl-keystore-type="" jmx-manager-ssl-protocols="any"
jmx-manager-ssl-require-authentication="true"
jmx-manager-ssl-truststore="" jmx-manager-ssl-truststore-password=""
jmx-manager-start="false" jmx-manager-update-rate="2000"
load-cluster-configuration-from-dir="false" locator-wait-time="0"
locators="127.0.0.1[9090]" (wanted "") lock-memory="false"
log-disk-space-limit="0"
log-file="C:\Users\Saranya\IdeaProjects\Gemfire\server1.log"
(wanted "") log-file-size-limit="0" log-level="config" max-num-reconnect-tries="3" max-wait-time-reconnect="60000"
mcast-address="/239.192.81.1" mcast-flow-control="1048576, 0.25,
5000" mcast-port="0" mcast-recv-buffer-size="1048576"
mcast-send-buffer-size="65535" mcast-ttl="32"
member-timeout="5000" membership-port-range="[1024,65535]"
memcached-bind-address="" memcached-port="0"
memcached-protocol="ASCII" name="server1" (wanted "")
off-heap-memory-size="" redis-bind-address="" redis-password=""
redis-port="0" redundancy-zone="" remote-locators=""
remove-unresponsive-client="false" roles=""
security-client-accessor="" security-client-accessor-pp=""
security-client-auth-init="" security-client-authenticator=""
security-client-dhalgo="" security-log-file=""
security-log-level="config" security-manager=""
security-peer-auth-init="" security-peer-authenticator=""
security-peer-verifymember-timeout="1000" security-post-processor=""
security-shiro-init="" security-udp-dhalgo=""
serializable-object-filter="!" server-bind-address=""
server-ssl-ciphers="any" server-ssl-enabled="false"
server-ssl-keystore="" server-ssl-keystore-password=""
server-ssl-keystore-type="" server-ssl-protocols="any"
server-ssl-require-authentication="true" server-ssl-truststore=""
server-ssl-truststore-password="" socket-buffer-size="32768"
socket-lease-time="60000" ssl-ciphers="any" ssl-cluster-alias=""
ssl-default-alias="" ssl-enabled-components="[]"
ssl-gateway-alias="" ssl-jmx-alias="" ssl-keystore=""
ssl-keystore-password="" ssl-keystore-type="" ssl-locator-alias=""
ssl-protocols="any" ssl-require-authentication="true"
ssl-server-alias="" ssl-truststore="" ssl-truststore-password=""
ssl-truststore-type="" ssl-web-alias=""
ssl-web-require-authentication="false" start-dev-rest-api="false"
start-locator="127.0.0.1[9090]" (wanted "")
statistic-archive-file="" statistic-sample-rate="1000"
statistic-sampling-enabled="true" tcp-port="0"
udp-fragment-size="60000" udp-recv-buffer-size="1048576"
udp-send-buffer-size="65535" use-cluster-configuration="true"
user-command-packages="" validate-serializable-objects="false"
at org.apache.geode.distributed.internal.InternalDistributedSystem.validateSameProperties(InternalDistributedSystem.java:2959)
at org.apache.geode.distributed.DistributedSystem.connect(DistributedSystem.java:199)
at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:243)
at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:214)
at GemfireTest.main(GemfireTest.java:61)
How can I solve this exception?
The error here is pretty self-explanatory: you can’t have more than one connection to a distributed system within a single JVM. In this particular case you’re starting both a server cache (ServerLauncher) and a client cache (ClientCacheFactory) within the same JVM, which is not supported.
To solve the issue, use two different applications or JVMs, one for the server and another one for the client executing the query.
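For illustration, a minimal sketch of the client side as its own application, assuming the locator address from the question:

// Run in a separate JVM from the ServerLauncher process.
ClientCache cache = new ClientCacheFactory()
    .addPoolLocator("127.0.0.1", 9090)
    .create();
QueryService queryService = cache.getQueryService();
SelectResults results = (SelectResults) queryService.newQuery("SELECT * FROM /gemregion").execute();
System.out.println(results.size());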
Cheers.
I need to assign a different amount of memory to each new worker, so I tried changing the memory for each bolt and spout. I am also using a custom scheduler. Here is my approach to the problem.
MY CODE:
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new EmailSpout(), 1).addConfiguration("node", "zoo1").setMemoryLoad(512.0);
builder.setBolt("increment1", new IncrementBolt(), PARALLELISM).shuffleGrouping("spout").addConfiguration("node", "zoo2").setMemoryLoad(2048.0);
builder.setBolt("increment2", new IncrementBolt(), PARALLELISM).shuffleGrouping("increment1").addConfiguration("node", "zoo3").setMemoryLoad(2048.0);
builder.setBolt("increment3", new IncrementBolt(), PARALLELISM).shuffleGrouping("increment2").addConfiguration("node", "zoo4").setMemoryLoad(2048.0);
builder.setBolt("output", new OutputBolt(), 1).globalGrouping("increment2").addConfiguration("node", "zoo1").setMemoryLoad(512.0);
Config conf = new Config();
conf.setDebug(false);
conf.setNumWorkers(4);
StormSubmitter.submitTopologyWithProgressBar("Microbenchmark", conf, builder.createTopology());
MY STORM.YAML:
storm.zookeeper.servers:
- "zoo1"
storm.zookeeper.port: 2181
nimbus.seeds: ["zoo1"]
storm.local.dir: "/home/ubuntu/eranga/storm-data"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
- 6704
storm.scheduler: "org.apache.storm.scheduler.NodeBasedCustomScheduler"
supervisor.scheduler.meta:
node: "zoo4"
worker.profiler.enabled: true
worker.profiler.childopts: "-XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
worker.profiler.command: "flight.bash"
worker.heartbeat.frequency.secs: 1
worker.childopts: "-Xmx2048m -Xms2048m -Djava.net.preferIPv4Stack=true -Dorg.xml.sax.driver=com.sun.org.apache.xerces.internal.parsers.SAXParser -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl -Djavax.xml.parsers.SAXParserFactory=com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl"
When I submit the topology, I get the following error:
ERROR:
Exception in thread "main" java.lang.IllegalArgumentException: Topology will not be able to be successfully scheduled: Config TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB=768.0 < 2048.0 (Largest memory requirement of a component in the topology). Perhaps set TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB to a larger amount
at org.apache.storm.StormSubmitter.validateTopologyWorkerMaxHeapSizeMBConfigs(StormSubmitter.java:496)
Any suggestions?
Try using this:
import java.util.List;
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.StormTopology;

public class TopologyExecuter {
    public static void submitAll(List<StormTopology> stormTopologies) throws Exception {
        for (StormTopology stormTopology : stormTopologies) {
            Config topologyConf = new Config();
            // Cap each worker JVM via its child options.
            topologyConf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xmx512m -Xms256m");
            StormSubmitter.submitTopology("topology name", topologyConf, stormTopology);
        }
    }
}
Did you try following the advice from the error message?
Perhaps set TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB to a larger amount
Try adding this to storm.yaml:
topology.worker.max.heap.size.mb: 2048.0
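If you prefer raising the limit per topology from code instead of storm.yaml, here is a minimal sketch against the submission snippet from the question; this assumes your Storm version exposes Config.setTopologyWorkerMaxHeapSize:

Config conf = new Config();
conf.setDebug(false);
conf.setNumWorkers(4);
// Must be at least the largest per-component requirement (2048.0 MB above).
conf.setTopologyWorkerMaxHeapSize(2048.0);
StormSubmitter.submitTopologyWithProgressBar("Microbenchmark", conf, builder.createTopology());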
My distributed Geode system worked well before, but recently it has not, and I don't know whether this is related to the company's power cut last night. It seems all the Geode data persisted earlier cannot be deserialized now.
Here's the error information:
Caused by: java.lang.IllegalStateException: Unknown pdx type=4
at com.gemstone.gemfire.internal.InternalDataSerializer.readPdxSerializable(InternalDataSerializer.java:3162)
at com.gemstone.gemfire.internal.InternalDataSerializer.basicReadObject(InternalDataSerializer.java:2979)
at com.gemstone.gemfire.DataSerializer.readObject(DataSerializer.java:3210)
at com.gemstone.gemfire.internal.util.BlobHelper.deserializeBlob(BlobHelper.java:101)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:1554)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.deserialize(EntryEventImpl.java:1546)
at com.gemstone.gemfire.internal.cache.PreferBytesCachedDeserializable.getDeserializedValue(PreferBytesCachedDeserializable.java:67)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.getOldValue(EntryEventImpl.java:723)
at com.gemstone.gemfire.internal.cache.LocalRegion.validatedDestroy(LocalRegion.java:1146)
at com.gemstone.gemfire.internal.cache.LocalRegion.destroy(LocalRegion.java:1130)
at com.gemstone.gemfire.internal.cache.AbstractRegion.destroy(AbstractRegion.java:315)
at com.gemstone.gemfire.internal.cache.LocalRegion.remove(LocalRegion.java:9372)
at com.igola.datahub.wwl.geode.WWLResultsDAO.removeByKey(WWLResultsDAO.java:276)
My project was built on the Play Framework, and my configuration of the Geode cache and region looks like this:
cache.setIsServer(true);
diskStorage = configuration.getString("geode.storage.name");
String fileStorage = configuration.getString("geode.storage.path");
cache.createDiskStoreFactory()
.setDiskDirs(new File[]{new File(fileStorage)})
.setDiskUsageWarningPercentage(0.8f)
.setAutoCompact(true)
.create(diskStorage);
geodeCache.getCache()
.<DataKey, When2GoData>createRegionFactory(PARTITION_REDUNDANT_PERSISTENT_OVERFLOW)
.setStatisticsEnabled(true)
.setEntryIdleTimeout(new ExpirationAttributes(TIMEOUT_LONG, ExpirationAction.DESTROY))
.setDiskStoreName(geodeCache.getDiskStorage())
.setPartitionAttributes(new PartitionAttributesFactory<>()
.setRedundantCopies(REDUNDANT_COPIES)
.setPartitionResolver(new DataKey())
.create())
.create("xxx");
cache = new CacheFactory()
.set("locators", configuration.getString("geode.locator"))
.set("name", configuration.getString("geode.name")+ "-"+ uuid)
.set("mcast-port", "0")
.set("log-level", "error")
.setPdxPersistent(true)
.setPdxReadSerialized(true)
.create();
Here's the relevant code from the readPdxSerializable method:
PdxType pdxType = gfc.getPdxRegistry().getType(typeId);
if (logger.isTraceEnabled(LogMarker.SERIALIZER)) {
logger.trace(LogMarker.SERIALIZER, "readPdxSerializable pdxType={}", pdxType);
}
if (pdxType == null) {
throw new IllegalStateException("Unknown pdx type=" + typeId);
}
So it looks like you have something in your cache that you can no longer deserialize, due (I think) to the fact that the type is no longer registered in the PDX type registry.
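If the registry was lost because it lived in a disk store that did not survive the power cut, one mitigation to consider (an assumption on my part, not a confirmed diagnosis) is persisting the PDX type registry in an explicitly named disk store alongside the data. A minimal sketch based on the CacheFactory setup from the question; "pdx-store" is a hypothetical disk store name that you would create the same way as the data disk store above:

Cache cache = new CacheFactory()
    .set("locators", configuration.getString("geode.locator"))
    .set("mcast-port", "0")
    .setPdxPersistent(true)       // keep PDX type metadata on disk
    .setPdxDiskStore("pdx-store") // hypothetical named store for the registry
    .setPdxReadSerialized(true)
    .create();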