Currently I have two maps in Hazelcast, configured like so:
<hz:map name="some-map"
        max-idle-seconds="0"
        time-to-live-seconds="0">
    <hz:map-store enabled="true"
                  initial-mode="EAGER"
                  write-delay-seconds="0"
                  class-name="SomeMapStore">
    </hz:map-store>
    <hz:partition-strategy>com.hazelcast.partition.strategy.DefaultPartitioningStrategy</hz:partition-strategy>
</hz:map>
I would expect initial-mode="EAGER" in the hazelcast-beans.xml configuration to populate the Hazelcast map. Instead, the application hangs for a moment and then I see the following error:
my-service 21:14:15.247Z [hz.my-service-name.SlowOperationDetectorThread] WARN com.hazelcast.spi.impl.operationexecutor.slowoperationdetector.SlowOperationDetector - [localhost]:8085 [my-service-name-local] [3.9.4] Slow operation detected: com.hazelcast.map.impl.operation.PutTransientOperation
Has anyone run into this? I'm on Hazelcast 3.9.4.
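For reference, the store referenced by class-name follows the standard Hazelcast 3.x MapStore contract. A stripped-down sketch of that contract (the names and types here are simplified placeholders, not my actual implementation):

import com.hazelcast.core.MapStore;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// With initial-mode="EAGER", Hazelcast calls loadAllKeys() and then loadAll()
// at startup to preload the map; a slow backing store here can stall startup
// and surface as slow-operation warnings like the one above.
public class SomeMapStore implements MapStore<String, String> {

    @Override public void store(String key, String value) { /* write-through to backing store */ }
    @Override public void storeAll(Map<String, String> entries) { entries.forEach(this::store); }
    @Override public void delete(String key) { /* remove from backing store */ }
    @Override public void deleteAll(Collection<String> keys) { keys.forEach(this::delete); }

    @Override
    public String load(String key) {
        return null; // look up a single entry in the backing store
    }

    @Override
    public Map<String, String> loadAll(Collection<String> keys) {
        Map<String, String> result = new HashMap<>();
        for (String key : keys) {
            result.put(key, load(key));
        }
        return result;
    }

    @Override
    public Iterable<String> loadAllKeys() {
        return Collections.emptyList(); // the set of keys to preload on startup
    }
}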
I am getting a very strange error while trying to compile to a native image.
Here is the error:
206853 [INFO] [io.quarkus.creator.phase.nativeimage.NativeImagePhase] Running Quarkus native-image plugin on OpenJDK GraalVM CE 1.0.0-rc15
206863 [INFO] [io.quarkus.creator.phase.nativeimage.NativeImagePhase] /opt/graalvm/bin/native-image -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Drx.unsafe-disable=true -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime -jar example-project-api-services-1.0-runner.jar -J-Djava.util.concurrent.ForkJoinPool.common.parallelism=1 -H:FallbackThreshold=0 -H:+PrintAnalysisCallTree -H:-AddAllCharsets -H:EnableURLProtocols=http,https --enable-all-security-services -H:-SpawnIsolates -H:+JNI --no-server -H:-UseServiceLoaderFeature -H:+StackTrace
[example-project-api-services-1.0-runner:391] classlist: 12,582.29 ms
[example-project-api-services-1.0-runner:391] (cap): 1,021.83 ms
[example-project-api-services-1.0-runner:391] setup: 2,121.29 ms
13:12:55,427 INFO [org.hib.val.int.uti.Version] HV000001: Hibernate Validator 6.1.0.Alpha4
13:12:55,729 INFO [io.sma.fau.HystrixInitializer] ### Init Hystrix ###
13:12:55,731 INFO [io.sma.fau.DefaultHystrixConcurrencyStrategy] ### Privilleged Thread Factory used ###
13:12:55,731 INFO [io.sma.fau.HystrixInitializer] Hystrix concurrency strategy used: DefaultHystrixConcurrencyStrategy
13:12:55,737 WARN [com.net.con.sou.URLConfigurationSource] No URLs will be polled as dynamic configuration sources.
13:12:55,737 INFO [com.net.con.sou.URLConfigurationSource] To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
13:12:55,761 INFO [org.hib.Version] HHH000412: Hibernate Core {5.4.2.Final}
13:12:55,774 INFO [org.hib.ann.com.Version] HCANN000001: Hibernate Commons Annotations {5.1.0.Final}
13:12:55,799 INFO [org.hib.dia.Dialect] HHH000400: Using dialect: org.hibernate.dialect.PostgreSQL95Dialect
13:12:55,863 INFO [io.sma.ope.api.OpenApiDocument] OpenAPI document initialized: io.smallrye.openapi.api.models.OpenAPIImpl@4878969e
13:12:57,347 INFO [org.jbo.threads] JBoss Threads version 3.0.0.Alpha4
13:12:58,200 INFO [com.arj.ats.arjuna] ARJUNA012170: TransactionStatusManager started on port 38873 and host 127.0.0.1 with service com.arjuna.ats.arjuna.recovery.ActionStatusService
13:12:58,348 INFO [org.xnio] XNIO version 3.7.0.Final
13:12:58,440 INFO [org.xni.nio] XNIO NIO Implementation Version 3.7.0.Final
Warning: RecomputeFieldValue.ArrayIndexScale automatic substitution failed. The automatic substitution registration was attempted because a call to sun.misc.Unsafe.arrayIndexScale(Class) was detected in the static initializer of rx.internal.util.unsafe.ConcurrentCircularArrayQueue. Detailed failure reason(s): Could not determine the field where the value produced by the call to sun.misc.Unsafe.arrayIndexScale(Class) for the array index scale computation is stored. The call is not directly followed by a field store or by a sign extend node followed directly by a field store.
Warning: RecomputeFieldValue.ArrayBaseOffset automatic substitution failed. The automatic substitution registration was attempted because a call to sun.misc.Unsafe.arrayBaseOffset(Class) was detected in the static initializer of rx.internal.util.unsafe.ConcurrentCircularArrayQueue. Detailed failure reason(s): Could not determine the field where the value produced by the call to sun.misc.Unsafe.arrayBaseOffset(Class) for the array base offset computation is stored. The call is not directly followed by a field store or by a sign extend node followed directly by a field store.
Warning: RecomputeFieldValue.ArrayIndexScale automatic substitution failed. The automatic substitution registration was attempted because a call to sun.misc.Unsafe.arrayIndexScale(Class) was detected in the static initializer of rx.internal.util.unsafe.SpscUnboundedArrayQueue. Detailed failure reason(s): Could not determine the field where the value produced by the call to sun.misc.Unsafe.arrayIndexScale(Class) for the array index scale computation is stored. The call is not directly followed by a field store or by a sign extend node followed directly by a field store.
[example-project-api-services-1.0-runner:391] analysis: 88,298.64 ms
Printing call tree to /builds/orema/example-project/services/example-project-services/example-project-api-services/target/reports/call_tree_example-project-api-services-1.0-runner_20190517_131435.txt
Printing list of used classes to /builds/orema/example-project/services/example-project-services/example-project-api-services/target/reports/used_classes_example-project-api-services-1.0-runner_20190517_131441.txt
Printing list of used packages to /builds/orema/example-project/services/example-project-services/example-project-api-services/target/reports/used_packages_example-project-api-services-1.0-runner_20190517_131441.txt
Error: No instances are allowed in the image heap for a class that is initialized or reinitialized at image runtime: sun.security.provider.NativePRNG
Detailed message:
Trace: object java.security.SecureRandom
method net.example-project.domain.collection.control.CollectionNumber.generate()
Call path from entry point to net.example-project.domain.collection.control.CollectionNumber.generate():
at net.example-project.domain.collection.control.CollectionNumber.generate(CollectionNumber.java:24)
at net.example-project.domain.collection.control.CollectionNumber_ClientProxy.generate(Unknown Source)
at net.example-project.domain.collection.boundary.CollectionCreationContext.create(CollectionCreationContext.java:41)
at net.example-project.domain.collection.boundary.CollectionCreationContext_Subclass.create$$superaccessor27(Unknown Source)
at net.example-project.domain.collection.boundary.CollectionCreationContext_Subclass$$function$$51.apply(Unknown Source)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.stream.SortedOps$RefSortingSink$$Lambda$425/239200789.accept(Unknown Source)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at io.smallrye.restclient.async.AsyncInvocationInterceptorHandler$Decorator.lambda$decorate$0(AsyncInvocationInterceptorHandler.java:48)
at io.smallrye.restclient.async.AsyncInvocationInterceptorHandler$Decorator$$Lambda$559/661106985.run(Unknown Source)
at java.lang.Thread.run(Thread.java:748)
at com.oracle.svm.core.thread.JavaThreads.threadStartRoutine(JavaThreads.java:473)
at com.oracle.svm.core.posix.thread.PosixJavaThreads.pthreadStartRoutine(PosixJavaThreads.java:193)
at com.oracle.svm.core.code.IsolateEnterStub.PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df(generated:0)
--------------------------------------------------------------------------------------------
-- WARNING: The above stack trace is not a real stack trace, it is a theoretical call tree---
-- If an interface has multiple implementations SVM will just display one potential call ---
-- path to the interface. This is often meaningless, and what you actually need to know is---
-- the path to the constructor of the object that implements this interface. ---
-- Quarkus has attempted to generate a more meaningful call flow analysis below ---
---------------------------------------------------------------------------------------------
Error: Use -H:+ReportExceptionStackTraces to print stacktrace of underlying exception
Error: Image build request failed with exit 331740
I think this log line says the most about the error:
Error: No instances are allowed in the image heap for a class that is initialized or reinitialized at image runtime: sun.security.provider.NativePRNG
I found some related issues in GraalVM's GitHub repo:
https://github.com/oracle/graal/issues/712
I think I should use the "Delaying class initialization" approach described at https://quarkus.io/guides/writing-native-applications-tips
So, I wrote this piece of Java code:
@BuildStep
public RuntimeInitializedClassBuildItem secureRandom() {
    return new RuntimeInitializedClassBuildItem("sun.security.provider.NativePRNG");
}
But it doesn't work.
So first, @BuildStep only works during the Quarkus augmentation phase: you need to be in an extension for it to work; it won't work in application code.
Second, you need to delay to runtime the initialization of the class holding the SecureRandom field, so in your case probably CollectionNumber.
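One alternative, sketched below under the assumption that CollectionNumber keeps a SecureRandom in a field (which is what the trace suggests), is to stop holding the instance at all and create it lazily at runtime:

import java.security.SecureRandom;

public class CollectionNumber {

    public String generate() {
        // Created on each call at runtime; no SecureRandom instance exists
        // at native image build time, so nothing ends up in the image heap.
        SecureRandom random = new SecureRandom();
        return Long.toHexString(random.nextLong());
    }
}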
Otherwise, to keep the field, I would try to add:
<additionalBuildArgs>--delay-class-initialization-to-runtime=net.example-project.domain.collection.control.CollectionNumber</additionalBuildArgs>
to the Native image phase in your pom.xml.
You can pass additional build parameters to the Quarkus native build through your Maven pom.xml like this (as of 2021):
<profiles>
    <profile>
        <id>native</id>
        <activation>
            <property>
                <name>native</name>
            </property>
        </activation>
        <properties>
            <quarkus.package.type>native</quarkus.package.type>
            <quarkus.native.additional-build-args>--initialize-at-run-time=org.bouncycastle.jcajce.provider.drbg.DRBG\\,org.bouncycastle.jcajce.provider.SOMETHING_ELSE --trace-object-instantiation=sun.security.provider.NativePRNG</quarkus.native.additional-build-args>
        </properties>
.....
You can also specify a list of multiple inner classes (note the escaped $ and , characters):
--initialize-at-run-time=org.bouncycastle.jcajce.provider.drbg.DRBG\$Default\,org.bouncycastle.jcajce.provider.drbg.DRBG\$NonceAndIV --trace-object-instantiation=sun.security.provider.NativePRNG
I create a data source as follows:
@DataSourceDefinition(
    name = "java:app/env/myDataSource",
    className = "org.apache.derby.jdbc.EmbeddedXADataSource40",
    databaseName = "myDB",
    properties = {
        // Vendor properties for the Derby Embedded JDBC driver:
        "createDatabase=create",
        "connectionAttributes=upgrade=true",
        // Custom properties for WebSphere Application Server:
        "connectionTimeout=60",
        "dataStoreHelperClass=com.ibm.websphere.rsadapter.DerbyDataStoreHelper",
        "validateNewConnection=true",
        "validateNewConnectionRetryCount=5"
    },
    serverName = ""
)
Then I reference it in my startup code:
@Startup
@Singleton
public class StartUp {
    @Resource(lookup = "java:app/env/myDataSource")
    private javax.sql.DataSource dataSource;
    ...
When the WebSphere server starts up, I get the error below (and more):
[12/12/16 15:05:28:136 EST] 0000003b J2CXAResource W J2CA0061W:
Error creating XA Connection and Resource java.lang.Exception:
Parameter xaResInfo lacks an RA wrapper and an RA wrapper could not be
resolved using RA key.
    at com.ibm.ejs.j2c.J2CXAResourceFactory$1.run(J2CXAResourceFactory.java:264)
    at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
    at com.ibm.ejs.j2c.J2CXAResourceFactory.getXAResource(J2CXAResourceFactory.java:199)
    at com.ibm.ws.Transaction.JTA.XARecoveryData.getXARminst(XARecoveryData.java:492)
    at com.ibm.ws.Transaction.JTA.XARecoveryData.recover(XARecoveryData.java:658)
    at com.ibm.tx.jta.impl.PartnerLogTable.recover(PartnerLogTable.java:432)
    at com.ibm.tx.jta.impl.RecoveryManager.resync(RecoveryManager.java:1543)
    at com.ibm.tx.jta.impl.RecoveryManager.performResync(RecoveryManager.java:2276)
    at com.ibm.ws.tx.jta.RecoveryManager.performResync(RecoveryManager.java:119)
    at com.ibm.tx.jta.impl.RecoveryManager.run(RecoveryManager.java:2229)
    at java.lang.Thread.run(Thread.java:798)
Any thoughts? I don't think embedded Derby needs J2C.
As aguibert said, the server is attempting to perform XA recovery, which is failing. Here's a link with info about recovering from a failed recovery:
https://www.ibm.com/developerworks/community/blogs/aimsupport/entry/recovering_from_failed_transaction_recovery_websphere_application_server?lang=en
In cases where the logged transaction is of no concern, you can simply stop the application server, navigate to the tranlog and partnerlog directories, delete the contents (log1 and log2) of both directories, then restart the app server.
For reference, unless changed by your configuration, the default directories are typically located at:
C:\WebSphere\AppServer\profiles\AppSrv01\tranlog\MyNode01Cell\MyNode02\server1\transaction\partnerlog\
C:\WebSphere\AppServer\profiles\AppSrv01\tranlog\MyNode01Cell\MyNode02\server1\transaction\tranlog\
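With the server stopped, the cleanup amounts to deleting log1 and log2 in both directories, for example (assuming the default paths above):

cd C:\WebSphere\AppServer\profiles\AppSrv01\tranlog\MyNode01Cell\MyNode02\server1\transaction\tranlog
del log1 log2
cd ..\partnerlog
del log1 log2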
I have 5GB worth of data in DSE 4.8.9. I am trying to load the same data into DSE 5.0.2. The command I use is the following:
root@dse:/mnt/cassandra/data$ sstableloader -d 10.0.2.91 /mnt/cassandra/data/my-keyspace/my-table-0b168ba1637111e6b40131c603254a9b/
This gives me the following exception:
DEBUG 15:27:12,850 Using framed transport.
DEBUG 15:27:12,850 Opening framed transport to: 10.0.2.91:9160
DEBUG 15:27:12,850 Using thriftFramedTransportSize size of 16777216
DEBUG 15:27:12,851 Framed transport opened successfully to: 10.0.2.91:9160
Could not retrieve endpoint ranges:
InvalidRequestException(why:unconfigured table schema_columnfamilies)
java.lang.RuntimeException: Could not retrieve endpoint ranges:
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:342)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:109)
Caused by: InvalidRequestException(why:unconfigured table schema_columnfamilies)
at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result$execute_cql3_query_resultStandardScheme.read(Cassandra.java:50297)
at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result$execute_cql3_query_resultStandardScheme.read(Cassandra.java:50274)
at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:50189)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1734)
at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1719)
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:321)
... 2 more
Thoughts?
For scenarios where you have few nodes and not a lot of data, you can follow these steps for a cluster migration (ensure the clusters are at most 1 major release apart); example commands follow the list:
1) Create the schema in the new cluster.
2) Move both nodes' data to each new node (into the new cfid table directories).
3) Run nodetool refresh to pick up the data.
4) Run nodetool cleanup to clear out the extra data.
5) If the old cluster was on a previous major version, upgrade the SSTables on the new cluster.
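As a sketch, steps 3 to 5 on one of the new nodes would look like this (keyspace and table names are placeholders):

nodetool refresh my_keyspace my_table
nodetool cleanup my_keyspace
nodetool upgradesstables my_keyspace my_table

The last command is only needed when the old cluster was on an earlier major version.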
We have 10 Cassandra nodes in production running Cassandra 2.1.8. We recently upgraded to 2.1.8; previously we were using only 3 nodes running Cassandra 2.1.2. First we upgraded the initial 3 nodes from 2.1.2 to 2.1.8 (following the procedure described in Upgrading Cassandra). Then we added 7 more nodes running Cassandra 2.1.8 to the cluster and started our client programs. For the first few hours everything worked fine, but then we started seeing errors like this in the client program logs:
Thread-0 [29/07/15 17:41:23.356] ERROR com.cleartrail.entityprofiling.engine.InterpretationWriter - Error:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.cleartrail.entityprofiling.engine.InterpretationWriter.WriteInterpretation(InterpretationWriter.java:430)
at com.cleartrail.entityprofiling.engine.Profiler.buildProfile(Profiler.java:1042)
at com.cleartrail.messageconsumer.consumer.KafkaConsumer.run(KafkaConsumer.java:336)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:102)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Now, I have double-checked the firewall (as suggested in a few posts), the ports, and the timeouts on both the client and the nodes, and they are all correct.
I am also not closing the connection anywhere in between. I am using batch queries with a batch size of 1000; the queries are updates that increment counters in a table with three columns, entity, twfwv, and cvalue, where entity and twfwv are text columns forming the primary key and cvalue is a counter column.
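In CQL terms, the table looks roughly like this (the keyspace and table names are placeholders):

CREATE TABLE my_keyspace.my_counters (
    entity text,
    twfwv text,
    cvalue counter,
    PRIMARY KEY (entity, twfwv)
);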
I even restarted all my nodes (because this trick helped in my dev environment when I faced the same exception), but it's not helping. Please suggest what the probable problem could be.
My issue was resolved by checking the errors collection of NoHostAvailableException, as advised by Olivier Michallat in the comments. For me it was the protocol version in the cluster configuration: mine was null, and setting it to 3 fixed the problem.
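With the DataStax Java driver, that amounts to something like this (the contact point is a placeholder):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;

Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")             // placeholder contact point
        .withProtocolVersion(ProtocolVersion.V3)  // pin the native protocol version explicitly
        .build();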
My issue was resolved by removing (or using a property to toggle) the custom TokenAwarePolicy load balancing policy my connection was using, and relying on the default instead.
Specifically, I was trying to get a local Spring Boot app talking to a single dockerized Cassandra instance.
Cluster.Builder builder = Cluster.builder()
        .addContactPoints(cassandraProperties.getHosts())
        .withPort(cassandraProperties.getPort())
        .withProtocolVersion(ProtocolVersion.V4)
        .withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
        .withCredentials(cassandraProperties.getUsername(), cassandraProperties.getPassword())
        .withCodecRegistry(codecRegistry);

// Only apply the custom token-aware policy when explicitly enabled;
// otherwise the driver falls back to its default load balancing policy.
if (loadBalanced) {
    builder.withLoadBalancingPolicy(
            new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().withLocalDc(localDc).build()));
}
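Building and connecting is then the same whether or not the custom policy was applied:

Cluster cluster = builder.build();
Session session = cluster.connect();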
I am trying to set up a replicated cache using JGroups in Ehcache, and I am having problems clustering the cache. I created two projects in Eclipse, each referring to a different ehcache.xml configuration file.
Both configuration files are identical and are given below.
<?xml version="1.0"?>
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
    properties="connect=TCP(bind_port=7800):
                TCPPING(initial_hosts=localhost[7800],localhost[7801];port_range=10;timeout=3000;
                num_initial_members=3):
                VERIFY_SUSPECT(timeout=1500):
                pbcast.NAKACK(retransmit_timeout=3000):
                pbcast.GMS(join_timeout=50000;print_local_addr=true)"
    propertySeparator="::" />
<cache name="sampleCache"
       maxElementsInMemory="1000000"
       eternal="true"
       overflowToDisk="false">
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
        properties="replicateAsynchronously=true"/>
</cache>
I am using the following jar files in my classpath:
- ehcache-2.9.0.jar
- ehcache-jgroupsreplication-1.7.jar
- jgroups-3.6.0.Final.jar
- log4j-1.2.16.jar
When I run the programs, project 1 shows:
63511 [main] DEBUG org.jgroups.protocols.pbcast.NAKACK -
[SBSPBWSVM110-42986 setDigest()]
existing digest: []
new digest: SBSPBWSVM110-42986: [0 (0)]
resulting digest: SBSPBWSVM110-42986: [0 (0)]
63511 [main] DEBUG org.jgroups.protocols.pbcast.GMS - SBSPBWSVM110-42986: installing view [SBSPBWSVM110-42986|0] (1) [SBSPBWSVM110-42986]
63543 [main] DEBUG org.jgroups.protocols.pbcast.GMS - SBSPBWSVM110-42986: created cluster (first member). My view is [SBSPBWSVM110-42986|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
Jan 09, 2015 11:49:51 AM net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider init
INFO:JGroups Replication started for 'EH_CACHE'. JChannel: local_addr=SBSPBWSVM110-42986
cluster_name=EH_CACHE
my_view=[SBSPBWSVM110-42986|0] (1) [SBSPBWSVM110-42986]
state=CONNECTED
discard_own_messages=true
state_transfer_supported=false
When I run the programs, project 2 shows:
63451 [main] DEBUG org.jgroups.protocols.pbcast.NAKACK -
[SBSPBWSVM110-20554 setDigest()]
existing digest: []
new digest: SBSPBWSVM110-20554: [0 (0)]
resulting digest: SBSPBWSVM110-20554: [0 (0)]
63451 [main] DEBUG org.jgroups.protocols.pbcast.GMS - SBSPBWSVM110-20554: installing view [SBSPBWSVM110-20554|0] (1) [SBSPBWSVM110-20554]
63452 [main] DEBUG org.jgroups.protocols.pbcast.GMS - SBSPBWSVM110-20554: created cluster (first member). My view is [SBSPBWSVM110-20554|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
Jan 09, 2015 11:49:51 AM net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider init
INFO: JGroups Replication started for 'EH_CACHE'. JChannel: local_addr=SBSPBWSVM110-20554
cluster_name=EH_CACHE
my_view=[SBSPBWSVM110-20554|0] (1) [SBSPBWSVM110-20554]
state=CONNECTED
discard_own_messages=true
state_transfer_supported=false
But the replication simply isn't happening. I have done RMI replication with Ehcache before, following the same approach, so I am assuming nothing is wrong in my Java code.
I am unable to find the issue here. Is my configuration wrong? Please help me with this issue.
The config you use is strange: it's missing some protocols. Can't Ehcache refer to a JGroups config file, e.g. udp.xml?
Also, either set bind_addr in TCP or use -Djgroups.bind_addr=1.2.3.4, where 1.2.3.4 is the network interface to bind to.
Then, in TCPPING.initial_hosts, you'll need to list all the members with the bind addresses you used above, e.g. 1.2.3.4[7800],5.6.7.8[7800], etc.
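Applied to the configuration above, the properties string would then look roughly like this (a sketch; 1.2.3.4 and 5.6.7.8 are placeholders for your actual member addresses):

properties="connect=TCP(bind_addr=1.2.3.4;bind_port=7800):
            TCPPING(initial_hosts=1.2.3.4[7800],5.6.7.8[7800];port_range=10;timeout=3000;
            num_initial_members=3):
            VERIFY_SUSPECT(timeout=1500):
            pbcast.NAKACK(retransmit_timeout=3000):
            pbcast.GMS(join_timeout=50000;print_local_addr=true)"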