We run Keycloak in HA, configured with an external Infinispan cluster for the sessions, clientSessions & authenticationSessions caches.
Everything runs in containers, following an approach similar to the one in https://github.com/albertoSoto/keycloak-infinispan-cluster
The project runs KC 15.0.2 on Wildfly (migration to Quarkus still pending) and uses Infinispan 11.0.9 for the external data persistence to MySQL 5.7. The driver is the latest one suggested by Oracle, https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar, and the driver class is com.mysql.cj.jdbc.Driver.
The project starts fine, but after a random amount of time MySQL drops the connection and the Infinispan cluster can't reconnect.
Trying to make it work, I have been able to use an Agroal configuration based on a properties file, shown below.
The content of that Agroal properties file, which overrides the JPA behavior in the project, is the following:
org.infinispan.agroal.metricsEnabled=false
org.infinispan.agroal.minSize=10
org.infinispan.agroal.maxSize=100
org.infinispan.agroal.initialSize=20
org.infinispan.agroal.acquisitionTimeout_s=1
org.infinispan.agroal.validationTimeout_m=1
org.infinispan.agroal.leakTimeout_s=10
org.infinispan.agroal.reapTimeout_m=10
org.infinispan.agroal.maxLifetime_m=10
org.infinispan.agroal.autoCommit=true
org.infinispan.agroal.jdbcTransactionIsolation=READ_COMMITTED
org.infinispan.agroal.jdbcUrl=jdbc:mysql://mysql:3306/infinispan
org.infinispan.agroal.driverClassName=com.mysql.cj.jdbc.Driver
org.infinispan.agroal.principal=keycloak
org.infinispan.agroal.credential=password
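For reference, MySQL closes connections that stay idle longer than its wait_timeout, which matches the "No operations allowed after connection closed" error below. One adjustment I am considering (a sketch only, not verified against our MySQL settings) is keeping pooled connections younger than that timeout:
# check the server-side timeout first: SHOW VARIABLES LIKE 'wait_timeout';
# hypothetical values, only sensible if wait_timeout is larger than 5 minutes
org.infinispan.agroal.maxLifetime_m=5
org.infinispan.agroal.reapTimeout_m=5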
The error shown after the connection is closed by the DB is the following:
21:55:31,052 ERROR (jgroups-319,vi-infinispan-1-5379) [org.infinispan.interceptors.impl.InvocationContextInterceptor] ISPN000136: Error executing command RemoveCommand on Cache 'clientSessions', writing keys [WrappedByteArray{bytes=0304090000000E\j\a\v\a\.\u\t\i\l\.\U\U\I\DBC9903F798\m85\/000000020000000C\l\e\a\s\t\S\i\g\B\i\t\s\$000000000B\m\o\s\t\S\i\g\B\i\t\s\$00168D0C\z8AB49FBA9B\C118A06A0DB\D82... (85 bytes), hashCode=73644551}] org.infinispan.remoting.RemoteException: ISPN000217: Received exception from vi-infinispan-0-53111, see cause for remote stack trace
at org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:21)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.addResponse(SingleTargetRequest.java:73)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:43)
at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1402)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1305)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445)
at org.jgroups.JChannel.up(JChannel.java:784)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:913)
at org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
at org.jgroups.protocols.Discovery.up(Discovery.java:300)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.infinispan.persistence.spi.PersistenceException: Error while removing string keys from database
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:234)
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.BytesObjectInput.readObject(BytesObjectInput.java:32)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:49)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:41)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 25 more
Caused by: java.sql.SQLNonTransientConnectionException: No operations allowed after connection closed.
We use JDBC_PING for cluster discovery and 2 nodes are active. They register themselves properly and everything works like a charm until the timeout kicks in.
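For completeness, the custom JDBC stack referenced below (custom-jgroups-tcp-jdbc.xml) is roughly a standard TCP stack with JDBC_PING pointing at the same MySQL instance; a simplified sketch (attribute values are assumptions, not the exact file):
<config xmlns="urn:org:jgroups">
    <TCP bind_port="7800"/>
    <JDBC_PING connection_url="jdbc:mysql://mysql:3306/infinispan"
               connection_username="keycloak"
               connection_password="password"
               connection_driver="com.mysql.cj.jdbc.Driver"/>
    <MERGE3/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <MFC/>
    <FRAG3/>
</config>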
The base configuration that I have placed is the following:
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:11.0 https://infinispan.org/schemas/infinispan-config-11.0.xsd
urn:infinispan:server:11.0 https://infinispan.org/schemas/infinispan-server-11.0.xsd"
xmlns="urn:infinispan:config:11.0"
xmlns:server="urn:infinispan:server:11.0">
<!--
Generic XML definition located under
https://docs.jboss.org/infinispan/11.0/configdocs/
-->
<jgroups>
<stack-file name="default-udp" path="default-jgroups.xml"/>
<stack-file name="default-tcp" path="default-jgroups-tcp.xml"/>
<stack-file name="gce" path="default-jgroups-google.xml"/>
<stack-file name="k8s" path="default-jgroups-kubernetes.xml"/>
<stack-file name="kc-udp" path="default-keycloak-jgroups-udp.xml"/>
<stack-file name="custom-k8s-jdbc" path="custom-jgroups-kubernetes-jdbc.xml"/>
<stack-file name="custom-tcp-jdbc" path="custom-jgroups-tcp-jdbc.xml"/>
</jgroups>
<cache-container name="default" statistics="${env.INFINISPAN_CACHE_STATISTICS:false}">
<serialization marshaller="org.infinispan.jboss.marshalling.commons.GenericJBossMarshaller">
<white-list>
<class>java.util.UUID</class>
<regex>org.keycloak.models.sessions.infinispan.*</regex>
</white-list>
</serialization>
<serialization marshaller="org.infinispan.commons.marshall.JavaSerializationMarshaller">
<white-list>
<class>java.util.UUID</class>
<regex>org.keycloak.models.sessions.infinispan.*</regex>
</white-list>
</serialization>
<transport cluster="${infinispan.cluster.name:cluster}" stack="${infinispan.cluster.stack:default-udp}"
node-name="${infinispan.node.name:}"/>
<replicated-cache-configuration name="sessions-cfg" mode="SYNC" start="EAGER"
statistics="${env.INFINISPAN_CACHE_STATISTICS:false}">
<state-transfer timeout="${infinispan.statetransfer.timeout:600000}"/>
<encoding media-type="application/x-jboss-marshalling"/>
<expiration lifespan="900000000000000000"/>
</replicated-cache-configuration>
<distributed-cache-configuration name="distributed-cache-cfg">
<encoding media-type="application/x-jboss-marshalling"/>
<expiration lifespan="900000000000000000"/>
<persistence passivation="false">
<string-keyed-jdbc-store shared="true" xmlns="urn:infinispan:config:store:jdbc:11.0">
<connection-pool properties-file="${env.PROPERTIES_FILE:/opt/infinispan/server/conf/connection-pool.properties}" />
<string-keyed-table drop-on-exit="false"
prefix="ISPN">
<id-column name="ID_COLUMN" type="VARCHAR(255)"/>
<!-- BLOB generates an error on KC; we use a column with a safe max size instead (up to ~65K per row)
<data-column name="DATA_COLUMN" type="BLOB" />
-->
<data-column name="DATA_COLUMN" type="VARBINARY(50000)"/>
<timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
<segment-column name="SEGMENT_COLUMN" type="INT"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</persistence>
<state-transfer timeout="${infinispan.statetransfer.timeout:600000}"/>
</distributed-cache-configuration>
<!--https://infinispan.org/docs/stable/titles/configuring/configuring.html#distributed-caches_clustered-caches-->
<!--https://infinispan.org/docs/stable/titles/configuring/configuring.html#configuring-jdbc-cache-stores_persistence-->
<distributed-cache name="sessions" owners="2" configuration="distributed-cache-cfg">
</distributed-cache>
<distributed-cache name="clientSessions" owners="2" configuration="distributed-cache-cfg">
</distributed-cache>
<distributed-cache name="authenticationSessions" owners="2" configuration="distributed-cache-cfg">
</distributed-cache>
</cache-container>
<!-- Original at v11 -->
<server xmlns="urn:infinispan:server:11.0">
<interfaces>
<interface name="public">
<inet-address value="${infinispan.bind.address:0.0.0.0}"/>
</interface>
</interfaces>
<socket-bindings default-interface="public" port-offset="0">
<socket-binding name="default" port="11222"/>
</socket-bindings>
<security>
<security-realms>
<security-realm name="default">
<properties-realm groups-attribute="Roles">
<user-properties path="users.properties" relative-to="infinispan.server.config.path"
plain-text="true"/>
<group-properties path="groups.properties" relative-to="infinispan.server.config.path"/>
</properties-realm>
</security-realm>
</security-realms>
</security>
<endpoints socket-binding="default" security-realm="default">
<hotrod-connector name="hotrod">
<authentication>
<sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA DIGEST-MD5 PLAIN"
qop="auth" server-name="infinispan"/>
</authentication>
</hotrod-connector>
<rest-connector name="rest">
<authentication mechanisms="DIGEST BASIC"/>
</rest-connector>
</endpoints>
</server>
</infinispan>
The thing is... what am I doing wrong?
There is not much information about this out there. Can anyone help?
Thank you!
Unfortunately I think you have stumbled across a bug in the persistence availability check that prevents stores from reconnecting once an exception is thrown (ISPN-13863). I have just created a PR; however, the fix will only be available in the Infinispan 14.x stream.
Related
I have two Wildfly 18 instances running locally: n1 and n2. I would like instance n2 to consume the messages produced by instance n1, as a step towards an HA scenario.
After reading the RH EAP docs,
I have done the following:
1- Defined an exposed JMS queue on n2. I also added security settings and a remote connection factory in the ActiveMQ subsystem:
[...]
<server name="default">
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
[...]
<jms-queue name="testQueue" entries="queue/test java:jboss/exported/jms/queue/test"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
</server>
[...]
2- I configured JGroups via TCPPING with an initial list of nodes to join the cluster, in order to achieve cluster discovery:
[...]
<protocol type="org.jgroups.protocols.TCPPING">
<property name="initial_hosts">127.0.0.1[8600]</property>
<property name="port_range">0</property>
</protocol>
[...]
3- Then I brought up the two instances, and there I get the following messages in the app logs:
(Thread-12 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6#7124120f)) AMQ221027: Bridge ClusterConnectionBridge#c6997b5 [name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf], temp=false]#2747e684 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge#c6997b5 [name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf], temp=false]#2747e684 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=<port_number>&host=localhost], discoveryGroupConfiguration=null]]::ClusterConnectionImpl#1775690639[nodeUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8323&host=localhost, address=jms, server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=<port_number>&host=localhost], discoveryGroupConfiguration=null]] is connected
But when I try to send messages from n1 to n2 using the following JNDI conf,
java.naming.factory.initial = org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url = remote://localhost:8323
java.naming.security.principal = ***
java.naming.security.credentials = ***
Connection Factory JNDI name = jms/RemoteConnectionFactory
Queue JNDI name = jms/queue/test
... I get this error after a certain timeout (~30s):
javax.naming.CommunicationException: WFNAM00018: Failed to connect to remote host [Root exception is java.io.IOException: JBREM000202: Abrupt close on Remoting connection 4ba0f2c1 to localhost/127.0.0.1:8323 of endpoint (anonymous)
I have tried to connect to the same queue using a simple JMS client (https://plugins.jetbrains.com/plugin/10949-jms-messenger), and I was actually able to connect, as I at least got the following error:
ERROR [com.my.app.Receiver] (Thread-14 (ActiveMQ-client-global-threads)) Unknown message: ActiveMQMessage[ID:5f71e993-f377-11ea-acfc-169f02eb582c]:PERSISTENT/ClientMessageImpl[messageID=442, durable=true, address=jms.queue.test,userID=5f71e993-f377-11ea-acfc-169f02eb582c,properties=TypedProperties[__AMQ_CID=5f684ca0-f377-11ea-acfc-169f02eb582c,_AMQ_ROUTING_TYPE=1]]
Could you please hint at what is wrong and explain why? Thanks a lot.
I solved this issue by working on the Wildfly and JNDI configuration. The error message was very generic but, at least in my case, the following Wildfly config:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:8.0">
<server name="default">
<http-acceptor name="http-acceptor-throughput" http-listener="messaging">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
...
<http-connector name="http-connector-throughput" socket-binding="messaging-throughput" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
...
<jms-queue name="test" entries="queue/test java:jboss/exported/jms/test"/>
<broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" broadcast-period="5000" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
...
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
</subsystem>
...
<subsystem xmlns="urn:jboss:domain:remoting:4.0">
<http-connector name="messaging-remoting-connector" connector-ref="messaging-http" security-realm="ApplicationRealm"/>
</subsystem>
...
<socket-binding-group ... >
...
<socket-binding name="messaging" port="8323"/>
<socket-binding name="messaging-throughput" port="8324"/>
...
</socket-binding-group>
worked with the following JNDI config:
java.naming.factory.initial = org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url = remote://localhost:8323
java.naming.security.principal = ***
java.naming.security.credentials = ***
Connection Factory JNDI name = jms/RemoteConnectionFactory
Queue JNDI name = jms/test
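For reference, a minimal standalone client using that JNDI configuration looks roughly like this (a sketch; host, user and password are placeholders):
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class QueueSender {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.wildfly.naming.client.WildFlyInitialContextFactory");
        env.put(Context.PROVIDER_URL, "remote://localhost:8323");
        env.put(Context.SECURITY_PRINCIPAL, "appuser");       // placeholder
        env.put(Context.SECURITY_CREDENTIALS, "apppassword"); // placeholder

        Context ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/test");

        try (Connection connection = cf.createConnection("appuser", "apppassword")) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello from n1"));
        }
    }
}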
Also, as the principal/credentials were not part of the ApplicationRealm, I started getting a 403 HTTP response code (upon calling the messaging endpoint). In order to get that working too, I had to add the user and related credential using the add-user.sh script (found in the Wildfly /bin folder).
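For example, something along these lines (user name, password and group are placeholders; -a targets the application realm):
./add-user.sh -a -u 'appuser' -p 'apppassword' -g 'guest'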
I have an EE app that I want to deploy to 2 Wildfly 13 instances in a cluster. I have an entity using @Cache (from Hibernate) and @NamedQuery with hints to use the cache as well: the entity can be queried both by id (which will use @Cache) and by another query (which uses the query hint).
The cache region used for the hint is "replicated-query".
I use Wildfly 13, so I have Hibernate 5.1.14 (non EE 8 preview mode), Infinispan 9.2.4, JGroups 4.0.11 and Java 10 (we can't go to Java 11 because of removals in the Unsafe class that some of our libs still depend on).
The app has 100+ EJBs and close to 150k LOC, so upgrading Wildfly is not an option for the moment.
The problem is: the replicated cache is not replicating; it does not even start as replicated.
"Infinispan replicated cache not replicating objects for read" is not helpful, nor is "Replicated infinispan cache with Wildfly 11".
I use JGroups with TCPPING (as the app will be deployed on a private cloud, we need to keep network traffic as low as possible, so UDP is not an option). The cluster forms well between the 2 Wildfly instances (confirmed by the logs and JMX), but the replicated cache does not start on deployment, as if it could not find a transport.
The cache name I use for the "replicated-cache" type makes no difference, including the pre-configured "replicated-query".
Using the "non deprecated configuration" for JGroups, as mentioned by Paul Ferraro here, did not allow the cluster to form (which in my case is a step back, because the cluster does form with my configuration).
One weird thing though: the UpdateTimestamp cache, configured as replicated, is replicating (confirmed by logs and JMX: the name of the region is suffixed with repl_async).
The caches are in invalidation_sync by default and work fine, as the SQL query is only issued once with the same parameters (confirmed by logs and statistics).
For the moment (for test/debug purposes), I deploy both instances on my local machine: omega1 with a port offset of 20000 and omega2 with a port offset of 30000.
I haven't tried a distributed cache because, from what I read, I would face the same kind of issue.
Here is the relevant part of the entity:
@Entity
@Table(name = "my_entity", schema = "public")
@NamedQueries({
    @NamedQuery(name = "myEntityTest", query = "select p from MyEntity p where p.value = :val", hints = {
        @QueryHint(name = org.hibernate.annotations.QueryHints.CACHEABLE, value = "true"),
        @QueryHint(name = org.hibernate.annotations.QueryHints.CACHE_REGION, value = "RPL-myEntityTest")
    })
})
@Cache(usage = CacheConcurrencyStrategy.NONE, region = "replicated-entity")
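For context, the query is executed through a standard JPA call in an EJB, roughly like this (a sketch; the repository bean is hypothetical, not part of the actual app):
@Stateless
public class MyEntityRepository {

    @PersistenceContext
    private EntityManager em;

    public List<MyEntity> findByValue(String value) {
        // Uses the cacheable named query defined on the entity above
        return em.createNamedQuery("myEntityTest", MyEntity.class)
                 .setParameter("val", value)
                 .getResultList();
    }
}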
Here is the jgroups subsystem portion of standalone-full-ha.xml:
<subsystem xmlns="urn:jboss:domain:jgroups:6.0">
<channels default="omega-ee">
<channel name="omega-ee" stack="tcpping" cluster="omega-ejb" statistics-enabled="true"/>
</channels>
<stacks>
<stack name="tcpping">
<transport type="TCP" statistics-enabled="true" socket-binding="jgroups-tcp"/>
<protocol type="org.jgroups.protocols.TCPPING">
<property name="port_range">
10
</property>
<property name="discovery_rsp_expiry_time">
3000
</property>
<property name="send_cache_on_join">
true
</property>
<property name="initial_hosts">
localhost[27600],localhost[37600]
</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
</stacks>
</subsystem>
Here is the socket-binding for jgroups-tcp:
<socket-binding name="jgroups-tcp" interface="private" port="7600"/>
And this is the infinispan hibernate cache container section of standalone-full-ha.xml:
<cache-container name="hibernate" module="org.infinispan.hibernate-cache">
<transport channel="omega-ee" lock-timeout="60000"/>
<local-cache name="local-query">
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<invalidation-cache name="entity">
<transaction mode="NON_XA"/>
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</invalidation-cache>
<replicated-cache name="replicated-query">
<transaction mode="NON_XA"/>
</replicated-cache>
<replicated-cache name="RPL-myEntityTest" statistics-enabled="true">
<transaction mode="BATCH"/>
</replicated-cache>
<replicated-cache name="replicated-entity" statistics-enabled="true">
<transaction mode="NONE"/>
</replicated-cache>
</cache-container>
And I've set the following properties in persistence.xml:
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQL9Dialect"/>
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.format_sql" value="true"/>
</properties>
I expect:
the replicated caches to start on deployment (maybe even on server start, since they are configured in the Infinispan subsystem)
cached data to be replicated between nodes on read, and invalidated cluster-wide on update/expiration/invalidation
data to be retrieved from the cache (locally, because it should have been replicated).
I feel that I'm not so far from the expected result, but I'm missing something.
Any help will be much appreciated!
Update 1:
I just tried what @Bela Ban suggested and set initial_hosts to localhost[7600] on both nodes, with no success: the cluster does not form. I use a port offset to start both nodes on my local machine to avoid port conflicts.
With localhost[7600] on both hosts, how would one node know on which port to connect to the other one, since I need to use a port offset?
I even tried localhost[7600],localhost[37600] on the node I start with offset 20000, and localhost[7600],localhost[27600] on the one I start with offset 30000. The cluster forms but the cache does not replicate.
Update 2:
The entity's cache is in invalidation_sync and works as expected, which means JGroups is working and confirms the cluster is well formed, so my guess is that the issue is Infinispan- or Wildfly-related.
If you use port 7600 (in jgroups-tcp.xml), then listing ports 27600 and 37600 won't work: localhost[27600],localhost[37600] should be localhost[7600].
As well as correcting the ports as indicated in the other answer, I think you need <global-state/> in your <cache-container>, e.g.:
<cache-container name="hibernate" module="org.infinispan.hibernate-cache">
<transport channel="omega-ee" lock-timeout="60000"/>
<global-state/>
<local-cache name="local-query">
<object-memory size="10000"/>
...etc...
I have configured Hibernate Search to use Infinispan with a file-system-based cache store, to persist the indexes on the file system instead of in memory.
Now I want to configure S3 instead of the file system, but I am not able to find the correct configuration for this.
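For reference, the Hibernate Search side points the Lucene indexes at Infinispan with properties roughly like these (a sketch; the resource name is an assumption and the actual values in my project may differ):
hibernate.search.default.directory_provider = infinispan
hibernate.search.infinispan.configuration_resourcename = infinispan.xml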
My infinispan.xml file is:
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd"
xmlns="urn:infinispan:config:6.0">
<global>
<globalJmxStatistics enabled="false" />
<!-- <transport clusterName="storage-test-cluster" /> -->
<shutdown hookBehavior="DONT_REGISTER" />
</global>
<default>
<storeAsBinary
enabled="false" />
<locking
isolationLevel="READ_COMMITTED"
lockAcquisitionTimeout="20000"
writeSkewCheck="false"
concurrencyLevel="5000"
useLockStriping="false" />
<invocationBatching
enabled="false" />
</default>
<namedCache name="LuceneIndexesMetadata">
<persistence passivation="false">
<singleFile
fetchPersistentState="true"
preload="true"
purgeOnStartup="false"
shared="true"
ignoreModifications="false"
location="C:\\infinispan">
</singleFile>
</persistence>
</namedCache>
<namedCache name="LuceneIndexesData">
<persistence passivation="false">
<singleFile
fetchPersistentState="true"
preload="true"
purgeOnStartup="false"
shared="true"
ignoreModifications="false"
location="C:\\infinispan">
</singleFile>
</persistence>
</namedCache>
<namedCache name="LuceneIndexesLocking">
<!-- No CacheLoader configured here -->
</namedCache>
</infinispan>
Can anyone help me configure this file to use Amazon S3 as the cache store?
The specific versions of Hibernate Search and Infinispan which you're using are extremely old. Specifically, Infinispan didn't support storage on Amazon S3 in version 6.
I would suggest upgrading to some more recent version which is still being maintained.
As of writing this, you could use Infinispan 9.1.5.Final with Hibernate Search 5.8.2.Final.
Below is the Ehcache configuration we are using. We use JGroups for cache replication.
ehcache.xml
<defaultCache
maxElementsInMemory="10000"
eternal="false"
timeToIdleSeconds="1200"
timeToLiveSeconds="86400"
overflowToDisk="true"
diskSpoolBufferSizeMB="30"
maxElementsOnDisk="10000000"
diskPersistent="false"
diskExpiryThreadIntervalSeconds="120"
memoryStoreEvictionPolicy="LRU">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true,replicatePuts=true,replicateUpdates=true,replicateUpdatesViaCopy=true,replicateRemovals=true" />
</defaultCache>
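For completeness, the JGroups stack file below is wired into ehcache.xml at the CacheManager level through the peer provider factory, roughly like this (a sketch; the actual file path in our setup may differ):
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
    properties="file=jgroups_tcp_config.xml"/>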
jgroups_tcp_config.xml
<?xml version="1.0" encoding="UTF-8"?>
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">
<!--Configure node ip inside bind_addr-->
<TCP bind_addr="host1" bind_port="7831" max_bundle_size="9999999"/>
<!--Configure nodes inside 'initial_hosts' property-->
<TCPPING timeout="3000" initial_hosts="host1[7831],host2[7831]" port_range="1" num_initial_members="3"/>
<FRAG2 frag_size="9999999"/>
<MERGE3 max_interval="30000" min_interval="10000"/>
<FD timeout="3000" max_tries="10"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK use_mcast_xmit="false" exponential_backoff="500" discard_delivered_msgs="false"/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="5000" view_bundling="true"/>
</config>
Initially, from the logs, we can see that the nodes are clustering, and we can see that messages are being replicated across nodes. But after some time, messages are no longer replicated, resulting in erroneous behavior. Is there any problem with the JGroups configuration we are using?
We also tried using NAKACK2 (we simply replaced NAKACK with NAKACK2 in the configuration above), but then messages are not replicated across nodes at all. Not sure where we are going wrong.
We are facing the above issue in the AWS cloud. The Ehcache JGroups TCP setup will not work in that environment because the cloud network doesn't support multicast, so node discovery will not happen. To address this we use jgroups_s3_config.xml instead of jgroups_tcp_config.xml in the AWS cloud. With the following jgroups_s3_config.xml configuration we have addressed the issue:
<?xml version="1.0" encoding="UTF-8"?>
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
<TCP loopback="true" bind_port="7800"/>
<S3_PING location="s3 bucket name should be in the same region in which app servers are running"
access_key="s3 bucket access key from aws credential file"
secret_access_key="s3 bucket secret access key from aws credential file" timeout="10000" num_initial_members="2"/>
<FRAG2/>
<MERGE2 min_interval="10000" max_interval="30000"/>
<FD_ALL timeout="12000" interval="3000" timeout_check_interval="4000"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK2 use_mcast_xmit="false" discard_delivered_msgs="false"/>
<UNICAST2 timeout="300,600,1200"/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="40K"/>
<pbcast.GMS print_local_addr="true" join_timeout="5000" view_bundling="true"/>
</config>
Additionally, we have to set JAVA_OPTS:
export JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
I have an embedded ActiveMQ under Apache TomEE. TomEE configures JMS in a file called tomee.xml; in my case, it's configured like this:
<Resource id="Default JMS Resource Adapter" type="ActiveMQResourceAdapter">
BrokerXmlConfig = broker:(tcp://localhost:61616)?persistent=true
ServerUrl = tcp://localhost:61616
DataSource = MyDataSource
</Resource>
Now I'd like to specify custom memory settings, which is done in the activemq.xml file. TomEE can load the activemq.xml configuration using Spring XBean if I use an xbean:file: URI in BrokerXmlConfig, like this (I think):
<Resource id="Default JMS Resource Adapter" type="ActiveMQResourceAdapter">
BrokerXmlConfig = xbean:file:conf/activemq.xml
ServerUrl = tcp://localhost:61616
DataSource = MyDataSource
</Resource>
See http://tomee.apache.org/jms-resources-and-mdb-container.html
Is that right?
I've added the 5 jars into tomee's lib path, just as indicated in the link above.
And then, I have an activemq.xml like this
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding
copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may
obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed
on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the
License. -->
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:amq="http://activemq.apache.org/schema/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry queue=">" producerFlowControl="false" prioritizedMessages="true" useCache="false" expireMessagesPeriod="0" queuePrefetch="1" />
<pendingQueuePolicy>
<vmQueueCursor />
</pendingQueuePolicy>
</policyEntries>
</policyMap>
</destinationPolicy>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="128 mb" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb" />
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb" />
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="anythingHere" uri="broker:(tcp://localhost:61616)?persistent=true"/>
</transportConnectors>
</broker>
</beans>
But obviously I am doing something wrong here, because JMS does not start and returns an error message like:
SEVERE: Failed to connect to broker [tcp://localhost:61616]: Could not connect to
broker URL: tcp://localhost:61616. Reason: java.net.ConnectException: Connection refused
javax.jms.JMSException: Could not connect to broker URL: tcp://localhost:61616. Reason:
java.net.ConnectException: Connection refused
what am I missing here?
UPDATE - more info
Then I added the absolute path to the activemq.xml file, because I could not make it work from inside Eclipse (I know, this is probably more Eclipse's fault than anything else).
Then I changed some invalid XML, such as
<!-- <destinationPolicy> -->
<!-- <policyMap> -->
<!-- <policyEntries> -->
<!-- <policyEntry queue=">" producerFlowControl="false" prioritizedMessages="true" useCache="false" expireMessagesPeriod="0" queuePrefetch="1" /> -->
<!-- <pendingQueuePolicy> -->
<!-- <vmQueueCursor /> -->
<!-- </pendingQueuePolicy> -->
<!-- </policyEntries> -->
<!-- </policyMap> -->
<!-- </destinationPolicy> -->
and replaced it with the site's default:
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true">
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
<!-- Use VM cursor for better latency
For more information, see:
http://activemq.apache.org/message-cursors.html
<pendingQueuePolicy>
<vmQueueCursor/>
</pendingQueuePolicy>
-->
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
After adding kahadb from the Maven repository, switching from activemq-all to activemq-spring, and defining the data source bean in activemq.xml as
</broker>
<bean id="oracle-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="oracle.jdbc.OracleDriver"/>
<property name="url" value="jdbc:oracle:thin:#localhost:1521:XE"/>
<property name="username" value="xxx"/>
<property name="password" value="xxx"/>
<property name="poolPreparedStatements" value="true"/>
</bean>
</beans>
Finally... I am getting a new error:
SEVERE: Failed to load: URL [file:/home/leoks/EclipseIndigo/workspace2/Servers /TomEE1.6.0-STABLE-config/activemq.xml], reason: Error creating bean with name 'org.apache.activemq.xbean.XBeanBrokerService#0' defined in URL [file:/home/leoks /EclipseIndigo/workspace2/Servers/TomEE1.6.0-STABLE-config/activemq.xml]: Invocation of init method failed; nested exception is java.io.IOException: Transport Connector could not be registered in JMX: Transport scheme NOT recognized: [broker]
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.apache.activemq.xbean.XBeanBrokerService#0' defined in URL [file:/home/leoks/EclipseIndigo/workspace2/Servers/TomEE1.6.0-STABLE-config/activemq.xml]: Invocation of init method failed; nested exception is java.io.IOException: Transport Connector could not be registered in JMX: Transport scheme NOT recognized: [broker]
After some googling, some solutions seem to be somehow related to ActiveMQ's inability to load the XML (makes sense, since XML is such a recent technology, invented in '96, almost 20 years ago).
I am pulling my hair out.
I think your transport connector configuration should look like this (the broker: prefix is a broker URI, not a transport scheme, which is why the log complains "Transport scheme NOT recognized: [broker]"):
<transportConnectors>
<transportConnector name="tcp" uri="tcp://0.0.0.0:61616"/>
</transportConnectors>
See the documentation for connectors.