I have a Java EE app I want to deploy to two WildFly 13 instances in a cluster. I have an entity using @Cache (from Hibernate) and @NamedQuery with hints to use the cache as well: the entity can be queried both by id (which uses @Cache) and by another query (which uses the query hint).
The cache region used for the hint is "replicated-query".
I use WildFly 13, so I have Hibernate 5.1.14 (not the EE 8 preview mode), Infinispan 9.2.4 and JGroups 4.0.11, on Java 10 (we can't move to Java 11 because of removals in the Unsafe class that some of our libraries still depend on).
The app is 100+ EJBs and close to 150k LOC, so upgrading WildFly is not an option for the moment.
The problem is: the replicated cache is not replicating; it is not even starting as replicated.
The question "Infinispan replicated cache not replicating objects for read" is not helpful, nor is "Replicated infinispan cache with Wildfly 11".
I use JGroups with TCPPING (the app will be deployed on a private cloud and we need to keep network traffic as low as possible, so UDP is not an option). The cluster forms correctly between the two WildFly instances (confirmed by the logs and JMX), but the replicated cache does not start on deployment, as if it could not find a transport.
The cache name I use for the "replicated-cache" type makes no difference, including the pre-configured "replicated-query".
Using the "non deprecated configuration" for JGroups, as mentioned by Paul Ferraro here, did not allow the cluster to form (which in my case is a step back, because the cluster does form with my configuration).
One weird thing though: the UpdateTimestamps cache, configured as replicated, is replicating (confirmed by logs and JMX: the name of the region is suffixed with repl_async).
The caches are in invalidation_sync by default and work fine, as the SQL query is only issued once for the same parameters (confirmed by logs and statistics).
For the moment (for test/debug purposes), I deploy both instances locally: omega1 with a port offset of 20000 and omega2 with a port offset of 30000.
I haven't tried a distributed cache because, from what I've read, I would face the same kind of issue.
Here is the relevant part of the entity:
@Entity
@Table(name = "my_entity", schema = "public")
@NamedQueries({
@NamedQuery(name = "myEntityTest", query = "select p from MyEntity p where p.value = :val", hints = {
@QueryHint(name = org.hibernate.annotations.QueryHints.CACHEABLE, value = "true"),
@QueryHint(name = org.hibernate.annotations.QueryHints.CACHE_REGION, value = "RPL-myEntityTest")
})
})
@Cache(usage = CacheConcurrencyStrategy.NONE, region = "replicated-entity")
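For context, here is roughly how the two cache paths are exercised from an EJB (a minimal sketch; the bean name, the Long id type and the injected EntityManager are assumptions, while the query name, parameter and regions come from the snippet above):

import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class MyEntityService {

    @PersistenceContext
    private EntityManager em;

    // Lookup by primary key: served by the entity region ("replicated-entity").
    public MyEntity byId(Long id) {
        return em.find(MyEntity.class, id);
    }

    // Lookup via the cacheable named query: results go to region "RPL-myEntityTest".
    public List<MyEntity> byValue(String val) {
        return em.createNamedQuery("myEntityTest", MyEntity.class)
                .setParameter("val", val)
                .getResultList();
    }
}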
Here is the jgroups subsystem portion of standalone-full-ha.xml:
<subsystem xmlns="urn:jboss:domain:jgroups:6.0">
<channels default="omega-ee">
<channel name="omega-ee" stack="tcpping" cluster="omega-ejb" statistics-enabled="true"/>
</channels>
<stacks>
<stack name="tcpping">
<transport type="TCP" statistics-enabled="true" socket-binding="jgroups-tcp"/>
<protocol type="org.jgroups.protocols.TCPPING">
<property name="port_range">10</property>
<property name="discovery_rsp_expiry_time">3000</property>
<property name="send_cache_on_join">true</property>
<property name="initial_hosts">localhost[27600],localhost[37600]</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
</stacks>
</subsystem>
Here is the socket-binding for jgroups-tcp:
<socket-binding name="jgroups-tcp" interface="private" port="7600"/>
And this is the Infinispan "hibernate" cache-container section of standalone-full-ha.xml:
<cache-container name="hibernate" module="org.infinispan.hibernate-cache">
<transport channel="omega-ee" lock-timeout="60000"/>
<local-cache name="local-query">
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<invalidation-cache name="entity">
<transaction mode="NON_XA"/>
<object-memory size="10000"/>
<expiration max-idle="100000"/>
</invalidation-cache>
<replicated-cache name="replicated-query">
<transaction mode="NON_XA"/>
</replicated-cache>
<replicated-cache name="RPL-myEntityTest" statistics-enabled="true">
<transaction mode="BATCH"/>
</replicated-cache>
<replicated-cache name="replicated-entity" statistics-enabled="true">
<transaction mode="NONE"/>
</replicated-cache>
</cache-container>
And I've set the following properties in persistence.xml:
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQL9Dialect"/>
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.format_sql" value="true"/>
</properties>
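To confirm which regions each node actually hits, it can help to enable hibernate.generate_statistics and dump the counters somewhere convenient (a diagnostic sketch, not part of the original setup; region names may carry a prefix such as hibernate.cache.region_prefix, so adjust them if needed):

import javax.persistence.EntityManager;
import org.hibernate.Session;
import org.hibernate.stat.SecondLevelCacheStatistics;
import org.hibernate.stat.Statistics;

public final class CacheDiagnostics {

    private CacheDiagnostics() {
    }

    // Requires <property name="hibernate.generate_statistics" value="true"/> in persistence.xml.
    public static void dump(EntityManager em) {
        Statistics stats = em.unwrap(Session.class).getSessionFactory().getStatistics();

        System.out.println("2LC hits/misses: "
                + stats.getSecondLevelCacheHitCount() + "/" + stats.getSecondLevelCacheMissCount());
        System.out.println("Query cache hits/misses: "
                + stats.getQueryCacheHitCount() + "/" + stats.getQueryCacheMissCount());

        // Per-region element counts for the regions used above (null if the region never started).
        for (String region : new String[] {"replicated-entity", "RPL-myEntityTest"}) {
            SecondLevelCacheStatistics regionStats = stats.getSecondLevelCacheStatistics(region);
            if (regionStats != null) {
                System.out.println(region + " elements in memory: " + regionStats.getElementCountInMemory());
            }
        }
    }
}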
I expect:
the replicated caches to start on deployment (maybe even on server start, since they are configured in the Infinispan subsystem)
cached data to be replicated between nodes on read and invalidated cluster wide on update/expiration/invalidation
data to be retrieved from cache (local because it should have been replicated).
I feel that I'm not so far from the expected result, but I'm missing something.
Any help will be much appreciated!
Update 1:
I just tried what @Bela Ban suggested and set initial_hosts to localhost[7600] on both nodes, with no success: the cluster is not forming. I use port offsets to start both nodes on my local machine and avoid port conflicts.
With localhost[7600] on both hosts, how would one node know which port to connect to on the other one, since I need to use port offsets?
I even tried localhost[7600],localhost[37600] on the node I start with offset 20000 and localhost[7600],localhost[27600] on the one I start with offset 30000. The cluster forms but the cache is not replicating.
Update 2:
The entity cache is in invalidation_sync and works as expected, which means JGroups is working and confirms the cluster is well formed, so my guess is that the issue is Infinispan- or WildFly-related.
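One way to see which caches WildFly actually started on the hibernate container is to query the management model over JMX (a diagnostic sketch; the jboss.as object-name pattern assumes the default JMX subsystem configuration that exposes the resolved model, and the code must run inside the server, e.g. from a startup singleton):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public final class HibernateCacheContainerDump {

    private HibernateCacheContainerDump() {
    }

    // Lists every cache resource registered under the "hibernate" cache container.
    public static void listCaches() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName pattern = new ObjectName("jboss.as:subsystem=infinispan,cache-container=hibernate,*");
        for (ObjectName name : server.queryNames(pattern, null)) {
            System.out.println(name);
        }
    }
}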
If you use port 7600 (as defined by the jgroups-tcp socket binding), then listing ports 27600 and 37600 won't work: localhost[27600],localhost[37600] should be localhost[7600].
As well as correcting the ports as indicated in the other answer, I think you need <global-state/> in your <cache-container>, e.g.:
<cache-container name="hibernate" module="org.infinispan.hibernate-cache">
<transport channel="omega-ee" lock-timeout="60000"/>
<global-state/>
<local-cache name="local-query">
<object-memory size="10000"/>
...etc...
Related
We have Keycloak in HA, configured with an external Infinispan cluster for the sessions, clientSessions and authenticationSessions caches.
Everything works in containers, following an approach similar to the one in https://github.com/albertoSoto/keycloak-infinispan-cluster.
The project runs KC 15.0.2 on WildFly (the migration to Quarkus is still to be done) and uses Infinispan 11.0.9 for external data persistence to MySQL 5.7. The driver is the latest one, https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar, as suggested by Oracle, and the connection driver class is com.mysql.cj.jdbc.Driver.
The project starts fine, but after a random amount of time MySQL drops the connection and the Infinispan cluster can't reconnect.
Trying to make it work, I was able to use an Agroal configuration based on a properties file, shown below.
The content of that Agroal properties file, which overrides the JPA behaviour in the project, is the following:
org.infinispan.agroal.metricsEnabled=false
org.infinispan.agroal.minSize=10
org.infinispan.agroal.maxSize=100
org.infinispan.agroal.initialSize=20
org.infinispan.agroal.acquisitionTimeout_s=1
org.infinispan.agroal.validationTimeout_m=1
org.infinispan.agroal.leakTimeout_s=10
org.infinispan.agroal.reapTimeout_m=10
org.infinispan.agroal.maxLifetime_m=10
org.infinispan.agroal.autoCommit=true
org.infinispan.agroal.jdbcTransactionIsolation=READ_COMMITTED
org.infinispan.agroal.jdbcUrl=jdbc:mysql://mysql:3306/infinispan
org.infinispan.agroal.driverClassName=com.mysql.cj.jdbc.Driver
org.infinispan.agroal.principal=keycloak
org.infinispan.agroal.credential=password
The error shown after the connection is closed by the database is the following:
[1;31m21:55:31,052 ERROR (jgroups-319,vi-infinispan-1-5379) [org.infinispan.interceptors.impl.InvocationContextInterceptor] ISPN000136: Error executing command RemoveCommand on Cache 'clientSessions', writing keys [WrappedByteArray{bytes=0304090000000E\j\a\v\a\.\u\t\i\l\.\U\U\I\DBC9903F798\m85\/000000020000000C\l\e\a\s\t\S\i\g\B\i\t\s\$000000000B\m\o\s\t\S\i\g\B\i\t\s\$00168D0C\z8AB49FBA9B\C118A06A0DB\D82... (85 bytes), hashCode=73644551}] org.infinispan.remoting.RemoteException: ISPN000217: Received exception from vi-infinispan-0-53111, see cause for remote stack trace
at org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:21)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.addResponse(SingleTargetRequest.java:73)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:43)
at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1402)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1305)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445)
at org.jgroups.JChannel.up(JChannel.java:784)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:913)
at org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
at org.jgroups.protocols.Discovery.up(Discovery.java:300)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.infinispan.persistence.spi.PersistenceException: Error while removing string keys from database
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:234)
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.BytesObjectInput.readObject(BytesObjectInput.java:32)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:49)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:41)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 25 more
Caused by: java.sql.SQLNonTransientConnectionException: No operations allowed after connection closed.
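Since the failures start when MySQL closes idle connections, one thing worth checking is that the pool's maxLifetime/reapTimeout (10 minutes in the properties above) stay below MySQL's wait_timeout. A throwaway check, assuming direct JDBC access with the same URL and credentials as the Agroal file:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class WaitTimeoutCheck {

    public static void main(String[] args) throws Exception {
        // Same URL/credentials as in the Agroal properties file above.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://mysql:3306/infinispan", "keycloak", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'wait_timeout'")) {
            while (rs.next()) {
                // Value is in seconds; if it is below the pool's maxLifetime, the server
                // will close connections the pool still considers valid.
                System.out.println(rs.getString(1) + " = " + rs.getString(2) + " seconds");
            }
        }
    }
}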
We use JDBC_PING for the cluster connection and the 2 nodes are active. They register themselves properly and everything works like a charm until the timeout kicks in.
The base configuration that I have in place is the following:
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:11.0 https://infinispan.org/schemas/infinispan-config-11.0.xsd
urn:infinispan:server:11.0 https://infinispan.org/schemas/infinispan-server-11.0.xsd"
xmlns="urn:infinispan:config:11.0"
xmlns:server="urn:infinispan:server:11.0">
<!--
Generic XML definition located under
https://docs.jboss.org/infinispan/11.0/configdocs/
-->
<jgroups>
<stack-file name="default-udp" path="default-jgroups.xml"/>
<stack-file name="default-tcp" path="default-jgroups-tcp.xml"/>
<stack-file name="gce" path="default-jgroups-google.xml"/>
<stack-file name="k8s" path="default-jgroups-kubernetes.xml"/>
<stack-file name="kc-udp" path="default-keycloak-jgroups-udp.xml"/>
<stack-file name="custom-k8s-jdbc" path="custom-jgroups-kubernetes-jdbc.xml"/>
<stack-file name="custom-tcp-jdbc" path="custom-jgroups-tcp-jdbc.xml"/>
</jgroups>
<cache-container name="default" statistics="${env.INFINISPAN_CACHE_STATISTICS:false}">
<serialization marshaller="org.infinispan.jboss.marshalling.commons.GenericJBossMarshaller">
<white-list>
<class>java.util.UUID</class>
<regex>org.keycloak.models.sessions.infinispan.*</regex>
</white-list>
</serialization>
<serialization marshaller="org.infinispan.commons.marshall.JavaSerializationMarshaller">
<white-list>
<class>java.util.UUID</class>
<regex>org.keycloak.models.sessions.infinispan.*</regex>
</white-list>
</serialization>
<transport cluster="${infinispan.cluster.name:cluster}" stack="${infinispan.cluster.stack:default-udp}"
node-name="${infinispan.node.name:}"/>
<replicated-cache-configuration name="sessions-cfg" mode="SYNC" start="EAGER"
statistics="${env.INFINISPAN_CACHE_STATISTICS:false}">
<state-transfer timeout="${infinispan.statetransfer.timeout:600000}"/>
<encoding media-type="application/x-jboss-marshalling"/>
<expiration lifespan="900000000000000000"/>
</replicated-cache-configuration>
<distributed-cache-configuration name="distributed-cache-cfg">
<encoding media-type="application/x-jboss-marshalling"/>
<expiration lifespan="900000000000000000"/>
<persistence passivation="false">
<string-keyed-jdbc-store shared="true" xmlns="urn:infinispan:config:store:jdbc:11.0">
<connection-pool properties-file="${env.PROPERTIES_FILE:/opt/infinispan/server/conf/connection-pool.properties}" />
<string-keyed-table drop-on-exit="false"
prefix="ISPN">
<id-column name="ID_COLUMN" type="VARCHAR(255)"/>
<!-- Blob generates error on KC. We increase it to a safe max size (65K per row)
<data-column name="DATA_COLUMN" type="BLOB" />
-->
<data-column name="DATA_COLUMN" type="VARBINARY(50000)"/>
<timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
<segment-column name="SEGMENT_COLUMN" type="INT"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</persistence>
<state-transfer timeout="${infinispan.statetransfer.timeout:600000}"/>
</distributed-cache-configuration>
<!--https://infinispan.org/docs/stable/titles/configuring/configuring.html#distributed-caches_clustered-caches-->
<!--https://infinispan.org/docs/stable/titles/configuring/configuring.html#configuring-jdbc-cache-stores_persistence-->
<distributed-cache name="sessions" owners="2" configuration="distributed-cache-cfg">
</distributed-cache>
<distributed-cache name="clientSessions" owners="2" configuration="distributed-cache-cfg">
</distributed-cache>
<distributed-cache name="authenticationSessions" owners="2" configuration="distributed-cache-cfg">
</distributed-cache>
</cache-container>
<!-- Original at v11 - -->
<server xmlns="urn:infinispan:server:11.0">
<interfaces>
<interface name="public">
<inet-address value="${infinispan.bind.address:0.0.0.0}"/>
</interface>
</interfaces>
<socket-bindings default-interface="public" port-offset="0">
<socket-binding name="default" port="11222"/>
</socket-bindings>
<security>
<security-realms>
<security-realm name="default">
<properties-realm groups-attribute="Roles">
<user-properties path="users.properties" relative-to="infinispan.server.config.path"
plain-text="true"/>
<group-properties path="groups.properties" relative-to="infinispan.server.config.path"/>
</properties-realm>
</security-realm>
</security-realms>
</security>
<endpoints socket-binding="default" security-realm="default">
<hotrod-connector name="hotrod">
<authentication>
<sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA DIGEST-MD5 PLAIN"
qop="auth" server-name="infinispan"/>
</authentication>
</hotrod-connector>
<rest-connector name="rest">
<authentication mechanisms="DIGEST BASIC"/>
</rest-connector>
</endpoints>
</server>
</infinispan>
The thing is: what am I doing wrong?
There is not much information about this out there. Can anyone help?
Thank you!
Unfortunately, I think you have stumbled across a bug with the persistence availability check that prevents stores from reconnecting if an exception is thrown (ISPN-13863). I have just created a PR; however, the fix will only be available in the Infinispan 14.x stream.
I have two WildFly 18 instances running locally: n1 and n2. I would like instance n2 to consume instance n1's produced messages, as a step towards an HA scenario.
After reading the RH EAP docs, I have done the following:
1- Defined an exposed JMS queue on n2. I also added security settings and a remote connection factory in the ActiveMQ subsystem:
[...]
<server name="default">
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
[...]
<jms-queue name="testQueue" entries="queue/test java:jboss/exported/jms/queue/test"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
</server>
[...]
2- I configured JGroups via TCPPING with an initial list of nodes to join the cluster, in order to achieve cluster discovery:
[...]
<protocol type="org.jgroups.protocols.TCPPING">
<property name="initial_hosts">127.0.0.1[8600]</property>
<property name="port_range">0</property>
</protocol>
[...]
3- Then I brought up the two instances, and I get the following messages in the app logs:
(Thread-12 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6#7124120f)) AMQ221027: Bridge ClusterConnectionBridge#c6997b5 [name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf], temp=false]#2747e684 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge#c6997b5 [name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf], temp=false]#2747e684 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=<port_number>&host=localhost], discoveryGroupConfiguration=null]]::ClusterConnectionImpl#1775690639[nodeUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8323&host=localhost, address=jms, server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=<port_number>&host=localhost], discoveryGroupConfiguration=null]] is connected
But when I try to send messages from n1 to n2 using the following JNDI conf,
java.naming.factory.initial = org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url = remote://localhost:8323
java.naming.security.principal = ***
java.naming.security.credentials = ***
Connection Factory JNDI name = jms/RemoteConnectionFactory
Queue JNDI name = jms/queue/test
... I get this error after a certain timeout (~30s):
javax.naming.CommunicationException: WFNAM00018: Failed to connect to remote host [Root exception is java.io.IOException: JBREM000202: Abrupt close on Remoting connection 4ba0f2c1 to localhost/127.0.0.1:8323 of endpoint (anonymous)
I have tried to connect to the same queue using a simple JMS client (https://plugins.jetbrains.com/plugin/10949-jms-messenger), and I was actually able to connect, as I at least got the following error:
ERROR [com.my.app.Receiver] (Thread-14 (ActiveMQ-client-global-threads)) Unknown message: ActiveMQMessage[ID:5f71e993-f377-11ea-acfc-169f02eb582c]:PERSISTENT/ClientMessageImpl[messageID=442, durable=true, address=jms.queue.test,userID=5f71e993-f377-11ea-acfc-169f02eb582c,properties=TypedProperties[__AMQ_CID=5f684ca0-f377-11ea-acfc-169f02eb582c,_AMQ_ROUTING_TYPE=1]]
Could you please give me a hint about what is wrong and explain why? Thanks a lot.
I solved this issue by working on the WildFly and JNDI configuration. The error message was very generic, but at least in my case the following WildFly config:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:8.0">
<server name="default">
<http-acceptor name="http-acceptor-throughput" http-listener="messaging">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
...
<http-connector name="http-connector-throughput" socket-binding="messaging-throughput" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
...
<jms-queue name="test" entries="queue/test java:jboss/exported/jms/test"/>
<broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" broadcast-period="5000" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
...
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
</subsystem>
...
<subsystem xmlns="urn:jboss:domain:remoting:4.0">
<http-connector name="messaging-remoting-connector" connector-ref="messaging-http" security-realm="ApplicationRealm"/>
</subsystem>
...
<socket-binding-group ... >
...
<socket-binding name="messaging" port="8323"/>
<socket-binding name="messaging-throughput" port="8324"/>
...
</socket-binding-group>
worked with the following JNDI config:
java.naming.factory.initial = org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url = remote://localhost:8323
java.naming.security.principal = ***
java.naming.security.credentials = ***
Connection Factory JNDI name = jms/RemoteConnectionFactory
Queue JNDI name = jms/test
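For completeness, a minimal standalone client sketch against this JNDI setup (the class name, user and payload are made up for illustration; the JNDI names and provider URL come from the config above, and wildfly-naming-client plus the JMS client libraries must be on the classpath):

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class RemoteQueueSendTest {

    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put("java.naming.factory.initial", "org.wildfly.naming.client.WildFlyInitialContextFactory");
        env.put("java.naming.provider.url", "remote://localhost:8323");
        env.put("java.naming.security.principal", "appuser");      // user created with add-user.sh
        env.put("java.naming.security.credentials", "apppassword");

        InitialContext ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/test");

        Connection connection = cf.createConnection("appuser", "apppassword");
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello from the remote client"));
        } finally {
            connection.close();
            ctx.close();
        }
    }
}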
Also, as the principal/credentials were not part of the ApplicationRealm, I started getting a 403 HTTP response code (upon calling the messaging endpoint). To get that working too, I had to add the user and related credential using the add-user.sh script (found in the WildFly bin folder).
I have configured Hibernate Search to use Infinispan with a file-system-based cache store, so the indexes are persisted on the file system instead of in memory.
Now I wish to configure S3 instead of the file system, but I am not able to find the correct configuration for this.
My infinispan.xml file is:
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd"
xmlns="urn:infinispan:config:6.0">
<global>
<globalJmxStatistics enabled="false" />
<!-- <transport clusterName="storage-test-cluster" /> -->
<shutdown hookBehavior="DONT_REGISTER" />
</global>
<default>
<storeAsBinary
enabled="false" />
<locking
isolationLevel="READ_COMMITTED"
lockAcquisitionTimeout="20000"
writeSkewCheck="false"
concurrencyLevel="5000"
useLockStriping="false" />
<invocationBatching
enabled="false" />
</default>
<namedCache name="LuceneIndexesMetadata">
<persistence passivation="false">
<singleFile
fetchPersistentState="true"
preload="true"
purgeOnStartup="false"
shared="true"
ignoreModifications="false"
location="C:\\infinispan">
</singleFile>
</persistence>
</namedCache>
<namedCache name="LuceneIndexesData">
<persistence passivation="false">
<singleFile
fetchPersistentState="true"
preload="true"
purgeOnStartup="false"
shared="true"
ignoreModifications="false"
location="C:\\infinispan">
</singleFile>
</persistence>
</namedCache>
<namedCache name="LuceneIndexesLocking">
<!-- No CacheLoader configured here -->
</namedCache>
</infinispan>
Can anyone help me configure this file to use Amazon S3 as the cache store?
The specific versions of Hibernate Search and Infinispan which you're using are extremely old. Specifically, Infinispan didn't support storage on Amazon S3 in version 6.
I would suggest upgrading to some more recent version which is still being maintained.
As of writing this, you could use Infinispan 9.1.5.Final with Hibernate Search 5.8.2.Final.
I've got a fast producer ESB (converts CSV to XML) and a slow consumer ESB (performing zip/base64/SOAP wrapping of the XML). The ESBs communicate via a JMS topic. This design is legacy and cannot be changed. When a large CSV file is processed, JBoss AS (5.2) grinds to a halt because the producer floods the consumer, even with a heap size of 4096M. Forgive me, I'm new to JBoss/JMS and finding it all bewildering.
Producer sending config
<action class="com.example.FooAction" name="ProcessFoo">
<property name="springJndiLocation" value="FooEsbSpring" />
<property name="exceptionMethod" value="exceptionHandler" />
<property name="okMethod" value="processSuccess" />
<property name="jndiName" value="topic/FooTopic" />
<property name="connection-factory" value="ConnectionFactory" />
<property name="unwrap" value="true" />
<property name="security-principal" value="guest" />
<property name="security-credential" value="guest" />
</action>
Producer sending code:
Message msg = MessageFactory.getInstance().getMessage(MessageType.JAVA_SERIALIZED);
msg.getBody().add(foo); // foo is the business specific message
new JMSRouter(config).process(msg);
Consumer receiving config:
<jms-jca-provider connection-factory="ConnectionFactory" name="FooMessaging">
<jms-bus busid="fooChannel">
<jms-message-filter dest-name="topic/FooTopic"
dest-type="TOPIC" transacted="false" />
</jms-bus>
<activation-config>
<property name="dLQMaxResent" value="1" />
</activation-config>
</jms-jca-provider>
Topic config
<server>
<mbean code="org.jboss.jms.server.destination.TopicService"
name="jboss.esb.quickstart.destination:service=Topic,name=FooTopic"
xmbean-dd="xmdesc/Queue-xmbean.xml">
<depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer
</depends>
<depends>jboss.messaging:service=PostOffice</depends>
</mbean>
</server>
Things I've tried so far:
Run the publisher ESB without the consumer ESB - as expected, no problems.
Lots of googling, looking for existing questions on Stack Overflow.
Found some references to rate limiting, but I can't see how to fit these into my config.
I've tried to find an API to discover how many messages are already on the topic unprocessed (with the hope I can implement my own back-off strategy).
Looked at this documentation.
Look at this section (6.3.17.2, org.jboss.mq.server.jmx.Topic) and use the 'Depth'-related attributes via JMX.
It might help you build the back-off strategy you're looking for.
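A rough sketch of what that could look like from inside the producer (the MBean name comes from the topic config above; the attribute name "MessageCount" is a placeholder, since the depth-related attribute differs between JBossMQ and JBoss Messaging, and depending on the setup you may need the JBoss-specific MBeanServer instead of the platform one):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public final class TopicBackoff {

    private TopicBackoff() {
    }

    // Polls the topic MBean and blocks the producer while the backlog is above a threshold.
    public static void waitForCapacity(int maxDepth) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName topic = new ObjectName(
                "jboss.esb.quickstart.destination:service=Topic,name=FooTopic");
        // "MessageCount" is a placeholder: use whichever depth attribute the MBean actually exposes.
        while (((Number) server.getAttribute(topic, "MessageCount")).intValue() > maxDepth) {
            Thread.sleep(1000L);
        }
    }
}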
I've run into a bit of a wall with sending messages from BlazeDS on the server to Flex clients. I have my adapters and destinations set properly (I think) in messaging-config.xml and my streaming channel set up in services-config.xml. The messages work beautifully in Safari (Mac and PC) but in no other browsers.
Relevant bits from messaging-config.xml:
Adapter:
Destination:
<destination id="FriendNotifierGateway">
<adapter ref="friendNotifierAdapter" />
<properties>
<server>
<max-cache-size>1000</max-cache-size>
<durable>true</durable>
<allow-subtopics>true</allow-subtopics>
<subtopic-separator>.</subtopic-separator>
</server>
</properties>
<channels>
<channel ref="my-streaming-amf"/>
<channel ref="cf-polling-amf"/>
</channels>
</destination>
Relevant bits from services-config.xml:
<channel-definition id="my-streaming-amf" class="mx.messaging.channels.StreamingAMFChannel">
<endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amfsecure/streamingamf" class="flex.messaging.endpoints.StreamingAMFEndpoint" />
<properties>
<idle-timeout-minutes>0</idle-timeout-minutes>
<max-streaming-clients>500</max-streaming-clients>
<server-to-client-heartbeat-millis>5000</server-to-client-heartbeat-millis>
<user-agent-settings>
<user-agent match-on="MSIE" kickstart-bytes="2048" max-streaming-connections-per-session="1" />
<user-agent match-on="Firefox" kickstart-bytes="2048" max-streaming-connections-per-session="4" />
<user-agent match-on="Safari" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
<user-agent match-on="Opera" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
<user-agent match-on="Chrome" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
</user-agent-settings>
</properties>
</channel-definition>
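The server-side code that publishes the messages isn't shown here; for reference, a typical BlazeDS push to this destination looks roughly like the following (a sketch using the standard MessageBroker API; the class name, payload and subtopic value are assumptions):

import flex.messaging.MessageBroker;
import flex.messaging.messages.AsyncMessage;
import flex.messaging.util.UUIDUtils;

public final class FriendNotifierPusher {

    private FriendNotifierPusher() {
    }

    // Pushes a message to all clients subscribed to the "FriendNotifierGateway" destination.
    public static void push(Object payload, String subtopic) {
        MessageBroker broker = MessageBroker.getMessageBroker(null);

        AsyncMessage msg = new AsyncMessage();
        msg.setDestination("FriendNotifierGateway");
        msg.setClientId(UUIDUtils.createUUID());
        msg.setMessageId(UUIDUtils.createUUID());
        msg.setTimestamp(System.currentTimeMillis());
        msg.setHeader(AsyncMessage.SUBTOPIC_HEADER_NAME, subtopic); // e.g. "friends.updates" (hypothetical)
        msg.setBody(payload);

        broker.routeMessageToService(msg, null);
    }
}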
I feel like things are set up correctly in the channel definition but perhaps some of those user-agent settings are off (I have played with them, to no avail thus far).
Thanks, in advance, for any suggestions or insights!
Regards,
Craig
I never sorted out why the server-side messages never reached the client. However, my setup was less than ideal for an active site. So, I switched to using ActiveMQ and, ever since, the messaging has been fantastic!