I have two WildFly 18 instances running locally: n1 and n2. I would like instance n2 to consume the messages produced by instance n1, as a step towards an HA scenario.
After reading the RH EAP docs,
I have done the following:
1- Defined an exposed JMS queue on n2. I also added security settings and a remote connection factory in the messaging-activemq subsystem:
[...]
<server name="default">
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
[...]
<jms-queue name="testQueue" entries="queue/test java:jboss/exported/jms/queue/test"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
</server>
[...]
2- Configured JGroups with TCPPING and an initial list of nodes, so that the instances can discover each other and form a cluster:
[...]
<protocol type="org.jgroups.protocols.TCPPING">
<property name="initial_hosts">127.0.0.1[8600]</property>
<property name="port_range">0</property>
</protocol>
[...]
3- Then I brought up the two instances and got the following message in the application logs:
(Thread-12 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6#7124120f)) AMQ221027: Bridge ClusterConnectionBridge#c6997b5 [name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf], temp=false]#2747e684 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge#c6997b5 [name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf], temp=false]#2747e684 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=<port_number>&host=localhost], discoveryGroupConfiguration=null]]::ClusterConnectionImpl#1775690639[nodeUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8323&host=localhost, address=jms, server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=<port_number>&host=localhost], discoveryGroupConfiguration=null]] is connected
But when I try to send messages from n1 to n2 using the following JNDI conf,
java.naming.factory.initial = org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url = remote://localhost:8323
java.naming.security.principal = ***
java.naming.security.credentials = ***
Connection Factory JNDI name = jms/RemoteConnectionFactory
Queue JNDI name = jms/queue/test
... I get this error after a certain timeout (~30s):
javax.naming.CommunicationException: WFNAM00018: Failed to connect to remote host [Root exception is java.io.IOException: JBREM000202: Abrupt close on Remoting connection 4ba0f2c1 to localhost/127.0.0.1:8323 of endpoint (anonymous)
I have tried to connect to the same queue using a simple JMS client (https://plugins.jetbrains.com/plugin/10949-jms-messenger), and I was actually able to connect, since I at least got the following error on the receiving side:
ERROR [com.my.app.Receiver] (Thread-14 (ActiveMQ-client-global-threads)) Unknown message: ActiveMQMessage[ID:5f71e993-f377-11ea-acfc-169f02eb582c]:PERSISTENT/ClientMessageImpl[messageID=442, durable=true, address=jms.queue.test,userID=5f71e993-f377-11ea-acfc-169f02eb582c,properties=TypedProperties[__AMQ_CID=5f684ca0-f377-11ea-acfc-169f02eb582c,_AMQ_ROUTING_TYPE=1]]
Could you please give me a hint about what is wrong and explain why? Thanks a lot.
I solved this issue by working on the WildFly and JNDI configuration. Although the error message was very generic, at least in my case the following WildFly config:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:8.0">
<server name="default">
<http-acceptor name="http-acceptor-throughput" http-listener="messaging">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
...
<http-connector name="http-connector-throughput" socket-binding="messaging-throughput" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
...
<jms-queue name="test" entries="queue/test java:jboss/exported/jms/test"/>
<broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" broadcast-period="5000" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
...
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
</subsystem>
...
<subsystem xmlns="urn:jboss:domain:remoting:4.0">
<http-connector name="messaging-remoting-connector" connector-ref="messaging-http" security-realm="ApplicationRealm"/>
</subsystem>
...
<socket-binding-group ... >
...
<socket-binding name="messaging" port="8323"/>
<socket-binding name="messaging-throughput" port="8324"/>
...
</socket-binding-group>
worked with the following JNDI config:
java.naming.factory.initial = org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url = remote://localhost:8323
java.naming.security.principal = ***
java.naming.security.credentials = ***
Connection Factory JNDI name = jms/RemoteConnectionFactory
Queue JNDI name = jms/test
Also, as the principal/credentials were not part of the ApplicationRealm, I started getting a 403 HTTP response code (upon calling the messaging endpoint). In order to get that working too, I had to add the user and the related credentials using the add-user.sh script (found in the WildFly bin folder).
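For completeness, here is a minimal sketch of the client-side lookup and send that worked against this configuration. It assumes a JMS 2.0 client on the classpath (e.g. the WildFly JMS client BOM); the username, password and message body are placeholders, while the JNDI names match the config above.

```java
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class RemoteQueueProducer {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put("java.naming.factory.initial",
                "org.wildfly.naming.client.WildFlyInitialContextFactory");
        env.put("java.naming.provider.url", "remote://localhost:8323");
        env.put("java.naming.security.principal", "appuser");      // user created with add-user.sh
        env.put("java.naming.security.credentials", "apppassword");

        InitialContext ctx = new InitialContext(env);
        // Names exposed under java:jboss/exported/... in the messaging subsystem above.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/test");

        try (Connection connection = cf.createConnection("appuser", "apppassword")) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("hello from n1");
            producer.send(message);
        }
        ctx.close();
    }
}
```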
We have Keycloak in HA, configured with an external Infinispan cluster for the sessions, clientSessions and authenticationSessions caches.
Everything runs in containers, following an approach similar to the one in https://github.com/albertoSoto/keycloak-infinispan-cluster
The project runs Keycloak 15.0.2 on WildFly (the migration to Quarkus is still to be done) and uses Infinispan 11.0.9 for external persistence to MySQL 5.7. The driver is the latest one, https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar, as suggested by Oracle, and the driver class is com.mysql.cj.jdbc.Driver.
The project starts fine, but after a random amount of time MySQL drops the connection and the Infinispan cluster cannot reconnect.
While trying to make it work, I was able to use an Agroal configuration based on a properties file, shown below.
The content of that Agroal properties file, which overrides the JPA behaviour in the project, is the following:
org.infinispan.agroal.metricsEnabled=false
org.infinispan.agroal.minSize=10
org.infinispan.agroal.maxSize=100
org.infinispan.agroal.initialSize=20
org.infinispan.agroal.acquisitionTimeout_s=1
org.infinispan.agroal.validationTimeout_m=1
org.infinispan.agroal.leakTimeout_s=10
org.infinispan.agroal.reapTimeout_m=10
org.infinispan.agroal.maxLifetime_m=10
org.infinispan.agroal.autoCommit=true
org.infinispan.agroal.jdbcTransactionIsolation=READ_COMMITTED
org.infinispan.agroal.jdbcUrl=jdbc:mysql://mysql:3306/infinispan
org.infinispan.agroal.driverClassName=com.mysql.cj.jdbc.Driver
org.infinispan.agroal.principal=keycloak
org.infinispan.agroal.credential=password
The error shown after the connection is closed by the database is the following:
[1;31m21:55:31,052 ERROR (jgroups-319,vi-infinispan-1-5379) [org.infinispan.interceptors.impl.InvocationContextInterceptor] ISPN000136: Error executing command RemoveCommand on Cache 'clientSessions', writing keys [WrappedByteArray{bytes=0304090000000E\j\a\v\a\.\u\t\i\l\.\U\U\I\DBC9903F798\m85\/000000020000000C\l\e\a\s\t\S\i\g\B\i\t\s\$000000000B\m\o\s\t\S\i\g\B\i\t\s\$00168D0C\z8AB49FBA9B\C118A06A0DB\D82... (85 bytes), hashCode=73644551}] org.infinispan.remoting.RemoteException: ISPN000217: Received exception from vi-infinispan-0-53111, see cause for remote stack trace
at org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37)
at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:21)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.addResponse(SingleTargetRequest.java:73)
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:43)
at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1402)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1305)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445)
at org.jgroups.JChannel.up(JChannel.java:784)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:913)
at org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
at org.jgroups.protocols.Discovery.up(Discovery.java:300)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.infinispan.persistence.spi.PersistenceException: Error while removing string keys from database
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:234)
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.BytesObjectInput.readObject(BytesObjectInput.java:32)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:49)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:41)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 25 more
Caused by: java.sql.SQLNonTransientConnectionException: No operations allowed after connection closed.
We use JDBC_PING for cluster discovery and two nodes are active. They register themselves properly and everything works like a charm, until the connection times out.
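To confirm that the drop really comes from the database side rather than from Infinispan, a plain JDBC probe against the same URL can help. This is only a hypothetical diagnostic sketch: it reuses the URL and credentials from the Agroal properties file above, and the 15-minute idle period is a placeholder for whatever server-side timeout is suspected.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IdleConnectionProbe {
    public static void main(String[] args) throws Exception {
        // URL and credentials taken from the Agroal properties file above.
        String url = "jdbc:mysql://mysql:3306/infinispan";
        try (Connection conn = DriverManager.getConnection(url, "keycloak", "password")) {
            // Hold the connection idle longer than the suspected server-side timeout
            // (placeholder: 15 minutes), then try to use it again. If MySQL has closed
            // the connection in the meantime, the second statement fails with an error
            // like "No operations allowed after connection closed."
            Thread.sleep(15 * 60 * 1000L);
            try (Statement st = conn.createStatement()) {
                st.execute("SELECT 1");
            }
        }
    }
}
```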
The base configuration that I have placed is the following:
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:11.0 https://infinispan.org/schemas/infinispan-config-11.0.xsd
urn:infinispan:server:11.0 https://infinispan.org/schemas/infinispan-server-11.0.xsd"
xmlns="urn:infinispan:config:11.0"
xmlns:server="urn:infinispan:server:11.0">
<!--
Generic XML definition located under
https://docs.jboss.org/infinispan/11.0/configdocs/
-->
<jgroups>
<stack-file name="default-udp" path="default-jgroups.xml"/>
<stack-file name="default-tcp" path="default-jgroups-tcp.xml"/>
<stack-file name="gce" path="default-jgroups-google.xml"/>
<stack-file name="k8s" path="default-jgroups-kubernetes.xml"/>
<stack-file name="kc-udp" path="default-keycloak-jgroups-udp.xml"/>
<stack-file name="custom-k8s-jdbc" path="custom-jgroups-kubernetes-jdbc.xml"/>
<stack-file name="custom-tcp-jdbc" path="custom-jgroups-tcp-jdbc.xml"/>
</jgroups>
<cache-container name="default" statistics="${env.INFINISPAN_CACHE_STATISTICS:false}">
<serialization marshaller="org.infinispan.jboss.marshalling.commons.GenericJBossMarshaller">
<white-list>
<class>java.util.UUID</class>
<regex>org.keycloak.models.sessions.infinispan.*</regex>
</white-list>
</serialization>
<serialization marshaller="org.infinispan.commons.marshall.JavaSerializationMarshaller">
<white-list>
<class>java.util.UUID</class>
<regex>org.keycloak.models.sessions.infinispan.*</regex>
</white-list>
</serialization>
<transport cluster="${infinispan.cluster.name:cluster}" stack="${infinispan.cluster.stack:default-udp}"
node-name="${infinispan.node.name:}"/>
<replicated-cache-configuration name="sessions-cfg" mode="SYNC" start="EAGER"
statistics="${env.INFINISPAN_CACHE_STATISTICS:false}">
<state-transfer timeout="${infinispan.statetransfer.timeout:600000}"/>
<encoding media-type="application/x-jboss-marshalling"/>
<expiration lifespan="900000000000000000"/>
</replicated-cache-configuration>
<distributed-cache-configuration name="distributed-cache-cfg">
<encoding media-type="application/x-jboss-marshalling"/>
<expiration lifespan="900000000000000000"/>
<persistence passivation="false">
<string-keyed-jdbc-store shared="true" xmlns="urn:infinispan:config:store:jdbc:11.0">
<connection-pool properties-file="${env.PROPERTIES_FILE:/opt/infinispan/server/conf/connection-pool.properties}" />
<string-keyed-table drop-on-exit="false"
prefix="ISPN">
<id-column name="ID_COLUMN" type="VARCHAR(255)"/>
<!-- Blob generates error on KC. We increase it to a safe max size (65K per row)
<data-column name="DATA_COLUMN" type="BLOB" />
-->
<data-column name="DATA_COLUMN" type="VARBINARY(50000)"/>
<timestamp-column name="TIMESTAMP_COLUMN" type="BIGINT"/>
<segment-column name="SEGMENT_COLUMN" type="INT"/>
</string-keyed-table>
</string-keyed-jdbc-store>
</persistence>
<state-transfer timeout="${infinispan.statetransfer.timeout:600000}"/>
</distributed-cache-configuration>
<!--https://infinispan.org/docs/stable/titles/configuring/configuring.html#distributed-caches_clustered-caches-->
<!--https://infinispan.org/docs/stable/titles/configuring/configuring.html#configuring-jdbc-cache-stores_persistence-->
<distributed-cache name="sessions" owners="2" configuration="distributed-cache-cfg">
</distributed-cache>
<distributed-cache name="clientSessions" owners="2" configuration="distributed-cache-cfg">
</distributed-cache>
<distributed-cache name="authenticationSessions" owners="2" configuration="distributed-cache-cfg">
</distributed-cache>
</cache-container>
<!-- Original at v11 - -->
<server xmlns="urn:infinispan:server:11.0">
<interfaces>
<interface name="public">
<inet-address value="${infinispan.bind.address:0.0.0.0}"/>
</interface>
</interfaces>
<socket-bindings default-interface="public" port-offset="0">
<socket-binding name="default" port="11222"/>
</socket-bindings>
<security>
<security-realms>
<security-realm name="default">
<properties-realm groups-attribute="Roles">
<user-properties path="users.properties" relative-to="infinispan.server.config.path"
plain-text="true"/>
<group-properties path="groups.properties" relative-to="infinispan.server.config.path"/>
</properties-realm>
</security-realm>
</security-realms>
</security>
<endpoints socket-binding="default" security-realm="default">
<hotrod-connector name="hotrod">
<authentication>
<sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA DIGEST-MD5 PLAIN"
qop="auth" server-name="infinispan"/>
</authentication>
</hotrod-connector>
<rest-connector name="rest">
<authentication mechanisms="DIGEST BASIC"/>
</rest-connector>
</endpoints>
</server>
</infinispan>
The thing is... what am I doing wrong?
There is not much information about this. Can anyone help?
Thank you!
Unfortunately I think you have stumbled across a bug with the persistence availability check that prevents stores from reconnecting if an exception is thrown (ISPN-13863). I have just created a PR; however, the fix will only be available in the Infinispan 14.x stream.
Maybe this question has already been asked, but I think my situation is different.
I have configured all the required settings in the web.config file and installed the certificates.
I consume a Java web service from an ASP.NET Web API application.
The SOAP service is configured for mutual authentication (two-way SSL).
I have two keystore files (client.jks and truststore.jks).
My full error:
This could be due to the fact that the server certificate is not configured properly with HTTP.SYS in the HTTPS case. This could also be caused by a mismatch of the security binding between the client and the server.
web.config:
<customBinding>
<binding name="MyBinding">
<textMessageEncoding messageVersion="Soap11"/>
<security authenticationMode="MutualCertificate" enableUnsecuredResponse="true" allowSerializedSigningTokenOnReply="true"
messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"
includeTimestamp="false">
</security>
<httpsTransport />
</binding>
</customBinding>
<endpoint behaviorConfiguration="ClientCredentialsBehavior" address="https://abc.bank.dm:9193/Money/Money" binding="customBinding" bindingConfiguration="MyBinding" contract="Ref.Port" name="Port">
<identity>
<dns value="test"/>
</identity>
</endpoint>
<behaviors>
<endpointBehaviors>
<behavior name="ClientCredentialsBehavior">
<clientCredentials>
<clientCertificate findValue="2d73n94087857dndyr874ydr"
storeLocation="CurrentUser"
storeName="My"
x509FindType="FindByThumbprint" />
<serviceCertificate>
<defaultCertificate findValue="d346n32d48938w43d943095d"
storeLocation="CurrentUser"
storeName="TrustedPeople"
x509FindType="FindByThumbprint" />
<authentication certificateValidationMode="None" revocationMode="NoCheck"/>
</serviceCertificate>
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>
Try to specify the same protocol on the client and server. Add the following code in the client:
System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;
Here is the reference: TLS 1.2
A background process running on my WildFly 10.0.0.Final needs to connect to a web service using Axis. The remote server has a self-signed certificate, which I fetched with openssl and then imported into a truststore created with keytool.
I set up my Wildfly 10.0.0.Final's standalone.xml like this:
<security-realm name="SSLRealm">
<server-identities>
<ssl>
<keystore path="keystore.jks" relative-to="jboss.server.config.dir"
keystore-password="mykeystorepassword" alias="myalias"
key-password="mykeypass" />
</ssl>
</server-identities>
<authentication>
<truststore path="truststore.jks" relative-to="jboss.server.config.dir"
keystore-password="mytruststorepassword" />
</authentication>
</security-realm>
<server name="default-server">
<http-listener name="default" socket-binding="http" redirect-socket="https" />
<https-listener name="default-ssl" security-realm="SSLRealm" socket-binding="https" />
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content" />
</host>
</server>
but still, when the background process tries to connect to the remote service, I obtain the following exception:
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(Unknown Source)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(Unknown Source)
at java.security.cert.CertPathBuilder.build(Unknown Source)
Any idea how to solve this issue? It seems like the truststore is not being used or something like this...
I had the same problem and could only solve it by adding parameters to the startup script for wildfly:
-Djavax.net.ssl.trustStore=foo.jks
-Djavax.net.ssl.trustStorePassword=bar
which, of course, overrides the default cacerts.
But I am unclear why the truststore defined in the security-realm seems to be ignored.
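One way to check whether a given truststore actually trusts the remote endpoint, independently of WildFly, is a small standalone handshake test. This is a sketch only: the truststore path and password mirror the config above, while the remote host and port are placeholders. If this test passes but the deployed Axis client still fails without the -Djavax.net.ssl.* flags, that is consistent with the security-realm truststore applying to the server's own listeners rather than to outbound calls made by application code.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.TrustManagerFactory;

public class TrustStoreHandshakeTest {
    public static void main(String[] args) throws Exception {
        // Load the same truststore that was configured in the security realm.
        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("truststore.jks")) {
            trustStore.load(in, "mytruststorepassword".toCharArray());
        }

        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);

        // Placeholder host and port of the remote web service.
        try (SSLSocket socket =
                (SSLSocket) ctx.getSocketFactory().createSocket("remote.example.com", 443)) {
            socket.startHandshake(); // throws SSLHandshakeException if the cert is not trusted
            System.out.println("Handshake OK: " + socket.getSession().getPeerPrincipal());
        }
    }
}
```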
I'm deploying a working Tomcat webapp on WebSphere Liberty in Docker. The webapp connects to a PostgreSQL data source, also running in Docker. In WebSphere Liberty, when I try to get a connection with
DataSource ds = (DataSource)ctx.lookup("java:comp/env/jdbc/postgres");
Connection conn = ds.getConnection();
and my web.xml is set up as:
<resource-ref>
<description>postgreSQL Connection</description>
<res-ref-name>jdbc/postgres</res-ref-name>
<res-type>javax.sql.XADataSource</res-type>
<res-auth>Container</res-auth>
</resource-ref>
I get the following error
javax.naming.NamingException: CWNEN1001E: The object referenced by the java:comp/env/jdbc/postgres JNDI name could not be instantiated.
If the reference name maps to a JNDI name in the deployment descriptor bindings for the application
performing the JNDI lookup, make sure that the JNDI name mapping in the deployment descriptor binding is correct.
If the JNDI name mapping is correct, make sure the target resource
can be resolved with the specified name relative to the default initial context.
[Root exception is com.ibm.wsspi.injectionengine.InjectionException: CWNEN0030E: The
server was unable to obtain an object instance for the java:comp/env/jdbc/postgres reference.
The exception message was: CWNEN1004E: The server was unable to find the jdbc/postgres default binding with the javax.sql.XADataSource type for
the java:comp/env/jdbc/postgres reference.]
I have ruled out a problem with the Docker network. What have I set up wrong? Is it something in my WebSphere Liberty configuration?
My server.xml is:
<server description="Default server">
<!-- Enable features -->
<featureManager>
<feature>webProfile-7.0</feature>
<feature>adminCenter-1.0</feature>
<feature>jdbc-4.0</feature>
</featureManager>
<quickStartSecurity userName="admin" userPassword="password"/>
<!-- Define the host name for use by the collective.
If the host name needs to be changed, the server should be
removed from the collective and re-joined. -->
<!-- <variable name="defaultHostName" value="localhost" /> -->
<!-- Define an Administrator and non-Administrator -->
<basicRegistry id="basic">
<user name="admin" password="admin" />
<user name="nonadmin" password="nonadminpwd" />
</basicRegistry>
<!-- Assign 'admin' to Administrator -->
<administrator-role>
<user>admin</user>
</administrator-role>
<!-- <keyStore id="defaultKeyStore" password="Liberty" /> -->
<httpEndpoint id="defaultHttpEndpoint"
host="*"
httpPort="9080"
httpsPort="9443" />
<remoteFileAccess>
<writeDir>${server.config.dir}</writeDir>
</remoteFileAccess>
<library id="postgres-lib">
<fileset dir="/arturial/project/" includes="postgresql-9.4.1208.jre6.jar"/>
</library>
<dataSource id="jdbc-Prima_WA_db" jndiName="jdbc/postgres" type="javax.sql.DataSource">
<jdbcDriver libraryRef="postgres-lib"/>
<connectionManager numConnectionsPerThreadLocal="10" id="connectionManager" minPoolSize="1"/>
<!-- <properties.oracle user="postgres" password="postgres" -
url="jdbc:postgres://172.17.0.3:5432/Prima_WA_db"/> -->
</dataSource>
<!--
<applicationManager updateTrigger="disabled"/>
<application id="primawebapp" name="primawebapp" location="war/primawebapp" type="war">
<classLoader delegation="parentLast" commonLibraryRef="postgres-lib"/>
</application>
-->
</server>
Try this:
DataSource ds = (DataSource)ctx.lookup("jdbc/postgres");
A different way (resource injection) is also sketched below.
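As a sketch of that alternative, the data source can be injected by the container instead of being looked up manually. This is not from the original answer; the servlet class name and URL mapping are placeholders, and it assumes the res-ref-name jdbc/postgres from the web.xml above resolves to the dataSource's jndiName.

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

@WebServlet("/db-check")
public class DbCheckServlet extends HttpServlet {

    // Matches the <res-ref-name>jdbc/postgres</res-ref-name> from web.xml;
    // Liberty resolves it against the dataSource's jndiName by default.
    @Resource(name = "jdbc/postgres")
    private DataSource dataSource;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try (Connection conn = dataSource.getConnection()) {
            resp.getWriter().println("Connected: " + conn.getMetaData().getURL());
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}
```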
Your <dataSource> must have properties configured under it, otherwise there is no way for Liberty to know how to connect to your database.
For PostgreSQL, try the following configuration:
<dataSource id="jdbc-Prima_WA_db" jndiName="jdbc/postgres">
<jdbcDriver libraryRef="postgres-lib"/>
<properties serverName="172.17.0.3" portNumber="5432" databaseName="Prima_WA_db"
user="postgres" password="postgres"/>
</dataSource>
Of course, use the proper values that correspond to the PostgreSQL instance you are running.
It looks like your datasource definition might be incomplete; try updating it with the following:
<dataSource type="javax.sql.XADataSource" ...
<jdbcDriver javax.sql.XADataSource="org.postgresql.xa.PGXADataSource" ...
I want to set <consumer-window-size/> to 0. This seems to be the answer to another question (JMS queue with multiple consumers), and it is described in chapter 17.1.1 of this article. I retrieve the connection factory using JNDI. My hornetq-jms.xml looks like this:
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
<connection-factory name="ConnectionFactory">
<connectors>
<connector-ref connector-name="netty-connector"/>
</connectors>
<entries>
<entry name="ConnectionFactory"/>
</entries>
<consumer-window-size>0</consumer-window-size>
</connection-factory>
<queue name="my.qeue">
<entry name="/queue/test"/>
</queue>
</configuration>
The <connection-factory/> section is copied and pasted from the link above, but I get the following error:
DEPLOYMENTS IN ERROR:
Deployment "org.hornetq:module=JMS,name="ConnectionFactory",
type=ConnectionFactory" is in error due to the following reason(s):
HornetQException[errorCode=104 message=There is no connector with
name 'netty-connector' deployed.]
This may be JBoss-6 related, because in other environments this seems to work: force order of messages with HornetQ
Before you reference netty-connector, you need to look at the connectors you have registered in your hornetq-configuration.xml.
In your hornetq-configuration.xml you will see something like this:
<connectors>
<connector name="netty">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="${jboss.bind.address:localhost}" />
<param key="port" value="${hornetq.remoting.netty.port:5445}" />
</connector>
<connector name="in-vm">
<factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
<param key="server-id" value="${hornetq.server-id:0}" />
</connector>
</connectors>
You will have to match that connector name in your connection-factory definition.
For more information, read the HornetQ documentation about acceptors and connectors.
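Once the connector name matches and the factory deploys, the entries from the hornetq-jms.xml above can be consumed with a plain JMS client. This is only a sketch: the JNDI provider settings (jnp factory, host, port) are environment-dependent placeholders, and the lookup names assume the ConnectionFactory and /queue/test entries shown in the question.

```java
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class SlowConsumer {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        // Placeholder JNDI settings for a remote JBoss instance; adjust to your environment.
        env.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
        env.put("java.naming.provider.url", "jnp://localhost:1099");

        InitialContext ctx = new InitialContext(env);
        // Entries defined in the hornetq-jms.xml above.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("/queue/test");

        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);

        // With consumer-window-size = 0 the server does not buffer message batches on
        // the client ahead of time, so a slow consumer here will not starve other
        // consumers attached to the same queue.
        TextMessage message = (TextMessage) consumer.receive(5000);
        System.out.println(message != null ? message.getText() : "no message");

        connection.close();
        ctx.close();
    }
}
```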