Below is my flow file content. I generated the .key file with the help of the Java keytool. The same flow works for TLSv1.1 (when the client was using a TLSv1.1 certificate) but does not work for TLSv1.2 (the client certificate is TLSv1.2).
<https:connector name="paypalConnector" doc:name="HTTP\HTTPS" validateConnections="true" clientSoTimeout="10000" cookieSpec="netscape" receiveBacklog="0" receiveBufferSize="0" sendBufferSize="0" serverSoTimeout="10000" socketSoLinger="0">
    <service-overrides sessionHandler="org.mule.session.NullSessionHandler"/>
    <https:tls-server path="C:/Users/damodaram.setti/Desktop/PayPal/paypal.key" storePassword="paypal" requireClientAuthentication="true" />
</https:connector>
<https:outbound-endpoint exchange-pattern="request-response" method="POST" address="https://tlstest.paypal.com" mimeType="text/xml" connector-ref="paypalConnector" doc:name="2IssuerServ"/>
and I have tried the options below:
-Ddeployment.security.SSLv2Hello=false -Ddeployment.security.SSLv3=false -Ddeployment.security.TLSv1=false -Ddeployment.security.TLSv1.1=true -Ddeployment.security.TLSv1.2=true
and
-Dhttps.protocols=TLSv1.2 -Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
but no luck so far. Please help me sort out this issue.
Message : Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=https://tlstest.paypal.com, connector=HttpsConnector
{
name=paypalConnector
lifecycle=start
this=527fe4
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[https]
serviceOverrides=<none>
}
, name='endpoint.https.tlstest.paypal.com', mep=REQUEST_RESPONSE, properties={http.method=POST}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: PostMethod
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. Connection refused: connect (java.net.ConnectException)
java.net.DualStackPlainSocketImpl:-2 (null)
2. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=https://tlstest.paypal.com, connector=HttpsConnector
{
name=paypalConnector
lifecycle=start
this=527fe4
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[https]
serviceOverrides=<none>
}
, name='endpoint.https.tlstest.paypal.com', mep=REQUEST_RESPONSE, properties={http.method=POST}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: PostMethod (org.mule.api.transport.DispatchException)
org.mule.transport.http.HttpClientMessageDispatcher:155 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
* -XX:PermSize=128M
* -XX:MaxPermSize=256M
* -Ddeployment.security.SSLv2Hello=false
* -Ddeployment.security.SSLv3=false
* -Ddeployment.security.TLSv1=false
* -Ddeployment.security.TLSv1.1=true
* -Ddeployment.security.TLSv1.2=true
* -Dmule.home=D:\MConnect\MuleStudioWorkspace\.mule
* -Dlog4j.debug=true
* -Dosgi.dev=true
* -Dosgi.instance.area=file:/D:/MConnect/MuleStudioWorkspace
* -Dfile.encoding=Cp1252
To use TLSv1.2 you must specify it in the https connector.
<spring:property name="sslType" value="TLSv1.2" />
or
<https:connector name="paypalConnector" doc:name="HTTP\HTTPS" validateConnections="true" clientSoTimeout="10000" cookieSpec="netscape" receiveBacklog="0" receiveBufferSize="0" sendBufferSize="0" serverSoTimeout="10000" socketSoLinger="0">
    <spring:property name="sslType" value="TLSv1.2" />
    <service-overrides sessionHandler="org.mule.session.NullSessionHandler"/>
    <https:tls-server path="C:/Users/damodaram.setti/Desktop/PayPal/paypal.key" storePassword="paypal" requireClientAuthentication="true" />
</https:connector>
Hope this answers your question.
Please use the syntax below to send an HTTP request over HTTP/HTTPS with specific TLS versions enabled. In this case I used the HTTPS protocol and sent the request over TLSv1.
<http:request-config doc:name="HTTP Request Configuration" name="HTTPS_Request_Configuration" protocol="HTTPS" connectionIdleTimeout="300000">
    <tls:context enabledProtocols="TLSv1">
        <tls:trust-store type="jks" password="${truststore.pwd}" path="${truststore.path}"/>
        <tls:key-store type="jks" password="${keystore.pass}" path="${keystore.path}" keyPassword="${keystore.keypass}" alias="${keystore.alias}"/>
    </tls:context>
</http:request-config>
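If you need TLSv1.2 with this requester instead, only the enabledProtocols attribute should have to change; a sketch based on the config above, with the same property placeholders:
<tls:context enabledProtocols="TLSv1.2">
    <tls:trust-store type="jks" password="${truststore.pwd}" path="${truststore.path}"/>
    <tls:key-store type="jks" password="${keystore.pass}" path="${keystore.path}" keyPassword="${keystore.keypass}" alias="${keystore.alias}"/>
</tls:context>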
Related
I am using Apache Artemis v2.12.0 and have started two broker instances in two VMs.
broker.xml (myhost1) [the broker.xml of myhost2 is similar; only the port I used there was 61616]:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
    <core xmlns="urn:activemq:core">
        <bindings-directory>./data/bindings</bindings-directory>
        <journal-directory>./data/journal</journal-directory>
        <large-messages-directory>./data/largemessages</large-messages-directory>
        <paging-directory>./data/paging</paging-directory>
        <!-- Connectors -->
        <connectors>
            <connector name="netty-connector">tcp://10.64.60.100:61617</connector> <!-- direct ip address of host myhost1 -->
            <connector name="broker2-connector">tcp://myhost2:61616</connector> <!-- ip 10.64.60.101 <- mocked up ip for security reasons -->
        </connectors>
        <!-- Acceptors -->
        <acceptors>
            <acceptor name="amqp">tcp://0.0.0.0:61617?amqpIdleTimeout=0;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP;useEpoll=true</acceptor>
        </acceptors>
        <cluster-connections>
            <cluster-connection name="myhost1-cluster">
                <connector-ref>netty-connector</connector-ref>
                <retry-interval>500</retry-interval>
                <use-duplicate-detection>true</use-duplicate-detection>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <static-connectors>
                    <connector-ref>broker2-connector</connector-ref> <!-- defined in the connectors -->
                </static-connectors>
            </cluster-connection>
        </cluster-connections>
        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"/>
                <permission type="deleteNonDurableQueue" roles="amq"/>
                <permission type="createDurableQueue" roles="amq"/>
                <permission type="deleteDurableQueue" roles="amq"/>
                <permission type="createAddress" roles="amq"/>
                <permission type="deleteAddress" roles="amq"/>
                <permission type="consume" roles="amq"/>
                <permission type="browse" roles="amq"/>
                <permission type="send" roles="amq"/>
                <permission type="manage" roles="amq"/>
            </security-setting>
        </security-settings>
        <address-settings>
            <!-- default for catch all -->
            <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-delete-queues>false</auto-delete-queues>
                <auto-delete-created-queues>false</auto-delete-created-queues>
                <auto-delete-addresses>false</auto-delete-addresses>
            </address-setting>
        </address-settings>
    </core>
</configuration>
After starting the broker instances on the two nodes, they joined the cluster, which I can see in the logs:
2020-06-03 23:59:17,874 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61617 for protocols [CORE,AMQP]
2020-06-03 23:59:17,910 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
2020-06-03 23:59:17,910 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.12.0 [localhost, nodeID=e6c6eab6-a456-11ea-94cf-000d3a306e31]
2020-06-03 23:59:18,240 INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@5e9820f4 [name=$.artemis.internal.sf.myhost1-cluster.bd39cc41-a201-11ea-abaa-000d3a315d06, queue=QueueImpl[name=$.artemis.internal.sf.devmq1-cluster.bd39cc41-a201-11ea-abaa-000d3a315d06, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=e6c6eab6-a456-11ea-94cf-000d3a306e31], temp=false]@2b0263f3 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@5e9820f4 [name=$.artemis.internal.sf.devmq1-cluster.bd39cc41-a201-11ea-abaa-000d3a315d06, queue=QueueImpl[name=$.artemis.internal.sf.devmq1-cluster.bd39cc41-a201-11ea-abaa-000d3a315d06, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=e6c6eab6-a456-11ea-94cf-000d3a306e31], temp=false]@2b0263f3 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-64-60-100], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@24293395[nodeUUID=e6c6eab6-a456-11ea-94cf-000d3a306e31, connector=TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61617&host=10-64-60-101, address=, server=ActiveMQServerImpl::serverUUID=e6c6eab6-a456-11ea-94cf-000d3a306e31])) [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-64-60-100], discoveryGroupConfiguration=null]] is connected
2020-06-03 23:59:18,364 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin
The Java code below sends messages to the clustered broker.
Step 1: Both brokers were running.
Step 2: The Java client was started to send messages to the broker.
Step 3: From the console of myhost1, I can see messages pushed to the queue.
Step 4: I stop the broker instance on myhost1.
Step 5: The Java client log shows retries to connect to the other server; after n attempts it throws an exception. (My expectation is that it should NOT throw any exception.)
The Java code also contains a JNDI approach, which I commented out; even in that case the messages were pushed, but a similar exception occurred.
I also tried JmsPoolConnectionFactory; even then the same issue occurs: when one of the broker instances is stopped, it throws an exception after a few retries. (The logs for this are at the bottom, after the code.)
Question:
Using the Java code on the client side, how do I achieve auto-discovery/failover/reconnect without any exception? I am using static-connectors under the cluster connection.
package com.demo.artemis.clients;
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import org.apache.activemq.artemis.jms.client.ActiveMQJMSConnectionFactory;
import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
public class ArtemisClientClustered {

    public static void main(final String[] args) throws Exception {
        // only produces the message
        new ArtemisClientClustered().runProducer(true, false);
    }

    public boolean runProducer(boolean produceMesage, boolean consumeMessage) throws Exception {
        Connection connection = null;
        InitialContext initalContext = null;
        int i = 0;
        try {
            Properties jndiProp = new Properties();
            jndiProp.put("java.naming.factory.initial", "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
            //jndiProp.put("connectionFactory.ConnectionFactory", "tcp://localhost:61616?producerMaxRate=50");
            jndiProp.put("connectionFactory.ConnectionFactory", "(tcp://myhost2:61616,tcp://myhost1:61617)?ha=true;reconnectAttempts=-1;");
            jndiProp.put("queue.queue/ahm.load-datawarehouse.queue", "ahm.load-datawarehouse.queue");
            initalContext = new InitialContext(jndiProp);

            // Step 2. Perform a lookup on the queue
            Queue queue = (Queue) initalContext.lookup("queue/myExampleQ.queue");

            // Step 3. Perform a lookup on the Connection Factory
            //ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616?producerMaxRate=50");
            ConnectionFactory cf = (ConnectionFactory) initalContext.lookup("ConnectionFactory");
            // ConnectionFactory cf = new ActiveMQJMSConnectionFactory("(tcp://myhost2:61616,tcp://myhost1:61617)?ha=true;reconnectAttempts=-1;");

            // using the JmsPoolConnectionFactory
            JmsPoolConnectionFactory jmsPoolConnectionFactory = new JmsPoolConnectionFactory();
            jmsPoolConnectionFactory.setMaxConnections(8);
            jmsPoolConnectionFactory.setConnectionFactory(cf);

            // Step 4. Create a JMS Connection
            connection = jmsPoolConnectionFactory.createConnection("admin", "admin");

            // Step 5. Create a JMS Session
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            if (produceMesage) {
                // Step 6. Create a JMS Message Producer
                MessageProducer producer = session.createProducer(queue);
                System.out.println("Will now send as many messages as we can in few seconds...");
                // Step 7. Send as many messages as we can in N milliseconds
                final long duration = 1200000;
                i = 0;
                long start = System.currentTimeMillis();
                while (System.currentTimeMillis() - start <= duration) {
                    TextMessage message = session.createTextMessage("This is text message: " + i++);
                    producer.send(message);
                }
                long end = System.currentTimeMillis();
                double rate = 1000 * (double) i / (end - start);
                System.out.println("We sent " + i + " messages in " + (end - start) + " milliseconds");
                System.out.println("Actual send rate was " + rate + " messages per second");
                // Step 8. For good measure we consume the messages we produced.
            }

            if (consumeMessage) {
                MessageConsumer messageConsumer = session.createConsumer(queue);
                connection.start();
                System.out.println("Now consuming the messages...");
                i = 0;
                while (true) {
                    TextMessage messageReceived = (TextMessage) messageConsumer.receive(5000);
                    if (messageReceived == null) {
                        break;
                    }
                    i++;
                }
                System.out.println("Received " + i + " messages");
            }
            return true;
        } finally {
            // Step 9. Be sure to close our resources!
            if (connection != null) {
                connection.close();
            }
        }
    }
}
Log messages from the client code execution: when the client started, both myhost1 and myhost2 were running.
After some time I manually stopped the myhost1 broker, expecting that myhost2 would be automatically discovered by the client.
....
2020-06-03 23:58:48 DEBUG ClientSessionFactoryImpl:1102 - Trying to connect with connectorFactory = org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@45d84a20, connectorConfig=TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true
2020-06-03 23:58:48 DEBUG NettyConnector:486 - Connector NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] using native epoll
2020-06-03 23:58:48 DEBUG client:668 - AMQ211002: Started EPOLL Netty Connector version 4.1.48.Final to myhost2:61616
2020-06-03 23:58:48 DEBUG NettyConnector:815 - Remote destination: myhost2/10.64.60.101:61616
2020-06-03 23:58:48 DEBUG NettyConnector:659 - Added ActiveMQClientChannelHandler to Channel with id = cf33ff23
2020-06-03 23:58:48 DEBUG Recycler:97 - -Dio.netty.recycler.maxCapacityPerThread: 4096
2020-06-03 23:58:48 DEBUG Recycler:98 - -Dio.netty.recycler.maxSharedCapacityFactor: 2
2020-06-03 23:58:48 DEBUG Recycler:99 - -Dio.netty.recycler.linkCapacity: 16
2020-06-03 23:58:48 DEBUG Recycler:100 - -Dio.netty.recycler.ratio: 8
2020-06-03 23:58:48 DEBUG AbstractByteBuf:63 - -Dio.netty.buffer.checkAccessible: true
2020-06-03 23:58:48 DEBUG AbstractByteBuf:64 - -Dio.netty.buffer.checkBounds: true
2020-06-03 23:58:48 DEBUG ResourceLeakDetectorFactory:195 - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#6933b6c6
2020-06-03 23:58:48 DEBUG ClientSessionFactoryImpl:809 - Reconnection successful
2020-06-03 23:58:48 DEBUG NettyConnector:1269 - NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] host 1: 10.44.6.85 ip address: 10.44.6.85 host 2: myhost2 ip address: 10.44.6.85
2020-06-03 23:58:48 DEBUG ClientSessionFactoryImpl:277 - ClientSessionFactoryImpl received backup update for live/backup pair = TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true / null but it didn't belong to TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true
Will now send as many messages as we can in few seconds...
...
...
2020-06-04 00:01:09 WARN client:210 - AMQ212037: Connection failure to myhost2/10.64.60.101:61616 has been detected: AMQ219015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
2020-06-04 00:01:09 DEBUG ClientSessionFactoryImpl:800 - Trying reconnection attempt 0/-1
2020-06-04 00:01:09 DEBUG ClientSessionFactoryImpl:1102 - Trying to connect with connectorFactory = org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@45d84a20, connectorConfig=TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true
2020-06-04 00:01:09 DEBUG NettyConnector:486 - Connector NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] using native epoll
2020-06-04 00:01:09 DEBUG client:668 - AMQ211002: Started EPOLL Netty Connector version 4.1.48.Final to myhost2:61616
2020-06-04 00:01:09 DEBUG NettyConnector:815 - Remote destination: myhost2/10.64.60.101:61616
2020-06-04 00:01:09 DEBUG NettyConnector:659 - Added ActiveMQClientChannelHandler to Channel with id = d4ed884e
2020-06-04 00:01:09 DEBUG ClientSessionFactoryImpl:1063 - Connector towards NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] failed
2020-06-04 00:01:09 DEBUG ClientSessionFactoryImpl:1140 - Backup is not active, trying original connection configuration now.
2020-06-04 00:01:11 DEBUG ClientSessionFactoryImpl:800 - Trying reconnection attempt 1/-1
2020-06-04 00:01:11 DEBUG ClientSessionFactoryImpl:1102 - Trying to connect with connectorFactory = org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory@45d84a20, connectorConfig=TransportConfiguration(name=ConnectionFactory, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=myhost2&reconnectAttempts=-1&ha=true
2020-06-04 00:01:11 DEBUG NettyConnector:486 - Connector NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] using native epoll
2020-06-04 00:01:11 DEBUG client:668 - AMQ211002: Started EPOLL Netty Connector version 4.1.48.Final to myhost2:61616
2020-06-04 00:01:11 DEBUG NettyConnector:815 - Remote destination: myhost2/10.64.60.101:61616
2020-06-04 00:01:11 DEBUG NettyConnector:659 - Added ActiveMQClientChannelHandler to Channel with id = 1530857a
2020-06-04 00:01:11 DEBUG ClientSessionFactoryImpl:1063 - Connector towards NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] failed
2020-06-04 00:01:37 DEBUG NettyConnector:659 - Added ActiveMQClientChannelHandler to Channel with id = d886a84e
2020-06-04 00:01:37 DEBUG ClientSessionFactoryImpl:1063 - Connector towards NettyConnector [host=myhost2, port=61616, httpEnabled=false, httpUpgradeEnabled=false, useServlet=false, servletPath=/messaging/ActiveMQServlet, sslEnabled=false, useNio=true] failed
2020-06-04 00:01:37 DEBUG ClientSessionFactoryImpl:1140 - Backup is not active, trying original connection configuration now.
Exception in thread "main" javax.jms.JMSException: AMQ219014: Timed out after waiting 30,000 ms for response when sending packet 71
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:457)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:361)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.sendFullMessage(ActiveMQSessionContext.java:552)
at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.sendRegularMessage(ClientProducerImpl.java:296)
at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:268)
at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:143)
at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:125)
at org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.doSendx(ActiveMQMessageProducer.java:483)
at org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:220)
at org.messaginghub.pooled.jms.JmsPoolMessageProducer.sendMessage(JmsPoolMessageProducer.java:182)
at org.messaginghub.pooled.jms.JmsPoolMessageProducer.send(JmsPoolMessageProducer.java:90)
at org.messaginghub.pooled.jms.JmsPoolMessageProducer.send(JmsPoolMessageProducer.java:79)
at com.demo.artemis.clients.ArtemisClientClustered.runProducer(ArtemisClientClustered.java:77)
at com.demo.artemis.clients.ArtemisClientClustered.main(ArtemisClientClustered.java:26)
Caused by: ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ219014: Timed out after waiting 30,000 ms for response when sending packet 71]
... 14 more
NOTE: When I used a Camel consumer to consume messages from this queue and transform them to another queue, and I stopped a broker during processing, the consumers were automatically redirected to the other broker instance. From the console I was able to see the consumer counts move from one broker to the other.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:jee="http://www.springframework.org/schema/jee"
xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/aop
http://www.springframework.org/schema/aop/spring-aop-3.1.xsd
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.1.xsd
http://www.springframework.org/schema/jee
http://www.springframework.org/schema/jee/spring-jee-3.1.xsd
http://www.springframework.org/schema/tx
http://www.springframework.org/schema/tx/spring-tx-3.1.xsd
http://camel.apache.org/schema/spring
http://camel.apache.org/schema/spring/camel-spring.xsd">
<bean id="jmsConnectionFactory" class="org.apache.activemq.artemis.jms.client.ActiveMQJMSConnectionFactory">
<constructor-arg index="0" value="(tcp://myhost2:61616,tcp://myhost1:61617)?ha=true;reconnectAttempts=-1;"/>
</bean>
<bean id="jmsPooledConnectionFactory" class="org.messaginghub.pooled.jms.JmsPoolConnectionFactory" init-method="start" destroy-method="stop">
<property name="maxConnections" value="10" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
</bean>
<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="connectionFactory" ref="jmsPooledConnectionFactory" />
<property name="concurrentConsumers" value="10" />
</bean>
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
<property name="configuration" ref="jmsConfig" />
</bean>
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
<endpoint id="queue1" uri="jms:queue:myExampleQ" />
<endpoint id="queue2" uri="jms:queue:myExampleQ2" />
<route>
<from uri="ref:queue1" />
<convertBodyTo type="java.lang.String" />
<transform>
<simple>MSG FRM queue1 TO queue2 : ${bodyAs(String)}</simple>
</transform>
<to uri="ref:queue2" />
</route>
</camelContext>
</beans>
You've configured an active/active cluster of 2 nodes. This supports both connection and message load-balancing, but it doesn't support transparent failover. In order to get transparent failover you need to configure an active/passive HA pair. Check the ActiveMQ Artemis documentation as well as HA examples shipped with the broker for more details on how to do that.
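For reference, a minimal sketch of what an active/passive replicated pair looks like in broker.xml; the element names are from the Artemis 2.x HA documentation, so double-check them against your version.
On the live broker:
<ha-policy>
    <replication>
        <master/>
    </replication>
</ha-policy>
On the backup broker:
<ha-policy>
    <replication>
        <slave/>
    </replication>
</ha-policy>
With such a pair in place, a client configured with ha=true and reconnectAttempts=-1, like the one in the question, can fail over to the backup when the live broker goes down, instead of retrying the dead node until it times out as in the log above.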
I have one Linux machine with 2 Wildfly servers listening on 2 different HTTPS ports.
I have one domain and 2 sub-domains, aa.mydomain.fr and bb.mydomain.fr, that I redirect to my 2 Wildfly servers using HAProxy (I didn't find another solution for redirecting 2 sub-domains while dealing with 2 different HTTPS ports and one Linux server IP).
My HAProxy server configuration (for aa.mydomain.fr only):
global
    log 127.0.0.1:514 local0 info
    daemon
    maxconn 4096
    tune.ssl.default-dh-param 1024
    ssl-default-bind-options ssl-min-ver TLSv1.2

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    log global
    option httplog
    option forwardfor

frontend http-in
    bind linux_server_ip:80
    acl is_demo_site hdr_end(host) aa.mydomain.fr
    use_backend demo_site if is_demo_site

frontend https-in
    bind linux_server_ip:443 ssl crt /etc/haproxy/cert/mycert.pem
    acl is_demo_https_site hdr_end(host) aa.mydomain.fr
    use_backend demo_https_site if is_demo_https_site

backend demo_site
    server s1 linux_server_ip:8xxx maxconn 32

backend demo_https_site
    server s3 linux_server_ip:8yyy maxconn 32
    http-request set-header X-Forwarded-Proto https
My Wildfly server conf for sub-domain aa.mydomain.fr:
<subsystem xmlns="urn:jboss:domain:undertow:8.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other">
<buffer-cache name="default"/>
<server name="default-server">
<http-listener name="default" socket-binding="http" redirect-socket="https" proxy-address-forwarding="true" enable-http2="true"/>
<https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true" proxy-protocol="true"/>
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content"/>
<access-log pattern="%a %t %H %p %U %s %S %T" directory="${jboss.home.dir}/standalone/log" prefix="access_"/>
<http-invoker security-realm="ApplicationRealm"/>
</host>
</server>
<servlet-container name="default">
<jsp-config/>
<websockets/>
</servlet-container>
<handlers>
<file name="welcome-content" path="${jboss.home.dir}/welcome-content"/>
</handlers>
</subsystem>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
...
<socket-binding name="http" port="${jboss.http.port:8xxx}"/>
<socket-binding name="https" port="${jboss.https.port:8yyy}"/>
...
</socket-binding-group>
The HTTP redirection works fine, but not the HTTPS one, which returns a 502 Bad Gateway error code, and I have this error message in my Wildfly server log:
2019-09-10 10:47:11,746 TRACE [org.xnio.nio] (default I/O-2) Running task org.xnio.nio.QueuedNioTcpServer$1@7b85bf52
2019-09-10 10:47:11,746 TRACE [org.xnio.nio] (default I/O-2) Running task org.xnio.nio.NioHandle$1@dd77838
2019-09-10 10:47:11,746 DEBUG [io.undertow.request.io] (default I/O-2) UT005013: An IOException occurred: java.io.IOException: UT000179: Invalid PROXY protocol header
at io.undertow.core@2.0.15.Final//io.undertow.server.protocol.proxy.ProxyProtocolReadListener.handleEvent(ProxyProtocolReadListener.java:90)
at io.undertow.core@2.0.15.Final//io.undertow.server.protocol.proxy.ProxyProtocolReadListener.handleEvent(ProxyProtocolReadListener.java:34)
at org.jboss.xnio@3.6.5.Final//org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.jboss.xnio@3.6.5.Final//org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
at org.jboss.xnio.nio@3.6.5.Final//org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.jboss.xnio.nio@3.6.5.Final//org.xnio.nio.NioHandle$1.run(NioHandle.java:50)
at org.jboss.xnio.nio@3.6.5.Final//org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:612)
at org.jboss.xnio.nio@3.6.5.Final//org.xnio.nio.WorkerThread.run(WorkerThread.java:479)
2019-09-10 10:47:11,747 TRACE [org.xnio.nio] (default I/O-2) Cancelling key channel=java.nio.channels.SocketChannel[connected local=/linux_server_ip:8xxx remote=/linux_server_ip:49866], selector=sun.nio.ch.EPollSelectorImpl@4a7d8873, interestOps=1, readyOps=0 of java.nio.channels.SocketChannel[connected local=/linux_server_ip:8xxx remote=/linux_server_ip:49866] (same thread)
2019-09-10 10:47:11,747 TRACE [org.xnio.nio] (default I/O-2) Added task org.xnio.nio.QueuedNioTcpServer$2@1939a2a9
Details of the error:
private static final byte[] NAME = "PROXY ".getBytes(StandardCharsets.US_ASCII);
…
public void handleEvent(StreamSourceChannel streamSourceChannel) {
    PooledByteBuffer buffer = bufferPool.allocate();
    boolean freeBuffer = true;
    try {
        for (;;) {
            int res = streamSourceChannel.read(buffer.getBuffer());
            if (res == -1) {
                IoUtils.safeClose(streamConnection);
                return;
            } else if (res == 0) {
                return;
            } else {
                buffer.getBuffer().flip();
                while (buffer.getBuffer().hasRemaining()) {
                    char c = (char) buffer.getBuffer().get();
                    if (byteCount < NAME.length) {
                        // first we verify that we have the correct protocol
                        if (c != NAME[byteCount]) {
                            throw UndertowMessages.MESSAGES.invalidProxyHeader();
                        }
…
Notes:
I use a "Let's encrypt" SSL certificat.
I get the same error code if i remove the "option forwardfor" in the Haproxy conf.
If i add "accept-proxy" in frontend https-in section and "send-proxy" in backend demo_https_site, i get the Following message in haproxy.log: "Received something which does not look like a PROXY protocol header".
When i monitor the header request with FF monitor tools, i don't see X-Forwarded detail...
Software details:
Haproxy v1.8.8/Wildfly v15.0.1
I don't know if the issue come from my wildfly conf or my haproxy conf, can somebody suggest idea or fix please ?
Best regards.
One way I think you could fix this is by enabling the PROXY protocol towards your HTTPS backend with the send-proxy or send-proxy-v2 option, e.g.:
backend demo_https_site
    server s3 linux_server_ip:8yyy maxconn 32 send-proxy
Another way would be to remove proxy-protocol from the Wildfly listener, e.g.:
<https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/>
However, this means the client's source IP would have to be derived from the X-Forwarded-For header.
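If you take that second route, here is a sketch of the listener with header-based forwarding enabled instead of the PROXY protocol, assuming proxy-address-forwarding is supported on the https-listener in this Undertow version as it is on the http-listener shown above; combined with HAProxy's option forwardfor, Undertow then derives the client address from the X-Forwarded-* headers:
<https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true" proxy-address-forwarding="true"/>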
I have an EJB client and am able to test the remote EJB on Tomcat and WebLogic, but it is failing on JBoss EAP 6.4.
There is no firewall issue, as it works on the other servers. I tried placing the jboss-client jar shipped with JBoss and also tried it as a BOM type; still the same error.
JBoss server log:
19:27:45,107 TRACE [org.jboss.remoting.remote] (Remoting "config-based-naming-client-endpoint" read-1) Received message data
19:27:45,108 TRACE [org.jboss.remoting.remote] (Remoting "config-based-naming-client-endpoint" read-1) CAS Channel ID bf292e5d (outbound) of Remoting connection 33fccb04 to /192.168.1.50:4447
old: RS=false WS=false IM=0 OM=0
new: RS=false WS=false IM=1 OM=0
19:27:45,109 TRACE [org.jboss.remoting.remote] (Remoting "config-based-naming-client-endpoint" read-1) Opened inbound message on Channel ID bf292e5d (outbound) of Remoting connection 33fccb04 to /192.168.1.50:4447
19:27:45,116 TRACE [org.jboss.remoting.remote] (Remoting "config-based-naming-client-endpoint" read-1) Received message data
19:27:45,116 TRACE [org.jboss.remoting.remote] (Remoting "config-based-naming-client-endpoint" read-1) CAS Channel ID bf292e5d (outbound) of Remoting connection 33fccb04 to /192.168.1.50:4447
old: RS=false WS=false IM=1 OM=0
new: RS=false WS=false IM=0 OM=0
19:27:45,117 TRACE [org.jboss.remoting.remote] (Remoting "config-based-naming-client-endpoint" read-1) Closed inbound message on Channel ID bf292e5d (outbound) of Remoting connection 33fccb04 to /192.168.1.50:4447
19:27:45,117 TRACE [org.jboss.remoting.remote.connection] (Remoting "config-based-naming-client-endpoint" read-1) Sent message java.nio.HeapByteBuffer[pos=7 lim=7 cap=8192] (direct)
19:27:45,117 TRACE [org.jboss.remoting.remote.connection] (Remoting "config-based-naming-client-endpoint" read-1) Flushed channel (direct)
19:27:45,117 TRACE [org.jboss.remoting.remote] (Remoting "config-based-naming-client-endpoint" read-1) No message ready; returning
JBoss standalone.xml configuration:
<subsystem xmlns="urn:jboss:domain:remoting:1.2">
    <connector name="remoting-connector" socket-binding="remoting" security-realm="ApplicationRealm"/>
    <outbound-connections>
        <remote-outbound-connection name="remote-ejb-connection" outbound-socket-binding-ref="remote-ejb" username="admin" security-realm="ApplicationRealm">
            <properties>
                <property name="SASL_POLICY_NOANONYMOUS" value="false"/>
                <property name="SSL_ENABLED" value="false"/>
            </properties>
        </remote-outbound-connection>
    </outbound-connections>
</subsystem>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <outbound-socket-binding name="remote-ejb">
        <remote-destination host="192.168.2.31" port="4447"/>
    </outbound-socket-binding>
</socket-binding-group>
Stack trace:
callExternalService - Exchange-exception: java.lang.IllegalStateException: EJBCLIENT000025: No EJB receiver available for handling [appName:, moduleName:AS-Test-EJB, distinctName:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext@a89afa9
at org.jboss.ejb.client.EJBClientContext.requireEJBReceiver(EJBClientContext.java:747)
at org.jboss.ejb.client.ReceiverInterceptor.handleInvocation(ReceiverInterceptor.java:116)
at org.jboss.ejb.client.EJBClientInvocationContext.sendRequest(EJBClientInvocationContext.java:186)
at org.jboss.ejb.client.EJBInvocationHandler.sendRequestWithPossibleRetries(EJBInvocationHandler.java:255)
at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:200)
at org.jboss.ejb.client.EJBInvocationHandler.doInvoke(EJBInvocationHandler.java:183)
at org.jboss.ejb.client.EJBInvocationHandler.invoke(EJBInvocationHandler.java:146)
at com.sun.proxy.$Proxy406.sayHello(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.camel.component.bean.MethodInfo.invoke(MethodInfo.java:408)
at org.apache.camel.component.bean.MethodInfo$1.doProceed(MethodInfo.java:279)
at org.apache.camel.component.bean.MethodInfo$1.proceed(MethodInfo.java:252)
at org.apache.camel.component.bean.BeanProcessor.process(BeanProcessor.java:177)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.component.bean.BeanProcessor.process(BeanProcessor.java:68)
at org.apache.camel.component.bean.BeanProducer.process(BeanProducer.java:38)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:197)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
at org.apache.camel.processor.UnitOfWorkProducer.process(UnitOfWorkProducer.java:68)
at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:412)
at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:380)
at org.apache.camel.impl.ProducerCache.doInProducer(ProducerCache.java:270)
at org.apache.camel.impl.ProducerCache.sendExchange(ProducerCache.java:380)
at org.apache.camel.impl.ProducerCache.send(ProducerCache.java:221)
at org.apache.camel.impl.DefaultProducerTemplate.send(DefaultProducerTemplate.java:124)
at org.apache.camel.impl.DefaultProducerTemplate$13.call(DefaultProducerTemplate.java:616)
at org.apache.camel.impl.DefaultProducerTemplate$13.call(DefaultProducerTemplate.java:614)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Edit: added the properties used for the lookup.
<endpoint id="Admin__Admin__EJBServiceTest" uri="ejb04-Admin:AS-Test-EJB/ServerTestBean!com.appzillon.test.ejb.ServerTest?method=sayHello"/>
<bean class="org.apache.camel.component.ejb.EjbComponent" id="ejb04-Admin">
    <property name="properties" ref="Admin_Admin__EJBServiceTest_jndiProperties"/>
</bean>
<util:properties id="Admin_Admin__EJBServiceTest_jndiProperties">
    <prop key="java.naming.provider.url">remote://192.168.1.50:4447</prop>
    <prop key="java.naming.factory.initial">org.jboss.naming.remote.client.InitialContextFactory</prop>
    <prop key="jboss.naming.client.ejb.context">true</prop>
    <prop key="java.naming.security.principal">admin</prop>
    <prop key="java.naming.security.credentials">********</prop>
</util:properties>
We are using Apache Camel for processing. We read the endpoint from an XML file and process it; the lookup is taken care of by Apache Camel.
We are trying to connect to HDFS using Kerberos from a Karaf container, via an OSGi bundle. We have already installed the hadoop client in Karaf using the Apache ServiceMix bundles:
<groupId>org.apache.servicemix.bundles</groupId>
<artifactId>org.apache.servicemix.bundles.hadoop-client</artifactId>
<version>2.4.1_1</version>
The POM file is below:
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.felix</groupId>
            <artifactId>maven-bundle-plugin</artifactId>
            <version>2.3.7</version>
            <extensions>true</extensions>
            <configuration>
                <instructions>
                    <Bundle-Activator>com.bdbizviz.hadoop.activator.PaHdfsActivator</Bundle-Activator>
                    <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
                    <Bundle-Version>${project.version}</Bundle-Version>
                    <Export-Package>
                        <!-- com.google.*, !org.apache.camel.model.dataformat, !org.apache.poi.ddf,
                        !org.apache.xmlbeans, org.apache.commons.collections.*, org.apache.commons.configuration.*,
                        org.apache.hadoop.hdfs*, org.apache.hadoop.hdfs.client*, org.apache.hadoop.hdfs.net*,
                        org.apache.hadoop.hdfs.protocol.datatransfer*, org.apache.hadoop.hdfs.protocol.proto*,
                        org.apache.hadoop.hdfs.protocolPB*, org.apache.hadoop.conf.*, org.apache.hadoop.io.*,
                        org.apache.hadoop.fs.*, org.apache.hadoop.security.*, org.apache.hadoop.metrics2.*,
                        org.apache.hadoop.util.*, org.apache.hadoop*; -->
                        <!-- org.apache.*; -->
                    </Export-Package>
                    <Import-Package>
                        org.apache.hadoop*,org.osgi.framework,*;resolution:=optional
                    </Import-Package>
                    <Include-Resource>
                        {maven-resources},
                        @org.apache.servicemix.bundles.hadoop-client-2.4.1_1.jar!/core-default.xml,
                        @org.apache.servicemix.bundles.hadoop-client-2.4.1_1.jar!/hdfs-default.xml,
                        @org.apache.servicemix.bundles.hadoop-client-2.4.1_1.jar!/mapred-default.xml,
                        @org.apache.servicemix.bundles.hadoop-client-2.4.1_1.jar!/hadoop-metrics.properties
                    </Include-Resource>
                    <DynamicImport-Package>*</DynamicImport-Package>
                </instructions>
            </configuration>
        </plugin>
    </plugins>
</build>
<dependencies>
    <dependency>
        <groupId>org.apache.servicemix.bundles</groupId>
        <artifactId>org.apache.servicemix.bundles.hadoop-client</artifactId>
        <version>2.4.1_1</version>
        <exclusions>
            <exclusion>
                <groupId>jdk.tools</groupId>
                <artifactId>jdk.tools</artifactId>
                <!-- <version>1.7</version> -->
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
Code Snippet:
public class TestHdfs implements ITestHdfs {

    public void printName() throws IOException {
        /*
        Configuration config = new Configuration();
        config.set("fs.default.name", "hdfs://192.168.1.17:8020");
        config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
        try {
            fs = FileSystem.get(config);
            getHostnames(fs);
        } catch (IOException e) {
            e.printStackTrace();
        }*/

        Thread.currentThread().setContextClassLoader(getClass().getClassLoader());

        final Configuration config = new Configuration();
        config.set("fs.default.name", "hdfs://192.168.1.124:8020");
        config.set("fs.file.impl", LocalFileSystem.class.getName());
        config.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
        config.set("hadoop.security.authentication", "KERBEROS");
        config.set("dfs.namenode.kerberos.principal.pattern", "hdfs/*@********.COM");

        System.setProperty("HADOOP_JAAS_DEBUG", "true");
        System.setProperty("sun.security.krb5.debug", "true");
        System.setProperty("java.net.preferIPv4Stack", "true");

        System.out.println("--------------status---:" + UserGroupInformation.isSecurityEnabled());

        UserGroupInformation.setConfiguration(config);
        // UserGroupInformation.loginUserFromKeytab(
        //         "hdfs/hadoop1.********.com@********.COM",
        //         "file:/home/kaushal/hdfs-hadoop1.keytab");
        UserGroupInformation app_ugi = UserGroupInformation
                .loginUserFromKeytabAndReturnUGI("hdfs/hadoop1.********.com@********.COM",
                        "C:\\Users\\desanth.pv\\Desktop\\hdfs-hadoop1.keytab");
        UserGroupInformation proxy_ugi = UserGroupInformation.createProxyUser("ssdfsdfsdfsdfag", app_ugi);

        System.out.println("--------------status---:" + UserGroupInformation.isSecurityEnabled());

        /*ClassLoader tccl = Thread.currentThread().getContextClassLoader();*/
        try {
            /*Thread.currentThread().setContextClassLoader(getClass().getClassLoader());*/
            proxy_ugi.doAs(new PrivilegedExceptionAction() {
                @Override
                public Object run() throws Exception {
                    /*ClassLoader tccl = Thread.currentThread().getContextClassLoader();*/
                    try {
                        /*Thread.currentThread().setContextClassLoader(getClass().getClassLoader());*/
                        System.out.println("desanth");
                        FileSystem fs = FileSystem.get(config);
                        DistributedFileSystem hdfs = (DistributedFileSystem) fs;
                        DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
                        String[] names = new String[dataNodeStats.length];
                        for (int i = 0; i < dataNodeStats.length; i++) {
                            names[i] = dataNodeStats[i].getHostName();
                            System.out.println((dataNodeStats[i].getHostName()));
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    } finally {
                        //Thread.currentThread().setContextClassLoader(tccl);
                    }
                    return null;
                }
            });
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } finally {
            /*Thread.currentThread().setContextClassLoader(tccl);*/
        }
    }

    public void getHostnames(FileSystem fs) throws IOException {
        DistributedFileSystem hdfs = (DistributedFileSystem) fs;
        DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
        String[] names = new String[dataNodeStats.length];
        for (int i = 0; i < dataNodeStats.length; i++) {
            names[i] = dataNodeStats[i].getHostName();
            System.out.println((dataNodeStats[i].getHostName()));
        }
    }
}
Error:
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "jayendra-dynabook-T451-34EW/127.0.1.1"; destination host is: "hadoop2.********.com":8020;
Following the background section of Vladimir's answer I tried many things, but the simplest one, which is adding
SecurityUtil.setSecurityInfoProviders(new AnnotatedSecurityInfo());
before UserGroupInformation.loginUserFromKeytab, is what solved the issue for me.
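For context, a minimal sketch of where that call sits relative to the login code from the question (the config object, principal, and keytab path are the ones already shown above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.AnnotatedSecurityInfo;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

// Register the annotation-based SecurityInfo provider explicitly, so the
// SASL layer can resolve Kerberos info even when the ServiceLoader metadata
// (META-INF/services) is not visible to the current classloader.
SecurityUtil.setSecurityInfoProviders(new AnnotatedSecurityInfo());

UserGroupInformation.setConfiguration(config);
UserGroupInformation app_ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "hdfs/hadoop1.********.com@********.COM",
        "C:\\Users\\desanth.pv\\Desktop\\hdfs-hadoop1.keytab");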
I have not tried to reproduce this issue in an OSGI environment, but I think you may be facing an issue similar to the one you face when trying to run in a Kerberised environment with a fat jar that includes the hadoop/hdfs dependencies.
Namely the org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] error.
Background
After turning on DEBUG logging there was a funny line after SASL negotiation:
Get kerberos info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:null
Notice the null - successful executions have a class reference here instead.
Tracking this down, SaslRpcClient calls SecurityUtil.getTokenInfo. This initiates a search of all the org.apache.hadoop.security.SecurityInfo providers.
org.apache.hadoop.security.SecurityUtil uses java.util.ServiceLoader to look up SecurityInfo instances. ServiceLoader by default uses the current thread's ContextClassLoader to look for files in the META-INF/services/ directory on the classpath. The files are named corresponding to the service name, so it's looking for META-INF/services/org.apache.hadoop.security.SecurityInfo
When a jar is an uber jar (or I guess if you load something in an OSGI bundle) and you have only one such file on the classpath then you have to ensure all the entries are appended. In maven for example, you can use the ServicesResourceTransformer to append the entries. sbt-assembly has a similar merge option that is more configurable.
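For example, with the maven-shade-plugin, the ServicesResourceTransformer concatenates the META-INF/services files from all the shaded jars instead of letting one jar's file shadow the others; a standard snippet, not specific to this project:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- appends META-INF/services files with the same name instead of overwriting -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>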
Solution
As described in the background, make sure the classloader that java.util.ServiceLoader is using can find META-INF/services/org.apache.hadoop.security.SecurityInfo with all the entries from the hadoop jars.
In the OSGi case, you still have to somehow merge the entries. Try including them in the <Include-Resource> section of your bundle configuration?
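One possible shape for that, as an untested sketch: pre-merge the provider entries from the hadoop jars into a file in your own sources (merged-SecurityInfo is a made-up name here) and map it onto the service path with bnd's target=source syntax:

<Include-Resource>
    {maven-resources},
    META-INF/services/org.apache.hadoop.security.SecurityInfo=src/main/resources/merged-SecurityInfo
</Include-Resource>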
Log output
This is the output I get when it does not work:
2018-05-03 12:01:56,739 DEBUG PrivilegedAction as:user@DOMAIN (auth:KERBEROS) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:757) [ForkJoinPool-1-worker-5] org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1893)
2018-05-03 12:01:56,740 DEBUG Sending sasl message state: NEGOTIATE
[ForkJoinPool-1-worker-5] org.apache.hadoop.security.SaslRpcClient (SaslRpcClient.java:457)
2018-05-03 12:01:56,741 DEBUG Received SASL message state: NEGOTIATE
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
challenge: "XXX"
}
auths {
method: "KERBEROS"
mechanism: "GSSAPI"
protocol: "XXX"
serverId: "XXX"
}
[ForkJoinPool-1-worker-5] org.apache.hadoop.security.SaslRpcClient (SaslRpcClient.java:389)
2018-05-03 12:01:56,741 DEBUG Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:null [ForkJoinPool-1-worker-5] org.apache.hadoop.security.SaslRpcClient (SaslRpcClient.java:264)
2018-05-03 12:01:56,741 DEBUG Get kerberos info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:null [ForkJoinPool-1-worker-5] org.apache.hadoop.security.SaslRpcClient (SaslRpcClient.java:291)
2018-05-03 12:01:56,742 DEBUG PrivilegedActionException as:user@DOMAIN (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] [ForkJoinPool-1-worker-5] org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1870)
2018-05-03 12:01:56,742 DEBUG PrivilegedAction as:user@DOMAIN (auth:KERBEROS) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:683) [ForkJoinPool-1-worker-5] org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1893)
2018-05-03 12:01:56,743 WARN Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] [ForkJoinPool-1-worker-5] org.apache.hadoop.ipc.Client (Client.java:715)
2018-05-03 12:01:56,743 DEBUG PrivilegedActionException as:user@DOMAIN (auth:KERBEROS) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] [ForkJoinPool-1-worker-5] org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1870)
2018-05-03 12:01:56,743 DEBUG closing ipc connection to XXX/nnn.nnn.nnn.nnn:8020: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] [ForkJoinPool-1-worker-5] org.apache.hadoop.ipc.Client (Client.java:1217)
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:720)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:683)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:770)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1620)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.create(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:313)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
at com.sun.proxy.$Proxy11.create(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1822)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1701)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1636)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:480)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:476)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:476)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:417)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:930)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:807)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:796)
...
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:595)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:397)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:762)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:758)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:757)
... 50 more
I had some problems when using Spring Boot (1.2.5) with ActiveMQ (5.11.1).
When I set the value below in Spring Boot's application.properties:
spring.activemq.broker-url=tcp://localhost:61616
It works well.
When I set another value like below:
spring.activemq.broker-url=stomp://localhost:61613
It throws:
Could not create Transport. Reason: java.lang.IllegalArgumentException: Invalid connect parameters: {wireFormat.host=localhost}
Or like:
spring.activemq.broker-url=mqtt://localhost:1883
It also throws:
Could not create Transport. Reason: java.lang.IllegalArgumentException: Invalid connect parameters: {wireFormat.host=localhost}
Full exception info:
Exception in thread "main" org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.jms.config.internalJmsListenerEndpointRegistry'; nested exception is org.springframework.jms.UncategorizedJmsException: Uncategorized exception occured during JMS processing; nested exception is javax.jms.JMSException: Could not create Transport. Reason: java.lang.IllegalArgumentException: Invalid connect parameters: {wireFormat.host=localhost, maximumConnections=1000, wireFormat.maxFrameSize=104857600}
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:176)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:346)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:149)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:112)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:770)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:140)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:483)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:686)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:957)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:946)
at cn.vamos.Application.main(Application.java:39)
Caused by: org.springframework.jms.UncategorizedJmsException: Uncategorized exception occured during JMS processing; nested exception is javax.jms.JMSException: Could not create Transport. Reason: java.lang.IllegalArgumentException: Invalid connect parameters: {wireFormat.host=localhost, maximumConnections=1000, wireFormat.maxFrameSize=104857600}
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:316)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:169)
at org.springframework.jms.listener.AbstractJmsListeningContainer.start(AbstractJmsListeningContainer.java:273)
at org.springframework.jms.config.JmsListenerEndpointRegistry.start(JmsListenerEndpointRegistry.java:167)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:173)
... 13 more
Caused by: javax.jms.JMSException: Could not create Transport. Reason: java.lang.IllegalArgumentException: Invalid connect parameters: {wireFormat.host=localhost, maximumConnections=1000, wireFormat.maxFrameSize=104857600}
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:36)
at org.apache.activemq.ActiveMQConnectionFactory.createTransport(ActiveMQConnectionFactory.java:319)
at org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:332)
at org.apache.activemq.ActiveMQConnectionFactory.createActiveMQConnection(ActiveMQConnectionFactory.java:305)
at org.apache.activemq.ActiveMQConnectionFactory.createConnection(ActiveMQConnectionFactory.java:245)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:180)
at org.springframework.jms.listener.AbstractJmsListeningContainer.createSharedConnection(AbstractJmsListeningContainer.java:413)
at org.springframework.jms.listener.AbstractJmsListeningContainer.establishSharedConnection(AbstractJmsListeningContainer.java:381)
at org.springframework.jms.listener.AbstractJmsListeningContainer.doStart(AbstractJmsListeningContainer.java:285)
at org.springframework.jms.listener.SimpleMessageListenerContainer.doStart(SimpleMessageListenerContainer.java:209)
at org.springframework.jms.listener.AbstractJmsListeningContainer.start(AbstractJmsListeningContainer.java:270)
... 15 more
Caused by: java.lang.IllegalArgumentException: Invalid connect parameters: {wireFormat.host=localhost, maximumConnections=1000, wireFormat.maxFrameSize=104857600}
at org.apache.activemq.transport.TransportFactory.doConnect(TransportFactory.java:122)
at org.apache.activemq.transport.TransportFactory.connect(TransportFactory.java:64)
at org.apache.activemq.ActiveMQConnectionFactory.createTransport(ActiveMQConnectionFactory.java:317)
... 24 more
Part of the ActiveMQ startup information is shown below:
INFO | KahaDB is version 5
INFO | Recovering from the journal ...
INFO | Recovery replayed 480 operations from the journal in 0.066 seconds.
INFO | Apache ActiveMQ 5.11.1 (localhost, ID:LBDZ-20120706QF-18491-1437644294931-0:1) is starting
INFO | Listening for connections at: tcp://LBDZ-20120706QF:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector openwire started
INFO | Listening for connections at: amqp://LBDZ-20120706QF:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector amqp started
INFO | Listening for connections at: stomp://LBDZ-20120706QF:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector stomp started
INFO | Listening for connections at: mqtt://LBDZ-20120706QF:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector mqtt started
INFO | Listening for connections at ws://LBDZ-20120706QF:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector ws started
The pom.xml entries for ActiveMQ:
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-broker</artifactId>
    <version>${activemq.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-mqtt</artifactId>
    <version>${activemq.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-amqp</artifactId>
    <version>${activemq.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-stomp</artifactId>
    <version>${activemq.version}</version>
</dependency>
Can anyone help me out? Thanks a lot!
The errors you're getting sound like the ones you'd get if you tried to use STOMP for a broker-to-broker network connector (see section #1 in the middle of http://mail-archives.apache.org/mod_mbox/activemq-users/201502.mbox/%3C1424962402912-4692106.post@n4.nabble.com%3E). Are you sure that spring.activemq.broker-url is how you're supposed to set the URL at which the broker is listening? (After all, your log shows that the broker is already listening on a number of protocols/ports, which doesn't seem to be controlled by that property.)
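For what it's worth, Spring Boot's ActiveMQ auto-configuration hands that property to the OpenWire JMS client, roughly like the sketch below; that's why only transports the JMS client itself speaks (tcp://, ssl://, failover:(...), vm://) make sense there, while stomp:// and mqtt:// are acceptor protocols the broker offers to non-JMS clients:

import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;

// Roughly what spring.activemq.broker-url ends up configuring on the client side:
ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

To talk to the STOMP or MQTT ports you would use a STOMP or MQTT client library rather than the JMS ConnectionFactory.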