How to connect to HDFS via Kerberos from OSGi bundles - java

We are trying to connect to HDFS using Kerberos from an OSGi bundle running in a Karaf container. We have already installed the Hadoop client in Karaf using the Apache ServiceMix bundles:
<groupId>org.apache.servicemix.bundles</groupId>
<artifactId>org.apache.servicemix.bundles.hadoop-client</artifactId>
<version>2.4.1_1</version>
The POM file is attached below:
<build>
<plugins>
<plugin>
<groupId>org.apache.felix</groupId>
<artifactId>maven-bundle-plugin</artifactId>
<version>2.3.7</version>
<extensions>true</extensions>
<configuration>
<instructions>
<Bundle-Activator>com.bdbizviz.hadoop.activator.PaHdfsActivator</Bundle-Activator>
<Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
<Bundle-Version>${project.version}</Bundle-Version>
<Export-Package>
<!-- com.google.*, !org.apache.camel.model.dataformat, !org.apache.poi.ddf,
!org.apache.xmlbeans, org.apache.commons.collections.*, org.apache.commons.configuration.*,
org.apache.hadoop.hdfs*, org.apache.hadoop.hdfs.client*, org.apache.hadoop.hdfs.net*,
org.apache.hadoop.hdfs.protocol.datatransfer*, org.apache.hadoop.hdfs.protocol.proto*,
org.apache.hadoop.hdfs.protocolPB*, org.apache.hadoop.conf.*, org.apache.hadoop.io.*,
org.apache.hadoop.fs.*, org.apache.hadoop.security.*, org.apache.hadoop.metrics2.*,
org.apache.hadoop.util.*, org.apache.hadoop*; -->
<!-- org.apache.*; -->
</Export-Package>
<Import-Package>
org.apache.hadoop*,org.osgi.framework,*;resolution:=optional
</Import-Package>
<Include-Resource>
{maven-resources},
@org.apache.servicemix.bundles.hadoop-client-2.4.1_1.jar!/core-default.xml,
@org.apache.servicemix.bundles.hadoop-client-2.4.1_1.jar!/hdfs-default.xml,
@org.apache.servicemix.bundles.hadoop-client-2.4.1_1.jar!/mapred-default.xml,
@org.apache.servicemix.bundles.hadoop-client-2.4.1_1.jar!/hadoop-metrics.properties
</Include-Resource>
<DynamicImport-Package>*</DynamicImport-Package>
</instructions>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.servicemix.bundles</groupId>
<artifactId>org.apache.servicemix.bundles.hadoop-client</artifactId>
<version>2.4.1_1</version>
<exclusions>
<exclusion>
<groupId>jdk.tools</groupId>
<artifactId>jdk.tools</artifactId>
<!-- <version>1.7</version> -->
</exclusion>
</exclusions>
</dependency>
</dependencies>
Code Snippet:
public class TestHdfs implements ITestHdfs{
public void printName() throws IOException{
/*
Configuration config = new Configuration();
config.set("fs.default.name", "hdfs://192.168.1.17:8020");
config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
try {
fs = FileSystem.get(config);
getHostnames(fs);
} catch (IOException e) {
e.printStackTrace();
}*/
Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
final Configuration config = new Configuration();
config.set("fs.default.name", "hdfs://192.168.1.124:8020");
config.set("fs.file.impl", LocalFileSystem.class.getName());
config.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
config.set("hadoop.security.authentication", "KERBEROS");
config.set("dfs.namenode.kerberos.principal.pattern",
"hdfs/*#********.COM");
System.setProperty("HADOOP_JAAS_DEBUG", "true");
System.setProperty("sun.security.krb5.debug", "true");
System.setProperty("java.net.preferIPv4Stack", "true");
System.out.println("--------------status---:"
+ UserGroupInformation.isSecurityEnabled());
UserGroupInformation.setConfiguration(config);
// UserGroupInformation.loginUserFromKeytab(
// "hdfs/hadoop1.********.com@********.COM",
// "file:/home/kaushal/hdfs-hadoop1.keytab");
UserGroupInformation app_ugi = UserGroupInformation
.loginUserFromKeytabAndReturnUGI("hdfs/hadoop1.********.com@********.COM",
"C:\\Users\\desanth.pv\\Desktop\\hdfs-hadoop1.keytab");
UserGroupInformation proxy_ugi = UserGroupInformation.createProxyUser(
"ssdfsdfsdfsdfag", app_ugi);
System.out.println("--------------status---:"
+ UserGroupInformation.isSecurityEnabled());
/*ClassLoader tccl = Thread.currentThread()
.getContextClassLoader();*/
try {
/*Thread.currentThread().setContextClassLoader(
getClass().getClassLoader());*/
proxy_ugi.doAs(new PrivilegedExceptionAction<Object>() {
@Override
public Object run() throws Exception {
/*ClassLoader tccl = Thread.currentThread()
.getContextClassLoader();*/
try {
/*Thread.currentThread().setContextClassLoader(
getClass().getClassLoader());*/
System.out.println("desanth");
FileSystem fs = FileSystem.get(config);
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
String[] names = new String[dataNodeStats.length];
for (int i = 0; i < dataNodeStats.length; i++) {
names[i] = dataNodeStats[i].getHostName();
System.out.println((dataNodeStats[i].getHostName()));
}
} catch (IOException e) {
e.printStackTrace();
} finally {
//Thread.currentThread().setContextClassLoader(tccl);
}
return null;
}
});
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} finally {
/*Thread.currentThread().setContextClassLoader(tccl);*/
}
}
public void getHostnames(FileSystem fs) throws IOException {
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
String[] names = new String[dataNodeStats.length];
for (int i = 0; i < dataNodeStats.length; i++) {
names[i] = dataNodeStats[i].getHostName();
System.out.println((dataNodeStats[i].getHostName()));
}
}
}
Error :
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "jayendra-dynabook-T451-34EW/127.0.1.1"; destination host is: "hadoop2.********.com":8020;

Following the background section of Vladimir's answer I tried many things, but the simplest one, which is adding
SecurityUtil.setSecurityInfoProviders(new AnnotatedSecurityInfo());
before UserGroupInformation.loginUserFromKeytab, is what solved the issue for me.
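For context, here is a minimal sketch of where that call can sit relative to the login (the class name, principal and keytab path are placeholders, not values from the question):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.AnnotatedSecurityInfo;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLogin {
    public static UserGroupInformation login(Configuration config) throws IOException {
        // Register the annotation-based SecurityInfo provider explicitly, so the
        // SASL client can resolve KerberosInfo/TokenInfo even when the
        // META-INF/services lookup fails inside the bundle classloader.
        SecurityUtil.setSecurityInfoProviders(new AnnotatedSecurityInfo());
        UserGroupInformation.setConfiguration(config);
        // Placeholder principal and keytab path.
        return UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "hdfs/host.example.com@EXAMPLE.COM", "/etc/security/keytabs/hdfs.keytab");
    }
}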

I have not tried to reproduce this issue in an OSGI environment, but I think you may be facing an issue similar to the one you face when trying to run in a Kerberised environment with a fat jar that includes the hadoop/hdfs dependencies.
Namely the org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] error.
Background
After turning on DEBUG logging there was a funny line after SASL negotiation:
Get kerberos info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:null
Notice the null - successful executions have a class reference here instead.
Tracking this down, SaslRpcClient calls SecurityUtil.getTokenInfo. This initiates a search of all the org.apache.hadoop.security.SecurityInfo providers.
org.apache.hadoop.security.SecurityUtil uses java.util.ServiceLoader to look up SecurityInfo instances. ServiceLoader by default uses the current thread's ContextClassLoader to look for files in the META-INF/services/ directory on the classpath. The files are named corresponding to the service name, so it's looking for META-INF/services/org.apache.hadoop.security.SecurityInfo
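To see what that lookup finds in your environment, a tiny diagnostic sketch like this (illustrative only, not from the original answer) prints the providers visible to the current context classloader:

import java.util.ServiceLoader;

import org.apache.hadoop.security.SecurityInfo;

public class SecurityInfoCheck {
    public static void main(String[] args) {
        // ServiceLoader consults the thread context classloader and scans
        // META-INF/services/org.apache.hadoop.security.SecurityInfo on the classpath.
        ServiceLoader<SecurityInfo> loader = ServiceLoader.load(SecurityInfo.class);
        int count = 0;
        for (SecurityInfo info : loader) {
            count++;
            System.out.println("SecurityInfo provider: " + info.getClass().getName());
        }
        // Zero providers here matches the "Get kerberos info ... info:null" symptom.
        System.out.println("Providers found: " + count);
    }
}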
When you build an uber jar (or, I guess, when you load something in an OSGi bundle) and only one such file ends up on the classpath, you have to ensure all the entries are appended. In Maven, for example, you can use the ServicesResourceTransformer of the shade plugin to append the entries; sbt-assembly has a similar merge option that is more configurable.
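As a sketch for the uber-jar case (not the OSGi bundle from the question), the transformer is configured roughly like this:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- Concatenates META-INF/services files of the same name instead of letting one overwrite the others -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>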
Solution
As described in the background section, make sure the classloader that java.util.ServiceLoader is using can find META-INF/services/org.apache.hadoop.security.SecurityInfo with all the entries from the hadoop jars.
In the OSGi case, you still have to somehow merge the entries. Try including them in the <Include-Resource> section of your bundle XML? A sketch of that idea follows.
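As a rough sketch (bnd's target=source form of Include-Resource is assumed here, the local path is a placeholder, and this is untested in the setup above), you could keep a pre-merged provider file in the bundle project and map it into the bundle:

<Include-Resource>
    {maven-resources},
    META-INF/services/org.apache.hadoop.security.SecurityInfo=src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
</Include-Resource>

Here the local file would contain the concatenated provider entries from all the hadoop jars' META-INF/services/org.apache.hadoop.security.SecurityInfo files.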
Log output
This is the output I get when it does not work:
2018-05-03 12:01:56,739 DEBUG PrivilegedAction as:user@DOMAIN (auth:KERBEROS) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:757) [ForkJoinPool-1-worker-5] org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1893)
2018-05-03 12:01:56,740 DEBUG Sending sasl message state: NEGOTIATE
[ForkJoinPool-1-worker-5] org.apache.hadoop.security.SaslRpcClient (SaslRpcClient.java:457)
2018-05-03 12:01:56,741 DEBUG Received SASL message state: NEGOTIATE
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
challenge: "XXX"
}
auths {
method: "KERBEROS"
mechanism: "GSSAPI"
protocol: "XXX"
serverId: "XXX"
}
[ForkJoinPool-1-worker-5] org.apache.hadoop.security.SaslRpcClient (SaslRpcClient.java:389)
2018-05-03 12:01:56,741 DEBUG Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:null [ForkJoinPool-1-worker-5] org.apache.hadoop.security.SaslRpcClient (SaslRpcClient.java:264)
2018-05-03 12:01:56,741 DEBUG Get kerberos info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:null [ForkJoinPool-1-worker-5] org.apache.hadoop.security.SaslRpcClient (SaslRpcClient.java:291)
2018-05-03 12:01:56,742 DEBUG PrivilegedActionException as:user@DOMAIN (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] [ForkJoinPool-1-worker-5] org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1870)
2018-05-03 12:01:56,742 DEBUG PrivilegedAction as:user@DOMAIN (auth:KERBEROS) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:683) [ForkJoinPool-1-worker-5] org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1893)
2018-05-03 12:01:56,743 WARN Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] [ForkJoinPool-1-worker-5] org.apache.hadoop.ipc.Client (Client.java:715)
2018-05-03 12:01:56,743 DEBUG PrivilegedActionException as:user@DOMAIN (auth:KERBEROS) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] [ForkJoinPool-1-worker-5] org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1870)
2018-05-03 12:01:56,743 DEBUG closing ipc connection to XXX/nnn.nnn.nnn.nnn:8020: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS] [ForkJoinPool-1-worker-5] org.apache.hadoop.ipc.Client (Client.java:1217)
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:720)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:683)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:770)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1620)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.create(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:313)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
at com.sun.proxy.$Proxy11.create(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1822)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1701)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1636)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:480)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:476)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:476)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:417)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:930)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:807)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:796)
...
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:595)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:397)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:762)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:758)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:757)
... 50 more

Related

How to correctly access Hadoop 3.2.1 with the Java API?

I installed Hadoop 3.2.1 on a remote CentOS server in pseudo-distributed mode and checked the firewall config. I exposed ports 9000 and 9800-9900. My local environment is Windows with a Hadoop installation that is not started; I had to install it or the application would throw an exception that it can't find HADOOP_HOME, which also confuses me.
My Maven dependencies are as follows:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs-client</artifactId>
<version>3.2.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>3.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>3.2.0</version>
</dependency>
My test is as follows:
@Test
public void testHDFSRepository() throws IOException {
Configuration conf= new Configuration();
conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
conf.set("fs.defaultFS", "hdfs://remoteip:9000");
FileSystem fs = FileSystem.get(conf);
fs.copyFromLocalFile(new Path("C:\\Users\\admin\\Desktop\\111.txt"), new Path("/demo/111.txt"));
}
Then it throws an exception and uploads a 0 B file to Hadoop:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /demo/111.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)
The NameNode logs are as follows:
2020-02-18 19:41:09,526 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2020-02-18 19:41:09,526 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on default port 9000, call Call#4 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from xx.xx.xx.xx:13049
java.io.IOException: File /demo/111.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)
On the other hand, I can use bin/hdfs dfs -put on the remote server successfully.
Looking at the error info, I indeed only have one datanode; should I add more datanodes?
Your datanode process is not healthy:
1 node(s) are excluded in this operation.
You'll need to inspect its logs or the namenode UI to determine the issue.
Since you can use bin/hdfs dfs -put on the remote server successfully, also download the CLI tools locally and configure them for the remote server.
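For instance (a hedged suggestion, not part of the original answer), once a local copy of the CLI is configured against the remote NameNode, running
hdfs dfsadmin -report
shows whether the datanode is registered and live, which helps narrow down why it is being excluded from the write.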

HBase access from Java API Client

I'm having some trouble accessing HBase from the Java API client and I can't figure out what I'm doing wrong.
I'm using HBase 1.1.2 in standalone mode on a VM (10.166.205.41) with RHEL6 and Java 1.7.
Here is my HBase configuration from hbase-site.xml:
<configuration>
<property>
<name>hbase.zookeeper.quorum</name>
<value>10.166.205.41</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>9091</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>file:///usr/local/hbaserootdir/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/hbaserootdir/zookeeper</value>
</property>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
</configuration>
My regionservers file is defined as follows:
10.166.205.41
The HBase shell client is working fine and I can access the HBase master UI at 10.166.205.41:16010.
Here is my Java API client, running in Eclipse on Windows 7.
Pom.xml
<dependencies>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>1.1.2</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase</artifactId>
<version>1.1.2</version>
<type>pom</type>
</dependency>
</dependencies>
Source code:
public class InsertData {
final static Logger logger = Logger.getLogger(InsertData.class);
public static void main(String[] args) throws IOException {
Configuration config = HBaseConfiguration.create();
config.setInt("timeout", 120000);
config.set("hbase.zookeeper.quorum","10.166.205.41");
config.set("hbase.zookeeper.property.clientPort", "9091");
Connection connection = ConnectionFactory.createConnection(config);
Table table = connection.getTable(TableName.valueOf("emp"));
try {
Get g = new Get(Bytes.toBytes("1"));
Result result = table.get(g);
byte [] name = result.getValue(Bytes.toBytes("personal data"), Bytes.toBytes("name"));
logger.info("name : " + Bytes.toString(name));
} finally {
table.close();
connection.close();
}
}
}
During execution, the connection to server 10.166.205.41:41571 fails.
2018-12-14 16:35:19 DEBUG FailedServers:56 - Added failed server with address hlzudd5hdf01.yres.ytech/10.166.205.41:41571 to list caused by org.apache.hbase.thirdparty.io.netty.channel.ConnectTimeoutException: connection timed out: hlzudd5hdf01.yres.ytech/10.166.205.41:41571
2018-12-14 16:35:19 DEBUG ClientCnxn:843 - Reading reply sessionid:0x167ad4d815d000b, packet:: clientPath:/hbase/meta-region-server serverPath:/hbase/meta-region-server finished:false header:: 3,4 replyHeader:: 3,4697,0 request:: '/hbase/meta-region-server,F response:: #ffffffff0001a726567696f6e7365727665723a3431353731ffffffa0ffffffe9ffffff80fffffffd5611ffffff8c6a50425546a24a17686c7a7564643568646630312e797265732e797465636810ffffffe3ffffffc4218ffffffceffffff93ffffffb6ffffffeafffffffa2c100183,s{4519,4519,1544800813291,1544800813291,0,0,0,0,77,0,4519}
2018-12-14 16:35:19 DEBUG ClientCnxn:742 - Got ping response for sessionid: 0x167ad4d815d000b after 38ms
2018-12-14 16:35:19 DEBUG AbstractRpcClient:349 - Not trying to connect to hlzudd5hdf01.yres.ytech/10.166.205.41:41571 this server is in the failed servers list
2018-12-14 16:35:19 DEBUG ClientCnxn:843 - Reading reply sessionid:0x167ad4d815d000b, packet:: clientPath:/hbase/meta-region-server serverPath:/hbase/meta-region-server finished:false header:: 4,4 replyHeader:: 4,4697,0 request:: '/hbase/meta-region-server,F response:: #ffffffff0001a726567696f6e7365727665723a3431353731ffffffa0ffffffe9ffffff80fffffffd5611ffffff8c6a50425546a24a17686c7a7564643568646630312e797265732e797465636810ffffffe3ffffffc4218ffffffceffffff93ffffffb6ffffffeafffffffa2c100183,s{4519,4519,1544800813291,1544800813291,0,0,0,0,77,0,4519}
On the HBase master UI this is the region server address, and clicking on the link doesn't bring up the page either.
I packaged my program as a jar file and running it on the VM works fine, which makes me think it could be a port access issue.
Running netstat -tanp | grep LISTEN on my RHEL6 VM tells me the region server port is listening:
tcp 0 0 10.166.205.41:41571 0.0.0.0:* LISTEN 26322/java
It seems I don't have any firewall running, so I don't know why the connection fails. Maybe it is something else.
I'm out of ideas to fix this issue, so if you could help me that would be much appreciated ^^
Thanks a lot.

How to enable TLSv1.2 in mule for outbound URL

Below is my flow file content. I have generated the .key file with the help of the Java keytool. The same flow works for TLSv1.1 (when the client was using a TLSv1.1 certificate) but does not work for TLSv1.2 (the client certificate is TLSv1.2).
<https:connector name="paypalConnector" doc:name="HTTP\HTTPS" validateConnections="true" clientSoTimeout="10000" cookieSpec="netscape" receiveBacklog="0" receiveBufferSize="0" sendBufferSize="0" serverSoTimeout="10000" socketSoLinger="0">
<service-overrides sessionHandler="org.mule.session.NullSessionHandler"/>
<https:tls-server path="C:/Users/damodaram.setti/Desktop/PayPal/paypal.key" storePassword="paypal" requireClientAuthentication="true" />
</https:connector>
<https:outbound-endpoint exchange-pattern="request-response" method="POST" address="https://tlstest.paypal.com" mimeType="text/xml" connector-ref="paypalConnector" doc:name="2IssuerServ"/>
I have also tried the options below:
-Ddeployment.security.SSLv2Hello=false -Ddeployment.security.SSLv3=false -Ddeployment.security.TLSv1=false -Ddeployment.security.TLSv1.1=true -Ddeployment.security.TLSv1.2=true
and
-Dhttps.protocols=TLSv1.2 -Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
but no luck so far. Please help me sort out this issue.
********************************************************************************
* -XX:PermSize=128M
* -XX:MaxPermSize=256M
* -Ddeployment.security.SSLv2Hello=false
* -Ddeployment.security.SSLv3=false
* -Ddeployment.security.TLSv1=false
* -Ddeployment.security.TLSv1.1=true
* -Ddeployment.security.TLSv1.2=true
* -Dmule.home=D:\MConnect\MuleStudioWorkspace\.mule
* -Dlog4j.debug=true
* -Dosgi.dev=true
* -Dosgi.instance.area=file:/D:/MConnect/MuleStudioWorkspace
* -Dfile.encoding=Cp1252
ERROR 2016-07-21 16:45:10,647 [[simpletest].connector.http.mule.default.receiver.02] org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message : Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=https://tlstest.paypal.com, connector=HttpsConnector
{
name=paypalConnector
lifecycle=start
this=527fe4
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[https]
serviceOverrides=<none>
}
, name='endpoint.https.tlstest.paypal.com', mep=REQUEST_RESPONSE, properties={http.method=POST}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: PostMethod
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. Connection refused: connect (java.net.ConnectException)
java.net.DualStackPlainSocketImpl:-2 (null)
2. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=https://tlstest.paypal.com, connector=HttpsConnector
{
name=paypalConnector
lifecycle=start
this=527fe4
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[https]
serviceOverrides=<none>
}
, name='endpoint.https.tlstest.paypal.com', mep=REQUEST_RESPONSE, properties={http.method=POST}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: PostMethod (org.mule.api.transport.DispatchException)
org.mule.transport.http.HttpClientMessageDispatcher:155 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
To use TLSv1.2 you must specify it in the https connector.
<spring:property name="sslType" value="TLSv1.2" />
or
<https:connector name="paypalConnector" doc:name="HTTP\HTTPS" validateConnections="true" clientSoTimeout="10000" cookieSpec="netscape" receiveBacklog="0" receiveBufferSize="0" sendBufferSize="0" serverSoTimeout="10000" socketSoLinger="0">
<spring:property name="sslType" value="TLSv1.2" />
<service-overrides sessionHandler="org.mule.session.NullSessionHandler"/>
<https:tls-server path="C:/Users/damodaram.setti/Desktop/PayPal/paypal.key" storePassword="paypal" requireClientAuthentication="true" />
</https:connector>
Hope this answers your question.
Please use the syntax below to send HTTP requests over HTTP/HTTPS with specific TLS versions enabled. In this case I have used the HTTPS protocol and am sending the request over TLSv1.
<http:request-config doc:name="HTTP Request Configuration" name="HTTPS_Request_Configuration" protocol="HTTPS" connectionIdleTimeout="300000">
    <tls:context enabledProtocols="TLSv1">
        <tls:trust-store type="jks" password="${truststore.pwd}" path="${truststore.path}"/>
        <tls:key-store type="jks" password="${keystore.pass}" path="${keystore.path}" keyPassword="${keystore.keypass}" alias="${keystore.alias}"/>
    </tls:context>
</http:request-config>

MockServer analysis with Maven plugin - output as Java

I'd like to use MockServer analysis in the dumpToLogAsJava mode to have it print Java-formatted expectations of an HTTP service I have.
Moreover I'd like to use mockserver-maven-plugin for it.
So far I have only been able to get JSON-formatted output like this:
2016-06-07 19:10:46,348 DEBUG [main] org.mockserver.cli.Main [?:?]
Using command line options: proxyPort=18080
2016-06-07 19:10:46,623 INFO [MockServer HttpProxy Thread] org.mockserver.proxy.http.HttpProxy [?:?] MockServer proxy started on port: 18080
2016-06-07 19:10:46,626 DEBUG [MockServer HttpProxy Thread] o.m.c.ConfigurationProperties [?:?] Property file not found on classpath using path [mockserver.properties]
2016-06-07 19:10:46,626 DEBUG [MockServer HttpProxy Thread] o.m.c.ConfigurationProperties [?:?] Property file not found using path [mockserver.properties]
2016-06-07 19:11:01,838 DEBUG [nioEventLoopGroup-3-1] o.m.client.netty.NettyHttpClient [?:?] Sending request: {
"method" : "GET",
"path" : "/sources",
[...]
}
2016-06-07 19:11:02,180 DEBUG [nioEventLoopGroup-3-1] o.m.client.netty.NettyHttpClient [?:?] Received response: {
[...]
}
2016-06-07 19:11:02,220 INFO [nioEventLoopGroup-3-1] o.m.proxy.http.HttpProxyHandler [?:?] returning response:
{
[...]
}
for request as json:
{
[...]
}
as curl:
[...]
I got that output by adding the following to my pom.xml:
<plugin>
<groupId>org.mock-server</groupId>
<artifactId>mockserver-maven-plugin</artifactId>
<version>3.10.4</version>
<configuration>
<proxyPort>18080</proxyPort>
<logLevel>DEBUG</logLevel>
</configuration>
<executions>
<execution>
<id>start-mock</id>
<phase>pre-integration-test</phase>
<goals>
<goal>runForked</goal>
</goals>
</execution>
<execution>
<id>stop-mock</id>
<phase>post-integration-test</phase>
<goals>
<goal>stopForked</goal>
</goals>
</execution>
</executions>
</plugin>
I have no idea how to change the output format to Java without writing some additional Java code (and even then I'm not sure what exactly I would have to write).
PS. I can't seem to get a Java-formatted dump at all -- I've filed an issue on GH.
The missing part was that you need to dump the expectations after the requests you are interested in have been executed.
My impression was that it's a feature you enable and then MockServer dumps the expectations as it goes.
Anyhow, here is one ugly way to do it:
final ClientAndProxy client = ClientAndProxy.startClientAndProxy(18080);
// Dump the recorded expectations as Java code on JVM shutdown,
// i.e. after all the requests of interest have gone through the proxy.
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        client.dumpToLogAsJava();
    }
});

Spring Boot Admin, client registration results in bad request

I am trying to register my Spring Boot application with the spring-boot-admin server.
Here is my SpringBootAdminApplication.java
@Configuration
@EnableAutoConfiguration
@EnableAdminServer
public class SpringBootAdminApplication {
public static void main(String[] args) {
SpringApplication.run(SpringBootAdminApplication.class, args);
}
}
And pom.xml:
<dependencies>
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-example</artifactId>
<version>1.0.5</version>
</dependency>
</dependencies>
and application.properties
server.port = 8080
Server is running now:
Now, on the client side:
properties:
spring.boot.admin.url=http://localhost:8080
info.version=@project.version@
spring.application.name=rest-module
pom.xml:
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-starter-client</artifactId>
<version>1.3.0</version>
</dependency>
But when I run the Spring Boot application on the client, I get this error:
Created POST request for "http://localhost:8080/api/applications"
Setting request Accept header to [application/json, application/json, application/*+json, application/*+json]
Writing [Application [id=null, name=rest-module, managementUrl=http://Hayatloo-PC:8082, healthUrl=http://Hayatloo-PC:8082/health, serviceUrl=http://Hayatloo-PC:8082]] as "application/json" using [org.springframework.http.converter.json.MappingJackson2HttpMessageConverter#1637320b]
Failed to register application as Application [id=null, name=rest-module, managementUrl=http://Hayatloo-PC:8082, healthUrl=http://Hayatloo-PC:8082/health, serviceUrl=http://Hayatloo-PC:8082] at spring-boot-admin (http://localhost:8080/api/applications): 400 Bad Request
Why is id=null?
You mixed up the versions. Try to update the client and server to the same version. We try to keep them compatible, but from 1.0.x to 1.3.x you have no chance.
Btw, the current version is 1.3.2.
Additionally, you are using the sample application as a dependency. This indeed works, but I wouldn't recommend it. You'd better set up your server as described in the guide: http://codecentric.github.io/spring-boot-admin/1.3.2/
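For illustration, a sketch of matching dependencies (assuming 1.3.2 on both sides; see the guide above for the authoritative setup):

<!-- server -->
<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-server</artifactId>
    <version>1.3.2</version>
</dependency>
<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-server-ui</artifactId>
    <version>1.3.2</version>
</dependency>

<!-- client -->
<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-starter-client</artifactId>
    <version>1.3.2</version>
</dependency>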
