Has anyone managed to connect a java program to AWS DocumentDB where the java program is running outside of AWS and DocumentDB has tls enabled? Any examples or guidance provided would be greatly appreciated.
This is what I've done so far:
I've been following AWS's developer guide, and I understand that to do this I need an SSH tunnel set up through a jump box (EC2 instance) to the DB cluster. I have done this and connected from my laptop.
I then created the required .jks file from AWS's rds-combined-ca-bundle.pem file and referenced it in a basic Java main class. From the Java main class I referenced the cluster as localhost:27017, since that is where I've set up the SSH tunnel.
My test code follows the AWS example for Java, and I get the following error when I run the program:
Caused by: javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching localhost found.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class CertsTestMain {

    public static void main(String[] args) {
        // Connection-string template: mongodb://user:password@host/db?options
        String template = "mongodb://%s:%s@%s/test?ssl=true&replicaSet=rs0&readpreference=%s";
        String username = "dummy";
        String password = "dummy";
        // localhost:27017 is the local end of the SSH tunnel to the cluster.
        String clusterEndpoint = "localhost:27017";
        String readPreference = "secondaryPreferred";
        String connectionString = String.format(template, username, password, clusterEndpoint, readPreference);

        // Truststore built from AWS's rds-combined-ca-bundle.pem.
        String truststore = "C:/Users/eclipse-workspace/certs/certs/rds-truststore.jks";
        String truststorePassword = "test!";
        System.setProperty("javax.net.ssl.trustStore", truststore);
        System.setProperty("javax.net.ssl.trustStorePassword", truststorePassword);

        MongoClient mongoClient = MongoClients.create(connectionString);
        MongoDatabase testDB = mongoClient.getDatabase("test");
        MongoCollection<Document> bookingCollection = testDB.getCollection("booking");

        MongoCursor<Document> cursor = bookingCollection.find().iterator();
        try {
            while (cursor.hasNext()) {
                System.out.println(cursor.next().toJson());
            }
        } finally {
            cursor.close();
        }
    }
}
So, for me, to make this work I only had to alter the template to:
String template = "mongodb://%s:%s@%s/test?ssl=true&tlsAllowInvalidHostnames&readpreference=%s";
As long as you have created your .jks file correctly
(I did this simply by using a Linux environment and running the script AWS provides for Java under Point 2 of https://docs.aws.amazon.com/documentdb/latest/developerguide/connect_programmatically.html)
and you have a fully working SSH tunnel as described in https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html,
then the above code will work.
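For completeness: instead of putting tlsAllowInvalidHostnames in the URI, the same relaxation can be applied programmatically. A minimal sketch, assuming MongoDB Java driver 3.7+ (the credentials and endpoint are the placeholders from the code above); note that disabling hostname verification weakens TLS, so only do this for tunnelled connections like this one:

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

MongoClientSettings settings = MongoClientSettings.builder()
        .applyConnectionString(new ConnectionString(
                "mongodb://dummy:dummy@localhost:27017/test?ssl=true&readpreference=secondaryPreferred"))
        // Accept the cluster certificate even though it names the cluster
        // endpoint, not localhost (the local end of the SSH tunnel).
        .applyToSslSettings(builder -> builder.enabled(true).invalidHostNameAllowed(true))
        .build();
MongoClient mongoClient = MongoClients.create(settings);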
I've got a problem connecting to a corporate machine over SSH using JCraft's JSch.
import java.util.Properties;

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

public static void main(String[] args) throws JSchException {
    JSch jSch = new JSch();
    String username = "username";
    String host = "host";
    int port = 22;
    jSch.addIdentity("Path\\to\\key\\file");
    // known_hosts contains the host IP and the public key,
    // the same key that is on the server.
    jSch.setKnownHosts("known_hosts");
    Session session = jSch.getSession(username, host, port);
    Properties config = new Properties();
    config.put("StrictHostKeyChecking", "yes");
    session.setConfig(config);
    session.connect(5000);
}
That's my code. After running it, there is an error: HostKey has been changed: [host address]. If I change StrictHostKeyChecking to no, another error appears: Auth fail.
I can connect to that machine via PuTTY or WinSCP, and I don't need to use tunneling. Also, the key format is right (I made that mistake already). When I connect to the machine via PuTTY, I need to pick the right machine (there are 5), and each of them has its own key pair; the main connection to the "menu" also has its own key pair, a CAPI key.
Has anyone had something like that before? I also set up port forwarding and tried to connect with that, but I'm not sure how it works. Can anyone tell me how to do the port forwarding, or maybe somebody has another idea.
Thanks in advance
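For the port-forwarding part of the question, a minimal JSch local-forwarding sketch follows; the gateway host, target host, ports, and key path are placeholders, not values from this setup:

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

public class JschTunnelSketch {

    public static void main(String[] args) throws JSchException {
        JSch jSch = new JSch();
        jSch.addIdentity("path/to/key/file");
        Session session = jSch.getSession("username", "gateway.example.com", 22);
        session.setConfig("StrictHostKeyChecking", "no"); // or keep "yes" with a maintained known_hosts
        session.connect(5000);
        // Forward local port 2222 to port 22 on the target machine,
        // as seen from the gateway; returns the actual bound port.
        int boundPort = session.setPortForwardingL(2222, "target.internal.example.com", 22);
        System.out.println("Tunnel open on localhost:" + boundPort);
        // A second client (or another Session) can now connect to localhost:2222.
        session.disconnect();
    }
}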
I'm trying to upload a document from a Lambda script; however, I keep getting the following error whenever the Lambda script starts:
com.mongodb.MongoSocketException: cluster0-whnfd.mongodb.net: No address associated with hostname
The error seems obvious, but I can connect using that same URL via MongoDB Compass. The Java class I'm using looks like:
import org.bson.Document;

import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;

public class MongoStore {

    private final static String MONGO_ADDRESS = "mongodb+srv://<USERNAME>:<PASSWORD>@cluster0-whnfd.mongodb.net/test";

    private MongoCollection<Document> collection;

    public MongoStore() {
        final MongoClientURI uri = new MongoClientURI(MONGO_ADDRESS);
        final MongoClient mongoClient = new MongoClient(uri);
        final MongoDatabase database = mongoClient.getDatabase("test");
        this.collection = database.getCollection("test");
    }

    public void save(String payload) {
        Document document = new Document();
        document.append("message", payload);
        collection.insertOne(document);
    }
}
Have I just misconfigured my Java class, or is there something more tricky going on here?
I had the same problem with a freshly created MongoDB Atlas database when I started migrating my Python web application from Heroku.
So I realised the DNS name cluster0.hgmft.mongodb.net just doesn't exist as a plain hostname; mongodb+srv:// addresses are resolved through DNS SRV lookups instead.
The magic happened when I installed the library dnspython (my app is written in Python), which gives the MongoDB client the SRV resolution it needs; with this library it was able to connect to my database in MongoDB Atlas.
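If the SRV lookup is what fails in the Java Lambda above, one workaround is the legacy seed-list connection string, which needs no SRV resolution. A minimal sketch using the same legacy MongoClient API as the question; the shard host names and replica-set name below are assumptions, so copy the real ones from the Atlas "Connect" dialog (selecting an older driver version shows the non-SRV string):

import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;

// Hypothetical non-SRV seed list; replace with the hosts Atlas shows you.
final MongoClientURI uri = new MongoClientURI(
        "mongodb://<USERNAME>:<PASSWORD>@cluster0-shard-00-00-whnfd.mongodb.net:27017,"
        + "cluster0-shard-00-01-whnfd.mongodb.net:27017,"
        + "cluster0-shard-00-02-whnfd.mongodb.net:27017/test"
        + "?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin");
final MongoClient mongoClient = new MongoClient(uri);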
I am trying to connect to a Hive2 server via JDBC with Kerberos authentication. After numerous attempts to make it work, I can't get it to work with the Cloudera driver.
If someone can help me solve the problem, I would greatly appreciate it.
I have this method:
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

private Connection establishConnection() {
    final String driverPropertyClassName = "driver";
    final String urlProperty = "url";
    Properties hiveProperties = config.getMatchingProperties("hive.jdbc");
    String driverClassName = (String) hiveProperties.remove(driverPropertyClassName);
    String url = (String) hiveProperties.remove(urlProperty);

    // Point Hadoop at Kerberos and load core-site.xml.
    Configuration hadoopConfig = new Configuration();
    hadoopConfig.set("hadoop.security.authentication", "Kerberos");
    String p = config.getProperty("hadoop.core.site.path");
    Path path = new Path(p);
    hadoopConfig.addResource(path);
    UserGroupInformation.setConfiguration(hadoopConfig);

    Connection conn = null;
    if (driverClassName != null) {
        try {
            // Log in from the keytab, then register the driver and connect.
            UserGroupInformation.loginUserFromKeytab(config.getProperty("login.user"),
                    config.getProperty("keytab.file"));
            Driver driver = (Driver) Class.forName(driverClassName).newInstance();
            DriverManager.registerDriver(driver);
            conn = DriverManager.getConnection(url, hiveProperties);
        } catch (Throwable e) {
            LOG.error("Failed to establish Hive connection", e);
        }
    }
    return conn;
}
The URL for the server comes from the properties and follows the format described in the Cloudera documentation.
I am getting an exception:
2018-05-05 18:26:49 ERROR HiveReader:147 - Failed to establish Hive connection
java.sql.SQLException: [Cloudera][HiveJDBCDriver](500164) Error initialized or created transport for authentication: Peer indicated failure: Unsupported mechanism type PLAIN.
at com.cloudera.hiveserver2.hivecommon.api.HiveServer2ClientFactory.createTransport(Unknown Source)
at com.cloudera.hiveserver2.hivecommon.api.ZooKeeperEnabledExtendedHS2Factory.createClient(Unknown Source)
...
I thought it was missing the AuthMech attribute and added AuthMech=1 to the URL. Now I am getting:
java.sql.SQLNonTransientConnectionException: [Cloudera][JDBC](10100) Connection Refused: [Cloudera][JDBC](11640) Required Connection Key(s): KrbHostFQDN, KrbServiceName; [Cloudera][JDBC](11480) Optional Connection Key(s): AsyncExecPollInterval, AutomaticColumnRename, CatalogSchemaSwitch, DecimalColumnScale, DefaultStringColumnLength, DelegationToken, DelegationUID, krbAuthType, KrbRealm, PreparedMetaLimitZero, RowsFetchedPerBlock, SocketTimeOut, ssl, StripCatalogName, transportMode, UseCustomTypeCoercionMap, UseNativeQuery, zk
at com.cloudera.hiveserver2.exceptions.ExceptionConverter.toSQLException(Unknown Source)
at com.cloudera.hiveserver2.jdbc.common.BaseConnectionFactory.checkResponseMap(Unknown Source)
...
But KrbHostFQDN is already specified in the principal property, as required by the documentation.
Am I missing something, or is the documentation wrong?
Below is a similar problem in Impala (only the JDBC engine changes; the rest is the same), which was resolved by setting the KrbHostFQDN-related properties in the JDBC connection string itself.
Try the URL below. Hopefully it works for you.
String jdbcConnStr = "jdbc:impala://myserver.mycompany.corp:21050/default;SSL=1;AuthMech=1;KrbHostFQDN=myserver.mycompany.corp;KrbRealm=MYCOMPANY.CORP;KrbServiceName=impala";
I suppose that if you are not using SSL=1 but only Kerberos, you can just drop that part from the connection string and not worry about setting up SSL certificates in the Java keystore, which is yet another hassle.
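For Hive, the analogous URL would presumably look like the sketch below; the host, port, realm, and service name are assumptions, while the key names (AuthMech, KrbHostFQDN, KrbRealm, KrbServiceName) are the ones the driver's error message above lists:

// Hypothetical Hive analogue of the Impala URL, without SSL.
String jdbcConnStr = "jdbc:hive2://myserver.mycompany.corp:10000/default;"
        + "AuthMech=1;KrbHostFQDN=myserver.mycompany.corp;"
        + "KrbRealm=MYCOMPANY.CORP;KrbServiceName=hive";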
However, in order to get Kerberos to work properly, we did the following:
Install MIT Kerberos 4.0.1, which is a Kerberos ticket manager (this is for Windows).
This ticket manager asks you for authentication every time you initiate a connection, creates a ticket, and stores it in a kerberos_ticket.dat binary file, whose location can be configured, although I do not recall exactly how.
Finally, before launching your Java app you have to set an environment variable KRB5CCNAME=C:/path/to/kerberos_ticket.dat. In your Java app, you can check that the variable was correctly set by doing System.out.println("KRB5CCNAME = " + System.getenv("KRB5CCNAME")). If you are working with Eclipse or another IDE, you might even have to close the IDE, set the environment variable, and start the IDE again.
NOTE: this last bit is very important. I have observed that if this variable is not properly set up, the connection won't be established...
On Linux, instead of MIT Kerberos 4.0.1, there is a program called kinit which does the same thing, albeit without a graphical interface, which is even more convenient for automation.
I wanted to put this in a comment, but it was too long, so I am placing it here:
I tried your suggestion and got another exception:
java.sql.SQLException: [Cloudera]HiveJDBCDriver Error creating login context using ticket cache: Unable to obtain Principal Name for authentication.
Maybe my problem is that I do not have the environment variable KRB5CCNAME set.
I honestly never heard about it before.
What is supposed to be in that ticket file?
I do have, however, the following line in my main method:
System.setProperty("java.security.krb5.conf", "path/to/krb5.conf");
which is supposed to be used by
UserGroupInformation.loginUserFromKeytab(config.getProperty("login.user"), config.getProperty("keytab.file"));
to obtain the Kerberos ticket.
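A related pattern worth trying (a sketch, not a confirmed fix, reusing config and url from the method above): open the JDBC connection inside the logged-in user's doAs block, so the keytab credentials, rather than a ticket cache, are the ones presented.

import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.security.UserGroupInformation;

// Log in from the keytab and connect as that user.
UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        config.getProperty("login.user"), config.getProperty("keytab.file"));
Connection conn = ugi.doAs(
        (PrivilegedExceptionAction<Connection>) () -> DriverManager.getConnection(url));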
To solve this issue, update the Java Cryptography Extension (JCE) for the Java version that you use on your system.
Here's the link where you can download the JCE for Java 1.7.
Uncompress the archive and overwrite those files in $JDK_HOME/jre/lib/security.
Restart your computer.
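To verify the unlimited-strength policy files were picked up, here is a quick check using the standard JCE API:

import javax.crypto.Cipher;

public class JceCheck {
    public static void main(String[] args) throws Exception {
        // With the unlimited-strength policy installed this prints
        // 2147483647 (Integer.MAX_VALUE); with the default policy, 128.
        System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));
    }
}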
I have a .ppk file and the username "x@domain.com", which I use to connect to Apache Cassandra through PuTTY from my Windows system. What code snippet can be used in Java, using the DataStax driver, to connect in the same way? I can see the IP of the Cassandra machine from the PuTTY terminal.
package com.cassandra.tutorial;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CassConnector {

    private static Cluster cluster;
    private static Session session;

    public static Cluster connect(String node) {
        // Cluster.builder() is a static factory method; call it on the class,
        // not on the (still null) cluster field.
        return Cluster.builder().addContactPoint(node).build();
    }

    public static void main(String[] args) {
        cluster = connect("172.31.yy.xx");
        session = cluster.connect("core");
        session.execute("USE core");
        session.close();
        cluster.close();
    }
}
A .ppk file is used by PuTTY to connect via the SSH protocol to the host on which Cassandra runs.
You can connect to Cassandra itself only by using a username & password configured inside it. See the corresponding part of Cassandra's documentation on how to enable & configure password-based authentication.
After you configure it, you just need to add a call to the withCredentials function to your cluster-building chain and pass the username & password to it.
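A minimal sketch of that chain, using the contact point from the question and hypothetical credentials:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

Cluster cluster = Cluster.builder()
        .addContactPoint("172.31.yy.xx")                          // IP from the question
        .withCredentials("cassandra_user", "cassandra_password")  // hypothetical credentials
        .build();
Session session = cluster.connect("core");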
How do you connect to a remote machine with a username and password using the sshj Java API?
I tried this code. What is the problem with it?
import java.util.concurrent.TimeUnit;

import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.common.IOUtils;
import net.schmizz.sshj.connection.channel.direct.Session;
import net.schmizz.sshj.connection.channel.direct.Session.Command;

final SSHClient ssh = new SSHClient();
ssh.connect("192.168.0.1");
ssh.authPassword("abcde", "fgh".toCharArray());
try {
    final Session session = ssh.startSession();
    try {
        final Command cmd = session.exec("cd /home/abcde/Desktop/");
        System.out.println(IOUtils.readFully(cmd.getInputStream()).toString());
        cmd.join(5, TimeUnit.SECONDS);
        System.out.println("\n** exit status: " + cmd.getExitStatus());
    } finally {
        session.close();
    }
} finally {
    ssh.disconnect();
}
It throws the following error:
net.schmizz.sshj.transport.TransportException: [HOST_KEY_NOT_VERIFIABLE] Could not verify ssh-rsa host key with fingerprint ******** for 192.168.0.1 on port 22
You can solve your problem by implementing a HostKeyVerifier that accepts everything:
import java.security.PublicKey;

import net.schmizz.sshj.transport.verification.HostKeyVerifier;

class NullHostKeyVerifier implements HostKeyVerifier {
    @Override
    public boolean verify(String hostname, int port, PublicKey key) {
        return true; // accept any host key (insecure; for testing only)
    }
}
and adding this fake implementation to your SSHClient instance configuration:
...
final SSHClient ssh = new SSHClient();
ssh.addHostKeyVerifier(new NullHostKeyVerifier());
...
Insert ssh.loadKnownHosts(); or ssh.loadKnownHosts(new File("somepath")); after instantiating the SSHClient.
Then add the remote machine you are trying to connect to (192.168.0.1) to the known_hosts file on your machine, at the default location or at "somepath". On a Linux box the default path is /home/myuser/.ssh/known_hosts; on a Windows box it is c:/Users/myuser/.ssh/known_hosts.
The known_hosts file is in OpenSSH format (IP or hostname, algorithm, key, optional comment).
To add the machine to known_hosts:
- if you are using Linux (on your machine), just ssh to the remote machine and it will automatically be added to known_hosts.
- if you are using Windows, use Bitvise Tunnelier to connect to the remote machine and it will store the key. Open the Bitvise key manager (it is in your Start menu, in the Bitvise folder) and export the row with the remote machine's IP to OpenSSH format. Copy the resulting line to your known_hosts file.
That way you will be really validating the host key. This is also helpful in Mule ESB, where you cannot add the NullHostKeyVerifier to the SSH connector (my case).
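Putting that together with the code from the question, a minimal sketch using the default known_hosts location (the host and credentials are the ones from the question):

import net.schmizz.sshj.SSHClient;

final SSHClient ssh = new SSHClient();
ssh.loadKnownHosts();   // reads the default ~/.ssh/known_hosts
ssh.connect("192.168.0.1");
ssh.authPassword("abcde", "fgh".toCharArray());
// ... start sessions exactly as in the question, then ssh.disconnect().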
You are missing the SSH host key; simply add
ssh.addHostKeyVerifier("10:20......");
where 10:20... is the fingerprint from your exception ("with fingerprint ********").