External LDAP connection using JpsContextFactory - Java

I am trying to connect to an external WebLogic embedded LDAP from Oracle ADF.
I've found a good sample that uses the JpsContextFactory class, but it doesn't take any URL, username, or password; it seems to connect to the local WebLogic LDAP by default. I could not figure out how to set up a connection to an external WebLogic LDAP using this class.
The sample code:
private void initIdStoreFactory() {
    JpsContextFactory ctxFactory;
    try {
        ctxFactory = JpsContextFactory.getContextFactory();
        JpsContext ctx = ctxFactory.getContext();
        LdapIdentityStore idStoreService = (LdapIdentityStore) ctx.getServiceInstance(IdentityStoreService.class);
        ldapFactory = idStoreService.getIdmFactory();
        storeEnv.put(OIDIdentityStoreFactory.RT_USER_SEARCH_BASES, USER_BASES);
        storeEnv.put(OIDIdentityStoreFactory.RT_GROUP_SEARCH_BASES, GROUP_BASES);
        storeEnv.put(OIDIdentityStoreFactory.RT_USER_CREATE_BASES, USER_BASES);
        storeEnv.put(OIDIdentityStoreFactory.RT_GROUP_CREATE_BASES, GROUP_BASES);
        storeEnv.put(OIDIdentityStoreFactory.RT_GROUP_SELECTED_CREATE_BASE, GROUP_BASES[0]);
        storeEnv.put(OIDIdentityStoreFactory.RT_USER_SELECTED_CREATE_BASE, USER_BASES[0]);
    } catch (JpsException e) {
        e.printStackTrace();
        throw new RuntimeException("Jps Exception encountered", e);
    }
}
Any suggestion on how to use this code to connect to an external LDAP would be appreciated.

JpsContextFactory is used to retrieve the current configuration of the identity store(s) inside WebLogic. In order to use it with an external LDAP, you first need to add a new security (authentication) provider in WebLogic and mark it as required, so that your application uses the new external LDAP.
Check this older article on how to do it: http://www.itbuzzpress.com/weblogic-tutorials/securing-oracle-weblogic/configuring-oracle-weblogic-security-providers.html
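If what you ultimately need is just a direct connection to the external LDAP with an explicit URL and credentials (outside of the OPSS/JpsContext lookup), plain JNDI also works. A minimal sketch with placeholder values; the WebLogic embedded LDAP typically listens on the server's listen port and binds as cn=Admin with the credential set in the domain's embedded LDAP settings, but verify this for your environment:
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class ExternalLdapProbe {
    public static DirContext connect() throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Placeholder values: point these at the external WebLogic embedded LDAP
        env.put(Context.PROVIDER_URL, "ldap://external-host:7001");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Admin");          // bind DN (placeholder)
        env.put(Context.SECURITY_CREDENTIALS, "ldap-password");   // bind password (placeholder)
        return new InitialDirContext(env);
    }
}
This is only a direct-connection alternative; if you want the OPSS code from the question to pick up the external LDAP, the security provider configuration described above is still the way to go.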

Related

Hive JDBC connection problems

I am trying to connect to a Hive2 server via JDBC with Kerberos authentication. After numerous attempts I can't get it to work with the Cloudera driver.
If someone can help me solve the problem, I would greatly appreciate it.
I have this method:
private Connection establishConnection() {
    final String driverPropertyClassName = "driver";
    final String urlProperty = "url";
    Properties hiveProperties = config.getMatchingProperties("hive.jdbc");
    String driverClassName = (String) hiveProperties.remove(driverPropertyClassName);
    String url = (String) hiveProperties.remove(urlProperty);
    Configuration hadoopConfig = new Configuration();
    hadoopConfig.set("hadoop.security.authentication", "Kerberos");
    String p = config.getProperty("hadoop.core.site.path");
    Path path = new Path(p);
    hadoopConfig.addResource(path);
    UserGroupInformation.setConfiguration(hadoopConfig);
    Connection conn = null;
    if (driverClassName != null) {
        try {
            UserGroupInformation.loginUserFromKeytab(config.getProperty("login.user"), config.getProperty("keytab.file"));
            Driver driver = (Driver) Class.forName(driverClassName).newInstance();
            DriverManager.registerDriver(driver);
            conn = DriverManager.getConnection(url, hiveProperties);
        } catch (Throwable e) {
            LOG.error("Failed to establish Hive connection", e);
        }
    }
    return conn;
}
The URL for the server is read from the properties, in the format described in the Cloudera documentation.
I am getting an exception:
2018-05-05 18:26:49 ERROR HiveReader:147 - Failed to establish Hive connection
java.sql.SQLException: [Cloudera][HiveJDBCDriver](500164) Error initialized or created transport for authentication: Peer indicated failure: Unsupported mechanism type PLAIN.
at com.cloudera.hiveserver2.hivecommon.api.HiveServer2ClientFactory.createTransport(Unknown Source)
at com.cloudera.hiveserver2.hivecommon.api.ZooKeeperEnabledExtendedHS2Factory.createClient(Unknown Source)
...
I thought that the AuthMech attribute was missing and added AuthMech=1 to the URL. Now I am getting:
java.sql.SQLNonTransientConnectionException: [Cloudera][JDBC](10100) Connection Refused: [Cloudera][JDBC](11640) Required Connection Key(s): KrbHostFQDN, KrbServiceName; [Cloudera][JDBC](11480) Optional Connection Key(s): AsyncExecPollInterval, AutomaticColumnRename, CatalogSchemaSwitch, DecimalColumnScale, DefaultStringColumnLength, DelegationToken, DelegationUID, krbAuthType, KrbRealm, PreparedMetaLimitZero, RowsFetchedPerBlock, SocketTimeOut, ssl, StripCatalogName, transportMode, UseCustomTypeCoercionMap, UseNativeQuery, zk
at com.cloudera.hiveserver2.exceptions.ExceptionConverter.toSQLException(Unknown Source)
at com.cloudera.hiveserver2.jdbc.common.BaseConnectionFactory.checkResponseMap(Unknown Source)
...
But KrbHostFQDN is already specified in the principal property as required in the documentation.
Am I missing something or is this documentation wrong?
Below is a similar problem with Impala (only the JDBC engine changes; the rest is the same) that was resolved by setting the "KrbHostFQDN"-related properties in the JDBC connection string itself.
Try a URL like the one below; hopefully it works for you.
String jdbcConnStr = "jdbc:impala://myserver.mycompany.corp:21050/default;SSL=1;AuthMech=1;KrbHostFQDN=myserver.mycompany.corp;KrbRealm=MYCOMPANY.CORP;KrbServiceName=impala";
I suppose that if you are not using SSL=1 but only Kerberos, you can just drop that part from the connection string and not worry about setting up SSL certificates in the Java keystore, which is yet another hassle.
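For HiveServer2 with the Cloudera Hive JDBC driver the same idea applies; a hedged sketch of what the URL might look like (host, port, and realm are placeholders, and for HiveServer2 the KrbServiceName is typically hive):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HiveKerberosUrlSketch {
    // Hypothetical host, port, and realm; adjust to your cluster.
    static Connection connect() throws SQLException {
        String url = "jdbc:hive2://myserver.mycompany.corp:10000/default;"
                + "AuthMech=1;"
                + "KrbHostFQDN=myserver.mycompany.corp;"
                + "KrbRealm=MYCOMPANY.CORP;"
                + "KrbServiceName=hive";
        return DriverManager.getConnection(url);
    }
}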
However, in order to get Kerberos to work properly we did the following:
Install MIT Kerberos 4.0.1, which is a Kerberos ticket manager (this is for Windows).
This ticket manager asks you for authentication every time you initiate a connection, creates a ticket, and stores it in a kerberos_ticket.dat binary file, whose location can be configured, although I do not recall exactly how.
Finally, before launching your Java app you have to set an environment variable KRB5CCNAME=C:/path/to/kerberos_ticket.dat. In your Java app you can check that the variable was correctly set by doing System.out.println("KRB5CCNAME = " + System.getenv("KRB5CCNAME")). If you are working with Eclipse or another IDE you might even have to close the IDE, set up the environment variable, and start the IDE again.
NOTE: this last bit is very important; I have observed that if this variable is not properly set up, the connection won't be established.
On Linux, instead of MIT Kerberos 4.0.1, there is a program called kinit which does the same thing, although without a graphical interface, which is even more convenient for automation.
I wanted to put it in the comment but it was too long for the comment, therefore I am placing it here:
I tried your suggestion and got another exception:
java.sql.SQLException: [Cloudera]HiveJDBCDriver Error creating login context using ticket cache: Unable to obtain Principal Name for authentication.
Maybe my problem is that I do not have the environment variable KRB5CCNAME set.
I honestly never heard about it before.
What is supposed to be in that ticket file?
I do, however, have the following line in my main method:
System.setProperty("java.security.krb5.conf", "path/to/krb5.conf");
which is supposed to be used by
UserGroupInformation.loginUserFromKeytab(config.getProperty("login.user"), config.getProperty("keytab.file"));
to obtain the Kerberos ticket.
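For context, a minimal sketch of the keytab login pattern around that call (principal, keytab path, krb5.conf path, and JDBC URL are all placeholders); whether the doAs wrapper is actually required depends on the driver and the AuthMech setting:
import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
    static Connection connect(final String url) throws Exception {
        System.setProperty("java.security.krb5.conf", "path/to/krb5.conf");
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Hypothetical principal and keytab path
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "hive-user@MYCOMPANY.CORP", "path/to/hive-user.keytab");
        // Run the JDBC call as the logged-in Kerberos principal
        return ugi.doAs((PrivilegedExceptionAction<Connection>) () ->
                DriverManager.getConnection(url));
    }
}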
To solve this issue, update the Java Cryptography Extension (JCE) for the Java version that you use on your system.
Here's the link where you can download the JCE for Java 1.7.
Uncompress it and overwrite the files in $JDK_HOME/jre/lib/security.
Restart your computer.

How to hook up AWS RDS - Aurora with AWS Lambda Java function

I am trying to hook up an AWS RDS Aurora database with an AWS Lambda Java function. I have yet to see any concrete examples for this; the examples I have found are not in Java.
I would also like to configure a MySQL DBMS tool with Aurora, which I am not able to do. Can someone help me with that as well? I have got the connection strings from https://console.aws.amazon.com/rds/home?region=us-east-1#dbinstances.
Also, the code I am using to connect to the DB from the Lambda Java function is:
private Statement createConnection(Context context) {
    logger = context.getLogger();
    try {
        String url = "jdbc:mysql://HOSTNAME:3306";
        String username = "USERNAME";
        String password = "PASSWORD";
        Connection conn = DriverManager.getConnection(url, username, password);
        return conn.createStatement();
    } catch (Exception e) {
        e.printStackTrace();
        logger.log("Caught exception: " + e.getMessage());
    }
    return null;
}
And yes, this doesn't work: I always get null when using the DB instance config.
RDS needs to be in a security group that opens the DB port to the security group attached to the ENI of the Lambda.
To enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
http://docs.aws.amazon.com/lambda/latest/dg/vpc.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
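Once the function is attached to the right subnets and security groups, the JDBC part itself is plain. A minimal sketch, assuming the endpoint, database name, and credentials are supplied through Lambda environment variables (the variable names here are hypothetical):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class AuroraConnectionSketch {
    static Connection connect() throws SQLException {
        // Hypothetical environment variable names; set them in the Lambda configuration
        String host = System.getenv("DB_HOST");      // Aurora cluster endpoint
        String db = System.getenv("DB_NAME");
        String user = System.getenv("DB_USER");
        String password = System.getenv("DB_PASSWORD");
        String url = "jdbc:mysql://" + host + ":3306/" + db;
        return DriverManager.getConnection(url, user, password);
    }
}
Reading the values from the environment keeps credentials out of the code and makes it easy to point the same function at different Aurora clusters.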

GAE HttpResponseException: 401

I am trying to access the DataStore of one app from another GAE project using Remote API.
I am using the following code:
String serverString = "http://example.com"; // this should be the target App Engine app
RemoteApiOptions options;
if (serverString.equals("localhost")) {
    options = new RemoteApiOptions().server(serverString, 8080).useDevelopmentServerCredential();
} else {
    options = new RemoteApiOptions().server(serverString, 80).useApplicationDefaultCredential();
}
RemoteApiInstaller installer = new RemoteApiInstaller();
installer.install(options);
datastore = DatastoreServiceFactory.getDatastoreService();
try {
    results = datastore.get(KeyFactory.createKey("some key"));
} catch (EntityNotFoundException e) {
    e.printStackTrace();
    return null;
}
When I run this locally, I get a NullPointerException at installer.install(options);. When deployed, the error seen in Error Reporting on App Engine is: HttpResponseException: 401 You must be logged in as an administrator, or access from an approved application.
That being said, I made a small Java application with the following code:
String serverString = "http://example.com"; // same string as the one used in the code above
RemoteApiOptions options;
if (serverString.equals("localhost")) {
    options = new RemoteApiOptions().server(serverString, 8080).useDevelopmentServerCredential();
} else {
    options = new RemoteApiOptions().server(serverString, 80).useApplicationDefaultCredential();
}
RemoteApiInstaller installer = new RemoteApiInstaller();
installer.install(options);
try {
    DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
    System.out.println("Key of new entity is " + ds.put(new Entity("Hello Remote API!")));
} finally {
    installer.uninstall(); // uninstall when done
}
And this one works! The "Hello Remote API" entity is added.
The reason it does not work when running on App Engine vs running locally has to do with the credentials that are being picked up. When running locally, it is likely using your own credentials (which has access to both projects); by contrast, when running on App Engine, you are likely picking up the App Engine default service account, which only has access to that App Engine project.
Try fixing this by opening the Cloud IAM section of Cloud Console for the project containing the Cloud Datastore that you wish to access. There, grant the appropriate level of access to the default App Engine service account that is being used by the other project.
If you don't want all App Engine services in the other project to have this kind of access, you might also consider, instead, generating a service account for this cross-project access that you grant the appropriate access to (rather than granting that access to the default App Engine service account). Then, in your code that calls the API, you would explicitly use that service account by calling the useServiceAccountCredential() method of RemoteApiOptions to ensure that the API requests that are issued use the specified service account rather than the default App Engine service account.
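A hedged sketch of the service-account variant (the account ID, key path, and host are placeholders; check the RemoteApiOptions javadoc for the exact useServiceAccountCredential overloads available in your SDK version):
import com.google.appengine.tools.remoteapi.RemoteApiInstaller;
import com.google.appengine.tools.remoteapi.RemoteApiOptions;

public class RemoteApiServiceAccountSketch {
    static RemoteApiInstaller install() throws Exception {
        // Hypothetical service account and P12 key file path
        RemoteApiOptions options = new RemoteApiOptions()
                .server("your-target-app.appspot.com", 443)
                .useServiceAccountCredential(
                        "cross-project-sa@your-project.iam.gserviceaccount.com",
                        "/path/to/key.p12");
        RemoteApiInstaller installer = new RemoteApiInstaller();
        installer.install(options);
        return installer;
    }
}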

Check MongoDB server is running and credentials are valid in Java

I am programming a UI where the user should be able to enter the URL and port to check whether a MongoDB server is running. Furthermore, they should be able to provide credentials when necessary.
If the server is not running or the credentials are wrong, I want to provide a message for each case. Similar questions have been answered here:
Check MongoDB authentication with Java 3.0 driver
how to check from a driver, if mongoDB server is running
Unfortunately, they use older versions of the Java driver for Mongo. I'm using version 3.2+ of the MongoDB Java driver, where e.g. getDB() is deprecated.
My "solution" for the problem looks somewhat like this:
try {
    String database = "test";
    MongoClient client = null;
    if (StringUtils.isNotBlank(username) && StringUtils.isNotBlank(password)) {
        MongoCredential credentials = MongoCredential.createCredential(username, database, password.toCharArray());
        client = new MongoClient(new ServerAddress(url, Integer.parseInt(port)), Arrays.asList(credentials));
    } else {
        client = new MongoClient(url, Integer.parseInt(port));
    }
    MongoDatabase db = client.getDatabase(database);
    db.listCollectionNames().first();
    client.close();
    return true;
} catch (MongoCommandException | MongoSecurityException e) {
    // program does not get in here when credentials are wrong,
    // only when no credentials are provided, but necessary
} catch (MongoSocketOpenException | MongoTimeoutException e) {
    // only get in here after db.listCollectionNames().first() caused a timeout
}
How can I manage to:
Find out when the MongoDB server is not running?
Find out whether the credentials are correct, when necessary?
Edit:
When the credentials are wrong (username and/or password), the method catches only the MongoTimeoutException. It's the same when the wrong URL, port, or database is provided. To be clear, there are other exceptions printed out, but not caught. The only difference is when providing no password and no username even though the database requires them; then the MongoCommandException is caught.
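One approach I would try, as a sketch against the 3.2 driver: keep server selection short so an unreachable server fails fast with MongoTimeoutException, and force an authenticated round trip with a ping command so bad credentials can surface as MongoSecurityException. Exactly which exception a bad password produces varies across driver versions (as the edit above observes), so treat this as a probe rather than a definitive mapping; the names and timeout are illustrative.
import java.util.Collections;

import org.bson.Document;

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.MongoCredential;
import com.mongodb.MongoSecurityException;
import com.mongodb.MongoTimeoutException;
import com.mongodb.ServerAddress;

public class MongoCheckSketch {
    static String check(String url, int port, String user, String password, String database) {
        MongoClientOptions opts = MongoClientOptions.builder()
                .serverSelectionTimeout(3000) // fail fast if the server is unreachable
                .build();
        MongoCredential cred = MongoCredential.createCredential(user, database, password.toCharArray());
        MongoClient client = null;
        try {
            client = new MongoClient(new ServerAddress(url, port), Collections.singletonList(cred), opts);
            // "ping" forces a round trip; with auth enabled it exercises the credentials
            client.getDatabase(database).runCommand(new Document("ping", 1));
            return "server running, credentials ok";
        } catch (MongoSecurityException e) {
            return "server running, credentials wrong";
        } catch (MongoTimeoutException e) {
            return "server not reachable (or authentication could not complete)";
        } finally {
            if (client != null) {
                client.close();
            }
        }
    }
}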

Determine that application is running under application server

Some code may be reused in various environments, including a Java EE application server. Sometimes it is nice to know whether the code is running under an application server and which application server it is.
I prefer to do this by checking a system property that is typical for the application server.
For example, it may be
jboss.server.name for JBoss
catalina.base for Tomcat
Does somebody know the appropriate property name for other servers?
WebLogic, WebSphere, Oracle iAS, others?
It is very easy to check if you have the specific application server installed: just add System.getProperties() to any JSP, servlet, or EJB and print the result.
I could do it myself, but it would take a lot of time to install each server and get it working.
I have read this discussion: How to determine type of Application Server an application is running on?
But I prefer to use a system property. It is an easier and absolutely portable solution: the code does not depend on any other API like Servlet, EJBContext, or JMX.
JBoss AS sets a lot of different system properties:
jboss.home.dir
jboss.server.name
You can check other properties using, for example, VisualVM or other tools.
I don't know about other servers, but I think you can find some kind of properties for each of them.
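As a small illustration of the system-property approach using the names mentioned above, a sketch; weblogic.Name is included as a candidate because it is typically passed on WebLogic server startup, but verify it in your environment:
public class ContainerProbe {
    static String detect() {
        if (System.getProperty("jboss.server.name") != null) {
            return "JBoss";
        }
        if (System.getProperty("catalina.base") != null) {
            return "Tomcat";
        }
        if (System.getProperty("weblogic.Name") != null) {
            return "WebLogic"; // candidate property, not confirmed in the thread
        }
        return "unknown / standalone";
    }
}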
This is not a 'standard' way, but what I did was try to load a class specific to the application server.
For WAS:
try {
    // WebSphere-specific class; if it loads, we are running on WAS
    Class<?> cl = Thread.currentThread().getContextClassLoader()
            .loadClass("com.ibm.websphere.runtime.ServerName");
    // found
} catch (Throwable t) {
    // not found
}
// For Tomcat: "org.apache.catalina.xxx"
Let me know what you think
// for Tomcat
try {
    MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
    ObjectName name = new ObjectName("Catalina", "type", "Server");
    StandardServer server = (StandardServer) mBeanServer.getAttribute(name, "managedResource");
    if (server != null) {
        // it's a Tomcat application server
    }
} catch (Exception e) {
    // it's not a Tomcat application server
}

// for WildFly
try {
    MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
    ObjectName http = new ObjectName("jboss.as:socket-binding-group=standard-sockets,socket-binding=http");
    String jbossHttpAddress = (String) mBeanServer.getAttribute(http, "boundAddress");
    int jbossHttpPort = (Integer) mBeanServer.getAttribute(http, "boundPort");
    String url = jbossHttpAddress + ":" + jbossHttpPort;
    if (jbossHttpAddress != null) {
        // it's a JBoss/WildFly application server
    }
} catch (Exception e) {
    // it's not a JBoss/WildFly application server
}
