Accessing AWS RDS using IAM Authentication and Spring JDBC (DataSource and JdbcTemplate) - java

I am not able to figure out how to implement this. Any help and/or pointers will be greatly appreciated.
Currently, my Java/Spring application backend is deployed on EC2 and accesses MySQL on RDS successfully using the regular Spring JDBC setup. That is, database info is stored in application.properties, and DataSource and JdbcTemplate are configured in a @Configuration class. Everything works fine.
Now, I need to access MySQL on RDS securely. The RDS instance has IAM Authentication enabled. I have also successfully created an IAM role and applied an inline policy. Then, following the AWS RDS documentation and the Java example on this link, I am able to access the database from a standalone Java class using an authentication token and the user I created, instead of the regular DB username and password. This standalone Java class deals with the "Connection" object directly.
The place I am stuck is how to translate this to the Spring JDBC configuration, that is, setting up the DataSource and JdbcTemplate beans for this in my @Configuration class.
What would be a correct/right approach to implement this?
----- EDIT - Start -----
I am trying to implement this as a library that can be used for multiple projects. That is, it will be used as a JAR and declared as a dependency in a project's POM file. This library is going to include configurable AWS services such as RDS access using a regular DB username and password, RDS access using IAM Authentication, KMS (CMK/data keys) for data encryption, etc.
The idea is to use this library on any web/app server, depending on the project.
Hope this clarifies my need more.
----- EDIT - End -----
DataSource internally has getConnection() so I can basically create my own DataSource implementation to achieve what I want. But is this a good approach?
Something like:
public class MyDataSource implements DataSource {

    @Override
    public Connection getConnection() throws SQLException {
        Connection conn = null;
        // get a connection using an IAM authentication token for accessing AWS RDS, etc. as in the AWS docs
        return conn;
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return getConnection();
    }

    // other methods
}

You can use the following snippet as a replacement for the default connection pool provided by Spring Boot/Tomcat. It will refresh the token password every 10 minutes, since the token is valid for 15 minutes. It also assumes the region can be extracted from the DNS hostname; if this is not the case, you'll need to specify the region to use.
public class RdsIamAuthDataSource extends org.apache.tomcat.jdbc.pool.DataSource {

    private static final Logger LOG = LoggerFactory.getLogger(RdsIamAuthDataSource.class);

    /**
     * The Java KeyStore (JKS) file that contains the Amazon root CAs.
     */
    public static final String RDS_CACERTS = "/rds-cacerts";

    /**
     * Password for the ca-certs file.
     */
    public static final String PASSWORD = "changeit";

    public static final int DEFAULT_PORT = 3306;

    @Override
    public ConnectionPool createPool() throws SQLException {
        return pool != null ? pool : createPoolImpl();
    }

    protected synchronized ConnectionPool createPoolImpl() throws SQLException {
        return pool = new RdsIamAuthConnectionPool(poolProperties);
    }

    public static class RdsIamAuthConnectionPool extends ConnectionPool implements Runnable {

        private RdsIamAuthTokenGenerator rdsIamAuthTokenGenerator;
        private String host;
        private String region;
        private int port;
        private String username;
        private Thread tokenThread;

        public RdsIamAuthConnectionPool(PoolConfiguration prop) throws SQLException {
            super(prop);
        }

        @Override
        protected void init(PoolConfiguration prop) throws SQLException {
            try {
                URI uri = new URI(prop.getUrl().substring(5));
                this.host = uri.getHost();
                this.port = uri.getPort();
                if (this.port < 0) {
                    this.port = DEFAULT_PORT;
                }
                this.region = StringUtils.split(this.host, '.')[2]; // extract region from the RDS hostname
                this.username = prop.getUsername();
                this.rdsIamAuthTokenGenerator = RdsIamAuthTokenGenerator.builder()
                        .credentials(new DefaultAWSCredentialsProviderChain())
                        .region(this.region)
                        .build();
                updatePassword(prop);
                final Properties props = prop.getDbProperties();
                props.setProperty("useSSL", "true");
                props.setProperty("requireSSL", "true");
                props.setProperty("trustCertificateKeyStoreUrl", getClass().getResource(RDS_CACERTS).toString());
                props.setProperty("trustCertificateKeyStorePassword", PASSWORD);
                super.init(prop);
                this.tokenThread = new Thread(this, "RdsIamAuthDataSourceTokenThread");
                this.tokenThread.setDaemon(true);
                this.tokenThread.start();
            } catch (URISyntaxException e) {
                throw new RuntimeException(e.getMessage());
            }
        }

        @Override
        public void run() {
            try {
                while (this.tokenThread != null) {
                    Thread.sleep(10 * 60 * 1000); // wait for 10 minutes, then recreate the token
                    updatePassword(getPoolProperties());
                }
            } catch (InterruptedException e) {
                LOG.debug("Background token thread interrupted");
            }
        }

        @Override
        protected void close(boolean force) {
            super.close(force);
            Thread t = tokenThread;
            tokenThread = null;
            if (t != null) {
                t.interrupt();
            }
        }

        private void updatePassword(PoolConfiguration props) {
            String token = rdsIamAuthTokenGenerator.getAuthToken(
                    GetIamAuthTokenRequest.builder().hostname(host).port(port).userName(this.username).build());
            LOG.debug("Updated IAM token for connection pool");
            props.setPassword(token);
        }
    }
}
Please note that you'll need to import Amazon's root/intermediate certificates to establish a trusted connection. The example code above assumes that the certificates have been imported into a file called 'rds-cacerts' that is available on the classpath. Alternatively, you can import them into the JVM's 'cacerts' file.
To use this data-source, you can use the following properties for Spring:
datasource:
  url: jdbc:mysql://dbhost.xyz123abc.us-east-1.rds.amazonaws.com/dbname
  username: iam_app_user
  driver-class-name: com.mysql.cj.jdbc.Driver
  type: com.mydomain.jdbc.RdsIamAuthDataSource
Using Spring Java config:
@Bean
public DataSource dataSource() {
    PoolConfiguration props = new PoolProperties();
    props.setUrl("jdbc:mysql://dbname.abc123xyz.us-east-1.rds.amazonaws.com/dbschema");
    props.setUsername("iam_dbuser_app");
    props.setDriverClassName("com.mysql.jdbc.Driver");
    return new RdsIamAuthDataSource(props);
}
UPDATE: When using MySQL, you can also decide to use the MariaDB JDBC driver, which has built-in support for IAM authentication:
spring:
  datasource:
    host: dbhost.cluster-xxx.eu-west-1.rds.amazonaws.com
    url: jdbc:mariadb:aurora://${spring.datasource.host}/db?user=xxx&credentialType=AWS-IAM&useSsl&serverSslCert=classpath:rds-combined-ca-bundle.pem
    type: org.mariadb.jdbc.MariaDbPoolDataSource
The above requires the MariaDB and AWS SDK libraries, and needs the CA bundle on the classpath.

I know this is an older question, but after some searching I found a pretty easy way you can now do this using the MariaDB driver. In version 2.5 they added an AWS IAM credential plugin to the driver. It will handle generating, caching and refreshing the token automatically.
I've tested using Spring Boot 2.3 with the default HikariCP connection pool and it is working fine for me with these settings:
spring.datasource.url=jdbc:mariadb://host/db?credentialType=AWS-IAM&useSsl&serverSslCert=classpath:rds-combined-ca-bundle.pem
spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
spring.datasource.username=iam_username
#spring.datasource.password=dont-need-this
spring.datasource.hikari.maxLifetime=600000
Download rds-combined-ca-bundle.pem and put it in src/main/resources so you can connect via SSL.
You will need these dependencies on the classpath as well:
runtime 'org.mariadb.jdbc:mariadb-java-client'
runtime 'com.amazonaws:aws-java-sdk-rds:1.11.880'
The driver uses the standard DefaultAWSCredentialsProviderChain, so make sure you have credentials with a policy allowing IAM DB access available wherever you are running your app.
Hope this helps someone else - most examples I found online involved custom code, background threads, etc - but using the new driver feature is much easier!

There is a library that can make this easy. Effectively you just override the getPassword() method in the HikariDataSource. You use STS to assume the role and send a "password" for that role.
<dependency>
    <groupId>io.volcanolabs</groupId>
    <artifactId>rds-iam-hikari-datasource</artifactId>
    <version>1.0.4</version>
</dependency>
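For reference, the general idea behind such a data source is roughly the sketch below. This is not the library's actual code; the class name and the hard-coded region, hostname and port are illustrative assumptions, and it uses the RdsIamAuthTokenGenerator from aws-java-sdk-rds:
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;
import com.zaxxer.hikari.HikariDataSource;

public class IamAuthHikariDataSource extends HikariDataSource {

    @Override
    public String getPassword() {
        // Generate a fresh IAM auth token each time Hikari asks for the password.
        RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
                .credentials(new DefaultAWSCredentialsProviderChain())
                .region("us-east-1") // assumption: region is known up front
                .build();
        return generator.getAuthToken(GetIamAuthTokenRequest.builder()
                .hostname("dbhost.xyz123abc.us-east-1.rds.amazonaws.com") // placeholder host
                .port(3306)
                .userName(getUsername())
                .build());
    }
}
Because getPassword() is consulted each time HikariCP opens a physical connection, every new connection is created with a freshly generated token.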

Related

How does Java 11 use SPI to load the SQL Driver class

In Java 8 there is a static block in the java.sql.DriverManager class:
static {
    loadInitialDrivers();
    println("JDBC DriverManager initialized");
}
It is executed when the java.sql.DriverManager class is loaded by the ClassLoader, and it calls ServiceLoader.load() to scan the files under the META-INF/services folder of the jars on the classpath. In this way it registers all the Driver classes defined in those service files.
However, Java 11 doesn't have this static block anymore, so I was wondering how Java 11 starts the SPI process. Thanks for any answers.
In Java 11 the scanning for the drivers is only started when the first connection is opened:
DriverManager.getConnection(String url)
public static Connection getConnection(String url) throws SQLException {
    java.util.Properties info = new java.util.Properties();
    return (getConnection(url, info, Reflection.getCallerClass()));
}
calls DriverManager.getConnection(String url, Properties info, Class<?> caller):
private static Connection getConnection(
        String url, java.util.Properties info, Class<?> caller) throws SQLException {
    // [..]
    ensureDriversInitialized();
    // [..]
}
which in turn calls DriverManager.ensureDriversInitialized(), which finally uses the java.util.ServiceLoader class to effectively load the drivers:
private static void ensureDriversInitialized() {
    // [..]
    ServiceLoader<Driver> loadedDrivers = ServiceLoader.load(Driver.class);
    // [..]
}
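The lookup mechanism itself is unchanged from Java 8; it is just triggered lazily. A small standalone sketch (the class name is made up) of the same ServiceLoader call, which prints every driver advertised via META-INF/services/java.sql.Driver on the classpath:
import java.sql.Driver;
import java.util.ServiceLoader;

public class ListJdbcDrivers {
    public static void main(String[] args) {
        // Same lookup DriverManager.ensureDriversInitialized() performs internally.
        ServiceLoader<Driver> drivers = ServiceLoader.load(Driver.class);
        for (Driver driver : drivers) {
            System.out.println(driver.getClass().getName());
        }
    }
}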

How can I rerun an Apache Flink Postgres JDBC job without getting "No suitable driver found" exception

I have a Flink job derived from the starter Maven project. That job has a source that opens a Postgres JDBC connection. I am executing the job on my own Flink session cluster using the example docker-compose.yml.
When I submit the job for the first time it executes successfully. When I try to submit it again I get the following error:
Caused by: java.sql.SQLException: No suitable driver found for jdbc:postgresql://host.docker.internal:5432/postgres?user=postgres&password=mypassword
at java.sql.DriverManager.getConnection(DriverManager.java:689)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at com.myorg.project.JdbcPollingSource.run(JdbcPollingSource.java:25)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:269)
I have to restart my cluster in order to rerun my job. Why is this happening? How can I submit my job again without having to restart the cluster?
The only addition to the Maven starter project is:
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.2.24</version>
</dependency>
The Flink source does nothing but open a JDBC connection and is as follows:
package com.mycompany;

import org.apache.flink.streaming.api.functions.source.RichSourceFunction;

import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcSource extends RichSourceFunction<Integer> {

    private final String connString;

    public JdbcSource(String connString) {
        this.connString = connString;
    }

    @Override
    public void run(SourceContext<Integer> ctx) throws Exception {
        try (Connection conn = DriverManager.getConnection(this.connString)) {
        }
    }

    @Override
    public void cancel() {
    }
}
I have tested this on Flink versions 1.14.0 and 1.13.2 with the same results.
Note that this question provides a solution of using Class.forName("org.postgresql.Driver"); within my RichSourceFunction. However, I would like to know what is going on.
For the first question, you can refer to JDBC driver cannot be found when reading a DataSet from an SQL database in Apache Flink.
Second, if you use session mode, it is easy to rerun the Flink job without restarting the cluster: you can log in to the job manager shell and rerun the job from there.
Class.forName("org.postgresql.Driver"); will trigger the driver's static initializer block, so DriverManager can find the driver class. See:
// from org.postgresql.Driver
static {
    try {
        register();
    } catch (SQLException var1) {
        throw new ExceptionInInitializerError(var1);
    }
}
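As a quick illustration (a sketch; it assumes the PostgreSQL driver jar is on the classpath), loading the class by name runs that static block and makes the driver visible to DriverManager:
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Collections;

public class DriverRegistrationDemo {
    public static void main(String[] args) throws Exception {
        // Running the static initializer registers the driver with DriverManager.
        Class.forName("org.postgresql.Driver");
        for (Driver driver : Collections.list(DriverManager.getDrivers())) {
            System.out.println(driver.getClass().getName());
        }
    }
}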
I have this pom.xml dependency for Postgres for Apache Flink 1.13:
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>9.4-1201-jdbc41</version>
</dependency>
You can have a Postgres connector class, for example:
public class PostgreSQLConnector {

    private static volatile PostgreSQLConnector instance;
    private Connection connectionDB = null;

    public PostgreSQLConnector(your params) {
        ...
    }

    public static PostgreSQLConnector getInstance() {
        PostgreSQLConnector postgreSQLConnector = instance;
        if (postgreSQLConnector != null)
            return postgreSQLConnector;
        synchronized (PostgreSQLConnector.class) {
            if (instance == null) {
                instance = new PostgreSQLConnector(your params);
            }
            return instance;
        }
    }

    public Connection getConnectionDB() throws SQLException {
        if (checkNullConnection()) CreateConnection();
        return connectionDB;
    }

    public void CheckConnection() throws SQLException {
        if (checkNullConnection()) CreateConnection();
    }

    public void CreateConnection() throws SQLException {
        try {
            Class.forName(sink.driverName);
            connectionDB = DriverManager.getConnection(fullUrl, username, password);
        } catch (Exception e) {
            ...
        }
    }

    public boolean checkNullConnection() throws SQLException {
        return (connectionDB == null || connectionDB.isClosed());
    }
}
Then you can create a RichSourceFunction and open the connection in the overridden open() method, not in run():
public class JdbcSource extends RichSourceFunction<Integer> {

    private final String connString;
    private static Connection dbConnection;
    private static final PostgreSQLConnector postgreSQLConnector = PostgreSQLConnector.getInstance();

    public JdbcSource(String connString) {
        this.connString = connString;
    }

    @Override
    public void open(Configuration parameters) throws SQLException {
        dbConnection = postgreSQLConnector.getConnectionDB();
    }

    @Override
    public void close() throws Exception {
        if (dbConnection != null) dbConnection.close();
    }

    @Override
    public void run(SourceContext<Integer> ctx) throws Exception {
        // do something here with the connection
    }

    @Override
    public void cancel() {
    }
}
Something like that is worth trying; it should work.
According to the official documentation of the PostgreSQL JDBC driver, if you are using Java 1.6+, you can just put the driver's jar file on the classpath; the driver will be loaded by the JVM automatically. So the question is how to place the driver's jar file on the classpath.
Since you are using Docker to deploy a session cluster, there are two ways that may work:
Put the driver's jar file into the Docker image
Run and access the image with the command:
docker run -it -v $PWD:/tmp/flink <address to image> -- bash
Copy the driver's jar file into the folder /opt/flink/lib.
Create a new image from the container. Since /opt/flink/lib is loaded as part of the classpath by default, the driver's jar file is now on the classpath.
Package the driver's jar into your user jar
Add maven-assembly-plugin to the pom.xml of your Maven project. Recompile your project to get a jar file with dependencies. In this jar, the PostgreSQL JDBC driver is packaged along with your code.

unable to initialize datasource using java annotation on bluemix

I have a Java application running on a Bluemix cloud server. I originally developed it locally on a Tomcat server and then decided to migrate to the cloud. The option suggested everywhere was to use the Liberty and SQLDB services, which after some finicking I got set up on my Bluemix account, with the SQL database named SQL-RCT bound as a service to my Java application.
The problem is encountered when running the following code:
@WebServlet({ "/LoginServlet", "/" })
public class LoginServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;
    private Connection conn;

    @Resource(lookup = "jdbc/SQL-RCT")
    private DataSource myDataSource;

    /**
     * @see HttpServlet#HttpServlet()
     */
    public LoginServlet() {
        super();
        try {
            if (myDataSource == null) {
                throw new Exception("no data source");
            }
            conn = myDataSource.getConnection();
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
When I try to load the servlet I get an error that there was a NullPointerException in my init function, which I quickly narrowed down to my myDataSource object being null.
I've checked the server.xml and I'm using the right name for the lookup, but the lookup doesn't seem to work. Any help would be appreciated.
The server.xml:
<server>
    <featureManager>
        <feature>beanValidation-1.1</feature>
        <feature>cdi-1.2</feature>
        <feature>ejbLite-3.2</feature>
        <feature>el-3.0</feature>
        <feature>jaxrs-2.0</feature>
        <feature>jdbc-4.1</feature>
        <feature>jndi-1.0</feature>
        <feature>jpa-2.1</feature>
        <feature>jsf-2.2</feature>
        <feature>jsonp-1.0</feature>
        <feature>jsp-2.3</feature>
        <feature>managedBeans-1.0</feature>
        <feature>servlet-3.1</feature>
        <feature>websocket-1.1</feature>
        <feature>icap:managementConnector-1.0</feature>
        <feature>appstate-1.0</feature>
        <feature>cloudAutowiring-1.0</feature>
    </featureManager>
    <application name='myapp' location='myapp.war' type='war' context-root='/'/>
    <cdi12 enableImplicitBeanArchives='false'/>
    <httpEndpoint id='defaultHttpEndpoint' host='*' httpPort='${port}'/>
    <webContainer trustHostHeaderPort='true' extractHostHeaderPort='true'/>
    <include location='runtime-vars.xml'/>
    <logging logDirectory='${application.log.dir}' consoleLogLevel='INFO'/>
    <httpDispatcher enableWelcomePage='false'/>
    <applicationMonitor dropinsEnabled='false' updateTrigger='mbean'/>
    <config updateTrigger='mbean'/>
    <appstate appName='myapp' markerPath='${home}/../.liberty.state'/>
    <dataSource id='db2-SQL-RCT' jdbcDriverRef='db2-driver' jndiName='jdbc/SQL-RCT' statementCacheSize='30' transactional='true'>
        <properties.db2.jcc id='db2-SQL-RCT-props' databaseName='${cloud.services.SQL-RCT.connection.db}' user='${cloud.services.SQL-RCT.connection.username}' password='${cloud.services.SQL-RCT.connection.password}' portNumber='${cloud.services.SQL-RCT.connection.port}' serverName='${cloud.services.SQL-RCT.connection.host}'/>
    </dataSource>
    <jdbcDriver id='db2-driver' libraryRef='db2-library'/>
    <library id='db2-library'>
        <fileset id='db2-fileset' dir='${server.config.dir}/lib' includes='db2jcc4.jar db2jcc_license_cu.jar'/>
    </library>
</server>
Injected resources are not available within servlet constructors, since the resources do not get injected until after the servlet instance has been constructed.
Instead, override the javax.servlet.GenericServlet init() method and get your connection there. This lifecycle method gives you behavior similar to what you are currently trying to do in the servlet constructor.
Example code:
@WebServlet({ "/LoginServlet", "/" })
public class LoginServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;
    private Connection conn;

    @Resource(lookup = "jdbc/SQL-RCT")
    private DataSource myDataSource;

    @Override
    public void init() throws ServletException {
        super.init();
        try {
            conn = myDataSource.getConnection();
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }
}
As a side note:
Since Liberty pools connections, it's not necessary to store a connection at the class scope. If you get connections when they are needed and close them once you are done using them, you should not see any performance difference.
If you want to get a connection in the servlet init code as a way to eagerly get a connection, that is fine, but it will impact your servlet load time.
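As an illustration of the per-request pattern, a request handler inside the servlet above could look roughly like this (a sketch; the query against DB2's SYSIBM.SYSDUMMY1 dummy table is just a placeholder, and the usual java.sql and servlet imports are assumed):
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    // Borrow a pooled connection for the duration of the request only.
    try (Connection con = myDataSource.getConnection();
         PreparedStatement ps = con.prepareStatement("SELECT 1 FROM SYSIBM.SYSDUMMY1");
         ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            resp.getWriter().println(rs.getInt(1));
        }
    } catch (SQLException e) {
        throw new ServletException(e);
    }
}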
In most containers, the naming convention for the @Resource annotation is as follows:
@Resource(name = "java:/comp/env/jdbc/SQL-RCT")
private DataSource myDataSource;
Found it on this answer:
JNDI #Resource annotation

JMX process: is it possible to call an external application to handle access rights when a client attempts access

I have an application that is running on localhost:1234, and I am using jconsole to connect to it. The application has a password file to handle login.
I need to allow logging in based on the AD groups of the Windows user. So, for example, if they are in Group1 they will be given readwrite access, if they are in Group2 they are given readonly access, and Group3 is not given any access.
I have created an AD group handling application that can query a list of AD groups and return the required user access level and login details.
My problem: I want to connect to the application using jconsole via the command line using something like:
jconsole localhost:1234
Obviously this will fail to connect, because it's expecting a username and password.
Is there a way in which I can have my JMX application that's running on localhost:1234 wait for an incoming connection request and run my AD group handling application to determine their access level?
My application on localhost:1234 is very basic and looks like this:
import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;
public class SystemConfigManagement {

    private static final int DEFAULT_NO_THREADS = 10;
    private static final String DEFAULT_SCHEMA = "default";

    public static void main(String[] args)
            throws MalformedObjectNameException, InterruptedException,
            InstanceAlreadyExistsException, MBeanRegistrationException,
            NotCompliantMBeanException {
        // Get the MBean server
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Register the MBean
        SystemConfig mBean = new SystemConfig(DEFAULT_NO_THREADS, DEFAULT_SCHEMA);
        ObjectName name = new ObjectName("com.barc.jmx:type=SystemConfig");
        mbs.registerMBean(mBean, name);
        do {
            Thread.sleep(2000);
            System.out.println(
                    "Thread Count = " + mBean.getThreadCount()
                    + ":::Schema Name = " + mBean.getSchemaName()
            );
        } while (mBean.getThreadCount() != 0);
    }
}
and
package com.test.jmx;

public class SystemConfig implements SystemConfigMBean {

    private int threadCount;
    private String schemaName;

    public SystemConfig(int numThreads, String schema) {
        this.threadCount = numThreads;
        this.schemaName = schema;
    }

    @Override
    public void setThreadCount(int noOfThreads) {
        this.threadCount = noOfThreads;
    }

    @Override
    public int getThreadCount() {
        return this.threadCount;
    }

    @Override
    public void setSchemaName(String schemaName) {
        this.schemaName = schemaName;
    }

    @Override
    public String getSchemaName() {
        return this.schemaName;
    }

    @Override
    public String doConfig() {
        return "No of Threads=" + this.threadCount + " and DB Schema Name = " + this.schemaName;
    }
}
[source : http://www.journaldev.com/1352/what-is-jmx-mbean-jconsole-tutorial]
Is there somewhere in main() where I can create this query to validate the user details using the AD group handling application?
The default RMI connector server cannot do that very well (you can provide your own JAAS module (UC3) or Authenticator (UC4)).
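As a rough sketch of the custom Authenticator option (class names and the AD-group lookup are placeholders, not a ready-made integration), you could start the connector server programmatically instead of via the com.sun.management.jmxremote system properties and plug in a javax.management.remote.JMXAuthenticator that delegates the credential check to your AD group handling application:
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.Collections;
import java.util.Map;
import javax.management.remote.JMXAuthenticator;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXPrincipal;
import javax.management.remote.JMXServiceURL;
import javax.security.auth.Subject;

public class AdGroupAuthenticator implements JMXAuthenticator {

    @Override
    public Subject authenticate(Object credentials) {
        String[] creds = (String[]) credentials; // JConsole sends [username, password]
        String accessLevel = lookupAccessLevel(creds[0], creds[1]);
        if (accessLevel == null) {
            throw new SecurityException("Access denied for user " + creds[0]);
        }
        // The returned Subject can then be mapped to readonly/readwrite roles (e.g. via an access file).
        return new Subject(true,
                Collections.singleton(new JMXPrincipal(creds[0])),
                Collections.emptySet(), Collections.emptySet());
    }

    // Placeholder for a call into the AD group handling application.
    private static String lookupAccessLevel(String user, String password) {
        return "readonly";
    }

    public static void main(String[] args) throws Exception {
        LocateRegistry.createRegistry(1234);
        Map<String, Object> env = Collections.singletonMap(
                JMXConnectorServer.AUTHENTICATOR, (Object) new AdGroupAuthenticator());
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1234/jmxrmi");
        JMXConnectorServer server = JMXConnectorServerFactory.newJMXConnectorServer(
                url, env, ManagementFactory.getPlatformMBeanServer());
        server.start();
    }
}
JConsole then connects with a username and password, and the accept/reject decision is delegated to the external application.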
You might be better off using another protocol/implementation which already delegates authentication. There are some webservice, REST and even JBoss remoting connectors, and most of them can be authenticated via a container mechanism. However, I think most of them are not easy to integrate.
If you use, for example, Jolokia (servlet), you could also use hawt.io as a very nice "AJAX" console. (I am not sure whether Jolokia actually ships a JMX client connector that you can use in JConsole, but there are many alternative clients, which are most of the time better for integration/automation.)

WebSphere Network Deployment datasource

I have installed WebSphere Network Deployment server 7.0.0.0.
I have configured a cluster on it.
I have configured a data source on it, say ORA_DS; this data source uses "JAAS - J2C authentication data".
When I test ORA_DS by clicking on the "Test connection" button, the test connection succeeds.
The issue comes when I try to access this data source using my Java code.
Here is my code to access data source and create a connection:
public class DSTester
{
    /**
     * Return the data source.
     * @return the data source
     */
    private DataSource getDataSource()
    {
        DataSource dataSource = null;
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
        env.put(Context.PROVIDER_URL, "iiop://localhost:9811");
        // Retrieve datasource name
        String dataSourceName = "EPLA1";
        if (dataSource == null)
        {
            try
            {
                Context initialContext = new InitialContext(env);
                dataSource = (DataSource) initialContext.lookup(dataSourceName);
            }
            catch (NamingException e1)
            {
                e1.printStackTrace();
                return null;
            }
        }
        return dataSource;
    }

    public static void main(String[] args)
        throws Exception
    {
        DSTester dsTester = new DSTester();
        DataSource ds = dsTester.getDataSource();
        System.out.println(ds);
        System.out.println(ds.getConnection());
    }
}
Here is the output:
com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource#17e40be6
Exception in thread "P=792041:O=0:CT" java.sql.SQLException: ORA-01017: invalid username/password; logon denied
DSRA0010E: SQL State = 72000, Error Code = 1,017
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:406)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
at oracle.jdbc.driver.T4CTTIoauthenticate.receiveOauth(T4CTTIoauthenticate.java:799)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:368)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:508)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:203)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:510)
at oracle.jdbc.pool.OracleDataSource.getPhysicalConnection(OracleDataSource.java:275)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:206)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPhysicalConnection(OracleConnectionPoolDataSource.java:139)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPooledConnection(OracleConnectionPoolDataSource.java:88)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPooledConnection(OracleConnectionPoolDataSource.java:70)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper$1.run(InternalGenericDataStoreHelper.java:1175)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper.getPooledConnection(InternalGenericDataStoreHelper.java:1212)
at com.ibm.ws.rsadapter.spi.WSRdbDataSource.getPooledConnection(WSRdbDataSource.java:2019)
at com.ibm.ws.rsadapter.spi.WSManagedConnectionFactoryImpl.createManagedConnection(WSManagedConnectionFactoryImpl.java:1422)
at com.ibm.ws.rsadapter.spi.WSDefaultConnectionManagerImpl.allocateConnection(WSDefaultConnectionManagerImpl.java:81)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:646)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:613)
at com.test.DSTester.main(DSTester.java:70)
The code works fine if I replace
ds.getConnection()
with
ds.getConnection("ora_user", "ora_password")
My issue is that I need to get the connection without specifying login details for Oracle.
Please help me on this issue.
Any clue will be appreciated.
Thanks
I'd guess it would work if you retrieved the datasource from an application running on the WAS.
Try creating a servlet.
Context initialContext = new InitialContext();
DataSource dataSource = (DataSource) initialContext.lookup("EPLA1");
Connection con = dataSource.getConnection();
Since a servlet runs within WAS, it should be fine if the "Test Connection" works. Running it outside WAS is probably a different context.
I think you need to check all your configuration:
1) Is the application deployed on the cluster or on only one of the cluster members?
2) JAAS - J2C authentication data - what is its scope?
Sometimes you need to restart your whole WAS environment; it depends on the resource configuration scope.
I'd recommend adding resource references for better configuration options.
See the IBM Tech note.
