For this requirement, we tried to create the datasource using the org.wso2.carbon.user.core.util.DatabaseUtil class by passing the realm, but we always get an exception saying the datasource lookup failed.
We understand that during server startup, org.wso2.carbon.user.core.internal.Activator -> startDeploy(BundleContext bundleContext) is invoked. It creates a new RealmService instance in which the RealmConfiguration and datasource objects are initialized, and the Activator then hands that instance to the UserCoreUtil class (UserCoreUtil.setRealmService(realmService)). RealmService initialization goes through DefaultRealmService, where the datasource instance is created and added to the realm properties.
For any user- or tenant-related DB operation, CarbonContext.getThreadLocalCarbonContext().getUserRealm() is invoked. It uses the datasource stored in the properties by DefaultRealmService during server startup, creates the UserStoreManager instance, and returns the UserRealm through which all user-related operations are performed.
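For reference, a minimal sketch of that lookup path as we use it (the classes are the standard Carbon user-core API; the isExistingUser call is just an illustrative operation):

import org.wso2.carbon.context.CarbonContext;
import org.wso2.carbon.user.api.UserRealm;
import org.wso2.carbon.user.api.UserStoreException;
import org.wso2.carbon.user.api.UserStoreManager;

public class UserLookupExample {
    public boolean userExists(String username) throws UserStoreException {
        // The realm resolved here is backed by the datasource initialized by DefaultRealmService at startup.
        UserRealm realm = CarbonContext.getThreadLocalCarbonContext().getUserRealm();
        UserStoreManager userStoreManager = realm.getUserStoreManager();
        return userStoreManager.isExistingUser(username);
    }
}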
To access our application-specific table, we created our own JDBCCustomManager class and tried to perform JDBC operations there. We need the datasource for those DB operations, but when we execute DatabaseUtil.getRealmDataSource(objRealmService.getBootstrapRealmConfiguration()), we always get the exception "Error in looking up data source: jdbc/WSO2CarbonDB".
If we write the methods that access our table inside JDBCUserStoreManager it works, but that is not the proper way to do it. Can you please suggest another way to get hold of the WSO2 datasource object so that we can use it in the application?
Your description is not very clear. If you are trying to get a datasource object, you can do it like this:
import java.util.Hashtable;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public static DataSource lookupDataSource(String dataSourceName, final Hashtable<Object, Object> jndiProperties) {
    try {
        if (jndiProperties == null || jndiProperties.isEmpty()) {
            return (DataSource) InitialContext.doLookup(dataSourceName);
        }
        final InitialContext context = new InitialContext(jndiProperties);
        // use the instance lookup so the supplied JNDI properties are actually honoured
        return (DataSource) context.lookup(dataSourceName);
    } catch (Exception e) {
        throw new RuntimeException("Error in looking up data source: " + e.getMessage(), e);
    }
}
You can define the datasource in master-datasources.xml and give it a JNDI name, which is then used for the lookup.
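For example, assuming a datasource defined there with the (hypothetical) JNDI name jdbc/MyAppDB and a hypothetical application table MY_APP_TABLE, the lookup and a simple query would look roughly like this:

DataSource ds = lookupDataSource("jdbc/MyAppDB", null); // null properties: default InitialContext
try (Connection con = ds.getConnection();
     PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM MY_APP_TABLE");
     ResultSet rs = ps.executeQuery()) {
    // run your application-specific queries here
} catch (SQLException e) {
    // handle or rethrow as appropriate for your application
}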
Related
Looking at answers from high-reputation users such as this one, it seems that it's appropriate to get a new DataSource object by querying the JNDI naming service on every single connection request, e.g. with code like the following (adapted from the linked answer for brevity):
public class ConnectionManager {
    public static Connection getConnection() throws NamingException, SQLException {
        Context initContext = new InitialContext();
        Context envContext = (Context) initContext.lookup("java:/comp/env");
        DataSource dataSource = (DataSource) envContext.lookup("jdbc/test");
        return dataSource.getConnection();
    }
}
Is this really the suggested / idiomatic way? In some of my own "ConnectionManager" utility classes I used to keep a reference to the DataSource object as an instance variable. Nothing wrong came of it, except when the JBoss administrator disabled and re-enabled the connection pool from the admin console; then my code started getting errors like the following:
java.sql.SQLException: javax.resource.ResourceException: IJ000451: The connection manager is shutdown
So, is it an anti-pattern to keep around instances of DataSource objects in JDBC?
A DataSource object can be cached and is thread-safe, although JNDI ought to be well-enough optimized that getting the DS out of JNDI every request is negligible (the same instance will be handed back from JNDI).
If you're working in a Java EE environment, for example, the spec lets you inject a DataSource at the class level, such as:
public class MyServlet extends HttpServlet {
    @Resource
    DataSource ds;

    public void processRequest() throws SQLException {
        try (Connection con = ds.getConnection()) {
            // ...
        }
    }
}
Also, it's completely safe to share DataSource objects across multiple threads. On the other hand, sharing Connection objects across multiple threads is a big mistake, because those are NOT threadsafe per spec.
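If you prefer not to go through JNDI on every call, a minimal sketch of caching the DataSource (never the Connection) might look like this; the JNDI names are the ones from the question, the rest is illustrative:

public final class DataSourceHolder {
    private static volatile DataSource dataSource;

    private DataSourceHolder() {}

    public static DataSource getDataSource() throws NamingException {
        // Lazily look up and cache the DataSource; the pool behind it manages the connections.
        if (dataSource == null) {
            synchronized (DataSourceHolder.class) {
                if (dataSource == null) {
                    Context envContext = (Context) new InitialContext().lookup("java:/comp/env");
                    dataSource = (DataSource) envContext.lookup("jdbc/test");
                }
            }
        }
        return dataSource;
    }

    public static Connection getConnection() throws NamingException, SQLException {
        // Connections are still obtained per use and must be closed by the caller.
        return getDataSource().getConnection();
    }
}

The trade-off, as the question itself shows, is that a cached reference can go stale if an administrator recreates the pool, whereas a fresh JNDI lookup always hands back the current instance.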
I've read about AbstractRoutingDataSource and the standard ways to bind a datasource dynamically in this article:
public class CustomerRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return CustomerContextHolder.getCustomerType();
    }
}
It uses a ThreadLocal context holder to "set" the DataSource:
public class CustomerContextHolder {
    private static final ThreadLocal<CustomerType> contextHolder =
            new ThreadLocal<CustomerType>();

    public static void setCustomerType(CustomerType customerType) {
        Assert.notNull(customerType, "customerType cannot be null");
        contextHolder.set(customerType);
    }

    public static CustomerType getCustomerType() {
        return (CustomerType) contextHolder.get();
    }

    // ...
}
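For context, the article's pattern is typically used like this around each unit of work (customerRepository and CustomerContextHolder.clear() are hypothetical; the clear method would live behind the // ... above and simply call contextHolder.remove()):

CustomerContextHolder.setCustomerType(CustomerType.GOLD);
try {
    // Any repository call here is routed by CustomerRoutingDataSource
    // to the DataSource mapped to the current customer type.
    customerRepository.findAll();
} finally {
    CustomerContextHolder.clear(); // avoid leaking the value to the next task that reuses this thread
}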
I have a quite complex system where threads are not necessarily in my control, say:
Scheduled EJB reads a job list from the database
For each Job it fires a Spring (or Java EE) batch job.
Each job has its origin and destination databases (read from a central database).
Multiple jobs will run in parallel
Jobs may be multithreaded.
ItemReader will use the origin data source that was set for that specific job (origin data source must be bound to some repositories)
ItemWriter will use the destination data source that was set for that specific job (destination data source must also be bound to some repositories).
So I'm feeling somewhat anxious about ThreadLocal; in particular, I'm not sure whether the same thread will be used to handle multiple jobs. If that happens, origin and destination databases may get mixed up.
How can I "store" and bind a data source dynamically in a safe way when dealing with multiple threads?
I could not find a way to set up Spring to play nicely with my setup and inject the desired DataSource, so I decided to handle it manually.
Detailed solution:
I changed my repositories to be prototypes so that a new instance is constructed every time that I wire it:
@Repository
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
I've introduced new setDataSource and setSchema methods in top level interfaces / implementations that are supposed to work with multiple instances / schemas.
Since I'm using spring-data-jdbc-repository, my setDataSource method simply wraps the DataSource with a new JdbcTemplate and propagates the change:
setJdbcOperations(new JdbcTemplate(dataSource));
My implementation is obtaining the DataSources directly from the application server:
final Context context = new InitialContext();
final DataSource dataSource = (DataSource) context.lookup("jdbc/" + dsName);
Finally, for multiple schemas under the same database instance, I'm logging in with a special user (with the correct permissions) and using an Oracle command to switch to the desired schema:
getJdbcOperations().execute("ALTER SESSION SET CURRENT_SCHEMA = " + schema);
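Putting the pieces together, the per-job wiring looks roughly like this (the repository type, bean lookup and job accessors are placeholder names, not the actual classes):

// Prototype scope means this returns a fresh repository instance for this job.
MyJobRepository repository = applicationContext.getBean(MyJobRepository.class);

final Context context = new InitialContext();
final DataSource originDataSource = (DataSource) context.lookup("jdbc/" + job.getOriginDsName());

repository.setDataSource(originDataSource);  // internally: setJdbcOperations(new JdbcTemplate(dataSource))
repository.setSchema(job.getOriginSchema()); // internally: ALTER SESSION SET CURRENT_SCHEMA = <schema>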
While this goes against the Dependency Inversion Principle, it works and is handling my concurrency requirements very well.
We're building an app using Grails 2.0.4, GORM, and Hibernate. When the database is not available, Grails will not initialize, and startup fails. We thought our pool settings would protect against startup failures, but that doesn't seem to be the case.
If pool settings alone can't address this, is it possible to catch exceptions in resources.groovy where, if a database service can't be initialized, switch to a file-based service temporarily? Something like this...
resources.groovy
try {
    myDataService(PostgresDatabaseServiceImpl) {}
} catch (Exception e) {
    // if the database connection failed, use the local service instead
    myDataService(FileBasedServiceImpl) {}
}
Even if the above is possible, it creates a new problem: how to switch back, dynamically, once the database is available. We attempted the above try/catch, but it had no impact; the startup issue persists:
Error creating bean with name 'transactionManagerPostProcessor':
Initialization of bean failed
If it's possible to avoid startup failures through pool settings alone, we could certainly manage SQL exceptions at runtime when the app attempts to use bad database connections, but startup failures we can't manage.
DataSource.groovy (pool settings)
dataSource {
    pooled = true
    driverClassName = "org.postgresql.Driver"
    properties {
        maxActive = 20
        minEvictableIdleTimeMillis = 1800000
        timeBetweenEvictionRunsMillis = 1800000
        numTestsPerEvictionRun = 3
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        validationQuery = "SELECT 1"
    }
}
hibernate {
    cache.use_second_level_cache = false
    cache.use_query_cache = false
    cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory'
}
We attempted the above try/catch, but it had no impact, the startup issue persists:
So it seems you already have the answer to the question of whether it's possible to register a Spring bean for a (potentially) unavailable database in resources.groovy.
As an alternative, you could try registering a Spring bean for the database at runtime. The advantage of this approach is that even if registering the bean fails, you will be able to catch the error and use the file-based service instead. An example of how to register DataSource beans at runtime is shown here.
To use this approach, register only a bean for the file-based service in resources.groovy
myDataService(FileBasedServiceImpl)
Then when you need to access the datasource:
class DataSourceService implements ApplicationContextAware {

    def myDataService
    ApplicationContext applicationContext

    private static PG_BEAN = 'postgres'

    def getDataSource() {
        try {
            getPostgresService()
        } catch (ex) {
            myDataService
        }
    }

    private getPostgresService() {
        def postgres
        if (applicationContext.containsBean(PG_BEAN)) {
            postgres = applicationContext.getBean(PG_BEAN)
        } else {
            // register a bean under the name 'postgres' and store a reference to it in postgres
            // https://stackoverflow.com/a/20634968/2648
        }
        checkPostgres(postgres)
    }

    private checkPostgres(postgresBean) {
        // check that the database is available, throw an exception if it's not, return
        // postgresBean if it is
    }
}
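Where the placeholder comment refers to the linked answer, the underlying idea in plain Spring terms is to register a singleton with the bean factory at runtime. A minimal sketch, assuming the application context can be treated as a ConfigurableApplicationContext and that the JNDI name jdbc/myPostgresDS is only an example:

// Build the DataSource first; if this throws, nothing is registered and the
// caller can fall back to the file-based service.
DataSource postgres = (DataSource) new InitialContext().lookup("jdbc/myPostgresDS");

ConfigurableApplicationContext ctx = (ConfigurableApplicationContext) applicationContext;
ctx.getBeanFactory().registerSingleton("postgres", postgres);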
I am getting javax.naming.NoInitialContextException from Hibernate's SessionFactory.buildSessionFactory() method. This is because I am trying to run a test case outside of the container.
I have code in place to refer to a local DataSource configured in applicationContext.xml. The problem is that I cannot figure out where to implement the local datasource code.
I cannot put it inside a catch (NoInitialContextException) block because the SessionFactory class is deep in the code and, as per the application design, we throw all exceptions rather than catching them.
Is there any way to find out whether the InitialContext exists before hitting the buildSessionFactory method?
Maybe you can perform a lookup on the InitialContext to check whether there is a SessionFactory, as described here: http://docs.jboss.org/jbossas/getting_started/v4/html/hibernate.html
try {
    InitialContext ctx = new InitialContext();
    ctx.lookup("java:/hibernate/SessionFactory");
} catch (NamingException e) {
    // here you can assume that buildSessionFactory won't work
    ...
}
So, pre-Spring, we used a version of HibernateUtil that cached the SessionFactory instance if a successful raw JDBC connection was made, and threw SQLException otherwise. This allowed us to recover when the initial setup of the SessionFactory was "bad" due to authentication or server connection issues.
We moved to Spring and wired things in a more or less classic way with the LocalSessionFactoryBean, the C3P0 datasource, and various dao classes which have the SessionFactory injected.
Now, if the SQL server appears to not be up when the web app starts, the web app never recovers. All access to the DAO methods blows up because a null SessionFactory gets injected. (Once the SessionFactory is created properly, the connection pool mostly handles the up/down status of the SQL server fine, so recovery is possible.)
Now, the DAO methods are wired by default to be singletons, and we could change them to prototype. I don't think that will fix the matter though - I believe the LocalSessionFactoryBean is now "stuck" and caches the null reference (I haven't tested this yet, though, I'll shamefully admit).
This has to be an issue that concerns people.
Tried proxy as suggested below -- this failed
First of all, I had to ignore the suggestion (which frankly seemed wrong from a decompile) to call LocalSessionFactoryBean.buildSessionFactory - it isn't visible.
Instead I tried a modified version as follows:
Override newSessionFactory. At the end, return a proxy of SessionFactory pointing to the invocation handler listed below.
This failed too.
org.hibernate.HibernateException: No local DataSource found for configuration - 'dataSource' property must be set on LocalSessionFactoryBean
Now, if newSessionFactory() is changed to simply
return config.buildSessionFactory() (instead of a proxy) it works, but of course no longer exhibits the desired proxy behavior.
public static class HibernateInvocationHandler implements InvocationHandler {
    final private Configuration config;
    private SessionFactory realSessionFactory;

    public HibernateInvocationHandler(Configuration config) {
        this.config = config;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (false) proxy.hashCode();
        System.out.println("Proxy for SessionFactory called");
        synchronized (this) {
            if (this.realSessionFactory == null) {
                SessionFactory sf = null;
                try {
                    System.out.println("Gonna BUILD one or die trying");
                    sf = this.config.buildSessionFactory();
                } catch (RuntimeException e) {
                    System.out.println(ErrorHandle.exceptionToString(e));
                    log.error("SessionFactoryProxy", e);
                    closeSessionFactory(sf);
                    System.out.println("FAILED to build");
                    sf = null;
                }
                if (sf == null) throw new RetainConfigDataAccessException("SessionFactory not available");
                this.realSessionFactory = sf;
            }
            return method.invoke(this.realSessionFactory, args);
        }
    }
}
The proxy creation in newSessionFactory looks like this
SessionFactory sfProxy = (SessionFactory) Proxy.newProxyInstance(
        SessionFactory.class.getClassLoader(),
        new Class[] { SessionFactory.class },
        new HibernateInvocationHandler(config));
and one can return this proxy (which fails) or config.buildSessionFactory() which works but doesn't solve the initial issue.
An alternate approach has been suggested by bozho, using getObject(). Note the fatal flaw in d), because buildSessionFactory is not visible.
a) if this.sessionFactory is non-null, there is no need for a proxy; just return it
b) if it is null, build a proxy which...
c) should contain a private reference to the sessionFactory and, each time it is called, check whether that reference is null. If so, build a new factory and, if successful, assign it to the private reference and return it from then on.
d) Now, state how you would build that factory from getObject(). Your answer should involve calling buildSessionFactory... but you CAN'T. One could create the factory oneself, but you would risk breaking Spring that way (look at the buildSessionFactory code).
You shouldn't worry about this. Starting the app is something you will rarely do in production, and in development - well, you need the DB server anyway.
You should worry if the application doesn't recover if the db server stops while the app is running.
What you can do is extend LocalSessionFactoryBean, override the getObject() method, and make it return a proxy (via java.lang.reflect.Proxy or CGLIB / Javassist) in case the sessionFactory is null. That way a SessionFactory will always be injected. The proxy should hold a reference to a bare SessionFactory, which would initially be null. Whenever the proxy is asked for a connection, if the sessionFactory is still null, you call buildSessionFactory() (of the LocalSessionFactoryBean) and delegate to it; otherwise throw an exception. (Then, of course, map your new factory bean instead of the current one.)
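A rough sketch of that idea follows; note it assumes buildSessionFactory() is callable from the subclass, which the edit above reports is not actually visible in this Spring version, so treat it as an outline rather than working code:

public class LazySessionFactoryBean extends LocalSessionFactoryBean {

    @Override
    public SessionFactory getObject() {
        // Return a proxy that defers building the real SessionFactory until first use.
        return (SessionFactory) Proxy.newProxyInstance(
                SessionFactory.class.getClassLoader(),
                new Class[] { SessionFactory.class },
                new InvocationHandler() {
                    private SessionFactory real; // bare factory, initially null

                    public synchronized Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        if (real == null) {
                            // Assumed accessible here; in the poster's Spring version it is not visible.
                            real = buildSessionFactory();
                        }
                        return method.invoke(real, args);
                    }
                });
    }
}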
Thus your app will be available even if the db isn't available on startup. But I myself wouldn't bother with this.