I'm trying to put jOOQ alongside Hibernate in an existing large service. The code works as intended, save for one problem: jOOQ queries seem to be oblivious to Spring transactions (annotation-based approach).
The problem is that within the same call stack there are some Hibernate operations (repositories), and jOOQ does not see those entities, while Hibernate does.
I suspect that the problem is in the bean definitions, separate transaction managers, or something similar.
Please note that I'm not using a Spring Boot application, but rather "plain" Spring (version 5.0.8.RELEASE).
The configuration is copied from Spring Boot's auto-configuration:
@Configuration
public class JooqAutoConfiguration {
@Bean
public DataSourceConnectionProvider dataSourceConnectionProvider(DataSource dataSource) {
return new DataSourceConnectionProvider(new TransactionAwareDataSourceProxy(dataSource));
}
@Bean
public SpringTransactionProvider transactionProvider(PlatformTransactionManager txManager) {
return new SpringTransactionProvider(txManager);
}
@Bean
@Order(0)
public DefaultExecuteListenerProvider jooqExceptionTranslatorExecuteListenerProvider() {
return new DefaultExecuteListenerProvider(new JooqExceptionTranslator());
}
@Configuration
public static class DslContextConfiguration {
private final ConnectionProvider connection;
private final DataSource dataSource;
private final TransactionProvider transactionProvider;
private final RecordMapperProvider recordMapperProvider;
private final RecordUnmapperProvider recordUnmapperProvider;
private final Settings settings;
private final RecordListenerProvider[] recordListenerProviders;
private final ExecuteListenerProvider[] executeListenerProviders;
private final VisitListenerProvider[] visitListenerProviders;
private final TransactionListenerProvider[] transactionListenerProviders;
private final ExecutorProvider executorProvider;
public DslContextConfiguration(ConnectionProvider connectionProvider,
DataSource dataSource, ObjectProvider<TransactionProvider> transactionProvider,
ObjectProvider<RecordMapperProvider> recordMapperProvider,
ObjectProvider<RecordUnmapperProvider> recordUnmapperProvider, ObjectProvider<Settings> settings,
ObjectProvider<RecordListenerProvider[]> recordListenerProviders,
ExecuteListenerProvider[] executeListenerProviders,
ObjectProvider<VisitListenerProvider[]> visitListenerProviders,
ObjectProvider<TransactionListenerProvider[]> transactionListenerProviders,
ObjectProvider<ExecutorProvider> executorProvider) {
this.connection = connectionProvider;
this.dataSource = dataSource;
this.transactionProvider = transactionProvider.getIfAvailable();
this.recordMapperProvider = recordMapperProvider.getIfAvailable();
this.recordUnmapperProvider = recordUnmapperProvider.getIfAvailable();
this.settings = settings.getIfAvailable();
this.recordListenerProviders = recordListenerProviders.getIfAvailable();
this.executeListenerProviders = executeListenerProviders;
this.visitListenerProviders = visitListenerProviders.getIfAvailable();
this.transactionListenerProviders = transactionListenerProviders.getIfAvailable();
this.executorProvider = executorProvider.getIfAvailable();
}
@Bean
public DefaultDSLContext dslContext(org.jooq.Configuration configuration) {
return new DefaultDSLContext(configuration);
}
@Bean
public DefaultConfiguration jooqConfiguration() {
DefaultConfiguration configuration = new DefaultConfiguration();
configuration.set(SQLDialect.MYSQL);
configuration.set(this.connection);
if (this.transactionProvider != null) {
configuration.set(this.transactionProvider);
}
if (this.recordMapperProvider != null) {
configuration.set(this.recordMapperProvider);
}
if (this.recordUnmapperProvider != null) {
configuration.set(this.recordUnmapperProvider);
}
if (this.settings != null) {
configuration.set(this.settings);
}
if (this.executorProvider != null) {
configuration.set(this.executorProvider);
}
configuration.set(this.recordListenerProviders);
configuration.set(this.executeListenerProviders);
configuration.set(this.visitListenerProviders);
configuration.setTransactionListenerProvider(this.transactionListenerProviders);
return configuration;
}
}
}
If you let Spring Boot configure jOOQ's DSLContext, and behind the scenes, the Configuration and ConnectionProvider, then you shouldn't have a transactional problem. However, do note that Hibernate might not have flushed the changes applied to its caches to the database, and jOOQ only queries the database directly, not any Hibernate caches.
Hence, you must make sure to always flush Hibernate's session prior to running any jOOQ (or JDBC, or other native SQL queries).
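For illustration, here is a minimal sketch of what that ordering might look like inside a single transactional service method. The entity and table names (ClientEntity, CLIENT) are hypothetical, and it assumes a JPA EntityManager plus the DSLContext bean defined above are both available for injection:
@Service
public class MixedPersistenceService {
    @PersistenceContext
    private EntityManager entityManager; // Hibernate-backed persistence context (hypothetical injection point)
    @Autowired
    private DSLContext dsl; // the DefaultDSLContext configured above

    @Transactional
    public int countClientsAfterInsert(ClientEntity entity) {
        // Persist through Hibernate; the INSERT may still only exist in the session cache.
        entityManager.persist(entity);
        // Flush so the pending statements reach the database on this transaction's connection.
        entityManager.flush();
        // jOOQ, running on the same transaction-aware DataSource, can now see the row.
        return dsl.fetchCount(DSL.table("CLIENT"));
    }
}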
Related
I am connecting to multiple datasources, but some of them may sometimes be offline, and in that case I get errors and the application fails at startup.
I want to skip datasource configuration at startup... I have tried several ways, for example adding
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
to the application.properties and also I have tried adding
@SpringBootApplication(exclude={DataSourceAutoConfiguration.class})
to the main class but still it tries to configure the datasource.
I also tried to use the @Lazy annotation on all methods and on the constructor as below, but I still get an error while creating fooEntityManagerFactory:
@Lazy
@Configuration
@EnableJpaRepositories(basePackages = "com.heyo.tayo.repository.foo", entityManagerFactoryRef = "fooEntityManagerFactory", transactionManagerRef = "fooTransactionManager")
public class PersistencefooConfiguration {
@Autowired
private DbContextHolder dbContextHolder;
@Lazy
@Bean
@ConfigurationProperties("tay.datasource.foo")
public DataSourceProperties fooDataSourceProperties() {
return new DataSourceProperties();
}
@Lazy
@Bean
@ConfigurationProperties("tay.datasource.foo.configuration")
public DataSource fooDataSource() {
DataSource dataSource = fooDataSourceProperties().initializeDataSourceBuilder()
.type(BasicDataSource.class).build();
dbContextHolder.addNewAvailableDbType(DbTypeEnum.foo);
return dataSource;
}
@Lazy
@Bean(name = "fooEntityManagerFactory")
public LocalContainerEntityManagerFactoryBean fooEntityManagerFactory(
EntityManagerFactoryBuilder builder) {
//THE CODE IS FAILING AT BELOW RETURN CASE
return builder
.dataSource(fooDataSource())
.packages("com.heyo.tayo.model.foo")
.build();
}
@Lazy
@Bean
public PlatformTransactionManager fooTransactionManager(
final @Qualifier("fooEntityManagerFactory") LocalContainerEntityManagerFactoryBean fooEntityManagerFactory) {
return new JpaTransactionManager(fooEntityManagerFactory.getObject());
}
}
I have multiple classes like the one above with different configurations for different datasources, and I add each one to the static list of available DBs in its DataSource bean.
Here is my DbAdapter factory class that creates the corresponding DB adapter:
@Service
public class DbAdapterFactory {
@Autowired
private BeanFactory beanFactory;
@Autowired
private DbContextHolder dbContextHolder;
public DBAdapter dbAdapter(){
DbTypeEnum currentDb = dbContextHolder.getCurrentDb();
DBAdapter dbAdapter = null;
if(currentDb == DbTypeEnum.FOODB) {
dbAdapter = beanFactory.getBean(foodbadaptor.class);
} else {
dbAdapter = beanFactory.getBean(koodbadaptor.class);
}
return dbAdapter;
}
}
Here is the DbContextHolder class that performs operations such as setting the default DB or getting the current DB:
@Component
public class DbContextHolder {
private DbTypeEnum dbType = DbTypeEnum.FOODB;
private Set<DbTypeEnum> availableDbTypes = new HashSet<>();
public void setCurrentDb(DbTypeEnum dbType) {
this.dbType = dbType;
}
public DbTypeEnum getCurrentDb() {
return this.dbType;
}
public List<DbTypeEnum> getAvailableDbTypes() {
return new ArrayList<>(availableDbTypes);
}
public void addNewAvailableDbType(DbTypeEnum dbTypeEnum) {
availableDbTypes.add(dbTypeEnum);
}
}
I made everything @Lazy and tried @SpringBootApplication(exclude={DataSourceAutoConfiguration.class}), but something is still trying to create the bean, the error occurs, and the application shuts down. I want to use that configuration and datasource inside a try-catch block and not stop the application at runtime. How can I achieve this, and what am I missing in those configurations or annotations?
I believe you can simply add the following to your application properties:
spring.sql.init.continue-on-error=true
According to the Spring Boot 2.5.5 user guide:
https://docs.spring.io/spring-boot/docs/2.5.5/reference/htmlsingle/#howto-initialize-a-database-using-spring-jdbc
Spring Boot enables the fail-fast feature of its script-based database initializer. If the scripts cause exceptions, the application fails to start. You can tune that behavior by setting spring.sql.init.continue-on-error.
Depending on your Spring Boot version, the property is named either
spring.sql.init.continue-on-error
or before Spring Boot 2.5
spring.datasource.continue-on-error
It is so dumb, but I solved the problem by adding the following to application.properties:
spring.jpa.database=sql_server
I have no idea why I need to specify that explicitly in the properties file, but the problem is solved. I will keep searching for the reason.
I have an application that has a base database (Oracle). It fetches the other tenants' database connection strings from a table in the base database. These tenants can be Oracle, Postgres or MSSQL.
When the application starts, the dialect is set by Hibernate to org.hibernate.dialect.SQLServerDialect based on the base database. But when I try to insert data into a tenant on the MSSQL database, it throws an error while inserting data: com.microsoft.sqlserver.jdbc.SQLServerException: DEFAULT or NULL are not allowed as explicit identity values
This is because it is setting MSSQL dialect for the Oracle database.
[WARN ] 2020-01-21 09:16:22.504 [https-jsse-nio-22500-exec-5] [o.h.e.j.s.SqlExceptionHelper] -- SQL Error: 339, SQLState: S0001
[ERROR] 2020-01-21 09:16:22.504 [https-jsse-nio-22500-exec-5] [o.h.e.j.s.SqlExceptionHelper] -- DEFAULT or NULL are not allowed as explicit identity values.
[ERROR] 2020-01-21 09:16:22.535 [https-jsse-nio-22500-exec-5] [o.a.c.c.C.[.[.[.[dispatcherServlet]] -- Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not execute statement] with root cause
com.microsoft.sqlserver.jdbc.SQLServerException: DEFAULT or NULL are not allowed as explicit identity values.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:217)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1655)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:440)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:385)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7505)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2445)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:191)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:166)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeUpdate(SQLServerPreparedStatement.java:328)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:197)
at org.hibernate.dialect.identity.GetGeneratedKeysDelegate.executeAndExtract(GetGeneratedKeysDelegate.java:57)
at org.hibernate.id.insert.AbstractReturningDelegate.performInsert(AbstractReturningDelegate.java:43)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3106)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3699)
at org.hibernate.action.internal.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:84)
at org.hibernate.engine.spi.ActionQueue.execute(ActionQueue.java:645)
at org.hibernate.engine.spi.ActionQueue.addResolvedEntityInsertAction(ActionQueue.java:282)
at org.hibernate.engine.spi.ActionQueue.addInsertAction(ActionQueue.java:263)
at org.hibernate.engine.spi.ActionQueue.addAction(ActionQueue.java:317)
at org.hibernate.event.internal.AbstractSaveEventListener.addInsertAction(AbstractSaveEventListener.java:335)
at org.hibernate.event.internal.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:292)
at org.hibernate.event.internal.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:198)
at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:128)
at org.hibernate.event.internal.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:192)
at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:135)
at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:62)
at org.hibernate.event.service.internal.EventListenerGroupImpl.fireEventOnEachListener(EventListenerGroupImpl.java:108)
at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:702)
at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:688)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
I have a TenantIdentifierResolver which implements CurrentTenantIdentifierResolver
@Component
public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {
@Autowired
PropertyConfig propertyConfig;
@Override
public String resolveCurrentTenantIdentifier() {
String tenantId = TenantContext.getCurrentTenant();
if (tenantId != null) {
return tenantId;
}
return propertyConfig.getDefaultTenant();
}
@Override
public boolean validateExistingCurrentSessions() {
return true;
}
}
A component class MultiTenantConnectionProviderImpl which extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl
@Component
public class MultiTenantConnectionProviderImpl extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl {
@Autowired
private DataSource defaultDS;
@Autowired
PropertyConfig propertyConfig;
@Autowired
TenantDataSourceService tenantDBService;
private Map<String, DataSource> map = new HashMap<>();
boolean init = false;
@PostConstruct
public void load() {
map.put(propertyConfig.getDefaultTenant(), defaultDS);
ConcurrentMap<String,DataSource> tenantList = tenantDBService.getGlobalTenantDataSource(); //gets tenant datasources from service
map.putAll(tenantList);
}
@Override
protected DataSource selectAnyDataSource() {
return map.get(propertyConfig.getDefaultTenant());
}
@Override
protected DataSource selectDataSource(String tenantIdentifier) {
return map.get(tenantIdentifier) != null ? map.get(tenantIdentifier) : map.get(propertyConfig.getDefaultTenant());
}
}
And a configuration class HibernateConfig
@Configuration
public class HibernateConfig {
@Autowired
private JpaProperties jpaProperties;
@Bean
public JpaVendorAdapter jpaVendorAdapter() {
return new HibernateJpaVendorAdapter();
}
@Bean
LocalContainerEntityManagerFactoryBean entityManagerFactory(
DataSource dataSource,
MultiTenantConnectionProviderImpl multiTenantConnectionProviderImpl,
TenantIdentifierResolver currentTenantIdentifierResolverImpl
) {
Map<String, Object> jpaPropertiesMap = new HashMap<>(jpaProperties.getProperties());
jpaPropertiesMap.put(Environment.MULTI_TENANT, MultiTenancyStrategy.SCHEMA);
jpaPropertiesMap.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProviderImpl);
jpaPropertiesMap.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, currentTenantIdentifierResolverImpl);
//jpaPropertiesMap.put(Environment.DIALECT_RESOLVERS, "com.esq.cms.CashOrderMgmtService.multitenant.CustomDialectResolver");
jpaPropertiesMap.put("hibernate.jdbc.batch_size", 500);
jpaPropertiesMap.put("hibernate.order_inserts", true);
jpaPropertiesMap.put("hibernate.order_updates", true);
LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
em.setDataSource(dataSource);
em.setPackagesToScan("com.esq.cms.*");
em.setJpaVendorAdapter(this.jpaVendorAdapter());
em.setJpaPropertyMap(jpaPropertiesMap);
return em;
}
}
There are many examples of setting a dialect using the properties file, but they assume a fixed type and number of databases. In my case it can be any of the database types. I have also tried adding a custom class as a Hibernate dialect resolver, but it is still not working. I might be missing something. What should I do so that Hibernate picks the dialect according to the database itself? Any help will be appreciated. Thanks.
Try using the multi-tenancy strategy DATABASE rather than SCHEMA or DISCRIMINATOR, since you are dealing with different types of databases (for example, Oracle, MySQL, and so on).
As per the Hibernate docs, these are the approaches you can take for separating data in multi-tenant systems:
Separate Database (MultiTenancyStrategy.DATABASE): Each tenant's data is kept in a physically separate database instance. JDBC connections will point to each separate database specifically, so connection pooling will be per-single-tenant. The connection pool is selected based on the "tenant identifier" linked to a particular user.
Separate Schema (MultiTenancyStrategy.SCHEMA): Each tenant's data is kept in a distinct database schema on a single database instance.
Partitioned data (MultiTenancyStrategy.DISCRIMINATOR): All data is kept in a single database schema only. The data for each tenant is partitioned by the use of a discriminator. A single JDBC connection pool is used for all tenants. For every SQL statement, the app needs to manage execution of the queries on the database based on the "tenant identifier" discriminator.
Decide which strategy you want to go for based on the requirements.
I'm providing my own sample of multi-tenancy code (with Spring Boot) that I have built with two different databases: one MySQL and one Postgres. This is a working example.
Github Repository : Working Multitenancy Code
Note: Create tables before doing any operations in the database.
I have configured all the tenants in the properties file (application.properties) with different databases.
server.servlet.context-path=/sample
spring.jpa.generate-ddl=false
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=true
## Tenant 1 database ##
multitenant.datasources.tenant1.url=jdbc:postgresql://localhost:5432/tenant1
multitenant.datasources.tenant1.username=postgres
multitenant.datasources.tenant1.password=Anish#123
multitenant.datasources.tenant1.driverClassName=org.postgresql.Driver
## Tenant 2 database ##
multitenant.datasources.tenant2.url=jdbc:mysql://localhost:3306/tenant2
multitenant.datasources.tenant2.username=root
multitenant.datasources.tenant2.password=Anish#123
multitenant.datasources.tenant2.driverClassName=com.mysql.cj.jdbc.Driver
MultiTenantProperties : This class binds and validates the properties set for multiple tenants and keeps them as a map of tenant name to the required database information.
@Component
@ConfigurationProperties(value = "multitenant")
public class MultiTenantProperties {
private Map<String, Map<String, String>> datasources = new LinkedHashMap<>();
public Map<String, Map<String, String>> getDatasources() {
return datasources;
}
public void setDatasources(Map<String, Map<String, String>> datasources) {
this.datasources = datasources;
}
}
ThreadLocalTenantStorage : This class keeps the name of the tenant from the incoming request for the current thread, so that CRUD operations can be performed against the right tenant.
public class ThreadLocalTenantStorage {
private static ThreadLocal<String> currentTenant = new ThreadLocal<>();
public static void setTenantName(String tenantName) {
currentTenant.set(tenantName);
}
public static String getTenantName() {
return currentTenant.get();
}
public static void clear() {
currentTenant.remove();
}
}
MultiTenantInterceptor : This class intercepts the incoming request and stores the current tenant in ThreadLocalTenantStorage so that the right database is selected. After completion of the request, the tenant is removed from ThreadLocalTenantStorage.
public class MultiTenantInterceptor extends HandlerInterceptorAdapter {
private static final String TENANT_HEADER_NAME = "TENANT-NAME";
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
throws Exception {
String tenantName = request.getHeader(TENANT_HEADER_NAME);
ThreadLocalTenantStorage.setTenantName(tenantName);
return true;
}
@Override
public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler,
ModelAndView modelAndView) throws Exception {
ThreadLocalTenantStorage.clear();
}
}
TenantIdentifierResolver : This class is responsible for returning the current tenant from ThreadLocalTenantStorage so the datasource can be selected.
public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {
private static String DEFAULT_TENANT_NAME = "tenant1";
@Override
public String resolveCurrentTenantIdentifier() {
String currentTenantName = ThreadLocalTenantStorage.getTenantName();
return (currentTenantName != null) ? currentTenantName : DEFAULT_TENANT_NAME;
}
@Override
public boolean validateExistingCurrentSessions() {
return true;
}
}
WebConfiguration : This configuration registers the MultiTenantInterceptor class as an interceptor.
@Configuration
public class WebConfiguration implements WebMvcConfigurer {
@Override
public void addInterceptors(InterceptorRegistry registry) {
registry.addInterceptor(new MultiTenantInterceptor());
}
}
DataSourceMultiTenantConnectionProvider : This class selects the datasource based on the tenant name.
public class DataSourceMultiTenantConnectionProvider extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl {
private static final long serialVersionUID = 1L;
@Autowired
private Map<String, DataSource> multipleDataSources;
@Override
protected DataSource selectAnyDataSource() {
return multipleDataSources.values().iterator().next();
}
@Override
protected DataSource selectDataSource(String tenantName) {
return multipleDataSources.get(tenantName);
}
}
MultiTenantJPAConfiguration : This class configures the custom beans for database transactions and registers the tenant datasources for multi-tenancy.
@Configuration
@EnableJpaRepositories(basePackages = { "com.example.multitenancy.dao" }, transactionManagerRef = "multiTenantTxManager")
@EnableConfigurationProperties({ MultiTenantProperties.class, JpaProperties.class })
@EnableTransactionManagement
public class MultiTenantJPAConfiguration {
@Autowired
private JpaProperties jpaProperties;
@Autowired
private MultiTenantProperties multiTenantProperties;
@Bean
public MultiTenantConnectionProvider multiTenantConnectionProvider() {
return new DataSourceMultiTenantConnectionProvider();
}
@Bean
public CurrentTenantIdentifierResolver currentTenantIdentifierResolver() {
return new TenantIdentifierResolver();
}
@Bean(name = "multipleDataSources")
public Map<String, DataSource> repositoryDataSources() {
Map<String, DataSource> datasources = new HashMap<>();
multiTenantProperties.getDatasources().forEach((key, value) -> datasources.put(key, createDataSource(value)));
return datasources;
}
private DataSource createDataSource(Map<String, String> source) {
return DataSourceBuilder.create().url(source.get("url")).driverClassName(source.get("driverClassName"))
.username(source.get("username")).password(source.get("password")).build();
}
@Bean
public EntityManagerFactory entityManagerFactory(LocalContainerEntityManagerFactoryBean entityManagerFactoryBean) {
return entityManagerFactoryBean.getObject();
}
@Bean
public PlatformTransactionManager multiTenantTxManager(EntityManagerFactory entityManagerFactory) {
JpaTransactionManager transactionManager = new JpaTransactionManager();
transactionManager.setEntityManagerFactory(entityManagerFactory);
return transactionManager;
}
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactoryBean(
MultiTenantConnectionProvider multiTenantConnectionProvider,
CurrentTenantIdentifierResolver currentTenantIdentifierResolver) {
Map<String, Object> hibernateProperties = new LinkedHashMap<>();
hibernateProperties.putAll(this.jpaProperties.getProperties());
hibernateProperties.put(Environment.MULTI_TENANT, MultiTenancyStrategy.DATABASE);
hibernateProperties.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProvider);
hibernateProperties.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, currentTenantIdentifierResolver);
LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
entityManagerFactoryBean.setPackagesToScan("com.example.multitenancy.entity");
entityManagerFactoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
entityManagerFactoryBean.setJpaPropertyMap(hibernateProperties);
return entityManagerFactoryBean;
}
}
Sample Entity class for testing :
@Entity
@Table(name = "user_details", schema = "public")
public class User {
@Id
@Column(name = "id")
private Long id;
@Column(name = "full_name", length = 30)
private String name;
public User() {
super();
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
Sample Repository for testing :
public interface UserRepository extends JpaRepository<User, Long>{
}
Sample Controller :
@RestController
@Transactional
public class SampleController {
@Autowired
private UserRepository userRepository;
@GetMapping(value = "/{id}")
public ResponseEntity<User> getUser(@PathVariable("id") String id) {
Optional<User> user = userRepository.findById(Long.valueOf(id));
User userDemo = user.get();
return ResponseEntity.ok(userDemo);
}
@PostMapping(value = "/create/user")
public ResponseEntity<String> createUser(@RequestBody User user) {
userRepository.save(user);
return ResponseEntity.ok("User is saved");
}
}
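As a quick, hypothetical smoke test for the sample above (the host and port are assumptions; the TENANT-NAME header and the /sample context path come from the interceptor and application.properties shown earlier):
public class TenantSmokeTest {
    public static void main(String[] args) throws Exception {
        // Fetch user 1 from tenant2 (the MySQL tenant in the sample properties).
        URL url = new URL("http://localhost:8080/sample/1");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        // MultiTenantInterceptor reads this header and stores it in ThreadLocalTenantStorage.
        connection.setRequestProperty("TENANT-NAME", "tenant2");
        System.out.println("HTTP status: " + connection.getResponseCode());
        connection.disconnect();
    }
}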
Based on our analysis, a Spring/JPA multi-tenancy implementation that connects to multiple database types (MSSQL, PostgreSQL) only works when you have an initial DataSource to connect to at startup. The Spring/JPA/Hibernate framework requires a dialect to be set up during application startup and will throw an error if you don't set one. Our requirement was to obtain these connections lazily through another service that requires a tenant context. As a workaround, we connect to a lightweight, empty in-memory SQLite database at startup to satisfy the initial dialect and connection requirements. This is the path of least customization to the current framework code, and hopefully this will be added as a feature for multi-tenancy implementations down the road.
The key method that needs an initial connection, and that later lets you connect to multiple types of DBs as needed, is in the class that extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl; override the following method:
@Override
protected DataSource selectAnyDataSource() {
//TODO This method is called more than once. So check if the data source map
// is empty. If it is then set default tenant for now.
// This is test code and needs to figure out making it work for application scenarios
if (tenantDataSources.isEmpty()) {
tenantDataSources.put("default", dataSource.getDataSource(""));
log.info("selectAnyDataSource() method call...Total tenants:" + tenantDataSources.size());
}
return this.tenantDataSources.values().iterator().next();
}
I'm a beginner with a simple Spring Boot project, and it's my first time using a connection pool (HikariCP in this case), so I need your help. It's working, but I want to know whether I'm using it the right way with Hibernate, whether there are better ways to do it, and whether my Spring Boot project structure is correct.
EDIT: It works even if I remove the HikariCPConfig class; how can I tell whether the connection pool is working or not?
The project is as follow :
- BankManager
src/main/java
|
|__com.manager
|__BankManagerApplication.java
|__HikariCPConfig.java
|__com.manager.dao
|__ClientRepository.java
|__com.manager.entities
|__Client.java
|__com.manager.service
|__ClientServiceImpl.java
|__ClientServiceInterface.java
src/main/resources
|__application.properties
BankManagerApplication.java :
@SpringBootApplication
public class BankManagerApplication {
public static void main(String[] args) {
ApplicationContext ctx = SpringApplication.run(BankManagerApplication.class, args);
ClientServiceInterface service = ctx.getBean(ClientServiceInterface.class);
service.addClient(new Client("client1"));
service.addClient(new Client("client2"));
}
}
HikariCPConfig.java :
@Configuration
@ComponentScan
class HikariCPConfig {
@Value("${spring.datasource.username}")
private String user;
@Value("${spring.datasource.password}")
private String password;
@Value("${spring.datasource.url}")
private String dataSourceUrl;
@Value("${spring.datasource.dataSourceClassName}")
private String dataSourceClassName;
@Value("${spring.datasource.poolName}")
private String poolName;
@Value("${spring.datasource.connectionTimeout}")
private int connectionTimeout;
@Value("${spring.datasource.maxLifetime}")
private int maxLifetime;
@Value("${spring.datasource.maximumPoolSize}")
private int maximumPoolSize;
@Value("${spring.datasource.minimumIdle}")
private int minimumIdle;
@Value("${spring.datasource.idleTimeout}")
private int idleTimeout;
@Bean
public HikariDataSource primaryDataSource() {
Properties dsProps = new Properties();
dsProps.put("url", dataSourceUrl);
dsProps.put("user", user);
dsProps.put("password", password);
dsProps.put("prepStmtCacheSize",250);
dsProps.put("prepStmtCacheSqlLimit",2048);
dsProps.put("cachePrepStmts",Boolean.TRUE);
dsProps.put("useServerPrepStmts",Boolean.TRUE);
Properties configProps = new Properties();
configProps.put("dataSourceClassName", dataSourceClassName);
configProps.put("poolName",poolName);
configProps.put("maximumPoolSize",maximumPoolSize);
configProps.put("minimumIdle",minimumIdle);
configProps.put("minimumIdle",minimumIdle);
configProps.put("connectionTimeout", connectionTimeout);
configProps.put("idleTimeout", idleTimeout);
configProps.put("dataSourceProperties", dsProps);
HikariConfig hc = new HikariConfig(configProps);
HikariDataSource ds = new HikariDataSource(hc);
return ds;
}
}
ClientServiceImpl.java
@Service
public class ClientServiceImpl implements ClientServiceInterface {
@Autowired
ClientRepository clientRepository; // this class extends JPARepository
@Override
public Client addClient(Client c) {
return clientRepository.save(c);
}
}
application.properties :
server.port = 8888
spring.jpa.databasePlatform=org.hibernate.dialect.MySQL5Dialect
spring.jpa.show-sql = true
spring.jpa.hibernate.ddl-auto = update
spring.datasource.dataSourceClassName=com.mysql.jdbc.jdbc2.optional.MysqlDataSource
spring.datasource.url=jdbc:mysql://localhost:3306/bank_manager
spring.datasource.username=root
spring.datasource.password=
spring.datasource.poolName=SpringBootHikariCP
spring.datasource.maximumPoolSize=5
spring.datasource.minimumIdle=3
spring.datasource.maxLifetime=2000000
spring.datasource.connectionTimeout=30000
spring.datasource.idleTimeout=30000
spring.datasource.pool-prepared-statements=true
spring.datasource.max-open-prepared-statements=250
Thank you in advance.
Your project structure is standard, so it's correct.
About Hikari:
Hikari is indeed a great choice for pooling. I'm used to working with Hikari successfully using a smaller set of parameters than you are applying in your case, but if it's working for you, that's fine.
For more info about Hikari setup, I recommend reading the official wiki, if you haven't already.
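Regarding the EDIT about verifying the pool: one simple check (a sketch, assuming you can inject the DataSource into any bean, here a CommandLineRunner) is to look at the concrete DataSource class or unwrap it to HikariDataSource:
@Component
public class PoolCheck implements CommandLineRunner {
    @Autowired
    private DataSource dataSource;

    @Override
    public void run(String... args) throws Exception {
        // With a Hikari-backed setup this typically prints com.zaxxer.hikari.HikariDataSource.
        System.out.println("DataSource class: " + dataSource.getClass().getName());
        if (dataSource.isWrapperFor(HikariDataSource.class)) {
            HikariDataSource hikari = dataSource.unwrap(HikariDataSource.class);
            System.out.println("Pool name: " + hikari.getPoolName());
        }
    }
}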
About property loading:
You can make use of some Spring Boot features to read the DB parameters and apply them to your runtime beans with less code. For example:
In application.properties (define a custom prefix 'myproject.db' for your pool params)
myproject.db.dataSourceClassName=com.mysql.jdbc.jdbc2.optional.MysqlDataSource
myproject.db.url=jdbc:mysql://localhost:3306/bank_manager
myproject.db.username=root
... and the other params below
Create a Spring Configuration class
@Configuration
public class MyDBConfiguration {
@Bean(name = "myProjectDataSource")
@ConfigurationProperties(prefix = "myproject.db")
public DataSource dataSource(){
//This will activate Hikari to create a new DataSource instance with all parameters you defined with 'myproject.db'
return DataSourceBuilder.create().build();
}
}
In your ClientRepository class:
@Repository
public class ClientRepository {
//The code below is optional, but will work if you want to use jdbctemplate tied to the DataSource created above. By default all Hibernate Sessions will take the DataSource generated by Spring
#Bean(name = "myProjectJdbcTemplate")
public JdbcTemplate jdbcTemplate(#Qualifier("myProjectDataSource") DataSource dataSource){
return new JdbcTemplate(dataSource);
}
}
There are other options to manage DataSource bean creation if you are going to use two or more different databases. You can vary the property prefix for the other databases and annotate only one DataSource as @Primary, which is mandatory when you have more than one DataSource in the Spring context.
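To illustrate that last point, here is a hedged sketch of what two such DataSource beans might look like; the second property prefix (myproject.db2) and the bean names are purely hypothetical:
@Configuration
public class MultipleDataSourcesConfiguration {
    // Marked @Primary so it is used wherever no qualifier is specified.
    @Primary
    @Bean(name = "myProjectDataSource")
    @ConfigurationProperties(prefix = "myproject.db")
    public DataSource primaryDataSource() {
        return DataSourceBuilder.create().build();
    }

    // A second pool bound to a different prefix; inject it with @Qualifier("myProjectSecondaryDataSource").
    @Bean(name = "myProjectSecondaryDataSource")
    @ConfigurationProperties(prefix = "myproject.db2")
    public DataSource secondaryDataSource() {
        return DataSourceBuilder.create().build();
    }
}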
I'm writing an application which connects to an Oracle database. I call a function in the DB which inserts new records into a table. After this call I can decide what I want to do: commit or roll back.
Unfortunately I'm new to Spring, so I have problems with the configuration. What's more, I want to write this configuration in a Java class, not in XML. And here I need your help.
UPDATED CODE:
ApplicationConfig code:
@Configuration
@EnableTransactionManagement
@ComponentScan("hr")
@PropertySource({"classpath:jdbc.properties", "classpath:functions.properties", "classpath:procedures.properties"})
public class ApplicationConfig {
@Autowired
private Environment env;
@Bean(name="dataSource")
public DataSource dataSource() {
BasicDataSource dataSource = new BasicDataSource();
dataSource.setDriverClassName(env.getProperty("jdbc.driver"));
dataSource.setUrl(env.getProperty("jdbc.url"));
dataSource.setUsername(env.getProperty("jdbc.username"));
dataSource.setPassword(env.getProperty("jdbc.password"));
dataSource.setDefaultAutoCommit(false);
return dataSource;
}
@Bean
public JdbcTemplate jdbcTemplate(DataSource dataSource) {
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.setResultsMapCaseInsensitive(true);
return jdbcTemplate;
}
@Bean(name="txName")
public PlatformTransactionManager txManager() {
DataSourceTransactionManager txManager = new DataSourceTransactionManager();
txManager.setDataSource(dataSource());
return txManager;
}
}
I have a DAO and a Service, both of which implement the proper interfaces.
Service implementation:
@Service
public class HumanResourcesServiceImpl implements HumanResourcesService {
@Autowired
private HumanResourcesDao hrDao;
@Override
public String generateData(int rowsNumber) {
return hrDao.generateData(rowsNumber);
}
@Override
@Transactional("txName")
public void shouldCommit(boolean doCommit, Connection connection) throws SQLException {
hrDao.shouldCommit(doCommit, connection);
}
}
Dao implementation:
@Repository
public class HumanResourcesDaoImpl implements HumanResourcesDao {
private JdbcTemplate jdbcTemplate;
private SimpleJdbcCall generateData;
@Autowired
public HumanResourcesDaoImpl(JdbcTemplate jdbcTemplate, Environment env) {
this.jdbcTemplate = jdbcTemplate;
generateData = new SimpleJdbcCall(jdbcTemplate)
.withProcedureName(env.getProperty("procedure.generateData"));
}
@Override
public String generateData(int rowsNumber) {
HashMap<String, Object> params = new HashMap<>();
params.put("i_rowsNumber", rowsNumber);
Map<String, Object> m = generateData.execute(params);
return (String) m.get("o_execution_time");
}
@Override
@Transactional("txName")
public void shouldCommit(boolean doCommit, Connection connection) throws SQLException {
if(doCommit) {
connection.commit();
} else {
connection.rollback();
}
}
}
Main class code:
public class Main extends Application implements Initializable {
@Override
public void initialize(URL url, ResourceBundle resourceBundle) {
ApplicationContext context = new AnnotationConfigApplicationContext(ApplicationConfig.class);
hrService = context.getBean(HumanResourcesService.class);
BasicDataSource ds = (BasicDataSource)context.getBean("dataSource");
Connection connection = ds.getConnection();
//do something and call
//hrService.generateData
//do something and call
//hrService.shouldCommit(true, connection);
//which commit or rollback generated data from previoues callback
}
}
UPDATE:
I think that the problem is with the connection, because this statement:
this.jdbcTemplate.getDataSource().getConnection();
creates a new connection, so there is nothing to commit or roll back. But I still can't figure out why this doesn't work properly. No errors, no new records...
What is weird is that when I debugged connection.commit(), I found out that in DelegatingConnection.java the parameter this holds the proper connection, but there is something like:
_conn.commit();
and _conn holds a different connection. Why?
Should I somehow synchronize the connection for those two methods, or is this only one connection? To be honest, I'm not sure how it works exactly. Is there one connection that all calls to the stored procedures share, or is a new connection created with each call?
The real question is: how do I commit or roll back the data inserted into the table by a previous call?
One easy way to do this is to annotate the method with @Transactional:
@Transactional
public void myBeanMethod() {
...
if (!doCommit)
throw new IllegalStateException(); // any unchecked will do
}
and Spring will roll all database changes back.
Remember to add @EnableTransactionManagement to your Spring application/main class.
You can use @Transactional and @EnableTransactionManagement to set up transactions without using XML configuration. In short, annotate the methods/classes you want to have transactions with @Transactional. To set up transaction management you use @EnableTransactionManagement inside your @Configuration class.
See Spring's docs for examples of how to use both. @EnableTransactionManagement is detailed in the JavaDocs, but should match the XML configuration.
UPDATE
The problem is that you are mixing raw JDBC calls (java.sql.Connection) with Spring JDBC. When you execute your SimpleJdbcCall, Spring creates a new Connection. This is not the same Connection as the one you later try to commit. Hence, nothing happens when you perform the commit. I tried to somehow get the connection that the SimpleJdbcCall uses, but could not find any easy way.
To test this I tried the following (I did not use params):
@Override
public String generateData(int rowsNumber) {
//HashMap<String, Object> params = new HashMap<>();
//params.put("i_rowsNumber", rowsNumber);
//Map<String, Object> m = generateData.execute(params);
Connection targetConnection = DataSourceUtils.getTargetConnection(generateData.getJdbcTemplate().getDataSource().getConnection());
System.out.println(targetConnection.prepareCall((generateData.getCallString())).execute());
targetConnection.commit();
return (String) m.get("o_execution_time");
}
If I don't save the targetConnection, and instead try to get the connection again by calling DataSourceUtils.getTargetConnection() when committing, nothing happens. Thus, you must commit on the same connection that you perform the statement on. This does not seem to be easy, nor the proper way.
The solution is to drop the java.sql.Connection.commit() call. Instead, use Spring transactions completely. If you use @Transactional on the method that performs the database call, Spring will automatically commit when the method has finished. If the method body throws any exception (even outside the actual database call), it will automatically roll back. In other words, this should suffice for normal transaction management.
However, if you are doing batch processing and wish to have more control over your transactions with commits and rollbacks, you can still use Spring. To programmatically control transactions with Spring, you can use TransactionTemplate, which has commit and rollback support. I don't have time to give you proper samples, but may do so in later days if you are still stuck ;)
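Since TransactionTemplate is mentioned without a sample, here is a minimal sketch of programmatic transaction control. It reuses the txName transaction manager and HumanResourcesDao from the question; the service class and method names are hypothetical:
@Service
public class ProgrammaticTxService {
    private final TransactionTemplate transactionTemplate;
    private final HumanResourcesDao hrDao;

    @Autowired
    public ProgrammaticTxService(@Qualifier("txName") PlatformTransactionManager txManager,
            HumanResourcesDao hrDao) {
        this.transactionTemplate = new TransactionTemplate(txManager);
        this.hrDao = hrDao;
    }

    public void generateAndMaybeCommit(int rowsNumber, boolean doCommit) {
        transactionTemplate.execute(status -> {
            // Everything in this callback runs in one Spring-managed transaction.
            hrDao.generateData(rowsNumber);
            if (!doCommit) {
                // Mark for rollback instead of committing when the callback returns.
                status.setRollbackOnly();
            }
            return null;
        });
    }
}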
@Configuration
@EnableTransactionManagement
@ComponentScan(basePackages="org.saat")
@PropertySource(value="classpath:resources/db.properties",ignoreResourceNotFound=true)
public class AppConfig {
@Autowired
private Environment env;
@Bean(name="dataSource")
public DataSource getDataSource() {
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName(env.getProperty("db.driver"));
dataSource.setUrl(env.getProperty("db.url"));
dataSource.setUsername(env.getProperty("db.username"));
dataSource.setPassword(env.getProperty("db.password"));
return dataSource;
}
@Bean(name="entityManagerFactoryBean")
public LocalContainerEntityManagerFactoryBean getSessionFactory() {
LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean ();
factoryBean.setDataSource(getDataSource());
factoryBean.setPackagesToScan("org.saat");
factoryBean.setJpaVendorAdapter(getJpaVendorAdapter());
Properties props=new Properties();
props.put("hibernate.dialect", env.getProperty("hibernate.dialect"));
props.put("hibernate.hbm2ddl.auto",env.getProperty("hibernate.hbm2ddl.auto"));
props.put("hibernate.show_sql",env.getProperty("hibernate.show_sql"));
factoryBean.setJpaProperties(props);
return factoryBean;
}
@Bean(name="transactionManager")
public JpaTransactionManager getTransactionManager() {
JpaTransactionManager jpatransactionManager = new JpaTransactionManager();
jpatransactionManager.setEntityManagerFactory(getSessionFactory().getObject());
return jpatransactionManager;
}
@Bean
public JpaVendorAdapter getJpaVendorAdapter() {
HibernateJpaVendorAdapter hibernateJpaVendorAdapter = new HibernateJpaVendorAdapter();
return hibernateJpaVendorAdapter;
}
}
I'm looking to create a sample project, while learning Guice, which uses JDBC to read/write to a SQL database. However, after years of using Spring and letting it abstract away connection handling and transactions, I'm struggling to work it out conceptually.
I'd like to have a service which starts and stops a transaction and calls numerous repositories which reuse the same connection and participate in the same transaction. My questions are:
Where do I create my Datasource?
How do I give the repositories access to the connection? (ThreadLocal?)
Best way to manage the transaction (Creating an Interceptor for an annotation?)
The code below shows how I would do this in Spring. The JdbcOperations injected into each repository would have access to the connection associated with the active transaction.
I haven't been able to find many tutorials which cover this, beyond ones which show creating interceptors for transactions.
I am happy to continue using Spring as it is working very well in my projects, but I'd like to know how to do this in pure Guice and JDBC (no JPA/Hibernate/Warp/reusing Spring).
@Service
public class MyService implements MyInterface {
@Autowired
private RepositoryA repositoryA;
@Autowired
private RepositoryB repositoryB;
@Autowired
private RepositoryC repositoryC;
@Override
@Transactional
public void doSomeWork() {
this.repositoryA.someInsert();
this.repositoryB.someUpdate();
this.repositoryC.someSelect();
}
}
@Repository
public class MyRepositoryA implements RepositoryA {
@Autowired
private JdbcOperations jdbcOperations;
@Override
public void someInsert() {
//use jdbcOperations to perform an insert
}
}
@Repository
public class MyRepositoryB implements RepositoryB {
@Autowired
private JdbcOperations jdbcOperations;
@Override
public void someUpdate() {
//use jdbcOperations to perform an update
}
}
@Repository
public class MyRepositoryC implements RepositoryC {
@Autowired
private JdbcOperations jdbcOperations;
@Override
public String someSelect() {
//use jdbcOperations to perform a select and use a RowMapper to produce results
return "select result";
}
}
If your database changes infrequently, you could use the data source that comes with the database's JDBC driver and isolate the calls to the 3rd party library in a provider (my example uses the one provided by the H2 database, but all JDBC providers should have one). If you change to a different implementation of the DataSource (e.g. c3p0, Apache DBCP, or one provided by an app server container), you can simply write a new Provider implementation to get the datasource from the appropriate place. Here I've used singleton scope to allow the DataSource instance to be shared amongst the classes that depend on it (necessary for pooling).
public class DataSourceModule extends AbstractModule {
@Override
protected void configure() {
Names.bindProperties(binder(), loadProperties());
bind(DataSource.class).toProvider(H2DataSourceProvider.class).in(Scopes.SINGLETON);
bind(MyService.class);
}
static class H2DataSourceProvider implements Provider<DataSource> {
private final String url;
private final String username;
private final String password;
// @Inject is needed so Guice can construct this provider with the @Named properties bound in configure().
@Inject
public H2DataSourceProvider(@Named("url") final String url,
@Named("username") final String username,
@Named("password") final String password) {
this.url = url;
this.username = username;
this.password = password;
}
@Override
public DataSource get() {
final JdbcDataSource dataSource = new JdbcDataSource();
dataSource.setURL(url);
dataSource.setUser(username);
dataSource.setPassword(password);
return dataSource;
}
}
static class MyService {
private final DataSource dataSource;
@Inject
public MyService(final DataSource dataSource) {
this.dataSource = dataSource;
}
public void singleUnitOfWork() {
Connection cn = null;
try {
cn = dataSource.getConnection();
// Use the connection
} finally {
try {
cn.close();
} catch (Exception e) {}
}
}
}
private Properties loadProperties() {
// Load properties from appropriate place...
// should contain definitions for:
// url=...
// username=...
// password=...
return new Properties();
}
}
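A brief usage sketch for the module above (assuming the bootstrap class lives in the same package, since MyService is package-private):
public class Bootstrap {
    public static void main(String[] args) {
        // Build the injector from the module; the DataSource binding is a shared singleton.
        Injector injector = Guice.createInjector(new DataSourceModule());
        DataSourceModule.MyService service = injector.getInstance(DataSourceModule.MyService.class);
        service.singleUnitOfWork();
    }
}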
To handle transactions, a transaction-aware data source should be used. I wouldn't recommend implementing this manually; use something like warp-persist or container-supplied transaction management. However, it would look something like this:
public class TxModule extends AbstractModule {
@Override
protected void configure() {
Names.bindProperties(binder(), loadProperties());
final TransactionManager tm = getTransactionManager();
bind(DataSource.class).annotatedWith(Real.class).toProvider(H2DataSourceProvider.class).in(Scopes.SINGLETON);
bind(DataSource.class).annotatedWith(TxAware.class).to(TxAwareDataSource.class).in(Scopes.SINGLETON);
bind(TransactionManager.class).toInstance(tm);
bindInterceptor(Matchers.any(), Matchers.annotatedWith(Transactional.class), new TxMethodInterceptor(tm));
bind(MyService.class);
}
private TransactionManager getTransactionManager() {
// Get the transaction manager
return null;
}
static class TxMethodInterceptor implements MethodInterceptor {
private final TransactionManager tm;
public TxMethodInterceptor(final TransactionManager tm) {
this.tm = tm;
}
@Override
public Object invoke(final MethodInvocation invocation) throws Throwable {
// Start tx if necessary
return invocation.proceed();
// Commit tx if started here.
}
}
static class TxAwareDataSource implements DataSource {
static ThreadLocal<Connection> txConnection = new ThreadLocal<Connection>();
private final DataSource ds;
private final TransactionManager tm;
@Inject
public TxAwareDataSource(@Real final DataSource ds, final TransactionManager tm) {
this.ds = ds;
this.tm = tm;
}
public Connection getConnection() throws SQLException {
try {
final Transaction transaction = tm.getTransaction();
if (transaction != null && transaction.getStatus() == Status.STATUS_ACTIVE) {
Connection cn = txConnection.get();
if (cn == null) {
cn = new TxAwareConnection(ds.getConnection());
txConnection.set(cn);
}
return cn;
} else {
return ds.getConnection();
}
} catch (final SystemException e) {
throw new SQLException(e);
}
}
// Omitted delegate methods.
}
static class TxAwareConnection implements Connection {
private final Connection cn;
public TxAwareConnection(final Connection cn) {
this.cn = cn;
}
public void close() throws SQLException {
try {
cn.close();
} finally {
TxAwareDataSource.txConnection.set(null);
}
}
// Omitted delegate methods.
}
static class MyService {
private final DataSource dataSource;
@Inject
public MyService(@TxAware final DataSource dataSource) {
this.dataSource = dataSource;
}
@Transactional
public void singleUnitOfWork() {
Connection cn = null;
try {
cn = dataSource.getConnection();
// Use the connection
} catch (final SQLException e) {
throw new RuntimeException(e);
} finally {
try {
cn.close();
} catch (final Exception e) {}
}
}
}
}
I would use something like c3p0 to create data sources directly. If you use ComboPooledDataSource you only need one instance (pooling is done under the covers), which you can bind directly or through a provider.
Then I'd create an interceptor on top of that, one that e.g. picks up @Transactional and manages a connection and commit/rollback. You could make Connection injectable as well, but you need to make sure you close the connections somewhere to allow them to be checked back into the pool.
To inject a data source, you probably don't need to be bound to a single data source instance, since the database you are connecting to features in the URL. Using Guice, it is possible to force programmers to provide a binding to a DataSource implementation (link). This data source can be injected into a ConnectionProvider to return a data source.
The connection has to be in a thread-local scope. You can even implement your own thread-local scope, but all thread-local connections must be closed and removed from the ThreadLocal object after commit or rollback operations to prevent memory leaks. After hacking around, I found that you need a hook to the Injector object to remove the ThreadLocal elements. An injector can easily be injected into your Guice AOP interceptor, something like this:
protected void visitThreadLocalScope(Injector injector,
DefaultBindingScopingVisitor<Void> visitor) {
if (injector == null) {
return;
}
for (Map.Entry<Key<?>, Binding<?>> entry :
injector.getBindings().entrySet()) {
final Binding<?> binding = entry.getValue();
// Not interested in the return value as yet.
binding.acceptScopingVisitor(visitor);
}
}
/**
* Default implementation that exits the thread local scope. This is
* essential to clean up and prevent any memory leakage.
*
* The scope is only visited iff the scope is an sub class of or is an
* instance of {@link ThreadLocalScope}.
*/
private static final class ExitingThreadLocalScopeVisitor
extends DefaultBindingScopingVisitor<Void> {
@Override
public Void visitScope(Scope scope) {
// ThreadLocalScope is the custom scope.
if (ThreadLocalScope.class.isAssignableFrom(scope.getClass())) {
ThreadLocalScope threadLocalScope = (ThreadLocalScope) scope;
threadLocalScope.exit();
}
return null;
}
}
Make sure you call this after the method has been invoked and the connection has been closed. Try this to see if it works.
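For illustration, a hedged sketch of how that hook could be wired into a Guice AOP interceptor; it assumes the visitThreadLocalScope method and ExitingThreadLocalScopeVisitor shown above are members of this same interceptor class, and the actual commit/rollback handling is elided:
public class ThreadLocalCleanupInterceptor implements MethodInterceptor {
    // Guice can inject the Injector itself into an AOP interceptor (e.g. via requestInjection).
    @Inject
    private Injector injector;

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        try {
            // Run the intercepted (e.g. transactional) method.
            return invocation.proceed();
        } finally {
            // Exit the thread-local scope afterwards so connections are removed from the ThreadLocal.
            visitThreadLocalScope(injector, new ExitingThreadLocalScopeVisitor());
        }
    }
}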
Please check the solution I provided: Transactions with Guice and JDBC - Solution discussion
It is just a very basic version and a simple approach, but it works just fine for handling transactions with Guice and JDBC.