I'm using Oracle Spatial, and I have a table with an SDO_GEOMETRY column.
The table is mapped to a JPA entity. I want the SDO_GEOMETRY column mapped to a Java oracle.spatial.geometry.JGeometry type.
I figured I should use a JPA converter to convert to and from java.sql.Struct (or maybe oracle.sql.STRUCT).
The problem is that the JGeometry method that converts to a Struct, JGeometry.storeJS(Connection conn, JGeometry geom), takes the JDBC connection as a parameter.
The Spring EntityManagerFactory is configured with the persistence unit name, the persistence unit contains the data source JNDI name, and the data source is defined in Tomcat as a connection pool.
Any idea how I can get the Connection in the converter?
This is what I want to achieve:
@Converter(autoApply = true)
public class GeometryConverter implements AttributeConverter<JGeometry, Struct> {

    @Override
    public Struct convertToDatabaseColumn(JGeometry geometry) {
        // How to get this connection ?
        return JGeometry.storeJS(connection, geometry);
    }

    @Override
    public JGeometry convertToEntityAttribute(Struct struct) {
        try {
            return JGeometry.loadJS(struct);
        } catch (SQLException e) {
            throw new RuntimeException("Failed to convert geometry", e);
        }
    }
}
I am using Spring 4, spring-data-jpa 1.6, Hibernate 4, Tomcat 8, Oracle 12c.
Updated with more info:
Spring configuration:
@Configuration
@EnableJpaRepositories("com.package.repository")
@EnableTransactionManagement
@ComponentScan("com.package")
public class SpringConfig {

    @Bean(name = "entityManagerFactory", destroyMethod = "destroy")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setPersistenceUnitName("persistence-unit");
        return emf;
    }

    @Bean(name = "transactionManager")
    public JpaTransactionManager getTransactionManager() {
        return new JpaTransactionManager();
    }
}
If you use Spring and you need both JPA and JDBC, you should:
- construct a DataSource bean and do the connection pooling there (or get it from JNDI (*))
- inject that DataSource into one of the Spring helpers for building the EntityManagerFactory (such as LocalContainerEntityManagerFactoryBean)
- inject that DataSource into any bean where you want to do direct JDBC
That way you can use JPA for your normal DAOs and still have access to JDBC in the special parts, without too strong a dependence on the internals of your JPA provider.
EDIT:
(*) If your data source is defined by a JNDI name, all is fine: just expose it as a bean.
If using Spring's XML schema-based configuration, set it up in the Spring context like this:
<xmlns:jee="http://www.springframework.org/schema/jee"
xsi:schemaLocation="http://www.springframework.org/schema/jee
http://www.springframework.org/schema/jee/spring-jee-3.2.xsd">
...
<jee:jndi-lookup id="dbDataSource"
jndi-name="jdbc/DatabaseName"
expected-type="javax.sql.DataSource" />
Alternatively, set it up using simple bean configuration like this:

<bean id="dbDataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jdbc/DatabaseName"/>
</bean>
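Since the question uses Java configuration rather than XML, a rough equivalent sketch inside SpringConfig might look like this (reusing the jdbc/DatabaseName name from the XML example):

@Bean
public DataSource dbDataSource() {
    // JndiDataSourceLookup falls back to the "java:comp/env/" prefix automatically
    return new JndiDataSourceLookup().getDataSource("jdbc/DatabaseName");
}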
As you are using a JpaTransactionManager, there will not be any problem, because as the Spring javadoc specifies: "This transaction manager also supports direct DataSource access within a transaction (i.e. plain JDBC code working with the same DataSource). This allows for mixing services which access JPA and services which use plain JDBC (without being aware of JPA)!", provided you get your Connection through DataSourceUtils.getConnection(javax.sql.DataSource).
EDIT2 :
OK, now the only problem is how to access a singleton bean from a non-bean object. A simple way to solve it is to create a holder singleton bean with a static method.

@Component // registered via component scanning (@Bean is only valid on methods)
public class DataSourceHolder implements InitializingBean {

    private DataSource dataSource;

    private static DataSourceHolder instance;

    public static DataSource getDataSource() {
        return instance.dataSource;
    }

    @Autowired
    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        DataSourceHolder.instance = this;
    }
}
Then in any object, be it a bean or not, you can use
DataSource ds = DataSourceHolder.getDataSource();
Connection con = DataSourceUtils.getConnection(ds);
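Putting it together, the converter from the question could then look like the following sketch (assuming the DataSourceHolder above; releasing the connection through DataSourceUtils matters so pooled connections are not leaked):

@Converter(autoApply = true)
public class GeometryConverter implements AttributeConverter<JGeometry, Struct> {

    @Override
    public Struct convertToDatabaseColumn(JGeometry geometry) {
        DataSource ds = DataSourceHolder.getDataSource();
        // participates in the current Spring-managed transaction, if any
        Connection con = DataSourceUtils.getConnection(ds);
        try {
            return JGeometry.storeJS(con, geometry);
        } catch (SQLException e) {
            throw new RuntimeException("Failed to convert geometry", e);
        } finally {
            DataSourceUtils.releaseConnection(con, ds);
        }
    }

    @Override
    public JGeometry convertToEntityAttribute(Struct struct) {
        try {
            return JGeometry.loadJS(struct);
        } catch (SQLException e) {
            throw new RuntimeException("Failed to convert geometry", e);
        }
    }
}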
That would be tricky to do purely through the JPA API. You would have to dig into the specific provider implementation and get hold of the DataSource object or the PersistenceUnitInfo object.
From there you can get hold of the Connection object.
Now it depends on which environment you are working in. If you are in a Java EE environment and you inject the EntityManager or its EntityManagerFactory, there is no guarantee that the returned instance is an instance of the provider's own implementation, as it may just be a proxy that implements the interface and hence bears no relation to the provider's own implementation.
In a Java SE environment, since you are the one creating the EntityManagerFactory through Persistence.createEntityManagerFactory(), you could tweak the provider in order to get the Connection.
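For example, with Hibernate as the provider (which the question uses), one provider-specific route is to unwrap the Session and borrow its connection through the Work API; this is only a sketch of that approach:

// provider-specific: unwrap the Hibernate Session behind the EntityManager
Session session = entityManager.unwrap(Session.class);
session.doWork(connection -> {
    // use the provider-managed JDBC Connection here,
    // e.g. Struct struct = JGeometry.storeJS(connection, geometry);
});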
Related
I'm developing a standalone application with the following technology stack:
- Spring Boot version 2.1.0.RELEASE
- Oracle 12c with the ojdbc6 driver (11.2.0.3)
- Apache Camel
- JPA for the main datasource
- JDBC for a secondary datasource (read only)
The JPA datasource is the primary datasource that the application itself is connected to and writes data to. The JDBC one is an additional datasource used to read data from a database with another purpose.
At runtime I encounter the following issue:
I poll/select a JPA entity from the primary datasource and do some processing. This processing includes running a select query on the secondary datasource via a JdbcTemplate. Now, if the execution of that query throws an exception, I am able to catch it, and I then want to update a status field on the JPA entity and write it to the datasource.
I've already read that Oracle tries to do a rollback if an SQLException occurs. The issue is that my JPA datasource is unable to commit the changes I make to the entity when the JDBC query fails.
It seems to me like the two datasources/transaction managers are not completely independent of each other, and that an exception in the secondary datasource causes errors in the primary datasource when committing changes.
Is this even possible? If yes, how can I configure two independent transaction managers?
EDIT:
I have already tried to annotate the respective methods and classes with @Transactional(noRollbackFor = Exception.class), but this does not solve the problem.
Here are the two Datasource configurations:
ApplicationDatasourceConfig (JPA)
@Configuration
@EnableJpaRepositories(basePackages = "foo.bar.repository.jpa",
        entityManagerFactoryRef = "applicationEntityManagerFactory",
        transactionManagerRef = "applicationTransactionManager")
@ConfigurationProperties(prefix = "spring.datasource.hikari")
@EnableTransactionManagement
public class ApplicationDatasourceConfig extends HikariConfig {

    @Bean("applicationDatasource")
    @Primary
    public DataSource applicationDataSource() {
        return new HikariDataSource(this);
    }

    @Bean("applicationDatasourceProperties")
    @Primary
    public DataSourceProperties dataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean("applicationEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean applicationEntityManagerFactory(EntityManagerFactoryBuilder builder,
            @Qualifier("applicationDatasource") DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("foo.bar.entity")
                .build();
    }

    @Bean("applicationTransactionManager")
    @Primary
    public PlatformTransactionManager applicationTransactionManager(
            @Qualifier("applicationEntityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}
SecondaryDatasourceConfig (JDBC)
@Configuration
@ConfigurationProperties(prefix = "secondary.datasource")
@EnableTransactionManagement
public class SecondaryDatasourceConfig {

    @Bean("secondaryDatasource")
    public DataSource secondaryDataSource() {
        return secondaryDataSourceProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    public DataSourceProperties secondaryDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean("secondaryTransactionManager")
    public PlatformTransactionManager secondaryTransactionManager(@Qualifier("secondaryDatasource") DataSource secondaryDataSource) {
        return new DataSourceTransactionManager(secondaryDataSource);
    }
}
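For reference, with two transaction managers defined as above, each transactional method would typically be pinned to one manager explicitly. The service below is a hypothetical sketch, purely to illustrate the wiring (MyEntity and some_table are made-up names):

@Service
public class ProcessingService {

    private final JdbcTemplate secondaryJdbcTemplate;

    public ProcessingService(@Qualifier("secondaryDatasource") DataSource secondaryDataSource) {
        this.secondaryJdbcTemplate = new JdbcTemplate(secondaryDataSource);
    }

    // runs in a JPA transaction on the primary datasource only
    @Transactional(transactionManager = "applicationTransactionManager")
    public void updateStatus(MyEntity entity, String status) {
        entity.setStatus(status);
    }

    // runs in a separate JDBC transaction on the secondary datasource; note that an
    // exception propagating out of the primary transaction still marks it rollback-only,
    // so catch the failure before it crosses the primary @Transactional boundary
    @Transactional(transactionManager = "secondaryTransactionManager", readOnly = true)
    public List<String> readSecondary() {
        return secondaryJdbcTemplate.queryForList("SELECT name FROM some_table", String.class);
    }
}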
I'm using Spring and Hibernate with an automatically generated database (for that I have set "hibernate.hbm2ddl.auto" to "update" in the JPA configuration properties).
I also have a class annotated with @Configuration and a @PostConstruct method that is called on application startup, after the database has been created or updated. This is where I set up the database with some default data if it's empty (first launch).
I would like to execute some custom native SQL queries at this moment. These queries won't return anything, they're just configuration stuff (like creating additional indexes or extensions).
Currently I'm stuck on creating a SessionFactory in order to create a new Hibernate Session. I've tried autowiring it, but it doesn't work:

@Autowired
SessionFactory sessionFactory;

This gives me: Field sessionFactory in ... required a bean of type 'org.hibernate.SessionFactory' that could not be found.
I understand that I probably need to configure it elsewhere, but I don't know where. Several answers on SO use an XML configuration file, but I'm not using any configuration file, so I can't do it that way.
Is there a way Spring can create the SessionFactory with the appropriate configuration?
You don't even need to access a SessionFactory. Just put your scripts into a file such as src/main/resources/scripts/myscript.sql. You can then do the following with Spring:
@Component
public class Startup {

    @Autowired
    private DataSource dataSource;

    @PostConstruct
    public void runNativeSql() {
        ClassPathResource resource = new ClassPathResource("scripts/myscript.sql");
        try (Connection connection = dataSource.getConnection()) {
            ScriptUtils.executeSqlScript(connection, resource);
        } catch (SQLException | ScriptException e) {
            // LOG
        }
    }
}
You can autowire the JPA EntityManager as:

@PersistenceContext
EntityManager entityManager;

If you really need a Hibernate Session and are using JPA 2.1, the Session can be obtained from the EntityManager as:

entityManager.unwrap(Session.class);
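Building on that, here is a minimal sketch of how the startup queries from the question could be run through the EntityManager (the index and table names are made up purely for illustration):

@Component
public class SchemaCustomizer {

    @PersistenceContext
    private EntityManager entityManager;

    // Calling a @Transactional method from the same bean's @PostConstruct bypasses
    // the Spring proxy, so trigger this from outside, e.g. an ApplicationRunner.
    @Transactional
    public void applyCustomSql() {
        // hypothetical index and table, for illustration only
        entityManager.createNativeQuery("CREATE INDEX idx_person_name ON person (name)")
                .executeUpdate();
    }
}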
We have a Spring Boot application that should access stored procedures in two different databases, DB2 and Oracle, through MyBatis mappers.
We have created two context classes, e.g. for DB2:
@Configuration
@MapperScan({ "...mapper.mybatis.db2" })
public class Db2Context {

    @Primary
    @Bean(name = "db2DataSource")
    public DataSource getDataSource() { ...

    @Primary
    @Bean(name = "db2SqlSessionFactory")
    public SqlSessionFactory getSqlSessionFactory() {...
The MyBatis beans look like:

public interface Db2Mapper extends MyBatisMapper<SomeType> {

    @Override
    @Select(value = ...)
    @Options(statementType = StatementType.CALLABLE)
    @Results({...})
    List<SomeType> select(Map<String, Object> parameters);
And the SqlSessionFactory beans are injected into the respective DAO classes with the appropriate qualifier, e.g.

@Repository
public class Db2Dao {

    @Autowired
    @Qualifier("db2SqlSessionFactory")
    SqlSessionFactory sqlSessionFactory;
    ...
    try (SqlSession session = sqlSessionFactory.openSession(true)) {
        Db2Mapper mapper = session.getMapper(Db2Mapper.class);
        resultSet = mapper.select(parameters);
We have the identical config, mapper, and DAO for Oracle as well, except that in that config the DataSource and SqlSessionFactory beans are not annotated with @Primary. This was necessary as described in the Spring Boot reference: http://docs.spring.io/spring-boot/docs/1.2.3.RELEASE/reference/htmlsingle/#howto-two-datasources; without it, the Spring Boot application startup would fail with a NoUniqueBeanDefinitionException.
With this configuration the Spring Boot application starts up successfully, and during startup there are even INFO log printouts indicating that both mapper classes have been successfully identified:
INFO BeanPostProcessorChecker : Bean 'db2Mapper' of type [class org.mybatis.spring.mapper.MapperFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
INFO BeanPostProcessorChecker : Bean 'oracleMapper' of type [class org.mybatis.spring.mapper.MapperFactoryBean] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
However, at runtime we have a problem. First Db2Dao is executed, and with that everything goes perfectly: the DB2 stored procedure is executed, and the retrieved results are mapped through Db2Mapper. Then comes OracleDao; however, after the Oracle SP execution the following exception is received:
ERROR Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception
[Request processing failed; ... Type interface com....mapper.mybatis.oracle.OracleMapper is not
known to the MapperRegistry.]
We have been fighting with this issue for a while now but could not find a resolution. Possibly the usage of @Primary has something to do with it, but without it we are not even able to start the application. Our research actually seems to indicate that different library versions may even behave differently; our stack is Java 1.8, Spring Boot 1.2.6, Spring 4.1.7, MyBatis 3.2.5, MyBatis-Spring 1.2.2.
First of all, I would suggest not autowiring SqlSessionFactory into your DAOs at all. In fact, you can get rid of the DAOs completely and use your mappers in the service layer as Spring beans.
So you do something like this:
public interface Db2Mapper extends MyBatisMapper<SomeType> {

    @Override
    @Select(value = ...)
    @Options(statementType = StatementType.CALLABLE)
    @Results({...})
    List<SomeType> select(Map<String, Object> parameters);
}

@Service
public class Db2Service {

    @Autowired
    private Db2Mapper db2Mapper;

    //...
}
Secondly, the key to integrating multiple datasources with mybatis-spring is the sqlSessionFactoryRef attribute of the @MapperScan annotation. With it you can narrow down which SqlSessionFactory instance is used for which @MapperScan. Something like this:
@Configuration
@MapperScan(value = { "...mapper.mybatis.db2" }, sqlSessionFactoryRef = "db2SqlSessionFactory")
public class Db2Context {

    @Primary
    @Bean(name = "db2DataSource")
    public DataSource getDataSource() { ...

    @Primary
    @Bean(name = "db2SqlSessionFactory")
    public SqlSessionFactory getSqlSessionFactory() {...

@Configuration
@MapperScan(value = { "...mapper.mybatis.other" }, sqlSessionFactoryRef = "otherSqlSessionFactory")
public class OtherContext {

    @Bean(name = "otherDataSource")
    public DataSource getDataSource() { ...

    @Bean(name = "otherSqlSessionFactory")
    public SqlSessionFactory getSqlSessionFactory() {...

Obviously you shouldn't scan the same packages with these two @MapperScan annotations.
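The bean bodies are elided above; as a rough sketch (assuming mybatis-spring's SqlSessionFactoryBean, with the DataSource itself coming from whatever pool you use), the second factory could be wired like this:

@Bean(name = "otherSqlSessionFactory")
public SqlSessionFactory getSqlSessionFactory(
        @Qualifier("otherDataSource") DataSource dataSource) throws Exception {
    // SqlSessionFactoryBean is mybatis-spring's helper for building a SqlSessionFactory
    SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
    factoryBean.setDataSource(dataSource);
    return factoryBean.getObject();
}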
Maybe what I'm going to ask is a silly question: what I want to know is whether it is possible for a Spring MVC configuration to have two entityManagerFactory beans. I will explain why.
I have one LocalContainerEntityManagerFactoryBean where I configure a hibernate.tenant_identifier_resolver, which I use to determine the tenant through LDAP using the user's session and then use one database schema or another; I then use multi_tenant_connection_provider to create the database connection for that schema.
Now my application has a scheduler that needs access to all schemas to get some information from all the databases. So, in order not to touch the entityManagerFactory already configured, what I was thinking was to create a new one with my own implementation of hibernate.tenant_identifier_resolver, to control which schema I want instead of going through LDAP, before creating the database connection via multi_tenant_connection_provider.
The problem is that Spring does not seem to allow me to configure two entityManagerFactory beans.
Can you give me any advice on how to achieve what I want?
Regards!
Yes, it is possible to use multiple EntityManagers.
In my project I use annotation-based configuration, where I have:
@Configuration
@EnableTransactionManagement
public class AppConfig {

    @Bean
    public SessionFactory smartDataSessionFactory() {
        return new LocalSessionFactoryBuilder(smartDataDatasource())
                .scanPackages("...")
                .addProperties(smartDataHibernateProperties())
                .buildSessionFactory();
    }

    @Bean
    public SessionFactory analysisSessionFactory() {
        return new LocalSessionFactoryBuilder(analysisDatasource())
                .scanPackages("...")
                .addProperties(analysisHibernateProperties())
                .buildSessionFactory();
    }

    ...
}
When referencing the entity managers, be sure to use the @Qualifier annotation.
Also note that each SessionFactory will use its own transaction manager:
@Repository
@Transactional(value = "analysisTransactionManager")
public class ToURemunerationDaoImpl implements ToURemunerationDao {

    private SessionFactory analysisSessionFactory;
    private SessionFactory smartDataSessionFactory;

    @Autowired
    @Qualifier("analysisSessionFactory")
    public void setAnalysisSessionFactory(SessionFactory sessionFactory) {
        this.analysisSessionFactory = sessionFactory;
    }

    @Autowired
    @Qualifier("smartDataSessionFactory")
    public void setSmartDataSessionFactory(SessionFactory sessionFactory) {
        this.smartDataSessionFactory = sessionFactory;
    }

    ...
}
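The analysisTransactionManager referenced in @Transactional above is not shown; a sketch of how one transaction manager per SessionFactory could be declared (using Spring's HibernateTransactionManager) is:

@Bean
public PlatformTransactionManager analysisTransactionManager(
        @Qualifier("analysisSessionFactory") SessionFactory sessionFactory) {
    // one transaction manager per SessionFactory; referenced by name in @Transactional
    return new HibernateTransactionManager(sessionFactory);
}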
Finally I found the solution. The issue was that I'm using Spring Data, and my repositories did not know which EntityManagerFactory to use. As soon as I specified which one to use during the scan, everything worked like a charm:

<repositories base-package="com.*.*.repository**" entity-manager-factory-ref="entityManagerFactory"/>
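If you use annotation-based configuration instead of XML, the equivalent is the entityManagerFactoryRef attribute of @EnableJpaRepositories (the package name below is hypothetical):

@Configuration
@EnableJpaRepositories(
        basePackages = "com.example.repository",
        entityManagerFactoryRef = "entityManagerFactory") // which EMF these repositories bind to
public class RepositoryConfig {
}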
I'm trying to get Spring Batch 2.2 working with JavaConfig.
Nowadays they have an @EnableBatchProcessing annotation that sets up a lot of things.
By default that annotation uses a DataSource for its job data, but we don't want to save this data and don't want to create the tables for it. The documentation says something about customizing, but I have not been able to get it working:

The user has to provide a DataSource as a bean in the context, or else implement BatchConfigurer in the configuration class itself, e.g.:

public class AppConfig extends DefaultBatchConfigurer {

In our older version we were able to use the MapJobRepositoryFactoryBean class so that it keeps all its data in memory. Is there any way to use the full JavaConfig approach and not define a DataSource? I've not been able to get it working.
Even if I define two data sources (one in-memory HSQL that never gets used and our real Oracle datasource), it does not work, because it finds two data sources instead of one.
Does anyone have an idea how to get this working? Or is the only solution going back to configuring this the XML way?
Assuming that no other artifacts require a DataSource, you can use Java config to create a context without one. To do that, your configuration will need to extend DefaultBatchConfigurer, as you point out. In it, you'll override two methods: createJobRepository() and setDataSource(). Below is an example context (it doesn't define a job or steps, but it bootstraps all the related beans correctly).
@Configuration
@EnableBatchProcessing
public static class BatchConfiguration extends DefaultBatchConfigurer {

    @Override
    protected JobRepository createJobRepository() throws Exception {
        // keeps all job metadata in memory instead of a database
        MapJobRepositoryFactoryBean factory = new MapJobRepositoryFactoryBean();
        factory.afterPropertiesSet();
        return (JobRepository) factory.getObject();
    }

    @Override
    @Autowired
    public void setDataSource(DataSource dataSource) {
        if (dataSource != null) {
            super.setDataSource(dataSource);
        }
    }

    @Bean
    public DataSource dataSource() {
        return null;
    }
}
I do think that simplifying this would be a useful feature and have added it to JIRA. You can track its progress here: https://jira.springsource.org/browse/BATCH-2048
Just define a dataSource() method in your BatchConfig class.
Here is how:
@Bean
public DataSource dataSource() {
    // driverClassName, driverUrl, driverUsername and driverPassword are assumed to be
    // fields on the config class, e.g. injected from properties with @Value
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(driverClassName);
    dataSource.setUrl(driverUrl);
    dataSource.setUsername(driverUsername);
    dataSource.setPassword(driverPassword);
    return dataSource;
}
This will automatically be invoked while setting up the TransactionManager.
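If, as in the question, you don't want to touch a real database at all, another option is a throwaway in-memory DataSource built with Spring's EmbeddedDatabaseBuilder, initialized with the metadata schema script that ships inside spring-batch-core (this sketch assumes HSQLDB is on the classpath):

@Bean
public DataSource dataSource() {
    // in-memory HSQL database pre-loaded with Spring Batch's metadata tables
    return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.HSQL)
            .addScript("classpath:org/springframework/batch/core/schema-hsqldb.sql")
            .build();
}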