Main question - this may seem like a basic Flyway question, and I might have (somehow) missed this during my research, but: is it possible to access an application's services (Spring-configured) when migrating data using Flyway? A few details below -
Additional details -
I know we cannot inject Spring Data services etc. (learnt from this
SO question), and I understand this from a data-access point of view.
But can we not inject any other application services either while
using Flyway? (I searched for examples, but without luck, and the
Flyway documentation gives no details either.)
Let us say we cannot use any Spring services (and I find some way to
work around that) - can we at least access properties declared in
application.properties / .yml? (This does not appear possible either.)
Putting the above in the context of our requirement - we have added a couple of new fields to a few tables, and as part of the release we want to populate those columns with data. This requires us (or Flyway) to execute the following algorithm -
1. Get data from the first table.
2. Using some of the data from each row, look up more data with an API call. The URL of the API is environment-specific (hence the third point above).
3. Update the data returned from the API into the newly added columns.
4. Repeat the steps above for the next table.
P.S. - I know adding columns that depend on other columns in the same table is not in accordance with third normal form etc., but for reasons outside the scope of this post, it is required.
Tech Stack -
Spring Boot 1.3.x
Flyway 4.0.3
Using Java migrations
A few examples I tried are below.
My Flyway migration class is as follows:
public class R__MigrationYeah implements SpringJdbcMigration {
    @Value("${mypath.subpath}") // this does not work!
    private String someStringIwannaUse;

    @Inject // this does not work either (even with @Autowired or constructor injection)!
    private MyService myService;

    @Override
    public void migrate(JdbcTemplate jdbcTemplate) throws Exception {
        // migration logic that would need someStringIwannaUse and myService
    }
}
I have seen some posts/blogs with complicated details on how to configure Flyway's MigrationResolver or ConfigurationAware etc., and I am not sure whether they solve this problem (even if they do, it is a LOT of work just to write a quick migration script - is this the only way?).
Finally - I know I'm missing something, because if we have to write Flyway Java code without being able to use ANY existing application classes through Spring, then it is no different from writing an independent migration project (and therefore no value is added by Flyway other than making a DB connection available) - I'm sure this cannot be the case.
Any help on this would be great!
It is not possible to use dependency injection in a Flyway migration.
The next version of Flyway will support dependency injection of Spring beans; see the GitHub issue for more details. A workaround for the current version is available on Stack Overflow.
I think you want to use Flyway more dynamically than it is designed for.
Basically, it is just for the DB schema: you can do anything SQL can do, but since
Flyway does its job in a repeatable, reliable, step-by-step way, you wouldn't want
any real business data in it. Flyway uses the static scripts you provide; you
can't have them change dynamically over time (it would protest with a checksum mismatch) or via API calls.
For that kind of thing you can create your own Spring Boot app and use Flyway through its Java API - something along these lines:
import org.flywaydb.core.Flyway;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Import;

@SpringBootApplication
@Import(ServiceConfig.class)
public class FlyWayApp implements CommandLineRunner {

    public static void main(String[] args) {
        SpringApplication.run(FlyWayApp.class, args);
    }

    @Value("${mypath.subpath}")
    private String someStringIwannaUse;

    @Autowired
    private MyService myService;

    @Override
    public void run(String... args) throws Exception {
        // Create the Flyway instance (Flyway 4.x API)
        Flyway flyway = new Flyway();
        // Point it to the database
        flyway.setDataSource("jdbc:h2:file:./target/foobar", "sa", null);
        // Fetch data and create the migration scripts Flyway will run
        myService.createMigrationScripts();
        // Start the migration
        flyway.migrate();
    }
}
I'm using both R2DBC and Liquibase in the same application. However, Liquibase is not able to run migrations over R2DBC, so I need a separate JDBC driver just for it.
I followed the solution here and used Testcontainers for testing, so my application-test.yaml looks exactly like this:
spring:
liquibase:
url: jdbc:tc:postgresql:14.1:///testdb
r2dbc:
url: r2dbc:tc:postgresql:///testdb?TC_IMAGE_TAG=14.1
The wiring works: migrations are launched and then the queries are able to run. The problem is that this starts two different containers! The migrations run against one of them and the queries against the other, so the queries find the database empty.
Is there any way I can tell Testcontainers to use the same container for both connections?
When you use Testcontainers' JDBC support, which you configure by adding tc to the JDBC URL, the lifecycle of the container is managed automatically.
Since you have two different URLs instrumented like that, you get two containers.
Instead, you can choose a different way to manage the lifecycle, one that gives you more control.
You can either do it yourself, by creating a container instance and calling start()/stop() (see the sketch below), or use the JUnit integration, which ties the container lifecycle to the test lifecycle.
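For instance, the manual variant might look like this sketch (the image tag, class name, and wiring are illustrative, not from the original answer):

import org.testcontainers.containers.PostgreSQLContainer;

public class SharedContainerBootstrap {

    public static void main(String[] args) {
        // One container instance, shared by the JDBC (Liquibase) and R2DBC connections.
        PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:14.1");
        postgres.start();
        try {
            String jdbcUrl = postgres.getJdbcUrl(); // hand this to spring.liquibase.url
            String r2dbcUrl = "r2dbc:postgresql://" + postgres.getHost() + ":"
                    + postgres.getFirstMappedPort() + "/" + postgres.getDatabaseName();
            // ... pass both URLs (plus username/password) to the application config ...
        } finally {
            postgres.stop(); // explicit teardown, since Testcontainers no longer manages it
        }
    }
}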
For example, with JUnit 5 you mark your class with @Testcontainers and the fields with @Container, something like:
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class MixedLifecycleTests {

    @Container
    private static PostgreSQLContainer postgresqlContainer = new PostgreSQLContainer();
}
Since you're working on a Spring application, you want to configure it to use the container; for that, use @DynamicPropertySource: https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/test/context/DynamicPropertySource.html
In a nutshell, you mark a static method with it and, inside it, configure Spring to use the database in the container:
@DynamicPropertySource
static void databaseProperties(DynamicPropertyRegistry registry) {
    registry.add("spring.datasource.url", postgresqlContainer::getJdbcUrl);
    registry.add("spring.datasource.username", postgresqlContainer::getUsername);
    registry.add("spring.datasource.password", postgresqlContainer::getPassword);
    registry.add("spring.r2dbc.url", () -> "r2dbc:postgresql://"
            + postgresqlContainer.getHost() + ":" + postgresqlContainer.getFirstMappedPort()
            + "/" + postgresqlContainer.getDatabaseName());
    registry.add("spring.r2dbc.username", postgresqlContainer::getUsername);
    registry.add("spring.r2dbc.password", postgresqlContainer::getPassword);
}
Note that since your app uses R2DBC while Liquibase works over non-reactive JDBC, you should configure both.
I use spring-boot-starter-data-solr and would like to make use of the schema creation support of Spring Data Solr, as stated in the documentation:
Automatic schema population will inspect your domain types whenever the applications context is refreshed and populate new fields to your index based on the properties configuration. This requires solr to run in Schemaless Mode.
However, I am not able to achieve this. As far as I can see, the Spring Boot starter does not enable the schemaCreationSupport flag on the @EnableSolrRepositories annotation. So I tried the following:
@SpringBootApplication
@EnableSolrRepositories(schemaCreationSupport = true)
public class MyApplication {

    @Bean
    public SolrOperations solrTemplate(SolrClient solr) {
        return new SolrTemplate(solr);
    }
}
But looking in Wireshark I cannot see any calls to the Solr Schema API when saving new entities through the repository.
Is this intended to work, or what am I missing? I am using Solr 6.2.0 with Spring Boot 1.4.1.
I've run into the same problem. After some debugging, I've found the root cause of why the schema creation (or update) is not happening at all:
With the @EnableSolrRepositories annotation, a Spring extension adds a factory bean to the context that creates the SolrTemplate used in the repositories. This template initialises a SolrPersistentEntitySchemaCreator, which should do the creation/update.
public void afterPropertiesSet() {
    if (this.mappingContext == null) {
        this.mappingContext = new SimpleSolrMappingContext(
                new SolrPersistentEntitySchemaCreator(this.solrClientFactory)
                        .enable(this.schemaCreationFeatures));
    }
    // ...
}
The problem is that the flag schemaCreationFeatures (which enables the creator) is set after the factory calls afterPropertiesSet(), so it is impossible for the creator to do its work.
I'll create an issue in the spring-data-solr issue tracker. I don't see any workaround right now, other than having a custom fork/build of spring-data, or extending a bunch of Spring classes and trying to get the flag set beforehand (though I doubt that can be done).
I have a Spring web service application with Oracle as the database. Right now I have a datasource created using the WebLogic server, and I am using EclipseLink JPA for both read and write transactions (insert, read and update). Now we want to separate the dataSources for read and write (insert or update) transactions.
My current dataSource is as follows:
JNDI NAME : jdbc/POI_DS
URL : jdbc:oracle:thin:@localhost:1521:XE
Using this, I am doing both read and write transactions.
What if I do the following:
JNDI NAME : jdbc/POI_DS_READ
URL : jdbc:oracle:thin:@localhost:1521:XE
JNDI NAME : jdbc/POI_DS_WRITE
URL : jdbc:oracle:thin:@localhost:1521:XE
I know that using an XA datasource we can define multiple dataSources. Can I do the same thing without an XA datasource? Has anyone tried this kind of approach?
::UPDATE::
Thank you all for your responses. I have implemented the following solution.
I have taken the multiple-database approach, where you define multiple transactionManagers and an EntityManagerFactory. I have taken only a single non-XA dataSource (JNDI), which is referenced in the EntityManagerFactory bean.
You can refer to the following links about multiple dataSources:
Multiple DataSource Approach
Defining @Transactional value
I also explored the transaction managers org.springframework.transaction.jta.WebLogicJtaTransactionManager and org.springframework.orm.jpa.JpaTransactionManager.
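In outline, the wiring looks something like the following sketch (bean names and the persistence setup are illustrative; services then select a manager with @Transactional("jpaTxManager") or @Transactional("jtaTxManager")):

import javax.naming.NamingException;
import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jndi.JndiTemplate;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.jta.WebLogicJtaTransactionManager;

@Configuration
@EnableTransactionManagement
public class PersistenceConfig {

    @Bean
    public DataSource dataSource() throws NamingException {
        // the single non-XA datasource, looked up from WebLogic JNDI
        return (DataSource) new JndiTemplate().lookup("jdbc/POI_DS");
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        return emf;
    }

    // local JPA transactions, selected with @Transactional("jpaTxManager")
    @Bean
    public PlatformTransactionManager jpaTxManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }

    // WebLogic-managed JTA transactions, selected with @Transactional("jtaTxManager")
    @Bean
    public PlatformTransactionManager jtaTxManager() {
        return new WebLogicJtaTransactionManager();
    }
}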
There is an interesting article about this in the Spring docs - Dynamic DataSource Routing. There is an example there that allows you to switch data sources at runtime; it should help you. I'd gladly help you more if you have any more specific questions.
EDIT: The article says the typical use is connecting to multiple databases via one configuration, but you could just as well create different configs for one database with different params, as you need here.
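In outline, the routing mechanism from that article looks like the following sketch (the ThreadLocal holder and key values are illustrative; the two JNDI datasources from the question would be registered as the target datasources under these keys):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Routes each getConnection() call to one of the configured target
// datasources based on a thread-bound key ("READ" or "WRITE" here).
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> CURRENT_KEY = new ThreadLocal<String>();

    // Call this before a read-only or read-write operation to pick the datasource.
    public static void setKey(String key) {
        CURRENT_KEY.set(key);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // null falls back to the default target datasource
        return CURRENT_KEY.get();
    }
}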
I would suggest using database "services". Each workload, read-only and read-write, would use its own service to access the database. That way you can use AWR reports to get statistics per service. You can also turn off the read-write service while keeping the read-only one up and running.
Here is a pointer to the Oracle Database documentation that talks about Services:
https://docs.oracle.com/database/121/ADMIN/create.htm#CIABBCAI
If you're using Spring, you should be able to accomplish this without two datasources, via Spring's @Transactional with the readOnly property set to true. I suggest this because you seem to be concerned only with transactionality, and that seems to be catered for by the Spring framework.
I'd suggest something like this for your case:
@Transactional(readOnly = true)
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName) {
        // do something
    }

    // these settings have precedence for this method
    @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW)
    public void updateFoo(Foo foo) {
        // do something
    }
}
Using this style, you should be able to split read-only services from their write counterparts, or even combine read and write service methods. Either way, this does not use two datasources.
The code is from the Spring Reference.
I am pretty sure that you need to address the problem at the database / connection URL + properties layer.
I would google around for something like read/write replication.
Regarding your question about JPA and transactions: you are doomed when using multiple datasources, and XA datasources are not really a solution for that either. The only thing they do for you is ensure consistency across multi-datasource operations: an XA transaction merely spans a logical transaction over two physical transactions (one per datasource). From the transaction-isolation point of view (as long as you're not using READ_UNCOMMITTED), each datasource still uses its own transaction, which means the read datasource would not see the changes made by the write transaction.
I am using Spring and Hibernate. My application has three modules, and each module has its own database, so the application deals with three databases. On server startup, if any one of the databases is down, the server does not start. My requirement is that even if one of the databases is down, the server should still start: the other modules' databases are up, so the user can work with the other two modules. Please suggest how I can achieve this.
I am using Spring 3.x and Hibernate 3.x, with c3p0 connection pooling.
The app server is Tomcat.
Thanks!
I would use the @Configuration annotation to make an object whose job it is to construct the beans and deal with the DB-down scenario. When constructing the beans, test whether the DB connections are up; if not, return a dummy version of the bean. This will get injected into the relevant objects. The job of this dummy bean is really just to throw an "unavailable" exception when called. If your app can deal with these unavailable exceptions for certain functions, show that to the user, and continue to function where the other datasources are used, you should be fine.
@Configuration
public class DataAccessConfiguration {

    @Bean
    public DataSource dataSource() {
        try {
            // create the data source to your database
            ....
            return realDataSource;
        } catch (Exception e) {
            // create a dummy data source instead
            ....
            return dummyDataSource;
        }
    }
}
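One way to flesh out the dummy branch without implementing the whole JDBC DataSource interface is sketched below; it assumes Spring's AbstractDataSource (which stubs the remaining interface methods), and the constructor and message are illustrative:

import java.sql.Connection;
import java.sql.SQLException;
import org.springframework.jdbc.datasource.AbstractDataSource;

// Fallback datasource: every attempt to obtain a connection fails fast
// with a clear "unavailable" error the calling module can catch and surface.
public class UnavailableDataSource extends AbstractDataSource {

    private final String moduleName;

    public UnavailableDataSource(String moduleName) {
        this.moduleName = moduleName;
    }

    @Override
    public Connection getConnection() throws SQLException {
        throw new SQLException("Database for module '" + moduleName + "' is unavailable");
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return getConnection();
    }
}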
This was originally a comment:
Have you tried it? You wouldn't know whether a database is down until you connect to it, so unless c3p0 prevalidates all its connections, you wouldn't know that a particular database is down until you try to use it. By that time your application will have already started.
I want to set up my database with initial data programmatically. I want to populate my database for development runs, not for test runs (that part is easy). The product is built on top of Spring and JPA/Hibernate.
Developer checks out the project
Developer runs a command/script to set up the database with initial data
Developer starts the application (server) and begins developing/testing
then:
Developer runs a command/script to flush the database and set it up with new initial data, because the database structure or the initial data bundle has changed
What I want is to set up the required parts of my environment so that I can call my DAOs and insert new objects into the database. I do not want to create initial data sets in raw SQL or XML, take database dumps, or whatever. I want to programmatically create objects and persist them in the database as I would in normal application logic.
One way to accomplish this would be to start up my application normally and run a special servlet that does the initialization. But is that really the way to go? I would love to execute the initial data setup as a Maven task, and I don't know how to do that if I take the servlet approach.
There is a somewhat similar question. I took a quick glance at the suggested DBUnit and Unitils, but they seem heavily focused on setting up testing environments, which is not what I want here. DBUnit does initial data population, but only from XML/CSV fixtures, which is not what I'm after either. Maven has an SQL plugin, but I don't want to handle raw SQL. Maven also has a Hibernate plugin, but it seems to help only with Hibernate configuration and table schema creation (not with populating the DB with data).
How to do this?
Partial solution 2010-03-19
Suggested alternatives are:
Using unit tests to populate the database #2423663
Using a ServletContextListener to hook into web context startup #2424943 and #2423874
Using Spring ApplicationListener and Spring's Standard and Custom Events #2423874
I implemented this using Spring's ApplicationListener:
Class:
public class ApplicationContextListener implements ApplicationListener {

    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ContextRefreshedEvent) {
            // ...check if the database is already populated; if not, populate it...
        }
    }
}
applicationContext.xml:
<bean id="applicationContextListener" class="my.namespaces.ApplicationContextListener" />
For some reason I couldn't get ContextStartedEvent fired, so I chose ContextRefreshedEvent, which is fired at startup as well (I haven't bumped into other situations yet).
How do I flush the database? Currently I simply remove the HSQLDB artifacts, and a new, empty schema is generated by Hibernate on startup.
You can write a unit test to populate the database, using JPA and plain Java. This test would be called by Maven as part of the standard build lifecycle.
As a result, you would get a fully initialized database, using Maven, JPA and Java as requested.
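For illustration, such a seeding "test" might look like the sketch below (the context file, DAO, and entity are placeholders for your own classes):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:applicationContext.xml")
public class PopulateDatabaseTest {

    @Autowired
    private MyDao myDao; // hypothetical DAO from the application

    @Test
    public void populateInitialData() {
        // plain Java + JPA: create objects and persist them through the DAO
        myDao.save(new MyEntity("initial data"));
    }
}

Since the method carries no @Transactional test annotation, nothing is rolled back afterwards; you might also bind the class to a dedicated Maven profile so it only runs when you actually want to seed the database.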
The usual way to do this is with an SQL script: you run a specific bash file that populates the DB using your .sql file.
If you want to set your DB up programmatically during webapp startup, you can use a web context listener. During the initialization of your web context, a ServletContextListener can get access to your DAO (service layer, whatever), create your entities, and persist them as you usually do in your Java code.
P.S. As a reference: Servlet Life Cycle
If you use Spring, you should have a look at the Standard and Custom Events section of the Reference. That's a better way to implement a "Spring listener" that is aware of Spring's context (in case you need to retrieve your services from it).
You could create JPA entities in a pure Java class and persist them. This class could be invoked by a servlet, but it could also have a main method and be invoked on the command line, by Maven (with the Exec Maven Plugin; see the sketch below), or even wrapped as a Maven plugin.
But your final workflow is not clear (do you want the init to be part of application startup, or done during the build?) and requires some clarification.
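For illustration, the plain-Java variant might look like the sketch below (context file, DAO, and entity are placeholders); with the Exec Maven Plugin it could then be run via something like mvn exec:java -Dexec.mainClass=DatabaseSeeder.

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class DatabaseSeeder {

    public static void main(String[] args) {
        // bootstrap the same Spring context the application uses
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");
        try {
            MyDao dao = ctx.getBean(MyDao.class);   // hypothetical DAO
            dao.save(new MyEntity("initial data")); // persist via JPA, no raw SQL
        } finally {
            ctx.close();
        }
    }
}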
I would use a @Singleton bean for that:
import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class InitData {

    @PostConstruct
    public void load() {
        // Load your data here.
    }
}
It depends on your DB. It is better to have a script to set up the DB.
In the aforementioned ServletContextListener, or in a common startup place, put all of the following steps:
Define your data in an agreeable format - XML, JSON, or even Java serialization.
Check whether the initial data exists (or a flag indicating a successful initial import).
If it exists, skip; if it does not, get your DAO (using WebApplicationContextUtils.getRequiredWebApplicationContext().getBean(..)), iterate over all predefined objects, and persist them in the database via the EntityManager, as in the sketch below.
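A sketch of the last two steps inside such a listener (the DAO, guard method, and entity are hypothetical):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

public class InitialDataListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        WebApplicationContext ctx = WebApplicationContextUtils
                .getRequiredWebApplicationContext(sce.getServletContext());
        MyDao dao = ctx.getBean(MyDao.class); // hypothetical DAO
        if (dao.initialDataExists()) {        // hypothetical guard
            return;                           // already imported, skip
        }
        dao.save(new MyEntity("initial data")); // persist predefined objects
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}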
I'm having the same problem. I've tried using an init-method on the bean, but that runs on the raw bean without AOP and thus cannot use @Transactional. The same seems to go for @PostConstruct and the other bean lifecycle mechanisms.
Given that, I switched to an ApplicationListener with ContextRefreshedEvent; however, in this case @PersistenceContext fails to get an entity manager:
javax.persistence.PersistenceException: org.hibernate.SessionException: Session is closed!
at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:630)
at org.hibernate.ejb.QueryImpl.getSingleResult(QueryImpl.java:108)
Using Spring 2.0.8, JPA 1, Hibernate 3.0.5.
I'm tempted to create a non-Spring-managed EntityManagerFactory and do everything directly, but I fear that would interfere with the rest of the Spring-managed entity and transaction managers.
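One programmatic angle that might be worth trying (a sketch only, untested against that exact Spring 2.0.8 setup): drive the transaction yourself with a TransactionTemplate inside the listener, so no AOP proxy is involved; the entity is hypothetical.

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public class TransactionalDataInitializer implements ApplicationListener {

    @PersistenceContext
    private EntityManager entityManager;

    private final TransactionTemplate txTemplate;

    public TransactionalDataInitializer(PlatformTransactionManager txManager) {
        this.txTemplate = new TransactionTemplate(txManager);
    }

    public void onApplicationEvent(ApplicationEvent event) {
        if (!(event instanceof ContextRefreshedEvent)) {
            return;
        }
        // programmatic transaction instead of @Transactional, no proxy required
        txTemplate.execute(new TransactionCallbackWithoutResult() {
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                entityManager.persist(new MyEntity("initial data")); // hypothetical entity
            }
        });
    }
}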
I'm not sure you can get away without some SQL. It depends on whether your developers are starting with an empty database with no schema defined, or whether the tables are there but empty.
If you are starting with empty tables, then you could use a Java approach to generate the data. I'm not that familiar with Maven, but I assume you can create a task that uses your DAO classes to generate the data. You could probably even write it in a JVM-based scripting language like Groovy, which could use your DAO classes directly. You would have a similar task to clear the data from the tables. Your developers would then run these tasks on the command line, or through their IDE, as a manual step after checkout.
If you have a fresh database instance, I think you will need to execute some SQL just to create the schema. You could technically do that by executing SQL calls through Hibernate, but that really doesn't seem worth it.
Found this ServletContextListener example by mkyong. Quoting the article:
You want to initialize the database connection pool before the web
application starts - is there a "main()" method in the whole web
application?
This sounds to me like the right place to put code that inserts initial DB data.
I tested this approach to insert some initial data for my webapp; it works.
I found some interesting code in this repository: https://github.com/resilient-data-systems/fast-stateless-api-authentication
This works pretty neatly in my project:
@Component
@DependsOn({ "dataSource" })
public class SampleDataPopulator {

    private final static Logger log = LoggerFactory.getLogger(SampleDataPopulator.class);

    @Inject
    MyRepository myRepository;

    @PostConstruct
    public void populateSampleData() {
        MyItem item = new MyItem();
        myRepository.save(item);
        log.info("Populated DB with sample data");
    }
}
You can put a file called data.sql in src/main/resources; it will be read and executed automatically on startup. See this tutorial.
The other answers did not work for me.