Pre-populate a MongoDB database with Maven (Java)

I want to pre-populate a MongoDB database in a Maven Java project with some data for integration-testing purposes. I can do this easily by running a script in the mongo shell manually (./mongo server:port/database --quiet my_script.js), but how can I do it with Maven?

Is the MongoDB database running before the tests are run? Or do you start the server as part of your tests?
In either case, I think it should be possible to use the DbUnit Maven plugin to load an external file with test data. Take a look at http://mojo.codehaus.org/dbunit-maven-plugin/operation-mojo.html

You could use the Mongo Java API to run your script against the database with the DB.doEval() method. If you already have a running database, connecting and running the script could be done as part of your test setup.
That said, for some of my projects I am using Maven with the embedmongo Maven plugin, which lets a MongoDB instance be created on the fly for integration testing. That, combined with your script, might be an alternative solution that does not rely on an existing MongoDB server being available.
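Roughly, that combination could look like the sketch below, using the flapdoodle embedded-MongoDB API (which the embedmongo plugin wraps) together with the legacy Java driver. The port, database name and script path are assumptions, and note that server-side eval is deprecated and removed in recent MongoDB releases, so this only works with the legacy driver and older server versions:

    import java.nio.file.Files;
    import java.nio.file.Paths;

    import com.mongodb.DB;
    import com.mongodb.MongoClient;

    import de.flapdoodle.embed.mongo.MongodExecutable;
    import de.flapdoodle.embed.mongo.MongodProcess;
    import de.flapdoodle.embed.mongo.MongodStarter;
    import de.flapdoodle.embed.mongo.config.MongodConfigBuilder;
    import de.flapdoodle.embed.mongo.config.Net;
    import de.flapdoodle.embed.mongo.distribution.Version;

    public class MongoTestSetup {
        public static void main(String[] args) throws Exception {
            // Spin up a throwaway mongod; port is an assumption.
            MongodExecutable exe = MongodStarter.getDefaultInstance()
                    .prepare(new MongodConfigBuilder()
                            .version(Version.Main.PRODUCTION)
                            .net(new Net(27017, false))
                            .build());
            MongodProcess mongod = exe.start();
            try {
                MongoClient client = new MongoClient("localhost", 27017);
                DB db = client.getDB("testdb");
                // Evaluate the same JS you run in the shell today.
                String script = new String(Files.readAllBytes(
                        Paths.get("src/test/resources/my_script.js")));
                db.doEval(script);
                client.close();
            } finally {
                mongod.stop();
                exe.stop();
            }
        }
    }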

You can now use embedmongo-maven-plugin to achieve this. It has a mongoimport goal to run mongoimport and a mongo-scripts goal to evaluate arbitrary JavaScript scripts.

Related

DynamoDB best practices to set up the database

I have a Java microservice and I would like to use DynamoDB as its database.
I have multiple profiles: dev, test, stage and prod.
For the internal profiles (dev and test) I would like to use a local database based on the dynamodb-local Docker image.
My idea is to automate the setup process for the test profile (because I'll use Jenkins to run the tests for CI/CD) as well as for the dev profile, to avoid running a lot of commands manually (I would like to run just a setup script that creates the database, the tables and the indexes).
I saw that, after initializing the database with Docker Compose, it's possible to use the AWS CLI to create the tables, indexes, etc.
The idea is to run these commands from a bash script to set everything up.
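For illustration, here is a minimal sketch of that table-creation step using the AWS SDK for Java v2 against DynamoDB Local; the endpoint, table name and key are assumptions:

    import java.net.URI;

    import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
    import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
    import software.amazon.awssdk.services.dynamodb.model.*;

    public class CreateTables {
        public static void main(String[] args) {
            // DynamoDB Local accepts any credentials; the endpoint assumes
            // the usual docker-compose port mapping of 8000.
            DynamoDbClient client = DynamoDbClient.builder()
                    .endpointOverride(URI.create("http://localhost:8000"))
                    .region(Region.EU_WEST_1)
                    .credentialsProvider(StaticCredentialsProvider.create(
                            AwsBasicCredentials.create("dummy", "dummy")))
                    .build();

            // Table and key names are illustrative.
            client.createTable(CreateTableRequest.builder()
                    .tableName("orders")
                    .attributeDefinitions(AttributeDefinition.builder()
                            .attributeName("id")
                            .attributeType(ScalarAttributeType.S)
                            .build())
                    .keySchema(KeySchemaElement.builder()
                            .attributeName("id")
                            .keyType(KeyType.HASH)
                            .build())
                    .billingMode(BillingMode.PAY_PER_REQUEST)
                    .build());
        }
    }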
I also saw that there is this project: https://github.com/derjust/spring-data-dynamodb
The idea there is to reuse the Hibernate approach: you write just the entity Java classes and get automatic creation/update of the database tables/indexes.
What I don't like about this approach is that it is not an 'official' project; that is my only concern.
For the production environment, the idea is to integrate this configuration step into the CloudFormation script.
Could this be a good way, or is it better to configure everything using the AWS console?
Honestly, this is the first time I'm using DynamoDB and I don't have a clear idea of the best practices.
Can you point me to some good examples?

Integration tests against different databases

I am working on spwrap, which simplifies calling stored procedures.
So far I have written a couple of automated integration tests against HSQLDB, which provides an in-memory database mode (they run on Travis CI).
I still need to write integration tests against other DBMSs, for example MySQL, SQL Server and Oracle.
According to this answer, I can use MariaDB4j for in-memory MySQL testing.
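For reference, MariaDB4j boots an embedded MariaDB roughly like the sketch below; the port and database name are assumptions:

    import ch.vorburger.mariadb4j.DB;

    public class MariaDb4jExample {
        public static void main(String[] args) throws Exception {
            // Starts an embedded MariaDB on the given port, acting as a
            // throwaway stand-in for MySQL during tests.
            DB db = DB.newEmbeddedDB(3306);
            db.start();
            db.createDB("testdb");
            // ... run tests against jdbc:mariadb://localhost:3306/testdb ...
            db.stop();
        }
    }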
But what about the other DBMSs, in particular SQL Server and Oracle?
UPDATE: Is HSQLDB's database compatibility sufficient?
One possible solution is to run the databases with Docker / Docker Compose. The Compose documentation describes automated testing environments:

An important part of any Continuous Deployment or Continuous Integration process is the automated test suite. Automated end-to-end testing requires an environment in which to run tests. Compose provides a convenient way to create and destroy isolated testing environments for your test suite. By defining the full environment in a Compose file you can create and destroy these environments in just a few commands.
https://docs.docker.com/compose/overview/#automated-testing-environments
By the way, Travis CI supports Docker.
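A test against such a Compose-provided database is then ordinary JDBC; as a minimal sketch (the environment-variable names, the SQL Server URL and the credentials below are assumptions, and the matching JDBC driver must be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class SqlServerIT {
        public static void main(String[] args) throws Exception {
            // The Compose file is assumed to publish the database on a known
            // host/port; these env var names are illustrative, not standard.
            String url  = System.getenv().getOrDefault("DB_URL",
                    "jdbc:sqlserver://localhost:1433;databaseName=testdb");
            String user = System.getenv().getOrDefault("DB_USER", "sa");
            String pass = System.getenv().getOrDefault("DB_PASS", "Passw0rd!");
            try (Connection conn = DriverManager.getConnection(url, user, pass);
                 ResultSet rs = conn.createStatement().executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("connected: " + rs.getInt(1));
            }
        }
    }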

How to handle DCL in Flyway migration scripts?

I have a Postgres database that already has users and roles defined. There are multiple schemas in this database, all controlled via different projects/Flyway scripts. I am working on adding Flyway integration into a new project where we will use an embedded Postgres instance for testing.
Since none of these users/roles will exist on that instance, they need to be created in a migration script. However, since the users/roles already exist in my operational databases, the migrations will fail when they attempt to create the roles.
I have already considered writing a function for this, but then the function would have to be included in every project that uses the embedded Postgres and maintained across multiple code bases, which seems very sloppy. Can anyone recommend a way to handle these DCL operations with Flyway that works with the embedded approach as well as with my operational databases?
In a previous project we used a set of additional Flyway migration scripts for exactly this; we added those scripts to the test environment's classpath.
We used this with a Flyway version that predates the callback and repeatable-migration features.
The second solution is to add a callback configuration for your test environment and create your users and roles in a beforeMigrate or afterMigrate callback.
The third solution is to use repeatable migration scripts for your user and role setup (see https://flywaydb.org/documentation/migration/repeatable) and to use those scripts in both production and test. In this case, however, your SQL must be written so that it is genuinely repeatable, otherwise you will break your production environment.
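For the first approach, here is a minimal sketch using Flyway's Java API; the db/testdata location (present only on the test classpath) and the connection details are assumptions:

    import org.flywaydb.core.Flyway;

    public class TestMigrations {
        public static void main(String[] args) {
            // db/migration holds the shared production migrations;
            // db/testdata exists only on the test classpath and creates the
            // users/roles that already exist in the operational databases.
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:postgresql://localhost:5432/testdb",
                            "postgres", "postgres")
                    .locations("classpath:db/migration", "classpath:db/testdata")
                    .load();
            flyway.migrate();
        }
    }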

Spring Groovy Integration

I am a newbie to Spring and Groovy integration.
I have a somewhat unusual requirement; please help me achieve it.
I have Groovy scripts stored in a database, and at some point in a flow I want to fetch a script from the database and execute it.
The problem is that I want access to the Spring container inside that script, so that I can call any bean of the Spring container from the Groovy script.
So the question is how to make the Spring container pick up Groovy scripts from a database. Scripts also get added to and removed from the database over time, and I don't want to restart the application when a new script is added.
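One possible direction (a sketch, not Spring's built-in scripted-bean support; the table layout and the 'ctx' variable name are assumptions) is to load the script text on demand and evaluate it with a GroovyShell that has the ApplicationContext bound in. Because the script body is fetched at call time, newly added scripts are picked up without a restart:

    import groovy.lang.Binding;
    import groovy.lang.GroovyShell;
    import org.springframework.beans.BeansException;
    import org.springframework.context.ApplicationContext;
    import org.springframework.context.ApplicationContextAware;
    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class ScriptRunner implements ApplicationContextAware {

        private final JdbcTemplate jdbcTemplate;
        private ApplicationContext ctx;

        public ScriptRunner(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        @Override
        public void setApplicationContext(ApplicationContext ctx) throws BeansException {
            this.ctx = ctx;
        }

        public Object run(String scriptName) {
            // Table and column names are assumptions.
            String body = jdbcTemplate.queryForObject(
                    "select body from groovy_scripts where name = ?",
                    String.class, scriptName);

            // Expose the Spring container to the script as 'ctx';
            // inside the script: def myBean = ctx.getBean('myBean')
            Binding binding = new Binding();
            binding.setVariable("ctx", ctx);
            return new GroovyShell(binding).evaluate(body);
        }
    }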

Java Embedded Derby Database Plugin?

I am developing a very large-scale J2EE application, and we chose Derby as an embedded database for JUnit testing, since hitting the actual prod database would slow down our tests. When I bootstrap my application, Derby creates all the tables so I can run JDBC queries against it. It works fine, but the drawback is that I cannot query any of the tables except through JDBC calls at runtime, so if I need to change my queries I have to stop the app, modify my query statements, then restart the application and run in debug. This makes analyzing complex queries very difficult. Does anyone know of some kind of Derby plugin that would let me query the DB without going through my Java code?
If you are using Maven for your build, you can use the derby-maven-plugin, which I wrote and which is available on GitHub and via Maven Central. It takes care of starting and stopping the database around your tests. You will need to populate the database yourself, of course. You will also have the database in your target/derby folder after the tests execute, so you can always query the data yourself afterwards. This helps you work in a separate development environment that doesn't affect the production database.
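Since the leftover folder is an ordinary Derby database, any JDBC-capable SQL client can open it; as a minimal sketch (the database path and table name are assumptions, and derby.jar must be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InspectDerby {
        public static void main(String[] args) throws Exception {
            // Point this at the folder the plugin leaves behind after the
            // integration tests have run.
            try (Connection conn = DriverManager.getConnection("jdbc:derby:target/derby/myDB");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM my_table")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }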
You can check here for my answer to a similar question.
