I have a Java microservice and I would like to use DynamoDB as a database.
I have multiple profiles: dev, test, stage and prod.
For the internal profiles (dev and test) I would like to use a local database based on the DynamoDB Local Docker image.
My idea is to automate the setup for the test profile (Jenkins will run the tests as part of CI/CD) and for the dev profile as well, to avoid running a lot of commands by hand: I would like to run a single setup script that creates the database, the tables and the indexes.
I saw that, after starting the database with Docker Compose, it's possible to use the AWS CLI to create the tables, indexes, etc.
The idea is to run these commands from a bash script in order to set everything up.
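If you would rather keep the whole setup in Java instead of shelling out to the CLI, the same table and index creation can also be driven through the AWS SDK pointed at DynamoDB Local. A minimal sketch, assuming AWS SDK v2 on the classpath, DynamoDB Local listening on port 8000, and a hypothetical "orders" table with one GSI (all of these names are placeholders, not part of your project):

import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeDefinition;
import software.amazon.awssdk.services.dynamodb.model.CreateTableRequest;
import software.amazon.awssdk.services.dynamodb.model.GlobalSecondaryIndex;
import software.amazon.awssdk.services.dynamodb.model.KeySchemaElement;
import software.amazon.awssdk.services.dynamodb.model.KeyType;
import software.amazon.awssdk.services.dynamodb.model.Projection;
import software.amazon.awssdk.services.dynamodb.model.ProjectionType;
import software.amazon.awssdk.services.dynamodb.model.ProvisionedThroughput;
import software.amazon.awssdk.services.dynamodb.model.ScalarAttributeType;

public class LocalDynamoDbSetup {

    public static void main(String[] args) {
        // DynamoDB Local ignores credentials, but the SDK still requires some.
        DynamoDbClient client = DynamoDbClient.builder()
                .endpointOverride(URI.create("http://localhost:8000"))
                .region(Region.EU_WEST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("dummy", "dummy")))
                .build();

        // Hypothetical "orders" table: partition key customerId, sort key orderId,
        // plus a GSI for looking orders up by orderId alone.
        client.createTable(CreateTableRequest.builder()
                .tableName("orders")
                .attributeDefinitions(
                        AttributeDefinition.builder()
                                .attributeName("customerId").attributeType(ScalarAttributeType.S).build(),
                        AttributeDefinition.builder()
                                .attributeName("orderId").attributeType(ScalarAttributeType.S).build())
                .keySchema(
                        KeySchemaElement.builder().attributeName("customerId").keyType(KeyType.HASH).build(),
                        KeySchemaElement.builder().attributeName("orderId").keyType(KeyType.RANGE).build())
                .globalSecondaryIndexes(GlobalSecondaryIndex.builder()
                        .indexName("orderId-index")
                        .keySchema(KeySchemaElement.builder()
                                .attributeName("orderId").keyType(KeyType.HASH).build())
                        .projection(Projection.builder().projectionType(ProjectionType.ALL).build())
                        .provisionedThroughput(ProvisionedThroughput.builder()
                                .readCapacityUnits(5L).writeCapacityUnits(5L).build())
                        .build())
                .provisionedThroughput(ProvisionedThroughput.builder()
                        .readCapacityUnits(5L).writeCapacityUnits(5L).build())
                .build());
    }
}

The same class could be run from the setup script for the dev profile and from the test bootstrap on Jenkins, so both environments are created the same way.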
I also saw that there is this project https://github.com/derjust/spring-data-dynamodb
The idea is similar to Hibernate: you just write the Java entity classes and get automatic creation/update of the database tables and indexes.
I don't particularly like this approach, though, simply because it is not an 'official' project; that's my only concern.
For the production environment, the idea is to integrate this configuration step into the CloudFormation template.
Could this be a good approach, or is it better to configure everything through the AWS console?
Honestly, this is the first time I'm using DynamoDB and I don't have a clear idea of the best practices.
Can you point me to some good examples?
Related
I have a Spring + Hibernate application that uses a Postgres database. I need to write unit tests for the controllers. For the tests I wanted to use an H2 database, but unfortunately the test crashes during create-drop with an error saying that the bpchar data type is invalid. I wonder how to solve this issue so I can run the tests.
I can't change my bpchar columns to varchar; they need to stay as they are. I also tried setting PostgreSQL compatibility mode, but it didn't help.
Am I right that the only solution is to use an embedded Postgres database for the tests, or is there another approach I could use?
You are trying to use a Postgres-specific data type with H2, which does not have it. Of course it does not work.
If you cannot change the type of this field, use embedded Postgres in your tests.
Actually, you can add this to your application.properties to let H2 know about the type:
spring.datasource.url=jdbc:h2:mem:testdb;INIT=CREATE TYPE BPCHAR AS CHARACTER NOT NULL
Also make sure the automatic test database replacement is turned off for your test, so that Spring does not swap in its own embedded database instead of the one configured above. You can do this by adding:
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class MyTestForTableWithBpcharColumn {
    // ...
}
One interesting approach to this issue is Testcontainers.
Postgres doesn't have an embedded mode, but you can use the aforementioned framework to start a Docker container before the test, create a schema, and apply migrations if you're using something like Flyway or Liquibase, or integrate your own custom solution.
The idea is that the container will be prepared and available to the test when it runs.
After the test finishes (regardless of the actual result, success or failure) you can stop the container.
Firing up the container can be quite expensive (a matter of seconds). However, you can take advantage of Spring's test context caching: the container is started when the first test in the module runs and is then reused across tests and test cases, since the application context doesn't get restarted.
Keeping the database clean between tests also becomes trivial thanks to Spring's @Transactional annotation: put it on a test case and Spring rolls back the transaction after each test. Since in Postgres even DDL commands can be transactional, this should be good enough.
The only limitation of this approach is that Docker has to be available on the build machine, and on your local development machine if you're planning to run these tests locally (on Linux and macOS this is not a problem, but on Windows you need at least Windows 10 Professional to be able to install the Docker environment).
I've used this approach in real projects and found it very effective for integration testing.
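To make the wiring concrete, here is a minimal sketch of such a test with JUnit 5, Spring Boot and the Testcontainers Postgres module; the OrderRepository, the image tag and the test body are placeholders, not part of the original project:

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.springframework.transaction.annotation.Transactional;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@Testcontainers
@Transactional // Spring rolls the transaction back after every test
class OrderRepositoryIT {

    // static container: started once and shared by all tests in this class
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15-alpine");

    // point the Spring datasource at the container before the context starts
    @DynamicPropertySource
    static void datasourceProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Autowired
    OrderRepository orders; // hypothetical repository under test

    @Test
    void savesAndReadsBackAnOrder() {
        // exercise the repository against the real Postgres instance here
    }
}

The @Transactional annotation on the class is what gives you the per-test rollback described above.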
I have a Postgres database that has users and roles already defined. There are multiple schemas in this database that are all controlled via different projects/Flyway scripts. I am working on adding Flyway integration into a new project where we will use an embedded Postgres instance for testing.
Since none of these users/roles will exist on this instance, they need to be created in a migration script. However, since these users/roles will already exist in my operational databases, the migrations will fail when they attempt to create the roles.
I already considered writing a function for this, but then the function would have to be included in any project that uses the embedded Postgres and would have to be maintained across multiple code bases. This seems very sloppy. Can anyone recommend a way for me to handle these DCL operations using Flyway that will work with the embedded approach as well as my operational databases?
In a previous project we used a set of additional Flyway migration scripts for this. These scripts are added to the test environment classpath.
We used this with a Flyway version from before callbacks and repeatable migrations were added.
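A minimal sketch of that approach using Flyway's Java API; the extra db/testmigration location (holding the user/role creation scripts that exist only on the test classpath) and the connection parameters are placeholders:

import org.flywaydb.core.Flyway;

public class TestDatabaseMigrator {

    // Runs the shared migrations plus the test-only scripts
    // (e.g. CREATE ROLE statements) for the embedded instance.
    public static void migrate(String jdbcUrl, String user, String password) {
        Flyway.configure()
                .dataSource(jdbcUrl, user, password)
                .locations("classpath:db/migration", "classpath:db/testmigration")
                .load()
                .migrate();
    }
}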
A second option: add a callback configuration for your test environment and create your users and roles in the before- or after-migration phase.
A third solution: use repeatable migration scripts for your user and role setup (see https://flywaydb.org/documentation/migration/repeatable). Use these scripts in both production and test. In this case, however, your SQL must be written to be correct and repeatable, otherwise you will break your production environment.
I am developing a very large scale J2EE application and we chose to use Derby as an embedded database for JUnit testing, since hitting the actual prod database would slow down our tests. When I bootstrap my application, Derby creates all the tables so I can run JDBC queries against it. It works fine, but the drawback is that I cannot actually query any of the tables except through JDBC calls at runtime, so if I need to make changes to my queries, I need to stop the app, modify my query statements, then restart the application and run in debug. This process makes it very difficult to analyze complex queries. Does anyone know of some kind of Derby plugin that can help me query the DB without going through my Java code?
If you are using Maven for your build, you can use the derby-maven-plugin, which I wrote and which is available on GitHub and via Maven Central. It will take care of starting and stopping the database for you around your tests. You will need to populate this database yourself, of course. You will also have the database in your target/derby folder after the tests execute, so you can always query the data yourself afterwards. This will help you work in a separate development environment which doesn't affect the production database.
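If you only need to inspect that leftover data, a small sketch of querying it over plain JDBC after the build; the database name under target/derby and the query are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DerbyInspection {

    public static void main(String[] args) throws Exception {
        // Open the embedded database left under target/derby after the tests
        // (the Derby embedded driver must be on the classpath).
        try (Connection conn = DriverManager.getConnection("jdbc:derby:target/derby/testdb");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM SOME_TABLE")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}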
You can check here for my answer to a similar question.
I want to pre-populate a MongoDB database in a Maven Java project with some data for integration-testing purposes. I can easily achieve this by executing a script in the mongo shell manually (./mongo server:port/database --quiet my_script.js), but how can I do this using Maven?
Is the MongoDB database running before the tests are run? Or do you start the server as part of your tests?
In either case, I think it would be possible to use the dbunit-maven-plugin to load an external file with test data. Take a look at http://mojo.codehaus.org/dbunit-maven-plugin/operation-mojo.html
You could use the Mongo Java API to run your script against the database with the DB.doEval() method. If you already have a running database, then connecting and running the script could be done as part of your test setup.
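As a rough sketch with the legacy Mongo Java driver (host, port, database name and script path are placeholders; note that server-side eval is deprecated and has been removed from recent MongoDB server versions):

import java.nio.file.Files;
import java.nio.file.Paths;

import com.mongodb.DB;
import com.mongodb.MongoClient;

public class MongoTestDataLoader {

    public static void loadTestData() throws Exception {
        // Read the same script you would normally pass to the mongo shell.
        String script = new String(Files.readAllBytes(Paths.get("src/test/resources/my_script.js")));

        MongoClient client = new MongoClient("localhost", 27017);
        try {
            DB db = client.getDB("integration_test");
            db.doEval(script); // evaluate the script on the server
        } finally {
            client.close();
        }
    }
}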
That said, for some of my projects I am using Maven with the embedmongo Maven plugin, which allows a Mongo database to be created on the fly for integration testing. That, combined with your script, might be an alternative solution that does not rely on an existing Mongo database being available.
You can now use the embedmongo-maven-plugin to achieve this. It has a mongoimport goal to run mongoimport, and a mongo-scripts goal to evaluate arbitrary JS scripts.
We have a Java portal connected to a MySQL database containing about 70 tables.
When we prepare a new client on it, we test it on a DEV server and, if everything works,
we apply the same configuration on PRODUCTION.
We want to build a simple tool to export this configuration from DEV and import it into PRODUCTION, to avoid doing it by hand every time.
We are thinking about doing this with REST: GET from DEV and POST to PRODUCTION.
This configuration involves about 7-8 tables.
What do you recommend? Do you think REST is the best choice?
I think REST is a bit of a strange choice for this, as you would need to build and maintain client and server software to handle the uploads, and have it installed correctly on both machines.
I would use an automated secure copy (SCP) script to copy your build artefacts.