Code-first-like approach in Dropwizard Migrations (Liquibase) - Java

Currently I'm working on a small web service using Dropwizard, connecting to a PostgreSQL DB using Hibernate (the package built into Dropwizard) and a bit of Migrations (also from Dropwizard).
Coming from a .NET environment, I'm used to a code-first/centric approach.
Now I'm looking into generating the migrations.xml from the current state of my entity classes, based on the JPA annotations on them.
I feel this is a case somebody might already have solved.
Is there a way to automatically update the migrations.xml based on the classes I'm writing?

It is possible. See the liquibase-hibernate plugin at https://github.com/liquibase/liquibase-hibernate/wiki.
Make sure you review the generated migrations.xml changes before applying them because, like any diff-based process, the schema transformation may not be what you intended, and with data that matters. For example, if you rename a class, the diff will produce a drop-and-create rather than a rename operation. The result is a valid schema, but you lose the data in that table.
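
As a hedged sketch of how the plugin is typically wired up (the package name, credentials and dialect below are assumptions, not from the original question): point Liquibase's reference URL at your annotated entity classes and diff them against the live database.

# liquibase.properties (hypothetical values)
url=jdbc:postgresql://localhost:5432/mydb
username=dbuser
password=dbpass
referenceUrl=hibernate:spring:com.example.entities?dialect=org.hibernate.dialect.PostgreSQLDialect
changeLogFile=migrations.xml

Running liquibase diffChangeLog then appends the detected differences to migrations.xml as new change sets, which you can review before applying.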

Related

Flyway custom migration

I have encountered a problem with Flyway and I don't know how to resolve it.
I have a template SQL script that creates all tables and basic data. This script has already been migrated on a few production servers, and when I want to make small changes to the template (add more data, for example), there is of course a checksum mismatch. So the idea is to not store a checksum for this base template migration, similar to Flyway's own schema-creation migration.
Should I create a custom MigrationExecutor? If that's the correct approach, could you provide a basic example of how to use a custom executor, and how to get information about the currently running migration from within it? I couldn't find anything about creating custom callbacks, resolvers or executors except Flyway's docs, which don't give me any idea how to get what I want.
The Flyway version is 4.0.3.
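
One hedged sketch of a possible direction, under the assumption that Flyway 4's Java migration API applies here: a Java-based migration carries no checksum unless it implements MigrationChecksumProvider, so porting the base template into a JdbcMigration would sidestep the mismatch when the template changes. The class name and SQL below are hypothetical.

import java.sql.Connection;
import java.sql.Statement;
import org.flywaydb.core.api.migration.jdbc.JdbcMigration;

// Placed in a package listed in flyway.locations; Flyway picks it up by its
// V1__ naming convention. Java migrations have a null checksum by default,
// so editing this class does not trigger a checksum mismatch on validate.
public class V1__Base_template implements JdbcMigration {
    @Override
    public void migrate(Connection connection) throws Exception {
        try (Statement stmt = connection.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS example (id INT)"); // hypothetical template DDL
        }
    }
}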

Play + Ebean: Changes to the model + database

I'm currently using Java Play and persisting models through Ebean to MySQL. This is going to be a generic question: what I see is that whenever I make changes to a model (sometimes just adding a property), the existing data in the corresponding table gets truncated after the evolution script is applied.
Since I love Play and I'm thinking about deploying my next project with it, this is an important question for me: is there a workaround to make model changes safely? Or does this behaviour only occur when running the application in development mode?
I can't find much about this subject elsewhere.
That's the common approach of Ebean: it doesn't truncate your tables, it drops the whole DB and recreates it with the new DDL (see the answer to the other question for an explanation).
Note: in the meantime I found that a standalone approach, MyBatis Migrations, is a little more comfortable than Play's evolutions; either way you still need to create the migrations manually (as with evolutions), as in the sketch below.
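
For reference, a minimal sketch of a hand-written evolution that adds a column without dropping the table (the file path, table and column names are hypothetical), e.g. conf/evolutions/default/2.sql:

# --- !Ups
ALTER TABLE user ADD COLUMN nickname varchar(255);

# --- !Downs
ALTER TABLE user DROP COLUMN nickname;

Because the Ups section only alters the table, applying it preserves the existing rows, unlike a generated drop-and-recreate script.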

Is it feasible to translate table definitions used by Spring Batch?

We are going to use Spring Batch in a project that needs to read, convert and write big amounts of data. So far, everything is fine.
But there is a non-functional requirement that says we can't create DB objects using English words, so the original schema used by Spring Batch will not be approved by the client's DBA unless we translate it.
In the docs, I don't see any way to configure or extend the API to achieve this, so it seems we'll have to customize the source code to make it work with an equivalent, translated model. Is that a correct/feasible assumption, or am I missing something?
That is an unusual requirement. However, in order to completely rename the tables and columns in the batch schema, you'll need to re-implement the JDBC-based repository DAOs to use your own SQL.
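
As a partial measure (a sketch under the assumption of the JobRepositoryFactoryBean API; the prefix below is hypothetical), Spring Batch does let you change the table prefix, but the column names and the rest of each table name stay English, so a complete translation still means custom DAOs:

import javax.sql.DataSource;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;

public class TranslatedRepositoryConfig {
    // dataSource and txManager are assumed to be wired elsewhere.
    public JobRepository jobRepository(DataSource dataSource,
                                       PlatformTransactionManager txManager) throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(txManager);
        factory.setTablePrefix("LOTE_"); // hypothetical prefix; tables become LOTE_JOB_EXECUTION etc.
        factory.afterPropertiesSet();
        return (JobRepository) factory.getObject();
    }
}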

How to load initial data (or seed data) using Java JPA?

I have a JPA project and I would like to insert some initial data, on development only, so I can easily check that everything is running smoothly.
My research led me only to solutions using direct SQL scripts, but that doesn't feel right. If I'm using a framework to abstract away database details, why would I create a script for a specific database?
In the Ruby on Rails world we have the command "rake db:seed", which simply executes a file named seeds.rb that adds the initial data to the database through the abstraction layer. Is there something like that in Java?
The ideal solution I can think of would be a Maven goal that executes a Java class. Is there an easy way or a Maven plugin to do that?
I feel your pain; I too have gone wanting in Java projects for all of the perks Rails has.
That being said, there is no reason to use straight SQL. That approach is just asking for trouble: as your database schema changes during development, all the brittle SQL breaks. It is easier to manage the data if it is mapped to JPA models, which abstract the SQL interaction with the database.
What you should do is use your JPA models to seed your data. Create a component that can build the models you require and persist them. In my current project, we use SnakeYAML to serialize our models as YAML. To seed our database we deserialize the YAML into JPA models and persist them.
If the models change (variable types change, columns are removed, etc.), you have to make sure that the serialized data can still be deserialized correctly into the JPA models. The human-readable YAML format makes it easy to update the serialized models.
To actually run your seed data, bootstrap your system however you can. As @GeoorgeMcDowd said, you can use a servlet. I personally prefer to create a command-line tool by building an uberjar with a Class.main; then you just need a script that sets up the classpath and calls the Class.main to run the seed.
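
A minimal sketch of that pattern (the persistence unit "dev", the User entity and the seed file path are all hypothetical):

import java.io.IOException;
import java.io.InputStream;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.Constructor;

public class Seeder {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("dev");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        // Each "---"-separated YAML document deserializes into one User
        // (a hypothetical JPA entity in this package).
        Yaml yaml = new Yaml(new Constructor(User.class));
        try (InputStream in = Seeder.class.getResourceAsStream("/seed/users.yaml")) {
            for (Object entity : yaml.loadAll(in)) {
                em.persist(entity);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        em.getTransaction().commit();
        em.close();
        emf.close();
    }
}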
Personally, I love Maven as project metadata but find it difficult as a build tool. The following can be used to exec a Java class:
mvn exec:java -Dexec.mainClass="com.package.Main"
Just create a class and method that creates the objects and persists the data. When you fire up your application, run that method in a servlet's init. You can load your servlet at startup with the following web.xml config:
<servlet>
    <servlet-name>MyServlet1</servlet-name>
    <servlet-class>com.example.MyServlet1</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
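
A hedged sketch of what that servlet could look like (the seeding call reuses the hypothetical Seeder from the earlier sketch):

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class MyServlet1 extends HttpServlet {
    @Override
    public void init() throws ServletException {
        // Runs once at startup because of load-on-startup=1 in web.xml.
        Seeder.main(new String[0]); // hypothetical seeding entry point
    }
}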
You could model your project with Maven and write a simple test that initializes the seed data, so the only thing you need to do is run "mvn test".
Similar to amuniz's idea: have a look at DbUnit. It is a JUnit extension for populating test data into a database. It uses a simple schema-less XML format for that, sketched below, and running it from a test class via "mvn test" is a simple thing to do.
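
For illustration, a DbUnit flat XML dataset might look like this (table and column names are hypothetical):

<dataset>
    <users id="1" name="Alice"/>
    <users id="2" name="Bob"/>
</dataset>

Each element names a table and each attribute a column, which keeps the seed data independent of any specific database's SQL dialect.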
I would suggest Liquibase (http://www.liquibase.org/) for this. It has many plugins and allows you to define rollback logic for every change set (and can derive the rollback automatically in some cases).
In this case it is also important to think about the production servers and how the seed data will be moved to production.
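
A hedged sketch of what such a change set could look like, using a Liquibase context to keep development-only seed data out of production (table, column and values are hypothetical):

<changeSet id="seed-users-1" author="dev" context="dev">
    <insert tableName="users">
        <column name="name" value="Alice"/>
    </insert>
    <rollback>
        <delete tableName="users">
            <where>name = 'Alice'</where>
        </delete>
    </rollback>
</changeSet>

Running updates with the "dev" context applies the seed data; production runs simply omit that context.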

OpenJPA: Code to build entities automatically from DB

Hi, I'm looking for code or a tool to generate entities automatically. I'm not looking for software like EclipseLink's tooling, which has to be executed manually, but rather a piece of code (or a Maven plugin) that can be run automatically whenever the DB changes. (If I can auto-run EclipseLink via a cron job, that would work for me.)
Some other options:
I think Hibernate offers a reverse engineering method that can be called from a Maven build and auto-generates the entities from DB schemas. Does anyone have such a tool for OpenJPA?
Any command-line utility where you just specify the DB URL and options and the utility generates the entities; I can write a cron job to run it nightly, etc.
Any software that can be called automatically via cron and generates the entities will also do.
Update:
The OpenJPA Reverse Mapping Tool seems to do a poor job of generating proper entities with annotations, mappings and so on... I would be glad if someone corrected me.
Check out Reverse Mapping in the user manual. You can launch it from an Ant task.
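
A hedged sketch of such an Ant task (the configuration file, package, output directory and flags are assumptions; check the Reverse Mapping chapter of the manual for the exact options):

<target name="reverse-map">
    <java classname="org.apache.openjpa.jdbc.meta.ReverseMappingTool" fork="true">
        <classpath refid="openjpa.classpath"/>
        <!-- -p points at the OpenJPA configuration holding the DB connection -->
        <arg line="-p persistence.xml -pkg com.example.entities -directory src/main/java -annotations true"/>
    </java>
</target>

Since this is just a JVM invocation, the same command line could be run nightly from cron, as the question suggests.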
I doubt a fully automated tool like that can exist, simply because the job can't be done well without human intervention. How would the algorithm decide, for example, which attributes should be taken into account in equals() and hashCode()? Or whether new relations should be uni- or bidirectional? Or lazy vs. eager loading? And so on.
As you know, and as others have noted, the tools per se exist, but they are intended to be run once; you then tweak the result and work with it from then on, rather than making them part of a continuous integration process.
