OpenJPA: Code to build entities automatically from DB - java

Hi, I'm looking for code or a tool to generate entities automatically. I'm not looking for software like EclipseLink that has to be executed manually, but rather a piece of code (or a Maven plugin) that can be run automatically whenever the DB changes. (If I can autorun EclipseLink via a cron job, that would work for me.)
Some other options:
I think Hibernate offers a reverse-engineering method that can be called from a Maven build and auto-generates the entities from DB schemas. Does anyone know of such a tool for OpenJPA?
Any command-line utility where you just specify the DB URL and options and the utility generates the entities. I could then write a cron job to run the utility nightly, etc.
Any software that can be called automatically via cron and generates the entities will also do.
Update:
The OpenJPA reverse mapping tool seems to really suck at generating a proper entity with annotations, mappings and so on... I would be glad if someone corrected me.

Check out Reverse Mapping in the user manual. You can launch it from an Ant task.
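For reference, the manual's workflow is two steps: first dump the live schema to an XML file with the schema tool, then run the reverse mapping tool over it. Roughly (flags as per the OpenJPA manual; the package name is a placeholder, so double-check against your version's docs):

java org.apache.openjpa.jdbc.schema.SchemaTool -action reflect -file schema.xml
java org.apache.openjpa.jdbc.meta.ReverseMappingTool -pkg com.example.model -directory src/main/java -annotations true schema.xml

Both commands pick up connection settings from your OpenJPA configuration, so they can be dropped into a cron script or CI job as-is.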

I doubt a fully automated tool like that can exist, simply because it can't be done well without human intervention. How would the algorithm decide, for example, which attributes should be taken into account in equals() and hashCode()? Or whether new relations should be uni- or bidirectional? Lazy or eager loading? And so on.
As you know, and others have noted, the tools per se exist, but they're intended to be run once, after which you tweak the result and work with it from then on, rather than being part of a continuous integration process.

Related

Configuring database development environment along with Hibernate and Spring

We have a web-based application in the dev phase, where we use Spring 5, JPA (Hibernate) and PostgreSQL 9.4.
Until now we were using one instance of the PostgreSQL DB for our work. Basically, we don't have any schema generation script; we simply updated the DB whenever we needed a new table, column, etc. For Hibernate, we generated classes from the DB.
Now that we have some amount of test data, each change in the DB brings a lot of trouble and confusion. We realized that we need to create and start maintaining a schema generation file, along with some scripts that generate test data.
After some research, we see two options:
Create two *.sql files: the first will contain the schema generation script, the second the SQL to create test data. Then add a small module with a class that executes the *.sql files using plain JDBC. Basically, we continue developing, and whenever we make some changes we quickly wipe -> create -> populate the DB. This approach looks the most appealing to us at this point: it's quick, simple and robust. (A minimal sketch of such a runner appears after this question.)
The second is to set up a tool that can help with this, e.g. Liquibase.
This approach also looks good in terms of versioning support and other capabilities. However, we are not in production yet; we are in an active development phase. We don't have many devs making DB changes, and we are not sure how frequently we will update the DB schema in production; it could be rare.
The question is the following: would the first approach be bad practice, and would applying the second one give the most benefit and be worth using?
Would appreciate any comments or any other suggestions!
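For illustration, the first approach boils down to a tiny runner along these lines (connection details and file names are placeholders; note the naive split on ";" won't handle functions or procedures):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DbReset {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/dev_db", "dev", "dev");
             Statement st = con.createStatement()) {
            // wipe -> create -> populate, in order
            for (String file : new String[] {"drop.sql", "schema.sql", "test_data.sql"}) {
                String script = new String(Files.readAllBytes(Paths.get(file)));
                for (String sql : script.split(";")) {
                    if (!sql.trim().isEmpty()) {
                        st.execute(sql);
                    }
                }
            }
        }
    }
}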
The first approach is NOT bad practice today, but it is becoming one, considering the growth of tools like Liquibase.
If you are in the early or middle stages of the development phase, go ahead with Liquibase, along with Spring Data. Conversely, in the closing stages of the development phase, think about whether you really need it.
I would suggest the second approach, as it will automatically find new scripts as you add them and execute them on startup. Moreover, when tools like Liquibase and Flyway are available, why reinvent the wheel?
The second approach will also eliminate the unnecessary code for manually executing the *.sql files. Moreover, that code needs testing too and, when updated, can be error-prone.
Also, with the first approach, the manual code that executes the scripts has to check which scripts need to be executed: if you already have an existing database and you add some new scripts, only those new scripts should run. All of this is taken care of automatically with the second approach, and you don't need to worry about an already-executed script running again.
Hope this answers your concern. Happy coding
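If you do pick Liquibase in a plain Spring 5 setup (no Boot), wiring it in is a single bean; a minimal sketch, assuming the liquibase-core dependency is present and a changelog file is on the classpath (the changelog path is a placeholder):

import javax.sql.DataSource;
import liquibase.integration.spring.SpringLiquibase;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LiquibaseConfig {

    // Applies all pending changesets from the changelog at application startup
    @Bean
    public SpringLiquibase liquibase(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:db/changelog-master.xml");
        return liquibase;
    }
}

Liquibase records what it has run in its own DATABASECHANGELOG table, which is exactly the "which scripts have already been executed" bookkeeping described above.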

Without using hibernate.hbm2ddl.auto, how do I export all the initial schema into Flyway?

I am at the almost-ready stage of my JEE development. Given the many recommendations NOT to use Hibernate's hbm2ddl.auto in production, I decided to remove it.
So now I have found Flyway, which seems great for future DB changes and migrations, but I am stuck at the first step: I have many entities, and some inherit from base entities. This makes the CREATE statements very complex.
What is the best practice to create the first migration file?
Thanks!
If you've taken an "entities first" approach during development, you'll need to generate the initial schema the same way for the first live deployment: this will produce the first creation script used by Flyway, and there may also need to be a second, associated script for populating reference data.
In a nutshell, the reasons for no longer being able to use hbm2ddl.auto after the first deployment are that create will destroy existing data and update isn't reliable enough to cover all types of schema changes (as it sounds like you may already know from this SO question).
Flyway is a very useful tool but it does require a level of discipline that may not have existed during development. When going forward from the initial release, database update scripts need to be produced for Flyway that are equivalent to the changes made to the entities since the last release. There are tools (e.g. various commercial products from Redgate) that may help here: These attempt to "diff" two schemas and generate schema and/or data update scripts for getting from database A to database B. But in my experience, none of them are perfect and they don't quite reach the holy grail of enabling a completely automated approach.
Arguably, the best way is an "as you go" manual approach to ensure that non-destructive update scripts are committed to source control whenever an entity change is made that affects the schema or reference data - but as already mentioned, this will require some discipline and/or documented processes for all team members to follow.
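One portable way to produce that first creation script from the entities is the JPA 2.1 schema-generation API; a minimal sketch (the persistence-unit name and target file are placeholders, and depending on the provider you may also need to supply a dialect or database-product-name property):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.Persistence;

public class SchemaExport {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        // Write the DDL to a script instead of executing it against the database
        props.put("javax.persistence.schema-generation.database.action", "none");
        props.put("javax.persistence.schema-generation.scripts.action", "create");
        props.put("javax.persistence.schema-generation.scripts.create-target", "V1__init.sql");
        Persistence.generateSchema("my-unit", props);
    }
}

The output file already follows Flyway's V1__init.sql naming convention, so it can be dropped straight into the migration directory.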
For the first migration file, you just need the current DDL of your database. There are many tools that can get this for you (such as the "copy DDL" option in the IntelliJ IDEA Database tool, or a GUI client from your database vendor).
I am not sure about Flyway, but there is an alternative: you can use the Hibernate Ant tasks to generate or update the schema.
Hope it helps.
If you build your project with Maven, you could use the Hibernate Maven plugin.

Code-first like approach in Dropwizard Migrations Liquibase

Currently I'm working on a small web service using Dropwizard, connecting to a PostgreSQL DB using Hibernate (the built-in package in Dropwizard) and with a bit of Migrations (also from Dropwizard).
Coming from a .NET environment, I'm used to a code-first/centric approach.
Currently I'm looking into generating the migrations.xml from the current state of my entity classes, based on the JPA annotations on them.
I feel this is a problem somebody has already solved.
Is there a way to automatically update the migrations.xml based on the classes I'm writing?
It is possible. See the liquibase-hibernate plugin at https://github.com/liquibase/liquibase-hibernate/wiki.
Make sure you look at the generated migrations.xml changes before applying them because, like any diff-based process, the schema transformation may not be what you intended and that matters with data. For example, if you rename a class it will generate a drop + create process rather than a rename operation. The result is a valid schema, but you lose data.
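In practice the diff is usually produced through the Liquibase Maven goal, with the plugin's reference URL pointing at the annotated classes; something along these lines, per the plugin's wiki (the package and dialect are placeholders):

mvn liquibase:diff

with the liquibase-maven-plugin's referenceUrl configured as hibernate:spring:com.example.entities?dialect=org.hibernate.dialect.PostgreSQLDialect.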

How to load initial data (or seed data) using Java JPA?

I have a JPA project, and I would like to insert some initial data in development only, so I can easily check that everything is running smoothly.
My research turned up only solutions based on direct SQL scripts, but that doesn't feel right: if I'm using a framework to abstract away database details, why would I create a script for one specific database?
In the Ruby on Rails world we have the command "rake db:seed", which simply executes a file named seeds.rb whose job is to add the initial data to the database through the abstraction layer. Is there something like that in Java?
The ideal solution I can think of would be a Maven goal that executes a Java class. Is there an easy way, or a Maven plugin, to do that?
I feel your pain; I have gone wanting in Java projects for all of the perks Rails has.
That being said, there is no reason to use straight SQL. That approach is just asking for trouble: as your database schema changes during development, all the brittle SQL breaks. It is easier to manage the data if it is mapped to JPA models, which abstract the SQL interaction with the database.
What you should do is use your JPA models to seed your data. Create a component that builds the model objects you require and persists them. In my current project, we use SnakeYAML to serialize our models as YAML. To seed our database, we deserialize the YAML into JPA models and persist them.
If the models change (variable types change, columns are removed, etc.), you have to make sure that the serialized data can still be deserialized correctly into the JPA models. The human-readable YAML format makes it easy to update the serialized models.
To actually run your seed, bootstrap your system however you can. As #GeoorgeMcDowd said, you can use a servlet. I personally prefer to create a command-line tool by building an uberjar with a main class. Then you just need a script to set up your classpath and call that main class to run the seed.
Personally, I love Maven for project metadata but find it difficult as a build tool. The following can be used to exec a Java class:
mvn exec:java -Dexec.mainClass="com.package.Main"
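The class behind that goal can be as small as this (the persistence-unit name and the entities are placeholders for your own):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class Main {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("dev-unit");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        // Seed through the JPA layer, so it stays database-agnostic
        em.persist(new Role("ADMIN"));                       // hypothetical entity
        em.persist(new User("admin", "admin@example.com"));  // hypothetical entity
        em.getTransaction().commit();
        em.close();
        emf.close();
    }
}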
Just create a class and method that creates the objects and persists the data. When you fire up your application, run that method from a servlet's init. You can load your servlet at startup with the following web.xml config:
<servlet>
    <servlet-name>MyServlet1</servlet-name>
    <servlet-class>com.example.MyServlet1</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
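The servlet itself then only needs an init method; a minimal sketch (SeedDataLoader is a placeholder for whatever seeding component you wrote):

public class MyServlet1 extends javax.servlet.http.HttpServlet {
    @Override
    public void init() {
        // load-on-startup = 1 makes the container call this once at deployment
        new SeedDataLoader().run();
    }
}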
You could model your project with Maven and write a simple test to initialize the seed data, so the only thing you need to do is run "mvn test".
Similar to amuniz's idea: have a look at DbUnit. It is a JUnit extension for populating test data into a DB. It uses a simple schema-less XML format for that, and running it from a test class via "mvn test" is simple to do.
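A DbUnit-based seed in a test looks roughly like this (connection details and the dataset file name are placeholders):

import java.io.File;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Test;

public class SeedDataTest {
    @Test
    public void seed() throws Exception {
        IDatabaseConnection connection = new DatabaseConnection(
                DriverManager.getConnection("jdbc:postgresql://localhost/dev_db", "dev", "dev"));
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("seed.xml"));
        // CLEAN_INSERT clears the dataset's tables, then re-inserts the dataset rows
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        connection.close();
    }
}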
I would suggest Liquibase (http://www.liquibase.org/) for this. It has many plugins and allows you to define the rollback logic for every changeset (and can detect the rollback automatically in some cases).
In this case it is also important to think about the production servers and how the seed data will be moved to production.

How to generate model from database

I have an existing database. I need to generate the model classes in Java from it. Is there any tool/library that will allow me to do this? It would be of great help if it could also reproduce the entity relationships in the database in the model classes.
It is acceptable if the tool/library works with only one database vendor. I will create a database with that vendor and then generate the model.
Thanks in advance.
EDIT: I will probably use Hibernate as the ORM framework if I manage to generate the model.
The Hibernate Tools project (available as an Eclipse plug-in, as well as an Ant task) allows for "reverse-engineering" of database schemas into appropriate entity classes.
This project is also available in the JBoss Tools project.
The facility allows for reverse-engineering of the database metadata into a Hibernate configuration file. All artifacts (including the .java files) are generated from this config file.
You can control the nature of the reverse-engineering process to suit your database structure. In other words, you can specify the schemas that you wish the tool to reverse-engineer. You can also override the JDBC type mappings, as well as limit the reverse-engineering process to a selected set of tables.
Obligatory link:
Screencast on Reverse engineering and code generation
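The Ant side is a single task; a sketch, assuming the Hibernate Tools jars are on a path reference (the configuration and reveng file names are placeholders):

<taskdef name="hibernatetool"
         classname="org.hibernate.tool.ant.HibernateToolTask"
         classpathref="toolslib"/>

<hibernatetool destdir="src/main/java">
    <jdbcconfiguration configurationfile="hibernate.cfg.xml"
                       revengfile="hibernate.reveng.xml"/>
    <!-- ejb3="true" emits JPA annotations on the generated classes -->
    <hbm2java jdk5="true" ejb3="true"/>
</hibernatetool>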
Telosys does exactly this job.
Have a look: http://www.telosys.org/
The Minuteproject 4 JPA2 track (http://minuteproject.wikispaces.com/JPA2) does this task.
Minuteproject can be run from its console or from the command line.
You can get a quick result by generating from the console, which produces a Maven project containing the JPA2 mapping classes in Java or Groovy.
If you use the command line, you need to fill in an XML file that can contain additional customisation of your generated code, such as packaging, enums, aliasing, etc.
You can also try other tracks built on top of JPA2, such as DAOs with Spring or EJB, REST, and front ends with PrimeFaces or OpenXava.
Hibernate has an Eclipse plugin, Hibernate Tools (http://www.hibernate.org/subprojects/tools.html), which has reverse-engineering capabilities.
See: http://docs.jboss.org/tools/3.2.0.GA/en/hibernatetools/html/plugins.html#refeng_codegen for more details on how to run and customize the reverse engineering process.
