Is it feasible to translate table definitions used by Spring Batch? - java

We are going to use Spring Batch in a project that needs to read, convert and write large amounts of data. So far, everything is fine.
But there is a non-functional requirement that says we can't create DB objects using English words, so the original schema used by Spring Batch will not be approved by the client's DBA unless we translate it.
In the docs, I don't see any way to configure or extend the API to achieve this, so it seems we'll have to customize the source code to make it work with an equivalent, translated model. Is that a correct/feasible assumption, or am I missing something?

That is an unusual requirement. However, in order to completely rename the tables and columns in the batch schema, you'll need to re-implement the JDBC based repository DAOs to use your own SQL.
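If only the table names needed translating, Spring Batch does expose a table-prefix hook out of the box; translated column names, however, still mean re-implementing the DAOs as described above. A minimal sketch of the prefix approach, assuming a hypothetical LOTE_ prefix chosen by the DBA:

import javax.sql.DataSource;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;

public class TranslatedBatchRepositoryConfig {

    // Sketch only: wires a JobRepository whose metadata tables use a non-English
    // prefix (e.g. LOTE_JOB_INSTANCE instead of BATCH_JOB_INSTANCE). The prefix is
    // the only naming hook provided; translated column names would still require
    // custom JDBC DAO implementations with your own SQL.
    public JobRepository jobRepository(DataSource dataSource,
                                       PlatformTransactionManager transactionManager) throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(transactionManager);
        factory.setTablePrefix("LOTE_"); // hypothetical prefix; the DDL script must be renamed to match
        factory.afterPropertiesSet();
        return (JobRepository) factory.getObject();
    }
}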

Related

Code-first like approach in Dropwizard Migrations Liquibase

Currently I'm working on a small web service using Dropwizard, connecting to a PostgreSQL DB using Hibernate (bundled with Dropwizard) and with a bit of Migrations (also from Dropwizard).
Coming from a .NET environment, I'm used to a code-first/centric approach.
Currently I'm looking into generating the migrations.xml from the current state of my entity classes, based on the JPA annotations on them.
I feel this is a case somebody might have already solved.
Is there a way to automatically update the migrations.xml based on the classes I'm writing?
It is possible. See the liquibase-hibernate plugin at https://github.com/liquibase/liquibase-hibernate/wiki.
Make sure you look at the generated migrations.xml changes before applying them because, like any diff-based process, the schema transformation may not be what you intended and that matters with data. For example, if you rename a class it will generate a drop + create process rather than a rename operation. The result is a valid schema, but you lose data.

Switching between embedded Databases in Java with JPA

I'm currently working my way into JPA 2.0 and I'm starting to like how easy it is to maintain persistent data.
What I'm currently trying to accomplish is using JPA in a basic desktop application. The application should allow me to open embedded databases which are on my file system. I chose H2 databases for now, but I could live with switching to JavaDB or anything else.
What I'm trying to accomplish is that one can open the database file without previously defining a persistence unit in the persistence.xml file.
I can easily define a unit and persist objects, but it needs to be configured first.
I want to write some sort of database browser which allows opening without preconfiguration and recompiling.
http://www.objectdb.com/java/jpa/start/connection
I saw that ObjectDB allows access for this type of PersistenceFactory creation, but I was not able to transfer this example to other databases.
Am I totally wrong in the way I approach this problem? Is JPA not designed for on-the-fly database access?
Thank you for your help,
Johannes
This is not part of the JPA standard. Some implementations may offer their own API to do it. For example, with DataNucleus, if you go to this page http://www.datanucleus.org/products/accessplatform_3_0/jpa/persistence_unit.html, at the end you can create dynamic persistence-units (and hence EMFs), and that implementation allows persistence to the widest range of datastores you'll get anywhere.
You can pass a Map of properties to createEntityManagerFactory() call that defines the database connection info, etc. The property names are the same as in the persistence.xml. I assume most JPA providers support this, EclipseLink does.
You will still need to define the set of classes for the database and map them.
If you do not have any classes either, then you could look into EclipseLink's dynamic support,
http://wiki.eclipse.org/EclipseLink/Examples/JPA/Dynamic
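To make the properties-map suggestion concrete, here is a minimal sketch using the standard JPA 2.0 connection properties. The persistence unit name "browser-pu" and the H2 settings are just assumptions for illustration; the unit still has to exist in persistence.xml with the entity classes listed, but the connection details can be chosen at runtime:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class DynamicConnectionExample {
    public static void main(String[] args) {
        // Standard JPA 2.0 property names, understood by EclipseLink and other providers
        Map<String, String> props = new HashMap<String, String>();
        props.put("javax.persistence.jdbc.driver", "org.h2.Driver");
        props.put("javax.persistence.jdbc.url", "jdbc:h2:file:" + args[0]); // file path chosen at runtime
        props.put("javax.persistence.jdbc.user", "sa");
        props.put("javax.persistence.jdbc.password", "");

        EntityManagerFactory emf = Persistence.createEntityManagerFactory("browser-pu", props);
        // ... create EntityManagers and work with the database ...
        emf.close();
    }
}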
If you want to make a database browser accessing different databases, you can't use a PU/Entity Manager (imo).
You'll need a dialogue asking a user for the IP/Port of the database, the username/password, the database name to access, and the type of database.
Then all you need to do is create a socket, send requests over the socket, and parse the response into a view.
Since both the request and the response are database specific, the user has to select the proper database driver.

Easy Java ORM for small projects

I'm currently searching for a really easy way to make simple Java objects persistent in databases and/or XML and/or other types of data stores.
For big projects at the company I would use Hibernate, iBatis, DataNucleus or something like that. But with small private projects this would take over 80% of the work time.
I also found "simpleORM", but that one requires coding data-related stuff pretty heavily into the data-model classes. I don't really like that style, so it is not an option for me.
Do you have a suggestion for some library which simply takes my objects and saves / loads them as they are or with very little configuration?
You could try my ORMLite library, which was designed as a simple replacement for hibernate and iBatis. I'm the main author. It supports a number of JDBC databases and has an Android backend. Here is the getting started section of the manual which has some code examples. Here also are working examples of simple usage patterns.
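A rough sketch of what ORMLite usage looks like, loosely following its getting-started docs. The Account class, its fields and the in-memory H2 URL below are made up for illustration:

import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.dao.DaoManager;
import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.jdbc.JdbcConnectionSource;
import com.j256.ormlite.table.DatabaseTable;
import com.j256.ormlite.table.TableUtils;

@DatabaseTable(tableName = "accounts")
public class Account {
    @DatabaseField(id = true)
    private String name;
    @DatabaseField
    private String password;

    Account() { /* ORMLite needs a no-arg constructor */ }

    Account(String name, String password) {
        this.name = name;
        this.password = password;
    }
}

class OrmLiteExample {
    public static void main(String[] args) throws Exception {
        // in-memory H2 database, just for the sketch
        JdbcConnectionSource connectionSource = new JdbcConnectionSource("jdbc:h2:mem:accounts");
        TableUtils.createTable(connectionSource, Account.class);         // creates the table from the annotations
        Dao<Account, String> accountDao = DaoManager.createDao(connectionSource, Account.class);
        accountDao.create(new Account("alice", "secret"));               // INSERT
        Account fetched = accountDao.queryForId("alice");                // SELECT by primary key
        connectionSource.close();
    }
}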
Try Norm. It's a lightweight layer over JDBC. It adds almost zero overhead to JDBC calls and is very easy to learn.
You could just serialize your objects into a file or database.
If you want to define the mapping, then you'd have to go for more configuration, and the standard OR mappers out there (like Hibernate) don't really add that much on top.
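A minimal sketch of the plain-serialization route, assuming your classes implement java.io.Serializable (the file name is arbitrary):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SerializationStore {
    public static void save(Object obj, String fileName) throws Exception {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(fileName))) {
            out.writeObject(obj); // the whole object graph must be Serializable
        }
    }

    public static Object load(String fileName) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(fileName))) {
            return in.readObject();
        }
    }
}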
You could try xstream. It's really simple OXM library working without upfront configuration.
Sample code:
import com.thoughtworks.xstream.XStream;

XStream xstream = new XStream();
// marshalling
String xml = xstream.toXML(domainObject);
// unmarshalling (fromXML returns Object, so cast back to your type)
domainObject = (DomainObject) xstream.fromXML(xml);
For relational database persistence try one of the JPA implementations, such as OpenJPA.
The setup overhead is minimal. You can let JPA create your schema and tables for you from your object definitions, so you don't need to hand-crank any SQL. All you need to supply is some annotations on your entities and a single config file, persistence.xml.
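As a sketch of how little is involved; the Note entity, its fields and the persistence unit name below are made up for illustration, and the schema-generation switch itself is provider-specific and lives in persistence.xml:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
public class Note {
    @Id
    @GeneratedValue
    Long id;
    String text;
}

class JpaQuickStart {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("small-project-pu");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        Note note = new Note();
        note.text = "hello";
        em.persist(note); // INSERT handled by the provider
        em.getTransaction().commit();
        em.close();
        emf.close();
    }
}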
You can also use jEasyORM (http://jeasyorm.sourceforge.net/).
In most cases it automatically maps objects to database tables with no need for configuration.
You may want to consider www.sormula.org. Minimal programming/annotations and simple learning curve. It uses standard SQL and JDBC so will work with any relational db.
You could try SnakeORM, http://sourceforge.net/p/selibs/wiki/Home/
It doesn't have many runtime dependencies, uses JPA annotations and follows the DAO pattern.
Disclosure: I am the author of this project
Well, if you want an ORM, then that implies that you want to map objects to tables, columns to fields, etc. In this case, if you want to avoid the hassle of bigger ORM implementations, you could just use plain old JDBC with simple DataAccessor patterns. But this does not translate to XML directly.
If you want to just persist the object somewhere, and only care about "understanding" the object in Java, then serialization is a simple effective method, as Thomas mentioned earlier.
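For scale, a hand-rolled accessor in plain JDBC might look like this (the PERSON table and its columns are just assumptions for the sketch):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PersonAccessor {
    private final String url;

    public PersonAccessor(String jdbcUrl) {
        this.url = jdbcUrl;
    }

    // Inserts one row into a hypothetical PERSON(ID, NAME) table.
    public void insert(long id, String name) throws SQLException {
        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement("INSERT INTO PERSON (ID, NAME) VALUES (?, ?)")) {
            ps.setLong(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }
}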
You could also try my little ORM library, Java2DB. I created it specifically for small projects that just want quick and easy access to their database. Check it out on GitHub.
Onyx Database is a very feature-rich Java NoSQL database alternative. It's pure Java with several persistence modes (caching, embedded database, save-to-remote, and save-to-remote-cluster). It has an embedded ORM and is probably the easiest persistence API I've used.

Strategies for dealing with constantly changing requirements for MySQL schemas?

I'm using Hibernate EntityManager and Hibernate Annotations for ORM in a very early-stage project. The project needs to launch soon, but the specs are changing constantly. I am concerned that the system will be launched, live data will be collected, and then the specs will change again, leaving me in a situation where I need to change the database schema.
How can I set things up in order to minimize the impact of this? Are there any open source projects that deal with this kind of migration? Can Hibernate do this automatically (without wiping the database)?
Your advice is much appreciated.
It's more a functional or organizational problem than a technical one. No tool will automatically guess how to migrate data from one schema to another. You'd better learn how to write stored procedures in order to migrate your data.
You'll probably need to disable constraints, create temporary tables and columns, copy lots of data, and then delete the temporary tables and columns and re-enable constraints to migrate your data.
Once in maintenance mode, every new feature that modifies the schema should also come with a script that migrates the current production schema and data to the new one.
No system can possibly create data migration scripts automatically from just the original and the final schema. There just isn't enough information.
Consider, for example, a new column. Should it just contain the default value? Or a value calculated from other fields/tables?
There is a good book about refactoring databases: http://www.amazon.com/Refactoring-Databases-Evolutionary-Addison-Wesley-Signature/dp/0321774515/ref=sr_1_1?ie=UTF8&qid=1300140045&sr=8-1
But there is little to no tool support for this kind of stuff.
I think the best things you can do in advance:
Don't let anybody access the database but your application.
If something else absolutely must access the DB directly, give it a separate set of views specially for that purpose. This allows you to change your table structure while keeping at least the structure that other systems see stable.
Have tons of tests. I just posted an article which (with the upcoming 2nd and 3rd parts) might help a little with this: http://blog.schauderhaft.de/2011/03/13/testing-databases-with-junit-and-hibernate-part-1-one-to-rule-them/
Hibernate can update the database schema from the entity model without wiping the data already in the database. So do that, and write migration code in Java which sets or removes data relationships.
This works, and we have done it multiple times. But of course, try to follow a flexible development process: make what you know for sure first, then re-evaluate the requirements - Scrum, etc.
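As a sketch, the schema-update behaviour is switched on with a single Hibernate property, shown here via the programmatic Configuration API (the same key can go into hibernate.cfg.xml or persistence.xml). Note that "update" only adds tables and columns; it never renames or drops anything, so destructive changes still need the hand-written migration code mentioned above:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SchemaUpdateBootstrap {
    public static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration()
                .configure() // reads hibernate.cfg.xml with the mappings and connection settings
                .setProperty("hibernate.hbm2ddl.auto", "update"); // evolve the schema in place, keep existing data
        return cfg.buildSessionFactory();
    }
}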
In your case, I would recommend a NoSQL database. I don't have much experience with that kind of database, so I can't recommend any particular implementation; you may want to check this option too.

Databases and Java

I am starting out writing Java code and interacting with databases for my "nextbigthing" project. Can someone direct me towards the best way to deal with adding/updating tables/records in databases?
Here is my problem: there is too much repetition when it comes to DB code in Java. I have to create the tables first (I use MySQL). I then create classes in Java for each table. Then I create AddRow, DeleteRow, UpdateRow and Search* methods depending on my need. For every table and every need, creating these huge SQL statements and the classes seems like a huge waste of my time.
There has to be a better, easier, more efficient way of doing things. Is there something out there that I do not know of that will let me just tell Java what the table is and it automatically generates the queries and executes them for me? It's simple SQL that can be auto-generated if it knows the column names and DB table interdependencies. Seems like a very reasonable thing to have.
Check out Hibernate - a standard Java ORM solution.
Use Hibernate for mapping your classes to the database.
Set its hbm2ddl.auto to update to avoid writing DDL yourself. But note that this is not the optimal way to take it to production.
Consider using Hibernate:
https://www.hibernate.org/
It can create Java classes with regular CRUD methods from an existing database schema.
Of course there is a much better way !
You really want to learn some bits of Java EE, and in particular JPA for database access.
For a complete crash course on Java EE, check out Sun's Java EE 5 tutorial.
http://java.sun.com/javaee/5/docs/tutorial/doc/
Part 4 - Enterprise Beans
Part 5 - Persistence (JPA)
Then you want to try Hibernate (for instance) which has an implementation of JPA.
This is for Java 5 or later.
If you are still in Java 2, you might want to try Hibernate or iBatis.
You can also try iBatis, if you want control over SQL. Else JPA is good.
You can also try using Seam Framework. It has good reverse-engineering tools.
There is also Torque (http://db.apache.org/torque/), which I personally prefer because it's simpler and does exactly what I need.
With Torque I can define a database with MySQL (well, I use PostgreSQL, but MySQL is supported too), and Torque can then query the database and generate Java classes for each table in it. With Torque you can then query the database and get back Java objects of the correct type.
It supports where clauses (either with a Criteria object or you can write the SQL yourself) and joins.
It also supports foreign keys, so if you have a User table and a House table, where a user can own 0 or more houses, there will be a getHouses() method on the User object which will give you the list of House objects the user owns.
To get a first look at the kind of code you can write, take a look at
http://db.apache.org/torque/releases/torque-3.3/tutorial/step5.html which contains examples which show how to load/save/query data with torque. (All the classes used in this example are auto-generated based on the database definition).
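From memory, usage of the generated classes looks roughly like the following. User and UserPeer stand for classes Torque would generate from a hypothetical USER table, so treat this as a sketch rather than exact API:

import java.util.List;
import org.apache.torque.Torque;
import org.apache.torque.util.Criteria;

public class TorqueSketch {
    public static void main(String[] args) throws Exception {
        Torque.init("torque.properties"); // connection settings

        // save a new row via the generated data object
        User user = new User();
        user.setName("john");
        user.save();

        // query via the generated Peer class and a Criteria
        Criteria criteria = new Criteria();
        criteria.add(UserPeer.NAME, "john");
        List users = UserPeer.doSelect(criteria);
    }
}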
Or, if Hibernate is too much, try Spring JDBC. It eliminates a lot of boilerplate code for you.
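A small sketch of what that looks like with JdbcTemplate (the USERS table and the injected DataSource are assumptions):

import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class UserJdbcDao {
    private final JdbcTemplate jdbc;

    public UserJdbcDao(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource); // connection handling and cleanup done for you
    }

    public void addUser(long id, String name) {
        jdbc.update("INSERT INTO USERS (ID, NAME) VALUES (?, ?)", id, name);
    }

    public List<String> findAllNames() {
        return jdbc.queryForList("SELECT NAME FROM USERS", String.class);
    }
}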
iBatis is another good choice, intermediate between Spring JDBC and Hibernate.
It's just a matter of using the right tools. Use an IDE with tools to autogenerate the one and the other.
If you're using Eclipse for Java EE and decide to head to JPA, then I can recommend taking advantage of the built-in Dali plugin. There's a nice PDF tutorial out at Eclipse.org.
If you're using Eclipse for Java EE and decide to head to "good ol'" Hibernate, then I can recommend taking advantage of the Hibernate Tools plugin. There's a good reference guide out at Hibernate.org.
Both tools are capable of reverse-engineering from a SQL table to full-fledged JavaBeans/entities and/or mapping files. It really takes most of the boilerplate pain away. The DAO pattern is slightly superfluous when using JPA. In case of Hibernate you can consider using a Generic DAO.
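As an illustration of the Generic DAO idea, a minimal sketch with JPA's EntityManager (nothing here is specific to Hibernate):

import javax.persistence.EntityManager;

public class GenericDao<T, ID> {
    private final EntityManager em;
    private final Class<T> type;

    public GenericDao(EntityManager em, Class<T> type) {
        this.em = em;
        this.type = type;
    }

    public T find(ID id) {
        return em.find(type, id); // basic lookup by primary key
    }

    public void save(T entity) {
        em.persist(entity);
    }

    public T update(T entity) {
        return em.merge(entity);
    }

    public void delete(T entity) {
        em.remove(entity);
    }
}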
