Now and then I find myself having to display the table hierarchy of a database for further operations, currently in a data migration project where I have to treat "leaf tables" (tables which are leaves in the table dependency tree) differently.
I've always wanted to use Hibernate's meta information to retrieve and display the table dependency tree, but never knew how to approach the problem.
So can anyone tell me whether Hibernate provides an API to do this? I am not asking for a complete solution; knowing whether such an API exists and what it is called is entirely sufficient.
I want to solve the following questions:
Which tables are in the database?
Is a given table a root table (one that does not depend on any other table)?
Is a given table a leaf table (one that depends on other tables, but on which no other table depends)?
Which tables depend on a given table?
On which tables does a given table depend?
I know how to retrieve the mapping between entities and tables:
"How to discover fully qualified table column from Hibernate MetadataSources", but what I want are the relationships between the tables themselves.
In a custom MetadataContributor you can access metadataCollector.getDatabase(), which exposes the full relational model to you. You just have to save that into a static volatile variable and then access it later on in your app to do whatever you want with it.
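A minimal sketch of such a contributor, assuming Hibernate 5.x (it is registered as a Java service via META-INF/services/org.hibernate.boot.spi.MetadataContributor; the exact collection types on Database, Namespace and Table vary slightly between versions):

import org.hibernate.boot.model.relational.Database;
import org.hibernate.boot.model.relational.Namespace;
import org.hibernate.boot.spi.InFlightMetadataCollector;
import org.hibernate.boot.spi.MetadataContributor;
import org.hibernate.mapping.ForeignKey;
import org.hibernate.mapping.Table;
import org.jboss.jandex.IndexView;

public class RelationalModelCapture implements MetadataContributor {

    // crude hand-off to the rest of the application, as described above
    public static volatile Database DATABASE;

    @Override
    public void contribute(InFlightMetadataCollector metadataCollector, IndexView jandexIndex) {
        DATABASE = metadataCollector.getDatabase();

        // example: print, for each table, the tables it depends on via its foreign keys
        for (Namespace namespace : DATABASE.getNamespaces()) {
            for (Table table : namespace.getTables()) {
                for (Object o : table.getForeignKeys().values()) {
                    ForeignKey fk = (ForeignKey) o;
                    System.out.println(table.getName() + " depends on " + fk.getReferencedTable().getName());
                }
            }
        }
    }
}

With that dependency information, a root table (in the sense of the question) is one that has no foreign keys of its own, and a leaf table is one that no other table references through a foreign key.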
I'm relatively new to working with JDBC and SQL. I have two tables, CustomerDetails and Cakes. I want to create a third table, called Transactions, which uses the 'Names' column from CustomerDetails, 'Description' column from Cakes, as well as two new columns of 'Cost' and 'Price'. I'm aware this is achievable through the use of relational databases, but I'm not exactly sure about how to go about it. One website I saw said this can be done using ResultSet, and another said using the metadata of the column. However, I have no idea how to go about either.
What you're probably looking to do is create an 'SQL view' (simply put, a virtual table); see this documentation:
CREATE VIEW view_transactions AS
SELECT customerdetails.Name, cakes.Description -- ...plus the Cost and Price columns you need
FROM customerdetails
JOIN cakes ON ...; -- the join condition depends on how the two tables relate
Or something along those lines
That way you can then query the view view_transactions as if it were a proper table.
Also, why have you tagged this as mysql when you are using sqlite?
You should create the new table manually, i.e. outside of your program. Use the commandline 'client' sqlite3 for example.
If you need to, you can use the command .schema CustomerDetails in that tool to show the DDL ("metadata" if you want) of the table.
Then you can write your new CREATE TABLE Transactions (...) defining your new columns, plus those from the old tables as they're shown by the .schema command before.
Note that the .schema is only used here to show you the exact column definitions of the existing tables, so you can create matching columns in your new table. If you already know the present column definitions, because you created those tables yourself, you can of course skip that step.
Also note that SELECT Name from CUSTOMERDETAILS will always return the data from that table, but never the structure, i.e. the column definition. That data is useless when trying to derive a column definition from it.
If you really want/have to access the DB's metadata programmatically, the documented way is to do so by querying the sqlite_master system table. See also SQLite Schema Information Metadata for example.
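For example, a small JDBC sketch along those lines (assuming the Xerial sqlite-jdbc driver is on the classpath; the database file name is made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowSchema {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:sqlite:cakeshop.db");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT name, sql FROM sqlite_master WHERE type = 'table'")) {
            while (rs.next()) {
                // the 'sql' column holds the original CREATE TABLE statement, i.e. the DDL
                System.out.println(rs.getString("name") + ":\n" + rs.getString("sql") + "\n");
            }
        }
    }
}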
You should read up on the concept of data modelling and how relational databases can help you with it, then your transaction table might look just like this:
CREATE TABLE transactions (
id int not null primary key
, customer_id int not null references customerdetails( id )
, cake_id int not null references cakes( id )
, price numeric( 8, 2 ) not null
, quantity int not null
);
This way, you can ensure that for each transaction (which in this case would be just a single line item of an invoice) the cake and the customer exist.
And I agree with @hanno-binder that it's not the best idea to create all this in plain JDBC.
I have a web project that uses a database to store data that is used to generate tasks to be processed by remote machines, which alter those records and store new data. My problem is that I have to store the change history of each table, but I don't need all of the information. For example, a table A could have 5 fields but I only need 2 for historical purposes. Another table B could have 3 and I would have to add another one (a date, for example). Also, during daily task generation I don't need all the changes, only the most recent one.
What is the best way to maintain a change history? Someone told me that a good idea is to have two tables: the A (or B) table and another one called A_history (or B_history) with the needed fields. This is actually what I'm doing, using triggers to insert into the history tables, but I don't feel comfortable with this approach. My project uses Spring (Spring Data, Hibernate and JPA), and if I change the DB (currently MySQL) I'd have to migrate the triggers. Is there a good way to manage history records? The tables could be generated with Hibernate/JPA annotations.
If I maintain the two tables approach, can I add a method to the repository to fetch rows from current table and history table at once?
For this purpose there is a dedicated project, Hibernate Envers. See the official documentation here. Just configure it, annotate the necessary properties with the @Audited annotation, and that's all. No need for DB triggers.
One pitfall: if you want a record for each delete operation, you need to delete via Session.delete(entity) instead of an HQL "delete ..." statement.
EDIT: Also take a look at the native auditing support of Spring Data JPA.
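To illustrate, a minimal Envers setup might look like this (the entity and its fields are made up; with hibernate-envers on the classpath, Hibernate then maintains a TaskRecord_AUD table plus a REVINFO revision table automatically):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.envers.Audited;

@Entity
@Audited   // audit the whole entity; the annotation can also be placed on single fields
public class TaskRecord {

    @Id
    @GeneratedValue
    private Long id;

    private String status;    // hypothetical business fields
    private String payload;

    // getters and setters omitted
}

Past revisions can later be read back through AuditReaderFactory.get(entityManager), so you don't have to query the _AUD tables by hand.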
I am not a database expert, but what I have seen experts do boils down to a few approaches.
1) They add a trigger to the transactional table that copies inserts and updates to a history table, but not deletes. This means any queries that need to include history can be done from the history table, since all the current info is there too (a rough sketch of this follows after this list).
a) They can tag each entry in the history table with time and date and keep track of all the states of the original records.
b) They can only keep track of the current state of the original record, and then it settles when the original is deleted.
2) They have a periodic task that goes around and copies data marked as deletable into the history table. It then deletes the data from the transactional table. Any queries in the transactional table have to make sure to ignore the deletable rows. Any queries that need history have to search both tables and merge the results.
3) If the volume of data isn't too large, they just leave everything in one table and mark some entries as historical. Queries have to ignore historical rows. Queries that include history are easy. This may slow down database access as the table grows to include many unused rows but that can sometimes be ameliorated by clever use of indexes.
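As a rough illustration of approach 1, a trigger like the following could be installed once, here issued through plain JDBC (MySQL syntax; the table a, its history table a_history and their columns are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class InstallHistoryTrigger {
    public static void main(String[] args) throws Exception {
        // connection details are placeholders; the account needs the TRIGGER privilege
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost/mydb", "user", "password");
             Statement st = con.createStatement()) {

            // copy every new row of 'a' into 'a_history', stamped with the change time (variant 1a)
            st.execute(
                "CREATE TRIGGER a_history_after_insert AFTER INSERT ON a " +
                "FOR EACH ROW " +
                "INSERT INTO a_history (a_id, field1, changed_at) " +
                "VALUES (NEW.id, NEW.field1, NOW())");
        }
    }
}

An analogous AFTER UPDATE trigger covers updates; deletes are deliberately not mirrored, as described above.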
I am using IBM DB2 v9.1 and want to export the whole database to an XML file and then import it back when needed. There are 9 tables in my database.
I am using Java and Hibernate. What I have done so far is fetch all the data through Hibernate into POJOs and then export the objects to an XML file. For the import I need to delete all existing data first and then import the XML file's data into the database.
The problem is with the primary keys (IDs). Once an ID is deleted from DB2, data cannot be saved with that ID again; it will be assigned a new ID. This breaks the foreign key relations. What is the best possible solution for this?
If you want to export/import data for testing purposes, you may want to consider DbUnit: http://www.dbunit.org/index.html
Perhaps the MERGE statement can come to your rescue. If there already is a row with the matching id, it will let you update the row. If there is no row with a matching ID, then it will let you insert it.
So then the question might become, do you really need to delete the rows from DB2 when you create the XML files?
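A rough sketch of such an upsert from JDBC (the customer table and its columns are invented; the CASTs are there because DB2 tends to reject untyped parameter markers inside a VALUES clause):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UpsertExample {

    // insert the row if the ID is unknown, otherwise update it in place
    static void upsert(Connection con, int id, String name) throws SQLException {
        String sql =
            "MERGE INTO customer AS tgt " +
            "USING (VALUES (CAST(? AS INTEGER), CAST(? AS VARCHAR(100)))) AS src (id, name) " +
            "ON tgt.id = src.id " +
            "WHEN MATCHED THEN UPDATE SET name = src.name " +
            "WHEN NOT MATCHED THEN INSERT (id, name) VALUES (src.id, src.name)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }
}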
What do you mean by "all database"? All data? Or even the DDL?
I think you are exporting all the data and leaving the tables in place, to be refilled with the exported data.
The problems are the constraints and the generated values. There is a good article about generated values: http://www.ibm.com/developerworks/data/library/techarticle/0205pilaka/0205pilaka2.html
For the referential constraints, the best approach is to drop/deactivate them before the import, then import the data, and finally recreate/reactivate the referential constraints.
Here is a good stored procedure to enable/disable constraints: http://www.dzone.com/snippets/db2-enabledisable-constraints
After importing the file, all relations will be mapped to check how the objects relate to each other. New objects will be created after mapping the relations, and they will be saved in the database with new IDs, since DB2 will not reuse an old deleted ID but assigns a new one instead.
I have followed BalusC's first method to create a dynamic form from fields defined in a database.
I can get field names and values of posted fields.
But I am confused about how to save values into database.
Should I pre-create a table to hold the values after creating the form, and save the values there manually (by building the SQL query myself)?
Should I convert the name/value pairs to JSON objects and save those?
Should I create a simple table with id, name and value fields and save the name/value pairs there (like an EAV schema)?
Or is there another way to persist the posted values into the database?
Regards
It looks like you're trying to work bottom-up instead of top-down.
The dynamic form in the linked answer is intended to be reused among all existing tables without the need to manually create separate JSF CRUD forms in "hardcoded" Facelets files for every single table. You should already have a generic model available which contains information about all available columns in the particular DB table (which is Field in the linked answer). This information can be extracted dynamically and generically via JPA metadata information (how to do that in turn depends on the JPA provider used) or just via the good ol' JDBC ResultSetMetaData class once during the application's startup.
If you really need to work bottom-up, then it gets trickier. Creating tables/columns during runtime is namely a very bad design (unless you intend to develop some kind of DB management tool like PhpMyAdmin or so, of course). Without the need to create tables/columns runtime, you should basically have 3 tables:
1 table which contains information about which "virtual" DB tables are all available.
1 table which contains information which columns one such "virtual" DB table has.
1 table which contains information which values one such column has.
Then you should link them together by FK relationships.
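A hypothetical JPA sketch of those three tables and their FK relationships (all names invented, accessors omitted):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
class VirtualTable {                  // which "virtual" tables exist
    @Id @GeneratedValue Long id;
    String name;
}

@Entity
class VirtualColumn {                 // which columns a virtual table has
    @Id @GeneratedValue Long id;
    String name;
    @ManyToOne VirtualTable owningTable;
}

@Entity
class VirtualValue {                  // which values a column has
    @Id @GeneratedValue Long id;
    String textValue;
    Long rowNumber;                   // identifies the "virtual row" the value belongs to
    @ManyToOne VirtualColumn owningColumn;
}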
I am taking a 'Keyword' and a table name from the user.
Now, I want to find all the columns of the table whose data type is varchar (String).
Then I will create a query which compares the keyword with those columns, and the matching rows will be returned as a result set.
I tried a desc table_name query, but it didn't work.
Can we write a describe table query in JPQL?
If not, is there any other way to solve the above situation?
Please help and thank you in advance.
No workaround is necessary, because it's not a drawback of the technology. It is not JPQL that needs to be changed, it's your choice of technology. In JPQL you cannot even select data from a table. You select from classes, and these can be mapped to multiple tables at once, resulting in SQL joins even for the simplest queries. Describing such a join would be meaningless. And even if you could describe a table, you do not use the names of columns in JPQL, but properties of objects. Describing tables in JPQL makes no sense.
JPQL is meant for querying objects, not tables. Also, it is meant for static work (where classes are mapped to relations once and for good) and not for dynamic things like mapping tables to objects on the fly or live inspection of the database (that is what RoR's ActiveRecord is for). Dynamic discovery of properties is not part of that.
Depending on what you really want to achieve (we only know what you are trying to do, that's different) you have two basic choices:
if you are trying to write a piece of software in a dynamic way, so that it adjusts itself to changes in the schema, drop JPQL (or any other ORM). Java classes are meant to be static; you can't really map them to dynamic tables (or grow new attributes). Use rowsets, they work fine and they will let you use SQL (see the JDBC metadata sketch after this list);
if you are building a clever library that can be shared by many projects and so has to work with many different static mappings, use reflection API to find properties of objects that you query for. Names of columns in the table will not help you anyway, since in JPQL queries you have to use names defined in mappings.
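For the first route, plain JDBC already exposes what you need through the standard DatabaseMetaData API; a minimal sketch (case handling of the table name, e.g. upper-casing it for Oracle, is left out):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import java.util.ArrayList;
import java.util.List;

public class VarcharColumnFinder {

    // returns the names of all string-typed columns of the given table
    static List<String> varcharColumns(Connection con, String tableName) throws SQLException {
        List<String> result = new ArrayList<>();
        DatabaseMetaData meta = con.getMetaData();
        try (ResultSet rs = meta.getColumns(null, null, tableName, null)) {
            while (rs.next()) {
                int type = rs.getInt("DATA_TYPE");   // a java.sql.Types constant
                if (type == Types.VARCHAR || type == Types.CHAR || type == Types.LONGVARCHAR) {
                    result.add(rs.getString("COLUMN_NAME"));
                }
            }
        }
        return result;
    }
}

The returned column names can then be used to build the native LIKE query against the keyword.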
Map the database dictionary tables and read the required data from them. For an Oracle database you will need to select from these three views: user_tab_comments, user_tab_cols and user_col_comments, to achieve the full functionality of the describe statement.
There is some talk in the community about dynamic definition of the persistence unit in future releases of JPA: http://www.oracle.com/goto/newsletters/javadev/0111/blogs_sun_devoxx.html?msgid=3-3156674507
As far as I know, we cannot use a describe query in JPQL.