In Hibernate, if we set hbm2ddl.auto to create or create-drop, it will delete the old schema and create a new one on startup. Does that mean it deletes the data as well? If it deletes everything, how could we retrieve the old data (e.g., user registration details)? And what is the correct option to use in production environments?
Please correct me if I am wrong.
It basically drops the managed entity tables (not all of the tables in the schema) on shutdown and recreates them on startup. So, to answer your question: yes, the data in those tables is dropped as well. It does not drop the whole schema, only the tables of the entities known to the entity manager.
what is the correct option should use in production environments?
IMHO, the only valid option for production environments is validate. Everything else carries the potential risk of losing data or breaking the DB schema due to misconfiguration, a simple mistake, or a typo.
Use migration tools for schema updates, as they provide "version control" over your schema, allowing changes to be tested before deployment and reverted if needed.
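For example, with a migration tool like Flyway (named here only as one such option; a minimal sketch with illustrative connection settings), schema changes live in versioned SQL files such as V1__init.sql, V2__add_column.sql, and are applied in order:

import org.flywaydb.core.Flyway;

// A minimal sketch, assuming Flyway's fluent configuration API and
// illustrative connection settings
Flyway flyway = Flyway.configure()
    .dataSource("jdbc:postgresql://localhost/mydb", "user", "password")
    .load();
flyway.migrate(); // applies all pending migrations in version order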
validate - validates the existing schema against the mappings
update - only updates the schema once it has been created
create - creates the schema from scratch every time
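For reference, a minimal sketch of setting the option programmatically (the same property can just as well go into hibernate.cfg.xml or persistence.xml):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// A minimal sketch: "validate" is the safe choice for production
Configuration cfg = new Configuration()
    .configure() // reads hibernate.cfg.xml
    .setProperty("hibernate.hbm2ddl.auto", "validate");
SessionFactory sessionFactory = cfg.buildSessionFactory();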
Also, here is a good explanation: Hibernate hbm2ddl.auto possible values and what they do?
Currently I am working on a jOOQ project where I need to perform schema validation of the columns.
What's the best way to get a table's schema with jOOQ, given the table name?
DSLContext.meta() is taking a lot of time to fetch the schema.
Thanks in advance.
By default, DSLContext.meta() queries your entire database, including all schemas and all tables, even if you only consume parts of it.
You can use Meta.filterSchemas() (and possibly even Meta.filterTables()) to filter out content prior to querying it.
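A minimal sketch of that, assuming an existing JDBC connection; "my_schema" and "my_table" are illustrative names:

import java.sql.Connection;
import java.util.List;
import org.jooq.DSLContext;
import org.jooq.Meta;
import org.jooq.Table;
import org.jooq.impl.DSL;

static List<Table<?>> lookupTable(Connection connection) {
    DSLContext ctx = DSL.using(connection);
    Meta meta = ctx.meta()
        .filterSchemas(s -> "my_schema".equals(s.getName())) // skip all other schemas
        .filterTables(t -> "my_table".equals(t.getName()));  // skip all other tables
    return meta.getTables("my_table"); // far less metadata is fetched now
}

This way only the filtered metadata is queried instead of the whole catalog.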
I read the discussion about using hbm2ddl.auto=update to automatically apply schema changes to the database.
The thread is from 2008, and I do not know how safe it is to use the auto-update mode today.
We are running a small Java EE application on Glassfish with Hibernate 4.3.11 and PostgreSQL. We plan to use continuous integration with Jenkins.
Is it useful to work with hbm2ddl.auto=update enabled? Or is it better to check and apply the updates manually, or via a simple alternative?
I know it is hard to give a blanket statement.
You should not use hbm2ddl.auto=update to update production databases.
A few reasons:
Hibernate will only INSERT missing columns; it will not modify existing ones. So if you rename a property (client to customer), Hibernate will create a new customer column and leave the client column untouched. You will need to manually "move" the data over and remove the orphan column (see the sketch below).
Hibernate will not remove constraints from columns that are no longer mapped. So if your client column was NOT NULL, any insert into that table will now fail in the first place, because Hibernate won't provide any data for the orphan column (which still has its NOT NULL constraint).
Hibernate will not touch the data types of existing columns. So if you change a property type from String to Date, Hibernate will leave the column defined as varchar.
Hibernate does not remove columns whose properties you deleted, leading to data pollution and, in the worst case (the constraints remain in place), to an application that no longer works.
If you create additional constraints on existing columns, Hibernate will not create them, because the columns already existed before. (You might miss important constraints on the production DB that you added on existing columns.)
So, performing your updates on your own is safer. And if you have to keep track of what Hibernate does and does not do anyway, you might as well do it yourself from scratch.
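To make the first pitfall concrete, here is a minimal sketch (entity and column names are illustrative): if you rename only the Java property and keep the column mapping explicit, update mode has nothing to add and the data stays where it is.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Contract { // illustrative entity

    @Id
    private Long id;

    // Property renamed from "client" to "customer" in Java only; the
    // explicit @Column keeps the existing CLIENT column, so update mode
    // won't create a new, empty CUSTOMER column next to it
    @Column(name = "CLIENT", nullable = false)
    private String customer;
}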
Is there a way to tell Hibernate's hbm2ddl not to create a specific table, but still have the model recognized by Hibernate?
The thing is that the model maps to a view, and I want an in-memory database (empty on startup, deleted on termination) for testing; hence, maintaining two sets of mappings is out of the question.
Okay, this doesn't exactly answer the question (there's probably no way to do it with the current version), but it does solve the issue at hand.
So, in the end I let Hibernate create the table, but later forcefully drop it and put my own create view statement in its place. There seem to be two ways to do this.
The first way is by using the <database-object> element, specifically the child element called <create>, like so:
<class table="MY_VIEW"></class>
<database-object>
    <create>
        drop table MY_VIEW;
        create view MY_VIEW etc etc;
    </create>
</database-object>
The other way is to put the same statements in import.sql. This feature is undocumented; I first assumed it was deprecated, but it's not. Either way, I find the previous method less painful (the create view statement is several lines long).
Is there a way to tell hibernate's hbm2ddl to not create specific table
AFAIK, hbm2ddl is "all or nothing"; you can't exclude specific tables. But you could have it write the generated DDL to a file instead of automatically exporting it to the database, if you want to alter the DDL by hand. Would this help?
but still have the model be recognized by Hibernate.
I didn't get that part. Do you mean having Hibernate validate the database against the mapping?
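A minimal sketch of the file-output idea mentioned above, using the Hibernate 3/4-style SchemaExport API (the file name is illustrative):

import org.hibernate.cfg.Configuration;
import org.hibernate.tool.hbm2ddl.SchemaExport;

// Write the generated DDL to a file instead of exporting it to the
// database, so unwanted statements can be stripped before running it
Configuration cfg = new Configuration().configure();
SchemaExport export = new SchemaExport(cfg);
export.setOutputFile("generated-schema.sql"); // illustrative file name
export.setDelimiter(";");
export.execute(true, false, false, false); // script=true, export to DB=false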
I had a similar problem. I'm trying to extend an existing schema, so I only want my "new" tables to be created (dropped/altered/etc.). I couldn't find any way to tell the hbm2ddl tool to use these entities in its model for validation but not generate SQL for them.
So I wrote a simple Perl script to remove those statements from the generated SQL. It's designed to work in a shell script pipeline, like so:
cat your-sql-file.sql | scrub-schema.pl table1 table2 table3 ... > scrubbed.sql
The code is available here (uses the Apache v2 license):
https://github.com/cobbzilla/sql-tools/blob/master/scrub-schema.pl
I hope this is helpful.
I work with Spring/Hibernate DAOs to save my objects in the database. I had to back up my whole DB from inside my application, but when I tried to read the backup back in, the application crashed. I have found the cause of the crash: Hibernate automatically creates a new ID for my object when I save it.
For example, I saved an object with ID 4 in my backup file.
Now I read the backup file, clean the old data out of my DB, and save the object back. Its ID is now, for example, 5, but it has to be 4. How can I prevent Hibernate from auto-generating the ID value?
Should I write an extra JDBC DAO for importing?
Here is my model attribute for the ID:
@Id
@Column(name="ID")
@GeneratedValue(strategy=GenerationType.AUTO)
private Long id;
Thanks for helping, and excuse my bad English.
Four options come to my mind:
make the backup using a database utility (in the case of MySQL, for example, with mysqldump) and restore it through a database utility as well, without Hibernate
since the above doesn't seem to be an option now, you can generate SQL queries based on your backup (show us the format of your backup) and execute them as a batch against the database (again without Hibernate)
if you don't want to use the SQL option and want to do it in Hibernate, iterate over your objects and save them one by one; immediately after saving, update the object with the correct ID (either using .persist() or using HQL)
you can temporarily remove the GeneratedValue annotation while importing, as sketched below
That all said, I think (again, depending on your format) that it shouldn't matter that much what the IDs are, as long as referential integrity is intact.
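A minimal sketch of the last option, with a hypothetical entity: without a generator the ID is treated as assigned, so Hibernate keeps whatever value you set from the backup.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class BackupEntity { // hypothetical entity used during the import run

    @Id
    @Column(name = "ID")
    // @GeneratedValue removed for the import: the ID is now "assigned",
    // so session.save() keeps the value read from the backup file
    private Long id;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
}

During the import you would call entity.setId(backupId) before saving; once the import is done, restore the @GeneratedValue mapping.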
I have mapped several Java classes (Customer, Assessment, Rating, ...) to a database with Hibernate.
Now I am thinking about a history mode for all changes to the persistent data. The application is a web application. When data is deleted (or edited), another user should be able to see the change and undo it. Since the changes are outside the scope of the current session, I don't know how to solve this with something like the Command pattern, which is usually recommended for undo functionality.
For editing single values, an approach like the one in this question sounds OK. But what about the deletion of a whole persistent entity? The simplest way would be a flag in the table marking whether the customer is deleted or not. The most complex way would be a separate table for each class where deleted entities are stored. Is there anything in between? And how can I integrate these two things into an O/RM system (in my case Hibernate) comfortably, without messing around too much with raw SQL (which I want to avoid for portability) while keeping enough flexibility?
Is there a best practice?
One approach to maintaining an audit/undo trail is to mark each version of an object's record with a version number. Finding the current version would be painful if this were a simple ascending number, so reverse version numbering works best: "version" 0 is always the current one, and on an update the version numbers of all previous versions are incremented. Deleting an object is done by incrementing the version numbers on the current records and not inserting a new row at 0.
Compared to an attribute-by-attribute approach, this makes for far simpler rollbacks and historic version views, but it does take more space.
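A minimal sketch of the reverse-version idea in plain JDBC, assuming a hypothetical CUSTOMER table keyed on (ID, VERSION) where version 0 is always current:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

static void updateCustomerName(Connection con, long id, String newName) throws SQLException {
    // Push every existing row for this customer back one version
    try (PreparedStatement bump = con.prepareStatement(
            "update CUSTOMER set VERSION = VERSION + 1 where ID = ?")) {
        bump.setLong(1, id);
        bump.executeUpdate();
    }
    // Insert the new state as version 0, the current version
    try (PreparedStatement insert = con.prepareStatement(
            "insert into CUSTOMER (ID, VERSION, NAME) values (?, 0, ?)")) {
        insert.setLong(1, id);
        insert.setString(2, newName);
        insert.executeUpdate();
    }
}

A delete would run only the first statement, leaving no row at version 0.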
One way to do it would be to have a "change history" entity with properties for the ID of the entity changed, the action (edit/delete), the property name, the original value, and the new value; maybe also a reference to the user performing the edit. A deletion would create entries for all properties of the deleted entity with the action "delete".
This entity would provide enough data to perform undos and viewing of change history.
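A minimal sketch of such an entity (all names are illustrative):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class ChangeHistory {

    @Id
    @GeneratedValue
    private Long id;

    private Long entityId;        // id of the entity that was changed
    private String action;        // "edit" or "delete"
    private String propertyName;
    private String originalValue;
    private String newValue;
    private String changedBy;     // user performing the edit

    // getters and setters omitted
}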
Hmm, I'm looking for an answer to this too. So far the best I've found is the www.jboss.org/envers/ framework, but even that seems to me like more work than should be necessary.
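For reference, the basic Envers entry point is just an annotation (a minimal sketch; the extra work lies mostly in configuration and in querying revisions):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.envers.Audited;

@Entity
@Audited // Envers records every insert/update/delete of this entity in audit tables
public class Customer {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted
}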