Is there a way to tell Hibernate's hbm2ddl not to create a specific table, but still have the model be recognized by Hibernate?
The thing is that the model maps to a view, and I want to have an in-memory database (empty on startup, deleted on termination) for testing; hence, having two sets of mappings is out of the question.
Okay, this doesn't exactly answer the question (there's probably no way to do it with the current version), but it does solve the issue at hand.
So, in the end I let Hibernate create the table, but later on forcefully drop it and put in my own create view statement. It seems there are two ways to do it.
The first way is by using the <database-object> element, specifically its child element <create>, like so:
<class table="MY_VIEW">...</class>
<database-object>
    <create>
        drop table MY_VIEW;
        create view MY_VIEW etc etc;
    </create>
    <!-- assumption: the mapping DTD expects a matching <drop> for schema export -->
    <drop>
        drop view MY_VIEW;
    </drop>
</database-object>
The other way is to put the same statements in import.sql. This feature is undocumented; I initially assumed it was deprecated and wasn't going to go into detail, but it turns out it isn't. Still, I find the previous method less painful, since my create view statement spans several lines (import.sql executes each line as a separate statement).
Is there a way to tell hibernate's hbm2ddl to not create specific table
AFAIK, hbm2ddl is "all or nothing": you can't exclude specific tables. But you could use it to write the generated DDL to a file instead of exporting it directly to the database, and then alter the DDL by hand. Would that help?
but still have the model be recognized by Hibernate.
I didn't get that part. Do you mean having Hibernate validate the database against the mapping?
I had a similar problem. I'm trying to extend an existing schema, so I only want my "new" tables to be created (dropped/altered/etc). I couldn't find any way to tell the hbm2ddl tool to use these entities in its model for validation, but not to generate SQL for them.
So I wrote a simple Perl script to remove those statements from the generated SQL. It's designed to work in a shell script pipeline, like so:
cat your-sql-file.sql | scrub-schema.pl table1 table2 table3 ... > scrubbed.sql
The code is available here (uses the Apache v2 license):
https://github.com/cobbzilla/sql-tools/blob/master/scrub-schema.pl
I hope this is helpful.
Currently I am working on a jOOQ project where I need to perform schema validation of the columns.
What's the best way to get a table's schema with jOOQ, given the table name?
DSLContext.meta() is taking a lot of time to fetch the schema.
Thanks in advance.
By default, DSLContext.meta() queries your entire database with all the schemas and all the tables, even if you only consume parts of it.
You can use Meta.filterSchemas() (and possibly even Meta.filterTables()) to filter out content prior to querying it.
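For example, a minimal sketch (assuming jOOQ 3.12+, where these filtering methods are available; the connection details, schema name, and table name are placeholders):

import org.jooq.DSLContext;
import org.jooq.Meta;
import org.jooq.Table;
import org.jooq.impl.DSL;

public class FilteredMeta {
    public static void main(String[] args) {
        // connection details are placeholders
        DSLContext ctx = DSL.using("jdbc:h2:mem:test", "sa", "");

        // narrow the lookup *before* the database is queried
        Meta meta = ctx.meta()
                       .filterSchemas(s -> "PUBLIC".equals(s.getName()))
                       .filterTables(t -> "MY_TABLE".equals(t.getName()));

        for (Table<?> table : meta.getTables())
            System.out.println(table.getName() + ": " + table.fields().length + " columns");
    }
}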
I have a question. Where did these methods go?
Dialect.supportsTemporaryTables();
Dialect.generateTemporaryTableName();
Dialect.dropTemporaryTableAfterUse();
Dialect.getDropTemporaryTableString();
I've tried to browse the git history for Dialect.java, but no luck. I found that something like MultiTableBulkIdStrategy was created, but I couldn't find any example of how to use it.
To the point... I have legacy code (using Hibernate 4.3.11) which does batch deletes from multiple tables using a temporary table. Those tables may contain 1000 rows, but they may also contain 10 million. So, to make sure I don't kill the DB with some crazy delete, I create a temp table into which I put (using a select query with some condition) 1000 ids at a time, and then use this temp table to delete data from 4 tables. This runs in a while loop until all data matching the condition has been deleted. The transaction is committed after each cycle.
To make it more complicated, this code has to run on top of MySQL, MariaDB, Oracle, PostgreSQL, SQL Server, and H2.
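Schematically, the legacy code does something like this (a simplified sketch; the table names and the TMP_IDS temp table are placeholders, and both the temp table DDL and the "limit" clause are dialect-specific, which is exactly what the removed Dialect methods used to abstract away):

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class LegacyBatchDelete {
    static void deleteInBatches(DataSource ds, String condition) throws Exception {
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement()) {
            con.setAutoCommit(false);
            int batch;
            do {
                // 1. collect the next 1000 ids matching the condition
                //    (creating TMP_IDS itself is dialect-specific DDL)
                batch = st.executeUpdate("insert into TMP_IDS select id from MAIN_TABLE where "
                        + condition + " limit 1000");
                // 2. delete from the dependent tables, then the main table
                st.executeUpdate("delete from CHILD_A where main_id in (select id from TMP_IDS)");
                st.executeUpdate("delete from CHILD_B where main_id in (select id from TMP_IDS)");
                st.executeUpdate("delete from MAIN_TABLE where id in (select id from TMP_IDS)");
                st.executeUpdate("delete from TMP_IDS");
                // 3. commit after each cycle
                con.commit();
            } while (batch > 0);
        }
    }
}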
It was done using native SQL, with the methods mentioned above. But now I can't find a way to refactor it.
My first try was to create a query using a nested select, like this:
delete from TABLE where id in (select id from TABLE where CONDITION limit 1000)
but this is much slower, as the select has to run again for each of the deletes, and limit is not supported in nested selects in HQL.
Any ideas or pointers?
Thanks.
The methods were present in version 4.3.11 but removed in version 5.0.0. It seems a bit unusual that they were removed rather than deprecated; the background is in this Jira ticket.
To quote from it:
Long term, I think the best approach is to remove the Dialect methods intended to support temp tables in a piecemeal fashion and to make MultiTableBulkIdStrategy be a fully self-contained contract.
The methods were removed in this commit.
So it seems that getDefaultMultiTableBulkIdStrategy() is the intended replacement for these methods, but I'm not entirely clear on how, as it currently has no Javadoc. I guess you could try to work it out from the source code... or, if all else fails, perhaps try to contact Steve Ebersole, who implemented the change?
In Hibernate, if we set hbm2ddl.auto to create/create-drop, it will delete the old schema and create the new schema at startup. Does that mean it deletes the data as well? My doubt is: if it deletes everything, how could we retrieve the old data (e.g. user registration details)? And what is the correct option to use in production environments?
Please correct me if I am wrong.
It basically drops the managed entity tables (not all tables in the schema) on shutdown and recreates them on startup. So, as per your question: yes, the data in those tables is dropped as well. It does not drop the whole schema, only the entities known to the entity manager.
what is the correct option to use in production environments?
IMHO, the only valid option for production environments is validate. Everything else carries a potential risk of losing data or breaking the DB schema due to misconfiguration, a simple mistake, or a typo.
Use migration tools for schema updates, as they provide "version control" over your schema, allowing changes to be tested before deployment and reverted if necessary.
validate - validate the existing schema against the mappings, without changing it
update - only update an already created schema
create - create the schema from scratch every time
Also, here is a good explanation: Hibernate hbm2ddl.auto possible values and what they do?
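For example, a minimal sketch of setting this programmatically (the property can equally go in hibernate.cfg.xml or persistence.xml):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class ProductionBootstrap {
    public static void main(String[] args) {
        Configuration cfg = new Configuration().configure();
        // "validate" checks the mappings against the existing schema
        // and never modifies it - the safe choice for production
        cfg.setProperty("hibernate.hbm2ddl.auto", "validate");
        SessionFactory sf = cfg.buildSessionFactory();
        sf.close();
    }
}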
In my DB every table has 4 common columns - DATE_CREATED, USER_CREATED, DATE_MODIFIED, USER_MODIFIED - and I want to propagate this rule to all new tables implicitly.
Is it possible to do this without having to generate the Liquibase script manually?
This is not possible using Liquibase (as far as I know).
The reason for this is simple: what if you change your mind and add/remove one of the default columns later? Changing all tables would then mean changing all existing changesets, which is not allowed in Liquibase.
If you use a DSL to generate your Liquibase scripts, you can add a certain set of columns to every entity, but a fully automatic way would be difficult with the way Liquibase works.
There is nothing built into Liquibase to support this.
Your easiest option would be to use XML document entities, which are purely XML-level and therefore transparent to Liquibase. They allow you to include common XML in your changelog files.
A more complex approach would be to use the Liquibase extension system (http://liquibase.org/extensions), which allows you to redefine the logic that converts changeSets into SQL. That would allow you to inject any logic you want, including common data types, standard columns, or anything else.
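For the entity approach, a sketch (the entity name, column types, and changeset details are illustrative, and this assumes your Liquibase version's XML parser accepts an internal DTD subset):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE databaseChangeLog [
    <!-- hypothetical entity holding the four common columns -->
    <!ENTITY auditColumns "
        <column name='DATE_CREATED'  type='datetime'/>
        <column name='USER_CREATED'  type='varchar(64)'/>
        <column name='DATE_MODIFIED' type='datetime'/>
        <column name='USER_MODIFIED' type='varchar(64)'/>
    ">
]>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <changeSet id="1" author="someone">
        <createTable tableName="SOME_NEW_TABLE">
            <column name="ID" type="bigint"/>
            &auditColumns;
        </createTable>
    </changeSet>
</databaseChangeLog>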
I do not think so.
My suggestion: don't add the 4 columns mentioned above to all tables, because for existing entries they would just hold null values.
Instead, create a separate table with a primary key id, the table (or entity) name, and your four columns.
I have a use case where I need to read rows from a file, transform them using an engine, and then write the output to a database (which can be configured).
While I could write a query builder of my own, I was interested in knowing whether there's already an available solution (library).
I searched online and found the jOOQ library, but it is type-safe and has a code-gen tool, so it is probably suited to static database schemas. In my use case, DBs can be configured dynamically and the metadata is read programmatically and made available for write purposes (so a list of tables is made available, the user can select the columns to write, and the insert script for these columns needs to be created dynamically).
Is there any library that could help with this use case?
If I understand correctly, you need to query the database structure, display the result via a GUI, and have the user map data from a file to that structure?
Assuming this is the case, you're not looking for a 'library'; you're looking for an ETL tool.
Alternatively, if you're set on writing something yourself, the (very) basic way to do this is (a rough sketch follows the list):
Read the structure of the database using Connection.getMetaData(). The exact usage can vary between drivers, so you'll need to create an abstraction layer that meets your needs; I'd assume you're just interested in the table structure here.
Map the format of the file to a structure similar to the tables.
Provide a GUI that allows the user to connect elements from the file to columns in the table, including any type mapping that is needed.
Create a parameterized insert statement based on the file-element-to-column mapping; this is just a simple bit of string concatenation.
Loop through the rows in the file, adding each one to a batch insert.
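A very rough sketch of steps 1, 4, and 5 (the connection details and table name are placeholders, and the file-reading/mapping steps are stubbed out):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DynamicInsertSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "")) {

            // step 1: discover the columns of the target table
            List<String> columns = new ArrayList<>();
            try (ResultSet rs = con.getMetaData().getColumns(null, null, "MY_TABLE", null)) {
                while (rs.next())
                    columns.add(rs.getString("COLUMN_NAME"));
            }

            // step 4: build the parameterized insert by string concatenation
            String sql = "insert into MY_TABLE (" + String.join(", ", columns)
                    + ") values (" + String.join(", ", Collections.nCopies(columns.size(), "?")) + ")";

            // step 5: loop through the file's rows, adding each to the batch
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (Object[] row : rowsFromFile()) {
                    for (int i = 0; i < row.length; i++)
                        ps.setObject(i + 1, row[i]);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
        }
    }

    // placeholder for the file-reading and mapping steps
    static List<Object[]> rowsFromFile() {
        return List.of();
    }
}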
My advice: get an ETL tool. This sounds like a simple problem, but it's full of idiosyncrasies; getting even an 80% solution will be tough and time-consuming.
jOOQ (the library you referenced in your question) can be used without code generation as indicated in the jOOQ manual:
http://www.jooq.org/doc/latest/manual/getting-started/use-cases/jooq-as-a-standalone-sql-builder
http://www.jooq.org/doc/latest/manual/sql-building/plain-sql
When searching through the user group, you'll find other users leveraging jOOQ in the way you intend.
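For instance, a minimal sketch of a dynamic insert built without code generation (the table and column names here stand in for values chosen at runtime):

import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.name;
import static org.jooq.impl.DSL.table;
import static org.jooq.impl.DSL.using;

import java.sql.Connection;
import java.sql.DriverManager;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;

public class StandaloneBuilder {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "")) {
            DSLContext ctx = using(con, SQLDialect.H2);

            // table and column names can come from user input at runtime;
            // no generated classes are involved
            ctx.insertInto(table(name("MY_TABLE")),
                           field(name("COL_A")), field(name("COL_B")))
               .values("some value", 42)
               .execute();
        }
    }
}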
The steps you need to do are:
read the rows
build each row into an object
transform the above object into the target object
insert the target object into the db
Among the above 4 steps, the only thing you need to do is step 3.
And for the above purpose, you can use Transmorph, EZMorph, Commons-BeanUtils, Dozer, etc.
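For instance, a minimal sketch of step 3 with Commons BeanUtils (FileRow and TargetRow are placeholder bean classes standing in for your own types):

import org.apache.commons.beanutils.BeanUtils;

public class TransformSketch {
    public static class FileRow {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static class TargetRow {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static TargetRow transform(FileRow row) throws Exception {
        TargetRow target = new TargetRow();
        // copies every property whose name matches between the two beans
        BeanUtils.copyProperties(target, row);
        return target;
    }
}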