Currently I am working on a jOOQ project where I need to perform schema validation of the columns.
What's the best way to get a table's schema with jOOQ, given the table name?
DSLContext.meta() is taking a long time to get the schema.
Thanks in advance
By default, DSLContext.meta() queries your entire database with all the schemas and all the tables, even if you only consume parts of it.
You can use Meta.filterSchemas() (and possibly even Meta.filterTables()) to filter out content prior to querying it.
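For example, a minimal sketch (assuming jOOQ 3.12+, an existing DSLContext called dsl, and placeholder names MY_SCHEMA / MY_TABLE):

Meta meta = dsl.meta()
    .filterSchemas(s -> s.getName().equals("MY_SCHEMA"))  // placeholder schema name
    .filterTables(t -> t.getName().equals("MY_TABLE"));   // placeholder table name

// Only the filtered content is queried, which is much cheaper than a full meta()
for (Table<?> table : meta.getTables("MY_TABLE")) {
    for (Field<?> field : table.fields()) {
        System.out.println(field.getName() + ": " + field.getDataType());
    }
}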
Related
I often have the situation that the generated jOOQ code doesn't match the database in production (columns get added all the time).
How can I fetch a weakly typed record that contains all the database columns?
dsl.select(asterisk())
.from(PERSON)
.where(PERSON.PERSON_NO.eq(id))
.fetch()
This only returns the columns known at code generation time.
A quick hack would be to make sure jOOQ doesn't know your tables by using plain SQL templating in your from clause. That way, jOOQ cannot resolve the asterisk and will try to discover the projection from the actual query results. For example:
dsl.select(asterisk())
.from("{0}", PERSON)
.where(PERSON.PERSON_NO.eq(id))
.fetch();
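The discovered projection can then be read from the weakly typed result itself, for example (a sketch, assuming the Result<Record> returned by the fetch() call above is assigned to a variable called result):

// Iterate the runtime-discovered fields, including columns unknown at code generation
for (Record record : result) {
    for (Field<?> field : record.fields()) {
        System.out.println(field.getName() + " = " + record.get(field));
    }
}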
This has been a recurring request, so I guess we can turn this into a feature: https://github.com/jOOQ/jOOQ/issues/10182
Note though, that it is usually better to make sure jOOQ knows the exact production schema and to keep generated code up to date. A future jOOQ version will support versioned generated meta data so that the same code can work with different production schema versions more easily:
https://github.com/jOOQ/jOOQ/issues/4232
Just use plain SQL: https://www.jooq.org/doc/3.14/manual-single-page/#query-vs-resultquery
If that won't work for you, explaining why not might help someone formulate a more suitable answer.
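For example, a minimal sketch (assuming an existing DSLContext called dsl and the same hypothetical PERSON table):

// Plain SQL ResultQuery - the projection is discovered from the result set
// metadata, so columns added in production show up without regenerating code.
Result<Record> result = dsl.resultQuery(
    "select * from person where person_no = ?", id).fetch();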
In my database every table has 4 common columns - DATE_CREATED, USER_CREATED, DATE_MODIFIED, USER_MODIFIED - and I want to propagate this rule to all new tables implicitly.
Is it possible to do this without having to write the Liquibase script manually?
This is not possible using Liquibase (as far as I know).
The reason for this is simple:
What if you change your mind and add/remove one of the default columns later? If you want to change all tables, this is not possible with Liquibase, as it would mean changing all changesets, which is not allowed.
If you use a DSL to generate your Liquibase scripts, you can add a certain set of columns to every entity, but a fully automatic way would be difficult with the way Liquibase works.
There is nothing built into Liquibase to support this.
Your easiest option would be to use XML document entities, which are a purely XML-level mechanism and therefore transparent to Liquibase. They allow you to include common XML in your changelog files.
A more complex approach would be to use the Liquibase extension system (http://liquibase.org/extensions) which allows you to redefine the logic to convert changeSets into SQL. That would allow you to inject any logic you want, including common data types, standard columns, or anything else.
I do not think so.
My suggestion: don't add the four columns mentioned above to all tables, because existing entries would end up with null values in every table.
Instead, create a separate table with a primary key id, the table or entity name, and your four columns.
I am taking a 'keyword' and a table name from the user.
Now, I want to find all the columns of that table whose data type is varchar (String).
Then I will create a query that compares the keyword with those columns, and the matching rows will be returned as the result set.
I tried a desc table_name query, but it didn't work.
Can we write a describe table query in JPQL?
If not, is there any other way to solve the above situation?
Please help and thank you in advance.
No workaround is necessary, because it's not a drawback of the technology. It is not JPQL that needs to be changed, it's your choice of technology. In JPQL you cannot even select data from a table. You select from classes, and these can be mapped to multiple tables at once, resulting in SQL joins for simplest queries. Describing such a join would be meaningless. And even if you could describe a table, you do not use names of columns in JPQL, but properties of objects. Describing tables in JPQL makes no sense.
JPQL is meant for querying objects, not tables. Also, it is meant for static work (where classes are mapped to relations once and for good) and not for dynamic things like mapping tables to objects on-the-fly or live inspection of database (that is what ror's AR is for). Dynamic discovery of properties is not a part of that.
Depending on what you really want to achieve (we only know what you are trying to do, which is a different thing), you have two basic choices:
if you are trying to write a piece of software in a dynamic way, so that it adjusts itself to changes in schema - drop JPQL (or any other ORM). Java classes are meant to be static, you can't really map them to dynamic tables (or grow new attributes). Use rowsets, they work fine and they will let you use SQL;
if you are building a clever library that can be shared by many projects and so has to work with many different static mappings, use reflection API to find properties of objects that you query for. Names of columns in the table will not help you anyway, since in JPQL queries you have to use names defined in mappings.
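For the second case, a minimal reflection sketch (assuming a hypothetical mapped entity class Person; the JPA Metamodel API would be an alternative):

// Collect the String-typed properties of the entity by reflection
List<String> stringProperties = new ArrayList<>();
for (java.lang.reflect.Field field : Person.class.getDeclaredFields()) {
    if (field.getType() == String.class)
        stringProperties.add(field.getName());
}
// These property names (not column names) are what you can use when building
// a dynamic JPQL query, e.g. "lower(p.firstName) like :keyword"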
Map the database dictionary tables and read the required data from them. For an Oracle database you will need to select from these three dictionary views - user_tab_comments, user_tab_cols, user_col_comments - to achieve the full functionality of the describe statement.
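If you don't want to map them as entities, the original question (finding the varchar columns of a given table) can also be answered over plain JDBC - a sketch, assuming an open Oracle Connection and a tableName supplied by the user:

// Oracle-specific: query the user_tab_cols dictionary view for string-typed columns
String sql = "select column_name from user_tab_cols "
           + "where table_name = ? and data_type in ('VARCHAR2', 'CHAR', 'NVARCHAR2')";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, tableName.toUpperCase());
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next())
            System.out.println(rs.getString("column_name"));  // candidate columns for the keyword search
    }
}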
There has been some talk in the community about dynamic definition of the persistence unit in future releases of JPA: http://www.oracle.com/goto/newsletters/javadev/0111/blogs_sun_devoxx.html?msgid=3-3156674507
As far as I know, you cannot use a describe query in JPQL.
I am trying to create an application in Java which pulls records out of the database and maps them to objects. It does that without knowing what the schema of the database looks like. All I want to do is fetch all rows from all tables and store them somewhere. There could be a thousand tables with thousands of records each. The application doesn't know the name of any table or attribute. It should map "on the fly". I looked at Hibernate but it doesn't give me what I want for this app. I don't want to create hard-coded XML files and classes for mapping. Any ideas how I can accomplish this?
Thanks
Oracle has a bunch of data dictionary views for metadata.
ALL_TABLES and ALL_TAB_COLUMNS would be the first places to start. Then you'd build ad-hoc queries based on what you get out of there. Not sure whether you have to deal with all data types (dates, blobs, spatial, user-defined....).
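A rough sketch of that approach (assuming an open JDBC Connection to Oracle; USER_TABLES limits it to the current schema, whereas ALL_TABLES would also include other owners, and types like BLOB or user-defined types would need extra handling):

// Enumerate the tables, then read every row of each table as a column-name -> value map
try (Statement tableStmt = connection.createStatement();
     ResultSet tableRs = tableStmt.executeQuery("select table_name from user_tables")) {
    while (tableRs.next()) {
        String tableName = tableRs.getString(1);
        try (Statement rowStmt = connection.createStatement();
             ResultSet rowRs = rowStmt.executeQuery("select * from " + tableName)) {
            ResultSetMetaData md = rowRs.getMetaData();
            while (rowRs.next()) {
                Map<String, Object> row = new LinkedHashMap<>();
                for (int i = 1; i <= md.getColumnCount(); i++)
                    row.put(md.getColumnName(i), rowRs.getObject(i));
                // row now holds one record of this table; store it wherever you need
            }
        }
    }
}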
Not sure what you mean by "store them somewhere". If you start thinking CSV or XML files, you'll need to escape various characters from VARCHAR2 columns.
If you are looking for some generic extract/unload routines, you should look at what is already available in the database or open-source/commercially.
MyBatis provides a pretty simple way to map data results to objects and back, maybe check that out?
http://code.google.com/p/mybatis/
Not to be flip, but for this task, you might want to check out Ruby on Rails and its ActiveRecord approach.
Is there a way to tell Hibernate's hbm2ddl not to create a specific table, but still have the model be recognized by Hibernate?
The thing is that the model maps to a view, and I want to have an in-memory database (empty on startup and deleted on termination) for testing; hence, having two sets of mappings is out of the question.
Okay, this doesn't exactly answer the question (there's probably no way to do it with the current version), but it does solve the issue at hand.
So, in the end I let hibernate create the table but later on forcefully drop it and put in my own create view statement. It seems that there are 2 ways to do it.
The first way is by using the <database-object> element, specifically the child element called <create>, like so:
<class table="MY_VIEW"></class>
<database-object>
    <create>
        drop table MY_VIEW;
        create view MY_VIEW etc etc;
    </create>
</database-object>
The other way is by entering the same thing in import.sql. That mechanism is barely documented, but it's not deprecated; still, I find the previous method less painful (the create view is several lines long), so I won't go into much detail here.
Is there a way to tell Hibernate's hbm2ddl not to create a specific table
AFAIK, hbm2ddl is "all or nothing"; you can't exclude specific tables. But you could use it to output the generated DDL to a file instead of automatically exporting it to the database, if you want to alter the DDL. Would this help?
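For example, a sketch using the old org.hibernate.tool.hbm2ddl.SchemaExport API (Hibernate 3.x/4.x; the class moved in later versions), writing the DDL to a file without touching the database:

SchemaExport export = new SchemaExport(configuration);  // your org.hibernate.cfg.Configuration
export.setOutputFile("generated-schema.sql");
export.setDelimiter(";");
export.create(true, false);  // script the DDL to the file, but don't export it to the database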
but still have the model be recognized by Hibernate.
I didn't get that part. Do you mean having Hibernate validate the database against the mapping?
I had a similar problem. I'm trying to extend an existing schema, so I only want my "new" tables to be created (dropped/altered/etc). I couldn't find any way to tell the hbm2ddl tool to use these entities in its model for validation, but not to generate SQL for them.
So I wrote a simple Perl script to remove those statements from the generated SQL. It's designed to work in a shell script pipeline, like so:
cat your-sql-file.sql | scrub-schema.pl table1 table2 table3 ... > scrubbed.sql
The code is available here (uses the Apache v2 license):
https://github.com/cobbzilla/sql-tools/blob/master/scrub-schema.pl
I hope this is helpful.