I've been using H2 for the functional-test part of a MySQL-based application with Hibernate. I finally got fed up with it and decided to use jOOQ, mostly so I could still abstract myself from the underlying database.
My problem is that I don't like the code generation jOOQ does at all, since I have yet to see an example of it properly set up across multiple profiles, and I also don't like connecting to the database as part of my build. It's overall quite a nasty set-up that I don't want to spend a morning on only to realise it's horrible and I don't want it in the project.
I'm using tableByName() and fieldByName() instead, which I thought was a good solution, but I'm running into problems with H2 putting everything in upper case.
If I do something like Query deleteInclusiveQuery = jooqContext.delete(tableByName("inclusive_test"))... I get table inclusive_test not found. Note this has nothing to do with the connection delay or closing configuration.
I tried changing the connection to use ;DATABASE_TO_UPPER=false but then I get field not found (I thought it would translate the whole schema).
I'm not sure whether H2 is unable to create non-upper-cased schemas or whether I'm failing at it. If it's the former, I'd expect jOOQ to also upper-case the table and field names in the query.
Example output is:
delete from "inclusive_test" where "segment_id" in (select "id" from "segment" where "external_taxonomy_id" = 1)
which would be correct if the H2 schema had not been created like this. The query I create the schema with specifically puts everything in lower case, yet it ends up upper-cased anyway, which Hibernate seems to understand or work around, but jOOQ does not.
Anyway, I'm asking whether there is a solution, because I'm quite disappointed at the moment and I'm considering just dropping the tests where I can't use Hibernate.
Any solution that is not using the code generation feature is welcome.
My problem is that I don't like the code generation jOOQ does at all, since I have yet to see an example of it properly set up across multiple profiles, and I also don't like connecting to the database as part of my build. It's overall quite a nasty set-up that I don't want to spend a morning on only to realise it's horrible and I don't want it in the project.
You're missing out on a ton of awesome jOOQ features if you go this way. See this very interesting discussion about why having a DB connection in the build isn't that bad:
https://groups.google.com/d/msg/jooq-user/kQO757qJPbE/UszW4aUODdQJ
In any case, don't get frustrated too quickly. There are a couple of reasons why things have been done the way they are. DSL.fieldByName() creates a case-sensitive column. If you provide a lower-case "inclusive_test" column, then jOOQ will render the name with quotes and in lower case, by default.
You have several options:
Consistently name your MySQL and H2 tables / columns, explicitly specifying the case. E.g. `inclusive_test` in MySQL and "inclusive_test" in H2.
Use jOOQ's Settings to override the rendering behaviour. As I said, by default, jOOQ renders everything with quotes. You can override this by specifying RenderNameStyle.AS_IS (see the sketch after this list).
Use DSL.field() instead of DSL.fieldByName(). It will allow you to keep full control of your SQL string.
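For illustration, here is a minimal sketch of option 2, assuming a jOOQ 3.x API where Settings and RenderNameStyle are available; "connection" stands in for your existing JDBC connection:

import static org.jooq.impl.DSL.tableByName;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.conf.RenderNameStyle;
import org.jooq.conf.Settings;
import org.jooq.impl.DSL;

// Render identifiers exactly as given, without quotes, so H2 applies its
// default upper-casing to them just as it did when the schema was created:
Settings settings = new Settings().withRenderNameStyle(RenderNameStyle.AS_IS);
DSLContext jooqContext = DSL.using(connection, SQLDialect.H2, settings);

// Now renders: delete from inclusive_test (unquoted, hence case-insensitive)
jooqContext.delete(tableByName("inclusive_test")).execute();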
By the way, I think we'll change the manual to suggest using DSL.field() instead of DSL.fieldByName() to new users. This whole case-sensitivity topic has caused too many issues in the past. This will be done with Issue #3218
Related
I often have the situation that the generated jOOQ code doesn't match the database in production (columns get added all the time).
How can I fetch a weakly typed record, that contains all the database columns?
dsl.select(asterisk())
.from(PERSON)
.where(PERSON.PERSON_NO.eq(id))
.fetch();
This only returns the columns known at code generation time.
A quick hack would be to make sure jOOQ doesn't know your tables by using plain SQL templating in your from clause. That way, jOOQ cannot resolve the asterisk and will try to discover the projection from the actual query results. For example:
dsl.select(asterisk())
.from("{0}", PERSON)
.where(PERSON.PERSON_NO.eq(id))
.fetch();
This has been a recurring request; I guess we can turn this into a feature: https://github.com/jOOQ/jOOQ/issues/10182
Note, though, that it is usually better to make sure jOOQ knows the exact production schema, and to keep the generated code up to date. A future jOOQ will support versioned generated meta data so that the same code can work with different production schema versions more easily:
https://github.com/jOOQ/jOOQ/issues/4232
Just use plain SQL: https://www.jooq.org/doc/3.14/manual-single-page/#query-vs-resultquery
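For instance, a minimal sketch of the plain SQL approach, assuming a DSLContext called dsl and the person/person_no names from the question above:

import org.jooq.Record;
import org.jooq.Result;

// A plain SQL ResultQuery: jOOQ derives the projection from the actual
// result set, so columns added after code generation are returned too.
Result<Record> result = dsl
    .resultQuery("select * from person where person_no = ?", id)
    .fetch();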
If that won't work for you, explaining why not might help someone formulate a more suitable answer.
We are changing databases, from one that supports an 8-bit integer type to one that does not. Our code breaks when Liquibase creates a DB that causes jOOQ to generate "short" variables while our code uses byte/Byte; this breaks method signatures.
Rather than recode, somebody suggested that we continue to use the previous database (HSQLDB) to generate the code, and it "should" run with the new database. There are dissenting opinions; I cannot find anything definitive beyond intuition, and it seems counter to what jOOQ was designed for. Has anyone done this successfully?
There is obviously no absolute yes/no answer to such a question, but there are several solutions/workarounds:
Use the previous database product to generate code
This will work for a short period of time, e.g. right now, but as you move on, it will be an extremely limiting factor for your schema design. You will continue tailoring your DDL and some other design decisions around what HSQLDB can do, and you won't be able to leverage other features of your new database product. This can be especially limiting when migrating data, as ALTER TABLE statements are quite different between dialects.
I would recommend this approach only for a very short period of time, e.g. if you can't thoroughly fix this right away.
Use jOOQ's <forcedType/> mechanism to rewrite your data types
jOOQ's code generator allows for rewriting data types prior to loading the meta data of your schema into the code generator. This way, you can pretend your byte types are TINYINT on your new database product, even if your new database product doesn't support TINYINT.
This is a thorough solution that you may want to implement regardless of what product you're using, as it will give you a way to re-define parts of your schema just for jOOQ's code generator, independently of how you're generating your code.
The feature is documented here:
https://www.jooq.org/doc/latest/manual/code-generation/codegen-advanced/codegen-config-database/codegen-database-forced-types
This is definitely a more long term solution for your case.
Note that a future jOOQ will be able to use CHECK constraints as input meta data to decide whether to apply such a <forcedType/>. I would imagine that you will place a CHECK (my_smallint BETWEEN -128 AND 127) constraint on every such column, so you could easily recognise which columns to apply the <forcedType/> to: https://github.com/jOOQ/jOOQ/issues/8843
Until that feature is available, you can implement it yourself via programmatic code generator configuration:
https://www.jooq.org/doc/latest/manual/code-generation/codegen-programmatic/
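As a rough illustration, here is a minimal sketch of such a programmatic configuration, assuming the org.jooq.meta.jaxb API of jOOQ 3.11+; the match expression is a hypothetical placeholder you would compute from your own CHECK constraint query:

import org.jooq.codegen.GenerationTool;
import org.jooq.meta.jaxb.Configuration;
import org.jooq.meta.jaxb.Database;
import org.jooq.meta.jaxb.ForcedType;
import org.jooq.meta.jaxb.Generator;

// Build the <forcedType/> programmatically, e.g. from a query against
// user_constraints that finds all BETWEEN -128 AND 127 check constraints.
Configuration configuration = new Configuration()
    .withGenerator(new Generator()
        .withDatabase(new Database()
            .withForcedTypes(new ForcedType()
                .withName("TINYINT")
                // Hypothetical match expression; derive it from your
                // CHECK constraint query instead of hard-coding it:
                .withExpression("MY_SCHEMA\\.MY_TABLE\\.MY_SMALLINT_COLUMN"))));

GenerationTool.generate(configuration);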
Or, starting with jOOQ 3.12, by using a SQL expression to produce the regular expression that <forcedType/> matches. E.g. in Oracle:
<forcedType>
  <name>TINYINT</name>
  <sql>
    select listagg(owner || '.' || table_name || '.'
        || regexp_replace(search_condition_vc, ' between.*', ''), '|')
    from user_constraints
    where constraint_type = 'C'
    and regexp_like(search_condition_vc, '.* between -128 and 127');
  </sql>
</forcedType>
Use a file-based meta data source
jOOQ doesn't have to connect to a live database instance to reverse engineer your schema. You can also pass DDL code to jOOQ, or XML files:
https://www.jooq.org/doc/latest/manual/code-generation/codegen-ddl/
https://www.jooq.org/doc/latest/manual/code-generation/codegen-xml/
This does not really solve your problem directly, but it might make solving it a bit easier. However, there are other limitations to these approaches, e.g. stored procedures aren't currently (jOOQ 3.12) supported, so I'm adding this only for completeness' sake, not to suggest you use it right now.
I am pretty new to ES. I have been searching for a DB migration tool for a long time and could not find one. I am wondering if anyone could point me in the right direction.
I would be using Elasticsearch as a primary datastore in my project. I would like to version all mapping and configuration changes, data imports, and data upgrade scripts that I run as I develop new modules in my project.
In the past I used database versioning tools like Flyway or Liquibase.
Are there any frameworks, scripts, or methods I could use with ES to achieve something similar?
Does anyone have experience doing this by hand with scripts, at least for running upgrade scripts?
Thanks in advance!
From this point of view, ES has some significant limitations:
despite having dynamic mapping, ES is not schemaless but schema-intensive. Mappings can't be changed when the change conflicts with existing documents (practically: if any document has a non-null value in a field the new mapping affects, the change will result in an exception)
documents in ES are immutable: once you've indexed one, you can only retrieve or delete it. The syntactic sugar around this is the partial update, which performs a thread-safe delete + index (with the same id) on the ES side
What does that mean in the context of your question? Basically, you can't have classic migration tools for ES. Here's what can make your work with ES easier:
use strict mapping ("dynamic": "strict" and/or index.mapper.dynamic: false; take a look at the mapping docs). This protects your indexes/types from being accidentally dynamically mapped with the wrong type, and gives you an explicit error when you get the data-to-mapping relation wrong
you can fetch the actual ES mapping and compare it with your data models. If your programming language has a sufficiently high-level ES client library, this should be pretty easy
you can leverage index aliases for migrations
So, a little bit of experience. For me, a reasonable flow currently looks like this:
All data structures are described as models in code. These models actually provide an ORM abstraction too.
Index/mapping creation is a simple method on the model.
Every index has an alias (e.g. news) which points to the actual index (e.g. news_index_{revision}_{date_created}).
Every time code is deployed, you
Try to put the model's (type's) mapping. If it succeeds without error, this means one of the following:
you've put the same mapping
you've put a mapping that is a pure superset of the old one (only new fields were provided; the old ones stay untouched)
no documents have values in fields affected by the new mapping
All of this means that you're good to go with the mapping/data you have; just work with the data as always.
If ES throws an exception about the new mapping, you
create a new index/type with the new mapping (named like name_{revision}_{date})
redirect your alias to the new index (see the sketch after this list)
fire up migration code that makes bulk requests for fast reindexing
During this reindexing you can safely keep indexing new documents through the alias. The drawback is that historical data is only partially available until reindexing completes.
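As a rough illustration of the alias-redirect step, here is a minimal sketch that talks to ES's _aliases endpoint using only the JDK; the index names and the localhost:9200 address are assumptions, not part of the flow above:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AliasSwitch {
    public static void main(String[] args) throws Exception {
        // Both actions are applied in one atomic step, so readers using the
        // "news" alias never observe a moment where it points nowhere.
        String body = "{\"actions\":["
            + "{\"remove\":{\"index\":\"news_index_1_20160101\",\"alias\":\"news\"}},"
            + "{\"add\":{\"index\":\"news_index_2_20160315\",\"alias\":\"news\"}}"
            + "]}";

        HttpURLConnection conn = (HttpURLConnection)
            new URL("http://localhost:9200/_aliases").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("HTTP " + conn.getResponseCode());
    }
}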
This is a production-tested solution. Caveats around such an approach:
you cannot do this if your read requests require consistent historical data
you're required to reindex the whole index. If you have one type per index (a viable solution) then it's fine, but sometimes you need multi-type indexes
the data makes a network round trip, which can sometimes be painful
To sum up:
try to have a good abstraction in your models; this always helps
try to keep historical data/fields stale. Just build your code with this idea in mind; it's easier than it sounds at first
I strongly recommend avoiding migration tools that rely on experimental ES features. Those can change at any time, like the river-* tools did.
I need to know how to generate reports dynamically in JasperReports. In my case the table has id, name, design as fields, and I need to run five different queries from one JRXML file.
The first one will select the entire table.
The second one will select id and name alone.
The third, name alone.
I succeeded in selecting the entire table, but I'm getting confused about how to run the rest.
You have 3 choices:
1) You can combine all 5 queries into a single query. This can be difficult (sometimes impossible) to do, but it should be tried first. Generally speaking, the data in your report is related in some way to the other data in your report; it would seem pointless to have completely unrelated data thrown together in rows in a report. So you should be able to do this.
This is your best option. You should be able to make those 5 queries into one single query. It will make your life easier, maintenance easier, and not to mention the initial design easier. You will be able to do everything in a single JRXML: grouping, formatting, sorting, etc. So look closely into this. If you are having trouble, create a new question on this site, give the queries and the table descriptions, and I bet some SQL experts here will give you a hand.
2) You can forgo placing the query in the JRXML altogether, and instead pass in a datasource through Java that contains all the data. Check out Using a POJO as an IReport Datasource for an example of how you could do this.
This is not that terrible an option if you cannot do option 1 above. It is a decent compromise, but you are still running several queries to build your datasource, and that will have a negative effect on the time it takes to actually run the report. A rough sketch of this approach follows below.
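Here is a minimal sketch of option 2 using JasperReports' JRBeanCollectionDataSource; the Person bean and the loadMergedRows() helper are hypothetical stand-ins for your own query code:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.data.JRBeanCollectionDataSource;

// Run your five queries in plain JDBC/DAO code, merge the results into one
// bean list, then fill the report from that list instead of a JRXML query.
List<Person> rows = loadMergedRows();               // hypothetical helper
JRBeanCollectionDataSource ds = new JRBeanCollectionDataSource(rows);

Map<String, Object> params = new HashMap<String, Object>();
JasperPrint print = JasperFillManager.fillReport("report.jasper", params, ds);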
3) You can use sub-reports, so that each report has one query and can use the result of the previous query to run the next. This would end up giving you a tree hierarchy of reports, and in your case would mean creating 5 reports.
I do not think this is a good idea in your case, but I just wanted to make sure you are aware it is an option. It is going to be overly complex to set up and difficult to maintain.
In addition to the three options presented by jschoen, there is a fourth option:
4) Use subdatasets to run multiple queries in one report. You can see the answer to Multiple queries in a single jasper document to see how to do this.
I've stumbled upon a nice SQL builder framework called JOOQ. BTW, in Russian "JOOQ" sounds like a noun meaning "bug" (as in the insect), a "beetle" ;)
If you have any feedback about JOOQ, its performance and such, please share. Links to blogs about JOOQ are also welcome.
I think I should also answer here, because I started using jOOQ a month and a half ago, so I have some experience with it.
I wanted to use a tool like jOOQ because:
An ORM is overkill in my current project (a distributed calculation platform for a cluster), since I need to read and write only separate fields from the DB, not complete table rows, and some of my queries are too complex for simple, lightweight ORMs.
I wanted syntax autocompletion for my queries, so that I don't need to keep my whole DB schema in mind.
I wanted to be able to write queries directly in Java, so that the compiler could check basic query syntax at build time.
I wanted my queries to be type-safe so that I couldn't accidentally pass a variable of one type, where another one is expected.
I wanted SQL, but I wanted it very convenient and easy to use.
Well, with jOOQ I was able to achieve all that. My main requirement was for jOOQ to handle sufficiently complex queries (nested, with grouping, etc.). That was fulfilled.
I also wanted to be able to run queries using as few lines of code as possible, and I was able to achieve this with jOOQ's fluent API, which allows jQuery-like chained calls to perform SELECTs (see the sketch below).
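To illustrate the kind of chained calls I mean, a minimal sketch in current jOOQ API style; the AUTHOR and BOOK generated tables are hypothetical, not from my project:

import static org.jooq.impl.DSL.count;

import org.jooq.Result;

// A nested, grouped SELECT as one fluent chain ("create" is the jOOQ
// factory/DSLContext; AUTHOR and BOOK are hypothetical generated tables):
Result<?> result = create
    .select(AUTHOR.NAME, count())
    .from(AUTHOR)
    .join(BOOK).on(BOOK.AUTHOR_ID.eq(AUTHOR.ID))
    .groupBy(AUTHOR.NAME)
    .having(count().gt(2))
    .fetch();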
Along the way I reported one or two bugs, and I must say they were fixed surprisingly fast.
I also found some features missing, and again I must say that almost all of them have already been added.
What I liked very much is that jOOQ now uses SLF4J for reporting some very interesting data about its performance, as well as for outputting the actual queries it has built. It really helped me with debugging.
jOOQ even generates Java artifacts for stored procedures, UDFs and updatable records, which I don't use currently, though.
What's important, jOOQ transparently supports DB2, Derby, H2, HSQLDB, MySQL, Oracle, PostgreSQL, SQLite, SQL Server, and Sybase SQL Anywhere. A pretty extensive list, I think.
jOOQ has a support forum on Google Groups where Lukas is ready day and night to answer even the stupidest of my questions.
jOOQ supports Maven, and that's a great relief for me since all my Java projects are Maven-based. We still miss a Maven plugin for the generator, but that's not important since running the generator is a piece of cake.
Writing my queries with jOOQ, I suddenly discovered that they became really portable, because I almost never used any MySQL-specific feature in the code; jOOQ tries to be as portable as possible. For those who can't live without such vendor-specific peculiarities, as far as I know, support for SQL extensions is on the way as well.
What does jOOQ lack at the moment, from my point of view?
Well, there is no fluent API for statements other than SELECT. This complicates code a little and makes UPDATE/DELETE statements a little more complicated to write. But I think this will be added soon. Just implemented in 1.5.9! Ha! Too quick for me ;)
And one more thing. jOOQ has a good manual, but... I don't know. Maybe I just don't understand its structure or architecture. When I started using jOOQ for the first time, I opened one page after another looking for a feature I needed. For example, try to guess from the table of contents where in the jOOQ manual the UPDATE and DELETE statements are described. But that's really subjective, I believe; I cannot even explain what's wrong with the manual from my point of view. When I can, I will post a ticket or two ;)
The manual is also not really well navigable, since Trac has no automatic "here, there and back"-style links.
Also, for me in Moscow (Russia), Trac pages don't open fast, so reading the manual is a little tedious.
The manual also misses a good architecture description of jOOQ for contributors. jOOQ seems to follow the design-by-contract principle, and when I wanted to learn how a certain feature is implemented inside by using my usual Ctrl-click on some method name in the IDE, I ended up inside a dull interface with no implementation ;) Not that I'm smart enough to start improving jOOQ right away, but I would certainly enjoy understanding how exactly jOOQ is architected from the ground up.
It's also a pity that we cannot contribute to the jOOQ manual. I expected it to be in some kind of wiki.
What I would also like to improve is the way news is reported. I would prefer a link to the manual there, or examples of how this or that new feature works.
The release-notes link in the manual is really just a roadmap. I think I will do that one myself tomorrow...
jOOQ also has a relatively small community currently, but I am glad to report that this doesn't affect code quality or the way new features are introduced.
jOOQ is really a good project. I will stick to it for my future projects as well. I really like it.
You can also take a look at MentaBean, a lightweight ORM and SQL builder that lets you stay as close as possible to SQL while offering a lot of help with the boilerplate code. Here is an example:
Programmatic Configuration:
private BeanConfig getUserBeanConfig() {
    // programmatic configuration for the bean... (no annotation or XML)
    BeanConfig config = new BeanConfig(User.class, "Users");
    config.pk("id", DBTypes.AUTOINCREMENT);
    config.field("username", DBTypes.STRING);
    config.field("birthdate", "bd", DBTypes.DATE); // note that the database column name is different
    config.field("status", new EnumValueType(User.Status.class));
    config.field("deleted", DBTypes.BOOLEANINT);
    config.field("insertTime", "insert_time", DBTypes.TIMESTAMP).defaultToNow("insertTime");
    return config;
}
// create table Users(id integer primary key auto_increment,
// username varchar(25), bd datetime, status varchar(20),
// deleted tinyint, insert_time timestamp)
A simple SQL join query:
Post p = new Post(1);
StringBuilder query = new StringBuilder(256);
query.append("select ");
query.append(session.buildSelect(Post.class, "p"));
query.append(", ");
query.append(session.buildSelect(User.class, "u"));
query.append(" from Posts p join Users u on p.user_id = u.id");
query.append(" where p.id = ?");
stmt = conn.prepareStatement(query.toString());
stmt.setInt(1, p.getId());
rset = stmt.executeQuery();
if (rset.next()) {
    session.populateBean(rset, p, "p");
    u = new User();
    session.populateBean(rset, u, "u");
    p.setUser(u);
}
If you are looking for just a SQL builder solution, I have a project that is an ORM framework for Java. It is still premature and in continuous development, but it handles many primitive database usages: https://github.com/ahmetalpbalkan/orman
There is no documentation at this stage; however, it can build safe queries using only Java method chaining and can handle many SQL operations. It can also map classes and fields to tables and columns, respectively.
Here's a sample query-building operation for the query:
SELECT COUNT(*) FROM sailors WHERE
rating>4 AND rating<9 GROUP BY rating HAVING AVG(age)>20;
Java code:
QueryBuilder qb = QueryBuilder.getBuilder(QueryType.SELECT);
System.out.println(qb
        .from("sailors")
        .where(
                C.and(
                        C.gt("rating", 4),
                        C.lt("rating", 9)))
        .groupBy("rating")
        .having(
                C.gt(
                        new OperationalField(QueryFieldOperation.AVG,
                                "age").toString(), 20))
        .getQuery());
(LOL just give up developing that framework!)
Most probably that won't work for you, but I just wanted to announce my project :P