I need to know how to generate reports dynamically in JasperReports. In my case the table has id, name, and design as fields. I need to handle five different queries in one JRXML file.
The first one will select the entire table.
The second one will select only id and name.
The third one will select name alone.
I have succeeded with selecting the entire table, but I am getting confused about how to run the rest.
You have 3 choices:
1) You can combine all 5 queries into a single query. This can be difficult (sometimes impossible) to do, but it should be tried first. Generally speaking, the data in your report is related in some way to the other data in your report. It would seem pointless to throw completely unrelated data together in rows in a report. So you should be able to do this.
This is your best option. You should be able to turn those 5 queries into one single query. It will make your life easier, maintenance easier, and not to mention the initial design easier. You will be able to do everything in a single jrxml. You can do grouping, formatting, sorting, etc. all in the jrxml. So look closely into this. If you are having trouble, create a new question on this site, give the queries and the table descriptions, and I bet some SQL experts here will give you a hand.
2) You can forgo placing the query in the JRXML altogether, and instead pass in a datasource through Java that contains all the data. Check out Using a POJO as an IReport Datasource for an example of how you could do this.
This is not that terrible an option if you cannot do option 1 above. It is a decent compromise, but you are still running several queries to build your datasource, and that will have a negative effect on the time it takes to actually run the report.
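For illustration, a rough sketch of option 2 could look like the following (the report file name and the helper that runs the queries are hypothetical; the beans' property names would need to match the field declarations in the JRXML):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import net.sf.jasperreports.engine.JRException;
    import net.sf.jasperreports.engine.JasperCompileManager;
    import net.sf.jasperreports.engine.JasperFillManager;
    import net.sf.jasperreports.engine.JasperPrint;
    import net.sf.jasperreports.engine.JasperReport;
    import net.sf.jasperreports.engine.data.JRBeanCollectionDataSource;

    public class ReportFiller {

        // rows: beans built in Java from however many queries you need to run
        public static JasperPrint fill(List<?> rows) throws JRException {
            JRBeanCollectionDataSource dataSource = new JRBeanCollectionDataSource(rows);

            JasperReport report = JasperCompileManager.compileReport("report.jrxml"); // hypothetical file name
            Map<String, Object> parameters = new HashMap<String, Object>();
            return JasperFillManager.fillReport(report, parameters, dataSource);
        }
    }

Filling via a JRBeanCollectionDataSource keeps the JRXML free of any query, so the same report can be reused no matter how the data was assembled in Java.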
3) You can use sub-reports, so that each report has one query, and you can use the result of the previous query to run the next. This would end up giving you a tree hierarchy of reports, and in your case would mean creating 5 reports.
I do not think this is a good idea in your case, but I just wanted to make sure you are aware it is an option. It is going to be overly complex to set up, and difficult to maintain.
In addition to the three options presented by jschoen, there is a fourth option:
4) Use subdatasets to run multiple queries in one report. See the answer to Multiple queries in a single jasper document for how to do this.
Let's say I have 2 tables in a database, one called students and the other called departments. students looks like the following:
department_id, student_id, class, name, age, gender, rank
and departments looks like:
department_id, department_name, campus_id, number_of_faculty
I have an API that can query the database and retrieve various information from the 2 tables. For example, I have an endpoint that can get the number of students on each campus by joining the 2 tables.
I want to do integration testing for my API end points. To do that, I spin up a local database, run migration of the database schemas to create the tables, then populate each table with artificial records such that I know exactly what is in the database. But coming up with a good seeding process has proven to be anything but easy. For the simple example I described above, my current approach involves generating multiple distinct records for each column. For example, I need at least 2 campuses (say main and satellite), and 3 departments (say Electrical Engineering and Mathematics for main campus and English for satellite campus). Then I need at least 2 students in each department or 6 students in total. And if I mix in gender, age and rank, you can easily see that the number of artificial records grows exponentially. Coming up with all these artificial records is manual and thus tedious to maintain.
So my question is: what is the proper way to set up and seed database for integration testing in general?
First, I do not know of any public tool that automates the task of generating test data for arbitrary scenarios.
Actually, this is a hard task in general. You might look for scientific papers and books on the topic. There are many of those. Unfortunately, I have no recommendation on a set of "good" ones.
A quite trivial approach is generating random data drawn from a set of potential values per field (column in the database case). (This is what you did already.) For smaller sets you may even generate the full set of potential combinations. E.g. you might have a look at the following test data generator for an example applying a variant of such an approach.
However, this might not be appropriate for the following reasons:
the resulting data will exhibit significant redundancy, while it may still not cover all interesting cases.
it might create inconsistent data with respect to logical constraints your application would enforce otherwise (e.g. referential integrity)
You might address such issues by adding some constraints to the test data generation process, so as to eliminate invalid or redundant combinations (with respect to your application).
The actual restrictions that are possible (and that make sense), however, depend on your business and use cases, so there is no general rule on such restrictions. E.g. if your API provides special treatment for age values based on gender, then combinations of age and gender are important for your tests; if no such distinction exists, any combination of age and gender will be OK.
As long as you are looking for white box test scenarios, you will need to take your implementation (or at least specification) details into account.
For black box testing, a full set of combinatorial data will be sufficient. Then the only issue is reducing the test data so that the runtime of the tests stays within some maximum.
When dealing with white box testing, you might explicitly add corner cases. E.g. in your case: a department without any students, a department with a single student, students without a department, as long as such scenarios make sense for your testing purposes (e.g. when testing error handling or testing how your application would deal with inconsistent data).
In your case you are looking at your API as the main view of the data. The database content is just the input necessary for achieving all interesting output from that API. The actual task of identifying proper database content may be described as the mathematical problem of providing an inverse to the mapping implemented by your application (from database content to API result).
In the absence of any ready-made tool, you might apply the following steps:
start with a simple combinatorial data generator (see the sketch after this list)
apply some restrictions to eliminate useless or illegal records
run tests, capturing coverage data
add extra data records to improve coverage
repeat testing until coverage is OK
review and adjust data after any change to your code or schema
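As a rough illustration of the first step (this sketch is mine, not part of the original answer; the example values echo the students table from the question), a generator for the full set of combinations could look like this:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class TestDataGenerator {

        // Builds every combination of the given potential values per column (cartesian product).
        public static List<List<Object>> allCombinations(List<List<Object>> valuesPerColumn) {
            List<List<Object>> result = new ArrayList<>();
            result.add(new ArrayList<>());
            for (List<Object> columnValues : valuesPerColumn) {
                List<List<Object>> next = new ArrayList<>();
                for (List<Object> partial : result) {
                    for (Object value : columnValues) {
                        List<Object> row = new ArrayList<>(partial);
                        row.add(value);
                        next.add(row);
                    }
                }
                result = next;
            }
            return result;
        }

        public static void main(String[] args) {
            // e.g. gender x age x rank for the students table from the question
            List<List<Object>> rows = allCombinations(Arrays.asList(
                    Arrays.asList("M", "F"),
                    Arrays.asList(18, 25, 40),
                    Arrays.asList("freshman", "senior")));
            rows.forEach(System.out::println);   // 2 * 3 * 2 = 12 candidate rows
        }
    }

The restriction and coverage steps would then filter and extend this candidate set rather than generate it from scratch.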
I think DbUnit might be the right tool for what you're trying to do. You can specify the state of your database before the tests and check the expected state after.
If you need to initialize a database with tables and dummy data for JUnit tests, I am using Unitils or DbUnit.
The data in Unitils can be loaded from XML files inside your resources folder, so once the test runner starts, it will load all the content from the XML and insert it into the database. Please look at the examples on their website.
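To give a rough idea (not from the original answers), a DbUnit setup could look something like this; the JDBC settings and the dataset file name are made-up examples, and the XML dataset would contain one element per row, e.g. <students student_id="1" ... />:

    import org.dbunit.IDatabaseTester;
    import org.dbunit.JdbcDatabaseTester;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;
    import org.junit.After;
    import org.junit.Before;

    public class StudentApiIT {

        private IDatabaseTester databaseTester;

        @Before
        public void setUp() throws Exception {
            // hypothetical in-memory H2 connection; use your real test database settings here
            databaseTester = new JdbcDatabaseTester("org.h2.Driver", "jdbc:h2:mem:testdb", "sa", "");

            // hypothetical dataset file listing rows for the students and departments tables
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(getClass().getResourceAsStream("/datasets/students-and-departments.xml"));

            databaseTester.setDataSet(dataSet);
            databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT); // wipe and re-seed before each test
            databaseTester.onSetup();
        }

        @After
        public void tearDown() throws Exception {
            databaseTester.onTearDown();
        }
    }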
I've been using H2 for the functional tests of a MySQL based application with Hibernate. I was finally fed up with it and I decided to use jOOQ, mostly so I could still abstract myself from the underlying database.
My problem is that I don't like this code generation thing jOOQ does at all, since I'm yet to see an example with it properly set up in multiple profiles, and I also don't like connecting to the database as part of my build. It's overall quite a nasty set-up that I don't want to spend a morning on only to realise it is horrible, and I don't want it in the project.
I'm using tableByName() and fieldByName() instead, which I thought was a good solution, but I'm getting problems with H2 putting everything in uppercase.
If I do something like Query deleteInclusiveQuery = jooqContext.delete(tableByName("inclusive_test"))... I get table inclusive_test not found. Note this has nothing to do with the connection delay or closing configuration.
I tried changing the connection to use ;DATABASE_TO_UPPER=false, but then I get a 'field not found' error (I thought that setting would apply to the whole schema).
I'm not sure whether H2 is unable to create non-upper-case schemas or whether I'm doing something wrong. If it is the former, then I'd expect jOOQ to also upper-case the table and field names in the query.
Example output is:
delete from "inclusive_test" where "segment_id" in (select "id" from "segment" where "external_taxonomy_id" = 1)
which would be correct if the H2 schema had not been created like this; however, the query I create the schema with specifically uses lowercase, yet in the end everything ends up upper-cased, which Hibernate seems to understand or work around, but jOOQ does not.
Anyway, I'm asking if there is a solution because I'm quite disappointed at the moment and I'm considering just dropping the tests where I can't use Hibernate.
Any solution that is not using the code generation feature is welcome.
My problem is that I don't like this code generation thing jOOQ does at all, since I'm yet to see an example with it properly set up in multiple profiles, and I also don't like connecting to the database as part of my build. It's overall quite a nasty set-up that I don't want to spend a morning on only to realise it is horrible, and I don't want it in the project.
You're missing out on a ton of awesome jOOQ features if you're going this way. See this very interesting discussion about the rationale for why having a DB connection in the build isn't that bad:
https://groups.google.com/d/msg/jooq-user/kQO757qJPbE/UszW4aUODdQJ
In any case, don't get frustrated too quickly. There are a couple of reasons why things have been done the way they are. DSL.fieldByName() creates a case-sensitive column. If you provide a lower-case "inclusive_test" column, then jOOQ will render the name with quotes and in lower case, by default.
You have several options:
Consistently name your MySQL and H2 tables / columns, explicitly specifying the case. E.g. `inclusive_test` in MySQL and "inclusive_test" in H2.
Use jOOQ's Settings to override the rendering behaviour. As I said, by default, jOOQ renders everything with quotes. You can override this by specifying RenderNameStyle.AS_IS.
Use DSL.field() instead of DSL.fieldByName(). It will allow you to keep full control of your SQL string.
By the way, I think we'll change the manual to suggest using DSL.field() instead of DSL.fieldByName() to new users. This whole case-sensitivity topic has been causing too many issues in the past. This will be done with Issue #3218.
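To illustrate the last two options, here is a rough sketch (my own, based on the jOOQ 3.x API, which still has RenderNameStyle; it reuses the table and column names from the question):

    import static org.jooq.impl.DSL.field;
    import static org.jooq.impl.DSL.select;
    import static org.jooq.impl.DSL.table;

    import java.sql.Connection;

    import org.jooq.DSLContext;
    import org.jooq.SQLDialect;
    import org.jooq.conf.RenderNameStyle;
    import org.jooq.conf.Settings;
    import org.jooq.impl.DSL;

    public class InclusiveTestCleaner {

        public static void deleteInclusive(Connection connection) {
            // Option: render all names exactly as provided, without quotes
            Settings settings = new Settings().withRenderNameStyle(RenderNameStyle.AS_IS);
            DSLContext ctx = DSL.using(connection, SQLDialect.H2, settings);

            // Option: plain SQL tables/fields keep full control over the generated SQL string
            ctx.delete(table("inclusive_test"))
               .where(field("segment_id").in(
                    select(field("id"))
                    .from(table("segment"))
                    .where(field("external_taxonomy_id").eq(1))))
               .execute();
        }
    }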
I have a use case where in I need to read rows from a file, transform them using an engine and then write the output to a database (that can be configured).
While I could write a query builder of my own, I was interested in knowing if there's already an available solution (library).
I searched online and could find the jOOQ library, but it looks like it is type-safe and has a code-gen tool, so it is probably suited for static database schemas. In my use case, databases can be configured dynamically and the metadata is programmatically read and made available for write purposes (so a list of tables would be made available, the user can select the columns to write, and the insert script for these columns needs to be dynamically created).
Is there any library that could help me with the use case?
If I understand correctly, you need to query the database structure, display the result via a GUI, and have the user map data from a file to that structure?
Assuming this is the case, you're not looking for a 'library', you're looking for an ETL tool.
Alternatively, if you're set on writing something yourself, the (very) basic way to do this is:
read the structure of the database using Connection.getMetaData(). The exact usage can vary between drivers, so you'll need to create an abstraction layer that meets your needs - I'd assume you're just interested in the table structure here.
map the format of the file to a structure similar to the tables.
provide a GUI that allows the user to connect elements from the file to columns in the table including any type mapping that is needed.
create a parametrized insert statement based on the file-element-to-column mapping - this is just a simple bit of string concatenation (see the sketch after this list).
loop through the rows in the file, performing a batch insert for each.
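A minimal sketch of the metadata, insert-building and batching steps with plain JDBC (the table name and the row representation are placeholders; a real implementation would need type mapping and error handling):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;

    public class DynamicInserter {

        public static void insertRows(Connection conn, String table, List<Map<String, Object>> rows)
                throws SQLException {
            // read the table's columns from the JDBC metadata
            List<String> columns = new ArrayList<>();
            try (ResultSet rs = conn.getMetaData().getColumns(null, null, table, null)) {
                while (rs.next()) {
                    columns.add(rs.getString("COLUMN_NAME"));
                }
            }

            // build a parametrized insert statement from the column list
            String placeholders = String.join(", ", Collections.nCopies(columns.size(), "?"));
            String sql = "INSERT INTO " + table + " (" + String.join(", ", columns)
                    + ") VALUES (" + placeholders + ")";

            // loop through the rows, performing a batch insert
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (Map<String, Object> row : rows) {
                    for (int i = 0; i < columns.size(); i++) {
                        ps.setObject(i + 1, row.get(columns.get(i)));
                    }
                    ps.addBatch();
                }
                ps.executeBatch();
            }
        }
    }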
My advice: get an ETL tool. This sounds like a simple problem, but it's full of idiosyncrasies; getting even an 80% solution will be tough and time-consuming.
jOOQ (the library you referenced in your question) can be used without code generation as indicated in the jOOQ manual:
http://www.jooq.org/doc/latest/manual/getting-started/use-cases/jooq-as-a-standalone-sql-builder
http://www.jooq.org/doc/latest/manual/sql-building/plain-sql
When searching through the user group, you'll find other users leveraging jOOQ in the way you intend.
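For a rough idea of what that standalone usage looks like (no code generation; the table and column names here are placeholders, not from your use case):

    import static org.jooq.impl.DSL.field;
    import static org.jooq.impl.DSL.table;

    import org.jooq.DSLContext;
    import org.jooq.SQLDialect;
    import org.jooq.impl.DSL;

    public class DynamicInsertBuilder {

        // Builds an INSERT for columns discovered at runtime; no connection is needed just to build SQL.
        public static String buildInsert(String tableName, String col1, String col2) {
            DSLContext create = DSL.using(SQLDialect.MYSQL);
            String sql = create.insertInto(table(tableName), field(col1), field(col2))
                               .values("some value", 42)
                               .getSQL();
            // sql renders the values as bind placeholders, e.g. "insert into t (col_a, col_b) values (?, ?)"
            return sql;
        }
    }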
The steps you need to do are:
read the rows
build each row into an object
transform the above object to target object
insert the target object into the db
Among the above 4 steps, the only one you really need a library for is step 3.
And for the above purpose, you can use Transmorph, EZMorph, Commons-BeanUtils, Dozer, etc.
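As a small illustration of step 3 with Dozer (SourceRow and TargetRecord are made-up bean classes; Dozer copies properties with matching names by default):

    import org.dozer.DozerBeanMapper;
    import org.dozer.Mapper;

    public class RowTransformer {

        // Hypothetical source/target beans with matching property names
        public static class SourceRow {
            private String name;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        public static class TargetRecord {
            private String name;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        private final Mapper mapper = new DozerBeanMapper();

        public TargetRecord transform(SourceRow sourceRow) {
            return mapper.map(sourceRow, TargetRecord.class); // copies matching properties, e.g. name -> name
        }
    }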
I have to go through a database and modify it according to some logic. The problem looks something like this: I have a history table in my database and I have to modify it.
Before modifying anything I have to look at whether an object (which has several rows in the history table) had a certain state, say 4 or 9. If it had state 4 or 9 then I have to check the rows between the currently found row and the next state 4 or 9 row. If such a row (between those states) has a specific value in a specific column then I do something in the next row. I hope this is simple enough to give you an idea. I have to do this check for all the objects. Keep in mind that any object can be modified anywhere in its life cycle (of course until it reaches a final state).
I am using SQL Server 2005 and Hibernate. AFAIK I can not do such a complicated check in Transact-SQL! So what would you recommend for me to do? So far I have been thinking of doing it as a JUnit test. This would have the advantage of having Hibernate to help me do the modifications, and I would have Java for lists and other data structures I might need that don't exist in SQL. If I am doing it as a JUnit test, I am not losing my mapping files!
I am curious what approaches you would use.
I think you should be able to use cursors to manage the complicated checks in SQL Server. You didn't mention how frequently you need to do this, but if this is a one-time thing, you can either do it in Java or SQL Server, depending on your comfort level.
If this check needs to be applied on every CRUD operation, perhaps a database trigger is the way to go. If the logic may change frequently over time, I would much rather write the checks in Hibernate, assuming no one will hit the database directly.
I have the following problem: I want to render a news stream of short messages based on localized texts. In various places of these messages I have to insert parameters to "customize" them. I guess you know what I mean ;)
My question probably falls into the "Which is the best style to do it?" category: How would you store these parameters (they may be Strings and Numbers that need to be formatted according to Locale) in the database? I'm using Hibernate to do the ORM and I can think of the following solutions:
build a combined String and save it as such (ugly and hard to maintain I think)
do some kind of fancy normalization and make every parameter a single row in the database (clean I guess, but a performance nightmare)
Put the params into an Array, Map or other Java data structure and save it in binary format (probably causes a lot of overhead size-wise)
I tend towards option #3, but I'm afraid that it might be too costly in terms of size in the database. What do you think?
If you can afford the performance hit of using the normalized approach of having a separate table I would go with this approach. We use the same approach as your first suggestion at work, and it gets messy, especially when you reach the column limit and key/values start getting truncated!
Do the normalization.
I would suggest something like:
Table Message
    id

Table Params
    message_id
    key
    value
Storing serialized Java objects in the database is quite a bad thing in most cases, as they are hard to maintain and you cannot access them with 'simple' SQL tools.
The performance impact is not that big, as you can fetch everything together in a single select using a join.
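Since the question mentions Hibernate, a minimal sketch of such a normalized mapping with JPA annotations could look like this (the key/value columns from the sketch above are renamed to param_key/param_value here only to avoid SQL reserved words; adjust to your schema):

    import java.util.HashMap;
    import java.util.Map;

    import javax.persistence.CollectionTable;
    import javax.persistence.Column;
    import javax.persistence.ElementCollection;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.MapKeyColumn;

    @Entity
    public class Message {

        @Id
        private Long id;

        // One row per parameter in a separate params table, keyed by parameter name
        @ElementCollection
        @CollectionTable(name = "params", joinColumns = @JoinColumn(name = "message_id"))
        @MapKeyColumn(name = "param_key")
        @Column(name = "param_value")
        private Map<String, String> params = new HashMap<>();
    }

With this mapping each parameter becomes one row in the params table, and the message together with its parameters can be fetched with a single join.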
It depends a bit. Is the number of parameters huge for each entity? If it is not, the second option is probably the best.
If you don't want the extra queries caused by lazy loading, you can always change the fetch type for the variable number of parameters; that would only add one join to a query you were already doing. In normal conditions it is not a big price to pay.
Also, the first and the third options forever rule out any kind of query over the parameters. That is a huge technical debt for the future that I would not be willing to take on.
Directly build it as a string and save it.