I'm trying to get a list of multiple columns from my table using QueryDSL, and automatically fill my DB object, like this example in an older manual:
List<CatDTO> catDTOs = query.from(cat)
.list(EConstructor.create(CatDTO.class, cat.id, cat.name));
The problem is that it looks like the EConstructor class was removed in version 2.2.0, and all the examples I find now are like this:
List<Object[]> rows = query.from(cat)
.list(cat.id, cat.name);
Which forces me to manually cast all the objects into my CatDTO class.
Is there any alternative to this? Any EConstructor alternative?
EConstructor has been replaced with ConstructorExpression in Querydsl 2.0. So your example would become
List<CatDTO> catDTOs = query.from(cat)
.list(ConstructorExpression.create(CatDTO.class, cat.id, cat.name));
You can also annotate the CatDTO constructor with @QueryProjection, which generates a QCatDTO class, and query like this
List<CatDTO> catDTOs = query.from(cat)
.list(new QCatDTO(cat.id, cat.name));
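For reference, a minimal sketch of such an annotated DTO (the Long/String field types are assumptions; in Querydsl 2.x/3.x the annotation lives in com.mysema.query.annotations, in 4.x in com.querydsl.core.annotations):
public class CatDTO {
    private final Long id;
    private final String name;

    @QueryProjection
    public CatDTO(Long id, String name) {
        this.id = id;
        this.name = name;
    }
}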
Alternatively you can use the QTuple projection which provides a more generic access option
List<Tuple> rows = query.from(cat)
.list(new QTuple(cat.id, cat.name));
The actual values can be accessed via their path like this
tuple.get(cat.id)
and
tuple.get(cat.name)
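For example, a small sketch (assuming cat.id is a numeric path and cat.name a string path):
for (Tuple tuple : rows) {
    Long id = tuple.get(cat.id);       // typed access, no manual casting
    String name = tuple.get(cat.name);
}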
Tuple projection will probably be used in Querydsl 3.0 for multiple columns projections instead of Object arrays.
Using Querydsl 4 and Java 8 streams:
List<CatDTO> cats = new JPAQueryFactory(entityManager)
.select(cat.id, cat.name)
.from(cat)
.fetch()
.stream()
.map(c -> new CatDTO(c.get(cat.id), c.get(cat.name)))
.collect(Collectors.toList());
Another alternative is to use the Projections class. Projections.bean(...) populates the object via its setters from the fields you pass as parameters, while Projections.constructor(...) invokes a matching constructor, much like the old EConstructor did. Example:
List<CatDTO> catDTOs = query.from(cat)
.list(Projections.bean(CatDTO.class, cat.id, cat.name));
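If CatDTO has no setters, the constructor-based variant (assuming it exposes an (id, name) constructor) would be:
List<CatDTO> catDTOs = query.from(cat)
    .list(Projections.constructor(CatDTO.class, cat.id, cat.name));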
Reference: http://www.querydsl.com/static/querydsl/4.0.5/apidocs/com/querydsl/core/types/Projections.html
I would like to manipulate a jOOQ DSL query, changing its SELECT columns and WHERE conditions.
For example:
DSLContext ctx = ...;
SelectHavingStep query = ctx.select(MyEntity.MY_ENTITY.ZIP, DSL.count(MyEntity.MY_ENTITY.ZIP))
.from(MyEntity.MY_ENTITY)
.where(MyEntity.MY_ENTITY.ID.gt("P"))
.groupBy(MyEntity.MY_ENTITY.ZIP);
Use case 1:
I would like to pass the above query to a utility class that will produce the same query, just with a different SELECT, for example:
ctx.select(DSL.count())
.from(MyEntity.MY_ENTITY)
.where(MyEntity.MY_ENTITY.ID.gt("P"))
.groupBy(MyEntity.MY_ENTITY.ZIP);
This particular example is to be able to create paginated results showing the total number of rows of the query.
Use case 2:
I would like to pass the above query to a utility class that will produce the same query, just with a modified WHERE clause, for example:
SelectHavingStep query =
ctx.select(MyEntity.MY_ENTITY.ZIP, DSL.count(MyEntity.MY_ENTITY.ZIP))
.from(MyEntity.MY_ENTITY)
.where(
MyEntity.MY_ENTITY.ID.gt("P")
.and(MyEntity.MY_ENTITY.ZIP.in("100", "200", "300"))
)
.groupBy(MyEntity.MY_ENTITY.ZIP);
This particular example is to further restrict a query based on some criteria (i.e. data visibility based on the user doing the query).
Is this possible?
Currently I'm using helper classes to do this at query construction time in the application code. I would like to move the responsibility to a library so it can be enforced transparently to the app.
Thanks.
You shouldn't try to alter jOOQ objects; instead, you should create them dynamically in a functional way. There are different ways to achieve your use cases, e.g.
Use case 1:
An approach to generic paginated querying can be seen here: https://blog.jooq.org/calculating-pagination-metadata-without-extra-roundtrips-in-sql/
Ideally, you would avoid the extra round trip for the COUNT(*) query and use a COUNT(*) OVER () window function. If that's not available in your SQL dialect, then you could do this, instead:
public ResultQuery<Record> mySelect(
boolean count,
Supplier<List<Field<?>>> select,
Function<? super SelectFromStep<Record>, ? extends ResultQuery<Record>> f
) {
return f.apply(count ? ctx.select(count()) : ctx.select(select.get()));
}
And then use it like this:
mySelect(false,
() -> List.of(MY_ENTITY.ZIP),
q -> q.from(MY_ENTITY)
.where(MY_ENTITY.ID.gt("P"))
.groupBy(MY_ENTITY.ZIP)
).fetch();
This is just one way to do it. There are many others; see the links below.
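For completeness, the COUNT(*) OVER () variant mentioned earlier could look roughly like this for the query from the question (a sketch; it assumes your dialect supports window functions and the usual static imports of DSL methods):
ctx.select(
       MY_ENTITY.ZIP,
       count(MY_ENTITY.ZIP),
       // window function evaluated after GROUP BY: each row carries the total number of groups
       count().over().as("total_rows"))
   .from(MY_ENTITY)
   .where(MY_ENTITY.ID.gt("P"))
   .groupBy(MY_ENTITY.ZIP)
   .fetch();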
Use case 2:
Just take the above example one step further and extract the logic used to create the WHERE clause into yet another function, e.g.
public Condition myWhere(Function<? super Condition, ? extends Condition> f) {
return f.apply(MY_ENTITY.ID.gt("P"));
}
And now use it as follows:
mySelect(false,
() -> List.of(MY_ENTITY.ZIP),
q -> q.from(MY_ENTITY)
.where(myWhere(c -> c.and(MY_ENTITY.ZIP.in("100", "200", "300"))))
.groupBy(MY_ENTITY.ZIP)
).fetch();
Again, there are many different ways to solve this, depending on what is the "common part", and what is the "user-defined part". You can also abstract over your MY_ENTITY table and pass around functions that produce the actual table.
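For instance, a rough sketch of abstracting over the table and the grouping column (the method and parameter names here are hypothetical):
public ResultQuery<?> countByGroup(Table<?> table, Field<String> groupColumn, Condition where) {
    // Same SELECT ... GROUP BY shape as above, but the table, column and condition are supplied by the caller
    return ctx.select(groupColumn, count(groupColumn))
              .from(table)
              .where(where)
              .groupBy(groupColumn);
}
This could then be called as countByGroup(MY_ENTITY, MY_ENTITY.ZIP, MY_ENTITY.ID.gt("P")).fetch().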
More information
See also these resources:
https://www.jooq.org/doc/latest/manual/sql-building/dynamic-sql/
https://blog.jooq.org/a-functional-programming-approach-to-dynamic-sql-with-jooq/
In a normal SQL query, to select a particular field (emp_id, emp_name, emp_email) we use
select emp_name from table_name (to display/fetch only emp_name)
How can I do the same thing in Hazelcast?
static IMap<String, Model> map = hazelCast.getMap("data");
map.put("1", new Model(emp_id, emp_name, emp_email));
map.values(new SqlPredicate("data[any].entity_id"));
How to select only emp_name values in the result?
What you're looking for is called a Projection in Hazelcast. What is stored in your map are objects of type Map.Entry; anytime you want to return something other than a Map.Entry, you can create a Projection to transform the Entry into the desired return type.
When you need to do a non-trivial transformation, you can implement custom Projections, but there are built-in Projections you can simply reuse when you're just trying to return a single attribute or a set of attributes from the entry.
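For example, a custom projection for this case might look like the following sketch (it assumes Model has a getEmpName() getter, and that you're on Hazelcast 3.x where Projection is an abstract class; in 4.x and later it is an interface, so use implements there):
public class EmpNameProjection extends Projection<Map.Entry<String, Model>, String> {
    @Override
    public String transform(Map.Entry<String, Model> entry) {
        // extract just the employee name from each stored entry
        return entry.getValue().getEmpName();
    }
}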
So in your case, you can use the built-in singleAttribute projection:
Projection<Map.Entry<String, Model>, String> empNameProjection = Projections.singleAttribute("emp_name");
And then you can use IMap.project to return the projection for all entries, or for entries matching a predicate:
Collection<String> names = map.project(empNameProjection, mySqlPredicate);
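Putting it together, a small sketch (the predicate is just an illustration; "emp_name" and "emp_id" are assumed to match the field names of your Model class):
// Only the names of employees whose emp_id is greater than 100
Collection<String> names = map.project(
        Projections.singleAttribute("emp_name"),
        new SqlPredicate("emp_id > 100"));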
See:
https://docs.hazelcast.org/docs/latest-dev/manual/html-single/#projection-api
I want to write a Spring Data JPA repository query that applies an AND condition across 2 columns, where one of the columns can match multiple values.
SQL query for the same:
select * from table_name where col1='col1_val' and col2 IN
('col2_val_a','col2_val_b','col2_val_c');
I know that for the AND operation I can extend JpaRepository and create a method like this:
List<MyPoJoObject> findByCol1AndCol2(String col1_val,String col2_val);
and for the IN operation we can use: findByCol2In(Collection<String> col2_val)
But I don't know how to combine both of these derived query methods into one, matching the SQL statement above.
You can use the following method:
List<MyPoJoObject> findByCol1AndCol2In(String col1_val, Collection<String> col2_val);
At this link, repository-query-keywords, you can find the repository query keywords that you can use and combine.
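For illustration, the full repository might look like this sketch (the entity name MyPoJoObject and the Long ID type are assumptions):
import java.util.Collection;
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

public interface MyPoJoRepository extends JpaRepository<MyPoJoObject, Long> {

    // Derived query: ... where col1 = ?1 and col2 in (?2)
    List<MyPoJoObject> findByCol1AndCol2In(String col1Value, Collection<String> col2Values);
}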
You can certainly combine both into one method.
List<MyPoJoObject> findByCol1AndCol2In(String col1_val,String[] col2_val);
Try this. I am not sure if it will accept Collection<String>. I will try that and update the answer.
HTH.
If you want to apply this logic to more than two columns, the derived method name becomes verbose.
Instead of being stuck with Spring's naming conventions, you can write your own JPA query.
Example:
#Query("select pojo from MyPoJoObject as pojo where pojo.col1 = :col1_val and pojo.col2 in :col2_val")
List<MyPoJoObject> findByColumns(String col1_val, List<String> col2_val);
I am trying to execute .fetchMap(key, value) with jOOQ but I want to process the key through a custom converter.
The docs are very clear on how to use converters and how to use .fetchMap() but I can't find anywhere a way of combining both.
Could this feature be missing from my jOOQ version (3.9)?
Converter (and Binding) implementations are bound to the Field reference by the code generator, or you can do it manually like this:
// Using this static import
import static org.jooq.impl.DSL.*;
// Assuming a VARCHAR column in the database:
DataType<MyType> type = SQLDataType.VARCHAR.asConvertedDataType(
new MyConverter<String, MyType>());
Field<MyType> field = field(name("MY_TABLE", "MY_FIELD"), type);
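For reference, MyConverter above stands for an implementation of org.jooq.Converter. A minimal, non-generic sketch (MyType is a placeholder for your own type; with this version you would instantiate it as new MyConverter() rather than new MyConverter<String, MyType>()):
public class MyConverter implements Converter<String, MyType> {

    @Override
    public MyType from(String databaseObject) {
        // database value -> user type (the MyType(String) constructor is an assumption)
        return databaseObject == null ? null : new MyType(databaseObject);
    }

    @Override
    public String to(MyType userObject) {
        // user type -> database value
        return userObject == null ? null : userObject.toString();
    }

    @Override
    public Class<String> fromType() {
        return String.class;
    }

    @Override
    public Class<MyType> toType() {
        return MyType.class;
    }
}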
Now, whenever you fetch this field in your SELECT statements, e.g.
Result<Record1<MyType>> result =
DSL.using(configuration)
.select(field)
.from(...)
.fetch();
jOOQ will automatically apply your converter while fetching results from the underlying JDBC ResultSet. You will never see the original String value in your result.
The ResultQuery.fetchMap(Field, Field) method that you've mentioned is just short for fetch() and then Result.intoMap(Field, Field). In other words, the converter will already have been applied automatically by the time you call fetchMap() or intoMap(), so there is no need to do anything specific. Just use your field as an argument to fetchMap():
Map<MyType, OtherType> result =
DSL.using(configuration)
.select(field, otherField)
.from(...)
.fetchMap(field, otherField);
In jOOQ if I want to fetch a row of a table into a jOOQ autogenerated POJOs I do, for instance:
dsl.selectFrom(USER)
.where(USER.U_EMAIL.equal(email))
.fetchOptionalInto(User.class);
Now, suppose that I want to do a join between two tables, e.g. USER and ROLE, how can I fetch the result into the POJOs for these two tables?
Using nested collections
With more recent versions of jOOQ, you'll typically use a set of ORDBMS features, including:
nested collections using MULTISET
nested records and nested table records
ad-hoc converters
You'll write something like this, to produce jOOQ types:
Result<Record2<UserRecord, Result<Record1<RoleRecord>>>> result =
dsl.select(
USER,
multiset(
selectFrom(USER_ROLE.role())
.where(USER_ROLE.USER_ID.eq(USER.ID))
))
.from(USER)
.where(USER.U_EMAIL.equal(email))
.fetch();
Or, by using said ad-hoc converters, to produce your own types:
List<User> result =
dsl.select(
USER.U_ID,
USER.U_EMAIL,
...
multiset(
selectFrom(USER_ROLE.role())
.where(USER_ROLE.USER_ID.eq(USER.ID))
).convertFrom(r -> r.map(Records.mapping(Role::new))))
.from(USER)
.where(USER.U_EMAIL.equal(email))
.fetch(Records.mapping(User::new));
Historic answers / alternatives
There are other ways to achieve something like the above, for completeness' sake:
Fetching the POJOs into a Map
This is one solution using ResultQuery.fetchGroups(RecordMapper, RecordMapper)
Map<UserPojo, List<RolePojo>> result =
dsl.select(USER.fields())
.select(ROLE.fields())
.from(USER)
.join(USER_TO_ROLE).on(USER.USER_ID.eq(USER_TO_ROLE.USER_ID))
.join(ROLE).on(ROLE.ROLE_ID.eq(USER_TO_ROLE.ROLE_ID))
.where(USER.U_EMAIL.equal(email))
.fetchGroups(
// Map records first into the USER table and then into the key POJO type
r -> r.into(USER).into(UserPojo.class),
// Map records first into the ROLE table and then into the value POJO type
r -> r.into(ROLE).into(RolePojo.class)
);
Note, if you want to use LEFT JOIN instead (in case a user does not necessarily have any roles, and you want to get an empty list per user), you'll have to translate NULL roles to empty lists yourself.
Make sure you have activated generating equals() and hashCode() on your POJOs in order to be able to put them in a HashMap as keys:
<pojosEqualsAndHashCode>true</pojosEqualsAndHashCode>
Using custom, hierarchical POJOs and fetching them into a nested collection
A frequently re-occurring question is how to fetch nested collections in jOOQ, i.e. what if your result data structures look like this:
class User {
long id;
String email;
List<Role> roles;
}
class Role {
long id;
String name;
}
Starting with jOOQ 3.14, and if your RDBMS supports it, you can now use SQL/XML or SQL/JSON as an intermediary format to nest collections, and then use Jackson, Gson, or JAXB to map the document back to your Java classes (or keep the XML or JSON, if that's what you needed in the first place). For example:
List<User> users =
ctx.select(
USER.ID,
USER.EMAIL,
field(
select(jsonArrayAgg(jsonObject(ROLE.ID, ROLE.NAME)))
.from(ROLE)
.join(USER_TO_ROLE).on(ROLE.ROLE_ID.eq(USER_TO_ROLE.ROLE_ID))
.where(USER_TO_ROLE.USER_ID.eq(USER.ID))
).as("roles")
)
.from(USER)
.where(USER.EMAIL.eq(email))
.fetchInto(User.class);
Note that JSON_ARRAYAGG() aggregates empty sets into NULL, not into an empty []. If that's a problem, use COALESCE().
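A hedged sketch of that COALESCE wrapping, applied to the aggregate from the example above (jOOQ 3.14+ JSON support assumed):
// Users without roles now get [] instead of NULL
select(coalesce(
    jsonArrayAgg(jsonObject(ROLE.ID, ROLE.NAME)),
    jsonArray()))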
Continuing the good answer of Lukas Eder:
If you need a list structure in return, or you cannot override equals() or hashCode() in your POJO, you can fetch into a List<Pair>:
List<Pair<UserPojo, RolePojo>> result = dsl.select(USER.fields())
.select(ROLE.fields())
.from(USER)
.join(USER_TO_ROLE).on(USER.USER_ID.eq(USER_TO_ROLE.USER_ID))
.join(ROLE).on(ROLE.ROLE_ID.eq(USER_TO_ROLE.ROLE_ID))
.where(USER.U_EMAIL.equal(email))
.fetch(r -> {
    UserPojo userPojo = r.into(USER).into(UserPojo.class);
    RolePojo rolePojo = r.into(ROLE).into(RolePojo.class);
    // ImmutablePair / Pair from Apache Commons Lang 3
    return new ImmutablePair<>(userPojo, rolePojo);
});