I'm joining some tables from jOOQ and I'd like to use a RecordMapper to parse the result into my pojo AType.
final List<AType> typeList = dsl.select()
.from(TABLEA)
.join(TABLEB).on(TABLEA.ID.equal(TABLEB.ID))
.fetch()
.map((RecordMapper<Record, AType>) record -> {
//Extract field values from Record
return new AType(....);
});
As I explained in a comment, I would like to know how to convert a Field object from the Record into the contained value.
The method you're looking for is Record.getValue(Field) (or also Record.get(Field) from jOOQ 3.8 onwards):
.map((RecordMapper<Record, AType>) record -> {
//Extract field values from Record
return new AType(record.getValue(TABLEA.ID), ...);
});
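A slightly fuller sketch of the same mapper, assuming AType has a matching constructor and using the jOOQ 3.8+ get() spelling (getValue() works the same way on older versions); TABLEB.SOME_FIELD is a hypothetical column used only for illustration:
final List<AType> typeList = dsl.select()
    .from(TABLEA)
    .join(TABLEB).on(TABLEA.ID.equal(TABLEB.ID))
    .fetch()
    .map(record -> new AType(
        record.get(TABLEA.ID),         // typed as TABLEA.ID's Java type
        record.get(TABLEB.SOME_FIELD)  // hypothetical joined column
    ));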
Related
I have a List<MyBean> that is mostly, but not completely, populated. Later code needs to set the remaining fields from a DB sub-fetch which is keyed to the original list's planId field.
List<MyBean> results = service.getPrimaryResults(); // Contains field MyBean.planId populated
// ...
populateAdditionalFields(results);
Sub-method with my solution, which looks too complicated and is probably inefficient:
public void populateAdditionalFields(List<MyBean> primaryResults) {
    List<Object[]> additionalResults = service.getAdditionResults();
    /* This service call returns List<Object[]>, as follows:
       [0] : planId
       [1] : reminderHistoryStr
       [2] : specialStr
       etc.
       These are the remaining fields that need to be populated
    */
    additionalResults.stream().forEach(obj -> {
        // Find the PrimaryResult with this planId, in obj[0]
        Optional<MyBean> primaryResult =
            primaryResults.stream().filter(x -> x.getPlanId().equals(obj[0])).findFirst();
        // Now set the other fields on that bean
        if (primaryResult.isPresent()) {
            primaryResult.get().setReminderHistory((String) obj[1]);
            primaryResult.get().setSpecialStr((String) obj[2]);
            // etc.
        }
    });
}
This seems inefficient, since I'm doing nested stream()s.
Is there an efficient solution to set the remaining fields from one collection to another where there is a match on the key field?
You can create a map of the primary results (planId as key), so that later you can look each one up in O(1) while looping over additionalResults.
Map<String, MyBean> resultsMap = primaryResults.stream()
    .collect(Collectors.toMap(
        MyBean::getPlanId,
        Function.identity()));

additionalResults.stream().forEach(obj -> {
    // Find the PrimaryResult with this planId, in obj[0]
    Optional<MyBean> primaryResult = Optional.ofNullable(resultsMap.get(obj[0]));
    // Now set the other fields on that bean
    if (primaryResult.isPresent()) {
        primaryResult.get().setReminderHistory((String) obj[1]);
        primaryResult.get().setSpecialStr((String) obj[2]);
        // etc.
    }
});
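A minimal variation on the same idea, assuming planId is unique per bean (otherwise Collectors.toMap needs a merge function, as shown here): a plain null check avoids wrapping the lookup in an Optional.
Map<String, MyBean> resultsMap = primaryResults.stream()
    .collect(Collectors.toMap(MyBean::getPlanId, Function.identity(), (a, b) -> a)); // keep first on duplicate planId

for (Object[] obj : additionalResults) {
    MyBean bean = resultsMap.get(obj[0]); // O(1) lookup by planId
    if (bean != null) {
        bean.setReminderHistory((String) obj[1]);
        bean.setSpecialStr((String) obj[2]);
        // etc.
    }
}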
I need to create a mongotemplate database query to get a specific number of elements into a list.
At the moment I just get all the elements with findAll(), and then I modify the obtained data using code that I have written within the service class.
Initially, I have a Laptop class with fields price::BigDecimal and name::String and I use findAll() to get a list of them.
Then I put those in a HashMap, where the key is the name field and each value list is sorted from most expensive to cheapest.
Map<String, List<Laptop>> laptopsMap = laptopsFrom.stream()
    .collect(Collectors.groupingBy(Laptop::getName,
        Collectors.collectingAndThen(Collectors.toList(),
            l -> l.stream()
                .sorted(Comparator.comparing(Laptop::getPrice).reversed())
                .collect(Collectors.toList())
        ))
    );
So the results are like below:
[{"MSI", [2200, 1100, 900]},
{"HP", [3200, 900, 800]},
{"Dell", [2500, 2000, 700]}]
Then, I use the code in the bottom of the question, to create a Laptop list with the following contents:
[{"HP", 3200}, {"Dell", 2500}, {"MSI", 2200},
{"Dell", 2000}, {"MSI", 1100}, {"HP", 900},
{"MSI", 900}, {"HP", 800}, {"Dell", 700}]
So basically, I iterate over the map and, for each key, extract the next-in-line element of its list.
do {
    for (Map.Entry<String, List<Laptop>> entry :
            laptopsMap.entrySet()) {
        String key = entry.getKey();
        List<Laptop> value = entry.getValue();
        finalResultsList.add(value.get(0));
        value.remove(0);
        if (value.size() == 0) {
            laptopsMap.entrySet()
                .removeIf(pr -> pr.getKey().equals(key));
        } else {
            laptopsMap.replace(key, value);
        }
    }
} while (!laptopsMap.isEmpty());
Instead of all this in-class code, I need to use a MongoTemplate database query, but I can't seem to figure out how to create such a complex one. I have read material about Aggregation but have not found anything helpful enough. At the moment, I have started putting a query together as shown below:
Query query = new Query();
query.limit(numOfLaptops);
query.addCriteria(Criteria.where(Laptop.PRICE).gte(minPrice));
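For what it's worth, here is a minimal sketch of how the filter-sort-limit part might look with MongoTemplate. It assumes an injected mongoTemplate bean and reuses the Laptop.PRICE field-name constant from above; the per-name interleaving from the in-memory code would still need an aggregation pipeline or a post-processing step, so this only covers the simple part:
// Assumes org.springframework.data.domain.Sort and an injected MongoTemplate
Query query = new Query();
query.addCriteria(Criteria.where(Laptop.PRICE).gte(minPrice));
query.with(Sort.by(Sort.Direction.DESC, Laptop.PRICE)); // most expensive first
query.limit(numOfLaptops);                              // cap the result size in the database

List<Laptop> laptops = mongoTemplate.find(query, Laptop.class);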
I'm using jOOQ and am trying to generate a near copy of a data set within the same table. In the process I want to update the value of one field to a known value. I've been looking at the docs and trying variations with no luck yet. Here is my approach: updating the REGISTRATION table and setting the 'stage' field to the value 6 (where it was 5), so I'll end up with the original data plus a duplicate set with just the different stage value.
in pseudo code
insert into Registration (select * from Registration where stage=5) set stage=6
I tried the code below, thinking I could add a .set(...) method call to set the value, but that doesn't seem to be valid.
create.insertInto(REGISTRATION)
.select(
(selectFrom(REGISTRATION)
.where(REGISTRATION.STAGE.eq(5))
)
).execute();
I'm not aware of a database that supports an INSERT .. SELECT .. SET syntax, and even if there were such a syntax, it certainly wouldn't be SQL standards compliant. The way forward here is to write:
In SQL:
INSERT INTO registration (col1, col2, col3, stage, col4, col5)
SELECT col1, col2, col3, 6, col4, col5
FROM registration
WHERE stage = 5;
In jOOQ:
create.insertInto(REGISTRATION)
      .columns(
          REGISTRATION.COL1,
          REGISTRATION.COL2,
          REGISTRATION.COL3,
          REGISTRATION.STAGE,
          REGISTRATION.COL4,
          REGISTRATION.COL5)
      .select(
          select(
              REGISTRATION.COL1,
              REGISTRATION.COL2,
              REGISTRATION.COL3,
              val(6),
              REGISTRATION.COL4,
              REGISTRATION.COL5)
          .from(REGISTRATION)
          .where(REGISTRATION.STAGE.eq(5)))
      .execute();
The following static import is implied:
import static org.jooq.impl.DSL.*;
In jOOQ, dynamically
Since you're looking for a dynamic SQL solution, here's how this could be done:
static <T> int copy(
    DSLContext create, Table<?> table, Field<T> field,
    T oldValue, T newValue
) {
    List<Field<?>> into = new ArrayList<>();
    List<Field<?>> from = new ArrayList<>();

    into.addAll(Stream.of(table.fields())
        .filter(f -> !field.equals(f))
        .collect(toList()));
    from.addAll(into);

    into.add(field);
    from.add(val(newValue));

    return
    create.insertInto(table)
          .columns(into)
          .select(
              select(from)
              .from(table)
              .where(field.eq(oldValue)))
          .execute();
}
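Applied to the registration case from the question (assuming create is your DSLContext), the call might look like this:
copy(create, REGISTRATION, REGISTRATION.STAGE, 5, 6);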
Thanks Lukas for your answer, which I'll use a version of as it's nice and general. My own answer, which I just got to work, is less general but might be a useful reference for other people who come this way, especially as it takes account of the identity field "id", which can otherwise cause problems.
public void duplicate(int baseStage, int newStage) {
    Field<?>[] allFieldsExceptId = Stream.of(REGISTRATION.fields())
        .filter(field -> !field.getName().equals("id"))
        .toArray(Field[]::new);

    Field<?>[] newFields = Stream.of(allFieldsExceptId).map(field -> {
        if (field.getName().contentEquals("stage")) {
            return val(newStage);
        } else {
            return field;
        }
    }).toArray(Field[]::new);

    create.insertInto(REGISTRATION)
          .columns(allFieldsExceptId)
          .select(
              select(newFields)
              .from(REGISTRATION)
              .where(REGISTRATION.STAGE.eq(baseStage)))
          .execute();
}
I have some very complicated SQL (does some aggregation, some counts based on max value etc) so I want to use SQLQuery rather than Query. I created a very simple Pojo:
public class SqlCount {
private String name;
private Double count;
// getters, setters, constructors etc
Then when I run my SQLQuery, I want hibernate to populate a List for me, so I do this:
Query hQuery = sqlQuery.setResultTransformer(Transformers.aliasToBean(SqlCount.class));
Now I had a problem where depending on what the values are for 'count', Hibernate will variably retrieve it as a Long, Double, BigDecimal or BigInteger. So I use the addScalar function:
sqlQuery.addScalar("count", StandardBasicTypes.DOUBLE);
Now my problem. It seems that if you don't use the addScalar function, Hibernate will populate all of your fields from all of the columns in your SQL result (i.e. it will try to populate both 'name' and 'count'). However, if you use the addScalar function, it only maps the columns that you listed; all other columns seem to be discarded and the corresponding fields are left as null. In this instance it wouldn't be too bad to just list both "name" and "count", but I have other scenarios where I need a dozen or so fields - do I really have to list them all? Is there some way in hibernate to say "map all fields automatically, like you used to, but by the way map this field as a Double"?
Is there some way in hibernate to say "map all fields automatically"?
No. Check the documentation here, in section 16.1.1, "Scalar queries":
The most basic SQL query is to get a list of scalars (values).
sess.createSQLQuery("SELECT * FROM CATS").list();
sess.createSQLQuery("SELECT ID, NAME, BIRTHDATE FROM CATS").list();
These will return a List of Object arrays (Object[]) with scalar values for each column in the CATS table. Hibernate will use ResultSetMetadata to deduce the actual order and types of the returned scalar values.
To avoid the overhead of using ResultSetMetadata, or simply to be more explicit in what is returned, one can use addScalar():
sess.createSQLQuery("SELECT * FROM CATS")
.addScalar("ID", Hibernate.LONG)
.addScalar("NAME", Hibernate.STRING)
.addScalar("BIRTHDATE", Hibernate.DATE)
I use this solution; I hope it works for you too.
With it you can populate whatever you select in the SQL, return each row as a Map, and cast the values directly.
Since Hibernate 5.2 the method setResultTransformer() is deprecated, but it still works fine for me.
If you don't want to write an extra addScalar() call for each column in the SQL, you can implement the ResultTransformer interface and do the casting as you wish.
For example, let's say we have this query:
/*ORACLE SQL*/
SELECT
SEQ AS "code",
CARD_SERIAL AS "cardSerial",
INV_DATE AS "date",
PK.GET_SUM_INV(SEQ) AS "sumSfterDisc"
FROM INVOICE
ORDER BY "code";
Note: I use double quotes for case-sensitive column aliases; check This.
After creating the Hibernate session you can create the query like this:
/*Java*/
List<Map<String, Object>> list = session.createNativeQuery("SELECT\n" +
" SEQ AS \"code\",\n" +
" CARD_SERIAL AS \"cardSerial\",\n" +
" INV_DATE AS \"date\",\n" +
" PK.GET_SUM_INV(SEQ) AS \"sumSfterDisc\"\n" +
"FROM INVOICE\n" +
"ORDER BY \"code\"")
.setResultTransformer(new Trans())
.list();
Now for the key part, the Trans class:
/*Java*/
public class Trans implements ResultTransformer {

    private SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.US);

    @Override
    public Object transformTuple(Object[] objects, String[] strings) {
        Map<String, Object> map = new LinkedHashMap<>();
        for (int i = 0; i < strings.length; i++) {
            if (objects[i] == null) {
                continue;
            }
            if (objects[i] instanceof BigDecimal) {
                map.put(strings[i], ((BigDecimal) objects[i]).longValue());
            } else if (objects[i] instanceof Timestamp) {
                map.put(strings[i], dateFormat.format((Timestamp) objects[i]));
            } else {
                map.put(strings[i], objects[i]);
            }
        }
        return map;
    }

    @Override
    public List transformList(List list) {
        return list;
    }
}
Here you override the two methods transformTuple and transformList. In transformTuple you get two parameters: Object[] objects, the column values of the row, and String[] strings, the column names; Hibernate guarantees the columns arrive in the same order as in the query.
Now the fun begins: transformTuple is invoked for each row returned by the query, so you can build the row as a Map or create a new object and set its fields.
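A quick usage sketch of the result, using the column aliases from the query above (the casts mirror what Trans produces):
for (Map<String, Object> row : list) {
    Long code = (Long) row.get("code");        // BigDecimal narrowed to Long by Trans
    String invDate = (String) row.get("date"); // Timestamp formatted to a String by Trans
    Object cardSerial = row.get("cardSerial");
    // ...
}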
When I currently query with Jooq I am explicitly casting each record-object to the expected record-type.
Result<Record> result = sql.select().from(Tables.COUNTRY).fetch();
for (Record r : result) {
CountryRecord countryRecord = (CountryRecord) r;
//Extract data from countryRecord
countryRecord.getId();
}
Is it possible, with jOOQ, to cast the result straight into the desired record type?
Such as (this does not compile):
Result<CountryRecord> countryRecords = (Result<CountryRecord>) sql.select().from(Tables.COUNTRY).fetch();
for (CountryRecord cr : countryRecords) {
cr.getName();
//etc...
}
@Lukas,
Actually we are using fetchInto() to convert the results to a list of objects.
For example:
The Employee pojo matches the database table employee.
List<Employee> employeeList = sql.select(Tables.EMPLOYEE.fields())
    .from(Tables.EMPLOYEE).fetchInto(Employee.class);
Similarly, how could we convert the records we fetch using joins?
For example:
The Customer pojo matches the database table customer, and the Employee pojo matches the database table employee.
sql.select(<<IWantAllFields>>).from(Tables.CUSTOMER)
.join(Tables.EMPLOYEE)
.on(Tables.EMPLOYEE.ID.equal(Tables.CUSTOMER.EMPLOYEE_ID))
.fetchInto(?);
You shouldn't be using the select().from(...) syntax when you want to fetch generated record types. Use selectFrom() instead. This is documented here:
http://www.jooq.org/doc/3.1/manual/sql-execution/fetching/record-vs-tablerecord
So your query should be:
Result<CountryRecord> countryRecords = sql.selectFrom(Tables.COUNTRY).fetch();
for (CountryRecord cr : countryRecords) {
cr.getName();
//etc...
}
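As for the follow-up comment about joins: selectFrom() only works for a single table, but fetchInto(SomeClass.class) also works on a joined projection, mapping columns to members by name. A sketch, assuming a hand-written, flat CustomerWithEmployee DTO whose field names match the projected column names (columns that collide by name, such as two ID columns, may need explicit aliasing):
List<CustomerWithEmployee> result = sql
    .select()   // project all columns of both joined tables
    .from(Tables.CUSTOMER)
    .join(Tables.EMPLOYEE)
    .on(Tables.EMPLOYEE.ID.equal(Tables.CUSTOMER.EMPLOYEE_ID))
    .fetchInto(CustomerWithEmployee.class);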