JPA EntityManager updates object before transaction has finished? [duplicate]

I have a native query to run:
String sqlSelect =
"select r.id_roster as id, " +
"count(roster_cat.id_category), " +
" sum(case when roster_cat.id_category IN ( :categoryIds) then 1 else 0 end) as counter " +
"from roster r " +
"inner join roster_sa_categories roster_cat " +
"on r.id_roster = roster_cat.id_roster " +
"where r.day = :dayToLookFor " +
"and r.id_shop = :idShop " +
"group by r.id_roster " +
"having count(roster_cat.id_category) = :nrCategories " +
"and count(roster_cat.id_category) = counter" ;
Query selectRostersQuery = entityManager.createNativeQuery(sqlSelect);
selectRostersQuery.setParameter("categoryIds", Arrays.asList(categoryIds));
selectRostersQuery.setParameter("dayToLookFor", day.toString());
selectRostersQuery.setParameter("idShop", shopId);
selectRostersQuery.setParameter("nrCategories", categoryIds.length);
List<Integer> rosterIds = new ArrayList<>();
@SuppressWarnings("unchecked")
List<Object[]> result = selectRostersQuery.getResultList(); // each row: id, category count, matching-category count
For some reason Hibernate chooses to issue updates before executing the select, and it is really interfering with my data:
Hibernate: /* update domain.Roster */ update roster set day=?, employee_count=?, interval_end=?, interval_start=?, id_shop=? where id_roster=?
Hibernate: /* update Roster */ update roster set day=?, employee_count=?, interval_end=?, interval_start=?, id_shop=? where id_roster=?
Hibernate: /* dynamic native SQL query */ select r.id_roster as id, count(roster_cat.id_category), sum(case when roster_cat.id_category IN ( ?) then 1 else 0 end) as counter from roster r inner join roster_sa_categories roster_cat on r.id_roster = roster_cat.id_roster where r.day = ? and r.id_shop = ? group by r.id_roster having count(roster_cat.id_category) = ? and count(roster_cat.id_category) = counter
Any help would be appreciated. Thank you.

What you describe is precisely what Hibernate's FlushMode.AUTO implies.
Any modifications in the Persistence Context (the first-level cache) at the time a query is executed are automatically flushed prior to executing the query, guaranteeing that the results returned by the database match the pending in-memory modifications.
If the query is going to return the entities that you're seeing the update for, you should likely re-evaluate your operations and make sure that the query fires prior to the update. That avoids the flush operation, which can be quite expensive depending on the volume of entities in your Persistence Context.
If you are absolutely sure that the changes you're seeing flushed won't be returned by the query in question, you can always force the query not to cause a flush by setting the flush mode manually:
Query query = session.createQuery( ... );
query.setFlushMode( FlushMode.COMMIT ); // skip the automatic flush before this query
List results = query.list();
But only do this if you're sure that the query won't then need to see those unflushed changes, as getting this wrong can cause lots of problems and lead to long debug sessions trying to understand why changes are being inadvertently lost by your application.
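The snippet above uses the native Hibernate Session API. With a plain JPA EntityManager, as in the question, the equivalent is FlushModeType; a minimal sketch reusing the sqlSelect string from the question:
Query selectRostersQuery = entityManager.createNativeQuery(sqlSelect);
selectRostersQuery.setFlushMode(FlushModeType.COMMIT); // do not flush pending changes before this query
List<Object[]> result = selectRostersQuery.getResultList();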

Related

How can I solve a memory error in Hibernate JPA? [duplicate]

I'm getting a warning in the server log: "firstResult/maxResults specified with collection fetch; applying in memory!". Everything is working fine, but I don't want this warning.
My code is
public employee find(int id) {
return (employee) getEntityManager().createQuery(QUERY).setParameter("id", id).getSingleResult();
}
My query is
QUERY = "from employee as emp left join fetch emp.salary left join fetch emp.department where emp.id = :id"
Although you are getting valid results, the SQL query fetches all the data and is not as efficient as it should be.
So, you have two options.
Fixing the issue with two SQL queries that can fetch entities in read-write mode
The easiest way to fix this issue is to execute two queries:
1. The first query will fetch the root entity identifiers matching the provided filtering criteria.
2. The second query will use the previously extracted root entity identifiers to fetch the parent and the child entities.
This approach is very easy to implement and looks as follows:
List<Long> postIds = entityManager
.createQuery(
"select p.id " +
"from Post p " +
"where p.title like :titlePattern " +
"order by p.createdOn", Long.class)
.setParameter(
"titlePattern",
"High-Performance Java Persistence %"
)
.setMaxResults(5)
.getResultList();
List<Post> posts = entityManager
.createQuery(
"select distinct p " +
"from Post p " +
"left join fetch p.comments " +
"where p.id in (:postIds) " +
"order by p.createdOn", Post.class)
.setParameter("postIds", postIds)
.setHint(
"hibernate.query.passDistinctThrough",
false
)
.getResultList();
Fixing the issue with one SQL query that can only fetch entities in read-only mode
The second approach is to use DENSE_RANK over the result set of parent and child entities that match our filtering criteria and restrict the output to the first N post entries only.
The SQL query can look as follows:
@NamedNativeQuery(
    name = "PostWithCommentByRank",
    query =
        "SELECT * " +
        "FROM ( " +
        "  SELECT *, dense_rank() OVER (ORDER BY \"p.created_on\", \"p.id\") rank " +
        "  FROM ( " +
        "    SELECT p.id AS \"p.id\", " +
        "           p.created_on AS \"p.created_on\", " +
        "           p.title AS \"p.title\", " +
        "           pc.id as \"pc.id\", " +
        "           pc.created_on AS \"pc.created_on\", " +
        "           pc.review AS \"pc.review\", " +
        "           pc.post_id AS \"pc.post_id\" " +
        "    FROM post p " +
        "    LEFT JOIN post_comment pc ON p.id = pc.post_id " +
        "    WHERE p.title LIKE :titlePattern " +
        "    ORDER BY p.created_on " +
        "  ) p_pc " +
        ") p_pc_r " +
        "WHERE p_pc_r.rank <= :rank ",
    resultSetMapping = "PostWithCommentByRankMapping"
)
@SqlResultSetMapping(
    name = "PostWithCommentByRankMapping",
    entities = {
        @EntityResult(
            entityClass = Post.class,
            fields = {
                @FieldResult(name = "id", column = "p.id"),
                @FieldResult(name = "createdOn", column = "p.created_on"),
                @FieldResult(name = "title", column = "p.title"),
            }
        ),
        @EntityResult(
            entityClass = PostComment.class,
            fields = {
                @FieldResult(name = "id", column = "pc.id"),
                @FieldResult(name = "createdOn", column = "pc.created_on"),
                @FieldResult(name = "review", column = "pc.review"),
                @FieldResult(name = "post", column = "pc.post_id"),
            }
        )
    }
)
The @NamedNativeQuery fetches all Post entities matching the provided title along with their associated PostComment child entities. The DENSE_RANK window function assigns a rank to each joined Post and PostComment record so that we can later filter just the number of Post records we are interested in fetching.
The @SqlResultSetMapping provides the mapping between the SQL-level column aliases and the JPA entity properties that need to be populated.
Now, we can execute the PostWithCommentByRank @NamedNativeQuery like this:
List<Post> posts = entityManager
.createNamedQuery("PostWithCommentByRank")
.setParameter(
"titlePattern",
"High-Performance Java Persistence %"
)
.setParameter(
"rank",
5
)
.unwrap(NativeQuery.class)
.setResultTransformer(
new DistinctPostResultTransformer(entityManager)
)
.getResultList();
By default, a native SQL query like PostWithCommentByRank would fetch the Post and the PostComment in the same JDBC row, so we would end up with an Object[] containing both entities.
However, we want to transform the tabular Object[] array into a tree of parent-child entities, and for this reason, we need to use the Hibernate ResultTransformer.
The DistinctPostResultTransformer looks as follows:
public class DistinctPostResultTransformer extends BasicTransformerAdapter {

    private final EntityManager entityManager;

    public DistinctPostResultTransformer(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    @Override
    public List transformList(List list) {
        Map<Serializable, Identifiable> identifiableMap =
            new LinkedHashMap<>(list.size());
        for (Object entityArray : list) {
            if (Object[].class.isAssignableFrom(entityArray.getClass())) {
                Post post = null;
                PostComment comment = null;
                Object[] tuples = (Object[]) entityArray;
                for (Object tuple : tuples) {
                    if (tuple instanceof Identifiable) {
                        // Detach so that resetting the comments collection below
                        // is not flushed back to the database
                        entityManager.detach(tuple);
                        if (tuple instanceof Post) {
                            post = (Post) tuple;
                        }
                        else if (tuple instanceof PostComment) {
                            comment = (PostComment) tuple;
                        }
                        else {
                            throw new UnsupportedOperationException(
                                "Tuple " + tuple.getClass() + " is not supported!"
                            );
                        }
                    }
                }
                if (post != null) {
                    if (!identifiableMap.containsKey(post.getId())) {
                        identifiableMap.put(post.getId(), post);
                        post.setComments(new ArrayList<>());
                    }
                    if (comment != null) {
                        post.addComment(comment);
                    }
                }
            }
        }
        return new ArrayList<>(identifiableMap.values());
    }
}
The DistinctPostResultTransformer must detach the entities being fetched because we are overwriting the child collection and we don’t want that to be propagated as an entity state transition:
post.setComments(new ArrayList<>());
The reason for this warning is that when a fetch join is used, the order in the result set is defined only by the ID of the selected root entity (and not by the join-fetched association).
If this sorting in memory is causing problems, do not use firstResult/maxResults with JOIN FETCH.
To avoid this warning, change the getSingleResult() call to:
getResultList().get(0)
This warning tells you that Hibernate is performing the pagination in Java memory, which can cause high JVM memory consumption.
Since a developer can easily miss this warning, I contributed a flag to Hibernate that throws an exception instead of logging the warning (https://hibernate.atlassian.net/browse/HHH-9965).
The flag is hibernate.query.fail_on_pagination_over_collection_fetch, and I recommend everyone enable it.
It is defined in org.hibernate.cfg.AvailableSettings:
/**
 * Raises an exception when in-memory pagination over collection fetch is about to be performed.
 * Disabled by default. Set to true to enable.
 *
 * @since 5.2.13
 */
String FAIL_ON_PAGINATION_OVER_COLLECTION_FETCH = "hibernate.query.fail_on_pagination_over_collection_fetch";
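For example, the flag can be set in the property map used when bootstrapping the EntityManagerFactory; a sketch (the persistence-unit name "my-unit" is a placeholder):
Map<String, Object> props = new HashMap<>();
// Throw instead of just logging the in-memory pagination warning
props.put("hibernate.query.fail_on_pagination_over_collection_fetch", "true");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props);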
The problem is that the JOIN produces a Cartesian product, and the offset would cut the result set without checking whether it is still on the same root entity.
I guess emp has many departments, i.e. a one-to-many relationship. Hibernate fetches many rows for this query because of the fetched department records, so the order of the result set cannot be decided until the results have actually been fetched into memory; that is why the pagination is done in memory.
If you do not want to fetch the departments with emp, but still want to query based on the department, you can get the result without the warning (and without ordering in memory). Simply remove the "fetch" clause, so something like the following:
QUERY = "from employee as emp left join emp.salary sal left join emp.department dep where emp.id = :id and dep.name = 'testing' and sal.salary > 5000 "
As others pointed out, you should generally avoid using JOIN FETCH and firstResult/maxResults together.
If your query requires it, you can use a stream to eliminate the warning and avoid a potential OutOfMemoryError.
// The returned stream holds database resources and must be closed, hence try-with-resources.
try (Stream<ENTITY> stream = em.createQuery(QUERY, ENTITY.class).getResultStream()) {
    ENTITY first = stream.findFirst().orElse(null); // equivalent to getSingleResult()
}

Hibernate/JPA: Update then delete in one transaction

Is it possible to execute an update query and then a delete query right after it in the same transaction? I'm trying to activate an account based on a token's hash and then remove that token in the same transaction.
transaction.begin();
entityManager
    .createNativeQuery(
        "UPDATE accounts AS ac "
            + "INNER JOIN account_tokens AS ak ON ac.id = ak.account_id "
            + "SET ac.account_state = "
            + "CASE "
            + "WHEN ac.account_state = 'AWAITING_ACTIVATION' THEN 'ACTIVATED' "
            + "END "
            + "WHERE ak.token_hash = :tokenHash")
    .setParameter("tokenHash", tokenHash)
    .executeUpdate(); // update
entityManager
    .createNativeQuery(
        "DELETE FROM account_tokens AS ak "
            + "WHERE ak.token_hash = :tokenHash")
    .setParameter("tokenHash", tokenHash)
    .executeUpdate(); // delete
transaction.commit();
Yes. You can also wrap it in a PL/SQL procedure.
You can't reduce the number of statements - they all do different things - but you can reduce the number of round trips to the database and the number of parses by wrapping it all in a PL/SQL procedure.
CREATE OR REPLACE PROCEDURE s_u_d(a IN NUMBER, results OUT SYS_REFCURSOR)
AS
BEGIN
    UPDATE tab_x SET avalue = 1 WHERE another = a;
    DELETE FROM tab_y WHERE avalue = a;
    -- a bare SELECT is not allowed in PL/SQL, so return the rows through a ref cursor
    OPEN results FOR
        SELECT * FROM tab_x WHERE another = a;
END;
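If you stay in JPA instead, the two statements in the question already execute atomically as long as both executeUpdate() calls happen between begin() and commit(). A minimal sketch with rollback handling (assuming a resource-local EntityTransaction and a tokenHash variable):
EntityTransaction tx = entityManager.getTransaction();
tx.begin();
try {
    entityManager.createNativeQuery(
            "UPDATE accounts ac INNER JOIN account_tokens ak ON ac.id = ak.account_id "
            + "SET ac.account_state = 'ACTIVATED' "
            + "WHERE ak.token_hash = :tokenHash AND ac.account_state = 'AWAITING_ACTIVATION'")
        .setParameter("tokenHash", tokenHash)
        .executeUpdate();
    entityManager.createNativeQuery(
            "DELETE FROM account_tokens WHERE token_hash = :tokenHash")
        .setParameter("tokenHash", tokenHash)
        .executeUpdate();
    tx.commit();                      // both statements commit together
} catch (RuntimeException e) {
    if (tx.isActive()) tx.rollback(); // or neither is applied
    throw e;
}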

JPQL constructor execution takes a long time

I am using a JPQL constructor expression in my code. It involves a large result: 5024 rows are retrieved and objects created when executing this query, and it takes around 2 minutes to complete. There are multiple tables involved in this query. I cannot use a cache since the DB data is updated daily. Is there any way to optimize the execution speed?
My query construction looks like this:
String sql = "SELECT DISTINCT "
+ "new com.abc.dc.entity.market.LaptopData(vd.segment.id, vd.segment.segmentGroup, "
+ "vd.segment.segmentName, vd.segment.category, "
+ "vd.product.make, vd.product.model, vd.product.modelYear) "
+ "FROM LaptopData vd, Profile p "
+ "WHERE vd.profile.profileCode = :profileCode "
+ "AND vd.pk.profileId = p.id "
+ "Order By vd.segment.segmentGroup, vd.segment.segmentName, vd.product.make, vd.product.model,"
+ " vd.product.modelYear asc";
Query query = em.createQuery(sql);
query.setParameter("profileCode", profileCode);
@SuppressWarnings("unchecked")
List<LaptopData> results = query.getResultList();
em.clear();
return results;
Well, one option is to create a separate table that contains the result of this query for all entities, update it daily (every night), and then run your query against that new table, like this:
"SELECT DISTINCT"
+ "new com.abc.dc.entity.market.LaptopData(m.id, m.segmentGroup, "
+ "m.segmentName, m.category, "
+ "m.make, m.model, m.modelYear) "
+ "FROM someNewTable m"
+ "WHERE m.profileCode = :profileCode ";
This way you don't have to do the join and ordering every time you execute the query, but only once a day when you generate the new table. Of course, with this approach you'll have to wait until the new table is recreated to see data updates.
Also, you can create indexes on the fields you use in the where clause.
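If the schema is generated from the entity mappings, such an index can be declared directly on the entity. A sketch (the table and column names here are assumptions):
@Entity
@Table(
    name = "laptop_data",
    indexes = @Index(name = "idx_laptop_data_profile_code", columnList = "profile_code")
)
public class LaptopData {
    // ... fields and mappings as before
}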

Hibernate update not working

I am using the following method to update data in the database.
String hql = "UPDATE EmployeeSalary set salary = :sl, "
        + "monthYr = :dt "
        + "WHERE id = :id and client.id = :cid";
for (EmployeeSalary e : eList) {
    Query query = session.createQuery(hql);
    query.setParameter("sl", e.getSalary());
    query.setParameter("dt", e.getMonthYr());
    query.setParameter("id", e.getId());
    query.setParameter("cid", e.getClient().getId());
    int result = query.executeUpdate();
    System.out.println("result is " + result);
    if (eList.size() % 20 == 0) {
        session.flush();
        session.clear();
    }
}
Is there any problem with this code? On execution it prints:
result is 0
How can I resolve this problem?
The documentation about update queries says:
No "Forms of join syntax", either implicit or explicit, can be specified in a bulk HQL query. Sub-queries can be used in the where-clause, where the subqueries themselves may contain joins.
Your query seems to violate this rule: client.id=:cid is an implicit join to the client entity.
Note that you're making your life difficult: you could simply get the entity by ID from the session (using Session.get()) and update it. Update queries are useful for updating many rows at once.
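A sketch of that simpler per-entity approach (assuming conventional getters and setters on EmployeeSalary):
for (EmployeeSalary e : eList) {
    EmployeeSalary managed = session.get(EmployeeSalary.class, e.getId());
    if (managed != null && Objects.equals(managed.getClient().getId(), e.getClient().getId())) {
        managed.setSalary(e.getSalary());
        managed.setMonthYr(e.getMonthYr());
        // no explicit update query: dirty checking flushes the changes at commit
    }
}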

