How can I create update methods in a Spring Data CrudRepository?
Something like:
interface PersonRepository extends CrudRepository<PersonEntity, Long> {
    // update ... set age = :age where name = :name
    boolean setAgeByName(int age, String name);
}
If you ask me: do not use such queries. A bulk update like that wipes out your second-level cache, so you can run into performance problems (your application may become slightly slower).
Besides, what you actually want to do is the following:
you load your entity from the database
change the attribute on the loaded object (via a setter method)
you save your entity to the database (this happens automatically when your transaction ends, so you do not need to call personRepository.save(entity) explicitly); see the sketch below
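A minimal sketch of that flow, assuming a hypothetical findByName query method on the repository and a PersonEntity with an age setter:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PersonService {

    private final PersonRepository personRepository;

    public PersonService(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    @Transactional
    public void setAgeByName(int age, String name) {
        // load the entity inside the transaction (findByName is an assumed derived query)
        PersonEntity person = personRepository.findByName(name);
        // change the attribute on the managed object
        person.setAge(age);
        // no explicit save() needed: dirty checking flushes the change when the transaction commits
    }
}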
If you instead want to use a query, I suggest you write it roughly as you sketched in your question:
@Modifying
@Query("update Person p set p.age = :age where p.name = :name")
int setAgeByName(@Param("age") int age, @Param("name") String name);
The int return value tells you how many rows were modified.
Related
Let's assume we have a Spring Data repository interface with a custom method...
@Modifying
@Transactional
@Query("UPDATE MyEntity SET deletedAt = CURRENT_TIMESTAMP WHERE id = ?1")
void markAsSoftDeleted(long id);
This method simply sets the deletedAt field of the entity. Is there any way for this method to return the updated version of the MyEntity?
Obviously...
@Modifying
@Transactional
@Query("UPDATE MyEntity SET deletedAt = CURRENT_TIMESTAMP WHERE id = ?1")
MyEntity markAsSoftDeleted(long id);
...does not work, since...
java.lang.IllegalArgumentException: Modifying queries can only use void or int/Integer as return type!
Does anyone know another way to easily allow that, except of course the obvious "add a service layer between repository and caller for such things"?
Set the clearAutomatically attribute on the @Modifying annotation. That will clear all non-flushed values from the EntityManager.
@Modifying(clearAutomatically = true)
@Transactional
@Query("UPDATE MyEntity SET deletedAt = CURRENT_TIMESTAMP WHERE id = ?1")
void markAsSoftDeleted(long id);
To flush your pending changes before executing the update, the latest spring-data-jpa has another attribute on @Modifying. At the time of writing it was only available in the 2.1.M1 release.
@Modifying(clearAutomatically = true, flushAutomatically = true)
Please check the corresponding JIRA issue: https://jira.spring.io/browse/DATAJPA-806
Another approach is to implement a custom repository implementation and return the updated entity after executing the query; a sketch follows below.
Reference: Spring Data JPA custom repository implementation
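A minimal sketch of that custom-implementation approach, assuming the MyEntity/deletedAt example from the question (the fragment and method names are made up):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.transaction.annotation.Transactional;

// Fragment interface; the implementation class must use the "Impl" suffix so Spring Data picks it up.
interface MyEntityRepositoryCustom {
    MyEntity softDeleteAndReturn(long id);
}

class MyEntityRepositoryCustomImpl implements MyEntityRepositoryCustom {

    @PersistenceContext
    private EntityManager em;

    @Override
    @Transactional
    public MyEntity softDeleteAndReturn(long id) {
        em.createQuery("UPDATE MyEntity e SET e.deletedAt = CURRENT_TIMESTAMP WHERE e.id = :id")
          .setParameter("id", id)
          .executeUpdate();
        em.clear();                                  // drop stale state so the reload hits the database
        return em.find(MyEntity.class, id);
    }
}

// The main repository simply extends the fragment as well:
// interface MyEntityRepository extends JpaRepository<MyEntity, Long>, MyEntityRepositoryCustom { }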
There are two ways to do that:
The JPA idiomatic way to do this is to load the entities first and then change them using Java code.
Doing this in a transaction will flush the changes to the database.
If you insist on doing a batch update, you need to mark the entities as part of the update (maybe with a timestamp, or maybe the update itself already marks them) and then reload them using a select statement that filters on the marker set during the update, as sketched below.
Note that you have to ensure that the entities aren't already present in your EntityManager, otherwise you will keep seeing the old state there. This is the purpose of the @Modifying(clearAutomatically=true) recommended by other answers.
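A rough sketch of that second approach, reusing the deletedAt column from the question as the marker (the Instant type, the caller-supplied timestamp and the method names are assumptions):

import java.time.Instant;
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.transaction.annotation.Transactional;

interface MyEntityRepository extends JpaRepository<MyEntity, Long> {

    // Bulk update that stamps a caller-supplied timestamp so the rows can be found again.
    @Modifying(clearAutomatically = true)
    @Transactional
    @Query("UPDATE MyEntity e SET e.deletedAt = :ts WHERE e.id IN :ids")
    int markAsSoftDeleted(@Param("ids") List<Long> ids, @Param("ts") Instant ts);

    // Reload the freshly updated rows via the marker, in the same transaction.
    @Query("SELECT e FROM MyEntity e WHERE e.deletedAt = :ts")
    List<MyEntity> findByDeletedAt(@Param("ts") Instant ts);
}

Both calls should run in the same transaction, with the caller generating the timestamp once and passing it to both methods.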
@Modifying(clearAutomatically = true)
It works for me.
A modifying query can never return your entity class; declare the return type as int or Integer (or void), like below:
@Modifying(clearAutomatically = true)
@Transactional
@Query("UPDATE MyEntity SET deletedAt = CURRENT_TIMESTAMP WHERE id = ?1")
Integer markAsSoftDeleted(long id);
For concurrency purposes, I have a requirement to update the state column of a row to USED while selecting it from the AVAILABLE pool.
I was thinking of trying @Modifying and @Query (a query that updates the state based on the where clause).
That is all fine, but this is an update query, so it doesn't return the updated data.
So, is it possible in Spring Data to update and return a row, so that whoever reads the row first can use it exclusively?
My update query is something like UPDATE MyObject o SET o.state = 'USED' WHERE o.id = (select min(id) from MyObject a where a.state='AVAILABLE'), so basically the lowest available id will be marked as used. There is the option of locking, but that requires exception handling, and if an exception occurs for another thread it has to try again, which is not acceptable in my scenario.
You need to explicitly declare a transaction so that other transactions cannot read the values involved until it is committed. The isolation level with the best performance that allows this is READ_COMMITTED, which prevents dirty reads from other transactions (which suits your case). So the code will look like this:
Repo:
@Repository
public interface MyObjectRepository extends JpaRepository<MyObject, Long> {

    @Modifying
    @Query("UPDATE MyObject o SET o.state = 'USED' WHERE o.id = :id")
    void lockObject(@Param("id") long id);

    @Query("select min(a.id) from MyObject a where a.state = 'AVAILABLE'")
    Long minId();
}
Service:
@Transactional(isolation = Isolation.READ_COMMITTED)
public MyObject findFirstAvailable() {
    Long minId = repo.minId();
    if (minId != null) {
        repo.lockObject(minId);
        return repo.findOne(minId);
    }
    return null;
}
I suggest using multiple transactions plus optimistic locking.
Make sure your entity has an attribute annotated with @Version.
In the first transaction load the entity, mark it as USED and close the transaction.
This will flush and commit the change and make sure nobody else touched the entity in the meantime.
In the second transaction you can now do whatever you want to do with the entity.
For these small transactions I find it clumsy to move them to separate methods just so I can use @Transactional, so I use the TransactionTemplate instead.
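A rough sketch of that first transaction with TransactionTemplate, assuming MyObject has a state attribute with a setter and a @Version column; the findFirstByStateOrderByIdAsc finder is an assumed derived query, not code from the question:

import org.springframework.transaction.support.TransactionTemplate;

public class MyObjectAcquirer {

    private final MyObjectRepository repo;
    private final TransactionTemplate txTemplate;

    public MyObjectAcquirer(MyObjectRepository repo, TransactionTemplate txTemplate) {
        this.repo = repo;
        this.txTemplate = txTemplate;
    }

    public MyObject acquireNextAvailable() {
        // First transaction: load and mark the entity. The @Version column makes the commit
        // fail with an optimistic locking exception if another thread grabbed the same row.
        MyObject acquired = txTemplate.execute(status -> {
            MyObject candidate = repo.findFirstByStateOrderByIdAsc("AVAILABLE"); // assumed derived query
            if (candidate == null) {
                return null;
            }
            candidate.setState("USED");
            return repo.save(candidate);
        });
        // The caller then works with 'acquired' in a second transaction (or outside one).
        return acquired;
    }
}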
Let's say I have a List of entities:
List<SomeEntity> myEntities = new ArrayList<>();
SomeEntity.java:
@Entity
@Table(name = "entity_table")
public class SomeEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    private int score;

    public SomeEntity() {}

    public SomeEntity(long id, int score) {
        this.id = id;
        this.score = score;
    }

    public long getId() { return id; }
}
MyEntityRepository.java:
@Repository
public interface MyEntityRepository extends JpaRepository<SomeEntity, Long> {
    List<SomeEntity> findAllByScoreGreaterThan(int score);
}
So when I run:
myEntityRepository.findAllByScoreGreaterThan(10);
Then Hibernate will load all of the records in the table into memory for me.
There are millions of records, so I don't want that. And then, in order to intersect, I would need to compare each record in the result set against my List.
In native MySQL, what I would have done in this situation is:
create a temporary table and insert into it the entities' ids from the List.
join this temporary table with the "entity_table", use the score filter and then only pull the entities that are relevant to me (the ones that were in the list in the first place).
This way I gain a big performance increase, avoid any OutOfMemoryErrors and let the database machine do most of the work.
Is there a way to achieve such an outcome with Spring Data JPA's query methods (with Hibernate as the JPA provider)? I couldn't find such a use case in the documentation or on SO.
I understand you have a set of entity_table identifiers and you want to find each entity_table row whose identifier is in that subset and whose score is greater than a given score.
So the obvious question is: how did you arrive at the initial subset of entity_table rows, and couldn't you just add the criteria of that query to the query that also checks for "score is greater than x"?
But if we ignore that, I think there are two possible solutions. If the list of some_entity identifiers is small (what exactly is "small" depends on your database), you could just use an IN clause and define your method as:
List<SomeEntity> findByScoreGreaterThanAndIdIn(int score, Set<Long> ids);
If the number of identifiers is too large to fit in an IN clause (or you're worried about the performance of using an IN clause) and you need to use a temporary table, the recipe would be:
Create an entity that maps to your temporary table. Create a Spring Data JPA repository for it:
@Entity
@Table(name = "temp_table")
class TempEntity {
    @Id
    private Long entityId;
    protected TempEntity() {}                            // JPA requires a no-arg constructor
    TempEntity(Long entityId) { this.entityId = entityId; }
}

interface TempEntityRepository extends JpaRepository<TempEntity, Long> { }
Use its save method to save all the entity identifiers into the temporary table. As long as you enable insert batching this should perform all right (how to enable it differs per database and JPA provider, but for Hibernate at the very least set the hibernate.jdbc.batch_size Hibernate property to a sufficiently large value). Also flush() and clear() your EntityManager regularly, or all your temp table entities will accumulate in the persistence context and you'll still run out of memory. Something along the lines of:
int count = 0;
for (SomeEntity someEntity : myEntities) {
    tempEntityRepository.save(new TempEntity(someEntity.getId()));
    if (++count % 1000 == 0) {           // flush and clear every 1000 inserts
        entityManager.flush();
        entityManager.clear();
    }
}
Add a find method to your MyEntityRepository (the repository for SomeEntity) that runs a native query selecting from entity_table and joining to the temp table:
#Query("SELECT id, score FROM entity_table t INNER JOIN temp_table tt ON t.id = tt.id WHERE t.score > ?1", nativeQuery = true)
List<SomeEntity> findByScoreGreaterThan(int score);
Make sure you run both methods in the same transaction, so create a method in a @Service class, annotate it with @Transactional(propagation = Propagation.REQUIRES_NEW), and call both repository methods from it in succession. Otherwise your temp table's contents will be gone by the time the SELECT query runs and you'll get zero results.
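A sketch of that service method (the class name and the exact wiring are assumptions):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeEntityQueryService {

    @PersistenceContext
    private EntityManager entityManager;

    private final TempEntityRepository tempEntityRepository;
    private final MyEntityRepository myEntityRepository;

    public SomeEntityQueryService(TempEntityRepository tempEntityRepository,
                                  MyEntityRepository myEntityRepository) {
        this.tempEntityRepository = tempEntityRepository;
        this.myEntityRepository = myEntityRepository;
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public List<SomeEntity> findRelevant(List<SomeEntity> myEntities, int score) {
        // Batched insert into the temp table (the same loop as above).
        int count = 0;
        for (SomeEntity someEntity : myEntities) {
            tempEntityRepository.save(new TempEntity(someEntity.getId()));
            if (++count % 1000 == 0) {
                entityManager.flush();
                entityManager.clear();
            }
        }
        // The join query runs in the same transaction, so the temp rows are still visible.
        return myEntityRepository.findByScoreGreaterThan(score);
    }
}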
You might be able to avoid native queries by giving your temp table entity a @ManyToOne to SomeEntity, since then you can join in JPQL; I'm just not sure whether you'll be able to avoid actually loading the SomeEntity instances to insert them in that case (or whether creating a new SomeEntity with just an ID would work). But since you say you already have a list of SomeEntity, that's perhaps not a problem.
I need something similar myself, so I will amend my answer once I get a working version of this.
You can:
1) Make a paginated native query via JPA (remember to add an order clause to it) and process a fixed number of records
2) Use a StatelessSession (see the documentation)
I'm still looking for an update method in Spring Data JPA to update a given object persisted in a relational database. I have only found solutions in which I'm forced to specify some kind of UPDATE query via the @Query annotation (in combination with @Modifying), for example:
@Modifying
@Query("UPDATE User u SET u.firstname = ?1, u.lastname = ?2 WHERE u.id = ?3")
public void update(String firstname, String lastname, int id);
To build the query I also have to pass individual parameters instead of whole objects, but passing whole objects is exactly what I want to do.
So, what I'm trying to find is a method like this:
public void update(Object obj);
Is it possible to build such an update method with Spring Data JPA? How must it be annotated?
Thank you!
If the goal is to modify an entity, you don't need any update method. You get the object from the database, modify it inside a transaction, and JPA saves it automatically via dirty checking:
User u = repository.findOne(id);
u.setFirstName("new first name");
u.setLastName("new last name");
If you have a detached entity and want to merge it, then use the save() method of CrudRepository:
User attachedUser = repository.save(detachedUser);
If you want to update an entity you do not need a JPQL query. You can directly use findOne (older versions) or findById (newer versions, which returns an Optional) to load the data and make the required modifications.
Example: here the approved flag of the entity is updated.
Optional<Registration> optional = registrationRepository.findById(id);
Registration reg = optional.get();
reg.setApproved("yes");
registrationRepository.save(reg);
And that's it.
These answers aren't addressing the question, which is how to avoid all of the messy...
if (var1 != null) u.setVar1( var1 );
if (var2 != null) u.setVar2( var2 );
....
if (varN != null) u.setVarN( varN );
shenanigans.
So, in essence, the question is how to merge an object. Unfortunately, JPA's "merge" is a misnomer here: it re-attaches a detached object and copies all of its state, rather than merging only the non-null fields.
Have you tried using this scheme:
add jackson data-bind dependency
convert update object (source) to Map
Map<String, Object> sourceMap = objMapper.convertValue(source, Map.class);
remove null values using Java 8 streams
convert to-be-updated object (target to Map) as above
use the Java 8 Map.merge method to update the target with the source, i.e.
targetMap.merge( ... );
then use objMapper to convert targetMap back to target type.
One caveat: now you have an updated entity that's not JPA-attached, so you'd still have to merge or save it (e.g. entityManager.merge(target) or repository.save(target)). A sketch of the whole scheme follows below.
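A rough sketch of that scheme (the class, method and variable names are just for illustration):

import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

public class NullSafePatcher {

    private static final ObjectMapper objectMapper = new ObjectMapper();

    // Copies every non-null property of source onto target and returns the patched copy.
    @SuppressWarnings("unchecked")
    public static <T> T patch(T target, T source, Class<T> type) {
        Map<String, Object> targetMap = objectMapper.convertValue(target, Map.class);
        Map<String, Object> sourceMap = objectMapper.convertValue(source, Map.class);

        // Drop nulls from the source, then overwrite the matching keys on the target.
        sourceMap.entrySet().stream()
                .filter(e -> e.getValue() != null)
                .forEach(e -> targetMap.merge(e.getKey(), e.getValue(), (oldValue, newValue) -> newValue));

        return objectMapper.convertValue(targetMap, type);
    }
}

The caller then persists the result, e.g. via repository.save(...) or entityManager.merge(...).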
I have this class mapped
@Entity
@Table(name = "USERS")
public class User {
    @Id
    private long id;
    private String userName;
}
and I make a query:
Query query = session.createQuery("select id, userName, count(userName) from User order by count(userName) desc");
return query.list();
How can I access the values returned by the query?
I mean, how should I treat the query.list()? As a User or what?
To strictly answer your question, queries that specify a property of a class in the select clause (and optionally call aggregate functions) return "scalar" results, i.e. an Object[] per row (so query.list() gives you a List<Object[]>). See 10.4.1.3. Scalar results.
But your current query doesn't work. You'll need something like this:
select u.userName, count(u.userName)
from User u
group by u.userName
order by count(u.userName) desc
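For completeness, a sketch of consuming those scalar results with the grouped query above (the index positions simply follow the select list):

List<?> results = session.createQuery(
        "select u.userName, count(u.userName) from User u group by u.userName order by count(u.userName) desc")
        .list();
for (Object rowObj : results) {
    Object[] row = (Object[]) rowObj;
    String userName = (String) row[0];
    Long count = (Long) row[1];        // count() comes back as a Long
    System.out.println(userName + ": " + count);
}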
I'm not sure how Hibernate handles aggregates and counts, so I'm not sure your query is going to work at all. You're trying to select an aggregate (the "count(userName)"), but you don't have a "group by" clause for userName.
If the query does in fact work, and Hibernate can figure out what to do with it, the results you get back will most likely be a raw Object[], because Hibernate will not be able to map your "count(userName)" data into any field on your mapped objects.
Overall, when you get into using aggregates in queries, Hibernate can get a little more tricky, since you're no longer mapping tables/columns directly into classes/fields. It might be a good idea to read up more on how to do aggregates in Hibernate, from their documentation.