update returning on java spring

I changed my function to
public Task assignTask(Student s) {
    Task task = null;
    Calendar calendar = Calendar.getInstance();
    java.util.Date now = calendar.getTime();
    java.sql.Timestamp date = new java.sql.Timestamp(now.getTime());
    String sr1 = "update Task t3 set t3.startDate = case when t3.startDate is null then '" + date + "' else t3.startDate end, t3.student.id = ?1 where id = (SELECT t FROM Task t WHERE t.batch not in (SELECT distinct batch FROM Task t2 WHERE t2.student.id= ?2 and t2.endDate IS NOT NULL) and ((t.student.id= ?3 AND t.endDate IS NULL) OR (t.student.id IS NULL)) ORDER BY t.student.id LIMIT 1) returning t3";
    Query query1 = this.entityManager.createNativeQuery(sr1).setParameter(1, s.getId()).setParameter(2, s.getId()).setParameter(3, s.getId());
    //int update = query1.executeUpdate();
    //List<Task> taskList = query1.getResultList(); // find the task to execute
    if (taskList.size() > 0) { // taskList from the getResultList attempt above
        task = taskList.get(0);
        s.addTask(task);
    }
    return task;
}
from
public Task assignTask(Student s) {
    Task task = null;
    String sr1 = "SELECT t FROM Task t WHERE t.batch not in (SELECT distinct batch FROM Task t2 WHERE t2.student.id= ?1 and t2.endDate IS NOT NULL) and ((t.student.id= ?2 AND t.endDate IS NULL) OR (t.student.id IS NULL)) ORDER BY t.student.id";
    Query query1 = this.entityManager.createQuery(sr1).setMaxResults(1).setParameter(1, s.getId()).setParameter(2, s.getId());
    List<Task> taskList = query1.getResultList(); // find the task to execute
    if (taskList.size() > 0) {
        task = taskList.get(0);
        task.setStudent(s);
        if (task.getStartDate() == null) {
            Calendar calendar = Calendar.getInstance();
            java.util.Date now = calendar.getTime();
            java.sql.Timestamp date = new java.sql.Timestamp(now.getTime());
            task.setStartDate(date);
        }
        if (task != null) {
            s.addTask(task);
            this.taskDao.save(task);
        }
    }
    return task;
}
The old function was working well, except when two users asked for a task at the same time: the code assigned the same task to both users.
I used an update ... returning to get the same result (if I run the SQL in pgAdmin it works), but in Spring I don't know how to execute that SQL.
If I use executeUpdate I get a SQL error, javax.persistence.TransactionRequiredException: Executing an update/delete query, and I think I still lose the returned task (I only get an int). If I use getResultList I get an error like "cannot edit" or something similar.
How can I run the update and return the edited row? And why do I get the transactional error?

You need to learn about transactions in Spring; look up @Transactional. Furthermore, you should be designing your entities in a more logical way, IMO. If a Task can only be assigned to one person, it may make more sense to have a Student member on Task, with a OneToOne mapping. If you think about the Student class, I would ask: is a Student composed of Tasks, or does it have Tasks? If it has Tasks, there is more likely a rule describing which tasks it has, namely the ones assigned to it, as opposed to adding a field for recording them. If it is convenient to have such a field within Student, then consider using a JPA query to select the tasks of a Student. That should be much cleaner than what you currently have.
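To the concrete question of running update ... returning from Spring: the TransactionRequiredException happens because executeUpdate requires an active transaction, which @Transactional provides. One way to also keep the returned row is to wrap the statement in a CTE, so that it reads as a SELECT and can go through getResultList(). A minimal sketch, not the only way to do it, assuming PostgreSQL 9.5+, a task table mapped by the Task entity, and a simplified selection subquery (substitute the full subquery from the question); FOR UPDATE SKIP LOCKED is what stops two concurrent callers from being handed the same task:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class TaskAssignmentDao {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public Task assignTask(Student s) {
        // The CTE makes the statement start with WITH, so it can be executed
        // via getResultList() and the RETURNING columns mapped onto Task.
        String sql =
            "WITH updated AS ("
          + "  UPDATE task SET student_id = ?1,"
          + "         start_date = COALESCE(start_date, now())"
          + "  WHERE id = (SELECT id FROM task"
          + "              WHERE student_id IS NULL" // simplified; use the question's full subquery
          + "              ORDER BY id LIMIT 1"
          + "              FOR UPDATE SKIP LOCKED)"  // concurrent callers skip rows already claimed
          + "  RETURNING *"
          + ") SELECT * FROM updated";
        @SuppressWarnings("unchecked")
        List<Task> result = entityManager.createNativeQuery(sql, Task.class)
                .setParameter(1, s.getId())
                .getResultList();
        return result.isEmpty() ? null : result.get(0);
    }
}

The table and column names (task, student_id, start_date) are assumptions; adjust them to the real schema, and note that RETURNING * must yield exactly the columns the Task entity maps.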

Related

How to properly / efficiently manage entity manager JPA Spring @Transactional for large datasets?

I am attempting to insert ~57,000 entities into my database, but the insert method takes longer and longer as the loop progresses. I have implemented batches of 25, each time flushing, clearing, and closing the transaction (I'm pretty sure), without success. Is there something else I need to be doing in the code below to maintain the insert rate? I feel like it should not take 4+ hours to insert 57K records.
[Migrate.java]
This is the main class that loops through 'Xaction' entities and adds 'XactionParticipant' records based off each Xaction.
// Use hibernate cursor to efficiently loop through all xaction entities
String hql = "select xaction from Xaction xaction";
Query<Xaction> query = session.createQuery(hql, Xaction.class);
query.setFetchSize(100);
query.setReadOnly(true);
query.setLockMode("xaction", LockMode.NONE);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
Instant lap = Instant.now();
List<Xaction> xactionsBatch = new ArrayList<>();
while (results.next()) {
    count++;
    Xaction xaction = (Xaction) results.get(0);
    xactionsBatch.add(xaction);
    // save new XactionParticipants in batches of 25
    if (count % 25 == 0) {
        xactionParticipantService.commitBatch(xactionsBatch);
        float rate = ChronoUnit.MILLIS.between(lap, Instant.now()) / 25f / 1000;
        System.out.printf("Batch rate: %.4fs per xaction\n", rate);
        xactionsBatch = new ArrayList<>();
        lap = Instant.now();
    }
}
xactionParticipantService.commitBatch(xactionsBatch);
results.close();
[XactionParticipantService.java]
This service provides a method with "REQUIRES_NEW" in an attempt to close the transaction for each batch
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void commitBatch(List<Xaction> xactionBatch) {
    for (Xaction xaction : xactionBatch) {
        try {
            XactionParticipant xp = new XactionParticipant();
            // ... create xp based off Xaction info ...
            // Use native query for efficiency
            String nativeQueryStr = "INSERT INTO XactionParticipant .... xp info/data";
            Query q = em.createNativeQuery(nativeQueryStr);
            q.executeUpdate();
        } catch (Exception e) {
            log.error("Unable to update", e);
        }
    }
    // Clear just in case??
    em.flush();
    em.clear();
}
It is not clear whether the root cause of your performance problem is Java memory consumption or DB performance; please check some thoughts below:
The following code does not actually optimize memory consumption:
String hql = "select xaction from Xaction xaction";
Query<Xaction> query = session.createQuery(hql, Xaction.class);
query.setFetchSize(100);
query.setReadOnly(true);
query.setLockMode("xaction", LockMode.NONE);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
Since you are retrieving full-blown entities, they get stored in the persistence context (the session-level cache), and to free memory you need to detach each entity once it has been processed (i.e. after xactionsBatch.add(xaction) or // ... create xp based off Xaction info ...). Otherwise, by the end of processing you consume the same amount of memory as if you had done List<Xaction> results = query.getResultList();, and here I'm not sure which is better: consuming all the required memory at the start of the transaction and releasing the other resources, or keeping the cursor and JDBC connection open for 4 hours.
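For instance, in the scrolling loop from the question the detach would go right after the entity is queued (an excerpt, using Hibernate's session.evict; entityManager.detach is the JPA equivalent):

while (results.next()) {
    count++;
    Xaction xaction = (Xaction) results.get(0);
    xactionsBatch.add(xaction);
    // evict so the session does not accumulate all 57K managed entities;
    // the batch code only reads field values, so a detached instance is fine
    session.evict(xaction);
    if (count % 25 == 0) {
        xactionParticipantService.commitBatch(xactionsBatch);
        xactionsBatch = new ArrayList<>();
    }
}
xactionParticipantService.commitBatch(xactionsBatch);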
The following code does not actually optimize JDBC interactions:
for (Xaction xaction : xactionBatch) {
    try {
        XactionParticipant xp = new XactionParticipant();
        // ... create xp based off Xaction info ...
        // Use native query for efficiency
        String nativeQueryStr = "INSERT INTO XactionParticipant .... xp info/data";
        Query q = em.createNativeQuery(nativeQueryStr);
        q.executeUpdate();
    } catch (Exception e) {
        log.error("Unable to update", e);
    }
}
Yes, in general plain JDBC should be faster than the JPA API; however, that is not your case: you are inserting records one by one instead of using batch inserts. To take advantage of batching, your code should look like:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void commitBatch(List<Xaction> xactionBatch) {
    session.doWork(connection -> {
        String insert = "INSERT INTO XactionParticipant VALUES (?, ?, ...)";
        try (PreparedStatement ps = connection.prepareStatement(insert)) {
            for (Xaction xaction : xactionBatch) {
                ps.setString(1, "val1");
                ps.setString(2, "val2");
                ps.addBatch();
                ps.clearParameters();
            }
            ps.executeBatch();
        }
    });
}
BTW, Hibernate can do the same if hibernate.jdbc.batch_size is set to a large enough positive integer and the entities are properly designed (id generation backed by a DB sequence, with a large enough allocationSize), for example as sketched below.
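For illustration, a sketch of such an entity (all names here are assumptions) that lets Hibernate batch the inserts once hibernate.jdbc.batch_size is set, e.g. to 50:

import javax.persistence.*;

@Entity
public class XactionParticipant {

    // A sequence-backed id with a generous allocationSize lets Hibernate assign
    // ids without a round trip per insert, which is what makes JDBC batching
    // of persist() calls possible.
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "xp_seq")
    @SequenceGenerator(name = "xp_seq", sequenceName = "xp_seq", allocationSize = 50)
    private Long id;

    // ... remaining columns ...
}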

Java JDBC bugs with transaction and losing data

I have two bugs that have been happening rarely over the last 3 years.
If I have 100 orders during a day, 1-2 orders raise alerts that the number was not incremented, but when I check the DB manually it really was incremented.
If I have 3000 orders during a month, 3-5 orders raise alerts that the lock was not released from the order after completion, and when I check the DB manually it is indeed not null when it should be null.
I am using only jdbcTemplate and TransactionTemplate (select, update, read); I use JPA only when inserting a model into MySQL.
Everything is done under a lock, by one thread.
Code snippet to show the issue:
public synchronized void test() {
    long payment = 999;
    long bought_times_before = jdbcTemplate.queryForObject("select bought_times from user where id = ?", new Object[]{1}, Long.class);
    TransactionTemplate tmpl = new TransactionTemplate(txManager);
    tmpl.setTimeout(300);
    tmpl.setName("p:" + payment);
    tmpl.executeWithoutResult(status -> {
        jdbcTemplate.update("update orders set attempts_to_verify = attempts_to_verify + 1, transaction_value = null where id = ?", payment);
        jdbcTemplate.update("update orders set locked = null where id = ?", payment);
        jdbcTemplate.update("update user set bought_times = bought_times + 1 where id = 1");
    });
    long bought_times_after = jdbcTemplate.queryForObject("select bought_times from user where id = ?", new Object[]{1}, Long.class);
    if (bought_times_after <= bought_times_before) log.error("bought_times_after <= bought_times_before");
}
I upgraded MySQL and implemented a Redis distributed lock to allow only one thread to run the code (select, then transaction, then select).
UPDATE:
The default isolation level is READ COMMITTED.
I tried SERIALIZABLE but it still has the same bug.
UPDATE 2:
Re: lock != null after the transaction: it is somehow related to high load on MySQL, since it never occurs under low load.
UPDATE 3:
I checked the MySQL logs: nothing, no errors.
I also tried REQUIRES_NEW + SERIALIZABLE but got deadlocks.
UPDATE 4:
I wrote a test and cannot reproduce the issue. On production there is more than one transaction, as well as more updates and reads, but I guess it is a hardware issue or a MySQL bug.
@PostConstruct
public void test() {
    jdbcTemplate.execute("CREATE TEMPORARY TABLE IF NOT EXISTS TEST ( id int, name int, locked boolean )");
    jdbcTemplate.execute("insert into TEST values(1, 1, 1);");
    for (int i = 0; i < 100000; i++) {
        long prev = jdbcTemplate.queryForObject("select name from TEST where id = 1", Long.class);
        TransactionTemplate tmpl = new TransactionTemplate(txManager);
        jdbcTemplate.update("update TEST set locked = true where id = 1;");
        tmpl.execute(new TransactionCallbackWithoutResult() {
            @SneakyThrows
            @Override
            protected void doInTransactionWithoutResult(org.springframework.transaction.TransactionStatus status) {
                jdbcTemplate.update("update TEST set name = name + 1 where id = 1;");
                jdbcTemplate.update("update TEST set locked = false where id = 1;");
            }
        });
        long curr = jdbcTemplate.queryForObject("select name from TEST where id = 1", Long.class);
        boolean lock = jdbcTemplate.queryForObject("select locked from TEST where id = 1", Boolean.class);
        if (curr <= prev) {
            log.error("curr <= prev");
        }
        if (lock) {
            log.error("lock = true");
        }
    }
}
UPDATE 5: WAS ABLE TO REPRODUCE IT!!!!
@PostConstruct
public void test() {
    jdbcTemplate.execute("CREATE TEMPORARY TABLE IF NOT EXISTS TEST ( id int, name int, locked boolean )");
    jdbcTemplate.execute("insert into TEST values(1, 1, 1);");
    ExecutorService executorService = Executors.newFixedThreadPool(100);
    for (int i = 0; i < 100000; i++) {
        executorService.submit(() -> {
            RLock rLock = redissonClient.getFairLock("lock");
            try {
                rLock.lock(120, TimeUnit.SECONDS);
                long prev = jdbcTemplate.queryForObject("select name from TEST where id = 1", Long.class);
                TransactionTemplate tmpl = new TransactionTemplate(txManager);
                jdbcTemplate.update("update TEST set locked = true where id = 1;");
                tmpl.execute(new TransactionCallbackWithoutResult() {
                    @SneakyThrows
                    @Override
                    protected void doInTransactionWithoutResult(org.springframework.transaction.TransactionStatus status) {
                        jdbcTemplate.update("update TEST set name = name + 1 where id = 1;");
                        jdbcTemplate.update("update TEST set locked = false where id = 1;");
                    }
                });
                long curr = jdbcTemplate.queryForObject("select name from TEST where id = 1", Long.class);
                boolean lock = jdbcTemplate.queryForObject("select locked from TEST where id = 1", Boolean.class);
                if (curr <= prev) {
                    log.error("curr <= prev");
                }
                if (lock) {
                    log.error("lock = true");
                }
            } finally {
                rLock.unlock();
            }
        });
    }
}
UPDATE 7: After the second and third runs I cannot reproduce it again, neither with Lock nor with FairLock.
UPDATE 8: On prod I am using 3 Redis locks with 120-second timeouts, so I think a timeout rarely expires on 1 of the 3 locks, and thus the code might be executed by 2 threads without the lock.
SOLUTION: increase the lock timeout as well as the transaction timeout, up to 500 seconds (see the sketch below).
UPDATE 9: It looks like the issue has been resolved, but I need to monitor it for a couple of weeks before closing the issue on Stack Overflow.
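For reference, the fix described in the SOLUTION line amounts to making the lock lease and the transaction timeout comfortably longer than the slowest observed run (a sketch built from the snippets above):

RLock rLock = redissonClient.getFairLock("lock");
rLock.lock(500, TimeUnit.SECONDS); // was 120s: a lease that expires mid-run lets a second thread in
try {
    TransactionTemplate tmpl = new TransactionTemplate(txManager);
    tmpl.setTimeout(500); // seconds; keep it in step with the lock lease
    tmpl.executeWithoutResult(status -> {
        // ... the updates from the snippets above ...
    });
} finally {
    rLock.unlock();
}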

Bulk update from object list in Java using JPA

I'm trying to do a multiple-row update using JPA. What I know so far is that it is possible to update multiple records of the same entity using JPA, but I'm trying to avoid running update statements in a loop and I couldn't find any information about this.
I'm using an entity manager to execute the queries.
@Override
public void updateAllNotes(List<Note> NOTES) {
    LocalTime now = LocalTime.now(ZoneId.of("America/Mexico_City"));
    String query = "UPDATE Note SET TITLE = :title, CONTENT = :content, UPDATED_AT = :updatedAt WHERE ID = :id";
    /* I'm trying to avoid this */
    for (Note note : NOTES) {
        entityManager.createQuery(query)
                .setParameter("title", note.getTitle())
                .setParameter("content", note.getContent())
                .setParameter("updatedAt", now)
                .setParameter("id", note.getId())
                .executeUpdate();
    }
}
You can try the code below (maybe it is helpful), or refer to JPA - Batch/Bulk Update - What is the better approach?
public void updateAllNotes(List<Note> NOTES, String title, String content) {
    LocalTime now = LocalTime.now(ZoneId.of("America/Mexico_City"));
    List<Integer> idList = NOTES.stream().map(Note::getId).collect(Collectors.toList());
    // A single bulk UPDATE can only assign the same values to every matched row,
    // hence the shared title/content parameters and the IN clause over the ids.
    String query = "UPDATE Note SET TITLE = ?1, CONTENT = ?2, UPDATED_AT = ?3 WHERE ID IN (?4)";
    entityManager.createQuery(query)
            .setParameter(1, title)
            .setParameter(2, content)
            .setParameter(3, now)
            .setParameter(4, idList)
            .executeUpdate();
}
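If each note keeps its own title and content, a single bulk UPDATE cannot express per-row values. A common alternative is to let Hibernate batch the statements instead; a sketch, assuming managed Note entities with the usual setters and hibernate.jdbc.batch_size configured:

@Transactional
public void updateAllNotes(List<Note> notes) {
    LocalTime now = LocalTime.now(ZoneId.of("America/Mexico_City"));
    final int batchSize = 50; // keep in step with hibernate.jdbc.batch_size
    for (int i = 0; i < notes.size(); i++) {
        Note note = notes.get(i);
        note.setUpdatedAt(now); // assumes such a setter exists on Note
        entityManager.merge(note);
        if ((i + 1) % batchSize == 0) {
            entityManager.flush(); // push the batched UPDATEs to the database
            entityManager.clear(); // keep the persistence context small
        }
    }
}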

How to utilize Pageable when running a custom delete query in Spring JPA for mongodb?

I am working on a tool that lets admins purge data from the database. One of our collections has millions of records, which makes deletes seize up the system. Originally I was just running a query that returns a Page and dropping that into the standard delete. Ideally I'd prefer to run the query and delete in one go.
@Query(value = "{ 'timestamp' : { $gte : ?0, $lte : ?1 } }")
public Page deleteByTimestampBetween(Date from, Date to, Pageable pageable);
Is this possible? With the above code the system behaves the same: the program doesn't continue past the delete function and the data isn't removed from Mongo. Or is there a better approach?
I don't think it is possible using Pageable with the @Query annotation. You can use Bulk Write to process the deletes in batches.
Something like
int count = 0;
int batch = 100; // send 100 requests at a time
BulkOperations bulkOps = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, YourPojo.class);
List<DateRange> dateRanges = generateDateRanges(from, to, step); // a helper that splits [from, to] into ranges of the given step
for (DateRange dateRange : dateRanges) {
    Query query = new Query();
    Criteria criteria = new Criteria().andOperator(
            Criteria.where("timestamp").gte(dateRange.from),
            Criteria.where("timestamp").lte(dateRange.to));
    query.addCriteria(criteria);
    bulkOps.remove(query);
    count++;
    if (count == batch) {
        bulkOps.execute();
        // start a fresh batch; a BulkOperations instance may not be reusable
        // after execute(), depending on the Spring Data MongoDB version
        bulkOps = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, YourPojo.class);
        count = 0;
    }
}
if (count > 0) {
    bulkOps.execute();
}

Doing a query for each day in a period, turning it into one query

I have this:
public Map<Day, Integer> getUniqueLogins(long fromTime, long toTime) {
    EntityManager em = emf.createEntityManager();
    try {
        Map<Day, Integer> resultMap = new ...;
        for (Day day : daysInPeriod(fromTime, toTime)) {
            CriteriaBuilder cb = em.getCriteriaBuilder();
            CriteriaQuery<Long> q = cb.createQuery(Long.class);
            // FROM UserSession
            Root<UserSession> userSess = q.from(UserSession.class);
            // SELECT COUNT(DISTINCT userId)
            q.select(cb.countDistinct(userSess.<Long>get("userId")));
            // WHERE loginTime BETWEEN ...
            q.where(cb.between(userSess.<Date>get("loginTime"), day.startDate(), day.endDate()));
            long result = em.createQuery(q).getSingleResult();
            resultMap.put(day, (int) result);
        }
        return resultMap;
    } finally {
        em.close();
    }
}
This executes a query for each day in a given period (the period being on the order of a month).
Could I get this specific data in one query? I'm using Hibernate/MySQL, but I'd prefer not to need any non-standard functions.
Assuming your original query is:
SELECT COUNT(DISTINCT userId)
FROM UserSession
WHERE loginTime BETWEEN dayStart AND dayEnd;
This should return the same results as running the original query once per day of the period:
SELECT date(loginTime) AS day, COUNT(DISTINCT userId)
FROM UserSession
WHERE loginTime BETWEEN startDate AND endDate
GROUP BY day;
GROUP BY the date part of loginTime, counting distinct userIds. The back-end should provide a way to extract the date part of a datetime value.
You'll have to use MySQL-specific functions to do this.
SELECT FROM_DAYS(TO_DAYS(loginTime)) AS day, COUNT(DISTINCT userId)
FROM UserSession
WHERE loginTime BETWEEN :fromTime AND :toTime
GROUP BY day
FROM_DAYS(TO_DAYS(...)) converts loginTime to a number of days and then back to a datetime, with the hour/minute/second parts zeroed.
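On the Java side, either grouped query replaces the per-day loop entirely. A sketch of the mapping code, assuming a hypothetical Day.fromDate(...) factory for the map key:

Map<Day, Integer> resultMap = new HashMap<>();
List<Object[]> rows = em.createNativeQuery(
        "SELECT DATE(loginTime) AS day, COUNT(DISTINCT userId) AS cnt "
      + "FROM UserSession "
      + "WHERE loginTime BETWEEN ?1 AND ?2 "
      + "GROUP BY day")
    .setParameter(1, new java.sql.Timestamp(fromTime))
    .setParameter(2, new java.sql.Timestamp(toTime))
    .getResultList();
for (Object[] row : rows) {
    Day day = Day.fromDate((java.util.Date) row[0]); // hypothetical factory on Day
    resultMap.put(day, ((Number) row[1]).intValue());
}

One caveat: days with no logins simply do not appear in the result, so fill missing days with zero afterwards if the map must cover the whole period.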
