JPA executeUpdate always returns 1 - java

I'm having an issue where the executeUpdate command always returns the value 1, even though there is no record to be updated.
First I retrieve several records, do a bit of calculation, and then update the status of some of the retrieved records.
The JPA update code:
private int executeUpdateStatusToSuccess(Long id, Query updateQuery) {
    updateQuery.setParameter(1, getSysdateFromDB());
    updateQuery.setParameter(2, id);
    int cnt = updateQuery.executeUpdate();
    return cnt; // always return 1
}
The update query:
UPDATE PRODUCT_PARAM SET STATUS = 2, DATA_TIMESTAMP=? WHERE ID = ? AND STATUS=-1
Note that the STATUS column practically never holds a value < 0. I'm purposely adding this condition here just to show that even though it shouldn't update any record, executeUpdate() still returns the value 1.
As an additional note, there is no update process anywhere between the data retrieval and the update. It's all done within my local environment.
Any advice on whether I'm possibly missing anything here, or whether there's some configuration parameter that I need to check?
EDIT:
For the JPA I'm using EclipseLink.
For the database I'm using Oracle 10g with driver ojdbc5.jar.

In the end I had to look into the EclipseLink JPA source code. It turns out the system actually executes this line
return Integer.valueOf(1);
from the code inside the basicExecuteCall method of the DatabaseAccessor class, shown below:
if (isInBatchWritingMode(session)) {
    // if there is nothing returned and we are not using optimistic locking then batch
    //if it is a StoredProcedure with in/out or out parameters then do not batch
    //logic may be weird but we must not batch if we are not using JDBC batchwriting and we have parameters
    // we may want to refactor this some day
    if (dbCall.isNothingReturned() && (!dbCall.hasOptimisticLock() || getPlatform().canBatchWriteWithOptimisticLocking(dbCall))
            && (!dbCall.shouldBuildOutputRow()) && (getPlatform().usesJDBCBatchWriting() || (!dbCall.hasParameters())) && (!dbCall.isLOBLocatorNeeded())) {
        // this will handle executing batched statements, or switching mechanisms if required
        getActiveBatchWritingMechanism().appendCall(session, dbCall);
        //bug 4241441: passing 1 back to avoid optimistic lock exceptions since there
        // is no way to know if it succeeded on the DB at this point.
        return Integer.valueOf(1);
    } else {
        getActiveBatchWritingMechanism().executeBatchedStatements(session);
    }
}
One easy hack is to not use batch writing at all. I tried turning it off in persistence.xml, and the update then returns the expected value, which is 0.
<property name="eclipselink.jdbc.batch-writing" value="none" />
I'm hoping for a better solution, but this one will do for now in my situation.

I know this question and answer are pretty old, but since I stumbled upon the same problem recently and figured out a solution for my use case (keep batch writing enabled and still get the updated row count for some queries), my solution might be helpful to somebody else in the future.
Basically, you can use a query hint to signal that a specific query does not support batch execution. The code to do this is something like this:
import org.eclipse.persistence.config.HintValues;
import org.eclipse.persistence.config.QueryHints;
import javax.persistence.Query;

public class EclipseLinkUtils {
    public static void disableBatchWriting(Query query) {
        query.setHint(QueryHints.BATCH_WRITING, HintValues.FALSE);
    }
}
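For example, you would apply the hint to the problematic update before executing it, so that this one statement runs outside the batch and reports its real row count (a minimal sketch; the EntityManager em and the id variable are assumed from the question's context):
Query updateQuery = em.createNativeQuery(
        "UPDATE PRODUCT_PARAM SET STATUS = 2, DATA_TIMESTAMP = ? WHERE ID = ? AND STATUS = -1");
updateQuery.setParameter(1, new java.util.Date());
updateQuery.setParameter(2, id);
EclipseLinkUtils.disableBatchWriting(updateQuery); // exclude only this query from batch writing
int updated = updateQuery.executeUpdate();         // now returns the actual updated row count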

Related

Safe data update in mySQL / Java

Here I have a dilemma.
Let's imagine that we have an SQL table like this:
[table screenshot from the original post: a ticket table with columns including place and user_user_id]
It could be a problem when two or more users overwrite each other's data in the table.
How should I check that the place hasn't already been taken before updating the data?
I have two options
in SQL query:
UPDATE ticket SET user_user_id = ? WHERE place = ? AND user_user_id is NULL
or in Service layer:
try {
    Ticket ticket = ticketDAO.read(place);
    if (ticket.getUser() == null) {
        ticket.setUser(user);
        ticketDAO.update(ticket);
    } else {
        throw new DAOException("Place has already been taken");
    }
}
Which way is safer and more commonly used in practice?
Please share your advice.
A possible approach here is to go with the SQL query. After executing it, check the number of rows modified in the ticketDAO.update method; if 0 rows were modified, throw a DAOException.
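A minimal sketch of that approach with plain JDBC (the table and column names come from the question; the DataSource wiring and the DAOException constructors are assumptions):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public void reserve(DataSource ds, String place, long userId) throws DAOException {
    String sql = "UPDATE ticket SET user_user_id = ? WHERE place = ? AND user_user_id IS NULL";
    try (Connection con = ds.getConnection();
         PreparedStatement ps = con.prepareStatement(sql)) {
        ps.setLong(1, userId);
        ps.setString(2, place);
        // executeUpdate returns the number of rows actually changed;
        // 0 means another user already took the place
        if (ps.executeUpdate() == 0) {
            throw new DAOException("Place has already been taken");
        }
    } catch (SQLException e) {
        throw new DAOException(e);
    }
}
The update and the emptiness check happen atomically in one statement, so two concurrent users cannot both succeed.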

MyBatis: how to bypass a local cache and directly hit the DB on specific select

I use MyBatis 3.1.
I have two use cases when I need to bypass MyBatis local cache and directly hit the DB.
Since the MyBatis configuration file only has global settings, it is not applicable to my case, because I need this as an exception, not as a default. The attributes of the MyBatis <select> XML statement do not seem to include this option.
Use case 1: 'select sysdate from dual'.
MyBatis caching causes this one to always return the same value within a MyBatis session. This causes an issue in my integration test, when I try to replicate a situation of an outdated entry.
My workaround was just to use a plain JDBC call.
Use case 2: 'select' from one thread does not always see the value written by another thread.
Thread 1:
SomeObject stored = dao.insertSomeObject(obj);
runInAnotherThread(stored.getId());
// complete and commit
Thread 2:
// 'id' received as an argument provided to 'runInAnotherThread(...)'
SomeObject stored = dao.findById(id);
int count = 0;
while (stored == null && count < 300) {
    ++count;
    Thread.sleep(1000);
    stored = dao.findById(id);
}
if (stored == null) {
    throw new MyException("There is no SomeObject with id=" + id);
}
I occasionally receive MyException errors on a server, but can't reproduce them on my local machine. In all cases the object is always in the DB. So I guess the error depends on whether the stored object was in the MyBatis local cache the first time, and waiting for 5 minutes does not help, since the loop never checks the actual DB.
So my question is: how do I solve the above use cases within MyBatis, without falling back to plain JDBC?
Being able just to somehow signal MyBatis not to use a cached value in a specific call (the best) or in all calls to a specific query would be the preferred option, but I will consider any workaround as well.
I don't know a way to bypass the local cache, but there are two options for achieving what you need.
The first option is to set flushCache="true" on the select. This will clear the cache after statement execution, so the next query will hit the database.
<select id="getCurrentDate" resultType="date" flushCache="true">
    SELECT SYSDATE FROM DUAL
</select>
Another option is to use a STATEMENT-level local cache. By default the local cache is used for the duration of a SESSION (which typically translates to a transaction). This is specified by the localCacheScope option and is set per session factory, so it will affect all queries using that MyBatis session factory.
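For reference, that setting goes into the <settings> section of the MyBatis configuration file (a minimal sketch):
<settings>
    <!-- default is SESSION; STATEMENT effectively limits the local cache
         to a single statement for every session created from this factory -->
    <setting name="localCacheScope" value="STATEMENT"/>
</settings>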
Let me summarize.
The solution from the previous answer, the flushCache="true" option on the query, works and solves both use cases. It will flush the cache after every such select, so the next select statement will hit the DB. Although it only takes effect after the select statement is executed, that's OK, since the cache is empty anyway before the first select.
Another solution is to start a new session. I use Spring, so it's enough to mark a method with @Transactional(propagation = Propagation.REQUIRES_NEW). Since the MyBatis session is tied to the Spring transaction, this causes another MyBatis session with a fresh cache to be created every time the method is called.
For some reason, the MyBatis option useCache="false" on the query does not work.
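A sketch of that Spring approach (the service and DAO names are hypothetical):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class FreshReadService {
    private final SomeObjectDao dao; // hypothetical DAO wrapping the MyBatis mapper

    public FreshReadService(SomeObjectDao dao) {
        this.dao = dao;
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public SomeObject findByIdFresh(long id) {
        // a new transaction means a new MyBatis session, and therefore an empty local cache
        return dao.findById(id);
    }
}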
The following Options annotation can be used:
@Options(useCache = false, flushCache = FlushCachePolicy.TRUE)
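Applied to the sysdate use case from the question, that might look like this (a sketch; the mapper interface and method name are made up, and Options.FlushCachePolicy requires a newer MyBatis version than the 3.1 mentioned in the question):
import java.util.Date;
import org.apache.ibatis.annotations.Options;
import org.apache.ibatis.annotations.Options.FlushCachePolicy;
import org.apache.ibatis.annotations.Select;

public interface TimeMapper {
    // flush the caches whenever this statement is called and don't cache its result
    @Options(useCache = false, flushCache = FlushCachePolicy.TRUE)
    @Select("SELECT SYSDATE FROM DUAL")
    Date getCurrentDate();
}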
Apart from the answers by Roman and Alexander, there is one more solution for this:
import java.util.Collection;
import java.util.concurrent.locks.Lock;
import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.session.Configuration;

Configuration configuration = MyBatisUtil.getSqlSessionFactory().getConfiguration();
Collection<Cache> caches = configuration.getCaches();
// If you have multiple caches and want a particular one cleared:
// Cache cache = configuration.getCache("PPL"); // namespace of the particular XML mapper
for (Cache cache : caches) {
    Lock w = cache.getReadWriteLock().writeLock();
    w.lock();
    try {
        cache.clear();
    } finally {
        w.unlock();
    }
}

org.hibernate.NonUniqueResultException: query did not return a unique result: 2?

I have the below code in my DAO:
String sql = "SELECT COUNT(*) FROM CustomerData " +
             "WHERE custId = :custId AND deptId = :deptId";
Query query = session.createQuery(sql);
query.setParameter("custId", custId);
query.setParameter("deptId", deptId);
long count = (long) query.uniqueResult(); // ERROR THROWN HERE
Hibernate throws the below exception at the marked line:
org.hibernate.NonUniqueResultException: query did not return a unique result:
I am not sure what's happening, as count(*) will always return only one row.
Also, when I run this query on the DB directly, it returns 1. So what's the issue?
It seems like your query returns more than one result; check the database. In the documentation of query.uniqueResult() you can read:
Throws: org.hibernate.NonUniqueResultException - if there is more than one matching result
If you want to avoid this error and still use a unique-result request, you can use this kind of workaround: query.setMaxResults(1).uniqueResult();
With Spring Data JPA (on top of Hibernate) you can use a derived query:
Optional findTopByClientIdAndStatusOrderByCreateTimeDesc(Integer clientId, Integer status);
The "findTop" prefix guarantees at most one result!
I don't think the other answers explained the key part: why does "COUNT(*)" return more than one result?
I just encountered the same issue today, and what I found out is that if you have another class extending the target mapped class (here "CustomerData"), Hibernate will do this magic.
Hope this will save some time for other unfortunate folks.
Generally, this exception is thrown when the query result (which is stored in an Object in your case) cannot be cast to the desired type,
for example when the result is a
List<T>
and you're putting the result into a single T object.
In the case of the cast-to-long error, besides the recommendation to use wrapper classes so that all of your columns behave consistently, I'd guess a problem in the transaction or in the query itself could cause this issue.
It means that the query you wrote returns more than one element (result) while your code expects a single result.
I received this error while running otherwise correct Hibernate queries. The issue was that when one class extends another, Hibernate counts both. This error can be "fixed" by adding a method to your repository class.
By overriding the count method you can manually determine the way counting is done.
@Override
public Integer count(Page<MyObject> page) {
    // manual counting method here
}
I was using JPQL and wanted to return a Map. In my case, the reason was that I wanted to get a Map<String, String>, but had to expect a List<Map<String, String>> :)
Check your table for one entity occurring multiple times.
I had the same error, with this data:
id  | amount | clientid | createdate | expiredate
428 | 100    | 427      | 19/11/2021 | 19/12/2021
464 | 100    | 459      | 22/11/2021 | 22/12/2021
464 | 100    | 459      | 22/11/2021 | 22/12/2021
You can see here that the row with id 464 (clientid 459) occurs twice.
I solved it by deleting one of the duplicate rows:
id  | amount | clientid | createdate | expiredate
428 | 100    | 427      | 19/11/2021 | 19/12/2021
464 | 100    | 459      | 22/11/2021 | 22/12/2021
I have found the core of the problem:
the result of SELECT COUNT(*) can be a list if there is a GROUP BY in the query,
and sometimes Hibernate rewrites your HQL and puts a GROUP BY into it, just for fun.
Basically, your query returns more than one result. In the API docs, the uniqueResult() method says:
Convenience method to return a single instance that matches the query, or null if the query returns no results
The uniqueResult() method yields only a single result.
Could this exception be thrown during an unfinished transaction, where your application is attempting to create an entity with a field duplicating the identifier you are using to try to find a single entity?
In this case the new (duplicate) entity will not be visible in the database, as the transaction won't have been, and never will be, committed to the DB. The exception will still be thrown, however.
Thought this might help someone: it happens when the number of query results is greater than 1.
As Ian Wang said, I suspect you are using a Spring repository, and a few days ago you copy-pasted an entity class and forgot to delete it when it finally became unused. Check that repository and see whether multiple classes map to the same table. The count is then not the count of rows, but the count of classes mapped to that table.
This means that the ORM cannot determine which result you are looking for, because there are too many identical results in the database. For example, if more than one row holds the same value and you query it back, you will encounter this error on the select query.
For me the error was caused by
spring.jpa.hibernate.ddl-auto=update
in the application.properties file. Replacing it with
spring.jpa.hibernate.ddl-auto=create
solved the issue, but it still depends on your needs to decide which configuration your project requires.
First you must test the query list size; here is an example:
long count;
if (query.list().size() > 0)
    count = (long) query.list().get(0);
else
    count = 0;
return count;

Update all objects in JPA entity

I'm trying to update all 4000 objects in ProfileEntity, but I am getting the following exception:
javax.persistence.QueryTimeoutException: The datastore operation timed out, or the data was temporarily unavailable.
This is my code:
public synchronized static void setX4all() {
    em = EMF.get().createEntityManager();
    Query query = em.createQuery("SELECT p FROM ProfileEntity p");
    List<ProfileEntity> usersList = query.getResultList();
    int a, b, x;
    for (ProfileEntity profileEntity : usersList) {
        a = profileEntity.getA();
        b = profileEntity.getB();
        x = func(a, b);
        profileEntity.setX(x);
        em.getTransaction().begin();
        em.persist(profileEntity);
        em.getTransaction().commit();
    }
    em.close();
}
I'm guessing that I take too long to query all of the records from ProfileEntity.
How should I do it?
I'm using Google App Engine so no UPDATE queries are possible.
Edited 18/10
In these two days I have tried:
using Backends as Thanos Makris suggested, but got to a dead end (you can see my question here);
reading the DataNucleus suggestion on Map-Reduce, but really got lost.
I'm looking for a different direction. Since I'm only going to do this update once, maybe I can update manually every 200 objects or so.
Is it possible to query for the first 200 objects, then the next 200 objects, and so on?
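For reference, JPA can page through results with setFirstResult/setMaxResults; a minimal sketch reusing the query from the code above (whether the App Engine datastore handles large offsets efficiently is a separate question):
int pageSize = 200;
for (int first = 0; ; first += pageSize) {
    List<ProfileEntity> chunk = em
            .createQuery("SELECT p FROM ProfileEntity p", ProfileEntity.class)
            .setFirstResult(first)   // skip the chunks already processed
            .setMaxResults(pageSize) // fetch at most 200 objects at a time
            .getResultList();
    if (chunk.isEmpty()) {
        break;
    }
    // ... update and commit this chunk as in the original loop ...
}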
Given your scenario, I would advise running a native update query:
Query query = em.createNativeQuery("update ProfileEntity pe set pe.X = 'x'");
query.executeUpdate();
Please note: here the query string is SQL, i.e. update table_name set ....
This will work better.
Change the update process to use something like Map-Reduce. This means everything is done in the datastore. The only problem is that appengine-mapreduce is not fully released yet (though you can easily build the jar yourself and use it in your GAE app; many others have done so).
If you want to set X for all objects, it is better to use an update statement (i.e. native SQL) via the JPA entity manager, instead of fetching all the objects and updating them one by one.
Maybe you should consider using the Task Queue API, which enables you to execute tasks of up to 10 minutes. If you need to update so many entities that Task Queues do not fit, you could also consider the use of Backends.
Put the transaction outside of the loop:
em.getTransaction().begin();
for (ProfileEntity profileEntity : usersList) {
    ...
}
em.getTransaction().commit();
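If the list is large, it may also help to flush the pending changes periodically so the single commit at the end stays cheap (a sketch, not from the original answer; func, getA and getB come from the question's code):
int i = 0;
em.getTransaction().begin();
for (ProfileEntity profileEntity : usersList) {
    profileEntity.setX(func(profileEntity.getA(), profileEntity.getB()));
    if (++i % 200 == 0) {
        em.flush(); // push the pending updates to the database in batches
    }
}
em.getTransaction().commit();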
Your class does not behave very well - JPA is not suitable for bulk updates done this way. You are starting a lot of transactions in rapid sequence and producing a lot of load on the database. A better solution for your use case would be a scalar (bulk) update query that sets all the objects without loading them into the JVM first (depending on your object structure and laziness, you may be loading much more data than you think).
See the Hibernate reference:
http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/batch.html#batch-direct
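A sketch of such a bulk JPQL update, assuming the new value can be expressed in the query itself (the question's func(a, b) would have to be translatable to SQL/JPQL, and the App Engine datastore mentioned in the question does not support UPDATE queries at all):
em.getTransaction().begin();
int updated = em.createQuery("UPDATE ProfileEntity p SET p.x = :x")
        .setParameter("x", newValue) // hypothetical precomputed value
        .executeUpdate();            // returns the number of updated entities
em.getTransaction().commit();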

Mybatis query returning incorrect results. Possible caching issue?

I have a JUnit test, which includes use of Mybatis. At the start of the test I'm retrieving a count of records in the table.
At the end of the test I expect an additional record to be present in the table, and have an assert to verify this condition. However I find that the second query returns exactly the same number of records as the first one did.
I know a new record has definitely been inserted in the table.
I thought this may be related to caching, so I tried flushing all caches associated with the session. I also tried using setCacheEnabled(false), but still the same result.
Here's my code fragment -
@Test
public void config_0_9() {
    session.getConfiguration().setCacheEnabled(false);
    cfgMapper = session.getMapper(CfgMapper.class);
    int oldRecords = cfgMapper.countByExample(null);
    messageReprocessor.processSuspendedMessages();
    session.commit();
    int newRecords = cfgMapper.countByExample(null);
    assertTrue(newRecords == oldRecords + 1);
}
