Update with a where clause with JPA - java

I have a column called Sequence No. Anytime my Java program receives a message, the message has a sequence number that is always higher than the last message's. When I update a row, I need to check whether the incoming sequence number is greater than the sequence number in my database; if it is not, I need to drop the message. Something like: UPDATE MyTable t SET t.sequenceNo = SEQUENCE_NO_VALUE, ...all my columns... WHERE t.sequenceNo < SEQUENCE_NO_VALUE. I have done this before with plain SQL, but never using JPA. My other issue is that I would like to update the entire row at once. Something like this...
public void saveWithCondition(Account entity, Integer sequenceNo) {
    EntityManagerHelper.log("saving Account instance", Level.INFO, null);
    try {
        CriteriaBuilder criteriaBuilder = getEntityManager().getCriteriaBuilder();
        CriteriaUpdate<Account> update = criteriaBuilder.createCriteriaUpdate(Account.class);
        Root<Account> root = update.from(Account.class);
        update.set(entity); // what I want: set the entire row instead of each column
        update.where(criteriaBuilder.lessThan(root.<Integer>get("oDSSequenceNo"), sequenceNo));
        getEntityManager().createQuery(update).executeUpdate();
    } catch (RuntimeException re) {
        EntityManagerHelper.log("save failed", Level.SEVERE, re);
        throw re;
    }
}
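For reference, the same conditional update can be written as a single JPQL bulk update. This is only a sketch: JPQL (like CriteriaUpdate) has no "set the whole row" operation, so every column must be listed explicitly, and the property name here is derived from the question's getODSSequenceNo getter and may need adjusting to your mapping.

```sql
UPDATE Account a
   SET a.oDSSequenceNo = :seq,
       a.someColumn    = :someColumn   -- hypothetical; list each column you want replaced
 WHERE a.id = :id
   AND a.oDSSequenceNo < :seq
```

Executing this with `em.createQuery(...).executeUpdate()` returns the number of rows updated, so a return value of 0 tells you the incoming message was stale and was effectively dropped.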

Related

How to handle EntityNotFoundException during Hibernate query?

I'm working on an application where we need to query for a collection of entities using hibernate and then perform some indexing on the results. Unfortunately, the database the application is querying from does not enforce foreign key constraints on all associations, so the application needs to handle potentially broken references.
Currently, the code responsible for doing the query looks like this:
private List<? extends Entity> findAll(Class entity, Integer startIndex, Integer pageSize)
        throws EntityIndexingServiceException {
    try {
        Query query = entityManager
                .createQuery("select entity from " + entity.getName() + " entity order by entity.id desc");
        if (startIndex != null && pageSize != null) {
            return query.setFirstResult(startIndex).setMaxResults(pageSize).getResultList();
        } else {
            return query.getResultList();
        }
    } catch (Throwable e) {
        StringWriter sw = new StringWriter();
        PrintWriter pw = new PrintWriter(sw);
        e.printStackTrace(pw);
        log.warn(sw.toString());
        return Collections.EMPTY_LIST;
    }
}
The problem with this code is that bad data in any of the results will result in the whole page of data being skipped. So if I'm using this method to get a list of objects to index, none of the query results will be included in the indexing even though only one or two were invalid.
I know Hibernate has a @NotFound annotation, but that comes with its own side effects, like forced eager loading, that I would rather avoid. If possible, I want to exclude invalid entities from the results, not load them in a broken state.
So the question is: how can I handle this query in such a way that invalid results (those that cause an EntityNotFoundException to be thrown) are excluded from the return values without also throwing out the 'good' data?
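One possible approach (a sketch under assumptions: a plain JPA EntityManager, a numeric id property on every entity, and that the broken references surface when the entity and its mandatory associations are materialized; the method name is hypothetical) is to page through the ids first, then load entities one at a time, so a single broken reference only skips that one entity instead of the whole page:

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityNotFoundException;

public class SkippingFinder {
    // Sketch: select only the ids for the page, then find() each entity
    // individually so one broken reference does not discard the whole page.
    public static List<Object> findAllSkippingBroken(EntityManager em, Class<?> entityClass,
                                                     int startIndex, int pageSize) {
        List<Long> ids = em.createQuery(
                    "select e.id from " + entityClass.getName() + " e order by e.id desc",
                    Long.class)
                .setFirstResult(startIndex)
                .setMaxResults(pageSize)
                .getResultList();

        List<Object> result = new ArrayList<>();
        for (Long id : ids) {
            try {
                Object entity = em.find(entityClass, id);
                if (entity != null) {
                    // touch whatever associations the indexer needs here, so a
                    // broken reference fails now, inside this try block
                    result.add(entity);
                }
            } catch (EntityNotFoundException broken) {
                // broken reference: skip just this entity, keep the rest
            }
        }
        return result;
    }
}
```

This trades one query for N+1 lookups, so it is only worth doing when broken references are common enough that losing whole pages is the bigger cost.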

Using batch insert in Hibernate/JPA

Well, I'm trying to do a batch insert in JPA, but I don't think it's working.
My method is this:
public void saveBatch(List<? extends AbstractBean> beans) {
    try {
        begin();
        logger.info("Batch-saving " + beans.size() + " bean(s)");
        for (int i = 0; i < beans.size(); i++) {
            if (i % 50 == 0) {
                entityManager.flush();
                entityManager.clear();
            }
            entityManager.merge(beans.get(i));
        }
        commit();
    } catch (Exception e) {
        logger.error("An error occurred while trying to save the batch. ORIGINAL MSG: "
                + e.getMessage());
        rollback();
        throw new DAOException("An error occurred while trying to save the batch");
    }
}
My idea is that every 50 rows Hibernate will issue:
insert into tableA values (),(),()...
But watching the log I see one INSERT for each merge() call, like this:
insert into tableA values ()
insert into tableA values ()
insert into tableA values ()
insert into tableA values ()
What is wrong? Is this correct?
Hibernate does not enable batching by default. You will want to consider the following settings (I think at least batch_size is required to get any batch inserts/updates to work):
hibernate.jdbc.batch_size
A non-zero value enables use of JDBC2 batch updates by Hibernate. e.g.
recommended values between 5 and 30
hibernate.jdbc.batch_versioned_data
Set this property to true if your JDBC driver returns correct row
counts from executeBatch(). It is usually safe to turn this option on.
Hibernate will then use batched DML for automatically versioned data.
Defaults to false. e.g. true | false
hibernate.order_updates (similarly, hibernate.order_inserts)
Forces Hibernate to order SQL updates by the primary key value of the
items being updated. This will result in fewer transaction deadlocks
in highly concurrent systems. e.g. true | false
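These settings go in your Hibernate configuration; for example, as persistence-unit properties in persistence.xml (the values below are illustrative, adjust for your workload):

```xml
<!-- Illustrative persistence.xml fragment -->
<properties>
    <property name="hibernate.jdbc.batch_size" value="30"/>
    <property name="hibernate.jdbc.batch_versioned_data" value="true"/>
    <property name="hibernate.order_inserts" value="true"/>
    <property name="hibernate.order_updates" value="true"/>
</properties>
```

Note also that Hibernate silently disables JDBC insert batching for entities that use an IDENTITY id generator, because it must read back the generated key after each insert; batching works with sequence or table generators.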

Spring3/Hibernate AssertionFailure

Here is some simple Hibernate code that inserts a value into a table; if the row already exists, it queries the row and returns the data.
Most of the time, the code works fine with no issues.
In a very special case, three different clients try to insert exactly the same row into the table. Of course, only one row gets inserted; the other two insertions fail and fall into the catch block.
The catch block queries the data and sends the value back to the client, but this results in an error for subsequent operations on the session.
Hibernate throws "ERROR org.hibernate.AssertionFailure - an assertion
failure occured (this may indicate a bug in Hibernate, but is more
likely due to unsafe use of the session)" in the logs.
Here is the code. What would be the right way to handle this scenario?
@Override
public void addPackage(PackageEntity pkg) {
    try {
        getCurrentSession().save(pkg);
        getCurrentSession().flush();
    } catch (ConstraintViolationException cve) {
        // UNIQ constraint is violated
        // query now, instead of insert
        System.out.println("Querying again because of UNIQ constraint : " + pkg);
        PackageEntity p1 = getPackage(pkg.getName(), pkg.getVersion());
        if (p1 == null) {
            // something seriously wrong
            throw new RuntimeException("Unable to query or insert " + pkg);
        } else {
            pkg.setId(p1.getId());
        }
    } catch (Exception e) {
        e.printStackTrace();
    } catch (Throwable t) {
        t.printStackTrace();
    }
}
A primary (or composite) key makes each row unique and avoids this error.
If you need the data from all three requests, create a unique primary key in your table and add it to the entity.
The primary key can be anything unique in your data: an auto-generated sequence, or a UUID/GUID.

Hibernate delete rows from table with limit

I want to delete millions of rows from a table in batches to avoid locking. I am trying the code below, but it deletes all the rows.
Session session;
try {
    session = dao.getHibernateTemplate().getSessionFactory().getCurrentSession();
} catch (HibernateException e) {
    session = dao.getHibernateTemplate().getSessionFactory().openSession();
}
String sql = "delete from " + clazz.getSimpleName();
session.createQuery(sql).setFetchSize(limit).executeUpdate();
dao.getHibernateTemplate().flush();
Is there any better way of doing it?
I assume "clazz.getSimpleName()" returns the entity/table name.
If so, your query is "delete from 'tablename'". You are not specifying any condition to restrict the delete statement, which is why it deletes all the rows from the table.
As for setFetchSize: setFetchSize(int value) is a hint to the JDBC driver telling it how many rows to fetch at a time when reading results.
It has no effect on a delete query.
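A common approach is to delete in fixed-size chunks at the SQL level, repeating the statement until it reports zero affected rows. A sketch (table, column, and condition names are illustrative; DELETE ... LIMIT is MySQL-style syntax and not portable, hence the subselect variant):

```sql
-- MySQL-style chunked delete: run repeatedly until 0 rows are affected
DELETE FROM mytable
WHERE created < '2015-01-01'
LIMIT 10000;

-- Portable variant via a keyed subselect, for databases without DELETE ... LIMIT
DELETE FROM mytable
WHERE id IN (SELECT id FROM mytable WHERE created < '2015-01-01' LIMIT 10000);
```

From Java, loop on executeUpdate() and commit each iteration until the affected-row count is 0. With plain Hibernate you can get the same effect by selecting a page of ids with setMaxResults(limit) and then issuing "delete ... where id in (:ids)".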

Concurrency issues when retrieving ids of newly inserted rows with ibatis

I'm using iBatis/Java and Postgres 8.3.
When I do an insert in iBatis, I need the generated id returned.
I use the following table for describing my question:
CREATE TABLE sometable ( id serial NOT NULL, somefield VARCHAR(10) );
The Sequence sometable_id_seq gets autogenerated by running the create statement.
At the moment I use the following SQL map:
<insert id="insertValue" parameterClass="string">
    INSERT INTO sometable ( somefield ) VALUES ( #value# );
    <selectKey keyProperty="id" resultClass="int">
        SELECT last_value AS id FROM sometable_id_seq
    </selectKey>
</insert>
It seems this is the iBatis way of retrieving the newly inserted id: iBatis first runs an INSERT statement and afterwards asks the sequence for the last id.
I have doubts that this will work with many concurrent inserts.
Could this cause problems, like returning the id of the wrong insert?
( See also my related question about how to get ibatis to use the INSERT .. RETURING .. statements )
This is definitely wrong. Use:
select currval('sometable_id_seq')
or better yet:
INSERT INTO sometable ( somefield ) VALUES ( #value# ) returning id
which will return the inserted id.
Here is a simple example:
<statement id="addObject"
           parameterClass="test.Object"
           resultClass="int">
    INSERT INTO objects(expression, meta, title, usersid)
    VALUES (#expression#, #meta#, #title#, #usersId#)
    RETURNING id
</statement>
And in Java code:
Integer id = (Integer) executor.queryForObject("addObject", object);
object.setId(id);
I have another thought. iBatis delegates the insert to the class com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate, which contains this code:
try {
    trans = autoStartTransaction(sessionScope, autoStart, trans);
    SelectKeyStatement selectKeyStatement = null;
    if (ms instanceof InsertStatement) {
        selectKeyStatement = ((InsertStatement) ms).getSelectKeyStatement();
    }
    // Here we get the old value for the key property. We'll want it later if for some reason the
    // insert fails.
    Object oldKeyValue = null;
    String keyProperty = null;
    boolean resetKeyValueOnFailure = false;
    if (selectKeyStatement != null && !selectKeyStatement.isRunAfterSQL()) {
        keyProperty = selectKeyStatement.getKeyProperty();
        oldKeyValue = PROBE.getObject(param, keyProperty);
        generatedKey = executeSelectKey(sessionScope, trans, ms, param);
        resetKeyValueOnFailure = true;
    }
    StatementScope statementScope = beginStatementScope(sessionScope, ms);
    try {
        ms.executeUpdate(statementScope, trans, param);
    } catch (SQLException e) {
        // uh-oh, the insert failed, so if we set the reset flag earlier, we'll put the old value
        // back...
        if (resetKeyValueOnFailure) PROBE.setObject(param, keyProperty, oldKeyValue);
        // ...and still throw the exception.
        throw e;
    } finally {
        endStatementScope(statementScope);
    }
    if (selectKeyStatement != null && selectKeyStatement.isRunAfterSQL()) {
        generatedKey = executeSelectKey(sessionScope, trans, ms, param);
    }
    autoCommitTransaction(sessionScope, autoStart);
} finally {
    autoEndTransaction(sessionScope, autoStart);
}
You can see that the insert and the select-key operation run in the same transaction. So I think there is no concurrency problem with the insert method.
