Database deletes fail during inserts - Java

I have two Java apps: the first one inserts records into Table1.
The second application reads the first N items and removes them.
When the first application inserts data intensively, the second one fails with CannotSerializeTransactionException whenever it tries to delete rows. I don't see why there should be a conflict: inserted items become visible to select/delete only after the insert transaction has finished. How can I fix it? Thanks.
TransactionTemplate tt = new TransactionTemplate(platformTransactionManager);
tt.setIsolationLevel(Connection.TRANSACTION_SERIALIZABLE);
tt.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        List<Record> records = getRecords(); // jdbc select
        if (!records.isEmpty()) {
            try {
                processRecords(records); // no database
                removeRecords(records);  // jdbc delete - exception here
            } catch (CannotSerializeTransactionException e) {
                log.info("Transaction rollback");
            }
        } else {
            pauseProcessing();
        }
    }
});
pauseProcessing() - sleep
public void removeRecords(int changeId) {
    String sql = "delete from RECORDS where ID <= ?";
    getJdbcTemplate().update(sql, new Object[]{changeId});
}

Are you using Connection.TRANSACTION_SERIALIZABLE in the first application as well? It looks like the first application locks the table, so the second one cannot access it (cannot start its transaction). Maybe Connection.TRANSACTION_REPEATABLE_READ would be enough?
You could probably also configure the second application not to throw an exception when it cannot access the resource, but to wait and retry instead.
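For example, the second application could catch the exception outside the transaction template and simply retry. A rough sketch based on the code from the question (the attempt limit and the idea of sleeping between attempts are just placeholders, not values from the question):

for (int attempt = 0; attempt < 3; attempt++) {
    try {
        tt.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                List<Record> records = getRecords();   // jdbc select
                if (!records.isEmpty()) {
                    processRecords(records);           // no database
                    removeRecords(records);            // jdbc delete
                } else {
                    pauseProcessing();
                }
            }
        });
        break; // the transaction committed, stop retrying
    } catch (CannotSerializeTransactionException e) {
        log.info("Serialization conflict, retrying");
        // optionally sleep briefly here before the next attempt
    }
}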

This sounds as if you're reading uncommitted data. Are you sure you're properly setting the isolation level?
It seems to me that you're mixing up constants from two different classes: shouldn't you be passing TransactionDefinition.ISOLATION_SERIALIZABLE instead of Connection.TRANSACTION_SERIALIZABLE to the setIsolationLevel method?
Why do you set the isolation level anyway? Oracle's default isolation level (read committed) is usually the best compromise between consistency and speed and should work nicely in your case.
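If the constants are indeed mixed up, the corrected call would look roughly like this (a minimal sketch; the surrounding objects are the ones from the question):

// TransactionDefinition is org.springframework.transaction.TransactionDefinition
TransactionTemplate tt = new TransactionTemplate(platformTransactionManager);
tt.setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE);
// ...or don't call setIsolationLevel at all and keep the database default
// (READ COMMITTED on Oracle), as suggested above.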

Related

Retrieve value of a DB column after I update it

Sorry in advance for the long post. I'm working with a Java web application which uses Spring (2.0, I know...) and JPA with a Hibernate implementation (Hibernate 4.1 and hibernate-jpa-2.0.jar). I'm having problems retrieving the value of a column from a DB table (MySQL 5) after I update it. This is my situation (simplified, but that's the core of it):
Table KcUser:
Id: Long (primary key)
Name: String
...
Contract_Id: Long (foreign key, references KcContract.Id)

Table KcContract:
Id: Long (primary key)
ColA
...
ColX
In my server I have something like this:
MyController {
    myService.doSomething();
}

MyService {
    private EntityManager myEntityManager;

    @Transactional(readOnly=true)
    public void doSomething() {
        List<Long> IDs = firstFetch(); // retrieves some user IDs querying the KcContract table
        doUpdate(IDs);                 // updates a column on KcUser rows that match the IDs retrieved by the previous query
        secondFetch(IDs);              // finally retrieves KcUser rows <-- here the returned rows contain the old value and not the new one I updated in the previous method
    }

    @Transactional(readOnly=true)
    private List<Long> firstFetch() {
        List<Long> userIDs = myEntityManager.createQuery("select c.id from KcContract c").getResultList(); // this is not the actual query, there are some conditions in the where clause but you get the idea
        return userIDs;
    }

    @Transactional(readOnly=false, propagation=Propagation.REQUIRES_NEW)
    private void doUpdate(List<Long> IDs) {
        Query hql = myEntityManager.createQuery("update KcUser t set t.name='newValue' WHERE t.contract.id IN (:list)").setParameter("list", IDs);
        int howMany = hql.executeUpdate();
        System.out.println("HOW MANY: " + howMany); // howMany is correct, with the number of updated rows in DB

        Query select = myEntityManager.createQuery("select t from KcUser t WHERE t.contract.id IN (:list)").setParameter("list", IDs);
        List<KcUser> users = select.getResultList();
        System.out.println("users: " + users.get(0).getName()); // correct, newValue!
    }

    private void secondFetch(List<Long> IDs) {
        List<KcUser> users = myEntityManager.createQuery("from KcUser t WHERE t.contract.id IN (:list)").setParameter("list", IDs).getResultList();
        for (KcUser u : users) {
            myEntityManager.refresh(u);
            String name = u.getName(); // still oldValue!
        }
    }
}
The strange thing is that if I comment out the call to the first method (myService.firstFetch()) and call the other two methods with a constant list of IDs, I get the correct new KcUser.name value in the secondFetch() method.
I'm not very experienced with JPA and Hibernate, but I thought it might be a cache problem, so I've tried:
using myEntityManager.flush() after the update
clearing the cache with myEntityManager.clear() and myEntityManager.getEntityManagerFactory().getCache().evictAll();
clearing the cache with hibernate Session.clear()
using myEntityManager.refresh on KcUser entities
using native queries (myEntityManager.createNativeQuery("")), which to my understanding should not involve any cache
None of that worked and I always got back the old KcUser.name value in the secondFetch() method.
The only things that worked so far are:
making the firstFetch() method public and moving its call outside of myService.doSomething(), so doing something like this in MyController:
List<Long> IDs = myService.firstFetch();
myService.doSomething(IDs);
using a new EntityManager in secondFetch(), so doing something like this:
EntityManager newEntityManager = myEntityManager.getEntityManagerFactory().createEntityManager();
and using it to execute the subsequent query to fetch users from DB
Using either of those two approaches, the second select works fine and I get users with the updated value in the "name" column.
But I'd like to know what's actually happening and why none of the other things worked: if it's actually a cache problem, a simple .clear() or .refresh() should have worked, I think. Or maybe I'm totally wrong and it's not related to the cache at all, but then I'm a bit lost as to what might actually be happening.
I fear there might be something wrong in the way we are using hibernate / jpa which might bite us in the future.
Any idea please? Tell me if you need more details and thanks for your help.
Actions are performed in the following order:
Read-only transaction A opens.
First fetch (transaction A)
Not-read-only transaction B opens
Update (transaction B)
Transaction B closes
Second fetch (transaction A)
Transaction A closes
Transaction A is read-only. All subsequent queries in that transaction see only changes that were committed before the transaction began - your update was performed after it.
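One possible restructuring consistent with this explanation, as a rough sketch rather than the asker's code: the class name is hypothetical, and it assumes the fetch is moved to a separate Spring-managed bean so that its transaction settings are actually applied by the Spring proxy.

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class UserFetchService {

    @PersistenceContext
    private EntityManager em;

    // Runs in a fresh transaction that starts only after the update has committed,
    // so this query sees the new KcUser.name values.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public List<KcUser> secondFetch(List<Long> ids) {
        return em.createQuery("select t from KcUser t where t.contract.id in (:list)", KcUser.class)
                 .setParameter("list", ids)
                 .getResultList();
    }
}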

JPA executeUpdate always returns 1

I'm having an issue where the executeUpdate command always returns the value 1 even though there is no record to be updated.
First I retrieve several records, do a bit of calculation, and then update the status of some of the retrieved records.
The JPA update code:
private int executeUpdateStatusToSuccess(Long id, Query updateQuery) {
    updateQuery.setParameter(1, getSysdateFromDB());
    updateQuery.setParameter(2, id);
    int cnt = updateQuery.executeUpdate();
    return cnt; // always returns 1
}
The update query:
UPDATE PRODUCT_PARAM SET STATUS = 2, DATA_TIMESTAMP=? WHERE ID = ? AND STATUS=-1
Note that the STATUS column practically never holds a value < 0. I'm purposely adding this condition just to show that even though it shouldn't update any record, executeUpdate() still returns the value 1.
As an additional note, there is no update process anywhere between the data retrieval and the update. It's all done within my local environment.
Any advice if I'm possibly missing anything here? Or if there's some configuration parameter that I need to check?
EDIT:
For the JPA I'm using EclipseLink.
For the database I'm using Oracle 10g with driver ojdbc5.jar.
In the end I had to look into the EclipseLink JPA source code. The system actually executes this line:
return Integer.valueOf(1);
from the code inside the basicExecuteCall method of the DatabaseAccessor class, shown below:
if (isInBatchWritingMode(session)) {
    // if there is nothing returned and we are not using optimistic locking then batch
    // if it is a StoredProcedure with in/out or out parameters then do not batch
    // logic may be weird but we must not batch if we are not using JDBC batchwriting and we have parameters
    // we may want to refactor this some day
    if (dbCall.isNothingReturned() && (!dbCall.hasOptimisticLock() || getPlatform().canBatchWriteWithOptimisticLocking(dbCall))
            && (!dbCall.shouldBuildOutputRow()) && (getPlatform().usesJDBCBatchWriting() || (!dbCall.hasParameters())) && (!dbCall.isLOBLocatorNeeded())) {
        // this will handle executing batched statements, or switching mechanisms if required
        getActiveBatchWritingMechanism().appendCall(session, dbCall);
        // bug 4241441: passing 1 back to avoid optimistic lock exceptions since there
        // is no way to know if it succeeded on the DB at this point.
        return Integer.valueOf(1);
    } else {
        getActiveBatchWritingMechanism().executeBatchedStatements(session);
    }
}
One easy workaround is to not use batch writing. I tried turning it off in persistence.xml and the update then returns the expected value, which is 0.
<property name="eclipselink.jdbc.batch-writing" value="none" />
I'm hoping for a better solution, but this one will do for now in my situation.
I know that this question and answer are pretty old but since I stumbled upon this same problem recently and figured out a solution for my use-case (keep batch-writing enabled and still get the updated rows count for some queries), I figured my solution might be helpful to somebody else in the future.
Basically, you can use a query hint to signal that a specific query does not support batch execution. The code to do this is something like this:
import org.eclipse.persistence.config.HintValues;
import org.eclipse.persistence.config.QueryHints;
import javax.persistence.Query;

public class EclipseLinkUtils {
    public static void disableBatchWriting(Query query) {
        query.setHint(QueryHints.BATCH_WRITING, HintValues.FALSE);
    }
}
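For reference, a usage sketch against the update from the question. The entityManager field and the JPQL form of the query are assumptions here; the entity name ProductParam is a hypothetical mapping of the PRODUCT_PARAM table shown above.

// Disable batching for this single UPDATE so executeUpdate() returns the real
// number of affected rows instead of the batched placeholder value 1.
Query updateQuery = entityManager.createQuery(
        "UPDATE ProductParam p SET p.status = 2, p.dataTimestamp = ?1 WHERE p.id = ?2 AND p.status = -1");
EclipseLinkUtils.disableBatchWriting(updateQuery);
updateQuery.setParameter(1, getSysdateFromDB());
updateQuery.setParameter(2, id);
int updated = updateQuery.executeUpdate(); // now reflects the actual row count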

MongoDB insert in multiple threads

I am using MongoDB as my database. When I insert a document into the database and then do this again very shortly after, it inserts the document a second time (even though I check whether the database contains the document before inserting it). The reason, I think, is that I run the update method asynchronously, which takes some time, so when the second call checks whether the document exists, the first one is still writing it to the database.
Update method:
public static void updateAndInsert(final String collection, final String where, final String whereValue, final DBObject value)
{
    Utils.runAsync(new Runnable()
    {
        @Override
        public void run()
        {
            if (!contains(collection, where, whereValue))
                insert(collection, value);
            else
                db.getCollection(collection).update(new BasicDBObject(where, whereValue), new BasicDBObject("$set", value));
        }
    });
}
How can I make sure it only inserts it once?
A question without a question. Wow! :D
You shouldn't do it that way, because there are no transactions in MongoDB. But you do have atomic operations on single documents.
Better to use an upsert. In the query part of the upsert, you specify the same condition you currently check in your contains method. (Maybe have a look here: http://techidiocy.com/upsert-mongodb-java-example/ or just google for MongoDB and upsert.)
This way you can do the contains check, insert, and update in a single query. That's the way you should do it with MongoDB!
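A minimal sketch of that upsert, using the same legacy driver API as the question and reusing the Utils.runAsync helper and db field from it; with upsert=true the server inserts the document atomically when no match exists:

public static void upsert(final String collection, final String where, final String whereValue, final DBObject value) {
    Utils.runAsync(new Runnable() {
        @Override
        public void run() {
            db.getCollection(collection).update(
                    new BasicDBObject(where, whereValue),  // query: the old contains() condition
                    new BasicDBObject("$set", value),      // update to apply
                    true,                                  // upsert: insert if no document matches
                    false);                                // multi: only touch one document
        }
    });
}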

How to efficiently save value to many duplicated tables?

I have an entity named Message with fields: id (PK), String messageXML and Timestamp date, and a simple DAO to store the object in an Oracle database (11g) via MyBatis.
The code looks something like this:
Service:
void process(Request request) throws ProcessException {
    Message message = wrapper.getMessage(request);
    Long messageId;
    try {
        messageId = (Long) dao.save(message);
    } catch (DaoException e) {
        throw new ProcessException(e);
    }
}
Dao:
private String mapperName = "messageMapper";

Serializable save(Message message) throws DaoException {
    try {
        getSqlSession().insert(mapperName + ".insert", message);
        return message.getPrimaryKey();
    } catch (Exception e) {
        throw new DaoException(e);
    }
}
Simple code. Unfortunately, the load on this process(req) method is about 500 req/sec, and sometimes I get a lock on the DB while saving a message.
To resolve that problem I thought about multiplying the Message table: for instance, I would have five tables Message1, Message2 ... Message5, and when saving a Message entity I would pick one of them (like a round-robin algorithm), for instance:
private Random generator;

public MessageDao() {
    this.generator = new Random();
}

Serializable save(Message message) throws DaoException {
    try {
        getSqlSession().insert(getMapperName() + ".insert", message);
        return message.getPrimaryKey();
    } catch (Exception e) {
        throw new DaoException(e);
    }
}

private String getMapperName() {
    return this.mapperName.concat(String.valueOf(generator.nextInt(5))); // could be more efficient, of course
}
What do you think about this solution? Could it be efficient? How can I make it better? Where could the bottleneck be?
Reading between the lines, I guess you either have a number of instances of this code serving multiple concurrent requests, hence the contention, or you have one server firing 500 requests per second and you experience waits; I'm not sure which of these you mean. In the former case, you might want to look at extent allocation: if the table/index next extent sizes are small, you will regularly see latency when Oracle grabs the next extent. Size them too small and you will get this latency very regularly; size them big and, when an extent does eventually run out, the wait will be longer. You could do something like calculate the storage needed per week and have a weekly procedure to "grow" the table/indexes accordingly, to avoid this during operating hours. I would be tempted to examine the stats and see what the waits are.
If however the cause is concurrency (maybe in addition to extent management), then you're probably getting hot-block contention on the index used to enforce the PK constraint. Typical strategies to mitigate this include a REVERSE index (no code change required), or, more controversially, partitioning with a weaker unique constraint, achieved by adding a simple column to further segregate the concurrent sessions. E.g. add a column serverId to the table and partition by this plus the existing PK column. Assign each application server a unique serverId (config/startup file), amend the insert to include the serverId, and have one partition per server. Controversial because the constraint is weaker (down to how partitions work), and this will be anathema to purists, but it is something I've used on projects with Oracle Consulting to maximise performance on Exadata. So, it's out there. Of course, partitions can be thought of as distinct tables grouped into a super table, so your idea of writing to separate tables is not a million miles from what is being suggested here. The advantage of partitions is that they are a more natural mechanism for grouping this data, and adding a new partition requires less work than adding a new table when you expand.

Inserted entries aren't remaining permanent in the db

Good morning,
yesterday I used MyBatis for the first time. As a starting point I used the example from Loiane Groner, and I tried to replace the MySQL DB with an in-process HSQLDB (v1.8). I changed everything, but I never got the insert unit test to work as expected. See below, first all the necessary parts.
<insert id="insert" parameterType="Contact">
    INSERT INTO CONTACT ( CONTACT_EMAIL, CONTACT_NAME, CONTACT_PHONE )
    VALUES ( #{email}, #{name}, #{phone} );
</insert>
public void insert(Contact contact) {
    SqlSession session = sqlSessionFactory.openSession();
    try {
        session.insert("Contact.insert", contact);
        session.commit();
    } finally {
        session.close();
    }
}
@Test
public void testInsert() {
    Contact actual = new Contact();
    actual.setName("Adam");
    actual.setPhone("+001 811 23456");
    actual.setEmail("anonym@gmail.com");
    contactDAO.insert(actual);
    assertEquals(1, contactDAO.selectAll().size());
}
This test will pass, because the select method retrieves the contact I inserted before. But if I open the HSQLDB, there is no contact (entry) in it.
I would actually expect this test to pass only once; if I call it a second time there should be more than one entry. But this doesn't happen. Why doesn't the contact stay permanently? (There is no cleanup method.)
This is because your settings for HSQLDB are the default settings.
With default settings, the database has a WRITE DELAY. This is normally fine for application embedded databases, but you need to turn off WRITE DELAY for testing if you expect the changes to be persisted immediately. Use hsqldb.write_delay=false as a connection property.
An alternative is to SHUTDOWN the database at the end of the test. You can add the connection property shutdown=true and explicitly close all your database connections at the end of the test.
These properties are the same in HSQLDB 1.8 and 2.x and documented here:
http://hsqldb.org/doc/2.0/guide/dbproperties-chapt.html
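As a rough sketch, assuming a file-based database named testdb (the path is just an example), both properties can simply be appended to the JDBC URL that the test configuration uses:

import java.sql.Connection;
import java.sql.DriverManager;

public class HsqldbTestConnection {
    // write_delay=false makes committed changes hit the file immediately;
    // shutdown=true closes the database cleanly when the last connection closes.
    public static Connection open() throws Exception {
        return DriverManager.getConnection(
                "jdbc:hsqldb:file:testdb;hsqldb.write_delay=false;shutdown=true", "sa", "");
    }
}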
I'm guessing that the issue has to do with the try...finally block in the insert method. Personally, I think that leaving out even a catch(Exception e){log.error(e)} is bad policy and a disservice.
I don't know about HSQLDB specifically, but I have seen, in certain DBs, that if an error happens during the call to commit, they will continue to show rows which shouldn't exist. I'll bet that's what's happening here.
Try adding catch(Exception e){log.error(e)} before the finally in the insert method.
