How to continue bulk inserts when an exception occurs - java

I am using JPA with Hibernate. I want to insert 100 records into the database. Suppose I get a JDBC batch update exception on the 50th record insertion; I need to handle the exception and persist the remaining records to the DB.
Code:
private List<TempCustomers> tempCustomer = new ArrayList<TempCustomers>();

public String migrateCustomers() {
    TempCustomers temp = null;
    for (DoTempCustomers tempCustomers : doTempCustomers) {
        try {
            temp = new TempCustomers();
            BeanUtils.copyProperties(temp, tempCustomers);
            tempCustomer.add(temp);
            entityManager.persist(temp);
        } catch (Exception e) {
            tempCustomer.add(temp);
            entityManager.persist(temp);
            log.info("Exception ..." + e);
            return "null";
        }
    }
    return "null";
}

Nagendra.
What Mr. RAS is telling is correct.
For example, you are persisting 100 entities and an exception happens while persisting the 50th entity. If you have an exception handler, it will let you handle the situation: skip the current entity and process the next one.
Things to take care of are as follows:
1- Your exception handling should be within the loop; I hope you already have it there.
2- On exception, save the entity in a separate list for further analysis of the error details. Do it in the catch block.
3- I am not sure whether you are using a transaction manager or not, but the transaction also needs to be taken care of.
--

In the 2nd case, please remove the line
entityManager.persist(temp);
as you already know this is what throws the exception. Keep the entity in the list for your further analysis, or better, put it onto a queue (ActiveMQ); up to you.
The best solution, again, is:
Validate all your data before persisting, to minimize exceptions. Runtime failures will need re-processing, and that re-processing has to be manual.
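Putting points 1 and 2 together, a minimal sketch of that pattern could look like the following (the failedCustomers list and the per-record flush() are my illustrative additions; depending on how your transaction manager is set up, each record may need its own transaction so that one failure does not mark the whole transaction rollback-only):

private List<DoTempCustomers> failedCustomers = new ArrayList<DoTempCustomers>();

public String migrateCustomers() {
    for (DoTempCustomers tempCustomers : doTempCustomers) {
        try {
            TempCustomers temp = new TempCustomers();
            BeanUtils.copyProperties(temp, tempCustomers);
            entityManager.persist(temp);
            entityManager.flush(); // surface the JDBC error for this record only
        } catch (Exception e) {
            log.info("Skipping record: " + e);
            failedCustomers.add(tempCustomers); // keep the source record for later analysis
            // do NOT persist(temp) again here; just continue with the next record
        }
    }
    return "null";
}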

Related

React to duplicate unique key error in MongoDB's java driver

I am developing an application which uses MongoDB, and one of my fields must be unique. This field is calculated by the application based on another value in the DB. If I am running multiple instances of the application, however, I can imagine the applications calculating the same value.
In this case, I would like to catch the exception, recalculate the value, and try again. Unfortunately, the exception raised seems to be a simple MongoWriteException. It seems to me that the only way I will know that it is due to the duplicate key issue is based on the exception message, but parsing and making use of the message really doesn't feel right. Are there any other options?
You can check the ErrorCategory of the error inside the MongoWriteException and confirm it was due to a duplicate key using getCategory():
catch (MongoWriteException ex) {
    if (ex.getError().getCategory() == ErrorCategory.DUPLICATE_KEY) {
        // handle duplicate key error
    } else {
        // do something else...
    }
}
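Combining that check with the recalculate-and-retry flow described in the question, a rough sketch could be (the collection variable, the "uniqueField" name and calculateValue() are illustrative, not from the original code):

// uses com.mongodb.MongoWriteException, com.mongodb.ErrorCategory, org.bson.Document
boolean inserted = false;
while (!inserted) {
    try {
        Document doc = new Document("uniqueField", calculateValue()); // value derived from another value in the DB
        collection.insertOne(doc);
        inserted = true;
    } catch (MongoWriteException ex) {
        if (ex.getError().getCategory() == ErrorCategory.DUPLICATE_KEY) {
            // another instance got there first; loop around, recalculate, and try again
        } else {
            throw ex; // a different write problem, don't swallow it
        }
    }
}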

How to catch Neo4j TransientException: LockClient

I often get this kind of Exception
Exception in thread "main" org.neo4j.driver.v1.exceptions.TransientException: LockClient[21902] can't wait on resource RWLock[NODE(1423923), hash=286792765] since => LockClient[21902] <-[:HELD_BY]- RWLock[NODE(1419986), hash=869661492] <-[:WAITING_FOR]- LockClient[21905] <-[:HELD_BY]- RWLock[NODE(1423923), hash=286792765]
when I run Neo4j queries in my Java application. Now, this question has a good answer to the reason why this error occurs, and I can't do anything to improve my queries: I just need them as they are.
My question is: how can I catch this kind of exception? It occurs at this line of my code:
session.run(query, parameters);
but the Javadoc doesn't show any apparent exception to be caught with a try-catch block.
Thanks in advance.
This is because TransientException is a runtime exception (i.e. a subclass of java.lang.RuntimeException). It is not required to appear in the method signature, and you are not required to put the call in a try...catch block. Try putting that line within a try...catch block and you will be able to catch and handle the exception. How you handle it depends on the nature of your application: you could print a warning to the log and then fail the operation, or even keep retrying until the call succeeds.
Edit: after reading the answer you linked, I understand why you are getting these exceptions. I would put a Thread.sleep() in the catch block and then attempt the query again, in which case the error should go away. But then again, I am in no way a Neo4j expert, so take my advice with a grain (truckload) of salt.
Edit 2: your code should look somewhat like this:
for (Query query : queries) {
    boolean flag = false;
    while (!flag) {
        try {
            session.run(query, parameters);
            flag = true;
        } catch (TransientException e) {
            log("Retrying query " + query);
            try {
                Thread.sleep(1000); // back off for 1 second before retrying
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
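One design note on top of that: the while (!flag) loop will spin forever if the contention never clears, so in practice you may want to cap the attempts (the maxAttempts value here is an arbitrary illustration):

int attempts = 0;
final int maxAttempts = 5; // arbitrary cap, tune for your workload
boolean done = false;
while (!done && attempts < maxAttempts) {
    try {
        session.run(query, parameters);
        done = true;
    } catch (TransientException e) {
        attempts++;
        try {
            Thread.sleep(1000); // fixed back-off; exponential back-off is another option
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
}
if (!done) {
    // give up: log it, or rethrow so the caller can decide what to do
}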

java - Exception not getting caught

I am using Spring ROO.
In my web application I can create many users and save them.
I can update the existing users as well.
For the update scenario we are using the merge() method to update the existing data. In the database, the column 'username' is unique. The following is the scenario:
The user creates a user named 'Sean' with mobile number '6039274849'.
The user creates another user named 'Parker' with mobile number '8094563454'.
When the user tries to update the second user, 'Parker', to the username 'Sean', I get an exception.
In the stack trace I can see the following exceptions as the causes:
caused by ConstraintViolationException
caused by SQLException
caused by TransactionSystemException
caused by PersistenceException
caused by TransactionRollbackException
I tried to do the following:
public String merge()
{
    try {
        // code to merge
    }
    catch (????? e) {
        throw e;
    }
}
I tried each of the above 5 exceptions in place of '?????', but I still could not catch the exception.
Can anyone please tell me which exception I need to put in '?????' to catch the exception from the above list?
P.S.: I am using Spring ROO, so I am changing code in the .aj file. Please don't close this question as a duplicate; I am expecting an answer to my issue before this question is closed.
As a last resort, you can just catch the all-purpose exception
public String merge()
{
    try {
        // code to merge
    }
    catch (Exception e) {
        // handle e here
    }
}
Um, aren't you just rethrowing the exception in your catch? It should be the "most recent" exception in the trace, so ConstraintViolationException.
Note that typically in Spring/Hibernate apps, the exception bubbling out of your code is what causes transactions to roll back. If you catch the exception, you will probably prevent that, which might lead to data inconsistencies.
When in doubt, I try catching a Throwable and either add a breakpoint or log it to see exactly what it is, then change the code accordingly.
I had the same problem lately. It seems that Spring wraps the exception in its own exception class. I solved this problem with:
try {
    ...
}
catch (Exception e) {
    System.out.println(e.getClass().getName());
}
With that you will discover which exception has really been thrown.
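Since the exception you actually care about (the ConstraintViolationException from the stack trace) usually sits somewhere in the cause chain of whatever Spring/JPA throws at you, another option is to walk that chain instead of guessing the outer type. A rough sketch, assuming Hibernate's org.hibernate.exception.ConstraintViolationException is the one of interest:

try {
    // code to merge
} catch (RuntimeException e) {
    Throwable cause = e;
    while (cause != null) {
        if (cause instanceof org.hibernate.exception.ConstraintViolationException) {
            // the username already exists; report it to the user instead of failing
            return "duplicateUsername"; // illustrative return value
        }
        cause = cause.getCause();
    }
    throw e; // something else went wrong, let it propagate
}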

Java: commit vs rollback vs nothing when semantics is unchanged?

Ok, I know the difference between commit and rollback and what these operations are supposed to do.
However, I am not certain what to do in cases where I can achieve the same behavior using commit(), rollback(), or doing nothing.
I am working on an application which communicates with an SQLite database.
For instance, let's say I have the following code, which executes a query without writing to the DB:
try {
    doSomeQuery();
    // b) success
} catch (SQLException e) {
    // a) failed (because of exception)
}
Or even more interesting, consider the following code, which deletes a single row:
try {
    if (deleteById(2)) {
        // a) delete successful (1 row deleted)
    } else {
        // b) delete unsuccessful (0 rows deleted, no errors)
    }
} catch (SQLException e) {
    // c) delete failed (because of an error, possibly due to a constraint violation in the DB)
}
Observe that, from a semantic standpoint, doing commit or rollback in cases b) and c) results in the same behavior.
Generally, there are several choices of what to do inside each case (a, b, c):
commit
rollback
do nothing
Are there any guidelines or performance benefits of choosing a particular operation? What is the right way?
Note: Assume that auto-commit is disabled.
If it is just a select, I wouldn't open a transaction at all, and therefore there is nothing to do in either case. You probably already know whether it is an update/insert, since you have already passed the parameters.
The case where you intend to do a manipulation is more interesting. The case where the delete succeeds is clear: you want to commit. If there is an exception, you should roll back to keep the DB consistent, since something failed and there is not a lot you can do. If the delete failed because there was nothing to delete, I'd commit, for 3 reasons:
1- Semantically, it seems more correct, since the operation was technically successful and performed as specified.
2- It is more future-proof: if somebody adds more code to the transaction, they won't be surprised that it rolls back just because one delete didn't do anything (they would expect the transaction to roll back on an exception).
3- When there is an operation to do, commit is faster, but in this case I don't think it matters.
Any non-trivial application will have operations requiring multiple SQL statements to complete. Any failure happening after the first SQL statement and before the last SQL statement will cause data to be inconsistent.
Transactions are designed to make multiple-statement operations as atomic as the single-statement operations you are currently working with.
I've asked myself the exact same question. In the end I went for a solution where I always committed successful transactions and always rolled back non-successful transactions, regardless of whether it had any effect. This simplified a lot of code and made it clearer and easier to read.
It did not cause any major performance problems in the application I worked on, which used NHibernate + SQLite on .NET. Your mileage may vary.
As others stated in their answers, it's not a matter of performance (for the equivalent cases you described), which I believe is negligible, but a matter of maintainability, and this is ever so important!
In order for your code to be nicely maintainable, I suggest (no matter what) always committing at the very bottom of your try block and always closing your Connection in your finally block. In the finally block you should also roll back if there are uncommitted transactions (meaning that you didn't reach the commit at the end of the try block).
This example shows what I believe is the best practice (maintainability-wise):
public boolean example()
{
    Connection conn;
    ...
    try
    {
        ...
        // Do your SQL magic (that can throw exceptions)
        ...
        conn.commit();
        return true;
    }
    catch (...)
    {
        ...
        return false;
    }
    finally
    {
        // Close statements, connections, etc.
        ...
        closeConn(conn);
    }
}
public static void closeConn(Connection conn)
{
    try
    {
        if (conn != null && !conn.isClosed())
        {
            if (!conn.getAutoCommit())
                conn.rollback(); // uncommitted work left over means there have been problems
            conn.close();
        }
    }
    catch (SQLException e)
    {
        // Nothing sensible to do at this point; log it
    }
}
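As a side note (my addition, not part of the answer above): on Java 7+ the same commit-or-rollback shape can also be written with try-with-resources, since Connection is AutoCloseable; dataSource below is just an illustrative source of connections, and the enclosing method declares SQLException for brevity:

// uses java.sql.Connection, java.sql.SQLException, javax.sql.DataSource
public boolean example(DataSource dataSource) throws SQLException {
    try (Connection conn = dataSource.getConnection()) {
        conn.setAutoCommit(false);
        try {
            // Do your SQL magic (that can throw exceptions)
            conn.commit();
            return true;
        } catch (SQLException e) {
            conn.rollback(); // undo the partial work before the connection is auto-closed
            return false;
        }
    }
}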
What you are describing depends upon the code being called: does it return a flag for you to test against, or does it exclusively throw exceptions when something goes wrong?
API throws exceptions but also returns a boolean (true|false):
This situation occurs a lot, and it makes it difficult for the calling code to handle both conditions, as you pointed out in your OP. The one thing you can do in this situation is:
// Initialize a flag we can test against later (demonstrative only)
boolean queryStatus = false;
try {
    if (deleteById(2)) {
        queryStatus = true;
    } else {
        // I can do a few things here:
        // skip it, because queryStatus was initialized as false, so nothing changes,
        // or throw an exception to be caught
        throw new SQLException("Delete failed");
    }
} catch (SQLException e) {
    // This can also be skipped, because queryStatus was initialized as
    // false; however you may want to do some logging
    log.error(e.getMessage());
}
// Because we have a converged flag which covers both cases (the API may
// return a boolean or throw an exception), we can test the value and
// decide whether to roll back or commit
if (queryStatus) {
    commit();
} else {
    rollback();
}
API exclusively throws exceptions (no return value):
No problem. We can assume that if no exception was caught, the operation completed without error and we can add rollback() / commit() within the try/catch block.
try {
    deleteById(2);
    // Because the API dev had a good grasp of error handling by using
    // exceptions, we can also do some other calls
    updateById(7);
    insertByName("Chargers rule");
    // No exception thrown from the above API calls? Sweet, let's commit
    commit();
} catch (SQLException e) {
    // Oops, an exception was thrown from the API, let's roll back
    rollback();
}
API does not throw any exceptions, only returns a bool (true|false):
This goes back to old school error handling/checking
if (deleteById(2)) {
    commit();
} else {
    rollback();
}
If you have multiple queries making up the transaction, you can borrow the single flag idea from scenario #1:
boolean queryStatus = true;
if (!deleteById(2)) {
    queryStatus = false;
}
if (!updateById(7)) {
    queryStatus = false;
}
...
if (queryStatus) {
    commit();
} else {
    rollback();
}
"Note: Assume that auto-commit is disabled."
Once you disable auto-commit, you are telling the RDBMS that you are taking control of commits from then until auto-commit is re-enabled, so IMO it's good practice to either roll back or commit the transaction rather than leaving any queries in limbo.

Inserting or updating multiple records in database in a multi-threaded way in java

I am updating multiple records in a database. Whenever the UI sends the list of records to be updated, I have to update just those records in the database. I am using Spring's JdbcTemplate for that.
Earlier Case
Earlier, whenever I got records from the UI, I would just do
jdbcTemplate.batchUpdate(query, params) // params is a List<Object[]>
Whenever there was an exception, I used to roll back the whole transaction.
(Update: is batchUpdate multi-threaded, or faster than individual updates in some way?)
Later Case
But later the requirement changed: whenever there is an exception, I should know which records failed to update, and I have to send those failed records back to the UI with the reason why they failed.
So I had to do something similar to this:
for (Record record : recordList) {
    try {
        jdbcTemplate.update(sql, params);
    } catch (Exception ex) {
        record.setReason("Exception : " + ex.getMessage());
        continue;
    }
}
So am I doing this the right way, by using the loop?
If yes, can someone suggest how to make it multi-threaded?
Or is there anything wrong with this approach?
To be honest, I was hesitant to use a try-catch block inside the loop :(
Please correct me; I really want to learn a better way, because I feel there must be one. Thanks.
Wrap each update operation in a Callable, collect them into a Collection, and send it to a java.util.concurrent.ThreadPoolExecutor; the pool is multi-threaded.
Make the Callable:
class UpdateTask implements Callable<Exception> {
    // constructor with jdbcTemplate, sql and params goes here

    @Override
    public Exception call() throws Exception {
        try {
            jdbcTemplate.update(sql, params);
        } catch (Exception ex) {
            return ex;
        }
        return null;
    }
}
Then invoke them all with:
<T> List<Future<T>> java.util.concurrent.ExecutorService.invokeAll(Collection<? extends Callable<T>> tasks) throws InterruptedException
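A rough usage sketch of that idea (the pool size, the UpdateTask constructor arguments, the paramsFor() helper and the failure handling are illustrative; the enclosing method would need to handle or declare InterruptedException and ExecutionException):

ExecutorService pool = Executors.newFixedThreadPool(4); // pool size is an arbitrary example
List<UpdateTask> tasks = new ArrayList<UpdateTask>();
for (Record record : recordList) {
    tasks.add(new UpdateTask(jdbcTemplate, sql, paramsFor(record))); // assumes such a constructor
}
List<Future<Exception>> results = pool.invokeAll(tasks); // blocks until every task has finished
for (int i = 0; i < results.size(); i++) {
    Exception ex = results.get(i).get(); // null means that update succeeded
    if (ex != null) {
        recordList.get(i).setReason("Exception : " + ex.getMessage());
    }
}
pool.shutdown();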
Your case looks like you should do the validation in Java, filter out the valid data, and send only that to the database for updating.
In the BO layer
-> filter out the valid records
-> send invalid records back with some validation text
In the DAO layer
-> batch update your record list
This will give you the best performance.
Never use a database insert exception as a validation mechanism:
Exceptions are costly, as the stack trace has to be created.
Getting a database connection is another costly process and takes time.
A Java if-else will run much faster for the same validation.
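A small sketch of that validate-then-batch approach (validRecords, invalidRecords, isValid() and toParams() are illustrative names for whatever validation and mapping logic you already have):

List<Record> validRecords = new ArrayList<Record>();
List<Record> invalidRecords = new ArrayList<Record>();
for (Record record : recordList) {
    if (isValid(record)) {                     // plain Java checks: nulls, lengths, duplicates, ...
        validRecords.add(record);
    } else {
        record.setReason("Validation failed"); // goes back to the UI
        invalidRecords.add(record);
    }
}
List<Object[]> batchParams = new ArrayList<Object[]>();
for (Record record : validRecords) {
    batchParams.add(toParams(record));
}
jdbcTemplate.batchUpdate(sql, batchParams);    // one batch round-trip for all valid records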
