Why am I getting a deadlock in this code?
I tried to debug it and also read many articles about deadlock prevention, but I could not figure this out. I used synchronization to make a block of code thread-safe on the basis of the account number.
I get this Transaction object from an API and I want to lock my code based on what the Transaction object contains. The Transaction object holds information such as the debit/credit account numbers, the amount, etc.
Two threads should not execute the executeTransaction method simultaneously if they share any common account number.
Here, lockedAccounts stores all accounts that are currently locked, and there are two methods for locking and unlocking an account number.
DAO / Repository layer.
@Repository
public class TransactionDAOImpl implements TransactionDAO {

    // field for the EntityManager
    private EntityManager entityManager;

    public TransactionDAOImpl() {}

    // set up constructor injection
    @Autowired
    public TransactionDAOImpl(EntityManager theEntityManager) {
        entityManager = theEntityManager;
    }

    private static final Set<String> lockedAccounts = new HashSet<>();

    private void lockAccount(String accountNumber) throws InterruptedException {
        int count = 0;
        synchronized (lockedAccounts) {
            while (!lockedAccounts.add(accountNumber)) {
                lockedAccounts.wait();
                count++;
            }
            System.out.println(accountNumber + " waited " + count + " times and is now getting the lock");
        }
    }

    private void unlockAccount(String accountNumber) {
        synchronized (lockedAccounts) {
            lockedAccounts.remove(accountNumber);
            lockedAccounts.notifyAll();
            System.out.println("unlocking " + accountNumber);
        }
    }

    @Override
    public void executeTransaction(Transaction theTransaction) {
        // get the current Hibernate session
        Session currentSession = entityManager.unwrap(Session.class);

        // lock both accounts in increasing order to avoid deadlock:
        // the lexicographically lesser account number is locked first
        String firstAccount = theTransaction.getDebitAccountNumber();
        String secondAccount = theTransaction.getCreditAccountNumber();

        // check whether firstAccount is lesser than secondAccount; if not, swap the values
        if (firstAccount.compareTo(secondAccount) > 0) {
            firstAccount = theTransaction.getCreditAccountNumber();
            secondAccount = theTransaction.getDebitAccountNumber();
        }

        try {
            lockAccount(firstAccount);
            try {
                lockAccount(secondAccount);
                AccountDetail debitAccount = getAccountDetails(currentSession, theTransaction.getDebitAccountNumber());
                AccountDetail creditAccount = getAccountDetails(currentSession, theTransaction.getCreditAccountNumber());
                if (debitAccount == null || creditAccount == null) {
                    // invalid account number
                    theTransaction.setStatus("failed, account not found");
                } else if (debitAccount.getBalance() < theTransaction.getAmount()) {
                    // insufficient account balance
                    theTransaction.setStatus("failed, insufficient account balance");
                } else {
                    // update customer account balances
                    debitAccount.setBalance(debitAccount.getBalance() - theTransaction.getAmount());
                    creditAccount.setBalance(creditAccount.getBalance() + theTransaction.getAmount());
                    // save to database
                    currentSession.saveOrUpdate(debitAccount);
                    currentSession.saveOrUpdate(creditAccount);
                    // update status of the transaction
                    theTransaction.setStatus("successful");
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                unlockAccount(secondAccount);
            }
        } catch (InterruptedException e1) {
            e1.printStackTrace();
        } finally {
            unlockAccount(firstAccount);
        }
    }

    private AccountDetail getAccountDetails(Session currentSession, String accountNumber) {
        Query<?> query = currentSession.createQuery("from AccountDetail where accountNumber=:accountNumber");
        query.setParameter("accountNumber", accountNumber);
        return (AccountDetail) query.uniqueResult();
    }
}
For more information: my accountDetails table in the database has three columns:
id (int, primary key)
AccountNumber (String, unique)
amount (double)
This is the service layer, where I am using the @Transactional annotation on the executeTransaction method.
public class TransactionServiceImpl implements TransactionService {

    private TransactionDAO theTransactionDAO;

    public TransactionServiceImpl() {}

    // constructor injection
    @Autowired
    public TransactionServiceImpl(TransactionDAO theTransactionDAO) {
        this.theTransactionDAO = theTransactionDAO;
    }

    @Override
    @Transactional
    public void executeTransaction(Transaction theTransaction) {
        theTransactionDAO.executeTransaction(theTransaction);
    }
}
But I am getting a database deadlock in this code. Below is the error:
2020-08-30 19:09:28.235 WARN 6948 --- [nio-8081-exec-4] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 1213, SQLState: 40001
2020-08-30 19:09:28.236 ERROR 6948 --- [nio-8081-exec-4] o.h.engine.jdbc.spi.SqlExceptionHelper : Deadlock found when trying to get lock; try restarting transaction
2020-08-30 19:09:28.384 ERROR 6948 --- [nio-8081-exec-4] o.a.c.c.C.[.[.[.[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [/bank] threw exception [Request processing failed; nested exception is org.springframework.dao.CannotAcquireLockException: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.LockAcquisitionException: could not execute statement] with root cause
com.mysql.cj.jdbc.exceptions.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:123) ~[mysql-connector-java-8.0.21.jar:8.0.21]
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97) ~[mysql-connector-java-8.0.21.jar:8.0.21]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122) ~[mysql-connector-java-8.0.21.jar:8.0.21]
at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953) ~[mysql-connector-java-8.0.21.jar:8.0.21]
at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1092)
Suppose there are two account transactions (debitAccount, creditAccount): AT1(1,2) and AT2(2,1). We also have the Java lock (JL) and the database lock (DBL). In the following scenario, a deadlock will occur.
+------+---------------------+---------------------+-----------------------------------------------------+
| Step | AT1 State           | AT2 State           | Remark                                              |
+------+---------------------+---------------------+-----------------------------------------------------+
| 1    | get JL              | wait JL             |                                                     |
+------+---------------------+---------------------+-----------------------------------------------------+
| 2    | release JL          | get JL              | AT1 saveOrUpdate may not flush to database,         |
|      |                     |                     | hence database lock may not be acquired this moment |
+------+---------------------+---------------------+-----------------------------------------------------+
| 3    | flush debitAccount  | flush debitAccount  | AT1 acquires DBL for account 1,                     |
|      | saveOrUpdate        | saveOrUpdate        | AT2 acquires DBL for account 2                      |
+------+---------------------+---------------------+-----------------------------------------------------+
| 4    | AT1 DBL account 1   | AT2 DBL account 2   |                                                     |
+------+---------------------+---------------------+-----------------------------------------------------+
| 5    | flush creditAccount | flush creditAccount | AT1 waits for DBL on account 2,                     |
|      | saveOrUpdate        | saveOrUpdate        | AT2 waits for DBL on account 1: deadlock            |
+------+---------------------+---------------------+-----------------------------------------------------+
Please also note that:
A database lock is acquired by an update statement when the statement is flushed.
Database locks are released when the transaction commits or rolls back.
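One common way out is to make the database acquire its row locks in the same deterministic order as the Java locks, for example by flushing after updating each account in sorted order (a `Session.flush()` after each `saveOrUpdate`), or by reading both rows with a pessimistic lock (`SELECT ... FOR UPDATE`) in sorted order. The ordering principle itself can be shown with plain Java locks; this is only a sketch of the idea, independent of Hibernate, with all names invented for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLockDemo {
    // one lock per account number; an illustrative stand-in for the row locks
    private static final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    private static ReentrantLock lockFor(String account) {
        return locks.computeIfAbsent(account, k -> new ReentrantLock());
    }

    // acquire both locks in lexicographic order, regardless of transfer direction
    static void transfer(String from, String to) {
        String first = from.compareTo(to) < 0 ? from : to;
        String second = from.compareTo(to) < 0 ? to : from;
        lockFor(first).lock();
        try {
            lockFor(second).lock();
            try {
                // update both balances here
            } finally {
                lockFor(second).unlock();
            }
        } finally {
            lockFor(first).unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // AT1(1,2) and AT2(2,1) running concurrently, many times over
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) transfer("1", "2"); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) transfer("2", "1"); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("done, no deadlock");
    }
}
```

Because both directions always lock account "1" before account "2", the circular wait from step 5 of the table cannot form; the same argument applies to the order in which the update statements are flushed to the database.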
When I execute:
select * from pg_stat_activity where state ~ 'idle in transact'
I get an inappropriate number of rows with state 'idle in transaction'. Some of them have been idle for a few days. Most of them are the same simple select query, executed from one service class (Hibernate 5.1.0.Final, Guice 4.1.0):
public class FirebaseServiceImpl implements FirebaseService {

    @Inject
    private Provider<FirebaseKeyDAO> firebaseKeyDAO;

    @Override
    public void sendNotification(User recipient) {
        List<FirebaseKey> firebaseKeys = firebaseKeyDAO.get().findByUserId(recipient.getId());
        final ExecutorService notificationsPool = Executors.newFixedThreadPool(3);
        for (FirebaseKey firebaseKey : firebaseKeys)
            notificationsPool.execute(new Runnable() {
                @Override
                public void run() {
                    sendNotification(new FirebaseNotification(firebaseKey.getFirebaseKey(), "example"));
                }
            });
        notificationsPool.shutdown();
    }
}
DAO method:
@Override
@SuppressWarnings("unchecked")
public List<FirebaseKey> findByUserId(Long userId) {
    Criteria criteria = getSession().createCriteria(type);
    criteria.add(Restrictions.eq("userId", userId));
    return criteria.list();
}
Why does this happen? How can I avoid it?
UPDATE
Transactions are not committed when I use the Guice Provider exampleDAO.get() in a separate thread:
@Inject
Provider<ExampleDAO> exampleDAO;
It usually happens when you use pgbouncer or another pooler/session manager with pool_mode = transaction, e.g. when a client opens a transaction and holds it, neither committing nor rolling back. Check whether you see DISCARD ALL in the query column; if you do, this is the case, because the pooler has to discard shared session plans and sequences and deallocate statements, etc., to avoid mixing them between different sessions in the pool.
On the other hand, any "normal" transaction gives the same idle in transaction state, e.g.:
2>select now(),pg_backend_pid();
now | pg_backend_pid
----------------------------------+----------------
2017-05-05 16:53:01.867444+05:30 | 26500
(1 row)
if we check its state we see orthodox idle:
t=# select query,state from pg_stat_activity where pid = 26500;
query | state
--------------------------------+-------
select now(),pg_backend_pid(); | idle
(1 row)
now we start transaction on session 2 >:
2>begin;
BEGIN
2>select now(),pg_backend_pid();
now | pg_backend_pid
----------------------------------+----------------
2017-05-05 16:54:15.856306+05:30 | 26500
(1 row)
and check pg_stat_statements gain:
t=# select query,state from pg_stat_activity where pid = 26500;
query | state
--------------------------------+---------------------
select now(),pg_backend_pid(); | idle in transaction
(1 row)
It will remain this way until a statement timeout or the end of the transaction:
2>end;
COMMIT
t=# select query,state from pg_stat_activity where pid = 26500;
query | state
-------+-------
end; | idle
(1 row)
So it is quite common and OK to have these. If you want to avoid connected sessions, you have to disconnect the client. But connections in Postgres are expensive, so people usually try to reuse existing connections with a pool, and so such states appear in pg_stat_activity.
I am trying to produce a phantom read for the sake of learning, but unfortunately I am unable to. I am using Java threads, JDBC, and MySQL.
Here is the program I am using:
package com.isolation.levels.phenomensa;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.CountDownLatch;

import static com.isolation.levels.ConnectionsProvider.getConnection;
import static com.isolation.levels.Utils.printResultSet;

/**
 * Created by dreambig on 13.03.17.
 */
public class PhantomReads {

    public static void main(String[] args) {
        setUp(getConnection()); // delete the previously inserted row; this is supposed to be the phantom row
        CountDownLatch countDownLatch1 = new CountDownLatch(1); // used to synchronize the threads' steps
        CountDownLatch countDownLatch2 = new CountDownLatch(1); // used to synchronize the threads' steps
        Transaction1 transaction1 = new Transaction1(countDownLatch1, countDownLatch2, getConnection()); // the first runnable
        Transaction2 transaction2 = new Transaction2(countDownLatch1, countDownLatch2, getConnection()); // the second runnable
        Thread thread1 = new Thread(transaction1); // transaction 1
        Thread thread2 = new Thread(transaction2); // transaction 2
        thread1.start();
        thread2.start();
    }

    private static void setUp(Connection connection) {
        try {
            connection.prepareStatement("DELETE from actor where last_name=\"PHANTOM_READ\"").execute();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public static class Transaction1 implements Runnable {

        private CountDownLatch countDownLatch;
        private CountDownLatch countDownLatch2;
        private Connection connection;

        public Transaction1(CountDownLatch countDownLatch, CountDownLatch countDownLatch2, Connection connection) {
            this.countDownLatch = countDownLatch;
            this.countDownLatch2 = countDownLatch2;
            this.connection = connection;
        }

        @Override
        public void run() {
            try {
                String query = "select * from actor where first_name=\"BELA\"";
                connection.setAutoCommit(false); // start the transaction
                // at this isolation level, dirty reads and non-repeatable reads are prevented;
                // only phantom reads can occur
                connection.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
                // read the query result for the first time
                ResultSet resultSet = connection.prepareStatement(query).executeQuery();
                printResultSet(resultSet); // print the result
                // count down so that thread2 can insert a row and commit
                countDownLatch2.countDown();
                // wait for the second transaction to finish inserting the row
                countDownLatch.await();
                System.out.println("\n ********* The query returns a second row that satisfies it (a phantom read) ********* !");
                // query the result again ...
                ResultSet secondRead = connection.createStatement().executeQuery(query);
                printResultSet(secondRead); // print the result
            } catch (SQLException | InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static class Transaction2 implements Runnable {

        private CountDownLatch countDownLatch;
        private CountDownLatch countDownLatch2;
        private Connection connection;

        public Transaction2(CountDownLatch countDownLatch, CountDownLatch countDownLatch2, Connection connection) {
            this.countDownLatch = countDownLatch;
            this.countDownLatch2 = countDownLatch2;
            this.connection = connection;
        }

        @Override
        public void run() {
            try {
                // wait for the first thread to read the result
                countDownLatch2.await();
                // insert and commit!
                connection.prepareStatement("INSERT INTO actor (first_name,last_name) VALUE (\"BELA\",\"PHANTOM_READ\") ").execute();
                // count down so that thread1 can read the result again ...
                countDownLatch.countDown();
            } catch (SQLException | InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
However, this is actually the result:
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
---------------------------------------------------------- The query returns a second row that satisfies it (a phantom read)
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
But I think it should be
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
---------------------------------------------------------- The query returns a second row that satisfies it (a phantom read) !
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
----------------------------------------------------------
| 196 | | BELA | | PHANTOM_READ | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
I am using:
Java 8
JDBC
MySQL with InnoDB
the Sakila sample database imported into MySQL
A phantom read is the following scenario: a transaction reads a set of rows that satisfy a search condition; a second transaction then inserts a row that satisfies this search condition; the first transaction reads the set of rows satisfying the search condition again and gets a different set of rows (e.g. including the newly inserted row).
Repeatable read requires that if a transaction reads a row, a different transaction then updates or deletes this row and commits these changes, and the first transaction rereads the row, it will get the same consistent values as before (a snapshot).
It doesn't actually require that phantom reads happen. MySQL will in fact prevent phantom reads in more cases than it has to. In MySQL, phantom reads (currently) only appear after you (accidentally) update a phantom row; otherwise the row stays hidden. This is specific to MySQL; other database systems behave differently. Also, this behaviour might change some day (MySQL only specifies that it supports consistent reads as required by the SQL standard, not under which specific circumstances phantom reads occur).
You can use for example the following steps to get phantom rows:
insert into actor (first_name,last_name) values ('ADELIN','NO_PHANTOM');
transaction 1:
select * from actor;
-- ADELIN|NO_PHANTOM
transaction 2:
insert into actor (first_name,last_name) values ('BELA','PHANTOM_READ');
commit;
transaction 1:
select * from actor; -- still the same
-- ADELIN|NO_PHANTOM
update actor set last_name = 'PHANTOM READ'
where last_name = 'PHANTOM_READ';
select * from actor; -- now includes the new, updated row
-- ADELIN|NO_PHANTOM
-- BELA |PHANTOM READ
Another funny thing happens, by the way, when you delete rows:
insert into actor (first_name,last_name) values ('ADELIN','NO_PHANTOM');
insert into actor (first_name,last_name) values ('BELA','REPEATABLE_READ');
transaction 1:
select * from actor;
-- ADELIN|NO_PHANTOM
-- BELA |REPEATABLE_READ
transaction 2:
delete from actor where last_name = 'REPEATABLE_READ';
commit;
transaction 1:
select * from actor; -- still the same
-- ADELIN|NO_PHANTOM
-- BELA |REPEATABLE_READ
update actor set last_name = '';
select * from actor; -- the deleted row stays unchanged
-- ADELIN|
-- BELA |REPEATABLE_READ
This is exactly what the SQL standard requires: if you reread a (deleted) row, you will get the original values.
In my MySQL table there is one enum field, spe_gender.
mysql> desc tbl_sswltdata_persons;
+-----------------+-----------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+-----------------------+------+-----+---------+----------------+
| spe_id | bigint(20) unsigned | NO | PRI | NULL | auto_increment |
| spe_sen_id | bigint(20) unsigned | NO | MUL | NULL | |
| spe_gender | enum('male','female') | YES | | NULL | |
| spe_is_deceased | tinyint(1) | NO | | 0 | |
| spe_birth_place | varchar(255) | YES | | NULL | |
| spe_create_date | datetime | YES | | NULL | |
| spe_update_date | datetime | YES | | NULL | |
+-----------------+-----------------------+------+-----+---------+----------------+
7 rows in set (0.00 sec)
So I created a POJO class:
public class SswltdataPersons implements Serializable {

    private static final long serialVersionUID = 1L;

    private long spe_id;
    private long spe_sen_id;
    private String spe_gender;
    private String spe_is_deceased;
    private String spe_birth_place;
    private String spe_create_date;
    private String spe_update_date;

    // .........

    public String getSpe_gender() {
        return spe_gender;
    }

    public void setSpe_gender(String spe_gender) {
        this.spe_gender = spe_gender;
    }

    // ......
}
When I try to write data into this table, I am getting an exception:
org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback; SQL
[INSERT INTO iwpro_imp.tbl_sswltdata_persons VALUES(?,?,?,?,?,?,?)];
Data truncated for column 'spe_gender' at row 1; nested exception is java.sql.BatchUpdateException: Data truncated for column 'spe_gender' at row 1
I think the issue is inserting a String value (through Java) into an enum field (in the DB). Here is the method where I am getting the exception:
@Transactional(value = "transactionManager_iwpro_imp", rollbackFor = Exception.class)
public void saveAllPersons(final List<SswltdataPersons> list) {
    String sql = "INSERT INTO iwpro_imp.tbl_sswltdata_persons VALUES(?,?,?,?,?,?,?)";
    try {
        jdbcTemplate.update("SET foreign_key_checks = 0");
        List<List<SswltdataPersons>> batchLists = Lists.partition(list, batchSize);
        for (final List<SswltdataPersons> batch : batchLists) {
            BatchPreparedStatementSetter bpss = new BatchPreparedStatementSetter() {
                @Override
                public void setValues(PreparedStatement ps, int index) throws SQLException {
                    SswltdataPersons dataObject = batch.get(index);
                    ps.setLong(1, dataObject.getSpe_id());
                    ps.setLong(2, dataObject.getSpe_sen_id());
                    ps.setString(3, dataObject.getSpe_gender());
                    ps.setString(4, dataObject.getSpe_is_deceased());
                    ps.setString(5, dataObject.getSpe_birth_place());
                    ps.setString(6, dataObject.getSpe_create_date());
                    ps.setString(7, dataObject.getSpe_update_date());
                }

                @Override
                public int getBatchSize() {
                    return batch.size();
                }
            };
            jdbcTemplate.batchUpdate(sql, bpss);
        }
        jdbcTemplate.update("SET foreign_key_checks = 1");
    } catch (Exception e) {
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        logger.error("\n\nUnexpected Exception:\n", e);
        e.printStackTrace();
    }
}
Can't I insert this enum value into the DB?
In your Java code, declare spe_gender as an enum type:
private Gender spe_gender;
where Gender is an enum class:
public enum Gender {
    MALE,
    FEMALE
}
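Whatever approach is used, the Java enum constants still have to be mapped to the lowercase strings that the enum('male','female') column accepts before being bound with setString. A minimal, self-contained sketch; the toDbValue helper is an illustrative name of my own, not an existing API:

```java
public class GenderMappingDemo {

    public enum Gender {
        MALE, FEMALE;

        // map the Java constant to the lowercase value stored in the
        // MySQL enum('male','female') column
        public String toDbValue() {
            return name().toLowerCase();
        }
    }

    public static void main(String[] args) {
        // these are the strings you would pass to ps.setString(3, ...)
        System.out.println(Gender.MALE.toDbValue());   // prints "male"
        System.out.println(Gender.FEMALE.toDbValue()); // prints "female"
    }
}
```

Passing anything other than one of the declared values (or NULL) triggers the "Data truncated for column 'spe_gender'" error, since MySQL cannot match the string to an enum member.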
In order to answer that, you would have to give an example of the data actually being inserted.
Anyway, regarding the exception you get: you are most likely not inserting "male" or "female". Since it says "Data truncated for column 'spe_gender' at row 1", the data you are inserting differs from the allowed values and is in fact bigger (as in more characters) than what the column accepts.
Also, check whether there is an actual method to insert enums rather than setString. -> EDIT: there is not.
Thanks Mick Mnemonic for the suggestion. It worked.
Replacing
ps.setString(3, dataObject.getSpe_gender());
with
ps.setString(3, dataObject.getSpe_gender().isEmpty() ? null : dataObject.getSpe_gender());
worked for me. Thanks all.
This question already has answers here:
ResultSet exception - before start of result set
(6 answers)
Closed 5 years ago.
I am using Spring JdbcTemplate, and I have a query to get data by ID.
I have this table schema:
+---------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+-------+
| id | varchar(150) | NO | PRI | NULL | |
| position_name | varchar(150) | NO | | NULL | |
| description | text | YES | | NULL | |
+---------------+--------------+------+-----+---------+-------+
And I run it using this template:
public Position fetchById(final String id) throws Exception {
    String sql = "SELECT * FROM position WHERE id = ?";
    return jdbcTemplate.query(sql, new PreparedStatementSetter() {
        public void setValues(PreparedStatement ps) throws SQLException {
            ps.setString(1, id);
        }
    }, new ResultSetExtractor<Position>() {
        public Position extractData(ResultSet rs) throws SQLException, DataAccessException {
            Position p = new Position();
            p.setId(rs.getString("id"));
            p.setPositionName(rs.getString("position_name"));
            p.setDescription(rs.getString("description"));
            return p;
        }
    });
}
But when I run a unit test like this:
@Test
public void getPositionByIdTest() throws Exception {
    String id = "35910510-ef2f-11e5-9ce9-5e5517507c66";
    Position p = positionService.getPositionById(id);
    Assert.assertNotNull(p);
    Assert.assertEquals("Project Manager", p.getPositionName());
}
I get the following error:
org.springframework.dao.TransientDataAccessResourceException: PreparedStatementCallback; SQL [SELECT * FROM position WHERE id = ?]; Before start of result set; nested exception is java.sql.SQLException: Before start of result set
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:108)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
...
Caused by: java.sql.SQLException: Before start of result set
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:957)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:896)
...
How do I use a PreparedStatement in a select query with JdbcTemplate?
Thank you.
You need to call ResultSet#next() to "move the cursor forward one row from its current position." As you are expecting a single row to be returned from your query, you can call this in an if statement as shown below:
public Position extractData(ResultSet rs) throws SQLException, DataAccessException {
    Position p = new Position();
    if (rs.next()) {
        p.setId(rs.getString("id"));
        p.setPositionName(rs.getString("position_name"));
        p.setDescription(rs.getString("description"));
    }
    return p;
}
If you were expecting to process multiple results and return a collection of some sort, you would do while(rs.next()) and process a row on each iteration of the loop.
Also, as you are using JdbcTemplate you could consider using a RowMapper instead which may simplify your implementation slightly.
You have a simple use case and use one of the more complex query methods, why? Next, you are using a ResultSetExtractor whereas you probably want a RowMapper instead. If you use a ResultSetExtractor, you have to iterate over the result set yourself. Replace your code with the following:
return getJdbcTemplate().queryForObject(sql, new RowMapper<Position>() {
    public Position mapRow(ResultSet rs, int row) throws SQLException {
        Position p = new Position();
        p.setId(rs.getString("id"));
        p.setPositionName(rs.getString("position_name"));
        p.setDescription(rs.getString("description"));
        return p;
    }
}, id);
So instead of using one of the more complex methods, use one that suits what you need. The JdbcTemplate uses a PreparedStatement anyway.
If you use a ResultSetExtractor, you must iterate through the result yourself using next() calls. This explains the error, since the ResultSet is still positioned before the first row when you read its values.
For your use case, selecting a record for a given id, there is a simpler solution using JdbcTemplate.queryForObject and a RowMapper lambda:
String sql = "SELECT * FROM position WHERE id = ?";
Position position = jdbcTemplate.queryForObject(
        sql, new Object[] { id }, (ResultSet rs, int rowNum) -> {
            Position p = new Position();
            p.setId(rs.getString("id"));
            p.setPositionName(rs.getString("position_name"));
            p.setDescription(rs.getString("description"));
            return p;
        });
Wikipedia describes the Phantom read phenomenon as:
A phantom read occurs when, in the course of a transaction, two identical queries are executed, and the collection of rows returned by the second query is different from the first.
It also states that with the serializable isolation level, phantom reads are not possible. I'm trying to make sure this holds in H2, but either I expect the wrong thing, or I am doing something wrong, or something is wrong with H2. Nevertheless, here's the code:
try (Connection connection1 = DriverManager.getConnection(JDBC_URL, JDBC_USER, JDBC_PASSWORD)) {
    connection1.setAutoCommit(false);
    try (Connection connection2 = DriverManager.getConnection(JDBC_URL, JDBC_USER, JDBC_PASSWORD)) {
        connection2.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        connection2.setAutoCommit(false);

        assertEquals(0, selectAll(connection1));
        assertEquals(0, selectAll(connection2)); // A: select

        insertOne(connection1);                  // B: insert

        assertEquals(1, selectAll(connection1));
        assertEquals(0, selectAll(connection2)); // A: select

        connection1.commit();                    // B: commit the insert

        assertEquals(1, selectAll(connection1));
        assertEquals(0, selectAll(connection2)); // A: select ???
    }
}
Here, I start two concurrent connections and configure one of them with serializable transaction isolation. After that, I make sure that neither sees any data. Then, using connection1, I insert a new row. After that, I make sure that this new row is visible to connection1 but not to connection2. Then I commit the change and expect connection2 to remain unaware of it. Briefly, I expect all my A: select queries to return the same set of rows (an empty set in my case).
But this does not happen: the very last selectAll(connection2) returns the row that has just been inserted in a parallel connection. Am I wrong and this behavior is expected, or is something wrong with H2?
Here are the helper methods:
public void setUpDatabase() throws SQLException {
    try (Connection connection = DriverManager.getConnection(JDBC_URL, JDBC_USER, JDBC_PASSWORD)) {
        try (PreparedStatement s = connection.prepareStatement("create table Notes(text varchar(256) not null)")) {
            s.executeUpdate();
        }
    }
}

private static int selectAll(Connection connection) throws SQLException {
    int count = 0;
    try (PreparedStatement s = connection.prepareStatement("select * from Notes")) {
        s.setQueryTimeout(1);
        try (ResultSet resultSet = s.executeQuery()) {
            while (resultSet.next()) {
                ++count;
            }
        }
    }
    return count;
}

private static void insertOne(Connection connection) throws SQLException {
    try (PreparedStatement s = connection.prepareStatement("insert into Notes(text) values(?)")) {
        s.setString(1, "hello");
        s.setQueryTimeout(1);
        s.executeUpdate();
    }
}
The complete test is here: https://gist.github.com/loki2302/26f3c052f7e73fd22604
I use H2 1.4.185.
With pessimistic locking and the isolation level "serializable" enabled, your first two read operations on connections 1 and 2 respectively should result in two shared (read) locks.
The subsequent insertOne(connection1) needs a range lock, which is incompatible with a shared lock held by another transaction (here, transaction 2). Thus connection 1 will go into a "wait" (polling) state. Without setQueryTimeout(1), your application would hang.
With respect to https://en.wikipedia.org/wiki/Isolation_(database_systems)#Phantom_reads, you should alter your application (not using setQueryTimeout) to allow for the following schedule, either by manually starting two JVM instances or by using different threads:
Transaction 1 | Transaction 2 | Comment
--------------+---------------+--------
- | selectAll | Acquiring shared lock in T2
insert | - | Unable to acquire range lock
wait | - | T1 polling
wait | selectAll | T2 gets identical row set
wait | - |
wait | commit | T2 releasing shared lock
| | T1 resuming insert
commit | |
In case "serializable" is not supported, you will see:
Transaction 1 | Transaction 2 | Comment
--------------+---------------+--------
- | selectAll | Acquiring shared lock in T2
insert | - | No need for range lock due to missing support
commit | | T1 releasing all locks
| selectAll | T2 gets different row set
As the official documentation on SET LOCK_MODE says: to enable it, execute the SQL statement SET LOCK_MODE 1 or append ;LOCK_MODE=1 to the database URL: jdbc:h2:~/test;LOCK_MODE=1