Unable to produce a phantom read - java

I am trying to produce a phantom read for the sake of learning, but unfortunately I am unable to. I am using Java threads, JDBC, and MySQL.
Here is the program I am using:
package com.isolation.levels.phenomensa;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.CountDownLatch;

import static com.isolation.levels.ConnectionsProvider.getConnection;
import static com.isolation.levels.Utils.printResultSet;

/**
 * Created by dreambig on 13.03.17.
 */
public class PhantomReads {

    public static void main(String[] args) {
        setUp(getConnection()); // delete the previously inserted row, which is supposed to be the phantom row
        CountDownLatch countDownLatch1 = new CountDownLatch(1); // used to synchronize the threads' steps
        CountDownLatch countDownLatch2 = new CountDownLatch(1); // used to synchronize the threads' steps
        Transaction1 transaction1 = new Transaction1(countDownLatch1, countDownLatch2, getConnection()); // the first runnable
        Transaction2 transaction2 = new Transaction2(countDownLatch1, countDownLatch2, getConnection()); // the second runnable
        Thread thread1 = new Thread(transaction1); // transaction 1
        Thread thread2 = new Thread(transaction2); // transaction 2
        thread1.start();
        thread2.start();
    }

    private static void setUp(Connection connection) {
        try {
            connection.prepareStatement("DELETE from actor where last_name=\"PHANTOM_READ\"").execute();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public static class Transaction1 implements Runnable {

        private CountDownLatch countDownLatch;
        private CountDownLatch countDownLatch2;
        private Connection connection;

        public Transaction1(CountDownLatch countDownLatch, CountDownLatch countDownLatch2, Connection connection) {
            this.countDownLatch = countDownLatch;
            this.countDownLatch2 = countDownLatch2;
            this.connection = connection;
        }

        @Override
        public void run() {
            try {
                String query = "select * from actor where first_name=\"BELA\"";
                connection.setAutoCommit(false); // start the transaction
                // at this isolation level, dirty reads and non-repeatable reads are prevented;
                // only phantom reads can occur
                connection.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
                // read the query result for the first time
                ResultSet resultSet = connection.prepareStatement(query).executeQuery();
                printResultSet(resultSet); // print the result
                // count down so that thread2 can insert a row and commit
                countDownLatch2.countDown();
                // wait for the second transaction to finish inserting the row
                countDownLatch.await();
                System.out.println("\n ********* The query returns a second row that satisfies it (a phantom read) ********* !");
                // query the result again ...
                ResultSet secondRead = connection.createStatement().executeQuery(query);
                printResultSet(secondRead); // print the result
            } catch (SQLException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static class Transaction2 implements Runnable {

        private CountDownLatch countDownLatch;
        private CountDownLatch countDownLatch2;
        private Connection connection;

        public Transaction2(CountDownLatch countDownLatch, CountDownLatch countDownLatch2, Connection connection) {
            this.countDownLatch = countDownLatch;
            this.countDownLatch2 = countDownLatch2;
            this.connection = connection;
        }

        @Override
        public void run() {
            try {
                // wait for the first thread to read the result
                countDownLatch2.await();
                // insert and commit!
                connection.prepareStatement("INSERT INTO actor (first_name,last_name) VALUE (\"BELA\",\"PHANTOM_READ\") ").execute();
                // count down so that thread1 can read the result again ...
                countDownLatch.countDown();
            } catch (SQLException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
However, this is actually the result:
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------

 ********* The query returns a second row that satisfies it (a phantom read) ********* !

----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
But I think it should be:
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------

 ********* The query returns a second row that satisfies it (a phantom read) ********* !

----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
| 196 | | BELA | | PHANTOM_READ | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
I am using:
Java 8
JDBC
MySQL with the InnoDB storage engine
the Sakila sample database loaded into MySQL

A phantom read is the following scenario: a transaction reads a set of rows that satisfy a search condition. Then a second transaction inserts a row that satisfies this search condition. Then the first transaction again reads the set of rows that satisfy the search condition and gets a different set of rows (e.g. one including the newly inserted row).
Repeatable read requires that if a transaction reads a row, a different transaction then updates or deletes this row and commits these changes, and the first transaction rereads the row, it will get the same consistent values as before (a snapshot).
It doesn't actually require that phantom reads happen, though. MySQL will in fact prevent phantom reads in more cases than it has to. In MySQL, phantom reads (currently) only happen after you (accidentally) update a phantom row; otherwise the row stays hidden. This is specific to MySQL; other database systems will behave differently. Also, this behaviour might change some day (as MySQL only specifies that it supports consistent reads as required by the SQL standard, not under which specific circumstances phantom reads occur).
You can use for example the following steps to get phantom rows:
insert into actor (first_name,last_name) values ('ADELIN','NO_PHANTOM');
transaction 1:
select * from actor;
-- ADELIN|NO_PHANTOM
transaction 2:
insert into actor (first_name,last_name) values ('BELA','PHANTOM_READ');
commit;
transaction 1:
select * from actor; -- still the same
-- ADELIN|NO_PHANTOM
update actor set last_name = 'PHANTOM READ'
where last_name = 'PHANTOM_READ';
select * from actor; -- now includes the new, updated row
-- ADELIN|NO_PHANTOM
-- BELA |PHANTOM READ
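Translated back to the Java program above, this means Transaction1 only sees the phantom row after it touches it with an UPDATE. A minimal sketch of the extra step in Transaction1's run(), between the latch handshake and the second read (everything else is assumed to be as in the original program):
countDownLatch.await();
// Touch the still-invisible phantom row: under MySQL's REPEATABLE READ,
// the UPDATE operates on the latest committed data and makes the row
// visible to subsequent reads within this transaction.
connection.prepareStatement(
        "UPDATE actor SET last_name='PHANTOM READ' WHERE last_name='PHANTOM_READ'")
        .executeUpdate();
ResultSet secondRead = connection.createStatement().executeQuery(query);
printResultSet(secondRead); // should now include the phantom row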
Another interesting thing happens, by the way, when you delete rows:
insert into actor (first_name,last_name) values ('ADELIN','NO_PHANTOM');
insert into actor (first_name,last_name) values ('BELA','REPEATABLE_READ');
transaction 1:
select * from actor;
-- ADELIN|NO_PHANTOM
-- BELA |REPEATABLE_READ
transaction 2:
delete from actor where last_name = 'REPEATABLE_READ';
commit;
transaction 1:
select * from actor; -- still the same
-- ADELIN|NO_PHANTOM
-- BELA |REPEATABLE_READ
update actor set last_name = '';
select * from actor; -- the deleted row stays unchanged
-- ADELIN|
-- BELA |REPEATABLE_READ
This is exactly what the SQL standard requires: if you reread a (deleted) row, you will get the original values.

Related

Getting database deadlock with @Transactional in Spring Boot and Hibernate

Why am I getting a deadlock in this code?
I tried to debug it and also read many articles about deadlock prevention, but could not figure it out. I have used synchronization to make a block of code thread-safe on the basis of accountNumber.
I am getting this Transaction object from an API, and I want to lock my code on the basis of what the Transaction object contains. The Transaction object contains info like debit/credit account number, amount, etc.
Two threads should not execute the executeTransaction method simultaneously if there is any common accountNumber between them.
Here, lockedAccounts stores all accounts that are currently locked, and there are two methods for locking and unlocking an accountNumber.
DAO / Repository layer.
@Repository
public class TransactionDAOImpl implements TransactionDAO {

    // define field for entity manager
    private EntityManager entityManager;

    public TransactionDAOImpl() {}

    // set up constructor injection
    @Autowired
    public TransactionDAOImpl(EntityManager theEntityManager) {
        entityManager = theEntityManager;
    }

    private static final Set<String> lockedAccounts = new HashSet<>();

    private void LockAccount(String AccountNumber) throws InterruptedException {
        int count = 0;
        synchronized (lockedAccounts) {
            while (!lockedAccounts.add(AccountNumber)) {
                lockedAccounts.wait();
                count++;
            }
            System.out.println(AccountNumber + " waited for " + count + " times" + " and now i am getting lock");
        }
    }

    private void unLockAccount(String AccountNumber) {
        synchronized (lockedAccounts) {
            lockedAccounts.remove(AccountNumber);
            lockedAccounts.notifyAll();
            System.out.println("unlocking " + AccountNumber);
        }
    }

    @Override
    public void executeTransaction(Transaction theTransaction) {
        // System.out.println(theTransaction);
        // get the current hibernate session
        Session currentSession = entityManager.unwrap(Session.class);
        // lock both accounts in increasing order to avoid deadlock:
        // the lexicographically lesser account number should be locked first
        String firstAccount = theTransaction.getDebitAccountNumber();
        String secondAccount = theTransaction.getCreditAccountNumber();
        // check whether firstAccount is less than secondAccount; if not, swap the values
        if (firstAccount.compareTo(secondAccount) > 0) {
            firstAccount = theTransaction.getCreditAccountNumber();
            secondAccount = theTransaction.getDebitAccountNumber();
        }
        try {
            LockAccount(firstAccount);
            try {
                LockAccount(secondAccount);
                AccountDetail debitAccount = getAccountDetails(currentSession, theTransaction.getDebitAccountNumber());
                AccountDetail creditAccount = getAccountDetails(currentSession, theTransaction.getCreditAccountNumber());
                if (debitAccount == null || creditAccount == null) {
                    // invalid account number
                    theTransaction.setStatus("failed,account not found");
                } else if (debitAccount.getBalance() < theTransaction.getAmount()) {
                    // insufficient account balance
                    theTransaction.setStatus("failed,insufficient account balance");
                } else {
                    // update customer account balances
                    debitAccount.setBalance(debitAccount.getBalance() - theTransaction.getAmount());
                    creditAccount.setBalance(creditAccount.getBalance() + theTransaction.getAmount());
                    // save to database
                    currentSession.saveOrUpdate(debitAccount);
                    currentSession.saveOrUpdate(creditAccount);
                    // update status of transaction
                    theTransaction.setStatus("successful");
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                unLockAccount(secondAccount);
            }
        } catch (InterruptedException e1) {
            e1.printStackTrace();
        } finally {
            unLockAccount(firstAccount);
        }
        return;
    }

    private AccountDetail getAccountDetails(Session currentSession, String accountNumber) {
        Query<?> query = currentSession.createQuery("from AccountDetail where accountNumber=:accountNumber");
        query.setParameter("accountNumber", accountNumber);
        AccountDetail accountDetails = (AccountDetail) query.uniqueResult();
        return accountDetails;
    }
}
For more information, my accountDetails table in the database has three columns:
id (int, primary key)
AccountNumber (String, unique)
amount (double)
This is the service layer, where I am using the @Transactional annotation for the executeTransaction method.
public class TransactionServiceImpl implements TransactionService {

    private TransactionDAO theTransactionDAO;

    public TransactionServiceImpl() {}

    // constructor injection
    @Autowired
    public TransactionServiceImpl(TransactionDAO theTransactionDAO) {
        this.theTransactionDAO = theTransactionDAO;
    }

    @Override
    @Transactional
    public void executeTransaction(Transaction theTransaction) {
        theTransactionDAO.executeTransaction(theTransaction);
    }
}
But I am getting a database deadlock in this code. Below is my error:
2020-08-30 19:09:28.235 WARN 6948 --- [nio-8081-exec-4] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 1213, SQLState: 40001
2020-08-30 19:09:28.236 ERROR 6948 --- [nio-8081-exec-4] o.h.engine.jdbc.spi.SqlExceptionHelper : Deadlock found when trying to get lock; try restarting transaction
2020-08-30 19:09:28.384 ERROR 6948 --- [nio-8081-exec-4] o.a.c.c.C.[.[.[.[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [/bank] threw exception [Request processing failed; nested exception is org.springframework.dao.CannotAcquireLockException: could not execute statement; SQL [n/a]; nested exception is org.hibernate.exception.LockAcquisitionException: could not execute statement] with root cause
com.mysql.cj.jdbc.exceptions.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:123) ~[mysql-connector-java-8.0.21.jar:8.0.21]
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97) ~[mysql-connector-java-8.0.21.jar:8.0.21]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122) ~[mysql-connector-java-8.0.21.jar:8.0.21]
at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953) ~[mysql-connector-java-8.0.21.jar:8.0.21]
at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1092)
Suppose there are two account transactions (debitAccount, creditAccount): AT1(1,2) and AT2(2,1). We also have the Java lock (JL) and the database lock (DBL). In the following scenario, a deadlock will occur.
+------+---------------------+---------------------+-----------------------------------------------------+
| Step | AT1 State | AT2 State | Remark |
+------+---------------------+---------------------+-----------------------------------------------------+
| 1 | get JL | wait JL | |
+------+---------------------+---------------------+-----------------------------------------------------+
| 2 | release JL | get JL | AT1 saveOrUpdate may not flush to database, |
| | | | hence database lock may not be acquired this moment |
+------+---------------------+---------------------+-----------------------------------------------------+
| 3    | flush debitAccount  | flush debitAccount  | AT1 acquires DB lock for account 1,                 |
|      | saveOrUpdate        | saveOrUpdate        | AT2 acquires DB lock for account 2                  |
+------+---------------------+---------------------+-----------------------------------------------------+
| 4 | AT1 DBL account 1 | AT2 DBL account 2 | |
+------+---------------------+---------------------+-----------------------------------------------------+
| 5 | flush creditAccount | flush creditAccount | AT1 acquire DBL for account 2, |
| | saveOrUpdate | saveOrUpdate | AT2 acquire DBL for account 1, Deadlock |
+------+---------------------+---------------------+-----------------------------------------------------+
Please also note that:
The database lock is acquired by an update statement when the statement is flushed.
The database lock is released when the transaction commits or rolls back.
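One possible mitigation, sketched here under the assumption that the deadlock stems from the flush happening at commit time, after the Java locks have already been released: force the flush while both account locks are still held. (Moving the Java locking outside the @Transactional boundary would be the more robust fix; the snippet reuses the asker's names.)
// inside executeTransaction, replacing the two saveOrUpdate calls
currentSession.saveOrUpdate(debitAccount);
currentSession.saveOrUpdate(creditAccount);
// Flush now, while both Java locks are held: the UPDATE statements hit
// the database here, so the row locks are acquired inside the region
// already serialized by the account-number locks.
currentSession.flush();
theTransaction.setStatus("successful");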

'idle in transaction' when using Hibernate, Postgres and Guice Provider

When I execute:
select * from pg_stat_activity where state ~ 'idle in transact'
I get an unexpectedly large number of rows with state 'idle in transaction'. Some of them have been idle for a few days. Most of them are the same simple select query, which is executed from one service class (Hibernate 5.1.0.Final, Guice 4.1.0):
public class FirebaseServiceImpl implements FirebaseService {

    @Inject
    private Provider<FirebaseKeyDAO> firebaseKeyDAO;

    @Override
    public void sendNotification(User recipient) {
        List<FirebaseKey> firebaseKeys = firebaseKeyDAO.get().findByUserId(recipient.getId());
        final ExecutorService notificationsPool = Executors.newFixedThreadPool(3);
        for (FirebaseKey firebaseKey : firebaseKeys)
            notificationsPool.execute(new Runnable() {
                @Override
                public void run() {
                    sendNotification(new FirebaseNotification(firebaseKey.getFirebaseKey(), "example"));
                }
            });
        notificationsPool.shutdown();
    }
}
DAO method:
@Override
@SuppressWarnings("unchecked")
public List<FirebaseKey> findByUserId(Long userId) {
    Criteria criteria = getSession().createCriteria(type);
    criteria.add(Restrictions.eq("userId", userId));
    return criteria.list();
}
Why does this happen? How can I avoid it?
UPDATE
Transactions are not committed when I use the Guice Provider exampleDAO.get() in a separate thread:
@Inject
Provider<ExampleDAO> exampleDAO;
It usually happens when you use pgbouncer or another pooler/session manager with pool_mode = transaction, e.g. when a client opens a transaction and then holds it, neither committing nor rolling back. Check if you see DISCARD ALL in the query column - if you do, this is the case, because the pooler has to discard session state (shared plans, sequences, prepared statements and so on) to avoid mixing it between different sessions in the pool.
On the other hand, any "normal" transaction shows the same idle in transaction state, e.g.:
2>select now(),pg_backend_pid();
now | pg_backend_pid
----------------------------------+----------------
2017-05-05 16:53:01.867444+05:30 | 26500
(1 row)
If we check its state, we see an orthodox idle:
t=# select query,state from pg_stat_activity where pid = 26500;
query | state
--------------------------------+-------
select now(),pg_backend_pid(); | idle
(1 row)
Now we start a transaction on session 2:
2>begin;
BEGIN
2>select now(),pg_backend_pid();
now | pg_backend_pid
----------------------------------+----------------
2017-05-05 16:54:15.856306+05:30 | 26500
(1 row)
and check pg_stat_activity again:
t=# select query,state from pg_stat_activity where pid = 26500;
query | state
--------------------------------+---------------------
select now(),pg_backend_pid(); | idle in transaction
(1 row)
It will remain this way until statement timeout or end of transaction:
2>end;
COMMIT
t=# select query,state from pg_stat_activity where pid = 26500;
query | state
-------+-------
end; | idle
(1 row)
So it is quite common and OK to have it. If you want to avoid idle sessions, you have to disconnect the client. But connections in Postgres are expensive, so people usually try to reuse existing connections with a pool, and that is why such states appear in pg_stat_activity.
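In the asker's setup, a plausible culprit is that the session obtained on the worker thread opens a transaction that nobody ever commits. A minimal sketch of explicit transaction demarcation in the DAO method (assuming the Hibernate 5.1 Criteria API from the question; getSession() and type are the asker's):
@Override
@SuppressWarnings("unchecked")
public List<FirebaseKey> findByUserId(Long userId) {
    Session session = getSession();
    org.hibernate.Transaction tx = session.beginTransaction();
    try {
        List<FirebaseKey> result = session.createCriteria(type)
                .add(Restrictions.eq("userId", userId))
                .list();
        tx.commit(); // end the transaction so the backend returns to plain "idle"
        return result;
    } catch (RuntimeException e) {
        tx.rollback();
        throw e;
    }
}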

How to insert JSON into cassandra database using java API?

This is the code I have used for my program, but there are errors - please give me some suggestions with the corrected code.
session.execute("INSERT INTO users JSON '{'id':'user123' , 'age':21 ,'state':'TX'}';");
The errors point to this one statement, so I thought it's not necessary to present the whole code here. The table users has already been created in the Cassandra database with the columns id, age and state. I could not find any proper answers for this problem anywhere; I hope my problem is solved here.
Here is the working query, and below it the Java code where I insert it, and the results.
"INSERT INTO users JSON '{\"id\":888 , \"age\":21 ,\"state\":\"TX\"}'";
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CasandarConnect {

    public static void main(String[] args) {
        String serverIP = "127.0.0.1";
        String keyspace = "mykeyspace";
        Cluster cluster = Cluster.builder()
                .addContactPoints(serverIP)
                .build();
        Session session = cluster.connect(keyspace);
        // JSON keys and string values must use double quotes, escaped in Java
        String cqlStatement = "INSERT INTO users JSON '{\"id\":888 , \"age\":21 ,\"state\":\"TX\"}'";
        session.execute(cqlStatement);
        cluster.close();
    }
}
Result
cqlsh:mykeyspace> select * from users;
id | age | state
------+-----+-------
1745 | 12 | smith
123 | 21 | TX
888 | 21 | TX

Does H2 support the serializable isolation level?

Wikipedia describes the Phantom read phenomenon as:
A phantom read occurs when, in the course of a transaction, two identical queries are executed, and the collection of rows returned by the second query is different from the first.
It also states that with the serializable isolation level, phantom reads are not possible. I'm trying to verify that this holds in H2, but either I expect the wrong thing, or I am doing something wrong, or something is wrong with H2. Nevertheless, here's the code:
try (Connection connection1 = DriverManager.getConnection(JDBC_URL, JDBC_USER, JDBC_PASSWORD)) {
    connection1.setAutoCommit(false);
    try (Connection connection2 = DriverManager.getConnection(JDBC_URL, JDBC_USER, JDBC_PASSWORD)) {
        connection2.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        connection2.setAutoCommit(false);
        assertEquals(0, selectAll(connection1));
        assertEquals(0, selectAll(connection2)); // A: select
        insertOne(connection1);                  // B: insert
        assertEquals(1, selectAll(connection1));
        assertEquals(0, selectAll(connection2)); // A: select
        connection1.commit();                    // B: commit for insert
        assertEquals(1, selectAll(connection1));
        assertEquals(0, selectAll(connection2)); // A: select ???
    }
}
Here, I start two concurrent connections and configure one of them to use serializable transaction isolation. After that, I make sure that both see no data. Then, using connection1, I insert a new row. After that, I make sure that this new row is visible to connection1 but not to connection2. Then I commit the change and expect connection2 to remain unaware of it. Briefly, I expect all my A: select queries to return the same set of rows (an empty set in my case).
But this does not happen: the very last selectAll(connection2) returns the row that has just been inserted in the parallel connection. Am I wrong and this behavior is expected, or is something wrong with H2?
Here are the helper methods:
public void setUpDatabase() throws SQLException {
    try (Connection connection = DriverManager.getConnection(JDBC_URL, JDBC_USER, JDBC_PASSWORD)) {
        try (PreparedStatement s = connection.prepareStatement("create table Notes(text varchar(256) not null)")) {
            s.executeUpdate();
        }
    }
}

private static int selectAll(Connection connection) throws SQLException {
    int count = 0;
    try (PreparedStatement s = connection.prepareStatement("select * from Notes")) {
        s.setQueryTimeout(1);
        try (ResultSet resultSet = s.executeQuery()) {
            while (resultSet.next()) {
                ++count;
            }
        }
    }
    return count;
}

private static void insertOne(Connection connection) throws SQLException {
    try (PreparedStatement s = connection.prepareStatement("insert into Notes(text) values(?)")) {
        s.setString(1, "hello");
        s.setQueryTimeout(1);
        s.executeUpdate();
    }
}
The complete test is here: https://gist.github.com/loki2302/26f3c052f7e73fd22604
I use H2 1.4.185.
With pessimistic locking and isolation level "serializable" enabled, your first two read operations on connections 1 and 2 respectively should result in two shared (read) locks.
The subsequent insertOne(connection1) needs a range lock, which is incompatible with the shared lock held by the alien transaction 2. Thus connection 1 will go into a "wait" (polling) state; without setQueryTimeout(1), your application would hang.
With respect to https://en.wikipedia.org/wiki/Isolation_(database_systems)#Phantom_reads, you should alter your application (dropping setQueryTimeout) to allow for the following schedule, either by manually starting two JVM instances or by using different threads:
Transaction 1 | Transaction 2 | Comment
--------------+---------------+--------
- | selectAll | Acquiring shared lock in T2
insert | - | Unable to acquire range lock
wait | - | T1 polling
wait | selectAll | T2 gets identical row set
wait | - |
wait | commit | T2 releasing shared lock
| | T1 resuming insert
commit | |
In case "serializable" is not supported, you will see:
Transaction 1 | Transaction 2 | Comment
--------------+---------------+--------
- | selectAll | Acquiring shared lock in T2
insert | - | No need for range lock due to missing support
commit | | T1 releasing all locks
| selectAll | T2 gets different row set
As the official documentation of SET LOCK_MODE states: to enable it, execute the SQL statement SET LOCK_MODE 1 or append ;LOCK_MODE=1 to the database URL: jdbc:h2:~/test;LOCK_MODE=1
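Applied to the test from the question, only the connection URL needs to change (a sketch; JDBC_USER and JDBC_PASSWORD are the constants from the original test, and the database path is hypothetical):
// LOCK_MODE=1 enables H2's pessimistic table-level locking,
// which is what makes SERIALIZABLE block the concurrent insert
String url = "jdbc:h2:~/test;LOCK_MODE=1";
try (Connection connection2 = DriverManager.getConnection(url, JDBC_USER, JDBC_PASSWORD)) {
    connection2.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    connection2.setAutoCommit(false);
    // ... same selectAll/insertOne sequence as before; the A: selects
    // should now block the parallel insert instead of seeing the new row
}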

jOOQ: fetchMany and fetchAny

I was going through the jOOQ documentation to try to understand how fetchMany and fetchAny work, but there aren't many examples and use cases available.
Could someone show the proper use of these methods? How are they different from each other, and also from fetch()?
The general idea of the various ResultQuery.fetch() methods is outlined in the manual:
http://www.jooq.org/doc/latest/manual/sql-execution/fetching/
And in particular:
http://www.jooq.org/doc/latest/manual/sql-execution/fetching/many-fetching/
As far as your specific question is concerned, I think the relevant Javadocs might help:
fetchAny()
This executes the query and returns at most one resulting record.
Example:
TableRecord randomRecord =
DSL.using(configuration)
   .select()
   .from(TABLE)
   .fetchAny();
So, this will fetch whatever record the database returns first. A similar query would be the following one, where you explicitly limit the number of records to 1 in the database:
TableRecord randomRecord =
DSL.using(configuration)
   .select()
   .from(TABLE)
   .limit(1)
   .fetchOne();
fetchMany()
A variety of databases support returning more than one result set from stored procedures. A Sybase ASE example:
> sp_help 'author'
+--------+-----+-----------+-------------+-------------------+
|Name |Owner|Object_type|Object_status|Create_date |
+--------+-----+-----------+-------------+-------------------+
| author|dbo |user table | -- none -- |Sep 22 2011 11:20PM|
+--------+-----+-----------+-------------+-------------------+
+-------------+-------+------+----+-----+-----+
|Column_name |Type |Length|Prec|Scale|... |
+-------------+-------+------+----+-----+-----+
|id |int | 4|NULL| NULL| 0|
|first_name |varchar| 50|NULL| NULL| 1|
|last_name |varchar| 50|NULL| NULL| 0|
|date_of_birth|date | 4|NULL| NULL| 1|
|year_of_birth|int | 4|NULL| NULL| 1|
+-------------+-------+------+----+-----+-----+
When using JDBC directly, this is rather tedious, as you have to write a lot of code to fetch one result after the other:
ResultSet rs = statement.executeQuery();

// Repeat until there are no more result sets
for (;;) {

    // Empty the current result set
    while (rs.next()) {
        // [ .. do something with it .. ]
    }

    // Get the next result set, if available
    if (statement.getMoreResults()) {
        rs = statement.getResultSet();
    }
    else {
        break;
    }
}

// Be sure that all result sets are closed
statement.getMoreResults(Statement.CLOSE_ALL_RESULTS);
statement.close();
With jOOQ and fetchMany(), this is dead simple:
List<Result<Record>> results = create.fetchMany("sp_help 'author'");
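Each element of the returned list corresponds to one result set produced by the procedure; a short usage sketch (assuming the create and results variables from the snippet above):
for (Result<Record> result : results) {
    // one Result per result set returned by sp_help
    for (Record record : result) {
        System.out.println(record);
    }
}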
