When I execute:
select * from pg_stat_activity where state ~ 'idle in transact'
I get an unexpectedly large number of rows with state 'idle in transaction'. Some of them have been idle for a few days. Most of them show the same simple select query, which is executed from one service class (Hibernate 5.1.0.Final, Guice 4.1.0):
public class FirebaseServiceImpl implements FirebaseService {

    @Inject
    private Provider<FirebaseKeyDAO> firebaseKeyDAO;

    @Override
    public void sendNotification(User recipient) {
        List<FirebaseKey> firebaseKeys = firebaseKeyDAO.get().findByUserId(recipient.getId());

        final ExecutorService notificationsPool = Executors.newFixedThreadPool(3);
        for (FirebaseKey firebaseKey : firebaseKeys)
            notificationsPool.execute(new Runnable() {
                @Override
                public void run() {
                    sendNotification(new FirebaseNotification(firebaseKey.getFirebaseKey(), "example"));
                }
            });
        notificationsPool.shutdown();
    }
}
DAO method:
@Override
@SuppressWarnings("unchecked")
public List<FirebaseKey> findByUserId(Long userId) {
    Criteria criteria = getSession().createCriteria(type);
    criteria.add(Restrictions.eq("userId", userId));
    return criteria.list();
}
Why does this happen? How can I avoid it?
UPDATE
Transactions are not committed when I use the Guice Provider exampleDAO.get() in a separate thread:
@Inject
Provider<ExampleDAO> exampleDAO;
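If the DAO really has to be used from a worker thread, that thread has to manage its own session and transaction, because the request-scoped transaction set up on the calling thread never ends there and the connection stays idle in transaction. A minimal sketch, assuming a plain Hibernate SessionFactory is injectable; the names here are illustrative, not taken from the question:

notificationsPool.execute(new Runnable() {
    @Override
    public void run() {
        // Each worker opens and fully finishes its own unit of work.
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            // ... any DAO/database work needed by the notification goes here ...
            tx.commit();        // always end the transaction
        } catch (RuntimeException e) {
            tx.rollback();      // never leave it open on failure
            throw e;
        } finally {
            session.close();    // returns the connection to the pool
        }
    }
});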
This usually happens when you use pgbouncer or another pooler/session manager with pool_mode = transaction, e.g. when a client opens a transaction and holds it without committing or rolling back. Check whether you see DISCARD ALL in the query column; if you do, that is the case, because the pooler has to discard shared session plans and sequences and deallocate statements to avoid mixing them between different sessions in the pool.
On the other hand, any "normal" transaction shows the same idle in transaction state, e.g.:
2>select now(),pg_backend_pid();
now | pg_backend_pid
----------------------------------+----------------
2017-05-05 16:53:01.867444+05:30 | 26500
(1 row)
If we check its state we see a plain idle:
t=# select query,state from pg_stat_activity where pid = 26500;
query | state
--------------------------------+-------
select now(),pg_backend_pid(); | idle
(1 row)
Now we start a transaction in session 2:
2>begin;
BEGIN
2>select now(),pg_backend_pid();
now | pg_backend_pid
----------------------------------+----------------
2017-05-05 16:54:15.856306+05:30 | 26500
(1 row)
and check pg_stat_activity again:
t=# select query,state from pg_stat_activity where pid = 26500;
query | state
--------------------------------+---------------------
select now(),pg_backend_pid(); | idle in transaction
(1 row)
It will remain this way until the transaction ends or a timeout (such as idle_in_transaction_session_timeout) kicks in:
2>end;
COMMIT
t=# select query,state from pg_stat_activity where pid = 26500;
query | state
-------+-------
end; | idle
(1 row)
So it is quite common and OK to have such rows. If you want to avoid connected sessions, you have to disconnect the client. But establishing a connection in Postgres is expensive, so people usually reuse existing connections through a pool, and that is why such states appear in pg_stat_activity.
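If long-idle sessions pile up, a common follow-up (a sketch, not part of the answer above) is to look for transactions that have been idle beyond some threshold via pg_stat_activity; the one-hour interval and the connection details below are just examples:

try (Connection c = DriverManager.getConnection("jdbc:postgresql://localhost/mydb", "user", "pass");
     PreparedStatement ps = c.prepareStatement(
             "SELECT pid, now() - state_change AS idle_for, query " +
             "FROM pg_stat_activity " +
             "WHERE state = 'idle in transaction' " +
             "AND state_change < now() - interval '1 hour'");
     ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        // Report them (or, with care, terminate via SELECT pg_terminate_backend(pid)).
        System.out.printf("pid=%d idle for %s, last query: %s%n",
                rs.getInt("pid"), rs.getString("idle_for"), rs.getString("query"));
    }
}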
Related
I am trying to produce a phantom read for the sake of learning, but unfortunately I am unable to. I am using Java threads, JDBC, and MySQL.
Here is the program I am using:
package com.isolation.levels.phenomensa;
import javax.xml.transform.Result;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.CountDownLatch;
import static com.isolation.levels.ConnectionsProvider.getConnection;
import static com.isolation.levels.Utils.printResultSet;
/**
* Created by dreambig on 13.03.17.
*/
public class PhantomReads {
public static void main(String[] args) {
setUp(getConnection()); // delete the previously inserted row that is supposed to be the phantom row
CountDownLatch countDownLatch1 = new CountDownLatch(1); // used to synchronize thread steps
CountDownLatch countDownLatch2 = new CountDownLatch(1); // used to synchronize thread steps
Transaction1 transaction1 = new Transaction1(countDownLatch1, countDownLatch2, getConnection()); // the first runnable
Transaction2 transaction2 = new Transaction2(countDownLatch1, countDownLatch2, getConnection()); // the second runnable
Thread thread1 = new Thread(transaction1); // transaction 1
Thread thread2 = new Thread(transaction2); // transaction 2
thread1.start();
thread2.start();
}
private static void setUp(Connection connection) {
try {
connection.prepareStatement("DELETE from actor where last_name=\"PHANTOM_READ\"").execute();
} catch (SQLException e) {
e.printStackTrace();
}
}
public static class Transaction1 implements Runnable {
private CountDownLatch countDownLatch;
private CountDownLatch countDownLatch2;
private Connection connection;
public Transaction1(CountDownLatch countDownLatch, CountDownLatch countDownLatch2, Connection connection) {
this.countDownLatch = countDownLatch;
this.countDownLatch2 = countDownLatch2;
this.connection = connection;
}
@Override
public void run() {
try {
String query = "select * from actor where first_name=\"BELA\"";
connection.setAutoCommit(false); // start the transaction
// at this isolation level, dirty reads and non-repeatable reads are prevented;
// only phantom reads can occur
connection.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
//read the query result for the first time.
ResultSet resultSet = connection.prepareStatement(query).executeQuery();
printResultSet(resultSet); // print result.
//count down so that thread2 can insert a row and commit.
countDownLatch2.countDown();
// wait for the second transaction to finish inserting the row
countDownLatch.await();
System.out.println("\n ********* The query returns a second row that satisfies it (a phantom read) ********* !");
//query the result again ...
ResultSet secondRead = connection.createStatement().executeQuery(query);
printResultSet(secondRead); //print the result
} catch (SQLException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
public static class Transaction2 implements Runnable {
private CountDownLatch countDownLatch;
private CountDownLatch countDownLatch2;
private Connection connection;
public Transaction2(CountDownLatch countDownLatch, CountDownLatch countDownLatch2, Connection connection) {
this.countDownLatch = countDownLatch;
this.countDownLatch2 = countDownLatch2;
this.connection = connection;
}
@Override
public void run() {
try {
// wait for the first thread to read the result
countDownLatch2.await();
//insert and commit !
connection.prepareStatement("INSERT INTO actor (first_name,last_name) VALUE (\"BELA\",\"PHANTOM_READ\") ").execute();
//count down so that the thread1 can read the result again ...
countDownLatch.countDown();
} catch (SQLException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
However, this is actually the result:
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
The query returns a second row that satisfies it (a phantom read)
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
But I think it should be
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
The query returns a second row that satisfies it (a phantom read) !
----------------------------------------------------------
| 196 | | BELA | | WALKEN | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
----------------------------------------------------------
| 196 | | BELA | | PHANTOM_READ | | 2006-02-15 04:34:33.0 |
----------------------------------------------------------
I am using:
Java 8
JDBC
MySQL
InnoDB
the Sakila sample database imported into MySQL
A phantom read is the following scenario: a transaction reads a set of rows that satisfy a search condition. Then a second transaction inserts a row that satisfies this search condition. Then the first transaction reads the set of rows that satisfy the search condition again, and gets a different set of rows (e.g. including the newly inserted row).
Repeatable read requires that if a transaction reads a row, a different transaction then updates or deletes this row and commits these changes, and the first transaction rereads the row, it will get the same consistent values as before (a snapshot).
It actually doesn't require that phantom reads have to happen. MySQL will actually prevent phantom reads in more cases than it has to. In MySQL, phantom reads (currently) only happen after you (accidentally) updated a phantom row; otherwise the row stays hidden. This is specific to MySQL; other database systems will behave differently. Also, this behaviour might change some day (as MySQL only specifies that it supports consistent reads as required by the SQL standard, not under which specific circumstances phantom reads occur).
You can use for example the following steps to get phantom rows:
insert into actor (first_name,last_name) values ('ADELIN','NO_PHANTOM');
transaction 1:
select * from actor;
-- ADELIN|NO_PHANTOM
transaction 2:
insert into actor (first_name,last_name) values ('BELA','PHANTOM_READ');
commit;
transaction 1:
select * from actor; -- still the same
-- ADELIN|NO_PHANTOM
update actor set last_name = 'PHANTOM READ'
where last_name = 'PHANTOM_READ';
select * from actor; -- now includes the new, updated row
-- ADELIN|NO_PHANTOM
-- BELA |PHANTOM READ
Another funny thing, by the way, happens when you delete rows:
insert into actor (first_name,last_name) values ('ADELIN','NO_PHANTOM');
insert into actor (first_name,last_name) values ('BELA','REPEATABLE_READ');
transaction 1:
select * from actor;
-- ADELIN|NO_PHANTOM
-- BELA |REPEATABLE_READ
transaction 2:
delete from actor where last_name = 'REPEATABLE_READ';
commit;
transaction 1:
select * from actor; -- still the same
-- ADELIN|NO_PHANTOM
-- BELA |REPEATABLE_READ
update actor set last_name = '';
select * from actor; -- the deleted row stays unchanged
-- ADELIN|
-- BELA |REPEATABLE_READ
This is exactly what the SQL standard requires: if you reread a (deleted) row, you will get the original value.
Wikipedia describes the Phantom read phenomenon as:
A phantom read occurs when, in the course of a transaction, two identical queries are executed, and the collection of rows returned by the second query is different from the first.
It also states that with the serializable isolation level, phantom reads are not possible. I'm trying to verify this in H2, but either I expect the wrong thing, or I'm doing something wrong, or something is wrong with H2. Nevertheless, here's the code:
try(Connection connection1 = DriverManager.getConnection(JDBC_URL, JDBC_USER, JDBC_PASSWORD)) {
connection1.setAutoCommit(false);
try(Connection connection2 = DriverManager.getConnection(JDBC_URL, JDBC_USER, JDBC_PASSWORD)) {
connection2.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
connection2.setAutoCommit(false);
assertEquals(0, selectAll(connection1));
assertEquals(0, selectAll(connection2)); // A: select
insertOne(connection1); // B: insert
assertEquals(1, selectAll(connection1));
assertEquals(0, selectAll(connection2)); // A: select
connection1.commit(); // B: commit for insert
assertEquals(1, selectAll(connection1));
assertEquals(0, selectAll(connection2)); // A: select ???
}
}
Here, I start two concurrent connections and configure one of them to use serializable transaction isolation. After that, I make sure that neither sees any data. Then, using connection1, I insert a new row. After that, I make sure that this new row is visible to connection1 but not to connection2. Then I commit the change and expect connection2 to remain unaware of it. Briefly, I expect all my A: select queries to return the same set of rows (an empty set in my case).
But this does not happen: the very last selectAll(connection2) returns the row that has just been inserted in a parallel connection. Am I wrong and this behavior is expected, or is something wrong with H2?
Here are the helper methods:
public void setUpDatabase() throws SQLException {
try(Connection connection = DriverManager.getConnection(JDBC_URL, JDBC_USER, JDBC_PASSWORD)) {
try (PreparedStatement s = connection.prepareStatement("create table Notes(text varchar(256) not null)")) {
s.executeUpdate();
}
}
}
private static int selectAll(Connection connection) throws SQLException {
int count = 0;
try (PreparedStatement s = connection.prepareStatement("select * from Notes")) {
s.setQueryTimeout(1);
try (ResultSet resultSet = s.executeQuery()) {
while (resultSet.next()) {
++count;
}
}
}
return count;
}
private static void insertOne(Connection connection) throws SQLException {
try (PreparedStatement s = connection.prepareStatement("insert into Notes(text) values(?)")) {
s.setString(1, "hello");
s.setQueryTimeout(1);
s.executeUpdate();
}
}
The complete test is here: https://gist.github.com/loki2302/26f3c052f7e73fd22604
I use H2 1.4.185.
In the presence of pessimistic locking, when enabling the isolation level "serializable", your first two read operations on connections 1 and 2 respectively should result in two shared locks.
The subsequent insertOne(connection1) needs a range lock, which is incompatible with a shared lock held by the other transaction 2. Thus connection 1 will go into a "wait" (polling) state. Without setQueryTimeout(1), your application would hang.
With respect to https://en.wikipedia.org/wiki/Isolation_(database_systems)#Phantom_reads you should alter your application (not using setQueryTimeout) to allow for the following schedule, either by manually starting two JVM instances or by using different threads:
Transaction 1 | Transaction 2 | Comment
--------------+---------------+--------
- | selectAll | Acquiring shared lock in T2
insert | - | Unable to acquire range lock
wait | - | T1 polling
wait | selectAll | T2 gets identical row set
wait | - |
wait | commit | T2 releasing shared lock
| | T1 resuming insert
commit | |
In case "serializable" is not being supported you will see:
Transaction 1 | Transaction 2 | Comment
--------------+---------------+--------
- | selectAll | Acquiring shared lock in T2
insert | - | No need for range lock due to missing support
commit | | T1 releasing all locks
| selectAll | T2 gets different row set
As the official documentation for SET LOCK_MODE says: to enable it, execute the SQL statement SET LOCK_MODE 1 or append ;LOCK_MODE=1 to the database URL: jdbc:h2:~/test;LOCK_MODE=1
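A minimal sketch of how the URL variant might be applied to the test above; only the LOCK_MODE=1 part comes from the H2 documentation, the rest reuses the question's constants:

String url = "jdbc:h2:~/test;LOCK_MODE=1";
try (Connection connection1 = DriverManager.getConnection(url, JDBC_USER, JDBC_PASSWORD);
     Connection connection2 = DriverManager.getConnection(url, JDBC_USER, JDBC_PASSWORD)) {
    connection1.setAutoCommit(false);
    connection2.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    connection2.setAutoCommit(false);
    // ... run the selectAll/insertOne assertions from the question ...
}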
I have a really weird issue with a project I'm working on. I would appreciate it if someone could point me in the right direction here.
// Setup
There are multiple web servers with a load balancer in front of them. The servers handle requests that might come in multiple parts, and the parts can be handled by different servers. These multi-part requests should be combined into one single transaction that moves forward once all the parts have been received.
Which server does the final processing doesn't matter, but only one server can do it. The other servers that receive the earlier parts should just mark the part as received, store the data and give an immediate response back.
For now I'm using database table to handle the synchronization between nodes.
The basic idea is that when a server gets a part, it tries to acquire a lock using the transaction id that comes with the request. This is done by trying to insert a row into a Lock table with the txid as the primary key. If the insert is successful, that server gets the lock, processes the part it received by storing it to the database, checks whether the other parts have been received, and returns a response immediately if not.
// The Problem
The problem I have is that the threads seem to randomly lock up at the database, freezing the whole processing. I have debugged it to the point where, when multiple requests come in for processing at the same time, they just get stuck trying to acquire the lock and ultimately time out after 30 seconds. A few of the first requests may or may not get processed (it seems to be random), but even something like 7 concurrent requests blocks the database.
To me there should not be any way this could get stuck, and I'm fresh out of ideas.
// Information
I am using MySQL with the InnoDB engine. The servers run Java code, and Hibernate is used as an ORM layer to access the DB.
The Lock table:
CREATE TABLE `lock` (
`id` varchar(255) NOT NULL,
`expiryDate` datetime DEFAULT NULL,
`issueDate` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The id is the transaction id used to combine the parts.
I have a basic interface that manages lock access.
public interface LockProviderDao {
public boolean lock(String id);
public boolean unlock(String id);
}
And an implementation of that interface that uses Hibernate to access the database.
@Override
public boolean lock(String id) {
Session session = this.sessionFactory.openSession();
Lock lock = new Lock(id);
Transaction tx = null;
boolean locked = false;
try {
// Try to lock
tx = session.beginTransaction();
session.save(lock);
tx.commit();
locked = true;
} catch(Exception e) {
if(tx != null) {
tx.rollback();
}
} finally {
session.close();
}
return locked;
}
@Override
public boolean unlock(String id) {
Session session = this.sessionFactory.openSession();
boolean status = true;
Transaction tx = null;
try {
Lock lock = (Lock) session.load(Lock.class, id);
tx = session.beginTransaction();
session.delete(lock);
tx.commit();
} catch(Exception e) {
if(tx != null) {
tx.rollback();
}
status = false;
} finally {
session.close();
}
return status;
}
Seems simple enough. Here is the code that does the processing. This thread already has a Hibernate session open, so the Session opened inside the lock and unlock methods is a nested Session, if that makes any difference.
int counter = 0;
boolean lockAcquired = false;
do {
// Try to acquire the lock
lockAcquired = this.lockProviderDao.lock(txId);
if (!lockAcquired) {
// Didn't get it try a bit later
try {
Thread.sleep(defaultSleepPeriod);
} catch (Exception e) {
}
if (counter >= defaultSleepCycles) {
return;
}
counter++;
}
} while (!lockAcquired);
// DO THE PROCESSING HERE ONCE LOCK ACQUIRED
// Release the lock
this.lockProviderDao.unlock(txId);
I would lock after inserting the data. This means that you would have to change your algorithm to something like this (a rough code sketch follows the list):
Begin transaction
Insert the fragment to database
Commit transaction
Begin transaction
Count the number of fragments inserted / exit if not equal to the expected fragment count
Insert a row that indicates that the fragments will be processed (e.g. your lock row). If this fails, the fragments have been processed or are being processed (= rollback)
Commit transaction
Begin transaction
Read fragments (and verify that they still exist)
Process fragments
Delete lock and fragments (verify they still exist)
Commit transaction
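A rough sketch of the middle block (count the fragments, then insert the lock row) in Java, assuming the question's Hibernate setup; the Fragment entity, its txId property and the expectedCount parameter are illustrative, only the Lock class comes from the question:

public boolean tryStartProcessing(String txId, long expectedCount) {
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    try {
        Long count = (Long) session.createQuery(
                "select count(f) from Fragment f where f.txId = :txId")
                .setParameter("txId", txId)
                .uniqueResult();
        if (count == null || count != expectedCount) {
            tx.rollback();          // not all parts are stored yet
            return false;
        }
        // The primary-key insert acts as the "only one server processes" guard;
        // a duplicate key makes save/commit fail for every other server.
        session.save(new Lock(txId));
        tx.commit();
        return true;
    } catch (RuntimeException e) {
        tx.rollback();              // someone else already claimed it
        return false;
    } finally {
        session.close();
    }
}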
If you need to increase reliability, you have three options:
1. Use JMS with JTA to control the program flow
2. Have your client poll the server for status and start processing if all parts have been received but processing has not started yet or has stalled
3. Create a scheduler that starts processing if the same conditions apply
I would like to ask you for help with the following problem. I have a method:
String sql = "INSERT INTO table ...."
Query query = em.createNativeQuery(sql);
query.executeUpdate();
sql = "SELECT max(id) FROM ......";
query = em.createNativeQuery(sql);
Integer importId = ((BigDecimal) query.getSingleResult()).intValue();
for (EndurDealItem item : deal.getItems()) {
String sql2 = "INSERT INTO another_table";
em.createNativeQuery(sql2).executeUpdate();
}
After executing it, the data are not committed (it takes 10 or 15 minutes until they are committed). Is there any way to commit the data explicitly or trigger a commit? And what causes the transaction to remain uncommitted for so long?
The reason we use native queries is that we are exporting data to a shared interface and are not using the data afterwards.
I would like to mention that the transaction is container-managed (by Geronimo). The EntityManager is injected via:
@PersistenceContext(unitName = "XXXX", type = PersistenceContextType.TRANSACTION)
private EntityManager em;
Explicitly commit the transaction:
EntityManager em = /* get an entity manager */;
em.getTransaction().begin();
// make some changes
em.getTransaction().commit();
This should work. The execution time of all operations between begin() and commit() of course also depends on the loop you're performing, the number of rows you're inserting, the location of the database (network speed matters) and so on...
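With a container-managed persistence context like the one in the question, em.getTransaction() is usually not available; a hedged alternative sketch is to switch the bean to bean-managed transactions and commit via an injected UserTransaction (the annotations below are standard Java EE, but whether they fit the Geronimo setup is an assumption):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN) // bean-managed instead of container-managed
public class ExportService {

    @PersistenceContext(unitName = "XXXX")
    private EntityManager em;

    @Resource
    private UserTransaction utx;

    public void export(String sql) throws Exception {
        utx.begin();
        em.createNativeQuery(sql).executeUpdate();
        utx.commit(); // the data is visible to other sessions from this point on
    }
}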
Given the following, I am trying to force the child collection (countryData) to be loaded when I perform the query. This works, however I end up with duplicate Bin records being loaded.
public Collection<Bin> getBinsByPromotion(String season, String promotion) {
final Session session = sessionFactory.getCurrentSession();
try {
session.beginTransaction();
return (List<Bin>) session.createCriteria(Bin.class).
setFetchMode("countryData", FetchMode.JOIN).
add(Restrictions.eq("key.seasonCode", season)).
add(Restrictions.eq("key.promotionCode", promotion)).
add(Restrictions.ne("status", "closed")).
list();
} finally {
session.getTransaction().commit();
}
}
I don't want the default (lazy) behavior, as the query will return ~8k records, thus sending 16k additional queries off to fetch the child records.
If nothing else, I'd prefer:
select ... from bins b where b.seasonCode = ?
and b.promotionCode = ?
and b.status <> 'Closed';
select ... from binCountry bc where bc.seasonCode = ?
and bc.promotionCode = ?;
You can use CriteriaSpecification.DISTINCT_ROOT_ENTITY:
criteria.setResultTransformer(CriteriaSpecification.DISTINCT_ROOT_ENTITY);
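Applied to the query from the question, it might look like this (a sketch; only the result transformer line is new relative to the original code):

return (List<Bin>) session.createCriteria(Bin.class)
        .setFetchMode("countryData", FetchMode.JOIN)
        .add(Restrictions.eq("key.seasonCode", season))
        .add(Restrictions.eq("key.promotionCode", promotion))
        .add(Restrictions.ne("status", "closed"))
        // collapse the rows produced by the join back to one Bin per root entity
        .setResultTransformer(CriteriaSpecification.DISTINCT_ROOT_ENTITY)
        .list();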