I have 2 bugs that have been happening rarely over the last 3 years.
If I have 100 orders during a day, 1-2 orders raise alerts saying the counter was not incremented, but when I check the db manually it really was incremented.
If I have 3000 orders during a month, 3-5 orders raise alerts saying the lock was not released from the order after order completion, and indeed when I check the db manually the lock column is not null when it should be null.
I am using only JdbcTemplate and TransactionTemplate (select, update, read). I use JPA only when inserting a model into MySQL.
Everything is done under a lock, by 1 thread.
Code snippet to show the issue:
public synchronized void test() {
    long payment = 999;
    long bought_times_before = jdbcTemplate.queryForObject("select bought_times from user where id = ?", new Object[]{1}, Long.class);
    TransactionTemplate tmpl = new TransactionTemplate(txManager);
    tmpl.setTimeout(300);
    tmpl.setName("p:" + payment);
    tmpl.executeWithoutResult(status -> {
        jdbcTemplate.update("update orders set attempts_to_verify = attempts_to_verify + 1, transaction_value = null where id = ?", payment);
        jdbcTemplate.update("update orders set locked = null where id = ?", payment);
        jdbcTemplate.update("update user set bought_times = bought_times + 1 where id = 1");
    });
    long bought_times_after = jdbcTemplate.queryForObject("select bought_times from user where id = ?", new Object[]{1}, Long.class);
    if (bought_times_after <= bought_times_before) log.error("bought_times_after <= bought_times_before");
}
I upgraded MySQL and implemented a Redis distributed lock to allow only 1 thread to run the code (select, transaction, select).
UPDATE:
The default isolation level is READ COMMITTED.
I tried SERIALIZABLE but it still has the same bug.
UPDATE 2:
Re: lock != null after the transaction: it is somehow related to high load on MySQL, since it never occurs under low load.
UPDATE 3:
I checked the MySQL logs: nothing, no errors.
I also tried REQUIRES_NEW + SERIALIZABLE but got deadlocks.
UPDATE 4:
I wrote a test and cannot reproduce the issue. On production there is more than 1 transaction as well as more updates and reads, but I guess it is a hardware issue or a MySQL bug.
@PostConstruct
public void test() {
    jdbcTemplate.execute("CREATE TEMPORARY TABLE IF NOT EXISTS TEST ( id int, name int, locked boolean )");
    jdbcTemplate.execute("insert into TEST values(1, 1, 1);");
    for (int i = 0; i < 100000; i++) {
        long prev = jdbcTemplate.queryForObject("select name from TEST where id = 1", Long.class);
        TransactionTemplate tmpl = new TransactionTemplate(txManager);
        jdbcTemplate.update("update TEST set locked = true where id = 1;");
        tmpl.execute(new TransactionCallbackWithoutResult() {
            @SneakyThrows
            @Override
            protected void doInTransactionWithoutResult(org.springframework.transaction.TransactionStatus status) {
                jdbcTemplate.update("update TEST set name = name + 1 where id = 1;");
                jdbcTemplate.update("update TEST set locked = false where id = 1;");
            }
        });
        long curr = jdbcTemplate.queryForObject("select name from TEST where id = 1", Long.class);
        boolean lock = jdbcTemplate.queryForObject("select locked from TEST where id = 1", Boolean.class);
        if (curr <= prev) {
            log.error("curr <= prev");
        }
        if (lock) {
            log.error("lock = true");
        }
    }
}
UPDATE 5: I WAS ABLE TO REPRODUCE IT!
@PostConstruct
public void test() {
    jdbcTemplate.execute("CREATE TEMPORARY TABLE IF NOT EXISTS TEST ( id int, name int, locked boolean )");
    jdbcTemplate.execute("insert into TEST values(1, 1, 1);");
    ExecutorService executorService = Executors.newFixedThreadPool(100);
    for (int i = 0; i < 100000; i++) {
        executorService.submit(() -> {
            RLock rLock = redissonClient.getFairLock("lock");
            try {
                rLock.lock(120, TimeUnit.SECONDS);
                long prev = jdbcTemplate.queryForObject("select name from TEST where id = 1", Long.class);
                TransactionTemplate tmpl = new TransactionTemplate(txManager);
                jdbcTemplate.update("update TEST set locked = true where id = 1;");
                tmpl.execute(new TransactionCallbackWithoutResult() {
                    @SneakyThrows
                    @Override
                    protected void doInTransactionWithoutResult(org.springframework.transaction.TransactionStatus status) {
                        jdbcTemplate.update("update TEST set name = name + 1 where id = 1;");
                        jdbcTemplate.update("update TEST set locked = false where id = 1;");
                    }
                });
                long curr = jdbcTemplate.queryForObject("select name from TEST where id = 1", Long.class);
                boolean lock = jdbcTemplate.queryForObject("select locked from TEST where id = 1", Boolean.class);
                if (curr <= prev) {
                    log.error("curr <= prev");
                }
                if (lock) {
                    log.error("lock = true");
                }
            } finally {
                rLock.unlock();
            }
        });
    }
}
UPDATE 7: after the second and third run I cannot reproduce it again, neither with Lock nor with FairLock.
UPDATE 8: on prod I am using 3 Redis locks with 120-second timeouts, so I think a timeout occasionally expires on 1 of the 3 locks, and thus the code might be executed by 2 threads without holding the lock.
SOLUTION: increase the lock timeout as well as the transaction timeout up to 500 seconds.
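For reference, a minimal sketch of what the increased timeouts might look like, assuming a single Redisson fair lock and the same TransactionTemplate as above (the lock name is illustrative):

// sketch only: lock lease time and transaction timeout both raised to 500 seconds
RLock rLock = redissonClient.getFairLock("orders-lock"); // illustrative lock name
rLock.lock(500, TimeUnit.SECONDS); // lease time well above the worst-case execution time
try {
    TransactionTemplate tmpl = new TransactionTemplate(txManager);
    tmpl.setTimeout(500); // transaction timeout in seconds
    tmpl.executeWithoutResult(status -> {
        // ... the same order/user updates as in the snippet above ...
    });
} finally {
    rLock.unlock();
}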
UPDATE 9: it looks like the issue has been resolved, but I need to monitor it for a couple of weeks before closing the issue on Stack Overflow.
Related
I am struggling with the Hibernate prepared statement count.
I am using the following JPA criteria query:
int count = 30;
EntityManager manager = ...
CriteriaBuilder builder = manager.getCriteriaBuilder();
CriteriaQuery<String> select = builder.createQuery(String.class);
Root<AdministrationParameter> root = select.from(AdministrationParameter.class);
select.select(root.get(AdministrationParameter_.value));
ParameterExpression<String> peF1 = builder.parameter(AdministrationParameter_.context.getBindableJavaType(), "f1");
ParameterExpression<String> peF2 = builder.parameter(AdministrationParameter_.parameter.getBindableJavaType(), "f2");
Predicate p1 = builder.equal(root.get(AdministrationParameter_.context), peF1);
Predicate p2 = builder.equal(root.get(AdministrationParameter_.parameter), peF2);
select.where(p1, p2);
List<String> results = Collections.emptyList();
TypedQuery<String> query = manager.createQuery(select);
for (int i = 0; i < count; i++) {
    query.setParameter(peF1, administrationParameterTypeInterface.getContext());
    query.setParameter(peF2, administrationParameterTypeInterface.getParameter());
    query.getResultList();
}
The count variable is used to execute the query n times, e.g. to run a db trace in the background (the query is executed against a DB2 database).
Assume count = 30.
The DB2 trace says there are "30 prepares" and "30 describes", and the "statement found count" is 30.
Hibernate gives me the same values:
EntityManagerFactory factory = ...;
SessionFactory sessionFactory = factory.unwrap(SessionFactory.class);
Statistics statistics = sessionFactory.getStatistics();
statistics.setStatisticsEnabled(true);
...running the query above...
System.out.println("prepared statement count: " + statistics.getPrepareStatementCount());//is 30
System.out.println("query cache hit count: " + statistics.getQueryCacheHitCount());//0
System.out.println("query cache miss count: " + statistics.getQueryCacheMissCount());//0
System.out.println("query execution count: " + statistics.getQueryExecutionCount());//30
According to the javadoc (https://docs.jboss.org/hibernate/orm/3.2/api/org/hibernate/stat/Statistics.html), statistics.getPrepareStatementCount() is "The number of prepared statements that were acquired".
Shouldn't it be 1?
Which Hibernate version are you using? This might be a bug that has already been fixed in newer versions. If updating doesn't help, please create an issue in the issue tracker (https://hibernate.atlassian.net) with a test case (https://github.com/hibernate/hibernate-test-case-templates/blob/master/orm/hibernate-orm-5/src/test/java/org/hibernate/bugs/JPAUnitTestCase.java) that reproduces the issue.
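A rough sketch of what such a reproducer might look like, assuming a plain JUnit test and a test persistence unit (the unit name, parameter values, and scaffolding are illustrative; the query itself is the one from the question):

// illustrative reproducer sketch; "testPU" and the bound values are assumptions
@Test
public void preparedStatementCountForRepeatedCriteriaQuery() {
    EntityManagerFactory factory = Persistence.createEntityManagerFactory("testPU");
    EntityManager manager = factory.createEntityManager();

    Statistics statistics = factory.unwrap(SessionFactory.class).getStatistics();
    statistics.setStatisticsEnabled(true);

    CriteriaBuilder builder = manager.getCriteriaBuilder();
    CriteriaQuery<String> select = builder.createQuery(String.class);
    Root<AdministrationParameter> root = select.from(AdministrationParameter.class);
    select.select(root.get(AdministrationParameter_.value));
    ParameterExpression<String> peF1 = builder.parameter(String.class, "f1");
    ParameterExpression<String> peF2 = builder.parameter(String.class, "f2");
    select.where(builder.equal(root.get(AdministrationParameter_.context), peF1),
                 builder.equal(root.get(AdministrationParameter_.parameter), peF2));

    TypedQuery<String> query = manager.createQuery(select);
    for (int i = 0; i < 30; i++) {
        query.setParameter("f1", "someContext");   // illustrative values
        query.setParameter("f2", "someParameter");
        query.getResultList();
    }

    System.out.println("prepared statement count: " + statistics.getPrepareStatementCount());
    manager.close();
    factory.close();
}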
I'm trying to understand how to manage transactions in Spring Boot with a backing PostgreSQL DBMS. Here is a small repository whose sell() method checks whether there is enough coffee in stock and, if so, updates the coffees and sales tables accordingly. I annotated the method with @Transactional to guarantee that no two or more simultaneous method invocations would accidentally sell more quantity than there is in stock. It is also needed to update the two tables atomically.
However, the code below doesn't work as I expected when simulating two parallel transactions. Instead of suspending the second transaction and waiting for the completion of the first one, it throws CannotSerializeTransactionException, so the second transaction always fails, whether there is enough coffee in stock or not.
@Repository
public class CoffeesRepositoryImpl implements CoffeesRepository {

    private final JdbcTemplate jdbc;

    @Autowired
    public CoffeesRepositoryImpl(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @Override
    @Transactional(isolation = Isolation.REPEATABLE_READ, propagation = Propagation.REQUIRED)
    public void sell(int coffeeId, int saleQuantity, String manager) {
        // step #1: check if we have enough coffee in stock
        String sql = "SELECT * FROM coffees WHERE id = ?";
        Coffee coffee = jdbc.queryForObject(sql, this::mapRowToCoffeeObject, coffeeId);
        int stockAfterSale = coffee.getStock() - saleQuantity;
        if (stockAfterSale < 0)
            throw new RuntimeException("Attempt to sell more quantity than there is in stock");

        // step #2: update coffee stock after sale
        sql = "UPDATE coffees SET stock = ? WHERE id = ? ";
        jdbc.update(sql, stockAfterSale, coffeeId);

        // note: this sleep is used while testing to help
        // simulate a situation with two simultaneous calls
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // step #3: insert new record into sales table
        sql = "INSERT INTO sales (coffee_id, manager, datetime, sale_quantity, sale_sum) " +
              "VALUES(?, ?, ?, ?, ?)";
        BigDecimal saleSum = coffee.getPrice().multiply(BigDecimal.valueOf(saleQuantity));
        jdbc.update(sql, coffeeId, manager, LocalDateTime.now(), saleQuantity, saleSum);
    }

    private Coffee mapRowToCoffeeObject(ResultSet rs, int rowNum) throws SQLException {
        return new Coffee(rs.getInt("id"),
                rs.getString("name"),
                rs.getBigDecimal("price"),
                rs.getInt("stock"));
    }
}
This is the contents of my coffees table before and after two transactions:
id  name    price  stock
1   Arusha  15.90   9000
2   Catuai  18.00  10000
3   Mocha   17.00   7200

request #1: coffeesRepository.sell(2, 2000, 'admin')
request #2: coffeesRepository.sell(2, 500, 'admin')

id  name    price  stock
1   Arusha  15.90   9000
2   Catuai  18.00   8000
3   Mocha   17.00   7200

And in the sales table only one record is inserted, with sale quantity 2000 and sale sum 36 000,00.
Although the second request would seemingly have every chance to finish successfully after waiting, it just throws CannotSerializeTransactionException instead.
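For context, the two parallel invocations can be simulated roughly like this (a sketch, assuming the repository is injected; the executor setup is illustrative, not the exact test code):

// illustrative sketch of the two "simultaneous" calls from the example above
ExecutorService pool = Executors.newFixedThreadPool(2);
pool.submit(() -> coffeesRepository.sell(2, 2000, "admin")); // request #1
pool.submit(() -> coffeesRepository.sell(2, 500, "admin"));  // request #2 ends with CannotSerializeTransactionException
pool.shutdown();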
I changed my function to:
public Task assignTask(Student s) {
    Task task = null;
    Calendar calendar = Calendar.getInstance();
    java.util.Date now = calendar.getTime();
    java.sql.Timestamp date = new java.sql.Timestamp(now.getTime());
    String sr1 = "update Task t3 set t3.startDate = case when t3.startDate is null then '" + date + "' else t3.startDate end, t3.student.id = ?1 where id = (SELECT t FROM Task t WHERE t.batch not in (SELECT distinct batch FROM Task t2 WHERE t2.student.id= ?2 and t2.endDate IS NOT NULL) and ((t.student.id= ?3 AND t.endDate IS NULL) OR (t.student.id IS NULL)) ORDER BY t.student.id LIMIT 1) returning t3";
    Query query1 = this.entityManager.createNativeQuery(sr1).setParameter(1, s.getId()).setParameter(2, s.getId()).setParameter(3, s.getId());
    //int update = query1.executeUpdate();
    //List<Task> taskList = query1.getResultList(); // find the task to execute
    if (taskList.size() > 0) {
        task = taskList.get(0);
        s.addTask(task);
    }
    return task;
}
from
public Task assignTask(Student s) {
    Task task = null;
    String sr1 = "SELECT t FROM Task t WHERE t.batch not in (SELECT distinct batch FROM Task t2 WHERE t2.student.id= ?1 and t2.endDate IS NOT NULL) and ((t.student.id= ?2 AND t.endDate IS NULL) OR (t.student.id IS NULL)) ORDER BY t.student.id";
    Query query1 = this.entityManager.createQuery(sr1).setMaxResults(1).setParameter(1, s.getId()).setParameter(2, s.getId());
    List<Task> taskList = query1.getResultList(); // find the task to execute
    if (taskList.size() > 0) {
        task = taskList.get(0);
        task.setStudent(s);
        if (task.getStartDate() == null) {
            Calendar calendar = Calendar.getInstance();
            java.util.Date now = calendar.getTime();
            java.sql.Timestamp date = new java.sql.Timestamp(now.getTime());
            task.setStartDate(date);
        }
        if (task != null) {
            s.addTask(task);
            this.taskDao.save(task);
        }
    }
    return task;
}
The old function was working well, except that when 2 users asked for a task at the same time the code assigned the same task to both users.
I used an UPDATE ... RETURNING to get the same result (if I run the SQL in pgAdmin it works), but in Spring I don't know how to execute this SQL.
If I use executeUpdate I get the error javax.persistence.TransactionRequiredException: Executing an update/delete query, and I think I also lose the returned task (I only get an int); if I use getResultList I get an error saying something like "cannot edit" or similar.
How can I run the update and return the edited row? And why do I get the transaction error?
You need to learn about transactions in Spring; look up @Transactional. Furthermore, you should be designing your entities in a more logical way, IMO. If a Task can only be assigned to one person, it may make more sense to have the Student be a member of Task, with a OneToOne mapping. If you think about the Student class, I would ask: is a Student composed of Tasks, or do they have Tasks? If they have Tasks, there is more likely a rule describing which tasks they have, namely the ones assigned to them, rather than a field for recording the tasks they have. If it is convenient to have such a field within Student, then consider using a JPA query to select the tasks of a Student. That should be much cleaner than what you currently have.
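As a minimal sketch (not the exact fix), the TransactionRequiredException from executeUpdate typically goes away once the call runs inside a Spring-managed transaction, for example a @Transactional service method; the class and method names below are illustrative:

// sketch only: running the native update inside a transaction; names are illustrative
@Service
public class TaskAssignmentService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional // executeUpdate on an update/delete query requires an active transaction
    public int assignTaskNative(String sql, Long studentId) {
        return entityManager.createNativeQuery(sql)
                .setParameter(1, studentId)
                .setParameter(2, studentId)
                .setParameter(3, studentId)
                .executeUpdate(); // returns the number of affected rows, not the updated row
    }
}

Getting the updated row back from UPDATE ... RETURNING through JPA is a separate problem; this sketch only addresses the missing transaction.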
I have 3 tables with the following structure:
Author
  idAuthor INT
  name VARCHAR

Publication
  idPublication INT
  Title VARCHAR
  Date YEAR
  Type VARCHAR
  Conference

author_has_publication
  author_idAuthor INT
  publication_idPublication INT
I am trying to build a relation graph of the authors. The objective is to show the number of publications they have in common. The author names are the parameters; I can have up to 8 names. My code gives the number of common publications between 2 authors, so I have to loop over the pairs. I am currently using a Java loop and SQL statements to do that. Here is the SQL part:
private int runQuery(String a1, String a2) { // a1 author 1 and a2 author 2
    try {
        auth1 = new ArrayList<String>();
        Class.forName("com.mysql.jdbc.Driver");
        Connection connection = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "root", "ROOT");
        Statement stmt = connection.createStatement();
        long start = System.currentTimeMillis();
        String queryUpdate1 = "DROP TABLE IF EXISTS temp1;";
        String queryUpdate2 = "DROP TABLE IF EXISTS temp2;";
        String queryUpdate3 = "CREATE TEMPORARY TABLE IF NOT EXISTS temp1 AS (SELECT Author.name, Publication.idPublication, Publication.title FROM Author INNER JOIN Author_has_Publication ON Author_has_Publication.author_idAuthor=author.idAuthor INNER JOIN Publication ON Author_has_Publication.publication_idPublication=publication.idPublication WHERE Author.name='" + a1 + "');";
        String queryUpdate4 = "CREATE TEMPORARY TABLE IF NOT EXISTS temp2 AS (SELECT Author.name, Publication.idPublication, Publication.title FROM Author INNER JOIN Author_has_Publication ON Author_has_Publication.author_idAuthor=author.idAuthor INNER JOIN Publication ON Author_has_Publication.publication_idPublication=publication.idPublication WHERE Author.name='" + a2 + "');";
        String query = "SELECT COUNT(*) FROM (SELECT temp1.title from temp1 INNER JOIN temp2 on temp1.idPublication = temp2.idPublication) as t;";
        stmt.executeUpdate(queryUpdate1);
        stmt.executeUpdate(queryUpdate2);
        stmt.executeUpdate(queryUpdate3);
        stmt.executeUpdate(queryUpdate4);
        ResultSet rs = stmt.executeQuery(query);
        int result = -1;
        while (rs.next()) {
            result = rs.getInt(1);
        }
        System.out.println("result = " + result);
        long end = System.currentTimeMillis() - start;
        queryTimeLabel.setText("Query Execution Time :" + end);
        connection.close();
        return result;
    } catch (Exception e) {
        System.out.println(e);
    }
    return -1;
}
Here is the loop part (to repeat the SQL when there are more than 2 authors given) and the graph generation:
public void actionPerformed(ActionEvent e) {
    graph = new mxGraph();
    Object parent = graph.getDefaultParent();
    authVertex = getAuthors();

    // CREATES GRAPH; the graph only shows up after you resize the window
    graph.getModel().beginUpdate();
    try {
        int i = 0;
        for (String a : authVertex.keySet()) {
            int j = 0;
            for (String b : authVertex.keySet()) {
                if (j > i) {
                    // loop the SQL statement 2 by 2
                    graph.insertEdge(parent, null, String.valueOf(runQuery(a, b)), authVertex.get(a), authVertex.get(b));
                }
                j++;
            }
            i++;
        }
    } finally {
        graph.getModel().endUpdate();
    }
    graphComponent = new mxGraphComponent(graph);
    graphPan.removeAll();
    graphPan.add(graphComponent);
    setVisible(true);
}
My code currently works, but I would like to know whether I could improve performance by pushing everything into MySQL, i.e. passing the author names as parameters and letting MySQL handle the loop. I looked at MySQL stored procedures, but my issue is how to handle the author names parameter, since the number of names varies.
One way, in a single statement:
SELECT COUNT(*)
FROM Author_has_Publication AS ap1
JOIN Author_has_Publication AS ap2
     ON ap1.publication_idPublication = ap2.publication_idPublication
JOIN Author AS a1 ON ap1.author_idAuthor = a1.idAuthor
JOIN Author AS a2 ON ap2.author_idAuthor = a2.idAuthor
WHERE a1.name = '...'
  AND a2.name = '...'
Another way may be
SELECT COUNT(*)
FROM
(
    SELECT ahp.publication_idPublication, COUNT(*)
    FROM Author_has_Publication AS ahp
    JOIN Author AS a ON a.idAuthor = ahp.author_idAuthor
    WHERE a.name IN ('...', '...')
    GROUP BY ahp.publication_idPublication
    HAVING COUNT(*) = 2 -- number of authors
) x
Composite indexes needed:
Author_has_Publication: (author_idAuthor, publication_idPublication)
Author_has_Publication: (publication_idPublication, author_idAuthor)
Author: (name, idAuthor)
Note: Each technique can be rather easily extended to more than 2 authors. The second query could even be adapted to "at least 3 of these 5 authors": 5 names in IN and HAVING COUNT(*) >= 3.
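To illustrate that extension to more than 2 authors (the question allows up to 8 names), here is a rough Java sketch that builds the second query with one placeholder per name; it assumes the question's schema and standard java.sql/java.util imports, and that author names are distinct:

// sketch only: count publications shared by ALL of the given authors
private int countCommonPublications(Connection connection, List<String> authorNames) throws SQLException {
    // one '?' placeholder per author name, e.g. "?, ?, ?" for three names
    String placeholders = String.join(", ", Collections.nCopies(authorNames.size(), "?"));
    String sql =
            "SELECT COUNT(*) FROM ( " +
            "  SELECT ahp.publication_idPublication " +
            "  FROM Author_has_Publication AS ahp " +
            "  JOIN Author AS a ON a.idAuthor = ahp.author_idAuthor " +
            "  WHERE a.name IN (" + placeholders + ") " +
            "  GROUP BY ahp.publication_idPublication " +
            "  HAVING COUNT(*) = ? " +
            ") x";
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        int idx = 1;
        for (String name : authorNames) {
            ps.setString(idx++, name); // bind each author name safely, no string concatenation
        }
        ps.setInt(idx, authorNames.size()); // a publication must appear once per given author
        try (ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}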
I have 60K records to insert. I want to commit the records in batches of 100.
Below is my code
for (int i = 0; i < 60000; i++) {
    entityRepo.save(entity);
    if (i % 100 == 0) {
        entityManager.flush();
        entityManager.clear();
        LOG.info("Committed = " + i);
    }
}
entityManager.flush();
entityManager.clear();
I keep checking the database whenever I see the log message, but I don't see the records getting committed. What am I missing?
It is not enough to call flush() and clear(). You need a reference to the Transaction and you need to call commit() on it (from the reference guide):
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer(.....);
    session.save(customer);
}
tx.commit();
session.close();
I see two ways to do this. One is to define the transaction declaratively and call it from an external method.
Parent:

List<Domain> domainList = new ArrayList<>();
for (int i = 0; i < 60000; i++) {
    domainList.add(domain);
    if (i % 100 == 0) {
        child.saveAll(domainList);
        domainList.clear();
    }
}

Child:

@Transactional
public void saveAll(List<Domain> domainList) {
}
This calls the declarative method at regular intervals, as defined by the parent.
The other way is to manually begin and end the transaction and close the session.
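A rough sketch of that manual option, assuming a Hibernate SessionFactory and the same batch size of 100 as the question (untested, illustrative code):

// sketch only: commit every 100 saves, then start a new transaction
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 60000; i++) {
    session.save(entity);
    if (i % 100 == 0) {
        session.flush();
        session.clear();
        tx.commit();                     // commit this batch
        tx = session.beginTransaction(); // begin the next batch
    }
}
tx.commit();   // commit the final partial batch
session.close();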