How to avoid CannotSerializeTransactionException when using Spring with Postgres? - java

I'm trying to understand how to manage transactions in Spring Boot with a backing PostgreSQL DBMS. Here is a small repository whose sell() method checks whether there is enough coffee in stock and, if so, updates the coffees and sales tables accordingly. I annotated the method with @Transactional to guarantee that no two or more simultaneous invocations can accidentally sell more than there is in stock; it is also needed to update the two tables atomically.
However, the code below doesn't work as I expected when simulating two parallel transactions. Instead of suspending the second transaction until the first one completes, it throws CannotSerializeTransactionException, so the second transaction always fails, whether there is enough coffee in stock or not.
@Repository
public class CoffeesRepositoryImpl implements CoffeesRepository {

    private final JdbcTemplate jdbc;

    @Autowired
    public CoffeesRepositoryImpl(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @Override
    @Transactional(isolation = Isolation.REPEATABLE_READ, propagation = Propagation.REQUIRED)
    public void sell(int coffeeId, int saleQuantity, String manager) {
        // step #1: check if we have enough coffee in stock
        String sql = "SELECT * FROM coffees WHERE id = ?";
        Coffee coffee = jdbc.queryForObject(sql, this::mapRowToCoffeeObject, coffeeId);
        int stockAfterSale = coffee.getStock() - saleQuantity;
        if (stockAfterSale < 0)
            throw new RuntimeException("Attempt to sell more quantity than there is in stock");

        // step #2: update coffee stock after sale
        sql = "UPDATE coffees SET stock = ? WHERE id = ?";
        jdbc.update(sql, stockAfterSale, coffeeId);

        // note: this sleep is used while testing to help
        // simulate a situation with two simultaneous calls
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // step #3: insert new record into sales table
        sql = "INSERT INTO sales (coffee_id, manager, datetime, sale_quantity, sale_sum) " +
              "VALUES (?, ?, ?, ?, ?)";
        BigDecimal saleSum = coffee.getPrice().multiply(BigDecimal.valueOf(saleQuantity));
        jdbc.update(sql, coffeeId, manager, LocalDateTime.now(), saleQuantity, saleSum);
    }

    private Coffee mapRowToCoffeeObject(ResultSet rs, int rowNum) throws SQLException {
        return new Coffee(rs.getInt("id"),
                rs.getString("name"),
                rs.getBigDecimal("price"),
                rs.getInt("stock"));
    }
}
This is the contents of my coffees table before and after the two transactions:
Before:
id  name    price  stock
1   Arusha  15.90   9000
2   Catuai  18.00  10000
3   Mocha   17.00   7200
request #1: coffeesRepository.sell(2, 2000, 'admin')
request #2: coffeesRepository.sell(2, 500, 'admin')
After:
id  name    price  stock
1   Arusha  15.90   9000
2   Catuai  18.00   8000
3   Mocha   17.00   7200
And in the sales table only one record is inserted, with sale quantity 2000 and sale sum 36 000.00.
Although the second request would seemingly have every chance to complete successfully after being suspended, it just throws CannotSerializeTransactionException instead.
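One common way to avoid this error under PostgreSQL (a sketch, not from the original question) is to use READ_COMMITTED isolation together with an explicit row lock via SELECT ... FOR UPDATE, so the second transaction blocks on the locked row until the first commits instead of failing with a serialization error:

@Override
@Transactional(isolation = Isolation.READ_COMMITTED, propagation = Propagation.REQUIRED)
public void sell(int coffeeId, int saleQuantity, String manager) {
    // FOR UPDATE makes a concurrent sell() block on this row until the
    // current transaction commits, instead of raising a serialization failure
    String sql = "SELECT * FROM coffees WHERE id = ? FOR UPDATE";
    Coffee coffee = jdbc.queryForObject(sql, this::mapRowToCoffeeObject, coffeeId);
    int stockAfterSale = coffee.getStock() - saleQuantity;
    if (stockAfterSale < 0)
        throw new RuntimeException("Attempt to sell more quantity than there is in stock");
    // ... steps #2 and #3 unchanged ...
}

Alternatively, keep REPEATABLE_READ and retry the whole transaction when CannotSerializeTransactionException is thrown; at that isolation level PostgreSQL treats serialization failures as expected and retryable.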

Related

How to properly / efficiently manage entity manager JPA Spring @Transactional for large datasets?

I am attempting to insert ~57,000 entities into my database, but the insert method takes longer and longer as the loop progresses. I have implemented batches of 25, each time flushing, clearing, and closing the transaction (I'm pretty sure), without success. Is there something else I need to be doing in the code below to maintain the insert rate? I feel like it should not take 4+ hours to insert 57K records.
[Migrate.java]
This is the main class that loops through 'Xaction' entities and adds 'XactionParticipant' records based off each Xaction.
// Use hibernate cursor to efficiently loop through all xaction entities
String hql = "select xaction from Xaction xaction";
Query<Xaction> query = session.createQuery(hql, Xaction.class);
query.setFetchSize(100);
query.setReadOnly(true);
query.setLockMode("xaction", LockMode.NONE);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);

int count = 0;
Instant lap = Instant.now();
List<Xaction> xactionsBatch = new ArrayList<>();
while (results.next()) {
    count++;
    Xaction xaction = (Xaction) results.get(0);
    xactionsBatch.add(xaction);

    // save new XactionParticipants in batches of 25
    if (count % 25 == 0) {
        xactionParticipantService.commitBatch(xactionsBatch);
        float rate = ChronoUnit.MILLIS.between(lap, Instant.now()) / 25f / 1000;
        System.out.printf("Batch rate: %.4fs per xaction\n", rate);
        xactionsBatch = new ArrayList<>();
        lap = Instant.now();
    }
}
xactionParticipantService.commitBatch(xactionsBatch);
results.close();
[XactionParticipantService.java]
This service provides a method with "REQUIRES_NEW" in an attempt to close the transaction for each batch:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void commitBatch(List<Xaction> xactionBatch) {
    for (Xaction xaction : xactionBatch) {
        try {
            XactionParticipant xp = new XactionParticipant();
            // ... create xp based off Xaction info ...

            // Use native query for efficiency
            String nativeQueryStr = "INSERT INTO XactionParticipant .... xp info/data";
            Query q = em.createNativeQuery(nativeQueryStr);
            q.executeUpdate();
        } catch (Exception e) {
            log.error("Unable to update", e);
        }
    }
    // Clear just in case??
    em.flush();
    em.clear();
}
It is not clear what the root cause of your performance problem is, Java memory consumption or DB performance; please check some thoughts below.
The following code does not actually optimize memory consumption:
String hql = "select xaction from Xaction xaction";
Query<Xaction> query = session.createQuery(hql, Xaction.class);
query.setFetchSize(100);
query.setReadOnly(true);
query.setLockMode("xaction", LockMode.NONE);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
Since you are retrieving full-blown entities, they get stored in the persistence context (the session-level cache). To free memory, you need to detach each entity once it has been processed (i.e. after xactionsBatch.add(xaction) or after // ... create xp based off Xaction info ...). Otherwise, at the end of processing you consume the same amount of memory as if you had done List<Xaction> results = query.getResultList(); and here I am not sure which is better: consuming all the required memory at the start of the transaction and releasing other resources early, or keeping the cursor and JDBC connection open for 4 hours.
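A minimal sketch of that detaching (assuming session and the batch handling from the question); evicting each processed entity keeps the persistence context from growing over the full 57K-row run:

while (results.next()) {
    count++;
    Xaction xaction = (Xaction) results.get(0);
    xactionsBatch.add(xaction);
    if (count % 25 == 0) {
        xactionParticipantService.commitBatch(xactionsBatch);
        // evict processed entities so the session-level cache stays small
        for (Xaction processed : xactionsBatch) {
            session.detach(processed);
        }
        xactionsBatch = new ArrayList<>();
    }
}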
The following code does not actually optimize JDBC interactions:
for (Xaction xaction : xactionBatch) {
    try {
        XactionParticipant xp = new XactionParticipant();
        // ... create xp based off Xaction info ...

        // Use native query for efficiency
        String nativeQueryStr = "INSERT INTO XactionParticipant .... xp info/data";
        Query q = em.createNativeQuery(nativeQueryStr);
        q.executeUpdate();
    } catch (Exception e) {
        log.error("Unable to update", e);
    }
}
Yes, in general JDBC should be faster than the JPA API, but that is not your case: you are inserting records one by one instead of using batch inserts. To take advantage of batches, your code should look like this:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void commitBatch(List<Xaction> xactionBatch) {
    session.doWork(connection -> {
        String insert = "INSERT INTO XactionParticipant VALUES (?, ?, ...)";
        try (PreparedStatement ps = connection.prepareStatement(insert)) {
            for (Xaction xaction : xactionBatch) {
                ps.setString(1, "val1");
                ps.setString(2, "val2");
                ps.addBatch();
                ps.clearParameters();
            }
            ps.executeBatch();
        }
    });
}
BTW, Hibernate can do the same if hibernate.jdbc.batch_size is set to a large enough positive integer and the entities are properly designed (id generation backed by a DB sequence with a large enough allocationSize).
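For illustration (a sketch, not from the original answer), in a Spring Boot application that could mean setting spring.jpa.properties.hibernate.jdbc.batch_size=50 (plus hibernate.order_inserts=true) in application.properties, and backing the id with a sequence whose allocationSize roughly matches:

@Entity
public class XactionParticipant {

    // allocationSize should roughly match hibernate.jdbc.batch_size so
    // Hibernate can assign ids without a DB round trip per row
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "xp_seq")
    @SequenceGenerator(name = "xp_seq", sequenceName = "xp_seq", allocationSize = 50)
    private Long id;

    // ... other fields ...
}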

How to change a value in MySQL based on another value in the same database?

I am trying to change a value in my database based on another value in the same database but in a different table. The first table is called 'orders' and the second is called 'buysupply'. I want to change the value of sumquantity in the 'buysupply' table by subtracting a value from the quantity_ordered column of the 'orders' table. I tried writing a query, but it is not working; it keeps popping up an error. If you know of a solution, do let me know. The code is all below.
private void DispatchButtonActionPerformed(java.awt.event.ActionEvent evt) {
    String type = txttype.getSelectedItem().toString();
    String name = txtname.getText();
    String quantity = txtquantity.getText();
    String dispatch_row = txtdispatch.getText();
    String statusDispatched = "Dispatched";
    try {
        Class.forName("com.mysql.jdbc.Driver");
        con1 = DriverManager.getConnection("jdbc:mysql://localhost/restock", "root", "password");
        // Focus on this
        String template = "UPDATE orders SET status = '%s' WHERE id = %s";
        String template2 = "UPDATE buysupply SET sumquantity = sumquantity - %s WHERE id = %s";
        String quantity_ordered = "quantity_ordered FROM orders";
        pst = con1.prepareStatement(String.format(template, statusDispatched, dispatch_row));
        pst.executeUpdate();
        pst1 = con1.prepareStatement(String.format(template2, quantity_ordered, dispatch_row));
        pst1.executeUpdate();
        // Look on top
        JOptionPane.showMessageDialog(null, "Item has been dispatched");
        // To update the newly recorded data to the table
        table_update();
        // Set the textfields to empty upon button click
        txttype.setSelectedIndex(-1);
        txtname.setText("");
        txtquantity.setText("");
        txtdispatch.setText("");
        txttype.requestFocus();
    } catch (ClassNotFoundException | SQLException ex) {
        JOptionPane.showMessageDialog(null, "Quantity or Dispatch field is not an integer, Please try again.");
        Logger.getLogger(mainpage.class.getName()).log(Level.SEVERE, null, ex);
    }
}
// This code is in another class file
try {
    Class.forName("com.mysql.jdbc.Driver");
    con1 = DriverManager.getConnection("jdbc:mysql://localhost/restock", "root", "password");
    String template = "SELECT SUM(quantity) as sumquantity FROM buysupply WHERE itemtype IN ('Plastic gloves', 'Rubber gloves')";
    PreparedStatement pst = con1.prepareStatement(template);
    ResultSet rs = pst.executeQuery();
    if (rs.next()) {
        glovesum = rs.getString("sumquantity");
    } else {
        System.out.print("Query didn't return any results");
    }
} catch (ClassNotFoundException | SQLException ex) {
    Logger.getLogger(stock.class.getName()).log(Level.SEVERE, null, ex);
}
As far as I understand (and please read to the end), this looks like a classic invoice/order database, with a list of products in the "buysupply" table and the ordering client's list of products in the "orders" table.
The first point, which several people raised in the comments, is the missing link between the two tables. Reading your piece of code, I assume the link is made by an id, but that is not clear, so I offer a solution based on a link between these columns:
orders.itemtype = buysupply.itemtype
If the link is on another column, please change it in the SQL query below.
I also assume the orders.status column has to change from what I call the 'Waiting' value to the 'Dispatched' value.
So here is the data before, in the "buysupply" table:
id  itemtype  quantity
1   mask      704
2   clothed   101
3   N95       18
And the data before in the "orders" table:
id  itemtype  quantity_ordered  status
1   mask      1                 Dispatched
2   clothed   3                 Waiting
The SQL to update both values (orders.status and buysupply.quantity) should be something like the following, supposing the orders.id to update is 2:
update orders, buysupply
set orders.status = 'Dispatched',
    buysupply.quantity = buysupply.quantity - orders.quantity_ordered
where
    orders.itemtype = buysupply.itemtype
    AND orders.status = 'Waiting'
    AND orders.id = '2'
AFTER:
Here is the data after, in the "buysupply" table:
id  itemtype  quantity
1   mask      704
2   clothed   98
3   N95       18
And the data after in the "orders" table:
id  itemtype  quantity_ordered  status
1   mask      1                 Dispatched
2   clothed   3                 Dispatched
The update can apply to several tables and columns; just prefix each column with its table name to avoid confusion.
That could be the first step, letting you improve the code for the sum part, which, I am afraid, I did not understand at all.
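From the Java side (a sketch, not part of the original answer; con1 and dispatch_row follow the question's code), this multi-table UPDATE is best executed with bound parameters rather than String.format, which also avoids SQL injection:

String sql = "UPDATE orders, buysupply "
        + "SET orders.status = 'Dispatched', "
        + "    buysupply.quantity = buysupply.quantity - orders.quantity_ordered "
        + "WHERE orders.itemtype = buysupply.itemtype "
        + "  AND orders.status = 'Waiting' "
        + "  AND orders.id = ?";
try (PreparedStatement ps = con1.prepareStatement(sql)) {
    // parse the text field up front so a non-numeric value fails fast
    ps.setInt(1, Integer.parseInt(dispatch_row));
    ps.executeUpdate();
}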
Then I found partial information explaining that sum_quantity is a computed value, a sum over the quantity column, so you do not want to change quantity; my bad.
In that case you can create a temporary table with this kind of SQL (a temporary table is destroyed when the connection closes):
CREATE TEMPORARY TABLE IF NOT EXISTS TMPsumquantity AS
SELECT SUM(quantity) as sumquantity FROM buysupply WHERE itemtype IN ('Plastic gloves', 'Rubber gloves')
That would give you a column with the information you want, BUT it is not my recommendation, as far as I understand ;-)
I would instead create a new column in the "buysupply" table to store the sum value, that is, "the quantity available at the moment this order is dispatched", i.e. the result of your sum.
Before, "buysupply":
id  itemtype  quantity  quantity_available
1   mask      704       704
2   clothed   101       101
3   N95       18        18
Before, "orders":
id  itemtype  quantity_ordered  status
1   mask      1                 Dispatched
2   clothed   3                 Waiting
The SQL to populate this column is more complex, based on a join of the table with itself:
UPDATE buysupply b1
INNER JOIN (
    SELECT SUM(quantity) as sumquantity
    FROM buysupply
    WHERE buysupply.itemtype IN ('clothed', 'N95')
) b2 ON true
SET b1.quantity_available = b2.sumquantity
This gives the new "buysupply" table, with the quantity_available column containing the sum of the quantity values for the 'clothed' and 'N95' rows (101 + 18 = 119):
id  itemtype  quantity  quantity_available
1   mask      704       119
2   clothed   101       119
3   N95       18        119
Then you can use the first SQL proposal to update quantity_available depending on the value of orders.quantity_ordered.
Last point: I only have a partial view of the data structure and the business logic, but it could be useful to store a negative value in orders.quantity_ordered, so the SQL SUM could add and subtract values with the same call to the SUM function.
Best

Merging two SQL statements into one, to update a row in a table if it meets certain conditions

I am working on an app that uses JDBC to update stock and place orders.
I am storing the products, and I want to update a product if the quantity requested is less than the stored one, and delete the product from the database if the quantity requested equals the current stock in the DB.
I am using two different statements, but I would like to use just one. For example, when an order is added to the DB, the system asks for a name and a product quantity, and the product quantity gets subtracted from the total quantity of that product in the DB. The pseudocode would be
IF product quantity - user quantity = 0 THEN DELETE product FROM database
ELSE UPDATE product quantity TO product quantity - user quantity ON THE database
product quantity = quantity of the product in the database
user quantity = quantity requested by the user
The prepared statements that I have for now are these two:
UPDATE products SET quantity=quantity-? WHERE product_name=?
DELETE FROM products WHERE product_name=?
I would like to merge them into one if possible.
In a production system you would do this sort of thing.
For an order, as you said, do this:
UPDATE products SET quantity=quantity-? WHERE product_name=?
Then, in an overnight or weekly cleanup, do this to get rid of rows with no quantity left:
DELETE FROM products WHERE quantity = 0
When you want to know which products are actually available, do:
SELECT product_name, quantity FROM products WHERE quantity > 0
The concept here: rows with zero quantity are "invisible" even if they aren't deleted.
If this were my system, I would not DELETE rows at all. For one thing, what happens when you get more of a product in stock?
One way is to loosen security by setting the MySQL Connector/J connection property allowMultiQueries to true in the connection URL.
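For example (a hypothetical URL; adjust host and database to your setup):

jdbc:mysql://localhost/mydb?allowMultiQueries=true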
Then you can execute two SQL statements together:
String sql = "UPDATE products" +
" SET quantity = quantity - ?" +
" WHERE product_name = ?" +
" AND quantity >= ?" +
";" +
"DELETE FROM products" +
" WHERE product_name = ?" +
" AND quantity = 0";
try (PreparedStatement stmt = conn.prepareStatement(sql)) {
stmt.setInt(1, userQuantity);
stmt.setString(2, productName);
stmt.setInt(3, userQuantity);
stmt.setString(4, productName);
stmt.execute();
int updateCount = stmt.getUpdateCount();
if (updateCount == 0)
throw new IllegalStateException("Product not available: " + productName);
// if you need to know if product got sold out, do the following
stmt.getMoreResults();
int deleteCount = stmt.getUpdateCount();
boolean soldOut = (deleteCount != 0);
}

Out of memory when inserting record batches through JDBC

I want to copy a table (10 million records) from originDB (SQLite 3) into another database called targetDB.
My method works as follows: read data from the origin table into a ResultSet, generate a corresponding insert statement for every record, and commit to batch-insert whenever the record count reaches 10000. The code is as follows:
public void transfer() throws IOException, SQLException {
    targetDBOperate.setCommit(false); // batch insert
    int count = 0;
    String[] cols = parser(propertyPath); // get fields of data table
    String query = "select * from " + originTable;
    ResultSet rs = originDBOperate.executeQuery(query); // get origin table
    String base = "insert into " + targetTable;
    while (rs.next()) {
        count++;
        String insertSql = buildInsertSql(base, rs, cols); // corresponding insert sql
        targetDBOperate.executeSql(insertSql);
        if (count % 10000 == 0) {
            targetDBOperate.commit(); // batch insert
        }
    }
    targetDBOperate.closeConnection();
}
[Figure: memory usage over time; the vertical axis represents memory usage, which grows steadily until the process runs out of memory.]
Stack Overflow has some relevant questions, such as "Out of memory when inserting records in SQLite, FireDac, Delphi", but I haven't solved my problem because we use a different implementation. My hypothesis is that while the record count hasn't reached 10000, the corresponding insert statements are cached in memory and are not removed on commit by default? Any advice would be appreciated.
When moving a large number of rows in SQLite or any other relational database you should follow some basic principles:
1) set autoCommit to false, i.e. do not commit each insert
2) use batch updates, i.e. do not make a round trip for each row
3) use a prepared statement, i.e. do not parse each insert.
Putting this together, you get the following code (cn is the source connection, cn2 is the target connection).
For each inserted row you call addBatch, but only once per batchSize do you call executeBatch, which initiates a round trip.
Do not forget a last executeBatch after the loop and the final commit.
cn2.setAutoCommit(false)

String SEL_STMT = "select id, col1, col2 from tab1"
String INS_STMT = "insert into tab2(id, col1, col2) values(?,?,?)"

def batchSize = 10000
def i = 0

def stmt = cn.prepareStatement(SEL_STMT)
def stmtIns = cn2.prepareStatement(INS_STMT)

rs = stmt.executeQuery()
while (rs.next()) {
    stmtIns.setLong(1, rs.getLong(1))
    stmtIns.setString(2, rs.getString(2))
    stmtIns.setTimestamp(3, rs.getTimestamp(3))
    stmtIns.addBatch()
    i += 1
    if (i == batchSize) {
        def insRec = stmtIns.executeBatch()
        i = 0
    }
}
rs.close()
stmt.close()

def insRec = stmtIns.executeBatch()
stmtIns.close()
cn2.commit()
A sample test at your size with sqlite-jdbc-3.23.1:
inserted rows: 10000000
total time taken to insert the batch = 46848 ms
I do not observe any memory issues or problems with a large transaction.
You are trying to fetch 10M records in one go with the following code; this will definitely eat up your memory:
String query = "select * from " + originTable;
ResultSet rs = originDBOperate.executeQuery(query); // get origin table
Use paginated queries to read in batches and do batch updates accordingly (see the sketch after this code).
You are not even doing a batch update; you are simply firing 10K queries one after the other with this code:
String insertSql = buildInsertSql(base, rs, cols); // corresponding insert sql
targetDBOperate.executeSql(insertSql);
if (count % 10000 == 0) {
    targetDBOperate.commit(); // this simply means you are committing after 10K records
}
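A minimal sketch of that combination (not from the original answer; connection, table, and column names are hypothetical), pairing LIMIT/OFFSET pagination with a prepared-statement batch so memory stays bounded per page:

String select = "SELECT id, col1 FROM origin_table ORDER BY id LIMIT ? OFFSET ?";
String insert = "INSERT INTO target_table (id, col1) VALUES (?, ?)";
int pageSize = 10000;

targetConn.setAutoCommit(false);
try (PreparedStatement sel = originConn.prepareStatement(select);
     PreparedStatement ins = targetConn.prepareStatement(insert)) {
    for (int offset = 0; ; offset += pageSize) {
        sel.setInt(1, pageSize);
        sel.setInt(2, offset);
        int rows = 0;
        try (ResultSet rs = sel.executeQuery()) {
            while (rs.next()) {
                ins.setLong(1, rs.getLong(1));
                ins.setString(2, rs.getString(2));
                ins.addBatch();
                rows++;
            }
        }
        if (rows == 0) break;   // no more pages
        ins.executeBatch();     // one round trip per page
        targetConn.commit();    // bounded memory per page
    }
}

For very large tables, keyset pagination (WHERE id > lastSeenId ORDER BY id LIMIT ?) avoids the growing scan cost of large OFFSETs.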

Can I update multiple columns of a table with a query using JPA?

I created a table, Student, and fetched the address based on age (like age = 22). Now I want to update the address column for all rows where age is 22. How can I do this?
Below is my code:
public static void main(String[] args) {
    EntityManager entityManager = Persistence.createEntityManagerFactory(
            "demoJPA").createEntityManager();
    Query query = entityManager.createQuery("SELECT student FROM Student student WHERE student.age = 22");
    System.out.println("Data is" + query.getResultList().size());
    List<Simpletable> simpletable = query.getResultList();
    for (Simpletable simpletable1 : simpletable) {
        System.out.println(simpletable1.getAddress());
    }
}
I fetched the data, but how can I update it now? Is it possible to iterate through a loop and call setAddress("US")?
Since you are creating a standalone application, you must open a transaction first, then you can simply change the field values in your object and when the transaction is committed, the changes get flushed to the database automatically.
EntityTransaction tx = entityManager.getTransaction();
try {
    tx.begin();
    try {
        for (SimpleTable simpleTable : simpleTables) {
            simpleTable.setAddress(newAddress);
        }
    } finally {
        tx.commit();
    }
} catch (Exception e) {
    // handle exceptions from transaction methods
}
--Edit--
An alternative to edit all records without having to first fetch them is to do a bulk update, still within a transaction, like this:
entityManager.createQuery("UPDATE SimpleTable s " +
        "SET s.address.state = ?1 " +
        "WHERE s.address.country = ?2")
        .setParameter(1, "FL")
        .setParameter(2, "US")
        .executeUpdate();
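Since this is a standalone application, the bulk update also needs an explicit transaction. A minimal sketch combining the two for the question's Student entity (assuming address is a plain string field):

EntityTransaction tx = entityManager.getTransaction();
tx.begin();
// a bulk update runs directly against the database and bypasses the
// persistence context, so already-loaded entities will not see the change
int updated = entityManager.createQuery(
        "UPDATE Student s SET s.address = :address WHERE s.age = :age")
        .setParameter("address", "US")
        .setParameter("age", 22)
        .executeUpdate();
tx.commit();
System.out.println("Rows updated: " + updated);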
