order_items table:
order_item_id
order_id
quantity
unit_price
shipping_price
business_id
workflow_id
delivery_id
item_id
Orders table:
billing_address_id
shipping_address_id
payment_mode
total_price
shipping_price
customer_id
UPDATE `order_items` t1 INNER JOIN Orders t2 ON t2.order_id = t1.order_id SET t1.workflow_id = ? WHERE t1.order_item_id = ? and t2.order_id = ? and t2.customer_id = ? and t1.delivery_id = ?
UPDATE `order_items` t1 SET t1.workflow_id = ?
WHERE t1.order_item_id = ? and t1.business_id = ? and t1.delivery_id = ?
UPDATE `order_items` t1 INNER JOIN Orders t2 ON t2.order_id = t1.order_id SET t1.workflow_id = ? WHERE t1.order_item_id = ? and t2.order_id = ? and t1.delivery_id = ?
These queries are fired in different scenarios from my Java REST service (at any point in time, only one of them is used).
Previously my update SQL did not use an inner join and it worked well.
After I modified the query, it throws the following exception; the query is stuck and doesn't return for about a minute.
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:996)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582)
UPDATE
This was happening because we forgot to set the autocommit mode back to true in the finally block. After fixing that, we didn't see this error again.
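For reference, a minimal sketch of the pattern we ended up with (the connection handling and names are illustrative, not the actual service code): the autocommit flag is restored in a finally block so the connection never goes back to the pool with an open transaction holding row locks.

Connection conn = dataSource.getConnection();
try {
    conn.setAutoCommit(false);                 // run the joined UPDATE in an explicit transaction
    try (PreparedStatement ps = conn.prepareStatement(UPDATE_WITH_JOIN_SQL)) {
        // ... bind workflow_id, order_item_id, order_id, customer_id, delivery_id ...
        ps.executeUpdate();
    }
    conn.commit();
} catch (SQLException e) {
    conn.rollback();                           // release the row locks on failure
    throw e;
} finally {
    conn.setAutoCommit(true);                  // the reset we had forgotten
    conn.close();
}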
Point 1: Don't run an UPDATE with a join from the application; instead, fetch the primary key first and then update the table by primary key (see the sketch after this list).
Point 2: Show your table structures with their indexes (you can get the details with the "SHOW CREATE TABLE mytable" command), so it can be checked whether your update query is optimized.
Point 3: If you still want to update based on a join for some specific reason, and your query is already optimized, then you need to look at the lock wait timeout. The error above is governed by the innodb_lock_wait_timeout variable, so check what value is set on your server with the command below.
SHOW GLOBAL VARIABLES LIKE 'innodb_lock_wait_timeout';
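A rough sketch of the two-step approach from Point 1, using Spring's JdbcTemplate purely for brevity (the table and column names come from the question; everything else is illustrative):

// Step 1: resolve the primary key with the join, outside the locking UPDATE.
List<Long> ids = jdbcTemplate.queryForList(
        "SELECT t1.order_item_id FROM order_items t1 " +
        "INNER JOIN Orders t2 ON t2.order_id = t1.order_id " +
        "WHERE t1.order_item_id = ? AND t2.order_id = ? AND t2.customer_id = ? AND t1.delivery_id = ?",
        Long.class, orderItemId, orderId, customerId, deliveryId);

// Step 2: update by primary key only, so just one row is locked and no join is involved.
if (!ids.isEmpty()) {
    jdbcTemplate.update(
            "UPDATE order_items SET workflow_id = ? WHERE order_item_id = ?",
            workflowId, ids.get(0));
}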
A good idea before running the UPDATE query is to run the same thing as a SELECT, i.e.
SELECT * FROM `order_items` t1 INNER JOIN Orders t2 ON t2.order_id = t1.order_id WHERE t1.order_item_id = ? and t2.order_id = ? and t2.customer_id = ? and t1.delivery_id = ?
just to make sure you are updating the right row.
You can also run EXPLAIN on that query to find out how complicated it is for your DB.
Related
I'm developing a batch job (just a main method) in a legacy project with Java 7, Hibernate and Spring, using a MySQL database.
In this batch I want to update several rows in a table that has more than 50 million rows.
Each daily run of the batch has to update at least 10,000 rows.
So, what is the best way to update the rows without locking the table in MySQL?
Should I just do one query like this:
update items set is_achive = true where id in (id1, id2, id3, ..., id10000)
Or use a for loop like this:
for (Item p : itemsList) {
    update items set is_achive = true where id = p.id
}
This depends on how you determine the list of rows that need updating. If you query the database to determine the list, it's probably best just to use a DML statement like:
UPDATE Item i SET i.achive = true WHERE ...
If your concern is locking, i.e. the amount of time rows stay locked, you can batch the work by using a cursor, e.g. some id column of the data source:
SELECT id FROM ... WHERE id >= :start AND ...
ORDER BY id
LIMIT 1 OFFSET 100 -- the offset is the batch size; adjust it to suit your needs
The limit and offset can be implemented with a JPA query:
Integer end = entityManager.createQuery("SELECT id FROM ... WHERE id >= :start AND ... ORDER BY id", Integer.class)
    .setParameter("start", previousEnd)
    .setFirstResult(100) // batch size
    .setMaxResults(1)    // fetch just the boundary id
    .getResultList().stream().findFirst().orElse(null);
Then do a query like this
UPDATE Item i SET i.achive = true WHERE i.id BETWEEN :start AND :end
or, if end is null (i.e. for the last batch), use
UPDATE Item i SET i.achive = true WHERE i.id >= :start
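Putting the pieces together, a rough sketch of the batching loop (entity and property names follow the JPQL above; the extra filter conditions behind the "..." are left out, the batch size of 100 is just an example, and an open EntityManager is assumed):

Integer start = 0;                        // or the smallest id that matches your filter
while (true) {
    // Find the id that is one batch (100 rows) ahead of the current position.
    Integer end = entityManager.createQuery(
                "SELECT i.id FROM Item i WHERE i.id >= :start ORDER BY i.id", Integer.class)
            .setParameter("start", start)
            .setFirstResult(100)          // batch size
            .setMaxResults(1)
            .getResultList().stream().findFirst().orElse(null);

    String jpql = (end == null)
            ? "UPDATE Item i SET i.achive = true WHERE i.id >= :start"
            : "UPDATE Item i SET i.achive = true WHERE i.id BETWEEN :start AND :end";

    Query update = entityManager.createQuery(jpql).setParameter("start", start);
    if (end != null) {
        update.setParameter("end", end);
    }
    update.executeUpdate();               // each batch only locks its own id range

    if (end == null) {
        break;                            // last batch done
    }
    start = end;                          // next batch starts at the previous boundary
}

In practice each iteration would run in its own transaction, so the row locks taken by one batch are released before the next one starts.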
Use Hibernate Criteria builder:
CriteriaBuilder cb = this.em.getCriteriaBuilder();
// create update
CriteriaUpdate<Order> update = cb.createCriteriaUpdate(Order.class);
// set the root class
Root e = update.from(Order.class);
// set update and where clause
update.set("amount", newAmount);
update.where(cb.greaterThanOrEqualTo(e.get("amount"), oldAmount));
// perform update
this.em.createQuery(update).executeUpdate();
https://thorben-janssen.com/criteria-updatedelete-easy-way-to/
It is best to try to get to the root of it like Chris B suggested, but if that's something you can't do, then you might also consider leveraging Spring JDBC batch update operations as documented here. These have existed for some time, so find whichever documentation is appropriate for the version you're using.
https://docs.spring.io/spring-framework/docs/3.0.0.M4/reference/html/ch12s04.html
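As a rough sketch of what that looks like with JdbcTemplate (the table and column names are taken from the question; the rest is illustrative), the ids are sent as one JDBC batch instead of 10,000 separate round trips:

List<Object[]> batchArgs = itemsList.stream()
        .map(item -> new Object[] { item.getId() })
        .collect(Collectors.toList());

// One statement, executed as a JDBC batch; each row is locked only briefly.
jdbcTemplate.batchUpdate(
        "UPDATE items SET is_achive = true WHERE id = ?",
        batchArgs);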
I'm attempting to convert an Oracle MERGE statement to a MySQL UPDATE statement. This particular MERGE statement only does an update (no inserts), so I am unclear why the previous engineer used a MERGE statement.
Regardless, I now need to convert this to MySQL and am not clear how this is done. (Side note: I'm doing this within a Java app.)
Here is the MERGE statement :
MERGE INTO table1 a
USING
(SELECT DISTINCT(ROWID) AS ROWID FROM table2
WHERE DATETIMEUTC >= TO_TIMESTAMP('
formatter.format(dWV.getTime())
','YYYY-MM-DD HH24:MI:SS')) b
ON(a.ROWID = b.ROWID and
a.STATE = 'WV' and a.LAST_DTE = trunc(SYSDATE))
WHEN MATCHED THEN UPDATE SET a.THISIND = 'S';
My attempt goes something like this:
UPDATE table1 a
INNER JOIN table2 b ON (a.ROWID = b.ROWID
and a.STATE = 'WV'
and a.LAST_DTE = date(sysdate()))
SET a.THISIND = 'S'
WHERE DATETIMEUTC >= TO_TIMESTAMP('formatter.form(dWV.getTime())', 'YYYY-MM-DD HH24:MI:SS')
However, I'm unclear if this is actually doing the same thing or not?
As noted by you, the original Oracle MERGE statement only performs updates, no inserts.
The general syntax of your MySQL query looks ok compared to the Oracle version. Here is an updated version :
UPDATE table1 a
INNER JOIN table2 b
ON a.ROWID = b.ROWID
AND b.DATETIMEUTC >= 'formatter.form(dWV.getTime())'
SET a.THISIND = 'S'
WHERE
a.STATE = 'WV'
AND a.LAST_DTE = CURDATE()
Changes:
the current date can be obtained with the CURDATE() function
'YYYY-MM-DD HH24:MI:SS' is the default format for MySQL dates, so you do not need to convert it; you can pass the value as-is (NB1: it is unclear what 'formatter.form(dWV.getTime())' actually means; NB2: if you ever need to turn a string into a date, STR_TO_DATE is your friend)
the filter conditions on table a are better placed in the WHERE clause, while those on table b belong in the INNER JOIN condition
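If 'formatter.form(dWV.getTime())' really is a Java expression concatenated into the SQL string, a safer variant is to bind it as a parameter instead. This is just a sketch: connection and dWV are assumed to exist, and dWV.getTime() is assumed to yield a java.util.Date as implied by the question.

String sql = "UPDATE table1 a "
           + "INNER JOIN table2 b ON a.ROWID = b.ROWID AND b.DATETIMEUTC >= ? "
           + "SET a.THISIND = 'S' "
           + "WHERE a.STATE = 'WV' AND a.LAST_DTE = CURDATE()";

try (PreparedStatement ps = connection.prepareStatement(sql)) {
    // Bind the cutoff directly; no date-format string is needed at all.
    ps.setTimestamp(1, new java.sql.Timestamp(dWV.getTime().getTime()));
    int rows = ps.executeUpdate();
}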
I'm trying to set a Lock for the row I'm working on until the next commit:
entityManager.createQuery("SELECT value from Table where id=:id")
.setParameter("id", "123")
.setLockMode(LockModeType.PESSIMISTIC_WRITE)
.setHint("javax.persistence.lock.timeout", 10000)
.getSingleResult();
What I thought should happen is that if two threads try to write to the DB at the same time, one thread reaches the update operation before the other, and the second thread should wait 10 seconds and then throw a PessimisticLockException.
But instead the thread hangs until the other thread finishes, regardless of the timeout set.
Look at this example :
database.createTransaction(transaction -> {
// Execute the first request to the db, and lock the table
requestAndLock(transaction);
// open another transaction, and execute the second request in
// a different transaction
database.createTransaction(secondTransaction -> {
requestAndLock(secondTransaction);
});
transaction.commit();
});
I expected that in the second request the transaction would wait for the configured timeout and then throw the PessimisticLockException, but instead it deadlocks forever.
Hibernate generates my request to the db this way :
SELECT value from Table where id=123 FOR UPDATE
In this answer I saw that Postgres only allows SELECT FOR UPDATE NOWAIT, which sets the timeout to 0, but it isn't possible to set an arbitrary timeout that way.
Is there any other way that I can use with Hibernate / JPA?
Maybe this way is somehow recommended?
Hibernate supports a bunch of query hints. The one you're using sets the timeout for the query, not for the pessimistic lock. The query and the lock are independent of each other, and you need to use the hint shown below.
But before you do that, please be aware that Hibernate doesn't handle the timeout itself. It only sends it to the database, and it depends on the database if and how it applies it.
To set a timeout for the pessimistic lock, you need to use the javax.persistence.lock.timeout hint instead. Here's an example:
entityManager.createQuery("SELECT value from Table where id=:id")
.setParameter("id", "123")
.setLockMode(LockModeType.PESSIMISTIC_WRITE)
.setHint("javax.persistence.lock.timeout", 10000)
.getSingleResult();
I think you could try
SET LOCAL lock_timeout = '10s';
SELECT ....;
I doubt Hibernate supports this out of the box. You could try to find a way to extend it; I'm not sure it's worth it, because I guess using locks on a Postgres database (which is MVCC) is not the smartest option.
You could also use NO WAIT and delay-retry several times from your code (a rough sketch follows below).
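A rough sketch of that NOWAIT-and-retry idea (the entity name, id and retry policy are all illustrative): a lock timeout of 0 makes Hibernate emit FOR UPDATE NOWAIT on PostgreSQL, so the call returns immediately instead of blocking, and the caller retries a few times.

// Each attempt should run in its own transaction, because a failed NOWAIT
// acquisition aborts the current PostgreSQL transaction.
for (int attempt = 0; attempt < 3; attempt++) {
    try {
        MyEntity row = entityManager.find(MyEntity.class, 123L,
                LockModeType.PESSIMISTIC_WRITE,
                Collections.<String, Object>singletonMap("javax.persistence.lock.timeout", 0)); // 0 -> FOR UPDATE NOWAIT
        // ... work with the locked row, then commit ...
        break;
    } catch (LockTimeoutException | PessimisticLockException e) {
        try {
            Thread.sleep(1000);                    // simple backoff before retrying
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}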
There is the lock_timeout parameter that does exactly what you want.
You can set it in postgresql.conf or with ALTER ROLE or ALTER DATABASE per user or per database.
The lock timeout hint doesn't work on PostgreSQL 9.6 (.setHint("javax.persistence.lock.timeout", 10000)).
The only solution I found was uncommenting the lock_timeout property in postgresql.conf:
lock_timeout = 10000 # in milliseconds, 0 is disabled
For anyone who's still looking for a Spring Data JPA solution, this is how I managed to do it.
First I created a function in Postgres:
CREATE function function_name (some_var bigint)
RETURNS TABLE (id BIGINT, counter bigint, organisation_id bigint) -- here you list all the columns you want to be returned in the select statement
LANGUAGE plpgsql
AS
$$
BEGIN
SET LOCAL lock_timeout = '5s';
return query SELECT * from some_table where some_table.id = some_var FOR UPDATE;
END;
$$;
Then, in the repository interface, I created a native query that calls the function. This applies the lock timeout to that particular transaction:
@Transactional
@Query(value = """
    select * from function_name(:id);
    """, nativeQuery = true)
Optional<SomeTableEntity> findById(Long id);
I'm using jdbi (but would prepared to use raw jdbc if needed). My DB is currently Oracle. I have an update that updates the first row matching certain criteria. I want to get the primary key of the updated row from the same statement. Is this possible?
I tried
Integer rowNum = handle
.createUpdate(sqlFindUpdate)
.bind("some var", myVal)
.executeAndReturnGeneratedKeys("id")
.mapTo(Integer.class)
.findOnly();
but I guess this isn't a generated key, as it doesn't find it (I get an IllegalStateException, although the update succeeds).
Basically, I have a list of items in the DB that I need to process. So, I want to get the next one and mark it as "in progress" at the same time. I'd like to be able to support multiple worker threads, so it needs to be a single statement: I can't do the select afterwards (the status has changed so it won't match anymore), and doing it before introduces a race condition.
I guess I could do a stored procedure that uses returning into but can I do it directly from java?
I'm answering my own question, but I don't think it's a good answer :) What I'm doing is kind of a hybrid. It is possible to dynamically run PL/SQL blocks from jdbi, so technically this is from Java as I asked, not via a stored procedure. However, it's kind of a hack in my opinion; in this case, why not just create the stored procedure (as I probably will, if I don't find a better solution)? But, for info, instead of:
String sql = "update foo set status = 1 where rownr in (select rownr from (select rownr from foo where runid = :runid and status = 0 order by rownr) where rownum = 1)";
return jdbi.withHandle((handle) -> {
return handle
.createUpdate(sql)
.bind("runid", runId)
.executeAndReturnGeneratedKeys("rownr")
.mapTo(Integer.class)
.findOnly();
});
you can do
String sql = "declare\n" +
"vRownr foo.rownr%type;\n" +
"begin\n" +
"update foo set status = 1 where rownr in (select rownr from (select rownr from foo where runid = :runid and status = 0 order by rownr) where rownum = 1) returning rownr into vRownr;\n" +
":rownr := vRownr;\n" +
"end;";
return jdbi.withHandle((handle) -> {
OutParameters params = handle
.createCall(sql)
.bind("runid", runId)
.registerOutParameter("rownr", Types.INTEGER)
.invoke();
return params.getInt("rownr");
});
Like I said, it's probably better to just create the procedure in this case, but it does give you the option of still building the SQL dynamically in Java if you need to.
Based on this question, as linked by @APC in the comments, it is possible to use the OracleReturning class without the declare/begin/end:
String sql = "update foo set status = 1 where rownr in (select rownr from (select rownr from foo where runid = ? and status = 0 order by rownr) where rownum = 1) returning rownr into ?";
return jdbi.withHandle((handle) -> {
return handle
.createUpdate(sql)
.bind(0, runId)
.addCustomizer(OracleReturning.returnParameters().register(1, OracleTypes.INTEGER))
.execute(OracleReturning.returningDml())
.mapTo(Integer.class)
.findOnly();
});
However, OracleReturning doesn't support named parameters, so you have to use positional ones. Since my main reason for using JDBI over plain JDBC is named parameter support, that matters to me, so I'm not sure which way I'll go.
Pretty hard dependency on it being an Oracle DB you're calling...
Update: an enhancement for named parameters in OracleReturning was merged to master and will be included in the 3.1.0 release. Kudos to @qualidafial for the patch.
I want to update a table value using a join in Oracle (11g).
I have used rowid as the join parameter for the same table. Is it safe to use rowid as a join parameter?
Following is the query I am using for the update. I have tested it on a local database and it works fine, but is there any scenario where there may be a rowid mismatch?
MERGE
INTO GEOTAG g
USING (SELECT g2.rowid AS rid, um.RETAILER_CODE
FROM GEOTAG g2
JOIN RETAILER_AD_DSE b
ON b.CODE = g2.RETAILER_CODE
JOIN USER_HIERARCHY_MASTER um
ON um.RETAILER_PRIMARY_ETOPUP = b.RETAILER_PRIMARY_ETOPUP) src
ON (g.rowid = src.rid)
WHEN MATCHED THEN UPDATE
SET g.RETAILER_CODE = src.RETAILER_CODE;
A rowid will be unique in a table, so if by "safe" you just mean that you'll be joining a row to itself, then yes, this is safe.
On the other hand, your code seems to be a rather overly complicated way to do a correlated update. I suspect you just want this (you can omit the WHERE EXISTS if there will always be a matching row in retailer_ad_dse and user_hierarchy_master):
UPDATE geotag g
SET g.retailer_code = (SELECT code
FROM retailer_ad_dse rad
JOIN user_hierarchy_master uhm
ON uhm.retailer_primary_etopup = rad.retailer_primary_etopup
WHERE g.retailer_code = rad.code)
WHERE EXISTS (SELECT code
FROM retailer_ad_dse rad
JOIN user_hierarchy_master uhm
ON uhm.retailer_primary_etopup = rad.retailer_primary_etopup
WHERE g.retailer_code = rad.code)