I use Hibernate 4. We have a problem in a batch process. Our system works as follows:
Select set of records which are in 'PENDING' state
Update immediately to 'IN PROGRESS' state
Process it and update to 'COMPLETED' state
The problem is that when two servers execute at the same time, we fear a concurrency issue. So we would like to implement a DB lock for the first two steps. We used query.setLockOptions(), but it does not seem to work. Is there any other way to hold a table-level or row-level lock until the select and update are complete? Both are in the same session.
In JDBC we have the option of LOCK TABLE <TABLE_NAME> WRITE. But how do we implement this in Hibernate, or is it possible to implement SELECT ... FOR UPDATE in Hibernate?
"Select ... for update" is supported in Hibernate via LockMode.UPGRADE which you can set in, for example, a NamedQuery.
But using application/manual table-row locking has several drawbacks (especially when a database connection gets broken halfway through a transaction), and your update procedure can do without it:
Start transaction.
update table set state='IN PROGRESS', server_id=1 where state='PENDING';
Commit transaction.
select from table where state='IN PROGRESS' and server_id=1;
[process records]
Each server must have a unique number for this to work, but it will be less error-prone and you let the DBMS do what it is supposed to be good at: isolation (see ACID).
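As a rough illustration of this procedure through plain JDBC (the table and column names are placeholders; each server passes its own id):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class ClaimStep {

    // Claims all currently PENDING rows for this server and commits immediately,
    // so the other server's identical update skips the rows already claimed.
    static int claimPending(Connection conn, int serverId) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE batch_record SET state = 'IN PROGRESS', server_id = ? " +
                "WHERE state = 'PENDING'")) {
            ps.setInt(1, serverId);
            int claimed = ps.executeUpdate();
            conn.commit();
            // next step: SELECT ... WHERE state = 'IN PROGRESS' AND server_id = ?
            return claimed;
        }
    }
}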
There are millions of records in a table which I need to delete, but I need to turn off transaction logging for that, so I use ALTER TABLE with NOT LOGGED INITIALLY, but it throws an error and makes the table inaccessible. There is no partition on the table, but the table contains indexes and sequences. Also, autocommit is off.
Error : DB2 SQL Error: SQLCODE=-911, SQLSTATE=40001, SQLERRMC=68, DRIVER=3.65.77
I am getting the above error only when running through Java; there is no error when running from a client.
I need to know in which cases and scenarios the query can fail, what I need to ensure before running this query, and how to handle this scenario in code.
When asking for help with Db2, always put into your question the Db2-server version and platform (Z/OS, i-Series, Linux/Unix/Windows) because the answers can depend on those facts.
The sqlcode -911 with sqlerrmc 68 is a lock-timeout. This is not a deadlock. Your job might not be the only job that is concurrently accessing the table. Monitoring functions and administrative views let you see which locks exist at any moment in time (e.g. SNAPLOCK and the SNAP_GET_LOCK table function, and many others).
Refer to the Db2 Knowledge Centre for details of the suggestions below, to educate yourself.
Putting the table into not-logged-initially for your transaction is high risk, especially if you are a novice, because if your transaction fails then you can lose the entire table.
If you persist with that approach, take precautions and rehearse point in time recovery in case your actions cause damage. Verify your backups and recovery steps carefully.
With autocommit disabled, one can lock a table in exclusive mode, but this can cause a service-outage on production if the target table is a hot table. One can also force off applications that are holding locks if you have the relevant rights.
If any other running jobs (i.e. not your own code) access the table while you try to alter it, then the -911 is almost inevitable. Your approach may be unwise.
Bulk delete can be achieved by other means; it depends on what you wish to trade off.
This is a frequently asked question. It's not RDBMS specific either.
Consider doing more research, as this is a widely discussed topic.
Alternative approaches for bulk delete include these:
Batching the logged deletes, committing once per batch, with an adjustable batch size (to ensure you avoid a -964 transaction-log-full situation). This requires programming a loop, and you should consider 'set current lock timeout not wait', along with automatically retrying later any batches that failed due to locks. This approach yields a slow and gradual removal of rows, but it increases concurrency: you are trading a long, slow execution for minimal impact on other running jobs. A minimal JDBC sketch of this loop appears below.
Create an identical shadow table, into which you insert only the rows that you wish to keep. Then use truncate table ... immediate on the target table (this is an unlogged action), and finally restore the preserved rows from the shadow table into the target table.
A less safe variation of this is to export only the rows you want to keep and then import-replace.
Depending on your Db2 licence and the frequency of the purge, migrating the data (or some of the data) into a range-partitioned table and using detach may be the better long-term solution.
Refer to the on-line Db2 Knowledge Center for details of the above suggestions.
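As a sketch of the first alternative (batched, logged deletes), assuming a Db2 LUW server and placeholder table/column names; the deletable fullselect with FETCH FIRST limits how many rows each transaction removes:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public final class BatchedDelete {

    // Deletes rows in small committed batches so each unit of work stays
    // well below the transaction-log limit and locks are released often.
    static void purge(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        int deleted;
        do {
            try (Statement stmt = conn.createStatement()) {
                deleted = stmt.executeUpdate(
                    "DELETE FROM (SELECT 1 FROM MYSCHEMA.MYTABLE " +
                    "WHERE purge_flag = 1 FETCH FIRST 10000 ROWS ONLY)");
            }
            conn.commit(); // one commit per batch
        } while (deleted > 0);
    }
}

A production job would also catch -911 lock timeouts and retry the failed batch later, as described above.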
I am running a Java application with multiple threads that query an Oracle database and, if a condition is met, update a row. But there is a high chance that multiple threads read the same status for a row and then try to update the same row.
Let's say that if the status of a row is "ACCEPTED", it should be updated to "PROCESSING" and then processed, but the processing should be done only by the one thread that updated the record.
One approach is to query the database and, if the status is "ACCEPTED", update the record inside a synchronized Java method, but that would block multi-threading. So I wanted to handle this situation on the SQL side.
Hibernate's update method returns void, so there is no way to tell whether the row was updated just now or had already been updated. Is there a SELECT ... FOR UPDATE or any locking mechanism in Hibernate that can help me in this situation?
You can very well make use of optimistic locking through @Version.
Please look at the post below:
Optimistic Locking by concrete (Java) example
I think that your question is related to How to properly handle two threads updating the same row in a database
In addition to the answer provided by @shankarsh, I would say that if you want to use a Query and not the EntityManager or the Hibernate Session, you need to include the version field in your query, like this:
update YourEntity e set e.columnToUpdate = :newValue, e.version = e.version + 1 where <your condition> and e.version = :version
This way the update will succeed only for a particular version, and any concurrent updates will not update anything.
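To make this concrete, here is a minimal sketch with a hypothetical Task entity (the entity, column and status names are placeholders). Because this is a bulk JPQL update, the @Version column is incremented by hand, and executeUpdate() tells you whether this thread won the race:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Task {
    @Id
    private Long id;
    private String status;

    @Version
    private int version; // Hibernate checks/increments this on normal entity updates

    // getters and setters omitted
}

class TaskClaimer {
    // Returns true only for the one thread whose update actually hit the row.
    static boolean claim(EntityManager em, Long taskId, int expectedVersion) {
        int updated = em.createQuery(
                "update Task t set t.status = 'PROCESSING', t.version = t.version + 1 " +
                "where t.id = :id and t.status = 'ACCEPTED' and t.version = :version")
            .setParameter("id", taskId)
            .setParameter("version", expectedVersion)
            .executeUpdate();
        return updated == 1;
    }
}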
I have an application which needs to be aware of the latest number of certain records in a database table. The solution should work without changing the database code or adding triggers or functions to it, so I need a database-vendor-independent solution.
My program is written in Java, but the database could be SQLite, MySQL, PostgreSQL or MSSQL. For now I'm doing it like this:
In a separate thread that is set as a daemon, my application sends a simple query through JDBC to the database to get the latest number of records matching a condition:
while(true){
SELECT COUNT(*) FROM Mytable WHERE exited='1'
}
This sort of code causes the database to lock, slows down the whole system, and generates huge DB logs, which finally brings the whole thing down!
How can I do this the right way, so that I always have the latest count of certain records, or only count when the number has changed?
A SELECT statement should not -- by itself -- have the behavior that you are describing. For instance, nothing is logged with a SELECT. Now, it is possible that concurrent insert/update/delete statements are going on, and that these cause problems because the SELECT locks the table.
Two general things you can do:
Be sure that the comparison is of the same type. So, if exited is a number, do not use single quotes (mixing of types can confuse some databases).
Create an index on (exited). In basically all databases, this is a single command: create index idx_mytable_exited on mytable(exited).
If locking and concurrent transactions are an issue, then you will need to do more database specific things, to avoid that problem.
As others have said, make sure that exited is indexed.
Also, you can set the transaction isolation on your query to do a "dirty read"; this indicates to the database server that you do not need to wait for other processes' transactions to commit, and instead you wish to read the current value of exited on rows that are being updated by those other processes.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED is the standard syntax for using "dirty read".
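As a sketch of this in JDBC (note that not every database honours READ UNCOMMITTED; PostgreSQL, for example, treats it as READ COMMITTED):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class DirtyReadCount {

    // Counts matching rows without waiting for other transactions' row locks;
    // the result may include changes that are not yet committed.
    static int countExited(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
        try (PreparedStatement ps = conn.prepareStatement(
                 "SELECT COUNT(*) FROM Mytable WHERE exited = 1");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}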
I would like to make sure that the whole table is locked during my JPA transaction.
As far as I could figure out, there is no JPA lock mode to lock the whole table.
My question is: what does a proper locking statement look like, and how can I combine it with the entity manager's merge or persist operations?
Actually, thanks to the comment, the solution was the following statement:
getEntityManager().createNativeQuery("LOCK TABLE schemaname.tablename").executeUpdate();
The lock is removed when the transaction (also the one from Hibernate; actually it's the same) is over.
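To tie this to the merge/persist part of the question, a minimal sketch might look like this (the exact LOCK TABLE syntax and default lock mode depend on the database; the schema and table names are placeholders):

import javax.persistence.EntityManager;

public final class LockedWrite {

    // The table lock must be taken inside the same transaction that performs
    // the writes; committing or rolling back releases it again.
    static void writeWithTableLock(EntityManager em, Object entity) {
        em.getTransaction().begin();
        em.createNativeQuery("LOCK TABLE schemaname.tablename").executeUpdate();
        em.merge(entity); // or em.persist(entity) for a new row
        em.getTransaction().commit();
    }
}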
I'm developing a database program in Java with dbf. I need to know how to lock the database records from the Java side.
Example: we have a database table cheques with 5 records in the table (record 1 through 5). There are 2 users, user-1 and user-2. user-1 accesses record 1 and user-2 tries to access record 1 at the same time. I want to lock record 1 to prevent access to that record by user-2 while user-1 is accessing it. How do I do this in Java?
In the case of MySQL, you can use the SELECT ... FOR UPDATE statement to lock the records. The lock will not be released until the transaction completes or rolls back.
More on the same here.
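A rough JDBC sketch of that for the cheques example (the connection details, table and column names are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class ChequeLock {

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/bank", "user", "password")) {
            conn.setAutoCommit(false); // FOR UPDATE locks only live inside a transaction

            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, amount FROM cheques WHERE id = ? FOR UPDATE")) {
                ps.setInt(1, 1);
                try (ResultSet rs = ps.executeQuery()) {
                    // user-1 now holds the row lock on record 1; user-2's
                    // SELECT ... FOR UPDATE on the same row blocks until commit.
                    if (rs.next()) {
                        // ... work with the record ...
                    }
                }
            }
            conn.commit(); // releases the row lock (rollback would too)
        }
    }
}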
It depends on the environment you are working in. For a container-managed transaction, your container manages the transactions for you, and all you have to do is set the lock mode to WRITE. What this does is block all write access to the class methods while userA is accessing record 1. On the other hand, for a stand-alone application you can just add the synchronized keyword to your method to control concurrent access. I hope this helps.
Not every database supports per-record locking.
Generally, if you are in an EE environment, you can use the JPA EntityManager#find() method to lock a certain record.
Full usage looks like this:
// EntityManager em;
YourClass obj = em.find(YourClass.class, primaryKey, LockModeType.WRITE);
// do something
em.merge(obj);
After the transaction commits, the record(s) will be released.
In a non-EE environment, as Darshan Mehta said, connection.createStatement().execute("SELECT * FROM table FOR UPDATE") is the solution.