I know what optimistic and pessimistic locking are, but how do you implement them when you write Java code? Suppose I am using Oracle with Java; are there any methods in JDBC that will help me do that? How would I configure this? Any pointers will be appreciated.
You can implement optimistic locks in your DB table in this way (this is how optimistic locking is done in Hibernate):
Add an integer "version" column to your table.
Increase the value of this column with each update of the corresponding row.
To obtain a lock, just read the "version" value of the row.
Add a "version = obtained_version" condition to the WHERE clause of your update statement, and verify the number of affected rows after the update. If no rows were affected, someone else has already modified your entry.
Your update should look like this:
UPDATE mytable SET name = 'Andy', version = 3 WHERE id = 1 and version = 2
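In plain JDBC, that conditional update plus the affected-rows check might look like the following sketch (the method name is made up; the table and column names are the ones from the example above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch: update a row only if its version is unchanged since it was read.
// Table/column names (mytable, id, name, version) mirror the SQL above.
static boolean updateWithVersionCheck(Connection conn, long id,
                                      String newName, int expectedVersion)
        throws SQLException {
    String sql = "UPDATE mytable SET name = ?, version = version + 1 "
               + "WHERE id = ? AND version = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, newName);
        ps.setLong(2, id);
        ps.setInt(3, expectedVersion); // version read earlier with the row
        // Zero affected rows means someone else modified the row first.
        return ps.executeUpdate() == 1;
    }
}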
Of course, this mechanism works only if all parties follow it, unlike DBMS-provided locks, which require no special handling.
Hope this helps.
Suppose I am using Oracle with Java, do I have any methods in JDBC that will help me do that?
This Oracle paper should provide you with some tips on how to do this.
There are no specific JDBC methods. Rather, you achieve optimistic locking by the way that you design your SQL queries / updates and where you put the transaction boundaries.
I have a Cassandra cluster set up with nodes N1, N2 and N3.
I have a user table, and I need to implement row-level locking for it across the nodes in the cluster, so could you please guide me by answering the following questions?
1) What is the maximum level of locking possible in Cassandra?
2) What is a lightweight transaction? To what extent can it achieve row-level locking?
3) Is there an alternative way to achieve row-level locking in Cassandra?
None, though you might stretch it and say column-level.
It uses Paxos for consensus and can perform conditional updates. It doesn't do locking: the update will either succeed or fail if another update occurred first, and if it doesn't succeed you can try again. If it does succeed, everything in the "transaction" (a poor name) will apply. However, there's still no isolation within it, and if multiple columns in a row are updated you may read in between them being applied. Details here.
Design your data model so you don't need locking.
There are no transactions in Cassandra, and there is no locking. There are, however, lightweight transactions (LWTs). They're not great: the performance is worse and there are a lot of tradeoffs.
Depending on what the use case for this lock is, you could do:
INSERT INTO User (userID, email)
VALUES ('MyGuid', 'user@example.com')
IF NOT EXISTS;
If the query returns an error or is not applied, you will have to handle that; it won't fail only because someone inserted before you. A failure might also mean that one of your nodes did get the write but not all of them. LWTs don't roll back.
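If you're using the DataStax Java driver, you can check whether the conditional insert was applied. A minimal sketch, assuming driver 4.x and a session already connected to an illustrative keyspace:

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;

// Sketch: a conditional insert used as a crude "claim" on the row.
// Table/column names are illustrative, not from the question.
static boolean claimUser(CqlSession session) {
    ResultSet rs = session.execute(
        "INSERT INTO user (userid, email) VALUES ('MyGuid', 'user@example.com') "
      + "IF NOT EXISTS");
    // false: the row already existed, or the LWT could not be applied
    return rs.wasApplied();
}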
I am running a Java application with multiple threads that query an Oracle database and, if a condition is met, update a row. But there is a high chance that multiple threads get the same status for a row and then try to update the same row.
Let's say that if the status is "ACCEPTED" for any row, we update it to "PROCESSING" and then start processing, but the processing should be done only by the one thread that updated the record.
One approach is to query the database and, if the status is "ACCEPTED", update the record inside a synchronized Java method, but that would block multithreading. So I wanted a SQL-based way to handle this situation.
Hibernate's update method returns void, so there is no way to find out whether the row was updated just now or had already been updated. Is there any SELECT FOR UPDATE or other locking facility in Hibernate that can help me in this situation?
You can very well make use of optimistic locking through @Version.
Please look at the post below:
Optimistic Locking by concrete (Java) example
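For illustration only, a minimal sketch of what such a versioned entity might look like (the entity and field names are made up to match the question's scenario):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

// Sketch: with @Version, Hibernate appends "AND version = ?" to the UPDATE's
// WHERE clause and reports a conflict (e.g. OptimisticLockException) when
// no row matches.
@Entity
public class Task {
    @Id
    private Long id;

    private String status;  // e.g. "ACCEPTED" -> "PROCESSING"

    @Version
    private int version;    // maintained by Hibernate; never set it yourself

    // getters and setters omitted for brevity
}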
I think that your question is related to How to properly handle two threads updating the same row in a database
On top of the answer provided by @shankarsh, I would say that if you want to use a Query rather than the EntityManager or the Hibernate session, you need to include the version field in your query, like this:
UPDATE yourEntity t SET t.columnToUpdate = :newValue, t.version = t.version + 1 WHERE yourcondition AND t.version = :version
This way the update will succeed only for a particular version, and any concurrent updates will not update anything.
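For example, a sketch of claiming a row that way through JPA (the Task entity and its fields are assumptions carried over from the question's scenario; this must run inside an active transaction):

import javax.persistence.EntityManager;

// Sketch: claim a row via a bulk JPQL update. executeUpdate() returns the
// affected-row count, so 0 means another thread won the race. Note that
// bulk updates bypass @Version handling, hence the manual increment.
static boolean claimTask(EntityManager em, long taskId, int expectedVersion) {
    int updated = em.createQuery(
            "UPDATE Task t SET t.status = 'PROCESSING', t.version = t.version + 1 "
          + "WHERE t.id = :id AND t.status = 'ACCEPTED' AND t.version = :version")
        .setParameter("id", taskId)
        .setParameter("version", expectedVersion)
        .executeUpdate();
    return updated == 1; // false: another thread already claimed this row
}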
I have a big production web application (Glassfish 3.1 + MySQL 5.5). All tables are InnoDB. Once every several days the application hangs completely.
SHOW FULL PROCESSLIST shows many simple insert or update queries on different tables but all having status
Waiting for table level lock
Examples:
update user
set user.hasnewmessages = NAME_CONST('in_flag',_binary'\0' COLLATE 'binary')
where user.id = NAME_CONST('in_uid',66381)
insert into exchanges_itempacks
set packid = NAME_CONST('in_packId',332149), type = NAME_CONST('in_type',1), itemid = NAME_CONST('in_itemId',23710872)
Queries with the longest 'Time' are waiting for the table-level lock too.
Please help me figure out why MySQL tries to get a table-level lock and what could be locking all these tables. All the articles about InnoDB locking say this engine uses no table locking unless you force it to.
My my.cnf has this:
innodb_flush_log_at_trx_commit = 0
innodb_support_xa = 0
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode=2
Binary log is off. I have no "LOCK TABLES" or other explicit locking commands at all. Transactions are READ_UNCOMMITTED.
SHOW ENGINE INNODB STATUS output:
http://avatar-studio.ru:8080/ph/imonout.txt
Are you using mysqldump to back up your database while it is still being accessed by your application? This could cause that behaviour.
I think there are some situations in which MySQL does take a full table lock (e.g. when using auto-increment).
I found a link which may help you: http://mysqldatabaseadministration.blogspot.com/2007/06/innodb-table-locks.html
Also review your Java persistence code to make sure all connections are committed/rolled back and closed (closing always in a finally block).
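For reference, a generic JDBC sketch of that cleanup pattern, with try-with-resources playing the role of the finally block (the DataSource wiring and the statement itself are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch: commit or roll back explicitly, and let try-with-resources
// close the connection even on failure, so no transaction is left open.
static void doWork(DataSource ds) throws SQLException {
    try (Connection conn = ds.getConnection()) {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                 "UPDATE user SET hasnewmessages = ? WHERE id = ?")) {
            ps.setBoolean(1, false);
            ps.setLong(2, 66381L);
            ps.executeUpdate();
            conn.commit();
        } catch (SQLException e) {
            conn.rollback(); // never leave the transaction dangling
            throw e;
        }
    } // connection closed here, even on failure
}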
Try setting innodb_table_locks=0 in MySQL configuration.
http://dev.mysql.com/doc/refman/5.0/en/innodb-parameters.html#sysvar_innodb_table_locks
Just a few ideas ...
I see you heavily use NAME_CONST in your code. Just try not to use it. MySQL can sometimes be buggy (I have also found several bugs), so I recommend not relying on features that are not so common / well tested. It is related to column names, so maybe it locks something? Well, it shouldn't if it affects only the result, but who knows? This is suspicious. Moreover, it is marked as a function for internal use only.
This may seem simple, but: you don't have a long-running SELECT statement that is possibly locking out updates and inserts? There's no query that's actually running, rather than locked?
Have you considered using MyISAM instead of InnoDB?
If you are not utilizing any transactional features, MyISAM might make more sense.
It's simpler, easier to optimize, and, since it doesn't have sophisticated transactional capabilities, easier to configure in your my.cnf.
Also, depending on the type of DB load your app creates, MyISAM might be more appropriate. I prefer MyISAM for read-heavy applications; again, it's easier to configure and understand.
Other suggestions:
It might be a good idea to find a way to not use NAME_CONST in your SQL.
"This function was added in MySQL 5.0.12. It is for internal use only."
When the documentation of an open source product says this, it's probably a good idea to heed its advice.
By default, MySQL stores all InnoDB table and schema data in one enormous file; there could be some kind of OS-level locking on that particular file that propagates to MySQL and prevents all table access. By using the innodb_file_per_table option, you may eliminate that potential issue. This also makes MySQL more space-efficient.
In this case you would have to create several different database tables with the same columns and insert no more than 3000 rows per table; when you want to enter more data, you would create another table dynamically (generate the table using code), insert the new data into it, and access the data from that table. In your situation, if more and more tables have to be generated, you would eventually have to create a new database.
I think this tip will help you design your database more carefully and resolve the error.
I understand a little about Oracle blocking - how updates block other updates till the transaction completes, how writers don't block readers etc.
I understand the concepts of pessimistic and optimistic locking, and the typical banking textbook examples about lost updates etc.
I also understand the JDBC transaction isolation levels where we might say, for instance, we are happy with seeing uncommitted data.
I'm a bit fuzzy, however, about how these concepts are related and how they interact. For instance:
Is Oracle providing pessimistic or optimistic locking by default? (It just seems to block the separate update, based on experiments in two TOAD sessions.)
If, as I suspect, these are application-level concepts, why would I go to the trouble of implementing a locking strategy when I can let the database synchronise transaction updates anyway?
How do transaction isolation levels (which I set on the connection) alter the database behaviour when other clients besides my application are accessing it with different isolation levels?
Any words to clarify these topics would be really appreciated!
Oracle allows for either type of locking; how you build your app dictates what is used. In that respect, it's not really a database decision.
Mostly, Oracle's locking is sufficient in a stateful connection to the database. In non-stateful apps, e.g. web apps, you cannot use it; you have to use application-level locking in such situations, because locking applies to a session.
Usually you don't need to worry about it. In Oracle, readers never block writers, and writers never block readers. Oracle's behavior does not change with the various ANSI isolation levels. For example, there is no such thing as a "dirty read" in Oracle. Tom Kyte points out that the spirit of allowing dirty reads is to avoid blocking reads, which is not an issue in Oracle.
I would strongly recommend reading Tom Kyte's excellent book "Expert Oracle Database Architecture", in which these and other topics are addressed quite clearly.
Optimistic locking is basically "I'll only lock the data when I modify the data, not when I read it". The gotcha is that if you don't lock the data right away, someone else can change it before you do, and you're looking at old news (and can blindly overwrite changes that happened between when you read the data and when you updated it).
Pessimistic locking is locking the data when you read it so that you'll be sure that no one has changed it if you do decide to update it.
This is an application decision, not an Oracle decision as:
SELECT x, y, z FROM table1 WHERE a = 2
will not lock the matching records but
SELECT x, y, z FROM table1 WHERE a = 2 FOR UPDATE
will. So you have to decide if you're ok with optimistic locking
SELECT x, y, z FROM table1 WHERE a = 2
...time passes...
UPDATE table1
SET x = 1, y = 2, z = 3
WHERE a = 2
(you could have overwritten a change someone else made in the meantime)
or need to be pessimistic:
SELECT x, y, z FROM table1 WHERE a = 2 FOR UPDATE
...time passes...
UPDATE table1
SET x = 1, y = 2, z = 3
WHERE a = 2
(you're sure that no one has changed the data since you queried it.)
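In JDBC the pessimistic variant might look like the sketch below; both statements must run in the same transaction, since the row locks taken by FOR UPDATE are held until commit or rollback (table/column names mirror the SQL above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: SELECT ... FOR UPDATE locks the matching rows for this
// transaction; the later UPDATE then cannot race with other sessions.
static void pessimisticUpdate(Connection conn) throws SQLException {
    conn.setAutoCommit(false);
    try (PreparedStatement sel = conn.prepareStatement(
             "SELECT x, y, z FROM table1 WHERE a = ? FOR UPDATE")) {
        sel.setInt(1, 2);
        try (ResultSet rs = sel.executeQuery()) {
            // rows matching a = 2 are now locked by this transaction
        }
        try (PreparedStatement upd = conn.prepareStatement(
                 "UPDATE table1 SET x = 1, y = 2, z = 3 WHERE a = ?")) {
            upd.setInt(1, 2);
            upd.executeUpdate();
        }
        conn.commit(); // releases the row locks
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    }
}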
Check here for isolation levels available in Oracle.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/consist.htm#CNCPT621
Oracle ALWAYS handles pessimistic locking. That is, it will lock a record when it is updated (and you can also hit locks for deletes and inserts if there is a key involved). You can use SELECT ... FOR UPDATE to augment the pessimistic locking strategy.
Really any database/storage engine that works transactionally must do some form of locking.
The SERIALIZABLE isolation level is much closer to an optimistic locking mechanism. It will throw an exception if the transaction tries to update a record that has been updated since the start of the transaction. However, it relies on a one-to-one relationship between the database session and the end-user session.
As connection pooling/stateless applications become prevalent, especially in systems with heavy user activity, having a database session tied up for an extended period can be a poor strategy. Optimistic locking is preferred, and later versions of Oracle support this with the ORA_ROWSCN pseudocolumn and the ROWDEPENDENCIES table option. Basically they make it easier to see whether a record has changed since you initially/last looked at it.
As that one-to-one relationship between a database session and a user session has become legacy, the application layer has preserved more of the 'user session' state and so has become more responsible for checking that choices the user made five or ten minutes ago are still valid (e.g. is the book still in stock, or did someone else buy it).
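A hedged sketch of that ORA_ROWSCN approach in JDBC (it assumes the table was created WITH ROWDEPENDENCIES so the SCN is tracked per row rather than per block; table/column names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: read the row's last-change SCN, then update only if the row is
// unchanged. Assumes CREATE TABLE ... ROWDEPENDENCIES; names illustrative.
static boolean updateIfUnchanged(Connection conn, int key) throws SQLException {
    long scn;
    try (PreparedStatement sel = conn.prepareStatement(
             "SELECT ora_rowscn FROM table1 WHERE a = ?")) {
        sel.setInt(1, key);
        try (ResultSet rs = sel.executeQuery()) {
            if (!rs.next()) return false;
            scn = rs.getLong(1);
        }
    }
    // ... time passes; no database session or lock is held meanwhile ...
    try (PreparedStatement upd = conn.prepareStatement(
             "UPDATE table1 SET x = 1 WHERE a = ? AND ora_rowscn = ?")) {
        upd.setInt(1, key);
        upd.setLong(2, scn);
        return upd.executeUpdate() == 1; // 0 rows: changed since we read it
    }
}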
I encountered some curious behavior today and was wondering if it is expected or standard. We are using Hibernate against MySQL5. During the course of coding I forgot to close a transaction, I presume others can relate.
When I finally closed the transaction, ran the code and checked the table, I noticed the following: all the times I mistakenly ran my code without closing the transaction, which therefore did not result in actual rows being inserted, nevertheless incremented the auto-increment surrogate primary key value, so that I now have a gap (i.e. no rows with id values of 751 to 762).
Is this expected or standard behavior? Might it vary depending on the database? And/or does Hibernate's own transaction abstraction have some possible effect on this?
Yes that's expected.
If you think about it: what else can the database do? If you increment the column and then use that value as a foreign key in other inserts within the same transaction, and someone else commits while you're doing that, then they can't reuse your value. You'll get a gap.
Sequences in databases like Oracle work much the same way. Once a particular value is requested, whether or not it's then committed doesn't matter; it'll never be reused. And sequences are only loosely, not absolutely, ordered too.
It's pretty much expected behaviour. Without it, the DB would have to wait for each transaction that has inserted a record to complete before assigning a new id to the next insert.
Yes, this is expected behaviour. This documentation explains it very well.
Beginning with 5.1.22, there are actually three different lock modes that control how concurrent transactions get auto-increment values. But all three will cause gaps for rolled-back transactions (auto-increment values used by the rolled-back transaction will be thrown away).
Database sequences do not guarantee an id sequence without gaps. They are designed to be transaction-independent; only that way can they be non-blocking.
If you want no gaps, you must write your own stored procedure to increment the column transactionally, but such code will block other transactions, so you must be careful.
You do:
SELECT CURRVAL FROM SEQUENCE_TABLE WHERE TYPE = :YOUR_SEQ_NAME FOR UPDATE;
UPDATE SEQUENCE_TABLE SET CURRVAL = :INCREMENTED_CURRVAL WHERE TYPE = :YOUR_SEQ_NAME;
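A JDBC sketch of that same logic done client-side (table/column names are illustrative; note that it serializes all callers on the counter row):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of a gapless counter: lock the counter row, read it, increment it,
// all in one transaction with the insert that uses the id, so a rollback
// also rolls back the increment and no gap appears.
static long nextGaplessId(Connection conn, String seqName) throws SQLException {
    conn.setAutoCommit(false);
    long next;
    try (PreparedStatement sel = conn.prepareStatement(
             "SELECT currval FROM sequence_table WHERE type = ? FOR UPDATE")) {
        sel.setString(1, seqName);
        try (ResultSet rs = sel.executeQuery()) {
            if (!rs.next()) throw new SQLException("Unknown sequence: " + seqName);
            next = rs.getLong(1) + 1;
        }
    }
    try (PreparedStatement upd = conn.prepareStatement(
             "UPDATE sequence_table SET currval = ? WHERE type = ?")) {
        upd.setLong(1, next);
        upd.setString(2, seqName);
        upd.executeUpdate();
    }
    // The caller commits together with the insert that uses this id.
    return next;
}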