locking DB records for concurrency between threads - java

This kind of thing has been done a million times I'm sure, but my search-fu appears weak today, and I'd like to get opinions on what is generally considered the best way to accomplish this goal.
My application keeps track of sessions for online users in a system. Each session corresponds to a single record in a database. A session can be ended in one of two ways. Either a "stop" message is received, or the session can timeout. The former case is easy, it is handled in the message processing thread and everything is fine. The latter case is where the concern comes from.
In order to process timeouts, each record has an ending time column that is updated each time a message is received for that session. To make timeouts work, I have a thread that returns all records from the database whose endtime < NOW() (has an end time in the past), and goes through the processing to close those sessions. The problem here is that it's possible that I might receive a message for a session while the timeout thread is going through processing for the same session. I end up with a race between the timeout thread and message processing thread.
I could use a semaphore or the like and just prevent the message thread from processing while timeout processing takes place, as it only needs to run every 30 seconds or a minute. However, as the user table gets large this is going to run into some serious performance issues. What I would like is a way for the message thread to know that a record is currently being processed by the timeout thread. If I could achieve that, I could either discard the message or wait for the timeout thread to finish, but only when there is an actual conflict instead of always.
Currently my application uses JDBC directly. Would there be an easier/standard method for solving this issue if I used a framework such as Hibernate?

This is a great opportunity for all kinds of crazy bugs to occur, and some of the cures can cause performance issues.
The classic solution would be to use transactions (http://dev.mysql.com/doc/refman/5.0/en/commit.html). This allows you to guarantee the consistency of your data - but a long-running transaction on the database turns it into a huge bottleneck; if your "find timed-out sessions" code runs for a minute, the transaction may run for that entire period, effectively locking write access to the affected table(s). Most systems would not deal well with this.
My favoured solution for this kind of situation is to have a "state machine" for status; I like to implement this as a history table, but that does tend to lead to a rapidly growing database.
You define the states of a session as "initiated", "running", "timed-out - closing", "timed-out - closed", and "stopped by user" (for example).
You implement code which honours the state transition logic in whatever data access logic you've got. The pseudo code for your "clean-up" script might then be:
update all records whose endtime < now() and whose status is "running", set status = "timed-out - closing"
for each record whose status is "timed-out - closing"
do whatever other stuff you need to do
update that record to set status "timed-out - closed" where status = "timed-out - closing"
next record
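For illustration, a minimal JDBC sketch of that pseudo code might look like this - the sessions table and its session_id, status and endtime columns are assumptions, not something from your schema:

import java.sql.*;
import java.util.*;

public class SessionReaper {

    // Sketch only: claims timed-out sessions with one atomic UPDATE,
    // then processes and closes each claimed session individually.
    public void reapTimedOutSessions(Connection conn) throws SQLException {
        // Phase 1: atomically move all timed-out "running" sessions to
        // "timed-out - closing"; concurrent stop messages now fail their
        // own status check and back off.
        try (Statement st = conn.createStatement()) {
            st.executeUpdate(
                "UPDATE sessions SET status = 'timed-out - closing' " +
                "WHERE endtime < NOW() AND status = 'running'");
        }

        // Phase 2: collect the claimed sessions...
        List<Long> claimed = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT session_id FROM sessions " +
                 "WHERE status = 'timed-out - closing'")) {
            while (rs.next()) {
                claimed.add(rs.getLong(1));
            }
        }

        // ...and close them one by one, re-checking the status each time.
        try (PreparedStatement close = conn.prepareStatement(
                 "UPDATE sessions SET status = 'timed-out - closed' " +
                 "WHERE session_id = ? AND status = 'timed-out - closing'")) {
            for (long id : claimed) {
                // do whatever other clean-up the session needs here
                close.setLong(1, id);
                close.executeUpdate();
            }
        }
    }
}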
All other attempts to modify the current state of the session record must check that the current status is valid for the attempted change.
For instance, the "manual" stop code should be something like this:
update sessions
set status = 'stopped by user'
where session_id = xxxxx
and status = 'running'
If the auto-close routine has kicked in during the time between showing the user interface and the stop code reaching the database, the where clause won't match any records, so the rest of the code simply doesn't run.
For this to work, all code that modifies the session status must check its pre-conditions; the most maintainable way is to encode status and allowed transitions into a separate database table.
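As an illustrative sketch (the table and column names here are invented, not a standard), such a transition table and a guarded update could look like:

-- One row per allowed transition.
CREATE TABLE session_status_transitions (
    from_status VARCHAR(32) NOT NULL,
    to_status   VARCHAR(32) NOT NULL,
    PRIMARY KEY (from_status, to_status)
);

INSERT INTO session_status_transitions VALUES
    ('initiated', 'running'),
    ('running', 'timed-out - closing'),
    ('running', 'stopped by user'),
    ('timed-out - closing', 'timed-out - closed');

-- The guarded update then only succeeds for a legal transition:
UPDATE sessions s
SET s.status = 'stopped by user'
WHERE s.session_id = xxxxx
AND EXISTS (SELECT 1 FROM session_status_transitions t
            WHERE t.from_status = s.status
              AND t.to_status = 'stopped by user');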
You could also write triggers to enforce this logic, though I'm normally not a fan of triggers - only do this if you have to.
I don't think this adds significant performance worries - but test and optimize. The majority of the extra work on the database comes from adding extra "where" clauses to your update statements; assuming you have an index on status, it's unlikely to have a measurable impact.

Related

Why does a SELECT wait for a lock?

In my application I have the problem that sometimes SELECT statements run into a java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction exception. Sadly I can't create an example as the circumstances are very complex. So the question is just about general understanding.
A little bit of background information: I'm using MySQL (InnoDB) with READ_COMMITTED isolation level.
Actually I don't understand how a SELECT can ever run into a lock timeout with that setup. I thought that a SELECT would never lock, as it just returns the latest committed state (managed by MySQL). Anyway, according to what is happening this seems to be wrong. So how is it really?
I already read this https://dev.mysql.com/doc/refman/8.0/en/innodb-locking.html but that didn't really give me a clue. No SELECT ... FOR UPDATE or something like that is used.
That is probably due to your database. In my experience, this kind of problem usually comes from that side, not from the programming side that accesses it; in the end, the programming side is just a "go and get that for me in that db" thing.
I found this without much effort.
It basically explains that:
Lock wait timeout typically occurs when a transaction is waiting on row(s) of data to update which have already been locked by some other transaction.
You should also check this answer, which covers a specific transaction problem; it might help you, as trying to change different tables can cause the timeout:
the query was attempting to change at least one row in one or more InnoDB tables. Since you know the query, all the tables being accessed are candidates for being the culprit.
To speed up queries in a DB, several transactions can be executed at the same time. For example, if someone runs a select query over a table for the wages of the employees of a company (each employee identified by an id), and someone else changes the last name of an employee who e.g. has married, both queries can be executed at the same time because they don't interfere.
But in other cases even a SELECT statement might interfere with another statement.
To prevent unexpected results in SQL transactions, transactions follow the ACID model, which stands for Atomicity, Consistency, Isolation and Durability (for further information read Wikipedia).
Let's say transaction 1 starts to calculate something and then wants to write the results to table A. Before writing, it blocks SELECT statements on table A; otherwise this would violate the isolation requirement, because if a transaction 2 started while 1 is still writing, 2's results would depend on which rows 1 had already written and which not.
Now, this can even produce a deadlock. E.g. before transaction 1 can write the last field in table A, it still has to write something to table B, but transaction 2 has already locked table B to read safely from it after it read from A. Now you have a deadlock: 2 wants to read from A, which is blocked by 1, so it waits for 1 to finish, but 1 waits for 2 to unlock table B before it can finish itself.
To solve this problem, one strategy is to roll back certain transactions after a certain timeout. (more here)
So that might be the reason for your SELECT statement getting a lock wait timeout exceeded.
But a deadlock usually just happens by coincidence, so if transaction 2 was forced to roll back, transaction 1 should be able to finish, and 2 should succeed on a later try.
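If you want to see the failure mode in isolation, here is a minimal two-connection sketch that forces the error on MySQL/InnoDB - the JDBC URL, credentials and the accounts table are placeholders, and it deliberately uses SELECT ... FOR UPDATE (which the question rules out) simply because that is the easiest way to request a conflicting row lock:

import java.sql.*;

public class LockWaitDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost/test";
        try (Connection tx1 = DriverManager.getConnection(url, "user", "pass");
             Connection tx2 = DriverManager.getConnection(url, "user", "pass")) {
            tx1.setAutoCommit(false);
            tx2.setAutoCommit(false);

            // Transaction 1 takes an exclusive lock on the row and keeps
            // its transaction open, so the lock is not released.
            try (Statement st = tx1.createStatement()) {
                st.executeUpdate(
                    "UPDATE accounts SET balance = balance + 1 WHERE id = 1");
            }

            // Transaction 2 needs the same row lock; it blocks and, after
            // innodb_lock_wait_timeout seconds, throws
            // java.sql.SQLException: Lock wait timeout exceeded.
            try (Statement st = tx2.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT * FROM accounts WHERE id = 1 FOR UPDATE")) {
                rs.next();
            }
        }
    }
}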

How to create acceptance tests for async micro services

I have a microservice which should create a user, but since user creation is complex it uses a queue; the user is actually created by the consumer, and the endpoint only takes the request and returns ok or fail.
How do I create acceptance test for this acceptance criteria:
Given: User who wants to register
When: api is requested for user creation
Then: create user AND set hosting environment_id on new user
For this I have to wait until the environment is actually set up, which takes up to 30 seconds. And if I implement a sleep inside my test, I hit the "wait and see" anti-pattern. How do I properly test this without violating best practices?
The most proper approach might be to return a response instantly, let's say "setup process started" (with a setup process id), and then have another API method which will "obtain setup status" (for that setup process id) - and then proceed when "setup has completed".
That way nothing will be stuck for 30s, neither in tests nor in production - and one could display a progress bar to the user which indicates the current status, so that they have an estimate of how long it will take, rather than getting the impression that something is stuck or not working.
One can barely test asynchronously while the setup process itself is not asynchronous; and long-running tasks without any kind of status indicator are barely acceptable for delivery, because they only appear valid when you know what is going on in the background, not when you don't.
Whenever testing hits an anti-pattern, that is an indicator that the solution might be sub-optimal.
I don't presume to tell you exactly how to code your acceptance tests without more detail regarding your language or testing stack, but the simplest solution is to implement a dynamic wait that continuously polls the state of the system for the desired result, breaking the loop (presuming you use some form of loop, but that's up to you) when the expected/desired response has been received.
This "polling" can take many forms such as:
a) querying for an expected update to a database (perhaps a value within a table is updated when the user is created)
b) pinging the dependent service until you receive the proper "signal" you are expecting to indicate user creation. For example, perhaps a GET request to another service (or another endpoint of the same service) returns a status of “created” for the given user, signifying that the user has been created.
Without further technical information I can’t give you exact instructions, but dynamic polling is the solution I use every day to test our asynchronous microservice architecture.
Keep in mind, this dynamic polling solution operates on the assumption that you have access to the service(s) and/or database(s) that contain the indicator you are "polling" for when it is time to move forward with your test. Again, I'm assuming the signal to move forward is something transparent, such as a status change for the newly created user, the user's existence in a database/table either external or internal to the microservice, etc.
Some other assumptions in this scenario are:
a) sufficient non-functional performance of the System Under Test - if its non-functional performance is poor, the polling approach becomes a constraint.
b) a lack of resource constraints, as resources are consumed somewhat heavily during the period of "polling" (think Azure dynamic resource flexing, which can be costly over time).
Note: Be careful of infinite loops. You should insert some sort of constraint that exits the polling loop (and likely results in a failed test) after a reasonable period of time or number of attempts, at your discretion.
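A minimal sketch of such a dynamic wait, assuming a hypothetical fetchUserStatus() helper that reads whatever signal your system exposes (a DB column, a GET endpoint, ...):

import java.time.Duration;
import java.time.Instant;

public class DynamicWait {

    // Polls until the user reaches the "created" state or the timeout
    // expires; the bounded deadline is the guard against infinite loops.
    static boolean waitForUserCreated(String userId, Duration timeout,
                                      Duration interval) throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if ("created".equals(fetchUserStatus(userId))) {
                return true;                       // desired state reached
            }
            Thread.sleep(interval.toMillis());     // back off between polls
        }
        return false;                              // let the test fail explicitly
    }

    // Hypothetical: query the database or GET /users/{id}/status here.
    static String fetchUserStatus(String userId) {
        return "pending";
    }
}

A test would then assert that waitForUserCreated(id, Duration.ofSeconds(30), Duration.ofSeconds(1)) returns true.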
Create a query service that, given the user attributes (id, name, etc.), will return the status of the user.
The acceptance criteria will then have two parts:
create-user service returns 200
get-status service returns 200 (you can call it in a loop in your test)
This service will be helpful in the long run for various reasons:
Check how long the async process takes to complete.
At any time you can get the status of any user, including validating whether a user is truly deleted / inactivated etc.
You can mock this service's results in your end-to-end integration testing.
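A rough sketch of such a two-part test using plain java.net.http - the URLs, the payload and the 30-attempt limit are all assumptions:

import java.net.URI;
import java.net.http.*;

public class UserAcceptanceTest {
    static final HttpClient client = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // Part 1: create-user returns 200.
        HttpRequest create = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8080/users"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"alice\"}"))
            .build();
        if (client.send(create, HttpResponse.BodyHandlers.ofString()).statusCode() != 200)
            throw new AssertionError("create-user failed");

        // Part 2: poll get-status in a bounded loop until it returns 200.
        HttpRequest status = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8080/users/alice/status"))
            .GET().build();
        for (int attempt = 0; attempt < 30; attempt++) {
            if (client.send(status, HttpResponse.BodyHandlers.discarding())
                      .statusCode() == 200)
                return;                              // user fully provisioned
            Thread.sleep(1000);
        }
        throw new AssertionError("user was not provisioned within 30s");
    }
}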

Getting usernames from database that are not being used by a thread

I have a multi threaded Java program where each thread gets one username for some processing which takes about 10 minutes or so.
Right now it gets the usernames via a SQL query that returns one username at random, and the problem is that the same username can be given to more than one thread at a time.
I don't want a username that is being processed by a thread, to be fetched again by another thread. What is a simple and easy way to achieve this goal?
Step-by-step solution:
Create a threads table where you store the threads' state. Among other columns, you need to store the owner user's id there as well.
When a thread is associated with a user, create a record, storing the owner along with all the other juicy stuff.
When a thread is no longer associated with a user, set its owner to null.
When a thread finishes its job, remove its record.
When you randomize your user selection for threads, filter out all the users who are already associated with at least one thread. This way you know any users left at the end of the randomization are threadless.
Make sure everything is kept in place: if, while working on the feature, some thread records were created that should have been removed or detached from their owner, then do so.
There are a lot of ways to do this... I can think of three solutions to this problem:
1) A singleton class with an array that contains all the users already in use. Be sure that access to the array is synchronized and that you remove unused users from it.
2) A flag in the user table that contains a unique id referencing the thread that is using it. Afterwards you have to manage when to remove the flag from the table.
-> As an alternative, why not check whether a pool of connections shared by all the threads could be the solution to your problem?
You could do one batch query that returns all of the usernames you want from the database and store them in a List (or some other type of collection).
Then ensure synchronised access to this list to prevent two threads taking the same username at the same time. Use a synchronised list or a synchronised method to access the list and remove the username from it.
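A minimal sketch of that idea (query and names are illustrative):

import java.sql.*;
import java.util.*;

public class UsernameQueue {
    private final Deque<String> usernames = new ArrayDeque<>();

    // One batch query up front; afterwards threads only touch the in-memory queue.
    public void load(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT username FROM users")) {
            synchronized (usernames) {
                while (rs.next()) {
                    usernames.add(rs.getString(1));
                }
            }
        }
    }

    // Synchronised access: no two threads can ever receive the same username.
    public String next() {
        synchronized (usernames) {
            return usernames.pollFirst();   // null once the queue is exhausted
        }
    }
}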
One way to do it is to add another column to your users table. This column is a simple flag that shows whether a user has an assigned thread or not.
But when you query the db you have to wrap it in a transaction.
You begin the transaction, then first you select a user that doesn't have a thread, after that you update the flag column, and then you commit or roll back.
Since the queries are wrapped in a transaction, the db handles all the issues that happen in scenarios like this.
With this solution there is no need to implement synchronization mechanisms in your code, since the database will do it for you.
If you still have problems after doing this, I think you have to configure the isolation levels of your db server.
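For illustration, that claim could look roughly like this in JDBC - table and column names are placeholders:

import java.sql.*;

public class FlagClaimer {

    // Selects an unassigned user, flags it with this worker's id and
    // commits; the row lock taken by FOR UPDATE keeps two threads from
    // claiming the same user.
    public static String claim(Connection conn, long threadId) throws SQLException {
        conn.setAutoCommit(false);
        try {
            String username = null;
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT username FROM users " +
                     "WHERE worker_id IS NULL LIMIT 1 FOR UPDATE")) {
                if (rs.next()) username = rs.getString(1);
            }
            if (username != null) {
                try (PreparedStatement ps = conn.prepareStatement(
                         "UPDATE users SET worker_id = ? WHERE username = ?")) {
                    ps.setLong(1, threadId);
                    ps.setString(2, username);
                    ps.executeUpdate();
                }
            }
            conn.commit();           // releases the row lock
            return username;
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}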
You appear to want a work queue system. Don't reinvent the wheel - use a well established existing work queue.
Robust, reliable concurrent work queuing is unfortunately tricky with relational databases. Most "solutions" land up:
Failing to cope with work items not being completed due to a worker restart or crash;
Actually land up serializing all work on a lock, so all but one worker are just waiting; and/or
Allowing a work item to be processed more than once
PostgreSQL 9.5's new FOR UPDATE SKIP LOCKED feature will make it easier to do what you want in the database. For now, use a canned reliable task/work/message queue engine.
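For illustration, a claim with SKIP LOCKED might look like this on PostgreSQL 9.5+ (table and column names assumed):

import java.sql.*;

public class UsernameClaimer {

    // Each worker claims one row inside its own transaction; SKIP LOCKED
    // makes concurrent workers skip rows already locked by another
    // transaction instead of blocking on them or double-claiming.
    public static String claimUsername(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT username FROM usernames " +
                 "WHERE processed = false " +
                 "LIMIT 1 FOR UPDATE SKIP LOCKED")) {
            if (!rs.next()) {
                conn.rollback();
                return null;         // nothing left to claim
            }
            // Keep the transaction open while processing; committing when
            // the work is done releases the lock.
            return rs.getString(1);
        }
    }
}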
If you must do this yourself, you'll want to have a table of active work items where you record the active process ID / thread ID of the worker that's processing a row. You will need a cleanup process that runs periodically, on thread crash, and on program startup, and that removes entries for failed jobs (where the worker process no longer exists) so they can be re-tried.
Note that unless the work the workers do is committed to the database in the same transaction that marks the work queue item as done, you will have timing issues where the work can be completed but the DB entry for it isn't marked as done, leading to work being repeated. To absolutely prevent that, you must either commit the work to the DB in the same transaction as the change that marks the work as done, or use two-phase commit and an external transaction manager.

querying DB in a loop continuously in java

Is it advisable to query a database continuously in a loop, to get any new data which is added to specific table?
I have below a piece of code:
while (true) {
    try {
        // get connection
        // execute only "SELECT" query
    } catch (Exception e) {
        // log the exception instead of silently swallowing it
    } finally {
        // close connection
    }
    // sleep 5 seconds
    try {
        Thread.sleep(5000);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}
It is a simple approach that works in many cases. Make sure that the select statement you use puts as little load as possible on the database.
The better (but more difficult to set up) variant would be to use some mechanism to get actively informed by the database about changes. Some databases can, for example, send information through some queuing mechanism, which in turn could be triggered by a database trigger.
Querying the database in a loop is not advisable, but if you need to do it anyway you can daemonize your program.
If the interval is longer than 5 s, a timer would be appropriate.
For staying totally up-to-date:
Triggers and cascading inserts/deletes can propagate data inside the database itself.
Otherwise, before altering the database, issue messages to a message queue. This does not necessarily need to be a Message Queue (capitals) but can be any kind of queue, like a publish/subscribe mechanism or whatever.
On one hand, if your database has a low change rate then it would be better to use/implement a notification system. Many RDBMS have notification features (Oracle's Database Change Notification, Postgres' Asynchronous Notifications, ...), and if your RDBMS does not have them, it is easy to implement/emulate using triggers (if your RDBMS supports them).
On the other hand, if the change rate is very high then your solution is preferable. But you need to adjust the interval time carefully, and you must note: reading at intervals to detect changes has a negative side effect.
Using/implementing a notification system, it is easy to tell the program what has changed (a new row X inserted in table A, row Y updated in table B, …).
But if you read your data at intervals, it is not easy to determine what has changed. Then you have two options:
a) you must not only read but also load/process all information every interval;
b) or you must not only read but also compare database data with memory-resident data to determine what has changed every interval.
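As an example of the notification route, PostgreSQL's LISTEN/NOTIFY can be consumed through the pgJDBC driver roughly like this - the channel name table_changed is made up, and a trigger on the watched table would be responsible for issuing the NOTIFY:

import java.sql.*;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class ChangeListener {
    public static void listen(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN table_changed");
        }
        PGConnection pgConn = conn.unwrap(PGConnection.class);
        while (true) {
            // pgJDBC only picks up notifications while talking to the
            // server, so issue a cheap dummy query each iteration.
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
            }
            PGNotification[] notifications = pgConn.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    System.out.println("change signalled: " + n.getParameter());
                }
            }
            Thread.sleep(500);
        }
    }
}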

Mysql/JDBC: Deadlock

I have a J2EE server, currently running only one thread (the problem arises even within one single request), to save its internal model of data to MySQL/InnoDB tables.
Basic idea is to read data from flat files, do a lot of calculation and then write the result to MySQL. Read another set of flat files for the next day and repeat with step 1. As only a minor part of the rows change, I use a recordset of already written rows, compare to the current result in memory and then update/insert it correspondingly (no delete, just setting a deletedFlag).
Problem: Despite a purely sequential process I get lock timeout errors (#1204), and the InnoDB status dump shows record locks (though I do not know how to figure out the details). To complicate things, on my Windows machine everything works, while the production system (where I can't install innotop) has some record locks.
To the critical code:
1. Read data and calculate (works)
2. Get Connection from Tomcat Pool and set autocommit=false
3. Use Statement to issue "LOCK TABLES order WRITE"
4. Open Recordset (updateable) on table order
5. For each row in Recordset: if there is a difference, update from the in-memory object
6. For objects not yet in the database: insert data
7. Commit Connection, close Connection
Steps 5/6 have a commit counter so that every 500 changes the rows are committed (to avoid having 50,000 rows uncommitted). In the first run (so without any locks) this takes at most 30 sec per table.
As stated above, right now I avoid any other interaction with the database, but in future other processes (user requests) might read data or even write some fields. I would not mind those processes reading either old or new data, or waiting a couple of minutes for a lock to save changes to the db.
I would be happy about any recommendation to do better than that.
Summary: Complex code calculates in-memory objects which are to be synchronized with the database. This sync currently seems to lock itself, despite the fact that it sequentially locks, changes and unlocks the tables without any exceptions thrown. But for some reason row locks seem to remain.
Kind regards
Additional information:
Mysql: show processlist lists no active connections (all asleep, or alternatively waiting for table locks on table order), while SHOW ENGINE INNODB STATUS reports a number of row locks (unfortunately I can't tell which transaction is meant, as the output is quite cryptic).
Solved: I wrongly declared a ResultSet as updateable. The ResultSet was only closed in a finalize() method via the garbage collector, which was not fast enough - before that happened I reopened the ResultSet and therefore tried to acquire a lock on an already locked table.
Yet it was odd that innotop showed another query of mine hanging on a completely different table. Though as it works for me, I do not care about oddities :-)
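For anyone hitting the same issue: in current Java the simple guard against that failure mode is try-with-resources, which closes the ResultSet deterministically instead of leaving it to the garbage collector (query and table name for illustration only):

import java.sql.*;

static void syncOrders(Connection conn) throws SQLException {
    try (Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery("SELECT * FROM `order`")) {
        while (rs.next()) {
            // compare/update the row against the in-memory object
        }
    }   // rs and st are closed here, releasing their locks immediately
}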
