MySQL InnoDB hangs on waiting for table-level locks - java

I have a big production web application (Glassfish 3.1 + MySQL 5.5). All tables are InnoDB. Once every few days the application hangs completely.
SHOW FULL PROCESSLIST shows many simple insert or update queries on different tables, all of them with the status
Waiting for table level lock
Examples:
update user
set user.hasnewmessages = NAME_CONST('in_flag',_binary'\0' COLLATE 'binary')
where user.id = NAME_CONST('in_uid',66381)
insert into exchanges_itempacks
set packid = NAME_CONST('in_packId',332149), type = NAME_CONST('in_type',1), itemid = NAME_CONST('in_itemId',23710872)
Queries with the longest 'Time' are waiting for the table-level lock too.
Please help me figure out why MySQL tries to acquire a table-level lock and what could be locking all these tables. All the articles about InnoDB locking say this engine uses no table locks unless you force it to.
My my.cnf has this:
innodb_flush_log_at_trx_commit = 0
innodb_support_xa = 0
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode=2
The binary log is off. I have no "LOCK TABLES" or other explicit locking commands at all. Transactions are READ_UNCOMMITTED.
SHOW ENGINE INNODB STATUS output:
http://avatar-studio.ru:8080/ph/imonout.txt

Are you using mysqldump to back up your database while it is still being accessed by your application? That could cause this behaviour.

I think there are some situations in which MySQL takes a full table lock (e.g. when using auto-increment).
I found a link which may help you: http://mysqldatabaseadministration.blogspot.com/2007/06/innodb-table-locks.html
Also review your Java persistence code to make sure every connection is committed or rolled back and then closed (always close in a finally block); a sketch follows below.
Try setting innodb_table_locks=0 in MySQL configuration.
http://dev.mysql.com/doc/refman/5.0/en/innodb-parameters.html#sysvar_innodb_table_locks
Just a few ideas ...
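To illustrate the connection-handling point: a minimal sketch using try-with-resources (Java 7+). The DataSource, the exact SQL, and the column types are assumptions based on the queries in the question, not code from the application.

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class MessageFlagDao {
    private final DataSource ds;   // assumed: a pooled DataSource, e.g. looked up from Glassfish JNDI

    public MessageFlagDao(DataSource ds) {
        this.ds = ds;
    }

    public void markHasNewMessages(long userId, boolean flag) throws SQLException {
        // try-with-resources guarantees the statement and connection are closed
        // even when the update throws, so no transaction is left dangling.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "update user set hasnewmessages = ? where id = ?")) {
            con.setAutoCommit(false);
            try {
                ps.setBoolean(1, flag);
                ps.setLong(2, userId);
                ps.executeUpdate();
                con.commit();
            } catch (SQLException e) {
                con.rollback();   // never leave the transaction open on failure
                throw e;
            }
        }
    }
}

The point is simply that every code path ends with commit or rollback and a close; a connection returned to the pool mid-transaction is a classic cause of "everything suddenly waits on locks".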

I see you heavily use NAME_CONST in your code. Try not to use it. MySQL can sometimes be buggy (I have also found several bugs), so I recommend not relying on features that are not common or well tested. It is related to column names, so maybe it locks something? It shouldn't if it only affects the result, but who knows? It is suspicious. Moreover, it is documented as a function for internal use only.

This may seem obvious, but do you have a long-running select statement that could be locking out the updates and inserts? Is there any query that is actually running rather than waiting on a lock?

Have you considered using MyISAM instead of InnoDB?
If you are not utilizing any transactional features, MyISAM might make more sense.
It's simpler, easier to optimize, and since it doesn't have sophisticated transactional capabilities, it's easier to configure in your my.cnf.
Also, depending on the type of DB load your app creates, MyISAM might be more appropriate. I prefer MyISAM for read-heavy applications; again, it's easier to configure and understand.
Other suggestions:
It might be a good idea to find a way to not use NAME_CONST in your SQL.
"This function was added in MySQL 5.0.12. It is for internal use only."
When the documentation of an open-source product says this, it's probably a good idea to heed its advice.
By default, MySQL stores all InnoDB table and schema data in one enormous file; there could be some kind of OS-level locking on that particular file that propagates to MySQL and prevents all table access. By using the innodb_file_per_table option, you may eliminate that potential issue. This also makes MySQL more space efficient.

In this case you could create several tables with identical columns and avoid inserting more than 3000 rows per table; when you need to store more data, generate another table dynamically from code, insert the new data there, and read it from that table. If more and more tables have to be generated, you would then create a new database.
I think this tip will help you design your database more carefully and avoid the error.

Related

Getting Error while running "Alter table <TableName> not logged initially" through Java

There are millions of records in a table that I need to delete, and I want to turn off transaction logging for the operation, so I use ALTER TABLE ... NOT LOGGED INITIALLY, but it throws an error and makes the table inaccessible. The table has no partitions, but it does contain indexes and sequences. Autocommit is off.
Error : DB2 SQL Error: SQLCODE=-911, SQLSTATE=40001, SQLERRMC=68, DRIVER=3.65.77
I get the above error only when running through Java; there is no error when running from the client.
I need to know in which cases and scenarios the query can fail, what I need to ensure before running it, and how to handle this scenario in code.
When asking for help with Db2, always put into your question the Db2-server version and platform (Z/OS, i-Series, Linux/Unix/Windows) because the answers can depend on those facts.
The sqlcode -911 with sqlerrmc 68 is a lock-timeout. This is not a deadlock. Your job
might not be the only job that is concurrently accessing the table. Monitoring functions and administrative views let you see which locks exist at any moment in time (e.g. SNAPLOCK and SNAP_GET_LOCK table function and many others).
Refer to the Db2 Knowledge Centre for details of the suggestions below, to educate yourself.
Putting the table into not-logged-initially for your transaction is high risk, especially if you are a novice, because if your transaction fails then you can lose the entire table.
If you persist with that approach, take precautions and rehearse point in time recovery in case your actions cause damage. Verify your backups and recovery steps carefully.
With autocommit disabled, one can lock a table in exclusive mode, but this can cause a service-outage on production if the target table is a hot table. One can also force off applications that are holding locks if you have the relevant rights.
If any other running jobs (i.e. not your own code) are accessing the table while you try to alter it, then the -911 is almost inevitable. Your approach may be unwise.
Bulk delete can be achieved by other means; it depends on what you wish to trade off.
This is a frequently asked question. It's not RDBMS specific either.
Consider doing more research, as this is a widely discussed topic.
Alternative approaches for bulk delete include these:
Batching the logged deletes, committing once per batch, with an adjustable batch size (to ensure you avoid a -964 transaction-log-full situation). This requires programming a loop, and you should consider SET CURRENT LOCK TIMEOUT NOT WAIT along with automatically retrying later any batches that failed (e.g. batches that failed due to locks). This approach yields a slow and gradual removal of rows but increases concurrency: you are trading a long, slow execution for minimal impact on other running jobs. A JDBC sketch of this approach follows this answer.
Creating an identical shadow table into which you insert only the rows that you wish to keep, then using TRUNCATE TABLE ... IMMEDIATE on the target table (this is an unlogged action), and finally restoring the preserved rows from the shadow table into the target table. A less safe variation of this is to export only the rows you want to keep and then import-replace.
Depending on your Db2 licence and the frequency of the purge, migrating the data (or some of the data) into a range-partitioned table and using DETACH may be the better long-term solution.
Refer to the online Db2 Knowledge Center for details of the above suggestions.
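A minimal JDBC sketch of the first alternative (batched, committed deletes). The table name purge_target, its id and created columns, the 90-day predicate, and the batch size of 5000 are illustrative assumptions, not taken from the question; -911 retry handling is only hinted at in a comment.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BatchedPurge {
    // Deletes matching rows in small committed batches so locks and log usage stay bounded.
    public static void purge(Connection con) throws SQLException {
        con.setAutoCommit(false);
        String pick = "select id from purge_target where created < current date - 90 days "
                    + "fetch first 5000 rows only";
        try (PreparedStatement sel = con.prepareStatement(pick);
             PreparedStatement del = con.prepareStatement("delete from purge_target where id = ?")) {
            while (true) {
                int picked = 0;
                try (ResultSet rs = sel.executeQuery()) {
                    while (rs.next()) {
                        del.setLong(1, rs.getLong(1));
                        del.addBatch();
                        picked++;
                    }
                }
                if (picked == 0) break;   // nothing left to purge
                del.executeBatch();
                con.commit();             // one commit per batch keeps each unit of work small (avoids -964)
                // A production version would also catch -911 here and retry the failed batch later.
            }
        }
    }
}

The commit after every batch is the whole point: it keeps the lock footprint and log usage of each unit of work small, at the cost of a long overall run time.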

Replicate Oracle into HsqlDB (and knowing what the change was)

I am interested in taking an Oracle DB and "replicating" it into HSQLDB - very fast, close to real time - and, hopefully, also being aware of which fields were changed. (I need this in order to reduce query times - I saw that HSQLDB in embedded in-memory mode is much faster than even cached Oracle. However, since Oracle gives me persistence, failover, etc., I still want to keep using it.)
So, I thought about a few possible approaches:
Use a trigger on every relevant table in my Oracle DB. The trigger would write the change to an auxiliary table. Very bad performance and practice, in my opinion.
Periodically select from each table all the latest updates (select * from T where ora_rowscn > ?, where ? is the highest row SCN seen so far). This has the disadvantage of not knowing about deletes (although we could handle deletes some other way), and of having to diff the previous record against the new record to understand the change; the table may have 100 fields with a change in only one. (A polling sketch of this approach follows the list.)
Use Oracle change notifications, available since 10g/11g, over a simple JDBC connection - though this has some limitations, such as the number of changed fields you can retrieve.
Use approach 2 along with querying the sql_text table, in order to see which fields were affected by the latest updates and to diff only those from the last minute. This would actually also help with figuring out deletes.
Use TimesTen instead of HSQLDB, but that costs money.
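A rough sketch of approach 2 (polling ORA_ROWSCN over JDBC). The table and column names are placeholders, and, as noted above, this does not catch deletes by itself.

import java.sql.*;

public class ScnPoller {
    private long lastScn;   // highest ORA_ROWSCN seen so far

    // Pulls rows changed since the last poll from Oracle and upserts them into HSQLDB.
    // Note: ORA_ROWSCN is block-level unless the table was created with ROWDEPENDENCIES,
    // so this may re-copy unchanged rows that share a block with a changed one.
    public void poll(Connection oracle, Connection hsqldb) throws SQLException {
        try (PreparedStatement sel = oracle.prepareStatement(
                 "select id, col1, col2, ora_rowscn from some_table where ora_rowscn > ?");
             PreparedStatement del = hsqldb.prepareStatement("delete from some_table where id = ?");
             PreparedStatement ins = hsqldb.prepareStatement(
                 "insert into some_table (id, col1, col2) values (?, ?, ?)")) {
            sel.setLong(1, lastScn);
            try (ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    lastScn = Math.max(lastScn, rs.getLong("ora_rowscn"));
                    del.setLong(1, rs.getLong("id"));   // delete-then-insert keeps the sketch simple
                    del.executeUpdate();
                    ins.setLong(1, rs.getLong("id"));
                    ins.setString(2, rs.getString("col1"));
                    ins.setString(3, rs.getString("col2"));
                    ins.executeUpdate();
                }
            }
        }
        // Deletes on the Oracle side are invisible to this query and need a separate mechanism.
    }
}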
What do you think? What is the best way?
Thank you
You should explore the existing tools, notably SymmetricDS (http://www.symmetricds.org) and see if they can be configured or modified to support this.
An alternative approach is to write the triggers in HSQLDB to update the Oracle backend when there is a data change.

Accessing database multiple times

I am working on a solution for the problem described below but could not find any best practice or tool for it.
For a batch of requests (say 5000 unique ids and records) received in a web service call, it has to fetch the rows for those unique ids from the database, keep them in a buffer (or cache), and compare them with the records received in the web service. If a particular piece of data (say a column) has changed, it is updated in the table for that unique id, and in turn the child tables of that table are also affected. For example, if someone changes his laptop model number and country, the model number is updated in one table and the country value in another. In this way it keeps accessing multiple tables in a short time. The number of records coming in a web service call might reach 70K in a single call in an hour.
I have no option other than implementing it in Java. Is there any good practice for implementing this, or can it be achieved using any open source Java tools? Please suggest. Thanks.
Hibernate is likely the first thing you should try. I tend to avoid it because it is overkill for most of my applications, but it is a standard tool for accessing databases that anyone who knows Java should at least have an understanding of. There are dozens of other solutions you could use, but Hibernate is the most widely used.
JDBC is the API to use to access relational databases. Useful performance and security tips:
use prepared statements
use where ... in (...) queries to load many rows at once, but beware of the limit on the number of values in the in clause (1000 max in Oracle)
use batched statements to make your updates, rather than executing each update separately (see http://download.oracle.com/javase/1.3/docs/guide/jdbc/spec2/jdbc2.1.frame6.html); a sketch combining these tips follows below
See http://download.oracle.com/javase/tutorial/jdbc/ for a tutorial on JDBC.
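A hedged sketch combining the tips above (prepared statements, a where ... in (...) fetch, and batched updates). The laptop table, the id and model columns, and the chunk size of 1000 are illustrative assumptions, not from the question.

import java.sql.*;
import java.util.*;

public class RecordSyncDao {
    // Loads the existing rows for one chunk of ids in a single round trip.
    public Map<Long, String> loadModels(Connection con, List<Long> ids) throws SQLException {
        // Build "?,?,...,?" for this chunk; keep chunks <= 1000 to respect Oracle's IN-list limit.
        String placeholders = String.join(",", Collections.nCopies(ids.size(), "?"));
        String sql = "select id, model from laptop where id in (" + placeholders + ")";
        Map<Long, String> result = new HashMap<>();
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (int i = 0; i < ids.size(); i++) ps.setLong(i + 1, ids.get(i));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) result.put(rs.getLong("id"), rs.getString("model"));
            }
        }
        return result;
    }

    // Applies only the changed rows as one JDBC batch instead of one statement per row.
    public void updateModels(Connection con, Map<Long, String> changed) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement("update laptop set model = ? where id = ?")) {
            for (Map.Entry<Long, String> e : changed.entrySet()) {
                ps.setString(1, e.getValue());
                ps.setLong(2, e.getKey());
                ps.addBatch();
            }
            ps.executeBatch();   // one round trip for the whole chunk
        }
    }
}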
This doesn't sound that complicated. Of course, you must know (or learn):
SQL
JDBC
Then you can go through the web service data record by record and for each record do the following:
fetch corresponding database record
for each field in record
if updated
execute corresponding update SQL statement
commit // every so many records
70K records per hour should not be the slightest problem for a decent RDBMS.

Way to know table is modified

There are two different processes developed in Java running independently. If either process modifies a table, can I get any notification that the table was modified?
My objective is to keep an object always in sync with a table in the database: if any modification happens to the table, I want to update the object.
If the table is modified, can I get any notification of this? Do databases provide any facility like this?
We use SQL Server and have certain triggers that fire when a table is modified and call an external binary. The binary we call sends a Tib rendezvous message to notify other applications that the table has been updated.
However, I'm not a huge fan of this solution - Much better to control writing to your table through one "custodian" process and have other applications delegate to that. To enforce this you could change permissions on your table so that only your custodian process can write to the database.
The other advantage of this approach is being able to provide a caching layer within your custodian process to cater for common access patterns. Granted that a DBMS performs caching anyway, but by offering it at the application layer you will have more control / visibility over it.
No, databases don't provide these services. You have to query periodically to check for modifications, or use some JMS solution to send notifications from one app to the other.
You could add a timestamp column (last_modified) to the tables and check it periodically for updates, or use sequence/version numbers that are incremented on each update (similar in concept to optimistic locking).
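A small sketch of the last_modified polling idea; the watched_table name, the column name, and the polling strategy are assumptions for illustration.

import java.sql.*;

public class TablePoller {
    private Timestamp lastSeen = new Timestamp(0);

    // Returns true if any row was modified since the previous call.
    public boolean hasChanges(Connection con) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                 "select max(last_modified) from watched_table");
             ResultSet rs = ps.executeQuery()) {
            if (rs.next() && rs.getTimestamp(1) != null && rs.getTimestamp(1).after(lastSeen)) {
                lastSeen = rs.getTimestamp(1);
                return true;   // caller refreshes its in-memory object
            }
        }
        return false;
    }
}

You would typically schedule hasChanges with a ScheduledExecutorService every few seconds and reload the cached object whenever it returns true.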
You could use jboss cache which provides update mechanisms.
One way you could do this: wrap your database update statement in a method that returns 'true' when it completes successfully, and keep that flag in scope in your code so you can check whether the table has been modified. Why not try something like this?
If you're willing to take the hack approach, and your database stores tables as files (eg, mySQL), you could always have something that can check the modification time of the files on disk, and look to see if it's changed.
Of course, for databases like Oracle, where tables are assigned to tablespaces and it is the tablespaces that have storage on disk, this won't work.
(yes, I know this is a bad approach, that's why I said it's a hack -- but we don't know all of the requirements, and if he needs something quick, without re-writing the whole application, this would technically work for some databases)

Tips on Speeding up JDBC writes?

I am writing a program that does a lot of writes to a Postgres database. In a typical scenario I would be writing say 100,000 rows to a table that's well normalized (three foreign integer keys, the combination of which is the primary key and the index of the table). I am using PreparedStatements and executeBatch(), yet I can only manage to push in say 100k rows in about 70 seconds on my laptop, when the embedded database we're replacing (which has the same foreign key constraints and indices) does it in 10.
I am new to JDBC and I don't expect it to beat a custom embedded DB, but I was hoping it would be only 2-3x slower, not 7x. Anything obvious that I may be missing? Does the order of the writes matter (i.e. if it's not the order of the index)? Anything to look at to squeeze out a bit more speed?
This is an issue that I have had to deal with often on my current project. For our application, insert speed is a critical bottleneck. However, we have found that for the vast majority of database users, select speed is the chief bottleneck, so you will find more resources dealing with that issue.
So here are a few solutions that we have come up with:
First, all solutions involve using the Postgres COPY command. Using COPY to import data into Postgres is by far the quickest method available. However, the JDBC driver by default does not currently support COPY across the network socket. So, if you want to use it you will need to do one of two workarounds:
A JDBC driver patched to support COPY, such as this one.
If the data you are inserting and the database are on the same physical machine, you can write the data out to a file on the filesystem and then use the COPY command to import the data in bulk.
Other options for increasing speed are using JNI to hit the Postgres API so you can talk over the Unix socket, removing indexes, and the pg_bulkload project. However, in the end, if you don't implement COPY you will always find the performance disappointing.
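For reference, a sketch of what the COPY route looks like from Java. This assumes a driver build that exposes PostgreSQL's copy API (org.postgresql.copy.CopyManager, shipped in newer pgJDBC releases, or a patched driver as described above); the table, columns, and CSV payload are placeholders.

import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyLoader {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(args[0])) {   // jdbc:postgresql://host/db
            CopyManager copy = con.unwrap(PGConnection.class).getCopyAPI();
            // Stream rows as CSV straight into the table: far fewer round trips than row-by-row INSERTs.
            String csv = "1,2,3\n4,5,6\n";
            long rows = copy.copyIn("COPY fact_table (a, b, c) FROM STDIN WITH (FORMAT csv)",
                                    new StringReader(csv));
            System.out.println("loaded " + rows + " rows");
        }
    }
}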
Check if your connection is set to autoCommit. If autoCommit is true and you have 100 items in the batch when you call executeBatch, it will issue 100 individual commits. That can be a lot slower than calling executeBatch() followed by a single explicit commit().
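To make the autoCommit point concrete, a small sketch; the fact_table name, its columns, and the flush size of 1000 are placeholders, not details from the question.

import java.sql.*;

public class BatchInsert {
    public static void insertRows(Connection con, int[][] rows) throws SQLException {
        con.setAutoCommit(false);   // without this, each batched statement may be committed individually
        try (PreparedStatement ps = con.prepareStatement(
                "insert into fact_table (k1, k2, k3) values (?, ?, ?)")) {
            int inBatch = 0;
            for (int[] r : rows) {
                ps.setInt(1, r[0]);
                ps.setInt(2, r[1]);
                ps.setInt(3, r[2]);
                ps.addBatch();
                if (++inBatch % 1000 == 0) ps.executeBatch();   // flush every 1000 rows
            }
            ps.executeBatch();    // flush the remainder
            con.commit();         // one explicit commit for the whole load
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}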
I would avoid the temptation to drop indexes or foreign keys during the insert. It puts the table in an unusable state while your load is running, since nobody can query the table while the indexes are gone. Plus, it seems harmless enough, but what do you do when you try to re-enable the constraint and it fails because something you didn't expect to happen has happened? An RDBMS has integrity constraints for a reason, and disabling them even "for a little while" is dangerous.
You can obviously try changing the size of your batch to find the best size for your configuration, but I doubt you will gain a factor of 3.
You could also try to tune your database structure. You might get better performance using a single field as a primary key rather than a composite PK. Depending on the level of integrity you need, you might save quite some time by deactivating integrity checks on your DB.
You might also change the database you are using. MySQL is supposed to be pretty good for high-speed simple inserts ... and I know there is a fork of MySQL around that tries to cut functionality to get very high performance under highly concurrent access.
Good luck !
Try disabling indexes and re-enabling them after the insert. Also, wrap the whole process in a transaction.
