We have an application storing data in a local H2 database (file mode). Everything works fine except for a single query that is executed on application shutdown. The query is issued to H2 but never returns (no exception is thrown).
As far as I know, this only appears on a single workstation (the feature is still in test, not in production).
When I use that workstation's database on my own workstation, the application stops at the same point, waiting for the query to return. So with this specific database the problem is reproducible.
When I open the database in an external tool (DbVisualizer Pro, if it matters) and issue the same query (specifically, I used EXPLAIN ANALYZE <query> so as not to modify the data in the database), that query runs forever, too. The query looks as follows:
DELETE TOP(1000) FROM my_schema.SOMETABLE ST WHERE ST.someDate < '2019-05-23'
The problem is not directly tied to the date shown, as the same thing happened yesterday (when the date was 2019-05-22).
The strange thing is that when I cancel the query and modify the date, the query works as expected (again with EXPLAIN ANALYZE, so no data is modified). If I then switch back to the original date, it works as well.
After that "trick", when I start the application, the query in question works like a charm. So I guess it must have something to do with the actual state of this particular database.
My question is: how can I find out what's wrong with the database file?
I've already tried this "health check", but it reveals no problems.
Side note: "running forever" here means I killed the application process after waiting some 20 minutes, which should be more than enough time to delete 16 of the 18 entries in that specific table.
First of all, I know it's odd to rely on a manual VACUUM from the application layer, but this is how we decided to run it.
I have the following stack:
HikariCP
JDBC
Postgres 11 in AWS
Now here is the problem. When we start fresh with brand-new tables and autovacuum=off, the manual VACUUM works fine: I can see the number of dead tuples growing up to the threshold and then dropping back to 0. The tables are updated heavily by parallel connections (HOT is being used as well). At some point the number of dead rows hovers around 100k, jumps up to the threshold, and drops back to 100k, and n_dead_tuples slowly creeps up.
Worst of all, when you issue VACUUM from a psql console, ALL the dead tuples are cleaned. Yet when the application issues VACUUM, it completes successfully but only partially cleans a "threshold amount of records", not all of them. Why?
Now I am pretty sure about the following:
Neither ANALYZE nor autovacuum is running
There are no long running transactions
No replication is going on
These tables are "private"
What is the difference between issuing a VACUUM from the console (with auto-commit on) versus from JDBC? Why does the VACUUM issued from the console clean ALL the tuples, whereas the VACUUM from JDBC cleans them only partially?
The JDBC VACUUM is run on a fresh connection from the pool with the default isolation level. Yes, there are updates going on in parallel, but that is equally true when the VACUUM is executed from the console.
Is the connection from the pool somehow corrupted so that it cannot see the updates? Is the isolation level the problem?
Visibility Map corruption?
Index referencing old tuples?
Side note: I have observed the same behavior with autovacuum on, the cost limit through the roof (like 4000-8000), and the default threshold + 5%. At first n_dead_tuples stays close to 0 for some 4-5 hours... The next day the table is 86 GB with millions of dead tuples. All the other tables are vacuumed and OK...
PS: I will try to log a VACUUM VERBOSE from JDBC (see the sketch below).
PS2: Since we are running in AWS, could a backup be causing it to stop cleaning?
PS3: By vacuum I mean plain VACUUM, not VACUUM FULL. We are not issuing VACUUM FULL.
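A minimal sketch of what I have in mind for that logging; the table name is a placeholder, and the PostgreSQL JDBC driver surfaces the server's VERBOSE notices as SQLWarnings on the Statement:

import java.sql.Connection;
import java.sql.SQLWarning;
import java.sql.Statement;
import javax.sql.DataSource;

public class VacuumLogger {
    public static void vacuumVerbose(DataSource pool) throws Exception {
        try (Connection conn = pool.getConnection();
             Statement st = conn.createStatement()) {
            conn.setAutoCommit(true); // VACUUM cannot run inside a transaction block
            st.execute("VACUUM VERBOSE my_schema.my_table");
            for (SQLWarning w = st.getWarnings(); w != null; w = w.getNextWarning()) {
                System.out.println(w.getMessage()); // one line of VERBOSE output each
            }
        }
    }
}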
The main problem was that the VACUUM was run as a different user, who did not own the tables, so PostgreSQL skipped them (with only a warning). The vacuuming I was seeing was the HOT updates plus SELECTs running over that data, resulting in on-the-fly cleanup of individual pages.
Next: vacuuming is affected by long-running transactions ACROSS ALL schemas and tables. Yes, ALL schemas and tables. Changing to the correct user fixed the VACUUM, but it still cannot remove dead tuples while there is an idle-in-transaction session open against any other schema.table.
maintenance_work_mem helps, but in the end, when the system is under heavy load, all vacuuming is paused.
So we upgraded the DB's resources a bit and added a monitor to alert us if there are any issues.
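For anyone wanting the same safeguard, a sketch of a check along those lines, polling pg_stat_activity for sessions holding transactions open (all wiring here is illustrative):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class VacuumBlockerMonitor {
    // Any session with an open transaction, in any schema, prevents VACUUM from
    // removing dead tuples newer than that transaction's snapshot.
    public static void report(Connection conn) throws Exception {
        String sql = "SELECT pid, usename, state, now() - xact_start AS open_for "
                   + "FROM pg_stat_activity WHERE xact_start IS NOT NULL "
                   + "ORDER BY xact_start";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("pid=%d user=%s state=%s open for %s%n",
                    rs.getInt("pid"), rs.getString("usename"),
                    rs.getString("state"), rs.getString("open_for"));
            }
        }
    }
}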
I have a problem with sleeping transaction locks on MS SQL Server 2008.
Sometimes a session is not really sleeping, or a transaction has not completed and there is a real lock, but that is not my case.
We have Tomcat 6 with a Java app on it. When I do any update in Java and then try to select the updated records: no luck.
If I use WITH (NOLOCK) it helps; I see the correctly changed rows, but then, after some period of time, the update is rolled back.
If I do the update through Management Studio, it works fine, with an immediate result.
I have a similar situation in many places in my application.
For example, I have a nightly job that recalculates big chunks of data into bitfields, and at the end it removes the old bitfields and copies the new ones in. But when the job completes, a rollback happens. I tried saving those bitfields manually into temp tables and then replacing the old ones with the new through Studio, and that worked.
All statements and connections are closed; I verified this. I also tried doing all these changes within the scope of one transaction (roughly the pattern sketched below).
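For reference, the shape of that transaction code, reduced to a minimal plain-JDBC sketch; the connection URL, table, and column names here are made up. If autoCommit is off and commit() is never reached, killing the connection rolls everything back, which matches the symptom:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BitfieldJob {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://dbhost;databaseName=mydb", "user", "pass")) {
            conn.setAutoCommit(false);
            try (PreparedStatement del = conn.prepareStatement(
                     "DELETE FROM bitfields WHERE generation = ?");
                 PreparedStatement ins = conn.prepareStatement(
                     "INSERT INTO bitfields (generation, data) VALUES (?, ?)")) {
                del.setInt(1, 1);
                del.executeUpdate();
                ins.setInt(1, 2);
                ins.setBytes(2, new byte[] {1, 2, 3});
                ins.executeUpdate();
                conn.commit(); // without this, the work is lost when the connection dies
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}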
This application ran for a long period of time without any trouble, but now something has happened.
I tried to find the reason using sp_who, sp_who2, and various scripts that show locking queries; I also did extensive monitoring with SQL Server Profiler and searched for a solution on the Internet. All I found was
Error: 1222, Severity: 16, State: 18
which is a result of this problem, not its cause. Maybe I'm moving in the wrong direction. To me it looks as if something changed in the SQL Server configuration and now, for some reason, it holds the connection, and all changes made within its scope, forever. When that connection is killed, everything is rolled back.
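One more check I plan to run: a sketch against the transaction DMVs that SQL Server 2008 provides, to see which sessions are holding transactions open (plain JDBC; the connection setup is omitted):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class OpenTransactionCheck {
    // Lists sessions with an open transaction and when it began; a session that
    // never commits sits here until it is killed (and then rolls back).
    public static void report(Connection conn) throws Exception {
        String sql = "SELECT s.session_id, s.login_name, s.program_name, "
                   + "t.transaction_begin_time "
                   + "FROM sys.dm_tran_active_transactions t "
                   + "JOIN sys.dm_tran_session_transactions x ON x.transaction_id = t.transaction_id "
                   + "JOIN sys.dm_exec_sessions s ON s.session_id = x.session_id "
                   + "ORDER BY t.transaction_begin_time";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("session=%d login=%s program=%s began=%s%n",
                    rs.getInt(1), rs.getString(2), rs.getString(3), rs.getTimestamp(4));
            }
        }
    }
}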
If you have any ideas I would appreciate it.
Thanks in advance for any help.
UPDATE: I've searched and found one article: https://support.microsoft.com/en-us/kb/137983
and it mentions one possibility:
If the Windows NT Server computer has successfully closed the connection, but the client process still exists on the SQL Server as indicated by sp_who, then it may indicate a problem with SQL Server's connection management. In this case, you should work with your primary support provider to resolve this issue.
Maybe this is my case. I will investigate it further.
I have a MySQL database for an application, but as the number of records has grown, response time has gone up, so I thought of enabling the MySQL query cache.
The problem is that we often restart the main machine, so the query cache is emptied every time. Is there a way to handle this problem?
If your query times are increasing with the number of records, it's time to evaluate your table indexes. I would suggest enabling the slow query log and running EXPLAIN against the slow queries to figure out where to put indexes. Also, please stop randomly restarting your database and fix the root cause instead.
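For illustration, a sketch of running EXPLAIN from Java against a query taken from the slow query log (all names are placeholders); in the printed plan, type=ALL together with key=NULL points at a full table scan, i.e. a missing index:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class ExplainRunner {
    // Prints the plan MySQL chooses for a query surfaced by the slow query log.
    public static void explain(Connection conn, String slowQuery) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("EXPLAIN " + slowQuery)) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.append(md.getColumnLabel(i)).append('=')
                       .append(rs.getString(i)).append("  ");
                }
                System.out.println(row); // look for type=ALL and key=NULL
            }
        }
    }
}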
I think you can try warming up the cache at startup if you don't mind a longer startup time... You can put the queries in a separate file (or create a stored procedure that runs a bunch of SELECTs and just call the SP), and then point the init_file parameter in my.cnf at that file.
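If you'd rather keep the warm-up in the application instead of init_file, the same effect can be had by running the hot queries from Java once at startup; a sketch with hypothetical queries (note the query cache matches on exact query text, so warm with the same statements the app will issue):

import java.sql.Connection;
import java.sql.Statement;

public class CacheWarmer {
    // Hypothetical hot queries; these must be byte-for-byte the statements
    // the application issues later, or the cache entries won't be reused.
    private static final String[] HOT_QUERIES = {
        "SELECT * FROM orders WHERE status = 'open'",
        "SELECT name, balance FROM accounts WHERE active = 1"
    };

    public static void warm(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            for (String q : HOT_QUERIES) {
                st.executeQuery(q).close(); // results discarded; cache is now primed
            }
        }
    }
}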
I need to create a Java agent that becomes aware of any update to particular tables in a MySQL or PostgreSQL database and executes its instructions as soon as the update occurs.
Everything needs to happen automatically.
Given that I'm a novice in Java, I was wondering if you could give me any advice.
My options are:
1) Having a trigger that wakes my Java application after a commit (using pg_notify and the like).
2) Having the Java application subscribe to a particular ID in the database (not sure whether this can be done, given that asynchronous updates are not possible and I might need my agent to poll the database every xx seconds for changes).
Thanks!
Yes, a trigger that uses NOTIFY is a good way to do it in PostgreSQL. The important limitation when using the JDBC driver is that there is no way to receive notifications asynchronously; you have to poll. This is usually fine, as the NOTIFY/LISTEN mechanism is very lightweight: if you want to poll 10 (100?) times a second, you can do so without causing performance problems. See http://jdbc.postgresql.org/documentation/83/listennotify.html for more.
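A rough sketch of that polling loop, along the lines of the example in the linked documentation; the channel name, connection details, and the trigger that calls pg_notify('table_changed', ...) are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class UpdateListener {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydb", "user", "password");
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN table_changed");
        }
        PGConnection pgConn = conn.unwrap(PGConnection.class);
        while (true) {
            // Issue a cheap query so the driver reads pending notifications
            // off the socket, then collect them.
            try (Statement st = conn.createStatement()) {
                st.executeQuery("SELECT 1").close();
            }
            PGNotification[] notifications = pgConn.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    System.out.println("Change on channel " + n.getName()
                        + ", payload: " + n.getParameter());
                }
            }
            Thread.sleep(100); // ~10 polls per second
        }
    }
}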
MySQL is a little less helpful; you'll need triggers that INSERT rows into a monitoring table, which you then repeatedly poll with SELECT (followed by a DELETE of what you've processed). This will work, but you are more likely to end up with a latency/performance trade-off.
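A sketch of the polling side, assuming a hypothetical table_changes queue table filled by your triggers:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySqlChangePoller {
    // Assumes triggers fill a queue table such as:
    //   CREATE TABLE table_changes (id BIGINT AUTO_INCREMENT PRIMARY KEY, row_id BIGINT);
    public static void drain(Connection conn) throws Exception {
        conn.setAutoCommit(false);
        long lastId = 0;
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT id, row_id FROM table_changes ORDER BY id FOR UPDATE")) {
            while (rs.next()) {
                lastId = rs.getLong("id");
                System.out.println("row changed: " + rs.getLong("row_id")); // react here
            }
        }
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("DELETE FROM table_changes WHERE id <= " + lastId);
        }
        conn.commit();
    }
}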
I have written a web application for a payroll system that does inserts, updates, and deletes against a MySQL database.
I want to know:
how many transactions happened in the MySQL database?
how many transactions happened in the MySQL database between start_time and end_time?
MySQL has command counters. They can be seen with SHOW GLOBAL STATUS LIKE "Com\_%". Each execution of a command increments the counter associated with it. Transaction-related counters are Com_begin, Com_commit, and Com_rollback. Also, Uptime is the number of seconds since server start. Reading and graphing these values, or their deltas, yields the information you ask for.
There are also counters for Com_insert, Com_update, Com_delete, and variations thereof. You might want to graph these as well.
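A sketch of reading those counters over JDBC; sampling at start_time and end_time and taking the difference answers the second question (counter choice as described above):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class TransactionCounter {
    // Returns Com_commit + Com_rollback, i.e. transactions completed so far.
    public static long completedTransactions(Connection conn) throws Exception {
        long total = 0;
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW GLOBAL STATUS LIKE 'Com\\_%'")) {
            while (rs.next()) {
                String name = rs.getString("Variable_name");
                if (name.equalsIgnoreCase("Com_commit")
                        || name.equalsIgnoreCase("Com_rollback")) {
                    total += rs.getLong("Value");
                }
            }
        }
        return total;
    }
    // Sample at start_time and end_time; the delta is the count for that window.
}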
Not sure if this is the answer you are looking for, but I've heard that the following JDBC logger is very useful for tracking what an application is doing to a database. It shows where your application is opening and committing transactions. You should then be able to write some scripts to process the logs and determine the number of transactions.
http://code.google.com/p/log4jdbc/
It basically sits between your application and the real database driver. You add a log4jdbc prefix to your JDBC URL. For example, if your normal JDBC URL is
jdbc:mysql://db.foo.com/webapplicationdb
then you would change it to:
jdbc:log4jdbc:mysql://db.foo.com/webapplicationdb
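A minimal sketch of the wiring (credentials are placeholders): register the spy driver, connect with the prefixed URL, and the executed SQL then shows up under log4jdbc's loggers such as jdbc.sqlonly:

import java.sql.Connection;
import java.sql.DriverManager;

public class LoggedConnection {
    public static Connection open() throws Exception {
        Class.forName("net.sf.log4jdbc.DriverSpy"); // wraps the real MySQL driver
        return DriverManager.getConnection(
            "jdbc:log4jdbc:mysql://db.foo.com/webapplicationdb", "user", "password");
    }
}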