mysql query cache - java

I have a MySQL database backing an application, and as the number of records has grown, response times have gone up, so I thought of enabling the MySQL query cache on the database.
The problem is that we restart the main machine often, so the query cache is emptied every time. Is there a way to handle this?

If your query times are increasing with the number of records, it's time to evaluate your table indexes. I would suggest enabling the slow query log and running EXPLAIN against the slow queries to figure out where to put indexes. Also, please stop randomly restarting your database and fix the root cause instead.
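To illustrate (a sketch only, not from the original answer; the connection details and the orders table are placeholders), the slow query log can be switched on at runtime and a suspect query checked with EXPLAIN straight from JDBC:

import java.sql.*;

// Sketch: enable the slow query log at runtime and EXPLAIN a suspect query.
// Requires a MySQL account allowed to set global variables; table name is made up.
public class SlowQueryCheck {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/app"; // placeholder connection details
        try (Connection conn = DriverManager.getConnection(url, "root", "secret");
             Statement st = conn.createStatement()) {
            st.execute("SET GLOBAL slow_query_log = 'ON'");
            st.execute("SET GLOBAL long_query_time = 1"); // log anything slower than 1 second
            try (ResultSet rs = st.executeQuery(
                    "EXPLAIN SELECT * FROM orders WHERE customer_id = 42")) {
                while (rs.next()) {
                    // `key` shows which index is used; `rows` is the estimated rows examined
                    System.out.println(rs.getString("table") + " key=" + rs.getString("key")
                            + " rows=" + rs.getLong("rows"));
                }
            }
        }
    }
}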

I think you can try warming up the cache at startup if you don't mind a longer startup time... You can put the queries in a separate file (or create a stored procedure that runs a bunch of SELECTs and just call the SP), and then specify the path to it in the init_file parameter of my.cnf.
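For illustration, that setup could look roughly like this (the path and the warm-up queries are placeholders, not from the answer); the file named in init_file is executed once each time the server starts:

# my.cnf
[mysqld]
init_file = /etc/mysql/warm_cache.sql

-- /etc/mysql/warm_cache.sql: SELECTs you want pre-loaded into the query cache
SELECT COUNT(*) FROM orders;
SELECT * FROM customers WHERE active = 1;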

Related

Postgres vacuum/daemon partially working when issued from JDBC

First of all I know it's odd to rely on a manual vacuum from the application layer, but this is how we decided to run it.
I have the following stack :
HikariCP
JDBC
Postgres 11 in AWS
Now here is the problem. When we start fresh with brand-new tables and autovacuum=off, the manual vacuum works fine: I can see the number of dead tuples grow up to the threshold and then drop back to 0. The tables are being updated heavily from parallel connections (HOT is being used as well). At some point the number of dead rows sits around 100k, jumps up to the threshold, and falls back to 100k, and n_dead_tuples slowly creeps upward.
The worst part is that when you issue VACUUM from a psql console ALL the dead tuples are cleaned, but oddly enough, when the application issues VACUUM it completes successfully yet only cleans roughly a "threshold amount" of records, not all of them.
Now I am pretty sure about the following:
Analyze is not running, nor auto-vacuum
There are no long running transactions
No replication is going on
These tables are "private"
What is the difference between issuing a VACUUM from the console with auto-commit on vs. from JDBC? Why does the VACUUM issued from the console clean ALL the tuples, whereas the VACUUM issued from JDBC cleans them only partially?
The JDBC vacuum is run on a fresh connection from the pool with the default isolation level; yes, there are updates going on in parallel, but that is also the case when the vacuum is executed from the console.
Is the connection from the pool somehow corrupted and unable to see the updates? Is the isolation level the problem?
Visibility Map corruption?
Index referencing old tuples?
Side note: I have observed the same behavior with autovacuum on and the cost limit through the roof (4000-8000), threshold at the default + 5%. At first n_dead_tuples stays close to 0 for 4-5 hours... The next day the table is 86 GB with millions of dead tuples. All the other tables are vacuumed and fine...
PS: I will try to log a VACUUM VERBOSE from JDBC.
PS2: Because we are running in AWS, could it be a backup that is causing it to stop cleaning?
PS3: By vacuum I mean a plain VACUUM, not VACUUM FULL. We are not issuing VACUUM FULL.
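For context, this is roughly how the call is issued (a sketch, not the application's actual code; the table name is a placeholder). VACUUM cannot run inside a transaction block, so the borrowed connection must stay in auto-commit mode, and with VERBOSE the per-table details arrive as warnings on the statement:

import java.sql.Connection;
import java.sql.SQLWarning;
import java.sql.Statement;
import javax.sql.DataSource;

// Sketch only. VACUUM must not run inside a transaction block, so auto-commit
// stays on for the connection borrowed from the Hikari pool.
public class ManualVacuum {
    public static void vacuum(DataSource pool) throws Exception {
        try (Connection conn = pool.getConnection();
             Statement st = conn.createStatement()) {
            conn.setAutoCommit(true); // the pool default, made explicit here
            st.execute("VACUUM (VERBOSE) my_schema.my_table"); // placeholder table name
            // VERBOSE output is delivered as SQLWarning notices on the statement
            for (SQLWarning w = st.getWarnings(); w != null; w = w.getNextWarning()) {
                System.out.println(w.getMessage());
            }
        }
    }
}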
The main problem was that the vacuum was being run as a different user. The vacuuming I was seeing was actually the HOT updates plus the selects running over that data, which result in on-the-fly cleanup of the page.
Next: vacuuming is affected by long-running transactions ACROSS ALL schemas and tables. Yes, ALL schemas and tables. Switching to the correct user fixed the vacuum, but it will still be held back if there is an open transaction against any other schema.table.
Increasing maintenance_work_mem helps, but in the end, when the system is under heavy load, all vacuuming stalls.
So we upgraded the DB's resources a bit and added a monitor to alert us if there are any issues.
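For anyone building a similar monitor, a pg_stat_activity check along these lines surfaces the long-running transactions that hold vacuum back (a sketch, not the monitor we actually deployed; the five-minute threshold is arbitrary):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: list sessions whose transaction has been open longer than five minutes.
// Any such transaction, in any schema, limits what VACUUM is allowed to clean.
public class VacuumBlockerMonitor {
    public static void printOldTransactions(Connection conn) throws SQLException {
        String sql = "SELECT pid, usename, state, now() - xact_start AS xact_age, query "
                   + "FROM pg_stat_activity "
                   + "WHERE xact_start IS NOT NULL AND now() - xact_start > interval '5 minutes' "
                   + "ORDER BY xact_start";
        try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println("pid=" + rs.getLong("pid")
                        + " user=" + rs.getString("usename")
                        + " state=" + rs.getString("state")
                        + " age=" + rs.getString("xact_age")
                        + " query=" + rs.getString("query"));
            }
        }
    }
}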

Conditional execution based on database load (Oracle database)

We have a situation where we have to run a lengthy query against the database based on human input. As the input changes, the query has to be run over and over again, and the input may change as often as once per second.
The problem is that we know this will cause a spike in server activity for several seconds, and since it is not critical to have an answer immediately or on every input change, we can afford to skip the query some of the time.
The criterion we would like to use is the current state of the database server: only run the query if the server is under low or medium load, and skip it when the server is under stress.
We use Oracle for this, and so far the only way we have found to do this from Java is to run a known benchmark query against the server and time it, but that itself adds load. So my question: is there any other way, specifically with Oracle, for the Java side of the application to discover the load on the database server?
Depending on how you define "low or medium load state", I'd guess that hitting v$osstat would give you the information you're after. Of course, hitting v$osstat constantly will also add to the load on the server, so you may want to write a job that periodically copies the v$osstat data to a table you control (and can thus index appropriately), so that your application hits that table rather than the dynamic performance view. Depending on the goal (i.e. are you trying to ensure that other users have enough resources, or that your app remains responsive), you may want to use Resource Manager to control resource utilization among users, run the query asynchronously from the application, and/or use some sort of cache in the middle tier to avoid hitting the database every time.
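A minimal sketch of the v$osstat check from the Java side (the LOAD/NUM_CPUS statistics and the 80% threshold are illustrative choices, and the session needs privileges on the view):

import java.sql.*;

// Sketch: gate the heavy query on OS load as reported by Oracle's v$osstat.
// The stat names queried and the threshold below are illustrative, not prescriptive.
public class LoadGate {
    public static boolean serverIsBusy(Connection conn) throws SQLException {
        double load = 0, cpus = 1;
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT stat_name, value FROM v$osstat WHERE stat_name IN ('LOAD', 'NUM_CPUS')");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                if ("LOAD".equals(rs.getString(1))) load = rs.getDouble(2);
                if ("NUM_CPUS".equals(rs.getString(1))) cpus = rs.getDouble(2);
            }
        }
        return load / cpus > 0.8; // treat >80% of CPUs in the run queue as "stressed"
    }
}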

Log long-running queries against a MySQL database?

We have a web application (Tomcat/Spring/Hibernate) running against a MySQL database. Every once in a while, the application runs a data-driven query that takes a huge amount of time to complete. Right now we have no way to track these without logging ALL queries, which would be a huge number (it's a very busy app). The only way we can identify a query is if it actually times out, in which case we get an org.apache.tomcat.jdbc.pool.ConnectionPool abandon warning.
Is there some way in Tomcat, Spring or Hibernate to track only queries that take over a certain time to execute?
MySQL has a slow query log. Enable that if it isn't already.
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
The Hibernate SessionFactory has a getStatistics() method that exposes all kinds of statistics; see the Statistics API documentation for details. You may be interested in the stats.getQueryExecutionMaxTime() method.
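A minimal sketch of that approach (it assumes statistics are enabled, e.g. hibernate.generate_statistics=true, and the 5-second threshold is an arbitrary choice):

import org.hibernate.SessionFactory;
import org.hibernate.stat.QueryStatistics;
import org.hibernate.stat.Statistics;

// Sketch: scan Hibernate's collected statistics and report queries whose slowest
// execution exceeded a threshold. Assumes hibernate.generate_statistics=true.
public class SlowQueryReporter {
    private static final long THRESHOLD_MS = 5000; // arbitrary threshold

    public static void report(SessionFactory sessionFactory) {
        Statistics stats = sessionFactory.getStatistics();
        System.out.println("Slowest query overall: "
                + stats.getQueryExecutionMaxTimeQueryString()
                + " (" + stats.getQueryExecutionMaxTime() + " ms)");
        for (String hql : stats.getQueries()) {
            QueryStatistics qs = stats.getQueryStatistics(hql);
            if (qs.getExecutionMaxTime() > THRESHOLD_MS) {
                System.out.println(hql + " max=" + qs.getExecutionMaxTime()
                        + " ms, executions=" + qs.getExecutionCount());
            }
        }
    }
}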

Configuration Caching (Java / MySQL)

I have a SQL procedure that I call often (around 10-25 times a second). The procedure itself is very well optimized; however, the pre-processing for it requires a configuration chosen based on the parameters (there are a few thousand possible configurations). The configuration is stored in the database as well and changes roughly once a day to once a month.
Currently I have a cache that holds recently used configurations (so I don't have to query the database for the configuration on every call). The cache times configurations out after 30 minutes, so a changed configuration is eventually picked up.
There are a couple problems with this:
1 - If the configuration changes it may take up to 30 minutes to see the change.
2 - If the configurations time out at different times on different running instances, the procedure will be run with different configurations at the same time.
So my question is: Is there any way to do this better? I don't want to query the database for the configuration every time but I also want to have the configuration updated as soon as it is changed.
One alternative I am considering is versioning the configuration in the database and checking the cached version against the database version on every call. The problem is that this adds another query every time I call the procedure, and I'm not sure what effect that will have on the database load.
Any suggestions are greatly appreciated
How is your cache stored?
Ideally, you would trigger a cache clean-up whenever the configuration is updated.
If the cache is stored in SQL as well, you could add a TRIGGER ON UPDATE to your configuration table.
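Putting the two ideas together (the version column from the question plus a trigger that bumps it on every configuration change), a rough sketch might look like this; the config_version table, column names, and loadConfiguration are placeholders:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the version-check approach, not production code. Every call does one
// cheap single-row read; when the version changes, every instance drops its cache.
public class ConfigCache {
    private final Map<String, Config> cache = new ConcurrentHashMap<>();
    private volatile long cachedVersion = -1;

    public Config get(Connection conn, String key) throws SQLException {
        long dbVersion;
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT version FROM config_version")) {
            rs.next();
            dbVersion = rs.getLong(1); // bumped by a trigger on the configuration table
        }
        if (dbVersion != cachedVersion) {
            cache.clear();
            cachedVersion = dbVersion;
        }
        return cache.computeIfAbsent(key, k -> loadConfiguration(conn, k));
    }

    private Config loadConfiguration(Connection conn, String key) {
        return null; // placeholder: load and pre-process the configuration for these parameters
    }

    static class Config { }
}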

What is stored in the JNDI cache?

We are using WebSphere 6.1 on Windows, connecting to a DB2 database on a different Windows machine. We are using prepared statements in our application. While tuning a database index (adding a column to the end of an index), we did not see the performance boost we saw on a test database with the same query; after changing the index, the processor on the database server was actually pegged.
Are the prepared statements' query plans actually stored in JNDI? If so, how can they be cleared? If not, how can we clear the cache on the DB2 server?
The execution plans for prepared statements are stored in the DB2 package cache. It's possible that after an index is added, the package cache is still holding on to old access plans that are now sub-optimal.
After adding an index, you will want to issue a RUNSTATS statement on at least that index in order to provide the DB2 optimizer with the information it needs to choose a reasonable access plan.
Once the RUNSTATS statistics exist for the new index, issue a FLUSH PACKAGE CACHE statement to release any access plans that involved the affected table. The downside of this is that access plans for other dynamic SQL statements will also be ejected, leading to a temporary uptick in optimizer usage as each distinct SQL statement is optimized and cached.
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0007117.html
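A minimal sketch of those two steps from JDBC (schema, table, and class names are placeholders); RUNSTATS goes through the SYSPROC.ADMIN_CMD procedure because it is a command rather than an SQL statement, while FLUSH PACKAGE CACHE DYNAMIC can be executed directly:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Statement;

// Sketch: refresh statistics for the changed table and its indexes, then flush
// cached dynamic access plans so statements are re-optimized. Names are placeholders.
public class RefreshPlans {
    public static void refresh(Connection conn) throws Exception {
        try (CallableStatement cs = conn.prepareCall("CALL SYSPROC.ADMIN_CMD(?)")) {
            cs.setString(1, "RUNSTATS ON TABLE MYSCHEMA.MYTABLE AND INDEXES ALL");
            cs.execute();
        }
        try (Statement st = conn.createStatement()) {
            st.execute("FLUSH PACKAGE CACHE DYNAMIC");
        }
    }
}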
Query plans are 'normally' held in the database by the RDBMS itself, with the exact life cycle being vendor specific I'd guess. They are definitely not held in a JNDI registry.
I assume there is a similar volume of data in both databases?
If so have you looked at the explain plan for both databases and confirmed they match?
If the answer to both these questions is yes I'm out of ideas and it's time to reboot the database server.
