What is stored in the JNDI cache? - java

We are using WebSphere 6.1 on Windows, connecting to a DB2 database on a different Windows machine, and we use prepared statements in our application. While tuning a database index (adding a column to the end of the index), we did not see the performance boost we saw on a test database with the same query; after changing the index, the processor on the database server was actually pegged.
Are the prepared statements' query plans actually stored in JNDI? If so, how can they be cleared? If not, how can we clear the cache on the DB2 server?

The execution plans for prepared statements are stored in the DB2 package cache. It's possible that after an index is added, the package cache is still holding on to old access plans that are now sub-optimal.
After adding an index, you will want to issue a RUNSTATS statement on at least that index in order to provide the DB2 optimizer with the information it needs to choose a reasonable access plan.
Once the RUNSTATS statistics exist for the new index, issue a FLUSH PACKAGE CACHE statement to release any access plans that involve the affected table. The downside is that access plans for other dynamic SQL statements will also be evicted, leading to a temporary spike in optimizer activity as each distinct SQL statement is re-optimized and cached.
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0007117.html
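The two maintenance statements above can also be issued from Java. The sketch below assumes a live DB2 connection; the schema and table names are placeholders, not anything from the question. RUNSTATS is a command rather than SQL, so it is invoked through the SYSPROC.ADMIN_CMD stored procedure:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class Db2PlanRefresh {

    // RUNSTATS is a command, not SQL, so it is run through ADMIN_CMD.
    static String runstatsCall(String schema, String table) {
        return "CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE "
                + schema + "." + table + " AND INDEXES ALL')";
    }

    // Evicts cached access plans for dynamic SQL (prepared statements).
    static final String FLUSH_CACHE = "FLUSH PACKAGE CACHE DYNAMIC";

    // Refresh statistics for one table, then flush the package cache so
    // DB2 recompiles access plans against the new statistics.
    static void refresh(Connection conn, String schema, String table)
            throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute(runstatsCall(schema, table));
            st.execute(FLUSH_CACHE);
        }
    }
}
```

Note the ordering matters: flushing the cache before RUNSTATS would just recompile plans against the stale statistics.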

Query plans are normally held in the database by the RDBMS itself, with the exact life cycle being vendor-specific, I'd guess. They are definitely not held in a JNDI registry.
I assume there is a similar volume of data in both databases?
If so, have you looked at the explain plans for both databases and confirmed they match?
If the answer to both these questions is yes, I'm out of ideas and it's time to reboot the database server.

Related

Threads on multiple VMs accessing a table on a single DB instance, causing low performance and occasional exceptions

The application is hosted on multiple virtual machines and the DB is on a single server; all VMs point to a single instance of the DB.
In this architecture, I have a table with very few records, but this table is accessed and updated very heavily by threads running on the VMs. This is causing a performance bottleneck and sometimes record-level exceptions. Database-level locking does not seem to be the best option, as it introduces significant delays in request processing.
Please suggest whether there is any other technique to solve this problem.
A few questions first!
Is your application using connection pooling? If not, please use it. Creating a JDBC connection is expensive!
Is your application read heavy/write heavy?
What kind of storage engine are you using for your MySQL tables, InnoDB or MyISAM? If your application is write-heavy, use InnoDB-based tables, as InnoDB uses row-level locking and will serve concurrent requests better.
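To illustrate why connection pooling helps, here is a toy generic pool built on a BlockingQueue: the expensive objects (in a real application, JDBC Connections) are created once up front and reused. This is a sketch of the idea only; an established pool (DBCP, c3p0, etc.) should be used in practice:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Toy object pool illustrating the idea behind connection pooling.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // pay the creation cost once, up front
        }
    }

    public T borrow() {
        try {
            return idle.take(); // blocks until an object is free
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted waiting for pooled object", e);
        }
    }

    public void release(T obj) {
        idle.add(obj); // hand the object back for reuse
    }
}
```

The point of the blocking `borrow()` is back-pressure: with 250 concurrent users, callers queue for an existing connection instead of each paying the cost of opening a new one.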
One special case - if you are implementing queues on top of database tables, find a database that has a built-in queue operation and use that, or use a reliable messaging service. Building queues on top of databases is typically not efficient. See e.g. http://mikehadlow.blogspot.co.uk/2012/04/database-as-queue-anti-pattern.html
In general, running transactions on databases is slow because at the end of each transaction the database needs to be sure that enough has been written out to disk that, if the system died right now, the changes made by the transaction would be safely preserved. If you don't need this guarantee, you might find it faster to write a single non-database application that does what the database does but doesn't write anything to disk, or that still does database I/O but the minimum possible. Then, instead of all the VMs talking to the database directly, they would all talk to this application.

Postgres Issue Isolation level

A lot of SHOW TRANSACTION ISOLATION LEVEL statements appear in the process list in Postgres 9.0.
What are the reasons for this, and when does it appear? All of them are in the idle state.
How can I disable this?
I assume that with process list you mean the system view pg_stat_activity (accessible in pgAdmin III in the "statistics" tab or with "Tools/Server Status").
Since you say that the connections are idle, the query column does not show an active query, it shows the last query that has been issued in this database connection. I don't know which ORM or connection pooler you are using, but some software in your stack must insert these statements routinely at the end of a database action.
I wouldn't worry about them; these statements are not resource-intensive and probably won't cause you any trouble.
If you really need to get rid of them, figure out which software in your stack causes them and investigate there.

JDBC to the limit

I'm using java.sql.Connection.setAutoCommit(false) and java.sql.PreparedStatement.addBatch() to do some bulk inserts, and I'm wondering how many insert/update statements can be safely executed before a commit. For example, could executing 100,000 inserts before a commit result in a JDBC driver complaint, a memory leak, or something else? I guess there is a limit on how many statements I can execute before a commit; where can I find such information?
There's no limit on the number of DML statements. Every INSERT/UPDATE/DELETE you push to the database is actually tracked in the database only, so there would not be any memory leak like you mentioned. Memory leaks in JDBC are usually related to unclosed result sets or prepared statements.
On the other hand, that many DML operations without a COMMIT can generate a lot of logging in the DB, and this might impact the performance of other operations. When you issue a COMMIT after, say, millions of INSERTs, operations like index maintenance and data replication (if any) put more overhead on the DBMS. Still, these points are completely DBMS-specific; the JDBC driver has nothing to do with them.
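A common middle ground is to commit in chunks rather than accumulating millions of rows in one transaction, which bounds the server's undo/redo logging and the client's batch buffer. The sketch below is one way to do that; the table and column names are made up for illustration, and the batch size of 1000 is an arbitrary choice, not a recommendation:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Chunked batching: execute and commit every BATCH_SIZE rows instead of
// one giant batch at the end.
public class ChunkedInsert {
    static final int BATCH_SIZE = 1000;

    static void insertAll(Connection conn, List<String> names) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO t (name) VALUES (?)")) {
            int pending = 0;
            for (String name : names) {
                ps.setString(1, name);
                ps.addBatch();
                if (++pending == BATCH_SIZE) {
                    ps.executeBatch(); // send the chunk to the server
                    conn.commit();     // bound the transaction's log volume
                    pending = 0;
                }
            }
            if (pending > 0) {         // flush the final partial chunk
                ps.executeBatch();
                conn.commit();
            }
        }
    }

    // How many commits the loop above performs for n rows.
    static int commitCount(int n) {
        return n / BATCH_SIZE + (n % BATCH_SIZE > 0 ? 1 : 0);
    }
}
```

The trade-off is atomicity: if the job fails partway, earlier chunks are already committed, so the caller needs a way to resume or clean up.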

Hibernate Java batch operation deadlock

We have a J2EE application built using Hibernate and Struts, with an RMI-registry-based implementation of the business functionality.
In our application, around 250 concurrent users upload batches containing huge amounts of data into a table named BATCHDET. These batches are first run through 30 validations and then inserted into tables with a parent-child relationship. There are other operations that need heavy processing as well, like printing.
There is one table containing 10 million records which is accessed by all types of transactions, and every process inserts into and updates this table. This table has emerged as the bottleneck. We have added all the required indexes as well.
After 30 minutes of running, the JVM uses all of the allocated 6 GB of RAM and goes into a no-response state. When we tried to find the root cause, we realized there was a lock on the database side and all the update queries related to the BATCHDET table were in a wait state. We tried everything we could, but no luck.
The system runs smoothly with 50 concurrent users but dies with the expected 250. BATCHDET has dependencies on almost every module, and we are not in the mood to rewrite the implementation; could you please provide a quick fix?
We have thread-based transaction demarcation in Hibernate, implemented with HibernateUtil.java. The transaction isolation is read committed. Is there any way to define no locking for all search operations? We have an Oracle 10g RDBMS.
Let me know if you need any other details.
~Amar
"Is there any way to define no locking for all search operations? We have an Oracle 10g RDBMS."
Oracle doesn't lock on selects, so in effect this is already in place.
Oracle also locks at a row level, so you need to stop thinking about the table as a whole and start thinking individual rows.
You need to talk with your DBA. There's a whole bunch of stuff to monitor in Oracle at both the system and session level. The DBA will be able to look at v$session and tell you what the individual sessions are waiting on. There might be locks, it might be a disk bottleneck, it may be index contention, or it may be that the database is sitting there idle and all the inefficiency is in the Java layer.

Long running transactions with Spring and Hibernate?

The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set.
To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction).
This worked fine in my development environment. However, in production I got the following exception:
java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts into or updates temporary tables; all access to non-temporary tables is via selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.
So my first question, is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long running transaction?
If the long running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
I consider keeping a transaction open for an extended time evil. During my career, the definition of "extended" has descended from seconds to milliseconds.
It is an unending source of non-reproducible, head-scratching problems.
I would bite the bullet in this case and keep a 'work log' in software which you can replay in reverse to clean up if the batch fails.
When you say your table is temporary, is it transaction-scoped? That might lead to other transactions (perhaps on a different connection) not being able to see or access it. Perhaps a join involving a real table and a temporary table somehow locks the real table.
Root cause: Have you tried to use the MySQL tools to determine what is locking the connection? It might be something like next-key locking. I don't know the MySQL tools that well, but on Oracle you can see which connections are blocking other connections.
Transaction timeout: You should create a second connection pool/data source with a much longer timeout. Use that connection pool for your long running task. I think your production environment is 'trying' to help you out by detecting stuck connections.
As mentioned by Justin regarding transaction timeouts, I recently faced a problem in which the connection pool (in my case Tomcat DBCP in Tomcat 7) had a setting that was supposed to mark long-running connections as abandoned and then close them. After tweaking those parameters, I could avoid that issue.
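For reference, the abandoned-connection settings in question live on the DBCP Resource definition in Tomcat's context.xml. The attribute names below are the standard DBCP ones; the URL, pool sizes, and timeout values are illustrative only and must be tuned to the actual workload:

```xml
<!-- context.xml: illustrative values, not a recommendation -->
<Resource name="jdbc/myDS" auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://dbhost:3306/mydb"
          maxActive="50" maxIdle="10"
          removeAbandoned="true"
          removeAbandonedTimeout="1800"
          logAbandoned="true"/>
```

removeAbandonedTimeout is in seconds, so for a task that legitimately runs half an hour it must exceed the task's duration, or the long-running work should use a separate pool with removeAbandoned disabled.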
