Eager Initializing Connection Pool & Custom Timeout Value - java

I'm using the Tomcat JDBC Connection Pool (which is the Spring Boot default) to manage connections to my PostgreSQL cluster, and I just noticed that the pool is created only when the very first query is made. My question is twofold:
Is there any elegant way to force the pool to be created eagerly (i.e., when starting the application)? I believe that executing a simple query upon startup would do the trick, but I'd prefer a more elegant way if one is available.
During one of my tests I used iptables to drop all traffic directed at the PostgreSQL cluster. This caused the first query to last for about 127 seconds before failing with the message Unable to create initial connections of pool. 127 seconds is way too much. Is there any way I can set a lower value for the timeout? I've read the docs but could not conclude much.

Well, for your first question, I can only think of two methods:
The initSQL property of the Tomcat pool, which you already mentioned.
See http://docs.spring.io/spring-boot/docs/current/reference/html/howto-database-initialization.html. Spring offers some mechanisms to initialize the database and fail fast. I think you can use this one.
For your second question: if you choose the second approach for the first question, it will be resolved automatically. But you can always set the
spring.datasource.max-wait
parameter. The Tomcat pool documentation says:
(int) The maximum number of milliseconds that the pool will wait (when there are no available connections) for a connection to be returned before throwing an exception. Default value is 30000 (30 seconds)
but in your case it took 127 seconds, which is strange. (Since iptables silently dropped the traffic, that wait was most likely governed by the OS-level TCP connect timeout while creating the initial connections, not by the pool's max-wait, so a driver-level connect timeout may also be needed.)
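For reference, here is a minimal sketch of what the fail-fast route might look like in application.properties. The property keys are the Spring Boot 1.x names and the PostgreSQL driver's connectTimeout/loginTimeout URL parameters; the host and all values are illustrative assumptions to verify against your versions:

```properties
# Run schema/data scripts at startup; a failure aborts application start.
spring.datasource.initialize=true
spring.datasource.continue-on-error=false

# Tomcat JDBC pool: open connections eagerly and bound the wait for one.
spring.datasource.initial-size=5
spring.datasource.max-wait=5000

# Driver-level timeouts (seconds) so an unreachable host fails fast
# instead of hanging on the OS TCP connect timeout.
spring.datasource.url=jdbc:postgresql://db.example.com:5432/app?connectTimeout=5&loginTimeout=5
```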

Related

Tomcat JDBC Connection Pool removeAbandoned not working

We have an application that's using the Tomcat JDBC Connection Pool, configured along with Spring Boot and Hibernate. The connection pool itself is working well (I've been able to verify that through the JMX MBean the pool provides), but a specific parameter doesn't seem to be working.
According to the Tomcat documentation, if "removeAbandoned" is set to true a connection is considered abandoned and eligible for removal if it has been in use longer than the removeAbandonedTimeout. Thus, I've set "removeAbandoned" to true and "removeAbandonedTimeout" to 20 (seconds). I've been able to verify that the two parameters have been properly set as well using the JMX MBean.
In order to test if the abandoned connections were really being removed, I locked one of my database tables manually and opened pages that accessed such a table on several browser tabs. Each one opened a new connection to my database, as I was able to verify both using the JMX MBean and using show status where `variable_name` = 'Threads_connected';. However, after the 20 seconds had elapsed, none of those connections were removed. I even waited for longer and nothing happened to them until I unlocked the table.
From what I understood from the Tomcat documentation, those connections should all have been eligible for removal since they were both in use and lasted for more than 20 seconds. So what's going on here?
My other parameters are maxActive="75", minIdle="5", maxIdle="5", initialSize="3", validationQuery="SELECT 1" and testWhileIdle="true". I should clarify that all those connections remained active while the table was locked (none became idle again and none were removed from the pool).
EDIT: A correction. Actually, the first connection is being removed, just not the subsequent ones. "logAbandoned" shows all the suspect connections correctly when "suspectTimeout" is set to 20 and "removeAbandoned" is set to false.
Had the same problem. Please ensure you are also setting the following parameters:
removeAbandoned = true
abandonWhenPercentageFull = 0 (0 is the default, but check whether you are setting this value explicitly)
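For reference, in a plain Tomcat context.xml these settings would look roughly like the following, combined with the parameters from the question; the resource name, URL, and credentials are placeholder assumptions:

```xml
<Resource name="jdbc/appDB" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          removeAbandoned="true" removeAbandonedTimeout="20" logAbandoned="true"
          abandonWhenPercentageFull="0"
          maxActive="75" minIdle="5" maxIdle="5" initialSize="3"
          validationQuery="SELECT 1" testWhileIdle="true"
          url="jdbc:mysql://localhost:3306/app"
          username="app" password="secret"/>
```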

Oracle JDBC connection timed out issue

I have a scenario in production for a web app, where when a form is submitted the data gets stored in 3 tables in Oracle DB through JDBC. Sometimes I am seeing connection time out errors in logs while the app is trying to connect to Oracle DB through Java code. This is intermittent.
Below is the exception:
SQL exception while storing data in table
java.sql.SQLRecoverableException: IO Error: Connection timed out
Most of the time the web app is able to connect to the database and insert values, but sometimes I get this timeout error and am unable to insert data. I am not sure why I am getting this intermittent issue. When I checked the connection pool config in my application, I noticed the following:
Pool Size (Maximum number of Connections that this pool can open) : 10
Pool wait (Maximum wait time, in milliseconds, before throwing an Exception if all pooled Connections are in use) : 1000
Since the pool size is just 10, could this connection timeout issue occur when multiple users try to connect to the database?
Also, since the data insertion involves 3 tables, we do the whole insertion over a single connection. We are not opening a separate DB connection for each individual table.
NOTE: This application is deployed on AEM (Content Management system) server and connections pool config is provided by them.
Update: I tried setting the validation query in the connection pool, but I am still getting the connection timeout error. I am not sure whether the connection pool has checked the validation query or not. I have attached the connection pool config above for reference.
I would try two things:
Try setting a validation query so that each time the pool leases a connection, you're sure it's actually available. select 1 from dual should work. With recent JDBC drivers it should not be required, but you might give it a go.
Estimate the concurrency of your form. A 10-connection pool is not too small, depending on the complexity of your work on the DB. Since you're just saving a form, it should not be THAT complex. How many users per day do you expect? And at peak time, how many users do you expect to be using the form at the same time? A 10-connection pool usually leases and returns connections quite fast, so it can handle several transactions per second. If you expect more, increase the size slightly (more than 25-30 connections actually degrades DB performance, as more queries compete for resources there).
If nothing seems to work, it would be good to check what's happening on your DB. If possible, use Enterprise Manager to see whether there are latches while working on those three tables.
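One rough way to put numbers on the concurrency estimate in point 2 is Little's law: connections busy ≈ peak requests per second × seconds each transaction holds a connection. A sketch with made-up inputs (both numbers are assumptions, not figures from the question):

```java
public class PoolSizing {
    public static void main(String[] args) {
        double peakRequestsPerSecond = 20;   // assumption: peak form submissions/s
        double secondsPerTransaction = 0.05; // assumption: ~50 ms to insert 3 rows
        // Little's law: average number of connections in use at peak.
        double needed = peakRequestsPerSecond * secondsPerTransaction;
        System.out.println("estimated connections busy at peak: "
                + (int) Math.ceil(needed));
    }
}
```

With these inputs, roughly one connection is busy at any instant, so a pool of 10 has ample headroom; the estimate only breaks down if transactions block (e.g., on locks) and hold connections much longer.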
I'll answer from a programming point of view. There are multiple possible causes for this problem; each is listed below with an appropriate solution. A connection timeout means a new thread does not get database access within the configured time, which can happen for the following reasons:
Possibility I: Connections are not being closed; there may be a connection leak somewhere in your application.
Solution: Check for such a leak and make sure every connection is closed after use.
Possibility II: A big transaction.
Solution:
i. If these insertions are synchronized, use synchronization very carefully. Apply it at block level, not method level, and keep the synchronized block as small as possible.
With a big synchronized block, a thread gets a connection but other threads sit in a waiting state, because the block takes too long to execute, so their waiting time grows. Suppose 100 users each have a thread for that operation: the first executes and takes a long time while the others wait, so the 80th, 90th, etc. thread may throw a timeout.
So you need to reduce the size of the synchronized block.
ii. Also check whether the transaction is big; if so, try to cut it into smaller ones if possible:
For example, use one small transaction per insertion; the three small transactions together complete the operation.
Possibility III: The pool size is not big enough for the application's load.
Solution: Increase the pool size. (This applies only if you properly close all connections after use.)
You can use the Java ExecutorService in this case: one thread, one connection, all asynchronous. Once a transaction completes, release the connection back to the pool. That way you can get rid of this timeout issue.
If one connection is inserting the data into 3 tables while other threads trying to obtain a connection are waiting, a timeout is bound to happen.
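For Possibility I, the usual fix is try-with-resources, which guarantees the connection goes back to the pool even when an insert throws. A minimal sketch, using a stand-in resource instead of a real DataSource so the close-on-exception behaviour is visible:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LeakFreeInsert {
    // Stand-in for a pooled JDBC Connection; counts how often close() runs.
    static class FakeConnection implements AutoCloseable {
        static final AtomicInteger closed = new AtomicInteger();
        void insert(String table) {
            if (table.equals("bad_table")) {
                throw new RuntimeException("insert failed");
            }
        }
        @Override public void close() { closed.incrementAndGet(); }
    }

    static void saveForm(String table) {
        // try-with-resources closes the connection on both paths.
        try (FakeConnection con = new FakeConnection()) {
            con.insert(table);
        } catch (RuntimeException e) {
            // log the failed insert; the connection is already closed
        }
    }

    public static void main(String[] args) {
        saveForm("orders");     // normal path
        saveForm("bad_table");  // insert throws
        // The connection was closed both times, so nothing leaks.
        System.out.println("closed=" + FakeConnection.closed.get());
    }
}
```

With a real java.sql.Connection obtained from the pool's DataSource, the same try-with-resources shape applies unchanged.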

Speed up a massive number of inserts into multiple databases

I wrote a script that uses MyBatis to execute a massive number of inserts into multiple databases. The previous script didn't use MyBatis and was, more or less, twice as fast (25 minutes for a million records versus 1 hour 10 minutes with MyBatis). I have tried different things, but I don't know exactly how to configure MyBatis to increase its performance. Some specific considerations about my problem and solution:
The databases are in a VPC, so network time is important.
I use Guice to bind the mappers for each database. Connection information is set programmatically. The mappers are then fetched when I need to execute an insert.
The rows that need to be inserted are not sorted, so they are enqueued per database. When a queue reaches a given size, a multi-row insert is executed. Can I do something better with injected mappers?
I use pooled connections. Does this mean that 3 connections are opened when the mapper is first used and then reused? If a given mapper is used only every few minutes, are those idle connections closed?
Sometimes I get this error randomly:
org.apache.ibatis.transaction.TransactionException: Error configuring
AutoCommit. Your driver may not support getAutoCommit() or
setAutoCommit(). Requested setting: false.
Cause:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 4,030,088 milliseconds ago.
The last packet sent successfully to the server was 0 milliseconds ago.
What can I do to increase the performance and to avoid the communication error?
1. It seems that you need to change your connection pool parameters.
A database such as MySQL may close a connection when it has been idle for a certain interval, but the connection pool may not notice, so when your mapper uses such a closed connection, the CommunicationsException occurs.
(1) If you use c3p0, you can set idleConnectionTestPeriod to solve this problem.
(2) Or you can specify JDBC timeout settings (max wait time, idle timeout).
2. A connection pool has minSize and maxSize properties; when the number of idle connections is greater than minSize, the excess connections are closed.
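For reference, the idle-test setting mentioned in (1) might look like this in a c3p0.properties file (the values are illustrative assumptions):

```properties
# Test idle connections every 300 s, well under MySQL's wait_timeout,
# so dead connections are evicted before a mapper can borrow them.
c3p0.idleConnectionTestPeriod=300
c3p0.preferredTestQuery=SELECT 1
# Optionally also verify a connection right before handing it out.
c3p0.testConnectionOnCheckout=true
```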
I think you have to optimize the MySQL configuration for bulk inserts.
It looks like MyBatis tries to set autocommit to false, and that's a good optimization.
I also think it's better to have one mapper instance per database and create 3 data sources with Guice.
Another way is to use sqlimport, but that's a big departure.

Dead Connections Returned to JDBC Connection Pool - Glassfish 3.1.2.2

I'm having an issue with the JDBC connection pool on Glassfish handing out dead database connections. I'm running Glassfish 3.1.2.2 using jconn3 (com.sybase.jdbc3) to connect to Sybase 12.5. Our organization has a nightly reboot process during which the Sybase server is restarted. My issue manifests itself when an attempt is made to use a database connection during the reboot. Here is the order of operations that produces my issue:
Sybase is down for restart.
Connection is requested from the pool.
DB operation fails as expected.
Connection is returned to the pool in a closed state.
Sybase is back up.
Connection is requested from the pool.
DB operation fails due to "Connection is already closed" exception.
Connection is returned to the pool.
I've implemented a database recovery singleton that attempts to recover from this scenario. Any time a database exception occurs, I make a JMX call to pause all queues and execute a flushConnectionPool operation on the JDBC Connection Pool. If the database connection is still not up, the process sets up a timer to retry in 10 minutes. While this process works, it's not without flaws.
I realize there's a setting on the pool to require validation of the database connection prior to handing it out, but I've shied away from that for performance reasons. My process performs approximately 5 million database transactions a day.
My question is, does anyone know of a way to avoid returning a dead connection back to the pool in the first place?
You've pretty well summed up your options. We had that problem, the midnight DB going down. For us, we turned on connection validation, but we don't have your transaction volume.
Glassfish offers a custom validation option, with which a class can be specified to do the validation.
By default, all that the classes provided by Glassfish do (you'll see them offered as options in the console) is run a SQL statement like this:
SELECT 1;
The syntax varies a bit between databases: SQL Server uses '1', whereas Postgres just uses 1. But the intent is the same.
The net is that it will cost you an extra DB hit every time you try to get a connection, but it's a really, really cheap hit. But still, it's a hit.
But you could implement your own version. It could do the check, say, every 10th request, or even less frequently. Roll a random number from 1 to N (N = 10, 20, 100...); if you get a 1, do the select (and fail if it fails), otherwise return "true". At the same time, configure it so that if you do detect an error, the entire pool is purged. Tweak this so you have a good chance of catching the outage when your DB goes down at night (I don't know how busy your system is at night) versus during peak processing.
You could even "lower the odds" during peak processing. "if time between 6am and 6pm then odds = 1000 else odds = 100; if (random(odds) == 1) { do select... }"
A random option removes the need to maintain a thread safe counter.
In the end, it doesn't really matter, you just want a timely note that the DB is down so you can ask GF to abort the pool.
I can definitely see it thrashing a bit at the very beginning as the DB comes up, possibly refreshing the pool more than once, but that should be harmless.
Different ways you could play with that, but that's an avenue to consider.
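The odds-based check above can be sketched in plain Java. The GlassFish wiring (registering this as the pool's custom validation class in the console) is omitted, and the 6am-6pm window and odds values are just the illustrative ones from the answer:

```java
import java.time.LocalTime;
import java.util.concurrent.ThreadLocalRandom;

public class OddsBasedValidation {
    // 1-in-odds chance of running the real "SELECT 1"; odds vary by time of day.
    static int oddsFor(LocalTime now) {
        boolean peak = !now.isBefore(LocalTime.of(6, 0))
                    && now.isBefore(LocalTime.of(18, 0));
        return peak ? 1000 : 100; // validate less often during peak hours
    }

    static boolean shouldValidate(LocalTime now) {
        // Rolling a 1 means: hit the database with the validation query.
        return ThreadLocalRandom.current().nextInt(oddsFor(now)) + 1 == 1;
    }

    public static void main(String[] args) {
        int hits = 0;
        for (int i = 0; i < 100_000; i++) {
            if (shouldValidate(LocalTime.of(3, 0))) hits++; // off-peak: odds = 100
        }
        // With odds = 100 we expect roughly 1% of requests to validate.
        System.out.println(hits > 700 && hits < 1300
                ? "about 1 in 100 validated"
                : "unexpected rate: " + hits);
    }
}
```

Using ThreadLocalRandom rather than a shared counter is what makes the scheme thread-safe for free, as the answer notes.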

DataSource configuration to exclude deadlocks on REQUIRES_NEW methods

While stress testing my JPA-based DAO layer (running 500 simultaneous updates at the same time, each in a separate thread), I encountered the following: the system always got stuck, unable to make any progress.
The problem was, that there were no available connections at some point for any thread, so no running thread could make any progress.
I investigated this for a while, and the root cause was the REQUIRES_NEW annotation on the add method in one of my JPA DAOs.
So the scenario was:
The test starts, acquiring a new Connection from the ConnectionPool to start a transaction.
After some initial phase, I call add on my DAO, causing it to request another Connection from the ConnectionPool, of which there are none, because by that time all the Connections were taken by the parallel running tests.
I tried to play with DataSource configurations
c3p0 gets stuck
DBCP gets stuck
BoneCP gets stuck
MySQLDataSource fails some requests with the error: number of connections exceeded allowed.
Although I solved it by getting rid of REQUIRES_NEW, with which all the DataSources worked perfectly, the best result still seems to be MySQLDataSource's, since it did not get stuck and simply failed :)
So it seems you should not use REQUIRES_NEW at all, if you expect high throughput.
And my question:
Is there a configuration for either DataSources that could prevent this REQUIRES_NEW problem?
I played with the checkout timeout in c3p0, and tests started to fail, as expected.
2 sec - 8 % passed
4 sec - 12 % passed
6 sec - 16 % passed
8 sec - 26 % passed
10 sec - 34 % passed
12 sec - 36 % passed
14/16/18 sec - 40 % passed
This is highly subjective, of course.
MySQLDataSource with a plain configuration passed 20% of the tests.
What about configuring a timeout for obtaining the connection? If the connection can't be obtained in say 2 seconds, the pool will abort and throw an exception.
Also note that REQUIRES is more typical. Often you'd want a call chain to share a transaction instead of starting a new transaction for each new call in a chain.
Probably any of the connection pools can be configured to deal with this in any number of ways. Ultimately, REQUIRES_NEW is probably forcing your app to acquire more than one Connection per client, which multiplies the stressfulness of your stress tests. If pools are hanging, it's probably because they are running out of Connections. If you set a sufficiently large pool size, you might resolve the issue. Alternatively, as Arjan suggests above, you can configure pools to "fail fast" instead of hanging indefinitely when clients have to wait for a Connection. With c3p0, the config param for that would be checkoutTimeout.
Without more information about exactly what's going on when you say a connection pool gets "stuck", this is necessarily guesswork. But under very high concurrent load, with any connection pool, you'll either need to make lots of resources available (a high maxPoolSize plus numHelperThreads in c3p0), kick out excess clients (checkoutTimeout), or let clients endure long (but finite!) wait times.
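As a reference point, the three options in the last paragraph map onto c3p0 settings roughly as follows (the numbers are illustrative assumptions, not tuned recommendations):

```properties
# Option 1: make lots of resources available.
c3p0.maxPoolSize=100
c3p0.numHelperThreads=10

# Option 2: kick out excess clients. Checkout fails with an
# SQLException after waiting 2 s instead of hanging indefinitely.
c3p0.checkoutTimeout=2000

# Option 3 is the default: checkoutTimeout=0 lets clients wait forever.
```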
