An interruption exception (java.lang.InterruptedException) occurs as I'm trying to perform some simple read (SELECT) operations using C3P0 against a MySQL database. The exception occurs when I increase the number of parallel threads to more than 100 (I have tried with 5, 10, 20, 60 and 100). The statement I execute is as simple as:
SELECT `Model.id` FROM `Model` LIMIT 100;
My connections are pooled from a ComboPooledDataSource which is configured using the following properties (see also the C3P0 manual):
c3p0.jdbcUrl=jdbc:mysql...
c3p0.debugUnreturnedConnectionStackTraces=true
c3p0.maxIdleTime=5
c3p0.maxPoolSize=1000
c3p0.minPoolSize=5
c3p0.initialPoolSize=5
c3p0.acquireIncrement=3
c3p0.acquireRetryAttempts=50
c3p0.numHelperThreads=20
c3p0.checkoutTimeout=0
c3p0.testConnectionOnCheckin=true
c3p0.testConnectionOnCheckout=true
user=***
password=***
The MySQL server on the machine where I run the tests is configured to accept 1024 connections, and the unit tests execute successfully (the data are retrieved from the database as expected). However, in the C3P0 log file I find the following warning:
15:36:11,449 WARN BasicResourcePool:1876 - com.mchange.v2.resourcepool.BasicResourcePool#9ba6076 -- Thread unexpectedly interrupted while performing an acquisition attempt.
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1805)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
I'd like to know, first, the reason for that warning and, second, its possible impact on the software's robustness and stability. Note that after use I close the result set, the SQL statement and the connection. Finally, once the test is over, I close the pool by calling ComboPooledDataSource#close(). What is weirder (and seems to reveal a synchronization problem) is that if I give the pool enough time before closing it, using the following...
Thread.sleep(10000); // wait for some time
datasource.close();
No warnings will appear in the logs! Do you think this raises a thread safety issue for C3P0, or am I doing something the wrong way?
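For completeness, here is a minimal sketch of the read each worker thread performs, matching the query and the cleanup described above (the java.sql imports are assumed, and "datasource" stands for the pooled data source configured earlier):
try (Connection con = datasource.getConnection();
     PreparedStatement ps = con.prepareStatement("SELECT `Model.id` FROM `Model` LIMIT 100");
     ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        rs.getLong(1); // consume the ids
    }
} // result set, statement and connection are all closed here; the connection returns to the pool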
Update 1:
Let me mention that removing the Thread.sleep(10000), apart from what I already mentioned, causes the following to be logged in the MySQL log file:
110221 14:57:13 [Warning] Aborted connection 9762 to db: 'myDatabase' user: 'root'
host: 'localhost' (Got an error reading communication packets)
Might shed some more light...
Update 2:
Here is my MySQL server configuration. The maximum number of connections allowed by the server is set to 1024 (as I mentioned above), which is adequate for what I'm trying to do.
[mysqld]
max_allowed_packet = 64M
thread_concurrency = 8
thread_cache_size = 8
thread_stack = 192K
query_cache_size = 0
query_cache_type = 0
max_connections = 1024
back_log = 50
innodb_thread_concurrency = 6
innodb_lock_wait_timeout = 120
log_warnings
To remove any doubt, I verified that the maximum number of connections is properly set:
show global variables where Variable_name='max_connections';
+-----------------+-------+
| Variable_name | Value |
+-----------------+-------+
| max_connections | 1024 |
+-----------------+-------+
1 row in set (0.00 sec)
That warning comes from around line 2007 here.
It seems to be a thread stuck trying to acquire a connection.
Perhaps the pool is set up to acquire more connections than your MySQL server is configured to handle. That would make sense, as the default max_connections is 100 (or 151, depending on your MySQL version).
So the thread trying to acquire a connection goes into a sleep()/retry loop; however, you close the whole pool while it is inside that loop, so the thread gets interrupted so that all resources can be reclaimed when the pool is closed.
So far it seems no harm is done: your code likely returns connections to the pool when you are done with them, leaving them idle for others to use, and all your queries get through.
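One common alternative to the fixed Thread.sleep(10000) is to wait for the test's worker threads to finish before closing the pool. A minimal sketch, assuming the workers run in a java.util.concurrent.ExecutorService named "workers" (the question does not show its test harness, so the names here are hypothetical):
workers.shutdown();                                              // stop submitting new queries
if (!workers.awaitTermination(60, java.util.concurrent.TimeUnit.SECONDS)) {
    workers.shutdownNow();                                       // give up after a minute
}
datasource.close();                                              // close the pool only after all checkouts have finished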
Perhaps the InterruptedException is normal: some c3p0 threads are waiting for a connection, and when you call close() those threads get interrupted. Although, given your setup (100 clients, 1000 server connections), it is not obvious why anything would need to wait for a resource.
If you are really interested, the most reliable approach would be to look at the c3p0 logs and, perhaps, add some more logging and recompile...
I just ran into this problem. Here was my setting for the DataSource:
[java:comp/env/jdbc/pooledDS] = [com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 2siwtu8o4m410i1l4tkxb|187c55c, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> null, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 2siwtu8o4m410i1l4tkxb|187c55c, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> null, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 15, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 3, numHelperThreads -> 3, numThreadsAwaitingCheckoutDefaultUser -> 0, preferredTestQuery -> null, properties -> {}, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false ]]
and here is the fixed configuration:
[java:comp/env/jdbc/pooledDS] = [com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 2siwtu8o4m5kux117kgtx|13e754f, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> oracle.jdbc.driver.OracleDriver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 2siwtu8o4m5kux117kgtx|13e754f, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:oracle:thin:#localhost:1521:oracle, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 15, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 3, numHelperThreads -> 3, numThreadsAwaitingCheckoutDefaultUser -> 0, preferredTestQuery -> null, properties -> {user=******, password=******}, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false ]]
So not everything was set correctly. More concretely, once I called setDriverClass and setJdbcUrl to correct the null values, the InterruptedException was eliminated.
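For illustration, the fix amounts to setting those two properties explicitly before handing out connections; a minimal sketch mirroring the corrected dump above (the Oracle URL is the placeholder value from that dump, not a real one):
ComboPooledDataSource ds = new ComboPooledDataSource();
try {
    ds.setDriverClass("oracle.jdbc.driver.OracleDriver");    // was null in the broken configuration
} catch (java.beans.PropertyVetoException e) {
    throw new RuntimeException(e);
}
ds.setJdbcUrl("jdbc:oracle:thin:@localhost:1521:oracle");     // was null in the broken configuration
ds.setUser("******");
ds.setPassword("******");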
Related
I am using Java 8, Hibernate 4.3.11, c3p0 9.2.1 and the standard Java logging package, and am having trouble getting the debug information from c3p0 written to my debug log.
I added
-Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog
to the startup options, and this gets c3p0 to use standard logging and write to the console, but it doesn't write to my debug log file.
I initialize loggers for my application and the libraries it uses
SongKong.ioLogger = Logger.getLogger("org.jaudiotagger");
MainWindow.logger = Logger.getLogger("com.jthink");
and then call my LogProperties class to configure the log files and console and write the data, and this works.
What am I doing wrong?
package com.jthink.songkong.logging;
import com.jthink.songkong.cmdline.SongKong;
import com.jthink.songkong.preferences.GeneralPreferences;
import com.jthink.songkong.preferences.UserPreferences;
import com.jthink.songkong.ui.MainWindow;
import com.jthink.songkong.util.Platform;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.logging.ConsoleHandler;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* This defines the command line properties of SongKong, currently consists of logger settings
*/
public final class LogProperties
{
public static int LOG_SIZE_IN_BYTES = 10000000;
public LogProperties()
{
try
{
//Set logging for jaudiotagger lib, user configurable
SongKong.ioLogger.setLevel(Level.parse(String.valueOf(GeneralPreferences.getInstance().getIoDebugLevel())));
SongKong.ioLogger.setUseParentHandlers(false);
//Set logging for songkongdebug, user configurable
MainWindow.logger.setLevel(Level.parse(String.valueOf(GeneralPreferences.getInstance().getDebugLevel())));
MainWindow.logger.setUseParentHandlers(false);
//C3p0 Logger
Logger c3p0Logger = Logger.getLogger("com.mchange.v2.c3p0");
c3p0Logger.setLevel(Level.FINEST);
c3p0Logger.setUseParentHandlers(false);
//Set Filehandler used for writing to debug log
String logFileName = Platform.getPlatformLogFolderInLogfileFormat() + "songkong_debug%u-%g.log";
FileHandler fe = new FileHandler(logFileName, LOG_SIZE_IN_BYTES, 10, true);
fe.setEncoding(StandardCharsets.UTF_8.name());
fe.setFormatter(new com.jthink.songkong.logging.LogFormatter());
fe.setLevel(Level.FINEST);
//Write output from these loggers to the debug log file
MainWindow.logger.addHandler(fe);
SongKong.ioLogger.addHandler(fe);
c3p0Logger.addHandler(fe);
ConsoleHandler ch = new ConsoleHandler();
ch.setFormatter(new com.jthink.songkong.logging.LogFormatter());
ch.setLevel(Level.FINEST);
MainWindow.logger.addHandler(ch);
SongKong.ioLogger.addHandler(ch);
c3p0Logger.addHandler(ch);
}
catch (IOException ioe)
{
MainWindow.userInfoLogger.severe("Unable to open log file");
}
}
}
I need the debugging to get written to the log file because I want a customer to run some tests, so it is no good if the data is just written to the console. Also, the format of the c3p0 data written to the console is not the format of my other messages (as defined by com.jthink.songkong.logging.LogFormatter()), so it seems that my call to LogProperties() is effectively being ignored even though it is called before I access c3p0 for the first time.
e.g. this is the output to the console at startup:
debuglogfile is:C:\Users\Paul\AppData\Roaming\SongKong\Logs/songkong_debug%u-%g.log
userlogfile is:C:\Users\Paul\AppData\Roaming\SongKong\Logs/songkong_user%u-%g.log
23/08/2019 10.44.26:BST:SongKong:setLocale:SEVERE: Locale is:en
23/08/2019 10.44.27:BST:SongKong:setFonts:WARNING: Fonts Enabled:true
23/08/2019 10.44.27:BST:SongKong:setFonts:WARNING: Fonts configured successfully
23/08/2019 10.44.27:BST:SongKong:init:WARNING: end
23/08/2019 10.44.27:BST:SongKong:finish:WARNING: finish
23/08/2019 10.44.29:BST:SongKong:writeSystemInfo:WARNING: SongKong 6.3 Psychocandy 1099 24/07/2019 using Java 1.8.0_181 25.181-b13 64bit on Windows 10 10.0 amd64 initialized successfully
23/08/2019 10.44.29:BST:SongKong:writeSystemInfo:WARNING: No of CPUs:8
23/08/2019 10.44.29:BST:SongKong:writeSystemInfo:WARNING: SongKong has been configured with minimum heap memory of 100 mb, maximum heap memory of 1,778 mb and maximum permanent memory of -32 mb
23/08/2019 10.44.29:BST:SongKong:writeSystemInfo:WARNING: Total Computer Memory is 24,466 mb
23/08/2019 10.44.30:BST:SongKong:writeSystemInfo:WARNING: Username:Paul:Domain:pclaptop:RunningAsAdmin:false
23/08/2019 10.44.30:BST:SongKong:checkDatabase:WARNING: Setting Db Folder:C:\Users\Paul\AppData\Roaming\SongKong/Database
23/08/2019 10.44.30:BST:SongKong:checkDatabase:WARNING: Lock File remaining from previous, deleting lock
23/08/2019 10.44.30:BST:HibernateUtil:createFactory:SEVERE: ----Initilizing Hibernate Session factory
Aug 23, 2019 10:44:31 AM com.mchange.v2.log.MLog <clinit>
INFO: MLog clients using java 1.4+ standard logging.
Aug 23, 2019 10:44:32 AM com.mchange.v2.c3p0.C3P0Registry banner
INFO: Initializing c3p0-0.9.2.1 [built 20-March-2013 10:47:27 +0000; debug? true; trace: 10]
Aug 23, 2019 10:44:32 AM com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource getPoolManager
INFO: Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource#3c73cbbb [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource#adb66302 [ acquireIncrement -> 3, acquireRetryAttempts -> 10, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, debugUnreturnedConnectionStackTraces -> true, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 2rwcn5a41gohnzr1p7tndj|54e1c68b, idleConnectionTestPeriod -> 3000, initialPoolSize -> 1, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 2000, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 5, maxStatements -> 3000, maxStatementsPerConnection -> 50, minPoolSize -> 1, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource#2d7c4b75 [ description -> null, driverClass -> null, factoryClassLocation -> null, identityToken -> 2rwcn5a41gohnzr1p7tndj|f736069, jdbcUrl -> jdbc:h2:async:C:\Users\Paul\AppData\Roaming\SongKong/Database/Database;FILE_LOCK=SOCKET;MVCC=TRUE;DB_CLOSE_ON_EXIT=FALSE;CACHE_SIZE=50000;, properties -> {user=******, password=******} ], preferredTestQuery -> null, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 10, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, factoryClassLocation -> null, identityToken -> 2rwcn5a41gohnzr1p7tndj|a38c7fe, numHelperThreads -> 3 ]
23/08/2019 10.44.36:BST:SongKong:checkDatabase:SEVERE: Accessed Database okay
23/08/2019 10.44.36:BST:SongKong:checkCache:WARNING: Checking Cache:C:\Users\Paul\AppData\Roaming\SongKong\Database\EhCache
23/08/2019 10.44.38:BST:SongKong:checkCache:WARNING: Checked Cache:C:\Users\Paul\AppData\Roaming\SongKong\Database\EhCache
23/08/2019 10.44.39:BST:SongKong:setUserAgent:WARNING: start
23/08/2019 10.44.41:BST:AbstractAcoustidQuery:performBasicSubmissionQuery:SEVERE: Posting to url:http://api.acoustid.org/v2/user/lookup?format=xml&client=8XaBELgH&user=7st7qtJpzr
23/08/2019 10.44.42:BST:SongKong:setUserAgent:WARNING: end
23/08/2019 10.44.42:BST:SongKong:finish:WARNING: finish
Loggers are subject to garbage collection. One bug in your code is the following:
Logger c3p0Logger = Logger.getLogger("com.mchange.v2.c3p0");
Remove that line and create a constant:
private static final Logger c3p0Logger = Logger.getLogger("com.mchange.v2.c3p0");
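A minimal sketch of the idea applied to the LogProperties class above: java.util.logging keeps only weak references to loggers, so without a strong reference the configured logger (and the handlers attached to it) can be garbage collected at any time.
public final class LogProperties
{
    // strong static reference: prevents the configured logger from being garbage collected
    private static final Logger c3p0Logger = Logger.getLogger("com.mchange.v2.c3p0");
    public LogProperties()
    {
        c3p0Logger.setLevel(Level.FINEST);
        c3p0Logger.setUseParentHandlers(false);
        // ... attach the FileHandler and ConsoleHandler to c3p0Logger as before ...
    }
}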
The fundamental flaw in my approach was that I was calling my logging code from within the application itself. What I needed to do instead was specify the following property
java.util.logging.config.class
at startup, set to the name of my logging config class, e.g.
-Djava.util.logging.config.class=com.jthink.songkong.logging.LogProperties
I also needed
-Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog
so that c3p0 knew I was using standard logging.
This solved the issue, although not being able to call the class from the code made some logic problematic.
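Putting the two flags together, the application launch then looks roughly like this (the jar name is purely illustrative):
java -Djava.util.logging.config.class=com.jthink.songkong.logging.LogProperties -Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog -jar SongKong.jar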
I am having some issues configuring the c3p0 connection pool for my Hibernate connection.
I've set the wait_timeout on my MySQL server to 180 seconds.
I also have the following parameters set in my hibernate properties file:
properties.put("hibernate.connection.driver_class", JDBC_DRIVER);
properties.put("hibernate.connection.url", JDBC_URL);
properties.put("hibernate.connection.username", JDBC_USER);
properties.put("hibernate.connection.password", JDBC_PASSWORD);
properties.put("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
properties.put("hibernate.show_sql", false);
properties.put("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
properties.put("hibernate.c3p0.min_size", 1);
properties.put("hibernate.c3p0.max_size", 10);
properties.put("hibernate.c3p0.max_statements", 8 );
I have created a c3p0 properties file and added it to my resources folder. The content of this file is below:
c3p0.testConnectionOnCheckout=true
That is all that is left in this particular configuration file, as I read that some properties can only be set in this file. When the application loads and c3p0 is initialized, I get the following log message printed out:
INFO: Initializing c3p0 pool...
com.mchange.v2.c3p0.PoolBackedDataSource#b014587e [
connectionPoolDataSource ->
com.mchange.v2.c3p0.WrapperConnectionPoolDataSource#5e2902db [
acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay
-> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0,
connectionCustomizerClassName -> null, connectionTesterClassName ->
com.mchange.v2.c3p0.impl.DefaultConnectionTester,
debugUnreturnedConnectionStackTraces -> false, factoryClassLocation ->
null, forceIgnoreUnresolvedTransactions -> false, identityToken ->
1hge17r9d37v2wm1n57lrr|ba81af, idleConnectionTestPeriod -> 0,
initialPoolSize -> 1, maxAdministrativeTaskTime -> 0, maxConnectionAge
-> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 10, maxStatements -> 8, maxStatementsPerConnection -> 0, minPoolSize -> 1, nestedDataSource ->
com.mchange.v2.c3p0.DriverManagerDataSource#d4187fd7 [ description ->
null, driverClass -> null, factoryClassLocation -> null, identityToken
-> 1hge17r9d37v2wm1n57lrr|1434751, jdbcUrl -> jdbc:mysql://localhost:3306/whatever?characterEncoding=UTF-8, properties
-> {user=******, password=******} ], preferredTestQuery -> null, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0,
testConnectionOnCheckin -> false, testConnectionOnCheckout -> true,
unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies ->
false; userOverrides: {} ], dataSourceName -> null,
factoryClassLocation -> null, identityToken ->
1hge17r9d37v2wm1n57lrr|2a237a, numHelperThreads -> 3 ]
It appears everything is being set.
I started configuring this connection pool because I was getting a broken pipe issue on the server 8 hours after the first connection was obtained. So, as a test, I set the wait_timeout on the server to 180 seconds as mentioned above, and afterwards I get the following when trying to make a database call:
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
The last packet successfully received from the server was xxx
milliseconds ago. The last packet sent successfully to the server was
xxx milliseconds ago. is longer than the server configured
value of 'wait_timeout'. You should consider either expiring and/or
testing connection validity before use in your application, increasing
the server configured values for client timeouts, or using the
Connector/J connection property 'autoReconnect=true' to avoid this
problem.
Caused by: java.net.SocketException: Broken pipe
I am using the following relevant maven dependencies:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>5.0.0.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-c3p0</artifactId>
<version>5.0.0.Final</version>
</dependency>
What is the easiest way to set up c3p0 to avoid this broken pipe issue?
I've tried the preferredTestQuery = SELECT 1 approach, played around with the numbers and idleConnectionTestPeriod, but nothing seemed to do the trick.
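For what it's worth, the usual pattern is to have the pool test or retire idle connections on a shorter cycle than the server's wait_timeout. A hedged sketch using the same Hibernate properties map as above (the values are only illustrative, chosen for the 180-second wait_timeout used in this test):
properties.put("hibernate.c3p0.idle_test_period", 60); // test idle connections well inside wait_timeout
properties.put("hibernate.c3p0.timeout", 120);         // retire connections idle longer than 120 seconds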
I might be missing something very simple, I could use some guidance.
Thanks,
Peter
I'm using STS tools and the Hibernate + Spring frameworks to build a web application. I get an error on the screen ("Server Tomcat v8.0 Server at localhost was unable to start within 45 seconds") when starting the application on Apache Tomcat.
I have increased the limit to a maximum of 145 seconds, but that didn't work, and I changed the Apache configuration based on things I found on the internet, but it still isn't working.
Need advice.
Thanks.
I faced this issue today.
I started my Hibernate Web App with the option Run On Server.
Logs:
INFO: Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> 1hge0waa6y005ie1a2a65t|3bfc6a5e, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> com.mysql.cj.jdbc.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> 1hge0waa6y005ie1a2a65t|3bfc6a5e, idleConnectionTestPeriod -> 0, initialPoolSize -> 5, jdbcUrl -> jdbc:mysql://localhost:3306/web_customer_tracker?useSSL=false&serverTimezone=UTC, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 30000, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 20, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, numHelperThreads -> 3, preferredTestQuery -> null, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
It was trying to connect to the local MySQL instance, but the MySQL service was stopped.
I started the MySQL service manually, retried starting the server, and it worked.
Try checking that all of your project's dependencies are up and running.
I know it's too late for you, but maybe someone else will find it useful.
Thanks
This is a bit late, but I faced the same problem when running Tomcat 8 in Eclipse.
In my case it was due to a system-wide proxy, and the network settings in Eclipse had been changed.
I fixed it by going to Eclipse preferences -> General -> Network Connections and setting the Active Provider to Manual.
This might be a late answer, but I just fixed it as below:
I am using Tomcat v9.0 and MySQL server. While starting my Spring MVC web app, I faced the same issue.
Last few lines from the log:
Jul 18, 2020 10:57:08 AM com.mchange.v2.c3p0.C3P0Registry
INFO: Initializing c3p0-0.9.5.3 [built 27-January-2019 00:11:37 -0800; debug? true; trace: 10]
Jul 18, 2020 10:57:09 AM org.hibernate.Version logVersion
INFO: HHH000412: Hibernate ORM core version 5.4.18.Final
Jul 18, 2020 10:57:10 AM org.hibernate.annotations.common.reflection.java.JavaReflectionManager
INFO: HCANN000001: Hibernate Commons Annotations {5.1.0.Final}
Jul 18, 2020 10:57:11 AM com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource
INFO: Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> 1b619k3abqqte4y1lsnrqw|4016ccc1, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> com.mysql.cj.jdbc.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> 1b619k3abqqte4y1lsnrqw|4016ccc1, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:mysql://localhost:3306/web_customer_tracker?useSSL=false&serverTimezone=UTC, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 30000, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 20, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, numHelperThreads -> 3, preferredTestQuery -> null, privilegeSpawnedThreads -> false, properties -> {user=, password=}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
Jul 18, 2020 10:57:11 AM com.mchange.v2.resourcepool.BasicResourcePool
WARNING: Bad pool size config, start 3 < min 5. Using 5 as start.
I tried to increase the server timeout in the Tomcat configuration editor. That did not help. I searched the Internet; nothing helped. Then I launched MySQL Workbench and whoa! I found the culprit. It said, "Unable to connect to localhost:3306". Then I realized that the MySQL service might not be running. I started the MySQL service "MySQL80" and then Tomcat started within 45 seconds.
Hope this may help future readers :)
A certain indexed SELECT query against a Postgres database takes a highly variable amount of time - from 50 msecs to multiple seconds, and very occasionally minutes, even under the lightest load.
Our Postgres query log records anything over 10 msecs, but never records any of these. The EXPLAIN output suggests the query isn't particularly efficient; nonetheless, it shouldn't be slow in this tiny database (thousands of records), and we're trusting the Postgres logs.
With our application logs set to report all Hibernate, C3P0, and Spring/Spring Data logging (see version numbers at the end), the evidence suggests this is very much a Hibernate/C3P0 issue; however, all the evidence from the logs suggests the pool size and utilisation are fine for the time being. Unfortunately we cannot drill down any further.
Can you suggest an explanation for the 26 second gap?
10:19:29.149 DEBUG org.hibernate.SQL [I=9534] - select eventrepor0_.consortium_id as consorti1_3_3_, eventrepor0_.customer_resource_id as customer6_3_3_, eventrepor0_.item_type_id as item2_3_3_, eventrepor0_.reporting_date as reportin3_3_3_, eventrepor0_.event_subtype as event4_3_3_, eventrepor0_.event_count as event5_3_3_, customerre1_.id as id1_2_0_, customerre1_.customer_id as customer2_2_0_, customerre1_.resource_id as resource3_2_0_, resource2_.id as id1_8_1_, resource2_.data_type_id as data2_8_1_, resource2_.platform_id as platform5_8_1_, resource2_.prop_id as prop3_8_1_, resource2_.title as title4_8_1_, resource2_1_.doi as doi1_6_1_, resource2_1_.isbn as isbn2_6_1_, resource2_1_.online_issn as online3_6_1_, resource2_1_.print_issn as print4_6_1_, resource2_1_.publisher as publishe5_6_1_, resource2_1_.yop as yop6_6_1_, case when resource2_1_.id is not null then 1 when resource2_.id is not null then 0 end as clazz_1_, platform3_.id as id1_4_2_, platform3_.api_key as api2_4_2_, platform3_.platform_name as platform3_4_2_, hostnames4_.platform_id as platform1_4_5_, hostnames4_.hostname as hostname2_5_5_ from event_report eventrepor0_ inner join customer_resource customerre1_ on eventrepor0_.customer_resource_id=customerre1_.id left outer join resource resource2_ on customerre1_.resource_id=resource2_.id left outer join published_resource resource2_1_ on resource2_.id=resource2_1_.id left outer join platform platform3_ on resource2_.platform_id=platform3_.id left outer join platform_hostnames hostnames4_ on platform3_.id=hostnames4_.platform_id where eventrepor0_.consortium_id=? and eventrepor0_.customer_resource_id=? and eventrepor0_.item_type_id=? and eventrepor0_.reporting_date=? and eventrepor0_.event_subtype=?
10:19:29.149 DEBUG c.m.v.a.ThreadPoolAsynchronousRunner [I=9534] - com.mchange.v2.async.ThreadPoolAsynchronousRunner#4ffa2724: Adding task to queue -- com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StmtAcquireTask#31e6b320
10:19:29.149 DEBUG c.m.v.c3p0.stmt.GooGooStatementCache [I=9534] - CULLING: update event_report set event_count=event_count+1 where customer_resource_id=? and item_type_id=? and event_subtype=? and reporting_date=? and consortium_id=?
10:19:29.149 DEBUG c.m.v.a.ThreadPoolAsynchronousRunner [I=9534] - com.mchange.v2.async.ThreadPoolAsynchronousRunner#4ffa2724: Adding task to queue -- com.mchange.v2.c3p0.stmt.GooGooStatementCache$StatementDestructionManager$1UncheckedStatementCloseTask#20fa1378
10:19:29.149 DEBUG c.m.v.c3p0.stmt.GooGooStatementCache [I=9534] - cxnStmtMgr.statementSet( org.postgresql.jdbc4.Jdbc4Connection#38e040d2 ).size(): 5
10:19:29.150 DEBUG c.m.v.c3p0.stmt.GooGooStatementCache [I=9534] - checkoutStatement: com.mchange.v2.c3p0.stmt.GlobalMaxOnlyStatementCache stats -- total size: 20; checked out: 5; num connections: 6; num keys: 20
10:19:29.150 TRACE o.h.e.j.internal.JdbcCoordinatorImpl [I=9534] - Registering statement [com.mchange.v2.c3p0.impl.NewProxyPreparedStatement#7cc20161]
10:19:29.150 TRACE o.h.e.j.internal.JdbcCoordinatorImpl [I=9534] - Registering last query statement [com.mchange.v2.c3p0.impl.NewProxyPreparedStatement#7cc20161]
10:19:29.150 TRACE o.h.type.descriptor.sql.BasicBinder [I=9534] - binding parameter [1] as [VARCHAR] -
10:19:29.150 TRACE o.h.type.descriptor.sql.BasicBinder [I=9534] - binding parameter [2] as [BIGINT] - 47
10:19:29.150 TRACE org.hibernate.type.EnumType [I=9534] - Binding [SEARCH_REG] to parameter: [3]
10:19:29.150 TRACE o.h.type.descriptor.sql.BasicBinder [I=9534] - binding parameter [4] as [TIMESTAMP] - Tue Jul 16 00:00:00 BST 2013
10:19:29.151 TRACE o.h.type.descriptor.sql.BasicBinder [I=9534] - binding parameter [5] as [VARCHAR] -
10:19:29.151 TRACE org.hibernate.loader.Loader [I=9534] - Bound [6] parameters total
[... massive gap ...]
10:19:55.644 TRACE o.h.e.j.internal.JdbcCoordinatorImpl [I=9534] - Registering result set [com.mchange.v2.c3p0.impl.NewProxyResultSet#fa7b109]
A note on concurrency: there is a great deal of variability even with only one request (50 - 300 msecs end-to-end), but when one user submits a batch of about 100 of these lookups (probably 10-20 running concurrently), there's a high probability that a few will take 5-10 seconds. And yet the C3P0 stats are never any worse than:
com.mchange.v2.c3p0.stmt.GlobalMaxOnlyStatementCache stats -- total size: 20; checked out: 6; num connections: 6; num keys: 20
These are pretty powerful servers, so there's no obvious disk, network, or CPU activity. We use NewRelic to monitor.
Our DataSource setup:
ComboPooledDataSource dataSource = new com.mchange.v2.c3p0.ComboPooledDataSource();
dataSource.setInitialPoolSize(5);
dataSource.setMaxPoolSize(20);
dataSource.setMinPoolSize(5);
dataSource.setMaxStatements(20);
dataSource.setIdleConnectionTestPeriod(3600);
dataSource.setTestConnectionOnCheckin(true);
dataSource.setPreferredTestQuery("select 1");
JPA properties:
props.put("hibernate.dialect", "org.hibernate.dialect.PostgreSQL82Dialect");
props.put("hibernate.show_sql", "false");
props.put("generate_statistics", "false");
props.put("javax.persistence.sharedCache.mode", "ENABLE_SELECTIVE");
props.put("javax.persistence.validation.mode", "NONE");
props.put("hibernate.cache.use_second_level_cache", "false");
props.put("hibernate.cache.region.factory_class", "org.hibernate.cache.impl.NoCachingRegionFactory");
props.put("hibernate.hbm2ddl.auto", "false");
Versions: Postgres 9.1.7 with latest 9.2 JDBC driver; Hibernate 4.2.3.Final; C3P0 0.9.2.1; Spring 3.2.2.RELEASE; Spring Data JPA 1.1.0; Tomcat 7; JDK 1.7
Update - the C3P0 properties we're currently using (after switching to maxStatementsPerConnection):
c.m.v.c.i.AbstractPoolBackedDataSource [] - Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 2s05p58v1s6oref13lw967|538ab4bc, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.postgresql.Driver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 2s05p58v1s6oref13lw967|538ab4bc, idleConnectionTestPeriod -> 1800, initialPoolSize -> 5, jdbcUrl -> jdbc:postgresql://******, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 0, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 20, maxStatements -> 0, maxStatementsPerConnection -> 20, minPoolSize -> 5, numHelperThreads -> 3, preferredTestQuery -> select 1, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> true, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
I can't be certain this is the cause of your issue, but I think there's a pretty good shot!
You have set maxStatements to a value that is way, way too low for the load you are carrying. Try setting maxStatements to zero (turn Statement caching off), or else try setting maxStatementsPerConnection to 20, which I think is what you may have intended. As is, you've set a global max of 20 PreparedStatements to be shared by up to 20 Connections. That's not likely to yield good performance.
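Concretely, against the DataSource setup shown in the question, that would mean replacing the setMaxStatements(20) line with one of these (a sketch, not a tuned recommendation):
dataSource.setMaxStatements(0);                 // disable the global statement cache entirely
// or
dataSource.setMaxStatementsPerConnection(20);   // cache up to 20 statements per connection instead of 20 in total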
I've just created a new virtual machine, which was initially completely clean.
I've installed MySQL, set up the passwords, and now I can access it via phpMyAdmin.
I copied my Java application with the same configuration that is running on my other server, but it seems to have difficulties connecting to the MySQL server.
You can see that MySQL is listening:
# netstat -tap
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 localhost:mysql *:* LISTEN 5307/mysqld
In TOP, I can see these two processes:
5196 root 20 0 3956 632 508 S 0 0.0 0:00.00 mysqld_safe
5307 mysql 20 0 167m 33m 6780 S 0 0.8 0:00.47 mysqld
But once I try to run the application, this is where it ends up when Hibernate tries to connect:
23:46:10,192 INFO org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl:127 - HHH000401: using driver [com.mysql.jdbc.Driver] at URL [jdbc:mysql://localhost/goout2?autoReconnect=true]
23:46:10,192 INFO org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl:132 - HHH000046: Connection properties: {user=goout2, writedelay=0, password=****, autocommit=true, shutdown=true, characterEncoding=UTF-8, charSet=UTF-8, release_mode=auto}
23:46:14,331 WARN org.hibernate.engine.jdbc.internal.JdbcServicesImpl:169 - HHH000342: Could not obtain connection to query metadata : Could not create connection to database server. Attempted reconnect 3 times. Giving up.
23:46:14,340 INFO org.hibernate.dialect.Dialect:122 - HHH000400: Using dialect: org.hibernate.dialect.MySQL5InnoDBDialect
23:46:14,350 INFO org.hibernate.engine.jdbc.internal.LobCreatorBuilder:85 - HHH000422: Disabling contextual LOB creation as connection was null
23:46:14,362 INFO org.hibernate.engine.transaction.internal.TransactionFactoryInitiator:73 - HHH000268: Transaction strategy: org.hibernate.engine.transaction.internal.jdbc.JdbcTransactionFactory
23:46:14,367 INFO org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory:48 - HHH000397: Using ASTQueryTranslatorFactory
23:46:14,944 INFO org.hibernate.tool.hbm2ddl.SchemaUpdate:182 - HHH000228: Running hbm2ddl schema update
23:46:14,944 INFO org.hibernate.tool.hbm2ddl.SchemaUpdate:193 - HHH000102: Fetching database metadata
23:46:18,949 ERROR org.hibernate.tool.hbm2ddl.SchemaUpdate:201 - HHH000319: Could not get database metadata
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up.
With C3P0 it dies like this:
23:58:08,696 INFO com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource:462 - Initializing c3p0 pool... com.mchange.v2.c3p0.PoolBackedDataSource#2e902532 [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource#43a8e988 [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> ahkzhz8n1trd1gkc8ious|37e8cf51, idleConnectionTestPeriod -> 600, initialPoolSize -> 3, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 300, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 20, maxStatements -> 50, maxStatementsPerConnection -> 0, minPoolSize -> 5, nestedDataSource -> com.mchange.v2.c3p0.DriverManagerDataSource#78875ab8 [ description -> null, driverClass -> null, factoryClassLocation -> null, identityToken -> ahkzhz8n1trd1gkc8ious|20053644, jdbcUrl -> jdbc:mysql://localhost:5307/goout2?autoReconnect=true, properties -> {user=******, writedelay=0, password=******, autocommit=true, shutdown=true, characterEncoding=UTF-8, charSet=UTF-8, release_mode=auto} ], preferredTestQuery -> null, propertyCycle -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> null, factoryClassLocation -> null, identityToken -> ahkzhz8n1trd1gkc8ious|347c8e1c, numHelperThreads -> 3 ]
23:58:28,694 WARN com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector:608 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#457e133d -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
23:58:28,697 WARN com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector:624 - com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#457e133d -- APPARENT DEADLOCK!!! Complete Status:
Obviously MySQL does not want to accept connections from my Java application, and the MySQL logs do not show anything. I suppose there is some configuration option I am not aware of.
Thanks.
jdbc:mysql://localhost:5307/goout2?autoReconnect=true
5307 is not the standard mysql port. Have you tried connecting to port 3306? 5307 seems to be the PID (process id) and not the port.
You could also try to connect via 127.0.0.1 and/or your current IP. The output above suggests MySQL is listening, but make sure it is actually listening on the port you are connecting to (telnet localhost 5307, for example); the MySQL server has an option to listen on local sockets only, so make sure it is actually opening up a TCP port.
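In other words, assuming a default MySQL installation listening on 3306, the JDBC URL would be:
jdbc:mysql://localhost:3306/goout2?autoReconnect=true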