Can somebody tell me what can cause this?
Server doesn't do anything anymore:
server.network.http-listener-1.thread-pool.currentthreadcount-count = 500
server.network.http-listener-1.thread-pool.currentthreadsbusy-count = 500
Lots of this in the log:
[#|2013-05-06T13:06:07.917+0200|WARNING|glassfish3.0.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=16;_ThreadName=Thread-1;|Interrupting idle Thread: http-thread-pool-8083-(498)|#]
[#|2013-05-06T13:06:09.917+0200|WARNING|glassfish3.0.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=16;_ThreadName=Thread-1;|Interrupting idle Thread: http-thread-pool-8083-(499)|#]
[#|2013-05-06T13:06:10.917+0200|WARNING|glassfish3.0.1|com.sun.grizzly.config.GrizzlyServiceListener|_ThreadID=16;_ThreadName=Thread-1;|Interrupting idle Thread: http-thread-pool-8083-(500)|#]
Normal behaviour:
server.network.http-listener-1.thread-pool.currentthreadcount-count = 427
server.network.http-listener-1.thread-pool.currentthreadsbusy-count = 8
server.network.http-listener-1.connection-queue.countqueued1minuteaverage-count = 184
server.network.http-listener-1.connection-queue.countqueued5minutesaverage-count = 3014
server.network.http-listener-1.connection-queue.countqueued15minutesaverage-count = 10058
If you run out of connections, it may indicate that your connections are not being closed after they've been used. It's hard to give an example without seeing your implementation of opening/closing the database connections, but normally you want to make sure that you close the connection in a finally clause. I'll provide an example below:
try {
    // Some logic that reads/writes to the database
} catch (SQLException e) {
    // Roll back in case something goes wrong
    try {
        System.out.println("Rolling back current db transaction");
        conn.rollback();
    } catch (SQLException e1) {
        System.out.println(e1);
    }
} finally {
    // Close the connection; close() itself can throw, so guard it
    try {
        conn.close();
        System.out.println("DB connection is closed");
    } catch (SQLException e2) {
        System.out.println(e2);
    }
}
That way the database connection will be closed regardless of whether the reading/writing fails or not. If you fail to close these connections correctly, they will remain open (or time out, depending on your settings) and you will eventually run out of connections.
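The close-even-on-failure guarantee can be demonstrated without a real database. Below is a minimal, runnable sketch where a stub AutoCloseable stands in for the JDBC Connection, showing that try-with-resources (Java 7+) gives the same guarantee as the finally block above: the resource is closed even when the body throws.

```java
// Sketch: try-with-resources closes the resource even on failure.
// TrackingResource is a hypothetical stand-in for a JDBC Connection.
public class CloseDemo {
    static class TrackingResource implements AutoCloseable {
        boolean closed = false;

        @Override
        public void close() {
            closed = true;
        }
    }

    // Simulates a DB operation that fails mid-"transaction";
    // returns whether the resource was still closed afterwards.
    static boolean closesOnFailure() {
        TrackingResource tracked = new TrackingResource();
        try (TrackingResource res = tracked) {
            throw new RuntimeException("simulated failure mid-transaction");
        } catch (RuntimeException e) {
            // a real JDBC version would roll back here
        }
        return tracked.closed;
    }

    public static void main(String[] args) {
        System.out.println("closed after failure: " + closesOnFailure());
        // prints: closed after failure: true
    }
}
```

The same shape applies to a real Connection: put it in the try-with-resources header and the rollback in the catch, and the close happens automatically.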
We use a connection pool in our application, and I understand that we should get and close connections as needed since we are using a pool. I implemented a cache update mechanism that receives Postgres LISTEN notifications. The code is pretty much the canonical example given in the documentation.
As you can see in the code, the query is issued in the constructor and the connection is reused. This can pose a problem if the connection is closed out of band for any reason. One solution is to get a connection before every use, but the statement is only executed once in the constructor and I can still receive notifications while polling. So if I get a fresh connection every time, it forces me to reissue the statement on every iteration (after the delay), and I'm not sure whether that's an expensive operation.
What is the middle ground here?
class Listener extends Thread
{
    private final Connection conn;
    private final org.postgresql.PGConnection pgconn;
    private final long delay = 500; // polling interval in ms (assumed; not shown in the original snippet)

    Listener(Connection conn) throws SQLException
    {
        this.conn = conn;
        this.pgconn = conn.unwrap(org.postgresql.PGConnection.class);
        Statement stmt = conn.createStatement();
        stmt.execute("LISTEN mymessage");
        stmt.close();
    }

    public void run()
    {
        try
        {
            while (true)
            {
                org.postgresql.PGNotification[] notifications = pgconn.getNotifications();
                if (notifications != null)
                {
                    for (int i = 0; i < notifications.length; i++) {
                        // use notification
                    }
                }
                Thread.sleep(delay);
            }
        }
        catch (SQLException sqle)
        {
            // handle
        }
        catch (InterruptedException ie)
        {
            // handle
        }
    }
}
In addition to this, there is another, similar document that has a second query in the run method, on top of the one in the constructor. I'm wondering if someone could enlighten me on the purpose of that extra query.
public void run() {
    while (true) {
        try {
            // this query is additional to the one in the constructor
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT 1");
            rs.close();
            stmt.close();

            org.postgresql.PGNotification[] notifications = pgconn.getNotifications();
            if (notifications != null) {
                for (int i = 0; i < notifications.length; i++) {
                    System.out.println("Got notification: " + notifications[i].getName());
                }
            }

            // wait a while before checking again for new notifications
            Thread.sleep(delay);
        } catch (SQLException sqle) {
            // handle
        } catch (InterruptedException ie) {
            // handle
        }
    }
}
I experimented with closing the connection on every iteration (but without getting another one). That still works; perhaps that's due to the unwrap that was done.
Stack:
Spring Boot, JPA, Hikari, Postgres JDBC Driver (not pgjdbc-ng)
The connection pool is the servant, not the master. Keep the connection for as long as you are using it to LISTEN on, i.e. ideally forever. If the connection ever does close, then you will miss whatever notices were sent while it was closed. So to keep the cache in good shape, you would need to discard the whole thing and start over. Obviously not something you would want to do on a regular basis, or what would be the point of having it in the first place?
The other doc you show is just an ancient version of the first one. The dummy query just before polling is there to poke the underlying socket code to make sure it has absorbed all the messages. This is no longer necessary. I don't know if it ever was necessary, it might have just been some cargo cult that found its way into the docs.
You would probably be better off with the blocking version of this code, by using getNotifications(0) and getting rid of sleep(delay). This will block until a notice becomes available, rather than waking up twice a second and consuming some (small) amount of resources before sleeping again. Also, once a notice does arrive it will be processed almost immediately, instead of waiting for what is left of a half-second timeout to expire (so, on average, about a quarter second).
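The gain from blocking over polling can be sketched without a live Postgres connection. In this stand-in, a BlockingQueue plays the role of the connection's notification stream (the queue and names are illustrative, not pgjdbc API): take() behaves like getNotifications(0), returning as soon as something arrives instead of waking up on a fixed timer.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: blocking wait vs. half-second polling. BlockingQueue.take()
// stands in for pgconn.getNotifications(0) -- it blocks until an item
// is available, so no sleep(delay) is needed and the "notification"
// is handled almost immediately after it arrives.
public class BlockingVsPolling {
    static String waitBlocking(BlockingQueue<String> queue) throws InterruptedException {
        return queue.take(); // blocks until a "notification" is available
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        new Thread(() -> {
            try {
                Thread.sleep(100); // "notification" arrives after 100 ms
                queue.put("mymessage");
            } catch (InterruptedException ignored) { }
        }).start();

        long start = System.nanoTime();
        String n = waitBlocking(queue); // returns right after put(), not on a poll tick
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("got " + n + " after ~" + elapsedMs + " ms");
    }
}
```

With the real driver, the change is just replacing `getNotifications()` plus `Thread.sleep(delay)` with `getNotifications(0)`.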
I have a client that I want to keep trying to connect to a server until a connection is established (i.e. until I start the server).
clientSocket = new Socket();
while (!clientSocket.isConnected()) {
    try {
        clientSocket.connect(new InetSocketAddress(serverAddress, serverPort));
    } catch (IOException e) {
        e.printStackTrace();
    }
    // sleep prevents a billion SocketExceptions from being printed,
    // and hopefully stops the server from thinking it's getting DOS'd
    try {
        Thread.sleep(1500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
After the first attempt, I get a ConnectException; that's expected, since there is nothing to connect to yet. After that, however, I start getting SocketException: Socket closed, which doesn't make sense to me since clientSocket.isClosed() always returns false, both before and after the connect() call.
How should I change my code to get the functionality I need?
You can't reconnect a Socket, even if the connect attempt failed. You have to close it and create a new one.
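A runnable sketch of that fix: the retry loop builds a new Socket for every attempt instead of reusing the one whose connect() failed. The helper name and timeouts are illustrative; a local ServerSocket plays the part of the server being up.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketAddress;

// Sketch: create a NEW Socket per attempt; a Socket whose connect()
// has failed cannot be reused.
public class ReconnectDemo {
    static Socket connectWithRetry(SocketAddress addr, int attempts, long delayMs)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            Socket s = new Socket(); // fresh socket each time around
            try {
                s.connect(addr, 1000);
                return s; // connected successfully
            } catch (IOException e) {
                try { s.close(); } catch (IOException ignored) { }
                Thread.sleep(delayMs); // back off before the next attempt
            }
        }
        return null; // gave up
    }

    public static void main(String[] args) throws Exception {
        // A local ServerSocket stands in for "the server is running".
        try (ServerSocket server = new ServerSocket(0)) {
            SocketAddress addr = new InetSocketAddress("127.0.0.1", server.getLocalPort());
            Socket s = connectWithRetry(addr, 5, 100);
            System.out.println("connected: " + (s != null && s.isConnected()));
            // prints: connected: true
            if (s != null) s.close();
        }
    }
}
```

In the original code, this means moving `new Socket()` inside the while loop and closing the failed socket before sleeping.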
I use a JDBC pool for my Tomcat web applications.
As I read in the docs, I need to get a connection, execute the query and close it.
After close, the connection returns to the pool.
I think Way #1 is better when querying the DB depends on an external event, while Way #2 seems more correct for a task that runs every 5 seconds.
Could someone explain which way to use for a task that repeats every 5 seconds?
PS: I skipped extra checks in the code to keep it readable.
Way #1: get a connection from the pool and close it every 5 sec
Connection c = null;
Statement s = null;
ResultSet rs = null;
DataSource ds = ... Get DataSource ...
while (running) {
    try {
        c = ds.getConnection();
        s = c.createStatement();
        rs = s.executeQuery("SELECT data FROM my_table");
        ... do something with result ...
    } catch (SQLException sec) {
        ... print exception ...
    } finally {
        try {
            rs.close();
            s.close();
            c.close();
        } catch (SQLException sec) { ... print exception ... }
        ... Thread sleep 5 seconds and repeat ...
    }
}
Way #2: get a connection before the loop and close it after, reconnecting inside the loop
Connection c = null;
Statement s = null;
ResultSet rs = null;
DataSource ds = ... Get DataSource ...
c = ds.getConnection();
while (running) {
    try {
        s = c.createStatement();
        rs = s.executeQuery("SELECT data FROM my_table");
        ... do something with result ...
    } catch (SQLException sec) {
        ... print exception ...
        ... if connection lost, try to reconnect and execute the query again ...
    } finally {
        try {
            rs.close();
            s.close();
        } catch (SQLException sec) {
            ... print exception ...
        }
        ... Thread sleep 5 seconds and repeat ...
    }
}
c.close();
Pool config
<Resource name="jdbc/pg_mega" auth="Container"
type="javax.sql.DataSource" driverClassName="org.postgresql.Driver"
url="jdbc:postgresql://127.0.0.1:6432/db"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
username="***" password="****"
defaultAutoCommit="true"
initialSize="1"
maxActive="300"
maxTotal="300"
maxIdle="20"
minIdle="5"
maxWait="10000"
validationQuery="select 1"
validationInterval="30000"
testWhileIdle="false"
testOnBorrow="true"
testOnReturn="false"
timeBetweenEvictionRunsMillis="30000"
minEvictableIdleTimeMillis="30000"
/>
I think the most common pattern is this:
Connection conn = null;
PreparedStatement stmt = null;
ResultSet res = null;
try {
    conn = ds.getConnection();
    stmt = conn.prepareStatement(sqlStatement);
    //
    // ....
    res = stmt.executeQuery();
    // use the resultset
    conn.commit();
} catch (SQLException e) {
    // Manage the exception
    try {
        if (conn != null) // guard: getConnection() itself may have failed
            conn.rollback();
    } catch (SQLException e1) {
        // SWALLOW
    }
} finally {
    close(res);
    close(stmt);
    close(conn);
}
I use these helper functions to safely close resources without too much boilerplate; from Java 7 on you can use try-with-resources, so these helpers are no longer needed.
public static void close(Connection conn) {
    try {
        if (conn != null)
            conn.close();
    } catch (SQLException e) {
        // SWALLOW
    }
}

public static void close(Statement stmt) {
    try {
        if (stmt != null)
            stmt.close();
    } catch (SQLException e) {
        // SWALLOW
    }
}

public static void close(ResultSet res) {
    try {
        if (res != null)
            res.close();
    } catch (SQLException e) {
        // SWALLOW
    }
}
You should really be sure to close the connection in the finally block; bad things happen if you do not close the connection, and if (for example) rs is null in your example (not so difficult), you won't close the connection.
Getting and releasing a connection from the pool is not a performance problem; it happens in microseconds, thousands of times faster than any possible query.
The reason why you do not release the connection eagerly is transactions, you want to keep the same connection for the whole transaction (no way around this).
When you commit (or rollback) then you don't need that peculiar connection anymore, so just release it.
Another hint: close the connection in finally even if you catch SQLExceptions, because there are always RuntimeExceptions and even Errors (which you will not catch); finally is executed even in the face of an OutOfMemoryError, a NoClassDefFoundError or anything else, so the connection will be returned to the pool.
Last but not least, it is the pool that will try to reconnect in case of disconnection; in fact, the pool will just ditch invalid connections and create fresh ones when needed.
You should choose a good policy of connection validation, a bad choice will lead to much extra time in getting the connection thus hitting hard on performance, or to exceptions caused by invalid connection acquired from the pool.
Optimizing the pool is like many other performance-tuning tasks: hard.
For example:
testOnBorrow="true" will hit the DB before handing out a connection; it is safe, but it costs tens or hundreds of times more than not checking on borrow.
testWhileIdle="true" instead is less safe (you could get an invalid connection) but is much faster and has the advantage of keeping connections alive.
You have to choose based on how you use connections, how you deal with errors, where the DB is (on the same machine, on a LAN, on a WAN) and many other factors.
Way #2 is not correct when you use a pool. If you use a pool, you should always try to keep the connection out of the pool ("leased") for as short as possible to get the most out of pool usage. If you do not use a pool, you have to consider the cost of creating and destroying connections and manage the connection's life cycle.
If you use a pool, a leased connection must always be returned to the pool (a connection is returned to the pool when you close the connection). If connections are not returned to the pool (i.e. connections are leaked), the pool will be empty soon and your application will stop working. This is especially important when things go wrong (e.g. in your code example, when rs is null due to a query error, the connection will be leaked). To prevent connections from leaking, consider using a tool like Sql2o which has built-in protection against connection leakage.
Also reduce the number of connections in the pool. Start with minIdle="1" and maxActive="4", and use stress testing to determine the upper limit of the pool size (more connections in a pool usually do more harm than good; see also About Pool Sizing from HikariCP, which has more good articles about database connection pools).
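Applied to the <Resource> shown above, that advice would look something like this (the numbers are illustrative starting points to be tuned by stress testing, not production recommendations):

```xml
<Resource name="jdbc/pg_mega" auth="Container"
          type="javax.sql.DataSource" driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://127.0.0.1:6432/db"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          username="***" password="****"
          defaultAutoCommit="true"
          initialSize="1"
          minIdle="1"
          maxActive="4"
          maxIdle="4"
          maxWait="10000"
          validationQuery="select 1"
          validationInterval="30000"
          testOnBorrow="true"
/>
```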
I am very new to Java.
I have a Java class which implements the database (Postgres) related functionality.
The problem is that if the database is stopped and then restarted, this class throws an SQLException because the connection got reset (even though the database is up and running again).
Is there any way that, after the database is restarted, my class automatically reconnects to the database and works as expected instead of throwing an SQLException?
Is there any way to do this with a Properties parameter to DriverManager.getConnection()?
Thanks
MAP
Use a try-catch block to handle the SQLException. When you catch an SQLException, the program can wait a specified period of time and then try to reconnect; you can loop this as long as you want.
boolean connected = false;
// repeat until connected is true
while (!connected) {
    try {
        // put your connection code here
        connected = true;
    } catch (SQLException se) {
        // sleep for 10 seconds, then loop around and retry
        try {
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
}
I have been learning about using MySQL within Java using Oracle JDBC and I am trying to get into the mindset of try/catch and pool cleanup.
I am wondering if the following code is the correct way to perfectly clean everything up, or if you notice holes in my code that require something I've missed. For the record, I intend to use InnoDB and its row-locking mechanism, which is why I turn auto-commit off.
try
{
    connection = getConnection(); // obtains Connection from a pool
    connection.setAutoCommit(false);
    // do mysql stuff here
}
catch (SQLException e)
{
    if (connection != null)
    {
        try
        {
            connection.rollback(); // undo any changes
        }
        catch (SQLException e1)
        {
            this.trace(ExtensionLogLevel.ERROR, e1.getMessage());
        }
    }
}
finally
{
    if (connection != null)
    {
        try
        {
            if (!connection.isClosed())
            {
                connection.close(); // free up Connection so others using the connection pool can make use of this object
            }
        }
        catch (SQLException e)
        {
            this.trace(ExtensionLogLevel.ERROR, e.getMessage());
        }
    }
}
getConnection() returns a Connection object from a pool, and connection.close() closes it, releasing it back to the pool (so I've been told; I'm still new to this, so apologies if I am talking rubbish). Any help on any of this would be greatly appreciated!
Thank you!
I recommend not setting autocommit back to true in the finally block - your other threads that are relying on autocommit being set to true should not assume that the connections in the pool are in this state, but instead they should set autocommit to true before using a connection (just as this thread is setting autocommit to false).
In addition, you should check the connection's isClosed property before calling close() on it.
Other than that, I don't see any problems.