How can I get connection pool metrics in Cassandra? - java

Is there any way to get connection pool metrics in Cassandra using CqlSession? I need an answer specific to core Java.
I want to get metrics for each client connection in Cassandra (version 4.9.0).
Metrics like: opened connections, closed connections, active connections, etc.
Also, is there any way to get notified every time a new connection is created or updated?

In the 4.x versions of the driver, you need to explicitly enable every metric that you need in the configuration file - something like this (taken from the docs; the full list of metrics is in the reference configuration):
datastax-java-driver.advanced.metrics {
  session.enabled = [ connected-nodes, cql-requests ]
  node.enabled = [ pool.open-connections, pool.in-flight ]
}
Regarding a hook for opened/closed connections, I'm not sure there is an easy way to do that, except just recording the previous numbers of open connections and comparing. Such things may be easier to track via Prometheus or another monitoring system; here is an example of how you can integrate driver metrics with Prometheus.
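Once the metrics above are enabled, the driver exposes them through the session. A minimal sketch of polling the pool.open-connections gauges, assuming the default Dropwizard MetricRegistry returned by session.getMetrics() (the scheduler, polling interval, and logging are illustrative, not part of the driver API):
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.metrics.Metrics;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PoolMetricsLogger {

    public static void start(CqlSession session) {
        // Empty unless metrics are enabled in the driver configuration shown above.
        Metrics metrics = session.getMetrics()
                .orElseThrow(() -> new IllegalStateException("Driver metrics are not enabled"));
        MetricRegistry registry = metrics.getRegistry();

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // Node-level pool gauges are registered under names ending in "pool.open-connections".
            for (Map.Entry<String, Gauge> entry : registry.getGauges(
                    (name, metric) -> name.endsWith("pool.open-connections")).entrySet()) {
                System.out.println(entry.getKey() + " = " + entry.getValue().getValue());
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}
Call PoolMetricsLogger.start(session) once after building the session; comparing successive readings is about the closest you can get to an "opened/closed" notification without an external monitoring system.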

Related

Optimizing connection to GCP MySQL from a Cloud Run service?

I have a Spring Boot application running on Cloud Run. So far I only had to add the Spring Cloud GCP MySQL starter
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
  <version>1.2.8.RELEASE</version>
</dependency>
dependency to my POM and configure my application.yml file to set the database name, connection name, etc., and it runs fine locally and on Cloud Run.
My application.yml:
spring:
  cloud:
    gcp:
      sql:
        enabled: true
        database-name: pos_database
        instance-connection-name: pos-sys:asia-southeast2:pos-server-database
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: ***
    password: ***
    hikari:
      maximum-pool-size: 20
However, I realized cold-start performance has taken a hit, because on startup the socket factory connects to the database instance via an SSL socket:
2021-05-31 13:10:07.152 INFO 1539 --- [onnection adder] c.g.cloud.sql.core.CoreSocketFactory :
Connecting to Cloud SQL instance [pos-sys:asia-southeast2:owl-server-database] via SSL socket.
and I get a bunch of lines just repeating:
2021-05-31 13:10:09.461 INFO 1539 --- [connection adder] c.g.cloud.sql.core.CoreSocketFactory :
Connecting to Cloud SQL instance [pos-sys:asia-southeast2:pos-server-database] via SSL socket.
I know there is a faster way to connect when the application is running on the cloud. I have been following this tutorial so far:
https://cloud.google.com/sql/docs/mysql/connect-run
But I'm very confused about the last part, where it says I have to connect with a Unix socket. Is this a Docker thing or something within my application? Where does the ConnectionPoolContextListener.java file have to go?
A comment within the file itself also says Java users should not use this approach, and should instead use the Cloud SQL JDBC Socket Factory.
But when I go to that link it says to add a dependency for mysql-connector - isn't that already included in spring-gcp-starter-mysql? It also says to build a connection string in this format:
jdbc:mysql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=<MYSQL_USER_NAME>&password=<MYSQL_USER_PASSWORD>
But it doesn't mention where I should put this.
So to summarise:
I have a Cloud SQL MySQL instance with the Admin API enabled.
I enabled connecting to a Cloud SQL instance from my Cloud Run service by selecting my DB instance.
I am very confused by the documentation about what the next step is.
Cloud Run provides a Unix domain socket when configured with a Cloud SQL instance - it's a file that can be used to connect to a database. You are using the Cloud SQL Java connector, which allows you to bypass the Unix socket (this is usually preferred in Java, since Unix sockets aren't natively supported).
Instead, to improve your cold-start time, I recommend doing two things:
Reduce the number of connections in your pool. While the optimal number varies greatly between applications, 20 is almost certainly far more than you need. As a rule of thumb, try 2 * the number of cores as your starting value, and increase or decrease as needed. Hikari uses maximumPoolSize for this.
Adjust the number of starting connections in your pool. Hikari offers minimumIdle, which sets the minimum number of idle connections the pool maintains (up to maximumPoolSize). While Hikari recommends not setting this value (so you get a fixed-size pool), setting it to 0 means your pool won't establish connections on startup. Your application will start faster, but on average it will take longer to get a connection from the pool; see the sketch below.
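A minimal sketch of those two knobs in plain Java, assuming HikariCP is on the classpath (the URL and values are placeholders; with Spring Boot the equivalent properties are spring.datasource.hikari.maximum-pool-size and spring.datasource.hikari.minimum-idle in application.yml):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig hikariConfig = new HikariConfig();
hikariConfig.setJdbcUrl("jdbc:mysql:///pos_database"); // placeholder URL
hikariConfig.setMaximumPoolSize(4); // roughly 2 * CPU cores as a starting point
hikariConfig.setMinimumIdle(0);     // no eager connections at startup
HikariDataSource dataSource = new HikariDataSource(hikariConfig);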

Tomcat JDBC stalls during Amazon multi-AZ failover

SOLVED
Ultimately, the solution described in the similar question linked below worked for us. When we tried it initially, our configuration parser was mangling the URL and removing &connectTimeout=15000&socketTimeout=60000 from it, which invalidated that test.
I'm having trouble getting a Tomcat JDBC connection pool to fail over to a new DB server using Amazon RDS' multi-AZ feature. When fail-over occurs, the application server gets hung up trying to borrow a connection from the pool. It's similar to this question, but the solution to that user's problem did not help me, so I suspect it's not quite the same: Configure GlassFish JDBC connection pool to handle Amazon RDS Multi-AZ failover
The sequence goes a bit like this:
A successful request is made, with log output as expected.
Fail-over is initiated (via reboot with failover in RDS).
A request is made that times out; log messages from the request appear as expected up until a connection is borrowed from the pool.
Subsequent requests generate no log messages; they time out as well.
After some amount of time, the daemon will eventually start printing more log messages as if it had successfully connected to the database and was performing operations. This can take just over 16 minutes to occur; the client has long since timed out.
If I wait about 50 minutes and try again, eventually the system will finally accept connections again.
Notes
If I tell Tomcat to shut down, there are exceptions in the logs about it being unable to clean up resources and about my servlet still processing a request. Most notable (in my opinion) is the following stack trace, indicating at least one thread is stuck waiting on a socket read from MySQL.
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:171)
java.net.SocketInputStream.read(SocketInputStream.java:141)
com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:114)
com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:161)
com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:189)
com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3116)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3573)
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3562)
com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4113)
com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2812)
com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2761)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:894)
com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:732)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:441)
org.apache.tomcat.jdbc.pool.PooledConnection.validate(PooledConnection.java:384)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:716)
org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:579)
org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:111)
<...>
If I restart Tomcat, things return to normal as soon as it comes back. Obviously, long term, this is probably not a workable solution for maintaining uptime.
Environment Details
Database: MySQL (using mysql-connector-java 5.1.26) (server 5.5.53)
Tomcat 8.0.45
I've gone through several changes to my configuration while trying to solve this issue. At the time of this writing the following related settings are in place:
jre/lib/security/java.security -- I'm under the impression that the default value for Oracle Java 1.8 is 30s with no security manager. I set these to zero just to be positive this isn't the issue.
networkaddress.cache.ttl: 0
networkaddress.cache.negative.ttl: 0
connection pool settings
testOnBorrow: true
testOnConnect: true
testOnReturn: true
jdbc url parameters
connectTimeout:60000
socketTimeout:60000
autoReconnect:true
Update
Still no solution found.
Added logging to confirm that this was not a DNS caching issue. The IP address logged immediately before the timeout matches the IP address of the 'new' master RDS host.
For reference the following block represents the properties I'm using to initialize my data source. I’m configuring the pool in code rather than JNDI, with some elements pulled out of our app's config file. I’ve pasted the code below along with comments indicating what the config values are for the tests I’ve been running.
PoolProperties p = new PoolProperties();
p.setUrl(config.get("JDBC_URL")); // jdbc:mysql://host:3306/dbname
p.setDriverClassName("com.mysql.jdbc.Driver");
p.setUsername(config.get("JDBC_USER"));
p.setPassword(config.get("JDBC_PASSWORD"));
p.setJmxEnabled(true);
p.setTestWhileIdle(false);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1");
p.setValidationInterval(30000);
p.setTimeBetweenEvictionRunsMillis(30000);
p.setMaxActive(Integer.parseInt(config.get("MAX_ACTIVE"))); //45
p.setInitialSize(10);
p.setMaxWait(5);
p.setRemoveAbandonedTimeout(Integer.parseInt(config.get("REMOVE_ABANDONED_TIMEOUT"))); //600
p.setMinEvictableIdleTimeMillis(Integer.parseInt(config.get("DB_EVICTION_TIMEOUT"))); //60000
p.setMinIdle(Integer.parseInt(config.get("DB_MIN_IDLE"))); //50
p.setLogAbandoned(Boolean.parseBoolean(config.get("LOG_ABANDONED"))); //true
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"+
"org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"+
"org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
// make sure new connections have auto commit true
p.setDefaultAutoCommit(true);
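For completeness, the fix described in the SOLVED note above came down to making sure the two timeout parameters actually survive on the URL handed to the pool; a hypothetical example (host and database name are placeholders):
p.setUrl("jdbc:mysql://host:3306/dbname?connectTimeout=15000&socketTimeout=60000");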

Initialize RabbitMQ metrics for Java

I am unsuccessfully trying to use RabbitMQ metrics support for Java.
My objective is to get some messaging statistics into my Java program. When testing I use a RabbitMQ instance at localhost, and I have put some test data into a test queue on a test virtual host using the RabbitMQ web interface.
My non-working code is:
ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.setHost(host); // localhost
connectionFactory.setUsername(userName);
connectionFactory.setPassword(password);
connectionFactory.setPort(port); // 5672
connectionFactory.setVirtualHost(virtualHost);
StandardMetricsCollector metrics = new StandardMetricsCollector();
connectionFactory.setMetricsCollector(metrics);
It seems like metrics is not properly initialized:
The metrics have the default value 0.
Within metrics there are properties called initialized that are set to false (for instance metrics.getPublishedMessages().m1Rate.initialized).
So, it seems I am missing something important here despite trying to follow the official documentation.
As a workaround, I'm currently using HTTP requests to the API to get some basic messaging statistics, but the API is very limited.
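For what it's worth, the MetricsCollector instruments the client side: it only counts traffic flowing through connections and channels created from this factory, so data already sitting in the broker (for example, put there via the web interface) will not move the counters. A minimal sketch of exercising the instrumented connection (queue name and payload are placeholders):
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;

// The factory was configured with the StandardMetricsCollector above.
Connection connection = connectionFactory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare("test-queue", false, false, false, null); // placeholder queue
channel.basicPublish("", "test-queue", null, "hello".getBytes());
// The counters should now reflect the published message.
System.out.println("published count: " + metrics.getPublishedMessages().getCount());
connection.close();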

Oracle database alert opiodr aborting process ORA-609

I am running a batch Java application. The application runs every 10/20 minutes in my Production and UAT environments, and I get database alerts like this:
Thu Feb 06 15:15:08 2014
opiodr aborting process unknown ospid (28246400) as a result of ORA-609
After researching a bit on the internet, the suggested fix for these alerts is to change INBOUND_CONNECT_TIMEOUT as follows:
Sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180
Listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120
We have changed the setting on the database server side, but we don't know where to change it in the client application. We are using c3p0 to create a connection pool, and we are setting only these parameters:
dataSource.setAcquireRetryDelay(30000);
dataSource.setMaxPoolSize(50);
dataSource.setMinPoolSize(20);
dataSource.setInitialPoolSize(10);
We have other web services running on the same server as the batch application; they use Tomcat's DBCP pool and don't seem to create any alerts. Also, strangely enough, our batch application doesn't generate the alerts in lower test environments - they happen there once in a while, but the UAT and PROD environments get these alerts very frequently based on the schedule. Any suggestions on what configuration to set in the c3p0 pool, or should I try changing to another pool API like DBCP?
Update: I have added a few more parameters to the datasource and the frequency of alerts has dropped. I added the following, and the number of alerts has gone down from 15 an hour to 4 an hour.
dataSource.setLoginTimeout(120);
dataSource.setAcquireRetryAttempts(10);
dataSource.setCheckoutTimeout(60000);
I moved to DBCP connection pooling and it seems to have fixed the issue. I tried changing a few more c3p0 settings mentioned above, but nothing changed - the alerts were reduced but didn't go away completely, so we decided to try DBCP. I am using all default values in DBCP except for the pool size. I'm using the Tomcat version of DBCP available in Tomcat's lib folder (tomcat-dbcp.jar).
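A minimal sketch of that setup, assuming the DBCP 2 classes as repackaged in recent tomcat-dbcp.jar builds (the package name is that assumption; older Tomcat 7 builds repackage DBCP 1.x under a different package and use setMaxActive instead of setMaxTotal; URL, driver class, and credentials are placeholders):
import org.apache.tomcat.dbcp.dbcp2.BasicDataSource;

BasicDataSource dataSource = new BasicDataSource();
dataSource.setDriverClassName("oracle.jdbc.OracleDriver");          // placeholder driver
dataSource.setUrl("jdbc:oracle:thin:@//db-host:1521/SERVICE");      // placeholder URL
dataSource.setUsername("user");
dataSource.setPassword("password");
dataSource.setMaxTotal(50); // pool size is the only non-default value used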

How can I monitor/log Tomcat's thread pool?

I have a Tomcat installation where I suspect the thread pool may be shrinking over time due to threads not being properly released. I get an error in catalina.out when maxThreads is reached, but I would like to log the number of threads in use to a file every five minutes so I can verify this hypothesis. Would anyone be able to advise how this can be done?
Also, in this installation there is no Tomcat manager; it appears whoever did the original installation deleted the manager webapp for some reason. I'm not sure if the manager would be able to do the above, or whether I can reinstall it without damaging the existing installation. All I really want to do is keep track of the thread pool.
Also, I noticed that maxThreads for Tomcat is 200, but the maximum number of concurrent connections for Apache is lower (Apache is using mod_proxy and mod_proxy_ajp (AJP 1.3) to feed Tomcat). That seems wrong too; what is the correct relationship between these numbers?
Any help much appreciated :D
Update: Just a quick update to say the direct JMX access worked. However, I also had to set -Dcom.sun.management.jmxremote.host. I set it to localhost and it worked; without it, no dice. If anyone else has a similar problem trying to enable JMX, I recommend you set this value as well, even if you are connecting from the local machine. It seems to be required with some versions of Tomcat.
Direct JMX access
Try adding this to catalina.sh/bat:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=5005
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
UPDATE: Alex P suggests that the following setting might also be required in some situations:
-Dcom.sun.management.jmxremote.host=localhost
This enables remote anonymous JMX connections on port 5005. You may also consider JVisualVM, which is much more pleasant and lets you browse JMX MBeans via a plugin.
What you are looking for is Catalina -> ThreadPool -> http-bio-8080 -> various interesting metrics.
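If you want the five-minute logging from plain Java rather than a GUI, here is a minimal sketch of polling those attributes over the JMX connection opened by the options above (the connector name "http-bio-8080" and the service URL are assumptions that must match your installation; run it from cron or a scheduler and append the output to a file):
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ThreadPoolPoller {
    public static void main(String[] args) throws Exception {
        // Connects to the JMX port opened by the catalina.sh options above.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:5005/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName pool = new ObjectName("Catalina:type=ThreadPool,name=\"http-bio-8080\"");
            System.out.println("busy=" + mbs.getAttribute(pool, "currentThreadsBusy")
                    + " current=" + mbs.getAttribute(pool, "currentThreadCount")
                    + " max=" + mbs.getAttribute(pool, "maxThreads"));
        }
    }
}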
JMX proxy servlet
An easier method might be to use Tomcat's JMX proxy servlet under http://localhost:8080/manager/jmxproxy. For instance, try this query:
$ curl --user tomcat:tomcat http://localhost:8080/manager/jmxproxy?qry=Catalina:name=%22http-bio-8080%22,type=ThreadPool
A little bit of grepping and scripting, and you can easily and remotely monitor your application. Note that tomcat:tomcat is the username/password of a user with the manager-jmx role in conf/tomcat-users.xml.
You can deploy jolokia.war and then retrieve MBean values as JSON (without the manager):
http://localhost:8080/jolokia/read/Catalina:name=*,type=ThreadPool?ignoreErrors=true
If you want only some values (currentThreadsBusy, maxThreads, currentThreadCount, connectionCount):
http://localhost:8080/jolokia/read/Catalina:name=*,type=ThreadPool/currentThreadsBusy,maxThreads,currentThreadCount,connectionCount?ignoreErrors=true
{
  "request": {
    "mbean": "Catalina:name=\"http-nio-8080\",type=ThreadPool",
    "attribute": [
      "currentThreadsBusy",
      "maxThreads",
      "currentThreadCount",
      "connectionCount"
    ],
    "type": "read"
  },
  "value": {
    "currentThreadsBusy": 1,
    "connectionCount": 4,
    "currentThreadCount": 10,
    "maxThreads": 200
  },
  "timestamp": 1490396960,
  "status": 200
}
Note: this example works on Tomcat 7+.
For a more enterprise-grade solution, I have been using New Relic in our production environment.
This provides a graph of the changes to the threadpool over time.
There are cheaper tools out there nowadays: I am using jmxterm: https://docs.cyclopsgroup.org/jmxterm
You can automate it via shell/batch scripts. I regexed the output and let Prometheus poll it to display in Grafana.
