I have configured Log4j2 to write logs to my MongoDB Atlas cluster (4.4.8).
The configuration seems OK (I use the connection string given by Atlas), and the console logs say that the connection to MongoDB is fine: the database is retrieved correctly and the collection is retrieved correctly.
But then, when it tries to write a log to the DB, it times out after 30000 ms, saying:
Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[]
I can also see several messages saying:
INFO org.mongodb.driver.cluster - Cluster description not yet available. Waiting for 30000 ms before timing out
What I don't understand is that, using the very same driver and the same connection string, all the operations I perform on this same MongoDB while managing the connection myself (I have a MongoDBService class where I build the Mongo connection, etc., normal stuff) work with no problem. This leads me to think that it is Log4j that handles the connection to MongoDB in a bad way...
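For comparison, this is roughly what the working manual connection looks like (a minimal sketch with the MongoDB Java driver's sync API; the class name, database/collection names, and placeholder URI are illustrative):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class MongoDBService {
    // Same Atlas connection string that the Log4j2 appender is given.
    private final MongoClient client =
            MongoClients.create("mongodb+srv://<user>:<password>@<cluster-host>/<db>");

    public void insertLog(Document doc) {
        MongoDatabase db = client.getDatabase("logs");
        MongoCollection<Document> collection = db.getCollection("applicationLog");
        collection.insertOne(doc); // works fine when called directly
    }
}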
Any help is appreciated!
Finally I found the problem in my configuration. Maybe it works for you too.
I used to have multiple appenders in the root logger. So MongoDB was trying to log something like "hey, I'm going to log" after the RollingFileAppender was initialized but before the MongoAppender was. You can see it below:
Root:
  level: info
  AppenderRef:
    - ref: ConsoleAppender
    - ref: RollingFileAppender
    - ref: MongoAppender
Just by moving the Mongo appender to its own logger, everything worked for me:
logger:
  - name: com.sinansoft
    level: info
    additivity: false
    AppenderRef:
      - ref: MongoAppender
Root:
  level: info
  AppenderRef:
    - ref: ConsoleAppender
    - ref: RollingFileAppender
Let me know if you want more configuration details in this case.
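For context, here is a sketch of how those fragments would sit inside a complete log4j2.yaml (the Appenders section, where the three appenders are defined, is omitted):

Configuration:
  Loggers:
    logger:
      - name: com.sinansoft
        level: info
        additivity: false
        AppenderRef:
          - ref: MongoAppender
    Root:
      level: info
      AppenderRef:
        - ref: ConsoleAppender
        - ref: RollingFileAppender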
I use this logback appender to send logs to Kafka:
https://github.com/danielwegener/logback-kafka-appender
While Kafka used PLAINTEXT, everything worked correctly. But since Kafka changed to SSL, it is no longer possible to send messages. I did not find the necessary information in the readme.md. Has anyone had experience with this setup? Or should I use something else? Here is my appender configuration:
<topic>TEST_TOPIC_FOR_OS</topic>
<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
<producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
<producerConfig>bootstrap.servers=KAFKA BROKER HOST</producerConfig>
<producerConfig>acks=0</producerConfig>
<producerConfig>linger.ms=1000</producerConfig>
<producerConfig>buffer.memory=16777216</producerConfig>
<producerConfig>max.block.ms=100</producerConfig>
<producerConfig>retries=2</producerConfig>
<producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback</producerConfig>
<producerConfig>compression.type=none</producerConfig>
<producerConfig>security.protocol=SSL</producerConfig>
<producerConfig>ssl.keystore.location=path_to_jks</producerConfig>
<producerConfig>ssl.keystore.password=PASSWORD</producerConfig>
<producerConfig>ssl.truststore.location=path_to_jks</producerConfig>
<producerConfig>ssl.truststore.password=PASSWORD</producerConfig>
<producerConfig>ssl.endpoint.identification.algorithm=</producerConfig>
<producerConfig>ssl.protocol=TLSv1.1</producerConfig>
For any existing topic, I get an error:
12:05:49.505 [kafka-producer-network-thread | host-default-logback] route: DEBUG o.a.k.clients.producer.KafkaProducer breadcrumbId: - [Producer clientId=host-default-logback] Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Topic TEST_TOPIC_FOR_OS not present in metadata after 100 ms.
The application itself works correctly with this Kafka cluster and topic.
The problem went away after upgrading the appender to 0.2.0.
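For reference, the corresponding Maven coordinates (assuming the appender is pulled from Maven Central):

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
</dependency>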
We have configured our application to write some specific log messages to the system's syslog file using the Syslog appender of Log4j2. There is no issue writing to the syslog file in general. But when the syslog service is restarted, the first log message after the restart is not written to the syslog; the subsequent messages are.
I enabled Log4j's debug logs; no exception is seen while writing the first message to syslog after the restart. But for the subsequent request, the following messages were captured in the Log4j2 log:
2022-01-27 18:07:40,120 ajp-nio-0.0.0.0-8009-exec-3 DEBUG Reconnecting localhost/127.0.0.1:514
2022-01-27 18:07:40,121 ajp-nio-0.0.0.0-8009-exec-3 DEBUG Creating socket localhost/127.0.0.1:514
2022-01-27 18:07:40,122 ajp-nio-0.0.0.0-8009-exec-3 DEBUG Closing SocketOutputStream java.net.SocketOutputStream@1a769d7
2022-01-27 18:07:40,122 ajp-nio-0.0.0.0-8009-exec-3 DEBUG Connection to localhost:514 reestablished: Socket[addr=localhost/127.0.0.1,port=514,localport=57852]
I took a thread dump and checked whether the Reconnector thread was running, but no such thread exists in the dump. I am clueless here; any help in finding the reason for the missing message would be appreciated.
Environment details:
CentOS 7.9 + RSyslog Service,
Application deployed in Tomcat and running on Java 11,
Log4j2 version is 2.17.1
This is due to the way plain-text TCP syslog works. Check out this post for further information.
This "bug" has existed since version 8.1901.
The only way you can fix this, as far as I know, is to send the messages over the RELP protocol. See the omrelp module.
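A minimal sketch of the rsyslog side, assuming the application relays through a local rsyslog instance that forwards via RELP (the host name and port 2514 are illustrative):

# on the local relay - /etc/rsyslog.conf
module(load="omrelp")
action(type="omrelp" target="central-log-host" port="2514")

# on the receiving server - /etc/rsyslog.conf
module(load="imrelp")
input(type="imrelp" port="2514")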
I have a Spring Boot application running on Cloud Run. So far I only had to add the Spring Cloud GCP MySQL dependency
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
    <version>1.2.8.RELEASE</version>
</dependency>
to my POM, and to configure my application.yml file to set the database name, connection name, etc. It runs fine both locally and on Cloud Run.
My application.yml:
spring:
  cloud:
    gcp:
      sql:
        enabled: true
        database-name: pos_database
        instance-connection-name: pos-sys:asia-southeast2:pos-server-database
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: ***
    password: ***
    hikari:
      maximum-pool-size: 20
However, I realized that cold-start performance has taken a hit, because on startup the socket factory connects to the database instance via an SSL socket:
2021-05-31 13:10:07.152 INFO 1539 --- [onnection adder] c.g.cloud.sql.core.CoreSocketFactory :
Connecting to Cloud SQL instance [pos-sys:asia-southeast2:owl-server-database] via SSL socket.
and I get a bunch of lines just repeating:
2021-05-31 13:10:09.461 INFO 1539 --- [connection adder] c.g.cloud.sql.core.CoreSocketFactory :
Connecting to Cloud SQL instance [pos-sys:asia-southeast2:pos-server-database] via SSL socket.
I know there is a faster way to connect when the application is running in the cloud. I have been following this tutorial so far:
https://cloud.google.com/sql/docs/mysql/connect-run
But I'm very confused by the last part, where it says I have to connect with a Unix socket. Is this a Docker thing, or something within my application? Where does the ConnectionPoolContextListener.java file have to go?
It also says, in a comment within the file itself, not to use this for Java, and to instead use the Cloud SQL JDBC Socket Factory.
But when I go to that link, it says to add a dependency for mysql-connector; isn't that already included in spring-gcp-starter-mysql? It also says to make a connection string in this format:
jdbc:mysql:///<DATABASE_NAME>?cloudSqlInstance=<INSTANCE_CONNECTION_NAME>&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=<MYSQL_USER_NAME>&password=<MYSQL_USER_PASSWORD>
But it doesn't mention where I should put this.
So to summarise:
I have a Cloud SQL MySQL instance, with the Admin API enabled.
I enabled connecting to a Cloud SQL instance in my Cloud Run service by selecting my DB instance.
I am very confused by the documentation about what the next step is.
Cloud Run provides a Unix domain socket when configured with a Cloud SQL instance; it's a file that can be used to connect to a database. You are using the Cloud SQL Java connector, which allows you to bypass the Unix socket (this is usually preferred on Java, since Java doesn't support Unix sockets natively).
Instead, to improve your cold start time, I recommend doing two things:
Reduce the number of connections in your pool. While the optimal number varies greatly between applications, 20 is almost certainly far more than you need. As a rule of thumb, try 2x the number of cores as your starting value, and increase or decrease as needed. Hikari uses maximumPoolSize to control this.
Adjust the number of starting connections in your pool. Hikari offers minimumIdle, which sets the minimum number of idle connections in the pool, up to maximumPoolSize. While Hikari recommends not setting this value (so you have a fixed-size pool), setting it to 0 means your pool won't establish any connections on startup. Your application will start faster, but it will take longer on average to get a connection from the pool.
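For example, a minimal sketch of both settings in application.yml (assuming Spring Boot's Hikari property names; the values are illustrative starting points):

spring:
  datasource:
    hikari:
      maximum-pool-size: 4  # roughly 2x the number of cores; tune as needed
      minimum-idle: 0       # don't establish connections at startup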
A Spring MVC application using Hibernate on CentOS 7 is suddenly unable to create a database connection when I restart Tomcat 8, which means I cannot log into the application. The database connection had been working perfectly for a long time. The only change I made recently was to follow the instructions in this OpenVPN tutorial, except that I used firewalld instead of iptables as in the tutorial. I am not certain that the OpenVPN changes caused this problem, but I reversed almost all the steps in the tutorial just to check (yum remove openvpn easy-rsa -y, removing the firewalld changes, etc.), and the problem still persists.
How can I get my app to successfully connect to the database again?
My database connection string is:
jdbc.url=jdbc:mysql://localhost:3306/atest?autoReconnect=true
The relevant message during tomcat startup is:
INFO ConnectionProviderInitiator - HHH000130: Instantiating explicit connection provider: org.hibernate.ejb.connection.InjectedDataSourceConnectionProvider
WARN JdbcServicesImpl - HHH000342: Could not obtain connection to query metadata : Could not create connection to database server. Attempted reconnect 3 times. Giving up.
INFO Dialect - HHH000400: Using dialect: org.hibernate.dialect.MySQLDialect
INFO LobCreatorBuilder - HHH000422: Disabling contextual LOB creation as connection was null
I also checked other things, like confirming the password is correct and logging into the database from the shell to confirm it is working.
EDIT
I checked the contents of /etc (cd /etc, then ls -al), and the results included the following: host.conf, hostname, hosts, hosts.allow, hosts.deny. I checked each of these files just now, and their contents do not seem to have changed since the last time I modified them, long before this problem emerged.
As per @shinjw's request, /etc/hosts contains the following, which has not changed since long before this problem emerged:
127.0.0.1 localhost.localdomain localhost
# Auto-generated hostname. Please do not remove this comment.
abc.de.fgh.ij mydomain.com mydomain
::1 ip6-localhost ip6-loopback
I am using Cassandra 2.0.7 on a remote server, listening on a non-default port:
# cassandra.yaml
rpc_address: 0.0.0.0
rpc_port: 6543
I am trying to connect to the server using Titan 0.4.4 (Java API; I also tried with Rexster) using the following config:
storage.hostname=172.182.183.215
storage.backend=cassandra
storage.port=6543
storage.keyspace=abccorp
It does not connect, and I see the exceptions below. However, if I use cqlsh on the same host from which I am trying to execute my code/Rexster, I am able to connect without any issues. Has anybody seen this?
0 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=ClusterTitanConnectionPool,ServiceType=connectionpool
49 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 172.182.183.215
554 [main] INFO com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=KeyspaceTitanConnectionPool,ServiceType=connectionpool
555 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 172.182.183.215
999 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - AddHost: 127.0.0.1
1000 [main] INFO com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor - RemoveHost: 172.182.183.215
2366 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Initiated backend operations thread pool of size 16
41523 [RingDescribeAutoDiscovery] WARN com.netflix.astyanax.impl.RingDescribeHostSupplier - Failed to get hosts from abccorp via ring describe. Will use previously known ring instead
61522 [RingDescribeAutoDiscovery] WARN com.netflix.astyanax.impl.RingDescribeHostSupplier - Failed to get hosts from abccorp via ring describe. Will use previously known ring instead
63080 [main] INFO com.thinkaurelius.titan.diskstorage.util.BackendOperation - Temporary storage exception during backend operation. Attempting backoff retry
com.thinkaurelius.titan.diskstorage.TemporaryStorageException: Temporary failure in storage backend
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxOrderedKeyColumnValueStore.getNamesSlice(AstyanaxOrderedKeyColumnValueStore.java:138)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxOrderedKeyColumnValueStore.getSlice(AstyanaxOrderedKeyColumnValueStore.java:88)
at com.thinkaurelius.titan.graphdb.configuration.KCVSConfiguration$1.call(KCVSConfiguration.java:70)
at com.thinkaurelius.titan.graphdb.configuration.KCVSConfiguration$1.call(KCVSConfiguration.java:64)
at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:30)
at com.thinkaurelius.titan.graphdb.configuration.KCVSConfiguration.getConfigurationProperty(KCVSConfiguration.java:64)
at com.thinkaurelius.titan.diskstorage.Backend.initialize(Backend.java:277)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1174)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:75)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:29)
at com.abccorp.grp.graphorm.GraphORM.<init>(GraphORM.java:23)
at com.abccorp.grp.graphorm.GraphORM.getInstance(GraphORM.java:47)
at com.abccorp.grp.utils.dataloader.MainLoader.main(MainLoader.java:150)
Caused by: com.netflix.astyanax.connectionpool.exceptions.NoAvailableHostsException: NoAvailableHostsException: [host=None(0.0.0.0):0, latency=0(0), attempts=0]No hosts to borrow from
at com.netflix.astyanax.connectionpool.impl.RoundRobinExecuteWithFailover.<init>(RoundRobinExecuteWithFailover.java:30)
at com.netflix.astyanax.connectionpool.impl.TokenAwareConnectionPoolImpl.newExecuteWithFailover(TokenAwareConnectionPoolImpl.java:83)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:256)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4.execute(ThriftColumnFamilyQueryImpl.java:519)
at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxOrderedKeyColumnValueStore.getNamesSlice(AstyanaxOrderedKeyColumnValueStore.java:136)
... 13 more
91522 [RingDescribeAutoDiscovery] WARN com.netflix.astyanax.impl.RingDescribeHostSupplier - Failed to get hosts from abccorp via ring describe. Will use previously known ring instead
121522 [RingDescribeAutoDiscovery] WARN com.netflix.astyanax.impl.RingDescribeHostSupplier - Failed to get hosts from abccorp via ring describe. Will use previously known ring instead
Any help is greatly appreciated. I am evaluating Titan on Cassandra and am a bit stuck on this, as previously I was using Cassandra (the same version) on localhost and everything was fine.
Thanks.
Changing the listen_address to 172.182.183.215 in the configuration did the trick. Initially it was not clear whether just setting the rpc_address was enough.
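For reference, a sketch of the relevant cassandra.yaml settings after the fix (the address and port are the ones from the question; adjust for your host):

# cassandra.yaml
listen_address: 172.182.183.215  # bind to an interface remote clients can reach
rpc_address: 0.0.0.0
rpc_port: 6543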
Thrift and the drivers that depend on Thrift are deprecated as of C* 1.2. You should switch to the DataStax Java Driver (currently at 2.0.2); see the connection sketch below.
Alternately, ensure this is set properly in cassandra.yaml:
start_rpc: true
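A minimal connection sketch with the DataStax Java Driver 2.0.x (the contact point and keyspace are taken from the question; note that the driver uses the native transport, port 9042 by default, rather than the Thrift rpc_port, so start_native_transport must be enabled in cassandra.yaml):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class CqlConnectSketch {
    public static void main(String[] args) {
        // Contact point from the question; 9042 is the default native transport port.
        Cluster cluster = Cluster.builder()
                .addContactPoint("172.182.183.215")
                .withPort(9042)
                .build();
        Session session = cluster.connect("abccorp");
        // Simple sanity query against the system keyspace.
        ResultSet rs = session.execute("SELECT release_version FROM system.local");
        System.out.println(rs.one().getString("release_version"));
        cluster.close();
    }
}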