How to prevent Vertx from writing logs automatically?

When starting my TCP server using Vertx, I have the following output :
[2018-06-04 12:15:45] [FINEST ] Net server listening on 0.0.0.0:/0:0:0:0:0:0:0:0:8600
[2018-06-04 12:15:45] [INFO ] Server is now listening on port : 8600
I was expecting the second line, since I am telling Vertx to write it:
server.listen(res -> {
    if (res.succeeded()) {
        logger.info("Server is now listening on port : {0, number, #}", server.actualPort());
    } else {
        logger.error("Server failed to bind");
    }
});
The first line, though, is written by Vert.x itself. I am a bit surprised, since I could not see anywhere in the Vert.x documentation that this would happen, nor how to prevent it from doing so.
How can I make Vertx stop logging automatically?
Thanks in advance.

Well, the manual states that Vert.x by default uses java.util.logging, often referred to by its nickname JUL. It's configurable, so depending on your use case you should be able to tune the log output. Alternatively, Vert.x can be instructed to use an external logging framework; each option has its own advantages and disadvantages.
The documentation for JUL isn't really the most helpful prose ever written; fortunately there are plenty of third-party sites covering the topic, like http://tutorials.jenkov.com/java-logging/index.html, but a quick Google search will point you to many others too.
To summarize:
you will need to write a logging.properties file that reflects the output you want to obtain, and where it should go (to log files and/or the console)
you will have to pass that file to your Vert.x application via the system property java.util.logging.config.file
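For example, assuming the file is named logging.properties and sits in the working directory, and a hypothetical application jar name of my-vertx-app.jar:
java -Djava.util.logging.config.file=logging.properties -jar my-vertx-app.jar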
Limiting the info produced by certain application parts can be done by using the filtering capabilities present in JUL. So, for example, in your logging.properties you could put
java.util.logging.FileHandler.level=INFO
which will restrict logging that goes to the logfile to INFO or higher. That line, for example, would already do away with the Vert.x log you see in your example. You can also restrict logging per package, group of packages, or even individual classes. A nice writeup of these possibilities can be found here: java.util.logging: how to set level by logger package (or prefix)?. I think Vert.x uses the prefix io.vertx.
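Putting that together, a minimal logging.properties sketch along these lines would keep your own INFO messages while silencing Vert.x internals (the io.vertx prefix is an assumption based on the package name):
# Handlers and their thresholds
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=INFO
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
# Root logger level
.level=INFO
# Suppress Vert.x internals below WARNING (assumed prefix)
io.vertx.level=WARNING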

Related

log output from qpid library running in weblogic

I am looking for a way to get more detail, like debug or verbose level logging, of a JMS message send over amqps to AzureServiceBus.
I am using qpid client 0.60.1 and I have no access to the calling code. I am working with a web application running in WebLogic. The application provides a servlet that has generic JMS functions, and I can use configuration that maps those to a specific provider's JMS connection factory libraries. To make qpid available, I add the qpid client jars to the CLASSPATH for when I start WebLogic, and I provide a jndi.properties file that currently contains only two entries:
connectionfactory.ServiceBusConnectionFactory=amqps://?jms.username=&jms.password=
queue.inbound-general-q-QueueLookup=
Currently, this is the only message that I see in the WebLogic log:
Connection ID:6147a0e7-1870-4a1a-8dd5-bd7102fc1aa4:106 connected to server: amqps://
I have been told that we don't have enough information to open a case with Microsoft.
I am looking for a way to get more detail, like debug or verbose level logging, of a message send. Ideally, want to see as much as possible: headers, properties, payload, etc.
The things I have access to change:
The WebLogic environment, including the classpath and any other Java runtime flags
The jndi.properties file
I have reviewed the qpid.apache.org documentation on logging, but it has not been helpful to me as it is too vague.
The main application running in WebLogic has these parameters in its runtime:
-Djava.util.logging.config.file=properties/logging.properties
-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger
I have tried adding some things to logging.properties, but it has never changed the output of the resulting log file to include anything from amqp.
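One hedged avenue (an assumption, not a verified fix): the Qpid JMS client logs through SLF4J, so unless an SLF4J binding is on the classpath its output goes nowhere. Adding the slf4j-jdk14 binding jar next to the qpid client jars would route it into the JUL configuration already in use, after which lines like these in logging.properties should raise its verbosity (the org.apache.qpid prefix is an assumption based on the client's package name):
# Raise Qpid client logging; the handlers must also pass FINE through
org.apache.qpid.level=FINE
java.util.logging.ConsoleHandler.level=FINE
java.util.logging.FileHandler.level=FINE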

Is it possible to create custom fields in a Kibana dashboard?

I am using a Java micro-service architecture in my application and generating separate log files for each micro-service.
I am using the ELK stack to visualize the logs in Kibana, but the problem is that the fields I get from Elasticsearch are only the ones related to the server logs. Some example fields are @timestamp, @version, @path, @version.keyword, @host.
I want to customize these fields by adding fields like customerId, txn-Id, and mobile number so that we can analyze the data easily.
I'm using org.apache.logging.log4j (Log4j 2) to write the logs. Can I add the above fields (customerId, txn-Id, mobile) to the log files? Elasticsearch would then store these fields alongside the default ones, and the custom fields should be available in a Kibana dashboard. Is this possible?
It's definitely possible to do that. I've not done it with the log4j2 stack (I have with slf4j/logback), but the basic approach is:
set those fields in the Mapped Diagnostic Context (I'm fairly sure log4j2 supports that)
use a log appender which logs to logstash-structured JSON
configure filebeat to ship the JSON logs
if filebeat is shipping to logstash, you'll need to configure logstash to pass those preformatted JSON logs directly to elasticsearch
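A minimal sketch of the first step, assuming Log4j 2 (where the MDC is exposed as ThreadContext) and the field names from the question; the class and method are hypothetical:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class TxnLogging {
    private static final Logger logger = LogManager.getLogger(TxnLogging.class);

    public void process(String customerId, String txnId, String mobileNo) {
        // Put the custom fields into the ThreadContext (Log4j 2's MDC);
        // a JSON layout that includes the ThreadContext will emit them as fields.
        ThreadContext.put("customerId", customerId);
        ThreadContext.put("txn-Id", txnId);
        ThreadContext.put("mobileNo", mobileNo);
        try {
            logger.info("transaction processed");
        } finally {
            // Clear the context so values don't leak to other work on this thread
            ThreadContext.clearMap();
        }
    }
}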
It is definitely possible. I am doing that now with my applications. However, the output looks a bit different from yours. The basic guide for doing this can be found at Logging in the Cloud on the Log4j2 web site.
The "normal" log view looks very similar to what you would see when logging to a file.
However, if you select a message you can see the individual fields.
The Log4j2 configuration uses a TCP Socket appender that is configured to write to a cluster of Logstash servers that use a single DNS entry and to use the Gelf layout.
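A hedged sketch of what such a configuration could look like in log4j2.xml; the appender name, host, and port are placeholders, not values from the answer:
<!-- Excerpt: TCP Socket appender with the GELF layout -->
<Appenders>
  <Socket name="Logstash" host="logstash.example.com" port="12222" protocol="TCP">
    <GelfLayout includeThreadContext="true" compressionType="OFF"/>
  </Socket>
</Appenders>
<Loggers>
  <Root level="info">
    <AppenderRef ref="Logstash"/>
  </Root>
</Loggers>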
You can also use MapMessages to capture individual data elements and log them. While this currently works, it is slightly cumbersome, so I have recently committed improvements that will be available in Log4j 2.15.0.
It is important to note that the Logging in the Cloud page briefly mentions storing your logging configuration in Spring Cloud Config. If you want to have a common base configuration while allowing apps to do some customization, this works very, very well. However, the Gelf, JSON Template Layout and TCP appender are all independent of that and can be used without Spring Boot.

Start and Stop Cloud SQL via Java mysql admin-api

I'm not able to find a way to simply start and stop a Cloud SQL instance using the Java MySQL admin-api.
I found this official Google documentation that explains how to start and stop a Cloud SQL instance via gcloud: https://cloud.google.com/sql/docs/mysql/start-stop-restart-instance. But I'm not able to do the same thing in Java using the MySQL admin-api.
Can anybody help me?
Generally, the Cloud SQL Admin API for Java is used for operations such as the one you are looking for. If you are using Maven, you can add the library to your project by adding the following lines to the pom.xml configuration file:
<project>
  <dependencies>
    <dependency>
      <groupId>com.google.apis</groupId>
      <artifactId>google-api-services-sqladmin</artifactId>
      <version>v1beta4-rev48-1.23.0</version>
    </dependency>
  </dependencies>
</project>
EDIT:
As far as I can see in the documentation, the underlying API uses the Instance.Patch method for starting and stopping instances, although I cannot find any specific information about how to do it. However, you can find more relevant information yourself in the Instances:Patch page. I will keep looking for more information and in case I find something relevant, I will post a comment to this answer below.
EDIT 2:
I have been performing some tests using the Google APIs Explorer, using the PROJECT_ID, SQL_INSTANCE_ID and a JSON body such as this one:
{
  "settings": {
    "activationPolicy": "YOUR_PREFERRED_STATE"
  }
}
According to the documentation:
The activation policy specifies when the instance is activated; it is applicable only when the instance state is RUNNABLE. Valid values:
ALWAYS: The instance is on, and remains so even in the absence of connection requests.
NEVER: The instance is off; it is not activated, even if a connection request arrives.
ON_DEMAND: First Generation instances only. The instance responds to incoming requests, and turns itself off when not in use. Instances with PER_USE pricing turn off after 15 minutes of inactivity. Instances with PER_PACKAGE pricing turn off after 12 hours of inactivity.
I have tried running the API with the NEVER and ALWAYS states, and my Cloud SQL instance stopped and started accordingly. So in your case, going back to the Admin API for Java, you should look at the Settings of your instance, specifically at this method:
public Settings setActivationPolicy(java.lang.String activationPolicy)
Changing the Activation Policy to NEVER or ALWAYS should be what you need here, although you can have a look at the other possible instance states in case they fit your requirements better.
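A sketch of what that could look like with the google-api-services-sqladmin client; the project and instance IDs are hypothetical, and this assumes application-default credentials are available:
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.sqladmin.SQLAdmin;
import com.google.api.services.sqladmin.SQLAdminScopes;
import com.google.api.services.sqladmin.model.DatabaseInstance;
import com.google.api.services.sqladmin.model.Settings;
import java.util.Collections;

public class CloudSqlStartStop {
    public static void main(String[] args) throws Exception {
        // Hypothetical identifiers - replace with your own
        String project = "my-project";
        String instance = "my-instance";

        GoogleCredential credential = GoogleCredential.getApplicationDefault()
                .createScoped(Collections.singleton(SQLAdminScopes.SQLSERVICE_ADMIN));

        SQLAdmin sqlAdmin = new SQLAdmin.Builder(
                GoogleNetHttpTransport.newTrustedTransport(),
                JacksonFactory.getDefaultInstance(),
                credential)
            .setApplicationName("cloudsql-start-stop")
            .build();

        // Patch only the activation policy: NEVER stops the instance, ALWAYS starts it
        DatabaseInstance patch = new DatabaseInstance()
                .setSettings(new Settings().setActivationPolicy("NEVER"));

        sqlAdmin.instances().patch(project, instance, patch).execute();
    }
}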

Apache Logging - Send log output directly to queue

I am using standard Apache logging (org.apache.log4j).
Currently, I take the data to be logged manually and publish it to Apache ActiveMQ.
Is it possible to configure the logging output to be published directly to ActiveMQ?
This might sound stupid, but since both are from Apache, I suspect there may be some implicit support that I could not find.
log4j provides a JMSAppender out of the box. It allows publishing logging events to a JMS topic.
For configuration specific to ActiveMQ, please check the documentation - How do I use log4j JMS appender with ActiveMQ
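A sketch of a log4j 1.x configuration along those lines, assuming a broker at tcp://localhost:61616 (the broker URL and topic name are placeholders):
# Route log events to ActiveMQ via the JMSAppender
log4j.rootLogger=INFO, jms
log4j.appender.jms=org.apache.log4j.net.JMSAppender
log4j.appender.jms.InitialContextFactoryName=org.apache.activemq.jndi.ActiveMQInitialContextFactory
log4j.appender.jms.ProviderURL=tcp://localhost:61616
log4j.appender.jms.TopicBindingName=logTopic
log4j.appender.jms.TopicConnectionFactoryBindingName=ConnectionFactory
A jndi.properties on the classpath containing topic.logTopic=logTopic lets the ActiveMQ initial context resolve the topic name.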
Not sure if you were looking for log4j-1.x or log4j-2.0, but here are the links for log4j-2.0:
http://logging.apache.org/log4j/2.x/manual/appenders.html#JMSQueueAppender
http://logging.apache.org/log4j/2.x/manual/appenders.html#JMSTopicAppender

hsqldb messing up my server's logs

I have a server I made in Java that needs to use a database; I chose HSQLDB.
So I have a lot of entries in my server like:
Logger.getLogger(getClass().getName()).severe("Some important information"); // or .info(...)
When I run my server the output goes to System.out, which I think is the default configuration of java.util.logging? So far that's OK for me, and later I will make it go to a file...
But the problem is, when I start hsqldb it messes up the default configuration and I can't read my log entries on System.out anymore.
I already tried changing hsqldb.log_data=false, but it still messes up the default configuration.
Can someone help me?
I don't want to log hsqldb events, just my server's ones.
Thanks
This issue was reported and fixed in the latest version 2.2.0 released today.
Basically, you set the system property hsqldb.reconfig_logging to the string value false.
A system property is normally set with the -D option in the Java startup command for your application:
java -Dhsqldb.reconfig_logging=false ....
See below for details of the change:
http://sourceforge.net/tracker/?func=detail&aid=3195462&group_id=23316&atid=378131
In addition, when you use a framework logger for your application, you should configure it directly to choose which levels of log to accept and which ones to ignore.
The hsqldb.applog setting does not affect framework logging and only controls the file log.
The hsqldb.log_data=false setting is for turning off internal data change logging and should not be used for normal databases. Its usage for bulk imports is explained in the Guide.
Try setting hsqldb.applog to 0; that shuts off application logging to the *.app.log file.
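For instance (a sketch; the database path is a placeholder), the property can be appended to the JDBC URL when opening the database:
jdbc:hsqldb:file:/path/to/mydb;hsqldb.applog=0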
Start your server with a property pointing to the location of a dedicated properties file:
-Djava.util.logging.config.file=/location/of/your/hsqldblog.properties
which contains the following line to change Java logging for HSQLDB:
# Change hsqldb logging level
org.hsqldb.persist.level=WARNING
Side note: you can choose from the following levels:
SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST
