Paketo BuildPacks Java JSON Log for Spring Boot Application

We are using Paketo BuildPacks for our Spring Boot application. We configured all logs to be JSON written to STDOUT. The issue is that there are a few lines logged by Paketo during startup:
Setting Active Processor Count to 2
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx1643814K -XX:MaxMetaspaceSize=146137K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 2G, Thread Count: 50, Loaded Class Count: 23387, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 124 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=2 -XX:MaxDirectMemorySize=10M -Xmx1643814K -XX:MaxMetaspaceSize=146137K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enable=true
Is there any way to configure Paketo to print the above as JSON:
{ "timestamp": 1234567890, "app": "my-service", "message": "Setting Active Processor Count to 2" }

It is not possible to directly configure Paketo BuildPacks to print log messages in JSON format. The buildpack helper processes write plain text lines to STDOUT, and there is no built-in support for converting those messages to JSON.
One potential solution is to use a log aggregation and analysis tool that can parse plain-text log lines alongside your JSON logs. Many tools can do this, including the Elastic Stack (formerly known as the ELK stack), Splunk, and Logz.io, and they provide powerful analytics and visualization features to help you understand and optimize your application's performance.
Another potential solution is to use a log forwarding tool, such as Logstash, Fluentd, or rsyslog, to collect the log messages from your application, normalize them, and send them to a destination of your choice, such as a log analysis tool or a centralized log management service.
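If wiring up a full pipeline is more than you need, a small filter process in front of your log collector can do the same job. Here is a minimal sketch in Java (the class name JsonWrap, the app name my-service, and the crude string escaping are placeholders for illustration, not a hardened implementation):

import java.io.BufferedReader;
import java.io.InputStreamReader;

// Reads the container's STDOUT line by line and wraps any non-JSON line in a
// small JSON envelope; lines that already look like JSON pass through untouched.
public class JsonWrap {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            if (line.startsWith("{")) {
                System.out.println(line); // already JSON, e.g. the Spring Boot logs
            } else {
                // naive escaping, sufficient for the Paketo startup messages above
                String escaped = line.replace("\\", "\\\\").replace("\"", "\\\"");
                System.out.printf("{\"timestamp\":%d,\"app\":\"my-service\",\"message\":\"%s\"}%n",
                        System.currentTimeMillis() / 1000, escaped);
            }
        }
    }
}

You would run the container's entrypoint through it (for example, java -jar app.jar 2>&1 | java JsonWrap), or put the equivalent wrapping logic into the forwarder's configuration.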

No, sorry. The logging format in Paketo Buildpacks, and in the helpers (technical name: exec.d processes) that get installed, is not configurable at this time.
Some work has been done in libcnb, the upstream library that the Java-related buildpacks use, that would allow customized loggers. In theory, it should be possible to allow changing to a JSON-based logging format, but that would require v2.0 of the library, which has yet to be released.
I suggest adding this as a suggestion for the 2023 Roadmap; there is presently a discussion going on where the project is soliciting new features. You can also open an issue under the Java buildpack for tracking.

Related

log output from qpid library running in weblogic

I am looking for a way to get more detail, like debug or verbose level logging, of a JMS message send over amqps to AzureServiceBus.
I am using qpid client 0.60.1 and I have no access to the calling code. I am working with a web application running in WebLogic. The application provides a servlet that has generic JMS functions, and I can use configuration that maps those to a specific provider's JMS connection factory libraries. To make qpid available, I add the qpid client jars to the CLASSPATH for when I start WebLogic, and I provide a jndi.properties file that currently contains only two entries:
connectionfactory.ServiceBusConnectionFactory=amqps://?jms.username=&jms.password=
queue.inbound-general-q-QueueLookup=
Currently, this is the only message that I see in the weblogic log:
Connection ID:6147a0e7-1870-4a1a-8dd5-bd7102fc1aa4:106 connected to server: amqps://
I have been told that we don't have enough information to open a case with Microsoft.
I am looking for a way to get more detail, like debug or verbose level logging, of a message send. Ideally, I want to see as much as possible: headers, properties, payload, etc.
The things I have access to change:
Weblogic environment, including classpath and any other java runtime flags
The jndi.properties file
I have reviewed the qpid.apache.org documentation on logging, but it has not been helpful to me, as it is too vague.
The main application running in weblogic has these parameters in its runtime:
-Djava.util.logging.config.file=properties/logging.properties
-Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger
I have tried adding some things to logging.properties, but it has never changed the output of the resulting log file to include anything from amqp.
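For reference, the kind of entries I have been experimenting with look like this (this assumes org.apache.qpid is the right logger prefix, and that the qpid client's SLF4J logging is actually routed to java.util.logging through an slf4j-jdk14 binding, which I am not certain of):

# send everything FINEST and above from the qpid client to the log file
handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.level=ALL
java.util.logging.FileHandler.pattern=logs/qpid-%u.log
.level=INFO
org.apache.qpid.level=FINEST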

Is it possible to create custom fields in a Kibana dashboard?

I am using a Java micro-service architecture in my application and generating separate log files for each micro-service.
I am using the ELK stack approach to visualize the logs in Kibana, but the problem is that the fields I'm getting from Elasticsearch are all related to the server logs; some example fields are #timestamp, #version, #path, #version.keyword, #host.
I want to customize these fields by adding fields like customerId, txn-Id, and mobile no so that we can analyze the data easily.
I'm using org.apache.logging.log4j2 to write the logs. Can I write the above fields (customerId, txn-Id, mobile) to the log files? Then Elasticsearch would store these fields with the default fields above, and the custom fields would be available in a Kibana dashboard. Is this possible?
It's definitely possible to do that. I've not done it with the log4j2 stack (I have with slf4j/logback), but the basic approach is:
set those fields in the Mapped Diagnostic Context (Log4j2 supports this via its ThreadContext class; see the sketch after this list)
use a log appender which logs to logstash-structured JSON
configure filebeat to ship the JSON logs
if filebeat is shipping to logstash, you'll need to configure logstash to pass those preformatted JSON logs directly to elasticsearch
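A minimal sketch of the first step with Log4j2 (the class and field names are just examples; it assumes your appender uses a layout that emits the context map, which the structured JSON layouts do):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class TxnService {
    private static final Logger log = LogManager.getLogger(TxnService.class);

    void handle(String customerId, String txnId, String mobileNo) {
        // ThreadContext is Log4j2's MDC; entries become fields in structured layouts
        ThreadContext.put("customerId", customerId);
        ThreadContext.put("txn-Id", txnId);
        ThreadContext.put("mobileNo", mobileNo);
        try {
            log.info("transaction processed");
        } finally {
            ThreadContext.clearMap(); // don't leak fields into unrelated log lines
        }
    }
}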
It is definitely possible. I am doing that now with my applications. However, the output looks a bit different from yours. The basic guide for doing this can be found at Logging in the Cloud on the Log4j2 web site.
The "normal" log view looks very similar to what you would see when logging to a file.
However, if you select a message you can see the individual fields.
The Log4j2 configuration uses a TCP socket appender, configured with the GELF layout, that writes to a cluster of Logstash servers behind a single DNS entry.
You can also use MapMessages to capture individual data elements and log them. While this currently works, it is slightly cumbersome, so I have recently committed improvements that will be available in Log4j 2.15.0.
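For illustration, a MapMessage-based call looks roughly like this (the class and field names are made up; it assumes a layout, such as the GELF or JSON Template Layout, that renders the map entries as individual fields):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public class TxnAudit {
    private static final Logger log = LogManager.getLogger(TxnAudit.class);

    void record(String customerId, String txnId) {
        // each map entry is logged as an individual data element
        log.info(new StringMapMessage()
                .with("message", "transaction recorded")
                .with("customerId", customerId)
                .with("txn-Id", txnId));
    }
}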
It is important to note that the Logging in the Cloud page briefly mentions storing your logging configuration in Spring Cloud Config. If you want a common base configuration while allowing apps to do some customization, this works very, very well. However, the GELF layout, JSON Template Layout, and TCP appender are all independent of that and can be used without Spring Boot.

How to prevent Vertx from writing logs automatically?

When starting my TCP server using Vertx, I have the following output:
[2018-06-04 12:15:45] [FINEST ] Net server listening on 0.0.0.0:/0:0:0:0:0:0:0:0:8600
[2018-06-04 12:15:45] [INFO ] Server is now listening on port : 8600
I was expecting the second line, since I am telling Vertx to write it:
server.listen(res -> {
    if (res.succeeded()) {
        logger.info("Server is now listening on port : {0, number, #}", server.actualPort());
    } else {
        logger.error("Server failed to bind");
    }
});
The first line, though, is written by Vertx itself. I am a bit surprised, since I could not see anywhere in the Vertx documentation that this would happen, nor how to prevent it from doing so.
How can I make Vertx stop logging automatically?
Thanks in advance.
Well, the manual states that vert.x by default uses java.util.logging, often referred to by its nickname JUL. It's configurable, so depending on your use case you should be able to tune the log output. Alternatively, vert.x can be instructed to use an external logging framework; they each have their own advantages and disadvantages.
The documentation for JUL isn't really the most helpful prose ever written, fortunately there are plenty of third party sites covering that topic, like http://tutorials.jenkov.com/java-logging/index.html but a quick Google will point you to many others too.
In summary:
you will need to write a logging.properties file that reflects the output you want to obtain, and where (in logfiles and/or on the console)
you will have to pass that file to your vert.x application via the system property java.util.logging.config.file
Limiting the info produced by certain application parts can be done by using the filtering capabilities present in JUL. So, for example, in your logging.properties you could put
java.util.logging.FileHandler.level=INFO
which will restrict logging that goes to the logfile to INFO or higher. That line, for example, would already do away with the vert.x log you see in your example. You can also restrict logging per package, group of packages, or even individual classes. A nice writeup of these possibilities can be found here: java.util.logging: how to set level by logger package (or prefix)?. I think vert.x uses the prefix io.vertx.
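Putting those pieces together, a logging.properties along these lines should keep your own INFO message while dropping the vert.x one (the io.vertx prefix is, as said, an assumption on my part):

handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
# default level for everything else
.level=INFO
# silence vert.x internals below WARNING, including the FINEST line above
io.vertx.level=WARNING

Then start the application with -Djava.util.logging.config.file=logging.properties.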

What logging apis does Liberty provide?

Obviously java.util.logging is an option, but are any other options available (possibly by enabling a feature)? I did see the eventLogging-1.0 feature but I can't find the related jar or docs.
Specifically, I want to provide a unique identifier with some of my logs, similar to how Liberty does it. For example, see the CWWK* IDs below:
[3/30/17 13:29:27:198 PDT] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager A CWWKE0001I: The server defaultServer has been launched.
[3/30/17 13:29:28:638 PDT] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager I CWWKE0002I: The kernel started after 1.695 seconds
I could just wrap my calls to Logger.log() and append the IDs myself, but I figured there has to be a better way. I shouldn't have to include a new library (e.g. log4j) since the internal Liberty logs are already doing this.
The CWWK* prefix is part of the message text in the NLS message files; there is no magic that prepends these IDs to log messages. They only appear for NLS-enabled messages, as you can see if you look at trace output.
The eventLogging feature essentially causes important events to be logged to messages.log; it isn't an application logging API, which is why you can't find any documentation on it.
Liberty doesn't provide its own logging API. If java.util.logging doesn't work for you, you can use log4j or slf4j by putting those logging libraries in your application.
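That said, if you want Liberty-style IDs without pulling in a new library, you can mimic the NLS approach with plain java.util.logging by putting the ID into the resource bundle's message text. A rough sketch (the bundle name MyMessages and the MYAP prefix are invented for illustration):

import java.util.logging.Level;
import java.util.logging.Logger;

// MyMessages.properties on the classpath would contain, for example:
//   server.started=MYAP0001I: The server {0} has been launched.
public class Startup {
    private static final Logger log =
            Logger.getLogger(Startup.class.getName(), "MyMessages");

    void start(String serverName) {
        // the key is resolved against the bundle, so the ID travels with the text
        log.log(Level.INFO, "server.started", serverName);
    }
}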

Java daemon deployment infrastructure

Are there any deployment platforms for Java daemons? We have GlassFish, Geronimo, etc. for web-application deployment, but what if I have a simple Spring-based application that processes messages from ActiveMQ or something like that? Where should I deploy it?
You are probably looking for something like the Java Service Wrapper. I used it a couple of years ago for a group of services that needed a watchdog and start, stop, and restart operations. You can do that and a few things more (a minimal wrapper.conf sketch follows this list):
Run a Java application as a Windows Service or Unix daemon: makes it possible to install a Java application as a Windows Service or a daemon process on Unix systems.
Standard, out-of-the-box scripting: provides scripts for running on Windows and Unix.
On-demand restarts: your application can request a restart of its own JVM.
Flexible configuration: configuration for the JVM and the application can be centralized in a text file.
Logging: while the Java Service Wrapper does not attempt to replace any logging tools available, it does provide a number of properties to configure how "stdout" and "stderr" output to the JVM console is handled. This output can be logged to any combination of the console, a file, the "Event Log" (Windows), or "syslog" (Unix).
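A minimal wrapper.conf sketch for a daemon like yours (the class name and paths are placeholders):

wrapper.java.command=java
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp
wrapper.java.classpath.1=lib/wrapper.jar
wrapper.java.classpath.2=lib/my-daemon.jar
# the actual main class of the Spring/ActiveMQ application
wrapper.app.parameter.1=com.example.MessageDaemon
wrapper.logfile=logs/wrapper.log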
If you build your project with Maven, there is an Application Assembler Maven Plugin that you can use.
