Logging using Logback on Spark Standalone - java

We are using Spark Standalone 2.3.2 with logback-core/logback-classic 1.2.3.
We have a very simple Logback configuration file which lets us log the data to a specific directory, and locally I can pass the VM parameter from the editor
-Dlogback.configurationFile="C:\path\logback-local.xml"
and it works and logs properly
On Spark Standalone I am trying to pass the arguments via spark-submit:
spark-submit
--master spark://127.0.0.1:7077
--driver-java-options "-Dlog4j.configuration=file:/path/logback.xml"
--conf "spark.executor.extraJavaOptions=-Dlogback.configurationFile=file:/path/logback.xml"
Here is the config file (somewhat Ansible-templated). I have verified the actual paths and they exist; any idea what the issue could be on the cluster? I have checked the environment variables on the Spark UI and they reflect the same driver and executor options.
Are there any known issues with Logback and Spark Standalone together?
There is nothing special in the configuration file; it just filters the data into JSON logging vs. a plain file, for better visualization on the log server:
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>{{ app_log_file_path }}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <!--daily-->
      <fileNamePattern>{{ app_log_dir }}/{{ app_name }}.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
      <maxFileSize>100MB</maxFileSize>
      <maxHistory>90</maxHistory>
      <totalSizeCap>10GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
      <pattern>%d [%thread] %-5level %logger{36} %X{user} - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="FILE_JSON" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <filter class="ch.qos.logback.core.filter.EvaluatorFilter">
      <evaluator>
        <expression>
          return message.contains("timeStamp") &amp;&amp;
                 message.contains("logLevel") &amp;&amp;
                 message.contains("sourceLocation") &amp;&amp;
                 message.contains("exception");
        </expression>
      </evaluator>
      <OnMismatch>DENY</OnMismatch>
      <OnMatch>NEUTRAL</OnMatch>
    </filter>
    <file>{{ app_json_log_file_path }}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <!--daily-->
      <fileNamePattern>{{ app_log_dir }}/{{ app_name }}_json.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
      <maxFileSize>100MB</maxFileSize>
      <maxHistory>90</maxHistory>
      <totalSizeCap>10GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
      <pattern>%msg%n</pattern>
    </encoder>
  </appender>
  <logger name="com.baml.ctrltech.greensheet.logging.GSJsonLogging" level="info" additivity="false">
    <appender-ref ref="FILE_JSON" />
  </logger>
  <root level="INFO">
    <appender-ref ref="FILE" />
    <appender-ref ref="FILE_JSON"/>
  </root>
</configuration>

We couldn't get Logback to work with Spark; as Spark uses Log4j internally, we had to switch to Log4j as well.
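(For context: when application code logs through the SLF4J facade rather than Logback-specific classes, such a backend swap needs no source changes. A minimal illustrative sketch; the class name is ours, not from the thread:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AppLog {
    // Only the SLF4J API is referenced here; whether Logback or Log4j does the
    // actual writing is decided by whichever binding is on the classpath.
    private static final Logger LOG = LoggerFactory.getLogger(AppLog.class);

    public static void main(String[] args) {
        LOG.info("Backend (Logback vs Log4j) is chosen at deploy time, not in code");
    }
}
)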

I fixed it by adding one dependency for Logback and excluding the transitive Log4j dependency from Spark in sbt:
val sparkV = "3.3.1"
val slf4jLogbackV = "2.1.4"
val slf4jLogback = "com.networknt" % "slf4j-logback" % slf4jLogbackV
val sparkSql = ("org.apache.spark" %% "spark-sql" % sparkV)
.cross(CrossVersion.for3Use2_13)
.exclude("org.apache.logging.log4j", "log4j-slf4j-impl")

Related

No evaluator set for filter null, using logback-spring.xml in Spring Boot Application

I am writing a logback file for my Spring Boot application but am facing this issue, and found no solution on the internet. It would be really helpful if someone could help.
Logback.xml
<appender class="ch.qos.logback.core.rolling.RollingFileAppender" name="INFO">
  <File>${log.dir}/default.log</File>
  <append>true</append>
  <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <fileNamePattern>${log.dir}/default.log.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
    <maxFileSize>${log.default.maxFileSize}</maxFileSize>
    <maxHistory>${log.maxHistory}</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>${log.pattern}</pattern>
  </encoder>
</appender>
<appender class="ch.qos.logback.classic.AsyncAppender" name="ASYNC-INFO">
  <discardingThreshold>${async.discardingThreshold}</discardingThreshold>
  <queueSize>${async.queueSize}</queueSize>
  <neverBlock>${neverBlock}</neverBlock>
  <filter class="ch.qos.logback.core.filter.EvaluatorFilter">
    <OnMismatch>DENY</OnMismatch>
    <OnMatch>NEUTRAL</OnMatch>
  </filter>
  <appender-ref ref="INFO"/>
</appender>
<root level="INFO">
  <appender-ref ref="ASYNC-INFO"/>
</root>
ERROR
ERROR in ch.qos.logback.core.filter.EvaluatorFilter#13fd2ccd - No evaluator set for filter null
Note
I don't have much experience with code.
One More Question
I have multiple logback.xml files based on environments, like stage, production, etc. How can I pass them while running a jar?
I tried: java -jar jarfile.jar --spring.config.location=application.yml,logback-dev.xml

How to select Logback appender based on property file or environment variable

I have configured a logback XML file for a Spring Boot project.
I want to configure another appender based on a configured property. We want to create an appender either for JSON logs or for text logs; this will be decided either by a property file or by an environment variable.
So I am thinking about the best approach to do this.
Using filters to print logs to one of the files (either JSON or text): but this will create both appenders, and I want to create only one.
Using if/else blocks in the logback XML file: putting if/else around appenders and loggers seems untidy and error prone, so I will avoid it as much as possible.
So now I am exploring options where I can add an appender at runtime.
So I want to know whether it is possible to add an appender at runtime, and whether it has to happen before Spring boots up or can be done at any time in the project.
What could be the best approach for this scenario?
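On the runtime part specifically: Logback does allow attaching an appender programmatically once its LoggerContext exists, so it can be done before Spring boots or at any later point. A minimal sketch, not an official recipe; the LOG_FORMAT variable and the hand-rolled JSON-ish pattern are illustrative only (the answers below show a cleaner JsonLayout-based setup):

import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.ConsoleAppender;
import org.slf4j.LoggerFactory;

public class RuntimeAppenderSketch {
    public static void main(String[] args) {
        // Grab the running Logback context behind the SLF4J facade.
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Decide the format once, from an (illustrative) environment variable.
        boolean json = "json".equalsIgnoreCase(System.getenv("LOG_FORMAT"));

        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(ctx);
        encoder.setPattern(json
                ? "{\"ts\":\"%d\",\"level\":\"%level\",\"msg\":\"%msg\"}%n"
                : "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n");
        encoder.start();

        ConsoleAppender<ILoggingEvent> appender = new ConsoleAppender<>();
        appender.setContext(ctx);
        appender.setName("runtime-console");
        appender.setEncoder(encoder);
        appender.start();

        // Attach to the root logger; only this single appender is created.
        Logger root = ctx.getLogger(Logger.ROOT_LOGGER_NAME);
        root.addAppender(appender);

        LoggerFactory.getLogger(RuntimeAppenderSketch.class).info("appender attached at runtime");
    }
}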
As you're already using Spring, I suggest using Spring Profiles; that's a lot cleaner than trying to do the same programmatically. This approach is also outlined in the Spring Boot docs.
You can set the active profile from a property file:
spring.profiles.active=jsonlogs
or from an environment variable:
SPRING_PROFILES_ACTIVE=jsonlogs
or from a startup parameter:
-Dspring.profiles.active=jsonlogs
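For completeness, it can also be set programmatically before the application context starts; a small sketch assuming a plain Spring Boot entry point:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(App.class);
        // Equivalent to -Dspring.profiles.active=jsonlogs, applied before the context starts.
        app.setAdditionalProfiles("jsonlogs");
        app.run(args);
    }
}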
Then have separate configurations per profile:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="stdout-classic" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{dd-MM-yyyy HH:mm:ss.SSS} %magenta([%thread]) %highlight(%-5level) %logger{36}.%M - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="stdout-json" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
      <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
        <timestampFormat>yyyy-MM-dd'T'HH:mm:ss.SSSX</timestampFormat>
        <timestampFormatTimezoneId>Etc/UTC</timestampFormatTimezoneId>
        <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
          <prettyPrint>true</prettyPrint>
        </jsonFormatter>
      </layout>
    </encoder>
  </appender>
  <!-- begin profile-specific stuff -->
  <springProfile name="jsonlogs">
    <root level="info">
      <appender-ref ref="stdout-json" />
    </root>
  </springProfile>
  <springProfile name="classiclogs">
    <root level="info">
      <appender-ref ref="stdout-classic" />
    </root>
  </springProfile>
</configuration>
As the previous answer states, you can set different appenders based on Spring Profiles.
However, if you do not want to rely on that feature, you can use environment variables as described in the Logback manual, for example:
<appender name="json" class="ch.qos.logback.core.ConsoleAppender">
  <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
    <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
      <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
        <prettyPrint>true</prettyPrint>
      </jsonFormatter>
      <appendLineSeparator>true</appendLineSeparator>
    </layout>
  </encoder>
</appender>
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>
      %cyan(%d{HH:mm:ss.SSS}) %gray([%thread]) %highlight(%-5level) %magenta(%logger{36}) - %msg%n
    </pattern>
  </encoder>
</appender>
<root level="info">
  <!--
   ! Use the content of the LOGBACK_APPENDER environment variable, falling back
   ! to 'json' if it is not defined
   -->
  <appender-ref ref="${LOGBACK_APPENDER:-json}"/>
</root>

Simple logging example with slf4j/logback in Scala not working

Logback classic 1.2.3 works, but if I use Logback 1.3.0-alpha, which uses SLF4J 1.8, I get this error:
SLF4J: No SLF4J providers were found.
This happens only if I assemble the Scala file, create a jar, and execute it. If I run it from the IntelliJ IDE it works fine.
My sbt is:
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.3.0-alpha4"
And my Scala code is:
import org.slf4j.LoggerFactory

object Hello extends App {
  print("Hi!!!")
  val logback = LoggerFactory.getLogger("CloudSim+")
  logback.info(" --- Welcome to cloudsim+ simulator --- ")
  logback.info("Press 1 to start Load balancing simulator")
  logback.info("Press 2 to start Network simulator")
}
logback.xml in the resources folder has the below content:
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>[%date{HH:mm:ss}] %-5level %logger{0} {%class %method} - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="file" class="ch.qos.logback.core.FileAppender">
    <file>${log-file:-scala-logging.log}</file>
    <encoder>
      <pattern>[%date{HH:mm:ss}] %-5level %logger{0} {%class %method} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="console"/>
    <appender-ref ref="file"/>
  </root>
</configuration>
I'm stuck; I have tried various things to see log statements even when I run a jar file.
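One thing worth checking: SLF4J 1.8+ discovers its backend through the JDK ServiceLoader (META-INF/services entries), and fat-jar assembly steps are known to drop or fail to merge those entries, which would match "works in the IDE, fails from the jar". A hedged diagnostic sketch (plain Java, run on the assembled jar's classpath) to see whether any provider survived:

import java.util.ServiceLoader;
import org.slf4j.spi.SLF4JServiceProvider;

public class ProviderCheck {
    public static void main(String[] args) {
        // SLF4J 1.8+ looks up implementations of this interface via ServiceLoader.
        ServiceLoader<SLF4JServiceProvider> providers =
                ServiceLoader.load(SLF4JServiceProvider.class);
        boolean found = false;
        for (SLF4JServiceProvider p : providers) {
            System.out.println("Found SLF4J provider: " + p.getClass().getName());
            found = true;
        }
        if (!found) {
            // Typical cause in a fat jar: META-INF/services entries were not kept/merged.
            System.out.println("No SLF4J providers on the classpath.");
        }
    }
}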

Exclude default logback file

My team is developing a telecom realtime application with a high calls-per-second rate. We are using Logback to filter logs based on a key-value match (live traffic values, like Calling Party, and so on). The filtered log file is correctly created once a match between live values and DB values is verified, but we would like to get rid of the default file, which fills up with logs when there is no match. It can happen that a traffic node needs to be monitored for a while before a key-value match takes place, so in the meantime the default file could grow indefinitely in size and cause problems for the performance and stability of the node itself. What should I do in my logback.xml to avoid generation of the default log file? Is it possible? Any other option to achieve the same result?
logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property scope="context" name="LOG_LEVEL" value="INFO" />
  <appender name="SIFT_LOGGER" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator class="com.ericsson.jee.ngin.services.log.ServiceKeyDiscriminator">
    </discriminator>
    <sift>
      <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <prudent>true</prudent>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
          <fileNamePattern>/var/log/tit/logback_${serviceKey}_%d{yyyy-MM-dd}_%i.log</fileNamePattern>
          <maxFileSize>1MB</maxFileSize>
          <maxHistory>10</maxHistory>
          <totalSizeCap>2GB</totalSizeCap>
        </rollingPolicy>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
          <level>${LOG_LEVEL}</level>
          <onMatch>ACCEPT</onMatch>
          <onMismatch>DENY</onMismatch>
        </filter>
        <!-- encoders are by default assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder -->
        <encoder>
          <pattern>%d{yyyy-MM-dd HH:mm:ss.SSSZ} [%thread] %-5level %logger{36} %msg%n</pattern>
        </encoder>
      </appender>
    </sift>
  </appender>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <!-- encoders are by default assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder -->
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSSZ} [%thread] %-5level %logger{36} %msg%n</pattern>
    </encoder>
  </appender>
  <turboFilter class="ch.qos.logback.classic.turbo.DynamicThresholdFilter">
    <key>serviceKey</key>
    <defaultThreshold>DEBUG</defaultThreshold>
    <onHigherOrEqual>ACCEPT</onHigherOrEqual>
    <onLower>ACCEPT</onLower>
  </turboFilter>
  <root level="DEBUG">
    <appender-ref ref="SIFT_LOGGER" />
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
ATTACHMENTS: FILTERED LOGBACK CLASS
The provided filtered-logback (FL) class only works for a serviceKey (SK) which has a Java discriminator in the FL module.
You must move the filter to the general sift appender.
<appender name="SIFT-TRACE" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator class="ch.qos.logback.classic.sift.MDCBasedDiscriminator">
    <Key>loggerFileName</Key>
    <DefaultValue>unknown</DefaultValue>
  </discriminator>
  <filter class="ch.qos.logback.core.filter.EvaluatorFilter">
    <evaluator class="ch.qos.logback.classic.boolex.JaninoEventEvaluator">
      <expression>
        mdc.get("loggerFileName") != null
      </expression>
    </evaluator>
    <OnMismatch>DENY</OnMismatch>
    <OnMatch>NEUTRAL</OnMatch>
  </filter>
  <sift>
    <appender name="TRACE-${loggerFileName}" class="ch.qos.logback.core.FileAppender">
      <File>D:/U/${loggerFileName}.log</File>
      <Append>true</Append>
      <layout class="ch.qos.logback.classic.PatternLayout">
        <Pattern>%d [%thread] %level %mdc %logger - %msg%n</Pattern>
      </layout>
    </appender>
  </sift>
</appender>
<logger name="org.springframework" level="DEBUG" />
<root level="DEBUG">
  <appender-ref ref="SIFT-TRACE" />
</root>
Also, to make it work correctly, for each thread/file/marker/etc. you MUST put these statements around the logging work:
public void handle()
{
    MDC.put("loggerFileName", "some value");
    try {
        ...
    } finally {
        // Removing in finally guarantees the key is cleared even if the work throws.
        MDC.remove("loggerFileName");
    }
}
You have defined this root logger:
<root level="DEBUG">
  <appender-ref ref="SIFT_LOGGER" />
  <appender-ref ref="STDOUT" />
</root>
This means that all log events with Level >= DEBUG will be directed to two appenders:
SIFT_LOGGER
STDOUT
If I understand your question correctly, you do want logs to be written via your SIFT_LOGGER appender but you don't want any other log output. If so, then just remove this entry:
<appender-ref ref="STDOUT" />
The STDOUT appender is a console appender, so it doesn't actually write to a log file; instead it writes to System.out. I suspect the reason you are seeing these log events in some file is that whatever is running your application is redirecting System.out to a file. As long as you only have your SIFT_LOGGER appender in the root logger definition, you can be confident that this will be the only appender in play. Note: once you remove the STDOUT appender from the root logger, you can probably remove it from logback.xml entirely, since it is unused.
Update 1: Based on your last comment I now understand that you want to discard the logs which arrive at the SiftingAppender but do not match a given condition. I suspect what's happening here is that some log events arrive at the sifting appender with an 'unknown' value for serviceKey, and these events are then written to /var/log/tit/logback_[unknownValue]_%d{yyyy-MM-dd}_%i.log. Is this the crux of the issue? If so, then you can add a filter into the nested appender. Here are some examples:
Using Groovy to express the 'contains unknown serviceKey condition':
<filter class="ch.qos.logback.core.filter.EvaluatorFilter">
  <!-- GEventEvaluator requires Groovy -->
  <evaluator class="ch.qos.logback.classic.boolex.GEventEvaluator">
    <expression>
      serviceKey == null
    </expression>
  </evaluator>
  <OnMismatch>NEUTRAL</OnMismatch>
  <OnMatch>DENY</OnMatch>
</filter>
Using Janino to express the 'contains unknown serviceKey condition':
<filter class="ch.qos.logback.core.filter.EvaluatorFilter">
  <!-- JaninoEventEvaluator requires Janino -->
  <evaluator class="ch.qos.logback.classic.boolex.JaninoEventEvaluator">
    <expression>
      serviceKey == null
    </expression>
  </evaluator>
  <OnMismatch>NEUTRAL</OnMismatch>
  <OnMatch>DENY</OnMatch>
</filter>
With either of these filters in place, any log events which arrive at the sifting appender with the 'unknown' serviceKey will be ignored. Note: I have written the 'contains unknown serviceKey' condition as serviceKey == null; your logic might differ, but the above examples show how to tell Logback to apply this filter for you.
Just to notify #glitch (and all others interested) of the happy conclusion of this issue: the expression I managed to make work was this:
<expression>mdc.get("servicekey") == null</expression>
Thanks to this expression, I got the wanted behavior: the default _IS_UNDEFINED file is not generated when the key doesn't match the runtime traffic values.
The reason is that the event type in JaninoEventEvaluator is LoggingEvent, which exposes a reserved variable "mdc" (of type Map).
Regards,
Pierluigi

Logback Dynamic Files using Two Properties

My problem is: my application maintains three buildings, and each building has a different process.
So, using Logback, I want to create logs with this specification:
each building will have a specific folder, and inside each building's folder there will be many log files, each log file corresponding to one process.
My logback.xml right now is:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="logAppender" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator>
      <key>processName</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <appender name="FILE-${processName}" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/${distributor}/${processName}.log</file>
        <layout class="ch.qos.logback.classic.PatternLayout">
          <pattern>%d [%thread] %level %mdc %logger{35} - %msg%n</pattern>
        </layout>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
          <fileNamePattern>logs/${distributor}/${processName}.%i.log</fileNamePattern>
          <minIndex>1</minIndex>
          <maxIndex>10</maxIndex>
          <!-- <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
               <maxFileSize>5KB</maxFileSize> </timeBasedFileNamingAndTriggeringPolicy> -->
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
          <maxFileSize>10MB</maxFileSize>
        </triggeringPolicy>
      </appender>
    </sift>
  </appender>
  <logger name="processLog" level="debug" additivity="false">
    <appender-ref ref="logAppender" />
    <appender-ref ref="stdout" />
  </logger>
  <root level="debug">
    <appender-ref ref="stdout" />
    <appender-ref ref="logAppender" />
  </root>
</configuration>
And my Java servlet code is:
public class DistributorServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;
    private static Logger processLog = LoggerFactory.getLogger("processLog");

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String office = request.getParameter("office");
        MDC.put("distributor", office);
        String process = request.getParameter("process");
        MDC.put("process", process);
        processLog.debug("Processing");
    }
}
However, a log file is not generated.
Can anyone help me?
Thank you very much
1. Make the below change
private static Logger processLog = LoggerFactory.getLogger("processLog");
to
private static Logger processLog = LoggerFactory.getLogger(DistributorServlet.class);
2. Add an additional discriminator for distributor
From the logback.xml it appears that only one discriminator (for processName) has been added. Did you try adding another one for distributor?
3. Do not forget
To add MDC.remove("keyName"); after each key's usage, as sketched below.
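For instance, a sketch of point 3 applied to the servlet from the question; note that the posted logback.xml sifts on the key processName, so the MDC key has to match it (the question put the key "process" instead, which leaves the discriminator at its "unknown" default):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class DistributorServlet extends HttpServlet {
    private static final Logger processLog = LoggerFactory.getLogger(DistributorServlet.class);

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The discriminator in the posted logback.xml reads the MDC key "processName";
        // the question used "process", which never matches.
        MDC.put("distributor", request.getParameter("office"));
        MDC.put("processName", request.getParameter("process"));
        try {
            processLog.debug("Processing");
        } finally {
            // Clear both keys so pooled servlet threads do not leak MDC state.
            MDC.remove("distributor");
            MDC.remove("processName");
        }
    }
}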
4. In case you observe an issue with multiple MDC keys
I faced an issue with MDC.put when trying to add multiple keys into it (I wondered why no putAll has been defined).
Although the underlying implementation is that of a HashMap<Key, Value>, which should allow adding multiple keys, I was only able to see the last key put into MDC applied in logback.xml, while for the other keys I observed the _IS_UNDEFINED value being appended instead.
Of course, then again, you can refer to the various other links; and although this may not be the best idea, here is what I have done in my Java code:
System.setProperty("distributor", req.getParameter("office"));
System.setProperty("process", req.getParameter("process"));
Then modify the logback.xml by removing the discriminator. Alternatively, you can remove one of the above properties and keep the MDC.put for that property.
However, please do refer to the links on whether System.setProperty is safe in Java.
Alternatively I suggest https://stackoverflow.com/a/32615172/3838328
