I'm developing a web application in which I'll upload a log file; the file will be read and its lines classified by logger level (INFO, ERROR, WARNING, etc.). I need to index those logs into Elasticsearch using the Java High Level REST Client API.
Currently I'm creating one index per class name (the logs contain the class name) and storing that class's logs in that particular index. This approach won't work well in some cases: if a log file contains logs from 100 different classes, I'll be creating 100 indices for it.
Is there a more efficient way of indexing logs into Elasticsearch? How should I determine the indices in my case?
Sample log:
02-Jul-2021|10:03:10.040|INFO|[main]|org.apache.catalina.startup.VersionLoggerListener.log|Server built: Jun 11 2021 13:32:01 UTC
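For context, the usual alternative to per-class indices is a single (possibly time-based) index in which the class name is just a document field. Below is a minimal sketch using the High Level REST Client; the index name, field names, and hard-coded values are illustrative assumptions, not taken from the original post.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class LogIndexer {
    public static void main(String[] args) throws IOException {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

            // One document per log line; the class name is a field, not an index.
            Map<String, Object> doc = new HashMap<>();
            doc.put("timestamp", "02-Jul-2021 10:03:10.040");
            doc.put("level", "INFO");
            doc.put("thread", "main");
            doc.put("class", "org.apache.catalina.startup.VersionLoggerListener");
            doc.put("message", "Server built: Jun 11 2021 13:32:01 UTC");

            // A single daily index ("app-logs-2021.07.02" is an illustrative name);
            // per-class queries become term filters on the "class" field.
            client.index(new IndexRequest("app-logs-2021.07.02").source(doc),
                    RequestOptions.DEFAULT);
        }
    }
}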
Here is the practice that we use in some of our Java-based stacks; its main advantage is using Apache Kafka as the middle data pipeline and Logstash as the data ingestion pipeline.
First you need to remove the default log provider in your Spring Boot application inside your pom.xml file (Logback, pulled in as logback-classic), then add Log4j2 as the new log provider along with the Kafka appender. After adding the dependencies, you need an XML configuration file holding your Kafka appender configuration. By default, the configuration file must live on the resource path of your project and be named "log4j2.xml".
There are many other Log4j2 appenders, such as the Cassandra or Failover appenders, which you can add beside your Kafka appender inside the same configuration file. A working example follows.
<!-- excluding logback -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
        <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- adding log4j2 and the kafka appender -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-log4j-appender</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Kafka appender configuration
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="info" name="kafka-appender" packages="Citydi.ElasticDemo">
    <Appenders>
        <Kafka name="kafkaLogAppender" topic="Second-Topic">
            <JSONLayout/>
            <Property name="bootstrap.servers">localhost:9092</Property>
            <MarkerFilter marker="Recorder" onMatch="DENY" onMismatch="ACCEPT"/>
        </Kafka>
        <Console name="stdout" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} stdout %highlight{%-5p} [%-7t] %F:%L - %m%n"/>
            <MarkerFilter marker="Recorder" onMatch="DENY" onMismatch="ACCEPT"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="kafkaLogAppender"/>
            <AppenderRef ref="stdout"/>
        </Root>
        <Logger name="org.apache.kafka" level="warn"/> <!-- avoid recursive logging through the Kafka appender -->
    </Loggers>
</Configuration>
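With this configuration on the classpath, any Log4j2 logger ships its events both to stdout and to the configured Kafka topic. A minimal usage sketch (the class name is illustrative):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class KafkaLoggingDemo {
    private static final Logger LOG = LogManager.getLogger(KafkaLoggingDemo.class);

    public static void main(String[] args) {
        // Routed by the Root logger to both kafkaLogAppender and stdout.
        LOG.info("Hello from Log4j2 over Kafka");
    }
}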
Start Zookeeper
./zookeeper-server-start.sh ../config/zookeeper.properties
Start the Kafka broker
./kafka-server-start.sh ../config/server.properties
Create the topic
./kafka-topics.sh --create --topic test-topic --zookeeper localhost:2181 --replication-factor 1 --partitions 4
Start a console consumer on the created topic, to verify that log events arrive
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic
Note that the topic the appender writes to must match the topic you consume (the appender above uses "Second-Topic" while the commands create "test-topic"; pick one name). Then create a Logstash pipeline such as the configuration below to ingest your logs into your desired index in Elasticsearch.
input {
  kafka {
    group_id => "35834"
    topics => ["yourtopicname"]
    bootstrap_servers => "localhost:9092"
    codec => json
  }
}
filter {
}
output {
  file {
    path => "C:\somedirectory"
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    # document_type is deprecated since mapping types were removed; it can be omitted on Elasticsearch 7+
    document_type => "_doc"
    index => "yourindexname"
  }
  stdout { codec => rubydebug }
}
Related
I am trying to send slf4j log messages in my Helidon MP application to a Kafka server that runs on port 9092. I have the following class as an example:
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class Service {

    private final ConfigProvider configProvider;

    @Inject
    public Service(ConfigProvider configProvider) {
        this.configProvider = configProvider;
    }

    public String getString() {
        String msg = String.format("%s !", configProvider.getString());
        log.info("Entered getString() method");
        return msg;
    }
}
I also have a logging.xml file which specifies the Appender as KafkaAppender:
<Configuration>
    <Appenders>
        <Kafka name="KafkaAppender" topic="app-logs" syncSend="false">
            <Property name="bootstrap.servers" value="localhost:9092"/>
        </Kafka>
    </Appenders>
    <Loggers>
        <Logger name="org.apache.kafka" level="WARN"/> <!-- avoid recursive logging -->
        <Root level="INFO">
            <AppenderRef ref="KafkaAppender"/>
        </Root>
    </Loggers>
</Configuration>
However, when I run the application, I get the following errors:
2022-11-28 14:23:17,358 main ERROR No layout provided for KafkaAppender
2022-11-28 14:23:17,362 main ERROR Null object returned for Kafka in Appenders.
2022-11-28 14:23:17,364 main ERROR Unable to locate appender "KafkaAppender" for logger config "root"
Any suggestions on how to make KafkaAppender work with Helidon?
Helidon does nothing special for logging. The only thing to note is that Helidon uses JUL (java.util.logging) internally.
KafkaAppender is a Log4j2 appender, while your application logs through the SLF4J API, so you need to bridge JUL to SLF4J and then back SLF4J with Log4j2:
1. Set up the JUL to SLF4J bridge
See the javadocs.
Add the following dependency to your pom.xml:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jul-to-slf4j</artifactId>
</dependency>
You can setup the bridge either programmatically or by configuration.
Programmatically
import org.slf4j.bridge.SLF4JBridgeHandler;

// Run once at startup, before anything logs through JUL.
SLF4JBridgeHandler.removeHandlersForRootLogger();
SLF4JBridgeHandler.install();
Using logging.properties
If you are using Helidon SE, you need to have code in your main class to load logging.properties. Helidon provides a utility for that, see below. If you are using Helidon MP, this is done for you automatically.
import io.helidon.common.LogConfig;
LogConfig.configureRuntime();
If you already have the statement above in your main method, you can set up the bridge by adding the following to your logging.properties file.
handlers = org.slf4j.bridge.SLF4JBridgeHandler
If you don't have a logging.properties file yet, create it under src/main/resources so that it is added to your project JAR.
2. Set up the SLF4J implementation
Import the Log4J2 bom in your pom.xml:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-bom</artifactId>
            <version>2.19.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
Add the following dependencies to your pom.xml:
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
</dependency>
See the docs.
3. Configure Log4J2
See the docs.
The default path for an XML configuration file is log4j2.xml. Create the file at src/main/resources/log4j2.xml:
<Configuration>
    <Appenders>
        <Kafka name="KafkaAppender" topic="app-logs" syncSend="false">
            <PatternLayout pattern="%date %message"/>
            <Property name="bootstrap.servers" value="localhost:9092"/>
        </Kafka>
    </Appenders>
    <Loggers>
        <Logger name="org.apache.kafka" level="WARN"/> <!-- avoid recursive logging -->
        <Root level="INFO">
            <AppenderRef ref="KafkaAppender"/>
        </Root>
    </Loggers>
</Configuration>
Add the kafka-clients dependency to your pom.xml:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
</dependency>
Please note that there is nothing specific to Helidon here apart from the dependency version management and the utility provided to load logging.properties. You can use the same steps on any "plain" Java project.
I am using Logback in an application running in Tomcat. My application works and, in the debugger, I can see my logging statements being executed, yet these statements never reach /opt/tomcat/logs/catalina.out. (I do see these statements in the IntelliJ IDEA debugger console, but upon deployment they don't reach catalina.out.) Where do I begin?
In my WAR, WEB-INF/classes/logback.xml looks like this:
<configuration>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${catalina.base}/logs/catalina.out</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <maxFileSize>10MB</maxFileSize>
            <maxHistory>10</maxHistory>
        </rollingPolicy>
        <immediateFlush>true</immediateFlush>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="FILE" />
    </root>
</configuration>
In code, I do this for example:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Validator
{
    private static final Logger logger = LoggerFactory.getLogger( Validator.class );
    ...
    public void foo()
    {
        logger.info( "Called foo()" );
Correspondingly, in pom.xml dependencies, I have this:
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>${logback-version}</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-access</artifactId>
    <version>${logback-version}</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>${slf4j.version}</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>${slf4j.version}</version>
</dependency>
I also tried this in logback.xml. It didn't create the file. All logfiles are owned by tomcat:tomcat. Tomcat owns the thread that writes to the log.
<file>${catalina.base}/logs/test.log</file>
The problem turned out to be the slf4j JARs and multiple bindings. See
https://www.slf4j.org/codes.html#multiple_bindings.
I eliminated slf4j-simple from the dependencies shown above. The two slf4j bindings in conflict essentially barred logback.xml from having any effect on logging behavior. Upon fixing this, JUnit tests still worked, the IDE-embedded Tomcat began to reveal TRACE-level statements when the level was set to TRACE, and the same happened when deployed to a real Tomcat installation, where the statements now come out in catalina.out.
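For clarity, a sketch of the trimmed dependency set (slf4j-simple removed; the version properties are the ones from the question, and logback-access stays only if you actually use it):

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>${logback-version}</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>${slf4j.version}</version>
</dependency>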
One additional observation: if a log-level configuration change is made by hand directly on logback.xml (on the path /opt/tomcat/webapps/application/WEB-INF/classes/logback.xml), I found that Tomcat had to be bounced for it to take effect (even though I played with <configuration scan="true">, etc.).
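For reference, logback's auto-reload is normally enabled like this (a sketch; the scanPeriod shown is an assumption, logback defaults to one minute when it is omitted):

<configuration scan="true" scanPeriod="30 seconds">
    <!-- appenders and loggers as above -->
</configuration>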
I was getting the following error message in AWS Lambda logs:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
so I added the following Maven dependencies:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.21</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.21</version>
</dependency>
But after adding the slf4j dependencies, unwanted logs from Azure Service Bus are also getting printed now, e.g.
[ReactorThread1288184d-400a-4928-b174-b819c8bd9ee1] INFO com.microsoft.azure.servicebus.primitives.MessagingFactory - starting reactor instance.
My log4j.xml looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="com.amazonaws.services.lambda.runtime.log4j2">
    <Appenders>
        <Lambda name="Lambda">
            <PatternLayout>
                <pattern>%d{yyyy-MM-dd HH:mm:ss} %X{AWSRequestId} %-5p %c{1}:%L - %m%n</pattern>
            </PatternLayout>
        </Lambda>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Lambda" />
        </Root>
    </Loggers>
</Configuration>
How can I disable these logs from being printed?
I think the problem is related to the logging dependencies of your project.
On one hand, you have AWS Lambda, which uses Log4j2 for logging. On the other hand, you have Azure Service Bus, which uses the SLF4J API facade for logging. You need to configure your system to support both logging approaches.
First of all, you need the following dependencies in your pom.xml file:
<dependencies>
    ...
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-log4j2</artifactId>
        <version>1.2.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-slf4j-impl</artifactId>
        <version>2.13.2</version>
    </dependency>
    ...
</dependencies>
The first dependency allows you to emit log traces to AWS Lambda.
The second one is the Log4j2 SLF4J bridge, necessary for Azure Service Bus logging.
Please also remove the slf4j-simple dependency (group org.slf4j); it will no longer be necessary.
With these dependencies in place, please include the following line:
<Logger name="com.microsoft.azure.servicebus" level="OFF"/>
in your XML configuration file. (By convention, it is better to name your Log4j2 configuration file log4j2.xml instead of log4j.xml, which is more appropriate for the old Log4j 1.x library.)
Your log4j2.xml should look something like the following:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="com.amazonaws.services.lambda.runtime.log4j2">
    <Appenders>
        <Lambda name="Lambda">
            <PatternLayout>
                <pattern>%d{yyyy-MM-dd HH:mm:ss} %X{AWSRequestId} %-5p %c{1}:%L - %m%n</pattern>
            </PatternLayout>
        </Lambda>
    </Appenders>
    <Loggers>
        <Logger name="com.microsoft.azure.servicebus" level="OFF"/>
        <Root level="info">
            <AppenderRef ref="Lambda" />
        </Root>
    </Loggers>
</Configuration>
Firstly, this is not an error. The message means that there is no logger implementation present on the classpath, so SLF4J defaults to a NOP (no-operation) logger. Since SLF4J is an abstraction over different logging frameworks, you need to include a specific logging framework on your classpath apart from SLF4J itself. Since you are trying to use the Lambda appender, add aws-lambda-java-log4j2 to your classpath; it will bring in the required SLF4J dependencies as well.
Latest version as of May 05, 2020:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-log4j2</artifactId>
    <version>1.2.0</version>
</dependency>
I'm working on an app that logs using the slf4j api:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
...
private static final Logger LOG = LoggerFactory.getLogger(FreemarkerEmailPreviewGenerator.class);
...
LOG.error("Error generating email preview", e);
(Code above posted to show classes and packages in use, but pretty standard stuff.)
We use logback configured as follows:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>[%d{HH:mm:ss.SSS}] [%thread] [%-5level %logger{26} - %msg]%n</pattern>
        </encoder>
    </appender>
    <root>
        <level value="debug" />
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
Some of our code makes use of 3rd-party libraries that log with java.util.logging, specifically FreeMarker. As you can see from the following console log entries, both logback and j.u.l. are logging to the console, but they are not using the same config (the logback entries use our pattern, the j.u.l. ones don't):
[12:24:38.842] [pool-2-thread-19] [INFO u.o.n.r.l.s.e.t.TemplateLoaderFromService - Finding template workflow/mail/templates/common/workflow-macros.ftl]
[12:24:38.859] [pool-2-thread-19] [INFO u.o.n.r.l.s.e.t.TemplateLoaderFromService - Loaded template workflow/mail/templates/common/workflow-macros.ftl as /workflow/mail/templates/common/workflow-macros.ftl from RegistryMailTemplateService.]
11-Jan-2017 12:24:38 freemarker.log.JDK14LoggerFactory$JDK14Logger error
SEVERE:
Expression domainContact is undefined on line 9, column 74 in workflow/mail/templates/common/workflow-macros.ftl.
The problematic instruction:
----------
==> ${domainContact.name} [on line 9, column 72 in workflow/mail/templates/common/workflow-macros.ftl]
Is it possible to make j.u.l logging use the logback config so that we have a single consistent logging config for the whole app?
Your application needs to have the following jars:
Application -> Freemarker -> java.util.logging -> SLF4J Api: jul-to-slf4j.jar
Application -> SLF4J API: slf4j-api.jar
SLF4J API -> logback: logback-classic.jar and logback-core.jar
Since your application already contains slf4j-api.jar and logback-classic.jar, you probably only need to add jul-to-slf4j.jar.
If you use Maven:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jul-to-slf4j</artifactId>
    <version>1.7.22</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.8</version>
</dependency>
logback-classic will transitively pull in logback-core and slf4j-api.
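One caveat worth noting: the jul-to-slf4j jar only provides the bridge handler; it still has to be installed before j.u.l. records are routed to logback. A minimal sketch (run once at startup; the class name is illustrative):

import org.slf4j.bridge.SLF4JBridgeHandler;

public class LoggingBootstrap {
    public static void main(String[] args) {
        // Remove JUL's default console handlers, then route all JUL
        // records (FreeMarker's included) through SLF4J into logback.
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        SLF4JBridgeHandler.install();
    }
}

For hot paths, logback's LevelChangePropagator context listener is also worth enabling, so that bridged j.u.l. calls below the effective level stay cheap.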
I am using slf4j for logging and GlassFish as the app server.
My logback.xml
<configuration debug="true" scan="true">
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>C:\glassfish4\glassfish\logs\log.log</file>
        <encoder>
            <Pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{52} - %msg%n</Pattern>
        </encoder>
    </appender>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <Pattern>%msg%n</Pattern>
        </encoder>
    </appender>
    <logger level="DEBUG" name="ru.vmakarenko"/>
    <root>
        <level value="INFO"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
My logging.properties
javax.enterprise.system.tools.admin.level=INFO
handlers=org.slf4j.bridge.SLF4JBridgeHandler
java.util.logging.ConsoleHandler.formatter=com.sun.enterprise.server.logging.UniformLogFormatter
javax.enterprise.system.ssl.security.level=INFO
org.apache.jasper.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.flushFrequency=1
org.eclipse.persistence.session.level=INFO
javax.enterprise.system.tools.backup.level=INFO
javax.enterprise.resource.corba.level=INFO
javax.enterprise.resource.webcontainer.jsf.resource.level=INFO
javax.enterprise.system.core.classloading.level=INFO
javax.enterprise.resource.jta.level=INFO
java.util.logging.ConsoleHandler.level=FINEST
com.sun.enterprise.server.logging.GFFileHandler.file=${com.sun.aas.instanceRoot}/logs/server.log
javax.enterprise.system.webservices.saaj.level=INFO
javax.enterprise.system.tools.deployment.level=INFO
javax.enterprise.system.container.ejb.level=INFO
org.glassfish.naming.level=INFO
javax.enterprise.system.core.transaction.level=INFO
org.apache.catalina.level=INFO
javax.enterprise.resource.webcontainer.jsf.lifecycle.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.rotationTimelimitInMinutes=0
javax.enterprise.resource.webcontainer.jsf.config.level=INFO
javax.enterprise.system.container.ejb.mdb.level=INFO
javax.enterprise.resource.webcontainer.jsf.timing.level=INFO
javax.enterprise.system.core.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.rotationOnDateChange=false
com.sun.enterprise.server.logging.GFFileHandler.excludeFields=
org.apache.coyote.level=INFO
ShoalLogger.level=CONFIG
javax.level=INFO
javax.enterprise.resource.webcontainer.jsf.taglib.level=INFO
java.util.logging.FileHandler.limit=50000
javax.enterprise.system.webservices.rpc.level=INFO
javax.enterprise.resource.javamail.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.logtoConsole=true
javax.enterprise.system.container.web.level=INFO
javax.enterprise.resource.webcontainer.jsf.facelets.level=INFO
javax.enterprise.system.util.level=INFO
javax.enterprise.resource.resourceadapter.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.level=ALL
javax.org.glassfish.persistence.level=INFO
javax.enterprise.resource.webcontainer.jsf.context.level=INFO
javax.enterprise.resource.webcontainer.jsf.application.level=INFO
javax.enterprise.resource.jms.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.multiLineMode=true
com.sun.enterprise.server.logging.GFFileHandler.rotationLimitInBytes=2000000
javax.enterprise.system.core.config.level=INFO
org.jvnet.hk2.osgiadapter.level=INFO
javax.enterprise.system.level=INFO
javax.enterprise.system.core.security.level=INFO
javax.enterprise.system.container.cmp.level=INFO
java.util.logging.FileHandler.pattern=%h/java%u.log
com.sun.enterprise.server.logging.SyslogHandler.useSystemLogging=false
javax.enterprise.resource.sqltrace.level=FINE
javax.enterprise.resource.webcontainer.jsf.renderkit.level=INFO
handlerServices=com.sun.enterprise.server.logging.GFFileHandler
javax.enterprise.system.webservices.registry.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.alarms=false
javax.enterprise.system.core.selfmanagement.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.formatter=com.sun.enterprise.server.logging.UniformLogFormatter
.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.maxHistoryFiles=0
log4j.logger.org.hibernate.validator.util.Version=warn
java.util.logging.FileHandler.count=1
javax.enterprise.resource.webcontainer.jsf.managedbean.level=INFO
org.glassfish.admingui.level=INFO
javax.enterprise.resource.jdo.level=INFO
com.sun.enterprise.server.logging.GFFileHandler.retainErrorsStasticsForHours=0
And domain.xml (jvm-options)
<jvm-options>-Djava.util.logging.config.file=file:///${com.sun.aas.instanceRoot}/config/logging.properties</jvm-options>
<jvm-options>-Dlogback.configurationFile=file:///${com.sun.aas.instanceRoot}/config/logback.xml</jvm-options>
So I get the C:\glassfish4\glassfish\logs\log.log file with all the logged output I need.
But I get nothing in the Eclipse Console. I have GlassFish Tools installed, and the server is managed from Eclipse. What is my mistake? How can I redirect output both to the file and to the console?
Also, when I run Maven, I get
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Maybe it's part of the problem?
There seems to be a bug in the console processing of the Eclipse GlassFish Tools plugin; check this thread and this related bug report.
I have a similar problem - I can't get the JUL logging to work properly. If I set the logging level of my application class to anything lower than CONFIG, nothing is logged to the Eclipse console, but it is also still logged to server.log. Also, if I start the GlassFish server outside of Eclipse, I do get all my FINE level messages printed on the terminal.
UPDATE:
I could finally make log entries with a level below CONFIG work by using a thin wrapper which, for example, calls Logger.logp(Level.FINE, null, null, msg). GlassFish Tools apparently can't handle the format produced by com.sun.enterprise.server.logging.ODLLogFormatter for levels FINE and lower, which includes additional CLASSNAME and METHODNAME fields, so setting the source explicitly to null did the trick.
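A minimal sketch of such a wrapper (the class and method names are illustrative):

import java.util.logging.Level;
import java.util.logging.Logger;

public final class ConsoleSafeLogger {
    private final Logger delegate;

    public ConsoleSafeLogger(Class<?> owner) {
        this.delegate = Logger.getLogger(owner.getName());
    }

    public void fine(String msg) {
        // Passing null for sourceClass and sourceMethod keeps ODLLogFormatter
        // from emitting the CLASSNAME/METHODNAME fields that the GlassFish
        // Tools console apparently fails to parse for FINE and below.
        delegate.logp(Level.FINE, null, null, msg);
    }
}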
If you are using log4j, then declare the slf4j log4j binding too in your pom.xml file:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.5.6</version>
</dependency>
or, if you are not using log4j, then make sure you have all of the below dependencies in your pom.xml file:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.6.6</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.0.7</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.0.7</version>
</dependency>