I have a Spring Boot microservice. For logging I'm using the Elastic Common Schema, implemented using ecs-logging-java.
I want to set the trace.id and a transaction.id but I'm not sure how.
Bonus question: am I right in thinking trace.id should be the ID that follows the request through multiple systems, while transaction.id is just for within the service?
Configure your logging pattern as below:
<pattern> %d{yyyy-MM-dd HH:mm:ss.SSS} %thread [%X{trace-id}] [%-5level] %class{0} - %msg%n </pattern>
Put the trace id in the MDC (the MDC belongs to a particular thread's context):
`MDC.put("trace-id", "traceid1");`
Now whenever your code logs a message, the trace id will be printed with it.
For more detail, follow the logback MDC manual: http://logback.qos.ch/manual/mdc.html
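Since the MDC is per-thread and servlet containers pool threads, it is safest to scope the entry to one unit of work so the value cannot leak into the next request handled by the same thread. A minimal sketch (traceId being whatever id you generated or received):

    MDC.put("trace-id", traceId);
    try {
        // ... handle the request / do the work ...
    } finally {
        MDC.remove("trace-id"); // clear before the thread returns to the pool
    }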
Step 1: Add a trace id to the thread context.
This can be done using the MDC, which manages contextual information on a per-thread basis.
Add the line below at the start of any method from which you want to trace logs.
MDC.put("TRACE_ID", UUID.randomUUID().toString());
Step 2: Add the trace id to the log format.
Java logging does not include a trace id by default, so we reference the trace id we previously put in the thread context from the log pattern.
This can be done in application.properties; I have added [%X{TRACE_ID}] to the default console log pattern.
logging.pattern.console=%clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint} [%X{TRACE_ID}] %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}
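Rather than calling MDC.put() at the start of every method, you can set the id once per request in a servlet filter. A minimal sketch, assuming a recent Spring Boot with jakarta.servlet (older versions use javax.servlet); the class name is made up:

    import java.io.IOException;
    import java.util.UUID;

    import jakarta.servlet.FilterChain;
    import jakarta.servlet.ServletException;
    import jakarta.servlet.http.HttpServletRequest;
    import jakarta.servlet.http.HttpServletResponse;

    import org.slf4j.MDC;
    import org.springframework.stereotype.Component;
    import org.springframework.web.filter.OncePerRequestFilter;

    @Component
    public class TraceIdFilter extends OncePerRequestFilter {

        @Override
        protected void doFilterInternal(HttpServletRequest request,
                                        HttpServletResponse response,
                                        FilterChain chain) throws ServletException, IOException {
            // One id per incoming request, cleared afterwards so it cannot
            // leak into the next request served by the same pooled thread.
            MDC.put("TRACE_ID", UUID.randomUUID().toString());
            try {
                chain.doFilter(request, response);
            } finally {
                MDC.remove("TRACE_ID");
            }
        }
    }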
I thought I had documented this, but the closest I could come is in Log4j Audit's RequestContext. I guess I need to add a new entry to my blog. The short answer to this is that you use Log4j 2's ThreadContextMap. First, when a user logs in, create a session map that contains the data you want to capture in each request, such as the user's IP address and loginId. Then create a servlet Filter or Spring Interceptor to add that data, as well as a unique request id, to Log4j 2's Thread Context Map.
All log events will include the data in the ThreadContext. The ECSLayout automatically includes all the fields in the ThreadContextMap.
Lastly, you need to propagate the RequestContext to downstream services. You do that by creating a Spring Interceptor that gets wired into the RestTemplate which converts the RequestContext fields into HTTP headers. The downstream service then has a Filter or Spring Interceptor that converts the headers back into RequestContext attributes. Log4j Audit (referenced above) has examples and implementations of all these components.
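As a hedged sketch of the outgoing half of that propagation (the header and key names here are illustrative, not anything Log4j Audit prescribes), a RestTemplate interceptor could look like:

    import java.io.IOException;

    import org.apache.logging.log4j.ThreadContext;
    import org.springframework.http.HttpRequest;
    import org.springframework.http.client.ClientHttpRequestExecution;
    import org.springframework.http.client.ClientHttpRequestInterceptor;
    import org.springframework.http.client.ClientHttpResponse;

    public class RequestContextPropagatingInterceptor implements ClientHttpRequestInterceptor {

        @Override
        public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                ClientHttpRequestExecution execution) throws IOException {
            // Copy the per-thread context entry into an outgoing HTTP header.
            String requestId = ThreadContext.get("requestId");
            if (requestId != null) {
                request.getHeaders().add("X-Request-ID", requestId);
            }
            return execution.execute(request, body);
        }
    }

You would register it with restTemplate.getInterceptors().add(...), and the downstream service's Filter would read the header and call ThreadContext.put("requestId", ...) again.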
I should add that the method described above does not implement tracing as described by the W3C Trace Context spec, so it is also not compatible with Elasticsearch's distributed tracing support. It is worth noting, however, that if one were to include Elasticsearch's distributed tracing support along with New Relic's distributed tracing support, they would step on each other.
I have the following configuration:
<int:channel id="responseXmlChannel">
<int:interceptors>
<int:wire-tap channel="responseXmlLogger" />
</int:interceptors>
</int:channel>
<int:logging-channel-adapter id="responseXmlLogger"
logger-name="test.SyncResponseLogger" level="DEBUG"
expression="'message received, headers:' + headers + ' payload:' + payload"/>
I am trying to add some session attributes to be logged, the current user for example.
I was taking a look at the header-enricher approach, but I could not find a good example of adding session attributes to the header using it.
Is there a way to do what I am trying to do? Is there other approach besides using the headers to add custom attributes to be logged?
My goal is to have in the logs something like
current user: $UserName. Payload: $Payload
OK. So, what is your question then? How to use the header-enricher? I'm not sure what your "session" object is, to be precise with the answer, but the header-enricher can be configured to call any arbitrary method on beans. See its docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-transformation.html#header-enricher.
The other approach is to deal with a ThreadLocal. That's exactly what the RequestContextHolder in Spring Web does for RequestAttributes with its HTTP session access. Although you need to keep in mind that moving to a different thread, with queue or executor channels in between, will lose the current thread context and you won't have access to that ThreadLocal. So headers in the particular message are not such a bad choice.
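For example, a hedged sketch of such a bean (the bean name and session attribute name are made up), using RequestContextHolder to reach the current HTTP session:

    import org.springframework.stereotype.Component;
    import org.springframework.web.context.request.RequestAttributes;
    import org.springframework.web.context.request.RequestContextHolder;

    @Component("sessionUserProvider")
    public class SessionUserProvider {

        // Reads the user name from the current HTTP session; falls back when
        // called from a thread with no bound request (queue/executor channels).
        public String currentUser() {
            RequestAttributes attrs = RequestContextHolder.getRequestAttributes();
            if (attrs == null) {
                return "unknown";
            }
            Object user = attrs.getAttribute("currentUser", RequestAttributes.SCOPE_SESSION);
            return user != null ? user.toString() : "unknown";
        }
    }

The header-enricher could then call it with something like <int:header name="userName" expression="@sessionUserProvider.currentUser()"/>, and your logging-channel-adapter expression can reference headers.userName to produce the "current user: ... Payload: ..." output you described.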
We are using Keycloak for SSO purposes. In particular, we are able to use the REST API /admin/realms/{realm}/users to get the basic user details in a Keycloak realm. The response we get is a UserRepresentation, which seems to have provision for realmRoles and clientRoles as well, but by default these are not populated (required: false).
We have a new requirement to fetch the roles of all users. I see there are additional APIs exposed to get these roles, e.g. /auth/admin/realms/realm/users/user-id/role-mappings/realm/, but this means firing another request per user, and if we have 2k users that means 2k more requests.
My question: since UserRepresentation also has the properties realmRoles and clientRoles, which seem to be optional by default, how can I have them populated when calling /admin/realms/{realm}/users, and avoid the additional requests to get roles?
I'm afraid that getting the data you need in one request is not possible: just by looking at the source code that gets all users in UsersResource, you can see that realmRoles and clientRoles are never populated.
That said, there is one thing you can do: write your own REST resource by implementing an SPI. In fact, in the past I had a similar problem with the groups resource and I ended up writing my own resource. In this case you will need to write a custom resource with just one method, getting all users with roles. You can just copy-paste current Keycloak logic and add extra bits, or extend the built-in UsersResource. This, however, is not a silver bullet: in the long run you will be required to maintain your own code, and upgrades to the latest Keycloak may not be that simple if some interface changes.
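As a rough sketch of what such a resource could look like (the class name and path are made up, and the model API shown matches recent Keycloak versions, so expect to adjust it to yours):

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    import jakarta.ws.rs.GET;
    import jakarta.ws.rs.Path;
    import jakarta.ws.rs.Produces;
    import jakarta.ws.rs.core.MediaType;

    import org.keycloak.models.KeycloakSession;
    import org.keycloak.models.RealmModel;
    import org.keycloak.models.RoleModel;
    import org.keycloak.models.utils.ModelToRepresentation;
    import org.keycloak.representations.idm.UserRepresentation;
    import org.keycloak.services.resource.RealmResourceProvider;

    public class UsersWithRolesResource implements RealmResourceProvider {

        private final KeycloakSession session;

        public UsersWithRolesResource(KeycloakSession session) {
            this.session = session;
        }

        @Override
        public Object getResource() {
            return this;
        }

        @GET
        @Path("users-with-roles")
        @Produces(MediaType.APPLICATION_JSON)
        public List<UserRepresentation> usersWithRoles() {
            RealmModel realm = session.getContext().getRealm();
            return session.users().searchForUserStream(realm, Map.of())
                    .map(user -> {
                        UserRepresentation rep =
                                ModelToRepresentation.toRepresentation(session, realm, user);
                        // The extra bit the built-in endpoint never fills in:
                        rep.setRealmRoles(user.getRealmRoleMappingsStream()
                                .map(RoleModel::getName)
                                .collect(Collectors.toList()));
                        return rep;
                    })
                    .collect(Collectors.toList());
        }

        @Override
        public void close() {
        }
    }

You would still need a matching RealmResourceProviderFactory registered via META-INF/services for Keycloak to pick this up.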
Let's say that I have a REST API with endpoint /user/{user_id}/foo.
Now when it is called I would like that all logs that come from handling this request contain information about {user_id}. Is it possible to achieve that without passing {user_id} to every method?
I'm using SLF4j for logging, my application is based on Spring Boot.
You could also use MDC for this, see here. It's essentially a map, you just put your contextual information in it (e.g. user id) and then you can use it in your log layout. Be aware that this only works with certain underlying frameworks like logback, where a sample layout pattern would look like this:
<Pattern>%X{user_id} %m%n</Pattern>
Check the logback manual for more details on this.
You can use Logback's Mapped Diagnostic Context to propagate the {user_id} to every log message.
There are two parts to this:
Push your {user_id} into MDC e.g. MDC.put("user_id", "Pawel");
Include the MDC entry in your log statements. You do this by specifying it in your logging pattern. So, if you store the user id in an MDC entry named "user_id", then you would set logging.pattern.level=user_id:%X{user_id} %5p to include the value of that entry in every log event.
More details in the docs
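Putting the two parts together for the /user/{user_id}/foo case: a hedged sketch of a Spring MVC interceptor (the class name is made up; jakarta.servlet imports assume a recent Spring Boot) that copies the path variable into the MDC and cleans it up afterwards:

    import java.util.Map;

    import jakarta.servlet.http.HttpServletRequest;
    import jakarta.servlet.http.HttpServletResponse;

    import org.slf4j.MDC;
    import org.springframework.web.servlet.HandlerInterceptor;
    import org.springframework.web.servlet.HandlerMapping;

    public class UserIdMdcInterceptor implements HandlerInterceptor {

        @Override
        public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                                 Object handler) {
            @SuppressWarnings("unchecked")
            Map<String, String> vars = (Map<String, String>)
                    request.getAttribute(HandlerMapping.URI_TEMPLATE_VARIABLES_ATTRIBUTE);
            if (vars != null && vars.get("user_id") != null) {
                MDC.put("user_id", vars.get("user_id"));
            }
            return true; // continue with the handler
        }

        @Override
        public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
                                    Object handler, Exception ex) {
            MDC.remove("user_id"); // don't leak onto the next pooled request
        }
    }

Register it through a WebMvcConfigurer's addInterceptors(), and every log statement in the request's call chain picks up %X{user_id} without passing it to every method.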
There is a webapp where every request consumes various external resources. The webapp tracks those consumed resources with a request-scoped bean. Then the HandlerInterceptor's afterCompletion method calls a TaskExecutor to store this information in the DB. All is fine and dandy, but now comes the requirement to add bandwidth consumption as another resource. Counting outgoing response size is a typical task for a servlet filter (along with a response wrapper and a custom stream implementation). So this is done and is also working.
The problem is that I'd like to aggregate the two things together. Obviously, I can't pass "bytes sent" to the Spring HandlerInterceptor, because the filter's doFilter() hasn't completed yet and the number of bytes sent isn't known when the interceptor runs. So the filter must be the place to aggregate all the resource usage and start the async task to store it in the DB. The problem is: how can I pass data from the HandlerInterceptor to the Filter? I've tried a simple request.setAttribute() but surprisingly it didn't work.
As a side note: I'm aware of the request-scoped bean lifecycle, and in the handler I'm creating a simple POJO populated with data from the scoped bean.
The problem turned out to be quite trivial. The app was doing a lot of forwards/dispatches as part of somewhat normal request handling. It turned out that my filter was called (so far, so good), then another filter down the chain did a forward/dispatch, which then did the actual request processing. The default filter setup catches only plain requests, and you need additional configuration (web.xml) to also filter forwards/dispatches. I just did that and it solved the problem.
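For reference, if the app had used Spring Boot's Java config instead of web.xml, the equivalent would be registering the filter for the extra dispatcher types. A sketch, assuming a recent Spring Boot (older versions import javax.servlet); the filter class name is made up:

    import java.util.EnumSet;

    import jakarta.servlet.DispatcherType;

    import org.springframework.boot.web.servlet.FilterRegistrationBean;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class FilterConfig {

        @Bean
        public FilterRegistrationBean<ResourceUsageFilter> resourceUsageFilter() {
            FilterRegistrationBean<ResourceUsageFilter> reg =
                    new FilterRegistrationBean<>(new ResourceUsageFilter());
            // By default only REQUEST is matched; include forwards/dispatches too.
            reg.setDispatcherTypes(EnumSet.of(
                    DispatcherType.REQUEST, DispatcherType.FORWARD, DispatcherType.ASYNC));
            return reg;
        }
    }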
I'm using logback as my logging framework and have a couple of jobs that run the same main function with different parameters and would like to create a log file for each job and name the log file with the job's name.
For example, if I had jobs a,b,c that all run MyClass.main() but with different parameters, then I'd like to see a-{date}.log, b-{date}.log, c-{date}.log.
I can achieve the {date} part by specifying a <fileNamePattern>myjob-%d{yyyy-MM-dd}.log</fileNamePattern> in my logback.xml, but I'm not sure how to (or if it is even possible) create the prefix of the file names dynamically (to be the job's name).
Is there a way to dynamically name logfiles in logback? Is there another logging framework that makes this possible?
As a follow up question, am I just taking a bad approach for having multiple jobs that call the same main function with different parameters and wanting a log file named after each job? If so is there a standard/best practice solution for this case?
EDIT: The reason why I want to name each log file after the name of the job is that each job naturally defines a "unit of work" and it is easier for me to find the appropriate log file in case one of the job fails. I could simply use a rolling log file for jobs a,b,c but I found it harder for me to look through the logs and pinpoint where each job started and ended.
I would use your own logging.
public static PrintWriter getLoggerFor(String prefix) throws IOException {
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
    String filename = prefix + "-" + sdf.format(new Date()) + ".log";
    // FileWriter in append mode; PrintWriter auto-flushes on println().
    return new PrintWriter(new FileWriter(filename, true), true);
}
You can write a simple LRU cache e.g. with LinkedHashMap to reuse the PrintWriters.
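A sketch of that cache, using LinkedHashMap's access-order mode so the least recently used writer is evicted (and closed) first; MAX_OPEN_WRITERS is an arbitrary limit:

    private static final int MAX_OPEN_WRITERS = 16;

    private static final Map<String, PrintWriter> WRITERS =
            new LinkedHashMap<String, PrintWriter>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, PrintWriter> eldest) {
                    if (size() > MAX_OPEN_WRITERS) {
                        eldest.getValue().close(); // flush and release the file handle
                        return true;
                    }
                    return false;
                }
            };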
Is there a way to dynamically name logfiles in logback? Is there another logging framework that makes this possible?
I don't believe this is possible using the out of the box appenders (File, RollingFile etc) configured by a standard logback.xml file. To do what you want, you would need to dynamically create appenders on the fly and assign loggers to different appenders. Or you would need to invent a new appender that was smart enough to write to multiple files at the same time, based on the logger name.
am I just taking a bad approach for having multiple jobs that call the same main function with different parameters and wanting a log file named after each job?
The authors of logback address this issue and slightly discourage it in the section on Mapped Diagnostic Context
A possible but slightly discouraged approach to differentiate the logging output of one client from another consists of instantiating a new and separate logger for each client. This technique promotes the proliferation of loggers and may increase their management overhead. ... A lighter technique consists of uniquely stamping each log request servicing a given client.
Then they go on to discuss mapped diagnostic contexts as a solution to this problem. They give an example of a NumberCruncherServer which is crunching numbers, for various clients in various threads simultaneously. By setting the mapped diagnostic context and an appropriate logging pattern it becomes easy to determine which log events originated from which client. Then you could simply use a grep tool to separate logging events of interest into a separate file for detailed analysis.
Yes you can.
First you have to familiarize yourself with these two concepts: Logger and Appender. Generally speaking, your code obtains a Logger and invokes logging methods such as debug(), warn(), info(), etc. A Logger has Appenders attached to it, and an Appender presents the logging information according to the configuration set on it.
Once you're familiar, what you need to do is to dynamically create a FileAppender with a different file name for each different job type, and attach it to your Logger.
I suggest you spend some time with the logback manual if none of the above makes sense.
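To make that concrete, a minimal sketch of creating and attaching a FileAppender programmatically with logback's API (jobName and date being your job's parameters; the pattern is just an example):

    import ch.qos.logback.classic.Logger;
    import ch.qos.logback.classic.LoggerContext;
    import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
    import ch.qos.logback.classic.spi.ILoggingEvent;
    import ch.qos.logback.core.FileAppender;
    import org.slf4j.LoggerFactory;

    public static Logger createJobLogger(String jobName, String date) {
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(ctx);
        encoder.setPattern("%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n");
        encoder.start();

        FileAppender<ILoggingEvent> appender = new FileAppender<>();
        appender.setContext(ctx);
        appender.setName(jobName);
        appender.setFile(jobName + "-" + date + ".log"); // e.g. a-2020-01-01.log
        appender.setEncoder(encoder);
        appender.start();

        // Each job logs through its own logger, wired to its own file.
        Logger logger = ctx.getLogger("job." + jobName);
        logger.addAppender(appender);
        return logger;
    }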
You can make use of the logback discriminators, as discriminators' keys can be used in the <FileNamePattern> tag. I can think of two options:
Option One:
You can use the Mapped Diagnostic Context discriminator to implement your logging separation; you'll need to set a distinct value from each job using MDC.put() (see the sketch after the configuration below).
Once you've done that, your appender in the logback configuration would look something like:
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator class="ch.qos.logback.classic.sift.MDCBasedDiscriminator">
<key>jobName</key> <!-- the key you used with MDC.put() -->
<defaultValue>none</defaultValue>
</discriminator>
<sift>
<appender name="jobsLogs-${jobName}" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>${jobName}.%d{dd-MM-yyyy}.log.zip</FileNamePattern>
.
.
.
</rollingPolicy>
<layout class="ch.qos.logback.classic.PatternLayout">
<Pattern>...</Pattern>
</layout>
</appender>
</sift>
</appender>
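Each job then stamps its name into the MDC (org.slf4j.MDC) before logging anything, for example at the top of main():

    public static void main(String[] args) {
        // Must match the <key> configured in the discriminator above.
        MDC.put("jobName", args[0]); // e.g. "a", "b" or "c"
        try {
            // ... run the job ...
        } finally {
            MDC.remove("jobName");
        }
    }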
Option Two:
Implement your own discriminator (implementing ch.qos.logback.core.sift.Discriminator) to discriminate based on the thread name. It would look something like this:
public class ThreadNameDiscriminator implements Discriminator<ILoggingEvent> {
private final String THREAD_NAME_KEY = "threadName";
@Override
public String getDiscriminatingValue(ILoggingEvent event) {
return Thread.currentThread().getName();
}
@Override
public String getKey() {
return THREAD_NAME_KEY;
}
// implementation for more methods
.
.
.
}
The logging appender would look like option one, with the discriminator class being ThreadNameDiscriminator and the key being threadName. In this option there is no need to set a value in the MDC from your jobs, hence no modification to them is required.