Log Correlation ID with Vert.x

When logging across the multiple modules of a Vert.x application, it is a basic requirement that we should be able to correlate all the logs for a single request.
Since Vert.x is asynchronous, what is the best place to keep a log ID, conversation ID, and event ID?
Are there any solutions or patterns we can implement?

In a thread-based system, your current context is held by the current thread, so MDC or any ThreadLocal would do.
In an actor-based system such as Vert.x, your context is the message, so you have to add a correlation ID to every message you send.
For any handler/callback you have to pass it as a method argument or reference it from a final method variable.
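For illustration, a minimal sketch of that capture, assuming Vert.x 4's request API and an SLF4J logger named LOG (the address and payload are placeholders):
String correlationId = UUID.randomUUID().toString();
vertx.eventBus().request("some.address", "payload",
        new DeliveryOptions().addHeader("correlationId", correlationId),
        reply -> {
            // The lambda captures the effectively final correlationId,
            // so it is still available when the reply arrives later.
            if (reply.succeeded()) {
                LOG.info("[{}] reply: {}", correlationId, reply.result().body());
            } else {
                LOG.error("[{}] request failed", correlationId, reply.cause());
            }
        });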
For sending messages over the event bus, you could either wrap your payload in a JsonObject and add the correlation ID to the wrapper object:
vertx.eventBus().send("someAddr",
        new JsonObject().put("correlationId", "someId")
                .put("payload", yourPayload));
or you could add the correlation ID as a header using DeliveryOptions:
// send
vertx.eventBus().send("someAddr", "someMsg",
        new DeliveryOptions().addHeader("correlationId", "someId"));

// receive
vertx.eventBus().consumer("someAddr", msg -> {
    String correlationId = msg.headers().get("correlationId");
    ...
});
There are also more sophisticated options, such as using an interceptor on the event bus, which Emmanuel Idi used to implement Zipkin support for Vert.x, https://github.com/emmanuelidi/vertx-zipkin, but I'm not sure about the current status of that integration.

There's a surprising lack of good answers published about this, which is odd, given how easy it is.
Assuming you set the correlationId in your MDC context on receipt of a request or message, the simplest way I've found to propagate it is to use interceptors to pass the value between contexts:
vertx.eventBus()
        .addInboundInterceptor(deliveryContext -> {
            MultiMap headers = deliveryContext.message().headers();
            if (headers.contains("correlationId")) {
                MDC.put("correlationId", headers.get("correlationId"));
            }
            // Always continue delivery, even when no header is present.
            deliveryContext.next();
        })
        .addOutboundInterceptor(deliveryContext -> {
            String correlationId = MDC.get("correlationId");
            if (correlationId != null) {
                deliveryContext.message().headers().add("correlationId", correlationId);
            }
            deliveryContext.next();
        });
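These interceptors only propagate an ID that already exists in the MDC; here is a minimal sketch of seeding it when a request first arrives (the X-Correlation-Id header name and the HTTP entry point are assumptions for the example):
vertx.createHttpServer().requestHandler(req -> {
    // Reuse the caller's ID if supplied, otherwise mint a fresh one.
    String correlationId = req.getHeader("X-Correlation-Id");
    if (correlationId == null) {
        correlationId = UUID.randomUUID().toString();
    }
    MDC.put("correlationId", correlationId);
    // ... handle the request; outbound event bus messages now carry the ID.
    req.response().end();
}).listen(8080);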

If by multiple modules you mean multiple verticles running on the same Vert.x instance, you should be able to use a normal logging library such as SLF4J, Log4j, or JUL. You can then keep the logs in a directory of your choice, e.g. /var/logs/appName.
If, however, you mean how to correlate logs between multiple instances of Vert.x, then I'd suggest looking into Graylog or similar applications for distributed/centralised logging. If you use a unique ID per request, you can pass that around and use it in the logs. Or, depending on your authorization system, if you use unique tokens per request you can log those. The centralised logging system can then be used to aggregate and filter logs based on that information.

The interceptor example presented by Clive Evans works great. Here is a more detailed example showing how it might work:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.MultiMap;
import io.vertx.core.Promise;
import io.vertx.core.Vertx;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import java.time.Duration;
import java.util.UUID;

public class PublishSubscribeInterceptor {

    public static final String ADDRESS = "sender.address";

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        createInterceptors(vertx);
        vertx.deployVerticle(new Publisher());
        vertx.deployVerticle(new Subscriber1());
        // For our example, let's deploy Subscriber2 twice.
        vertx.deployVerticle(Subscriber2.class.getName(), new DeploymentOptions().setInstances(2));
    }

    private static void createInterceptors(Vertx vertx) {
        vertx.eventBus()
                .addInboundInterceptor(deliveryContext -> {
                    MultiMap headers = deliveryContext.message().headers();
                    if (headers.contains("myId")) {
                        MDC.put("myId", headers.get("myId"));
                    }
                    deliveryContext.next();
                })
                .addOutboundInterceptor(deliveryContext -> {
                    String myId = MDC.get("myId");
                    if (myId != null) {
                        deliveryContext.message().headers().add("myId", myId);
                    }
                    deliveryContext.next();
                });
    }

    public static class Publisher extends AbstractVerticle {

        @Override
        public void start(Promise<Void> startPromise) throws Exception {
            startPromise.complete();
            vertx.setPeriodic(Duration.ofSeconds(5).toMillis(), id -> {
                // A new ID per periodic tick; the outbound interceptor
                // copies it into the message headers.
                MDC.put("myId", UUID.randomUUID().toString());
                vertx.eventBus().publish(ADDRESS, "A message for all");
            });
        }
    }

    public static class Subscriber1 extends AbstractVerticle {

        private static final Logger LOG = LoggerFactory.getLogger(Subscriber1.class);

        @Override
        public void start(Promise<Void> startPromise) throws Exception {
            startPromise.complete();
            vertx.eventBus().consumer(ADDRESS, message -> {
                LOG.debug("Subscriber1 Received: {}", message.body());
            });
        }
    }

    public static class Subscriber2 extends AbstractVerticle {

        private static final Logger LOG = LoggerFactory.getLogger(Subscriber2.class);

        @Override
        public void start(Promise<Void> startPromise) throws Exception {
            startPromise.complete();
            vertx.eventBus().consumer(ADDRESS, message -> {
                LOG.debug("Subscriber2 Received: {}", message.body());
            });
        }
    }
}
The log output for publishing two messages looks like this:
13:37:14.315 [vert.x-eventloop-thread-3][myId=a2f0584c-9d4e-48a8-a724-a24ea12f7d80] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber2 - Subscriber2 Received: A message for all
13:37:14.315 [vert.x-eventloop-thread-1][myId=a2f0584c-9d4e-48a8-a724-a24ea12f7d80] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber1 - Subscriber1 Received: A message for all
13:37:14.315 [vert.x-eventloop-thread-4][myId=a2f0584c-9d4e-48a8-a724-a24ea12f7d80] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber2 - Subscriber2 Received: A message for all
13:37:19.295 [vert.x-eventloop-thread-1][myId=63b5839e-3b0b-43a5-b379-92bd1466b870] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber1 - Subscriber1 Received: A message for all
13:37:19.295 [vert.x-eventloop-thread-3][myId=63b5839e-3b0b-43a5-b379-92bd1466b870] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber2 - Subscriber2 Received: A message for all
13:37:19.295 [vert.x-eventloop-thread-4][myId=63b5839e-3b0b-43a5-b379-92bd1466b870] DEBUG o.s.v.l.PublishSubscribeInterceptor$Subscriber2 - Subscriber2 Received: A message for all

Surprised no one mentioned the Reactiverse project Contextual logging for Eclipse Vert.x.
From their page:
In traditional Java development models (e.g. Spring or Java EE), the server implements a one-thread-per-request design. As a consequence, it is possible to store contextual data in ThreadLocal variables and use it when logging. Both logback and log4j2 name this Mapped Diagnostic Context (MDC).
Vert.x implements the reactor pattern. In practice, this means many concurrent requests can be handled by the same thread, thus preventing usage of ThreadLocals to store contextual data.
This project uses an alternative storage method for contextual data and makes it possible to have MDC logging in Vert.x applications.
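A minimal sketch of using that library, assuming the io.reactiverse:reactiverse-contextual-logging dependency is on the classpath and logback is configured with its %vcl conversion word (details per the project docs; treat the specifics as assumptions):
// io.reactiverse.contextual.logging.ContextualData stores values on the
// current Vert.x context instead of a ThreadLocal.
vertx.createHttpServer().requestHandler(req -> {
    ContextualData.put("correlationId", UUID.randomUUID().toString());
    // Log statements made on this context can now render the ID via
    // %vcl{correlationId} in the logback pattern.
    req.response().end();
}).listen(8080);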

Use vertx-sync and a ThreadLocal for the correlation ID (i.e., a "FiberLocal"). Works great for me.

Related

MockEndpoint fails to run route

Using a mock to launch a Salesforce streaming route as shown here fails for the following route:
from("salesforce:AccountUpdateTopic?notifyForFields=ALL&notifyForOperations=ALL")
.tracing().convertBodyTo(String.class).to("file:D:/tmp/")
.to("mock:output")
.log("SObject ID: ${body}");
inside the following test class:
package org.apache.camel.component.salesforce;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.junit.Test;

public class StreamingApiIntegrationTest extends AbstractSalesforceTestBase {

    @Test
    public void testSubscribeAndReceive() throws Exception {
        MockEndpoint mock = getMockEndpoint("mock:AccountUpdateTopic");
        mock.start();
        Thread.sleep(10000);
        mock.stop();
    }

    @Override
    protected RouteBuilder doCreateRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                // test topic subscription
                from("salesforce:AccountUpdateTopic?notifyForFields=ALL&notifyForOperations=ALL")
                        .tracing()
                        .convertBodyTo(String.class)
                        .to("file:D:/tmp/")
                        .to("mock:output")
                        .log("SObject ID: ${body}");
            }
        };
    }
}
Running this test does not start the route (updates are not fetched from Salesforce and stored in /tmp/).
Can a mock run a route and wait for updates from Salesforce? Is there a shorter example that allows for testing Salesforce routes without making use of Spring?
You have misunderstood the Camel Mock component. Mocks do not start anything; they are just endpoints that record and assert the messages they receive.
To trigger a Camel route you have to send a message to it. You can do this easily using a ProducerTemplate.
It is this line from the example you mention that does exactly that:
CreateSObjectResult result = template().requestBody(
        "direct:upsertSObject", merchandise, CreateSObjectResult.class);
template is the ProducerTemplate, and requestBody is the method that sends a message to the endpoint direct:upsertSObject and waits for a response. See the Javadocs of ProducerTemplate for the various signatures.
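For illustration, a minimal sketch of a complete test that triggers a route via the ProducerTemplate and asserts on a mock (the direct:start route and the "hello" body are assumptions; template() and getMockEndpoint() come from camel-test's CamelTestSupport, mirroring the snippet above):
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class TriggerRouteTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                // A trivial route: direct:start feeds the mock endpoint.
                from("direct:start").to("mock:output");
            }
        };
    }

    @Test
    public void testRouteIsTriggered() throws Exception {
        MockEndpoint mock = getMockEndpoint("mock:output");
        mock.expectedBodiesReceived("hello");
        // Sending a message is what actually starts an exchange on the route.
        template().sendBody("direct:start", "hello");
        mock.assertIsSatisfied();
    }
}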

Is there a way to stream a log to a Spring Boot application?

I have a Spring Boot app that is used as an event logger. Each client sends different events via a REST API, which are then saved in a database. But apart from simple events, I need the clients to also send their execution logs to Spring Boot.
Now, uploading a log after a client finishes executing is easy, and there are plenty of examples for it out there. What I need is to stream the log as the client is executing, line by line, and not wait until the client has finished.
I've spent quite some time googling for a possible answer and I couldn't find anything that fits my needs. Any advice on how to do this using Spring Boot (future releases included)? Is it feasible?
I see a couple of possibilities here. First, consider using a logback (the default Spring Boot logging implementation) SocketAppender or ServerSocketAppender in your client. See: https://logback.qos.ch/manual/appenders.html. This would let you send log messages to any logging service.
But I might suggest that you not log to your Spring Boot event app, as I suspect that will add complexity to your app unnecessarily, and I can see a situation where some bug in the event app causes clients to log a bunch of errors which in turn all go back to the event app, making it difficult to determine the initial error.
What I would respectfully suggest is that you instead log to a logging server - logstash (https://www.elastic.co/products/logstash), for example - or, if you already have a db that you are saving the events to, then maybe use the logback DBAppender and write the logs directly to the db.
I wrote here an example of how to stream file updates from a Spring Boot endpoint. The only difference is that the code uses the Java WatchService API to trigger file updates on a given file.
However, in your situation, I would instead have the log appender send messages directly to the connected clients (with SSE - call template.broadcast from there) rather than watching for changes as I described.
The endpoint:
@GetMapping(path = "/logs", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public SseEmitter streamSseMvc() {
    return sseService.newSseEmitter();
}
The service:
public class LogsSseService {

    private static final String TOPIC = "logs";
    private static final AtomicLong COUNTER = new AtomicLong(0);

    private final SseTemplate template;

    public LogsSseService(SseTemplate template, MonitoringFileService monitoringFileService) {
        this.template = template;
        monitoringFileService.listen(file -> {
            // Broadcast only the lines added since the last notification,
            // closing the stream when done.
            try (Stream<String> lines = Files.lines(file)) {
                lines.skip(COUNTER.get())
                        .forEach(line ->
                                template.broadcast(TOPIC, SseEmitter.event()
                                        .id(String.valueOf(COUNTER.incrementAndGet()))
                                        .data(line)));
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }

    public SseEmitter newSseEmitter() {
        return template.newSseEmitter(TOPIC);
    }
}
The custom appender (which you have to add to your logger - check here):
public class StreamAppender extends UnsynchronizedAppenderBase<ILoggingEvent> implements SmartLifecycle {

    public static final String TOPIC = "logs";

    private final SseTemplate template;

    public StreamAppender(SseTemplate template) {
        this.template = template;
    }

    @Override
    protected void append(ILoggingEvent event) {
        template.broadcast(TOPIC, SseEmitter.event()
                .id(event.getThreadName())
                .name("log")
                .data(event.getFormattedMessage()));
    }

    @Override
    public boolean isRunning() {
        return isStarted();
    }
}
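Because the appender takes a constructor argument, it cannot be declared in logback.xml directly; here is a sketch of registering it programmatically instead, assuming logback-classic is the backing SLF4J implementation:
// Attach the appender to the root logger at runtime.
ch.qos.logback.classic.LoggerContext context =
        (ch.qos.logback.classic.LoggerContext) org.slf4j.LoggerFactory.getILoggerFactory();
StreamAppender appender = new StreamAppender(template);
appender.setContext(context);
appender.start();
context.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).addAppender(appender);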

How to send a message to an actor from outside in Play Framework 2?

I am new to Akka and trying to write some code in Play Framework 2 in Java using Akka.
To create an actor and send a test message to it, I have:
public class Global extends GlobalSettings {

    @Override
    public void onStart(Application app) {
        final ActorRef testActor = Akka.system().actorOf(Props.create(TestActor.class), "testActor");
        testActor.tell("hi", ActorRef.noSender());
    }
}
This works perfectly fine, and I can see that my actor received the message. Here is the code for my actor:
public class TestActor extends UntypedActor {

    @Override
    public void onReceive(Object message) throws Exception {
        if (message.toString().equals("hi")) {
            System.out.println("I received a HI");
        } else {
            unhandled(message);
        }
    }
}
Very simple.
However, if I try to send a message from a controller:
public static Result index() {
    final ActorRef testActor = Akka.system().actorFor("testActor");
    testActor.tell("hi", ActorRef.noSender());
    return ok(index.render("Your new application is ready."));
}
I get this message in the terminal:
[INFO] [09/20/2014 11:40:30.850] [application-akka.actor.default-dispatcher-4] [akka://application/testActor] Message [java.lang.String] from Actor[akka://application/deadLetters] to Actor[akka://application/testActor] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
Can someone help me with this? Why does the first usage work while the second one fails? Thanks.
The actorFor method requires the entire path, and your actor lives in the user space, so you have to use actorFor("/user/testActor"). Currently you are sending it to application/testActor, which would be a top-level actor in the ActorSystem itself.
By the way, actorFor is deprecated (at least in the Scala API) and replaced by actorSelection.
For more information, refer to the excellent documentation.
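For illustration, a minimal sketch of the controller using actorSelection with the full path (the path is an assumption based on the actor name used above):
public static Result index() {
    // Look up the existing actor under the /user guardian by its path.
    ActorSelection selection = Akka.system().actorSelection("/user/testActor");
    selection.tell("hi", ActorRef.noSender());
    return ok(index.render("Your new application is ready."));
}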
actorFor should be given the path to the actor, which is probably "akka://System/user/testActor". It also does not create the actor, meaning the actor must already exist.
Anyway, is there a reason that inside the controller you use actorFor and not actorOf? actorFor has been deprecated and shouldn't be used.

Original Message within the body of camel Exchange is lost

I have a Camel route as follows, which is transacted:
from("jms:queue:start")
.transacted()
.bean(new FirstDummyBean(), "setBodyToHello")
.bean(new SecondDummyBean(), "setBodyToWorld")
.to("jms:queue:end")
The bean methods do as their names suggest: they set the body to "Hello" and "World" respectively.
I also have an onException clause set up as follows:
onException(Exception.class)
    .useOriginalMessage()
    .handled(true)
    .to("jms:queue:deadletter")
    .markRollbackOnlyLast();
Assume I drop a message on the "start" queue with the body "test message". After successful processing in FirstDummyBean, I throw a RuntimeException in SecondDummyBean.
I was expecting to see the actual message (i.e. the original message contents intact, "test message") being sent to my dead letter queue.
However, the content of the message on the deadletter queue is "Hello".
Why is this happening?
I am using Apache Camel 2.10.0.
Also, can anyone provide more information on how I can use both an error handler and an onException clause together?
The documentation says:
If you have marked a route as transacted using the transacted DSL, then Camel will automatically use a TransactionErrorHandler. It will try to look up the global/per-route configured error handler and use it if it is a TransactionErrorHandlerBuilder instance. If not, Camel will automatically create a temporary TransactionErrorHandler that overrules the default error handler. This is convention over configuration.
An example of how to use TransactionErrorHandler with the Java DSL would be great.
I've seen this in non-transacted examples too: useOriginalMessage() does use the original exchange, but if you've modified any objects that the exchange references, you still see those modifications. useOriginalMessage does not appear to go back to the queue to get the original data.
Example code to show the problem
The code below includes a set of routes to demonstrate the problem. The timed route sends an ArrayList containing the String "test message" to a queue read by a second route. This second route passes the message to ModifyBody, which changes the content of the list. Next, the message goes to TriggerException, which throws a RuntimeException. This is handled by the onException route which, despite using useOriginalMessage, is passed the updated body.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.camel.spring.SpringRouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class TimedRoute extends SpringRouteBuilder {

    private static final String QUEUE = "jms:a.queue";
    private static final String QUEUE2 = "jms:another.queue";

    // The message that will be sent on the route
    private static final ArrayList<String> payLoad = new ArrayList<String>(
            Arrays.asList("test message"));

    public static class ModifyBody {
        public List<String> modify(final List<String> list) {
            final List<String> returnList = list;
            returnList.clear();
            returnList.add("Hello");
            return returnList;
        }
    }

    public static class TriggerException {
        public List<String> trigger(final List<String> list) {
            throw new RuntimeException("Deliberate");
        }
    }

    @Override
    public void configure() throws Exception {
        //@formatter:off
        onException(Exception.class)
            .useOriginalMessage()
            .log("Exception: ${body}")
            .handled(true)
            .setHeader("exception", constant("exception"))
            .to(QUEUE2);

        // Timed route to send the original message
        from("timer://foo?period=60000")
            .setBody().constant(payLoad)
            .to(QUEUE);

        // Initial processing route - this modifies the body.
        from(QUEUE)
            .log("queue ${body}")
            .bean(new ModifyBody())
            .log("after modifyBody: ${body}")
            .bean(new TriggerException())
            .stop();

        // Messages are sent here by the exception handler.
        from(QUEUE2)
            .log("queue2: ${body}")
            .stop();
        //@formatter:on
    }
}
Workaround
If you replace ModifyBody with the code below then the original message is seen in the exception handling route.
public static class ModifyBody {
    public List<String> modify(final List<String> list) {
        final List<String> returnList = new ArrayList<String>(list);
        returnList.clear();
        returnList.add("Hello");
        return returnList;
    }
}
By putting the modifications into a new list, the original Exchange is left unmodified.
A general solution is awkward, as the mechanism for copying will depend on the objects you have in flight. You might find that you can extend the RouteBuilder class to give yourself some custom DSL that copies your objects.
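For illustration, a minimal sketch of defensively copying the body at the start of the consuming route, so later bean mutations cannot touch the original (an inline processor written for this example, not a built-in Camel facility):
from(QUEUE)
    // Replace the body with a fresh copy before any bean can mutate it;
    // useOriginalMessage() will then see the untouched original list.
    .process(exchange -> {
        List<String> body = exchange.getIn().getBody(List.class);
        exchange.getIn().setBody(new ArrayList<String>(body));
    })
    .bean(new ModifyBody())
    .bean(new TriggerException())
    .stop();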

What can be the best approach to handle java.net.UnknownHostException for AWS users?

My application sends messages to an Amazon Simple Notification Service (SNS) topic, but sometimes (6 times out of 10) I get java.net.UnknownHostException: sqs.ap-southeast-1.amazonaws.com. The reason for the exception is described in the AWS discussion forums; please look at: https://forums.aws.amazon.com/thread.jspa?messageID=499290&#499290.
My problem is similar to what is described in the Amazon forums, but my rate of publishing messages to the topic is very dynamic: it can be 1 message/second, 1 message/minute, or no messages in an hour. I am looking for a cleaner, better and safer approach, which guarantees delivery of messages to the SNS topic.
Description of the problem in detail:
Topic_Arn = the ARN of the SNS topic where the application publishes messages
msg = the message to send to the topic
// Just a sample example which publishes a message to an Amazon SNS topic
class SimpleNotificationService {

    private static AmazonSNSClient mSnsClient = null;

    static {
        createSnsClient();
    }

    private static void createSnsClient() {
        Region region = Region.getRegion(Regions.AP_SOUTHEAST_1);
        AWSCredentials credentials = new BasicAWSCredentials(
                AwsPropertyLoader.getInstance().getAccessKey(),
                AwsPropertyLoader.getInstance().getSecretKey());
        mSnsClient = new AmazonSNSClient(credentials);
        mSnsClient.setRegion(region);
    }

    public static void publishMessage(String Topic_Arn, String msg) {
        PublishRequest req = new PublishRequest(Topic_Arn, msg);
        mSnsClient.publish(req);
    }
}
The class which calls SimpleNotificationService:
class MessagingManager {
    public void sendMessage(String message) {
        String topic_arn = "arn:of:amazon:sns:topic";
        SimpleNotificationService.publishMessage(topic_arn, message);
    }
}
Please note that this is sample code, not my actual code. There may be class design issues here, but please ignore those if they are not related to the problem.
My thought process says to have a try-catch block inside sendMessage, so when we catch the UnknownHostException we retry, but I am not sure how to write this in a safer, cleaner and better way.
So the MessagingManager class would look something like this:
class MessagingManager {
    public void sendMessage(String message) {
        String topic_arn = "arn:of:amazon:sns:topic";
        try {
            SimpleNotificationService.publishMessage(topic_arn, message);
        } catch (AmazonClientException ace) {
            // I need to catch AmazonClientException, as the AWS SDK throws
            // AmazonClientException when it sees UnknownHostException.
            // I mention UnknownHostException so non-AWS users understand
            // my problem better.
            sendMessage(message); // Isn't this unsafe? It may fall into an infinite loop.
        }
    }
}
I am open to answers like this: java.net.UnknownHostException: Invalid hostname for server: local. But my concern is to depend on a solution at the application-code level and less on changes to the machine, as my server application is going to run on many boxes (developer boxes, testing boxes, production boxes). If changes to machine hosts files or the like are the only guaranteed solution, then I'd prefer to include those alongside code-level changes.
Each AWS SDK implements automatic retry logic. The AWS SDK for Java automatically retries requests, and you can configure the retry settings using the ClientConfiguration class.
Below is a sample example of creating the SNS client. It retries 25 times if it encounters an UnknownHostException, using the default backoff and retry strategy. If you want your own, you need to implement these two interfaces: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/retry/RetryPolicy.html
private static void createSnsClient() {
    Region region = Region.getRegion(Regions.AP_SOUTHEAST_1);
    AWSCredentials credentials = new BasicAWSCredentials(
            AwsPropertyLoader.getInstance().getAccessKey(),
            AwsPropertyLoader.getInstance().getSecretKey());
    ClientConfiguration clientConfiguration = new ClientConfiguration();
    clientConfiguration.setMaxErrorRetry(25);
    clientConfiguration.setRetryPolicy(new RetryPolicy(null, null, 25, true));
    mSnsClient = new AmazonSNSClient(credentials, clientConfiguration);
    mSnsClient.setRegion(region);
}
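If you do want custom behaviour, here is a sketch of supplying your own RetryCondition and BackoffStrategy to the RetryPolicy constructor (the condition and the one-second delay are assumptions for the example; check the RetryPolicy Javadocs for the exact semantics):
// Retry only when the failure was caused by UnknownHostException,
// with a simple fixed one-second backoff between attempts.
RetryPolicy.RetryCondition retryOnUnknownHost =
        (originalRequest, exception, retriesAttempted) ->
                exception.getCause() instanceof java.net.UnknownHostException;

RetryPolicy.BackoffStrategy fixedOneSecondBackoff =
        (originalRequest, exception, retriesAttempted) -> 1000L;

ClientConfiguration config = new ClientConfiguration();
config.setRetryPolicy(new RetryPolicy(retryOnUnknownHost, fixedOneSecondBackoff, 25, false));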
Have you considered looking into the JVM TTL for the DNS cache?
http://docs.aws.amazon.com/AWSSdkDocsJava/latest//DeveloperGuide/java-dg-jvm-ttl.html
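A one-line sketch of what that page recommends, run before any networking code (the 60-second value is an assumption; pick what suits your environment):
// Cache positive DNS lookups for at most 60 seconds instead of forever.
java.security.Security.setProperty("networkaddress.cache.ttl", "60");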
