I need the trace id and span id to be available in all my logs. However, I am observing that after the first splitter in my Camel route, I can no longer see the trace id and span id in my logs.
[traceId: spanId:] INFO ---
Is there any way to re-enable the tracing information?
Following the Camel documentation, I have tried to re-enable tracing after the split using
context.setTracing(true)
but it does not seem to work.
Am I missing anything? Please help.
You probably have the traceId and spanId stored in the message headers, which are lost after the split.
A solution is to store them in the exchange properties (before the split), which are kept for the entire processing of the exchange (see Passing values between processors in Apache Camel).
If you are using the Java DSL you can use:
.setProperty("traceId", constant("traceIdValue"))
.setProperty("spanId", constant("spanIdValue"))
You can use the Simple Expression Language (https://camel.apache.org/manual/latest/simple-language.html) to access the properties after the split via exchangeProperty.propertyName.
Example:
.log(LoggingLevel.INFO, "[traceId:${exchangeProperty.traceId} spanId:${exchangeProperty.spanId}]")
When you use split, each split part is processed on a new exchange; to pass exchange properties from the split parts downstream, you need to use an aggregation strategy.
Example:
.split().tokenize(System.lineSeparator()).aggregationStrategy(new YourAggregationStrategyClass())
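A minimal sketch of such a strategy, which carries the tracing properties from each split part onto the aggregated exchange (assuming Camel 3.x, where AggregationStrategy lives in org.apache.camel; in Camel 2.x it is org.apache.camel.processor.aggregate.AggregationStrategy):

```java
import org.apache.camel.AggregationStrategy;
import org.apache.camel.Exchange;

public class PropagateTracingStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            // first split part: use it as the starting point of the aggregation
            return newExchange;
        }
        // copy the tracing properties from the current part onto the aggregated exchange
        oldExchange.setProperty("traceId", newExchange.getProperty("traceId"));
        oldExchange.setProperty("spanId", newExchange.getProperty("spanId"));
        return oldExchange;
    }
}
```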
Related
This is a sample Java log I tried to parse using Logstash:
[#|2022-04-06T07:02:47.885+0800|INFO|sun-appserver2.1|javax.enterprise.system.stream.out|_ThreadID=245;_ThreadName=sun-bpel-engine-thread-6;Process Instance Id=192.168.1.1:2001:0db8:85a3:0000:0000:8a2e:0370:7334;Service Assembly Name=CommComposite;BPEL Process Name=testname;|
Register BPEL ID : 192.168.1.1:2001:0db8:85a3:0000:0000:8a2e:0370:7334|#]
I tried to use this grok filter to parse it:
%{TIMESTAMP_ISO8601:time} %{LOGLEVEL:logLevel} %{GREEDYDATA:logMessage}
This filter always leaves the last line unparsed, producing an invalid log line. I suspect it is due to the [#| and |#] opening and closing tags.
Could anyone help me parse this kind of log properly?
Here is a grok pattern for the sample data you provided:
%{TIMESTAMP_ISO8601:timestamp}\|%{LOGLEVEL:loglevel}\|(?<message>(.|\r|\n)*)
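Note that grok operates on one event at a time; since each record spans two physical lines between [#| and |#], the lines have to be merged into a single event first, for example with the multiline codec on the input (a sketch; the file path is hypothetical):

```
input {
  file {
    path => "/var/log/server.log"      # hypothetical path
    codec => multiline {
      pattern => "^\[#\|"              # a new record starts with [#|
      negate => true
      what => "previous"               # any other line belongs to the previous record
    }
  }
}
```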
My main issue might be that I don't understand some conventions in the Camel documentation.
https://camel.apache.org/components/latest/mongodb-component.html#_delete_operations
They show a Camel route commented out, and two Java objects being defined, which are not commented out. What are they trying to indicate? Where do these objects live in a project?
Anyway, I'm subscribed to a JMS topic that another Camel route publishes to. The message is a JSON string, which I save to MongoDB. But what I'd like to do is remove any current documents (based on criteria) and replace them with the new message.
from("jms:topic:orderbook.raw.feed")
.log("JMS Message: ${body}")
.choice()
    .when().jsonpath("$[?(@.type=='partial')]")
// Figure out how to delete the old orderbook from Mongo with a type=T1
.to("mongodb:mongo?database=k2_dev&collection=orderbooks&operation=save");
Does your orderbook have an ID? If so, you can enrich the JSON with an _id field (MongoDB's default representation for identifiers) whose value is that ID. The save operation will then "upsert" that orderbook.
Note: agreed, the Camel docs could be better.
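As a sketch of that enrichment, assuming the incoming JSON carries an "id" field (hypothetical) and Jackson is on the classpath, you could copy it into _id before saving:

```java
from("jms:topic:orderbook.raw.feed")
    .log("JMS Message: ${body}")
    .process(exchange -> {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode node = (ObjectNode) mapper.readTree(exchange.getIn().getBody(String.class));
        // assumption: the payload has an "id" field identifying the orderbook
        node.set("_id", node.get("id"));
        exchange.getIn().setBody(mapper.writeValueAsString(node));
    })
    .to("mongodb:mongo?database=k2_dev&collection=orderbooks&operation=save");
```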
But if you really feel you'd have to perform a remove operation before saving an orderbook, another option would be to extract its type from the current JSON string and use it as a filter when removing. Something like:
from("jms:topic:orderbook.raw.feed")
.log("JMS Message: ${body}")
    .filter().jsonpath("$[?(@.type=='partial')]")
.multicast().stopOnException()
.to("direct://orderbook-removal")
.to("direct://orderbook-save")
.end()
;
from("direct://orderbook-removal")
// extract type and set it as the body message. e.g. {"type":"T1"}
.to("mongodb:mongo?database=k2_dev&collection=orderbooks&operation=remove")
;
from("direct://orderbook-save")
.to("mongodb:mongo?database=k2_dev&collection=orderbooks&operation=save")
;
The multicast sends a copy of the message to each destination. So the content won't be affected.
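The extraction step in direct://orderbook-removal could be sketched like this, again assuming Jackson is available and the JSON carries a "type" field:

```java
from("direct://orderbook-removal")
    .process(exchange -> {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode node = mapper.readTree(exchange.getIn().getBody(String.class));
        // build the remove filter from the incoming orderbook, e.g. {"type":"T1"}
        exchange.getIn().setBody("{\"type\":\"" + node.get("type").asText() + "\"}");
    })
    .to("mongodb:mongo?database=k2_dev&collection=orderbooks&operation=remove");
```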
According to the RFC7239 specification, syntax for Forwarded Header is as follows:
Forwarded: by=<identifier>;for=<identifier>;host=<host>;proto=<http|https>
These values are used by Spring (all recent versions), when present, to reflect the client-originated protocol and address (if allowed by configuration). There is a problem when this header carries multiple values:
# Multiple values can be appended using a comma
Forwarded: for=192.0.2.43,for=198.51.100.17;proto=https;host=xxx.yyy.com;by=10.97.9.10
The code in UriComponentsBuilder#adaptFromForwardedHeaders:798-800 takes the first Forwarded header (if multiple are present), splits it by comma, and uses only the first part:
UriComponentsBuilder adaptFromForwardedHeaders(HttpHeaders headers) {
    try {
        String forwardedHeader = headers.getFirst("Forwarded");
        if (StringUtils.hasText(forwardedHeader)) {
            String forwardedToUse = StringUtils.tokenizeToStringArray(forwardedHeader, ",")[0];
            ....
}
Using the example above, the forwardedToUse variable becomes for=192.0.2.43, and all the other useful information is trimmed away.
Is this really an issue, or is there something I am missing? And if it is a problem, how can I deal with it?
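The effect is easy to reproduce with a plain String.split, which is essentially what the tokenization does here (hypothetical values):

```java
public class ForwardedDemo {
    public static void main(String[] args) {
        String forwardedHeader =
                "for=192.0.2.43,for=198.51.100.17;proto=https;host=xxx.yyy.com;by=10.97.9.10";
        // only the first comma-separated token survives
        String forwardedToUse = forwardedHeader.split(",")[0].trim();
        System.out.println(forwardedToUse); // prints "for=192.0.2.43": proto, host and by are lost
    }
}
```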
Thanks a lot in advance!
It seems there is an issue in Spring with the Forwarded header when it carries multiple values. It is fixed by the commit below and will be available in the next release:
GitHub Issue: Issue with Forwarded Header and Multiple Values
Spring Framework Commit: Do not tokenize Forward header value
Release: Spring 5.2.9.RELEASE
I am using Apache Camel in my application and I am trying to use the Composed Message Processor pattern. I have an exchange whose body contains some URLs to hit; using split(body(), new MyAggregationStrategy()) I fetch the data from the URLs, and the aggregation strategy combines the results. But I am stuck on a problem: if there is an invalid URL on the first line of the body, aggregation works fine but the route never moves on to the next processor; if the invalid URL is anywhere other than the first line, everything works.
Please help. Here is the code for reference:
onException(HttpOperationFailedException.class).handled(true)
.retryAttemptedLogLevel(LoggingLevel.DEBUG)
.maximumRedeliveries(5).redeliveryDelay(3000)
.process(new HttpExceptionProcessor(exceptions));
from("jms:queue:supplier")
.process(
new RequestParserProcessor(payloadDetailsMap,
metaDataDetailsPOJO, routesEndpointNamePOJO))
.choice().when(new AggregateStrategy(metaDataDetailsPOJO))
.to("direct:aggregate").otherwise().to("direct:single");
from("direct:aggregate").process(new SplitBodyProcessor())
.split(body(), new AggregatePayload(aggregatePayload))
.to("direct:aggregatepayloadData").end()
.to("direct:payloadDataAggregated").end();
from("direct:aggregatepayloadData").process(basicProcessor)
.recipientList(header(ApplicationConstants.URL));
from("direct:payloadDataAggregated")
.process(
new AggregateJsonGenerator(aggregatePayload,
                payloadDetailsMap, metaDataDetailsPOJO));
In this code, AggregateJsonGenerator is never called if there is an invalid URL on the first hit.
You probably need to set continued(true) in your onException code (the DSL method is continued rather than continue, since continue is a reserved word in Java). See here:
http://camel.apache.org/exception-clause.html
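A sketch of what that could look like with the code from the question (continued(true) tells Camel to keep routing from the point where the exception occurred, and it replaces handled(true), which breaks out of the route):

```java
onException(HttpOperationFailedException.class)
    .continued(true) // continue routing at the point where the exception occurred
    .retryAttemptedLogLevel(LoggingLevel.DEBUG)
    .maximumRedeliveries(5).redeliveryDelay(3000)
    .process(new HttpExceptionProcessor(exceptions));
```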
I have a big file and I use the splitter to process it. I use .split().tokenize("\n", 5).streaming(); to group lines.
How can I send each group to a different endpoint?
This should do the trick for you.
.split().tokenize("\n", 250000).streaming()
.to("file://directory")
.end()
You can also use another endpoint instead of the file endpoint.
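If "different endpoint" should depend on the group itself, one option (a sketch) is a dynamic endpoint via toD, using the CamelSplitIndex exchange property that the splitter sets for each part:

```java
from("file://inbox?fileName=bigfile.txt")   // hypothetical input
    .split().tokenize("\n", 5).streaming()
        // each group goes to its own file, keyed by the split index
        .toD("file://outbox?fileName=group-${exchangeProperty.CamelSplitIndex}.txt")
    .end();
```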