Spring Integration aggregating messages that were split twice - java

I have a use case where my messages are being split twice and I want to aggregate all of them at the end. What is the best way to achieve this: should I aggregate the messages twice by introducing different sequence headers, or is there a way to aggregate them in a single aggregation step by overriding how messages are grouped?

That's called "nested splitting", and there is a built-in algorithm that pushes the sequence detail headers onto a stack for each new splitting context. This allows an ascending aggregation at the end: the first aggregator collects for the innermost split, pops the sequence detail headers, and lets the next aggregator deal with its own sequence context.
So, in short: it is better to have as many aggregators as you have splitters if you want to send a single message at the start and receive a single message at the end.
Of course, you can have custom splitting logic with applySequence = false, as many times as you need, and only a single aggregator at the end, but then with custom correlation logic.
We have some explanation in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregatingmessagehandler
Starting with version 5.3, after processing a message group, an AbstractCorrelatingMessageHandler performs a MessageBuilder.popSequenceDetails() message headers modification for the proper splitter-aggregator scenario with several nested levels.
We don't have a sample on the matter, but here is a configuration for test case: https://github.com/spring-projects/spring-integration/blob/main/spring-integration-core/src/test/java/org/springframework/integration/aggregator/scenarios/NestedAggregationTests-context.xml
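For illustration only, here is a minimal Java DSL sketch of the same idea, assuming Spring Integration 5.3+: two splitters followed by two aggregators, where each aggregator correlates on the sequence details pushed by its matching splitter and pops them for the next level. The channel names, the list-of-lists payload and the uppercase transform are placeholders, not from the question.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class NestedSplitAggregateConfig {

    // Input payload is assumed to be a List<List<String>>.
    @Bean
    public IntegrationFlow nestedSplitAggregateFlow() {
        return IntegrationFlows.from("inputChannel")
                .split()                                        // outer split: pushes sequence details
                .split()                                        // inner split: pushes a second level
                .<String, String>transform(String::toUpperCase) // stand-in for per-item work
                .aggregate()                                    // joins the inner split, then pops sequence details (5.3+)
                .aggregate()                                    // joins the outer split back into one message
                .channel("outputChannel")
                .get();
    }
}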

Related

Apache Beam GroupByKey outputting duplicate elements with PubSubIO

We need to group PubSub messages by one of the fields in the message. We used a fixed window of 15 minutes to group these messages.
When run on Dataflow, the GroupByKey used for grouping the messages introduces far too many duplicate elements, and another GroupByKey at the far end of the pipeline fails with 'KeyCommitTooLargeException: Commit request for stage P27 and key abc#123 has size 225337153 which is more than the limit of..'
I have gone through the link below and found that the suggestion was to use Reshuffle, but Reshuffle uses GroupByKey internally.
Why is GroupByKey in beam pipeline duplicating elements (when run on Google Dataflow)?
My pipeline code:
PCollection<String> messages = getReadPubSubSubscription(options, pipeline);

// 15-minute fixed windows, discarding fired panes
PCollection<String> windowedMessages = messages
    .apply(
        Window.<String>into(FixedWindows.of(Duration.standardMinutes(15)))
            .discardingFiredPanes());

// multi-output ParDo: main output plus a dead-letter tag
PCollectionTuple objectsTuple = windowedMessages
    .apply(
        "UnmarshalStrings",
        ParDo.of(new StringUnmarshallFn())
            .withOutputTags(
                StringUnmarshallFn.mainOutputTag,
                TupleTagList.of(StringUnmarshallFn.deadLetterTag)));

PCollection<KV<String, Iterable<ABCObject>>> groupedObjects =
    objectsTuple.get(StringUnmarshallFn.mainOutputTag)
        .apply("GroupByObjects", GroupByKey.<String, ABCObject>create());

PCollection results = groupedObjects
    .apply(
        "FetchForEachKey",
        ParDo.of(new SomeFn()).withOutputTags(SomeFn.tag, TupleTagList.empty()))
    .get(SomeFn.tag)
    .apply("Reshuffle", Reshuffle.viaRandomKey());

results.apply(...)
...
PubSub is definitely not duplicating the messages and there are no additional failures; GroupByKey is creating these duplicates. Is something wrong with the windowing I am using?
One observation is that GroupByKey produces the same number of elements as the next step produces. I am attaching two screenshots, one for GroupByKey and the other for the Fetch function.
GroupByKey step
Fetch step
UPDATE after additional analysis
Stage P27 is actually the first GroupByKey, which is outputting many more elements than expected. I can't treat them as duplicates of the actual output element because all these millions of elements are not processed by the next Fetch step. I am not sure whether these are dummy elements introduced by Dataflow or just a wrong metric from Dataflow.
I am still analyzing why this KeyCommitTooLargeException is thrown, since I only have one input element and grouping should only produce one element iterable. I have opened a ticket with Google as well.
GroupByKey groups by key and window. Without a trigger, it outputs just one element per key and window, which is also at most 1 element per input element.
If you are seeing any other behavior it may be a bug and you can report it. You will probably need to provide more steps to reproduce the issue, including example data and the entire runnable pipeline.
Since in the UPDATE you clarified that there are no duplicates but instead dummy records are somehow being added (which is really strange), this old thread reports a similar issue, and its answer is interesting because it points to a protobuf serialization issue caused by grouping a very large amount of data in a single window.
I recommend using the available troubleshooting steps (e.g. 1 or 2) to identify in which part of the code the issue starts. For example, I still think that new StringUnmarshallFn() could be performing tasks that contribute to generating the dummy records. You might want to implement counters in your steps to identify how many records each step generates.
If you don't find a solution, the remaining option is to contact GCP Support; maybe they can figure it out.
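For illustration, a minimal sketch of the counters suggestion above (the class is a stand-in, not the actual StringUnmarshallFn): counting what goes in and out of a step lets you cross-check the element counts reported in the Dataflow UI.

import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.transforms.DoFn;

public class CountingFn extends DoFn<String, String> {

    // Custom metrics shown in the Dataflow job page alongside the built-in element counts
    private final Counter inputElements = Metrics.counter(CountingFn.class, "input-elements");
    private final Counter outputElements = Metrics.counter(CountingFn.class, "output-elements");

    @ProcessElement
    public void processElement(@Element String element, OutputReceiver<String> out) {
        inputElements.inc();
        // ... real per-element work would go here ...
        out.output(element);
        outputElements.inc();
    }
}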

Apache Camel join ALL exchanges in nested split

I'm fairly new to Apache Camel and have a couple of questions. I want my route to do the following:
load a list of lists in LoadSomeThingsProcessor
split twice so I can handle each item in the inner list
filter out some things I don't need
join all remaining exchanges from the inner split
then eventually join again (back to one exchange)
My route looks something like the following:
from("direct:myRoute")
.process(new LoadSomeThingsProcessor())
.split(body())
.streaming()
.process(new SomeProcessor())
.split(body())
.streaming()
.filter(new SomeFilter())
.aggregate(header("myHeader", new MyAggregationStrategy())
.completionPredicate(new MyCompletionPredicate())
// more processors
// aggregate again (should just be one exchange after this point
// more processors
.to("direct:someOtherRoute");
MyCompletionPredicate's matches method is just:
return exchange.getIn().getProperty("CamelSplitComplete", Boolean.class);
I want to ensure that ALL exchanges in each split are aggregated together before I continue.
My questions are:
- The CamelSplitComplete header is somehow never true. What could cause this?
- Is trying to aggregate inside a nested split going to cause any issues?
- What happens if the last exchange (the one that is supposed to have CamelSplitComplete = true) is filtered out? How can I know that I have aggregated all my exchanges together?
- Is this even the right way to approach this problem? If no, what else should I consider?
FYI my aggregation strategy just takes the bodies of the new exchange and adds them to the body of the old exchange.
Many thanks in advance.
See the Composed Message Processor EIP: https://camel.apache.org/components/latest/eips/composed-message-processor.html
And there is an example there titled Splitter Only which lets you do a fork/join style. You can still use a filter inside the splitter to filter out specific items, etc.
And try not to make it too complicated with two splits and all kinds of stuff; keep it a bit simpler, and then it is much easier to use, test and work with.
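For illustration, a minimal sketch of that splitter-only, fork/join style, reusing the processor, filter and strategy names from the question (the exact route is an assumption, not tested code): each split carries its own AggregationStrategy, so its results are joined back into a single exchange at the matching end() and no separate aggregator is needed.

from("direct:myRoute")
    .process(new LoadSomeThingsProcessor())
    // outer split: sub-exchanges are joined back by MyAggregationStrategy at the matching end()
    .split(body(), new MyAggregationStrategy()).streaming()
        .process(new SomeProcessor())
        // inner split: again joined back by its own strategy at end()
        .split(body(), new MyAggregationStrategy()).streaming()
            .filter(new SomeFilter())
            // per-item processing goes here
        .end()
    .end()
    .to("direct:someOtherRoute");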

Dynamically connecting a Kafka input stream to multiple output streams

Is there functionality built into Kafka Streams that allows for dynamically connecting a single input stream into multiple output streams? KStream.branch allows branching based on true/false predicates, but this isn't quite what I want. I'd like each incoming log to determine the topic it will be streamed to at runtime, e.g., a log {"date": "2017-01-01"} will be streamed to the topic topic-2017-01-01 and a log {"date": "2017-01-02"} will be streamed to the topic topic-2017-01-02.
I could call forEach on the stream, then write to a Kafka producer, but that doesn't seem very elegant. Is there a better way to do this within the Streams framework?
If you want to create topics dynamically based on your data, you do not get any support within Kafka's Streams API at the moment (v0.10.2 and earlier). You will need to create a KafkaProducer and implement the dynamic "routing" yourself (for example using KStream#foreach() or KStream#process()). Note that you need to do synchronous writes to avoid data loss (which are unfortunately not very performant). There are plans to extend the Streams API with dynamic topic routing, but there is no concrete timeline for this feature right now.
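For illustration, a minimal sketch of that foreach-plus-producer approach, written against the current StreamsBuilder API; the input topic name, the "topic-" prefix and extractDateField() are assumptions based on the question.

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

Properties producerProps = new Properties();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> logs = builder.stream("incoming-logs");  // input topic is an assumption

logs.foreach((key, json) -> {
    String topic = "topic-" + extractDateField(json);  // hypothetical helper that reads the "date" field
    try {
        // synchronous write: block on the ack so a failed send is not silently lost
        producer.send(new ProducerRecord<>(topic, key, json)).get();
    } catch (InterruptedException | ExecutionException e) {
        throw new RuntimeException(e);
    }
});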
There is one more consideration you should take into account. If you do not know your destination topic(s) ahead of time and just rely on the so-called "topic auto creation" feature, you should make sure that those topics are being created with the desired configuration settings (e.g., number of partitions or replication factor).
As an alternative to "topic auto creation" you can also use Admin Client (available since v0.10.1) to create topics with correct configuration. See https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations

Processing data based on the metadata in the file using apache camel

I have to set up Camel to process data files where the first line of the file is metadata and the remaining millions of lines are the actual data. The metadata dictates how the data is to be processed. What I am looking for is something like this:
1. Read the first line (metadata) and populate a bean with the metadata --> 2. then send the data 1000 lines at a time to the data processor, which will refer to the bean from step 1.
Is it possible in Apache Camel?
Yes.
An example architecture might look something like this:
You could setup a simple queue that could be populated with file names (or whatever identifier you are using to locate each individual file).
From the queue, you could route through a message translator bean, whose sole job is to translate a request for a filename into a POJO that contains the metadata from the first line of the file.
(You have a few options here)
Your approach to processing the 1000-line sets will depend on whether or not the output or resulting data created from the 1000-line sets needs to be recomposed into a single message and processed again later. If so, you could implement a composed message processor made up of a message producer/consumer, a message aggregator and a router. The message producer/consumer would receive the POJO with the metadata created in step 2 and enqueue as many new requests as necessary to process all of the lines in the file. The router would route from this queue through your processing pipeline and into the message aggregator. Once aggregated, a single unified message with all of your important data will be available for you to do with as you will.
If instead each 1000-line set can be processed independently and rejoining is not required, then it is not necessary to aggregate the messages. Instead, you can use a router to route from step 2 to a producer/consumer that will, like above, enqueue the necessary number of new requests for each file. Finally, the router will route from this final queue to a consumer that will do the processing.
Since you have a large quantity of data to deal with, it will likely be difficult to pass around 1000-line groups of data through messages, especially if they are being placed in a queue (you don't want to run out of memory). I recommend passing around some type of indicator that identifies which lines of the file a specific request is for, and then parsing the 1000 lines only when you need them. You could do this in a number of ways, for example by calculating how many bytes into the file a specific line starts and then using a file reader's skip() method to jump to that line when the request hits the bean that will process it.
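For illustration, a rough sketch of the chunked-splitting part in Camel's Java DSL; MetadataProcessor, DataChunkProcessor, the endpoints and the "metadata" header name are illustrative assumptions, not from the question.

from("file:inbox?noop=true")
    // parses the first line into a "metadata" header (and could strip it from the body)
    .process(new MetadataProcessor())
    // stream the body in groups of 1000 lines; headers are propagated to each group
    .split().tokenize("\n", 1000).streaming()
        // reads the "metadata" header to decide how to process the 1000-line chunk
        .process(new DataChunkProcessor())
    .end()
    .to("mock:done");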
Here are some resources provided on the Apache Camel website that describe the enterprise integration patterns that I mentioned above:
http://camel.apache.org/message-translator.html
http://camel.apache.org/composed-message-processor.html
http://camel.apache.org/pipes-and-filters.html
http://camel.apache.org/eip.html

Splitting, enriching and putting back together

I have a message carrying XML (an order) containing multiple homogeneous nodes (think a list of products) in addition to other info (think address, customer details, etc.). I have to enrich each 'product' with details provided by another external service and return the same complete XML 'order' message with enriched 'products'.
I came up with this sequence of steps:
Split the original XML with xpath to separate messages (also keeping the original message)
Enrich split messages with additional data
Put enriched parts back into the original message by replacing old elements.
I was trying to use a multicast that sends the original message both to an endpoint where the splitting and enriching is done and to an aggregation endpoint where the original message and the split-and-enriched messages are aggregated and then passed to a processor responsible for combining these parts back into a single XML file. But I couldn't get the desired effect...
What would be the correct and nice way to solve this problem?
The Splitter EIP in Camel can aggregate messages back (as a Composed Message Processor EIP).
http://camel.apache.org/splitter
See this video which demonstrates such a use-case
http://davsclaus.blogspot.com/2011/09/video-using-splitter-eip-and-aggregate.html
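For illustration, a minimal sketch of that approach in the Java DSL; the XPath expression, the endpoint names and ProductMergeStrategy are assumptions, not from the question. The splitter's own AggregationStrategy merges the enriched products back into the order, so no separate aggregation endpoint is needed.

from("direct:orders")
    // keep the full order around so the strategy can merge enriched products back into it
    .setProperty("originalOrder", body())
    .split(xpath("/order/products/product"), new ProductMergeStrategy())
        .to("direct:enrichProduct")   // calls the external service for a single product
    .end()                            // the strategy's result (the rebuilt order) becomes the body here
    .to("direct:processEnrichedOrder");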
