I have a Spring Integration app that puts incoming files onto a channel. From there I'd like to be able to send the same file to two different processing pipelines (one archiving to S3, another parsing the contents) and later have a downstream component that can recognise when both have been successfully processed, and thus delete the actual local file.
The semantics are similar to what a Splitter/Aggregator would give me, but instead of splitting the message I need to duplicate it.
Is there any way to achieve this with available components, or will it require some custom classes?
Yes, a <publish-subscribe-channel/> (with apply-sequence="true") will work similarly to a splitter; however, both subscribers to the channel will get the SAME File object. By default the two branches are executed serially, but you can introduce an ExecutorChannel if you want to process them in parallel.
If you want each subscriber to get a different File object, you could add a transformer...
<transformer ... expression="new java.io.File(payload.absolutePath)" />
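For completeness, here is a rough sketch of the fan-out plus the "both done" cleanup using the Java DSL; the channel names, the shared completion channel, and the aggregator-based cleanup are my assumptions, not something prescribed by Spring Integration:

```
import java.io.File;
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class FileFanOutConfig {

    // Fan the incoming file out to both pipelines; applySequence(true) adds the
    // correlation/sequence headers that a downstream aggregator needs.
    @Bean
    public IntegrationFlow fanOutFlow() {
        return IntegrationFlows.from("filesIn")
                .publishSubscribeChannel(pubsub -> pubsub
                        .applySequence(true)
                        .subscribe(flow -> flow.channel("s3ArchiveChannel"))
                        .subscribe(flow -> flow.channel("contentParserChannel")))
                .get();
    }

    // Both pipelines send their completion message (carrying the File) to
    // "completedChannel"; the default aggregator correlates on the sequence
    // headers and releases once both replies have arrived.
    @Bean
    public IntegrationFlow cleanupFlow() {
        return IntegrationFlows.from("completedChannel")
                .aggregate()
                .handle(message -> {
                    List<?> payloads = (List<?>) message.getPayload();
                    ((File) payloads.get(0)).delete();   // both branches are done
                })
                .get();
    }
}
```

The same arrangement can be expressed in XML with <publish-subscribe-channel/>, an <aggregator/> using the default correlation, and a service activator that deletes the file.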
I have a requirement where I have to create the topic name based on different values coming in for a field in the <Value object>, so that all the records <K,V> with similar field values go into Topic_<Field>. How can I do it using KStream?
In Kafka 1.1.0, you can use branch() to split a stream into substreams and then write the different substreams into different topics by adding a different sink operator (i.e., to()) to each substream.
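A rough sketch of that approach (MyValue and its getField() accessor are hypothetical, standing in for your value type):

```
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, MyValue> stream = builder.stream("input-topic");

// One predicate per known field value; each branch gets its own sink topic.
KStream<String, MyValue>[] branches = stream.branch(
        (key, value) -> "A".equals(value.getField()),
        (key, value) -> "B".equals(value.getField()));

branches[0].to("Topic_A");
branches[1].to("Topic_B");
```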
Kafka 2.0 (to be released in June) adds a new "dynamic routing" feature that simplifies this scenario. See: https://cwiki.apache.org/confluence/display/KAFKA/KIP-303%3A+Add+Dynamic+Routing+in+Streams+Sink
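With that feature, the sink topic is derived per record via a TopicNameExtractor, for example (sketch, same hypothetical accessor as above):

```
// Kafka 2.0+ (KIP-303): compute the sink topic from each record at runtime.
stream.to((key, value, recordContext) -> "Topic_" + value.getField());
```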
Note that Kafka Streams requires that sink topics are created manually -- Kafka Streams does not create any sink topic for you. As mentioned by @Hemant, you could turn on auto topic creation. However, it's not recommended in general (one reason is that you might want different configs for different topics, but via auto creation all would be created with the same default config).
Also note that a rogue application could DDoS your Kafka cluster if auto topic creation is enabled, by sending "bad data" into the application and thus creating hundreds or thousands of topics (by specifying a different topic name for each message). Thus, enabling auto topic creation is risky and not recommended; create topics manually instead.
Is there functionality built into Kafka Streams that allows a single input stream to be dynamically routed to multiple output streams? KStream.branch allows branching based on true/false predicates, but this isn't quite what I want. I'd like each incoming log to determine the topic it will be streamed to at runtime, e.g., a log {"date": "2017-01-01"} will be streamed to the topic topic-2017-01-01 and a log {"date": "2017-01-02"} will be streamed to the topic topic-2017-01-02.
I could call forEach on the stream, then write to a Kafka producer, but that doesn't seem very elegant. Is there a better way to do this within the Streams framework?
If you want to create topics dynamically based on your data, you do not get any support within Kafka's Streams API at the moment (v0.10.2 and earlier). You will need to create a KafkaProducer and implement the dynamic "routing" yourself (for example using KStream#foreach() or KStream#process()). Note that you need to do synchronous writes to avoid data loss (which unfortunately are not very performant). There are plans to extend the Streams API with dynamic topic routing, but there is no concrete timeline for this feature right now.
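A rough sketch of that workaround (the topic naming, the serializers, and the extractDate() helper are assumptions for illustration):

```
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties producerProps = new Properties();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);

stream.foreach((key, value) -> {
    String topic = "topic-" + extractDate(value);   // hypothetical helper, e.g. "topic-2017-01-01"
    try {
        // synchronous write: wait for the broker ack to avoid silent data loss
        producer.send(new ProducerRecord<>(topic, key, value)).get();
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
});
```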
There is one more consideration you should take into account. If you do not know your destination topic(s) ahead of time and just rely on the so-called "topic auto creation" feature, you should make sure that those topics are being created with the desired configuration settings (e.g., number of partitions or replication factor).
As an alternative to "topic auto creation" you can also use Admin Client (available since v0.10.1) to create topics with correct configuration. See https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
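A minimal sketch, assuming the Java AdminClient API from org.apache.kafka.clients.admin (topic name, partition count, and replication factor are placeholders):

```
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

Properties adminProps = new Properties();
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (AdminClient admin = AdminClient.create(adminProps)) {
    // name, partitions, replication factor -- configured explicitly instead of broker defaults
    NewTopic topic = new NewTopic("topic-2017-01-01", 4, (short) 3);
    admin.createTopics(Collections.singleton(topic)).all().get();
} catch (Exception e) {
    throw new RuntimeException("Topic creation failed", e);
}
```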
My project streams object data through storm to a graphics application. The appearance of these objects depends upon variables assigned by a bolt in the storm topology.
My question is whether it is possible to update the bolt process by sending it a message that changes the variables it attaches to object data. For example, I might send a message to the bolt declaring that I want any object with parameter x above a certain number to appear as red rather than blue.
The bolt process would then append a red rgb variable to the object data rather than blue.
I was thinking this would be possible by having a displayConfig class that the bolt uses to apply appearance and whose contents can be edited by messages with a certain header.
Is this possible?
It is possible, but you need to do it manually and prepare your topology before you start it.
There are two ways to do this:
1) Use a local config file for the bolt that you put onto the worker machine (maybe via NFS). The bolt regularly checks the file for updates and reads an updated configuration whenever you change the file.
2) Use one more spout that produces a configuration stream. All bolts that should receive configuration updates at runtime need to consume from this configuration spout via "allGrouping". When processing an input tuple, you check whether it is a regular data tuple or a configuration tuple (and update your config accordingly); see the sketch below.
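A rough sketch of the second approach (the spout/bolt names, the config tuple's fields, the threshold, and the DisplayBolt internals are assumptions for illustration; imports assume the org.apache.storm packages):

```
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class DisplayTopology {

    // Wiring: the display bolt subscribes to the data spout normally and to the
    // config spout via allGrouping, so every bolt instance sees every config tuple.
    public static TopologyBuilder buildTopology() {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("dataSpout", new ObjectDataSpout());       // hypothetical spouts
        builder.setSpout("configSpout", new DisplayConfigSpout());
        builder.setBolt("displayBolt", new DisplayBolt(), 4)
               .shuffleGrouping("dataSpout")
               .allGrouping("configSpout");
        return builder;
    }

    public static class DisplayBolt extends BaseRichBolt {
        private static final double THRESHOLD = 100.0;   // example threshold for parameter x
        private OutputCollector collector;
        private String highXColor = "blue";               // mutable display config

        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void execute(Tuple tuple) {
            if ("configSpout".equals(tuple.getSourceComponent())) {
                // configuration tuple: update the local display config
                highXColor = tuple.getStringByField("highXColor");
            } else {
                // regular data tuple: append the rgb value dictated by the current config
                Object objectData = tuple.getValueByField("object");
                double x = tuple.getDoubleByField("x");
                String rgb = x > THRESHOLD ? highXColor : "blue";
                collector.emit(tuple, new Values(objectData, rgb));
            }
            collector.ack(tuple);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("object", "rgb"));
        }
    }
}
```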
I have to set up Camel to process data files where the first line of the file is the metadata, followed by millions of lines of actual data. The metadata dictates how the data is to be processed. What I am looking for is something like this:
1. Read the first line (metadata) and populate a bean with the metadata.
2. Then send the data 1000 lines at a time to the data processor, which will refer to the bean from step 1.
Is it possible in Apache Camel?
Yes.
An example architecture might look something like this:
You could set up a simple queue that could be populated with file names (or whatever identifier you are using to locate each individual file).
From the queue, you could route through a message translator bean whose sole purpose is to translate a request for a filename into a POJO that contains the metadata from the first line of the file.
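A rough sketch of that translator step (the endpoint URIs, class names, and the FileMetadata type are assumptions for illustration):

```
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

import org.apache.camel.builder.RouteBuilder;

// Bean that turns a file name into a metadata POJO read from the file's first line.
class MetadataTranslator {
    public FileMetadata translate(String fileName) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
            String headerLine = reader.readLine();   // first line holds the metadata
            return FileMetadata.parse(fileName, headerLine);   // hypothetical parser
        }
    }
}

// Route: consume file names from the queue and hand the metadata POJO onwards.
class MetadataRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:incomingFiles")     // queue of file names
            .bean(MetadataTranslator.class)      // body: String -> FileMetadata
            .to("direct:dispatchLineSets");      // next step: enqueue the 1000-line requests
    }
}
```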
(You have a few options here)
Your approach to processing the 1000-line sets will depend on whether or not the output or resulting data created from those sets needs to be recomposed into a single message and processed again later. If so, you could implement a composed message processor made up of a message producer/consumer, a message aggregator and a router. The message producer/consumer would receive the POJO with the metadata created in step 2 and enqueue as many new requests as are necessary to process all of the lines in the file. The router would route from this queue through your processing pipeline and into the message aggregator. Once aggregated, a single unified message with all of your important data will be available for you to do with what you will.
If instead each 1000-line set can be processed independently and rejoining is not required, then it is not necessary to aggregate the messages. Instead, you can use a router to route from step 2 to a producer/consumer that will, like above, enqueue the necessary number of new requests for each file. Finally, the router will route from this final queue to a consumer that will do the processing.
Since you have a large quantity of data to deal with, it will likely be difficult to pass around 1000-line groups of data in messages, especially if they are being placed in a queue (you don't want to run out of memory). I recommend passing around some type of indicator that identifies which lines of the file a specific request is for, and then parsing the 1000 lines only when you need them. You could do this in a number of ways, for example by calculating the byte offset of a specific line within the file, and then using a file reader's skip() method to jump to that line when the request hits the bean that will be processing it.
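A small sketch of that idea (class, method, and parameter names are illustrative; the byte offsets would be computed once while scanning the file):

```
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

class LineBlockReader {

    // Read lineCount lines starting at a precomputed byte offset, so the full file
    // never has to travel through the queue as a message body.
    List<String> readBlock(File file, long byteOffset, int lineCount) throws IOException {
        try (FileInputStream in = new FileInputStream(file)) {
            long remaining = byteOffset;
            while (remaining > 0) {
                long skipped = in.skip(remaining);   // skip() may skip fewer bytes than requested
                if (skipped <= 0) {
                    break;                           // reached end of file
                }
                remaining -= skipped;
            }
            BufferedReader reader =
                    new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
            List<String> block = new ArrayList<>(lineCount);
            String line;
            while (block.size() < lineCount && (line = reader.readLine()) != null) {
                block.add(line);
            }
            return block;
        }
    }
}
```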
Here are some resources provided on the Apache Camel website that describe the enterprise integration patterns that I mentioned above:
http://camel.apache.org/message-translator.html
http://camel.apache.org/composed-message-processor.html
http://camel.apache.org/pipes-and-filters.html
http://camel.apache.org/eip.html
I'm developing a 'WS oriented' application based on Spring/CXF/Oracle DB. Now I'm stuck on an architectural consideration about the right approach to organizing the processing of messages (already stored in the DB).
Briefly, the process looks as follows:
(A) Get the message from the client -> Validate -> Store -> Send response
(B) Process -> Update data
I consider two general approaches for part B of the process:
1) Use a JMS queue
Just after validating and storing the incoming message details in the DB, publish a message to the JMS queue. On the other side, define a consumer which will retrieve the message and do the processing.
2) Fetch data to be processed
Manually fetch the data from the DB and process it.
Additional facts:
The processing won't be compute-intensive, so for now I don't think that work distribution will be needed (all in a single JVM).
All data is in a single DB schema.
So, I'm interested: what are the key factors for choosing JMS in such a case?
JMS would be a better approach. In the happy-path scenario, approach #2 works as well, but JMS provides some built-in capabilities, especially for failure cases. Even though JMS may internally use DB-based persistent storage, it provides a better interface for communicating that data.
For example, you could configure an error queue to track all the messages whose processing failed.
It would also give you a scalable architecture, where some other app could (in the future) start consuming your messages and process them.
Reliable: Due to asynchronous messaging, all the pieces don’t need to be up for the application to function as a whole.
Flexible: Think of a scenario in which you might want to process a certain type of data before all others (prioritization). JMS provides a better approach for this than tweaking the logic in your program.
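As a rough sketch of option 1 with Spring JMS (the queue name, the repository/validator/processor collaborators, and using the DB id as the message payload are assumptions for illustration; @JmsListener also needs @EnableJms and a configured connection factory):

```
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;

@Service
public class MessageFlow {

    @Autowired
    private JmsTemplate jmsTemplate;
    @Autowired
    private MessageRepository repository;     // hypothetical collaborators
    @Autowired
    private MessageValidator validator;
    @Autowired
    private MessageProcessor processor;

    // Part A: validate, store, respond -- and hand off part B via the queue.
    public void onIncomingMessage(IncomingMessage msg) {
        validator.validate(msg);
        long id = repository.store(msg);
        jmsTemplate.convertAndSend("message.processing.queue", id);
    }

    // Part B: an asynchronous consumer loads the stored row and processes it.
    @JmsListener(destination = "message.processing.queue")
    public void process(Long id) {
        StoredMessage stored = repository.load(id);
        processor.process(stored);
        repository.markProcessed(id);
    }
}
```

If the consumer throws, the broker can redeliver the message or route it to an error/dead-letter queue, which is the built-in failure handling mentioned above.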