How can I create a producer with the Spring Cloud Stream functional model?
The following annotation-based approach is now deprecated:
@Output(OUTPUT)
MessageChannel outbound();
I know this can be achieved with a java.util.function.Supplier bean, but that polls and sends a message every second, which I don't need. I am going to replace a REST API with Kafka.
Are there any ways to do that?
Use the StreamBridge - see Sending data to an arbitrary output.
Here we autowire a StreamBridge bean, which allows us to send data to an output binding, effectively bridging a non-stream application with spring-cloud-stream. Note that the preceding example does not define any source functions (e.g., a Supplier bean), leaving the framework with no trigger to create source bindings, as it would have in cases where the configuration contains function beans. So, to trigger the creation of the source binding, we use the spring.cloud.stream.source property, where you can declare the names of your sources.
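For example, a minimal sketch of sending on demand from a REST handler (the controller, the /orders endpoint, and the myOutput source name are illustrative assumptions; spring.cloud.stream.source=myOutput would be declared in application.properties):

// Sketch: one HTTP request -> one record sent; nothing fires on a timer.
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    private final StreamBridge streamBridge;

    public OrderController(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    @PostMapping("/orders")
    public void publish(@RequestBody String order) {
        // "myOutput-out-0" is the binding created from spring.cloud.stream.source=myOutput
        streamBridge.send("myOutput-out-0", order);
    }
}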
If you want to trigger a stream from an external Kafka topic, you can also bind a Spring Cloud Stream processor's input to that topic. The StreamBridge provides a layer of abstraction that may be cleaner, i.e., your non-stream application does not use the Kafka API directly.
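A rough sketch of that alternative (the process function and the orders topic are assumptions; the input binding would be mapped via spring.cloud.stream.bindings.process-in-0.destination=orders in configuration):

// Sketch: a functional consumer whose input binding points at an external topic.
import java.util.function.Consumer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OrderStreamConfig {

    @Bean
    public Consumer<String> process() {
        // Invoked once per record arriving on the bound topic.
        return order -> System.out.println("received: " + order);
    }
}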
Related
I'm using the Kafka binder, if it matters.
I want to add some custom logic to the recover method of ErrorMessageSendingRecoverer (specifically, to modify the message slightly, adding new headers, before it is published to the error channel). Extending this class, overriding that method, and registering the subclass as a bean seemed like a good idea.
However, I cannot see a way to do that while using Spring Cloud Stream. In the code of AbstractMessageChannelBinder it is neither injected as a bean nor open to any customizer; it is simply created with new.
How can this be solved? Is there another (intended) way to achieve the same thing?
I have a Java interface that I have to implement that looks like this:
public Flow.Publisher<Packet> getLivePublisher();
This interface must return a Flow.Publisher that stays inactive until it is subscribed to and the subscriber calls Subscription.request(n).
So far, my implementation looks like this:
return Source
    .fromIterator(() -> new LivePacketIterator())
    .async("live-dispatcher")
    .runWith(JavaFlowSupport.Sink.asPublisher(AsPublisher.WITHOUT_FANOUT), actorSystem);
Unfortunately, this seems to immediately start pulling elements from my LivePacketIterator, even when no subscribers have subscribed to the returned Flow.Publisher.
I understand that a Source is just a sort of template for a subscribable source of objects (my understanding is that it's like a factory of Publishers) and that it only becomes a concrete, active source once it is materialized. So, if I understand correctly, I need to somehow materialize my Source to get a Flow.Publisher, but I want it materialized in a way that only starts running when it is subscribed to.
I've also tried to use toMat():
return Source
    .fromIterator(() -> new LivePacketIterator(maximumPacketSize))
    .filter(OrderSnapshotPacket::isNotEmpty)
    .async(dbDispatcher)
    .toMat(JavaFlowSupport.Sink.asPublisher(AsPublisher.WITHOUT_FANOUT), Keep.right())
    .???;
But I'm not sure what to do with the resulting RunnableGraph.
Am I understanding this correctly?
Is there a way to do what I'm trying to do?
Unfortunately, this seems to immediately start pulling elements from my LivePacketIterator, even when no subscribers have subscribed to the returned Flow.Publisher.
What exactly do you observe to state this? I used a very similar snippet to yours:
Flow.Publisher<Integer> integerPublisher =
    Source.from(List.of(1, 2, 3, 4, 5))
        .wireTap(System.out::println)
        .async()
        .runWith(
            JavaFlowSupport.Sink.asPublisher(AsPublisher.WITHOUT_FANOUT),
            ActorSystem.create());
This will not start emitting items from the list until the publisher is subscribed to.
I understand that a Source is just a sort of template for a subscribable source of objects (my understanding is that it's like a factory of Publishers) and that it only becomes a concrete, active source once it is materialized.
Kind of. All Flow.* interfaces are part of the Reactive Streams specification for the JVM. Akka Streams treats those interfaces as an SPI and doesn't use them directly in its API; it introduces its own abstractions such as Source, Flow and Sink. Akka Streams lets you convert a processing stream expressed in its API to the lower-level Flow.* interfaces, just as you did in your snippet. This is useful if, say, you want to plug an Akka Streams processing pipeline into some other Reactive Streams implementation such as RxJava or Project Reactor.

So Source is Akka Streams' abstraction that is roughly equivalent to Flow.Publisher, that is, a source of a potentially infinite number of values. You need to connect a Source to a Sink (potentially via a Flow) to get a RunnableGraph, which you can then run. Running it sets everything in motion, and in most cases it causes a chain of subscriptions, so elements start flowing through the stream.

But that is not the only possibility: in the case of the JavaFlowSupport.Sink.asPublisher sink, running the RunnableGraph converts the whole Akka stream into an instance of Flow.Publisher. The semantics here are that subscription is deferred until something, somewhere, calls subscribe on that instance. Which is exactly what you're trying to achieve, if I understand correctly.
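Concretely, to answer the toMat() question, a sketch (assuming an ActorSystem field named system is in scope; LivePacketIterator and the dispatcher name come from your snippet): run(...) materializes the graph and hands you the Flow.Publisher, which stays idle until subscribed.

import java.util.concurrent.Flow;
import akka.stream.javadsl.AsPublisher;
import akka.stream.javadsl.JavaFlowSupport;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Source;

public Flow.Publisher<Packet> getLivePublisher() {
    return Source
        .fromIterator(() -> new LivePacketIterator())
        .async("live-dispatcher")
        .toMat(JavaFlowSupport.Sink.asPublisher(AsPublisher.WITHOUT_FANOUT), Keep.right())
        .run(system); // materializes the graph; elements flow only after subscribe() + request(n)
}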
We're currently using the ReactiveSecurityContextHolder, which gets our correct Auth details and is used along the Flux stream.
Now we want to decouple things. The first iteration is to use a sink as an intermediate 'event hub', so from an endpoint we emit items to a Sinks.Many.
A listener consumes events from this sink and does the heavy work. In this consumer I'd like to use the context that is available on the producing side. I know one can use deferContextual to pass the current context on to another Flux, but is it possible to pass the context to the Flux produced by the sink?
Thanks in advance.
Alex
There is currently no API that exposes that on arbitrary Sinks. The challenge with Sinks is that a lot of them are multicasting to multiple Subscribers, and the Context is defined on each Subscriber.
There is a hack though: Sinks.Many<T> is Scannable, and most concrete implementations should expose their current collection of subscribers through the Stream<Scannable> inners() method. In the case of a unicast sink, scan(Attr.ACTUAL) would also work.
Two big caveats:
these APIs only expose Scannable instances, which don't give direct access to the Context
if the implementation's inner subscriber isn't Scannable, it is replaced in the stream by the Scannable#NOT_SCANNABLE constant
Most, if not all, reactor-core CoreSubscriber implementations are Scannable, but if you connect a custom subscriber that isn't Scannable, you won't be able to see its Context even though it has one.
Multicast sinks in reactor-core tend to wrap downstream subscribers in their own Scannable inner tracker, which makes this approach work.
Unicast sinks are a bit different: they attach directly to the downstream Subscriber. So if that Subscriber is a CoreSubscriber but somehow not Scannable, you won't be able to see it as a CoreSubscriber and access its Context.
To sum up the approach (a hedged sketch follows the list):
call sink.inners() to get a Stream<Scannable>
ensure values are instances of CoreSubscriber (that's the part where things can go wrong)
cast values to CoreSubscriber and call currentContext()
somehow reconcile the various Contexts you got to extract the relevant key-value pair(s)
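Here is a sketch of those steps (hedged: it leans on reactor-core internals and is not a supported API, so treat it as best-effort; the helper name is made up):

import java.util.List;
import java.util.stream.Collectors;
import reactor.core.CoreSubscriber;
import reactor.core.Scannable;
import reactor.core.publisher.Sinks;
import reactor.util.context.Context;

public class SinkContexts {

    // Returns the Context of every inner subscriber that is a CoreSubscriber.
    static <T> List<Context> peek(Sinks.Many<T> sink) {
        return Scannable.from(sink)
                .inners()                                 // Stream<Scannable> of inner subscribers
                .filter(s -> s instanceof CoreSubscriber) // the part where things can go wrong
                .map(s -> ((CoreSubscriber<?>) s).currentContext())
                .collect(Collectors.toList());
    }
}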
I'm trying to refactor some Camel routes to use camel-rx instead of the Camel DSL, and I've hit a point where I want to process the events inside an Observable but then use the ReactiveCamel class to send the Observable to an endpoint chosen by some condition. For example, it would be useful to map my Observable to an object carrying target-route information and then use ReactiveCamel to send to that target.
Is something like that possible, or is there perhaps another way to implement this use case?
I am trying to implement a Camel Component/Processor that takes one input and produces multiple output messages, similar to a Splitter. Like the Splitter, the output should go to the next processor/endpoint in the route.
I have looked at the Splitter and MulticastProcessor classes in the hope that I can reuse them or use similar logic. The idea, as I understand it, is to create a new Exchange for each output and emit them. To do this, I need to provide the endpoint the output is written to. This works if I dynamically create the endpoint within the Processor class, but my requirement is to send the output to the endpoint configured in the route. That is, in the route below, mycomponent needs to write (multiple times) to file:output.
<route>
    <from uri="file:input"/>
    <to uri="mycomponent:OrderFlow?multi.output=true"/>
    <to uri="file:output"/>
</route>
In the case of the Splitter, it is instantiated by the SplitDefinition class, which has access to the output Processor/Endpoint.
a) From within a Processor, is it possible to access the configured output Processor/Endpoint?
b) If not, should I be writing a ProcessorDefinition class for my processor? Any pointers on this would help.
The two solutions suggested below by Petter are:
a) Inject a ProducerTemplate.
b) Use the Splitter component with a method call instead of writing a new component.
I assume you have read this page.
Yes, you can send multiple exchanges from a custom processor, but not really to the next processor in the flow. As described in the link above, you can decouple the component implementation by injecting a ProducerTemplate with a specific destination. You can cut your route into several parts using the direct or seda transport and make your component send the messages there. This way, you can reuse the code in several routes.
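A rough sketch of that approach (the MultiOutputProcessor class, the line-based splitting, and the direct:orders endpoint are illustrative assumptions):

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;

public class MultiOutputProcessor implements Processor {

    private final ProducerTemplate template;

    public MultiOutputProcessor(ProducerTemplate template) {
        this.template = template;
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        String body = exchange.getIn().getBody(String.class);
        // Emit one message per line; a separate route, e.g.
        // from("direct:orders").to("file:output"), carries each one onwards.
        for (String part : body.split("\n")) {
            template.sendBody("direct:orders", part);
        }
    }
}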
This is, as you point out, done in the Splitter component (among others) in Camel core. Take a look at the MulticastProcessor base class, for example. However, those processors are aware of the processors that follow them in the route, thanks to the route builder. Your custom processor is not that lucky.
You can, nonetheless, extract that information from the CamelContext: get hold of your route, and there you can find its processors. However, that seems like overcomplicating things.
UPDATE:
Instead of trying to alter the DSL, make use of the existing DSL and components:
.split().method("mycomponent", "OrderFlow")
Instead of emitting new exchanges, your OrderFlow method just needs to return a List<...> containing the resulting messages, as in the sketch below.
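A minimal sketch of such a bean (the class body and line-based splitting are illustrative assumptions; "mycomponent" is the bean's name in the Camel registry):

// The Splitter EIP sends each list element onwards as its own exchange, e.g.:
// from("file:input").split().method("mycomponent", "OrderFlow").to("file:output");
import java.util.Arrays;
import java.util.List;

public class OrderFlowBean {

    public List<String> OrderFlow(String body) {
        // One output message per line of the input body (assumed format).
        return Arrays.asList(body.split("\n"));
    }
}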