I would like to understand how messages are processed in Spring Integration: serially or in parallel. In particular, I have an inbound channel adapter with a poller and an HTTP outbound gateway. I assume splitters, transformers, header enrichers, etc. do not spawn their own threads.
I may have missed it, but are these details specified somewhere in the documentation?
Also, can I programmatically get all the channels in the system?
Channel types are described here.
The default channel type is DirectChannel (the endpoint runs on the caller's thread); QueueChannel and ExecutorChannel provide for asynchronous operation.
You can get all the channels with context.getBeansOfType(MessageChannel.class).
Actually, "threading" depends on the MessageChannel type:
For example, DirectChannel (<channel id="foo"/>, the default config) does nothing with threads and just passes the message from send() to the subscriber to handle it. If the handler is an AbstractReplyProducingMessageHandler, it sends its result to the outputChannel, and if that one is a DirectChannel too, all the work is done on the same thread.
Another example is your inbound channel adapter. In the background there is a scheduled task which executes on a Scheduler thread, and if your poll is very frequent the next poll task might execute on a new thread.
The same "rule" applies to QueueChannel: its handling work is done on a Scheduler thread, too.
ExecutorChannel just hands the handling task to its Executor.
Everything else you can find in the Reference Manual linked by Gary Russell in his answer.
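The threading difference between a direct-style and an executor-style channel can be illustrated with plain JDK classes (this is only an analogy; no Spring Integration API is used):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class ChannelThreadingDemo {

    // Returns the names of the threads that "handled" each message.
    public static String[] run() throws Exception {
        String[] handlerThreads = new String[2];
        Consumer<Integer> handler = slot ->
                handlerThreads[slot] = Thread.currentThread().getName();

        // DirectChannel-style: the sender's thread invokes the handler itself.
        handler.accept(0);

        // ExecutorChannel-style: the handler task is handed to an Executor.
        ExecutorService pool = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "executor-channel-thread"));
        pool.submit(() -> handler.accept(1)).get();
        pool.shutdown();
        return handlerThreads;
    }

    public static void main(String[] args) throws Exception {
        String[] names = run();
        System.out.println("direct handled on:   " + names[0]);
        System.out.println("executor handled on: " + names[1]);
    }
}
```

The first handler invocation reports the caller's own thread; the second reports the pool thread, which is exactly the send()-runs-the-subscriber vs. hand-off-to-an-Executor distinction described above.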
Related
I'm attempting to implement a Spring Integration flow that requires a multithreaded call if an input variable is true. If this variable is true, the flow will execute a multithreaded call while the main thread continues its flow.
At the end, it must wait for both flows to finish before returning a response.
I've successfully implemented a multithreaded Spring Integration flow using a splitter, but a splitter sends all of the messages to the same channel; this case is different because the multithreaded call requires calling a different channel than the main thread of execution.
Is there a way to set up a splitter to send to different channels depending on whether the parameter is true? Or how would I set up an executor channel to spawn a new thread when that value is true, while continuing the main flow at the same time?
As for waiting for both flows to finish execution, would a Spring Integration barrier or an aggregator be the better approach for this use case?
Consider using a PublishSubscribeChannel with an Executor configured, so that the same message is sent to several parallel flows. That way you really can continue your main flow with one subscriber and do something else with the other subscribers. With an Executor, all of them are executed in parallel.
Docs: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-publishsubscribechannel
If you still insist that the main flow must execute only on the same thread, then consider a RecipientListRouter, where one of the recipients is an unconditional next direct channel in the main flow. The other recipient can be conditional on your boolean variable, and it can be an ExecutorChannel so that its subscriber is invoked in parallel.
Docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#router-implementations-recipientlistrouter
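That router could be sketched with the Spring Integration Java DSL; the channel names, the `parallel` header, and the thread pool below are assumptions:

```java
@Bean
public IntegrationFlow routingFlow() {
    return IntegrationFlows.from("inputChannel")
            .routeToRecipients(r -> r
                    // unconditional recipient: the main flow continues on the caller's thread
                    .recipient("mainChannel")
                    // conditional recipient: only when the boolean header is true
                    .recipient("parallelChannel", "headers.parallel"))
            .get();
}

@Bean
public MessageChannel parallelChannel() {
    // subscribers of this channel run on pool threads, in parallel with the main flow
    return new ExecutorChannel(Executors.newCachedThreadPool());
}
```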
For waiting for both flows, it is up to you: a barrier, an aggregator, or the more sophisticated scatter-gather will all satisfy your "wait-for-all" requirement. Alternatively, you can implement a custom solution based on a CountDownLatch carried in a header: every time a parallel flow finishes, it counts the latch down. This also lets you size the latch according to your boolean value: if there is no parallel flow, the count is just 1, only the main flow executes, and it alone counts down the latch you are waiting on in the original request.
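The CountDownLatch idea can be sketched with plain JDK classes; the flow bodies here are trivial placeholders:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class WaitForAllFlows {

    // Runs the "main flow" on the caller thread and, when parallel is true,
    // a second "flow" on an executor; returns how many flows completed
    // before the caller was released.
    public static int process(boolean parallel) throws InterruptedException {
        int flows = parallel ? 2 : 1;          // size the latch from the boolean
        CountDownLatch latch = new CountDownLatch(flows);
        AtomicInteger completed = new AtomicInteger();

        ExecutorService pool = Executors.newSingleThreadExecutor();
        if (parallel) {
            pool.submit(() -> {                // side flow on its own thread
                completed.incrementAndGet();
                latch.countDown();
            });
        }

        completed.incrementAndGet();           // main flow work
        latch.countDown();

        latch.await();                         // "wait-for-all" before replying
        pool.shutdown();
        return completed.get();
    }
}
```

With `parallel = false` the latch count is 1 and only the main flow runs; with `parallel = true` the caller is held until both flows have counted down.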
In gRPC I would like some more information on the way the server handles requests.
Are requests executed in parallel? Does the server spawn a new thread for each request and execute them in parallel? Is there a way to modify this behavior? I understand that in client-streaming RPCs message order is guaranteed.
If I send Request A followed by Request B to the same RPC, is it guaranteed that A will finish executing before B begins processing? Or is each one on its own thread, executed in parallel with no guarantee that A finishes before B?
Ideally I would like to send a request to the server, the server acknowledges receipt of the request, and then the request is added to a queue to be processed sequentially, and returns a response once it's been processed. An approach I was exploring is to use an external task queue (like RabbitMQ) to queue the work done by the service but I want to know if there is a better approach.
Also -- on a somewhat related note -- does gRPC have a native retry counter mechanism? I have a particularly error-prone RPC that may have to retry up to 3 times (with an arbitrary delay between retries) before it is successful. This is something that could be implemented with RabbitMQ as well.
grpc-java passes RPCs to the service using the Executor provided by ServerBuilder.executor(Executor), or a cached thread pool if no executor is provided.
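For example (the port, pool size, and MyServiceImpl below are placeholders):

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.util.concurrent.Executors;

// RPC callbacks for this server are dispatched on the supplied pool;
// without executor(...) grpc-java uses a cached thread pool.
Server server = ServerBuilder.forPort(50051)
        .addService(new MyServiceImpl())
        .executor(Executors.newFixedThreadPool(8))
        .build()
        .start();
```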
There is no ordering between simultaneous RPCs. RPCs can arrive in any order.
You could use a server-streaming RPC to allow the server to respond twice, once for acknowledgement and once for completion. You can use a oneof in the response message to allow sending the two different responses.
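A sketch of such a contract; Request, Ack, and Result are placeholder message types:

```proto
service Processor {
  // Server streaming: the first message acknowledges receipt,
  // the second reports the final result.
  rpc Process(Request) returns (stream ProcessUpdate);
}

message ProcessUpdate {
  oneof update {
    Ack ack = 1;
    Result result = 2;
  }
}
```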
grpc-java has experimental retry support. gRFC A6 describes it. The configuration is delivered to the client via the service config. Retries are disabled by default, so overall you would want something like channelBuilder.defaultServiceConfig(serviceConfig).enableRetry(). You can also look at the hedging example, which is very similar to retries.
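The gRFC A6 retry policy can be expressed as the map that defaultServiceConfig(...) expects; the service name and backoff values below are assumptions, and note that numeric values must be Doubles in this map form:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RetryServiceConfig {

    // Builds the map form of a gRFC A6 retry policy; the service name
    // "my.package.MyService" is a placeholder. maxAttempts counts the
    // original call, so 4 attempts means up to 3 retries.
    public static Map<String, Object> build() {
        Map<String, Object> retryPolicy = new HashMap<>();
        retryPolicy.put("maxAttempts", 4.0);
        retryPolicy.put("initialBackoff", "0.5s");
        retryPolicy.put("maxBackoff", "10s");
        retryPolicy.put("backoffMultiplier", 2.0);
        retryPolicy.put("retryableStatusCodes", List.of("UNAVAILABLE"));

        Map<String, Object> name = new HashMap<>();
        name.put("service", "my.package.MyService");

        Map<String, Object> methodConfig = new HashMap<>();
        methodConfig.put("name", List.of(name));
        methodConfig.put("retryPolicy", retryPolicy);

        Map<String, Object> serviceConfig = new HashMap<>();
        serviceConfig.put("methodConfig", List.of(methodConfig));
        return serviceConfig;
    }
}
```

The result would then be passed as channelBuilder.defaultServiceConfig(RetryServiceConfig.build()).enableRetry().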
I have a Kafka listener that implements the acknowledgment message listener interface with the following properties:
ackMode - MANUAL_IMMEDIATE
idleEventInterval - 3 Min
While consuming messages, the listener decides whether to ack the specific record via acknowledgment.acknowledge(), and that works as expected.
In addition, I have a requirement to ack the last offset number (kept in memory) after X minutes (also if no messages arrived).
To meet this requirement I decided to use the ListenerContainerIdleEvent, which fires every 3 minutes according to my configuration.
My questions are:
Is there any way to acknowledge a Kafka offset in response to an idle event? The idle event contains a reference to the KafkaMessageListenerContainer, but that encapsulates the ListenerConsumer, which holds the KafkaConsumer.
Is the idle event sent synchronously (on the same thread as the KafkaListenerConsumer)? From the code, the default implementation is SimpleApplicationEventMulticaster, which is initialized without a TaskExecutor, so it invokes the listeners on the same thread. Can you confirm?
I am using spring-kafka 1.3.9.
Yes, just keep a reference to the last Acknowledgment and call acknowledge() again.
Yes, the event is published on the consumer thread by default.
Even if the event is published on a different thread (executor in the multicaster) it should still work because, instead of committing directly, the commit will be queued and processed by the consumer when it wakes from the poll.
See the logic in processAck().
In newer versions (starting with 2.0), the event has a reference to the consumer so you can interact with it directly (obtain the current position and commit it again), as long as the event is published on the consumer thread.
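The "keep the last Acknowledgment and call acknowledge() again" idea can be sketched with plain JDK classes; the Acknowledgment interface here is a stand-in for Spring Kafka's, not the real one:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

public class IdleReackPattern {

    // Stand-in for Spring Kafka's Acknowledgment interface.
    public interface Acknowledgment {
        void acknowledge();
    }

    private final AtomicReference<Acknowledgment> lastAck = new AtomicReference<>();

    // Called from the listener for each record (MANUAL_IMMEDIATE mode).
    public void onMessage(Acknowledgment ack) {
        ack.acknowledge();      // commit this record now
        lastAck.set(ack);       // remember it for the idle event
    }

    // Called when a ListenerContainerIdleEvent fires.
    public void onIdle() {
        Acknowledgment ack = lastAck.get();
        if (ack != null) {
            ack.acknowledge();  // re-commit the last offset
        }
    }

    public static void main(String[] args) {
        AtomicInteger commits = new AtomicInteger();
        IdleReackPattern listener = new IdleReackPattern();
        listener.onMessage(commits::incrementAndGet);
        listener.onIdle();
        System.out.println("commits: " + commits.get());  // 2
    }
}
```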
I want to implement an HTTP endpoint using Spring Integration that listens for HTTP requests and sends the request data as messages to a channel; another endpoint should listen for messages on this channel and process them.
Sounds simple. But what I want to achieve is:
Messages should be processed in order.
Messages should be processed as soon as possible (with no delay after the HTTP request if the queue is already empty).
The HTTP request should be responded to as soon as the message is received, not after it is processed, so the sender only knows that the message was accepted for processing.
I don't want to use external queues, like RabbitMQ.
So I need a QueueChannel for this. But if I understand correctly, the only way to receive messages from the queue is a poller, so point 2 will not be satisfied: there will be a small delay between a message arriving and the poller seeing it.
So the question is: is there any simple way to achieve this in Spring Integration which I don't see?
Of course I can implement it myself, for example by creating a SmartLifecycle component that listens on a DirectChannel and just puts the messages into a java.util.concurrent.BlockingQueue, and also starts a dedicated thread that waits on this queue and sends each message into another DirectChannel for processing. There would be no delay, because the thread is unblocked as soon as the BlockingQueue is non-empty.
This all sounds like a "pattern": a queue between two direct channels, backed by a dedicated thread.
Maybe there is a simpler way, already implemented in Spring Integration, which I just don't see because of my lack of experience in this area?
Point 2 can be satisfied, even with a poller - just set the fixed-delay to 0 and/or increase the receive timeout (default 1 second); the poller thread will block in the queue until a message arrives; then immediately wait again.
You can also use an executor channel (the http thread hands off to the executor thread).
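The blocking-receive behavior can be seen with plain JDK classes; this is only an analogy for the QueueChannel poller, not Spring Integration API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingReceiveDemo {

    // Mimics a QueueChannel poller with fixed-delay=0: the receiver blocks
    // inside poll(timeout) and wakes the moment a message is offered.
    public static String handOff() throws InterruptedException {
        BlockingQueue<String> channel = new LinkedBlockingQueue<>();
        String[] received = new String[1];
        CountDownLatch done = new CountDownLatch(1);

        Thread poller = new Thread(() -> {
            try {
                received[0] = channel.poll(5, TimeUnit.SECONDS);  // "receive timeout"
            } catch (InterruptedException ignored) {
            } finally {
                done.countDown();
            }
        }, "poller");
        poller.start();

        Thread.sleep(50);             // let the poller block first
        channel.put("request");       // the http thread hands off the message
        done.await();
        return received[0];
    }
}
```

The poller thread picks the message up immediately upon put(), well before its 5-second timeout expires, which is the "no delay" behavior the answer describes.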
I use Apache Camel 2.10.0 with the spring-ws component to route some (20+) WS/SOAP operations.
The sample code looks like:
from("spring-ws:rootqname:{http://my/name/space}myOperation1?endpointMapping=#endpointMapping")
from("spring-ws:rootqname:{http://my/name/space}myOperation2?endpointMapping=#endpointMapping")
from("spring-ws:rootqname:{http://my/name/space}myOperation3?endpointMapping=#endpointMapping")
Operations normally access several databases and can take up to a couple of seconds.
It works perfectly, but now I have a new requirement: 3 of the operations must be synchronized.
For example: if client1 calls operation1 1 ms before client2 calls operation1, client1's call must finish before client2's starts.
The same holds for a single client calling 2 different operations.
For example: if client1 calls operation1 1 ms before calling operation2, the operation1 call must finish before operation2 starts. Clients call the WS asynchronously and this cannot be changed.
The application is running with WebLogic 10.3.5.
Reducing the container threads to 1 would affect all operations, so I was thinking about adding some custom queue (JMS style) for just these 3 operations.
Do you have any better idea?
It looks like all the calls should be put into a queue first; then we can decide which one should be invoked next.
ActiveMQ is easy to set up and works very well with Camel.
You need to route the requests to a JMS queue first; the queue itself is the transaction break point. Then you consume the JMS messages sequentially. You get much more control over threading and message consumption by using this messaging pattern.
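A sketch of such a route in Camel's Java DSL, inside a RouteBuilder's configure(); the activemq component, queue name, and handler bean are assumptions, and InOut (request/reply) over JMS must be configured if the WS clients expect responses:

```java
// Funnel the synchronized operations into one queue.
from("spring-ws:rootqname:{http://my/name/space}myOperation1?endpointMapping=#endpointMapping")
    .to("activemq:queue:syncOps");
// ...repeat for myOperation2 and myOperation3...

// A single consumer drains the queue, so these operations run strictly one at a time,
// while the remaining 17+ operations keep their existing direct routes.
from("activemq:queue:syncOps?concurrentConsumers=1")
    .to("bean:operationHandler");
```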