Possible to add a delay in Spring Integration DSL?

I have the following adapter:
@Component
class MyCustomFlow : IntegrationFlowAdapter() {

    fun singleThreadTaskExecutor(): TaskExecutor {
        val executor = ThreadPoolTaskExecutor()
        executor.maxPoolSize = 1
        executor.initialize()
        return executor
    }

    @Filter
    fun filter(data: SomeData): Boolean = ...

    @Transformer
    fun transform(customer: Data): Message<SomeData> {
        ....
    }

    @ServiceActivator
    fun handle(data: Data): SomeData {
        ....
    }

    @Bean(name = [PollerMetadata.DEFAULT_POLLER])
    fun poller(): PollerSpec? {
        return Pollers.fixedRate(500)
    }

    @Bean
    override fun buildFlow(): IntegrationFlowDefinition<*> {
        return from(MessageChannels.queue("updateCustomersLocation"))
            .channel(MessageChannels.executor(singleThreadTaskExecutor()))
            .split()
            .filter(this)
            .transform(this)
            .handle(this)
            .channel("customerLocationFetched")
    }
}
And if I understand the Java DSL, the handle is:
handle → ServiceActivator
Given a ServiceActivator defined outside of the IntegrationFlowAdapter, I have the option to define a poller. Within that poller I could add a delay before the next message is processed.
@ServiceActivator(
    poller = [Poller(fixedDelay = "3000", maxMessagesPerPoll = "1", fixedRate = "3000")]
)
Would it be possible, within the adapter, to add a delay for the ServiceActivator (the method named handle), similar to the behavior I would get if I added another channel?
@Bean
override fun buildFlow(): IntegrationFlowDefinition<*> {
    return from(MessageChannels.queue("updateCustomersLocation"))
        .channel(MessageChannels.executor(singleThreadTaskExecutor()))
        .split()
        .filter(this)
        .transform(this)
        .channel("myNewChannel")
        //.handle(this)
        //.channel("customerLocationFetched")
}
And then from outside the adapter I could just define a new ServiceActivator:
@ServiceActivator(
    inputChannel = "myNewChannel",
    poller = [Poller(fixedDelay = "3000", maxMessagesPerPoll = "1", fixedRate = "3000")]
)
The reason I want a delay between messages in the handle/service activator is that the method sends requests, and I want to control the rate at which those requests are sent.

First of all, the buildFlow() method must not be marked with @Bean: the IntegrationFlowAdapter takes care of registering everything.
You can add a poller to the endpoint spec; see the second argument of that handle(). The only requirement is that the input channel for a polling endpoint must be pollable. The QueueChannel exists out of the box, and you can place it just before your handle().
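A minimal sketch of how that could look inside the adapter, assuming the handler method stays named handle (the "beforeHandle" channel name and the 3-second, one-message-per-poll settings are illustrative assumptions):
// Note: no @Bean here; the IntegrationFlowAdapter registers the flow itself.
override fun buildFlow(): IntegrationFlowDefinition<*> {
    return from(MessageChannels.queue("updateCustomersLocation"))
        .channel(MessageChannels.executor(singleThreadTaskExecutor()))
        .split()
        .filter(this)
        .transform(this)
        // a pollable channel right before the handler ("beforeHandle" is a made-up name)
        .channel(MessageChannels.queue("beforeHandle"))
        // the endpoint-spec argument of handle() carries the poller:
        // poll at most one message every 3 seconds
        .handle(this, "handle") { e ->
            e.poller(Pollers.fixedDelay(3000).maxMessagesPerPoll(1))
        }
        .channel("customerLocationFetched")
}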

Related

Spring Boot RabbitMQ consistency issue

I'm using Spring Boot (2.5.5) and I'm seeing inconsistent behavior with a RabbitMQ queue listener.
The issue is that accountService.incrementMediaCount executes even if neoPostService.createNeoPost throws an exception, and it therefore executes multiple times because the retry policy is set to 5 attempts. This seems weird, since accountService.incrementMediaCount should execute only if neoPostService.createNeoPost completes successfully (without an exception). Also, loggingRepository.save() executes fewer times than the number of messages coming into the queue. What is causing this behavior? Is it related to the distributed nature of RabbitMQ?
Queue listener
@RabbitListener(queues = [RabbitConfiguration.Companion.Queues.POST_CREATION_QUEUE], concurrency = "3")
fun postCreationListener(postMessage: PostMessage) {
    neoPostService.createNeoPost(postMessage.postId)
    accountService.incrementMediaCount(postMessage.userId)
    loggingRepository.save(
        Log(
            resourceType = ResourceType.Post,
            entityEventType = EntityEventType.Created,
            message = "Post ${postMessage.postId} created for user ${postMessage.userId}"
        )
    )
}
fun createNeoPost(postId: Long): NeoPost {
    // Get post by id from postgresql
    val post = postRepository.findByIdOrNull(postId)
        ?: throw ResourceNotFoundException("Post $postId not found")
    val neoUser = neoUserService.getUserById(post.user.userId!!)
    // Save post to Neo4j
    return neoPostRepository.save(post.toNeoPost().copy(user = neoUser))
}

// Increment counter in postgresql
fun incrementMediaCount(userId: Long, amount: Int = 1) {
    accountRepository.incrementPostCounter(userId, amount)
}
Rabbit Config
@EnableRabbit
@Configuration
class RabbitConfiguration {
    // other code
    ....
    @Bean
    fun retryInterceptor(): RetryOperationsInterceptor? {
        return RetryInterceptorBuilder.stateless()
            // initial interval 100 ms, multiplier 5.0, max interval 10000 ms
            .backOffOptions(100, 5.0, 10000)
            .maxAttempts(5)
            .recoverer(RejectAndDontRequeueRecoverer())
            .build()
    }
    ....
}

Periodic work does not reach RUNNING state when unit testing WorkManager

I created a PeriodicWorkRequest from my SyncDatabaseWorker, which looks like below:
class SyncDatabaseWorker(ctx: Context, params: WorkerParameters) : RxWorker(ctx, params) {

    private val dataManager: DataManager = App.getDataManager()

    override fun createWork(): Single<Result> {
        return Single.create { emitter ->
            dataManager.loadStoresFromServer()
                .subscribe(object : SingleObserver<List<Store>> {
                    override fun onSubscribe(d: Disposable) {
                    }

                    override fun onSuccess(storeList: List<Store>) {
                        if (!storeList.isEmpty()) {
                            emitter.onSuccess(Result.success())
                        } else {
                            emitter.onSuccess(Result.retry())
                        }
                    }

                    override fun onError(e: Throwable) {
                        emitter.onSuccess(Result.failure())
                    }
                })
        }
    }

    companion object {
        fun prepareSyncDBWorker(): PeriodicWorkRequest {
            val constraints = Constraints.Builder()
                .setRequiredNetworkType(NetworkType.CONNECTED)
                .build()
            val myWorkBuilder = PeriodicWorkRequest.Builder(SyncDatabaseWorker::class.java, 7, TimeUnit.DAYS)
                .setBackoffCriteria(BackoffPolicy.EXPONENTIAL, 1, TimeUnit.DAYS) // Backoff retry after 1 day
                .setConstraints(constraints)
            return myWorkBuilder.build()
        }
    }
}
Then I wrote a unit test based on Google's guide, like this:
@RunWith(AndroidJUnit4::class)
class SyncDatabaseWorkerTest {

    private lateinit var context: Context

    @Before
    fun setup() {
        context = InstrumentationRegistry.getInstrumentation().targetContext
        val config = Configuration.Builder()
            .setMinimumLoggingLevel(Log.DEBUG)
            .setExecutor(SynchronousExecutor())
            .build()
        // Initialize WorkManager for instrumentation tests.
        WorkManagerTestInitHelper.initializeTestWorkManager(context, config)
    }

    @Test
    @Throws(Exception::class)
    fun testPeriodicWork_WithConstrains() {
        // Create request
        val request = SyncDatabaseWorker.prepareSyncDBWorker()
        val workManager = WorkManager.getInstance(context)
        val testDriver = WorkManagerTestInitHelper.getTestDriver(context)
        // Enqueue and wait for result.
        workManager.enqueue(request).result.get()
        // Check work request is enqueued
        var workInfo = workManager.getWorkInfoById(request.id).get()
        assertThat(workInfo.state, `is`(WorkInfo.State.ENQUEUED))
        // Tell the testing framework the period delay & all constraints are met
        testDriver!!.setPeriodDelayMet(request.id)
        testDriver.setAllConstraintsMet(request.id)
        // Check work request is running
        workInfo = workManager.getWorkInfoById(request.id).get()
        assertThat(workInfo.state, `is`(WorkInfo.State.RUNNING))
    }
}
It's always in the ENQUEUED state, even when the period delay and all constraints are met.
Expected: is <RUNNING>
but: was <ENQUEUED>
When I debugged the test, I found that the createWork(): Single<Result> method is indeed triggered, so why is the state not RUNNING?
Maybe I'm wrong about the approach, but documentation about unit testing WorkManager is scarce right now, and I don't know the right way to do it.
Since you're using a synchronous executor, you will never actually see your work in the RUNNING state - it should have already executed. I suspect your work is actually being marked for retry and therefore enters the ENQUEUED state again. You should be able to verify this by either setting breakpoints or looking at your logs.
Given that you are executing a PeriodicWorkRequest with a SynchronousExecutor, you will never see the WorkRequest in the RUNNING state. It will be done executing before you can assert that it was RUNNING.
After you return a Result.success() or a Result.failure() from your doWork(), the WorkRequest goes back to ENQUEUED for the next period (given it's a periodic request).
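Given that, a more meaningful assertion (a sketch reusing the setup from the question; with the SynchronousExecutor the run completes inline) is that the periodic request is back in ENQUEUED after the test driver fires its trigger:
@Test
fun testPeriodicWork_RunsInlineAndReEnqueues() {
    val request = SyncDatabaseWorker.prepareSyncDBWorker()
    val workManager = WorkManager.getInstance(context)
    val testDriver = WorkManagerTestInitHelper.getTestDriver(context)!!
    workManager.enqueue(request).result.get()

    // With a SynchronousExecutor, createWork() runs inline right here...
    testDriver.setPeriodDelayMet(request.id)
    testDriver.setAllConstraintsMet(request.id)

    // ...so by the time the state is observable, the run has already finished
    // and the periodic request has been re-enqueued for its next period.
    val workInfo = workManager.getWorkInfoById(request.id).get()
    assertThat(workInfo.state, `is`(WorkInfo.State.ENQUEUED))
}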

Camel: How to join back to a single path after multicast?

This seems like an incredibly simple problem but I've tried everything I can think of. Basically I have a timer route that sends its message to a bunch of different beans. Those beans set a property on the exchange (I've also tried a header on the message) and I want the exchange output from all of those beans to be directed to a filter (which checks for the property or header) and then optionally another endpoint. Something like this:
---> Bean A ---
/ \
timer --> multicast ------> Bean B ------> end --> filter --> endpoint
\ /
---> Bean C ---
Currently the route looks like this, and it works for multicasting to the beans:
from("timer://my-timer?fixedRate=true&period=20000&delay=0")
.multicast()
.to("bean:beanA", "bean:beanB", "bean:beanC");
Here are some of the solutions I've tried:
Solution 1
from("timer://my-timer?fixedRate=true&period=20000&delay=0")
.multicast()
.to("bean:beanA", "bean:beanB", "bean:beanC")
.filter(new myPredicate())
.to("myOptionalEndpoint");
This puts the filter in parallel with the beans instead of after them.
Solution 2
from("timer://my-timer?fixedRate=true&period=20000&delay=0")
.multicast()
.to("bean:beanA", "bean:beanB", "bean:beanC")
.end()
.filter(new myPredicate())
.to("myOptionalEndpoint");
This runs the beans in parallel and then applies the filter. However, the properties/headers are not set. It seems like the exchange is fresh off the timer and is not the one that went through the beans...
Edit: I tried setting the body, and in fact the message that arrives at the filter has no body. I can't imagine Camel would somehow discard the payload of the message, so I have to assume that this exchange is a new one from the timer, not one that went through the beans. However, it does arrive after the beans are done.
Solution 3
from("timer://my-timer?fixedRate=true&period=20000&delay=0")
.multicast()
.beanRef("beanA").to("direct:temp")
.beanRef("beanB").to("direct:temp")
.beanRef("beanC").to("direct:temp")
.end()
from("direct:temp")
.filter(new myPredicate())
.to("myOptionalEndpoint");
Messages reach the filter as expected, but the properties/headers I set are gone, so no messages pass the filter.
Edit: The body is gone here too, so clearly I am not getting the same exchange that came out of the beans...
To clarify, I am looking for a solution where a single exchange from the timer is multicast to each bean (so now we have 3 exchanges) and each of these 3 is then sent to the filter.
Can anybody help me figure out how to build this route?
You need to use an aggregation strategy in order to aggregate all the results into one.
Below is a great example from http://javarticles.com/2015/05/apache-camel-multicast-examples.html (See the Multicast with a Custom Aggregation Strategy section)
public class CamelMulticastAggregationExample {

    public static final void main(String[] args) throws Exception {
        JndiContext jndiContext = new JndiContext();
        jndiContext.bind("myBean", new MyBean());
        CamelContext camelContext = new DefaultCamelContext(jndiContext);
        try {
            camelContext.addRoutes(new RouteBuilder() {
                public void configure() {
                    from("direct:start")
                        .multicast()
                        .aggregationStrategy(new JoinReplyAggregationStrategy())
                        .to("direct:a", "direct:b", "direct:c")
                        .end()
                        .to("stream:out");
                    from("direct:a")
                        .to("bean:myBean?method=addFirst");
                    from("direct:b")
                        .to("bean:myBean?method=addSecond");
                    from("direct:c")
                        .to("bean:myBean?method=addThird");
                }
            });
            ProducerTemplate template = camelContext.createProducerTemplate();
            camelContext.start();
            template.sendBody("direct:start", "Multicast");
        } finally {
            camelContext.stop();
        }
    }
}
where the JoinReplyAggregationStrategy class looks as follows:
public class JoinReplyAggregationStrategy implements AggregationStrategy {

    public Exchange aggregate(Exchange exchange1, Exchange exchange2) {
        if (exchange1 == null) {
            return exchange2;
        } else {
            String body1 = exchange1.getIn().getBody(String.class);
            String body2 = exchange2.getIn().getBody(String.class);
            String merged = (body1 == null) ? body2 : body1 + "," + body2;
            exchange1.getIn().setBody(merged);
            return exchange1;
        }
    }
}
UPDATE: In your case, your aggregation strategy might gather all of your exchanges together into a list, as follows:
public class ListAggregationStrategy implements AggregationStrategy {

    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Message newIn = newExchange.getIn();
        Object newBody = newIn.getBody();
        List list = null;
        if (oldExchange == null) {
            list = new ArrayList();
            list.add(newBody);
            newIn.setBody(list);
            return newExchange;
        } else {
            Message in = oldExchange.getIn();
            list = in.getBody(List.class);
            list.add(newBody);
            return oldExchange;
        }
    }
}
Use the Scatter-Gather EIP instead of a plain multicast!
Here is the solution, inspired by Kalman's:
from("timer://my-timer?fixedRate=true&period=20000&delay=0")
.multicast()
.to("direct:a", "direct:b", "direct:c")
.end()
from("direct:a").beanRef("beanA").to("direct:temp")
from("direct:b").beanRef("beanB").to("direct:temp")
from("direct:c").beanRef("beanC").to("direct:temp")
from("direct:temp")
.filter(new myPredicate())
.to("myOptionalEndpoint");
This was a more complicated solution than I was expecting. There must be a more elegant way to achieve this, but the above solution works. Obviously, use different names than a, b, c and temp, though...

Spring sync vs async REST controller

I'm trying to see the difference between a synchronous Spring REST controller and the async version of the same controller.
Each controller does the same thing: take a RequestBody and save it in a Mongo database.
@RestController
@RequestMapping("/api/1/ticks")
public class TickController {

    @Autowired
    private TickManager tickManager;

    @RequestMapping(method = RequestMethod.POST)
    public ResponseEntity save(@RequestBody List<Tick> ticks) {
        tickManager.save(ticks);
        return new ResponseEntity(HttpStatus.OK);
    }

    @RequestMapping(value = "/async", method = RequestMethod.POST)
    public @ResponseBody Callable<ResponseEntity> saveAsync(@RequestBody List<Tick> ticks) {
        return () -> {
            tickManager.save(ticks);
            return new ResponseEntity(HttpStatus.OK);
        };
    }
}
The tickManager only depends on a tickRepository and just delegates calls to the sub-layer.
The tickRepository is based on Spring Data MongoDB:
@Repository
public interface TickRepository extends MongoRepository<Tick, String> {}
I use Gatling to test those controllers.
This is my scenario:
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class TicksSaveSyncSimulation extends Simulation {

  val rampUpTimeSecs = 20
  val testTimeSecs = 5
  val noOfUsers = 1000
  val minWaitMs = 1000 milliseconds
  val maxWaitMs = 3000 milliseconds

  val baseURL = "http://localhost:9080"
  val requestName = "ticks-save-sync-request"
  val scenarioName = "ticks-save-sync-scenario"
  val URI = "/api/1/ticks"

  val httpConf = http.baseURL(baseURL)

  val http_headers = Map(
    "Accept-Encoding" -> "gzip,deflate",
    "Content-Type" -> "application/json;charset=UTF-8",
    "Keep-Alive" -> "115"
  )

  val scn = scenario(scenarioName)
    .repeat(100) {
      exec(
        http(requestName)
          .post(URI)
          .headers(http_headers)
          .body(StringBody(
            """[{
              | "type": "temperature",
              | "datas": {}
              |}]""".stripMargin))
          .check(status.is(200))
      )
    }

  setUp(scn.inject(rampUsers(1000) over (1 seconds))).protocols(httpConf)
}
I tried several situations, and the sync version always handles about twice as many requests per second as the async version.
When I increase the number of users, both versions crash.
I tried to override the taskExecutor for the async version, with no more success:
@Configuration
public class TaskExecutorConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(1000);
        taskExecutor.setThreadNamePrefix("LULExecutor-");
        taskExecutor.initialize();
        return taskExecutor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new SimpleAsyncUncaughtExceptionHandler();
    }
}
I expected to see a difference in favor of the async implementation.
What am I doing wrong?
Your test looks flawed. It doesn't make sense to be non-blocking at one end of the pipeline (here, your controllers) while being blocking at the other end (tickManager.save really looks like a blocking call). You're just paying the extra cost of the jump onto a ThreadPoolTaskExecutor.
Then, generally speaking, you won't gain anything from a non-blocking architecture when all your tasks are very fast, like a tick. You can expect gains when some tasks take longer, so you don't waste resources (threads, CPU cycles) just waiting for them to complete, and can use them to perform other tasks in the meanwhile.
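For illustration, a sketch (in Kotlin, with a made-up slowLookup() standing in for a genuinely slow blocking operation) of the kind of endpoint where the Callable style actually pays off:
import java.util.concurrent.Callable
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

@RestController
class SlowController {

    // Hypothetical slow, blocking operation (e.g. a remote call that takes seconds).
    private fun slowLookup(): String {
        Thread.sleep(5000)
        return "done"
    }

    // The servlet container thread is released as soon as the Callable is returned;
    // the body runs later on the MVC async executor. The benefit only shows up
    // because slowLookup() is long-running; for fast calls it is pure overhead.
    @GetMapping("/slow")
    fun slow(): Callable<String> = Callable { slowLookup() }
}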
Regarding your "Too many open files" exception, you probably haven't tuned your OS properly for load testing; check the relevant documentation. There's also a good chance that you're running your app and Gatling (and possibly your database too) on the same host, which is bad, as they'll compete for resources.

Spring Integration Java DSL - execute multiple service activators async?

There's a Job which has a list of tasks.
Each task has id, name, status.
I've created a service activator for each task, as follows:
@ServiceActivator
public Message<Task> execute(Message<Task> message) {
    // do stuff
}
I've created a gateway for Job, and in the integration flow, starting from the gateway:
@Bean
public IntegrationFlow startJob() {
    return f -> f
        .handle("jobService", "execute")
        .channel("TaskRoutingChannel");
}

@Bean
public IntegrationFlow startJobTask() {
    return IntegrationFlows.from("TaskRoutingChannel")
        .handle("jobService", "executeTasks")
        .route("headers['Destination-Channel']")
        .get();
}

@Bean
public IntegrationFlow TaskFlow() {
    return IntegrationFlows.from("testTaskChannel")
        .handle("aTaskService", "execute")
        .channel("TaskRoutingChannel")
        .get();
}

@Bean
public IntegrationFlow TaskFlow2() {
    return IntegrationFlows.from("test2TaskChannel")
        .handle("bTaskService", "execute")
        .channel("TaskRoutingChannel")
        .get();
}
I've got the tasks to execute sequentially, using routers as above.
However, I need to start the job and execute all of its tasks in parallel.
I couldn't figure out how to get that going. I tried using @Async on the service activator methods and making them return void, but in that case, how do I chain it back to the routing channel and make it start the next task?
Please help. Thanks.
EDIT:
I used the RecipientListRouter along with an ExecutorChannel to get the parallel execution:
@Bean
public IntegrationFlow startJobTask() {
    return IntegrationFlows.from("TaskRoutingChannel")
        .handle("jobService", "executeTasks")
        .routeToRecipients(r -> r
            .recipient("testTaskChannel")
            .recipient("test2TaskChannel"))
        .get();
}

@Bean
ExecutorChannel testTaskChannel() {
    return new ExecutorChannel(this.getAsyncExecutor());
}

@Bean
ExecutorChannel test2TaskChannel() {
    return new ExecutorChannel(this.getAsyncExecutor());
}

@Bean
public Executor getAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(10);
    executor.initialize();
    return executor;
}
Now, 3 questions:
1) If this is a good approach, how do I send specific parts of the payload to each recipient channel? Assume the payload is a List<>, and I want to send each list item to each channel.
2) How do I set the recipient channels dynamically, say from a header, or from a list?
3) Is this really a good approach? Is there a preferred way to do this?
Thanks in advance.
Your TaskRoutingChannel must be an instance of ExecutorChannel. For example:
return f -> f
    .handle("jobService", "execute")
    .channel(c -> c.executor("TaskRoutingChannel", threadPoolTaskExecutor()));
Otherwise, yes: everything is invoked on a single thread, and that isn't good for your task.
UPDATE
Let me try to answer your questions one by one, although it sounds like each of them should be a separate SO question :-).
If you really need to send the same message to several services, you can use routeToRecipients, or go back to publish-subscribe, or even do dynamic routing based on a header, for example.
To send a part of the message to each channel, it is enough to place a .split() before your .routeToRecipients() (see the sketch below).
To answer your last question I would need to know the business requirements for the task.
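A minimal sketch of that split-then-route arrangement (in Kotlin; the channel and bean names are reused from the question, everything else is illustrative):
@Bean
fun startJobTask(): IntegrationFlow {
    return IntegrationFlows.from("TaskRoutingChannel")
        .handle("jobService", "executeTasks")
        // split the List payload so each task travels as its own message
        .split()
        // each item is then routed; the recipients could also be chosen
        // dynamically, e.g. with .route("headers['Destination-Channel']")
        .routeToRecipients { r ->
            r.recipient("testTaskChannel")
                .recipient("test2TaskChannel")
        }
        .get()
}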
