I use Spring Integration to read data from a database. Currently I use a polling adapter:
@Bean
public MessageSource<Object> jdbcMessageSource() {
    JdbcPollingChannelAdapter adapter =
            new JdbcPollingChannelAdapter(dataSource(), "SELECT id, clientName FROM client");
    return adapter;
}
Flow:
@Bean
public IntegrationFlow pollingFlow() throws Exception {
    return IntegrationFlows.from(jdbcMessageSource(),
                    c -> c.poller(Pollers.fixedRate(30000).maxMessagesPerPoll(1)))
            .channel(channel1())
            .handle(handler())
            .get();
}
But I would like to schedule my flow from another system. Does anyone know how to do this?
"schedule my flow from other system"
From your flow's perspective this sounds like an event-driven action. For this purpose you should use a JdbcOutboundGateway with the same SELECT.
And, of course, you need to find a hook for that external system to trigger an event on your flow's input channel. That might be any inbound channel adapter or message-driven adapter, e.g. JMS, AMQP, HTTP and so on. It depends on what you already have in your middleware and what this application can expose to external systems.
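For illustration, a minimal sketch of that idea using an HTTP inbound gateway as the external hook (the "/trigger" path is a made-up example, and passing null as the update query so the gateway only runs the SELECT assumes a Spring Integration version that allows a select-only gateway):

@Bean
public IntegrationFlow eventDrivenFlow() {
    // The external system hits /trigger; the reply is the SELECT result.
    JdbcOutboundGateway gateway =
            new JdbcOutboundGateway(dataSource(), null, "SELECT id, clientName FROM client");
    return IntegrationFlows.from(Http.inboundGateway("/trigger"))
            .handle(gateway)
            .get();
}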
I think I solved the problem with a custom trigger:
public Trigger onlyOnceTrigger() {
    return new Trigger() {

        private final AtomicBoolean invoked = new AtomicBoolean();

        @Override
        public Date nextExecutionTime(TriggerContext triggerContext) {
            // Fire immediately on the first call, then never again.
            return this.invoked.getAndSet(true) ? null : new Date();
        }

    };
}
And my flow:
@Bean
public IntegrationFlow pollingFlow() throws Exception {
    return IntegrationFlows.from(jdbcMessageSource(),
                    c -> c.poller(Pollers.trigger(onlyOnceTrigger()).maxMessagesPerPoll(1)))
            .channel(channel1())
            .handle(handler())
            .get();
}
I've been trying to get Pub/Sub to work within a Spring application. To get up and running I've been reading through tutorials and documentation like this.
I can get things to build and start, but if I go through the Cloud Console to send a message to the test subscription, it never arrives.
This is what my code looks like right now:
@Configuration
@Import({GcpPubSubAutoConfiguration.class})
public class PubSubConfigurator {

    @Bean
    public GcpProjectIdProvider projectIdProvider() {
        return () -> "project-id";
    }

    @Bean
    public CredentialsProvider credentialsProvider() {
        return GoogleCredentials::getApplicationDefault;
    }

    @Bean
    public MessageChannel inputMessageChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    @InboundChannelAdapter(channel = "inputMessageChannel", poller = @Poller(fixedDelay = "5"))
    public MessageSource<Object> pubsubAdapter(PubSubTemplate pubSubTemplate) {
        PubSubMessageSource messageSource = new PubSubMessageSource(pubSubTemplate, "tst-sandbox");
        messageSource.setAckMode(AckMode.MANUAL);
        messageSource.setPayloadType(String.class);
        messageSource.setBlockOnPull(false);
        messageSource.setMaxFetchSize(10);
        //pubSubTemplate.pull("tst-sandbox", 10, true);
        return messageSource;
    }

    // Define what happens to the messages arriving in the message channel.
    @ServiceActivator(inputChannel = "inputMessageChannel")
    public void messageReceiver(
            String payload,
            @Header(GcpPubSubHeaders.ORIGINAL_MESSAGE) BasicAcknowledgeablePubsubMessage message) {
        System.out.println("Message arrived via an inbound channel adapter from sub-one! Payload: " + payload);
        message.ack();
    }
}
My thinking was that the poller annotation would start a poller that runs every so often to check for messages and send them to the method annotated with @ServiceActivator, but this is clearly not the case, as it is never hit.
Interestingly enough, if I put a breakpoint right before "return messageSource" and check the result of the template.pull call, the messages ARE returned, so it is seemingly not an issue with the connection itself.
What am I missing here? Tutorials and documentation aren't helping much at this point, as they all use pretty much the same bit of tutorial code as above.
I have tried variations of the above code, like creating the adapter instead of the message source, like so:
@Bean
public PubSubInboundChannelAdapter inboundChannelAdapter(
        @Qualifier("inputMessageChannel") MessageChannel messageChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter =
            new PubSubInboundChannelAdapter(pubSubTemplate, "tst-sandbox");
    adapter.setOutputChannel(messageChannel);
    adapter.setAckMode(AckMode.MANUAL);
    adapter.setPayloadType(String.class);
    return adapter;
}
to no avail. Any suggestions are welcome.
Found the problem after creating a Spring Boot project from scratch (the main project is plain Spring, not Boot). I noticed in the debug output that it was auto-starting the service activator bean and some other things, like actually subscribing to the channels, which it wasn't doing in the main project.
After a quick Google search the solution was simple: I had to add the
@EnableIntegration
annotation at class level, and the messages started coming in.
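Applied to the configuration above, that would look something like this (a sketch; only the added annotation is new):

@Configuration
@EnableIntegration // bootstraps the Spring Integration infrastructure (pollers, service activators, etc.)
@Import({GcpPubSubAutoConfiguration.class})
public class PubSubConfigurator {
    // ... beans as above ...
}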
I have something like below, which works well, but I would prefer to check health without sending any message (not only checking the socket connection). I know Kafka has something like a KafkaHealthIndicator out of the box; does someone have experience with or an example of using it?
public class KafkaHealthIndicator implements HealthIndicator {

    private final Logger log = LoggerFactory.getLogger(KafkaHealthIndicator.class);

    private final KafkaTemplate<String, String> kafka;

    public KafkaHealthIndicator(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    @Override
    public Health health() {
        try {
            kafka.send("kafka-health-indicator", "❥").get(100, TimeUnit.MILLISECONDS);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            return Health.down(e).build();
        }
        return Health.up().build();
    }
}
In order to trip the health indicator, retrieve data from one of the future objects; otherwise the indicator is UP even when Kafka is down!
When Kafka is not connected, future.get() throws an exception, which in turn sets the indicator DOWN.
@Configuration
public class KafkaConfig {

    @Autowired
    private KafkaAdmin kafkaAdmin;

    @Bean
    public AdminClient kafkaAdminClient() {
        return AdminClient.create(kafkaAdmin.getConfigurationProperties());
    }

    @Bean
    public HealthIndicator kafkaHealthIndicator(AdminClient kafkaAdminClient) {
        final DescribeClusterOptions options = new DescribeClusterOptions()
                .timeoutMs(1000);
        return new AbstractHealthIndicator() {
            @Override
            protected void doHealthCheck(Health.Builder builder) throws Exception {
                DescribeClusterResult clusterDescription = kafkaAdminClient.describeCluster(options);
                // In order to trip the health indicator DOWN, retrieve data from one of the
                // future objects; otherwise the indicator is UP even when Kafka is down!!!
                // When Kafka is not connected, future.get() throws an exception, which
                // in turn sets the indicator DOWN.
                clusterDescription.clusterId().get();
                // or clusterDescription.nodes().get().size()
                // or clusterDescription.controller().get();
                builder.up().build();
                // Alternatively, directly use data from the futures in the health details:
                builder.up()
                        .withDetail("clusterId", clusterDescription.clusterId().get())
                        .withDetail("nodeCount", clusterDescription.nodes().get().size())
                        .build();
            }
        };
    }
}
Use the AdminClient API to check the health of the cluster by describing the cluster and/or the topic(s) you'll be interacting with, and verifying those topics have the required number of in-sync replicas, for example.
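A sketch of that topic-level check (assumptions: "orders" is a placeholder topic name and 2 is an example minimum in-sync replica count):

DescribeTopicsResult topics = kafkaAdminClient.describeTopics(Collections.singletonList("orders"));
TopicDescription description = topics.values().get("orders").get(1, TimeUnit.SECONDS);
// Healthy only if every partition has at least the required number of in-sync replicas.
boolean healthy = description.partitions().stream()
        .allMatch(partition -> partition.isr().size() >= 2);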
"Kafka has something like KafkaHealthIndicator out of the box"
It doesn't. Spring's Kafka integration might.
I am using an IntegrationFlow as an SFTP inbound DSL configuration, where I am using a CustomTriggerAdvice to handle manual triggering. Please see the code snippet below for reference.
I am also using a RotatingServerAdvice to handle multiple paths on the same host.
But when I start the SFTP inbound, it fetches files from every path the first time, but it does not work the second time onward. The SFTP inbound starts but does not fetch files from the paths. I couldn't figure out the problem. Is there anything I am missing?
SftpConfiguration
public IntegrationFlow fileFlow() {
    SftpInboundChannelAdapterSpec spec = Sftp
            .inboundAdapter(dSF())
            .preserveTimestamp(true)
            .remoteDirectory(".")
            .autoCreateLocalDirectory(true)
            .deleteRemoteFiles(false)
            .localDirectory(new File(getDestinationLocation()));
    return IntegrationFlows
            .from(spec, e -> e.id(BEAN_ID)
                    .autoStartup(false)
                    .poller(Pollers
                            .fixedDelay(5000)
                            .advice(
                                    customRotatingServerAdvice(dSF()),
                                    customTriggerAdvice()
                            )
                    )
            )
            .channel(sftpReceiverChannel())
            .handle(sftpInboundMessageHandler())
            .get();
}

private MessageChannel sftpReceiverChannel() {
    return MessageChannels.direct().get();
}
... ... ...
@Bean
public RotatingServerAdvice customRotatingServerAdvice(
        DelegatingSessionFactory<LsEntry> dSF
) {
    List<String> pathList = getSourcePathList();
    for (String path : pathList) {
        keyDirectories.add(new RotationPolicy.KeyDirectory(KEY, path));
    }
    return new RotatingServerAdvice(
            dSF,
            keyDirectories
    );
}

@Bean
public CustomTriggerAdvice customTriggerAdvice() {
    return new CustomTriggerAdvice(customControlChannel(), BEAN_ID);
}

@Bean
public IntegrationFlow customControlBus() {
    return IntegrationFlows.from(customControlChannel())
            .controlBus()
            .get();
}

@Bean
public MessageChannel customControlChannel() {
    return MessageChannels.direct().get();
}
CustomTriggerAdvice
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    private final MessageChannel controlChannel;

    private final String BEAN_ID;

    public CustomTriggerAdvice(MessageChannel controlChannel, String beanID) {
        this.controlChannel = controlChannel;
        this.BEAN_ID = beanID;
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            // Stop the adapter via the control bus when a poll returns nothing.
            controlChannel.send(new GenericMessage<>("@" + BEAN_ID + ".stop()"));
        }
        return result;
    }
}
Starting the SFTP inbound using the control MessageChannel:

@Autowired
@Qualifier("customControlChannel")
MessageChannel controlChannel;

public void startSftpInbound() {
    controlChannel.send(new GenericMessage<>("@" + beanID + ".start()"));
}
I need the system to start on demand, fetch files completing one cycle, and stop. If it is not stopped after that, it will continue polling, never stop, and my system will fall into an infinite loop. Is there any way to know when the RotatingServerAdvice has completed polling all servers at least once? Does it throw an event or something like that?
You probably misunderstood the logic of the afterReceive(@Nullable Message<?> result, MessageSource<?> source) contract. For your requirements you can't stop the channel adapter when one of the servers has returned nothing to poll: that way you don't give the other servers a chance to poll on the next polling cycle.
I think your idea is to iterate over all the servers only once and then stop, probably independently of the result from any of them. The best way to stop, then, is to use a RotatingServerAdvice with fair = true, which moves to the next server on every poll. The stop can be performed from your custom afterReceive(), independently of the result, when you see that RotationPolicy.getCurrent() is equal to the last entry in the list. This way you iterate over all of them and then stop, leaving the first one for the next polling cycle.
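A sketch of that idea (assumptions: the rotation policy built with fair = true and the keyDirectories list are both made available to the advice, e.g. via its constructor):

@Override
public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
    RotationPolicy.KeyDirectory current = this.rotationPolicy.getCurrent();
    // Stop after the last server/directory of the rotation has been polled once,
    // regardless of whether that poll produced a message.
    if (current.equals(this.keyDirectories.get(this.keyDirectories.size() - 1))) {
        controlChannel.send(new GenericMessage<>("@" + BEAN_ID + ".stop()"));
    }
    return result;
}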
We are using Spring Cloud Stream 2.0 with Kafka as the message broker.
We've implemented a circuit breaker which stops the application context for cases where the target system (DB or 3rd-party API) is unavailable, as suggested here: Stop Spring Cloud Stream @StreamListener from listening when target system is down
Now in Spring Cloud Stream 2.0 there is a way to manage the lifecycle of a binding using the actuator: Binding visualization and control
Is it possible to control the binding lifecycle from code, meaning: in case the target server is down, pause the binding, and when it's back up, resume?
Sorry, I misread your question.
You can autowire the BindingsEndpoint but, unfortunately, its State enum is private, so you can't call changeState() programmatically.
I have opened an issue for this.
EDIT
You can do it with reflection, but it's a bit ugly...
@SpringBootApplication
@EnableBinding(Sink.class)
public class So53476384Application {

    public static void main(String[] args) {
        SpringApplication.run(So53476384Application.class, args);
    }

    @Autowired
    BindingsEndpoint binding;

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            Class<?> clazz = ClassUtils.forName("org.springframework.cloud.stream.endpoint.BindingsEndpoint$State",
                    So53476384Application.class.getClassLoader());
            ReflectionUtils.doWithMethods(BindingsEndpoint.class, method -> {
                try {
                    method.invoke(this.binding, "input", clazz.getEnumConstants()[2]); // PAUSE
                }
                catch (InvocationTargetException e) {
                    e.printStackTrace();
                }
            }, method -> method.getName().equals("changeState"));
        };
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in) {
    }
}
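If relying on the ordinal position feels too fragile, the constant could also be looked up by name (a sketch of the same reflective approach; the assumption is that the private enum constant is named "PAUSED"):

// Resolve the enum constant by name instead of by index.
@SuppressWarnings({ "unchecked", "rawtypes" })
Object paused = Enum.valueOf((Class) clazz, "PAUSED");
method.invoke(this.binding, "input", paused);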
There's a Job which has a list of tasks.
Each task has an id, name, and status.
I've created service activators for each task, as follows:
@ServiceActivator
public Message<Task> execute(Message<Task> message) {
    //do stuff
}
I've created a gateway for the Job, and in the integration flow, starting from the gateway:
@Bean
public IntegrationFlow startJob() {
    return f -> f
            .handle("jobService", "execute")
            .channel("TaskRoutingChannel");
}

@Bean
public IntegrationFlow startJobTask() {
    return IntegrationFlows.from("TaskRoutingChannel")
            .handle("jobService", "executeTasks")
            .route("headers['Destination-Channel']")
            .get();
}

@Bean
public IntegrationFlow TaskFlow() {
    return IntegrationFlows.from("testTaskChannel")
            .handle("aTaskService", "execute")
            .channel("TaskRoutingChannel")
            .get();
}

@Bean
public IntegrationFlow TaskFlow2() {
    return IntegrationFlows.from("test2TaskChannel")
            .handle("bTaskService", "execute")
            .channel("TaskRoutingChannel")
            .get();
}
I've got the tasks to execute sequentially, using routers as above.
However, I need to start the job and execute all of its tasks in parallel.
I couldn't figure out how to get that going. I tried using @Async on the service activator methods and making them return void, but in that case, how do I chain back to the routing channel and make it start the next task?
Please help. Thanks.
EDIT:
I used the RecipientListRouter along with an ExecutorChannel to get the parallel execution:
@Bean
public IntegrationFlow startJobTask() {
    return IntegrationFlows.from("TaskRoutingChannel")
            .handle("jobService", "executeTasks")
            .routeToRecipients(r -> r
                    .recipient("testTaskChannel")
                    .recipient("test2TaskChannel"))
            .get();
}

@Bean
public ExecutorChannel testTaskChannel() {
    return new ExecutorChannel(this.getAsyncExecutor());
}

@Bean
public ExecutorChannel test2TaskChannel() {
    return new ExecutorChannel(this.getAsyncExecutor());
}

@Bean
public Executor getAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(10);
    executor.initialize();
    return executor;
}
Now, 3 questions:
1) If this is a good approach, how do I send specific parts of the payload to each recipient channel? Assume the payload is a List<>, and I want to send each list item to a channel.
2) How do I dynamically set the recipient channels, say from a header, or from a list?
3) Is this really a good approach? Is there a preferred way to do this?
Thanks in advance.
Your TaskRoutingChannel must be an instance of ExecutorChannel. For example:

return f -> f
        .handle("jobService", "execute")
        .channel(c -> c.executor("TaskRoutingChannel", threadPoolTaskExecutor()));

Otherwise, yes: everything is invoked on a single thread, and that isn't good for your task.
UPDATE
Let me try to answer your questions one by one, although it sounds like each of them should be a separate SO question :-).
If you really need to send the same message to several services, you can use routeToRecipients, or go back to a publish-subscribe channel. Or you can even do dynamic routing based on a header, for example.
To send a part of the message to each channel, it is enough to place a .split() before your .routeToRecipients(), as in the sketch below.
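For example (a sketch, assuming the upstream payload is a List and each item carries the Destination-Channel header used above):

@Bean
public IntegrationFlow startJobTask() {
    return IntegrationFlows.from("TaskRoutingChannel")
            .handle("jobService", "executeTasks")
            .split() // each element of the List payload becomes its own message
            .route("headers['Destination-Channel']") // route each item dynamically
            .get();
}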
To answer your last question, I would need to know the business requirements for the task.