I'm new to Spring Integration, and I'm trying to get a message from the temporary channel.
Reading the documentation, I see there is a temporary channel used by Spring; I guess it is named NullChannel.
I need my gateway to return the value from the temporary channel.
http controller -> gateway -> direct channel -> activator 1 -> queue channel -> activator 2
So my activator 2 will put the new value into the temporary channel, and the gateway will retrieve the value from the temporary channel.
@MessageEndpoint
public class Activator2 {

    @Autowired
    private NullChannel nullChannel;

    @ServiceActivator(inputChannel = "asyncChannel")
    public void plus(Integer message) {
        try {
            message++;
            Thread.sleep(2000);
            nullChannel.send(MessageBuilder.withPayload(message).build());
            log.info("Activator 2: " + message);
        } catch (InterruptedException e) {
            log.error("I don't want to sleep");
        }
    }
}
It's not working, and I'm not sure if everything is wired correctly.
NullChannel is like /dev/null in Unix; it just discards the value.
@ServiceActivator(inputChannel = "asyncChannel")
public Integer plus(Integer message) {
    return message;
}
will automatically do what you want.
If there is no outputChannel, the framework sends the result to the replyChannel header.
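For completeness, a minimal gateway sketch showing the calling side (the interface name is illustrative; "directChannel" is taken from your flow diagram):

@MessagingGateway
public interface PlusGateway {

    // The framework adds a replyChannel header to the request message; whatever
    // the downstream flow returns is sent there and becomes this method's result.
    @Gateway(requestChannel = "directChannel")
    Integer plus(Integer number);
}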
I have a remote FTP server to which I have attached an FtpStreamingMessageSource inbound channel adapter, and now I need to filter files by filename using a regex and route them to the corresponding channels to launch a batch job. The official documentation has PayloadTypeRouter and HeaderValueRouter, but they are not suitable for this task. I can play with filters, but then I would have to write several inbound channel adapters, one for each file with a specific filter. Is this a normal approach, or is there a better solution?
For example: I have A.csv, B.csv, C.csv, D.csv files on the FTP server. After reading, I need to route A.csv to channel A, B.csv to channel B, and so on.
Below is my current working solution; feel free to comment and correct it:
@Override
protected Object doTransform(Message<?> message) {
    IntegrationMessageHeaderAccessor accessor = new IntegrationMessageHeaderAccessor(message);
    MessageBuilder<?> messageBuilder = MessageBuilder.fromMessage(message);
    String fileName = accessor.getHeader("file_remoteFile").toString();
    if (fileName.contains("file_name1")) {
        messageBuilder.setHeader("channel", "channel1");
    } else if (fileName.contains("file_name2")) {
        messageBuilder.setHeader("channel", "channel2");
    } else if (fileName.contains("file_name3")) {
        messageBuilder.setHeader("channel", "channel3");
    } else if (fileName.contains("file_name4")) {
        messageBuilder.setHeader("channel", "channel4");
    }
    return messageBuilder.build();
}
And here is the routing:
@Bean
@org.springframework.integration.annotation.Transformer(inputChannel = CHANNEL_STREAMED_DATA, outputChannel = CHANNEL_DATA)
public CustomTransformer customTransformer() {
    return new CustomTransformer();
}

@ServiceActivator(inputChannel = CHANNEL_DATA)
@Bean
public HeaderValueRouter router() {
    HeaderValueRouter router = new HeaderValueRouter("channel");
    router.setChannelMapping("channel1", "channelA");
    router.setChannelMapping("channel2", "channelB");
    return router;
}
Well, you already have that file_remoteFile header with all the info you need for routing.
Use a @Router instead, with a plain POJO method:

@Router(inputChannel = CHANNEL_STREAMED_DATA)
String routeByRemoteFile(@Header(FileHeaders.REMOTE_FILE) String remoteFile) {
    ...
}

And return the respective channel name according to your file-name matching logic.
See more info in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#router-annotation
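A possible implementation sketch, since the answer leaves the body elided (the matching rules and channel names here are assumptions, mirroring the A.csv/B.csv example from the question):

@Router(inputChannel = CHANNEL_STREAMED_DATA)
String routeByRemoteFile(@Header(FileHeaders.REMOTE_FILE) String remoteFile) {
    // Hypothetical mapping from remote file name to channel name; adjust the
    // checks (or use a regex) to match the real naming convention.
    if (remoteFile.contains("A.csv")) {
        return "channelA";
    }
    if (remoteFile.contains("B.csv")) {
        return "channelB";
    }
    return "defaultChannel"; // or throw to reject unexpected files
}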
I am trying to configure my Spring AMQP ListenerContainer to allow for a certain type of retry flow that's backwards compatible with a custom rabbit client previously used in the project I'm working on.
The protocol works as follows:
A message is received on a channel.
If processing fails, the message is nacked with the requeue flag set to false
A copy of the message with additional/updated headers (a retry counter) is published to the same queue
The headers are used for filtering incoming messages, but that's not important here.
I would like the behaviour to happen on an opt-in basis, so that more standardised Spring retry flows can be used in cases where compatibility with the old client isn't a concern, and the listeners should be able to work without requiring manual acking.
I have implemented a working solution, which I'll get back to below. Where I'm struggling is to publish the new message after signalling to the container that it should nack the current message, because I can't really find any good hooks after the nack or before the next message.
Reading the documentation, it feels like I'm looking for something analogous to the behaviour of RepublishMessageRecoverer used as the final step of a retry interceptor. The main difference in my case is that I need to republish immediately on failure, not as a final recovery step. I tried to look at the implementation of RepublishMessageRecoverer, but the many layers of indirection made it hard for me to understand where the republishing is triggered, and whether a nack happens before it.
My working implementation looks as follows. Note that I'm using a ThrowsAdvice, but I think an error handler could also be used with nearly identical logic.
/*
 * MyConfig.class, configuring the container factory
 */
@Configuration
public class MyConfig {

    @Bean
    // NB: bean name is important, overwrites the autoconfigured bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory,
            Jackson2JsonMessageConverter messageConverter,
            RabbitTemplate rabbitTemplate
    ) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(messageConverter);
        // AOP
        var a1 = new CustomHeaderInspectionAdvice();
        var a2 = new MyThrowsAdvice(rabbitTemplate);
        Advice[] adviceChain = {a1, a2};
        factory.setAdviceChain(adviceChain);
        return factory;
    }
}
/*
 * MyThrowsAdvice.class, hooking into the exception flow from the listener
 */
public class MyThrowsAdvice implements ThrowsAdvice {

    private static final Logger logger = LoggerFactory.getLogger(MyThrowsAdvice.class);

    private final AmqpTemplate amqpTemplate;

    public MyThrowsAdvice(AmqpTemplate amqpTemplate) {
        this.amqpTemplate = amqpTemplate;
    }

    public void afterThrowing(Method method, Object[] args, Object target, ListenerExecutionFailedException ex) {
        var message = message(args);
        var cause = ex.getCause();
        // opt-in to the old protocol by throwing an instance of BusinessException in business logic
        if (cause instanceof BusinessException) {
            /*
             * NB: Since we want to trigger execution after the current method fails
             * with an exception, we need to schedule it in another thread and delay
             * execution until the nack has happened.
             */
            new Thread(() -> {
                try {
                    Thread.sleep(1000L);
                    var messageProperties = message.getMessageProperties();
                    var count = getCount(messageProperties);
                    messageProperties.setHeader("xb-count", count + 1);
                    var routingKey = messageProperties.getReceivedRoutingKey();
                    var exchange = messageProperties.getReceivedExchange();
                    amqpTemplate.send(exchange, routingKey, message);
                    logger.info("Sent!");
                } catch (InterruptedException e) {
                    logger.error("Sleep interrupted", e);
                }
            }).start();
            // NB: Produce the desired nack.
            throw new AmqpRejectAndDontRequeueException("Business logic exception, message will be re-queued with updated headers", cause);
        }
    }

    private static long getCount(MessageProperties messageProperties) {
        try {
            Long c = messageProperties.getHeader("xb-count");
            return c == null ? 0 : c;
        } catch (Exception e) {
            return 0;
        }
    }

    private static Message message(Object[] args) {
        try {
            return (Message) args[1];
        } catch (Exception e) {
            logger.info("Bad cast parse", e);
            throw new AmqpRejectAndDontRequeueException(e);
        }
    }
}
Now, as you can imagine, I'm not particularly pleased with the indeterminism of scheduling a new thread with a delay.
So my question is simply: is there any way I could produce a deterministic solution to my problem using the provided hooks of the ListenerContainer?
Your current solution risks message loss, since you are publishing on a different thread after a delay; if the server crashes during that delay, the message is lost.
It would be better to publish immediately to another queue with a TTL and a dead-letter configuration that republishes the expired message back to the original queue.
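Illustrative only (queue names and TTL are placeholders): a retry queue whose message TTL dead-letters expired messages back to the original queue via the default exchange:

@Bean
public Queue retryQueue() {
    return QueueBuilder.durable("my.queue.retry")
            .withArgument("x-message-ttl", 5000)                   // delay before republish
            .withArgument("x-dead-letter-exchange", "")            // default exchange
            .withArgument("x-dead-letter-routing-key", "my.queue") // back to the original queue
            .build();
}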
Using the RepublishMessageRecoverer with retries disabled (maxAttempts=1) should do what you need.
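A minimal sketch of that wiring, assuming a stateless retry interceptor in the container's advice chain (the exchange and routing key are placeholders):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // maxAttempts(1) means no in-memory retries: the recoverer runs on the
    // first failure and republishes the message immediately.
    factory.setAdviceChain(RetryInterceptorBuilder.stateless()
            .maxAttempts(1)
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "retry.exchange", "retry"))
            .build());
    return factory;
}

RepublishMessageRecoverer adds its own x-exception-* headers; you could subclass it (for example, overriding additionalHeaders()) to carry your retry counter instead.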
I'm using Spring Cloud Stream with a Kafka broker for microservice inter-communication. As part of this, StreamBridge will be used to send the message, which is fine.
But the message need not be consumed immediately; rather, it should be consumed only when a condition is satisfied.
From the documentation, I understand that I need to use polled consumers for this (do correct me if I'm mistaken).
This is what I've tried from what I've understood of the documentation.
application.properties
spring.cloud.stream.pollable-source = consumeResponse
spring.cloud.stream.function.definition = consumeResponse
#stream bridge
spring.cloud.stream.bindings.outputchannel1.destination = REQUEST_TOPIC
spring.cloud.stream.bindings.outputchannel1.binder= kafka1
#polled Consumer
spring.cloud.stream.bindings.consumeResponse-in-0.binder= kafka1
spring.cloud.stream.bindings.consumeResponse-in-0.destination = REQUEST_TOPIC
spring.cloud.stream.bindings.consumeResponse-in-0.group = consumer_cloud_stream1
spring.cloud.stream.binders.kafka1.type=kafka
MainApplication.java
@Bean
public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
    return args -> {
        // produce messages
        for (int i = 0; i < 5; i++) {
            streamBridge.send("outputchannel1", "msg" + i);
            System.out.println("Request :: " + "msg" + i);
        }
    };
}

@Bean
public Consumer<String> consumeResponse() {
    return (response) -> {
        // consume message
        System.out.println("Response :: " + response);
    };
}

@Bean
public ApplicationRunner poller(PollableMessageSource destIn, MessageChannel destOut) {
    return args -> {
        while (someCondition) { // some condition that checks whether or not to consume the message
            try {
                // condition satisfied, so forward the message to the consumer
                if (!destIn.poll(m -> {
                    String newPayload = (String) m.getPayload();
                    destOut.send(new GenericMessage<>(newPayload));
                })) {
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
}
But this throws the following exception:
Parameter 1 of method poller in com.MainApplication required a single bean, but 2 were found:
- nullChannel: defined in null
- errorChannel: defined in null
I'd appreciate it if someone could help me out here or point me towards a working example for the same.
Spring Boot version: 2.6.4,
Spring Cloud version: 2021.0.1
Why don't you just inject the StreamBridge into your runner instead of the message channel?
By default, stream bridge output channels are created on-demand (first send).
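A sketch of that change (the output binding name "processedResponse-out-0" is a placeholder, not from the question): injecting StreamBridge avoids the ambiguous MessageChannel injection that matched both nullChannel and errorChannel.

@Bean
public ApplicationRunner poller(PollableMessageSource destIn, StreamBridge streamBridge) {
    return args -> {
        while (someCondition) { // same condition as in the question
            // Forward the polled payload via StreamBridge; the output binding
            // is created on demand on the first send.
            if (!destIn.poll(m -> streamBridge.send("processedResponse-out-0", m.getPayload()))) {
                Thread.sleep(1000);
            }
        }
    };
}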
I have a channel that stores messages. When new messages arrive, if the server has not yet processed all the messages still in the queue, I need to clear the queue (for example, by rerouting all the data into another channel). For this, I used a router. But the problem is that when a new message arrives, not only the old messages but also the new ones are rerouted into the other channel. New messages must remain in the queue. How can I solve this problem?
This is my code:
@Bean
public IntegrationFlow integerFlow() {
    return IntegrationFlows.from("input")
            .bridge(e -> e.poller(Pollers.fixedDelay(500, TimeUnit.MILLISECONDS, 1000).maxMessagesPerPoll(1)))
            .route(r -> {
                if (flag) {
                    return "mainChannel";
                } else {
                    return "garbageChannel";
                }
            })
            .get();
}

@Bean
public IntegrationFlow outFlow() {
    return IntegrationFlows.from("mainChannel")
            .handle(m -> {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(m.getPayload() + "\tmainFlow");
            })
            .get();
}

@Bean
public IntegrationFlow outGarbage() {
    return IntegrationFlows.from("garbageChannel")
            .handle(m -> System.out.println(m.getPayload() + "\tgarbage"))
            .get();
}
The flag value is changed through a @Gateway by pressing the "q" and "e" keys.
I would suggest you take a look at the purge() API of the QueueChannel:

/**
 * Remove any {@link Message Messages} that are not accepted by the provided selector.
 * @param selector The message selector.
 * @return The list of messages that were purged.
 */
List<Message<?>> purge(@Nullable MessageSelector selector);

This way, with a custom MessageSelector, you will be able to remove old messages from the queue; consult the timestamp message header for that. With the result of this method you can do whatever you need to do with the old messages.
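An illustrative sketch, assuming the "input" queue channel and the garbage channel are available as beans: when a new message arrives, purge everything older than that moment and hand the stale messages to the garbage flow. Note that purge() keeps the messages the selector accepts and removes the rest:

long cutoff = System.currentTimeMillis();
List<Message<?>> stale = queueChannel.purge(message -> {
    Long timestamp = message.getHeaders().getTimestamp();
    return timestamp != null && timestamp >= cutoff; // accept (keep) only new messages
});
stale.forEach(garbageChannel::send);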
I am trying to build a simple cloud stream application with the Kafka binder. Let me describe the setup.
1. I have a producer producing to topic topic_1.
2. There's a stream binder, binding topic_1 after some processing into topic_2.
@StreamListener(MyBinder.INPUT)
@SendTo(MyBinder.OUTPUT_2)
public String handleIncomingMsgs(String s) {
    logger.info(s); // prints all the messages
    return s;
}
When the producer produces messages, the StreamListener handleIncomingMsgs gets all the messages.
After receiving them, it should forward the messages to some other channel.
@Service
@EnableBinding(MyBinder.class)
public class LogMsg {

    @StreamListener(MyBinder.OUTPUT_2)
    public void handle(String board) {
        logger.info("Received payload: " + board); // prints every alternate message
    }
}
Here is my binder:
public interface MyBinder {

    String INPUT = "input";
    String OUTPUT_1 = "output_1";
    String OUTPUT_2 = "output_2";

    @Autowired
    @Input(INPUT)
    SubscribableChannel job_board_views();

    @Autowired
    @Output(OUTPUT_1)
    MessageChannel outboundJobBoards();

    @Autowired
    @Output(OUTPUT_2)
    MessageChannel outboundUsers();
}
I am new to these technologies and unable to figure out what is going wrong here. Can someone please help?
Your guess is correct; you have two consumers on the OUTPUT_2 channel: the listener and the binding that sends the message out.
They each get alternate messages.
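For background, a minimal illustration (not from the original post) of why alternate messages are seen: a subscribable channel with multiple subscribers load-balances round-robin by default.

DirectChannel channel = new DirectChannel(); // round-robin load balancing by default
channel.subscribe(m -> System.out.println("subscriber A: " + m.getPayload()));
channel.subscribe(m -> System.out.println("subscriber B: " + m.getPayload()));
channel.send(new GenericMessage<>("msg0")); // handled by subscriber A
channel.send(new GenericMessage<>("msg1")); // handled by subscriber B

Giving the LogMsg listener its own input binding (bound to the destination that OUTPUT_2 publishes to) instead of subscribing it to the output channel avoids the competition.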