I have a requirement where I need to continuously look for a file at a Unix location. Once it is available, I need to parse it and convert it to some JSON format. This needs to be done using Spring Integration DSL.
Following is the piece of code I got from the Spring site, but it shows the following exception:
o.s.integration.handler.LoggingHandler: org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'application.processFileChannel'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
Below is the code:
@SpringBootApplication
public class FileReadingJavaApplication {
public static void main(String[] args) {
new SpringApplicationBuilder(FileReadingJavaApplication.class)
.web(false)
.run(args);
}
@Bean
public IntegrationFlow fileReadingFlow() {
return IntegrationFlows
.from(s -> s.file(new File("Y://"))
.patternFilter("*.txt"),
e -> e.poller(Pollers.fixedDelay(1000)))
.transform(Transformers.fileToString())
.channel("processFileChannel")
.get();
}
}
New Code:
@SpringBootApplication
public class SpringIntegration {
public static void main(String[] args) {
new SpringApplicationBuilder(SpringIntegration.class)
.web(false)
.run(args);
}
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
factory.setHost("ip");
factory.setPort(port);
factory.setUser("username");
factory.setPassword("pwd");
factory.setAllowUnknownKeys(true);
return new CachingSessionFactory<LsEntry>(factory);
}
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
fileSynchronizer.setDeleteRemoteFiles(false);
fileSynchronizer.setRemoteDirectory("remote dir");
fileSynchronizer.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
return fileSynchronizer;
}
@Bean
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource ftpMessageSource() {
SftpInboundFileSynchronizingMessageSource source =
new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
source.setLocalFilter(new AcceptOnceFileListFilter<File>());
source.setLocalDirectory(new File("Local directory"));
return source;
}
@Bean
@ServiceActivator(inputChannel = "fileInputChannel")
public MessageHandler handler() {
return new MessageHandler() {
@Override
public void handleMessage(Message<?> message) throws MessagingException {
System.out.println("File Name : "+message.getPayload());
}
};
}
@Bean
public static StandardIntegrationFlow processFileFlow() {
return IntegrationFlows
.from("fileInputChannel").split()
.handle("fileProcessor", "process").get();
}
@Bean
@InboundChannelAdapter(value = "fileInputChannel", poller = @Poller(fixedDelay = "1000"))
public MessageSource<File> fileReadingMessageSource() {
AcceptOnceFileListFilter<File> filters =new AcceptOnceFileListFilter<>();
FileReadingMessageSource source = new FileReadingMessageSource();
source.setAutoCreateDirectory(true);
source.setDirectory(new File("Local directory"));
source.setFilter(filters);
return source;
}
@Bean
public FileProcessor fileProcessor() {
return new FileProcessor();
}
@Bean
@ServiceActivator(inputChannel = "fileInputChannel")
public AmqpOutboundEndpoint amqpOutbound(AmqpTemplate amqpTemplate) {
AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
outbound.setExpectReply(true);
outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
return outbound;
}
@MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
public interface MyGateway {
String sendToRabbit(String data);
}
}
FileProcessor:
public class FileProcessor {
public void process(Message<String> msg) {
String content = msg.getPayload();
JSONObject jsonObject;
Map<String, String> dataMap;
// each record in the file is 290 characters long
for (int i = 0; i + 290 <= content.length(); i += 290) {
String userId = content.substring(i + 5, i + 16);
dataMap = new HashMap<String, String>();
dataMap.put("username", userId.trim());
jsonObject = new JSONObject(dataMap);
System.out.println(jsonObject);
}
}
}
Your code is correct, but the exception tells you that something needs to subscribe to the direct channel "processFileChannel" and read the messages from it.
Please read more about the different channel types in the Spring Integration Reference Manual.
EDIT
One of the first-class citizens in Spring Integration is the MessageChannel abstraction. See EIP for more information.
A definition like .channel("processFileChannel") declares a DirectChannel. This kind of channel accepts a message on send and performs the handling directly within that send call. In plain Java terms it is like calling one service from another, which throws a NullPointerException if the other service hasn't been autowired.
So, if you use a DirectChannel for the output, you must declare a subscriber for it somewhere. I don't know what your logic is, but that is how it works, and there is no other way to fix Dispatcher has no subscribers for channel.
Alternatively, you can use some other MessageChannel type, but for that you should read more documentation, e.g. Mark Fisher's Spring Integration in Action.
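For illustration, here is a minimal sketch of one possible subscriber for that channel. It reuses the fileProcessor bean and its process method from the question's own "New Code"; whether that is the right handler for your logic is an assumption:

@Bean
public IntegrationFlow processFileSubscriberFlow() {
    // Subscribes to the DirectChannel declared via .channel("processFileChannel"),
    // so the dispatcher has a handler to deliver the file content to.
    return IntegrationFlows
            .from("processFileChannel")
            .handle("fileProcessor", "process")
            .get();
}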
I've been trying to set up a reactive RabbitMQ listener following the guidelines at https://www.baeldung.com/spring-amqp-reactive.
What I'm hoping for my code to do is:
receive a message (works so far)
do a whole bunch of logic that ultimately returns a Flux response (i.e. async calls to endpoints and DBs)
return the Flux as shown in the guide
When I attempt to run through the process with a sample event, I receive the following exception:
org.springframework.amqp.rabbit.listener.exception.ListenerExecutionFailedException: Listener threw exception
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.wrapToListenerExecutionFailedExceptionIfNeeded(AbstractMessageListenerContainer.java:1651)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:1555)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.actualInvokeListener(AbstractMessageListenerContainer.java:1478)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:1466)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:1461)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:1410)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:870)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:854)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$1600(SimpleMessageListenerContainer.java:78)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.mainLoop(SimpleMessageListenerContainer.java:1137)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1043)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.amqp.rabbit.listener.adapter.ReplyFailureException: Failed to send reply with payload 'InvocationResult [returnValue=FluxSwitchIfEmpty, returnType=null]'
at org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener.doHandleResult(AbstractAdaptableMessageListener.java:385)
at org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener.handleResult(AbstractAdaptableMessageListener.java:339)
at org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener.handleResult(AbstractAdaptableMessageListener.java:302)
at org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter.onMessage(MessageListenerAdapter.java:294)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:1552)
... 12 common frames omitted
Caused by: org.springframework.amqp.AmqpException: Cannot determine ReplyTo message property value: Request message does not contain reply-to property, and no default response Exchange was set.
at org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener.getReplyToAddress(AbstractAdaptableMessageListener.java:465)
at org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener.doHandleResult(AbstractAdaptableMessageListener.java:381)
... 16 common frames omitted
From the message it looks like the problem stems from the published message not having a 'ReplyTo' property specifying where the returned Flux should go. This property is something that I will not be able to add. I've tried configuring a default exchange, but nothing I've done seems to be working there. The applicable code is:
Configuration
@Bean
MessageConverter messageConverter() {
Jackson2JsonMessageConverter messageConverter = new Jackson2JsonMessageConverter() {
@Override
public Object fromMessage(Message message, Object conversionHint) throws MessageConversionException {
message.getMessageProperties().setContentType(MessageProperties.CONTENT_TYPE_JSON);
return super.fromMessage(message, conversionHint);
}
};
messageConverter.setClassMapper(classMapper());
return messageConverter;
}
@Bean
DefaultClassMapper classMapper() {
DefaultClassMapper classMapper = new DefaultClassMapper();
classMapper.setDefaultType(NotificationEvent.class);
return classMapper;
}
@Bean
MessageListenerAdapter listenerAdapter(ReactiveNotificationController notificationHandler) {
MessageListenerAdapter messageListenerAdapter = new MessageListenerAdapter(notificationHandler, messageConverter());
messageListenerAdapter.setDefaultListenerMethod("receive");
return messageListenerAdapter;
}
@Bean
SimpleMessageListenerContainer container(ConnectionFactory rabbitListenerConnectionFactory,
MessageListenerAdapter listenerAdapter) {
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(rabbitListenerConnectionFactory);
container.setQueues(queue());
container.setReceiveTimeout(2000);
container.setMessageListener(listenerAdapter);
container.setTaskExecutor(Executors.newCachedThreadPool());
container.setDefaultRequeueRejected(false);
return container;
}
@Bean
Queue queue() {
Map<String, Object> arguments = new HashMap<>();
arguments.put("x-dead-letter-exchange", deadLetterQueueBaseName);
arguments.put("x-dead-letter-routing-key", deadLetterQueueBaseName);
return new Queue(queueName, true, false, false, arguments);
}
@Bean
TopicExchange exchange() {
return new TopicExchange(queueName, true, false);
}
@Bean
Declarables declarables() {
return new Declarables(new ArrayList<>(bindings()));
}
@Bean
List<Binding> bindings() {
List<Binding> bindings = new ArrayList<>();
//Have the notification queue listen on both the deadLetterQueue routing key as well as the base one
bindings.add(BindingBuilder.bind(queue()).to(exchange()).with(deadLetterQueueBaseName));
bindings.add(BindingBuilder.bind(queue()).to(exchange()).with(""));
//bind parent exchanges to our child exchange
for (String parentExchangeName : parentExchangeNames) {
FanoutExchange thisExchange = new FanoutExchange(parentExchangeName);
bindings.add(BindingBuilder.bind(exchange()).to(thisExchange));
}
return bindings;
}
@Bean
public RabbitProperties rabbitProperties(AmqpAdmin rabbitAdmin, AmqpTemplate rabbitTemplate) {
return new RabbitProperties(rabbitAdmin, rabbitTemplate, deadLetterQueueBaseName, exchange(), baseTimeToLive);
}
Receiver Controller
@RestController
@Slf4j
public class ReactiveNotificationController {
private RabbitProperties rabbitProperties;
private EventSubscriptionService subscriptionService;
private NotificationSenderFactory senderFactory;
@Value("${kibo.rabbitmq.queueName}")
private String queueName;
public ReactiveNotificationController(RabbitProperties rabbitProperties,
EventSubscriptionService subscriptionService,
NotificationSenderFactory senderFactory) {
this.rabbitProperties = rabbitProperties;
this.subscriptionService = subscriptionService;
this.senderFactory = senderFactory;
}
@GetMapping(value = "/queue/{queueName}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@SendTo("{queueName}")
public Flux<?> receive(NotificationEvent event) {
NotificationWorker worker = new NotificationWorker(event, rabbitProperties, subscriptionService, senderFactory);
return worker.run();
}
}
The error still exists if I force the receive method to just return a Flux.empty() value, so I don't think the worker has anything to do with it. Any help on this would be greatly appreciated.
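A hedged aside on the reply-to problem described above: the exception complains that no default response exchange was set, and Spring AMQP's listener adapter does expose a fallback reply destination for exactly that case. A minimal sketch, assuming replies may go to a fixed exchange and routing key (the names below are placeholders, not values from this setup):

@Bean
MessageListenerAdapter listenerAdapter(ReactiveNotificationController notificationHandler) {
    MessageListenerAdapter adapter = new MessageListenerAdapter(notificationHandler, messageConverter());
    adapter.setDefaultListenerMethod("receive");
    // Fallback reply destination, used when the incoming message carries no reply-to property.
    // "replies.exchange" and "replies" are placeholder names.
    adapter.setResponseExchange("replies.exchange");
    adapter.setResponseRoutingKey("replies");
    return adapter;
}

Whether the returned Flux itself can be delivered as an AMQP reply is a separate concern; this only addresses the missing reply-to address.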
I am new to Spring Boot and am trying to use the sample example from Spring Integration in order to subscribe and publish using MQTT. I managed to integrate it with Thingsboard, and the logger in the code below is able to receive the published message from Thingsboard.
public static void main(String[] args) {
SpringApplication.run(MqttTest.class);
}
@Bean
public MqttPahoClientFactory mqttClientFactory() {
DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
MqttConnectOptions options = new MqttConnectOptions();
options.setServerURIs(new String[] { "URI HERE" });
options.setUserName("ACCESS TOKEN HERE");
factory.setConnectionOptions(options);
return factory;
}
// consumer
@Bean
public IntegrationFlow mqttInFlow() {
return IntegrationFlows.from(mqttInbound())
.transform(p -> p)
.handle(logger())
.get();
}
private LoggingHandler logger() {
LoggingHandler loggingHandler = new LoggingHandler("INFO");
loggingHandler.setLoggerName("LoggerBot");
return loggingHandler;
}
@Bean
public MessageProducerSupport mqttInbound() {
MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter("Consumer",
mqttClientFactory(), "v1/devices/me/rpc/request/+");
adapter.setCompletionTimeout(5000);
adapter.setConverter(new DefaultPahoMessageConverter());
adapter.setQos(1);
return adapter;
}
This is the console output. I am able to receive the published JSON message that was sent from the Thingsboard dashboard. I am wondering if there is a method I can call to retrieve the JSON message string so that I can process it further. Thank you.
2019-02-01 14:06:23.590 INFO 13416 --- [ Call: Consumer] LoggerBot : {"method":"setValue","params":true}
2019-02-01 14:06:24.840 INFO 13416 --- [ Call: Consumer] LoggerBot : {"method":"setValue","params":false}
To handle the published messages, subscribe message handlers to the flow to consume the messages.
MessageHandler
@Bean
public IntegrationFlow mqttInFlow() {
return IntegrationFlows.from(mqttInbound())
.transform(p -> p)
.handle( mess -> {
System.out.println("mess"+mess);
})
.get();
}
ServiceActivator
@Bean
public IntegrationFlow mqttInFlow() {
return IntegrationFlows.from(mqttInbound())
.transform(p -> p)
.handle("myService","handleHere")
.handle(logger())
.get();
}
@Component
public class MyService {
@ServiceActivator
public Object handleHere(@Payload Object mess) {
System.out.println("payload "+mess);
return mess;
}
}
Note: as we discussed, there are a lot of different ways of achieving this.
This is just a sample for your understanding.
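If the next step is to parse that JSON string rather than just log it, one hedged option (assuming Jackson is on the classpath, as it is with the usual Spring Boot starters) is to convert the payload in the flow before handing it to the service:

@Bean
public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            // Convert the incoming JSON String payload into a Map for further processing,
            // e.g. {"method":"setValue","params":true} -> {method=setValue, params=true}
            .transform(Transformers.fromJson(Map.class))
            .handle("myService", "handleHere")
            .get();
}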
I am trying to connect both HTTP and SFTP gateways using Spring Integration, and I want to read the list of files, i.e. run the LS command.
This is my code:
// Spring Integration Configuration..
#Bean(name = "sftp.session.factory")
public SessionFactory<LsEntry> sftpSessionFactory() {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
factory.setPort(port);
factory.setHost(host);
factory.setUser(user);
factory.setPassword(password);
factory.setAllowUnknownKeys(allowUnknownKeys);
return new CachingSessionFactory<LsEntry>(factory);
}
#Bean(name = "remote.file.template")
public RemoteFileTemplate<LsEntry> remoteFileTemplate() {
RemoteFileTemplate<LsEntry> remoteFileTemplate = new RemoteFileTemplate<LsEntry>(sftpSessionFactory());
remoteFileTemplate.setRemoteDirectoryExpression(new LiteralExpression(remoteDirectory));
return remoteFileTemplate;
}
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() {
return Pollers.fixedRate(500).get();
}
/* SFTP READ OPERATION CONFIGURATIONS */
#Bean(name = "http.get.integration.flow")
#DependsOn("http.get.error.channel")
public IntegrationFlow httpGetIntegrationFlow() {
return IntegrationFlows
.from(httpGetGate())
.channel(httpGetRequestChannel())
.handle("sftpService", "performSftpReadOperation")
.get();
}
@Bean
public MessagingGatewaySupport httpGetGate() {
RequestMapping requestMapping = new RequestMapping();
requestMapping.setMethods(HttpMethod.GET);
requestMapping.setPathPatterns("/api/sftp/ping");
HttpRequestHandlingMessagingGateway gateway = new HttpRequestHandlingMessagingGateway();
gateway.setRequestMapping(requestMapping);
gateway.setRequestChannel(httpGetRequestChannel());
gateway.setReplyChannel(httpGetResponseChannel());
gateway.setReplyTimeout(20000);
return gateway;
}
#Bean(name = "http.get.error.channel")
public IntegrationFlow httpGetErrorChannel() {
return IntegrationFlows.from("rejected").transform("'Error while processing request; got' + payload").get();
}
@Bean
@ServiceActivator(inputChannel = "sftp.read.request.channel")
public MessageHandler sftpReadHandler(){
return new SftpOutboundGateway(remoteFileTemplate(), Command.LS.getCommand(), "payload");
}
#Bean(name = "http.get.request.channel")
public MessageChannel httpGetRequestChannel(){
return new DirectChannel(); //new QueueChannel(25);
}
#Bean(name = "http.get.response.channel")
public MessageChannel httpGetResponseChannel(){
return new DirectChannel(); //new QueueChannel(25);
}
#Bean(name = "sftp.read.request.channel")
public MessageChannel sftpReadRequestChannel(){
return new DirectChannel(); //new QueueChannel(25);
}
#Bean(name = "sftp.read.response.channel")
public MessageChannel sftpReadResponseChannel(){
return new DirectChannel(); //new QueueChannel(25);
}
// Gateway
@MessagingGateway(name="sftpGateway")
public interface SftpMessagingGateway {
@Gateway(requestChannel = "sftp.read.request.channel", replyChannel = "sftp.read.response.channel")
@Description("Handles Sftp Outbound READ Request")
Future<Message> readListOfFiles();
}
// ServiceActivator, i.e. main logic.
@Autowired
private SftpMessagingGateway sftpGateway;
@ServiceActivator(inputChannel = "http.get.request.channel", outputChannel="http.get.response.channel")
public ResponseEntity<String> performSftpReadOperation(Message<?> message) throws ExecutionException, InterruptedException {
System.out.println("performSftpReadOperation()");
ResponseEntity<String> responseEntity;
Future<Message> result = sftpGateway.readListOfFiles();
while(!result.isDone()){
Thread.sleep(300);
System.out.println("Waitign.....");
}
if(Objects.nonNull(result)){
List<SftpFileInfo> listOfFiles = (List<SftpFileInfo>) result.get().getPayload();
System.out.println("Sftp File Info: "+listOfFiles);
responseEntity = new ResponseEntity<String>("Sftp Server is UP and Running", HttpStatus.OK);
}
else {
responseEntity = new ResponseEntity<String>("Error while acessing Sftp Server. Please try again later!!!", HttpStatus.SERVICE_UNAVAILABLE);
}
return responseEntity;
}
Whenever I hit the endpoint ("/api/sftp/ping") it goes into a loop of:
performSftpReadOperation()
Waitign.....
performSftpReadOperation()
Waitign.....
performSftpReadOperation()
Waitign.....
performSftpReadOperation()
Waitign.....
performSftpReadOperation()
Waitign.....
Kindly guide me how to fix this issue. There might be some issue with httpGetIntegrationFlow().
Thanks
Your problem is that your @Gateway method has no parameters, while you run the LS command in the SftpOutboundGateway against the payload expression, which means "give me a remote directory to list".
So you need to consider specifying an argument for the gateway method, whose value is the remote directory in which to list files.
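For illustration, a hedged sketch of how the gateway method and its caller could look once the remote directory travels as the payload (the parameter name and the directory value here are placeholders):

@MessagingGateway(name = "sftpGateway")
public interface SftpMessagingGateway {

    @Gateway(requestChannel = "sftp.read.request.channel", replyChannel = "sftp.read.response.channel")
    @Description("Handles Sftp Outbound READ Request")
    Future<Message<?>> readListOfFiles(String remoteDirectory); // becomes the payload the LS command lists
}

// ...and at the call site:
Future<Message<?>> result = sftpGateway.readListOfFiles("/some/remote/dir");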
I need help: my app (the client) connects to a RabbitMQ server, and when the server is shut down, my app cannot start.
The listener can't be created and the app fails to start.
The same happens when the virtual host doesn't have the queue: my app can't start either.
So my questions are:
1) How do I handle exceptions in my config (all of them; I need my app to start even if the RabbitMQ server has problems)?
2) What in my config looks bad and needs refactoring?
I use
Spring 4.2.9.RELEASE
org.springframework.amqp 2.0.5.RELEASE
Java 8
My two classes:
1) Config for the RabbitMQ beans
2) The annotated listener
@EnableRabbit
@Configuration
public class RabbitMQConfig {
@Bean
public ConnectionFactory connectionFactory() {
com.rabbitmq.client.ConnectionFactory factoryRabbit = new com.rabbitmq.client.ConnectionFactory();
factoryRabbit.setNetworkRecoveryInterval(10000);
factoryRabbit.setAutomaticRecoveryEnabled(true);
CachingConnectionFactory connectionFactory =
new CachingConnectionFactory(factoryRabbit);
connectionFactory.setHost("DRIVER_APP_IP");
connectionFactory.setPort(5672);
connectionFactory.setConnectionTimeout(5000);
connectionFactory.setRequestedHeartBeat(10);
connectionFactory.setUsername("user");
connectionFactory.setPassword("pass");
connectionFactory.setVirtualHost("/vhost");
return connectionFactory;
}
@Bean
public RabbitTemplate rabbitTemplate() {
try {
RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
rabbitTemplate.setRoutingKey(this.DRIVER_QUEUE);
rabbitTemplate.setQueue(this.DRIVER_QUEUE);
return rabbitTemplate;
} catch (Exception ex){
return new RabbitTemplate();
}
}
@Bean
public Queue queue() {
return new Queue(this.DRIVER_QUEUE);
}
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setConcurrentConsumers(3);
factory.setMaxConcurrentConsumers(10);
return factory;
}
}
@Component
public class RabbitMqListener {
@RabbitListener(bindings = @QueueBinding(
value = @Queue(value = DRIVER_QUEUE, durable = "true"),
exchange = @Exchange(value = "exchange", ignoreDeclarationExceptions = "true", autoDelete = "true"))
)
public String balancer(byte[] message) throws InterruptedException {
String json = null;
try {
// "something move" - the actual processing logic is omitted in the original post
} catch (Exception ex) {
}
return json;
}
}
I found the solution to my problem.
First, it's the container factory bean!
We need this:
factory.setMissingQueuesFatal(false);
This property means that when the queue is missing on the RabbitMQ server, our app doesn't crash and can still start.
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setMissingQueuesFatal(false);
factory.setConcurrentConsumers(3);
factory.setStartConsumerMinInterval(3000L);
factory.setMaxConcurrentConsumers(10);
factory.setRecoveryInterval(15000L);
factory.setStartConsumerMinInterval(1000L);
factory.setReceiveTimeout(10000L);
factory.setChannelTransacted(true);
return factory;
}
And second:
@Component
public class RabbitMqListener {
@RabbitListener(containerFactory = "rabbitListenerContainerFactory", queues = DRIVER_QUEUE)
public String balancer(byte[] message) throws InterruptedException {
String json = null;
try {
// "something move" - the actual processing logic is omitted in the original post
} catch (Exception ex) {
}
return json;
}
}
I set the containerFactory and queues in @RabbitListener and dropped the other properties,
because I don't need them.
I hope this helps somebody. Thank you all for your attention, and sorry for my English.
I'm trying to implement a TCP client/server application with Spring Integration where I need to open one TCP client socket per incoming TCP server connection.
Basically, I have a bunch of IoT devices that communicate with a backend server over raw TCP sockets. I need to implement extra features into the system. But the software on both the devices and the server are closed source so I can't do anything about that. So my thought was to place middleware between the devices and the server that will intercept this client/server communication and provide the added functionality.
I'm using a TcpNioServerConnectionFactory and a TcpNioClientConnectionFactory with inbound/outbound channel adapters to send/receive messages to/from all parties. But there's no information in the message structure that binds a message to a certain device; therefore I have to open a new client socket to the backend every time a new connection from a new device comes on the server socket. This client connection must be bound to that specific server socket's lifecycle. It must never be reused and if this client socket (backend to middleware) dies for any reason, the server socket (middleware to device) must also be closed. How can I go about this?
Edit: My first thought was to subclass AbstractClientConnectionFactory but it appears that it doesn't do anything except provide a client connection when asked. Should I rather look into subclassing inbound/outbound channel adapters or elsewhere? I should also mention that I'm also open to non-Spring integration solutions like Apache Camel, or even a custom solution with raw NIO sockets.
Edit 2: I got halfway there by switching to TcpNetServerConnectionFactory and wrapping the client factory with a ThreadAffinityClientConnectionFactory and the devices can reach the backend fine. But when the backend sends something back, I get the error Unable to find outbound socket for GenericMessage and the client socket dies. I think it's because the backend side doesn't have the necessary header to route the message correctly. How can I capture this info? My configuration class is as follows:
@Configuration
@EnableIntegration
@IntegrationComponentScan
public class ServerConfiguration {
@Bean
public AbstractServerConnectionFactory serverFactory() {
AbstractServerConnectionFactory factory = new TcpNetServerConnectionFactory(8000);
factory.setSerializer(new MapJsonSerializer());
factory.setDeserializer(new MapJsonSerializer());
return factory;
}
@Bean
public AbstractClientConnectionFactory clientFactory() {
AbstractClientConnectionFactory factory = new TcpNioClientConnectionFactory("localhost", 3333);
factory.setSerializer(new MapJsonSerializer());
factory.setDeserializer(new MapJsonSerializer());
factory.setSingleUse(true);
return new ThreadAffinityClientConnectionFactory(factory);
}
@Bean
public TcpReceivingChannelAdapter inboundDeviceAdapter(AbstractServerConnectionFactory connectionFactory) {
TcpReceivingChannelAdapter inbound = new TcpReceivingChannelAdapter();
inbound.setConnectionFactory(connectionFactory);
return inbound;
}
@Bean
public TcpSendingMessageHandler outboundDeviceAdapter(AbstractServerConnectionFactory connectionFactory) {
TcpSendingMessageHandler outbound = new TcpSendingMessageHandler();
outbound.setConnectionFactory(connectionFactory);
return outbound;
}
@Bean
public TcpReceivingChannelAdapter inboundBackendAdapter(AbstractClientConnectionFactory connectionFactory) {
TcpReceivingChannelAdapter inbound = new TcpReceivingChannelAdapter();
inbound.setConnectionFactory(connectionFactory);
return inbound;
}
@Bean
public TcpSendingMessageHandler outboundBackendAdapter(AbstractClientConnectionFactory connectionFactory) {
TcpSendingMessageHandler outbound = new TcpSendingMessageHandler();
outbound.setConnectionFactory(connectionFactory);
return outbound;
}
@Bean
public IntegrationFlow backendIntegrationFlow() {
return IntegrationFlows.from(inboundBackendAdapter(clientFactory()))
.log(LoggingHandler.Level.INFO)
.handle(outboundDeviceAdapter(serverFactory()))
.get();
}
@Bean
public IntegrationFlow deviceIntegrationFlow() {
return IntegrationFlows.from(inboundDeviceAdapter(serverFactory()))
.log(LoggingHandler.Level.INFO)
.handle(outboundBackendAdapter(clientFactory()))
.get();
}
}
It's not entirely clear what you are asking, so I am going to assume that you want a Spring Integration proxy between your clients and servers. Something like:
iot-device -> spring server -> message-transformation -> spring client -> back-end-server
If that's the case, you can implement a ClientConnectionIdAware client connection factory that wraps a standard factory.
In the integration flow, bind the incoming ip_connectionId header in a message to the thread (in a ThreadLocal).
Then, in the client connection factory, look up the corresponding outgoing connection in a Map using the ThreadLocal value; if not found (or closed), create a new one and store it in the map for future reuse.
Implement an ApplicationListener (or @EventListener) to listen for TcpConnectionCloseEvents from the server connection factory and close() the corresponding outbound connection.
This sounds like a cool enhancement so consider contributing it back to the framework.
EDIT
Version 5.0 added the ThreadAffinityClientConnectionFactory which would work out of the box with a TcpNetServerConnectionFactory since each connection gets its own thread.
With a TcpNioServerConnectionFactory you would need the extra logic to dynamically bind the connection to the thread for each request.
EDIT2
@SpringBootApplication
public class So51200675Application {
public static void main(String[] args) {
SpringApplication.run(So51200675Application.class, args).close();
}
@Bean
public ApplicationRunner runner() {
return args -> {
Socket socket = SocketFactory.getDefault().createSocket("localhost", 1234);
socket.getOutputStream().write("foo\r\n".getBytes());
BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
System.out.println(reader.readLine());
socket.close();
};
}
@Bean
public Map<String, String> fromToConnectionMappings() {
return new ConcurrentHashMap<>();
}
@Bean
public Map<String, String> toFromConnectionMappings() {
return new ConcurrentHashMap<>();
}
@Bean
public IntegrationFlow proxyInboundFlow() {
return IntegrationFlows.from(Tcp.inboundAdapter(serverFactory()))
.transform(Transformers.objectToString())
.<String, String>transform(s -> s.toUpperCase())
.handle((p, h) -> {
mapConnectionIds(h);
return p;
})
.handle(Tcp.outboundAdapter(threadConnectionFactory()))
.get();
}
@Bean
public IntegrationFlow proxyOutboundFlow() {
return IntegrationFlows.from(Tcp.inboundAdapter(threadConnectionFactory()))
.transform(Transformers.objectToString())
.<String, String>transform(s -> s.toUpperCase())
.enrichHeaders(e -> e
.headerExpression(IpHeaders.CONNECTION_ID, "@toFromConnectionMappings.get(headers['"
+ IpHeaders.CONNECTION_ID + "'])").defaultOverwrite(true))
.handle(Tcp.outboundAdapter(serverFactory()))
.get();
}
private void mapConnectionIds(Map<String, Object> h) {
try {
TcpConnection connection = threadConnectionFactory().getConnection();
String mapping = toFromConnectionMappings().get(connection.getConnectionId());
String incomingCID = (String) h.get(IpHeaders.CONNECTION_ID);
if (mapping == null || !(mapping.equals(incomingCID))) {
System.out.println("Adding new mapping " + incomingCID + " to " + connection.getConnectionId());
toFromConnectionMappings().put(connection.getConnectionId(), incomingCID);
fromToConnectionMappings().put(incomingCID, connection.getConnectionId());
}
}
catch (Exception e) {
e.printStackTrace();
}
}
@Bean
public ThreadAffinityClientConnectionFactory threadConnectionFactory() {
return new ThreadAffinityClientConnectionFactory(clientFactory()) {
@Override
public boolean isSingleUse() {
return false;
}
};
}
@Bean
public AbstractServerConnectionFactory serverFactory() {
return Tcp.netServer(1234).get();
}
@Bean
public AbstractClientConnectionFactory clientFactory() {
AbstractClientConnectionFactory clientFactory = Tcp.netClient("localhost", 1235).get();
clientFactory.setSingleUse(true);
return clientFactory;
}
@Bean
public IntegrationFlow serverFlow() {
return IntegrationFlows.from(Tcp.inboundGateway(Tcp.netServer(1235)))
.transform(Transformers.objectToString())
.<String, String>transform(p -> p + p)
.get();
}
@Bean
public ApplicationListener<TcpConnectionCloseEvent> closer() {
return e -> {
if (fromToConnectionMappings().containsKey(e.getConnectionId())) {
String key = fromToConnectionMappings().remove(e.getConnectionId());
toFromConnectionMappings().remove(key);
System.out.println("Removed mapping " + e.getConnectionId() + " to " + key);
threadConnectionFactory().releaseConnection();
}
};
}
}
EDIT3
Works fine for me with a MapJsonSerializer.
@SpringBootApplication
public class So51200675Application {
public static void main(String[] args) {
SpringApplication.run(So51200675Application.class, args).close();
}
@Bean
public ApplicationRunner runner() {
return args -> {
Socket socket = SocketFactory.getDefault().createSocket("localhost", 1234);
socket.getOutputStream().write("{\"foo\":\"bar\"}\n".getBytes());
BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
System.out.println(reader.readLine());
socket.close();
};
}
@Bean
public Map<String, String> fromToConnectionMappings() {
return new ConcurrentHashMap<>();
}
@Bean
public Map<String, String> toFromConnectionMappings() {
return new ConcurrentHashMap<>();
}
@Bean
public MapJsonSerializer serializer() {
return new MapJsonSerializer();
}
@Bean
public IntegrationFlow proxyRequestFlow() {
return IntegrationFlows.from(Tcp.inboundAdapter(serverFactory()))
.<Map<String, String>, Map<String, String>>transform(m -> {
m.put("foo", m.get("foo").toUpperCase());
return m;
})
.handle((p, h) -> {
mapConnectionIds(h);
return p;
})
.handle(Tcp.outboundAdapter(threadConnectionFactory()))
.get();
}
@Bean
public IntegrationFlow proxyReplyFlow() {
return IntegrationFlows.from(Tcp.inboundAdapter(threadConnectionFactory()))
.<Map<String, String>, Map<String, String>>transform(m -> {
m.put("foo", m.get("foo").toLowerCase() + m.get("foo"));
return m;
})
.enrichHeaders(e -> e
.headerExpression(IpHeaders.CONNECTION_ID, "@toFromConnectionMappings.get(headers['"
+ IpHeaders.CONNECTION_ID + "'])").defaultOverwrite(true))
.handle(Tcp.outboundAdapter(serverFactory()))
.get();
}
private void mapConnectionIds(Map<String, Object> h) {
try {
TcpConnection connection = threadConnectionFactory().getConnection();
String mapping = toFromConnectionMappings().get(connection.getConnectionId());
String incomingCID = (String) h.get(IpHeaders.CONNECTION_ID);
if (mapping == null || !(mapping.equals(incomingCID))) {
System.out.println("Adding new mapping " + incomingCID + " to " + connection.getConnectionId());
toFromConnectionMappings().put(connection.getConnectionId(), incomingCID);
fromToConnectionMappings().put(incomingCID, connection.getConnectionId());
}
}
catch (Exception e) {
e.printStackTrace();
}
}
@Bean
public ThreadAffinityClientConnectionFactory threadConnectionFactory() {
return new ThreadAffinityClientConnectionFactory(clientFactory()) {
@Override
public boolean isSingleUse() {
return false;
}
};
}
@Bean
public AbstractServerConnectionFactory serverFactory() {
return Tcp.netServer(1234)
.serializer(serializer())
.deserializer(serializer())
.get();
}
@Bean
public AbstractClientConnectionFactory clientFactory() {
AbstractClientConnectionFactory clientFactory = Tcp.netClient("localhost", 1235)
.serializer(serializer())
.deserializer(serializer())
.get();
clientFactory.setSingleUse(true);
return clientFactory;
}
@Bean
public IntegrationFlow backEndEmulatorFlow() {
return IntegrationFlows.from(Tcp.inboundGateway(Tcp.netServer(1235)
.serializer(serializer())
.deserializer(serializer())))
.<Map<String, String>, Map<String, String>>transform(m -> {
m.put("foo", m.get("foo") + m.get("foo"));
return m;
})
.get();
}
@Bean
public ApplicationListener<TcpConnectionCloseEvent> closer() {
return e -> {
if (fromToConnectionMappings().containsKey(e.getConnectionId())) {
String key = fromToConnectionMappings().remove(e.getConnectionId());
toFromConnectionMappings().remove(key);
System.out.println("Removed mapping " + e.getConnectionId() + " to " + key);
threadConnectionFactory().releaseConnection();
}
};
}
}
and
Adding new mapping localhost:56998:1234:55c822a4-4252-45e6-9ef2-79263391f4be to localhost:1235:56999:3d520ca9-2f3a-44c3-b05f-e59695b8c1b0
{"foo":"barbarBARBAR"}
Removed mapping localhost:56998:1234:55c822a4-4252-45e6-9ef2-79263391f4be to localhost:1235:56999:3d520ca9-2f3a-44c3-b05f-e59695b8c1b0