How to trigger SFTP inbound channel in test - java

I found that an IntegrationFlow I had written using the Java DSL wasn't very testable, so I followed Configuring with Java Configuration and split it into @Bean definitions.
In my unit test I used a third-party in-memory SFTP server, and I tried triggering the InboundChannelAdapter and then calling receive() on the channel.
I had a problem figuring out which type of Channel to use, as channel usage is not mentioned anywhere in the SFTP Adapters documentation, but I ultimately found what I think is correct (QueueChannel) in the testing examples repository.
My problem is that the unit test I wrote hangs on the channel's receive() method. Through debugging I determined that the session factory's getSession() never gets called.
What am I doing wrong?
@Bean
public PollableChannel sftpChannel() {
    return new QueueChannel();
}

@Bean
@EndpointId("sftpInboundAdapter")
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "1000"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer);
    source.setLocalDirectory(new File("/local"));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    source.setMaxFetchSize(6);
    return source;
}

@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(testSftpSessionFactory);
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/remote");
    List<String> filterFileNameList = List.of("1.txt");
    fileSynchronizer.setFilter(new FilenameListFilter(filterFileNameList));
    return fileSynchronizer;
}

// note: a @Bean method must not be private, and the values are hardcoded anyway,
// so the unused String/int parameters are dropped here
@Bean
public DefaultSftpSessionFactory testSftpSessionFactory() {
    DefaultSftpSessionFactory defaultSftpSessionFactory = new DefaultSftpSessionFactory();
    defaultSftpSessionFactory.setPassword("password");
    defaultSftpSessionFactory.setUser("username");
    defaultSftpSessionFactory.setHost("localhost");
    defaultSftpSessionFactory.setPort(777);
    defaultSftpSessionFactory.setAllowUnknownKeys(true);
    Properties config = new Properties();
    config.put("StrictHostKeyChecking", "no");
    defaultSftpSessionFactory.setSessionConfig(config);
    return defaultSftpSessionFactory;
}
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = {IntegrationFlowTestSupport.class, Synchronizer.class, Channel.class, Activator.class})
public class IntegrationFlowConfigTest {

    private static final String CONTENTS = "abcdef 1234567890";

    @Autowired
    PollableChannel sftpChannel;

    @Autowired
    DefaultSftpSessionFactory testSftpSessionFactory;

    @Autowired
    SftpInboundFileSynchronizer sftpInboundFileSynchronizer;

    @Autowired
    SftpInboundFileSynchronizingMessageSource sftpMessageSource;

    @Autowired
    SourcePollingChannelAdapter sftpInboundAdapter;

    @Test
    public void test() throws Exception {
        FileEntry f1 = new FileEntry("/remote/1.txt", CONTENTS);
        FileEntry f2 = new FileEntry("/remote/2.txt", CONTENTS);
        FileEntry f3 = new FileEntry("/remote/3.txt", CONTENTS);
        withSftpServer(server -> {
            server.setPort(777);
            server.addUser("username", "password");
            server.putFile(f1.getPath(), f1.createInputStream());
            server.putFile(f2.getPath(), f2.createInputStream());
            sftpInboundAdapter.start();
            Message<?> message = sftpChannel.receive();
        });
    }
}

First of all, it is wrong to rewrite your code just to satisfy unit test expectations. We have spent many an hour thinking about how to separate testing concerns from production code.
See respective documentation: https://docs.spring.io/spring-integration/docs/current/reference/html/testing.html#test-context.
For your use-case it might be better to mock that @ServiceActivator instead of having a QueueChannel and a competing consumer in your test. What I mean is that you already have a consumer in your configuration with that @ServiceActivator, so there is no guarantee that your manual sftpChannel.receive() would give you a message from the queue, since it could already have been consumed by your @ServiceActivator subscriber.
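For example, here is a minimal sketch of that approach, assuming spring-integration-test is on the classpath and that your consumer endpoint has the hypothetical id "serviceActivatorEndpoint" (adjust it to your real @EndpointId):

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.ArgumentCaptor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.endpoint.SourcePollingChannelAdapter;
import org.springframework.integration.test.context.MockIntegrationContext;
import org.springframework.integration.test.context.SpringIntegrationTest;
import org.springframework.integration.test.mock.MockIntegration;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHandler;
import org.springframework.test.context.junit.jupiter.SpringExtension;

@ExtendWith(SpringExtension.class)
@SpringIntegrationTest(noAutoStartup = "sftpInboundAdapter") // keep the poller off until the SFTP server is up
// plus your @ContextConfiguration, as in the question
public class MockedFlowTest {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @Autowired
    private SourcePollingChannelAdapter sftpInboundAdapter;

    @Test
    public void test() {
        // capture the message instead of letting the real @ServiceActivator consume it
        ArgumentCaptor<Message<?>> captor = MockIntegration.messageArgumentCaptor();
        MessageHandler mockHandler = MockIntegration.mockMessageHandler(captor).handleNext(m -> { });
        this.mockIntegrationContext.substituteMessageHandlerFor("serviceActivatorEndpoint", mockHandler);
        // ... start the embedded SFTP server here ...
        this.sftpInboundAdapter.start();
        // ... await and assert on captor.getValue() ...
    }
}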
The fixedDelay = "0" looks suspicious. Isn't that too often to ask the SFTP server for new files? How do you expect your system to remain stable if you put it under that much stress with such a short delay?
We don't know what withSftpServer(server -> { is, and it is also not clear what testSftpSessionFactory is. So it's not clear yet how you start an SFTP server and connect to it from your code.
I also see sftpMessageSource.start();, but nowhere in your code is it stopped. Plus, I guess you really meant to start an endpoint, not the source. The endpoint in your case is a SourcePollingChannelAdapter created for that @InboundChannelAdapter. You can use an @EndpointId if it is not autowired automatically by type.
In our tests we use the Apache MINA SSHD library: https://github.com/spring-projects/spring-integration/blob/main/spring-integration-sftp/src/test/java/org/springframework/integration/sftp/SftpTestSupport.java#L64-L76
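For reference, here is a minimal sketch of such an embedded SFTP server (package names vary a little between SSHD versions - in older releases SftpSubsystemFactory lives in org.apache.sshd.server.subsystem.sftp - and the root directory is illustrative):

import java.nio.file.Path;
import java.util.Collections;
import org.apache.sshd.common.file.virtualfs.VirtualFileSystemFactory;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;
import org.apache.sshd.sftp.server.SftpSubsystemFactory;

SshServer server = SshServer.setUpDefaultServer();
server.setPort(0); // 0 = pick a free port; read it back with server.getPort() after start()
server.setKeyPairProvider(new SimpleGeneratorHostKeyProvider());
server.setPasswordAuthenticator((user, pwd, session) -> "username".equals(user) && "password".equals(pwd));
server.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));
server.setFileSystemFactory(new VirtualFileSystemFactory(Path.of("/tmp/sftp-root")));
server.start();
// point the DefaultSftpSessionFactory at localhost:server.getPort()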

Related

Question regarding FtpInboundFileSynchronizer running with multiple instances/applications

I've recently been trying to configure and set up a Spring Boot application that will later be run in Kubernetes with multiple pods of it running. The application is meant to download files from an FTP server. I've found some existing code for doing this in Spring Boot, particularly FtpInboundFileSynchronizer, so I tried to set it up and make sure it works. I have a working solution with a ConcurrentMetadataStore. So my only real question is whether it will be fine running with multiple instances, or whether I require something additional for it to be run with multiple pods?
My configuration looks something like this:
@Getter
@Setter
@Configuration
@ConfigurationProperties(prefix = "ftp")
public class FtpConfiguration
{
    private final static int PASSIVE_LOCAL_DATA_CONNECTION_MODE = 2;
    private final static int DEFAULT_FTP_PORT = 21;

    String host;
    String username;
    String password;
    String localDirectory;
    String remoteDirectory;
    FtpRemoteFileTemplate template;
    FtpInboundFileSynchronizer synchronizer;
    DataSource templateSource;

    @Bean
    public ConcurrentMetadataStore metadataStore(DataSource dataSource)
    {
        var jdbcMetadataStore = new JdbcMetadataStore(dataSource);
        jdbcMetadataStore.setTablePrefix("INT_");
        jdbcMetadataStore.setRegion("TEMPORARY");
        jdbcMetadataStore.afterPropertiesSet();
        return jdbcMetadataStore;
    }

    @Bean
    public DefaultFtpSessionFactory defaultFtpSessionFactory()
    {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(host);
        sf.setUsername(username);
        sf.setPassword(password);
        sf.setPort(DEFAULT_FTP_PORT);
        sf.setConnectTimeout(5000);
        sf.setClientMode(PASSIVE_LOCAL_DATA_CONNECTION_MODE);
        return sf;
    }

    @Bean
    FtpRemoteFileTemplate ftpRemoteFileTemplate(DefaultFtpSessionFactory dsf)
    {
        return new FtpRemoteFileTemplate(dsf);
    }

    @Bean
    FtpInboundFileSynchronizer ftpInboundFileSynchronizer(DefaultFtpSessionFactory dsf)
    {
        FtpInboundFileSynchronizer ftpInSync = new FtpInboundFileSynchronizer(dsf);
        ftpInSync.setRemoteDirectory(remoteDirectory);
        ftpInSync.setFilter(ftpFileListFilter());
        return ftpInSync;
    }

    public FileListFilter<FTPFile> ftpFileListFilter()
    {
        // note: the chain must not be built in a try-with-resources block,
        // otherwise it would be closed before it is ever used
        ChainFileListFilter<FTPFile> chain = new ChainFileListFilter<>();
        chain.addFilter(new FtpPersistentAcceptOnceFileListFilter(metadataStore(templateSource), "TEST"));
        return chain;
    }
}
and then I just call the synchronizeToLocalDirectory method.
FtpClient(
        FtpRemoteFileTemplate template, FtpInboundFileSynchronizer synchronizer,
        @Value("${ftp.remote-directory}") String remoteDirectory,
        @Value("${ftp.local-directory}") String localDirectory)
{
    this.template = template;
    this.synchronizer = synchronizer;
    this.remoteDirectory = remoteDirectory;
    this.localDirectory = localDirectory;
}

synchronizer.setRemoteDirectory(remoteDirectory);
synchronizer.synchronizeToLocalDirectory(new File(localDirectory));
Would this solution handle multiple applications without problems? Or what else would I need? Does the ConcurrentMetadataStore alone make sure this works? (For example, there wouldn't be a conflict/crash if two instances tried to synchronize the same directory at the same time, as they'd both be fine thanks to the metadata store being @Transactional.)
Your assumption is correct: as long as all your pods are connected to the same database, that JdbcMetadataStore will ensure that no concurrent reads of the same file are going to happen.
It is not clear, though, why one would use an FtpInboundFileSynchronizer manually rather than via an FtpInboundFileSynchronizingMessageSource and a subsequent integration flow, but I guess that's a fully different story and question.
On the other hand: why do you ask this question at all? Didn't you try your solution? Aren't the docs enough to be sure where and how to go: https://docs.spring.io/spring-integration/docs/current/reference/html/file.html#remote-persistent-flf ?
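For illustration, a minimal sketch (channel name, poller interval and directories are made up) of wiring the same synchronizer into a message source instead of calling it manually:

@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = "5000"))
public FtpInboundFileSynchronizingMessageSource ftpMessageSource(FtpInboundFileSynchronizer synchronizer) {
    FtpInboundFileSynchronizingMessageSource source =
            new FtpInboundFileSynchronizingMessageSource(synchronizer);
    source.setLocalDirectory(new File("/tmp/ftp-local"));
    source.setAutoCreateLocalDirectory(true);
    return source;
}

The poller then drives the synchronization, each new local file arrives as a message on ftpChannel, and the same JdbcMetadataStore-backed filter still guards against duplicate processing across pods.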

pubsub messages not being pulled with poller and serviceactivator

I've been trying to get pubsub to work within a spring application. To get up and running I've been reading through tutorials and documentation like this
I can get things to build and start, but if I go through the Cloud Console to send a message to the test subscription, it never arrives.
This is what my code looks like right now:
@Configuration
@Import({GcpPubSubAutoConfiguration.class})
public class PubSubConfigurator {

    @Bean
    public GcpProjectIdProvider projectIdProvider() {
        return () -> "project-id";
    }

    @Bean
    public CredentialsProvider credentialsProvider() {
        return GoogleCredentials::getApplicationDefault;
    }

    @Bean
    public MessageChannel inputMessageChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    @InboundChannelAdapter(channel = "inputMessageChannel", poller = @Poller(fixedDelay = "5"))
    public MessageSource<Object> pubsubAdapter(PubSubTemplate pubSubTemplate) {
        PubSubMessageSource messageSource = new PubSubMessageSource(pubSubTemplate, "tst-sandbox");
        messageSource.setAckMode(AckMode.MANUAL);
        messageSource.setPayloadType(String.class);
        messageSource.setBlockOnPull(false);
        messageSource.setMaxFetchSize(10);
        //pubSubTemplate.pull("tst-sandbox", 10, true);
        return messageSource;
    }

    // Define what happens to the messages arriving in the message channel.
    @ServiceActivator(inputChannel = "inputMessageChannel")
    public void messageReceiver(
            String payload,
            @Header(GcpPubSubHeaders.ORIGINAL_MESSAGE) BasicAcknowledgeablePubsubMessage message) {
        System.out.println("Message arrived via an inbound channel adapter from sub-one! Payload: " + payload);
        message.ack();
    }
}
My thinking was that the poller annotation would start a poller that runs every so often to check for messages and send them to the method annotated with @ServiceActivator, but this is clearly not the case, as it is never hit.
Interestingly enough, if I put a breakpoint right before return messageSource and check the result of the template.pull call, the messages ARE returned, so it is seemingly not an issue with the connection itself.
What am I missing here? Tutorials and documentation aren't helping much at this point, as they all use pretty much the same bit of tutorial code as above...
I have tried variations of the code above, like creating the adapter instead of the message source, like so:
@Bean
public PubSubInboundChannelAdapter inboundChannelAdapter(
        @Qualifier("inputMessageChannel") MessageChannel messageChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter =
            new PubSubInboundChannelAdapter(pubSubTemplate, "tst-sandbox");
    adapter.setOutputChannel(messageChannel);
    adapter.setAckMode(AckMode.MANUAL);
    adapter.setPayloadType(String.class);
    return adapter;
}
to no avail. Any suggestions are welcome.
Found the problem after creating a Spring Boot project from scratch (the main project is plain Spring, not Boot). I noticed in the debug output that it was auto-starting the service activator bean and some other things, like actually subscribing to the channels, which it wasn't doing in the main project.
After a quick google the solution was simple: I had to add the
@EnableIntegration
annotation at class level, and the messages started coming in.
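In other words, the configuration class from the question only needs the extra annotation, roughly:

@Configuration
@EnableIntegration
@Import({GcpPubSubAutoConfiguration.class})
public class PubSubConfigurator {
    // ... beans exactly as shown above ...
}

(Spring Boot applies this automatically when Spring Integration is on the classpath, which is why the from-scratch Boot project worked without it.)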

Spring-rabbitmq - start spring-boot server even in case of lack of connection

I am using Spring Boot with RabbitMQ. Everything is working - processing messages works, and after losing the connection it automatically tries to reconnect. However, I have only one problem:
when the Rabbit server is switched off (no possibility to establish a connection) and I try to launch the Spring Boot server, it can't start. I can't check now (no access to the machine) what the exact content of the exception is, but it was about a problem with instantiation of beans. Can you help me?
@Configuration
public class RabbitConfig {

    private String queueName = "myQueue";
    private String exchangeName = "myExchange";

    @Bean
    public FanoutExchange exchange(RabbitAdmin rabbitAdmin) {
        FanoutExchange exch = new FanoutExchange(exchangeName);
        rabbitAdmin.declareExchange(exch);
        return exch;
    }

    @Bean
    public Queue queue(FanoutExchange exchange, RabbitAdmin rabbitAdmin) {
        HashMap<String, Object> args = new HashMap<String, Object>();
        args.put("x-message-ttl", 20);
        args.put("x-dead-letter-exchange", "dlx_exchange_name");
        Queue queue = new Queue(queueName, true, false, false, args);
        rabbitAdmin.declareQueue(queue);
        rabbitAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange));
        return queue;
    }
}
Edit
I must edit, because I was not aware of the fact that it is important here.
In my case the last argument is not null; it is a HashMap (it is important for me). I edited my code above.
Moreover, I don't understand your answer exactly. Could you be more precise?
To make sure I was sufficiently clear: I would like to take advantage of automatic reconnection (now it is working). Additionally, if the Rabbit broker is shut down while the Spring Boot server is starting, the server should still start and cyclically try to reconnect (at the moment the application doesn't start).
Edit2
@Configuration
public class RabbitConfig {

    private String queueName = "myQueue";
    private String exchangeName = "myExchange";

    @Bean
    public FanoutExchange exchange(RabbitAdmin rabbitAdmin) {
        FanoutExchange exch = new FanoutExchange(exchangeName);
        //rabbitAdmin.declareExchange(exch);
        return exch;
    }

    @Bean
    public Queue queue(FanoutExchange exchange, RabbitAdmin rabbitAdmin) {
        HashMap<String, Object> args = new HashMap<String, Object>();
        args.put("x-message-ttl", 20);
        args.put("x-dead-letter-exchange", "dlx_exchange_name");
        Queue queue = new Queue(queueName, true, false, false, args);
        //rabbitAdmin.declareQueue(queue);
        //rabbitAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange));
        return queue;
    }

    // EDIT 3: now we also have to create the Binding bean
    @Autowired
    Queue queue; // injected bean

    @Autowired
    FanoutExchange exchange;

    @Bean
    public Binding binding() {
        return BindingBuilder.bind(queue).to(exchange);
    }
}
That's correct. You are trying to register broker entities manually:
rabbitAdmin.declareExchange(exch);
...
rabbitAdmin.declareQueue(queue);
rabbitAdmin.declareBinding(BindingBuilder.bind(queue).to(exchange));
You should rely here on the built-in auto-declaration mechanism in the Framework.
In other words: you're good to declare those beans (including the Binding, BTW), but you must not call rabbitAdmin.declare*() at all - at least not from the bean definition phase.
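A minimal sketch of the same configuration relying on auto-declaration (types are from org.springframework.amqp.core; a RabbitAdmin bean - auto-configured by Spring Boot - finds these beans in the application context and declares them on the first successful connection, including after the broker comes back):

@Configuration
public class RabbitConfig {

    @Bean
    public FanoutExchange exchange() {
        return new FanoutExchange("myExchange");
    }

    @Bean
    public Queue queue() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", 20);
        args.put("x-dead-letter-exchange", "dlx_exchange_name");
        return new Queue("myQueue", true, false, false, args);
    }

    @Bean
    public Binding binding(Queue queue, FanoutExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange);
    }
}

Since nothing touches the broker at bean-definition time, the application can start even while RabbitMQ is down; the declarations happen once a connection is eventually established.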

How to stop and start jms listener

I'm using Spring and I have a JMS queue to send messages from the client to the server. I'd like to stop the messages from being sent when the server is down, and resend them when it's back up.
I know this was asked before, but I can't make it work. I created a JmsListener and gave it an ID, but I cannot get its container in order to stop/start it.
@Resource(name = "testId")
private AbstractJmsListeningContainer _probeUpdatesListenerContainer;

public void testSendJms() {
    _jmsTemplate.convertAndSend("queue", "working");
}

@JmsListener(destination = "queue", id = "testId")
public void testJms(String s) {
    System.out.println("Received JMS: " + s);
}
The container bean is never created. I also tried getting it from the context, or using @Autowired and @Qualifier("testId"), with no luck.
How can I get the container?
You need @EnableJms on one of your configuration classes.
You need a jmsListenerContainerFactory bean.
You can stop and start the containers using the JmsListenerEndpointRegistry bean.
See the Spring documentation.
If you use CachingConnectionFactory in your project, you need to call the resetConnection() method between stop and restart; otherwise the old physical connection will remain open and will be reused when you restart.
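A minimal sketch of that sequence, using the question's listener id "testId" (bean wiring is illustrative):

@Autowired
private JmsListenerEndpointRegistry registry;

@Autowired
private CachingConnectionFactory connectionFactory;

public void restartListener() {
    registry.getListenerContainer("testId").stop();
    connectionFactory.resetConnection(); // closes the cached physical connection
    registry.getListenerContainer("testId").start();
}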
I used JmsListenerEndpointRegistry. Here's my example; I hope this will help.
Bean configuration in JmsConfiguration.java. I changed the default autostart option.
@Bean(name = "someQueueScheduled")
public DefaultJmsListenerContainerFactory odsContractScheduledQueueContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(someActiveMQ);
    Map<String, Class<?>> typeIds = new HashMap<>();
    typeIds.put("SomeDTO", SomeDTO.class);
    factory.setMessageConverter(messageConverter(Collections.unmodifiableMap(typeIds)));
    factory.setPubSubDomain(false);
    factory.setConnectionFactory(cf);
    factory.setAutoStartup(false);
    return factory;
}
Invoke in SomeFacade.java
public class SomeFacade {

    @Autowired
    JmsListenerEndpointRegistry someUpdateListener;

    public void stopSomeUpdateListener() {
        MessageListenerContainer container = someUpdateListener.getListenerContainer("someUpdateListener");
        container.stop();
    }

    public void startSomeUpdateListener() {
        MessageListenerContainer container = someUpdateListener.getListenerContainer("someUpdateListener");
        container.start();
    }
}
JmsListener implementation in SomeService.java
public class SomeService {
#JmsListener(id = "someUpdateListener",
destination = "${some.someQueueName}",
containerFactory ="someQueueScheduled")
public void pullUpdateSomething(SomeDTO someDTO) {
}
}

RabbitMQ - Java Spring - how to init exchange to several queues?

I'm having difficulty finding a Spring way to initialize an exchange that sends the incoming message to more than one queue in my Spring Boot application:
I can't find a good way to define a second exchange-queue binding.
I'm using RabbitTemplate as the producer client.
The RabbitMQ tutorials don't really help with that, since:
they only initialize several temporary queues from the Consumer on demand (while I need the Producer to do the binding - to persistent queues)
the examples are for basic Java usage, not using Spring capabilities.
I also failed to find how to implement it via the Spring AMQP pages.
What I've got so far is an attempt to inject the basic Java binding into the Spring way of doing it - but it's not working:
@Bean
public ConnectionFactory connectionFactory() throws IOException {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
    connectionFactory.setUsername("guest");
    connectionFactory.setPassword("guest");
    Connection conn = connectionFactory.createConnection();
    Channel channel = conn.createChannel(false);
    channel.exchangeDeclare(SPRING_BOOT_EXCHANGE, "fanout");
    channel.queueBind(queueName, SPRING_BOOT_EXCHANGE, ""); // first bind
    channel.queueBind(queueName2, SPRING_BOOT_EXCHANGE, ""); // second bind
    return connectionFactory;
}
Any help would be appreciated
Edited
I think the problem arises from the fact that every time I restart my server it tries to redefine the exchange-queue binding, while it already persists in the broker...
I managed to define them manually via the broker's UI console - so the Producer is only aware of the exchange name, and the Consumer only of its relevant queue.
Is there a way to define those elements programmatically, but in such a way that they won't be redefined/overwritten if they already exist from previous restarts?
We use an approach similar to the following to send data from one specific input channel to several input queues of other consumers:
@Bean
public IntegrationFlow integrationFlow(final RabbitTemplate rabbitTemplate, final AmqpHeaderMapper amqpHeaderMapper) {
    return IntegrationFlows
            .from("some-input-channel")
            .handle(Amqp.outboundAdapter(rabbitTemplate)
                    .headerMapper(amqpHeaderMapper))
            .get();
}

@Bean
public AmqpHeaderMapper amqpHeaderMapper() {
    final DefaultAmqpHeaderMapper headerMapper = new DefaultAmqpHeaderMapper();
    headerMapper.setRequestHeaderNames("*");
    return headerMapper;
}

@Bean
public ConnectionFactory rabbitConnectionFactory() {
    return new CachingConnectionFactory();
}

@Bean
public RabbitAdmin rabbitAdmin(final ConnectionFactory rabbitConnectionFactory) {
    final RabbitAdmin rabbitAdmin = new RabbitAdmin(rabbitConnectionFactory);
    rabbitAdmin.afterPropertiesSet();
    return rabbitAdmin;
}

@Bean
public RabbitTemplate rabbitTemplate(final ConnectionFactory rabbitConnectionFactory, final RabbitAdmin rabbitAdmin) {
    final RabbitTemplate rabbitTemplate = new RabbitTemplate();
    rabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
    final FanoutExchange fanoutExchange = new FanoutExchange(MY_FANOUT.getFanoutName());
    fanoutExchange.setAdminsThatShouldDeclare(rabbitAdmin);
    for (final String queueName : MY_FANOUT.getQueueNames()) {
        final Queue queue = new Queue(queueName, true);
        queue.setAdminsThatShouldDeclare(rabbitAdmin);
        final Binding binding = BindingBuilder.bind(queue).to(fanoutExchange);
        binding.setAdminsThatShouldDeclare(rabbitAdmin);
    }
    rabbitTemplate.setExchange(fanoutExchange.getName());
    return rabbitTemplate;
}
and for completeness here's the enum for the fanout declaration:
public enum MyFanout {

    MY_FANOUT(Lists.newArrayList("queue1", "queue2"), "my-fanout");

    private final List<String> queueNames;
    private final String fanoutName;

    MyFanout(final List<String> queueNames, final String fanoutName) {
        this.queueNames = requireNonNull(queueNames, "queue must not be null!");
        this.fanoutName = requireNonNull(fanoutName, "exchange must not be null!");
    }

    public List<String> getQueueNames() {
        return this.queueNames;
    }

    public String getFanoutName() {
        return this.fanoutName;
    }
}
Hope it helps!
Thanks!
That was the answer I was looking for.
Also - for the sake of completeness - I found a way to do it 'the Java way' inside a Spring Bean:
@Bean
public ConnectionFactory connectionFactory() throws IOException {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
    connectionFactory.setUsername("guest");
    connectionFactory.setPassword("guest");
    Connection conn = connectionFactory.createConnection();
    Channel channel = conn.createChannel(false);
    // declare exchange
    AMQP.Exchange.DeclareOk resEx = channel.exchangeDeclare(AmqpTemp.SPRING_BOOT_EXCHANGE_test, ExchangeTypes.FANOUT, true, false, false, null);
    // declare queues
    AMQP.Queue.DeclareOk resQ = channel.queueDeclare(AmqpTemp.Q2, true, false, false, null);
    resQ = channel.queueDeclare(AmqpTemp.Q3, true, false, false, null);
    // declare bindings
    AMQP.Queue.BindOk resB = channel.queueBind(AmqpTemp.Q2, AmqpTemp.SPRING_BOOT_EXCHANGE_test, "");
    resB = channel.queueBind(AmqpTemp.Q3, AmqpTemp.SPRING_BOOT_EXCHANGE_test, "");
    // channel.queueBind(queueName2, SPRING_BOOT_EXCHANGE, "");
    return connectionFactory;
}
The problems I was having before were due to the fact that I had created some queues in my initial play with the code - and when I tried to reuse the same queue names, it caused exceptions, since they had initially been defined differently. So, lesson learnt: rename the queues from the names you used when you 'played' with the code.
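If you need to guard against that, one option (a sketch, reusing the conn and channel variables from the block above) is a passive declare, which only checks that the queue exists and does not compare arguments, so it avoids the PRECONDITION_FAILED error that an active re-declare with different arguments raises:

try {
    channel.queueDeclarePassive(AmqpTemp.Q2); // throws IOException if the queue does not exist
}
catch (IOException e) {
    // a failed passive declare closes the channel, so open a fresh one before declaring
    channel = conn.createChannel(false);
    channel.queueDeclare(AmqpTemp.Q2, true, false, false, null);
}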
