Tomcat hangs shutting down with Spring Integration Java DSL

I'm having an issue where trying to gracefully shutdown Tomcat (8) never finishes, due to what appears to be DefaultMessageListenerContainer being blocked (or looping) indefinitely.
I've been googling around for solutions, but anything similar I've found hasn't worked. This includes (but is not limited to):
Using configureListenerContainer() to set the taskExecutor of the container (see the sketch after this list)
Using MessageChannels.queue() instead of MessageChannels.direct()
Wrapping the ActiveMQConnectionFactory in a CachingConnectionFactory
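For reference, the listener-container tweak in the first bullet looked roughly like the following. This is a sketch under my own assumptions (it reuses the flow and bean methods from the configuration shown below, and a cached thread pool as the executor; configureListenerContainer()/taskExecutor() per the DSL container spec), not the exact code that was tried:
@Bean
public IntegrationFlow pushMessageInboundFlow() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(connectionFactory())
                    .destination(testQueue())
                    // hand the listener container its own executor instead of the default
                    .configureListenerContainer(c -> c.taskExecutor(Executors.newCachedThreadPool())))
            .log()
            .transform(new JsonToObjectTransformer(TestMessageObject.class))
            .channel(testReceiveChannel())
            .get();
}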
A simple Servlet 3.0 example:
compile 'org.springframework.integration:spring-integration-core:4.3.6.RELEASE'
compile 'org.springframework.integration:spring-integration-jms:4.3.6.RELEASE'
compile 'org.springframework.integration:spring-integration-java-dsl:1.2.1.RELEASE'
Initializer:
public class ExampleWebApp implements WebApplicationInitializer {
    @Override
    public void onStartup(final ServletContext servletContext) throws ServletException {
        final AnnotationConfigWebApplicationContext springContext = new AnnotationConfigWebApplicationContext();
        springContext.register(ExampleConfig.class);
        servletContext.addListener(new ContextLoaderListener(springContext));
        final ServletRegistration.Dynamic registration = servletContext.addServlet("example", new HttpRequestHandlerServlet());
        registration.setLoadOnStartup(1);
        registration.addMapping("/status");
    }
}
Configuration:
@Configuration
@EnableIntegration
public class ExampleConfig {
    @Bean
    public ConnectionFactory connectionFactory() {
        final ActiveMQConnectionFactory mqConnectionFactory = new ActiveMQConnectionFactory();
        mqConnectionFactory.setBrokerURL("tcp://host:port");
        mqConnectionFactory.setUserName("----");
        mqConnectionFactory.setPassword("----");
        return mqConnectionFactory;
    }
    @Bean
    public Queue testQueue() {
        return new ActiveMQQueue("test.queue");
    }
    @Bean
    public MessageChannel testReceiveChannel() {
        return MessageChannels.direct().get();
    }
    @Bean
    public IntegrationFlow pushMessageInboundFlow() {
        return IntegrationFlows
                .from(Jms.messageDrivenChannelAdapter(connectionFactory())
                        .destination(testQueue()))
                .log()
                .transform(new JsonToObjectTransformer(TestMessageObject.class))
                .channel(testReceiveChannel())
                .get();
    }
    /** Example message object */
    public static class TestMessageObject {
        private String text;
        public String getText() {
            return text;
        }
        public void setText(final String text) {
            this.text = text;
        }
    }
}
If I try to stop this via the catalina.sh script (for example, pressing "stop" in IntelliJ), it never finishes exiting. So far the only way I've been able to get shutdown to finish is by "manually" destroying the JmsMessageAdapters on shutdown, via a little helper class:
public class JmsMessageListenerContainerLifecycleManager {
    private static final Logger LOG = LoggerFactory.getLogger(JmsMessageListenerContainerLifecycleManager.class);
    @Autowired
    private List<IntegrationFlow> mIntegrationFlows;
    @PreDestroy
    public void shutdownJmsAdapters() throws Exception {
        LOG.info("Checking {} integration flows for JMS message adapters", mIntegrationFlows.size());
        for (IntegrationFlow flow : mIntegrationFlows) {
            if (flow instanceof StandardIntegrationFlow) {
                final StandardIntegrationFlow standardFlow = (StandardIntegrationFlow) flow;
                for (Object component : standardFlow.getIntegrationComponents()) {
                    if (component instanceof JmsMessageDrivenChannelAdapter) {
                        final JmsMessageDrivenChannelAdapter adapter = (JmsMessageDrivenChannelAdapter) component;
                        LOG.info("Destroying JMS adapter {}", adapter.getComponentName());
                        adapter.destroy();
                    }
                }
            }
        }
    }
}
And while that works, it definitely feels like the wrong solution.
Previously I was using XML configuration of spring-integration, and I did not have this problem. What am I missing?

Ugh! This is definitely a bug. And it looks like you work around it properly.
Although consider destroying any DisposableBean there.
I'm adding the fix to the Spring Integration Java DSL. We are going to release the next 1.2.2 just after Spring Integration 4.3.9.
Spring Integration 5.0 will have the fix in its M3 release tomorrow.
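A hedged sketch of that generalization, replacing the adapter-specific check in the helper class above with a check for any DisposableBean (same field names as the helper; an illustration of the suggestion, not the fix that landed in the DSL):
@PreDestroy
public void shutdownFlowComponents() throws Exception {
    for (IntegrationFlow flow : mIntegrationFlows) {
        if (flow instanceof StandardIntegrationFlow) {
            final StandardIntegrationFlow standardFlow = (StandardIntegrationFlow) flow;
            for (Object component : standardFlow.getIntegrationComponents()) {
                // destroy anything that knows how to clean itself up, not just the JMS adapters
                if (component instanceof DisposableBean) {
                    ((DisposableBean) component).destroy();
                }
            }
        }
    }
}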

Related

Using Netflix Ribbon without Spring Boot in Legacy Application

I work on an application which uses Apache Mina as an SFTP server. The application itself is started as a jar and sends REST requests to our backend.
I now want to use Netflix Ribbon without turning the whole application into a Spring Boot project, or a Spring project in general.
My approach is to access the API directly, like in the example:
public class MyClass {
    @Autowired
    private LoadBalancerClient loadBalancer;
    public void doStuff() {
        ServiceInstance instance = loadBalancer.choose("stores");
        URI storesUri = URI.create(String.format("http://%s:%s", instance.getHost(), instance.getPort()));
        // ... do something with the URI
    }
}
The examples in the documentation only show how it is done when the configuration is handled by Spring automatically. However, this is not working for me, and I cannot get Spring to automatically provide the load balancer bean.
I solved the problem by "hardcoding" the Spring parts:
@Configuration
public class LoadbalancerConfig {
    @Bean
    public ILoadBalancer loadBalancer() {
        BaseLoadBalancer baseLoadBalancer = new BaseLoadBalancer("balancer", rule(), new LoadBalancerStats("balancer"));
        baseLoadBalancer.addServers(serverList().getInitialListOfServers());
        return baseLoadBalancer;
    }
    @Bean
    public IRule rule() {
        return new RandomRule();
    }
    @Bean
    public ServerList<Server> serverList() {
        return new StaticServerList<>(new Server("host1", 80),
                new Server("host2", 80));
    }
}
Utility class for getting a bean at a later point:
public class BeanUtil implements ApplicationContextAware {
    private static final Logger log = LogManager.getLogger(BeanUtil.class);
    private static ApplicationContext applicationContext;
    @Override
    public void setApplicationContext(final ApplicationContext ctx) throws BeansException {
        applicationContext = ctx;
    }
    public static <T> T getBean(Class<T> beanClass) {
        return applicationContext.getBean(beanClass);
    }
}
I instantiate them through an XML file:
<context:component-scan base-package="package.of.loadbalancerconfig" />
<bean id="applicationContextProvider" lazy-init="false" class="my.package.BeanUtil" />
Don't forget to create your ApplicationContext at initialization:
ApplicationContext context = new FileSystemXmlApplicationContext("file:/path/to/beans.xml");
Now I could get the load balancer and the instances:
if (loadBalancer == null) {
    loadBalancer = BeanUtil.getBean(ILoadBalancer.class);
}
Server instance = loadBalancer.chooseServer("balancer");
URI uri = URI.create(String.format("http://%s:%s", instance.getHost(), instance.getPort()));
I'm sure there is a more elegant way, but it worked for me.

Spring Integration manually publish message to channel

I'm in the process of learning how to use the Java Spring Framework and started experimenting with Spring Integration. I'm trying to use Spring Integration to connect my application to an MQTT broker, both to publish and to subscribe to messages, but I'm having trouble finding a way to manually publish messages to an outbound channel. If possible I want to build it using annotations in the Java code exclusively, rather than XML files defining beans and other related configuration.
In every example I've seen, the solution to manually publishing a message seems to be to use a MessagingGateway interface and then use the SpringApplicationBuilder to get the ConfigurableApplicationContext, which in turn provides a reference to the gateway interface in the main method. The reference is then used to publish a message. Would it be possible to use @Autowired for the interface instead? In my attempts I just get a NullPointerException.
My aim is to build a game where I subscribe to a topic to get game messages and then whenever the user is ready to make the next move, publish a new message to the topic.
Update:
This is one of the examples I've been looking at for how to set up an outbound channel: https://docs.spring.io/spring-integration/reference/html/mqtt.html
Update 2 after answer from Gary Russell:
This is some example code I wrote after looking at examples, which gets me a NullPointerException when using @Autowired for the Gateway and running gateway.sendToMqtt in Controller.java. What I want to achieve here is to send an MQTT message manually when a GET request is handled by the controller.
Application.java
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Controller.java
@RestController
@RequestMapping("/publishMessage")
public class Controller {
    @Autowired
    static Gateway gateway;
    @RequestMapping(method = RequestMethod.GET)
    public int request() {
        gateway.sendToMqtt("Test Message!");
        return 0;
    }
}
MqttPublisher.java
@EnableIntegration
@Configuration
public class MqttPublisher {
    @Bean
    public MqttPahoClientFactory mqttClientFactory() {
        DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
        factory.setServerURIs("tcp://localhost:1883");
        return factory;
    }
    @Bean
    @ServiceActivator(inputChannel = "mqttOutboundChannel")
    public MessageHandler mqttOutbound() {
        MqttPahoMessageHandler messageHandler =
                new MqttPahoMessageHandler("clientPublisher", mqttClientFactory());
        messageHandler.setAsync(true);
        messageHandler.setDefaultTopic("topic");
        return messageHandler;
    }
    @Bean
    public MessageChannel mqttOutboundChannel() {
        return new DirectChannel();
    }
    @MessagingGateway(defaultRequestChannel = "mqttOutboundChannel")
    public interface Gateway {
        void sendToMqtt(String data);
    }
}
Update:
Not sure if this is the proper logging but it is what I get from adding:
logging.level.org.springframework.web=Debug
logging.level.org.hibernate=Error
to application.properties.
https://hastebin.com/cuvonufeco.hs
Use a Messaging Gateway or simply send a message to the channel.
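For the second option, a minimal sketch in the controller from the question (assuming the mqttOutboundChannel bean defined above; the field name matches the bean name, and MessageBuilder is org.springframework.integration.support.MessageBuilder):
@Autowired
private MessageChannel mqttOutboundChannel;

@RequestMapping(method = RequestMethod.GET)
public int request() {
    // send straight to the channel the @ServiceActivator is listening on
    this.mqttOutboundChannel.send(MessageBuilder.withPayload("Test Message!").build());
    return 0;
}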
EDIT
@SpringBootApplication
public class So47846492Application {
    public static void main(String[] args) {
        SpringApplication.run(So47846492Application.class, args).close();
    }
    @Bean
    public ApplicationRunner runner(MyGate gate) {
        return args -> {
            gate.send("someTopic", "foo");
            Thread.sleep(5_000);
        };
    }
    @Bean
    @ServiceActivator(inputChannel = "toMqtt")
    public MqttPahoMessageHandler mqtt() {
        MqttPahoMessageHandler handler = new MqttPahoMessageHandler("tcp://localhost:1883", "foo",
                clientFactory());
        handler.setDefaultTopic("myTopic");
        handler.setQosExpressionString("1");
        return handler;
    }
    @Bean
    public MqttPahoClientFactory clientFactory() {
        DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
        factory.setUserName("guest");
        factory.setPassword("guest");
        return factory;
    }
    @Bean
    public MqttPahoMessageDrivenChannelAdapter mqttIn() {
        MqttPahoMessageDrivenChannelAdapter adapter =
                new MqttPahoMessageDrivenChannelAdapter("tcp://localhost:1883", "bar", "someTopic");
        adapter.setOutputChannelName("fromMqtt");
        return adapter;
    }
    @ServiceActivator(inputChannel = "fromMqtt")
    public void in(String in) {
        System.out.println(in);
    }
    @MessagingGateway(defaultRequestChannel = "toMqtt")
    public interface MyGate {
        void send(@Header(MqttHeaders.TOPIC) String topic, String out);
    }
}

Spring Integration: Persistent and transactional QueueChannel

In Spring Integration we have a Setup that looks something like this:
(dispatcher) Messages --> Gateway ----> QueueChannel ---> MessageHandler (worker)
So we have one dispatcher thread that takes messages from an MQTT broker and forwards them into the queue. The poller for the queue is provided with a TaskExecutor, so the consumer side is multithreaded.
We managed to implement all of this functionality, so the setup just described is already in place.
Now, to guarantee no data loss, we want to ensure two things:
1.:
We want our queue to persist the data, so that when the program shuts down ungracefully, all the data in the queue will still be there.
This also worked for us; we are using MongoDB as the database because we read somewhere in your docs that this is the recommended way to do it.
2.:
The second thing we want to ensure is that the worker threads work transactionally: only if a worker thread returns correctly will the message be permanently deleted from the queue (and therefore from the persistent MessageStore). If the program shuts down during the processing of a message (by a worker thread), the message will still be in the queue at the next startup.
Also, if the worker, for example, throws an exception during the processing of the message, it will be put back into the queue.
Our implementation:
As explained before, the basic setup of the program is already implemented. We then extended the basic implementation with a message store implementation for the queue.
QueueChannel:
@Bean
public PollableChannel inputChannel(BasicMessageGroupStore mongoDbChannelMessageStore) {
    return new QueueChannel(new MessageGroupQueue(mongoDbChannelMessageStore, "inputChannel"));
}
backed by a MessageStore:
@Bean
public BasicMessageGroupStore mongoDbChannelMessageStore(MongoDbFactory mongoDbFactory) {
    MongoDbChannelMessageStore store = new MongoDbChannelMessageStore(mongoDbFactory);
    store.setPriorityEnabled(true);
    return store;
}
the matching Poller:
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() {
    PollerMetadata poll = Pollers.fixedDelay(10).get();
    poll.setTaskExecutor(consumer);
    return poll;
}
Executor:
private Executor consumer = Executors.newFixedThreadPool(5);
What have we tried?
As explained, we now want to extend this implementation with transactional functionality. We tried using setTransactionSynchronizationFactory as explained here, but it wasn't working (we didn't get errors or anything, but the behavior was still the same as before we added the TransactionSynchronizationFactory):
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() {
    PollerMetadata poll = Pollers.fixedDelay(10).get();
    poll.setTaskExecutor(consumer);
    BeanFactory factory = mock(BeanFactory.class);
    ExpressionEvaluatingTransactionSynchronizationProcessor etsp = new ExpressionEvaluatingTransactionSynchronizationProcessor();
    etsp.setBeanFactory(factory);
    etsp.setAfterRollbackChannel(inputChannel());
    etsp.setAfterRollbackExpression(new SpelExpressionParser().parseExpression("#bix"));
    etsp.setAfterCommitChannel(inputChannel());
    etsp.setAfterCommitExpression(new SpelExpressionParser().parseExpression("#bix"));
    DefaultTransactionSynchronizationFactory dtsf = new DefaultTransactionSynchronizationFactory(etsp);
    poll.setTransactionSynchronizationFactory(dtsf);
    return poll;
}
What would be the best way to realize our requirements in Spring Integration?
EDIT:
As recommended in the answer, I chose to do this with the JdbcChannelMessageStore. I tried converting the XML implementation described here (18.4.2) into Java. I wasn't quite sure how to do it; this is what I have tried so far:
I created an H2 database and ran the script shown here on it.
Created the JdbcChannelMessageStore bean:
@Bean
public JdbcChannelMessageStore store() {
    JdbcChannelMessageStore ms = new JdbcChannelMessageStore();
    ms.setChannelMessageStoreQueryProvider(queryProvider());
    ms.setUsingIdCache(true);
    ms.setDataSource(dataSource);
    return ms;
}
Created the H2ChannelMessageStoreQueryProvider:
@Bean
public ChannelMessageStoreQueryProvider queryProvider() {
    return new H2ChannelMessageStoreQueryProvider();
}
Adapted the poller:
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() throws Exception {
    PollerMetadata poll = Pollers.fixedDelay(10).get();
    poll.setTaskExecutor(consumer);
    poll.setAdviceChain(Collections.singletonList(transactionInterceptor()));
    return poll;
}
Autowired my PlatformTransactionManager:
@Autowired
PlatformTransactionManager transactionManager;
And created a TransactionInterceptor from the TransactionManager:
@Bean
public TransactionInterceptor transactionInterceptor() {
    return new TransactionInterceptorBuilder(true)
            .transactionManager(transactionManager)
            .isolation(Isolation.READ_COMMITTED)
            .propagation(Propagation.REQUIRED)
            .build();
}
If you need the queue to be transactional, you should definitely take a look at a transactional MessageStore, and only the JDBC one is like that, simply because only JDBC supports transactions: when we perform the DELETE, it is only final if the transaction is committed.
Neither MongoDB nor any other NoSQL database supports such a model, so with those your only option is to push failed messages back to the DB on rollback using a TransactionSynchronizationFactory.
UPDATE
@RunWith(SpringRunner.class)
@DirtiesContext
public class So47264688Tests {
    private static final String MESSAGE_GROUP = "transactionalQueueChannel";
    private static EmbeddedDatabase dataSource;
    @BeforeClass
    public static void init() {
        dataSource = new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .addScript("classpath:/org/springframework/integration/jdbc/schema-drop-h2.sql")
                .addScript("classpath:/org/springframework/integration/jdbc/schema-h2.sql")
                .build();
    }
    @AfterClass
    public static void destroy() {
        dataSource.shutdown();
    }
    @Autowired
    private PollableChannel transactionalQueueChannel;
    @Autowired
    private JdbcChannelMessageStore jdbcChannelMessageStore;
    @Autowired
    private PollingConsumer serviceActivatorEndpoint;
    @Autowired
    private CountDownLatch exceptionLatch;
    @Test
    public void testTransactionalQueueChannel() throws InterruptedException {
        GenericMessage<String> message = new GenericMessage<>("foo");
        this.transactionalQueueChannel.send(message);
        assertTrue(this.exceptionLatch.await(10, TimeUnit.SECONDS));
        this.serviceActivatorEndpoint.stop();
        assertEquals(1, this.jdbcChannelMessageStore.messageGroupSize(MESSAGE_GROUP));
        Message<?> messageFromStore = this.jdbcChannelMessageStore.pollMessageFromGroup(MESSAGE_GROUP);
        assertNotNull(messageFromStore);
        assertEquals(message, messageFromStore);
    }
    @Configuration
    @EnableIntegration
    public static class ContextConfiguration {
        @Bean
        public PlatformTransactionManager transactionManager() {
            return new DataSourceTransactionManager(dataSource);
        }
        @Bean
        public ChannelMessageStoreQueryProvider queryProvider() {
            return new H2ChannelMessageStoreQueryProvider();
        }
        @Bean
        public JdbcChannelMessageStore jdbcChannelMessageStore() {
            JdbcChannelMessageStore jdbcChannelMessageStore = new JdbcChannelMessageStore(dataSource);
            jdbcChannelMessageStore.setChannelMessageStoreQueryProvider(queryProvider());
            return jdbcChannelMessageStore;
        }
        @Bean
        public PollableChannel transactionalQueueChannel() {
            return new QueueChannel(new MessageGroupQueue(jdbcChannelMessageStore(), MESSAGE_GROUP));
        }
        @Bean
        public TransactionInterceptor transactionInterceptor() {
            return new TransactionInterceptorBuilder()
                    .transactionManager(transactionManager())
                    .isolation(Isolation.READ_COMMITTED)
                    .propagation(Propagation.REQUIRED)
                    .build();
        }
        @Bean
        public TaskExecutor threadPoolTaskExecutor() {
            ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
            threadPoolTaskExecutor.setCorePoolSize(5);
            return threadPoolTaskExecutor;
        }
        @Bean(name = PollerMetadata.DEFAULT_POLLER)
        public PollerMetadata poller() {
            return Pollers.fixedDelay(10)
                    .advice(transactionInterceptor())
                    .taskExecutor(threadPoolTaskExecutor())
                    .get();
        }
        @Bean
        public CountDownLatch exceptionLatch() {
            return new CountDownLatch(2);
        }
        @ServiceActivator(inputChannel = "transactionalQueueChannel")
        public void handle(Message<?> message) {
            System.out.println(message);
            try {
                throw new RuntimeException("Intentional for rollback");
            }
            finally {
                exceptionLatch().countDown();
            }
        }
    }
}
Thanks to Artem Bilan for the great support. I finally found the solution. It turned out there were other beans named transactionManager and transactionInterceptor active. This resulted in the strange behavior that my transaction manager was never initialized; instead the other transaction manager (null) was used for the TransactionInterceptor and the PollingConsumer. That's why my transaction manager in the PollingConsumer was null, and why my transactions were never working.
The solution was to rename all my beans; for some beans I also used the @Primary annotation to tell Spring to always use that specific bean when autowiring.
I also downgraded to 4.3, just to make sure this wasn't an error related to version 5. I haven't tested whether it works with version 5 yet, but I think it should work as well.
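A minimal sketch of that renaming plus @Primary approach (the bean names here are placeholders of my own, not the ones from the project):
@Bean
@Primary
public PlatformTransactionManager queueTransactionManager(DataSource dataSource) {
    // @Primary makes this the manager that wins whenever a PlatformTransactionManager is autowired by type
    return new DataSourceTransactionManager(dataSource);
}
@Bean
public TransactionInterceptor queueTransactionInterceptor(PlatformTransactionManager queueTransactionManager) {
    return new TransactionInterceptorBuilder()
            .transactionManager(queueTransactionManager)
            .isolation(Isolation.READ_COMMITTED)
            .propagation(Propagation.REQUIRED)
            .build();
}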

@Autowired component is not available

My class is an SFTP poller. It implements DirectoryListener and implements the fileAdded method to monitor when new events are added to an SFTP directory. The code looks like:
@SpringBootApplication
@Slf4j
public class SftpBridge implements DirectoryListener, IoErrorListener, InitialContentListener {
    @Autowired
    private SftpBridgeConfig config;
    @Autowired
    public SftpDirectory sftpDirectory;
    public static void main(final String[] args) throws Exception {
        SpringApplication.run(SftpBridge.class, args);
    }
    @PostConstruct
    public void postConstruct() {
        LOG.info("Initializing...");
        initialize();
        LOG.info("Initialized!");
    }
    private void initialize() {
        pollSftp();
    }
    public void pollSftp() {
        try {
            while (true) {
                LOG.info("monitoring directory: " + "/");
                PolledDirectory polledDirectory = sftpDirectory;
                DirectoryPoller dp = DirectoryPoller.newBuilder()
                        .addPolledDirectory(polledDirectory)
                        .addListener(new SftpBridge())
                        // other settings
                        // remove this later
                        .enableFileAddedEventsForInitialContent() // optional (disabled by default). FileAddedEvents fired for the directories' initial content.
                        // TODO: enable later for subdirectory polling
                        //.enableParallelPollingOfDirectories() // optional (disabled by default).
                        .setDefaultFileFilter(new RegexFileFilter(".*csv")) // optional. Only consider files ending with "csv".
                        .setThreadName("sftp-poller") // sets the name of the polling thread
                        .setPollingInterval(10, TimeUnit.SECONDS)
                        .start();
                TimeUnit.HOURS.sleep(2);
                dp.stop();
            }
        } catch (final Exception e) {
            LOG.error("Error monitoring ftp host", e);
        }
    }
Since pollSftp() is called by initialize() during Spring Boot application init, it is able to see the @Autowired component SftpBridgeConfig config.
My problem is that, because my class implements DirectoryListener, I have to override the fileAdded event to take some action when a new FTP file is added.
@Override
public void fileAdded(FileAddedEvent event) {
    LOG.info("Added: " + event.getFileElement());
    // implementing DirectoryListener
    // @Autowired component config is null here as it is called from a polling thread
}
In the fileAdded(FileAddedEvent event) method, my @Autowired component config is null, because this method is not called during Spring Boot init. What is the best way to structure the code so that the @Autowired component config is available when fileAdded() is called by an SFTP directory polling thread?
Thanks for any advice.
Edit: @Andreas - I've filled out my pollSftp() method, which adds the class as a DirectoryListener. Thanks
Is there an annotation on SftpBridgeConfig?
cf. @Service, @Component
If it is a configuration class, you need to register a @Bean in the Spring context.
cf. @Bean(name="") or @Bean
e.g. a message source configuration:
@Configuration
public class MessageSourceConfiguration {
    @Bean
    public MessageSource messageSource() {
        // statements
    }
}

Multiple dynamic HTTP endpoints

I want to run multiple HTTP endpoints which should be created based on a list of paths.
Currently I'm able to create one endpoint:
@MessagingGateway(defaultRequestChannel = "requestChannel")
public interface Gateway {
    String sendReceive(String in);
}
@Bean
public MessageChannel requestChannel() {
    return new DirectChannel();
}
@Bean
public IntegrationFlow flow() {
    return IntegrationFlows.from("requestChannel").transform(new ObjectToStringTransformer())
            .handle(new MyHandle())
            .get();
}
@Bean
public HttpRequestHandlingMessagingGateway httpGate() {
    HttpRequestHandlingMessagingGateway gateway = new HttpRequestHandlingMessagingGateway(true);
    RequestMapping mapping = new RequestMapping();
    mapping.setMethods(HttpMethod.POST);
    mapping.setPathPatterns("/path");
    gateway.setRequestMapping(mapping);
    gateway.setRequestChannel(requestChannel());
    gateway.setRequestPayloadType(byte[].class);
    return gateway;
}
but I want to do something like this:
@Autowired
List<String> paths;
@PostConstruct
public void createEndpoints() {
    for (String path : paths) {
        // code for dynamic endpoint creation
    }
}
private class MyHandle extends AbstractReplyProducingMessageHandler {
    @Override
    protected Object handleRequestMessage(Message<?> requestMessage) {
        return this.getMessageBuilderFactory().withPayload("Your message: " + requestMessage.getPayload());
    }
}
Can you tell me how I can do it?
Since Java DSL 1.2 there is an IntegrationFlowContext exactly for such a use case, to register IntegrationFlow instances and their dependent beans dynamically.
https://spring.io/blog/2016/09/27/java-dsl-for-spring-integration-1-2-release-candidate-1-is-available
The GA release is today.
You should just follow the samples in that blog post and pay attention to the org.springframework.integration.dsl.http.Http factory.
But, indeed, do that as early as possible. @PostConstruct is a good phase for this use case.
Any later and the HandlerMapping won't be able to detect the new mapping, because it does its scan in afterPropertiesSet().
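Putting those pieces together, a rough sketch of the createEndpoints() loop (reusing the transformer and MyHandle from the question; the flow id and the exact Http/IntegrationFlowContext method chain are my own assumptions, so treat this as a sketch rather than verified code):
@Autowired
private IntegrationFlowContext flowContext;
@Autowired
private List<String> paths;
@PostConstruct
public void createEndpoints() {
    for (String path : paths) {
        IntegrationFlow flow = IntegrationFlows
                .from(Http.inboundGateway(path)
                        .requestMapping(m -> m.methods(HttpMethod.POST))
                        .requestPayloadType(byte[].class))
                .transform(new ObjectToStringTransformer())
                .handle(new MyHandle())
                .get();
        // register the flow (and its HTTP inbound endpoint) with the running context
        this.flowContext.registration(flow)
                .id(path + ".flow")
                .register();
    }
}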
