How can I implement a socket reconnection using spring integration TCP? - java

While integrating with a TCP endpoint, we built an application using Spring Integration TCP that uses a pooled connection defined by the following beans:
<!-- Pooled Connection factory -->
<int-ip:tcp-connection-factory id="client" type="client" host="${gateway.url}" port="${gateway.port}"
single-use="true" so-timeout="${gateway.socket.timeout}" serializer="appSerializerDeserializer" deserializer="appSerializerDeserializer" />
<bean id="cachedClient" class="org.springframework.integration.ip.tcp.connection.CachingClientConnectionFactory">
<constructor-arg ref="client" />
<constructor-arg value="${gateway.pool.size}" />
</bean>
Does anyone have a suggestion on how to implement socket reconnection in case the connection is lost?

It will automatically reconnect the next time you send something.
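If the very first send after a dropped connection can still fail (for example, the server closed the socket while it sat idle in the cache), a retry advice on the sending endpoint lets that send be repeated transparently on a fresh connection. Here is a minimal sketch in Java config, assuming a Spring Integration version that provides RequestHandlerRetryAdvice; the bean name and the endpoint it is attached to are illustrative, not taken from the question:
import org.springframework.context.annotation.Bean;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class TcpRetryConfig {

    // Advice that retries a failed send; attach it to the TCP outbound
    // endpoint that performs the send.
    @Bean
    public RequestHandlerRetryAdvice tcpRetryAdvice() {
        RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
        RetryTemplate template = new RetryTemplate();
        // Two attempts total: the original send plus one retry on a new connection.
        template.setRetryPolicy(new SimpleRetryPolicy(2));
        advice.setRetryTemplate(template);
        return advice;
    }
}
In the XML above, such an advice would typically be referenced from a request-handler-advice-chain element on the TCP outbound gateway or channel adapter that does the sending.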

Related

Put operation does not work for IBM MQ from camel when using JMSPoolXAConnectionFactory

We are implementing an XA transaction between MQ and a database, and are trying to create a connection factory as a service in Karaf as per the link below.
https://access.redhat.com/documentation/fr-fr/red_hat_fuse/7.2/html/apache_karaf_transaction_guide/using-jms-connection-factories#manual-deployment-connection-factories
The MQ we are using is IBM MQ, and we are connecting to it through Camel.
The Karaf service is exposed from the same bundle that is going to use it. This is done through a Blueprint XML file in the src/main/resources/OSGI-INF/blueprint folder.
When we use (through JNDI) the connection factory exposed as a service to set the connection factory used by Camel's JmsComponent, we are able to get messages from the queue but not able to put messages onto the queue. The put operation fails silently, with no error, so the database still gets updated as if the send had succeeded. This happens specifically when using JmsPoolXAConnectionFactory as the pooled connection factory. If we change it to JmsPoolConnectionFactory, the put operation works and the message is added to the queue.
Below are the sample routes for get and put to queue.
GET:
from("mq:queue:{{queueName}}")
.process(new CustomProcessor1())
.to("direct:call-sp")
.end();
from("direct:call-sp")
.to("sql-stored:call-sp")
.end();
PUT:
from("vm:send")
.process(new CustomProcessor2())
.to("mq:queue:{{queueName}}")
.to("sql-stored:update-sp")
.to("vm:nextroute")
.end();
Camel JmsComponent Configuration in camel-context.xml:
<reference id="ptm" interface="org.springframework.transaction.PlatformTransactionManager" />
<reference id="connectionFactory" interface="javax.jms.ConnectionFactory" filter="(osgi.jndi.service.name=jms/mq)" availability="optional" />
<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="transacted" value="false" />
<property name="connectionFactory" ref="connectionFactory" />
<property name="transactionManager" ref="ptm" />
</bean>
<bean id="mq" class="org.apache.camel.component.jms.JmsComponent">
<property name="configuration" ref="jmsConfig" />
<property name="destinationResolver" ref="customDestinationResolver" />
</bean>
<bean id="customDestinationResolver" class="com.example.CustomDestinationResolver">
</bean>
Is there any put-specific configuration that we are missing?
To coordinate XA transactions, you need a transaction manager which implements the Java Transaction API (JTA).
Therefore, I think you need to use a JtaTransactionManager rather than a plain org.springframework.transaction.PlatformTransactionManager.
Check this out:
https://tomd.xyz/camel-xa-transactions-checklist/
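For illustration, here is a rough Java sketch of that wiring, assuming the container exposes the JTA UserTransaction and TransactionManager; the method and parameter names are made up for the example, not taken from the original setup:
import javax.transaction.TransactionManager;
import javax.transaction.UserTransaction;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.component.jms.JmsConfiguration;
import org.springframework.transaction.jta.JtaTransactionManager;

public class XaJmsSetup {

    public JmsComponent buildMqComponent(javax.jms.ConnectionFactory xaPooledConnectionFactory,
                                         UserTransaction userTransaction,
                                         TransactionManager transactionManager) {
        // A JTA-aware Spring transaction manager backed by the container's JTA implementation.
        JtaTransactionManager jta = new JtaTransactionManager(userTransaction, transactionManager);

        JmsConfiguration jmsConfig = new JmsConfiguration();
        jmsConfig.setConnectionFactory(xaPooledConnectionFactory);
        jmsConfig.setTransactionManager(jta);
        // Leave local JMS transactions off; the XA transaction is driven by JTA.
        jmsConfig.setTransacted(false);

        JmsComponent mq = new JmsComponent();
        mq.setConfiguration(jmsConfig);
        return mq;
    }
}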

JmsTemplate cannot send response because javax.jms.IllegalStateException: Session is closed

I have a standalone application using Spring to connect to TIBCO (queues). Occasionally, for various reasons, the TIBCO connection is closed by the server. Most components recover from this. However, sometimes the JmsTemplate is not able to send a response because of the error below. I have a retry in place, but the same error keeps coming back (see the trace below).
Details that may be important:
I am using DefaultMessageListenerContainer to get the request and send the response in that receiving thread. Also, I am using the same connection factory for both DefaultMessageListenerContainer and JmsTemplate.
Caused by: org.springframework.jms.IllegalStateException: Session is closed; nested exception is javax.jms.IllegalStateException: Session is closed
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:279)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:169)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:487)
at org.springframework.jms.core.JmsTemplate.send(JmsTemplate.java:559)
at org.springframework.jms.core.JmsTemplate.convertAndSend(JmsTemplate.java:682)
at org.springframework.jms.core.JmsTemplate.convertAndSend(JmsTemplate.java:670)
at org.springframework.integration.jms.JmsSendingMessageHandler.send(JmsSendingMessageHandler.java:149)
at org.springframework.integration.jms.JmsSendingMessageHandler.handleMessageInternal(JmsSendingMessageHandler.java:116)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
... 83 more
Caused by: javax.jms.IllegalStateException: Session is closed
at com.tibco.tibjms.TibjmsxSessionImp._createProducer(TibjmsxSessionImp.java:1067)
at com.tibco.tibjms.TibjmsxSessionImp.createProducer(TibjmsxSessionImp.java:5080)
at org.springframework.jms.core.JmsTemplate.doCreateProducer(JmsTemplate.java:1114)
at org.springframework.jms.core.JmsTemplate.createProducer(JmsTemplate.java:1095)
at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:591)
at org.springframework.jms.core.JmsTemplate$3.doInJms(JmsTemplate.java:562)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:484)
... 89 more
Communication with the TIBCO queue is done using the Spring framework. Here is the configuration. A message is received by the DefaultMessageListenerContainer, processed, and a JmsTemplate is used to send back the response. The connection factory is shared between receiver and sender (can this be an issue?).
<bean id="connectionFactory"
class="org.springframework.jms.connection.SingleConnectionFactory">
<constructor-arg ref="tibcoJNDI" />
<property name="targetConnectionFactory">
<bean class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="tibcoJNDI"/>
<property name="jndiName" value="${tibco.queueConnectionFactory}" />
</bean>
</property>
<property name="reconnectOnException" value="true"/>
</bean>
<bean id="client.req.msg.lstnr" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="autoStartup" value="false"/>
<property name="connectionFactory" ref="connectionFactory"/>
<property name="destination" ref="ext_client.request.queue"/>
<property name="sessionAcknowledgeMode" value="3"/>
<property name="concurrentConsumers" value="6"/>
<property name="receiveTimeout" value="60000"/>
</bean>
<jms:outbound-channel-adapter
jms-template="ext.outbound.jms.template"
channel="jms.to.ext.clnt.reply"/>
<bean id="ext.outbound.jms.template" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="connectionFactory"/>
<property name="defaultDestination" ref="ext_client.reply.queue"/>
<property name="timeToLive" value="10800000" />
<property name="explicitQosEnabled" value="true" />
</bean>
One more detail that might help: I just noticed that the first exception is actually different. There is a "Connection is closed" exception first, followed by multiple "Session is closed" exceptions (on retries).
Caused by: javax.jms.JMSException: Connection is closed
at com.tibco.tibjms.TibjmsxLink.sendRequest(TibjmsxLink.java:322)
at com.tibco.tibjms.TibjmsxLink.sendRequest(TibjmsxLink.java:286)
at com.tibco.tibjms.TibjmsxLink.sendRequestMsg(TibjmsxLink.java:261)
at com.tibco.tibjms.TibjmsxSessionImp._createProducer(TibjmsxSessionImp.java:1075)
at com.tibco.tibjms.TibjmsxSessionImp.createProducer(TibjmsxSessionImp.java:5080)
at org.springframework.jms.core.JmsTemplate.doCreateProducer(JmsTemplate.java:1114)
at org.springframework.jms.core.JmsTemplate.createProducer(JmsTemplate.java:1095)
at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:591)
at org.springframework.jms.core.JmsTemplate$3.doInJms(JmsTemplate.java:562)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:484)
... 89 more
It seems that Spring does not handle this case properly. I solved the issue by overriding JmsTemplate (code below) and handling the exception myself (cleaning up the session and the connection). I hope this helps.
@Override
public <T> T execute(SessionCallback<T> action, boolean startConnection) throws JmsException {
    try {
        return super.execute(action, startConnection);
    } catch (JmsException jmse) {
        logger.error("Exception while executing in JmsTemplate (will cleanup session & connection): ", jmse);
        // Release the cached JmsResourceHolder so a fresh connection/session is used on the next attempt.
        Object resourceHolder = TransactionSynchronizationManager.getResource(getConnectionFactory());
        if (resourceHolder instanceof JmsResourceHolder) {
            ((JmsResourceHolder) resourceHolder).closeAll();
        }
        throw jmse;
    }
}
This answer might help: https://stackoverflow.com/a/24494739/208934. In particular:
When using JMS you shouldn't really cache the JMS Session (or anything hanging off that, such as a Producer). The reason is that the JMS Session is the unit of work within JMS and so should be a short-lived object. In the Java EE world that JMS Session might also be enlisted with a global transaction, for example, and so needs to be scoped correctly.
It sounds like you're reusing the same session for multiple operations.
I have seen this problem in some of my tests, and telling Spring to use a pool of connections usually fixes it. If you were using ActiveMQ it could be done with the property spring.activemq.pooled=true; I'm not 100% sure how TIBCO accomplishes the same thing.
You might get the same results by defining a PooledConnectionFactory bean that overrides the default.
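As a rough sketch of the ActiveMQ variant mentioned above (assuming activemq-pool is on the classpath and the broker URL is illustrative; for TIBCO you would wrap its native ConnectionFactory in an equivalent pooling or caching factory instead):
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.springframework.context.annotation.Bean;

public class PooledJmsConfig {

    // Pools connections (and the sessions/producers behind them) so a broken
    // physical connection is replaced instead of being reused for every send.
    @Bean(destroyMethod = "stop")
    public PooledConnectionFactory pooledConnectionFactory() {
        ActiveMQConnectionFactory target = new ActiveMQConnectionFactory("tcp://localhost:61616");
        PooledConnectionFactory pooled = new PooledConnectionFactory(target);
        pooled.setMaxConnections(8); // size to your listener/sender concurrency
        return pooled;
    }
}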

How to increase Apache CXF timeout using XML configuration and JaxWsProxyFactoryBean?

I am using the Apache CXF implementation of JAX-WS.
My web service client is configured via Spring XML configuration using JaxWsProxyFactoryBean:
<bean id="myWSClient" class="my.package.MyWSClient"
factory-bean="clientFactory"
factory-method="create" />
<bean id="clientFactory" class="org.apache.cxf.jaxws.JaxWsProxyFactoryBean">
<property name="serviceClass" value="my.package.MyWSClient"/>
<property name="address" value="http://some.url"/>
</bean>
and later I am injecting it via:
@Resource(name = "myWSClient")
MyWSClient myWSClient;
How can I increase the timeout for MyWSClient?
To configure client timeouts using Spring configuration, use this:
<http-conf:conduit name="*.http-conduit">
<http-conf:client
ConnectionTimeout="600000"
ReceiveTimeout="600000"/>
</http-conf:conduit>
In this example, the connection and receive timeouts are both set to 600,000 ms (600 seconds).
Reference:
Apache CXF: Client HTTP Transport: Advanced configuration
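If you prefer to keep this out of XML, the same timeouts can be set programmatically on the HTTP conduit behind the proxy. A sketch, assuming the proxy object was created by the JaxWsProxyFactoryBean shown above:
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

public class CxfTimeoutConfigurer {

    // Sets connection/receive timeouts on the HTTP conduit underlying a CXF client proxy.
    public static void applyTimeouts(Object wsClientProxy) {
        Client client = ClientProxy.getClient(wsClientProxy);
        HTTPConduit conduit = (HTTPConduit) client.getConduit();

        HTTPClientPolicy policy = new HTTPClientPolicy();
        policy.setConnectionTimeout(600000); // 10 minutes to establish the connection
        policy.setReceiveTimeout(600000);    // 10 minutes to wait for the response
        conduit.setClient(policy);
    }
}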

Connecting to remote activemq instance running on docker container

I have two Docker containers, one running a Spring app (in Tomcat) and one running an ActiveMQ instance. When I try to connect to it from my Spring app, I get the following error. Only ActiveMQ is running on that container, and the port is properly exposed. I verified the IP address of the Docker container (shown below), and it is correct.
I'm not sure what could be causing this error at this point. Any thoughts would be appreciated.
ERROR [activemq.broker.BrokerService] Failed to start Apache ActiveMQ ([mybroker, ID:489af431756c-60313-1409695404227-0:1], java.io.IOException: Transport Connector could not be registered in JMX: Failed to bind to server socket: tcp://172.17.0.2:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600 due to: java.net.BindException: Cannot assign requested address)
You configured Spring to start a broker service on 172.17.0.2, which is the IP of the remote machine. Instead, you should configure Spring to connect to the existing broker on that machine. From the ActiveMQ documentation, using Spring's JmsTemplate:
<!-- a pooling based JMS provider -->
<bean id="jmsFactory"
class="org.apache.activemq.pool.PooledConnectionFactory"
destroy-method="stop">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL">
<value>tcp://activemq-host.local:61616</value>
</property>
</bean>
</property>
</bean>
<!-- Spring JMS Template -->
<bean id="myJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory">
<ref local="jmsFactory"/>
</property>
</bean>

Spring Integration - Gateway - Splitter - Aggregator with JMS

I am trying to use Spring Integration to implement a Gateway --> Splitter --> ServiceActivator --> Aggregator pattern in an event-driven fashion backed by JMS. I expect the service activator to be multi-threaded, and any of the endpoints may execute anywhere in a cluster, not necessarily on the originating server. I could get this working in a single JVM without using JMS (using SI channels), but I understand that SI channels will not help me scale horizontally, i.e. across multiple VMs.
Here's the configuration I have so far
<int:gateway id="transactionGateway" default-reply-channel="transaction-reply"
default-request-channel="transaction-request" default-reply-timeout="10000"
service-interface="com.test.abc.integration.service.ProcessGateway">
</int:gateway>
<int-jms:outbound-gateway id="transactionJMSGateway"
correlation-key="JMSCorrelationID" request-channel="transaction-request"
request-destination="transactionInputQueue" reply-channel="transaction-reply"
reply-destination="transactionOutputQueue" extract-reply-payload="true"
extract-request-payload="true">
<int-jms:reply-listener
max-concurrent-consumers="20" receive-timeout="5000"
max-messages-per-task="1" />
</int-jms:outbound-gateway>
<!-- Inbound Gateway for Splitter -->
<int-jms:inbound-gateway id="splitterGateWay"
request-destination="transactionInputQueue" request-channel="splitter-input"
reply-channel="splitter-output" concurrent-consumers="1"
default-reply-destination="processInputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
correlation-key="JMSCorrelationID" extract-request-payload="true" />
<!-- Inbound Gateway Invokes Service Activator and Sends response back to
the channel -->
<int-jms:inbound-gateway id="seriveActivatorGateway"
request-destination="processInputQueue" request-channel="process-input"
reply-channel="process-output" concurrent-consumers="1"
default-reply-destination="processOutputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
correlation-key="JMSCorrelationID" extract-request-payload="true"
max-messages-per-task="1" />
<int-jms:inbound-gateway id="aggregatorGateway"
request-destination="processOutputQueue" request-channel="aggregator-input"
reply-channel="aggregator-output" concurrent-consumers="1"
default-reply-destination="transactionOutputQueue"
max-concurrent-consumers="1" extract-reply-payload="true"
extract-request-payload="true" max-messages-per-task="1"
correlation-key="JMSCorrelationID" />
<int:splitter id="transactionSplitter" input-channel="splitter-input"
ref="processSplitter" output-channel="splitter-output">
</int:splitter>
<int:service-activator id="jbpmServiceActivator"
input-channel="process-input" ref="jbpmService" requires-reply="true"
output-channel="process-output">
</int:service-activator>
<int:aggregator id="transactionAggregator"
input-channel="aggregator-input" method="aggregate" ref="processAggregator"
output-channel="aggregator-output" message-store="processResultMessageStore"
send-partial-result-on-expiry="false">
</int:aggregator>
Before using gateways, I tried JMS-backed channels, and that approach wasn't successful either. The problem I am facing now is that the splitter replies straight back to the transactionOutputQueue. I tried playing around with jms:header-enricher without much success. I feel that my approach to the problem, or to SI, might have a fundamental flaw. Any help/guidance is highly appreciated.
Also, the code snippet I have provided above uses a simple in-memory aggregator. I understand that if I need to get this working across the cluster I might need a JDBC-backed aggregator, but for now I am trying to get this pattern working on a single VM.
Here's the updated working configuration based on Gary's comment:
<bean id="processOutputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.process.output" />
</bean>
<bean id="transactionOutputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.transaction.result" />
</bean>
<bean id="transactionInputQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="test.com.abc.transaction.input" />
</bean>
<int:gateway id="transactionGateway"
default-request-channel="transaction-request" default-reply-timeout="10000"
default-reply-channel="aggregator-output"
service-interface="com.test.abc.integration.service.ProcessGateway">
</int:gateway>
<int:splitter id="transactionSplitter" input-channel="transaction-request"
ref="processSplitter" output-channel="splitter-output">
</int:splitter>
<int-jms:outbound-gateway id="splitterJMSGateway"
correlation-key="JMSCorrelationID" request-channel="splitter-output"
request-destination="processInputQueue" reply-channel="aggregator-input"
reply-destination="processOutputQueue" extract-request-payload="true"
extract-reply-payload="true">
<int-jms:reply-listener
max-concurrent-consumers="20" receive-timeout="5000" />
</int-jms:outbound-gateway>
<!-- Inbound Gateway Invokes Service Activator and Sends response back to
the channel -->
<int-jms:inbound-gateway id="seriveActivatorGateway"
request-destination="processInputQueue" request-channel="process-input"
reply-channel="process-output" default-reply-destination="processOutputQueue"
concurrent-consumers="5" max-concurrent-consumers="10"
extract-reply-payload="true" correlation-key="JMSCorrelationID"
extract-request-payload="true" max-messages-per-task="1" />
<int:service-activator id="jbpmServiceActivator"
input-channel="process-input" ref="jbpmService" requires-reply="true"
output-channel="process-output">
</int:service-activator>
<int:aggregator id="transactionAggregator"
input-channel="aggregator-input" ref="processAggregator"
output-channel="aggregator-output" message-store="processResultMessageStore"
send-partial-result-on-expiry="false">
</int:aggregator>
<bean id="processResultMessageStore"
class="org.springframework.integration.store.SimpleMessageStore" />
<bean id="processResultMessageStoreReaper"
class="org.springframework.integration.store.MessageGroupStoreReaper">
<property name="messageGroupStore" ref="processResultMessageStore" />
<property name="timeout" value="5000" />
</bean>
<task:scheduled-tasks>
<task:scheduled ref="processResultMessageStoreReaper"
method="run" fixed-rate="1000" />
</task:scheduled-tasks>
<int:logging-channel-adapter id="logger"
level="DEBUG" log-full-message="true" />
<int-stream:stdout-channel-adapter
id="stdoutAdapter" channel="logger" />
I limited the JMS pipeline to the service activator only, which is what I originally wanted.
The only question I have with the above approach: do I need to have my aggregator backed by a database even if I use this across multiple VMs, given that the JMS gateway in front of it makes sure it only receives messages with a valid correlation ID?
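For reference, a database-backed message store would look roughly like this. This is only a sketch, assuming a DataSource bean is available; note that the exact package of JdbcMessageStore differs between Spring Integration versions:
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.jdbc.JdbcMessageStore;

public class AggregatorStoreConfig {

    // A database-backed message store: aggregator state is shared across JVMs
    // and survives a restart, unlike SimpleMessageStore.
    @Bean
    public JdbcMessageStore processResultMessageStore(DataSource dataSource) {
        return new JdbcMessageStore(dataSource);
    }
}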
Regards,
You probably don't need to use JMS between every component. However, we have lots of test cases for chained gateways like this, and they all work fine.
Something must be wired up incorrectly. Since you didn't show your full configuration, it's hard to speculate.
Be sure to use the latest version (2.2.4), turn on DEBUG logging, and follow a message through the flow; as long as your message payload is identifiable across JMS boundaries, it should be easy to figure out where things go awry.
