How to wait for data in Spring Integration TCP Server - Java

My TCP server built using Spring Integration works great. I use ByteArrayLengthHeaderSerializer as the serializer.
Once in a while, client data arrives very slowly, which makes the server respond very slowly.
I would like to wait a maximum of 5 seconds to read each byte of data from the client. If a byte does not arrive within 5 seconds, I would like to send a NAK.
How do I set this 5-second timeout? Where should it be set?
Do I need to customize the serializer?
Here is my spring context:
<int-ip:tcp-connection-factory id="crLfServer"
type="server"
port="${availableServerSocket}"
single-use="true"
so-timeout="10000"
using-nio="false"
serializer="connectionSerializeDeserialize"
deserializer="connectionSerializeDeserialize"
so-linger="2000"/>
<bean id="connectionSerializeDeserialize" class="org.springframework.integration.ip.tcp.serializer.ByteArrayLengthHeaderSerializer"/>
<int-ip:tcp-inbound-gateway id="gatewayCrLf"
connection-factory="crLfServer"
request-channel="serverBytes2StringChannel"
error-channel="errorChannel"
reply-timeout="10000"/> <!-- reply-timeout works on inbound-gateway -->
<int:channel id="toSA" />
<int:service-activator input-channel="toSA"
ref="myService"
method="prepare"/>
<int:object-to-string-transformer id="serverBytes2String"
input-channel="serverBytes2StringChannel"
output-channel="toSA"/>
<int:transformer id="errorHandler"
input-channel="errorChannel"
expression="payload.failedMessage.payload + ':' + payload.cause.message"/>
Thank you

You would need a custom deserializer; by default, when the read times out (after the so-timeout), we close the socket. You would have to catch the timeout and return a partial message carrying some information that tells the downstream flow to return the NAK.
The deserializer does not have access to the connection, so it cannot send the NAK itself.
You could do that in a custom TcpMessageMapper subclass, though; override toMessage().
That said, your solution might be brittle unless you close the socket anyway, because the stream may still contain some data from the previous message; although with single-use="true" I assume you are only sending one message per socket.
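To illustrate the deserializer side of that idea, here is a minimal sketch, assuming the length-header framing from the question; the class name and the TIMED_OUT marker are purely illustrative, and the downstream flow would have to recognize the marker and reply with the NAK:
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;

import org.springframework.integration.ip.tcp.serializer.ByteArrayLengthHeaderSerializer;

public class TimeoutTolerantDeserializer extends ByteArrayLengthHeaderSerializer {

    // Marker payload handed to the flow instead of letting the socket be closed on a timeout.
    static final byte[] TIMED_OUT = "TIMED_OUT".getBytes();

    @Override
    public byte[] deserialize(InputStream inputStream) throws IOException {
        try {
            return super.deserialize(inputStream);
        }
        catch (SocketTimeoutException e) {
            // Swallow the timeout and return the marker; a downstream router or
            // service activator can detect it and send the NAK reply.
            return TIMED_OUT;
        }
    }
}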
EDIT
@SpringBootApplication
public class So40408085Application {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(So40408085Application.class, args);
        context.getBean("toTcp", MessageChannel.class).send(new GenericMessage<>("foo"));
        Thread.sleep(5000);
        context.close();
    }

    @Bean
    public TcpNetServerConnectionFactory server() {
        TcpNetServerConnectionFactory server = new TcpNetServerConnectionFactory(1234);
        server.setSoTimeout(1000);
        server.setMapper(new TimeoutMapper()); // use the 'mapper' attribute in XML
        return server;
    }

    @Bean
    public TcpInboundGateway inGate() {
        TcpInboundGateway inGate = new TcpInboundGateway();
        inGate.setConnectionFactory(server());
        inGate.setRequestChannelName("inChannel");
        return inGate;
    }

    @ServiceActivator(inputChannel = "inChannel")
    public String upCase(byte[] in) {
        return new String(in).toUpperCase();
    }

    @Bean
    public TcpNetClientConnectionFactory client() {
        TcpNetClientConnectionFactory client = new TcpNetClientConnectionFactory("localhost", 1234);
        client.setSerializer(new ByteArrayLfSerializer()); // so the server will time out - it's expecting CRLF
        return client;
    }

    @Bean
    @ServiceActivator(inputChannel = "toTcp")
    public TcpOutboundGateway out() {
        TcpOutboundGateway outGate = new TcpOutboundGateway();
        outGate.setConnectionFactory(client());
        outGate.setOutputChannelName("reply");
        return outGate;
    }

    @ServiceActivator(inputChannel = "reply")
    public void reply(byte[] in) {
        System.out.println(new String(in));
    }

    public static class TimeoutMapper extends TcpMessageMapper {

        @Override
        public Message<?> toMessage(TcpConnection connection) throws Exception {
            try {
                return super.toMessage(connection);
            }
            catch (SocketTimeoutException e) {
                connection.send(new GenericMessage<>("You took too long to send me data, sorry"));
                connection.close();
                return null;
            }
        }
    }
}

Related

Dynamically setting host for Spring AMQP and RabbitMQ on Spring Boot

I have a problem: I do not know how to set the host dynamically and perform RPC operations against different hosts.
Here is the situation:
I have multiple RabbitMQ brokers running on different servers and networks (e.g. 192.168.1.0/24, 192.168.2.0/24).
The desired behavior is that I have a list of IP addresses, and I will perform an RPC against each of them.
So, for each entry in the IP address list, I want to perform a convertSendAndReceive, process the reply, and so on.
I tried some code from the documentation, but it does not seem to work: even an invalid address (one that does not have RabbitMQ running, or does not even exist on the network, for example 1.1.1.1) gets received by a valid RabbitMQ broker (running on 192.168.1.1, for example).
Note: I can successfully perform an RPC call against a correct address; however, I can also successfully perform an RPC call against an invalid address, which I am not supposed to be able to do.
Does anyone have any idea about this?
Here is my source:
TaskSchedulerConfiguration.java
@Configuration
@EnableScheduling
public class TaskSchedulerConfiguration {

    // Logger declaration assumed; it is not shown in the original post.
    private static final Logger logger = LoggerFactory.getLogger(TaskSchedulerConfiguration.class);

    @Autowired
    private IpAddressRepo ipAddressRepo;

    @Autowired
    private RemoteProcedureService remote;

    @Scheduled(fixedDelayString = "5000", initialDelay = 2000)
    public void scheduledTask() {
        ipAddressRepo.findAll().stream()
            .forEach(ipaddress -> {
                try {
                    remote.setIpAddress(ipaddress);
                    remote.doSomeRPC();
                } catch (Exception e) {
                    logger.debug("Unable to connect to licenser server: {}", ipaddress);
                    logger.debug(e.getMessage(), e);
                }
            });
    }
}
RemoteProcedureService.java
@Service
public class RemoteProcedureService {

    // These two fields are assumed; they are referenced below but not shown in the original post.
    private final CachingConnectionFactory factory = new CachingConnectionFactory();

    @Autowired
    private LicenseVisualizationProperties prop;

    @Autowired
    private RabbitTemplate template;

    @Autowired
    private DirectExchange exchange;

    public boolean doSomeRPC() throws JsonProcessingException {
        // this.factory.getHost() is passed so the other side can report which host it saw;
        // at this point the other side receives the invalid IP address, which it supposedly
        // should not receive at all
        boolean response = (Boolean) template.convertSendAndReceive(exchange.getName(), "rpc", this.factory.getHost());
        return response;
    }

    public void setIpAddress(String host) {
        factory.setHost(host);
        factory.setCloseTimeout(prop.getRabbitMQCloseConnectTimeout());
        factory.setPort(prop.getRabbitMQPort());
        factory.setUsername(prop.getRabbitMQUsername());
        factory.setPassword(prop.getRabbitMQPassword());
        template.setConnectionFactory(factory);
    }
}
AmqpConfiguration.java
@Configuration
public class AmqpConfiguration {

    public static final String topicExchangeName = "testExchange";

    public static final String queueName = "rpc";

    @Autowired
    private LicenseVisualizationProperties prop;

    // Commented this out since it would only be assigned once;
    // I need to set it dynamically in order to send to different hosts,
    // so I moved it to RemoteProcedureService.java, but it never worked.
    // @Bean
    // public ConnectionFactory connectionFactory() {
    //     CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    //     connectionFactory.setCloseTimeout(prop.getRabbitMQCloseConnectTimeout());
    //     connectionFactory.setPort(prop.getRabbitMQPort());
    //     connectionFactory.setUsername(prop.getRabbitMQUsername());
    //     connectionFactory.setPassword(prop.getRabbitMQPassword());
    //     return connectionFactory;
    // }

    @Bean
    public DirectExchange exchange() {
        return new DirectExchange(topicExchangeName);
    }
}
UPDATE 1
It seems that, during the loop, once a valid IP is set on the CachingConnectionFactory, all succeeding iterations of the loop, regardless of whether the IP is valid or invalid, are received by the first valid IP set on the CachingConnectionFactory.
UPDATE 2
I found out that once it has successfully established a connection, it will not create a new one. How do you force RabbitTemplate to establish a new connection?
It's a rather strange use case and won't perform very well; you would be better off with a pool of connection factories and templates.
However, to answer your question:
Call resetConnection() to close the connection.
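For example, a minimal sketch of the setIpAddress() method from the question with that call added (the field names are those from the post; the reset is the only change):
public void setIpAddress(String host) {
    // Close the cached connection first so the next operation opens a fresh
    // connection against the new host instead of reusing the old one.
    factory.resetConnection();
    factory.setHost(host);
    factory.setCloseTimeout(prop.getRabbitMQCloseConnectTimeout());
    factory.setPort(prop.getRabbitMQPort());
    factory.setUsername(prop.getRabbitMQUsername());
    factory.setPassword(prop.getRabbitMQPassword());
    template.setConnectionFactory(factory);
}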

TCP Socket Server setup for receive/process/reply

This is a new question following up on this older question and answer (specifically the comment that says "don't comment on old answers, ask a new question"), as well as these examples in GitHub.
I know the answer and examples are minimal working "trivial examples", but I don't know enough about how things work in Spring (or should work) to understand how to decompose those generic, trivial examples into separate servers and clients that suit my purpose. I currently have a working Spring Boot daemon application that is a client to / calls on (without any Spring Integration) a legacy daemon application over a TCP socket connection. It's all working, running in production.
But now I am tasked with migrating the legacy daemon to Spring Boot too. So I only need to configure and set up a cached/pooled TCP connection "socket listener" on the server side. However, the "client parts" of the existing (self-contained) examples confuse me. In my case the "client side" (the existing Spring Boot daemon) is not going to change and is a separate app on a separate server; I only need to set up / configure the "server side" of the socket connection (the legacy daemon freshly migrated to Spring Boot).
I've copied this example configuration (exactly) into my legacy-migration project:
@EnableIntegration
@IntegrationComponentScan
@Configuration
public static class Config {

    @Value("${some.port}")
    private int port;

    @MessagingGateway(defaultRequestChannel = "toTcp")
    public interface Gateway {

        String viaTcp(String in);

    }

    @Bean
    @ServiceActivator(inputChannel = "toTcp")
    public MessageHandler tcpOutGate(AbstractClientConnectionFactory connectionFactory) {
        TcpOutboundGateway gate = new TcpOutboundGateway();
        gate.setConnectionFactory(connectionFactory);
        gate.setOutputChannelName("resultToString");
        return gate;
    }

    @Bean
    public TcpInboundGateway tcpInGate(AbstractServerConnectionFactory connectionFactory) {
        TcpInboundGateway inGate = new TcpInboundGateway();
        inGate.setConnectionFactory(connectionFactory);
        inGate.setRequestChannel(fromTcp());
        return inGate;
    }

    @Bean
    public MessageChannel fromTcp() {
        return new DirectChannel();
    }

    @MessageEndpoint
    public static class Echo {

        @Transformer(inputChannel = "fromTcp", outputChannel = "toEcho")
        public String convert(byte[] bytes) {
            return new String(bytes);
        }

        @ServiceActivator(inputChannel = "toEcho")
        public String upCase(String in) {
            return in.toUpperCase();
        }

        @Transformer(inputChannel = "resultToString")
        public String convertResult(byte[] bytes) {
            return new String(bytes);
        }

    }

    @Bean
    public AbstractClientConnectionFactory clientCF() {
        return new TcpNetClientConnectionFactory("localhost", this.port);
    }

    @Bean
    public AbstractServerConnectionFactory serverCF() {
        return new TcpNetServerConnectionFactory(this.port);
    }

}
...and the project will start on 'localhost' and "listen" on port 10000. But, when I connect to the socket from another local app and send some test text, nothing returns until I shut down the socket listening app. Only after the socket listening app starts shutting down does a response (the correct 'uppercased' result) go back to the sending app.
How do I get the "listener" to return a response to the "sender" normally, without shutting down the listener's server first?
Or can someone please provide an example that ONLY shows the server-side (hopefully annotation based) setup? (Or edit the example so the server and client are clearly decoupled?)
Samples usually contain both the client and server because it's easier that way. But there is nothing special about breaking apart the client and server sides. Here's an example using the Java DSL:
@SpringBootApplication
public class So60443538Application {

    public static void main(String[] args) {
        SpringApplication.run(So60443538Application.class, args);
    }

    @Bean
    public IntegrationFlow server() {
        return IntegrationFlows.from(Tcp.inboundGateway(Tcp.netServer(1234)))
                .transform(Transformers.objectToString()) // byte[] -> String
                .<String, String>transform(p -> p.toUpperCase())
                .get();
    }

}
@SpringBootApplication
public class So604435381Application {

    private static final Logger LOG = LoggerFactory.getLogger(So604435381Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So604435381Application.class, args);
    }

    @Bean
    public IntegrationFlow client() {
        return IntegrationFlows.from(Gate.class)
                .handle(Tcp.outboundGateway(Tcp.netClient("localhost", 1234)))
                .transform(Transformers.objectToString())
                .get();
    }

    @Bean
    @DependsOn("client")
    public ApplicationRunner runner(Gate gateway) {
        return args -> LOG.info(gateway.exchange("foo"));
    }

}

interface Gate {

    String exchange(String in);

}
2020-02-28 09:14:04.158 INFO 35974 --- [ main] com.example.demo.So604435381Application : FOO

RabbitListener does not pick up every message sent with AsyncRabbitTemplate

I am using a Spring-Boot project on Spring-Boot Version 1.5.4, with spring-boot-starter-amqp, spring-boot-starter-web-services and spring-ws-support v. 2.4.0.
So far, I have successfully created a @RabbitListener component which does exactly what it is supposed to do when a message is sent to the broker via rabbitTemplate.sendAndReceive(uri, message). I tried to see what would happen if I used AsyncRabbitTemplate for this, as it is possible that the message processing might take a while, and I don't want to block my application while waiting for a response.
The problem is: the first message I put in the queue is not even picked up by the listener. The callback just reports success with the published message instead of the returned message.
Listener:
@RabbitListener(queues = KEY_MESSAGING_QUEUE)
public Message processMessage(@Payload byte[] payload, @Headers Map<String, Object> headers) {
    try {
        byte[] resultBody = messageProcessor.processMessage(payload, headers);
        MessageBuilder builder = MessageBuilder.withBody(resultBody);
        if (resultBody.length == 0) {
            builder.setHeader(HEADER_NAME_ERROR_MESSAGE, "Error occurred during processing.");
        }
        return builder.build();
    } catch (Exception ex) {
        return MessageBuilder.withBody(EMPTY_BODY)
                .setHeader(HEADER_NAME_ERROR_MESSAGE, ex.getMessage())
                .setHeader(HEADER_NAME_STACK_TRACE, ex.getStackTrace())
                .build();
    }
}
When I execute my tests, one test fails and the second succeeds. The class is annotated with @RunWith(SpringJUnit4ClassRunner.class) and @SpringBootTest(classes = { Application.class, Test.TestConfiguration.class }) and has a @ClassRule of BrokerRunning.isRunningWithEmptyQueues(QUEUE_NAME).
TestConfiguration (inner class):
public static class TestConfiguration {

    @Bean // referenced in the tests as art
    public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames(QUEUE_NAME);
        return new AsyncRabbitTemplate(rabbitTemplate, container);
    }

    @Bean
    public MessageListener messageListener() {
        return new MessageListener();
    }

}
Tests:
@Test
public void shouldListenAndReplyToQueue() throws Exception {
    doReturn(RESULT_BODY)
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(new ListenableFutureCallback<Message>() {
        @Override
        public void onSuccess(Message result) { }

        @Override
        public void onFailure(Throwable ex) {
            throw new RuntimeException(ex);
        }
    });
    while (!pendingReply.isDone()) {}
    result = pendingReply.get();
    // assertions omitted
}
Test 2:
@Test
public void shouldReturnExceptionToCaller() throws Exception {
    doThrow(new SSLSenderInstantiationException("I am a message", new Exception()))
            .when(innerMock)
            .processMessage(any(byte[].class), anyMapOf(String.class, Object.class));
    Message msg = MessageBuilder
            .withBody(MESSAGE_BODY)
            .setHeader("header", "value")
            .setHeader("auth", "entication")
            .build();
    RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
    pendingReply.addCallback(/* same as above */);
    while (!pendingReply.isDone()) {}
    result = pendingReply.get();
    // assertions omitted
}
When I run both tests together, the test that is executed first fails, while the second call succeeds.
When I run both tests separately, both fail.
When I add an @Before method that uses the AsyncRabbitTemplate art to put any message into the queue, both tests MAY pass, or the second test MAY not pass, so in addition to being unexpected, the behaviour is inconsistent as well.
The interesting thing is that the callback passed to the method reports a success before the listener is invoked, and reports the sent message as the result.
The only class missing from this is the general configuration class, which is annotated with @EnableRabbit and has this content:
@Bean
public SimpleRabbitListenerContainerFactory simpleRabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrentConsumers(10);
    return factory;
}
Other Things I have tried:
specifically create AsyncRabbitTemplate myself, start and stop it manually before and after every message process -> both tests succeeded
increase / decrease receive timeout -> no effect
remove and change the callback -> no effect
explicitly created the queue again with an injected RabbitAdmin -> no effect
extracted the callback to a constant -> tests didn't even start correctly
As stated above, I used RabbitTemplate directly, which worked exactly as intended
If anyone has any ideas what is missing, I'd be very happy to hear.
You can't use the same queue for requests and replies...
@Bean // referenced in the tests as art
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames(QUEUE_NAME);
    return new AsyncRabbitTemplate(rabbitTemplate, container);
}
Will listen for replies on QUEUE_NAME, so...
RabbitMessageFuture pendingReply = art.sendAndReceive(QUEUE_NAME, msg);
...simply sends a message to itself. It looks like you intended...
RabbitMessageFuture pendingReply = art.sendAndReceive(KEY_MESSAGING_QUEUE, msg);
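For instance, a minimal sketch of wiring a distinct reply queue, assuming a queue name such as KEY_REPLY_QUEUE (the name is illustrative; any queue other than the request queue will do):
@Bean // referenced in the tests as art
public AsyncRabbitTemplate asyncRabbitTemplate(ConnectionFactory connectionFactory, RabbitTemplate rabbitTemplate) {
    SimpleMessageListenerContainer replyContainer = new SimpleMessageListenerContainer(connectionFactory);
    // Listen for replies on a queue that is different from the request queue.
    replyContainer.setQueueNames(KEY_REPLY_QUEUE);
    return new AsyncRabbitTemplate(rabbitTemplate, replyContainer);
}
With that in place, art.sendAndReceive(KEY_MESSAGING_QUEUE, msg) sends the request to the listener's queue while the reply comes back on KEY_REPLY_QUEUE.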

How to change the port used by tcp-inbound-gateway on the fly

Is there a way to change the port used by a tcp-inbound-gateway on the fly? I'd like to set the port and timeout used by the tcp-inbound-gateway based on configuration persisted in the database, and be able to change them on the fly without restarting the application. In order to do so, I decided to use the publish-subscribe pattern and extended the TcpInboundGateway class:
public class RuntimeInboundGateway extends TcpInboundGateway implements SettingsSubscriber {

    @Autowired
    private Settings settings;

    @PostConstruct
    public void subscribe() {
        settings.subscribe(this);
    }

    @Override
    public void onSettingsChanged(Settings settings) {
        this.stop();
        AbstractByteArraySerializer serializer = new ByteArrayLfSerializer();
        TcpNetServerConnectionFactory connectionFactory = new TcpNetServerConnectionFactory(settings.getPort());
        connectionFactory.setSerializer(serializer);
        connectionFactory.afterPropertiesSet();
        this.setConnectionFactory(connectionFactory);
        this.afterPropertiesSet();
        this.start();
    }
}
The settings object is a singleton bean; when it is changed, the TCP inbound gateway does indeed start listening on the new port, but it looks like it no longer sends inbound messages further along the flow. Here is an excerpt from the XML configuration:
<int-ip:tcp-connection-factory id="connFactory" type="server" port="${port}"
serializer="serializer"
deserializer="serializer"/>
<bean id="serializer" class="org.springframework.integration.ip.tcp.serializer.ByteArrayLfSerializer"/>
<bean id="inboundGateway" class="com.example.RuntimeInboundGateway">
<property name="connectionFactory" ref="connFactory"/>
<property name="requestChannel" ref="requestChannel"/>
<property name="replyChannel" ref="responseChannel"/>
<property name="errorChannel" ref="exceptionChannel"/>
<property name="autoStartup" value="true"/>
</bean>
There is a logging-channel-adapter in the configuration which logs any requests to the service without any issues until the settings are changed. After that it doesn't, and I see no messages received, even though I'm able to connect to the new port with telnet localhost <NEW_PORT>. Could somebody take a look and say how the desired behaviour can be achieved?
A quick look at your code indicated it should work ok, so I just wrote a quick Spring Boot app and it worked fine for me...
@SpringBootApplication
public class So40084223Application {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext ctx = SpringApplication.run(So40084223Application.class, args);
        Socket socket = SocketFactory.getDefault().createSocket("localhost", 1234);
        socket.getOutputStream().write("foo\r\n".getBytes());
        socket.close();
        QueueChannel queue = ctx.getBean("queue", QueueChannel.class);
        System.out.println(queue.receive(10000));
        ctx.getBean(MyInboundGateway.class).recycle(1235);
        socket = SocketFactory.getDefault().createSocket("localhost", 1235);
        socket.getOutputStream().write("fooo\r\n".getBytes());
        socket.close();
        System.out.println(queue.receive(10000));
        ctx.close();
    }

    @Bean
    public TcpNetServerConnectionFactory cf() {
        return new TcpNetServerConnectionFactory(1234);
    }

    @Bean
    public MyInboundGateway gate(TcpNetServerConnectionFactory cf) {
        MyInboundGateway gate = new MyInboundGateway();
        gate.setConnectionFactory(cf);
        gate.setRequestChannel(queue());
        return gate;
    }

    @Bean
    public QueueChannel queue() {
        return new QueueChannel();
    }

    public static class MyInboundGateway extends TcpInboundGateway implements ApplicationEventPublisherAware {

        private ApplicationEventPublisher applicationEventPublisher;

        @Override
        public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
            this.applicationEventPublisher = applicationEventPublisher;
        }

        public void recycle(int port) {
            stop();
            TcpNetServerConnectionFactory sf = new TcpNetServerConnectionFactory(port);
            sf.setApplicationEventPublisher(this.applicationEventPublisher);
            sf.afterPropertiesSet();
            setConnectionFactory(sf);
            afterPropertiesSet();
            start();
        }

    }

}
I would turn on DEBUG logging to see if it gives you any clues.
You also might want to explore the newer DSL dynamic flow registration instead. The tcp-dynamic-client sample shows how to use that technique to add/remove flow snippets on the fly. It's on the client side, but similar techniques can be used on the server side to register/unregister your gateway and connection factory.
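As a rough illustration of that dynamic-registration idea, here is a minimal sketch of registering (and later removing) a server-side flow at runtime with the Java DSL; the flow id and the hand-off to the question's requestChannel are assumptions, not taken from the sample:
@Autowired
private IntegrationFlowContext flowContext;

public void listenOn(int port) {
    IntegrationFlow flow = IntegrationFlows
            .from(Tcp.inboundGateway(Tcp.netServer(port)))
            .transform(Transformers.objectToString())
            .channel("requestChannel") // hand off to the existing request flow
            .get();
    // Register under a known id so the flow can be removed when the port changes.
    flowContext.registration(flow).id("tcpServer-" + port).register();
}

public void stopListening(int port) {
    flowContext.remove("tcpServer-" + port);
}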
The cause of the trouble was me. Since the deserializer is not specified in the code above, the default one is used, and it couldn't demarcate inbound messages from the input byte stream. Just one line, connectionFactory.setDeserializer(serializer);, solved the issue I spent a day on.
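For completeness, here is the onSettingsChanged() method from the question with that one line added (everything else is unchanged from the post):
@Override
public void onSettingsChanged(Settings settings) {
    this.stop();
    AbstractByteArraySerializer serializer = new ByteArrayLfSerializer();
    TcpNetServerConnectionFactory connectionFactory = new TcpNetServerConnectionFactory(settings.getPort());
    connectionFactory.setSerializer(serializer);
    // The missing line: without an explicit deserializer the default one is used
    // and it cannot demarcate the LF-terminated inbound messages.
    connectionFactory.setDeserializer(serializer);
    connectionFactory.afterPropertiesSet();
    this.setConnectionFactory(connectionFactory);
    this.afterPropertiesSet();
    this.start();
}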

Netty 4 - Outbound message at head of pipeline discarded

I am using Netty 4 RC1. I initialize my pipeline at the client side:
public class NodeClientInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel sc) throws Exception {
        // Frame encoding and decoding
        sc.pipeline()
            .addLast("logger", new LoggingHandler(LogLevel.DEBUG))
            // Business logic
            .addLast("handler", new NodeClientHandler());
    }
}
NodeClientHandler has the following relevant code:
public class NodeClientHandler extends ChannelInboundByteHandlerAdapter {

    private void sendInitialInformation(ChannelHandlerContext c) {
        c.write(0x05);
    }

    @Override
    public void channelActive(ChannelHandlerContext c) throws Exception {
        sendInitialInformation(c);
    }
}
I connect to the server using:
public void connect(final InetSocketAddress addr) {
    Bootstrap bootstrap = new Bootstrap();
    ChannelFuture cf = null;
    try {
        // set up the pipeline
        bootstrap.group(new NioEventLoopGroup())
            .channel(NioSocketChannel.class)
            .handler(new NodeClientInitializer());
        // connect
        bootstrap.remoteAddress(addr);
        cf = bootstrap.connect();
        cf.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture op) throws Exception {
                logger.info("Connect to {}", addr.toString());
            }
        });
        cf.channel().closeFuture().syncUninterruptibly();
    } finally {
        bootstrap.shutdown();
    }
}
So, what I basically want to do is send some initial information from the client to the server after the channel is active (i.e. the connect was successful). However, when doing the c.write() I get the following warning and no packet is sent:
WARNING: Discarded 1 outbound message(s) that reached at the head of the pipeline. Please check your pipeline configuration.
I know there is no outbound handler in my pipeline, but I didn't think I needed one (at this point), and I thought Netty would take care of transporting the ByteBuffer over to the server. What am I doing wrong here in the pipeline configuration?
Netty only handles messages of type ByteBuf by default when you write to the Channel, so you need to wrap your data in a ByteBuf. See also the Unpooled class with its static helpers for creating ByteBuf instances.
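A minimal sketch of what sendInitialInformation() could look like (written against the final Netty 4.x API; the exact write/flush split differed slightly in the early release candidates):
private void sendInitialInformation(ChannelHandlerContext c) {
    // Wrap the byte in a ByteBuf; a bare Integer reaches the head of the
    // pipeline unencoded and is discarded.
    ByteBuf buf = Unpooled.buffer(1).writeByte(0x05);
    c.writeAndFlush(buf);
}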
