How does Netty ChannelFuture work? - java

I've read the Netty guide, but it does not explain much about ChannelFuture. I find ChannelFuture a complex idea to apply.
What I am trying to do is write a message to a context after its initial response, which differs from the typical request/response flow. I need a flow like this:
Client sends a request -> Server (Netty)
Server sends a response with ctx.writeAndFlush(msg);
Server sends some more messages to that ctx after step 2 is complete.
The problem is that if I do something like this, the second write is not sent:
ctx.writeAndFlush(response);
Message newMsg = createMessage();
ctx.writeAndFlush(newMsg); // will not be sent to the client
Then I tried using ChannelFuture. It works, but I am not sure whether it is logically correct:
ChannelFuture msgIsSent = ctx.writeAndFlush(response);
if (msgIsSent.isDone())
{
    Message newMsg = createMessage();
    ctx.writeAndFlush(newMsg); // this works
}
Or should I use a ChannelFutureListener instead?
ChannelFuture msgIsSent = ctx.writeAndFlush(response);
msgIsSent.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future)
    {
        Message newMsg = createMessage();
        ctx.writeAndFlush(newMsg);
    }
});
Will this also work?
Which one is the best-practice approach? Is there any potential problem with using method 2?

Of course, this also depends on your "protocol" (for instance, if you use HTTP, sending two answers for the same request is not supported by the HTTP protocol). But let's say your protocol allows you to send multiple response parts:
Netty adds the messages to send to the pipeline, respecting their order.
So in your first example, I'm a bit surprised it does not work:
ctx.writeAndFlush(response);
Message newMsg = createMessage();
ctx.writeAndFlush(newMsg); // should send the message
However, it could be caused by your protocol. For instance, this could happen:
response is in the message queue to send
flush is not yet done
newMsg is in the message queue to send
the flush now comes, but the protocol does not support 2 messages, so only the first one is sent
So if your protocol requires that the first message is already sent, then you have to wait for the first one, doing something like:
ctx.writeAndFlush(response).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        if (future.isSuccess()) {
            Message newMsg = createMessage();
            ctx.writeAndFlush(newMsg);
        } else {
            // an error occurred, do perhaps something else
        }
    }
});
So go with your last proposal (I just didn't assign the ChannelFuture to a variable but used the result of writeAndFlush directly; both are equivalent). Just take care of the case where operationComplete does not mean the operation was successful, which is why you should check isSuccess() rather than isDone() inside the listener.

Try this:
ctx.channel().writeAndFlush(response);
Message newMsg = createMessage();
ctx.channel().writeAndFlush(newMsg);
Channel.write() always starts from the tail of the ChannelPipeline.
ChannelHandlerContext.write() starts from the current position of the ChannelHandler.
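To illustrate the difference, here is a minimal sketch (the handler name is made up, and an outbound encoder is assumed to be added to the pipeline after this handler, i.e. closer to the tail):
public class WriteOriginExample extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Outbound traversal runs from the starting point towards the head of the pipeline.
        ctx.write(msg);              // starts at this handler's position: an encoder added after it is skipped
        // ctx.channel().write(msg); // would start at the tail instead: that encoder is applied
        ctx.flush();
    }
}
If your outbound encoders sit between this handler and the tail, only the channel-level write passes through them.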

#2 looks better, but make sure to test whether the operation was successful; if not, use future.cause() to access the exception. Not that it will change the functionality, but you can shorten the code by adding the listener directly on the result of the write call, i.e. you don't need to declare the future itself since it will be provided in the callback.
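A sketch of that shortened form (createMessage() is the asker's helper from above):
ctx.writeAndFlush(response).addListener((ChannelFutureListener) future -> {
    if (future.isSuccess()) {
        ctx.writeAndFlush(createMessage());
    } else {
        // the write failed; inspect the cause and possibly close the channel
        future.cause().printStackTrace();
    }
});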

Related

How to know if a message has been ack-ed / nack-ed?

I'm trying to find out when a message has been accepted (ack) or not (nack) using RabbitMQ and Spring Boot.
I want to send a message to a queue (via an exchange) and check whether the queue has accepted the message. Actually, I want to send to two different queues, but that is not important; I'm assuming that if it works for one of them, it will work for the other too.
So I've tried something like this using CorrelationData:
public boolean sendMessage(...) {
    CorrelationData cd = new CorrelationData();
    this.rabbitTemplate.convertAndSend(exchange, routingKey, message, cd);
    try {
        return cd.getFuture().get(3, TimeUnit.SECONDS).isAck();
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        e.printStackTrace();
        return false;
    }
}
I thought the line cd.getFuture().get(3, TimeUnit.SECONDS).isAck() should return false if the message has not been acked into the queue. But it is always true, even if the routingKey doesn't exist.
So I'm assuming this piece of code only checks that the message has been sent to the exchange, and the exchange says "yes, I've received the message; it has not been routed, but I've received it".
I've looked for other ways in the RabbitMQ/Spring documentation, but I can't find one.
To explain a little more, what I want is: in my Spring Boot code I receive a message. This message has to be sent to two other queues/exchanges, but it can't be removed from the current queue (i.e. acked) until the other two queues confirm the ack.
I have manual ack enabled, and as a little pseudo-code I have this:
@RabbitListener(queues = {queue})
public void receiveMessageFromDirect(Message message, Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) long tag) {
    boolean sendQueue1 = sendMessage(...);
    boolean sendQueue2 = sendMessage(...);
    if (sendQueue1 && sendQueue2) {
        // both messages have been read; now I can ack this message
        channel.basicAck(tag, false);
    } else {
        // nacked; I can't remove the message until both queues ack the message
        channel.basicNack(tag, false, true);
    }
}
I've tested this structure and, even if the queues don't exist, the values sendQueue1 and sendQueue2 are always true.
The confirm is true even for unroutable messages (I am not entirely sure why).
You need to enable returned messages (and check that the returned message in the CorrelationData is null after the future completes - correlationData.getReturnedMessage()). If it's not null, the message wasn't routable to any queue.
You only get nacks if there is a bug in the broker, or if you are using a queue with x-max-length and overflow behavior reject-publish.
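A minimal sketch of that check, as I understand the suggestion (it assumes publisher confirms and returns are enabled, e.g. spring.rabbitmq.publisher-confirm-type=correlated and spring.rabbitmq.publisher-returns=true, plus rabbitTemplate.setMandatory(true); getReturnedMessage() is the accessor in older Spring AMQP versions, newer ones expose getReturned()):
public boolean sendMessage(Object message, String exchange, String routingKey) {
    CorrelationData cd = new CorrelationData(UUID.randomUUID().toString());
    this.rabbitTemplate.convertAndSend(exchange, routingKey, message, cd);
    try {
        boolean acked = cd.getFuture().get(3, TimeUnit.SECONDS).isAck();
        // An ack only means the exchange accepted the message; a non-null
        // returned message means it could not be routed to any queue.
        return acked && cd.getReturnedMessage() == null;
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        return false;
    }
}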

Using ActiveMQ to handle errors during webservice call

I have a Java process which listens for messages from a queue hosted by ActiveMQ and calls web services if the received status is COMPLETE.
I was also thinking of handling the web service calls with ActiveMQ as well. Is there a way I could make the best use of ActiveMQ, maybe with another queue?
This might help me handle the scenario where one or more web service calls fail on the first attempt, but I'm trying to work out what I could do to achieve something like this.
Do I need to forward the web service parameters to an ActiveMQ queue and then listen for something?
@Autowired
private JavaMailSender javaMailSender;

// Working code with JMS 2.0
@JmsListener(destination = "MessageProducer")
public void processBrokerQueues(String message) throws DaoException {
    ...
    if (receivedStatus.equals("COMPLETE")) {
        // Do something inside COMPLETE
        // Need to call 8-10 webservices (only 2 shown below for brevity)
        ProcessBroker obj = new ProcessBroker();
        ProcessBroker obj1 = new ProcessBroker();
        // Calling webservice 1
        try {
            System.out.println("Testing 1 - Send Http POST request #1");
            obj.sendHTTPPOST1();
        } finally {
            obj.close();
        }
        // Calling webservice 2
        try {
            System.out.println("Testing 2 - Send Http POST request #2");
            obj1.sendHTTPPOST2();
        } finally {
            obj1.close();
        }
    } else {
        // Do something
    }
}
What I'm looking for:
Basically, if it is possible to create an additional message queue, I would submit 8 messages to it, one corresponding to each web service endpoint, and have a queue listener dedicated to that queue. If a message could not be delivered successfully, it could be put back on the queue for later delivery.
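A minimal sketch of that idea with Spring JMS (the queue name "webservice.calls" and the callEndpoint() helper are assumptions, and the listener container is assumed to be transacted or using client acknowledgement so that a thrown exception triggers redelivery):
@Autowired
private JmsTemplate jmsTemplate;

// Fan out: one JMS message per web service endpoint.
public void enqueueWebServiceCalls(List<String> endpointUrls) {
    for (String url : endpointUrls) {
        jmsTemplate.convertAndSend("webservice.calls", url);
    }
}

// Dedicated listener: a failed call throws, so the broker redelivers the message later.
@JmsListener(destination = "webservice.calls")
public void processWebServiceCall(String endpointUrl) {
    try {
        callEndpoint(endpointUrl); // hypothetical helper doing the HTTP POST
    } catch (Exception e) {
        throw new RuntimeException("Call to " + endpointUrl + " failed; it will be redelivered", e);
    }
}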

Non-blocking reverse proxy with netty

I'm trying to write a non-blocking proxy with netty 4.1. I have a "FrontHandler" which handles incoming connections, and then a "BackHandler" which handles outgoing ones. I'm following the HexDumpProxyHandler (https://github.com/netty/netty/blob/ed4a89082bb29b9e7d869c5d25d6b9ea8fc9d25b/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L67)
In this code I have found:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() { ...
This means the incoming message is only written if the outbound client connection is already ready, which is obviously not ideal in an HTTP proxy case, so I am thinking about what would be the best way to handle it.
I am wondering whether disabling auto-read on the front-end connection (and only triggering reads manually once the outgoing client connection is ready) is a good option. I could then re-enable autoRead on the child socket in the "channelActive" event of the backend handler. However, I am not sure how many messages I would get in the handler for each "read()" invocation (using HttpDecoder, I assume I would get the initial HttpRequest, but I'd really like to avoid getting the subsequent HttpContent / LastHttpContent messages until I manually trigger read() again and re-enable autoRead on the channel).
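For reference, the auto-read toggle itself is just a flag on the ChannelConfig; a rough sketch of that idea (connectToBackend() is a hypothetical helper returning the backend connect future), leaving aside the decoder caveat discussed further down:
// In the frontend handler's channelActive: stop reading until the backend is connected.
ctx.channel().config().setAutoRead(false);
connectToBackend().addListener((ChannelFutureListener) f -> {
    if (f.isSuccess()) {
        ctx.channel().config().setAutoRead(true); // resume reading from the client
    } else {
        ctx.close();
    }
});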
Another option would be to use a Promise to get the Channel from the client ChannelPool:
private void setCurrentBackend(HttpRequest request) {
    pool.acquire(request, backendPromise);
    backendPromise.addListener((FutureListener<Channel>) future -> {
        Channel c = future.get();
        if (!currentBackend.compareAndSet(null, c)) {
            pool.release(c);
            throw new IllegalStateException();
        }
    });
}
and then do the copying from input to output through that promise, e.g.:
private void handleLastContent(ChannelHandlerContext frontCtx, LastHttpContent lastContent) {
    doInBackend(c -> {
        c.writeAndFlush(lastContent).addListener((ChannelFutureListener) future -> {
            if (future.isSuccess()) {
                future.channel().read();
            } else {
                pool.release(c);
                frontCtx.close();
            }
        });
    });
}

private void doInBackend(Consumer<Channel> action) {
    Channel c = currentBackend.get();
    if (c == null) {
        backendPromise.addListener((FutureListener<Channel>) future -> action.accept(future.get()));
    } else {
        action.accept(c);
    }
}
but I'm not sure how good it is to keep the promise there forever and do all the writes from "front" to "back" by adding listeners to it. I'm also not sure how to instantiate the promise so that the operations are performed on the right thread... right now I'm using:
backendPromise = group.next().<Channel> newPromise(); // bad
// or
backendPromise = frontCtx.channel().eventLoop().newPromise(); // OK?
(where group is the same EventLoopGroup used in the ServerBootstrap of the frontend).
If they're not handled on the right thread, I assume it could be problematic to have the "else { }" optimization in the "doInBackend" method, which avoids using the Promise and writes to the channel directly.
The no-autoread approach doesn't work by itself, because the HttpRequestDecoder creates several messages even if only one read() was performed.
I have solved it by using chained CompletableFutures.
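A minimal sketch of what chaining CompletableFutures could look like here (my reading of it, not the poster's actual code; backendPromise and pool are the fields from the question, and the handler is assumed not to be shared, so tail is only touched from the frontend event loop):
// Completed once the pooled backend channel has been acquired.
private final CompletableFuture<Channel> backendFuture = new CompletableFuture<>();
// Tail of the chain; only mutated from the frontend event loop, so no extra locking is needed.
private CompletableFuture<Channel> tail = backendFuture;

private void setCurrentBackend(HttpRequest request) {
    pool.acquire(request, backendPromise);
    backendPromise.addListener((FutureListener<Channel>) f -> {
        if (f.isSuccess()) {
            backendFuture.complete(f.getNow());
        } else {
            backendFuture.completeExceptionally(f.cause());
        }
    });
}

private void doInBackend(Consumer<Channel> action) {
    // Each write chains off the previous one, so front-to-back writes keep their
    // order even when they arrive before the backend channel is ready.
    tail = tail.thenApply(c -> { action.accept(c); return c; });
}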
I have worked on a similar proxy application based on the MQTT protocol; it was basically used to build a real-time chat application. The application I had to design, however, was asynchronous in nature, so I naturally did not face this problem: if
outboundChannel.isActive() == false
then I can simply keep the messages in a queue or a persistent DB and process them once the outboundChannel is up. However, since you are talking about an HTTP application, the flow is synchronous, meaning the client cannot keep sending packets until the outboundChannel is up and running. The option you suggest is that a packet will only be read once the channel is active, and that you handle the message reads manually by disabling auto-read in the ChannelConfig.
However, what I would suggest is that you check whether the outboundChannel is active or not. If the channel is active, forward the packet for processing. If it is not, reject the packet by sending back an error response, similar to a 404.
Along with this, you should configure your client to keep retrying the packets at certain intervals, and handle accordingly what needs to be done if the channel takes too long to become active and readable. Manually handling channelRead is generally not preferred and is an anti-pattern; you should let Netty handle that for you in the most efficient way.
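A minimal sketch of that check in the frontend handler (assuming an HttpServerCodec sits on the frontend pipeline; a 503 is used here instead of a 404, since the backend simply isn't ready yet, but the status is your choice):
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg);
    } else {
        ReferenceCountUtil.release(msg); // don't leak the unforwarded message
        FullHttpResponse reject = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.SERVICE_UNAVAILABLE);
        ctx.writeAndFlush(reject).addListener(ChannelFutureListener.CLOSE);
    }
}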

How to solve this typical Producer-Consumer scenario

I encountered an interesting and, I think, very common synchronization problem in my test code.
This is the test (it's a functional test that connects from the outside to the system); I run it via TestNG.
@Test
public void operationalClientConnected_sendGetUserSessionRequest_clientShallReceiveGetUserSessionResponse() {
    // GIVEN
    OperationalClientSimulator client = operationalClientHasEstablishedWebSocketConnection("ClientXY");
    // WHEN
    GetUserSessionRequest request = PojoRequestBuilder.newRequest(GetUserSessionRequest.class).build();
    client.sendRequest(request);
    // THEN
    assertThatClientReceivesResponse(client, GetUserSessionResponse.class, request.getCorrelationId(), request.getRequestId());
}
Basically I send a single request and wait for the correct response; this is what I want to verify in this test.
Behind assertThatClientReceivesResponse there is a Hamcrest matcher that looks like this:
@Override
protected boolean matchesSafely(final OperationalClientSimulator client) {
    Object awaitedMessage = client.awaitMessage(
        new Verification<Object>() {
            @Override
            public VerificationResult verify(final Object actual) {
                VerificationResult result = new VerificationResult();
                if (!_expectedResponseClass.isInstance(actual)) {
                    result.addMismatch("not of expected type", actual, _expectedResponseClass.getSimpleName());
                }
                // check more details of message ..
                return result;
            }
        }, _expectedTimeout);
    boolean matches = awaitedMessage != null;
    if (matches) {
        _messageCaptor.setActualMessage((T) awaitedMessage);
    }
    return matches;
}
Now to the interesting part, the synchronization in the OperationalClientSimulator class.
Two methods are of interest:
awaitMessage, which blocks until either a message that matches the given Verification is received or the timeout expires
onMessage, which is called for each message that is received (over a websocket connection)
Basically, what I want to achieve is having the test thread block on the awaitMessage method until either the correct message is received (via onMessage) or the specified timeout has elapsed.
public Object awaitMessage(final Verification<Object> verification, final long timeoutMillis) {
    // how to sync?
    return awaitedMessage; // or null
}

@Override
public void onMessage(final String message) {
    LOG.info("#Client {} <== received a message on websocket - {}", name, message);
    // how to sync?
}
About the test:
The test thread will almost always be faster and therefore has to wait until the response is received via the awaitMessage method.
There can be very rare cases where the expected message is received before the test thread checks for it (this basically means I have to save every received message).
In this specific test case only a handful of messages are received (some heartbeat messages, the actual response and a notification), but in other cases there can be hundreds of messages which I need to inspect to find the expected message(s).
I was thinking about different solutions for synchronizing here:
The simplest, of course, would be to use the synchronized keyword, but I think there are neater ways to do this.
The onMessage method could simply write into a blocking queue and the test thread could consume from it, but here I don't know how to measure the timeout... can I use a CountDownLatch?
Maybe I can do a non-blocking solution where the producer (onMessage) writes into an array and the consumer reads until it reaches an index that is published by the producer (like the LMAX Disruptor).
I know this is test code and performance is not really an issue here; I am just thinking about how to solve this in a "nice" way... you know... because it's Christmas :-)
So the actual question here is: how do I "safely" wait, with a timeout, for the message I expect in my test? Safely here means that I never miss or lose a message because of concurrency issues, and that I also check whether the expected message was already received.
How should I synchronize between the test runner thread and the thread that calls the onMessage method in the OperationalClientSimulator when a message is received on the websocket connection?
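One minimal sketch (my suggestion, assuming your VerificationResult exposes some success accessor, called isSuccessful() here, and that parse() stands in for your existing message decoding): funnel every received message into a LinkedBlockingQueue and let awaitMessage poll it against a deadline. Nothing is lost because onMessage only appends, and the timeout is measured across the whole wait rather than per message:
private final BlockingQueue<Object> received = new LinkedBlockingQueue<>();

@Override
public void onMessage(final String message) {
    LOG.info("#Client {} <== received a message on websocket - {}", name, message);
    received.offer(parse(message)); // never blocks; the queue is unbounded
}

public Object awaitMessage(final Verification<Object> verification, final long timeoutMillis) {
    final long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
    try {
        while (true) {
            long remaining = deadline - System.nanoTime();
            if (remaining <= 0) {
                return null; // timed out
            }
            Object candidate = received.poll(remaining, TimeUnit.NANOSECONDS);
            if (candidate != null && verification.verify(candidate).isSuccessful()) {
                return candidate;
            }
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return null;
    }
}
Non-matching messages are dropped here; if later assertions need to revisit them, copy them into a side list instead of discarding them.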

Mock XMPP Server with Mina works only part of the time

I've created a mock XMPP server that processes PLAIN authentication stanzas. I'm able to use Pidgin and go through the entire session creation, to the point where Pidgin thinks the user is on an actual XMPP server and is sending regular pings.
However, it seems like not all messages are processed correctly, and when I do get a successful login it is just luck. Maybe 1/10th of the time I actually get connected. The other times it seems like Pidgin missed a message, or I dumped messages too fast onto the transport.
If I enable Pidgin's XMPP Console plugin, the first connection is ALWAYS successful, but a second user fails to make it through, typically dying when Pidgin requests Service Discovery.
My Mina code is something like this:
try
{
    int PORT = 20600;
    IoAcceptor acceptor = null;
    acceptor = new NioSocketAcceptor();
    acceptor.getFilterChain().addFirst("codec", new ProtocolCodecFilter(new ProtocolCodecFactoryImpl()));
    acceptor.getFilterChain().addLast("executor", new ExecutorFilter(IoEventType.MESSAGE_RECEIVED));
    acceptor.setHandler(new SimpleServerHandler());
    acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 10);
    acceptor.bind(new InetSocketAddress(PORT));
}
catch (Exception ex)
{
    System.out.println(ex.getMessage());
}
and the SimpleServerHandler is responsible for message/stanza processing and session creation. The messageReceived function looks like:
@Override
public void messageReceived(IoSession session, Object msg) throws Exception
{
    String str = msg.toString();
    System.out.println("MESSAGE: " + str);
    process(session, str);
}
and finally, process is in charge of parsing the message out and writing the response. I do use synchronized on my write:
public void sessionWrite(IoSession session, String buf)
{
    synchronized (session)
    {
        WriteFuture future = session.write(buf);
    }
}
I have omitted my processing code for brevity, but it simply looks for certain pieces of data, crafts a response and calls sessionWrite(...).
My question is: will this pattern work? And if not, should I consider putting received messages in a queue and simply processing that queue from, say, a Timer?
It turns out Pidgin would send two IQ stanzas, but I wasn't handling them correctly. My decoder now determines the end of a stanza and only writes a complete stanza to the buffer I read from.
Works like a dream now!
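The fixed decoder isn't shown, but a minimal sketch of that kind of framing with MINA's CumulativeProtocolDecoder could look like this (findStanzaEnd() is a hypothetical helper that knows where a complete top-level stanza ends):
import java.nio.charset.StandardCharsets;
import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.CumulativeProtocolDecoder;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;

public class StanzaDecoder extends CumulativeProtocolDecoder {
    @Override
    protected boolean doDecode(IoSession session, IoBuffer in, ProtocolDecoderOutput out) throws Exception {
        int start = in.position();
        String buffered = in.getString(StandardCharsets.UTF_8.newDecoder());
        int end = findStanzaEnd(buffered);      // hypothetical: index just past one complete stanza, or -1
        if (end < 0) {
            in.position(start);                 // incomplete stanza: wait for more bytes
            return false;
        }
        String stanza = buffered.substring(0, end);
        out.write(stanza);                      // emit exactly one stanza per decode pass
        in.position(start + stanza.getBytes(StandardCharsets.UTF_8).length);
        return true;                            // let MINA call doDecode again for any remaining bytes
    }

    private int findStanzaEnd(String xml) {
        // Placeholder: a real implementation must track the XML nesting of the XMPP stream.
        throw new UnsupportedOperationException("stanza framing is protocol specific");
    }
}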
