How to solve this typical producer-consumer scenario - Java

I encountered an interesting and, I think, very common synchronization problem in my test code.
This is the test (it's a functional test that connects from the outside to the system); I run it via TestNG.
@Test
public void operationalClientConnected_sendGetUserSessionRequest_clientShallReceiveGetUserSessionResponse() {
    // GIVEN
    OperationalClientSimulator client = operationalClientHasEstablishedWebSocketConnection("ClientXY");
    // WHEN
    GetUserSessionRequest request = PojoRequestBuilder.newRequest(GetUserSessionRequest.class).build();
    client.sendRequest(request);
    // THEN
    assertThatClientReceivesResponse(client, GetUserSessionResponse.class, request.getCorrelationId(), request.getRequestId());
}
Basically, I send a single request and wait for the correct response; that is what I want to verify in this test.
Behind assertThatClientReceivesResponse there is a Hamcrest matcher that looks like this:
@Override
protected boolean matchesSafely(final OperationalClientSimulator client) {
    Object awaitedMessage = client.awaitMessage(
        new Verification<Object>() {
            @Override
            public VerificationResult verify(final Object actual) {
                VerificationResult result = new VerificationResult();
                if (!_expectedResponseClass.isInstance(actual)) {
                    result.addMismatch("not of expected type", actual, _expectedResponseClass.getSimpleName());
                }
                // check more details of the message ..
                return result;
            }
        }, _expectedTimeout);
    boolean matches = awaitedMessage != null;
    if (matches) {
        _messageCaptor.setActualMessage((T) awaitedMessage);
    }
    return matches;
}
Now to the interesting part: the synchronization in the OperationalClientSimulator class.
Two methods are of interest:
awaitMessage, which blocks until either a message matching the given Verification is received or the timeout expires
onMessage, which is called for each message received (over a WebSocket connection)
Basically, what I want to achieve is to have the test thread block in the awaitMessage method until either the correct message is received (via onMessage) or the specified timeout elapses.
public Object awaitMessage(final Verification<Object> verification, final long timeoutMillis) {
    // how to sync?
    return awaitedMessage; // or null
}

@Override
public void onMessage(final String message) {
    LOG.info("#Client {} <== received a message on websocket - {}", name, message);
    // how to sync?
}
About the test:
The test thread will almost always be faster and therefore has to wait until the response is received via the awaitMessage method.
There can be very rare cases where the expected message is received before the test thread checks for it (which basically means I have to save every received message).
In this specific test case only a handful of messages are received (some heartbeat messages, the actual response and a notification), but in other cases there can be hundreds of messages which I need to inspect to find the expected message(s).
I was thinking about different solutions for synchronizing here:
The simplest of course would be to sync with the synchronized keyword, but I think there are neater ways to do this.
The onMessage method could simply write into a blocking queue and the test thread could consume from it, but there I don't know how to measure the timeout... can I use a CountDownLatch? (see the sketch after this list)
Maybe I could do a non-blocking solution where the producer (onMessage) writes into an array and the consumer reads until it reaches an index published by the producer (like the LMAX Disruptor).
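Something like this is what I have in mind for the blocking-queue option (the queue field and the deserialize helper are made up, and I'm assuming VerificationResult can report whether it matched):
// needs java.util.concurrent.{BlockingQueue, LinkedBlockingQueue, TimeUnit}
private final BlockingQueue<Object> receivedMessages = new LinkedBlockingQueue<>();

public Object awaitMessage(final Verification<Object> verification, final long timeoutMillis) {
    final long deadline = System.currentTimeMillis() + timeoutMillis;
    while (true) {
        final long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
            return null; // overall timeout elapsed without a matching message
        }
        try {
            final Object candidate = receivedMessages.poll(remaining, TimeUnit.MILLISECONDS);
            // assuming VerificationResult exposes an isMatch()-style check
            if (candidate != null && verification.verify(candidate).isMatch()) {
                return candidate;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}

@Override
public void onMessage(final String message) {
    // every message is stored, so a response that arrives before the test thread starts waiting is not lost
    receivedMessages.offer(deserialize(message)); // deserialize(..) stands in for the real parsing
}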
I know this is test code and performance is not really an issue here; I am just thinking about how to solve this in a "nice" way.. you know.. because it's Christmas :-)
So the actual question here is: how do I "safely" wait for the message I expect in my test, with a timeout? Safely here means that I never miss or lose a message because of concurrency issues, and that I also need to check whether the expected message was already received.
How should I synchronize between the test runner thread and the thread that calls onMessage in the OperationalClientSimulator when a message is received on the WebSocket connection?

Related

Clearing chat using discord JDA

I am coding a Discord bot in Java. I use Discord JDA and the utilities dependency; I tried the utilities one but didn't get it to work, so I tried using just the normal JDA. This is what I did, but I need some way of telling the bot not to send the message in the new channel if the command wasn't run.
public class NukeCommand extends ListenerAdapter {
    @Override
    public void onGuildMessageReceived(GuildMessageReceivedEvent event){
        if (event.getMessage().getContentRaw().equalsIgnoreCase(".nuke")){
            event.getChannel().createCopy().queue();
            event.getChannel().delete().queue();
        }
    }

    @Override
    public void onTextChannelCreate(TextChannelCreateEvent createEvent){
        createEvent.getChannel().sendMessage(":warning:Nuked channel:warning:\nhttps://imgur.com/a/93vq9R8").queue();
    }
}
I am open to answers using either dependency.
This is the effect I want: https://gyazo.com/e549fd8dda0ded62db19fb84e31d3a61
I have the same effect, but it sends the message every time I create a text channel.
I want it to only send the message if the .nuke command was run.
You said you already got it, but I thought I'd explain more about how it actually works and refine your answer.
ListenerAdapter's methods are called for every event that happens in the whole scope of the bot. For example, if you have:
class Adapter extends ListenerAdapter {
    @Override
    public void onMessageReceived(MessageReceivedEvent event){
        /* This will be called for every message, everywhere, including the bot's private channel */
    }
}
So you need to filter those events inside the method's body. What you did in your answer was check whether whoever sent the message has permission to manage messages, which is not the permission you actually need to delete channels; that is Permission.MANAGE_CHANNELS, which you can find in the roles tab of your server.
Then you call createCopy(), which basically creates a shallow copy of the channel's information.
I don't think you actually wanted to do that.
Then you queue that action (this is what actually executes it): it is put in a queue for asynchronous processing by JDA's threads and subsequently sent to Discord over the WebSocket connection.
That queue method can take a Consumer<TextChannel> as a parameter. What does that mean?
Basically, a consumer represents an operation that takes a single parameter as input (in this case a TextChannel) and returns nothing. More about it here: https://docs.oracle.com/javase/8/docs/api/java/util/function/Consumer.html
The consumer you give in your answer does what you actually want: it sends the message on the channel that was operated on by the previous queue. This means createCopy() is completely useless, since both TextChannel objects are the same; the appropriate call would be:
event.getChannel().sendMessage(":warning:Nuked channel:warning:\nhttps://imgur.com/a/93vq9R8").queue();
After that, you delete the channel right away, which does not make that much sense, since most likely no one would even be able to see the nuked message.
For that, JDA provides another method for queueing tasks, this time with a delay:
queueAfter()
It takes a long value and a TimeUnit object specifying the unit of that value, for example:
event.getChannel().delete().queueAfter(10, TimeUnit.SECONDS);
This would queue the task to be executed in 10 seconds, and it does NOT stop the execution of your code, unlike the complete() method.
Alternatively, you could just use Thread.sleep(), which takes a long value as input: Thread.sleep(10000) for 10 seconds (10000 milliseconds).
You can find a lot more information about JDA and getting-started tips here: https://github.com/DV8FromTheWorld/JDA#creating-the-jda-object
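Putting the queue() consumer and queueAfter() together, one possible sketch (keeping the createCopy() from the question, using a 10-second delay only as an example, and with a Permission constant you should verify against your JDA version):
import java.util.concurrent.TimeUnit;
import net.dv8tion.jda.api.Permission;
import net.dv8tion.jda.api.events.message.guild.GuildMessageReceivedEvent;
import net.dv8tion.jda.api.hooks.ListenerAdapter;

public class NukeListener extends ListenerAdapter {
    @Override
    public void onGuildMessageReceived(GuildMessageReceivedEvent event) {
        if (!event.getMessage().getContentRaw().equalsIgnoreCase(".nuke")) {
            return;
        }
        if (event.getMember() == null || !event.getMember().hasPermission(Permission.MANAGE_CHANNEL)) {
            return; // only members who may manage channels can nuke
        }
        // send the warning in the consumer, on the freshly created copy
        event.getChannel().createCopy().queue(copy ->
                copy.sendMessage(":warning:Nuked channel:warning:").queue());
        // delete the old channel after a short delay so members can still see what happened
        event.getChannel().delete().queueAfter(10, TimeUnit.SECONDS);
    }
}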
I found a solution: you can pass a Consumer (or whatever it's called) into the .queue method; this code gets run whenever the channel is created.
What I did:
@Override
public void onGuildMessageReceived(GuildMessageReceivedEvent event){
    if (event.getMember().hasPermission(Permission.MESSAGE_MANAGE)){
        if (event.getMessage().getContentRaw().equalsIgnoreCase(".nuke")){
            event.getChannel().createCopy().queue(channel -> channel.sendMessage(":warning:Nuked channel:warning:\nhttps://imgur.com/a/93vq9R8").queue());
            event.getChannel().delete().queue();
        }
    }
}
This seems to work (without deleting the channel):
@Override
public void onMessageReceived(@NotNull MessageReceivedEvent event) {
    String message = event.getMessage().getContentRaw();
    if (message.toLowerCase().equals("$" + "clear")) {
        for (int i = 0; i <= 1000; i++) {
            TextChannel channel = (TextChannel) event.getChannel();
            MessageHistory history = new MessageHistory(channel);
            List<Message> msgs;
            msgs = history.retrievePast(100).complete();
            if (msgs.size() > 1) {
                channel.deleteMessages(msgs).queue();
            } else {
                channel.sendMessage("Mensagens deletadas").queue(); // "Messages deleted"
                return;
            }
        }
    }
}

Assert Kafka send worked

I'm writing an application with Spring Boot so to write to Kafka I do:
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
and then inside my method:
kafkaTemplate.send(topic, data)
But I feel like I'm just relying on this to work. How can I know if it has worked? If it's asynchronous, is it good practice to return a 200 code and hope it worked? I'm confused. If Kafka isn't available, won't this fail? Shouldn't I be prompted to catch an exception?
Along with what @mjuarez has mentioned, you can try playing with two Kafka producer properties. One is ProducerConfig.ACKS_CONFIG, which lets you set the level of acknowledgement that you think is safe for your use case. This knob has three possible values. From the Kafka docs:
acks=0: Producer doesn't care about acknowledgement from server, and considers it as sent.
acks=1: This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers.
acks=all: This means the leader will wait for the full set of in-sync replicas to acknowledge the record.
The other property is ProducerConfig.RETRIES_CONFIG. Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.
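For reference, a rough sketch of setting those two properties on a plain KafkaProducer (the broker address and values are only illustrative; with Spring Boot you would set the equivalent spring.kafka.producer.* properties instead):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas to acknowledge
props.put(ProducerConfig.RETRIES_CONFIG, 3);  // retry records that fail with transient errors
KafkaProducer<String, String> producer = new KafkaProducer<>(props);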
Yes, if Kafka is not available, that .send() call will fail, but if you send it async, no one will be notified. You can specify a callback that you want to be executed when the future finally finishes. Full interface spec here: https://kafka.apache.org/20/javadoc/org/apache/kafka/clients/producer/Callback.html
From the official Kafka javadoc here: https://kafka.apache.org/20/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html
Fully non-blocking usage can make use of the Callback parameter to
provide a callback that will be invoked when the request is complete.
ProducerRecord<byte[],byte[]> record = new ProducerRecord<byte[],byte[]>("the-topic", key, value);
producer.send(record,
    new Callback() {
        public void onCompletion(RecordMetadata metadata, Exception e) {
            if (e != null) {
                e.printStackTrace();
            } else {
                System.out.println("The offset of the record we just sent is: " + metadata.offset());
            }
        }
    });
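If you are using Spring's KafkaTemplate as in the question, the same idea looks roughly like this, assuming Spring Kafka 2.x where send() returns a ListenableFuture (newer versions return a CompletableFuture instead):
// needs org.springframework.kafka.support.SendResult and
// org.springframework.util.concurrent.{ListenableFuture, ListenableFutureCallback}
ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, data);
future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
    @Override
    public void onSuccess(SendResult<String, String> result) {
        // the broker acknowledged the record; offset and partition are available via result
    }

    @Override
    public void onFailure(Throwable ex) {
        // the send failed (e.g. Kafka unreachable); log it or map it to an error response
    }
});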
You can use the command below while sending messages to Kafka:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic-name
While the above command is running, run your code; if sending succeeds, the messages will be printed on the console.
Furthermore, as with any other connection to any resource, if the connection cannot be established, any operation will raise an exception.

Kafka - How to obtain failed messages details in Producer class

Kafka allows for asynchronous message sending through below methods on Producer (KafkaProducer) class:
public java.util.concurrent.Future<RecordMetadata> send(ProducerRecord<K,V> record)
public java.util.concurrent.Future<RecordMetadata> send(ProducerRecord<K,V> record, Callback callback)
Successes can be handled through
1) the Future<RecordMetaData> object or
2) the onCompletion method invoked by the callback. The full method signature and usage of onCompletion are as below (taken from the Kafka docs):
ProducerRecord<byte[],byte[]> record = new ProducerRecord<byte[],byte[]>("the-topic", key, value);
producer.send(record,
new Callback() {
public void onCompletion(RecordMetadata metadata, Exception e) {
if(e != null)
e.printStackTrace();
System.out.println("The offset of the record we just sent is: " + metadata.offset());
}
});
While failures need to be handled through the Exception e passed to the onCompletion method.
Fine, everything looks good so far.
But if I am getting it right, the only reasonable information that can be obtained from the exception object e is the stack trace and the exception message. What I mean to point out here is that e does not contain any information about the actual record sent. In other words, it does not hold a reference to the record that was sent to the Kafka broker. So what useful processing or handling can the producer do if the record was not sent successfully? Really not much.
Why I say this is: ideally I would like to log the failed message somewhere and then try to resend it. But with the little information (e) provided by the framework, I feel this is not possible.
Can someone point out if I am right or wrong?
You could easily create a callback that receives the producerRecord as a constructor argument. So upon onCompletion with an exception, you can have complete knowledge of the producer record, and even try to send it again.
I dealt with the same issue. I created a callback that gets both the ProducerRecord and a callback handler that uses an executor service to send the record again. That way I can tolerate any number of failures (e.g. network issues or Kafka being down) and recover from them.
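A rough sketch of that idea (class and field names are made up): a Callback that keeps a reference to the ProducerRecord it was created for, so onCompletion knows exactly which record failed and can log or resend it.
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

class ResendingCallback implements Callback {
    private final KafkaProducer<String, String> producer;
    private final ProducerRecord<String, String> record;

    ResendingCallback(KafkaProducer<String, String> producer, ProducerRecord<String, String> record) {
        this.producer = producer;
        this.record = record;
    }

    @Override
    public void onCompletion(RecordMetadata metadata, Exception e) {
        if (e != null) {
            // we still know exactly which record failed, so we can log it and retry
            System.err.println("Send failed for key " + record.key() + ": " + e.getMessage());
            producer.send(record, this); // naive resend; add a retry limit in real code
        }
    }
}

// usage:
// producer.send(record, new ResendingCallback(producer, record));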

Websockets onMessage Lock

I am using both the Python and Java implementations of a websocket client. However, since onMessage is asynchronous, it will begin executing immediately, even if there is another function being executed. How can I ensure that each onMessage invocation finishes completely before the next message is handled? Thanks!
EDIT:
I am subscribing to multiple channels, and regardless of which channel sends a message, my onMessage handler will handle the message. I need my onMessage handler to fully process each message it receives before it begins to process the next message, but I cannot lose any messages. I hope this helps to clarify a bit.
It sounds like just a concurrency issue. How about this?
private final Object onMessageLock = new Object();

@OnMessage
public void onMessage(String message, Session session)
{
    synchronized (onMessageLock)
    {
        // Handle the message here.
    }
}
I tested the solution proposed by Takahiko. It only works per client; messages from different clients will still be processed in parallel.
If you want every message to be processed only after the previous message has been processed completely (regardless of which client sent it), you have to make the lock object static:
private static final Object onMessageLock = new Object();
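For completeness, a minimal sketch of the static-lock variant as a full javax.websocket endpoint (the endpoint path is only illustrative); because the lock is static, messages from all sessions are handled one at a time:
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/echo")
public class SerializedEndpoint
{
    private static final Object onMessageLock = new Object();

    @OnMessage
    public void onMessage(String message, Session session)
    {
        synchronized (onMessageLock)
        {
            // Handle the message here; only one thread across all sessions runs this block at a time.
        }
    }
}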

Non-blocking reverse proxy with netty

I'm trying to write a non-blocking proxy with netty 4.1. I have a "FrontHandler" which handles incoming connections, and then a "BackHandler" which handles outgoing ones. I'm following the HexDumpProxyHandler (https://github.com/netty/netty/blob/ed4a89082bb29b9e7d869c5d25d6b9ea8fc9d25b/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L67)
In this code I have found:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            // ...
Meaning that the incoming message is only written if the outbound client connection is already ready. This is obviously not ideal in an HTTP proxy case, so I am wondering what the best way to handle it would be.
I am wondering if disabling auto-read on the front-end connection (and only triggering reads manually once the outgoing client connection is ready) is a good option. I could then enable autoRead over the child socket again in the "channelActive" event of the backend handler. However, I am not sure how many messages I would get in the handler for each "read()" invocation (using HttpDecoder, I assume I would get the initial HttpRequest, but I'd really like to avoid getting the subsequent HttpContent / LastHttpContent messages until I manually trigger the read() again and enable autoRead over the channel).
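A rough sketch of that no-auto-read idea (handler and field names are assumptions, not taken from the HexDumpProxy example): the frontend child channels are created with AUTO_READ disabled, and the backend handler re-enables reading on the frontend channel once its own connection is active.
// in the frontend ServerBootstrap:
serverBootstrap.childOption(ChannelOption.AUTO_READ, false);

// in the backend handler, where inboundChannel is the frontend channel that was captured
// when the backend connection was bootstrapped:
@Override
public void channelActive(ChannelHandlerContext ctx) {
    inboundChannel.config().setAutoRead(true); // resume reading from the client
    inboundChannel.read();
    ctx.fireChannelActive();
}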
Another option would be to use a Promise to get the Channel from the client ChannelPool:
private void setCurrentBackend(HttpRequest request) {
pool.acquire(request, backendPromise);
backendPromise.addListener((FutureListener<Channel>) future -> {
Channel c = future.get();
if (!currentBackend.compareAndSet(null, c)) {
pool.release(c);
throw new IllegalStateException();
}
});
}
and then do the copying from input to output thru that promise. Eg:
private void handleLastContent(ChannelHandlerContext frontCtx, LastHttpContent lastContent) {
doInBackend(c -> {
c.writeAndFlush(lastContent).addListener((ChannelFutureListener) future -> {
if (future.isSuccess()) {
future.channel().read();
} else {
pool.release(c);
frontCtx.close();
}
});
});
}
private void doInBackend(Consumer<Channel> action) {
Channel c = currentBackend.get();
if (c == null) {
backendPromise.addListener((FutureListener<Channel>) future -> action.accept(future.get()));
} else {
action.accept(c);
}
}
but I'm not sure how good it is to keep the promise there forever and do all the writes from "front" to "back" by adding listeners to it. I'm also not sure how to instantiate the promise so that the operations are performed in the right thread... right now I'm using:
backendPromise = group.next().<Channel> newPromise(); // bad
// or
backendPromise = frontCtx.channel().eventLoop().newPromise(); // OK?
(where group is the same eventLoopGroup as used in the ServerBootstrap of the frontend).
If they're not handled through the right thread, I assume it could be problematic to have the "else { }" optimization in the "doInBackend" method to avoid using the Promise and write to the channel directly.
The no-autoread approach doesn't work by itself, because the HttpRequestDecoder creates several messages even if only one read() was performed.
I have solved it by using chained CompletableFutures.
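One shape this can take (a sketch of the general idea only, not the exact code): keep a CompletableFuture<Channel> for the backend and chain each write onto the previous stage, so ordering is preserved even for messages that arrive while the backend channel is still connecting.
// needs java.util.concurrent.CompletableFuture
// assumed fields in the frontend handler (accessed only from its event loop)
private final CompletableFuture<Channel> backendReady = new CompletableFuture<>();
private CompletableFuture<Channel> chain = backendReady;

// called from channelRead for each message to forward
private void forwardToBackend(final Object msg) {
    chain = chain.thenApply(channel -> {
        channel.writeAndFlush(msg);
        return channel;
    });
}

// completed from the backend handler once its connection is active, e.g.:
// backendReady.complete(ctx.channel());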
I have worked on a similar proxy application based on the MQTT protocol, which was basically used to create a real-time chat application. The application I had to design, however, was asynchronous in nature, so I naturally did not face any such problem. Because in case
outboundChannel.isActive() == false
I can simply keep the messages in a queue or a persistent DB and then process them once the outboundChannel is up. However, since you are talking about an HTTP application, it is synchronous in nature, meaning that the client cannot keep sending packets until the outboundChannel is up and running. So the option you suggest is that the packet will only be read once the channel is active, and you can manually handle the message reads by disabling auto-read in ChannelConfig.
However, what I would like to suggest is that you should check whether the outboundChannel is active or not. If the channel is active, send the packet forward for processing. If the channel is not active, you should reject the packet by sending back a response similar to an Error 404 (see the sketch below).
Along with this, you should configure your client to keep retrying to send the packets after certain intervals, and accordingly handle what needs to be done in case the channel takes too long to become active and readable. Manually handling channelRead is generally not preferred and is an anti-pattern. You should let Netty handle that for you in the most efficient way.
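A minimal sketch of that check (the 503 status and field names are assumptions) in the frontend handler's channelRead:
// needs io.netty.handler.codec.http.{DefaultFullHttpResponse, FullHttpResponse, HttpVersion, HttpResponseStatus}
// and io.netty.util.ReferenceCountUtil
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel != null && outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg);
    } else {
        // backend not ready: answer the client instead of buffering indefinitely
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.SERVICE_UNAVAILABLE);
        ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
        ReferenceCountUtil.release(msg);
    }
}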
