Event processed confirmation in Kafka - java

I'm trying to achieve a kind of reliable event processing in Kafka. I have some producers which post events to a Kafka topic, and consumers which take an event, process it, and save the processed data in a DB. However, I need to be sure that EVERY event has been processed and finished. What if something crashes unexpectedly while processing an event after taking it from the queue? How can I inform Kafka that this particular event is still not processed? Are there any known patterns?

Kafka Streams (version 0.10.*) has "at least once" semantics by design. Since you are writing to a DB, if every event has its own key you effectively get "exactly once" semantics as well, because writing the same key again does not create a duplicate.
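That claim rests on the DB write being an idempotent upsert keyed by the event, so a replayed event overwrites the same row instead of adding a new one. A minimal JDBC sketch of the idea (the table, the column names and the PostgreSQL-specific ON CONFLICT syntax are assumptions, not part of the original answer):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Writing each event under its own key means a replayed event simply overwrites the
// same row, so at-least-once delivery still yields exactly one row per event.
public class EventUpserter {
    public void upsert(Connection conn, String eventKey, String payload) throws SQLException {
        String sql = "INSERT INTO events (event_key, payload) VALUES (?, ?) "
                   + "ON CONFLICT (event_key) DO UPDATE SET payload = EXCLUDED.payload";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, eventKey);
            ps.setString(2, payload);
            ps.executeUpdate();
        }
    }
}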
If you want to verify that this behaves correctly:
Start Kafka,
Generate data,
Start the DB,
Start your stream,
Make sure the data is getting there,
Now stop your DB,
Kill the stream while it is getting errors,
Start the DB again,
And you will see that Kafka replays the data into your DB.
For further reading you can go here

Related

How to solve the queue multi-consuming concurrency problem?

Our program uses a message queue, and multiple consumers process the messages.
Each consumer does the following:
Receive an on/off status message from the queue.
Get the latest status from the repository.
Compare the state in the repository with the state received in the message.
If the on/off status is different, update the data. (At this time, other related data are also updated.)
Assuming that this process is handled by multiple consumers, the following problems are expected.
Producer sends messages 1: on, 2: off, and 3: on.
Consumer A receives message #1 and stores message #1 in the storage because there is no latest data.
Consumer A receives message #2.
At the same time, consumer B receives message #3.
Consumers A and B read the latest data from the storage at the same time (message 1).
Consumer B finishes processing first. It does not update the repository because the on/off state is unchanged. (1: on, 3: on)
Then consumer A finishes its processing. The on/off state has changed, so it processes and saves the update. (1: on, 2: off)
In normal case, the latest data remaining in the DB should be on.
(This is because the message was sent in the order of on -> off -> on.)
However, according to the above scenario, off remains the latest data.
Is there any good way to solve this problem?
For reference, we use AWS Amazon MQ for the queue and AWS DynamoDB for storage, and the application uses Spring Boot.
The fundamental problem here is that you need to consume these "status" messages in order, but you're using concurrent consumers, which leads to race conditions and out-of-order message processing. In short, your basic architecture using concurrent consumers is causing this problem.
You could possibly work up some kind of solution in the database with timestamps as suggested in the comments, but that would be extra work for the clients and extra data stored in the database that isn't strictly necessary.
The simplest way to solve the problem is to just consume the messages serially rather than concurrently. There are a handful of different ways to do this, e.g.:
Define just 1 consumer for the queue with the "status" messages.
Use ActiveMQ's "exclusive consumer" feature to ensure that only one consumer receives messages.
Use message groups to group all the "status" messages together to ensure they are processed serially (i.e. in order); a sketch of this option is shown below.
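For example, with Spring's JmsTemplate against ActiveMQ / Amazon MQ, message groups are driven by the JMSXGroupID property. This is only a rough sketch; the destination name and group id are placeholders, not something from the question:

import org.springframework.jms.core.JmsTemplate;

// Hypothetical producer-side snippet: all status messages for a device share one
// message group, so the broker delivers them to a single consumer, in order.
public class StatusPublisher {
    private final JmsTemplate jmsTemplate;

    public StatusPublisher(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void publishStatus(String deviceId, String status) {
        jmsTemplate.convertAndSend("status.queue", status, message -> {
            // Messages with the same JMSXGroupID are pinned to one consumer by ActiveMQ.
            message.setStringProperty("JMSXGroupID", "status-" + deviceId);
            return message;
        });
    }
}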

Can I have local state in a Kafka Processor?

I've been reading a bit about the Kafka concurrency model, but I still struggle to understand whether I can have local state in a Kafka Processor, or whether that will fail in bad ways?
My use case is: I have a topic of updates, I want to insert these updates into a database, but I want to batch them up first. I batch them inside a Java ArrayList inside the Processor, and send them and commit them in the punctuate call.
Will this fail in bad ways? Am I guaranteed that the ArrayList will not be accessed concurrently?
I realize that there will be multiple Processors and multiple ArrayLists, depending on the number of threads and partitions, but I don't really care about that.
I also realize I will lose the ArrayList if the application crashes, but I don't care if some events are inserted twice into the database.
This works fine in my simple tests, but is it correct? If not, why?
What you use for local state in your Kafka consumer application is up to you, and you are guaranteed that only the current thread/consumer will be able to access the local state data in your ArrayList. If you have multiple threads, one per Kafka consumer, each thread can have its own private ArrayList or HashMap to store state in. You could also use something like a local RocksDB database for persistent local state.
A few things to look out for:
If you're batching updates together to send to the DB, are those updates in any way related, say, because they're part of a transaction? If they are, you might run into problems when related updates land in different partitions and are consumed by different instances. An easy way to avoid that is to set the message key to a transaction ID, or some other unique identifier for the transaction; that way all the updates with that transaction ID will end up in one specific partition, so whoever consumes them is sure to always have the complete set of updates for that transaction.
How are you validating that you got ALL the transactions before your batch update? Again, this is important if you're dealing with database updates inside transactions. You could simply wait for a pre-determined amount of time to ensure you have all the updates (say, maybe 30 seconds is enough in your case). Or maybe you send an "EndOfTransaction" message that details how many messages you should have gotten, as well as maybe a CRC or hash of the messages themselves. That way, when you get it, you can either use it to validate you have all the messages already, or you can keep waiting for the ones that you haven't gotten yet.
Make sure you're not committing to Kafka the messages you're keeping in memory until after you've batched and sent them to the database, and you have confirmed that the updates went through successfully. This way, if your application dies, the next time it comes back up, it will get again the messages you haven't committed in Kafka yet.
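To illustrate the batching approach from the question, here is a minimal sketch using the older org.apache.kafka.streams.processor.Processor API (Kafka Streams 2.1+; newer releases offer a revised Processor API). saveBatchToDb is a hypothetical stand-in for your DB writer, and the 30-second punctuation interval is arbitrary:

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;

// Each stream task gets its own processor instance, so this ArrayList is only ever
// touched by the single stream thread that owns the task - no concurrent access.
public class BatchingProcessor implements Processor<String, String> {

    private final List<String> batch = new ArrayList<>();
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        // Flush the in-memory batch every 30 seconds of wall-clock time.
        context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            if (!batch.isEmpty()) {
                saveBatchToDb(batch);   // hypothetical DB writer
                batch.clear();
                context.commit();       // only ask Kafka to commit after the DB write succeeded
            }
        });
    }

    @Override
    public void process(String key, String value) {
        batch.add(value);               // buffer the update in local, per-task state
    }

    @Override
    public void close() { }

    private void saveBatchToDb(List<String> updates) {
        // placeholder: insert the batch into your database here
    }
}

If the application crashes before a flush, the uncommitted messages are re-delivered and some rows may be written twice, which matches the trade-off accepted in the question.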

How to sort infinite event streams using reactive programming?

Problem Statement:
I have the following stream: 1, 2, 4, 6, 3, 5, ... I expect the events to reach the subscribers as 1, 2, 3, 4, 5, 6, ...
For Simplicity:
There is no possibility that an element is missing, i.e. every event ID will eventually arrive.
Messages that have already been sent can be deleted from the in-memory data structure, giving space for the others.
This is an infinite stream; at times the buffered events may be too large to store entirely in memory (the worst case may lead to an out-of-memory exception, which is fine).
You can pile up the events using window(n) or other methods, but then the events are expected to be published in sequence.
With respect to my code,
I have a Flowable that gets inbound data with events. These events are not in order, but they are expected to reach subscribers in order (ascending order of event ID).
Please let me know:
How can I achieve this, either using RxJava or without Rx?
What could be the optimal design for this without any event loss?
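One way to approach this (with or without Rx) is to buffer out-of-order events keyed by ID and release them only once the next expected ID has arrived. Below is a minimal, non-Rx sketch of that idea, assuming event IDs start at 1 with no gaps, as stated above:

import java.util.TreeMap;
import java.util.function.Consumer;

// Buffers events until the next expected ID arrives, then emits everything that is
// now contiguous, so downstream subscribers see 1, 2, 3, 4, ... in order.
public class SequenceReorderer {
    private final TreeMap<Long, String> pending = new TreeMap<>();
    private final Consumer<String> downstream;
    private long nextExpected = 1;

    public SequenceReorderer(Consumer<String> downstream) {
        this.downstream = downstream;
    }

    public synchronized void onEvent(long id, String event) {
        pending.put(id, event);
        // Drain the buffer while the next expected ID is available.
        while (!pending.isEmpty() && pending.firstKey() == nextExpected) {
            downstream.accept(pending.pollFirstEntry().getValue());
            nextExpected++;
        }
    }
}

The same stateful buffering can be wrapped inside an RxJava operator over the Flowable; memory use grows with how far ahead of the expected ID the incoming events run, which matches the worst-case out-of-memory constraint above.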

Effective strategy to avoid duplicate messages in apache kafka consumer

I have been studying Apache Kafka for a month now. However, I am now stuck at a point. My use case is: I have two or more consumer processes running on different machines. I ran a few tests in which I published 10,000 messages to the Kafka server. Then, while processing these messages, I killed one of the consumer processes and restarted it. Consumers were writing processed messages to a file. So after consumption finished, the file showed more than 10k messages, meaning some messages were duplicated.
In the consumer process I have disabled auto commit. Consumers manually commit offsets batch-wise. So, for example, if 100 messages are written to the file, the consumer commits offsets. When a single consumer process is running and it crashes and recovers, duplication is avoided in this manner. But when more than one consumer is running and one of them crashes and recovers, it writes duplicate messages to the file.
Is there any effective strategy to avoid these duplicate messages?
The short answer is, no.
What you're looking for is exactly-once processing. While it may often seem feasible, it should never be relied upon because there are always caveats.
Even in order to attempt to prevent duplicates you would need to use the simple consumer. How this approach works is for each consumer, when a message is consumed from some partition, write the partition and offset of the consumed message to disk. When the consumer restarts after a failure, read the last consumed offset for each partition from disk.
But even with this pattern the consumer can't guarantee it won't reprocess a message after a failure. What if the consumer consumes a message and then fails before the offset is flushed to disk? If you write to disk before you process the message, what if you write the offset and then fail before actually processing the message? This same problem would exist even if you were to commit offsets to ZooKeeper after every message.
There are some cases, though, where exactly-once processing is more attainable, but only for certain use cases. This simply requires that your offset be stored in the same location as your application's output. For instance, if you write a consumer that counts messages, by storing the last counted offset with each count you can guarantee that the offset is stored at the same time as the consumer's state. Of course, in order to guarantee exactly-once processing this would require that you consume exactly one message and update the state exactly once for each message, and that's completely impractical for most Kafka consumer applications. By its nature Kafka consumes messages in batches for performance reasons.
Usually your time will be better spent, and your application will be much more reliable, if you simply design it to be idempotent.
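To make the "store the offset in the same location as the output" idea concrete, here is a rough JDBC sketch; the table names are invented for the example, and on startup you would read consumer_offsets and seek the consumer back to those positions (details omitted):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Writes the processed result and the consumed offset in ONE database transaction, so
// after a crash the consumer can seek to the last stored offset and reprocessing never
// produces an extra row.
public class TransactionalSink {
    public void write(Connection conn, String topic, int partition, long offset, String result)
            throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try (PreparedStatement insert = conn.prepareStatement(
                     "INSERT INTO results (payload) VALUES (?)");
             PreparedStatement saveOffset = conn.prepareStatement(
                     "UPDATE consumer_offsets SET last_offset = ? WHERE topic = ? AND partition_no = ?")) {
            insert.setString(1, result);
            insert.executeUpdate();

            saveOffset.setLong(1, offset);
            saveOffset.setString(2, topic);
            saveOffset.setInt(3, partition);
            saveOffset.executeUpdate();

            conn.commit();          // output and offset become visible atomically
        } catch (SQLException e) {
            conn.rollback();        // neither the result nor the offset is persisted
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}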
This is what Kafka FAQ has to say on the subject of exactly-once:
How do I get exactly-once messaging from Kafka?
Exactly once semantics has two parts: avoiding duplication during data production and avoiding duplicates during data consumption.
There are two approaches to getting exactly once semantics during data production:
Use a single-writer per partition and every time you get a network error check the last message in that partition to see if your last write succeeded
Include a primary key (UUID or something) in the message and deduplicate on the consumer.
If you do one of these things, the log that Kafka hosts will be duplicate-free. However, reading without duplicates depends on some co-operation from the consumer too. If the consumer is periodically checkpointing its position then if it fails and restarts it will restart from the checkpointed position. Thus if the data output and the checkpoint are not written atomically it will be possible to get duplicates here as well. This problem is particular to your storage system. For example, if you are using a database you could commit these together in a transaction. The HDFS loader Camus that LinkedIn wrote does something like this for Hadoop loads. The other alternative that doesn't require a transaction is to store the offset with the data loaded and deduplicate using the topic/partition/offset combination.
I think there are two improvements that would make this a lot easier:
Producer idempotence could be done automatically and much more cheaply by optionally integrating support for this on the server.
The existing high-level consumer doesn't expose a lot of the more fine grained control of offsets (e.g. to reset your position). We will be working on that soon
I agree with RaGe about deduplicating on the consumer side. We use Redis to deduplicate Kafka messages.
Assume the Message class has a member called 'uniqId', which is filled in by the producer side and is guaranteed to be unique. We use a 12-character random string (regexp: '^[A-Za-z0-9]{12}$').
The consumer side uses Redis's SETNX to deduplicate and EXPIRE to purge expired keys automatically. Sample code:
Message msg = ... // eg. ConsumerIterator.next().message().fromJson();
Jedis jedis = ... // eg. JedisPool.getResource();
String key = "SPOUT:" + msg.uniqId; // prefix name at will
String val = Long.toString(System.currentTimeMillis());
long rsps = jedis.setnx(key, val);
if (rsps <= 0) {
    log.warn("kafka dup: {}", msg.toJson()); // and other logic
} else {
    jedis.expire(key, 7200); // 2 hours is ok for production environment;
}
The above code did detect duplicate messages several times when Kafka (version 0.8.x) ran into failure situations. According to our input/output balance audit log, no message was lost or duplicated.
There's a relatively new 'Transactional API' now in Kafka that can allow you to achieve exactly once processing when processing a stream. With the transactional API, idempotency can be built in, as long as the remainder of your system is designed for idempotency. See https://www.baeldung.com/kafka-exactly-once
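For reference, a minimal sketch of the producer side of that transactional API; the bootstrap server, topic and transactional.id below are placeholders, and a full read-process-write pipeline would also call sendOffsetsToTransaction:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id"); // placeholder id

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            try {
                producer.beginTransaction();
                // Either both records become visible to read_committed consumers, or neither does.
                producer.send(new ProducerRecord<>("output-topic", "key-1", "value-1"));
                producer.send(new ProducerRecord<>("output-topic", "key-2", "value-2"));
                producer.commitTransaction();
            } catch (Exception e) {
                producer.abortTransaction();
                throw e;
            }
        }
    }
}

Note that the transaction only covers what Kafka itself stores; a non-Kafka sink (a file, a DB) still needs to be idempotent on its own.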
Whatever is done on the producer side, we believe the best way to deliver exactly-once from Kafka is still to handle it on the consumer side:
Produce the message with a UUID as the Kafka message key into topic T1.
On the consumer side, read the message from T1 and write it to HBase with the UUID as the row key.
Read it back from HBase with the same row key and write it to another topic T2.
Have your end consumers actually consume from topic T2.
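A rough sketch of steps 2-3 with the HBase client API; the table name, column family and the wiring to the actual Kafka consumer and producer are assumptions left as placeholders:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Dedup one record from T1 through HBase before forwarding it to T2: re-deliveries
// carry the same UUID, so they overwrite the same row instead of creating duplicates.
public class HbaseDedupStep {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("dedup_table"))) {   // table name is an assumption

            String uuid = "123e4567-e89b-12d3-a456-426614174000";               // Kafka message key from T1
            String payload = "{\"example\":\"payload\"}";                        // Kafka message value from T1

            // Step 2: write the message to HBase with the UUID as the row key.
            Put put = new Put(Bytes.toBytes(uuid));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("payload"), Bytes.toBytes(payload));
            table.put(put);

            // Step 3: read it back by the same row key and forward the stored value to T2
            // with a regular KafkaProducer (omitted here).
            Result result = table.get(new Get(Bytes.toBytes(uuid)));
            byte[] stored = result.getValue(Bytes.toBytes("d"), Bytes.toBytes("payload"));
            System.out.println(Bytes.toString(stored));
        }
    }
}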

How to remove messages from a topic

I am trying to write an application that uses the JMS publish/subscribe model. However, I have run into a setback: I want the publisher to be able to delete messages from the topic. The use case is that I have durable subscribers; the active ones will get the messages (since delivery is more or less instant), but if there are inactive ones and the publisher decides the message is wrong, I want him to be able to delete the message so that those subscribers won't receive it anymore once they become active.
Problem is, I don't know how/if this can be done.
For a provider I settled on glassfish's implementation, but if other alternatives offer this functionality, I can switch.
Thank you.
JMS is a form of asynchronous messaging, and as such the publishers and subscribers are decoupled by design. This means that there is no mechanism to do what you are asking. Subscribers who are active at the time of publication will consume the message with no chance of receiving the delete message in time to act on it. If a subscriber is offline it will eventually receive both the original message and the delete message, but asynchronous messages are supposed to be atomic. If you proceed with the design in the other respondent's answer (create a delete message and require reconnecting consumers to read the entire queue looking for delete messages), then you will create a situation in which the behavior of the system differs based on whether or not a subscriber was online at the time a specific message/delete combination was published. There is also a race condition in which the subscriber completes reading of the retained messages just before the publisher sends out the delete message. This means you must put significant logic into subscribers to reconcile these conditions, and even more to reconcile the race condition.
The accepted method of doing this is what are called "compensating transactions." In any system where the producer and consumer do not share a single unit of work or share common state (such as using the same DB to store state) then backing out or correcting a previous transaction requires a second transaction that reverses the first. The consumer must of course be able to apply the compensating transaction correctly. When this pattern is used the result is that all subscribers exhibit the same behavior regardless of whether the messages are consumed in real time or in a batch after the consumer has restarted.
Note that a compensating transaction differs from a "delete message." The delete message as proposed in the other respondent's answer is a form of command and control that affects the message stream itself. On the other hand, compensating transactions affect the state of the system through transactional updates of the system state.
As a general rule, you never want to manage state of the system by manipulating the message stream with command and control functions. This is fragile, susceptible to attack and very hard to audit or debug. Instead, design the system to deliver every message subject to its quality of service constraints and to process all messages. Handle state changes (including reversing a prior action) entirely in the application.
As an example, in banking where transactions trigger secondary effects such as overdraft fees, a common procedure is to "memo post" the transactions during the day, then sort and apply them in a batch after the bank has closed. This allows a mistake to be reconciled before it causes overdraft fees. More recently, the transactions are applied in real time but the triggers are withheld until the day's books close and this achieves the same result.
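As a tiny illustration of the compensating-transaction idea (the event names and this ledger class are invented for the sketch): rather than deleting the bad message, the publisher emits a second event that reverses its effect, and every subscriber applies both.

// Instead of deleting an erroneous "debit 100" message, the publisher emits a
// compensating "credit 100" event; applying both in order leaves the state correct
// for online and offline subscribers alike.
public class AccountLedger {
    private long balance = 0;

    public void apply(String eventType, long amount) {
        switch (eventType) {
            case "DEBIT":  balance -= amount; break;
            case "CREDIT": balance += amount; break;  // the compensating event for a bad DEBIT
            default: throw new IllegalArgumentException("unknown event: " + eventType);
        }
    }

    public long balance() {
        return balance;
    }
}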
The JMS API does not allow removing messages from any destination (either queue or topic). I believe, though, that specific JMS providers offer their own proprietary tools to manage their state, for example via JMX. Check what your JMS provider offers, but be careful: even if you find a solution it will not be portable between different JMS providers.
One legal way to "remove" a message is to use its time-to-live:
publish(Topic topic, Message message, int deliveryMode, int priority, long timeToLive). Probably it is good enough for you.
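If time-to-live fits your case, a minimal sketch with plain JMS (the 60-second TTL and the session/topic wiring are placeholders):

import javax.jms.DeliveryMode;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

// Publish with a 60-second time-to-live so the broker discards the message once it
// expires, and inactive durable subscribers never see it after that.
public class ExpiringPublisher {
    public static void publishWithTtl(Session session, Topic topic, String text) throws Exception {
        MessageProducer producer = session.createProducer(topic);
        try {
            Message message = session.createTextMessage(text);
            // deliveryMode, priority, timeToLive in milliseconds
            producer.send(message, DeliveryMode.PERSISTENT, Message.DEFAULT_PRIORITY, 60_000L);
        } finally {
            producer.close();
        }
    }
}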
If that is not applicable for your application, solve the problem at the application level. For example, attach a unique ID to each message and publish a special "delete" message with higher priority that acts as a command to delete the "real" message with the same ID.
You could have the producer send a delete message, and the consumer would need to read all messages before starting to process them.
