Java, Akka Actor and Bounded Mailbox

I have the following configuration in application.conf:
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 100
  mailbox-push-timeout-time = 3s
}
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = INFO
  daemonic = on
}
This is how I configured my actor:
public class MyTestActor extends UntypedActor implements RequiresMessageQueue<BoundedMessageQueueSemantics> {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            Thread.sleep(500);
            System.out.println("message = " + message);
        } else {
            System.out.println("Unknown Message");
        }
    }
}
Now this is how I instantiate this actor:
myTestActor = myActorSystem.actorOf(Props.create(MyTestActor.class).withMailbox("bounded-mailbox"), "simple-actor");
After that, my code sends 3000 messages to this actor:
for (int i = 0; i < 3000; i++) {
    myTestActor.tell(guestName, null);
}
What I expect to see is an exception telling me that the queue is full, but my messages are printed inside the onReceive method every half second, as if nothing happened. So I believe my mailbox configuration is not being applied.
What am I doing wrong?
Update: I created an actor that subscribes to dead letter events:
deadLetterActor = myActorSystem.actorOf(Props.create(DeadLetterMonitor.class),"deadLetter-monitor");
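For reference, a minimal sketch of what such a DeadLetterMonitor could look like (the original class isn't shown above, so this is an assumption; it subscribes itself to DeadLetter events and logs them):
public class DeadLetterMonitor extends UntypedActor {
    @Override
    public void preStart() {
        // subscribe this actor to dead letter notifications
        getContext().system().eventStream().subscribe(getSelf(), DeadLetter.class);
    }

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof DeadLetter) {
            System.out.println("Dead letter: " + ((DeadLetter) message).message());
        } else {
            unhandled(message);
        }
    }
}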
I also installed Kamon to monitor the queues. After sending 3000 messages to the actor, Kamon shows me the following:
Actor: user/simple-actor
MailBox size:
Min: 100
Avg.: 100.0
Max: 101
Actor: system/deadLetterListener
MailBox size:
Min: 0
Avg.: 0.0
Max: 0
Actor: system/deadLetter-monitor
MailBox size:
Min: 0
Avg.: 0.0
Max: 0

By default, Akka discards overflowing messages into DeadLetters, and the actor doesn't stop processing:
https://github.com/akka/akka/blob/876b8045a1fdb9cdd880eeab8b1611aa976576f6/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala#L411
But the sending thread will block for the interval configured by mailbox-push-timeout-time before each overflowing message is discarded. Try decreasing it to 1ms and see that the following test passes:
import java.util.concurrent.atomic.AtomicInteger

import akka.actor._
import com.typesafe.config.Config
import com.typesafe.config.ConfigFactory._
import org.specs2.mutable.Specification

class BoundedActorSpec extends Specification {

  args(sequential = true)

  def config: Config = load(parseString(
    """
      bounded-mailbox {
        mailbox-type = "akka.dispatch.BoundedMailbox"
        mailbox-capacity = 100
        mailbox-push-timeout-time = 1ms
      }
    """))

  val system = ActorSystem("system", config)

  "some messages should go to dead letters" in {
    system.eventStream.subscribe(system.actorOf(Props(classOf[DeadLetterMetricsActor])), classOf[DeadLetter])
    val myTestActor = system.actorOf(Props(classOf[MyTestActor]).withMailbox("bounded-mailbox"))
    for (i <- 0 until 3000) {
      myTestActor.tell("guestName", null)
    }
    Thread.sleep(100)
    system.shutdown()
    system.awaitTermination()
    DeadLetterMetricsActor.deadLetterCount.get must be greaterThan(0)
  }
}

class MyTestActor extends Actor {
  def receive = {
    case message: String =>
      Thread.sleep(500)
      println("message = " + message)
    case _ => println("Unknown Message")
  }
}

object DeadLetterMetricsActor {
  val deadLetterCount = new AtomicInteger
}

class DeadLetterMetricsActor extends Actor {
  def receive = {
    case _: DeadLetter => DeadLetterMetricsActor.deadLetterCount.incrementAndGet()
  }
}

Related

Kafka listener isn't listening to Kafka

I'm using Java 11 and kafka-client 2.0.0.
I use the following code to create a consumer:
public Consumer createConsumer(Properties properties, String regex) {
    log.info("Creating consumer and listener..");
    Consumer consumer = new KafkaConsumer<>(properties);
    ConsumerRebalanceListener listener = new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            log.info("The following partitions were revoked from consumer : {}", Arrays.toString(partitions.toArray()));
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            log.info("The following partitions were assigned to consumer : {}", Arrays.toString(partitions.toArray()));
        }
    };
    consumer.subscribe(Pattern.compile(regex), listener);
    log.info("consumer subscribed");
    return consumer;
}
My poll loop is in a different place in the code:
public <K, V> void startWorking(Consumer<K, V> consumer) {
    try {
        while (true) {
            ConsumerRecords<K, V> records = consumer.poll(600);
            if (records.count() > 0) {
                log.info("Polled {} records", records.count());
            } else {
                log.info("polled 0 records.. going to sleep..");
                Thread.sleep(200);
            }
        }
    } catch (WakeupException | InterruptedException e) {
        log.error("Consumer is shutting down", e);
    } finally {
        consumer.close();
    }
}
When I run the code and use this function, the consumer is created and the log contains the following messages:
Creating consumer and listener..
consumer subscribed
polled 0 records.. going to sleep..
polled 0 records.. going to sleep..
polled 0 records.. going to sleep..
The log doesn't contain any info regarding partition assignment or revocation. In addition, I'm able to see in the log the properties that the consumer uses (group.id is set):
2020-07-09 14:31:07.959 DEBUG 7342 --- [main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [server1:9092]
    check.crcs = true
    client.id =
    group.id = mygroupid
    key.deserializer = ..
    value.deserializer = ..
So I tried to use the kafka-console-consumer with the same configuration in order to consume from one of the topics that the regex (mytopic.*) should catch (in this case I used the topic mytopic-1):
/usr/bin/kafka-console-consumer.sh --bootstrap-server server1:9092 --topic mytopic-1 --property print.timestamp=true --consumer.config /data/scripts/kafka-consumer.properties --from-beginning
I have a poll loop in another part of my code that times out every 10 minutes. So, the bottom line: the problem is that partitions aren't assigned to the Java consumer. The prints inside the listener never happen, and the consumer doesn't have any partitions to listen to.
It turned out I was missing the SSL property in my properties file. Don't forget to specify security.protocol=SSL if your brokers use SSL. It seems the kafka-client API doesn't throw an exception when Kafka uses SSL and you try to access it without the SSL parameter configured; the consumer simply never gets partitions assigned.
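For illustration, a minimal sketch of the consumer properties with the missing setting added (the truststore path and password below are hypothetical placeholders, not from the original post):
Properties properties = new Properties();
properties.put("bootstrap.servers", "server1:9092");
properties.put("group.id", "mygroupid");
properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// The missing piece: tell the client that the broker expects SSL.
properties.put("security.protocol", "SSL");
properties.put("ssl.truststore.location", "/path/to/truststore.jks"); // hypothetical path
properties.put("ssl.truststore.password", "changeit");                // hypothetical password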

Clustering in Akka

I am trying to understand Akka clustering for parallel computation using nodes. So, I wrote a factorial program and want to run it on a cluster of 3 nodes (including the master).
I am using a configuration file to provide the seed nodes and the cluster provider, and I read this file in my code.
cluster {
  akka {
    actor {
      provider = "cluster"
    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "127.0.0.1"
        port = 0
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://ClusterSystem@127.0.0.1:9876",
        "akka.tcp://ClusterSystem@127.0.0.1:6789"]
      # auto downing is NOT safe for production deployments.
      # you may want to use it during development, read more about it in the docs.
      #
      # auto-down-unreachable-after = 10s
    }
  }
}
Following is the Scala code:
package test

import java.io.File

import akka.actor.{Actor, ActorSystem, Props}
import akka.stream.ActorMaterializer
import com.typesafe.config.ConfigFactory

import scala.concurrent.ExecutionContextExecutor

class Factorial extends Actor {
  override def receive = {
    case (n: Int) => fact(n)
  }

  def fact(n: Int): Int = {
    if (n <= 1) {
      return 1
    } else {
      return n * fact(n - 1)
    }
  }
}

object ClusterActor {
  def main(args: Array[String]): Unit = {
    val configFile = "E:/Scala/StatsRuleEngine/Resources/local_configuration.conf"
    val config = ConfigFactory.parseFile(new File(configFile))
    implicit val system: ActorSystem = ActorSystem("ClusterSystem", config.getConfig("cluster"))
    implicit val materializer: ActorMaterializer = ActorMaterializer()
    implicit val executionContext: ExecutionContextExecutor = system.dispatcher
    val FacActor = system.actorOf(Props[Factorial], "Factorial")
    FacActor ! (5)
  }
}
On running the program, I get the error below:
Remote connection to [null] failed with java.net.ConnectException: Connection refused: no further information: /127.0.0.1:6789
[WARN] [01/21/2019 16:31:15.979] [New I/O boss #3] [NettyTransport(akka://ClusterSystem)] Remote connection to [null] failed with java.net.ConnectException: Connection refused: no further information: /127.0.0.1:9876
I tried to search, but I don't understand why this error occurs.
When you boot your nodes, you need to specify the exact ports that will be open in the config:
netty.tcp {
  hostname = "127.0.0.1"
  port = 0 // THE EXACT PORT
}
So, if your seed nodes use ports 9876 and 6789, two of the nodes have to specify:
netty.tcp {
  hostname = "127.0.0.1"
  port = 9876
}
and
netty.tcp {
  hostname = "127.0.0.1"
  port = 6789
}
Note that the node listed first in the seed-nodes list must start first.
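One way to avoid editing the file for each node is to overlay the port at startup. Here is a minimal Java sketch using the Typesafe Config API (the overlay approach and the SeedNode class are illustrative assumptions, not part of the original answer):
import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class SeedNode {
    public static void main(String[] args) {
        // Take the port from the command line so the same build can start
        // either seed node (9876 for the first, 6789 for the second).
        String port = args.length > 0 ? args[0] : "9876";
        Config config = ConfigFactory
                .parseString("akka.remote.netty.tcp.port = " + port)
                .withFallback(ConfigFactory.load());
        ActorSystem.create("ClusterSystem", config);
    }
}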

Can produce to Kafka but cannot consume

I'm using the Kafka JDK client ver 0.10.2.1. I am able to produce simple messages to Kafka for a "heartbeat" test, but I cannot consume a message from that same topic using the SDK. I am able to consume that message when I go into the Kafka CLI, so I have confirmed the message is there. Here's the function I'm using to consume from my Kafka server, with the props. I pass the message I produced to the topic only after I have confirmed the produce() was successful; I can post that function later if requested:
private def consumeFromKafka(topic: String, expectedMessage: String): Boolean = {
  val props: Properties = initProps("consumer")
  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(List(topic).asJava)
  var readExpectedRecord = false
  try {
    val records = {
      val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
      // increase timeout and try again if nothing comes back the first time in case system is busy
      if (firstPollRecs.count() == 0) firstPollRecs else {
        logger.info("KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to "
          + (MAX_POLLTIME_MS * 2) / 1000 + " sec.")
        consumer.poll(MAX_POLLTIME_MS * 2)
      }
    }
    records.forEach(rec => {
      if (rec.value() == expectedMessage) readExpectedRecord = true
    })
  } catch {
    case e: Throwable => // log error
  } finally {
    consumer.close()
  }
  readExpectedRecord
}
private def initProps(propsType: String): Properties = {
  val prop = new Properties()
  prop.put("bootstrap.servers", kafkaServer + ":" + kafkaPort)
  propsType match {
    case "producer" => {
      prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("acks", "1")
      prop.put("producer.type", "sync")
      prop.put("retries", "3")
      prop.put("linger.ms", "5")
    }
    case "consumer" => {
      prop.put("group.id", groupId)
      prop.put("enable.auto.commit", "false")
      prop.put("auto.commit.interval.ms", "1000")
      prop.put("session.timeout.ms", "30000")
      prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      // poll just once, should only be one record for the heartbeat
      prop.put("max.poll.records", "1")
    }
  }
  prop
}
Now when I run the code, here's what it outputs in the console:
13:04:21 - Discovered coordinator serverName:9092 (id: 2147483647 rack: null) for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e.
13:04:23 INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:24 INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:25 INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e with generation 1
13:04:26 INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [HeartBeat_Topic.Service_5.2018-08-03.13_04_10.377-0] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:27 INFO c.p.p.l.util.KafkaHeartBeatUtil - KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to 60 sec.
And then nothing else, with no errors thrown, so no records are polled. Does anyone have any idea what's preventing the consume from happening? The subscribe seems to be successful, as I'm able to call listTopics and list partitions without a problem.
Your code has a bug. It seems your line:
if (firstPollRecs.count() == 0)
should instead say:
if (firstPollRecs.count() > 0)
Otherwise, you're passing on an empty firstPollRecs and then iterating over it, which obviously yields nothing.
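For clarity, a minimal Java rendering of the corrected retry logic (MAX_POLLTIME_MS, consumer, and logger are carried over from the question; String key/value types are assumed):
ConsumerRecords<String, String> firstPollRecs = consumer.poll(MAX_POLLTIME_MS);
ConsumerRecords<String, String> records;
if (firstPollRecs.count() > 0) {
    // the first poll already returned data, so keep it
    records = firstPollRecs;
} else {
    // nothing came back; retry once with a doubled timeout
    logger.info("KafkaHeartBeat: First poll had 0 records - trying again - doubling timeout to "
            + (MAX_POLLTIME_MS * 2) / 1000 + " sec.");
    records = consumer.poll(MAX_POLLTIME_MS * 2);
}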

RXJava - emit "clock tick" item since last item received

I have an Observable emitting items, and I would like to merge into it special items that act as "time ticks" since the last item was received.
I was trying to play around with timeout+onErrorXXX or intervals, but could not get it to work as expected.
import io.reactivex.Observable;
import io.reactivex.functions.Function;
import org.apache.log4j.Logger;
import org.junit.Test;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RXTest {
    private static final Logger log = Logger.getLogger(RXTest.class);

    @Test
    public void rxTest() throws InterruptedException {
        log.info("Starting");
        Observable.range(0, 26)
            .concatMap(
                item -> Observable.just(item)
                    .delay(item, TimeUnit.SECONDS)
            )
            .timeout(100, TimeUnit.MILLISECONDS)
            // .retry()
            .onErrorResumeNext((Function) throwable -> {
                if (throwable instanceof TimeoutException) {
                    return Observable.just(-1);
                }
                throw new RuntimeException((Throwable) throwable);
            })
            .subscribe(
                item -> log.info("Received " + item),
                throwable -> log.error("Thrown" + throwable),
                () -> log.info("Completed")
            );
        Thread.sleep(30000);
    }
}
I would expect it to output something like:
00:00.000 Received 0
00:00.100 Received -1
00:00.200 Received -1
... (more Received -1 every 100 millis)
00:01.000 Received 1
00:01.100 Received -1
00:01.200 Received -1
...
00:03.000 Received 2
00:03.100 Received -1
00:03.200 Received -1
...
But instead, it receives -1 only once and then completes.
EDIT
Hopefully this marble diagram makes it easier to understand:
I don't clearly understand what you want, but you will get the expected output if you change onErrorResumeNext() to return
Observable.interval(100, TimeUnit.MILLISECONDS)
    .take(2)
    .map(__ -> -1)
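Put back into the original chain, the suggestion could look like the sketch below (RxJava 2 assumed; the explicit Function cast mirrors the original code and resolves the onErrorResumeNext overload; note the source is dropped after the first timeout, so the stream completes after the two ticks):
import io.reactivex.Observable;
import io.reactivex.functions.Function;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RXTickSketch {
    public static void main(String[] args) throws InterruptedException {
        Observable.range(0, 26)
            .concatMap(item -> Observable.just(item).delay(item, TimeUnit.SECONDS))
            .timeout(100, TimeUnit.MILLISECONDS)
            .onErrorResumeNext((Function<Throwable, Observable<Integer>>) throwable -> {
                if (throwable instanceof TimeoutException) {
                    // emit tick items instead of a single -1
                    return Observable.interval(100, TimeUnit.MILLISECONDS)
                            .take(2)
                            .map(tick -> -1);
                }
                return Observable.error(throwable);
            })
            .subscribe(
                item -> System.out.println("Received " + item),
                throwable -> System.err.println("Thrown " + throwable),
                () -> System.out.println("Completed"));
        Thread.sleep(30000);
    }
}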

Why am I not getting hystrix metrics?

I am trying to use Hystrix to monitor a certain network call, but all the metrics I try to monitor are always empty. What am I doing wrong?
I simulate a network call by implementing a (somewhat) RESTful interface that returns a pow calculation:
GetPowerCommand gpc = new GetPowerCommand(5, 82);
powerMetrics = gpc.getMetrics();
This is how I call the Hystrix command and expect to get some metrics (at least Requests: not 0):
boolean run = true;
while (run) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
        run = false;
    }
    System.out.println("GetPowerCommand.run(): " + gpc.run());
    System.out.println("GetPowerCommand.run(): " + gpc.run());
    System.out.println("getStatsStringFromMetrics(powerMetrics): " + getStatsStringFromMetrics(powerMetrics));
}
But all I get is:
GetPowerCommand.run(): <p>I guess .. </p><p>2^5 = 32</p>
GetPowerCommand.run(): <p>I guess .. </p><p>2^5 = 32</p>
getStatsStringFromMetrics(powerMetrics): Requests: 0 Errors: 0 (0%) Mean: 0 50th: 0 75th: 0 90th: 0 99th: 0
GetPowerCommand.run(): <p>I guess .. </p><p>2^5 = 32</p>
GetPowerCommand.run(): <p>I guess .. </p><p>2^5 = 32</p>
getStatsStringFromMetrics(powerMetrics): Requests: 0 Errors: 0 (0%) Mean: 0 50th: 0 75th: 0 90th: 0 99th: 0
Edit: my metrics retrieval method:
private static String getStatsStringFromMetrics(HystrixCommandMetrics metrics) {
    StringBuilder m = new StringBuilder();
    if (metrics != null) {
        HealthCounts health = metrics.getHealthCounts();
        m.append("Requests: ").append(health.getTotalRequests()).append(" ");
        m.append("Errors: ").append(health.getErrorCount()).append(" (").append(health.getErrorPercentage()).append("%) ");
        m.append("Mean: ").append(metrics.getTotalTimeMean()).append(" ");
        m.append("50th: ").append(metrics.getExecutionTimePercentile(50)).append(" ");
        m.append("75th: ").append(metrics.getExecutionTimePercentile(75)).append(" ");
        m.append("90th: ").append(metrics.getExecutionTimePercentile(90)).append(" ");
        m.append("99th: ").append(metrics.getExecutionTimePercentile(99)).append(" ");
    }
    return m.toString();
}
You have already answered your own question: use execute() instead of run(). Calling run() directly bypasses the Hystrix machinery entirely, so nothing is recorded in the metrics. Have a look also here.
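A minimal sketch of the changed loop (GetPowerCommand and getStatsStringFromMetrics come from the question; building a fresh command per call is an extra assumption here, since a HystrixCommand instance may only be executed once):
boolean run = true;
while (run) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
        run = false;
    }
    // execute() runs run() inside the Hystrix machinery, so the call is recorded
    GetPowerCommand cmd = new GetPowerCommand(5, 82);
    System.out.println("GetPowerCommand.execute(): " + cmd.execute());
    System.out.println("getStatsStringFromMetrics: " + getStatsStringFromMetrics(cmd.getMetrics()));
}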
