I have an Observable emitting items, and I would like to merge into it special items that act as "time ticks" since the last item was received.
I tried playing around with timeout + onErrorXXX and with intervals, but could not get it to work as expected.
import io.reactivex.Observable;
import io.reactivex.functions.Function;
import org.apache.log4j.Logger;
import org.junit.Test;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RXTest {
    private static final Logger log = Logger.getLogger(RXTest.class);

    @Test
    public void rxTest() throws InterruptedException {
        log.info("Starting");
        Observable.range(0, 26)
                .concatMap(item -> Observable.just(item)
                        .delay(item, TimeUnit.SECONDS))
                .timeout(100, TimeUnit.MILLISECONDS)
                // .retry()
                .onErrorResumeNext((Function<Throwable, Observable<Integer>>) throwable -> {
                    if (throwable instanceof TimeoutException) {
                        return Observable.just(-1);
                    }
                    throw new RuntimeException(throwable);
                })
                .subscribe(
                        item -> log.info("Received " + item),
                        throwable -> log.error("Thrown " + throwable),
                        () -> log.info("Completed")
                );
        Thread.sleep(30000);
    }
}
I would expect it to output something like:
00:00.000 Received 0
00:00.100 Received -1
00:00.200 Received -1
... (more Received -1 every 100 millis)
00:01.000 Received 1
00:01.100 Received -1
00:01.200 Received -1
...
00:03.000 Received 2
00:03.100 Received -1
00:03.200 Received -1
...
But instead, it receives -1 only once and then completes.
EDIT
Hopefully this marble diagram makes it easier to understand:
I don't entirely understand what you want, but note that Observable.just(-1) emits a single item and then completes, which in turn completes the whole stream. You will get the expected output if you change onErrorResumeNext() to return:
Observable.interval(100, TimeUnit.MILLISECONDS)
        .take(2)
        .map(__ -> -1)
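For context, here is a minimal sketch (not from the original answer) of how that fallback slots into the question's chain, with take(2) capping the fallback at two ticks:

Observable.range(0, 26)
        .concatMap(item -> Observable.just(item).delay(item, TimeUnit.SECONDS))
        .timeout(100, TimeUnit.MILLISECONDS)
        .onErrorResumeNext((Function<Throwable, Observable<Integer>>) throwable -> {
            if (throwable instanceof TimeoutException) {
                // emit a -1 tick every 100 ms; after take(2) the fallback completes,
                // and with it the whole stream, because onErrorResumeNext replaces
                // the remainder of the source
                return Observable.interval(100, TimeUnit.MILLISECONDS)
                        .take(2)
                        .map(tick -> -1);
            }
            throw new RuntimeException(throwable);
        })
        .subscribe(
                item -> log.info("Received " + item),
                throwable -> log.error("Thrown " + throwable),
                () -> log.info("Completed"));

If the goal is ticks that keep firing until the next real item arrives (as in the expected output above), a different operator is needed, for example switching the inner source on each item with switchMap; the fallback approach shown here terminates the stream once the fallback completes.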
I am trying to simulate a delay while emitting items in a specific sequence.
Here is how I am trying to simulate the problem:
List<Integer> integers = new ArrayList<>();
integers.add(1);
integers.add(2);
integers.add(3);
integers.add(4);

Disposable d = Observable
        .just(integers)
        .flatMap(integers1 -> {
            return Observable
                    .zip(Observable.just(1L).concatWith(Observable.interval(10, 5, TimeUnit.SECONDS)),
                            Observable.fromIterable(integers1),
                            (aLong, integer1) -> new Pair<Long, Integer>(aLong, integer1))
                    .flatMap(longIntegerPair -> {
                        System.out.println("DATA " + longIntegerPair.getValue());
                        return Observable.just(longIntegerPair.getValue());
                    })
                    .toList()
                    .toObservable();
        })
        .subscribe(integers1 -> {
            System.out.println("END");
        }, throwable -> {
            System.out.println("Error " + throwable.getMessage());
        });
The output of the above code is
DATA 1
wait for 10 seconds
DATA 2
wait for 5 seconds
DATA 3
wait for 5 seconds
DATA 4
wait for 5 seconds
What I am expecting is to perform an operation once the 10-second or 5-second delay is over at each stage, but I am not sure where to inject that part in the current flow:
DATA 1
wait for 10 seconds
[perform operation]
DATA 2
wait for 5 seconds
[perform operation]
DATA 3
wait for 5 seconds
[perform operation]
DATA 4
wait for 5 seconds
[perform operation]
Use concatMap to keep the sequence ordered and use delay to postpone the processing of each item, to get your printout pattern:
Observable
        .zip(Observable.just(-1L).concatWith(Observable.interval(10, 5, TimeUnit.SECONDS)),
                Observable.range(1, 5),
                (aLong, integer1) -> new Pair<Long, Integer>(aLong, integer1))
        .concatMap(longIntegerPair -> {
            System.out.println("DATA " + longIntegerPair.getValue());
            return Observable.just(longIntegerPair.getValue())
                    .delay(longIntegerPair.getKey() < 0 ? 10 : 5, TimeUnit.SECONDS)
                    .flatMap(value -> {
                        System.out.println("[performing operation] " + value);
                        return Observable.just(value);
                    });
        })
        .toList()
        .toObservable()
        .blockingSubscribe();
My case is shown below:
I have two web requests, request1 and request2; the input of request2 comes from the output of request1. I want to set a timeout for each of the two requests. The expected cost of request1 is 2s and of request2 is 3s, so I want to set the timeout of request1 to 3s and the timeout of request2 to 4s. From the documentation of Mono#timeout, I thought this would be possible, but unfortunately the second timeout is calculated by accumulation (it covers the whole chain up to that point), so I'm confused about the meaning of this operator.
Documentation of Mono#timeout(Duration timeout): https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#timeout-java.time.Duration-
public final Mono<T> timeout(Duration timeout)
Propagate a TimeoutException in case no item arrives within the given Duration.
Parameters:
timeout - the timeout before the onNext signal from this Mono
Returns: a Mono that can time out
Sample code for my case:
Mono<String> startMono = Mono.just("start");
String result = startMono
        .map(x -> {
            log.info("received message: {}", x);
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "#1 enriched: " + x;
        })
        .timeout(Duration.ofSeconds(3))
        .onErrorResume(throwable -> {
            log.warn("Caught exception, apply fallback behavior #1", throwable);
            return Mono.just("item from backup #1");
        })
        .map(y -> {
            log.info("received message: {}", y);
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "#2 enriched: " + y;
        })
        .timeout(Duration.ofSeconds(4))
        // no TimeoutException is thrown if I set the second timeout to 6s (6s > 2s + 3s)
        // .timeout(Duration.ofSeconds(6))
        .onErrorResume(throwable -> {
            log.warn("Caught exception, apply fallback behavior #2", throwable);
            return Mono.just("item from backup #2");
        })
        .block();
log.info("result: {}", result);
Log output and exception from the above code:
16:46:51.080 [main] INFO MonoDemo - received message: start
16:46:53.095 [elastic-2] INFO MonoDemo - received message: #1 enriched: start
16:46:55.079 [parallel-1] WARN MonoDemo - Caught exception, apply fallback behavior #2
java.util.concurrent.TimeoutException: Did not observe any item or terminal signal within 4000ms in 'flatMap' (and no fallback has been configured)
at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.handleTimeout(FluxTimeout.java:288) [reactor-core-3.3.10.RELEASE.jar:3.3.10.RELEASE]
at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.doTimeout(FluxTimeout.java:273) [reactor-core-3.3.10.RELEASE.jar:3.3.10.RELEASE]
at reactor.core.publisher.FluxTimeout$TimeoutTimeoutSubscriber.onNext(FluxTimeout.java:395) [reactor-core-3.3.10.RELEASE.jar:3.3.10.RELEASE]
at reactor.core.publisher.StrictSubscriber.onNext(StrictSubscriber.java:89) [reactor-core-3.3.10.RELEASE.jar:3.3.10.RELEASE]
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:73) [reactor-core-3.3.10.RELEASE.jar:3.3.10.RELEASE]
at reactor.core.publisher.MonoDelay$MonoDelayRunnable.run(MonoDelay.java:117) [reactor-core-3.3.10.RELEASE.jar:3.3.10.RELEASE]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68) [reactor-core-3.3.10.RELEASE.jar:3.3.10.RELEASE]
at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28) [reactor-core-3.3.10.RELEASE.jar:3.3.10.RELEASE]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
16:46:55.095 [main] INFO MonoDemo - result: item from backup #2
The timeout operator measures the time from the subscription until the arrival of the first signal. More on this here.
If you'd like to apply the timeout to the second operation only, then you need to put the timeout operator in a place where only the second request is in scope. See the following:
public void execute() {
    firstRequest()
            .onErrorResume(throwable -> secondRequest())
            .onErrorReturn("some static fallback value if second failed as well")
            .block();
}

private Mono<String> firstRequest() {
    return Mono.delay(Duration.ofSeconds(2))
            .thenReturn("first")
            .timeout(Duration.ofSeconds(3));
    // additional mapping related to the first request can be done here
}

private Mono<String> secondRequest() {
    return Mono.delay(Duration.ofSeconds(3))
            .thenReturn("second")
            .timeout(Duration.ofSeconds(4));
    // additional mapping related to the second request can be done here
}
By moving the timeout operator inside the private methods, we ensure that only the duration of those particular Monos is measured, not the whole chain.
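Since in this case the input of request2 comes from the output of request1, the same structure works when the two requests are chained rather than used as fallbacks. A sketch under that assumption (the second method now takes the first result as a parameter; names and delays are illustrative, not from the original code):

public void execute() {
    firstRequest()
            .flatMap(first -> secondRequest(first)) // request2 consumes request1's output
            .onErrorReturn("some static fallback value")
            .block();
}

private Mono<String> firstRequest() {
    return Mono.delay(Duration.ofSeconds(2))
            .thenReturn("first")
            .timeout(Duration.ofSeconds(3)); // measures only the first request
}

private Mono<String> secondRequest(String input) {
    return Mono.delay(Duration.ofSeconds(3))
            .thenReturn(input + " -> second")
            .timeout(Duration.ofSeconds(4)); // starts counting only when flatMap subscribes
}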
I am trying to understand CompletableFutures and the chaining of calls that return completed futures, and I have created the example below, which kind of simulates two calls to a database.
The first method is supposed to give a completable future with a list of userIds, and then I need to make a call to another method, providing a userId, to get the user (a string in this case).
To summarise:
1. fetch the ids
2. fetch a list of the users corresponding to those ids
I created simple methods to simulate the responses with sleeping threads.
Please check the code below:
public class PipelineOfTasksExample {
    private Map<Long, String> db = new HashMap<>();

    PipelineOfTasksExample() {
        db.put(1L, "user1");
        db.put(2L, "user2");
        db.put(3L, "user3");
        db.put(4L, "user4");
    }

    private CompletableFuture<List<Long>> returnUserIdsFromDb() {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("building the list of Ids" + " - thread: " + Thread.currentThread().getName());
        return CompletableFuture.supplyAsync(() -> Arrays.asList(1L, 2L, 3L, 4L));
    }

    private CompletableFuture<String> fetchById(Long id) {
        CompletableFuture<String> cfId = CompletableFuture.supplyAsync(() -> db.get(id));
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("fetching id: " + id + " -> " + db.get(id) + " thread: " + Thread.currentThread().getName());
        return cfId;
    }

    public static void main(String[] args) {
        PipelineOfTasksExample example = new PipelineOfTasksExample();
        CompletableFuture<List<String>> result = example.returnUserIdsFromDb()
                .thenCompose(listOfIds -> CompletableFuture.supplyAsync(
                        () -> listOfIds.parallelStream()
                                .map(id -> example.fetchById(id).join())
                                .collect(Collectors.toList())));
        System.out.println(result.join());
    }
}
My question is: does the join call (example.fetchById(id).join()) ruin the nonblocking nature of the process? If so, how can I solve this problem?
Thank you in advance.
Your example is a bit odd, as you are slowing down the main thread in returnUserIdsFromDb() before any asynchronous operation even starts; likewise, fetchById slows down the caller rather than the asynchronous operation, which defeats the entire purpose of asynchronous operations.
Further, instead of .thenCompose(listOfIds -> CompletableFuture.supplyAsync(() -> …)) you can simply use .thenApplyAsync(listOfIds -> …).
So a better example might be
public class PipelineOfTasksExample {
    private final Map<Long, String> db = LongStream.rangeClosed(1, 4).boxed()
            .collect(Collectors.toMap(id -> id, id -> "user" + id));

    PipelineOfTasksExample() {}

    private static <T> T slowDown(String op, T result) {
        LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(500));
        System.out.println(op + " -> " + result + " thread: "
                + Thread.currentThread().getName() + ", "
                + POOL.getPoolSize() + " threads");
        return result;
    }

    private CompletableFuture<List<Long>> returnUserIdsFromDb() {
        System.out.println("trigger building the list of Ids - thread: "
                + Thread.currentThread().getName());
        return CompletableFuture.supplyAsync(
                () -> slowDown("building the list of Ids", Arrays.asList(1L, 2L, 3L, 4L)),
                POOL);
    }

    private CompletableFuture<String> fetchById(Long id) {
        System.out.println("trigger fetching id: " + id + " thread: "
                + Thread.currentThread().getName());
        return CompletableFuture.supplyAsync(
                () -> slowDown("fetching id: " + id, db.get(id)), POOL);
    }

    static ForkJoinPool POOL = new ForkJoinPool(2);

    public static void main(String[] args) {
        PipelineOfTasksExample example = new PipelineOfTasksExample();
        CompletableFuture<List<String>> result = example.returnUserIdsFromDb()
                .thenApplyAsync(listOfIds ->
                        listOfIds.parallelStream()
                                .map(id -> example.fetchById(id).join())
                                .collect(Collectors.toList()),
                        POOL);
        System.out.println(result.join());
    }
}
which prints something like
trigger building the list of Ids - thread: main
building the list of Ids -> [1, 2, 3, 4] thread: ForkJoinPool-1-worker-1, 1 threads
trigger fetching id: 2 thread: ForkJoinPool-1-worker-0
trigger fetching id: 3 thread: ForkJoinPool-1-worker-1
trigger fetching id: 4 thread: ForkJoinPool-1-worker-2
fetching id: 4 -> user4 thread: ForkJoinPool-1-worker-3, 4 threads
fetching id: 2 -> user2 thread: ForkJoinPool-1-worker-3, 4 threads
fetching id: 3 -> user3 thread: ForkJoinPool-1-worker-2, 4 threads
trigger fetching id: 1 thread: ForkJoinPool-1-worker-3
fetching id: 1 -> user1 thread: ForkJoinPool-1-worker-2, 4 threads
[user1, user2, user3, user4]
which might be a surprising number of threads at first glance.
The answer is that join() may block the thread, but if this happens inside a worker thread of a Fork/Join pool, this situation will be detected and a new compensation thread will be started, to ensure the configured target parallelism.
As a special case, when the default Fork/Join pool is used, the implementation may pick up new pending tasks within the join() method, to ensure progress within the same thread.
So the code will always make progress, and there's nothing wrong with calling join() occasionally if the alternatives are much more complicated, but there is some danger of excessive resource consumption if it is used too much. After all, the reason to use thread pools is to limit the number of threads.
The alternative is to use chained dependent operations where possible.
public class PipelineOfTasksExample {
    private final Map<Long, String> db = LongStream.rangeClosed(1, 4).boxed()
            .collect(Collectors.toMap(id -> id, id -> "user" + id));

    PipelineOfTasksExample() {}

    private static <T> T slowDown(String op, T result) {
        LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(500));
        System.out.println(op + " -> " + result + " thread: "
                + Thread.currentThread().getName() + ", "
                + POOL.getPoolSize() + " threads");
        return result;
    }

    private CompletableFuture<List<Long>> returnUserIdsFromDb() {
        System.out.println("trigger building the list of Ids - thread: "
                + Thread.currentThread().getName());
        return CompletableFuture.supplyAsync(
                () -> slowDown("building the list of Ids", Arrays.asList(1L, 2L, 3L, 4L)),
                POOL);
    }

    private CompletableFuture<String> fetchById(Long id) {
        System.out.println("trigger fetching id: " + id + " thread: "
                + Thread.currentThread().getName());
        return CompletableFuture.supplyAsync(
                () -> slowDown("fetching id: " + id, db.get(id)), POOL);
    }

    static ForkJoinPool POOL = new ForkJoinPool(2);

    public static void main(String[] args) {
        PipelineOfTasksExample example = new PipelineOfTasksExample();
        CompletableFuture<List<String>> result = example.returnUserIdsFromDb()
                .thenComposeAsync(listOfIds -> {
                    // submit all asynchronous jobs first, without joining any of them
                    List<CompletableFuture<String>> jobs = listOfIds.parallelStream()
                            .map(id -> example.fetchById(id))
                            .collect(Collectors.toList());
                    // run the joins only after allOf signals that every job has completed
                    return CompletableFuture.allOf(jobs.toArray(new CompletableFuture<?>[0]))
                            .thenApply(_void -> jobs.stream()
                                    .map(CompletableFuture::join)
                                    .collect(Collectors.toList()));
                }, POOL);
        System.out.println(result.join());
        System.out.println(ForkJoinPool.commonPool().getPoolSize());
    }
}
The difference is that first, all asynchronous jobs are submitted, then, a dependent action calling join on them is scheduled, to be executed only when all jobs have completed, so these join invocations will never block. Only the final join call at the end of the main method may block the main thread.
So this prints something like
trigger building the list of Ids - thread: main
building the list of Ids -> [1, 2, 3, 4] thread: ForkJoinPool-1-worker-1, 1 threads
trigger fetching id: 3 thread: ForkJoinPool-1-worker-1
trigger fetching id: 2 thread: ForkJoinPool-1-worker-0
trigger fetching id: 4 thread: ForkJoinPool-1-worker-1
trigger fetching id: 1 thread: ForkJoinPool-1-worker-0
fetching id: 4 -> user4 thread: ForkJoinPool-1-worker-1, 2 threads
fetching id: 3 -> user3 thread: ForkJoinPool-1-worker-0, 2 threads
fetching id: 2 -> user2 thread: ForkJoinPool-1-worker-1, 2 threads
fetching id: 1 -> user1 thread: ForkJoinPool-1-worker-0, 2 threads
[user1, user2, user3, user4]
showing that no compensation threads had to be created, so the number of threads matches the configured target parallelism.
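As an aside, the allOf-then-join step above can be factored into a small reusable helper; a minimal sketch (not part of the original answer; the helper name is arbitrary):

// Completes with the list of results once every job has completed;
// the join() calls inside thenApply never block, because allOf has already fired.
static <T> CompletableFuture<List<T>> sequence(List<CompletableFuture<T>> jobs) {
    return CompletableFuture.allOf(jobs.toArray(new CompletableFuture<?>[0]))
            .thenApply(ignored -> jobs.stream()
                    .map(CompletableFuture::join)
                    .collect(Collectors.toList()));
}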
Note that if the actual work is done in a background thread rather than within the fetchById method itself, you now don’t need a parallel stream anymore, as there is no blocking join() call. For such scenarios, just using stream() will usually result in a higher performance.
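For illustration, here is how the composing step of the example above might look with a sequential stream (a fragment only; the rest of the example is unchanged):

.thenComposeAsync(listOfIds -> {
    // sequential stream: submitting the asynchronous jobs is cheap, nothing blocks here
    List<CompletableFuture<String>> jobs = listOfIds.stream()
            .map(example::fetchById)
            .collect(Collectors.toList());
    return CompletableFuture.allOf(jobs.toArray(new CompletableFuture<?>[0]))
            .thenApply(_void -> jobs.stream()
                    .map(CompletableFuture::join)
                    .collect(Collectors.toList()));
}, POOL)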
I'm using the Kafka JDK client ver 0.10.2.1. I am able to produce simple messages to Kafka for a "heartbeat" test, but I cannot consume a message from that same topic using the SDK. I am able to consume that message when I go into the Kafka CLI, so I have confirmed the message is there. Here's the function I'm using to consume from my Kafka server, with the props; I pass in the message I produced to the topic only after I have confirmed that the produce() was successful (I can post that function later if requested):
private def consumeFromKafka(topic: String, expectedMessage: String): Boolean = {
  val props: Properties = initProps("consumer")
  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(List(topic).asJava)
  var readExpectedRecord = false
  try {
    val records = {
      val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
      // increase timeout and try again if nothing comes back the first time in case system is busy
      if (firstPollRecs.count() == 0) firstPollRecs else {
        logger.info("KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to "
          + (MAX_POLLTIME_MS * 2) / 1000 + " sec.")
        consumer.poll(MAX_POLLTIME_MS * 2)
      }
    }
    records.forEach(rec => {
      if (rec.value() == expectedMessage) readExpectedRecord = true
    })
  } catch {
    case e: Throwable => // log error
  } finally {
    consumer.close()
  }
  readExpectedRecord
}
private def initProps(propsType: String): Properties = {
  val prop = new Properties()
  prop.put("bootstrap.servers", kafkaServer + ":" + kafkaPort)
  propsType match {
    case "producer" =>
      prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("acks", "1")
      prop.put("producer.type", "sync")
      prop.put("retries", "3")
      prop.put("linger.ms", "5")
    case "consumer" =>
      prop.put("group.id", groupId)
      prop.put("enable.auto.commit", "false")
      prop.put("auto.commit.interval.ms", "1000")
      prop.put("session.timeout.ms", "30000")
      prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      // poll just once, should only be one record for the heartbeat
      prop.put("max.poll.records", "1")
  }
  prop
}
Now when I run the code, here's what it outputs in the console:
13:04:21 - Discovered coordinator serverName:9092 (id: 2147483647 rack: null) for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e.
13:04:23 INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:24 INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:25 INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e with generation 1
13:04:26 INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [HeartBeat_Topic.Service_5.2018-08-03.13_04_10.377-0] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:27 INFO c.p.p.l.util.KafkaHeartBeatUtil - KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to 60 sec.
And then nothing else: no errors are thrown, and no records are polled. Does anyone have any idea what's preventing the consume from happening? The subscribe seems to be successful, as I'm able to successfully call listTopics and list partitions without a problem.
Your code has a bug. It seems your line:
if (firstPollRecs.count() == 0)
should instead be:
if (firstPollRecs.count() > 0)
Otherwise, when the first poll comes back empty, you return that empty firstPollRecs and iterate over it, which obviously finds nothing; and when the first poll does return records (as your log line shows happened), you throw them away and poll again, by which time the consumer's position has already moved past the record, so the second poll returns nothing.
I have the following configuration in application.conf:
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 100
  mailbox-push-timeout-time = 3s
}

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = INFO
  daemonic = on
}
This is how I configured my actor:
public class MyTestActor extends UntypedActor implements RequiresMessageQueue<BoundedMessageQueueSemantics> {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof String) {
            Thread.sleep(500);
            System.out.println("message = " + message);
        } else {
            System.out.println("Unknown Message");
        }
    }
}
Now this is how I instantiate this actor:
myTestActor = myActorSystem.actorOf(Props.create(MyTestActor.class).withMailbox("bounded-mailbox"), "simple-actor");
After that, I send 3000 messages to this actor:
for (int i = 0; i < 3000; i++) {
    myTestActor.tell(guestName, null);
}
What I expect to see is an exception that my queue is full, but my messages are printed inside the onReceive method every half second, as if nothing happened. So I believe my mailbox configuration is not being applied.
What am I doing wrong?
Updated: I created an actor which subscribes to dead letter events:
deadLetterActor = myActorSystem.actorOf(Props.create(DeadLetterMonitor.class),"deadLetter-monitor");
and installed Kamon for queue monitoring.
After sending 3000 messages to the actor, Kamon shows me the following:
Actor: user/simple-actor
MailBox size:
Min: 100
Avg.: 100.0
Max: 101
Actor: system/deadLetterListener
MailBox size:
Min: 0
Avg.: 0.0
Max: 0
Actor: system/deadLetter-monitor
MailBox size:
Min: 0
Avg.: 0.0
Max: 0
By default, Akka discards overflowing messages into DeadLetters, and the actor doesn't stop processing:
https://github.com/akka/akka/blob/876b8045a1fdb9cdd880eeab8b1611aa976576f6/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala#L411
But the sending thread will be blocked for the interval configured by mailbox-push-timeout-time before the message is discarded. Try decreasing it to 1ms and see that the following test passes:
import java.util.concurrent.atomic.AtomicInteger

import akka.actor._
import com.typesafe.config.Config
import com.typesafe.config.ConfigFactory._
import org.specs2.mutable.Specification

class BoundedActorSpec extends Specification {

  args(sequential = true)

  def config: Config = load(parseString(
    """
    bounded-mailbox {
      mailbox-type = "akka.dispatch.BoundedMailbox"
      mailbox-capacity = 100
      mailbox-push-timeout-time = 1ms
    }
    """))

  val system = ActorSystem("system", config)

  "some messages should go to dead letters" in {
    system.eventStream.subscribe(system.actorOf(Props(classOf[DeadLetterMetricsActor])), classOf[DeadLetter])
    val myTestActor = system.actorOf(Props(classOf[MyTestActor]).withMailbox("bounded-mailbox"))
    for (i <- 0 until 3000) {
      myTestActor.tell("guestName", null)
    }
    Thread.sleep(100)
    system.shutdown()
    system.awaitTermination()
    DeadLetterMetricsActor.deadLetterCount.get must be greaterThan(0)
  }
}

class MyTestActor extends Actor {
  def receive = {
    case message: String =>
      Thread.sleep(500)
      println("message = " + message)
    case _ => println("Unknown Message")
  }
}

object DeadLetterMetricsActor {
  val deadLetterCount = new AtomicInteger
}

class DeadLetterMetricsActor extends Actor {
  def receive = {
    case _: DeadLetter => DeadLetterMetricsActor.deadLetterCount.incrementAndGet()
  }
}