I am using the resilience4j library to retry some code. I have the code below and expect it to run 4 times. If I throw IllegalArgumentException it works, but if I throw ConnectException it doesn't.
object Test extends App {
  val retryConf = RetryConfig.custom()
    .maxAttempts(4)
    .retryOnException(_ => true)
    //.retryExceptions(classOf[ConnectException])
    .build
  val retryRegistry = RetryRegistry.of(retryConf)
  val retryConfig = retryRegistry.retry("test", retryConf)

  val supplier: Supplier[Unit] = () => {
    println("Run")
    throw new IllegalArgumentException("Test")
    //throw new ConnectException("Test")
  }

  val decoratedSupplier = Decorators.ofSupplier(supplier).withRetry(retryConfig).get()
}
I expected that retry to retry on all exceptions.
You are creating a decorated supplier which only catches RuntimeExceptions, whilst ConnectException is not a RuntimeException:
... decorateSupplier(Retry retry, Supplier<T> supplier) {
    return () -> {
        Retry.Context<T> context = retry.context();
        do try {
            ...
        } catch (RuntimeException runtimeException) {
            ...
Look through Retry.java and choose a decorator that catches Exception, for example decorateCheckedFunction:
val registry =
  RetryRegistry.of(RetryConfig.custom().maxAttempts(4).build())
val retry = registry.retry("my")

Retry.decorateCheckedFunction(retry, (x: Int) => {
  println(s"woohoo $x")
  throw new ConnectException("Test")
  42
}).apply(1)
which outputs
woohoo 1
woohoo 1
woohoo 1
woohoo 1
Exception in thread "main" java.rmi.ConnectException: Test
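If you want to stay closer to your Supplier-based code, Retry.decorateCallable is another option, because Callable.call() declares throws Exception and the decorator therefore catches checked exceptions too. A rough sketch (assuming a resilience4j version that has decorateCallable; the object and retry names are just placeholders):

import java.rmi.ConnectException
import java.util.concurrent.Callable
import io.github.resilience4j.retry.{Retry, RetryConfig, RetryRegistry}

object CheckedRetryTest extends App {
  val registry = RetryRegistry.of(RetryConfig.custom().maxAttempts(4).build())
  val retry = registry.retry("checked")

  // Callable.call() declares `throws Exception`, so checked exceptions
  // such as ConnectException are retried as well.
  val callable: Callable[Int] = () => {
    println("Run")
    throw new ConnectException("Test")
  }

  Retry.decorateCallable(retry, callable).call()
}

This should print Run four times and then rethrow the ConnectException, just like the decorateCheckedFunction example above.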
Personally, I use softwaremill/retry.
I want to catch exceptions thrown from a Flux. My code looks like this:
try {
    Flux.just("key1", "key2", "key3")
        .doOnNext(System.out::println)
        .map(k -> {
            if (!k.equals("key1")) {
                throw new RuntimeException("Not key1"); // 1
            }
            return "External Value, key:" + k;
        })
        .subscribe(System.out::println); // 2
} catch (Throwable e) {
    System.out.println("Got exception"); // 3
}
The output is:
key1
External Value, key:key1
key2
[ERROR] (main) Operator called default onErrorDropped - reactor.core.Exceptions$ErrorCallbackNotImplemented: java.lang.RuntimeException: Not key1
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.lang.RuntimeException: Not key1
Caused by: java.lang.RuntimeException: Not key1
at com.cxd.study.reactor.HandlingErrors.lambda$catchException$7(HandlingErrors.java:153)
...
It seems that my catch at step 3 is never reached.
I know I can reach the exception at step 2 like this: .subscribe(System.out::println, e -> System.out.println("Got Exception")).
But how can I catch the exception thrown at step 1 out of the flux?
You can use one of the onError* operators (for example onErrorReturn or onErrorResume) to handle error cases, or the doOnError() operator if you e.g. want to log the exception.
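For illustration, a minimal sketch of those operators (written in Scala like most of the code in this post, against the same reactor.core.publisher.Flux API; the explicit Consumer/Function vals just keep the Scala-to-Java SAM conversions unambiguous, and the fallback value is made up):

import java.util.function.{Consumer, Function => JFunction}
import reactor.core.publisher.Flux

object FluxErrorHandling extends App {
  // Same transformation as in the question: fails for anything but "key1".
  val toValue: JFunction[String, String] = k => {
    if (!k.equals("key1")) throw new RuntimeException("Not key1")
    "External Value, key:" + k
  }
  val logError: Consumer[Throwable] = e => println("Got exception: " + e.getMessage)
  val fallback: JFunction[Throwable, Flux[String]] = _ => Flux.just("fallback value")
  val printValue: Consumer[String] = v => println(v)

  Flux.just("key1", "key2", "key3")
    .map[String](toValue)
    .doOnError(logError)       // side effect only: log the error as it passes by
    .onErrorResume(fallback)   // replace the failed sequence with a fallback publisher
    .subscribe(printValue)
}

Because the error is handled inside the pipeline, the surrounding try/catch from the question is never involved, which is also why it never fired in the first place.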
When testing code that does nasty reflective operations on Throwable fields, it fails on Android API 31. The reason is that the Throwable.suppressedExceptions field is no longer present.
I did more tests between Android API 30 and 31:
public class ThrowableInternalTest {
    private static final String TAG = "ThrowableInternal";

    @Test
    public void suppressedExceptionsFieldIsPresent() throws Exception {
        Log.w(TAG, "Android API: " + VERSION.SDK_INT);

        // Checking internal fields
        Field[] fields = Throwable.class.getDeclaredFields();
        String fieldNames = Stream.of(fields).map(Field::getName).collect(Collectors.joining(","));
        Log.w(TAG, "Throwable fields: " + fieldNames);

        // Manually setting suppressed exceptions
        Exception suppressed = new Exception();
        Exception exception = new Exception();
        exception.addSuppressed(suppressed);
        Log.w(TAG, "exception.getSuppressed(): " + exception.getSuppressed().length);

        // Using try-with-resource block which should have suppressed exceptions
        try {
            try (ThrowingClosable ignored = new ThrowingClosable()) {
                throw new Exception();
            }
        } catch (Throwable t) {
            Log.w(TAG, "try-with-resource suppressed: " + t.getSuppressed().length);
        }

        // Results:
        // minSdkVersion: 18
        // Android API: 30
        // Throwable fields: backtrace,cause,detailMessage,stackTrace,suppressedExceptions,serialVersionUID
        // exception.getSuppressed(): 0
        // try-with-resource suppressed: 0

        // minSdkVersion: 19
        // Android API: 30
        // Throwable fields: backtrace,cause,detailMessage,stackTrace,suppressedExceptions,serialVersionUID
        // exception.getSuppressed(): 1
        // try-with-resource suppressed: 1

        // minSdkVersion: 18
        // Android API: 31
        // Throwable fields: backtrace,cause,detailMessage,stackTrace,serialVersionUID
        // exception.getSuppressed(): 0
        // try-with-resource suppressed: 0

        // minSdkVersion: 19
        // Android API: 31
        // Throwable fields: backtrace,cause,detailMessage,stackTrace,serialVersionUID
        // exception.getSuppressed(): 1
        // try-with-resource suppressed: 1
    }

    private static class ThrowingClosable implements AutoCloseable {
        @Override
        public void close() throws Exception {
            throw new Exception();
        }
    }
}
When minSdkVersion is below 19, Throwable.getSuppressed() always returns an empty array; with a higher minSdkVersion it works properly.
However, the suppressedExceptions field is only present on API 30, not on API 31.
Yet in the Android SDK source code, the suppressedExceptions field is present on both API levels, 30 and 31.
I'm curious how API 31 can have suppressed exceptions working even without the suppressedExceptions field?
Also, why are suppressed exceptions not supported when minSdkVersion is below 19?
I'm using the Kafka JDK client ver 0.10.2.1. I am able to produce simple messages to Kafka for a "heartbeat" test, but I cannot consume a message from that same topic using the SDK. I am able to consume that message when I go into the Kafka CLI, so I have confirmed the message is there. Here's the function I'm using to consume from my Kafka server, with the props. I pass the message I produced to the topic only after I have confirmed the produce() was successful; I can post that function later if requested:
private def consumeFromKafka(topic: String, expectedMessage: String): Boolean = {
  val props: Properties = initProps("consumer")
  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(List(topic).asJava)
  var readExpectedRecord = false
  try {
    val records = {
      val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
      // increase timeout and try again if nothing comes back the first time in case system is busy
      if (firstPollRecs.count() == 0) firstPollRecs else {
        logger.info("KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to "
          + (MAX_POLLTIME_MS * 2)/1000 + " sec.")
        consumer.poll(MAX_POLLTIME_MS * 2)
      }
    }
    records.forEach(rec => {
      if (rec.value() == expectedMessage) readExpectedRecord = true
    })
  } catch {
    case e: Throwable => //log error
  } finally {
    consumer.close()
  }
  readExpectedRecord
}

private def initProps(propsType: String): Properties = {
  val prop = new Properties()
  prop.put("bootstrap.servers", kafkaServer + ":" + kafkaPort)
  propsType match {
    case "producer" => {
      prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("acks", "1")
      prop.put("producer.type", "sync")
      prop.put("retries", "3")
      prop.put("linger.ms", "5")
    }
    case "consumer" => {
      prop.put("group.id", groupId)
      prop.put("enable.auto.commit", "false")
      prop.put("auto.commit.interval.ms", "1000")
      prop.put("session.timeout.ms", "30000")
      prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      // poll just once, should only be one record for the heartbeat
      prop.put("max.poll.records", "1")
    }
  }
  prop
}
Now when I run the code, here's what it outputs in the console:
13:04:21 - Discovered coordinator serverName:9092 (id: 2147483647 rack: null) for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e.
13:04:23 INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:24 INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:25 INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e with generation 1
13:04:26 INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [HeartBeat_Topic.Service_5.2018-08-03.13_04_10.377-0] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:27 INFO c.p.p.l.util.KafkaHeartBeatUtil - KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to 60 sec.
And then nothing else, no errors thrown, so no records are polled. Does anyone have any idea what's preventing the 'consume' from happening? The subscribe seems to be successful, as I'm able to successfully call listTopics and list partitions with no problem.
Your code has a bug. It seems your line:
if (firstPollRecs.count() == 0)
should say this instead:
if (firstPollRecs.count() > 0)
Otherwise, you're returning the empty firstPollRecs and then iterating over that, which obviously yields nothing.
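For clarity, this is the same block with the condition flipped (a sketch reusing consumer, logger and MAX_POLLTIME_MS from the question):

val records = {
  val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
  if (firstPollRecs.count() > 0) firstPollRecs
  else {
    // only re-poll (with a doubled timeout) when the first poll came back empty
    logger.info("KafkaHeartBeat: First poll had 0 records - trying again - doubling timeout to "
      + (MAX_POLLTIME_MS * 2) / 1000 + " sec.")
    consumer.poll(MAX_POLLTIME_MS * 2)
  }
}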
I have code which looks like the below:
object ErrorTest {
  case class APIResults(status: String, col_1: Long, col_2: Double, ...)

  def funcA(rows: ArrayBuffer[Row])(implicit defaultFormats: DefaultFormats): ArrayBuffer[APIResults] = {
    // call some API and get results and return APIResults
    ...
  }

  // MARK: load properties
  val props = loadProperties()

  private def loadProperties(): Properties = {
    val configFile = new File("config.properties")
    val reader = new FileReader(configFile)
    val props = new Properties()
    props.load(reader)
    props
  }

  def main(args: Array[String]): Unit = {
    val prop_a = props.getProperty("prop_a")

    val session = Context.initialSparkSession()
    import session.implicits._

    val initialSet = ArrayBuffer.empty[Row]
    val addToSet = (s: ArrayBuffer[Row], v: Row) => (s += v)
    val mergePartitionSets = (p1: ArrayBuffer[Row], p2: ArrayBuffer[Row]) => (p1 ++= p2)

    val sql1 =
      s"""
        select * from tbl_a where ...
      """

    session.sql(sql1)
      .rdd.map { row => { implicit val formats = DefaultFormats; (row.getLong(6), row) } }
      .aggregateByKey(initialSet)(addToSet, mergePartitionSets)
      .repartition(40)
      .map { case (rowNumber, rows) => { implicit val formats = DefaultFormats; funcA(rows) } }
      .flatMap(x => x)
      .toDF()
      .write.mode(SaveMode.Overwrite).saveAsTable("tbl_b")
  }
}
When I run it via spark-submit, it throws Caused by: java.lang.NoClassDefFoundError: Could not initialize class staging_jobs.ErrorTest$. But if I move val props = loadProperties() into the first line of the main method, then there's no error anymore. Could anyone give me an explanation of this phenomenon? Thanks a lot!
Caused by: java.lang.NoClassDefFoundError: Could not initialize class staging_jobs.ErrorTest$
at staging_jobs.ErrorTest$$anonfun$main$1.apply(ErrorTest.scala:208)
at staging_jobs.ErrorTest$$anonfun$main$1.apply(ErrorTest.scala:208)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:243)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1341)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193)
... 8 more
I've met the same problem as you. I defined a method convert outside the main method. When I used it with dataframe.rdd.map{x => convert(x)} in main, NoClassDefFoundError: Could not initialize class Test$ happened.
But when I used a function object convertor, with the same code as the convert method, inside the main method, no error happened.
I used Spark 2.1.0 and Scala 2.11; it seems like a bug in Spark?
I guess the problem is that val props = loadProperties() defines a member of the outer object (the one containing main). This member is then initialized (or run) on the executors, which do not have the same environment as the driver.
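A minimal sketch of the workaround the question already describes: load the Properties inside main so it runs on the driver only, and keep no side-effecting vals at the object level (the try/finally around the reader is an added nicety, not part of the original code):

import java.io.{File, FileReader}
import java.util.Properties

object ErrorTest {
  def main(args: Array[String]): Unit = {
    val props = loadProperties()             // runs on the driver only
    val prop_a = props.getProperty("prop_a")

    // ... build the same pipeline as in the question; the executors still
    // initialize ErrorTest$ when the closures call funcA, but there is no
    // longer a field initializer that needs config.properties to exist there.
  }

  private def loadProperties(): Properties = {
    val reader = new FileReader(new File("config.properties"))
    try {
      val props = new Properties()
      props.load(reader)
      props
    } finally {
      reader.close()
    }
  }
}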
Assume part of my code is as below,
where doc is a List[Document] that contains stu_name and roll_number.
Sometimes stu_name and roll_number may be null.
I used Try to avoid a NullPointerException in the first two lines,
but why am I getting a NullPointerException again in val myRow?
val name = Try {Option.apply(doc.getFieldValue("stu_name"))}.getOrElse(null)
val rollNumber = Try {Option.apply(doc.getFieldValue("roll_number"))}.getOrElse(null)

val myRow = (
  doc.getFieldValue("ID").asInstanceOf[Int],    // can't be null
  name.getOrElse(null).toString,                // NullPointerException
  rollNumber.getOrElse(null).asInstanceOf[Int]  // NullPointerException
)
.....
.....
I'm getting the following error:
[2016-01-14 22:40:16,896] WARN o.a.s.s.TaskSetManager [] [akka://JobServer/user/context-supervisor/demeter] - Lost task 0.0 in stage 0.0 (TID 0, 10.29.23.136): java.lang.NullPointerException
at com.test.events.Monitoring$$anonfun$geteventTableReplicateDayFunc$1.apply(Monitoring.scala:75)
at com.test.events.Monitoring$$anonfun$geteventTableReplicateDayFunc$1.apply(Monitoring.scala:57)
at com.test.events.Monitoring$$anonfun$27.apply(Monitoring.scala:104)
at com.test.events.Monitoring$$anonfun$27.apply(Monitoring.scala:104)
I tried the following in the console but did not see any error:
scala> val a = Try (Option.apply("atar")).getOrElse(null)
a: Option[String] = Some(atar)
scala> a.getOrElse(null)
res16: String = atar
scala> val a = Try (Option.apply(null)).getOrElse(null)
a: Option[Null] = None
scala> a.getOrElse(null)
res17: Null = null
This is all wrong. By using getOrElse(null) you are basically removing all the advantages of using an Option to begin with, and generating much more complexity than needed. When the field is null, Option.apply gives you None, getOrElse(null) turns that back into null, and calling .toString on that null is what throws the NullPointerException.
You need to define what you will do if the values are null. This just keeps them as Options (None on null input):
val myRow = (
  doc.getFieldValue("ID").toInt,                          // Fails if null
  Option(doc.getFieldValue("stu_name")),                  // `None` if null
  Option(doc.getFieldValue("roll_number")).map(_.toInt)   // `None` if null
)
Or use default values:
val myRow = (
  doc.getFieldValue("ID").toInt,
  Option(doc.getFieldValue("stu_name")).getOrElse("default"),
  Option(doc.getFieldValue("roll_number")).map(_.toInt).getOrElse(0)
)