Configuration to handle a dead-letter queue - Java

I have a project that uses Spring Cloud Stream with RabbitMQ to exchange messages between microservices. One thing that is critical for my project is that I must not lose any message.
In order to minimize failures, I planned the following:
- Use the default retry mechanism for messages in the queue.
- Configure a dead-letter queue so that messages are put back on the original queue after some time.
- To avoid an infinite loop, allow a message to be republished from the dead-letter queue to the regular queue only a few times (let's say 5).
I believe I can achieve the first two items with the configuration below:
#dlx/dlq setup - retry dead letter 5 minutes later (300000ms later)
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.republish-to-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-ttl=300000
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-dead-letter-exchange=
#input
spring.cloud.stream.bindings.myInput.destination=my-queue
spring.cloud.stream.bindings.myInput.group=my-group
However, searching the reference guide, I could not find how to do what I want (mostly, how to configure a maximum number of republishes from the dead-letter queue). I'm not completely sure I'm on the right path - maybe I should manually create a second queue and code what I want, and leave the dead-letter queue only for messages that completely failed (which I would have to check regularly and handle manually, since my system must not lose any messages)...
I'm new to these frameworks, and I would appreciate your help with this configuration.

The documentation for the RabbitMQ binder shows how to publish a dead-lettered message to a parking-lot queue after some number of retries have failed.
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";
    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";
    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";
    private static final String X_RETRIES_HEADER = "x-retries";

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    // Re-queue the failed message up to 3 times, then move it to the parking lot.
    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Integer retriesHeader = (Integer) failedMessage.getMessageProperties().getHeaders().get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            failedMessage.getMessageProperties().getHeaders().put(X_RETRIES_HEADER, retriesHeader + 1);
            this.rabbitTemplate.send(ORIGINAL_QUEUE, failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }
}
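For context, the queue name so8400in.so8400 in this example follows Spring Cloud Stream's <destination>.<group> naming convention, so it corresponds to a consumer binding configured roughly as below (the channel name input is an assumption; substitute your own binding name and destination):
# assumption: the binding channel is named "input"; adjust to your own binding and destination
spring.cloud.stream.bindings.input.destination=so8400in
spring.cloud.stream.bindings.input.group=so8400
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.republish-to-dlq=true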
The second example shows how to use the delayed message exchange plugin to add a delay between retries.
import java.util.Map;

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";
    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";
    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";
    private static final String X_RETRIES_HEADER = "x-retries";
    private static final String DELAY_EXCHANGE = "dlqReRouter";

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    // Re-queue through the delayed exchange up to 3 times, with an increasing delay, then park the message.
    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            headers.put("x-delay", 5000 * retriesHeader);
            this.rabbitTemplate.send(DELAY_EXCHANGE, ORIGINAL_QUEUE, failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public DirectExchange delayExchange() {
        // Requires the rabbitmq_delayed_message_exchange plugin to be enabled on the broker.
        DirectExchange exchange = new DirectExchange(DELAY_EXCHANGE);
        exchange.setDelayed(true);
        return exchange;
    }

    @Bean
    public Binding bindOriginalToDelay() {
        return BindingBuilder.bind(new Queue(ORIGINAL_QUEUE)).to(delayExchange()).with(ORIGINAL_QUEUE);
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }
}

Related

Updating and using cached ArrayList in Spring Boot

I'd like to cache a list of objects that is available to all methods and needs to be updated periodically. I'm wondering whether this is safe with the multiple threads of the Spring Boot server. Do I keep the list static? Or is there a better way to do this?
For example:
@Controller
public class HomeController {

    private static List<String> cachedTerms = new ArrayList<>();

    @GetMapping("/getFirstCachedTerm")
    public String greeting() {
        if (!cachedTerms.isEmpty()) {
            return cachedTerms.get(0);
        } else {
            return "no terms";
        }
    }

    // Scheduled to update
    private static void updateTerms() {
        // populating from disk IO
        cachedTerms.clear();
        cachedTerms.add("hello");
    }
}
Found out how: by using a CopyOnWriteArrayList, which can be read safely even while it is being modified (it is thread-safe), and by using the @Scheduled annotation to run the update automatically.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class HomeController {

    private static final List<String> TERMS_CACHE = new CopyOnWriteArrayList<String>();

    @GetMapping("/FirstTerm")
    public String getFirstTerm() {
        for (String term : TERMS_CACHE) {
            return term;
        }
        return "no terms"; // fallback so the method compiles and handles an empty cache
    }

    // Scheduled to update
    @Scheduled(initialDelay = 1000, fixedRate = 1000)
    private static synchronized void updateTerms() {
        // populating from disk IO
        TERMS_CACHE.clear();
        TERMS_CACHE.add("hello");
    }
}
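One detail not shown above: @Scheduled methods only run if scheduling is enabled somewhere in the application, typically with @EnableScheduling on a configuration class. A minimal sketch (the class name is illustrative):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;

// Enables processing of @Scheduled annotations such as the one on updateTerms().
@SpringBootApplication
@EnableScheduling
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}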

Spring batch return custom process exit code

I have one jar with several jobs; I want to execute only one job each time and return a custom exit code.
For example, I have a basic job configuration (retrieveErrorsJob) with one step that reads an input XML file and writes the data to a specific database table.
Application class
@SpringBootApplication
@EnableBatchProcessing
@Import(CoreCommonsAppComponent.class)
public class Application {

    private static final Logger logger = LoggerFactory.getLogger(Application.class);

    private ConfigurationConstants constants;

    @Autowired
    public Application(ConfigurationConstants constants) {
        this.constants = constants;
    }

    @EventListener(ApplicationStartedEvent.class)
    public void idApplication() {
        logger.info("================================================");
        logger.info(constants.APPLICATION_NAME() + "-v." + constants.APPLICATION_VERSION() + " started on " + constants.REMOTE_HOST());
        logger.info("------------------------------------------------");
    }

    public static void main(String... args) throws Exception {
        ApplicationContext context = SpringApplication.run(Application.class, args);
        logger.info("================================================");
        SpringApplication.exit(context);
    }
}
I can choose one job from the command line:
java -jar my-jar.jar --spring.batch.job.names=retrieveErrorsJob --input.xml.file=myfile.xml
Spring Batch starts the correct job.
The problem is that I need the jar to return a custom process exit code, e.g. ExitCode.FAILED == 4. But I always get ZERO (whether the exit status is SUCCESS or FAILED).
As per the docs, I need to implement the ExitCodeMapper interface.
Code (not finished)
public class CustomExitCodeMapper implements ExitCodeMapper {

    private static final int NORMAL_END_EXECUTION = 1;
    private static final int NORMAL_END_WARNING = 2;
    private static final int ABNORMAL_END_WARNING = 3;
    private static final int ABNORMAL_END_ERROR = 4;

    @Override
    public int intValue(String exitCode) {
        System.out.println("EXIT CODE = " + exitCode);
        switch (exitCode) {
            case "FAILED":
                return ABNORMAL_END_WARNING;
            default:
                return NORMAL_END_EXECUTION;
        }
    }
}
I can't find a way to use this custom implementation. I could set it on a CommandLineJobRunner, but then how do I use that class?
Thanks to @Mahendra I've got an idea :)
I've created a JobCompletionNotificationListener class as @Mahendra suggested:
@Component
public class JobCompletionNotificationListener extends JobExecutionListenerSupport {

    private static final Logger logger = LoggerFactory.getLogger(JobCompletionNotificationListener.class);

    @Override
    public void afterJob(JobExecution jobExecution) {
        SingletonExitCode exitCode = SingletonExitCode.getInstance();
        if (jobExecution.getStatus() == BatchStatus.COMPLETED) {
            logger.info("Exit with code " + ExitCode.NORMAL_END_OF_EXECUTION);
            exitCode.setExitCode(ExitCode.NORMAL_END_OF_EXECUTION);
        }
        else {
            logger.info("Exit with code " + ExitCode.ABNORMAL_END_OF_EXECUTION_WARNING);
            exitCode.setExitCode(ExitCode.ABNORMAL_END_OF_EXECUTION_WARNING);
        }
    }
}
But I don't force the application to exit with System.exit() from this class. Instead, I've implemented a simple singleton like this:
public class SingletonExitCode {

    public ExitCode exitCode = ExitCode.ABNORMAL_END_OF_EXECUTION_WARNING; // Default code 3

    private static SingletonExitCode instance = new SingletonExitCode();

    private SingletonExitCode() {}

    public static SingletonExitCode getInstance() {
        return instance;
    }

    public void setExitCode(ExitCode exitCode) {
        this.exitCode = exitCode;
    }
}
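The ExitCode enum used by the listener and the singleton is not shown in the post; a minimal sketch of what it presumably looks like (names and numeric values inferred from the surrounding code):
// Hypothetical enum matching the codes referenced above; adjust values to your needs.
public enum ExitCode {
    NORMAL_END_OF_EXECUTION(1),
    ABNORMAL_END_OF_EXECUTION_WARNING(3);

    private final int code;

    ExitCode(int code) {
        this.code = code;
    }

    public int getCode() {
        return code;
    }
}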
and I read the ExitCode from my singleton after closing the Spring context:
@SpringBootApplication
@EnableBatchProcessing
@Import(CoreCommonsAppComponent.class)
public class Application {

    // a lot of nice things

    public static void main(String... args) throws Exception {
        ApplicationContext context = SpringApplication.run(Application.class, args);
        logger.info("================================================");
        SpringApplication.exit(context);
        System.exit(SingletonExitCode.getInstance().exitCode.getCode());
    }
}
I did this because if we exit directly from the JobCompletionNotificationListener class we miss an important line in the logs:
Job: [FlowJob: [name=writeErrorFromFile]] completed with the following parameters: [{-input.xml.file=c:/temp/unit-test-error.xml, -spring.batch.job.names=writeErrorFromFile, run.id=15, input.xml.file=c:/temp/unit-test-error.xml}] and the following status: [FAILED]
and it seems the Spring context is not closed properly.
Regardless of the exit status of the Spring Batch job (i.e. COMPLETED or FAILED), the Java process will complete successfully (and you will get a process exit code of 0).
If you want a custom exit code for the Java process, so that you can use it in a script or somewhere else, you can use a JobExecutionListener.
You can check the job's exitStatus in afterJob() and exit the Java process accordingly with your desired exit code (e.g. 4 for FAILED).
Example of JobExecutionListener
public class InterceptingExitStatus implements JobExecutionListener {

    @Override
    public void beforeJob(JobExecution jobExecution) {
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        ExitStatus exitStatus = jobExecution.getExitStatus();
        // compare exit codes rather than object references
        if (ExitStatus.COMPLETED.getExitCode().equals(exitStatus.getExitCode())) {
            System.exit(0);
        }
        if (ExitStatus.FAILED.getExitCode().equals(exitStatus.getExitCode())) {
            System.exit(4);
        }
    }
}
and this is how you can configure the job listener in the XML file:
<job id="job">
    ....
    ....
    <listeners>
        <listener ref="interceptingExitStatus"/>
    </listeners>
</job>
Spring Boot and Spring Batch already have an internal solution for this; all you need is an extra line of code:
System.exit(SpringApplication.exit(applicationContext));
Here is another example:
public class BatchApplication {

    public static void main(String[] args) {
        ApplicationContext applicationContext = SpringApplication.run(BatchApplication.class, args);
        System.exit(SpringApplication.exit(applicationContext));
    }
}
EDIT: If you would like to know how it works, check this class: org.springframework.boot.autoconfigure.batch.JobExecutionExitCodeGenerator
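If you need a code other than the default mapping (for example 4 for a failed job, as in the original question), one option is to combine the listener idea above with Spring Boot's ExitCodeGenerator contract. The sketch below is an illustration, not taken from the original answers; it assumes the bean is also registered as a listener on the job (e.g. via the job builder's listener(...) method):
import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.listener.JobExecutionListenerSupport;
import org.springframework.boot.ExitCodeGenerator;
import org.springframework.stereotype.Component;

// Hypothetical component: remembers the last job status and exposes it as a process exit code.
// SpringApplication.exit(context) consults all ExitCodeGenerator beans when computing the exit code.
@Component
public class JobStatusExitCodeGenerator extends JobExecutionListenerSupport implements ExitCodeGenerator {

    private volatile int exitCode = 0;

    @Override
    public void afterJob(JobExecution jobExecution) {
        this.exitCode = (jobExecution.getStatus() == BatchStatus.COMPLETED) ? 0 : 4;
    }

    @Override
    public int getExitCode() {
        return this.exitCode;
    }
}
With such a bean in place, System.exit(SpringApplication.exit(context)) at the end of main would return 4 when the job did not complete.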

Netty: How to limit websocket channel messages per second?

I need to limit the number of messages received per second on a websocket channel for a Netty server.
I couldn't find any ideas for how to do that.
Any ideas would be appreciated.
Thank you.
You need to add a simple ChannelInboundHandlerAdapter handler to your pipeline and add a simple counter to its channelRead(ChannelHandlerContext ctx, Object msg) method. I would recommend using one of the Coda Hale Metrics classes for that purpose.
Pseudo code:
private final QuotaLimitChecker limitChecker;

public MessageDecoder() {
    this.limitChecker = new QuotaLimitChecker(100); // assume the limit is 100 requests per second
}

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    if (limitChecker.quotaReached()) {
        return;
    }
    // ... normal handling of msg ...
}
Where QuotaLimitChecker is a class that increments a counter and checks whether the limit has been reached.
public class QuotaLimitChecker {

    private final static Logger log = LogManager.getLogger(QuotaLimitChecker.class);

    private final int userQuotaLimit;

    // here is a specific implementation of Meter for your needs
    private final InstanceLoadMeter quotaMeter;

    public QuotaLimitChecker(int userQuotaLimit) {
        this.userQuotaLimit = userQuotaLimit;
        this.quotaMeter = new InstanceLoadMeter();
    }

    public boolean quotaReached() {
        if (quotaMeter.getOneMinuteRate() > userQuotaLimit) {
            log.debug("User has exceeded message quota limit.");
            return true;
        }
        quotaMeter.mark();
        return false;
    }
}
Here is my implementation of QuotaLimitChecker, which uses a simplified version of the Meter class from the Coda Hale Metrics library.
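If you prefer not to add a metrics dependency, the same idea can be sketched with a plain per-second counter inside a Netty handler. This is an illustration only (the class name and limit are made up, not the original poster's code); since Netty invokes a channel's handlers on a single event-loop thread, the counter needs no extra synchronization per channel:
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

// Hypothetical handler: drops messages once more than `limit` have arrived in the current one-second window.
public class PerSecondLimitHandler extends ChannelInboundHandlerAdapter {

    private final int limit;
    private long windowStartMillis = System.currentTimeMillis();
    private int countInWindow;

    public PerSecondLimitHandler(int limit) {
        this.limit = limit;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        long now = System.currentTimeMillis();
        if (now - windowStartMillis >= 1000) {
            // start a new one-second window
            windowStartMillis = now;
            countInWindow = 0;
        }
        if (++countInWindow > limit) {
            // over quota: release the message and drop it
            ReferenceCountUtil.release(msg);
            return;
        }
        // within quota: pass the message to the next handler
        ctx.fireChannelRead(msg);
    }
}
It would be added to the pipeline before the websocket frame handler, e.g. pipeline.addLast(new PerSecondLimitHandler(100));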

Print from Apache Storm Bolt

I'm working my way through the example code of some Storm topologies and bolts, but I'm running into something weird. My goal is to set up Kafka with Storm, so that Storm can process the messages available on the Kafka bus. I have the following bolt defined:
public class ReportBolt extends BaseRichBolt {

    private static final long serialVersionUID = 6102304822420418016L;

    private Map<String, Long> counts;
    private OutputCollector collector;

    @Override @SuppressWarnings("rawtypes")
    public void prepare(Map stormConf, TopologyContext context, OutputCollector outCollector) {
        collector = outCollector;
        counts = new HashMap<String, Long>();
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // terminal bolt = does not emit anything
    }

    @Override
    public void execute(Tuple tuple) {
        System.out.println("HELLO " + tuple);
    }

    @Override
    public void cleanup() {
        System.out.println("HELLO FINAL");
    }
}
In essence, it should just output each Kafka message; and when the cleanup function is called, a different message should appear.
I have looked at the worker logs, and I find the final message (i.e. "HELLO FINAL"), but the Kafka messages with "HELLO" are nowhere to be found. As far as I can tell this should be a simple printer bolt, but I can't see where I'm going wrong. The worker logs indicate I am connected to the Kafka bus (it fetches the offset etc.).
In short, why are my println's not showing up in the worker logs?
EDIT
public class AckedTopology {

    private static final String SPOUT_ID = "monitoring_test_spout";
    private static final String REPORT_BOLT_ID = "acking-report-bolt";
    private static final String TOPOLOGY_NAME = "monitoring-topology";

    public static void main(String[] args) throws Exception {
        int numSpoutExecutors = 1;
        KafkaSpout kspout = buildKafkaSpout();
        ReportBolt reportBolt = new ReportBolt();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout(SPOUT_ID, kspout, numSpoutExecutors);
        builder.setBolt(REPORT_BOLT_ID, reportBolt);

        Config cfg = new Config();
        StormSubmitter.submitTopology(TOPOLOGY_NAME, cfg, builder.createTopology());
    }

    private static KafkaSpout buildKafkaSpout() {
        String zkHostPort = "URL";
        String topic = "TOPIC";
        String zkRoot = "/brokers";
        String zkSpoutId = "monitoring_test_spout_id";

        ZkHosts zkHosts = new ZkHosts(zkHostPort);
        SpoutConfig spoutCfg = new SpoutConfig(zkHosts, topic, zkRoot, zkSpoutId);
        KafkaSpout kafkaSpout = new KafkaSpout(spoutCfg);
        return kafkaSpout;
    }
}
Your bolt is not connected to the spout. You need to use one of Storm's stream groupings to do that. Use something like this:
builder.setBolt(REPORT_BOLT_ID, reportBolt).shuffleGrouping(SPOUT_ID);
setBolt returns an InputDeclarer object. In your case, by specifying shuffleGrouping(SPOUT_ID) you are telling Storm that this bolt wants to consume all the tuples emitted by the component with id SPOUT_ID.
Read more about stream groupings and choose one based on your needs; a short illustration follows.
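For illustration, here are two common groupings applied to the topology from the question (this reuses the variables from its main method; the field name "word" is made up, and since the report bolt does not key by any field, shuffleGrouping is the natural choice here):
// Shuffle grouping: tuples from the spout are distributed randomly across the bolt's tasks.
builder.setBolt(REPORT_BOLT_ID, reportBolt).shuffleGrouping(SPOUT_ID);

// Fields grouping: tuples with the same value of the named field always go to the same task,
// which is useful for per-key aggregation such as counting.
// builder.setBolt(REPORT_BOLT_ID, reportBolt).fieldsGrouping(SPOUT_ID, new Fields("word"));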

Akka: Testing Supervisor Recommendations

I am very new to Akka and using Java to program my system.
Problem definition
- I have a TenantMonitor actor which, when it receives a TenantMonitorMessage(), starts a new DiskMonitorActor.
- The DiskMonitorActor may fail for various reasons and may throw a DiskException. The DiskMonitorActor has been unit tested.
What I need?
- I want to test the behavior of TenantMonitorActor, so that when a DiskException happens, it takes the correct action such as stop(), resume(), or any other (depending on what my application may need).
What I tried?
Based on the documentation, the closest I could find is the section called Expecting Log Messages.
Where I need help?
- While I understand that expecting the correct error log is important, it only asserts the first part, that the exception is thrown and logged correctly; it does not help in asserting that the right supervision strategy is applied.
Code?
TenantMonitorActor
public class TenantMonitorActor extends UntypedActor {

    public static final String DISK_MONITOR = "diskMonitor";
    private static final String assetsLocationKey = "tenant.assetsLocation";
    private static final String schedulerKey = "monitoring.tenant.disk.schedule.seconds";
    private static final String thresholdPercentKey = "monitoring.tenant.disk.threshold.percent";

    private final LoggingAdapter logging = Logging.getLogger(getContext().system(), this);
    private final Config config;

    private TenantMonitorActor(final Config config) {
        this.config = config;
    }

    private static final SupervisorStrategy strategy =
        new OneForOneStrategy(1, Duration.create(1, TimeUnit.SECONDS),
            new Function<Throwable, Directive>() {
                public Directive apply(final Throwable param) throws Exception {
                    if (param instanceof DiskException) {
                        return stop();
                    }
                    return restart();
                }
            });

    public static Props props(final Config config) {
        return Props.create(new Creator<TenantMonitorActor>() {
            public TenantMonitorActor create() throws Exception {
                return new TenantMonitorActor(config);
            }
        });
    }

    @Override
    public void onReceive(final Object message) throws Exception {
        if (message instanceof TenantMonitorMessage) {
            logging.info("Tenant Monitor Setup");
            setupDiskMonitoring();
        }
    }

    @Override
    public SupervisorStrategy supervisorStrategy() {
        return strategy;
    }

    private void setupDiskMonitoring() {
        final ActorRef diskMonitorActorRef = getDiskMonitorActorRef(config);
        final FiniteDuration start = Duration.create(0, TimeUnit.SECONDS);
        final FiniteDuration recurring = Duration.create(config.getInt(schedulerKey),
            TimeUnit.SECONDS);
        final ActorSystem system = getContext().system();
        system.scheduler()
            .schedule(start, recurring, diskMonitorActorRef,
                new DiskMonitorMessage(), system.dispatcher(), null);
    }

    private ActorRef getDiskMonitorActorRef(final Config monitoringConf) {
        final Props diskMonitorProps =
            DiskMonitorActor.props(new File(monitoringConf.getString(assetsLocationKey)),
                monitoringConf.getLong(thresholdPercentKey));
        return getContext().actorOf(diskMonitorProps, DISK_MONITOR);
    }
}
Test
@Test
public void testActorForNonExistentLocation() throws Exception {
    final Map<String, String> configValues =
        Collections.singletonMap("tenant.assetsLocation", "/non/existentLocation");
    final Config config = mergeConfig(configValues);
    new JavaTestKit(system) {{
        assertEquals("system", system.name());
        final Props props = TenantMonitorActor.props(config);
        final ActorRef supervisor = system.actorOf(props, "supervisor");
        new EventFilter<Void>(DiskException.class) {
            @Override
            protected Void run() {
                supervisor.tell(new TenantMonitorMessage(), ActorRef.noSender());
                return null;
            }
        }.from("akka://system/user/supervisor/diskMonitor").occurrences(1).exec();
    }};
}
UPDATE
The best I could write is a test that makes sure the DiskMonitor is stopped once the exception occurs:
@Test
public void testSupervisorForFailure() {
    new JavaTestKit(system) {{
        final Map<String, String> configValues =
            Collections.singletonMap("tenant.assetsLocation", "/non/existentLocation");
        final Config config = mergeConfig(configValues);
        final TestActorRef<TenantMonitorActor> tenantTestActorRef = getTenantMonitorActor(config);

        final ActorRef diskMonitorRef = tenantTestActorRef.underlyingActor().getContext()
            .getChild(TenantMonitorActor.DISK_MONITOR);

        final TestProbe testProbeDiskMonitor = new TestProbe(system);
        testProbeDiskMonitor.watch(diskMonitorRef);

        tenantTestActorRef.tell(new TenantMonitorMessage(), getRef());
        testProbeDiskMonitor.expectMsgClass(Terminated.class);
    }};
}
Are there better ways?
I have the feeling that testing supervisor strategies is something of a grey area: it is a matter of opinion where testing our own declarations ends and testing Akka itself begins. Testing validation of entities in ORM frameworks strikes me as a similar problem. We don't want to test whether the email validation logic itself is correct (e.g. in Hibernate), but rather whether our rule is declared correctly.
Following this logic, I would write the test as follows:
final TestActorRef<TenantMonitorActor> tenantTestActorRef =
    getTenantMonitorActor(config);

SupervisorStrategy.Directive directive = tenantTestActorRef.underlyingActor()
    .supervisorStrategy().decider().apply(new DiskException());

assertEquals(SupervisorStrategy.stop(), directive);
