How can I handle a bytearray message in a Spring Integration Flow - java

I'm having some issues with Spring Integration.
I have an error channel that consumes a dedicated error queue whenever other flows fail. The handler lets us log some important data in a very specific format, and then the message is discarded.
The problem is that this flow is configured to receive a specific message type, a FailedMessage class.
Whenever a message is consumed, I get a ClassCastException saying that [B cannot be cast to FailedMessage.
So, after doing some research, I implemented the following transformer:
private FailedMessage parseFailedMessage(byte[] genericMessage) {
    // Deserialize the Java-serialized payload back into a FailedMessage
    try (ObjectInputStream is = new ObjectInputStream(new ByteArrayInputStream(genericMessage))) {
        return (FailedMessage) is.readObject();
    } catch (IOException | ClassNotFoundException e) {
        throw new RuntimeException("An unexpected error occurred parsing FailedMessage", e);
    }
}
And the IntegrationFlow spec is:
@Bean
public IntegrationFlow errorHandlingFlow(XmlMessageTransformer transformer) {
    return IntegrationFlows.from("ErrorChannel")
            .transform(this::parseFailedMessage)
            .<FailedMessage>handle((p, h) -> {
                processFailedMessage(p, transformer);
                return p;
            })
            .channel("discard")
            .get();
}
Is this an acceptable way to handle this kind of message, or is there a way to automate the transform step?
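For reference, if the payload really is a Java-serialized FailedMessage, Spring Integration's built-in payload-deserializing transformer may cover the same transform step; a minimal sketch, assuming Transformers.deserializer from the Java DSL and that FailedMessage is allowed by the deserializer's pattern list:
@Bean
public IntegrationFlow errorHandlingFlow(XmlMessageTransformer transformer) {
    return IntegrationFlows.from("ErrorChannel")
            // org.springframework.integration.dsl.Transformers; the pattern whitelists
            // FailedMessage for Java deserialization (assumption: payload is Java-serialized)
            .transform(Transformers.deserializer(FailedMessage.class.getName()))
            .<FailedMessage>handle((p, h) -> {
                processFailedMessage(p, transformer);
                return p;
            })
            .channel("discard")
            .get();
}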

Related

Flux consumer doesn't stop consuming data

I have this implementation
@Override
public Flux<byte[]> translateData(final String datasetId) {
    return keyVaultRepository.findByDatasetId(datasetId)
            .map(keyVault -> {
                try {
                    return translatorService.createTranslator(keyVault.getKey()); // throws CryptoException in the test
                } catch (CryptoException e) {
                    throw Exceptions.propagate(new ApiException("Unable to provide translated file"));
                }
            })
            .flatMapMany(translator -> storageService.getEntry(datasetId).map(translator::update));
}
and this failing test
@Test
void getTranslatedDataWithError() throws StorageException, CryptoException {
    final List<byte[]> bytes = new ArrayList<>();
    // exec + validate
    getWebTestClient()
            .get()
            .uri(uriBuilder -> uriBuilder.path("/{datasetId}").build(datasetId))
            .exchange()
            .expectStatus().is5xxServerError()
            .returnResult(byte[].class)
            .getResponseBody()
            .onErrorStop()
            .subscribe(bytes::add);
    assertThat(bytes).isEmpty();
}
The .is5xxServerError() check succeeds, but the list is not empty.
The microservice calling the translateData endpoint should not consume any data from upstream, but apparently it does.
I've found a workaround by throwing a RuntimeException in the catch block (I could also make CryptoException unchecked, but that's not the point of my question) and then handling the case in my ControllerAdvice/GlobalExceptionHandler, returning a ResponseEntity with an ErrorDto.
The core of my question is: how can I do this natively with Flux, so that the consumer of the endpoint notices there is an error and .subscribe(bytes::add); isn't even executed?
I have already tried .doOnError, .onErrorResume etc., but it always ends with a non-empty list :(
I'm afraid the bytes will later be delivered to the client, which of course should not happen; the client should get an error response instead.
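For reference (not from the thread), one way to surface the failure as an error status before any body bytes are produced is to translate it at the end of the reactive chain; a minimal sketch, assuming a ResponseStatusException-based error response is acceptable:
// Hypothetical variant of translateData: convert the failure into a status-bearing
// exception so WebFlux answers with a 5xx instead of streaming byte[] chunks.
public Flux<byte[]> translateData(final String datasetId) {
    return keyVaultRepository.findByDatasetId(datasetId)
            .map(keyVault -> {
                try {
                    return translatorService.createTranslator(keyVault.getKey());
                } catch (CryptoException e) {
                    throw Exceptions.propagate(new ApiException("Unable to provide translated file"));
                }
            })
            .flatMapMany(translator -> storageService.getEntry(datasetId).map(translator::update))
            // Unwrap the Reactor wrapper and map to an exception WebFlux understands.
            .onErrorMap(e -> new ResponseStatusException(
                    HttpStatus.INTERNAL_SERVER_ERROR, "Unable to provide translated file", Exceptions.unwrap(e)));
}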

Republish message to same queue with updated headers after automatic nack in Spring AMQP

I am trying to configure my Spring AMQP ListenerContainer to allow for a certain type of retry flow that's backwards compatible with a custom rabbit client previously used in the project I'm working on.
The protocol works as follows:
A message is received on a channel.
If processing fails, the message is nacked with the requeue flag set to false.
A copy of the message with additional/updated headers (a retry counter) is published to the same queue.
The headers are used for filtering incoming messages, but that's not important here.
I would like the behaviour to happen on an opt-in basis, so that more standardised Spring retry flows can be used in cases where compatibility with the old client isn't a concern, and the listeners should be able to work without requiring manual acking.
I have implemented a working solution, which I'll get back to below. Where I'm struggling is to publish the new message after signalling to the container that it should nack the current message, because I can't really find any good hooks after the nack or before the next message.
Reading the documentation, it feels like I'm looking for something analogous to the behaviour of RepublishMessageRecoverer used as the final step of a retry interceptor. The main difference in my case is that I need to republish immediately on failure, not as a final recovery step. I tried to look at the implementation of RepublishMessageRecoverer, but the many layers of indirection made it hard for me to understand where the republishing is triggered, and whether a nack happens before it.
My working implementation looks as follows. Note that I'm using a ThrowsAdvice, but I think an error handler could also be used with nearly identical logic.
/*
 * MyConfig.class, configuring the container factory
 */
@Configuration
public class MyConfig {

    @Bean
    // NB: bean name is important, overwrites the autoconfigured bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory,
            Jackson2JsonMessageConverter messageConverter,
            RabbitTemplate rabbitTemplate
    ) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(messageConverter);
        // AOP advice chain: header inspection first, then the republish-on-failure advice
        var a1 = new CustomHeaderInspectionAdvice();
        var a2 = new MyThrowsAdvice(rabbitTemplate);
        Advice[] adviceChain = {a1, a2};
        factory.setAdviceChain(adviceChain);
        return factory;
    }
}
/*
 * MyThrowsAdvice.class, hooking into the exception flow from the listener
 */
public class MyThrowsAdvice implements ThrowsAdvice {

    private static final Logger logger = LoggerFactory.getLogger(MyThrowsAdvice.class);

    private final AmqpTemplate amqpTemplate;

    public MyThrowsAdvice(AmqpTemplate amqpTemplate) {
        this.amqpTemplate = amqpTemplate;
    }

    public void afterThrowing(Method method, Object[] args, Object target, ListenerExecutionFailedException ex) {
        var message = message(args);
        var cause = ex.getCause();
        // opt-in to the old protocol by throwing an instance of BusinessException in the business logic
        if (cause instanceof BusinessException) {
            /*
             * NB: Since we want to trigger execution after the current method fails
             * with an exception, we need to schedule it on another thread and delay
             * execution until the nack has happened.
             */
            new Thread(() -> {
                try {
                    Thread.sleep(1000L);
                    var messageProperties = message.getMessageProperties();
                    var count = getCount(messageProperties);
                    messageProperties.setHeader("xb-count", count + 1);
                    var routingKey = messageProperties.getReceivedRoutingKey();
                    var exchange = messageProperties.getReceivedExchange();
                    amqpTemplate.send(exchange, routingKey, message);
                    logger.info("Sent!");
                } catch (InterruptedException e) {
                    logger.error("Sleep interrupted", e);
                }
            }).start();
            // NB: Produce the desired nack.
            throw new AmqpRejectAndDontRequeueException("Business logic exception, message will be re-queued with updated headers", cause);
        }
    }

    private static long getCount(MessageProperties messageProperties) {
        try {
            Long c = messageProperties.getHeader("xb-count");
            return c == null ? 0 : c;
        } catch (Exception e) {
            return 0;
        }
    }

    private static Message message(Object[] args) {
        try {
            return (Message) args[1];
        } catch (Exception e) {
            logger.info("Bad cast parse", e);
            throw new AmqpRejectAndDontRequeueException(e);
        }
    }
}
Now, as you can imagine, I'm not particularly pleased with the indeterminism of scheduling a new thread with a delay.
So my question is simply: is there any way I could produce a deterministic solution to my problem using the provided hooks of the ListenerContainer?
Your current solution risks message loss, since you are publishing on a different thread after a delay. If the server crashes during that delay, the message is lost.
It would be better to publish immediately to another queue with a TTL and dead-letter configuration to republish the expired message back to the original queue.
Using the RepublishMessageRecoverer with retries set to maxAttempts=1 should do what you need.
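A rough sketch of that recoverer wiring, assuming a stateless retry interceptor on the container factory; the exchange and routing key here are placeholders for the TTL/dead-letter destination described above:
// Sketch: give up after a single attempt and let RepublishMessageRecoverer publish
// the failed message (with error headers) to the configured exchange/routing key.
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory,
        RabbitTemplate rabbitTemplate) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    RetryOperationsInterceptor retry = RetryInterceptorBuilder.stateless()
            .maxAttempts(1) // no in-memory retries; recover (republish) immediately
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "retry.exchange", "retry.routing.key"))
            .build();
    factory.setAdviceChain(retry);
    return factory;
}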

Spring 5 error handling of PostExchange requests

I use an external REST API in my Spring application. I can send JSON POST requests to create objects, but when a field is incorrect or there is a duplicate, it returns a 400 Bad Request error with a body saying what the problem is.
I use Spring 5 with @PostExchange in the following code.
This interface points Spring at the external API:
public interface HardwareClient {

    @PostExchange("/assetmgmt/assets/templateId/C04DBCC3-5FD3-45A2-BD34-8A84CE2EAC20")
    String addMonitor(@RequestBody Monitor monitor);
}
This is the helper that is autowired into the class where I have the data that needs to be sent.
@Component
public class HardwareHelper {

    private Logger logger = Logger.getLogger(getClass().getName());

    @Autowired
    HardwareClient hardwareClient;

    @Async
    public Future<String> addMonitor(MonitorForm monitorForm) {
        try {
            Monitor monitor = new Monitor(monitorForm.objectID(), monitorForm.model(), monitorForm.make(),
                    monitorForm.serialNumber(), monitorForm.orderNumber(), monitorForm.budgetholder(),
                    monitorForm.ownership());
            hardwareClient.addMonitor(monitor);
            return new AsyncResult<String>("Success");
        } catch (Exception e) {
            logger.info("HardwareHelper.addMonitor error: " + e.getMessage());
            // todo error handling
        }
        return null;
    }
}
When an error occurs, the logger prints the error, but I need to control what happens next based on the response, so I need to see the response body that comes back. If everything goes well, an ID is returned that I can read by printing the result of the addMonitor() method, but this is obviously not possible when an exception is thrown, because execution skips to the catch block. How do I inspect the response body when an error is thrown and handle it appropriately?
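For reference, if the HTTP interface proxy behind HardwareClient is backed by a WebClient (the usual setup for @PostExchange), error statuses are typically raised as WebClientResponseException, whose body can be read in the catch block; a minimal sketch under that assumption:
@Async
public Future<String> addMonitor(MonitorForm monitorForm) {
    try {
        Monitor monitor = new Monitor(monitorForm.objectID(), monitorForm.model(), monitorForm.make(),
                monitorForm.serialNumber(), monitorForm.orderNumber(), monitorForm.budgetholder(),
                monitorForm.ownership());
        return new AsyncResult<>(hardwareClient.addMonitor(monitor));
    } catch (WebClientResponseException e) {
        // Assumption: the proxy was built over WebClient, so 4xx/5xx responses surface here.
        String errorBody = e.getResponseBodyAsString(); // the API's explanation of the problem
        logger.info("addMonitor failed with status " + e.getStatusCode() + ": " + errorBody);
        return new AsyncResult<>(errorBody);
    }
}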

Handle exception after reaching max attempts in resilience4j-retry using Spring Boot

I have a scenario where I want to log each retry attempt, and when the last one fails (i.e. maxAttempts is reached) an exception is thrown and, let's say, an entry is created in a database.
I'm trying to achieve this using Resilience4j Retry with Spring Boot, so I use application.yml and annotations.
@Retry(name = "default", fallbackMethod = "fallback")
@CircuitBreaker(name = "default", fallbackMethod = "fallback")
public ResponseEntity<List<Person>> person() {
    return restTemplate.exchange(...); // let's say this always throws 500
}
The fallback logs the cause of the exception into an application log.
public ResponseEntity<?> fallback(Exception e) {
    var cause = "Something unknown";
    if (e instanceof ResourceAccessException) {
        if (e.getCause() instanceof ConnectTimeoutException) {
            cause = "Connection timeout";
        }
        if (e.getCause() instanceof SocketTimeoutException) {
            cause = "Read timeout";
        }
    } else if (e instanceof HttpServerErrorException) {
        cause = "Server error";
    } else if (e instanceof HttpClientErrorException) {
        cause = "Client error";
    } else if (e instanceof CallNotPermittedException) {
        cause = "Open circuit breaker";
    }
    var message = String.format("%s caused fallback, caught exception %s",
            cause, e.getMessage());
    log.error(message); // application log entry
    throw new MyRestException(message, e);
}
When I call this person() method, the retry happens as maxAttempts is configured. I expect my custom runtime MyRestException to be caught on each retry and thrown on the last one (when maxAttempts is reached), so I wrap the call in a try-catch.
public List<Person> person() {
    try {
        return myRestService.person().getBody();
    } catch (MyRestException ex) {
        log.error("Here I am ready to log the issue into the database");
        throw ex;
    }
}
Unfortunately, the retry seems to be ignored: the fallback catches and rethrows the exception, which is then immediately caught by my try-catch instead of by the Resilience4j retry mechanism.
How can I achieve this behavior when maxAttempts is hit? Is there a way to define a specific fallback method for that case?
Why don't you catch and map exceptions to MyRestException inside your service methods, e.g. myRestService.person()?
It makes your configuration even simpler, because you only have to add MyRestException to the configuration of your RetryConfig and CircuitBreakerConfig.
Spring RestTemplate also has mechanisms to register a custom ResponseErrorHandler if you don't want to add the boilerplate code to every service method: https://www.baeldung.com/spring-rest-template-error-handling
I would not map CallNotPermittedException to MyRestException. You don't want to retry when the CircuitBreaker is open. Add CallNotPermittedException to the list of ignored exceptions in your RetryConfig, as in the sketch below.
I don't think you need the fallback mechanism at all; mapping an exception to another exception is not really a "fallback".
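A rough sketch of the suggested Retry configuration expressed programmatically (the instance name and attempt count are placeholders; the equivalent settings can also go in application.yml):
// Retry only on the mapped MyRestException, never while the circuit breaker is open.
RetryConfig retryConfig = RetryConfig.custom()
        .maxAttempts(3)                                    // placeholder attempt count
        .retryExceptions(MyRestException.class)            // exception mapped in the service layer
        .ignoreExceptions(CallNotPermittedException.class) // don't retry an open circuit breaker
        .build();
RetryRegistry retryRegistry = RetryRegistry.of(retryConfig);
Retry retry = retryRegistry.retry("default");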

How to parse DFT_P03 message with ZPM segment

I am coding a server application that will receive DFT_P03 messages with an added ZPM segment (which I have created a class for as per the HAPI documentation). Currently I am able to access this segment as a generic Segment when doing the following:
@Override
public Message processMessage(Message t, Map map) throws ReceivingApplicationException, HL7Exception
{
    String encodedMessage = new DefaultHapiContext().getPipeParser().encode(t);
    logEntryService.logDebug(LogEntry.CONNECTIVITY, "Received message:\n" + encodedMessage + "\n\n");
    try
    {
        InboundMessage inboundMessage = new InboundMessage();
        inboundMessage.setMessageTime(new Date());
        inboundMessage.setMessageType("Usage");
        DFT_P03 usageMessage = (DFT_P03) t;
        Segment ZPMSegment = (Segment) usageMessage.get("ZPM");
        inboundMessage.setMessage(usageMessage.toString());
        Facility facility = facilityService.findByCode(usageMessage.getMSH().getReceivingFacility().getNamespaceID().getValue());
        inboundMessage.setTargetFacility(facility);
        String controlID = usageMessage.getMSH().getMessageControlID().encode();
        controlID = controlID.substring(controlID.indexOf("^") + 1, controlID.length());
        inboundMessage.setControlId(controlID);
        Message response;
        try
        {
            inboundMessageService.save(inboundMessage);
            response = t.generateACK();
            logEntryService.logDebug(LogEntry.CONNECTIVITY, "Message ACKed");
        }
        catch (Exception ex)
        {
            response = t.generateACK(AcknowledgmentCode.AE, new HL7Exception(ex));
            logEntryService.logDebug(LogEntry.CONNECTIVITY, "Message NACKed");
        }
        return response;
    }
    catch (IOException e)
    {
        logEntryService.logDebug(LogEntry.CONNECTIVITY, "Message rejected");
        throw new HL7Exception(e);
    }
}
I have created a DFT_P03_Custom class as follows:
public class DFT_P03_Custom extends DFT_P03
{
    public DFT_P03_Custom() throws HL7Exception
    {
        this(new DefaultModelClassFactory());
    }

    public DFT_P03_Custom(ModelClassFactory factory) throws HL7Exception
    {
        super(factory);
        String[] segmentNames = getNames();
        int indexOfPid = Arrays.asList(segmentNames).indexOf("FT1");
        int index = indexOfPid + 1;
        Class<ZPM> type = ZPM.class;
        boolean required = true;
        boolean repeating = false;
        this.add(type, required, repeating, index);
    }

    public ZPM getZPM()
    {
        return getTyped("ZPM", ZPM.class);
    }
}
When trying to cast the message to a DFT_P03_Custom instance, I get a ClassCastException. As per the documentation, I did create the CustomModelClassFactory class, but using it I just get tons of validation errors on the controlId field.
I am already using identical logic to send custom MFN_M01 messages with an added ZFX segment, and that works flawlessly. I understand there is some automatic typing being done by HAPI when it receives a DFT_P03 message, and that is likely what I need to somehow override so it gives me a DFT_P03_Custom instance instead.
If you have some insight on how I can achieve this without having to use a generic segment instance, please help!
Thank you!
I finally figured this out. The only way I got this to work was to generate a conformance profile XML file (using an example message from our application as a base) with the Messaging Workbench from the HAPI site, and then use the Maven plugin to generate the message and segment classes. Only with these classes am I able to correctly parse a message into my custom class. One thing to note is that it DOES NOT work if I try to use the MSH, PID, PV1 or FT1 classes provided by HAPI together with my Z-segment class; it only works if all the segments are the classes generated by the conformance plugin. This, combined with a CustomModelClassFactory class (as shown on the HAPI website) and the proper package structure, finally allowed me to access my Z-segment.
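For reference, the CustomModelClassFactory wiring mentioned above generally looks something like the sketch below; the package name is a placeholder for wherever the conformance-generated classes live:
// Sketch: point HAPI's parser at the package containing the generated custom
// message/segment classes so incoming DFT_P03 messages are parsed into them.
HapiContext context = new DefaultHapiContext();
ModelClassFactory customFactory = new CustomModelClassFactory("com.example.hl7.custommodel");
context.setModelClassFactory(customFactory);
PipeParser parser = context.getPipeParser();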
