Handle error in Spring WebFlux MultipartFile.transferTo - java

I'm using spring-webflux and I wonder if someone knows how to handle errors in Mono<Void>. I'm using MultipartFile's transferTo method, which on success returns Mono.empty() and in other cases wraps exceptions in Mono.error().
public Mono<UploadedFile> create(final User user, final FilePart file) {
    final UploadedFile uploadedFile = new UploadedFile(file.filename(), user.getId());
    final Path path = Paths.get(fileUploadConfig.getPath(), uploadedFile.getId());
    file.transferTo(path);
    uploadedFile.setFilePath(path.toString());
    return repo.save(uploadedFile);
}
I want to save the UploadedFile only in case transferTo ended successfully. But I can't use map/flatMap because the empty Mono obviously doesn't emit a value, and onErrorResume only accepts a Mono of the same type (Void).

Hi, try to chain your operators like this:
...
return Mono.just(file)
        .flatMap(f -> f.transferTo(path))
        .then(Mono.just(uploadedFile))
        .flatMap(uF -> {
            uF.setFilePath(path.toString());
            return repo.save(uF);
        });
}
If transferTo finishes successfully, the then operator continues the chain; otherwise the error signal skips it and propagates.
P.S. If I'm not mistaken, FilePart is blocking; try to avoid it.
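The key point in the answer above is that an empty Mono still signals completion, and later operators can react to that signal. The same complete-then-continue idea can be sketched in plain Java with CompletableFuture standing in for Mono<Void> (this is an analogy, not the Reactor API; transferTo here is a hypothetical stand-in):

```java
import java.util.concurrent.CompletableFuture;

public class ThenChaining {
    // Stand-in for FilePart.transferTo(path): completes with no value on
    // success, completes exceptionally on failure (hypothetical helper).
    static CompletableFuture<Void> transferTo(String path) {
        return CompletableFuture.runAsync(() -> { /* write bytes to disk */ });
    }

    public static void main(String[] args) {
        // thenApply here mirrors Mono.then(...): it only runs after the Void
        // stage completes successfully; a failure would skip it and propagate.
        String result = transferTo("/tmp/upload")
                .thenApply(ignored -> "saved")
                .join();
        System.out.println(result);
    }
}
```

The Void stage never produces a value, yet the continuation still fires exactly once on success, which is the behavior the chained then operator relies on.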

Related

How to convert Flux of ByteBuffer to Spring BodyInserter

I have a use case to read a file from S3 and publish it to a REST service in Java.
For the implementation, I am trying the AWS SDK S3 API to read the file, which returns Flux<ByteBuffer>, and then publishing to the REST service using the Spring WebClient.
Per my exploration, the Spring WebClient requires a BodyInserter, which can be prepared using BodyInserters.fromDataBuffers. I am unable to figure out how to properly convert Flux<ByteBuffer> to Flux<DataBuffer> and call WebClient exchange:
Flux<ByteBuffer> byteBufferFlux = getS3File(key);
Flux<DataBuffer> dataBufferFlux = byteBufferFlux.map(byteBuffer -> {
    // ????? Convert ByteBuffer to DataBuffer ?????
    return dataBuffer;
});
BodyInserter<Flux<DataBuffer>, ReactiveHttpOutputMessage> inserter =
        BodyInserters.fromDataBuffers(dataBufferFlux);
Any suggestions how to achieve this?
You can convert using DefaultDataBuffer, which you can create via the DefaultDataBufferFactory:
DataBufferFactory dataBufferFactory = new DefaultDataBufferFactory();
Flux<DataBuffer> buffer = getS3File(key).map(dataBufferFactory::wrap);
BodyInserter<Flux<DataBuffer>, ReactiveHttpOutputMessage> inserter =
BodyInserters.fromDataBuffers(buffer);
You don't actually need a BodyInserter at all, though; if using WebClient you can use the following method signature for body():
<T, P extends Publisher<T>> RequestHeadersSpec<?> body(P publisher, Class<T> elementClass);
which you can then pass your Flux<ByteBuffer> directly into, whilst specifying the Class to use:
WebClient.create("http://someUrl")
        .post()
        .uri("/someUri")
        .body(getS3File(key), ByteBuffer.class)
You may not need dataBufferFlux at all and should be able to write the Flux<ByteBuffer> to your REST endpoint. Try this:
Flux<ByteBuffer> byteBufferFlux = getS3File(key);
BodyInserter<Flux<ByteBuffer>, ReactiveHttpOutputMessage> inserter = BodyInserters.fromPublisher(byteBufferFlux, ByteBuffer.class);

How to mock - reading file from s3

I am new to writing unit tests. I am trying to read a JSON file stored in S3, and I am getting "Argument passed to when() is not a mock!" and "profile file cannot be null" errors.
This is what I have tried so far, following Retrieving Objects Using Java:
private void amazonS3Read() {
    String clientRegion = "us-east-1";
    String bucketName = "version";
    String key = "version.txt";
    S3Object fullObject = null;
    try {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(clientRegion)
                .withCredentials(new ProfileCredentialsProvider())
                .build();
        fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
        S3ObjectInputStream s3is = fullObject.getObjectContent();
        json = returnStringFromInputStream(s3is);
        fullObject.close();
        s3is.close();
    } catch (AmazonServiceException e) {
        // The call was transmitted successfully, but Amazon S3 couldn't process
        // it, so it returned an error response.
        e.printStackTrace();
    } catch (SdkClientException e) {
        // Amazon S3 couldn't be contacted for a response, or the client
        // couldn't parse the response from Amazon S3.
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    // Do some operations with the data
}
Test File
@Test
public void amazonS3ReadTest() throws Exception {
    String bucket = "version";
    String keyName = "version.json";
    InputStream inputStream = null;
    S3Object s3Object = Mockito.mock(S3Object.class);
    GetObjectRequest getObjectRequest = Mockito.mock(GetObjectRequest.class);
    getObjectRequest = new GetObjectRequest(bucket, keyName);
    AmazonS3 client = Mockito.mock(AmazonS3.class);
    Mockito.doNothing().when(AmazonS3ClientBuilder.standard());
    client = AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(new ProfileCredentialsProvider())
            .build();
    Mockito.doReturn(s3Object).when(client).getObject(getObjectRequest);
    s3Object = client.getObject(getObjectRequest);
    Mockito.doReturn(inputStream).when(s3Object).getObjectContent();
    inputStream = s3Object.getObjectContent();
    // performing other operations
}
Getting two different exceptions:
org.mockito.exceptions.misusing.NotAMockException:
Argument passed to when() is not a mock!
Example of correct stubbing:
    doThrow(new RuntimeException()).when(mock).someMethod();
OR
profile file cannot be null
java.lang.IllegalArgumentException: profile file cannot be null
at com.amazonaws.util.ValidationUtils.assertNotNull(ValidationUtils.java:37)
at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:142)
at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:133)
at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:100)
at com.amazonaws.auth.profile.ProfileCredentialsProvider.getCredentials(ProfileCredentialsProvider.java:135)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1184)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:774)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:726)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:701)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:669)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:651)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:515)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4443)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4390)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1427)
What am I doing wrong and how do I fix this?
Your approach looks wrong.
You want to mock the dependencies and invocations of a private method, amazonS3Read(), and you seem to want to unit test that method.
We don't unit test the private methods of a class; we test the class through its API (application programming interface), that is, its public/protected methods.
Your unit test is a series of mock recordings: most of it is a description, via Mockito, of what your private method does. I even have a hard time identifying the non-mocked part.
What do you assert here? That you invoke 4 methods on some mocks? Unfortunately, that asserts nothing in terms of result/behavior. You could add incorrect invocations between the invoked methods and the test would stay green, because you don't test a result that you can assert with the assertEquals(...) idiom.
It doesn't mean that mocking a method is never acceptable, but when your test is mainly mocking, something is wrong and we can't trust its result.
I would advise you two things:
write a unit test that focuses on asserting the logic that you perform: computation/transformation/transmitted values and so forth; don't focus on chaining methods.
write some integration tests with a light and simple S3-compatible server that will give you real feedback in terms of behavior assertion. Side effects may be tested in this way.
You have, for example, Riak, MinIO or Localstack.
To be more concrete, here is a refactoring approach to improve things.
If the amazonS3Read() private method has to be unit tested, you should probably move it into a specific class, for example MyAwsClient, and make it a public method.
Then the idea is to make amazonS3Read() as clear as possible in terms of responsibility.
Its logic could be summarized as:
1) Get some identifying information to pass to the S3 services.
This means defining a method with the parameters:
public Result amazonS3Read(String clientRegion, String bucketName, String key) {...}
2) Apply all the fine-grained S3 calls to get the S3ObjectInputStream object.
We could gather all of these in a specific method of a class AmazonS3Facade:
S3ObjectInputStream s3is = amazonS3Facade.getObjectContent(clientRegion, bucketName, key);
3) Do your logic, that is, process the returned S3ObjectInputStream and return a result:
json = returnStringFromInputStream(s3is);
// ...
return result;
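Putting steps 1) to 3) together, here is a minimal, self-contained sketch of the split (names like AmazonS3Facade and MyAwsClient follow the answer; the plain InputStream stands in for S3ObjectInputStream so the sketch has no AWS dependency, and the real facade would delegate to the AWS SDK):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// All S3 plumbing is hidden behind this single seam, which a test can fake.
interface AmazonS3Facade {
    InputStream getObjectContent(String clientRegion, String bucketName, String key);
}

class MyAwsClient {
    private final AmazonS3Facade amazonS3Facade;

    MyAwsClient(AmazonS3Facade amazonS3Facade) {
        this.amazonS3Facade = amazonS3Facade;
    }

    // The only logic left to unit test: turn the object content into a String.
    public String amazonS3Read(String clientRegion, String bucketName, String key) {
        try (InputStream s3is = amazonS3Facade.getObjectContent(clientRegion, bucketName, key)) {
            return new String(s3is.readAllBytes(), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException("Could not read s3://" + bucketName + "/" + key, e);
        }
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        // A hand-rolled fake is enough here; a Mockito mock would do the same job.
        AmazonS3Facade fake = (region, bucket, key) ->
                new ByteArrayInputStream("{\"version\":1}".getBytes(StandardCharsets.UTF_8));
        System.out.println(new MyAwsClient(fake).amazonS3Read("us-east-1", "version", "version.txt"));
    }
}
```

With this shape, the unit test asserts a real returned value instead of replaying a chain of mock invocations.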
How to test that now ?
Simple enough.
With JUnit 5 :
@ExtendWith(MockitoExtension.class)
public class MyAwsClientTest {

    MyAwsClient myAwsClient;

    @Mock
    AmazonS3Facade amazonS3FacadeMock;

    @BeforeEach
    void before() {
        myAwsClient = new MyAwsClient(amazonS3FacadeMock);
    }

    @Test
    void amazonS3Read() {
        // given
        String clientRegion = "us-east-1";
        String bucketName = "version";
        String key = "version.txt";
        S3ObjectInputStream s3IsFromMock = ... // provide a stream with real content; we rely on it to perform the assertion
        Mockito.when(amazonS3FacadeMock.getObjectContent(clientRegion, bucketName, key))
                .thenReturn(s3IsFromMock);
        // when
        Result result = myAwsClient.amazonS3Read(clientRegion, bucketName, key);
        // then: assert result content
        Assertions.assertEquals(...);
    }
}
What are the advantages?
the class implementation is readable and maintainable because it focuses on your functional processing.
the whole S3 logic is moved into a single place, AmazonS3Facade (Single Responsibility Principle/modularity).
thanks to that, the test implementation is now readable and maintainable.
the test really tests the logic that you perform (instead of verifying a series of invocations on multiple mocks).
Note that unit testing AmazonS3Facade itself has little to no value, since it is only a series of invocations of S3 components, impossible to assert in terms of returned result, and so very brittle.
But writing an integration test for it, with a simple and lightweight S3-compatible server such as those quoted earlier, makes real sense.
Your error says:
Argument passed to when() is not a mock!
You are passing AmazonS3ClientBuilder.standard() in Mockito.doNothing().when(AmazonS3ClientBuilder.standard()), which is not a mock; this is why it doesn't work.
Consider using PowerMock in order to mock static methods.
Here is an example.

why do I receive an empty string when mapping Mono<Void> to Mono<String>?

I am developing a REST API using Spring WebFlux, but I have problems when uploading files. They are stored, but I don't get the expected return value.
This is what I do:
Receive a Flux<Part>.
Cast each Part to FilePart.
Save the parts with transferTo() (this returns a Mono<Void>).
Map the Mono<Void> to a Mono<String>, using the file name.
Return a Flux<String> to the client.
I expect file name to be returned, but client gets an empty string.
Controller code
@PostMapping(value = "/muscles/{id}/image")
public Flux<String> updateImage(@PathVariable("id") String id, @RequestBody Flux<Part> file) {
    log.info("REST request to update image to Muscle");
    return storageService.saveFiles(file);
}
StorageService
public Flux<String> saveFiles(Flux<Part> parts) {
    log.info("StorageService.saveFiles({})", parts);
    return parts
            .filter(p -> p instanceof FilePart)
            .cast(FilePart.class)
            .flatMap(file -> saveFile(file));
}

private Mono<String> saveFile(FilePart filePart) {
    log.info("StorageService.saveFile({})", filePart);
    String filename = DigestUtils.sha256Hex(filePart.filename() + new Date());
    Path target = rootLocation.resolve(filename);
    try {
        Files.deleteIfExists(target);
        File file = Files.createFile(target).toFile();
        return filePart.transferTo(file)
                .map(r -> filename);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
FilePart.transferTo() returns Mono<Void>, which signals when the operation is done - this means the reactive Publisher will only publish an onComplete/onError signal and will never publish a value before that.
This means that the map operation was never executed, because it's only given elements published by the source.
You can return the name of the file and still chain reactive operators, like this:
return part.transferTo(file).thenReturn(part.filename());
It is forbidden to use the block operator within a reactive pipeline and it even throws an exception at runtime as of Reactor 3.2.
Using subscribe as an alternative is not good either, because subscribe will decouple the transferring process from your request processing, making those happen in different execution sequences. This means that your server could be done processing the request and close the HTTP connection while the other part is still trying to read the file part to copy it on disk. This is likely to fail in subtle ways at runtime.
FilePart.transferTo() returns a Mono<Void>, which is always empty, so the map after it was never executed. I solved it by doing this:
private Mono<String> saveFile(FilePart filePart) {
    log.info("StorageService.saveFile({})", filePart);
    String filename = DigestUtils.sha256Hex(filePart.filename() + new Date());
    Path target = rootLocation.resolve(filename);
    try {
        Files.deleteIfExists(target);
        File file = Files.createFile(target).toFile();
        return filePart
                .transferTo(file)
                .doOnSuccess(data -> log.info("do something..."))
                .thenReturn(filename);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}

RxJava: OnErrorFailedException. Identifying the correct cause

Inspired by T. Nurkiewicz's "Reactive Programming with RxJava", I tried to apply it in a project that I am working on, and here's the issue that I am facing.
I have a REST endpoint that takes an input stream and a username and either returns a link for the updated username or returns a Bad Request error. Here's how I tried to implement this using RxJava:
@PUT
@Path("{username}")
public Response updateCredential(@PathParam("username") final String username, InputStream stream) {
    CredentialCandidate candidate = new CredentialCandidate();
    Observable.just(repository.getByUsername(username))
            .subscribe(
                    credential -> {
                        serializeCandidate(candidate, stream);
                        try {
                            repository.updateCredential(build(credential, candidate));
                        } catch (Exception e) {
                            String msg = "Failed to update credential +\"" + username + "\": " + e.getMessage();
                            throw new BadRequestException(msg, Response.status(Response.Status.BAD_REQUEST).build());
                        }
                    },
                    ex -> {
                        String msg = "Couldn't update credential \"" + username + "\""
                                + ". A credential with such username doesn't exist: " + ex.getMessage();
                        logger.error(msg);
                        throw new BadRequestException(msg, Response.status(Response.Status.BAD_REQUEST).build());
                    }); // if the Observable completes without exceptions we have a success case
    Map<String, String> map = new HashMap<>();
    map.put("path", "credential/" + username);
    return Response.ok(getJsonRepr("link", uriGenerator.apply(appsUriBuilder, map).toASCIIString())).build();
}
My issue is at line 11 (the catch clause of the onNext handler). This log output quickly demonstrates what happens:
19:23:50.472 [http-listener(4)] ERROR com.vgorcinschi.rimmanew.rest.services.CredentialResourceService - Couldn't update credential "admin". A credential with such username doesn't exist: Failed to update credential +"admin": Password too weak!
So the exception thrown in the onNext handler goes upstream and ends up in the onError handler! Apparently this works as designed, but I am confused as to how I could return the correct reason for the Bad Request error. After all, in my test case a credential with that username was found by the repository; the real error was that the suggested password was too weak. This is the helper method that generated the error:
private Credential build(Credential credential, CredentialCandidate candidate) {
    if (!isOkPsswd.test(candidate.getPassword())) {
        throw new BadRequestException("Password too weak!", Response.status(Response.Status.BAD_REQUEST).build());
    }
    ...
}
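The isOkPsswd predicate is referenced but not shown; purely for illustration, here is one hypothetical minimal shape for it (the real strength rule is unknown):

```java
import java.util.function.Predicate;

public class PasswordCheck {
    // Hypothetical strength rule: at least 8 characters and at least one digit.
    static final Predicate<String> isOkPsswd =
            p -> p != null && p.length() >= 8 && p.chars().anyMatch(Character::isDigit);

    public static void main(String[] args) {
        System.out.println(isOkPsswd.test("abc"));        // too short -> false
        System.out.println(isOkPsswd.test("s3curePass")); // long enough, has a digit -> true
    }
}
```

Whatever the actual rule is, the point for the question is that build() throws from inside the onNext lambda when the predicate fails.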
I am still fairly new to Reactive Programming, so I realise I may be missing something obvious. Skimming through the book didn't get me to an answer, so I would appreciate any help.
Just in case, this is the full stack trace:
updateCredentialTest(com.vgorcinschi.rimmanew.services.CredentialResourceServiceTest) Time elapsed: 0.798 sec <<< ERROR!
rx.exceptions.OnErrorFailedException: Error occurred when trying to propagate error to Observer.onError
at com.vgorcinschi.rimmanew.rest.services.CredentialResourceService.lambda$updateCredential$9(CredentialResourceService.java:245)
at rx.internal.util.ActionSubscriber.onNext(ActionSubscriber.java:39)
at rx.observers.SafeSubscriber.onNext(SafeSubscriber.java:134)
at rx.internal.util.ScalarSynchronousObservable$WeakSingleProducer.request(ScalarSynchronousObservable.java:276)
at rx.Subscriber.setProducer(Subscriber.java:209)
at rx.Subscriber.setProducer(Subscriber.java:205)
at rx.internal.util.ScalarSynchronousObservable$JustOnSubscribe.call(ScalarSynchronousObservable.java:138)
at rx.internal.util.ScalarSynchronousObservable$JustOnSubscribe.call(ScalarSynchronousObservable.java:129)
at rx.Observable.subscribe(Observable.java:10238)
at rx.Observable.subscribe(Observable.java:10205)
at rx.Observable.subscribe(Observable.java:10045)
at com.vgorcinschi.rimmanew.rest.services.CredentialResourceService.updateCredential(CredentialResourceService.java:238)
at com.vgorcinschi.rimmanew.services.CredentialResourceServiceTest.updateCredentialTest(CredentialResourceServiceTest.java:140)
It seems you didn't grasp the Reactive programming principles right.
The first thing is that Observables are asynchronous by their API, while you are trying to force a synchronous API by returning the Response value directly from the method, instead of returning an Observable<Response> that emits the Response value over time via its onNext() notification.
That's why you are struggling with the exception: each notification lambda (onNext/onError) is encapsulated by the Observable mechanism in order to create a proper stream that obeys some rules (the Observable contract). One of those expected behaviors is that errors are redirected to the onError() method, which is the exception-catching method; you shouldn't throw there, and throwing there is considered a fatal error and will be swallowed by throwing OnErrorFailedException.
Ideally it would be something like this:
public Observable<Response> updateCredential(@PathParam("username") final String username,
        InputStream stream) {
    return Observable.fromCallable(() -> {
        CredentialCandidate candidate = new CredentialCandidate();
        Credential credential = repository.getByUsername(username);
        serializeCandidate(candidate, stream);
        repository.updateCredential(build(credential, candidate));
        Map<String, String> map = new HashMap<>();
        map.put("path", "credential/" + username);
        return Response.ok(getJsonRepr("link", uriGenerator.apply(appsUriBuilder, map).toASCIIString())).build();
    })
    .onErrorReturn(throwable -> {
        String msg = "Failed to update credential \"" + username + "\": " + throwable.getMessage();
        throw new BadRequestException(msg, Response.status(Response.Status.BAD_REQUEST).build());
    });
}
Use fromCallable in order to make the request happen upon subscription (while Observable.just(repository.getByUsername(username)) acts synchronously when the Observable is constructed). The success path is within the callable itself, while if any error occurs, you transform it into your custom exception using the onErrorReturn operator.
With this approach you return an Observable that acts when you subscribe to it, and you get all the benefits of the Observable and Reactive approach, such as being able to compose it with other operations and being able to specify from the outside whether it acts synchronously (current thread) or asynchronously on some other thread (using a Scheduler).
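The just-vs-fromCallable difference described above is ordinary eager vs lazy evaluation. Stripped of RxJava, the same distinction can be shown in plain Java, with Supplier.get() playing the role of subscription (an analogy, not the RxJava API):

```java
import java.util.function.Supplier;

public class EagerVsLazy {
    static int calls = 0;

    // Stand-in for repository.getByUsername(...): counts how often it runs.
    static String getByUsername(String username) {
        calls++;
        return "credential-" + username;
    }

    public static void main(String[] args) {
        // Like Observable.just(repo.getByUsername(...)): the call runs right
        // now, at construction time, before anyone has "subscribed".
        String eager = getByUsername("admin");

        // Like Observable.fromCallable(...): nothing runs yet; the call is
        // deferred until someone asks for the value.
        Supplier<String> lazy = () -> getByUsername("admin");

        System.out.println(calls);  // only the eager path has hit the repository
        lazy.get();                 // "subscribing" triggers the deferred call
        System.out.println(calls);  // now both calls have happened
    }
}
```

This is why fromCallable also routes exceptions thrown by the repository into the stream's onError path, instead of blowing up while the Observable is being built.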
For a more detailed explanation of reactive programming, I suggest starting from the great tutorial by André Staltz.

How do you handle EmptyResultDataAccessException with Spring Integration?

I have a situation where, before I process an input file, I want to check whether certain information is set up in the database. In this particular case it is a client's name and the parameters used for processing. If this information is not set up, the file import shall fail.
In many StackOverflow posts, users handle the EmptyResultDataAccessException generated by queryForObject returning no rows by catching it in the Java code.
The issue is that Spring Integration catches the exception well before my code can, and in theory I would not be able to tell this error apart from any number of EmptyResultDataAccessException exceptions which may be thrown by other queries in the code.
Example code segment showing try...catch with queryForObject:
MapSqlParameterSource mapParameters = new MapSqlParameterSource();
// Step 1: check if the client exists at all
mapParameters.addValue("clientname", clientName);
try {
    clientID = this.namedParameterJdbcTemplate.queryForObject(FIND_BY_NAME, mapParameters, Long.class);
} catch (EmptyResultDataAccessException e) {
    SQLException sqle = (SQLException) e.getCause();
    logger.debug("No client was found");
    logger.debug(sqle.getMessage());
    return null;
}
return clientID;
In the above code, no row was returned and I want to handle it properly (I have not coded that portion yet). Instead, the catch block is never triggered; my generic error handler and its associated error channel are triggered instead.
Segment from file BatchIntegrationConfig.java:
@Bean
@ServiceActivator(inputChannel = "errorChannel")
public DefaultErrorHandlingServiceActivator errorLauncher(JobLauncher jobLauncher) {
    logger.debug("====> Default Error Handler <====");
    return new DefaultErrorHandlingServiceActivator();
}
Segment from file DefaultErrorHandlingServiceActivator.java:
public class DefaultErrorHandlingServiceActivator {
    @ServiceActivator
    public void handleThrowable(Message<Throwable> errorMessage) throws Throwable {
        // error handling code should go here
    }
}
Tested facts:
queryForObject expects a row to be returned and will throw an exception otherwise; therefore you have to handle the exception or use a different query which returns a row.
Spring Integration is monitoring exceptions and catching them before my own code can handle them.
What I want to be able to do:
Catch the very specific condition and log it or let the end user know what they need to do to fix the problem.
Edit on 10/26/2016, per recommendation from @Artem:
Changed my existing input channel to use the Spring-provided handler advice:
@Transformer(inputChannel = "memberInputChannel", outputChannel = "commonJobGateway", adviceChain = "handleAdvice")
Added support Bean and method for the advice:
@Bean
ExpressionEvaluatingRequestHandlerAdvice handleAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnFailureExpression("payload");
    advice.setFailureChannel(customErrorChannel());
    advice.setReturnFailureExpressionResult(true);
    advice.setTrapException(true);
    return advice;
}

private MessageChannel customErrorChannel() {
    return new DirectChannel();
}
I initially had some issues wiring up this feature, but in the end I realized that it creates yet another channel which would need to be monitored for errors and handled appropriately. For simplicity, I have chosen not to use another channel at this time.
Although potentially not the best solution, I switched to checking row counts instead of returning actual data. In this situation, the data exception is avoided.
The main code above became:
MapSqlParameterSource mapParameters = new MapSqlParameterSource();
mapParameters.addValue("clientname", clientName);
// Step 1: check if the client exists at all; if so, continue
// Step 2: check if the client's enrollment rules are available
if (this.namedParameterJdbcTemplate.queryForObject(COUNT_BY_NAME, mapParameters, Integer.class) == 1) {
    if (this.namedParameterJdbcTemplate.queryForObject(CHECK_RULES_BY_NAME, mapParameters, Integer.class) != 1) {
        return null;
    }
} else {
    return null;
}
return findClientByName(clientName);
I then check the data upon return to the calling method in Spring Batch:
if (clientID != null) {
    logger.info("Found client ID ====> " + clientID);
} else {
    throw new ClientSetupJobExecutionException("Client " +
            fileNameParts[1] + " does not exist or is improperly setup in the database.");
}
Although not needed, I created a custom Java Exception which could be useful at a later point in time.
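For reference, a minimal shape for such a custom exception, matching the constructor used in the snippet above (the real class may carry additional context such as the file name or a cause):

```java
// Minimal checked exception used to fail the job with a descriptive message.
public class ClientSetupJobExecutionException extends Exception {
    public ClientSetupJobExecutionException(String message) {
        super(message);
    }
}
```

A dedicated exception type like this makes it possible to catch exactly this setup failure later without pattern-matching on generic exception messages.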
A Spring Integration service activator can be supplied with the ExpressionEvaluatingRequestHandlerAdvice, which works like a try...catch and lets you perform some logic via onFailureExpression: http://docs.spring.io/spring-integration/reference/html/messaging-endpoints-chapter.html#expression-advice
Your problem might be that you catch (EmptyResultDataAccessException e), but on the this.namedParameterJdbcTemplate.queryForObject() invocation it may be the cause, not the exception actually thrown.
