I am new to writing unit tests. I am trying to read a JSON file stored in S3, and I am getting "Argument passed to when() is not a mock!" and "profile file cannot be null" errors.
This is what I have tried so far, retrieving the object using Java:
private void amazonS3Read() {
    String clientRegion = "us-east-1";
    String bucketName = "version";
    String key = "version.txt";
    S3Object fullObject = null;
    try {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(clientRegion)
                .withCredentials(new ProfileCredentialsProvider())
                .build();
        fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
        S3ObjectInputStream s3is = fullObject.getObjectContent();
        json = returnStringFromInputStream(s3is);
        fullObject.close();
        s3is.close();
    } catch (AmazonServiceException e) {
        // The call was transmitted successfully, but Amazon S3 couldn't process
        // it, so it returned an error response.
        e.printStackTrace();
    } catch (SdkClientException e) {
        // Amazon S3 couldn't be contacted for a response, or the client
        // couldn't parse the response from Amazon S3.
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    // Do some operations with the data
}
Test File
@Test
public void amazonS3ReadTest() throws Exception {
    String bucket = "version";
    String keyName = "version.json";
    InputStream inputStream = null;
    S3Object s3Object = Mockito.mock(S3Object.class);
    GetObjectRequest getObjectRequest = Mockito.mock(GetObjectRequest.class);
    getObjectRequest = new GetObjectRequest(bucket, keyName);
    AmazonS3 client = Mockito.mock(AmazonS3.class);
    Mockito.doNothing().when(AmazonS3ClientBuilder.standard());
    client = AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(new ProfileCredentialsProvider())
            .build();
    Mockito.doReturn(s3Object).when(client).getObject(getObjectRequest);
    s3Object = client.getObject(getObjectRequest);
    Mockito.doReturn(inputStream).when(s3Object).getObjectContent();
    inputStream = s3Object.getObjectContent();
    // performing other operations
}
I am getting two different exceptions:
org.mockito.exceptions.misusing.NotAMockException:
Argument passed to when() is not a mock!
Example of correct stubbing:
    doThrow(new RuntimeException()).when(mock).someMethod();
OR
java.lang.IllegalArgumentException: profile file cannot be null
at com.amazonaws.util.ValidationUtils.assertNotNull(ValidationUtils.java:37)
at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:142)
at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:133)
at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:100)
at com.amazonaws.auth.profile.ProfileCredentialsProvider.getCredentials(ProfileCredentialsProvider.java:135)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1184)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:774)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:726)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:701)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:669)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:651)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:515)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4443)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4390)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1427)
What am I doing wrong and how do I fix this?
Your approach looks wrong.
You are mocking the dependencies and invocations of a private method, amazonS3Read(), and you seem to want to unit test that method.
We don't unit test the private methods of a class; we test the class through its API (application programming interface), that is, its public/protected methods.
Your unit test is a series of mock recordings: most of it is a description, via Mockito, of what your private method does. I even have a hard time identifying the non-mocked part.
What do you assert here? That you invoke four methods on some mocks? Unfortunately, that asserts nothing in terms of result or behavior. You could add incorrect invocations between the invoked methods and the test would stay green, because you don't test a result that you can check with the assertEquals(...) idiom.
It doesn't mean that mocking a method is never acceptable, but when your test is mainly mocking, something is wrong and we cannot trust its result.
I would advise two things:
Write a unit test that focuses on asserting the logic that you perform: computations, transformations, transmitted values, and so forth. Don't focus on chaining methods.
Write some integration tests against a lightweight, simple S3-compatible server that will give you real feedback in terms of behavior assertion. Side effects can be tested this way.
You have, for example, Riak, MinIO, or LocalStack.
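For instance, an integration test can point the v1 SDK client at such a local server by overriding the endpoint. This is a minimal sketch; the localhost URL, port, and dummy credentials are assumptions for a locally running LocalStack/MinIO:
// Sketch: build a client against a local S3-compatible server (endpoint and credentials assumed).
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localhost:4566", "us-east-1"))
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("test", "test")))
        .withPathStyleAccessEnabled(true) // path-style access is required by most local S3 servers
        .build();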
To be more concrete, here is a refactoring approach to improve things.
If the amazonS3Read() private method has to be unit tested, you should probably move it into a specific class, for example MyAwsClient, and make it a public method.
Then the idea is to make amazonS3Read() as clear as possible in terms of responsibility.
Its logic could be summarized as:
1) Get some identifier information to pass to the S3 services.
Which means defining a method with these parameters:
public Result amazonS3Read(String clientRegion, String bucketName, String key) {...}
2) Apply all the fine-grained S3 functions to get the S3ObjectInputStream object.
We could gather all of these in a specific method of an AmazonS3Facade class:
S3ObjectInputStream s3is = amazonS3Facade.getObjectContent(clientRegion, bucketName, key);
3) Do your logic, that is, process the returned S3ObjectInputStream and return a result:
json = returnStringFromInputStream(s3is);
// ...
return result;
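Putting it together, here is a minimal sketch of the two classes. The Result type, the facade body, and the resource handling are assumptions to illustrate the shape, not a definitive implementation; returnStringFromInputStream is the helper from the question.
// Sketch: names (MyAwsClient, AmazonS3Facade, Result) follow the refactoring described above.
public class MyAwsClient {

    private final AmazonS3Facade amazonS3Facade;

    public MyAwsClient(AmazonS3Facade amazonS3Facade) {
        this.amazonS3Facade = amazonS3Facade;
    }

    public Result amazonS3Read(String clientRegion, String bucketName, String key) {
        S3ObjectInputStream s3is = amazonS3Facade.getObjectContent(clientRegion, bucketName, key);
        String json = returnStringFromInputStream(s3is);
        // ... your functional processing of the JSON ...
        return new Result(json);
    }
}

public class AmazonS3Facade {

    public S3ObjectInputStream getObjectContent(String clientRegion, String bucketName, String key) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(clientRegion)
                .withCredentials(new ProfileCredentialsProvider())
                .build();
        // Note: in real code the S3Object should be closed once the stream is consumed.
        return s3Client.getObject(new GetObjectRequest(bucketName, key)).getObjectContent();
    }
}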
How do we test that now?
Simple enough.
With JUnit 5:
@ExtendWith(MockitoExtension.class)
public class MyAwsClientTest {

    MyAwsClient myAwsClient;

    @Mock
    AmazonS3Facade amazonS3FacadeMock;

    @BeforeEach
    void before() {
        myAwsClient = new MyAwsClient(amazonS3FacadeMock);
    }

    @Test
    void amazonS3Read() {
        // given
        String clientRegion = "us-east-1";
        String bucketName = "version";
        String key = "version.txt";
        S3ObjectInputStream s3IsFromMock = ... // provide a stream with real content; we rely on it to perform the assertion
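        // For example (an assumption, not part of the original answer), the v1 SDK constructor
        // S3ObjectInputStream(InputStream, HttpRequestBase) accepts any InputStream:
        //   s3IsFromMock = new S3ObjectInputStream(
        //           new ByteArrayInputStream("{\"version\": 1}".getBytes(StandardCharsets.UTF_8)), null);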
        Mockito.when(amazonS3FacadeMock.getObjectContent(clientRegion, bucketName, key))
               .thenReturn(s3IsFromMock);
        // when
        Result result = myAwsClient.amazonS3Read(clientRegion, bucketName, key);
        // then: assert the result content
        Assertions.assertEquals(...);
    }
}
What are the advantages?
the class implementation is readable and maintainable because it focuses on your functional processing.
the whole S3 logic is moved into a single place, AmazonS3Facade (Single Responsibility Principle/modularity).
thanks to that, the test implementation is now readable and maintainable.
the test really tests the logic that you perform (instead of verifying a series of invocations on multiple mocks).
Note that unit testing AmazonS3Facade has little to no value, since it is only a series of invocations of S3 components, impossible to assert in terms of returned results, and so very brittle.
But writing an integration test for it against a simple, lightweight S3-compatible server, such as those quoted earlier, really makes sense.
Your error says:
Argument passed to when() is not a mock!
You are passing AmazonS3ClientBuilder.standard() to Mockito.doNothing().when(AmazonS3ClientBuilder.standard()), but that argument is not a mock; this is why it doesn't work.
Consider using PowerMock in order to mock static methods.
Here is an example.
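A minimal sketch follows; it assumes JUnit 4 with the PowerMockRunner and the powermock-api-mockito2 bridge, and the class and test names are illustrative:
// Sketch only: stub the static factory so production code receives a mocked client.
@RunWith(PowerMockRunner.class)
@PrepareForTest(AmazonS3ClientBuilder.class) // the class owning the static method
public class AmazonS3ReadTest {

    @Test
    public void amazonS3ReadTest() {
        AmazonS3 s3Mock = Mockito.mock(AmazonS3.class);
        AmazonS3ClientBuilder builderMock =
                Mockito.mock(AmazonS3ClientBuilder.class, Mockito.RETURNS_SELF);

        PowerMockito.mockStatic(AmazonS3ClientBuilder.class);
        PowerMockito.when(AmazonS3ClientBuilder.standard()).thenReturn(builderMock);
        Mockito.when(builderMock.build()).thenReturn(s3Mock);

        // From here on, stub s3Mock.getObject(...) as needed and invoke the code under test.
    }
}
Note that mocking the static builder this way couples the test to implementation details, which is exactly why the refactoring described above is usually the better option.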
I'm using spring-webflux and I wonder if someone knows how to handle errors in a Mono<Void>. I'm using FilePart's transferTo method, which on success returns Mono.empty() and otherwise wraps exceptions in Mono.error().
public Mono<UploadedFile> create(final User user, final FilePart file) {
    final UploadedFile uploadedFile = new UploadedFile(file.filename(), user.getId());
    final Path path = Paths.get(fileUploadConfig.getPath(), uploadedFile.getId());
    file.transferTo(path);
    uploadedFile.setFilePath(path.toString());
    return repo.save(uploadedFile);
}
I want to save the UploadedFile only if transferTo ended successfully. But I can't use map/flatMap, because an empty Mono obviously doesn't emit a value, and onErrorResume only accepts a Mono of the same type (Void).
Hi, try to chain your operators like this:
...
return Mono.just(file)
        .flatMap(f -> f.transferTo(path))   // flatMap (not map), so the inner Mono<Void> is actually subscribed
        .then(Mono.just(uploadedFile))
        .flatMap(uF -> {
            uF.setFilePath(path.toString());
            return repo.save(uF);
        });
}
If transferTo finishes successfully, the then operator is called.
P.S. If I'm not mistaken, FilePart is blocking; try to avoid it.
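Equivalently, you can chain directly on transferTo without the Mono.just wrapper. This is a sketch using the variables from the question; Mono.defer delays the save until the transfer completes:
// Sketch: defer the save so it only happens after the transfer completes successfully.
return file.transferTo(path)
        .then(Mono.defer(() -> {
            uploadedFile.setFilePath(path.toString());
            return repo.save(uploadedFile);
        }));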
I am developing a REST API using Spring WebFlux, but I have problems when uploading files. They are stored, but I don't get the expected return value.
This is what I do:
Receive a Flux<Part>
Cast Part to FilePart.
Save parts with transferTo() (this returns a Mono<Void>)
Map the Mono<Void> to Mono<String>, using file name.
Return Flux<String> to client.
I expect the file name to be returned, but the client gets an empty string.
Controller code
@PostMapping(value = "/muscles/{id}/image")
public Flux<String> updateImage(@PathVariable("id") String id, @RequestBody Flux<Part> file) {
    log.info("REST request to update image to Muscle");
    return storageService.saveFiles(file);
}
StorageService
public Flux<String> saveFiles(Flux<Part> parts) {
    log.info("StorageService.saveFiles({})", parts);
    return parts
            .filter(p -> p instanceof FilePart)
            .cast(FilePart.class)
            .flatMap(file -> saveFile(file));
}

private Mono<String> saveFile(FilePart filePart) {
    log.info("StorageService.saveFile({})", filePart);
    String filename = DigestUtils.sha256Hex(filePart.filename() + new Date());
    Path target = rootLocation.resolve(filename);
    try {
        Files.deleteIfExists(target);
        File file = Files.createFile(target).toFile();
        return filePart.transferTo(file)
                .map(r -> filename);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
FilePart.transferTo() returns Mono<Void>, which signals when the operation is done - this means the reactive Publisher will only publish an onComplete/onError signal and will never publish a value before that.
This means that the map operation was never executed, because it's only given elements published by the source.
You can return the name of the file and still chain reactive operators, like this:
return part.transferTo(file).thenReturn(part.filename());
It is forbidden to use the block operator within a reactive pipeline and it even throws an exception at runtime as of Reactor 3.2.
Using subscribe as an alternative is not good either, because subscribe will decouple the transferring process from your request processing, making those happen in different execution sequences. This means that your server could be done processing the request and close the HTTP connection while the other part is still trying to read the file part to copy it on disk. This is likely to fail in subtle ways at runtime.
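To make that pitfall concrete, here is a sketch of the anti-pattern just described, using the variables from the snippet above:
// Anti-pattern sketch: the copy is detached from the request processing.
part.transferTo(file).subscribe();   // runs on its own; errors are silently dropped
return Mono.just(part.filename());   // may complete (and close the exchange) before the copy finishes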
FilePart.transferTo() returns a Mono<Void>, which is constantly empty, so the map after it was never executed. I solved it by doing this:
private Mono<String> saveFile(FilePart filePart) {
    log.info("StorageService.saveFile({})", filePart);
    String filename = DigestUtils.sha256Hex(filePart.filename() + new Date());
    Path target = rootLocation.resolve(filename);
    try {
        Files.deleteIfExists(target);
        File file = Files.createFile(target).toFile();
        return filePart
                .transferTo(file)
                .doOnSuccess(data -> log.info("do something..."))
                .thenReturn(filename);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
I am trying to use a Lambda function for S3 PUT event notifications. My Lambda function should be called once I put/add any new JSON file in my S3 bucket.
The challenge I have is that there is not enough documentation on implementing such a Lambda function in Java; most of the docs I found are for Node.js.
I want my Lambda function to be called, and inside that Lambda function I want to consume the added JSON and then send that JSON to the AWS ES service.
But which classes should I use for this? Does anyone have an idea? S3 and ES are all set up and running. The auto-generated code for the Lambda is:
@Override
public Object handleRequest(S3Event input, Context context) {
    context.getLogger().log("Input: " + input);
    // TODO: implement your handler
    return null;
}
What next?
Handling S3 events in Lambda can be done, but you have to keep in mind that the S3Event object only transports a reference to the object, not the object itself. To get the actual object, you have to invoke the AWS SDK yourself.
Requesting an S3 object within a Lambda function would look like this:
public Object handleRequest(S3Event input, Context context) {
    AmazonS3Client s3Client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
    for (S3EventNotificationRecord record : input.getRecords()) {
        String s3Key = record.getS3().getObject().getKey();
        String s3Bucket = record.getS3().getBucket().getName();
        context.getLogger().log("found id: " + s3Bucket + " " + s3Key);
        // retrieve the S3 object
        S3Object object = s3Client.getObject(new GetObjectRequest(s3Bucket, s3Key));
        InputStream objectData = object.getObjectContent();
        // insert object into Elasticsearch
    }
    return null;
}
Now for the rather difficult part: inserting this object into Elasticsearch. Sadly, the AWS SDK does not provide any functions for this. The default approach is to make a REST call against the AWS ES endpoint. There are various samples out there on how to call an Elasticsearch instance.
Some people seem to go with the following project:
Jest - Elasticsearch Java Rest Client
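For instance, here is a minimal sketch of indexing a JSON document with Jest; the endpoint URL, index, type, and id are illustrative assumptions:
// Sketch: push a JSON document to Elasticsearch through Jest (execute throws IOException).
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(new HttpClientConfig.Builder("https://my-es-endpoint.amazonaws.com")
        .multiThreaded(true)
        .build());
JestClient jestClient = factory.getObject();

String json = "{\"title\": \"example\"}";
Index indexAction = new Index.Builder(json)
        .index("my-index")
        .type("my-type")
        .id("1")
        .build();
jestClient.execute(indexAction);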
Finally, here are the steps for S3 --> Lambda --> ES integration using Java.
Have your S3, Lambda, and ES created on AWS. Steps are here.
Use the Java code below in your Lambda function to fetch a newly added object from S3 and send it to the ES service:
public Object handleRequest(S3Event input, Context context) {
    AmazonS3Client s3Client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
    for (S3EventNotificationRecord record : input.getRecords()) {
        String s3Key = record.getS3().getObject().getKey();
        String s3Bucket = record.getS3().getBucket().getName();
        context.getLogger().log("found id: " + s3Bucket + " " + s3Key);
        // retrieve the S3 object
        S3Object object = s3Client.getObject(new GetObjectRequest(s3Bucket, s3Key));
        InputStream objectData = object.getObjectContent();
        // start putting your objects into the AWS ES service
        String esInput = "Build your JSON string here using S3 objectData";
        HttpClient httpClient = new DefaultHttpClient();
        HttpPut putRequest = new HttpPut(AWS_ES_ENDPOINT + "/{Index_name}/{product_name}/{unique_id}");
        StringEntity entity = new StringEntity(esInput); // renamed: must not shadow the S3Event parameter "input"
        entity.setContentType("application/json");
        putRequest.setEntity(entity);
        httpClient.execute(putRequest);
        httpClient.getConnectionManager().shutdown();
    }
    return "success";
}
Use either Postman or Sense to create the actual index and its corresponding mapping in ES.
Once done, download and run proxy.js on your machine. Make sure you set up the ES security steps suggested in this post.
Test the setup and Kibana by opening the http://localhost:9200/_plugin/kibana/ URL from your machine.
All is set. Go ahead and set up your dashboard in Kibana. Test it by adding new objects to your S3 bucket.
I've got: a void method of a service class. This method takes some data from a remote source and then flushes it with an OutputStream.
public void pullAndFlushData(URI uri, Params params) {
    InputStream input = doHttpRequest(uri, params);
    OutputStream output = new OutputStream("somepath");
    IOUtils.copyLarge(input, output);
    output.flush();
    output.close();
}
I want: to test the results of this method. So I want to mock output.flush() and check whether it contains the correct data.
Question: How do I mock the OutputStream#flush method?
Your current code won't work:
OutputStream output = new OutputStream("SomePath");
... won't compile because OutputStream is abstract.
So somewhere you're going to need to tell the method what OutputStream to use. To make it more testable, make the stream a parameter.
public void pullAndFlushData(OutputStream output, URI uri, Params params) {
    InputStream input = doHttpRequest(uri, params);
    IOUtils.copyLarge(input, output);
    output.flush();
    output.close();
}
Alternatively, output could be a field in the object, populated by the constructor or a setter. Or you could pass the object a factory. Whichever of these you choose, it means that the caller can take control of what kind of OutputStream is used -- for the production code, a FileOutputStream; for tests, a ByteArrayOutputStream.
You may wish to review the decision to close() the OutputStream here, and instead do it in the same block where the OutputStream is opened.
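For instance, the caller could own the stream's lifecycle with try-with-resources. This is a sketch; the path and service variable are illustrative:
// Sketch: the caller opens and closes the stream; the method only writes to it.
try (OutputStream output = new FileOutputStream("somepath")) {
    service.pullAndFlushData(output, uri, params);
} // output is closed here automatically, even if an exception was thrown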
Now you can test it by having your unit test supply an OutputStream.
@Test
public void testPullAndFlushData() {
    URI uri = ...;
    Params params = ...;
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    someObject.pullAndFlushData(baos, uri, params);
    assertSomething(..., baos.toByteArray());
}
This doesn't use Mockito, but it's a good pattern for testing methods that use OutputStream.
You could let Mockito mock an OutputStream and use it in the same way, setting expectations for the write() calls upon it. But that would be quite brittle with respect to the way copyLarge() chunks the data.
You could also use Mockito's spy() to check that calls were made to your real ByteArrayOutputStream.
@Test
public void testPullAndFlushData() {
    URI uri = ...;
    Params params = ...;
    ByteArrayOutputStream spybaos = spy(new ByteArrayOutputStream());
    someObject.pullAndFlushData(spybaos, uri, params);
    assertSomething(..., spybaos.toByteArray());
    verify(spybaos).flush(); // asserts that flush() has been called
}
However, note that the Mockito team was quite reluctant to provide spy(), and in most cases doesn't believe it's a good way to test. Read the Mockito docs for reasons.
I am new to the Mockito test framework. I need to unit test a method which returns string content. The same content is also stored in a .js file (e.g. "8.js").
How do I verify that the string contents returned from the method are as expected?
Below is the code for generating the .js file:
public String generateJavaScriptContents(Project project)
{
    try
    {
        // Creating projectId.js file
        FileUtils.mkdir(outputDir);
        fileOutputStream = new FileOutputStream(outputDir + project.getId() + ".js");
        streamWriter = new OutputStreamWriter(fileOutputStream, "UTF-8");
        StringTemplateGroup templateGroup =
            new StringTemplateGroup("viTemplates", "/var/vi-xml/template/", DefaultTemplateLexer.class);
        stringTemplate = templateGroup.getInstanceOf("StandardJSTemplate");
        stringTemplate.setAttribute("projectIdVal", project.getId());
        stringTemplate.setAttribute("widthVal", project.getDimension().getWidth());
        stringTemplate.setAttribute("heightVal", project.getDimension().getHeight());
        stringTemplate.setAttribute("playerVersionVal", project.getPlayerType().getId());
        stringTemplate.setAttribute("finalTagPath", finalPathBuilder.toString());
        streamWriter.append(stringTemplate.toString());
        return stringTemplate.toString();
    }
    catch (Exception e)
    {
        logger.error("Exception occurred while generating Standard Tag Type Content", e);
        return "";
    }
}
The output of the above method is written to the .js file, and the contents of that file look something like this:
var projectid = 8; var playerwidth = 300; var playerheight = 250; var player_version = 1; .....
I have written a test method using Mockito. I am able to write the .js file successfully from the test, but how do I verify its contents?
Can anyone help me sort out this problem?
As @ĆukaszBachman mentions, you can read the contents from the js file. There are a couple of things to consider when using this approach:
The test will be slow, as you will have to wait for the js content to be written to the disk, read the content back from the disk and assert the content.
The test could theoretically be flaky because the entire js content may not be written to the disk by the time the code reads from the file. (On that note, you should probably consider calling flush() and close() on your OutputStreamWriter, if you aren't already.)
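For instance, reusing the fields from the question, try-with-resources guarantees the flush and close (a sketch):
// Sketch: try-with-resources flushes and closes the writer deterministically.
try (OutputStreamWriter writer = new OutputStreamWriter(
        new FileOutputStream(outputDir + project.getId() + ".js"), "UTF-8")) {
    writer.append(stringTemplate.toString());
} // flushed and closed here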
Another approach is to mock your OutputStreamWriter and inject it into the method. This would allow you to write test code similar to the following:
OutputStreamWriter mockStreamWriter = mock(OutputStreamWriter.class);
generateJavaScriptContents(mockStreamWriter, project);
verify(mockStreamWriter).append("var projectid = 8;\nvar playerwidth = 300;...");
http://mockito.googlecode.com/svn/branches/1.5/javadoc/org/mockito/Mockito.html#verify%28T%29
If you persist this *.js file on the file system, then simply create a util method which reads its contents, and then use some sort of assertEquals to compare it with your fixed data.
Here is code for reading file contents into String.
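A minimal sketch, assuming Java 7+ java.nio; the path is illustrative:
// Sketch: read a whole file into a String.
String contents = new String(
        Files.readAllBytes(Paths.get("/var/vi-xml/output/8.js")),
        StandardCharsets.UTF_8);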