Mock OutputStream#flush method - java

I've got: a void method in a service class. This method takes some data from a remote service and then flushes it to an OutputStream.
public void pullAndFlushData(URI uri, Params params) {
    InputStream input = doHttpRequest(uri, params);
    OutputStream output = new OutputStream("somepath");
    IOUtils.copyLarge(input, output);
    output.flush();
    output.close();
}
I want: to test the results of this method. So I want to mock output.flush() and check whether it contains the correct data.
Question: How to mock OutputStream#flush method?

Your current code won't work:
OutputStream output = new OutputStream("somepath");
... won't compile because OutputStream is abstract.
So somewhere you're going to need to tell the method what OutputStream to use. To make it more testable, make the stream a parameter.
public void pullAndFlushData(OutputStream output, URI uri, Params params) {
    InputStream input = doHttpRequest(uri, params);
    IOUtils.copyLarge(input, output);
    output.flush();
    output.close();
}
Alternatively, output could be a field in the object, populated by the constructor or a setter. Or you could pass the object a factory. Whichever of these you choose, it means that the caller can take control of what kind of OutputStream is used -- for the production code, a FileOutputStream; for tests, a ByteArrayOutputStream.
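For illustration, here's a sketch of the constructor-injected variant (the class name DataPuller is hypothetical, and doHttpRequest stands in for the original remote call):

public class DataPuller {

    private final OutputStream output;

    // The caller decides which stream to use: a FileOutputStream in
    // production, a ByteArrayOutputStream in tests.
    public DataPuller(OutputStream output) {
        this.output = output;
    }

    public void pullAndFlushData(URI uri, Params params) throws IOException {
        InputStream input = doHttpRequest(uri, params);
        IOUtils.copyLarge(input, output);
        output.flush();
        // close() deliberately left to whoever opened the stream (see the next point).
    }
}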
You may wish to review the decision to close() the OutputStream here, and instead do it in the same block where the OutputStream is opened.
Now you can test it by having your unit test supply an OutputStream.
@Test
public void testPullAndFlushData() {
    URI uri = ...;
    Params params = ...;
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    someObject.pullAndFlushData(baos, uri, params);
    assertSomething(..., baos.toByteArray());
}
This doesn't use Mockito, but it's a good pattern for testing methods that use OutputStream.
You could let Mockito mock an OutputStream and use it in the same way, setting expectations for the write() calls made upon it. But that would be quite brittle with respect to how copyLarge() chunks the data.
You could also use Mockito's spy() to check that calls were made to your real ByteArrayOutputStream.
@Test
public void testPullAndFlushData() {
    URI uri = ...;
    Params params = ...;
    ByteArrayOutputStream spybaos = spy(new ByteArrayOutputStream());
    someObject.pullAndFlushData(spybaos, uri, params);
    assertSomething(..., spybaos.toByteArray());
    verify(spybaos).flush(); // asserts that flush() has been called
}
However, note that the Mockito team was quite reluctant to provide spy(), and in most cases doesn't believe it's a good way to test. Read the Mockito docs for reasons.


How to mock - reading file from s3

I am new to writing unit tests. I am trying to read a JSON file stored in S3, and I am getting "Argument passed to when() is not a mock!" and "profile file cannot be null" errors.
This is what I have tried so far, following "Retrieving Objects Using Java":
private void amazonS3Read() {
    String clientRegion = "us-east-1";
    String bucketName = "version";
    String key = "version.txt";
    S3Object fullObject = null;
    try {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(clientRegion)
                .withCredentials(new ProfileCredentialsProvider())
                .build();
        fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
        S3ObjectInputStream s3is = fullObject.getObjectContent();
        json = returnStringFromInputStream(s3is);
        fullObject.close();
        s3is.close();
    } catch (AmazonServiceException e) {
        // The call was transmitted successfully, but Amazon S3 couldn't process
        // it, so it returned an error response.
        e.printStackTrace();
    } catch (SdkClientException e) {
        // Amazon S3 couldn't be contacted for a response, or the client
        // couldn't parse the response from Amazon S3.
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    // Do some operations with the data
}
Test File
@Test
public void amazonS3ReadTest() throws Exception {
    String bucket = "version";
    String keyName = "version.json";
    InputStream inputStream = null;
    S3Object s3Object = Mockito.mock(S3Object.class);
    GetObjectRequest getObjectRequest = Mockito.mock(GetObjectRequest.class);
    getObjectRequest = new GetObjectRequest(bucket, keyName);
    AmazonS3 client = Mockito.mock(AmazonS3.class);
    Mockito.doNothing().when(AmazonS3ClientBuilder.standard());
    client = AmazonS3ClientBuilder.standard()
            .withRegion(clientRegion)
            .withCredentials(new ProfileCredentialsProvider())
            .build();
    Mockito.doReturn(s3Object).when(client).getObject(getObjectRequest);
    s3Object = client.getObject(getObjectRequest);
    Mockito.doReturn(inputStream).when(s3Object).getObjectContent();
    inputStream = s3Object.getObjectContent();
    // performing other operations
}
Getting two different exceptions:
org.mockito.exceptions.misusing.NotAMockException:
Argument passed to when() is not a mock!
Example of correct stubbing:
    doThrow(new RuntimeException()).when(mock).someMethod();
OR
java.lang.IllegalArgumentException: profile file cannot be null
at com.amazonaws.util.ValidationUtils.assertNotNull(ValidationUtils.java:37)
at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:142)
at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:133)
at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:100)
at com.amazonaws.auth.profile.ProfileCredentialsProvider.getCredentials(ProfileCredentialsProvider.java:135)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1184)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:774)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:726)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:701)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:669)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:651)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:515)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4443)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4390)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1427)
What am I doing wrong and how do I fix this?
Your approach looks wrong.
You want to mock the dependencies and invocations of a private method, amazonS3Read(), and you seem to want to unit test that method.
We don't unit test the private methods of a class; we test the class through its API (application programming interface), that is, its public/protected methods.
Your unit test is a series of mock recordings: most of it is a description, via Mockito, of what your private method does. I even have a hard time identifying the non-mocked part.
What do you assert here? That you invoke four methods on some mocks? Unfortunately, that asserts nothing in terms of result or behavior. You could add incorrect invocations between the invoked methods and the test would stay green, because you never test a result that you can check with the assertEquals(...) idiom.
It doesn't mean that mocking a method is never acceptable, but when your test is mainly mocking, something is wrong and we cannot trust its result.
I would advise two things:
write a unit test that focuses on asserting the logic that you perform: computations, transformations, transmitted values, and so forth; don't focus on chaining methods.
write some integration tests against a light, simple S3-compatible server; these will give you real feedback in terms of behavior assertion. Side effects can be tested in this way.
You have, for example, Riak, MinIO, or LocalStack.
To be more concrete, here is a refactoring approach to improve things.
If the amazonS3Read() private method has to be unit tested, you should probably move it into a specific class, for example MyAwsClient, and make it a public method.
Then the idea is to make amazonS3Read() as clear as possible in terms of responsibility.
Its logic could be summarized as:
1) Get some identifier information to pass to the S3 services.
Which means defining a method with these parameters:
public Result amazonS3Read(String clientRegion, String bucketName, String key) {...}
2) Apply all the fine-grained S3 functions required to get the S3ObjectInputStream object.
We could gather all of these in a specific method of a class AmazonS3Facade (sketched after these steps):
S3ObjectInputStream s3is = amazonS3Facade.getObjectContent(clientRegion, bucketName, key);
3) Do your logic, that is, process the returned S3ObjectInputStream and return a result:
json = returnStringFromInputStream(s3is);
// ...
return result;
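To illustrate, here is a minimal sketch of what AmazonS3Facade could look like (a hypothetical class; in a real application you would probably build the AmazonS3 client once and inject it, rather than rebuild it on every call):

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3ObjectInputStream;

public class AmazonS3Facade {

    // Gathers all the fine-grained SDK plumbing in one place,
    // away from the business logic.
    public S3ObjectInputStream getObjectContent(String clientRegion, String bucketName, String key) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(clientRegion)
                .withCredentials(new ProfileCredentialsProvider())
                .build();
        return s3Client.getObject(new GetObjectRequest(bucketName, key)).getObjectContent();
    }
}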
How do you test that now?
Simply enough.
With JUnit 5:
@ExtendWith(MockitoExtension.class)
public class MyAwsClientTest {

    MyAwsClient myAwsClient;

    @Mock
    AmazonS3Facade amazonS3FacadeMock;

    @BeforeEach
    void before() {
        myAwsClient = new MyAwsClient(amazonS3FacadeMock);
    }

    @Test
    void amazonS3Read() {
        // given
        String clientRegion = "us-east-1";
        String bucketName = "version";
        String key = "version.txt";
        S3ObjectInputStream s3IsFromMock = ... // provide a stream with real content; the assertion relies on it
        Mockito.when(amazonS3FacadeMock.getObjectContent(clientRegion, bucketName, key))
                .thenReturn(s3IsFromMock);
        // when
        Result result = myAwsClient.amazonS3Read(clientRegion, bucketName, key);
        // then: assert the result content
        Assertions.assertEquals(...);
    }
}
What are the advantages ?
the class implementation is readable and maintainable because it focuses on your functional processing.
the whole S3 logic is moved into a single place, AmazonS3Facade (Single Responsibility Principle/modularity).
thanks to that, the test implementation is now readable and maintainable.
the test really tests the logic that you perform (instead of verifying a series of invocations on multiple mocks).
Note that unit testing AmazonS3Facade has little or no value, since it is only a series of invocations of S3 components, impossible to assert in terms of a returned result, and so very brittle.
But writing an integration test for it, against one of the simple, lightweight S3-compatible servers quoted earlier, makes real sense.
Your error says:
Argument passed to when() is not a mock!
You are passing AmazonS3ClientBuilder.standard() in Mockito.doNothing().when(AmazonS3ClientBuilder.standard()), and it is not a mock; this is why it doesn't work.
Consider using PowerMock in order to mock static methods.
Here is an example.
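For instance, a sketch assuming PowerMock's JUnit 4 runner together with the Mockito API (the builder stubbing below is illustrative, not the only way to set it up):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

@RunWith(PowerMockRunner.class)
@PrepareForTest(AmazonS3ClientBuilder.class)
public class AmazonS3ReadTest {

    @Test
    public void returnsMockedClientFromStaticBuilder() {
        AmazonS3 client = Mockito.mock(AmazonS3.class);
        // RETURNS_SELF lets the fluent withRegion()/withCredentials() calls return the mock itself.
        AmazonS3ClientBuilder builder =
                Mockito.mock(AmazonS3ClientBuilder.class, Mockito.RETURNS_SELF);
        Mockito.when(builder.build()).thenReturn(client);

        PowerMockito.mockStatic(AmazonS3ClientBuilder.class);
        PowerMockito.when(AmazonS3ClientBuilder.standard()).thenReturn(builder);

        // Code under test that calls AmazonS3ClientBuilder.standard()...build()
        // will now receive the mocked client.
    }
}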

How do I Execute Java from Java?

I have this DownloadFile.java, and it downloads the file as it should:
import java.io.*;
import java.net.URL;

public class DownloadFile {

    public static void main(String[] args) throws IOException {
        String fileName = "setup.exe";
        // The file that will be saved on your computer
        URL link = new URL("http://onlinebackup.elgiganten.se/software/elgiganten/setup.exe");
        // The file that you want to download

        // Code to download
        InputStream in = new BufferedInputStream(link.openStream());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n = 0;
        while (-1 != (n = in.read(buf))) {
            out.write(buf, 0, n);
        }
        out.close();
        in.close();
        byte[] response = out.toByteArray();
        FileOutputStream fos = new FileOutputStream(fileName);
        fos.write(response);
        fos.close();
        // End download code

        System.out.println("Finished");
    }
}
I want to execute this from a mouse event in Gui.java.
private void jLabel17MouseClicked(java.awt.event.MouseEvent evt) {
}
How do I do this?
Your current method is a static method, which is fine, but all the data that it extracts is held tightly within the main method, preventing other classes from using it. Fortunately, this can be corrected.
My suggestion:
re-write your DownloadFile code so that it does not consist simply of a static main method, but rather offers a method that can be called by other classes easily, and that returns the data from the file of interest. This way outside classes can call the method and then receive the data that it extracted.
Give it a String parameter that will allow the calling code to pass in the URL address.
Give it a File parameter for the file that it should write data to.
Consider having it return data (a byte array?), if this data will be needed by the calling program.
Or if it does not need to return data, perhaps it could return boolean to indicate if the download was successful or not.
Make sure that your method throws all the exceptions (such as IO and URL exceptions) that it needs to throw.
Also, if this is to be called by a Swing GUI, be sure to call this type of code in a background thread, such as in a SwingWorker, so that this code does not tie up the Swing event thread, rendering your GUI frozen for a time.
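Putting those suggestions together, a hedged sketch (the method name, parameters, and target file are illustrative):

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

public class DownloadFile {

    // Downloads the content behind 'address' and writes it to 'target'.
    public static void download(String address, File target) throws IOException {
        try (InputStream in = new BufferedInputStream(new URL(address).openStream());
             FileOutputStream fos = new FileOutputStream(target)) {
            byte[] buf = new byte[1024];
            int n;
            while (-1 != (n = in.read(buf))) {
                fos.write(buf, 0, n);
            }
        }
    }
}

The mouse listener could then hand the work to a SwingWorker:

private void jLabel17MouseClicked(java.awt.event.MouseEvent evt) {
    new javax.swing.SwingWorker<Void, Void>() {
        @Override
        protected Void doInBackground() throws Exception {
            DownloadFile.download(
                    "http://onlinebackup.elgiganten.se/software/elgiganten/setup.exe",
                    new File("setup.exe"));
            return null;
        }

        @Override
        protected void done() {
            System.out.println("Finished"); // back on the event thread
        }
    }.execute();
}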

How to send an InputStream in the Play framework in a Java-only project without chunked responses?

In a Java (only) Play 2.3 project we need to send a non-chunked response of an InputStream directly to the client. The InputStream comes from a remote service from which we want to stream directly to the client, without blocking or buffering to a local file. Since we know the size before reading the input stream, we do not want a chunked response.
What is the best way to return a result for an input stream with a known size (preferably without using Scala)?
Looking at the default ok(file, ...) method for returning File objects, it goes deep into Play internals which are only accessible from Scala, and it uses the Play-internal execution context, which can't even be accessed from outside. It would be nice if it worked identically, just with an InputStream.
FWIW, I have now found a way to serve an InputStream, which basically duplicates the logic of the Results.ok(File) method to allow passing in an InputStream directly.
The key is to use the Scala call to create an Enumerator from an InputStream: play.api.libs.iteratee.Enumerator$.MODULE$.fromStream
private final MessageDispatcher fileServeContext = Akka.system().dispatchers().lookup("file-serve-context");

protected Result serveInputStream(InputStream inputStream, String fileName, long contentLength) {
    response().setHeader(
            HttpHeaders.CONTENT_DISPOSITION,
            "attachment; filename=\"" + fileName + "\"");
    // Set Content-Type header based on file extension.
    scala.Option<String> contentType = MimeTypes.forFileName(fileName);
    if (contentType.isDefined()) {
        response().setHeader(CONTENT_TYPE, contentType.get());
    } else {
        response().setHeader(CONTENT_TYPE, ContentType.DEFAULT_BINARY.getMimeType());
    }
    response().setHeader(CONTENT_LENGTH, Long.toString(contentLength));
    return new WrappedScalaResult(new play.api.mvc.Result(
            new ResponseHeader(StatusCode.OK, toScalaMap(response().getHeaders())),
            // Enumerator.fromStream() will also close the input stream once it is done.
            play.api.libs.iteratee.Enumerator$.MODULE$.fromStream(
                    inputStream,
                    FILE_SERVE_CHUNK_SIZE,
                    fileServeContext),
            play.api.mvc.HttpConnection.KeepAlive()));
}
/**
 * A simple Result which wraps a scala result so we can call it from our java controllers.
 */
private static class WrappedScalaResult implements Result {

    private final play.api.mvc.Result scalaResult;

    public WrappedScalaResult(play.api.mvc.Result scalaResult) {
        this.scalaResult = scalaResult;
    }

    @Override
    public play.api.mvc.Result toScala() {
        return scalaResult;
    }
}
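For completeness, a hypothetical controller action using the helper above (openRemoteStream() and remoteContentLength() are placeholders for however the remote service is accessed):

public Result download() {
    InputStream remote = openRemoteStream();   // placeholder: the stream from the remote service
    long length = remoteContentLength();       // placeholder: the size, known up front
    return serveInputStream(remote, "setup.bin", length);
}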

How to clear the screen output of a Java HttpServletResponse

I'm writing to the browser window using servletResponse.getWriter().write(String).
But how do I clear the text which was written previously by some other similar write call?
The short answer is, you cannot: once the browser receives the response, there is no way to take it back. (Unless there is some way to abnormally stop an HTTP response to cause the client to reload the page, or something to that extent.)
Probably the last place a response can be "cleared", in a sense, is by using the ServletResponse.reset method, which, according to the Servlet Specification, will reset the buffer of the servlet's response.
However, this method also has a catch: it only works if the buffer has not been committed (i.e. sent to the client) by the ServletOutputStream's flush method.
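For illustration, a minimal sketch of that pattern (hypothetical servlet code; reset() throws an IllegalStateException once the response has been committed):

if (!servletResponse.isCommitted()) {
    servletResponse.reset(); // clears the body buffer, the status code, and the headers
    servletResponse.getWriter().write("replacement content");
}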
You cannot. The best thing is to write to a buffer (StringWriter / StringBuilder); then you can replace the written data at any time. Only when you know for sure what the response is should you write the buffer's content to the response.
On the same matter, is there a reason to write the response this way rather than use some view technology for your output, such as JSP, Velocity, FreeMarker, etc.?
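For illustration, the buffering idea in its simplest form (a sketch; servletResponse is the HttpServletResponse from the question):

StringWriter buffer = new StringWriter();
buffer.write("first attempt at a response");
// Changed our mind: discard what was written so far.
buffer.getBuffer().setLength(0);
buffer.write("the response we actually want");
// Only now touch the real response.
servletResponse.getWriter().write(buffer.toString());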
If you have an immediate problem that you need to solve quickly, you could work around this design problem by increasing the size of the response buffer; you'll have to read your application server's docs to see if this is possible. However, this solution will not scale, as you'll soon run into out-of-memory issues if your site traffic peaks.
No view technology will protect you from this issue. You should design your application to figure out what you're going to show the user before you start writing the response. That means doing all your DB access and business logic ahead of time. This is a common issue I've seen with convoluted system designs that use proxy objects to lazily access the database. E.g., ORM with entity relationships is bad news if accessed from your view layer! There's not much you can do about an exception that happens 3/4 of the way into a rendered page.
Thinking about it, there might be some way to inject a page redirect via AJAX. Anyone ever heard of a solution like that?
Good luck with re-architecting your design!
I know the post is pretty old, but I thought I'd share my views on this.
I suppose you could actually use a Filter and a ServletResponseWrapper to wrap the response and pass it along the chain.
That is, you can have an output stream in the wrapper class and write to it instead of writing into the original response's output stream. You can clear the wrapper's output stream whenever you please, and finally write its content to the original response's output stream when you are done with your processing.
For example,
public class MyResponseWrapper extends HttpServletResponseWrapper {

    protected ByteArrayOutputStream baos = null;
    protected ServletOutputStream stream = null;
    protected PrintWriter writer = null;
    protected HttpServletResponse origResponse = null;

    public MyResponseWrapper(HttpServletResponse response) {
        super(response);
        origResponse = response;
    }

    public ServletOutputStream getOutputStream() throws IOException {
        if (writer != null) {
            throw new IllegalStateException("getWriter() has already been " +
                    "called for this response");
        }
        if (stream == null) {
            baos = new ByteArrayOutputStream();
            stream = new MyServletStream(baos);
        }
        return stream;
    }

    public PrintWriter getWriter() throws IOException {
        if (writer != null) {
            return writer;
        }
        if (stream != null) {
            throw new IllegalStateException("getOutputStream() has already " +
                    "been called for this response");
        }
        baos = new ByteArrayOutputStream();
        stream = new MyServletStream(baos);
        writer = new PrintWriter(stream);
        return writer;
    }

    public void commitToResponse() throws IOException {
        if (writer != null) {
            writer.flush(); // make sure buffered writer output has reached baos
        }
        origResponse.getOutputStream().write(baos.toByteArray());
        origResponse.flushBuffer();
    }

    private static class MyServletStream extends ServletOutputStream {
        ByteArrayOutputStream baos;

        MyServletStream(ByteArrayOutputStream baos) {
            this.baos = baos;
        }

        public void write(int param) throws IOException {
            baos.write(param);
        }
    }

    // other methods you want to implement
}
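A sketch of the accompanying Filter might then look like this (assuming the wrapper above; filter registration in web.xml or via annotation is left out):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class BufferingFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        MyResponseWrapper wrapper = new MyResponseWrapper((HttpServletResponse) response);
        // Downstream servlets write into the wrapper's in-memory buffer.
        chain.doFilter(request, wrapper);
        // Nothing has reached the client yet, so the buffered output could be
        // inspected, cleared, or rewritten here before committing it.
        wrapper.commitToResponse();
    }

    public void init(FilterConfig filterConfig) {}

    public void destroy() {}
}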

Streaming Data through Spring JDBC, unknown length

I currently have an application that inserts byte[] into our DB through the use of Spring JDBC [SqlLobValue]. The problem is that this is not a scalable way to take in data, as the server buffers all the data in memory before writing it to the database. I would like to stream the data from the HttpServletRequest InputStream, but all the constructors I can find for classes that take an InputStream as an argument also require the content length as an argument. I do not, and will not, require the user to know the content length when POSTing data to my application. Is there a way around this limitation?
I can find no documentation about what happens if I pass -1 for the content length, but my guess is that it will throw an Exception. I'm not sure why they couldn't just have the stream keep reading until read(...) returns -1, the required behavior of an InputStream.
I presume you meant "InputStream" rather than "OutputStream". I tried this out, but I was having bigger problems with my JDBC driver, so I am unsure if this actually works.
InputStream inputStream = httpServletRequest.getInputStream();
int contentLength = -1; // fake, will be ignored anyway

SqlLobValue sqlLobValue = new SqlLobValue(
        inputStream,
        contentLength,
        new DefaultLobHandler() {
            public LobCreator getLobCreator() {
                return new DefaultLobHandler.DefaultLobCreator() {
                    public void setBlobAsBinaryStream(PreparedStatement ps, int paramIndex,
                            InputStream binaryStream, int contentLength) throws SQLException {
                        // The contentLength parameter should be the -1 we provided earlier.
                        // You now have direct access to the PreparedStatement.
                        // Simply avoid calling setBinaryStream(int, InputStream, int)
                        // in favor of setBinaryStream(int, InputStream).
                        ps.setBinaryStream(paramIndex, binaryStream);
                    }
                };
            }
        }
);

jdbcTemplate.update(
        "INSERT INTO foo (bar) VALUES (?)",
        new Object[]{ sqlLobValue }
);
