How can I write meaningful unit tests around the Braintree Java API? - java

I am writing a payment gateway based upon the Java API for Braintree Payments (version 2.71.0 as of writing).
I would like to write unit tests to check that the requests I send to Braintree have the right parameters set. However, it seems that the objects exposed by the Java API are write-only.
Note that I don't want my automated tests to depend upon the availability of the Braintree sandbox: I want to write robust unit tests, not flaky system tests.
In a perfect world, I would like to be able to write something like this (using Mockito and AssertJ):
BraintreeGateway mockGateway = Mockito.mock(BraintreeGateway.class);
TransactionGateway transactionGateway = Mockito.mock(TransactionGateway.class);
Result<Transaction> mockResult = (Result<Transaction>) Mockito.mock(Result.class);
BigDecimal totalAmount = BigDecimal.valueOf(1234, 2);
String customerId = "some-customer-id";
Mockito.when(mockGateway.transaction()).thenReturn(transactionGateway);
Mockito.when(transactionGateway.sale(any())).thenReturn(mockResult);
underTest.performTransaction(totalAmount, customerId);
ArgumentCaptor<TransactionRequest> reqCaptor = ArgumentCaptor.forClass(TransactionRequest.class);
Mockito.verify(transactionGateway).sale(reqCaptor.capture());
TransactionRequest sentRequest = reqCaptor.getValue();
Assertions.assertThat(sentRequest.getAmount()).isEqualTo(totalAmount);
Assertions.assertThat(sentRequest.getCustomer().getId()).isEqualTo(customerId);
Alas, the only methods I get on the sentRequest are setters.
As a workaround, I could try to mock one level deeper and catch the HTTP requests sent by the Braintree API, but that would be hardly readable and (once again) quite flaky.
Any better idea?

I'd wrap the Braintree API in a separate interface and write an integration test against that interface to verify the expected behaviour. That way you have an integration test to run when the backend changes (in terms of API, technology, or version), you can run smoke tests after a deployment, and you can mock the new interface away in your unit tests. You shouldn't unit test a third-party system; actually, by definition, you can't.
What does your test actually test anyway (what is underTest)? It looks like it only tests your mocking and argument capturing.
... And if something is hardly readable, it's usually the writer's fault :P
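A minimal sketch of that wrapping idea, assuming a hypothetical PaymentGateway abstraction and PaymentResult value object of your own (neither is part of the Braintree SDK; only BraintreeGateway, TransactionRequest, Result and Transaction are):

// Abstraction owned by your code; your payment logic depends only on this.
public interface PaymentGateway {
    PaymentResult charge(BigDecimal amount, String customerId);
}

// Thin adapter around the Braintree SDK, covered by a separate integration/smoke test.
public class BraintreePaymentGateway implements PaymentGateway {

    private final BraintreeGateway gateway;

    public BraintreePaymentGateway(BraintreeGateway gateway) {
        this.gateway = gateway;
    }

    @Override
    public PaymentResult charge(BigDecimal amount, String customerId) {
        TransactionRequest request = new TransactionRequest()
                .amount(amount)
                .customerId(customerId);
        Result<Transaction> result = gateway.transaction().sale(request);
        return new PaymentResult(result.isSuccess(), result.getMessage());
    }
}

In a unit test you then mock PaymentGateway directly and verify the amount and customer id passed to charge(...), so the write-only TransactionRequest never needs to be inspected.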

Related

Testing conditions and exceptions in Integration Tests?

I have written several unit tests and have now switched to writing integration tests for our Java (Spring Boot) app. We use the JUnit and Mockito libraries for testing.
As far as I know, integration tests check an entire flow of components rather than a single function. However, I am confused about whether I should also check the if conditions inside the methods while integration testing. Here is an example service method:
@Override
public CountryDTO create(CountryRequest request) {
    final String countryCode = request.getCode(); // assuming the code comes from the request
    if (countryRepository.existsByCodeIgnoreCase(countryCode)) {
        throw new EntityAlreadyExistsException();
    }
    final Country country = new Country();
    country.setCode("UK");
    country.setName("United Kingdom");
    final Country created = countryRepository.save(country);
    return new CountryDTO(created);
}
My questions are:
1. Can I write an integration test for a Service or a Repository class?
2. When I test the create method in my service above, I think I just create proper request values (CountryRequest) in my test class, pass them to the create method, and then check the returned value. Is that right? Or do I also need to test the condition in the if clause (countryRepository.existsByCodeIgnoreCase(countryCode))?
3. When I test find methods, I think I should first create a record by calling the create method, and the proper place for this is a @BeforeEach setup() {} method. Is that right?
If you wrote unit tests that make sure your services and repositories work correctly (for example with validation and parameterized tests), I believe you don't have to write integration tests for them.
You should write integration tests to check the behaviour of your app. By testing that your controller works correctly you also check that the service and repository are OK.
I believe a unit test should check that if condition.
Are you asking whether you should create a record in the database? If you want to test that the repository communicates correctly with the service, and the service with the controller, you have to do it with some data.
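As a rough sketch of such a controller-level integration test, assuming a hypothetical /countries endpoint in front of the service (the URL, JSON payload, and expected status codes are assumptions about your API, not taken from the question):

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;

@SpringBootTest
@AutoConfigureMockMvc
class CountryControllerIT {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void createRejectsDuplicateCountryCode() throws Exception {
        String body = "{\"code\": \"UK\", \"name\": \"United Kingdom\"}";

        // First creation goes through controller, service and repository.
        mockMvc.perform(post("/countries").contentType(MediaType.APPLICATION_JSON).content(body))
                .andExpect(status().isCreated());

        // Second creation with the same code exercises the if branch in create(...)
        // and should be translated into an error status by the controller layer.
        mockMvc.perform(post("/countries").contentType(MediaType.APPLICATION_JSON).content(body))
                .andExpect(status().isConflict());
    }
}

A test like this covers the if condition through the real controller, service and repository working together, rather than asserting on the condition in isolation.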

Unit Testing a Public method with private supporting methods inside of it?

When trying to perform test-driven development on my JSF app, I have a hard time understanding how to make my classes more testable and decoupled. For instance:
@Test
public void testViewDocumentReturnsServletPath() {
    DocumentDO doc = new DocumentDO();
    doc.setID(7L);
    doc.setType("PDF");
    DocumentHandler dh = new DocumentHandler(doc);
    String servletPath = dh.viewDocument();
    assertTrue(servletPath, servletPath.contains("../../pdf?path="));
}
This is only testable (with my current knowledge) if I remove some of the supporting private methods inside viewDocument() that are meant to interact with external resources like the DB.
How can I unit test the public API with these supporting private methods inside as well?
Unit testing typically includes mocking of external dependencies that a function relies on in order to get a controlled output. This means that if your private method makes a call to an API you can use a framework like Mockito to force a specific return value which you can then use to assure your code handles the value the way you expect. In Mockito for example, this would look like:
when(someApiCall).thenReturn(someResource);
This same structure holds if you wish to interact with a database or any other external resource that the method you are testing does not control.
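For the example above, that usually means pulling the external resource behind a collaborator that can be injected and mocked. A rough sketch, where DocumentRepository, its findStoragePath method, the extra DocumentHandler constructor parameter, and DocumentDO.getID() are all assumptions made for illustration:

// Hypothetical collaborator wrapping the DB access that currently hides
// inside viewDocument()'s private helper methods.
public interface DocumentRepository {
    String findStoragePath(long documentId);
}

public class DocumentHandler {

    private final DocumentDO doc;
    private final DocumentRepository repository;

    public DocumentHandler(DocumentDO doc, DocumentRepository repository) {
        this.doc = doc;
        this.repository = repository;
    }

    public String viewDocument() {
        // The private helpers now delegate to the injected collaborator,
        // so a test can control what the "database" returns.
        String path = repository.findStoragePath(doc.getID());
        return "../../pdf?path=" + path;
    }
}

The test then stubs the collaborator instead of hitting the DB:

DocumentRepository repository = Mockito.mock(DocumentRepository.class);
Mockito.when(repository.findStoragePath(7L)).thenReturn("/archive/7.pdf");
DocumentHandler dh = new DocumentHandler(doc, repository);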

What is the point of writing behavioral JUnit test in Spring-Rest?

I am new to JUnit and Mockito. I have this test function written for my Spring REST resource.
@Test
public void getAllMessageHappyTest() throws Exception {
    List<Message> messageList = new ArrayList<>();
    messageList.add(new Message(1, "Hello"));
    messageList.add(new Message(5, "Hello world"));
    messageList.add(new Message(3, "Hello World, G!"));
    when(messageService.getAllMessages()).thenReturn(messageList);
    RequestBuilder requestBuilder = MockMvcRequestBuilders.get("/messages/").accept(MediaType.APPLICATION_JSON);
    MvcResult mvcResult = mockMvc.perform(requestBuilder).andReturn();
    String expected = ""; // expected
    JSONAssert.assertEquals(expected, mvcResult.getResponse().getContentAsString(), false);
}
In the above scenario, I have when(messageService.getAllMessages()).thenReturn(messageList) returning a messageList written by me (or by a member of my team), and I am comparing the returned JSON with the String expected, which is also written by me (or by the same team member). Since both are written by the same person, what is the point of having this kind of test?
If I understand the question correctly, the concern is this: because the person who writes the test also hardcodes the expectation (in the form of a JSON string), the test may be redundant, or at least of limited value. Perhaps the subtext of your question is that since whoever wrote the underlying endpoint also provides the expectation, the test must pass, and if its success is preordained then it is of little value.
However, regardless of who writes the test and who writes the code-under-test, the example test you showed above has value because:
It tests more than the returned JSON; it also tests ...
That the REST endpoint mapping is correct i.e. that it exposes an endpoint named "/messages/" which accepts JSON
The REST layer is using a serialiser which produces some JSON
Continued running of this test case will ensure that the expected behaviour of this endpoint continues to be met even after you (or some other member of your team) are no longer working on this code; in other words, it acts as a regression safety net.
The code-under-test may be changed in future, if so then this test case provides a baseline against which future development can take place.
The test case provides a form of documentation for your code; people who are unfamiliar with this codebase can review the tests to understand how the code is expected to behave.
In addition, this test case could be extended to include tests for sad paths such as invalid responses, unsecured access attempts etc., thereby improving test coverage.
Update 1: in response to this comment:
Even if someone changes the actual code so that it now produces a different kind of JSON (say, not as required), the test case will still pass, because the when/then is hardcoded and the expected value is also hardcoded. So what is the point?
A test like this clearly makes no sense:
String json = "...";
when(foo.getJson()).thenReturn(json);
assertEquals(json, foo.getJson());
But that is not what your test does. Instead, your test asserts that the response - in the form of JSON - matches the serialised form of the response returned by your mocked messageService.getAllMessages(). So your test covers the serialisation piece along with the various aspects of the Spring MVC layer, such as the endpoint-to-controller mapping and interceptors and filters (if you have any).
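As a rough sketch of such a sad-path extension (the /messages/{id} URL, the messageService.getMessage(...) method, and the 404 mapping are assumptions about your API, not taken from the question):

@Test
public void getMessageReturnsNotFoundForUnknownId() throws Exception {
    // The mocked service simulates the "message does not exist" case.
    when(messageService.getMessage(99)).thenReturn(null);

    mockMvc.perform(MockMvcRequestBuilders.get("/messages/99").accept(MediaType.APPLICATION_JSON))
            .andExpect(MockMvcResultMatchers.status().isNotFound());
}

Such a test documents how the endpoint is supposed to behave when things go wrong, which the happy-path test alone does not cover.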

JUnit 5, pass information from test class to extension

I am trying to write an extension for JUnit 5 similar to what I had for JUnit 4, but I am failing to grasp how to do that in the new (stateless) extension system.
The idea in the previous version was that the user could pass information into the extension class and thereby change the way it behaved. Here is a pseudo snippet showing approximately what it used to look like:
public class MyTest {
    // here I can define different behaviour for my extension
    @Rule
    public MyCustomRule rule = MyCustomRule.of(Foo.class).withPackage(Bar.class.getPackage()).alsoUse(Cookies.class);

    @Test
    public void someTest() {
        // some test code already affected by the @Rule
        // plus, the user has access to that object and can use it, say, to retrieve additional information
        rule.grabInfoAboutStuff();
    }
}
Now, I know how to operate a JUnit 5 extension, which lifecycle callbacks to use, etc. But I don't know how to give the test writer the power to modify my extension's behaviour in JUnit 5. Any pointers appreciated.
As of JUnit Jupiter 5.0.1, it is unfortunately not possible to pass parameters to an Extension programmatically like you could for rules in JUnit 4.
However, I am working on adding such support in JUnit Jupiter 5.1. You can follow the following issue if you like: https://github.com/junit-team/junit5/issues/497
In the interim, the only way to pass information to an extension is for the extension to support custom annotations and extract the user-supplied information from there. For example, I allow users to provide a custom SpEL expression in the @EnabledIf annotation in the Spring Framework, and my ExecutionCondition extension pulls the expression from the annotation using reflection.
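A minimal sketch of that annotation-based approach, assuming a hypothetical @WithStuff annotation and MyCustomExtension (the annotation, its value() attribute, and configureFor(...) are illustrative; BeforeEachCallback, ExtensionContext, and @ExtendWith are JUnit Jupiter API):

// WithStuff.java - carries the user-supplied configuration on the test class.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.junit.jupiter.api.extension.ExtendWith;

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@ExtendWith(MyCustomExtension.class)
public @interface WithStuff {
    Class<?>[] value();
}

// MyCustomExtension.java - reads the configuration off the annotation.
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

public class MyCustomExtension implements BeforeEachCallback {

    @Override
    public void beforeEach(ExtensionContext context) {
        WithStuff annotation = context.getRequiredTestClass().getAnnotation(WithStuff.class);
        if (annotation != null) {
            configureFor(annotation.value());
        }
    }

    private void configureFor(Class<?>[] classes) {
        // set up the extension for the user-supplied classes
    }
}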
A follow-up on the (accepted) answer from Sam: in the meantime the referenced issue has been implemented in JUnit 5.1.
Use @RegisterExtension.
see https://junit.org/junit5/docs/current/user-guide/#extensions-registration-programmatic
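A minimal sketch of that programmatic registration, reusing the hypothetical builder API from the question (MyCustomExtension, Foo, Bar, and Cookies are the question's own illustrative names; only @RegisterExtension is JUnit Jupiter API):

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

class MyTest {

    // The field is initialised programmatically, so the test writer can
    // configure the extension just like the old JUnit 4 @Rule.
    @RegisterExtension
    static MyCustomExtension extension =
            MyCustomExtension.of(Foo.class).withPackage(Bar.class.getPackage()).alsoUse(Cookies.class);

    @Test
    void someTest() {
        extension.grabInfoAboutStuff();
    }
}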

Determining which test cases covered a method

The current project I'm working on requires me to write a tool which runs functional tests on a web application, and outputs method coverage data, recording which test case traversed which method.
Details:
The web application under test will be a Java EE application running in a servlet container (e.g. Tomcat). The functional tests will be written in Selenium using JUnit. Some methods will be annotated so that they are instrumented prior to deployment into the test environment. Once the Selenium tests are executed, the execution of the annotated methods will be recorded.
Problem: The big obstacle in this project is finding a way to relate the execution of a test case with the traversal of a method, especially since the tests and the application run on different JVMs: there is no way to transmit the name of the test case down to the application, and no way to use thread information to relate a test with code execution.
Proposed solution: My solution consists of using execution time: I extend the JUnit framework to record the time each test case was executed, and I instrument the application so that it saves the time each method was traversed. I then try to use correlation to link the test case with method coverage.
Expected problems: This solution assumes that test cases are executed sequentially and that a test case ends before the next one starts. Is this assumption reasonable with JUnit?
Question: Simply put, can I have your input on the proposed solution, and perhaps suggestions on how to improve it and make it more robust and functional on most Java EE applications? Or pointers to already implemented solutions?
Thank you
Edit: To add more requirements: the tool should work on any Java EE application and require the least possible amount of configuration or change to the application. While I know that isn't a fully realistic requirement, the tool should at least not require any major modification of the application itself, such as adding classes or lines of code.
Have you looked at existing coverage tools (Cobertura, Clover, Emma, ...)? I'm not sure whether any of them is able to link the coverage data to test cases, but at least with Cobertura, which is open source, you might be able to do the following:
instrument the classes with cobertura
deploy the instrumented web app
start a test suite
after each test, invoke a URL on the web app which saves the coverage data to some file named after the test which has just been run, and resets the coverage data
after the test suite, generate a cobertura report for every saved file. Each report will tell which code has been run by the test
If you need a merged report, I guess it shouldn't be too hard to generate it from the set of saved files, using the cobertura API.
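On the JUnit side, the "invoke a URL after each test" step could look roughly like the sketch below. The /coverage/save servlet is an assumption: you would have to add such an endpoint to the instrumented webapp yourself (for example, one that saves and resets the Cobertura data under the given name); the TestName rule and HttpURLConnection are standard JUnit 4 and JDK APIs.

import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

import org.junit.After;
import org.junit.Rule;
import org.junit.rules.TestName;

public abstract class CoverageAwareSeleniumTest {

    // Exposes the name of the currently running test method.
    @Rule
    public TestName testName = new TestName();

    @After
    public void dumpCoverage() throws Exception {
        // Ask the (hypothetical) servlet in the instrumented webapp to save
        // the coverage data to a file named after this test, then reset it.
        String url = "http://localhost:8080/app/coverage/save?test="
                + URLEncoder.encode(testName.getMethodName(), "UTF-8");
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        connection.getResponseCode(); // send the request and wait for the response
        connection.disconnect();
    }
}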
Your proposed solution seems like a reasonable one, except for relating the test and the request by timing. I've tried to do this sort of thing before, and it works. Most of the time. Unless you write your JUnit code very carefully, you'll have lots of issues because of time differences between the two machines, or, if you've only got one machine, just from matching one time against another.
A better solution would be to implement a Tomcat Valve which you can insert into the lifecycle in the server.xml for your webapp. Valves have the advantage that you define them in the server.xml, so you're not touching the webapp at all.
You will need to implement invoke(). The best place to start is probably with AccessLogValve. This is the implementation in AccessLogValve:
/**
 * Log a message summarizing the specified request and response, according
 * to the format specified by the <code>pattern</code> property.
 *
 * @param request Request being processed
 * @param response Response being processed
 *
 * @exception IOException if an input/output error has occurred
 * @exception ServletException if a servlet error has occurred
 */
public void invoke(Request request, Response response) throws IOException,
        ServletException {

    if (started && getEnabled()) {
        // Pass this request on to the next valve in our pipeline
        long t1 = System.currentTimeMillis();
        getNext().invoke(request, response);
        long t2 = System.currentTimeMillis();
        long time = t2 - t1;

        if (logElements == null || condition != null
                && null != request.getRequest().getAttribute(condition)) {
            return;
        }

        Date date = getDate();
        StringBuffer result = new StringBuffer(128);

        for (int i = 0; i < logElements.length; i++) {
            logElements[i].addElement(result, date, request, response, time);
        }

        log(result.toString());
    } else
        getNext().invoke(request, response);
}
All this does is log the fact that you've accessed it.
You would implement a new Valve. For your requests you pass a unique id as a URL parameter, which is used to identify the test that you're running. Your valve would do all of the heavy lifting before and after the invoke(). You could remove the unique parameter before the getNext().invoke() if needed.
To measure the coverage, you could use a coverage tool as suggested by JB Nizet, based on the unique id that you're passing over.
So, from JUnit, if your original call was:
@Test
public void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14");
}
You would change this to be:
@Test
public void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14&testId=testSomething");
}
Then you'd pick up the parameter testId in your valve.
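A rough sketch of such a valve is below. TestIdValve and the beforeTestRequest/afterTestRequest hooks are illustrative placeholders (that is where you would tag, save, or reset coverage data per test); ValveBase, invoke(Request, Response), getParameter and getNext() are the Tomcat Valve API.

import java.io.IOException;

import javax.servlet.ServletException;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class TestIdValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response) throws IOException, ServletException {
        // Pick up the unique id appended to the URL by the test.
        String testId = request.getParameter("testId");

        if (testId != null) {
            beforeTestRequest(testId);
        }

        // Pass this request on to the next valve in the pipeline.
        getNext().invoke(request, response);

        if (testId != null) {
            afterTestRequest(testId);
        }
    }

    private void beforeTestRequest(String testId) {
        // heavy lifting before the request, e.g. mark coverage data as belonging to testId
    }

    private void afterTestRequest(String testId) {
        // heavy lifting after the request, e.g. save the coverage data under testId's name
    }
}

The valve is registered in server.xml, so the webapp itself stays untouched, which fits the "no modification of the application" requirement from the question.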
