Test method that returns void - java

I have a void method and I want to test it. How do I do that?
Here's the method:
public void updateCustomerTagCount() {
    List<String> fileList = ImportTagJob.fetchData();
    try {
        for (String tag : fileList) {
            Long tagNo = Long.parseLong(tag);
            Customer customer = DatabaseInterface.getCustomer(tagNo);
            customer.incrementNoOfTimesRecycled();
            DatabaseInterface.UpdateCustomer(customer);
        }
    } catch (IllegalArgumentException ex) {
        ex.printStackTrace();
    }
}

When a method returns void, you can't test its return value. Instead, you test the expected consequences of calling that method. For example:
public class Echo {
    String x;

    public static void main(String[] args) {
        testVoidMethod();
    }

    private static void testVoidMethod() {
        Echo e = new Echo();
        // x == null
        e.voidMethod("xyz");
        System.out.println("xyz".equals(e.x)); // true expected
    }

    private void voidMethod(String s) {
        x = s;
    }
}

It might not always be true, but the basic idea of a unit test is to check that a function works as expected and handles errors properly when it is given unexpected parameters or situations.
So unit tests are usually written against functions that take input parameters and return some output, and that is what we assert on.
Your code, however, has an external dependency (a database call), and that is something you can't execute in a unit test unless you write integration-test code against a real database connection, which is not recommended for unit tests.
So what you need is a mocking framework, for example Mockito or PowerMock, or some other library that provides object mocking. With such a framework you can simulate the database operations and other calls that would otherwise happen outside the unit under test.
As for how to test a void function: as you mentioned, there is nothing to compare with an assertion, since it returns nothing.
But there is still a way to unit test it.
Just call updateCustomerTagCount() to make sure the function runs. Even merely calling the function raises your test coverage.
Of course, in your case you need to mock
ImportTagJob.fetchData();
and
DatabaseInterface.getCustomer(tagNo);
Have the mocked
ImportTagJob.fetchData();
return an empty list as well as a non-empty list and check that your code works as you expect, adding exception handling if necessary. Your code behaves differently depending on whether fileList is empty or not, so you need to test both cases.
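A minimal sketch of that, assuming Mockito 3.4+ static mocking (mockito-inline). ImportTagJob, DatabaseInterface and Customer come from the question; CustomerTagService is a hypothetical stand-in for whatever class declares updateCustomerTagCount():

import static org.mockito.Mockito.*;

import java.util.Arrays;
import java.util.Collections;
import org.junit.Test;
import org.mockito.MockedStatic;

public class UpdateCustomerTagCountTest {

    @Test
    public void updatesEachCustomerForNonEmptyFile() {
        Customer customer = mock(Customer.class);

        try (MockedStatic<ImportTagJob> job = mockStatic(ImportTagJob.class);
             MockedStatic<DatabaseInterface> db = mockStatic(DatabaseInterface.class)) {

            job.when(ImportTagJob::fetchData).thenReturn(Arrays.asList("42"));
            db.when(() -> DatabaseInterface.getCustomer(42L)).thenReturn(customer);

            new CustomerTagService().updateCustomerTagCount();

            // the observable consequences of the void method
            verify(customer).incrementNoOfTimesRecycled();
            db.verify(() -> DatabaseInterface.UpdateCustomer(customer));
        }
    }

    @Test
    public void doesNothingForEmptyFile() {
        try (MockedStatic<ImportTagJob> job = mockStatic(ImportTagJob.class);
             MockedStatic<DatabaseInterface> db = mockStatic(DatabaseInterface.class)) {

            job.when(ImportTagJob::fetchData).thenReturn(Collections.emptyList());

            new CustomerTagService().updateCustomerTagCount();

            // no customer is looked up or updated
            db.verifyNoInteractions();
        }
    }
}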
Also, mock those objects so that they throw IllegalArgumentException where you expect it to be thrown, and write a unit test that checks the function throws the exception. In JUnit it looks like:
@Test(expected = IllegalArgumentException.class)
public void updateCustomerTagCountTest() {
    // mock the objects
    xxxxx.updateCustomerTagCount();
}
That way, you can ensure the function throws the exception when it has to.

If you want to use assertThrows while testing, should you do that with stubs or mocks?

I have a method that throws an IllegalArgumentException when somebody calls it with the value 0.
I want to write several stub and mock tests, for example for the method getFrequentRenterPoints.
I couldn't figure out any "when" or "verify" statements, which are used with mocks, so I mixed parts of mocks and parts of stubs together and came up with this:
@Test
public void methodGetFrequentRenterPointsShouldThrowIllegalArgumentException() {
    // given
    Movie movieMock = mock(Movie.class);
    // when
    movieMock.getFrequentRenterPoints(0);
    // then
    assertThrows(IllegalArgumentException.class, () -> {
        movieMock.getFrequentRenterPoints(0);
    });
}
Is it okay to have this in a class with other mocks, or should I change it into a stub if I want to use assertThrows? Or can I use assertThrows with mocks?
The answer from Benjamin Eckardt is correct.
But I will try to approach this question from another point of view: when to use mocking? This is one of my favourite answers to that question.
So in practice:
Say your code is something like this (I'm just guessing all the business objects and names):
List<RenterPoints> getFrequentRenterPoints(int renterId) {
    if (renterId <= 0) {
        throw new IllegalArgumentException();
    }
    // this is just the rest of the code, which your test does not reach
    // because of the thrown exception
    return somethingToReturn();
}
For this you do not need, and should not want, to mock anything.
But things get more complicated when your method looks like this:
List<RenterPoints> getFrequentRenterPoints(int renterId) {
    if (renterId <= 0) {
        throw new IllegalArgumentException();
    }
    // What is this?
    // It is injected into the Movie - say - like
    //
    // @Resource
    // private RenterPointService renterPointService;
    List<RenterPoints> unfiltered = renterPointService.getRenterPoints(renterId);
    return filterToFrequent(unfiltered);
}
Now, if you test with renterId >= 1, what about this renterPointService? How do you instantiate it without getting a NullPointerException? What if it is injected and requires pulling up a heavy framework for testing, or needs very heavy construction? You do not instantiate it; you mock it.
You are testing the class Movie, not the class RenterPointService, so you should not care how RenterPointService works, only what it returns when used in the class Movie. Still, you do not mock the class Movie that you are testing.
Assuming you are using Mockito with annotations, the mocking would then be done in your test class like this:
@Mock
private RenterPointService renterPointService;

@InjectMocks
private Movie movie;
Then you would mock the methods of renterPointService like this:
when(renterPointService.getRenterPoints(anyInt()))
        .thenReturn(someListContainingMockRenterPointsForThisTest);
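Pulling those pieces together, a minimal end-to-end sketch assuming JUnit 4 and Mockito 2+; Movie, RenterPoints and RenterPointService follow the guessed names above, and the stubbed return value and assertions are placeholders:

import static org.junit.Assert.assertNotNull;
import static org.mockito.ArgumentMatchers.anyInt;
import static org.mockito.Mockito.when;

import java.util.Collections;
import java.util.List;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class MovieTest {

    @Mock
    private RenterPointService renterPointService;

    @InjectMocks
    private Movie movie;

    @Test
    public void validRenterIdReturnsFilteredPoints() {
        // stub the collaborator, not the class under test
        when(renterPointService.getRenterPoints(anyInt()))
                .thenReturn(Collections.<RenterPoints>emptyList());

        List<RenterPoints> result = movie.getFrequentRenterPoints(1);

        assertNotNull(result); // placeholder assertion on the real Movie logic
    }

    @Test(expected = IllegalArgumentException.class)
    public void zeroRenterIdThrows() {
        // the real Movie code throws before the mocked collaborator is ever used
        movie.getFrequentRenterPoints(0);
    }
}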
Usually you expect the tested production method to throw, not the mock or stub. I drafted it below using new Movie().
Furthermore, in that case it does not really make sense to separate the calls into when and then, because if the call in the when step, movieMock.getFrequentRenterPoints(0), throws, the assertThrows(...) in the then step will never be executed.
To apply the given/when/then structure with the assertThrows API you could extract the passed lambda in some way, but I personally don't see much benefit in it.
@Test
public void methodGetFrequentRenterPointsShouldThrowIllegalArgumentException() {
    // given
    Movie movieMock = new Movie();
    // when/then
    assertThrows(IllegalArgumentException.class, () -> {
        movieMock.getFrequentRenterPoints(0);
    });
}

How to validate overall execution of multiple JUnit 4 tests?

Desired: A "Test of the Tests"
Imagine there is some additional "sanity check" that could be performed after a test class completes all its tests that would indicate whether test execution as a whole executed successfully. This final sanity check could possibly use some aggregated information about the tests. Just as a crude example: The number of calls to a shared method is counted, and if the count is not above some minimum expected threshold after all tests complete, then it is clear that something is wrong even if all the individual tests pass.
What I have described is probably in some "grey area" of best practices because while it does violate the doctrine of atomic unit tests, the final sanity check is not actually testing the class being tested; rather, it is checking that test execution as a whole was a success: A "test of the tests," so-to-speak. It is additional logic regarding the tests themselves.
This Solution Seems Bad
One way to accomplish this "test of tests" is to place the sanity check in a static @AfterClass method. If the check fails, one can call Assert.fail(), which actually works (surprisingly, since I presumed it could only be invoked from within methods annotated with @Test, which by nature must be instance methods, not static):
public class MyTest {
    [...]

    @AfterClass
    public static void testSufficientCount() {
        if (MyTest.counterVariable < MIN_COUNT) {
            Assert.fail("This fail call actually works. Wow.");
        }
    }
}
There are many reasons why this solution is a kludge:
Assume there are N tests in total (where a "test" is an instance method annotated with @Test). When Assert.fail() is not called in @AfterClass, N tests in total are reported by the IDE, as expected. However, when Assert.fail() is called in @AfterClass, N + 1 tests in total are reported by the IDE (the extra one being the static @AfterClass method). The additional static method was not annotated with @Test, so it should not be counted as a test. Further, the total number of tests should not be a function of whether some tests pass or fail.
The @AfterClass method is static by definition. Therefore, only static members are accessible. This presents a problem for my specific situation; I will leave this statement without elaboration because the explanation is out of the scope of the question, but basically it would be most desirable if only instance members were used.
[Other reasons too...]
Is There a Better Way?
Is there a way to implement this "test of tests" that is considered good and common practice? Does JUnit 4 support adding some kind of logic to ensure a group of unit tests within a class executed properly (failing in some way if they did not)? Is there a name for this thing I have called a "test of tests"?
About variable number of tests
I don't think there is a valid solution ...
About static fields
I tried to follow your example and, if I understood correctly, with a combination of Verifier, TestRule and ClassRule it is possible to use only the instance fields of the test class.
Here is my code, which you can use as a starting point:
public class ATest {

    public int countVariable = 0;

    private static class MyVerifier extends Verifier {
        public int count = 0;

        @Override
        protected void verify() throws Throwable {
            assertTrue(count < 1); // cause a new failed test
            // assertTrue(count >= 1); // it's all ok
        }
    }

    @ClassRule
    public static MyVerifier v = new MyVerifier();
    private class MyRule implements TestRule {
        ATest a;
        MyVerifier v;

        public MyRule(ATest a, MyVerifier v) {
            this.a = a;
            this.v = v;
        }

        @Override
        public Statement apply(final Statement base, Description description) {
            // wrap the test statement so that, after each test has run,
            // the instance counter is copied into the static verifier
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    base.evaluate();
                    v.count = a.countVariable;
                }
            };
        }
    }
    @Rule
    public MyRule rule = new MyRule(this, v);

    @org.junit.Test
    public void testSomeMethod() {
        countVariable++; // modifies the instance counter
        assertTrue(true);
    }

    @org.junit.Test
    public void testSomeMethod2() {
        countVariable++; // modifies the instance counter
        assertTrue(true);
    }
}
Having said that, a "test of tests" isn't considered a common and good practice because, as you know, it violates at least two of the five FIRST principles (see Clean Code by Uncle Bob Martin): tests must be
F: Fast
I: Independent
R: Repeatable
S: Self-validating
T: Timely (linked to the TDD practice)

Should all of my unit tests always pass when method is given bad input?

I am somewhat new to test driven development and am trying to determine whether I have a problem with my approach to unit testing.
I have several unit tests (let's call them group A) that test whether my method's return is as expected.
I also have a unit test "B" whose passing condition is that an IllegalArgumentException is thrown when my method is given invalid input.
The unit tests in group A fail when the method is given invalid input since the method needs valid input to return correctly.
If I catch the exception, unit test "B" will fail, but if I don't catch the exception, the tests in group A will fail.
Is it OK to have unit tests fail in this way, or can I modify the code in some way so that all tests always pass?
Am I doing TDD all wrong?
Here's a notion of my code for more clarity:
public class Example {
    public static String method(String inputString, int value) {
        if (badInput) {
            throw new IllegalArgumentException();
        }
        // do some things to inputString
        return modifiedInputString;
    }
}
public class ExampleTests {
    @Test
    public void methodReturnsIllegalArgumentExceptionForBadInput() {
        assertThrows(IllegalArgumentException.class, () -> { Example.method(badInput, badValue); });
    }

    // "Group A" tests only pass with valid input. Bad input causes IllegalArgumentException
    @Test
    public void methodReturnsExpectedType() {
        assertTrue(actual == expected);
    }

    @Test
    public void methodReturnsExpectedValue() {
        assertTrue(actual == expected);
    }

    @Test
    public void methodReturnsExpectedStringFormat() {
        assertTrue(actual == expected);
    }
}
As you've correctly noted in the comments, the problem is an overly broad test setup and overly implicit tests.
Tests are much more readable when they are self-contained on the business level. Common test setup should be focused on setting up technical details, but all business setup should be within each test itself.
Example for your case (a conceptual example, must be reworked to match the details of your implementation):
@Test
public void givenWhatever_whenDoingSomething_methodReturnsExpectedType() {
    given(someInputs);
    Type result = executeSut(); // rename executeSut to the actual function under test
    assertTrue(result == expected);
}
This way, just by looking at a test, a reader knows what scenario is being tested. Common test setup and helper functions such as given abstract away technical details, so the reader is not distracted on first inspection. If they are interested, the details are always available, but they are usually less important and can stay hidden.
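Putting that together for the code in the question, a conceptual sketch (assuming JUnit 4.13+, which provides assertThrows; the concrete inputs and expected values are placeholders):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThrows;

import org.junit.Test;

public class ExampleTests {

    @Test
    public void givenBadInput_methodThrowsIllegalArgumentException() {
        // the bad input lives only in this test
        assertThrows(IllegalArgumentException.class, () -> Example.method("", -1)); // placeholder bad input
    }

    @Test
    public void givenValidInput_methodReturnsModifiedString() {
        // the valid input lives only in this test, so it is unaffected by the exception case
        String result = Example.method("abc", 1); // placeholder valid input
        assertEquals("abc-modified", result);     // placeholder expectation
    }
}

Because each test owns its input, the bad input only ever reaches the test that asserts the exception, so all tests can pass at the same time.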

How do I test a very simple void method in jUnit?

I know there are several questions about unit testing void methods, but my question is different.
I'm learning Java, so my boss gives me tasks with different requirements.
My current task has a requirement that the JUnit tests must cover more than 60% of the code, so I need to test a very simple method to reach that 60%. The method is the following:
public void updateGreen() {
    // delete this outprint if the Power Manager works
    System.out.println(onCommand + "-green");
    // p = Runtime.getRuntime().exec(command + "-green");
    // wait until the command is finished
    // p.waitFor();
}
Because of internal problems, I can't execute the command with Runtime, so there is only a System.out call in this method.
I have multiple methods like that, so tests for them will cover over 10% of my whole code.
Is it useful to test such a method? If yes, how?
If there are a lot of such methods, the thing you might want to test here is that updateGreen() uses the right string, "some-command-green", and that System.out is invoked. To do this you can extract System.out into an object field and mock it (e.g. with Mockito's spy()) to test the string that was passed to println.
I.e.
class MyClass {
    PrintStream out = System.out;

    public void updateGreen() { ... }
}
In test:
@Test
public void testUpdate() {
    MyClass myClass = new MyClass();
    myClass.out = Mockito.spy(new PrintStream(...));
    // mock a call with an expected input
    doNothing().when(myClass.out).println("expected command");
    myClass.updateGreen();
    // test that there was a call
    Mockito.verify(myClass.out, Mockito.times(1)).println("expected command");
}
You could return true if the method ran successfully and false otherwise. It would be easy to test for this.
You could also test the output of this method, as described here:
Should we unit test console outputs?
But in my experience, it is much better to have methods return an optimistic or pessimistic value (true/false, 1/0/-1 etc) to indicate their status.
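A rough sketch of that idea (onCommand comes from the question; the PowerManager class name and the try/catch placement are assumptions for illustration):

public class PowerManager {

    private final String onCommand = "some-command"; // placeholder value

    // returns true on success, false on failure, so the result is easy to assert
    public boolean updateGreen() {
        try {
            System.out.println(onCommand + "-green");
            // Process p = Runtime.getRuntime().exec(onCommand + "-green");
            // p.waitFor();
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}

with a test like:

@Test
public void updateGreenReportsSuccess() {
    assertTrue(new PowerManager().updateGreen());
}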
You can also write a getter method for the onCommand flag:
public String getFlag() {
    if (greenFlagRequested) { // some logic here
        return "green";
    }
    // otherwise default to no flags
    return "";
}
You could test that onCommand + "-green" has been written to System.out by using the System Rules library.
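For example, a minimal sketch assuming the System Rules library's SystemOutRule (MyClass and updateGreen() follow the code above):

import static org.junit.Assert.assertTrue;

import org.junit.Rule;
import org.junit.Test;
import org.junit.contrib.java.lang.system.SystemOutRule;

public class UpdateGreenTest {

    @Rule
    public final SystemOutRule systemOutRule = new SystemOutRule().enableLog();

    @Test
    public void writesGreenCommandToSystemOut() {
        new MyClass().updateGreen();
        // the rule captures everything written to System.out during the test
        assertTrue(systemOutRule.getLog().contains("-green"));
    }
}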

Running multiple JUnit tests - Don't repeat if a failure happens once

I am in a project now that is using JUnit as a framework to test engineering data (ref: last question Creating a Java reporting project -- would like to use JUnit and Ant but not sure how)
Since a picture (err, a code block) is worth a thousand words, let me paste my loop:
JUnitCore junit = new JUnitCore();
RunListener listener = new RunListener();
junit.addListener(listener);
[...]
for (AbstractFault fault : faultLog) {
    theFault = fault;
    Result result = junit.run(GearAndBrakeFaultLogReports.class);
    for (Failure f : result.getFailures()) {
        output.println(log.getName());
        output.println(fault.getName());
        output.println(HelperFunctions.splitCamelCase(f.getDescription()
                .getMethodName()));
        output.println(f.getMessage());
        output.println();
    }
}
As you can see, I am running the "junit.run" many times (for each fault in the log).
However, if any one of my tests fires a fail() I don't want to repeat that test. In other words, if there are 50 faults in a log, and in fault #1 a test fails, I don't want to attempt that test in the 49 future faults I am looping through.
Here is an example test:
private static boolean LeftMLGDownTooLongFound = false;

@Test
public final void testLeftMLGDownTooLong() {
    if (!LeftMLGDownTooLongFound
            && handleLDGReportFaults(false)
            && theFault.getName().equals(FaultNames.LDG_LG_DWN_TIME.toString())) {
        assertNotGreater(getPCandRecNum(), 8f, ldgFault.getLeftStrutUpTime());
        LeftMLGDownTooLongFound = true;
    }
}
Currently, to do this, I am making a static boolean that is set to false at first but switches to true after the first assertion. I'm not sure this works, but that's the idea, and I don't want to do this for every single test (there are hundreds of them).
Is there any public function, method, or way in the JUnitCore or Runner class with which I can flag a test so it never runs again after fail() is called?
Ah, figured it out. To do this, I need a way to record the failed tests and then, in the @Before method, back out of the test. Here is what I added.
@Rule
public TestName name = new TestName();

@Before
public void testNonFailedOnly() {
    Assume.assumeTrue(!failedTests.contains(name.getMethodName()));
}

private static List<String> failedTests = new ArrayList<String>(256);

@Rule
public TestWatcher watchman = new TestWatcher() {
    /* (non-Javadoc)
     * @see org.junit.rules.TestWatcher#failed(java.lang.Throwable, org.junit.runner.Description)
     */
    @Override
    protected void failed(Throwable e, Description description) {
        super.failed(e, description);
        failedTests.add(description.getMethodName());
    }
};
It does add about 1.5 seconds of overhead, which sucks... but better than the alternative!!
Anyone have ideas on how to optimize this? I believe the overhead comes from the TestWatcher; I don't think it's from the ArrayList.
I used a Java class that every test extends.
In the @Before of this class I set a boolean hasPassed = false;
At the end of every @Test method I set this variable to hasPassed = true;
In the @AfterMethod you can then check the variable.
If your test causes an exception, it won't reach the end and the variable will still be false.
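A rough sketch of that base-class idea in JUnit 4 (the answer's @AfterMethod reads like TestNG, so @After is used here as the closest JUnit 4 equivalent; the skip logic itself is only indicated):

import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Base class every test extends; tracks whether the test body reached its last line.
abstract class PassTrackingTest {

    protected boolean hasPassed;

    @Before
    public void resetPassFlag() {
        hasPassed = false;
    }

    @After
    public void recordResult() {
        if (!hasPassed) {
            // the test threw or failed before its last line;
            // record its name here so later runs can skip it
        }
    }
}

class SomeFaultTest extends PassTrackingTest {

    @Test
    public void someCheck() {
        assertTrue(true);
        hasPassed = true; // last line of every @Test method
    }
}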
