All JUnit assert methods take an optional first parameter: the message printed when the assertion fails.
How do I ensure the parameter is always passed, and developers in my project never lazily skip describing what the assertion is doing?
Is there an inspection tool that can check for that?
Is there anything I can do programmatically?
My project is maven-friendly.
As code inspection seems to be your aim, I would recommend a tool called PMD. If there is not already a rule for this, I would think it is fairly trivial to create one. Furthermore, it will help you detect other kinds of code mess your developers may be creating.
Here is a link:
http://pmd.sourceforge.net/
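In fact, PMD ships with a rule for exactly this; in PMD 6 it is called JUnitAssertionsShouldIncludeMessage (the category and rule name may differ in other PMD versions, so treat the path below as something to verify). A minimal ruleset referencing it could look like this:

<?xml version="1.0"?>
<ruleset name="assertion-messages"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">

    <description>Require a message argument on every JUnit assertion.</description>

    <!-- Rule path as of PMD 6; older releases kept this rule in the junit ruleset. -->
    <rule ref="category/java/bestpractices.xml/JUnitAssertionsShouldIncludeMessage"/>
</ruleset>

Wired into the maven-pmd-plugin, this reports every assert call without a message, and the plugin's check goal can be configured to fail the build on violations.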
In our team, we do code reviews of pull requests. If asserts are not following the standard, you could flag it in the review. Only pull requests that have sufficient approvals are allowed to be merged.
That said, I would probably not enforce this rule on my team. Instead I'd tell them to write shorter tests where the method name will clearly state the intent, like:
@Test(expected = IllegalArgumentException.class)
public void shouldThrowExceptionWhenInputIsNegative() {}

@Test
public void shouldFilterOutNulls() {}

@Test
public void shouldCreateAdditionalRecordWhenBankBalanceIsOver10000() {}
etc...
How would I set up using PMD and Checkstyle results as advice only and disable them on the build server? And would it be bad practice to do so?
Both PMD and Checkstyle offer valuable advice, and I want to keep using them.
But (here comes the but) I find that my code collects a lot of lint trying to work around some of the warnings. To name a few examples:
Test classes contain many Mockito and JUnit static imports; invariably I have to add @SuppressWarnings("PMD.TooManyStaticImports").
A class under test needs its fields filled with mock objects; these are not used anywhere in the test, but they need to be declared and annotated with @Mock for the class under test to work correctly. Add @SuppressWarnings("PMD.UnusedPrivateField").
In test classes I will have methods for creating objects from a long list of parameters, e.g. createPerson(String firstname, String lastname, int shoesize, String favouritecolor, ...). These objects are normally created from a database or XML. Add @SuppressWarnings("PMD.ParameterNumberCheck").
Sometimes my documentation will be: "This method makes sure that X in the following 3 cases: \n ...". Apparently this is not allowed as the first sentence should end with a period.
Parent class X has some field y that all its children need and use, but checkstyle won't allow it unless the field is accessed through a method (getY()). This is just unnatural, IMO.
One option would be to turn the checks causing the most nuisance off permanently; however, a check may be a nuisance or very useful depending on the context.
I recognize that explicitly suppressing warnings in the code is also a way to document that only in that specific context the check is irrelevant and annoying. It is the amount of suppressions that annoys me: almost every test class needs suppressions, and some of the other classes need workarounds.
So would it be a solution to generate the warnings, but not allow Checkstyle and PMD violations to fail the build?
Test-classes contain ...
A class under test ...
In test classes ...
It seems to me you should suppress these checks for your test code, since you don't agree with them there.
This is a common occurrence; for example, with Checkstyle we don't document our test code, but our main code documents everything. To get around this for PMD, we split our configuration between test and main. To get around this for Checkstyle, we suppress violations for the test directory. You can also look at the options for each check and see if there is any way to configure it to ignore your cases.
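For example, a suppressions filter can switch off specific checks under the test tree. The check names below are just examples, and the DOCTYPE header differs between Checkstyle versions, so adapt both to your setup:

<!-- In the main Checkstyle configuration: -->
<module name="SuppressionFilter">
    <property name="file" value="checkstyle-suppressions.xml"/>
</module>

<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
    "-//Checkstyle//DTD SuppressionFilter Configuration 1.2//EN"
    "https://checkstyle.org/dtds/suppressions_1_2.dtd">
<!-- checkstyle-suppressions.xml: relax Javadoc checks for everything under src/test -->
<suppressions>
    <suppress checks="JavadocMethod|JavadocVariable" files="[\\/]src[\\/]test[\\/]"/>
</suppressions>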
Sometimes my documentation will be: "This method makes sure that X in the following 3 cases: \n ...".
I can't say for certain since I don't know the contents of your methods, but the first sentence should be a simple explanation of what the method does and its goal. Then you can follow it with the specific cases you mentioned. Checkstyle just requires the first sentence to end with a period, not every sentence.
Parent class X has some field y that all its children need and use, but checkstyle won't allow it unless the field is accessed through a method (getY()). This is just unnatural, IMO.
Since you completely dislike this, just disable the check for protected fields. If you look at the documentation for VisibilityModifier, you can change protectedAllowed to true and have it ignore these specific cases.
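For reference, the corresponding Checkstyle configuration would be along these lines:

<module name="VisibilityModifier">
    <property name="protectedAllowed" value="true"/>
</module>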
i find that my code collects a lot of lint trying to work around some of the warnings.
To me, it seems you are not customizing these tools to your preferences and are just trying to use a default configuration.
Before stepping into the TDD cycle, I like to sketch out the tests that need to be implemented - i.e. write empty test methods with speaking names.
Unfortunately I have not found a way to "paint them yellow" - mark them as pending for JUnit. I can make them either fail or pass. Now I am letting them fail by throwing an Exception, but I'd rather use an equivalent of pending from rspec.
Is there such an option in JUnit or an "adjacent" library?
You can use @Ignore to ignore the test,
or this library to introduce the @PendingImplementation annotation:
https://github.com/ttsui/pending
I don't think there are other ways to achieve this.
You could use Assume or @Ignore; both are not quite what you are after, but close. The third-party library pending also exists. I have not used it, but it appears to do what you want.
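A minimal sketch of both built-in approaches (the test names are made up; with the default runners a failed assumption is reported as skipped rather than failed):

import org.junit.Assume;
import org.junit.Ignore;
import org.junit.Test;

public class PendingSketchTest {

    @Ignore("pending implementation")
    @Test
    public void shouldFilterOutNulls() {
        // body to be written later
    }

    @Test
    public void shouldRejectNegativeInput() {
        // marks the test as skipped instead of passed or failed
        Assume.assumeTrue("pending implementation", false);
    }
}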
I would like to find unit tests (written with JUnit) which never fail. I.e. tests which have something like
@Test
public void testSomething() {
    try {
        // call some methods, but no assertions
    } catch (Throwable e) {
        // do nothing
    }
}
Those tests are basically useless, because they will never find a problem in the code. So is there a way to make each (valid) unit test fail? For instance each call to any assert method would throw an exception? Then tests which still remain green are useless.
Well, one approach is to use something like Jester, which implements mutation testing. It doesn't do things quite the way you've suggested, but it tries to find tests which will still pass however much you change the production code, by randomly mutating it and rerunning the tests.
If you are trying to make every call to every Assert method fail, then I'm not certain how to help you. @JonSkeet's suggestion of Jester may be close to what you want.
However, if you're trying to find an Assert method that always fails, Assert.fail is what you want. It throws an AssertionError on invocation.
I'm assuming that you want to track code that swallows assertions, and/or does not make any assertions. The simplest way to do this will be to build your own JUnit JAR, replacing the original Assert class with your own.
This won't help you find cases where the assertions are bogus, and if you're in an environment where developers don't bother to assert, you're likely to have bogus assertions as well. It also doesn't help you find tests that are marked with @Ignore, but grep will do that for you.
Add this as your default test implementation:
Assert.assertTrue(false);
Every test that starts this way will fail until you replace that line with something suitable.
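For instance, a stub test using that default body might look like this (the test name is just an illustration):

@Test
public void shouldCalculateInterestForLeapYear() {
    Assert.assertTrue(false); // placeholder: replace with real calls and assertions
}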
I'm not sure exactly what you want, but I understand the unit tests are already written, so the only way I can imagine is to override the assert methods; even then you will have to change the test code a little.
Run CheckStyle against your JUnit sources with the following configuration:
<module name="IllegalCatch">
    <property name="illegalClassNames" value="java.lang.Throwable, java.lang.Error, java.lang.AssertionError"/>
</module>
Assume the following setup:
interface Entity {}

interface Context {
    Result add(Entity entity);
}

interface Result {
    Context newContext();
    SpecificResult specificResult();
}

class Runner {
    SpecificResult actOn(Entity entity, Context context) {
        return context.add(entity).specificResult();
    }
}
I want to see that the actOn method simply adds the entity to the context and returns the specificResult. The way I'm testing this right now is the following (using Mockito)
@Test
public void testActOn() {
    Entity entity = mock(Entity.class);
    Context context = mock(Context.class);
    Result result = mock(Result.class);
    SpecificResult specificResult = mock(SpecificResult.class);

    when(context.add(entity)).thenReturn(result);
    when(result.specificResult()).thenReturn(specificResult);

    Assert.assertTrue(new Runner().actOn(entity, context) == specificResult);
}
However this seems horribly white box, with mocks returning mocks. What am I doing wrong, and does anybody have a good "best practices" text they can point me to?
Since people requested more context, the original problem is an abstraction of a DFS, in which the Context collects the graph elements and calculates results, which are collated and returned. The actOn is actually the action at the leaves.
It depends on what and how much you want your code to be tested. As you mentioned the tdd tag, I suppose you wrote your test contracts before any actual production code.
So in your contract what do you want to test on the actOn method:
That it returns a SpecificResult given both a Context and an Entity
That the add() and specificResult() interactions happen on the Context and the Result respectively
That the SpecificResult is the same instance returned by the Result
etc.
Depending on what you want to be tested you will write the corresponding tests. You might want to consider relaxing your testing approach if this section of code is not critical. And the opposite if this section can trigger the end of the world as we know it.
Generally speaking, whitebox tests are brittle, usually verbose and not expressive, and difficult to refactor. But they are well suited for critical sections that are not supposed to change a lot, and for code written by neophytes.
In your case having a mock that returns a mock does look like a whitebox test. But then again if you want to ensure this behavior in the production code this is ok.
Mockito can help you with deep stubs.
Context context = mock(Context.class, RETURNS_DEEP_STUBS);
given(context.add(any(Entity.class)).specificResult()).willReturn(someSpecificResult);
But don't get used to it as it is usually considered bad practice and a test smell.
Other remarks:
Your test method name is not precise enough: testActOn does not tell the reader what behavior you are testing. Usually TDD practitioners replace the name of the method with a contract sentence like returns_a_SpecificResult_given_both_a_Context_and_an_Entity, which is clearly more readable and gives the reader the scope of what is being tested.
You are creating mock instances in the test with the Mockito.mock() syntax; if you have several tests like that, I would recommend using a MockitoJUnitRunner with the @Mock annotations. This will unclutter your code a bit and allow the reader to better see what's going on in this particular test (see the sketch after the example below).
Use the BDD (Behavior Driven Development) or the AAA (Arrange, Act, Assert) approach.
For example:
@Test
public void invoke_add_then_specificResult_on_call_actOn() {
    // given
    // ... prepare the stubs and the object values here

    // when
    // ... call your production code

    // then
    // ... assertions and verifications there
}
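Putting the two remarks above together, a rough sketch of what the whole test could look like (this is my illustration, not a prescribed form; the runner import is org.mockito.runners.MockitoJUnitRunner in older Mockito versions):

import static org.junit.Assert.assertSame;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class RunnerTest {

    @Mock private Entity entity;
    @Mock private Context context;
    @Mock private Result result;
    @Mock private SpecificResult specificResult;

    @Test
    public void returns_the_SpecificResult_produced_by_the_Context() {
        // given
        when(context.add(entity)).thenReturn(result);
        when(result.specificResult()).thenReturn(specificResult);

        // when
        SpecificResult actual = new Runner().actOn(entity, context);

        // then
        assertSame(specificResult, actual);
    }
}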
All in all, as Eric Evans told me, context is king; you should take decisions with this context in mind. But you really should stick to best practices as much as possible.
There's a lot of reading on testing here and there. Martin Fowler has very good articles on this matter, James Carr compiled a list of test anti-patterns, there's also plenty of material on using mocks well (for example the "don't mock types you don't own" mojo), and Nat Pryce is the co-author of Growing Object Oriented Software Guided by Tests, which is in my opinion a must-read. Plus you have Google ;)
Consider using fakes instead of mocks. It's not really clear what the classes in question are meant to do, but if you can build a simple in-memory (not thread-safe, not persistent, etc.) implementation of both interfaces, you can use that for flexible testing without the brittleness that sometimes comes from mocking.
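A rough sketch of what such fakes could look like for the interfaces above (the class names and the recorded added list are my own invention):

import java.util.ArrayList;
import java.util.List;

class FakeContext implements Context {
    final List<Entity> added = new ArrayList<>();   // inspectable by the test
    private final SpecificResult specificResult;

    FakeContext(SpecificResult specificResult) {
        this.specificResult = specificResult;
    }

    @Override
    public Result add(Entity entity) {
        added.add(entity);
        return new FakeResult(specificResult, this);
    }
}

class FakeResult implements Result {
    private final SpecificResult specificResult;
    private final Context next;

    FakeResult(SpecificResult specificResult, Context next) {
        this.specificResult = specificResult;
        this.next = next;
    }

    @Override public Context newContext() { return next; }
    @Override public SpecificResult specificResult() { return specificResult; }
}

The test can then assert directly on the returned SpecificResult and on context.added, with no stubbing at all.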
I like to use names beginning with mock for all my mock objects. Also, I would replace
when(result.specificResult()).thenReturn(specificResult);
Assert.assertTrue(new Runner().actOn(entity,context) == specificResult);
with
Runner toTest = new Runner();
toTest.actOn( mockEntity, mockContext );
verify( mockResult ).specificResult();
because all you're trying to assert is that specificResult() gets run on the right mock object. Whereas your original assert doesn't make it quite so clear what is being asserted. So you don't actually need a mock for SpecificResult. That cuts you down to just one when call, which seems to me to be about right for this kind of test.
But yes, this does seem frightfully white box. Is Runner a public class, or some hidden implementation detail of a higher-level process? If it's the latter, then you probably want to write tests around the behaviour at the higher level, rather than probing implementation details.
Not knowing much about the context of the code, I would suggest that Context and Result are likely simple data objects with very little behavior. You could use a Fake as suggested in another answer or, if you have access to the implementations of those interfaces and construction is simple, I'd just use the real objects in lieu of Fakes or Mocks.
Although the context would provide more information, I don't see any problems with your testing methodology myself. The whole point of mock objects is to verify calling behavior without having to instantiate the implementations. Creating stub objects or using actual implementing classes just seems unnecessary to me.
However this seems horribly white box, with mocks returning mocks.
This may be more about the class design than the testing. If that is the way the Runner class works with the external interfaces then I don't see any problem with having the test simulate that behavior.
First off, since nobody's mentioned it, Mockito supports chaining so you can just do:
when(context.add(entity).specificResult()).thenReturn(specificResult);
(and see Brice's comment for how to enable this; sorry I missed it out!)
Secondly, it comes with a warning saying "Don't do this except for legacy code." You're right about the mock-returning-mock being a bit strange. It's OK to do white-box mocking generally because you're really saying, "My class ought to collaborate with a helper like this", but in this case it's collaborating across two different classes, coupling them together.
It's not clear why the Runner needs to get the SpecificResult, as opposed to whatever other result comes out of context.add(entity), so I'm going to make a guess: the Result contains a result with some messages or other information and you just want to know whether it's a success or failure.
That's like me saying, "Don't tell me all about my shopping order, just tell me that I made it successfully!" The Runner shouldn't know that you only want that specific result; it should just return everything that came out, the same way that Amazon shows you your total, postage and all the things you bought, even if you've shopped there lots and are perfectly aware of what you're getting.
If some classes regularly use your Runner just to get a specific result while others require more feedback, then I'd make two methods to do it, maybe called something like add and addWithFeedback, the same way that Amazon lets you do one-click shopping by a different route.
However, be pragmatic. If it's readable the way you've done it and everyone understands it, use Mockito to chain them and call it a day. You can change it later if you have need.
Recently a new concept of Theories was added to JUnit (since v4.4).
In a nutshell, you can mark your test method with the @Theory annotation (instead of @Test), make your test method parametrized, and declare an array of parameters, marked with the @DataPoints annotation, somewhere in the same class.
JUnit will sequentially run your parametrized test method, passing parameters retrieved from @DataPoints one after another. But only until the first such invocation fails (for any reason).
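For reference, a minimal sketch of that mechanism (the class and data points here are just an illustration):

import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class SquareTheoryTest {

    @DataPoints
    public static int[] values = {0, 1, 2, -3};

    // Runs once per data point, but stops at the first failing invocation.
    @Theory
    public void squareIsNonNegative(int value) {
        assertTrue(value * value >= 0);
    }
}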
The concept seems to be very similar to @DataProviders from TestNG, but when we use data providers, all the scenarios are run in spite of their execution results. And that's useful because you can see how many scenarios work/don't work and you can fix your program more effectively.
So, I wonder what's the reason not to execute a @Theory-marked method for every @DataPoint? (It appears not so difficult to inherit from the Theories runner and make a custom runner which will ignore failures, but why don't we have such behaviour out of the box?)
UPD: I have created a fault-tolerant version of Theories runner and made it available for a public access: https://github.com/rgorodischer/fault-tolerant-theories
In order to compare it with the standard Theories runner run StandardTheoriesBehaviorDemo then FaultTolerantTheoriesBehaviorDemo which are placed under src/test/... folder.
Reporting multiple failures in a single test is generally a sign that
the test does too much, compared to what a unit test ought to do.
Usually this means either that the test is really a
functional/acceptance/customer test or, if it is a unit test, then it
is too big a unit test.
JUnit is designed to work best with a number of small tests. It
executes each test within a separate instance of the test class. It
reports failure on each test. Shared setup code is most natural when
sharing between tests. This is a design decision that permeates JUnit,
and when you decide to report multiple failures per test, you begin to
fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design
problem. Kent Beck is fond of saying in this case that "there is an
opportunity to learn something about your design." We would like to
see a pattern language develop around these problems, but it has not
yet been written down.
Source: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12
To ignore assertion failures you can also use a JUnit error collector rule:
The ErrorCollector rule allows execution of a test to continue after
the first problem is found (for example, to collect all the incorrect
rows in a table, and report them all at once)
For example you can write a test like this.
public static class UsesErrorCollectorTwice {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void example() {
        String x = [..];
        String y = [..];
        collector.checkThat(x, not(containsString("a")));
        collector.checkThat(y, containsString("b"));
    }
}
The error collector uses Hamcrest matchers. Depending on your preferences, this is a plus or not.
AFAIK, the idea is the same as with asserts: the first failure stops the test. This is the difference between Parameterized and Theories.
Parameterized takes a set of data points and runs a set of test methods with each of them. Theories does the same, but fails when the first assert fails.
Try looking at Parameterized. Maybe it provides what you want.
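A minimal sketch of a Parameterized test, which runs every row and reports each failure separately (the class and data here are illustrative):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SquareTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {{0, 0}, {1, 1}, {2, 4}, {3, 9}});
    }

    private final int input;
    private final int expected;

    public SquareTest(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    // Each row becomes its own test invocation, so one failure does not hide the rest.
    @Test
    public void square() {
        assertEquals(expected, input * input);
    }
}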
A Theory is wrong if a single test in it is wrong, according to the definition of a Theory. If your test cases don't follow this rule, it would be wrong to call them a "Theory".