How is an if condition different from JUnit's assumingThat? - java

Why would one use JUnit's assumingThat() method instead of a plain old if clause? If the simple thing works, why complicate it with something else that does the same job?
Is it just an expressiveness thing, or is there some other advantage I'm not seeing?

JUnit's Assume is not a new feature in version 5; it has been there since 4.4, and it has other applications.
You could skip a test with a plain if, but an assume failure goes through the test lifecycle, so you can hook into it with a listener via testAssumptionFailure().
Example situation (most common): you could have a listener that creates a report of the test run, with code that adds the failed tests, the passed tests, and the assumption-failed tests to that report. If you wanted to achieve this without a listener and its testAssumptionFailure() method, you would have to repeat the reporting code everywhere.
Adding a listener instead keeps it modular and maintainable (see the sketch below).
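For illustration, here is a minimal JUnit 4 RunListener sketch; the class name and the counters are made up, but testAssumptionFailure() is the hook referred to above.

import org.junit.runner.Description;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

// Hypothetical reporting listener: one central place that counts passed,
// failed and assumption-skipped tests instead of repeating reporting code
// in every test class.
public class ReportListener extends RunListener {
    private int finished, failed, assumptionsFailed;

    @Override
    public void testFinished(Description description) {
        finished++;
    }

    @Override
    public void testFailure(Failure failure) {
        failed++;
    }

    @Override
    public void testAssumptionFailure(Failure failure) {
        // Only reached when an Assume check fails; a plain "if" skip
        // would make the test look like an ordinary pass.
        assumptionsFailed++;
    }

    @Override
    public void testRunFinished(Result result) {
        System.out.printf("finished=%d, failed=%d, assumption failures=%d%n",
                finished, failed, assumptionsFailed);
    }
}

It would be registered on a JUnitCore instance, e.g. core.addListener(new ReportListener()), before running the tests.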
There are also several varieties of assume methods, which save you from repeatedly writing if/else blocks and messages.
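As a quick sketch of the behavioral difference (JUnit 5 shown here, since the question mentions assumingThat; the environment variable names are made up):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assumptions.assumeTrue;
import static org.junit.jupiter.api.Assumptions.assumingThat;

import org.junit.jupiter.api.Test;

class AssumptionExampleTest {

    @Test
    void runsOnlyOnCi() {
        // Aborts the test and reports it as skipped;
        // an early "if (...) return;" would report a misleading pass.
        assumeTrue("CI".equals(System.getenv("ENV")));
        assertEquals(4, 2 + 2);
    }

    @Test
    void partialAssumption() {
        // Only the lambda is skipped when the assumption fails;
        // everything after assumingThat still runs for everyone.
        assumingThat("CI".equals(System.getenv("ENV")),
                () -> assertEquals("ci-db", System.getenv("DB_NAME")));
        assertEquals(4, 2 + 2);
    }
}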

JUnit - make sure that a function is invoked on every path

I know how to make sure that a function has been invoked, using:
Mockito.verify
Now I want to make sure that on every path of the function (every 'if', 'else if' and 'else') the function was invoked.
I can basically write a unit test for every case, but I want to make sure that if any further cases are added, they will also invoke that method.
Unit testing alone will not do that. You have to look into using coverage in order to get there.
Unit testing can only tell you whether the paths that were taken produced a "valid" result; it has no knowledge of all the paths that exist, or whether they were all hit.
So you want to look into which coverage tool would work for you.
When you are working with Eclipse or IntelliJ, those things work more or less out of the box; you can install plugins like Cobertura or EclEmma within Eclipse and then do a "run unit test with coverage".
But of course, that only results in a number. You then have to look carefully at your code to decide whether you are happy with that number (and those IDEs make that really easy; they can show you your source code and which paths were taken).
Meaning: coverage is a whole concept, and you have to understand what it means and in which way you can make that concept helpful for your daily work. For example, the last thing you want is your boss giving you a specific target goal for coverage.
And just to be sure: there is no tooling that tells you "you added new code, and now this specific method invocation no longer happens on all paths". What coverage gives you is that you had 75.32% coverage before your change and afterwards it went down to 74.01% ... the rest is then up to you.
Now I want to make sure that on every path of the function (every 'if', 'else if' and 'else') the function was invoked.
You don't want that.
The misunderstanding is that you don't test "the code". You test public observable behavior. In your case the behavior is that your unit under test (UuT) (after doing other stuff) calls a method on a dependency (I hope).
You don't want to test "the code" because it may change to become cleaner and/or to support more behavior. But you don't want to change your existing tests then, since they are what guarantees that the desired behavior is preserved during your refactoring.
On the other hand, each test method should verify exactly one expectation about the UuT's behavior. This means you should already have one test method for each execution path through your if/else cascade. So all you have to do is add the verify() instruction to each of these test methods (see the sketch below).
Finally, you may have an easier job testing your code if you enforce the Single Layer of Abstraction principle, which basically says that a method either calls methods on some dependencies (aka "dispatching"), calls internal methods, or does low-level operations. This principle may lead to a design where the "low level" stuff your UuT currently does moves to a new dependency, so that your UuT only needs to make two calls on some dependencies in a certain order...
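A minimal sketch of that idea, with made-up names (OrderProcessor, AuditLog): one test method per execution path, each stating the expected interaction with its own verify().

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class PathVerificationTest {

    // Minimal stand-ins for the real unit under test and its dependency.
    interface AuditLog { void record(String category); }

    static class OrderProcessor {
        private final AuditLog log;
        OrderProcessor(AuditLog log) { this.log = log; }
        void process(int amount) {
            if (amount > 1000) {
                log.record("large");
            } else {
                log.record("small");
            }
        }
    }

    private final AuditLog auditLog = mock(AuditLog.class);
    private final OrderProcessor processor = new OrderProcessor(auditLog);

    @Test
    public void smallOrdersAreRecorded() {
        processor.process(5);
        verify(auditLog).record("small");
    }

    @Test
    public void largeOrdersAreRecorded() {
        processor.process(5000);
        verify(auditLog).record("large");
    }
}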
I have encountered this issue before, where I needed to bind the switch cases tightly to tests, and I was desperate enough to do it.
I am assuming test coverage analysis is not enough for you. For me, the if/else conditions were so critical that changing something unintentionally could have been disastrous, so I could not afford to leave failure-prone code, and I needed a test case to satisfy myself.
Here is how I satisfied myself:
1: Changed the if/else conditions to a switch on an enum - TaskSwitcherEnum, with a variable called taskSwitcher, say - and performed the various operations under the possible values of TaskSwitcherEnum.
switch (taskSwitcher) {
    case TASK_TYPE_1:
        // do something before break
        break;
    case TASK_TYPE_2:
        // do something again
        break;
    ...
}
2: Tightly tested the desired method for all possible values of taskSwitcher: for each TaskSwitcherEnum value, used Mockito.verify() to check that the required task method is called exactly once (see the sketch after this answer).
3: Finally wrote a JUnit assertion like this:
assertEquals("Task performance strategy is designed to handle only five cases.", 5, TaskSwitcherEnum.values().length);
Doing this made sure (or at least test-covered) the following things:
1: That my code has only the desired branches, and any branch addition or deletion is caught by a test case.
2: That each branch does its desired job by calling the method I want, by testing every particular enum value against the called method.
The gist of the whole answer is: sometimes a little design change helps a lot.
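To make steps 2 and 3 concrete, here is a minimal sketch; TaskDispatcher and TaskHandler are made-up stand-ins for the real classes described above.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class TaskSwitcherTest {

    enum TaskSwitcherEnum { TASK_TYPE_1, TASK_TYPE_2 }

    interface TaskHandler {
        void handleType1();
        void handleType2();
    }

    // Hypothetical class containing the switch from the answer.
    static class TaskDispatcher {
        private final TaskHandler handler;
        TaskDispatcher(TaskHandler handler) { this.handler = handler; }
        void dispatch(TaskSwitcherEnum taskSwitcher) {
            switch (taskSwitcher) {
                case TASK_TYPE_1:
                    handler.handleType1();
                    break;
                case TASK_TYPE_2:
                    handler.handleType2();
                    break;
            }
        }
    }

    private final TaskHandler handler = mock(TaskHandler.class);
    private final TaskDispatcher dispatcher = new TaskDispatcher(handler);

    // Step 2: one verification per enum value.
    @Test
    public void type1CallsHandleType1() {
        dispatcher.dispatch(TaskSwitcherEnum.TASK_TYPE_1);
        verify(handler).handleType1();
    }

    @Test
    public void type2CallsHandleType2() {
        dispatcher.dispatch(TaskSwitcherEnum.TASK_TYPE_2);
        verify(handler).handleType2();
    }

    // Step 3: fail loudly when someone adds or removes a case.
    @Test
    public void switchHandlesExactlyTwoCases() {
        assertEquals("Task dispatch is designed to handle only two cases.",
                2, TaskSwitcherEnum.values().length);
    }
}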

TestNG - @BeforeMethod for specific methods

I'm using Spring Test with TestNG to test our DAOs, and I want to run a specific test fixture script before certain methods, allowing the modifications to be rolled back after every method so that the tests are free to do anything with the fixture data.
Initially I thought that 'groups' would be a fit for this, but I have since realized they're not intended for that (see this question: TestNG BeforeMethod with groups).
Is there any way to configure a @BeforeMethod method to run only before specific @Test methods? The only ways I see are workarounds:
Define an ordinary setup method and call it at the beginning of every @Test method;
Move the @BeforeMethod method to a new class (top level or inner class), along with all the methods that depend on it.
Neither is ideal; I'd like to keep my tests naturally grouped and clean, not split up due to a lack of alternatives.
You could add a parameter of type java.lang.reflect.Method to your @BeforeMethod. TestNG will then inject the reflection information for the current test method, including the method name, which you can use for switching.
If you add another Object parameter, you will also get the invocation parameters of the test method.
You'll find all possible parameters for TestNG-annotated methods in chapter 5.18.1 of the TestNG documentation.
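A minimal sketch of that approach; the fixture method and the naming convention used for switching are made up.

import java.lang.reflect.Method;

import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class DaoFixtureTest {

    @BeforeMethod
    public void maybeLoadFixture(Method method) {
        // TestNG injects the reflection info of the test method about to run.
        // Switch on the name (or on a custom annotation) to decide whether
        // the fixture script should be executed.
        if (method.getName().startsWith("withFixture")) {
            loadTestFixture();
        }
    }

    private void loadTestFixture() {
        // run the SQL fixture script here
    }

    @Test
    public void withFixtureFindsSeedData() {
        // runs after loadTestFixture()
    }

    @Test
    public void plainTestWithoutFixture() {
        // the fixture script is skipped for this one
    }
}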
Tests are simply not designed to do this. Technically speaking, a single test is supposed to be self-contained: it sets up, tests, and tears down. That is a single test. However, many tests share the same set-up and tear-down methods, whereas other tests need one set-up before they all run. This is the purpose of the @Before-style annotations.
If you don't like set-up and tear-down inside your test, you're more than welcome to architect your own system, but technically speaking, if certain methods require specific set-ups or tear-downs, then that really should be embodied IN the test, since it is a requirement for the test to pass. It is OK to call a set-up method, but ultimately it should be OBVIOUS that a test needs a specific set-up in order to pass. After all, if you're using specific set-ups, aren't you actually testing states rather than code?

Is there a general way to mark a JUnit test as pending?

Before stepping into the TDD cycle, I like to sketch out the tests that need to be implemented - i.e. write empty test methods with descriptive names.
Unfortunately I have not found a way to "paint them yellow" - to mark them as pending for JUnit. I can make them either fail or pass. At the moment I am letting them fail by throwing an exception, but I'd rather use an equivalent of RSpec's pending.
Is there such an option in JUnit or an "adjacent" library?
You can use @Ignore to ignore the test,
or this library to introduce the @PendingImplementation annotation:
https://github.com/ttsui/pending
I don't think there are other ways to achieve this.
You could use Assume or @Ignore; neither is quite what you are after, but they come close. The third-party library pending also exists. I have not used it, but it appears to do what you want.
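A quick sketch of the two built-in JUnit 4 options:

import static org.junit.Assume.assumeTrue;

import org.junit.Ignore;
import org.junit.Test;

public class PendingExamplesTest {

    @Ignore("pending - not implemented yet")
    @Test
    public void calculatesLateFees() {
        // reported as ignored ("yellow") by most runners
    }

    @Test
    public void sendsReminderEmails() {
        // aborts the test instead of failing it; how an assumption failure is
        // displayed (skipped vs. passed) depends on the runner/IDE
        assumeTrue(false);
    }
}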

Weird problem with doing tests with junit

I'm writing a little library for movies for myself. It's partly for learning TDD. Now I have a problem I can't solve.
The code is in here https://github.com/hasanen/MovieLibrary/blob/master/movielibrary-core/src/test/java/net/pieceofcode/movielibrary/service/MovieLibraryServiceITC.java
The problem is that when I run the whole class (right-click on the class name in Eclipse), the second test fails because the removal doesn't succeed. But when right-clicking the method (getMovieGenres_getAllGenresAndRemoveOne_returnsTwoGenreAndIdsAreDifferent) and choosing Run As JUnit Test, it works.
I don't necessarily need the fix, but I'd at least like some advice on how to find out why JUnit is acting like this.
From the way you explain the problem, it appears to be in the setUp method. The setUp method runs before every test case invocation. This is the general sequence:
1- Add three movies.
2- Test that three movies exist.
3- Add three movies again.
4- Remove movie item #1.
Since sequence 1-4 works, the problem is step 3. Either step 3 swallows some exception or it mutates the underlying object (maybe changes the ordering). Without knowing how addMovie changes the underlying object, it's hard to tell.
Something outside your test class (likely a superclass) is creating movieLibraryService, and it's not being recreated as often as it needs to be for independent testing.
If you add the line
movieLibraryService = new MovieLibraryService();
at the top of your testSetUp() method, this service will be properly reset before the running of each test method, and they will likely work properly.
As it is, I suspect you're getting a failure on the assertions about size, as the size is becoming 6 instead of 3.
Alternatively, you could add a teardown method (annotated with @After) which removes the contents of the movie library so that it always starts empty.
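A sketch of how both suggestions could look, based on the class names in the question (the removeAll() clean-up call is made up):

import org.junit.After;
import org.junit.Before;

public class MovieLibraryServiceITC {

    private MovieLibraryService movieLibraryService;

    @Before
    public void testSetUp() {
        // fresh instance for every test method, so no state leaks between tests
        movieLibraryService = new MovieLibraryService();
        // ... add the three movies here ...
    }

    @After
    public void tearDown() {
        // alternative: keep a shared instance but empty it after every test
        // movieLibraryService.removeAll();  // hypothetical clean-up method
    }
}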
IMHO the problem is that your test isn't a real unit test but an integration test. While testing your service you're testing all the layers it uses. I recommend you use mocks for the lower-layer dependencies (EasyMock or something) and use integration tests only for your repository layer. This way you can avoid persistence-layer influences while testing the service layer.

JUnit dynamic method dispatch?

Here is the problem I am facing. I have been tasked with testing the query parsing engine of a piece of software through negative testing. That is, I must write a large number of queries that should fail, and test that they do indeed fail and that they produce the expected error message for the particular error in the query. These are defined in an XML file. I've written a simple wrapper around the parsing of the XML document and struct-like classes for these test cases.
Now, given that I am using JUnit as a testing framework, I'm running into this issue - the act of running through all of these externally defined tests lives in a single method. If a single test fails, then no more will be run. Is there any way to dynamically dispatch a method to handle each of the tests as I encounter them? This way, if a test fails, we can still run the remaining ones while getting a report on what did and did not fail.
The other alternative is, of course, writing all of the JUnit tests. I'd like to avoid this for many reasons, one of which is that the number of tests to be run is extremely large, and a test case is 99% boilerplate code.
Thanks.
You should look into JUnit's Parameterized runner (used via the @RunWith annotation).
If I understand correctly, the input data and expected results are all defined in XML, so you don't need specific code to handle each test case?
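For example, a Parameterized sketch; the two hard-coded rows stand in for the cases you would read from the XML file, and parseAndGetError() is a placeholder for the call into your parsing engine.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class NegativeQueryTest {

    @Parameters
    public static Collection<Object[]> cases() {
        // in practice: build this list from the XML file of test cases
        return Arrays.asList(new Object[][] {
                { "SELEC * FROM t", "unknown keyword SELEC" },
                { "SELECT FROM", "missing select list" },
        });
    }

    private final String query;
    private final String expectedError;

    public NegativeQueryTest(String query, String expectedError) {
        this.query = query;
        this.expectedError = expectedError;
    }

    @Test
    public void queryFailsWithExpectedMessage() {
        // each row becomes its own test, so one failure no longer stops the rest
        assertEquals(expectedError, parseAndGetError(query));
    }

    private String parseAndGetError(String query) {
        // placeholder: call the real query parsing engine here
        throw new UnsupportedOperationException("hook up the real parser");
    }
}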
If you use JUnit 4, you could write your own Runner implementation. You can either extend Runner directly or extend ParentRunner. All you need to implement is one method that returns a description of the tests and another that runs them.
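A hedged sketch of that approach, extending Runner directly; the hard-coded cases and the parseAndGetError() call are placeholders for the XML-defined test cases and the real parser.

import java.util.Arrays;
import java.util.List;

import org.junit.Assert;
import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;

public class NegativeQueryRunner extends Runner {

    private final Class<?> testClass;
    private final List<String[]> cases;  // { query, expected error message }

    public NegativeQueryRunner(Class<?> testClass) {
        this.testClass = testClass;
        // in practice: parse the XML file into the struct-like case objects
        this.cases = Arrays.asList(
                new String[] { "SELEC * FROM t", "unknown keyword SELEC" },
                new String[] { "SELECT FROM", "missing select list" });
    }

    @Override
    public Description getDescription() {
        Description suite = Description.createSuiteDescription(testClass);
        for (String[] c : cases) {
            suite.addChild(Description.createTestDescription(testClass, c[0]));
        }
        return suite;
    }

    @Override
    public void run(RunNotifier notifier) {
        for (String[] c : cases) {
            Description test = Description.createTestDescription(testClass, c[0]);
            notifier.fireTestStarted(test);
            try {
                Assert.assertEquals(c[1], parseAndGetError(c[0]));
            } catch (Throwable t) {
                // a failing case is reported individually; the loop keeps going
                notifier.fireTestFailure(new Failure(test, t));
            } finally {
                notifier.fireTestFinished(test);
            }
        }
    }

    private String parseAndGetError(String query) {
        // placeholder: call the real query parsing engine here
        throw new UnsupportedOperationException("hook up the real parser");
    }
}

It would be attached to a (possibly empty) test class with @RunWith(NegativeQueryRunner.class).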
