Weird problem with running tests with JUnit - Java

I'm writing a little library for movies for myself. It's partly for learning TDD. Now I have a problem I can't solve.
The code is in here https://github.com/hasanen/MovieLibrary/blob/master/movielibrary-core/src/test/java/net/pieceofcode/movielibrary/service/MovieLibraryServiceITC.java
The problem is that when I run the whole class (right-clicking above the class name in Eclipse), the second test fails because the removal doesn't succeed. But when I right-click the method (getMovieGenres_getAllGenresAndRemoveOne_returnsTwoGenreAndIdsAreDifferent) and choose Run As JUnit Test, it works.
I don't necessarily need the fix, but I'd at least like some advice on how to find out why JUnit is acting like this.

From the way you explain the problem, it appears to be in the setUp method. The setUp method runs before every test method invocation. This is the general sequence:
1- Add three movies.
2- Test that three movies exist.
3- Add three movies.
4- Remove movie item #1.
Since sequence 1-4 works, the problem is sequence 3. Either sequence 3 swallows some exception or it mutates the underlying object (maybe it changes the sequence). Without knowing how addMovie changes the underlying object, it's hard to tell.

Something outside your test class (likely a superclass) is creating movieLibraryService, and it's not being recreated as often as it needs to be for independent testing.
If you add the line
movieLibraryService = new MovieLibraryService();
at the top of your testSetUp() method, the service will be properly reset before each test method runs, and the tests will likely work properly.
As it is, I suspect you're getting a failure on the assertions about size, as the size is becoming 6 instead of 3.
Alternatively, you could add a teardown method (annotated with @After) which removes the contents of the movie library so that it always starts empty.
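For illustration, a minimal sketch of both options that slots into the existing test class; the removeAllMovies() call is hypothetical, use whatever cleanup your service actually offers:

import org.junit.After;
import org.junit.Before;

    private MovieLibraryService movieLibraryService;

    @Before
    public void testSetUp() {
        // Fresh service instance for every test method, so tests stay independent.
        movieLibraryService = new MovieLibraryService();
        // ... add the three test movies here ...
    }

    @After
    public void tearDown() {
        // Alternative: clear the shared instance after each test.
        // movieLibraryService.removeAllMovies();   // hypothetical method
    }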

IMHO the problem is that your test isn't a real unit test but an integration test. While testing your service you're also testing all the layers it uses. I recommend you use mocks for the lower-layer dependencies (EasyMock or something similar) and keep integration tests only for your repository layer. This way you avoid persistence-layer influences while testing the service layer.
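A minimal sketch of that idea with EasyMock; the MovieRepository interface, its findAllGenres() method and the service constructor are assumptions about your code, not its actual API:

import static org.easymock.EasyMock.*;
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import org.junit.Test;

public class MovieLibraryServiceTest {

    @Test
    public void getMovieGenres_returnsGenresFromRepository() {
        // Mock the persistence layer instead of hitting it for real.
        MovieRepository repository = createMock(MovieRepository.class);
        expect(repository.findAllGenres()).andReturn(Arrays.asList("Action", "Drama"));
        replay(repository);

        MovieLibraryService service = new MovieLibraryService(repository);
        assertEquals(2, service.getMovieGenres().size());

        verify(repository);   // the service talked to the repository and nothing else
    }
}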

Related

JUnit - make sure that a function is invoked on every path

I know how to make sure that a function has been invoked, using:
Mockito.verify
now, I want to make sure that on every path of the function (every 'if', 'if else' and 'else') - the function was invoked.
I can basically write a unit test for every case, but I want to make sure that if any further cases are added, there will also be an invocation of that method.
Unit testing alone will not do that. You have to look into using coverage in order to get there.
Unit testing can only tell you whether the paths that were taken produced a "valid" result; it has no knowledge of all the paths that exist, nor of whether they were all hit.
So you want to look into the available coverage tools, for example, and learn which one would work for you.
When you are working with Eclipse or IntelliJ, those things work out of the box; you can install plugins like Cobertura or EclEmma within Eclipse and then do a "run unit test with coverage".
But of course: that only results in a number. You then have to look carefully at your code to understand if you are happy with that number (where those IDEs make that really easy; they can show you your source code, and which paths were taken).
Meaning: coverage is a whole concept, and you have to understand what it means and in which way you can make that concept helpful for your daily work. For example, the last thing you want is your boss giving you a specific target goal for coverage.
And just to be sure: there is no tooling that tells you "you added new code, and now this specific method invocation is no longer reached on every path". What coverage gives you is that you had 75.32% coverage before your change and afterwards it went down to 74.01% ... the rest is then up to you.
now, I want to make sure that on every path of the function (every 'if', 'if else' and 'else') - the function was invoked.
You don't want that.
The misunderstanding is that you don't test "the code". You test public observable behavior. In your case the behavior is that your unit under test (UuT) (after doing other stuff) calls a method on a dependency (I hope).
You don't want to test "the code" because it may change to become cleaner and/or to support more behavior. And when it does, you don't want to have to change your existing tests, since they are what guarantees that the desired behavior is preserved during your refactoring.
On the other hand, each test method should verify exactly one expectation about the UuT's behavior. This means that you should already have one test method for each execution path through your if/else cascade. So all you have to do is add the verify() instruction to each of these test methods.
Finally, you may have an easier job testing your code if you enforce the Single Layer of Abstraction principle, which basically says that a method either calls methods on some dependencies (aka "dispatching"), calls internal methods, or does low-level operations. This principle may lead to a design where the "low-level" stuff your UuT currently does moves to a new dependency, so that your UuT only needs to make two calls on some dependencies in a certain order...
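A sketch of "one test per path, each ending in verify()" with Mockito; OrderProcessor, Notifier, Order and the branch condition are invented for illustration:

import static org.mockito.Mockito.*;

import org.junit.Test;

public class OrderProcessorTest {

    private final Notifier notifier = mock(Notifier.class);
    private final OrderProcessor processor = new OrderProcessor(notifier);

    @Test
    public void largeOrder_notifiesWarehouse() {      // covers the "if" branch
        processor.process(new Order(500));
        verify(notifier).notifyWarehouse(any(Order.class));
    }

    @Test
    public void smallOrder_alsoNotifiesWarehouse() {  // covers the "else" branch
        processor.process(new Order(5));
        verify(notifier).notifyWarehouse(any(Order.class));
    }
}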
I have encountered this issue before, where I needed to bind tests tightly to the switch cases, and I was desperate enough to do it.
I am assuming test coverage analysis is not enough for you. For me, the if-else conditions were so critical that changing something unintentionally could have been disastrous, so I could not afford to leave failure-prone code and I needed a test case to satisfy myself.
How I satisfied myself is here:
1: Changed the conditions (if..else, etc.) to a switch over an enum variable - TaskSwitcherEnum, say taskSwitcher - and performed all sorts of operations under the various possible values of TaskSwitcherEnum:
switch (taskSwitcher) {
    case Task_Type_1:
        // do something, then break
        break;
    case Task_Type_2:
        // do something again
        break;
    ...
}
2: Tightly tested the desired method for all possible values of taskSwitcher: with Mockito.verify(), checked whether the required task method is called exactly once for each given TaskSwitcherEnum value (a sketch follows step 3).
3: Finally did a JUnit assertion like this:
assertEquals("Task performance strategy is designed to handle only five cases.", 5, TaskSwitcherEnum.values().length);
Doing this made sure (or at least test-covered) the following things:
1: That my code has only the desired branches, and any other branch addition/deletion is caught by a test case.
2: That each branch does its desired job by calling the method that I want, tested for every particular enum value against the called method.
The gist of the whole answer is: sometimes a little design change helps a lot.

JUnit Parameterized - merge fails

Background:
Our test suite uses an in-house developed test framework based on JUnit. Several of our tests use JUnit's Parameterized functionality in order to test a variety of different test data, e.g. layout tests (we use the Galen Framework), where we want to verify the correct behaviour at different window resolutions.
Our TestCaseRule, which is applied to all of our tests in a base class, saves the failed tests into a database, from where we can browse through the failures via a web interface.
Problem:
JUnit's Parameterized runner creates one failure instance for each failed test + parameter combination.
That means that if I have a class with, for instance, 3 tests, and each one runs 6 times (6 parameters), then if all tests fail I get 6x3 = 18 failures in my reporting instead of the desired 3. Thereby our reporting takes on an entirely different meaning and becomes useless...
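For illustration, a sketch of the kind of test involved; the class name, resolutions and test body are invented:

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class HeaderLayoutTest {

    @Parameters(name = "{0}x{1}")
    public static Collection<Object[]> resolutions() {
        return Arrays.asList(new Object[][] {
                {1920, 1080}, {1366, 768}, {1280, 1024},
                {1024, 768}, {800, 600}, {640, 480}
        });
    }

    private final int width;
    private final int height;

    public HeaderLayoutTest(int width, int height) {
        this.width = width;
        this.height = height;
    }

    @Test
    public void headerFitsViewport() {
        // resize the browser to width x height and check the Galen spec here;
        // if this fails for all six resolutions, the runner reports six
        // separate failures for this single method
    }
}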
Desired:
I have googled a lot but unfortunately could not find anyone facing the same issue. The best solution for me would be if I could get JUnit to merge all failures per method and concatenate the stack traces, so I could ensure that one method results in at most one failure. I also do not want to skip all following tests, so that I don't miss failures which would be produced with different parameters.
I experimented with reflection: fetching the Parameters data in a @Before method, iterating through the test method, injecting the parameters and finally preventing the actual test from being executed, but it was quite hacky and did not represent an acceptable solution because it breaks the test scope.
I am thankful for all help attempts!

TestNG - #BeforeMethod for specific methods

I'm using Spring Test with TestNG to test our DAOs, and I wanted to run a specific test fixture script before certain methods, allowing the modifications to be rolled back after every method so that the tests are free to do anything with the fixture data.
Initially I thought that 'groups' would be fit for it, but I already realized they're not intended for that (see this question: TestNG BeforeMethod with groups ).
Is there any way to configure a @BeforeMethod method to run only before specific @Test methods? The only ways I see are workarounds:
Define an ordinary setup method and call it at the beginning of every @Test method that needs it;
Move the @BeforeMethod method to a new class (top level or inner class), along with all the test methods that depend on it.
Neither is ideal, I'd like to keep my tests naturally grouped and clean, not split due to lack of alternatives.
You could add a parameter of type java.lang.reflect.Method to your @BeforeMethod. TestNG will then inject the reflection information for the current test method, including the method name, which you could use for switching.
If you add another Object parameter, you will also get the invocation parameters of the test method.
You'll find all possible parameters for TestNG-annotated methods in chapter 5.18.1 of the TestNG documentation.
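A minimal sketch of that switching approach; the fixture-loading helper and test names are made up:

import java.lang.reflect.Method;

import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class GenreDaoTest {

    @BeforeMethod
    public void maybeLoadFixture(Method method) {
        // TestNG injects the reflection info of the test method about to run.
        if (method.getName().startsWith("genre")) {
            loadGenreFixture();   // hypothetical helper that runs the fixture script
        }
    }

    @Test
    public void genreLookupSeesTheFixtureRows() {
        // ...
    }

    @Test
    public void unrelatedTestRunsWithoutTheFixture() {
        // ...
    }

    private void loadGenreFixture() {
        // execute the test fixture SQL here
    }
}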
Tests are simply not designed to do this. Technically speaking, a single test is supposed to be idempotent by itself, meaning it sets up, tests, and tears down. That is a single test. However, many tests share the same set-up and tear-down methods, whereas other tests need one set-up before they all run. That is the purpose of the @Before-style annotations.
If you don't like set-up and tear-down inside your test, you're more than welcome to architect your own system, but technically speaking, if certain methods require specific set-ups or tear-downs, then that really should be embodied IN the test, since it is a requirement for the test to pass. It is OK to call a set-up method, but ultimately it should be OBVIOUS that a test needs a specific set-up in order to pass. After all, if you're using specific set-ups, aren't you actually testing states rather than code?

How can I convert these JUnit 3 tests to JUnit 4?

My company wants to move off of JUnit 3 and start using only JUnit 4. The other intern and I have been given the task of converting the older JUnit 3 tests to use JUnit 4 conventions. However, I'm having a problem converting the test file I'm working on right now.
From what I can tell, there is a generateTest method that returns an SslTest (SslTest is a subclass of TestCase). The returned SslTest overrides runTest. runTest contains a try-catch block that starts two threads, clientThread and serverThread (these are both subclasses of Thread that are defined within the test file). It looks like the actual testing is being done inside the threads, since the rest of runTest is used for catching exceptions from the two threads.
generateTest is called by another method, generateSuite (which returns a TestSuite). generateSuite contains an outer for-loop that adds suites to a main suite. The inner for-loop uses generateTest to add tests to each suite within the main suite. The main suite is what is returned by the method.
Finally, inside the suite() method that is called in the main method of the test file, a while-loop is set up to generate suites using generateSuite and add them to a bigger suite.
The only guides I've found on migrating to JUnit 4 are for much simpler test cases. I'm very lost right now and no one else at my company knows enough JUnit 4 to help me, so any tips would be much appreciated!
The very first thing I would do is try to convince whoever gave me the task that it is unnecessary. I know that is hard as an intern, but it is worth making sure that person understands this isn't necessary.
Facts for convincing:
The JUnit 4 jar contains both the junit.framework and org.junit package structures, so it is backward compatible.
JUnit has broad adoption. The owners of the JUnit project are well aware of this and aren't going to ask people to rewrite all their tests. In other words, they aren't going to just drop compatibility.
Actually try it. Seriously. Try running your existing test code as is with the JUnit 4 jar. You'll see if you get any compiler errors. If you do, those are the areas to focus on. If you don't, you have great evidence to show to the person who gave you the task.
This doesn't mean you won't have to change anything. It means you won't have to change the majority of your code. If you have custom runners, you'll want to use the JUnit 4 style. You also might need a classpath suite to collect the tests.
There is also value in converting a few of the tests to JUnit 4 so developers on the team have some examples to use. But converting them all isn't a good use of time.
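For those example conversions, a hedged sketch of the two sides (class names are placeholders): JUnit 3 style tests keep compiling against the JUnit 4 jar, and a suite() method can usually be replaced by the Suite runner.

// (in SslHandshakeTest.java) JUnit 3 style - still runs on the JUnit 4 jar:
public class SslHandshakeTest extends junit.framework.TestCase {
    public void testHandshake() {
        assertTrue(true);
    }
}

// (in AllSslTests.java) JUnit 4 replacement for a hand-written suite() method:
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ SslHandshakeTest.class /*, more test classes */ })
public class AllSslTests {
}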
On not being able to post code
Getting help on the internet is extremely difficult without code. I can understand your employer not wanting you to post code. (But then they probably don't want you posting class and method names either - which you did.) Luckily, there is an alternative: create an SSCCE instead. (Read the link - it will help you a lot as you progress in your jobs.) In addition to the smaller example being easier to read, it will allow you to change the class/method/etc. names, so your employer won't have their code online.

Dummy data and unit testing strategies in a modular application stack

How do you manage dummy data used for tests? Keep them with their respective entities? In a separate test project? Load them with a Serializer from external resources? Or just recreate them wherever needed?
We have an application stack with several modules depending on one another, each containing entities. Each module has its own tests and needs dummy data to run with.
Now a module that has a lot of dependencies will need a lot of dummy data from the other modules. Those modules, however, do not publish their dummy objects because they are part of the test resources, so all modules have to set up all the dummy objects they need again and again.
Also: most fields in our entities are not nullable so even running transactions against the object layer requires them to contain some value, most of the time with further limitations like uniqueness, length, etc.
Is there a best practice way out of this or are all solutions compromises?
More Detail
Our stack looks something like this:
One Module:
src/main/java --> gets jarred (.../entities/*.java contains the entities)
src/main/resources --> gets jarred
src/test/java --> contains the dummy object setup, will NOT get jarred
src/test/resources --> not jarred
We use Maven to handle dependencies.
module example:
Module A has some dummy objects
Module B needs its own objects AND the same as Module A
Option a)
A test module T could hold all dummy objects and provide them in test scope (so the loaded dependencies don't get jarred) to all tests in all modules. Will that work? Meaning: if I load T in A and run install on A, will it NOT contain references introduced by T, especially not to B? Then, however, A will know about B's data model.
Option b)
Module A provides the dummy objects somewhere in src/main/java../entities/dummy allowing B to get them while A does not know about B's dummy data
Option c)
Every module contains external resources which are serialized dummy objects. They can be deserialized by the test environment that needs them because it has the dependency on the module to which they belong. This would require every module to create and serialize its dummy objects, though, and how would one do that? If with another unit test, it introduces dependencies between unit tests, which should never happen; if with a script, it'll be hard to debug and not flexible.
Option d)
Use a mock framework and assign the required fields manually for each test as needed. The problem here is that most fields in our entities are not nullable and thus require setters or constructors to be called, which would put us back at the start again.
What we don't want
We don't want to set up a static database with static data, as the required objects' structure will constantly change - a lot right now, a little later. So we want Hibernate to set up all tables and columns and fill them with data at unit-testing time. A static database would also introduce a lot of potential errors and test interdependencies.
Are my thoughts going in the right direction? What's the best practice to deal with tests that require a lot of data? We'll have several interdependent modules that will require objects filled with some kind of data from several other modules.
EDIT
Some more info on how we're doing it right now in response to the second answer:
So for simplicity, we have three modules: Person, Product, Order.
Person will test some manager methods using a MockPerson object:
(in person/src/test/java:)
public class MockPerson {
    public Person mockPerson(parameters...) {
        return mockedPerson;
    }
}

public class TestPerson {
    @Inject
    private MockPerson mockPerson;

    @Test
    public void testCreate() {
        Person person = mockPerson.mockPerson(...);
        // Asserts...
    }
}
The MockPerson class will not be packaged.
The same applies for the Product Tests:
(in product/src/test/java:)
public class MockProduct { ... }

public class TestProduct {
    @Inject
    private MockProduct mockProduct;
    // ...
}
MockProduct is needed but will not be packaged.
Now the Order Tests will require MockPerson and MockProduct, so now we currently need to create both as well as MockOrder to test Order.
(in order/src/test/java:)
These are duplicates and will need to be changed every time Person or Product changes:
public class MockProduct { ... }
public class MockPerson { ... }
This is the only class that should be here:
public class MockOrder { ... }

public class TestOrder {
    @Inject
    private order.MockPerson mockPerson;
    @Inject
    private order.MockProduct mockProduct;
    @Inject
    private order.MockOrder mockOrder;

    @Test
    public void testCreate() {
        Order order = mockOrder.mockOrder(mockPerson.mockPerson(), mockProduct.mockProduct());
        // Asserts...
    }
}
The problem is that now we have to update person.MockPerson and order.MockPerson whenever Person is changed.
Isn't it better to just publish the mocks with the jar so that every other test that has the dependency anyway can just call Mock.mock and get a nicely set-up object? Or is this the dark side - the easy way?
This may or may not apply - I'm curious to see an example of your dummy objects and the related setup code (to get a better idea of whether it applies to your situation). But what I've done in the past is not even introduce this kind of code into the tests at all. As you describe, it's hard to produce, debug, and especially package and maintain.
What I've usually done (and AFAICT in Java this is the best practice) is to use the Test Data Builder pattern, as described by Nat Pryce in his Test Data Builders post.
If you think this is somewhat relevant, check these out:
Does a framework like Factory Girl exist for Java?
make-it-easy, Nat's framework that implements this pattern.
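A minimal sketch of the pattern; the Person fields and constructor are invented to match the earlier example:

public class PersonBuilder {
    // Sensible defaults satisfy the not-null / length / uniqueness constraints,
    // so tests only override the fields they actually care about.
    private String name = "Default Name";
    private String email = "default-" + System.nanoTime() + "@example.com";

    public PersonBuilder withName(String name) {
        this.name = name;
        return this;
    }

    public PersonBuilder withEmail(String email) {
        this.email = email;
        return this;
    }

    public Person build() {
        return new Person(name, email);
    }
}

// Usage in a test:
Person person = new PersonBuilder().withName("Alice").build();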
Well, I carefully read everything posted so far, and it is a very good question. I see the following approaches to the problem:
Set up a (static) test database;
Each test has its own setup code that creates (dynamic) test data prior to running the unit tests;
Use dummy or mock objects. All modules know all dummy objects, so there are no duplicates;
Reduce the scope of the unit test.
The first option is pretty straightforward and has many drawbacks: somebody has to recreate the data once in a while when unit tests "mess it up", and if there are changes in the data module, somebody has to introduce corresponding changes to the test data - a lot of maintenance overhead. Not to mention that generating this data in the first place may be tricky. See also the second option.
Second option: you write test code that, prior to testing, invokes some of your "core" business methods to create your entities. Ideally, your test code should be independent from the production code, but in this case you will end up with duplicate code that you have to maintain twice. Sometimes it is good to split your production business method in order to have an entry point for your unit test (I make such methods private and use reflection to invoke them; a remark on the method is also needed, and refactoring becomes a bit tricky). The main drawback is that if you must change your "core" business methods, it suddenly affects all of your unit tests and you can't test. So developers should be aware of this and not make partial commits to the "core" business methods unless they work. Also, with any change in this area, you should keep in mind "how will it affect my unit tests?". Sometimes it is also impossible to reproduce all the required data dynamically (usually because of third-party APIs; for example, you call another application with its own DB from which you are required to use some keys, and these keys (with the associated data) are created manually through the third-party application). In such a case, this data, and only this data, should be created statically. For example, you create 10,000 keys starting from 300,000.
The third option should be good. Options a) and d) sound pretty good to me. For your dummy objects you can use a mock framework or not; the mock framework is only there to help you. I don't see a problem with all of your units knowing all your entities.
The fourth option means that you redefine what a "unit" is in your unit test. When you have a couple of modules with interdependencies, it can be difficult to test each module in isolation. This approach says that what we originally tested was an integration test and not a unit test. So we split our methods and extract small "units of work" that receive all their dependencies on other modules as parameters. These parameters can (hopefully) be easily mocked. The main drawback of this approach is that you don't test all of your code, but only, so to say, the "focal points". You need to do integration testing separately (usually by a QA team).
I'm wondering if you couldn't solve your problem by changing your testing approach.
Unit Testing a module which depends on other modules and, because of that, on the test data of other modules is not a real unit test!
What if you injected a mock for each of the dependencies of your module under test, so you can test it in complete isolation? Then you don't need to set up a complete environment where each depending module has the data it needs; you only set up the data for the module you're actually testing.
If you imagine a pyramid, then the base would be your unit tests, above that you have functional tests and at the top you have some scenario tests (or as Google calls them, small, medium and big tests).
You will have a huge number of unit tests that can test every code path because the mocked dependencies are completely configurable. Then you can trust your individual parts, and the only thing your functional and scenario tests need to do is check whether each module is wired correctly to the others.
This means that your module test data is not shared by all your tests but only by a few that are grouped together.
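A sketch of what testing the Order module in isolation could look like with Mockito; the repository interfaces, method names and constructors are assumptions, not your actual API:

import static org.mockito.Mockito.*;

import org.junit.Test;

public class OrderServiceTest {

    private final PersonRepository personRepository = mock(PersonRepository.class);
    private final ProductRepository productRepository = mock(ProductRepository.class);
    private final OrderService orderService = new OrderService(personRepository, productRepository);

    @Test
    public void createOrderLoadsPersonAndProduct() {
        // Only the Order module's own data is set up; Person and Product are canned.
        when(personRepository.findById(1L)).thenReturn(new Person("Alice", "alice@example.com"));
        when(productRepository.findById(2L)).thenReturn(new Product("Widget"));

        orderService.createOrder(1L, 2L);

        verify(personRepository).findById(1L);
        verify(productRepository).findById(2L);
    }
}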
The Builder Pattern as mentioned by cwash will definitely help in your functional tests.
We are using a .NET Builder that is configured to build a complete object tree and generate default values for each property so when we save this to the database all required data is present.
