I have a question about parameterized tests in JUnit. I am running a test suite that contains all of my test classes; having such a suite is a requirement for my course, so I cannot change that. The issue is that I have a number of Entry objects (think of Entry as an object with a unique ID that starts at 1 and is incremented every time a new instance is created), and they are being pre-processed by JUnit. When I compile and run my program, nine entries are declared in the ParamTest class. Within another class (EntryTest) I create one Entry, which should have an ID of 1. However, it has an ID of 10, meaning the nine entries from the parameterized test class have been created beforehand.
My question is: is there any way to force the ParamTest class not to do any of its pre-processing before the EntryTest class is run, or is this impossible? In the suite I have made sure to declare EntryTest before ParamTest. If it is impossible, is there any way to get around this other than creating separate suites or running the tests separately? I was considering a public static int to keep track of the IDs created during pre-processing, but that sounds like an ugly solution.
I think your testing is going to get ugly, fast, unless you have a way of resetting your class's static state to a known value.
I would recommend you expose a package-private method that allows you to reset the ID value to something specific (e.g. 0).
Tests should be entirely independent of one another, even within the same test class.
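Here is a minimal sketch of that idea, assuming Entry hands out IDs from a static counter; the field name nextId and the method resetIdCounter are invented for illustration, not taken from your code:

public class Entry {
    private static int nextId = 1;
    private final int id;

    public Entry() {
        this.id = nextId++;
    }

    public int getId() {
        return id;
    }

    // Package-private: visible to tests in the same package, not to production callers.
    static void resetIdCounter() {
        nextId = 1;
    }
}

// In each test class, reset the counter before every test so the first Entry always gets ID 1:

import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

public class EntryTest {

    @Before
    public void resetEntryIds() {
        Entry.resetIdCounter();
    }

    @Test
    public void firstEntryHasIdOne() {
        assertEquals(1, new Entry().getId());
    }
}

With this in place it no longer matters how many entries ParamTest created first, because each test starts from a counter value it controls.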
Background:
Our test suite uses an in-house test framework based on JUnit. Several of our tests use JUnit's Parameterized functionality to test a variety of test data, e.g. layout tests (we use the Galen Framework), where we want to verify the correct behaviour at different window resolutions.
Our TestCaseRule, which is applied to all of our tests in a base class, saves failed tests to a database, from which we can browse the failures through a web interface.
Problem:
JUnit's Parameterized Runner creates one fail instance for each failed test + parameter combination.
That means that if I have a class with, for instance, 3 tests and each one runs 6 times (6 parameters), then if all tests fail I get 6 x 3 = 18 failures in my reporting instead of the desired 3. This gives our reporting an entirely different meaning and makes it useless...
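To make the arithmetic concrete, here is a stripped-down sketch of such a class (the resolutions and names are invented); each @Test method runs once per parameter row, so a method that fails for every row produces six separate failures in the report:

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class LayoutTest {

    @Parameters(name = "{0}x{1}")
    public static Collection<Object[]> resolutions() {
        return Arrays.asList(new Object[][] {
                { 1920, 1080 }, { 1280, 1024 }, { 1024, 768 },
                { 800, 600 }, { 768, 1024 }, { 375, 667 }
        });
    }

    private final int width;
    private final int height;

    public LayoutTest(int width, int height) {
        this.width = width;
        this.height = height;
    }

    @Test
    public void headerIsVisible() {
        // If this layout check fails at every resolution, the reporting
        // records six failures for this one method.
    }
}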
Desired:
I have googled a lot but unfortunately could not find anyone facing the same issue. The best solution for me would be to get JUnit to merge all failures per method and concatenate their stack traces, so that one method results in at most one failure. I also do not want to skip the remaining parameter runs, because then I would miss failures that only occur with different parameters.
I experimented with reflection: fetching the parameter data in a @Before method, iterating over the test method, injecting the parameters, and finally preventing the actual test from being executed. But it was quite hacky and not an acceptable solution, because it loses the per-test scope.
I am thankful for any attempt to help!
I'm using Spring Test with TestNG to test our DAOs, and I wanted to run a specific test fixture script before certain methods, allowing the modifications to be rolled back after every method so that the tests are free to do anything with the fixture data.
Initially I thought that 'groups' would be a fit for this, but I have since realized they're not intended for that (see this question: TestNG BeforeMethod with groups).
Is there any way to configure a @BeforeMethod method to run only before specific @Test methods? The only ways I see are workarounds:
Define an ordinary setup method and call it at the beginning of every @Test method;
Move the @BeforeMethod method to a new class (top level or inner class), along with all the methods that depend on it.
Neither is ideal; I'd like to keep my tests naturally grouped and clean, not split up for lack of alternatives.
You could add a parameter of type java.lang.reflect.Method to your @BeforeMethod. TestNG will then inject the reflection information for the current test method, including the method name, which you can use for switching.
If you add another 'Object' parameter, you will also get the invocation parameters of the test method.
You'll find all possible parameters for TestNG-annotated methods in chapter 5.18.1 of the TestNG documentation.
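A rough sketch of that switching; the test names and the fixture helper are made up, and the real criterion could just as well be a custom annotation on the test method:

import java.lang.reflect.Method;

import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class DaoTest {

    @BeforeMethod
    public void maybeLoadFixture(Method method) {
        // TestNG injects the test method that is about to run,
        // so we can decide per method whether to load the fixture.
        if (method.getName().startsWith("withFixture")) {
            runTestFixtureScript(); // hypothetical helper that executes the fixture SQL
        }
    }

    @Test
    public void withFixtureFindsSeededRows() {
        // runs with the fixture data loaded
    }

    @Test
    public void worksOnEmptyDatabase() {
        // runs without the fixture
    }

    private void runTestFixtureScript() {
        // ... load the fixture script here ...
    }
}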
Tests are simply not designed to do this. Technically speaking, a single test is supposed to be idempotent on its own: it sets up, tests, and tears down. That is a single test. However, many tests share the same set-up and tear-down methods, while other tests need one set-up before they all run. That is the purpose of the @Before-style annotations.
If you don't like set-up and tear-down inside your test, you're more than welcome to architect your own system, but technically speaking, if certain methods require specific set-ups or tear-downs, then that really should be embodied IN the test, since it is a requirement for the test to pass. It is fine to call a set-up method, but ultimately it should be OBVIOUS that a test needs a specific set-up in order to pass. After all, if you're using specific set-ups, aren't you actually testing states rather than code?
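For example, a test can make its required state explicit by calling a clearly named helper; Account and its methods below are invented purely for illustration:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AccountTest {

    @Test
    public void overdraftIsRejected() {
        // The precondition is visible right here in the test.
        Account account = setUpAccountWithBalance(100);
        assertTrue(account.withdraw(200).isRejected());
    }

    // Explicit, named set-up helper: the test documents the state it needs to pass.
    private Account setUpAccountWithBalance(int balance) {
        Account account = new Account();
        account.deposit(balance);
        return account;
    }
}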
My company wants to move off of JUnit 3 and start using only JUnit 4. The other intern and I have been given the task of converting the older JUnit 3 tests to use JUnit 4 conventions. However, I'm having a problem converting the test file I'm working on right now.
From what I can tell, there is a generateTest method that returns an SslTest (SslTest is a subclass of TestCase). The returned SslTest overrides runTest. runTest contains a try-catch block that starts two threads, clientThread and serverThread (both subclasses of Thread defined within the test file). It looks like the actual testing is being done inside the threads, since the rest of runTest is used for catching exceptions from the two threads.
generateTest is called by another method, generateSuite (which returns a TestSuite). generateSuite contains an outer for-loop that adds suites to a main suite. The inner for-loop uses generateTest to add tests to each suite within the main suite. The main suite is what the method returns.
Finally, inside the suite() method that is called in the main method of the test file, a while-loop is set up to generate suites using generateSuite and add them to a bigger suite.
The only guides I've found on migrating to JUnit 4 are for much simpler test cases. I'm very lost right now and no one else at my company knows enough JUnit 4 to help me, so any tips would be much appreciated!
The very first thing I would do is try to convince whomever gave me the task that it is unnecessary. I know that is hard as an intern, but it is worth making sure that person understands this isn't necessary.
Facts for convincing:
The JUnit 4 jar contains both the junit.framework and org.junit package structures, so it is backward compatible.
JUnit has broad adoption. The owners of the JUnit project are well aware of this and aren't going to ask people to rewrite all their tests. In other words, they aren't going to just drop compatibility.
Actually try it. Seriously. Try running your existing test code as is with the JUnit 4 jar. You'll see if you get any compiler errors. If you do, those are the areas to focus on. If you don't, you have great evidence to show to the person who gave you the task.
This doesn't mean you won't have to change anything. It means you won't have to change the majority of your code. If you have custom runners, you'll want to use the JUnit 4 style. You also might need classpath suite to collect the tests.
There is also value in converting a few of the tests to JUnit 4 so developers on the team have some examples to use. But converting them all isn't a good use of time.
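For reference, converting a simple test typically looks something like the pair below; Calculator is a made-up class, and the before/after is only meant to show the mechanical changes (drop the TestCase superclass, add @Before and @Test, statically import the assertions):

// JUnit 3 style
import junit.framework.TestCase;

public class CalculatorTest extends TestCase {

    private Calculator calculator;

    protected void setUp() {
        calculator = new Calculator();
    }

    public void testAddition() {
        assertEquals(4, calculator.add(2, 2));
    }
}

// The same test in JUnit 4 style
import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

public class CalculatorTest {

    private Calculator calculator;

    @Before
    public void setUp() {
        calculator = new Calculator();
    }

    @Test
    public void addition() {
        assertEquals(4, calculator.add(2, 2));
    }
}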
On not being able to post code
Getting help on the internet is extremely difficult without code. I can understand your employer not wanting you to post code. (But then they probably don't want you posting class and method names either - which you did.) Luckily, there is an alternative: create an SSCCE instead. (Read the link - it will help you a lot as you progress in your career.) In addition to the smaller example being easier to read, it will allow you to change the class/method/etc. names, so your employer won't have their code online.
I'm trying to clean up my tests by always resetting to a known state before each test. In JUnit it seems that the best way to do this is to have a setup() method that sets the values of some fields. When running tests in parallel the fields are always correct, since each test is executed on a new instance of the test class.
However, in TestNG this doesn't seem to be the case. According to a post on their mailing list, setting fields in @BeforeMethod when running tests in multiple threads doesn't guarantee their values.
As I need the classes I'm testing to be in a known state, is there a cleaner solution to this than using a DataProvider or saying "Don't ever run tests in multithreaded mode"?
There is only one difference between TestNG and JUnit in this specific area: JUnit will create a brand new instance of your test before each test method, TestNG will not.
What this means is that with TestNG, values stored in fields by test methods will be preserved between invocations, which is very useful if this object is complex and takes time to create. It also helps speed up test runs since you don't have to recreate this state from scratch every time.
If you want this state to be reset every time, simply put the initialization code in @BeforeMethod, like you do with JUnit (except there it's called @Before).
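A minimal sketch of that, assuming a field that is expensive to build (the class name ExpensiveFixture is invented):

import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class StatefulTest {

    // TestNG reuses this single test instance for all test methods,
    // so the field keeps its value between tests unless it is reset here.
    private ExpensiveFixture fixture; // hypothetical object under test

    @BeforeMethod
    public void resetState() {
        fixture = new ExpensiveFixture(); // back to a known state before every test
    }

    @Test
    public void firstTestSeesAFreshFixture() {
        // ...
    }

    @Test
    public void secondTestAlsoSeesAFreshFixture() {
        // ...
    }
}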
As for multithreading, I don't understand why you are saying that there is no guarantee about the values; can you be more specific?
I'm writing a little library for movies for myself. It's partly for learning TDD. Now I have a problem I can't solve.
The code is in here https://github.com/hasanen/MovieLibrary/blob/master/movielibrary-core/src/test/java/net/pieceofcode/movielibrary/service/MovieLibraryServiceITC.java
The problem is that when I run the whole class (right-clicking the class name in Eclipse), the second test fails because the removal doesn't succeed. But when I right-click the method (getMovieGenres_getAllGenresAndRemoveOne_returnsTwoGenreAndIdsAreDifferent) and choose Run As JUnit Test, it works.
I don't necessarily need a fix, but I would appreciate some advice on how to find out why JUnit is acting like this.
From the way you explain the problem, it appears to be in the setUp method. The setUp method runs before every test case invocation. This is the general sequence:
1- Add three movies.
2- Test that three movies exist.
3- Add three movies.
4- Remove movie item #1.
Since the sequence 1-4 works, the problem is step 3. Either step 3 swallows some exception or mutates the underlying object (maybe it changes the sequence). Without knowing how addMovie changes the underlying object, it's hard to tell.
Something outside your test class (likely a superclass) is creating movieLibraryService, and it's not being recreated as often as it needs to be for independent testing.
If you add the line
movieLibraryService = new MovieLibraryService();
at the top of your testSetUp() method, the service will be properly reset before each test method runs, and the tests will likely pass.
As it is, I suspect you're getting a failure on the assertions about size, as the size is becoming 6 instead of 3.
Alternatively, you could add a teardown method (annotated with @After) which removes the contents of the movie library so that it always starts empty.
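Roughly, that would look like the sketch below; the @After body is only an assumption, since I don't know what removal methods MovieLibraryService actually exposes:

import org.junit.After;
import org.junit.Before;

public class MovieLibraryServiceITC {

    private MovieLibraryService movieLibraryService;

    @Before
    public void testSetUp() {
        movieLibraryService = new MovieLibraryService(); // fresh service for every test
        // ... add the three test movies here, as before ...
    }

    @After
    public void tearDown() {
        // Alternative to re-creating the service: empty the library after each test,
        // e.g. by removing every movie that the test added.
    }
}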
IMHO the problem is that your test isn't a real unit test but an integration test. While testing your service you are testing all the layers it uses. I recommend you use mocks for the lower-layer dependencies (EasyMock or something similar) and use integration tests only for your repository layer. This way you can avoid the persistence layer influencing your service-layer tests.
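As an illustration of that approach, here is a sketch with EasyMock; the MovieRepository interface, the constructor that accepts it, and the method names are all assumptions about your code, not taken from it:

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;

import java.util.Arrays;

import org.junit.Test;

public class MovieLibraryServiceTest {

    @Test
    public void getMovieGenresDelegatesToRepository() {
        // Hypothetical repository interface sitting behind the service.
        MovieRepository repository = createMock(MovieRepository.class);
        expect(repository.findAllGenres()).andReturn(Arrays.asList("Drama", "Comedy"));
        replay(repository);

        // Assumes the service can be constructed with its repository dependency
        // and that getMovieGenres returns a collection.
        MovieLibraryService service = new MovieLibraryService(repository);
        assertEquals(2, service.getMovieGenres().size());

        verify(repository);
    }
}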