I use a JUnit 3.x TestRunner that instantiates all tests at once before running them.
Is there a Test Runner available that would create each test (or at least each test suite's tests) just before running them?
I can use JUnit 4.x runners but my tests are 3.x tests.
In JUnit 3 you'd need to write your own TestSuite class that delayed instantiation of the tests in the suite.
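For illustration, here is a minimal sketch of that approach against the JUnit 3 APIs; the LazyTest wrapper and the SomeHeavyTest class are my own illustrative names, not part of JUnit:

```java
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestResult;
import junit.framework.TestSuite;

// Hypothetical sketch: a Test decorator that defers construction of a
// TestCase until the moment it is actually run.
public class LazyTest implements Test {
    private final Class<? extends TestCase> testClass;
    private final String methodName;

    public LazyTest(Class<? extends TestCase> testClass, String methodName) {
        this.testClass = testClass;
        this.methodName = methodName;
    }

    public int countTestCases() {
        return 1; // each wrapper stands for exactly one test method
    }

    public void run(TestResult result) {
        try {
            // The test case is instantiated only now, not at suite creation.
            TestCase test = testClass.newInstance();
            test.setName(methodName);
            test.run(result);
        } catch (InstantiationException e) {
            throw new RuntimeException(e);
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    // The suite is then assembled from lazy wrappers instead of instances.
    // SomeHeavyTest is an illustrative test class, not from the question.
    public static Test suite() {
        TestSuite suite = new TestSuite("lazy suite");
        suite.addTest(new LazyTest(SomeHeavyTest.class, "testSomething"));
        return suite;
    }
}
```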
You are probably doing it wrong.
Each unit test should be self-contained and not depend on any other test results.
Otherwise, when one test breaks, it will break all the tests that depend on it, and you will see a lot of errors with no easy way to identify the actual cause. If, on the other hand, all unit tests are independent, a broken test is extremely easy to debug and fix.
EDIT: I am assuming the reason you asked the original question is that you have some dependencies between your tests. If I am wrong, please ignore this answer :)
How do we test the JUnit test cases we wrote? I thought manual verification, i.e., creating test data and asserting that expected and actual values match, was fine. But recently I encountered a situation where the JUnit tests were passing but the particular SUT code was failing during UI testing (meaning the JUnit tests failed to guard against the bug).
If your tests are passing, but the actual code that the tests were meant to cover is failing, then one of two things has happened:
The test suite hasn't adapted to cover that specific use case, or
The tests written to cover that specific use case are insufficient.
In either case, you need to rewrite your tests. Having a test suite which doesn't allow you to guard against specific aberrant behaviors makes the entire suite worthless.
You also mention that it fails specifically during UI tests. This could result from a disconnect between the UI's expectations and the backend tests. In that event, either align the backend tests with the UI's actual inputs, or implement an integration test which covers the UI's workflow.
How do we test the JUnit test cases we wrote?
You should not.
Unit tests are not infallible, but testing tests makes no sense.
You should consider automated tests as executable specifications.
Generally, if your specifications are wrong, you are stuck.
It is exactly the same with automated testing.
To avoid this kind of problem, or at least to reduce it, I favor:
reviews of both production code and test code with peers on the development team.
complementing unit tests with integration and business tests validated by the business team.
continuous improvement of the automated tests.
It is simple: as soon as a hole is detected in manual UI testing, an automated test should be updated if the test exists but some checks are missing, or a new test should be created if the test is missing.
To verify the quality of unit tests, I personally use the following techniques:
Coverage metrics. It's a good idea to have good line and branch coverage, but it's usually not possible to reach 100% line coverage, and coverage itself doesn't guarantee that the code was actually tested rather than simply called from a test class.
Test code review. Personally, I prefer writing tests with a clear 'setup - run - assert' structure (see the sketch after this list). If the 'run' or 'assert' step is missing, there is something wrong with the test.
Mutation testing. There are frameworks which modify your production code in some simple way (apply mutators to the code), then run your unit tests on the modified code; if no test fails, that code is untested or the tests are bad. For Java I use PIT Mutation Testing.
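To make the 'setup - run - assert' structure concrete, here is a minimal, self-contained sketch; the class under test is just java.util.ArrayList, purely for illustration:

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

// A minimal illustration of the 'setup - run - assert' structure,
// using a trivial List example so the test is fully self-contained.
public class StructureExampleTest {

    @Test
    public void addingAnElementGrowsTheList() {
        // setup: prepare the object under test
        List<String> list = new ArrayList<String>();

        // run: exercise the behavior being verified
        list.add("hello");

        // assert: check the observable outcome
        assertEquals(1, list.size());
        assertEquals("hello", list.get(0));
    }
}
```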
Also, sometimes it makes sense to apply not just unit tests but also some other testing techniques - manual testing, integration testing, load testing, etc.
I have encountered a situation where JUnit tests were passing but the particular SUT code was failing.
Your unit tests should NOT miss covering any method's functionality or its side effects. This is where code coverage tools like Cobertura come into play: it is not enough that the tests pass; we need to ensure that each method and its side effects have been properly unit tested/covered.
No, code coverage is just as bad a placebo here. You can have 100% line coverage but still be in the same fix the OP is in.
Tools like Cobertura at least tell you what percentage of the code your tests cover; and you will ship even more bugs if you don't pay attention to test coverage at all.
The main point, though, is that these coverage tools don't tell you whether your actual business requirements have really been met.
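As a contrived sketch of that placebo effect: the test below fully covers the line it executes, yet it can never fail, so coverage reports the code as tested while the test verifies nothing:

```java
import org.junit.Test;

// A contrived sketch of the 'placebo' problem: this test executes the
// code it calls (so line coverage counts it as covered) but asserts
// nothing, so it stays green even if the result is wrong.
public class CoveragePlaceboTest {

    @Test
    public void coversTheLineButVerifiesNothing() {
        int result = Math.max(2, 3); // counted as covered...
        // ...but with no assertion on 'result', a broken implementation
        // would still pass. Mutation testing is one way to catch tests
        // like this one.
    }
}
```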
I am running a suite of integration tests using Maven, and about 10% of the tests fail or throw an error. However, when I start the server and run the individual failed tests manually from my IDE (IntelliJ IDEA), they all pass with no problem. What could be the cause of this issue?
This is almost always caused by the tests running in an inconsistent order, or by a race condition between two tests running in parallel via forked tests. If Test #1 finishes first, it passes. But if Test #2 finishes first, it leaves a test resource, such as a test database, in an alternate state, causing Test #1 to fail. This is very common with database tests, especially when one or more of them alter the database. Even in IDEA, you may find that all the tests in the com.example.FooTest class always pass when you run that class, but if you run all the tests in the com.example package or all tests in the project, sometimes (or even always) a test in FooTest fails.
The fix is to ensure your tests are always guaranteed a consistent state when run. (That is a guiding principle of good unit tests.) You need to pay attention to test setup and tear-down via the @Before, @BeforeClass, @After, and @AfterClass annotations (or their TestNG equivalents). I recommend Googling database unit testing best practices. For database tests, running each test in a transaction can prevent this type of issue: the database is rolled back to its starting state whether the test passes or fails. Spring has some great support for JDBC database tests. (Even if your project is not a Spring project, the classes can be very useful.) Read section 11.2.2 Unit Testing support Classes and take a look at the AbstractTransactionalJUnit4SpringContextTests / AbstractTransactionalTestNGSpringContextTests classes and the @TransactionConfiguration annotation (the latter applies when running with Spring contexts). There are also other database testing tools out there, such as DbUnit.
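As a minimal sketch of that principle, assuming an in-memory H2 database on the test classpath (the table, seed data, and JDBC URL are illustrative, not taken from the original question):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Per-test state isolation: the schema is rebuilt before each test and
// dropped after it, so test order can no longer matter.
public class ConsistentStateTest {

    private Connection connection;

    @Before
    public void setUp() throws Exception {
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb");
        try (Statement st = connection.createStatement()) {
            st.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("INSERT INTO users VALUES (1, 'alice')");
        }
    }

    @After
    public void tearDown() throws Exception {
        // Reset the schema so the next test starts from a known state,
        // no matter what this test did or in which order tests ran.
        try (Statement st = connection.createStatement()) {
            st.execute("DROP ALL OBJECTS"); // H2-specific cleanup command
        }
        connection.close();
    }

    @Test
    public void findsSeededUser() throws Exception {
        try (Statement st = connection.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM users")) {
            assertTrue(rs.next());
            assertEquals(1, rs.getInt(1));
        }
    }
}
```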
I need to set up a database in my tests (schema and some test data). This takes quite a bit of time, so I prefer to have it done once for all the tests being run, and to reset the database so that any changes to it are rolled back between tests.
I'm not sure, though, which JUnit facilities should be used for this.
It seems like I can set a @BeforeClass/@AfterClass on a test suite, but then I can't run individual tests anymore.
Is there some way to add a setup/teardown for all tests that will run even when only executing a subset of the tests and not a specific suite? (For example, NUnit has SetUpFixture.)
I guess the transactions/truncation of the DB can be done using JUnit Rules...
You can use an in-memory database like HSQL or H2 to speed up the tests.
To roll back, you can use the transactional feature.
Is there some way to add a setup/teardown for all tests that will run even when only executing a subset of the tests and not a specific suite?
For this, you can create a superclass which is extended by the other test classes. In the superclass, you can put the setup/teardown.
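A minimal sketch of that superclass approach; the names are illustrative, and the one-time guard assumes the build is not forking a fresh JVM per test class:

```java
import org.junit.BeforeClass;

// Hypothetical base class: every database test extends it, so the
// expensive setup also runs when a single test class is executed on
// its own.
public abstract class AbstractDatabaseTest {

    private static boolean schemaCreated = false;

    @BeforeClass
    public static void createSchemaOnce() {
        // Invoked before every subclass, but the guard limits the
        // expensive work to once per JVM.
        if (!schemaCreated) {
            // create the schema and insert the test data here
            schemaCreated = true;
        }
    }
}
```

Each test class then simply extends AbstractDatabaseTest and inherits the setup, whether it is run alone or as part of a larger run; per-test rollback can still be handled with transactions, as the other answer suggests.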
I have a JUnit 4 test suite with BeforeClass and AfterClass methods that perform setup/teardown for the test classes it contains.
What I need is to run the test classes by themselves as well, but for that I need a setup/teardown scenario (BeforeClass and AfterClass, or something like that) for each test class. The thing is that when I run the suite, I do not want to execute the setup/teardown before and after each test class; I only want to execute the suite's setup/teardown (once).
Is it possible? Thanks in advance.
I don't know of any standard way to do this with JUnit. The reason, as you probably already know, is that your test cases should run independently of each other. This concerns the "normal" setup/teardown methods which run before and after each test method. Class setup and teardown are a bit different, though - although I would still prefer running my tests independently and staying out of the trouble zone.
However, if you really are convinced of what you are doing, you could use a global flag to signal whether the class setup/teardown should run, and check its state in the class setup/teardown methods. In your test suite, you could include a special class as the very first one, which does nothing more than execute the setup and set the global flag to indicate to the real test cases that their class setup/teardown methods must not run. Similarly, a special last class in the suite can execute the teardown code. The caveat is that, I'm afraid, JUnit does not guarantee the order of execution of test classes inside a suite; most probably it does execute them in the specified order, but that is just an implementation detail. Try it out; it may work for you, but there is no guarantee it will always do what you expect.
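A sketch of that workaround might look like the following; all class names are illustrative, and, as said, the ordering of suite members is an implementation detail you would be relying on:

```java
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Global-flag workaround: the suite brackets all tests with one shared
// setup/teardown, while each class still works when run on its own.
@RunWith(Suite.class)
@Suite.SuiteClasses({
        AllTests.SuiteSetup.class,
        AllTests.SomeTest.class,
        AllTests.SuiteTeardown.class })
public class AllTests {

    static boolean managedBySuite = false;

    static void expensiveSetup() { /* shared environment setup */ }

    static void expensiveTeardown() { /* shared environment teardown */ }

    // Listed first in the suite: performs the setup and raises the flag.
    public static class SuiteSetup {
        @Test
        public void initialize() {
            expensiveSetup();
            managedBySuite = true;
        }
    }

    // A real test class: runs its own setup/teardown only when executed
    // outside the suite.
    public static class SomeTest {
        @BeforeClass
        public static void setUpClass() {
            if (!managedBySuite) {
                expensiveSetup();
            }
        }

        @AfterClass
        public static void tearDownClass() {
            if (!managedBySuite) {
                expensiveTeardown();
            }
        }

        @Test
        public void somethingWorks() {
            // real assertions go here
        }
    }

    // Listed last: performs the teardown and lowers the flag.
    public static class SuiteTeardown {
        @Test
        public void shutDown() {
            expensiveTeardown();
            managedBySuite = false;
        }
    }
}
```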
If you have JUnit 4.7+, I recommend looking into the new feature called Rules (explained in this blog post). They might not be exactly what you want, but they are probably the best you can get with JUnit.
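For instance, a Rule built on the ExternalResource base class (part of JUnit since 4.7) wraps setup/teardown around each test; the resource being managed here is illustrative:

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

// A sketch of the Rules feature: ExternalResource runs before() and
// after() around every test method in this class.
public class RuleExampleTest {

    @Rule
    public ExternalResource dbResource = new ExternalResource() {
        @Override
        protected void before() throws Throwable {
            // acquire the resource, e.g. open a connection or start a server
        }

        @Override
        protected void after() {
            // release the resource
        }
    };

    @Test
    public void usesTheResource() {
        // the rule has already run before() when this executes
    }
}
```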
Supposedly TestNG has better test grouping possibilities, but I haven't really looked into it myself yet.
No, there's no standard way to do this in JUnit, though you could hack something up as Péter Török suggested.
Note, however, that in doing this you are more or less abusing JUnit. The whole point of unit tests is that they are independent of each other, because dependencies between tests create a total maintenance nightmare (tests failing because they run in the wrong order).
So I'd advise you to strongly consider whether it wouldn't be better to just always run the setup...
We noticed that when TestNG test cases extend TestCase (JUnit), those tests start executing as JUnit tests. Also, I should probably mention that the tests are run through Maven.
Is this a bug or a feature? Is it possible to override this behavior and still run those types of tests as TestNG tests? Do you know of a link where TestNG talks about this?
Thanks.
I didn't think either TestNG or JUnit required any base classes now that both use annotations to specify test methods. Why do you think you need to extend a class? And why on earth would a TestNG class extend the JUnit base class TestCase? Is it any surprise that they run as JUnit tests?
It sounds like neither bug nor feature but user error on your part. I'm not sure what you're getting at here. Why would you do this?
UPDATE: Your question is confusing me. Did you have JUnit tests running successfully that you're now trying to convert to TestNG, or vice versa? I'm having a very hard time understanding what you're trying to achieve here. Leave Maven out of it; it's immaterial whether the tests are run by you, Ant, or Maven.
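For reference, a TestNG test needs no base class at all; annotations alone mark it as a test, so it is picked up as a TestNG test rather than a JUnit one. A minimal sketch, with illustrative names:

```java
import org.testng.Assert;
import org.testng.annotations.Test;

// A plain TestNG test: no TestCase superclass, just the @Test
// annotation from org.testng.annotations.
public class PlainTestNGTest {

    @Test
    public void additionWorks() {
        Assert.assertEquals(2 + 2, 4);
    }
}
```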
Looking at the Maven Surefire plugin documentation, I can't see any way to select a test for TestNG processing only if it also extends a JUnit 3 class.
AFAIK your best bet is to just work on each class separately, removing the JUnit references and then retesting. That way you never have the mixture in one class, and you should avoid problems. To make the work manageable, I would be inclined to do this only when changing a test case for some other reason.