I have an application developed in Java that uses Arquillian for testing. The application has about 300 tests in all.
Is there an easy way to log the results of each test? The tests are not all in the same class of course. So, I am wondering if there is a way to easily show the test name and results without needing to add logging to each of the 300 tests.
I would like for the logging to be shown during the maven build, while it is actually running the tests, so that I can see the results in real time.
I did end up using the Arquillian recorder-extension library; however, it does not show tests in real time so I ended up forking the project and editing it to work for my needs.
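For anyone who wants real-time output without forking anything: a plain JUnit 4 RunListener, which Surefire can register through its listener property, should also print each test as it runs. A rough sketch (the class name and log wording are my own, not from any library):

import org.junit.runner.Description;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

// Logs every test's name and outcome to the console as Surefire executes it.
// Register the class via the surefire plugin's "listener" property.
public class ConsoleResultListener extends RunListener {

    @Override
    public void testStarted(Description description) {
        System.out.println("STARTED  " + description.getClassName() + "." + description.getMethodName());
    }

    @Override
    public void testFinished(Description description) {
        // Called for every test, passing or failing, once it completes.
        System.out.println("FINISHED " + description.getClassName() + "." + description.getMethodName());
    }

    @Override
    public void testFailure(Failure failure) {
        System.out.println("FAILED   " + failure.getDescription() + " -> " + failure.getMessage());
    }

    @Override
    public void testIgnored(Description description) {
        System.out.println("IGNORED  " + description.getMethodName());
    }

    @Override
    public void testRunFinished(Result result) {
        System.out.println("Ran " + result.getRunCount() + " tests, " + result.getFailureCount() + " failed.");
    }
}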
Preface
I'm deliberately talking about system tests. We do have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, and as such mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and websocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit5.
The tests usually consist of an @BeforeAll in which the system is started in a configuration specific to the test class, which takes around a minute. Then there are a number of independent tests against this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could say that the booting itself can be considered a test-case. JUnit handles lifecycle methods kind of weirdly - the time they take isn't shown in the report; if they fail it messes with the count of tests; it's not descriptive; etc.
I'm currently looking for a workaround, but what my team has done over the last few years is kind of orthogonal to the core idea of JUnit (cause it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot(){...}) and introduced an order dependency (which is pretty easy using JUnit 5) that enforces boot to run before any other test.
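Roughly like this, just to illustrate (the class and test names are placeholders):

import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class SystemConfigurationXTest {

    @Test
    @Order(1)
    void boot() {
        // start the system in the configuration specific to this class (takes ~1 minute)
    }

    @Test
    @Order(2)
    void someIndependentTest() {
        // relies on the system started in boot()
    }

    @Test
    @Order(3)
    void anotherIndependentTest() {
        // ...
    }
}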
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers trying to troubleshoot: when I try to start a single test, boot is filtered out of the execution and the test fails.
Is there any solution to this in JUnit5? Or is there a completely different approach I should take?
I suspect there may be a solution using @TestTemplate, but I'm really not sure how to proceed. Also, as far as I know, that would only allow me to generate new named tests that would be filtered as well. Do I have to write a custom test engine? That doesn't seem compelling.
This is a more general testing problem rather than something specific to JUnit 5. To skip the very long boot-up, you can mock some components if that is possible. Having the system boot as a test does not make sense, because other tests depend on it; it is better to use @BeforeAll in this case, as before. To test the boot-up itself, you can create a separate test class for it that runs completely independently of the other tests.
Another option is to group these kinds of tests, separate them from the plain unit tests, and run them only when needed (for example, before deployment on the CI server); one way to tag them is sketched below. This really depends on the specific use case and on whether those tests should be part of the regular build on your local machine.
The third option is to try to reduce the boot time if possible. This is an option if you can't use mocks/stubs or exclude those tests from the regular build.
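If you go the grouping route, JUnit 5 tags are one way to express it; a small sketch (the tag name is arbitrary), which the build can then include or exclude as needed:

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Tagged so the slow system tests can be left out of the regular local build
// and run only when needed, e.g. on the CI server before deployment.
@Tag("system")
class PaymentSystemIT {

    @Test
    void processesAValidPaymentEndToEnd() {
        // boots the system and exercises it over REST/websocket events
    }
}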
What are the use cases for grouping tests using TestNG groups or JUnit categories?
I usually group tests by function using JUnit categories: unit tests, integration tests, etc. But at my current team, we just had a conversation and decided we want to run all the tests all the time, because they don't see any benefit in grouping tests.
So I'm curious to know if people here group their tests and why.
There are multiple kinds of tests, and JUnit and TestNG support them all.
In the case of Unit Tests, you can run them all and get the feedback within seconds or minutes.
When it comes down to integration or end-to-end tests you might want to group your tests because of the time factor.
Let's say, you have unit tests, API tests, and even GUI tests.
While unit tests can run with each build, the other tests might take too long.
Example:
In one project I had over 300 GUI tests and it took 2 hours to run them in parallel. When we introduced a hotfix for a specific component and needed to deploy it as fast as we could, we would run the regression tests just for that component. That's when grouping might come in handy.
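With JUnit 4 categories that could look roughly like this (the marker interface and class names are only illustrative):

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interface used purely as a category label
interface RegressionTests {}

public class CheckoutComponentTest {

    @Category(RegressionTests.class)
    @Test
    public void appliesDiscountToTheCartTotal() {
        // GUI regression test for the checkout component only
    }
}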
Another example:
In my current project, I have data-driven API tests. To test one component, I have to perform 5000 requests with automated tests, which takes up to 30 minutes. That's just for 1 component, and we have around 14 for now. Imagine running the full test suite.
Running all kinds of tests for a full regression with each build would be tough for Continuous Integration/Continuous Delivery.
The other approach is to run just the smoke tests, but then you still have to either group your tests or create a specific runner class (just like a JUnit runner or Cucumber runner) to run just a portion of the tests.
The whole purpose of automated tests is to provide quick feedback on whether the version being developed contains bugs due to a regression in quality. If we have to wait a few hours for that feedback, we would have to delay each version/build (depending on when we run those tests), which might even collide with the SLA we agreed on with the customer.
To be even more specific:
Let's suppose we have a critical bug in the payment system, and per the SLA we have to fix critical bugs like this within 8 hours. The developer fixes the bug and creates a build of the application to deploy to the testing environment, and we want to make sure we did not introduce any new bugs. A full regression of automated tests, including unit tests, API tests, and GUI tests, might take up to a few hours, but we have only those few hours to get the change to our clients (the production environment). Instead of running the whole test suite, we can run the group of tests regarding payments.
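With TestNG the same idea is expressed through groups, roughly like this (group and test names are made up):

import org.testng.annotations.Test;

public class PaymentTests {

    @Test(groups = { "payments", "smoke" })
    public void acceptsAValidCreditCardPayment() {
        // quick check of the critical payment path
    }

    @Test(groups = { "payments", "regression" })
    public void postsRefundsToTheLedger() {
        // slower check that only needs to run in the payments regression
    }
}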
Hope it helps
I have around 200 TestNG test cases that can be executed through Maven via a suite.xml file. But I want to expose these test cases as a web service (or something similar) so that anybody can call any test case from their machine and will be able to know whether that particular functionality is working fine at that moment.
But what if no one calls the test web services for a long time? You won't know the state of your application or whether you have any failures/regressions.
Instead, you can use
continuous integration to run the tests automatically on every code push; see Jenkins for a more complete solution; or, more hacky, you can create your own cron job/daemon/git hook on a server to run your tests automatically
a Maven plugin that displays the results of the last execution of the automated tests; see Surefire for an HTML report on the state of the last execution of each test
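That said, if you really do want an on-demand trigger, TestNG can also be invoked programmatically, so a small service endpoint could wrap something like this sketch (the wrapper class and the wiring around it are hypothetical):

import org.testng.TestListenerAdapter;
import org.testng.TestNG;

public class OnDemandTestRunner {

    // Runs a single test class programmatically and reports whether everything passed.
    public static boolean runTestClass(Class<?> testClass) {
        TestListenerAdapter listener = new TestListenerAdapter();
        TestNG testng = new TestNG();
        testng.setTestClasses(new Class[] { testClass });
        testng.addListener(listener);
        testng.run();
        return listener.getFailedTests().isEmpty() && listener.getSkippedTests().isEmpty();
    }
}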
One way to run the tests in my Play! application is by executing the command play auto-test.
One of the ways Play seems to identify tests to run is to find all test classes with the super class play.test.UnitTest (or another Play equivalent). Having a test class extend UnitTest seems to come with some overhead as shown by this bit of stuff spat out in the console:
INFO info, Starting C:\projects\testapp\.
WARN warn, Declaring modules in application.conf is deprecated. Use dependencies.yml instead (module.secure)
INFO info, Module secure is available (C:\play-1.2.1\modules\secure)
INFO info, Module spring is available (C:\projects\testapp\.\modules\spring-1.0.1)
WARN warn, Actually play.tmp is set to null. Set it to play.tmp=none
WARN warn, You're running Play! in DEV mode
INFO info, Connected to jdbc:h2:mem:play;MODE=MYSQL;LOCK_MODE=0
INFO info, Application 'Test App' is now started !
Obviously, having a Play environment for tests that require such a setup is useful. However, if I have a test class that tests production code whose logic does not require a Play environment, I don't want to have to extend UnitTest, so that I can avoid the overhead of starting up a Play environment.
If I have a test class that does not extend UnitTest then it does not get executed by the command play auto-test. Is there a way to get the play auto-test command to execute all tests regardless of whether I extend Play's UnitTest?
Edit: Someone has actually raised a ticket for this very issue
The short answer: no. A tad longer answer: no, unless you change code in the framework. The auto-test is an Ant task that sets up the server and triggers the testing, but it's not using the JUnit Ant task, so it won't detect (by default) your 'normal' unit tests.
You have two options: either you add an extra target to Play's Ant file to run unit tests via the JUnit task (you will need to include the relevant jars too), or you edit the code used to launch the Play test environment.
Both imply changing the framework to a certain degree. Although, given that you are using Play, I wonder why you wouldn't have all your tests follow the Play pattern...
If these tests don't require any Play! feature, why don't you put them in a library? With your example (math add): create a calculator.jar package and build it with Ant or Maven after running the tests.
This way, you can use your library in several Play! projects (or Spring, Struts, etc., if you want).
I really don't get why the problem itself is even debatable. Having simple and small unit tests (even in the web-part of your project) is the most normal thing to do.
The extra overhead of framework initialisation slows down your round trips significantly if you have many tests. As can be seen in the ticket, the current workaround is to make your unit tests extend org.junit.Assert instead of play.test.UnitTest.
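That workaround amounts to writing plain JUnit tests, something like this sketch (Calculator is just a made-up class under test):

import org.junit.Test;

// A plain JUnit test with no Play environment: it extends org.junit.Assert
// instead of play.test.UnitTest, so no framework startup is involved.
public class CalculatorTest extends org.junit.Assert {

    @Test
    public void addsTwoNumbers() {
        assertEquals(4, Calculator.add(2, 2));
    }
}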
I've recently been asked to, effectively, sell my department on unit testing. I can't tell you how excited this makes me, but I do have one concern. We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database. Obviously, we can't integrate that with our production server -- it would kill valuable data.
How do I prevent the rebuilding without telling maven to skip testing?
The best I could figure was to assign the script to operate in a test database (line breaks added for readability):
mvn test
-Ddbunit.schema=<database>test
-Djdbc.url=jdbc:mysql://localhost/<database>test?
createDatabaseIfNotExist=true&
useUnicode=true&characterEncoding=utf-8
I can't help but think there must be a better way.
I'm especially interested in learning if there is an easy way to tell Maven to only run tests on particular classes without building anything else? mvn -Dtest=<test-name> test still rebuilds the database.
======= update =======
Bit of egg on my face here. I didn't realize that I was using the same variable in two places, meaning that the POM was using a "skip.test" variable both for rebuilding the database and for running the tests...
Update: I guess that DBUnit does the rebuilding of the DB because it is told to do so in the test setup method. If you change your setup method, you can eliminate the DB rebuild. Of course, you should do it so that you get the DB reset when you need it, and omit it when you don't. My first bet would be to use a system property to control this. You can set the property on the command line the same way you already do with jdbc.url et al. Then in the setup method you add an if to test for that property and do the DB reset if it is set.
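For example, the guard in the setup method could look roughly like this (the property name and the reset helper are made up):

import org.junit.Before;

public class CustomerDaoTest {

    @Before
    public void setUp() throws Exception {
        // Only rebuild the schema when explicitly asked for,
        // e.g. mvn test -Ddb.rebuild=true
        if (Boolean.getBoolean("db.rebuild")) {
            rebuildDatabase();
        }
    }

    private void rebuildDatabase() throws Exception {
        // run the usual DBUnit dataset / schema reset here
    }

    // ... the actual tests ...
}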
A test database, completely separated from your production DB is definitely the best choice if you can have it. You can even use e.g. Derby, an in-memory DB which can run embedded within the JVM. But in case you absolutely can't have a separate DB, use at least a separate test schema inside that DB.
In this scenario I would recommend you put your DB connection parameters into profiles within your pom, the default being the test DB, and a separate profile to contain the production settings. This way it can never happen that you accidentally run your tests against the production DB.
In general, however, it is also important to understand that tests run against a DB are not really unit tests in the strict sense, but rather integration tests. If you have an existing set of such tests, fine, use them as much as you can. However, you should try to move towards adding more real unit tests, which test only a small, isolated portion of your code at once (a method or class at most), ideally self contained (needing no DB, net, config files etc.) so they can run fast - this is a very important point.

If you have 5000 unit tests and each takes only 5 seconds to run, that totals up to almost 7 hours, so you obviously won't run them very often. If a test takes only 5 milliseconds, you get the results in less than half a minute, so you can afford to run all your tests before you commit your latest change - many times a day. That makes a huge difference in the speed of feedback you get from the tests.
Hope this helps.
We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database.
Maven doesn't do anything with databases by itself, your code does. In any case, it's very unusual to run tests (which are not unit tests) against a production database.
How do I prevent the rebuilding without telling maven to skip testing?
Hard to say without more details (you're not showing anything) but profiles might be a way to go.
Unit tests, by definition, only operate on a single component in the system. You should not be attempting to write unit tests which integrate with any external services (web, DB, etc.). The solution I have to this is to use a good mocking framework to stub out the behaviour of any dependencies your components have. This encourages good interface APIs since most mocking frameworks work best with simple interfaces. It would be best to create a Repository pattern interface for any interactions with your DB and then mock out the impl any time you are testing a class that interacts with it. You can then functionally test your Repository impl separately. This also has the added benefit of keeping your unit tests fast enough to remain part of your CI so that your feedback cycle is as fast as possible.
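A minimal sketch of that approach, assuming JUnit 4 and Mockito, with a made-up repository interface and service:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class GreetingServiceTest {

    // Hypothetical Repository interface hiding the real DB access
    interface UserRepository {
        String findNameById(long id);
    }

    // Hypothetical class under test that depends only on the interface
    static class GreetingService {
        private final UserRepository users;
        GreetingService(UserRepository users) { this.users = users; }
        String greet(long id) { return "Hello, " + users.findNameById(id); }
    }

    @Test
    public void greetsUserByName() {
        UserRepository repo = mock(UserRepository.class); // stub instead of a real DB
        when(repo.findNameById(42L)).thenReturn("Alice");

        assertEquals("Hello, Alice", new GreetingService(repo).greet(42L));
    }
}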