Is there some way to have complete control over the execution of test methods (including before/after methods) in JUnit 5, similar to JUnit 4's @RunWith annotation?
I'm trying to build a JUnit 5 Arquillian extension, but since Arquillian essentially needs to execute each test inside a container, I'm running into a problem when driving Arquillian from a JUnit 5 extension.
My code is here: BasicJunit5ArquillianTest.java
The test should run all methods (including before/after) in a separate container, which can be a separate JVM, a remote or embedded server, or anything isolated. My extension runs the test method from a beforeEach hook, using Arquillian to transfer the test class to the container, run it there via LauncherFactory.create(), collect the test results and transfer them back.
The problem is that the test methods are executed twice: once via normal JUnit 5 execution and once via my Arquillian extension from the beforeEach hook. I'd like to run the tests only via Arquillian and skip the normal execution of the methods.
Is this possible with a JUnit 5 extension?
Or do I need to create a custom test engine, possibly extending the Jupiter test engine?
There is no extension point (yet?) that allows you to define where or how tests are run. The same is already true for threads: there is no way to run tests on the JavaFX application thread or the Swing EDT.
You might have to go deeper and implement an engine, but that means users have to choose between writing Arquillian tests and writing Jupiter tests.
UPDATE: In a newer version of JUnit 5, released after this answer was accepted, JUnit now provides the InvocationInterceptor extension point, which is exactly what is needed to implement a custom runner as an extension. It has full control over how tests are executed and can even replace the body of the test method with something completely different (e.g. run the test in a different JVM and return the result).
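For illustration, a minimal sketch of how such an interceptor could skip the local invocation and delegate elsewhere; the class name and the runInRemoteContainer helper are hypothetical placeholders, not part of Arquillian or JUnit:

import java.lang.reflect.Method;

import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.InvocationInterceptor;
import org.junit.jupiter.api.extension.ReflectiveInvocationContext;

public class RemoteExecutionInterceptor implements InvocationInterceptor {

    @Override
    public void interceptTestMethod(Invocation<Void> invocation,
                                    ReflectiveInvocationContext<Method> invocationContext,
                                    ExtensionContext extensionContext) throws Throwable {
        // Mark the local invocation as skipped instead of proceeding with it...
        invocation.skip();
        // ...and run the test somewhere else (hypothetical helper).
        runInRemoteContainer(invocationContext.getExecutable());
    }

    private void runInRemoteContainer(Method testMethod) {
        // e.g. transfer the test class to the container, launch it there
        // and rethrow any failure that comes back
    }
}

Such an interceptor would be registered like any other extension, e.g. with @ExtendWith(RemoteExecutionInterceptor.class) on the test class.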
Related
Pytest can be run on a test repository with the --collect-only option, which simply outputs the tests it finds based on the current configuration.
Additionally, there are hooks that can be implemented that affect what happens during various phases of the collection, such as pytest_collection_modifyitems.
I'm wondering if there's an equivalent collection/hook system available for Java tests using TestNG.
TestNG can be instructed to look for all @Test methods within a specific package and everything beneath it.
But by default, after TestNG discovers the tests, it executes them.
So to cover your use case, you would need to do the following:
Create a TestNG suite XML file that refers to the top-level package (e.g. com.foo.bar), with all the test classes residing either in com.foo.bar itself or in one of its sub-packages.
Run TestNG with the JVM argument -Dtestng.mode.dryrun=true, which causes TestNG to simulate the test run without actually executing the tests.
With these two things combined, your use case should be satisfied.
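As a rough sketch (not part of the steps above), the same setup can also be driven programmatically; com.foo.bar is the placeholder package used above, and the dry-run property assumes a TestNG version that supports it (7.x):

import java.util.List;

import org.testng.TestNG;
import org.testng.xml.XmlPackage;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class DryRunCollection {

    public static void main(String[] args) {
        // Equivalent of passing -Dtestng.mode.dryrun=true on the command line:
        // tests are discovered and reported, but their bodies are not executed.
        System.setProperty("testng.mode.dryrun", "true");

        // Equivalent of a suite XML pointing at the top-level package.
        XmlSuite suite = new XmlSuite();
        suite.setName("collect-only");
        XmlTest test = new XmlTest(suite);
        test.setName("all-tests");
        test.setXmlPackages(List.of(new XmlPackage("com.foo.bar.*")));

        TestNG testng = new TestNG();
        testng.setXmlSuites(List.of(suite));
        testng.run();
    }
}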
I've written multiple test classes to test my methods using JUnit 5.
All the test classes pass successfully when I run them individually,
but when I try to run them all at once using a test suite as shown below, some of my tests hang and the run never finishes. It doesn't even move on to the other test classes.
Since all the methods pass successfully, I don't think there's any problem with the class ParametrizedMethodTest.
I'm using junit-platform-runner version 1.6.2.
From the current JavaDoc:
Please note that test classes and suites annotated with @RunWith(JUnitPlatform.class) cannot be executed directly on the JUnit Platform (or as a "JUnit 5" test as documented in some IDEs). Such classes and suites can only be executed using JUnit 4 infrastructure.
In other words, JUnit 5 does not support test suites in the way you are trying to use them in your example. If you want to run all your test classes, just select the package and choose Run Tests from the context menu.
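For reference, a suite of the kind the JavaDoc is talking about typically looks like the sketch below (the package name is illustrative); it has to be launched through JUnit 4 infrastructure rather than as a JUnit 5 test:

import org.junit.platform.runner.JUnitPlatform;
import org.junit.platform.suite.api.SelectPackages;
import org.junit.runner.RunWith;

// Runs tests on the JUnit Platform, but only when started by a JUnit 4 runner.
@RunWith(JUnitPlatform.class)
@SelectPackages("com.example.tests")
public class AllTestsSuite {
}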
Preface
I'm deliberately talking about system tests here. We do have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, and as such mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and websocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit 5.
The tests usually consist of an @BeforeAll in which the system is started in a configuration specific to the test class, which takes around a minute. Then there is a number of independent tests against this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could argue that the booting itself can be considered a test case. JUnit handles lifecycle methods somewhat awkwardly: the time they take isn't shown in the report, a failure messes with the count of tests, it isn't descriptive, etc.
I'm currently looking for a workaround, but what my team has done over the last few years is kind of orthogonal to the core idea of JUnit (because it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot(){...}) and introduced an order dependency (which is pretty easy with JUnit 5) that enforces boot running before any other test.
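A minimal sketch of that idea, assuming Jupiter's @Order-based method ordering (class and method names are made up):

import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
class SystemTest {

    @Order(1)
    @Test
    void boot() {
        // start the system in the configuration specific to this class (~1 minute);
        // its duration and any failure now show up in the report like a normal test
    }

    @Order(2)
    @Test
    void someIndependentTest() {
        // exercise the running system via REST / websocket events
    }
}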
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers who are trying to troubleshoot. When I try to start a single test, boot is filtered out of the test execution and the test fails.
Is there any solution to this in JUnit 5? Or is there a completely different approach I should take?
I suspect there may be a solution using @TestTemplate, but I'm really not sure how to proceed. Also, as far as I know, that would only allow me to generate new named tests, which would be filtered as well. Do I have to write a custom test engine? That doesn't seem compelling.
This is a more general testing problem rather than something specific to JUnit 5. To skip the very long boot-up you can mock some components, if that is possible. Having the boot as a test method does not make much sense, because the other tests depend on it; it is better to keep using @BeforeAll in this case, as before. For testing the boot-up itself, you can create a separate test class that runs completely independently of the other tests.
Another option is to group these tests, separate them from the plain unit tests, and run them only when needed (for example before deployment on the CI server); a small sketch using tags follows after these options. Whether these tests should be part of the regular build on your local machine really depends on your specific use case.
The third option is to try to reduce the boot time if possible. This is the option to take if you can't use mocks/stubs or exclude those tests from the regular build.
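Referring back to the second option, a minimal sketch of grouping with JUnit 5 tags (the tag name "system" is illustrative); the build can then include or exclude that tag, e.g. via Surefire's groups/excludedGroups settings:

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("system")
class BootSystemTest {

    @Test
    void systemBootsWithSpecificConfiguration() {
        // boot the system and assert that it came up correctly
    }
}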
How can I set up Karate so that I can run a full set of tests when running locally and only a subset when running in pre-prod?
When I run the tests locally, I spin up a mock server and set it up using Background. In pre-prod, no mock server is required, so I would like to skip the Background execution.
Also, I was not able to use the @Before annotation to start my cucumber Test Runner.
Use tags. Refer to the documentation: https://github.com/intuit/karate#cucumber-tags
@preprod
Scenario: some scenario
Personally I prefer the approach where you spin up mock servers from your JUnit test classes, and there are a lot of examples, like this one: example
But you can do this also; refer to the docs on conditional logic:
* eval if (karate.env == 'preprod') karate.call('mock-start.feature')
I was not able to use the @Before annotation
That's not really helpful, please follow the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
I have written unit tests for a third-party REST API. These tests are what I would call live tests, in that they actually test the REST API responses and require valid credentials. This is necessary because the documentation provided by the third party is not up to date, so it's the only way of knowing what the response will be. Obviously, I can't use these as the unit tests because they actually connect externally. Where would be a good place to put these tests, or how should I separate them from the mocked unit tests?
Currently I have to comment them out when I check them in so that they don't get run by the build process.
I tend to use assumeTrue for these sort of tests and pass a system property to the tests. So the start of one of your tests would be:
@Test
public void remoteRestTest()
{
    // null-safe comparison: getProperty returns null when the flag isn't set
    assumeTrue("true".equals(System.getProperty("run.rest.tests")));
    ...
}
This will only allow the test to run if you pass -Drun.rest.tests=true to your build.
What you are looking for are integration tests. While the scope of a unit test is usually a single class, the scope of an integration test is a whole component in its environment, and that includes the availability of external resources such as your remote REST service. Yes, you should definitely keep integration tests separate from unit tests. How this can be done in your environment depends on your build process.
For instance, if you work with Maven, the Maven Failsafe Plugin targets integration testing in your build process.
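As a minimal sketch, assuming Failsafe's default naming convention (classes ending in IT are picked up during the integration-test phase rather than by the unit-test run); the class and method names here are illustrative:

import org.junit.jupiter.api.Test;

class ThirdPartyRestApiIT {

    @Test
    void liveEndpointRespondsAsDocumented() {
        // call the real REST API with valid credentials and assert on the response
    }
}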