Karate conditional Background execution - Java

How can I have a Karate setup so that I can run a bunch of tests when running locally and a subset when running in pre-prod?
When I run the tests locally, I spin up a mock server and set it up using Background. In pre-prod, no mock server is required, so I would like to skip the Background execution.
Also, I was not able to use the @Before annotation to start my Cucumber test runner.

Use tags. Refer to the documentation: https://github.com/intuit/karate#cucumber-tags
@preprod
Scenario: some scenario
Personally I prefer the approach where you spin up mock servers from your JUnit test classes, and there are a lot of examples, like this one: example
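A minimal sketch of that approach, using WireMock as a stand-in for the mock server and the JUnit 4 Karate runner (class name, port and stub are illustrative, not taken from the linked example):

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.intuit.karate.junit4.Karate;

@RunWith(Karate.class)
public class LocalMockRunner {

    private static WireMockServer mockServer;

    @BeforeClass
    public static void startMock() {
        // only spin up the mock when we are not pointed at pre-prod
        if (!"preprod".equals(System.getProperty("karate.env"))) {
            mockServer = new WireMockServer(8080);
            mockServer.start();
            mockServer.stubFor(get(urlEqualTo("/greeting"))
                    .willReturn(aResponse().withBody("hello")));
        }
    }

    @AfterClass
    public static void stopMock() {
        if (mockServer != null) {
            mockServer.stop();
        }
    }
}

The feature files picked up by this runner then hit http://localhost:8080 locally, while in pre-prod (karate.env=preprod) no mock is started at all.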
But you can also do this; refer to the docs on conditional logic:
* eval if (karate.env == 'preprod') karate.call('mock-start.feature')
I was not able to use the @Before annotation
That's not really helpful, please follow the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue

Related

Writing System Tests in JUnit

Preface
I'm deliberately talking about system tests. We do have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, and as such mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and websocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit 5.
The tests usually consist of a @BeforeAll in which the system is started in a configuration specific to the test class, which takes around a minute. Then there are a number of independent tests against this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could say that the booting itself can be considered a test case. JUnit handles lifecycle methods kind of weirdly: the time they take isn't shown in the report; if they fail, it messes with the count of tests; it's not descriptive; etc.
I'm currently looking for a workaround, but what my team has done over the last few years is kind of orthogonal to the core idea of JUnit (because it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot() {...}) and introduced an order dependency (which is pretty easy using JUnit 5) that enforces boot running before any other test.
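With JUnit 5.4+ that ordering can be declared directly; a minimal sketch of what I mean (class and method names are made up):

import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
class SomeSystemTest {

    @Test
    @Order(1) // runs first; boot time and boot failures now show up in the report
    void boot() {
        // start the system in the configuration this class needs (~1 minute)
    }

    @Test
    @Order(2)
    void someIndependentTest() {
        // exercises the already-booted system
    }
}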
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers who are trying to troubleshoot. When I try to start a single test, boot is filtered out of the test execution and the test fails.
Is there any solution to this in JUnit 5? Or is there a completely different approach I should take?
I suspect there may be a solution in using @TestTemplate, but I'm really not sure how to proceed. Also, as far as I know, that would only allow me to generate new named tests, which would be filtered as well. Do I have to write a custom test engine? That doesn't seem compelling.
This is a more general testing problem, not specific to JUnit 5. To skip the very long boot-up, you can mock some components if that is possible. Having the booting of the system as a test does not make sense, because other tests depend on it; it is better to use @BeforeAll in this case, as before. For testing the boot-up itself, you can make a separate test class that runs completely independently from the other tests.
Another option is to group this kind of test separately from the plain unit tests and run them only when needed (for example before deployment on the CI server). This really depends on the specific use case and on whether those tests should be part of the regular build on your local machine.
The third option is to try to reduce the boot time, if possible. This is the option if you can't use mocks/stubs or exclude those tests from the regular build.
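For the grouping option, JUnit 5's @Tag fits well. A minimal sketch (the tag name and Maven property are just examples):

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("system") // excluded from the regular build; run explicitly when wanted
class BootSystemTest {

    @Test
    void systemBoots() {
        // the slow boot check, run only when the "system" group is selected
    }
}

With Maven Surefire you can then include the group only on the CI server, e.g. mvn test -Dgroups=system.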

JUnit running a final test?

There are so many posts about running JUnit tests in a specific order and I fully understand:
Tests should not be order specific
and that the creators did this with point 1 in mind
But I have test cases that create a bunch of output files. I need the capability to have one final test that goes and collects these files, zips them up, and emails them off to someone.
Is there a way to group JUnit tests together so that I can have a "wrap-up" group that goes and does this? Or is there a better way of doing this?
I am running these from Jenkins as a Maven job. I could create another job that does just that based on the previous job's output, but I would prefer to do it all in one, meaning I would be able to run it everywhere, even from my IDE.
Maybe the @After and @AfterClass annotations are what you are looking for:
@AfterClass
public static void cleanupClass() {
    // runs once, after all tests in the class have finished
}

@After
public void cleanup() {
    // runs after every test
}

(Note that JUnit 4 requires @AfterClass methods to be public static and @After methods to be public.)
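If you do keep the wrap-up in the test class, a minimal sketch of collecting and zipping the output files in @AfterClass (the directory and archive names are made up):

import java.io.FileOutputStream;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

import org.junit.AfterClass;

public class WrapUpExample {

    // hypothetical directory that the tests write their output files to
    private static final Path OUTPUT_DIR = Paths.get("target/test-output");

    @AfterClass
    public static void zipOutputFiles() throws Exception {
        try (ZipOutputStream zip = new ZipOutputStream(
                new FileOutputStream("target/test-output.zip"));
             DirectoryStream<Path> files = Files.newDirectoryStream(OUTPUT_DIR)) {
            for (Path file : files) {
                if (!Files.isRegularFile(file)) continue; // skip subdirectories
                zip.putNextEntry(new ZipEntry(file.getFileName().toString()));
                Files.copy(file, zip); // stream the file's bytes into the archive
                zip.closeEntry();
            }
        }
        // emailing the archive is deliberately left out; see below
    }
}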
However, I would consider handling this through Jenkins if possible. In my opinion the annotations above are for cleaning up any kind of setup that was previously done in order to do the testing.
Sending these files through email does not sound like part of the testing and therefore I would be inclined to keep it separated.
I guess the real problem is that you want the results and output of the tests sent via email.
Your suggestion of using a test for this threw me on the wrong track.
Definitely use some sort of custom Jenkins post hook to do this. There are some fancy plugins that let you write Groovy, which will do the trick.
Do not abuse a unit test for this. These (should) also run locally as part of builds and you don't want that email being sent every time.

How to implement a custom runner in JUnit5

Is there some way to have complete control over the execution of test methods (including before/after methods) in JUnit 5, similar to JUnit 4's @RunWith annotation?
I'm trying to build a JUnit 5 Arquillian extension, but since Arquillian basically needs to execute each test in a container, I'm running into a problem when driving Arquillian from a JUnit 5 extension.
My code is here: BasicJunit5ArquillianTest.java
The test should run all methods (including before/after) in a separate container, which can be a separate JVM, a remote or embedded server, or anything isolated. My extension runs the test method from a beforeEach hook, using Arquillian to transfer the test class to the container, run it there using LauncherFactory.create(), collect the test results, and transfer them back.
The problem is that the test methods are executed twice: once via normal JUnit 5 execution and once via my Arquillian extension from the beforeEach hook. I'd like to run tests only via Arquillian and skip the normal execution of the methods.
Is this possible in a JUnit 5 extension?
Or do I need to create a custom test engine, possibly extending the Jupiter test engine?
There is no extension point (yet?) that allows you to define where or how tests are run. The same is already true for threads, which means there is no way to run tests on the JavaFX application thread or the Swing EDT.
You might have to go deeper and implement an engine, but that means that users have to choose between writing Arquillian tests or writing Jupiter tests.
UPDATE: In a newer version of JUnit 5, released since this answer was accepted, JUnit 5 provides the InvocationInterceptor extension point, which is exactly what is needed to implement a custom runner as an extension. It has full control over how the tests are executed and can even replace the body of the test method with something completely different (e.g. run the test in a different JVM and return the result).
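A minimal sketch of that extension point (the container plumbing is only hinted at; runInContainer is a hypothetical helper):

import java.lang.reflect.Method;

import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.InvocationInterceptor;
import org.junit.jupiter.api.extension.ReflectiveInvocationContext;

public class ContainerExecutionExtension implements InvocationInterceptor {

    @Override
    public void interceptTestMethod(Invocation<Void> invocation,
            ReflectiveInvocationContext<Method> invocationContext,
            ExtensionContext extensionContext) throws Throwable {
        // skip the local invocation entirely...
        invocation.skip();
        // ...and run the method somewhere else instead
        runInContainer(invocationContext.getExecutable());
    }

    // hypothetical: ship the test class to the container, run it, rethrow failures
    private void runInContainer(Method testMethod) throws Throwable {
        // Arquillian-specific plumbing goes here
    }
}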

Can we Customize Cucumber Test Suite at run time?

I have a Cucumber test runner class in which I define my test suite to run like below:
@CucumberOptions(
    features = {"Feature_Files/featues"},
    glue = {"com.automation.stepdef"},
    monochrome = true,
    dryRun = false,
    plugin = {"html:target/cucumber-html-report"},
    tags = {"@Startup"}
)
If I wish to customize this tag option on successful completion of the @Startup feature, is it possible?
The most common way of running two or more dependent test suites is to create triggers for two or more jobs in your CI. This can be done with various plugins, as described here.
Otherwise, if these are test-preparation actions, you can use the Cucumber @Before hook or the related JUnit @BeforeClass annotation.
Seems not possible with current Cucumber. What you are asking for is a dependency among test scenarios, which IMO would be a very good feature. For example, we have a login feature and some other functional features. It would not make any sense, and would actually be a waste of time, to run the other features if the login feature does not work in the first place. To make things worse, you will see a lot of failures in the test report in which you could not easily spot the root cause, which is the non-working login feature.
TestNG supports the "dependsOnMethods" feature. However, TestNG is not a BDD tool.
QAF https://qmetry.github.io/qaf/qaf-2.1.7b/scenario.html#meta-data supports this as a BDD tool. However, it would be too heavy to introduce a new tool for such a simple feature.
All we need is some addition to the Cucumber syntax and a customized test runner to build up the scenario execution order as per the dependencies and skip the features if the feature they depend on fails.
I would love to see if someone can put some effort into this :)
BTW, CI could work around this issue, but again it's too heavy and clumsy. Imagine you have multiple dependencies among test scenarios; how many CI pipelines would you need then? Also, you cannot work around this in a local dev environment with CI, because you simply would not set up CI locally.
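That said, a crude two-pass workaround is possible without CI: run the @Startup scenarios first and continue only if they pass. A rough sketch, assuming the pre-Cucumber-3 cucumber.api.cli.Main entry point and the paths from the question:

import cucumber.api.cli.Main;

public class DependentSuiteLauncher {

    public static void main(String[] args) throws Exception {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();

        // first pass: only the @Startup scenarios
        byte startupResult = Main.run(new String[] {
                "--tags", "@Startup",
                "--glue", "com.automation.stepdef",
                "Feature_Files/featues"}, loader);

        if (startupResult == 0) {
            // startup passed: run everything except @Startup
            Main.run(new String[] {
                    "--tags", "~@Startup",
                    "--glue", "com.automation.stepdef",
                    "Feature_Files/featues"}, loader);
        }
    }
}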

Writing Tests for Background Processes (like background jobs)

I have a web application built using Spring which contains some jobs.
A typical job is to run through the database, get a list of modified customers, generate a file, and FTP it. My question is, how do I go about unit testing this job?
Should I only write unit tests for each "step" of the job, like:
Test for the method which fetches the modified customers.
Test for file generation code.
Test for FTP'ing the file.
But in this case, I will miss the "integration" test case for the whole job. Also, Emma reports that there is untested code in the form of the job itself.
Any thoughts appreciated.
Thanks!
Unit testing is actually testing only one class at a time. That means you have to mock the dependencies. Spring is great for that.
I would advise Mockito for the mocking. It is a marvellous tool, and along the way you will learn TDD, which is also a way to write beautiful code.
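For example, for the "fetch the modified customers" step, a minimal sketch (CustomerRepository and ExportStep are made-up stand-ins for your own types):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class ModifiedCustomerStepTest {

    // hypothetical collaborator: the real one would talk to the database
    interface CustomerRepository {
        List<String> findModifiedCustomerIds();
    }

    // hypothetical job step under test
    static class ExportStep {
        private final CustomerRepository repository;
        ExportStep(CustomerRepository repository) { this.repository = repository; }
        List<String> customersToExport() { return repository.findModifiedCustomerIds(); }
    }

    @Test
    public void exportsExactlyTheModifiedCustomers() {
        CustomerRepository repository = mock(CustomerRepository.class);
        when(repository.findModifiedCustomerIds()).thenReturn(Arrays.asList("42", "43"));

        ExportStep step = new ExportStep(repository);

        assertEquals(Arrays.asList("42", "43"), step.customersToExport());
    }
}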
Integration test is another topic and requires another strategy.
Testing against the database is done by extending AbstractTransactionalJUnit4SpringContextTests. You will find examples on the net. In general you also use an in-memory database for those tests (H2 is good for that). It can be done in the unit test phase.
Generating the file can be done as a unit test. You generate files and verify the proper content. Or errors...
For the FTP part, I would say it's more part of an integration test, unless you can spawn an FTP server from your build script.
You have to write a unit test for each step. Maybe you'll need to mock some methods.
And then, you can write an integration test to validate the whole, but maybe you'll need to stub some parts (like the FTP server, using an embedded FTP server in your test).
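One library for the embedded-FTP-server trick is MockFtpServer. A minimal sketch, assuming its FakeFtpServer API (user, password and file system layout are illustrative):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.mockftpserver.fake.FakeFtpServer;
import org.mockftpserver.fake.UserAccount;
import org.mockftpserver.fake.filesystem.DirectoryEntry;
import org.mockftpserver.fake.filesystem.UnixFakeFileSystem;

public class FtpStepIntegrationTest {

    private FakeFtpServer ftpServer;

    @Before
    public void startEmbeddedFtpServer() {
        ftpServer = new FakeFtpServer();
        ftpServer.setServerControlPort(0); // 0 = pick any free port
        ftpServer.addUserAccount(new UserAccount("user", "password", "/"));

        UnixFakeFileSystem fileSystem = new UnixFakeFileSystem();
        fileSystem.add(new DirectoryEntry("/"));
        ftpServer.setFileSystem(fileSystem);

        ftpServer.start();
    }

    @Test
    public void uploadsGeneratedFile() {
        int port = ftpServer.getServerControlPort();
        // point the job's FTP step at localhost:port, run it, then assert
        // the expected file entry exists in ftpServer.getFileSystem()
    }

    @After
    public void stopEmbeddedFtpServer() {
        ftpServer.stop();
    }
}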
