How to keep Maven integration tests isolated from each other? - java

I have multiple Maven integration tests that update the state of the database, which could create conflicts between these tests. I was wondering if there is a way to isolate these integration tests by leveraging Maven phases or any other approach? Ideally, I would like to have a way to run database migrations before every integration test class. I am using Flyway as the migration tool for my PostgreSQL database and I am using JUnit 4.12. The migrations that I am running basically create and populate tables with data for testing.

JUnit has @Before and @After annotations that invoke methods before and after each test method (and @BeforeClass/@AfterClass for setup and teardown that run once per test class).
Those methods are then responsible for bringing the database into a known state before each test.
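For example, a minimal sketch of that approach (the TestDb helper and the test class are hypothetical, just to illustrate the idea):

public class OrderRepositoryIT {

    @Before
    public void resetState() {
        // hypothetical helper that drops and re-creates the test schema and data
        TestDb.reset();
    }

    @Test
    public void savesOrder() {
        // ... exercise code that updates the database ...
    }

    @After
    public void cleanUp() {
        // optional per-test cleanup, e.g. closing connections the test opened
        TestDb.release();
    }
}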

Maven's responsibility is to run the tests one by one in the integration-test phase and to check the results in the verify phase. It is also able to prepare and shut down the environment; check the Failsafe plugin.
All isolation between tests is the responsibility of the test framework you use (JUnit, TestNG, Cucumber, etc.).

I was able to solve this using flyway-core. Basically I ended up doing the following inside each of the test classes:
@BeforeClass
public static void migrateDB() {
    // org.flywaydb.core.Flyway
    Flyway flyway = Flyway.configure().dataSource(url, user, password).load();
    flyway.clean();   // drop all objects so each test class starts from an empty schema
    flyway.migrate(); // re-apply the migrations that create and populate the test tables
}

Related

Flyway in common library that has integration tests

I'm running into a silly situation where I have my Flyway dependency defined in a common library's pom file. This common library happens to have Spring Boot integration tests that load the context, so when those integration tests run I get:
java.lang.IllegalStateException: Cannot find migrations location in: [classpath:db/migration] (please add migrations or check your Flyway configuration)
I'd rather not duplicate the dependency definition in all of the applications' poms and I can't remove the integration tests. I'd also rather not have a dummy migrations folder. Can I just turn this off somehow in the integration tests?
@SpringBootTest
@RunWith(SpringRunner.class)
@ActiveProfiles("test")
@DisableFlywaySomehow
public class MyITest {...}
Use a different profile (although you can also use your current test profile) and just set spring.flyway.enabled = false in your application.{properties,yml} file.
You can also play with the @TestPropertySource annotation on a test-by-test basis.
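For example, a minimal sketch combining both suggestions, assuming Spring Boot 2.x, where the property is spring.flyway.enabled:

@SpringBootTest
@RunWith(SpringRunner.class)
@ActiveProfiles("test")
@TestPropertySource(properties = "spring.flyway.enabled=false")
public class MyITest {
    // the context loads without Flyway trying to locate or run any migrations
}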
To be fair, your integration tests should operate on whatever the current state of your app is, which also means (if applicable) the database. Why not incorporate Testcontainers or a similar tool to mimic the real-life situation? It will then run the actual migrations from your source code and you'll test what's necessary.
Disabling the DB sounds dodgy unless you are testing the integrity of your application disregarding the database. In that case, provide a FlywayMigrationStrategy that does nothing - it will stop Flyway from executing any migrations and you'll have no DB environment.
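A minimal sketch of such a no-op strategy (the class name is illustrative):

// imported into the test with @Import(SkipFlywayConfig.class), or declared as a
// static nested @TestConfiguration class inside the test itself
@TestConfiguration
public class SkipFlywayConfig {

    @Bean
    public FlywayMigrationStrategy flywayMigrationStrategy() {
        // intentionally do nothing, so no migration (and no clean) is executed
        return flyway -> { };
    }
}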
Once again - it's up to your application

Unit tests for third party rest services

I have written unit tests for a third-party REST API. These tests are what I would call live tests in that they test the REST API responses and require valid credentials. This is required because the documentation provided by the third party is not up to date, so it's the only way of knowing what the response will be. Obviously, I can't use these as the unit tests because they actually connect externally. Where would be a good place to put these tests or separate them from mocked unit tests?
I have currently had to comment them out when I check them in so that they don't get run by the build process.
I tend to use assumeTrue for these sorts of tests and pass a system property to the tests. So the start of one of your tests would be:
@Test
public void remoteRestTest()
{
    // "true".equals(...) avoids a NullPointerException when the property is not set
    assumeTrue("true".equals(System.getProperty("run.rest.tests")));
    ...
}
This will only allow the test to run if you pass -Drun.rest.tests=true to your build.
What you are looking for are integration tests. While the scope of a unit test is usually a single class, the scope of an integration test is a whole component in its environment, and this includes the availability of external resources such as your remote REST service. Yes, you should definitely keep integration tests separate from unit tests. How this can be done in your environment depends on your build process.
For instance, if you work with Maven, there is the Maven Failsafe Plugin, which targets integration testing in your build process.
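For example, with Failsafe's default configuration it picks up classes whose names match *IT, IT* or *ITCase and runs them in the integration-test/verify phases, while Surefire keeps running the ordinary *Test classes as unit tests. A sketch (the class name is illustrative):

// Run by the Maven Failsafe plugin (default includes: *IT.java, IT*.java, *ITCase.java),
// not by Surefire, so it executes in the integration-test phase rather than with the unit tests.
public class RemoteRestServiceIT {

    @Test
    public void fetchesLiveResponse() {
        // talks to the real third-party REST API using valid credentials
    }
}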

Can we Customize Cucumber Test Suite at run time?

I have a Cucumber test runner class in which I define my test suite to run like below:
@CucumberOptions(
    features = {"Feature_Files/featues"},
    glue = {"com.automation.stepdef"},
    monochrome = true,
    dryRun = false,
    plugin = {"html:target/cucumber-html-report"},
    tags = {"@Startup"}
)
If I wish to customize this tag option on successful completion of the @Startup feature, is that possible?
The most common way of running two or more dependent test suites is to create triggers for two or more jobs in your CI. This can be done with various plugins as described here.
Otherwise, if these are just test preparation actions, you can use the @Before or the related JUnit @BeforeClass annotation.
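For example, a static @BeforeClass method on the JUnit runner class runs once before any of the features are executed (a minimal sketch; the preparation step is hypothetical):

@RunWith(Cucumber.class)
@CucumberOptions(features = {"Feature_Files/featues"}, glue = {"com.automation.stepdef"})
public class StartupTestRunner {

    @BeforeClass
    public static void prepareEnvironment() {
        // hypothetical one-time preparation before the features in this runner execute
        TestEnvironment.start();
    }
}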
Seems not possible with current Cucumber. What you are asking for is a dependency among test scenarios, which IMO is a very good feature. For example, we have a login feature and some other functional features. It would not make any sense, and would actually be a waste of time, to run the other features if the login feature does not work in the first place. To make things worse, you would see a lot of failures in the test report in which you could not easily spot the root cause, which is the non-working login feature.
TestNG supports a "dependsOnMethods" feature. However, TestNG is not a BDD tool.
QAF https://qmetry.github.io/qaf/qaf-2.1.7b/scenario.html#meta-data supports this as a BDD tool. However, it would be too heavy to introduce a new tool for such a simple feature.
All we need is some addition to the Cucumber syntax and a customized test runner to build up the scenario execution order according to the dependencies and skip features whose prerequisite feature fails.
I would love to see if someone can put some effort into this :)
BTW, CI could work around this issue, but again it's too heavy and clumsy. Imagine you have multiple dependencies among test scenarios; how many CI pipelines would you need then? Also, you cannot work around this in a local dev environment with CI, simply because you would not set up CI locally.

Integration tests randomly fail or throw error when run by maven

I am running a suite of integration tests using Maven and about 10% of the tests fail or throw an error. However, when I start the server and run the individual failed tests manually from my IDE (IntelliJ IDEA), they all pass with no problem. What could be the cause of this issue?
This is almost always caused by the tests running in an inconsistent order, or a race condition between two tests running in parallel via forked tests. If Test #1 finishes first, it passes. But if Test #2 finishes first, it leaves a test resource, such as a test database, in an alternate state, causing Test #1 to fail. It is very common with database tests, especially when one or more of them alter the database. Even in IDEA, you may find that all the tests in the com.example.FooTest class always pass when you run that class. But if you run all the tests in the com.example package or all tests in the project, sometimes (or even always) a test in FooTest fails.
The fix is to ensure your tests are always guaranteed a consistent state when run. (That is a guiding principle for good unit tests.) You need to pay attention to test setup and tear-down via the @Before, @BeforeClass, @After, and @AfterClass annotations (or the TestNG equivalents). I recommend Googling database unit testing best practices. For database tests, running each test in a transaction can prevent these types of issues: that way the database is rolled back to its starting state whether the test passes or fails. Spring has some great support for JDBC database tests. (Even if your project is not a Spring project, the classes can be very useful.) Read section 11.2.2 Unit Testing support Classes and take a look at the AbstractTransactionalJUnit4SpringContextTests / AbstractTransactionalTestNGSpringContextTests classes and the @TransactionConfiguration annotation (this latter one applies when running Spring contexts). There are also other database testing tools out there, such as DbUnit.
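A minimal sketch of that transactional approach (the context location and the foo table are hypothetical):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

@ContextConfiguration("classpath:test-context.xml")
public class FooRepositoryIT extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    public void insertIsRolledBackAfterTheTest() {
        // jdbcTemplate and countRowsInTable() are inherited from the base class
        jdbcTemplate.update("INSERT INTO foo (name) VALUES (?)", "bar");
        assertEquals(1, countRowsInTable("foo")); // assumes foo starts empty in the test schema
        // the surrounding transaction is rolled back automatically when the test finishes
    }
}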

JUnit setup for all tests

I need to set up a database in my tests (schema and some test data). This takes quite a bit of time, so I prefer to have it done once for all tests that are being run, and reset so that any changes to the DB are rolled back between tests.
I'm not sure though which JUnit facilities should be used for this.
It seems like I can set a @BeforeClass/@AfterClass on a test suite, but then I can't run individual tests anymore.
Is there some way to add a setup/teardown for all tests that will run even when only executing a subset of the tests and not a specific suite? (For example, NUnit has SetUpFixture.)
I guess the transactions/truncation of the DB can be done using JUnit Rules...
You can use an in-memory database like HSQL or H2 to speed up the tests.
To roll back, you can use the transactional feature: wrap each test in a transaction and roll it back when the test finishes.
Is there some way to add a setup/teardown for all tests that will run even when only executing a subset of the tests and not a specific suite?
For this, you can create a superclass which is extended by the other test classes. In the superclass you can put the setup/teardown logic.
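A minimal sketch of that superclass approach (the TestDatabase helper is hypothetical); the static flag makes the expensive schema setup run only once per JVM even though @BeforeClass fires for every subclass:

public abstract class AbstractDbTest {

    private static boolean schemaInitialized;

    @BeforeClass
    public static void initSchemaOnce() {
        if (!schemaInitialized) {
            // expensive one-time setup: create the schema and load the test data
            TestDatabase.createSchemaAndTestData();
            schemaInitialized = true;
        }
    }
}

// in another file:
public class CustomerDaoTest extends AbstractDbTest {

    @Test
    public void findsCustomerById() {
        // runs against the already-initialized database
    }
}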
