I want to create two JUnit test suites. They both use the same test classes, but each should use different parameters. For example, in test suite A, I want my data to be collected from file A and written to database A. In test suite B, I want my data to be collected from file B and written to database B.
The reason I use test suites for this is that:
1. I can put all the suite-specific parameters in the test suite classes.
2. I can reuse the test classes.
3. I can choose which test suite to run. I do not want all tests to always run with all possible parameters!
The problem is that I cannot really pass the parameters. I understand how the Parameterized class works with JUnit, but it does not allow point 3 in the list above. If I use the code below, it will run my test class with both database connections, which is not what I want to achieve.
@RunWith(value = Parameterized.class)
public class TestCheckData
{
    private File file;
    private DatabaseSource databaseSource;

    public TestCheckData(File file, DatabaseSource databaseSource)
    {
        this.file = file;
        this.databaseSource = databaseSource;
    }

    @Parameters
    public static Iterable<Object[]> data1()
    {
        return Arrays.asList(new Object[][]
        {
            { TestSuiteA.DATA_FILE_A, TestSuiteA.DATABASE_A },
            { TestSuiteB.DATA_FILE_B, TestSuiteB.DATABASE_B }
        });
    }
}
I already found a way of passing configurations in a Spring context in this question, but I'm not using any special framework.
Well, this would be a little unconventional, but you could add a different test class to the beginning of each suite run that sets the parameters you want to use for that suite. So you'd have classes like:
public abstract class StaticParameters {
    public static File dataFileToUse = null;
    public static DatabaseSource databaseToUse = null;
}

public class Suite1Params extends StaticParameters {
    @BeforeClass
    public static void setParams() {
        dataFileToUse = DATA_FILE_A;
        databaseToUse = DATABASE_A;
    }
}

public class Suite2Params extends StaticParameters {
    @BeforeClass
    public static void setParams() {
        dataFileToUse = DATA_FILE_B;
        databaseToUse = DATABASE_B;
    }
}
Then you'd just make Suite1Params or Suite2Params the first entry in your suite list, as in the sketch below. You might have to add a fake @Test entry to the params classes; I'm not sure whether the Suite runner requires one.
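For illustration, a minimal sketch of such a suite (a sketch only; it assumes JUnit 4's Suite runner and the TestCheckData class from the question, and Suite1Params may need the dummy @Test mentioned above):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Suite1Params is listed first, so its @BeforeClass sets the static
// parameters before any test in TestCheckData executes.
@RunWith(Suite.class)
@Suite.SuiteClasses({ Suite1Params.class, TestCheckData.class })
public class TestSuiteA {
}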
You could modify the tests so that they get their parameters from a config file. That way you would always have only one suite.
The path of the config file can be looked up via a system property.
Then, on each invocation of the test suite, you can point at a different config file by changing the property with the -D option on the JVM.
So, for example, if you named the property env.properties, your command would be:
%java -Denv.properties=prod.config runMyTests
or
%java -Denv.properties=dev.config runMyTests
etc
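For illustration, a minimal sketch of reading that property in the tests (the TestConfig helper class and the dev.config fallback are assumptions, not from the original answer):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class TestConfig {
    // Loads the config file named by the env.properties system property;
    // falls back to dev.config when the property is not set (assumption).
    public static Properties load() throws IOException {
        String path = System.getProperty("env.properties", "dev.config");
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return props;
    }
}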
Related
Is there a way to tell a test, with annotations or something like that, to load properties based on a custom annotation and to run the test once for each parameter it has?
For example:
I want to run test A, which has values injected with Spring's @Value, three times: for run 1 I want the test to get its values from property file X, for run 2 from property file Y and, you got it, for run 3 from property file Z.
@Value("${person.name}")
private String personName;

@RunTestWithProperties(properties = {X, Y, Z})
@Test
public void testA() { System.out.println(personName); }
On the first run, this test would print person.name from the X properties file, on the second run it would print person.name from Y, and so on.
What would be expected:
testA runs three times (each run with different properties), using files X, Y and Z.
I could use data providers or something like that, or load properties with system variables, but that is not the solution I want.
The technologies I use are Java, TestNG and Spring. Any solution is more than welcome.
Thank you in advance, guys!
You can use parameterized tests. You need to create a method annotated with @Parameterized.Parameters in which you load all your data into a collection (basically the parameters you need to pass for each run).
Then create a constructor that takes the arguments; on each run, the constructor arguments are taken from that collection.
e.g.
@RunWith(Parameterized.class)
public class RepeatableTests {

    private String name;

    public RepeatableTests(String name) {
        this.name = name;
    }

    @Parameterized.Parameters
    public static List<String> data() {
        return Arrays.asList(new String[]{"Jon", "Johny", "Rob"});
    }

    @Test
    public void runTest() {
        System.out.println("run --> " + name);
    }
}
Or, if you don't want to use constructor injection, you can use the @Parameter annotation to bind the value:
@RunWith(Parameterized.class)
public class RepeatableTests {

    @Parameter
    public String name;

    @Parameterized.Parameters(name = "{0}")
    public static List<String> data() {
        return Arrays.asList(new String[]{"Jon", "Johny", "Rob"});
    }

    @Test
    public void runTest() {
        System.out.println("run --> " + name);
    }
}
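If a test needs more than one value per run, each @Parameter takes an index into the Object[] row. A small sketch of that (the MultiParamTest class is hypothetical, not from the original answer):

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameter;

@RunWith(Parameterized.class)
public class MultiParamTest {

    @Parameter(0) // index into each Object[] row below
    public String name;

    @Parameter(1)
    public int age;

    @Parameterized.Parameters(name = "{0} is {1}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][]{{"Jon", 30}, {"Rob", 25}});
    }

    @Test
    public void runTest() {
        System.out.println(name + " --> " + age);
    }
}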
If I have a beforeMethod restricted to a group, and I run a different group, but within that group there is a test that carries both the group I'm running and the group of the beforeMethod, I want that test to run its beforeMethod. So, for example:
@BeforeMethod(groups = "a")
public void setupForGroupA() {
    ...
}

@Test(groups = {"supplemental", "a"})
public void test() {
    ...
}
When I run TestNG with groups=supplemental, I still want the beforeMethod to run before test, but because the group I run is supplemental instead of 'a', it won't.
This seems like such an obvious feature to me that I feel like I must be using groups incorrectly, so I would also like to explain my workflow, in case that's where my issue is.
I'm using groups to define different layers of tests, as well as whether they need their own account to be created or need to use a proxy to access their data, etc. I'll have groups of smoke, supplemental and regression, as well as groups of uniqueAccount, proxy, etc. I don't need specific setup for the first set of groups, but those are the groups I pass in to run in Maven. I do require specific setups for the latter groups, but I never want to run just the tests that require a proxy or require a unique account.
Groups configuration is not evaluated at runtime, so the test method won't activate the setupForGroupA method.
The groups feature is only used to select which methods to run.
Consider the following example:
@BeforeMethod(groups = "a")
public void setupForGroupA() {
    ...
}

@Test(groups = {"supplemental", "a"})
public void test() {
    ...
}

@Test(groups = {"supplemental"})
public void test2() {
    ...
}
If you run this class with group "a" it will run setupForGroupA and test methods because they are marked with the group "a".
If you run this class with group "supplemental" it will run test and test2 methods because they are marked with the group "supplemental".
It looks like you have different behavior for some methods, so a good approach is to separate the methods into different classes and select tests by class instead of selecting tests by groups.
public class A {

    @BeforeMethod
    public void setupForGroupA() {
        ...
    }

    @Test
    public void test() {
        ...
    }
}

and

public class Supplemental {

    @Test
    public void test2() {
        ...
    }
}
Running class A will run setupForGroupA and test only.
Running class Supplemental will run test2 only.
Running both classes will run everything.
In case you want to run both classes and filter by something else, you can implement your own logic with a method interceptor:
@MyCustomAnnotation(tags = {"a", "supplemental"})
public class A {
    ...
}

@MyCustomAnnotation(tags = {"supplemental"})
public class Supplemental {
    ...
}
public class MyInterceptor implements IMethodInterceptor {
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        // for each method, if its class contains the expected tag, then add it to the list
        // the expected tag can be passed by a system property or in a parameter from the suite file (available from ITestContext)
    }
}
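For illustration, a fleshed-out sketch of that interceptor (the MyCustomAnnotation definition and the test.tag property name are assumptions, not TestNG API):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

// Hypothetical tag annotation; you define it yourself, it is not part of TestNG.
@Retention(RetentionPolicy.RUNTIME)
@interface MyCustomAnnotation {
    String[] tags();
}

public class MyInterceptor implements IMethodInterceptor {
    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        // The expected tag could also be read from a suite parameter via ITestContext.
        String expectedTag = System.getProperty("test.tag", "");
        List<IMethodInstance> kept = new ArrayList<>();
        for (IMethodInstance m : methods) {
            MyCustomAnnotation ann = m.getMethod().getRealClass().getAnnotation(MyCustomAnnotation.class);
            if (ann != null && Arrays.asList(ann.tags()).contains(expectedTag)) {
                kept.add(m);
            }
        }
        return kept;
    }
}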
If I understand correctly, you want your before method to run every time. In that case, you can set alwaysRun = true on the before method, like this:
@BeforeMethod(alwaysRun = true, groups = "a")
public void setupForGroupA() {
    ...
}
This is one way to get what you want.
I have the following class (and method in it):

class Fetcher {
    public void fetch(String key) throws IOException {
        File file = File.createTempFile(key, "*.txt");
        .....
        ....
    }
}
I want to unit test this method and want to mock the createTempFile method.
For this I have written the following unit test:
@RunWith(PowerMockRunner.class)
@PrepareForTest({File.class})
public class FetcherTest {

    @Test
    public void test() throws Exception {
        String key = "key";
        File file = new File("Hello");
        PowerMock.mockStatic(File.class);
        EasyMock.expect(File.createTempFile(EasyMock.anyObject(String.class), EasyMock.anyObject(String.class))).andReturn(file).once();
        PowerMock.replay(File.class);
        Fetcher fetcher = new Fetcher();
        fetcher.fetch(key);
        PowerMock.verify(File.class);
    }
}
Executing the unit test provides the following error:
Expectation failure on verify: File.createTempFile(,):
expected: 1,actual: 0
I have looked through a lot of articles but am not able to figure out what's missing here and why File is not getting mocked. Please help with any suggestions
When you mock Java system classes (and File is a Java system class), you have to add the class that calls the system class to @PrepareForTest. PowerMock cannot instrument the system class itself, so it instruments the bytecode of the calling class instead.
So you need to add the Fetcher class to @PrepareForTest:
@RunWith(PowerMockRunner.class)
@PrepareForTest({Fetcher.class})
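Putting the fix together with the question's test, the class would look like this (a sketch; apart from the @PrepareForTest change, it is the question's code):

import java.io.File;

import org.easymock.EasyMock;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.easymock.PowerMock;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest({Fetcher.class}) // prepare the calling class, not File itself
public class FetcherTest {

    @Test
    public void test() throws Exception {
        File file = new File("Hello");
        PowerMock.mockStatic(File.class);
        EasyMock.expect(File.createTempFile(EasyMock.anyObject(String.class),
                EasyMock.anyObject(String.class))).andReturn(file).once();
        PowerMock.replay(File.class);

        new Fetcher().fetch("key");

        PowerMock.verify(File.class);
    }
}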
With JUnit you can use @RunWith(Parameterized.class) to provide a set of parameters to pass to the test constructor and then run the tests with each object.
I'm trying to move as much test logic as possible into data, but some tests won't easily convert into data-driven tests. Is there a way to use JUnit's Parameterized runner to run some tests with parameters, and also add non-data-driven tests that aren't run repeatedly for each constructed test object?
My workaround for this was to create a single class and place the programmatic and data-driven tests in two separate sub-classes. A sub-class must be static for JUnit to run its tests. Here's a skeleton:
@RunWith(Enclosed.class) // needed for working well with Ant
public class MyClassTests {

    public static class Programmatic {

        @Test
        public void myTest() {
            // test something here
        }
    }

    @RunWith(Parameterized.class)
    public static class DataDriven {

        @Parameters
        public static Collection<Object[]> getParams() {
            return Collections.emptyList();
        }

        private String data;

        public DataDriven(String testName, String data) {
            this.data = data;
        }

        @Test
        public void test() throws AnalyzeExceptionEN {
            // test data string here
        }
    }
}
One way is to use JUnit's Enclosed runner. It's very verbose but also pretty powerful: it allows you to combine multiple different runners in one file.
Another option is to use a custom JUnit runner. zohhak, for example, supports tests both with and without parameters. A small extract:
@RunWith(ZohhakRunner.class)
public class CoercingTest {

    @TestWith("ONE_OF_ENUM_VALUES")
    public void should_coerce_enum(SampleEnum param) {
        assertThat(param).isEqualTo(SampleEnum.ONE_OF_ENUM_VALUES);
    }

    @Test
    public void should_run_standard_junit_test() {
        // this will also work
    }
}
If that's not enough for you, you can surely find other runners that support both kinds of tests.
I have a class which I use as a basis for my unit tests. In this class I initialize the whole environment for my tests: setting up database mappings, inserting a number of database records across multiple tables, etc. That class has a method with a @BeforeClass annotation which does the initialization. I then extend that class with specific classes that contain the @Test methods.
My question is: since the before-class setup is exactly the same for all these test classes, how can I ensure that it runs only once for all the tests?
One simple solution would be to keep all the tests in one class. However, the number of tests is huge, and they are categorized by functional area, so they live in different classes. Since they need the exact same setup, they inherit the @BeforeClass, and as a result the whole setup runs at least once per test class, taking much more time in total than I would prefer.
I could, though, put them all in various subpackages under one package, so if there is a way to run the setup once for all the tests within that package, that would be great.
With a JUnit 4 test suite you can do something like this:
@RunWith(Suite.class)
@Suite.SuiteClasses({ Test1IT.class, Test2IT.class })
public class IntegrationTestSuite
{
    @BeforeClass
    public static void setUp()
    {
        System.out.println("Runs before all tests in the annotation above.");
    }

    @AfterClass
    public static void tearDown()
    {
        System.out.println("Runs after all tests in the annotation above.");
    }
}
Then you run this class as you would run a normal test class and it will run all of your tests.
JUnit doesn't support this; you will have to use the standard Java workaround for singletons: move the common setup code into a static initializer block and then call an empty method in this class:
static {
    ...init code here...
}

// Empty method to trigger the execution of the block above
public static void init() {}
Make sure that all tests call init(), for example by putting the call into a @BeforeClass method. Or put the static code block into a shared base class.
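A minimal sketch of that wiring (the class names EnvironmentHolder and SomeFunctionalTest are placeholders, not from the original answer):

import org.junit.BeforeClass;
import org.junit.Test;

// Placeholder holder class wrapping the static block and init() from above.
class EnvironmentHolder {
    static {
        // ...expensive one-time setup here...
        // runs exactly once, when the JVM first loads this class
    }

    // Empty method whose only purpose is to force class loading,
    // which triggers the static block above.
    public static void init() {}
}

public class SomeFunctionalTest {

    @BeforeClass
    public static void setUpClass() {
        EnvironmentHolder.init(); // the first caller in the JVM triggers the setup
    }

    @Test
    public void test() {
        // uses the environment initialized above
    }
}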
Alternatively, use a global variable:

// Note: not thread-safe; fine as long as tests run in a single thread.
private static boolean initialize = true;

public static void init() {
    if (!initialize) return;
    initialize = false;
    ...init code here...
}
Create one base class for all tests:

public class BaseTest {
    static {
        /*** init code here ***/
    }
}

and every test should inherit from it:

public class SomeTest extends BaseTest {
}
You can make one BaseTest class with a @BeforeClass method and have all the other tests inherit from it; the @BeforeClass method then runs once before the tests of each subclass.
Also, avoid executing the setup just once for the whole test suite, since test cases should be independent: @BeforeClass should execute once per test class, not once per suite.
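A small sketch of that inheritance (class names are placeholders):

import org.junit.BeforeClass;
import org.junit.Test;

public abstract class BaseTest {

    @BeforeClass
    public static void globalSetUp() {
        // runs once before the tests of each concrete subclass
    }
}

class SomeTest extends BaseTest {

    @Test
    public void test() {
        // globalSetUp() has already run for this class
    }
}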
If you can tolerate adding spring-test to your project, or you are using it already, then a good approach is to use the technique described here: How to load DBUnit test data once per case with Spring Test
Not sure if anyone is still using JUnit and trying to fix this without the Spring runner (i.e., without Spring integration). TestNG has this feature, but here is a JUnit-based solution.
Create a run-once-per-thread operation like the one below. It maintains a list of the classes for which the operation has already run.
public class RunOnceOperation {
    private static final ThreadLocal<List<Class<?>>> t = new ThreadLocal<>();

    public void run(Function<Integer, Void> f) {
        if (t.get() == null) {
            // Use a mutable list: Arrays.asList returns a fixed-size list
            // whose add() would throw UnsupportedOperationException.
            List<Class<?>> seen = new ArrayList<>();
            seen.add(getClass());
            t.set(seen);
            f.apply(0);
        } else if (!t.get().contains(getClass())) {
            t.get().add(getClass());
            f.apply(0);
        }
    }
}
Back in your unit test:

private final RunOnceOperation operation = new RunOnceOperation();

@Before
public void beforeTest() {
    operation.run(new Function<Integer, Void>() {
        @Override
        public Void apply(Integer t) {
            checkBeanProperties();
            return null;
        }
    });
}

private void checkBeanProperties() {
    // I only want to check this once per class.
    // Also, my bean check needs an instance of the class and can't be static.
}
My Function interface is like this:

interface Function<I, O> {
    O apply(I i);
}

Used this way, you can perform operations once per class per thread via the ThreadLocal.