I have one question related to the test data and the test-class structure.
I have a test class with a few tests inside.
Now, the given and the expected data are structures that I create for almost every test.
I write my tests to look like this:
private static final List<String> EXPECTED_DATA = List.of("a", "b", "c", "d", "e", "f");

@Test
void shouldReturnAttributes() {
    Service service = new Service();
    List<String> actualData = service.doSomething();
    assertThat(actualData).containsExactlyInAnyOrderElementsOf(EXPECTED_DATA);
}
Currently, I set my test data at the beginning of the test class as constants.
Once more tests are added, more constants start appearing at the beginning of the test class, resulting in a lot of scrolling down to reach the actual tests.
So, a friend suggested that the tests would be more readable if the constants were not at the top of the test class.
Test data used by more than one test class was moved to a CommonTestData class, and test data used by only a single class was structured as follows.
We moved it inside a private static class TestData, and the code now looks like this:
class ProductAttributeServiceTest {

    @Test
    void shouldReturnAttributes() {
        Service service = new Service();
        List<String> actualData = service.doSomething();
        assertThat(actualData).containsExactlyInAnyOrderElementsOf(TestData.EXPECTED_DATA);
    }

    private static class TestData {
        private static final List<String> EXPECTED_DATA = List.of("a", "b", "c", "d", "e", "f");
    }
}
Could you propose another way of doing that?
How do you structure your test data to improve test readability?
One approach could be to put the test data in text or CSV files.
Putting the test data in files gives you the chance to name each file after a specific test scenario, which eventually adds readability to the test data.
These files can also be arranged in a scenario-based folder structure.
Once the test data is arranged in files, ownership and maintenance of those files can be transferred to domain experts, and test data can be added or modified directly as needed.
A test data supplier class can then be created that reads the test data from the files and provides it to the tests.
Tests would communicate only with this supplier class, through an API like:
public String getTestData(String testScenarioName)
And if the test data behind each constant is not big enough to justify separate files, a single JSON or YAML config file with one field per data constant would do the job.
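As a rough sketch of that supplier idea (the class name, the src/test/resources/testdata layout, and the one-file-per-scenario naming are assumptions for illustration, not part of the original suggestion):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical supplier: resolves a scenario name to a file under a
// conventional test-resources folder and returns its contents as a string.
public final class TestDataSupplier {

    // Assumed layout: src/test/resources/testdata/<scenario>.txt
    private static final Path TEST_DATA_DIR = Path.of("src", "test", "resources", "testdata");

    public String getTestData(String testScenarioName) throws IOException {
        return Files.readString(TEST_DATA_DIR.resolve(testScenarioName + ".txt"));
    }
}

A test would then call something like new TestDataSupplier().getTestData("should-return-attributes") and never touch the file layout directly.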
I have a Maven Java project in IntelliJ IDEA Community. The TestNG version is very old (6.9.5) and I simply cannot update it. I have 6 TestNG test methods in a class. Only 5 of these 6 methods use data provider methods, all of which are in one DataProvider class.
When I run the test class, only the method without a data provider (say test_5) runs successfully. The others are marked as "test ignored". Moreover, when I comment out or disable test_5, then all the other tests run. Can I make TestNG give a detailed reason for ignoring tests?
Here is brief information about my project. I can't give the full code.
public class MyUtilityClass {
    public MyUtilityClass() {
        // Load data from property files and initialize members, do other stuff.
    }
}
public class BaseTest {
    MyUtilityClass utilObj = new MyUtilityClass();
    // do something with utilObj, provide common annotated methods for tests etc.
}
public class TestClass extends BaseTest {

    @BeforeClass
    public void beforeMyClass() {
        // Get some data from this.utilObj and do other things also.
    }

    @Test(dataProvider = "test_1", dataProviderClass = MyDataProvider.class)
    public void test_1() {}

    @Test(dataProvider = "test_2", dataProviderClass = MyDataProvider.class)
    public void test_2() {}

    ...

    // test_5 was the only one without a data provider.
    @Test
    public void test_5() {}

    @Test(dataProvider = "test_6", dataProviderClass = MyDataProvider.class)
    public void test_6() {}
}
public class MyDataProvider {
    MyUtilityClass utilObj = new MyUtilityClass();
    // do something with utilObj besides other things.
}
Your tests need to end in exactly the same environment in which they started.
You gave nary a clue as to what your code is like, but I can say that it is almost certainly either a database that is being written to and not reverted or an internal, persistent data structure that is being modified and not cleared.
If the tests go to the database, try enclosing the entire test in a transaction that you revert at the end of the test. If you can't do this, try mocking out the database.
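As a sketch of that transaction-rollback idea (TestNG-style, assuming plain JDBC and a hypothetical openConnection() helper; adapt to whatever data access your tests actually use):

import java.sql.Connection;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

// Wrap each test in a transaction and roll it back afterwards, so the
// database ends every test in the state in which it started.
public abstract class TransactionalTestBase {

    protected Connection connection;

    @BeforeMethod
    public void beginTransaction() throws Exception {
        connection = openConnection(); // assumed helper provided by your test infrastructure
        connection.setAutoCommit(false);
    }

    @AfterMethod
    public void rollbackTransaction() throws Exception {
        connection.rollback();
        connection.close();
    }

    protected abstract Connection openConnection() throws Exception;
}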
If it's not the DB, look for an internal static somewhere, either a singleton pattern or a static collection contained in an object. Improve that stuff right out of your design and you should be okay.
I could give you more specific tips with code, but as it stands, that's about all I can tell you.
I solved my problem. test_5 is the only test method which does not have a data provider, so I provided a mock data provider method for it.
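Something along these lines (the provider name and the empty parameter set are illustrative; the fix is simply wiring test_5 up to a provider like its siblings):

// In MyDataProvider: a no-op provider that yields one empty parameter set,
// so test_5 is invoked exactly once with no arguments.
@DataProvider(name = "test_5")
public static Object[][] test5Data() {
    return new Object[][] { {} };
}

// In TestClass:
@Test(dataProvider = "test_5", dataProviderClass = MyDataProvider.class)
public void test_5() {}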
I am currently writing JUnit test cases using the Selenium-RC API. I am writing my scripts like:
//example JUnit test case
public class myScripts extends SeleneseTestCase {

    public void setUp() throws Exception {
        SeleniumServer newSelServ = new SeleniumServer();
        newSelServ.start();
        setUp("https://mySite.com", "*firefox");
    }

    public void insert_Test_Name() throws Exception {
        // write test code here
    }
}
And for each test, I have a new JUnit file. Now, since the beginnings of my JUnit files will all basically be the same, with just minor variations towards the end, I was thinking about creating a pre-formatted Java template to create each file with the redundant code already written. However, I can't find any information on whether this is possible. Does Eclipse allow you to create file templates for certain packages?
Create a superclass to hold all the common code. Creating a template is a bad idea because, at the end of the day, you are still duplicating the code.
class Super extends SeleneseTestCase {
    // Add all common code
}

class Test1 extends Super {
    // only special test case logic
}
Also, I would suggest not creating a SeleniumServer instance for each test case; it will reduce the overall performance of the test suite. You can reuse the object as long as you are running the tests sequentially.
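A sketch of that reuse, assuming the tests run sequentially (the lazy static initialization shown here is one option, not the only one):

public class Super extends SeleneseTestCase {

    // Shared across all test classes; started at most once per JVM.
    private static SeleniumServer server;

    public void setUp() throws Exception {
        if (server == null) {
            server = new SeleniumServer();
            server.start();
        }
        setUp("https://mySite.com", "*firefox");
    }
}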
I have a Parameterized test that is fed, say, with files:
@RunWith(Parameterized.class)
public class FileTest {
    ...
    public static Collection<Object[]> data() {
        return IteratorUtils.toList( FileUtils.iterateFiles(testFilesDir,
                TrueFileFilter.INSTANCE, (IOFileFilter) null) );
    }
Whether it's files on a file system, rows from a table or URLs makes no difference, really. Just a Parameterized test that's fed with a large amount of data points and takes a long time to conclude.
Now I am running the test on, say, 10,000 files and I detect a problem with file #9,203. I fix the bug, and to verify the fix I want to re-run the test, but only for this particular file (because I can't wait 2 hours). Subsequent re-runs (after the fix is verified) should of course cover the entire data set.
Is there any way to do that, e.g. by supplying some run-time parameters in a console-invocation of JUnit so that only one particular data point is used?
OK, so in the end I found a way to accomplish this. Use a constructor for your parameterized test class that also takes a friendly name that you can easily pass from the command line. E.g. something like:
private final File testFile;
private final String friendlyTestName;

public FileTest(File testFile, String friendlyTestName) {
    this.testFile = testFile;
    this.friendlyTestName = friendlyTestName;
}
Of course, you would then have to generate the appropriate tuples in the method that provides the data points. E.g. in the example below the friendly name is simply the filename of the test file (without the path; let's assume that they are unique):
@Parameters(name = "{index}: {1}")
public static Collection<Object[]> data() {
    Collection<File> _rv = IteratorUtils.toList( FileUtils.iterateFiles(testFilesDir, TrueFileFilter.INSTANCE, (IOFileFilter) null) );
    Collection<Object[]> rv = new ArrayList<>();
    for (File f : _rv)
        rv.add(new Object[]{f, f.getName()});
    return rv;
}
Then, when invoking Ant from the command line pass a target-friendly-name parameter:
ant -Dtarget-friendly-name=a-005 test
... and make sure it is conveyed all the way to the junit Ant task. E.g. in your build.xml file you should have something like:
<junit printsummary="${junit.summary}" showoutput="${junit.output}">
    <sysproperty key="target-friendly-name" value="${target-friendly-name}"/>
    ...
</junit>
Finally, in the test class itself, read the system property and use assumeTrue to demand that the friendly name of the data point equals the target friendly name (if no target is given, all tests are run):
private static final String targetFriendlyName = System.getProperty("target-friendly-name");

@Test
public void testFile() {
    assumeTrue( (targetFriendlyName == null) || (targetFriendlyName.equals(friendlyTestName)) );
    ...
}
I was looking for a way to use the {index} property of the @Parameters annotation directly, which would have removed the need to define a separate friendly name, but I haven't figured out a way to do so; hence this solution requires the somewhat unnatural addition of a friendly-name field in the test class.
I have an application with a class registered as a message listener that receives messages from a queue, checks that they are of the correct class type (in public void onMessage(Message message)), and passes them to another class that converts the message to a string and writes the line to a log file (in public void handleMessage(MessageType m)). How would you write unit tests for this?
If you can use Mockito in combination with JUnit your test could look like this:
@Test
public void onMessage_Success() throws Exception {
    // Arrange
    Message message = aMessage().withContent("...").create();
    File mockLogFile = mock(File.class);
    MessageHandler mockMessageHandler = mock(MessageHandler.class);
    when(mockMessageHandler.handleMessage(any(MessageType.class)))
            .thenReturn("somePredefinedTestOutput");
    when(mockMessageHandler.getLogFile()).thenReturn(mockLogFile);

    MessageListener sut = spy(new MessageListener());
    Whitebox.setInternalState(sut, "messageHandler", mockMessageHandler);
    // or simply sut.setMessageHandler(mockMessageHandler); if a setter exists

    // Act
    sut.onMessage(message);

    // Assert
    assertThat(mockLogFile, contains("your desired content"));
    verify(mockMessageHandler, times(1)).handleMessage(any(MessageType.class));
}
Note that this is just a simple example of how you could test this. There are probably plenty of other ways to test the functionality. The example above showcases a typical builder pattern for the generation of default messages which accept certain values for testing. Moreover, I have not really specified the Hamcrest matcher for the contains method on the mockLogFile.
As @Keppil also mentioned in his comment, it makes sense to create multiple test cases which vary slightly in the arrange and assert parts, so that the bad cases are tested as well.
What I probably didn't explain enough is that the getLogFile() method (which with high certainty has a different name in your application) of MessageHandler should return the reference to the file used by your MessageHandler instance to store the actual log messages. Therefore, it is probably better to define this mockMessageHandler as spy(new MessageHandler()) instead of mock(MessageHandler.class), although this means that the unit test is actually an integration test, as the interaction of two classes is tested at the same time.
But overall, I hope you got the idea: use mock(Class) to generate default implementations for dependencies your system-under-test (SUT) requires, or spy(instance) if you want to include a real-world object instead of one that only returns null values. You can influence the return value of mocked objects with when(...).thenReturn(...) or .thenThrow(...), or use doReturn(...).when(...) for void operations, for example.
If you have dependency injection into private fields in place, you should use Whitebox.setInternalState(...) to inject the values into the SUT or mocked classes when no public or package-private setter methods are available (package-private works if your test classes reuse the package structure of the system-under-test classes).
Further, verify(...) lets you verify that a certain method was invoked while executing the SUT. This is quite handy in this scenario when the actual assertion isn't that trivial.
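For the void-method case mentioned above, a minimal sketch (the WriterService type and its append method are made up for illustration):

// doThrow(...).when(...) is the form to use when the stubbed method
// returns void; when(writer.append(...)) would not even compile.
WriterService writer = mock(WriterService.class); // hypothetical dependency
doThrow(new IllegalStateException("disk full"))
        .when(writer).append(anyString());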
I have a class that takes in a single file, finds the file related to it, and opens it. Something along the lines of
class DummyFileClass
{
    private File fileOne;
    private File fileTwo;

    public DummyFileClass(File fileOne)
    {
        this.fileOne = fileOne;
        fileTwo = findRelatedFile(fileOne);
    }

    public void someMethod()
    {
        // Do something with files one and two
    }
}
In my unit test, I want to be able to test someMethod() without having physical files sitting somewhere. I can mock fileOne and pass it to the constructor, but since fileTwo is being calculated in the constructor, I don't have control of this.
I could mock the method findRelatedFile() - but is this the best practice? Looking for the best design rather than a pragmatic workaround here. I'm fairly new to mocking frameworks.
In this sort of situation, I would use physical files for testing the component and not rely on a mocking framework. As fge mentions, it may be easier, plus you don't have to worry about any incorrect assumptions you may make in your mock.
For instance, if you rely upon File#listFiles(), you may have your mock return a fixed list of Files; however, the order they are returned in is not guaranteed - a fact you may only discover when you run your code on a different platform.
I would consider using JUnit's TemporaryFolder rule to help you set up the file and directory structure you need for your test, e.g.:
public class DummyFileClassTest {

    @Rule
    public TemporaryFolder folder = new TemporaryFolder();

    @Test
    public void someMethod() throws IOException {
        // given
        final File file1 = folder.newFile("myfile1.txt");
        final File file2 = folder.newFile("myfile2.txt");
        ... etc...
    }
}
The rule should clean up any created files and directories when the test completes.
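To tie it back to the question, the elided part of the test might continue something like this (assuming file2's name matches whatever convention findRelatedFile expects, which is a guess here):

// when
DummyFileClass dummy = new DummyFileClass(file1);
dummy.someMethod();

// then: assert on whatever observable effect someMethod has on the two files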