I have a Parameterized test that is fed, say, with files:
@RunWith(Parameterized.class)
public class FileTest {
    ...
    @Parameters
    public static Collection<Object[]> data() {
        return IteratorUtils.toList(FileUtils.iterateFiles(testFilesDir,
                TrueFileFilter.INSTANCE, (IOFileFilter) null));
    }
Whether it's files on a file system, rows from a table, or URLs makes no difference, really: just a parameterized test that's fed a large number of data points and takes a long time to conclude.
Now I run the test on, say, 10,000 files and detect a problem with file #9,203. I fix the bug, and to verify the fix I want to re-run the test, but only for this particular file (because I can't wait two hours). Subsequent re-runs (after the fix is verified) should of course cover the entire data set again.
Is there any way to do that, e.g. by supplying some run-time parameters in a console-invocation of JUnit so that only one particular data point is used?
OK, so in the end I found a way to accomplish this. Use a constructor for your parameterized test class that also takes a friendly name that you can easily pass from the command line. E.g. something like:
private final File testFile;
private final String friendlyTestName;

public FileTest(File testFile, String friendlyTestName) {
    this.testFile = testFile;
    this.friendlyTestName = friendlyTestName;
}
Of course, you would then have to generate the appropriate tuples in the method that provides the data points. E.g. in the example below the friendly name is simply the filename of the test file (without the path; let's assume that they are unique):
@Parameters(name = "{index}: {1}")
public static Collection<Object[]> data() {
    Collection<File> _rv = IteratorUtils.toList(FileUtils.iterateFiles(testFilesDir,
            TrueFileFilter.INSTANCE, (IOFileFilter) null));
    Collection<Object[]> rv = new ArrayList<>();
    for (File f : _rv)
        rv.add(new Object[]{f, f.getName()});
    return rv;
}
Then, when invoking Ant from the command line, pass a target-friendly-name parameter:
ant -Dtarget-friendly-name=a-005 test
... and make sure it is conveyed all the way to the junit Ant task. E.g. in your build.xml file you should have something like:
<junit printsummary="${junit.summary}" showoutput="${junit.output}">
    <sysproperty key="target-friendly-name" value="${target-friendly-name}"/>
    ...
</junit>
Finally, read the target name from the system property, and in the test method itself use assumeTrue to demand that the friendly name of the data point equals the target friendly name (if present; otherwise all tests are run).
private static final String targetFriendlyName = System.getProperty("target-friendly-name");

@Test
public void testFile() {
    assumeTrue((targetFriendlyName == null) || targetFriendlyName.equals(friendlyTestName));
    ...
}
I was looking for a way to directly use the {index} property of the @Parameters annotation, which would have removed the need to define a separate friendly name, but I haven't figured out a way to do so; hence this solution requires the unnatural addition of a friendly-name field in the test class.
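For what it's worth, the closest I got is to emulate {index} by numbering the tuples myself and filtering on a second system property. The target-index property and the index field below are my own invention, not JUnit features, so treat this as a sketch:
@Parameters(name = "{index}: {1}")
public static Collection<Object[]> data() {
    Collection<Object[]> rv = new ArrayList<>();
    int i = 0;
    // number each tuple explicitly; JUnit's own {index} is not visible to the test
    for (File f : FileUtils.listFiles(testFilesDir, TrueFileFilter.INSTANCE, (IOFileFilter) null))
        rv.add(new Object[]{f, i++});
    return rv;
}

private static final String targetIndex = System.getProperty("target-index");

private final File testFile;
private final int index;

public FileTest(File testFile, int index) {
    this.testFile = testFile;
    this.index = index;
}

@Test
public void testFile() {
    // skip unless this tuple's number matches the requested index (if any)
    assumeTrue(targetIndex == null || Integer.parseInt(targetIndex) == index);
    // ... actual assertions against testFile ...
}
This still adds a field, but at least it avoids inventing friendly names for data points that don't have a natural one.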
I have a question related to test data and test-class structure.
I have a test class with a few tests inside.
The given and expected data are structures that I create for almost every test.
I write my tests to look like this:
private static final List<String> EXPECTED_DATA = List.of("a", "b", "c", "d", "e", "f");

@Test
void shouldReturnAttributes() {
    Service service = new Service();
    List<String> actualData = service.doSomething();
    assertThat(actualData).containsExactlyInAnyOrderElementsOf(EXPECTED_DATA);
}
Currently, I set my test data at the beginning of the test class as constants.
As more tests are added, more constants appear at the beginning of the test class, resulting in a lot of scrolling to reach the actual tests.
So, a friend suggested that the tests would be more readable if the constants were not at the top of the test class.
Test data used by more than one test class is moved to a CommonTestData class, and test data used only by a specific class is structured as follows.
We moved it inside a private static class TestData, and the code looks like this:
class ProductAttributeServiceTest {

    @Test
    void shouldReturnAttributes() {
        Service service = new Service();
        List<String> actualData = service.doSomething();
        assertThat(actualData).containsExactlyInAnyOrderElementsOf(TestData.EXPECTED_DATA);
    }

    private static class TestData {
        private static final List<String> EXPECTED_DATA = List.of("a", "b", "c", "d", "e", "f");
    }
}
Could you propose another way of doing that?
How do you structure your test data to improve test readability?
One approach could be to put the test data in text or CSV files.
Putting the test data in files gives you the chance to name each file after a specific test scenario, which eventually makes the test data more readable.
These files can be arranged in scenario-based folder structures as well.
Once the test data is arranged in files, ownership and maintenance of those files can be transferred to domain experts, and test data can be added or modified directly as needed.
A test-data supplier class can be created to do the work of reading the test data from the files and providing it to the tests, so tests would communicate only with this supplier class through an API like:
public String getTestData(String testScenarioName)
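A minimal sketch of such a supplier, assuming a flat directory of UTF-8 text files named after the scenarios (the src/test/resources/testdata path and the .txt naming convention are illustrative, not a fixed convention):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TestDataSupplier {

    private static final Path BASE_DIR = Paths.get("src/test/resources/testdata");

    // Resolves the data file named after the scenario, e.g. "shouldReturnAttributes.txt".
    public String getTestData(String testScenarioName) throws IOException {
        return new String(Files.readAllBytes(BASE_DIR.resolve(testScenarioName + ".txt")),
                StandardCharsets.UTF_8);
    }
}
A test would then call new TestDataSupplier().getTestData("shouldReturnAttributes") instead of declaring the constant inline.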
And if the test data behind each constant is not big enough to justify separate files, a single JSON- or YAML-based config file with one field per data constant would do the job.
I have a Java bot based on the PircBotX framework. An IRC bot simply replies to commands. So now I have a list of static strings, e.g. !weather, !lastseen and the like, in my Main.java file.
For each command I add, I create a new static string, and I check each incoming message to see whether it starts with any of the defined commands.
Pseudocode
Receive message `m`
if m matches !x
-> do handleX()
if m matches !y
-> do handleY()
This is basically one very large if test.
What I would like to do is create some sort of skeleton class that perhaps implements an interface and defines on which command it should act and a body that defines the code it should execute. Something I'm thinking of is shown below:
public class XkcdHandler implements CommandHandlerInterface
{
public String getCommand()
{
return "!xkcd";
}
public void HandleCommand(String[] args, Channel ircChannel)
{
// Get XKCD..
ircChannel.send("The XKCD for today is ..");
}
}
With such a class I could simply add a new class and be done with it. Currently I have to add the command string, add the if test to the chain, and add the handler method to the Main.java class. It is just not a nice example of software architecture.
Is there a way that I could create something that automatically loads these classes (or instances of those classes), and then just call something like invokeMatchingCommand()? This code could then iterate a list of loaded commands and invoke HandleCommand on the matching instance.
Update
With BalckEye's answer in mind, I figured I could load all classes found in a package (i.e., Modules), instantiate them, and store them in a list. This way I could handle each message as shown in his answer (i.e., iterate the list and execute the class method for each matching command).
However, according to this thread, it seems that's not really viable. At this point I'm having a look at class loaders; perhaps that would be a viable solution.
There are several ways I think. You can just use a Map with the command as the key and an interface which executes your code as the value. Something like this:
Map<String, CommandInterface> commands = new HashMap<>();
and then use the map like this:
CommandInterface cmd = commands.get(command);
if (cmd != null) {
    cmd.execute();
}
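Using the CommandHandlerInterface from the question instead of a bare CommandInterface, the map can even be filled from the handler instances themselves. A sketch, where message and ircChannel are assumed to come from the bot's message event, and the parsing (first token is the command) is a simplifying assumption:
// Registration: each handler declares its own command string.
Map<String, CommandHandlerInterface> commands = new HashMap<>();
for (CommandHandlerInterface handler : Arrays.asList(new XkcdHandler() /*, ... */)) {
    commands.put(handler.getCommand(), handler);
}

// Dispatch: look up the first token and pass the rest as arguments.
String[] tokens = message.split("\\s+");
CommandHandlerInterface handler = commands.get(tokens[0]);
if (handler != null) {
    handler.HandleCommand(Arrays.copyOfRange(tokens, 1, tokens.length), ircChannel);
}
The map lookup replaces the whole if-chain, and adding a command really is just adding a class plus one registration line.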
You are looking for the static block, for instance:
class Main {

    private static List<CommandHandlerInterface> modules = new ArrayList<>();

    static { // runs once, when the class is first initialized
        modules.add(new WeatherCommand());
        // etc.
    }

    // a method here iterates over modules and checks for a match (see below)
}
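The method hinted at by that last comment could look like this; invokeMatchingCommand is the name suggested in the question, and the tokenizing of the message is again an assumption:
public static void invokeMatchingCommand(String message, Channel ircChannel) {
    String[] tokens = message.split("\\s+");
    for (CommandHandlerInterface module : modules) {
        // the first token carries the command, e.g. "!xkcd"
        if (module.getCommand().equals(tokens[0])) {
            module.HandleCommand(Arrays.copyOfRange(tokens, 1, tokens.length), ircChannel);
            return;
        }
    }
}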
I have a class that takes in a single file, finds the file related to it, and opens it. Something along the lines of
class DummyFileClass
{
private File fileOne;
private File fileTwo;
public DummyFileClass(File fileOne)
{
this.fileOne = fileOne;
fileTwo = findRelatedFile(fileOne);
}
public void someMethod()
{
// Do something with files one and two
}
}
In my unit test, I want to be able to test someMethod() without having physical files sitting somewhere. I can mock fileOne and pass it to the constructor, but since fileTwo is computed in the constructor, I have no control over it.
I could mock the method findRelatedFile(), but is that best practice? I'm looking for the best design rather than a pragmatic workaround here; I'm fairly new to mocking frameworks.
In this sort of situation, I would use physical files for testing the component and not rely on a mocking framework. As fge mentions, it may be easier, plus you don't have to worry about any incorrect assumptions you may make in your mock.
For instance, if you rely upon File#listFiles() you may have your mock return a fixed list of Files, however, the order they are returned in is not guaranteed - a fact you may only discover when you run your code on a different platform.
I would consider using JUnit's TemporaryFolder rule to help you set up the file and directory structure you need for your test, e.g.:
public class DummyFileClassTest {

    @Rule
    public TemporaryFolder folder = new TemporaryFolder();

    @Test
    public void someMethod() throws IOException {
        // given
        final File file1 = folder.newFile("myfile1.txt");
        final File file2 = folder.newFile("myfile2.txt");
        // ... etc. ...
    }
}
The rule should clean up any created files and directories when the test completes.
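For completeness, the elided part might continue along these lines inside someMethod(), assuming findRelatedFile resolves the second file by a naming convention, so that creating both files up front is enough:
// when: construct the class under test against the temp files
DummyFileClass dummy = new DummyFileClass(file1);
dummy.someMethod();

// then: assert on whatever observable effect someMethod() should have
// on file1/file2, e.g. contents written or files moved
This keeps the test hermetic: no fixture files checked into the repository, and nothing left behind afterwards.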
I am writing tests for an interpreter of some programming language in Java, using the JUnit framework. To this end I've created a large number of test cases, most of them containing code snippets in the language under test. Since these snippets are normally small, it is convenient to embed them in the Java code. However, Java doesn't support multiline string literals, which makes the code snippets a bit obscure due to escape sequences and the necessity to split longer string literals, for example:
String output = run("let a := 21;\n" +
"let b := 21;\n" +
"print a + b;");
assertEquals(output, "42");
Ideally I would like something like:
String output = run("""
    let a := 21;
    let b := 21;
    print a + b;
    """);
assertEquals("42", output);
One possible solution is to move the code snippets to external files and refer to each file from the corresponding test case. However, this adds a significant maintenance burden.
Another solution is to use a different JVM language that supports multiline string literals, such as Scala or Jython, to write the tests. This adds a new dependency to the project and requires porting the existing tests.
Is there any other way to keep the clarity of the test code snippets while not adding too much maintenance?
Moving the test cases to a file worked for me in the past; it was an interpreter as well:
We created an XML file containing the snippets to be interpreted as well as the expected results. It was a fairly simple XML definition: a list of test elements, mainly containing a testID, value, expected result, type, and a description.
We implemented exactly one JUnit test that read the file and looped through its contents; in case of failure we used the testID and description to log the failing tests.
It mainly worked because we had one generic, well-defined interface to the interpreter, like your run method, so refactoring was still possible. In our case this did not increase the maintenance effort; in fact, we could easily create new tests by just adding more elements to the XML file. A sketch of the looping test follows below.
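Here is what that single looping test could look like; the XML file name and the element/attribute names are illustrative, not the exact format we used, and run is the interpreter entry point from the question:
import static org.junit.Assert.assertEquals;
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.Test;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class InterpreterSnippetTest {

    @Test
    public void runAllSnippets() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new File("src/test/resources/interpreter-tests.xml"));
        NodeList tests = doc.getElementsByTagName("test");
        for (int i = 0; i < tests.getLength(); i++) {
            Element test = (Element) tests.item(i);
            String id = test.getAttribute("testID");
            String snippet = test.getElementsByTagName("value").item(0).getTextContent();
            String expected = test.getElementsByTagName("expected").item(0).getTextContent();
            // testID makes the failure message point at the offending snippet
            assertEquals("test " + id, expected, run(snippet));
        }
    }
}
Note that a plain assertEquals stops at the first failing snippet; we logged failures and carried on instead, which this sketch simplifies away.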
Maybe this is not the optimal way in which unit tests should be used, but it worked well for us.
Since you are talking about other JVM languages, have you considered Groovy? You would have to add an external dependency, but only at compile/test time (you don't have to put it in your production package), and it provides multiline strings. And one major advantage in your case: its syntax is backwards compatible with Java (meaning you won't have to rewrite your tests)!
I have done this in the past. I did something similar to what home suggested: I used external file(s) containing the tests and their expected results, but with the @Parameterized test runner.
@RunWith(Parameterized.class)
public class ParameterTest {

    @Parameters
    public static List<Object[]> data() {
        List<Object[]> list = new LinkedList<Object[]>();
        for (File file : new File("/temp").listFiles()) {
            list.add(new Object[]{file.getAbsolutePath(), readFile(file)});
        }
        return list;
    }

    private static String readFile(File file) {
        // read the whole file into a String, e.g. (Java 7+):
        try {
            return new String(Files.readAllBytes(file.toPath()), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private String filename;
    private String contents;

    public ParameterTest(String filename, String contents) {
        this.filename = filename;
        this.contents = contents;
    }

    @Test
    public void test1() {
        // here we test something
    }

    @Test
    public void test2() {
        // here we test something
    }
}
Here we are running test1() and test2() once for each file in /temp, with the filename and the contents of the file as parameters. The test class is instantiated and called for each item you add to the list in the method annotated with @Parameters.
Using this test runner, you can rerun a particular file if it fails; most IDEs support rerunning a single failed test. The disadvantage of @Parameterized is that there isn't any way to sensibly identify the tests so that the names appear in the Eclipse JUnit plugin; all you get is 0, 1, 2, etc. (newer JUnit versions improve on this with the name attribute of @Parameters, as shown earlier). But at least you can rerun the failed tests.
As home says, good logging is important to identify the failing tests correctly and to aid debugging, especially when running outside the IDE.
Could a sensible unit test be written for this code, which extracts a rar archive by delegating to a capable tool on the host system, if one exists?
I can write a test case based on the fact that my machine runs Linux and the unrar tool is installed, but if another developer who runs Windows checked out the code, the test would fail, although there would be nothing wrong with the extractor code.
I need to find a way to write a meaningful test that is not bound to the system or to the unrar tool being installed.
How would you tackle this?
public class Extractor {

    private EventBus eventBus;
    private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{new LinuxUnrarCommand()};
    private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{};
    private ExtractCommand[] macExtractCommands = new ExtractCommand[]{};

    @Inject
    public Extractor(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public boolean extract(DownloadCandidate downloadCandidate) {
        for (ExtractCommand command : getSystemSpecificExtractCommands()) {
            if (command.extract(downloadCandidate)) {
                eventBus.fireEvent(this, new ExtractCompletedEvent());
                return true;
            }
        }
        eventBus.fireEvent(this, new ExtractFailedEvent());
        return false;
    }

    private ExtractCommand[] getSystemSpecificExtractCommands() {
        String os = System.getProperty("os.name");
        if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return linuxExtractCommands;
        } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return windowsExtractCommands;
        } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return macExtractCommands;
        }
        return null;
    }
}
Could you not pass the class a Map<String, ExtractCommand[]> and then add an overridable method, say getOsName(), for getting the string to match? You could then look up the match string in the map to get the extract commands in the getSystemSpecificExtractCommands() method. This would allow you to inject a map containing a mock ExtractCommand and override getOsName() to return the key of your mock command, so you could test that when the extraction works, the eventBus is fired, etc.
private Map<String, ExtractCommand[]> eventMap;

@Inject
public Extractor(EventBus eventBus, Map<String, ExtractCommand[]> eventMap) {
    this.eventBus = eventBus;
    this.eventMap = eventMap;
}

private ExtractCommand[] getSystemSpecificExtractCommands() {
    String os = getOsName();
    return eventMap.get(os);
}

protected String getOsName() {
    return System.getProperty("os.name");
}
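A sketch of the kind of test this enables; I'm assuming Mockito (mock, verify, same, isA) for the EventBus, JUnit's assertTrue, and that the fake command can simply ignore a null DownloadCandidate:
@Test
public void firesCompletedEventWhenExtractionSucceeds() {
    EventBus eventBus = mock(EventBus.class);
    ExtractCommand succeeding = new ExtractCommand() {
        public boolean extract(DownloadCandidate candidate) {
            return true; // pretend the archive was extracted
        }
    };
    Map<String, ExtractCommand[]> eventMap = new HashMap<String, ExtractCommand[]>();
    eventMap.put("test-os", new ExtractCommand[]{succeeding});

    // anonymous subclass stubs the OS lookup, so the test is platform-independent
    Extractor extractor = new Extractor(eventBus, eventMap) {
        protected String getOsName() {
            return "test-os";
        }
    };

    assertTrue(extractor.extract(null));
    verify(eventBus).fireEvent(same(extractor), isA(ExtractCompletedEvent.class));
}
The same arrangement with a command that returns false lets you verify the ExtractFailedEvent path.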
I would look for some pure Java APIs for manipulating RAR files. This way the code will not be system-dependent.
A quick search on google returned this:
http://www.example-code.com/java/rar_unrar.asp
Start with a mock framework. You'll need to refactor a bit, as you will need to ensure that some of those private and locally scoped fields/variables can be overridden if need be.
Then, when you are testing extract(), make sure you've mocked out the commands and verify that the extract method is called on your mocked objects. You'll also want to verify that your event got fired.
Now, to make it more testable, you can use constructor or property injection. Either way, you'll need to make the private ExtractCommand arrays overridable.
Sorry, I don't have time to recode it and post it, but that should just about get you started nicely.
Good luck.
EDIT: It does sound like you are more after a functional test anyway, if you want to verify that the archive is actually extracted correctly.
Testing can be tricky, especially getting the divides right between the different types of tests, when they should be run, and what their responsibilities are. This is even more so with cross-platform code.
While it's possible to think of this as one code base you are testing, it's really multiple code bases: the generic Java code and the code for each target platform, so you will need multiple tests.
To begin with unit testing, you will not be exercising the external command. Rather, each platform-specific class is tested to see that it generates the correct command line, without actually executing it.
Your Java class that hides all the platform specifics (which command to use) has a unit test to verify that it instantiates the correct platform-specific class for a given platform. The platform can be a parameter to the core test, so multiple platforms can be "emulated". To take the unit test further, you could mock out the command implementation, e.g. having a RAR file and its uncompressed form as part of your test data, with a command that simply copies the uncompressed data (see the sketch below).
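For instance, the mocked command could be as simple as this; the ExtractCommand shape is taken from the question's code, and the test-data paths are assumptions:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// A fake command that "extracts" by copying a pre-prepared uncompressed file
// into place, so the surrounding logic can be exercised on any platform.
public class FakeUnrarCommand implements ExtractCommand {

    public boolean extract(DownloadCandidate candidate) {
        try {
            Files.copy(Paths.get("testdata/archive-content.txt"),
                    Paths.get("testdata/extracted.txt"),
                    StandardCopyOption.REPLACE_EXISTING);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}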
Once these unit tests are in place and green, you can then move on to functional tests, where the real platform-specific commands are executed. Of course, these functional tests have to be run on the actual platform. Each functional test corresponds to a platform-specific class that knows how to create the correct command line to unrar.
Your build is configured to exclude tests for classes that don't apply to the current platform, so that, for example, LinuxUnrarer is not tested on Windows. The platform-independent Java class is always tested, and it will instantiate the appropriate platform-specific test. This gives you an integration test to see that the system works end to end.
As to cross-platform UNRAR, there is a Java RAR scanner, but it doesn't decompress.