How to test an interpreter using JUnit? - java

I am writing tests for an interpreter for some programming language in Java, using the JUnit framework. To this end I've created a large number of test cases, most of them containing code snippets in the language under test. Since these snippets are normally small, it is convenient to embed them in the Java code. However, Java doesn't support multiline string literals, which makes the code snippets a bit obscure due to escape sequences and the need to split longer string literals, for example:
String output = run("let a := 21;\n" +
"let b := 21;\n" +
"print a + b;");
assertEquals(output, "42");
Ideally I would like something like:
String output = run("""
let a := 21;
let b := 21;
print a + b;
""");
assertEquals(output, "42");
One possible solution is to move the code snippets to external files and reference each file from the corresponding test case. However, this adds a significant maintenance burden.
Another solution is to write the tests in a different JVM language, such as Scala or Jython, which supports multiline string literals. This would add a new dependency to the project and would require porting the existing tests.
Is there any other way to keep the clarity of the test code snippets while not adding too much maintenance?

Moving the test cases to a file worked for me in the past; it was an interpreter as well:
created an XML file containing the snippets to be interpreted as well as the expected results. It was a fairly simple XML definition: a list of test elements, mainly containing testID, value, expected result, type, and a description.
implemented exactly one JUnit test that read the file and looped through its contents; in case of failure we used the testID and description to log the failing tests.
It mainly worked because we had one generic, well-defined interface to the interpreter, like your run method, so refactoring was still possible. In our case this did not increase the maintenance effort; in fact, we could easily create new tests by just adding more elements to the XML file.
Maybe this is not the optimal way in which Unit tests should be used, but it worked well for us.
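For illustration, here is a minimal sketch of such a data-driven test. The XML layout, file name, and the binding to the question's run method are assumptions, not the format the answer actually used:
import static org.junit.Assert.assertEquals;

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.Test;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SnippetFileTest {

    @Test
    public void runAllSnippets() throws Exception {
        // Assumed layout: <tests><test id="add" expected="42">let a := 21; ...</test></tests>
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("src/test/resources/interpreter-tests.xml"));
        NodeList tests = doc.getElementsByTagName("test");
        for (int i = 0; i < tests.getLength(); i++) {
            Element test = (Element) tests.item(i);
            String id = test.getAttribute("id");
            String expected = test.getAttribute("expected");
            String snippet = test.getTextContent();
            // run(...) is the interpreter entry point from the question, assumed to be reachable from the test
            assertEquals("snippet '" + id + "' failed", expected, run(snippet));
        }
    }
}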

Since you are talking about other JVM languages, have you considered Groovy? You would have to add an external dependency, but only at compile/test time (you don't have to put it in your production package), and it provides multiline strings. And one major advantage in your case: its syntax is backwards compatible with Java (meaning you won't have to rewrite your tests)!

I have done this in the past. I've done something similar to what was suggested by home: I used external file(s) containing the tests and their expected results, but with the @Parameterized test runner.
@RunWith(Parameterized.class)
public class ParameterTest {

    @Parameters
    public static List<Object[]> data() {
        List<Object[]> list = new LinkedList<Object[]>();
        for (File file : new File("/temp").listFiles()) {
            list.add(new Object[]{file.getAbsolutePath(), readFile(file)});
        }
        return list;
    }

    private static String readFile(File file) {
        // read file
        return "file contents";
    }

    private String filename;
    private String contents;

    public ParameterTest(String filename, String contents) {
        this.filename = filename;
        this.contents = contents;
    }

    @Test
    public void test1() {
        // here we test something
    }

    @Test
    public void test2() {
        // here we test something
    }
}
Here we are running test1() & test2() once for each file in /temp, with the parameters of the filename and the contents of the file. The test class is instantiated and called for each item that you add into the list in the method annotated with @Parameters.
Using this test runner, you can rerun a particular file if it fails; most IDEs support rerunning a single failed test. The disadvantage of @Parameterized is that there isn't any way to sensibly identify the tests so that the names appear in the Eclipse JUnit plugin. All you get is 0, 1, 2, etc. But at least you can rerun the failed tests.
As home says, good logging is important to identify the failing tests correctly and to aid debugging especially when running outside the IDE.


JUnit - How to unit test method that reads files in a directory and uses external libraries

I have this method that I am using in a NetBeans plugin:
public static SourceCodeFile getCurrentlyOpenedFile() {
    MainProjectManager mainProjectManager = new MainProjectManager();
    Project openedProject = mainProjectManager.getMainProject();

    /* Get Java file currently displaying in the IDE if there is an opened project */
    if (openedProject != null) {
        TopComponent activeTC = TopComponent.getRegistry().getActivated();
        DataObject dataLookup = activeTC.getLookup().lookup(DataObject.class);
        File file = FileUtil.toFile(dataLookup.getPrimaryFile()); // Currently opened file

        // Check if the opened file is a Java file
        if (FilenameUtils.getExtension(file.getAbsoluteFile().getAbsolutePath()).equalsIgnoreCase("java")) {
            return new SourceCodeFile(file);
        } else {
            return null;
        }
    } else {
        return null;
    }
}
Basically, using NetBeans API, it detects the file currently opened by the user in the IDE. Then, it loads it and creates a SourceCodeFile object out of it.
Now I want to unit test this method using JUnit. The problem is that I don't know how to test it.
Since it doesn't receive any argument as a parameter, I can't test how it behaves given wrong arguments. I also thought about trying to manipulate openedProject in order to test the method's behaviour given different values of that object, but as far as I'm concerned, I can't manipulate a variable that way in JUnit. I also cannot check what the method returns, because the unit test will always get null, since no opened file is detected in NetBeans when the test runs.
So, my question is: how can I approach the unit testing of this method?
Well, your method does take parameters, "between the lines":
MainProjectManager mainProjectManager = new MainProjectManager();
Project openedProject = mainProjectManager.getMainProject();
basically fetches the object to work on.
So the first step would be to change that method signature, to:
public static SourceCodeFile getCurrentlyOpenedFile(Project project) {
...
Of course, that object isn't used, except for that null check. So the next level would be to have a distinct method like
SourceCodeFile lookup(DataObject dataLookup) {
In other words: your real problem is that you wrote hard-to-test code. The "default" answer is: you have to change your production code to make it easier to test.
For example by ripping it apart, and putting all the different aspects into smaller helper methods.
You see, that last method lookup(), that one takes a parameter, and now it becomes (somehow) possible to think up test cases for this. Probably you will have to use a mocking framework such as Mockito to pass mocked instances of that DataObject class within your test code.
Long story short: there are no detours here. You can't test your code (in reasonable ways) as it is currently structured. Re-structure your production code, then all your ideas about "when I pass X, then Y should happen" can work out.
Disclaimer: yes, theoretically, you could test the above code by heavily relying on frameworks like PowerMock(ito) or JMockit. These frameworks allow you to control (mock) calls to static methods, or to new(). So they would give you full control over everything in your method. But that would basically force your tests to know everything that is going on in the method under test. Which is a really bad thing.
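To make the refactoring idea concrete, here is a minimal sketch of the kind of extraction being suggested. isJavaFile is a hypothetical helper, not something from the original plugin code; it pulls the pure decision logic out of the NetBeans-specific lookup so it can be tested with plain JUnit:
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.io.File;
import org.apache.commons.io.FilenameUtils;
import org.junit.Test;

public class SourceCodeFileTest {

    // Hypothetical helper extracted from getCurrentlyOpenedFile(): no NetBeans APIs involved
    static boolean isJavaFile(File file) {
        return file != null
            && FilenameUtils.getExtension(file.getAbsolutePath()).equalsIgnoreCase("java");
    }

    @Test
    public void recognizesJavaFilesByExtension() {
        assertTrue(isJavaFile(new File("Example.java")));
        assertFalse(isJavaFile(new File("notes.txt")));
        assertFalse(isJavaFile(null));
    }
}
The remaining glue (looking up the active TopComponent and its DataObject) stays thin and is covered by the mocking-based tests described above.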

Should I repeat code in actual class in tests

I want to test that a specific method produces the expected result, but to do that I need to manipulate the input in the test as well.
class ToTest {
    public String produceResponse(String input) {
        // ....
        String[] encryptedIds = encryptIds(input);
        return doStuff(input, encryptedIds);
    }

    public String[] encryptIds(String input) {
        // ....
    }
}
In my test I need to check that produceResponse actually produces the expected response.
In order to do that I have to encrypt the ids in the input.
My question is: should I rewrite encryptIds in the test (so that I would have more control over the result), or should I call encryptIds from the class itself?
Is there a better approach to solve this? I don't like that my test has to know what happens in the specific flow.
If I understand correctly, you would like to test produceResponse() with known encryptedIds as input.
You could do that without refactoring the code, but it would probably be a good idea to refactor it, so that's what I'm going to explain:
class ToTest {
    private IdEncryptor encryptor;

    public ToTest(IdEncryptor encryptor) {
        this.encryptor = encryptor;
    }

    public String produceResponse(String input) {
        String[] encryptedIds = encryptor.encryptIds(input);
        return doStuff(input, encryptedIds);
    }
}
Now you can unit-test IdEncryptor to test that it produces correct encrypted IDs based on a String input.
And to test the ToTest class, you can mock the IdEncryptor so that whatever the input it receives, it produces the encryptedIds you desire. For example with mockito:
IdEncryptor mockEncryptor = mock(IdEncryptor.class);
when(mockEncryptor.encryptIds(any(String.class))).thenReturn(new String[] {"a", "b"});
ToTest toTest = new ToTest(mockEncryptor);
String response = toTest.produceResponse("input");
// expect that the response is what you expect given "a", "b" as input of doStuff()
Never copy any production code into the unit test as it will get outdated at some point.
If both methods are public, they are part of the public API, so:
you should first unit test the correct behavior of the encryptIds(String) method
then unit test the produceResponse(String) method which will internally use the already tested encryptIds(String) method
If encryptIds(String) would not be part of the public API:
then it is an internal implementation/helper method which is not unit testable on its own
produceResponse(String) is then responsible for encryption as a side-effect:
you can still test it if you mark it package private (no modifier)
you can also change the implementation of encryptIds(String) only for testing purposes (see the sketch below)
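A minimal sketch of that last option, overriding the helper only for the test; the class and method signatures come from the question, the fixed return value is an assumption:
// Test-only subclass that pins encryptIds to a known, deterministic value
class ToTestWithFixedIds extends ToTest {
    @Override
    public String[] encryptIds(String input) {
        return new String[] {"a", "b"};
    }
}

@Test
public void produceResponseUsesEncryptedIds() {
    ToTest toTest = new ToTestWithFixedIds();
    String response = toTest.produceResponse("input");
    // assert on the response, knowing doStuff() received {"a", "b"}
}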
Is encrypting ids something that is integral to your system or not? As it stands, this class takes some input and produces some output, and as far as your test is concerned that is what's important, no more, no less.
What is the impact of not performing the encryption? If your doStuff method will just fail when it doesn't happen, then it is an internal detail of your class-under-test and I wouldn't have the tests care about it at all. If it's a step that absolutely must be performed, then I would refactor the code to verify that it has actually happened, maybe using a mock as @jb-nizet answered.
As for the general case of duplicating production code in tests: as @Crazyjavahacking stated, you should not do this, but I have no issue with using production code from a test. Maybe not at the unit level, but definitely higher up the system, e.g. when testing writing to a DB I will use the reading code to verify the write happened correctly, and I will also have independent tests covering the reading path.
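As an illustration of that last point, a hedged sketch of a write-then-read-back test; OrderRepository, Order and the test data source are hypothetical names, not from the answer:
@Test
public void savedOrderCanBeReadBack() {
    OrderRepository repository = new OrderRepository(testDataSource); // test DataSource set up elsewhere
    Order original = new Order("order-1", 42);

    repository.save(original);                        // the write path under test
    Order readBack = repository.findById("order-1");  // production read code, covered by its own tests

    assertEquals(original, readBack);
}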

Parameterized test: selectively run for only one data point

I have a Parameterized test that is fed, say, with files:
@RunWith(Parameterized.class)
public class FileTest {
    ...
    public static Collection<Object[]> data() {
        return IteratorUtils.toList( FileUtils.iterateFiles(testFilesDir
                                   , TrueFileFilter.INSTANCE
                                   , (IOFileFilter) null) );
    }
Whether it's files on a file system, rows from a table or URLs makes no difference, really. Just a Parameterized test that's fed with a large amount of data points and takes a long time to conclude.
Now I am running the test on, say, 10,000 files and I detect a problem with file #9,203. I fix the bug, and to verify the fix I want to re-run the test, but only for this particular file (because I can't wait 2 hours). Subsequent re-runs (after the fix is verified) should of course cover the entire data set again.
Is there any way to do that, e.g. by supplying some run-time parameters in a console-invocation of JUnit so that only one particular data point is used?
OK, so in the end I found a way to accomplish this. Use a constructor for your parameterized test class that also takes a friendly name that you can easily pass from the command line. E.g. something like:
private final File testFile;
private final String friendlyTestName;

public FileTest(File testFile, String friendlyTestName) {
    this.testFile = testFile;
    this.friendlyTestName = friendlyTestName;
}
Of course, you would then have to generate the appropriate tuples in the method that provides the data points. E.g. in the example below the friendly name is simply the filename of the test file (without the path; let's assume that they are unique):
@Parameters(name= "{index}: {1}")
public static Collection<Object[]> data() {
    Collection<File> _rv = IteratorUtils.toList( FileUtils.iterateFiles(testFilesDir, TrueFileFilter.INSTANCE, (IOFileFilter) null) );
    Collection<Object[]> rv = new ArrayList<>();
    for (File f : _rv)
        rv.add(new Object[]{f, f.getName()});
    return rv;
}
Then, when invoking Ant from the command line pass a target-friendly-name parameter:
ant -Dtarget-friendly-name=a-005 test
... and make sure it is conveyed all the way to the junit Ant task. E.g. in your build.xml file you should have something like:
<junit printsummary="${junit.summary}" showoutput="${junit.output}">
    <sysproperty key="target-friendly-name" value="${target-friendly-name}"/>
    ...
</junit>
Finally, in the test method itself use assumeTrue to demand that the friendly name of the data point equals the target friendly name (if present; otherwise all tests are run).
@Test
public void testFile() {
    assumeTrue( (targetFriendlyName==null)||(targetFriendlyName.equals(friendlyTestName)) );
    ...
}
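The answer does not show where targetFriendlyName comes from; presumably it is read from the system property set by the Ant task, along these lines (field placement is an assumption):
// null means "no filter": run the full data set
private static final String targetFriendlyName = System.getProperty("target-friendly-name");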
I was looking for a way to directly use the {index} property of the Parameters annotation, which would have removed the need to define a separate friendlyName, but I haven't figured out a way to do so; hence this solution requires the unnatural addition of a friendly name field in the test class.

Sensible unit test possible?

Could a sensible unit test be written for this code which extracts a rar archive by delegating it to a capable tool on the host system if one exists?
I can write a test case based on the fact that my machine runs Linux and the unrar tool is installed, but if another developer who runs Windows checked out the code, the test would fail, although there would be nothing wrong with the extractor code.
I need to find a way to write a meaningful test which is not tied to the operating system or to the unrar tool being installed.
How would you tackle this?
public class Extractor {

    private EventBus eventBus;
    private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{new LinuxUnrarCommand()};
    private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{};
    private ExtractCommand[] macExtractCommands = new ExtractCommand[]{};

    @Inject
    public Extractor(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public boolean extract(DownloadCandidate downloadCandidate) {
        for (ExtractCommand command : getSystemSpecificExtractCommands()) {
            if (command.extract(downloadCandidate)) {
                eventBus.fireEvent(this, new ExtractCompletedEvent());
                return true;
            }
        }
        eventBus.fireEvent(this, new ExtractFailedEvent());
        return false;
    }

    private ExtractCommand[] getSystemSpecificExtractCommands() {
        String os = System.getProperty("os.name");
        if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return linuxExtractCommands;
        } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return windowsExtractCommands;
        } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return macExtractCommands;
        }
        return null;
    }
}
Could you not pass the class a Map<String, ExtractCommand[]> instance and then add an overridable method, say getOsName(), for getting the string to match? Then you could look up the match string in the map to get the extract commands in the getSystemSpecificExtractCommands method. This would allow you to inject a map containing a mock ExtractCommand and override the getOsName method to return the key of your mock command, so you could test that when the extraction worked, the eventBus is fired, etc.
private Map<String, ExtractCommand[]> commandMap;

@Inject
public Extractor(EventBus eventBus, Map<String, ExtractCommand[]> commandMap) {
    this.eventBus = eventBus;
    this.commandMap = commandMap;
}

private ExtractCommand[] getSystemSpecificExtractCommands() {
    String os = getOsName();
    return commandMap.get(os);
}

protected String getOsName() {
    return System.getProperty("os.name");
}
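A sketch of the test this refactoring enables, assuming the map-taking constructor above, Mockito's static imports, and that DownloadCandidate can be mocked; the OS key is arbitrary:
@Test
public void firesCompletedEventWhenExtractionSucceeds() {
    EventBus eventBus = mock(EventBus.class);
    ExtractCommand command = mock(ExtractCommand.class);
    DownloadCandidate candidate = mock(DownloadCandidate.class);
    when(command.extract(candidate)).thenReturn(true);

    Map<String, ExtractCommand[]> commandMap =
            Collections.singletonMap("TestOS", new ExtractCommand[]{ command });

    // Anonymous subclass pins the OS name so the matching map entry is picked
    Extractor extractor = new Extractor(eventBus, commandMap) {
        @Override
        protected String getOsName() {
            return "TestOS";
        }
    };

    assertTrue(extractor.extract(candidate));
    verify(eventBus).fireEvent(same(extractor), any(ExtractCompletedEvent.class));
}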
I would look for some pure java APIs for manipulating rar files. This way the code will not be system dependent.
A quick search on google returned this:
http://www.example-code.com/java/rar_unrar.asp
Start with a mock framework. You'll need to refactor a bit, as you will need to ensure that some of those private and local scope properties/variables can be overridden if need be.
Then when you are testing Extract, you make sure you've mocked out the commands, and ensure that the Extract method is called on your mocked objects. You'll also want to ensure that your event got fired too.
Now to make it more testable you can use constructor or property injection. Either way, you'll need to make the private ExtractCommand arrays overriddable.
Sorry, don't have time to recode it, and post, but that should just about get you started nicely.
Good luck.
EDIT. It does sound like you are more after a functional test anyway if you want to test that it is actually extracted correctly.
Testing can be tricky, especially getting the divides right between the different types of tests and when they should be run and what their responsibilities are. This is even more so with cross-platform code.
While it's possible to think of this as one code base you are testing, it's really multiple code bases: the generic Java code and the code for each target platform, so you will need multiple tests.
To begin with unit testing, you will not be exercising the external command. Rather, each platform specific class is tested to see that it generates the correct command line, without actually executing it.
Your Java class that hides all the platform specifics (which command to use) has a unit test to verify that it instantiates the correct platform specific class for a given platform. The platform can be a parameter to the core test, so multiple platforms can be "emulated". To take the unit test further, you could mock out the command implementation (e.g. having a RAR file and its uncompressed form as part of your test data, with the mock command being a simple copy of the uncompressed data).
Once these unit tests are in place and green, you then can move on to functional tests, where the real platform specific commands are executed. Of course, these functional tests have to be run on the actual platform. Each functional test corresponds to a platform specific class that knows how to create the correct commandline to unrar.
Your build is configured to exclude tests for classes that don't apply to the current platform, so, for example, LinuxUnrarer is not tested on Windows. The platform independent Java class is always tested, and it will instantiate the appropriate platform specific test. This gives you an integration test to see that the system works end to end.
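To illustrate the unit level described above (checking command construction without executing anything), a hedged sketch; buildCommandLine is a hypothetical method, and the real LinuxUnrarCommand from the question may not expose anything like it:
@Test
public void linuxCommandLineIsBuiltCorrectly() {
    LinuxUnrarCommand command = new LinuxUnrarCommand();
    List<String> commandLine = command.buildCommandLine(new File("archive.rar"), new File("/tmp/out"));
    // Nothing is executed here; we only assert on the arguments that would be passed to the shell
    assertEquals(Arrays.asList("unrar", "x", "-o+", "archive.rar", "/tmp/out/"), commandLine);
}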
As to cross platform UNRAR, there is a java RAR scanner, but it doesn't decompress.

How to write automated unit tests for java annotation processor?

I'm experimenting with java annotation processors. I'm able to write integration tests using the "JavaCompiler" (in fact I'm using "hickory" at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second, even without any code in my annotation processor. This is way too long to use it in TDD style.
Mocking away the dependencies seems very hard to me (I would have to mock out the entire "javax.lang.model.element" package). Has someone succeeded in writing unit tests for an annotation processor (Java 6)? If not ... what would be your approach?
This is an old question, but it seems that the state of annotation processor testing hasn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
.that(JavaFileObjects.forResource("HelloWorld.java"))
.processedWith(new MyAnnotationProcessor())
.compilesWithoutError()
.and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
.that(fileObject)
.processedWith(new NoHelloWorld())
.failsToCompile()
.withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking and unlike typical integration tests, all of the output is stored in memory.
You're right, mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations. The tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was only < 350 lines of code.)
This is the part of a test that sets up the javax.lang.model objects. After the setup the model should be in the same state as in the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model implementing the javax.lang.model.* classes. For example declaredType. (All unsupported operations will throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType(){

        @Override public Element asElement() {
            return element;
        }

        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }

        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                element, Arrays.toString(argumentTypes));
        }

        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }

        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();
    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers provided are decent, I felt that there was still room for improvement. Based on the other answers for this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into more fine-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import javax.lang.model.element.Element;
import javax.lang.model.element.VariableElement;
import javax.lang.model.type.TypeMirror;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;

@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {

    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}

class Lint {

    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }
        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of the test cases can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance it is correct, but merely want to avoid disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems and I have no examples handy, but you should not need to mock other things like Element interfaces.
An option is to bundle all tests in one class. Half a second for compiling etc. is then a constant for a given set of tests; the real time for an individual test is negligible, I assume.
