I have Java code that reads its version from the JAR manifest:
public class Version {

    public String getVersion() {
        // Use the Java Package API to return information specified in the manifest of this JAR.
        return getClass().getPackage().getImplementationVersion();
    }
}
How do I run a JUnit test for this code?

It fails in the development build (in Eclipse) since there is no JAR file yet.

It fails in the production build (in Gradle) for the same reason: the JAR does not exist yet when the tests run.
You always need to mock the dependencies for your unit tests. The boundary is to unit test your code, not the JAR itself. The Mockito framework is good, and there are other frameworks that do the job.
Chances are that this can't be properly mocked (and thus not unit tested). The point is that you are actually calling a method on this. But you can't test an object and mock it at the same time.

You see, if your production code looked like this:
public String getVersion() {
    return someObject.getClass().....
}
then you could create a mock object and insert it into your Version class. But even then, the method getClass() is final in java.lang.Object, and therefore you can't mock it anyway.

[Reasonable mocking frameworks like EasyMock or Mockito work by extending classes and overriding the methods you want to control. There are frameworks like PowerMock that do bytecode manipulation and allow for this kind of mocking, but you should never ever use such libraries, as they have really bad side effects (like breaking most coverage libraries).]
What might work:
class Version {

    private final Package packageForVersionCheck;

    public Version() {
        // getClass() cannot be called in an explicit constructor invocation,
        // so use the class literal instead.
        this(Version.class.getPackage());
    }

    Version(Package somePackage) {
        this.packageForVersionCheck = somePackage;
    }

    public String getVersion() {
        return this.packageForVersionCheck.getImplementationVersion();
    }
}
Now you can use dependency injection to provide a "mocked" package that returns that string. But well, that looks like a lot of code for almost no gain.
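For illustration, a minimal JUnit 4/Mockito sketch against that package-taking constructor (it assumes java.lang.Package is mockable on your JDK; it is not final, but it has no public constructor):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.mockito.Mockito;

public class VersionTest {

    @Test
    public void returnsImplementationVersionFromInjectedPackage() {
        // Mock the Package dependency instead of relying on a real JAR manifest.
        Package mockPackage = Mockito.mock(Package.class);
        Mockito.when(mockPackage.getImplementationVersion()).thenReturn("1.2.3");

        Version version = new Version(mockPackage);

        assertEquals("1.2.3", version.getVersion());
    }
}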
Long story short: sometimes, you simply can't write a reasonable unit test. Then do the next best thing: create some "functional" test that is automatically executed in a "customer like" setup; and make sure that you have an automated setup to run such tests, too.
Currently the JUnit 5 framework works with inversion of control, i.e. you annotate a test method with @Test and then JUnit scans your classpath (in the simplest case).
Now is there a way for me to be in charge of calling the test cases through JUnit APIs? Maybe by hooking my test implementations to some test registry provided by JUnit?
I'm pretty new to JUnit - how did older versions go about this?
The reason I'm asking is that normally to execute my test cases, I'd have to run something along the lines of
java -jar junit-platform-standalone.jar --class-path target --scan-class-path
on the command line. My situation requires me to run the test cases through by executing one of my own classes, like that e.g.
java /com/example/MyTestCassesLauncher
EDIT: to clarify, I need one of my own classes to be hosting/launching my test cases, something like this:
// Maybe this needs to extend one of JUnit's launchers?
public class MyTestClassesLauncher {

    public static void main(String[] args) {
        JUnitLauncher.launchTests(new MyTestClass());
    }
}
where JUnitLauncher.launchTests is some kind of API provided by the platform. I'm not looking for a method with that exact same signature but a mechanism that would allow me to ultimately call my own MyTestClassesLauncher class to run the tests.
Thanks in advance.
Not sure what you are actually trying to achieve, but in JUnit 5, to change the behaviour of your tests you can use the Extension mechanism, similar to JUnit 4's @RunWith but more powerful.

Such a custom extension can provide additional logic, as in this logging example:
// Imports assume Log4j 2 and JUnit 5 (Jupiter).
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestInstancePostProcessor;

public class LoggingExtension implements TestInstancePostProcessor {

    @Override
    public void postProcessTestInstance(Object testInstance, ExtensionContext context) throws Exception {
        // Inject a logger into the test instance via its setLogger(Logger) method.
        Logger logger = LogManager.getLogger(testInstance.getClass());
        testInstance.getClass()
                .getMethod("setLogger", Logger.class)
                .invoke(testInstance, logger);
    }
}
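A hypothetical test class wired to this extension could look like this; the test instance must expose a public setLogger(Logger) method, or the reflective lookup above throws NoSuchMethodException:

@ExtendWith(LoggingExtension.class)
class MyServiceTest {

    private Logger logger;

    // Called reflectively by LoggingExtension after the test instance is created.
    public void setLogger(Logger logger) {
        this.logger = logger;
    }

    @Test
    void doesSomething() {
        logger.info("running with an injected logger");
    }
}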
The way JUnit controls its flow is JUnit's problem; you should not modify the framework, but extend it.
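Regarding the launching part of the question: the JUnit Platform also ships a programmatic Launcher API in the junit-platform-launcher artifact, which is close to the hypothetical JUnitLauncher.launchTests above. A minimal sketch (class names taken from the question):

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;

import java.io.PrintWriter;

import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;

public class MyTestClassesLauncher {

    public static void main(String[] args) {
        // Select the test classes to run instead of scanning the class path.
        LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                .selectors(selectClass(MyTestClass.class))
                .build();

        Launcher launcher = LauncherFactory.create();

        // Collects a summary of the run (tests found, failed, skipped, ...).
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        launcher.registerTestExecutionListeners(listener);

        launcher.execute(request);
        listener.getSummary().printTo(new PrintWriter(System.out));
    }
}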
I have this method that I am using in a NetBeans plugin:
public static SourceCodeFile getCurrentlyOpenedFile() {
    MainProjectManager mainProjectManager = new MainProjectManager();
    Project openedProject = mainProjectManager.getMainProject();

    /* Get Java file currently displaying in the IDE if there is an opened project */
    if (openedProject != null) {
        TopComponent activeTC = TopComponent.getRegistry().getActivated();
        DataObject dataLookup = activeTC.getLookup().lookup(DataObject.class);
        File file = FileUtil.toFile(dataLookup.getPrimaryFile()); // Currently opened file

        // Check if the opened file is a Java file
        if (FilenameUtils.getExtension(file.getAbsoluteFile().getAbsolutePath()).equalsIgnoreCase("java")) {
            return new SourceCodeFile(file);
        } else {
            return null;
        }
    } else {
        return null;
    }
}
Basically, using NetBeans API, it detects the file currently opened by the user in the IDE. Then, it loads it and creates a SourceCodeFile object out of it.
Now I want to unit test this method using JUnit. The problem is that I don't know how to test it.
Since it doesn't receive any argument as a parameter, I can't test how it behaves given wrong arguments. I also thought about trying to manipulate openedProject in order to test the method's behaviour given different values for that object, but as far as I know, I can't manipulate a variable in JUnit that way. I also cannot check what the method returns, because the unit test will always return null, since it doesn't detect any opened file in NetBeans.
So, my question is: how can I approach the unit testing of this method?
Well, your method does take parameters, "between the lines":
MainProjectManager mainProjectManager = new MainProjectManager();
Project openedProject = mainProjectManager.getMainProject();
basically fetches the object to work on.
So the first step would be to change that method signature, to:
public static SourceCodeFile getCurrentlyOpenedFile(Project project) {
...
Of course, that object isn't used, except for that null check. So the next level would be to have a distinct method like
SourceCodeFile lookup(DataObject dataLookup) {
In other words: your real problem is that you wrote hard-to-test code. The "default" answer is: you have to change your production code to make it easier to test.
For example by ripping it apart, and putting all the different aspects into smaller helper methods.
You see, that last method lookup(), that one takes a parameter, and now it becomes (somehow) possible to think up test cases for this. Probably you will have to use a mocking framework such as Mockito to pass mocked instances of that DataObject class within your test code.
Long story short: there are no detours here. You can't test your code (in reasonable ways) as it is currently structured. Re-structure your production code, then all your ideas about "when I pass X, then Y should happen" can work out.
Disclaimer: yes, theoretically, you could test the above code by heavily relying on frameworks like PowerMock(ito) or JMockit. These frameworks allow you to control (mock) calls to static methods, or to new(). So they would give you full control over everything in your method. But that would basically force your tests to know everything that is going on in the method under test. Which is a really bad thing.
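To make that concrete, here is a small sketch (names are hypothetical) of one aspect ripped out into a pure helper, which then tests without any NetBeans machinery:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.io.File;
import org.apache.commons.io.FilenameUtils;
import org.junit.Test;

public class SourceFileFilterTest {

    // Pure helper extracted from getCurrentlyOpenedFile(); no IDE state involved.
    static boolean isJavaFile(File file) {
        return "java".equalsIgnoreCase(FilenameUtils.getExtension(file.getName()));
    }

    @Test
    public void recognizesJavaFilesByExtension() {
        assertTrue(isJavaFile(new File("Foo.java")));
        assertFalse(isJavaFile(new File("notes.txt")));
    }
}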
Java 8 here, but this is a general unit testing question that is (likely) language-agnostic.
The syntax of writing a JUnit test is easy, but deciding on what tests to write and how to test main/production code is what I find to be the biggest challenge. In reading up on unit testing best practices, I keep hearing the same thing over and over again:
Test the contract
I believe the idea there is that unit tests should not be brittle and should not necessarily break if the method's implementation changes. That the method should define a contract of inputs -> results/outcomes and that the tests should aim to verify that contract is being honored. I think.
Let's say I have the following method:
public void doFizzOnBuzz(Buzz buzz, boolean isFoobaz) {
    // wsClient is a REST client for a microservice
    Widget widget = wsClient.getWidgetByBuzzId(buzz.getId());

    if (widget.needsFile()) {
        File file = readFileFromFileSystem(buzz.getFile());

        if (isFoobaz) {
            // Do something with the file (doesn't matter what)
        }
    }
}
private File readFileFromFileSystem(String filename) {
    // Private helper method; implementation doesn't matter here EXCEPT...
    // Any checked exceptions that Java might throw (as a result of working
    // with the file system) are wrapped in a RuntimeException (hence they are
    // now unchecked).
    // Reads a file from the file system based on the filename/URI you specify.
    return null; // actual reading logic elided
}
So here, we have a method we wish to write unit tests for (doFizzOnBuzz). This method:
Has two parameters, buzz and isFoobaz
Uses a class property wsClient to make a network/REST call
Calls a private helper method that not only works with the external file system, but that "swallows" checked exceptions; hence readFileFromFileSystem could throw RuntimeExceptions
What kinds of unit tests can we write for this that "test the contract"?
Tests validating the inputs (buzz and isFoobaz) are obvious ones; the contract should define what the valid values/states for each of those are, and what exceptions/results should occur if they are invalid.
But beyond that, I'm not really sure what the "contract" here would even be, which makes writing tests for it very difficult. So I guess this question really should be something like "How do I determine what the contract is for a unit test, and then how do you write tests that target the contract and not the implementation?"
But that title would be too long for a SO question.
Your code with the methods doFizzOnBuzz(Buzz buzz, boolean isFoobaz) and private File readFileFromFileSystem(String filename) is not easily testable, because the first method will try to read a file, and that's not something you want to do in a test.
Here, doFizzOnBuzz needs something to provide a File for it to work with. This FileProvider (as I'll call it) could be an interface, something like:
public interface FileProvider {
    File getFile(String filename);
}
When running in production, an implementation to actually read the file from disk is used, but when unit testing doFizzOnBuzz a mock implementation of FileProvider could be used instead. This returns a mock File.
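The production implementation can stay trivial; a sketch (the class name is hypothetical):

public class DiskFileProvider implements FileProvider {

    // Hands back the real file on disk; no logic worth unit testing here.
    @Override
    public File getFile(String filename) {
        return new File(filename);
    }
}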
The key point to remember is that when testing doFizzOnBuzz, we are not testing whatever provides the file, or anything else. We assume that to be working correctly. These other bits of code have their own unit tests.
A mocking framework such as Mockito can be used to create mock implementations of FileProvider and File, and to inject the mock FileProvider into the class under test, probably using a setter:
public void setFileProvider(FileProvider f) {
    this.fileProvider = f;
}
Also, I don't know what a wsClient is, but I do know it has a getWidgetByBuzzId() method. This class too could be an interface, and for testing purposes the interface would be mocked and would return a mock Widget, similar to the FileProvider above.

With Mockito, not only can you set up mock implementations of interfaces, you can also define what values are returned when methods are called on that interface, e.g.:
// Set up a mock FileProvider
FileProvider fp = Mockito.mock(FileProvider.class);

// Set up a mock File for the FileProvider to return
File mockFile = Mockito.mock(File.class);
Mockito.when(mockFile.getName()).thenReturn("mockfilename");
// ...other methods...

// Make the mock FileProvider return the mock File
Mockito.when(fp.getFile("filename")).thenReturn(mockFile);

ClassUnderTest test = new ClassUnderTest();
test.setFileProvider(fp); // inject the mock file provider

// Also set up mocks for Buzz, Widget, and anything else
Buzz mockBuzz = Mockito.mock(Buzz.class);

// Run the method under test
test.doFizzOnBuzz(mockBuzz, true);

// Verify that FileProvider.getFile() was actually called
Mockito.verify(fp).getFile("filename");
The above test fails if getFile() was not called with the parameter "filename".
Conclusion
If you cannot directly observe the results of a method, e.g. it is void, you can use Mocking to verify its interaction with other classes and methods.
The problem is that your method's contract does not say what effect you can observe from the outside. It is basically a BiConsumer, so apart from ensuring whether or not there is an exception, there is not much unit testing possible.

The tests you could do are to ensure that the (mocked) REST service is called, or that the File (part of the Buzz parameter, which might be pointing to a temporary file) will be impacted by the method under some conditions.
If you want to unit test the output of the method, you may need to refactor to separate the determination of what should be done (file needs update) from actually doing it.
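A sketch of that separation (the method name is hypothetical, and it assumes the file read itself can be deferred until the decision is made): the decision becomes a pure function you can unit test directly, while the side-effecting code stays thin.

// Pure decision: unit-testable without file system or REST mocks.
static boolean shouldProcessFile(Widget widget, boolean isFoobaz) {
    return widget.needsFile() && isFoobaz;
}

@Test
public void processesFileOnlyWhenWidgetNeedsItAndFoobazIsSet() {
    Widget widget = Mockito.mock(Widget.class);
    Mockito.when(widget.needsFile()).thenReturn(true);

    assertTrue(shouldProcessFile(widget, true));
    assertFalse(shouldProcessFile(widget, false));
}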
Could a sensible unit test be written for this code which extracts a rar archive by delegating it to a capable tool on the host system if one exists?
I can write a test case based on the fact that my machine runs Linux and the unrar tool is installed, but if another developer who runs Windows checked out the code, the test would fail, although there would be nothing wrong with the extractor code itself.

I need to find a way to write a meaningful test which is not bound to the system it runs on or to the unrar tool being installed.
How would you tackle this?
public class Extractor {

    private EventBus eventBus;
    private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{ new LinuxUnrarCommand() };
    private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{};
    private ExtractCommand[] macExtractCommands = new ExtractCommand[]{};

    @Inject
    public Extractor(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public boolean extract(DownloadCandidate downloadCandidate) {
        for (ExtractCommand command : getSystemSpecificExtractCommands()) {
            if (command.extract(downloadCandidate)) {
                eventBus.fireEvent(this, new ExtractCompletedEvent());
                return true;
            }
        }
        eventBus.fireEvent(this, new ExtractFailedEvent());
        return false;
    }

    private ExtractCommand[] getSystemSpecificExtractCommands() {
        String os = System.getProperty("os.name");
        if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return linuxExtractCommands;
        } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return windowsExtractCommands;
        } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return macExtractCommands;
        }
        return null;
    }
}
Could you not pass the class a Map<String, ExtractCommand[]> instance and then add an overridable method, say getOsName(), for getting the string to match? Then you could look up the match string in the map to get the extract commands in the getSystemSpecificExtractCommands() method. This would allow you to inject a map containing a mock ExtractCommand and override getOsName() to return the key of your mock command, so you could test that when the extraction worked, the eventBus event is fired, etc.
private Map<String, ExtractCommand[]> commandMap;

@Inject
public Extractor(EventBus eventBus, Map<String, ExtractCommand[]> commandMap) {
    this.eventBus = eventBus;
    this.commandMap = commandMap;
}

private ExtractCommand[] getSystemSpecificExtractCommands() {
    String os = getOsName();
    return commandMap.get(os);
}

protected String getOsName() {
    return System.getProperty("os.name");
}
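A hedged sketch of the resulting test (it assumes Mockito, and that EventBus, ExtractCommand and DownloadCandidate are mockable, i.e. non-final):

@Test
public void firesCompletedEventWhenExtractionSucceeds() {
    EventBus mockBus = Mockito.mock(EventBus.class);
    DownloadCandidate candidate = Mockito.mock(DownloadCandidate.class);

    // A command that "succeeds", registered under a fake OS name.
    ExtractCommand mockCommand = Mockito.mock(ExtractCommand.class);
    Mockito.when(mockCommand.extract(candidate)).thenReturn(true);

    Map<String, ExtractCommand[]> commandMap = new HashMap<>();
    commandMap.put("test-os", new ExtractCommand[] { mockCommand });

    // Override the hook so the lookup hits the fake OS entry.
    Extractor extractor = new Extractor(mockBus, commandMap) {
        @Override
        protected String getOsName() {
            return "test-os";
        }
    };

    assertTrue(extractor.extract(candidate));
    Mockito.verify(mockBus).fireEvent(
            Mockito.eq(extractor), Mockito.any(ExtractCompletedEvent.class));
}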
I would look for some pure Java APIs for manipulating RAR files. This way the code will not be system dependent.

A quick search on Google returned this:
http://www.example-code.com/java/rar_unrar.asp
Start with a mocking framework. You'll need to refactor a bit, as you will need to ensure that some of those private and locally scoped properties/variables can be overridden if need be.
Then when you are testing Extract, you make sure you've mocked out the commands, and ensure that the Extract method is called on your mocked objects. You'll also want to ensure that your event got fired too.
Now, to make it more testable, you can use constructor or property injection. Either way, you'll need to make the private ExtractCommand arrays overridable.

Sorry, I don't have time to recode it and post it, but that should just about get you started nicely.
Good luck.
EDIT: It does sound like you are more after a functional test anyway, if you want to test that the archive is actually extracted correctly.
Testing can be tricky, especially getting the divides right between the different types of tests and when they should be run and what their responsibilities are. This is even more so with cross-platform code.
While it's possible to think of this as one code base you are testing, it's really multiple code bases: the generic Java code and the code for each target platform, so you will need multiple tests.

To begin with unit testing, you will not be exercising the external command. Rather, each platform-specific class is tested to see that it generates the correct command line, without actually executing it.

Your Java class that hides all the platform specifics (which command to use) has a unit test to verify that it instantiates the correct platform-specific class for a given platform. The platform can be a parameter to the core test, so multiple platforms can be "emulated". To take the unit test further, you could mock out the command implementation (e.g. having a RAR file and its uncompressed form as part of your test data, and the command is a simple copy of the uncompressed data).

Once these unit tests are in place and green, you can then move on to functional tests, where the real platform-specific commands are executed. Of course, these functional tests have to be run on the actual platform. Each functional test corresponds to a platform-specific class that knows how to create the correct command line to unrar.

Your build is configured to exclude tests for classes that don't apply to the current platform, so, for example, LinuxUnrarer is not tested on Windows. The platform-independent Java class is always tested, and it will instantiate the appropriate platform-specific test. This gives you an integration test to see that the system works end to end.
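To make the "platform as a parameter" idea concrete, here is a hedged sketch assuming the selection logic is exposed through a package-visible commandsFor(String osName) hook (a hypothetical refactoring of getSystemSpecificExtractCommands) and JUnit parameterized tests:

// Runs on any developer machine: no real OS detection, no real unrar binary.
@ParameterizedTest
@ValueSource(strings = { "Linux", "Windows 10", "Mac OS X" })
void selectsACommandSetForEveryKnownPlatform(String osName) {
    Extractor extractor = new Extractor(Mockito.mock(EventBus.class));
    assertNotNull(extractor.commandsFor(osName)); // hypothetical accessor
}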
As to cross-platform UNRAR, there is a Java RAR scanner, but it doesn't decompress.
I'm experimenting with Java annotation processors. I'm able to write integration tests using the "JavaCompiler" (in fact I'm using "hickory" at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second even without any code in my annotation processor. This is way too long to use it in TDD style.

Mocking away the dependencies seems very hard to me (I would have to mock out the entire "javax.lang.model.element" package). Has someone succeeded in writing unit tests for an annotation processor (Java 6)? If not, what would be your approach?
This is an old question, but it seems that the state of annotation processor testing hasn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
    .that(JavaFileObjects.forResource("HelloWorld.java"))
    .processedWith(new MyAnnotationProcessor())
    .compilesWithoutError()
    .and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
    .that(fileObject)
    .processedWith(new NoHelloWorld())
    .failsToCompile()
    .withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking, and unlike typical integration tests, all of the output is stored in memory.
You're right: mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations. The tests become unmaintainable.

A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was less than 350 lines of code.)
This is the part of a test that initializes the javax.lang.model objects. After the setup, the model should be in the same state as in the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model, which implements the javax.lang.model.* classes. For example, declaredType (all unsupported operations will throw exceptions):
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {

        @Override public Element asElement() {
            return element;
        }

        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }

        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                    element, Arrays.toString(argumentTypes));
        }

        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }

        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters, and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();

    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post.
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers provided are decent, I felt that there was still room for improvement. Based on the other answers for this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into more fine-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;

@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {

    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}
class Lint {

    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }

        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of a test case can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors, that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance it is correct, but merely want to avoid disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems and I have no examples handy, but you should not need to mock other things like Element interfaces.
An option is to bundle all tests in one class. The half second for compiling etc. is then a constant for a given set of tests, and the real time per test is negligible, I assume.