Android annotation processing - generate different code for different build flavor - java

I'm building a library that requires some annotation processing to generate code. I've now run into an issue: the release build doesn't need as much code as the debug build does (since this is a library for modifying configuration variants, used primarily for testing). The following code illustrates the situation. Let's say I want to generate a class ConfigManager from some annotated classes and properties. In debug builds, I need this much:
public class ConfigManager {
    public Class getConfigClass() {
        return abc.class;
    }

    public void method1() {
        doSomething1();
    }

    public void method2() {
        doSomething2();
    }

    public void method3() {
        doSomething3();
    }
}
While in release builds, I only need this much:
public class ConfigManager {
    public Class getConfigClass() {
        return abc.class;
    }
}
I have a feeling it may be possible by writing a Gradle plugin that checks the build flavor at compile time and either invokes a different processor or somehow passes a parameter to a processor so it generates different code. However, this topic is pretty new to me, so I'm not sure how to achieve it, and a couple of hours of googling didn't help either. So I'm wondering if anyone could give me a direction or an example? Thanks

Pass an option (release=true/false) to your processor.
From the javac documentation (https://docs.oracle.com/javase/8/docs/technotes/tools/windows/javac.html):
-Akey[=value]
Specifies options to pass to annotation processors. These options are not interpreted by javac directly, but are made available for use by individual processors. The key value should be one or more identifiers separated by a dot (.).
In combination with Processor#getSupportedOptions (https://docs.oracle.com/javase/8/docs/api/javax/annotation/processing/Processor.html#getSupportedOptions):
Returns the options recognized by this processor. An implementation of the processing tool must provide a way to pass processor-specific options distinctly from options passed to the tool itself, see getOptions.
Implementation outline:
@Override
public Set<String> getSupportedOptions() {
    Set<String> set = new HashSet<>();
    set.add("release");
    return set;
}

// -Arelease=true
boolean isRelease(ProcessingEnvironment env) {
    return Boolean.parseBoolean(env.getOptions().get("release"));
}
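Equivalently, instead of overriding getSupportedOptions() you can declare the option with the standard @SupportedOptions annotation (the processor class name here is illustrative):

@SupportedOptions("release")
public class ConfigProcessor extends AbstractProcessor {
    // ...
}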
See "Pass options to JPAAnnotationProcessor from Gradle" for how to pass options in a Gradle build.
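For example, with the Android Gradle plugin the processor argument can be varied per build type, so only release builds see release=true. A Groovy DSL sketch; the exact blocks available depend on your plugin version:

// build.gradle (module level) - sketch
android {
    defaultConfig {
        javaCompileOptions {
            annotationProcessorOptions {
                arguments = ["release": "false"]
            }
        }
    }
    buildTypes {
        release {
            javaCompileOptions {
                annotationProcessorOptions {
                    arguments = ["release": "true"]
                }
            }
        }
    }
}

In a plain (non-Android) Java project the same effect comes from adding '-Arelease=true' to options.compilerArgs on the relevant JavaCompile task.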

Related

Accessing global variable across multiple objects in Java

I have a very basic question (I'm new to Java). For background, I am using a BDD-driven test automation framework, working with Cucumber and Java.
I want to set a global variable in my main class object depending on the parameter/value in one of my step definitions, and then access the same variable across the test in other step definitions (or objects).
Let's say my class is
public class FeatureStepDefinitions {
    @Given("I want to login to system as (.+)$")
    public void iWantToLoginToSystemAs(String userType) {
        // some logic
    }

    @When("I send a request for user type (.+)$")
    public void iSendRequestForUserType(String userType) {
        // some logic
    }

    @Then("I should be able to see the right response$")
    public void iShouldBeAbleToSeeTheRightResponse() {
        // note: userType is not in scope here - that is the problem
        if (userType.equalsIgnoreCase("xyz")) {
            // verify this logic
        } else if (userType.equalsIgnoreCase("abc")) {
            // verify that logic
        }
    }
}
I know I could take "userType" as a parameter in my THEN step and do the check there, but my question is whether, without refactoring the existing Then step, I can still verify different behaviours depending on the userType set in previous steps.
Any help/direction is appreciated.
The recommended way to share state between steps in cucumber-jvm is to use Dependency Injection.
From the Cucumber docs:
"If your programming language is Java, you will be writing glue code (step definitions and hooks) in plain old Java classes.
Cucumber will create a new instance of each of your glue code classes before each scenario.
If all of your glue code classes have an empty constructor, you don’t need anything else. However, most projects will benefit from a dependency injection (DI) module to organize your code better and to share state between step definitions.
The available dependency injection modules are:
PicoContainer (The recommended one if your application doesn’t use another DI module)
Spring
Guice
OpenEJB
Weld
Needle"
While you can declare a variable in your step definitions class to share state between the step definitions, this will only allow you to share between step definitions declared in the same file, and not between files.
As the number of step definitions grows, you'll want to group them in some meaningful way, and this approach will no longer suffice.
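To make that concrete, here is a minimal sketch of the PicoContainer route, assuming the cucumber-picocontainer dependency is on the test classpath (the TestContext class name is my own, illustrative choice):

// Plain holder for state shared between steps; created fresh for each scenario.
public class TestContext {
    private String userType;

    public String getUserType() { return userType; }
    public void setUserType(String userType) { this.userType = userType; }
}

public class FeatureStepDefinitions {
    private final TestContext context;

    // PicoContainer injects the same TestContext instance into every
    // step-definition class used by the current scenario.
    public FeatureStepDefinitions(TestContext context) {
        this.context = context;
    }

    @Given("I want to login to system as (.+)$")
    public void iWantToLoginToSystemAs(String userType) {
        context.setUserType(userType);
        // some logic
    }
}

Because the same TestContext instance is handed to every glue class whose constructor declares it, a value set in a Given step is visible in a Then step defined in another file.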
I did a bit of digging around and found it's quite simple:
public class FeatureStepDefinitions {
    public static String globalUserType = null;

    @Given("I want to login to system as (.+)$")
    public void iWantToLoginToSystemAs(String userType) {
        globalUserType = userType;
        // some logic
    }

    @When("I send a request for user type (.+)$")
    public void iSendRequestForUserType(String userType) {
        // some logic
    }

    @Then("I should be able to see the right response$")
    public void iShouldBeAbleToSeeTheRightResponse() {
        if (globalUserType.equalsIgnoreCase("xyz")) {
            // verify this logic
        } else if (globalUserType.equalsIgnoreCase("abc")) {
            // verify that logic
        }
    }
}

Configuring DropWizard Programmatically

I have essentially the same question as here but am hoping to get a less vague, more informative answer.
I'm looking for a way to configure DropWizard programmatically, or at the very least, to be able to tweak configs at runtime. Specifically I have a use case where I'd like to configure metrics in the YAML file to be published with a frequency of, say, 2 minutes. This would be the "normal" default. However, under certain circumstances, I may want to speed that up to, say, every 10 seconds, and then throttle it back to the normal/default.
How can I do this, and not just for the metrics.frequency property, but for any config that might be present inside the YAML config file?
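For concreteness, the sort of YAML I mean looks something like this (a sketch; the reporter block is illustrative and varies by setup):

metrics:
  frequency: 2 minutes   # the "normal" default I'd like to change at runtime
  reporters:
    - type: graphite
      host: localhost
      port: 2003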
Dropwizard reads the YAML config file and configures all the components only once, on startup. Neither the YAML file nor the Configuration object is ever used again, which means there is no direct way to reconfigure things at run time.
It also doesn't provide special interfaces/delegates where you can manipulate the components. However, you can usually get hold of the component objects (if not, you can always send a pull request) and configure them manually as you see fit. You may need to read the source code a bit, but it's usually easy to navigate.
In the case of metrics.frequency, you can see that the MetricsFactory class creates ScheduledReporterManager objects per metric type using the frequency setting, and it doesn't look like you can change them at run time. But you can probably work around it somehow, or even better, modify the code and send a pull request to the Dropwizard community.
Although this feature isn't supported out of the box by Dropwizard, you're able to accomplish it fairly easily with the tools they give you. Note that the solution below definitely works on config values you've provided, but it may not work for built-in configuration values.
Also note that this doesn't persist the updated config values to the config.yml. However, that would be easy enough to implement yourself simply by writing to the config file from the application. If anyone would like to write this implementation, feel free to open a PR on the example project I've linked below.
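As a sketch of what that persistence step could look like (my own assumption, not something the example project does; it relies on the jackson-dataformat-yaml module, and Dropwizard already ships Jackson):

import java.io.File;
import java.io.IOException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;

public class ConfigPersister {
    // Hypothetical helper: write the mutated in-memory configuration back to disk.
    // Serializing the built-in factories (server, logging, ...) may need extra Jackson tuning.
    public static void persist(ExampleConfiguration config, File configFile) throws IOException {
        ObjectMapper yamlMapper = new ObjectMapper(new YAMLFactory());
        yamlMapper.writeValue(configFile, config);
    }
}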
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And its corresponding configuration class:
ExampleConfiguration.java
import io.dropwizard.Configuration;

public class ExampleConfiguration extends Configuration {
    private String myConfigValue;

    public String getMyConfigValue() {
        return myConfigValue;
    }

    public void setMyConfigValue(String value) {
        myConfigValue = value;
    }
}
Then create a task which updates the config:
UpdateConfigTask.java
import java.io.PrintWriter;
import java.util.List;
import java.util.Map;
import io.dropwizard.servlets.tasks.Task;

public class UpdateConfigTask extends Task {
    private final ExampleConfiguration config;

    public UpdateConfigTask(ExampleConfiguration config) {
        super("updateconfig");
        this.config = config;
    }

    @Override
    public void execute(Map<String, List<String>> parameters, PrintWriter output) {
        config.setMyConfigValue("goodbye");
    }
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/config")
public class ConfigResource {
    private final ExampleConfiguration config;

    public ConfigResource(ExampleConfiguration config) {
        this.config = config;
    }

    @GET
    public Response handleGet() {
        return Response.ok().entity(config.getMyConfigValue()).build();
    }
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same reference to the constructors of ConfigResource.java and UpdateConfigTask.java. If you aren't familiar with the concept, see here:
Is Java "pass-by-reference" or "pass-by-value"?
The linked classes above are to a project I've created which demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified that this works with the built-in configuration. However, the Dropwizard Configuration class, which you need to extend for your own configuration, does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.
I solved this with bytecode manipulation via Javassist.
In my case, I wanted to change the "influx" reporter, and modifyInfluxDbReporterFactory should be run BEFORE Dropwizard starts:
private static void modifyInfluxDbReporterFactory() throws Exception {
    ClassPool cp = ClassPool.getDefault();
    // Do NOT use InfluxDbReporterFactory.class.getName() here: that would force
    // the class into the classloader before it can be modified.
    CtClass cc = cp.get("com.izettle.metrics.dw.InfluxDbReporterFactory");
    CtMethod m = cc.getDeclaredMethod("setTags");
    m.insertAfter(
        "if (tags.get(\"cloud\") != null) tags.put(\"cloud_host\", tags.get(\"cloud\") + \"_\" + host);"
        + "tags.put(\"app\", \"sam\");");
    cc.toClass();
}

Finding unused values in message resource file

I am working on a project that has been through multiple hands, with sometimes rushed development. Over time the message.properties file has become out of sync with the JSPs that use it. Now I don't know which properties are used and which aren't. Is there a tool (an Eclipse plugin, perhaps) that can root out dead messages?
The problem is that messages may be accessed by JSP or Java, and resource names may be constructed rather than literal strings.
Simple grepping may be able to identify "obvious" resource accesses. The other solution, a resource lookup mechanism that tracks what's used, is only semi-reliable as well, since code paths may determine which resources are used, and unless every path is traveled you may miss some.
A combination of the two will catch most everything (over time).
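For the grepping half, even a crude scanner is a useful starting point. A sketch in plain Java (Java 11+; the paths and file extensions are illustrative, and constructed keys will be missed, as noted above):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class UnusedKeyFinder {
    public static void main(String[] args) throws IOException {
        // Load all declared keys.
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(
                Paths.get("src/main/resources/messages.properties"))) {
            props.load(in);
        }

        // Collect candidate source files (Java and JSP).
        List<Path> sources;
        try (Stream<Path> walk = Files.walk(Paths.get("src"))) {
            sources = walk
                .filter(p -> p.toString().endsWith(".java") || p.toString().endsWith(".jsp"))
                .collect(Collectors.toList());
        }

        // Report keys that never appear literally in any source file.
        for (String key : props.stringPropertyNames()) {
            boolean used = false;
            for (Path src : sources) {
                if (Files.readString(src).contains(key)) {
                    used = true;
                    break;
                }
            }
            if (!used) {
                System.out.println("possibly unused: " + key);
            }
        }
    }
}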
Alternatively, you can hide the functionality of ResourceBundle behind another façade ResourceBundle, which should pipe all calls through to the original one but add logging and/or statistics collection on top.
An example follows:
import java.util.Collection;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.NoSuchElementException;
import java.util.ResourceBundle;

public class WrapResourceBundle {

    static class LoggingResourceBundle extends ResourceBundle {
        private Collection<String> usedKeys = new HashSet<String>();

        public LoggingResourceBundle(ResourceBundle parentResourceBundle) {
            setParent(parentResourceBundle);
        }

        @Override
        protected Object handleGetObject(String key) {
            Object value = parent.getObject(key);
            if (value != null) {
                usedKeys.add(key);
                return value;
            }
            return null;
        }

        @Override
        public Enumeration<String> getKeys() {
            return EMPTY_ENUMERATOR;
        }

        public Collection<String> getUsedKeys() {
            return usedKeys;
        }

        private static final EmptyEnumerator EMPTY_ENUMERATOR = new EmptyEnumerator();

        private static class EmptyEnumerator implements Enumeration<String> {
            EmptyEnumerator() {
            }

            public boolean hasMoreElements() {
                return false;
            }

            public String nextElement() {
                throw new NoSuchElementException("Empty Enumerator");
            }
        }
    }

    public static void main(String[] args) {
        LoggingResourceBundle bundle = new LoggingResourceBundle(ResourceBundle.getBundle("test"));
        bundle.getString("key1");
        System.out.println("Used keys: " + bundle.getUsedKeys());
    }
}
Considering that some of your keys are generated at run time, I don't think you'll ever find a tool to validate which keys are in use and which are not.
Given the problem you posed, I would probably write an AOP aspect that wraps the MessageSource.getMessage() implementation and logs all the requested codes that are being retrieved from the resource bundle. Given that MessageSource is an interface, you would need to know the implementation that you are using, but I suspect that you must know that already.
Given that you would be writing the aspect yourself, you can create a format that is easily correlated against your resource bundle, and once you are confident that it contains all the keys required, it becomes a trivial task to compare the two files and eliminate any superfluous lines.
If you really want to be thorough about this, and you already have Spring configured for annotation scanning, you could even package up your aspect as its own jar (or .class), drop it into a production WEB-INF/lib (WEB-INF/classes) folder, restart the webapp and let it run for a while. The great thing about annotations is that it can all be self-contained. Once you are sure that you have accumulated enough data, you just delete the jar (.class) and you're good to go.
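As a sketch of what such an aspect could look like with Spring AOP and AspectJ annotations (the class name, logger and log format are illustrative):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class MessageSourceAuditAspect {

    private static final Logger log =
            LoggerFactory.getLogger(MessageSourceAuditAspect.class);

    // Matches every getMessage(..) overload on any MessageSource bean.
    @Before("execution(* org.springframework.context.MessageSource.getMessage(..))")
    public void logRequestedCode(JoinPoint jp) {
        Object[] args = jp.getArgs();
        // The first argument is the code for the String-based overloads;
        // the MessageSourceResolvable overload is not handled in this sketch.
        if (args.length > 0 && args[0] instanceof String) {
            log.info("resource-code-used: {}", args[0]);
        }
    }
}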
I know that at least two of the major Java IDEs offer this functionality.
IntelliJ IDEA has an inspection (disabled by default) that you can use to do this: go to Settings -> Inspections -> Properties files and enable 'Unused property'.
The only problem I had was that it didn't pick up some usages of a property from a custom tag library I had written, which I was using in a few JSPs.
Eclipse also has something like this ( http://help.eclipse.org/helios/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2Ftasks%2Ftasks-202.htm ), but I haven't really explored how well it works.

Sensible unit test possible?

Could a sensible unit test be written for this code, which extracts a rar archive by delegating to a capable tool on the host system if one exists?
I can write a test case based on the fact that my machine runs Linux and has the unrar tool installed, but if another developer who runs Windows checked out the code, the test would fail, although there would be nothing wrong with the extractor code.
I need to find a way to write a meaningful test that isn't bound to the system or to the unrar tool being installed.
How would you tackle this?
public class Extractor {
    private EventBus eventBus;
    private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{new LinuxUnrarCommand()};
    private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{};
    private ExtractCommand[] macExtractCommands = new ExtractCommand[]{};

    @Inject
    public Extractor(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public boolean extract(DownloadCandidate downloadCandidate) {
        for (ExtractCommand command : getSystemSpecificExtractCommands()) {
            if (command.extract(downloadCandidate)) {
                eventBus.fireEvent(this, new ExtractCompletedEvent());
                return true;
            }
        }
        eventBus.fireEvent(this, new ExtractFailedEvent());
        return false;
    }

    private ExtractCommand[] getSystemSpecificExtractCommands() {
        String os = System.getProperty("os.name");
        if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return linuxExtractCommands;
        } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return windowsExtractCommands;
        } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
            return macExtractCommands;
        }
        return null;
    }
}
Could you not pass the class a Map<String, ExtractCommand[]> and add an overridable method, say getOsName(), that returns the string to match? Then you could look up the match string in the map to get the extract commands in getSystemSpecificExtractCommands(). This would allow you to inject a map containing a mock ExtractCommand and override getOsName() to return the key of your mock command, so you could test that when the extraction works, the eventBus is fired, etc. A test along these lines is sketched after the code.
private Map<String, ExtractCommand[]> commandMap;

@Inject
public Extractor(EventBus eventBus, Map<String, ExtractCommand[]> commandMap) {
    this.eventBus = eventBus;
    this.commandMap = commandMap;
}

private ExtractCommand[] getSystemSpecificExtractCommands() {
    String os = getOsName();
    return commandMap.get(os);
}

protected String getOsName() {
    return System.getProperty("os.name");
}
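The test this enables might look like the following (a sketch assuming JUnit 4 and Mockito on the test classpath; the names mirror the refactoring above):

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.*;

import java.util.Collections;
import java.util.Map;
import org.junit.Test;

public class ExtractorTest {

    @Test
    public void firesCompletedEventWhenCommandSucceeds() {
        EventBus eventBus = mock(EventBus.class);
        ExtractCommand command = mock(ExtractCommand.class);
        DownloadCandidate candidate = mock(DownloadCandidate.class);
        when(command.extract(candidate)).thenReturn(true);

        Map<String, ExtractCommand[]> commandMap =
                Collections.singletonMap("testOs", new ExtractCommand[]{command});

        // Override getOsName() so the lookup hits our mocked commands.
        Extractor extractor = new Extractor(eventBus, commandMap) {
            @Override
            protected String getOsName() {
                return "testOs";
            }
        };

        assertTrue(extractor.extract(candidate));
        verify(eventBus).fireEvent(same(extractor), isA(ExtractCompletedEvent.class));
    }
}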
I would look for a pure Java API for manipulating rar files; that way the code will not be system dependent.
A quick search on Google returned this:
http://www.example-code.com/java/rar_unrar.asp
Start with a mock framework. You'll need to refactor a bit, as you will need to ensure that some of those private and local scope properties/variables can be overridden if need be.
Then when you are testing extract(), you make sure you've mocked out the commands and ensure that the extract method is called on your mocked objects. You'll also want to ensure that your event got fired.
Now to make it more testable you can use constructor or property injection. Either way, you'll need to make the private ExtractCommand arrays overridable.
Sorry, I don't have time to recode it and post it, but that should just about get you started nicely.
Good luck.
EDIT: It does sound like you are more after a functional test anyway, if you want to test that the archive is actually extracted correctly.
Testing can be tricky, especially getting the divides right between the different types of tests, when they should be run, and what their responsibilities are. This is even more so with cross-platform code.
While it's possible to think of this as one code base you are testing, it's really multiple code bases: the generic Java code and the code for each target platform. So you will need multiple tests.
To begin with unit testing, you will not be exercising the external command. Rather, each platform-specific class is tested to see that it generates the correct command line, without actually executing it.
Your Java class that hides all the platform specifics (which command to use) has a unit test to verify that it instantiates the correct platform-specific class for a given platform. The platform can be a parameter to the core test, so multiple platforms can be "emulated". To take the unit test further, you could mock out the command implementation (e.g. having a RAR file and its uncompressed form as part of your test data, where the "command" is a simple copy of the uncompressed data).
Once these unit tests are in place and green, you can move on to functional tests, where the real platform-specific commands are executed. Of course, these functional tests have to be run on the actual platform. Each functional test corresponds to a platform-specific class that knows how to create the correct command line to unrar.
Your build is configured to exclude tests for classes that don't apply to the current platform, so that, for example, LinuxUnrarer is not tested on Windows. The platform-independent Java class is always tested, and it will instantiate the appropriate platform-specific test. This gives you an integration test to see that the system works end to end.
As to cross-platform UNRAR, there is a Java RAR scanner, but it doesn't decompress.

How to write automated unit tests for java annotation processor?

I'm experimenting with Java annotation processors. I'm able to write integration tests using JavaCompiler (in fact I'm using "hickory" at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second, even without any code in my annotation processor. This is way too long to use it TDD-style.
Mocking away the dependencies seems very hard (I would have to mock out the entire javax.lang.model.element package). Has anyone succeeded in writing unit tests for an annotation processor (Java 6)? If not, what would your approach be?
This is an old question, but it seems that the state of annotation processor testing hasn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
    .that(JavaFileObjects.forResource("HelloWorld.java"))
    .processedWith(new MyAnnotationProcessor())
    .compilesWithoutError()
    .and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
    .that(fileObject)
    .processedWith(new NoHelloWorld())
    .failsToCompile()
    .withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking, and unlike typical integration tests, all of the output is stored in memory.
You're right, mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations, and the tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was only about 350 lines of code.)
This is the part of a test that initiates the javax.lang.model objects. After the setup, the model should be in the same state as in the Java compiler implementation.
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement =
    Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model, which implements the javax.lang.model.* classes. For example, declaredType. (All unsupported operations throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {
        @Override public Element asElement() {
            return element;
        }

        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }

        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                element, Arrays.toString(argumentTypes));
        }

        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }

        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters, and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();

    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example has been taken from this blog post
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers provided are decent, I felt that there was still room for improvement. Based on the other answers to this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into finer-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;

@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {
    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}

class Lint {
    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }

        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of the test cases can be retrieved by a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors, that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java, though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test that checks how your processor actually runs inside javac, giving more assurance that it is correct, while merely avoiding disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems, and I have no examples handy, but you should not need to mock other things like Element interfaces.
An option is to bundle all tests in one class. Half a second for compiling etc. is then a constant for a given set of tests, and the marginal time per test is negligible, I assume.
