Is there a way to do compile-time annotation processing in Java?
Consider this example:
#Name("appName")
private Field<String> appName;
public void setAppName(String name) {
appName.setValue(name);
}
public String getAppName(String name) {
return appName.getValue();
}
public void someFunction() {
String whatFieldName = appName.getName();
}
Here the annotation @Name would be processed at compile time to set the name of the Field, without the usual runtime annotation processing. As a result, when appName.getName() (on the Field) is accessed, it returns the value typed in the annotation.
Yes, there is, but, no, it cannot change existing files. You can 'plug in' to the compiler and be informed of any annotations; as part of this you can see signatures (field declarations, method signatures, types, etc.) but no contents (so not the expression used to initialize a field, and not the contents of the {} of a method declaration), and you can make NEW files, even Java files, but you can't edit existing ones.
Project Lombok does edit them, but that is quite the framework to make that possible.
There are some crazy tricks you can use. Project Lombok uses one trick (reflect its way into compiler internals, fix everything from there, install agents and plugins in IDEs). Another trick is to use a Java source file as a template, of sorts. You give your class a funky name (so if you want, say, public class AppDescriptor, you'd actually make the Java file AppDescriptorTemplate.java and put public class AppDescriptorTemplate inside). This file has the annotation precisely as you pasted. Your annotation processor can then, during compilation, generate AppDescriptor.java, writing the implementations of all methods as simple pass-throughs (a field of type AppDescriptorTemplate is generated, all methods in AppDescriptorTemplate are copied over, and the implementations are all one-liners that just invoke that method on the template class). The template class can be package private. In this specific scenario it sounds like you could generate virtually the whole thing based off of pretty much only "appName", though.
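A minimal sketch of that template trick, using the names from this answer (the exact shape of the generated file is up to your processor, so treat this as an assumption rather than prescribed output):

// AppDescriptorTemplate.java - the hand-written, package private template
@Name("appName")
class AppDescriptorTemplate {
    private Field<String> appName;

    void setAppName(String name) {
        appName.setValue(name);
    }
}

// AppDescriptor.java - generated by the annotation processor at compile time
public class AppDescriptor {
    private final AppDescriptorTemplate template = new AppDescriptorTemplate();

    // every generated method is a one-liner pass-through to the template
    public void setAppName(String name) {
        template.setAppName(name);
    }
}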
Lombok plugs straight into the build and is therefore virtually entirely transparent, in the sense that you simply type in your IDE and the methods it generates just appear as you type, whereas 'normal' annotation processors that e.g. use the XTemplate trick do not work that way and require the build system to kick in every time. That can be a bit of a productivity drain.
Related
I understand that annotations serve a purpose to modify code without actually BEING code, such as:
@Author(
    name = "Benjamin Franklin",
    date = "3/27/2003"
)
But I don't understand how using the annotation is any better/clearer/more concise than just saying name = "Benjamin Franklin"? How does the addition of annotations strengthen the code?
EDIT: Sorry for another question, but I know that @Override can help prevent/track spelling mistakes when calling methods or classes, but how does it do that? Does it help the actual program at all?
Annotations are just metadata. On their own they serve little to no purpose. There must be an annotation processor, either at compile time or at run time, that uses them for something.
With an annotation like
@Author(
    name = "Benjamin Franklin",
    date = "3/27/2003"
)
for example, some annotation processor might read it with reflection at run time and create a log file recording that this author wrote whatever it annotates on that date.
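For instance, a minimal sketch of such a run-time read, assuming @Author is declared with @Retention(RetentionPolicy.RUNTIME):

// returns null if SomeClass is not annotated with @Author
Author author = SomeClass.class.getAnnotation(Author.class);
if (author != null) {
    System.out.println(author.name() + " wrote this on " + author.date());
}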
Annotations are metadata.
The @Override annotation is used to make sure that you are overriding a method of a superclass and not just making a method with the same name. Common mistakes here consist of:
spelling the method's name wrong, e.g. equal(Object o) instead of equals(Object o)
declaring a different set of parameters, e.g. class MyString { public boolean equals(MyString str) { /* ... */ } }
Here equals(MyString str) is not overriding the method equals(Object o) and therefore will not be used by standard Java comparison logic (which is relied upon by standard functions such as List.contains()), which is an error-prone situation.
This annotation helps the compiler ensure that you have coded everything correctly, and in this way it helps the program.
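A minimal sketch of the error this catches (MyValue is a made-up class name):

class MyValue {
    @Override // compile-time error: equals(MyValue) does not override equals(Object)
    public boolean equals(MyValue other) {
        return false;
    }
}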
The @Deprecated annotation doesn't stop the program from compiling, but it makes developers think about using code that can and/or will be removed in a future release, so they would think about moving on to another (updated) set of functions. And if you compile your program with -Xlint:deprecation, the compiler emits warnings (which become errors if you also pass -Werror) unless you remove all usages of deprecated code or explicitly mark them with the annotation @SuppressWarnings("deprecation").
@SuppressWarnings is used to suppress warnings (yes, I know it's Captain Obvious style :)). There is deprecation suppression with @SuppressWarnings("deprecation"), unsafe type casting with @SuppressWarnings("unchecked"), and some others. This is helpful when your project is compiled with the -Xlint flag and you cannot (or don't want to) change that.
There are also annotation processors that you integrate into your build process to ensure that the program code meets some sort of criteria. For example, with the IntelliJ IDEA annotation processor you can use the @Nullable and @NotNull annotations. They show other programmers, when they use your code, whether they may pass null as a certain parameter to a method or not. If they pass null anyway, the violation is flagged at compile time, or fails fast at run time before a single line of the method's code executes.
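As a small illustration (assuming the org.jetbrains.annotations variants of these annotations):

import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;

public class UserService {

    // the IDE flags callers that may pass null here
    public void register(@NotNull String name) {
        /* ... */
    }

    // callers are warned if they dereference the result without a null check
    public @Nullable String findEmail(String name) {
        return null;
    }
}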
So annotations are quite helpful if you use them to their full potential.
Annotations are most likely used by other programs. Examples include:
@Override
The compiler ensures that the signature actually matches an inherited method.
@Deprecated
The IDE marks occurrences, and the compiler emits a warning.
@FXML
JavaFX uses these annotations to initialize variables in a controller class when an .fxml file is inflated (see http://docs.oracle.com/javafx/2/get_started/fxml_tutorial.htm). They are also used by JavaFX Scene Builder.
Annotations work as a way of marking up code. Several frameworks use them, and some others make great use of them by letting you produce your own.
Besides, it is important to understand that annotations are the equivalent of metadata, but they are much more than that, since they work as a tag language for the code.
Java @Annotation
An @Annotation (since Java 5) adds metadata which is used for instructions at compile, deployment and run time. Its lifetime is defined by its RetentionPolicy:
RetentionPolicy.SOURCE: visible only at compile time (@Override, @SuppressWarnings, @StringDef). For example, it can be used by apt to generate some code.
RetentionPolicy.CLASS: visible at compile and deployment time (in the .class file). For example, it can be used by ASM or by Java AOP frameworks like AspectJ.
RetentionPolicy.RUNTIME: visible at deployment and run time. For example, it can be read via Java reflection using getAnnotations(). Dagger 2 uses the @Scope annotation.
Create a custom Annotation
@Retention(<retention_policy>) // optional
@Target(<element_type>) // optional; restricts which Java elements it can annotate (field, method, ...)
@Inherited // optional; will be visible to subclasses
@Documented // optional; will be visible in the JavaDoc
@interface MyAnnotation {
    // attributes:
    String someName();
}
Usage:
@MyAnnotation(someName = "Alex")
public class SomeClass {
}
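Reading it back at run time could then look like this (note this only works if MyAnnotation is declared with RetentionPolicy.RUNTIME):

MyAnnotation annotation = SomeClass.class.getAnnotation(MyAnnotation.class);
if (annotation != null) {
    System.out.println(annotation.someName()); // prints "Alex"
}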
I'd like to implement an annotation processor that will generate a new class based on an existing "prototype" class.
import java.util.List;

@MyAnnotation
class MySuperClassPrototype {
    static MySuperClassPrototype createInstance() {
        return new MySuperClassPrototype();
    }
}
As a result of compiling the code above, the following new source file (compilation unit) will be generated:
import java.util.List;

class MySuperClass {
    static MySuperClass createInstance() {
        return new MySuperClass();
    }

    public void specialAddedMethod() {
        /*...*/
    }
}
I'd like to copy all top-level import statements and the static and non-static members of the prototype class. I've moved pretty far with the Compiler Tree API (com.sun.source.tree). I can print out the Tree data type while substituting the new class name for the old. But there are problems that seem pretty hard.
If I get Tree.Kind.IDENTIFIER in the tree, how can I find what actual class it references? I need to replace all occurrences of the MySuperClassPrototype identifier with MySuperClass, and then print out the whole tree.
Is it feasible?
Similarly, I need to filter out the @MyAnnotation annotation, and again it is represented as Tree.Kind.IDENTIFIER or Tree.Kind.MEMBER_SELECT.
How can I find out the actual annotation class that is referenced by this identifier?
And another problem is printing out the tree. If I use the toString method I get a decent result, but constructors are printed as methods named "<init>" instead of methods with the same name as the class, so I need to manually print every kind of Tree node.
You can see the code I've come up with here.
Yes, it is possible and I know at least 2 ways.
First, "traditional" way is to write ant task/maven plugin/just command line java utility that scans given file path and calls for each class something like Class.forName(className).getAnnotations(MyAnnotation.class). If this is not null discover class using reflection and do what you need.
The other way is a little bit more difficult but more powerful.
You can implement your own Processor (one that implements javax.annotation.processing.Processor or, even better, extends javax.annotation.processing.AbstractProcessor).
Your processor just has to be placed on the compiler classpath and it will run automatically when the compiler runs. You can even configure your IDE (e.g. Eclipse) to run your processor. It is a kind of extension to the Java compiler. So, every time Eclipse builds your project it runs the processor and creates all the new classes according to the new annotations you have added.
Please take a look at this project as a reference.
8 years and not yet answered. Because of that, I will try to answer it to your satisfaction.
I will furthermore concentrate on the static part of the question.
TL;DR:
You will not find copy and paste code in this answer.
Is it feasible?
Yes, absolutely.
How can I find out actual annotation class that is referenced by this identifier?
You will have to use the RoundEnvironment within an Annotation Processor to get the TypeElement.
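As a sketch of that lookup, and of resolving what an identifier refers to when the processor runs inside javac, you can pair the processing API with the com.sun.source Compiler Tree API. This is a sketch under those assumptions (MyAnnotation carried over from the question), not a complete processor:

// Inside process(): resolve what each IDENTIFIER in the prototype refers to
final Trees trees = Trees.instance(processingEnv);
for (Element element : roundEnv.getElementsAnnotatedWith(MyAnnotation.class)) {
    TreePath classPath = trees.getPath(element);
    new TreePathScanner<Void, Void>() {
        @Override
        public Void visitIdentifier(IdentifierTree node, Void unused) {
            // the declaration (class, field, ...) this identifier resolves to
            Element target = trees.getElement(getCurrentPath());
            // compare target against the prototype's TypeElement here
            return super.visitIdentifier(node, unused);
        }
    }.scan(classPath, null);
}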
Static Metaprogramming
Static metaprogramming (which you asked for) is metaprogramming done at compile time. By contrast, dynamic metaprogramming is metaprogramming done at run time. And metaprogramming itself is the design of programs that handle other programs as data.
Pfeh, a lot to take in. If you are interested in this topic, a more or less good source for it is Wikipedia.
Your target would be to generate a class at compile time. At run time, this would be done with something like cglib. But since you chose static (and for all the right reasons), I will not explain this.
The concept you are looking for is the annotation processor. The link points to Baeldung, where they do exactly what you are looking for, only with the builder pattern in mind. You will love to hear that this scenario is highly encouraged and easy to do with the annotation processor API. It even allows you to generate code which, again, is passed to the same or another annotation processor, without you doing anything.
Before jumping right in, try googling "Java Annotation Processing". There are a lot of good sources out there which will help you; too many to list here. Just note that coding in an annotation processor is different from coding normally. Not a huge difference, but the classes you are working on are not yet created. So keep this in mind and don't get discouraged!
Using the Annotation Processor
Your basic annotation processor would look something like this:
@SupportedAnnotationTypes("package.of.MyAnnotation")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
@AutoService(Processor.class)
public class BuilderProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
            RoundEnvironment roundEnv) {
        // First let's find all annotated elements
        Set<? extends Element> annotatedElements =
                roundEnv.getElementsAnnotatedWith(MyAnnotation.class);
        // Handle all the annotated classes
        return false;
    }
}
The AutoService annotation is used to dynamically register your annotation processor. It comes from an external source (Google's auto-service library), just so you don't wonder why this code won't compile.
In the handle all annotated classes part, you have the annotated elements (which are the annotated classes). You now have to verify that they are classes and not interfaces or other annotations, because @Target(ElementType.TYPE) matches any type, including interfaces and annotations. Furthermore, you will want to verify that everything you require is present, or print an error to the compiler using the Messager.
If you print an error here (for example), compilation stops and the error is shown in most modern IDEs. The Messager can be reached by calling processingEnv.getMessager().
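For example (element is the annotated element being checked; the message text is arbitrary):

// attach an error to the offending element; this fails the build
processingEnv.getMessager().printMessage(
        Diagnostic.Kind.ERROR,
        "@MyAnnotation can only be applied to classes",
        element);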
Afterwards, you can generate a new class and hand it to the compiler as a .java file. This can be done using the Filer.
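A minimal sketch of that generation step (class and package names are placeholders):

try {
    // create a new source file; javac compiles it in a later round
    JavaFileObject file = processingEnv.getFiler()
            .createSourceFile("package.of.GeneratedClass");
    try (Writer writer = file.openWriter()) {
        writer.write("package package.of;\n\npublic class GeneratedClass {\n}\n");
    }
} catch (IOException e) {
    processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR, e.getMessage());
}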
An answer on StackOverflow really does no justice to this topic. I highly recommend looking at the Baeldung example and trying to uncover things from there. This API is as old as Java 6, but still not that widely used. I encourage you, the reader, to try it out for yourself :)
Take a look at https://github.com/rzwitserloot/lombok/, it adds methods as you described, such as:
@Getter: adds getter methods based on fields
@Setter: adds setter methods based on fields
@ToString: adds a toString() method based on the fields
I found that there seem to be two general solutions:
don't obfuscate what is referred to through the reflection API [Retroguard, Jobfuscate]
replace Strings in reflection API invocations with the obfuscated name.
Those solutions work only for calls within the same project; client code (in another project) may not use the reflection API to access non-public API methods.
In the case of solution 2 it also only works when the reflection API is used with Strings known at compile time (testing private methods?). In those cases dp4j also offers a solution, injecting the reflection code after obfuscation.
Reading the ProGuard FAQ I wondered whether solution 2 otherwise always worked, as it says:
ProGuard automatically handles constructs like Class.forName("SomeClass") and SomeClass.class. The referenced classes are preserved in the shrinking phase, and the string arguments are properly replaced in the obfuscation phase.
With variable string arguments, it's generally not possible to determine their possible values.
Q: what does the statement in bold mean? Any examples?
With variable string arguments, it's generally not possible to determine their possible values.
public Class loadIt(String clsName) throws ClassNotFoundException {
    return Class.forName(clsName);
}
Basically, if you pass a non-constant string to Class.forName, there's generally no way for ProGuard or any obfuscation tool to figure out which class you are talking about, and thus it can't automatically adjust the code for you.
The Zelix KlassMaster Java obfuscator can automatically handle all Reflection API calls. It has a function called AutoReflection which uses an "encrypted old name" to "obfuscated name" lookup table.
However, it again can only work for calls within the same obfuscated project.
See http://www.zelix.com/klassmaster/docs/tutorials/autoReflectionTutorial.html.
It means that this:
String className;
if (Math.random() <= 0.5) className = "ca.simpatico.Foo";
else className = "ca.simpatico.Bar";
Class cl = Class.forName(className);
won't work after obfuscation. ProGuard doesn't do a deep enough dataflow analysis to see that the class name which gets loaded came from those two string literals.
Really, your only plausible option is to decide which classes, interfaces, and methods should be accessible through reflection, and then not obfuscate those. You're effectively defining a strange kind of API to clients - one which will only be accessed reflectively.
In a project of mine I have two packages full of DTOs, POJOs with just getters and setters. While it's important that they are simple java beans (e.g. because Apache CXF uses them to create Web Service XSDs etc.), it's also awful and error-prone to program like that.
Foo foo = new Foo();
foo.setBar("baz");
foo.setPhleem(123);
return foo;
I prefer fluent interfaces and builder objects, so I use maven / gmaven to automatically create builders for the DTOs. So for the above code, a FooBuilder is automatically generated, which I can use like this:
Foo foo = new FooBuilder()
    .bar("baz")
    .phleem(123)
    .build();
I also automatically generate unit tests for the generated builders. A unit test generates both of the above code versions (builder version and non-builder version) and asserts that both are equivalent in terms of equals() and hashCode(). The way I achieve that is to have a globally accessible Map with defaults for every property type. Something like this:
public final class Defaults {

    private Defaults() {}

    private static final Map<Class<?>, Object> DEFAULT_VALUES =
            new HashMap<Class<?>, Object>();

    static {
        DEFAULT_VALUES.put(String.class, "baz");
        // argh, autoboxing is necessary :-)
        DEFAULT_VALUES.put(int.class, 123);
        // etc. etc.
    }

    public static Object getPropertyValue(Class<?> type) {
        return DEFAULT_VALUES.get(type);
    }
}
Another non-trivial aspect is that the POJOs sometimes have collection members, e.g.:
foo.setBings(List<Bing> bings)
but in my builder I would like to generate two methods for this case: a set method and an add method:
fooBuilder.bings(List<Bing> bings); // set method
fooBuilder.addBing(Bing bing); // add method
I have solved this by adding a custom annotation to the property fields in Foo:
@ComponentType(Bing.class)
private List<Bing> bings;
The builder builder (sic) reads the annotation and uses its value as the generic type of the methods to generate.
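The reading side can be as simple as this (assuming @ComponentType is retained at run time and has a single value() element, as the snippet above suggests):

// java.lang.reflect.Field here, not the Field<String> from the earlier question
Field field = Foo.class.getDeclaredField("bings");
ComponentType componentType = field.getAnnotation(ComponentType.class);
if (componentType != null) {
    Class<?> elementType = componentType.value(); // Bing.class
    // generate bings(List<Bing>) and addBing(Bing) from elementType
}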
We are now getting closer to the question (sorry, brevity is not one of my merits :-)).
I have realized that this builder approach could be used in more than one project, so I am thinking of turning it into a maven plugin. I am perfectly clear about how to generate a maven plugin, so that's not part of the question (nor is how to generate valid Java source code). My problem is: how can I deal with the two problems above without introducing any common dependencies (between project and plugin)?
<Question>
I need a Defaults class (or a similar mechanism) for getting default values for generated unit tests (this is a key part of the concept; I would not trust automatically generated builders if they weren't fully tested). Please help me come up with a good and generic way to solve this problem, given that each project will have its own domain objects.
I need a common way of communicating generic types to the builder generator. The current annotation based version I am using is not satisfactory, as both project and plugin need to be aware of the same annotation.
</Question>
Any Ideas?
BTW: I know that the real key point of using builders is making objects immutable. I can't make mine immutable, because standard java beans are necessary, but I use AspectJ to enforce that neither set-methods nor constructors are called anywhere in my code base except in the builders, so for practical purposes, the resulting objects are immutable.
Also: yes, I am aware of existing builder-generator IDE plugins. That doesn't fit my purpose; I want an automated solution that's always up to date whenever the underlying code changes.
Matt B requested some info about how I generate my builders. Here's what I do:
I read a class via reflection, using Introspector.getBeanInfo(clazz).getPropertyDescriptors() to get an array of property descriptors. All my builders have a base class AbstractBuilder<T>, where T would be Foo in the above case. Here's the code of the abstract builder class. For every property in the PropertyDescriptor array, a method is generated with the name of the property. This would be the implementation of FooBuilder.bar(String):
public FooBuilder bar(String bar) {
    setProperty("bar", bar);
    return this;
}
The build() method in AbstractBuilder instantiates the object and assigns all properties in its property map.
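The abstract builder itself is only linked above, so here is a rough sketch of what it could look like under those assumptions (a bean with a no-arg constructor, properties kept in a map):

import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

public abstract class AbstractBuilder<T> {

    private final Class<T> type;
    private final Map<String, Object> properties = new HashMap<String, Object>();

    protected AbstractBuilder(Class<T> type) {
        this.type = type;
    }

    // called by the generated fluent methods, e.g. bar(String)
    protected void setProperty(String name, Object value) {
        properties.put(name, value);
    }

    public T build() throws Exception {
        T instance = type.newInstance();
        for (PropertyDescriptor pd : Introspector.getBeanInfo(type).getPropertyDescriptors()) {
            Object value = properties.get(pd.getName());
            if (value != null && pd.getWriteMethod() != null) {
                pd.getWriteMethod().invoke(instance, value);
            }
        }
        return instance;
    }
}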
A POJO is an object which doesn't follow the JavaBean spec, i.e. it doesn't have setters/getters.
JavaBeans are not required to have setters; if you don't want them to be called, don't generate them. (Your builder can call a package-local or private constructor to create your immutable objects.)
Have you looked at Diezel?
It's a builder generator.
It handles generic types, so it might be helpful here for question 2.
It generates all the interfaces and the implementation boilerplate based on a description XML file. You might be able, through introspection, to generate this XML (or even go directly against the lower-level API).
It is bundled as a maven plugin.
I'm writing a library that needs to have some code if a particular library is included. Since this code is scattered all around the project, it would be nice if users didn't have to comment/uncomment everything themselves.
In C, this would be easy enough with a #define in a header, and then code blocks surrounded with #ifdefs. Of course, Java doesn't have the C preprocessor...
To clarify: several external libraries will be distributed with mine. I do not want to have to include them all, to minimize my executable size. If a developer does include a library, I need to be able to use it, and if not, then it can just be ignored.
What is the best way to do this in Java?
There's no way to do what you want from within Java. You could preprocess the Java source files, but that's outside the scope of Java.
Can you not abstract the differences and then vary the implementation?
Based on your clarification, it sounds like you might be able to create a factory method that will return either an object from one of the external libraries or a "stub" class whose functions will do what you would have done in the "not-available" conditional code.
As others have said, there is no such thing as #define/#ifdef in Java. But regarding your problem of having optional external libraries, which you would use if present and not use if not, using proxy classes might be an option (if the library interfaces aren't too big).
I had to do this once for the Mac OS X specific extensions for AWT/Swing (found in com.apple.eawt.*). The classes are, of course, only on the class path if the application is running on Mac OS. To be able to use them but still allow the same app to run on other platforms, I wrote simple proxy classes which just offered the same methods as the original EAWT classes. Internally, the proxies used some reflection to determine if the real classes were on the class path and would pass through all method calls. By using the java.lang.reflect.Proxy class, you can even create and pass around objects of a type defined in the external library, without having it available at compile time.
For example, the proxy for com.apple.eawt.ApplicationListener looked like this:
public abstract class ApplicationListener {

    private static Class<?> nativeClass;

    static Class<?> getNativeClass() {
        try {
            if (ApplicationListener.nativeClass == null) {
                ApplicationListener.nativeClass = Class.forName("com.apple.eawt.ApplicationListener");
            }
            return ApplicationListener.nativeClass;
        } catch (ClassNotFoundException ex) {
            throw new RuntimeException("This system does not support the Apple EAWT!", ex);
        }
    }

    private Object nativeObject;

    public ApplicationListener() {
        Class<?> nativeClass = ApplicationListener.getNativeClass();
        this.nativeObject = Proxy.newProxyInstance(nativeClass.getClassLoader(), new Class<?>[] {
            nativeClass
        }, new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                String methodName = method.getName();
                ApplicationEvent event = new ApplicationEvent(args[0]);
                if (methodName.equals("handleReOpenApplication")) {
                    ApplicationListener.this.handleReOpenApplication(event);
                } else if (methodName.equals("handleQuit")) {
                    ApplicationListener.this.handleQuit(event);
                } else if (methodName.equals("handlePrintFile")) {
                    ApplicationListener.this.handlePrintFile(event);
                } else if (methodName.equals("handlePreferences")) {
                    ApplicationListener.this.handlePreferences(event);
                } else if (methodName.equals("handleOpenFile")) {
                    ApplicationListener.this.handleOpenFile(event);
                } else if (methodName.equals("handleOpenApplication")) {
                    ApplicationListener.this.handleOpenApplication(event);
                } else if (methodName.equals("handleAbout")) {
                    ApplicationListener.this.handleAbout(event);
                }
                return null;
            }
        });
    }

    Object getNativeObject() {
        return this.nativeObject;
    }

    // followed by abstract definitions of all handle...(ApplicationEvent) methods
}
All this only makes sense if you need just a few classes from an external library, because you have to do everything via reflection at run time. For larger libraries, you would probably need some way to automate the generation of the proxies. But then, if you really are that dependent on a large external library, you should just require it at compile time.
Comment by Peter Lawrey (sorry to edit, it's very hard to put code into a comment):
The following example is generic by method, so you don't need to know all the methods involved. You can also make this generic by class, so you only need one InvocationHandler class coded to cover all cases.
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    String methodName = method.getName();
    ApplicationEvent event = new ApplicationEvent(args[0]);
    // look up the handler with the same name on the outer class and invoke it
    Method handler = ApplicationListener.class.getMethod(methodName, ApplicationEvent.class);
    return handler.invoke(ApplicationListener.this, event);
}
In Java one could use a variety of approaches to achieve the same result:
Dependency Injection
Annotations
Reflection
The Java way is to put behaviour that varies into a set of separate classes abstracted through an interface, then plug in the required class at run time. See also:
Factory pattern
Builder pattern
Strategy pattern
Well, Java syntax is close enough to C that you could simply use the C preprocessor, which is usually shipped as a separate executable.
But Java isn't really about doing things at compile time anyway. The way I've handled similar situations before is with reflection. In your case, since your calls to the possibly-absent library are scattered throughout the code, I would make a wrapper class, replace all the calls to the library with calls to the wrapper class, and then use reflection inside the wrapper class to invoke the library if it is present.
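A sketch of such a wrapper; every name in it is invented for illustration:

public final class OptionalLibWrapper {

    private static final Class<?> LIB = findLibrary();

    private static Class<?> findLibrary() {
        try {
            return Class.forName("com.example.OptionalLib"); // assumed class name
        } catch (ClassNotFoundException e) {
            return null; // library not on the classpath
        }
    }

    public static boolean isAvailable() {
        return LIB != null;
    }

    public static void doStuff() throws Exception {
        if (LIB == null) {
            return; // silently skip when the library is absent
        }
        LIB.getMethod("doStuff").invoke(null); // assumed static method
    }
}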
Use a constant:
This week we create some constants that have all of the benefits of using the C preprocessor's facilities to define compile-time constants and conditionally compiled code.
Java has gotten rid of the entire notion of a textual preprocessor (if you take Java as a "descendent" of C/C++). We can, however, get the best benefits of at least some of the C preprocessor's features in Java: constants and conditional compilation.
I don't believe that there really is such a thing. Most true Java users will tell you that this is a Good Thing, and that relying on conditional compilation should be avoided at almost all costs.
I don't really agree with them...
You CAN use constants that can be defined from the compile line, and that will have some of the effect, but not really all. (For example, you can't keep code that doesn't compile, but that you still want, inside the equivalent of #if 0... (and no, comments don't always solve that problem, because nesting comments can be tricky...)).
I think that most people will tell you to use some form of inheritance to do this, but that can be very ugly as well, with lots of repeated code...
That said, you CAN always just set up your IDE to throw your java through the pre-processor before sending it to javac...
"to minimize my executable size"
What do you mean by "executable size"?
If you mean the amount of code loaded at runtime, then you can conditionally load classes through the classloader. So you distribute your alternative code no matter what, but it's only actually loaded if the library that it stands in for is missing. You can use an Adapter (or similar) to encapsulate the API, to make sure that almost all of your code is exactly the same either way, and one of two wrapper classes is loaded according to your case. The Java security SPI might give you some ideas how this can be structured and implemented.
If you mean the size of your .jar file, then you can do the above, but tell your developers how to strip the unnecessary classes out of the jar, in the case where they know they aren't going to be needed.
I have one more way to suggest.
What you need is a final constant:
public static final boolean LIBRARY_INCLUDED = false; // or true - set this manually
Then inside the code say:
if (LIBRARY_INCLUDED) {
    // do what you want to do if the library is included
} else {
    // do whatever you want to do if the library is not included
}
This will work like #ifdef: only one of the blocks will be present in the executable code; the other will be eliminated at compile time itself.
Use properties to do this kind of thing.
Use things like Class.forName to identify the class.
Do not use if-statements when you can trivially translate a property directly to a class.
Depending on what you are doing (not quite enough information) you could do something like this:
interface Foo {
    void foo();
}

class FakeFoo implements Foo {
    public void foo() {
        // do nothing
    }
}

class RealFoo implements Foo {
    public void foo() {
        // do something
    }
}
and then provide a class to abstract the instantiation:
class FooFactory {
    public static Foo makeFoo() {
        final String name;
        final Class<?> fooClass;
        final Foo foo;

        name = System.getProperty("foo.class");
        fooClass = Class.forName(name);
        foo = (Foo) fooClass.newInstance();
        return (foo);
    }
}
Then run java with -Dfoo.class=RealFoo or -Dfoo.class=FakeFoo.
I ignored the exception handling in the makeFoo method, and you could structure it in other ways... but the idea is the same.
That way you compile both versions of the Foo subclasses and let the developer choose at runtime which they wish to use.
I see you specifying two mutually exclusive problems here (or, more likely, you have chosen one and I'm just not understanding which choice you've made).
You have to make a choice: Are you shipping two versions of your source code (one if the library exists, and one if it does not), or are you shipping a single version and expecting it to work with the library if the library exists.
If you want a single version to detect the library's existence and use it if available, then you MUST have all the code to access it in your distributed code; you cannot trim it out. Since you are equating your problem with using a #define, I assumed this was not your goal: you want to ship two versions (the only way #define can work).
So, with two versions you can define a library interface. This can either be an object that wraps your library and forwards all the calls to it, or an interface; in either case this type MUST exist at compile time for both modes.
public LibraryInterface getLibrary() {
    if (LIBRARY_EXISTS) { // final boolean
        // Instantiate your wrapper class or reflectively create an instance
        return library;
    }
    return null;
}
Now, when you want to USE your library (in cases where you would have had an #ifdef in C), you write:
if (LIBRARY_EXISTS)
    library.doFunc();
Library is an interface that exists in both cases. Since the call is always guarded by LIBRARY_EXISTS, it will compile out (it should never even load into your class loader, but that's implementation dependent).
If your library is a pre-packaged library provided by a 3rd party, you may have to make Library a wrapper class that forwards its calls to your library. Since your library wrapper is never instantiated if LIBRARY_EXISTS is false, it shouldn't even be loaded at runtime (heck, it shouldn't even be compiled in if the JVM is smart enough, since it's always guarded by a final constant), but remember that the wrapper MUST be available at compile time in both cases.
If it helps, have a look at J2ME Polish or "Using preprocessor directives in BlackBerry JDE plugin for eclipse?"
These are for mobile apps, but the approach can be reused, no?