OSGi: Ensure that all extensions are loaded in a Declarative Services application - java

I am working on an application that is meant to be extensible by the customer. It is based on OSGi (Equinox) and makes heavy use of Declarative Services (DS). Customer-installed bundles provide their own service implementations which my application then makes use of. There is no limit on the number of service implementations that customer-specific bundles may provide.
Is there a way to ensure that, when the application's main function is executed, all customer-provided service implementations have been registered?
To clarify, suppose my application consists of a single DS component RunnableRunner:
public class RunnableRunner
{
    private final List<Runnable> runnables = new ArrayList<Runnable>();

    public void bindRunnable(Runnable runnable)
    {
        runnables.add(runnable);
    }

    public void activate()
    {
        System.out.println("Running runnables:");
        for (Runnable runnable : runnables) {
            runnable.run();
        }
        System.out.println("Done running runnables.");
    }
}
This component is registered using a DS component.xml such as the following:
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="RunnableRunner" activate="activate">
    <implementation class="RunnableRunner"/>
    <reference bind="bindRunnable" interface="java.lang.Runnable" name="Runnable"
               cardinality="0..n" policy="dynamic"/>
</scr:component>
I understand that there is no guarantee that, at the time activate() is called, all Runnables have been bound. In fact, experiments I made with Eclipse/Equinox indicate that the DS runtime won't be able to bind Runnables contributed by another bundle if that bundle happens to start after the main bundle (which is a 50/50 chance unless explicit start levels are used).
So, what alternatives are there for me? How can I make sure the OSGi container tries as hard as it can to resolve all dependencies before activating the RunnableRunner?
Alternatives I already thought about:
Bundle start levels: too coarse (they work on bundle level, not on component level) and also unreliable (they're only taken as a hint by OSGi)
Resorting to Eclipse's Extension Points: too Eclipse-specific, hard to combine with Declarative Services.
Making the RunnableRunner dynamically reconfigure whenever a new Runnable is registered: not possible, at some point I have to execute all the Runnables in sequence.
Any advice on how to make sure some extensible service is "ready" before it is used?

By far the best way is not to care and to design your system so that it flows correctly. There are many reasons a service appears and disappears, so any mirage of stability is just that; not handling the actual conditions creates fragile systems.
In your example, why can't the RunnableRunner execute the work for each Runnable service as it becomes available? The following code is fully OSGi dynamic aware:
@Component
public class RunnableRunner {

    @Reference
    Executor executor;

    @Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
    void addRunnable(Runnable r) {
        executor.execute(r);
    }
}
I expect you find this wrong for a reason you did not specify. This reason is what you should try to express as a service registration.
If you have a (rare) use case where you absolutely need to know that 'all' (whatever that means) services are available, then you could count the number of instances, or use some other condition. In OSGi with DS the approach is then to turn this condition into a service, so that others can depend on it and you get all the guarantees that services provide.
In that case just create a component that counts the number of instances. Using the configuration, you register a Ready service once you reach a certain count.
public interface Ready {}

@Component
@Designate(ocd = RunnableGuard.Config.class)
public class RunnableGuard {

    @ObjectClassDefinition
    @interface Config {
        int count();
    }

    int count = Integer.MAX_VALUE;
    int current;
    BundleContext context;
    ServiceRegistration<Ready> registration;

    @Activate
    void activate(Config c, BundleContext context) {
        this.context = context;
        this.count = c.count();
        count(0);
    }

    @Deactivate
    void deactivate() {
        if (registration != null)
            registration.unregister();
    }

    @Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
    void addRunnable(Runnable r) {
        count(1);
    }

    void removeRunnable(Runnable r) {
        count(-1);
    }

    synchronized void count(int n) {
        this.current += n;
        if (this.current >= count && registration == null)
            registration = context.registerService(
                Ready.class, new Ready() {}, null);
        if (this.current < count && registration != null) {
            registration.unregister();
            registration = null;
        }
    }
}
Your RunnableRunner would then look like:
@Component
public class RunnableRunner {

    @Reference
    volatile List<Runnable> runnables;

    @Reference
    Ready ready;

    @Activate
    void activate() {
        System.out.println("Running runnables:");
        runnables.forEach(Runnable::run);
        System.out.println("Done running runnables.");
    }
}
Pretty fragile code but sometimes that is the only option.
I did not know there were still people writing XML ... my heart is bleeding for you :-)

If you do not know which extensions you need at start-up, then you can only make your component dynamic and react to each extension as it is added.
If you need to make sure that all required extensions have been collected before some further step may happen, then you can give your required extensions names and list those names in a config.
So, for example, you could have a config property "extensions" that lists all required extension names separated by spaces. Each extension then must have a service property such as "name". In your component you compare the extensions you have found with the required extensions by name, and you do your "activation" only when all required extensions are present.
This is, for example, used in CXF DOSGi to apply intents on a service as specified in the Remote Service Admin spec.
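A minimal sketch of that idea, assuming the standard DS annotations; the Extension marker interface, the Ready marker service, the component name and the "extensions"/"name" property names are all illustrative, not a fixed API:

import java.util.*;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.component.annotations.*;

// Stands for whatever extension service type the contributed bundles register.
interface Extension {}

@Component(configurationPolicy = ConfigurationPolicy.REQUIRE)
public class ExtensionGuard {

    // Marker service registered once every required extension is present.
    public interface Ready {}

    private final Set<String> found = new HashSet<>();
    private Set<String> required = new HashSet<>();
    private BundleContext context;
    private ServiceRegistration<Ready> registration;

    @Activate
    synchronized void activate(BundleContext context, Map<String, Object> config) {
        this.context = context;
        // "extensions" lists the required extension names, separated by whitespace
        required = new HashSet<>(Arrays.asList(((String) config.get("extensions")).split("\\s+")));
        check();
    }

    @Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
    synchronized void addExtension(Extension e, Map<String, Object> props) {
        found.add((String) props.get("name"));   // each extension carries a "name" service property
        check();
    }

    synchronized void removeExtension(Extension e, Map<String, Object> props) {
        found.remove((String) props.get("name"));
        check();
    }

    private void check() {
        if (found.containsAll(required) && registration == null) {
            registration = context.registerService(Ready.class, new Ready() {}, null);
        } else if (!found.containsAll(required) && registration != null) {
            registration.unregister();
            registration = null;
        }
    }
}

Other components can then take a mandatory reference on Ready, exactly as in the RunnableGuard example above, and will only activate once all named extensions have shown up.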

Related

How to activate an OSGI bundle only when the certain condition is satisfied?

I'm a bit new to OSGi and I want the following: to activate my bundle only when some prerequisites are satisfied (which, btw, we get from a native library, but that's another story). AFAIK it could be achieved through the @Reference DS annotation, but I don't fully get the idea. I mean, if I write something like this before my @Activate method:
@Reference
public AnotherService as;

@Activate
public void activate() {
    // some code
}
this actually means that my bundle won't be activated until the AnotherService is activated. But could I write something like this in AnotherService or in my bundle?:
@Activate
public void activate() {
    if (condition) {
        deactivate();
    }
    // some code
}

@Deactivate
public void deactivate() {
    // some code
}
As far as I understand, that's impossible. Then the question arises: how could I control the activation of my bundle or its references depending on certain condition(s)? I.e. I want my bundle to be either activated when the condition is satisfied (before activation), or deactivated when it is not. The approach of "just make an if-statement; if the condition is not satisfied, do nothing, but stay activated" won't suit me, because the activity of this bundle is very resource-heavy. Maybe I just have a completely wrong idea of OSGi.
This is not something you should do. Your bundle should always be activated as long as it can be resolved (i.e. all of its imports and other static dependencies are satisfied).
In fact, unless you are coding OSGi at an advanced level, you should not write BundleActivators at all. If you are reading an OSGi tutorial in 2020 that tells you to write a BundleActivator, stop reading... that tutorial is junk.
You should use Declarative Services components. A component is an object managed by OSGi and can have its lifecycle bound to the availability of a service. For example:
@Component
public class MyComponent implements MyService {

    private final AnotherService service;

    @Activate
    public MyComponent(@Reference AnotherService service) {
        this.service = service;
    }

    @Override
    public void doSomething() {
        // ...
    }
}
In this example, an instance of MyComponent will be created and published as a service of type MyService only when a service of type AnotherService is available. You will be able to invoke the AnotherService via the final service field. This is a mandatory service reference, which is the default in DS. It is also possible to create optional and dynamic references (i.e. references that can re-bind to other service instances during the lifetime of your component), but don't worry about those use cases until later in your OSGi learning.
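For completeness, a minimal sketch of such an optional, dynamic reference using the standard DS field injection; the component name is illustrative and AnotherService is carried over from the example above:

import org.osgi.service.component.annotations.*;

@Component
public class MyOtherComponent {

    // OPTIONAL + DYNAMIC: the service may be absent and may be swapped at any time,
    // so the field must be volatile and checked before use.
    @Reference(cardinality = ReferenceCardinality.OPTIONAL, policy = ReferencePolicy.DYNAMIC)
    private volatile AnotherService service;

    public void doSomething() {
        AnotherService s = service;   // read once into a local variable to avoid races
        if (s != null) {
            // use s here; with a mandatory reference this null check would not be needed
        }
    }
}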

JavaEE - EJB/CDI Method Duration Mechanism

Not sure how to title this issue, but hopefully the description gives a better explanation. I am looking for a way to annotate an EJB or CDI method with a custom annotation like "@Duration" or something, so as to kill the method's execution if it takes longer than the given duration. I guess some pseudo code will make everything clear:
public class myEJBorCdiBean {

    @Duration(seconds = 5)
    public List<Data> complexTask(..., ...)
    {
        while (..) {
            // this takes more time than the given 5 seconds, so throw an exception
        }
    }
}
To sum up: a method takes extremely long, and it shall throw some kind of "time duration expired" error.
It's kind of a timeout mechanism. I don't know if there is already something like this; I am new to the Java EE world.
Thanks in advance, guys.
You are not supposed to use the Threading API inside an EJB/CDI container. The EJB spec clearly states that:
The enterprise bean must not attempt to manage threads. The enterprise
bean must not attempt to start, stop, suspend, or resume a thread, or
to change a thread’s priority or name. The enterprise bean must not
attempt to manage thread groups.
Managed beans and the invocation of their business methods have to be fully controlled by the container in order to avoid corruption of their state. Depending on your use case, either offload this operation to a dedicated service (outside Java EE), or come up with a semi-hacky solution using an EJB @Singleton and @Schedule, so that you can periodically check some control flag. If you are running on Wildfly/JBoss, you could misuse the @TransactionTimeout annotation for this: EJB methods are transaction-aware by default, so setting the timeout on the transaction effectively controls the invocation timeout on the bean method. I am not sure how well this is supported on other application servers.
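A minimal sketch of that @TransactionTimeout variant, assuming Wildfly/JBoss (the annotation is JBoss-specific; the bean and method names are illustrative):

import java.util.concurrent.TimeUnit;
import javax.ejb.Stateless;
import org.jboss.ejb3.annotation.TransactionTimeout;

@Stateless
public class ComplexTaskBean {

    // The transaction is marked for rollback once it runs longer than 5 seconds;
    // the invocation then fails at the next transactional operation or at commit.
    // Note: this does not forcibly stop the running thread.
    @TransactionTimeout(value = 5, unit = TimeUnit.SECONDS)
    public void complexTask() {
        // long-running work
    }
}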
If async processing is an option, then EJB @Asynchronous could be of some help: see the Asynchronous tutorial, "Cancelling an asynchronous operation".
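And a minimal sketch of the @Asynchronous route, where the caller decides when to give up and the bean cooperatively checks for cancellation; Data stands for the result type from the question's pseudo code, and the bean name and loop are illustrative:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import javax.annotation.Resource;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;

@Stateless
public class AsyncTaskBean {

    @Resource
    private SessionContext ctx;

    @Asynchronous
    public Future<List<Data>> complexTask() {
        List<Data> result = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {          // stands in for the long-running loop
            if (ctx.wasCancelCalled()) {           // true after the client calls future.cancel(true)
                return new AsyncResult<>(null);    // give up cooperatively
            }
            // ... do one chunk of work, adding to result ...
        }
        return new AsyncResult<>(result);
    }
}

The caller gets the Future back immediately and can call future.cancel(true) once its own deadline has passed; the bean notices this via SessionContext.wasCancelCalled() and stops.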
As general advice: do not run long-running operations in EJB/CDI. Every request will spawn a new thread; threads are a limited resource, and your app will be much harder to scale and maintain (long-running op ~= state). What happens if your server crashes during the method invocation? How would the use case work in a clustered environment? Again, it is hard to say what the better approach is without understanding your use case, but investigate the Java EE Batch API, JMS with message-driven beans, or asynchronous processing with @Asynchronous.
Limiting a complex task to a certain execution time is a very meaningful idea. In practical web computing, many users will be unwilling to wait for a complex search task to complete when its duration exceeds a maximally acceptable amount of time.
The Enterprise container controls the thread pool and the allocation of CPU resources among the active threads. It does so while also taking into account wait times during time-consuming I/O tasks (typically disk access).
Nevertheless, it makes sense to record a start time and, every now and then during the complex task, check how long that particular task has been running. I advise you to program a local, runnable task which picks scheduled jobs from a job queue. I have experience with this from a Java Enterprise backend application running under Glassfish.
First the interface definition Duration.java
// Duration.java
#Qualifier
#Target({ElementType.TYPE, ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
#Documented
#Retention(RetentionPolicy.RUNTIME)
public #interface Duration {
public int minutes() default 0; // Default, extended from class, within path
}
Now follows the definition of the job TimelyJob.java
// TimelyJob.java
import java.time.LocalDateTime;
import java.util.UUID;

@Duration(minutes = 5)
public class TimelyJob {

    private LocalDateTime localDateTime = LocalDateTime.now();
    private UUID uniqueTaskIdentifier;
    private String uniqueOwnerId;

    public TimelyJob(UUID uniqueTaskIdentifier, String uniqueOwnerId) {
        this.uniqueTaskIdentifier = uniqueTaskIdentifier;
        this.uniqueOwnerId = uniqueOwnerId;
    }

    public void processUntilMins() {
        final int minutes = this.getClass().getAnnotation(Duration.class).minutes();
        while (true) {
            // do some heavy Java task for a time unit, then pause, and check the total time
            // break - when finished
            if (minutes > 0 && localDateTime.plusMinutes(minutes).isBefore(LocalDateTime.now())) {
                // the allowed duration has expired
                break;
            }
            try {
                Thread.sleep(5);
            } catch (InterruptedException e) {
                System.err.print(e);
            }
        }
        // store result data in a result class, with 'synchronized' access
    }

    public LocalDateTime getLocalDateTime() {
        return localDateTime;
    }

    public UUID getUniqueTaskIdentifier() {
        return uniqueTaskIdentifier;
    }

    public String getUniqueOwnerId() {
        return uniqueOwnerId;
    }
}
The Runnable task that executes the timed jobs - TimedTask.java - is implemented as follows:
// TimedTask.java
import java.util.concurrent.LinkedBlockingQueue;

public class TimedTask implements Runnable {

    private LinkedBlockingQueue<TimelyJob> jobQueue = new LinkedBlockingQueue<TimelyJob>();

    public void setJobQueue(TimelyJob job) {
        this.jobQueue.add(job);
    }

    @Override
    public void run() {
        while (true) {
            try {
                TimelyJob nextJob = jobQueue.take();
                nextJob.processUntilMins();
                Thread.sleep(100);
            } catch (InterruptedException e) {
                System.err.print(e);
            }
        }
    }
}
and, in separate code, the starting of the TimedTask:
public void initJobQueue() {
    new Thread(new TimedTask()).start();
}
This functionality actually implements a batch-job scheduler in Java, using annotations to control the end-task time limit.

Best practice to associate message and target class instance creation

The program I am working on has a distributed architecture, more precisely the Broker-Agent pattern. The broker sends messages to its corresponding agent in order to tell the agent to execute a task. Each message sent contains the target task information (the task name, configuration properties needed for the task to perform, etc.). In my code, each task on the agent side is implemented in a separate class, like:
public class Task1 {}
public class Task2 {}
public class Task3 {}
...
Messages are in JSON format like:
{
    "taskName": "Task1", // put the class name here
    "config": {
    }
}
So what I need is to associate the message sent from the broker with the right task in the agent side.
I know one way is to put the target task class name in the message, so that the agent is able to create an instance of that task class from the task name extracted from the message using reflection, like:
Class.forName(className).getConstructor(String.class).newInstance(arg);
I want to know what the best practice is to implement this association. The number of tasks is growing, and I think writing strings is error-prone and not easy to maintain.
If you're that specific about class names, you could even think about serializing task objects and sending them directly. That's probably simpler than your reflection approach (though even more tightly coupled).
But usually you don't want that kind of coupling between Broker and Agent. A broker needs to know which task types there are and how to describe the task in a way that everybody understands (like in JSON). It doesn't / shouldn't know how the Agent implements the task. Or even in which language the Agent is written. (That doesn't mean that it's a bad idea to define task names in a place that is common to both code bases)
So you're left with finding a good way to construct objects (or call methods) inside your agent based on some string. The common solution for that is some form of factory pattern, like: http://alvinalexander.com/java/java-factory-pattern-example - also helpful: a Map<String, Factory>, like
interface Task {
    void doSomething();
}

interface Factory {
    Task makeTask(String taskDescription);
}

Map<String, Factory> taskMap = new HashMap<>();

void init() {
    taskMap.put("sayHello", new Factory() {
        @Override
        public Task makeTask(String taskDescription) {
            return new Task() {
                @Override
                public void doSomething() {
                    System.out.println("Hello" + taskDescription);
                }
            };
        }
    });
}

void onTask(String taskName, String taskDescription) {
    Factory factory = taskMap.get(taskName);
    if (factory == null) {
        System.out.println("Unknown task: " + taskName);
        return; // bail out, otherwise makeTask below would throw a NullPointerException
    }
    Task task = factory.makeTask(taskDescription);
    // execute task somewhere
    new Thread(task::doSomething).start();
}
http://ideone.com/We5FZk
And if you want it fancy, consider annotation-based reflection magic. It depends on how many task classes there are: the more there are, the more effort is worth putting into an automagic solution that hides the complexity from you.
For example, the map above could be filled automatically by class-path scanning for classes of the right type carrying some annotation that holds the string(s). Or you could let some DI framework inject all the things that need to go into the map. DI in larger projects usually solves those kinds of issues really well: https://softwareengineering.stackexchange.com/questions/188030/how-to-use-dependency-injection-in-conjunction-with-the-factory-pattern
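A minimal sketch of the annotation idea, reusing the Task and Factory interfaces from the snippet above and without any class-path scanning library; the @TaskName annotation, the HelloTask class and the registration list are all hypothetical:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical annotation carrying the task name used in the broker's message.
@Retention(RetentionPolicy.RUNTIME)
@interface TaskName {
    String value();
}

// Hypothetical task implementation; the constructor takes the task description.
@TaskName("sayHello")
class HelloTask implements Task {
    private final String description;

    HelloTask(String description) {
        this.description = description;
    }

    @Override
    public void doSomething() {
        System.out.println("Hello " + description);
    }
}

class TaskRegistry {
    final Map<String, Factory> taskMap = new HashMap<>();

    // Fill the map from a list of known task classes by reading the annotation;
    // a class-path scanner or DI container could supply this list instead.
    void register(List<Class<? extends Task>> taskClasses) {
        for (Class<? extends Task> cls : taskClasses) {
            TaskName name = cls.getAnnotation(TaskName.class);
            taskMap.put(name.value(), description -> {
                try {
                    return cls.getDeclaredConstructor(String.class).newInstance(description);
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException("Cannot create task " + name.value(), e);
                }
            });
        }
    }
}

Usage would then be something like registry.register(List.of(HelloTask.class)); adding a new task means adding one annotated class rather than touching the registration code.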
And besides writing your own distribution system, you can probably use existing ones (and reuse rather than reinvent is a best practice). Maybe http://www.typesafe.com/activator/template/akka-distributed-workers or, more generally, http://twitter.github.io/finagle/ works in your context. But there are way too many other open-source distributed systems covering different aspects to name all the interesting ones.

Best Practice in an OSGi UI application

I am somewhat new to the OSGi world, and some concepts still elude me.
I'm trying to create a graphical OSGi application using Swing, Equinox and Declarative Services. The goal is to ease the creation of plugins and extensions for the application.
I have stumbled with a design problem and, since I am doing this from the ground up, I want to use all the best practices I can.
I do have a bundle that contains the API and only exposes interfaces to be implemented as services.
public class SomeClass {
}

public interface Manager<T> {
    void add(T obj);
    void update(T obj);
    void remove(T obj);
}

public interface SomeClassManager extends Manager<SomeClass> {
}

public interface Listener<T> {
    void added(T obj);
    void updated(T obj);
    void removed(T obj);
}

public interface SomeClassListener extends Listener<SomeClass> {
}
Let's say I have a bundle (Core) that provides a service that is a manager of certain types of objects (It basically contains an internal List and adds, removes and updates it).
public class SomeClassCoreManager implements SomeClassManager {

    private ArrayList<SomeClass> list = new ArrayList<SomeClass>();
    private ArrayList<SomeClassListener> listeners = new ArrayList<SomeClassListener>();

    protected void bindListener(SomeClassListener listener) {
        listeners.add(listener);
    }

    protected void unbindListener(SomeClassListener listener) {
        listeners.remove(listener);
    }

    public void add(SomeClass obj) {
        // Adds the object to the list
        // Fires all the listeners with "added(obj)"
    }

    public void update(SomeClass obj) {
        // Updates the object in the list
        // Fires all the listeners with "updated(obj)"
    }

    public void remove(SomeClass obj) {
        // Removes the object from the list
        // Fires all the listeners with "removed(obj)"
    }
}
I also have a second bundle (UI) that takes care of the main UI. It should not "care" about the object management itself, but it should be notified when an object is added, removed or changed, in order to update a JTree. For that purpose I used the whiteboard pattern: the UI bundle implements a service that is used by the Core bundle to fire object change events.
public class MainWindow extends JFrame {

    private JTree tree = new JTree();
    private SomeClassManager manager;

    protected void activate() {
        // Adds the tree, sets its model and creates the rest of the UI
    }

    protected void bindManager(SomeClassManager manager) {
        this.manager = manager;
    }

    protected void unbindManager(SomeClassManager manager) {
        this.manager = null;
    }
}

public class SomeClassUIListener implements SomeClassListener {

    public void added(SomeClass obj) {
        // Should add the object to the JTree
    }

    public void updated(SomeClass obj) {
        // Should update the existing object in the JTree
    }

    public void removed(SomeClass obj) {
        // Should remove the existing object from the JTree
    }
}
My problem here is the following:
The MainWindow is a DS component. I am using its activator to initiate the whole UI. The instance creation is handled by OSGi.
In order to get the updates from the manager, I am exposing the SomeClassUIListener as a Declarative Service. Its instance is also handled by OSGi.
How should I access the instance of the JTree model from the SomeClassUIListener?
I have come up with several options but I am not sure which to use:
Option 1:
Use some kind of internal DI system for the UI bundle (like Guice or Pico) and put it in a class with a static method to get it and use it all over the bundle.
This approach seems to be frowned upon by some.
Option 2:
Inject a reference to the MainWindow (by turning it into a service) into the SomeClassUIListener through OSGi and go from there. Is this possible or advisable? It seems to me to be the simpler solution. But, on the other hand, wouldn't this clutter the bundle with component config files as the UI grows more complex?
Option 3:
Create a separate bundle only for listeners and use OSGi to update the MainWindow. This seems to me a bit extreme, as I would have to create an enormous amount of bundles as the UI complexity grows.
Option 4:
Use the MainWindow class to implement the Listener. But, the more services in the main UI bundle, the bigger the MainWindow class would be. I think this would not be a good option.
I cannot think of more options. Is any of the above the way to go? Or is there another option?
Thank you in advance.
Edit:
Just to clarify as Peter Kriens had some doubts about this question.
My goal here is to decouple the user interface from the Manager. By Manager I mean a kind of repository in which I store a certain type of object (for instance, if you consider Oracle's JTree tutorial at http://docs.oracle.com/javase/tutorial/uiswing/components/tree.html, the manager would contain instances of Books).
The Manager may be used by any other bundle but, according to my current plan, it would notify any listener registered in it. The listener may be the main UI bundle but may also be any other bundle that chooses to listen for updates.
I am not sure I completely grasp your proposal, and it feels like you are on your way to creating a whole load of infrastructure. In OSGi this is generally not necessary, so why not start small and simple.
Your basic model is a manager and an extension. This is the domain model, and I would try to make things flow around it:
@Component(immediate = true)
public class ManagerImpl { // No API == immediate

    List<Extension> extensions = new CopyOnWriteArrayList<Extension>();
    JFrame frame = new JFrame();

    @Reference(cardinality = ReferenceCardinality.MULTIPLE)
    void addExtension(Extension e) {
        addComponent(frame, e.getName(), e.getComponent());
        extensions.add(e);
    }

    void removeExtension(Extension e) {
        if (extensions.remove(e)) {
            removeComponent(frame, e.getName());
        }
    }
}

@Component
public class MyFirstExtension implements Extension {
    public String getName() { return "My First Extension"; }
    public Component getComponent() { return new MyFirstExtensionComponent(this); }
}
Isn't this what you're looking for? Be very careful not to create all kinds of listeners, in general you find the events already in the OSGi registry.
One option here would be to pass the tree model instance as an argument to the listener methods:
public void added(JTree tree, SomeClass obj)
That way the listener manager would be responsible only for the listener logic, not for the tree state.
Another nice option would be to create a separate TreeProviderService, responsible for holding and serving the singleton JTree instance for the application. In that case you would consume the TreeProviderService directly from the listeners.
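A minimal sketch of such a TreeProviderService as DS components, using the standard DS annotations; the interface, its single method and the implementation names are hypothetical, and each type would live in its own source file:

import javax.swing.JTree;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical service interface that hands out the application's single JTree.
public interface TreeProviderService {
    JTree getTree();
}

@Component
public class TreeProvider implements TreeProviderService {
    private final JTree tree = new JTree();

    @Override
    public JTree getTree() {
        return tree;
    }
}

// The listener consumes the provider instead of holding UI state itself.
@Component
public class SomeClassUIListener implements SomeClassListener {

    @Reference
    private TreeProviderService treeProvider;

    @Override
    public void added(SomeClass obj) {
        // update the model of treeProvider.getTree() here, on the EDT
    }

    @Override
    public void updated(SomeClass obj) { /* ... */ }

    @Override
    public void removed(SomeClass obj) { /* ... */ }
}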
I propose to simply also use DS for the UI creation and wiring. If you use the annotations Peter mentioned, you will not clutter your bundles with component descriptors in XML form.
So your listener is a @Component, and you inject the UI elements it needs to update into it.
Btw, what you plan to do sounds a bit like databinding to me, so you should also investigate what existing databinding frameworks offer.
See: Swing data binding frameworks
Btw, you may also want to look at more advanced frameworks than Swing. For example, some time ago I did a small tutorial for Vaadin: https://github.com/cschneider/Karaf-Tutorial/tree/master/vaadin
It already has databinding for Java beans, which made it really easy to code the UI. The full UI is just this little class: https://github.com/cschneider/Karaf-Tutorial/blob/master/vaadin/tasklist-ui-vaadin/src/main/java/net/lr/tutorial/karaf/vaadin/ExampleApplication.java
In the old version I still needed a bridge to run Vaadin in OSGi, but version 7 should be quite OSGi-ready.

OSGi loose-coupling best practice

I'd like to know what is considered the best practices or patterns for decoupling application code from framework code, specifically regarding OSGi.
I'm going to use the example from the Felix SCR pages
The example service is a Comparator
package sample.service;
import java.util.Comparator;
public class SampleComparator implements Comparator
{
    public int compare( Object o1, Object o2 )
    {
        return o1.equals( o2 ) ? 0 : -1;
    }
}
The code above contains no framework plumbing; it's focused and concise. Making this available to the application, when using OSGi, involves registering it with the service registry. One way, as described on the Felix pages linked above, is by using the Service Component Runtime.
// OSGI-INF/sample.xml
<?xml version="1.0" encoding="UTF-8"?>
<component name="sample.component" immediate="true">
    <implementation class="sample.service.SampleComparator" />
    <property name="service.description" value="Sample Comparator Service" />
    <property name="service.vendor" value="Apache Software Foundation" />
    <service>
        <provide interface="java.util.Comparator" />
    </service>
</component>
and
Service-Component: OSGI-INF/sample.xml
All nice and lovely: my service implementation has no coupling at all to OSGi.
Now I want to use the service...
package sample.consumer;

import java.util.Comparator;

public class Consumer {
    public void doCompare(Object o1, Object o2) {
        Comparator c = ...;
    }
}
Using SCR lookup strategy I need to add framework-only methods:
protected void activate(ComponentContext context) {
    Comparator c = (Comparator) context.locateService( "sample.component" );
}
Using SCR event strategy I also need to add framework-only methods:
protected void bindComparator(Comparator c) {
    this.c = c;
}

protected void unbindComparator(Comparator c) {
    this.c = null;
}
Neither is terribly onerous, though I think it's probable you'd end up with a fair amount of this type of code duplicated across classes, which makes for more noise to filter out.
One possible solution I can see would be to use an OSGi specific class to mediate between the consumer, via more traditional means, and the framework.
package sample.internal;

import java.util.Comparator;

public class OsgiDependencyInjector {

    private Consumer consumer;

    protected void bindComparator(Comparator c) {
        this.consumer.setComparator(c);
    }

    protected void unbindComparator(Comparator c) {
        this.consumer.setComparator(null);
    }
}
Though I'm not sure how you'd arrange this in the SCR configuration.
There is also org.apache.felix.scr.annotations, though that means it'll all only work if you're building with the maven-scr-plugin. Not so bad really and, AFAICT, they impose no runtime implications.
So, now you've read all that, what do you suggest is the best way of consuming OSGi provided services without 'polluting' application code with framework code?
1) I do not think the bind methods are polluting your code; they are just bean setters (you can also call them setXXX to be more traditional). You will need those for unit testing as well.
2) If you use bnd (which is available in Maven, Ant, bndtools, an Eclipse plugin, etc.) then you can also use the bnd annotations. bnd will then automatically create the (always horrible) XML for you.
package sample.service;

import java.util.Comparator;
import aQute.bnd.annotations.component.*;

@Component
public class SampleComparator implements Comparator {
    public int compare( Object o1, Object o2 ) {
        return o1.equals( o2 ) ? 0 : -1;
    }
}

@Component
class Consumer {
    Comparator comparator;

    public void doCompare( Object o1, Object o2 ) {
        if ( comparator.compare(o1, o2) == 0 )
            ....
    }

    @Reference
    protected void setComparator( Comparator c ) {
        comparator = c;
    }
}
In your manifest, just add:
Service-Component: *
This will be picked up by bnd, so there is no OSGi code in your domain code. You might be puzzled that there is no unset method, but the default for bnd is static binding: the set method is called before you're activated, and you're deactivated before the unset would be called. As long as your Consumer object is a µservice too, you're safe. Look at bndtools, the bnd home page, and my blogs for more info about µservices.
PS. Your sample is invalid code, because o1 will compare as less than o2 (and o2 as less than o1) whenever o1 != o2. This is not allowed by the Comparator contract and will make sorts unstable.
I'll describe how we do it in my project. As an OSGi container we are using Fuse ESB, although Apache Karaf can be found somewhere inside. To avoid polluting our code we use Spring DM (http://www.springsource.org/osgi), which greatly facilitates interaction with the OSGi container. It is tested "against Equinox 3.2.x, Felix 1.0.3+, and Knopflerfish 2.1.x as part of our continuous integration process" (the newest release).
Advantages of this approach:
all "osgi" configuration in xml files - code not polluted
ability to work with different implementations of OSGi container
How it looks?
publishing service in OSGi registry:
<osgi:service id="some-id" ref="bean-implementing-service-to-expose"
              interface="interface-of-your-service" />
importing service from OSGi registry:
<osgi:reference id="bean-id" interface="interface-of-exposed-service" />
Moreover, to create valid OSGi bundles we use maven-bundle-plugin.
The advantage of the Felix annotations compared to the ones in aQute.bnd.annotations.component seems to be that bind and unbind methods are automatically created by the Felix SCR plugin (you can annotate a private field). The disadvantage of the Felix plugin is that it acts on the Java sources and so doesn't work for class files created from other languages (such as Scala).
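A minimal sketch of what that field-level injection looks like with the Felix SCR annotations (org.apache.felix.scr.annotations), assuming the maven-scr-plugin generates the component descriptor and the bind/unbind methods at build time; the class name is illustrative:

package sample.consumer;

import java.util.Comparator;
import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Reference;

@Component
public class FieldInjectedConsumer {

    // The maven-scr-plugin generates bind/unbind methods for this private field,
    // so no framework-only methods appear in the source.
    @Reference
    private Comparator comparator;

    public boolean isSame(Object o1, Object o2) {
        return comparator.compare(o1, o2) == 0;
    }
}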
