OSGi declarative service is active, but bind() is not called - java

I'm facing an issue with declarative services in an OSGi context which I don't understand. Let me try to explain:
I have a FooService which needs the FooManagerService (1..1 static). The FooManagerService references the FooService, but it's optional (0..n dynamic).
The goal is that whenever a FooService becomes available, it is registered with the FooManagerService (its bind() method is called), so that the FooManagerService always has a list of all available FooService implementations in the system.
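Roughly sketched with DS annotations (the class names here are just illustrative), the intended wiring looks like this:
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.osgi.service.component.annotations.*;

@Component
public class FooManagerServiceImpl implements FooManagerService {

    private final List<FooService> fooServices = new CopyOnWriteArrayList<>();

    // 0..n dynamic: every FooService that appears should be bound here
    @Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
    void bindFooService(FooService fooService) {
        fooServices.add(fooService);
    }

    void unbindFooService(FooService fooService) {
        fooServices.remove(fooService);
    }
}

@Component
public class FooServiceImpl implements FooService {

    // 1..1 static: a FooService cannot exist without the FooManagerService
    @Reference
    private FooManagerService fooManagerService;
}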
It works well on Windows, but on Linux I encounter the problem that the FooService becomes active (its activate() method is called) but is not recognized by the FooManagerService (bind() is never called). If I disable and re-enable the FooService manually on the OSGi console, it is picked up by the FooManagerService.
I don't understand why this happens. It can be avoided by increasing the start level of the bundle that contains FooServiceImpl, but that feels like an ugly workaround, which is why I would like to understand what is going on.
I attach a picture which describes the references between the services. Any hint is appreciated. Thanks in advance!
Best regards
Steffi
Service Manager Diagram

There is a cycle here that should be OK in theory. In practice, however, there are a number of problems.
First, your implementations should be immediate=true. This prevents a nasty problem where DS cannot get a service because it is still being initialised; i.e. both the FooManager and the FooService implementations must be immediate. This is described in OSGi enRoute Cycles.
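For example (a sketch; the FooServiceImpl class name is assumed):
import org.osgi.service.component.annotations.*;

@Component(immediate = true)
public class FooServiceImpl implements FooService {

    @Reference
    FooManager manager; // the 1..1 static reference
}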
However, there is one more problem :-( Apache Felix DS has a bug that causes an effect like the one you describe. The bug is related to bundle ordering and is reported in Apache Felix JIRA 5618.
If this DS bug is the problem, then there is unfortunately only one solid solution. Unfortunate, because it requires you to descend into the bowels of OSGi. The solution is to register the manager service by hand and ensure it is not registered by DS:
import java.util.Hashtable;
import java.util.List;
import java.util.Map;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.component.annotations.*;

// service = {} prevents DS from registering FooManager itself
@Component(service = {}, immediate = true)
public class FooManagerImpl implements FooManager {

    private ServiceRegistration<FooManager> registration;

    @Reference
    volatile List<FooService> foos;

    @Activate
    void activate(BundleContext context, Map<String, Object> properties) {
        // register the service by hand, only once activation has completed
        registration = context.registerService(FooManager.class, this,
                new Hashtable<String, Object>(properties));
    }

    @Deactivate
    void deactivate() {
        registration.unregister();
    }
    ...
}
The trick here is that the FooManager does not register its service until it has been activated, whereas normally the service is registered before the component is activated.
I know Apache Felix is working on it but do not know how far they are.
Anyway, cycles always suck. Sadly, they are not always preventable but I would certainly try.
Note: registering a service manually will not create a capability. If you use Requirements/Capabilities you should add a service capability in the manifest to make the resolver work. If this line is gibberish to you, ignore it.
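For example, a header along these lines (the package name is an assumption):
Provide-Capability: osgi.service;objectClass:List<String>="com.example.FooManager"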

Related

Inject multiple remote EJBs in bean

In Java EE, if I have an interface:
@Remote
public interface MetaService {
    ServiceData get();
}
And I have, in an ear, 2 implementations:
@Stateless
public class Service1MetaService implements MetaService {
    @Override
    public ServiceData get() {...}
}

@Stateless
public class Service2MetaService implements MetaService {
    @Override
    public ServiceData get() {...}
}
I can create a bean, where:
@Stateless
public class View {
    @Inject
    private Instance<MetaService> metaServices;
    ...
}
And in View, the field metaServices will have the 2 implementations of MetaService.
I'd like similar functionality with remote beans.
So let's say, I have the above interface and implementations, but the packaging is different.
In base.jar I have the MetaService interface. This is packaged with all the applications mentioned below.
In a.ear I have the Service1MetaService implementation, while in b.ear I have the Service2MetaService implementation and in c.war I have the View class, which would like to use these implementations.
But as you would expect, the injected Instance is empty (not null, though). Is there a way to find the remote bean references in my injected Instance, even though these implementations are in separate applications?
One important thing is that in the View class I don't know and don't care about the number of these implementations, nor the names of the applications they are deployed in. So there is no way for me to use specific JNDI strings to get these references.
P.S.: Should I try to use a technology like JMS instead, so that I call a method on a JMS proxy which sends out the request and waits for answers from all the applications that implement said interface?
P.S.: To clarify, the reason I need this is so that I can get data about running services on my application server(s). I updated the example interface and implementations to make this clearer. Also, it would be nice if I could get this metadata synchronously, so JMS is not necessarily preferred; however, I can probably make it work.
I managed to convince myself to move away from remote EJBs, thanks in part to @chrylis-onstrike- as well; I'll opt for using JMS for this purpose.
The reason is that I can broadcast a request, on demand, for the different services I need data from, which lets me detect new services coming online or services failing.
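A minimal sketch of that broadcast with JMS 2.0; the topic JNDI name, the bean name and the text-based request/reply protocol are assumptions, not something from the original setup:
import java.util.ArrayList;
import java.util.List;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.*;

@Stateless
public class MetaServiceBroadcaster {

    @Inject
    private JMSContext context;

    @Resource(lookup = "java:/jms/topic/serviceMetadataRequests") // assumed JNDI name
    private Topic requestTopic;

    public List<String> collectServiceData(long timeoutMillis) {
        List<String> replies = new ArrayList<>();

        // Every deployed application that provides the data listens on the topic
        // and answers to the reply destination carried by the request message.
        TemporaryQueue replyQueue = context.createTemporaryQueue();
        context.createProducer()
               .setJMSReplyTo(replyQueue)
               .send(requestTopic, "GET_SERVICE_DATA");

        JMSConsumer consumer = context.createConsumer(replyQueue);
        try {
            Message reply;
            // Keep collecting until no further service answers within the timeout.
            while ((reply = consumer.receive(timeoutMillis)) != null) {
                replies.add(reply.getBody(String.class));
            }
        } catch (JMSException e) {
            throw new IllegalStateException("Could not read reply", e);
        } finally {
            consumer.close();
        }
        return replies;
    }
}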
Thanks to everyone who spent time trying to help me out.

Lookup for EJB subclass by superclass EJB name

I have a parent and child EJB
@Stateless
@Local(MyCoreLocal.class)
@Remote(MyCore.class)
public class MyCoreEjb implements MyCoreLocal, MyCore {
    ...
}

@Stateless
@Local(MyCustomizationLocal.class)
@Remote(MyCustomization.class)
public class MyCustomizationEjb extends MyCoreEjb implements MyCustomizationLocal, MyCustomization {
    ...
}
For architectural reasons at my company, I can't change the MyCore project, but both EJBs are packed together in the same JAR and deployed to JBoss 4.2.3.
The problem is that I have to use MyCustomizationEjb whenever someone calls for MyCoreEjb. How can I override the JNDI entry for MyCoreEjb to point to MyCustomizationEjb, in order to redirect all calls for MyCoreEjb transparently to MyCustomizationEjb?
PS: I have full control over the ejb-jar.xml of the project, but can't change the annotations.
I figured out a way to get around the problem. In reality I didn't need to redirect every call to MyCustomizationEjb; I only needed it for one particular method (at this time).
So my solution was to write a method interceptor on the specific method I wanted and simply "redirect" the execution to MyCustomizationEjb, like this:
public class SpecificMethodInterceptor {

    @EJB
    MyCustomization myCustomization;

    @AroundInvoke
    public Object intercept(InvocationContext ctx) throws Exception {
        // Delegate to the customization EJB instead of proceeding with the intercepted call
        Object result = myCustomization.specificMethod((Param1Type) ctx.getParameters()[0],
                (Param2Type) ctx.getParameters()[1]);
        return result;
    }
}
This way I can now call the extended specificMethod transparently.
I know this is not the most maintainable or scalable solution (since I'll need one interceptor for each method I want to override), but given this particular project's limitations I believe it was the best choice.
Note: there is no problem with not continuing the execution (i.e. not calling ctx.proceed()), because this interceptor is the last one called before the execution reaches the EJB. The only way it could go wrong is if someone adds a method interceptor on the EJB itself, which would then be skipped. But that is not a problem in this particular project.
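Since only ejb-jar.xml can be changed, the interceptor binding itself can live there; a rough sketch of such a binding (the package name is illustrative):
<assembly-descriptor>
    <interceptor-binding>
        <ejb-name>MyCoreEjb</ejb-name>
        <interceptor-class>com.example.SpecificMethodInterceptor</interceptor-class>
        <method>
            <method-name>specificMethod</method-name>
        </method>
    </interceptor-binding>
</assembly-descriptor>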

(DataNucleus) JDO - Service / Repository Layering and #Transactional

For personal education I am currently developing a little application framework around Guice to learn-by-doing how Spring etc. work behind the scenes.
Intro
Just for the sake of context, here is what I have so far and what I plan to do, so you get a feeling for what I am trying to achieve:
Context (Core)
  ApplicationContext/-Configuration
  Modules (auto-discovered, setting up the Guice bindings)
Extensions
  Config: @Config
  Locale: @Locale and i18n services
  Resources: @Property, @Resource and some classes providing easy access to resources
  Persistence: Problems - there we go!
Question
I'd like to use the JDO standard (and its reference implementation DataNucleus) for the persistence layer. Setting up the PersistenceManagerFactory was easy, so was using it in a basic manner. I am however targeting a typical service / repository layer architecture, e.g.:
Person
PersonRepository (JDO)
PersonService (Transactions, using PersonRepository)
That alone wouldn't be too hard either, but as soon as I tried properly integrating transactions into the concept I got a bit lost.
Desired
class PersonService {

    @Transactional(TxType.REQUIRED)
    public Set<Person> doX() {
        // multiple repository methods called here
    }
}

class PersonRepository {

    private PersistenceManagerFactory pmf;

    public Set<Person> doX() {
        try (PersistenceManager pm = pmf.getPersistenceManager()) {
            pm.....
        }
    }
}
Difficulties
DataNucleus supports RESOURCE_LOCAL (pm.currentTransaction()) as well as JTA transactions, and I would like to support both as well (the user should not have to distinguish between the two outside of the configuration). The user should not have to bother with transaction handling at all; that is the job of the annotation's method interceptor (I guess).
I'd love to support the @Transactional annotation (from JTA) that can be placed on service layer methods. Knowing that this annotation is not per se available in JDO, I thought it could be made usable as well.
How exactly should the repository layer "speak" JDO? Should each method get a PersistenceManager(Proxy) from the PersistenceManagerFactory and close it afterwards (as in the example), or should a PersistenceManager be injected (rather than the factory)? Should each method close the PersistenceManager (in both scenarios)? That would not work with RESOURCE_LOCAL transactions, I guess, since a transaction is bound to one PersistenceManager.
What I tried
I have a JDOTransactionalInterceptor (working with pmf.getPersistenceManagerProxy()) and a JTATransactionalInterceptor (very similar to https://github.com/HubSpot/guice-transactional/blob/master/src/main/java/com/hubspot/guice/transactional/impl/TransactionalInterceptor.java, working with a ThreadLocal).
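For the RESOURCE_LOCAL case, the rough shape of such an interceptor could look like this (a minimal sketch assuming Guice AOP / aopalliance interceptors, with repositories obtaining the current PersistenceManager from the ThreadLocal holder; all class and method names are illustrative):
import javax.inject.Inject;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Transaction;

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class JdoTransactionalInterceptor implements MethodInterceptor {

    private static final ThreadLocal<PersistenceManager> CURRENT = new ThreadLocal<>();

    @Inject
    private PersistenceManagerFactory pmf;

    /** Repositories call this instead of opening their own PersistenceManager. */
    public static PersistenceManager currentPersistenceManager() {
        return CURRENT.get();
    }

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        if (CURRENT.get() != null) {
            // Already inside a transaction (TxType.REQUIRED semantics): just join it.
            return invocation.proceed();
        }
        PersistenceManager pm = pmf.getPersistenceManager();
        CURRENT.set(pm);
        Transaction tx = pm.currentTransaction();
        try {
            tx.begin();
            Object result = invocation.proceed();
            tx.commit();
            return result;
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
            CURRENT.remove();
            pm.close();
        }
    }
}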
Summary
I am aware that my question may not be as clear as desired and that it mixes the service / repository layer questions (which are my main problem, I think) with the transaction handling (which I could figure out once I understand how to properly use the PMF/PM in the repository layer).
There is no scope à la RequestScoped etc. I just want the first @Transactional method call to be the starting point for the whole thing (and that is the point: is this impossible, do the PMF/PM have to be scoped beforehand, and should I direct my thinking in that direction?)
Thanks for any clarification / help!

Programmatically using components in OSGi

In my application, using services by themselves is pretty useless. You always need some external configuration information for a service to be usable.
Components coupled with ConfigurationAdmin make sense, since then for each configuration I create, a component instance will be created. This is just perfect for my use case.
Now, the question arises, what if I'd like to use a component from an other bundle programmatically? Does this make sense?
I know I could export the component as a service yet again and consume that from other beans, but let's say I have a servlet where the user can create the configurations, and for each configured instance there is a list of actions; when the user clicks an action, I need to find the appropriate component and execute the action on it.
What would be the best way to implement this functionality on top of OSGi?
"Using a component from another bundle programatically" sounds exactly like OSGi Services to me.
This method retrieves the osgi service (iso having the osgi container wire the dependencies):
public class ServiceLocator {

    public static <T> T getService(final Class<T> clazz) {
        final BundleContext bundleContext = FrameworkUtil.getBundle(clazz).getBundleContext();
        // OSGi uses the order of registration if multiple services are found
        final ServiceReference<T> ref = bundleContext.getServiceReference(clazz);
        return bundleContext.getService(ref);
    }
}
I used this when introducing DS in an existing project which did not use DS everywhere. Not all components in the project were instantiated as OSGi DS components; anywhere I needed to access a DS component from classes instantiated by other means, I used this method.
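For the configuration-per-instance scenario from the question, the same idea can be narrowed down with a service filter, assuming each configured component instance is registered as a service with a distinguishing property (here "name", which is an assumption):
import java.util.Collection;

import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

public class ConfiguredServiceLocator {

    /** Looks up the service instance whose "name" property matches the configured instance. */
    public static <T> T getService(BundleContext ctx, Class<T> clazz, String name)
            throws InvalidSyntaxException {
        Collection<ServiceReference<T>> refs =
                ctx.getServiceReferences(clazz, "(name=" + name + ")");
        return refs.isEmpty() ? null : ctx.getService(refs.iterator().next());
    }
}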

JMX MXBean Attributes all UNDEFINED - Spring 3.0.x/Tomcat 6.0

I've been trying to get a sample JMX MXBean working in a Spring-configured webapp, but any basic attributes on the MXBean are coming up as UNDEFINED when I connect with jconsole.
Java interface/classes:
public interface IJmxBean { // marker interface for Spring config, see below
}

public interface MgmtMXBean { // lexical convention for MXBeans - management interface
    public int getAttribute();
}

public class Mgmt implements IJmxBean, MgmtMXBean { // actual JMX bean

    private IServiceBean serviceBean; // service bean injected by Spring
    private int attribute = 0;

    @Override
    public int getAttribute() {
        if (serviceBean != null) {
            attribute = serviceBean.getRequestedAttribute();
        }
        return attribute;
    }

    public void setServiceBean(IServiceBean serviceBean) {
        this.serviceBean = serviceBean;
    }
}
Spring JMX config:
<beans>
    <context:component-scan base-package="...">
        <context:include-filter type="assignable" expression="...IJmxBean" />
    </context:component-scan>
    <context:mbean-export />
</beans>
Here's what I know so far:
The <context:component-scan> element is correctly instantiating a bean named "mgmt". I've got logging in a zero-argument public constructor that indicates it gets constructed.
<context:mbean-export /> is correctly and automatically detecting and registering the MgmtMXBean interface with my Tomcat 6.0 container. I can connect to the MBeanServer in Tomcat with jconsole and drill down to the Mgmt MXBean.
When examining the MXBean, "Attribute" is always listed as UNDEFINED, but jconsole can tell the correct type of the attribute. Further, hitting "Refresh" in jconsole does not actually invoke the getter method of "Attribute"- I have logging in the getter method to indicate if it is being invoked (similar to the constructor logging that works) and I see nothing in the logs.
At this point I'm not sure what I'm doing wrong. I've tried a number of things, including constructing an explicit Spring MBeanExporter instance and registering the MXBean by hand, but it either results in the MBean/MXBean not getting registered with Tomcat's MBean server or an Attribute value of UNDEFINED.
For various reasons, I'd prefer not to have to use Spring's @ManagedResource/@ManagedAttribute annotations.
Is there something that I'm missing in the Spring docs or MBean/MXBean specs?
ISSUE RESOLVED: Thanks to prompting by Jon Stevens (above), I went back and re-examined my code and Spring configuration files:
Throwing an exception in the getAttribute() method is a sure way to get "Unavailable" to show up as the attribute's value in JConsole. In my case:
The Spring JMX config file I was using was lacking the default-autowire="" attribute on the root <beans> element;
The code presented above checks whether serviceBean != null. Apparently I write better code on stackoverflow.com than in my test code, since my test code wasn't checking for that. Nor did I have implements InitializingBean or a @PostConstruct check for serviceBean != null, like I normally do on almost all the other beans I use (see the sketch after this list);
The code invoking the service bean was before the logging, so I never saw any log messages about getter methods being entered;
JConsole doesn't report when attribute methods throw exceptions;
The NPE did not show up in the Tomcat logs.
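A sketch of the kind of check mentioned above (assuming javax.annotation.PostConstruct is on the classpath; the method name is illustrative):
@PostConstruct
public void verifyDependencies() {
    // fail fast at startup instead of silently returning stale data from getAttribute()
    if (serviceBean == null) {
        throw new IllegalStateException("serviceBean was not injected into Mgmt");
    }
}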
Once I resolved the issue with serviceBean == null, everything worked perfectly. Regardless, +1 to Jon for providing a working demo, since there are literally 50 different ways to configure MBeans/MXBeans within Spring.
I've recently built a sample Spring-based webapp that very cleanly enables JMX for the latest versions of Spring, Hibernate and Ehcache.
It has examples for both EntityManager based access and DAO access (including transactions!). It also shows how to do annotation based injection in order to negate having to use Spring's xml config for beans. There is even a SpringMVC based example servlet using annotations. Basically, this is a Spring based version of a fairly powerful application server running on top of any servlet engine.
It isn't documented yet, but I'll get to that soon. Take a look at the configuration files and source code and it should be pretty clear.
The motivation behind this is that I got tired of all of the crazy blog posts with 50 different ways to set things up and finally made a single simple source that people can work from. It is up on github so feel free to fork the project and do whatever you want with it.
https://github.com/lookfirst/fallback
