I am trying to use PicoContainer in my project.
I know very little about it but want to give it a shot.
As I understand it, I have to create a PicoContainer and register components with it.
I did this
public static PicoContainer getPicoContainer() {
    final MutablePicoContainer pico = new DefaultPicoContainer();
    pico.registerComponentImplementation(X.class);
    pico.registerComponentImplementation(A.class);
    pico.registerComponentImplementation(C.class);
    pico.registerComponentImplementation(V.class);
    pico.registerComponentImplementation(T.class);
    pico.registerComponentImplementation(D.class);
    return pico;
}
Now my problem is that for any component to get another component, it needs a handle on pico.
To access any component, it has to do this:
A juicer = pico.getComponent(A.class);
So, in the constructor for each of them, I need to pass in the pico object? I could just as easily replace this with a factory. What's the point then? I'm sure I'm missing something here.
Would appreciate any help.
A common pattern is to have a factory for the main container somewhere.
For a stand-alone app it will probably be the "public static void main()" entry point; for a web app it will be a front-controller servlet, filter, or context listener (Pico has a support class for the listener case).
So at the entry point you configure the container the way you showed above in "public static PicoContainer getPicoContainer()", then you pass control to an entry point inside the container. The nice way is to have at least one of the container's components implement the lifecycle interface (http://picocontainer.codehaus.org/lifecycle.html); then you start() the container and have everything wired up.
In the normal case you should never access the container itself beyond entry configuration and such things as special factories or transaction demarcation.
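The piece the question is missing is that components never ask the container for anything: they declare their dependencies as constructor parameters, and the container satisfies them by reflection when it instantiates them. A plain-Java sketch of that idea, with made-up `Juicer`/`Peeler` classes and no PicoContainer dependency (only the composition root ever "is" the container):

```java
// Hedged sketch of constructor injection: the component states what it needs
// in its constructor and never touches any container or factory.
class Peeler {
    void peel() { System.out.println("peeling"); }
}

class Juicer {
    private final Peeler peeler;

    // The dependency arrives from outside; Juicer has no handle on pico.
    Juicer(Peeler peeler) {
        this.peeler = peeler;
    }

    String juice() {
        peeler.peel();
        return "juice";
    }
}

public class CompositionRoot {
    // This hand-wiring is exactly what pico.getComponent(Juicer.class) would
    // do for you after registerComponentImplementation(...) calls.
    public static Juicer wire() {
        return new Juicer(new Peeler());
    }

    public static void main(String[] args) {
        System.out.println(wire().juice());
    }
}
```

With a container, only this one composition root changes; the components themselves stay container-agnostic, which is what distinguishes dependency injection from passing a factory around.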
Related
I'm facing an issue in an OSGi context with Declarative Services which I don't understand. Let me try to explain:
I have a FooService which needs the FooManagerService (1..1 static). The FooManagerService references the FooService, but it's optional (0..n dynamic).
The goal is, if a FooService becomes available, it registers (bind() method is called) at the FooManagerService, so that the FooManagerService always has a list of all available FooService implementations in the system.
It works well on Windows, but on Linux I encounter the problem, that the FooService becomes active (activate() method is called), but that isn't recognized by the FooManagerService (bind() method isn't called). If I disable and enable FooService manually on the OSGi console, it is recognized by the FooManagerService.
I don't understand why this happens. It can be avoided by increasing the start level of the bundle where FooServiceImpl is located, but that feels like an ugly workaround, which is why I would like to understand what's going on.
I attach a picture which describes the references between the services. Any hint is appreciated. Thanks in advance!
Best regards
Steffi
[Service Manager Diagram]
There is a cycle here that should be OK according to the theory. However, there are a number of problems in practice.
First, your implementations should be immediate=true. This prevents a nasty problem where DS cannot get a service because it is still being initialised; i.e. both the FooManager and the FooService impls must be immediate. This is described in OSGi enRoute Cycles.
However, there is one more problem :-( Apache Felix DS has a bug that causes exactly the effect you describe. The bug is related to bundle ordering and is reported in Apache Felix JIRA 5618.
If this DS bug is the problem, then there is unfortunately only one solid solution. Unfortunate, because it requires you to descend into the bowels of OSGi. The solution is to register the manager service by hand and ensure it is not registered by DS:
@Component(service = {}, immediate = true)
public class FooManagerImpl implements FooManager {

    private ServiceRegistration<FooManager> registration;

    @Reference
    volatile List<FooService> foos;

    @Activate
    void activate(BundleContext context, Map<String, Object> properties) {
        registration = context.registerService(FooManager.class, this,
                new Hashtable<String, Object>(properties));
    }

    @Deactivate
    void deactivate() {
        registration.unregister();
    }
    ...
}
The trick here is that the FooManager does not register its service until it has been activated, whereas normally it is registered before it is activated.
I know Apache Felix is working on it but do not know how far they are.
Anyway, cycles always suck. Sadly, they are not always preventable but I would certainly try.
Note: registering a service manually will not create a capability. If you use Requirements/Capabilities you should add a service capability in the manifest to make the resolver work. If this line is gibberish to you, ignore it.
This seems like it should be trivial, but both Google and Stack Overflow seem to be just as uncooperative as the Spring docs (or I just don't know where to look).
My Spring Boot application needs to manually instantiate certain classes. Some of the classes have dependencies, so I can't use .newInstance(); instead, I figure I need to ask Spring to give me the instance from its DI container. Something like
Class<? extends Interface> className = service.getClassName();
Interface x = SpringDI.getInstance(className);
But I can't seem to find any way of doing this. What should I do?
EDIT
Class names are resolved dynamically; I have updated my sample pseudo-code to reflect that.
How about autowiring the ApplicationContext into the component in which you want to instantiate those classes? As ApplicationContext implements the BeanFactory interface, you can call its getBean() method.
Something like:
@Autowired
private ApplicationContext applicationContext;
[...]
Interface x = applicationContext.getBean(className);
I am not sure why you would want to do this, though, as it defies the purpose of using Spring (you could just skip Spring and use Java's reflection API instead).
Please refer to this part of the JavaDocs: http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/beans/factory/BeanFactory.html
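Conceptually, the lookup the answer relies on treats the ApplicationContext as a type-indexed registry of pre-built instances. A minimal, Spring-free sketch of that mechanism (the `Registry` class and its names are made up for illustration; Spring's real getBean additionally resolves subtypes, qualifiers, and scopes):

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of what getBean(Class) boils down to: a type-keyed map of
// singletons, where the Class key makes the cast back out type-safe.
public class Registry {
    private final Map<Class<?>, Object> beans = new HashMap<>();

    public <T> void register(Class<T> type, T instance) {
        beans.put(type, instance);
    }

    public <T> T getInstance(Class<T> type) {
        Object bean = beans.get(type);
        if (bean == null) {
            throw new IllegalStateException("No bean of type " + type.getName());
        }
        // type.cast avoids an unchecked cast; the register signature
        // guarantees the stored object matches the key.
        return type.cast(bean);
    }
}
```

The question's dynamically resolved `Class<? extends Interface>` works unchanged with such a lookup, which is why `applicationContext.getBean(className)` covers the use case.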
I have a Spring Integration application with several FileTailingMessageProducers and DirectMessageChannels created programmatically -- i.e. not through XML configuration, but within an ApplicationListener<ContextRefreshedEvent>. Now I would like to monitor the message channels using JMX. I guess I will have to add them using my integrationMBeanExporter.
This is what I tried:
DirectChannelMetrics directChannelMetrics = new DirectChannelMetrics(tailedLines, "tailedLines");
integrationMBeanExporter.getServer().registerMBean(directChannelMetrics, new ObjectName("d:foo=foo"));
Yet I am getting the following Exception:
javax.management.NotCompliantMBeanException: MBean class org.springframework.integration.monitor.DirectChannelMetrics does not implement DynamicMBean, and neither follows the Standard MBean conventions
It is surprising to me that DirectChannelMetrics does not fulfill the JMX requirements, since when I look into my application with jvisualvm I can see other beans of this type registered without problems.
Any ideas?
On one side, MBeanExporter does this before registering a bean as an MBean:
return new StandardMBean(bean, ((Class<Object>) ifc));
On the other side, I think your logic smells a bit. It looks abnormal to create MessageChannels at runtime, especially ones that exist only for JMX export.
I can agree with the dynamic FileTailingMessageProducers, but it seems to me the dynamic channels could be avoided by refactoring the logic around predefined channels.
You could leverage Spring's MBeanExporter.registerManagedResource(directChannelMetrics, new ObjectName("d:foo=foo")). Spring will generate a management interface for the instance of the DirectChannelMetrics class. But the DirectChannelMetrics class needs either to implement an MBean/MXBean interface or to match the current MBeanInfoAssembler expectations (be marked with the @ManagedResource annotation in the case of MetadataMBeanInfoAssembler, or implement one of the specified interfaces in the case of InterfaceBasedMBeanInfoAssembler, etc.).
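The NotCompliantMBeanException in the question comes from the standard-MBean naming convention: a class `Foo` must implement an interface literally named `FooMBean`, or the interface must be supplied explicitly via StandardMBean, which is exactly the wrapping MBeanExporter performs. A self-contained sketch against the platform MBean server, using made-up `MyCounter`/`CounterMBean` names:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

// The management interface a standard MBean must expose.
interface CounterMBean {
    int getCount();
}

// Its name is MyCounter, so JMX would look for "MyCounterMBean" -- not found,
// hence direct registration fails, just like with DirectChannelMetrics.
class MyCounter implements CounterMBean {
    public int getCount() { return 42; }
}

public class JmxDemo {
    public static int register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("d:type=Counter");
        // StandardMBean names the management interface explicitly -- the same
        // trick Spring uses before registering an arbitrary bean.
        server.registerMBean(new StandardMBean(new MyCounter(), CounterMBean.class), name);
        return (Integer) server.getAttribute(name, "Count");
    }
}
```

Wrapping the metrics object the same way (or going through registerManagedResource, which does it for you) is why the other channels show up fine in jvisualvm.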
I have a java class where I need to have full control the time and place of instantiation - therefore I can't initialize it as blueprint bean.
In the same bundle as that class I have a bean that I export as an OSGi-service. I would need to get access to that very service instance from the previously explained non-blueprint class.
I can't just perform a service lookup, as there are other services implementing the same interface as well. Creating a second (internal) instance of the service class will not work either.
So, as a recap:
Before I used Blueprint, I had the service implementation as a classic singleton, which let me register the same instance as a service in the activator class and later access it from within the bundle. But with Blueprint (as far as I know) making the service class a "classic" singleton is not possible, because then Blueprint could not create the service instance itself.
I can't perform a service lookup because there is more than one service registered implementing the service interface.
My current solution is to query all services implementing the interface and loop over the list to find the one that is an instance of the implementation class I want.
BundleContext ctx = FrameworkUtil.getBundle(getClass()).getBundleContext();
ServiceReference<?>[] refs = ctx.getServiceReferences(ServiceInterface.class.getName(), null);
ServiceImpl provider = null;
for (ServiceReference<?> ref : refs) {
    // Call getService() only once per reference, so the use count is not bumped twice
    Object candidate = ctx.getService(ref);
    if (candidate instanceof ServiceImpl) {
        provider = (ServiceImpl) candidate;
    } else {
        ctx.ungetService(ref);
    }
}
But I do not really like the idea of that approach.
Is there any better way to solve this? Maybe some way to request a service instance directly from the Blueprint container? I found the BlueprintContainer interface, which has a method to look up instances by their ID -- but the only way I found to get hold of the BlueprintContainer is to inject it into the class, which brings me back to the original problem that the class cannot be a Blueprint bean.
Just set a property when exporting the service so you can filter for it. This way you can distinguish your service impl from the others.
I also propose using a ServiceTracker for your service, so you do not have to redo the lookup for every call to the service. If you do not use a ServiceTracker, make sure to unget the service after use.
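In Blueprint, that distinguishing property goes into the service element's service-properties. A hedged sketch (the `impl.flavor` key and `com.example.*` names are placeholders, not from the question):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="myImpl" class="com.example.internal.ServiceImpl"/>

    <!-- Export with a distinguishing property so callers can filter on it -->
    <service ref="myImpl" interface="com.example.ServiceInterface">
        <service-properties>
            <entry key="impl.flavor" value="internal"/>
        </service-properties>
    </service>

</blueprint>
```

The non-Blueprint class can then select exactly this registration with an LDAP filter, e.g. ctx.getServiceReferences(ServiceInterface.class.getName(), "(impl.flavor=internal)"), or pass the same filter to a ServiceTracker, instead of instanceof-checking every match.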
In my application, using services by themselves is pretty useless. You always need some external configuration information for a service to be usable.
Components coupled with ConfigurationAdmin makes sense, since then for each configuration I create, a component instance will be created. This is just perfect for my use-case.
Now the question arises: what if I'd like to use a component from another bundle programmatically? Does this make sense?
I know I could export the component as a service yet again and consume that from other beans. But let's say I have a servlet where the user can create the configurations, and each configured instance has a list of actions; when the user clicks an action, I need to find the appropriate component instance and execute the action on it.
What'd be the best way to implement this functionality on top of OSGi?
"Using a component from another bundle programatically" sounds exactly like OSGi Services to me.
This method retrieves the OSGi service (instead of having the OSGi container wire the dependencies):
public class ServiceLocator {

    public static <T> T getService(final Class<T> clazz) {
        final BundleContext bundleContext = FrameworkUtil.getBundle(clazz).getBundleContext();
        // If several services match, OSGi returns the one with the highest
        // service ranking, ties broken by the lowest (oldest) service id
        final ServiceReference<T> ref = bundleContext.getServiceReference(clazz);
        return ref == null ? null : bundleContext.getService(ref);
    }
}
I used this when introducing DS in an existing project that did not use DS everywhere. Not all components in the project were instantiated as OSGi DS components; anywhere I needed to access a DS component from classes instantiated by other means, I used this method.