I'm fairly new to OSGi and am trying to get a functional proof of concept together.
The setup: my common API lives in a bundle creatively named common-api.jar. It has no bundle activator, but it exports all its interfaces; the one of interest here is DatabaseService.java.
I then have a second bundle called systemx-database-service that implements the DatabaseService interface. This works fine: in the activator of the implementation bundle I test the connection to the database and select some arbitrary values. I also register the service I want to make available to the other bundles like so:
context.registerService(DatabaseService.class.getName(), new SystemDatabaseServiceImpl(context), new Properties());
The basic idea is that when you look up a service reference for DatabaseService, you get back the SystemDatabaseService implementation.
When I inspect the service, the output is this:
-> inspect s c 69
System Database Service (69) provides services:
----------------------------------------------
objectClass = za.co.xxx.xxx.common.api.DatabaseService
service.id = 39
which would lead me to believe that if I do this in a test bundle:
context.getService(context.getServiceReference(DatabaseService.class));
I should get back an instance of DatabaseService, but alas, no such luck: it simply seems like it cannot find the service. Stick with me here, my story gets stranger.
Figuring there was nowhere to go but up, I wrote this monstrosity:
Bundle[] bundles = context.getBundles();
for (Bundle bundle : bundles) {
    if (bundle.getSymbolicName().equals("za.co.xxx.xxx.database-service")) {
        ServiceReference[] registeredServices = bundle.getRegisteredServices();
        for (ServiceReference ref : registeredServices) {
            DatabaseService service = (DatabaseService) context.getService(ref);
            // use service here.
        }
    }
}
Now I can actually see the service reference, but I get this error:
java.lang.ClassCastException: za.co.xxx.xxx.database.service.impl.SystemDatabaseServiceImpl cannot be cast to za.co.xxx.xx.common.api.DatabaseService
which is crazy, since the implementation clearly implements the interface!
Any help would be appreciated. Please keep in mind I'm very new to the OSGi way of thinking, so my whole approach here might be flawed.
Oh, and if anyone wants the manifests I can post them. I'm using the maven-bnd-plugin to build, and running on Felix.
thanks
Nico
The test bundle must resolve the DatabaseService interface from the same exporter as the SystemDatabaseServiceImpl does. If it does not, getServiceReference is documented to return null even if a service is found. By locating the bundle manually and attempting the cast, you're demonstrating why getServiceReference behaves this way: if it returned services with an incompatible class, Java casts would fail.
I would recommend printing DatabaseService.class.getClassLoader() in both the impl bundle and the test bundle to prove whether they see the same class. If they don't, you need to adjust your OSGi MANIFEST.MF metadata to ensure that they have a consistent view of the interface class.
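The kind of ClassCastException you hit can be reproduced without OSGi at all. Below is a minimal, framework-free sketch (the class names are mine, purely illustrative): the same class loaded by two different class loaders yields two distinct Class objects, and instances are not assignable across them.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class TwoLoadersDemo {

    /** The class we will deliberately load a second time. */
    public static class Payload {}

    /** Loads the target class itself instead of delegating, yielding a duplicate Class. */
    static class IsolatingLoader extends ClassLoader {
        private final String target;

        IsolatingLoader(String target) {
            super(TwoLoadersDemo.class.getClassLoader());
            this.target = target;
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (!name.equals(target)) {
                return super.loadClass(name, resolve); // delegate everything else normally
            }
            try (InputStream in = getParent().getResourceAsStream(name.replace('.', '/') + ".class")) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[4096];
                for (int n; (n = in.read(buf)) != -1; ) {
                    out.write(buf, 0, n);
                }
                byte[] bytes = out.toByteArray();
                // define the same bytes again under this loader: a second, distinct Class
                return defineClass(name, bytes, 0, bytes.length);
            } catch (Exception e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String name = Payload.class.getName();
        Class<?> duplicate = new IsolatingLoader(name).loadClass(name);

        System.out.println(duplicate == Payload.class); // false: two Class objects, same name
        Object instance = duplicate.getDeclaredConstructor().newInstance();
        System.out.println(instance instanceof Payload); // false: a cast would throw ClassCastException
    }
}
```

If the two getClassLoader() printouts in your bundles differ for DatabaseService, this is exactly the situation you are in.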
For example, is the DatabaseService interface included in both the test and impl bundles? If so, you need to move that interface either to the impl bundle (with Export-Package) or to a third interface bundle that uses Export-Package. Then adjust the other bundles to Import-Package it.
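To illustrate, with the package name from your inspect output (the bundle symbolic name and the versions here are assumptions), common-api's MANIFEST.MF would be the only exporter of the interface package:

```
Bundle-SymbolicName: za.co.xxx.xxx.common-api
Export-Package: za.co.xxx.xxx.common.api;version="1.0.0"
```

while the impl bundle and the test bundle both import it instead of embedding their own copy:

```
Import-Package: za.co.xxx.xxx.common.api;version="[1.0,2.0)"
```

With a single exporter, the framework wires both bundles to the same DatabaseService class, so both the lookup and the cast work.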
Related
I have two OSGi bundles: TestCommons, which is the service provider bundle, and TestMyBundle, which consumes that service.
I have used Declarative Services in TestMyBundle, so the Activator class of TestMyBundle has a setter and unsetter: the setter method is called whenever the TestCommons service is found, and the unsetter is called when the TestCommons service is unregistered from OSGi.
Now I want to update the TestCommons bundle programmatically, so I used the update() method of the org.osgi.framework.Bundle interface to update the existing bundle.
However, when I update the bundle, the setter and unsetter methods of TestMyBundle are not called and the bundle is not notified. How can I notify the dependent bundle of the update programmatically?
One way is refreshing, but I am not able to refresh the bundles manually.
Here is the code that I have written
Bundle[] bundles = context.getBundles();
String symbolicName = "TestCommons";
try {
    FrameworkWiring frameworkWiring = null;
    for (Bundle b : bundles) {
        if (b.getSymbolicName().equalsIgnoreCase(symbolicName)) {
            b.update(new FileInputStream(new File("/home/temp/TestCommons-0.0.1-SNAPSHOT.jar")));
            frameworkWiring = context.getBundle().adapt(FrameworkWiring.class);
            break;
        }
    }
    frameworkWiring.refreshBundles(null);
} catch (Exception e) {
    System.out.println("Exception occurred while starting...");
    e.printStackTrace();
}
Now, the adapt() method is returning null, so refresh cannot be called. Please tell me what the issue is here and what other approach could be taken to update the bundle.
Any leads would be appreciated. Thanks...
Your TestMyBundle is not updated because the interface package of the service was updated, so you are on the right track: you need a refresh of TestMyBundle to pick up the change.
In practice you can often avoid this by using a separate bundle for the API. As long as you do not update the service interface itself (and that should be rare), you can then simply update the service bundle, and the declarative services component in the client will pick up the new service.
Now about refreshing bundles: you are right that you need FrameworkWiring for this, but only the system bundle can be adapted to FrameworkWiring. So this should do the trick:
Bundle systemBundle = context.getBundle(0);
systemBundle.adapt(FrameworkWiring.class).refreshBundles(null);
Note that the FrameworkWiring method is refreshBundles, not refresh; passing null refreshes all bundles that have pending updates or removals.
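Putting update and refresh together, the whole flow might look like the following sketch. It needs a running OSGi framework, so treat it as untested illustration; the helper class name is mine, and the FrameworkListener is there because refreshBundles runs asynchronously.

```java
import java.io.File;
import java.io.FileInputStream;
import java.util.Collections;
import java.util.concurrent.CountDownLatch;

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.wiring.FrameworkWiring;

public final class BundleUpdater {

    /** Updates the bundle with the given symbolic name, then refreshes it and waits. */
    public static void updateAndRefresh(BundleContext context, String symbolicName, File newJar)
            throws Exception {
        for (Bundle b : context.getBundles()) {
            if (symbolicName.equals(b.getSymbolicName())) {
                try (FileInputStream in = new FileInputStream(newJar)) {
                    b.update(in);
                }
                // Only the system bundle (id 0) can be adapted to FrameworkWiring.
                FrameworkWiring wiring = context.getBundle(0).adapt(FrameworkWiring.class);
                // refreshBundles is asynchronous; the listener tells us when it is done.
                CountDownLatch done = new CountDownLatch(1);
                wiring.refreshBundles(Collections.singleton(b), event -> done.countDown());
                done.await();
                return;
            }
        }
        throw new IllegalArgumentException("no bundle named " + symbolicName);
    }
}
```

After the refresh completes, dependent bundles are rewired, and DS components bound to the old revision are deactivated and rebound, which is what triggers the unsetter/setter pair.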
I'm having a problem with Apache Camel that I can't understand. I have this issue with JBoss Fuse 6.3.0, which bundles Apache Camel 2.17.0.redhat-630224.
I have a simple route: it downloads files from an FTP server, transforms them into POJOs (this part works), then aggregates them into a single POJO which is marshalled and saved to a file.
In JBoss Developer Studio, I test this by doing "Run as... > local Camel context". Behind the scenes, this simply runs mvn clean package org.apache.camel:camel-maven-plugin:run. Whether I do it from the IDE, or manually in my terminal, the route works fine.
However, when I build an OSGi bundle (with mvn clean install) and deploy it into JBoss Fuse (Apache Karaf), the application deploys successfully and the download/transform parts work fine, but the aggregation fails.
The aggregation is handled by a custom class implementing org.apache.camel.processor.aggregate.AggregationStrategy (documented here). The problem is that the newExchange parameter I receive always has a null body. The oldExchange being null the first time is expected, but the newExchange's body? (Edit: the correlation expression is a simple constant, since all POJOs are aggregated together.)
Even weirder: if I modify the route to marshal my POJOs just before the aggregator, I receive a String with the expected data. This proves (I think!) that the transformations work as expected. Also, Fuse's logs show no error messages (neither at deploy time nor at runtime). This looks a lot like a configuration or dependency issue, but for the life of me I can't find any similar issue reported anywhere.
Has anyone ever seen something similar before? Or at least, do you have any tips as to what could be the problem's source?
Edit: here's the relevant part of the route:
<choice>
    <!-- one <when> per file which produces a POJO -->
    <when id="_when_some_xml">
        <simple>${file:onlyname} == 'something.xml'</simple>
        <to id="_to2" uri="ref:transform_something_xml"/>
    </when>
</choice>
<!-- if I add a marshalling step here, I receive non-null exchanges in the
     aggregator... but they're Strings and not the POJOs I want. -->
<aggregate completionSize="12" id="_aggregate_things"
        strategyMethodAllowNull="true" strategyRef="MyAggregator">
    <correlationExpression>
        <constant trim="false">true</constant>
    </correlationExpression>
    <log id="_log_things_aggregated" message="Data aggregated."/>
    <convertBodyTo id="_convertBodyTo_anotherClass" type="net.j11e.mypackage.MyClass"/>
    <!-- [...] next: marshal and save to file -->
Note: I tried with strategyMethodAllowNull="false", didn't change a thing.
And here's the Aggregator:
public class EpgAggregator implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // first message being aggregated: no oldExchange, simply keep the message
        if (oldExchange == null) {
            System.out.println("Old exchange is null");
            return newExchange;
        }
        if (newExchange.getIn().getBody(MyClass.class) == null) {
            System.out.println("newExchange body is null");
        }
        // ...
The second if triggers every time, even for the first aggregation, if I remove the return in the first if.
Edit
Ok, so thanks to noMad17n's comment below, I had a breakthrough: the problem has to do with class loading.
When I got the newExchange's body without specifying a class (Object newBody = newExchange.getIn().getBody();), the result was not null, but I couldn't cast it to MyClass: I got a java.lang.ClassCastException: net.j11e.MyClass cannot be cast to net.j11e.MyClass.
Reading about how OSGi can lead to multiple classloaders loading the same class, I renamed MyClass to MyOtherClass and after a reboot (??), everything worked. However, after uninstalling my bundle and reinstalling it, the problem is back.
osgi:find-class MyClass returns two bundles: mine and dozer-osgi, which is (I guess) logical since MyClass instances are produced by a dozer transformation.
Ok, so maybe I should not uninstall and reinstall bundles very often but use osgi:update, osgi:refresh, or whatever. But still, there should be a way to make this work? Something else than uninstalling my bundle, refreshing/updating dozer, stopping/restarting Fuse, and reinstalling my bundle, hoping that one of the aforementioned operations somehow makes the correct classes be loaded?
For those who might encounter this issue in the future, here's a recap:
- The problem is caused by an older version of a package exported by your bundle still being used by another bundle (in my case, dozer-osgi). Here, this causes the cast to MyClass to fail, which makes getBody return null (getMandatoryBody would throw an exception instead, etc.).
- To identify the bundle causing the issue, use the command osgi:find-class MyClass. This will return your bundle... and another.
- Refresh that bundle by finding its bundle id (osgi:list | grep thebundle) and refreshing it (osgi:refresh 123). You can also refresh the bundle from Fuse's web UI (hawtio): OSGi > bundles > your bundle > the refresh button at the top of the page (next to the start, stop, update, and uninstall buttons).
That's more a mitigation than a proper solution to this issue. The real solution would probably involve fixing the package import/export rules or something, but this is beyond my current skills.
Fair warning, too: sometimes, refreshing dozer-osgi apparently wasn't enough. MyClass was not imported by it anymore (osgi:find-class MyClass would not return dozer-osgi), but I still had my NullPointerException problem. In these rare occurrences, I had to restart Fuse. I don't know why these few cases happened.
I am developing an application that is built on top of Apache Felix and JavaFX. The application can be extended by 3rd-party bundles that implement a specific interface and make it available to the OSGi service registry.
The problem is that those bundles (or plugins) should not be able to retrieve any of the services that are used only internally by my application. An example is a PersistenceService used to save the processed data: plugins are (in my application) by definition not allowed to store any data through that service, but they are allowed to save data through a specific service designed for plugins only.
I had the idea of using the FindHook interface offered by OSGi to filter out those requests, but that didn't work well. Obviously, to make it work, the hook bundle needs to be loaded at the very start, even before my core application gets loaded. I ensured this happens by specifying the start level for this bundle via felix.auto.deploy.install.1 = "file\:bundles/de/zerotask/voices-findhook/0.1-SNAPSHOT/voices-findhook-0.1-SNAPSHOT.jar"
As far as I understood, the start level of the system bundle will be 1 which means that my bundle should always be loaded right after the system bundle.
Here is my implementation of the FindHook interface:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.framework.hooks.service.FindHook;
/**
*
 * @author PositiveDown
*/
public class VoicesFindHook implements FindHook {
private static Logger log = LoggerFactory.getLogger(VoicesFindHook.class);
private static final String[] INTERNAL_BUNDLE_TABLE = new String[]{
"de.zerotask.voices-core-actions",
"de.zerotask.voices-findhook",
"de.zerotask.voices-interfaces-persistable",
"de.zerotask.voices-models",
"de.zerotask.voices-models-actions",
"de.zerotask.voices-services-configuration-internal",
"de.zerotask.voices-services-input-internal",
"de.zerotask.voices-services-licenses-internal",
"de.zerotask.voices-services-modelsmanager-internal",
"de.zerotask.voices-services-persistence-internal",
"de.zerotask.voices-services-window-internal",
"de.zerotask.voices-ui-dialogs-about",
"de.zerotask.voices-ui-dialogs-newprofile",
"de.zerotask.voices-ui-dockable-listview",
"de.zerotask.voices-ui-dockable-properties",
"de.zerotask.voices-ui-layout",
"de.zerotask.voices-utils-io",
"de.zerotask.voices-utils-services",
"de.zerotask.voices-utils-ui"
};
private static final String[] INTERNAL_SERVICES_TABLE = new String[]{
// model services
// configuration service
"de.zerotask.voices.services.configuration.IConfiguration",
// window service
"de.zerotask.voices.services.window.IWindowService",
// persistence services
"de.zerotask.voices.services.persistence.IPathResolver",
"de.zerotask.voices.services.persistence.IPersistenceService"
};
private static final Set<String> INTERNAL_BUNDLES = new HashSet<>(Arrays.asList(INTERNAL_BUNDLE_TABLE));
private static final Set<String> INTERNAL_SERVICES = new HashSet<>(Arrays.asList(INTERNAL_SERVICES_TABLE));
@Override
public void find(BundleContext context, String name, String filter, boolean allServices, Collection<ServiceReference<?>> references) {
// only allow the usage of internal interfaces from internal packages
String symbolicName = context.getBundle().getSymbolicName();
// debug
log.debug("Processing Bundle {} and service {}", symbolicName, name);
// if the service is one of the internal ones, proceed
if (INTERNAL_SERVICES.contains(name)) {
// retrieve the bundle id
log.debug("Service {} is in internal table", name);
// if the name is not in the internal bundle table, remove all service references
if (!INTERNAL_BUNDLES.contains(symbolicName)) {
log.debug("Bundle {} not in internal table => removing service references...", symbolicName);
// remove them
references.clear();
}
}
}
}
The idea is to have a table of "internal bundles" and "internal services". Each time a service is looked up, the hook checks whether it is an internal service. If it is, it also checks whether the calling bundle is an internal bundle; if not, the hook removes all found services from the collection.
I am by far no OSGi expert, but this method should work because it is based on symbolic names, which are unique within a container.
I have tested the above code with two small test bundles. One providing the interface + implementation and the other one consuming it. I changed the hook so it will not return any services for the consumer bundle (to just simply check if it works).
Now my problem is that the consumer bundle somehow gets loaded first, and I have no idea why. This basically breaks the loading order I set in the properties file.
I am not sure if this helps, but the provider bundle's name starts with a 'y', the consumer's with a 't', and the hook's with a 'v'.
The funny thing is, Felix is loading them in alphabetical order.
I would really appreciate any help here.
Services are implicitly available to every bundle – that is the purpose of services after all.
You can work around this with various hacks like FindHooks etc, but as you have already discovered you are constantly fighting against the true nature of the OSGi Framework and services.
It sounds more like you are creating an isolation system between a kernel and a user space, so that you cannot accidentally pollute the user area with kernel services and vice versa. The proper way (IMHO) to achieve this is with a separate OSGi Framework instance for the two areas. It's quite simple to run up a new Framework using the FrameworkFactory API. Then you can expose select packages and services from the kernel using the BundleContext of the system bundle of the user-area Framework.
However as BJ points out in comments, you may be over-engineering this. What's the worst that can happen if the plugins can see your system services? If those services are well designed then the answer should be "not a lot".
I see two options:
- ServicePermission, which is the standard way; or
- ServiceFactory, where you decide which bundles get the real service. Others receive a fake implementation.
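A ServiceFactory-based gate might look like the following sketch. PersistenceService and DenyingPersistenceService are placeholder names for your own types, and the code needs a running OSGi framework, so treat it as untested illustration:

```java
import java.util.Set;

import org.osgi.framework.Bundle;
import org.osgi.framework.ServiceFactory;
import org.osgi.framework.ServiceRegistration;

// Hands the real PersistenceService only to whitelisted bundles;
// every other bundle gets a no-op stub instead.
public class GatedPersistenceFactory implements ServiceFactory<PersistenceService> {

    private final Set<String> allowedBundles;
    private final PersistenceService real;

    public GatedPersistenceFactory(Set<String> allowedBundles, PersistenceService real) {
        this.allowedBundles = allowedBundles;
        this.real = real;
    }

    @Override
    public PersistenceService getService(Bundle bundle,
            ServiceRegistration<PersistenceService> registration) {
        if (allowedBundles.contains(bundle.getSymbolicName())) {
            return real;
        }
        return new DenyingPersistenceService(); // stub that rejects or ignores all calls
    }

    @Override
    public void ungetService(Bundle bundle,
            ServiceRegistration<PersistenceService> registration, PersistenceService service) {
        // nothing to clean up in this sketch
    }
}
```

The factory is registered in place of the real service (context.registerService(PersistenceService.class, factory, props)); the framework then calls getService once per requesting bundle, which is what lets you decide per caller.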
I have a (web-)application that needs special configurations and/or extensions based on the customer using it. I call these additions "plugins", and they are auto-discovered by classpath scanning when the application starts. For extensions that is incredibly easy. Let's say I want a plugin which adds an API that prints "hello world" when the URL /myplugin/greet is called: I just create a @Controller annotated class with the according @RequestMapping, put it in a myplugin.jar, copy that onto the classpath, and that's it.
Problems come up when I want to change some defaults and especially if I want to do this multiple times. Let's say my core application has a config like this:
#Configuration
public class CoreConfiguration {
#Bean
public Set<String> availableModules() {
return Collections.singleton("core");
}
}
Now I have two plugins that don't know about each other (but both know the CoreConfiguration), and they both want to add themselves to the list of available modules. How would I do that? If I only had a single plugin that wants to override the module list, I could override the existing bean from CoreConfiguration, but with two plugins that becomes a problem. What I imagine is something like this:
#Configuration
public class FirstPluginConfiguration {
#Bean
public Set<String> availableModules(Set<String> availableModules) {
Set<String> extendedSet = new HashSet<>(availableModules);
extendedSet.add("FirstPlugin");
return extendedSet;
}
}
Of course a SecondPluginConfiguration would look nearly exactly like this, except that the Set is extended by "SecondPlugin" instead of "FirstPlugin". I tested it to see what would happen: Spring just never calls the First/SecondPluginConfiguration availableModules methods, but it does not show an error either.
Now of course in this case this could easily be solved by using a mutable Set in the CoreConfiguration and then autowiring and extending the set in the other configurations, but for example I also want to be able to add method interceptors to some beans. So for example I might have an interface CrashLogger which has a logCrash(Throwable t) method and in CoreConfiguration a ToFileCrashLogger is created that writes stack traces to files as the name suggests. Now a plugin could say that he also wants to get notified about crashes, for example the plugin wants to ADDITIONALLY send the stacktrace to someone by email. For that matter that plugin could wrap the CrashLogger configured by the CoreConfiguration and fire BOTH. A second plugin could wrap the wrapper again and do something totally different with the stacktrace and still call both of the other CrashLoggers.
The latter sounds somewhat like AOP, and if I just let ALL my beans be proxied (I did not test that) I could autowire them into my plugin configurations, cast them to org.springframework.aop.framework.Advised, and then add advices that manipulate behaviour. However, it seems like huge overkill to generate proxies for each and every one of my beans just so that a plugin can potentially add one or two advices on one or two beans.
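The wrap-and-delegate idea for CrashLogger can be sketched without any framework; every name below is illustrative rather than part of an actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class CrashLoggerChainDemo {

    interface CrashLogger {
        void logCrash(Throwable t);
    }

    /** Core implementation: stands in for the ToFileCrashLogger. */
    static class RecordingCrashLogger implements CrashLogger {
        final List<String> records = new ArrayList<>();

        @Override
        public void logCrash(Throwable t) {
            records.add("core: " + t.getMessage());
        }
    }

    /** A plugin wraps the existing logger, adds its own behaviour, then delegates. */
    static class EmailingCrashLogger implements CrashLogger {
        final CrashLogger delegate;
        final List<String> sentMails = new ArrayList<>();

        EmailingCrashLogger(CrashLogger delegate) {
            this.delegate = delegate;
        }

        @Override
        public void logCrash(Throwable t) {
            sentMails.add("mail: " + t.getMessage()); // plugin-specific handling
            delegate.logCrash(t);                     // keep the original behaviour
        }
    }

    public static void main(String[] args) {
        RecordingCrashLogger core = new RecordingCrashLogger();
        // a second plugin could wrap `wrapped` again in exactly the same way
        EmailingCrashLogger wrapped = new EmailingCrashLogger(core);
        wrapped.logCrash(new RuntimeException("boom"));
        System.out.println(core.records);      // [core: boom]
        System.out.println(wrapped.sentMails); // [mail: boom]
    }
}
```

Each layer only depends on the CrashLogger interface, which is what keeps the plugins unaware of each other; in Spring the wrapper could be contributed as the bean the next configuration sees.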
I've written a DS @Component that is supposed to be instantiated and activated in multiple instances. To test that I've written a Pax Exam test where I boot Karaf and add scr. Everything works fine, but... it will not instantiate the services until after the test method has run, which gives me no chance to do assertions etc.
@Test
public final void testing() throws Exception {
    props = createProperties(user, pass, host);
    cfg = configurationAdmin
            .createFactoryConfiguration(CouchbaseConnectionProvider.SVC_NAME);
    cfg.update(props);
    final ServiceTracker tracker = new ServiceTracker(bundleContext,
            CouchbaseConnectionProvider.class, null);
    tracker.open();
    CouchbaseConnectionProvider svc = (CouchbaseConnectionProvider) tracker.waitForService(5000);
    // It will wait 5s; only after testing() exits is the service created.
}
What am I doing wrong here?
When the method exits, the framework properly creates and activates the service with all properties.
I may add that the test method runs on a thread named "ion(3)-127.0.0.1", while DS instantiates on thread "84-b6b23468b652".
Cheers,
Mario
Update 3
There were actually two bugs: one on my side and one somewhere else (in Felix CM?), since the config was accessible by my interface impl bundle after a while (while the container was shutting down), but it should really have been bound to the pax test bundle (and of course CM itself) and never been freed when the container was shutting down. Where that bug is I do not know; I'll wrap up a minimalistic mvn project, take it to the Felix CM guys, and post the update here.
Update 2
I've filed a bug (https://ops4j1.jira.com/browse/PAXEXAM-725) if someone is interested to follow the progress (if there's a bug ;))
Update 1
This is my configuration in the test class:
package se.crossbreed.foundation.persistence.provider.couchbase;
@RunWith(PaxExam.class)
@ExamReactorStrategy(PerClass.class)
public class CouchbaseConnectionProviderTests extends CbTestBase {
...
}
Here is the configuration in the test class; it uses the base class for the base options.
@org.ops4j.pax.exam.Configuration
public Option[] config() {
List<Option> options = super.baseConfig();
options.addAll(Arrays
.asList(features(karafStandardRepo, "scr"),
mavenBundle()
.groupId("se.crossbreed.foundation.persistence")
.artifactId(
"se.crossbreed.foundation.persistence.core")
.versionAsInProject(),
mavenBundle().groupId("io.reactivex")
.artifactId("rxjava").versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.ports.bundles")
.artifactId(
"se.crossbreed.ports.bundles.couchbase.java-client")
.versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.foundation.persistence")
.artifactId(
"se.crossbreed.foundation.persistence.provider.couchbase")
.versionAsInProject()));
// above bundle is the one I'm trying to test and where
// this test resides in (project wise)
return options.toArray(new Option[] {});
}
The base configuration is gotten from a base class
protected List<Option> baseConfig() {
return new ArrayList<Option>(
Arrays.asList(new Option[] {
logLevel(LogLevel.INFO),
karafDistributionConfiguration().frameworkUrl(karafUrl)
.unpackDirectory(new File("target", "exam"))
.useDeployFolder(false),
configureConsole().ignoreLocalConsole(),
mavenBundle().groupId("biz.aQute.bnd")
.artifactId("bndlib").version("${version.bndlib}"),
mavenBundle()
.groupId("se.crossbreed.foundation")
.artifactId(
"se.crossbreed.foundation.core.annotations")
.versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.foundation")
.artifactId(
"se.crossbreed.foundation.core.interfaces")
.versionAsInProject() }));
}
The package for the test is
package se.crossbreed.foundation.persistence.provider.couchbase;
And CouchbaseConnectionProvider is in the same package:
package se.crossbreed.foundation.persistence.provider.couchbase;
import se.crossbreed.foundation.persistence.core.CbDbConnectionProvider;
public interface CouchbaseConnectionProvider extends CbDbConnectionProvider {
public final static String SVC_NAME = "couchbase.connection.provider";
}
The implementation:
package se.crossbreed.foundation.persistence.provider.couchbase.impl;
@Component(immediate = true, name = CouchbaseConnectionProvider.SVC_NAME,
        provide = { CouchbaseConnectionProvider.class, CbDbConnectionProvider.class,
                CbService.class },
        properties = { "providerType=DOCUMENT" },
        configurationPolicy = ConfigurationPolicy.require)
public class CouchbaseConnectionProviderImpl implements
CouchbaseConnectionProvider { ... }
Here's the project structure of the Couchbase Provider and the test that I'm failing to get to work (until after the test has run ;).
(I don't actually see anything wrong with your code; ConfigurationAdmin should work asynchronously. The new service coming up after the test still looks like a synchronization issue, though. In that case, this setup might fix it.)
Instead of creating the configuration inside the test method you could use pax-exam-cm to create the factory configuration with the other options:
@org.ops4j.pax.exam.Configuration
public Option[] config() {
List<Option> options = super.baseConfig();
options.addAll(Arrays
.asList(features(karafStandardRepo, "scr"),
//missing conversion: putAll() needs a Map
ConfigurationAdminOptions.factoryConfiguration(CouchbaseConnectionProvider.SVC_NAME)
.putAll(createProperties(user, pass, host)).create(true).asOption(),
mavenBundle()
.groupId("se.crossbreed.foundation.persistence")
.artifactId(
"se.crossbreed.foundation.persistence.core")
.versionAsInProject(),
mavenBundle().groupId("io.reactivex")
.artifactId("rxjava").versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.ports.bundles")
.artifactId(
"se.crossbreed.ports.bundles.couchbase.java-client")
.versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.foundation.persistence")
.artifactId(
"se.crossbreed.foundation.persistence.provider.couchbase")
.versionAsInProject()));
// above bundle is the one I'm trying to test and where
// this test resides in (project wise)
return options.toArray(new Option[] {});
}
Maven settings:
<dependency>
<groupId>org.ops4j.pax.exam</groupId>
<artifactId>pax-exam-cm</artifactId>
<version>${exam.version}</version>
</dependency>
You can then also simply use the #Inject annotation to get the CouchbaseConnectionProvider inside the test.
@Inject
CouchbaseConnectionProvider svc;
I suspect that the test probe deploys its own copy of the CouchbaseConnectionProvider interface, so you try to retrieve the service using a different interface class than the one the real service provides.
You should add imports and exports to your test bundle for the package CouchbaseConnectionProvider resides in.
To do this use a ProbeBuilder
@ProbeBuilder
public TestProbeBuilder probeConfiguration(TestProbeBuilder probe) {
probe.setHeader(Constants.IMPORT_PACKAGE, "..");
probe.setHeader(Constants.EXPORT_PACKAGE, "..");
return probe;
}
Thanks to both of you for your input. I chose to answer this question myself since I had a bug in my code and got help from Christoph.
I quote his answer here in case someone else does what I did.
The problem was that I did not create the configuration as anonymous via createFactoryConfiguration(pid, null). Because I used createFactoryConfiguration(pid), the configuration got bound to the currently executing bundle and not the bundle I was testing. As Christoph explained, I could also have fetched the bundle location of the service bundle and set that explicitly.
Cheers,
Mario
Here's Christoph Läubrich's answer:
"Christoph Läubrich added a comment - 13 minutes ago
Okay, I think I know what might be the problem now:
You are using createFactoryConfiguration(java.lang.String factoryPid); this means you will create a configuration that is exclusively bound to your bundle! Thus no other bundle is allowed to access the configuration!
Use createFactoryConfiguration(java.lang.String factoryPid, java.lang.String location) instead, with a null argument for the location! This way you create an anonymous configuration that will be bound to the first bundle that fetches the config. Alternatively, you can get the location of the target bundle and explicitly pass it as a parameter, but this is often not needed.
If this still does not work, we must take a closer look at your configuration: connect to the karaf shell (while stopped at a breakpoint) and get a list of all bundles (bundle:list) and a list of all components (scr:list).
Also you should collect detailed information about the probe bundle and the bundle that should provide the service (packages:imports)."