OSGi loose-coupling best practice - java

I'd like to know what is considered the best practices or patterns for decoupling application code from framework code, specifically regarding OSGi.
I'm going to use the example from the Felix SCR pages
The example service is a Comparator
package sample.service;
import java.util.Comparator;
public class SampleComparator implements Comparator
{
    public int compare( Object o1, Object o2 )
    {
        return o1.equals( o2 ) ? 0 : -1;
    }
}
The code above contains no framework plumbing, it's focused and concise. Making this available to the application, when using OSGi, involves registering it with a service registry. One way, as described on the Felix pages linked, is by using the Service Component Runtime.
// OSGI-INF/sample.xml
<?xml version="1.0" encoding="UTF-8"?>
<component name="sample.component" immediate="true">
    <implementation class="sample.service.SampleComparator" />
    <property name="service.description" value="Sample Comparator Service" />
    <property name="service.vendor" value="Apache Software Foundation" />
    <service>
        <provide interface="java.util.Comparator" />
    </service>
</component>
and
Service-Component: OSGI-INF/sample.xml
All nice and lovely: my service implementation has no coupling at all to OSGi.
Now I want to use the service...
package sample.consumer;
import java.util.Comparator;
public class Consumer {
    public void doCompare(Object o1, Object o2) {
        Comparator c = ...;
    }
}
Using SCR lookup strategy I need to add framework-only methods:
protected void activate(ComponentContext context) {
    Comparator c = ( Comparator ) context.locateService( "sample.component" );
}
Using SCR event strategy I also need to add framework-only methods:
protected void bindComparator(Comparator c) {
    this.c = c;
}

protected void unbindComparator(Comparator c) {
    this.c = null;
}
Neither is terribly onerous, though I think it's probable you'd end up with a fair amount of this type of code duplicated across classes, which makes for more noise to filter out.
One possible solution I can see would be to use an OSGi specific class to mediate between the consumer, via more traditional means, and the framework.
package sample.internal;

import java.util.Comparator;

public class OsgiDependencyInjector {
    private Consumer consumer;

    protected void bindComparator(Comparator c) {
        this.consumer.setComparator(c);
    }

    protected void unbindComparator(Comparator c) {
        this.consumer.setComparator(null);
    }
}
Though I'm not sure how you'd arrange this in the SCR configuration.
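One possible arrangement (purely a sketch; the component and reference names here are invented, not taken from the Felix docs) would be to declare the injector itself as the SCR component and hang the Comparator reference off it:

```xml
<!-- OSGI-INF/injector.xml: hypothetical wiring for the mediator -->
<component name="sample.injector" immediate="true">
    <implementation class="sample.internal.OsgiDependencyInjector" />
    <reference name="comparator"
               interface="java.util.Comparator"
               bind="bindComparator"
               unbind="unbindComparator"
               cardinality="1..1"
               policy="static" />
</component>
```

How the injector obtains the Consumer instance is still an open question, which is arguably the weak point of this approach.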
There is also org.apache.felix.scr.annotations, though that means it will only work if you're building with the maven-scr-plugin. Not so bad really and, AFAICT, they impose no runtime implications.
So, now you've read all that, what do you suggest is the best way of consuming OSGi provided services without 'polluting' application code with framework code?

1) I do not think the bind methods are polluting your code; they are just bean setters (you can also call them setXXX to be more traditional). You will need those for unit testing as well.
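To illustrate the point that the bind/set methods double as test seams (this snippet is mine, not from the question), a plain unit test can inject any Comparator through the setter with no OSGi types involved:

```java
import java.util.Comparator;

// Plain consumer with a bean-style setter; no OSGi anywhere.
class Consumer {
    private Comparator<String> comparator;

    public void setComparator(Comparator<String> comparator) {
        this.comparator = comparator;
    }

    public boolean isSame(String a, String b) {
        return comparator.compare(a, b) == 0;
    }
}

public class ConsumerTest {
    public static void main(String[] args) {
        Consumer c = new Consumer();
        // In a unit test the setter doubles as the injection point:
        c.setComparator(String.CASE_INSENSITIVE_ORDER);
        System.out.println(c.isSame("OSGi", "osgi")); // prints "true"
    }
}
```

At runtime SCR calls the same setter, so the class never knows whether it lives in a test or in a framework.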
2) If you use bnd (which is available in Maven, Ant, bndtools, an Eclipse plugin, etc.) then you can also use the bnd annotations. bnd will then automatically create the (always horrible) XML for you.
package sample.service;

import java.util.Comparator;
import aQute.bnd.annotations.component.*;

@Component
public class SampleComparator implements Comparator {
    public int compare( Object o1, Object o2 ) {
        return o1.equals( o2 ) ? 0 : -1;
    }
}
@Component
class Consumer {
    Comparator comparator;

    public void doCompare( Object o1, Object o2 ) {
        if ( comparator.compare(o1, o2) == 0 )
            ....
    }

    @Reference
    protected void setComparator( Comparator c ) {
        comparator = c;
    }
}
In your manifest, just add:
Service-Component: *
This will be picked up by bnd. So no OSGi code in your domain code. You might be puzzled that there is no unset method, but the default for bnd is static binding: the set method is called before you're activated, and you're deactivated before the unset would be called. As long as your Consumer object is a µservice too, you're safe. Look at bndtools, the bnd home page, and my blogs for more info about µservices.
PS. Your sample is invalid code: compare reports o1 as both greater than and less than o2 whenever o1 != o2. This is not allowed by the Comparator contract and will make sorts unstable.
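For instance (my sketch, using Strings for concreteness), delegating to an existing total order keeps the contract intact:

```java
import java.util.Comparator;

// A contract-respecting comparator: returns 0 only for equal values and
// is antisymmetric (compare(a, b) and compare(b, a) have opposite signs)
// because it delegates to String's natural ordering.
public class SampleComparator implements Comparator<String> {
    public int compare( String o1, String o2 ) {
        return o1.compareTo(o2);
    }
}
```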

I'll describe how we do it in my project. As an OSGi container we are using Fuse ESB, although Apache Karaf can be found somewhere inside it. To avoid polluting our code we use Spring DM (http://www.springsource.org/osgi), which greatly facilitates interaction with the OSGi container. It is tested "against Equinox 3.2.x, Felix 1.0.3+, and Knopflerfish 2.1.x as part of our continuous integration process" (the newest release).
Advantages of this approach:
all "OSGi" configuration in XML files - code not polluted
ability to work with different OSGi container implementations
How it looks?
publishing service in OSGi registry:
<osgi:service id="some-id"
    ref="bean-implementing-service-to-expose"
    interface="interface-of-your-service" />
importing service from OSGi registry:
<osgi:reference id="bean-id" interface="interface-of-exposed-service"/>
Moreover, to create valid OSGi bundles we use maven-bundle-plugin.

The advantage of the Felix annotations compared to the ones in aQute.bnd.annotations.component seems to be that bind and unbind methods are automatically created by the Felix SCR plugin (you can annotate a private field). The disadvantage of the Felix plugin is that it acts on the Java sources and so doesn't work for class files created in other languages (such as Scala).

Related

OSGi: Ensure that all extensions are loaded in a Declarative Services application

I am working on an application that is meant to be extensible by the customer. It is based on OSGi (Equinox) and makes heavy use of Declarative Services (DS). Customer-installed bundles provide their own service implementations which my application then makes use of. There is no limit on the number of service implementations that customer-specific bundles may provide.
Is there a way to ensure that, when the application's main function is executed, all customer-provided service implementations have been registered?
To clarify, suppose my application consists of a single DS component RunnableRunner:
public class RunnableRunner
{
    private final List<Runnable> runnables = new ArrayList<Runnable>();

    public void bindRunnable(Runnable runnable)
    {
        runnables.add(runnable);
    }

    public void activate()
    {
        System.out.println("Running runnables:");
        for (Runnable runnable : runnables) {
            runnable.run();
        }
        System.out.println("Done running runnables.");
    }
}
This component is registered using a DS component.xml such as the following:
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="RunnableRunner" activate="activate">
    <implementation class="RunnableRunner"/>
    <reference bind="bindRunnable" interface="java.lang.Runnable" name="Runnable"
               cardinality="0..n" policy="dynamic"/>
</scr:component>
I understand that there is no guarantee that, at the time activate() is called, all Runnables have been bound. In fact, experiments I made with Eclipse/Equinox indicate that the DS runtime won't be able to bind Runnables contributed by another bundle if that bundle happens to start after the main bundle (which is a 50/50 chance unless explicit start levels are used).
So, what alternatives are there for me? How can I make sure the OSGi container tries as hard as it can to resolve all dependencies before activating the RunnableRunner?
Alternatives I already thought about:
Bundle start levels: too coarse (they work on bundle level, not on component level) and also unreliable (they're only taken as a hint by OSGi)
Resorting to Eclipse's Extension Points: too Eclipse-specific, hard to combine with Declarative Services.
Making the RunnableRunner dynamically reconfigure whenever a new Runnable is registered: not possible, at some point I have to execute all the Runnables in sequence.
Any advice on how to make sure some extensible service is "ready" before it is used?
By far the best way is not to care, and to design your system so that it flows correctly. There are many reasons a service appears and disappears, so any mirage of stability is just that. Not handling the actual conditions creates fragile systems.
In your example, why can't the RunnableRunner execute the work for each Runnable service as it becomes available? The following code is fully OSGi dynamic aware:
@Component
public class RunnableRunner {
    @Reference Executor executor;

    @Reference(policy = ReferencePolicy.DYNAMIC)
    void addRunnable( Runnable r ) {
        executor.execute(r);
    }
}
I expect you find this wrong for a reason you did not specify. This reason is what you should try to express as a service registration.
If you have a (rare) use case where you absolutely need to know that 'all' (whatever that means) services are available, then you could count the number of instances, or use some other condition. In OSGi with DS the approach is then to turn this condition into a service, so that others can depend on it and you get all the guarantees that services provide.
In that case just create a component that counts the number of instances. Using the configuration, you register a Ready service once you reach a certain count.
public interface Ready {}

@Component
@Designate(ocd = Config.class)
public class RunnableGuard {

    @ObjectClassDefinition
    @interface Config {
        int count();
    }

    int count = Integer.MAX_VALUE;
    int current;
    BundleContext context;
    ServiceRegistration<Ready> registration;

    @Activate
    void activate(Config c, BundleContext context) {
        this.context = context;
        this.count = c.count();
        count(0);
    }

    @Deactivate
    void deactivate() {
        if ( registration != null )
            registration.unregister();
    }

    @Reference(policy = ReferencePolicy.DYNAMIC, cardinality = ReferenceCardinality.MULTIPLE)
    void addRunnable( Runnable r ) {
        count(1);
    }

    void removeRunnable( Runnable r ) {
        count(-1);
    }

    synchronized void count(int n) {
        this.current += n;
        if ( this.current >= count && registration == null )
            registration = context.registerService(
                Ready.class, new Ready() {}, null
            );
        if ( this.current < count && registration != null ) {
            registration.unregister();
            registration = null;
        }
    }
}
Your RunnableRunner would then look like:
@Component
public class RunnableRunner {
    @Reference volatile List<Runnable> runnables;
    @Reference Ready ready;

    @Activate
    void activate() {
        System.out.println("Running runnables:");
        runnables.forEach( Runnable::run );
        System.out.println("Done running runnables.");
    }
}
Pretty fragile code but sometimes that is the only option.
I did not know there were still people writing XML ... my heart is bleeding for you :-)
If you do not know which extensions you need to start, then you can only make your component dynamic. You then react to each extension as it is added.
If you need to make sure that your extensions have been collected before some further step may happen, then you can use names for your required extensions and name them in a config.
So for example you could have a config property "extensions" that lists all extension names separated by spaces. Each extension then must have a service property like "name". In your component you then compare the extensions you found with the required extensions by name. You then do your "activation" only when all required extensions are present.
This is for example used in CXF DOSGi to apply intents on a service like specified in remote service admin spec.
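The name-comparison part of this scheme can be sketched without any OSGi types (the space-separated config format and the "name" property are assumptions taken from the description above, not a fixed API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Guard that tracks which required extensions are currently present.
public class ExtensionGuard {
    private final Set<String> required;
    private final Set<String> present = new HashSet<>();

    // extensionsConfig: the "extensions" config property, names separated by spaces
    public ExtensionGuard(String extensionsConfig) {
        this.required = new HashSet<>(Arrays.asList(extensionsConfig.trim().split("\\s+")));
    }

    // Call from the DS bind method with the service's "name" property.
    public void addExtension(String name) { present.add(name); }

    // Call from the DS unbind method.
    public void removeExtension(String name) { present.remove(name); }

    // Do your "activation" (or register a Ready-style service) only when true.
    public boolean isReady() { return present.containsAll(required); }
}
```

A DS component would wrap this: bind/unbind feed addExtension/removeExtension, and crossing into the ready state triggers the real work.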

How do I hide OSGi services so that other bundles can't find them?

I am developing an application that is build on top of Apache Felix and JavaFX. The application can be extended by 3rd party bundles that implement a specific Interface and make it available to the OSGi Runtime Service Registry.
The problem is that those bundles (or plugins) should not be able to retrieve any of the services that are just used internally by my application. An example would be a PersistenceService that is used to save the processed data. Plugins are (in my application) by definition not allowed to store any data through my service but are allowed to save them through a specific service designed for the plugins only.
I had the idea of using the FindHook interface offered by OSGi to filter out those requests, but that didn't work well. Obviously, to make it work, the bundle needs to be loaded at the very start, even before my core application gets loaded. I ensured this happens by specifying the start level for this bundle using felix.auto.deploy.install.1 = "file\:bundles/de/zerotask/voices-findhook/0.1-SNAPSHOT/voices-findhook-0.1-SNAPSHOT.jar"
As far as I understood, the start level of the system bundle will be 1, which means that my bundle should always be loaded right after the system bundle.
Here is my implementation of the FindHook interface:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.framework.hooks.service.FindHook;
/**
*
* @author PositiveDown
*/
public class VoicesFindHook implements FindHook {
private static Logger log = LoggerFactory.getLogger(VoicesFindHook.class);
private static final String[] INTERNAL_BUNDLE_TABLE = new String[]{
"de.zerotask.voices-core-actions",
"de.zerotask.voices-findhook",
"de.zerotask.voices-interfaces-persistable",
"de.zerotask.voices-models",
"de.zerotask.voices-models-actions",
"de.zerotask.voices-services-configuration-internal",
"de.zerotask.voices-services-input-internal",
"de.zerotask.voices-services-licenses-internal",
"de.zerotask.voices-services-modelsmanager-internal",
"de.zerotask.voices-services-persistence-internal",
"de.zerotask.voices-services-window-internal",
"de.zerotask.voices-ui-dialogs-about",
"de.zerotask.voices-ui-dialogs-newprofile",
"de.zerotask.voices-ui-dockable-listview",
"de.zerotask.voices-ui-dockable-properties",
"de.zerotask.voices-ui-layout",
"de.zerotask.voices-utils-io",
"de.zerotask.voices-utils-services",
"de.zerotask.voices-utils-ui"
};
private static final String[] INTERNAL_SERVICES_TABLE = new String[]{
// model services
// configuration service
"de.zerotask.voices.services.configuration.IConfiguration",
// window service
"de.zerotask.voices.services.window.IWindowService",
// persistence services
"de.zerotask.voices.services.persistence.IPathResolver",
"de.zerotask.voices.services.persistence.IPersistenceService"
};
private static final Set<String> INTERNAL_BUNDLES = new HashSet<>(Arrays.asList(INTERNAL_BUNDLE_TABLE));
private static final Set<String> INTERNAL_SERVICES = new HashSet<>(Arrays.asList(INTERNAL_SERVICES_TABLE));
@Override
public void find(BundleContext context, String name, String filter, boolean allServices, Collection<ServiceReference<?>> references) {
// only allow the usage of internal interfaces from internal packages
String symbolicName = context.getBundle().getSymbolicName();
// debug
log.debug("Processing Bundle {} and service {}", symbolicName, name);
// if the service is one of the internal ones, proceed
if (INTERNAL_SERVICES.contains(name)) {
// retrieve the bundle id
log.debug("Service {} is in internal table", name);
// if the name is not in the internal bundle table, remove all service references
if (!INTERNAL_BUNDLES.contains(symbolicName)) {
log.debug("Bundle {} not in internal table => removing service references...", symbolicName);
// remove them
references.clear();
}
}
}
}
The idea is to have a table of internal bundles and internal services. Each time a service is looked up, the hook will check if it is an internal service. If so, it will also check whether the calling bundle is an internal bundle. If not, the hook will remove all matching service references from the collection.
I am by far no OSGi expert, but this method should work because it is based on symbolic names, which are unique within a container.
I have tested the above code with two small test bundles. One providing the interface + implementation and the other one consuming it. I changed the hook so it will not return any services for the consumer bundle (to just simply check if it works).
Now my problem is that the consumer bundle somehow gets loaded first. I have no idea why. This basically breaks the loading order I set in the properties file.
I am not sure if this helps, but the provider bundle's name starts with a 'y', the consumer one with a 't' and the hook one with a 'v'.
The funny thing is, Felix is loading them in alphabetical order.
I would really appreciate any help here.
Services are implicitly available to every bundle – that is the purpose of services after all.
You can work around this with various hacks like FindHooks etc, but as you have already discovered you are constantly fighting against the true nature of the OSGi Framework and services.
It sounds more like you are creating an isolation system between a kernel and a user space, so that you cannot accidentally pollute the user area with kernel services and vice versa. The proper way (IMHO) to achieve this is with a separate OSGi Framework instance for the two areas. It's quite simple to run up a new Framework using the FrameworkFactory API. Then you can expose select packages and services from the kernel using the BundleContext of the system bundle of the user-area Framework.
However as BJ points out in comments, you may be over-engineering this. What's the worst that can happen if the plugins can see your system services? If those services are well designed then the answer should be "not a lot".
I see two options:
ServicePermission, this is the standard way;
or
ServiceFactory, you decide what bundle can get the real service. Others receive a fake implementation.
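The ServiceFactory option can be sketched in plain Java (the PersistenceService interface and the trust decision below are hypothetical; a real implementation would implement org.osgi.framework.ServiceFactory and inspect the requesting Bundle object instead of a name string):

```java
import java.util.Set;

// Hypothetical internal service we want to guard.
interface PersistenceService {
    boolean save(String data);
}

// Mirrors what ServiceFactory.getService(bundle, registration) would do:
// trusted bundles get the real service, everyone else a no-op fake.
class GuardedPersistenceFactory {
    private final Set<String> trustedBundles;
    private final PersistenceService real;

    GuardedPersistenceFactory(Set<String> trustedBundles, PersistenceService real) {
        this.trustedBundles = trustedBundles;
        this.real = real;
    }

    PersistenceService getService(String requestingSymbolicName) {
        if (trustedBundles.contains(requestingSymbolicName)) {
            return real;
        }
        return data -> false; // fake implementation: silently refuses to persist
    }
}
```

Because the framework calls getService once per requesting bundle, the decision happens exactly at the trust boundary, without any FindHook ordering games.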

Configuration Admin and Declarative Services not created during pax exam test phase

I've written a @Component in DS that is supposed to be instantiated and activated in multiple instances. In order to test that I've written a Pax Exam test where I boot Karaf and add scr. Everything works fine, but... it will not instantiate the services until after the test method has run, which gives me no room to do assertions etc.
@Test
public final void testing() throws Exception {
    props = createProperties(user, pass, host);
    cfg = configurationAdmin.
        createFactoryConfiguration(CouchbaseConnectionProvider.SVC_NAME);
    cfg.update(props);

    final ServiceTracker tracker = new ServiceTracker(bundleContext, CouchbaseConnectionProvider.class, null);
    tracker.open();

    CouchbaseConnectionProvider svc = (CouchbaseConnectionProvider) tracker.waitForService(5000);
    // It will wait 5s and after testing exits it will create the service
}
What am I doing wrong here?
Since when the method exits it will properly create and activate the service with all properties.
I may add that the test method runs on a thread "ion(3)-127.0.0.1" while DS instantiates on a thread "84-b6b23468b652)".
Cheers,
Mario
Update 3
There were actually two bugs, one on my side and one somewhere else (in Felix CM?), since the config was accessible by my interface impl bundle after a while (while the container was shutting down), but it should really have been bound to the pax test bundle (and of course CM itself) and never been "free:d" when the container was shutting down. Where that bug is I do not know - I'll wrap up a minimalistic mvn project, try the Felix CM guys, and post the update here.
Update 2
I've filed a bug (https://ops4j1.jira.com/browse/PAXEXAM-725) if someone is interested to follow the progress (if there's a bug ;))
Update 1
This is my configuration in the testclass
package se.crossbreed.foundation.persistence.provider.couchbase;

@RunWith(PaxExam.class)
@ExamReactorStrategy(PerClass.class)
public class CouchbaseConnectionProviderTests extends CbTestBase {
    ...
}
Here is the configuration in the testclass that will use base class for
base options.
@org.ops4j.pax.exam.Configuration
public Option[] config() {
List<Option> options = super.baseConfig();
options.addAll(Arrays
.asList(features(karafStandardRepo, "scr"),
mavenBundle()
.groupId("se.crossbreed.foundation.persistence")
.artifactId(
"se.crossbreed.foundation.persistence.core")
.versionAsInProject(),
mavenBundle().groupId("io.reactivex")
.artifactId("rxjava").versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.ports.bundles")
.artifactId(
"se.crossbreed.ports.bundles.couchbase.java-client")
.versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.foundation.persistence")
.artifactId(
"se.crossbreed.foundation.persistence.provider.couchbase")
.versionAsInProject()));
// above bundle is the one I'm trying to test and where
// this test resides in (project wise)
return options.toArray(new Option[] {});
}
The base configuration is gotten from a base class
protected List<Option> baseConfig() {
return new ArrayList<Option>(
Arrays.asList(new Option[] {
logLevel(LogLevel.INFO),
karafDistributionConfiguration().frameworkUrl(karafUrl)
.unpackDirectory(new File("target", "exam"))
.useDeployFolder(false),
configureConsole().ignoreLocalConsole(),
mavenBundle().groupId("biz.aQute.bnd")
.artifactId("bndlib").version("${version.bndlib}"),
mavenBundle()
.groupId("se.crossbreed.foundation")
.artifactId(
"se.crossbreed.foundation.core.annotations")
.versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.foundation")
.artifactId(
"se.crossbreed.foundation.core.interfaces")
.versionAsInProject() }));
}
The package for the test is
package se.crossbreed.foundation.persistence.provider.couchbase;
And the CouchbaseConnectionProvider is on the same package
package se.crossbreed.foundation.persistence.provider.couchbase;
import se.crossbreed.foundation.persistence.core.CbDbConnectionProvider;
public interface CouchbaseConnectionProvider extends CbDbConnectionProvider {
public final static String SVC_NAME = "couchbase.connection.provider";
}
The implementation:
package se.crossbreed.foundation.persistence.provider.couchbase.impl;
@Component(immediate = true, name = CouchbaseConnectionProvider.SVC_NAME,
    provide = { CouchbaseConnectionProvider.class, CbDbConnectionProvider.class, CbService.class },
    properties = { "providerType=DOCUMENT" },
    configurationPolicy = ConfigurationPolicy.require)
public class CouchbaseConnectionProviderImpl implements
CouchbaseConnectionProvider { ... }
Here's the project structure of the Couchbase Provider and the test that I'm failing to get to work (until after the test has run ;).
(I don't actually see anything wrong with your code; the ConfigurationAdmin should work asynchronously. The new service coming up after the test still looks like a synchronization issue, though. In that case, this setup might fix it.)
Instead of creating the configuration inside the test method you could use pax-exam-cm to create the factory configuration with the other options:
@org.ops4j.pax.exam.Configuration
public Option[] config() {
List<Option> options = super.baseConfig();
options.addAll(Arrays
.asList(features(karafStandardRepo, "scr"),
//missing conversion: putAll() needs a Map
ConfigurationAdminOptions.factoryConfiguration(CouchbaseConnectionProvider.SVC_NAME)
.putAll(createProperties(user, pass, host)).create(true).asOption(),
mavenBundle()
.groupId("se.crossbreed.foundation.persistence")
.artifactId(
"se.crossbreed.foundation.persistence.core")
.versionAsInProject(),
mavenBundle().groupId("io.reactivex")
.artifactId("rxjava").versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.ports.bundles")
.artifactId(
"se.crossbreed.ports.bundles.couchbase.java-client")
.versionAsInProject(),
mavenBundle()
.groupId("se.crossbreed.foundation.persistence")
.artifactId(
"se.crossbreed.foundation.persistence.provider.couchbase")
.versionAsInProject()));
// above bundle is the one I'm trying to test and where
// this test resides in (project wise)
return options.toArray(new Option[] {});
}
Maven settings:
<dependency>
<groupId>org.ops4j.pax.exam</groupId>
<artifactId>pax-exam-cm</artifactId>
<version>${exam.version}</version>
</dependency>
You can then also simply use the @Inject annotation to get the CouchbaseConnectionProvider inside the test.
@Inject
CouchbaseConnectionProvider svc;
I suspect that the test deploys the CouchbaseConnectionProvider interface with itself. So you try to retrieve the service using a different interface than the one the real service provides.
You should try to add imports and exports to your test bundle for the package CouchbaseConnectionProvider resides in.
To do this use a ProbeBuilder
@ProbeBuilder
public TestProbeBuilder probeConfiguration(TestProbeBuilder probe) {
probe.setHeader(Constants.IMPORT_PACKAGE, "..");
probe.setHeader(Constants.EXPORT_PACKAGE, "..");
return probe;
}
thanks both of you for your input - I chose to answer this question myself since I had a bug in my code and got help from Christoph.
I quote his answer here in case someone else did what I did.
The problem was that I did not set the configuration ownership as anonymous via (pid, null) in createFactoryConfiguration. Instead I used createFactoryConfiguration(pid), so it got bound to the currently executing bundle and not the bundle I was testing. As Christoph explained, it was possible for me to get the bundle location of the service bundle and set that explicitly.
Cheers,
Mario
Here's Christoph Läubrich's answer:
"Christoph Läubrich added a comment - 13 minutes ago
Okay I think I know what might be the problem now:
You are using createFactoryConfiguration(java.lang.String factoryPid); this means you will create a configuration that is exclusively bound to your bundle! Thus no other bundle is allowed to access the configuration!
Use createFactoryConfiguration(java.lang.String factoryPid, java.lang.String location) instead with a null argument for the location! This way you create an anonymous configuration that will be bound to the first bundle that fetches this config. Alternatively you can get the location of the target bundle and explicitly pass this as a parameter, but this is often not needed.
If this still does not work, we must take a closer look at your configuration: connect to the karaf shell (while stopped at a breakpoint) and get a list of all bundles (bundle:list) and a list of all components (scr:list).
Also you should collect detailed information about the probe bundle and the bundle that should provide the service (packages:imports)."

I can't unit test my class without exposing private fields -- is there something wrong with my design?

I have written some code which I thought was quite well-designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I need to change some of my variables access modifiers from private to default, i.e. expose them (only within a package, but still...).
Here is some rough overview of my code in question. There is supposed to be some sort of address validation framework, that enables address validation by different means, e.g. validate them by some external webservice or by data in DB, or by any other source. So I have a notion of Module, which is just this: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is some sort of factory, that based on a request or session state chooses a proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request);  // analyze request and choose a module
        module.init(createInitParams(request)); // init module
        return module;
    }
}
And then, I have written a Module that uses some external webservice for validation, and implemented it like that:
class WebServiceModule implements Module {
    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade which is a wrapper over external web service, and my module calls this facade, processes its response and returns some framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then again, in order for the module to use my mocked web service, the field webservice must be accessible from the outside. It breaks my design and I wonder if there is anything I could do about it. Obviously, the facade cannot be passed in init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how? I have not used any DI frameworks before, like Guice, so I don't know if it could be easily used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me, that I could create a WebServiceProcessor which takes a WebServiceFacade as a constructor argument and then test just the WebServiceProcessor. This would be one of the solutions to my problem. What do you think about it? I have one problem with that, because then my WebServiceModule would be sort of useless, just delegating all its work to another components, I would say: one layer of abstraction too far.
Yes, your design is wrong. You should do dependency injection instead of new ... inside your class (which is also called "hardcoded dependency"). Inability to easily write a test is a perfect indicator of a wrong design (read about "Listen to your tests" paradigm in Growing Object-Oriented Software Guided by Tests).
BTW, using reflection or dependency breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said and would like to suggest that the reason why you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the Single responsibility principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they only have this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code becomes hard to unit-test, since it is explicitly referencing prod code, so keep this impl very concise. Put any logic (like createParamsForFacade) on the side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the façade as a dependency, following the Inversion of Control (IoC) principle:
class WebServiceValidator implements Validator {
    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. That way you will naturally gravitate towards a testable solution, which is usually also the best design. Testing requires modularity, and modularity is the hallmark of good design.

Finding unused values in message resource file

I am working on a project that has been through multiple hands with a sometimes rushed development. Over time the message.properties file has become out of sync with the jsps that use it. Now I don't know which properties are used and which aren't. Is there a tool (eclipse plugin perhaps) that can root out dead messages?
The problem is that messages may be accessed by JSP or Java, and resource names may be constructed rather than literal strings.
Simple grepping may be able to identify "obvious" resource access. The other solution, a resource lookup mechanism that tracks what's used, is only semi-reliable as well since code paths may determine which resources are used, and unless every path is traveled, you may miss some.
A combination of the two will catch most everything (over time).
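A rough sketch of the grep half in plain Java is below. The key-extraction regex and the sample key names are illustrative assumptions, not a general-purpose JSP parser; constructed (run-time concatenated) keys will not be caught, which is exactly the limitation described above:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MessageKeyScanner {

    // Matches literal keys in constructs like <fmt:message key="some.key"/>
    // or bundle.getString("some.key"). Constructed key names are NOT caught.
    private static final Pattern KEY_PATTERN =
            Pattern.compile("(?:key=\"|getString\\(\")([\\w.]+)\"");

    public static Set<String> findReferencedKeys(String source) {
        Set<String> keys = new HashSet<String>();
        Matcher m = KEY_PATTERN.matcher(source);
        while (m.find()) {
            keys.add(m.group(1));
        }
        return keys;
    }

    // Keys present in the bundle but never referenced literally in the source.
    public static Set<String> findUnreferencedKeys(Set<String> bundleKeys, String source) {
        Set<String> unused = new TreeSet<String>(bundleKeys);
        unused.removeAll(findReferencedKeys(source));
        return unused;
    }

    public static void main(String[] args) {
        String jsp = "<fmt:message key=\"title.home\"/> ... bundle.getString(\"error.generic\")";
        Set<String> bundleKeys = new HashSet<String>(
                Arrays.asList("title.home", "error.generic", "never.used"));
        System.out.println(findUnreferencedKeys(bundleKeys, jsp)); // prints [never.used]
    }
}
```

In practice you would feed it the concatenated contents of your JSP and Java sources, then manually review the reported "unused" keys before deleting anything.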
Alternatively, you can hide the functionality of ResourceBundle behind a façade ResourceBundle that pipes all calls through to the original one but adds logging and/or statistics collection on top.
An example might look like the following:
import java.util.Collection;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.MissingResourceException;
import java.util.NoSuchElementException;
import java.util.ResourceBundle;

public class WrapResourceBundle {

    static class LoggingResourceBundle extends ResourceBundle {
        private final Collection<String> usedKeys = new HashSet<String>();

        public LoggingResourceBundle(ResourceBundle parentResourceBundle) {
            setParent(parentResourceBundle);
        }

        @Override
        protected Object handleGetObject(String key) {
            try {
                Object value = parent.getObject(key);
                usedKeys.add(key);
                return value;
            } catch (MissingResourceException e) {
                // Returning null lets ResourceBundle raise the usual
                // MissingResourceException for truly absent keys.
                return null;
            }
        }

        @Override
        public Enumeration<String> getKeys() {
            return EMPTY_ENUMERATOR;
        }

        public Collection<String> getUsedKeys() {
            return usedKeys;
        }

        private static final Enumeration<String> EMPTY_ENUMERATOR = new EmptyEnumerator();

        private static class EmptyEnumerator implements Enumeration<String> {
            public boolean hasMoreElements() {
                return false;
            }

            public String nextElement() {
                throw new NoSuchElementException("Empty Enumerator");
            }
        }
    }

    public static void main(String[] args) {
        LoggingResourceBundle bundle = new LoggingResourceBundle(ResourceBundle.getBundle("test"));
        bundle.getString("key1");
        System.out.println("Used keys: " + bundle.getUsedKeys());
    }
}
Considering that some of your keys are run-time generated, I don't think you'll ever be able to find a tool to validate which keys are in use and which ones are not.
Given the problem you posed, I would probably write an AOP aspect which wraps the MessageSource.getMessage() implementation and logs all the requested codes that are retrieved from the resource bundle. Since MessageSource is an interface, you would need to know the implementation you are using, but I suspect you must know that already.
Given that you would be writing the aspect yourself, you can create a format that is easily correlated against your resource bundle and once you are confident that it contains all the keys required, it becomes a trivial task to compare the two files and eliminate any superfluous lines.
If you really want to be thorough about this and you already have Spring configured for annotation scanning, you could even package the aspect as its own jar (or .class), drop it in a production WEB-INF/lib (WEB-INF/classes) folder, restart the webapp, and let it run for a while. The great thing about annotations is that it can all be self-contained. Once you are sure you have accumulated enough data, you just delete the jar (.class) and you're good to go.
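If pulling in a full AOP setup feels heavy, the same interception idea can be sketched with a plain JDK dynamic proxy. The MessageSource interface below is a minimal stand-in assumed for this sketch (Spring's real MessageSource has more overloads), and the tracking logic is the same "record every requested code" idea the aspect would implement:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class MessageSourceTracker {

    // Minimal stand-in for Spring's MessageSource, assumed for this sketch.
    public interface MessageSource {
        String getMessage(String code);
    }

    // Wraps any MessageSource in a dynamic proxy that records every
    // requested code before delegating to the real implementation.
    public static MessageSource track(final MessageSource target, final Set<String> requestedCodes) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("getMessage") && args != null && args.length > 0) {
                requestedCodes.add((String) args[0]);
            }
            return method.invoke(target, args);
        };
        return (MessageSource) Proxy.newProxyInstance(
                MessageSource.class.getClassLoader(),
                new Class<?>[] { MessageSource.class },
                handler);
    }

    public static void main(String[] args) {
        Set<String> seen = Collections.synchronizedSet(new HashSet<String>());
        MessageSource real = code -> "msg:" + code;
        MessageSource tracked = track(real, seen);
        tracked.getMessage("error.notfound");
        System.out.println(seen); // prints [error.notfound]
    }
}
```

The recorded set can then be diffed against the properties file, just like the aspect's log output would be.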
I know that at least two of the major java IDEs can offer this functionality.
IntelliJ IDEA has a (disabled by default) inspection that you can use to do this: go to Settings -> Inspections -> Properties files -> ... and enable 'Unused property'.
The only problem I had was that it didn't pick up some usages of a property from a custom tag library I had written, which I was using in a few JSPs.
Eclipse also has something like this ( http://help.eclipse.org/helios/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2Ftasks%2Ftasks-202.htm ) but I haven't really explored how well it works.
