I have a (bnd-annotated) component that implements a simple API and exposes itself as a service:
package com.mycompany.impl;

import java.util.Map;

import aQute.bnd.annotation.component.Activate;
import aQute.bnd.annotation.component.Component;
import aQute.bnd.annotation.metatype.Configurable;

import com.mycompany.api.IFoo;

@Component(designateFactory = FooImpl.Configuration.class)
class FooImpl implements IFoo {

    interface Configuration {
        String foo();
        // ..
    }

    Configuration configuration;

    @Activate
    public void activate(Map<String, Object> properties) {
        configuration = Configurable.createConfigurable(Configuration.class, properties);
        // ..
    }
}
Its configuration is loaded from a watched directory by Felix FileInstall, and the service is instantiated via the Felix Configuration Admin service (at least, I assume that's what's happening; I'm new to OSGi, please bear with me). This, together with the generated MetaType descriptor, is working great.
However, as it stands, FooImpl requires structured configuration (lists of lists, maps of lists, etc.), and I was wondering whether there is an elegant way to configure instances of the component through a similar workflow; that is to say, one where configuration discovery and instantiation/deployment remain centralised.
It seems to me that the Configuration Admin spec only manages flat maps. Will I have to roll my own Configuration Admin and FileInstall to be able to present components with XML/JSON/YAML-backed structured configuration?
As opposed to, say, defining the location of an XML configuration file in the properties (...confiception?) and doing my own parsing.
Yes and no...
The OSGi Configuration Admin service deals with abstract Configuration records, which are based on flat maps (actually java.util.Dictionary, but it's essentially the same thing). Config Admin does not know anything about the underlying physical storage; it always relies on somebody else to call the methods on the ConfigurationAdmin service, i.e. getConfiguration, createFactoryConfiguration etc.
The "somebody else" that calls Config Admin is usually called a "management agent". Felix FileInstall is a very simple example of a management agent that reads files in the Java properties format. Actually FileInstall is probably too simple and I don't consider it appropriate for production deployment — but that's a separate discussion.
It sounds like you want to write your own management agent that reads XML files and feeds them into Config Admin. This is really not a large or difficult task and you should not be afraid to take it on. Config Admin was designed under the assumption that applications would have very diverse requirements for configuration data storage, and that most applications would therefore have to write their own simple management agent, which is why it does not define its own storage format or location(s).
However, once the configuration data has been read by your management agent, it must be passed into Config Admin as a flat map/dictionary, which Config Admin in turn passes to the components as a map. Therefore the components themselves do not receive highly structured data such as trees or nested maps. There is some flexibility, though: configuration properties can contain arrays and lists of the base types, and you can also use enum values, etc.
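One common workaround, not mandated by any OSGi spec, is to have the management agent flatten the parsed XML/JSON/YAML tree into dotted keys before handing the dictionary to Config Admin, and let the component reassemble the parts it needs. A minimal sketch (the class name and the key scheme are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helper a management agent might use: flattens a parsed
// JSON/YAML tree (maps and lists) into the flat key/value pairs that
// Config Admin can store, e.g. {"servers.0.host": "a", "servers.0.port": 8080}.
public class ConfigFlattener {

    public static Map<String, Object> flatten(Map<String, ?> tree) {
        Map<String, Object> flat = new LinkedHashMap<>();
        flattenInto("", tree, flat);
        return flat;
    }

    private static void flattenInto(String prefix, Object node, Map<String, Object> out) {
        if (node instanceof Map) {
            for (Map.Entry<?, ?> e : ((Map<?, ?>) node).entrySet()) {
                flattenInto(prefix.isEmpty() ? e.getKey().toString()
                                             : prefix + "." + e.getKey(), e.getValue(), out);
            }
        } else if (node instanceof List) {
            List<?> list = (List<?>) node;
            for (int i = 0; i < list.size(); i++) {
                flattenInto(prefix + "." + i, list.get(i), out);
            }
        } else {
            out.put(prefix, node); // scalar leaf: store as-is
        }
    }
}
```

The resulting flat map can then be fed to ConfigurationAdmin as usual, keeping discovery and deployment centralised while the components stay within the flat-dictionary contract.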
Related
I have two client instances with different configs (timeout, threadpool, etc.) that I am creating, and I would like to leverage Dropwizard's metrics on both of the clients.
final JerseyClientBuilder jerseyClientBuilder = new JerseyClientBuilder(environment)
.using(configuration.getJerseyClientConfiguration());
final Client config1Client = jerseyClientBuilder.build("config1Client");
environment.jersey().register(config1Client);
final Client config2Client = jerseyClientBuilder.build("config2Client");
environment.jersey().register(config2Client);
However, I am getting
org.glassfish.jersey.internal.Errors: The following warnings have been detected:
HINT: Cannot create new registration for component type class org.glassfish.jersey.client.JerseyClient:
Existing previous registration found for the type.
And only one client's metric shows up.
How do I track both clients' metrics, or is it not common to have two clients in a single Dropwizard app?
Never mind, turned out I was an idiot (for trying to save some resource on the ClientBuilder).
Two things I did wrong in my original code:
1. You don't need to register Jersey clients; registering the resource is enough. Somehow I missed the resource part in my code and was straight up trying to register the client.
2. You need to build each individually configured client from its own JerseyClientBuilder; Dropwizard then records metrics per JerseyClientBuilder.
In the end, I just had to change my code to the following:
final Client config1Client = new JerseyClientBuilder(environment)
.using(configuration.getJerseyClientConfiguration()).build("config1Client");
final Client config2Client = new JerseyClientBuilder(environment)
.using(configuration.getJerseyClientConfiguration()).build("config2Client");
Doh.
environment.jersey().register() is documented as "Adds the given object as a Jersey singleton component", meaning that the objects registered become part of the Jersey dependency injection framework. This method is mainly used to add resource classes to the Jersey context, but any object with an annotation or type that Jersey looks for can be added this way. Additionally, since they are singletons, you can only have one of them per concrete type (which is why you are getting a "previous registration" error from Jersey).
I imagine that you want to have two Jersey clients to connect to two different external services via REST/HTTP. Since your service needs to talk to these others to do its work, you'll want to have the clients accessible wherever the "work" or business logic is being performed.
For example, this guide creates a resource class that requires a client to an external HTTP service to do currency conversions. I'm not saying this is a great example (just a top Google result for "dropwizard external client example"); in fact, I think it is not a good way to structure your application. I'd create several internal objects that hide from the resource class how the currency information is fetched, like a business object (BO) or data access object (DAO), etc.
For your case, you might want something like this (think of these as constructor calls). JC = jersey client, R = resource object, BO = business logic object
JC1()
JC2()
B1(JC1)
B2(JC2)
R1(B1)
R2(B2)
R3(B1, B2)
environment.jersey().register(R1)
environment.jersey().register(R2)
environment.jersey().register(R3)
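The wiring above could be sketched like this in plain Java; the Dropwizard/Jersey types are replaced with hypothetical stand-ins (HttpGateway, CurrencyBO, CurrencyResource) so only the dependency shape is visible:

```java
// A minimal sketch of the JC -> BO -> R wiring; in a real Dropwizard app the
// gateway would be a javax.ws.rs.client.Client and the resource a @Path class.
// All names here are hypothetical.
interface HttpGateway {            // stands in for a Jersey client
    String get(String path);
}

class CurrencyBO {                 // business object: hides the HTTP details
    private final HttpGateway gateway;
    CurrencyBO(HttpGateway gateway) { this.gateway = gateway; }
    String convert(String amount)  { return gateway.get("/convert/" + amount); }
}

class CurrencyResource {           // resource delegates to the BO only
    private final CurrencyBO bo;
    CurrencyResource(CurrencyBO bo) { this.bo = bo; }
    String handleRequest(String amount) { return bo.convert(amount); }
}
```

Only the resource objects (R1..R3) would then be passed to environment.jersey().register(...); the clients stay hidden behind the business objects.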
The official Dropwizard docs are somewhat helpful. They at least explain how to create a jersey client; they don't explain how to structure your application.
If you're using the Jersey client builder from dropwizard, each of the clients that you create should be automatically registered to record metrics. Make sure you're using the client builder from the dropwizard-client artifact and package io.dropwizard.client. (Looks like you are because you have the using(config) method.)
I have read the links
Akka and spring configuration
http://doc.akka.io/docs/akka/2.4.1/java/untyped-actors.html
Spring is no longer available as a module in Akka 2.4.1 but can be created and used as an extension. I also understand that having bean/actor creation managed by a DI framework like Spring can fundamentally conflict with the Akka parent-child/supervision model. So I still don't understand how to wire these together.
I have a set of actor classes that I have written to be generic enough that, for example, properties like "listener", "name", "messageQueueName", etc. are configurable. The link above tells me to provide convenience factory methods and then create the actor with the code snippet
system.actorOf(DemoActor.props(42), "demo");
It is this line that I do not like. What I want to write in my application.conf is something like
deployment {
  /demo {
    magicNumber : 42
  }
}
and then everywhere in my application I simply want to look up the actor (I am okay with using the actorSelection method).
Am I doing something wrong?
I think you are on the wrong path there, you should have a look at these tutorials:
http://www.lightbend.com/activator/template/akka-java-spring
https://myshittycode.com/2015/08/26/akka-spring-integration/
and for passing arguments via constructors check my answer to this question:
Custom Spring Bean Parameters
The configuration file is used for akka specific parameters (like specifying the dispatcher, mailbox, etc.)
I have an example illustrating my problem (files attached: https://drive.google.com/file/d/0B8ThLrV6-uchaFlTZTNGQ1FnT1E/view?usp=sharing).
I have 3 iPOJO components (3 bundles):
CallHello uses a DelayService service, which is implemented by both HelloDelay and HelloComponentReplace.
HelloDelay and HelloComponentReplace use a HelloService service, which is implemented by HelloPrint.
At deployment, I deploy 5 bundles:
service.hello.service.jar
printer.hello.printer.jar
delay.hello.delay.jar
replace.hello.replace.jar
call.hello.call.jar
Result: DelayService always uses the implementation in HelloDelay.
Finally, I run Main.java to manually control the selection between HelloDelay and HelloComponentReplace.
I implemented a function in Main.java to start/stop or uninstall/install bundles (and it works well). However, with that approach only one of HelloDelay and HelloComponentReplace is valid at a time.
For the case where both are active and valid, I read on the iPOJO website that I can use a "comparator", but I don't understand how to apply a comparator to control the selection between the two components above. Is this done by changing priorities? I know that we can change the priority of a bundle, but I don't know how to apply this in my Main.java with iPOJO.
Can we control the connection (binding) between a requiring component and several providing components (of the same service interface)?
I hope you can help me with this difficulty.
Best regards,
You can manipulate the service binding using interceptors: http://felix.apache.org/documentation/subprojects/apache-felix-ipojo/apache-felix-ipojo-userguide/ipojo-advanced-topics/service-binding-interceptors.html
With interceptors, you can hide services and/or sort the service providers in order to enforce which provider is used.
I'm working to develop a multi-tenant Play Framework 2.1 application. I intend to override the onRequest method of the GlobalSettings class to load and set a custom configuration based on the subdomain of the request. Problem is, I don't see how this would be possible in Play 2.x.
I can override system properties at the command line when starting the server, but how can I do this programmatically in Java code for each request?
The code would look something like this (I assume):
@Override
public play.mvc.Action onRequest(Request request, Method actionMethod) {
    // Look up configuration settings in the cache based on the request subdomain
    // (i.e. Cache.get("subdomain.conf"))
    // If not in the cache:
    //   load the appropriate configuration file for this subdomain (java.io.File),
    //   set the new configuration from the file for this request,
    //   and cache the configuration for future use
    // else:
    //   set the configuration from the cache for this request
    return super.onRequest(request, actionMethod);
}
Looking up the URL and getting/setting the cache is easy, but I cannot figure out how to SET a new configuration programmatically for Play Framework 2.1 and the documentation is a little light on things like this.
Any thoughts? Anyone know a better, more efficient way to do this?
So, in a sort of roundabout way, I created the basis for a multi-tenant Play application using a Scala Global. There may be a more efficient way to implement this using a filter, but I'm finding this seems to work so far. This does not appear to be as easily implemented in Java.
Instead of using the configuration file, I'm using the database. I assume it would be far more efficient to use a key-value cache, but this seems to work for now.
In Global.scala:
object Global extends GlobalSettings {
  override def onRouteRequest(request: RequestHeader): Option[Handler] = {
    if (request.session.get("site").isEmpty) {
      val id = models.Site.getSiteIDFromURL(request.host)
      request.session + ("site" -> id)
    }
    super.onRouteRequest(request)
  }
}
And then, obviously, you have to create a database model to query the site based on the request domain and/or the session value set in the request. If anyone knows a better way I'd love to hear it.
I am reading Bloch's Effective Java book [1] and came across the following example of an SPI:
//Service interface
public interface Service {
    //Service specific methods here
}

//Service provider interface
public interface Provider {
    Service newService();
}

//Class for service registration and access
public class Services {
    private Services() {}

    private static final Map<String, Provider> providers =
            new ConcurrentHashMap<String, Provider>();
    public static final String DEFAULT_PROVIDER_NAME = "<def>";

    //Registration
    public static void registerDefaultProvider(Provider p) {
        registerProvider(DEFAULT_PROVIDER_NAME, p);
    }
    public static void registerProvider(String name, Provider p) {
        providers.put(name, p);
    }

    //Access
    public static Service newInstance() {
        return newInstance(DEFAULT_PROVIDER_NAME);
    }
    public static Service newInstance(String name) {
        // look up the provider by name and return provider.newService()
        Provider p = providers.get(name);
        if (p == null)
            throw new IllegalArgumentException("No provider registered with name: " + name);
        return p.newService();
    }
}
This my question: why is the Provider interface necessary? Couldn't we have just as easily registered the Service(s) themselves - e.g. maintain a map of the Service implementations and then return the instance when looked up? Why the extra layer of abstraction?
Perhaps this example is just too generic - any "better" example to illustrate the point would be great too.
[1] Second edition, Chapter 2. The first edition example does not refer to the Service Provider Interfaces.
Why is the Provider interface necessary? Couldn't we have just as easily registered the Service(s) themselves - e.g. maintain a map of the Service implementations and then return the instance when looked up?
As others have stated, the purpose of a Provider is to have an AbstractFactory that can make Service instances. You don't always want to keep a reference to all the Service implementations because they might be short lived and/or might not be reusable after they have been executed.
But what is the purpose of the provider, and how can you use a "provider registration API" if you don't have a provider?
One of the most powerful reasons to have a Provider interface is so you DON'T need to have an implementation at compile time. Users of your API can add their own implementations later.
Let's use JDBC as an example like Ajay used in another answer but let's take it a little further:
There are many different types of databases and database vendors, all with slightly different ways of managing and implementing databases (and perhaps of querying them). The creators of Java can't possibly create implementations for all these different possibilities, for many reasons:
When Java was first written, many of these database companies or systems didn't exist yet.
Not all these database vendors are open source so the creators of Java couldn't know how to communicate with them even if they wanted to.
Users might want to write their own custom database
So how do you solve this? By using a Service Provider.
The Driver interface is the Provider. It provides methods for interacting with a particular vendor's databases. One of the methods in Driver is a factory method to make a Connection instance (which is the Service) to the database, given a URL and other properties (like user name, password, etc.).
Each Database vendor writes their own Driver implementation for how to communicate with their own database system. These aren't included in the JDK; you must go to the company websites or some other code repository and download them as a separate jar.
To use these drivers, you must add the jar to your classpath and then use the JDK DriverManager class to register the driver.
The DriverManager class is the Service Registration.
The DriverManager class has a method registerDriver(Driver) that is used to register a Driver instance in the Service Registration so it can be used. By convention, most Driver implementations register at class loading time so all you have to do in your code is write
Class.forName("foo.bar.Driver");
to register the Driver for vendor "foo.bar" (assuming you have the jar with that class on your classpath).
Once the Database Drivers are registered, you can get a Service implementation instance that is connected to your database.
For example, if you had a mysql database on your local machine named "test" and you had a user account with username "monty" and password "greatsqldb" then you can create a Service implementation like this :
Connection conn =
DriverManager.getConnection("jdbc:mysql://localhost/test?" +
"user=monty&password=greatsqldb");
The DriverManager class sees the String you passed in and finds the registered driver that can understand what it means. (This is actually done using the Chain of Responsibility pattern, by going through all the registered Drivers and invoking their Driver.acceptsURL(String) method until the URL is accepted.)
Notice that there is no MySQL-specific code in the JDK. All you had to do is register a Driver of some vendor and then pass a properly formatted String to the Service Provider. If we later decide to use a different database vendor (like Oracle or Sybase), then we just swap jars and modify our connection string. The code in the DriverManager does not change.
Why didn't we just make a connection once and keep it around? Why do we need the Service?
We might want connect/disconnect after each operation. Or we might want to keep the connection around longer. Having the Service allows us to create new connections whenever we want and does not preclude us from keeping a reference to it to re-use later.
This is a very powerful concept and is used by frameworks to allow many possible permutations and extensions without cluttering the core codebase.
EDIT
Working with multiple Providers and Providers that provide multiple Services:
There is nothing stopping you from having multiple Providers. You can connect to multiple databases created using different database vendor software at the same time. You can also connect to multiple databases produced by the same vendor at the same time.
Multiple services - Some Providers may even provide different Service implementations depending on the connection URL. For example, H2 can create both file-system-based and in-memory databases. The way to tell H2 which one you want is a different URL format. I haven't looked at the H2 code, but I assume the file-based and in-memory databases are different Service implementations.
Why doesn't the DriverManager just manage Connections and Oracle could implement the OracleConnectionWrapper? No providers!
That would also require you to know that you have an Oracle connection. That is very tight coupling and I would have to change lots of code if I ever changed vendors.
The Service Registration just takes a String. Remember that it uses the Chain of Responsibility to find the first registered Provider that knows how to handle the URL. The application can be vendor neutral, and it can get the connection URL and Driver class name from a property file. That way I don't have to recompile my code if I change vendors. However, if I hardcoded references to "OracleConnectionWrapper" and then changed vendors, I would have to rewrite portions of my code and then recompile.
There is nothing preventing someone from supporting multiple database vendor url formats if they want. So I can make a GenericDriver that could handle mysql and oracle if I wanted to.
If you might need more than one service of each type, you can't just reuse the old Services. (Additionally, tests and the like might want to create fresh services for each test, rather than reusing services that might have been modified or updated by previous tests.)
I think the answer is mentioned in Effective Java along with an example.
An optional fourth component of a service provider framework is a
service provider interface, which providers implement to create
instances of their service implementation. In the absence of a service
provider interface, implementations are registered by class name and
instantiated reflectively (Item 53).
In the case of JDBC,
Connection plays the part of the service interface,
DriverManager.registerDriver is the provider registration API,
DriverManager.getConnection is the service access API, and
Driver is the service provider interface.
So, as you have correctly noted, the Provider interface is not a must; it is just a slightly cleaner approach.
It seems you can have multiple Providers for the same Service, and based on a specific Provider name you may get different instances of the same Service. So I would say each Provider is kind of like a factory that creates the service appropriately.
For example, suppose class PaymentService implements Service and requires a Gateway. You have PayPal and Chase gateways that deal with those payment processors. Now you create a PayPalProvider and a ChaseProvider, each of which knows how to create the PaymentService instance with the right gateway.
But I agree, seems contrived.
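As a sketch of that payment example (all class names here are hypothetical, and the Service/Provider interfaces mirror the ones from the question):

```java
// Hypothetical sketch: two providers producing the same Service type,
// each wired with a different gateway.
interface Service { }
interface Provider { Service newService(); }

class Gateway {
    final String processor;
    Gateway(String processor) { this.processor = processor; }
}

class PaymentService implements Service {
    final Gateway gateway;
    PaymentService(Gateway gateway) { this.gateway = gateway; }
}

// Each provider knows how to assemble a PaymentService for its processor.
class PayPalProvider implements Provider {
    public Service newService() { return new PaymentService(new Gateway("paypal")); }
}
class ChaseProvider implements Provider {
    public Service newService() { return new PaymentService(new Gateway("chase")); }
}
```

Registering these under names like "paypal" and "chase" in the Services map from the question would then yield differently wired PaymentService instances from the same access API.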
As a synthesis of the other answers (the fourth component quoted above is the textual reason), I think the point is to limit compilation dependencies. With the SPI, you have all the tools to exclude an explicit reference to the implementation:
The META-INF/services/ directory contains files mentioning the available service provider implementations
The ServiceLoader standard class resolves the names of the available implementations and thereby allows dynamic construction [1].
The SPI was not mentioned in the first edition. It was perhaps not the right place to include it, in an item about static factories. The DriverManager mentioned in the text is a hint, but Bloch does not go into depth. In a way, the platform implements a kind of ServiceLocator pattern to reduce compilation dependencies, depending on the environment. With an SPI in your abstract factory, it becomes the ServiceFactory of a ServiceLocator, with the help of the ServiceLoader, for modularity.
The ServiceLoader iterator could be used to dynamically populate the services map of the example.
[1] In an OSGi environment, this is a subtle operation.
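For illustration, populating the providers map via ServiceLoader might look like this sketch; the name() method is an assumption, since java.util.ServiceLoader itself only discovers and instantiates the implementations it finds:

```java
import java.util.Map;
import java.util.ServiceLoader;
import java.util.concurrent.ConcurrentHashMap;

public class ProviderDiscovery {

    // Mirrors the Provider interface from the question; name() is a
    // hypothetical addition that lets each provider register itself.
    public interface Provider {
        String name();
        Object newService();
    }

    // Discover every Provider implementation declared on the classpath and
    // register it by name, with no compile-time reference to any of them.
    public static Map<String, Provider> discover() {
        Map<String, Provider> providers = new ConcurrentHashMap<>();
        for (Provider p : ServiceLoader.load(Provider.class)) {
            providers.put(p.name(), p);
        }
        return providers;
    }
}
```

Each implementation jar would then ship a META-INF/services file named after the Provider interface's binary name, listing its implementation classes; the registry code above never needs to change.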
Service Provider Interface without a provider
Let's see how it would look without a provider.
//Service interface
public interface Service {
    //Service specific methods here
}

//Class for service registration and access
public class Services {
    private Services() {}

    private static final Map<String, Service> services =
            new ConcurrentHashMap<String, Service>();
    public static final String DEFAULT_SERVICE_NAME = "<def>";

    //Registration
    public static void registerDefaultService(Service s) {
        registerService(DEFAULT_SERVICE_NAME, s);
    }
    public static void registerService(String name, Service s) {
        services.put(name, s);
    }

    //Access
    public static Service getInstance() {
        return getInstance(DEFAULT_SERVICE_NAME);
    }
    public static Service getInstance(String name) {
        // look up the service instance by name and return it
        return services.get(name);
    }
}
As you see, it's possible to create a Service Provider Interface without a Provider interface. Callers of #getInstance(..) wouldn't notice a difference.
Then why do we need a provider?
The Provider interface is an Abstract Factory and Services#newInstance(String) is a Factory Method. Both design patterns have the advantage that they decouple service implementation from service registration.
Single responsibility principle
Instead of implementing the service instantiation in a startup event handler that registers all services, you create one provider per service. This keeps things loosely coupled and easier to refactor, because a Service and its Provider can be placed near each other, for example in a separate JAR file.
"Factory methods are common in toolkits and frameworks, where library code needs to create objects of types that may be subclassed by applications using the framework." [1]
Lifetime management:
You might have noticed in the code above without providers that we register service instances instead of a provider, which could otherwise decide when to instantiate a new service instance.
This approach has some disadvantages:
1. Service instances have to be created before the first service call. Lazy initialization isn't possible. This delays startup and binds resources to services that are rarely, or even never, used.
1b. You "cannot" close services after usage, because there is no way to re-instantiate them. (With a provider you could design the service interface so that the caller has to call #close(), which informs the provider, and the provider decides whether to keep or finalize the service instance.)
2. All callers will use the same service instance, therefore you have to make sure that it's thread-safe. But making it thread-safe will make it slower. By contrast, a provider might choose to create several service instances to reduce contention.
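A provider with the lazy instantiation and the close()-based recycling described in 1b might look like this sketch (all names hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical example: a provider that creates service instances lazily and
// recycles closed ones, which a registry of bare Service instances cannot do.
class PooledProvider {

    interface Service {
        String call();
        void close();   // hands the instance back to the provider
    }

    private final Deque<Service> idle = new ArrayDeque<>();
    private int created = 0;

    // Create a service only when one is actually requested.
    synchronized Service newService() {
        if (!idle.isEmpty()) {
            return idle.pop();          // reuse a previously released instance
        }
        created++;
        return new Service() {
            public String call() { return "result"; }
            public void close()  { release(this); }
        };
    }

    private synchronized void release(Service s) { idle.push(s); }

    synchronized int instancesCreated() { return created; }
}
```

The provider, not the caller, decides whether a closed instance is kept for reuse or discarded; with bare registered instances there is no hook for either decision.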
Conclusion
A provider interface isn't required, but it encapsulates service-specific instantiation logic and optimizes resource allocation.