I am new to jBPM 6. My scenario is that I want to execute some Java code from a jBPM service task. From the documentation I am not able to understand how to use a domain-specific process and a WorkItemHandler for this.
If someone has a sample example of this, please share it; that would be very helpful.
Thank you in advance.
Here is how to add a handler inside an Eclipse Maven project. I call it the Awesome handler, but you should pick a more specific name.
1) First create a work item definition file in src/main/resources/WorkItemDefinitions.wid. My icon file is located in src/main/resources.
import org.drools.core.process.core.datatype.impl.type.StringDataType;
[
  [
    "name" : "Awesome",
    "parameters" : [
      "Message1" : new StringDataType(),
      "Message2" : new StringDataType()
    ],
    "displayName" : "Awesome",
    "icon" : "icon-info.gif"
  ]
]
2) Create a Work Item Handler Config file in src/main/resources/META-INF/CustomWorkItemHandlers.conf
[
"Awesome": new org.jbpm.examples.util.handler.AwesomeHandler()
]
3) Create a drools session config file: src/main/resources/META-INF/drools.session.conf
drools.workItemHandlers = CustomWorkItemHandlers.conf
4) Create your Handler so that it matches the class you defined in step 2
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class AwesomeHandler implements WorkItemHandler {

    public AwesomeHandler() {
        super();
    }

    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        System.out.println("Executing Awesome handler");
        // Notify the engine that the work item is done so the process can continue
        manager.completeWorkItem(workItem.getId(), null);
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        System.out.println("Aborting");
    }
}
5) After you establish the handler, you must register it with your session.
//Get session
KieSession ksession = runtime.getKieSession();
//Register handlers
ksession.getWorkItemManager().registerWorkItemHandler("Awesome", new AwesomeHandler());
At this point you should restart Eclipse. When Eclipse opens again, there should be a 'Custom Tasks' tab in the palette containing an entry labeled 'Awesome' with the specified icon.
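For completeness, here is where the runtime variable in step 5 can come from: a minimal in-memory RuntimeManager setup (a sketch only; the process file name and process id below are hypothetical):

import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.internal.io.ResourceFactory;
import org.kie.internal.runtime.manager.context.EmptyContext;

public class AwesomeProcessRunner {

    public static void main(String[] args) {
        // Build an in-memory runtime environment containing our process definition
        RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
                .newDefaultInMemoryBuilder()
                .addAsset(ResourceFactory.newClassPathResource("AwesomeProcess.bpmn2"), ResourceType.BPMN2)
                .get();
        RuntimeManager manager = RuntimeManagerFactory.Factory.get()
                .newSingletonRuntimeManager(environment);
        RuntimeEngine runtime = manager.getRuntimeEngine(EmptyContext.get());

        KieSession ksession = runtime.getKieSession();
        // Register the handler before starting any process that uses the task
        ksession.getWorkItemManager().registerWorkItemHandler("Awesome", new AwesomeHandler());
        ksession.startProcess("com.sample.awesomeProcess"); // hypothetical process id

        manager.disposeRuntimeEngine(runtime);
        manager.close();
    }
}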
I know the question is already answered, but I wanted to do the same (execute Java code in a service task) without creating a work item definition (I didn't want to use a custom task, but the service task as it is). This is how I solved it:
Here I read about the ServiceTaskHandler, but I couldn't find much information about its usage.
I read the ServiceTaskHandler code; it uses reflection to run your Java code.
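To give an idea of what it does, here is a simplified sketch of that approach from memory (the parameter names follow what I remember seeing in the handler; this is not the actual jBPM source, check org.jbpm.process.workitem.bpmn2.ServiceTaskHandler for the real logic):

import java.util.Collections;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

// Conceptual sketch: pull the target class/method from the work item's
// parameters and call it via reflection, then complete the work item
// with the result.
public class ReflectiveServiceHandler implements WorkItemHandler {

    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        try {
            String className  = (String) workItem.getParameter("Interface");
            String methodName = (String) workItem.getParameter("Operation");
            Object parameter  = workItem.getParameter("Parameter");

            Object instance = Class.forName(className).newInstance();
            Object result = instance.getClass()
                    .getMethod(methodName, parameter.getClass())
                    .invoke(instance, parameter);

            manager.completeWorkItem(workItem.getId(),
                    Collections.singletonMap("Result", result));
        } catch (Exception e) {
            manager.abortWorkItem(workItem.getId());
        }
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // nothing to clean up in this sketch
    }
}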
I found this (it says jbpm5-samples, but I tested it with jBPM 6.3); it uses a service task, and the service task executes the method "hello" of a class (HelloService) you create:
package com.test;

import java.util.HashMap;
import java.util.Map;

public class HelloService {

    public DataOutput hello(com.test.DataInput name) {
        Map<String, Object> dataMap = new HashMap<String, Object>();
        dataMap.put("s", "Hello " + name.getDataMap().get("s") + "!");
        DataOutput output = new DataOutput(dataMap);
        return output;
    }
}
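The sample's com.test.DataInput and DataOutput classes aren't shown above. Judging from how they are used, they are just thin wrappers around a Map; a reconstruction (the sample's actual code may differ slightly):

package com.test;

import java.util.Map;

// Reconstructed wrapper; com.test.DataOutput has the same shape
// (a Map field, a constructor taking the map, and getDataMap()).
public class DataInput {

    private Map<String, Object> dataMap;

    public DataInput(Map<String, Object> dataMap) {
        this.dataMap = dataMap;
    }

    public Map<String, Object> getDataMap() {
        return dataMap;
    }
}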
The ServiceTaskHandler is registered the same way as in step (5) of the accepted answer:
//Get session
KieSession ksession = runtime.getKieSession();
//Register handlers
ksession.getWorkItemManager().registerWorkItemHandler("Service Task", new ServiceTaskHandler());
After that I associated the service task with the Java class (HelloService, method hello).
To do that I used the Eclipse BPMN modeler, but I didn't find it very intuitive, so I opened the sample's bpmn file (BPMN2-ServiceProcess.bpmn2) with the modeler and filled in my service task with the same values I found there.
@mike I really hope you see my comment and are willing to help me. I followed every step you mentioned, but I still never see my custom task.
[screenshot of my project directory]
This is my project directory and I think everything is right, but to make sure I'll just post my code here.
WorkItemDefinitions.wid:
import org.drools.core.process.core.datatype.impl.type.StringDataType;
[
  [
    "name" : "Awesome",
    "parameters" : [
      "Message1" : new StringDataType(),
      "Message2" : new StringDataType()
    ],
    "displayName" : "Awesome",
    "icon" : "ezgif.com-apng-to-gif.gif"
  ]
]
drools.session.conf:
drools.workItemHandlers = CustomWorkItemHandlers.conf
CustomWorkItemHandlers.conf:
[
"Awesome": new com.sample.AwesomeHandler()
]
AwesomeHandler.java:
package com.sample;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;
public class AwesomeHandler implements WorkItemHandler {

    public AwesomeHandler() {
        super();
    }

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        System.out.println("Executing Awesome handler");
        manager.completeWorkItem(workItem.getId(), null);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        System.out.println("Aborting");
    }
}
in main:
TaskService taskService = engine.getTaskService();
ksession.getWorkItemManager().registerWorkItemHandler("Awesome", new AwesomeHandler());
I really don't know what I did wrong, and I need this for my university work. I really hope I will get a reply. I wish you a very good day ;)
Apart from the (excellent) example provided by Mike, if your only goal is to execute some Java code, you can consider using a Script task instead (and just embed the Java code in your process) or reuse the already existing Service Task that can invoke an operation on a Java class.
I’m somewhat new to Kotlin/Java, but I have been using AWS Lambda for several years now (all Python and Node). I’ve been trying to “successfully” enable SnapStart on a SpringBoot Lambda using Kotlin running on java11 corretto (the only runtime supported currently), but it doesn’t seem to be working as I would have expected.
I have hooked into the CRaC lifecycle methods beforeCheckpoint and afterRestore. In beforeCheckpoint I’ve initialized the SpringBoot application and I can see it in the deployment logs (AWS creates log streams for the deployment phase with SnapStart lambdas).
However, the concerning thing is that I'm also seeing the SpringBoot app get initialized in the function invocation logs. I would have expected that to happen only during the deployment/initialization phase, when the snapshot is being created. As a result I'm not really seeing a tremendous improvement in latency overall.
Any ideas why this is happening?
I ran into essentially the same issue (with Java instead of Kotlin) and the solution was to switch the runtime->handler from
org.springframework.cloud.function.adapter.aws.SpringBootStreamHandler
to
org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest
It is probably worth mentioning that, as of 2023-02-20, SnapStart isn't engaged for the $LATEST version of an AWS Lambda function, i.e. make sure you are invoking a particular published version. Beyond that, the Best practices for working with Lambda SnapStart article says that the main performance killers are dynamically loaded classes and network connections that need to be re-established from time to time.
From Snapstart Integration issue raised for Spring Cloud Function on GitHub I tend to think that switching to org.springframework.cloud.function.adapter.aws.FunctionInvoker probably somewhat helps, but doesn't address the performance challenges mentioned above. I'm not sure if I'm interpreting olegz's advice correctly, but what worked best so far for my AWS lambda function built with Spring Boot/Spring Cloud Function is a "warm-up" config. It hooks into the CRaC lifecycle via beforeCheckpoint() and issues dummy requests to S3 and DynamoDB before the VM snapshot is made. This way most dynamically-loaded classes are pre-loaded, and network connections are pre-established, before any subsequent function invocation takes place.
package eu.mycompany.mysamplesystem.attachmentstore.configuration;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import eu.mycompany.mysamplesystem.attachmentstore.handlers.MainEventHandler;
import lombok.extern.slf4j.Slf4j;
import org.crac.Core;
import org.crac.Resource;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.services.s3.model.NoSuchKeyException;
import java.util.ArrayList;
import java.util.List;
@Configuration
@Slf4j
public class WarmUpConfig implements Resource {

    private final MainEventHandler mainEventHandler;

    public WarmUpConfig(final MainEventHandler mainEventHandler) {
        Core.getGlobalContext().register(this);
        this.mainEventHandler = mainEventHandler;
    }

    @Override
    public void beforeCheckpoint(org.crac.Context<? extends Resource> context) {
        log.debug("Warm-up MainEventHandler by issuing dummy requests");
        dummyS3Invocation();
        dummyDynamoDbInvocation();
    }

    @Override
    public void afterRestore(org.crac.Context<? extends Resource> context) {
    }

    public void dummyS3Invocation() {
        S3Event s3Event = generateWarmUpEvent("ObjectCreated:Put");
        try {
            mainEventHandler.handleRequest(s3Event, null);
            throw new IllegalStateException("Warm-up event processing should have reached S3 and failed with S3Exception");
        } catch (NoSuchKeyException e) {
            log.debug("S3Exception is expected, since it is a warm-up");
        }
    }

    public void dummyDynamoDbInvocation() {
        S3Event s3Event = generateWarmUpEvent("ObjectRemoved:Delete");
        mainEventHandler.handleRequest(s3Event, null);
    }

    private S3Event generateWarmUpEvent(String eventName) {
        S3Event.S3BucketEntity s3BucketEntity = new S3Event.S3BucketEntity("hopefully_non_existing_bucket", null, null);
        S3Event.S3ObjectEntity s3ObjectEntity = new S3Event.S3ObjectEntity("hopefully/non/existing.key", 0L, null, null, null);
        S3Event.S3Entity s3Entity = new S3Event.S3Entity(null, s3BucketEntity, s3ObjectEntity, null);
        List<S3Event.S3EventNotificationRecord> records = new ArrayList<>();
        records.add(new S3Event.S3EventNotificationRecord(null, eventName, null, null, null, null, null, s3Entity, null));
        return new S3Event(records);
    }
}
P.S.: The MainEventHandler is basically the entry point to all the business logic exposed by the Function.
import java.util.function.Function;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import lombok.RequiredArgsConstructor;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.Message;

@SpringBootApplication
@RequiredArgsConstructor
public class Lambda {

    private final MainEventHandler mainEventHandler;

    public static void main(String... args) {
        SpringApplication.run(Lambda.class, args);
    }

    @Bean
    public Function<Message<S3Event>, String> defaultFunctionLambda() {
        return message -> {
            Context context = message.getHeaders().get("aws-context", Context.class);
            return mainEventHandler.handleRequest(message.getPayload(), context);
        };
    }
}
I downloaded a fresh 6.1 broadleaf-commerce and ran it successfully on my local machine (a MacBook) via java -javaagent:./admin/target/agents/spring-instrument.jar -jar admin/target/admin.jar. But on my CentOS 7 machine, running sudo java -javaagent:./admin/target/agents/spring-instrument.jar -jar admin/target/admin.jar fails with the following error:
2020-10-12 13:20:10.838 INFO 2481 --- [ main] c.b.solr.autoconfigure.SolrServer : Syncing solr config file: jar:file:/home/mynewuser/seafood-broadleaf/admin/target/admin.jar!/BOOT-INF/lib/broadleaf-boot-starter-solr-2.2.1-GA.jar!/solr/standalone/solrhome/configsets/fulfillment_order/conf/solrconfig.xml to: /tmp/solr-7.7.2/solr-7.7.2/server/solr/configsets/fulfillment_order/conf/solrconfig.xml
*** [WARN] *** Your Max Processes Limit is currently 62383.
It should be set to 65000 to avoid operational disruption.
If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
WARNING: Starting Solr as the root user is a security risk and not considered best practice. Exiting.
Please consult the Reference Guide. To override this check, start with argument '-force'
2020-10-12 13:20:11.021 ERROR 2481 --- [ main] c.b.solr.autoconfigure.SolrServer : Problem starting Solr
Here is the source code of the Solr configuration. I believe it is the place to change the configuration so that Solr runs with the -force argument, programmatically:
package com.community.core.config;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.broadleafcommerce.core.search.service.SearchService;
import org.broadleafcommerce.core.search.service.solr.SolrConfiguration;
import org.broadleafcommerce.core.search.service.solr.SolrSearchServiceImpl;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;
/**
 * @author Phillip Verheyden (phillipuniverse)
 */
@Component
public class ApplicationSolrConfiguration {

    @Value("${solr.url.primary}")
    protected String primaryCatalogSolrUrl;

    @Value("${solr.url.reindex}")
    protected String reindexCatalogSolrUrl;

    @Value("${solr.url.admin}")
    protected String adminCatalogSolrUrl;

    @Bean
    public SolrClient primaryCatalogSolrClient() {
        return new HttpSolrClient.Builder(primaryCatalogSolrUrl).build();
    }

    @Bean
    public SolrClient reindexCatalogSolrClient() {
        return new HttpSolrClient.Builder(reindexCatalogSolrUrl).build();
    }

    @Bean
    public SolrClient adminCatalogSolrClient() {
        return new HttpSolrClient.Builder(adminCatalogSolrUrl).build();
    }

    @Bean
    public SolrConfiguration blCatalogSolrConfiguration() throws IllegalStateException {
        return new SolrConfiguration(primaryCatalogSolrClient(), reindexCatalogSolrClient(), adminCatalogSolrClient());
    }

    @Bean
    protected SearchService blSearchService() {
        return new SolrSearchServiceImpl();
    }
}
Let me preface this by saying you would be better off simply not starting the application as root. If you are in Docker, you can use the USER command to switch to a non-root user.
The Solr server startup in Broadleaf Community is done programmatically via the broadleaf-boot-starter-solr dependency. This is the wrapper around Solr that ties it to the Spring lifecycle. All of the real magic happens in the com.broadleafcommerce.solr.autoconfigure.SolrServer class.
In that class, you will see a startSolr() method. This method is what adds startup arguments to Solr.
In your case, you will need to mostly copy this method wholesale and use cmdLine.addArgument(...) to add additional arguments. Example:
class ForceStartupSolrServer extends SolrServer {

    public ForceStartupSolrServer(SolrProperties props) {
        super(props);
    }

    protected void startSolr() {
        if (!isRunning()) {
            if (!downloadSolrIfApplicable()) {
                throw new IllegalStateException("Could not download or expand Solr, see previous logs for more information");
            }
            stopSolr();
            synchConfig();
            {
                CommandLine cmdLine = new CommandLine(getSolrCommand());
                cmdLine.addArgument("start");
                cmdLine.addArgument("-p");
                cmdLine.addArgument(Integer.toString(props.getPort()));
                // START MODIFICATION
                cmdLine.addArgument("-force");
                // END MODIFICATION
                Executor executor = new DefaultExecutor();
                PumpStreamHandler streamHandler = new PumpStreamHandler(System.out);
                streamHandler.setStopTimeout(1000);
                executor.setStreamHandler(streamHandler);
                try {
                    executor.execute(cmdLine);
                    created = true;
                    checkCoreStatus();
                } catch (IOException e) {
                    LOG.error("Problem starting Solr", e);
                }
            }
        }
    }
}
Then create an @Configuration class to override the blAutoSolrServer bean created by SolrAutoConfiguration (note the specific package requirement of org.broadleafoverrides.config):
package org.broadleafoverrides.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OverrideConfiguration {

    @Bean
    public ForceStartupSolrServer blAutoSolrServer(SolrProperties props) {
        return new ForceStartupSolrServer(props);
    }
}
I would really like to use YAML config for Spring Boot, as I find it quite readable and useful to have a single file showing what properties are active in my different profiles. Unfortunately, I'm finding that setting properties in application.yml can be rather fragile.
Things like using a tab instead of spaces will cause properties to not exist (without warnings as far as I can see), and all too often I find that my active profiles are not being set, due to some unknown issue with my YAML.
So I was wondering whether there are any hooks that would enable me to get hold of the currently active profiles and properties, so that I could log them.
Similarly, is there a way to cause start-up to fail if the application.yml contains errors? Either that or a means for me to validate the YAML myself, so that I could kill the start-up process.
In addition to the other answers: logging the active properties on the context refreshed event.
Java 8
package mypackage;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MapPropertySource;
import org.springframework.stereotype.Component;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
@Slf4j
@Component
public class AppContextEventListener {

    @EventListener
    public void handleContextRefreshed(ContextRefreshedEvent event) {
        printActiveProperties((ConfigurableEnvironment) event.getApplicationContext().getEnvironment());
    }

    private void printActiveProperties(ConfigurableEnvironment env) {
        System.out.println("************************* ACTIVE APP PROPERTIES ******************************");
        List<MapPropertySource> propertySources = new ArrayList<>();
        env.getPropertySources().forEach(it -> {
            if (it instanceof MapPropertySource && it.getName().contains("applicationConfig")) {
                propertySources.add((MapPropertySource) it);
            }
        });
        propertySources.stream()
                .map(propertySource -> propertySource.getSource().keySet())
                .flatMap(Collection::stream)
                .distinct()
                .sorted()
                .forEach(key -> {
                    try {
                        System.out.println(key + "=" + env.getProperty(key));
                    } catch (Exception e) {
                        log.warn("{} -> {}", key, e.getMessage());
                    }
                });
        System.out.println("******************************************************************************");
    }
}
Kotlin
package mypackage

import mu.KLogging
import org.springframework.context.event.ContextRefreshedEvent
import org.springframework.context.event.EventListener
import org.springframework.core.env.ConfigurableEnvironment
import org.springframework.core.env.EnumerablePropertySource
import org.springframework.stereotype.Component

@Component
class AppContextEventListener {

    companion object : KLogging()

    @EventListener
    fun handleContextRefreshed(event: ContextRefreshedEvent) {
        printActiveProperties(event.applicationContext.environment as ConfigurableEnvironment)
    }

    fun printActiveProperties(env: ConfigurableEnvironment) {
        println("************************* ACTIVE APP PROPERTIES ******************************")
        env.propertySources
            .filter { it.name.contains("applicationConfig") }
            .map { it as EnumerablePropertySource<*> }
            .map { it.propertyNames.toList() }
            .flatMap { it }
            .distinct()
            .sorted()
            .forEach {
                try {
                    println("$it=${env.getProperty(it)}")
                } catch (e: Exception) {
                    logger.warn("$it -> ${e.message}")
                }
            }
        println("******************************************************************************")
    }
}
Output like:
************************* ACTIVE APP PROPERTIES ******************************
server.port=3000
spring.application.name=my-app
...
2017-12-29 13:13:32.843 WARN 36252 --- [ main] m.AppContextEventListener : spring.boot.admin.client.service-url -> Could not resolve placeholder 'management.address' in value "http://${management.address}:${server.port}"
...
spring.datasource.password=
spring.datasource.url=jdbc:postgresql://localhost/my_db?currentSchema=public
spring.datasource.username=db_user
...
******************************************************************************
I had the same problem, and wish there were a debug flag that would tell the profile processing system to spit out some useful logging. One possible way of doing it would be to register an event listener for your application context and print out the profiles from the environment. I haven't tried doing it this way myself, so your mileage may vary. I think maybe something like what's outlined here:
How to add a hook to the application context initialization event?
Then you'd do something like this in your listener:
System.out.println("Active profiles: " + Arrays.toString(ctxt.getEnvironment().getActiveProfiles()));
Might be worth a try. Another way you could probably do it would be to declare the Environment to be injected in the code where you need to print the profiles. I.e.:
@Component
public class SomeClass {

    @Autowired
    private Environment env;

    ...

    private void dumpProfiles() {
        // Print whatever needed from env here
    }
}
The Actuator /env endpoint displays properties, but it doesn't display which property value is actually active. Very often you may want to override your application properties with:
profile-specific application properties
command line arguments
OS environment variables
Thus you will have the same property with different values in several sources.
The snippet below prints the active application property values on startup:
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

import javax.annotation.PostConstruct;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.AbstractEnvironment;
import org.springframework.core.env.PropertiesPropertySource;
import org.springframework.core.env.PropertySource;

@Configuration
public class PropertiesLogger {

    private static final Logger log = LoggerFactory.getLogger(PropertiesLogger.class);

    @Autowired
    private AbstractEnvironment environment;

    @PostConstruct
    public void printProperties() {
        log.info("**** APPLICATION PROPERTIES SOURCES ****");
        Set<String> properties = new TreeSet<>();
        for (PropertiesPropertySource p : findPropertiesPropertySources()) {
            log.info(p.toString());
            properties.addAll(Arrays.asList(p.getPropertyNames()));
        }
        log.info("**** APPLICATION PROPERTIES VALUES ****");
        print(properties);
    }

    private List<PropertiesPropertySource> findPropertiesPropertySources() {
        List<PropertiesPropertySource> propertiesPropertySources = new LinkedList<>();
        for (PropertySource<?> propertySource : environment.getPropertySources()) {
            if (propertySource instanceof PropertiesPropertySource) {
                propertiesPropertySources.add((PropertiesPropertySource) propertySource);
            }
        }
        return propertiesPropertySources;
    }

    private void print(Set<String> properties) {
        for (String propertyName : properties) {
            log.info("{}={}", propertyName, environment.getProperty(propertyName));
        }
    }
}
If application.yml contains errors, it will cause a failure on startup. I guess it depends what you mean by "error" though. Certainly it will fail if the YAML is not well formed. Also, if you are setting @ConfigurationProperties fields that are not marked ignoreInvalidFields=true, or if you set a value that cannot be converted, it will fail. That's a pretty wide range of errors.
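To illustrate that second case, a minimal hypothetical sketch (the class and property names are mine): with the default ignoreInvalidFields=false, a value that cannot be converted to the target field's type fails startup.

import org.springframework.boot.context.properties.ConfigurationProperties;

// Hypothetical example: binding "app.timeout-seconds: abc" to this class
// fails startup because "abc" cannot be converted to an int
// (unless ignoreInvalidFields is set to true on the annotation).
@ConfigurationProperties(prefix = "app")
public class AppProperties {

    private int timeoutSeconds;

    public int getTimeoutSeconds() {
        return timeoutSeconds;
    }

    public void setTimeoutSeconds(int timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds;
    }
}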
The active profiles will probably be logged on startup by the Environment implementation (but in any case it's easy for you to grab that and log it in your launcher code; the toString() of the Environment will list the active profiles, I think). Active profiles (and more) are also available in the /env endpoint if you add the Actuator.
In case you want to get the active profiles before the beans/application are initialized, the only way I found is registering a custom Banner in your SpringBootServletInitializer/SpringApplication (e.g. ApplicationWebXml in a JHipster application).
e.g.
@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
    // set a default to use when no profile is configured.
    DefaultProfileUtil.addDefaultProfile(builder.application());
    return builder.sources(MyApp.class).banner(this::printBanner);
}

/** Custom 'banner' to obtain early access to the Spring configuration to validate and debug it. */
private void printBanner(Environment env, Class<?> sourceClass, PrintStream out) {
    if (env.getProperty("spring.datasource.url") == null) {
        throw new RuntimeException(
            "'spring.datasource.url' is not configured! Check your configuration files and the value of 'spring.profiles.active' in your launcher.");
    }
    ...
}
I'm currently trying to set up my own implementation of a ManagedServiceFactory. Here is what I'm trying to do: I need multiple instances of some service on a per-configuration basis. With DS the components worked perfectly, but now I found out that these services should handle their own lifecycle (i.e. (de)registration at the service registry) depending on the availability of some external resource, which is impossible with DS.
Thus my idea was to create a ManagedServiceFactory, which would then receive configs from the ConfigurationAdmin and create instances of my class. These in turn would try to connect to the resource in a separate thread and register themselves as a service when they're ready to operate.
Since I had no luck implementing this yet, I tried to break everything down to the most basic parts, not even dealing with the dynamic (de)registration, just trying to get the ManagedServiceFactory to work:
package my.project.factory;
import java.util.Dictionary;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedServiceFactory;
public class Factory implements BundleActivator, ManagedServiceFactory {

    private ServiceRegistration myReg;
    private BundleContext ctx;
    private Map<String, ServiceRegistration> services;

    @Override
    public void start(BundleContext context) throws Exception {
        System.out.println("starting factory...");
        this.ctx = context;
        Dictionary<String, Object> properties = new Hashtable<String, Object>();
        properties.put(Constants.SERVICE_PID, "my.project.servicefactory");
        myReg = context.registerService(ManagedServiceFactory.class, this, properties);
        System.out.println("registered as ManagedServiceFactory");
        services = new HashMap<String, ServiceRegistration>();
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        for (ServiceRegistration reg : services.values()) {
            System.out.println("deregister " + reg);
            reg.unregister();
        }
        if (myReg != null) {
            myReg.unregister();
        } else {
            System.out.println("my service registration was already null " +
                    "(although it shouldn't be)!");
        }
    }

    @Override
    public String getName() {
        System.out.println("returning factory name");
        return "ServiceFactory";
    }

    @Override
    public void updated(String pid, Dictionary properties) throws ConfigurationException {
        System.out.println("retrieved update for pid " + pid);
        ServiceRegistration reg = services.get(pid);
        if (reg == null) {
            services.put(pid, ctx.registerService(ServiceInterface.class,
                    new Service(), properties));
        } else {
            // i should do some update here
        }
    }

    @Override
    public void deleted(String pid) {
        ServiceRegistration reg = services.get(pid);
        if (reg != null) {
            reg.unregister();
        }
    }
}
Now, it should receive configurations from the ConfigurationAdmin for PID my.project.servicefactory, shouldn't it?
But it does not receive any configurations from the ConfigurationAdmin. The bundle is started, the service is registered, and in the web console I can see that the config admin holds a reference to my ManagedServiceFactory. Is there a certain property which should be set? The interface specification does not suggest that. Actually, my implementation is more or less the same as the example there. I have no idea what I'm doing wrong here; any pointers to the solution are very welcome.
Also, I originally thought to implement the ManagedServiceFactory itself as a DS component, which should also be possible, but I failed at the same point: no configurations are handed over by the ConfigAdmin.
update
To clarify the question: I think this is mainly a configuration problem. As I see it, I should be able to specify two PIDs for the factory: one which identifies a configuration for the factory itself (if any), and one under which services are produced through this factory, which I thought would be the factory PID. But the framework constants do not hold anything like this.
update 2
After searching the Felix FileInstall source code a bit, I found out that it treats configuration files differently depending on whether there is a - in the filename. With the configuration file named my.project.servicefactory.cfg it did not work, but configs named my.project.servicefactory-foo.cfg and my.project.servicefactory-bar.cfg were properly handed over to my ManagedServiceFactory as expected, and multiple services with ServiceInterface were registered. Hurray!
update 3
As proposed by Neil, I put the declarative services part into a new question to limit the scope of this one.
I think the problem is that you have a singleton configuration record rather than a factory record. You need to call Config Admin's createFactoryConfiguration method, using my.project.servicefactory as the factory PID.
If you are using Apache Felix FileInstall (which is a nice easy way to create config records without writing code) then you need to create a file called my.project.servicefactory-1.cfg in the load directory. You can create further configurations with the same factoryPID by calling them my.project.servicefactory-2.cfg etc.
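For the programmatic route, a minimal sketch (assuming you already have a ConfigurationAdmin service reference at hand; the property key/value are placeholders):

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class FactoryConfigCreator {

    // Creates a factory configuration; Config Admin generates a unique pid for it
    // and passes it to your ManagedServiceFactory.updated(pid, properties).
    public void createConfig(ConfigurationAdmin configAdmin) throws IOException {
        Configuration cfg =
                configAdmin.createFactoryConfiguration("my.project.servicefactory");
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("example.key", "example value"); // your service's settings
        cfg.update(props);
    }
}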
I created some OSGi bundles. One of them has a function to export data to XML using XStream; that works just fine. Importing also works when the bundle is used as a plain library, outside an OSGi context.
But if I call my import from within another bundle, I get a com.thoughtworks.xstream.converters.ConversionException with the following debugging information printed out:
---- Debugging information ----
message : Cannot find class ChildDate
cause-exception : com.thoughtworks.xstream.converters.reflection.ObjectAccessException
cause-message : Cannot find class ChildData
class : ChildData
required-type : ChildData
path : /ParentData/ChildData
-------------------------------
message : Could not call ParentData.readObject()
cause-exception : com.thoughtworks.xstream.converters.ConversionException
cause-message : Cannot find class ParentData : Cannot find class ChildData
class : ParentData
required-type : ChildData
path : /ParentData/ChildData
-------------------------------
I think it's a similar problem to this: XStream and OSGi or this: CannotResolveClassException in OSGi environment
So I tried to solve it by setting the ClassLoader, but it does not work.
Parts of my ParentData class:
public class ParentData implements Serializable {

    // [...]

    private static ClassLoader classLoaderForImport = ParentData.class.getClassLoader();

    // [...]

    public static void setClassLoaderForImport(ClassLoader classLoaderForImport) {
        ParentData.classLoaderForImport = classLoaderForImport;
    }

    // [...]

    public static ParentData importFromXMLFile(String path) {
        return importFromFile(new DomDriver(), path);
    }

    private static ParentData importFromFile(HierarchicalStreamDriver driver, String path) {
        try {
            XStream xstream = new XStream(driver);
            // set the classloader, as the default one won't work in any case
            xstream.setClassLoader(ParentData.classLoaderForImport);
            xstream.alias("ParentData", classLoaderForImport.loadClass(ParentData.class.getName())); //ParentData.class);
            xstream.alias("ChildData", classLoaderForImport.loadClass(ChildData.class.getName())); //ChildData.class);
            Reader reader = new FileReader(path);
            Object object = xstream.fromXML(reader);
            return (ParentData) object;
        } catch (ClassNotFoundException ex) {
            System.out.println("This did not work.");
        } catch (FileNotFoundException e) {
            System.out.println("File " + path + " not found.");
        }
        return null; // nothing to return if the import failed
    }

    // [...]
}
The function xstream.fromXML(reader) does not work, but classLoaderForImport.loadClass(ChildData.class.getName()) doesn't fail.
This is how I call it from another Bundle:
ParentData.setClassLoaderForImport(ParentData.class.getClassLoader());
data = ParentData.importFromXMLFile(path); // this is where the Exception happens
I also tried this.getClass().getClassLoader() and ChildData.class.getClassLoader()
Could it be that this does not work because this function is called from a third bundle?
Some more Info:
Java version: 1.4 (no, I can not upgrade to 1.5 or 1.6)
Execution environment: J2SE-1.6
Maven Version: 2.2.1
Running with Pax Runner (1.5.0) from OPS4J - http://www.ops4j.org
OSGi: Equinox 3.6.0
XStream 1.3.1 (com.springsource.com.thoughtworks.xstream)
Any help would be very welcome!
Assuming your question contains all the relevant code, the problem is given away by the following:
Could not call ParentData.readObject()
You have declared that your ParentData implements Serializable. Although the Serializable interface does not include any methods, both Java serialization and XStream look for readObject and writeObject methods. I'm a little rusty on the exact situations in which these are mandatory, but I would suggest removing implements Serializable from the ParentData class.
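For reference, these are the private hooks that Java serialization (and XStream's serialization support) looks up reflectively. If you do keep Serializable and want custom behavior, the signatures must be exactly as follows (the class name here is just for illustration):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ParentDataSketch implements Serializable {

    // Both methods must be private and have exactly these signatures;
    // otherwise they are silently ignored.
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject(); // default field-by-field serialization
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject(); // default field-by-field deserialization
    }
}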