I am building a wrapper library using Apache Flink in which I am listening to (consuming from) multiple topics, and I have a set of applications that want to process the messages from those topics.
Example:
I have 10 applications app1, app2, app3 ... app10 (each of them is a Java library that is part of the same on-prem project, i.e., all 10 JARs are part of the same .war file),
out of which only 5 are supposed to consume the messages coming to the consumer group. I am able to do the filtering for those 5 apps with the help of a filter function.
The challenge is in the strStream.process(executionServiceInterface) call, where app1 provides an implementation class of ExecutionServiceInterface as ExecutionServiceApp1Impl and, similarly, app2 provides ExecutionServiceApp2Impl.
When there are multiple implementations available, Spring wants us to provide a @Qualifier annotation, or to mark one of the implementations (ExecutionServiceApp1Impl, ExecutionServiceApp2Impl) with @Primary.
But I don't really want to do this, as I am building a generic wrapper library that should support any number of such applications (app1, app2, etc.), and all of them should be able to provide their own implementation logic (ExecutionServiceApp1Impl, ExecutionServiceApp2Impl).
Can someone help me here? How can I solve this?
Below is the code for reference.
@Autowired
private ExecutionServiceInterface executionServiceInterface;
public void init() throws Exception {
    StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
    FlinkKafkaConsumer011<String> consumer = createStringConsumer(topicList, kafkaAddress, kafkaGroup);
    if (consumer != null) {
        DataStream<String> strStream = environment.addSource(consumer);
        strStream.filter(filterFunctionInterface).process(executionServiceInterface);
    }
}
public FlinkKafkaConsumer011<String> createStringConsumer(List<String> listOfTopics, String kafkaAddress, String kafkaGroup) throws Exception {
    Properties props = new Properties();
    props.setProperty("bootstrap.servers", kafkaAddress);
    props.setProperty("group.id", kafkaGroup);
    return new FlinkKafkaConsumer011<>(listOfTopics, new SimpleStringSchema(), props);
}
Many thanks in advance!!
I solved this problem by using reflection; below is the code that solved the issue.
Note: this requires me to know the list of fully qualified class names and method names, along with their parameters.
@Component
public class SampleJobExecutor extends ProcessFunction<String, String> {

    @Autowired
    MyAppProperties myAppProperties;

    @Override
    public void processElement(String inputMessage, ProcessFunction<String, String>.Context context,
            Collector<String> collector) throws Exception {
        String className = null;
        String methodName = null;
        try {
            Map<String, List<String>> map = myAppProperties.getMapOfImplementors();
            JSONObject json = new JSONObject(inputMessage);
            if (json.has("appName")) {
                className = map.get(json.getString("appName")).get(0);
                methodName = map.get(json.getString("appName")).get(1);
            }
            Class<?> forName = Class.forName(className);
            // getDeclaredConstructor().newInstance() avoids the deprecated Class.newInstance()
            Object job = forName.getDeclaredConstructor().newInstance();
            Method method = forName.getDeclaredMethod(methodName, String.class);
            method.invoke(job, inputMessage);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
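For illustration, here is a hypothetical sketch of the mapping that myAppProperties.getMapOfImplementors() is expected to return. Only the shape (appName -> [fully qualified class name, method name]) is taken from the code above; the package names and the method name "execute" are made up:
// Hypothetical example of the implementors mapping consumed by processElement():
// appName -> [fully qualified implementation class, name of a method taking a single String]
Map<String, List<String>> mapOfImplementors = new HashMap<>();
mapOfImplementors.put("app1", Arrays.asList("com.example.app1.ExecutionServiceApp1Impl", "execute"));
mapOfImplementors.put("app2", Arrays.asList("com.example.app2.ExecutionServiceApp2Impl", "execute"));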
After upgrading to Spring Boot v2.7.1 we are seeing lots of queued tasks; we never saw queued tasks increasing like this with the previous version we were using, v2.2.2.
Our team has tried to investigate v2.7.1 but couldn't find anything in this version.
Can anyone please review the code and let us know what we are missing or have written wrong that is causing the issue? We are using Spring Integration to pull emails from a client server, and for that we have added a TaskExecutor to allow concurrent polling.
Versions that we use:
Spring Boot = 2.7.1
Spring Integration = 5.5.14
Earlier we were using:
Spring Boot = 2.2.2 release
Spring Integration = 5.2.3 release
I've attached the code below.
Configuration class for Imap Integration
@Configuration
@EnableIntegration
public class ImapIntegrationConfig {
private final ApplicationContext applicationContext;
@Autowired
public ImapIntegrationConfig(ApplicationContext applicationContext) {
this.applicationContext = applicationContext;
}
#Bean("mailTaskExecutor")
public ThreadPoolTaskExecutor mailTaskExecutor() {
ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setMaxPoolSize(1000);
taskExecutor.setCorePoolSize(100);
taskExecutor.setTaskDecorator(new SecurityAwareTaskDecorator(applicationContext));
taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
taskExecutor.setAwaitTerminationSeconds(Integer.MAX_VALUE);
return taskExecutor;
}
#Bean("imapMailChannel")
public ExecutorChannelSpec imapMailChannel() {
return MessageChannels.executor(mailTaskExecutor());
}
@Bean
public HeaderMapper<MimeMessage> mailHeaderMapper() {
return new DefaultMailHeaderMapper();
}
}
ImapListener Class to register the flow
public void registerImapFlow(ImapSetting imapSetting) {
ImapMailReceiver mailReceiver = createImapMailReceiver(imapSetting);
// create the flow for an email process
//@formatter:off
StandardIntegrationFlow flow = IntegrationFlows
.from(Mail.imapInboundAdapter(mailReceiver),
consumer -> consumer.autoStartup(true)
.poller(Pollers.fixedDelay(Duration.ofSeconds(5), Duration.ofMinutes(2))
.taskExecutor(taskExecutor)
.errorHandler(t -> logger.error("Error while polling emails for address " + imapSetting.getUsername(), t))
.maxMessagesPerPoll(10)))
.enrichHeaders(Map.of(CONCERN_CODE, imapSetting.getConcernCode(), IMAP_CONFIG_ID, imapSetting.getImapSettingId()))
.channel(imapMailChannel).get();
//@formatter:on
// give the bean a unique name to avoid clashes with multiple imap settings
String flowId = concernIdentifier.getConcernIdentifier() + "-" + imapSetting.getImapSettingId();
IntegrationFlowContext.IntegrationFlowRegistration existingFlow = integrationFlowContext.getRegistrationById(flowId);
if (existingFlow != null) {
// destroy the previous beans
existingFlow.destroy();
}
// register the new flow
integrationFlowContext.registration(flow).id(flowId).useFlowIdAsPrefix().register();
}
Process message method
@ServiceActivator(inputChannel = "imapMailChannel")
public void processMessage(Message<?> message) throws InvalidMessageException {
String concern = (String) message.getHeaders().get(CONCERN_CODE);
if (isEmpty(concern)) {
logger.error("Received null concern!");
}
Long imapConfigId = (Long) message.getHeaders().get(IMAP_CONFIG_ID);
String logMessage = null;
String messageId = null;
try {
Object payload = message.getPayload();
if (payload instanceof MimeMultipart) {
//.......................//
}
else if (payload instanceof String) {
//......................//
}
} catch (Exception e) {
logger.error("Error while processing " + logMessage, e);
if (concern != null) {
metricUtil.emailFailed(concern);
}
throw new MaxxtonException("CCM-MessageID: Exception in processMessage() method", e, MessageErrorCode.UNABLE_TO_PROCESS_EMAIL);
}
metricUtil.emailProcessed(concern);
}
ImapMailReceiver method
private ImapMailReceiver createImapMailReceiver(ImapSetting imapSettings) {
String url = String.format(imapSettings.getImapUrl(), URLEncoder.encode(imapSettings.getUsername(), UTF_8), URLEncoder.encode(imapSettings.getPassword(), UTF_8));
ImapMailReceiver receiver = new ImapMailReceiver(url);
receiver.setSimpleContent(true);
Properties mailProperties = new Properties();
mailProperties.put("mail.debug", "false");
mailProperties.put("mail.imap.connectionpoolsize", "5");
mailProperties.put("mail.imap.fetchsize", 4194304);
mailProperties.put("mail.imap.connectiontimeout", 15000);
mailProperties.put("mail.imap.timeout", 30000);
mailProperties.put("mail.imaps.connectionpoolsize", "5");
mailProperties.put("mail.imaps.fetchsize", 4194304);
mailProperties.put("mail.imaps.connectiontimeout", 15000);
mailProperties.put("mail.imaps.timeout", 30000);
receiver.setJavaMailProperties(mailProperties);
receiver.setSearchTermStrategy(this::notSeenTerm);
receiver.setAutoCloseFolder(false);
receiver.setShouldDeleteMessages(false);
receiver.setShouldMarkMessagesAsRead(true);
receiver.setHeaderMapper(mailHeaderMapper);
receiver.setEmbeddedPartsAsBytes(false);
return receiver;
}
Attached is a screenshot, taken from Grafana, of the active and queued tasks after we upgraded to Spring Boot v2.7.1 and Spring Integration v5.5.14.
At a glance it all looks OK, unless you really never close that folder manually elsewhere, given that you use receiver.setAutoCloseFolder(false).
There is no reason for that .taskExecutor(taskExecutor), since you use MessageChannels.executor(mailTaskExecutor()) immediately after the message is produced from the Mail.imapInboundAdapter().
I remember that on Gitter I suggested you check how it works with spring.task.scheduling.pool.size=10 placed into application.properties. This is the only obvious difference between the mentioned versions: https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.spring-integration.
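For reference, that suggestion is a single line in application.properties (10 is the value mentioned above):
spring.task.scheduling.pool.size=10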
Your screenshot doesn't prove that the problem is exactly with Spring Integration. Perhaps tasks are queued somehow by the tool that exports metrics to Grafana. I believe you have upgraded more than just Spring Integration in your project...
I was working with the Karate framework to test my REST service and it works great; however, I have a service that consumes a message from a Kafka topic, persists it to Mongo, and finally notifies Kafka.
I made a Java producer in my Karate project; it is called from JS so it can be used by a feature.
Then I have a consumer to check the message.
Feature:
* def kafkaProducer = read('../js/KafkaProducer.js')
JS:
function(kafkaConfiguration){
var Producer = Java.type('x.y.core.producer.Producer');
var producer = new Producer(kafkaConfiguration);
return producer;
}
Java:
public class Producer {
private static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);
private static final String KEY = "C636E8E238FD7AF97E2E500F8C6F0F4C";
private KafkaConfiguration kafkaConfiguration;
private ObjectMapper mapper;
private AESEncrypter aesEncrypter;
public Producer(KafkaConfiguration kafkaConfiguration) {
kafkaConfiguration.getProperties().put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
kafkaConfiguration.getProperties().put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
this.kafkaConfiguration = kafkaConfiguration;
this.mapper = new ObjectMapper();
this.aesEncrypter = new AESEncrypter(KEY);
}
public String produceMessage(String payload) {
// Just notify kafka with payload and return id of payload
}
Other class
public class KafkaConfiguration {
private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);
private Properties properties;
public KafkaConfiguration(String host) {
try {
properties = new Properties();
properties.put(BOOTSTRAP_SERVERS_CONFIG, host);
properties.put(ConsumerConfig.GROUP_ID_CONFIG, "karate-integration-test");
properties.put(ConsumerConfig.CLIENT_ID_CONFIG, "offset123");
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
} catch (Exception e) {
LOGGER.error("Fail creating the consumer...", e);
throw e;
}
}
public Properties getProperties() {
return properties;
}
public void setProperties(Properties properties) {
this.properties = properties;
}
}
I would like to use the producer code with an annotation, the way Cucumber does:
@Then("^Notify kafka with payload (-?\\d+)$")
public void validateResult(String payload) throws Throwable {
new Producer(kafkaConfiguration).produceMessage(payload);
}
and in the feature use
Then Notify kafka with payload "{example:value}"
I want to do that because I want to reuse that code in a base project so it can be included in other projects.
If annotations don't work, maybe you can suggest another way to do it.
The answer is simple: use normal Java / Maven concepts. Move the common Java code to the "main" packages (src/main/java). Now all you need to do is build a JAR and add it as a dependency to any Karate project.
The last piece of the puzzle is this: use the classpath: prefix to refer to any features or JS files in the JAR. Karate will be able to pick them up.
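For example (the exact path inside the JAR is an assumption), the feature shown earlier could then read the JS helper with:
* def kafkaProducer = read('classpath:js/KafkaProducer.js')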
EDIT: Sorry, Karate does not support Cucumber or step definitions. It has a much simpler approach. Please read this for details: https://github.com/intuit/karate/issues/398
I am using the Akka actor system for multithreading. It works fine in normal use cases; however, Akka closes the JVM on a fatal error. Please let me know how I can configure Akka to disable "akka.jvm-exit-on-fatal-error" in Java. Below is the code.
public class QueueListener implements MessageListener {
private String _queueName=null;
public static boolean isActorinit=false;
public static ActorSystem system=null;
private ActorRef myActor;
public QueueListener(String actorId, String qName){
this._queueName = qName;
if(!isActorinit){
system=ActorSystem.create(actorId);
isActorinit=true;
}
myActor=system.actorOf( Props.create(MessageExecutor.class, qName),qName+"id");
}
/*
* (non-Javadoc)
* @see javax.jms.MessageListener#onMessage(javax.jms.Message)
*/
@Override
public void onMessage(Message msg) {
executeRequest(msg);
}
/** This method will process the message fetched by the listener.
*
* @param msg - javax.jms.Message parameter containing the queue message
*/
private void executeRequest(Message msg){
String requestData=null;
try {
if(msg instanceof TextMessage){
TextMessage textMessage= (TextMessage) msg;
requestData = textMessage.getText().toString();
}else if(msg instanceof ObjectMessage){
ObjectMessage objMsg = (ObjectMessage) msg;
requestData = objMsg.getObject().toString();
}
myActor.tell(requestData, ActorRef.noSender());
} catch (JMSException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
}
}
Create an application.conf file in your project (src/main/resources, for example) and add the following content:
akka {
jvm-exit-on-fatal-error = false
}
Of course there is no need to create a new config file if you already have one; in that case just add the new entry:
jvm-exit-on-fatal-error = false
Be careful: letting the JVM run after fatal errors like an OutOfMemoryError is normally not a good idea and leads to serious problems.
See here for the configuration details - you can provide a separate config file, but for the small number of changes I was making to the akka config (and also given that I was already using several Spring config files) I found it easier to construct and load the configuration programmatically. Your config would look something like:
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
StringBuilder configBuilder = new StringBuilder();
configBuilder.append("{\"akka\" : { \"jvm-exit-on-fatal-error\" : \"off\"}}");
Config mergedConfig = ConfigFactory.load(ConfigFactory.parseString(configBuilder.toString()).withFallback(ConfigFactory.load()));
system = ActorSystem.create(actorId, mergedConfig);
This is loading the default Config, overriding its jvm-exit-on-fatal-error entry, and using this new Config as the config for the ActorSystem. I haven't tested this particular config, so there is a 50% chance that you'll get some sort of JSON parsing error when you try to use it; for comparison, the actual config I use which DOES parse correctly (but which doesn't override jvm-exit-on-fatal-error) is
private ActorSystem createActorSystem(int batchManagerCount) {
int maxActorCount = batchManagerCount * 5 + 1;
StringBuilder configBuilder = new StringBuilder();
configBuilder.append("{\"akka\" : { \"actor\" : { \"default-dispatcher\" : {");
configBuilder.append("\"type\" : \"Dispatcher\",");
configBuilder.append("\"executor\" : \"default-executor\",");
configBuilder.append("\"throughput\" : \"1\",");
configBuilder.append("\"default-executor\" : { \"fallback\" : \"thread-pool-executor\" },");
StringBuilder executorConfigBuilder = new StringBuilder();
executorConfigBuilder.append("\"thread-pool-executor\" : {");
executorConfigBuilder.append("\"keep-alive-time\" : \"60s\",");
executorConfigBuilder.append(String.format("\"core-pool-size-min\" : \"%d\",", maxActorCount));
executorConfigBuilder.append(String.format("\"core-pool-size-max\" : \"%d\",", maxActorCount));
executorConfigBuilder.append(String.format("\"max-pool-size-min\" : \"%d\",", maxActorCount));
executorConfigBuilder.append(String.format("\"max-pool-size-max\" : \"%d\",", maxActorCount));
executorConfigBuilder.append("\"task-queue-size\" : \"-1\",");
executorConfigBuilder.append("\"task-queue-type\" : \"linked\",");
executorConfigBuilder.append("\"allow-core-timeout\" : \"on\"");
executorConfigBuilder.append("}");
configBuilder.append(executorConfigBuilder.toString());
configBuilder.append("}}}}");
Config mergedConfig = ConfigFactory.load(ConfigFactory.parseString(configBuilder.toString()).withFallback(ConfigFactory.load()));
return ActorSystem.create(String.format("PerformanceAsync%s", systemId), mergedConfig);
}
As you can see I was primarily interested in tweaking the dispatcher.
Is it possible to have my app update the config settings at runtime? I can easily expose the settings I want in my UI, but is there a way to allow the user to update settings and make them permanent, i.e. save them to the config.yaml file? The only way I can see is to update the file by hand and then restart the server, which seems a bit limiting.
Yes, it is possible to reload the service classes at runtime.
Dropwizard by itself does not have a way to reload the app, but Jersey does.
Jersey uses a container object internally to maintain the running application. Dropwizard uses the ServletContainer class of Jersey to run the application.
How to reload the app without restarting it:
Get a handle to the container used internally by Jersey.
You can do this by registering an AbstractContainerLifecycleListener in the Dropwizard Environment before starting the app and implementing its onStartup method as below.
In your main method, where you start the app:
//getting the container instance
environment.jersey().register(new AbstractContainerLifecycleListener() {
@Override
public void onStartup(Container container) {
//initializing container - which will be used to reload the app
_container = container;
}
});
Add a method to your app that reloads it. It will take in a list of strings, which are the names of the service classes you want to reload. This method will call the reload method of the container with a new DropwizardResourceConfig instance.
In your Application class
public static synchronized void reloadApp(List<String> reloadClasses) {
DropwizardResourceConfig dropwizardResourceConfig = new DropwizardResourceConfig();
for (String className : reloadClasses) {
try {
Class<?> serviceClass = Class.forName(className);
dropwizardResourceConfig.registerClasses(serviceClass);
System.out.printf(" + loaded class %s.\n", className);
} catch (ClassNotFoundException ex) {
System.out.printf(" ! class %s not found.\n", className);
}
}
_container.reload(dropwizardResourceConfig);
}
For more details see the example documentation of Jersey: jersey example for reload.
Consider going through the code and documentation of the following files in Dropwizard/Jersey for a better understanding:
Container.java
ContainerLifecycleListener.java
ServletContainer.java
AbstractContainerLifecycleListener.java
DropwizardResourceConfig.java
ResourceConfig.java
No.
The YAML file is parsed at startup and given to the application as a Configuration object once and for all. I believe you can change the file after that, but it wouldn't affect your application until you restart it.
Possible follow up question: Can one restart the service programmatically?
AFAIK, no. I've researched and read the code somewhat for that but couldn't find a way to do that yet. If there is, I'd love to hear that :).
I made a task that reloads the main YAML file (it would be useful if something in the file changes). However, it does not reload the environment. From what I found, Dropwizard uses a lot of final variables, and it's quite hard to reload these on the fly without restarting the app.
class ReloadYAMLTask extends Task {
private String yamlFileName;
ReloadYAMLTask(String yamlFileName) {
super("reloadYaml");
this.yamlFileName = yamlFileName;
}
@Override
public void execute(ImmutableMultimap<String, String> parameters, PrintWriter output) throws Exception {
if (yamlFileName != null) {
ConfigurationFactoryFactory configurationFactoryFactory = new DefaultConfigurationFactoryFactory<ServiceConfiguration>();
ValidatorFactory validatorFactory = Validation.buildDefaultValidatorFactory();
Validator validator = validatorFactory.getValidator();
ObjectMapper objectMapper = Jackson.newObjectMapper();
final ConfigurationFactory<ServiceConfiguration> configurationFactory = configurationFactoryFactory.create(ServiceConfiguration.class, validator, objectMapper, "dw");
File confFile = new File(yamlFileName);
configurationFactory.build(confFile);
}
}
}
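For completeness, a task like this still has to be registered with the admin environment. A minimal sketch (the config file name is an assumption), using the same addTask mechanism shown in the last answer below:
// in your Application's run() method
environment.admin().addTask(new ReloadYAMLTask("config.yml"));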
You can change the configuration in the YAML and read it while your application is running. This will not however restart the server or change any server configurations. You will be able to read any changed custom configurations and use them. For example, you can change the logging level at runtime or reload other custom settings.
My solution -
Define a custom server command. You should use this command to start your application instead of the "server" command.
ArgsServerCommand.java
public class ArgsServerCommand<WC extends WebConfiguration> extends EnvironmentCommand<WC> {
private static final Logger LOGGER = LoggerFactory.getLogger(ArgsServerCommand.class);
private final Class<WC> configurationClass;
private Namespace _namespace;
public static String COMMAND_NAME = "args-server";
public ArgsServerCommand(Application<WC> application) {
super(application, "args-server", "Runs the Dropwizard application as an HTTP server specific to my settings");
this.configurationClass = application.getConfigurationClass();
}
/*
* Since we don't subclass ServerCommand, we need a concrete reference to the configuration
* class.
*/
@Override
protected Class<WC> getConfigurationClass() {
return configurationClass;
}
public Namespace getNamespace() {
return _namespace;
}
@Override
protected void run(Environment environment, Namespace namespace, WC configuration) throws Exception {
_namespace = namespace;
final Server server = configuration.getServerFactory().build(environment);
try {
server.addLifeCycleListener(new LifeCycleListener());
cleanupAsynchronously();
server.start();
} catch (Exception e) {
LOGGER.error("Unable to start server, shutting down", e);
server.stop();
cleanup();
throw e;
}
}
private class LifeCycleListener extends AbstractLifeCycle.AbstractLifeCycleListener {
@Override
public void lifeCycleStopped(LifeCycle event) {
cleanup();
}
}
}
Method to reload in your Application -
private static String _ymlFilePath = null; // class variable
public static boolean reloadConfiguration() throws IOException, ConfigurationException {
boolean reloaded = false;
if (_ymlFilePath == null) {
List<Command> commands = _configurationBootstrap.getCommands();
for (Command command : commands) {
String commandName = command.getName();
if (commandName.equals(ArgsServerCommand.COMMAND_NAME)) {
Namespace namespace = ((ArgsServerCommand) command).getNamespace();
if (namespace != null) {
_ymlFilePath = namespace.getString("file");
}
}
}
}
ConfigurationFactoryFactory configurationFactoryFactory = _configurationBootstrap.getConfigurationFactoryFactory();
ValidatorFactory validatorFactory = _configurationBootstrap.getValidatorFactory();
Validator validator = validatorFactory.getValidator();
ObjectMapper objectMapper = _configurationBootstrap.getObjectMapper();
ConfigurationSourceProvider provider = _configurationBootstrap.getConfigurationSourceProvider();
final ConfigurationFactory<CustomWebConfiguration> configurationFactory = configurationFactoryFactory.create(CustomWebConfiguration.class, validator, objectMapper, "dw");
if (_ymlFilePath != null) {
// Refresh logging level.
CustomWebConfiguration webConfiguration = configurationFactory.build(provider, _ymlFilePath);
LoggingFactory loggingFactory = webConfiguration.getLoggingFactory();
loggingFactory.configure(_configurationBootstrap.getMetricRegistry(), _configurationBootstrap.getApplication().getName());
// Get my defined custom settings
CustomSettings customSettings = webConfiguration.getCustomSettings();
reloaded = true;
}
return reloaded;
}
Although this feature isn't supported out of the box by Dropwizard, you can accomplish it fairly easily with the tools it gives you.
Before I get started, note that this isn't a complete solution for the question asked as it doesn't persist the updated config values to the config.yml. However, this would be easy enough to implement yourself simply by writing to the config file from the application. If anyone would like to write this implementation feel free to open a PR on the example project I've linked below.
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And its corresponding configuration class:
ExampleConfiguration.java
public class ExampleConfiguration extends Configuration {
private String myConfigValue;
public String getMyConfigValue() {
return myConfigValue;
}
public void setMyConfigValue(String value) {
myConfigValue = value;
}
}
Then create a task which updates the config:
UpdateConfigTask.java
public class UpdateConfigTask extends Task {
ExampleConfiguration config;
public UpdateConfigTask(ExampleConfiguration config) {
super("updateconfig");
this.config = config;
}
@Override
public void execute(Map<String, List<String>> parameters, PrintWriter output) {
config.setMyConfigValue("goodbye");
}
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
@Path("/config")
public class ConfigResource {
private final ExampleConfiguration config;
public ConfigResource(ExampleConfiguration config) {
this.config = config;
}
@GET
public Response handleGet() {
return Response.ok().entity(config.getMyConfigValue()).build();
}
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same reference to the constructors of ConfigResource.java and UpdateConfigTask.java. If you aren't familiar with the concept, see here:
Is Java "pass-by-reference" or "pass-by-value"?
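As a minimal illustration of that idea, reusing the classes defined above:
// Both objects hold the same ExampleConfiguration instance, so a value set by
// the task is visible to the resource on its next request.
ExampleConfiguration config = new ExampleConfiguration();
ConfigResource resource = new ConfigResource(config);
UpdateConfigTask task = new UpdateConfigTask(config);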
The classes linked above are from a project I've created which demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified that this works with the built-in configuration. However, the Dropwizard Configuration class, which you need to extend for your own configuration, does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.
I'm just starting to learn OSGi. I need to create an application which provides a Search Service. The Search Service depends on the platform (SearchServiceLinux, SearchServiceAndroid, SearchServiceXXX ...). The Search Service also depends on a parameter that the user enters; the parameter is mandatory.
My Search Service consumer (when the user sets the parameter, I create a new instance of SearchService):
@Component(immediate = true, publicFactory = false)
@Provides(specifications = {TestConsumer.class})
@Instantiate
public class TestConsumer {
@Requires(filter = "(factory.name=package.ISearchService)")
private Factory mFactory;
private ComponentInstance mSearchComponentInstance;
...
public void userSetParameter(String pParameter) {
Properties lProperties = new Properties();
lProperties.put("instance.name", mFactory.getName() + "-" + pParameter);
lProperties.put("Parameter", pParameter);
if (mSearchComponentInstance != null) {
mSearchComponentInstance.dispose();
}
try {
mSearchComponentInstance = mFactory.createComponentInstance(lProperties);
} catch (UnacceptableConfiguration e) {
e.printStackTrace();
} catch (MissingHandlerException e) {
e.printStackTrace();
} catch (ConfigurationException e) {
e.printStackTrace();
}
}
My Search Service:
@Component
@Provides(specifications = {ISearchService.class}, strategy = "SINGLETON")
public class TestService implements ISearchService {
@ServiceProperty(name = "Parameter", mandatory = true)
private int mParameter;
...
Questions:
1) Is this the right structure for the program? Is @ServiceProperty or @Property preferable in this case? What is the best practice for an OSGi service that requires parameters from user input? Is it possible to rework the structure of the consumer to use:
@Requires(filter = "need filter for SearchService with Parameter=XXX or create this service")
ISearchService mSearchService;
2) Can iPOJO Event Admin Handlers be applied in this situation?
Consumer:
@Publishes(name = "p1", topics = "userChangeParameter")
private Publisher mPublisher;
public void userChangeParameter(String pParameter) {
Properties lProperties = new Properties();
lProperties.put("Parameter", pParameter);
mPublisher.send(lProperties);
}
Search Service:
@Subscriber(name = "s0", topics = "foo")
public void subscriber(Event pEvent) {
System.out.println("Subscriber : " + pEvent.getProperty("Parameter"));
}
3) What is the best structure for creating a service that depends on parameters entered by the user? Maybe the problem can be solved easily by using one of the Apache Felix subprojects?
I am using Apache Felix 4.2.1.
I would create a service like this:
@Component(
metatype = false)
@SlingServlet(
paths = { "/bin/test/service" }, methods = { "POST" }, extensions = { "json" },
selectors = { "selector1", "selector2" }, generateComponent = false)
public class TestConsumer extends SlingAllMethodsServlet {
//inject all the services here like SearchServiceLinux, etc.
@Reference
private SearchServiceLinux searchServiceLinux;
}
You can use this service like:
http://localhost/bin/test/service.selector1.html
Now based on the selector you can decide which class will handle the request; that is, you can decide that selector1 will be handled by class X and selector2 will be handled by class Y.
If parameters are mandatory, then I would recommend accepting only POST on this service and making sure you provide the search parameters in the POST request, say with a parameter named searchParam. Based on the selector you can decide the handler, and you can pass searchParam to it to generate the search results; a rough sketch of this routing is shown below.
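This is only a hedged sketch of how the routing inside the servlet above could look; the parameter name searchParam comes from this answer, while the selector-to-handler mapping and the actual search calls are assumptions:
@Override
protected void doPost(SlingHttpServletRequest request, SlingHttpServletResponse response)
        throws ServletException, IOException {
    // first selector on the request, e.g. "selector1" for /bin/test/service.selector1.json
    String[] selectors = request.getRequestPathInfo().getSelectors();
    String searchParam = request.getParameter("searchParam");
    if (selectors.length > 0 && "selector1".equals(selectors[0])) {
        // delegate to the handler chosen for selector1, e.g. searchServiceLinux
    } else {
        // handle selector2, or reject an unsupported selector
    }
}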
Hope this helps.