I am using Kafka Streams in my project. I package the project as a WAR and run it in Tomcat.
The project works as I want without any errors. If I first stop Tomcat and then start it, it works without error. However, if I redeploy (undeploy and deploy) the service without stopping Tomcat, I start getting errors. My research turned up information that Tomcat caches the old version of the service, but I could not reach a solution even though I applied some of the suggested fixes. I would be grateful for any help.
To say it again: my code itself works normally. If I run the service in Tomcat for the first time, I don't get an error, and if I shut Tomcat down completely and start it again, I don't get an error either. Only a redeploy (undeploy and deploy) without stopping Tomcat triggers the error.
I am sharing a small code block below.
Properties streamConfiguration = kafkaStreamsConfiguration.createStreamConfiguration(createKTableGroupId(), new AppSerdes.DataWrapperSerde());
StreamsBuilder streamsBuilder = new StreamsBuilder();
KTable<String, DataWrapper> kTableDataWrapper = streamsBuilder.table(topicAction.getTopicName());
KTable<String, DataWrapper> kTableWithStore = kTableDataWrapper.filter((key, dataWrapper) -> key != null && dataWrapper != null, Materialized.as(createStoreName()));
kTableWithStore.toStream()
        .filter((key, dataWrapper) -> true /* filter condition elided */)
        .mapValues((ValueMapperWithKey<String, DataWrapper, Object>) (key, dataWrapper) -> {
            // mapping logic elided
            return dataWrapper; // placeholder
        })
        .to(createOutputTopicName());
this.kafkaStreams = new KafkaStreams(streamsBuilder.build(), streamConfiguration);
this.kafkaStreams.start();
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    if (kafkaStreams != null) {
        kafkaStreams.close();
    }
}));
public Properties createStreamConfiguration(String appId, Serde serde) {
    Properties properties = new Properties();
    properties.put(StreamsConfig.APPLICATION_ID_CONFIG, appId);
    properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBrokers);
    properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, serde.getClass());
    properties.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, dynamicKafkaSourceTopologyConfiguration.getkTableCommitIntervalMs());
    properties.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, dynamicKafkaSourceTopologyConfiguration.getkTableMaxByteBufferMB() * 1024 * 1024);
    properties.put(StreamsConfig.STATE_DIR_CONFIG, KafkaStreamsConfigurationConstants.stateStoreLocation);
    return properties;
}
Error :
2022-02-16 14:19:39.663 WARN 9529 --- [ Thread-462] o.a.k.s.p.i.StateDirectory : Using /tmp directory in the state.dir property can cause failures with writing the checkpoint file due to the fact that this directory can be cleared by the OS
2022-02-16 14:19:39.677 ERROR 9529 --- [ Thread-462] o.a.k.s.p.i.StateDirectory : Unable to obtain lock as state directory is already locked by another process
2022-02-16 14:19:39.702 ERROR 9529 --- [ Thread-462] f.t.s.c.- Message : Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory - Localized Message : Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory - Print Stack Trace : org.apache.kafka.streams.errors.StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory
at org.apache.kafka.streams.processor.internals.StateDirectory.initializeProcessId(StateDirectory.java:186)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:681)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:657)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:567)
I think this is because the shutdown hook
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    if (kafkaStreams != null) {
        kafkaStreams.close();
    }
}));
is not being called during a redeploy, as the JVM process continues to run. Please try another way to be notified when your application is being redeployed, for example a ServletContextListener.
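A minimal sketch of that approach, assuming the KafkaStreams instance is reachable from the listener (KafkaStreamsHolder below is a hypothetical accessor for it):
@WebListener
public class KafkaStreamsShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // invoked on undeploy/redeploy, unlike a JVM shutdown hook
        KafkaStreams kafkaStreams = KafkaStreamsHolder.get(); // hypothetical accessor
        if (kafkaStreams != null) {
            kafkaStreams.close();
        }
    }
}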
My problem was solved thanks to @udalmik.
I solved my problem by having my beans implement DisposableBean.
Additionally, I have prototype beans, and this solution didn't work for them: Spring does not manage the complete lifecycle of prototype-scoped beans, so their destroy callbacks are never invoked automatically.
I am writing my solution for both prototype and singleton beans.
// For the singleton bean
@Service
public class PersonSingletonBean implements DisposableBean {

    private KafkaStreams kafkaStreams; // closed when the context shuts down

    @Override
    public void destroy() throws Exception {
        if (kafkaStreams != null) {
            kafkaStreams.close();
        }
    }
}
// For the prototype bean
@Service
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class PersonPrototypeBean implements DisposableBean {

    private KafkaStreams kafkaStreams;

    @Override
    public void destroy() {
        if (kafkaStreams != null) {
            kafkaStreams.close();
        }
    }
}

@Service
public class PersonPrototypeBeanList implements DisposableBean {

    private final List<PersonPrototypeBean> personPrototypeBeanList = Collections.synchronizedList(new ArrayList<>());

    public void addToPersonPrototypeBeanList(PersonPrototypeBean personPrototypeBean) {
        personPrototypeBeanList.add(personPrototypeBean);
    }

    @Override
    public void destroy() throws Exception {
        synchronized (personPrototypeBeanList) {
            for (PersonPrototypeBean personPrototypeBean : personPrototypeBeanList) {
                if (personPrototypeBean != null) {
                    personPrototypeBean.destroy();
                }
            }
            personPrototypeBeanList.clear();
        }
    }
}
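For completeness, one hypothetical way to wire the registration (the names follow the classes above): each prototype instance is handed to the singleton list bean as it is created, so destroy() can reach it on undeploy.
@Service
public class PersonPrototypeBeanFactory {

    private final ApplicationContext applicationContext;
    private final PersonPrototypeBeanList personPrototypeBeanList;

    public PersonPrototypeBeanFactory(ApplicationContext applicationContext, PersonPrototypeBeanList personPrototypeBeanList) {
        this.applicationContext = applicationContext;
        this.personPrototypeBeanList = personPrototypeBeanList;
    }

    public PersonPrototypeBean createPersonPrototypeBean() {
        // prototype scope: the context creates a new instance on each getBean() call
        PersonPrototypeBean bean = applicationContext.getBean(PersonPrototypeBean.class);
        personPrototypeBeanList.addToPersonPrototypeBeanList(bean);
        return bean;
    }
}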
Related
After upgrading to Spring Boot v2.7.1 we are seeing lots of queued tasks; we never saw queued tasks build up like this on the previous version we were using, v2.2.2.
Our team has tried to investigate v2.7.1 but couldn't find anything in this version.
Can anyone please review the code and let us know what we are missing or have written wrong that is causing the issue? We are using Spring Integration to pull emails from a client server, and for that we have added a TaskExecutor to get concurrent polling.
Versions that we use:
Spring Boot = 2.7.1
Spring Integration = 5.5.14
Earlier we were using:
Spring Boot = 2.2.2 release
Spring Integration = 5.2.3 release
I've attached the code below.
Configuration class for Imap Integration
@Configuration
@EnableIntegration
public class ImapIntegrationConfig {

    private final ApplicationContext applicationContext;

    @Autowired
    public ImapIntegrationConfig(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    @Bean("mailTaskExecutor")
    public ThreadPoolTaskExecutor mailTaskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setMaxPoolSize(1000);
        taskExecutor.setCorePoolSize(100);
        taskExecutor.setTaskDecorator(new SecurityAwareTaskDecorator(applicationContext));
        taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
        taskExecutor.setAwaitTerminationSeconds(Integer.MAX_VALUE);
        return taskExecutor;
    }

    @Bean("imapMailChannel")
    public ExecutorChannelSpec imapMailChannel() {
        return MessageChannels.executor(mailTaskExecutor());
    }

    @Bean
    public HeaderMapper<MimeMessage> mailHeaderMapper() {
        return new DefaultMailHeaderMapper();
    }
}
ImapListener Class to register the flow
public void registerImapFlow(ImapSetting imapSetting) {
    ImapMailReceiver mailReceiver = createImapMailReceiver(imapSetting);

    // create the flow for an email process
    //@formatter:off
    StandardIntegrationFlow flow = IntegrationFlows
            .from(Mail.imapInboundAdapter(mailReceiver),
                    consumer -> consumer.autoStartup(true)
                            .poller(Pollers.fixedDelay(Duration.ofSeconds(5), Duration.ofMinutes(2))
                                    .taskExecutor(taskExecutor)
                                    .errorHandler(t -> logger.error("Error while polling emails for address " + imapSetting.getUsername(), t))
                                    .maxMessagesPerPoll(10)))
            .enrichHeaders(Map.of(CONCERN_CODE, imapSetting.getConcernCode(), IMAP_CONFIG_ID, imapSetting.getImapSettingId()))
            .channel(imapMailChannel).get();
    //@formatter:on

    // give the bean a unique name to avoid clashes with multiple imap settings
    String flowId = concernIdentifier.getConcernIdentifier() + "-" + imapSetting.getImapSettingId();
    IntegrationFlowContext.IntegrationFlowRegistration existingFlow = integrationFlowContext.getRegistrationById(flowId);
    if (existingFlow != null) {
        // destroy the previous beans
        existingFlow.destroy();
    }
    // register the new flow
    integrationFlowContext.registration(flow).id(flowId).useFlowIdAsPrefix().register();
}
Process message method
@ServiceActivator(inputChannel = "imapMailChannel")
public void processMessage(Message<?> message) throws InvalidMessageException {
    String concern = (String) message.getHeaders().get(CONCERN_CODE);
    if (isEmpty(concern)) {
        logger.error("Received null concern!");
    }
    Long imapConfigId = (Long) message.getHeaders().get(IMAP_CONFIG_ID);
    String logMessage = null;
    String messageId = null;
    try {
        Object payload = message.getPayload();
        if (payload instanceof MimeMultipart) {
            //.......................//
        }
        else if (payload instanceof String) {
            //......................//
        }
    }
    catch (Exception e) {
        logger.error("Error while processing " + logMessage, e);
        if (concern != null) {
            metricUtil.emailFailed(concern);
        }
        throw new MaxxtonException("CCM-MessageID: Exception in processMessage() method", e, MessageErrorCode.UNABLE_TO_PROCESS_EMAIL);
    }
    metricUtil.emailProcessed(concern);
}
ImapMailReceiver method
private ImapMailReceiver createImapMailReceiver(ImapSetting imapSettings) {
    String url = String.format(imapSettings.getImapUrl(), URLEncoder.encode(imapSettings.getUsername(), UTF_8), URLEncoder.encode(imapSettings.getPassword(), UTF_8));
    ImapMailReceiver receiver = new ImapMailReceiver(url);
    receiver.setSimpleContent(true);

    Properties mailProperties = new Properties();
    mailProperties.put("mail.debug", "false");
    mailProperties.put("mail.imap.connectionpoolsize", "5");
    mailProperties.put("mail.imap.fetchsize", 4194304);
    mailProperties.put("mail.imap.connectiontimeout", 15000);
    mailProperties.put("mail.imap.timeout", 30000);
    mailProperties.put("mail.imaps.connectionpoolsize", "5");
    mailProperties.put("mail.imaps.fetchsize", 4194304);
    mailProperties.put("mail.imaps.connectiontimeout", 15000);
    mailProperties.put("mail.imaps.timeout", 30000);

    receiver.setJavaMailProperties(mailProperties);
    receiver.setSearchTermStrategy(this::notSeenTerm);
    receiver.setAutoCloseFolder(false);
    receiver.setShouldDeleteMessages(false);
    receiver.setShouldMarkMessagesAsRead(true);
    receiver.setHeaderMapper(mailHeaderMapper);
    receiver.setEmbeddedPartsAsBytes(false);
    return receiver;
}
I have attached a screenshot taken from Grafana of active and queued tasks after we upgraded to Spring Boot v2.7.1 and Spring Integration v5.5.14.
At a glance it all looks OK, unless you really don't close that folder manually somewhere else, since you use receiver.setAutoCloseFolder(false).
There is no reason for that .taskExecutor(taskExecutor), since you use MessageChannels.executor(mailTaskExecutor()) immediately after producing a message from the Mail.imapInboundAdapter().
I remember that on Gitter I suggested you check how it works with spring.task.scheduling.pool.size=10 placed in application.properties. This is the only obvious difference between the mentioned versions: https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.spring-integration.
Your screenshot doesn't prove that the problem is exactly with Spring Integration. Perhaps tasks are queued somehow by the tool which exports metrics to Grafana. I believe you have upgraded more than just Spring Integration in your project...
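For reference, that suggestion is a single line in application.properties; the value 10 matches the scheduler pool size Spring Integration used to configure on its own before Spring Boot's auto-configuration took over, so treat the exact number as something to tune:
spring.task.scheduling.pool.size=10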
I am using the Hystrix javanica collapser in Spring Boot, but I found it did not work. My code is below.
Service class:
public class TestService {

    @HystrixCollapser(batchMethod = "getStrList")
    public Future<String> getStr(String id) {
        System.out.println("single");
        return null;
    }

    @HystrixCommand
    public List<String> getStrList(List<String> ids) {
        System.out.println("batch,size=" + ids.size());
        List<String> strList = Lists.newArrayList();
        ids.forEach(id -> strList.add("test"));
        return strList;
    }
}
Where I use it:
public static void main(String[] args) {
    TestService testService = new TestService();
    HystrixRequestContext context = HystrixRequestContext.initializeContext();
    Future<String> f1 = testService.getStr("111");
    Future<String> f2 = testService.getStr("222");
    try {
        Thread.sleep(3000);
        System.out.println(f1.get()); // nothing printed
        System.out.println(f2.get()); // nothing printed
    } catch (Exception e) {
    }
    context.shutdown();
}
It printed "single" 3 times instead of 1 batch.
I want to know what's wrong with my code; a working example would be even better.
I couldn't find a Hystrix javanica collapser sample on the internet, so I had to read the source code to solve this problem. It is solved now, and this is my summary of what you have to do when you use the Hystrix (javanica) collapser in Spring Boot:
Define a hystrixAspect Spring bean and import hystrix-strategy.xml.
Annotate the single method with @HystrixCollapser and the batch method with @HystrixCommand.
Make the single method take one parameter (ArgType) and return Future<ResultType>; make the batch method take List<ArgType> and return List<ResultType>, and make sure the size of the returned list equals the size of the argument list.
Set the hystrix properties batchMethod and scope; if you want to collapse requests from multiple user threads, you must set the scope to GLOBAL.
Before you submit a single request, you must initialize the hystrix context with HystrixRequestContext.initializeContext(), and shut the context down when your request finishes. A sketch combining these points follows.
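This is a hedged sketch, with some assumptions: javanica's HystrixCommandAspect must be registered as a Spring bean, TestService must be obtained from the Spring context rather than created with new (the annotations only apply through the AOP proxy), and the timer delay value is illustrative.
@Service
public class TestService {

    @HystrixCollapser(batchMethod = "getStrList",
            scope = com.netflix.hystrix.HystrixCollapser.Scope.GLOBAL,
            collapserProperties = {
                    @HystrixProperty(name = "timerDelayInMilliseconds", value = "100")
            })
    public Future<String> getStr(String id) {
        // never executed: collapsed requests are routed to the batch method
        return null;
    }

    @HystrixCommand
    public List<String> getStrList(List<String> ids) {
        List<String> strList = new ArrayList<>();
        ids.forEach(id -> strList.add("test"));
        return strList; // size must equal ids.size()
    }
}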
I have written a large-scale HTTP server using Vert.x, but I'm getting this error when the number of concurrent requests increases:
WARNING: Thread Thread[vert.x-eventloop-thread-1,5,main] has been blocked for 8458 ms, time limit is 1000
io.vertx.core.VertxException: Thread blocked
Here is my full code:
public class MyVertxServer {

    public Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(100));
    private HttpServer server = vertx.createHttpServer();
    private Router router = Router.router(vertx);

    public void bind(int port) {
        server.requestHandler(router::accept).listen(port);
    }

    public void createContext(String path, MyHttpHandler handler) {
        if (!path.endsWith("/")) {
            path += "/";
        }
        path += "*";
        router.route(path).handler(new Handler<RoutingContext>() {
            @Override
            public void handle(RoutingContext ctx) {
                String[] handlerID = ctx.request().uri().split(ctx.currentRoute().getPath());
                String suffix = handlerID.length > 1 ? handlerID[1] : null;
                handler.Handle(ctx, new VertxUtils(), suffix);
            }
        });
    }
}
And here is how I call it:
ver.createContext("/getRegisterManager",new ProfilesManager.RegisterHandler());
ver.createContext("/getLoginManager", new ProfilesManager.LoginHandler());
ver.createContext("/getMapcomCreator",new ItemsManager.MapcomCreator());
ver.createContext("/getImagesManager", new ItemsManager.ImagesHandler());
ver.bind(PORT);
However, I don't find the eventBus() useful for HTTP servers that process send/receive files, because you need to send the RoutingContext in the message, which is not possible.
Could you please point me in the right direction? Thanks.
I have added a little bit of the handler's code:
class ProfileGetter implements MyHttpHandler {

    @Override
    public void Handle(RoutingContext ctx, VertxUtils utils, String suffix) {
        String username = utils.Decode(ctx.request().headers().get("username"));
        String lang = utils.Decode(ctx.request().headers().get("lang"));
        display("profile requested : " + username);
        Profile profile = ProfileManager.FindProfile(username, lang);
        if (profile == null) {
            ctx.request().response().putHeader("available", "false");
            utils.sendResponseAndEnd(ctx.response(), 400);
        } else {
            ctx.request().response().putHeader("available", "true");
            utils.writeStringAndEnd(ctx, new Gson().toJson(profile));
        }
    }
}
Here ProfileManager.FindProfile(username, lang) does a long-running database job on the same thread.
...
Basically all of my processing happens on the main thread, because if I use an executor I get strange exceptions and NullPointerExceptions in Vert.x, which makes me wonder whether the request processors in Vert.x are parallel.
Given the small amount of code in the question, let's agree that the problem is on the line:
Profile profile = ProfileManager.FindProfile(username,lang);
Assuming that this is internally doing some blocking JDBC call, which is an anti-pattern in Vert.x, you can solve this in several ways.
Say that you can totally refactor the ProfileManager class, which IMO is the best option; then you can update it to be reactive, so your code would look like this:
ProfileManager.FindProfile(username, lang, res -> {
    if (res.failed()) {
        // handle error, send 500 back, etc...
    } else {
        Profile profile = res.result();
        if (profile == null) {
            ctx.request().response().putHeader("available", "false");
            utils.sendResponseAndEnd(ctx.response(), 400);
        } else {
            ctx.request().response().putHeader("available", "true");
            utils.writeStringAndEnd(ctx, new Gson().toJson(profile));
        }
    }
});
Now what would be happening behind the scenes is that your JDBC call would not block (which is tricky, because JDBC is blocking by nature). So to fix this: if you're lucky enough to use MySQL or Postgres, then you could code your JDBC against the async client; if you're stuck with other RDBMS servers, then you need to use the jdbc-client, which in turn will use a thread pool to offload the work from the event loop thread.
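For illustration, a rough sketch of the jdbc-client route (Vert.x 3 API; the connection settings, SQL, and table are placeholder assumptions):
JDBCClient client = JDBCClient.createShared(vertx, new JsonObject()
        .put("url", "jdbc:h2:mem:profiles") // placeholder connection settings
        .put("driver_class", "org.h2.Driver"));

client.queryWithParams("SELECT * FROM profiles WHERE username = ?", new JsonArray().add(username), res -> {
    if (res.failed()) {
        // handle error, send 500 back, etc...
    } else {
        // back on the event loop, nothing was blocked
        ResultSet rs = res.result();
        // build the Profile from rs and write the response here
    }
});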
Now say that you cannot change the ProfileManager code. Then you can still offload it to the thread pool by wrapping the call in an executeBlocking block:
vertx.executeBlocking(future -> {
    // runs on a worker thread, so blocking here is fine
    Profile profile = ProfileManager.FindProfile(username, lang);
    future.complete(profile);
}, false, res -> { // false = blocking executions need not be ordered
    if (res.failed()) {
        // handle error, send 500 back, etc...
    } else {
        Profile profile = res.result();
        if (profile == null) {
            ctx.request().response().putHeader("available", "false");
            utils.sendResponseAndEnd(ctx.response(), 400);
        } else {
            ctx.request().response().putHeader("available", "true");
            utils.writeStringAndEnd(ctx, new Gson().toJson(profile));
        }
    }
});
I'd like to create a configuration/bean to automatically start an H2 database in my development profile, running as a TCP server. It needs to be started before any DataSource configuration. Can someone tell me how to achieve this?
What I have done is:
#Profile("h2")
#Component
public class H2DbServerConfiguration implements SmartLifecycle {
private static final Logger logger = LoggerFactory.getLogger(H2DbServerConfiguration.class);
private Server server;
#Override
public boolean isAutoStartup() {
return true;
}
#Override
public void stop(Runnable callback) {
stop();
new Thread(callback).start();
}
#Override
public void start() {
logger.debug("############################################");
logger.debug("############################################");
logger.debug("STARTING SERVER");
logger.debug("############################################");
logger.debug("############################################");
try {
server = Server.createTcpServer("-web", "-webAllowOthers", "-webPort", "8082").start();
} catch (SQLException e) {
throw new RuntimeException("Unable to start H2 server", e);
}
}
#Override
public void stop() {
logger.debug("############################################");
logger.debug("############################################");
logger.debug("STOPPING SERVER");
logger.debug("############################################");
logger.debug("############################################");
if (server != null)
if (server.isRunning(true))
server.stop();
}
#Override
public boolean isRunning() {
return server != null ? server.isRunning(true) : false;
}
#Override
public int getPhase() {
return 0;
}
}
but this isn't an option for me, because the component is created after the datasource (I have a Liquibase setup, so it's too late by then), and the phase is still the same, which means FIFO order, whereas I'd like FILO.
Mixing @Profile and @Component seems to me a bad idea. Profiles are designed to work with Configuration (documentation).
Do you really need a profile? In my opinion it makes sense if you have several possible configurations, one based on H2, and if you want to be able to switch between these configurations (typically at start time by setting a property...).
Managing the H2 server with a bean (documentation) seems correct to me (as suggested by Stefen). Maybe you will prefer annotations... If you want a Spring profile, then you will need a Configuration object too. It will simply load the H2 server bean (in my opinion it's better to manage the H2 server lifecycle with a bean than with a context/config).
Create your server as a bean:
@Bean(initMethod = "start", destroyMethod = "stop")
Server h2Server() throws Exception {
    return Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9192");
}
Now you can configure Spring to create other beans (e.g. the datasource) after the h2Server bean using @DependsOn:
@DependsOn("h2Server")
@Bean
DataSource dataSource() {
    ...
}
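For illustration, a hypothetical complete version of that bean, assuming a HikariCP datasource; the JDBC URL reuses the TCP port from the h2Server bean above, and the database name and credentials are placeholders:
@DependsOn("h2Server")
@Bean
DataSource dataSource() {
    HikariDataSource dataSource = new HikariDataSource();
    dataSource.setJdbcUrl("jdbc:h2:tcp://localhost:9192/mem:devdb"); // placeholder database
    dataSource.setUsername("sa");
    dataSource.setPassword("");
    return dataSource;
}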
Hi, what about using Spring Boot? It has an automatically configured datasource, so I don't want to reconfigure it.
You are right: to use the above approach you have to create your own datasource in order to annotate it with @DependsOn.
But it looks like this is not really necessary.
In one of my projects I am creating the h2Server as a bean as described,
and I use the datasource created by Spring, so without any @DependsOn.
It works perfectly. Just give it a try.
Your solution with SmartLifecycle does not work because it creates the server on ApplicationContext refresh, which happens after all beans (including the datasource) have been created.
Is it possible to have my app update the config settings at runtime? I can easily expose the settings I want in my UI, but is there a way to let the user update settings and make them permanent, i.e. save them to the config.yaml file? The only way I can see is to update the file by hand and then restart the server, which seems a bit limiting.
Yes, it is possible to reload the service classes at runtime.
Dropwizard by itself does not have a way to reload the app, but Jersey does.
Jersey uses a container object internally to maintain the running application. Dropwizard uses Jersey's ServletContainer class to run the application.
How to reload the app without restarting it -
Get a handle to the container used internally by Jersey.
You can do this by registering an AbstractContainerLifecycleListener in the Dropwizard Environment before starting the app and implementing its onStartup method as below -
In your main method where you start the app -
// getting the container instance; _container is a field in your Application class
environment.jersey().register(new AbstractContainerLifecycleListener() {
    @Override
    public void onStartup(Container container) {
        // initializing container - which will be used to reload the app
        _container = container;
    }
});
Add a method to your app to reload it. It takes a list of the names of the service classes you want to reload. This method will call the container's reload method with a new DropwizardResourceConfig instance.
In your Application class -
public static synchronized void reloadApp(List<String> reloadClasses) {
    DropwizardResourceConfig dropwizardResourceConfig = new DropwizardResourceConfig();
    for (String className : reloadClasses) {
        try {
            Class<?> serviceClass = Class.forName(className);
            dropwizardResourceConfig.registerClasses(serviceClass);
            System.out.printf(" + loaded class %s.\n", className);
        } catch (ClassNotFoundException ex) {
            System.out.printf(" ! class %s not found.\n", className);
        }
    }
    _container.reload(dropwizardResourceConfig);
}
For more details see the Jersey example documentation - jersey example for reload
Consider going through the code and documentation of the following files in Dropwizard/Jersey for a better understanding -
Container.java
ContainerLifecycleListener.java
ServletContainer.java
AbstractContainerLifecycleListener.java
DropwizardResourceConfig.java
ResourceConfig.java
No.
The YAML file is parsed at startup and given to the application as a Configuration object once and for all. I believe you can change the file after that, but it wouldn't affect your application until you restart it.
Possible follow-up question: Can one restart the service programmatically?
AFAIK, no. I've researched and read the code somewhat for that but couldn't find a way to do it yet. If there is one, I'd love to hear it :).
I made a task that reloads the main YAML file (useful if something in the file changes). However, it does not reload the environment: Dropwizard uses a lot of final variables, and it's quite hard to reload these on the fly without restarting the app.
class ReloadYAMLTask extends Task {

    private String yamlFileName;

    ReloadYAMLTask(String yamlFileName) {
        super("reloadYaml");
        this.yamlFileName = yamlFileName;
    }

    @Override
    public void execute(ImmutableMultimap<String, String> parameters, PrintWriter output) throws Exception {
        if (yamlFileName != null) {
            ConfigurationFactoryFactory<ServiceConfiguration> configurationFactoryFactory = new DefaultConfigurationFactoryFactory<>();
            ValidatorFactory validatorFactory = Validation.buildDefaultValidatorFactory();
            Validator validator = validatorFactory.getValidator();
            ObjectMapper objectMapper = Jackson.newObjectMapper();
            final ConfigurationFactory<ServiceConfiguration> configurationFactory = configurationFactoryFactory.create(ServiceConfiguration.class, validator, objectMapper, "dw");
            configurationFactory.build(new File(yamlFileName));
        }
    }
}
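Like any Dropwizard Task, once registered with environment.admin().addTask(...) it is exposed on the admin port, so the reload can be triggered at runtime (assuming the default admin port 8081):
$ curl -X POST 'http://localhost:8081/tasks/reloadYaml'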
You can change the configuration in the YAML file and read it while your application is running. This will not, however, restart the server or change any server configuration. You will be able to read any changed custom configurations and use them; for example, you can change the logging level at runtime or reload other custom settings.
My solution -
Define a custom server command. You should use this command to start your application instead of the "server" command.
ArgsServerCommand.java
public class ArgsServerCommand<WC extends WebConfiguration> extends EnvironmentCommand<WC> {

    private static final Logger LOGGER = LoggerFactory.getLogger(ArgsServerCommand.class);

    private final Class<WC> configurationClass;

    private Namespace _namespace;

    public static String COMMAND_NAME = "args-server";

    public ArgsServerCommand(Application<WC> application) {
        super(application, "args-server", "Runs the Dropwizard application as an HTTP server specific to my settings");
        this.configurationClass = application.getConfigurationClass();
    }

    /*
     * Since we don't subclass ServerCommand, we need a concrete reference to the configuration
     * class.
     */
    @Override
    protected Class<WC> getConfigurationClass() {
        return configurationClass;
    }

    public Namespace getNamespace() {
        return _namespace;
    }

    @Override
    protected void run(Environment environment, Namespace namespace, WC configuration) throws Exception {
        _namespace = namespace;
        final Server server = configuration.getServerFactory().build(environment);
        try {
            server.addLifeCycleListener(new LifeCycleListener());
            cleanupAsynchronously();
            server.start();
        } catch (Exception e) {
            LOGGER.error("Unable to start server, shutting down", e);
            server.stop();
            cleanup();
            throw e;
        }
    }

    private class LifeCycleListener extends AbstractLifeCycle.AbstractLifeCycleListener {
        @Override
        public void lifeCycleStopped(LifeCycle event) {
            cleanup();
        }
    }
}
Method to reload in your Application -
private static String _ymlFilePath; // class variable

public static boolean reloadConfiguration() throws IOException, ConfigurationException {
    boolean reloaded = false;
    if (_ymlFilePath == null) {
        List<Command> commands = _configurationBootstrap.getCommands();
        for (Command command : commands) {
            String commandName = command.getName();
            if (commandName.equals(ArgsServerCommand.COMMAND_NAME)) {
                Namespace namespace = ((ArgsServerCommand) command).getNamespace();
                if (namespace != null) {
                    _ymlFilePath = namespace.getString("file");
                }
            }
        }
    }
    ConfigurationFactoryFactory configurationFactoryFactory = _configurationBootstrap.getConfigurationFactoryFactory();
    ValidatorFactory validatorFactory = _configurationBootstrap.getValidatorFactory();
    Validator validator = validatorFactory.getValidator();
    ObjectMapper objectMapper = _configurationBootstrap.getObjectMapper();
    ConfigurationSourceProvider provider = _configurationBootstrap.getConfigurationSourceProvider();
    final ConfigurationFactory<CustomWebConfiguration> configurationFactory = configurationFactoryFactory.create(CustomWebConfiguration.class, validator, objectMapper, "dw");
    if (_ymlFilePath != null) {
        // Refresh logging level.
        CustomWebConfiguration webConfiguration = configurationFactory.build(provider, _ymlFilePath);
        LoggingFactory loggingFactory = webConfiguration.getLoggingFactory();
        loggingFactory.configure(_configurationBootstrap.getMetricRegistry(), _configurationBootstrap.getApplication().getName());
        // Get my defined custom settings
        CustomSettings customSettings = webConfiguration.getCustomSettings();
        reloaded = true;
    }
    return reloaded;
}
Although this feature isn't supported out of the box by Dropwizard, you're able to accomplish it fairly easily with the tools it gives you.
Before I get started, note that this isn't a complete solution to the question asked, as it doesn't persist the updated config values to the config.yml. However, that would be easy enough to implement yourself simply by writing to the config file from the application. If anyone would like to write this implementation, feel free to open a PR on the example project I've linked below.
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And its corresponding configuration class:
ExampleConfiguration.java
public class ExampleConfiguration extends Configuration {

    private String myConfigValue;

    public String getMyConfigValue() {
        return myConfigValue;
    }

    public void setMyConfigValue(String value) {
        myConfigValue = value;
    }
}
Then create a task which updates the config:
UpdateConfigTask.java
public class UpdateConfigTask extends Task {

    ExampleConfiguration config;

    public UpdateConfigTask(ExampleConfiguration config) {
        super("updateconfig");
        this.config = config;
    }

    @Override
    public void execute(Map<String, List<String>> parameters, PrintWriter output) {
        config.setMyConfigValue("goodbye");
    }
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
#Path("/config")
public class ConfigResource {
private final ExampleConfiguration config;
public ConfigResource(ExampleConfiguration config) {
this.config = config;
}
#GET
public Response handleGet() {
return Response.ok().entity(config.getMyConfigValue()).build();
}
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same reference to the constructor of ConfigResource.java and UpdateConfigTask.java. If you aren't familiar with the concept see here:
Is Java "pass-by-reference" or "pass-by-value"?
The linked classes above are to a project I've created which demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified that this works with the built-in configuration. The Dropwizard Configuration class which you need to extend for your own configuration does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.