Question regarding FtpInboundFileSynchronizer running with multiple instances/applications - java

I've recently been trying to configure and set up a Spring Boot application that will later run in Kubernetes with multiple pods. The application is meant to download files from an FTP server. I found existing support for this in Spring Integration, particularly FtpInboundFileSynchronizer, so I tried to set it up and make sure it works. I have a working solution with a ConcurrentMetadataStore. So my only real question is whether it will be fine running with multiple instances, or whether I need something additional for it to run in multiple pods?
My configuration looks something like this:
@Getter
@Setter
@Configuration
@ConfigurationProperties(prefix = "ftp")
public class FtpConfiguration
{
    private static final int PASSIVE_LOCAL_DATA_CONNECTION_MODE = 2;
    private static final int DEFAULT_FTP_PORT = 21;

    String host;
    String username;
    String password;
    String localDirectory;
    String remoteDirectory;
    FtpRemoteFileTemplate template;
    FtpInboundFileSynchronizer synchronizer;
    DataSource templateSource;

    @Bean
    public ConcurrentMetadataStore metadataStore(DataSource dataSource)
    {
        var jdbcMetadataStore = new JdbcMetadataStore(dataSource);
        jdbcMetadataStore.setTablePrefix("INT_");
        jdbcMetadataStore.setRegion("TEMPORARY");
        jdbcMetadataStore.afterPropertiesSet();
        return jdbcMetadataStore;
    }

    @Bean
    public DefaultFtpSessionFactory defaultFtpSessionFactory()
    {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(host);
        sf.setUsername(username);
        sf.setPassword(password);
        sf.setPort(DEFAULT_FTP_PORT);
        sf.setConnectTimeout(5000);
        sf.setClientMode(PASSIVE_LOCAL_DATA_CONNECTION_MODE);
        return sf;
    }

    @Bean
    FtpRemoteFileTemplate ftpRemoteFileTemplate(DefaultFtpSessionFactory dsf)
    {
        return new FtpRemoteFileTemplate(dsf);
    }

    @Bean
    FtpInboundFileSynchronizer ftpInboundFileSynchronizer(DefaultFtpSessionFactory dsf)
    {
        FtpInboundFileSynchronizer ftpInSync = new FtpInboundFileSynchronizer(dsf);
        ftpInSync.setRemoteDirectory(remoteDirectory);
        ftpInSync.setFilter(ftpFileListFilter());
        return ftpInSync;
    }

    public FileListFilter<FTPFile> ftpFileListFilter()
    {
        // No try-with-resources here: closing the ChainFileListFilter before
        // returning it would also close its delegate filters.
        ChainFileListFilter<FTPFile> chain = new ChainFileListFilter<>();
        chain.addFilter(new FtpPersistentAcceptOnceFileListFilter(metadataStore(templateSource), "TEST"));
        return chain;
    }
}
and then I just call the synchronizeToLocalDirectory method:
FtpClient(FtpRemoteFileTemplate template, FtpInboundFileSynchronizer synchronizer,
          @Value("${ftp.remote-directory}") String remoteDirectory,
          @Value("${ftp.local-directory}") String localDirectory)
{
    this.template = template;
    this.synchronizer = synchronizer;
    this.remoteDirectory = remoteDirectory;
    this.localDirectory = localDirectory;
}
synchronizer.setRemoteDirectory(remoteDirectory);
synchronizer.synchronizeToLocalDirectory(new File(localDirectory));
Would this solution handle multiple applications without problems? Or what else would I need? Does the ConcurrentMetadataStore alone make sure this works? (For example, there wouldn't be a conflict/crash if two instances tried to synchronise the same directory at the same time, as they'd both be fine thanks to the metadata store being @Transactional.)

Your assumption is correct: as long as all your pods connect to the same database, that JdbcMetadataStore will ensure that no concurrent reads of the same file happen.
It is not clear, though, why one would use an FtpInboundFileSynchronizer manually rather than via an FtpInboundFileSynchronizingMessageSource and a subsequent integration flow, but that's, I guess, a fully different story and question.
On the other hand: why do you ask this question at all? Didn't you try your solution? Aren't the docs enough to be sure where and how to go? https://docs.spring.io/spring-integration/docs/current/reference/html/file.html#remote-persistent-flf
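For reference, a minimal sketch of that message-source-based approach, building on the beans from the question (the flow bean, poller interval, and log statement are assumptions; Spring Integration 5.x Java DSL):
@Bean
public FtpInboundFileSynchronizingMessageSource ftpMessageSource(FtpInboundFileSynchronizer synchronizer)
{
    FtpInboundFileSynchronizingMessageSource source = new FtpInboundFileSynchronizingMessageSource(synchronizer);
    source.setLocalDirectory(new File(localDirectory));
    source.setAutoCreateLocalDirectory(true);
    return source;
}

@Bean
public IntegrationFlow ftpInboundFlow(FtpInboundFileSynchronizingMessageSource ftpMessageSource)
{
    return IntegrationFlows.from(ftpMessageSource, e -> e.poller(Pollers.fixedDelay(5000)))
            // each payload is a locally synchronized java.io.File; the metadata-store
            // filter on the synchronizer still applies, so each file is processed once across pods
            .handle(message -> System.out.println("Downloaded: " + message.getPayload()))
            .get();
}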

How to trigger SFTP inbound channel in test

I found that an IntegrationFlow I had written using the Java DSL wasn't very testable, so I followed Configuring with Java Configuration and split it into @Bean configuration.
In my unit test I used a third-party in-memory SFTP server, and I tried triggering the InboundChannelAdapter and then calling receive() on the channel.
I had a problem finding out which type of Channel to use, as Channel usage was not mentioned anywhere in the SFTP Adapters documentation, but ultimately I found what I think is correct (QueueChannel) in the testing examples repository.
My problem is that the unit test I wrote is hanging on the channel's receive() method. Through debugging I determined that session factory's getSession() never gets called.
What am I doing wrong?
@Bean
public PollableChannel sftpChannel() {
    return new QueueChannel();
}

@Bean
@EndpointId("sftpInboundAdapter")
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "1000"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
    source.setLocalDirectory(new File("/local"));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    source.setMaxFetchSize(6);
    return source;
}

@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(testSftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/remote");
    List<String> filterFileNameList = List.of("1.txt");
    fileSynchronizer.setFilter(new FilenameListFilter(filterFileNameList));
    return fileSynchronizer;
}

@Bean
public DefaultSftpSessionFactory testSftpSessionFactory() {
    // @Bean methods must not be private; the hard-coded values match the test server below
    DefaultSftpSessionFactory defaultSftpSessionFactory = new DefaultSftpSessionFactory();
    defaultSftpSessionFactory.setPassword("password");
    defaultSftpSessionFactory.setUser("username");
    defaultSftpSessionFactory.setHost("localhost");
    defaultSftpSessionFactory.setPort(777);
    defaultSftpSessionFactory.setAllowUnknownKeys(true);
    Properties config = new Properties();
    config.put("StrictHostKeyChecking", "no");
    defaultSftpSessionFactory.setSessionConfig(config);
    return defaultSftpSessionFactory;
}
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = {IntegrationFlowTestSupport.class, Synchronizer.class, Channel.class, Activator.class})
public class IntegrationFlowConfigTest {

    private static final String CONTENTS = "abcdef 1234567890";

    @Autowired
    PollableChannel sftpChannel;

    @Autowired
    DefaultSftpSessionFactory testSftpSessionFactory;

    @Autowired
    SftpInboundFileSynchronizer sftpInboundFileSynchronizer;

    @Autowired
    SftpInboundFileSynchronizingMessageSource sftpMessageSource;

    @Autowired
    SourcePollingChannelAdapter sftpInboundAdapter;

    @Test
    public void test() throws Exception {
        FileEntry f1 = new FileEntry("/remote/1.txt", CONTENTS);
        FileEntry f2 = new FileEntry("/remote/2.txt", CONTENTS);
        FileEntry f3 = new FileEntry("/remote/3.txt", CONTENTS);
        withSftpServer(server -> {
            server.setPort(777);
            server.addUser("username", "password");
            server.putFile(f1.getPath(), f1.createInputStream());
            server.putFile(f2.getPath(), f2.createInputStream());
            sftpInboundAdapter.start();
            Message<?> message = sftpChannel.receive();
        });
    }
}
First of all, it is wrong to rewrite your code to satisfy unit test expectations. We have spent more than one hour thinking about how to separate testing concerns from production code.
See the respective documentation: https://docs.spring.io/spring-integration/docs/current/reference/html/testing.html#test-context.
For your use-case it might be better to mock that @ServiceActivator instead of using a QueueChannel and a competing consumer in your test. What I mean is that you already have a consumer in your configuration with that @ServiceActivator, so there is no guarantee that your manual sftpChannel.receive() would give you a message from the queue, since it could already have been consumed by your @ServiceActivator subscriber.
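For illustration, a rough sketch of that mock-based approach from the testing docs (the @ServiceActivator's endpoint id is an assumption, since the Activator class isn't shown):
@ExtendWith(SpringExtension.class)
@SpringIntegrationTest
@ContextConfiguration(classes = {IntegrationFlowTestSupport.class, Synchronizer.class, Channel.class, Activator.class})
public class MockedActivatorTest {

    @Autowired
    MockIntegrationContext mockIntegrationContext;

    @Test
    public void test() {
        ArgumentCaptor<Message<?>> captor = MockIntegration.messageArgumentCaptor();
        MessageHandler mockHandler = MockIntegration.mockMessageHandler(captor).handleNext(m -> { });
        // "activatorEndpoint" is an assumed endpoint id for the @ServiceActivator bean
        mockIntegrationContext.substituteMessageHandlerFor("activatorEndpoint", mockHandler);
        // start the adapter, then assert on captor.getValue()
    }
}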
The fixedDelay = "0" looks suspicious. Isn't that too often to ask an SFTP server for new files? How do you expect your system to be stable enough if you give it so much stress with such a short delay?
We don't know what withSftpServer(server -> { is, and it is also not clear what testSftpSessionFactory is. So it's not clear yet how you start an SFTP server and connect to it from your code.
I also see sftpMessageSource.start(), but nowhere in your code is it stopped. Plus, I guess you really meant to start an endpoint, not the source. The endpoint in your case is a SourcePollingChannelAdapter created for that @InboundChannelAdapter. You can use an @EndpointId if it is not autowired automatically by type.
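With the @EndpointId("sftpInboundAdapter") from the question, controlling that endpoint in the test could look like this (a sketch; the receive timeout is arbitrary, but it keeps the test from hanging forever):
@Autowired
SourcePollingChannelAdapter sftpInboundAdapter;

@Test
public void test() {
    sftpInboundAdapter.start();
    try {
        Message<?> message = sftpChannel.receive(10_000); // fail fast instead of blocking indefinitely
        // assertions on the message...
    } finally {
        sftpInboundAdapter.stop();
    }
}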
In our tests we use the Apache MINA SSHD library: https://github.com/spring-projects/spring-integration/blob/main/spring-integration-sftp/src/test/java/org/springframework/integration/sftp/SftpTestSupport.java#L64-L76

Karate 0.9.6 1.1.0 - org.graalvm.polyglot.PolyglotException: not found error when using classpath to specify the file location [duplicate]

I was working with the Karate framework to test my REST service and it worked great. However, I have a service that consumes a message from a Kafka topic, then persists it to Mongo, and finally notifies Kafka.
I made a Java producer in my Karate project; it is called from JS so it can be used by a feature.
Then I have a consumer to check the message.
Feature:
* def kafkaProducer = read('../js/KafkaProducer.js')
JS:
function(kafkaConfiguration) {
    var Producer = Java.type('x.y.core.producer.Producer');
    var producer = new Producer(kafkaConfiguration);
    return producer;
}
Java:
public class Producer {

    private static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);
    private static final String KEY = "C636E8E238FD7AF97E2E500F8C6F0F4C";

    private KafkaConfiguration kafkaConfiguration;
    private ObjectMapper mapper;
    private AESEncrypter aesEncrypter;

    public Producer(KafkaConfiguration kafkaConfiguration) {
        kafkaConfiguration.getProperties().put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        kafkaConfiguration.getProperties().put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
        this.kafkaConfiguration = kafkaConfiguration;
        this.mapper = new ObjectMapper();
        this.aesEncrypter = new AESEncrypter(KEY);
    }

    public String produceMessage(String payload) {
        // Just notify kafka with payload and return id of payload
    }
The other class:
public class KafkaConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);

    private Properties properties;

    public KafkaConfiguration(String host) {
        try {
            properties = new Properties();
            properties.put(BOOTSTRAP_SERVERS_CONFIG, host);
            properties.put(ConsumerConfig.GROUP_ID_CONFIG, "karate-integration-test");
            properties.put(ConsumerConfig.CLIENT_ID_CONFIG, "offset123");
            properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
            properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        } catch (Exception e) {
            LOGGER.error("Fail creating the consumer...", e);
            throw e;
        }
    }

    public Properties getProperties() {
        return properties;
    }

    public void setProperties(Properties properties) {
        this.properties = properties;
    }
}
I'd like to use the producer code with an annotation, like Cucumber does:
@Then("^Notify kafka with payload (-?\\d+)$")
public void validateResult(String payload) throws Throwable {
    new Producer(kafkaConfiguration).produceMessage(payload);
}
and in the feature use
Then Notify kafka with payload "{example:value}"
I want to do that because I want to reuse that code in a base project so it can be included in other projects.
If annotations don't work, maybe you can suggest another way to do it.
The answer is simple: use normal Java / Maven concepts. Move the common Java code to the "main" packages (src/main/java). Now all you need to do is build a JAR and add it as a dependency to any Karate project.
The last piece of the puzzle is this: use the classpath: prefix to refer to any features or JS files in the JAR. Karate will be able to pick them up.
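For example, assuming the shared JAR packages the script under src/main/resources/js (a hypothetical layout), the feature would read it as * def kafkaProducer = read('classpath:js/KafkaProducer.js') instead of using a relative path.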
EDIT: Sorry, Karate does not support Cucumber or step definitions. It has a much simpler approach. Please read this for details: https://github.com/intuit/karate/issues/398

SolrHealthIndicator without deprecated CompositeHealthIndicator

I've tried to upgrade Spring Boot to version 2.2.4.RELEASE. Everything is fine except a problem with CompositeHealthIndicator, which is deprecated.
I have this bean method:
@Autowired
private HealthAggregator healthAggregator;

@Bean
public HealthIndicator solrHealthIndicator() {
    CompositeHealthIndicator composite = new CompositeHealthIndicator(this.healthAggregator);
    composite.addHealthIndicator("solr1", createHealthIndicator(firstHttpSolrClient()));
    composite.addHealthIndicator("solr2", createHealthIndicator(secondHttpSolrClient()));
    composite.addHealthIndicator("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return composite;
}

private CustomSolrHealthIndicator createHealthIndicator(SolrClient source) {
    try {
        return new CustomSolrHealthIndicator(source);
    } catch (Exception ex) {
        throw new IllegalStateException("Unable to create healthCheckIndicator for solr client instance.", ex);
    }
}
That registers a HealthIndicator for 3 instances of Solr (2 for indexing, 1 for querying). Everything worked fine until the Spring Boot update. After the update, the method CompositeHealthIndicator.addHealthIndicator is no longer present and the whole class is marked as deprecated.
The class created in createHealthIndicator looks like this:
public class CustomSolrHealthIndicator extends SolrHealthIndicator {

    private final SolrClient solrClient;

    public CustomSolrHealthIndicator(SolrClient solrClient) {
        super(solrClient);
        this.solrClient = solrClient;
    }

    @Override
    protected void doHealthCheck(Health.Builder builder) throws Exception {
        if (!this.solrClient.getClass().isAssignableFrom(HttpSolrClient.class)) {
            super.doHealthCheck(builder);
            return; // not an HttpSolrClient, so skip the base-URL check below
        }
        HttpSolrClient httpSolrClient = (HttpSolrClient) this.solrClient;
        if (StringUtils.isBlank(httpSolrClient.getBaseURL())) {
            return;
        }
        super.doHealthCheck(builder);
    }
}
Is there any easy way, in Spring Boot 2.2.X, to register the Solr instances I want to check (whether they are up or down), similar to the old way?
EDIT:
I have tried this:
@Bean
public CompositeHealthContributor solrHealthIndicator() {
    Map<String, HealthIndicator> solrIndicators = Maps.newLinkedHashMap();
    solrIndicators.put("solr1", createHealthIndicator(firstHttpSolrClient()));
    solrIndicators.put("solr2", createHealthIndicator(secondHttpSolrClient()));
    solrIndicators.put("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return CompositeHealthContributor.fromMap(solrIndicators);
}

private CustomSolrHealthIndicator createHealthIndicator(SolrClient source) {
    try {
        return new CustomSolrHealthIndicator(source);
    } catch (Exception ex) {
        throw new IllegalStateException("Unable to create healthCheckIndicator for solr client instance.", ex);
    }
}
The CustomSolrHealthIndicator is unchanged from its original state.
But I cannot create that bean: when calling createHealthIndicator I get a NoClassDefFoundError.
Does anyone know where the problem is?
Looks like you can just use CompositeHealthContributor. It's not much different from what you already have, and it appears something like this would work. You could also override the functionality to add them one at a time if you'd like, which might be preferable if you have a large amount of configuration. There shouldn't be any harm in either approach.
@Bean
public CompositeHealthContributor solrHealthIndicator() {
    Map<String, HealthIndicator> solrIndicators = new LinkedHashMap<>();
    solrIndicators.put("solr1", createHealthIndicator(firstHttpSolrClient()));
    solrIndicators.put("solr2", createHealthIndicator(secondHttpSolrClient()));
    solrIndicators.put("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return CompositeHealthContributor.fromMap(solrIndicators);
}
Instead of the deprecated CompositeHealthIndicator#addHealthIndicator, use the constructor that takes a map:
@Bean
public HealthIndicator solrHealthIndicator() {
    Map<String, HealthIndicator> healthIndicators = new HashMap<>();
    healthIndicators.put("solr1", createHealthIndicator(firstHttpSolrClient()));
    healthIndicators.put("solr2", createHealthIndicator(secondHttpSolrClient()));
    healthIndicators.put("querySolr", createHealthIndicator(queryHttpSolrClient()));
    return new CompositeHealthIndicator(this.healthAggregator, healthIndicators);
}

Override default Spring-Boot application.properties settings in Junit Test with dynamic value

I want to override properties defined in application.properties in tests, but @TestPropertySource only allows me to provide predefined values.
What I need is to start a server on a random port N, then pass this port to the Spring Boot application. The port has to be ephemeral to allow running multiple tests on the same host at the same time.
I don't mean the embedded HTTP server (Jetty), but a different server that is started at the beginning of the test (e.g. ZooKeeper) and that the application being tested has to connect to.
What's the best way to achieve this?
(Here's a similar question, but the answers do not mention a solution for ephemeral ports: Override default Spring-Boot application.properties settings in Junit Test)
As of Spring Framework 5.2.5 and Spring Boot 2.2.6 you can use Dynamic Properties in tests:
@DynamicPropertySource
static void dynamicProperties(DynamicPropertyRegistry registry) {
    registry.add("property.name", "value");
}
Thanks to the changes made in Spring Framework 5.2.5, the use of @ContextConfiguration and the ApplicationContextInitializer can be replaced with a static @DynamicPropertySource method that serves the same purpose.
@SpringBootTest
@Testcontainers
class SomeSprintTest {

    @Container
    static LocalStackContainer localStack =
            new LocalStackContainer().withServices(LocalStackContainer.Service.S3);

    @DynamicPropertySource
    static void initialize(DynamicPropertyRegistry registry) {
        AwsClientBuilder.EndpointConfiguration endpointConfiguration =
                localStack.getEndpointConfiguration(LocalStackContainer.Service.S3);
        registry.add("cloud.aws.s3.default-endpoint", endpointConfiguration::getServiceEndpoint);
    }
}
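Note that DynamicPropertyRegistry.add takes a Supplier for the value, which is why the endpointConfiguration::getServiceEndpoint method reference works here: the property is only resolved lazily, once the container has started.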
You could override the value of the port property in @BeforeClass like this:
@BeforeClass
public static void beforeClass() {
    System.setProperty("zookeeper.port", getRandomPort());
}
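The getRandomPort() helper is left undefined in that answer; one common way to implement it is to bind a socket to port 0 so the OS assigns a free ephemeral port (note the small race window between closing the socket and the server binding the port):
private static String getRandomPort() throws IOException {
    // port 0 asks the OS for any free ephemeral port
    try (ServerSocket socket = new ServerSocket(0)) {
        return String.valueOf(socket.getLocalPort());
    }
}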
The "clean" solution is to use an ApplicationContextInitializer.
See this answer to a similar question.
See also this github issue asking a similar question.
To summarize the above-mentioned posts with a real-world example that's been sanitized to protect copyright holders: I have a REST endpoint which uses an @Autowired DataSource that needs the dynamic properties to know which port the in-memory MySQL database is using.
Your test must declare the initializer (see the @ContextConfiguration line below).
// standard spring-boot test stuff
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("local")
@ContextConfiguration(
        classes = Application.class,
        // declare the initializer to use
        initializers = SpringTestDatabaseInitializer.class)
// use a random management port as well so we don't conflict with other running tests
@TestPropertySource(properties = {"management.port=0"})
public class SomeSprintTest {

    @LocalServerPort
    private int randomLocalPort;

    @Value("${local.management.port}")
    private int randomManagementPort;

    @Test
    public void testThatDoesSomethingUseful() {
        // now ping your service that talks to the dynamic resource
    }
}
Your initializer needs to add the dynamic properties to your environment. Don't forget to add a shutdown hook for any cleanup that needs to run. Following is an example that sets up an in-memory database using a custom DatabaseObject class.
public class SpringTestDatabaseInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    private static final int INITIAL_PORT = 0; // bind to an ephemeral port
    private static final String DB_USERNAME = "username";
    private static final String DB_PASSWORD = "password-to-use";
    private static final String DB_SCHEMA_NAME = "default-schema";

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        DatabaseObject databaseObject = new InMemoryDatabaseObject(INITIAL_PORT, DB_USERNAME, DB_PASSWORD, DB_SCHEMA_NAME);
        registerShutdownHook(databaseObject);
        int databasePort = startDatabase(databaseObject);
        addDatabasePropertiesToEnvironment(applicationContext, databasePort);
    }

    private static void addDatabasePropertiesToEnvironment(ConfigurableApplicationContext applicationContext, int databasePort) {
        String url = String.format("jdbc:mysql://localhost:%s/%s", databasePort, DB_SCHEMA_NAME);
        System.out.println("Adding db props to environment for url: " + url);
        TestPropertySourceUtils.addInlinedPropertiesToEnvironment(
                applicationContext,
                "db.port=" + databasePort,
                "db.schema=" + DB_SCHEMA_NAME,
                "db.url=" + url,
                "db.username=" + DB_USERNAME,
                "db.password=" + DB_PASSWORD);
    }

    private static int startDatabase(DatabaseObject database) {
        try {
            database.start();
            return database.getBoundPort();
        } catch (Exception e) {
            throw new IllegalStateException("Failed to start database", e);
        }
    }

    private static void registerShutdownHook(DatabaseObject databaseObject) {
        Runnable shutdownTask = () -> {
            try {
                int boundPort = databaseObject.getBoundPort();
                System.out.println("Shutting down database at port: " + boundPort);
                databaseObject.stop();
            } catch (Exception e) {
                // nothing to do here
            }
        };
        Thread shutdownThread = new Thread(shutdownTask, "Database Shutdown Thread");
        Runtime.getRuntime().addShutdownHook(shutdownThread);
    }
}
When I look at the logs, both of my tests that use this initializer class use the same object (the initialize method only gets called once, as does the shutdown hook). So it starts up a database, leaves it running until both tests finish, and then shuts the database down.

Updating Dropwizard config at runtime

Is it possible to have my app update the config settings at runtime? I can easily expose the settings I want in my UI, but is there a way to allow the user to update settings and make them permanent, i.e. save them to the config.yaml file? The only way I can see is to update the file by hand and then restart the server, which seems a bit limiting.
Yes, it is possible to reload the service classes at runtime.
Dropwizard by itself does not have a way to reload the app, but Jersey has.
Jersey uses a container object internally to maintain the running application. Dropwizard uses Jersey's ServletContainer class to run the application.
How to reload the app without restarting it:
Get a handle to the container used internally by Jersey.
You can do this by registering an AbstractContainerLifecycleListener in the Dropwizard Environment before starting the app and implementing its onStartup method as below.
In your main method where you start the app:
// getting the container instance
environment.jersey().register(new AbstractContainerLifecycleListener() {
    @Override
    public void onStartup(Container container) {
        // initializing container - which will be used to reload the app
        _container = container;
    }
});
Add a method to your app to reload it. It takes a list of strings, the names of the service classes you want to reload. This method calls the container's reload method with a new DropwizardResourceConfig instance.
In your Application class
public static synchronized void reloadApp(List<String> reloadClasses) {
    DropwizardResourceConfig dropwizardResourceConfig = new DropwizardResourceConfig();
    for (String className : reloadClasses) {
        try {
            Class<?> serviceClass = Class.forName(className);
            dropwizardResourceConfig.registerClasses(serviceClass);
            System.out.printf(" + loaded class %s.\n", className);
        } catch (ClassNotFoundException ex) {
            System.out.printf(" ! class %s not found.\n", className);
        }
    }
    _container.reload(dropwizardResourceConfig);
}
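A small admin task (the class names here are placeholders, not from the original answer) could then expose this reload method:
public class ReloadTask extends Task {

    public ReloadTask() {
        super("reload");
    }

    @Override
    public void execute(ImmutableMultimap<String, String> parameters, PrintWriter output) {
        // delegate to the application's reload method shown above
        MyApplication.reloadApp(List.of("x.y.resources.SomeResource"));
    }
}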
For more details, see the Jersey example documentation: jersey example for reload
Consider going through the code and documentation of the following files in Dropwizard/Jersey for a better understanding:
Container.java
ContainerLifecycleListener.java
ServletContainer.java
AbstractContainerLifecycleListener.java
DropwizardResourceConfig.java
ResourceConfig.java
No.
The YAML file is parsed at startup and given to the application as a Configuration object once and for all. I believe you can change the file after that, but it wouldn't affect your application until you restart it.
Possible follow-up question: can one restart the service programmatically?
AFAIK, no. I've researched and read the code somewhat but couldn't find a way to do that yet. If there is one, I'd love to hear it :).
I made a task that reloads the main YAML file (useful if something in the file changes). However, it does not reload the environment. After researching this, I found that Dropwizard uses a lot of final variables, and it's quite hard to reload these on the fly without restarting the app.
class ReloadYAMLTask extends Task {

    private String yamlFileName;

    ReloadYAMLTask(String yamlFileName) {
        super("reloadYaml");
        this.yamlFileName = yamlFileName;
    }

    @Override
    public void execute(ImmutableMultimap<String, String> parameters, PrintWriter output) throws Exception {
        if (yamlFileName != null) {
            ConfigurationFactoryFactory configurationFactoryFactory = new DefaultConfigurationFactoryFactory<ServiceConfiguration>();
            ValidatorFactory validatorFactory = Validation.buildDefaultValidatorFactory();
            Validator validator = validatorFactory.getValidator();
            ObjectMapper objectMapper = Jackson.newObjectMapper();
            final ConfigurationFactory<ServiceConfiguration> configurationFactory = configurationFactoryFactory.create(ServiceConfiguration.class, validator, objectMapper, "dw");
            File confFile = new File(yamlFileName);
            configurationFactory.build(confFile);
        }
    }
}
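Once registered via environment.admin().addTask(new ReloadYAMLTask(yamlFileName)), the task can be triggered through the admin connector, e.g. curl -X POST http://localhost:8081/tasks/reloadYaml (assuming the default admin port).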
You can change the configuration in the YAML and read it while your application is running. This will not, however, restart the server or change any server configuration. You will be able to read any changed custom configuration and use it; for example, you can change the logging level at runtime or reload other custom settings.
My solution:
Define a custom server command. You should use this command to start your application instead of the "server" command.
ArgsServerCommand.java
public class ArgsServerCommand<WC extends WebConfiguration> extends EnvironmentCommand<WC> {

    private static final Logger LOGGER = LoggerFactory.getLogger(ArgsServerCommand.class);

    private final Class<WC> configurationClass;

    private Namespace _namespace;

    public static String COMMAND_NAME = "args-server";

    public ArgsServerCommand(Application<WC> application) {
        super(application, "args-server", "Runs the Dropwizard application as an HTTP server specific to my settings");
        this.configurationClass = application.getConfigurationClass();
    }

    /*
     * Since we don't subclass ServerCommand, we need a concrete reference to the configuration
     * class.
     */
    @Override
    protected Class<WC> getConfigurationClass() {
        return configurationClass;
    }

    public Namespace getNamespace() {
        return _namespace;
    }

    @Override
    protected void run(Environment environment, Namespace namespace, WC configuration) throws Exception {
        _namespace = namespace;
        final Server server = configuration.getServerFactory().build(environment);
        try {
            server.addLifeCycleListener(new LifeCycleListener());
            cleanupAsynchronously();
            server.start();
        } catch (Exception e) {
            LOGGER.error("Unable to start server, shutting down", e);
            server.stop();
            cleanup();
            throw e;
        }
    }

    private class LifeCycleListener extends AbstractLifeCycle.AbstractLifeCycleListener {
        @Override
        public void lifeCycleStopped(LifeCycle event) {
            cleanup();
        }
    }
}
Method to reload in your Application:
private static String _ymlFilePath = null; // class variable

public static boolean reloadConfiguration() throws IOException, ConfigurationException {
    boolean reloaded = false;
    if (_ymlFilePath == null) {
        List<Command> commands = _configurationBootstrap.getCommands();
        for (Command command : commands) {
            String commandName = command.getName();
            if (commandName.equals(ArgsServerCommand.COMMAND_NAME)) {
                Namespace namespace = ((ArgsServerCommand) command).getNamespace();
                if (namespace != null) {
                    _ymlFilePath = namespace.getString("file");
                }
            }
        }
    }
    ConfigurationFactoryFactory configurationFactoryFactory = _configurationBootstrap.getConfigurationFactoryFactory();
    ValidatorFactory validatorFactory = _configurationBootstrap.getValidatorFactory();
    Validator validator = validatorFactory.getValidator();
    ObjectMapper objectMapper = _configurationBootstrap.getObjectMapper();
    ConfigurationSourceProvider provider = _configurationBootstrap.getConfigurationSourceProvider();
    final ConfigurationFactory<CustomWebConfiguration> configurationFactory = configurationFactoryFactory.create(CustomWebConfiguration.class, validator, objectMapper, "dw");
    if (_ymlFilePath != null) {
        // Refresh logging level.
        CustomWebConfiguration webConfiguration = configurationFactory.build(provider, _ymlFilePath);
        LoggingFactory loggingFactory = webConfiguration.getLoggingFactory();
        loggingFactory.configure(_configurationBootstrap.getMetricRegistry(), _configurationBootstrap.getApplication().getName());
        // Get my defined custom settings
        CustomSettings customSettings = webConfiguration.getCustomSettings();
        reloaded = true;
    }
    return reloaded;
}
Although this feature isn't supported out of the box by Dropwizard, you can accomplish it fairly easily with the tools it gives you.
Before I get started, note that this isn't a complete solution to the question asked, as it doesn't persist the updated config values to the config.yml. However, this would be easy enough to implement yourself simply by writing to the config file from the application. If anyone would like to write this implementation, feel free to open a PR on the example project I've linked below.
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And its corresponding configuration class:
ExampleConfiguration.java
public class ExampleConfiguration extends Configuration {

    private String myConfigValue;

    public String getMyConfigValue() {
        return myConfigValue;
    }

    public void setMyConfigValue(String value) {
        myConfigValue = value;
    }
}
Then create a task which updates the config:
UpdateConfigTask.java
public class UpdateConfigTask extends Task {

    ExampleConfiguration config;

    public UpdateConfigTask(ExampleConfiguration config) {
        super("updateconfig");
        this.config = config;
    }

    @Override
    public void execute(Map<String, List<String>> parameters, PrintWriter output) {
        config.setMyConfigValue("goodbye");
    }
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
@Path("/config")
public class ConfigResource {

    private final ExampleConfiguration config;

    public ConfigResource(ExampleConfiguration config) {
        this.config = config;
    }

    @GET
    public Response handleGet() {
        return Response.ok().entity(config.getMyConfigValue()).build();
    }
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same reference to the constructors of ConfigResource.java and UpdateConfigTask.java. If you aren't familiar with the concept, see here:
Is Java "pass-by-reference" or "pass-by-value"?
The linked classes above are from a project I've created which demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified that this works with the built-in configuration. However, the Dropwizard Configuration class which you need to extend for your own configuration does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.
