I am building a Dockerized, Spring Cloud based microservice that registers with Eureka. Part of the registration process is asking the host for the port mapped to the container, so Docker can choose a free host port for the containerized service.
I have a host-based service that the dockerized service can ask for the port mapping, and I am now trying to register the microservice with Eureka using the external port.
I get the right port inside my microservice but am unable to override the EurekaInstanceConfig.
What I have tried:
@SpringBootApplication
@EnableEurekaClient
public class ApplicationBootstrapper {

    @Value("${containerIp}")
    private String containerIp;

    @Bean
    public EurekaInstanceConfigBean eurekaInstanceConfigBean() {
        EurekaInstanceConfigBean config = new EurekaInstanceConfigBean();
        String hostPort = new RestTemplate().getForObject(
                "http://{hostname}:7691/container/{id}/hostPort",
                String.class,
                containerIp,
                config.getHostname());
        config.setPreferIpAddress(true);
        config.setIpAddress(containerIp);
        config.setNonSecurePort(Integer.valueOf(hostPort));
        return config;
    }
}
My custom EurekaInstanceConfigBean gets created, but the configuration is not picked up (the service registers with its internal container port).
The question is: how do I override the EurekaInstanceConfigBean?
EDIT (2):
As Steve pointed out, and now that spring-cloud 1.0.0.RELEASE is available, most of my previous solution is obsolete. I've attached my final solution in case anyone is trying something similar:
@Configuration
public class EurekaConfig {

    private static final Log logger = LogFactory.getLog(EurekaConfig.class);

    @Value("${containerIp}")
    private String containerIp;

    @Value("${kompositPort:7691}")
    private String kompositPort;

    @Bean
    public EurekaInstanceConfigBean eurekaInstanceConfigBean() {
        Integer hostPort = new RestTemplate().getForObject(
                "http://{containerIp}:{port}/container/{instanceId}/hostPort",
                Integer.class,
                containerIp,
                kompositPort,
                getHostname());
        EurekaInstanceConfigBean config = new EurekaInstanceConfigBean();
        config.setNonSecurePort(hostPort);
        config.setPreferIpAddress(true);
        config.setIpAddress(containerIp);
        config.getMetadataMap().put("instanceId", getHostname());
        return config;
    }

    private static String getHostname() {
        String hostname = null;
        try {
            hostname = InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            logger.error("Cannot get host info", e);
        }
        return hostname;
    }
}
This was fixed only 6 days ago. Prior to that fix, whatever you set for nonSecurePort would be overridden with ${server.port}. My suggestion, which is kind of hacky (but what can you do when working with pre-release libraries), is to subclass EurekaInstanceConfigBean and implement InitializingBean, so you can set the port in afterPropertiesSet().
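For anyone stuck on an affected pre-release version, a minimal sketch of that workaround; the class name, field, and constructor are illustrative assumptions, not taken from the question:

```java
// Sketch of the InitializingBean workaround described above. Assumes a
// pre-1.0.0 spring-cloud version where property binding overwrites
// nonSecurePort with ${server.port}; names here are illustrative.
import org.springframework.beans.factory.InitializingBean;
import org.springframework.cloud.netflix.eureka.EurekaInstanceConfigBean;

public class ExternalPortEurekaInstanceConfigBean extends EurekaInstanceConfigBean
        implements InitializingBean {

    private final int externalPort;

    public ExternalPortEurekaInstanceConfigBean(int externalPort) {
        this.externalPort = externalPort;
    }

    @Override
    public void afterPropertiesSet() {
        // Runs after property binding has finished, so the externally
        // mapped port is re-applied and ${server.port} no longer wins.
        setNonSecurePort(externalPort);
    }
}
```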
Related
I've recently been trying to configure and set up a Spring Boot application that will later run in Kubernetes with multiple pods. The application is meant to download files from an FTP server. I found existing support for this in Spring Integration, particularly FtpInboundFileSynchronizer, so I set it up and made sure it works. I have a working solution with a ConcurrentMetadataStore. So my only real question is whether it will be fine running with multiple instances, or whether I need something additional to run it in multiple pods.
My configuration looks something like this:
@Getter
@Setter
@Configuration
@ConfigurationProperties(prefix = "ftp")
public class FtpConfiguration
{
    private final static int PASSIVE_LOCAL_DATA_CONNECTION_MODE = 2;
    private final static int DEFAULT_FTP_PORT = 21;

    String host;
    String username;
    String password;
    String localDirectory;
    String remoteDirectory;
    FtpRemoteFileTemplate template;
    FtpInboundFileSynchronizer synchronizer;
    DataSource templateSource;

    @Bean
    public ConcurrentMetadataStore metadataStore(DataSource dataSource)
    {
        var jdbcMetadataStore = new JdbcMetadataStore(dataSource);
        jdbcMetadataStore.setTablePrefix("INT_");
        jdbcMetadataStore.setRegion("TEMPORARY");
        jdbcMetadataStore.afterPropertiesSet();
        return jdbcMetadataStore;
    }

    @Bean
    public DefaultFtpSessionFactory defaultFtpSessionFactory()
    {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(host);
        sf.setUsername(username);
        sf.setPassword(password);
        sf.setPort(DEFAULT_FTP_PORT);
        sf.setConnectTimeout(5000);
        sf.setClientMode(PASSIVE_LOCAL_DATA_CONNECTION_MODE);
        return sf;
    }

    @Bean
    FtpRemoteFileTemplate ftpRemoteFileTemplate(DefaultFtpSessionFactory dsf)
    {
        return new FtpRemoteFileTemplate(dsf);
    }

    @Bean
    FtpInboundFileSynchronizer ftpInboundFileSynchronizer(DefaultFtpSessionFactory dsf)
    {
        FtpInboundFileSynchronizer ftpInSync = new FtpInboundFileSynchronizer(dsf);
        ftpInSync.setRemoteDirectory(remoteDirectory);
        ftpInSync.setFilter(ftpFileListFilter());
        return ftpInSync;
    }

    public FileListFilter<FTPFile> ftpFileListFilter()
    {
        try (ChainFileListFilter<FTPFile> chain = new ChainFileListFilter<>())
        {
            chain.addFilter(new FtpPersistentAcceptOnceFileListFilter(metadataStore(templateSource), "TEST"));
            return chain;
        }
        catch (IOException e)
        {
            throw new RuntimeException("Failed to create FtpPersistentAcceptOnceFileListFilter", e);
        }
    }
}
and then I just call the synchronizeToLocalDirectory method:
FtpClient(
        FtpRemoteFileTemplate template, FtpInboundFileSynchronizer synchronizer,
        @Value("${ftp.remote-directory}") String remoteDirectory,
        @Value("${ftp.local-directory}") String localDirectory)
{
    this.template = template;
    this.synchronizer = synchronizer;
    this.remoteDirectory = remoteDirectory;
    this.localDirectory = localDirectory;
}

synchronizer.setRemoteDirectory(remoteDirectory);
synchronizer.synchronizeToLocalDirectory(new File(localDirectory));
Would this solution handle multiple applications without problems? Or what else would I need? Does the ConcurrentMetadataStore alone make sure this works? (For example, there wouldn't be a conflict/crash if two instances tried to synchronize the same directory at the same time, as they'd both be fine thanks to the metadata store being transactional.)
Your assumption is correct: as long as all your pods are connecting to the same database, that JdbcMetadataStore will ensure that no concurrent processing of the same file is going to happen.
It is not clear, though, why one would use an FtpInboundFileSynchronizer manually rather than via an FtpInboundFileSynchronizingMessageSource and a subsequent integration flow, but that's, I guess, a different story and question.
On the other hand: why do you ask this question at all? Didn't you try your solution? Aren't the docs enough to be sure where and how to go: https://docs.spring.io/spring-integration/docs/current/reference/html/file.html#remote-persistent-flf ?
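For completeness, a rough sketch of what the FtpInboundFileSynchronizingMessageSource variant could look like; the bean names, local directory, poller interval, and handler are illustrative assumptions, not taken from the question:

```java
// Sketch only: assumes Spring Integration 5.x on the classpath; the
// directory name and fixed-delay interval are illustrative.
import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizer;
import org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizingMessageSource;

@Configuration
public class FtpFlowConfig {

    @Bean
    public FtpInboundFileSynchronizingMessageSource ftpMessageSource(
            FtpInboundFileSynchronizer synchronizer) {
        FtpInboundFileSynchronizingMessageSource source =
                new FtpInboundFileSynchronizingMessageSource(synchronizer);
        source.setLocalDirectory(new File("ftp-local")); // illustrative path
        source.setAutoCreateLocalDirectory(true);
        return source;
    }

    @Bean
    public IntegrationFlow ftpFlow(FtpInboundFileSynchronizingMessageSource source) {
        // The poller replaces the manual synchronizeToLocalDirectory() calls;
        // each downloaded File arrives as a message payload.
        return IntegrationFlows
                .from(source, c -> c.poller(Pollers.fixedDelay(5000)))
                .handle(message -> System.out.println("Synchronized: " + message.getPayload()))
                .get();
    }
}
```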
I have a problem: I do not know how to set the host dynamically and perform RPC operations against different hosts.
Here is the situation
I have multiple RabbitMQ brokers running on different servers and networks (i.e. 192.168.1.0/24, 192.168.2.0/24).
The desired behavior: I have a list of IP addresses against which I will perform RPC calls.
So, for each entry in the IP address list, I want to perform a convertSendAndReceive, process the reply, and so on.
I tried some code from the documentation, but it does not seem to work: even messages sent to an invalid address (one with no RabbitMQ running, or not even existing on the network, for example 1.1.1.1) get received by a valid RabbitMQ broker (running on 192.168.1.1, for example).
Note: I can successfully perform an RPC call against a correct address; however, I can also "successfully" perform an RPC call against an invalid address, which I'm not supposed to.
Does anyone have any idea about this?
Here is my source
TaskSchedulerConfiguration.java
@Configuration
@EnableScheduling
public class TaskSchedulerConfiguration {

    private static final Logger logger = LoggerFactory.getLogger(TaskSchedulerConfiguration.class);

    @Autowired
    private IpAddressRepo ipAddressRepo;

    @Autowired
    private RemoteProcedureService remote;

    @Scheduled(fixedDelayString = "5000", initialDelay = 2000)
    public void scheduledTask() {
        ipAddressRepo.findAll().stream()
                .forEach(ipaddress -> {
                    try {
                        remote.setIpAddress(ipaddress);
                        remote.doSomeRPC();
                    } catch (Exception e) {
                        logger.debug("Unable to connect to licenser server: {}", ipaddress);
                        logger.debug(e.getMessage(), e);
                    }
                });
    }
}
RemoteProcedureService.java
@Service
public class RemoteProcedureService {

    @Autowired
    private RabbitTemplate template;

    @Autowired
    private DirectExchange exchange;

    public boolean doSomeRPC() throws JsonProcessingException {
        // I passed this.factory.getHost() so that I will know whether only the
        // valid IP address is received by the other side. At this point, the
        // other side receives an invalid IP address which supposedly should
        // not be received.
        boolean response = (Boolean) template.convertSendAndReceive(exchange.getName(), "rpc", this.factory.getHost());
        return response;
    }

    public void setIpAddress(String host) {
        factory.setHost(host);
        factory.setCloseTimeout(prop.getRabbitMQCloseConnectTimeout());
        factory.setPort(prop.getRabbitMQPort());
        factory.setUsername(prop.getRabbitMQUsername());
        factory.setPassword(prop.getRabbitMQPassword());
        template.setConnectionFactory(factory);
    }
}
AmqpConfiguration.java
@Configuration
public class AmqpConfiguration {

    public static final String topicExchangeName = "testExchange";
    public static final String queueName = "rpc";

    @Autowired
    private LicenseVisualizationProperties prop;

    // Commented this out since this will only be assigned once.
    // I need to set it dynamically in order to send to different hosts,
    // so I put it in RemoteProcedureService.java, but it never worked.
    // @Bean
    // public ConnectionFactory connectionFactory() {
    //     CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    //     connectionFactory.setCloseTimeout(prop.getRabbitMQCloseConnectTimeout());
    //     connectionFactory.setPort(prop.getRabbitMQPort());
    //     connectionFactory.setUsername(prop.getRabbitMQUsername());
    //     connectionFactory.setPassword(prop.getRabbitMQPassword());
    //     return connectionFactory;
    // }

    @Bean
    public DirectExchange exchange() {
        return new DirectExchange(topicExchangeName);
    }
}
UPDATE 1
It seems that, during the loop, once a valid IP is set on the CachingConnectionFactory, the succeeding IP addresses in the loop, regardless of whether they are valid or invalid, get handled by the first valid IP that was set on the CachingConnectionFactory.
UPDATE 2
I found out that once it establishes a successful connection, it will not create a new one. How do you force RabbitTemplate to establish a new connection?
It's a rather strange use case and won't perform very well; you would be better off with a pool of connection factories and templates.
However, to answer your question:
Call resetConnection() to close the connection.
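Applied to the RemoteProcedureService from the question, that could look like the sketch below; the fields (factory, template, prop) are assumed to be the same ones the question declares, and the resetConnection() call is the only addition:

```java
// Sketch: fields as in the question's RemoteProcedureService
// (CachingConnectionFactory factory, RabbitTemplate template, props in prop).
public void setIpAddress(String host) {
    factory.setHost(host);
    factory.setCloseTimeout(prop.getRabbitMQCloseConnectTimeout());
    factory.setPort(prop.getRabbitMQPort());
    factory.setUsername(prop.getRabbitMQUsername());
    factory.setPassword(prop.getRabbitMQPassword());
    // Close the cached connection so the next operation really connects
    // to the newly configured host instead of reusing the old connection.
    factory.resetConnection();
    template.setConnectionFactory(factory);
}
```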
With any change to the SSL certificates in its keystore, we need to restart the Spring Boot application. I want to update my keystore entry periodically (maybe every year) but want to avoid restarting the JVM. What would it take to achieve this? I wonder if writing a custom KeyManager is an acceptable practice?
Unfortunately, this is not possible.
BUT
You have several solutions here.
Reload Tomcat connector (a bit hacky)
You can restart the Tomcat connector, i.e. restarting port 8443 is possible after you change your jssecacert file.
But I think that is still a hack.
Reverse proxy: Nginx, Apache
This is the way to go. Your application should sit behind a reverse proxy (e.g. nginx). This gives you additional flexibility and reduces load on your app. Nginx handles HTTPS and translates it to plain HTTP. You'll still have to restart nginx, but an nginx restart is so fast that there will be no downtime. Moreover, you could configure a script to do this for you.
On Tomcat one can use local JMX to reload SSL context:
private static final String JMX_THREAD_POOL_NAME = "*:type=ThreadPool,name=*";
private static final String JMX_OPERATION_RELOAD_SSL_HOST_CONFIGS_NAME = "reloadSslHostConfigs";

private void reloadSSLConfigsOnConnectors() {
    try {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName objectName = new ObjectName(JMX_THREAD_POOL_NAME);
        Set<ObjectInstance> allTP = server.queryMBeans(objectName, null);
        logger.info("MBeans found: {}", allTP.size());
        allTP.forEach(tp -> reloadSSLConfigOnThreadPoolJMX(server, tp));
    } catch (Exception ex) {
        logger.error("", ex);
    }
}

private void reloadSSLConfigOnThreadPoolJMX(MBeanServer server, ObjectInstance tp) {
    try {
        logger.info("Invoking operation SSL reload on {}", tp.getObjectName());
        server.invoke(tp.getObjectName(), JMX_OPERATION_RELOAD_SSL_HOST_CONFIGS_NAME, new Object[]{}, new String[]{});
        logger.trace("Successfully invoked");
    } catch (Exception ex) {
        logger.error("Invoking SSL reload", ex);
    }
}
I'm reloading all ThreadPool SSL context, but you really only need one: Tomcat:type=ThreadPool,name=https-jsse-nio-8443. I'm just afraid the name will change, so I cover all possibilities just in case.
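As a self-contained illustration of the wildcard ObjectName behavior (not Tomcat-specific: in a plain JVM the pattern simply matches zero MBeans, while inside Tomcat it matches the connector thread pools):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectInstance;
import javax.management.ObjectName;

public class JmxPatternDemo {

    // Counts MBeans matching the same ThreadPool pattern used above.
    // In a plain JVM this is 0; inside Tomcat it includes the connectors.
    public static int countThreadPools() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName pattern = new ObjectName("*:type=ThreadPool,name=*");
            Set<ObjectInstance> pools = server.queryMBeans(pattern, null);
            return pools.size();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("ThreadPool MBeans found: " + countThreadPools());
    }
}
```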
I have solved this problem in my Spring Boot application by getting a
TomcatServletWebServerFactory
in a bean and adding my own connector customizer to it.
@Bean
public ServletWebServerFactory servletContainer() {
    TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory();
    // --- CUSTOMIZE SSL PORT IN ORDER TO BE ABLE TO RELOAD THE SSL HOST CONFIG
    tomcat.addConnectorCustomizers(new DefaultSSLConnectorCustomizer());
    return tomcat;
}
My customizer extracts the HTTPS protocol handler for later use:
public class DefaultSSLConnectorCustomizer implements TomcatConnectorCustomizer {

    private Http11NioProtocol protocol;

    @Override
    public void customize(Connector connector) {
        Http11NioProtocol protocol = (Http11NioProtocol) connector.getProtocolHandler();
        if (connector.getSecure()) {
            // --- REMEMBER PROTOCOL WHICH WE NEED LATER IN ORDER TO RELOAD SSL CONFIG
            this.protocol = protocol;
        }
    }

    protected Http11NioProtocol getProtocol() {
        return protocol;
    }
}
When I have updated the keystore with the new private key, I perform an SSL host config reload. That code goes here:
@Component
public class TomcatUtil {

    public static final String DEFAULT_SSL_HOSTNAME_CONFIG_NAME = "_default_";

    private final Logger logger = LoggerFactory.getLogger(getClass());

    private ServletWebServerFactory servletWebServerFactory;

    public TomcatUtil(ServletWebServerFactory servletWebServerFactory) {
        this.servletWebServerFactory = servletWebServerFactory;
    }

    public void reloadSSLHostConfig() {
        TomcatServletWebServerFactory tomcatFactory = (TomcatServletWebServerFactory) servletWebServerFactory;
        Collection<TomcatConnectorCustomizer> customizers = tomcatFactory.getTomcatConnectorCustomizers();
        for (TomcatConnectorCustomizer tomcatConnectorCustomizer : customizers) {
            if (tomcatConnectorCustomizer instanceof DefaultSSLConnectorCustomizer) {
                DefaultSSLConnectorCustomizer customizer = (DefaultSSLConnectorCustomizer) tomcatConnectorCustomizer;
                Http11NioProtocol protocol = customizer.getProtocol();
                try {
                    protocol.reloadSslHostConfig(DEFAULT_SSL_HOSTNAME_CONFIG_NAME);
                    logger.info("Reloaded SSL host configuration");
                } catch (IllegalArgumentException e) {
                    logger.warn("Cannot reload SSL host configuration", e);
                }
            }
        }
    }
}
And finally
...
renewServerCertificate();
tomcatUtil.reloadSSLHostConfig();
I want to override properties defined in application.properties in tests, but @TestPropertySource only allows providing predefined values.
What I need is to start a server on a random port N, then pass this port to spring-boot application. The port has to be ephemeral to allow running multiple tests on the same host at the same time.
I don't mean the embedded http server (jetty), but some different server that is started at the beginning of the test (e.g. zookeeper) and the application being tested has to connect to it.
What's the best way to achieve this?
(here's a similar question, but answers do not mention a solution for ephemeral ports - Override default Spring-Boot application.properties settings in Junit Test)
As of Spring Framework 5.2.5 and Spring Boot 2.2.6 you can use Dynamic Properties in tests:
@DynamicPropertySource
static void dynamicProperties(DynamicPropertyRegistry registry) {
    registry.add("property.name", () -> "value");
}
Thanks to the changes made in Spring Framework 5.2.5, the use of @ContextConfiguration and the ApplicationContextInitializer can be replaced with a static @DynamicPropertySource method that serves the same purpose.
@SpringBootTest
@Testcontainers
class SomeSprintTest {

    @Container
    static LocalStackContainer localStack =
            new LocalStackContainer().withServices(LocalStackContainer.Service.S3);

    @DynamicPropertySource
    static void initialize(DynamicPropertyRegistry registry) {
        AwsClientBuilder.EndpointConfiguration endpointConfiguration =
                localStack.getEndpointConfiguration(LocalStackContainer.Service.S3);
        registry.add("cloud.aws.s3.default-endpoint", endpointConfiguration::getServiceEndpoint);
    }
}
You could override the value of the port property in @BeforeClass like this:
@BeforeClass
public static void beforeClass() {
    System.setProperty("zookeeper.port", getRandomPort());
}
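The getRandomPort() helper above is not a built-in; one possible self-contained implementation asks the OS for a free ephemeral port by binding to port 0. Note the small race: another process could grab the port between close() and your server's bind, which is usually acceptable in tests.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortUtil {

    // Bind to port 0 so the OS assigns a free ephemeral port, then release
    // it immediately and return the number for the test to reuse.
    public static String getRandomPort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return String.valueOf(socket.getLocalPort());
        } catch (IOException e) {
            throw new IllegalStateException("No free port available", e);
        }
    }
}
```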
The "clean" solution is to use an ApplicationContextInitializer.
See this answer to a similar question.
See also this github issue asking a similar question.
To summarize the above-mentioned posts using a real-world example that's been sanitized to protect copyright holders: I have a REST endpoint which uses an @Autowired DataSource, and that DataSource needs the dynamic properties to know which port the in-memory MySQL database is using.
Your test must declare the initializer (see the #ContextConfiguration line below).
// standard spring-boot test stuff
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("local")
@ContextConfiguration(
        classes = Application.class,
        // declare the initializer to use
        initializers = SpringTestDatabaseInitializer.class)
// use a random management port as well so we don't conflict with other running tests
@TestPropertySource(properties = {"management.port=0"})
public class SomeSprintTest {

    @LocalServerPort
    private int randomLocalPort;

    @Value("${local.management.port}")
    private int randomManagementPort;

    @Test
    public void testThatDoesSomethingUseful() {
        // now ping your service that talks to the dynamic resource
    }
}
Your initializer needs to add the dynamic properties to your environment. Don't forget to add a shutdown hook for any cleanup that needs to run. Following is an example that sets up an in-memory database using a custom DatabaseObject class.
public class SpringTestDatabaseInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    private static final int INITIAL_PORT = 0; // bind to an ephemeral port
    private static final String DB_USERNAME = "username";
    private static final String DB_PASSWORD = "password-to-use";
    private static final String DB_SCHEMA_NAME = "default-schema";

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        DatabaseObject databaseObject = new InMemoryDatabaseObject(INITIAL_PORT, DB_USERNAME, DB_PASSWORD, DB_SCHEMA_NAME);
        registerShutdownHook(databaseObject);
        int databasePort = startDatabase(databaseObject);
        addDatabasePropertiesToEnvironment(applicationContext, databasePort);
    }

    private static void addDatabasePropertiesToEnvironment(ConfigurableApplicationContext applicationContext, int databasePort) {
        String url = String.format("jdbc:mysql://localhost:%s/%s", databasePort, DB_SCHEMA_NAME);
        System.out.println("Adding db props to environment for url: " + url);
        TestPropertySourceUtils.addInlinedPropertiesToEnvironment(
                applicationContext,
                "db.port=" + databasePort,
                "db.schema=" + DB_SCHEMA_NAME,
                "db.url=" + url,
                "db.username=" + DB_USERNAME,
                "db.password=" + DB_PASSWORD);
    }

    private static int startDatabase(DatabaseObject database) {
        try {
            database.start();
            return database.getBoundPort();
        } catch (Exception e) {
            throw new IllegalStateException("Failed to start database", e);
        }
    }

    private static void registerShutdownHook(DatabaseObject databaseObject) {
        Runnable shutdownTask = () -> {
            try {
                int boundPort = databaseObject.getBoundPort();
                System.out.println("Shutting down database at port: " + boundPort);
                databaseObject.stop();
            } catch (Exception e) {
                // nothing to do here
            }
        };
        Thread shutdownThread = new Thread(shutdownTask, "Database Shutdown Thread");
        Runtime.getRuntime().addShutdownHook(shutdownThread);
    }
}
When I look at the logs, it shows that for both of my tests that use this initializer class, they use the same object (the initialize method only gets called once, as does the shutdown hook). So it starts up a database, and leaves it running until both tests finish, then shuts the database down.
While attempting client-server communication (after going through various forums), I am unable to perform the remote object's lookup on the client machine.
The errors I receive are ConnectIOException (caused by NoRouteToHostException), sometimes ConnectException, and sometimes others.
That is not the main question, though. The main concern is how I should set up the client and server platforms in terms of networking details, since I suspect that is what interferes with my connection.
My questions :-
How should I edit my /etc/hosts file on both the client side and the server side? The server's IP is 192.168.1.8 and the client's IP is 192.168.1.100. Should I add the system names to both files:
192.168.1.8 SERVER-1 # on the server side
192.168.1.100 CLIENT-1 # on the client side
Should I edit like this? Can this be one of the possible concerns? I just want to remove any doubts left over in getting the RMI communication to work!
Also, I am setting the server's hostname property using System.setProperty("java.rmi.server.hostname", "192.168.1.8"); on the server side. Should I do the same on the client side too?
I've read about setting the classpath while running the Java program on both the server side and the client side. I did this too, but again the same exceptions; no difference at all. I've read that since Java update 6u45, classpaths aren't necessary to include! Please throw some light on this too...
If I am missing something, Please enlighten about the same too. A brief idea/link to resources are most preferred.
You don't need any of this unless you have a problem. The most usual problem is the one described in the RMI FAQ #A.1, and editing the hosts file of the server or setting java.rmi.server.hostname in the server JVM is the solution to that.
'No route to host' is a network connectivity problem, not an RMI problem, and not one you'll solve with code or system property settings.
Setting the classpath has nothing to do with network problems.
Here is a server example which transfers a concrete class. This class must exist in both the server and client classpaths with the same structure.
Message:
public class MyMessage implements Serializable {

    private static final long serialVersionUID = -696658756914311143L;

    public String Title;
    public String Body;

    public MyMessage(String strTitle) {
        Title = strTitle;
        Body = "";
    }

    public MyMessage() {
        Title = "";
        Body = "";
    }
}
And here is the server code that gets a message and returns another message:
public class SimpleServer {

    public String ServerName;
    ServerRemoteObject mRemoteObject;

    public SimpleServer(String pServerName) {
        ServerName = pServerName;
    }

    public void bindYourself() {
        try {
            mRemoteObject = new ServerRemoteObject(this);
            java.rmi.registry.Registry iRegistry = LocateRegistry.getRegistry(RegistryContstants.RMIPort);
            iRegistry.rebind(RegistryContstants.CMName, mRemoteObject);
        } catch (Exception e) {
            e.printStackTrace();
            mRemoteObject = null;
        }
    }

    public MyMessage handleEvent(MyMessage mMessage) {
        MyMessage iMessage = new MyMessage();
        iMessage.Body = "Response body";
        iMessage.Title = "Response title";
        return iMessage;
    }

    public static void main(String[] server) {
        SimpleServer iServer = new SimpleServer("SERVER1");
        iServer.bindYourself();
        while (true) {
            try {
                Thread.sleep(10000);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
and here is the remote interface of server remote object:
public interface ISimpleServer extends java.rmi.Remote {
    public MyMessage doaction(MyMessage message) throws java.rmi.RemoteException;
}
All you need is to add the MyMessage class to both the server and client classpaths.
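Because MyMessage crosses the wire via Java serialization, both JVMs must load an identical class with a matching serialVersionUID. A self-contained round-trip check of that mechanism (the class is repeated inside this snippet only so it compiles on its own):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // Same shape as the MyMessage class above, repeated here so the
    // snippet is self-contained.
    public static class MyMessage implements Serializable {
        private static final long serialVersionUID = -696658756914311143L;
        public String Title = "";
        public String Body = "";
    }

    // Serialize and deserialize in memory, mimicking what RMI does when
    // the message travels between server and client.
    public static MyMessage roundTrip(MyMessage in) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(in);
            }
            try (ObjectInputStream oin =
                    new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                return (MyMessage) oin.readObject();
            }
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        MyMessage m = new MyMessage();
        m.Title = "hello";
        System.out.println(roundTrip(m).Title); // prints "hello"
    }
}
```

If the class on the client differed (for example, a changed serialVersionUID), the readObject step would fail with an InvalidClassException, which is why the answer stresses keeping the class identical on both sides.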