I have just taken over maintenance of an app that has this as its only test: (Yes I know)
@ActiveProfiles("test")
@RunWith(SpringRunner.class)
@RequiredArgsConstructor
@SpringBootTest(classes = App.class)
@MockBean(ElasticSearchIndexService.class)
public class AppStartTest {

    @Test
    public void contextLoads() {
        final var app = new App();
        assertNotNull("the application context should have loaded.", app);
    }
}
Apart from the fact that there is no other automated testing in place, is this a good way to test whether a Spring Boot application loads its context? I would have thought that a simple
assertTrue(true); in the test would suffice, as the context should be loaded no matter what. Why would I want to create another copy of the application? (I sadly could not find anything related to this in my Google searches.)
There is also the fact that it has both @RunWith(SpringRunner.class) and @SpringBootTest. The test currently "runs" properly, but I would expect this to lead to some unexpected behaviour, does it not? I found this SO answer talking about it, but it does not go into depth on why (or whether) one should ever use both annotations.
Lastly, I already removed the @RequiredArgsConstructor because I don't really know why it is there in the first place. In my humble opinion it does not serve a purpose.
I am not certain this fits SO, but I was rather curious, as I consider myself somewhat of a beginner Spring developer, and maybe the previous dev knew more than me.
is this a good way to test if a Spring boot application loads its context?
@M.Deinum already answered this
No it isn't. You should @Autowire your App or rather ApplicationContext and check that.
but for anyone looking for a code example it would look like this:
@Autowired
ApplicationContext applicationContext;

@Test
public void contextLoads() {
    assertNotNull(applicationContext);
}
This checks that the ApplicationContext was indeed initialized.
There is also the fact that it has both @RunWith(SpringRunner.class) and @SpringBootTest.
This is a remnant of the test initially using JUnit 4, where this was indeed necessary. With JUnit 5, which I am using, the @RunWith(SpringRunner.class) can be left out.
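For reference, with JUnit 5 (Jupiter) the whole test can then be trimmed down to something like this (a sketch; note that the Jupiter assertNotNull takes the message as the last argument, the reverse of JUnit 4):

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.ApplicationContext;

@SpringBootTest
class AppStartTest {

    @Autowired
    private ApplicationContext applicationContext;

    @Test
    void contextLoads() {
        // If the context fails to start, the test fails before this assertion is even reached.
        assertNotNull(applicationContext, "the application context should have loaded");
    }
}
```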
I still am unsure why the @RequiredArgsConstructor annotation was ever needed, but the test runs with and without it, so I have removed it.
I am building a Spring WebSocket app and I am facing the following issue.
When I run the app using IntelliJ, everything is fine and the app starts up just fine.
When I build the fat jar with the Spring Boot Maven plugin and start the app using java -jar, the app fails to start with the following error:
Failed to start bean 'subProtocolWebSocketHandler'; nested exception is java.lang.IllegalArgumentException: No handlers
at org.springframework.web.socket.messaging.SubProtocolWebSocketHandler:start()
My spring web socket config looks like this
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    private WebSocketMessageBrokerStats webSocketMessageBrokerStats;

    // webSocketsProperties and heartBeatScheduler() are defined elsewhere in the real class
    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic")
                .setHeartbeatValue(new long[]{webSocketsProperties.getClientHeartbeatsSecs() * 1000,
                        webSocketsProperties.getServerHeartbeatsSecs() * 1000})
                .setTaskScheduler(heartBeatScheduler());
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/gs-guide-websocket").setAllowedOrigins("*").withSockJS();
    }

    @Autowired
    public void setWebSocketMessageBrokerStats(WebSocketMessageBrokerStats webSocketMessageBrokerStats) {
        this.webSocketMessageBrokerStats = webSocketMessageBrokerStats;
    }
}
The reason the above error happens is that, when I run the app from the jar, the method

@Autowired(required = false)
public void setConfigurers(List<WebSocketMessageBrokerConfigurer> configurers) {
    if (!CollectionUtils.isEmpty(configurers)) {
        this.configurers.addAll(configurers);
    }
}
inside DelegatingWebSocketMessageBrokerConfiguration, which is supposed to autowire my WebSocketConfig, is invoked after the

@Override
protected void registerStompEndpoints(StompEndpointRegistry registry) {
    for (WebSocketMessageBrokerConfigurer configurer : this.configurers) {
        configurer.registerStompEndpoints(registry);
    }
}

in DelegatingWebSocketMessageBrokerConfiguration, which causes the "No handlers" error. When I start the app through IntelliJ, this happens in reverse and everything is fine.
Does anyone have any idea why this is happening and what might be causing it?
Is there any chance that classpath loading happens in a different order in a jar vs. in IntelliJ, and that this confuses Spring?
EDIT
My WebSocketConfig class is slightly different from what I originally put above: I am autowiring WebSocketMessageBrokerStats into it with setter injection. I have updated the code above. The reason I didn't put this in my initial question is that I thought it was insignificant. But it is not. The answer is below...
Thanks a lot in advance
(let me know if you want more technical details from my side)
Nick
So after playing around with my code, I figured out that the issue is the injection of the WebSocketMessageBrokerStats bean. Apparently this causes the WebSocketConfig bean (which is a special type of config, since it implements WebSocketMessageBrokerConfigurer) to be ready at a later stage of the Spring context initialisation, leaving List<WebSocketMessageBrokerConfigurer> configurers empty when it is checked by registerStompEndpoints().
So the solution was to create a second configuration class and move the WebSocketMessageBrokerStats bean and all the operations on it into the new config class.
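A sketch of what that second configuration class could look like (the class name and the stats tweak are illustrative, not the poster's actual code; the point is that WebSocketConfig itself no longer depends on WebSocketMessageBrokerStats):

```java
import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.config.WebSocketMessageBrokerStats;

// Hypothetical companion config: by keeping the stats bean out of WebSocketConfig,
// WebSocketConfig is ready in time for DelegatingWebSocketMessageBrokerConfiguration
// to collect it together with the other WebSocketMessageBrokerConfigurer beans.
@Configuration
public class WebSocketStatsConfig {

    @Autowired
    private WebSocketMessageBrokerStats webSocketMessageBrokerStats;

    @PostConstruct
    public void configureStats() {
        // whatever operations were previously done on the stats bean, e.g.:
        webSocketMessageBrokerStats.setLoggingPeriod(60_000);
    }
}
```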
The above fixes the jar file and I am able to run it with java -jar; however, I have no idea how IntelliJ was able to run the app successfully without the fix.
I have a Spring application that I am trying to test with embedded Redis. So I created a component like the one below to initialize Redis before the tests and kill it afterwards.
@Component
public class EmbededRedis {

    @Value("${spring.redis.port}")
    private int redisPort;

    private RedisServer redisServer;

    @PostConstruct
    public void startRedis() throws IOException {
        redisServer = new RedisServer(redisPort);
        redisServer.start();
    }

    @PreDestroy
    public void stopRedis() {
        redisServer.stop();
    }
}
But now I am facing a weird issue. Because Spring caches the context, @PreDestroy doesn't get called every time after my test is executed, but for some reason @PostConstruct does get called, and EmbededRedis tries to start the already-running Redis server again and again, which is creating issues in the execution.
Is there a way to handle this situation by any mean?
Update
This is how I primarily define my tests:

@SpringBootTest(classes = {SpringApplication.class})
@ActiveProfiles("test")
public class RedisApplicationTest {
Ditch the class and write a @Configuration class which exposes RedisServer as a bean.
@Configuration
public class EmbeddedRedisConfiguration {

    @Bean(initMethod = "start", destroyMethod = "stop")
    public RedisServer embeddedRedisServer(@Value("${spring.redis.port}") int port) {
        return new RedisServer(port);
    }
}
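The test class can then pull this configuration in explicitly, for example via @Import (a sketch, reusing the test setup from the question):

```java
@SpringBootTest(classes = {SpringApplication.class})
@ActiveProfiles("test")
@Import(EmbeddedRedisConfiguration.class)
public class RedisApplicationTest {
    // With RedisServer managed as a bean, start/stop are tied to the lifecycle
    // of the (possibly cached) test context instead of being re-run per test class.
}
```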
So I debugged the context initialization, as suggested by @M. Deinum.
For me, the problem was that our application was mocking different classes in order to mix mocking with the Spring context.
Now, when you use mocks, MockitoContextInitializer also becomes part of your cache key, which results in a cache miss. The reason is that the classes under mock are obviously different for different test classes.
Given the situation, I preferred to go ahead with @DirtiesContext to invalidate the context after the test is done, so that I can reinitialize the context later for a different test.
Note that @DirtiesContext is generally recommended to be avoided, as it slows down your tests.
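For anyone going the same route, a minimal sketch of that (the class mode is an assumption; pick whichever variant matches your caching problem):

```java
@SpringBootTest(classes = {SpringApplication.class})
@ActiveProfiles("test")
// Mark the context dirty after this class, so it is rebuilt for the next test class.
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
public class RedisApplicationTest {
    // ...
}
```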
I am trying to build a DiscoveryClient and I want it to fire an event when there is a change to the routes. I am using

publisher.publishEvent(new InstanceRegisteredEvent<>(this, "serviceName"));

However, the event does not actually fire, even though it is the same object. I suspect it is because it is a different thread, but @Scheduled also runs from a different thread and it fires successfully.
The circumstance I hit was that I was using the ApplicationEventPublisher that was provided during the bootstrap auto-configuration phase of the application. Because I was using that, the events I published did not get propagated as expected.
To get around it, I had to replace the ApplicationEventPublisher that was put in during bootstrap with a later one, by adding another auto-configuration class that is executed during the normal auto-configuration phase and not in the bootstrap phase.
I also added ApplicationEventPublisherAware (though it is optional) to the class, in my case DockerSwarmDiscovery:
@Configuration
@ConditionalOnBean(DockerSwarmDiscovery.class)
@Slf4j
public class DockerSwarmDiscoveryWatchAutoConfiguration {

    @Autowired
    private DockerSwarmDiscovery dockerSwarmDiscovery;

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @PostConstruct
    public void injectPublisher() {
        dockerSwarmDiscovery.setApplicationEventPublisher(applicationEventPublisher);
    }
}
I am running a Spring Boot 2 application and added the Actuator Spring Boot starter dependency. I enabled all web endpoints and then called:
http://localhost:8080/actuator/metrics
result is:
{
"names": ["jdbc.connections.active",
"jdbc.connections.max",
"jdbc.connections.min",
"hikaricp.connections.idle",
"hikaricp.connections.pending",
"hikaricp.connections",
"hikaricp.connections.active",
"hikaricp.connections.creation",
"hikaricp.connections.max",
"hikaricp.connections.min",
"hikaricp.connections.usage",
"hikaricp.connections.timeout",
"hikaricp.connections.acquire"]
}
But I am missing all the JVM stats and other built-in metrics. What am I missing here? Everything I read said that these metrics should be available at all times.
Thanks for any hints.
I want to share my findings with you. The problem was a third-party library (Shiro) and my configuration for it. The bean loading of Micrometer got mixed up, which resulted in a too-late initialisation of a needed post-processing bean that configures the MeterRegistry (in my case the PrometheusMeterRegistry).
I don't know if it is wise to do the configuration of the registries via a different bean (a post processor), which can lead to situations like the one I had... the registries should configure themselves without relying on other beans that might get constructed too late.
In case this ever happens to anybody else:
I had a similar issue (except it wasn't Graphite but Prometheus, and I was not using Shiro).
Basically I only had Hikari and HTTP metrics, nothing else (no JVM metrics like GC).
I banged my head against several walls before finding the root cause: there was a Hikari auto-configuration post processor in Spring Boot Autoconfigure that eagerly retrieved a MeterRegistry, so all the metric beans didn't have time to initialize first.
And to my surprise, when looking at this code on GitHub, I didn't find it. So I bumped my spring-boot-starter-parent version from 2.0.4.RELEASE to 2.1.0.RELEASE and now everything works fine; I correctly get all the metrics.
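In Maven terms that bump is just the parent version (assuming the standard starter-parent setup):

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.0.RELEASE</version>
</parent>
```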
As I expected, this problem is caused by the loading order of the beans.
I used Shiro in the project.
Shiro's verification method used MyBatis to read data from the database.
I used @Autowired for MyBatis' Mapper files, which caused the Actuator metrics-related beans not to be assembled by Spring Boot (I don't know what the specific reason is).
So I disabled the automatic assembly of the Mapper file in favour of manual lookup.
The code is as follows:
public class SpringContextUtil implements ApplicationContextAware {

    private static ApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext)
            throws BeansException {
        SpringContextUtil.applicationContext = applicationContext;
    }

    public static ApplicationContext getApplicationContext() {
        return applicationContext;
    }

    public static Object getBean(String beanId) throws BeansException {
        return applicationContext.getBean(beanId);
    }
}
Then
UserMapper userMapper = (UserMapper) SpringContextUtil.getBean("userMapper");
UserModel userModel = userMapper.findUserByName(name);
The problem can be solved for the time being. This is just a stopgap measure, but at the moment I have no better way.
I could not find process_uptime_seconds in /actuator/prometheus, so I spent some time solving my problem.
My solution:
Rewrite HikariDataSourceMetricsPostProcessor and MeterRegistryPostProcessor.
The order of HikariDataSourceMetricsPostProcessor is Ordered.HIGHEST_PRECEDENCE + 1:
package org.springframework.boot.actuate.autoconfigure.metrics.jdbc;
...
class HikariDataSourceMetricsPostProcessor implements BeanPostProcessor, Ordered {
    ...
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE + 1;
    }
}
The order of MeterRegistryPostProcessor is Ordered.HIGHEST_PRECEDENCE:
package org.springframework.boot.actuate.autoconfigure.metrics;
...
import org.springframework.core.Ordered;

class MeterRegistryPostProcessor implements BeanPostProcessor, Ordered {
    ...
    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE;
    }
}
In my case I used Shiro and JPA to save the user session id. I found that the order of MeterRegistryPostProcessor and HikariDataSourceMetricsPostProcessor caused the problem: the MeterRegistry did not bind the metrics because of the loading order.
Maybe my solution will help you solve the problem.
I have a working sample with Spring Boot, Micrometer, and Graphite and confirmed the out-of-the-box MeterBinders are working as follows:
{
    "names": [
        "jvm.memory.max", "process.files.max", "jvm.gc.memory.promoted", "tomcat.cache.hit",
        "system.load.average.1m", "tomcat.cache.access", "jvm.memory.used", "jvm.gc.max.data.size",
        "jvm.gc.pause", "jvm.memory.committed", "system.cpu.count", "logback.events",
        "tomcat.global.sent", "jvm.buffer.memory.used", "tomcat.sessions.created", "jvm.threads.daemon",
        "system.cpu.usage", "jvm.gc.memory.allocated", "tomcat.global.request.max", "tomcat.global.request",
        "tomcat.sessions.expired", "jvm.threads.live", "jvm.threads.peak", "tomcat.global.received",
        "process.uptime", "tomcat.sessions.rejected", "process.cpu.usage", "tomcat.threads.config.max",
        "jvm.classes.loaded", "jvm.classes.unloaded", "tomcat.global.error", "tomcat.sessions.active.current",
        "tomcat.sessions.alive.max", "jvm.gc.live.data.size", "tomcat.servlet.request.max", "tomcat.threads.current",
        "tomcat.servlet.request", "process.files.open", "jvm.buffer.count", "jvm.buffer.total.capacity",
        "tomcat.sessions.active.max", "tomcat.threads.busy", "my.counter", "process.start.time",
        "tomcat.servlet.error"
    ]
}
Note that the sample is on the graphite branch, not the master branch.
If you can break the sample in the way you're seeing it break, I can take another look.