I have a Spring Boot web application running in production, deployed on Amazon Web Services. I have created two instances of my web application, but sometimes one or both instances stop automatically. I can't understand how the processes are being killed.
This issue is affecting many users' experience. I am using Spring Boot's default properties for Tomcat.
Check your application.properties for endpoints.shutdown.enabled=true.
Perhaps the shutdown endpoint is getting called by someone.
Also, scan your code for any System.exit() calls.
Also, the JVM may be crashing...
Is it stopping gracefully? Are there any logs?
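If it helps to confirm whether the shutdowns are graceful at all, here is a minimal diagnostic sketch (the class name is illustrative) that logs Spring's ContextClosedEvent:

import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextClosedEvent;
import org.springframework.stereotype.Component;

@Component
public class ShutdownLogger implements ApplicationListener<ContextClosedEvent> {
    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        // Logged only on an orderly shutdown (shutdown endpoint, SIGTERM, System.exit).
        // If the process disappears without this line, the JVM was killed abruptly
        // (e.g. OOM killer or a crash) rather than shut down gracefully.
        System.out.println("Spring application context is closing: "
                + event.getApplicationContext().getDisplayName());
    }
}

If this line never shows up but the process is gone, check the system logs (for example dmesg for the Linux OOM killer) rather than the application logs.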
Found the problem:
springBoot {
    mainClass = ''
    executable = true
    buildInfo()
}
The executable property needs to be changed to false.
I need to run a scheduler every hour that reads the DB and sends emails. I have deployed my Spring Boot app to Azure WebJobs as a scheduled trigger. The app is deployed and the scheduler is working fine, but I don't see the trigger calling my app. Am I doing it correctly?
Also, if my understanding is correct, when the trigger starts, won't it deploy my Spring Boot app again? Please let me know.
Since the amount of data from the DB is small, I have preferred not to use Spring Batch.
Also, if my understanding is correct, when the trigger starts, won't it deploy my Spring Boot app again?
Yes, you can't redeploy the Spring Boot application once the trigger has started.
The app is deployed and the scheduler is working fine, but I don't see the trigger calling my app. Am I doing it correctly?
You can use the Maven application plugin to deploy the Spring Boot application, as you are not using Spring Batch.
Also, here is the official Microsoft document where you can have a look at connecting the DB to the application and then running the scheduler.
You can also check this Spring Boot deployment documentation and this SO post with related discussions.
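As a rough sketch of the shape such a job can take (this is an assumption about your setup, not your actual code; Spring Boot 2.x shown, class name and run body are illustrative), the scheduled trigger simply launches a non-web Spring Boot app that does its work and exits:

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class EmailJobApplication implements CommandLineRunner {

    public static void main(String[] args) {
        // Run as a non-web app: no embedded server is started, so the process
        // exits once the work is done and the next hourly trigger starts it again.
        new SpringApplicationBuilder(EmailJobApplication.class)
                .web(WebApplicationType.NONE)
                .run(args);
    }

    @Override
    public void run(String... args) {
        // Read from the DB and send the emails here.
    }
}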
I've created a Spring Boot project and deployed it on a VM. I've added a command in local.rc that starts the Spring Boot application on reboot. I want to check whether the command got executed and the application is running. How do I do that?
There are two ways:
On the system level - you can run your project as a service, as documented in the Official documentation - Deployments. Then you can query the application status with service myapp status.
On the application level - include Spring Boot Actuator in your app and use Actuator endpoints such as /actuator/health, as per the Official documentation - Production Ready Endpoints. These endpoints can be exposed via HTTP or JMX.
Note: prior to Spring Boot 2.0, the Actuator endpoint is /health.
If it's a web project, it makes sense to include spring-boot-actuator (just add the dependency in Maven and start the microservice).
In this case, it will automatically expose the following endpoint (by default; it can actually be set up quite flexibly):
http://<HOST>:<PORT>/health
Just issue an HTTP GET request, and if you get a 200, it's up and running.
If using the Actuator is not an option (although it should really be your first bet), then you can merely telnet to <HOST>:<PORT>.
The rationale behind this is that the port is exposed and ready to listen for external connections only after the application context has actually started.
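For example, a minimal client-side check, assuming Java 11+ and the Spring Boot 2.x default path /actuator/health (replace host and port with your own):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HealthCheck {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/actuator/health"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // 200 means the context is up and the health indicators report UP.
        System.out.println(response.statusCode() == 200
                ? "Application is up: " + response.body()
                : "Application check failed with HTTP " + response.statusCode());
    }
}

In practice the same check is usually done with curl or a load-balancer health probe; the sketch just shows that a plain HTTP GET and a 200 status are all that is needed.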
I have a regular Java/Spring Batch job that runs every night to get data from one database and insert/update it in my project's database. This is working fine in the current setup, where it is deployed on Tomcat.
Now I need to separate it out and run it on an Azure WebJob. What will be a good approach?
Can I use Spring Boot for this purpose?
But I am not sure how it will work. I mean, I can create a JAR of my project (the job written using Spring Boot), copy it to an Azure WebJob and then have a batch file with "java -jar...", but:
wouldn't it be like running and deploying the Spring Boot app with its built-in web server, which will continue to run once I start it?
secondly, the next time the batch file is executed by the Azure WebJob as per the schedule I set, it will try to run the Spring Boot app again and I will probably get a bind exception, since the port is already in use from the first run.
I would appreciate it if somebody could help me with this.
Thank you.
wouldn't it be like running and deploying the Spring Boot app with its built-in web server, which will continue to run once I start it?
A Spring Boot app can be a non-web app; a good example is a Spring Boot batch app without a dependency on a servlet container.
You can create a Spring Boot app that runs your Spring Batch job and then stops when the job is finished, without the need to deploy it in an (embedded) Tomcat. You can find an example here: https://spring.io/guides/gs/batch-processing/
secondly, the next time the batch file is executed by the Azure WebJob as per the schedule I set, it will try to run the Spring Boot app again and I will probably get a bind exception, since the port is already in use from the first run.
Once you have a script that runs your app with java -jar mybatchapp.jar, you can use the Azure Scheduler to run your job whenever you want. Since your batch app does not contain/start an embedded servlet container, you won't get a port conflict.
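A minimal launcher sketch along those lines (assuming Spring Boot 2.x and no spring-boot-starter-web on the classpath, so no embedded servlet container starts; the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class BatchJobLauncher {

    public static void main(String[] args) {
        // Run the context, let the Spring Batch job execute, then exit the JVM,
        // propagating the job's exit code so a failed run is visible to the scheduler.
        System.exit(SpringApplication.exit(SpringApplication.run(BatchJobLauncher.class, args)));
    }
}

Exiting with the job's exit code keeps the scheduler aware of failed runs.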
I am having an issue where EC2MetaDataUtils.getItems is being invoked on application startup (a Spring Boot app). We do not use EC2, so the calls made to AWS to get metadata always fail; the application attempts to get this data 3 times, which adds around 15 seconds to the start time of the application.
I have been searching high and low for solutions, and I found a promising one that suggested the following: @EnableAutoConfiguration(exclude = { ContextResourceLoaderAutoConfiguration.class, ContextResourceLoaderConfiguration.class, ContextInstanceDataAutoConfiguration.class })
However, when I try to start up the application, it complains that ContextResourceLoaderConfiguration.class cannot be excluded as it is not auto-configuration; if I just exclude the other 2, the application still invokes EC2MetaDataUtils.
Has anyone experienced this in the past and managed to resolve it?
Thank you for your time.
Resolved with the following:
@EnableAutoConfiguration(exclude = {ContextInstanceDataAutoConfiguration.class, ContextStackAutoConfiguration.class, ContextResourceLoaderAutoConfiguration.class})
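In context, the exclusion sits on the main application class. A sketch, assuming the three classes come from Spring Cloud AWS under org.springframework.cloud.aws.autoconfigure.context and with an illustrative class name (with @SpringBootApplication the exclude attribute can be used directly instead of a separate @EnableAutoConfiguration):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.aws.autoconfigure.context.ContextInstanceDataAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextResourceLoaderAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration;

@SpringBootApplication(exclude = {
        ContextInstanceDataAutoConfiguration.class,
        ContextStackAutoConfiguration.class,
        ContextResourceLoaderAutoConfiguration.class })
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}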
When running a Spring Boot application with AWS dependencies, stack auto-configuration is invoked; you need to disable it.
Add the following to application.yml:
cloud.aws.stack.auto: false
A Spring Boot application should not make any calls to EC2 on its own. This means you are using some AWS-specific library/component/whatever, and that library makes the call on startup.
Please check your dependencies and context configuration. This has nothing to do with Spring Boot itself; it is something in your custom dependencies/components.
If you're not using EC2, you can try removing the spring-cloud-aws* libraries from your dependencies.
You can use Spring profiles to differentiate between a cloud profile and a default profile. For the cloud profile, you can use the spring-cloud-aws artifact to get metadata about the EC2 instance (which requires EC2 read access via an attached IAM role). For the default profile, you don't need to worry about the cloud environment; disable the cloud configuration properties so they don't cause an issue at application startup.
I have created a Spring Boot application and I am running it as an init.d service.
Tutorial Followed : https://docs.spring.io/spring-boot/docs/current/reference/html/deployment-install.html#deployment-initd-service
I have multiple symlinks created on the same machine, like worker-node1, worker-node2, worker-node3, worker-node4, etc., so that I can run multiple instances of the same application on the same machine as independent processes. Is there any way to get the symlink name inside the Spring Boot application, so that I can report to the job management server which worker node picked up a task?
Any help in this regard would be appreciated.