I have a Spring Boot project in which the Spring scheduler works fine on my local system; I verified this by adding a logger to the scheduler method, which runs on a cron expression.
Problem:
When the same Spring Boot app is deployed to PCF (Pivotal Cloud Foundry), the scheduler is not enabled: no logs are printed, and no scheduler-related errors show up in the PCF logs.
Meanwhile, if I hit any controller through Postman, logs are printed for that request, but not for the scheduler.
I also provided the cron expression value (e.g. every minute) in the app's PCF environment variables and restarted the app, but that didn't help.
Can anyone suggest something for this issue?
Thank you in advance for your valuable time!
When you deploy your application to a PCF space, the platform takes your code, scans it against the available buildpacks (unless you explicitly specify one), and then creates a container image, also known as a droplet, from your code, the buildpack, and the base container image.
If you use Cloud Config to manage the configuration for the Spring Boot scheduler, auto-configuration loads the properties at runtime. In some cases, due to differences in OS configuration and the runtime loading of these properties, there can be a mismatch between the timezone your app expects and the one the server uses.
Most of the time this can be resolved by explicitly defining the timezone in the PCF manifest file or on the command line when pushing the application, e.g. by setting the TZ environment variable:
cf set-env {app-name} TZ 'America/Chicago'
OR by adding the following to the manifest.yml file:
env:
  TZ: America/Chicago
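If the timezone mismatch is the culprit, the zone can also be pinned at the trigger level rather than for the whole container. Here is a minimal sketch, assuming @EnableScheduling is already on a configuration class; the class name, cron default, and property name are illustrative:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class HeartbeatScheduler {

    private static final Logger log = LoggerFactory.getLogger(HeartbeatScheduler.class);

    // The optional 'zone' attribute ties the cron trigger to an explicit
    // timezone, independent of the container's default.
    @Scheduled(cron = "${scheduler.cron:0 * * * * *}", zone = "America/Chicago")
    public void logHeartbeat() {
        log.info("Scheduler fired at {}", java.time.ZonedDateTime.now());
    }
}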
I need to run a Spring Boot based app locally. It uses the spring-cloud-starter-aws dependency.
The problem is that it always tries to connect to the EC2 metadata service. Setting the "cloud.aws.*" properties doesn't help.
I expect the default AWS credentials chain to be used, with credentials and region read in one of the AWS-preferred ways (e.g. the ~/.aws/config and ~/.aws/credentials files).
I tried setting the cloud.aws.credentials.useDefaultAwsCredentialsChain property, but spring-cloud-starter-aws doesn't honor it.
I found examples that, for some strange reason, use a CloudFormation stack to run the app locally.
When I use the AWS SDK for Java directly, the default AWS chain is used without any issues; I don't need to do anything specific to run the application locally (locally it reads credentials from files, and on EC2 it uses the instance metadata service). But with Spring Boot it doesn't work out of the box, and I need to enable local running somehow.
I use Spring Boot 2.2.2.RELEASE and Spring Cloud 2.2.1.RELEASE. I have a feeling a regression was introduced, because in previous versions this worked without problems.
Any ideas how to run the app locally?
Adding the following lines to the configuration helps:
cloud.aws.region.static=my region
cloud.aws.stack.auto=false
spring.autoconfigure.exclude=org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfiguration
So Spring uses the AWS default chain, but only for credentials; the AWS SDK uses it for the region and other configuration parameters too. So this is definitely a Spring bug.
It still logs a warning once during application start about being unable to connect to the instance metadata service, but this solution can more or less be used for local runs.
Without the last line excluding CloudWatchExportAutoConfiguration, there will be many exceptions in the stack trace when the app shuts down. I use CloudWatch metrics in my app.
I guess the rationale behind excluding the AWS auto-configuration is that it conflicts with the Boot actuator, but I'm not sure.
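For reference, the same exclusion can be expressed with an annotation instead of the property; a minimal sketch, where the class name LocalApplication is illustrative and the excluded class is the one named in the properties above:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfiguration;

// Annotation-based equivalent of the spring.autoconfigure.exclude property.
@SpringBootApplication(exclude = CloudWatchExportAutoConfiguration.class)
public class LocalApplication {

    public static void main(String[] args) {
        SpringApplication.run(LocalApplication.class, args);
    }
}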
We have a Spring Boot application running in PCF, and it reads the PCF environment variables (CF_INSTANCE_INDEX, CF_INSTANCE_ADDR, ...) in the application. Based on those variables, we are trying to implement the logic for a scheduler. While this scheduler is running, the variables' values could change. Is there a way to refresh/reload beans that hold env values at runtime?
We used the @RefreshScope annotation on the config properties bean:
@Configuration
@RefreshScope
public class PcfEnvProperties {

    @Value("${CF_INSTANCE_INDEX}")
    private int instanceIndex;

    @Value("${CF_INSTANCE_ADDR}")
    private String instanceAddr;

    ...
}
and refresh it using:
context.getBean(RefreshScope.class).refresh("PcfEnvProperties");
PcfEnvProperties pcfEnv = context.getBean(PcfEnvProperties.class);
But it is not loading the recently changed env variables into the running application. Any ideas on how to accomplish this?
You can use Spring Cloud Config Server in combination with Spring Boot Actuator to expose an endpoint in your service that refreshes the application's properties on the fly. You could set up your scheduler to hit this endpoint on a timer, or as needed.
Here is one tutorial I found that seems pretty straightforward: https://jeroenbellen.com/manage-and-reload-spring-application-properties-on-the-fly/
You may have to adjust the setup depending on how your platform is configured, but I believe it should do what you want. We have deployed many Java web services on our PCF platform using this Actuator/Config Server approach, and we can just call the refresh endpoint to pull in (and, where necessary, overwrite) the new properties and values from the Config Server. The response also gives you a list of the property names and values that changed.
I'm not familiar with the specific property values you mentioned, but as long as they are normally part of Spring's ApplicationContext (where properties are usually found), you should be able to pull in changed values using this approach with Spring's Cloud Config Server and Actuator libraries.
Hope this helps
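As a rough illustration of "hit this endpoint on a timer", here is a minimal sketch. It assumes spring-boot-starter-actuator and Spring Cloud's context library are on the classpath and that management.endpoints.web.exposure.include=refresh is set; the port, property name, and class name are illustrative:

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class RefreshTrigger {

    private final RestTemplate rest = new RestTemplate();

    // POST /actuator/refresh re-binds @RefreshScope beans; the response body
    // lists the property keys whose values changed.
    @Scheduled(fixedDelayString = "${refresh.interval.ms:60000}")
    public void refreshProperties() {
        rest.postForObject("http://localhost:8080/actuator/refresh", null, String.class);
    }
}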
I have a regular Java/Spring Batch job that runs every night to fetch data from one database and insert/update it in my project's database. This works fine in the current setup, where it is deployed on Tomcat.
Now I need to separate it out and run it as an Azure WebJob. What would be a good approach?
Can I use Spring Boot for this purpose?
But I am not sure how it would work. I could create a JAR of my project (the job written using Spring Boot), copy it to an Azure WebJob, and then have a batch file with "java -jar ...", but:
wouldn't that be like running and deploying the Spring Boot app with its built-in web server, which would keep running after I start it?
secondly, the next time the batch file is executed by the Azure WebJob on the schedule I set, it will try to run the Spring Boot app again, and I will probably get a bind exception since the port is already in use from the first run.
I would appreciate it if somebody could help me with this.
Thank you.
wouldn't that be like running and deploying the Spring Boot app with its built-in web server, which would keep running after I start it?
A Spring Boot app can be a non-web app; a good example is a Spring Boot batch app with no dependency on a servlet container.
You can create a Spring Boot app that runs your Spring Batch job and then stops when the job is finished, without the need to deploy it to an (embedded) Tomcat. You can find an example here: https://spring.io/guides/gs/batch-processing/
secondly, the next time the batch file is executed by the Azure WebJob on the schedule I set, it will try to run the Spring Boot app again, and I will probably get a bind exception since the port is already in use from the first run.
Once you have a script that runs your app with java -jar mybatchapp.jar, you can use the Azure Scheduler to run your job whenever you want. Since your batch app does not contain/start an embedded servlet container, you won't get a port conflict.
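Putting both points together, a minimal sketch of such a non-web launcher might look as follows, assuming the Spring Batch job itself is configured elsewhere; the class name is illustrative:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class BatchJobApplication {

    public static void main(String[] args) {
        // WebApplicationType.NONE prevents any embedded server from starting,
        // so repeated WebJob runs cannot collide on a port; the JVM exits
        // with the job's exit code once the run completes.
        System.exit(SpringApplication.exit(
                new SpringApplicationBuilder(BatchJobApplication.class)
                        .web(WebApplicationType.NONE)
                        .run(args)));
    }
}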
I am having an issue where EC2MetaDataUtils.getItems is invoked on application startup (a Spring Boot app). We do not use EC2, so the calls made to AWS to fetch the metadata always fail; the application attempts to get this data 3 times, which adds around 15 seconds to the application's start time.
I have been searching high and low for solutions, and I found a promising one suggesting the following: @EnableAutoConfiguration(exclude = { ContextResourceLoaderAutoConfiguration.class, ContextResourceLoaderConfiguration.class, ContextInstanceDataAutoConfiguration.class })
However, when I try to start the application, it complains that ContextResourceLoaderConfiguration.class cannot be excluded because it is not an auto-configuration; if I exclude just the other two, the application still invokes the MetaDataUtils.
Has anyone experienced this in the past and managed to resolve it?
Thank you for your time.
Resolved with the following:
@EnableAutoConfiguration(exclude = {ContextInstanceDataAutoConfiguration.class, ContextStackAutoConfiguration.class, ContextResourceLoaderAutoConfiguration.class})
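Spelled out as a complete main class, that looks roughly like this; the import package names assume spring-cloud-aws 2.x and are worth verifying against your version:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.aws.autoconfigure.context.ContextInstanceDataAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextResourceLoaderAutoConfiguration;
import org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration;

// Excludes the three auto-configurations that probe the EC2 metadata service.
@SpringBootApplication(exclude = {
        ContextInstanceDataAutoConfiguration.class,
        ContextStackAutoConfiguration.class,
        ContextResourceLoaderAutoConfiguration.class
})
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}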
When running a Spring Boot application with AWS dependencies, stack auto-configuration is invoked; you need to disable it.
Add the following to application.yml:
cloud.aws.stack.auto: false
A Spring Boot application by itself should not make any calls to EC2. This means you are using some AWS-specific library/component/whatever, and that library makes this call on startup.
Please check your dependencies and context configuration. This has nothing to do with Spring Boot itself; it is something in your custom dependencies/components.
If you're not using EC2, you can try removing the spring-cloud-aws* libraries from your dependencies.
You can use Spring profiles to differentiate between cloud and default environments. In the cloud profile, you can use the spring-cloud-aws artifact to get metadata about the EC2 instance, which requires EC2 read access via an attached IAM role; in the default profile, you don't need to worry about the cloud environment and can disable the cloud configuration properties, so they don't cause an issue at application startup.
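A minimal sketch of that profile split; the class and profile names are illustrative:

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Loaded only when the app runs with --spring.profiles.active=cloud, so a
// default (local) run never touches the EC2 metadata service.
@Configuration
@Profile("cloud")
public class AwsCloudConfig {
    // Beans that depend on EC2 instance metadata go here.
}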
I have taken a simple standalone client example from the examples provided in the Quartz 1.6 download bundle to run a job. But at startup of the WebLogic server, the job runs twice. Is there any setting or anything else required to run the job only once at server startup? Any code snippet or external link would be much appreciated.
Thanks
I had a similar issue in Tomcat. In my case, my application was bound both to the root path and to the /appName path. So, for instance, app1.jar was bound to both / and /app1. As a result, the app was initialized twice in the server, and hence Quartz was triggered twice.