I get strange errors such as "can't get aws credentials" or "Unable to load credentials from ..."
Is there any way to explicitly set the s3a credentials in the Hadoop configuration?
As s3a is a relatively new implementation (it works correctly from Hadoop 2.7 onward), you need to set two sets of properties in the Hadoop configuration:
conf.set("fs.s3a.access.key", access_key);
conf.set("fs.s3a.secret.key", secret_key);
conf.set("fs.s3a.awsAccessKeyId", access_key);
conf.set("fs.s3a.awsSecretAccessKey", secret_key);
(conf is hadoop configuration)
The reason is that the property naming convention changed between versions, so to be on the safe side, set both.
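A minimal sketch for verifying the settings (the bucket name, key, and the use of environment variables are illustrative assumptions, not part of the original answer):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;

public class S3aCredentialsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Set both naming variants, as recommended above
        // (assumes both environment variables are set)
        String accessKey = System.getenv("AWS_ACCESS_KEY_ID");
        String secretKey = System.getenv("AWS_SECRET_ACCESS_KEY");
        conf.set("fs.s3a.access.key", accessKey);
        conf.set("fs.s3a.secret.key", secretKey);
        conf.set("fs.s3a.awsAccessKeyId", accessKey);
        conf.set("fs.s3a.awsSecretAccessKey", secretKey);
        // "my-bucket" is a placeholder; this call fails fast if the credentials are wrong
        FileSystem fs = FileSystem.get(new URI("s3a://my-bucket/"), conf);
        System.out.println(fs.exists(new Path("s3a://my-bucket/some-key")));
    }
}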
I need to run a Spring Boot based app locally. It uses the spring-cloud-starter-aws dependency.
The problem is that it always tries to connect to the EC2 metadata service. Setting the "cloud.aws.*" properties doesn't help.
I expect the default AWS credentials chain to be used, with credentials and region read in one of the AWS-preferred ways (e.g. the ~/.aws/config and ~/.aws/credentials files).
I tried to set the cloud.aws.credentials.useDefaultAwsCredentialsChain property, but spring-cloud-starter-aws doesn't care.
I found examples that, for some very strange reason, use a CloudFormation stack to run the app locally.
When I use the AWS SDK for Java, the default AWS chain is used without any issues - I don't need to do anything specific to run the application locally (locally it reads credentials from files, and on EC2 it uses the instance metadata service). But with Spring Boot it doesn't work out of the box, and I need to enable local running somehow.
I use version 2.2.2.RELEASE of Spring Boot and 2.2.1.RELEASE of Spring Cloud. I have a feeling they introduced a regression, because in previous versions it worked without problems.
Any ideas how to run the app locally?
Adding the following lines to the configuration helps:
cloud.aws.region.static=my region
cloud.aws.stack.auto=false
spring.autoconfigure.exclude=org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfiguration
So Spring uses the AWS default chain, but only for credentials; the AWS SDK uses it for the region and other configuration parameters too. So this is a Spring bug for sure.
It still gives a warning about no connection to the instance metadata service once during application start, but more or less this solution can be used for local running.
Without the last line excluding CloudWatchExportAutoConfiguration, there will be many exceptions in the stack trace when the app shuts down. I use CloudWatch metrics in my app.
I guess the rationale behind excluding the AWS auto-configuration is that it conflicts with the Boot actuator, but I'm not sure.
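If you prefer to keep the exclusion in code rather than in properties, the same auto-configuration class can be excluded on the application class (a sketch; the class name is taken from the property value above, the application class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.aws.autoconfigure.metrics.CloudWatchExportAutoConfiguration;

// Excluding via the annotation is equivalent to the spring.autoconfigure.exclude property
@SpringBootApplication(exclude = CloudWatchExportAutoConfiguration.class)
public class LocalApp {
    public static void main(String[] args) {
        SpringApplication.run(LocalApp.class, args);
    }
}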
I have a Java Spring Boot library, and it uses some configuration as below, with a ZooKeeper address for the load balancer.
<user:registry regProtocol="zookeeper" name="testZk" address="${zookeeper.address}"/>
zookeeper.address will be different between development and production environments.
Users of this library can include zookeeper.address in their cloud config properties based on the environment, but are there other ways, so that library users don't need to include it in their properties and the library somehow picks different properties per environment on the user's behalf?
Serving Plain Text will resolve the above problem.
http://cloud.spring.io/spring-cloud-static/spring-cloud-config/2.0.0.M5/single/spring-cloud-config.html#_serving_plain_text
Just define the environments you wish to support in the application properties, and on the user side activate the matching profile; it will work. An example is sketched below.
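For example, with profile-specific property files (a sketch; the addresses are illustrative placeholders):

# application-dev.properties
zookeeper.address=zk-dev.example.com:2181

# application-prod.properties
zookeeper.address=zk-prod.example.com:2181

Activating a profile (e.g. with -Dspring.profiles.active=prod) then selects the matching value for ${zookeeper.address}.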
I'm pretty new to Java, but I think they could use an application.properties file to override any environment-specific properties.
application.properties in spring
I'm new to Java and k8s, and I have some doubts about how to handle application configuration for my Java apps. I've got one Spring Boot app, and the other three use WildFly.
So they all have hardcoded application configuration, and when starting them they just use something like:
java -Dswarm.project.stage=development -jar foobar/target/foobar-swarm.jar
except for the Spring Boot app, which has an application.properties file containing the application configuration data.
So basically the three Java apps have two files baked in (which I know is a no-no):
- project-stages.yml
- standalone.xml
And when the developer wants to deploy to production, he uses:
java -Dswarm.project.stage=production -jar foobar/target/foobar-swarm.jar
And now we come to Kubernetes, which has three ways of dealing with application configuration data:
1.) Env variables
2.) Config maps
3.) Secrets
I was thinking of using configmaps instead of env variables because they have more benefits.
So, the developer gave me the possibility of overriding those hardcoded values with an external file: -Dsystem.properties.file=/var/foobar/environment.properties
But I'm still overriding hardcoded files with an external file, and I'm not happy with that solution!
So I'm basically looking for advice: can those hardcoded files be supplied externally and populated from ConfigMaps in k8s - what would be the best practice for handling config files in the world of k8s?
Tnx,
Tom
There are several questions in the post, but I can address only the one related to spring-boot.
The simplest and most convenient way of specifying configuration for a Spring Boot app is via its built-in profiles feature. As you already mentioned, you have application.properties. You can create similar files according to your use cases: application-production.properties, application-staging.properties, application-k8s.properties, etc.
Kubernetes deployment doesn't change this in any way.
You can control which configuration to pick by setting the SPRING_PROFILES_ACTIVE env variable from Kubernetes.
You might have something like this:
docker run -e SPRING_PROFILES_ACTIVE=k8s -d -p 0.0.0.0:8080:8080 \
--name=yourapp your_image_name bash -c "java -jar yourapp.jar"
It will pick the configuration from application-k8s.properties.
Configuration files support environment variables as well.
You can have placeholders like ${YOUR_DB} in your properties files, and Spring will automatically pick up the environment variable named YOUR_DB. This feature is convenient, for example, when your app pod must have its own db pod.
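A minimal illustration (the property name and connection string are assumptions, not from the original answer):

# application-k8s.properties
spring.datasource.url=jdbc:postgresql://${YOUR_DB}:5432/mydb

With YOUR_DB set in the pod spec, Spring resolves the placeholder at startup.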
If I got your question right, you are asking how to configure a Spring Boot application via a k8s ConfigMap. Yes, you can do that.
Create a Docker image with WORKDIR work_dir, in which you start the Spring Boot application, e.g. via java -jar /work_dir/app.jar
Create a ConfigMap
Run a container of the above-mentioned image within k8s
Mount the ConfigMap for the Spring Boot application.properties into the container as /work_dir/config/application.properties
On changes to the ConfigMap, the file within the container gets updated. You have to restart the Spring Boot application for your changes to take effect. A sketch of the manifests follows below.
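A minimal sketch of the two manifests (all names, the image, and the properties content are illustrative assumptions). It relies on Spring Boot reading config/application.properties relative to the working directory:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.properties: |
    server.port=8080
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: your_image_name
    command: ["java", "-jar", "/work_dir/app.jar"]
    volumeMounts:
    - name: config
      # With WORKDIR /work_dir, this lands at config/application.properties
      # relative to the working directory, where Spring Boot looks by default
      mountPath: /work_dir/config
  volumes:
  - name: config
    configMap:
      name: app-config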
I am using WebLogic 12.1.2, which contains 1 admin and 3 managed servers (under 1 cluster) on the same machine. I want to store some data in a distributed cache which must be available among all the managed servers inside the cluster.
So I am using the Oracle Coherence feature for this purpose.
Whenever I start coherence.sh, it always gives the error saying:
"Could not load cache configuration resource file://coherence-cache-config.xml".
I have done some analysis and came to know that it always takes its configuration from the coherence.jar which comes with WebLogic, even after changing the PRE_CLASSPATH to my custom coherence.jar; it always points to the WebLogic jar. Due to this I am not able to override "coherence-cache-config.xml" and "tangosol-coherence-override.xml".
Can you please suggest something? How can I override the default WebLogic coherence.jar resources with my custom ones?
According to the Coherence documentation, by default Coherence will use the first coherence-cache-config.xml file found on the classpath. But in your case it tries to load it from the file://coherence-cache-config.xml location. This means that the location of this file is overridden somewhere (either in the tangosol-coherence-override.xml file or through the tangosol.coherence.cacheconfig system property).
What is more, file://coherence-cache-config.xml does not seem to be a valid file URI. When I try to do:
new File(new URI("file://coherence-cache-config.xml"))
it results in the exception
java.lang.IllegalArgumentException: URI has an authority component
So, make sure you properly set the coherence-cache-config.xml file location in the tangosol-coherence-override.xml file or through the tangosol.coherence.cacheconfig system property (the documentation explains in detail how to do it).
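For instance, passing an absolute path through the system property avoids the malformed URI entirely (the path and classpath are illustrative placeholders; DefaultCacheServer is the standard Coherence server entry point):

java -Dtangosol.coherence.cacheconfig=/opt/app/conf/coherence-cache-config.xml -cp yourClasspath com.tangosol.net.DefaultCacheServer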
I have a small Spring Boot app, using Spring Cloud AWS (1.0.0.RELEASE) to access an SQS queue. It is being deployed on an EC2 instance with an Instance Profile set. It appears that the AWS side of things is working, as I can access both relevant metadata links: iam/info and iam/security-credentials/role-name, and they do contain the correct information. Just to be sure, I've used the aws command-line utility (aws sqs list-queues) and it does work, so I guess the setup is OK. However, when the app starts, it reads application.properties (which contains the line cloud.aws.credentials.instanceProfile=true), then drops the following warning: com.amazonaws.util.EC2MetadataUtils: Unable to retrieve the requested metadata, and finally throws the following exception:
Caused by: com.amazonaws.AmazonServiceException: The security token included in the request is invalid. (Service: AmazonSQS; Status Code: 403; Error Code: InvalidClientTokenId; Request ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1071)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:719)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:454)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:294)
at com.amazonaws.services.sqs.AmazonSQSClient.invoke(AmazonSQSClient.java:2291)
at com.amazonaws.services.sqs.AmazonSQSClient.getQueueUrl(AmazonSQSClient.java:516)
at com.amazonaws.services.sqs.buffered.AmazonSQSBufferedAsyncClient.getQueueUrl(AmazonSQSBufferedAsyncClient.java:278)
at org.springframework.cloud.aws.messaging.support.destination.DynamicQueueUrlDestinationResolver.resolveDestination(DynamicQueueUrlDestinationResolver.java:78)
at org.springframework.cloud.aws.messaging.support.destination.DynamicQueueUrlDestinationResolver.resolveDestination(DynamicQueueUrlDestinationResolver.java:37)
at org.springframework.messaging.core.CachingDestinationResolverProxy.resolveDestination(CachingDestinationResolverProxy.java:88)
at org.springframework.cloud.aws.messaging.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:295)
at org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer.start(SimpleMessageListenerContainer.java:38)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:173)
... 17 common frames omitted
...which means that for some reason Spring Cloud AWS is not picking up the instance profile credentials. I've enabled debug logging on com.amazonaws.request, and it appears that the request is sent without the access key and secret key.
DEBUG --- com.amazonaws.request : Sending Request: POST https://sqs.eu-west-1.amazonaws.com / Parameters: (Action: GetQueueUrl, Version: 2012-11-05, QueueName: xxxxxxxxxxxxx, ) Headers: (User-Agent: aws-sdk-java/1.9.3 Linux/3.14.35-28.38.amzn1.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.45-b02/1.8.0_45 AmazonSQSBufferedAsyncClient/1.9.3, )
Does anybody have any idea what I am missing, or at least any hints on how to debug this further?
EDIT: After going through the spring-cloud-aws code a bit, I've made some progress. The application.properties file bundled with the jar had some text values for accessKey and secretKey. My customized application.properties didn't have those properties, which probably caused Spring to use the values in the bundled file as defaults. I've included them with empty values, which changed the exception to com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain. It appears that the AWS SDK is configured with the DefaultProviderChain, yet it still fails to pick up the instance profile credentials.
The solution to this problem comes from two distinct facts.
Instance profile credentials are used if and only if application.properties has the instanceProfile property set to true and accessKey set to null (see ContextCredentialsAutoConfiguration).
Even if you provide your custom application.properties file, Spring is still going to read the application.properties file bundled with the app jar (if it exists). In that case, the properties from both files combine to form the execution environment. I suspect that the bundled file is parsed first and the custom one second, overriding any property present in the bundled file.
In my case, the bundled application.properties had accessKey and secretKey placeholders (with phony values) which were filled in by a developer whenever he wanted to test outside of the EC2 environment. That made accessKey non-null and therefore excluded the instance profile path. I just removed the application.properties file from the jar, and that solved the problem.
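If you build with Maven, one way to keep the file out of the jar is an exclude in the jar plugin (a sketch; your build may differ, and a Spring Boot repackaged jar may need a different approach):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- keep the bundled application.properties out of the packaged jar -->
      <exclude>**/application.properties</exclude>
    </excludes>
  </configuration>
</plugin>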
cloud:
  aws:
    credentials:
      accessKey:
      secretKey:
      instanceProfile: true
      useDefaultAwsCredentialsChain: true
This would do the trick if you were using the latest (2.X.X) Spring Cloud AWS.