I have created the same function (a simple hello world) in Python and Java, following the guide. Using the same role, the Python version works as expected, generating the log stream entry and returning "ok".
from __future__ import print_function
import json
print('Loading function')
def lambda_handler(event, context):
    return "ok"
However, the Java version does not log anything with the same role and settings.
package com.streambright;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class Dbmgmt implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object in, Context ctx) {
        System.out.println("test");
        ctx.getLogger().log("o hai");
        return "ok";
    }
}
I am wondering why it does not put anything into the CloudWatch log groups. Has anybody had the same experience with Java? Is there a fix or workaround for this?
Also posted on the AWS forum: https://forums.aws.amazon.com/thread.jspa?threadID=254747
Found the root cause of this. The role policy did not allow the correct log resource to be created, and the function silently failed to log. The AWS console was not much help in identifying the issue; I only ran into it accidentally during an audit. After changing the resource to * the Lambda function was able to create the log resources.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "kms:Decrypt"
      ],
      "Resource": "*"
    }
  ]
}
I am trying to run simulatePrincipalPolicy through the Java SDK and getting incorrect results.
I have a policy like this attached to the role 'Myrole':
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "id",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
Java code:
SimulatePrincipalPolicyRequest simulatePrincipalPolicyRequest = new SimulatePrincipalPolicyRequest();
simulatePrincipalPolicyRequest.setPolicySourceArn("arn:aws:iam::123456789012:role/Myrole");
simulatePrincipalPolicyRequest.withActionNames("ec2:DescribeTags");
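The snippet above only builds the request; presumably the call that produced the output below looked something like this minimal sketch (the client construction and the result loop are assumptions, not from the original post):

import com.amazonaws.services.identitymanagement.AmazonIdentityManagement;
import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClientBuilder;
import com.amazonaws.services.identitymanagement.model.EvaluationResult;
import com.amazonaws.services.identitymanagement.model.SimulatePrincipalPolicyRequest;
import com.amazonaws.services.identitymanagement.model.SimulatePrincipalPolicyResult;

// Build an IAM client from the default credential chain (assumed setup).
AmazonIdentityManagement iam = AmazonIdentityManagementClientBuilder.defaultClient();

SimulatePrincipalPolicyRequest request = new SimulatePrincipalPolicyRequest()
        .withPolicySourceArn("arn:aws:iam::123456789012:role/Myrole")
        .withActionNames("ec2:DescribeTags");

// Each simulated action yields one EvaluationResult carrying the allow/deny decision.
SimulatePrincipalPolicyResult result = iam.simulatePrincipalPolicy(request);
for (EvaluationResult evaluation : result.getEvaluationResults()) {
    System.out.println(evaluation.getEvalActionName() + " -> " + evaluation.getEvalDecision());
}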
Result:
{
EvalActionName: ec2:DescribeTags
EvalResourceName: *
EvalDecision: implicitDeny
MatchedStatements: []
MissingContextValues: []
OrganizationsDecisionDetail: {AllowedByOrganizations: false}
EvalDecisionDetails: {}
ResourceSpecificResults: []
}
The response is incorrect because when I try to perform that action, I am able to do so.
I've run into a similar situation by calling simulate-principal-policy directly:
AllowedByOrganizations: false indicates there is an organization-wide service control policy (SCP) applied, which the simulator interprets as denying access.
The issue in my case was that some of the Org SCPs deny all access to some regions but not to others.
The simulator seems unable to distinguish one region from another, or at least I did not find a way to overcome this problem.
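One thing that may be worth trying (I have not verified that the simulator honors it for SCP evaluation, so treat this as an assumption) is passing the region explicitly as a context entry on the request, since region-scoped SCPs typically key on aws:RequestedRegion:

import com.amazonaws.services.identitymanagement.model.ContextEntry;
import com.amazonaws.services.identitymanagement.model.ContextKeyTypeEnum;

// Supply aws:RequestedRegion so region-scoped conditions can be evaluated
// ("eu-west-1" is just a placeholder value).
ContextEntry regionContext = new ContextEntry()
        .withContextKeyName("aws:RequestedRegion")
        .withContextKeyType(ContextKeyTypeEnum.String)
        .withContextKeyValues("eu-west-1");

simulatePrincipalPolicyRequest.withContextEntries(regionContext);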
I downloaded the XS_JSCRIPT14_10-70001363 package from the Service Marketplace.
Please suggest how to run this App Router login form on localhost.
I am trying with the npm start command, but I am getting a UAA service exception. How do I handle this from localhost?
When you download the approuter, whether via npm or the Service Marketplace, you have to provide two additional files for a basic setup inside the approuter directory (besides package.json, xs-app.json, etc.).
The default-services.json holds the variables that tell the approuter where to find the correct authentication server (e.g., XSUAA). You have to provide at least the clientid, clientsecret, and URL of the authorization server in this file, like this:
{
  "uaa": {
    "url": "http://my.uaa.server/",
    "clientid": "client-id",
    "clientsecret": "client-secret",
    "xsappname": "my-business-application"
  }
}
You can get these parameters, for example, after binding your application to an (empty) instance of XSUAA on SAP Cloud Platform, Cloud Foundry, where you can retrieve the values via cf env <appname> from the VCAP_SERVICES/xsuaa properties (they have exactly the same property names).
In addition, you require the default-env.json file, which holds at least the destination variable naming the backend microservice to which you want to forward the received JSON Web Token. It may look like this:
{
  "destinations": [
    {
      "name": "my-destination",
      "url": "http://localhost:1234",
      "forwardAuthToken": true
    }
  ]
}
Afterwards, inside the approuter directory you can simply run npm start, which serves the approuter by default at http://localhost:5000. It also writes helpful console output you can use to debug the parameters above.
EDIT: Turns out I was incorrect, it is apparently possible to run the approuter locally.
First of all, here is the documentation for the approuter: https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/01c5f9ba7d6847aaaf069d153b981b51.html
As far as I understood, you need to provide two files for the approuter to run locally, default-services.json and default-env.json (put them in the same directory as your package.json).
The default-services.json has a format like this:
{
  "uaa": {
    "url": "http://my.uaa.server/",
    "clientid": "client-id",
    "clientsecret": "client-secret",
    "xsappname": "my-business-application"
  }
}
The default-env.json is simply a JSON file holding the environment variables that the approuter needs to access, like so:
{
"VCAP_SERVICES": <env>,
...
}
Unfortunately, the documentation does not state which variables are required, so I cannot provide you with a working example.
Hope this helps! Should you manage to get this running, I'm sure others would appreciate it if you shared your knowledge here.
I have tried to upload a file to an AWS S3 bucket using 'TransferUtility'. I have registered the app on Mobile Hub and pasted the 'awsconfiguration.json' file in res/raw as mentioned in the docs.
I got this log:
I/AWSMobileClient: Welcome to AWS! You are connected successfully.
when I call this in 'onCreate()'
AWSMobileClient.getInstance().initialize(this).execute();
I get this error when I execute this code:
TransferUtility transferUtility = TransferUtility.builder()
        .context(getApplicationContext())
        .awsConfiguration(AWSMobileClient.getInstance().getConfiguration())
        .s3Client(new AmazonS3Client(AWSMobileClient.getInstance().getCredentialsProvider()))
        .build();

TransferObserver uploadObserver = transferUtility.upload(
        s3Bucket + "/" + s3Folder + "/" + fileName,
        new File(fileUrl));
ERROR:
E/AndroidRuntime: FATAL EXCEPTION: main
Process: xxxx.xxxxx.com, PID: 28698
java.lang.IllegalArgumentException: Failed to read S3TransferUtility
please check your setup or awsconfiguration.json file
at
com.amazonaws.mobileconnectors.s3.transferutility.TransferUtility$Builder.build(TransferUtility.java:248)
Can anybody help me out and point out what I am doing wrong here? Your effort is truly appreciated. Thank you.
The error you mentioned comes from here: https://github.com/aws/aws-sdk-android/blob/master/aws-android-sdk-s3/src/main/java/com/amazonaws/mobileconnectors/s3/transferutility/TransferUtility.java#L248
This error means that you have an awsconfiguration.json file, but it may be missing the S3TransferUtility block. Can you check whether the required block is present in the JSON file?
To elaborate on Karthikeyan's answer, a sample awsconfiguration.json looks like this:
{
  "Version": "1.0",
  "CredentialsProvider": {
    "CognitoIdentity": {
      "Default": {
        "PoolId": "COGNITO-IDENTITY-POOL-ID",
        "Region": "COGNITO-IDENTITY-POOL-REGION"
      }
    }
  },
  "IdentityManager": {
    "Default": {}
  },
  "S3TransferUtility": {
    "Default": {
      "Bucket": "S3-BUCKET-NAME",
      "Region": "S3-REGION"
    }
  }
}
The issue I was having was that the S3TransferUtility block did not exist in my awsconfiguration.json file when I generated it. The reason was that I had a bucket tied to my Mobile Hub app but wanted to integrate with an existing bucket. Follow these directions to integrate with an existing bucket: https://docs.aws.amazon.com/aws-mobile/latest/developerguide/how-to-integrate-an-existing-bucket.html
Make sure you have initialized the AWSMobileClient instance and that the configuration file has default options in the S3 block; only then will you be able to access the configuration.
private void initializeAwsMClient() {
    AWSMobileClient.getInstance()
            .initialize(this, awsStartupResult ->
                    Timber.d("AWSMobileClient is instantiated and you are connected to AWS!"))
            .execute();
}
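Once the configuration loads, it can also help to attach a TransferListener to the TransferObserver from the question, so upload failures surface in Logcat instead of passing silently (a minimal sketch; the TAG constant is an assumption):

import android.util.Log;
import com.amazonaws.mobileconnectors.s3.transferutility.TransferListener;
import com.amazonaws.mobileconnectors.s3.transferutility.TransferState;

uploadObserver.setTransferListener(new TransferListener() {
    @Override
    public void onStateChanged(int id, TransferState state) {
        // States include IN_PROGRESS, COMPLETED, FAILED, WAITING_FOR_NETWORK.
        Log.d(TAG, "Transfer " + id + " state: " + state);
    }

    @Override
    public void onProgressChanged(int id, long bytesCurrent, long bytesTotal) {
        Log.d(TAG, "Transfer " + id + ": " + bytesCurrent + "/" + bytesTotal + " bytes");
    }

    @Override
    public void onError(int id, Exception ex) {
        Log.e(TAG, "Transfer " + id + " failed", ex);
    }
});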
The Spring Boot app works fine running locally, connecting to sandbox S3 and sandbox SQS using DefaultAWSCredentialsProviderChain, with credentials set as system properties.
When the application is deployed to an EC2 environment and uses profile credentials, I get a continuous stream of the following error in CloudWatch:
{
"Host": "<myhost>",
"Date": "2016-12-20T21:52:56,777",
"Thread": "simpleMessageListenerContainer-1",
"Level": "WARN ",
"Logger": "org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer",
"Msg": "An Exception occurred while polling queue 'my-queue-name'. The failing operation will be retried in 10000 milliseconds",
"Identifiers": {
"Jvm-Instance": "",
"App-Name": "my-app",
"Correlation-Id": "ca9a556e-2fbc-3g49-9fb8-0e9213bb79bc",
"Session-Id": "",
"Thread-Group": "main",
"Thread-Id": "32",
"Version": ""
}
}
java.lang.NullPointerException
at org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer$AsynchronousMessageListener.run(SimpleMessageListenerContainer.java:255) [spring-cloud-aws-messaging-1.1.1.RELEASE.jar:1.1.1.RELEASE]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_91]
The problem boils down to SimpleMessageListenerContainer.java:255:
ReceiveMessageResult receiveMessageResult = getAmazonSqs().receiveMessage(this.queueAttributes.getReceiveMessageRequest());
this.queueAttributes is null.
I have tried everything, from @EnableContextCredentials(instanceProfile=true) to setting cloud.aws.credentials.instanceProfile=true while making sure accessKey & secretKey are null. The SQS queue definitely exists, and I have verified through the AWS CLI on the EC2 instance itself that the profile credentials exist and are valid.
Additionally, in the AWS environment the app also uses the S3 client to generate unique keys for bucket storage, which all works. It's only the app's polling of messages from SQS that seems to be failing.
I am processing messages like so:
@SqsListener("${aws.sqs.queue.name}")
public void receive(S3EventNotification s3EventNotificationRecord) {
    // ...
}
More config:
@Bean
public AWSCredentialsProvider awsCredentialsProvider(
        @Value("${aws.credentials.accessKey}") String accessKey,
        @Value("${aws.credentials.secretKey}") String secretKey,
        JasyptPropertyDecryptor propertyDecryptor) {
    if (!Strings.isNullOrEmpty(accessKey) || !Strings.isNullOrEmpty(secretKey)) {
        Preconditions.checkState(
                !Strings.isNullOrEmpty(accessKey) && !Strings.isNullOrEmpty(secretKey),
                "Error in accessKey/secretKey config. Either both must be provided, or neither.");
        System.setProperty("aws.accessKeyId", propertyDecryptor.decrypt(accessKey));
        System.setProperty("aws.secretKey", propertyDecryptor.decrypt(secretKey));
    }
    return DefaultAWSCredentialsProviderChain.getInstance();
}

@Bean
public S3Client s3Client(
        AWSCredentialsProvider awsCredentialsProvider,
        @Value("${aws.s3.region.name}") String regionName,
        @Value("${aws.s3.bucket.name}") String bucketName) {
    return new S3Client(awsCredentialsProvider, regionName, bucketName);
}

@Bean
public QueueMessageHandlerFactory queueMessageHandlerFactory() {
    MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
    messageConverter.setStrictContentTypeMatch(false);
    QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
    factory.setArgumentResolvers(
            Collections.<HandlerMethodArgumentResolver>singletonList(
                    new PayloadArgumentResolver(messageConverter)));
    return factory;
}
One additional thing I noticed is that on application startup, ContextConfigurationUtils.registerCredentialsProvider is called, and unless you specify cloud.aws.credentials.profileName= as empty in your app.properties, this class will add a ProfileCredentialsProvider to the list of awsCredentialsProviders. I figured this might be problematic since I'm not providing credentials on the EC2 instance that way; it should be using InstanceProfileCredentialsProvider instead. That change did not work either.
Turns out the issue was that the AWS services I was using, such as SQS, had the proper access permissions on them, but the IAM profile itself lacked the permissions to even attempt the service operations the application needed to make.
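For reference, a minimal instance-profile policy along these lines should be enough for the listener container (a sketch; the region, account, and queue name in the ARN are placeholders). Note that spring-cloud-aws calls sqs:GetQueueUrl and sqs:GetQueueAttributes at startup to populate the queueAttributes field that was null above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueUrl",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue-name"
    }
  ]
}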
Is there any solution to get a public hostname in Google Cloud like on other cloud platforms?
Currently the machine name is:
computername.c.googleprojectid.internal
but I want something like in Amazon or in Azure:
computername.cloudapp.net
You can use the Google Cloud DNS service to update the DNS record for your host on startup. (You could also use a service like dyn-dns, but I'm assuming that you want to use the Google tools where possible.) It looks like you'd want to use the "create change" API, using a service account associated with your VM. This would look something like:
POST https://www.googleapis.com/dns/v1beta1/projects/*myProject*/managedZones/*myZone.com*/changes
{
  "additions": [
    {
      "name": "computername.myZone.com.",
      "type": "A",
      "ttl": 600,
      "rrdatas": [
        "200.201.202.203"
      ]
    }
  ],
  "deletions": []
}
Note that 200.201.202.203 needs to be the external IP address of your VM.
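If you'd rather apply the change from code than raw REST, the google-cloud-dns Java client can submit the same record change. A minimal sketch, assuming a managed zone named "myZone" and a VM service account with DNS write access; the hostname and IP are the placeholders from above:

import com.google.cloud.dns.ChangeRequestInfo;
import com.google.cloud.dns.Dns;
import com.google.cloud.dns.DnsOptions;
import com.google.cloud.dns.RecordSet;
import java.util.concurrent.TimeUnit;

public class UpdateHostRecord {
    public static void main(String[] args) {
        // Uses the VM's default credentials (e.g., the attached service account).
        Dns dns = DnsOptions.getDefaultInstance().getService();

        // An A record pointing the hostname at the VM's external IP.
        RecordSet record = RecordSet.newBuilder("computername.myZone.com.", RecordSet.Type.A)
                .setTtl(600, TimeUnit.SECONDS)
                .addRecord("200.201.202.203")
                .build();

        // Submit the change to the managed zone; no deletions needed for a new record.
        ChangeRequestInfo change = ChangeRequestInfo.newBuilder().add(record).build();
        dns.applyChangeRequest("myZone", change);
    }
}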