AWS SimulatePrincipalPolicy showing incorrect results - Java

I am trying to run SimulatePrincipalPolicy through the Java SDK and am getting incorrect results.
I have a policy like this attached to the role 'Myrole':
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "id",
"Effect": "Allow",
"Action": [
"ec2:DescribeTags"
],
"Resource": "*"
}
]
}
Java Code :
SimulatePrincipalPolicyRequest simulatePrincipalPolicyRequest = new SimulatePrincipalPolicyRequest();
simulatePrincipalPolicyRequest.setPolicySourceArn("arn:aws:iam::123456789012:role/Myrole");
simulatePrincipalPolicyRequest.withActionNames("ec2:DescribeTags");
Result:
{
EvalActionName: ec2:DescribeTags
EvalResourceName: *
EvalDecision: implicitDeny
MatchedStatements: []
MissingContextValues: []
OrganizationsDecisionDetail: {AllowedByOrganizations: false}
EvalDecisionDetails: {}
ResourceSpecificResults: []
}
The response is incorrect because when I actually try to perform that action, I am able to do so.

I've run into a similar situation by calling simulate-principal-policy directly:
AllowedByOrganizations: false indicates that an organization-wide service control policy (SCP) is applied, which the simulator interprets as denying access.
The issue in my case is that some Org SCPs deny all access to certain regions, but not to others.
The simulator seems unable to distinguish one region from another, or at least I did not find a way to overcome this problem.
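One thing you could try (I have not verified that the simulator applies this to SCP evaluation) is passing the region as simulation context via a ContextEntry. A sketch against the v1 Java SDK:
import com.amazonaws.services.identitymanagement.AmazonIdentityManagement;
import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClientBuilder;
import com.amazonaws.services.identitymanagement.model.ContextEntry;
import com.amazonaws.services.identitymanagement.model.SimulatePrincipalPolicyRequest;
import com.amazonaws.services.identitymanagement.model.SimulatePrincipalPolicyResult;

public class SimulateWithRegion {
    public static void main(String[] args) {
        AmazonIdentityManagement iam = AmazonIdentityManagementClientBuilder.defaultClient();

        // Supply aws:RequestedRegion as simulation context; "eu-west-1" is illustrative.
        ContextEntry regionContext = new ContextEntry()
                .withContextKeyName("aws:RequestedRegion")
                .withContextKeyType("string")
                .withContextKeyValues("eu-west-1");

        SimulatePrincipalPolicyRequest request = new SimulatePrincipalPolicyRequest()
                .withPolicySourceArn("arn:aws:iam::123456789012:role/Myrole")
                .withActionNames("ec2:DescribeTags")
                .withContextEntries(regionContext);

        SimulatePrincipalPolicyResult result = iam.simulatePrincipalPolicy(request);
        result.getEvaluationResults().forEach(r ->
                System.out.println(r.getEvalActionName() + " -> " + r.getEvalDecision()));
    }
}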

Related

Suppressing Telemetry types in ApplicationInsights V3 Codeless Approach

Folks,
I am using the v3.2.4 of the applicationinsights.jar on a Wildfly application server and am able to see all information go into Azure (Application Insights) portal.
https://learn.microsoft.com/en-us/azure/azure-monitor/app/java-in-process-agent
However, I need to do this for many application instances and am thinking it could be wise to suppress certain telemetry types (e.g. dependencies), as they create a lot of noise and data.
Is it possible to do this via the applicationinsights.json file?
Any guidance into this appreciated!
Update (5th Jan 2022): I am using a codeless solution whereby all configuration and thus suppression is done in the .json file.
Solutions involving v2 approaches via C#/java are out of scope (although this is what I have used in the past).
https://learn.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-telemetry-processors shows some ideas, but it is not explicit about suppressing certain types, and the default approach seems to push too much data to Azure.
You could try using sampling and sampling overrides (preview) to achieve the desired results, though I am not certain whether you can easily match only dependency calls by certain attributes. Sampling overrides are the recommended way to filter out telemetry for cost reasons.
Example: Suppress collecting telemetry for a noisy dependency call
This will suppress collecting telemetry for all GET my-noisy-key redis calls.
{
"connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
"preview": {
"sampling": {
"overrides": [
{
"attributes": [
{
"key": "db.system",
"value": "redis",
"matchType": "strict"
},
{
"key": "db.statement",
"value": "GET my-noisy-key",
"matchType": "strict"
}
],
"percentage": 0
}
]
}
}
}
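If you want to drop all Redis dependency calls regardless of the statement, it should be enough to match on the db.system attribute alone and omit the db.statement entry from the override, so the percentage 0 applies to every Redis span (I have not verified this particular combination).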
You can also disable certain telemetry sources, though I admit that this is not exactly the same as what you are asking for:
{
"instrumentation": {
"azureSdk": {
"enabled": false
},
"cassandra": {
"enabled": false
},
"jdbc": {
"enabled": false
},
"jms": {
"enabled": false
},
"kafka": {
"enabled": false
},
"micrometer": {
"enabled": false
},
"mongo": {
"enabled": false
},
"rabbitmq": {
"enabled": false
},
"redis": {
"enabled": false
},
"springScheduling": {
"enabled": false
}
}
}
Below are a few steps to suppress Application Insights telemetry:
Create the telemetry processor.
Register the telemetry processor.
Add the filters.
Enable based on log level.
This is one possible way.
You can get the full details from this blog.
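For reference, the filter step from that blog boils down to a v2-style TelemetryProcessor. A minimal sketch against the classic applicationinsights SDK (note the question marks v2 approaches as out of scope, so this is only for completeness):
import com.microsoft.applicationinsights.extensibility.TelemetryProcessor;
import com.microsoft.applicationinsights.telemetry.RemoteDependencyTelemetry;
import com.microsoft.applicationinsights.telemetry.Telemetry;

// Drops all dependency telemetry; returning false filters an item out.
public class DependencyFilter implements TelemetryProcessor {
    @Override
    public boolean process(Telemetry telemetry) {
        return !(telemetry instanceof RemoteDependencyTelemetry);
    }
}
The processor is then registered under the TelemetryProcessors section of ApplicationInsights.xml.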

How to disable Geo backup policy in Azure SQL DW using java sdk

I have created an Azure SQL Data Warehouse and performed pause and resume actions using the Java SDK.
Now I want to disable the geo-backup policy when creating the Azure SQL Data Warehouse.
How can I do that using the Java SDK?
Refer to the image below for the geo-backup policy in Azure SQL Data Warehouse.
You could use the Azure REST API to do this.
PUT https://management.azure.com/subscriptions/***********/resourceGroups/shui156/providers/Microsoft.Sql/servers/shui156/databases/shui156/geoBackupPolicies/Default?api-version=2014-04-01
Body:
{
"id": "/subscriptions/***********/resourceGroups/shui156/providers/Microsoft.Sql/servers/shui156/databases/shui156/geoBackupPolicies/Default",
"name": "Default",
"type": "Microsoft.Sql/servers/databases/geoBackupPolicies",
"location": "East US",
"kind": null,
"properties": {
"state": "Disabled",
"storageType": null
}
}
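If you want to drive this from Java rather than a raw REST client, here is a minimal sketch using the JDK 11 HttpClient. The subscription id and token values are placeholders, and I am assuming the service accepts a body abbreviated to the required properties; otherwise send the full body shown above:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DisableGeoBackup {
    public static void main(String[] args) throws Exception {
        // Placeholders: use your real subscription id and an Azure AD bearer
        // token issued for the https://management.azure.com/ resource.
        String subscriptionId = "your-subscription-id";
        String accessToken = "your-aad-token";
        String url = "https://management.azure.com/subscriptions/" + subscriptionId
                + "/resourceGroups/shui156/providers/Microsoft.Sql/servers/shui156"
                + "/databases/shui156/geoBackupPolicies/Default?api-version=2014-04-01";
        String body = "{\"properties\":{\"state\":\"Disabled\"}}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}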

Automate mongodb and elasticsearch sync

I am currently working on a project where our main database is MongoDB and we use Elasticsearch for search. We insert data into MongoDB from a Java application and use the river plugin to sync the data. Up to now we have synced data between MongoDB and Elasticsearch manually by executing the shell script files shown below (setup.sh and bash.sh).
//setup.sh
curl -XPOST http://localhost:9200/classdata -d @setup.json
//setup.json
{
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0
},
"mappings": {
"classdata": {
"properties": {
"className": {
"type": "string"
},
"jarID": {
"index": "not_analyzed",
"type": "string"
},
"jarFileName": {
"index": "not_analyzed",
"type": "string"
},
"dependencies": {
"properties": {
"methodSignature": {
"type": "string"
},
"dependedntClass": {
"type": "string"
}
}
}
}
}
}
}
//bash.sh
curl -XPUT "localhost:9200/_river/classdata/_meta" -d '
{
"type": "mongodb",
"mongodb": {
"servers": [
{ "host": "127.0.0.1", "port": 27017 }
],
"options": { "secondary_read_preference": true },
"db": "E",
"collection": "ClassData"
},
"index": {
"name": "classdata",
"type": "classdata"
}
}'
But now our requirement has changed: after inserting data into MongoDB, we need it to sync to Elasticsearch automatically.
I have no idea how to do that. If someone knows how to automate this process, please help me.
I strongly recommend monstache. It runs in the background and automatically syncs data from MongoDB to Elasticsearch. You can configure which db and what kinds of operations (insert, update, delete, ...) you want to sync; the configuration options are listed here.
The Mongo Connector plugin supports data sync between MongoDB and Elasticsearch.
1) Install Mongo Connector on your server:
pip install mongo-connector
2) Install a Doc Manager based on the target system. There are various Doc Manager implementations depending on the target; install the one that supports Elasticsearch, and in particular the version that you have, e.g.:
pip install 'mongo-connector[elastic5]'
3) Start Mongo Connector with the configurations of the source (MongoDB) and target systems, e.g.:
mongo-connector -m <mongodb server hostname>:<replica set port> -t <target endpoint URL, e.g. localhost:9200 for Elasticsearch> -d <name of doc manager>
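For example, with Elasticsearch listening on localhost:9200 and a MongoDB replica-set member on localhost:27017 (hostnames and the doc manager name here are illustrative; mongo-connector tails the oplog, so MongoDB must run as a replica set):
mongo-connector -m localhost:27017 -t localhost:9200 -d elastic2_doc_manager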
Now data will be automatically synced up between the two systems.
For more information, use the following links,
https://www.mongodb.com/blog/post/introducing-mongo-connector
https://github.com/mongodb-labs/mongo-connector
https://github.com/mongodb-labs/mongo-connector/wiki/Usage%20with%20ElasticSearch

Logging with AWS Lambda from Java seems to be broken

I have created the same function in Python and Java (simple hello world) following the guide. Using the same role, the Python version works as expected, generating the log stream entry and printing "ok".
from __future__ import print_function
import json
print('Loading function')
def lambda_handler(event, context):
return "ok"
However, the Java version does not log anything with the same role and settings.
package com.streambright;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
public class Dbmgmt implements RequestHandler<Object, Object> {
    @Override
    public String handleRequest(Object in, Context ctx) {
        System.out.println("test");
        ctx.getLogger().log("o hai");
        return "ok";
    }
}
I am wondering why it does not put anything into CloudWatch Log Groups. Does anybody have the same experience with Java? Is there a fix or workaround for this?
Also posted on the AWS forum: https://forums.aws.amazon.com/thread.jspa?threadID=254747
Found the root cause of this. The role policy was not allowing the correct log resource to be created, and it failed silently. The AWS UI was not much help in identifying this issue; I ran into it accidentally during an audit. After changing the resource to * the Lambda function was able to create the log resource.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"xray:PutTraceSegments",
"xray:PutTelemetryRecords",
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"kms:Decrypt"
],
"Resource": "*"
}
]
}
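Once logging works, you can tighten this again: instead of Resource: *, the logs actions can be scoped to something like arn:aws:logs:<region>:<account-id>:log-group:/aws/lambda/<function-name>:* (placeholders here are illustrative).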

Google Cloud public hostname

Is there any solution to get a public hostname in Google Cloud like on other cloud platforms?
Currently the machine name is:
computername.c.googleprojectid.internal
but I want something like in Amazon or in Azure:
computername.cloudapp.net
You can use the Google Cloud DNS service to update the DNS record for your host on startup. (You could also use a service like dyn-dns, but I'm assuming that you want to use the Google tools where possible.) It looks like you'd want to use the "create change" API, using a service account associated with your VM. This would look something like:
POST https://www.googleapis.com/dns/v1beta1/projects/*myProject*/managedZones/*myZone.com*/changes
{
"additions": [
{
"name": "computername.myZone.com.",
"type": "A",
"ttl": 600,
"rrdatas": [
"200.201.202.203"
]
}
],
"deletions": [
],
}
Note that 200.201.202.203 needs to be the external IP address of your VM.
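If you prefer to do this from Java instead of calling the REST endpoint directly, here is a sketch using the google-cloud-dns client library; the zone name and record values are illustrative, mirroring the REST example above:
import com.google.cloud.dns.ChangeRequestInfo;
import com.google.cloud.dns.Dns;
import com.google.cloud.dns.DnsOptions;
import com.google.cloud.dns.RecordSet;

import java.util.concurrent.TimeUnit;

public class UpdateHostRecord {
    public static void main(String[] args) {
        // Uses application default credentials (e.g. the VM's service account).
        Dns dns = DnsOptions.getDefaultInstance().getService();

        // Build the A record; the hostname, TTL, and IP mirror the REST body above.
        RecordSet record = RecordSet.newBuilder("computername.myZone.com.", RecordSet.Type.A)
                .setTtl(600, TimeUnit.SECONDS)
                .addRecord("200.201.202.203")
                .build();

        // Apply the change to the managed zone ("my-zone" is a placeholder).
        ChangeRequestInfo change = ChangeRequestInfo.newBuilder().add(record).build();
        dns.applyChangeRequest("my-zone", change);
    }
}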
