How to configure AWS DynamoDB Camel component - java

I am trying to POC accessing DynamoDB via an Apache Camel application. Obviously DynamoDB will run in AWS, but for development purposes we have it running locally as a Docker container.
It was very easy to create a DynamoDB table locally and put some items in there manually. For this I used the IntelliJ DynamoDB console, and all I had to provide was a custom endpoint, http://localhost:8000, and the default credential provider chain.
Now, at certain times of the day, I would like to trigger a job that will scan the DynamoDB items and perform some actions on the returned data.
from("cron:myCron?schedule=0 */5 * * * *")
.log("Running myCron scheduler")
.setHeader(Ddb2Constants.OPERATION, () -> Ddb2Operations.Scan)
.to("aws2-ddb:myTable")
.log("Performing some work on items");
When I try to run my application, it fails to start, complaining that the security token is expired, which makes me think it is trying to go to AWS rather than accessing the local instance. I was unable to find anything about how I would set this. The Camel DynamoDB component documentation (https://camel.apache.org/components/3.15.x/aws2-ddb-component.html) talks about being able to configure the component with a DynamoDbClient, but this is an interface, and its implementation, DefaultDynamoDbClient, is not public, and neither is DefaultDynamoDbClientBuilder.

Assuming that you use Spring Boot as the Camel runtime, the simplest way in your case is to configure the DynamoDbClient used by Camel via options set in application.properties, as follows:
# The value of the access key used by the component aws2-ddb
camel.component.aws2-ddb.accessKey=test
# The value of the secret key used by the component aws2-ddb
camel.component.aws2-ddb.secretKey=test
# The value of the region used by the component aws2-ddb
camel.component.aws2-ddb.region=us-east-1
# Indicates that the component aws2-ddb should use the new URI endpoint
camel.component.aws2-ddb.override-endpoint=true
# The value of the URI endpoint used by the component aws2-ddb
camel.component.aws2-ddb.uri-endpoint-override=http://localhost:8000
For more details, please refer to the documentation of these options:
camel.component.aws2-ddb.accessKey
camel.component.aws2-ddb.secretKey
camel.component.aws2-ddb.region
camel.component.aws2-ddb.override-endpoint
camel.component.aws2-ddb.uri-endpoint-override
For other runtimes, it can be configured programmatically, as follows:
import java.net.URI;

import org.apache.camel.component.aws2.ddb.Ddb2Component;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

Ddb2Component ddb2Component = new Ddb2Component(context);
String accessKey = "test";
String secretKey = "test";
String region = "us-east-1";
String endpoint = "http://localhost:8000";
// Point the client at the local DynamoDB container with static dummy credentials
ddb2Component.getConfiguration().setAmazonDDBClient(
    DynamoDbClient.builder()
        .endpointOverride(URI.create(endpoint))
        .credentialsProvider(
            StaticCredentialsProvider.create(
                AwsBasicCredentials.create(accessKey, secretKey)))
        .region(Region.of(region))
        .build());
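Don't forget to register the configured component with the CamelContext before the routes start; a minimal sketch, assuming the default aws2-ddb scheme used in your route URI:
// Register the pre-configured component under the scheme the route refers to
context.addComponent("aws2-ddb", ddb2Component);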

Related

Using org.apache.hadoop.fs.azure.NativeAzureFileSystem for Azure Data Lake Gen 2 (ADLS2)

I’m trying to use org.apache.hadoop.fs.azure.NativeAzureFileSystem to read the metadata of a file in ADLS2, but I can't initialize the NativeAzureFileSystem.
This works:
Configuration conf = new Configuration();
conf.set("fs.azure.account.key.myaccount.blob.core.windows.net", "<….. my ADLS2 storage account key>");
Path fsPath = new Path("wasbs://mycontainer@myaccount.blob.core.windows.net/");
NativeAzureFileSystem afs = new NativeAzureFileSystem();
afs.initialize(fsPath.toUri(), conf);
… but it’s actually working via the Blob interface (wasbs).
Here I’m trying to use abfss, and it doesn’t work:
Configuration conf = new Configuration();
conf.set("fs.azure.account.auth.type.myaccount.dfs.core.windows.net", "OAuth");
conf.set("fs.azure.account.oauth.provider.type.myaccount.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider");
conf.set("fs.azure.account.oauth2.client.id.myaccount.dfs.core.windows.net", "<.......>");
conf.set("fs.azure.account.oauth2.client.secret.myaccount.dfs.core.windows.net","<........>");
conf.set("fs.azure.account.oauth2.client.endpoint.myaccount.dfs.core.windows.net", "https://login.microsoftonline.com/<.....>/oauth2/token");
Path fsPath = new Path("abfss://mycontainer#myaccount.dfs.core.windows.net/");
NativeAzureFileSystem afs = new NativeAzureFileSystem();
afs.initialize(fsPath.toUri(),conf);
… and I’m getting an error:
Caught: org.apache.hadoop.fs.azure.AzureException: org.apache.hadoop.fs.azure.AzureException: No credentials found for account myaccount.dfs.core.windows.net in the configuration, and its container delta is not accessible using anonymous credentials. Please check if the container exists first. If it is not publicly available, you have to provide account credentials.
org.apache.hadoop.fs.azure.AzureException: org.apache.hadoop.fs.azure.AzureException: No credentials found for account myaccount.dfs.core.windows.net in the configuration, and its container delta is not accessible using anonymous credentials. Please check if the container exists first. If it is not publicly available, you have to provide account credentials.
at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.createAzureStorageSession(AzureNativeFileSystemStore.java:1123)
at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.initialize(AzureNativeFileSystemStore.java:566)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem.initialize(NativeAzureFileSystem.java:1423)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem$initialize.call(Unknown Source)
I also tried setting up the conf without "myaccount.blob.core.windows.net" at the end of the config entries, to no avail.
Am I missing some extra configuration here? The storage account definitely has the hierarchical namespace enabled, and I can access it in Databricks using abfss.
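As a side note, the stack trace shows the wasb driver (org.apache.hadoop.fs.azure.NativeAzureFileSystem) handling the request even though the URI scheme is abfss; in hadoop-azure the abfss scheme is served by a different driver, org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem. A minimal sketch, assuming that driver is on the classpath, would be to let Hadoop resolve the scheme instead of instantiating a FileSystem class directly:
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
// ... the same OAuth settings as above ...
// FileSystem.get() picks whichever driver is registered for the abfss scheme
FileSystem fs = FileSystem.get(new URI("abfss://mycontainer@myaccount.dfs.core.windows.net/"), conf);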

How to get instanceid from cloud_run?

The logs from Cloud Run spit out some good JSON with resource.labels.revision_name = my_name-00046-kip.
The JSON path labels.instanceId is more like this, though:
00bf4bf02d71261c0c1f55a601331b336a5d90d365cca1b28330dcf3e456fb7c07d5b72f1d3c9a971e391b5edc3512aea8559d172b24e639
Per this document I was able to get the revision_name:
https://cloud.google.com/run/docs/reference/container-contract#env-vars
but I can't get the instance ID, and metrics must be reported per instance, or two instances reporting in the same minute will be rejected. How do I get the instance ID (preferably through the Dockerfile and, if not, through an API call)? If Cloud Run boots up 10 instances under one revision name, I have to make sure to report metrics uniquely to the Generic Task resource, where I plan on filling in job_id with the instance ID.
Thanks,
Dean
Please try using the metadata server to get the instance ID via the URL:
http://metadata.google.internal/computeMetadata/v1/instance/id
Note that the "Metadata-Flavor: Google" header is also required.
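For illustration, a minimal Java sketch of that raw call; the URL and header come from the container contract above, the rest is plain java.net plumbing:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

URL url = new URL("http://metadata.google.internal/computeMetadata/v1/instance/id");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
// The metadata server rejects requests that lack this header
conn.setRequestProperty("Metadata-Flavor", "Google");
try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
    String instanceId = in.readLine(); // the ID comes back as a single plain-text line
}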
If you're using Java (as indicated by the tags), the easiest way to get the instance ID from the "internal metadata server" programmatically is probably to include the dependency com.google.cloud:google-cloud-core:1.93.5 (or newer) through Gradle/Maven and then call the following method:
import com.google.cloud.MetadataConfig;
String instanceId = MetadataConfig.getInstanceId();
The entries in the Stackdriver logging are as follows:
labels: {
    instanceId: "00bf4bf02d4b374e91dda64bc4c4241a218302c4bcc73a01ecf85e582127e8c8076fcbe18b3cc934f5ed33e5dc1348c58cfd40cbecc0c9ae2a0b6d2356"
}
labels: {
    configuration_name: "cloudrunservice"
    location: "us-central1"
    project_id: "xxxx-xxxx-000"
    revision_name: "cloudrunservice-00002-leq"
    service_name: "cloudrunservice"
}
type: "cloud_run_revision"
As you mentioned, each entry has the instance ID, revision name, and service name. This way, you do not have to worry about entries in the logging being rejected for the same instance/time. I could not see anything related to the instance ID in the UI when managing revisions, but by handling this JSON from the logging you can get the instance ID.

Setting AWS region programmatically for SQS

I just started working with the AWS SDK for Java and .NET.
Currently I am creating an AWS SQS queue. I was able to create a queue, list the existing queues, and talk to the queues with the .NET SDK.
When I tried the same with Java, I got the following error:
Unable to find a region via the region provider chain. Must provide an
explicit region in the builder or setup environment to supply a
region.
I have set all the necessary access keys, region, and credentials in the AWS preferences in Eclipse.
This is how I am initializing the SQS client in a Java Maven project:
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
I have googled and found that there is a method called withRegion() for S3 where I can specify the region, but it's not there for SQS.
I also tried setting the region as:
sqs.setRegion(Region.AP_Mumbai);
This throws the following exception:
The method setRegion(com.amazonaws.regions.Region) in the type
AmazonSQS is not applicable for the arguments
(com.amazonaws.services.s3.model.Region)
I tried setting the same using com.amazonaws.regions.Region, but there is no provision for that.
Please suggest.
I set up the AWS SQS client this way:
BasicAWSCredentials bAWSc = new BasicAWSCredentials(accessKey, secretKey);
return AmazonSQSClientBuilder.standard().withRegion(region).withCredentials(new AWSStaticCredentialsProvider(bAWSc)).build();
Based on what @Francesco posted, I created a more intuitive version:
BasicAWSCredentials bAWSc = new BasicAWSCredentials(accessKey, secretKey);
final AmazonSQS sqs = AmazonSQSClientBuilder.standard()
    .withRegion(Regions.US_EAST_1)
    .withCredentials(new AWSStaticCredentialsProvider(bAWSc))
    .build();
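For a quick smoke test of the configured client, something like the following should work; the queue name here is made up for illustration:
// Hypothetical queue name, just to verify the client is wired up correctly
String queueUrl = sqs.createQueue("my-test-queue").getQueueUrl();
sqs.sendMessage(queueUrl, "hello");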

Why might describing Amazon EC2 instances yield no result?

I am trying to retrieve all the instances running in my AWS account (instance ID, etc.) with the following code, but I am not able to print the instance IDs. When I debug, I just get null values. But I have three instances running on AWS. Can someone point out what I am doing wrong here?
DescribeInstancesResult result = ec2.describeInstances();
List<Reservation> reservations = result.getReservations();
for (Reservation reservation : reservations) {
    List<Instance> instances = reservation.getInstances();
    for (Instance instance : instances) {
        System.out.println(instance.getInstanceId());
    }
}
The most common cause of issues like this is a missing region specification when initializing the client; see the section To create and initialize an Amazon EC2 client within Create an Amazon EC2 Client for details.
Specifically, step 2 only creates an EC2 client without specifying the region explicitly:
2) Use the AWSCredentials object to create a new AmazonEC2Client instance, as follows:
amazonEC2Client = new AmazonEC2Client(credentials);
This yields a client talking to us-east-1. Surprisingly, the AWS SDKs and the AWS Management Console even use different defaults, as outlined in step 3, which also shows how to specify a different endpoint:
3) By default, the service endpoint is ec2.us-east-1.amazonaws.com. To specify a different endpoint, use the setEndpoint method. For example:
amazonEC2Client.setEndpoint("ec2.us-west-2.amazonaws.com");
The AWS SDK for Java uses US East (N. Virginia) as the default region
if you do not specify a region in your code. However, the AWS
Management Console uses US West (Oregon) as its default. Therefore,
when using the AWS Management Console in conjunction with your
development, be sure to specify the same region in both your code and
the console. [emphasis mine]
The differing defaults are easy to trip over, and the respective default in the AWS Management Console has in fact changed over time. As so often in software development, I recommend always being explicit about this in your code to avoid such subtle error sources.
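For illustration, a sketch of being explicit with the client builder; the region value here is an assumption, pick whichever region actually hosts your instances:
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;

// Pin the client to the region the instances run in (assumption: us-west-2)
AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard()
    .withRegion(Regions.US_WEST_2)
    .build();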

How to create AWS default VPC through aws-java-sdk

I want to create a default VPC with all the default components (i.e. default security group, internet gateway) and the components that are needed for an instance launched inside this VPC to communicate with the external world, say through SSH. I can create such a VPC through the AWS VPC console, keeping the default option selected, but I want to do it through Java code using aws-java-sdk.
I tried this code:
private static void createVpc()
{
    System.out.println("Creating VPC.....\n");
    CreateVpcRequest newVPC = new CreateVpcRequest();
    String cidrBlock = "10.0.0.0/28";
    newVPC.setCidrBlock(cidrBlock);
    CreateVpcResult res = ec2.createVpc(newVPC);
    Vpc vp = res.getVpc();
    vp.setIsDefault(true);
    String vpcId = vp.getVpcId();
    System.out.println("Created VPC " + vpcId);
    //deleteVpc("vpc-c80418aa");
}
but it creates only the VPC and no other associated components.
Please tell me what else I need to do, or provide sample code/steps to build a VPC with the other components.
I do not think it is possible. A default VPC is created by AWS when you create your account.
In addition, older active accounts may not have a default VPC at all...
So either build a CloudFormation template or use Java to build all the required elements, as sketched below.
-R
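For illustration, a minimal sketch of building the main pieces by hand with the v1 SDK; the CIDR blocks are assumptions, and error handling plus subnet and security-group setup are omitted for brevity:
import com.amazonaws.services.ec2.model.*;

// Create the VPC itself
String vpcId = ec2.createVpc(new CreateVpcRequest().withCidrBlock("10.0.0.0/16"))
    .getVpc().getVpcId();

// Create an internet gateway and attach it, so instances can reach the outside world
String igwId = ec2.createInternetGateway().getInternetGateway().getInternetGatewayId();
ec2.attachInternetGateway(new AttachInternetGatewayRequest()
    .withInternetGatewayId(igwId)
    .withVpcId(vpcId));

// Add a default route through the gateway to the VPC's main route table
String routeTableId = ec2.describeRouteTables(new DescribeRouteTablesRequest()
        .withFilters(new Filter("vpc-id").withValues(vpcId)))
    .getRouteTables().get(0).getRouteTableId();
ec2.createRoute(new CreateRouteRequest()
    .withRouteTableId(routeTableId)
    .withDestinationCidrBlock("0.0.0.0/0")
    .withGatewayId(igwId));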
