Setting AWS region programmatically for SQS - java

I just started working with the AWS SDK for Java and .NET.
Currently I am creating an AWS SQS queue. I was able to create a queue, list the existing queues, and talk to the queues with the .NET SDK.
When I tried the same with Java, I got the following error:
Unable to find a region via the region provider chain. Must provide an
explicit region in the builder or setup environment to supply a
region.
I have set all the necessary access keys, region, and credentials in the AWS preferences in Eclipse.
This is how I am initializing the SQS client in a Java Maven project:
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
I have googled and found that there is a withRegion() method for S3 where I can specify the region, but it does not seem to be available for SQS.
I also tried setting the region as:
sqs.setRegion(Region.AP_Mumbai);
This throws the following exception:
The method setRegion(com.amazonaws.regions.Region) in the type
AmazonSQS is not applicable for the arguments
(com.amazonaws.services.s3.model.Region)
I tried setting the same using com.amazonaws.regions.Region, but there is no provision as such.
Please suggest.

I set up the AWS SQS client this way:
BasicAWSCredentials bAWSc = new BasicAWSCredentials(accessKey, secretKey);
return AmazonSQSClientBuilder.standard()
        .withRegion(region)
        .withCredentials(new AWSStaticCredentialsProvider(bAWSc))
        .build();

Based on what @Francesco put, I created a more intuitive version:
BasicAWSCredentials bAWSc = new BasicAWSCredentials(accessKey, secretKey);
final AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withCredentials(new AWSStaticCredentialsProvider(bAWSc))
        .build();
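If you would rather keep using AmazonSQSClientBuilder.defaultClient(), the region can also come from the shared AWS config file instead of the builder. A minimal sketch of ~/.aws/config, assuming the default profile (ap-south-1 is the Mumbai region the question refers to):

```ini
[default]
region = ap-south-1
```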

Related

How to programmatically set external ip for Google Cloud virtual machine in Java?

I am trying to programmatically start Google Cloud virtual machine instances. It occurred to me that in order to have internet access, I have to set an external IP address.
// Access Config
AccessConfig accessConfig = AccessConfig.newBuilder()
        .setNatIP("foo")
        .setType("ONE_TO_ONE_NAT")
        .setName("External NAT")
        .setExternalIpv6("bar")
        .build();
// Use the network interface provided in the networkName argument.
NetworkInterface networkInterface = NetworkInterface.newBuilder()
        .setName(networkName)
        .setAccessConfigs(0, accessConfig)
        .build();
That is my status quo, inspired by this article. I hoped it would work in Java too, but currently I am stuck.
All I get is:
com.google.api.gax.rpc.InvalidArgumentException: Bad Request
Unfortunately, the Google Cloud Compute Engine docs don't really provide any further information on how to set the external IP properly.
Thanks in advance.
I have found the answer. In the Google Cloud Compute Engine docs it is explained for Windows instances. It took me a while to recognize it because I had focused only on questions related to Linux instances.
The solution:
instanceResource = Instance.newBuilder()
        .setName(instanceName)
        .setMachineType(machineType)
        .addDisks(disk)
        // Add external internet access to the instance
        .addNetworkInterfaces(NetworkInterface.newBuilder()
                .addAccessConfigs(AccessConfig.newBuilder()
                        .setType("ONE_TO_ONE_NAT")
                        .setName("External NAT")
                        .build())
                .setName("global/networks/default")
                .build())
        .setMetadata(buildMetadata())
        .build();

How to configure AWS DynamoDB Camel component

I am trying to POC accessing DynamoDB via an Apache Camel application. Obviously DynamoDB will run in AWS, but for development purposes we have it running locally as a Docker container.
It was very easy to create a DynamoDB table locally and put some items in it manually. For this I used the IntelliJ DynamoDB console, and all I had to provide was a custom endpoint (http://localhost:8000) and the default credential provider chain.
Now, at certain times of the day, I would like to trigger a job that will scan the DynamoDB items and perform some actions on the returned data.
from("cron:myCron?schedule=0 */5 * * * *")
        .log("Running myCron scheduler")
        .setHeader(Ddb2Constants.OPERATION, () -> Ddb2Operations.Scan)
        .to("aws2-ddb:myTable")
        .log("Performing some work on items");
When I try to run my application, it fails to start, complaining that the security token is expired, which makes me think it is going to AWS rather than the local instance. I was unable to find anything about how to set this. The Camel DynamoDB component documentation (https://camel.apache.org/components/3.15.x/aws2-ddb-component.html) says the component can be configured with a DynamoDbClient, but this is an interface, and its implementation DefaultDynamoDbClient is not public, and neither is DefaultDynamoDbClientBuilder.
Assuming that you use Spring Boot as the Camel runtime, the simplest way in your case is to configure the DynamoDbClient used by Camel through options set in application.properties, as follows:
# The value of the access key used by the component aws2-ddb
camel.component.aws2-ddb.accessKey=test
# The value of the secret key used by the component aws2-ddb
camel.component.aws2-ddb.secretKey=test
# The value of the region used by the component aws2-ddb
camel.component.aws2-ddb.region=us-east-1
# Indicates that the component aws2-ddb should use the new URI endpoint
camel.component.aws2-ddb.override-endpoint=true
# The value of the URI endpoint used by the component aws2-ddb
camel.component.aws2-ddb.uri-endpoint-override=http://localhost:8000
For more details, please refer to the documentation of those options:
camel.component.aws2-ddb.accessKey
camel.component.aws2-ddb.secretKey
camel.component.aws2-ddb.region
camel.component.aws2-ddb.override-endpoint
camel.component.aws2-ddb.uri-endpoint-override
For other runtimes, it can be configured programmatically as follows:
Ddb2Component ddb2Component = new Ddb2Component(context);
String accessKey = "test";
String secretKey = "test";
String region = "us-east-1";
String endpoint = "http://localhost:8000";
ddb2Component.getConfiguration().setAmazonDDBClient(
        DynamoDbClient.builder()
                .endpointOverride(URI.create(endpoint))
                .credentialsProvider(
                        StaticCredentialsProvider.create(
                                AwsBasicCredentials.create(accessKey, secretKey)))
                .region(Region.of(region))
                .build());
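For the programmatic configuration to take effect, the configured component instance also needs to be registered with the CamelContext under the scheme the route uses. A sketch, assuming the Camel 3.x addComponent API:

```java
// Register the configured component under the "aws2-ddb" scheme so that
// the route's aws2-ddb:myTable endpoint picks up the local-endpoint client.
context.addComponent("aws2-ddb", ddb2Component);
```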

Creating EKS cluster by using Java application

Everyone, I am new to the AWS SDK. I am trying to create an EKS cluster from my Java application.
I have used the eksctl create cluster command to create a cluster, and I have also done this using cluster templates.
I have tried to use the AWS SDK to create clusters, but that didn't work, and I have no idea how to go about it.
If any of you has good sample code, an explanation of using the AWS SDK to create a cluster from a cluster template, or anything else that can help me get there, it would be appreciated.
Here is a Java sample; I hope it serves your purpose for EKS cluster creation:
String accessKey = "your_aws_access_key";
String secretKey = "your_aws_secret_key";
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTPS);
clientConfig.setMaxErrorRetry(DEFAULT_MAX_ERROR_RETRY);
clientConfig.setRetryPolicy(new RetryPolicy(PredefinedRetryPolicies.DEFAULT_RETRY_CONDITION,
        DEFAULT_BACKOFF_STRATEGY, DEFAULT_MAX_ERROR_RETRY, false));
AmazonEKS amazonEKS = AmazonEKSClientBuilder.standard()
        .withClientConfiguration(clientConfig)
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withRegion("us-east-1") // replace with your region
        .build();
CreateClusterResult eksCluster = amazonEKS.createCluster(
        new CreateClusterRequest().withName("cluster-name") // with other params
);
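The EKS API will reject a CreateClusterRequest that lacks an IAM role and VPC configuration, so the sample above needs a couple more parameters in practice. A hedged sketch of a fuller request; the role ARN and subnet ids are placeholders I invented, not values from the answer:

```java
// The role ARN and subnet ids below are hypothetical placeholders.
CreateClusterRequest request = new CreateClusterRequest()
        .withName("cluster-name")
        .withRoleArn("arn:aws:iam::123456789012:role/eks-cluster-role")
        .withResourcesVpcConfig(new VpcConfigRequest()
                .withSubnetIds("subnet-aaaa1111", "subnet-bbbb2222"));
CreateClusterResult eksCluster = amazonEKS.createCluster(request);
```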

How enable force global bucket access in aws s3 sdk java 2.0?

Here is a link to the documentation for the Java SDK version 1. Does version 2.0 have something similar, or did they remove that option?
Yes! It is possible in AWS SDK v2 to execute S3 operations on regions other than the one configured in the client.
In order to do this, set useArnRegionEnabled to true on the client.
An example of this using Scala is:
val s3Configuration = S3Configuration.builder.useArnRegionEnabled(true).build
val client = S3Client
  .builder
  .credentialsProvider({$foo})
  .region(Region.EU_WEST_1)
  .overrideConfiguration({$foo})
  .serviceConfiguration(s3Configuration)
  .build
Here is the documentation: https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Configuration.Builder.html#useArnRegionEnabled-java.lang.Boolean-
Not supported per here
In version 1.x, services such as Amazon S3, Amazon SNS, and Amazon SQS allowed access to resources across Region boundaries. This is no longer allowed in version 2.x using the same client. If you need to access a resource in a different region, you must create a client in that region and retrieve the resource using the appropriate client.
This works for me with Java AWS SDK 2.16.98, and it only requires the name of the bucket rather than the full ARN.
private S3Client bucketSpecificClient;
private String bucketName = "my-bucket-in-some-region";

// this client seems to be able to look up the location of buckets from any region
private S3Client defaultClient = S3Client.builder()
        .endpointOverride(URI.create("https://s3.us-east-1.amazonaws.com"))
        .region(Region.US_EAST_1)
        .build();

public S3Client getClient() {
    if (bucketSpecificClient == null) {
        String bucketLocation = defaultClient
                .getBucketLocation(builder -> builder.bucket(this.bucketName))
                .locationConstraintAsString();
        Region region = bucketLocation.trim().equals("") ? Region.US_EAST_1 : Region.of(bucketLocation);
        bucketSpecificClient = S3Client.builder().region(region).build();
    }
    return bucketSpecificClient;
}
Now you can use bucketSpecificClient to perform operations on objects in the bucket my-bucket-in-some-region
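The empty-string check in getClient() above exists because GetBucketLocation returns an empty location constraint for buckets in us-east-1 (and the legacy value "EU" for old eu-west-1 buckets). A small stdlib-only sketch of that mapping; the class and method names are mine, not part of the SDK:

```java
public class S3RegionHelper {
    // Maps a raw GetBucketLocation location constraint to a usable region id.
    // us-east-1 buckets return an empty (or null) constraint; very old
    // eu-west-1 buckets return the legacy value "EU".
    public static String regionFromLocationConstraint(String constraint) {
        if (constraint == null || constraint.trim().isEmpty()) {
            return "us-east-1";
        }
        if ("EU".equals(constraint)) {
            return "eu-west-1";
        }
        return constraint;
    }

    public static void main(String[] args) {
        System.out.println(regionFromLocationConstraint(""));           // us-east-1
        System.out.println(regionFromLocationConstraint("ap-south-1")); // ap-south-1
    }
}
```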

AmazonS3ClientBuilder.defaultClient() fails to account for region?

The Amazon Java SDK has marked the Constructors for AmazonS3Client deprecated in favor of some AmazonS3ClientBuilder.defaultClient(). Following the recommendation, though, does not result in an AmazonS3 client that works the same. In particular, the client has somehow failed to account for Region. If you run the tests below, the thisFails test demonstrates the problem.
public class S3HelperTest {

    @Test
    public void thisWorks() throws Exception {
        AmazonS3 s3Client = new AmazonS3Client(); // this call is deprecated
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
        assertNotNull(s3Client);
    }

    @Test
    public void thisFails() throws Exception {
        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
        /*
         * The following line throws something like com.amazonaws.SdkClientException:
         * Unable to find a region via the region provider chain. Must provide an explicit region in the builder or
         * setup environment to supply a region.
         */
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
    }
}
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
    at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:371)
    at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
    at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
    at com.amazonaws.services.s3.AmazonS3ClientBuilder.defaultClient(AmazonS3ClientBuilder.java:54)
    at com.climate.tenderfoot.service.S3HelperTest.thisFails(S3HelperTest.java:21)
    ...
Is this an AWS SDK bug? Is there some region default provider chain, or some mechanism to derive the region from the environment and set it on the client? It seems really weak that the replacement for the deprecated method doesn't provide the same capability.
Looks like a region is required for the builder.
Probably this thread is related (I would use .withRegion(Regions.US_EAST_1) though in the 3rd line):
To emulate the previous behavior (no region configured), you'll need
to also enable "forced global bucket access" in the client builder:
AmazonS3 client = AmazonS3ClientBuilder.standard()
        .withRegion("us-east-1") // The first region to try your request against
        .withForceGlobalBucketAccess(true) // If a bucket is in a different region, try again in the correct region
        .build();
This will suppress the exception you received and automatically retry
the request under the region in the exception. It is made explicit in
the builder so you are aware of this cross-region behavior. Note: The
SDK will cache the bucket region after the first failure, so that
every request against this bucket doesn't have to happen twice.
Also, from the AWS documentation: if you want to use AmazonS3ClientBuilder.defaultClient(), then you need to have ~/.aws/credentials and ~/.aws/config files.
~/.aws/credentials contents:
[default]
aws_access_key_id = your_id
aws_secret_access_key = your_key
~/.aws/config contents:
[default]
region = us-west-1
From the same AWS documentation page: if you don't want to hardcode the region/credentials, you can set them as environment variables on your Linux machine in the usual way:
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_REGION=your_aws_region
BasicAWSCredentials creds = new BasicAWSCredentials("key_ID", "Access_Key");
AWSStaticCredentialsProvider provider = new AWSStaticCredentialsProvider(creds);
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withCredentials(provider)
        .withRegion(Regions.US_EAST_2)
        .build();
Create a file named "config" under ~/.aws and place the content below in it.
~/.aws/config contents:
[default]
region = us-west-1
output = json
AmazonSQS sqsClient = AmazonSQSClientBuilder
        .standard()
        .withRegion(Regions.AP_SOUTH_1)
        .build();
