In older versions of the Couchbase Java SDK there were several check-and-set (CAS) methods for implementing optimistic locking. What is the corresponding API in newer versions of the SDK (>= 2.0)?
Initial code:
JsonDocument doc = bucket.get("myKey");
Long casValue = doc.cas();
// some method to set new value for "myKey" only if CAS value
// has not been changed
As you already saw, the CAS value is embedded in the document. Most methods in the API will take that into account if the CAS is not 0, for example replace(Document) will throw a CASMismatchException if the given document has a different CAS value than what is on the server.
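For example, a typical optimistic-locking retry loop against the 2.x API might look like the following sketch (the key and the numeric "count" field are made-up illustrations, not from the original question):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.error.CASMismatchException;

public class CasUpdate {
    // Sketch: read a document, mutate it, and replace it only if the
    // CAS value has not changed on the server in the meantime.
    static void incrementWithCas(Bucket bucket, String key) {
        while (true) {
            JsonDocument doc = bucket.get(key); // doc.cas() carries the server's CAS value
            doc.content().put("count", doc.content().getInt("count") + 1);
            try {
                bucket.replace(doc); // throws if the CAS on the server differs
                return;              // success
            } catch (CASMismatchException e) {
                // another writer won the race -- re-read and retry
            }
        }
    }
}
```

Because the document returned by get() already carries its CAS, simply passing it back to replace() gives you the old check-and-set semantics.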
I want to use the AWS SDK to set min/max tasks for the auto scaling policy of my ECS service.
I'm able to successfully modify the auto scaling group policy for my ECS container instances using code:
UpdateAutoScalingGroupRequest request = new UpdateAutoScalingGroupRequest().withAutoScalingGroupName("helloWorld-ASG").withMinSize(1);
UpdateAutoScalingGroupResult response = client.updateAutoScalingGroup(request);
UpdateScalingPlanResult scalingResponse = scalingClient.updateScalingPlan(scalingRequest);
but how do I do this for the auto scaling policy for my ECS service?
What classes do I need to do this? Is it possible?
For ECS service auto scaling, look at the AWSApplicationAutoScalingClient, PutScalingPolicyRequest, and PutScalingPolicyResult classes. Then, depending on your preferred scaling policy, you will need either the StepScalingPolicyConfiguration or TargetTrackingScalingPolicyConfiguration class.
See the following example taken from the AWS Java SDK docs:
AWSApplicationAutoScaling client = AWSApplicationAutoScalingClientBuilder.standard().build();
PutScalingPolicyRequest request = new PutScalingPolicyRequest()
        .withPolicyName("web-app-cpu-gt-75")
        .withServiceNamespace("ecs")
        .withResourceId("service/default/web-app")
        .withScalableDimension("ecs:service:DesiredCount")
        .withPolicyType("StepScaling")
        .withStepScalingPolicyConfiguration(
                new StepScalingPolicyConfiguration()
                        .withAdjustmentType("PercentChangeInCapacity")
                        .withStepAdjustments(new StepAdjustment()
                                .withMetricIntervalLowerBound(0d)
                                .withScalingAdjustment(200))
                        .withCooldown(60));
PutScalingPolicyResult response = client.putScalingPolicy(request);
The answer above is for version 1.x of the SDK. For version 2.x of the SDK, you would need something like this:
https://sdk.amazonaws.com/java/api/latest/index.html?software/amazon/awssdk/services/applicationautoscaling/ApplicationAutoScalingClient.html
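As a hedged sketch, the 2.x equivalent of the step-scaling example above might look like this (it mirrors the names and values from the 1.x snippet; verify the details against the linked v2 docs):

```java
import software.amazon.awssdk.services.applicationautoscaling.ApplicationAutoScalingClient;
import software.amazon.awssdk.services.applicationautoscaling.model.AdjustmentType;
import software.amazon.awssdk.services.applicationautoscaling.model.PolicyType;
import software.amazon.awssdk.services.applicationautoscaling.model.PutScalingPolicyRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.ScalableDimension;
import software.amazon.awssdk.services.applicationautoscaling.model.ServiceNamespace;
import software.amazon.awssdk.services.applicationautoscaling.model.StepAdjustment;
import software.amazon.awssdk.services.applicationautoscaling.model.StepScalingPolicyConfiguration;

public class EcsScalingPolicyV2 {
    // Builds the same step-scaling policy as the 1.x example,
    // using the v2 immutable builder style.
    static PutScalingPolicyRequest buildRequest() {
        return PutScalingPolicyRequest.builder()
                .policyName("web-app-cpu-gt-75")
                .serviceNamespace(ServiceNamespace.ECS)
                .resourceId("service/default/web-app")
                .scalableDimension(ScalableDimension.ECS_SERVICE_DESIRED_COUNT)
                .policyType(PolicyType.STEP_SCALING)
                .stepScalingPolicyConfiguration(StepScalingPolicyConfiguration.builder()
                        .adjustmentType(AdjustmentType.PERCENT_CHANGE_IN_CAPACITY)
                        .stepAdjustments(StepAdjustment.builder()
                                .metricIntervalLowerBound(0d)
                                .scalingAdjustment(200)
                                .build())
                        .cooldown(60)
                        .build())
                .build();
    }

    public static void main(String[] args) {
        try (ApplicationAutoScalingClient client = ApplicationAutoScalingClient.create()) {
            client.putScalingPolicy(buildRequest());
        }
    }
}
```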
Here is a link to the documentation for version 1 of the SDK. Does version 2.0 have something similar, or did they remove that option?
Yes! It is possible in AWS SDK v2 to execute S3 operations on regions other than the one configured in the client.
In order to do this, set useArnRegionEnabled to true on the client.
An example of this using Scala is:
val s3Configuration = S3Configuration.builder.useArnRegionEnabled(true).build
val client = S3Client
.builder
.credentialsProvider({$foo})
.region(Region.EU_WEST_1)
.overrideConfiguration({$foo})
.serviceConfiguration(s3Configuration)
.build
Here is the documentation: https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Configuration.Builder.html#useArnRegionEnabled-java.lang.Boolean-
Not supported, per the AWS SDK for Java 2.x migration documentation:
In version 1.x, services such as Amazon S3, Amazon SNS, and Amazon SQS allowed access to resources across Region boundaries. This is no longer allowed in version 2.x using the same client. If you need to access a resource in a different region, you must create a client in that region and retrieve the resource using the appropriate client.
This works for me with Java AWS SDK 2.16.98, and it only requires the name of the bucket rather than the full ARN.
private String bucketName = "my-bucket-in-some-region";

// This client seems to be able to look up the location of buckets from any region.
private S3Client defaultClient = S3Client.builder()
        .endpointOverride(URI.create("https://s3.us-east-1.amazonaws.com"))
        .region(Region.US_EAST_1)
        .build();

private S3Client bucketSpecificClient;

public S3Client getClient() {
    if (bucketSpecificClient == null) {
        String bucketLocation = defaultClient
                .getBucketLocation(builder -> builder.bucket(this.bucketName))
                .locationConstraintAsString();
        // An empty location constraint means the bucket lives in us-east-1.
        Region region = bucketLocation.trim().isEmpty() ? Region.US_EAST_1 : Region.of(bucketLocation);
        bucketSpecificClient = S3Client.builder().region(region).build();
    }
    return bucketSpecificClient;
}
Now you can use the client returned by getClient() to perform operations on objects in the bucket my-bucket-in-some-region.
I am trying to retrieve all the instances running in my AWS account (instance IDs, etc.) using the following code, but I am not able to print the instance IDs. When I debug, I just get null values, yet I have three instances running on AWS. Can someone point out what I am doing wrong here?
DescribeInstancesResult result = ec2.describeInstances();
List<Reservation> reservations = result.getReservations();
for (Reservation reservation : reservations) {
    List<Instance> instances = reservation.getInstances();
    for (Instance instance : instances) {
        System.out.println(instance.getInstanceId());
    }
}
The most common cause of issues like this is a missing region specification when initializing the client; see the section To create and initialize an Amazon EC2 client within Create an Amazon EC2 Client for details. Specifically, step 2 only creates an EC2 client without specifying the region explicitly:
2) Use the AWSCredentials object to create a new AmazonEC2Client instance, as follows:
amazonEC2Client = new AmazonEC2Client(credentials);
This yields a client talking to us-east-1. Surprisingly, the AWS SDKs and the AWS Management Console even use different defaults, as outlined in step 3, which also shows how to specify a different endpoint:
3) By default, the service endpoint is ec2.us-east-1.amazonaws.com. To specify a different endpoint, use the setEndpoint method. For example:
amazonEC2Client.setEndpoint("ec2.us-west-2.amazonaws.com");
The AWS SDK for Java uses US East (N. Virginia) as the default region
if you do not specify a region in your code. However, the AWS
Management Console uses US West (Oregon) as its default. Therefore,
when using the AWS Management Console in conjunction with your
development, be sure to specify the same region in both your code and
the console. [emphasis mine]
The differing defaults are easy to trip over, and the respective default in the AWS Management Console has in fact changed over time. As so often in software development, I recommend always being explicit about this in your code to avoid such subtle sources of error.
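In current 1.x releases of the SDK, the builder makes the region explicit at construction time, which avoids the setEndpoint dance entirely (us-west-2 below is only an example; substitute the region your instances run in):

```java
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;

public class Ec2ClientExample {
    // Build the client with an explicit region so the code and the
    // AWS Management Console are guaranteed to look at the same place.
    static AmazonEC2 buildClient() {
        return AmazonEC2ClientBuilder.standard()
                .withRegion(Regions.US_WEST_2)
                .build();
    }
}
```

With the region pinned this way, describeInstances() will return the reservations from us-west-2 regardless of any environment defaults.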
Using JClouds, up to version 1.6.x it was possible to access the native EC2 provider API by using the following idiom:
AWSEC2Client ec2Client = AWSEC2Client.class.cast(context.getProviderSpecificContext().getApi());
Actually, I copied it from the documentation page: http://jclouds.apache.org/guides/aws/
It turns out that in the latest release this method has been removed. Is there an alternative method/way to access to the provider specific features (security groups, key-pairs, etc)?
Unwrapping the API from the ComputeServiceContext
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
.credentials("accessKey", "secretAccessKey")
.buildView(ComputeServiceContext.class);
ComputeService computeService = context.getComputeService();
AWSEC2Api ec2Api = context.unwrapApi(AWSEC2Api.class);
Building the API directly
AWSEC2Api ec2Api = ContextBuilder.newBuilder("aws-ec2")
.credentials("accessKey", "secretAccessKey")
.buildApi(AWSEC2Api.class);
As my question title says, is Memcache supposed to play well with Google cloud endpoints?
Locally it does, I can store a key / value pair using JCache in my application and read it from within a Google Cloud Endpoints API method.
When I upload my application and run it in the cloud, it returns null, exactly as it did when I found out that we can't access sessions from inside Cloud Endpoints...
Am I doing something wrong, or is Google Cloud Endpoints not supposed to access the cache either?
I really need to share some tokens safely between my application and Cloud Endpoints, and I don't want to write/read from the Datastore (these are volatile tokens...). Any ideas?
Endpoints definitely works with memcache, using the built-in API. I just tried the following trivial snippet within an API method and saw the incrementing values as expected:
String key = "key";
Integer cached = 0;
MemcacheService memcacheService = MemcacheServiceFactory.getMemcacheService();
memcacheService.setErrorHandler(new StrictErrorHandler());
cached = (Integer) memcacheService.get(key);
if (cached == null) {
    memcacheService.put(key, 0);
} else {
    memcacheService.put(key, cached + 1);
}
This should work for you, unless you have a specific requirement for JCache.