The following code creates a new API key in AWS API Gateway. Just for fun, I also fetch an existing usage plan called "Basic" with an id of "1234".
For the life of me I can't figure out how to take my newly created API key and add it to that existing usage plan. This can be done manually in the web console with the "Add to Usage Plan" button, but I want to add my new user to a free plan programmatically.
BasicAWSCredentials awsCreds = new BasicAWSCredentials(aws_id, aws_key);
apiGateway = AmazonApiGatewayClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.withRegion(Regions.US_EAST_1).build();
CreateApiKeyRequest createApiKeyRequest = new CreateApiKeyRequest();
createApiKeyRequest.setName("awesome company);
createApiKeyRequest.setEnabled(true);
createApiKeyRequest.setCustomerId("someid");
CreateApiKeyResult result = apiGateway.createApiKey(createApiKeyRequest);
GetUsagePlanRequest getUsagePlanRequest = new GetUsagePlanRequest();
getUsagePlanRequest.setUsagePlanId("1234");
GetUsagePlanResult getUsagePlanResult = apiGateway.getUsagePlan(getUsagePlanRequest);
Any AWS SDK experts know how to connect a usage plan to an api key?
Here's the solution to my post - the key type "API_KEY" isn't documented anywhere; I found it in some random Python sample :/ This creates a new user with an API key and adds them to a usage plan using the API Gateway Java SDK:
BasicAWSCredentials awsCreds = new BasicAWSCredentials(aws_id, aws_key);
apiGateway = AmazonApiGatewayClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.withRegion(Regions.US_EAST_1).build();
CreateApiKeyRequest createApiKeyRequest = new CreateApiKeyRequest();
createApiKeyRequest.setName("My awesome new user");
createApiKeyRequest.setEnabled(true);
createApiKeyRequest.setCustomerId(UUID.randomUUID().toString());
CreateApiKeyResult result = apiGateway.createApiKey(createApiKeyRequest);
GetUsagePlanRequest getUsagePlanRequest = new GetUsagePlanRequest();
getUsagePlanRequest.setUsagePlanId(BASIC_USAGE_PLAN_ID);
CreateUsagePlanKeyRequest createUsagePlanKeyRequest = new CreateUsagePlanKeyRequest()
.withUsagePlanId(BASIC_USAGE_PLAN_ID);
createUsagePlanKeyRequest.setKeyId(result.getId());
createUsagePlanKeyRequest.setKeyType("API_KEY");
apiGateway.createUsagePlanKey(createUsagePlanKeyRequest);
This should maybe be a comment instead, but I made it an answer for readability (the key type is documented here).
// Client
AmazonApiGateway client = AmazonApiGatewayClientBuilder.standard().withRegion("my region here").build();
// Create new key
CreateApiKeyRequest keyReq = new CreateApiKeyRequest();
keyReq.setName("key name");
keyReq.setDescription("description");
keyReq.setEnabled(true);
CreateApiKeyResult keyRes = client.createApiKey(keyReq);
// Use existing plan
CreateUsagePlanKeyRequest planReq = new CreateUsagePlanKeyRequest();
planReq.setUsagePlanId("my usage plan id");
planReq.setKeyId(keyRes.getId()); // id from new key
planReq.setKeyType("API_KEY");
// add key to plan
client.createUsagePlanKey(planReq);
Note that this example is without a try-catch block.
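If you do want error handling, here is a minimal sketch of the same two calls wrapped in a try-catch. The key name and usage plan id are placeholders, and the exceptions caught are the generic AWS SDK ones rather than anything specific to API Gateway.
// Minimal sketch with error handling. "key name" and "my-usage-plan-id" are placeholders.
AmazonApiGateway client = AmazonApiGatewayClientBuilder.defaultClient();
try {
    CreateApiKeyResult key = client.createApiKey(new CreateApiKeyRequest()
            .withName("key name")
            .withEnabled(true));
    client.createUsagePlanKey(new CreateUsagePlanKeyRequest()
            .withUsagePlanId("my-usage-plan-id")
            .withKeyId(key.getId())
            .withKeyType("API_KEY"));
} catch (AmazonServiceException e) {
    // The request reached API Gateway but was rejected (e.g. plan not found, throttling).
    System.err.println("API Gateway rejected the request: " + e.getErrorMessage());
} catch (SdkClientException e) {
    // The request never reached the service (e.g. credentials or network problem).
    System.err.println("Could not call API Gateway: " + e.getMessage());
}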
I'm working on a project that uses the Uber Cadence Java client. How can I get the list of open/closed workflows from code? I can get it from the CLI but not from Java code. Thank you.
WorkflowServiceTChannel cadenceService =
        new WorkflowServiceTChannel(ClientOptions.defaultInstance());

// List open workflow executions
ListOpenWorkflowExecutionsRequest openRequest = new ListOpenWorkflowExecutionsRequest();
openRequest.setDomain(DOMAIN);
// openRequest.set... any additional filters you need
ListOpenWorkflowExecutionsResponse openResp = cadenceService.ListOpenWorkflowExecutions(openRequest);

// List closed workflow executions
ListClosedWorkflowExecutionsRequest closedRequest = new ListClosedWorkflowExecutionsRequest();
closedRequest.setDomain(DOMAIN);
// closedRequest.set... any additional filters you need
ListClosedWorkflowExecutionsResponse closedResp = cadenceService.ListClosedWorkflowExecutions(closedRequest);

// If you have advanced visibility
ListWorkflowExecutionsRequest listRequest = new ListWorkflowExecutionsRequest();
listRequest.setDomain(DOMAIN);
// listRequest.setQuery(...) with a visibility query string
ListWorkflowExecutionsResponse listResp = cadenceService.ListWorkflowExecutions(listRequest);
See how the cadenceService is used in this sample
Documentation about advanced visibility
I am trying to add CNAMEs to an existing distribution in AWS CloudFront programmatically.
I have tried the following code, but it did not give any result. If someone knows how to do this programmatically, please be kind enough to mention it. Thank you.
AmazonCloudFront cloudFront = AmazonCloudFrontAsyncClientBuilder.standard()
.withRegion(Regions.AP_EAST_1)
.withCredentials(new AWSStaticCredentialsProvider(
new BasicAWSCredentials(route53Manager.getAccessKey(), route53Manager.getSecretKey())))
.build();
GetDistributionConfigResult result = cloudFront.getDistributionConfig(
new GetDistributionConfigRequest("E1EJBNNYJZ6G34"));
Aliases aliases = new Aliases()
.withItems(subDomain)
.withQuantity(1);
DistributionConfig config = result.getDistributionConfig()
.withEnabled(true)
.withAliases(aliases);
It looks like you are missing the call that actually updates the distribution, plus a few other things. See the code below:
AmazonCloudFront cloudFront = AmazonCloudFrontAsyncClientBuilder.standard()
.withRegion(Regions.AP_EAST_1)
.withCredentials(new AWSStaticCredentialsProvider(
new BasicAWSCredentials(route53Manager.getAccessKey(), route53Manager.getSecretKey())))
.build();
//create the request
GetDistributionConfigRequest distributionConfigRequest = new GetDistributionConfigRequest("E1EJBNNYJZ6G34");
//submit the request and get the resulting config
GetDistributionConfigResult distributionConfigResult = cloudFront.getDistributionConfig(distributionConfigRequest);
Aliases aliases = new Aliases()
.withItems(subDomain)
.withQuantity(1);
DistributionConfig config = distributionConfigResult.getDistributionConfig()
.withEnabled(true)
.withAliases(aliases);
//create the update request
UpdateDistributionRequest updateDistributionRequest = new UpdateDistributionRequest(config, distributionConfigRequest.getId(), distributionConfigResult.getETag());
//submit the request to update the config
UpdateDistributionResult updateDistributionResult = cloudFront.updateDistribution(updateDistributionRequest);
//print output of result to console
System.out.println(updateDistributionResult);
I am trying to use a Lambda function for S3 put event notifications. My Lambda function should be called whenever I put/add a new JSON file in my S3 bucket.
The challenge I have is that there is not enough documentation on how to implement such a Lambda function in Java. Most of the docs I found are for Node.js.
I want my Lambda function to be called, and then inside that Lambda function I want to consume the added JSON and send it to the AWS ES service.
But what classes should I use for this? Does anyone have any idea about this? S3 and ES are all set up and running. The auto-generated code for the Lambda is:
@Override
public Object handleRequest(S3Event input, Context context) {
context.getLogger().log("Input: " + input);
// TODO: implement your handler
return null;
}
What next??
Handling S3 events in Lambda can be done, but keep in mind that the S3Event object only transports a reference to the object, not the object itself. To get the actual object you have to invoke the AWS SDK yourself.
Requesting an S3 object within a Lambda function would look like this:
public Object handleRequest(S3Event input, Context context) {
AmazonS3Client s3Client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
for (S3EventNotificationRecord record : input.getRecords()) {
String s3Key = record.getS3().getObject().getKey();
String s3Bucket = record.getS3().getBucket().getName();
context.getLogger().log("found id: " + s3Bucket+" "+s3Key);
// retrieve s3 object
S3Object object = s3Client.getObject(new GetObjectRequest(s3Bucket, s3Key));
InputStream objectData = object.getObjectContent();
//insert object into elasticsearch
}
return null;
}
Now for the rather difficult part: inserting this object into Elasticsearch. Sadly, the AWS SDK does not provide any functions for this. The default approach is to make a REST call against the AWS ES endpoint. There are various samples out there on how to call an Elasticsearch instance.
Some people seem to go with the following project:
Jest - Elasticsearch Java Rest Client
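For illustration, a minimal sketch of indexing one document with Jest might look like the snippet below. The endpoint, index name, type and id are placeholders, not values from the question, and exception handling is omitted.
// Jest classes come from the io.searchbox packages (JestClientFactory, HttpClientConfig, Index).
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(new HttpClientConfig.Builder("https://my-es-endpoint.us-east-1.es.amazonaws.com")
        .multiThreaded(true)
        .build());
JestClient jestClient = factory.getObject();
// Index one JSON document; index/type/id are placeholders.
String json = "{\"title\":\"example\"}";
Index indexAction = new Index.Builder(json)
        .index("myindex")
        .type("mytype")
        .id("1")
        .build();
DocumentResult result = jestClient.execute(indexAction);
System.out.println("Indexed successfully: " + result.isSucceeded());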
Finally, here are the steps for S3 --> Lambda --> ES integration using Java.
Have your S3, Lambda and ES created on AWS. Steps are here.
Use the Java code below in your Lambda function to fetch a newly added object from S3 and send it to the ES service.
public Object handleRequest(S3Event input, Context context) {
AmazonS3Client s3Client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
for (S3EventNotificationRecord record : input.getRecords()) {
String s3Key = record.getS3().getObject().getKey();
String s3Bucket = record.getS3().getBucket().getName();
context.getLogger().log("found id: " + s3Bucket+" "+s3Key);
// retrieve s3 object
S3Object object = s3Client.getObject(new GetObjectRequest(s3Bucket, s3Key));
InputStream objectData = object.getObjectContent();
//Start putting your objects in AWS ES Service
String esInput = "Build your JSON string here using S3 objectData";
HttpClient httpClient = new DefaultHttpClient();
HttpPut putRequest = new HttpPut(AWS_ES_ENDPOINT + "/{Index_name}/{product_name}/{unique_id}" );
StringEntity input = new StringEntity(esInput);
input.setContentType("application/json");
putRequest.setEntity(input);
httpClient.execute(putRequest);
httpClient.getConnectionManager().shutdown();
}
return "success";}
Use either Postman or Sense to create the actual index and corresponding mapping in ES (a Java sketch of the same step is shown after these steps).
Once done, download and run proxy.js on your machine. Make sure you set up the ES security steps suggested in this post.
Test the setup and Kibana by opening the http://localhost:9200/_plugin/kibana/ URL from your machine.
All is set. Go ahead and set up your dashboard in Kibana. Test it by adding new objects to your S3 bucket.
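If you would rather create the index and mapping from Java instead of Postman/Sense, a rough sketch using the same HttpClient approach as the Lambda code above could look like this. The index name and mapping body are placeholders and would need to match your own documents.
// Create an index with a simple mapping via a plain HTTP PUT.
// AWS_ES_ENDPOINT, "myindex" and the mapping JSON are placeholders.
HttpClient httpClient = new DefaultHttpClient();
HttpPut createIndex = new HttpPut(AWS_ES_ENDPOINT + "/myindex");
String mapping = "{ \"mappings\": { \"product\": { \"properties\": {"
        + " \"name\": { \"type\": \"string\" } } } } }";
StringEntity body = new StringEntity(mapping);
body.setContentType("application/json");
createIndex.setEntity(body);
HttpResponse response = httpClient.execute(createIndex);
System.out.println(response.getStatusLine());
httpClient.getConnectionManager().shutdown();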
I am trying to import a VMDK into AWS EC2, but there seems to be no good Java API documentation.
I have come across the flow below, which should work:
DiskImageDetail id = new DiskImageDetail();
id.setFormat(DiskImageFormat.VMDK);
// id.setImportManifestUrl(importManifestUrl)
// TODO: set to 10GB e.g.
id.setBytes(80000000000L);
VolumeDetail volume = new VolumeDetail();
volume.setSize(80000000000L);
DiskImage i = new DiskImage();
i.setImage(id);
i.setVolume(volume);
i.setDescription("disk image");
List<DiskImage> listImages = new ArrayList<DiskImage>();
listImages.add(i);
ImportInstanceLaunchSpecification ls = new ImportInstanceLaunchSpecification();
ImportInstanceRequest ir = new ImportInstanceRequest();
ir.setDescription("Test");
ir.setDiskImages(listImages);
ir.setRequestCredentials(Connection.getAWSCredentials());
// ir.setGeneralProgressListener()
ir.setLaunchSpecification(ls);
// Some code to set
ImportVolumeRequest ivr = new ImportVolumeRequest();
//ivr.setSomeData();
AmazonEC2 ec2 = // set some connection
ec2.importInstance(ir);
ec2.importVolume(ivr);
But I am not sure about what values to pass, and there's no sample code either!
It can be done with the cmdlets, but with Java I don't see anything hopeful.
I'd appreciate any help on this.
Thanks in advance.
I'm looking to leverage Rackspace's Cloud Files platform for large object storage (Word docs, images, etc.). Following some of their guides, I found a useful code snippet that looks like it should work, but doesn't in my case.
Iterable<Module> modules = ImmutableSet.<Module> of(
new Log4JLoggingModule());
Properties properties = new Properties();
properties.setProperty(LocationConstants.PROPERTY_ZONE, ZONE);
properties.setProperty(LocationConstants.PROPERTY_REGION, "ORD");
CloudFilesClient cloudFilesClient = ContextBuilder.newBuilder(PROVIDER)
.credentials(username, apiKey)
.overrides(properties)
.modules(modules)
.buildApi(CloudFilesClient.class);
The problem is that when this code executes, it logs me into the IAD (Virginia) instance of Cloud Files. My organization's goal is to use the ORD (Chicago) instance as primary, to be colocated with our cloud, and to use DFW as a backup environment. The login response returns the IAD instance first, so I'm assuming jclouds is using that. Browsing around, it looks like the ZONE/REGION attributes are ignored for Cloud Files. I was wondering if there is any way to override the authentication code so that it loops through the returned providers and chooses which one to log in to.
Update:
The accepted answer is mostly good, with some more info available in this snippet:
RestContext<CommonSwiftClient, CommonSwiftAsyncClient> swift = cloudFilesClient.unwrap();
CommonSwiftClient client = swift.getApi();
SwiftObject object = client.newSwiftObject();
object.getInfo().setName(FILENAME + SUFFIX);
object.setPayload("This is my payload."); //input stream.
String id = client.putObject(CONTAINER, object);
System.out.println(id);
SwiftObject obj2 = client.getObject(CONTAINER,FILENAME + SUFFIX);
System.out.println(obj2.getPayload());
We are working on the next version of jclouds (1.7.1) that should include multi-region support for Rackspace Cloud Files and OpenStack Swift. In the meantime you might be able to use this code as a workaround.
private void uploadToRackspaceRegion() {
Iterable<Module> modules = ImmutableSet.<Module> of(new Log4JLoggingModule());
String provider = "swift-keystone"; //Region selection is limited to swift-keystone provider
String identity = "username";
String credential = "password";
String endpoint = "https://identity.api.rackspacecloud.com/v2.0/";
String region = "ORD";
Properties overrides = new Properties();
overrides.setProperty(LocationConstants.PROPERTY_REGION, region);
overrides.setProperty(Constants.PROPERTY_API_VERSION, "2");
BlobStoreContext context = ContextBuilder.newBuilder(provider)
.endpoint(endpoint)
.credentials(identity, credential)
.modules(modules)
.overrides(overrides)
.buildView(BlobStoreContext.class);
RestContext<CommonSwiftClient, CommonSwiftAsyncClient> swift = context.unwrap();
CommonSwiftClient client = swift.getApi();
SwiftObject uploadObject = client.newSwiftObject();
uploadObject.getInfo().setName("test.txt");
uploadObject.setPayload("This is my payload."); //input stream.
String eTag = client.putObject("jclouds", uploadObject);
System.out.println("eTag = " + eTag);
SwiftObject downloadObject = client.getObject("jclouds", "test.txt");
System.out.println("downloadObject = " + downloadObject.getPayload());
context.close();
}
Use swift as you would Cloud Files. Keep in mind that if you need to use Cloud Files CDN stuff, the above won't work for that. Also, know that this way of doing things will eventually be deprecated.