I'm looking to leverage Rackspace's Cloud Files platform for large object storage (Word docs, images, etc.). Following some of their guides, I found a useful code snippet that looks like it should work, but doesn't in my case.
Iterable<Module> modules = ImmutableSet.<Module> of(
new Log4JLoggingModule());
Properties properties = new Properties();
properties.setProperty(LocationConstants.PROPERTY_ZONE, ZONE);
properties.setProperty(LocationConstants.PROPERTY_REGION, "ORD");
CloudFilesClient cloudFilesClient = ContextBuilder.newBuilder(PROVIDER)
.credentials(username, apiKey)
.overrides(properties)
.modules(modules)
.buildApi(CloudFilesClient.class);
The problem is that when this code executes, it tries to log me into the IAD (Virginia) instance of Cloud Files. My organization's goal is to use the ORD (Chicago) instance as the primary, colocated with our cloud, and use DFW as a backup environment. The login response returns the IAD instance first, so I'm assuming jclouds is using that. Browsing around, it looks like the ZONE/REGION attributes are ignored for Cloud Files. I was wondering if there is any way to override the authentication handling so that it loops through the returned providers and chooses which one to log in to.
Update:
The accepted answer is mostly good, with some more info available in this snippet:
RestContext<CommonSwiftClient, CommonSwiftAsyncClient> swift = cloudFilesClient.unwrap();
CommonSwiftClient client = swift.getApi();
SwiftObject object = client.newSwiftObject();
object.getInfo().setName(FILENAME + SUFFIX);
object.setPayload("This is my payload."); //input stream.
String id = client.putObject(CONTAINER, object);
System.out.println(id);
SwiftObject obj2 = client.getObject(CONTAINER,FILENAME + SUFFIX);
System.out.println(obj2.getPayload());
We are working on the next version of jclouds (1.7.1) that should include multi-region support for Rackspace Cloud Files and OpenStack Swift. In the meantime you might be able to use this code as a workaround.
private void uploadToRackspaceRegion() {
Iterable<Module> modules = ImmutableSet.<Module> of(new Log4JLoggingModule());
String provider = "swift-keystone"; //Region selection is limited to swift-keystone provider
String identity = "username";
String credential = "password";
String endpoint = "https://identity.api.rackspacecloud.com/v2.0/";
String region = "ORD";
Properties overrides = new Properties();
overrides.setProperty(LocationConstants.PROPERTY_REGION, region);
overrides.setProperty(Constants.PROPERTY_API_VERSION, "2");
BlobStoreContext context = ContextBuilder.newBuilder(provider)
.endpoint(endpoint)
.credentials(identity, credential)
.modules(modules)
.overrides(overrides)
.buildView(BlobStoreContext.class);
RestContext<CommonSwiftClient, CommonSwiftAsyncClient> swift = context.unwrap();
CommonSwiftClient client = swift.getApi();
SwiftObject uploadObject = client.newSwiftObject();
uploadObject.getInfo().setName("test.txt");
uploadObject.setPayload("This is my payload."); //input stream.
String eTag = client.putObject("jclouds", uploadObject);
System.out.println("eTag = " + eTag);
SwiftObject downloadObject = client.getObject("jclouds", "test.txt");
System.out.println("downloadObject = " + downloadObject.getPayload());
context.close();
}
Use swift as you would Cloud Files. Keep in mind that if you need to use Cloud Files CDN stuff, the above won't work for that. Also, know that this way of doing things will eventually be deprecated.
Related
Hi guys, I need some help. I have a super user in SharePoint and OneDrive.
I need to create an application in Java that, when given a username and a filename, goes into that user's drive, looks for the file, and returns its file ID. If by chance the user has multiple files with the same name, I need to return the latest one.
I have tried multiple ways but it does not seem to work.
Here is a copy of my code:
GraphServiceClient graphClient = (GraphServiceClient) GraphServiceClient.builder().authenticationProvider(authProvider).buildClient();
IDriveItemRequestBuilder sDriveReq = graphClient.users(userEmail).drive().root();
String encodedFileName = URLEncoder.encode(fileName, "UTF-8");
IDriveItemSearchCollectionRequest searchRequest = sDriveReq.search(encodedFileName).buildRequest();
IDriveItemSearchCollectionPage searchResult = searchRequest.get();
DriveItem fileResult = null;
for (DriveItem driveItem : searchResult.getCurrentPage()) {
fileResult = driveItem;
}
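The loop above just ends up holding the last item in the page, which is not necessarily the newest file. Below is a minimal sketch of picking the most recently modified match instead, assuming the older msgraph-sdk-java model used above where DriveItem exposes public id and lastModifiedDateTime fields; for large result sets you would also need to walk searchResult.getNextPage().
DriveItem latest = null;
for (DriveItem driveItem : searchResult.getCurrentPage()) {
    // keep the item with the most recent lastModifiedDateTime
    if (latest == null || latest.lastModifiedDateTime == null
            || (driveItem.lastModifiedDateTime != null
                && driveItem.lastModifiedDateTime.after(latest.lastModifiedDateTime))) {
        latest = driveItem;
    }
}
String fileId = (latest != null) ? latest.id : null;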
The following code will create a new API key in AWS API Gateway. Just for fun, I also get an existing usage plan called "Basic" with an id of "1234".
For the life of me I can't find out how to take my newly created API key and attach the existing usage plan to it. This can be done manually in the web portal with the "Add to Usage Plan" button, but I want to add my new user to a free plan programmatically.
BasicAWSCredentials awsCreds = new BasicAWSCredentials(aws_id, aws_key);
apiGateway = AmazonApiGatewayClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.withRegion(Regions.US_EAST_1).build();
CreateApiKeyRequest createApiKeyRequest = new CreateApiKeyRequest();
createApiKeyRequest.setName("awesome company);
createApiKeyRequest.setEnabled(true);
createApiKeyRequest.setCustomerId("someid");
CreateApiKeyResult result = apiGateway.createApiKey(createApiKeyRequest);
GetUsagePlanRequest getUsagePlanRequest = new GetUsagePlanRequest();
getUsagePlanRequest.setUsagePlanId("1234");
GetUsagePlanResult getUsagePlanResult = apiGateway.getUsagePlan(getUsagePlanRequest);
Any AWS SDK experts know how to connect a usage plan to an api key?
Here's the solution to my post. The key type being "API_KEY" isn't documented anywhere; I found it in some random Python sample :/ This creates a new user with an API key and adds them to a usage plan with the API Gateway Java SDK:
BasicAWSCredentials awsCreds = new BasicAWSCredentials(aws_id, aws_key);
apiGateway = AmazonApiGatewayClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.withRegion(Regions.US_EAST_1).build();
CreateApiKeyRequest createApiKeyRequest = new CreateApiKeyRequest();
createApiKeyRequest.setName("My awesome new user");
createApiKeyRequest.setEnabled(true);
createApiKeyRequest.setCustomerId(UUID.randomUUID().toString());
CreateApiKeyResult result = apiGateway.createApiKey(createApiKeyRequest);
GetUsagePlanRequest getUsagePlanRequest = new GetUsagePlanRequest();
getUsagePlanRequest.setUsagePlanId(BASIC_USAGE_PLAN_ID);
CreateUsagePlanKeyRequest createUsagePlanKeyRequest = new CreateUsagePlanKeyRequest()
.withUsagePlanId(BASIC_USAGE_PLAN_ID);
createUsagePlanKeyRequest.setKeyId(result.getId());
createUsagePlanKeyRequest.setKeyType("API_KEY");
apiGateway.createUsagePlanKey(createUsagePlanKeyRequest);
This should maybe be a comment instead, but I made an answer for readability (the key type is documented here).
// Client
AmazonApiGateway client = AmazonApiGatewayClientBuilder.standard().withRegion("my region here").build();
// Create new key
CreateApiKeyRequest keyReq = new CreateApiKeyRequest();
keyReq.setName("key name");
keyReq.setDescription("description");
keyReq.setEnabled(true);
CreateApiKeyResult keyRes = client.createApiKey(keyReq);
// Use existing plan
CreateUsagePlanKeyRequest planReq = new CreateUsagePlanKeyRequest();
planReq.setUsagePlanId("my usage plan id");
planReq.setKeyId(keyRes.getId()); // id from new key
planReq.setKeyType("API_KEY");
// add key to plan
client.createUsagePlanKey(planReq);
Note that this example does not include a try-catch block.
I want to change all files in a folder on Google Cloud Storage to be publicly shared.
I see how to do this via gsutil.
How can I do this via the Java API?
Here is my attempt:
public static void main(String[] args) throws Exception {
//// more setting up code here...
GoogleCredential credential = GoogleCredential.fromStream(credentialsStream, httpTransport, jsonFactory);
credential = credential.createScoped(StorageScopes.all());
final Storage storage = new Storage.Builder(httpTransport, jsonFactory, credential)
.setApplicationName("monkeyduck")
.build();
final Storage.Objects.Get getRequest1 = storage.objects().get(bucketName, "sounds/1.0/arabic_test22/1000meters.mp3");
final StorageObject object1 = getRequest1.execute();
System.out.println(object1);
final List<ObjectAccessControl> aclList = new ArrayList<>();
// final ObjectAccessControl acl = new ObjectAccessControl()
// .setRole("PUBLIC-READER")
// .setProjectTeam(new ObjectAccessControl.ProjectTeam().setTeam("viewers"));
final ObjectAccessControl acl = new ObjectAccessControl()
.setRole("READER").setEntity("allUsers");
//System.out.println(acl);
aclList.add(acl);
object1.setAcl(aclList);
final Storage.Objects.Insert insertRequest = storage.objects().insert(bucketName, object1);
insertRequest.getMediaHttpUploader().setDirectUploadEnabled(true);
insertRequest.execute();
}
}
I get an NPE because insertRequest.getMediaHttpUploader() == null.
Instead of using objects().insert(), try using the ACL API:
ObjectAccessControl oac = new ObjectAccessControl();
oac.setEntity("allUsers");
oac.setRole("READER");
Insert insert = service.objectAccessControls().insert(bucketName, "sounds/1.0/arabic_test22/1000meters.mp3", oac);
insert.execute();
About the folder matter: in Cloud Storage the concept of a "folder" does not exist; there are only buckets and object names.
The fact that you can see files grouped inside folders (in the Cloud Storage Browser) is only a graphical representation. With the API you always work with a bucket and an object name.
Knowing this, the Objects: list method provides a prefix parameter which you can use to filter all the objects whose names start with it. If you think of the start of your object name as the folder, this filter can achieve what you're looking for (see the sketch after the quoted documentation below).
From the documentation of the API I quote:
In conjunction with the prefix filter, the use of the delimiter
parameter allows the list method to operate like a directory listing,
despite the object namespace being flat. For example, if delimiter
were set to "/", then listing objects from a bucket that contains the
objects "a/b", "a/c", "d", "e", "e/f" would return objects "d" and
"e", and prefixes "a/" and "e/".
I am trying to use a Lambda function for S3 Put event notifications. My Lambda function should be called once I put/add any new JSON file in my S3 bucket.
The challenge I have is that there is not much documentation on implementing such a Lambda function in Java; most of the docs I found are for Node.js.
I want my Lambda function to be called, and inside that function I want to consume the newly added JSON and then send it to the AWS ES service.
But which classes should I use for this? Does anyone have any idea? S3 and ES are both set up and running. The auto-generated code for the Lambda is:
@Override
public Object handleRequest(S3Event input, Context context) {
context.getLogger().log("Input: " + input);
// TODO: implement your handler
return null;
}
What next??
Handling S3 events in Lambda can be done, but you have to keep in mind that the S3Event object only transports the reference to the object and not the object itself. To get to the actual object you have to invoke the AWS SDK yourself.
Requesting an S3 object within a Lambda function would look like this:
public Object handleRequest(S3Event input, Context context) {
AmazonS3Client s3Client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
for (S3EventNotificationRecord record : input.getRecords()) {
String s3Key = record.getS3().getObject().getKey();
String s3Bucket = record.getS3().getBucket().getName();
context.getLogger().log("found id: " + s3Bucket+" "+s3Key);
// retrieve s3 object
S3Object object = s3Client.getObject(new GetObjectRequest(s3Bucket, s3Key));
InputStream objectData = object.getObjectContent();
//insert object into elasticsearch
}
return null;
}
Now comes the rather difficult part: inserting this object into Elasticsearch. Sadly, the AWS SDK does not provide any functions for this. The default approach would be to do a REST call against the AWS ES endpoint. There are various samples out there on how to proceed with calling an Elasticsearch instance.
Some people seem to go with the following project:
Jest - Elasticsearch Java REST Client
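For example, a rough sketch with Jest (classes from the io.searchbox packages) might look like the following; the endpoint, index, and type names are placeholders, and jsonDocument is the JSON string you built from the S3 object:
// Build a Jest client pointed at the AWS ES endpoint (placeholder URL)
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(new HttpClientConfig.Builder("https://my-es-domain.us-east-1.es.amazonaws.com")
        .multiThreaded(true)
        .build());
JestClient jestClient = factory.getObject();

// Index the JSON document read from S3 (index/type/id values are illustrative)
Index indexAction = new Index.Builder(jsonDocument)
        .index("myindex")
        .type("mytype")
        .id(s3Key)
        .build();
jestClient.execute(indexAction); // execute() throws IOException; handle or declare it as needed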
Finally, here are the steps for S3 --> Lambda --> ES integration using Java.
Have your S3, Lambda and ES created on AWS. Steps are here.
Use the Java code below in your Lambda function to fetch a newly added object in S3 and send it to the ES service.
public Object handleRequest(S3Event input, Context context) {
AmazonS3Client s3Client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
for (S3EventNotificationRecord record : input.getRecords()) {
String s3Key = record.getS3().getObject().getKey();
String s3Bucket = record.getS3().getBucket().getName();
context.getLogger().log("found id: " + s3Bucket+" "+s3Key);
// retrieve s3 object
S3Object object = s3Client.getObject(new GetObjectRequest(s3Bucket, s3Key));
InputStream objectData = object.getObjectContent();
//Start putting your objects in AWS ES Service
String esInput = "Build your JSON string here using S3 objectData";
HttpClient httpClient = new DefaultHttpClient();
HttpPut putRequest = new HttpPut(AWS_ES_ENDPOINT + "/{Index_name}/{product_name}/{unique_id}" );
StringEntity input = new StringEntity(esInput);
input.setContentType("application/json");
putRequest.setEntity(input);
httpClient.execute(putRequest);
httpClient.getConnectionManager().shutdown();
}
return "success";}
Use either Postman or Sense to create the actual index and corresponding mapping in ES.
Once done, download and run proxy.js on your machine. Make sure you set up the ES security steps suggested in this post.
Test the setup and Kibana by opening the http://localhost:9200/_plugin/kibana/ URL from your machine.
All is set. Go ahead and set up your dashboard in Kibana. Test it by adding new objects to your S3 bucket.
This is part of my code snippet:
WorkspaceConnector connector = null;
WorkspaceFactory workspaceFactory = null;
String variableListString = null;
Properties sasServerProperties = new Properties();
sasServerProperties.put("host", host);
sasServerProperties.put("port", port);
sasServerProperties.put("userName", userName);
sasServerProperties.put("password", password);
Properties[] sasServerPropertiesList = { sasServerProperties };
workspaceFactory = new WorkspaceFactory(sasServerPropertiesList, null, logWriter);
connector = workspaceFactory.getWorkspaceConnector(0L);
IWorkspace sasWorkspace = connector.getWorkspace();
ILanguageService sasLanguage = sasWorkspace.LanguageService();
//send variable list string
//continued
I need to send the variableListString to the SAS server through the IOM bridge. The Java SAS API doesn't give an explicit way to do it. Is using CORBA and JDBC the best way to do it? Give me a hint how to do it. Is there any alternative method?
This was asked a while back, but it may be useful in case anyone is still looking to do the same.
One way to do this is to build a string of SAS code and submit it to the server. We use this method for setting up variables on the host for the connected session. You can also use this technique to include SAS code with something like %include "path to my code/my sas code.sas"; (see the short example after the code below):
...continue from code in the question...
ILanguageService langService = sasWorkspace.LanguageService();
StringBuilder sb = new StringBuilder();
sb.append("%let mysasvar=" + javalocalvar + ";"); // each SAS statement needs a terminating semicolon
// ... append more variables as needed
try {
    langService.Submit(sb.toString());
} catch (GenericError e) {
    e.printStackTrace();
}