I'm trying to implement a move operation using the Amazon S3 Java API.
The problem I am having is that the CopyObjectResult object returned by the AmazonS3Client.copyObject method doesn't seem to have a clear indicator of whether the operation was successful or not.
Given that after this operation I am going to be deleting a file, I'd want to make sure that the operation was successful.
Any suggestions?
This is what I ended up doing:
def s3 = createS3Client(credentials)
def originalMeta = s3.getObjectMetadata(originalBucketName, key)
s3.copyObject(originalBucketName, key, newBucketName, newKey)
def newMeta = s3.getObjectMetadata(newBucketName, newKey)
// check that the ETags match to confirm the operation was successful
// (the ETag is the MD5 for non-multipart objects; getContentMD5() is
// usually null on a GET response, so compare ETags instead)
return originalMeta.getETag() == newMeta.getETag()
Note that this is Groovy code, but it is extremely similar to how the Java code would work.
I don't like having to make two additional operations to check the metadata, so if there is any way to do this more efficiently, let me know.
I'm pretty sure you can just use the CopyObjectResult object's getETag method to confirm that the newly created object has a valid ETag, matching the one described in the setETag method:
getETag
public String getETag()
Gets the ETag value for the new object that was created in the associated CopyObjectRequest.
Returns: The ETag value for the new object.
See Also: setETag(String)

setETag
public void setETag(String etag)
Sets the ETag value for the new object that was created from the associated copy object request.
Parameters: etag - The ETag value for the new object.
See Also: getETag()
Always trust the data.
It's been a couple of years since I wrote a similar function with the Amazon PHP SDK, but it should work.
The AWS documentation says:
The source and destination ETag is identical for a successfully copied non-multipart object.
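If you go this route in Java, the source ETag comes from getObjectMetadata(...).getETag() and the destination ETag from the CopyObjectResult returned by copyObject. The comparison itself reduces to a small helper; this is a sketch (the class and method names are mine, not the SDK's), and note that ETags sometimes arrive wrapped in double quotes and only correspond to the MD5 for non-multipart uploads:

```java
// Hypothetical helper: verify an S3 copy by comparing ETags.
// ETag values may or may not carry surrounding double quotes depending
// on the code path, so normalize before comparing. This check is only
// meaningful for non-multipart objects, whose ETag is the MD5 digest.
public class CopyVerifier {

    static String normalizeEtag(String etag) {
        if (etag == null) return null;
        return etag.replace("\"", "");
    }

    public static boolean etagsMatch(String sourceEtag, String destEtag) {
        String a = normalizeEtag(sourceEtag);
        String b = normalizeEtag(destEtag);
        return a != null && a.equals(b);
    }
}
```

You would call etagsMatch(originalMeta.getETag(), copyResult.getETag()) before issuing the delete, and skip the second getObjectMetadata round trip entirely.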
Is it possible to get the Stream ARN of a DynamoDB table using the AWS CDK?
I tried the below, but when I access the stream ARN using getTableStreamArn, it returns null.
ITable table = Table.fromTableArn(this, "existingTable", <<existingTableArn>>);
System.out.println("ITable Stream Arn : " + table.getTableStreamArn());
Tried using fromTableAttributes as well, but the stream ARN is still empty.
ITable table =
Table.fromTableAttributes(
this, "existingTable", TableAttributes.builder().tableArn(<<existingTableArn>>).build());
This is not possible with the fromTableArn method. Please see the documentation here:
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-dynamodb-readme.html#importing-existing-tables
If you intend to use the tableStreamArn (including indirectly, for
example by creating an
@aws-cdk/aws-lambda-event-sources.DynamoEventSource on the imported
table), you must use the Table.fromTableAttributes method and the
tableStreamArn property must be populated.
That value is most likely not available when your Java code is running.
With the CDK there is a multi-step process to get your code to execute:
Your Java code is executed and triggers the underlying JSII layer
JSII executes the underlying JavaScript/TypeScript implementation of the CDK
The TypeScript layer produces the CloudFormation template
The CloudFormation template (and other assets) is sent to the AWS API
CloudFormation executes the template and provisions the resources
Some attributes are only available during step 5; before that, they contain only internal references that are eventually put into the CloudFormation template. If I recall correctly, the table stream ARN is one of them.
That means if you want that value, you have to create a CloudFormation output that exposes it; the value will be populated during deployment.
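For illustration, such an output would look like this in the generated template (the logical ID MyTable is hypothetical, and the table must have a stream enabled via StreamSpecification for the StreamArn attribute to exist):

```yaml
# Hypothetical CloudFormation output exposing a DynamoDB table's
# stream ARN; "MyTable" stands in for the table's actual logical ID.
Outputs:
  TableStreamArn:
    Value: !GetAtt MyTable.StreamArn
```

From CDK Java code, the same effect comes from a CfnOutput construct whose value is table.getTableStreamArn(); after cdk deploy, the resolved ARN appears in the stack outputs.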
I have an S3-Bucket with two files:
s3://bucketA/objectA/objectB/fileA
s3://bucketA/objectA/objectB/fileB
I want to use the s3Client in Java to create a copy of objectA, called objectC, using the copyObject method of s3Client.
s3://bucketA/objectA/ ---Copy-To---> s3://bucketA/objectC/
The problem is that the contents of objectA are not being copied into objectC. objectC does not contain objectB, fileA, or fileB. How can I copy the contents of the object as well?
Here is my code (I am using Kotlin):
s3client.copyObject(CopyObjectRequest("bucketA", "objectA","bucketA", "objectC"))
I checked in the S3 console and this creates a folder called objectC, but I'm unable to get the contents of objectA into objectC.
What is happening is that the SDK call copies a single object; it does not make a recursive copy.
So the easiest solution is to use the AWS CLI
aws s3 cp s3://source-awsexamplebucket s3://destination-awsexamplebucket --recursive --storage-class STANDARD
Note that you have to take into consideration the size of the objects, their number, etc. If it's something too big, a batch mechanism could be built to help your system cope with the load. You can read further on this in the AWS documentation.
Now, assuming you need to do this programmatically: the algorithm has two parts, listing and copying. Something along these lines will work.
ListObjectsV2Result result = s3.listObjectsV2(from_bucket);
List<S3ObjectSummary> objects = result.getObjectSummaries();
for (S3ObjectSummary os : objects) {
    s3.copyObject(from_bucket, os.getKey(), to_bucket, os.getKey());
}
// exception handling omitted for brevity; note that a single listObjectsV2
// call returns at most 1000 keys, so paginate with isTruncated() and
// getNextContinuationToken() for larger buckets
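Because S3 keys are flat (folders are just key prefixes), copying objectA/ to objectC/ within the same bucket also means rewriting each listed key before the copy call. A small helper for that step (the class and method names are mine, not the SDK's):

```java
// Hypothetical helper for copying an S3 "folder": S3 has no real
// directories, so each object key must have its source prefix
// replaced with the destination prefix before calling copyObject.
public class KeyRewriter {

    public static String rewriteKey(String key, String fromPrefix, String toPrefix) {
        if (!key.startsWith(fromPrefix)) {
            throw new IllegalArgumentException("key does not start with prefix: " + key);
        }
        return toPrefix + key.substring(fromPrefix.length());
    }
}
```

Inside the loop you would then call s3.copyObject(bucket, os.getKey(), bucket, KeyRewriter.rewriteKey(os.getKey(), "objectA/", "objectC/")), listing with the "objectA/" prefix to begin with.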
I want to extract signature changes (method parameter changes to be exact) from commits to git repository by a java program. I have used the following code:
for (Ref branch : branches) {
    String branchName = branch.getName();
    for (RevCommit commit : commits) {
        boolean foundInThisBranch = false;
        RevCommit targetCommit = walk.parseCommit(repo.resolve(commit.getName()));
        for (Map.Entry<String, Ref> e : repo.getAllRefs().entrySet()) {
            if (e.getKey().startsWith(Constants.R_HEADS)) {
                if (walk.isMergedInto(targetCommit, walk.parseCommit(e.getValue().getObjectId()))) {
                    String foundInBranch = e.getValue().getName();
                    if (branchName.equals(foundInBranch)) {
                        foundInThisBranch = true;
                        break;
                    }
                }
            }
        }
    }
}
I can extract the commit message, commit date, and author name from that; however, I am not able to identify parameter changes. I want to know if there is any way to recognize them. They cannot be reliably recognized from commit messages written by programmers; I am looking for something like a specific annotation or some other mechanism.
This is my code to extract differences:
CanonicalTreeParser oldTreeIter = new CanonicalTreeParser();
oldTreeIter.reset(reader, oldId);
CanonicalTreeParser newTreeIter = new CanonicalTreeParser();
newTreeIter.reset(reader, headId);
List<DiffEntry> diffs = git.diff()
        .setNewTree(newTreeIter)
        .setOldTree(oldTreeIter)
        .call();
ByteArrayOutputStream out = new ByteArrayOutputStream();
DiffFormatter df = new DiffFormatter(out);
df.setRepository(git.getRepository());
The diff output is really huge, and it is impossible to extract method changes from it.
You show a way you've found to examine the diffs, but say that the output is too large and you can't extract the method signature changes. If by that you mean that you're asking about specific git support for telling you that a method signature changes, then no - no such support exists. This is because git does not "know" anything about the languages you may or may not have used in the files under source control. Everything is just content that is, or is not, different from other content.
Since a method signature could be split across lines in any number of ways, it's not even guaranteed that just because a method's signature changed its name would appear anywhere in the diff. What you would really have to do is perform a sort of "structural diff". That is, you would have to
check out the "old" version, and pass it to a java parser
check out the "new" version, and pass it to a java parser
compare the resulting parse trees, looking for methods that belong to the same object, but have changed
Even that won't be terribly easy, because methods could be renamed, and because method overloading could make it unclear which signature change goes with which version of a method.
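As a toy illustration of that comparison step, here is a regex-based stand-in for a real Java parser (all names are mine; it only handles simple one-line declarations and ignores renames, overloads, and multi-line signatures, so treat it as a sketch of the idea, not a solution):

```java
import java.util.*;
import java.util.regex.*;

// Toy "structural diff": extracts method name -> parameter list from
// Java source with a regex, then reports methods present in both
// versions whose parameter lists differ. A real implementation would
// use a proper parser (e.g. building parse trees, as described above).
public class SignatureDiff {

    private static final Pattern METHOD = Pattern.compile(
        "(?:public|private|protected|static|\\s)+[\\w<>\\[\\]]+\\s+(\\w+)\\s*\\(([^)]*)\\)");

    public static Map<String, String> signatures(String source) {
        Map<String, String> sigs = new LinkedHashMap<>();
        Matcher m = METHOD.matcher(source);
        while (m.find()) {
            sigs.put(m.group(1), m.group(2).trim());
        }
        return sigs;
    }

    public static Set<String> changedParameters(String oldSrc, String newSrc) {
        Map<String, String> oldSigs = signatures(oldSrc);
        Map<String, String> newSigs = signatures(newSrc);
        Set<String> changed = new LinkedHashSet<>();
        for (String name : oldSigs.keySet()) {
            if (newSigs.containsKey(name) && !oldSigs.get(name).equals(newSigs.get(name))) {
                changed.add(name);
            }
        }
        return changed;
    }
}
```

You would feed it the old and new file contents obtained from the two commits; anything it reports is a candidate parameter change to inspect further.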
From there what you have is a non-trivial coding problem, which is beyond the scope of SO to answer. If you decide to tackle this problem and run into specific programming questions along the way, of course you could post those questions and perhaps someone will be able to help.
I'm trying to filter the list of instances based on the machine type; however, this doesn't seem to work.
Compute.Instances.List request = computeService.instances().list("project-name","us-central1-a" );
request.setFilter("(machinetype = zones/us-central1-a/machineTypes/n1-standard-1)");
InstanceList instanceList = request.execute();
List<Instance> instances = instanceList.getItems();
The response is empty even though I have an instance that matches the filter (when I remove the filter, the instance is returned):
[chaker#cbenhamed:~]$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
foo-bar-worker-n1-standard-1-65304152130-zfq us-central1-a n1-standard-1 true 10.240.0.2 00.000.00.255 RUNNING
According to the documentation, the filter parameter should work in this case: first, machineType is at the root of the Instance object, and second, that is the right form of the machineType argument:
Full or partial URL of the machine type resource to use for this instance, in the format: zones/zone/machineTypes/machine-type. This is provided by the client when the instance is created.
I tried to inspect HTTP requests made by gcloud
gcloud compute instances list --filter="machineType:n1-standard-1" --log-http
But it turned out that it fetches the whole list (across all zones!) and filters it locally!
It seems to be a misunderstanding: the documentation describes machineType as a field of the response body, not as a filter. So in this case you can't use a partial URL; also, you can only use the comparison operators =, !=, >, or <, none of which works as a substring match.
I think the only way to use this filter is using the full URL just as Oleksandr Bushkovskyi commented:
machineType="https://www.googleapis.com/compute/v1/projects/[PROJECT]/zones/[ZONE]/machineTypes/[MACHINE_TYPE]"
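A small helper to build that full-URL filter expression from its parts (the class and method names are mine; the URL format follows the machineType field documentation quoted above):

```java
// Hypothetical helper: the Compute Engine API matches machineType only
// against the full resource URL with the = operator, so assemble that
// URL from the project, zone, and machine type names.
public class MachineTypeFilter {

    public static String filterExpression(String project, String zone, String machineType) {
        return "(machineType = \"https://www.googleapis.com/compute/v1/projects/"
                + project + "/zones/" + zone + "/machineTypes/" + machineType + "\")";
    }
}
```

You would then pass it to the request, e.g. request.setFilter(MachineTypeFilter.filterExpression("project-name", "us-central1-a", "n1-standard-1")).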
I use this code from the JavaGit example:
File repositoryDirectory = new File("Library\\build\\jar\\");
DotGit dotGit = DotGit.getInstance(repositoryDirectory);
// Print commit messages of the current branch
for (Commit c : dotGit.getLog()) {
System.out.println(c.getMessage());
}
How could I get the commit id this way?
Or is there a more appropriate library for interacting with Git?
According to the documentation (I don't know this library very well), you should invoke the getCommitName() method and use the returned Ref object to get the information you want (I think the SHA-1 hash or the tag).