I have a folder named output inside a bucket named BucketA. I have a list of files in the output folder. How do I download them to my local machine using the AWS Java SDK?
Below is my code:
AmazonS3Client s3Client = new AmazonS3Client(credentials);
File localFile = new File("/home/abc/Desktop/AmazonS3/");
s3Client.getObject(new GetObjectRequest("bucketA", "/bucketA/output/"), localFile);
And I got the error:
AmazonS3Exception: The specified key does not exist.
Keep in mind that S3 is not a filesystem, but it is an object store. There's a huge difference between the two, one being that directory-style activities simply won't work.
Suppose you have an S3 bucket with two objects in it:
/path/to/file1.txt
/path/to/file2.txt
When working with these objects you can't simply refer to /path/to/ like you can when working with files in a filesystem directory. That's because /path/to/ is not a directory but just part of a key in a very large hash table. This is why the error message indicates an issue with a key. These are not filename paths but keys to objects within the object store.
In order to copy all the files in a location like /path/to/ you need to perform it in multiple steps. First, you need to get a listing of all the objects whose keys begin with /path/to, then you need to loop through each individual object and copy them one by one.
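As a rough sketch of those two steps, assuming the AmazonS3Client from the question (passed in here as s3Client) and the bucketA/output/ layout as an illustrative example, it could look something like this:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import java.io.File;

// Sketch: list every key under the prefix, then download each object one by one.
public static void downloadPrefix(AmazonS3 s3Client) {
    ObjectListing listing = s3Client.listObjects(
            new ListObjectsRequest().withBucketName("bucketA").withPrefix("output/"));
    while (true) {
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            String key = summary.getKey();
            if (key.endsWith("/")) {
                continue; // skip zero-byte "folder" placeholder objects
            }
            File target = new File("/home/abc/Desktop/AmazonS3/" + key);
            target.getParentFile().mkdirs(); // recreate the key's "folders" locally
            s3Client.getObject(new GetObjectRequest("bucketA", key), target);
        }
        if (!listing.isTruncated()) {
            break;
        }
        listing = s3Client.listNextBatchOfObjects(listing); // next page of results
    }
}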
Here is a similar question with an answer that shows how to download multiple files from S3 using Java.
I know this question was asked a long time ago, but this answer might still help someone.
You might want to use something like this to download objects from S3
new ListObjectsV2Request().withBucketName("bucketName").withDelimiter("delimiter").withPrefix("path/to/image/");
as mentioned in the S3 doc
The delimiter should be "/" and the prefix should be your "folder-like" path.
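A minimal sketch of how that request could be used, assuming an existing AmazonS3 client and an illustrative path/to/image/ prefix (the method and bucket names are placeholders); the continuation-token loop handles listings larger than one page:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public static void listImageKeys(AmazonS3 s3) {
    ListObjectsV2Request request = new ListObjectsV2Request()
            .withBucketName("bucketName")
            .withDelimiter("/")
            .withPrefix("path/to/image/");
    ListObjectsV2Result result;
    do {
        result = s3.listObjectsV2(request);
        for (S3ObjectSummary summary : result.getObjectSummaries()) {
            System.out.println(summary.getKey()); // each key directly under the prefix
        }
        // continue from where the previous page stopped
        request.setContinuationToken(result.getNextContinuationToken());
    } while (result.isTruncated());
}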
You can use the predefined TransferManager methods for uploading and downloading whole directories.
For Download
MultipleFileDownload xfer = xfer_mgr.downloadDirectory(
        bucketName, key, new File("C:\\Users\\miracle\\Desktop\\Downloads"));
For Upload
MultipleFileUpload xfer = xfer_mgr.uploadDirectory(bucketName, key, Dir, true);
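For context, here is a minimal sketch of how the xfer_mgr instance used above could be created and how to wait for the transfer to finish; the bucket name, prefix, and local path are placeholders:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.MultipleFileDownload;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import java.io.File;

public static void downloadWholeFolder() throws InterruptedException {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    TransferManager xfer_mgr = TransferManagerBuilder.standard().withS3Client(s3).build();
    MultipleFileDownload xfer = xfer_mgr.downloadDirectory(
            "bucketName", "output/", new File("C:\\Users\\miracle\\Desktop\\Downloads"));
    xfer.waitForCompletion(); // blocks until every object under the prefix is downloaded
    xfer_mgr.shutdownNow();   // also shuts down the wrapped S3 client
}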
The error message means that the bucket (in this case "bucketA") does not contain a file with the name you specified (in this case "/bucketA/output/").
When you specify the key, do not include the bucket name in the key. S3 supports "folders" in the key, which are delimited with "/", so you probably do not want to try to use keys that end with "/".
If your bucket "bucketA" contains a file called "output", you probably want to say
new GetObjectRequest("bucketA", "output")
If this doesn't work, other things to check:
Do the credentials you are using have permission to read from the bucket?
Did you spell all the names correctly?
You might want to use listObjects("bucketA") to verify what the bucket actually contains (as seen with the credentials you are using).
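For example, a quick way to print every key the bucket actually contains (a sketch, reusing the s3Client from the question):

import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

ObjectListing listing = s3Client.listObjects("bucketA");
for (S3ObjectSummary summary : listing.getObjectSummaries()) {
    System.out.println(summary.getKey()); // every key visible with these credentials
}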
My Code:
Page<Blob> blobs = storage.list(bucketName, BlobListOption.prefix(folderPath));
for (Blob blob : blobs.iterateAll()) {
    if (!blob.isDirectory()) {
        // do stuff with the blob
    }
}
It does list the entire content of this folder and its sub-folders, including:
files
folders (blob objects with a name ending in / and size 0).
The problem: blob.isDirectory() always returns false.
What is the correct way to do such a listing and distinguish files from folders?
Thank you.
As explained in this document:
Cloud Storage operates with a flat namespace, which means that folders don't actually exist within Cloud Storage. If you create an object named folder1/file.txt in the bucket your-bucket, the path to the object is your-bucket/folder1/file.txt, but there is no folder named folder1; instead, the string folder1 is part of the object's name.
So the directories you are talking about are not actually directories or folders; they are just a visual representation of folders that resembles a local file browser. That is the reason blob.isDirectory() always returns false.
Now, in your case, to distinguish between the files and the folders (which are not really folders, just placeholder objects), you can check the blob name: as you noted, the folder placeholders are the blobs whose names end with the / character (and have size 0), so a simple test on the name is enough to tell the two apart.
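A minimal sketch of that check, reusing the listing loop from the question (the method and parameter names are just illustrative):

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.Storage.BlobListOption;

public static void listFilesOnly(Storage storage, String bucketName, String folderPath) {
    for (Blob blob : storage.list(bucketName, BlobListOption.prefix(folderPath)).iterateAll()) {
        // Folder placeholders are the zero-byte objects whose names end with "/"
        if (!blob.getName().endsWith("/")) {
            System.out.println("file: " + blob.getName());
        }
    }
}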
(Sorry if this is simple; this is my first post)
Is the groovy/grails asset pipeline modifiable at runtime?
Problem: I am creating an application where users create the objects. The objects are stored as text files so that only the necessary objects are built at runtime. Currently, each text file includes a string which represents the filename of the image. The plan was to store these images in assets/images/, as this works best for displaying the object later. However, I am now running into issues with saving files to assets/images/ at run time, and I can't even figure out whether this is possible.
Displaying images already works the way I need if I drag and drop the images into the desired folder; however, I need a way for the controller to put the image there instead. The relevant section of controller code:
def folder = new File("languageDevelopment/grails-app/assets/images/")
//println folder
def f = request.getFile('keyImage')
if (f.empty) {
    flash.message = 'file cannot be empty'
    render(view: 'create')
    return
}
f.transferTo(folder)
The error I'm receiving is a fileNotFoundException
"/var/folders/9c/0brqct9j6pj4j85wnc5zljvc0000gn/T/languageDevelopment/grails-app/assets/images (No such file or directory)"
on f.transferTo(folder)
What is the path it is adding to the beginning of my "folder" object?
Thanks in advance. If you need more information or have a suggestion to a different route please let me know!
new File("languageDevelopment/grails-app/assets/images/")
This folder is present only in your sources.
After deployment it will look like "/PATH-TO-TOMCAT/webapps/ROOT/assets/" if you use Tomcat.
Also, assets/images, assets/font, etc. will be merged into the assets folder.
If you'd like to store temporary files, you can create a directory under the src/resources folder.
For example, "src/resources/images".
And you can access this folder from the classloader:
this.class.classLoader.getResource('images/someImage.png').path
I am new to S3 and I am trying to create multiple directories in Amazon S3 using Java while only making one call to S3.
I could only come up with this:
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(0);
InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
PutObjectRequest putObjectRequest = new PutObjectRequest(bucket,
        "test/tryAgain/", emptyContent, metadata);
s3.putObject(putObjectRequest);
But the problem with this, when uploading 10 folders (when a key ends with "/", the console shows the object as a folder), is that I have to make 10 calls to S3.
I want to create all the folders at once, like a batch delete with DeleteObjectsRequest.
Can anyone please suggest how to solve this?
Can you be a bit more specific as to what you're trying to do (or avoid doing)?
If you're primarily concerned with the cost per PUT, I don't think there is a way to batch 'upload' a directory with each file being a separate key and avoid that cost. Each PUT (even in a batch process) will cost you the price per PUT.
If you're simply trying to find a way to efficiently and recursively upload a folder, check out the uploadDirectory() method of TransferManager.
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html#uploadDirectory-java.lang.String-java.lang.String-java.io.File-boolean-
public MultipleFileUpload uploadDirectory(String bucketName,
                                          String virtualDirectoryKeyPrefix,
                                          File directory,
                                          boolean includeSubdirectories)
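A brief usage sketch, with placeholder bucket, prefix, and directory names; note that uploadDirectory still issues one PUT per file under the hood, so it does not reduce the per-PUT cost:

import com.amazonaws.services.s3.transfer.MultipleFileUpload;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import java.io.File;

public static void uploadFolder() throws InterruptedException {
    TransferManager transferManager = TransferManagerBuilder.defaultTransferManager();
    MultipleFileUpload upload = transferManager.uploadDirectory(
            "my-bucket", "test/tryAgain", new File("/local/path/to/folder"), true);
    upload.waitForCompletion(); // blocks until every file has been uploaded
    transferManager.shutdownNow();
}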
I have a desktop application which downloads all files on a server. When a new file is added, I want to download only the newer file.
Well, to know which one is the "new one", you have to keep a map (or another data structure) with pairs of the file name and its creation time (or last modified time), whichever suits you best. When you iterate over your files, just read their metadata with
Path file = ...;
BasicFileAttributes attr = Files.readAttributes(file, BasicFileAttributes.class);
attr.creationTime(); //or attr.lastModifiedTime();
When you compare these times with the one on the server, decide to download only the one with the latest time.
Either way, you have to keep track of at least the name and the modified (or created) time from your previous download and compare them.
If this application on your desktop is not some kind of service that runs nonstop, find a way to persist that data on the system: serialization, or an embedded database such as H2/HSQLDB. Use streams with concurrent iteration / parallelStream to check and compare these times, in case you use Java 8.
Edit: to get metadata from a URL, check this question: Get the Last Modified date of an URL
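A rough sketch of that comparison, assuming the server exposes the files over HTTP and sends a Last-Modified header; the URL and local path are placeholders:

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

public static void downloadIfNewer(String fileUrl, File localFile) throws IOException {
    HttpURLConnection connection = (HttpURLConnection) new URL(fileUrl).openConnection();
    connection.setRequestMethod("HEAD");
    long remoteLastModified = connection.getLastModified(); // 0 if the header is missing
    connection.disconnect();

    // Download only when the server copy is newer than what we already have
    if (!localFile.exists() || remoteLastModified > localFile.lastModified()) {
        try (InputStream in = new URL(fileUrl).openStream()) {
            Files.copy(in, localFile.toPath(), StandardCopyOption.REPLACE_EXISTING);
        }
        if (remoteLastModified > 0) {
            localFile.setLastModified(remoteLastModified); // remember the server timestamp
        }
    }
}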
I am using AmazonS3Client.java to upload files to S3 from my application. I am using the putObject method to upload the file
val putObjectRequest = new PutObjectRequest(bucketName, key, inputStream, metadata)
val acl = CannedAccessControlList.Private
putObjectRequest.setCannedAcl(acl)
s3.putObject(putObjectRequest)
This works for buckets at the topmost level in my S3 account. Now, suppose I want to upload the file to a sub-bucket, for example bucketB, which is inside bucketA. How should I specify the bucket name for bucketB?
Thank You
It is admittedly somewhat surprising, but there is no such thing as a "sub-bucket" in S3. All buckets are top-level. The structures inside buckets that you see in the S3 admin console or other UIs are called "folders", but even they don't really exist! You can't directly create or destroy folders, for instance, or set any attributes on them. Folders are purely a presentation-level convention for viewing the underlying flat set of objects in your bucket. That said, it's pretty easy to split your objects into (purely non-existent) folders. Just give them hierarchical names, with each level separated by a "/".
val putObjectRequest = new PutObjectRequest(bucketName, topFolderName +"/" + subFolderName+ "/" +key, inputStream, metadata)
Try using putObjectRequest.setKey("folder")