I have used multipart upload for uploading images to Amazon S3, as described in the documentation.
But the uploaded files can then be accessed directly, without an access key or anything else. I tested this using the remote URL returned in the response for a particular file.
Is there any way to restrict access to an uploaded file?
Also, is there a way to change the upload URL here, if I want to add a folder and then the file?
Yes, you can create a folder using the method below.
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

AmazonS3 amazons3Client = new AmazonS3Client(new ProfileCredentialsProvider());

public void createFolder(String bucketName, String folderName)
{
    try
    {
        // S3 has no real folders: a zero-byte object whose key ends in "/"
        // is shown as a folder by the console and most clients.
        ObjectMetadata objectMetaData = new ObjectMetadata();
        objectMetaData.setContentLength(0);
        InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
        amazons3Client.putObject(new PutObjectRequest(bucketName, folderName + "/", emptyContent, objectMetaData));
    }
    catch (Exception exception)
    {
        LOGGER.error("Exception In Create Folder", exception);
    }
}
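A minimal usage sketch (the bucket and key names below are hypothetical): once the folder marker exists, you upload "into" it simply by prefixing the object key with the folder name.

createFolder("my-bucket", "images");
// Upload a file "into" the folder by prefixing its key with the folder name:
amazons3Client.putObject("my-bucket", "images/photo.jpg", new File("photo.jpg"));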
For access rights, you can attach a bucket policy; it applies specifically to your bucket and can, for example, allow only specific IPs to access it. Please go through the link below:
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
For managing access to your file, you will need to follow the instructions here: http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html
It is covered in more detail here: http://docs.aws.amazon.com/AmazonS3/latest/dev/intro-managing-access-s3-resources.html
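If you keep objects private, you can still hand out time-limited access with a pre-signed URL. A minimal sketch using the AWS SDK for Java 1.x (the bucket and key names are hypothetical):

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

// Grant read access to a private object for 15 minutes.
Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
URL signedUrl = amazons3Client.generatePresignedUrl(
        new GeneratePresignedUrlRequest("my-bucket", "images/photo.jpg")
                .withMethod(HttpMethod.GET)
                .withExpiration(expiration));
System.out.println("Temporary URL: " + signedUrl);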
I am a newbie and recently started working with Amazon S3 services.
I have created a Java Maven project, using Java 1.8 and aws-java-sdk version 1.11.6, for my sample program.
Below is the source code; it executes successfully and prints a version id as output.
System.out.println("Started the program to create the bucket....");
BasicAWSCredentials awsCreds = new BasicAWSCredentials(CloudMigrationConstants.AWS_ACCOUNT_KEY, CloudMigrationConstants.AWS_ACCOUNT_SECRET_KEY);
AmazonS3Client s3Client = new AmazonS3Client(awsCreds);
String uploadFileName="G:\\Ebooks\\chap1.doc";
String bucketName="jinesh1522421795620";
String keyName="test/";
System.out.println("Uploading a new object to S3 from a file\n");
File file = new File(uploadFileName);
PutObjectResult putObjectResult=s3Client.putObject(new PutObjectRequest(
bucketName, keyName, file));
System.out.println("Version id :" + putObjectResult.getVersionId());
System.out.println("Finished the program to create the bucket....");
But when I try to view the files using S3 Browser or the Amazon console, I do not see any files listed inside the bucket.
Can you please let me know what is wrong with my Java program?
I think I misunderstood the concept: the key has to include the name of the file to store. In the program above I specified only the folder name in the key, not the file name, which is why I could not see the file.
File file = new File(uploadFileName);
// keyName is "test/", so append just the file name to avoid a double slash:
PutObjectResult putObjectResult = s3Client.putObject(new PutObjectRequest(
        bucketName, keyName + "chap1.doc", file));
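To double-check what actually landed in the bucket, you can list the keys under the prefix (a small sketch; the prefix matches the keyName above):

import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

ObjectListing listing = s3Client.listObjects(bucketName, "test/");
for (S3ObjectSummary summary : listing.getObjectSummaries()) {
    System.out.println(summary.getKey() + " (" + summary.getSize() + " bytes)");
}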
I am using the Google Sheets API to query a spreadsheet on Google. The code works fine when run from inside Eclipse. But when the project is packaged into a jar (say, spreadsheets-api.jar) and used from a different project as a Maven dependency, I run into issues with the following method call:
public static Sheets createService() throws GoogleServiceInitException {
    if (properties == null)
        init();
    URL keyfile = GoogleSpreadSheetService.class.getResource(getValue(PropertyEnum.PRIVATEKEY_FILE));
    if (keyfile == null)
        throw new GoogleServiceInitException("Missing private key file");
    Credential credential;
    try {
        credential = new GoogleCredential.Builder().setTransport(HTTP_TRANSPORT).setJsonFactory(JSON_FACTORY)
                .setServiceAccountId(getValue(PropertyEnum.SERVICE_ACCOUNT_ID))
                .setServiceAccountPrivateKeyFromP12File(Paths.get(keyfile.toURI()).toFile())
                .setServiceAccountScopes(SCOPES).build();
        return new Sheets.Builder(HTTP_TRANSPORT, JSON_FACTORY, credential).setApplicationName(APPLICATION_NAME)
                .build();
    } catch (GeneralSecurityException | IOException | URISyntaxException e) {
        logger.error(APGeneralUtilities.stacktraceToString(e));
        throw new GoogleServiceInitException(e.getMessage());
    }
}
The issue is with the method setServiceAccountPrivateKeyFromP12File, which only takes a java.io.File as a parameter. So when I try to access the embedded P12 file in the jar as a URL and then convert it to a File using this line of code, Paths.get(keyfile.toURI()).toFile(), the code bombs with a java.nio.file.FileSystemNotFoundException.
The problem is pretty clear. I need to refer to the P12 file as a stream, and I could do that by using Class.getResourceAsStream() or other alternatives, but Google's credential builder will only take a file as parameter.
What are my options now? The way I see it, to work with sheets api, I just have to get the stream, copy it to temp file, then give that file's path as a parameter to the Credentials builder, and finally delete the temp file after everything is done. Is there a simpler way to solve this problem?
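For what it's worth, the temp-file approach sketched above takes only a few lines (a sketch, assuming the resource name resolves from PropertyEnum.PRIVATEKEY_FILE as in the code above):

import java.io.File;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Copy the P12 resource out of the jar into a temp file that the
// credential builder can accept, and clean it up on JVM exit.
Path tempKey = Files.createTempFile("service-key", ".p12");
try (InputStream in = GoogleSpreadSheetService.class
        .getResourceAsStream(getValue(PropertyEnum.PRIVATEKEY_FILE))) {
    Files.copy(in, tempKey, StandardCopyOption.REPLACE_EXISTING);
}
File keyFile = tempKey.toFile();
keyFile.deleteOnExit();
// ...then pass keyFile to setServiceAccountPrivateKeyFromP12File(keyFile)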
How to rename or copy a file in Azure Storage using the Java file SDK
Is there any way to rename or copy a file stored in Azure Storage using azurestorage.jar for Java? If so, please help us.
Assuming the file is on a file share mounted in the system, you can use Files.copy(...) to copy the file:
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;

Path sourcePath = new File("path/to/source/file").toPath();
Path targetPath = new File("path/to/target/file").toPath();
Files.copy(sourcePath, targetPath);
Note that this code downloads the source file to the local host and then uploads it to the Azure storage service.
If you want to avoid the download and upload, use the Azure Storage REST API to copy the file server-side. If you don't want to deal with the REST API directly, use azure-sdk-for-java or the similar SDKs for Python and C#.
https://stackoverflow.com/a/66774796/12066108 shows how to copy a file with the azure-sdk-for-java library.
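As a rough sketch of that server-side approach with the newer azure-storage-file-share client (the share and file paths are hypothetical, and builder details may vary across SDK versions):

import com.azure.storage.file.share.ShareFileClient;
import com.azure.storage.file.share.ShareFileClientBuilder;
import java.time.Duration;

ShareFileClient source = new ShareFileClientBuilder()
        .connectionString(connectionString)
        .shareName("sampleshare")
        .resourcePath("dir/source.txt")
        .buildFileClient();
ShareFileClient target = new ShareFileClientBuilder()
        .connectionString(connectionString)
        .shareName("sampleshare")
        .resourcePath("dir/target.txt")
        .buildFileClient();

// Server-side copy: no bytes pass through the local machine.
target.beginCopy(source.getFileUrl(), null, Duration.ofSeconds(1))
        .waitForCompletion();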
You could use CloudFile.startCopy(source) to copy a file under a new name, which also covers renaming. Here is the complete code:
package nau.edu.cn.steven;

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.file.CopyStatus;
import com.microsoft.azure.storage.file.CloudFile;
import com.microsoft.azure.storage.file.CloudFileClient;
import com.microsoft.azure.storage.file.CloudFileDirectory;
import com.microsoft.azure.storage.file.CloudFileShare;

public class AzureCopyFile {
    // Connection string
    public static final String storageConnectionString =
            "DefaultEndpointsProtocol=http;"
            + "AccountName=your_account_name;"
            + "AccountKey=your_account_key";

    public static void main(String[] args) {
        try {
            CloudStorageAccount account = CloudStorageAccount.parse(storageConnectionString);
            CloudFileClient fileClient = account.createCloudFileClient();

            // Get a reference to the file share
            CloudFileShare share = fileClient.getShareReference("sampleshare");
            if (share.createIfNotExists()) {
                System.out.println("New share created");
            }

            // Get a reference to the root directory of the share
            CloudFileDirectory rootDir = share.getRootDirectoryReference();

            // Old file
            CloudFile oldCloudFile = rootDir.getFileReference("Readme.txt");
            // New file
            CloudFile newCloudFile = rootDir.getFileReference("Readme2.txt");

            // Start the copy
            newCloudFile.startCopy(oldCloudFile.getUri());

            // Poll until the copy has finished
            while (newCloudFile.getCopyState().getStatus() == CopyStatus.PENDING) {
                // Sleep for a second between polls
                Thread.sleep(1000);
                // Refresh the copy state from the service
                newCloudFile.downloadAttributes();
            }
        } catch (Exception e) {
            System.out.print("Exception encountered: ");
            System.out.println(e.getMessage());
            System.exit(-1);
        }
    }
}
According to the Javadocs for the CloudFile class of Azure File Storage, there is no rename operation supported natively, and the same is true for Blob Storage.
If you want to rename, you need to perform two steps: copy the file under a new name, then delete the file with the old name.
There are two related threads below, from SO and MSDN respectively:
Programmatically (.NET) renaming an Azure File or Directory using File (not Blob) Storage; the approach is the same in Java.
https://social.msdn.microsoft.com/Forums/azure/en-US/04c415fb-cc1a-4270-986b-03a68b05aa81/renaming-files-in-blobs-storage?forum=windowsazuredata
As @Steven said, the copy operation is supported via the function startCopy on a new file reference.
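Putting the two steps together, a minimal rename sketch built on the CloudFile code above (the file names are hypothetical, and exceptions are handled as in the code above):

// "Rename" = copy to the new name, then delete the old file.
CloudFile source = rootDir.getFileReference("Readme.txt");
CloudFile renamed = rootDir.getFileReference("Renamed.txt");

renamed.startCopy(source.getUri());
while (renamed.getCopyState().getStatus() == CopyStatus.PENDING) {
    Thread.sleep(1000);
    renamed.downloadAttributes(); // refresh the copy state from the service
}
if (renamed.getCopyState().getStatus() == CopyStatus.SUCCESS) {
    source.deleteIfExists(); // drop the old name only after a successful copy
}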
Is there any way to upload a file stored on AWS S3 from an EMR instance to another EC2 instance's directory?
So far I have been trying to do it using Java SFTP. I have also tried using the AWS S3 client to put the object into S3.
Here is my code:
try {
    String bucketName = "my-bucket/product-images";
    BufferedImage image;
    URL url = new URL(product_image_url);
    image = ImageIO.read(url);
    String imageName = FilenameUtils.getBaseName(product_image_url);

    /* Tried creating a new file in the current directory */
    File file = new File(imageName);
    ImageIO.write(image, "jpg", file);
    s3client.putObject(new PutObjectRequest(bucketName, imageName, file));

    /* Here the source is the file name created in the current dir.
       I have also tried giving the bucket path as
       sftpChannel.put("s3://xwalker-images/product-images/" + imageName, imageName);
    */
    sftpChannel.put(imageName, "/my_images/" + imageName);
} catch (SftpException e) {
    e.printStackTrace();
}
Here I am getting a NullPointerException at sftpChannel.put because imageName is NULL.
Can anyone suggest where I am going wrong?
I have tried executing this on a local machine and it works fine, but when I run it on AWS EMR it fails.
Is it possible to do what I am expecting with AWS S3?
I am trying to upload a file to Google Drive using the Google Drive API and Java.
The file I'm trying to upload is a docx file (previously exported using the Drive API). My problem is that when uploading files whose filenames contain Swedish characters (like åäö), the Drive API throws an exception.
The code for uploading looks essentially like this:
java.io.File file = new java.io.File("/path/to/file.docx");
String contentType = "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
// com.google.api.services.drive.model.File holds the metadata
File fileToUpload = new File()
        .setTitle("abc")
        .setMimeType(contentType);
FileContent mediaContent = new FileContent(contentType, file);
try {
    Drive.Files.Insert request = client.files().insert(fileToUpload, mediaContent);
    request.execute();
} catch (IOException e) {
    logger.error("Could not restore file");
    throw e;
}
This code actually works fine. But if I change setTitle("abc") to setTitle("abcö") I get this exception:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
Bad Request
I have tried using versions v2-rev82-1.14.2-beta and v2-rev82-1.15.0-rc of google-api-services-drive, with the same result.
I am using OSX if that helps (Windows seems to have these kinds of problems more often).
EDIT:
After some experimenting I found that if I exclude the FileContent object (the actual file content) and only create an empty file by uploading the metadata, then the Swedish characters in the title are fine.
Drive.Files.Insert request = client.files().insert(fileToUpload);
Seems like it's not the title that's the main problem after all. If I can just figure out how to add the file data to the empty file, I should be done.
EDIT2: Solution found!
Adding request.getMediaHttpUploader().setDirectUploadEnabled(true); enables direct upload (not sure what I was using before, though), and apparently that makes Google Drive not care about my strange Swedish characters anymore (again, not sure why).
This is the code I ended up with (with simplifications):
java.io.File file = new java.io.File("/path/to/file.docx");
String contentType = "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
// com.google.api.services.drive.model.File holds the metadata
File fileToUpload = new File()
        .setTitle("abcö")
        .setMimeType(contentType);
FileContent mediaContent = new FileContent(contentType, file);
try {
    Drive.Files.Insert request = client.files().insert(fileToUpload, mediaContent);
    request.getMediaHttpUploader().setDirectUploadEnabled(true);
    request.execute();
} catch (IOException e) {
    logger.error("Could not restore file");
    throw e;
}
Now I only need to add support for resumable uploads for big files, but that's a different story.
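For reference, a rough sketch of what the resumable variant might look like (MediaHttpUploader comes from the google-api-client library; the chunk size here is an arbitrary example and must be a multiple of MediaHttpUploader.MINIMUM_CHUNK_SIZE):

import com.google.api.client.googleapis.media.MediaHttpUploader;

Drive.Files.Insert request = client.files().insert(fileToUpload, mediaContent);
MediaHttpUploader uploader = request.getMediaHttpUploader();
uploader.setDirectUploadEnabled(false); // resumable upload (the uploader's default mode)
uploader.setChunkSize(4 * MediaHttpUploader.MINIMUM_CHUNK_SIZE);
request.execute();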
EDIT3: By the way, here are the docs that got me on the right track.
Encode it as UTF-8, or change the uploadType to "media".