Upload File To Amazon S3 Using Java Not Working

I am a newbie and recently started working with Amazon S3 services.
I have created a Java Maven project using Java 1.8 and aws-java-sdk version 1.11.6 in my sample program.
Below is the source code; it executes successfully and prints a version id as output.
System.out.println("Started the program to create the bucket....");
BasicAWSCredentials awsCreds = new BasicAWSCredentials(CloudMigrationConstants.AWS_ACCOUNT_KEY, CloudMigrationConstants.AWS_ACCOUNT_SECRET_KEY);
AmazonS3Client s3Client = new AmazonS3Client(awsCreds);
String uploadFileName="G:\\Ebooks\\chap1.doc";
String bucketName="jinesh1522421795620";
String keyName="test/";
System.out.println("Uploading a new object to S3 from a file\n");
File file = new File(uploadFileName);
PutObjectResult putObjectResult=s3Client.putObject(new PutObjectRequest(
bucketName, keyName, file));
System.out.println("Version id :" + putObjectResult.getVersionId());
System.out.println("Finished the program to create the bucket....");
But when I look at the bucket with S3 Browser or the Amazon console, I do not see the file listed inside it.
Can you please let me know what is wrong with my Java program?

I think I misunderstood the concept. We have to specify the name of the file to store as part of the key, not just the folder. In the program above, what I missed was including the file name along with the folder name in the key, hence I was not able to see the file. Since keyName already ends with "/", the file name is appended directly (keyName + "/chap1.doc" would produce a double slash):
File file = new File(uploadFileName);
PutObjectResult putObjectResult = s3Client.putObject(new PutObjectRequest(
        bucketName, keyName + "chap1.doc", file));
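As a quick sanity check, you can list the objects under the prefix to confirm where the upload landed. A minimal sketch using the same v1 client and the keyName prefix from above (ObjectListing and S3ObjectSummary come from com.amazonaws.services.s3.model):
// List everything stored under the "test/" prefix to verify the upload.
ObjectListing listing = s3Client.listObjects(bucketName, "test/");
for (S3ObjectSummary summary : listing.getObjectSummaries()) {
    System.out.println(summary.getKey() + " (" + summary.getSize() + " bytes)");
}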

Related

How to save an Aspose workbook (.xlsx) to AWS S3 using Java?

I want to save an Aspose workbook (.xlsx) to AWS S3 using Java. Any help?
Providing the S3 path directly to workbook.save("s3://...") will not work.
I am creating this file in an AWS EMR cluster. I can save the file in the cluster and then move it to S3, but I would like to know if there is any way of saving it directly to S3. I looked for answers but did not find any.
You can save the file in the EMR cluster, move it to S3, and then delete the local copy. The code snippet is given below:
workbook.save("temp.xlsx");
File file = new File("temp.xlsx");
InputStream dataStream = new FileInputStream(file);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .build();
// Set the content length explicitly; otherwise the SDK has to buffer
// the whole stream in memory to determine it.
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(file.length());
s3Client.putObject(new PutObjectRequest(bucketName, s3Key, dataStream, metadata));
dataStream.close();
file.delete();
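If you would rather avoid the temp file entirely, one option is to serialize the workbook to an in-memory stream and upload the bytes. This is a sketch, assuming your Aspose.Cells version provides the Workbook.save(OutputStream, int) overload with SaveFormat.XLSX:
// Serialize the workbook in memory instead of writing temp.xlsx to disk.
ByteArrayOutputStream bos = new ByteArrayOutputStream();
workbook.save(bos, SaveFormat.XLSX);
byte[] bytes = bos.toByteArray();

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(bytes.length); // known length, so the SDK does not buffer

s3Client.putObject(new PutObjectRequest(
        bucketName, s3Key, new ByteArrayInputStream(bytes), metadata));
Note that this holds the whole serialized workbook in memory, so it only suits files that fit comfortably in the heap.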

Read and write to a file in an Amazon S3 bucket

I need to read a large (>15 MB) file (say sample.csv) from an Amazon S3 bucket, process the data in it, and keep writing the result to another directory in the same bucket. I intend to use an AWS Lambda function to run my Java code.
As a first step I developed Java code that runs on my local system. It reads sample.csv from the S3 bucket, and I used the put method to write data back to the bucket. But I find only the last line was processed and put back.
Region clientRegion = Region.Myregion;
AwsBasicCredentials awsCreds = AwsBasicCredentials.create("myAccessId", "mySecretKey");
S3Client s3Client = S3Client.builder()
        .region(clientRegion)
        .credentialsProvider(StaticCredentialsProvider.create(awsCreds))
        .build();
ResponseInputStream<GetObjectResponse> s3objectResponse = s3Client.getObject(
        GetObjectRequest.builder().bucket(bucketName).key("Input/sample.csv").build());
BufferedReader reader = new BufferedReader(new InputStreamReader(s3objectResponse));
String line = null;
while ((line = reader.readLine()) != null) {
    s3Client.putObject(
            PutObjectRequest.builder().bucket(bucketName).key("Test/Testout.csv").build(),
            RequestBody.fromString(line));
}
Example: sample.csv contains
1,sam,21,java,beginner;
2,tom,28,python,practitioner;
3,john,35,c#,expert.
My output should be
1,mas,XX,java,beginner;
2,mot,XX,python,practitioner;
3,nhoj,XX,c#,expert.
But only 3,nhoj,XX,c#,expert. is written to Testout.csv.
The putObject() method creates an Amazon S3 object.
It is not possible to append to or modify an S3 object, so each iteration of the while loop replaces the previous object with a new one containing only the current line.
Instead, I would recommend:
Download the source file from Amazon S3 to local disk (use GetObject() with a destination file to download to disk)
Process the file and write the output to a local file
Upload the output file to the Amazon S3 bucket (PutObject() with the file as the source)
This separates the AWS code from your processing code, which should be easier to maintain.
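If the transformed file fits in memory (a Lambda constraint worth checking for a >15 MB input), a simpler variant of the same idea is to accumulate the processed lines and issue a single putObject at the end. A sketch against the v2 client from the question, where processLine() is a hypothetical stand-in for whatever transformation produces the masked output:
StringBuilder output = new StringBuilder();
BufferedReader reader = new BufferedReader(new InputStreamReader(
        s3Client.getObject(GetObjectRequest.builder()
                .bucket(bucketName).key("Input/sample.csv").build())));
String line;
while ((line = reader.readLine()) != null) {
    output.append(processLine(line)).append('\n'); // hypothetical transform
}
// One putObject for the whole result, instead of one per line.
s3Client.putObject(
        PutObjectRequest.builder().bucket(bucketName).key("Test/Testout.csv").build(),
        RequestBody.fromString(output.toString()));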

AmazonS3 multipart uploading

I have used multipart uploading for uploading an image to Amazon S3, as described in the documentation.
But the uploaded files can then be accessed directly without an access key or anything; I tested this using the remote URL returned in the response for a particular file.
Is there any way to restrict access to an uploaded file?
Also, is there a way to change the upload URL here if I want to add a folder before the file?
Yes, you can create a folder by using the method below.
AmazonS3 amazons3Client = new AmazonS3Client(new ProfileCredentialsProvider());

public void createFolder(String bucketName, String folderName)
{
    try
    {
        // A zero-byte object whose key ends in "/" is rendered as a folder.
        ObjectMetadata objectMetaData = new ObjectMetadata();
        objectMetaData.setContentLength(0);
        InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
        amazons3Client.putObject(new PutObjectRequest(bucketName, folderName + "/", emptyContent, objectMetaData));
    }
    catch (Exception exception)
    {
        LOGGER.error("Exception In Create Folder", exception);
    }
}
For access rights you can attach a bucket policy; it applies specifically to your bucket. For example, a policy can allow only specific IP addresses to access the objects. Please go through the link below:
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
For managing access to your file, follow the instructions here: http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html
It is covered in more detail here: http://docs.aws.amazon.com/AmazonS3/latest/dev/intro-managing-access-s3-resources.html
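If the goal is simply that uploaded files should not be publicly readable, one common pattern (a sketch with the v1 SDK; the bucket name, key, and file variable are placeholders) is to upload the object with a private ACL and hand out time-limited pre-signed URLs instead of the raw object URL:
// Upload with an explicit private ACL so the object URL is not publicly readable.
amazons3Client.putObject(new PutObjectRequest("my-bucket", "images/photo.jpg", file)
        .withCannedAcl(CannedAccessControlList.Private));

// Hand out a pre-signed URL that expires after one hour.
Date expiration = new Date(System.currentTimeMillis() + 3600 * 1000);
URL signedUrl = amazons3Client.generatePresignedUrl("my-bucket", "images/photo.jpg", expiration);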

How to upload a file present in S3 from AWS EMR to an EC2 machine using Java

Is there any way to upload a file that is stored on AWS S3 from an EMR instance to another EC2 instance's directory?
So far I have been trying to do it using Java SFTP, and I have also tried using the AWS S3 client to put the object into S3.
Here is my code:
try {
    String bucketName = "my-bucket/product-images";
    BufferedImage image;
    URL url = new URL(product_image_url);
    image = ImageIO.read(url);
    String imageName = FilenameUtils.getBaseName(product_image_url);
    /* Tried creating a new file in the current directory */
    File file = new File(imageName);
    ImageIO.write(image, "jpg", file);
    s3client.putObject(new PutObjectRequest(bucketName, imageName, file));
    /* Here passing source as the file name created in the current dir.
       I have also tried giving the bucket path as
       sftpChannel.put("s3://xwalker-images/product-images/" + imageName, imageName);
    */
    sftpChannel.put(imageName, "/my_images/" + imageName);
} catch (SftpException e) {
    e.printStackTrace();
}
Here I am getting a NullPointerException at sftpChannel.put because imageName is null.
Can anyone suggest where I am going wrong?
I have tried executing this on a local machine and it works fine, but when I run it on AWS EMR it fails.
Is it possible to do what I am expecting with AWS S3?
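For what it's worth, one workable sequence here is to download the S3 object to a local temp file first (the v1 getObject accepts a destination File) and then push that file through the SFTP channel. A sketch with placeholder bucket and key names; note that a bucket name cannot contain a slash, so "product-images" belongs in the key, not in bucketName:
// Download the S3 object to a local temp file (bucket and key are placeholders).
File localFile = new File("/tmp/" + imageName);
s3client.getObject(new GetObjectRequest("my-bucket", "product-images/" + imageName), localFile);

// Then push the local file to the EC2 host over the open SFTP channel.
sftpChannel.put(localFile.getAbsolutePath(), "/my_images/" + imageName);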

Converting MultipartFile to java.io.File without copying to local machine

I have a Java Spring MVC web application. From the client, through AngularJS, I am uploading a file and posting it to a Controller as a web service.
In my Controller I am getting it as a MultipartFile, and I can copy it to the local machine.
But I want to upload the file to an Amazon S3 bucket, so I have to convert it to java.io.File. Right now I am copying it to the local machine and then uploading to S3 using JetS3t.
Here is my way of converting in the controller:
MultipartHttpServletRequest mRequest = (MultipartHttpServletRequest) request;
Iterator<String> itr = mRequest.getFileNames();
while (itr.hasNext()) {
    MultipartFile mFile = mRequest.getFile(itr.next());
    String fileName = mFile.getOriginalFilename();
    fileLoc = "/home/mydocs/my-uploads/" + date + "_" + fileName; // date is the String form of the current date
Then I am using FileCopyUtils of Spring Framework:
File newFile = new File(fileLoc);
// if the directory does not exist, create it
if (!newFile.getParentFile().exists()) {
    newFile.getParentFile().mkdirs();
}
FileCopyUtils.copy(mFile.getBytes(), newFile);
So it creates a new file on the local machine, and that file is what I upload to S3:
S3Object fileObject = new S3Object(newFile);
s3Service.putObject("myBucket", fileObject);
This creates a file on my local system, which I don't want.
Without creating a file on the local system, how can I convert a MultipartFile to java.io.File?
A MultipartFile is, by default, already saved on your server as a temp file by the time your controller sees it.
From that point you can do anything you want with it.
There is a method that moves that temp file to any destination you want (a short sketch follows below):
http://docs.spring.io/spring/docs/3.0.x/api/org/springframework/web/multipart/MultipartFile.html#transferTo(java.io.File)
But MultipartFile is just an API; you can plug in any other MultipartResolver:
http://docs.spring.io/spring/docs/3.0.x/api/org/springframework/web/multipart/MultipartResolver.html
That API accepts an input stream, and you can do anything you want with it. The default implementation (usually commons-multipart) saves it to the temp dir as a file.
One problem remains, though: if the S3 API accepts a file as a parameter, you cannot get around needing a real file. If you want to avoid creating files at all, you would have to write your own wrapper around the S3 API.
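For illustration, a minimal sketch of transferTo(), reusing the fileLoc convention from the question; it moves the container's temp file instead of re-reading all the bytes into memory the way getBytes() + FileCopyUtils.copy() does:
File dest = new File("/home/mydocs/my-uploads/" + date + "_" + mFile.getOriginalFilename());
dest.getParentFile().mkdirs(); // make sure the target directory exists
mFile.transferTo(dest);        // moves the temp upload; no second in-memory copy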
The question is already more than one year old, so I'm not sure if the JetS3t link provided by the OP had the following snippet at that time:
"If your data isn't a File or String, you can use any input stream as a data source, but you must manually set the Content-Length."
// Create an object containing a greeting string as input stream data.
String greeting = "Hello World!";
S3Object helloWorldObject = new S3Object("HelloWorld2.txt");
ByteArrayInputStream greetingIS = new ByteArrayInputStream(greeting.getBytes());
helloWorldObject.setDataInputStream(greetingIS);
helloWorldObject.setContentLength(
        greeting.getBytes(Constants.DEFAULT_ENCODING).length);
helloWorldObject.setContentType("text/plain");
s3Service.putObject(testBucket, helloWorldObject);
It turns out you don't have to create a local file first. As @Boris suggests, you can feed the S3Object with the data input stream, content type, and content length you get from MultipartFile.getInputStream(), MultipartFile.getContentType(), and MultipartFile.getSize() respectively.
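Put together, that looks roughly like this (a sketch; the bucket name is a placeholder and s3Service is the JetS3t service from the question):
S3Object s3Object = new S3Object(mFile.getOriginalFilename());
s3Object.setDataInputStream(mFile.getInputStream()); // stream straight from the upload
s3Object.setContentType(mFile.getContentType());
s3Object.setContentLength(mFile.getSize());          // required when uploading from a stream
s3Service.putObject("myBucket", s3Object);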
Instead of copying it to your local machine, you can construct the File directly from the original file name:
File newFile = new File(multipartFile.getOriginalFilename());
This way you don't need a local destination directory for your file.
If you are trying to use it in an HttpEntity, check my answer here:
https://stackoverflow.com/a/68022695/7532946
