Can I get a reference for the APIs of Java + Spark SQL accessing an EMC object store via the S3 API? I tried many S3 APIs (the aws-java-sdk 1.7.4 jar) but got stuck on an error related to the bucket name, because my bucket name contains an underscore ("_"). My object store is on EMC, which allows bucket names with "_", but I want to access it from Spark SQL through the S3 API.
Trouble is that the S3A connectors all expect the bucket name to be a valid hostname, but "_" isn't allowed in DNS names.
AWS now forbids new buckets with underscores, and the people who maintain the S3 connectors for tools like Spark aren't going to do anything with bug reports about this other than close them as "wontfix".
Sorry, but you'll just have to rename your bucket.
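For completeness, Spark points S3A at an S3-compatible store such as ECS through its Hadoop configuration. A minimal sketch, with a placeholder endpoint and credentials; note that it still requires a DNS-safe bucket name:

import org.apache.hadoop.conf.Configuration;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("ecs-s3a").getOrCreate();
Configuration conf = spark.sparkContext().hadoopConfiguration();
conf.set("fs.s3a.endpoint", "http://ecs.example.com:9020"); // placeholder ECS endpoint
conf.set("fs.s3a.access.key", "ACCESS_KEY");                // placeholder credentials
conf.set("fs.s3a.secret.key", "SECRET_KEY");
conf.set("fs.s3a.path.style.access", "true");               // skip virtual-host (DNS) addressing
// Still needs a DNS-safe bucket name, i.e. no underscores:
Dataset<Row> df = spark.read().parquet("s3a://my-renamed-bucket/data/");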
Related
I want to list all the S3 buckets using Java, but due to some limitations I am not allowed to use the AWS SDK.
Is there a way I can achieve this in Java? The constraint is that I cannot use the AWS SDK, just a plain call to the AWS service URL. I have the access key, secret key, and region; how do I set this information in the headers and hit the service?
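Without the SDK you have to implement AWS Signature Version 4 yourself. Here is a minimal sketch using only the JDK; the credentials are placeholders, and it signs a plain GET / against the global S3 endpoint (the ListAllMyBuckets call) and prints the raw XML response:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Scanner;

public class ListBucketsSigV4 {

    static byte[] hmac(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static String sha256Hex(String s) throws Exception {
        return hex(MessageDigest.getInstance("SHA-256").digest(s.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String accessKey = "AKIA...";     // placeholder
        String secretKey = "...";         // placeholder
        String region = "us-east-1";      // the global endpoint signs as us-east-1
        String host = "s3.amazonaws.com";

        ZonedDateTime now = ZonedDateTime.now(ZoneOffset.UTC);
        String amzDate = now.format(DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss'Z'"));
        String dateStamp = now.format(DateTimeFormatter.ofPattern("yyyyMMdd"));
        String payloadHash = sha256Hex(""); // GET has an empty body

        // 1. Canonical request (header names sorted, lower-case)
        String signedHeaders = "host;x-amz-content-sha256;x-amz-date";
        String canonicalRequest = String.join("\n",
                "GET", "/", "",
                "host:" + host,
                "x-amz-content-sha256:" + payloadHash,
                "x-amz-date:" + amzDate,
                "",
                signedHeaders,
                payloadHash);

        // 2. String to sign
        String scope = dateStamp + "/" + region + "/s3/aws4_request";
        String stringToSign = String.join("\n",
                "AWS4-HMAC-SHA256", amzDate, scope, sha256Hex(canonicalRequest));

        // 3. Derive the signing key with the HMAC chain over date/region/service
        byte[] kDate = hmac(("AWS4" + secretKey).getBytes(StandardCharsets.UTF_8), dateStamp);
        byte[] kSigning = hmac(hmac(hmac(kDate, region), "s3"), "aws4_request");
        String signature = hex(hmac(kSigning, stringToSign));

        // 4. Send the request with the Authorization header
        HttpURLConnection conn = (HttpURLConnection) new URL("https://" + host + "/").openConnection();
        conn.setRequestProperty("x-amz-date", amzDate);
        conn.setRequestProperty("x-amz-content-sha256", payloadHash);
        conn.setRequestProperty("Authorization", "AWS4-HMAC-SHA256 Credential=" + accessKey + "/" + scope
                + ", SignedHeaders=" + signedHeaders + ", Signature=" + signature);
        try (Scanner sc = new Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            System.out.println(sc.hasNext() ? sc.next() : ""); // ListAllMyBucketsResult XML
        }
    }
}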
I want to use certificate-based authentication in AWS Lambda to generate OAuth tokens. Currently I am storing the certificates and private keys locally and running it like a normal Java application.
I am planning to use AWS Secrets Manager to store these certificates and keys. However, since we use Terraform to provision AWS resources, it seems we would have to keep these certs and keys in our Bitbucket repo, which poses a security risk. Is there any other way I can use these certificates in AWS Lambda without actually storing them in the Bitbucket repo?
The Terraform aws_secretsmanager_secret_version resource takes a string value, but that doesn't mean you have to hard-code the string inside the resource. Think about how you can read that key value into Terraform and reference it inside the resource.
For example, that string could come from a local file or an S3 object. Terraform could also generate the TLS key for you, as sketched below.
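A minimal Terraform sketch of that last option, assuming the hashicorp/tls provider; all resource names here are illustrative:

# Generate the key inside Terraform so it never touches the repo.
resource "tls_private_key" "lambda_cert" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "aws_secretsmanager_secret" "cert" {
  name = "lambda-oauth-cert"
}

resource "aws_secretsmanager_secret_version" "cert" {
  secret_id     = aws_secretsmanager_secret.cert.id
  secret_string = tls_private_key.lambda_cert.private_key_pem
  # Alternative: read it from a path supplied at apply time,
  # e.g. secret_string = file(var.cert_path)
}

Note that a generated key still ends up in the Terraform state file, so the state itself must be stored securely (e.g. an encrypted S3 backend).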
How can we use MinIO storage the same way as S3? Is there any need to change the code of our Java Spring Boot application? The existing code is all AWS-related. I don't want to change the code, but I want to access the storage from another source. Is that possible with MinIO?
I assume that you are using the aws-java-sdk S3 client. In that case you just need to set the endpoint configuration (for example http://localhost:9000, if MinIO runs on port 9000 on your local machine).
For more info about the endpoint configuration, you can look here.
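A minimal sketch with the v1 aws-java-sdk, assuming MinIO's default credentials on localhost:9000:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(
                new AwsClientBuilder.EndpointConfiguration("http://localhost:9000", "us-east-1"))
        .withCredentials(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials("minioadmin", "minioadmin"))) // placeholder keys
        .withPathStyleAccessEnabled(true) // MinIO is usually addressed path-style, not virtual-host
        .build();

The rest of the S3 calls in your Spring Boot code can stay unchanged.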
I'm having trouble brainstorming how to make user-uploaded pictures viewable only by the friends of the user who uploaded them.
So what I've come up with so far is:
Create a DynamoDB table for each user, with a dynamic list of friends/new friends added.
Generate a Signed URL for every user-uploaded picture.
Allow access to the Signed URL for every friend listed in the DynamoDB table to view said picture(s).
Does this sound correct? Also, would I technically have just one bucket for ALL user uploaded pictures? Something about my design sounds off...
Can anyone give me a quick tutorial on how to accomplish this via Java?
There are two basic approaches:
Permissions in Amazon S3, or
Application-controlled access to objects in Amazon S3
Permissions in Amazon S3
You can provide credentials (either via IAM or Amazon Cognito) that allow users to access a particular path within an Amazon S3 bucket. For example, each user could have their own path within the bucket.
Your application would generate URLs that include signatures that identify them as that particular user and Amazon S3 would grant access to the objects.
One benefit of this approach is that you could provide the AWS credentials to the users and they could interact directly with AWS, such as using the AWS Command-Line Interface (CLI) to upload/download files without having to always go via your application.
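As an illustration, an IAM policy along these lines limits each user to their own path (the bucket name is hypothetical, and the ${aws:username} variable assumes IAM users rather than Cognito identities):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::user-pictures-bucket/${aws:username}/*"
  }]
}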
Application-controlled access to objects in Amazon S3
In this scenario, users have no permissions within Amazon S3. Instead, each time your application wishes to generate a URL to an object in S3 (e.g. in an <img> tag), you create a pre-signed URL. This grants access to the object for a limited time. It only takes a couple of lines of code and can be done within the application, without communicating with AWS to generate the URL.
There is no need to store pre-signed URLs. They are generated on-the-fly.
The benefit of this approach is that your application has full control over which objects each user can access. Friends could share pictures with other users and the application would grant access, whereas the first method only grants access to objects within the user's specific path.
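Generating such a URL with the v1 aws-java-sdk looks roughly like this (the bucket and key are hypothetical):

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000); // valid for 15 minutes
GeneratePresignedUrlRequest request =
        new GeneratePresignedUrlRequest("user-pictures-bucket", "uploads/alice/pic1.jpg")
                .withMethod(HttpMethod.GET)
                .withExpiration(expiration);
URL url = s3.generatePresignedUrl(request);
// Embed url.toString() in the <img> tag served to an authorized friend.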
I am new to the Amazon S3 service. I have an Amazon S3 database; the directory (bucket) structure is like:
-All bucket
-MyCompany
-MyProduct
-Product_1
- sub1_prod1
- sub1_prod2
...
-Product_2
- sub2_prod1
- sub2_prod2
...
As you can see above, under the MyProduct bucket I have several product buckets (e.g. Product_1), and under each product bucket I have several sub-products (e.g. sub1_prod1). Each sub-product contains multiple files.
Now I want to implement Java code in my Android client to query all my products under the MyProduct bucket. How can I do this? I am using the AmazonS3Client class provided by the Amazon Android SDK.
P.S.:
I am able to create my AmazonS3Client object using my credentials:
AmazonS3Client s3 = new AmazonS3Client(myCred);
I know how to upload files to an S3 bucket in Java code, but I am not sure how to query the S3 database and get the results in my Android client, that's to get all the file names under each sub_product bucket.
I have an Amazon S3 database
IMHO, Amazon S3 is not a database, any more than a directory of files is a database. You may wish to consider other Amazon AWS services that are actual databases, such as DynamoDB or RDS.
that's to get all the file names under each sub_product bucket
By reading the documentation, it would appear that you will need to use some flavor of listObjects().
The brute-force approach would be to use the listObjects() that just takes the bucket name. That will give you a list of everything, and you would need to sort them into the tree structure yourself.
The less-brute-force approach would be to use the listObjects() that takes the bucket name and a prefix, or the listObjects() that takes a ListObjectsRequest parameter. To use filesystem terms, this will tell you the files and subdirectories in that directory. This way, you can download the pieces more easily. However, this may require a lot of HTTP requests.
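A hedged sketch with the Android/v1 SDK's listObjects(), reusing myCred from the question. It assumes MyCompany is the actual bucket; in S3 only the top level is a bucket, and everything beneath it is a key prefix rather than a nested bucket:

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

AmazonS3Client s3 = new AmazonS3Client(myCred);
ListObjectsRequest request = new ListObjectsRequest()
        .withBucketName("MyCompany")  // assumed bucket name
        .withPrefix("MyProduct/")     // the "directory" to list
        .withDelimiter("/");          // treat "/" as a directory separator
ObjectListing listing;
do {
    listing = s3.listObjects(request);
    for (String productPrefix : listing.getCommonPrefixes()) {
        System.out.println("product: " + productPrefix); // e.g. MyProduct/Product_1/
    }
    for (S3ObjectSummary summary : listing.getObjectSummaries()) {
        System.out.println("file: " + summary.getKey());
    }
    request.setMarker(listing.getNextMarker()); // page through results
} while (listing.isTruncated());

Remember to run this off the main thread on Android, since it performs network I/O.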