I'm using the AWS Java SDK within my Spring Boot app.
Currently, when I want to return the URL of the S3 object, I use:
s3Client.putObject(new PutObjectRequest(S3_BUCKET_NAME, key, fileToUpload));
URL signedUrl = s3Client.getUrl(S3_BUCKET_NAME, key);
And the signedUrl looks like this:
https://<my_bucket_name>.s3.eu-central-1.amazonaws.com/<my_key>
The problem is that this URL is invalid (it returns an HTTPS error during connection). Right now, I can't configure my custom domain and resolve the problem via a CloudFront configuration.
So my idea is to force a different format on the SDK, something like this:
https://s3.eu-central-1.amazonaws.com/<my_bucket_name>/<my_key>
Can someone point me in the right direction?
PS:
I know that I can do a simple replace on the URL, but that is not an elegant solution.
OK, the problem was with the bucket name. The dot character in the bucket name was causing all the trouble.
foo-bar-com as a bucket name works as expected.
foo.bar.cam as a bucket name causes the HTTPS exception.
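For anyone who still wants dots in the bucket name: the virtual-hosted-style URL fails because the wildcard TLS certificate for *.s3.eu-central-1.amazonaws.com only covers a single subdomain level, so every extra dot in the bucket name breaks certificate validation. If you are on the v1 Java SDK (which PutObjectRequest above suggests), a minimal sketch of forcing the path-style format looks like this:

import com.amazonaws.services.s3.S3ClientOptions;

// Force path-style URLs (https://s3.<region>.amazonaws.com/<bucket>/<key>)
// instead of virtual-hosted-style (https://<bucket>.s3.<region>.amazonaws.com/<key>).
s3Client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
URL url = s3Client.getUrl(S3_BUCKET_NAME, key); // now emitted in path-style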
I have a quick and relatively easy question, I think, but I don't get it, so here I am.
So, I've got something like this:
file.upload = Upload.upload({
    url: 'sendemail',
    data: {file: file}
});
Never mind the rest of the code. I want to know what that url: section is for. Is it for my Java Spring @RequestMapping("/sendemail")? Or is it a folder on my server where the file will be stored?
Please answer me, I just want to know :<
When you are using Java Spring, it provides you with lots of cool annotations.
One of them is
@RequestMapping()
This annotation handles routing for your services. So when you write @RequestMapping("/sendemail"), Spring maps requests for the endpoint /sendemail to that handler and does the job accordingly.
Now to your question:
url: 'sendemail' specifies that the request should be sent to a URL ending in /sendemail, so it is matched by your Spring mapping; it is not a folder on your server.
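To make the server side concrete, here is a minimal sketch of a Spring controller that would handle that upload (the class and parameter names are illustrative, not from the question):

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class EmailController {

    // Matches the url: 'sendemail' used by Upload.upload on the client side.
    @RequestMapping(value = "/sendemail", method = RequestMethod.POST)
    public String sendEmail(@RequestParam("file") MultipartFile file) {
        // The uploaded file arrives here; where (or whether) it is saved is up to this method.
        return "received " + file.getOriginalFilename();
    }
}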
I am using Stash's REST API in my project. My task is to get the tag details for a specific tag. After checking Stash's REST API documentation, I found the correct endpoint that I should be using. It is:
/rest/api/1.0/projects/{projectKey}/repos/{repositorySlug}/tags/{name:.*}
Please see this link for Stash's REST API documentation.
There is one more endpoint: /rest/api/1.0/projects/{projectKey}/repos/{repositorySlug}/tags
With this endpoint I am able to retrieve all the tags. The StashTag object looks something like this:
{
    "id": "refs/tags/v4.0.0",
    "displayId": "v4.0.0",
    "latestChangeset": "234dadf41742cfc2a10cadc7c2364438bd8891c5",
    "latestCommit": "234dadf41742cfc2a10cadc7c2278658bd8891c5",
    "hash": "null"
}
My first problem is that I don't know which field to use as the parameter for {name:.*}. Should it be the displayId, the id, or something else?
The second problem is that I don't understand what it means to have a : (colon) followed by a .* (dot-star) in the endpoint /rest/api/1.0/projects/{projectKey}/repos/{repositorySlug}/tags/{name:.*}.
Can someone explain to me the purpose of :.* in the path param and how to hit this kind of endpoint? An example of the complete endpoint would also be nice.
So far I have tried hitting
https://stashtest.abc.com/rest/api/1.0/projects/KARTIK/repos/kartiks-test-repository/tags/v4.0.0
https://stashtest.abc.com/rest/api/1.0/projects/KARTIK/repos/kartiks-test-repository/tags/refs/tags/v4.0.0
Neither of these endpoints works.
Any help is appreciated.
The {name:.*} is really just saying that the name field can be anything. Chalk this one up to poor documentation on their part. Think of it as a regex field, because that's exactly what it is: the part after the colon is a regular expression that the path parameter must match, and .* matches anything, including slashes. I'm sure at one point they had something like ^[0-9], then went back and changed it when they realized that matching only tag numbers would exclude anyone using the lightweight tag features.
Remove the v from your tag version and see if that helps. If it does not, I would also recommend creating a lightweight tag (something like mytag) and seeing if you can hit it that way (i.e., /kartiks-test-repository/tags/mytag).
Looking at that documentation, my read is that the v in your tag name is throwing off the REST call.
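For what it's worth, here is a minimal sketch of hitting the single-tag endpoint from Java (the host, project, and repository are the ones from the question; the credentials are hypothetical placeholders):

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// GET a single tag by its display name (e.g. v4.0.0).
URL url = new URL("https://stashtest.abc.com/rest/api/1.0/projects/KARTIK"
        + "/repos/kartiks-test-repository/tags/v4.0.0");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
String auth = Base64.getEncoder()
        .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
conn.setRequestProperty("Authorization", "Basic " + auth);
conn.setRequestProperty("Accept", "application/json");
System.out.println(conn.getResponseCode()); // 200 with the tag JSON on success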
I'm using the JSON API Java library to upload objects to Google Cloud Storage. I've figured out how to add the entity allUsers with role READER to get public access, but any other entity/role entries I try to add to my list of ObjectAccessControl produce a generic error like:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
{
  "code" : 400,
  "errors" : [ {
    "domain" : "global",
    "message" : "Invalid Value",
    "reason" : "invalid"
  } ]
}
...for each ACL entry I have, except the allUsers READER one, which does seem to work.
I'm not sure what it's complaining about here. I'm trying to reproduce the default permissions I see in the Developers Console, i.e., the ones applied when I don't specify any ACL on the metadata:
owners-projectId owner,
editors-projectId owner,
viewers-projectId reader,
and a user ID as owner (I am guessing this is the service account ID).
I'm adding these to the ACL list the same way as the allUsers entity. I've searched for hours trying to find documentation or similar issues, but only found the one regarding allUsers. I've tried escaping these IDs, thinking the JSON library might not be doing so for me, but I get the same errors.
Here's my relevant Java code:
// Set permissions and content type on StorageObject metadata
StorageObject objectMetadata = new StorageObject();
// set access control
List<ObjectAccessControl> acl = Lists.newArrayList();
acl.add(new ObjectAccessControl().setEntity("allUsers").setRole("READER"));
// this one allows the upload to work without error if it is the only access specified,
// but prevents me from modifying the publicly-available status or editing permissions
// via the Developers Console (I think this is to be expected)
// attempt at replicating the bucket defaults...
// adding any of these causes the error
acl.add(new ObjectAccessControl().setEntity("owners-projectId").setRole("OWNER"));
acl.add(new ObjectAccessControl().setEntityId("editors-projectId").setRole("OWNER"));
acl.add(new ObjectAccessControl().setEntityId("viewers-projectId").setRole("READER"));
objectMetadata.setAcl(acl);
where projectId is my project ID, copied from the Developers Console site.
Finally figured this out.
I first suspected my storage scope of DEVSTORAGE_READ_WRITE was not sufficient, and tried DEVSTORAGE_FULL_CONTROL, but that was not the reason.
Also, ignore my use of setEntityId(...) in my original post, although that was something I also tried, to no avail.
The problem was simply incorrect syntax in the entity argument. The document you need is this:
https://cloud.google.com/storage/docs/json_api/v1/defaultObjectAccessControls
Then you will see that the proper method looks something like:
acl.add(new ObjectAccessControl().setEntity("project-owners-projectId").setRole("OWNER"));
Oddly enough, this did NOT work:
acl.add(new ObjectAccessControl().setProjectTeam(new ProjectTeam().setProjectNumber("projectId").setTeam("owners")).setRole("OWNER"));
I suspect that method of setting the project team entity is a bug in the Java library, but I'm not sure.
The error message for incorrect entities, which keeps saying Domain global required or Domain global invalid value, is simply not very instructive. Setting a domain is not required for this case. Also see the discussion at:
What domain is Google Cloud Storage expecting in the ACL definitions when inserting an object?
Hope this helps someone else out!
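Putting it together, based on the entity syntax in that document, the full default-like ACL list should look something like this (projectId again stands in for the real project number from the Developers Console):

List<ObjectAccessControl> acl = Lists.newArrayList();
// public read access
acl.add(new ObjectAccessControl().setEntity("allUsers").setRole("READER"));
// project-team defaults; note the "project-" prefix on each entity
acl.add(new ObjectAccessControl().setEntity("project-owners-projectId").setRole("OWNER"));
acl.add(new ObjectAccessControl().setEntity("project-editors-projectId").setRole("OWNER"));
acl.add(new ObjectAccessControl().setEntity("project-viewers-projectId").setRole("READER"));
objectMetadata.setAcl(acl);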
You can set the access control permissions by using predefinedAcl; the code is as follows (the insert arguments, your bucket name, object metadata, and media content, are placeholders here):
Storage.Objects.Insert insertObject = client.objects().insert(bucketName, objectMetadata, mediaContent);
insertObject.setPredefinedAcl("publicRead");
This will work fine.
I have written a Servlet that reads one image from the Blobstore and another from GCS, applies a composite to both images, and stores the composite image back in GCS.
My code works well up to this point.
After that, when I try to get the serving URL for the composite image, I get an OBJECT_NOT_FOUND error.
Just to experiment, I manually uploaded an image to GCS and gave it all the necessary permissions: I added the service account as OWNER and gave READ access to all users. Then I tried again to get the serving URL. The following is my code:
BlobKey newImageKey = blobstoreService.createGsBlobKey(gcsPath);
//log.severe("GCS PATH: " + gcsPath + " BlobKey: " + newImageKey);
ServingUrlOptions options = ServingUrlOptions.Builder.withBlobKey(newImageKey);
String profilePicLink = imgService.getServingUrl(options);
I also tried the code below:
ServingUrlOptions options = ServingUrlOptions.Builder.withGoogleStorageFileName(gcsPath);
String profilePicLink = imgService.getServingUrl(options);
In both cases, this is the error I get:
/controller javax.servlet.ServletException:
java.lang.IllegalArgumentException: OBJECT_NOT_FOUND:
Btw, I have not enabled billing, as I am using the default bucket with the free quota. This is still in development, so the free quota works for me.
OK, so I found out exactly where the exception is happening...
byte[] responseBytes = ApiProxy.makeSyncCall(PACKAGE, "GetUrlBase",
        request.build().toByteArray());
and the exception it throws is:
ApiProxy.ApplicationException Application Error 8
I enabled billing and tried again; still no use :(
I have been trying to solve this all day and have searched everywhere for a solution.
Though this does not actually answer my original question, I have found a workaround. I installed Python and gsutil and set the default ACL of my bucket to public-read. Now when I save an image file in GCS, I just show the public URL link.
The same thing can also be achieved by adding .acl("public-read") to the GcsFileOptions.
Once the ACL is applied by either of the two methods above, in the GCS cloud console the "shared publicly" checkbox for the images shows as a dash, and it says you do not have permission to edit permissions. This confused me, as I was expecting the checkbox to be checked.
But even in this scenario, the publicly shared link works, which is:
http://storage.googleapis.com/[bucket_name]/[gcs_object_name]
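For reference, here is a minimal sketch of that second method, writing the file through the GAE GCS client with a public-read ACL (the bucket and object names are hypothetical):

import java.nio.ByteBuffer;
import com.google.appengine.tools.cloudstorage.GcsFileOptions;
import com.google.appengine.tools.cloudstorage.GcsFilename;
import com.google.appengine.tools.cloudstorage.GcsOutputChannel;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.appengine.tools.cloudstorage.GcsServiceFactory;

GcsService gcsService = GcsServiceFactory.createGcsService();
GcsFilename filename = new GcsFilename("my-bucket", "images/composite.png");
GcsFileOptions options = new GcsFileOptions.Builder()
        .mimeType("image/png")
        .acl("public-read") // makes the object readable at the public URL above
        .build();
GcsOutputChannel channel = gcsService.createOrReplace(filename, options);
channel.write(ByteBuffer.wrap(imageBytes)); // imageBytes: the composite image data
channel.close();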
I would still appreciate it if someone could explain why getServingUrl is not working. Yes, it is still not working after setting the default ACL to public-read.
Thanks,
Sukalpo.
I could not reproduce this issue by uploading to Google Cloud Storage either via the console or via the App Engine GCS Java client. In both cases I could create a public URL for the image, even without specifying any specific permissions.
Do you want to create a production issue request at
https://code.google.com/p/googleappengine/issues/entry?template=Production%20issue
so we can get more details about your specific case?
What is your gcsPath? I have to use:
"/gs/" + gcsFileName.getBucketName() + "/" + gcsFileName.getObjectName();
Honestly, the only time I've run into this error (and run into this unsolved question) was when I was accidentally using the wrong filename, in a difficult-to-notice way, while fetching the BlobKey using Google's APIs.
So, check the obvious things first.
I have hit a snag that I cannot seem to solve. My issue is with retrieving the blob key after App Engine has called my service back. I have tried using blobstoreService.getUploads(request), and I have also tried pulling the blob key from the input stream of the request that is called back to me.
The really strange part is that if I look in the dashboard, I see all of my images in the Blobstore data view.
I get this error no matter how I try to get the blob keys out:
com.google.apphosting.utils.servlet.ParseBlobUploadFilter doFilter:
Could not parse multipart message: javax.mail.internet.ParseException:
Missing ';'
I am really hung up on this one and could really use a little help.
EDIT: more of the code.
The fetch of the Blobstore upload URL:
private String fetchUrl()
{
    String url = blobstoreService.createUploadUrl("/BS/returnKey");
    return url;
}
A snippet of the callback code where the error occurs:
...
if (inUrl.contains("returnKey"))
{
    Map<String, List<BlobKey>> blobs = blobstoreService.getUploads(req);
...
So in my dev environment (the development app server packaged with the GAE plugin for Eclipse) it works fine, but after I deploy to App Engine, the same code does not work.
I also tried pulling the data out of the input stream from the request, with the same results (working on dev, not on prod).
Thanks to everyone for your help!
The issue was that you cannot have spaces in the id of the input on the form. I feel like there should be a more obvious error.
In any event, I hope that someone finds this useful!
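For anyone else who lands here, a minimal sketch of the working setup (the input name myFile is a hypothetical example; the point is that it contains no spaces):

// The form posting to the URL from createUploadUrl("/BS/returnKey") must use a
// file input whose id/name has no spaces, e.g. <input type="file" name="myFile">.
// In the callback servlet, that same name is the key into the uploads map:
Map<String, List<BlobKey>> blobs = blobstoreService.getUploads(req);
List<BlobKey> keys = blobs.get("myFile"); // matches the input's name attribute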