I have a service on Google Compute Engine that is part of a multi-tenant application. We use Vault to refresh credentials every 4-5 hours through a transient account whose key file is written to this location: /dev/shm/gcp_credentials.json.
When creating a GCS storage object, I want to figure out what's the best way to update those credentials.
I noticed the storage object builder allows me to pass a credentials object. I'm using google-auth-library-credentials:0.16.2; the credentials object has a credentials.refresh() method, per the doc reference: https://javadoc.io/doc/com.google.auth/google-auth-library-credentials/0.16.2/com/google/auth/Credentials.html. For the storage object I'm using google-cloud-storage:1.83.0.
Storage refreshStorage() {
    return StorageOptions.newBuilder()
            .setCredentials(credentials)
            .setProjectId("project_id")
            .build()
            .getService();
}
What I want to understand is: if I call credentials.refresh(), does that guarantee that when I call storage.create(blobInfo, records) it re-authenticates, or do I have to call the above method again to get a new storage object with the refreshed credentials?
Calling credentials.refresh() clears the cached state of your credentials, so the proper flow is:
refresh your credentials (i.e., the rotated key file) > call credentials.refresh() so no stale cached token is reused > work with your storage object
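As a concrete illustration of that flow, here is a minimal sketch. It assumes the google-auth-library-oauth2-http artifact is on the classpath, re-reads the Vault-managed key file from the path in your question, and keeps the placeholder project ID. Note that refresh() only renews the access token from the key material already loaded into the credentials object, so re-reading the rotated file and rebuilding the Storage client is what actually picks up the new key:
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.FileInputStream;
import java.io.IOException;

Storage refreshStorage() throws IOException {
    // Re-read the key file that Vault rotates so the new key material is picked up.
    GoogleCredentials credentials;
    try (FileInputStream stream = new FileInputStream("/dev/shm/gcp_credentials.json")) {
        credentials = GoogleCredentials.fromStream(stream);
    }
    // Clear any cached token before handing the credentials to the client.
    credentials.refresh();
    return StorageOptions.newBuilder()
            .setCredentials(credentials)
            .setProjectId("project_id")
            .build()
            .getService();
}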
Related
In an Android project I use Firebase with signInAnonymously(), and I get the userId like this:
userId = FirebaseAuth.getInstance().getCurrentUser().getUid()
and I use the userId to create child nodes in the Firebase Realtime Database that only this user can access, based on that database's access rules.
The problem is that the userId changes randomly, and when that happens all content created by that user is lost to them. Is there something I can do to keep the same userId until the app is uninstalled? What other way can I use to ensure steady and exclusive access for that user to a Realtime Database child? Can the installation ID be used?
You can use Firebase Anonymous Authentication to create and use temporary accounts to authenticate with Firebase. Anonymous Authentication accounts don't persist across application uninstalls: when an application is uninstalled, everything that was saved locally is wiped out, including the anonymous auth token that identifies that account, so there is no way to reclaim that token for the user. Each time you sign in again, a new UID is generated.
What other way can I use to ensure steady and exclusive access for that user to a Realtime Database child?
To always have the same UID, you have to implement Firebase Authentication with a provider such as Google, Facebook, Twitter, and so on.
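If you also want to keep the data the anonymous user has already created, one option is to link the provider credential to the current anonymous account, which preserves the UID. A minimal sketch, assuming Google Sign-In is already configured and googleIdToken is a placeholder for the ID token it returned:
import com.google.firebase.auth.AuthCredential;
import com.google.firebase.auth.FirebaseAuth;
import com.google.firebase.auth.FirebaseUser;
import com.google.firebase.auth.GoogleAuthProvider;

AuthCredential credential = GoogleAuthProvider.getCredential(googleIdToken, null);
FirebaseAuth auth = FirebaseAuth.getInstance();
FirebaseUser currentUser = auth.getCurrentUser();

if (currentUser != null) {
    // Linking upgrades the anonymous account in place, so the UID (and the
    // Realtime Database children keyed by it) stays the same.
    currentUser.linkWithCredential(credential)
            .addOnCompleteListener(task -> {
                if (task.isSuccessful()) {
                    String uid = task.getResult().getUser().getUid(); // same UID as before the link
                }
            });
} else {
    // No anonymous session to preserve; just sign in with the provider.
    auth.signInWithCredential(credential);
}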
I want to fetch messages from a Service Bus queue in Azure. For all triggers other than HttpTrigger and KafkaTrigger I need to specify a value (a connection string) for AzureWebJobsStorage in local settings. I have a blob storage account deployed in Azure, so I took the connection string of that storage account and put it in local.settings.json:
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=xxx;AccountKey=xxx;EndpointSuffix=core.windows.net",
    "myConnection": "<Connection string>"
  }
}
but I get an exception from Azure:
The 'messageReceiver' function is in error: Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.messageReceiver'. Microsoft.WindowsAzure.Storage: No valid combination of account information found.
I checked multiple times that the connection string is right. Some suggested removing the endpoint suffix, but that didn't work either.
Thank you in advance.
Azure Function Storage account requirements
When creating a function app, you must create or link a general-purpose Azure Storage account that supports Blob, Queue, and Table storage, because Functions relies on Azure Storage for operations such as managing triggers and logging function executions. Some storage accounts don't support queues and tables; these include blob-only storage accounts, Azure Premium Storage, and general-purpose storage accounts with ZRS replication. Such unsupported accounts are filtered out of the Storage Account blade when you create a function app (see the Azure Functions storage requirements documentation for details).
You can use a local storage account (the Azure Storage Emulator) instead, if you're using a Windows 10 machine.
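For local development only, a minimal local.settings.json that points AzureWebJobsStorage at the emulator could look like this (the Service Bus connection string placeholder is the same as in your question):
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "myConnection": "<Connection string>"
  }
}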
The ACL I preset is overwritten each time I upload an object to Google Cloud Storage.
I'm creating the BlobInfo with a preset ACL, so that the uploaded object will be publicly readable:
String blobId = "PUBLIC/1";
com.google.cloud.storage.BlobInfo info = com.google.cloud.storage.BlobInfo.newBuilder(RuntimeConfig.USERDATA_BUCKET_NAME, blobId)
.setContentType(mimetype)
.setMetadata(metadata)
.setAcl(new ArrayList<>(Arrays.asList(Acl.of(Acl.User.ofAllUsers(), Acl.Role.READER)))) // <-- HERE
.build();
After this I'm signing the URL, like this:
URL signedUrl = storage.signUrl(info, 30,
TimeUnit.MINUTES,
Storage.SignUrlOption.httpMethod(com.google.cloud.storage.HttpMethod.valueOf("PUT")),
Storage.SignUrlOption.withContentType());
Then I upload the file from the web, and this works fine, with only one problem: there is no public-read ACL on the object.
Of course, I can change the ACL after the upload is done, like this:
storage.createAcl(info.getBlobId(), (Acl.of(Acl.User.ofAllUsers(), Acl.Role.READER)));
But is there a way to set it directly within the signed URL, without an additional trigger from the web after having successfully uploaded it?
Thanks!
Signed URLs vs. ACLs (Access Control Lists)
Access Control Lists (ACLs) and Signed URLs (query string authentication) are not the same thing.
While they both control who has access to your Cloud Storage buckets and objects, as well as what level of access they have, ACLs grant read or write access to users for individual buckets or objects. In most cases Cloud IAM permissions are preferred over ACLs; the latter are only used when fine-grained control over individual objects is needed.
Signed URLs, on the other hand, provide time-limited read or write access to an object through a generated URL. Anyone who has this URL has access to the object for the specified duration.
Therefore, I'm not aware of any way to implement ACLs directly within Signed URLs to answer your question.
Managing your ACLs
From Documentation:
To avoid setting ACLs every time you create a new object, you can set a default object ACL on a bucket. After you do this, every new object that is added to that bucket that does not explicitly have an ACL applied to it will have the default applied to it.
Changing the default object ACL
The following sample adds a default object ACL to a bucket:
Acl acl = storage.createDefaultAcl(bucketName, Acl.of(User.ofAllAuthenticatedUsers(), Role.READER));
The following sample deletes a default object ACL from a bucket:
boolean deleted = storage.deleteDefaultAcl(bucketName, User.ofAllAuthenticatedUsers());
if (deleted) {
// the acl entry was deleted
} else {
// the acl entry was not found
}
Propagation Details
If you change the default object ACL for a bucket, the change may take time to propagate, and new objects created in the bucket may still get the old default object ACL for a short period of time. In order to make sure that new objects created in the bucket get the updated default object ACL, you should wait at least 30 seconds between changing the default object ACL and creating new objects.
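Applied to your case, a minimal sketch would be to set the default object ACL once on the bucket and then drop setAcl(...) from the BlobInfo. Bucket name, blob ID, and content type are placeholders, and the public-read role mirrors the one from your question rather than the allAuthenticatedUsers example above:
import com.google.cloud.storage.Acl;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

Storage storage = StorageOptions.getDefaultInstance().getService();

// One-time setup: every new object in the bucket defaults to public read.
storage.createDefaultAcl("my-bucket",
        Acl.of(Acl.User.ofAllUsers(), Acl.Role.READER));

// Later, build the BlobInfo without a per-object ACL; objects uploaded through
// the signed URL then receive the bucket's default object ACL.
BlobInfo info = BlobInfo.newBuilder("my-bucket", "PUBLIC/1")
        .setContentType("application/octet-stream")
        .build();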
Background:
I have two accounts A and B. Account A owns a bucket my-bucket. I have given account B access to this bucket -- account B can read objects from and write objects to this bucket. This is working as expected.
However, account A can only read the objects in my-bucket that it has written itself. Although it can list even the objects that account B has written, it cannot read them.
Below is what I see when I try to download all objects from my-bucket using AWS CLI with AWS configuration of account A.
download: s3://my-bucket/PN1492646400000.csv to tp/PN1492646400000.csv
download: s3://my-bucket/PN1491264000000.csv to tp/PN1491264000000.csv
download: s3://my-bucket/PN1493942400000.csv to tp/PN1493942400000.csv
download failed: s3://my-bucket/PN1503346865232.csv to tp/PN1503346865232.csv An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
download: s3://my-bucket/PN1495389670000.csv to tp/PN1495389670000.csv
download: s3://my-bucket/PN1496685403000.csv to tp/PN1496685403000.csv
download: s3://my-bucket/PN1497945130000.csv to tp/PN1497945130000.csv
download: s3://my-bucket/PN1500508800000.csv to tp/PN1500508800000.csv
As one can see, I could download all files except PN1503346865232.csv (which was written by account B using the Java putObject method).
What I tried so far:
I have looked into the following two questions:
Amazon S3 file 'Access Denied' exception in Cross-Account: One of the comments suggests doing a putObject with an ACL, but does not specify which ACL.
S3: User cannot access object in his own s3 bucket if created by another user: This talks about stopping account B from putting objects into my-bucket without granting ownership access. Does adding that constraint alone give me full access?
This is how I tried to set the ACL while putting the object in the Java code:
AccessControlList acl = new AccessControlList();
acl.grantPermission(new CanonicalGrantee(S3_BUCKET_OWNER_ID), Permission.FullControl);
PutObjectRequest putObjectRequest = new PutObjectRequest(S3_BUCKET, remoteCleanedPrismFilename, fileStream, null)
.withAccessControlList(acl);
s3Client.putObject(putObjectRequest);
It throws exception saying: com.amazonaws.services.s3.model.AmazonS3Exception: Invalid id
Questions:
How am I supposed to get this ID? Is it not the AWS account ID, i.e. the 12-digit number?
Even if I find this ID, will granting this ACL be the same as bucket-owner-full-control?
This post got so many views, but not a single answer!
To help others, I am posting some workarounds that I found in my research so far. With these workarounds, account A is able to access the objects created by account B.
In the java code that is running in account B, I explicitly set the ACL on the object that is to be created.
AccessControlList acl = new AccessControlList();
acl.grantPermission(new CanonicalGrantee(S3_BUCKET_OWNER_ID), Permission.FullControl);
PutObjectRequest putObjectRequest = new PutObjectRequest(S3_BUCKET, remoteCleanedPrismFilename, fileStream, null)
.withAccessControlList(acl);
s3Client.putObject(putObjectRequest);
Here S3_BUCKET_OWNER_ID is the canonical ID of account A. Note that this is not the familiar AWS account ID, and I did not know a better way to find it than the following:
aws s3api get-object-acl --bucket my-bucket --key PN1491264000000.csv --output text --profile accountA
In my opinion this is still not an elegant solution, and I believe something better exists. I will edit this answer once I find it.
Edit:
The canonical ID of the bucket owner can be found more elegantly as follows:
String s3BucketOwnerId = s3Client.getBucketAcl(S3_BUCKET).getOwner().getId();
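Putting the two snippets together, a minimal sketch using the AWS SDK for Java v1 (objectKey and fileStream are placeholders for your own values) could look like this:
import com.amazonaws.services.s3.model.AccessControlList;
import com.amazonaws.services.s3.model.CanonicalGrantee;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.Permission;
import com.amazonaws.services.s3.model.PutObjectRequest;

// Look up the bucket owner's (account A's) canonical ID at runtime.
String bucketOwnerCanonicalId = s3Client.getBucketAcl(S3_BUCKET).getOwner().getId();

// Grant the bucket owner full control over the object account B is writing.
AccessControlList acl = new AccessControlList();
acl.grantPermission(new CanonicalGrantee(bucketOwnerCanonicalId), Permission.FullControl);

PutObjectRequest request = new PutObjectRequest(S3_BUCKET, objectKey, fileStream, new ObjectMetadata())
        .withAccessControlList(acl);
s3Client.putObject(request);
Alternatively, the SDK's canned ACLs cover the same case without needing the canonical ID at all, e.g. request.withCannedAcl(CannedAccessControlList.BucketOwnerFullControl), which corresponds to the bucket-owner-full-control ACL mentioned above (a canned ACL and an explicit AccessControlList should not be combined on the same request).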
As my question title says, is Memcache supposed to play well with Google Cloud Endpoints?
Locally it does: I can store a key/value pair using JCache in my application and read it from within a Google Cloud Endpoints API method.
When I upload my application and run it in the cloud, it returns null, exactly the way it did when I found out that we can't access sessions from inside Cloud Endpoints...
Am I doing something wrong, or is Google Cloud Endpoints not supposed to access the cache either?
I really need to share some tokens safely between my application and Cloud Endpoints, and I don't want to write to / read from the Datastore (these are volatile tokens...). Any ideas?
Endpoints definitely works with memcache, using the built-in API. I just tried the following trivial snippet within an API method and saw the incrementing values as expected:
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;
import com.google.appengine.api.memcache.StrictErrorHandler;

String key = "key";
Integer cached;

// Use the low-level App Engine Memcache API rather than JCache.
MemcacheService memcacheService = MemcacheServiceFactory.getMemcacheService();
memcacheService.setErrorHandler(new StrictErrorHandler());

cached = (Integer) memcacheService.get(key);
if (cached == null) {
    // First call: seed the counter.
    memcacheService.put(key, 0);
} else {
    // Subsequent calls: increment the cached value.
    memcacheService.put(key, cached + 1);
}
This should work for you, unless you have a specific requirement for JCache.