I'm attempting to use Spring to access files in Google Cloud Storage buckets, with the end goal of using MultiResourceItemReader to read multiple XML files from a bucket. I currently have this process working in Spring when the XML files are local to my machine (not in GCP).
Now I want to do the same thing, but with the files in a GCP Storage bucket instead of on my machine. I can access the bucket contents outside of Spring, one file at a time. For example, this bit of test code gets access to the bucket and lists the files in it. In this snippet, I set up the credentials via the JSON key file (not an environment variable):
public static void storageDriver() throws IOException {
    // Load credentials from a JSON key file. If you can't set the
    // GOOGLE_APPLICATION_CREDENTIALS environment variable, you can explicitly
    // load the credentials file to construct the credentials.
    String bucketName = "";
    String bucketFileName = "";
    String bucketFullPath = "";

    GoogleCredentials credentials;
    File credentialsPath = new File("mycreds.json");
    try (FileInputStream serviceAccountStream = new FileInputStream(credentialsPath)) {
        credentials = ServiceAccountCredentials.fromStream(serviceAccountStream);
    }

    Storage storage = StorageOptions.newBuilder()
            .setCredentials(credentials)
            .setProjectId("myProject")
            .build()
            .getService();

    for (Bucket bucket : storage.list().iterateAll()) {
        if (bucket.getName().equalsIgnoreCase("myGoogleBucket")) {
            bucketName = bucket.getName();
            System.out.println(bucket);
            for (Blob blob : bucket.list().iterateAll()) {
                bucketFileName = blob.getName();
                bucketFullPath = "gs://" + bucketName + "/" + bucketFileName;
                System.out.println(bucketFullPath);
            }
        }
    }
}
However, when I try the equivalent with Spring, Spring complains that I don't have GOOGLE_APPLICATION_CREDENTIALS defined (which of course I don't, since I'm providing the credentials programmatically).
For example, when I add
    @Value("gs://myGoogleBucket")
    private Resource[] resources;
I get:
The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials.
Spring Cloud GCP simplifies your GCS configuration.
You can add Storage support to your app. Then either specify the location of your service account credentials through the spring.cloud.gcp.storage.credentials.location property, or log in with application default credentials using the Google Cloud SDK.
This will automatically provide you with a fully configured Storage object, and things like @Value("gs://YOUR-BUCKET/YOUR-FILE") should just work.
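For reference, a minimal application.properties sketch (assuming the spring-cloud-gcp-starter-storage dependency is on the classpath; the project ID and key-file path below are placeholders):

```properties
# application.properties -- placeholders, adjust to your project
spring.cloud.gcp.project-id=myProject
spring.cloud.gcp.credentials.location=file:/path/to/mycreds.json
```

With these properties set, a @Value("gs://myGoogleBucket/myFile.xml") injection into a Resource field should resolve without the GOOGLE_APPLICATION_CREDENTIALS environment variable being defined.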
I tried many approaches, but in the end this excerpt from the Spring docs is the one that worked for me:
Due to the way logging is set up, the GCP project ID and credentials defined in application.properties are ignored. Instead, you should set the GOOGLE_CLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS environment variables to the project ID and credentials private key location, respectively. You can do this easily if you’re using the Google Cloud SDK, using the gcloud config set project [YOUR_PROJECT_ID] and gcloud auth application-default login commands, respectively.
Related
I am trying to develop an Android application which integrates Jitsi for video conferencing. Normally, a room name is chosen and a room is created. However, anyone that knows or guesses the room name can join the call. In order to prevent this, I want to put a jwt token for conference rooms. I found a link that explains jwt token process for jitsi-meet.
The link is this: https://github.com/jitsi/lib-jitsi-meet/blob/master/doc/tokens.md
There are three things in this link that I do not understand:
Manual plugin configuration
Modify your Prosody config with these three steps:
1. Adjust plugin_paths to contain the path pointing to the Jitsi Meet Prosody plugins location. That's where plugins are copied on jitsi-meet-token package install. This should be included in the global config section (possibly at the beginning of your host config file).
plugin_paths = { "/usr/share/jitsi-meet/prosody-plugins/" }
Also, optionally set the global settings for key authorization. Both of these options default to '*', which means any issuer or audience string in incoming tokens is accepted:
asap_accepted_issuers = { "jitsi", "some-other-issuer" }
asap_accepted_audiences = { "jitsi", "some-other-audience" }
2. Under your domain config, change authentication to "token" and provide the application ID, the secret, and optionally the token lifetime:
VirtualHost "jitmeet.example.com"
    authentication = "token";
    app_id = "example_app_id";         -- application identifier
    app_secret = "example_app_secret"; -- application secret known only to your token
                                       -- generator and the plugin
    allow_empty_token = false;         -- tokens are verified only if they are supplied by the client
Alternatively, instead of using a shared secret, you can set asap_key_server to the base URL where valid/accepted public keys can be found. Keys are looked up by taking a sha256() of the 'kid' field in the JWT token header and appending .pem to the end:
VirtualHost "jitmeet.example.com"
    authentication = "token";
    app_id = "example_app_id";                              -- application identifier
    asap_key_server = "https://keyserver.example.com/asap"; -- URL of the public key server storing keys by kid
    allow_empty_token = false;                              -- tokens are verified only if they are supplied
3. Enable the room name token verification plugin in your MUC component config section:
Component "conference.jitmeet.example.com" "muc"
    modules_enabled = { "token_verification" }
These three instructions mention a "host config file", a "domain config file", and a "MUC component config section". What are these? I do not know where to make these changes.
I think my reply arrives a little late, but I'll try to contribute anyway :)
If you have installed Jitsi in "classic" way (without docker):
host config file: /etc/prosody/prosody.cfg.lua
domain config file: /etc/prosody/conf.d/<your_domain_name>.cfg.lua
MUC component config section: also in /etc/prosody/conf.d/<your_domain_name>.cfg.lua; look for the section that starts with Component "conference.<your_domain_name>" "muc"
I hope you have resolved your doubts :)
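Since the config above only covers verifying tokens, it may help to see what generating one looks like. Below is a minimal, dependency-free Java sketch of building an HS256-signed JWT for a room. The claim names (iss, aud, sub, room, exp) follow the Jitsi token docs, but the class name and example values are made up for illustration; a real deployment would normally use a JWT library rather than hand-rolling this.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JitsiTokenSketch {

    // Base64url without padding, as required by the JWT spec (RFC 7515).
    static String b64url(byte[] bytes) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Builds an HS256-signed JWT whose claims mirror the Prosody config above:
    // iss/aud should match app_id (and asap_accepted_* if set); the signing
    // key is app_secret, shared between the token generator and the plugin.
    public static String buildToken(String appId, String appSecret,
                                    String room, long expEpochSeconds) throws Exception {
        String header = b64url("{\"alg\":\"HS256\",\"typ\":\"JWT\"}"
                .getBytes(StandardCharsets.UTF_8));
        String payload = b64url(String.format(
                "{\"iss\":\"%s\",\"aud\":\"%s\",\"sub\":\"jitmeet.example.com\",\"room\":\"%s\",\"exp\":%d}",
                appId, appId, room, expEpochSeconds).getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(appSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = b64url(mac.doFinal((header + "." + payload)
                .getBytes(StandardCharsets.UTF_8)));
        return header + "." + payload + "." + signature;
    }

    public static void main(String[] args) throws Exception {
        // Example values matching the config snippets; exp is a Unix timestamp.
        System.out.println(buildToken("example_app_id", "example_app_secret",
                "myroom", 1700000000L));
    }
}
```

The resulting token is typically passed to the client as a ?jwt=<token> parameter on the conference URL.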
I'm using saml2aws to generate temporary AWS keys for my clients. Previously I was using environment variables:
aws_access_key_id=MY KEY
aws_secret_access_key=MY SECRET
After authenticating with saml2aws my ~/.aws/config file looks like this (unchanged):
[default]
output=json
And my ~/.aws/credentials looks like this:
[default]
aws_access_key_id = MY KEY ID
aws_secret_access_key = MY KEY
aws_session_token = MY SESSION TOKEN
aws_security_token = MY TOKEN
x_principal_arn = MY ARN
x_security_token_expires = TIME
When I try this from the cli with aws s3 ls it works but when I try to access S3 from the Java SDK:
AmazonS3Client(ProfileCredentialsProvider())
    .listObjects(ListObjectsRequest()
        .withBucketName("some-bucket")
        .withPrefix("some-prefix")
        .withDelimiter("/")
        .withMaxKeys(10000))
I get:
com.amazonaws.AmazonClientException: Unable to load credentials into profile. Profile Name or AWS Access Key ID or AWS Secret Access Key missing for a profile.
And it doesn't work even if I explicitly try to use the default profile: ProfileCredentialsProvider("default") or even if I don't set a provider at all!
What am I doing wrong?
AmazonS3Client().listObjects(ListObjectsRequest()
    .withBucketName("some-bucket")
    .withPrefix("some-prefix")
    .withDelimiter("/")
    .withMaxKeys(10000))
is enough; there is no need to pass ProfileCredentialsProvider() explicitly.
For more see
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#AmazonS3Client--
Hi, I have an issue with the Java SDK library for Google Cloud.
I need to query Dialogflow V2 API and I'm using this SDK (https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master)
I've followed the instructions in order to set GOOGLE_APPLICATION_CREDENTIALS as environment variable.
I made several attempts (I'm using a Mac):
export GOOGLE_APPLICATION_CREDENTIALS=path/to/my.json
doesn't work
putting
export GOOGLE_APPLICATION_CREDENTIALS=path/to/my.json
in .bash_profile
doesn't work
In my Java code:
System.setProperty("GOOGLE_APPLICATION_CREDENTIALS", "/path/to/my.json");
GoogleCredential credential = GoogleCredential.getApplicationDefault();
doesn't work
String jsonPath = "/path/to/my.json";
GoogleCredentials credentials = GoogleCredentials.fromStream(new FileInputStream(jsonPath));
Storage storage = StorageOptions.newBuilder().setCredentials(credentials).build().getService();
doesn't work
setting GOOGLE_APPLICATION_CREDENTIALS as an environment variable in Eclipse via "Maven build > Environment > New variable" and restarting the IDE
doesn't work
The same error always occurs:
An error occurred during Dialogflow http request: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information
I really can't understand what is wrong.
This is the code snippet that queries Dialogflow; it is never reached because of the above error:
SessionsClient sessionsClient = SessionsClient.create();
SessionName session = SessionName.of("my-project-id", sessionId);
InputStream stream = new ByteArrayInputStream(requestBody.getBytes(StandardCharsets.UTF_8));
QueryInput queryInput = QueryInput.parseFrom(stream);
DetectIntentResponse response = sessionsClient.detectIntent(session, queryInput);
The exception is raised on
SessionsClient sessionsClient = SessionsClient.create();
Thanks in advance.
The solution is to set GOOGLE_APPLICATION_CREDENTIALS as an environment variable for the application server.
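For example, if the application server is Tomcat (an assumption; adjust for your server and the key-file path), the variable can be exported in setenv.sh so the server process sees it:

```shell
# $CATALINA_BASE/bin/setenv.sh -- sourced by Tomcat's startup scripts
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/my.json
```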
Unlike what is mentioned in the question, I did get credentials working when provided that way:
Storage storage = StorageOptions.newBuilder().setCredentials(credentials).build().getService();
But the same approach with FirestoreOptions failed with the mentioned error "The Application Default Credentials are not available".
Note: I prefer not to configure my app via environment variables and would rather do it in code. One more reason for that: I need multiple different connections from one app.
After debugging into Google's code, I found that it does not use the supplied credentials; instead it calls getCredentialsProvider() on the provided options. I think this is a bug in their implementation, but the workaround is rather simple:
Create your own CredentialsProvider
Set it on FirestoreOptions
Dummy sample:
public static class MyCredentialsProvider implements CredentialsProvider {
    GoogleCredentials credentials;

    public MyCredentialsProvider(GoogleCredentials credentials) {
        this.credentials = credentials;
    }

    @Override
    public Credentials getCredentials() throws IOException {
        return credentials;
    }
}
...
FirestoreOptions.Builder builder = FirestoreOptions.newBuilder()
        .setProjectId(serviceAccount.project_id)
        .setCredentialsProvider(new MyCredentialsProvider(credentials));
I have just started using AWS for my project. I want to write a project that uploads critical files to an S3 bucket. I do not want to expose any secret keys, which would let other developers/users access the uploaded documents. Please provide some pointers on how to begin.
My Current Implementation:
return new AmazonS3Client(new AWSCredentials() {
    @Override
    public String getAWSAccessKeyId() {
        return accessKey;
    }

    @Override
    public String getAWSSecretKey() {
        return accessKeySecret;
    }
}, clientConfiguration);
Then I use amazonS3Client.putObject(putReq); to upload the file.
So here I am exposing my keys, which enables any colleague to download/view the files. Anyone can use them to download/upload files via s3cmd, browser plugins, etc.
On reading the AWS docs, I learned that I can use an EC2 instance and set up an IAM profile. But I am not sure how to do that in Java code. Please provide a link and an example.
Look at the InstanceProfileCredentialsProvider class. It gets IAM credentials (access/secret key) from the instance's metadata. Launch your instance under an IAM role that has a policy that permits access to S3.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new InstanceProfileCredentialsProvider())
.build();
Source reference
If your users need access to upload to S3, then they will need access to the keys, there's nothing you can do about that.
What you can do though, is to give them keys which have permissions to upload files to S3, but no permission to read/download. So, you'd have an upload policy with the PutObject permission, and a read policy with the List/Get permissions.
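As an illustration, an upload-only IAM policy along those lines might look like this (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadOnly",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::some-bucket/*"
    }
  ]
}
```

Users holding keys with only this policy can call PutObject but cannot list or download objects.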
I am trying to upload a file to S3. The code to do so is below:
AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
String key = String.format(Constants.KEY_NAME + "/%s/%s", activity_id, aFile.getName());
s3Client.putObject(Constants.BUCKET_NAME, key, aFile.getInputStream(), new ObjectMetadata());
The problem I am having is that my ProfileCredentialsProvider cannot access my AWS keys. I have set my environment variables:
AWS_ACCESS_KEY=keys go here
AWS_SECRET_KEY=keys go here
AWS_ACCESS_KEY_ID=keys go here
AWS_DEFAULT_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=keys go here
And as per Amazon's Documentation the set environment variables have precedence over any configuration files. This leads me to ask, why are my keys not being grabbed from my environment variables?
Figured it out.
If you specify a ProfileCredentialsProvider(), the AWS SDK will look for a configuration file, regardless of precedence. Simply creating an S3 client like this:
AmazonS3 s3Client = new AmazonS3Client();
will check the various locations for credentials, including environment variables.