Get Kubernetes pods information through microservice - java

When sending a request to the API, it throws an NPE at the api.listPodForAllNamespaces step. Could you please advise what the correct configuration should be here?

As the error says:
pods is forbidden: User "system:serviceaccount:sds-test:default" cannot list resource "pods"
This means that the service account default in the namespace sds-test does not have the appropriate permissions to list pods. You are probably not specifying a service account when you deploy, so Kubernetes automatically assigns the default service account.
You need to create a ServiceAccount, grant it the required access using a Role and RoleBinding, and then update your Deployment/Pod to use the newly created ServiceAccount. Details can be found here.
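Once the ServiceAccount has the right Role/RoleBinding, the client side usually needs nothing more than the in-cluster configuration. Below is a minimal sketch, assuming the official io.kubernetes:client-java library (package names and the parameter list of listPodForAllNamespaces vary between client versions, so treat it as an outline rather than a drop-in fix):
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1Pod;
import io.kubernetes.client.openapi.models.V1PodList;
import io.kubernetes.client.util.ClientBuilder;

public class PodLister {
    public static void main(String[] args) throws Exception {
        // Build a client from the in-cluster config: the service account token and
        // CA certificate that Kubernetes mounts into every pod.
        ApiClient client = ClientBuilder.cluster().build();
        Configuration.setDefaultApiClient(client);

        CoreV1Api api = new CoreV1Api(client);
        // The parameter list differs between client versions; nulls take the defaults.
        V1PodList pods = api.listPodForAllNamespaces(
                null, null, null, null, null, null, null, null, null, null);
        for (V1Pod pod : pods.getItems()) {
            System.out.println(pod.getMetadata().getNamespace() + "/" + pod.getMetadata().getName());
        }
    }
}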

Related

Linking Keycloak accounts through Spring Boot

I was wondering if there is a way to link a broker realm user to the provider through the Keycloak library in Spring Boot.
Situation:
When we log in with a user through the realm provider, Keycloak identifies whether they exist in the broker (or creates them) and then sends an email to link the accounts.
But in the way I use Keycloak, I have a service responsible for creating these users so they can be customized for the application. In other words, when a user is created through this Spring Boot service, the idea is to check whether they exist in the provider realm and link the user created in the broker to them.
Question:
Is it possible to link the broker's account with an existing one in the provider programmatically?
Additional:
It is possible to add the link directly through the admin console, so there must be a way to do it programmatically.
Image of manual creation of account link in admin console
I tried using the setSocialLinks method and the setFederatedIdentities method, but neither seems to work.
FederatedIdentityRepresentation federatedIdentity = new FederatedIdentityRepresentation();
federatedIdentity.setIdentityProvider(super.getProviderRealmName());
federatedIdentity.setUserId(providerUserId);
federatedIdentity.setUserName(user.getUsername());
user.setFederatedIdentities(Collections.singletonList(federatedIdentity));
Response brokerResult = brokerUserResource.create(user);
Well, there is already an option to do this automatically in the Authentication flow configuration.
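If it has to happen from the Spring Boot service, a hedged sketch using the Keycloak admin client follows. It assumes that the federatedIdentities set on the UserRepresentation is ignored by the create endpoint (which matches the behaviour described above) and instead calls addFederatedIdentity on the already-created user; the class and helper names are hypothetical:
import javax.ws.rs.core.Response; // jakarta.ws.rs.core.Response on newer Keycloak versions

import org.keycloak.admin.client.CreatedResponseUtil;
import org.keycloak.admin.client.resource.UsersResource;
import org.keycloak.representations.idm.FederatedIdentityRepresentation;
import org.keycloak.representations.idm.UserRepresentation;

public class BrokerAccountLinker {

    // Create the user in the broker realm, then attach the federated identity link.
    public String createAndLink(UsersResource brokerUsers, UserRepresentation user,
                                String providerAlias, String providerUserId) {
        Response created = brokerUsers.create(user);
        String brokerUserId = CreatedResponseUtil.getCreatedId(created);

        FederatedIdentityRepresentation link = new FederatedIdentityRepresentation();
        link.setIdentityProvider(providerAlias); // alias of the identity provider configured in the broker realm
        link.setUserId(providerUserId);          // the user's id in the provider realm
        link.setUserName(user.getUsername());

        // The link is added on the already-created user resource, not on the create payload.
        brokerUsers.get(brokerUserId).addFederatedIdentity(providerAlias, link).close();
        return brokerUserId;
    }
}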

How to share data between replicas of microservice dynamically without restarting application

I have a microservice on Kubernetes, scaled up to 4 pods. I have created an object whose value can be set dynamically through a REST API, but that object only gets updated on the single pod that handles the request. I need the same value to be shared with the other 3 pods at the same time, through a single URL, without restarting the application.
I was considering Hazelcast for this requirement, but it requires admin privileges which I cannot provide.
Error
{"date":"2019-03-19T08:30:32.920+00:00","loglevel":"ERROR","logger_name":"com.hazelcast.internal.cluster.impl.DiscoveryJoiner","thread_name":"main","message":"[10.128.10.37]:5701 [some-group] [3.10.2] Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/somespace/endpoints/some-service . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. endpoints \"some-service\" is forbidden: User \"system:serviceaccount:some-test:default\" cannot get endpoints in the namespace \"somespace\": User \"system:serviceaccount:some-test:default\" cannot get endpoints in project \"somespace\".","stack_trace":"io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/gaming/endpoints/some-service . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. endpoints \"some-service\" is forbidden: User \"system:serviceaccount:some-test:default\" cannot get endpoints in the namespace \"gaming\": User \"system:serviceaccount:some-test:default\" cannot get endpoints in project \"gaming\".\n\tat io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:470)\n\tat io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:407)\n\tat io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:379)\n\tat io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)\n\tat io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)\n\tat io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)\n\tat io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:787)\n\tat io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)\n\tat io.fabric8.kubernetes.client

Azure Multifactor authentication in java desktop application

I need to authenticate via Azure AD in my application. I found this example code: https://github.com/Azure-Samples/active-directory-java-native-headless but my Azure AD is configured with MFA and I get this error:
{"error_description":"AADSTS50076: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '00000003-0000-0000-c000-000000000000'.\r\nTrace ID: 643e8491-904a-4cea-b2a6-c720dda97f00\r\nCorrelation ID: 946f5469-c2b3-4de4-8c92-ab73aabc13d3\r\nTimestamp: 2018-08-27 12:59:25Z","error":"interaction_required"}
And now I'm not sure how to provide the verification code to my application. Does anyone have any example code or a wiki on how to use it with MFA?
This sample should help you. It uses OpenID Connect with a Java application.
As for your error: in AAD, if you do an initial login in one location and then log in from another location, there are conditions in the AD that flag this as "risky activity".
So for your account there is a "moved to a new location" flag that can get set, automatically triggering the need for MFA. If you do face this, check the conditional access locations in Azure and see if you can clear the flag. (Or set up the original account with named locations in place.)
https://learn.microsoft.com/en-us/azure/active-directory/active-directory-conditional-access-locations
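As another hedged option (not part of the linked sample): MSAL for Java (msal4j) supports the device code flow, which moves the interactive MFA step into the browser, so the desktop app never has to collect the verification code itself. The client ID, tenant ID, and scope below are placeholders:
import com.microsoft.aad.msal4j.DeviceCode;
import com.microsoft.aad.msal4j.DeviceCodeFlowParameters;
import com.microsoft.aad.msal4j.IAuthenticationResult;
import com.microsoft.aad.msal4j.PublicClientApplication;

import java.util.Collections;
import java.util.Set;
import java.util.function.Consumer;

public class DeviceCodeLogin {
    public static void main(String[] args) throws Exception {
        PublicClientApplication app = PublicClientApplication
                .builder("YOUR_CLIENT_ID")
                .authority("https://login.microsoftonline.com/YOUR_TENANT_ID")
                .build();

        Set<String> scopes = Collections.singleton("https://graph.microsoft.com/.default");

        // The user opens the printed URL in a browser, enters the code, and completes MFA there.
        Consumer<DeviceCode> showInstructions = dc -> System.out.println(dc.message());

        IAuthenticationResult result = app.acquireToken(
                DeviceCodeFlowParameters.builder(scopes, showInstructions).build()).get();

        System.out.println("Signed in as: " + result.account().username());
    }
}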

Openshift monitoring - spring , display pods

Hey everyone :D I'd like to get JSON with lists of pods from OpenShift. I'm using:
Node[] nodes = template.getForObject("[url_address]/api/v1/nodes", Node[].class);
but it needs authentication, so how can I solve this problem? Any ideas?
Authorization requires a valid bearer token. The default Kubernetes client library should use the service account mounted into the pod to authenticate properly. It is likely that you either are not using a client library that does this for you, or have no proper ServiceAccount bound to the pod (or the SA has no access granted to the required resources).
For that you may want to simply add the access rights to the default service account for this project.
https://docs.openshift.com/container-platform/3.3/admin_solutions/user_role_mgmt.html
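A rough sketch of the bearer-token approach with the RestTemplate from the question, assuming Spring 5.1+ for setBearerAuth and that the JVM trusts the API server's CA certificate (mounted at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt):
import java.nio.file.Files;
import java.nio.file.Paths;

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.web.client.RestTemplate;

public class NodeListClient {
    public static void main(String[] args) throws Exception {
        // Every pod gets its service account token mounted at this path.
        String token = new String(Files.readAllBytes(
                Paths.get("/var/run/secrets/kubernetes.io/serviceaccount/token"))).trim();

        HttpHeaders headers = new HttpHeaders();
        headers.setBearerAuth(token);

        RestTemplate template = new RestTemplate();
        // Replace [url_address] with the API server address, e.g. https://kubernetes.default.svc
        String json = template.exchange("[url_address]/api/v1/nodes",
                HttpMethod.GET, new HttpEntity<>(headers), String.class).getBody();
        System.out.println(json);
    }
}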

Google Cloud Storage with a service account in Java - 403 Caller does not have storage.objects.list access to bucket

We want to download files from Google Storage in our application server. It is important to have read-only restricted access to a single bucket and nothing else.
At first I used a regular user account (not a service account) which has permissions to access all buckets in our Google Cloud project, and everything worked fine - my Java code opened buckets and downloaded files without problems.
Storage storage = StorageOptions.getDefaultInstance().getService();
Bucket b = storage.get( "mybucketname" );
Then I wanted to switch to use a specially created service account which has access to a single bucket only. So I created a service account, gave permissions to read a single bucket, and downloaded its key file. The permissions in Google Cloud Console are named as:
Storage Object Viewer (3 members) Read access to GCS objects.
gsutil command line utility works fine with this account - from the command line it allows accessing this bucket but not the others.
The initialization from the command line is done using the following command:
gcloud --project myprojectname auth activate-service-account files-viewer2@myprojectname.iam.gserviceaccount.com --key-file=/.../keyfilename.json
I even tried two different service accounts which have access to different buckets, and from the command line I can switch between them and gsutil gives access to a relevant bucket only, and for any other it returns the error:
"AccessDeniedException: 403 Caller does not have storage.objects.list access to bucket xxxxxxxxxx."
So, from the command line everything worked fine.
But in Java there is some problem with the authentication.
The default authentication I previously used with a regular user account stopped working - it reports the error:
com.google.cloud.storage.StorageException: Anonymous users does not have storage.buckets.get access to bucket xxxxxxxxxx.
Then I've tried the following code (this is the simplest variant because it relies on the key json file, but I've already tried a number of other variants found in various forums, with no success):
FileInputStream fis = new FileInputStream( "/path/to/the/key-file.json" );
ServiceAccountCredentials credentials = ServiceAccountCredentials.fromStream( fis );
Storage storage = StorageOptions.newBuilder().setCredentials( credentials )
.setProjectId( "myprojectid" ).build().getService();
Bucket b = storage.get( "mybucketname" );
And all I receive is this error:
com.google.cloud.storage.StorageException: Caller does not have storage.buckets.get access to bucket mybucketname.
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
The same error is returned no matter to what buckets I'm trying to access (even non-existing).
What confuses me is that the same service account, initialized with the same JSON key file, works fine from the command line.
So I think something is missing in Java code that ensures correct authentication.
TL;DR - If you're using Application Default Credentials (which BTW you are when you do StorageOptions.getDefaultInstance().getService();), and if you need to use the credentials from a service account, you can do so without changing your code. All you need to do is set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path of your service account json file and you are all set.
Longer version of the solution using Application Default Credentials
Use your original code as-is
Storage storage = StorageOptions.getDefaultInstance().getService();
Bucket b = storage.get( "mybucketname" );
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the full path of your json file containing the service account credentials.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_credentials.json
Run your java application once again to verify that it is working as expected.
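If you want to confirm which credentials the library actually picked up, here is a small sketch using the google-auth-library that ships with these clients (the cast to ServiceAccountCredentials is only there to print the client email):
import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.ServiceAccountCredentials;

public class AdcCheck {
    public static void main(String[] args) throws Exception {
        // Resolves Application Default Credentials the same way the storage client does.
        GoogleCredentials creds = GoogleCredentials.getApplicationDefault();
        if (creds instanceof ServiceAccountCredentials) {
            System.out.println("Using service account: "
                    + ((ServiceAccountCredentials) creds).getClientEmail());
        } else {
            System.out.println("Using credentials of type: " + creds.getClass().getSimpleName());
        }
    }
}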
Alternate solution using hard-coded Service Account Credentials
The code example you posted for initializing ServiceAccountCredentials looks valid to me on a quick glance. I tried the following code snippet and it is working for me as expected.
String SERVICE_ACCOUNT_JSON_PATH = "/path/to/service_account_credentials.json";
Storage storage =
    StorageOptions.newBuilder()
        .setCredentials(
            ServiceAccountCredentials.fromStream(
                new FileInputStream(SERVICE_ACCOUNT_JSON_PATH)))
        .build()
        .getService();
Bucket b = storage.get("mybucketname");
When specifying a service account credential, the project ID is automatically picked up from the information present in the json file. So you do not have to specify it once again. I'm not entirely sure though if this is related to the issue you're observing.
Application Default Credentials
Here is the full documentation regarding Application Default Credentials explaining which credentials are picked up based on your environment.
How the Application Default Credentials work
You can get Application Default Credentials by making a single client
library call. The credentials returned are determined by the
environment the code is running in. Conditions are checked in the
following order:
1. The environment variable GOOGLE_APPLICATION_CREDENTIALS is checked. If this variable is specified it should point to a file that defines the credentials. The simplest way to get a credential for this purpose is to create a Service account key in the Google API Console:
a. Go to the API Console Credentials page.
b. From the project drop-down, select your project.
c. On the Credentials page, select the Create credentials drop-down, then select Service account key.
d. From the Service account drop-down, select an existing service account or create a new one.
e. For Key type, select the JSON key option, then select Create. The file automatically downloads to your computer.
f. Put the *.json file you just downloaded in a directory of your choosing. This directory must be private (you can't let anyone get access to this), but accessible to your web server code.
g. Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON file downloaded.
2. If you have installed the Google Cloud SDK on your machine and have run the command gcloud auth application-default login, your identity can be used as a proxy to test code calling APIs from that machine.
3. If you are running in Google App Engine production, the built-in service account associated with the application will be used.
4. If you are running in Google Compute Engine production, the built-in service account associated with the virtual machine instance will be used.
5. If none of these conditions is true, an error will occur.
IAM roles
I would recommend going over the IAM permissions and the IAM roles available for Cloud Storage. These provide control at project and bucket level. In addition, you can use ACLs to control permissions at the object level within the bucket.
If your use case involves just invoking storage.get(bucketName), that operation requires only the storage.buckets.get permission, and the narrowest predefined role that includes it is roles/storage.legacyBucketReader.
If you also want to grant the service account permissions to get (storage.objects.get) and list (storage.objects.list) individual objects, then also add the role roles/storage.objectViewer to the service account.
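To make the mapping between calls and permissions concrete, here is a hedged sketch (the bucket name is a placeholder, and it assumes Application Default Credentials are already set up as described above):
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class PermissionDemo {
    public static void main(String[] args) {
        Storage storage = StorageOptions.getDefaultInstance().getService();

        // Needs storage.buckets.get on the bucket.
        Bucket bucket = storage.get("mybucketname");

        // Listing needs storage.objects.list; reading content needs storage.objects.get.
        for (Blob blob : bucket.list().iterateAll()) {
            byte[] content = blob.getContent();
            System.out.println(blob.getName() + " (" + content.length + " bytes)");
        }
    }
}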
Thanks to @Taxdude's long explanation, I understood that my Java code should be all right, and started looking at other possible reasons for the problem.
One of the additional things I tried was reviewing the permissions set on the service account, and that is where I found the solution – it was unexpected, actually.
When a service account is created, it must not be given project-level permissions to read from Google Storage, because then it will have read permissions to ALL buckets, and it is impossible to change that (not sure why), because the system marks these permissions as "inherited".
Therefore, you have to:
Create a "blank" service account with no permissions, and
Configure permissions from the bucket configuration
To do so:
Open Google Cloud Web console
Open Storage Browser
Select your bucket
Open the INFO PANEL with Permissions
Add the service account with the Storage Object Viewer role; there are also roles named Storage Legacy Object Reader and Storage Legacy Bucket Reader.
Because of the word "Legacy" I thought those should not be used – they look like something kept for backward compatibility. But after experimenting and adding these "legacy" roles, all of a sudden the same code I had been trying all along started working properly.
I'm still not entirely sure what the minimal set of permissions to assign to a service account is, but at least now it works with all three "read" roles on the bucket – two "legacy" and one "normal".
