Retrieve secret from Kubernetes in Spring - java

I have created a secret in Kubernetes and mounted it in a volume, so part of the yaml file looks like this:
volumeMounts:
  - name: test-key
    readOnly: true
    mountPath: /opt/key
Then the secret itself contains:
My problem comes when trying to retrieve it in Spring. How am I supposed to do that?
What I've tried so far is setting spring.datasource.private-key=${PRIVATE_KEY} in application.properties, but it's not working. It gives me a placeholder error:
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'spring.datasource.private-key' in value "${spring.datasource.private-key}"
Any idea what I'm doing wrong?
UPDATE:
This is the way I'm reading the secret in Spring, but it still gives me the same placeholder error when deploying:
@Value("${spring.datasource.private-key}")
private String privateKey;

@Bean
public PrivateKey getPrivateKeyFromEnvironmentVariable() throws IOException, NoSuchAlgorithmException {
    List<String> activeProfiles = Arrays.asList(environment.getActiveProfiles());
    String key;
    if (activeProfiles.contains(LOCAL_ENVIRONMENT_NAME)) {
        key = resourceUtil.asString(LOCAL_PRIVATE_KEY_RESOURCE_PATH);
    } else if (activeProfiles.contains(TEST_ENVIRONMENT_NAME)) {
        key = generatePrivateKey();
    } else {
        //key = System.getenv(PRIVATE_KEY_ENVIRONMENT_VARIABLE_NAME);
        key = privateKey;
    }
    // ... (rest of the method, converting key to a PrivateKey, omitted in the question)
}
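For reference, the elided tail of that method, turning the key string into a java.security.PrivateKey, might look like this minimal sketch. It assumes the secret holds a Base64-encoded, unencrypted PKCS#8 RSA key (both the encoding and the algorithm are assumptions):

import java.security.GeneralSecurityException;
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.Base64;

// Minimal sketch: parse the raw key string loaded above.
// Assumes an unencrypted PKCS#8 key, Base64-encoded, using RSA.
private static PrivateKey parsePrivateKey(String key) throws GeneralSecurityException {
    byte[] der = Base64.getDecoder().decode(key);
    return KeyFactory.getInstance("RSA").generatePrivate(new PKCS8EncodedKeySpec(der));
}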

You have created a secret in Kubernetes and mounted it in a volume, but Spring Boot expects an environment variable for the PRIVATE_KEY, so the volume mount alone will not help.
So I suggest creating the Secret like this:
apiVersion: v1
data:
  private_key: YWRtaW4=
kind: Secret
metadata:
  name: mysecret
type: Opaque
Now reference this Secret and set it as an environment variable so Spring Boot can pick it up:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: busybox
      command: ["env"]
      env:
        - name: PRIVATE_KEY
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: private_key
If you run this, there will be an environment variable named PRIVATE_KEY with the value admin (that is, the YWRtaW4= from the Secret, base64-decoded).
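On the Spring side, a minimal sketch of consuming that variable (the class and field names are illustrative); Spring's environment property sources include OS environment variables, so ${PRIVATE_KEY} can be injected directly:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class PrivateKeyHolder {

    // Resolves against the PRIVATE_KEY environment variable set in the pod spec.
    @Value("${PRIVATE_KEY}")
    private String privateKey;
}

With the variable present, the original spring.datasource.private-key=${PRIVATE_KEY} property should also resolve instead of failing with the placeholder error.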

Related

Spring Boot - Config Server - Jasypt DecryptionException: Unable to decrypt: ENC()

I have a problem running the config server in my Spring Boot microservice example.
After I defined the dependency shown below, I tried to encrypt the password.
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
</dependency>
Next, I tested whether it works with the code snippet shown below.
public static void main(String[] args) {
    StandardPBEStringEncryptor standardPBEStringEncryptor = new StandardPBEStringEncryptor();
    standardPBEStringEncryptor.setPassword("demo-password");
    standardPBEStringEncryptor.setAlgorithm("PBEWithHMACSHA512AndAES_256");
    standardPBEStringEncryptor.setIvGenerator(new RandomIvGenerator());
    String result = standardPBEStringEncryptor.encrypt("spring-cloud-password");
    System.out.println(result);
    System.out.println(standardPBEStringEncryptor.decrypt(result));
}
Then I copied the result and pasted it into the yml file, wrapped as ENC(encrypted-password).
Here is the yml file:
spring:
  application:
    name: configserver
  cloud:
    config:
      server:
        git:
          uri: Github-repo-address
          username: Github-username
          password: github-token
          clone-on-start: true
          default-label: main
          fail-fast: true
  security:
    user:
      name: spring-cloud-user
      password: ENC(YcplhYriW9Uwo+pByJxBl04lqiQKGEIbBgVeIXn+DBITIHV9IUVenfknA2VHFswkm144fSrQRqjxZ17+g+z3GA==)
jasypt:
  encryptor:
    password: ${PASSWORD}
I get ${PASSWORD} from the program arguments.
Next, I ran the app, but I got the issue shown below.
com.ulisesbocchio.jasyptspringboot.exception.DecryptionException: Unable to decrypt: ENC(YcplhYriW9Uwo+pByJxBl04lqiQKGEIbBgVeIXn+DBITIHV9IUVenfknA2VHFswkm144fSrQRqjxZ17+g+z3GA==). Decryption of Properties failed, make sure encryption/decryption passwords match
at com.ulisesbocchio.jasyptspringboot.resolver.DefaultPropertyResolver.lambda$resolvePropertyValue$0(DefaultPropertyResolver.java:46)
at java.base/java.util.Optional.map(Optional.java:260)
at com.ulisesbocchio.jasyptspringboot.resolver.DefaultPropertyResolver.resolvePropertyValue(DefaultPropertyResolver.java:40)
at com.ulisesbocchio.jasyptspringboot.resolver.DefaultLazyPropertyResolver.resolvePropertyValue(DefaultLazyPropertyResolver.java:50)
at com.ulisesbocchio.jasyptspringboot.EncryptablePropertySource.getProperty(EncryptablePropertySource.java:20)
at com.ulisesbocchio.jasyptspringboot.caching.CachingDelegateEncryptablePropertySource.getProperty(CachingDelegateEncryptablePropertySource.java:41)
at com.ulisesbocchio.jasyptspringboot.wrapper.EncryptableMapPropertySourceWrapper.getProperty(EncryptableMapPropertySourceWrapper.java:31)
at org.springframework.cloud.bootstrap.encrypt.EnvironmentDecryptApplicationInitializer.merge(EnvironmentDecryptApplicationInitializer.java:236)
at org.springframework.cloud.bootstrap.encrypt.EnvironmentDecryptApplicationInitializer.merge(EnvironmentDecryptApplicationInitializer.java:207)
at org.springframework.cloud.bootstrap.encrypt.EnvironmentDecryptApplicationInitializer.decrypt(EnvironmentDecryptApplicationInitializer.java:189)
at org.springframework.cloud.bootstrap.encrypt.EnvironmentDecryptApplicationInitializer.initialize(EnvironmentDecryptApplicationInitializer.java:124)
at org.springframework.cloud.bootstrap.BootstrapApplicationListener$DelegatingEnvironmentDecryptApplicationInitializer.initialize(BootstrapApplicationListener.java:441)
at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:626)
at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:370)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:314)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226)
at com.microservices.demo.config.server.ConfigServer.main(ConfigServer.java:11)
Caused by: org.jasypt.exceptions.EncryptionOperationNotPossibleException: null
at org.jasypt.encryption.pbe.StandardPBEByteEncryptor.decrypt(StandardPBEByteEncryptor.java:1169)
at org.jasypt.encryption.pbe.StandardPBEStringEncryptor.decrypt(StandardPBEStringEncryptor.java:738)
at org.jasypt.encryption.pbe.PooledPBEStringEncryptor.decrypt(PooledPBEStringEncryptor.java:511)
at com.ulisesbocchio.jasyptspringboot.encryptor.DefaultLazyEncryptor.decrypt(DefaultLazyEncryptor.java:57)
at com.ulisesbocchio.jasyptspringboot.resolver.DefaultPropertyResolver.lambda$resolvePropertyValue$0(DefaultPropertyResolver.java:44)
... 17 common frames omitted
How can I fix it?
Edit: I passed the value as shown below:
Program Arguments -> -Djasypt.encryptor.password='Demo_Pwd!2020'
1. Make sure that the jasypt.encryptor.password property in your application.yml file is set to the same value as the demo-password that you used when encrypting the spring-cloud-password in your main method.
2. Make sure that you are passing the correct value for the PASSWORD program argument when running your application.
3. Make sure that you are using the correct algorithm when encrypting and decrypting the password. In your main method you use "PBEWithHMACSHA512AndAES_256", but it's not clear whether this is the same algorithm that Jasypt uses in your application (see the sketch after this list).
4. Make sure that you are using the correct value for the encrypted password in your application.yml file. It's possible that the value you have pasted there is incorrect or has been modified in some way.
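On point 3, one way to be sure the runtime settings match those used in main is to define your own StringEncryptor bean; jasypt-spring-boot uses a bean named jasyptStringEncryptor, when present, for all ENC(...) properties. A minimal sketch, assuming the password still arrives via the jasypt.encryptor.password property:

import org.jasypt.encryption.StringEncryptor;
import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;
import org.jasypt.iv.RandomIvGenerator;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JasyptConfig {

    @Bean("jasyptStringEncryptor")
    public StringEncryptor stringEncryptor(@Value("${jasypt.encryptor.password}") String password) {
        StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        // These settings must match the ones used when encrypting in main().
        encryptor.setPassword(password);
        encryptor.setAlgorithm("PBEWithHMACSHA512AndAES_256");
        encryptor.setIvGenerator(new RandomIvGenerator());
        return encryptor;
    }
}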

Swagger OpenAPI definition not working with Micronaut JWT security (Micronaut version 2.2.1)

I am using Micronaut 2.2.1 with JWT security and Swagger OpenAPI. The controller definition is not working, as shown below.
Application.yml
micronaut:
  application:
    name: demo
  security:
    enabled: true
    intercept-url-map:
      - pattern: /swagger-ui/**
        access:
          - isAnonymous()
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
      swagger-ui:
        paths: classpath:META-INF/swagger/views/swagger-ui
        mapping: /swagger-ui/**
Controller
@Secured(SecurityRule.IS_ANONYMOUS)
@Controller("/product")
public record ProductController(IProducer iProducer) {

    @Get(uri = "/{text}")
    public Single<String> get(String text) {
        return iProducer.sendText(text);
    }
}
api.service.yml
openapi: 3.0.1
info:
  title: API service
  description: My API
  contact:
    name: Fred
    url: https://gigantic-server.com
    email: Fred@gigagantic-server.com
  license:
    name: Apache 2.0
    url: https://foo.bar
  version: "0.0"
paths:
  /product/{text}:
    get:
      operationId: get
      parameters:
        - name: text
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: get 200 response
          content:
            application/json:
              schema:
                type: string
I was missing
intercept-url-map:
  - pattern: /swagger-ui/**
    httpMethod: GET
    access:
      - isAnonymous()
  - pattern: /swagger/**
    access:
      - isAnonymous()

Trigger Kubernetes job from another Kubernetes job

I am running a Kubernetes job (job-1) from a base pod, and it works for the basic use case. For the second use case, I want to trigger another Kubernetes job (job-2) from the already-running job-1. While running job-2 I get the service account error given below:
Error occurred while starting container for Prowler due to exception : Failure executing: POST at: https://172.20.0.1/apis/batch/v1/namespaces/my-namespace/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. jobs.batch is forbidden: User "system:serviceaccount:my-namespace:default" cannot create resource "jobs" in API group "batch" in the namespace "my-namespace".
I have created a service account with the required permissions, as given below:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-sa-service-role-binding
subjects:
  - kind: ServiceAccount
    name: my-sa
    namespace: my-namespace
roleRef:
  kind: Role
  name: my-namespace
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-sa-service-role
rules:
  - apiGroups: [""]
    resources: ["secrets", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
I am passing "my-sa" as the service account name, but it still refers to the default service account.
I am using fabric8io kubernetes client to trigger the job and below is my code:
final Job job = new JobBuilder()
        .withApiVersion("batch/v1")
        .withNewMetadata()
            .withName("demo")
            .withLabels(Collections.singletonMap("label1", "maximum-length-of-63-characters"))
            .withAnnotations(Collections.singletonMap("annotation1", "some-annotation"))
        .endMetadata()
        .withNewSpec()
            .withParallelism(1)
            .withNewTemplate()
                .withNewSpec()
                    .withServiceAccount("my-sa")
                    .addNewContainer()
                        .withName("prowler")
                        .withImage("demo-image")
                        .withEnv(env)
                    .endContainer()
                    .withRestartPolicy("Never")
                .endSpec()
            .endTemplate()
        .endSpec()
        .build();
If you look at the error message in detail, you'll find that your client is not using the service account you created (my-sa). It's using the default service account in the namespace instead:
"system:serviceaccount:my-namespace:default" cannot create resource "jobs"
And it is safe to assume that the default service account does not have the privileges to create jobs.
It is worthwhile to look into the official fabric8io documentation to see how you can authenticate with a custom service account. From what I could find in the docs, it is mostly handled by mounting the secret corresponding to the service account into the pod, then configuring your application code or setting a specific environment variable; a sketch follows below.
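A detail worth stressing: withServiceAccount("my-sa") in the JobBuilder above only applies to the pods of the job being created (job-2). The create call itself authenticates with whatever service account the calling pod (job-1) was started with, so job-1's own pod spec needs serviceAccountName: my-sa. A minimal sketch of the create call with the fabric8 client (client construction and namespace are assumptions):

import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

// The POST to /apis/batch/v1/.../jobs runs under the service account token
// mounted into the *current* pod, not the one named in the Job's pod template.
try (KubernetesClient client = new DefaultKubernetesClient()) {
    client.batch().jobs()
            .inNamespace("my-namespace")
            .create(job);
}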

TheHive 4: impossible to launch because the cortex connector module is not found

I'm trying to set up TheHive 4, but it fails to start, saying:
Cannot load module[Module [connectors.cortex.CortexConnector] cannot be instantiated
I looked at the modules loaded in the Java process and found:
/opt/thehive/lib/org.thp.thehive-cortex-4.0.0-RC1.jar
/opt/thehive/lib/org.thp.cortex-client-4.0.0-RC1.jar
/opt/thehive/lib/org.thp.cortex-dto-4.0.0-RC1.jar
Since this works with version 3 of TheHive, I looked at that version's loaded modules and found only:
/opt/thehive/lib/org.thehive-project.thehivecortex-3.3.0-1.jar
I've checked the connection to my cortex server with:
curl -H 'Authorization: Bearer OBFUSCATED' http://OBFUSCATED:9001/api/analyzer
It works.
I hope someone can help because I'm completely stuck.
Thanks in advance.
Here's my application.conf
play.http.secret.key = OBFUSCATED

# Authentication
auth {
  # ad : use ActiveDirectory to authenticate users. Configuration is under "auth.ad" key
  provider = [local]
}

# Maximum time between two requests without requesting authentication
session {
  warning = 5m
  inactivity = 1h
}

play.http.parser.maxMemoryBuffer = 1M
play.http.parser.maxDiskBuffer = 1D

# Cortex
play.modules.enabled += connectors.cortex.CortexConnector
cortex {
  "CORTEX-SERVER-ID" {
    url = "https://OBFUSCATED:9001/"
    key = "OBFUSCATED"
  }
  refreshDelay = 1 minute
  maxRetryOnError = 3
  statusCheckInterval = 1 minute
}

https.port: 9000
play.server.https.keyStore {
  path: /etc/thehive/keystore.jks
  type: JKS
  password: OBFUSCATED
}
http.port: disabled
auth.method.basic = true

db {
  provider: janusgraph
  janusgraph {
    storage {
      backend: cql
      hostname: ["127.0.0.1"] # seed node ip addresses
      #username: "<cassandra_username>" # login to connect to database (if configured in Cassandra)
      #password: "<cassandra_password>"
      cql {
        cluster-name: thehivedb # cluster name
        keyspace: thehive # name of the keyspace
        local-datacenter: datacenter1 # name of the datacenter where TheHive runs (relevant only on multi datacenter setup)
        # replication-factor: 2 # number of replica
        read-consistency-level: ONE
        write-consistency-level: ONE
      }
    }
  }
}
storage {
  provider: hdfs
  hdfs {
    root: "hdfs://thehive1:10000" # namenode server
    location: "/thehive"
    username: thehive
  }
}
This is the solution I obtained on the TheHive GitHub project:
See the key "play.modules.enabled" in application.conf. Replace
play.modules.enabled += connectors.cortex.CortexConnector
with
play.modules.enabled += org.thp.thehive.connector.cortex.CortexModule

Docker-Compose version is unsupported

I'm using Testcontainers to run Dgraph.
Here is my test code:
package net.dgraph.java.client

import io.dgraph.DgraphAsyncClient
import io.dgraph.DgraphClient
import org.testcontainers.containers.DockerComposeContainer
import org.testcontainers.containers.GenericContainer
import org.testcontainers.spock.Testcontainers
import spock.lang.Shared
import spock.lang.Specification

import java.time.Duration
import java.time.temporal.ChronoUnit

@Testcontainers
public class DGraphTest extends Specification {

    private SyncSigmaDgraphClient syncClient
    private AsyncSigmaDGraphClient asyncClient
    private static address
    private static port1
    static DockerComposeContainer compose

    def setup() {
        syncClient = SigmaDgraphClientBuilder
                .create()
                .withHost(address)
                .withPort(port1)
                .buildSync()
    }

    static {
        compose =
                new DockerComposeContainer(
                        new File("src/test/resources/docker-compose.yaml"))
        compose.start()
        this.address = compose.getServiceHost("dgraph", 8080)
        this.port1 = compose.getServicePort("dgraph", 8080)
    }
}
And my docker-compose.yaml file looks like:
version: "3.2"
services:
zero:
image: dgraph/dgraph:latest
volumes:
- /tmp/data:/dgraph
ports:
- 5080:5080
- 6080:6080
restart: on-failure
command: dgraph zero --my=zero:5080
alpha:
image: dgraph/dgraph:latest
volumes:
- /tmp/data:/dgraph
ports:
- 8080:8080
- 9080:9080
restart: on-failure
command: dgraph alpha --my=alpha:7080 --lru_mb=2048 --zero=zero:5080
ratel:
image: dgraph/dgraph:latest
ports:
- 8000:8000
command: dgraph-ratel
My Docker version is 19.03.2, build 6a30dfc, and my docker-compose version is 1.24.1, build 4667896b.
However I get the following error:
[main] ERROR 🐳 [docker/compose:1.8.0] - Log output from the failed container:
Version in "src/test/resources/docker-compose.yaml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a version of "2" (or "2.0") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
One part I find interesting is that the error log shows docker/compose:1.8.0, which is older than the version I am currently running. I have tried changing the version in my docker-compose file, but that doesn't seem to work. I have looked at other questions with the same error, and none of their solutions work. I suspect the Testcontainers library uses an older docker-compose than I do, but if that is the issue, I do not know how to fix it.
I believe you want local compose mode:
compose =
    new DockerComposeContainer(
        new File("src/test/resources/docker-compose.yaml")).withLocalCompose(true)
See the local compose mode documentation for more details:
You can override Testcontainers' default behaviour and make it use a
docker-compose binary installed on the local machine. This will
generally yield an experience that is closer to running docker-compose
locally, with the caveat that Docker Compose needs to be present on
dev and CI machines.
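One more hedged note: getServiceHost/getServicePort only resolve services registered with withExposedService, and the compose file above defines services named zero, alpha and ratel, not dgraph. Assuming "alpha" (which maps port 8080) is the service actually wanted, the setup might look like:

compose =
    new DockerComposeContainer(
        new File("src/test/resources/docker-compose.yaml"))
        .withLocalCompose(true)
        .withExposedService("alpha", 8080) // assumption: "alpha" is the intended service

address = compose.getServiceHost("alpha", 8080)
port1 = compose.getServicePort("alpha", 8080)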
This was the method I ultimately went with:
I used Network.newNetwork() to tie the zero and alpha instances together, and used debugging and docker logs to find the log message each dgraph container prints once it has started successfully, so the containers could wait for it.
static {
    Network network = Network.newNetwork()

    dgraph_zero = new GenericContainer<>("dgraph/dgraph")
            .withExposedPorts(5080)
            .withNetworkAliases("zero")
            .withStartupTimeout(Duration.of(1, ChronoUnit.MINUTES))
            .withCommand("dgraph zero --my=zero:5080")
            .withNetwork(network)
            .waitingFor(Wait.forLogMessage('.* Updated Lease id: 1.*\\n', 1))
    dgraph_zero.start()

    dgraph_alpha = new GenericContainer<>("dgraph/dgraph")
            .withExposedPorts(9080)
            .withStartupTimeout(Duration.of(1, ChronoUnit.MINUTES))
            .withNetworkAliases("alpha")
            .withCommand("dgraph alpha --my=alpha:7080 --lru_mb=2048 --zero=zero:5080")
            .withNetwork(network)
            .waitingFor(Wait.forLogMessage(".*Server is ready.*\\n", 1))
    dgraph_alpha.start()

    this.address = dgraph_alpha.containerIpAddress
    this.port1 = dgraph_alpha.getMappedPort(9080)

    ManagedChannel channel = ManagedChannelBuilder
            .forAddress(address, port1)
            .usePlaintext()
            .build()
    DgraphGrpc.DgraphStub stub = DgraphGrpc.newStub(channel)
    this.dgraphclient = new DgraphClient(stub)
    Transaction txn = this.dgraphclient.newTransaction()
}
