Creating a mysql docker container setting env variables - java

I am using the spotify-docker-client to create and start a MySQL container for testing. It works perfectly, but I am having a hard time figuring out how to set the values needed to connect to the database, such as MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD. This is my code:
final ContainerConfig containerConfig = ContainerConfig.builder()
    .hostConfig(hostConfig)
    .image(image)
    .env("MYSQL_ROOT_PASSWORD", "testrootpwd", "MYSQL_DATABASE", "test", "MYSQL_USER", "test", "MYSQL_PASSWORD", "test")
    .build();
LOG.debug("Creating container for image: {}", image);
final ContainerCreation creation = this.docker.createContainer(containerConfig);
I am assuming that the .env call is meant to set environment variables, and according to the mysql image documentation, setting those env variables is the way to do it:
https://hub.docker.com/_/mysql
But I still can't connect to the container. I opened a bash shell inside it and I can see that those env variables are not set.
Does anyone know how to do it?
I could write a Dockerfile and build my own image, but I don't want to do that; I want to do it with the Spotify client.

This client uses the Docker API, so if the client's documentation is lacking you can always check the original API.
Check the "Create a container" section in the Docker Engine API.
You can see that there is a JSON request example with an Env field:
"Env": [
"FOO=bar",
"BAZ=quux"
],
So my guess is that you can do just that in your Java code:
final ContainerConfig containerConfig = ContainerConfig.builder()
    .hostConfig(hostConfig)
    .image(image)
    .env("MYSQL_ROOT_PASSWORD=testrootpwd", "MYSQL_DATABASE=test", ...)
    .build();
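Spelled out with all four variables from your question (same values as in your original .env call), the builder would look roughly like this:
final ContainerConfig containerConfig = ContainerConfig.builder()
    .hostConfig(hostConfig)
    .image(image)
    // each entry is a single "NAME=value" string, matching the Docker Engine API format
    .env("MYSQL_ROOT_PASSWORD=testrootpwd",
        "MYSQL_DATABASE=test",
        "MYSQL_USER=test",
        "MYSQL_PASSWORD=test")
    .build();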
P.S. Also please note what the documentation says regarding this param:
A list of environment variables to set inside the container in the
form ["VAR=value", ...]. A variable without = is removed from the
environment, rather than to have an empty value.
Might help you avoid bugs later.

Related

Test containers: ignore parent `EXPOSE` instruction from Dockerfile

I'm trying to run a Couchbase 5.1.1 docker container for test purposes via Testcontainers with fixed exposed ports, like:
trait CouchbaseTestEnvironment extends ForAllTestContainer {
  this: Suite =>

  def couchbaseContainer: FixedHostPortGenericContainer = {
    val consumer = new Slf4jLogConsumer(LoggerFactory.getLogger(getClass))

    /*
     * Couchbase needs to know which ports are exposed to the client, because this is how it exposes its services.
     * E.g. the client asks only on one port - say 8091 - while the query service runs on port 8093. The client
     * won't ask for every port; instead Couchbase tells the client on which port the query service is exposed,
     * so Couchbase has to be aware of the port mapping. That's why we need to give Couchbase the port mappings.
     *
     * See for more details:
     * https://stackoverflow.com/questions/59277436/couchbase-in-docker-for-integration-tests-make-the-ports-8092-8093-8094-and-8
     */
    def randomPort: Int = {
      val (from, to) = (32768, 35000) // Linux private port range
      from + Random.nextInt(to - from)
    }

    val random8091Port = randomPort
    val random8092Port = randomPort
    val random8093Port = randomPort
    val random8094Port = randomPort
    val random11210Port = randomPort

    val container = FixedHostPortGenericContainer(
      imageName = "couchbase:community-5.0.1",
      exposedHostPort = random8091Port,
      exposedContainerPort = random8091Port,
      env = Map(
        "COUCHBASE_RANDOM_PORT_8091" -> random8091Port.toString,
        "COUCHBASE_RANDOM_PORT_8092" -> random8092Port.toString,
        "COUCHBASE_RANDOM_PORT_8093" -> random8093Port.toString,
        "COUCHBASE_RANDOM_PORT_8094" -> random8094Port.toString,
        "COUCHBASE_RANDOM_PORT_11210" -> random11210Port.toString
      )
    )

    container.container.withFixedExposedPort(random8092Port, random8092Port)
    container.container.withFixedExposedPort(random8093Port, random8093Port)
    container.container.withFixedExposedPort(random8094Port, random8094Port)
    container.container.withFixedExposedPort(random11210Port, random11210Port)
    container.container.withLogConsumer(consumer)
    container
  }
}
So as you can see, 5 FIXED ports should be exposed.
But when I run the tests I can actually see that other ports are exposed instead, mapped to random host ports:
docker ps
f4fc1ce06544 couchbase:community-5.0.1 "/entrypoint.sh /opt…" 59 seconds ago Up 1 second 0.0.0.0:55264->8091/tcp, 0.0.0.0:55263->8092/tcp, 0.0.0.0:55262->8093/tcp, 0.0.0.0:55261->8094/tcp, 0.0.0.0:55260->11207/tcp, 0.0.0.0:55259->11210/tcp, 0.0.0.0:55258->11211/tcp, 0.0.0.0:55257->18091/tcp, 0.0.0.0:55256->18092/tcp, 0.0.0.0:55255->18093/tcp, 0.0.0.0:55254->18094/tcp unruffled_mendel
03b491ac2ea8 testcontainersofficial/ryuk:0.3.0
So as you can see, other ports were exposed and mapped to random host ports instead of fixed ones.
As far as I understand, Testcontainers ignores the ports I gave it and instead exposes the ports from the Couchbase Dockerfile: https://github.com/couchbase/docker/blob/master/community/couchbase-server/5.1.1/Dockerfile#L74
EXPOSE 8091 8092 8093 8094 8095 8096 11207 11210 11211 18091 18092 18093 18094 18095 18096
Can I somehow force Test containers to ignore EXPOSE instruction?
A question that partially helped: Couchbase in docker for integration tests: Make the ports 8092, 8093, 8094 and 8095 configurable to be able to use docker’s random ports
Can I somehow force Test containers to ignore EXPOSE instruction?
I don't know whether there is a simple configuration option for this, but a workaround I found is to use an advanced feature: customization of the docker-java create-container command. I'm providing an example in Java; please translate it to Scala yourself. Apply it as the last call before returning the container object from your function:
container.withCreateContainerCmdModifier(
    cmd -> cmd.getHostConfig().withPublishAllPorts(false)
);
The main point here is the use of .withPublishAllPorts(false). From my understanding, this corresponds to the --publish-all (or -P) argument of the docker run command. The Testcontainers library sets this value to true by default; this modification overrides it to false.
With this configuration, no ports are published at all for your example, rather than the 5 fixed ones you expected:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2ee4fb91b97c couchbase:community-5.0.1 "/entrypoint.sh couc…" 33 seconds ago Up 32 seconds 8091-8094/tcp, 11207/tcp, 11210-11211/tcp, 18091-18094/tcp trusting_keldysh
This is because in the answer you linked, the author created a special custom Docker image of Couchbase which "understands" environment variables like COUCHBASE_RANDOM_PORT_8091. Your code uses the standard Couchbase image couchbase:community-5.0.1, which basically just ignores these environment variables. So in order to run Couchbase on non-standard internal ports, you need to build a custom image with the "magic" configure-node.sh script, which tunes the Couchbase config using the values provided in those environment variables.
I hope it helps anyhow :)

Using Docker Secrets with Spotify Docker Client

I'm using Spotify's docker-client, but have run into a documentation wall. I'm trying to figure out how to pass Docker secrets that already exist in the environment to the containers built using docker-client. The documentation only shows how to create secrets, which isn't very useful since the secrets already exist. I'm able to get a list of secrets in the environment using listSecrets in DockerClient, but I have no way to convert them from Secret to SecretBind. Any help is very much appreciated.
I figured this out by looking through the spotify/docker-client code. The documentation does not show how to convert a Secret into the SecretBind that ContainerSpec needs in order to pass in the Docker secrets.
public SecretBind createBind(Secret secret) {
    SecretFile file = SecretFile.builder()
        .name(secret.secretSpec().name())
        .uid("0")
        .gid("0")
        .build();
    SecretBind bind = SecretBind.builder()
        .secretName(secret.secretSpec().name())
        .secretId(secret.id())
        .file(file)
        .build();
    return bind;
}
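As a usage sketch (not from the original answer): assuming the listSecrets call mentioned in the question and an already-created secret whose name you know, you can look it up and convert it with the helper above. The name "my_secret" below is just a hypothetical example.
final List<Secret> secrets = docker.listSecrets();
SecretBind bind = null;
for (final Secret secret : secrets) {
    // match the existing secret by its spec name ("my_secret" is a placeholder)
    if ("my_secret".equals(secret.secretSpec().name())) {
        bind = createBind(secret);
        break;
    }
}
// the resulting SecretBind can then be supplied to the ContainerSpec of your service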

How to run downloaded App Router via Service Marketplace

I downloaded XS_JSCRIPT14_10-70001363 package from Service Marketplace.
Please suggest how to run this App Router login form on localhost.
I am trying with the npm start command, but I am getting a UAA service exception. How can I handle this from localhost?
When you download the approuter, either via npm or the Service Marketplace, you have to provide two additional files for a basic setup inside the approuter directory (besides package.json, xs-app.json, etc.).
The default-services.json file holds the variables that tell the approuter where to find the correct authentication server (e.g., XSUAA). You have to provide at least the clientid, clientsecret, and url of the authorization server in this file, like this:
{
    "uaa": {
        "url": "http://my.uaa.server/",
        "clientid": "client-id",
        "clientsecret": "client-secret",
        "xsappname": "my-business-application"
    }
}
You can get these parameters, for example, after binding your application on SAP Cloud Platform, Cloud Foundry to an (empty) instance of XSUAA; you can then retrieve the values via cf env <appname> from the VCAP_SERVICES/xsuaa properties (they have exactly the same property names).
In addition, you need the default-env.json file, which holds at least the destinations variable that tells the approuter which backend microservice to forward the received JSON Web Token to. It may look like this:
{
    "destinations": [{
        "name": "my-destination",
        "url": "http://localhost:1234",
        "forwardAuthToken": true
    }]
}
Afterwards, inside the approuter directory you can simply run npm start, which serves the approuter by default at http://localhost:5000. It also writes helpful console output you can use to debug the parameters above.
EDIT: Turns out I was incorrect, it is apparently possible to run the approuter locally.
First of all, here is the documentation for the approuter: https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/01c5f9ba7d6847aaaf069d153b981b51.html
As far as I understood, you need to provide two files to the approuter for it to run locally, default-services.json and default-env.json (put them in the same directory as your package.json).
The default-services.json has a format like this:
{
    "uaa": {
        "url": "http://my.uaa.server/",
        "clientid": "client-id",
        "clientsecret": "client-secret",
        "xsappname": "my-business-application"
    }
}
The default-env.json is simply a JSON file holding the environment variables that the approuter needs to access, like so:
{
    "VCAP_SERVICES": <env>,
    ...
}
Unfortunately, the documentation does not state which variables are required, therefore I cannot provide you with a working example.
Hope this helps you! Should you manage to get this running, I'm sure others would appreciate it if you shared your knowledge here.

update vcap env from application

Is there a way in CF to update the VCAP env port for a service from my application code? Let's say I want to change the port to 12345, e.g.:
{
    "VCAP_SERVICES": {
        "mongodb": [
            {
                "credentials": {
                    "dbname": "ztmvvvmtrz",
                    "hostname": "13.15.241.29",
                    "password": "abzArl7AsssseKpi",
                    "port": "22241",
When I try cf set-env it only updates the user-provided env, which doesn't help...
An example in Java or Node.js would be great.
I'm not sure exactly which information you're looking to change here, but values in environment variables like VCAP_SERVICES, VCAP_APPLICATION, PORT and anything starting with CF_ like CF_INSTANCE_PORT, CF_INSTANCE_PORTS and/or CF_INSTANCE_IP are all provided for you by the platform. They are effectively static. Changing them will not do anything.
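For reference, a minimal Java sketch of how an application typically consumes these values: it can only read VCAP_SERVICES from its environment; changing the port has to happen on the platform side (e.g. by rebinding the service), not from application code. The use of org.json here is just an illustrative assumption; any JSON library works.
import org.json.JSONObject;

// VCAP_SERVICES is injected by Cloud Foundry; the application can read it but not modify it
final String vcapServices = System.getenv("VCAP_SERVICES");
if (vcapServices != null) {
    // structure as in the excerpt above: mongodb -> [ { credentials: { port: ... } } ]
    final JSONObject credentials = new JSONObject(vcapServices)
        .getJSONArray("mongodb")
        .getJSONObject(0)
        .getJSONObject("credentials");
    System.out.println("mongodb port: " + credentials.getString("port"));
}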

Launching Instance: VPC security groups may not be used for a non-VPC launch

I'm attempting to create an instance in another region, but I get this error:
AWS Error Code: InvalidParameterCombination, AWS Error Message: VPC security groups may not be used for a non-VPC launch
Here is the code I'm executing.
RunInstancesRequest instancereq = new RunInstancesRequest();
instancereq.setInstanceType("m3.medium");
instancereq.setImageId("ami-37b1b45e");
instancereq.setMinCount(1);
instancereq.setMaxCount(1);
ArrayList<String> secgroup = new ArrayList<String>();
instancereq.setKeyName("testkey");
secgroup.add("testdefault");
instancereq.setSecurityGroups(secgroup);
instancereq.setPlacement(getAzPlacement());
RunInstancesResult instanceresult = ec2.runInstances(instancereq);
I've also tried using the actual group id (sg-########) instead of the name "testdefault", but then I get an error saying that the security group doesn't exist (which is wrong, it does). Based on the API doc, when using a non-default VPC you should pass the actual group id, but I get an error like this:
InvalidGroup.NotFound, AWS Error Message: The security group 'sg-########' does not exist
If I use "default" as the setSecurityGroups it will use the default VPC. It just doesn't seem like like the groupid I'm passing, despite it being accurate.
Also, if I comment out the setSecurityGroups code, and use setSubnetId instead and pass the subnet id, it will create the instance just fine, but it goes into the "default" security group, not "testdefault" like I want.
All I'm trying to accomplish is creating an instance and having it use the already existing VPC group.
My answer will focus on the statement below:
All I'm trying to accomplish is creating an instance and having it use the already existing VPC group.
So, as I understand it, you want to launch an instance in a non-default VPC and assign an existing VPC security group to it.
I am not a Java guy, but I could do what you want in Ruby as shown below.
require 'aws-sdk-core'

Aws.config = {
  :access_key_id => "my_access_key",
  :secret_access_key => "my_secret_key",
  :region => 'us-west-2'
}

ec2 = Aws::EC2.new

ec2.run_instances(
  min_count: 1,
  max_count: 1,
  image_id: 'ami-8635a9b6',
  instance_type: 't1.micro',
  placement: {
    availability_zone: 'us-west-2a'
  },
  network_interfaces: [
    {
      subnet_id: 'subnet-e881bd63',
      groups: ['sg-fd53bf5e'],
      device_index: 0,
      associate_public_ip_address: true
    }
  ],
  key_name: 'my-key'
).each do |resp|
  resp.instances.each do |x|
    puts x.instance_id
  end
end
Although this is Ruby code, it is pretty straightforward and should give you clear hints on how to do it in Java, as all of these AWS SDKs call the same web service APIs.
I guess the things you should concentrate on in the code above are:
:region => 'us-west-2'
and
placement: {
  availability_zone: 'us-west-2a'
},
network_interfaces: [
  {
    subnet_id: 'subnet-e881bd63',
    groups: ['sg-fd53bf5e'],
    device_index: 0,
    associate_public_ip_address: true
  }
],
Make sure you explicitly specify the region.
Check how I have defined the subnet ID and security group ID. This code will launch my EC2 instance in subnet-e881bd63 of my VPC and apply VPC security group ID sg-fd53bf5e to its 0th network interface. It will also assign a public IP address to my instance (by default, a public IP address is not assigned when you launch instances in a VPC).
FYI: when you launch instances in a VPC, you must provide the security group ID instead of the security group name.
The same error occurs when using the command line tool, so I'm adding a separate answer (helped along by QuickNull's). Simply make sure you specify the security group ID and subnet ID. For example:
aws ec2 run-instances --image-id ami-XXXXXXXX --count 1 --instance-type t1.micro --key-name XXXXXXXX --security-group-ids sg-XXXXXXXX --subnet-id subnet-XXXXXXXX
You can't specify security group names for a VPC launch (setSecurityGroups). For a non-default VPC, you must use security group IDs instead. See the EC2 run-instances page (withSecurityGroupIds, or --security-group-ids from the CLI).
When you specify a security group for a nondefault VPC to the CLI or the API actions, you must use the security group ID and not the security group name to identify the security group.
See: Security Groups for EC2-VPC
Related:
Terraform throws "groupName cannot be used with the parameter subnet" or "VPC security groups may not be used for a non-VPC launch"
Thanks to @slayedbylucifer for his Ruby code; here's the Java solution for reference:
// Creates an instance in the specified subnet of a non-default VPC and uses the
// security group with id sg-1234567
ec2.runInstances(new RunInstancesRequest()
    ...
    .withSubnetId("subnet-1234abcd")
    .withSecurityGroupIds("sg-1234567"));
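For a slightly fuller sketch that mirrors the Ruby example (subnet, security group ID, and public IP configured on network interface 0), assuming the AWS SDK for Java v1 used in the question; the AMI, instance type, and key name are taken from the question, and the subnet and group IDs are the placeholders from the snippet above:
// sketch: launch one instance into a specific VPC subnet with an existing VPC security group
final InstanceNetworkInterfaceSpecification nic = new InstanceNetworkInterfaceSpecification()
    .withDeviceIndex(0)
    .withSubnetId("subnet-1234abcd")        // your VPC subnet
    .withGroups("sg-1234567")               // security group ID, not its name
    .withAssociatePublicIpAddress(true);    // instances in a VPC get no public IP by default

final RunInstancesRequest request = new RunInstancesRequest()
    .withImageId("ami-37b1b45e")
    .withInstanceType("m3.medium")
    .withMinCount(1)
    .withMaxCount(1)
    .withKeyName("testkey")
    .withNetworkInterfaces(nic);            // subnet and groups go on the interface, not the request

final RunInstancesResult instanceresult = ec2.runInstances(request);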
