I am trying to use Postgres in WildFly Full 9.0.1.Final. Unfortunately, for a task that should be simple, two of us have spent days trying to figure out how to make this work.
I should add that I am using Docker, and as I build the docker image I am trying various ways to add postgres support to WildFly Full 9.0.1.Final.
We have tried the following:
batch file
batch
connect
#module add --name=org.postgres --resources=/opt/jboss/wildfly/psql-jdbc.jar --dependencies=javax.api,javax.transaction.api
#module add --name=org.postgres --resources=/opt/jboss/wildfly/postgresql-9.3-1101.jdbc41.jar --dependencies=javax.api,javax.transaction.api
module add --name=org.postgresql --slot=main --resources=/opt/jboss/wildfly/postgresql-9.3-1101.jdbc41.jar --dependencies=javax.api,javax.transaction.api
#/subsystem=datasources/jdbc-driver=postgresql:add(driver-name="postgresql",driver-module-name="org.postgresql",driver-class-name=org.postgresql.Driver)
#datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
/subsystem=datasources/jdbc-driver=postgres:add(driver-name="org.postgresql",driver-module-name="org.postgresql",driver-class-name=org.postgresql.Driver)
#The batch failed with the following error (you are remaining in the batch editing mode to have a chance to correct the error): {"WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:" => {"Operation step-1" => "WFLYJCA0041: Failed to load module for driver [org.postgresql]"}}
#/subsystem=datasources/jdbc-driver=postgres:add(driver-name="postgres",driver-module-name="org.postgresql",driver-class-name=org.postgresql.Driver)
# Add the datasource
#data-source add --jndi-name=java:jboss/datasources/ISDS --name=pu-magick --connection-url=jdbc:postgresql://UI_PG_DATABASE:5432/magick --driver-name=postgresql --user-name=magick --password=magick
run-batch
In a Dockerfile
ADD modules /opt/jboss/wildfly/modules/
This attempt resulted in:
Caused by: java.lang.IllegalStateException: No layers directory found at /opt/jboss/wildfly/modules/system/layers
at org.jboss.modules.LayeredModulePathFactory.resolveLayeredModulePath(LayeredModulePathFactory.java:65)
at org.jboss.modules.LocalModuleFinder.getRepoRoots(LocalModuleFinder.java:111)
at org.jboss.modules.LocalModuleFinder.<init>(LocalModuleFinder.java:107)
at org.jboss.modules.LocalModuleFinder.<init>(LocalModuleFinder.java:88)
at org.jboss.modules.LocalModuleLoader.<init>(LocalModuleLoader.java:57)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.jboss.modules.DefaultBootModuleLoaderHolder$1.run(DefaultBootModuleLoaderHolder.java:37)
at org.jboss.modules.DefaultBootModuleLoaderHolder$1.run(DefaultBootModuleLoaderHolder.java:33)
at java.security.AccessController.doPrivileged(Native Method)
at org.jboss.modules.DefaultBootModuleLoaderHolder.<clinit>(DefaultBootModuleLoaderHolder.java:33)
... 1 more
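In hindsight, this failure indicates that the ADD replaced the entire modules directory and wiped out the modules/system/layers layout WildFly expects. A sketch of an ADD that preserves it, assuming the build context has a local modules/org/postgresql/main/ directory containing module.xml and the driver jar:
# Add the module under the layered path instead of replacing modules/ wholesale.
# (The local modules/org/postgresql/main/ directory is an assumption about the build context.)
ADD modules/org/postgresql/main/ /opt/jboss/wildfly/modules/system/layers/base/org/postgresql/main/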
Adding driver to deployment directory
I have found in the past that a database driver can be added to JBoss by dropping the jar into the deployments directory. This, in my opinion, is about as complicated as it needs to be.
So, I also tried copying postgresql-9.3-1101.jdbc41.jar to /opt/jboss/wildfly/standalone/deployments/
DockerFile
FROM wildflyext/wildfly-camel
MAINTAINER ah <ah#domain.io>
ENV TMPDIR /tmp/
ENV WFDIR /opt/jboss/wildfly/
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
## COPY PG MODULE TO SERVER
#ADD module.xml opt/jboss/wildfly/modules/
#ADD standalone.xml $WFDIR/standalone/configuration/
#ADD system /opt/jboss/wildfly/modules/
## COPY PG DRIVER TO SERVER
#ADD postgresql-9.3-1101.jdbc41.jar /opt/jboss/wildfly/standalone/deployments/
#ADD postgresql-9.3-1101.jdbc41.jar /opt/jboss/wildfly/
#ADD psql-jdbc.jar $WFDIR/standalone/deployments/
## COPY STANDALONE TO SERVER
ADD standalone-camel.xml /opt/jboss/wildfly/standalone/configuration/
ADD config.sh $TMPDIR
ADD batch.cli $TMPDIR
RUN $TMPDIR/config.sh
#CMD ["-c", "standalone-camel.xml"] # loads correct standalone, cannot access mgmt console - connection interupted
#CMD ["-b", "0.0.0.0", "-c", "standalone-camel.xml"] # does not load correct standalone, cannot access mgmt console - connection interupted
#CMD ["-c", "standalone-camel.xml"m "-b", "0.0.0.0"] # WFLYSRV0073: Invalid option '/bin/sh'
#CMD ["-c", "standalone-camel.xml", "-b", "0.0.0.0"] # loads correct standalone, cannot access mgmt console - connection interupted
# attempt with two CMDs = loads incorrect standalone - standalone.xml, not standalone-camel.xml, cannot access mgmt console - connection interupted
#CMD ["-c", "standalone-camel.xml"]
#CMD ["-b", "0.0.0.0"]
# attempt with NO CMDs
The solution was the following:
In the CLI file, I needed to have the following; the trick was getting the naming right:
module add --name=org.postgresql --slot=main --resources=/opt/jboss/wildfly/postgresql-9.3-1101.jdbc41.jar --dependencies=javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name="postgresql",driver-module-name="org.postgresql",driver-class-name=org.postgresql.Driver)
And I needed to remove this driver definition from the standalone XML, as it would clash with the instructions above.
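For reference, a sketch of the full batch.cli this implies; the datasource name, JNDI name, connection URL, and credentials below are placeholders, not the exact values from my setup:
connect
module add --name=org.postgresql --slot=main --resources=/opt/jboss/wildfly/postgresql-9.3-1101.jdbc41.jar --dependencies=javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name="postgresql",driver-module-name="org.postgresql",driver-class-name=org.postgresql.Driver)
data-source add --jndi-name=java:jboss/datasources/ExampleDS --name=example-ds --connection-url=jdbc:postgresql://dbhost:5432/exampledb --driver-name=postgresql --user-name=dbuser --password=dbpass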
Environment: solr-8.9.0, java version "11.0.12" 2021-07-20 LTS, apache-zookeeper-3.6.1-bin
To set up SolrCloud I have done the following steps:
Setting-up Zookeeper on Node 1
a. Go inside <ZK_HOME>/conf directory.
b. Make a copy of zoo_sample.cfg & rename to zoo.cfg (or mv zoo_sample.cfg to zoo.cfg)
c. Edit zoo.cfg and modify the dataDir parameter to a directory location where you would like ZooKeeper to store its data.
dataDir=<ZK_HOME>/conf/data
d. Now start Zookeeper with command
./bin/zkServer.sh start
Solr Setup on Node 1 / Machine 1
a. Create directory solr-8.9.0/server/solr/node1/solr/.
b. Copy the default zoo.cfg & solr.xml from solr-8.9.0/server/solr to solr-8.9.0/server/solr/node1/solr/
c. Now let's start Solr using the command below (basically you want to start in cloud mode with ZooKeeper):
./bin/solr start -cloud -s solr-8.9.0/server/solr/node1/solr -p 8983 -z <Node1 IP>:2181 -m 2g
Solr Setup on Node 2 / Machine 2
a. Create directory solr-8.9.0/server/solr/node1/solr/.
b. Copy the default zoo.cfg & solr.xml from solr-8.9.0/server/solr to solr-8.9.0/server/solr/node1/solr/
c. ./solr start -cloud -s solr-8.9.0/server/solr/node1/solr -p 8983 -z <Node1 IP>:2181 -m 2g
Upload configs to Zookeeper
a. ./server/scripts/cloud-scripts/zkcli.sh -zkhost <Node1 IP>:2181 -cmd upconfig -confname _defaults -confdir solr-8.9.0/server/solr/configsets/_defaults/conf
Creating a collection
http://<Node1 IP>:8983/solr/admin/collections?action=CREATE&name=<myCollection>&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=_defaults
But I am getting the following error while creating the collection:
{
"responseHeader":{
"status":400,
"QTime":1213},
"failure":{
"$Node2:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://$Node2:8983/solr: Path /home/solr/solr-8.9.0/server/solr/node1/solr/myCollection_shard1_replica_n2 must be relative to SOLR_HOME, SOLR_DATA_HOME coreRootDirectory. Set system property 'solr.allowPaths' to add other allowed paths.",
"$Node2:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://$Node2:8983/solr: Path /home/solr/solr-8.9.0/server/solr/node1/solr/myCollection_shard2_replica_n6 must be relative to SOLR_HOME, SOLR_DATA_HOME coreRootDirectory. Set system property 'solr.allowPaths' to add other allowed paths.",
"127.0.1.1:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://127.0.1.1:8983/solr: Path /data/Lucene/solr/solrcloud/solr-8.9.0/server/solr/node1/solr/myCollection_shard2_replica_n4 must be relative to SOLR_HOME, SOLR_DATA_HOME coreRootDirectory. Set system property 'solr.allowPaths' to add other allowed paths.",
"127.0.1.1:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://127.0.1.1:8983/solr: Path /data/Lucene/solr/solrcloud/solr-8.9.0/server/solr/node1/solr/myCollection_shard1_replica_n1 must be relative to SOLR_HOME, SOLR_DATA_HOME coreRootDirectory. Set system property 'solr.allowPaths' to add other allowed paths."},
"Operation create caused exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Underlying core creation failed while creating collection: myCollection",
"exception":{
"msg":"Underlying core creation failed while creating collection: myCollection",
"rspCode":400},
"error":{
"metadata":[
"error-class","org.apache.solr.common.SolrException",
"root-error-class","org.apache.solr.common.SolrException"],
"msg":"Underlying core creation failed while creating collection: myCollection",
"code":400}}
Why did the above error occur? What steps am I missing while setting up SolrCloud on 2 machines with 1 ZooKeeper instance? Could someone help me find the missing piece?
As the error suggests:
Use an absolute path when starting the Solr instance on both nodes.
Use an absolute path for the 'confdir' parameter when uploading the configuration to ZooKeeper.
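For example, a sketch assuming the install lives under /home/solr on each node (adjust the paths to your actual locations):
# Start each Solr node with an absolute solr home:
/home/solr/solr-8.9.0/bin/solr start -cloud -s /home/solr/solr-8.9.0/server/solr/node1/solr -p 8983 -z <Node1 IP>:2181 -m 2g
# Upload the configset with an absolute confdir:
/home/solr/solr-8.9.0/server/scripts/cloud-scripts/zkcli.sh -zkhost <Node1 IP>:2181 -cmd upconfig -confname _defaults -confdir /home/solr/solr-8.9.0/server/solr/configsets/_defaults/conf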
I have a problem similar to Run (Docker) Test Container in gitlab with Maven. The difference is that, rather than running mvn directly, my script runs a Docker multi-stage build that runs the tests inside the image. Unfortunately this doesn't appear to work for the PostgreSQL Testcontainer.
Dockerfile
#############
### build ###
#############
# base image
FROM maven:3-jdk-11 as build
# set working directory
WORKDIR /app
# add app
COPY . .
RUN export MAVEN_OPTS="-Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true" && export MAVEN_CLI_OPTS="-B -U --batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"
RUN mvn $MAVEN_CLI_OPTS clean install
############
### prod ###
############
# Yea this isn't right, but it crashes before it gets to this point. This is for example purposes only.
FROM openjdk:15-jdk-alpine
COPY --from=build /app/reproducer-testcontainer/target/reproducer-testcontainer.jar /reproducer-testcontainer.jar
CMD java -jar reproducer-testcontainer.jar
When I run mvn clean install it works properly and runs my test using the PostgreSQL Test Container. However, when I run docker build . it fails at the mvn clean install step with the below stack trace.
Stack trace:
13:05:01.250 [main] ERROR org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy - ping failed with configuration Environment variables, system properties and defaults. Resolved:
dockerHost=unix:///var/run/docker.sock
apiVersion='{UNKNOWN_VERSION}'
registryUrl='https://index.docker.io/v1/'
registryUsername='root'
registryPassword='null'
registryEmail='null'
dockerConfig='DefaultDockerClientConfig[dockerHost=unix:///var/run/docker.sock,registryUsername=root,registryPassword=<null>,registryEmail=<null>,registryUrl=https://index.docker.io/v1/,dockerConfigPath=/root/.docker,sslConfig=<null>,apiVersion={UNKNOWN_VERSION},dockerConfig=<null>]'
due to org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:51)
<snip>
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: java.io.IOException: com.sun.jna.LastErrorException: [2] No such file or directory
at org.testcontainers.shaded.org.scalasbt.ipcsocket.UnixDomainSocket.<init>(UnixDomainSocket.java:62)
<snip>
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.sun.jna.LastErrorException: [2] No such file or directory
at org.testcontainers.shaded.org.scalasbt.ipcsocket.UnixDomainSocketLibrary.connect(Native Method)
at org.testcontainers.shaded.org.scalasbt.ipcsocket.UnixDomainSocket.<init>(UnixDomainSocket.java:57)
... 35 common frames omitted
In my CI pipeline I'd like to only run docker build . and not worry about having another stage that does the mvn clean install.
How do I fix the configuration to get the java PostgreSQL Testcontainers to work inside of a Docker build so that I can use it in a multi-stage build?
Full Code example: https://gitlab.com/raymondcg/reproducer-testcontainer
Not really Testcontainers related.
Testcontainers requires a valid Docker daemon. When you build images, there is no daemon mounted into the image build context.
You can easily verify that by doing:
RUN curl --unix-socket /var/run/docker.sock http://localhost/_ping
Make this command return "OK" (no need to run the Testcontainers code), and your tests will pass as well.
You can override Testcontainers' default Docker host by adding:
ENV DOCKER_HOST=tcp://host.docker.internal:2375
to your build stage.
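A sketch of what that could look like in the build stage (host.docker.internal resolves out of the box on Docker Desktop; on plain Linux you would need to provide that host entry yourself, and an unauthenticated tcp://...:2375 daemon endpoint is only sensible inside a trusted CI network):
FROM maven:3-jdk-11 as build
# Point Testcontainers at a Docker daemon reachable from inside the build container,
# since no daemon socket is mounted during docker build.
ENV DOCKER_HOST=tcp://host.docker.internal:2375
WORKDIR /app
COPY . .
RUN mvn clean install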
I have a Java Spring Boot app which works with a Postgres database. I want to use Docker for both of them. Initially, I created a docker-compose.yml file as given below:
version: '3.2'
services:
  postgres:
    restart: always
    container_name: sample_db
    image: postgres:10.4
    ports:
      - '5432:5432'
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_DB=${POSTGRES_DB}
  # APP
  web:
    build: .
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/test
    expose:
      - '8080'
    ports:
      - '8080:8080'
Then, inside the application.properties file, I defined the following properties:
server.port=8080
spring.jpa.generate-ddl=true
spring.datasource.url=jdbc:postgresql://postgres:5432/test
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.username=root
spring.datasource.password=root
spring.flyway.baseline-on-migrate=true
spring.flyway.enabled=true
# The SQL dialect makes Hibernate generate better SQL for the chosen database
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto = validate
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults: true
Also, I created a Dockerfile in my project directory, which looks like this:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
EXPOSE 8080
RUN mkdir -p /app/
RUN mkdir -p /app/logs/
COPY target/household-0.0.1-SNAPSHOT.jar /app/app.jar
FROM postgres
ENV POSTGRES_PASSWORD postgres
ENV POSTGRES_DB testdb
COPY schema.sql /docker-entrypoint-initdb.d/
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app/app.jar"]
I issued these commands and ended up with the error given below:
mvn clean package
docker build ./ -t springbootapp
docker-compose up
ERROR: for household-appliances_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"java\": executable file not found in $PATH": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"java\": executable file not found in $PATH": unknown
ERROR: Encountered errors while bringing up the project.
Could anyone kindly help with this?
I had this error when setting up a Rails application for Docker:
My docker-entrypoint.sh file was placed in the root folder of my application with this content:
#!/bin/sh
set -e
bundle exec rails server -b 0.0.0.0 -e production
And in my Dockerfile, I defined my entrypoint command this way:
RUN ["chmod", "+x", "docker-entrypoint.sh"]
ENTRYPOINT ["docker-entrypoint.sh"]
But I was getting the error below when I ran the docker-compose up command:
ERROR: for app Cannot start service app: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "docker-entrypoint.sh": executable file not found in $PATH": unknown
Here's how I fixed it:
Specify an actual path for the docker-entrypoint.sh file, that is instead of:
ENTRYPOINT ["docker-entrypoint.sh"]
use
ENTRYPOINT ["./docker-entrypoint.sh"]
This tells Docker that the docker-entrypoint.sh file is located in the root folder of your application. You could also specify a different path if your docker-entrypoint.sh lives elsewhere, but ensure you do not leave out the ./ prefix in the path definition.
So mine looked like this afterwards:
RUN ["chmod", "+x", "docker-entrypoint.sh"]
ENTRYPOINT ["./docker-entrypoint.sh"]
That's all.
I hope this helps.
The application.properties file content is irrelevant to the question, so you can set it aside.
Let's look at your Dockerfile; I will strip out the irrelevant lines:
FROM openjdk:8-jdk-alpine
COPY target/household-0.0.1-SNAPSHOT.jar /app/app.jar
FROM postgres
COPY schema.sql /docker-entrypoint-initdb.d/
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app/app.jar"]
So you are using a multi-stage build, but in the first stage you are just copying the jar from the host.
In the final stage you are using the postgres image and setting the ENTRYPOINT to java, but java does not exist in the postgres image.
What you should change:
You should keep the postgres container separate from the Java container, as you already have in your docker-compose.yml file; as a second suggestion, use CMD instead of ENTRYPOINT.
Your final Dockerfile should be:
FROM openjdk:8-jdk-alpine
COPY target/household-0.0.1-SNAPSHOT.jar /app/app.jar
CMD ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app/app.jar"]
The FROM postgres line creates a second image (it is a multi-stage build) that is based on the PostgreSQL database server. Everything above that line is effectively ignored. So your final image is running a second database, and not a JVM.
You don't need this line, and you don't need to extend the database server to run a client. You can delete this line, and the application will start up.
You'll also have to separately get that schema file into the database container. Just bind-mounting the file in volumes: in the docker-compose.yml file is an easy path. If you have a database migration system in your application, running migrations on startup will be a more robust approach.
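A sketch of that bind mount, assuming schema.sql sits next to the docker-compose.yml:
services:
  postgres:
    image: postgres:10.4
    volumes:
      # Scripts in this directory run once, when the data directory is first initialized.
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql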
I am trying MySQL-to-HDFS data ingestion using Gobblin. I am running mysql-to-gobblin.pull using the steps below:
1) start hadoop:
sbin\start-all.cmd
2) start mysql service:
sudo service mysql start
3) set GOBBLIN_WORK_DIR:
export GOBBLIN_WORK_DIR=/mnt/c/users/name/incubator-gobblin/GOBBLIN_WORK_DIR
4) set GOBBLIN_JOB_CONFIG_DIR
export GOBBLIN_JOB_CONFIG_DIR=/mnt/c/users/name/incubator-gobblin/GOBBLIN_JOB_CONFIG_DIR
5) Start standalone
bin/gobblin.sh service standalone start --jars /mnt/C/Users/name/incubator-gobblin/build/gobblin-sql/libs/gobblin-sql-0.15.0.jar
gives the error below:
ERROR [JobScheduler-0] org.apache.gobblin.scheduler.JobScheduler$NonScheduledJobRunner 637 - Failed to run job GobblinMySql
org.apache.gobblin.runtime.JobException: Failed to run job GobblinMySql
Caused by: java.lang.ClassNotFoundException: org.apache.gobblin.source.extractor.extract.jdbc.MysqlSource
Below is the mysql-to-gobblin.pull file:
# Job properties
job.name=GobblinMySql
job.group=MySql
job.description=Data pull from MySql
# Extract properties
extract.table.type=snapshot_only
extract.table.name=user
# Property to consider the extract as full dump
extract.is.full=true
# Source properties
# Source properties - source class to extract data from Mysql Source
source.class=org.apache.gobblin.source.extractor.extract.jdbc.MysqlSource
# Source properties
source.max.number.of.partitions=1
source.querybased.partition.interval=1
source.querybased.is.compression=true
source.querybased.watermark.type=timestamp
# Converter properties - Record from mysql source will be processed by the below series of converters
converter.classes=gobblin.converter.avro.JsonIntermediateToAvroConverter
# date columns format
converter.avro.timestamp.format=yyyy-MM-dd HH:mm:ss'.0'
converter.avro.date.format=yyyy-MM-dd
converter.avro.time.format=HH:mm:ss
# Qualitychecker properties
qualitychecker.task.policies=gobblin.policies.count.RowCountPolicy,gobblin.policies.schema.SchemaCompatibilityPolicy
qualitychecker.task.policy.types=OPTIONAL,OPTIONAL
# Publisher properties
data.publisher.type=gobblin.publisher.BaseDataPublisher
source.querybased.schema=praveen_schema
source.entity=user
source.querybased.extract.type=snapshot
writer.builder.class=org.apache.gobblin.writer.SimpleDataWriterBuilder
writer.file.path.type=tablename
writer.destination.type=HDFS
writer.output.format=txt
data.publisher.type=org.apache.gobblin.publisher.BaseDataPublisher
mr.job.max.mappers=1
metrics.reporting.file.enabled=true
metrics.log.dir=/gobblin-kafka/metrics
metrics.reporting.file.suffix=txt
bootstrap.with.offset=earliest
fs.uri=hdfs://localhost:9000
writer.fs.uri=hdfs://localhost:9000
state.store.fs.uri=hdfs://localhost:9000
mr.job.root.dir=/gobblin-kafka/working
state.store.dir=/gobblin-kafka/state-store
task.data.root.dir=/jobs/kafkaetl/gobblin/gobblin-kafka/task-data
data.publisher.final.dir=/gobblintest/job-output
I am running this command from /mnt/c/users/name/incubator-gobblin/build/gobblin-distribution/distributions/gobblin-dist directory.
What changes do I need to make here? How can I solve it?
The solution is to add this jar or dependency to get rid of the Caused by: java.lang.ClassNotFoundException: org.apache.gobblin.source.extractor.extract.jdbc.MysqlSource error:
<dependency>
    <groupId>com.linkedin.gobblin</groupId>
    <artifactId>gobblin-core</artifactId>
    <version>0.8.0</version>
</dependency>
Download the jar from the Maven repository website.
Hope this helps.
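Alternatively, a sketch of fetching the jar directly; the URL follows the standard Maven Central layout for com.linkedin.gobblin:gobblin-core:0.8.0, and the lib/ destination is an assumption about where this distribution picks up its classpath, so verify both before relying on them:
# Download gobblin-core (standard Maven Central repository layout; verify the URL/version).
wget https://repo1.maven.org/maven2/com/linkedin/gobblin/gobblin-core/0.8.0/gobblin-core-0.8.0.jar
# Either drop it into the distribution's lib/ directory (assumed to be on the classpath)
# or pass it via --jars like the gobblin-sql jar in the start command above.
cp gobblin-core-0.8.0.jar gobblin-dist/lib/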
I'm creating a custom Dockerfile with extensions for the official Keycloak Docker image. I want to change the web-context and add some custom providers.
Here's my Dockerfile:
FROM jboss/keycloak:7.0.0
COPY startup-config.cli /opt/jboss/tools/cli/startup-config.cli
RUN /opt/jboss/keycloak/bin/jboss-cli.sh --connect --controller=localhost:9990 --file="/opt/jboss/tools/cli/startup-config.cli"
ENV KEYCLOAK_USER=admin
ENV KEYCLOAK_PASSWORD=admin
and startup-config.cli file:
/subsystem=keycloak-server/:write-attribute(name=web-context,value="keycloak/auth")
/subsystem=keycloak-server/:add(name=providers,value="module:module:x.y.z.some-custom-provider")
But unfortunately I receive this error:
The controller is not available at localhost:9990: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed: Connection refused
The command '/bin/sh -c /opt/jboss/keycloak/bin/jboss-cli.sh --connect --controller=localhost:9990 --file="/opt/jboss/tools/cli/startup-config.cli"' returned a non-zero code: 1
Is it a matter of invalid localhost? How should I refer to the management API?
Edit: I also tried with ENTRYPOINT instead of RUN, but the same error occurred during container initialization.
You are trying to have WildFly load your custom config file at build time here. The trouble is that the WildFly server is not running while the Dockerfile is building.
WildFly actually already has you covered regarding automatically loading custom config; there is built-in support for what you want to do. You simply need to put your config file in a "magic location" inside the image.
You need to drop your config file here:
/opt/jboss/startup-scripts/
So that your Dockerfile looks like this:
FROM jboss/keycloak:7.0.0
COPY startup-config.cli /opt/jboss/startup-scripts/startup-config.cli
ENV KEYCLOAK_USER=admin
ENV KEYCLOAK_PASSWORD=admin
Excerpt from the keycloak documentation:
Adding custom script using Dockerfile
A custom script can be added by
creating your own Dockerfile:
FROM keycloak
COPY custom-scripts/ /opt/jboss/startup-scripts/
Now you can simply start the image, and the built-in features in Keycloak (a WildFly feature, really) will look for a config in that specific directory and attempt to load it.
Edit from comment with final solution:
While the original answer solved the issue with being able to pass configuration to the server at all, an issue remained with the content of the script. The following error was received when starting the container:
=========================================================================
Executing cli script: /opt/jboss/startup-scripts/startup-config.cli
No connection to the controller.
=========================================================================
The issue turned out to be in the startup-config.cli script, where the JBoss command embed-server, needed to initiate a connection to the JBoss instance, was missing. Also missing was the closing stop-embedded-server command. More about configuring JBoss in this manner is in the docs here: CHAPTER 8. EMBEDDING A SERVER FOR OFFLINE CONFIGURATION.
The final script:
embed-server --std-out=echo
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheThemes,value=false)
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheTemplates,value=false)
stop-embedded-server
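Applied to the startup-config.cli from the question, that would look something like this (a sketch; the provider value is the question's placeholder, and providers is a list attribute, so list-add is used here rather than the :add from the original script):
embed-server --std-out=echo
/subsystem=keycloak-server/:write-attribute(name=web-context,value="keycloak/auth")
# providers is a list attribute; list-add appends an entry to it
/subsystem=keycloak-server:list-add(name=providers,value="module:x.y.z.some-custom-provider")
stop-embedded-server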
WildFly management interfaces are not available when building the Docker image. Your only option is to start the CLI in embedded mode, as discussed here: Running CLI commands in WildFly Dockerfile.
A more advanced approach consists of using the S2I installation scripts to trigger CLI commands.