I am trying to edit standalone.xml in a Keycloak Docker container. I can see my changes inside standalone.xml, but Keycloak does not pick them up and keeps running with its own configuration. I need to add this line to the standalone.xml file:
<provider>module:org.keycloak.examples.event-sysout</provider>
I also tried hot deployment, but then I can't fetch the third-party library code.
First, it seems that in a Docker container standalone-ha.xml is used by default. You can find this in /opt/jboss/tools/docker-entrypoint.sh.
Second, I think that after changing the configuration file you'll have to restart the Keycloak server (container).
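For example, to confirm which configuration file is actually used and to restart after an edit, you could run something like this (the container name keycloak is a placeholder):
docker exec keycloak grep standalone /opt/jboss/tools/docker-entrypoint.sh
docker restart keycloak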
Not sure what you mean by "dynamically", but it will be easier to modify the file locally and build a custom Docker image. The Dockerfile may look like:
FROM jboss/keycloak:6.0.1
ADD <path on your system>/standalone-ha.xml /opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
You don't have to replace or overwrite standalone-ha.xml/standalone.xml in the Docker image; you can use jboss-cli instead. You only need to create a CLI script file and put it inside the startup-scripts folder. During initialization it will run and configure the file for you.
keycloak.cli
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
/subsystem=keycloak-server:list-add(name=providers, value=module:org.keycloak.examples.event-sysout)
run-batch
stop-embedded-server
Dockerfile
FROM jboss/keycloak:latest
COPY keycloak.cli /opt/jboss/startup-scripts/keycloak.cli
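To sanity-check that the startup script ran, you could build the image, start it, give it a moment to execute the script, and then grep the configuration; a sketch, where the tag keycloak-custom and the container name kc are placeholders:
docker build -t keycloak-custom .
docker run -d --name kc keycloak-custom
docker exec kc grep event-sysout /opt/jboss/keycloak/standalone/configuration/standalone-ha.xml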
Might be a little late, but I found out you can edit the file in the Dockerfile:
FROM quay.io/keycloak/keycloak:11.0.0
RUN sed -i -E "s/(<staticMaxAge>)2592000(<\/staticMaxAge>)/\1\-1\2/" /opt/jboss/keycloak/standalone/configuration/standalone.xml
RUN sed -i -E "s/(<cacheThemes>)true(<\/cacheThemes>)/\1false\2/" /opt/jboss/keycloak/standalone/configuration/standalone.xml
RUN sed -i -E "s/(<cacheTemplates>)true(<\/cacheTemplates>)/\1false\2/" /opt/jboss/keycloak/standalone/configuration/standalone.xml
RUN sed -i -E "s/(<staticMaxAge>)2592000(<\/staticMaxAge>)/\1\-1\2/" /opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
RUN sed -i -E "s/(<cacheThemes>)true(<\/cacheThemes>)/\1false\2/" /opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
RUN sed -i -E "s/(<cacheTemplates>)true(<\/cacheTemplates>)/\1false\2/" /opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
ref : https://github.com/anthonny/kit-keycloak-theme/blob/master/Dockerfile
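The same sed technique could also cover the provider line from the original question; a sketch, assuming GNU sed and that the <providers> element appears exactly once in standalone-ha.xml (or standalone.xml, depending on which file your container uses):
RUN sed -i 's#<providers>#<providers>\n<provider>module:org.keycloak.examples.event-sysout</provider>#' /opt/jboss/keycloak/standalone/configuration/standalone-ha.xml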
You can go into the running Docker container and change it there.
The easiest way is to use a Docker manager like Kitematic (https://kitematic.com/).
Select the running Keycloak container, click the EXEC icon, cd keycloak/standalone/configuration, open standalone.xml in vi, save and exit with :wq, then restart the container through Kitematic. That should work.
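Without Kitematic, the plain Docker CLI equivalent would be (container name keycloak assumed):
docker exec -it keycloak vi /opt/jboss/keycloak/standalone/configuration/standalone.xml
docker restart keycloak
Keep in mind that edits made this way survive a restart of the same container but are lost when the container is recreated.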
I have written a small CLI using Java and Argparse4j, and packaged it in Docker using this Dockerfile:
FROM openjdk:18
ENV JAR_NAME "my-jar-with-dependencies.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
ENTRYPOINT ["./run.sh"]
The last line of the Dockerfile then invokes a simple bash script:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
The CLI I wrote has three main actions: upload, download and apply. Therefore argparse4j expects one of these actions to be passed as the first parameter, i.e.
java -jar cli.jar download #... whatever other argument
This works just fine when running the docker image locally, but completely fails when running in the CI pipeline:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - download -f zip -u true test-download.zip
This is the error that is returned:
Executing "step_script" stage of the job script 00:01
Using docker image sha256:<sha> for <url>/my-image:<tag> with digest <url>/my-image:<tag>#sha256:<sha> ...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
usage: tool [-h] ACTION ...
tool: error: invalid choice: 'sh' (choose from 'upload', 'download',
'apply')
I have tried following the suggestion in "gitlab-runner doesn't run ENTRYPOINT scripts in Dockerfile", but I can't seem to get the CI part to work correctly.
I would like to avoid using the entrypoint directive, as it needs to be used in multiple files, so I'd rather fix the issue at the root.
Does anyone have an idea of what is happening or how to fix it?
Quoting the question: "I would like to avoid using the entrypoint directive as it needs to be used on multiple files, so I'd rather fix the issue at the root."
You can change your Dockerfile instead to keep the default ENTRYPOINT (as openjdk:18 doesn't define any entrypoint, it will be empty):
FROM openjdk:18
# ...
# ENTRYPOINT ["./run.sh"] # remove this
# Add run.sh to path to be able to use `run.sh` from any directory
ENV PATH="${PATH}:/opt/app"
And update your run.sh to specify full path to jar:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"
Now your container will start in Gitlab without having to specify the entrypoint keyword for the job. You can then set up something like this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    # Specify the command using run.sh
    # This command is run from within your container
    # Note that script is not an argument passed on your image startup,
    # but independent commands run within your container using shell
    - run.sh download -f zip -u true test-download.zip
Notes:
Gitlab won't run your script in the Dockerfile's WORKDIR but in a dedicated directory where your project is cloned. Using ./ will look for the script and jar in whatever directory is current at the moment your command runs, so they wouldn't be found unless the command runs from /opt/app. Specifying the full path to the jar and adding run.sh to PATH make sure they'll be found wherever you run run.sh from. Alternatively, you could run cd /opt/app in your job's script, but it may cause unwanted side effects.
Without ENTRYPOINT you won't be able to run Docker commands like this
docker run " <url>/my-image:<tag>" download ...
You'll need to specify either COMMAND or --entrypoint such as
docker run "<url>/my-image:<tag>" run.sh download ...
docker run --entrypoint run.sh "<url>/my-image:<tag>" download ...
You specified not wanting to do this, but overriding the image entrypoint in your job seems a much simpler and more straightforward solution. To reuse it across multiple files, you may leverage Gitlab's extends and include.
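A hedged sketch of that approach, reusing the entrypoint override across jobs via extends (image name, tag, and stage copied from your snippets):
.cli-image:
  image:
    name: <url>/my-image:<tag>
    entrypoint: [""]   # neutralize the image ENTRYPOINT for jobs using this template
download:
  extends: .cli-image
  stage: download
  script:
    - /opt/app/run.sh download -f zip -u true test-download.zip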
And now for the fun part: what is happening?
When Gitlab runs your container for a job, it will use the entrypoint defined in your Dockerfile by default. From the doc:
The runner starts a Docker container using the defined entrypoint. The default from Dockerfile that may be overridden in the
.gitlab-ci.yml file.
The runner attaches itself to a running container.
The runner prepares a script (the combination of before_script, script, and after_script).
The runner sends the script to the container’s shell stdin and receives the output.
And what the doc doesn't say is that Gitlab will try to use various forms of sh as the Docker command. In short, for step 1 it's like running this Docker command:
# Gitlab tries to run container for your job
docker run -it "<url>/my-image:<tag>" sh
It doesn't work because Gitlab uses the default entrypoint, so the final command run in Docker is:
./run.sh sh
Here, ./run.sh is the entrypoint from the Dockerfile and sh is the command provided by Gitlab. This causes the error you see:
tool: error: invalid choice: 'sh' (choose from 'upload', 'download', 'apply')
You never reach your job's script (step 4). See ENTRYPOINT vs. CMD for details.
Furthermore, the script you define is a command itself. Even if your container started, it wouldn't work, as the following command would be run inside your container:
download -f zip -u true test-download.zip
# 'download' command doesn't exist
# You probably want to run instead something like:
/opt/app/run.sh download -f zip -u true test-download.zip
So, after a bit of research, I have been able to find a solution that works for me.
From my research (and as Pierre B. pointed out in his answer), Gitlab essentially injects a shell script that checks which shell is available.
Now, my solution is in no way elegant, but it does achieve what I wanted. I modified the Dockerfile like so:
FROM openjdk:18-bullseye
ENV JAR_NAME "my-jar.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
RUN echo '#!/bin/bash \njava $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"' > /usr/bin/i18n && \
chmod +x /usr/bin/i18n
ENTRYPOINT ["./run.sh"]
I also modified the run.sh script this way:
#!/bin/bash
if [[ -n "$CI" ]]; then
echo "this block will only execute in a CI environment"
exec /bin/bash
else
echo "Not in CI. Running the image normally"
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$#"
fi
This works because Gitlab, in its list of predefined variables, provides a CI env var that is set when the script runs in a CI environment. This way I skip the java invocation in CI, but leave it in place for when I'm not in CI.
Now when I need to use my image, all I need to specify in my .gitlab-ci.yml file is this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - i18n download -f zip -u true test-download.zip # This calls the script that was injected in the Dockerfile
This way I essentially mimic an actual CLI, and can use it in all my projects that require this tool.
I am not sure, though, why I need to "echo" the script rather than simply COPY it; for some reason the env variables were not passed down, and I couldn't spend any more time debugging it. So for now, it will stay like this.
If you have any recommendations on how to clean this up, please leave some comments and I will edit my answer!
Try to wrap your script in single quotes:
script:
- 'download -f zip -u true test-download.zip'
EDIT:
Oh, and this open bug in GitLab could be relevant to you.
As one of the hardening tasks defined for Tomcat, I'm supposed to make $CATALINA_HOME/bin/version.sh output bogus values.
This seemed pretty straightforward from what I could find. All of the articles essentially pointed towards the same steps: extract ServerInfo.properties from catalina.jar, modify it, recommit it, and restart Tomcat.
In the stateless container world, though, I have to extract it, modify it, recommit it, then START Tomcat. (If I restart the container after Tomcat is started, the container dies.) Here are the steps I tried in my container entrypoint shell script:
jar -xf /opt/tomcat/lib/catalina.jar org/apache/catalina/util/ServerInfo.properties
sed -i 's#server.info=.*#server.info=Server#' /opt/tomcat/org/apache/catalina/util/ServerInfo.properties
sed -i 's#server.number=.*#server.number=0.0#' /opt/tomcat/org/apache/catalina/util/ServerInfo.properties
jar -uf /opt/tomcat/lib/catalina.jar org/apache/catalina/util/ServerInfo.properties
However, in 9.0.45, the recommit back to catalina.jar throws an error:
jar: Package org.apache.catalina.ssi missing from ModulePackages class file attribute
Investigating this, I found this article: jar: Package org.apache.catalina.ssi missing from ModulePackages class file attribute
...which essentially said it's a bug in newer versions of Tomcat, and the person who got around it used 7zip instead of jar. So I modified my code with the following:
mkdir -p /opt/tomcat/test
pushd /opt/tomcat/test
7z x -y /opt/tomcat/lib/catalina.jar:org/apache/catalina/util/ServerInfo.properties
sed -i 's#server.info=.*#server.info=Server#' /opt/tomcat/test/ServerInfo.properties
sed -i 's#server.number=.*#server.number=0.0#' /opt/tomcat/test/ServerInfo.properties
7z u /opt/tomcat/lib/catalina.jar:org/apache/catalina/util/ServerInfo.properties /opt/tomcat/test/ServerInfo.properties
popd
...and while this did work as far as I can tell (the ServerInfo.properties file in the backup location (test) was successfully modified, and 7zip reported success for the recommit in the container console log), after Tomcat starts and I run version.sh, it still reports the actual version.
To reiterate, these changes are applied before Tomcat is ever started, so even though all of the documentation says to restart Tomcat after the change, I shouldn't have to restart it (again, since a restart kills the container and wipes any interactive container changes (stateless)).
EDIT 1:
In an effort to confirm, after the container started I exec'd into it to verify that the entrypoint script succeeded. I manually extracted the file from catalina.jar, and the interactively-extracted ServerInfo.properties does reflect the changes (so the process appears to have worked), but running $CATALINA_HOME/bin/version.sh still shows the actual version...
There is another option which is much easier and works just fine.
Alternatively, the version number can be changed by creating the file CATALINA_BASE/lib/org/apache/catalina/util/ServerInfo.properties with content as follows...
https://tomcat.apache.org/tomcat-9.0-doc/security-howto.html#web.xml
You don't need to replace anything in catalina.jar.
Just create this extra config file as follows; that is enough.
mkdir -p "${CATALINA_HOME}/lib/org/apache/catalina/util"
echo "server.info=X" > "${CATALINA_HOME}/lib/org/apache/catalina/util/ServerInfo.properties"
## proceed with tomcat start...
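If you want to mask the version number and build date as well, the same override file accepts those keys too; a sketch (key names per the Tomcat security how-to, values arbitrary):
mkdir -p "${CATALINA_HOME}/lib/org/apache/catalina/util"
cat > "${CATALINA_HOME}/lib/org/apache/catalina/util/ServerInfo.properties" <<'EOF'
server.info=X
server.number=0.0.0.0
server.built=
EOF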
Bonus
Remove server info from HTTP headers in server responses (also check this).
## turn off server info and stacktraces on error pages
file="${CATALINA_HOME}"/conf/server.xml
xmlstarlet edit -L \
--delete "//Connector[starts-with(#protocol, 'HTTP')]/#server" \
--insert "//Connector[starts-with(#protocol, 'HTTP')]" -t attr -n "server" -v "X" \
--delete "//Valve[#className='org.apache.catalina.valves.ErrorReportValve']" \
--subnode "/Server/Service[#name='Catalina']/Engine[#name='Catalina']/Host[#name='localhost']" -t elem -n "Valve" \
--insert "/Server/Service[#name='Catalina']/Engine[#name='Catalina']/Host[#name='localhost']/Valve[not(#className)]" -t attr -n "className" -v "org.apache.catalina.valves.ErrorReportValve" \
--insert "//Valve[#className='org.apache.catalina.valves.ErrorReportValve']" -t attr -n "showReport" -v "false" \
--insert "//Valve[#className='org.apache.catalina.valves.ErrorReportValve']" -t attr -n "showServerInfo" -v "false" \
${file}
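A quick smoke test once Tomcat is up (port 8080 assumed):
curl -sI http://localhost:8080/ | grep -i '^server:'
## should print the masked value (e.g. "Server: X") rather than any real Tomcat banner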
We have a legacy application that I am trying to dockerize. The application's jar bundles both the application and an ActiveMQ instance. (We cannot change the way it is built.) It also has certain installation steps. I created the following initial Dockerfile for this; however, I am facing an issue (described after the Dockerfile) when I run the image.
The Dockerfile looks like this :
FROM registry:4000/openjdk:8-jre-alpine
RUN addgroup -S appuser && adduser -S -G appuser appuser
ADD ./fe.jar /home/appuser
RUN chmod +x /home/appuser/fe.jar \
&& chown appuser:appuser /home/appuser/fe.jar
USER appuser
RUN ["java", "-jar", "/home/appuser/fe.jar", "-i"]
WORKDIR /home/appuser/fe/activemq/bin
CMD ["/bin/sh", "-c", "activemq"]
The RUN command extracts the application and ActiveMQ into a folder called fe at that location.
The WORKDIR instruction seems to set the working directory to activemq/bin. I confirmed this with a sh script that runs when the image starts, in which I trigger ls and pwd to check the contents and the location.
However, when I run the image, which triggers the CMD command, I get the error:
/bin/sh: activemq: not found
What could the issue be here?
If activemq is an executable in your bin directory (and not in PATH) then you need to edit your CMD:
CMD ["/bin/sh", "-c", "./activemq"]
Also make sure that your script is executable.
Found the problem. The activemq script starts with #!/bin/bash and I was trying to run it with sh. I need to first install bash in the image and then run the activemq script with bash.
I got the hint from this answer : docker alpine /bin/sh script.sh not found
Now it gets further, but the container dies immediately after starting. Not sure what the issue is; it doesn't even give any error.
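For reference, a sketch of the full fix: install bash on Alpine and start the broker in the foreground. The immediate death is most likely because `activemq start` daemonizes, so PID 1 exits; `activemq console` keeps the broker in the foreground:
FROM registry:4000/openjdk:8-jre-alpine
# bash is needed because the activemq script declares #!/bin/bash
RUN apk add --no-cache bash
RUN addgroup -S appuser && adduser -S -G appuser appuser
ADD ./fe.jar /home/appuser
RUN chmod +x /home/appuser/fe.jar \
    && chown appuser:appuser /home/appuser/fe.jar
USER appuser
RUN ["java", "-jar", "/home/appuser/fe.jar", "-i"]
WORKDIR /home/appuser/fe/activemq/bin
# 'console' runs the broker in the foreground so the container stays alive
CMD ["/bin/bash", "-c", "./activemq console"]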
Please, I have read posts that seem to address this, but I am still not able to solve my problem. I have a default application.properties in my Spring Boot app running inside a Docker container; however, I want to add an external application.properties.
This is my docker file:
FROM tomcat:8-alpine
COPY target/demoapp.war /usr/local/tomcat/webapps/demo.war
RUN sh -c 'touch /usr/local/tomcat/webapps/demo.war'
EXPOSE 8080
ENTRYPOINT [ "sh", "-c", "java -Djava.security.egd=file:/dev/./urandom -jar /usr/local/tomcat/webapps/demo.war", "--spring.config.location=C:/Users/650/Documents/Workplace/application.properties"]
I build the file with:
docker build . -t imagename
And I run the spring boot docker container with:
docker run -p 8080:8080 -v C:/Users/650/Documents/Workplace/application.properties:/config/application.properties --name demosirs --link postgres-standalone:postgres -d imagename
The container still doesn't locate my external application.properties; how do I overcome this challenge?
You are referring to the host path in the Docker ENTRYPOINT command. Refer to the container-specific path instead:
ENTRYPOINT [ "sh", "-c", "java -Djava.security.egd=file:/dev/./urandom -jar /usr/local/tomcat/webapps/demo.war", "--spring.config.location=/config/application.properties"]
Also, I notice you are using a file as the value of the -v argument to docker run. Change that to the corresponding directory:
-v C:/Users/650/Documents/Workplace:/config
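Putting it together, the docker run command from the question would become (all other flags unchanged):
docker run -p 8080:8080 -v C:/Users/650/Documents/Workplace:/config --name demosirs --link postgres-standalone:postgres -d imagename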
In your entrypoint, pass another argument:
-Dspring.config.location=<your custom property file location>
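Note that JVM system properties (-D flags) must appear before -jar, or they'll be treated as program arguments instead. A sketch based on the question's entrypoint, where the /config/application.properties path is an assumption borrowed from the volume-mount approach above:
ENTRYPOINT ["sh", "-c", "java -Dspring.config.location=/config/application.properties -Djava.security.egd=file:/dev/./urandom -jar /usr/local/tomcat/webapps/demo.war"]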
You can try giving a file path instead of the classpath:
"--spring.config.location=file:${configDirectory}/application.yml"
You can make use of the Spring Cloud Config project:
https://spring.io/guides/gs/centralized-configuration/
I want to run camunda-bpm-wildfly together with ActiveMQ in the same Docker container.
First I added them to two separate containers and ran them as follows. That worked fine.
1. Running camunda-bpm-wildfly.
Dockerfile :
FROM camunda/camunda-bpm-platform:wildfly-latest
ADD standalone.xml standalone/configuration/
ADD bin/ bin/
ADD fusepatch/ fusepatch/
ADD modules/ modules/
ADD hawtio-wildfly-1.5.3.war standalone/deployments/
Commands :
docker build -t my-wildfly .
docker images
sudo docker run -d --name my-wildfly --net="host" -p 7070:7070 my-wildfly
2. Running activemq.
Dockerfile :
FROM webcenter/activemq:latest
Commands :
docker build -t amq-alone .
docker images
docker run --name='amq-alone' -d -p 8161:8161 -p 61616:61616 -p 61613:61613 amq-alone
Then I searched for a way to add two images to the same container and noted that we can't add multiple images to one container [Ref: Docker - container with multiple images].
Then I downloaded ActiveMQ and tried to extend the image as follows.
It builds correctly, and it also runs, but only WildFly comes up on port 7070; ActiveMQ doesn't start.
Dockerfile :
FROM camunda/camunda-bpm-platform:wildfly-latest
ADD standalone.xml standalone/configuration/
ADD bin/ bin/
ADD fusepatch/ fusepatch/
ADD modules/ modules/
ADD hawtio-wildfly-1.5.3.war standalone/deployments/
ADD apache-activemq-5.15.2/ apache-activemq-5.15.2/
RUN apache-activemq-5.15.2/bin/activemq start
Commands :
docker build -t my-wildfly-amq .
docker images
sudo docker run -d --name my-wildfly-amq --net="host" -p 7070:7070 -p 8161:8161 -p 61616:61616 -p 61613:61613 my-wildfly-amq
Log :
me#my-pc:~/$ docker build -t=my-wildfly-amq .
Sending build context to Docker daemon 375.8MB
Step 1/8 : FROM camunda/camunda-bpm-platform:wildfly-latest
---> 274d119b1660
Step 2/8 : ADD standalone.xml standalone/configuration/
---> Using cache
---> 41c2f6d423ec
Step 3/8 : ADD bin/ bin/
---> Using cache
---> 27c1952f442e
Step 4/8 : ADD fusepatch/ fusepatch/
---> Using cache
---> 66419d22d6b7
Step 5/8 : ADD modules/ modules/
---> bbdee5ab8ea2
Step 6/8 : ADD hawtio-wildfly-1.5.3.war standalone/deployments/
---> 237821cdb2c8
Step 7/8 : ADD apache-activemq-5.15.2/ apache-activemq-5.15.2/
---> 309b552b5150
Step 8/8 : RUN apache-activemq-5.15.2/bin/activemq start
---> Running in ce0e55cfd13b
INFO: Loading '/camunda/apache-activemq-5.15.2//bin/env'
INFO: Using java '/usr/bin/java'
INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
INFO: pidfile created : '/camunda/apache-activemq-5.15.2//data/activemq.pid' (pid '46')
---> f903dc0b2db5
Removing intermediate container ce0e55cfd13b
Successfully built f903dc0b2db5
Successfully tagged my-wildfly-amq:latest
What am I missing here? How can I run ActiveMQ alongside camunda-bpm-wildfly in the same Docker container?
UPDATE#1 :
With @bluescore's answer I tried to use CMD as follows, and it worked: both ActiveMQ and WildFly were started. But one problem remains. Normally when we start camunda-bpm-wildfly we invoke start-camunda.sh (not WildFly's bin/standalone.sh). But here I can't see that file, even in -ti mode. How do I start Camunda the way the image does by itself? (I checked Docker Hub and GitHub as well but couldn't find a tip.)
Dockerfile :
FROM camunda/camunda-bpm-platform:wildfly-latest
ADD standalone.xml standalone/configuration/
ADD bin/ bin/
ADD fusepatch/ fusepatch/
ADD modules/ modules/
ADD hawtio-wildfly-1.5.3.war standalone/deployments/
ADD apache-activemq-5.15.2/ apache-activemq-5.15.2/
ADD my-wildfly-amq.sh my-wildfly-amq.sh
CMD bash my-wildfly-amq.sh
my-wildfly-amq.sh
apache-activemq-5.15.2/bin/activemq start
bin/standalone.sh
Docker version 17.09.0-ce
Ubuntu 16.04
You're misunderstanding how RUN works. Use an ENTRYPOINT or CMD script in place of the final RUN command of the image you extended. RUN executes a command during the build, not during docker run. CMD and ENTRYPOINT tell the container what to execute when it's actually run.
Check out the Dockerfile for the camunda-bpm-platform image you're using as a base. Notice the CMD at the end, which executes a shell script.
If you want to run both ActiveMQ and WildFly, write a shell script that runs both of them, then replace your final RUN with a CMD or ENTRYPOINT that executes the script. Something like:
CMD ["/usr/local/bin/your_script.sh"]
When your container starts up with docker run, this script will run.
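A minimal sketch of such a script, reusing the relative paths from the question (exec keeps WildFly as PID 1 so it receives docker stop signals):
#!/bin/bash
# start the broker in the background, then hand the foreground over to WildFly
apache-activemq-5.15.2/bin/activemq start
exec bin/standalone.sh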