Extract test coverage from the Docker container in GitLab CI - Java

I'm using GitLab CI to run a Java application built with Gradle.
The application runs inside a Docker container. In parallel, the automation job runs the ApiTestSuite test suite against that running application.
The question is: how can I get test coverage from the running application after the ApiTestSuite suite finishes?
Here is a simplified version of the GitLab job:
automation:
  stage: validate
  tags:
    - dind
  variables:
    PROFILE: some-profile
  services:
    - docker:stable-dind
    - redis:5-alpine
    - webcenter/activemq
  before_script:
    - docker run --detach --name MY_APPLICATION
  script:
    - ./gradlew ApiTestSuite
I can fetch code coverage when running unit tests, but I'm having trouble understanding how to do the same for the setup above.
Thank you in advance :)

Tell the JVM to attach the JaCoCo agent when running the application inside the container:
-javaagent:/home/java/jacoco.jar=destfile=folder_inside_container/jacoco-it.exec,append=true
Configure the agent to write its results into folder_inside_container, replacing folder_inside_container with the folder you will share with the host later on.
Share folder_inside_container between the container and the host (in this case, the GitLab job):
docker run -d -v "$PWD/gitlab_folder":/folder_inside_container
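One detail that is easy to miss: with the destfile output mode, the JaCoCo agent only writes jacoco-it.exec when the JVM shuts down, so the container should be stopped gracefully before the report is generated. Below is a minimal sketch of how the pieces could fit together in the job; the image name my-app-image and the JAVA_TOOL_OPTIONS route are illustrative assumptions, not taken from the question:
before_script:
  # Bind-mount a host folder for the .exec file and attach the agent via
  # JAVA_TOOL_OPTIONS, which any JVM picks up at startup
  - docker run --detach --name MY_APPLICATION
      -v "$PWD/gitlab_folder":/folder_inside_container
      -e JAVA_TOOL_OPTIONS="-javaagent:/home/java/jacoco.jar=destfile=/folder_inside_container/jacoco-it.exec,append=true"
      my-app-image
script:
  - ./gradlew ApiTestSuite
  # docker stop sends SIGTERM first, letting the JVM shut down cleanly
  # so the agent flushes the coverage data
  - docker stop MY_APPLICATION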
Export the results from .exec format to .html via jacococli.jar:
java -jar jacococli.jar report gitlab_folder/jacoco-it.exec --classfiles path_to_app_compiled_classes --html coverage/api-coverage-html
Grep the total coverage out of the HTML report and print it in the job log, so GitLab's coverage: regex can pick it up and display it for the job:
api-coverage:
  stage: release
  script:
    - 'apt-get install -y unzip'
    - 'curl -L "https://search.maven.org/remotecontent?filepath=org/jacoco/jacoco/0.8.6/jacoco-0.8.6.zip" -o jacoco.zip'
    - 'unzip jacoco.zip'
    - 'java -jar lib/jacococli.jar report gitlab_folder/jacoco-it.exec --classfiles path_to_app_compiled_classes --html coverage/api-coverage-html'
    - grep -oP "Total.*?([0-9]{1,3})%" coverage/api-coverage-html/index.html
  coverage: "/Total.*?([0-9]{1,3})%/"
  artifacts:
    paths:
      - coverage/

Related

gitlab-runner passes wrong arguments to custom image

I have written a small CLI using Java and Argparse4j, and packaged it in Docker using this Dockerfile:
FROM openjdk:18
ENV JAR_NAME "my-jar-with-dependencies.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
ENTRYPOINT ["./run.sh"]
The last line of the Dockerfile then invokes a simple bash script:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
The CLI I wrote has three main actions: upload, download, and apply. Argparse4j therefore expects one of these actions to be passed as the first parameter, i.e.
java -jar cli.jar download # ...whatever other arguments
This works just fine when running the Docker image locally, but fails completely when running in the CI pipeline:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - download -f zip -u true test-download.zip
This is the error that is returned:
Executing "step_script" stage of the job script 00:01
Using docker image sha256:<sha> for <url>/my-image:<tag> with digest <url>/my-image:<tag>@sha256:<sha> ...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
usage: tool [-h] ACTION ...
tool: error: invalid choice: 'sh' (choose from 'upload', 'download',
'apply')
I have tried following the suggestion in gitlab-runner doesn't run ENTRYPOINT scripts in Dockerfile but I can't seem to get the CI part to work correctly.
I would like to avoid using the entrypoint directive, as it would need to be used in multiple files, so I'd rather fix the issue at the root.
Does anyone have an idea of what is happening or how to fix it?
I would like to avoid using the entrypoint directive, as it would need to be used in multiple files, so I'd rather fix the issue at the root.
You can instead change your Dockerfile to keep the default ENTRYPOINT (since openjdk:18 doesn't define any entrypoint, it will be empty):
FROM openjdk:18
# ...
# ENTRYPOINT ["./run.sh"] # remove this
# Add run.sh to path to be able to use `run.sh` from any directory
ENV PATH="${PATH}:/opt/app"
And update your run.sh to specify the full path to the jar:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"
Now your container will start in GitLab without you having to specify the entrypoint keyword for the job. You can then set up something like this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    # Specify the command using run.sh.
    # This command is run from within your container.
    # Note that script entries are not arguments passed at image startup,
    # but independent commands run inside your container using its shell.
    - run.sh download -f zip -u true test-download.zip
Notes:
GitLab won't run your script in the Dockerfile's WORKDIR but in a dedicated directory where your project is cloned. Using ./ looks for the script and jar in whatever directory is current when the command runs, so they won't be found unless that happens to be /opt/app. Specifying the full path to the jar and adding run.sh to PATH ensures they are found wherever you run run.sh from. Alternatively you could cd /opt/app in your job's script, but that may cause unwanted side effects.
Without the ENTRYPOINT you won't be able to run Docker commands like this:
docker run "<url>/my-image:<tag>" download ...
You'll need to specify either a COMMAND or --entrypoint, such as:
docker run "<url>/my-image:<tag>" run.sh download ...
docker run --entrypoint run.sh "<url>/my-image:<tag>" download ...
You said you don't want to do this, but overriding the image entrypoint in your job seems a much simpler and more straightforward solution. When using multiple files you may leverage GitLab's extends and include.
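For reference, the job-level override mentioned above is a two-line change; here is a minimal sketch reusing the placeholders from the question:
download:
  stage: download
  image:
    name: <url>/my-image:<tag>
    entrypoint: [""]  # neutralize the Dockerfile ENTRYPOINT so the runner's shell can start
  script:
    - /opt/app/run.sh download -f zip -u true test-download.zip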
And now for the fun part
what is happening
When GitLab runs your container for a job, it uses the entrypoint defined in your Dockerfile by default. From the docs:
The runner starts a Docker container using the defined entrypoint. The default from Dockerfile that may be overridden in the
.gitlab-ci.yml file.
The runner attaches itself to a running container.
The runner prepares a script (the combination of before_script, script, and after_script).
The runner sends the script to the container’s shell stdin and receives the output.
And what the docs don't say is that GitLab will try to use various forms of sh as the Docker command. In short, for step 1 it's like running this Docker command:
# Gitlab tries to run container for your job
docker run -it "<url>/my-image:<tag>" sh
It doesn't work because GitLab uses the default entrypoint, so the final command run in Docker is:
./run.sh sh
where ./run.sh is the entrypoint from the Dockerfile and sh is the command provided by GitLab. This causes the error you see:
tool: error: invalid choice: 'sh' (choose from 'upload', 'download', 'apply')
You never reach your job's script (step 4). See ENTRYPOINT vs. CMD for details.
Furthermore, the script you define is a command itself. Even if your container started, it wouldn't work, because the following command would be run inside your container:
download -f zip -u true test-download.zip
# The 'download' command doesn't exist.
# You probably want to run something like this instead:
/opt/app/run.sh download -f zip -u true test-download.zip
So, after a bit of research, I was able to find a solution that works for me.
From my research (and as Pierre B. pointed out in his answer), GitLab essentially tries to inject a shell script that checks which shell is available.
Now, my solution is in no way elegant, but it does achieve what I wanted. I modified the Dockerfile like so:
FROM openjdk:18-bullseye
ENV JAR_NAME "my-jar.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
RUN echo '#!/bin/bash \njava $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"' > /usr/bin/i18n && \
chmod +x /usr/bin/i18n
ENTRYPOINT ["./run.sh"]
And also modified the run.sh script this way:
#!/bin/bash
if [[ -n "$CI" ]]; then
  echo "this block will only execute in a CI environment"
  exec /bin/bash
else
  echo "Not in CI. Running the image normally"
  java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
fi
This works because GitLab, in its list of predefined variables, provides a CI variable that is set when the script runs in CI. This way I skip the java invocation in CI, but keep it for when the image is run outside CI.
Now when I need to use my image, all I need to specify in my .gitlab-ci.yml file is this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - i18n download -f zip -u true test-download.zip # This calls the script that was injected in the Dockerfile
This way I essentially mimic an actual CLI, and I can use it in all my projects that require this tool.
I am not sure, though, why I need to echo the script into place instead of simply copying it. For some reason the environment variables were not passed down, and I couldn't spend any more time debugging it, so for now it will stay like this.
If you have any recommendations on how to clean this up, please leave some comments and I will edit my answer!
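One possible cleanup, offered here as an untested suggestion: keep the wrapper as a real file in the build context instead of echoing it. Variables like $PROJECT_HOME and $JAVA_OPTS are expanded by bash when the script runs, not when the image is built, so they should still resolve as long as they are set via ENV:
# i18n (a plain file next to the Dockerfile)
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"
Then in the Dockerfile, the RUN echo line would become:
COPY i18n /usr/bin/i18n
RUN chmod +x /usr/bin/i18n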
Try wrapping your script in single quotes:
script:
  - 'download -f zip -u true test-download.zip'
EDIT:
Also, this open bug in GitLab could be relevant to you.

Add a test runner to Playwright in Docker

I am starting to use Playwright. I have pulled the Docker image and I am wondering how I should run tests. Following some old messages in the Playwright Slack, I ran this command:
docker run -d -it --rm -v /C/Workspace/playwrite/src/test/java/org/example/TestExample.java:. --ipc host --shm-size 2gb mcr.microsoft.com/playwright/java:v1.14.0-focal
The container has started and the test file has been copied into Docker, but I don't think it has been run. My question is: does the Playwright Docker image already include a test runner such as JUnit? If not, do I have to modify the Docker image? I can't find any docs that explain this.
The way it works is that you run the Playwright image, which then provides you with an environment to set up and run Playwright:
# run in the root of your Playwright project
docker run -v $PWD:/e2e -w /e2e --rm -it mcr.microsoft.com/playwright:v1.16.2-focal /bin/bash
# you are now inside the container (bash)
root@f4cfe077964d:/e2e# npm install
root@f4cfe077964d:/e2e# npx playwright install
root@f4cfe077964d:/e2e# npx playwright test
More details about using the Playwright Docker image here
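The Java flavor of the image works the same way: it ships the browsers and a JDK, but no test runner, so you bring your own (e.g. JUnit via Maven or Gradle). A sketch assuming a standard Maven project; if mvn isn't available in the image, you'd install it first:
# run in the root of your Java project
docker run -v $PWD:/e2e -w /e2e --rm -it mcr.microsoft.com/playwright/java:v1.14.0-focal /bin/bash
# you are now inside the container; run your JUnit-based Playwright tests as usual
mvn test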

How to run a Spring Boot application with JUnit tests in a Dockerfile

Right now I have my Dockerfile as below:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
What if I want to run some JUnit tests after starting the Spring Boot application? I am using Maven for dependencies.
Where should I put these lines?
mvn test
mvn clean test
mvn clean compile test
Or what other commands should I use?
Premise: even though this solution is tailored to your specifics, it would be better to execute tests during the build phase of the target jar.
To execute tests from your Dockerfile you can do one of the following:
copy the source files into the image and run Maven test commands there. This way you can also build your target jar directly in the container, but Maven must be installed in the container.
copy just the target jar file and use it to execute your tests; for this you need to include the test classes in the target jar (see How can I include test classes into Maven jar and execute them?).
Regardless of which way you choose, you can modify your entrypoint to execute multiple commands, basically in one of two ways:
(a) creating a bash script that executes your commands (see Multiple commands on docker ENTRYPOINT);
(b) using supervisord (see How to write a Dockerfile which I can start a service and run a shell and also accept arguments for the shell?). This is the better solution for managing the process tied to the container's lifecycle (i.e. the process with PID 1).
Example
Let's suppose you choose to copy all source files (option 1) and use a bash script (option a).
Create a commands.sh file as follows, so the container stays attached to the Spring application process even though mvn test is executed:
#!/bin/bash
# Start the Spring application in the background
CMD="java -jar target/app.jar"
$CMD &
SERVICE_PID=$!
# Execute tests
mvn test
# Wait for the Spring application process
wait "$SERVICE_PID"
Your Dockerfile will then look as follows:
# Start from the Maven Docker image
FROM maven:3.6.1-jdk-8-alpine
# Copy all sources
COPY . .
# Build; because you want to execute tests after the Spring Boot application
# has started, skip tests during the build phase
RUN mvn install -DskipTests
# Start container
COPY commands.sh /scripts/commands.sh
RUN ["chmod", "+x", "/scripts/commands.sh"]
ENTRYPOINT ["/scripts/commands.sh"]

Travis yml to run Selenium Java Gradle Docker build

I'm looking for an example .travis.yml file that executes a Gradle build inside a Docker container running my Selenium tests. So far the blog posts and answers I've found are either in a language I'm not looking for, like JavaScript, or use Maven instead of Gradle.
I finally got a working example after piecing it together from various blogs:
.travis.yml
sudo: required
dist: trusty
language: java
jdk:
  - oraclejdk8
script:
  - gradle clean test
before_install:
  - docker run -d -p 4444:4444 -p 5900:5900 -v /dev/shm:/dev/shm -e VNC_NO_PASSWORD=1 selenium/standalone-chrome-debug:latest
before_cache:
  - rm -f $HOME/.gradle/caches/modules-2/modules-2.lock
  - rm -fr $HOME/.gradle/caches/*/plugin-resolution/
cache:
  directories:
    - $HOME/.gradle/caches/
    - $HOME/.gradle/wrapper/
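For completeness, the Gradle-run tests reach the container over Selenium's remote protocol. Here is a minimal sketch of how a test might obtain a driver against the standalone container; the class is illustrative, and the /wd/hub endpoint matches Selenium 3-era standalone images like the one above:
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class RemoteDriverSmokeTest {
    public static void main(String[] args) throws Exception {
        // The selenium/standalone-chrome-debug container listens on port 4444
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"), new ChromeOptions());
        try {
            driver.get("https://example.org");
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}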

Cucumber sandwich jar keeps listening and does not generate the reports on the Jenkins slave

Below are the commands I am using in the Jenkins build -> Execute Shell configuration:
1) git config --local user.name XXXXX
2) curl -o cucumber-sandwich.jar -Lk "path to download cucumber sandwich jar"
3) nohup java -jar cucumber-sandwich.jar &
   /opt/beasys/apache-maven-3.0.4/bin/mvn -f myProject/pom.xml -s settings.xml -Ptest -Dit.test=myproject clean verify
4) nohup java -jar cucumber-sandwich.jar -f /opt/jenkins/ws/myProject/target/TestResults/json -o /opt/jenkins/ws/myProject/CucumberReports/cucumber-sandwich
Locally the reports are generated (feature_overview.html etc.), but not on the Jenkins server.
It's possible this is because you need the -n flag on your step 3 invocation to get the report generation to stop after one test run; however, I would strongly recommend using the Cucumber Jenkins plugin instead and following the steps here: http://moduscreate.com/integrating-bdd-cucumber-test-suites-jenkins-3-steps/
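Concretely, if -n is the run-once flag as described, the step 3 invocation would become the following (this simply spells out the answer's suggestion; the flag itself is not verified here):
nohup java -jar cucumber-sandwich.jar -n &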
