I am running a load test against one of my SFTP server applications using JMeter. I run my JMX script as below:
nohup sh jmeter.sh -n -t <jmx_file> -l <jtl_file> &
The script has a Simple Data Writer which creates a CSV file with the results, which I convert into HTML using the command below, run from cmd in JMeter's bin folder:
jmeter -g <csv_path> -o <html_folder>
It was working a couple of days back, but now if I run the above command it gives the error below:
The system cannot find the path specified.
errorlevel=3
Press any key to continue . . .
There was an update to my JDK from 1.8_241 to 1.8_251, and I have updated my JAVA_HOME as well.
Do I need to do anything else in JMeter to make this work?
Double check that you can launch JMeter normally without nohup like:
./jmeter.sh --version
Ensure that the test is producing the .jtl results file and that the file is up to date, not left over from previous executions
Check the contents of the nohup.out file
In general you should not be using any listeners; it is sufficient to have the .jtl results file in order to generate the dashboard, and the dashboard can be generated during the test run like:
jmeter -n -t <jmx_file> -l <jtl_file> -e -o <html_folder>
If you don't have any background logic to remove the "stale" jtl_file and html_folder, you will need to add the -f command-line argument to force JMeter to overwrite the old results file and dashboard folder like:
jmeter -n -f -t <jmx_file> -l <jtl_file> -e -o <html_folder>
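If stale output is the culprit, another option is a small wrapper that timestamps each run so the results file and dashboard folder never collide. A minimal sketch, assuming jmeter is on PATH; test.jmx and the file names are illustrative, and the final command is echoed rather than executed so the sketch runs anywhere:

```shell
#!/bin/sh
# Timestamp each run so the .jtl file and dashboard folder are always fresh.
STAMP=$(date +%Y%m%d-%H%M%S)
JTL="results-${STAMP}.jtl"
REPORT="report-${STAMP}"
# Printed instead of executed here; drop the echo to run the real test:
echo "jmeter -n -f -t test.jmx -l ${JTL} -e -o ${REPORT}"
```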
I have written a small CLI using Java, Argparse4j, and packaged it in docker using this Dockerfile:
FROM openjdk:18
ENV JAR_NAME "my-jar-with-dependencies.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
ENTRYPOINT ["./run.sh"]
The last line of the Dockerfile then invokes a simple bash script:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
The CLI I wrote has three main actions: upload, download and apply. Therefore argparse4j expects one of these actions to be passed as the first parameter, i.e.
java -jar cli.jar download #... whatever other argument
This works just fine when running the docker image locally, but completely fails when running in the CI pipeline:
download:
stage: download
image: <url>/my-image:<tag>
variables:
URL: <URL>
API_KEY: <API_KEY>
CI_DEBUG_TRACE: "true"
script:
- download -f zip -u true test-download.zip
This is the error that is returned:
Executing "step_script" stage of the job script 00:01
Using docker image sha256:<sha> for <url>/my-image:<tag> with digest <url>/my-image:<tag>#sha256:<sha> ...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
usage: tool [-h] ACTION ...
tool: error: invalid choice: 'sh' (choose from 'upload', 'download',
'apply')
I have tried following the suggestion in "gitlab-runner doesn't run ENTRYPOINT scripts in Dockerfile" but I can't seem to get the CI part to work correctly.
I would like to avoid using the entrypoint directive, as it would need to be used in multiple files, so I would rather fix the issue at the root.
Does anyone have an idea of what is happening or how to fix it?
I would like to avoid using the entrypoint directive, as it would need to be used in multiple files, so I would rather fix the issue at the root.
You can change your Dockerfile instead to keep the default ENTRYPOINT (as openjdk:18 doesn't define any entrypoint, it will be empty):
FROM openjdk:18
# ...
# ENTRYPOINT ["./run.sh"] # remove this
# Add run.sh to path to be able to use `run.sh` from any directory
ENV PATH="${PATH}:/opt/app"
And update your run.sh to specify the full path to the jar:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"
Now your container will start in Gitlab without having to specify the entrypoint keyword for the job. You can then set up something like this:
download:
stage: download
image: <url>/my-image:<tag>
variables:
URL: <URL>
API_KEY: <API_KEY>
CI_DEBUG_TRACE: "true"
script:
# Specify command using run.sh
# This command is run from within your container
# Note that script is not an argument passed on your image startup
# But independent commands run within your container using shell
- run.sh download -f zip -u true test-download.zip
Notes:
Gitlab won't run your script in the Dockerfile's WORKDIR but in a dedicated directory where your project will be cloned. Using ./ will look for the script and jar in the current directory at the moment your command is run, but they wouldn't be found if not run from /opt/app. Specifying the full path to the jar and adding your run.sh script to PATH makes sure they'll be found wherever you run run.sh from. Alternatively you could run cd /opt/app in your job's script, but it may cause unwanted side effects.
Without ENTRYPOINT you won't be able to run Docker commands like this
docker run " <url>/my-image:<tag>" download ...
You'll need to specify either COMMAND or --entrypoint such as
docker run "<url>/my-image:<tag>" run.sh download ...
docker run --entrypoint run.sh "<url>/my-image:<tag>" download ...
You specified not wanting to do this, but overriding the image entrypoint in your job seems a much simpler and more straightforward solution. For use across multiple files you may leverage Gitlab's extends and include.
And now for the fun part
what is happening
When Gitlab runs your container for a job, it will use the entrypoint defined in your Dockerfile by default. From the doc:
The runner starts a Docker container using the defined entrypoint. The default from Dockerfile that may be overridden in the
.gitlab-ci.yml file.
The runner attaches itself to a running container.
The runner prepares a script (the combination of before_script, script, and after_script).
The runner sends the script to the container’s shell stdin and receives the output.
And what the doc doesn't say is that Gitlab will try to use various forms of sh as the Docker command. In short, step 1 is like running this Docker command:
# Gitlab tries to run container for your job
docker run -it "<url>/my-image:<tag>" sh
It doesn't work because Gitlab will use the default entrypoint, and the final command run in Docker is:
./run.sh sh
Where ./run.sh is the entrypoint from Dockerfile and sh is the command provided by Gitlab. It causes the error you see:
tool: error: invalid choice: 'sh' (choose from 'upload', 'download', 'apply')
You never reach your job's script (step 4). See ENTRYPOINT vs. CMD for details.
Furthermore, the script you define is a command itself. Even if your container started, it wouldn't work, as the following command would be run inside your container:
download -f zip -u true test-download.zip
# The 'download' command doesn't exist
# You probably want to run instead something like:
/opt/app/run.sh download -f zip -u true test-download.zip
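The entrypoint/command concatenation described above can be mimicked in plain shell. This is a sketch where the function stands in for the image's run.sh entrypoint:

```shell
#!/bin/sh
# Stand-in for the image's run.sh: it forwards all of its arguments to the
# CLI, just as `java -jar cli.jar "$@"` would.
entrypoint() { echo "cli.jar got args: $*"; }

# Docker appends the command to the entrypoint, so when Gitlab injects
# "sh" as the command, the CLI receives "sh" as its action:
entrypoint sh
```

This prints `cli.jar got args: sh`, which is exactly the "invalid choice: 'sh'" situation argparse4j complains about.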
So, after a bit of research, I have been able to find a solution that works for me.
From my research (and as Pierre B. pointed out in his answer), Gitlab essentially tries to inject a shell script that checks which shell is available.
Now, my solution is in no way elegant, but does achieve what I wanted. I modified the Dockerfile like so:
FROM openjdk:18-bullseye
ENV JAR_NAME "my-jar.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
RUN echo '#!/bin/bash \njava $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"' > /usr/bin/i18n && \
chmod +x /usr/bin/i18n
ENTRYPOINT ["./run.sh"]
And also modified the run.sh script this way:
#!/bin/bash
if [[ -n "$CI" ]]; then
echo "this block will only execute in a CI environment"
exec /bin/bash
else
echo "Not in CI. Running the image normally"
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
fi
This works because Gitlab, in its list of predefined variables, provides a CI env var that is set when the script is running in CI. By checking it, I skip the java invocation but leave it in for when I need to run the image outside CI.
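The branch can be reduced to a small sketch (the echoes stand in for the real exec and java calls; only the CI check itself is the real logic):

```shell
#!/bin/sh
# Gitlab sets CI="true" in every job; locally it is normally unset.
run_mode() {
  if [ -n "$CI" ]; then
    echo "ci"     # real script: exec /bin/bash so the runner takes over
  else
    echo "local"  # real script: java -jar cli.jar "$@"
  fi
}
run_mode
```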
Now when I need to use my image, all I need to specify in my .gitlab-ci.yml file is this:
download:
stage: download
image: <url>/my-image:<tag>
variables:
URL: <URL>
API_KEY: <API_KEY>
CI_DEBUG_TRACE: "true"
script:
- i18n download -f zip -u true test-download.zip # This calls the script that was injected in the Dockerfile
This way I essentially mimic an actual CLI, and can use it in all my projects that require this tool.
I am not sure, though, why I need to "echo" the script for the CLI and can't simply copy it. For some reason the env variables are not passed down, and I couldn't spend any more time debugging it. So for now, it will stay like this.
If you have any recommendations on how to clean this up, please leave some comments and I will edit my answer!
Try wrapping your script in single quotes:
script:
- 'download -f zip -u true test-download.zip'
EDIT:
Oh, this open bug in Gitlab could be relevant to you.
We have a legacy application that I am trying to dockerize. The application jar has both the application and an ActiveMQ bundled together (we cannot change the way it is built), and it has certain installation steps. I created the following initial Dockerfile for this; however, I am facing an issue (mentioned after the Dockerfile) when I run the image.
The Dockerfile looks like this :
FROM registry:4000/openjdk:8-jre-alpine
RUN addgroup -S appuser && adduser -S -G appuser appuser
ADD ./fe.jar /home/appuser
RUN chmod +x /home/appuser/fe.jar \
&& chown appuser:appuser /home/appuser/fe.jar
USER appuser
RUN ["java", "-jar", "/home/appuser/fe.jar", "-i"]
WORKDIR /home/appuser/fe/activemq/bin
CMD ["/bin/sh", "-c", "activemq"]
The RUN command extracts the application and the ActiveMQ at that location into a folder called fe.
The WORKDIR seems to be setting the working directory to activemq/bin. I confirmed this by using an sh script which triggers when the image is run. In the script I trigger ls and pwd commands to see the contents and the location.
However when I run the image, which triggers the CMD command, I get the error:
/bin/sh: activemq: not found
What can be the possible issue here?
If activemq is an executable in your bin directory (and not on PATH) then you need to edit your CMD:
CMD ["/bin/sh", "-c", "./activemq"]
Also make sure that your script is executable.
Found the problem. The activemq script starts with #!/bin/bash and I was trying to run it using sh. I need to first install bash in the image and then run the activemq script with bash.
I got the hint from this answer : docker alpine /bin/sh script.sh not found
Now it moves ahead; however, the container dies immediately after starting. Not sure what the issue is. It doesn't even give any error.
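The interpreter mismatch is easy to reproduce outside Docker; in this sketch, /tmp/myscript is an illustrative stand-in for the activemq script:

```shell
#!/bin/sh
# Create a stand-in script with the same shebang the activemq script has:
printf '#!/bin/bash\necho started\n' > /tmp/myscript
chmod +x /tmp/myscript
# The kernel looks for this interpreter; if /bin/bash is missing (as on
# Alpine), the confusing "not found" error refers to the interpreter,
# not the script itself:
head -n 1 /tmp/myscript
# On Alpine, bash can be added with: apk add --no-cache bash
```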
How can I add the current date to the file name of a JMeter CSV result file while running the test in command-line mode?
E.g.
jmeter -n -t ~/tests/TP_Login_api.jmx -Jusers=12 -Jhost=devops1 -Juser_file=users_max.csv -Jaccount=max -l /results/results_TP_Login_api_${__time(yyyyMMdd-HHmmss)}.csv -e -o/results/report/TP_Login_api
Any ideas how to achieve that?
In batch you need to work with batch syntax. For Bash use:
#!/bin/bash
NOW=$(date +"%m-%d-%Y")
jmeter -n -t ~/tests/TP_Login_api.jmx -Jusers=12 -Jhost=devops1 -Juser_file=users_max.csv -Jaccount=max -l /results/results_TP_Login_api_$NOW.csv -e -o/results/report/TP_Login_api
For Windows, see the date command.
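To sanity-check the substitution before a long test run, you can echo the resulting filename first (a sketch; the path and prefix are taken from the question):

```shell
#!/bin/sh
# Build the dated results filename exactly as the wrapper script would.
NOW=$(date +"%m-%d-%Y")
RESULT_FILE="/results/results_TP_Login_api_${NOW}.csv"
echo "$RESULT_FILE"
```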
Below are the commands that I am using in the Jenkins build --> Execute Shell configuration:
1) git config --local user.name XXXXX
2) curl -o cucumber-sandwich.jar -Lk "path to download cucumber sandwich jar"
3)
nohup java -jar cucumber-sandwich.jar &
/opt/beasys/apache-maven-3.0.4/bin/mvn -f myProject/pom.xml -s settings.xml -Ptest -Dit.test=myproject clean verify
4) nohup java -jar cucumber-sandwich.jar -f /opt/jenkins/ws/myProject/target/TestResults/json -o /opt/jenkins/ws/myProject/CucumberReports/cucumber-sandwich
Locally the reports are getting generated (feature_overview.html etc.) but not on the Jenkins server.
It's possible this is because you need the -n flag in your step #3 to get the report to stop generating after one test run; however, I would strongly recommend using the Cucumber Jenkins plugin and following the steps here: http://moduscreate.com/integrating-bdd-cucumber-test-suites-jenkins-3-steps/
I have a .jar file I want to run whenever the system reboots/starts, so I put the line
nohup java -jar /mnt/fioa/fusion/nfs/labStats/LabInfoAutoLog.jar > /dev/null &
in my /etc/rc.local file. The program is validated as working, and if I run the above command at the command line the program works as expected.
Other versions I have tried without success:
nohup /usr/bin/java -jar /mnt/fioa/fusion/nfs/labStats/LabInfoAutoLog.jar > /dev/null &
and:
nohup java -jar /mnt/fioa/fusion/nfs/labStats/LabInfoAutoLog.jar 2> /dev/null \ .. &
I am running CentOS 6.4.
Check that your jar file is accessible to root; NFS-mounted volumes may impose special restrictions for root.
Instead of discarding your error messages, you might want to route them to syslog, e.g. with bash process substitution: 2> >(/sbin/logger -t FOO) > >(/sbin/logger -t BAR)
Maybe the PATH isn't set yet at startup time and you need the full path to the java executable or, possibly, nohup.
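Resolving absolute paths before launching is a defensive pattern for rc.local. This sketch echoes the resulting command rather than executing it; the jar path is the one from the question, and the fallback locations are assumptions about typical installs:

```shell
#!/bin/sh
# rc.local runs with a minimal PATH, so resolve executables explicitly
# before using them (fallback paths are assumptions).
JAVA_BIN=$(command -v java || echo /usr/bin/java)
NOHUP_BIN=$(command -v nohup || echo /usr/bin/nohup)
# Printed instead of executed; drop the echo for the real rc.local entry:
echo "$NOHUP_BIN $JAVA_BIN -jar /mnt/fioa/fusion/nfs/labStats/LabInfoAutoLog.jar > /dev/null 2>&1 &"
```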