In JMeter I want to use a client certificate without the overhead of converting the certificate, and without having to remember to click the SSL Manager menu after every JMeter restart.
I want the flexibility to use different certificates wherever needed.
The Java solution here looks very promising. I tried to use it in a JSR223 PreProcessor with Groovy, but it fails on the first line: it is unable to import a standard Java class.
2017-11-08 16:02:39,139 ERROR o.a.j.m.JSR223PreProcessor: Problem in JSR223 script, JSR223 PreProcessor
javax.script.ScriptException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script37.groovy: 1: unable to resolve class java.security.Keystore
@ line 1, column 1.
import java.security.Keystore;
What do I have to do to use standard Java classes?
The whole idea is based on a solution used in SoapUI.
import com.eviware.soapui.settings.SSLSettings
import com.eviware.soapui.model.settings.Settings
import com.eviware.soapui.SoapUI
Settings settings = SoapUI.getSettings()
settings.setString(SSLSettings.KEYSTORE, "../certificates/foo.p12")
settings.setString(SSLSettings.KEYSTORE_PASSWORD , "bar")
settings.reloadSettings()
Will something like this work in JMeter? Which client is used to send the HTTP samplers?
These are not "standard Java classes"; they look like something from SoapUI.
You need to have these com.eviware.soapui.* classes on the JMeter classpath in order to make this work. Once you add the necessary .jars, a JMeter restart will be required to pick them up. However, I doubt you will be able to use this com.eviware.soapui.model.settings.Settings class instance in a JMeter test.
There is an easier way to configure JMeter to use client-side certificates: just add the following lines to the system.properties file:
javax.net.ssl.keyStoreType=pkcs12
javax.net.ssl.keyStore=../certificates/foo.p12
javax.net.ssl.keyStorePassword=bar
or pass them via -D command-line arguments to the JMeter startup script, like:
jmeter -Djavax.net.ssl.keyStoreType=pkcs12 -Djavax.net.ssl.keyStore=../certificates/foo.p12 -Djavax.net.ssl.keyStorePassword=bar -n -t test.jmx -l result.jtl
See the How to Set Your JMeter Load Test to Use Client Side Certificates article for more details on this approach.
I am trying to make sure my Jenkins instance is not exploitable via the latest log4j exploit.
I have a pipeline script that runs, and I tried following these instructions:
https://community.jenkins.io/t/apache-log4j-2-vulnerability-cve-2021-44228/990
This is one of the stages of my pipeline script:
stage('Building image aaa') {
    steps {
        script {
            sh "echo executing"
            org.apache.logging.log4j.core.lookup.JndiLookup.class.protectionDomain.codeSource
            sh "docker build --build-arg SCRIPT_ENVIRONMENT=staging -t $IMAGE_REPO_NAME:$IMAGE_TAG ."
        }
    }
}
But I get a different error than what's described there, and I'm unsure whether I'm checking this correctly. This is the error:
groovy.lang.MissingPropertyException: No such property: org for class: groovy.lang.Binding
at groovy.lang.Binding.getVariable(Binding.java:63)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:271)
at org.kohsuke.groovy.sandbox.impl.Checker$7.call(Checker.java:353)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:357)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:29)
at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
at WorkflowScript.run(WorkflowScript:31)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:74)
at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
....etc
This is probably the easiest way to check whether your Jenkins has the log4j vulnerability (through plugins or otherwise).
Go to https://your-jenkins.domain/script
Paste org.apache.logging.log4j.core.lookup.JndiLookup.class.protectionDomain.codeSource
If the output is groovy.lang.MissingPropertyException: No such property: org for class: Script1, then you're good; otherwise you're not.
This way you don't have to change your pipeline or go through the script-approval process mentioned in the other answer; you can just paste and verify without any additional configuration.
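As a variation (just an illustrative sketch; the printed messages are arbitrary, not part of any official check), you can wrap the same one-liner so the Script Console prints an explicit verdict instead of an exception:
try {
    def src = org.apache.logging.log4j.core.lookup.JndiLookup.class.protectionDomain.codeSource
    println "JndiLookup is reachable, loaded from: ${src}"   // log4j 2.x core classes are on the classpath
} catch (MissingPropertyException ignored) {
    println "JndiLookup could not be resolved from the Script Console"
}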
I don't think a class name would be directly interpreted as a Groovy codeSource argument in a declarative pipeline (as opposed to a scripted one).
Try the approach from "How to import a file of classes in a Jenkins Pipeline?", with:
node {
    def cl = load 'Classes.groovy'
    def a = cl.getProperty("org.apache.logging.log4j.core.lookup.JndiLookup").protectionDomain.codeSource
    ...
}
Note that getClassLoader() is disallowed by default, and would require an In-process Script Approval from a Jenkins administrator.
I am trying to run an app that uses a Kafka producer (Python client) and an Apache Beam pipeline that will (for now) simply consume those messages by printing them to STDOUT.
I understand that using the Kafka external transform with Apache Beam is a cross-language endeavor, as it calls out to an external Java service. I followed Option 1 from the linked documentation:
Option 1: Use the default expansion service
This is the recommended and easiest setup option for using Python Kafka transforms. This option is only available for Beam 2.22.0 and later.
This option requires following pre-requisites before running the Beam pipeline.
Install Java runtime in the computer from where the pipeline is constructed and make sure that ‘java’ command is available.
I am running apache-beam==2.31.0, and just installed Java:
openjdk 11.0.11 2021-04-20
OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.18.04)
OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.18.04, mixed mode, sharing)
I am not completely sure which runner I should use, as the portability documentation seems to point towards a Universal Local Runner, but I can't seem to find this runner in the documentation.
Here's the code sample I'm trying to make work:
import argparse
import apache_beam as beam
from helpers import ccloud_lib
from apache_beam.io.external.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions
def run(argv=None):
    """Main entry point; runs a word_count pipeline"""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--input_topic",
        dest="input_topic",
        default="wordcount",
        help="Kafka topic to use for input",
    )
    parser.add_argument(
        "--kafka_config",
        dest="config_file",
        default="config/confluent/python.config",
    )
    args = parser.parse_known_args(argv)[0]

    beam_options = PipelineOptions(runner="DirectRunner")

    consumer_conf = ccloud_lib.read_ccloud_config(args.config_file)
    consumer_conf["group.id"] = "python_wordcount_group_1"
    consumer_conf["auto.offset.reset"] = "earliest"

    with beam.Pipeline(options=beam_options) as pipeline:
        (
            pipeline
            | "Read" >> ReadFromKafka(
                consumer_config=consumer_conf,
                topics=[args.input_topic],
            )
            | "Print" >> beam.Map(print)
        )
I launch the module, but I don't understand exactly how this works, as some Java artifacts seem to be downloaded and a Docker image launched. I then get the following warning message:
INFO:apache_beam.runners.portability.fn_api_runner.worker_handlers:b'2021/08/25 14:38:05 Failed to obtain provisioning information: failed to dial server at localhost:36071\n\tcaused by:\ncontext deadline exceeded\n'
To summarize my questions, can you explain what goes on when I launch the script? And which runner should I be using to do this? How do I fix this?
I think the universal runner is located under apache_beam.runners.portability.portable_runner.
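If you want to experiment with it, a minimal sketch of the pipeline options follows, assuming a Beam job service is already listening on localhost:8099; the endpoint address and the LOOPBACK environment below are illustrative values, not something your setup is known to use:
from apache_beam.options.pipeline_options import PipelineOptions

# Point the pipeline at a running Beam job service instead of the DirectRunner.
# "PortableRunner" resolves to apache_beam.runners.portability.portable_runner.PortableRunner.
beam_options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",  # address of the job service (placeholder)
    "--environment_type=LOOPBACK",    # run the SDK worker inside the launching process
])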
I'm trying to write a Java application that uses the JournalParser to extract authors, citations, etc. from journal articles. The documentation for the GrobidJournalParser gives instructions for the command line app and for TikaServer. I need to point to Grobid running somewhere other than localhost:8080. I have a GrobidExtractor.properties file containing the correct URL on my classpath, but it doesn't seem to get found - I get an error because it's trying to access Grobid on localhost:8080.
WARNING: Interceptor for {http://localhost:8080/processHeaderDocument}WebClient has thrown exception, unwinding now
org.apache.cxf.interceptor.Fault: No message body writer has been found for class org.apache.cxf.jaxrs.ext.multipart.MultipartBody, ContentType: multipart/form-data
at org.apache.cxf.jaxrs.client.WebClient$BodyWriter.doWriteBody(WebClient.java:1220)
Is there some other way to tell Tika or the JournalParser where to find Grobid? The Javadocs were not helpful in this regard.
As explained in the documentation on using GROBID with Tika, if you want to configure Tika to use an alternate GROBID server, you do so with a file named org/apache/tika/parser/journal/GrobidExtractor.properties.
You have only called yours GrobidExtractor.properties, which is why it isn't being picked up; the full path is required.
Assuming you're using Linux, using the Tika app, and have the GROBID properties file in the current directory, you'd need to fix it with something like:
mkdir -p org/apache/tika/parser/journal
mv GrobidExtractor.properties org/apache/tika/parser/journal/
java -classpath .:tika-app-1.13.jar org.apache.tika.cli.TikaCLI --metadata journal.pdf
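For completeness, the file itself is a plain Java properties file. A minimal sketch follows; the grobid.server.url key is, to the best of my recollection, what Tika's GrobidRESTParser reads, and the host and port are placeholders, so verify both against your Tika version:
# org/apache/tika/parser/journal/GrobidExtractor.properties
grobid.server.url=http://my-grobid-host:8070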
Has anyone out in the community successfully created a Selenium build in Jenkins using Browserstack as their cloud provider, while requiring a local testing connection behind a firewall?
I can say for sure that Saucelabs is surprisingly easy to execute builds with via the Sauce Jenkins plugin in a continuous deployment environment, as I have done it. I cannot, however, say the same for Browserstack. The organization I work with currently uses Browserstack, and although their service supports automated testing through a local binary application, I find it troublesome with Jenkins. I need to make absolutely sure whether Browserstack is a viable solution or not. I love Saucelabs and what their organization provides, but if Browserstack works I don't want to switch if I don't need to.
The Browserstack documentation instructs you to run a command, with some available options, in order to create a local connection before execution.
nohup ./[binary file] -localIdentifier [id] [auth key] localhost,3000,0 &
I have added the above statement as a pre-build shell step. I also have to add 'nohup', because once the binary creates a successful connection the build never actually starts, since the command has not exited, as shown in the output below.
BrowserStackLocal v3.5
You can now access your local server(s) in our remote browser.
Press Ctrl-C to exit
Normally I can successfully execute the first build without a problem. Subsequent build configurations using the same command never connect. The above message displays, but during test execution Browserstack reports no local testing connection was established. This confuses me.
To give you a better idea of what's being executed: I have 15 build configurations for various project suites and browser combinations. Two Jenkins executors exist and I have more than 5 Browserstack VMs available at any given time. Five of the builds will automatically begin execution when the associated project code is pushed to the staging server, filling up both executors. One of them will begin and end fine. None of the others will, as Browserstack reports local testing is not available.
Saucelabs obviously has this figured out with their plugin, which is great. If Browserstack requires shell commands to create local testing connections, I must be doing something wrong, out of order, etc.
Environment:
Java 7
Selenium 2.45
JUnit 4.11
Maven 3.1.1
Allure 1.4.10
Jenkins 1.5
Can someone who uses Browserstack in a continuous testing environment with multiple parallel test executions post some information on how each build is configured?
Thanks,
I've recently looked into BrowserStack with Selenium and the BrowserStack Plugin has made this task much easier.
Features:
- Manage your BrowserStack credentials globally or per build job.
- Set up and tear down BrowserStack Local for testing internal, dev or staging environments.
- Embed BrowserStack Automate reports in your Jenkins job results.
Much easier integration all round.
This is Umang replying on behalf of BrowserStack.
To start with, you are using the correct command for setting up the Local Testing connection, although you do not need to specify the ‘localhost,3000,0’ details. We would also suggest you use the “-forcelocal” parameter while initiating the connection. The command should be as follows:
nohup ./[binary file] [auth key] -localIdentifier [id] -forcelocal &
The parameter “-forcelocal” will route all traffic via your IP address. Also, the process to initiate the connection before running your tests is correct.
However, I'd like to confirm the “id” you've specified while creating the connection. As you shared, there are 15 build configurations, and I understand that each build has a different “id” specified. Please make sure that the “id” specified while setting up the Local Testing connection and in the tests (“browserstack.localIdentifier” = “id”) is the same. Otherwise, you will receive the error “[browserstack.local] is set to true but local testing through BrowserStack is not connected”.
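On the test side, wiring that same identifier into the capabilities might look like this in Java/Selenium (a sketch only; the browser choice, the build-7-chrome identifier, and the USERNAME/ACCESS_KEY placeholders are illustrative):
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class BrowserStackLocalExample {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browser", "Chrome");           // illustrative browser
        caps.setCapability("browserstack.local", "true");  // enable Local Testing
        // Must match the -localIdentifier value passed to the BrowserStackLocal binary
        caps.setCapability("browserstack.localIdentifier", "build-7-chrome");

        WebDriver driver = new RemoteWebDriver(
                new URL("https://USERNAME:ACCESS_KEY@hub.browserstack.com/wd/hub"), caps);
        driver.get("http://localhost:3000");               // the app exposed through the tunnel
        driver.quit();
    }
}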
Integrating BrowserStack with Jenkins is a little bit tricky, but don't worry, it's perfectly doable :-)
The BrowserStackLocal client needs to be started as a background process, as per Umang's suggestion, and that's pretty much how the SauceLabs plugin works as well.
The trouble is that when Jenkins sees that you start daemon processes all by yourself and not via a plugin, it kills them. That's why you need to convince it otherwise.
I've described how to go about it step by step in this article, but if you're using the Pipeline Plugin, you can use the below script as a starting point:
node {
    with_browser_stack 'linux-x64', {
        // Execute tests: here's where a step like
        // sh 'mvn clean verify'
        // would go
    }
}

// ----------------------------------------------
def with_browser_stack(type, actions) {
    // Prepare the BrowserStackLocal client
    if (! fileExists("/var/tmp/BrowserStackLocal")) {
        sh "curl -sS https://www.browserstack.com/browserstack-local/BrowserStackLocal-${type}.zip > /var/tmp/BrowserStackLocal.zip"
        sh "unzip -o /var/tmp/BrowserStackLocal.zip -d /var/tmp"
        sh "chmod +x /var/tmp/BrowserStackLocal"
    }
    // Start the connection
    sh "BUILD_ID=dontKillMe nohup /var/tmp/BrowserStackLocal 42MyAcc3sK3yV4lu3 -onlyAutomate > /var/tmp/browserstack.log 2>&1 & echo \$! > /var/tmp/browserstack.pid"
    try {
        // Execute tests
        actions()
    }
    finally {
        // Stop the connection
        sh "kill `cat /var/tmp/browserstack.pid`"
    }
}
You'd of course need to replace the fake access key (42MyAcc3sK3yV4lu3) with yours, or provide it via an environment variable.
The important part here is the BUILD_ID, because that's what the Jenkins ProcessTreeKiller looks for when it decides whether to kill your daemon process or not.
Hope this helps!
Jan
Paul Whelan's answer to use the Jenkins BrowserStack Plugin is currently the simplest way to integrate Jenkins with BrowserStack. The plugin supports all Jenkins versions >1.580.1.
To ensure that you get BrowserStack test reports you will need to configure your project's pom.xml as documented on the plugin wiki.
Just in case anyone else is having problems with this:
For BrowserStackLocal v4.8 I found that -localidentifier has been removed from the binary options. (This is probably old news!)
When I removed the capabilities['browserstack.localIdentifier'] property from our automated tests, the connection started working.
local binary
browserstack <key> -v -forcelocal
selenium setup
Capybara.register_driver :browserstack do |app|
  capabilities = Selenium::WebDriver::Remote::Capabilities.new

  # If we're running the BrowserStackLocal binary, we need to
  # tell the driver as well
  capabilities['browserstack.local'] = true

  # other useful options
  capabilities['browserstack.debug'] = true
  capabilities['browserstack.javascriptEnabled'] = true
  capabilities['javascriptEnabled'] = true
  # etc ...
I'm stuck on Mule version 3.4.0 due to requirements at work. I'm writing a service script to manage the service lifecycle of Mule and would really like to be able to have it hang and wait for a debugger to connect based on whether a certain option is present in the parameters.
I'm comfortable with Bash and implementing this, but I'm having an extremely hard time trying to get Mule to pass the options
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9989
along to the underlying Java process, as it uses its own (stupid) wrapper to launch Java.
I'm trying to modify the bin/mule script to have a mode called debug which will pass the above debugger options to the JVM when invoked with:
bin/mule debug
My current work can be found here on PasteBin, and here is the relevant part near line 511:
debug() {
    echo "Debugging $APP_LONG_NAME..."
    getpid
    if [ "X$pid" = "X" ]
    then
        # The string passed to eval must handle spaces in paths correctly.
        COMMAND_LINE="$CMDNICE \"$WRAPPER_CMD\" \"$WRAPPER_CONF\" wrapper.syslog.ident=$APP_NAME wrapper.pidfile=\"$PIDFILE\" $ANCHORPROP $LOCKPROP"
        ######################################################################
        # Customized for Mule
        ######################################################################
        echo "command line: $COMMAND_LINE"
        echo "mule opts: $MULE_OPTS"
        echo "JPDA_OPTS: $JPDA_OPTS"
        eval $COMMAND_LINE $JPDA_OPTS $MULE_OPTS
        ######################################################################
    else
        echo "$APP_LONG_NAME is already running."
        exit 1
    fi
}
I cannot upgrade to a newer version of Mule. I need to find a way to modify this script to simply wait for a debugger when invoked with bin/mule debug. I've modified it enough to get into this debug function I've defined which is basically a copy of their own console function for starting in console mode. I can't seem to figure out how to get my debug opts passed to the JVM. Any ideas?
According to the documentation, the -debug parameter was already present in 3.4.x:
./mule -debug
Give it a try.
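If you still want explicit control over the JVM arguments, another angle worth trying: bin/mule launches Java through the Tanuki Service Wrapper (the $WRAPPER_CMD and $WRAPPER_CONF variables in your script), and the wrapper reads extra JVM arguments from conf/wrapper.conf via wrapper.java.additional.<n> entries. A sketch follows; the index numbers are illustrative and must continue whatever numbering your wrapper.conf already uses:
# conf/wrapper.conf (indices must continue the existing wrapper.java.additional.<n> sequence)
wrapper.java.additional.4=-Xdebug
wrapper.java.additional.5=-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9989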