Set up testing of our questionnaire with JMeter - java

I've worked with JMeter a little before and have just downloaded JMeter 2.7.
Our web application has a questionnaire that each person fills out. Like most questionnaires, the questions that show up vary depending on answers to previous questions, so there are multiple paths and very rarely does one person see all of the questions.
What I'd like to do is create a control file specifying a group of questionnaires. The test would load that file, log each person into the system, fill out their questionnaire, and check the path taken and the results at the end to make sure the answers were stored properly.
I would like to have 25 simultaneous users of this. Eventually I'd like to have a few hundred.
How do I get started setting all of this up in JMeter? I don't need a walkthrough; I'm already a little familiar with a number of the JMeter components. Which components would I use to solve this problem, and in what order?
Thanks.

First of all, I recommend upgrading to the latest version of JMeter.
Every test starts with a Thread Group (right-click on the Test Plan to add one).
Then set the number of users to 25 by clicking on your Thread Group and filling in the Number of Threads field.
Since you're dealing with the web, you would add an HTTP Request sampler to your Thread Group (there are many more samplers available, since JMeter can be extended with practically anything you need).
After making the requests, you would validate the responses with, e.g., a Response Assertion. To feed each thread its own login and expected answers from your control file, the usual component is a CSV Data Set Config under the Thread Group.
I could go on for a long time, really. JMeter's documentation is somewhat thin in my opinion, but it's a great tool.
Without more specific questions, this should be enough to get you started.
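If you'd rather drive this from Java code than the GUI, the same structure can be assembled with JMeter's own API. A minimal sketch, assuming the JMeter jars are on the classpath; the install path, domain, and sampler path are placeholders, and building the plan in the GUI and saving a .jmx is the more common route:

```java
import org.apache.jmeter.control.LoopController;
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
import org.apache.jmeter.testelement.TestPlan;
import org.apache.jmeter.threads.ThreadGroup;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class QuestionnairePlan {
    public static void main(String[] args) throws Exception {
        // Point JMeterUtils at a local JMeter install so properties resolve.
        JMeterUtils.setJMeterHome("/path/to/jmeter");   // placeholder path
        JMeterUtils.loadJMeterProperties("/path/to/jmeter/bin/jmeter.properties");
        JMeterUtils.initLocale();

        // One page of the questionnaire as an HTTP sampler.
        HTTPSamplerProxy page = new HTTPSamplerProxy();
        page.setDomain("questionnaire.example.com");    // placeholder domain
        page.setPort(80);
        page.setPath("/questionnaire/page1");
        page.setMethod("POST");

        // Run each thread through the sampler once.
        LoopController loop = new LoopController();
        loop.setLoops(1);
        loop.setFirst(true);
        loop.initialize();

        // 25 simultaneous users, as asked.
        ThreadGroup users = new ThreadGroup();
        users.setName("questionnaire users");
        users.setNumThreads(25);
        users.setRampUp(1);
        users.setSamplerController(loop);

        // Assemble the tree: TestPlan -> ThreadGroup -> sampler.
        TestPlan plan = new TestPlan("questionnaire test");
        HashTree tree = new HashTree();
        tree.add(plan);
        HashTree groupTree = tree.add(plan, users);
        groupTree.add(page);

        StandardJMeterEngine jmeter = new StandardJMeterEngine();
        jmeter.configure(tree);
        jmeter.run();
    }
}
```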

Related

How to run a scheduled task on a single openshift pod only?

Story: in my Java code I have a few ScheduledFutures that I need to run every day at a specific time (15:00, for example). The only things available to me are a database, my current application, and OpenShift with multiple pods. I can't move this code out of my application and must run it from there.
Problem: the ScheduledFuture runs on every pod, but I need it to run only once a day. I have a few ideas, but I don't know how to implement them.
Idea #1:
Set an environment variable on one specific pod; then I can check whether the variable exists (and its value), read it, and run the scheduled task if required. I know I risk the task not running at all if that pod goes away, but not running the scheduled task at all is better than running it multiple times.
Idea #2:
Determine a leader pod somehow. This seems like a bad idea in my case, since leader election always has the "split-brain" problem.
Idea #3 (a bit off-topic):
Create my own synchronization algorithm through the database. To be fair, it's the simplest way for me, since I'm a programmer and not an SRE. I understand that it's not the best option, though.
Idea #4 (a bit off-topic):
Just use the Quartz scheduling library. I personally don't really like that option and would prefer one of the first two ideas (if I'm able to implement them), but at the moment it seems like my only valid choice.
UPD: Maybe you have some other suggestions, or a warning that I shouldn't do this at all?
I would suggest using a ready-made solution. Getting these things right, especially covering all the corner cases around reliability, is hard. If you do not want to use Quartz, I would at least suggest a database-backed solution. Postgres, for example, has SELECT ... FOR UPDATE SKIP LOCKED (see the section "The Locking Clause" in its documentation), which can be used to implement run-once scheduling.
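A minimal sketch of that approach in plain JDBC (the table, task name, and DataSource wiring are all illustrative): every pod calls this at 15:00, SKIP LOCKED lets exactly one of them claim the row, and the reschedule keeps late arrivals from re-claiming it after the commit.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class OnceADayRunner {
    // Hypothetical table:
    //   CREATE TABLE scheduled_task (name text PRIMARY KEY, next_run timestamptz);

    private final DataSource dataSource;

    public OnceADayRunner(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Called on every pod at ~15:00; only one pod actually runs the task. */
    public void runIfClaimed(Runnable task) throws Exception {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement claim = con.prepareStatement(
                    "SELECT name FROM scheduled_task " +
                    "WHERE name = 'daily-task' AND next_run <= now() " +
                    "FOR UPDATE SKIP LOCKED");
                 ResultSet rs = claim.executeQuery()) {
                // If another pod holds the row lock (or the task already ran
                // today), rs is empty and we simply do nothing.
                if (rs.next()) {
                    task.run();
                    try (PreparedStatement reschedule = con.prepareStatement(
                            "UPDATE scheduled_task " +
                            "SET next_run = next_run + interval '1 day' " +
                            "WHERE name = 'daily-task'")) {
                        reschedule.executeUpdate();
                    }
                }
            }
            con.commit();
        }
    }
}
```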
You can create a cron job using OpenShift:
https://docs.openshift.com/container-platform/4.7/nodes/jobs/nodes-nodes-jobs.html
and have this job trigger some endpoint in your application that will invoke your logic.
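The application side of that can be as small as one internal endpoint for the cron job to call; a sketch assuming Spring MVC (the path and class name are made up):

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DailyTaskController {

    // The OpenShift CronJob would run something like:
    //   curl -X POST http://my-app:8080/internal/run-daily-task
    // at 15:00, replacing the in-process ScheduledFuture entirely.
    @PostMapping("/internal/run-daily-task")
    public ResponseEntity<Void> runDailyTask() {
        // ... invoke the logic the ScheduledFuture used to run ...
        return ResponseEntity.noContent().build();
    }
}
```

Because the job's request goes through the service, only one pod receives it, which sidesteps the run-on-every-pod problem.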

How to pass inputs to Java Selenium tests when triggered via a web endpoint?

I'm looking for a highly scalable and flexible way to kick off Selenium tests from a remote machine, preferably via a web-based endpoint, where I can pass some data through to my tests.
I've tried using both JUnitEE and TestNGEE, plus a ServletFilter, trying to get what I want, but I can't quite hit all my requirements, so I can't help thinking I'm going about it completely the wrong way... someone has to have solved this before... I just can't figure out how...
What I'd like to have happen:
Someone wanting to execute a Java Selenium test navigates to a webpage of mine. Potentially this is a JUnitEE or TestNGEE servlet; perhaps it's something else.
User selects a Selenium test to run from a list of available tests, plus a couple of values from form elements on the page. Let's say that it's 2 string values - one for environment and one for username.
User presses the Run Test button.
The server takes the selected test and starts its execution, providing it with the environment and the username values specified by the user.
Additional requirements:
All activities should be thread safe. Data should not get criss-crossed between tests, even when multiple users initiate the same test at the same time.
Notes:
While I'd be happy to have this working with even just one parameter, the hope is that the user could pass any number of arbitrary key/value pairs, which are then made available to the executed test; potentially even a CSV or other data file, or a web endpoint from which to retrieve the data.
Example:
User hits the endpoint: http://testLauncherUI.mySite.com/myServlet?test=com.mySite.selenium.flow1&environment=testEnvironment1.mySite.com&username=userNumber1&otherRandomKey=randomValue
testLauncher's myServlet triggers the contained test matching com.mySite.selenium.flow1 and that test in turn navigates to 'http://testEnvironment1.mySite.com', and proceeds to enter the 'userNumber1' text into the input box.
A second user can hit the same servlet while the prior test is still executing, but with different (or same) params: http://testLauncherUI.mySite.com/myServlet?test=com.mySite.selenium.flow1&environment=testEnvironment2.mySite.com&username=userNumber1&otherRandomKey=randomValue
testLauncher's myServlet kicks off another thread, running the same test, but against the specified site: 'http://testEnvironment2.mySite.com', and proceeds to enter the 'userNumber1' text into the input box.
What am I missing here?
Thanks in advance!
I've ended up dropping JUnitEE altogether. Life is now better. The stack that now makes this possible is GitLab, GitLab CI (with Docker), Gradle, and JUnit/TestNG.
I'm now storing my code in GitLab (Enterprise) and using Gradle as a build system. Doing so allows for this:
The included GitLab CI can be configured to expose a URL that triggers a GitLab pipeline. Each pipeline runs in a Docker container.
My GitLab CI config is set up to execute a Gradle command when this trigger URL is POSTed to. The trigger URL can carry any number of custom variables, which GitLab turns into environment variables.
My project is now a Gradle project, so when my GitLab trigger is POSTed to, I use Gradle's test filtering to pick which tests to execute (e.g. something like `./gradlew :my-test-subproj:test --tests "$TARGETED_TESTS"`, where `TARGETED_TESTS` is one of those trigger variables).
I POST the trigger URL for my tests (e.g. http://myGitLab.com/my-project/trigger?TARGETED_TESTS=com.myapp.feature1.tests), and a Docker container spins up in GitLab CI to run the matching tests. Of course, with this approach I can set whatever variables I need, and they can be read at any level: GitLab CI, Gradle, or the test/test framework itself.
This approach seems flexible and robust enough to do what I need, leaving each of my many teams free to configure the project for their specific needs without being overly prescriptive.
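To illustrate that last point, here's a sketch of a test reading those trigger variables directly; the variable names follow the example URLs in the question, and the element id and fallback values are made up:

```java
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class Flow1Test {

    // GitLab turns trigger variables into environment variables, so the
    // test can read them directly; the defaults are only for local runs.
    private final String environment =
            System.getenv().getOrDefault("environment", "testEnvironment1.mySite.com");
    private final String username =
            System.getenv().getOrDefault("username", "userNumber1");

    @Test
    public void flow1() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("http://" + environment);
            // "username" is a made-up element id for the input box.
            driver.findElement(By.id("username")).sendKeys(username);
            // ... the rest of the flow and its assertions ...
        } finally {
            driver.quit();
        }
    }
}
```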

OWASP ZAP: Active Scanner in Continuous Integration

I'm trying to use ZAP (2.4.3) in a continuous integration (CI) setting. I can run ZAP as a daemon, run all my Selenium tests (in Java) with ZAP as a proxy, and then use the REST API's htmlreport call to get a final report from the Passive Scanner. This works fine, but I would also like to use the Active Scanner.
Using the Active Scanner in CI is mentioned several times in ZAP's documentation, but I haven't found any working example or tutorial for it... does one exist?
What I would like to achieve is something like: run the Active Scanner on all the pages visited by the Selenium regression suite, once the suite has finished running.
I've tried looking at ZAP's REST API, but it is mostly undocumented:
https://github.com/zaproxy/zaproxy/wiki/ApiGen_Index
Ideally, it would be great to have something like:
Start an Active Scan asynchronously on all visited URLs
Poll to check whether the Active Scan run is complete
In the REST API there seems to be something related, but:
ascan/scan needs a URL as input. I could call core/urls to see what the Selenium tests have visited, but then how do I set the right authentication (login credentials)? What if the order in which the URLs are visited matters? What if a page is only accessible with specific credentials?
there is an ascan/scanAsUser, but it is unclear how the contextId and userId can be retrieved from ZAP. A cumbersome workaround would be to modify the Selenium tests to write to disk the URLs they visit and which username/password credentials they use, and then, once all tests are finished, read that information back to call ZAP. Is there a simpler way?
OK, so there are a lot of questions here :)
ZAP typically scans hierarchies of URLs, e.g. everything under https://www.example.com/app, the top-level URL of your application. We kind of assume you know what that will be ;)
Authentication is non-trivial to handle; see https://github.com/zaproxy/zaproxy/wiki/FAQformauth
The ascan/status call returns the percentage completed.
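To make the start-then-poll flow concrete, here is a rough Java sketch against the JSON flavour of the API, with no client library. It assumes the daemon listens on localhost:8090 and that the API key is disabled (otherwise append an apikey parameter to each call); the target URL is a placeholder and the JSON is parsed crudely:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class ZapActiveScan {

    // Read the whole body of a GET request to the ZAP daemon.
    private static String get(String url) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream()))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) sb.append(line);
            return sb.toString();
        }
    }

    // Crude extraction of one string value from ZAP's flat JSON replies.
    private static String jsonValue(String json, String key) {
        java.util.regex.Matcher m = java.util.regex.Pattern
                .compile("\"" + key + "\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) throws Exception {
        String zap = "http://localhost:8090";            // where the daemon listens
        String target = "https://www.example.com/app";   // top-level URL to scan

        // Start the active scan; the reply looks like {"scan":"0"}.
        String reply = get(zap + "/JSON/ascan/action/scan/?recurse=true&url="
                + URLEncoder.encode(target, "UTF-8"));
        String scanId = jsonValue(reply, "scan");

        // Poll ascan/status until it reports 100 (percent complete).
        while (true) {
            String status = jsonValue(
                    get(zap + "/JSON/ascan/view/status/?scanId=" + scanId),
                    "status");
            System.out.println("active scan: " + status + "%");
            if ("100".equals(status)) break;
            Thread.sleep(5000);
        }
    }
}
```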
You may find the ZAP User Group http://groups.google.com/group/zaproxy-users better for these sorts of questions.
But yes, we do need to improve the API documentation :/
Cheers,
Simon (ZAP Project Lead)

How to Accelerate Java Web Testing

I am coding an intricate method in a Spring Controller that takes request.getParameterMap() as input. In developing the method iteratively, each time I make a tweak I have to deploy and then go through the steps on the web form again.
That process can take minutes, just to tweak a small code change.
Are there any tricks or methods to speed up this process? All I really need is the input from request.getParameterMap(). Can I serialize that Map data somehow, and re-use it?
I am using Netbeans, if that is relevant.
In my experience the best approach is to set up a JUnit test that doesn't use the web server at all, but just instantiates the controller, calls the method, and checks the result.
Since your controller wasn't written from the ground up for this kind of approach, it might be quite some work to get this going at this stage. If you post the method in question we might help with this.
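A sketch of what that can look like (the controller class, method, and parameter names here are placeholders; request.getParameterMap() is just a Map<String, String[]>, which is easy to build by hand):

```java
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class MyControllerTest {

    @Test
    public void processesFormParameters() {
        // request.getParameterMap() returns Map<String, String[]>,
        // so we can fabricate the exact input without a web server.
        Map<String, String[]> params = new HashMap<>();
        params.put("firstName", new String[] {"Ada"});
        params.put("answers", new String[] {"yes", "no"});

        MyController controller = new MyController();   // placeholder class
        String result = controller.process(params);     // placeholder method

        assertEquals("expected-view", result);
    }
}
```

If the method signature really wants the HttpServletRequest itself rather than the map, the spring-test module's MockHttpServletRequest can stand in for it.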
The next best thing is setting up an integration test, which starts up the application server and executes the request (possibly through the actual web GUI, using Selenium or something similar).
Still a lot of work, but the difficulties depend less on how the code is currently structured.
As a final workaround, you can try to make the round trip for a manual test faster. There might be IDE-dependent possibilities, so you would have to let us know which IDE you use.
I haven't tested it, but many people praise JRebel for this kind of thing, so you might want to give it a try.
If you don't want to fill in the web form again and again, try JMeter (a free load-testing tool).
Create a test plan with a Thread Group (number of threads set to 1) and an HTTP Request sampler; set the method to POST and add your POST parameters. Once everything is set up, fire the request.
Please check the link below for reference:
http://community.blazemeter.com/knowledgebase/articles/65142-the-new-http-sampler-in-jmeter-2-6

I need an aliveness test library for HTTP servers

I'm writing a monitoring service for our EC2-based cluster. Its task will be: connect via HTTP/S to our events servers every X ms, verify they are alive, rest.
I need a toolkit that can perform the connect test itself and report success or failure. I've tried to do this with Apache HttpClient, but I'm getting too many false positives: failures reported that never happened. I've also looked at JMeter, which at first looked quite promising, but after downloading a 15 MB file with ~25 third-party jars it started to feel like huge overkill.
The requirement is simple: check that the tested node replies correctly, within a defined time frame, to an HTTP GET request.
Could you suggest a library for this? It is crucial to keep the false-positive rate to a bare minimum, because a reported failure means our processing stops until the supposedly broken node is examined... (a no-no indeed :)
Thank you,
Maxim.
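For reference, the bare check described above fits in a few lines of JDK-only Java; a sketch with explicit timeouts and a single retry to dampen false positives (the host, path, and timeout values are illustrative):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class AlivenessCheck {

    /**
     * GET the URL and report success only for a 200 within the time frame.
     * One retry dampens transient blips that would otherwise show up as
     * false positives.
     */
    public static boolean isAlive(String url, int timeoutMs, int retries) {
        for (int attempt = 0; attempt <= retries; attempt++) {
            try {
                HttpURLConnection con =
                        (HttpURLConnection) new URL(url).openConnection();
                con.setRequestMethod("GET");
                con.setConnectTimeout(timeoutMs);  // fail fast on dead hosts
                con.setReadTimeout(timeoutMs);     // and on stalled replies
                int status = con.getResponseCode();
                con.disconnect();
                if (status == 200) {
                    return true;
                }
            } catch (Exception e) {
                // timeout or connection failure; fall through to retry
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Illustrative values: 2 s budget per attempt, one retry.
        System.out.println(isAlive("https://events1.example.com/ping", 2000, 1));
    }
}
```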
For something in a Unix environment (which I'm guessing you are using, since you're using Apache), try Monit: http://mmonit.com/monit/
You can use Monit to make requests to your services, expect certain content, and then create alerts based on what it thinks the state of each service is. Here's an example of a config file that can be used to monitor Apache: http://mmonit.com/wiki/Monit/ConfigurationExamples#apache
You can install Monit on each of your boxes and then use M/Monit to monitor all of your monitored boxes.
