Trying to use ZAP (2.4.3) in a continuous integration (CI) setting. I can run ZAP as a daemon, run all my Selenium tests (in Java) through ZAP as a proxy, and then call htmlreport on the REST API to get a final report from the Passive Scanner. This works fine, but I would like to also use the Active Scanner.
Using the Active Scanner in CI is mentioned several times in ZAP's documentation, but I haven't found any working example or tutorial for it. Does one exist?
What I would like to achieve is something like: run the Active Scanner on all the pages visited by the Selenium regression suite, once the suite has finished running.
I have tried looking at ZAP's REST API, but it is mostly undocumented:
https://github.com/zaproxy/zaproxy/wiki/ApiGen_Index
Ideally, it would be great to have something like:
Start an Active Scan asynchronously on all visited URLs
Poll to check whether the Active Scan run is complete
The REST API seems to have something related, but:
ascan/scan needs a URL as input. I could call core/urls to see which URLs the Selenium tests have visited, but then how do I set the right authentication (login credentials)? What if the order in which the URLs are visited is important? What if a page is only accessible with specific credentials?
There is an ascan/scanAsUser, but it is unclear how contextId and userId can be retrieved from ZAP. A cumbersome workaround would be to modify the Selenium tests to write the URLs they visit, and the login/password credentials they use, to disk, and then, once all the tests are finished, read that information back in order to call ZAP. Is there a simpler way?
OK, so there are a lot of questions here :)
ZAP typically scans hierarchies of URLs, e.g. everything under https://www.example.com/app, the top-level URL of your application. We kind of assume you know what that will be ;)
Authentication is non-trivial to handle, see https://github.com/zaproxy/zaproxy/wiki/FAQformauth
The ascan/status call returns the percentage completed.
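For example, a rough sketch of starting a scan and polling it via the JSON API from Java (this assumes ZAP is listening on localhost:8080 with the API key disabled; otherwise add an apikey parameter to each call, and the target URL is a placeholder):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class ZapActiveScan {

    // Assumed ZAP daemon address; adjust to your CI environment.
    private static final String ZAP = "http://localhost:8080";

    public static void main(String[] args) throws Exception {
        String target = URLEncoder.encode("https://www.example.com/app", "UTF-8");

        // Kick off the active scan; the response is JSON like {"scan":"0"}.
        String scanJson = get(ZAP + "/JSON/ascan/action/scan/?url=" + target + "&recurse=true");
        String scanId = extract(scanJson, "scan");

        // Poll ascan/status until it reports 100 (percent complete).
        String status;
        do {
            Thread.sleep(5000);
            status = extract(get(ZAP + "/JSON/ascan/view/status/?scanId=" + scanId), "status");
            System.out.println("Active scan progress: " + status + "%");
        } while (Integer.parseInt(status) < 100);
    }

    private static String get(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        try {
            StringBuilder sb = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) sb.append(line);
            return sb.toString();
        } finally {
            in.close();
        }
    }

    // Naive JSON field extraction, just to avoid a library dependency in this sketch.
    private static String extract(String json, String key) {
        int i = json.indexOf("\"" + key + "\":\"") + key.length() + 4;
        return json.substring(i, json.indexOf('"', i));
    }
}
```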
You may find the ZAP User Group (http://groups.google.com/group/zaproxy-users) better for these sorts of questions.
But yes, we do need to improve the API documentation :/
Cheers,
Simon (ZAP Project Lead)
Related
I'm looking for a highly scalable and flexible solution for kicking off Selenium tests from a remote machine, preferably via a web-based endpoint, where I can pass some data through to my tests.
I've tried using both jUnitEE and TestNGEE, plus a ServletFilter, trying to get what I want, but I can't quite hit all my requirements, so I can't help but think that I'm going about it completely the wrong way. Someone has to have solved this before; I just can't figure out how.
What I'd like to have happen:
Someone wanting to execute a Java Selenium test navigates to a webpage of mine. Potentially this is a jUnitEE or TestNGEE servlet; perhaps it's something else.
The user selects a Selenium test to run from a list of available tests, plus a couple of values from form elements on the page. Let's say it's two string values: one for environment and one for username.
User presses the Run Test button.
The server takes the selected test and starts its execution, providing it with the environment and the username values specified by the user.
Additional requirements:
All activities should be thread safe. Data should not get criss-crossed between tests, even when multiple users initiate the same test at the same time.
Notes:
While I'd be happy to have this working with even just one parameter, the hope is that the user could pass a list of any number of arbitrary key/value pairs, which would then be made available to the executed test, potentially even a CSV or other type of data file, or a web endpoint from which to retrieve the data.
Example:
User hits the endpoint: http://testLauncherUI.mySite.com/myServlet?test=com.mySite.selenium.flow1&environment=testEnvironment1.mySite.com&username=userNumber1&otherRandomKey=randomValue
testLauncher's myServlet triggers the contained test matching com.mySite.selenium.flow1; that test in turn navigates to 'http://testEnvironment1.mySite.com' and proceeds to enter the 'userNumber1' text into the input box.
A second user can hit the same servlet while the prior test is still executing, but with different (or same) params: http://testLauncherUI.mySite.com/myServlet?test=com.mySite.selenium.flow1&environment=testEnvironment2.mySite.com&username=userNumber1&otherRandomKey=randomValue
testLauncher's myServlet kicks off another thread, running the same test, but against the specified site: 'http://testEnvironment2.mySite.com', and proceeds to enter the 'userNumber1' text into the input box.
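To make the idea concrete, here is a rough sketch of the kind of launcher I have in mind (names are hypothetical and it glosses over result collection; the ThreadLocal is what keeps per-run data from criss-crossing):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.junit.runner.JUnitCore;

// Hypothetical sketch: runs the requested test class on a worker thread,
// handing each run its own parameter map so concurrent runs stay isolated.
public class TestLauncherServlet extends HttpServlet {

    // InheritableThreadLocal so the values also reach threads the test spawns.
    public static final InheritableThreadLocal<Map<String, String>> PARAMS =
            new InheritableThreadLocal<Map<String, String>>();

    private final ExecutorService pool = Executors.newCachedThreadPool();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws java.io.IOException {
        final String testClass = req.getParameter("test");
        final Map<String, String> params = new HashMap<String, String>();
        for (String name : Collections.list(req.getParameterNames())) {
            params.put(name, req.getParameter(name));
        }
        pool.submit(new Runnable() {
            public void run() {
                PARAMS.set(params); // visible only to this run's thread(s)
                try {
                    JUnitCore.runClasses(Class.forName(testClass));
                } catch (ClassNotFoundException e) {
                    e.printStackTrace();
                } finally {
                    PARAMS.remove();
                }
            }
        });
        resp.getWriter().println("Started " + testClass);
    }
}
```

A test would then read its values via TestLauncherServlet.PARAMS.get().get("environment") instead of hard-coding them.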
What am I missing here?
Thanks in advance!
I've ended up dropping JUnitEE altogether. Life is now better. The stack that now makes this possible is: GitLab, GitLab CI (with Docker), Gradle, and JUnit/TestNG.
I'm now storing my code in GitLab (Enterprise) and using Gradle as a build system. Doing so allows for this:
The included GitLab CI can be configured to host a URL that triggers a GitLab pipeline. Each GitLab pipeline runs in a Docker container.
My GitLab CI config is set up to execute a Gradle command when this trigger URL is POSTed to. The trigger URL can carry a variable number of custom variables, which GitLab turns into environment variables.
My project is now a Gradle project, so when my GitLab trigger is POSTed to, I'm using Gradle's test filtering to specify which tests to execute (e.g. running `./gradlew my-test-subproj:test` with a filter in build.gradle that reads `System.getenv('TARGETED_TESTS')`).
I POST the trigger URL for my tests (e.g. http://myGitLab.com/my-project/trigger?TARGETED_TESTS=com.myapp.feature1.tests), and GitLab CI spins up a Docker container to run the matching ones. Of course, with this approach I can set whatever variables I need, and they can be read at any level: GitLab CI, Gradle, or the test/test-framework itself.
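If it helps anyone, firing that trigger from code instead of curl is only a few lines of Java (a sketch; the project id, token, and branch are placeholders, and it assumes GitLab's v4 pipeline-trigger API):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class TriggerPipeline {
    public static void main(String[] args) throws Exception {
        // Placeholder project URL and trigger token; substitute your own.
        URL url = new URL("https://myGitLab.com/api/v4/projects/42/trigger/pipeline");
        String body = "token=TRIGGER_TOKEN"
                + "&ref=master"
                + "&variables[TARGETED_TESTS]="
                + URLEncoder.encode("com.myapp.feature1.tests", "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();
        System.out.println("GitLab responded: " + conn.getResponseCode());
    }
}
```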
This approach seems highly flexible and robust enough to do what I need it to, leaving each of my many teams to configure and handle the project to their specific needs without being overly prescriptive.
I have developed a micro-framework in Java which does the following:
The full list of test cases lives in an MS Access database, along with the test data for the application under test.
I have created multiple classes, each with multiple methods. Each of these methods represents a test case.
My framework reads the list of test cases marked for execution from Access and dynamically decides which class/method to execute via reflection (sketched below).
The framework has methods for sendKeys, click, and all the other generic operations. It takes care of reporting in Excel.
All this works fine without any issue.
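Roughly, the dispatch works like this (a simplified sketch; the class and method names are hypothetical):

```java
import java.lang.reflect.Method;

public class TestDispatcher {

    /**
     * Dispatches a test case read from the Access DB, e.g.
     * className = "com.myfw.tests.LoginTests", methodName = "validLogin".
     * The real framework's naming and error handling will differ.
     */
    public void run(String className, String methodName) throws Exception {
        Class<?> testClass = Class.forName(className);
        Object instance = testClass.newInstance();
        Method testMethod = testClass.getMethod(methodName);
        testMethod.invoke(instance);
    }
}
```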
Now I am looking to run the test cases across multiple machines using Grid. I have read on many sites that we need a framework like TestNG for this, but I hope it is possible to integrate Grid into my own framework. I have read many articles and e-books, but none of them explain the coding logic for this.
I will be using only Windows 7 with IE. I don't need cross-browser/OS testing.
I can make any changes to the framework to accomplish this. So please feel free.
In the Access DB mentioned above, I will have details about each test case and the machine on which it should run. Currently, users can select the test cases they want to run locally in the Access DB and run them.
How will my methods (test scripts) know which machine they are going to be executed on? What kind of code changes should I make, apart from using RemoteWebDriver and Capabilities?
Please let me know if you need any more information about my code or have any questions. Also, kindly correct me if any of my understanding of Grid is wrong.
How will my methods know which machine they are going to be executed on? You only need to know one machine in a Grid setup: the IP of your hub machine. The hub decides which of the registered nodes to send the request to, depending on the capabilities you specify when instantiating the driver. When you initialize the RemoteWebDriver instance, you need to specify the host (the IP of your hub). I would suggest keeping the hub IP as a configurable property.
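For instance, a minimal sketch of that initialization (the property name and port are just examples):

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DriverFactory {

    // Hub address kept configurable, e.g. passed as -Dgrid.hub=192.168.1.10
    public static WebDriver createDriver() throws Exception {
        String hub = System.getProperty("grid.hub", "localhost");
        DesiredCapabilities caps = DesiredCapabilities.internetExplorer();
        // The hub routes the session to any registered node matching these capabilities.
        return new RemoteWebDriver(new URL("http://" + hub + ":4444/wd/hub"), caps);
    }
}
```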
The real use of Grid is parallel remote execution, so how to make your tests run in parallel is something you need to decide. You can use a framework like TestNG, which provides parallelism with simple settings; you might need to restructure your tests to accommodate it. The other option is to implement the multithreading yourself to trigger your tests in parallel. Based on my experience I would recommend TestNG, since it provides many more capabilities besides parallelism. You need to take care that each driver instance is specific to its thread and not a global variable.
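The usual pattern for that last point is a ThreadLocal holding each thread's driver; a sketch (reusing the createDriver() from the previous snippet):

```java
import org.openqa.selenium.WebDriver;

public class DriverManager {

    // One driver per thread, so parallel tests never share a browser session.
    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<WebDriver>();

    public static void start() throws Exception {
        DRIVER.set(DriverFactory.createDriver()); // from the previous sketch
    }

    public static WebDriver driver() {
        return DRIVER.get();
    }

    public static void stop() {
        WebDriver d = DRIVER.get();
        if (d != null) {
            d.quit();
            DRIVER.remove();
        }
    }
}
```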
All tests can hit the hub and the hub can take care of the rest.
It is important to remember that Grid does not execute your tests in parallel for you; it is the job of your framework to divide tests across multiple threads and collate the results. It is also key to realise that when running on Grid, the test script still executes on the machine the test was started from. Grid provides a REST API to open and interact with browsers, so your test uses that API rather than opening a browser locally. Any other non-Selenium code executes in the context of the original machine, not the machine where the browser was opened (e.g. file-system access does not happen where the browser is running). Any use of static classes and globals in your framework may also cause issues, as each test will access them concurrently. Your code must be thread safe.
Hopefully this hasn't put you off using Grid. It is an awesome tool and really easy to use; it is the parallel execution that is hard, and frameworks such as TestNG provide that out of the box.
Good luck with your framework.
I am coding an intricate method in a Spring controller that takes request.getParameterMap() as its input. In developing the method iteratively, each time I make a tweak I have to redeploy and then go through the steps on the web form.
That process can take minutes, just to tweak a small code change.
Are there any tricks or methods to speed up this process? All I really need is the input from request.getParameterMap(). Can I serialize that Map data somehow and re-use it?
I am using Netbeans, if that is relevant.
In my experience the best approach is to set up a JUnit test which doesn't use the web server at all, but just instantiates the controller, calls the method, and checks the result.
Since your controller wasn't written from the ground up for this kind of approach, it might be quite some work to get this going at this stage. If you post the method in question, we might be able to help with this.
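As an illustration, something along these lines, using spring-test's MockHttpServletRequest to fake the parameter map (the controller here is a placeholder standing in for yours):

```java
import static org.junit.Assert.assertEquals;

import java.util.Map;

import org.junit.Test;
import org.springframework.mock.web.MockHttpServletRequest;

public class MyControllerTest {

    // Placeholder standing in for the real controller under test.
    static class MyController {
        String handle(Map<String, String[]> params) {
            return params.get("firstName")[0];
        }
    }

    @Test
    public void processesParameterMap() {
        // Build the same parameter map the web form would produce.
        MockHttpServletRequest request = new MockHttpServletRequest();
        request.addParameter("firstName", "Ada");
        request.addParameter("answers", "yes", "no"); // multi-valued parameter

        String result = new MyController().handle(request.getParameterMap());
        assertEquals("Ada", result);
    }
}
```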
The next best thing is setting up an integration test, which starts the application server and executes the request (possibly through the actual web GUI, using Selenium or something similar).
That is still a lot of work, but the difficulties depend less on your current way of working.
As a final workaround, you can try to make the round trip for a manual test faster. There might be IDE-dependent possibilities, so you would have to let us know which IDE you use.
I haven't tested it, but many people praise JRebel for this kind of thing, so you might want to give it a try.
If you don't want to fill in the web form again and again, try JMeter (a free load-testing tool).
Create a test plan -> add a thread group and set the number of threads to 1 -> add an HTTP Request sampler -> set the method to POST and add the POST parameters. Once everything is set up, fire the request.
Please check the link below for reference:
http://community.blazemeter.com/knowledgebase/articles/65142-the-new-http-sampler-in-jmeter-2-6
I've worked with JMeter a little before and have just downloaded JMeter 2.7.
Our web application has a questionnaire that each person fills out. Like most questionnaires, the questions that show up vary depending on answers to previous questions, so there are multiple paths and very rarely does one person see all of the questions.
What I'd like to do is create a control file specifying a group of questionnaires. The test will load it, log those people into the system, and fill out a questionnaire, checking the path taken and the results at the end to make sure the answers were stored properly.
I would like to have 25 simultaneous users of this. Eventually I'd like to have a few hundred.
How do I get started setting all of this up in JMeter? I don't mean a walkthrough; I'm already a little familiar with a number of the JMeter components. Which components would I use to solve this problem, and in what order?
Thanks.
First of all, I recommend upgrading to the latest version of JMeter.
To start, every test needs a Thread Group (right-click on the test plan to add one).
Then you would specify the number of users/threads as 25 by clicking on your Thread Group and filling in the number-of-threads field.
Since you're dealing with the web, you would add an HTTP Request sampler to your Thread Group (don't be confused if you see many more samplers; JMeter can be extended with pretty much anything you need).
Then, after making some web requests, you would validate them using, for example, a Response Assertion.
I could go on for a long time, really. The JMeter documentation is somewhat poor in my opinion, but it's a great tool.
Without any specific questions this should be enough to get you started.
I'm writing a monitoring service for our EC2-based cluster. Its task will be: connect via HTTP/S to each of our events servers every X ms, verify they are alive, rest.
I need a toolkit that can perform the connect test itself and report success or failure. I've tried to do this with Apache HttpClient, but I'm getting too many false positives: reported failures that did not actually happen. I've also looked at JMeter, which at first looked quite promising, but after downloading a 15 MB file with ~25 third-party JARs it started to feel like huge overkill.
The requirement is simple: check that the tested node replies correctly to an HTTP GET request within a defined time frame.
Could you suggest a library for this? It is crucial to keep the false-positive rate to a bare minimum, because hmmm... well, a false positive means our processing stops until a broken node is examined (a no-no indeed :)
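For reference, the check I have in mind is roughly this (a sketch in plain Java; the host and timeout values are placeholders, and the single retry is just one way to filter out transient hiccups):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class NodeCheck {

    /**
     * Returns true if the node answers an HTTP GET with 200 within the
     * timeout. One retry filters out transient network hiccups, which
     * are a common source of false positives.
     */
    public static boolean isAlive(String url, int timeoutMs) {
        for (int attempt = 0; attempt < 2; attempt++) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setConnectTimeout(timeoutMs);
                conn.setReadTimeout(timeoutMs);
                if (conn.getResponseCode() == 200) {
                    return true;
                }
            } catch (Exception e) {
                // fall through and retry once
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isAlive("http://events.example.com/health", 2000));
    }
}
```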
Thank you,
Maxim.
For something in a Unix environment (which I'm guessing you are using, since you mention Apache), try Monit: http://mmonit.com/monit/
You can use Monit to make requests to your services, expect certain content, and then create alerts based on what it thinks the state of the service is. Here's an example of a config file that can be used to monitor Apache: http://mmonit.com/wiki/Monit/ConfigurationExamples#apache
You can install Monit on each of your boxes and then use M/Monit to keep an eye on all of your monitored boxes from one place.