Modularizing JMeter tests? - java

We are looking to migrate our testing framework over to JMeter. We have 50+ test cases, each of them with repeating actions like logging in and logging out, for example. How can I modularize my approach?
What I'm looking for specifically is an "Add test item from file" feature so that I could, for example, add the login code.
We also have things like connection IDs that need to be passed on every request. Is there any way JMeter can automatically replace all occurrences of one with a JMeter variable? At the moment the proxy recorder records the actual connection string, and we have to manually replace that with ${connectionID}. Is there a better way?

This works fine for me.
Make a new thread group at the bottom of the test plan and put a Simple Controller in it. Inside the Simple Controller, put the code you want to repeat. I use two Simple Controllers, but one is actually a DB test-case suite. While keeping whatever is inside the thread group enabled, make sure to disable the thread group itself, or else it will execute again on its own.
Now, in any particular test case, add User Parameters and a Module Controller. The Module Controller can point to the Simple Controller section(s) you made before. Give the Simple Controller ${variable} placeholders, then override them inside the particular test you are running by setting the variables in the User Parameters. Thus you get different variables and tests with the same suite.
I put a Simple Controller inside the Simple Controller to add lengthy db tests. This ends up as
Thread Group > Simple Controller > Simple Controller > JDBC Request. All are renamed.
You can select different ones in the Module Controller inside the tests. I have about six right now but this gets repeated dozens of times.
This is all with a stock JMeter 2.3. If you are in an environment where you can't install the plugins, this will work fine. I've never tried them.
HTH

As far as automatically replacing the connection IDs, there is not, to my knowledge, a way to do that via the GUI. However, the test scripts are simple XML files and so it would be very easy to write a sed or awk script to do that replacement for you.
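For example, the replacement that sed or awk would perform on the .jmx XML can be sketched in plain Java; the recorded literal ABC123-conn and the variable name connectionID below are made-up placeholders:

```java
public class JmxRewrite {

    // Replace every occurrence of the recorded literal (e.g. the connection ID
    // the proxy recorder captured) with a JMeter variable reference.
    static String substitute(String jmx, String recordedValue, String varName) {
        return jmx.replace(recordedValue, "${" + varName + "}");
    }

    public static void main(String[] args) {
        // "ABC123-conn" stands in for whatever literal the recorder wrote out.
        String recorded = "<stringProp name=\"Argument.value\">ABC123-conn</stringProp>";
        System.out.println(substitute(recorded, "ABC123-conn", "connectionID"));
        // To rewrite a real plan on disk, read the .jmx with
        // java.nio.file.Files.readAllBytes, run substitute(), and write it back.
    }
}
```

Since a .jmx file is just text, this is safe as long as the recorded value is unique enough not to collide with other content.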
As far as the "add test item from file" part, in 2.6 (I'm not sure about other versions, as I haven't used them) there is a logic controller called "Include Controller" that can load test snippets. There is also the ability to save snippets of test code, called "test fragments", to their own .jmx files.
If you start a new test plan, right-click on Test Plan, then Add -> Test Fragment -> Test Fragment. This will add the container; you can then add your other requests to it and use this chunk inside the aforementioned Include Controller.
If you are able to use the latest version, or if the prior versions support this, this may be a simpler option than writing your own plugin.

By using the JMeter elements below, we can modularize the test scripts:
Test Fragment
Module Controller
Parameterized Controller
Include Controller
Please check this for more details & examples.
http://www.testautomationguru.com/jmeter-modularizing-test-scripts/

I know of two options for you:
Module Controller
Parameterized Controller

What I'm looking for specifically is a "Add test item from file" so that I could for example, add the login code.
Sounds like you might want to encapsulate some of that repeated logic in your own custom Samplers or Config Elements. There is a guide to writing plugins to JMeter on the project page.
This is the approach that we have taken on my current team for handling JMeter simulating requests of a custom RPC format.

One thing you can do is run the JMeter scripts in non-GUI mode. You can specify the JMeter test scripts to run and put a batch of them into a .bat file, like:
@echo off
jmeter -n -t MyTestPlan1.jmx
jmeter -n -t MyTestPlan2.jmx
Another way, as @matt suggested, is to write a plugin to get what you need.
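That batch loop could also be driven from Java via ProcessBuilder; a minimal sketch, where the jmeter binary being on the PATH and the .jtl results-log name are assumptions:

```java
import java.util.Arrays;
import java.util.List;

public class JMeterBatch {

    // Build the non-GUI command line for one plan; assumes "jmeter" is on the
    // PATH and that a .jtl results log is wanted alongside each plan.
    static List<String> command(String plan) {
        return Arrays.asList("jmeter", "-n", "-t", plan,
                "-l", plan.replace(".jmx", ".jtl"));
    }

    public static void main(String[] args) {
        for (String plan : new String[] {"MyTestPlan1.jmx", "MyTestPlan2.jmx"}) {
            List<String> cmd = command(plan);
            System.out.println(String.join(" ", cmd));
            // To actually launch it:
            //   new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }
    }
}
```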

Related

Skip Test cases based on URL

I have different projects for web ordering, using a POM framework with TestNG and Maven. I have multiple test cases that I have automated in different projects depending upon the flows.
Flows can be:
Takeout for one client
Steps:
Add Item
Checkout
Takeout for another client
Steps:
Select Location
Add Item
Checkout
As we can see, there is an additional step to select the location for the second client. So, what I want to do is have one project that contains test cases for both clients, separated into different folders and packages, and run them using testng.xml. If I add all the test cases to testng.xml and run it, I will have many failed test cases, which I don't want. So is there any way to run only specific test cases depending on the URL or the client's name? There is a way to do it with IAnnotationTransformer, but I can't think of the logic. Help will be much appreciated.
Thank you in advance.
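One way to sketch the missing logic: an IAnnotationTransformer ultimately just needs a yes/no decision per test for the current client. The client and test names below are hypothetical placeholders:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ClientTestFilter {

    // Hypothetical mapping of client name -> test cases that apply to it.
    static final Map<String, Set<String>> TESTS_BY_CLIENT = new HashMap<>();
    static {
        TESTS_BY_CLIENT.put("clientA",
                new HashSet<>(Arrays.asList("addItem", "checkout")));
        TESTS_BY_CLIENT.put("clientB",
                new HashSet<>(Arrays.asList("selectLocation", "addItem", "checkout")));
    }

    // The decision the transformer would apply: keep a test only if it
    // belongs to the client chosen at runtime (e.g. via -Dclient=clientA).
    static boolean shouldRun(String testName, String client) {
        return TESTS_BY_CLIENT.getOrDefault(client, Collections.emptySet())
                .contains(testName);
    }
}
```

Inside transform(), this decision would translate to calling annotation.setEnabled(false) whenever shouldRun(...) is false for the client name read from a system property.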

JUnit report generation for individual testcase

I am making a framework that internally uses JUnit and REST Assured. This framework will have 4 @Test methods for CRUD operations. Whenever the user wants to do any operation, he will call only that particular Test method. But at the end of each operation (say GET or DELETE or any other), it should generate the report.
I tried using the surefire-report plugin. As I have read, that will generate a report only when we build the project (running all the Test methods).
Is there any mechanism that fulfills my requirement of generating a report for an individual run as well?
Execution will be like this: the final output will be a jar with individual CRUD facilities.
API.execute(GET, end_point_name);
API.execute(POST, end_point_name, data);
The GET and POST test methods are called respectively for the above calls. A report should be generated for both test cases in a normal run as a Java application.
There are 3 solutions to your problem:
Write your own logger statements and do proper logging of events. You can log at DEBUG, INFO, etc. levels for better understanding and more control.
ExtentReports is another way to go:
http://www.ontestautomation.com/creating-html-reports-for-your-selenium-tests-using-extentreports/ - the link above provides a detailed way of using it.
You can also create a separate testng.xml file. Maintaining a separate suite file this way will, with the help of Surefire, internally create separate reports.
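The first option (your own logger statements) could be sketched as a recorder that writes a line the moment each operation finishes, independent of a full build; the file layout and line format here are assumptions:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

public class OperationReport {

    // Append one line per CRUD operation as soon as it finishes, so a report
    // exists even when only a single @Test method was invoked.
    static void record(Path report, String operation, boolean passed) {
        String line = Instant.now() + " " + operation + " "
                + (passed ? "PASS" : "FAIL") + System.lineSeparator();
        try {
            Files.write(report, line.getBytes(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Each Test method would call record(...) in a finally block (or a JUnit rule/listener would), so a single GET or DELETE run still leaves a report behind.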

How to pass inputs to Java Selenium tests when triggered via a web endpoint?

I seek to find a highly scalable and flexible solution for kicking off Selenium tests from a remote machine, preferably via a web-based endpoint, where I can pass some data through to my tests.
I've tried using both jUnitEE and TestNGEE - plus a ServletFilter - trying to get what I want but can't quite hit all my requirements so I can't help but think that I'm trying to go about it completely the wrong way...someone has to have solved this before...I just can't figure out how...
What I'd like to have happen:
Someone wanting to execute a java Selenium test navigates to a webpage of mine. Potentially this is a jUnitEE or TestNGEE servlet, perhaps it's something else.
User selects a Selenium test to run from a list of available tests, plus a couple of values from form elements on the page. Let's say that it's 2 string values - one for environment and one for username.
User presses the Run Test button.
The server takes the selected test and starts its execution, providing it with the environment and the username values specified by the user.
Additional requirements:
All activities should be thread safe. Data should not get criss-crossed between tests, even when multiple users initiate the same test at the same time.
Notes:
While I'd be happy to have this working even with just one parameter, the hope is that the user would be able to pass a list of any number of arbitrary key/value pairs which are then made available to the executed test, potentially even a csv or other type of data file, or a web endpoint from which to retrieve the data.
Example:
User hits the endpoint: http://testLauncherUI.mySite.com/myServlet?test=com.mySite.selenium.flow1&environment=testEnvironment1.mySite.com&username=userNumber1&otherRandomKey=randomValue
testLauncher's myServlet triggers the contained test matching com.mySite.selenium.flow1 and that test in turn navigates to 'http://testEnvironment1.mySite.com', and proceeds to enter the 'userNumber1' text into the input box.
A second user can hit the same servlet while the prior test is still executing, but with different (or same) params: http://testLauncherUI.mySite.com/myServlet?test=com.mySite.selenium.flow1&environment=testEnvironment2.mySite.com&username=userNumber1&otherRandomKey=randomValue
testLauncher's myServlet kicks off another thread, running the same test, but against the specified site: 'http://testEnvironment2.mySite.com', and proceeds to enter the 'userNumber1' text into the input box.
What am I missing here?
Thanks in advance!
I've ended up dropping JUnitEE altogether. Life is now better. My stack that now makes this possible is: GitLab, GitLabCI (w/Docker), Gradle, Junit/TestNG
I'm now storing my code in GitLab (Enterprise) and using Gradle as a build system. Doing so allows for this:
The included GitLabCI can be configured to host a URL that triggers a GitLab pipeline. Each GitLab pipeline runs in a Docker container.
My GitLabCI config is setup to execute a gradle command when this trigger (URL) is POSTed to. The trigger URL can contain a variable number of custom variables which are turned into Environment Variables by GitLab.
My project is now a Gradle project, so when my GitLab trigger is POSTed to, I use Gradle's test filtering to specify which tests to execute (e.g. `./gradlew :my-test-subproj:test --tests "$TARGETED_TESTS"`, where the variable comes from the environment GitLabCI sets up).
I POST the URL for my tests (e.g. http://myGitLab.com/my-project/trigger?TARGETED_TESTS=com.myapp.feature1.tests), and a docker container spins up from GitLabCI to run the matching ones. Of course, with this approach, I can set whatever variables that I need to and they can be read in at any level - GitLabCI, Gradle, or the test/test-framework itself.
This approach seems highly flexible and robust enough to do what I need it to, leaving each of my many teams to configure and handle the project to their specific needs without being overly prescriptive.
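For anyone staying with an in-process launcher (like the original servlet attempt), the thread-safety requirement - parameters must never criss-cross between concurrently running tests - can be sketched with a ThreadLocal parameter holder:

```java
import java.util.HashMap;
import java.util.Map;

public class RunParameters {

    // Each executing test thread sees only the parameters it was started
    // with, so concurrent runs cannot criss-cross data.
    private static final ThreadLocal<Map<String, String>> PARAMS =
            ThreadLocal.withInitial(HashMap::new);

    static void set(String key, String value) { PARAMS.get().put(key, value); }

    static String get(String key) { return PARAMS.get().get(key); }

    // Call when a test finishes - important on pooled servlet threads,
    // which are reused across requests.
    static void clear() { PARAMS.remove(); }
}
```

A servlet would copy each request's query parameters into the holder before starting the test, and the test would read them via get() rather than from shared static fields.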

How to use existing Java API tests inside Jmeter

We have existing Java tests that individually test our back end. These tests are pretty elaborate and run as a single user. I would like to know if I can simply take these existing tests/classes/libraries/jars etc. and just wrap JMeter around them to execute them as JMeter tests from the command line (i.e. Maven).
Maybe add in some listeners and other JMeter components, but the tests are perfect the way they are except that they are not multi-threaded and do not have the reporting functions that JMeter has.
Can this be done using JSR223?
What if my libraries are located elsewhere? How can I use them in the JMeter project?
You have at least 3 options:
Implement JavaSamplerClient by extending AbstractJavaSamplerClient; this class will call your class. Create a jar from it and put it in jmeter/lib/ext, add its dependencies to the jmeter/lib folder, and you can then use a Java Request sampler and select your class.
Use JSR223 Sampler + Groovy wrapper for your class
Use JUnit Sampler if you have some JUnit classes

Generating Unit Tests Automatically

I have a web tool which when queried returns generated Java classes based upon the arguments in the URL.
The classes we retrieve from the webserver change daily and we need to ensure that they still can process known inputs.
Note these classes do not test the webserver, they run locally and transform xml into a custom format. I am not testing the webserver.
These classes must then be placed in specific package structure compiled and run against a known set of input data and compared against known output data.
I would like to do this automatically each night to make sure that the generated classes are correct.
What is the best way to achieve this?
Specifically whats the best way to:
retrieve the code from a webserver and place it in a file
compile the code and then call it
I'm sure a mix of JUnit and Ant will be able to achieve this, but is there a standard solution/approach for this?
First up, to answer your question: No, I do not think that there is a standard approach for this. This sounds like quite an unusual situation ;-)
Given that, what I would do is to write your JUnit tests to all call a class GeneratedCode, and then, once you download the code, rename the class to GeneratedCode, compile, and run your unit tests.
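The rename-then-compile step could also be driven in-process with the javax.tools compiler API rather than Ant; a minimal sketch, assuming the generated class exposes a static no-arg method (GeneratedCode and transform are placeholder names), and requiring a JDK rather than a plain JRE:

```java
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompileAndRun {

    // Compile downloaded source under a fixed class name, load it, and
    // invoke a known static no-arg method on it.
    static Object compileAndCall(String className, String source, String methodName) {
        try {
            Path dir = Files.createTempDirectory("generated");
            Path src = dir.resolve(className + ".java");
            Files.write(src, source.getBytes());

            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
            if (compiler == null) {
                throw new IllegalStateException("a JDK is required (no system compiler)");
            }
            int rc = compiler.run(null, null, null, src.toString(), "-d", dir.toString());
            if (rc != 0) throw new IllegalStateException("compilation failed: " + className);

            try (URLClassLoader loader =
                         new URLClassLoader(new URL[] {dir.toUri().toURL()})) {
                Class<?> cls = loader.loadClass(className);
                Method m = cls.getMethod(methodName);
                return m.invoke(null);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The JUnit tests would then compare compileAndCall's output against the known expected data for each input.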
You have the same goal as continuous integration ;-)
Maybe a bit overkill for this simple task, but this is the standard way to get something, compile something and test something regularly.
E.g., you could try Hudson.
You should be creating a "mock" interface for your web service that (a) behaves the same way and (b) returns a known answer.
You should then do some other integration testing with the live web service where a person looks at the results and decides if they worked.
Can you only test the generated classes after they were published on the webservice? Do you have no way to test during or just after the generation?
One idea, if the generated code isn't to complex, is to load it via the GroovyClassLoader and to run your tests against it. See this page for examples.
