I have different projects for web ordering. I have a POM framework with TestNG and Maven, and multiple test cases automated in different projects depending on the flows.
Flows can be:
Takeout for one client
  Steps:
    1. Add Item
    2. Checkout
Takeout for another client
  Steps:
    1. Select Location
    2. Add Item
    3. Checkout
As we can see, there is an additional step to select a location for the second client. What I want is a single project containing the test cases for both clients, kept in separate folders and packages, and run via testng.xml. If I add all the test cases to testng.xml and run it, I will end up with many failed test cases, which I don't want. So is there a way to run only specific test cases depending on the URL or the client's name? There is a way to do it with IAnnotationTransformer, but I can't think of the logic. Help will be much appreciated.
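For reference, a minimal sketch of what the IAnnotationTransformer logic could look like. It assumes each test method is tagged with a TestNG group named after its client (e.g. "clientA", "clientB") and that the target client is passed in as a system property; both conventions are assumptions, not the only way to do this:

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import java.util.Arrays;
import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class ClientFilterTransformer implements IAnnotationTransformer {
    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        // Hypothetical convention: run with -Dclient=clientA (or clientB).
        String client = System.getProperty("client");
        if (client == null || client.isEmpty()) {
            return; // no client specified: leave everything enabled
        }
        // Disable any test not tagged with the current client's group,
        // so it is skipped instead of failing.
        if (!Arrays.asList(annotation.getGroups()).contains(client)) {
            annotation.setEnabled(false);
        }
    }
}

Register the transformer as a <listener> in testng.xml; tests that don't belong to the current client are then silently disabled rather than reported as failures.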
Thank you in advance.
Related
I'm looking for a highly scalable and flexible solution for kicking off Selenium tests from a remote machine, preferably via a web-based endpoint, where I can pass some data through to my tests.
I've tried using both JUnitEE and TestNGEE, plus a ServletFilter, trying to get what I want, but I can't quite hit all my requirements. I can't help but think that I'm going about this completely the wrong way; someone has to have solved it before, I just can't figure out how.
What I'd like to have happen:
Someone wanting to execute a Java Selenium test navigates to a webpage of mine. Potentially this is a JUnitEE or TestNGEE servlet; perhaps it's something else.
User selects a Selenium test to run from a list of available tests, plus a couple of values from form elements on the page. Let's say that it's 2 string values - one for environment and one for username.
User presses the Run Test button.
The server takes the selected test and starts its execution, providing it with the environment and the username values specified by the user.
Additional requirements:
All activities should be thread safe. Data should not get criss-crossed between tests, even when multiple users initiate the same test at the same time.
Notes:
While I'd be happy to have this working with even just one parameter, the hope is that the user could pass any number of arbitrary key/value pairs, which are then made available to the executed test; potentially even a CSV or other data file, or a web endpoint from which to retrieve the data.
Example:
User hits the endpoint: http://testLauncherUI.mySite.com/myServlet?test=com.mySite.selenium.flow1&environment=testEnvironment1.mySite.com&username=userNumber1&otherRandomKey=randomValue
testLauncher's myServlet triggers the contained test matching com.mySite.selenium.flow1 and that test in turn navigates to 'http://testEnvironment1.mySite.com', and proceeds to enter the 'userNumber1' text into the input box.
A second user can hit the same servlet while the prior test is still executing, but with different (or same) params: http://testLauncherUI.mySite.com/myServlet?test=com.mySite.selenium.flow1&environment=testEnvironment2.mySite.com&username=userNumber1&otherRandomKey=randomValue
testLauncher's myServlet kicks off another thread, running the same test, but against the specified site: 'http://testEnvironment2.mySite.com', and proceeds to enter the 'userNumber1' text into the input box.
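For illustration, a launcher servlet along these lines might look like the sketch below. It is only a sketch under stated assumptions: the class name is made up, and the query parameters are copied into a per-run TestNG suite (rather than into shared system properties) so concurrent runs stay isolated:

import java.io.IOException;
import java.util.Collections;
import java.util.Map;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class TestLauncherServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String testClass = req.getParameter("test");
        XmlSuite suite = new XmlSuite();
        suite.setName("launched-suite");
        XmlTest test = new XmlTest(suite);
        test.setXmlClasses(Collections.singletonList(new XmlClass(testClass)));
        // Copy every query parameter into this run's suite so the test can
        // read them via @Parameters / ITestContext, keeping each run isolated.
        for (Map.Entry<String, String[]> e : req.getParameterMap().entrySet()) {
            test.addParameter(e.getKey(), e.getValue()[0]);
        }
        TestNG testng = new TestNG();
        testng.setXmlSuites(Collections.singletonList(suite));
        new Thread(testng::run).start(); // fire-and-forget, one thread per request
        resp.getWriter().println("Started " + testClass);
    }
}

Passing the values as suite-level parameters, instead of JVM-wide state, is what keeps two users running the same test at the same time from criss-crossing data.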
What am I missing here?
Thanks in advance!
I've ended up dropping JUnitEE altogether. Life is now better. The stack that now makes this possible is: GitLab, GitLab CI (with Docker), Gradle, and JUnit/TestNG.
I'm now storing my code in GitLab (Enterprise) and using Gradle as a build system. Doing so allows for this:
The included GitLab CI can be configured to expose a URL that triggers a GitLab pipeline. Each pipeline runs in a Docker container.
My GitLab CI config is set up to execute a Gradle command when this trigger URL is POSTed to. The trigger URL can carry a variable number of custom variables, which GitLab turns into environment variables.
My project is now a Gradle project, so when my GitLab trigger is POSTed to, I use Gradle's test filtering to choose which tests to execute (e.g. `./gradlew :my-test-subproj:test --tests "$TARGETED_TESTS"`, reading the targeted tests from the environment variable that GitLab sets).
I POST the URL for my tests (e.g. http://myGitLab.com/my-project/trigger?TARGETED_TESTS=com.myapp.feature1.tests), and a docker container spins up from GitLabCI to run the matching ones. Of course, with this approach, I can set whatever variables that I need to and they can be read in at any level - GitLabCI, Gradle, or the test/test-framework itself.
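For illustration, POSTing to that trigger from Java could look something like the sketch below; the project ID, token, ref, and variable values are placeholders (GitLab's v4 API exposes triggers at /projects/:id/trigger/pipeline and accepts pipeline variables as variables[KEY] form fields):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PipelineTrigger {
    public static void main(String[] args) throws Exception {
        // Placeholder project ID (42), token, and branch name.
        URL url = new URL("https://myGitLab.com/api/v4/projects/42/trigger/pipeline");
        String body = "token=MY_TRIGGER_TOKEN&ref=master"
            + "&variables[TARGETED_TESTS]=com.myapp.feature1.tests";
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("GitLab responded: " + conn.getResponseCode());
    }
}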
This approach seems highly flexible and robust enough to do what I need it to, leaving each of my many teams to configure and handle the project to their specific needs without being overly prescriptive.
I've got a bunch of Selenium tests in my project and I'd love running them with IDEA. I need to pass certain VM arguments (where my firefox binary is located etc.) and I don't want to create a run config for every Test class that I have.
There are also too many tests to just run all every time.
So, does anyone know if it's possible to create a "parent" run config which would be used for all tests in a certain path whether I run them together or just a single one?
Now I feel silly :P
Run Configurations has a Defaults tab where you can set default values (VM arguments included) for JUnit run configurations.
I have a Java application with function tests that uses a huge amount of data. The tests are run from TeamCity. There are several agents running the tests. I'd like to separate the data into another project that will basically only do an update from version control and store the data on a local machine running the agent.
Then I need every agent to know where the data is located on its local machine and pass that location as a parameter to the main build.
Is there a way to configure the builds this way?
The motivation is that cleaning a work directory removes this data when it is not kept separate. Cleaning is sometimes necessary, but never because of the test data.
You could create a separate build that the main build depends on, and use artifact dependencies to fetch the files you need. Alternatively, you can configure a custom checkout location where the agent clones the repository; since you then know where it is, you can pass it as a parameter to your other builds.
We are looking to migrate our testing framework over to JMeter. We have 50+ test cases, each with repeating actions such as logging in and logging out. How can I modularize my approach?
What I'm looking for specifically is a "Add test item from file" so that I could for example, add the login code.
We also have things like connection IDs that need to be passed on every request. Is there any way JMeter can automatically replace all occurrences of one with a JMeter variable? At the moment the proxy recorder records the actual connection string, and we have to manually replace it with ${connectionID}. Is there a better way?
This works fine for me.
Make a new Thread Group at the bottom of the test plan and put a Simple Controller in it. Inside the Simple Controller, put the code you want to repeat. I use two Simple Controllers, but one is actually a DB test-case suite. Keep everything inside the Thread Group enabled, but make sure the Thread Group itself is disabled, or else it will execute again on its own.
Now, in any particular test case, add User Parameters and a Module Controller. The Module Controller can point to the Simple Controller section(s) you made before. Have the Simple Controller use ${variable} placeholders, then override them inside the particular test you are running by setting the variables in the User Parameters. Thus you get different variables and tests out of the same suite.
I put a Simple Controller inside the Simple Controller to add lengthy DB tests. This ends up as:
Thread Group > Simple Controller > Simple Controller > JDBC Request (all renamed).
You can select different ones in the Module Controller inside the tests. I have about six right now but this gets repeated dozens of times.
This is all with stock JMeter 2.3. If you are in an environment where you can't install plugins, this will work fine; I've never tried them myself.
HTH
As far as automatically replacing the connection IDs, there is not, to my knowledge, a way to do that via the GUI. However, the test scripts are simple XML files, so it would be very easy to write a sed or awk script to do that replacement for you.
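If shell tools aren't available, the same one-off replacement is easy in plain Java; a quick sketch, where the file names and the recorded literal are assumptions:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class JmxRewriter {
    public static void main(String[] args) throws IOException {
        Path in = Paths.get("recorded-plan.jmx");
        Path out = Paths.get("parameterized-plan.jmx");
        String plan = new String(Files.readAllBytes(in), StandardCharsets.UTF_8);
        // Swap the hard-coded recorded value for the JMeter variable reference.
        plan = plan.replace("ABC123-RECORDED-CONNECTION-ID", "${connectionID}");
        Files.write(out, plan.getBytes(StandardCharsets.UTF_8));
    }
}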
As far as the "add test file from here" part, in 2.6 (not sure about other versions, not used them) there is a logic controller called "Include Controller" that can load test snippets.There is also the ability to save snippets of test code called "test fragments" to their own .jmx files.
If you start a new test plan, right click on test plan then add -> test fragment -> test fragment this will add the container, then you can add your other requests in and use this chunk inside the aforementioned Include element.
If you are able to use the latest version, or if the prior versions support this, this may be a simpler option than writing your own plugin.
By using the JMeter elements below, we can modularize test scripts:
Test Fragment
Module Controller
Parameterized Controller
Include Controller
Please check this article for more details and examples:
http://www.testautomationguru.com/jmeter-modularizing-test-scripts/
I know of two options for you:
Module Controller
Parameterized Controller
What I'm looking for specifically is a "Add test item from file" so that I could for example, add the login code.
Sounds like you might want to encapsulate some of that repeated logic in your own custom Samplers or Config Elements. There is a guide to writing JMeter plugins on the project page.
This is the approach my current team has taken to handle JMeter simulating requests in a custom RPC format.
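As a hedged sketch of what such a custom sampler might look like (the JMeter Java Sampler API shown is real, but the login details and parameter names are assumptions for illustration):

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

public class LoginSampler extends AbstractJavaSamplerClient {
    @Override
    public Arguments getDefaultParameters() {
        // Defaults shown in the GUI; override per test plan.
        Arguments args = new Arguments();
        args.addArgument("username", "defaultUser");
        args.addArgument("password", "defaultPass");
        return args;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        // Perform the shared login logic here, e.g. using
        // context.getParameter("username"), then record the outcome.
        result.sampleEnd();
        result.setSuccessful(true);
        return result;
    }
}

Build it into a jar, drop it into JMeter's lib/ext directory, and it becomes selectable from the Java Request sampler in any test plan that needs the shared step.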
One option is to run the JMeter scripts in non-GUI mode. You can specify which test scripts to run and put a batch of them into a .bat file, like:
@echo off
jmeter -n -t MyTestPlan1.jmx
jmeter -n -t MyTestPlan2.jmx
Another way, as @matt suggested, is to write a plugin to get what you need.
In the TDD (Test-Driven Development) process, how do you deal with test data?
Assume a scenario: parsing a log file to extract the needed columns. For a robust test, how do I prepare the test data? And is it appropriate to place such files alongside the test class files?
Maven, for example, uses a convention for folder structures that takes care of test data:
src
  main
    java       <-- java source files of main application
    resources  <-- resource files for application (logger config, etc.)
  test
    java       <-- test suites and classes
    resources  <-- additional resources for testing
If you use Maven for building, place the test resources in the right folder; if you're building with something else, you may still want to use this structure, since it is more than just a Maven convention. In my opinion it's close to 'best practice'.
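For example, a test can then load a file from src/test/resources straight off the test classpath; a small sketch, where the file name sample.log is an assumption:

import static org.junit.Assert.assertNotNull;
import java.io.InputStream;
import org.junit.Test;

public class TestResourceLoadingTest {
    @Test
    public void sampleLogIsOnTheTestClasspath() {
        // Maven copies src/test/resources onto the root of the test classpath.
        InputStream in = getClass().getClassLoader()
                .getResourceAsStream("sample.log");
        assertNotNull("sample.log should be on the classpath", in);
    }
}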
Another option is to mock out your data, eliminating any dependency on external sources. This way it's easy to test various data conditions without maintaining multiple instances of external test data. I then generally use full-fledged integration tests for lightweight smoke testing.
Hard-code them in the tests so that they are close to the tests that use them, making the tests more readable.
Create the test data from a real log file. Write a list of the tests you intend to write, tackle them one by one, and tick each off once it passes.
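A minimal sketch of that hard-coded style, assuming a hypothetical LogParser.extractColumn(line, column) under test (both the parser API and the sample line are made up for illustration):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class LogParserTest {
    // Representative line copied from a real log file.
    private static final String LINE =
        "2016-05-04 12:00:01 INFO  user=alice action=checkout";

    @Test
    public void extractsTheUserColumn() {
        // LogParser is the hypothetical class under test.
        assertEquals("alice", LogParser.extractColumn(LINE, "user"));
    }
}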
getClass().getClassLoader().getResourceAsStream("....xml");
inside the test worked for me. But
getClass().getResourceAsStream("....xml");
didn't work.
Don't know why, but maybe it helps someone else. (Most likely because Class.getResourceAsStream resolves the path relative to the class's package unless it starts with a '/', whereas ClassLoader.getResourceAsStream always resolves from the classpath root.)
When my test data must be an external file - a situation I try to avoid, but can't always - I put it into a reserved test-data directory at the same level as my project, and use getClass().getClassLoader().getResourceAsStream(path) to read it. The test-data directory isn't a requirement, just a convenience. But try to avoid needing to do this; as @philippe points out, it's almost always nicer to have the values hard-coded in the tests, right where you can see them.