I have a Spring Java application that uses Cucumber feature files to run Selenium test cases.
Now I need to parametrize the feature files dynamically, so I am building an application that exposes all the parameters from the feature files in a UI.
Users will modify these parameters. Once modified, the Selenium test cases should run using these new parameter values.
Kindly advise
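One common approach is to keep the user-editable values outside the feature files and load them at runtime inside the step definitions or a Cucumber hook; the feature file then refers to logical names while the UI edits the backing store. A minimal sketch, where the class name and property keys are illustrative rather than from any particular framework:

```java
import java.io.StringReader;
import java.util.Properties;

// Loads user-editable test parameters from a properties source. The UI writes
// the file; the step definitions read it before driving Selenium.
public class TestParameters {
    private final Properties props = new Properties();

    // In a real setup this would read the file the UI writes,
    // e.g. props.load(new FileReader("test-params.properties")).
    public TestParameters(String source) throws Exception {
        props.load(new StringReader(source));
    }

    public String get(String key) {
        return props.getProperty(key);
    }
}
```

With this split, the feature files stay stable and only the properties store changes between runs.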
I am trying to automate creating and running a performance test script. First, I create a UML activity diagram and can export it as an XML file. I have written a Java program that can read this XML file. Now I want to create a test script using a separate properties file that contains configuration details such as host, username, timing, etc. I want to use these two files to generate a performance test script. Which performance test tool is useful and appropriate for my plan? I have also read the JMeter documentation, and it seems JMeter cannot be driven from a script; it is configured manually.
JMeter's .jmx scripts are basically XML files, so you could come up with an XSLT transformation that converts your UML diagram into an XML file which JMeter can consume. Check out build-web-test-plan.jmx, which implements the Building a Web Test Plan user manual chapter.
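The XSLT route can be prototyped with the JDK's built-in processor. In this sketch the element names ("uml", "activity") stand in for whatever your UML export actually contains, and a real stylesheet would emit the full jmeterTestPlan/hashTree structure rather than a bare sampler element:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Applies an XSLT stylesheet to an XML string and returns the transformed XML.
public class UmlToJmx {
    public static String transform(String xml, String xslt) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }
}
```

In practice you would load the exported diagram and the stylesheet from files, then write the result to a .jmx file that JMeter can open.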
Given that you're able to write Java programs, you can create and execute a JMeter test plan programmatically using the JMeter API. Check out the Five Ways To Launch a JMeter Test without Using the JMeter GUI article for general instructions and the jmeter-from-code sample project.
There is also the Taurus tool, which provides the possibility of creating and running a JMeter test plan using a simple declarative YAML syntax.
I am working on a Cucumber Java framework; we have built our test cases using feature files.
I have a task to implement re-run functionality for failed scenarios/features in the framework. We created the test cases feature-wise, meaning each feature file is independent but the scenarios within it are not. If we try to run any single scenario in a feature file it will fail, because the scenario is not independent.
I have tried to implement the re-run functionality, but it supports only scenario re-runs, not feature re-runs (when any scenario in a feature fails).
Is there any way to re-run the whole feature file, instead of just the scenario, if any scenario in that feature file fails?
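One workaround that is sometimes used: post-process the rerun file so it lists whole feature files instead of scenario lines. Cucumber's rerun plugin writes entries like "features/login.feature:12:30" (file plus failing scenario line numbers); stripping everything after the first colon before feeding the list back to the runner makes Cucumber execute the full feature again. A sketch, assuming Unix-style paths without drive-letter colons:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Converts rerun-plugin entries ("path.feature:12:30") into plain feature paths,
// deduplicated and in first-seen order, so the runner re-executes whole features.
public class RerunByFeature {
    public static Set<String> toFeaturePaths(String rerunFileContent) {
        Set<String> features = new LinkedHashSet<>();
        for (String entry : rerunFileContent.split("\\s+")) {
            if (entry.isEmpty()) continue;
            int colon = entry.indexOf(':');
            features.add(colon >= 0 ? entry.substring(0, colon) : entry);
        }
        return features;
    }
}
```

The resulting paths can then be passed to a second runner (or to the CLI's features argument) for the re-run pass.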
I have some Selenium code that repeats in every JMX file I have. How can I package it as a jar and use it in the JMeter WebDriver Sampler? I am using the Beanshell language in the WebDriver Sampler.
For example, I need login and logout steps in every JMX that uses the WebDriver Sampler, and currently I repeat them in each one. How do I keep those login and logout scripts in one place and reuse them? Keeping them as jars would be fine, but how can I do that in JMeter?
Given that the .jar containing your login/logout functions is present in the JMeter classpath, you should be able to call them from the WebDriver Sampler code using a normal import statement (or its equivalent).
One point: don't use the java language option, as it is not real Java; it is the Beanshell interpreter, which has limited support for Java features and poor performance. Since JMeter 3.1 it has been recommended to use groovy for any scripting purposes, so consider migrating at the next available opportunity. Most likely you won't have to change your code.
Also be aware that there is a built-in mechanism in JMeter that helps you avoid code duplication: the Module Controller. Instead of shipping .jars, you can keep separate WebDriver Sampler instances that perform common tasks like login/logout, and call them via a Module Controller wherever required.
I have two separate Java Maven projects: one is my web app itself and the other contains Tellurium + Selenium automation tests for it. (I moved these tests to a separate project because their code doesn't really belong with the web app's code and doesn't use any of its Java classes; I also want to reuse parts of those tests for testing my other web apps.) Therefore, the project where my tests reside knows nothing about my web app except the Tellurium/Selenium configuration files (host name, credentials, browser).
So the question: is there any way to measure code coverage of my web app's backend as it is exercised by the Tellurium/Selenium tests that reside in the separate project?
Thanks in advance. Any help is highly appreciated.
EMMA or Cobertura can instrument your classes so that after the test run they produce a coverage report.
http://emma.sourceforge.net/reference/ch02s03.html
<instr>/instr is EMMA's offline class instrumentor. It adds bytecode
instrumentation to all classes found in an instrumentation path that
also pass through user-provided coverage filters. Additionally, it
produces the class metadata file necessary for associating runtime
coverage data with the original class definitions during coverage
report generation.
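For illustration, offline instrumentation and reporting with EMMA look roughly like the commands below; the paths, filter pattern, and output file names are assumptions you would adjust to your own build layout:

```shell
# Instrument compiled classes in place (filter keeps only your app's packages):
java -cp emma.jar emma instr -m overwrite -ip target/classes -filter +com.mywebapp.*

# Deploy the instrumented classes (with emma.jar on the server classpath),
# run the Selenium/Tellurium suite, then merge metadata and runtime data:
java -cp emma.jar emma report -r html -in coverage.em,coverage.ec
```

The key point for your setup is that instrumentation happens in the web app's build/deployment, so the test project needs no knowledge of it; the coverage data file is produced on the server side.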
I've got Java software that reads settings from properties files and a database, reads input files from one directory, and creates output files in another directory. It also modifies the database.
I need to improve testing of this software from manual to automatic. Currently the user copies some files to the input directory, executes the program, and inspects the files in the output directory. I'd like to automate this down to just running the tests and inspecting a test result file. The test platform would have expected result file(s) for each input file. The test results should be readable by people who are not programmers :)
I don't want to do this in a jUnit test in the build phase because the tests have to be executed against both development and test environments. Are there any tools/platforms that could help me with this, or should I build this kind of thing from scratch?
I'd recommend using the TestNG testing framework.
It is a functional testing framework which provides functionality similar to jUnit, but has a number of features specific to functional testing, such as test dependencies, groups, etc.
The test results should be readable by
people that are not programmers :)
You can implement your own test listener and use it to build a custom test report.
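Whatever framework drives the run, the core check the question describes is comparing each expected file against the produced output and collecting plain-English results that non-programmers can read. A minimal sketch (directory layout and message wording are assumptions):

```java
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Compares every file in expectedDir against its same-named counterpart in
// actualDir and returns one human-readable line per file.
public class OutputComparator {
    public static List<String> compare(Path expectedDir, Path actualDir) throws Exception {
        List<String> report = new ArrayList<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(expectedDir)) {
            for (Path expected : files) {
                Path actual = actualDir.resolve(expected.getFileName());
                if (!Files.exists(actual)) {
                    report.add(expected.getFileName() + ": MISSING from output");
                } else if (Arrays.equals(Files.readAllBytes(expected), Files.readAllBytes(actual))) {
                    report.add(expected.getFileName() + ": OK");
                } else {
                    report.add(expected.getFileName() + ": DIFFERS from expected");
                }
            }
        }
        return report;
    }
}
```

A TestNG listener could then write these lines to the custom report file mentioned above.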