Can we Customize Cucumber Test Suite at run time? - java

I have a Cucumber test runner class in which I configure my test suite like below:
    @CucumberOptions(
        features = {"Feature_Files/featues"},
        glue = {"com.automation.stepdef"},
        monochrome = true,
        dryRun = false,
        plugin = {"html:target/cucumber-html-report"},
        tags = {"@Startup"}
    )
If I wish to customize this tags option at run time, on successful completion of the @Startup feature, is that possible?

The most common way of running two or more dependent test suites is to create triggers for two or more jobs in your CI. This can be done with various plugins, as described here.
Otherwise, if these are test preparation actions, you can use Cucumber's @Before hook or the related JUnit @BeforeClass annotation.
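For example, a minimal sketch of a Cucumber @Before hook (the package is cucumber.api.java.Before in pre-4.x Cucumber-JVM; newer versions moved it):

    import cucumber.api.java.Before;

    public class StartupHooks {

        // runs before every scenario tagged @Startup
        @Before("@Startup")
        public void prepare() {
            // test preparation actions go here
        }
    }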

Seems not possible with current Cucumber. What you are asking for is dependency among test scenarios, which IMO would be a very good feature. For example, we have a login feature and some other functional features. It would not make any sense, and would actually be a waste of time, to run the other features if the login feature does not work in the first place. To make things worse, you would see a lot of failures in the test report in which you could not easily spot the root cause, which is the non-working login feature.
TestNG supports this with its dependsOnMethods feature. However, TestNG is not a BDD tool.
QAF https://qmetry.github.io/qaf/qaf-2.1.7b/scenario.html#meta-data supports this as a BDD tool. However, it would be too heavy to introduce a new tool for such a simple feature.
All we need is some addition to the Cucumber syntax and a customized test runner that builds up the scenario execution order according to the dependencies and skips features whose prerequisite features fail.
I would love to see if someone can put some effort into this :)
BTW, CI could work around this issue, but again it's too heavy and clumsy. Imagine you have multiple dependencies among test scenarios; how many CI pipelines would you need then? Also, you cannot work around this in a local dev environment with CI, simply because you would not set up CI locally.
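That said, a partial in-process workaround is possible with hooks: record whether the prerequisite feature failed, and skip dependent scenarios via a JUnit assumption. The tag names and static flag below are illustrative assumptions, and whether an assumption violation is reported as "skipped" rather than "failed" depends on your Cucumber and JUnit versions:

    import cucumber.api.Scenario;
    import cucumber.api.java.After;
    import cucumber.api.java.Before;
    import org.junit.Assume;

    public class DependencyHooks {

        private static boolean loginFailed = false;

        // remember whether any scenario of the prerequisite feature failed
        @After("@Login")
        public void recordLoginResult(Scenario scenario) {
            loginFailed = loginFailed || scenario.isFailed();
        }

        // skip dependent scenarios instead of letting them fail noisily
        @Before("@NeedsLogin")
        public void skipIfLoginFailed() {
            Assume.assumeFalse("login feature failed", loginFailed);
        }
    }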

Related

Writing System Tests in JUnit

Preface
I'm deliberately talking about system tests. We do have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, and as such mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and websocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit5.
The tests usually consist of a @BeforeAll in which the system is started in a configuration specific to the test class, which takes around a minute. Then there are a number of independent tests on this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could say that the booting itself can be considered a test-case. JUnit handles lifecycle methods kind of weirdly - the time they take isn't shown in the report; if they fail it messes with the count of tests; it's not descriptive; etc.
I'm currently looking for a workaround, but what my team has done over the last few years is kind of orthogonal to the core idea of JUnit (because it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot() {...}) and introduced an order dependency (which is pretty easy using JUnit 5) that enforces boot to run before any other test.
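A minimal sketch of that ordering, assuming JUnit 5.4+ (where @TestMethodOrder and @Order exist); startSystem() is a hypothetical placeholder:

    import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
    import org.junit.jupiter.api.Order;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.TestMethodOrder;

    @TestMethodOrder(OrderAnnotation.class)
    class SystemTest {

        @Test
        @Order(1)
        void boot() {
            // booting becomes a reported, timed, failable test case
            startSystem();
        }

        @Test
        @Order(2)
        void someFeatureWorks() {
            // runs against the already-booted system
        }

        private void startSystem() { /* start the system here */ }
    }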
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers who are trying to troubleshoot. When I try to start a single test, boot is filtered out of the test execution and the test fails.
Is there any solution to this in JUnit5? Or is there a completely different approach I should take?
I suspect there may be a solution using @TestTemplate, but I'm really not sure how to proceed. Also, AFAIK that would only allow me to generate new named tests that would be filtered as well. Do I have to write a custom test engine? That doesn't seem compelling.
This is a more general testing problem rather than one specific to JUnit 5. To skip the very long boot-up, you can mock some components where possible. Having the booting of the system as a test does not make sense, because other tests depend on it. It is better to use @BeforeAll in this case, as before. For testing the boot-up itself, you can make a separate test class that runs completely independently from the other tests.
Another option is to group these kinds of tests, separate them from the plain unit tests, and run them only when needed (for example, before deployment on the CI server). This really depends on the specific use case and on whether those tests should be part of the regular build on your local machine.
The third option is to try to reduce the boot time if possible. This is the option if you can't use mocks/stubs or exclude those tests from the regular build.

Is it possible to generate Cucumber HTML Reports with Karate's JUnit5 fluent API?

Our team is starting a JUnit 5 project with karate tests.
Currently we are using this as a template for our Karate test runner https://github.com/intuit/karate#junit-5-parallel-execution.
It allows us to pass in "target/surefire-reports", and then before the test finishes we call ReportBuilder.generateReports(). It is basically identical to this code https://github.com/intuit/karate/blob/b50202b3c8a8916a7db0f3d5196d42086ab80a04/karate-junit4/src/test/java/com/intuit/karate/mock/MockServerTest.java.
This works well, but while I was looking at how to set up JUnit 5 I noticed this very slick fluent api https://github.com/intuit/karate#junit-5.
It would be nice to use that syntax, but I can't get the Cucumber report generated like I can with Runner.parallel. I made sure the maven-surefire-plugin was in build.gradle (although I could have messed that up), but it didn't seem to help.
I also tried doing ReportBuilder.generateReports() and the related logic from the parallel execution example in the @AfterAll function, but couldn't get that working either. The errors suggested that the target/surefire-reports folder didn't exist.
Is the cucumber report supported in the second example? If so, is there a trick to getting it setup?
Great question. The reason we de-couple the JUnit execution and the parallel runner is that JUnit is more useful in development mode, where you expect detailed pass/fail stats in the IDE, for example. But this would be an unnecessary overhead in "CI mode".
That said, we have put in some work on making the Parallel runner a fluent interface, so great timing :) You can find an example on line 57 here.
May I request you to try the develop branch and see if you are missing anything? Building is easy; here are some instructions: https://github.com/intuit/karate/wiki/Developer-Guide
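For reference, a minimal sketch of the report-generating runner from the linked parallel-execution example, assuming a Karate version that has the fluent Runner.path(...).parallel(n) API (0.9.5+) and the net.masterthought cucumber-reporting library on the classpath; "classpath:features" and the project name are placeholders:

    import com.intuit.karate.Results;
    import com.intuit.karate.Runner;
    import net.masterthought.cucumber.Configuration;
    import net.masterthought.cucumber.ReportBuilder;
    import org.apache.commons.io.FileUtils;
    import org.junit.jupiter.api.Test;

    import java.io.File;
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ParallelRunnerTest {

        @Test
        void runInParallel() {
            // run all features on 5 threads; Karate writes cucumber JSON
            // into the report directory
            Results results = Runner.path("classpath:features").parallel(5);
            generateReport(results.getReportDir());
            assertEquals(0, results.getFailCount(), results.getErrorMessages());
        }

        private static void generateReport(String karateOutputPath) {
            // collect the cucumber JSON files and feed them to the
            // cucumber-reporting HTML report builder
            Collection<File> jsonFiles = FileUtils.listFiles(
                    new File(karateOutputPath), new String[] {"json"}, true);
            List<String> jsonPaths = new ArrayList<>(jsonFiles.size());
            jsonFiles.forEach(file -> jsonPaths.add(file.getAbsolutePath()));
            Configuration config = new Configuration(new File("target"), "myproject");
            new ReportBuilder(jsonPaths, config).generateReports();
        }
    }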

Can I get a Cucumber feature and its steps from a variable?

I'm new to BDD and particularly Cucumber.
Can I get a feature and its steps from a variable? Also, I want to fetch a feature and its steps from a test tracker (TestRail) before running the tests, by a special selection of those tests, put them in a list, and then take the scenarios one by one and run them.
Is there such a possibility? Should I use Cucumber or another framework for this?
No, you can't define a Cucumber scenario in code (or at least not in a supported way). But if you were going to write code to get a scenario and its steps from your test tracker and run it, you could equally well write code to put the scenario and its steps in files and run the scenario with the cucumber executable; see the sketch below.
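A minimal sketch of that file-based approach, assuming a recent Cucumber-JVM where the CLI entry point is io.cucumber.core.cli.Main; fetchFromTracker is a hypothetical stand-in for your TestRail integration:

    import io.cucumber.core.cli.Main;

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class TrackerRunner {

        public static void main(String[] args) throws Exception {
            // hypothetical helper that returns Gherkin text for a tracker case
            String featureText = fetchFromTracker("C1234");

            // write the scenario to a .feature file...
            Path featureFile = Files.createTempFile("tracker", ".feature");
            Files.write(featureFile, featureText.getBytes(StandardCharsets.UTF_8));

            // ...and run it exactly as the cucumber executable would
            byte exitStatus = Main.run(
                    new String[] {"--glue", "com.example.stepdef", featureFile.toString()},
                    Thread.currentThread().getContextClassLoader());
            System.exit(exitStatus);
        }

        private static String fetchFromTracker(String caseId) {
            // placeholder: call the TestRail API here
            return "Feature: from tracker\n  Scenario: sample\n    Given a step\n";
        }
    }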
I don't know of a Java testing framework in which you can define tests dynamically. You could do that in Ruby with RSpec or (less cleanly) minitest. But I don't know whether a Ruby test framework would be acceptable, or whether it would be OK for the people writing entries in your test tracker to have to read and/or write RSpec examples. (It seems strange to have Cucumber step definitions in a test tracker, too; having features in a test tracker seems more reasonable, aside from the question of how to run them.)

Unit testing for a Selenium project

A question about best practices or practice at all ;)
I am currently working on a test automation system using Selenium in Java. It is supposed to be used for end-to-end acceptance testing of a webapp. The test cases are written in the Gherkin language and executed by the BDD framework Cucumber (Cucumber-JVM). The low-level functions use Selenium/WebDriver for interacting with the AUT and the browser. The Selenium code is structured using the PageObject pattern which abstracts the usage of WebDriver away. The cucumber step definitions call just the methods provided by the PageObjects.
As the project continues and becomes more and more complex, I would like to start writing unit tests to make sure the acceptance tests, and the utility functions around those, do what they should :)
Now to the question:
Is it feasible to write unit tests for a test automation project?
The main problem is that during my first approach to unit testing using TestNG I realised that my unit tests ended up doing more or less the same stuff the acceptance tests already did. This is counterproductive, as the unit tests are very slow and have a lot of dependencies.
Or does one just test the utility classes and leave the Selenium code be in such a case, i.e. test just the stuff that can be tested without calling the Selenium WebDriver and interacting with the AUT?
Note, just to be sure I'm not misunderstood: I'm asking about running unit tests ON the acceptance test code and all the auxiliary code, not about running the Selenium test cases using a unit testing framework like JUnit or TestNG.
Any help and/or ideas will be appreciated, as I am not sure how to tackle this one. That is if writing tests for tests is at all sensible ;)
I'm sure someone will consider my response "opinionated" and vote it down, but nevertheless I think that
Yes, if you have a test framework you are relying upon for your acceptance testing, the framework itself needs to be tested.
From my experience the value is in 2 areas:
Be able to change your framework with confidence. When you create some function, you know quite a lot about it, i.e. which use cases it supports, what it's designed to do, etc. But other people (or even you a year from now) may not have the same level of knowledge, even with documentation. So either new functions will pop up every time someone needs some slight modification in behavior (because they are not confident enough to change the existing function), or someone may break a whole bunch of acceptance tests.
Best if those are true unit tests, able to run completely independently from anything (using mocks, predefined static test data, etc.); see the sketch after this list.
Protect yourself from unexpected changes / bugs in Selenium itself (or other important 3rd-party libraries). When you're updating Selenium to the next version (and that usually needs to be done every 3-6 months), there's always a chance that they changed some default you were relying upon (and didn't even know about), broke something, made something suddenly return a different exception, or stopped throwing an exception where one was previously thrown, and so on. Of course there's no need to get carried away and duplicate Selenium's own unit tests, but when it comes to non-trivial things, or relying on features with poor documentation, those tests may help a lot.
Those are integration tests. Ideally they should run against a test-only webapp (not the real application) that replicates the specifically tested behaviors in a way convenient for tests.
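As a concrete illustration of the first point, a sketch of such a unit test using Mockito (my choice of mocking tool here, not something the answer prescribes); ElementUtils is a hypothetical framework utility:

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    public class ElementUtilsTest {

        @Test
        public void readsTrimmedText() {
            // no browser, no AUT: WebDriver is a mock with canned answers
            WebDriver driver = mock(WebDriver.class);
            WebElement element = mock(WebElement.class);
            when(driver.findElement(By.id("price"))).thenReturn(element);
            when(element.getText()).thenReturn("  42.00  ");

            assertEquals("42.00", ElementUtils.trimmedText(driver, By.id("price")));
        }
    }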
Of course some compromises are possible as well. For example, having a small subset of acceptance tests that serve as unit / integration tests (they run first, and the other tests only run if they are passing). It might be cheaper to begin with those and slowly migrate to proper unit/integration tests as you debug and fix issues in the test framework.
Another question is how to separate the tests that test your framework from the actual acceptance tests for the product. What worked for me is keeping the test framework and the acceptance tests in 2 separate projects. That way I can change the framework and build it (which also includes running the unit and integration tests) as many times as needed. When all unit and integration tests are passing, I can then update the version used by the actual acceptance tests.
Personally, I would not write test code that tests the test code.
Writing tests to shield myself from bugs in 3rd-party tools like Selenium is not something I would do. If the tests pass and the expected result is validated, then that is enough for me.
When I upgrade to a new version of Selenium, I would do it from a working state, i.e. all tests passing. The only change would be the Selenium version. If I now have tests that break, I know that Selenium is behaving differently in this version than I expected and can act accordingly.
I would write tests for any complicated utility functionality I may need.
I would, however, work hard on making the test code extremely easy to understand and avoid anything remotely complicated in it. And then be careful to verify that each test verifies the behaviour I am expecting.

Getting into testing

I am at the stage now where I have a fairly good understanding of programming/development, using Java.
Could anyone tell me the best way for me to start using testing packages? I have looked at Hibernate but am not sure where to go with it...
I use Eclipse 3.5 on Mac OS X. Is it a case of writing scripts to test methods? What is unit-testing? etc.
Where do I begin?
Many thanks. Alex
What is Unit Testing
Unit testing is writing code (i.e. test code) that passes known inputs into the code under test and then validates that the code under test returns the expected outputs. It's the most granular testing you can perform on an application. To make this easier, a unit testing framework is usually used. For Java, JUnit is the most popular, but TestNG is also notable.
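A minimal JUnit 4 example of that idea, self-contained enough to run directly:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ReverseTest {

        @Test
        public void reversesAString() {
            // known input ("abc") must produce the expected output ("cba")
            String reversed = new StringBuilder("abc").reverse().toString();
            assertEquals("cba", reversed);
        }
    }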
Getting Started
Unit testing frameworks provide tools for test execution, validation and results reporting. For your setup, Eclipse has built in support for JUnit. Eclipse is able to automatically detect tests, compile tests and code under test, execute tests, and report results within the IDE. Furthermore, failures are reported as clickable stack trace information that loads the corresponding file at the given line number.
Mock Objects
That you're also working with Hibernate suggests you should investigate a mock object framework as well, such as jMock. Mock objects are usually substituted as part of the code under test's composition and serve two purposes: (1) returning known outputs and (2) recording that they've been called, and how, so that unit tests can inspect that information as part of validation.
The ability to use mock objects to make testing easier is predicated on dependency injection, that is, injecting the other entities that compose the object under test. The idea is to decouple dependencies (e.g. Hibernate) so you can focus on testing the algorithms that manipulate the data you're working with.
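A minimal sketch with jMock 2, where WidgetRepository and WidgetService are hypothetical; the repository dependency is mocked so the service's logic is tested without touching Hibernate or a database:

    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    import java.util.Arrays;

    import static org.junit.Assert.assertEquals;

    public class WidgetServiceTest {

        private final Mockery context = new Mockery();

        @Test
        public void countsWidgets() {
            final WidgetRepository repo = context.mock(WidgetRepository.class);

            // purpose (1): return a known output for the call under test
            context.checking(new Expectations() {{
                oneOf(repo).findAll();
                will(returnValue(Arrays.asList("a", "b")));
            }});

            // the dependency is injected, which is what makes mocking possible
            WidgetService service = new WidgetService(repo);
            assertEquals(2, service.countWidgets());

            // purpose (2): verify the recorded calls actually happened
            context.assertIsSatisfied();
        }
    }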
Database
However, if you've got code that is not easily refactored, or perhaps you want to validate database code, you can test the Hibernate interaction as well. In that case you want a database in a known state. Three approaches come to mind:
Restoring a database backup at the beginning of each test execution.
Use DbUnit, which provides its own mechanisms for maintaining state.
Transactional locking with rollback: wrap the entire test case in a try {} finally {}, where the latter always rolls back the transaction (see the sketch after this list).
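A minimal sketch of that third approach; WidgetDao, Widget and the session-factory lookup are hypothetical placeholders:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;
    import org.junit.Test;

    public class WidgetDaoTest {

        // obtain the SessionFactory however your application does
        private final SessionFactory sessionFactory = HibernateTestSupport.sessionFactory();

        @Test
        public void savePersistsWidget() {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                new WidgetDao(session).save(new Widget("test"));
                // assertions against the session-visible state go here
            } finally {
                tx.rollback();   // always undo, leaving the database untouched
                session.close();
            }
        }
    }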
James Shore ("a thought leader in the Agile software development community") has a series of screen casts of him demonstrating Test Driven Development, using Eclipse.
http://jamesshore.com/Blog/Lets-Play/
While there are many ways to start testing, there is no "best" way so there's no point in looking for that as a starting point.
Search the web for a good tutorial on JUnit and do it. That will be the absolute best way to get started, IMO. Don't get sidetracked with code coverage or integrating with Hudson or any of the other tasks that are on the periphery of testing. Focus on writing a handful (or 10) of tests first.
Once you understand the basics you can start looking at other tools to see if they meet your needs any better or worse than junit.
First up: Hibernate is not a testing package.
Now that's out of the way, I'd suggest you take a look at JUnit. Read up on unit testing first so you know what it is (the Wikipedia entry is a good place to start), then try the JUnit cookbook. Write some unit tests for a small piece of your code to see how it works, then move on to bigger chunks.
While you are at it, take a look at other development tools like Cobertura (for finding out how good your test coverage is) and static analysis tools like Findbugs and Checkstyle. These all integrate nicely with Ant and probably Eclipse, too.
If you are interested in improving your coding standards and build systems then I highly recommend using Ant, JUnit, Cobertura, Checkstyle and Findbugs together with a continuous integration server (e.g. Hudson or CruiseControl) and a version control system (e.g. git). With a toolkit like that you can't go wrong.
There are other frameworks out there (TestNG, Mockito etc) so take a look at them, too, and decide which you prefer (EDIT: And which work nicely together. Mockito + JUnit is a good combination.)
