Creating Examples for ScenarioOutline in code - java

I want to dynamically create multiple examples for a ScenarioOutline in a feature file. Is it possible to do this in the @Before hook somehow?
I know this is not how you're supposed to use cucumber, but how would it be possible?
I already tried accessing the Scenario in the hook, but there are no methods to get all the steps and their variables/placeholders.

This has been asked a couple of times before, usually as the more specific question "How can I import scenario outline examples from CSV?". You might find a workaround that works for you by researching that question, such as this answer that suggests using the QAF Gherkin scenario factory, or this answer that suggests passing a CSV into the scenario and then using the example table to index into it.
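To illustrate that second workaround, here is a minimal sketch of a step definition where the Examples table carries only row indices and the real values are read from a CSV at runtime. It assumes Cucumber-JVM with the io.cucumber.java bindings; the step wording, file path, and column layout are made up for illustration.

package my.project.steps;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

import io.cucumber.java.en.Given;

public class CsvBackedSteps {

    // The Examples table supplies only <row>; the actual test data lives in the CSV.
    @Given("the fruit from row {int} of the test data")
    public void fruitFromRow(int row) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("src/test/resources/fruits.csv"));
        String[] columns = lines.get(row).split(",");
        String fruit = columns[0];
        String color = columns[1];
        // ... drive the system under test with fruit and color ...
    }
}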
BUT, that said, defining scenarios dynamically from a file is specifically listed in the Cucumber FAQ as an anti-pattern:
We advise you not to use Excel or csv files to define your test cases; using Excel or csv files is considered an anti-pattern.
One of the goals of Cucumber is to have executable specifications. This means your feature files should contain just the right level of information to document the expected behaviour of the system. If your test cases are kept in separate files, how would you be able to read the documentation?
And sometimes when this question gets asked, there's a strong response from people who know the pain of living with a misused BDD tool, practically begging them not to do it.
Cucumber as a BDD tool involves a lot of overhead (writing feature files) and provides a certain value (a vibrant, team-wide understanding of how the product should work, probably). If you write feature files that don't buy you that value, you're investing all this time into an expensive, unnecessary layer of your test framework. Cucumber basically becomes a glorified test runner, and there are much cheaper ways to run your tests if you don't really need the value BDD is supposed to provide.

Cucumber doesn't encourage keeping examples outside the feature file.
However, there are a few non-standard ways to use examples from outside the feature file with Cucumber. One of them is described in Grasshopper's post.
Another alternative is using Gherkin with QAF, which provides lots of inbuilt data-providers, including XML/CSV/JSON/Excel/DB. It also supports examples generated through code using a custom data-provider. For example:
Scenario Outline: scenario with dynamic test-data
....
Examples:{"dataProvider":"dynamic-examples", "dataProviderClass":"my.project.impl.CustomExamplesProvider"}
package my.project.impl;

import java.util.Map;

import org.testng.annotations.DataProvider;

import com.google.common.collect.Maps;

public class CustomExamplesProvider {

    @DataProvider(name = "dynamic-examples")
    public static Object[][] dataProviderForBDD() {
        // Generate and return data. This is just an example with hard-coded
        // values; you can generate and return data as needed.
        Map<Object, Object> ex1 = Maps.newHashMap();
        ex1.put("fruit", "grapes");
        ex1.put("color", "green");

        Map<Object, Object> ex2 = Maps.newHashMap();
        ex2.put("fruit", "banana");
        ex2.put("color", "yellow");

        return new Object[][] { { ex1 }, { ex2 } };
    }
}

Related

How to write a description in JUnit?

I am using a Test-Driven Development approach for coding and testing various modules.
What do I want to do?
I want to write some description for all my test cases so it can easily be read by anyone.
How am I writing the description right now?
@Test
@DisplayName("Description about my test case.")
public void addTwoObjects_onInvalidMapping_shouldReturnAnError() {
    ...
}
How I don't want to write the description: I don't want to use comments to describe the code.
Also, I don't want to use the @DisplayName annotation which I am currently using in JUnit 5, as this annotation, by my understanding, is meant for renaming technical function names and not for writing a description.
Reference:
Test classes and test methods can declare custom display names via @DisplayName — with spaces, special characters, and even emojis — that will be displayed in test reports and by test runners and IDEs [2]
This question is similar to JUnit test description, but with two differences: (a) it asks about the current generation, JUnit 5, and (b) it explicitly asks for a place to put a lengthy description rather than simply renaming the test method.
You are correct about the @DisplayName annotation. The intention there is simply to provide a more readable name than the method's name. This name is meant to be picked up by tooling that presents a user interface to monitor the running of your tests. That annotation is not appropriate for lengthy descriptions and notes.
Javadoc
The Javadoc facility in Java enables you to attach lengthy descriptions and notes to your source code. Java includes tools to extract the content of your Javadoc for presentation as nicely formatted pages written in auto-generated HTML.
Your JUnit tests are Java source code. So your test source code can carry Javadoc just like your app source code can carry Javadoc.
Your IDE will likely have features to assist in writing the Javadoc.
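For example, a minimal sketch of a JUnit 5 test carrying its description as Javadoc (the class, method, and scenario are made up for illustration):

import org.junit.jupiter.api.Test;

public class AuthenticationTest {

    /**
     * Verifies that authentication fails when a known user supplies
     * a wrong password.
     *
     * A new team member can read this description right here in the
     * source, or in the HTML pages produced by the javadoc tool.
     */
    @Test
    public void authenticateUser_onBadPassword_shouldFail() {
        // ... arrange, act, assert ...
    }
}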
You wrote:
in case anyone new comes to the company, maybe a new intern, he will get some insight into what the code is doing; even the non-technical staff will get some understanding from the description.
Indeed, this is exactly what Javadoc is for. Being embedded within the source means you cannot lose the content of your description and notes.
Javadoc on the source code of your tests seems to meet your needs.

Cascading method calls in FitNesse?

I'm new to FIT and FitNesse and I'm wondering if it is possible to cascade method calls without defining special fixtures.
Background: we are testing our web-based GUI with Selenium WebDriver. I have created a framework based on the PageObject pattern to decouple the HTML from the page logic. This framework is used in our JUnit tests. The framework is implemented in a fluent-API style with a grammar.
Something like this:
boolean connectionTest =
    connectionPage
        .databaseHost("localhost")
        .databaseName("SOME-NAME")
        .instanceNameConnection()
        .instanceName("SOME-INSTANCE-NAME")
        .windowsAuthentication()
        .apply()
        .testConnection();
Some testers want to create acceptance tests but aren't developers, so I had a look at FIT. Would it be possible to use my framework with FIT as is, without developing special fixtures?
I don't believe you can use the existing code with 'plain-vanilla' Fit; it would at least require a special fixture class to be defined. Maybe 'SystemUnderTest' could help?
Otherwise, Slim's version might be a way to get it to work for you.
As a side note: I've put a FitNesse baseline installation, including features to do website testing with (almost) no Java code, on GitHub. In my experience its BrowserTest will allow non-developers to create/modify/maintain tests easily, and to integrate those tests with your continuous integration process (if you have one). I would suggest you (or your testers) also have a look at that.
I know you asked about Java but in case any .NET developers see this, it's possible with the .NET implementation, fitSharp:
|with|new|connection page|
|with|database host|localhost|
|with|database name|some-name|
etc.
See http://fitsharp.github.io/Fit/WithKeyword.html
I have solved my problem by writing a generic fixture which receives the target methods and their arguments from the FitNesse table and uses Java reflection to invoke the appropriate framework methods.
So I have one fixture for all the different page objects that are returned from the framework.
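A rough sketch of what such a generic reflection fixture could look like (the poster's actual code is not shown, so all names here are illustrative):

import java.lang.reflect.Method;

public class ReflectionFixture {

    // The current target of the fluent chain, e.g. a page object.
    private Object target;

    public ReflectionFixture(Object initialPageObject) {
        this.target = initialPageObject;
    }

    // Invoked for a table row with an argument, e.g. | call | databaseHost | localhost |
    public Object call(String methodName, String argument) throws Exception {
        Method method = target.getClass().getMethod(methodName, String.class);
        return advance(method.invoke(target, argument));
    }

    // Invoked for a row without an argument, e.g. | call | windowsAuthentication |
    public Object call(String methodName) throws Exception {
        Method method = target.getClass().getMethod(methodName);
        return advance(method.invoke(target));
    }

    // A fluent API returns the next object in the chain, so keep it as the new target.
    private Object advance(Object result) {
        if (result != null) {
            target = result;
        }
        return result;
    }
}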

Can Java self-modify via user input?

I'm interested in an executed script allowing user input to modify the process and corresponding source.
What precedents exist to implement such a structure?
Yes, depending on what is meant.
Consider such projects as ObjectWeb ASM (see the ASM 2.0 tutorial for a general rundown).
Trying to emit the (would-need-to-be-decompiled) Java source code is another story: if that were the goal, then perhaps the source should be edited, re-compiled, and somehow loaded in/over. (This is possible as well; consider tools like JRebel.)
Happy coding.
You should not be able to modify existing classes. But if you implement a ClassLoader then you can dynamically load classes from non-traditional sources: network, XML file, user input, random number generator, etc.
There are probably other, better ways.
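As a sketch of the ClassLoader idea, assuming the class bytes come from a file the user picks (the names and path are placeholders):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DynamicClassLoader extends ClassLoader {

    // Define a class from bytes obtained at runtime; the bytes could just as
    // well come from the network, user input, or a code generator.
    public Class<?> loadFrom(String classFilePath, String className) throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get(classFilePath));
        return defineClass(className, bytes, 0, bytes.length);
    }
}

Something like new DynamicClassLoader().loadFrom("Plugin.class", "my.Plugin").getDeclaredConstructor().newInstance() would then give you a live object of a class that did not exist at compile time.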
Maybe the Java scripting API is what you're looking for:
http://docs.oracle.com/javase/6/docs/api/javax/script/package-summary.html
http://docs.oracle.com/javase/6/docs/technotes/guides/scripting/programmer_guide/index.html
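A minimal sketch of the scripting API, evaluating a user-supplied snippet at runtime (assumes a JavaScript engine is available on the runtime, as Rhino was in Java 6/7 and Nashorn in Java 8 through 14):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class UserScriptDemo {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        // In a real application this string would come from user input.
        Object result = engine.eval("function add(a, b) { return a + b; } add(1, 2);");
        System.out.println(result); // prints the result of the script, e.g. 3
    }
}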
I wrote an app once that used reflection to allow tests to be driven by a text file. For instance, if you had a class like this:
public class Tuner {
    public Tuner(String channel) { ... }
    public void tune() { ... }
    public void play() { ... }
    public void stop() { ... }
}
You could execute methods via code like:
tuner=Channel 1
tune tuner
play tuner
stop tuner
It had some more capabilities (you could pass objects into other objects, etc.), but mostly I used it to drive tests on a cable box where a full write/build/deploy cycle just to run a test took on the order of half an hour.
You could create a few reusable classes and tie them together with this test language to make some very complex and easy to create tests.
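A rough sketch of how such a reflection-driven interpreter might work (simplified to no-argument methods; the original tool is not shown, so the names are illustrative):

import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class ScriptInterpreter {

    private final Map<String, Object> variables = new HashMap<>();

    // Handles a line like "tuner=Channel 1" by binding a variable to an instance.
    public void bind(String name, Object instance) {
        variables.put(name, instance);
    }

    // Handles a line like "tune tuner": the first word names the method,
    // the second names the variable holding the target object.
    public void execute(String line) throws Exception {
        String[] words = line.trim().split("\\s+");
        Object target = variables.get(words[1]);
        Method method = target.getClass().getMethod(words[0]);
        method.invoke(target);
    }
}

With the Tuner class above, bind("tuner", new Tuner("Channel 1")) followed by execute("tune tuner") would invoke tuner.tune().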
THAT is a DSL, not monkeying around with your loose-syntax language by eliminating parenthesis and adding underscores and dots in random locations to make it look like some strange semi-English.

Save object in debug and then use it as stub in tests

My application connects to a db and gets a tree of categories from it. In debug mode I can see this big tree object, and I thought of the ability to save this object somewhere on disk to use in test stubs. Like this:
mockedDao = mock(MyDao.class);
when(mockedDao.getCategoryTree()).thenReturn(mySavedObject);
Assuming mySavedObject is huge, I don't want to generate it manually or write special generation code. I just want to be able to serialize it and save it somewhere during a debug session, then deserialize it and pass it to thenReturn in tests.
Is there a standard way to do so? If not, what is a good way to implement such an approach?
I do love your idea, it's awesome!
I am not aware of a library that would offer that feature out of the box. You can try using ObjectOutputStream and ObjectInputStream (i.e. the standard Java serialization) if your objects all implement Serializable. Typically they do not. In that case, you might have more luck using XStream or one of its friends.
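A minimal sketch of the standard-serialization route, assuming the category tree implements Serializable (the class and file names are placeholders):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class SnapshotUtil {

    // Call once during the debug session (e.g. from temporary code or the
    // debugger's expression evaluator) to capture the live object to disk.
    public static void save(Object object, String path) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
            out.writeObject(object);
        }
    }

    // Call from the test to restore the captured object for the stub.
    public static Object load(String path) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(path))) {
            return in.readObject();
        }
    }
}

The stub then becomes when(mockedDao.getCategoryTree()).thenReturn((CategoryTree) SnapshotUtil.load("category-tree.ser")), where CategoryTree stands in for whatever type the DAO actually returns.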
We usually mock the entire DB in such scenarios, reusing (and implicitly testing) the code that loads the categories from the DB.
Specifically, our unit tests run against an in-memory database (hsqldb), which we initialize prior to each test run by importing test data.
Have a look at Dynamic Managed Beans - this offers a way to change values of a running Java application. Maybe there's a way to define an MBean that holds your tree, read the tree, store it somewhere, and inject it again later.
I've run into this same problem and considered possible solutions. A few months ago I wrote custom code to print a large binary object as hex-encoded strings. My toJava() method returns a String which is source code for a field definition of the required object. This wasn't hard to implement. I put in log statements to print the result to the log file, and then cut and pasted from the log file into a test class. New unit tests reference that file, giving me the ability to dig into operations on an object that would be very hard to build another way.
This has been extremely useful but I quickly hit the limit on the size of bytecode in a compilation unit.

Writing long test method names to describe tests vs using in code documentation

For writing unit tests, I know it's very popular to write test methods that look like
public void Can_User_Authenticate_With_Bad_Password()
{
...
}
While this makes it easy to see what the test is testing for, I think it looks ugly and it doesn't display well in auto-generated documentation (like Sandcastle or Javadoc).
I'm interested to see what people think about using a naming scheme where the name is the method being tested, then an underscore, then "Test" and the test number, and then using XML documentation comments (.NET) or Javadoc comments to describe what is being tested.
/// <summary>
/// Tests for user authentication with a bad password.
/// </summary>
public void AuthenticateUser_Test1()
{
...
}
By doing this I can easily group my tests together by which methods they are testing, I can see how many tests I have for a given method, and I still have a full description of what is being tested.
We have some regression tests that run against a data source (an XML file), and these files may be updated by someone without access to the source code (QA monkey), who needs to be able to read what is being tested and where, in order to update the data sources.
I prefer the "long names" version - although only to describe what happens. If the test needs a description of why it happens, I'll put that in a comment (with a bug number if appropriate).
With the long name, it's much clearer what's gone wrong when you get a mail (or whatever) telling you which tests have failed.
I would write it in terms of what it should do though:
LogInSucceedsWithValidCredentials
LogInFailsWithIncorrectPassword
LogInFailsForUnknownUser
I don't buy the argument that it looks bad in autogenerated documentation - why are you running JavaDoc over the tests in the first place? I can't say I've ever done that, or wanted generated documentation. Given that test methods typically have no parameters and don't return anything, if the method name can describe them reasonably that's all the information you need. The test runner should be capable of listing the tests it runs, or the IDE can show you what's available. I find that more convenient than navigating via HTML - the browser doesn't have a "Find Type" which lets me type just the first letters of each word of the name, for example...
Does the documentation show up in your test runner? If not that's a good reason for using long, descriptive names instead.
Personally I prefer long names and rarely see the need to add comments to tests.
I've done my dissertation on a related topic, so here are my two cents: Any time you rely on documentation to convey something that is not in your method signature, you are taking the huge risk that nobody would read the documentation.
When developers are looking for something specific (e.g., scanning a long list of methods in a class to see if what they're looking for is already there), most of them are not going to bother to read the documentation. They want to deal with one type of information that they can easily see and compare (e.g., names), rather than have to start redirecting to other materials (e.g., hover long enough to see the JavaDocs).
I would strongly recommend conveying everything relevant in your signature.
Personally I prefer using the long method names. Note you can also have the method name inside the expression, as:
Can_AuthenticateUser_With_Bad_Password()
I suggest smaller, more focussed (test) classes.
Why would you want to javadoc tests?
What about changing
Can_User_Authenticate_With_Bad_Password
to
AuthenticateDenyTest
AuthenticateAcceptTest
and name the suite something like User.
As a group, how do we feel about a hybrid naming schema like this:
/// <summary>
/// Tests for user authentication with a bad password.
/// </summary>
public void AuthenticateUser_Test1_With_Bad_Password()
{
...
}
and we get the best of both.
