I'm new to BDD and particularly Cucumber.
Can I get a feature and its steps from a variable? More specifically, I want to fetch features and their steps from a test tracker (TestRail) before the run, based on a selection of tests, put them into a list, and then take the scenarios one by one and run them.
Is there such a possibility? Should I use Cucumber or another framework for this?
No, you can't define a Cucumber scenario in code (or at least not in a supported way). But if you were going to write code to get a scenario and its steps from your test tracker and run it, you could equally well write code to put the scenario and its steps in files and run the scenario with the cucumber executable.
I don't know of a Java testing framework in which you can define tests dynamically. You could do that in Ruby with RSpec or (less cleanly) minitest. But I don't know whether a Ruby test framework would be acceptable, or whether it would be OK for the people writing entries in your test tracker to have to read and/or write RSpec examples. (It seems strange to have Cucumber step definitions in a test tracker, too; having features in a test tracker seems more reasonable, aside from the question of how to run them.)
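To illustrate the file-based approach: below is a minimal, hypothetical sketch that takes scenario text obtained from the tracker (the TestRail call is stubbed out), writes it to a .feature file, and shells out to the cucumber executable. None of this is a supported Cucumber API; the names and the CLI invocation are assumptions:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TrackerFeatureRunner {
    public static void main(String[] args) throws IOException, InterruptedException {
        String gherkin = fetchFeatureText(); // assumption: pulled from TestRail's API
        Path featureFile = Paths.get("features", "from-tracker.feature");
        Files.createDirectories(featureFile.getParent());
        Files.write(featureFile, gherkin.getBytes(StandardCharsets.UTF_8));
        // Run the scenario with the cucumber executable on the PATH.
        Process cucumber = new ProcessBuilder("cucumber", featureFile.toString())
                .inheritIO()
                .start();
        System.exit(cucumber.waitFor());
    }

    private static String fetchFeatureText() {
        // Placeholder for a real test-tracker API call.
        return "Feature: pulled from tracker\n"
             + "  Scenario: example\n"
             + "    Given a step that exists in your step definitions\n";
    }
}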
Our team is starting a JUnit 5 project with Karate tests.
Currently we are using this as a template for our Karate test runner: https://github.com/intuit/karate#junit-5-parallel-execution.
It allows us to pass in the "target/surefire-reports" path, and before the test finishes we call ReportBuilder.generateReports(). It is basically identical to this code: https://github.com/intuit/karate/blob/b50202b3c8a8916a7db0f3d5196d42086ab80a04/karate-junit4/src/test/java/com/intuit/karate/mock/MockServerTest.java.
This works well, but while I was looking at how to set up JUnit 5 I noticed this very slick fluent API: https://github.com/intuit/karate#junit-5.
It would be nice to use that syntax, but I can't get the Cucumber report generated like I can with Runner.parallel. I made sure the maven-surefire-plugin was in build.gradle (although I could have messed that up), but it didn't seem to help.
I also tried calling ReportBuilder.generateReports() and the related logic from the parallel execution example in the @AfterAll function, but couldn't get that working either. The errors suggested that the target/surefire-reports folder didn't exist.
Is the Cucumber report supported in the second example? If so, is there a trick to getting it set up?
Great question. The reason we de-couple the JUnit execution and the parallel runner is that JUnit is more useful in development mode, where you expect detailed pass/fail stats in the IDE, for example. But this would be unnecessary overhead in "CI mode".
That said, we have put in some work on making the Parallel runner a fluent interface, so great timing :) You can find an example on line 57 here.
May I request that you try the develop branch and see if you are missing anything? Building is easy; here are some instructions: https://github.com/intuit/karate/wiki/Developer-Guide
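For reference, here is a minimal sketch of what the fluent parallel-runner call can look like from a JUnit 5 test. The exact builder methods (in particular outputCucumberJson) vary by Karate version, so treat the names as assumptions based on the documented Runner API:

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ParallelRunnerTest {
    @Test
    void runAllFeaturesInParallel() {
        Results results = Runner.path("classpath:features") // where the .feature files live
                .outputCucumberJson(true)                   // assumption: version-dependent option
                .parallel(5);                               // run on 5 threads, building the reports
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}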
One of the problems of a team lead is that people on the team (sometimes even including myself) often create JUnit tests without any testing functionality.
It's easily done, since the developers use their JUnit test as a harness to launch the part of the application they are coding, and then either deliberately or forgetfully just check it in without any asserts or mock verifications.
Later it gets forgotten that the tests are incomplete, yet they pass and produce great code coverage. Running up the application and feeding data through it will create high code coverage stats from Cobertura or Jacoco, and yet nothing is tested except its ability to run without blowing up - and I've even seen that worked around with big try-catch blocks in the test.
Is there a reporting tool out there which will test the tests, so that I don't need to review the test code so often?
I was temporarily excited to find Jester, which tests the tests by changing the code under test (e.g. an if clause) and re-running the tests to see if the change breaks them.
However, this isn't something you could set up to run on a CI server: it requires set-up on the command line, can't run without showing its GUI, only prints results to that GUI, and also takes ages to run.
PIT is the standard Java mutation tester. From their site:
Mutation testing is conceptually quite simple.
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail then the mutation is killed, if your tests pass then the mutation lived.
...
Traditional test coverage (i.e line, statement, branch etc) measures only which code is executed by your tests. It does not check that your tests are actually able to detect faults in the executed code. It is therefore only able to identify code that is definitely not tested.
The most extreme example of the problem is tests with no assertions. Fortunately these are uncommon in most code bases. Much more common is code that is only partially tested by its suite. A suite that only partially tests code can still execute all its branches (examples).
As it is actually able to detect whether each statement is meaningfully tested, mutation testing is the gold standard against which all other types of coverage are measured.
The quality of your tests can be gauged from the percentage of mutations killed.
It has a corresponding Maven plugin to make it simple to integrate as part of a CI build. I believe the next version will also include proper integration with Maven site reports.
Additionally, the creator/maintainer is pretty active here on StackOverflow, and is good about responding to tagged questions.
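To make the quoted point concrete, here is a small, entirely hypothetical example. Both tests execute every line of percentFor, so line coverage is 100% either way, but only the second one can kill a mutant such as PIT replacing > with >=:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountTest {
    // Code under test, inlined to keep the sketch self-contained
    // (in a real project PIT mutates this in the production tree).
    static int percentFor(int orderTotal) {
        return orderTotal > 100 ? 10 : 0;
    }

    // Executes every line but asserts nothing: every mutant survives.
    @Test
    public void noAssertions() {
        percentFor(150);
        percentFor(50);
    }

    // Kills mutants: changing > to >= or 10 to 0 now fails a test.
    @Test
    public void withAssertions() {
        assertEquals(10, percentFor(150));
        assertEquals(0, percentFor(100)); // the boundary catches > vs >=
        assertEquals(0, percentFor(50));
    }
}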
As far as possible, write each test before implementing the feature or fixing the bug the test is supposed to deal with. The sequence for a feature or bug fix becomes:
Write a test.
Run it. At this point it will fail if it is a good test. If it does not fail, change, replace, or add to it.
When you have a failing test, implement the feature it is supposed to test. Now it should pass.
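A minimal JUnit sketch of that sequence; the Slug class and its behavior are made up for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class SlugTest {
    // Steps 1-2: written before Slug exists, so it fails first (red).
    @Test
    public void replacesSpacesWithDashes() {
        assertEquals("hello-world", Slug.of("Hello World"));
    }
}

// Step 3: the simplest implementation that makes the test pass (green).
class Slug {
    static String of(String title) {
        return title.trim().toLowerCase().replace(' ', '-');
    }
}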
You have various options:
You could probably use a code analysis tool like Checkstyle to verify that each test has an assertion, or alternatively use a JUnit Rule to verify this, but both are easily tricked and work only on a superficial level.
Mutation testing, as Jester does it, is again a technical solution that would work, and it seems @Tom_G has a tool that might fit. But these tools are (in my experience) extremely slow, because they work by changing the code, running the tests, and analyzing the results over and over again. So even tiny code bases take lots of time, and I wouldn't even think about using it in a real project.
Code Reviews: such bad tests are easily caught by code reviews, and they should be part of every development process anyway.
All this still only scratches the surface. The big question you should ponder is: why do developers feel tempted to create code just to start a certain part of the application? Why don't they write tests for what they want to implement, so there is almost no need for starting parts of the application? Get some training for automated unit testing and especially TDD/BDD, i.e. a process where you write the tests first.
In my experience it is very likely that you will hear things like: we can't test this because .... You need to find the real reason why the developers can't or don't want to write these tests, which might or might not be the reasons they state. Then fix those reasons and those abominations of tests will go away all on their own.
What you are looking for is indeed mutation testing.
Regarding tool support, you might also want to look at the Major mutation framework (mutation-testing.org), which is quite efficient and configurable. Major uses a compiler-integrated mutator and gives you great control over what should be mutated and tested. As far as I know, Major does not yet produce graphical reports, but rather data (csv) files that you can process or visualize in any way you want.
Sounds like you need to consider a coverage tool like Jacoco; its Gradle plugin provides a report on coverage. I also use the EclEmma Eclipse plugin to obtain the same results, but with a fairly nice integration in the IDE.
In my experience, Jacoco has provided acceptable numbers even when there are no-op unit tests, as it seems able to accurately determine the tested code paths. No-op tests get low or 0% coverage scores, and the scores increase as the tests become more complete.
Update
To address the down-voter: perhaps a more appropriate tool for this is PMD, which can be used in an IDE or build system. With proper configuration and rule development it could be used to find these incomplete unit tests. I have used it in the past to find methods missing certain security-related annotations.
I have a lot of test cases written in a single file. They are actually a sort of instructions, which can be read and run from inside Java.
The problem is that this approach is not good: the file becomes big and unmanageable with a lot of test cases. How should I manage them? I was thinking of splitting them into different files and keeping an XML database for the metadata. Are there better ways?
P.S.: They are not plain-English test cases; they are a sort of instructions that can be run inside Java.
Update: They are not unit tests, more like functional tests, and they are not test classes. A program reads the different test cases from a file and runs them.
Look at the approach that is used by Cucumber. Here you find human-readable "feature" descriptions that each contain a number of different scenarios to test that feature out completely. No one feature file is a single test, none are test classes themselves, and a program reads all the feature files and runs them.
The overall pattern here would probably be instructive for you as well.
http://cukes.info/
Note that recently there has been a significant amount of work on making it easy to write these cuke tests in Java as well as in Ruby, the original native language.
The Java port of Cucumber uses a JUnit 4 custom test runner like this:
@RunWith(Cucumber.class)
@Feature("create_user_account.feature")
public class CreateUserAccountTest {
}
You can run this class as a JUnit test, and the console output looks very similar to what you see on the Cucumber website. So you basically have one of these "test classes" for every feature. Then you can run a whole package's worth of features, a single feature, or the entire project's worth of features all at once, by either grouping them into test suites or using Eclipse's test run batching.
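For the suite option, a plain JUnit 4 suite works; this sketch assumes one runner class per feature, as above:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({
    CreateUserAccountTest.class
    // add one entry per feature runner
})
public class UserFeaturesSuite {
}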
I am at the stage now where I have a fairly good understanding of programming/development, using Java.
Could anyone tell me the best way for me to start using testing packages? I have looked at Hibernate but I'm not sure where to go with it...
I use Eclipse 3.5 on Mac OS X. Is it a case of writing scripts to test methods? What is unit testing? etc.
Where do I begin?
Many thanks. Alex
What is Unit Testing
Unit testing is writing code (i.e. test code) that passes known inputs into code under test and then validating the code under test returns expected outputs. It's the most granular testing you can perform on an application. To make it easier, usually a unit testing framework is used. For Java, JUnit is the most popular, but TestNG is also notable.
Getting Started
Unit testing frameworks provide tools for test execution, validation and results reporting. For your setup, Eclipse has built in support for JUnit. Eclipse is able to automatically detect tests, compile tests and code under test, execute tests, and report results within the IDE. Furthermore, failures are reported as clickable stack trace information that loads the corresponding file at the given line number.
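As a minimal sketch of what such a test looks like with JUnit 4 (the Calculator class is made up and inlined for the example):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    // A made-up class under test, inlined to keep the sketch self-contained.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    @Test
    public void addsTwoNumbers() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3)); // known input -> expected output
    }
}

Run it in Eclipse via Run As > JUnit Test and the green/red bar reports the result.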
Mock Objects
That you're also working with Hibernate suggests you should investigate a mock object framework as well - such as jMock. Mock objects are usually substituted for parts of the code under test's composition and serve two purposes: (1) returning known outputs, and (2) recording that they've been called, and how, so that unit tests can inspect that information as part of validation.
The ability to use mock objects to make testing easier is predicated on dependency injection, that is, passing in the other entities that compose the object under test. The idea is to decouple dependencies (e.g. Hibernate) so you can focus on testing the algorithms that manipulate the data you're working with.
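A short jMock 2 sketch showing both purposes; UserDao and UserService are hypothetical names standing in for your own types:

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class UserServiceTest {
    // Hypothetical collaborators, inlined for the sketch.
    interface UserDao { String findName(long id); }

    static class UserService {
        private final UserDao dao;
        UserService(UserDao dao) { this.dao = dao; }
        String displayNameFor(long id) { return dao.findName(id).toUpperCase(); }
    }

    @Test
    public void looksUpAndFormatsDisplayName() {
        Mockery context = new Mockery();
        final UserDao dao = context.mock(UserDao.class);  // stands in for a Hibernate-backed DAO
        context.checking(new Expectations() {{
            oneOf(dao).findName(42L);                     // (2) expect and record the call
            will(returnValue("alice"));                   // (1) return a known output
        }});
        assertEquals("ALICE", new UserService(dao).displayNameFor(42L));
        context.assertIsSatisfied();                      // fail if the expected call never happened
    }
}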
Database
However, if you've got code that is not easily refactored, or perhaps you want to validate database code, you can test the Hibernate interaction too. In that case you want a database in a known state. Three approaches come to mind:
Restoring a database backup at the beginning of each test execution.
Use dbunit, which provides its own mechanisms for maintaining state.
Transactional rollback: wrap the entire test in a try{} finally{} block, where the finally always rolls back the transaction (sketched below).
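A sketch of the third approach with a plain Hibernate session; the User entity and the sessionFactory setup are assumed to exist elsewhere:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class UserMappingTest {
    static SessionFactory sessionFactory; // assumption: configured in test setup

    @Test
    public void persistsAndReloadsUser() {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            Long id = (Long) session.save(new User("alice")); // exercise the mapping
            session.flush();
            session.clear();                                  // force a real reload from the database
            User found = (User) session.get(User.class, id);
            assertEquals("alice", found.getName());
        } finally {
            tx.rollback();   // always undo, leaving the database in its known state
            session.close();
        }
    }
}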
James Shore ("a thought leader in the Agile software development community") has a series of screencasts demonstrating Test Driven Development using Eclipse.
http://jamesshore.com/Blog/Lets-Play/
While there are many ways to start testing, there is no "best" way so there's no point in looking for that as a starting point.
Search the web for a good tutorial on JUnit and work through it. That will be the absolute best way to get started, IMO. Don't get sidetracked by code coverage or integrating with Hudson or any of the other tasks that are peripheral to testing. Focus on writing a handful (or 10) of tests first.
Once you understand the basics, you can start looking at other tools to see whether they meet your needs better or worse than JUnit.
First up: Hibernate is not a testing package.
Now that's out of the way, I'd suggest you take a look at JUnit. Read up on unit testing first so you know what it is (the Wikipedia entry is a good place to start), then try the JUnit cookbook. Write some unit tests for a small piece of your code to see how it works, then move on to bigger chunks.
While you are at it, take a look at other development tools like Cobertura (for finding out how good your test coverage is) and static analysis tools like FindBugs and Checkstyle. These all integrate nicely with Ant and probably Eclipse, too.
If you are interested in improving your coding standards and build systems, then I highly recommend using Ant, JUnit, Cobertura, Checkstyle and FindBugs together with a continuous integration server (e.g. Hudson or CruiseControl) and a version control system (e.g. git). With a toolkit like that you can't go wrong.
There are other frameworks out there (TestNG, Mockito, etc.), so take a look at them too and decide which you prefer. (EDIT: And check which work nicely together. Mockito + JUnit is a good combination.)
I'm learning JUnit. Since my app includes graphical output, I want the ability to eyeball the output and manually pass or fail the test based on what I see. It should wait for me for a while, then fail if it times out.
Is there a way to do this within JUnit (or its extensions), or should I just throw up a dialog box and assertTrue on the output? It seems like it might be a common problem with an existing solution.
Edit: If I shouldn't be using JUnit for this, what should I be using? I want to manually verify the build every so often, and unit test automatically, and it'd be great if the two testing frameworks got along.
Manually accepting/rejecting a test defeats the purpose of using an automated test framework, and JUnit is not made for this kind of thing. Unless you find a way to create and inject a mock of the object representing your output device, you should consider alternatives (I don't really know of any, sorry).
I once wrote automated tests for a video decoding component. I dumped the decoded data to a file using some other decoder as a reference, and then compared the output of my decoder to it using the PSNR of each pair of images. This is not 100% self-contained (it needs external files as resources), but it was automated at least, and it worked fine for me.
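A simplified sketch of that idea: rather than a true PSNR computation, this just sums per-pixel differences against a tolerance, and the file paths are made up:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class DecoderOutputTest {
    @Test
    public void frameMatchesReference() throws Exception {
        BufferedImage expected = ImageIO.read(new File("reference/frame-001.png"));
        BufferedImage actual = ImageIO.read(new File("output/frame-001.png"));
        assertEquals(expected.getWidth(), actual.getWidth());
        assertEquals(expected.getHeight(), actual.getHeight());
        long error = 0;
        for (int y = 0; y < expected.getHeight(); y++) {
            for (int x = 0; x < expected.getWidth(); x++) {
                // Compare only the low byte (blue channel) to keep the sketch short.
                error += Math.abs((expected.getRGB(x, y) & 0xFF) - (actual.getRGB(x, y) & 0xFF));
            }
        }
        long tolerance = (long) expected.getWidth() * expected.getHeight(); // avg diff of 1 per pixel
        assertTrue("decoded frame deviates too far from the reference", error <= tolerance);
    }
}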
Although you could probably code that, it is not what JUnit is about: JUnit is about automated tests, not guided manual tests. Generally, that "does it look right" test is regarded as an integration test, as it is something that is very hard to automate correctly in a way that doesn't break for trivial changes all the time.
Take a look at Abbot to give you a more robust way to test your GUI.
Unit tests shouldn't require human intervention. If you need a user to take an action then I think you're doing it wrong.
If you need a human to verify things, then don't do this as part of your unit tests. Just make it a required step for your test department to carry out when QA'ing builds. (This still works if your QA department is just you.)
I recommend using your unit tests for the models if you're using MVC, or for any utility method (e.g. with Swing it's common to have color-mapping methods). If you have a good set of unit tests on things like model behavior, it'll help narrow your search when you have a UI bug.
Visual unit tests are very difficult. At a company I worked at, they had tried these visual tests, but slight differences in video cards could produce failed tests. In the end, this is where a good QA team is required.
Take a look at FEST-Swing. It provides an easy way to automatically test your GUIs.
The other thing you'll want to do is separate the code which does the bulk of the work from your GUI code as much as possible. You can then write unit tests on this work code without having to deal with the user interface. You'll also find that you'll run these tests much more frequently, as they run quickly.
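For example (a hypothetical sketch): pull a color-mapping rule like the one mentioned above out of the Swing layer into a plain method, and the test needs no GUI at all:

import java.awt.Color;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class StatusColorsTest {
    // Plain logic extracted from the GUI layer: trivially unit-testable.
    static Color colorFor(int errorCount) {
        if (errorCount == 0) return Color.GREEN;
        return errorCount < 10 ? Color.YELLOW : Color.RED;
    }

    @Test
    public void mapsErrorCountsToColors() {
        assertEquals(Color.GREEN, colorFor(0));
        assertEquals(Color.YELLOW, colorFor(3));
        assertEquals(Color.RED, colorFor(10));
    }
}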