I am new to JUnit.
Can anyone explain to me clearly what the Data Driven concept is?
And another question: can we write two @RunWith annotations in one JUnit class?
@RunWith(Parameterized.class)
...
and
@RunWith(Theories.class)
...
http://support.smartbear.com/viewarticle/29139/
Explains the data-driven concept in detail. In short, it involves creating different sets of data to test the code. It is mainly used for automated test cases where a certain piece of code is always run with different types of test data and checked for the desired output.
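For a concrete picture, here is a minimal, hand-rolled sketch (class and data are made up): the same assertion runs against several rows of data. In practice a runner such as Parameterized would generate one test per row instead of this loop.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AdditionDataDrivenTest {
    // each row is one data set: { a, b, expected sum }
    private static final int[][] DATA = { { 1, 2, 3 }, { 0, 0, 0 }, { -1, 1, 0 } };

    @Test
    public void addsEveryRowCorrectly() {
        for (int[] row : DATA) {
            assertEquals(row[2], row[0] + row[1]);
        }
    }
}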
And for the second question, I don't think multiple @RunWith annotations make sense: @RunWith is a directive for JUnit to load the proper runner to execute the test cases instead of the default runner built into JUnit, and there can only be one runner per class. I haven't tried it either. Hope this offers some answer to your question.
I am using JUnit and RestAssured to create API tests.
I was wondering if there is a way to execute test cases in the order in which they are written in the class file. Currently, when I execute them, the order seems random.
I tried @TestMethodOrder(MethodOrderer.OrderAnnotation.class) and adding @Order(xy) to the tests, but that didn't help.
Just to describe my problem:
I have multiple tests in the following order in the class: POST tests, GET tests, DELETE tests. As you may have guessed, I want the DELETE tests to be executed last.
Is it possible to do it somehow?
Thanks
Short answer: no. IMO it is not even possible, because the order of your methods is not visible in the generated bytecode, but I'm not a bytecode expert and may be wrong.
The annotation @TestMethodOrder is part of JUnit Jupiter (JUnit 5), so it does not work with JUnit 4 tests. JUnit 4 has an annotation @FixMethodOrder that allows you to execute test methods in alphabetical order. By prefixing your test names with a string like a_, you can achieve what you want. I know it's ugly.
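A minimal sketch of that workaround, assuming JUnit 4.11 or later (class and method names are made up):

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

// NAME_ASCENDING sorts test methods alphabetically, so the prefixes fix the order
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class CrudOrderTest {

    @Test
    public void a_postCreatesResource() {
        // POST tests run first
    }

    @Test
    public void b_getReadsResource() {
        // GET tests run second
    }

    @Test
    public void c_deleteRemovesResource() {
        // DELETE tests run last
    }
}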
My company wants to move off of JUnit 3 and start using only JUnit 4. The other intern and I have been given the task of converting the older JUnit 3 tests to JUnit 4 conventions. However, I'm having a problem converting the test file I'm working on right now.
From what I can tell, there is a generateTest method that returns an SslTest (SslTest is a subclass of TestCase). The returned SslTest overrides runTest. runTest contains a try-catch block that starts two threads, clientThread and serverThread (both subclasses of Thread defined within the test file). It looks like the actual testing is done inside the threads, since the rest of runTest is used for catching exceptions from the two threads.
generateTest is called by another method, generateSuite (which returns a TestSuite). generateSuite contains an outer for-loop that adds suites to a main suite; the inner for-loop uses generateTest to add tests to each suite within the main suite. The main suite is what the method returns.
Finally, inside the suite() method that is called from the main method of the test file, a while-loop generates suites using generateSuite and adds them to a bigger suite.
The only guides I've found on migrating to JUnit 4 are for much simpler test cases. I'm very lost right now and no one else at my company knows enough JUnit 4 to help me, so any tips would be much appreciated!
The very first thing I would do is try to convince whoever gave me the task that it is unnecessary. I know that is hard as an intern, but it is worth making sure that person understands why.
Facts for convincing:
The JUnit 4 jar contains both the junit.framework and org.junit package structures so it is backward compatible.
JUnit has broad adoption. The owners of the JUnit project are well aware of this and aren't going to ask people to rewrite all their tests. In other words, they aren't going to just drop compatibility.
Actually try it. Seriously. Try running your existing test code as-is with the JUnit 4 jar. You'll see whether you get any compiler errors. If you do, those are the areas to focus on. If you don't, you have great evidence to show the person who gave you the task.
This doesn't mean you won't have to change anything; it means you won't have to change the majority of your code. If you have custom runners, you'll want to convert them to the JUnit 4 style. You also might need ClasspathSuite to collect the tests.
There is also value in converting a few of the tests to JUnit 4 so developers on the team have some examples to use. But converting them all isn't a good use of time.
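For reference, the mechanical part of a conversion is usually small. A hedged sketch of the typical before and after (CalculatorTest is a made-up name):

// JUnit 3 style: extend TestCase and rely on naming conventions
import junit.framework.TestCase;

public class CalculatorTest extends TestCase {
    protected void setUp() throws Exception {
        // build the fixture
    }

    public void testAddition() {
        assertEquals(4, 2 + 2);
    }
}

// The same test in JUnit 4 style: a plain class with annotations
import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;

public class CalculatorTest {
    @Before
    public void setUp() {
        // build the fixture
    }

    @Test
    public void addition() {
        assertEquals(4, 2 + 2);
    }
}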
On not being able to post code
Getting help on the internet is extremely difficult without code. I can understand your employer not wanting you to post code. (But then they probably don't want you posting class and method names either, which you did.) Luckily, there is an alternative: create an SSCCE instead. (Read the link; it will help you a lot as you progress in your jobs.) In addition to the smaller example being easier to read, it will allow you to change the class/method/etc. names so your employer won't have their code online.
I am having a design problem in test automation:
Requirements: I need to test different servers (using a Unix console, not a GUI) through an automation framework. The tests I'm going to run: unit, system, integration.
Question: while designing a test case, I am thinking that a test case should be part of a test suite (where the test suite is a class), just as in Python's pyunit framework. But for a scalable automation framework, should we keep test cases as functions, or as separate classes (each with their own setup, run and teardown methods)? From an automation perspective, is a test case as a class more scalable and maintainable than a test case as a function?
Normally, test cases are written as classes rather than functions, because each test case has its own setup data and initialization mechanism. Implementing test cases as single functions makes it difficult to set up test data before running each test case; that said, you can have several test methods in one test-case class if they exercise the same scenario.
The following is my opinion:
Pros of writing tests as functions:
If you need any pre-requisites for a test case, just call another function that provides them. Do the same for teardown steps.
Looks simple to a new person on the team. It is easy to understand what is happening by looking at tests written as functions.
Cons of writing tests as functions:
Not maintainable: if a huge number of tests require the same kind of pre-requisites, the test case author has to maintain the call to each pre-requisite function in every test case, and likewise for each teardown inside the test case.
If there are many calls to such a pre-requisite function across many test cases, and anything changes in the product functionality, you have to manually update many places again.
Pros of writing test cases as classes:
Setup, run and teardown are clearly defined, so the test pre-requisites are easily understood.
If Test 1 does something and the result of Test 1 is used as a setup pre-requisite in Tests 2 and 3, it is easy to just inherit from Test 1 and call its setup, run and teardown methods first, then continue your tests. This helps make the tests independent of each other. You don't need extra effort to maintain the actual calls to your code; inheritance does it implicitly.
Sometimes the setup method of Test 1 and the run method of Test 2 together become the pre-requisites of another Test 3. In that case, just inherit from both Test 1 and Test 2, and in Test 3's setup method call the setup of Test 1 and the run of Test 2. Again, you don't need to maintain the calls to the actual code, because you are calling setup and run methods that are already tried and tested from the framework perspective (a sketch of this inheritance pattern follows the cons list below).
Cons of writing test case as classes:
When the number of tests increases, you can't look at a particular test and say what it does, because it may sit so many inheritance levels deep that you can't trace back. But there is a solution: write docstrings in each setup, run and teardown method of each test case, and write a custom wrapper that generates documentation for each test case from those docstrings. When inheriting, provide an option to add or remove the docstring of a particular function (setup, run, teardown) on the inherited function. This way, you can just run that wrapper and get information about a test case from its docstrings.
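To make the class-based pattern concrete, here is a minimal JUnit sketch (the discussion above is framework-agnostic, and all names here are made up): the base class owns the fixture, and JUnit runs the superclass's @Before/@After around every test of the subclass too, so the setup is inherited rather than re-called by hand.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class StackBaseTest {
    protected List<String> stack;

    @Before
    public void setUp() {
        stack = new ArrayList<>();
        stack.add("bottom");
    }

    @After
    public void tearDown() {
        stack.clear();
    }

    @Test
    public void fixtureIsInitialized() {
        assertEquals(1, stack.size());
    }
}

// In its own file: inherits setUp/tearDown, so there are no pre-requisite calls to maintain
public class StackPushTest extends StackBaseTest {
    @Test
    public void pushAddsOnTop() {
        stack.add("top");
        assertEquals("top", stack.get(stack.size() - 1));
    }
}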
Recently a new concept of Theories was added to JUnit (since v4.4).
In a nutshell, you can mark your test method with the @Theory annotation (instead of @Test), make the test method parameterized, and declare an array of parameters, marked with the @DataPoints annotation, somewhere in the same class.
JUnit will sequentially run your parameterized test method, passing parameters retrieved from the @DataPoints one after another, but only until the first such invocation fails (for whatever reason).
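A minimal sketch of such a theory (names are made up): with the data points below, the method runs once per value, but a failure on any one of them stops and fails the whole theory.

import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class SquareTheoryTest {

    @DataPoints
    public static int[] values = { 1, 2, -3, 0 };

    // invoked once per data point, until the first failing invocation
    @Theory
    public void squareIsNonNegative(int n) {
        assertTrue(n * n >= 0);
    }
}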
The concept seems very similar to @DataProvider in TestNG, but when we use data providers, all the scenarios are run regardless of their execution results. That is useful because you can see how many scenarios work or don't work, and you can fix your program more effectively.
So I wonder, what is the reason not to execute a @Theory-marked method for every @DataPoint? (It appears not so difficult to inherit from the Theories runner and make a custom runner that ignores failures, but why don't we have such behaviour out of the box?)
UPD: I have created a fault-tolerant version of the Theories runner and made it publicly available: https://github.com/rgorodischer/fault-tolerant-theories
To compare it with the standard Theories runner, run StandardTheoriesBehaviorDemo and then FaultTolerantTheoriesBehaviorDemo, which are placed under the src/test/... folder.
Reporting multiple failures in a single test is generally a sign that the test does too much, compared to what a unit test ought to do. Usually this means either that the test is really a functional/acceptance/customer test or, if it is a unit test, then it is too big a unit test.
JUnit is designed to work best with a number of small tests. It executes each test within a separate instance of the test class. It reports failure on each test. Shared setup code is most natural when sharing between tests. This is a design decision that permeates JUnit, and when you decide to report multiple failures per test, you begin to fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design problem. Kent Beck is fond of saying in this case that "there is an opportunity to learn something about your design." We would like to see a pattern language develop around these problems, but it has not yet been written down.
Source: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12
To ignore assertion failures you can also use a JUnit error collector rule:
The ErrorCollector rule allows execution of a test to continue after the first problem is found (for example, to collect all the incorrect rows in a table, and report them all at once).
For example, you can write a test like this:
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.not;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class UsesErrorCollectorTwice {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void example() {
        String x = "a"; // sample values; both checks below fail
        String y = "c";
        collector.checkThat(x, not(containsString("a"))); // fails, but execution continues
        collector.checkThat(y, containsString("b"));      // also fails; both are reported at the end
    }
}
The ErrorCollector uses Hamcrest matchers. Depending on your preferences, this is a plus or not.
AFAIK, the idea is the same as with asserts: the first failure stops the test. This is the difference between Parameterized and Theories.
Parameterized takes a set of data points and runs a set of test methods with each of them. Theories does the same, but fails when the first assert fails.
Try looking at Parameterized. Maybe it provides what you want.
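A hedged sketch of the Parameterized alternative (names are made up): each row of data becomes its own reported test, so one failing row does not prevent the others from running.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SquareTest {
    private final int input;
    private final int expected;

    public SquareTest(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Parameters
    public static Collection<Object[]> data() {
        // each row is reported as a separate test case
        return Arrays.asList(new Object[][] { { 2, 4 }, { 3, 9 }, { -4, 16 } });
    }

    @Test
    public void square() {
        assertEquals(expected, input * input);
    }
}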
A Theory is wrong if it fails for even a single data point; that follows from the definition of a theory. If your test cases don't follow this rule, it would be wrong to call them a "Theory".
Here is the problem I am facing. I have been tasked with testing the query parsing engine of a piece of software through negative testing. That is, I must write a large number of queries that should fail, and test that they do indeed fail and produce the expected error message for the particular error in the query. These queries are defined in an XML file. I've written a simple wrapper around the parsing of the XML document, plus struct-like classes for these test cases.
Now, given that I am using JUnit as the testing framework, I'm running into this issue: the act of running through all of these externally defined tests lives in a single method. If a single test fails, no more will be run. Is there any way to dynamically dispatch a method to handle each of the tests as I encounter them? That way, if a test fails, we can still run the remaining ones while getting a report on what did and did not fail.
The other alternative is, of course, writing out all of the JUnit tests by hand. I'd like to avoid this for many reasons, one of which is that the number of tests to be run is extremely large, and each test case is 99% boilerplate code.
Thanks.
You should look into JUnit's Parameterized runner.
If I understand correctly, the input data and expected results are all defined in XML, so you don't need specific code to handle each test case?
If you use JUnit 4, you could write your own Runner implementation. You could either implement Runner directly or extend ParentRunner. All you need to implement is one method that returns a description of the tests and another method that runs them.
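A rough sketch of the ParentRunner route, under the assumption that your XML wrapper can produce a list of case objects (QueryCase, its fields, and loadCasesFromXml are made-up stand-ins for your own classes, not a real API):

import java.util.Arrays;
import java.util.List;

import org.junit.runner.Description;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.ParentRunner;
import org.junit.runners.model.InitializationError;

// Each externally defined query becomes its own reported test
public class QueryCaseRunner extends ParentRunner<QueryCase> {

    public QueryCaseRunner(Class<?> testClass) throws InitializationError {
        super(testClass);
    }

    @Override
    protected List<QueryCase> getChildren() {
        return QueryCase.loadCasesFromXml(); // hypothetical: load the XML test definitions
    }

    @Override
    protected Description describeChild(QueryCase child) {
        return Description.createTestDescription(getTestClass().getJavaClass(), child.getName());
    }

    @Override
    protected void runChild(QueryCase child, RunNotifier notifier) {
        Description description = describeChild(child);
        notifier.fireTestStarted(description);
        try {
            child.run();  // hypothetical: parse the query, assert the expected error message
        } catch (Throwable t) {
            notifier.fireTestFailure(new Failure(description, t)); // one failure is reported...
        } finally {
            notifier.fireTestFinished(description); // ...without stopping the remaining cases
        }
    }
}

// In its own file: made-up stand-in for the asker's XML-backed test case struct
public class QueryCase {
    private final String name;

    QueryCase(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }

    static List<QueryCase> loadCasesFromXml() {
        // hypothetical: would parse the XML file of negative test definitions
        return Arrays.asList(new QueryCase("missingFromClause"));
    }

    void run() {
        // hypothetical: run the parser and compare the actual and expected error messages
    }
}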