How do you test JUnit tests? (Java)

How do we test the JUnit test cases we wrote? I thought manual verification, i.e. creating test data and asserting that expected and actual values match, was fine. But recently I encountered a situation where the JUnit tests were passing, yet the particular SUT code failed during UI testing (that is, the JUnit tests failed to guard against the bug).

If your tests are passing, but the actual code that the tests were meant to cover is failing, then one of two things has happened:
The test suite hasn't adapted to cover that specific use case, or
The tests written to cover that specific use case are insufficient.
In either case, you need to rewrite your tests. A test suite that can't guard against specific aberrant behaviors is worthless.
You also mention that it fails specifically during UI tests. This could result from a disconnect between the UI's expectations and the backend tests' assumptions. In that event, either align the backend tests with the UI's actual inputs, or implement an integration test that covers the UI's workflow.

How do we test the JUnit test cases we wrote?
You should not.
Unit tests are not infallible, but testing tests makes no sense.
You should consider automatic tests as executable specifications.
Generally, if your specifications are wrong, you are stuck.
For automatic testing it is exactly the same thing.
To avoid this kind of problem or at least reduce it, I favor:
reviewing code and test code with peers from the development team;
complementing unit tests with integration and business tests validated by the business team;
continuous improvement of automatic tests.
It is simple: as soon as a hole is detected in manual UI testing, an automatic test should be updated (if the test exists but some checks are missing) or a new test should be created (if the test is missing).

To verify the quality of unit tests, I personally use the following techniques:
Coverage metrics. It's a good idea to have good line and branch coverage, but 100% line coverage is usually not achievable, and coverage by itself doesn't guarantee that the code was actually tested rather than simply called from a test class.
Test code review. Personally I prefer writing tests with clear structure 'setup - run - assert'. If 'run' or 'assert' steps are missing, then there is something wrong with the test.
Mutation testing. There are frameworks which modify your production code in simple ways (apply mutators to the code) and then run your unit tests against the modified code; if no test fails, that code is either not tested or the tests are bad. For Java I use PIT Mutation Testing. (A sketch follows at the end of this list.)
Also, it sometimes makes sense to apply not just unit tests but other testing techniques as well: manual testing, integration testing, load testing, etc.
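Here is that sketch of the mutation-testing idea: a minimal, hypothetical example (the Discounts class and the ">=" mutant are invented for illustration; this is not PIT output):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

class Discounts {
    // Production code under test: a 10% discount applies to orders of 100.00 or more.
    static double discount(double orderTotal) {
        if (orderTotal >= 100.0) {
            return orderTotal * 0.10;
        }
        return 0.0;
    }
}

public class DiscountsTest {
    // This test passes and even reaches full line and branch coverage,
    // yet a mutant that changes ">=" to ">" in discount() would survive,
    // because the boundary value 100.0 is never exercised.
    @Test
    public void discountAppliesOnlyToLargeOrders() {
        assertEquals(15.0, Discounts.discount(150.0), 0.001);
        assertEquals(0.0, Discounts.discount(50.0), 0.001);
    }
}

Adding assertEquals(10.0, Discounts.discount(100.0), 0.001) would kill that mutant; the proportion of mutants killed is what the mutation tool reports.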

I have encountered a situation where JUnit tests were passing but the particular SUT code was failing.
Your unit tests should NOT miss covering any method's functionality or its side effects. This is where code coverage tools like Cobertura play a role: it is not enough that the tests pass; we need to ensure that each method and its side effects have been properly unit tested/covered.
No, code coverage is just as much a placebo here. You can have 100% line coverage and still be in the same fix the OP is in.
Tools like Cobertura are there at least to show what percentage of the code is covered; but you will get even more bugs if you don't pay attention to test coverage at all.
The main point is that these coverage tools don't tell you whether your internal business requirements have really been met.
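A minimal, hypothetical sketch of that "100% coverage, zero protection" situation (Person, PersonDto, and the bug are all invented for illustration):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

class Person {
    final String firstName, lastName;
    Person(String firstName, String lastName) { this.firstName = firstName; this.lastName = lastName; }
}

class PersonDto {
    String firstName, lastName;
}

class PersonMapper {
    PersonDto toDto(Person p) {
        PersonDto dto = new PersonDto();
        dto.firstName = p.firstName;
        dto.lastName = p.firstName; // bug: should be p.lastName
        return dto;
    }
}

public class PersonMapperTest {
    @Test
    public void mapsPerson() {
        PersonDto dto = new PersonMapper().toDto(new Person("Ada", "Lovelace"));
        // Every line of toDto() executes, so line coverage is 100%,
        // but only firstName is asserted, so the copy-paste bug passes unnoticed.
        assertEquals("Ada", dto.firstName);
    }
}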

Related

Does it make sense to write an additional test for a method if it is already covered by another test?

I am testing my spring boot application with unit tests and in coverage report, I have +85% branch coverage. There are some cases where some methods are covered while testing other methods such as mappers, utility methods, etc.
For example, in the service method, I called mapper and utility methods, and those methods are marked as covered in the coverage report because I tested that service method.
My question is, does it make sense to write additional tests for those mapper and utility methods since they are already covered in the service tests?
I usually take a different approach that I would like to share.
Say I'm a developer who has been given the task of creating a Mapper or utility.
At this time the Service doesn't even exist.
I write my code, but then I want to check myself, so I unit test it. I don't care about the coverage at this point; for me, coverage is a tool that helps me understand and decide whether I've made all the checks I need or missed an area in the Mapper code I wrote that could hide a bug.
So I've made sure that my code is perfect, I submit it, and I move on to another task...
Later (weeks, months, years) someone, me or another programmer, creates a Service that uses my code. Let's pretend it's you, for example.
So you basically don't care about the mapping code; you assume that it works. However, you do want to check your Service, so you write unit tests that mock the dependency on the Mapper, or run it within the test if it's not a real dependency and it's fast. Again, you don't care about the overall coverage; you care about your code. You want to make sure that your code doesn't have bugs, and again, coverage helps you make sure that you've done your best to check your code before it meets production.
As for my old tests of Mapper - well, they're still run.
Bottom line: I don't think code should be covered for the sake of coverage, but tests (and coverage) should provide you with a safety net, if you wish.
With this in mind, you should write tests for your code; if you can mock the other dependencies, so be it, and if not, just run them.
P.S. I believe there may be many different opinions on this, so there is no single right answer to this question.
Do not write unit tests to "cover code".
Write unit tests to verify behavior.
The number of tests executing a certain line of production code does not carry any meaning on its own.
Also, unit tests test a unit (an arbitrary piece of code) in isolation. That means that if the code under test uses other units as dependencies, those other units should be replaced by test doubles (such as stubs, fakes, or mocks) during the unit test, as in the sketch below.
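A minimal sketch of that isolation, assuming JUnit 4 and Mockito; the UserService/UserMapper names are hypothetical:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import org.junit.Test;

class User { final long id; User(long id) { this.id = id; } }
class UserDto { final String name; UserDto(String name) { this.name = name; } }
interface UserMapper { UserDto toDto(User user); }

class UserService {
    private final UserMapper mapper;
    UserService(UserMapper mapper) { this.mapper = mapper; }
    UserDto loadUser(long id) { return mapper.toDto(new User(id)); }
}

public class UserServiceTest {
    @Test
    public void serviceIsTestedInIsolation() {
        // The mapper dependency is replaced by a test double, so this
        // test verifies only the service's own behavior; the mapper
        // still deserves its own direct unit tests elsewhere.
        UserMapper mapper = mock(UserMapper.class);
        when(mapper.toDto(any(User.class))).thenReturn(new UserDto("stubbed"));

        UserDto dto = new UserService(mapper).loadUser(42L);

        assertEquals("stubbed", dto.name);
        verify(mapper).toDto(any(User.class));
    }
}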
Of course it does. Unit tests are not just for coverage; coverage is only one part of it. Your main aim in writing unit tests is to avoid or minimize bugs reaching testers or production.
You should test your function with all sorts of negative and positive scenarios.
Apart from these, you should test the boundary conditions of your code.
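For instance, a minimal hypothetical sketch (JUnit 4; the validator and its 0..150 range are invented):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class AgeValidatorTest {
    // Hypothetical SUT: accepts ages in the range 0..150 inclusive.
    static boolean isValidAge(int age) { return age >= 0 && age <= 150; }

    @Test
    public void acceptsTypicalAge() {        // positive scenario
        assertTrue(isValidAge(30));
    }

    @Test
    public void rejectsOutOfRangeAges() {    // negative scenarios
        assertFalse(isValidAge(-1));
        assertFalse(isValidAge(151));
    }

    @Test
    public void acceptsBoundaryValues() {    // boundary conditions
        assertTrue(isValidAge(0));
        assertTrue(isValidAge(150));
    }
}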

Catch Fake Unit Tests in Java

I have tests in my repo which do not have asserts, though JaCoCo gives good coverage. Is there a way to detect tests like this, other than better code reviews?
Use PMD. It has a standard rule for unit tests without any asserts.
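For example, a "test" like the following sketch (hypothetical classes) runs the production code and earns coverage but can never fail; it is exactly the kind of thing PMD's no-assert rule (JUnitTestsShouldIncludeAssert, if I recall the rule name correctly) is meant to flag:

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

class Order {
    final String item;
    final int quantity;
    Order(String item, int quantity) { this.item = item; this.quantity = quantity; }
}

class OrderRepository {
    private final List<Order> orders = new ArrayList<>();
    void save(Order order) { orders.add(order); }
}

public class OrderRepositoryTest {
    // Executes the production code (JaCoCo counts the lines as covered)
    // but verifies nothing, so it passes no matter what save() does.
    @Test
    public void savesOrder() {
        new OrderRepository().save(new Order("widget", 3));
    }
}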
If your intent is only to find trivially wrong tests, namely those that lack any assert statement, checkers like PMD might do the job. You will likely get some false positives, namely for tests where the actual test goal is just to ensure that the SUT does not throw an exception (a case mentioned in the comments by Peter Lawrey).
There are, however, many more problems in tests that are not as simple to find: an assertion could simply be wrong (assertTrue(isPrime(9))), or the assertion may only address part of the relevant aspects (like, when dealing with rational numbers, only checking the numerator but not the denominator).
For the detection of such quality problems there also exist approaches, like mutation testing, which can help to some extent. And, while writing the tests in the first place, using a test-first development process ensures that each test has failed at least once.
However, a test suite may have additional problems that are not related to the set of tests that are executed, but to non-functional quality criteria: Test execution time, maintainability of the test suite, expressiveness of the diagnostic output in case of a test failure etc.
When you are interested in detecting these kinds of problems, I would argue reviews are unavoidable.

Unit testing for a Selenium project

A question about best practices or practice at all ;)
I am currently working on a test automation system using Selenium in Java. It is supposed to be used for end-to-end acceptance testing of a webapp. The test cases are written in the Gherkin language and executed by the BDD framework Cucumber (Cucumber-JVM). The low-level functions use Selenium/WebDriver for interacting with the AUT and the browser. The Selenium code is structured using the PageObject pattern which abstracts the usage of WebDriver away. The cucumber step definitions call just the methods provided by the PageObjects.
As the project continues and becomes more and more complex, I would like to start writing unit tests to make sure the acceptance tests, and the utility functions around them, do what they should :)
Now to the question:
Is it feasible to write unit test for testing a test automation project?
The main problem is that during my first approach to unit testing using TestNG, I realised that my unit tests ended up doing more or less the same stuff the acceptance tests already did. This is counterproductive, as the unit tests are very slow and have a lot of dependencies.
Or does one just test the utility classes and leave the Selenium code be in such a case, i.e. test just the stuff that can be tested without calling the Selenium WebDriver and interacting with the AUT?
Note, just to be sure I am not misunderstood: I'm asking about running unit tests ON the acceptance test code and all the auxiliary code, not about running the Selenium test cases with a unit testing framework like JUnit or TestNG.
Any help and/or ideas will be appreciated, as I am not sure how to tackle this one. That is if writing tests for tests is at all sensible ;)
I'm sure someone will consider my response "opinionated" and vote it down, but nevertheless I think that
Yes, if you have a test framework you are relying upon for your acceptance testing, the framework itself needs to be tested.
From my experience the value is in 2 areas:
Be able to change your framework with confidence. When you create some function, you know quite a lot about it, i.e. which use cases it supports, what it's designed to do, etc. But other people (or even you a year from now) may not have the same level of knowledge, even with documentation. So either new functions will pop up every time someone needs a slight modification in behavior (because they are not confident enough to change the existing function), or someone may break a whole bunch of acceptance tests.
Best if those are true unit tests, able to run completely independently of anything (using mocks, predefined static test data, etc.); there is a sketch of one at the end of this answer.
Protect yourself from unexpected changes / bugs in Selenium itself (or other important third-party libraries). When you're updating Selenium to the next version (and that usually needs to be done every 3-6 months), there's always a chance that they changed some default you were relying upon (without you even knowing it), or broke something, or that something suddenly returns a different exception, or no longer throws where it previously did, and so on. Of course there is no need to get carried away and duplicate Selenium's own unit tests, but when it comes to non-trivial things, or to relying on features with poor documentation, those tests can help a lot.
Those are integration tests. Ideally they should run against a test-only webapp (not the real application) that replicates the specifically tested behaviors in a way convenient for tests.
Of course some compromises are possible as well, for example having a small subset of acceptance tests serve as unit / integration tests (they run first, and the other tests only run if they pass). It might be cheaper to begin with those and slowly migrate to proper unit/integration tests as you debug and fix issues in the test framework.
Another question is how you separate the tests that test your framework from the actual acceptance tests for the product. What worked for me is keeping the test framework and the acceptance tests in two separate projects. That way I can change the framework and build it (which also includes running the unit and integration tests) as many times as needed. When all unit and integration tests pass, I update the version used by the actual acceptance tests.
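As a sketch of the "true unit test" mentioned above (assuming JUnit 4, Mockito, and Selenium's WebDriver interfaces; the LoginPage PageObject is hypothetical):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Hypothetical PageObject from the test framework under test.
class LoginPage {
    private final WebDriver driver;
    LoginPage(WebDriver driver) { this.driver = driver; }
    String errorMessage() {
        return driver.findElement(By.cssSelector(".error")).getText().trim();
    }
}

public class LoginPageTest {
    @Test
    public void errorMessageIsTrimmed() {
        // No browser and no AUT: WebDriver and WebElement are interfaces,
        // so they can be mocked and the PageObject's own logic tested in isolation.
        WebDriver driver = mock(WebDriver.class);
        WebElement element = mock(WebElement.class);
        when(driver.findElement(By.cssSelector(".error"))).thenReturn(element);
        when(element.getText()).thenReturn("  Invalid credentials  ");

        assertEquals("Invalid credentials", new LoginPage(driver).errorMessage());
    }
}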
Personally, I would not write test code that tests the test code.
Writing tests to shield myself from bugs in third-party tools like Selenium is not something I would do. If the tests pass and the expected result is validated, that would be enough for me.
When I upgrade to a new version of Selenium, I would do it from a working state, i.e. with all tests passing and the Selenium version as the only change. If tests now break, I know that Selenium behaves differently in this version than I expected and I can act accordingly.
I would write tests for any complicated utility functionality I may need.
I would, however, work hard on making the test code extremely easy to understand and avoid anything remotely complicated in it. And then be careful to verify that each test verifies the behaviour I am expecting.

JUnit report to show test functionality, not coverage

One of the problems of a team lead is that people on the team (sometimes even including myself) often create JUnit tests without any testing functionality.
It's easily done, since the developers use their JUnit tests as a harness to launch the part of the application they are coding, and then either deliberately or forgetfully check them in without any asserts or mock verifications.
Then later it gets forgotten that the tests are incomplete, yet they pass and produce great code coverage. Running up the application and feeding data through it creates high code coverage stats in Cobertura or JaCoCo, and yet nothing is tested except the code's ability to run without blowing up; I've even seen that worked around with big try-catch blocks in the test.
Is there a reporting tool out there which will test the tests, so that I don't need to review the test code so often?
I was temporarily excited to find Jester, which tests the tests by changing the code under test (e.g. an if clause) and re-running it to see whether the tests break.
However, this isn't something you could set up to run on a CI server: it requires set-up on the command line, can't run without showing its GUI, prints results only to the GUI, and takes ages to run.
PIT is the standard Java mutation tester. From their site:
Mutation testing is conceptually quite simple.
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail then the mutation is killed, if your tests pass then the mutation lived.
...
Traditional test coverage (i.e. line, statement, branch, etc.) measures only which code is executed by your tests. It does not check that your tests are actually able to detect faults in the executed code. It is therefore only able to identify code that is definitely not tested.
The most extreme example of the problem are tests with no assertions. Fortunately these are uncommon in most code bases. Much more common is code that is only partially tested by its suite. A suite that only partially tests code can still execute all its branches (examples).
As it is actually able to detect whether each statement is meaningfully tested, mutation testing is the gold standard against which all other types of coverage are measured.
The quality of your tests can be gauged from the percentage of mutations killed.
It has a corresponding Maven plugin to make it simple to integrate as part of a CI build. I believe the next version will also include proper integration with Maven site reports too.
Additionally, the creator/maintainer is pretty active here on StackOverflow, and is good about responding to tagged questions.
As far as possible, write each test before implementing the feature or fixing the bug the test is supposed to deal with. The sequence for a feature or bug fix becomes:
Write a test.
Run it. At this point it will fail if it is a good test. If it does not fail, change, replace, or add to it.
When you have a failing test, implement the feature it is supposed to test. Now it should pass.
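A hypothetical red-green example of that sequence (the Slug class is invented):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SlugTest {
    // Steps 1 and 2: this test is written first. While Slug.of() is still
    // unimplemented (say, returning null), the test fails, which proves
    // it can actually detect the missing behaviour.
    @Test
    public void lowercasesAndReplacesSpaces() {
        assertEquals("hello-world", Slug.of("Hello World"));
    }
}

// Step 3: implement just enough to make the failing test pass.
class Slug {
    static String of(String title) {
        return title.toLowerCase().replace(' ', '-');
    }
}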
You have various options:
You could probably use a code analysis tool like Checkstyle to verify that each test has an assertion, or alternatively use a JUnit Rule to verify this, but both are easily tricked and work only on a superficial level.
Mutation testing, as Jester does it, is again a technical solution which would work, and it seems @Tom_G has a tool that might do the job. But these tools are (in my experience) extremely slow, because they work by changing the code, running the tests, and analyzing the results over and over again. So even tiny code bases take lots of time, and I wouldn't even think about using it in a real project.
Code Reviews: such bad tests are easily caught by code reviews, and they should be part of every development process anyway.
All this still only scratches the surface. The big question you should ponder is: why do developers feel tempted to create code just to start a certain part of the application? Why don't they write tests for what they want to implement, so that there is almost no need to start parts of the application? Get some training in automated unit testing, and especially TDD/BDD, i.e. a process where you write the tests first.
In my experience it is very likely that you will hear things like: "We can't test this because...". You need to find the real reason why the developers can't, or don't want to, write these tests, which might or might not be the reasons they state. Then fix those reasons, and those abominations of tests will go away all on their own.
What you are looking for is indeed mutation testing.
Regarding tool support, you might also want to look at the Major mutation framework (mutation-testing.org), which is quite efficient and configurable. Major uses a compiler-integrated mutator and gives you great control over what should be mutated and tested. As far as I know, Major does not yet produce graphical reports, but rather data (CSV) files that you can process or visualize in any way you want.
Sounds like you need to consider a coverage tool like JaCoCo; its Gradle plugin provides a report on coverage. I also use the EclEmma Eclipse plugin to obtain the same results, but with fairly nice integration in the IDE.
In my experience, JaCoCo has provided acceptable numbers even when there are no-op unit tests, as it seems able to accurately determine the tested code paths. No-op tests get low or 0% coverage scores, and the score increases as the tests become more complete.
Update
To address the down-voter: perhaps a more appropriate tool for this is PMD. It can be used in an IDE or a build system, and with proper configuration and rule development it could be used to find these incomplete unit tests. I have used it in the past to find methods missing certain security-related annotations.

Why use JUnit for testing?

Maybe my question is a newbie one, but I cannot really understand the circumstances under which I would use JUnit.
Whether I write simple applications or larger ones, I test them with System.out statements and it seems quite easy to me.
Why create test classes with JUnit and unnecessary folders in the project if we still have to call the same methods, check what they return, and then have the overhead of annotating everything?
Why not write a class and test it at once with System.out, rather than creating test classes?
PS. I have never worked on large projects; I am just learning.
So what is the purpose?
That's not testing, that's "looking manually at output" (known in the biz as LMAO). More formally it's known as "looking manually for abnormal output" (LMFAO). (See note below)
Any time you change code, you must run the app and LMFAO for all code affected by those changes. Even in small projects, this is problematic and error-prone.
Now scale up to 50k, 250k, 1m LOC or more, and LMFAO any time you make a code change. Not only is it unpleasant, it's impossible: you've scaled up the combinations of inputs, outputs, flags, conditions, and it's difficult to exercise all possible branches.
Worse, LMFAO might mean visiting pages upon pages of web app, running reports, poring over millions of log lines across dozens of files and machines, reading generated and delivered emails, checking text messages, checking the path of a robot, filling a bottle of soda, aggregating data from a hundred web services, checking the audit trail of a financial transaction... you get the idea. "Output" doesn't mean a few lines of text, "output" means aggregate system behavior.
Lastly, unit and behavior tests define system behavior. Tests can be run by a continuous integration server and checked for correctness. Sure, so can System.outs, but the CI server isn't going to know if one of them is wrong–and if it does, they're unit tests, and you might as well use a framework.
No matter how good we think we are, humans aren't good unit test frameworks or CI servers.
Note: LMAO is testing, but in a very limited sense. It isn't repeatable in any meaningful way across an entire project or as part of a process. It's akin to developing incrementally in a REPL, but never formalizing those incremental tests.
We write tests to verify the correctness of a program's behaviour.
Verifying the correctness of a program's behaviour by inspecting the content of output statements using your eyes is a manual, or more specifically, a visual process.
You could argue that
visual inspection works, I check that the code does what it's meant to
do, for these scenarios and once I can see it's correct we're good to
go.
Now first up, it's great that you are interested in whether or not the code works correctly. That's a good thing. You're ahead of the curve! Sadly, there are problems with this as an approach.
The first problem with visual inspection is that you're a bad welding accident away from never being able to check your code's correctness again.
The second problem is that the pair of eyes used is tightly coupled with the brain of the owner of the eyes. If the author of the code also owns the eyes used in the visual inspection process, the process of verifying correctness has a dependency on the knowledge about the program internalised in the visual inspector's brain.
It is difficult for a new pair of eyes to come in and verify the correctness of the code, simply because they are not partnered up with the brain of the original coder. The owner of the second pair of eyes will have to converse with the original author of the code in order to fully understand the code in question. Conversation as a means of sharing knowledge is notoriously unreliable, and the point is moot if the original coder is unavailable to the new pair of eyes. In that instance the new pair of eyes has to read the original code.
Reading other people's code that is not covered by unit tests is more difficult than reading code that has associated unit tests. At best, reading other people's code is tricky work; at its worst it is the most turgid task in software engineering. There's a reason that employers, when advertising job vacancies, stress that a project is a greenfield (or brand new) one: writing code from scratch is easier than modifying existing code, which makes the advertised job appear more attractive to potential employees.
With unit testing we divide code up into its component parts. For each component we then set out our stall stating how the program should behave. Each unit test tells a story of how that part of the program should act in a specific scenario. Each unit test is like a clause in a contract that describes what should happen from the client code's point of view.
This then means that a new pair of eyes has two strands of live and accurate documentation on the code in question.
First they have the code itself, the implementation, how the code was done; second they have all of the knowledge that the original coder described in a set of formal statements that tell the story of how this code is supposed to behave.
Unit tests capture and formally describe the knowledge that the original author possessed when they implemented the class. They provide a description of how that class behaves when used by a client.
You are correct to question the usefulness of doing this, because it is possible to write unit tests that are useless, do not cover all of the code in question, become stale or out of date, and so on. How do we ensure that unit testing not only mimics but improves upon the process of a knowledgeable, conscientious author visually inspecting their code's output statements at runtime? Write the unit test first, then write the code to make that test pass. When you are finished, let the computers run the tests; they're fast, they're great at repetitive tasks, and they are ideally suited to the job.
Ensure test quality by reviewing the tests each time you touch the code they cover, and run the tests on every build. If a test fails, fix it immediately.
We automate the process of running tests so that they are run each time we do a build of the project. We also automate the generation of code coverage reports that detail what percentage of the code is covered and exercised by tests, and we strive for high percentages. Some companies will prevent code changes from being checked in to source control if they do not come with sufficient unit tests describing any changes in behaviour. Typically a second pair of eyes reviews code changes in conjunction with their author. The reviewer goes through the changes, ensuring that they are understandable and sufficiently covered by tests. So the review process is manual, but once the tests (unit, integration, and possibly user acceptance tests) pass this manual review, they become part of the automatic build process and are run each time a change is checked in. A continuous-integration server carries out this task as part of the build.
Tests that are run automatically maintain the integrity of the code's behaviour and help prevent future changes to the code base from breaking it.
Finally, having tests allows you to aggressively refactor code, because you can make big code improvements safe in the knowledge that your changes do not break existing tests.
There is a caveat to test-driven development, and that is that you have to write code with an eye to making it testable. This involves coding to interfaces and using techniques such as dependency injection to instantiate collaborating objects. Check out the work of Kent Beck, who describes TDD very well. Look up coding to interfaces and study design patterns.
When you test using something like System.out, you're only testing a small subset of possible use cases. This is not very thorough when you're dealing with systems that can accept a nearly infinite number of different inputs.
Unit tests are designed to allow you to quickly run tests on your application using a very large and diverse set of different data inputs. Additionally, the best unit tests also account for boundary cases, such as the data inputs that lie right on the edge of what is considered valid.
For a human being to test all of these different inputs could take weeks whereas it could take minutes for a machine.
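A sketch of that scale difference, with an invented encoder/decoder pair:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class RoundTripTest {
    // Hypothetical pair under test: base-36 encoding and decoding.
    static String encode(int n) { return Integer.toString(n, 36); }
    static int decode(String s) { return Integer.parseInt(s, 36); }

    // A machine checks two million inputs in seconds; a human reading
    // System.out lines could not inspect even a fraction of them.
    @Test
    public void roundTripsTwoMillionValues() {
        for (int n = -1_000_000; n <= 1_000_000; n++) {
            assertEquals(n, decode(encode(n)));
        }
    }
}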
Think of it like this: You're also not "testing" something that will be static. Your application is most likely going through constant changes. Therefore, these unit tests are designed to run at different points in the compile or deployment cycle. Perhaps the biggest advantage is this:
If you break something in your code, you'll know about it right now, not after you deployed, not when a QA tester catches a bug, not when your clients have cancelled. You'll also have a better chance of fixing the glitch immediately, since it's clear that the thing that broke the part of the code in question most likely happened since your last compile. Thus, the amount of investigative work required to fix the problem is greatly reduced.
Let me add some other things System.out can NOT do:
Make each test case independent (this is important).
JUnit can: a new test-case instance is created for each test, and @Before is called before each one.
Separate testing code from source.
JUnit can.
Integrate with CI.
JUnit can, together with Ant and Maven.
Arrange and combine test cases easily.
JUnit supports @Ignore and test suites.
Check results easily.
JUnit offers many assert methods (assertEquals, assertSame, ...).
Use mocks and stubs to focus on the module under test.
JUnit works with them: using mocks and stubs, you set up the correct fixture and focus on the logic of the module under test.
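A small sketch touching the first and last points on that list (hypothetical cart, JUnit 4):

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Before;
import org.junit.Test;

public class ShoppingCartTest {
    private List<String> cart;

    // Runs before every test method on a fresh test-class instance,
    // so each test case starts from a clean fixture and stays independent.
    @Before
    public void setUp() {
        cart = new ArrayList<>();
    }

    @Test
    public void startsEmpty() {
        assertEquals(0, cart.size());
    }

    @Test
    public void addingAnItemGrowsTheCart() {
        cart.add("book");
        // Passes regardless of test order, because @Before reset the cart.
        assertEquals(1, cart.size());
    }
}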
Unit tests ensure that code works as intended. They are also very helpful for ensuring that the code still works as intended when you later have to change it to build new functionality or to fix a bug. Having high test coverage of your code allows you to continue developing features without having to perform lots of manual tests.
Your manual approach with System.out is good, but not the best one. It is a one-time test that you perform. In the real world, requirements keep changing, and most of the time you make a lot of modifications to existing functions and classes; you won't manually re-test every already-written piece of code each time.
There are also some more advanced features in JUnit, such as:
Assert statements
JUnit provides methods to test for certain conditions. These methods typically start with assert and allow you to specify the error message, the expected result, and the actual result.
Some of these methods are
fail([message]) - Lets the test fail. Might be used to check that a certain part of the code is not reached. Or to have failing test before the test code is implemented.
assertTrue(true) / assertTrue(false) - Will always be true / false. Can be used to predefine a test result, if the test is not yet implemented.
assertTrue([message,] condition) - Checks that the boolean condition is true.
assertEquals([message,] expected, actual) - Tests whether two values are equal (according to the equals method if implemented, otherwise using == reference comparison). Note: For arrays, it is the reference that is checked, and not the contents, use assertArrayEquals([message,] expected, actual) for that.
assertEquals([message,] expected, actual, delta) - Tests whether two float or double values are in a certain distance from each other, controlled by the delta value.
assertNull([message,] object) - Checks that the object is null.
and so on. See the JUnit Javadoc for the full list.
Suites
With Test suites, you can in a sense combine multiple test classes into a single unit so you can execute them all at once. A simple example, combining the test classes MyClassTest and MySecondClassTest into one Suite called AllTests:
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;
@RunWith(Suite.class)
@SuiteClasses({ MyClassTest.class, MySecondClassTest.class })
public class AllTests { }
The main advantage of JUnit is that it is automated, rather than you having to manually check with your print-outs. Each test you write stays with your system. This means that if you make a change with an unexpected side effect, your test will catch it and fail, rather than you having to remember to manually test everything after each change.
JUnit is a unit testing framework for the Java programming language. It is important in test-driven development, and is one of a family of unit testing frameworks collectively known as xUnit.
JUnit promotes the idea of "first testing, then coding", which emphasizes setting up the test data for a piece of code that is tested first and implemented afterwards. This approach is like "test a little, code a little, test a little, code a little..."; it increases programmer productivity and the stability of program code, which reduces programmer stress and the time spent on debugging.
Features
JUnit is an open source framework which is used for writing & running tests.
Provides Annotation to identify the test methods.
Provides Assertions for testing expected results.
Provides Test runners for running tests.
JUnit tests allow you to write code faster while increasing quality.
JUnit is elegantly simple. It is less complex & takes less time.
JUnit tests can be run automatically and they check their own results and provide immediate feedback. There's no need to manually comb through a report of test results.
JUnit tests can be organized into test suites containing test cases and even other test suites.
JUnit shows test progress in a bar that is green while the tests are going fine and turns red when a test fails.
I have a slightly different perspective on why JUnit is needed.
You can actually write all the test cases yourself, but it's cumbersome. Here are the problems:
Instead of System.out we could write if (value1.equals(value2)) and return 0, -1, or an error message. In this case we need a "main" test class which runs all these methods, checks the results, and keeps track of which test cases failed and which passed.
If you want to add some more tests, you need to add them to this "main" test class as well: changes to existing code. If you want to auto-detect test cases from test classes, you need reflection.
All your tests, and the main class that runs them, are not detected by Eclipse, so you need to write custom debug/run configurations to run these tests. And you still don't see those pretty green/red outputs.
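A hypothetical sketch of such a hand-rolled "main" class, to show the boilerplate involved:

public class MainTestRunner {
    static int passed = 0, failed = 0;

    // The hand-written equivalent of assertEquals.
    static void check(String name, Object expected, Object actual) {
        if (expected.equals(actual)) {
            passed++;
        } else {
            failed++;
            System.out.println("FAIL " + name + ": expected " + expected + " but was " + actual);
        }
    }

    // Every new test must be registered here by hand; there is no
    // auto-detection, no IDE integration, and no green/red bar.
    public static void main(String[] args) {
        check("upper-cases", "ABC", "abc".toUpperCase());
        check("trims", "x", "  x ".trim());
        System.out.println(passed + " passed, " + failed + " failed");
    }
}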
Here is what JUnit is doing:
It has assertXXX() methods which produce helpful error messages from failed conditions and communicate the results to the runner.
The "main" class is called a runner, and it is provided by JUnit, so we don't have to write one. It detects the test methods automatically by reflection: if you add new tests with the @Test annotation, they are picked up automatically.
JUnit has Eclipse integration as well as Maven/Gradle integration, so it is easy to run the tests and you will not have to write custom run configurations.
I'm not an expert in JUnit, so that's what I have understood so far; I will add more in the future.
You cannot really write test cases without a testing framework, or else you would have to write your own testing framework to do your test cases justice.
Here is some information about the JUnit framework; apart from it, you can also use the TestNG framework.
What is Junit?
JUnit is a widely used testing framework for the Java programming language. You can use this automation framework for both unit testing and UI testing. It helps us define the flow of execution of our code with different annotations. JUnit is built on the idea of "first testing, then coding", which helps us increase the productivity of test cases and the stability of the code.
Important features of JUnit testing:
It is open source testing framework allowing users to write and run test cases effectively.
Provides various types of annotations to identify test methods.
Provides different Types of Assertions to verify the results of test case execution.
It also gives test runners for running tests effectively.
It is very simple and hence saves time.
It provides ways to organize your test cases in the form of test suites.
It gives test case results in simple and elegant way.
You can integrate JUnit with Eclipse, Android Studio, Maven, Ant, Gradle, and Jenkins.
JUnit is the approach usually accepted by Java developers: they provide a known input to a function and decide accordingly whether the code is written correctly; if a test case fails, a different implementation approach may be needed.
JUnit makes development fast and helps drive the function toward zero defects.
JUNIT: OBSERVE AND ADJUST
Here is my perspective on JUnit.
JUnit can be used to:
1) Observe a system's behaviour when a new unit is added to that system.
2) Make adjustments in the system to welcome the "new" unit.
What? Exactly.
A real-life example:
When your relatives visit your college hostel room,
1) You will pretend to be more responsible.
2) You will keep everything where it should be: shoes on the shoe rack, not on the chair; clothes in the cupboard, not on the chair.
3) You will get rid of all the contraband.
4) You will start a clean-up on every device you possess.
In programming terms
System: your code.
UNIT: the new functionality.
As the JUnit framework is used with the Java language, JUNIT = JAVA UNIT (maybe).
Suppose you already have well-tested, bulletproof code, but a new requirement comes in and you have to add it to your code. This new requirement may break your code for some input (test case).
The easy way to adapt to this change is unit testing (JUnit).
For that, you should write multiple test cases for your code as you build your codebase. And whenever a new requirement comes in, you just run all the test cases to see if any of them fail.
If none do, you are a BadA** artist and you are ready to deploy the new code.
If any of the test cases fail, you change your code and run the test cases again until you get green status.
