I would like to ask about unit testing in web development. The idea of unit testing is great, but does it really bring value in the context of a web application? The second part of my question is about TDD: if one creates an integration test before the actual code, can this approach still be called "Test Driven Development"?
1. Assumptions
By definition, a unit test should test code on only one service layer. If a test exercises code across multiple layers, we have an integration test.
2. Argument
2.1 No Algorithms
There are not many algorithms in a web application. It's not like building a 3D physics engine, where every method does something challenging and hard to debug. A web application is mostly about integration and generating HTML.
The biggest challenges for a web application are:
- clean code (a universal problem for any software, but not testable)
- data consistency
- integration (programming language, file system, configuration files, web server, caching systems, database, search engine, external APIs - all of these systems have to work together on every request)
If you want to build a unit test for every class in a web app, you will (in most cases) be testing: filling arrays, string concatenation and the language's native functions.
2.2 Cost
Precisely because a web app is all about integration, there are always multiple dependencies. You have to mock many different classes, and writing even a small test might actually be a big job. What's even worse, it's not only about the test: the software needs to be testable. That means one has to be able to inject dependencies into almost every class. It's not always possible to inject dependencies without creating an additional layer between two classes (or systems). That complicates the code and makes it more expensive to work with.
3. Integration Test
If web development is all about integration, why not test it? There are a few counterarguments.
3.1 Integration test says "something is broken" but doesn't say where
That really comes down to: how much time does it take to find a bug when an integration test fails, compared to the time required to make code "unit-testable" and more complicated (I guess this is subjective)? In my experience it never took long to find the source of a problem.
3.2 You can run unit test on any environment, it's hard to do with an integration test
Yes, it is hard if you want to run an integration test without a database - but usually there is a database. As long as you operate on fixed data and clean up after each test, it should be fine. Transactional databases are perfect for this task: open a transaction, insert data, test, roll back.
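The "open a transaction, insert data, test, roll back" pattern can be sketched end to end. This is a hedged illustration: the FakeDb class below is an invented in-memory stand-in so the example is self-contained; real code would drive a java.sql.Connection with setAutoCommit(false) and rollback() instead.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the transactional integration-test pattern.
// FakeDb stands in for a real transactional database; it is an
// assumption made purely so the example is self-contained.
public class TransactionalTestDemo {
    static class FakeDb {
        final List<String> rows = new ArrayList<>();
        private List<String> snapshot;
        void begin()            { snapshot = new ArrayList<>(rows); } // open transaction
        void insert(String row) { rows.add(row); }
        void rollback()         { rows.clear(); rows.addAll(snapshot); } // undo everything
    }

    // Runs one integration test inside a transaction that is always rolled
    // back, so every test starts from the same clean state.
    static void runInTransaction(FakeDb db, Runnable test) {
        db.begin();
        try {
            test.run();
        } finally {
            db.rollback();
        }
    }

    public static void main(String[] args) {
        FakeDb db = new FakeDb();
        db.insert("permanent-fixture-row");

        runInTransaction(db, () -> {
            db.insert("user:alice"); // test-specific data, visible inside the test
            if (db.rows.size() != 2) throw new AssertionError("insert not visible");
        });

        // after rollback the test data is gone, the fixture remains
        System.out.println(db.rows); // [permanent-fixture-row]
    }
}
```

The same shape works with a real database: begin a transaction in setup, roll it back in teardown, and no test ever sees another test's data.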
3.3 Integration Tests are hard to maintain
I can't comment on that, because all my tests work well and I have never had a problem with them.
4. Create good Unit Tests!
The whole argument can be countered with "If you create your unit tests right, then you don't have any problems." Can't the same be said of integration tests? If it's easier to create an integration test, why not stick with it and just make it right?
Don't get me wrong, I'm not against unit testing. It's a great idea and I recommend it to everybody. I'm just trying to understand: does it really fit web development? I would like to hear your opinions and experiences.
This is a damn good question, and one that more developers should be asking themselves.
I don't think this only applies to web development, but I think it's a good example to base the discussion on.
When deciding on your testing strategies, the points you raised are very valid.
In a commercial world, cost (in terms of time) is likely the most important factor.
I always follow some simple rules to keep my testing strategies realistic.
Unit test important, unit testable "libs" (Auth, Billing, Service Abstractions, etc)
Unit test models, assuming you have any, where possible (and it really should be possible!)
Integration test important endpoints of your application (totally depends on the app)
Once you've got these three points nailed, given your cost constraints you could potentially continue to test anything else that makes sense, in the way that makes sense.
Unit testing and integration testing aren't mutually exclusive, and unless you're building a beautifully unit testable library, you should do both.
The short answer is yes. Unit tests are valuable as are integration tests.
Personally I'm not that good at doing TDD, but what I've noticed is that it greatly improves design because I have to think first and then code.
That's invaluable, at least for me, but it also takes a lot of getting used to.
This, I think, is the main reason to why it is so widely misunderstood.
Let's go through your points:
No Algorithms (but what you mean is few):
Well as stated in the comments, that depends on the application. This actually has nothing to do with TDD. Having few algorithms is not an argument to not test them. You may have several different cases with lots of different states. Wouldn't it be nice to know that they work as intended?
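To make "lots of different states" concrete, here is a minimal table-driven sketch; the shippingCost rule is invented for illustration, but it shows how even "no algorithm" business code has enough branches to be worth pinning down.

```java
// Table-driven check of a branchy business rule. The shippingCost rule
// itself is hypothetical; the point is that code with several states is
// worth testing even when it contains no real "algorithm".
public class StateCases {
    static int shippingCost(int orderTotal, boolean expressDelivery) {
        if (orderTotal >= 100) return expressDelivery ? 10 : 0; // free standard shipping over 100
        return expressDelivery ? 25 : 5;
    }

    public static void main(String[] args) {
        int[][] cases = {
            // orderTotal, express (0/1), expected cost
            {150, 0, 0},
            {150, 1, 10},
            {40,  0, 5},
            {40,  1, 25},
        };
        for (int[] c : cases) {
            int got = shippingCost(c[0], c[1] == 1);
            if (got != c[2]) throw new AssertionError("case " + c[0] + "/" + c[1]);
        }
        System.out.println("all cases pass");
    }
}
```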
Cost
Sure it will cost, and if it is a small project, you might not get any value from TDD if you or your developers are new to it. Then again a small project might be just the thing to get started with it, so you are up to speed for the next, bigger project. This is a calculation you must do yourself. We are programmers, not economists.
Integration Test
Test that too!
Integration test says "something is broken" but doesn't say where
I'm sorry but I don't understand this one.
Hard to maintain
If it is hard to maintain, you're doing it wrong. Write the test first; when changing the code, change the test before the implementation. Failing tests during development are a must for TDD. Actually, don't quote me on that one; it is just my limited understanding of TDD.
I find this to be a very nice quote about unit tests:
Unit tests should be prescriptive, not descriptive. In other words, unit tests should define what your code is supposed to do, not illustrate after the fact what your code does.
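One way to read that quote in code: the assertions below are (notionally) written first and define what the function is supposed to do; the implementation merely satisfies them. The slugify example is invented for illustration.

```java
// A prescriptive test: these assertions *define* what slugify is supposed
// to do, rather than describing after the fact what it happens to do.
// The slugify function is a hypothetical example.
public class PrescriptiveTestDemo {
    static String slugify(String title) {
        return title.trim().toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")  // collapse non-alphanumerics to one dash
                    .replaceAll("(^-|-$)", "");     // strip leading/trailing dashes
    }

    public static void main(String[] args) {
        // the specification, expressed as tests
        if (!slugify("Hello World").equals("hello-world")) throw new AssertionError();
        if (!slugify("  TDD & Web Apps!  ").equals("tdd-web-apps")) throw new AssertionError();
        if (!slugify("already-a-slug").equals("already-a-slug")) throw new AssertionError();
        System.out.println("implementation satisfies the specification");
    }
}
```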
What it boils down to is that integration tests and unit tests are two different things. TDD is a methodology for development; integration tests are more a validation that the application works as expected from a user's vantage point.
Edit
The community wiki says this about integration testing:
In an integration test, all input modules are modules that have already been unit tested. These modules are grouped in larger aggregates and integration tests (defined in an integration test plan) are applied to those aggregates. After a successful integration test, the integrated system is ready for system testing.
So actually my understanding of integration testing was wrong from the start, but I don't think my answer is too far off, except for the last paragraph.
An integration test takes more time to execute than a unit test.
"...all about integration there are always multiple dependencies" -> if you are coding on a dependent unit without completing the others, then you can't do integration test yet. If you want to make sure your code is working at this point, you you can do unit testing.
If each member of your team is developing their own dependent unit, they can't do integration testing yet because the required units are still being developed by others. But at least they can make sure their own code works if they perform unit tests before committing to the repository.
Related
Developers usually write test cases with JUnit.
Testers usually write test cases with Cucumber.
I am confused about how these (Cucumber and JUnit) are different if, in the end, both are meant for validating the logic of our code!
Is my supposition correct, if I say...
simple method testing is done with JUnit
for tough scenarios, we have Cucumber
Unit tests, as their name suggests, are designed to test small units of code. A unit test is written in code, this makes it less readable to the business, but also makes it very powerful and potentially very fast.
Cucumber is a tool used to describe behaviour. Each cuke will exercise a much larger chunk of code.
The two sorts of tests are very different. So different that most Cucumber enthusiasts don't want to use the word test (https://cucumber.io/blog/collaboration/the-worlds-most-misunderstood-collaboration-tool/)
Scenarios are not at all about validating the logic of your code; they have nothing to do with code. They are about describing the behaviour of your application and supporting the development of that behaviour. After the scenario and the behaviour have been implemented, they are used to confirm that the behaviour is 'probably' still working as intended.
Unit tests are all about code, they are written in code and (ideally) connect directly to code. They definitely can be used to validate the logic of your code.
There are some other major differences
Runtime
Scenarios are slow
Unit tests are fast (unless they are badly written)
This is not a small difference. An optimally written unit test can be 1000 times faster than an optimally written scenario.
Expressiveness and Power
scenarios use natural language to express themselves. They use abstraction and naming to wield power. Well-written cukes are very readable by the business
unit tests use code to express themselves and to wield power. They are much more powerful, but it's much harder for them to express themselves. Unit tests cannot be read by their stakeholders; even the output of beautifully written unit tests is very hard for a non-coder to read
Summary
You have two very different things, working in different ways with different raw material (natural language vs code), so it's not surprising you have different tools.
Unfortunately we have lots of people using Cucumber as a test tool to write unit-like tests, and lots of people using unit test tools to write slow tests that exercise large chunks of code. Both of these things are possible, maybe even viable, but they are far from optimal.
Cucumber is a Behavior Driven Development (BDD) framework, with which you can check the behavior of a piece of code.
JUnit is a lower-level unit testing tool that allows developers to test every possible part of the code.
You might prefer Cucumber over JUnit for clarity.
So I'll give an example explaining the difference between the domains of Cucumber and JUnit.
Consider a web application as an example:
JUnit tests will cover the code logic of methods in the controller, service and DAO layers.
Cucumber tests for this application will contain cases exercising the various REST requests exposed by the application's API (test requests for GET, POST, PUT and DELETE, with multiple data sets for each request, as the case requires).
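A minimal sketch of the JUnit side of that split, assuming a hypothetical UserService/UserDao pair: the DAO is stubbed out, so the test exercises only the service-layer logic, with no HTTP or database involved (that other end of the spectrum is what the Cucumber/REST cases cover).

```java
import java.util.Map;

// Service-layer test with the DAO stubbed. The UserDao/UserService names
// are invented for illustration, not taken from the original post.
public class ServiceLayerTestSketch {
    interface UserDao { String findName(int id); }

    static class UserService {
        private final UserDao dao;
        UserService(UserDao dao) { this.dao = dao; }
        String greeting(int id) {
            String name = dao.findName(id);
            return name == null ? "Hello, guest" : "Hello, " + name;
        }
    }

    public static void main(String[] args) {
        // stub DAO: no database involved, which is what keeps this a *unit* test
        UserDao stub = id -> Map.of(1, "Alice").get(id);
        UserService service = new UserService(stub);

        if (!service.greeting(1).equals("Hello, Alice")) throw new AssertionError();
        if (!service.greeting(2).equals("Hello, guest")) throw new AssertionError();
        System.out.println("service-layer logic verified without HTTP or a DB");
    }
}
```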
How to write a unit test framework?
Can anyone suggest some good reading?
I wish to work on basic building blocks that we use as programmers, so I am thinking of working on developing a unit test framework for Java.
I don't intend to write a framework that will replace JUnit;
my intention is to gain some experience by doing a worthy project.
There are several books that describe how to build a unit test framework. One of those is Test-Driven Development: By Example (TDD) by Kent Beck. Another book you might look at is xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
Why do you want to build your own unit test framework?
Which ones have you tried and what did you find that was missing?
If (as your comments suggest) your objective is to learn about the factors that go into making a good unit test framework by doing it yourself, then chapters 18-24 (Part II: The xUnit Example) of the TDD book show how it can be done in Python. Adapting that to Java would probably teach you quite a lot about Python, unit testing frameworks and possibly Java too.
It will still be valuable to you to have some experience with some unit test framework so that you can compare what you produce with what others have produced. Who knows, you might have some fundamental insight that they've missed and you may improve things for everyone. (It isn't very likely, I'm sorry to say, but it is possible.)
Note that the TDD people are quite adamant that TDD does not work well with databases. That is a nuisance to me as my work is centred on DBMS development; it means I have to adapt the techniques usually espoused in the literature to accommodate the realities of 'testing whether the DBMS works does mean testing against a DBMS'. I believe that the primary reason for their concern is that setting up a database to a known state takes time, and therefore makes testing slower. I can understand that concern - it is a practical problem.
Basically, it consists of three parts:
preparing set of tests
running tests
making reports
Preparing the set of tests means that your framework should collect all the tests you want to run. You can specify these tests (usually classes with test methods that satisfy some naming convention, are marked with a certain annotation, or implement a marker interface) in a separate file (Java or XML), or you can find them dynamically (by searching the classpath).
If you choose dynamic searching, you'll probably have to use a library that can analyse Java bytecode. Otherwise you'll have to load all the classes on your classpath, which (a) takes a lot of time and (b) executes all static initializers of the loaded classes, which can cause unexpected test results.
Running the tests can vary significantly depending on the features of your framework. The simplest way is just calling the test methods inside a try/catch block, then analysing and saving the results (you have to handle two situations: the assertion exception was thrown, or it was not).
Making reports is all about printing the analysed results in XML/HTML/wiki or whatever other format.
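The three parts above can be sketched in a few dozen lines. This is a hedged toy, not JUnit: tests are discovered by the naming convention "public method starting with test", run inside a try/catch, and summarized in a report at the end.

```java
import java.lang.reflect.Method;

// Minimal unit test runner: collect tests (methods named test*), run each
// in a try/catch, report results. Real frameworks add annotations,
// classpath scanning, setup/teardown hooks and richer reports.
public class MiniRunner {
    /** Runs all test* methods of testClass; returns {passed, failed}. */
    public static int[] run(Class<?> testClass) throws Exception {
        int passed = 0, failed = 0;
        Object instance = testClass.getDeclaredConstructor().newInstance();
        for (Method m : testClass.getDeclaredMethods()) {
            if (!m.getName().startsWith("test")) continue;  // the "collecting" step
            try {
                m.invoke(instance);                          // the "running" step
                passed++;
                System.out.println("PASS " + m.getName());
            } catch (Exception e) {                          // failures arrive wrapped
                failed++;
                System.out.println("FAIL " + m.getName() + ": " + e.getCause());
            }
        }
        System.out.printf("report: %d passed, %d failed%n", passed, failed);
        return new int[] {passed, failed};
    }

    // a sample test class the runner can discover
    public static class SampleTests {
        public void testAddition() { if (1 + 1 != 2) throw new AssertionError(); }
        public void testBroken()   { throw new AssertionError("always fails"); }
    }

    public static void main(String[] args) throws Exception {
        run(SampleTests.class);
    }
}
```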
The Cook's Tour is written by Kent Beck (I believe; it's not attributed), and describes the thought process that went into writing JUnit. I would suggest reading it and considering how you might choose an alternate line of development.
What is the most commonly used approach for testing in Java projects (iterative development)?
My suggestion is that you should have a healthy mix of automated and manual testing.
AUTOMATED TESTING
Unit Testing
Use a unit testing framework such as JUnit (or NUnit on .NET) to test your classes, functions and the interaction between them.
http://www.nunit.org/index.php
Automated Functional Testing
If possible, you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a tool for it. If you are developing web sites/applications, you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been instructed to treat as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being, and it teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn. And it's fun!
http://www.satisfice.com/articles/et-article.pdf
Personal experience would suggest the most popular approach is none at all.
I've worked with TDD (Test Driven Development) before, and my feelings towards it are mixed. Essentially, you write your tests before you write your code, and write your code to satisfy the requirements of the test. TDD forces you to have an extremely clear idea about your requirements prior to starting. An added benefit is that, once you are done development, assuming you followed closely to TDD procedures, you'd have a complete set of test suites to go with the code. The down side is that it takes an extremely long time, and at times, you'd just want to skip a couple of steps (e.g. maybe writing code before tests like a sane person would prefer to do).
More can be read here (wiki link)
Unit testing?
Contract-based programming, a la Eiffel?
Waterfall model?
Different shops do different things. If there were one method to rule them all, you wouldn't be asking this question.
On the premise of doing any testing at all, I would say that testing with JUnit is the common approach in Java.
Although most tests are written with JUnit, they tend to be integration tests rather than unit tests (meaning they don't test one thing in isolation, but several things together).
Additionally, tests are mostly not written test-first, but in parallel with or after a specific feature has been implemented.
If you join a team that makes more advanced use of testing, you will probably find a CI server (CruiseControl, Hudson) running the tests at least once a day during a nightly build.
In order of most commonly used approach:
1. No tests at all
2. Manual tests: running the app, clicking or providing input, checking the results
3. Trying to write some JUnit tests, forgetting about them, and sliding back to 2 and 1
4. Starting with TDD, finding that it's hard, then sliding back to 3, 2 and 1
On the theoretical side there are loads of ways to properly test code.
If you are looking for something practical, take a look at the
Clean Code Talks. Watch the whole series, about 5 talks (can't post more than one link).
My suggestion for testing a Java project is to keep it simple.
Steps:
Manual testing: achieve a stable product.
Automation testing: maintain the quality of the product.
Report generation and reporting: let people know the quality of the product.
Continuous integration: make it a completely automated, continuous pipeline.
When a developer commits functionality, start testing it module by module. Compare the actual output with the expected output, and log issues against it.
When the developer has resolved the issues, start with integration testing; also retest the resolved issues and check whether any regression has occurred due to the fixes.
At last, when the product has become stable, start automating the modules.
You can also approach automation step by step:
1. Automate the modules.
2. Generate reports and send mail for a product health check.
3. Run continuous integration and automated testing on a private server or a local machine.
I would like to make integration tests and system tests for my applications, but producing good integration and system tests has often required so much effort that I have not bothered. The few times I tried, I wrote custom, application-specific test harnesses, which felt like re-inventing the wheel each time. I wonder if this is the wrong approach. Is there a "standard" approach to integration and full system testing?
EDIT: To clarify, it's automated tests, for desktop and web applications. Ideally a complete test suite that exercises the full functionality of the application.
If by "make integration tests and system tests" you mean automated tests, then the answer is no, there is no standard approach. What approach choose will depend on:
the characteristics of the application (e.g. does it have a GUI? is it read-only? how many external dependencies does it have? etc.)
what you are trying to test (maybe you only need GUI tests, or perhaps the opposite is true and you don't really care about the GUI but the internal logic is critical)
how quickly you want to see the results (e.g. the more you stub out the faster your tests become)
the skillsets on your team
Personally, I love any approach that integrates with JUnit. JUnit is a fantastic framework which is well supported and easily tied into a continuous integration server. Here are a few possible approaches:
Selenium with JUnit - Fantastic tool for driving web applications
Concordion - for all application types. Integrates with JUnit and allows plain english specifications of tests. Your 'fixture'/test code will hook into key words in the specification and assert or perform actions on them.
FEST - for swing applications, again it integrates with JUnit (see a theme yet? ;) (more choices here)
The above examples provide a tremendous amount of out-of-the-box help for testing. Of course, they still require effort to wire into your application and maintain, but the advantages are well worth it. In addition to the above, you may need to think about how to stub or mock out areas of your application. Perhaps you want to do all of your testing "under the GUI" or "above the database". In the first scenario, you'll need your tests to start at the points in your code where the GUI would interact with it; in the latter, you'll need to stub out the services which interact with your database.
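A minimal sketch of testing "under the GUI", with hypothetical LoginPresenter/View names: the test drives the same entry point a button click would, with the view replaced by a recording fake.

```java
import java.util.ArrayList;
import java.util.List;

// Testing "under the GUI": the test calls the same method a button click
// would trigger, while the real view is replaced by a fake that records
// output. LoginPresenter/View are invented names for illustration.
public class UnderTheGuiSketch {
    interface View { void show(String message); }

    static class LoginPresenter {
        private final View view;
        LoginPresenter(View view) { this.view = view; }
        void submit(String user, String password) {   // what a button click would invoke
            if (user.isEmpty() || password.length() < 8) view.show("invalid credentials");
            else view.show("welcome " + user);
        }
    }

    public static void main(String[] args) {
        List<String> shown = new ArrayList<>();
        LoginPresenter p = new LoginPresenter(shown::add);  // fake view records messages

        p.submit("alice", "s3cretpass");
        p.submit("", "x");

        if (!shown.equals(List.of("welcome alice", "invalid credentials")))
            throw new AssertionError(shown);
        System.out.println("presenter logic verified without any real GUI");
    }
}
```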
Point being, there are lots of ways to do this. It's best to start with a very clear understanding of what you want to get out of your testing. Then learn what existing frameworks are out there to help you, based on what you want to test. And finally, don't try to conquer the world in a night: start small and get a few tests running. Get the green bar (always a joy to see!). Get a stable, proven platform for testing and make sure you're happy with it. Then add more as you go along.
I'm looking into how best to automate integration tests (by which I mean complete use cases entirely within our application).
The questions
Correct Approach for Unit Testing Complex Interactions
What are the pros and cons of automated Unit Tests vs automated Integration tests?
cover the "why" and "what" aspects very well.
The question Automated integration testing a C++ app with a database implies that xUnit frameworks are a good way to create and execute integration tests. Are xUnit's really well suited to that task? Are there common gotcha's to be aware of? A good approach to follow?
Are there better approaches (short of possibly purchasing the HP / former Mercury tool suite)?
My specific environment for this project is Java / SpringSource / Hibernate but am also interested in suggestions for the .Net platform.
The question Automated integration testing a C++ app with a database implies that xUnit frameworks are a good way to create and execute integration tests. Are xUnit's really well suited to that task? Are there common gotcha's to be aware of? A good approach to follow?
JUnit and TestNG are primarily unit testing frameworks, but they can be used for integration testing as well. So to me the answer is yes, they are well suited for integration testing, e.g. testing the service --> domain --> persistence --> database layers (I'll actually come back to integration testing later). One of the tricky things when doing integration tests that involve the database is the data. A tool such as DbUnit can help; it is typically used to put the database in a known state before running each test (and to perform asserts on the database content). Another approach is to run the tests in a transaction and roll the transaction back at the end of each test. Spring makes that very easy, as does the Unitils library. In any case, a best practice is to avoid interdependent tests as much as possible (as mentioned in the link you gave); they are just a nightmare.
Are there better approaches (short of possibly purchasing the HP / former Mercury tool suite)?
To my knowledge, such tools are more end-to-end testing tools, i.e. functional testing tools. So if by integration tests (which to me means testing several components together) you actually mean functional tests (this is my understanding of "complete use cases"), I'd suggest looking at things like:
Abbot, Marathon, Frankenstein, etc for Swing applications
iMacros, Selenium, Cucumber, etc for Web applications
SoapUI for Web Services
Pay special attention to the ones in bold (all are actually great tools, but I'm sure the ones in bold provide good automation capabilities). And because HTTP and SOAP are standards (this doesn't apply to the Swing UI testing tools, of course), these tools are not really Java-specific (even if the tests themselves are written in Java/Groovy for SoapUI). And by the way, Selenium supports many programming languages.
Multi-threading can be a problem, since JUnit won't pick up exceptions in other threads. There are some Java Puzzlers about that. You also need to invent your own ways of doing statistical testing, and the assert methods can be a bit rough. I also think the semantics of JUnit are a bit unclear (for instance, JUnit uses a separate instance per test method). For these reasons I switched to TestNG, which in my opinion is a better-designed framework. The fact that JUnit was designed using extreme programming shows sometimes.
As mentioned, you can do it with xUnit frameworks, but if you want to mix Java and .NET, or web applications and desktop applications, or add some more complexity to the overall picture, then you won't be able to do it with just one unit test framework. So you will need many test tools, many test environments, many test script developers (for example, one for Java unit tests, one for .NET tests)... and this will add complexity, trouble and cost.
As for HP QuickTest Pro, which you mentioned, it should cover most of your needs. I say most, because there may be some areas where it is not suitable (no way to run scripts on applications via Citrix virtualization), but in most cases it will do the job. It is suitable for Java/.NET/web and other things (there are plugins for specialized uses). QTP usually operates on GUI objects, so you can prepare test cases for user use cases, and the tests can be performed the way a normal user would perform the actions (just a bit faster; you have to intentionally slow it down to user speed if needed).
You will probably need one tool, one test environment, and one test script developer (VB). It is expensive, but for a company it should be the better choice in the long run.
And from a company perspective, it will play well with HP Quality Center if you decide to use that for your whole testing division/team. Unless you use IBM solutions; they have their own tool suite as part of their Software Delivery Platform, including Rational Robot.