Can you explain in a few sentences:
Why do we need it / why does it make our life easier?
How to unit-test [simple example in Java]?
When don't we need them / what types of projects can we leave unit testing out of?
Useful links
Why do we need it / why does it make our life easier?
It allows you to check the expected behavior of the piece(s) of code you are testing, serving as a contract that it must satisfy.
It also allows you to safely re-factor code without breaking the functionality (contract) of it.
It allows you to make sure that bug fixes stay fixed by implementing a Unit test after correcting a bug.
It may serve as a way to write decoupled code (if you have testing in mind while writing your code).
How to unit-test [simple example in Java]?
Check out the JUnit website and the JUnit cookbook for details. There isn't much to writing JUnit test cases; coming up with good test cases is usually harder than the actual implementation.
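Since the question asks for a simple Java example, here is a minimal sketch of a JUnit 4 test; the Calculator class and its add method are invented purely to show the shape of a test.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, shown inline so the example is self-contained.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {

    @Test
    public void addReturnsTheSumOfTwoIntegers() {
        Calculator calculator = new Calculator();
        // The assertion documents the expected behaviour, i.e. the contract.
        assertEquals(4, calculator.add(2, 2));
    }
}

Run it from your IDE's JUnit runner or your build tool; a failing assertion tells you exactly which part of the contract was broken.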
When don't we need them / what types of projects can we leave unit testing out of?
Don't try to test every method in a class, but rather focus on testing the functionality of a class. For beans, for example, you won't write tests for the getters and setters.
Links
JUnit - Unit testing
EclEmma - test coverage tool
Unit testing - the Wikipedia article on unit testing
Because that is your proof that the application actually works as intended. You'll also find regression bugs much easier. Testing becomes easier, as you don't have to manually go through every possible application state. And finally, you'll most likely find bugs you didn't even know existed even though you manually tested your code.
Google for JUnit.
Unit tests should always be written; as said, they are your proof that the application works as intended. Some things cannot be tested, or are hard to test, for example a graphical user interface. This doesn't mean that the GUI shouldn't be tested; it only means you should use other tools for it.
See point 2.
It's probably worth reading the Wikipedia article on unit testing, as it will answer most of your questions regarding why. The JUnit web site has resources for writing a Java unit test, of which the JUnit Cookbook should probably be your first stop.
Personally, I write unit tests to test the contract of a method, i.e. the documentation for a particular function. This way you will enter into a cycle of increasing your test coverage and improving documentation. However, you should try to avoid testing:
Other people's code, including the JDK
Non-deterministic code, such as java.util.Random (one way to keep such code testable is sketched below)
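One hedged way to honour the second point is not to test java.util.Random itself, but to inject it so that your own logic stays deterministic under test. The DiceRoller class below is invented for illustration.

import java.util.Random;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Hypothetical example: the randomness source is injected, so the test
// controls it with a fixed seed instead of testing Random itself.
class DiceRoller {
    private final Random random;

    DiceRoller(Random random) {
        this.random = random;
    }

    int roll() {
        return random.nextInt(6) + 1; // value in 1..6
    }
}

public class DiceRollerTest {

    @Test
    public void rollStaysBetweenOneAndSix() {
        DiceRoller roller = new DiceRoller(new Random(42L)); // reproducible sequence
        for (int i = 0; i < 100; i++) {
            int value = roller.roll();
            // We assert on our own logic (the range shift), not on Random's quality.
            assertTrue(value >= 1 && value <= 6);
        }
    }
}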
JUnit is not the only unit testing framework available for Java, so you should evaluate other frameworks like TestNG before diving into it.
In addition to "top-level" frameworks, you will also find quite a few projects covering specific areas, such as:
HTMLUnit for Web
SIPUnit for SIP
SwingUnit for GUI Code
What I would have written is already covered in many of the responses here but I thought I'd add this...
The best article I ever read on when to use/not to use unit tests was on Steve Sanderson's blog. It is an excellent article covering the cost/benefit of unit testing on different parts of your code base (i.e. a compelling argument against 100% coverage).
One of the best books on the hows and whys of unit testing in Java is Pragmatic Unit Testing in Java with JUnit (Andy Hunt & Dave Thomas)
This is how your programming should go (a minimal sketch follows the list):
Decide on an interface (not necessarily a Java interface, but how the method looks to everybody)
Write a test
Code the implementation
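A minimal sketch of that cycle, assuming JUnit 4 and an invented Greeter class: the test is written first against the intended interface, fails (or doesn't compile) until the implementation exists, and then goes green.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class GreeterTest {

    // Steps 1 and 2: the test pins down how the method should look to everybody.
    @Test
    public void greetsByName() {
        Greeter greeter = new Greeter();
        assertEquals("Hello, Ada!", greeter.greet("Ada"));
    }
}

// Step 3: this class is written only after the test above exists,
// and only until the test passes.
class Greeter {
    String greet(String name) {
        return "Hello, " + name + "!";
    }
}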
As its name shows, it is a test for the units in our program. For example, if you have a method that returns the sum of two integers, you test it with two numbers to see whether it works correctly; for 2+2, it should return 4.
We need these tests because big projects involve a lot of programmers, and it is important that they work in harmony. Unit tests act as a proof of correctness for the procedures our programmers write. So a programmer writes his procedure plus its unit test to show that the procedure works correctly, and then commits his changes to the project. These tests help you prevent a lot of bugs before they occur.
I have a step-by-step example of how to write a Java unit test in Eclipse; please see "How To Write Unit Tests".
I hope it helps.
Related
I'd like to ask what the practical difference between Cucumber and JUnit is. I haven't worked with Cucumber at all; I found some documentation, but I'd greatly appreciate some feedback from someone who has worked with both (I'm interested in a high-level overview).
To break it down, what I'm interested in (I'll be using Selenium and not Protractor):
Are there any things that Cucumber can't do vs JUnit?
What's easier to use (coding, how fast you can write the tests)?
Do both work with Page Objects?
Some things that I need to get done:
Test CSS styling
Test page responsiveness
Standard operations on WebElements (clicking, getting data, etc.)
Asserts.
Anything in addition to this is more than welcomed. Greatly appreciate your answer on this, thank you!
JUnit and Cucumber are aiming at different goals. They complement each other rather than replace each other.
Are there any things that Cucumber can't do vs JUnit?
There isn't anything you can do with JUnit that you can't do with Cucumber. And the other way around.
The difference is that while JUnit aims at tests, Cucumber aims at collaboration with non-technical people. Non-technical people will not understand what a unit test does. They will, however, be able to understand and validate an example written in Gherkin.
What's easier to use (coding, how fast you can write the tests)?
There is more overhead when you use Cucumber. You will have to implement each step as a method and not just one test method as you would do if you used JUnit. The readability you gain from expressing examples using plain text is sometimes worth the extra work.
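As a rough illustration of that overhead (and of the readability it buys), here is a hedged sketch assuming the io.cucumber.java bindings; the scenario and the step class are invented.

// A hypothetical feature file (addition.feature), readable by non-technical people:
//
//   Scenario: Adding two numbers
//     Given the numbers 2 and 2
//     When they are added
//     Then the result is 4

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.Assert.assertEquals;

// Glue code: each Gherkin step maps to one Java method, which is the
// extra work compared to a single JUnit test method.
public class AdditionSteps {

    private int left;
    private int right;
    private int result;

    @Given("the numbers {int} and {int}")
    public void theNumbers(int a, int b) {
        left = a;
        right = b;
    }

    @When("they are added")
    public void theyAreAdded() {
        result = left + right;
    }

    @Then("the result is {int}")
    public void theResultIs(int expected) {
        assertEquals(expected, result);
    }
}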
Do both work with Page Objects?
Page Objects are an abstraction for the web page you are verifying. It is a class you write as developer/tester. The Page Objects can be used by both JUnit and Cucumber. In fact, there is no difference between the tools from that perspective.
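For concreteness, a hedged sketch of such a Page Object with Selenium WebDriver; the LoginPage name and the element ids are assumptions. The same class can be called from a JUnit test method or from a Cucumber step definition.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical Page Object: it hides locators and low-level WebDriver calls
// behind intention-revealing methods.
public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public LoginPage enterCredentials(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        return this;
    }

    public void submit() {
        driver.findElement(By.id("login-button")).click();
    }

    public String errorMessage() {
        return driver.findElement(By.id("error")).getText();
    }
}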
The choice to use JUnit or Cucumber is a matter of granularity and audience.
A workflow that works well is to mix the tools. Define examples of how the application should work using BDD (Cucumber, Gherkin). Implement these scenarios using Cucumber. Then use JUnit to work out details that may be important but are not necessarily important to the business stakeholders at a high level. Think of corner cases that are important but are too detailed for your stakeholders.
An image that describes this mix is available here: https://cucumber.io/images/home/bdd-cycle.png
I wrote a blog post a while back where I talk about the right tool for the job: http://www.thinkcode.se/blog/2016/07/25/the-right-tool-for-the-job
The right tool may be Cucumber. It can also be JUnit. It all depends on your audience.
Simply spoken, those two work on completely different levels of abstraction.
JUnit is mainly an automation framework; it gives you the ability to rapidly write down test cases using the Java programming language. It provides annotations that make it easy to declare: "this method over here is a JUnit test". It was intended as a framework for unit tests, but many people also use it to drive full-scale "integration" or "functional" tests.
Cucumber, on the other hand, works on a much higher level of abstraction. You start by writing "test descriptions" in pure text, which leads to probably the key difference: you don't need to know Java to write a Cucumber test (you just need a Java programmer to provide the "glue code" that allows Cucumber to turn your text input into an executable piece of code).
In that sense, you are somewhat asking us to compare apples and turnips here, as one would use these two toolsets for different kinds of problems. But as outlined, you can also use JUnit to drive "bigger" tests, so the main differentiation between these two tools is the level of abstraction you are dealing with.
EDIT: your comment is correct; as those tools are for different "settings", you shouldn't expect that a non-technical person alone will be able to use Cucumber to write good tests covering everything. Cucumber is a nice way to enable non-technical participation in creating tests, but in the end you are solving technical (Java-related) problems, so you need Java programming expertise at some point; either "within the same person" or at least within different people on your team.
Cucumber seems to make things more user-friendly, but I don't think business analysts really care what it is. Ultimately developers have to write unit tests, integration tests, and Cucumber tests (so Cucumber makes little sense for a developer who has already written unit tests and integration tests, and business analysts don't care because they have already specified what they want).
I would like to ask about using unit testing in web development. The idea of unit testing is great, but does it really bring value in the context of a web application? The second part of my question is about TDD: if one creates an integration test before the actual code, can this approach be called "Test Driven Development"?
1. Assumptions
By definition, a unit test should test code on only one service layer. If a test exercises code across multiple layers, we have an integration test.
2. Argument
2.1 No Algorithms
There are not many algorithms in a web application. It's not like building a 3D physics engine where every method does something challenging and hard to debug. A web application is mostly about integration and generating HTML.
The biggest challenges for a web application are:
- clean code (universal problem for any software but not testable)
- data consistency
- integration (Programming Language, File System, Configuration Files, Web Server, Caching Systems, Database, Search Engine, External APIs - all those systems have to work together at a request)
If you want to build a Unit Test for every class in a web app you will be testing (in most cases): filling arrays, string concatenations and languages' native functions.
2.2 Cost
Just because a web app is all about integration, there are always multiple dependencies. You have to mock many different classes, and writing even a small test might actually be a big job. What's even worse, it's not only about the test: the software needs to be testable. That means one has to be able to inject dependencies into almost every class. It's not always possible to inject dependencies without creating an additional layer between two classes (or systems). That complicates the code and makes it more expensive to work with.
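To make the "testable code" point concrete, here is a hedged sketch of constructor injection combined with a Mockito mock; the MailSender and WelcomeService names are invented for illustration.

import org.junit.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

// Hypothetical dependency we do not want to touch in a unit test.
interface MailSender {
    void send(String to, String body);
}

// The dependency is injected through the constructor, which is exactly the
// "additional layer" the paragraph above complains about.
class WelcomeService {
    private final MailSender mailSender;

    WelcomeService(MailSender mailSender) {
        this.mailSender = mailSender;
    }

    void welcome(String email) {
        mailSender.send(email, "Welcome aboard!");
    }
}

public class WelcomeServiceTest {

    @Test
    public void sendsWelcomeMail() {
        MailSender mailSender = mock(MailSender.class);
        new WelcomeService(mailSender).welcome("ada@example.com");
        verify(mailSender).send("ada@example.com", "Welcome aboard!");
    }
}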
3. Integration Test
If web development is all about integration, why not test that? There are a few counter-arguments.
3.1 Integration test says "something is broken" but doesn't say where
That really comes down to: how much time does it take to find a bug when an integration test fails, in comparison to the time required to make code "unit-testable" and more complicated (I guess this is subjective)? In my experience it never took long to find the source of a problem.
3.2 You can run unit test on any environment, it's hard to do with an integration test
Yes, if you want to run an integration test without a database. Usually there is a database. As long as you operate on fixed data and clean up after each test, it should be fine. Transactional databases are perfect for this task: open a transaction, insert data, test, roll back.
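A hedged sketch of that open-transaction/insert/test/rollback pattern with plain JDBC and JUnit 4; the connection URL, the credentials and the customers table are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class CustomerRepositoryIntegrationTest {

    private Connection connection;

    @Before
    public void openTransaction() throws Exception {
        // Hypothetical test database; adjust URL and credentials to your setup.
        connection = DriverManager.getConnection("jdbc:postgresql://localhost/testdb", "test", "test");
        connection.setAutoCommit(false); // everything below happens in one transaction
    }

    @Test
    public void findsInsertedCustomer() throws Exception {
        try (Statement stmt = connection.createStatement()) {
            stmt.executeUpdate("INSERT INTO customers (name) VALUES ('Ada')");
            ResultSet rs = stmt.executeQuery("SELECT count(*) FROM customers WHERE name = 'Ada'");
            rs.next();
            assertTrue(rs.getInt(1) >= 1);
        }
    }

    @After
    public void rollBack() throws Exception {
        connection.rollback(); // leaves the database exactly as it was before the test
        connection.close();
    }
}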
3.3 Integration Tests are hard to maintain
I can't comment on that, because all my tests work well and I have never had a problem with them.
4. Create good Unit Tests!
The whole argument can be attacked with "If you create your unit tests right, then you don't have any problems." Can't the same be said for integration tests? If it's easier to create an integration test, why not stick with it and just make it right?
Don't get me wrong, I'm not against unit tests. It's a great idea and I recommend it to everybody. I'm trying to understand: does it really fit web development? I would like to hear your opinions and experience.
This is a damn good question, and one that more developers should be asking themselves.
I don't think this only applies to web development, but I think it's a good example to base the discussion on.
When deciding on your testing strategies, the points you raised are very valid.
In a commercial world, cost (in terms of time) is likely the most important factor.
I always follow some simple rules to keep my testing strategies realistic.
Unit test important, unit testable "libs" (Auth, Billing, Service Abstractions, etc)
Unit test models, assuming you have any, where possible (and it really should be possible!)
Integration test important endpoints of your application (totally depends on the app)
Once you've got these three points nailed, given your cost constraints you could potentially continue to test anything else that makes sense, in the way that makes sense.
Unit testing and integration testing aren't mutually exclusive, and unless you're building a beautifully unit testable library, you should do both.
The short answer is yes. Unit tests are valuable as are integration tests.
Personally I'm not that good at doing TDD, but what I've noticed is that it greatly improves design because I have to think first and then code.
That's invaluable, at least, for me but it also takes a lot of getting used to.
This, I think, is the main reason to why it is so widely misunderstood.
Let's go through your points:
No Algorithms (but what you mean is few):
Well as stated in the comments, that depends on the application. This actually has nothing to do with TDD. Having few algorithms is not an argument to not test them. You may have several different cases with lots of different states. Wouldn't it be nice to know that they work as intended?
Cost
Sure it will cost, and if it is a small project, you might not get any value from TDD if you or your developers are new to it. Then again a small project might be just the thing to get started with it, so you are up to speed for the next, bigger project. This is a calculation you must do yourself. We are programmers, not economists.
Integration Test
Test that too!
Integration test says "something is broken" but doesn't say where
I'm sorry but I don't understand this one.
Hard to maintain
If it is hard to maintain, you're doing it wrong. Write the test first, and change the test before the implementation when changing the code. Failing tests during development are a must for TDD. Actually, don't quote me on that one; that is just my limited understanding of TDD.
I find this to be a very nice quote about unit-tests
Unit tests should be prescriptive, not descriptive. In other words, unit tests should define what your code is supposed to do, not illustrate after the fact what your code does.
What it boils down to is that integration tests and unit tests are two different things. TDD is a methodology for development; integration tests are more a validation that the application works as expected from a user's vantage point.
Edit
The community wiki says this about integration testing:
In an integration test, all input modules are modules that have already been unit tested. These modules are grouped in larger aggregates and integration tests (defined in an integration test plan) are applied to those aggregates. After a successful integration test, the integrated system is ready for system testing.
So actually my understanding of what integration testing is was wrong from the start, but I don't think my answer is too far off except for the last paragraph.
An integration test takes more time to execute than a unit test.
"...all about integration there are always multiple dependencies" -> if you are coding a dependent unit without the others being complete, then you can't do integration testing yet. If you want to make sure your code is working at this point, you can do unit testing.
If each member of your team is developing their own dependent unit, they can't do integration testing yet because the required units are still being developed by others. But at least they can make sure their code works if they perform unit testing before committing to the repository.
How to write a unit test framework?
Can anyone suggest some good reading?
I wish to work on basic building blocks that we use as programmers, so I am thinking of working on developing a unit test framework for Java.
I don't intend to write a framework that will replace JUnit;
my intention is to gain some experience by doing a worthy project.
There are several books that describe how to build a unit test framework. One of those is Test-Driven Development: By Example (TDD) by Kent Beck. Another book you might look at is xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
Why do you want to build your own unit test framework?
Which ones have you tried and what did you find that was missing?
If (as your comments suggest) your objective is to learn about the factors that go into making a good unit test framework by doing it yourself, then chapters 18-24 (Part II: The xUnit Example) of the TDD book show how it can be done in Python. Adapting that to Java would probably teach you quite a lot about Python, unit testing frameworks and possibly Java too.
It will still be valuable to you to have some experience with some unit test framework so that you can compare what you produce with what others have produced. Who knows, you might have some fundamental insight that they've missed and you may improve things for everyone. (It isn't very likely, I'm sorry to say, but it is possible.)
Note that the TDD people are quite adamant that TDD does not work well with databases. That is a nuisance to me as my work is centred on DBMS development; it means I have to adapt the techniques usually espoused in the literature to accommodate the realities of 'testing whether the DBMS works does mean testing against a DBMS'. I believe that the primary reason for their concern is that setting up a database to a known state takes time, and therefore makes testing slower. I can understand that concern - it is a practical problem.
Basically, it consists of three parts:
preparing set of tests
running tests
making reports
Preparing the set of tests means that your framework should collect all the tests you want to run. You can specify these tests (usually classes with test methods which satisfy some convention, are marked with a certain annotation, or implement a marker interface) in a separate file (Java or XML), or you can find them dynamically (by searching the classpath).
If you choose dynamic searching, then you'll probably have to use a library which can analyse Java bytecode. Otherwise you'll have to load all the classes on your classpath, and this a) takes a lot of time and b) will execute all static initializers of the loaded classes, which can cause unexpected test results.
Running tests can vary significantly depending on features of your framework. The simplest way is just calling test methods inside a try/catch block, analysing and saving results (you have to analyze 2 situations - when the assertion exception was thrown and when it was not thrown).
Making reports is all about printing analyzed results in xml/html/wiki or whatever else format.
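A hedged, minimal sketch of the "running tests" and "making reports" parts: a custom marker annotation, reflective invocation inside try/catch, and a plain-text report. All names are invented; a real framework would add setup/teardown, assertions and proper reporting.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Marker annotation the framework looks for, similar in spirit to @Test.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MyTest {
}

public class TinyRunner {

    // Runs every @MyTest method of the given classes and prints a summary.
    public static void run(Class<?>... testClasses) throws Exception {
        int passed = 0;
        int failed = 0;
        for (Class<?> testClass : testClasses) {
            Object instance = testClass.getDeclaredConstructor().newInstance();
            for (Method method : testClass.getDeclaredMethods()) {
                if (!method.isAnnotationPresent(MyTest.class)) {
                    continue;
                }
                try {
                    method.invoke(instance);
                    passed++;
                } catch (Exception e) {
                    // Reflection wraps the real failure (e.g. an AssertionError)
                    // in an InvocationTargetException; getCause() unwraps it.
                    failed++;
                    System.out.println("FAILED: " + testClass.getSimpleName()
                            + "." + method.getName() + " -> " + e.getCause());
                }
            }
        }
        System.out.println("Passed: " + passed + ", failed: " + failed);
    }
}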
The Cook's Tour is written by Kent Beck (I believe; it's not attributed), and describes the thought process that went into writing JUnit. I would suggest reading it and considering how you might choose an alternate line of development.
I am new to TDD and want to know which kind of test I need for which part of the software.
Currently my team is creating a relatively complex editor on the NetBeans Platform, where I have to integrate an external editor and also write my own code.
So how can I best test the GUI, my own code, and the integration code?
Where do I create coded tests, and where should I use test cases and testers?
We are considering using Scala specs or JUnit for the coded tests.
Thank you for your help!
As a general rule you should consider writing a test case for every function/method you write in your class.
In our TDD process we simply follow the rule that every Java class must have a corresponding JUnit class that contains a @Test method for every method of that Java class.
Then we follow this with "code coverage", which tells us how much of the code we have actually written has been tested. For this I suggest looking at a tool called Cobertura (link here), which provides an easy visual way to examine the percentage of our code that has been tested. It works by simply detecting how many lines of code your JUnit class has tested for every Java class. This will give you a good idea of what functionality in your code has or has not been tested.
We usually aim to have about 80% of the code tested (easier said than done though)
Consider writing test cases for your high-priority functionality first to get started.
We don't usually write JUnit tests for the GUI. I'm not sure if there is a way, but we leave the GUI to be tested by the testers when the application goes through the usual testing phase.
Hope this helps a little.
This is very subjective, but I would suggest cleanly separating your gui and writing solid unit tests against the interface between the gui and the business layer. Don't worry about automating the gui testing. This will naturally force you to create a clean separation of concerns between the layers.
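One hedged way to get that separation is a thin view interface plus a presenter that holds the logic; the names below are invented, and the presenter can be unit-tested without starting the NetBeans GUI at all.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The GUI only has to implement this thin interface.
interface EditorView {
    void showTitle(String title);
}

// All decision making lives here, in plain Java that is easy to test.
class EditorPresenter {
    private final EditorView view;

    EditorPresenter(EditorView view) {
        this.view = view;
    }

    void documentOpened(String fileName) {
        view.showTitle("Editing: " + fileName);
    }
}

public class EditorPresenterTest {

    @Test
    public void showsFileNameInTitle() {
        // A tiny hand-written fake stands in for the real Swing/NetBeans view.
        final String[] shownTitle = new String[1];
        EditorPresenter presenter = new EditorPresenter(new EditorView() {
            @Override
            public void showTitle(String title) {
                shownTitle[0] = title;
            }
        });

        presenter.documentOpened("notes.txt");
        assertEquals("Editing: notes.txt", shownTitle[0]);
    }
}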
The GUI is for human interaction; therefore automated GUI tests will result in epic failures. If you want to test the GUI properly, you'll just need humans.
However, you can test how your app will react to different user actions. In particular, think about testing how your app should cope with user errors (for instance, when letters are entered in a numeric-only input field).
If the distinction between unit tests, functional tests and integration tests is not clear to you, don't worry too much. Try to test as many things as you think necessary (again, don't forget worst-case scenarios). When in doubt, remember that it's better to test too much than too little.
Our project contains 2600 class files and we have decided to start using automated tests.
We know we should have started this 2,599 class files ago, but how and where should a large project start writing tests?
Pick a random class and just go?
What's important to know? Are there any good tools to use?
Write a unit test before you change something, and for every bug you encounter.
In other words, test the functionality you are currently working on. Otherwise, it is going to take a lot of time and effort to write tests for all classes.
Start writing tests for each bug that is filed (Write the test, watch it fail, fix the bug, test again). Also test new features first (they are more likely to have errors). It will be slow in the beginning, but as your test infrastructure grows, it will become easier.
If you are using Java 5 or higher, use JUnit 4.
Learn about the difference of unit tests, integration tests and acceptance tests. Also have a look at mocking.
Other answers have given useful advice, but I miss a clear articulation of the underlying principle: strive to maximize the benefit from your efforts. Covering a large legacy codebase with unit tests to a significant extent takes a lot of time and effort. You want to maximize the outcome of your effort from the start. This not only gives valuable feedback early on, but helps convincing / keeping up the support of both management and fellow developers that the effort is worth it.
So
start with the easiest way to test the broadest functionality, which is typically system/integration tests,
identify the critical core functionality of the system and focus on this,
identify the fastest changing/most unstable part(s) of the system and focus on these.
Don't try unit tests first. Do system tests (end-to-end-tests) that cover large areas of code. Write unit tests for all new code.
This way you stabilize the old code with your system regression tests. As more and more new code comes in, the fraction of code without unit tests begins to fade away. Writing unit tests for old code without the system tests in place will likely break the code and will be too much work to justify, as the old code was not written with testability in mind.
You may find Michael Feathers' book Working Effectively with Legacy Code useful.
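One technique from that book worth naming here is the characterization test: before touching legacy code, write a test that simply records what the code does today, so later refactoring happens against that safety net. A hedged sketch with an invented LegacyPriceCalculator:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LegacyPriceCalculatorCharacterizationTest {

    // Hypothetical legacy class we do not dare to change yet.
    static class LegacyPriceCalculator {
        double priceWithTax(double net) {
            return net * 1.19; // whatever the old code happens to do
        }
    }

    @Test
    public void recordsCurrentBehaviour() {
        LegacyPriceCalculator calculator = new LegacyPriceCalculator();
        // The expected value was obtained by running the existing code,
        // not from a specification; it pins down today's behaviour.
        assertEquals(119.0, calculator.priceWithTax(100.0), 0.0001);
    }
}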
You're fairly dorked now, but write tests that bolster the most critical code you have. For example if you have code that allows functionality based upon users' rights, then that's a biggy - test that. The routine that camelcases a name and writes it to a log file? Not so much.
"If this code broke, how much would it suck" is a good litmus test.
"Our internal maintenance screens would look bad on IE6" is one answer.
"We'd send 10,000,000 emails to each of our customers" is another answer.
Which classes would you test first, hehe.
You might find this book relevant and interesting. The author explains how to do exactly what you ask for here.
http://my.safaribooksonline.com/0131177052
Oh, and one more thing - having an insufficient number of unit tests is far better than having none. Add a few at a time if that's all you can do. Don't give up.