I have a web application built using Spring which contains some jobs.
A typical job is to run through the database, get a list of modified customers, generate a file and FTP it. My question is: how do I go about unit testing this job?
Should I only write unit tests for each "step" of the job, like:
Test for the method which fetches the modified customers.
Test for file generation code.
Test for FTP'ing the file.
But in this case, I will miss the "integration" test case for the whole job. Also, Emma reports untested code in the form of the job itself.
Any thoughts appreciated.
Thanks!
Unit testing is actually testing only one class at a time. That means you have to mock the dependencies. Spring is great for that.
I would advise Mockito for the mocking. It is a marvellous tool, and along the way you will learn TDD, which is also a way to write beautiful code.
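For the first step, a minimal Mockito sketch might look like this; CustomerRepository, FtpUploader and CustomerExportJob are hypothetical stand-ins for your own job and its collaborators, purely to illustrate mocking the dependencies of one class:

    import static org.mockito.Mockito.*;

    import java.util.Arrays;
    import java.util.List;
    import org.junit.Test;

    public class CustomerExportJobTest {

        // Hypothetical collaborators standing in for the job's real dependencies.
        interface CustomerRepository { List<String> findModifiedCustomers(); }
        interface FtpUploader { void upload(String fileName, List<String> lines); }

        // Hypothetical job wiring, roughly matching the steps described in the question.
        static class CustomerExportJob {
            private final CustomerRepository repository;
            private final FtpUploader ftp;
            CustomerExportJob(CustomerRepository repository, FtpUploader ftp) {
                this.repository = repository;
                this.ftp = ftp;
            }
            void run() {
                ftp.upload("customers.csv", repository.findModifiedCustomers());
            }
        }

        @Test
        public void uploadsOneFileContainingTheModifiedCustomers() {
            CustomerRepository repository = mock(CustomerRepository.class);
            FtpUploader ftp = mock(FtpUploader.class);
            when(repository.findModifiedCustomers()).thenReturn(Arrays.asList("alice", "bob"));

            new CustomerExportJob(repository, ftp).run();

            // the job asked the repository for data and pushed exactly one file, nothing else
            verify(ftp).upload("customers.csv", Arrays.asList("alice", "bob"));
            verifyNoMoreInteractions(ftp);
        }
    }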
Integration testing is another topic and requires another strategy.
Testing against the database is done by extending AbstractTransactionalJUnit4SpringContextTests. You will find examples on the net. In general you also use an in-memory database for those tests (H2 is good for that). This can be done in the unit test phase.
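A rough sketch of such a test, assuming a Spring XML test context ("test-context.xml") whose data source points at in-memory H2; the customer table and queries are made up for illustration:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

    @ContextConfiguration("classpath:test-context.xml") // data source configured against in-memory H2
    public class ModifiedCustomerQueryTest extends AbstractTransactionalJUnit4SpringContextTests {

        @Test
        public void countsOnlyModifiedCustomers() {
            // jdbcTemplate is inherited from the base class; the transaction is rolled back after the test
            jdbcTemplate.update("insert into customer (name, modified) values (?, ?)", "alice", true);
            jdbcTemplate.update("insert into customer (name, modified) values (?, ?)", "bob", false);

            int modified = jdbcTemplate.queryForObject(
                    "select count(*) from customer where modified = true", Integer.class);
            assertEquals(1, modified);
        }
    }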
Generating the file can be done as a unit test. You generate files and verify the proper content. Or errors...
For the FTP part, I would say it's more part of an integration test, unless you can spawn an FTP server from your build script.
You have to write a unit test for each step. Maybe you'll need to mock some methods.
And then you can write an integration test to validate the whole job, though you may need to stub some parts (like the FTP server, using an embedded FTP server in your test).
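As a sketch of the embedded-FTP-server idea, the MockFtpServer library (FakeFtpServer) together with Apache Commons Net could be used; the class and method names below are from memory, and the user, path and file content are made up:

    import static org.junit.Assert.assertTrue;

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import org.apache.commons.net.ftp.FTPClient;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.mockftpserver.fake.FakeFtpServer;
    import org.mockftpserver.fake.UserAccount;
    import org.mockftpserver.fake.filesystem.DirectoryEntry;
    import org.mockftpserver.fake.filesystem.UnixFakeFileSystem;

    public class FtpUploadIntegrationTest {

        private FakeFtpServer ftpServer;

        @Before
        public void startFakeFtpServer() {
            ftpServer = new FakeFtpServer();
            ftpServer.setServerControlPort(0); // let the server pick a free port
            ftpServer.addUserAccount(new UserAccount("user", "secret", "/home/user"));
            UnixFakeFileSystem fileSystem = new UnixFakeFileSystem();
            fileSystem.add(new DirectoryEntry("/home/user"));
            ftpServer.setFileSystem(fileSystem);
            ftpServer.start();
        }

        @After
        public void stopFakeFtpServer() {
            ftpServer.stop();
        }

        @Test
        public void uploadedFileEndsUpOnTheServer() throws Exception {
            FTPClient client = new FTPClient();
            client.connect("localhost", ftpServer.getServerControlPort());
            client.login("user", "secret");
            client.storeFile("/home/user/customers.csv",
                    new ByteArrayInputStream("id;name\n1;alice".getBytes(StandardCharsets.UTF_8)));
            client.disconnect();

            // the fake server's in-memory file system can be inspected directly
            assertTrue(ftpServer.getFileSystem().exists("/home/user/customers.csv"));
        }
    }

In a real job test you would point your own FTP code at localhost and the fake server's port instead of driving FTPClient directly.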
Related
Preface
I'm deliberately talking about system tests. We have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, and as such mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and websocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit5.
The tests usually consist of a @BeforeAll in which the system is started in a configuration specific to the test class, which takes around a minute. Then there is a number of independent tests against this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could say that the booting itself can be considered a test case. JUnit handles lifecycle methods somewhat awkwardly: the time they take isn't shown in the report; if they fail, it messes with the count of tests; the failure isn't descriptive; etc.
I'm currently looking for a workaround, but what my team has done over the last few years is kind of orthogonal to the core idea of JUnit (because it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot(){...}) and introduced an order dependency (which is pretty easy using JUnit 5) that enforces boot to run before any other test.
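Roughly, what I have in mind looks like the sketch below; the method bodies are just placeholders for our own boot and test code:

    import org.junit.jupiter.api.MethodOrderer;
    import org.junit.jupiter.api.Order;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.TestMethodOrder;

    @TestMethodOrder(MethodOrderer.OrderAnnotation.class)
    class SystemBootTest {

        @Test
        @Order(1)
        void boot() {
            // replaces @BeforeAll: the system is booted here, so the boot duration and any
            // boot failure show up in the report like any other test
        }

        @Test
        @Order(2)
        void firstRealTest() {
            // an ordinary system test that assumes boot() has already run
        }
    }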
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers who are trying to troubleshoot. When I try to run a single test, boot is filtered out of the test execution and the test fails.
Is there any solution to this in JUnit5? Or is there a completely different approach I should take?
I suspect there may be a solution using @TestTemplate, but I'm really not sure how to proceed. Also, as far as I know, that would only allow me to generate new named tests that would be filtered as well. Do I have to write a custom test engine? That doesn't seem compelling.
This is a more general testing problem rather than one specific to JUnit 5. To skip the very long boot-up you can mock some components, if that is possible. Having the system boot as a test does not make sense, because other tests depend on it. It is better to use @BeforeAll in this case, as before. For testing the boot-up itself, you can create a separate test class that runs completely independently from the other tests.
Another option is to group this kind of test separately from the plain unit tests and run it only when needed (for example, before deployment on the CI server). This really depends on the specific use case and on whether those tests should be part of a regular build on your local machine.
The third option is to try to reduce the boot time, if possible. This is the option to take if you can't use mocks/stubs or exclude those tests from the regular build.
We are currently improving the test coverage of a set of database-backed applications (or 'services') we are running by introducing functional tests. For me, functional tests treat the system under test (SUT) as a black box and test it through its public interface (be it a Web interface, REST, or our potential adventure into the messaging realm using AMQP).
For that, the test cases either A) bootstrap an instance of the application or B) use an instance that is already running.
The A version allows for test cases to easily test the current version of the system through the test phase of a build tool or inside a CI job. That is what e.g. the Grails functional test phase is for. Or Maven could be set up to do this.
The B version requires the system to already run but the system could be inside (or at least closer to) a production environment. Grails can do this through the -baseUrl option when executing functional tests.
What now puzzles me is how to achieve a required state of the service prior to the execution of every test case?
If I e.g. want to test a REST interface that does basic CRUD, how do I create an entity in the database so that I can test the HTTP GET for it?
I see different possibilities:
Using the same API (e.g. HTTP POST) to create the entity. Downside: Changing the creation method breaks two test cases. Furthermore, there might not be a creation method for all APIs.
Adding an additional CRUD API for testing and only activating that in non-production environments. That API is then used for testing. Downside: adds additional code to the production system, API logic might not be trivial, e.g. creation of complex entity graphs (through aggregation/composition), and we need to make sure the API is not activated for production.
Basically the same approach is followed by the Grails Remote Control plugin. It allows you to "grab into your application" and invoke arbitrary code through serialisation. Downside: Feels "brittle". There might be similar mechanisms for different languages/frameworks (this question is not Grails specific).
Directly accessing the relational database and creating/deleting content, e.g. using DbUnit or just manually creating entities through JDBC. Downside: you duplicate creation/deletion logic and/or ORM inside the test case. Refactoring the DB breaks the test case though the SUT still works.
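For illustration, the last option could look roughly like this plain JDBC sketch (connection details and the customer table are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class TestDataSeeder {

        private static final String JDBC_URL = "jdbc:h2:tcp://localhost/~/testdb"; // placeholder

        /** Inserts one customer row so that a subsequent HTTP GET has something to find. */
        public static void insertCustomer(long id, String name) throws Exception {
            try (Connection connection = DriverManager.getConnection(JDBC_URL, "sa", "");
                 PreparedStatement insert = connection.prepareStatement(
                         "insert into customer (id, name) values (?, ?)")) {
                insert.setLong(1, id);
                insert.setString(2, name);
                insert.executeUpdate();
            }
        }

        /** Removes the row again so the next test case starts from a clean state. */
        public static void deleteCustomer(long id) throws Exception {
            try (Connection connection = DriverManager.getConnection(JDBC_URL, "sa", "");
                 PreparedStatement delete = connection.prepareStatement(
                         "delete from customer where id = ?")) {
                delete.setLong(1, id);
                delete.executeUpdate();
            }
        }
    }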
Besides these possibilities, Grails, when using the -inline option for functional tests, allows accessing Spring services (since the application instance runs inside the same JVM as the test case). The same applies to Spring Boot "integration tests". But then I cannot run the tests against an already running application version (as described as option B above).
So how do you do that? Did I miss any option for that?
Also, how do you guarantee that each test case cleans up after itself properly so that the next test case sees the SUT in the same state?
As with unit testing, you want a "clean" database before you run a functional test. You will need some setup/teardown functionality to bring the database into a defined state.
The easiest/fastest way to clean the database is to delete all content with an SQL script. (For debugging it is also useful to run this in the test setup, so the state of the database is kept after a test failure.) This script can be maintained manually (it just contains delete <table> statements). If your database changes often you could try to generate the clean script (disable foreign keys to avoid ordering problems, then delete from each table).
To generate test data you can use an SQL script too, but that will be hard to maintain; creating it in code is usually better. The code can be placed in ordinary services. If you don't need real production data, the build-test-data plugin is a great help in simplifying test data creation. If you go the code route, it also makes sense to re-use the production code to create test data, to avoid duplication.
To call the test data setup, simply use remote-control. I don't think it is more brittle than all the HTTP & Ajax stuff ;-). Since all the creation code now lives in a service, the only thing you need to call with remote-control is the service that creates the data. It does not have to get more complicated than remote { ctx.testDataService.setupDataForXyz() }. If it is that simple, you can even drop remote-control and use a controller/action to run it.
Do not test too much detail with functional tests, to keep them from getting more complicated than they already are. :)
I am starting to use unit testing in my projects, and am writing tests that are testing at the method/function level.
I understand this and it makes sense.
But what is integration testing? From what I read, it moves the scope of testing up to test larger features of an application.
This implies that I write a new test suite to test larger things such as (on an e-commerce site) checkout functionality, user login functionality, and basket functionality. So here I would have three integration tests?
Is this correct? If not, can someone explain what is meant?
Also, does integration testing involve the UI (a web application context here) and employ the likes of Selenium for automation? Or is integration testing still at the code level, but tying together different classes and areas of the code?
Consider a method with a signature like this: PerformPayment(double amount, PaymentService service).
A unit test would be a test where you create a mock for the service argument.
An integration test would be a test where you use an actual external service so that you test if that service responds correctly to your input data.
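A self-contained sketch of the unit-test side, using Mockito; PaymentService and PaymentProcessor here are hypothetical, mirroring the signature above:

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.*;

    import org.junit.Test;

    public class PerformPaymentTest {

        // Hypothetical external collaborator matching the PaymentService parameter above.
        interface PaymentService {
            boolean charge(double amount);
        }

        // Hypothetical owner of performPayment(double, PaymentService).
        static class PaymentProcessor {
            boolean performPayment(double amount, PaymentService service) {
                return amount > 0 && service.charge(amount);
            }
        }

        @Test
        public void delegatesTheChargeToTheMockedService() {
            PaymentService service = mock(PaymentService.class);
            when(service.charge(100.0)).thenReturn(true);

            assertTrue(new PaymentProcessor().performPayment(100.0, service));
            verify(service).charge(100.0); // no real payment provider is ever contacted
        }
    }

The integration test would instead construct a client for the real (or sandbox) service and assert on its actual response.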
Unit tests are tests where all the tested code lives inside the actual class. Other dependencies of that class are mocked or ignored, because the focus is on testing the code inside the class.
Integration tests are tests that involve disk access, the application service and/or frameworks of the target application. Integration tests still run isolated from other external services.
I will give an example. You have a Spring application and you have written a lot of unit tests to guarantee that the business logic works properly. Perfect. But what kind of test do you use to guarantee that:
Your application service can start
Your database entity is mapped correctly
You have all the necessary annotations working as expected
Your Filter is working properly
Your API is accepting some kind of data
Your main feature is really working in the basic scenario
Your database query is working as expected
Etc...
This can't be done with unit tests, but you, as a developer, need to guarantee that all of these things work too. That is the objective of integration tests.
The ideal scenario is for the integration tests to run independently of the other external systems that the application uses in a production environment. You can accomplish that by using WireMock for REST calls, an in-memory database like H2, mocking beans of specific classes that call external systems, etc.
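For example, a REST dependency can be replaced by a WireMock stub started inside the test; the API names below are from memory and the endpoint/response are invented:

    import static com.github.tomakehurst.wiremock.client.WireMock.*;
    import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

    import com.github.tomakehurst.wiremock.WireMockServer;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class ExchangeRateClientIT {

        private WireMockServer wireMock;

        @Before
        public void startStubServer() {
            wireMock = new WireMockServer(options().dynamicPort());
            wireMock.start();
            wireMock.stubFor(get(urlEqualTo("/exchange-rate?currency=USD"))
                    .willReturn(aResponse().withStatus(200).withBody("{\"rate\": 1.08}")));
        }

        @After
        public void stopStubServer() {
            wireMock.stop();
        }

        @Test
        public void readsTheRateFromTheStubbedService() {
            // point the application's HTTP client at http://localhost:<wireMock.port()>
            // instead of the real external system, call it, and assert on the parsed result
        }
    }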
A little curiosity: Maven has a specific plugin for integration tests, the Maven Failsafe Plugin, which executes test classes whose names end with IT (by default). Example: UserIT.java.
The confusion about what Integration Test means
Some people understand "integration test" as a test involving "integration" with the other external systems that the current system uses. This kind of test can only be done in an environment where you have all of those systems up and running. Nothing fake, nothing mocked.
This might be only a naming problem, but with that definition we lack tests (what I understand as integration tests) that cover the needs described above. Instead, we jump from the definition of unit tests (testing a single class) to an "integration" test (all the real systems up and running). So what is in the middle, if not integration tests?
You can read more about this confusion in this article by Martin Fowler. He splits the term "integration tests" into two meanings: "narrow" and "broad" integration tests:
narrow integration tests
exercise only that portion of the code in my service that talks to a separate service
uses test doubles of those services, either in process or remote
thus consist of many narrowly scoped tests, often no larger in scope than a unit test (and usually run with the same test framework that's used for unit tests)
broad integration tests
require live versions of all services, requiring substantial test environment and network access
exercise code paths through all services, not just code responsible for interactions
You can get even more details in that article.
Unit testing is where you test your business logic within a class or a piece of code. For example, if you are testing that a particular section of your method should call a repository, your unit test checks that the method of the interface which calls the repository is invoked the number of times you expect; otherwise the test fails.
Integration testing, on the other hand, tests that the actual service or repository (database) behavior is correct. It checks that, based on the data you pass in, you retrieve the expected results. This ties in with your unit tests so that you know what data you should retrieve and what is done with that data.
As far as I can see, the Selenium tests should be in another test suite. Those tests are the most fragile by nature, even if you write them correctly. Here you can use SpecFlow or some other specification-by-example framework. Perhaps you could call these tests acceptance tests. They are for developers and business experts alike.
The integration, or module, tests normally do not use the UI. They exercise several classes working together. These are lower-level tests than the Selenium tests, and a bit easier to maintain. These tests are for developers only.
Here are a couple of constraints that a good unit test satisfies. Meeting these constraints also requires good, testable code.
No I/O - disk or network
Only one assertion (if multiple, they should be minor variations of each other)
Does not exercise (cover) much more production code than what it asserts
These constraints usually don't apply to integration tests.
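For example, a test like the following satisfies all three constraints: it does no I/O, makes a single assertion, and covers little more production code than it asserts on (the shipping rule is a hypothetical pure function):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class ShippingCostTest {

        // Hypothetical pure function: no disk or network access, trivially unit-testable.
        static double shippingCostFor(double orderTotal) {
            return orderTotal >= 100.0 ? 0.0 : 4.95;
        }

        @Test
        public void ordersOfAtLeastOneHundredShipForFree() {
            assertEquals(0.0, shippingCostFor(120.0), 0.0001);
        }
    }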
I am at the stage now where I have a fairly good understanding of programming/development, using Java.
Could anyone tell me the best way for me to start using testing packages? I have looked at Hibernate but am not sure where to go with it...
I use Eclipse 3.5 on Mac OS X. Is it a case of writing scripts to test methods? What is unit-testing? etc.
Where do I begin?
Many thanks. Alex
What is Unit Testing
Unit testing is writing code (i.e. test code) that passes known inputs into code under test and then validating the code under test returns expected outputs. It's the most granular testing you can perform on an application. To make it easier, usually a unit testing framework is used. For Java, JUnit is the most popular, but TestNG is also notable.
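A first JUnit (4) test in that spirit can be as small as this: feed a known input to the code under test and assert on the expected output:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class StringReversalTest {

        @Test
        public void reversingAbcGivesCba() {
            // known input "abc", expected output "cba"
            assertEquals("cba", new StringBuilder("abc").reverse().toString());
        }
    }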
Getting Started
Unit testing frameworks provide tools for test execution, validation and results reporting. For your setup, Eclipse has built in support for JUnit. Eclipse is able to automatically detect tests, compile tests and code under test, execute tests, and report results within the IDE. Furthermore, failures are reported as clickable stack trace information that loads the corresponding file at the given line number.
Mock Objects
The fact that you're also working with Hibernate suggests you investigate a mock object framework as well, such as jMock. Mock objects are usually substituted into the code under test's composition and serve two purposes: (1) returning known outputs, and (2) recording that they've been called, and how, so that unit tests can inspect that information as part of validation.
The ability to use mock objects to make testing easier is predicated on dependency injection, that is, on passing in the other entities that compose the object under test. The idea is to decouple dependencies (e.g. Hibernate) so you can focus on testing the algorithms that manipulate the data you're working with.
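A rough jMock 2 sketch of that idea; PriceCatalog and PricingService are hypothetical, with the catalog standing in for a dependency that would otherwise go through Hibernate:

    import static org.junit.Assert.assertEquals;

    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class PricingServiceTest {

        // Hypothetical dependency that would normally be backed by Hibernate.
        public interface PriceCatalog {
            double priceOf(String sku);
        }

        // Hypothetical code under test, receiving the catalog via constructor injection.
        static class PricingService {
            private final PriceCatalog catalog;
            PricingService(PriceCatalog catalog) { this.catalog = catalog; }
            double grossPriceOf(String sku) { return catalog.priceOf(sku) * 1.19; }
        }

        private final Mockery context = new Mockery();

        @Test
        public void looksUpThePriceThroughTheCatalog() {
            final PriceCatalog catalog = context.mock(PriceCatalog.class);
            PricingService service = new PricingService(catalog);

            // (1) the mock returns a known output and (2) records that it was called
            context.checking(new Expectations() {{
                oneOf(catalog).priceOf("widget"); will(returnValue(9.99));
            }});

            assertEquals(9.99 * 1.19, service.grossPriceOf("widget"), 0.0001);
            context.assertIsSatisfied();
        }
    }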
Database
However, if you've got code that is not easily refactored, or perhaps you want to validate database code, you can test the Hibernate interaction as well. In that case you want a database in a known state. Three approaches come to mind:
Restoring a database backup at the beginning of each test execution.
Use DbUnit, which provides its own mechanisms for maintaining state.
Transactional locking with rollback: wrap the entire test case in a try{} finally{}, where the finally block always rolls back the transaction.
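A sketch of the third approach with Hibernate; SessionFactoryHolder and the Customer entity are placeholders for your own setup:

    import static org.junit.Assert.assertNotNull;

    import org.hibernate.Session;
    import org.hibernate.Transaction;
    import org.junit.Test;

    public class CustomerPersistenceTest {

        @Test
        public void savesAndReloadsACustomer() {
            Session session = SessionFactoryHolder.openSession(); // hypothetical helper
            Transaction tx = session.beginTransaction();
            try {
                Long id = (Long) session.save(new Customer("alice")); // hypothetical entity
                session.flush();
                session.clear();

                assertNotNull(session.get(Customer.class, id));
            } finally {
                tx.rollback(); // always undo the changes, even when an assertion fails
                session.close();
            }
        }
    }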
James Shore ("a thought leader in the Agile software development community") has a series of screencasts in which he demonstrates Test Driven Development using Eclipse.
http://jamesshore.com/Blog/Lets-Play/
While there are many ways to start testing, there is no "best" way so there's no point in looking for that as a starting point.
Search the web for a good tutorial on JUnit and work through it. That will be the absolute best way to get started, IMO. Don't get sidetracked by code coverage or integrating with Hudson or any of the other tasks that are peripheral to testing. Focus on writing a handful (or ten) of tests first.
Once you understand the basics you can start looking at other tools to see if they meet your needs any better or worse than junit.
First up: Hibernate is not a testing package.
Now that's out of the way, I'd suggest you take a look at JUnit. Read up on unit testing first so you know what it is (the Wikipedia entry is a good place to start), then try the JUnit cookbook. Write some unit tests for a small piece of your code to see how it works, then move on to bigger chunks.
While you are at it, take a look at other development tools like Cobertura (for finding out how good your test coverage is) and static analysis tools like Findbugs and Checkstyle. These all integrate nicely with Ant and probably Eclipse, too.
If you are interested in improving your coding standards and build systems then I highly recommend using Ant, JUnit, Cobertura, Checkstyle and Findbugs together with a continuous integration server (e.g. Hudson or CruiseControl) and a version control system (e.g. git). With a toolkit like that you can't go wrong.
There are other frameworks out there (TestNG, Mockito etc) so take a look at them, too, and decide which you prefer (EDIT: And which work nicely together. Mockito + JUnit is a good combination.)
I'm learning JUnit. Since my app includes graphical output, I want the ability to eyeball the output and manually pass or fail the test based on what I see. It should wait for me for a while, then fail if it times out.
Is there a way to do this within JUnit (or its extensions), or should I just throw up a dialog box and assertTrue on the output? It seems like it might be a common problem with an existing solution.
Edit: If I shouldn't be using JUnit for this, what should I be using? I want to manually verify the build every so often, and unit test automatically, and it'd be great if the two testing frameworks got along.
Manually accepting/rejecting a test defeats the purpose of using an automated test framework. JUnit is not made for this kind of thing. Unless you find a way to create and inject a mock of the object representing your output device, you should consider alternatives (I don't really know of any, sorry).
I once wrote automated tests for a video decoding component. I dumped the decoded data to a file using some other decoder as a reference, and then compared the output of my decoder to it using the PSNR of each pair of images. This is not 100% self-contained (it needs external files as resources), but it was automated at least, and worked fine for me.
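For reference, the PSNR comparison boils down to something like this small helper (my own sketch for 8-bit grayscale frames, not the original code):

    public final class Psnr {

        /** Returns the PSNR in dB between a reference frame and a decoded frame of equal size. */
        public static double psnr(byte[] reference, byte[] decoded) {
            double squaredErrorSum = 0.0;
            for (int i = 0; i < reference.length; i++) {
                int diff = (reference[i] & 0xFF) - (decoded[i] & 0xFF);
                squaredErrorSum += (double) diff * diff;
            }
            double mse = squaredErrorSum / reference.length;
            if (mse == 0.0) {
                return Double.POSITIVE_INFINITY; // identical frames
            }
            double max = 255.0; // maximum value of an 8-bit sample
            return 10.0 * Math.log10((max * max) / mse);
        }
    }

The test then asserts that the PSNR for every frame pair stays above a chosen threshold.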
Although you could probably code that, that is not what JUnit is about. It is about automated tests, not guided manual tests. Generally that "does it look right" test is regarded as an integration test, as it is something that is very hard to automate correctly in a way that doesn't break for trivial changes all the time.
Take a look at Abbot to give you a more robust way to test your GUI.
Unit tests shouldn't require human intervention. If you need a user to take an action then I think you're doing it wrong.
If you need a human to verify things, then don't do this as part of your unit tests. Just make it a required step for your test department to carry out when QA'ing builds. (This still works if your QA department is just you.)
I recommend using your unit tests for the models (if using MVC) and for any utility methods (e.g. with Swing it's common to have color mapping methods). If you have a good set of unit tests on things like model behavior, it will help narrow your search when you hit a UI bug.
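As an example of what such a UI-free unit test looks like, here is a hypothetical color mapping utility tested without any GUI:

    import static org.junit.Assert.assertEquals;

    import java.awt.Color;
    import org.junit.Test;

    public class TemperatureColorMapperTest {

        // Hypothetical utility of the kind meant above: pure logic, no GUI required.
        static Color colorFor(double celsius) {
            if (celsius < 0) return Color.BLUE;
            if (celsius >= 35) return Color.RED;
            return Color.GREEN;
        }

        @Test
        public void freezingTemperaturesMapToBlue() {
            assertEquals(Color.BLUE, colorFor(-5.0));
        }

        @Test
        public void hotTemperaturesMapToRed() {
            assertEquals(Color.RED, colorFor(40.0));
        }
    }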
Visual unit tests are very difficult. At a company I worked at, they tried these visual tests, but slight differences in video cards could produce failing tests. In the end, this is where a good QA team is required.
Take a look at FEST-Swing. It provides an easy way to automatically test your GUIs.
The other thing you'll want to do is separate the code which does the bulk of the work from your GUI code as much as possible. You can then write unit tests for this work code without having to deal with the user interface. You'll also find that you run these tests much more frequently, as they run quickly.