We are currently improving the test coverage of a set of database-backed applications (or 'services') that we run, by introducing functional tests. For me, functional tests treat the system under test (SUT) as a black box and test it through its public interface (be it a Web interface, REST, or our potential adventure into the messaging realm using AMQP).
For that, the test cases either A) bootstrap an instance of the application or B) use an instance that is already running.
Option A allows test cases to easily test the current version of the system through the test phase of a build tool or inside a CI job. That is what e.g. the Grails functional test phase is for, and Maven could be set up to do the same.
Option B requires the system to already be running, but that system can be inside (or at least closer to) a production environment. Grails can do this through the -baseUrl option when executing functional tests.
What puzzles me now is how to bring the service into a required state prior to the execution of every test case.
If I e.g. want to test a REST interface that does basic CRUD, how do I create an entity in the database so that I can test the HTTP GET for it?
I see different possibilities:
Using the same API (e.g. HTTP POST) to create the entity. Downside: Changing the creation method breaks two test cases. Furthermore, there might not be a creation method for all APIs.
Adding an additional CRUD API for testing and only activating it in non-production environments; that API is then used for testing (a sketch of this follows below). Downsides: it adds additional code to the production system, the API logic might not be trivial (e.g. creation of complex entity graphs through aggregation/composition), and we need to make sure the API is not activated in production.
Basically the same approach is followed by the Grails Remote Control plugin. It allows you to "grab into your application" and invoke arbitrary code through serialisation. Downside: Feels "brittle". There might be similar mechanisms for different languages/frameworks (this question is not Grails specific).
Directly accessing the relational database and creating/deleting content, e.g. using DbUnit or just manually creating entities through JDBC. Downside: you duplicate creation/deletion logic and/or ORM mapping inside the test case, and refactoring the DB breaks the test case even though the SUT still works.
Besides these possibilities, Grails (when using the -inline option for functional tests) allows accessing Spring services, since the application instance runs inside the same JVM as the test case. The same applies to Spring Boot "integration tests". But this cannot be used to run the tests against an already running application (option B above).
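For illustration, here is a minimal sketch of what the second option (a test-only API) could look like if the service happened to be Spring-based; the profile name, controller, repository, and entity are all made up:

import org.springframework.context.annotation.Profile;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Profile("test-support")   // the bean is only created when a non-production profile is active
public class TestFixtureController {

    private final CustomerRepository customers;   // hypothetical repository

    public TestFixtureController(CustomerRepository customers) {
        this.customers = customers;
    }

    // Lets a functional test create the entity it needs before exercising the real GET.
    @PostMapping("/test-support/customers")
    public Customer create(@RequestBody Customer customer) {
        return customers.save(customer);
    }
}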
So how do you do that? Did I miss any option for that?
Also, how do you guarantee that each test case cleans up after itself properly so that the next test case sees the SUT in the same state?
As with unit testing, you want to have a "clean" database before you run a functional test. You will need some setup/teardown functionality to bring the database into a defined state.
The easiest/fastest way to clean the database is to delete all content with an SQL script. (For debugging it is also useful to run this in the test setup rather than the teardown, so the database keeps its state after a test failure.) The script can be maintained manually (it just contains delete <table> statements). If your database changes often you could try to generate the clean script: disable foreign keys (to avoid ordering problems), then delete from each table. A JDBC sketch of this appears after these points.
To generate test data you can use an SQL script too, but that will be hard to maintain; creating it in code is usually better. The code can be placed in ordinary services. If you don't need real production data, the build-test-data plugin is a great help in simplifying test data creation. If you create the data in code, it also makes sense to re-use production code to create test data, to avoid duplication.
To call the test data setup, simply use remote-control. I don't think it is more brittle than all the HTTP & Ajax stuff ;-). Since all the creation code now lives in a service, the only thing you need to call with remote control is the service that creates the data. It does not have to get more complicated than remote { ctx.testDataService.setupDataForXyz() }. If it is that simple you can even drop remote-control and use a controller/action to run it.
Do not test too much detail with functional tests, to keep them from getting more complicated than they already are. :)
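To make the cleanup point above concrete, here is a minimal sketch of a generated clean script executed over JDBC; the MySQL-specific FOREIGN_KEY_CHECKS statements and the table names are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DatabaseCleaner {

    // Hypothetical table names; these could also be read from the database metadata
    // so the list does not go stale when the schema changes.
    private static final String[] TABLES = { "order_item", "orders", "customer" };

    public static void clean(String jdbcUrl, String user, String password) throws Exception {
        try (Connection con = DriverManager.getConnection(jdbcUrl, user, password);
             Statement st = con.createStatement()) {
            st.execute("SET FOREIGN_KEY_CHECKS = 0");   // avoid ordering problems (MySQL syntax)
            for (String table : TABLES) {
                st.executeUpdate("DELETE FROM " + table);
            }
            st.execute("SET FOREIGN_KEY_CHECKS = 1");
        }
    }
}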
Related
I'm currently learning about unit testing and integration testing, and as I understand it, unit tests are used to test the logic of a specific class and integration tests are used to check the cooperation of multiple classes and libraries.
But is it only used to test multiple classes and whether they work together as expected, or is it also valid to access databases in an integration test? If so, what if the connection can't be established because of a server-side error? Wouldn't the tests fail, although the code itself would work as expected? How do I know what's valid to use in this kind of test?
The second thing I don't understand is how they are set up. Unit tests seem to me to have a quite common form, like:
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

public class classTest {

    @BeforeEach
    public void setUp() {
    }

    @Test
    public void testCase() {
    }
}
But how are integration tests written? Is it commonly done the same way, just including more classes and external factors or is there another way that is used for that?
[...] is it also valid to access databases in an integration test? [...] How do I know what's valid to use in this kind of test?
The distinction between unit-tests and integration tests is not whether or not more than one component is involved: Even in unit-testing you can get along without mocking all your dependencies if these dependencies don't keep you from reaching your unit-testing goals (see https://stackoverflow.com/a/55583329/5747415).
What distinguishes unit-testing and integration testing is the goal of the test. As you wrote, in unit-testing your focus is on finding bugs in the logic of a function, method or class. In integration testing the goal is then, obviously, to detect bugs that could not be found during unit-testing but can be found in the integrated (sub-)system. Always keeping the test goals in mind helps to create better tests and to avoid unnecessary redundancy between integration tests and unit-tests.
One flavor of integration testing is interaction testing: Here, the goal is to find bugs in the interaction between two or more components. (Additional components can be mocked, or not - again this depends on whether the additional components keep you from reaching your testing goals.) Typical questions in the interactions of two components A and B could be, for example if B is a library: Is component A calling the right function of component B, is component B in a proper state to be accessed by A via that function (B might not be initialized yet), is A passing the arguments in the correct order, do the arguments contain the values in the expected form, does B give back the results in the expected way and in the expected format?
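As a rough illustration of such an interaction test, here is a sketch using JUnit 5 and Mockito; OrderService and PaymentGateway are made-up components standing in for A and B:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderServiceInteractionTest {

    // Made-up component B: a library the service calls into.
    interface PaymentGateway {
        boolean charge(String customerId, int amountInCents);
    }

    // Made-up component A, included so the example is complete.
    static class OrderService {
        private final PaymentGateway gateway;

        OrderService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        void placeOrder(String customerId, int amountInCents) {
            gateway.charge(customerId, amountInCents);
        }
    }

    @Test
    void chargesTheGatewayWithTheCorrectArguments() {
        // Component B is replaced by a mock so the test can observe the interaction.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("customer-42", 100)).thenReturn(true);

        new OrderService(gateway).placeOrder("customer-42", 100);

        // The assertion is about the interaction itself: right method, right arguments.
        verify(gateway).charge("customer-42", 100);
    }
}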
Another flavor of integration testing is subsystem testing, where you do not focus on the interactions between components, but look at the boundaries of the subsystem formed by the integrated components. And, again, the goal is to find bugs that could not be found by the previous tests (i.e. unit-tests and interaction tests). For example, are the components integrated in the correct versions, can the desired use-cases be exercised on the integrated subsystem etc.
While unit-tests form the bottom of the test pyramid, integration testing is a concept that applies on different levels of integration and can even focus on interfaces orthogonal to the software integration strategy (for example when doing interaction testing of a driver and its corresponding hardware device).
Second thing I don‘t understand is how they are set up. [...] how are integration tests written?
There is extreme variation here. For many integration tests you can just use the same testing framework that is used for unit-tests: there is nothing unit-test specific in these frameworks. Certainly, in the test cases you will have to ensure that the setup actually combines the components of interest in their proper versions. And, whether or not additional dependencies are just used or mocked needs to be decided (see above).
Another typical scenario is to perform integration tests in the fully integrated system, using a system-test-like setup. This is often done out of convenience, just to avoid the trouble of creating different special setups for the different integration tests: the fully integrated system just has them all combined. Certainly, this also has disadvantages, because it is often impossible or at least impractical to perform all integration tests as desired this way. And, when doing integration testing this way, the boundaries between integration testing and system testing get fuzzy. Keeping focused in such a case means you really have to have a good understanding of the different test goals.
There are also mixed forms, but too many to describe here. Just one example: it is possible to mock some shared libraries with the help of LD_PRELOAD (see "What is the LD_PRELOAD trick?").
It would be valid to access a database as part of an integration test, as integration tests are supposed to show whether a feature is working correctly.
If a feature were not to work because of a failed connection due to a server-side error, you would want the test to fail, to inform you that this feature is not working. Integration tests are not there to tell you where the fault lies, just that a feature is not working.
See https://stackoverflow.com/a/7876055/10461045 as this helps to clarify the widely accepted difference.
Using a database (or an external connection to a service you are using) in an integration test is not only valid, but should be done. However, do not rely on integration tests heavily. Unit test every logic element you have and set up integration tests for certain flows.
Integration tests can be written in the same way, except that (as you mentioned) they include more methods, etc. In fact, the code snippet you've shown above is a common starting point for an integration test as well.
You can read up more on tests here: https://softwareengineering.stackexchange.com/questions/301479/are-database-integration-tests-bad
I'm currently writing a Java program that is an interface to another server. The majority of the functions (over 90%) do something on the server. Currently, I'm just writing simple classes that run some actions on the server and then check the result myself, or add methods to the test that read back the written information.
Currently, I'm developing on my own computer, and have a version of the server running locally on a VM.
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to. I am not sure of the best way to go about my testing. I have my JUnit tests (on simple functions that do not interact externally) that run at every build. I can't seem to find a standard way in JUnit to write tests that do not have to run at every build (perhaps only when their functions change?).
Can anyone point me in the right direction for how best to handle my testing?
Thanks!
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to
This should have raised the alarms for you. Running the tests is what gives you feedback on whether you broke stuff. Not running them means you're blind. It does not mean that everything is fine.
There are several approaches, depending on how much access you have to the server code.
Full Access
If you're writing the server yourself, or you have access to the code, then you can create a test-kit for the server: a modified version of the server that runs completely in memory and allows you to control how the server responds, so you can simulate different scenarios.
This kind of test-kit is created by separating the logic parts of the server from its surroundings, and then mocking them or creating in-memory versions of them (such as databases, queues, file-systems, etc.). This allows the server to run very quickly and it can then be created and destroyed within the test itself.
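A sketch of what using such a test-kit could look like; InMemoryServer and its methods are made-up names standing in for the server with its surroundings replaced by in-memory versions:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class DocumentStoreTest {

    private InMemoryServer server;   // hypothetical test-kit

    @BeforeEach
    void startServer() {
        server = InMemoryServer.start();              // cheap enough to create per test
        server.givenStoredDocument("doc-1", "hello"); // put the server into a known state
    }

    @AfterEach
    void stopServer() {
        server.stop();
    }

    @Test
    void returnsAStoredDocument() {
        assertEquals("hello", server.client().fetchDocument("doc-1"));
    }
}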
Limited/No Access
If you have to write tests for integration with a server that's out of your control, such as a 3rd party API, then the approach is to write a "mock" of the remote service, and a contract test to check that the mock still behaves the same way as the real thing. I usually put those in a different build, and run that occasionally just to know that my mock server hasn't diverged from the real server.
Once you have your mock server, you can write an adapter layer for it, covered by integration tests. The rest of your code will only use the adapter, and therefore can be tested using plain unit tests.
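For example, a minimal sketch of such an adapter layer; WeatherProvider and ClothingAdvisor are invented names, and the lambda plays the role of a stubbed adapter in a plain unit test:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ClothingAdvisorTest {

    // The adapter interface is the only place that knows about the remote service.
    interface WeatherProvider {
        int currentTemperatureIn(String city);
    }

    // The rest of the code depends on the interface only.
    static class ClothingAdvisor {
        private final WeatherProvider weather;

        ClothingAdvisor(WeatherProvider weather) {
            this.weather = weather;
        }

        String advise(String city) {
            return weather.currentTemperatureIn(city) < 10 ? "coat" : "t-shirt";
        }
    }

    @Test
    void recommendsACoatWhenItIsCold() {
        ClothingAdvisor advisor = new ClothingAdvisor(city -> 5);   // stubbed adapter, no server needed
        assertEquals("coat", advisor.advise("Berlin"));
    }
}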
The second approach can, of course, be employed when you have full access as well, but usually writing the test-kit is better. Those kinds of tests tend to be duplicated across projects and teams, so when the server changes a whole bunch of people need to fix their tests, whereas if the test-kit is written as part of the server code, it only has to be altered in one place.
I am using Maven and TestNG.
How can I distinguish at runtime whether a particular method is being called by a TestNG/JUnit test case or by the main Java code?
Several comments allude to this; however, it's generally extremely poor practice to build in statements that work one way under test and another way when the app is running standalone. It increases the probability that the app will pass its tests but fail in production.
Instead, look at why you want to make this distinction. In general, it will be for the sake of some dependent object, or due to input of one variety or another. In these cases, it's better to engineer the class so that dependent objects are injected into it via configuration; under test, the only thing that changes is the configuration. The class under test should not distinguish the dependent classes from one another. Instead, work against their interfaces, so you can create mock classes for testing (a sketch follows below).
When accepting input, redirect the input source to take scripted input.
For databases, redirect to an in-memory DB which is configured for the test, etc.
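A small sketch of the "redirect the input source" idea; GreetingPrompt is a made-up class that reads from whatever Reader it is handed, so it never needs to know whether it is under test:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

class GreetingPrompt {

    private final Reader input;

    GreetingPrompt(Reader input) {   // production code passes new InputStreamReader(System.in)
        this.input = input;
    }

    String greet() throws IOException {
        try (BufferedReader reader = new BufferedReader(input)) {
            return "Hello, " + reader.readLine() + "!";
        }
    }
}

// A test simply feeds scripted input instead:
//     new GreetingPrompt(new java.io.StringReader("Ada")).greet()   returns "Hello, Ada!"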
You will find this approach will VASTLY improve the quality of the code you write, and decrease the probability of bugs sneaking past your unit tests.
At runtime, your code is never running unit tests, unless you invoke them explicitly from your code, which you should never do.
Unit tests are only run manually, or during the test phase of the Maven lifecycle.
I am at the stage now where I have a fairly good understanding of programming/development, using Java.
Could anyone tell me the best way to start using testing packages? I have looked at Hibernate but am not sure where to go with it...
I use Eclipse 3.5 on Mac OS X. Is it a case of writing scripts to test methods? What is unit-testing? etc.
Where do I begin?
Many thanks. Alex
What is Unit Testing
Unit testing is writing code (i.e. test code) that passes known inputs into code under test and then validating the code under test returns expected outputs. It's the most granular testing you can perform on an application. To make it easier, usually a unit testing framework is used. For Java, JUnit is the most popular, but TestNG is also notable.
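A minimal example of that idea, using the JUnit 5 API; PriceCalculator is an invented class under test, included here so the example is complete:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // Invented class under test.
    static class PriceCalculator {
        double discountedPrice(double price, double discount) {
            return price * (1 - discount);
        }
    }

    @Test
    void appliesATenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        // Known input goes in, the expected output is asserted.
        assertEquals(90.0, calculator.discountedPrice(100.0, 0.10), 0.0001);
    }
}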
Getting Started
Unit testing frameworks provide tools for test execution, validation and results reporting. For your setup, Eclipse has built-in support for JUnit: it can automatically detect tests, compile tests and code under test, execute tests, and report results within the IDE. Furthermore, failures are reported as clickable stack-trace entries that load the corresponding file at the given line number.
Mock Objects
That you're also working with Hibernate suggests you should investigate a mock object framework as well, such as jMock. Mock objects are usually substituted as part of the code under test's composition and serve two purposes: (1) returning known outputs, and (2) recording that they've been called and how, so that unit tests can introspect that information as part of validation.
The ability to use mock objects to make testing easier is predicated on dependency injection, that is, injecting the other entities that compose the object under test rather than having it create them itself. The idea is to decouple dependencies (e.g. Hibernate) so you can focus on testing the algorithms that manipulate the data you're working with.
Database
However, if you've got code that is not easily refactored, or perhaps you want to validate database code, you can also test Hibernate interaction as well. In that case you want a database in a known state. Three approaches come to mind:
Restoring a database backup at the beginning of each test execution.
Use DbUnit, which provides its own mechanisms for maintaining state.
Transactional locking with rollback: wrap the entire test case in a try{} finally{}, where the finally block always rolls back the transaction (sketched below).
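A sketch of that third approach using the Hibernate API; the SessionFactory wiring (TestPersistence), the DAO, and the entity are made up:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.junit.jupiter.api.Test;

class CustomerDaoRollbackTest {

    // Hypothetical factory configured against the test database.
    private final SessionFactory sessionFactory = TestPersistence.sessionFactory();

    @Test
    void savesAndReloadsACustomer() {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            CustomerDao dao = new CustomerDao(session);   // made-up DAO under test
            dao.save(new Customer("Alice"));
            // ... assertions against dao.findByName("Alice") go here ...
        } finally {
            tx.rollback();      // always roll back so the database is left untouched
            session.close();
        }
    }
}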
James Shore ("a thought leader in the Agile software development community") has a series of screencasts demonstrating Test-Driven Development using Eclipse.
http://jamesshore.com/Blog/Lets-Play/
While there are many ways to start testing, there is no "best" way so there's no point in looking for that as a starting point.
Search the web for a good tutorial on JUnit and do it. That will be the absolute best way to get started IMO. Don't get sidetracked with code coverage or integrating with Hudson or any of the other tasks that are on the periphery of testing. Focus on writing a handful (or 10) of tests first.
Once you understand the basics you can start looking at other tools to see whether they meet your needs better or worse than JUnit.
First up: Hibernate is not a testing package.
Now that's out of the way, I'd suggest you take a look at JUnit. Read up on unit testing first so you know what it is (the Wikipedia entry is a good place to start), then try the JUnit cookbook. Write some unit tests for a small piece of your code to see how it works, then move on to bigger chunks.
While you are at it, take a look at other development tools like Cobertura (for finding out how good your test coverage is) and static analysis tools like Findbugs and Checkstyle. These all integrate nicely with Ant and probably Eclipse, too.
If you are interested in improving your coding standards and build systems then I highly recommend using Ant, JUnit, Cobertura, Checkstyle and Findbugs together with a continuous integration server (e.g. Hudson or CruiseControl) and a version control system (e.g. git). With a toolkit like that you can't go wrong.
There are other frameworks out there (TestNG, Mockito etc) so take a look at them, too, and decide which you prefer (EDIT: And which work nicely together. Mockito + JUnit is a good combination.)
I've recently been asked to, effectively, sell my department on unit testing. I can't tell you how excited this makes me, but I do have one concern. We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database. Obviously, we can't integrate that with our production server -- it would kill valuable data.
How do I prevent the rebuilding without telling maven to skip testing?
The best I could figure out was to point the build at a test database (line breaks added for readability):
mvn test
-Ddbunit.schema=<database>test
-Djdbc.url=jdbc:mysql://localhost/<database>test?
createDatabaseIfNotExist=true&
useUnicode=true&characterEncoding=utf-8
I can't help but think there must be a better way.
I'm especially interested in learning if there is an easy way to tell Maven to only run tests on particular classes without building anything else? mvn -Dtest=<test-name> test still rebuilds the database.
======= update =======
Bit of egg on my face here. I didn't realize that I was using the same variable in two places, meaning that the POM was using a "skip.test" variable for both rebuilding the database and for running the tests...
Update: I guess that DBUnit does the rebuilding of the DB because it is told to do so in the test setup method. If you change your setup method, you can eliminate the DB rebuild. Of course, you should do it so that you get the DB reset when you need it, and omit it when you don't. My first bet would be to use a system property to control this. You can set the property on the command line the same way you already do with jdbc.url et al. Then in the setup method you add an if to test for that property and do the DB reset if it is set.
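A sketch of that setup-method check, assuming a JUnit setup method and a made-up db.reset system property:

import org.junit.jupiter.api.BeforeEach;

public class CustomerPersistenceTest {

    @BeforeEach
    public void setUp() {
        // Only rebuild the database when explicitly asked for, e.g.
        //   mvn test -Ddb.reset=true
        if (Boolean.getBoolean("db.reset")) {   // reads the system property "db.reset"
            resetDatabase();
        }
    }

    private void resetDatabase() {
        // the existing DbUnit reset logic would live here
    }
}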
A test database, completely separated from your production DB is definitely the best choice if you can have it. You can even use e.g. Derby, an in-memory DB which can run embedded within the JVM. But in case you absolutely can't have a separate DB, use at least a separate test schema inside that DB.
In this scenario I would recommend you put your DB connection parameters into profiles within your pom, the default being the test DB, and a separate profile to contain the production settings. This way it can never happen that you accidentally run your tests against the production DB.
In general, however, it is also important to understand that tests run against a DB are not really unit tests in the strict sense, rather integration tests. If you have an existing set of such tests, fine, use them as much as you can. However, you should try to move towards adding more real unit tests, which test only a small, isolated portion of your code at once (a method or class at most), ideally self contained (need no DB, net, config files etc.) so they can run fast - this is a very important point. If you have 5000 unit tests and each takes only 5 seconds to run, that totals up to almost 7 hours, so you obviously won't run them very often. If a test takes only 5 milliseconds, you get the results in less than half a minute, so you can afford to run all your tests before you commit your latest change - many times a day. That makes a huge difference in the speed of feedback you get from the tests.
Hope this helps.
We're using JUnit with Spring and Maven, and this means that each time mvn test is called, it rebuilds the database.
Maven doesn't do anything with databases by itself, your code does. In any case, it's very unusual to run tests (which are not unit tests) against a production database.
How do I prevent the rebuilding without telling maven to skip testing?
Hard to say without more details (you're not showing anything) but profiles might be a way to go.
Unit tests, by definition, only operate on a single component in the system. You should not be attempting to write unit tests which integrate with any external services (web, DB, etc.). The solution I have to this is to use a good mocking framework to stub out the behaviour of any dependencies your components have. This encourages good interface APIs since most mocking frameworks work best with simple interfaces. It would be best to create a Repository pattern interface for any interactions with your DB and then mock out the impl any time you are testing a class that interacts with it. You can then functionally test your Repository impl separately. This also has the added benefit of keeping your unit tests fast enough to remain part of your CI so that your feedback cycle is as fast as possible.
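For illustration, a minimal sketch of that Repository approach with Mockito and JUnit 5; the repository interface, service, and method names are invented:

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Optional;

import org.junit.jupiter.api.Test;

class RegistrationServiceTest {

    // The Repository interface hides every DB interaction behind a simple API.
    interface UserRepository {
        Optional<String> findByEmail(String email);   // returns the stored email, if any
    }

    // The class under test only knows the interface, never the DB.
    static class RegistrationService {
        private final UserRepository users;

        RegistrationService(UserRepository users) {
            this.users = users;
        }

        boolean isEmailTaken(String email) {
            return users.findByEmail(email).isPresent();
        }
    }

    @Test
    void reportsAnEmailAsTakenWithoutTouchingTheDatabase() {
        UserRepository repository = mock(UserRepository.class);
        when(repository.findByEmail("a@example.com")).thenReturn(Optional.of("a@example.com"));

        RegistrationService service = new RegistrationService(repository);

        assertTrue(service.isEmailTaken("a@example.com"));
    }
}

The Repository implementation itself is then covered separately by a small number of integration tests against a real (or in-memory) database, keeping the bulk of the suite fast.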