Writing System Tests in JUnit (Java)

Preface
I'm deliberately talking about system tests. We do have a rather exhaustive suite of unit tests, some of which use mocking, and those aren't going anywhere. The system tests are supposed to complement the unit tests, and as such mocking is not an option.
The Problem
I have a rather complex system that only communicates via REST and websocket events.
My team has a rather large collection of (historically grown) system tests based on JUnit.
I'm currently migrating this codebase to JUnit5.
The tests usually consist of a @BeforeAll method in which the system is started in a configuration specific to the test class, which takes around a minute. Then there is a number of independent tests run against this system.
The problem we routinely run into is that booting the system takes a considerable amount of time and may even fail. One could say that the booting itself can be considered a test case. JUnit handles lifecycle methods somewhat awkwardly: the time they take isn't shown in the report; if they fail, it throws off the count of tests; the failure isn't descriptive; and so on.
I'm currently looking for a workaround, but what my team has done over the last few years is kind of orthogonal to the core idea of JUnit (because it's a unit testing framework).
Those problems would go away if I replaced the @BeforeAll with a test method (let's call it @Test public void boot() {...}) and introduced an order dependency (which is pretty easy in JUnit 5) that enforces boot running before any other test.
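A minimal sketch of that approach, assuming JUnit 5.4 or later (where @TestMethodOrder and @Order are available); the class name, method names and the boot placeholder are illustrative only:
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;
import org.junit.jupiter.api.TestInstance.Lifecycle;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
@TestInstance(Lifecycle.PER_CLASS) // keep state (the booted system) across test methods
class SystemBootTest {

    private boolean booted;

    @Test
    @Order(1)
    void boot() {
        // start the system here (illustrative placeholder); booting is now a regular,
        // reported test case with its own timing and failure message
        booted = true;
        assertTrue(booted, "system failed to boot");
    }

    @Test
    @Order(2)
    void someIndependentScenario() {
        // any real test against the booted system goes here
        assertTrue(booted, "boot() must have run first");
    }
}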
So far so good! This looks and works great. The actual problem starts when the tests aren't executed by the CI server but by developers who are trying to troubleshoot. When I try to run a single test, boot is filtered out of the test execution and the test fails.
Is there any solution to this in JUnit5? Or is there a completely different approach I should take?
I suspect there may be a solution using @TestTemplate, but I'm really not sure how to proceed. Also, as far as I know, that would only allow me to generate new named tests that would be filtered as well. Do I have to write a custom test engine? That doesn't seem compelling.

This is a more general testing problem rather than something specific to JUnit 5. To skip the very long boot-up you can mock some components, if that is possible. Having the system boot as a test does not make sense, because other tests depend on it; it is better to use @BeforeAll in this case, as before. For testing the boot-up itself, you can create a separate test class that runs completely independently from the other tests.
Another option is to group these tests, separate them from the plain unit tests, and run them only when needed (for example, before deployment on the CI server). This really depends on the specific use case and on whether those tests should be part of the regular build on your local machine.
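As a sketch of that grouping idea (JUnit 5 assumed; the tag name and class are arbitrary), the slow system tests can be marked with a tag, and the build tool configured to include or exclude that tag depending on the stage:
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Configure the build tool (e.g. Surefire/Failsafe or Gradle's useJUnitPlatform)
// to include or exclude the "system" tag depending on the pipeline stage.
@Tag("system")
class CheckoutSystemTest {

    @Test
    void checkoutHappyPath() {
        // long-running test against the booted system
    }
}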
The third option is to try to reduce the boot time, if possible. This is an option if you can't use mocks/stubs or exclude those tests from the regular build.

How to measure a unit test vs. an integration test? [duplicate]

I am working on my thesis, which is about software testing. The programming language is Java.
I have a huge program with over 450,000 lines of code (excluding comments and blank lines). There are also many JUnit tests.
My question right now is: how can I tell whether a test is a unit test or an integration test?
My ideas: Can I use the execution time of the tests? Or can I measure the performance of the CPU?
Do you have any tips? Do you have more experience in software testing? I am not new to this, but this case is a bit new and huge for me...
Thank you in advance! :)
Alexis described the difference between unit and integration tests thoroughly above, but I felt the need to chime in on the particularities of Java and how to write, and recognize, each kind of test.
Unit Tests
Generally, unit tests use a mocking framework to mock certain dependencies of a particular class and method. Since a unit test should only exercise a single unit of the code (a class method), we don't want to worry about the other classes that method communicates with; we mock those external dependencies and often verify that the calls to them were made, without actually running the external methods.
Mocking is a powerful tool to write strong tests for your application and it encourages developers to write clean, well-designed and loosely-coupled code.
A common example you will see throughout Java projects is the Mockito framework, and it is worth reading more on that topic.
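To illustrate, a minimal Mockito-style unit test (Mockito and JUnit 5 assumed on the classpath; the OrderService and PriceCalculator types are invented and inlined to keep the sketch self-contained):
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // Hypothetical collaborator and class under test, inlined for the sketch.
    interface PriceCalculator { double priceFor(String item); }

    static class OrderService {
        private final PriceCalculator calculator;
        OrderService(PriceCalculator calculator) { this.calculator = calculator; }
        double total(String item, int quantity) { return calculator.priceFor(item) * quantity; }
    }

    @Test
    void totalMultipliesUnitPriceByQuantity() {
        PriceCalculator calculator = mock(PriceCalculator.class);
        when(calculator.priceFor("book")).thenReturn(10.0); // stub the external dependency

        double total = new OrderService(calculator).total("book", 3);

        assertEquals(30.0, total, 0.0001);
        verify(calculator).priceFor("book");                // verify the interaction took place
    }
}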
Integration Tests
"Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems."
In Java terms, this often refers to the interaction between classes: how class A calls class B, and ensuring the link between them works correctly with regard to data transmission, error handling and control flow. Integration tests don't typically use a mocking framework, though in practice they can. Generally you want to fully test the component interactions without mocking out the dependency link; however, this depends on your needs with regard to application design.
Typically with integration tests, the database is tested with regard to the CRUD (Create-Read-Update-Delete) operations against it. Unit tests have no connection to a database, because tests requiring a database connection have side effects by their very nature and leak outside the concept of testing a "single unit" of your code. Therefore, the database is normally tested from integration tests and up.
Requiring this database connection will often cause your integration tests to run dramatically slower than your unit tests. This is due to needing to spin up the database for each set of transactions in a test class, and, if you're using an IoC container (like Spring), there is further startup overhead on top. Typically, for integration tests we use a test database rather than the real one, and it is often in-memory.
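A rough sketch of that idea with plain JDBC against an in-memory H2 database (the H2 driver is assumed to be on the test classpath; the table and column names are invented):
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.Test;

class UserRepositoryIT {

    @Test
    void insertedRowCanBeReadBack() throws Exception {
        // in-memory database: fast, isolated, thrown away after the test
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb");
             Statement stmt = conn.createStatement()) {

            stmt.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(100))");
            stmt.execute("INSERT INTO users VALUES (1, 'alice')");

            try (ResultSet rs = stmt.executeQuery("SELECT name FROM users WHERE id = 1")) {
                rs.next();
                assertEquals("alice", rs.getString("name"));
            }
        }
    }
}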
In summary
If your project is designed well, the unit tests and integration tests should be segregated by package. It makes sense conceptually, and architecturally it is a better approach for developers to adapt to and feel comfortable with. It also completely draws that line between what is an integration test and what is a unit test.
Unit tests are very fast and should be the foundation of your application test suite.
Integration tests can be slow in most cases due to requiring a database connection but can cover more variants than a unit test can.
Unit tests typically use a mocking framework to mock external dependencies.
Integration tests typically use the real dependency objects for testing.
Unit tests should not have a database connection.
Integration tests typically do have a connection to a test database.
There are tons of articles/books about this and with a single query in Google you will find more, but in short:
Unit tests are the tests that test only the smallest unit (in Java the smallest unit is a method) in isolation.
Integration tests test the communication/collaboration between dependent units (in Java you might test that certain classes communicate properly).
There is no specific hint or annotation that declares your tests as unit tests or integration tests; you have to dive into the test and figure it out.
You can measure the execution time of the tests quite easily. There are a lot of tools for that; IntelliJ IDEA (a very common IDE for Java), for example, has a built-in feature that prints the execution time of the test suite.
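If you prefer to collect the timings programmatically rather than from the IDE, a small JUnit 5 extension can record them (a sketch closely following the timing-extension pattern from the JUnit 5 documentation; where the numbers are logged is up to you):
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

// Register on a test class with @ExtendWith(TimingExtension.class).
public class TimingExtension implements BeforeTestExecutionCallback, AfterTestExecutionCallback {

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        context.getStore(ExtensionContext.Namespace.GLOBAL)
               .put(context.getRequiredTestMethod(), System.currentTimeMillis());
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        long start = context.getStore(ExtensionContext.Namespace.GLOBAL)
                            .remove(context.getRequiredTestMethod(), long.class);
        System.out.printf("%s took %d ms%n", context.getDisplayName(), System.currentTimeMillis() - start);
    }
}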

What are integration tests containing and how to set them up

I'm currently learning about unit tests and integration testing, and as I understand it, unit tests are used to test the logic of a specific class and integration tests are used to check the cooperation of multiple classes and libraries.
But is it only used to test multiple classes and whether they work together as expected, or is it also valid to access databases in an integration test? If so, what if the connection can't be established because of a server-side error, wouldn't the tests fail although the code itself works as expected? How do I know what's valid to use in this kind of test?
The second thing I don't understand is how they are set up. Unit tests seem to me to have a quite common form, like:
public class ClassTest {

    @BeforeEach
    public void setUp() {
    }

    @Test
    public void testCase() {
    }
}
But how are integration tests written? Is it commonly done the same way, just including more classes and external factors or is there another way that is used for that?
[...] is it also valid to access databases in an integration test? [...] How do I know what's valid to use in this kind of test?
The distinction between unit-tests and integration tests is not whether or not more than one component is involved: Even in unit-testing you can get along without mocking all your dependencies if these dependencies don't keep you from reaching your unit-testing goals (see https://stackoverflow.com/a/55583329/5747415).
What distinguishes unit-testing and integration testing is the goal of the test. As you wrote, in unit-testing your focus is on finding bugs in the logic of a function, method or class. In integration testing the goal is then, obviously, to detect bugs that could not be found during unit-testing but can be found in the integrated (sub-)system. Always keeping the test goals in mind helps to create better tests and to avoid unnecessary redundancy between integration tests and unit-tests.
One flavor of integration testing is interaction testing: Here, the goal is to find bugs in the interaction between two or more components. (Additional components can be mocked, or not - again this depends on whether the additional components keep you from reaching your testing goals.) Typical questions in the interactions of two components A and B could be, for example if B is a library: Is component A calling the right function of component B, is component B in a proper state to be accessed by A via that function (B might not be initialized yet), is A passing the arguments in the correct order, do the arguments contain the values in the expected form, does B give back the results in the expected way and in the expected format?
Another flavor of integration testing is subsystem testing, where you do not focus on the interactions between components, but look at the boundaries of the subsystem formed by the integrated components. And, again, the goal is to find bugs that could not be found by the previous tests (i.e. unit-tests and interaction tests). For example, are the components integrated in the correct versions, can the desired use-cases be exercised on the integrated subsystem etc.
While unit-tests form the bottom of the test pyramid, integration testing is a concept that applies on different levels of integration and can even focus on interfaces orthogonal to the software integration strategy (for example when doing interaction testing of a driver and its corresponding hardware device).
The second thing I don't understand is how they are set up. [...] how are integration tests written?
There is extreme variation here. For many integration tests you can just use the same testing framework that is used for unit-tests: there is nothing unit-test specific in these frameworks. Certainly, in the test cases you will have to ensure that the setup actually combines the components of interest in their proper versions. And whether additional dependencies are simply used or mocked needs to be decided (see above).
Another typical scenario is to perform integration tests in the fully integrated system, using a system-test-like setup. This is often done out of convenience, just to avoid the trouble of creating different special setups for the different integration tests: the fully integrated system has them all combined. Certainly, this also has disadvantages, because it is often impossible, or at least impractical, to perform all integration tests as desired this way. And when doing integration testing this way, the boundaries between integration testing and system testing get fuzzy. Keeping focused in such a case means you really have to have a good understanding of the different test goals.
There are also mixed forms, but there are too many to describe them here. Just one example, there is a possibility to mock some shared libraries with the help of LD_PRELOAD (What is the LD_PRELOAD trick?).
It would be valid to access a database as part of an integration test, as integration tests are supposed to show whether a feature is working correctly.
If a feature does not work because of a failed connection or a server-side error, you would want the test to fail to inform you that this feature is not working. Integration tests are not there to tell you where the fault lies, just that a feature is not working.
See https://stackoverflow.com/a/7876055/10461045 as this helps to clarify the widely accepted difference.
Using a database (or an external connection to a service you are using) in an integration test is not only valid, but should be done. However, do not rely on integration tests heavily. Unit test every logic element you have and set up integration tests for certain flows.
Integration tests can be written in the same way, except (as you mentioned) they include more classes, external factors, and so on. In fact, the code snippet you've shown above is a common starting point for an integration test as well.
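For example, a sketch of an integration test that keeps the same JUnit structure but wires real collaborators instead of mocks (the two tiny classes are invented and inlined just to make the sketch self-contained):
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class OrderFlowIntegrationTest {

    // Two tiny real collaborators, inlined for the sketch.
    static class TaxCalculator { double withTax(double net) { return net * 1.2; } }
    static class InvoiceService {
        private final TaxCalculator tax;
        InvoiceService(TaxCalculator tax) { this.tax = tax; }
        double invoiceTotal(double net) { return tax.withTax(net); }
    }

    private InvoiceService service;

    @BeforeEach
    void setUp() {
        // real dependencies wired together, no mocks
        service = new InvoiceService(new TaxCalculator());
    }

    @Test
    void invoiceIncludesTax() {
        assertEquals(120.0, service.invoiceTotal(100.0), 0.0001);
    }
}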
You can read up more on tests here: https://softwareengineering.stackexchange.com/questions/301479/are-database-integration-tests-bad

Unit testing for a Selenium project

A question about best practices or practice at all ;)
I am currently working on a test automation system using Selenium in Java. It is supposed to be used for end-to-end acceptance testing of a webapp. The test cases are written in the Gherkin language and executed by the BDD framework Cucumber (Cucumber-JVM). The low-level functions use Selenium/WebDriver for interacting with the AUT and the browser. The Selenium code is structured using the PageObject pattern which abstracts the usage of WebDriver away. The cucumber step definitions call just the methods provided by the PageObjects.
As the project continues and becomes more and more complex, I would like to start writing unit tests to make sure the acceptance tests, and the utility functions around them, do what they should :)
Now to the question:
Is it feasible to write unit test for testing a test automation project?
The main problem is that during my first approach to unit testing using TestNG, I realised that my unit tests ended up doing more or less the same stuff the acceptance tests already did. This is counterproductive, as the unit tests are very slow and have a lot of dependencies.
Or, in such a case, does one just test the utility classes and leave the Selenium code alone, i.e. test only the stuff that can be tested without calling Selenium WebDriver and interacting with the AUT?
Note, just to be sure I am not misunderstood: I'm asking about running unit tests ON the acceptance test code and all the auxiliary code, not about running the Selenium test cases with a unit testing framework like JUnit or TestNG.
Any help and/or ideas will be appreciated, as I am not sure how to tackle this one. That is if writing tests for tests is at all sensible ;)
I'm sure someone will consider my response "opinionated" and vote it down, but nevertheless I think that
Yes, if you have a test framework you are relying upon for your acceptance testing, the framework itself needs to be tested.
From my experience the value is in 2 areas:
Be able to change your framework with confidence. When you create some function, you know quite a lot about it, i.e. which use cases it supports, what it's designed to do, etc. But other people (or even you a year from now) may not have the same level of knowledge, even with documentation. So either new functions will pop up every time someone needs a slight modification in behavior (because they are not confident enough to change the existing function), or someone may break a whole bunch of acceptance tests.
Best if those are true unit tests, able to run completely independently from anything (use mocks, predefined static test data, etc.); a sketch of such a test follows below.
Protect yourself from unexpected changes / bugs in Selenium itself (or other important 3rd-party libraries). When you're updating Selenium to next version (and that usually needs to be done every 3-6 months), there's always a chance that they changed some default you were relying upon (and didn't even know), or broke something, or suddenly something returns a different exception, or does not throw exception where it previously did and so on. Of course no need to get carried away and duplicate Selenium own unit tests, but when it comes to non-trivial things, or relying on some features with poor documentation, those tests may help a lot.
Those are integration tests. Ideally they should run against a test-only webapp (not the real application) that replicates the specific behaviors under test in a way convenient for testing.
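Coming back to the first point, a sketch of such a "true unit test" for a page-object helper, with WebDriver mocked (Selenium and Mockito assumed on the classpath; the LoginPage class and the element id are invented):
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

class LoginPageTest {

    // Hypothetical page object under test, inlined for the sketch.
    static class LoginPage {
        private final WebDriver driver;
        LoginPage(WebDriver driver) { this.driver = driver; }
        String errorMessage() { return driver.findElement(By.id("error")).getText(); }
    }

    @Test
    void readsErrorMessageFromErrorElement() {
        WebDriver driver = mock(WebDriver.class);
        WebElement error = mock(WebElement.class);
        when(driver.findElement(By.id("error"))).thenReturn(error);
        when(error.getText()).thenReturn("Invalid credentials");

        assertEquals("Invalid credentials", new LoginPage(driver).errorMessage());
    }
}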
Of course, some compromises are possible as well. For example, having a small subset of acceptance tests that serve as unit/integration tests (they run first, and other tests only run if they pass). It might be cheaper to begin with those and slowly migrate to proper unit/integration tests as you debug and fix issues in the test framework.
Another question is how do you separate tests testing your framework from actual acceptance tests for the product. What worked for me is keeping test framework and acceptance tests in 2 separate projects. That way I can change the framework, build it (which also includes running unit and integration tests) many times if needed. When all unit and integration tests are passing, then I can update the version used by actual acceptance tests.
Personally, I would not write test code that tests the test code.
Writing tests to shield myself from bugs in third-party tools like Selenium is not something I would do. If the tests pass and the expected result is validated, then that would be enough for me.
When I upgrade to a new version of Selenium, I would do it from a working state, i.e. all tests passing. The only change would be the Selenium version. If I then have tests that break, I know that Selenium is behaving differently in this version than I expected and can act accordingly.
I would write tests for any complicated utility functionality I may need.
I would, however, work hard on making the test code extremely easy to understand and avoid anything remotely complicated in it, and then be careful to verify that each test verifies the behaviour I expect.

Exactly what is integration testing, compared with unit testing?

I am starting to use unit testing in my projects, and am writing tests that are testing at the method/function level.
I understand this and it makes sense.
But what is integration testing? From what I read, it moves the scope of testing up to test larger features of an application.
This implies that I write a new test suite to test larger things such as (on an e-commerce site) checkout functionality, user login functionality, and basket functionality. So here I would have 3 integration tests?
Is this correct? If not, can someone explain what is meant?
Also, does integration testing involve the UI (a web application context here) and employ the likes of Selenium for automation? Or is integration testing still at the code level, just tying together different classes and areas of the code?
Consider a method like this PerformPayment(double amount, PaymentService service);
A unit test would be a test where you create a mock for the service argument.
An integration test would be a test where you use an actual external service so that you test if that service responds correctly to your input data.
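A sketch of both sides of that example (Mockito and JUnit 5 assumed; PaymentService and performPayment are hypothetical stand-ins for the method above):
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.anyDouble;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class PerformPaymentTest {

    // Hypothetical types matching the example above.
    interface PaymentService { boolean charge(double amount); }

    static boolean performPayment(double amount, PaymentService service) {
        return amount > 0 && service.charge(amount);
    }

    @Test
    void unitTest_serviceIsMocked() {
        PaymentService mocked = mock(PaymentService.class);
        when(mocked.charge(anyDouble())).thenReturn(true); // no external system touched

        assertTrue(performPayment(49.99, mocked));
    }

    @Test
    void integrationTest_realServiceIsUsed() {
        // In a real project this would be the actual external service (or a test
        // instance of it); here a trivial in-process implementation stands in for it.
        PaymentService real = amount -> amount < 1000.0;

        assertTrue(performPayment(49.99, real));
    }
}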
Unit tests are tests where the code under test lives inside the actual class. Other dependencies of this class are mocked or ignored, because the focus is testing the code inside the class.
Integration tests are tests that involve disk access, application services and/or frameworks from the target application. The integration tests still run isolated from other external services.
I will give an example. You have a Spring application and you wrote a lot of unit tests to guarantee that the business logic is working properly. Perfect. But what kind of tests do you have to guarantee that:
Your application service can start
Your database entity is mapped correctly
You have all the necessary annotations working as expected
Your Filter is working properly
Your API is accepting some kind of data
Your main feature is really working in the basic scenario
Your database query is working as expected
Etc...
This can't be done with unit tests, but you, as a developer, need to guarantee that all of these things work too. This is the objective of integration tests.
The ideal scenario is the integration tests running independently of the external systems that the application uses in a production environment. You can accomplish that using WireMock for REST calls, an in-memory database like H2, mocking beans of specific classes that call external systems, etc.
A little curiosity: Maven has a specific plugin for integration tests, the Maven Failsafe Plugin, which executes test classes whose names end with IT (by default). Example: UserIT.java.
The confusion about what Integration Test means
Some people understand "integration test" as a test involving the "integration" with other external systems that the current system uses. This kind of test can only be done in an environment where you have all the systems up and running. Nothing fake, nothing mocked.
This might be only a naming problem, but then we lack tests (what I understand as integration tests) that address the needs described above. Instead, we jump from the definition of unit tests (a single test class) to an "integration" test (the whole real system up). So what is in the middle, if not integration tests?
You can read more about this confusion in this article by Martin Fowler. He separates the term "integration tests" into two meanings, "broad" and "narrow" integration tests:
narrow integration tests
exercise only that portion of the code in my service that talks to a separate service
uses test doubles of those services, either in process or remote
thus consist of many narrowly scoped tests, often no larger in scope than a unit test (and usually run with the same test framework that's used for unit tests)
broad integration tests
require live versions of all services, requiring substantial test environment and network access
exercise code paths through all services, not just code responsible for interactions
You can find even more details in that article.
Unit testing is where you are testing your business logic within a class or a piece of code. For example, if you are testing that a particular section of your method should call a repository, your unit test will check that the repository interface method is called the number of times you expect; otherwise the test fails.
Integration testing on the other hand is testing that the actual service or repository (database) behavior is correct. It is checking that based on data you pass in you retrieve the expected results. This ties in with your unit tests so that you know what data you should retrieve and what it does with that data.
As far as I can see, the Selenium tests should be in another test suite. Those tests are the most fragile tests by nature, even if you write them correctly. Here you can use SpecFlow or some other kind of specification-by-example framework. Perhaps you can call these tests acceptance tests. These are for developers and business experts alike.
The integration, or module, tests normally do not use the UI. The integration tests exercise some classes working together. These are lower-level tests than the Selenium tests, and a bit easier to maintain. These tests are for developers only.
Here are a couple of constraints that a good unit test satisfies. Meeting these constraints also requires good, testable code.
No I/O - disk or network
Only one assertion (if multiple, they should be minor variations of each other)
Does not exercise (cover) much more production code than what it asserts
These constraints usually don't apply to integration tests.

Getting into testing

I am at the stage now where I have a fairly good understanding of programming/development, using Java.
Could anyone tell me the best way for me to start using testing packages? I have looked at Hibernate but am not sure where to go with it...
I use Eclipse 3.5 on Mac OS X. Is it a case of writing scripts to test methods? What is unit-testing? etc.
Where do I begin?
Many thanks. Alex
What is Unit Testing
Unit testing is writing code (i.e. test code) that passes known inputs into the code under test and then validates that the code under test returns the expected outputs. It's the most granular testing you can perform on an application. To make it easier, a unit testing framework is usually used. For Java, JUnit is the most popular, but TestNG is also notable.
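A minimal example of the idea in JUnit 4 style (the Calculator class is invented and inlined so the example is self-contained):
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

    // Trivial class under test, inlined for the example.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    @Test
    public void addSumsTwoNumbers() {
        // known input -> expected output
        assertEquals(5, new Calculator().add(2, 3));
    }
}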
Getting Started
Unit testing frameworks provide tools for test execution, validation and results reporting. For your setup, Eclipse has built-in support for JUnit. Eclipse is able to automatically detect tests, compile the tests and the code under test, execute tests, and report results within the IDE. Furthermore, failures are reported as clickable stack trace information that loads the corresponding file at the given line number.
Mock Objects
That you're also working with Hibernate suggests you investigate a mock object framework as well, such as jMock. Mock objects are usually substituted for parts of the code under test's composition and serve two purposes: (1) returning known outputs and (2) recording that they've been called, and how, so that unit tests can introspect that information as part of validation.
The ability to use mock objects to make testing easier is predicated on dependency injection, that is, injecting the other entities that compose the object under test. The idea is to decouple dependencies (e.g. Hibernate) so you can focus on testing the algorithms that manipulate the data you're working with.
Database
However, if you've got code that is not easily refactored, or perhaps you want to validate database code, you can also test Hibernate interaction as well. In that case you want a database in a known state. Three approaches come to mind:
Restoring a database backup at the beginning of each test execution.
Use DbUnit, which provides its own mechanisms for maintaining state.
Transactions with rollback: the entire test case is wrapped in a try {} finally {}, where the finally block always rolls back the transaction.
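A sketch of that third approach with plain JDBC (in a real project the connection would come from your Hibernate session or data source; the URL is illustrative):
import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.Test;

public class RollbackStyleTest {

    @Test
    public void runsInsideATransactionThatIsAlwaysRolledBack() throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb"); // point at your test database
        conn.setAutoCommit(false);
        try {
            // exercise the persistence code and assert on the results here
        } finally {
            conn.rollback(); // leave the database in its known state
            conn.close();
        }
    }
}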
James Shore ("a thought leader in the Agile software development community") has a series of screencasts in which he demonstrates Test-Driven Development using Eclipse.
http://jamesshore.com/Blog/Lets-Play/
While there are many ways to start testing, there is no "best" way so there's no point in looking for that as a starting point.
Search the web for a good tutorial on JUnit and work through it. That will be the absolute best way to get started, IMO. Don't get sidetracked with code coverage, integrating with Hudson, or any of the other tasks on the periphery of testing. Focus on writing a handful (or 10) of tests first.
Once you understand the basics you can start looking at other tools to see if they meet your needs any better or worse than junit.
First up: Hibernate is not a testing package.
Now that's out of the way, I'd suggest you take a look at JUnit. Read up on unit testing first so you know what it is (the Wikipedia entry is a good place to start), then try the JUnit cookbook. Write some unit tests for a small piece of your code to see how it works, then move on to bigger chunks.
While you are at it, take a look at other development tools like Cobertura (for finding out how good your test coverage is) and static analysis tools like Findbugs and Checkstyle. These all integrate nicely with Ant and probably Eclipse, too.
If you are interested in improving your coding standards and build systems then I highly recommend using Ant, JUnit, Cobertura, Checkstyle and Findbugs together with a continuous integration server (e.g. Hudson or CruiseControl) and a version control system (e.g. git). With a toolkit like that you can't go wrong.
There are other frameworks out there (TestNG, Mockito etc) so take a look at them, too, and decide which you prefer (EDIT: And which work nicely together. Mockito + JUnit is a good combination.)
