Unit testing a Hibernate-driven application? [closed] - java

This may be a naive question, but I am new to both the JUnit and Hibernate frameworks and I was wondering what the best way is to go about unit testing an application that is largely made up of calls to Hibernate, or whether it is even necessary to do so?
What is the best practice here?
EDIT:
Spring seems to be the big suggestion here. Unfortunately this may be a little too much to bite off for one project. JUnit, Hibernate and Spring are all new to me, and while they are all technologies I want to get under my belt, I think trying to incorporate them all into one project may be too overwhelming for me.
Links to tutorials and/or book suggestions are welcome.

Keep in mind the difference between unit testing and integration testing.
Unit tests should test code without any outside dependencies. These dependencies are mocked out using a framework such as JMock.
Integration tests are important too but the major drawback of them is that they take a long time to run. You can run thousands of true unit tests in a couple of seconds, but it's not the same with integration tests.
Depending on the size of your project/development team you might want to prioritize true unit tests over integration tests. Both styles of test are important, but if you are pressed for resources, just going with unit testing may be a better idea.
I wrote an application by myself that unit tested the Web (with Spring MVC this is easy) and Service layers, as well as domain objects. But I left the DAO alone because I didn't want to write a bunch of slow integration tests. If I had more people on staff I would have gone with integration tests as well, but in this case I didn't feel the time spent would be worth it.

As for best practices:
Use an embedded database for running your tests if possible, so that you don't need a fully deployed relational database just to run them (locally, or on your continuous build server if you have one). That way you also don't necessarily need to worry about rolling back and so on; you can simply recreate the database whenever you need to. Testing with an embedded database doesn't exercise the peculiarities of your specific production database, but it does test your code, which should suffice.
You can also use DbUnit, an extension to JUnit, to easily fill the database with expected rows and put it in a known state, before you run your Hibernate tests.
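As a concrete illustration, here is a minimal sketch of building a Hibernate SessionFactory against an in-memory HSQLDB instance for tests. The User entity and the class name are hypothetical, and the exact configuration calls vary slightly between Hibernate versions:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class TestSessionFactory {

    // Builds a SessionFactory against an in-memory HSQLDB instance.
    // The schema is created from the mapped entities and dropped when
    // the factory is closed (hbm2ddl.auto=create-drop).
    public static SessionFactory create() {
        Configuration cfg = new Configuration();
        cfg.setProperty("hibernate.connection.driver_class", "org.hsqldb.jdbcDriver");
        cfg.setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:testdb");
        cfg.setProperty("hibernate.connection.username", "sa");
        cfg.setProperty("hibernate.connection.password", "");
        cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
        cfg.setProperty("hibernate.hbm2ddl.auto", "create-drop");
        cfg.addAnnotatedClass(User.class); // User is a hypothetical mapped entity
        return cfg.buildSessionFactory();
    }
}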

Best practice? I use Spring and make all my tests transactional. I perform the test and roll back all the changes so I don't change the state of the database.
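A minimal sketch of that approach with Spring's test support (the context file, DAO and entity names are hypothetical); Spring rolls the transaction back after each test method by default:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

import static org.junit.Assert.assertNotNull;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical test context
@Transactional // each test runs in a transaction that is rolled back afterwards
public class UserDaoTransactionalTest {

    @Autowired
    private UserDao userDao; // hypothetical DAO under test

    @Test
    public void savesAndLoadsUser() {
        User user = new User("alice");
        userDao.save(user);
        assertNotNull(userDao.findByName("alice"));
        // no cleanup needed: the transaction is rolled back, the DB is untouched
    }
}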

I like to use an in-memory HSQLDB for testing. The process for each Hibernate POJO is:
Create the object
Persist it to the DB
Clear the session
Get it from the DB
Assert the objects are equal.
For DAOs, I create and persist enough objects to accurately test the methods, then run the tests and delete the objects as necessary so they don't interfere with other tests.
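A minimal sketch of that round trip (the entity, its Long identifier and the session-factory helper are hypothetical):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class UserRoundTripTest {

    // Built against an in-memory HSQLDB, e.g. as in the configuration sketch earlier
    private final SessionFactory sessionFactory = TestSessionFactory.create();

    @Test
    public void persistAndReload() {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();

        // 1-2. Create the object and persist it to the DB
        User original = new User("alice");
        Long id = (Long) session.save(original); // assumes a Long identifier
        session.flush();

        // 3. Clear the session so the next load really hits the database
        session.clear();

        // 4-5. Get it from the DB and assert the objects are equal
        // (assumes User overrides equals/hashCode)
        User reloaded = (User) session.get(User.class, id);
        assertEquals(original, reloaded);

        tx.rollback();
        session.close();
    }
}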

The Hibernate source includes a lot of unit tests; I would recommend going through those and adapting a similar approach.
You can also look at CaveatEmptor, the sample application developed for the book "Java Persistence with Hibernate".

If you're using Hibernate for domain-rich models, unit testing the domain logic is as simple as testing a POJO, and Hibernate doesn't get in your way. The only caveat is that for bidirectional mappings you may have to set the association on both sides in your unit tests.
Integration testing against the database is generally not done for simple mappings. It is suggested, however, for more involved mappings such as single-table inheritance. The only thing to remember there is that you may sometimes have to explicitly flush to the database.
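For example, a plain POJO test for a hypothetical bidirectional Order/OrderItem mapping, where both sides of the association are wired by hand:

import org.junit.Test;

import static org.junit.Assert.assertSame;
import static org.junit.Assert.assertTrue;

public class OrderDomainTest {

    @Test
    public void addingAnItemSetsBothSidesOfTheAssociation() {
        Order order = new Order();
        OrderItem item = new OrderItem("book");

        // For a bidirectional mapping both sides must be set,
        // typically inside a convenience method like addItem()
        order.addItem(item); // does items.add(item) and item.setOrder(this)

        assertTrue(order.getItems().contains(item));
        assertSame(order, item.getOrder());
    }
}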

Sure, you'd unit test your persistence tier if it wasn't written in Hibernate, wouldn't you?
Define a persistence interface that's implemented using Hibernate, instantiate some sample objects, perform CRUD operations, and have JUnit assert that the operations succeeded. Same as for any other class.

You could use Spring to help here.
It has a great test framework: you can use it to test CRUD operations and then roll back the changes, which is great if you don't have the capability to reload a database every time.

Write a simple layer that passes requests on to Hibernate. Then use a mocking library like EasyMock or JMock to assert that your Hibernate-veneer layer is called correctly by your application classes. This is nicely described in the partially complete JMock book (scroll down to the test smell "everything is mocked").
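A minimal sketch of that idea, here using Mockito instead of EasyMock or JMock (the pattern is the same; the interface, service and entity names are hypothetical):

import org.junit.Test;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class RegistrationServiceTest {

    // Thin veneer over Hibernate that the application code depends on
    interface UserStore {
        void save(User user);
    }

    @Test
    public void registeringAUserSavesItThroughTheVeneer() {
        UserStore store = mock(UserStore.class);
        RegistrationService service = new RegistrationService(store);

        User user = new User("alice");
        service.register(user);

        // Assert the application class called the persistence veneer as expected
        verify(store).save(user);
    }
}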

Two cases are easy to test:
When practical, perform your various calculations and transformations in functions that don't know about saving or loading entities. If you can make these pure functions, so much better.
For functions that only save to the database without reading from it, you can choose not to save when testing.
The simplest (crudest) way to do #2 is by adding a reallyUpdate parameter to the function, then surrounding each "save" call with:
if (reallyUpdate) {
    HibernateUtil.saveOrUpdate(theThing);
}
For me these were the lowest hanging fruit.
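For case #1, a sketch of what such a persistence-free calculation and its test can look like (the class, method and figures are hypothetical):

import org.junit.Test;

import java.math.BigDecimal;

import static org.junit.Assert.assertEquals;

public class DiscountCalculatorTest {

    // Pure function: no session, no entities loaded or saved
    static BigDecimal discountedTotal(BigDecimal total, BigDecimal discountRate) {
        return total.subtract(total.multiply(discountRate));
    }

    @Test
    public void appliesTenPercentDiscount() {
        BigDecimal result = discountedTotal(new BigDecimal("100.00"), new BigDecimal("0.10"));
        // compareTo ignores scale, so 90.0000 matches 90
        assertEquals(0, new BigDecimal("90").compareTo(result));
    }
}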

Related

Testing a java jersey application

I have a Java/Jersey API that is called from the front end. I need to write tests for the Java code. The way the code is written is:
1. The API call executes the resource method, which calls a separate method that gets data from the DB and returns it to the resource method. The resource method then returns a javax.ws.rs.core.Response to the client.
This is going to be my first time writing tests, so please answer as though I know nothing. What is the best way to start here? And what types of tests should I write? Unit tests are what I'm aiming for here.
Now I have done a lot of research and I'm leaning towards using JUnit + Mockito to do this. But how do I check the data in a Response object?
And how should I check the other file that gets data from the DB? I found out about DbUnit, which can do that, but do I need it?
Another framework I came across was REST Assured. Do I need to include that as well? Or can the same things be done with JUnit/Mockito?
I just want some direction from people who have tested Jersey APIs, and to know what the most common way of doing this is.
I do not think there is a single best way to do this; what you need to test is often subjective and depends on the context.
However, you can structure your code so that the most important parts are tested easily and what's left (integration) can be done later or with different tools.
What I suggest here is to follow the principles of the hexagonal architecture. The idea is to keep all business rules at the center of your application, without any kind of dependencies (imports, ...) on any framework (JAX-RS, JPA, etc.). These rules can easily be designed with TDD, and you will then have very fast-running tests. It may be necessary to use Mockito to mock implementations of SPI interfaces.
In a second step, you can use this "core" by wiring adapters to the outer world (HTTP, databases, AMQP, etc.), using the API and implementing the SPI interfaces.
If you want to test these adapters, you leave the scope of unit tests and write integration tests: integration with a framework, a protocol, anything really.
This kind of test can use a wide variety of tools, from a framework-related harness (like the Jersey Test Framework) or an in-memory database (like H2), to a fully operational middleware instance using tools like Testcontainers.
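For example, a minimal sketch of an adapter-level test with the Jersey Test Framework and a Mockito mock standing in for the database layer (the resource, repository, path and JSON shape are all hypothetical):

import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.test.JerseyTest;
import org.junit.Test;

import javax.ws.rs.core.Application;
import javax.ws.rs.core.Response;

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class UserResourceTest extends JerseyTest {

    // Hypothetical repository the resource calls to fetch data from the DB
    private final UserRepository repository = mock(UserRepository.class);

    @Override
    protected Application configure() {
        // Register the resource with the mocked repository injected by hand
        return new ResourceConfig().register(new UserResource(repository));
    }

    @Test
    public void returnsUserAsJson() {
        when(repository.findName(1L)).thenReturn("alice");

        Response response = target("/users/1").request().get();

        assertEquals(200, response.getStatus());
        // readEntity() is how you get at the data carried by a Response
        assertEquals("{\"name\":\"alice\"}", response.readEntity(String.class));
    }
}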
What is important to remember when writing integration tests is that they are slow compared with unit tests. To keep the feedback loop as short as possible, you will want to limit the number of integration tests to a minimum.
Hoping this will help you!

TDD without local database?

When we develop a Rails application, we use a local database in our development environment and make sure that our specs pass as part of TDD.
Is it the norm not to use a local database similar to SQLite while doing TDD in Java? I have been told an in-memory database (HSQL) is all that is needed for running unit and integration tests. Is this standard practice?
We use SQLite in our Rails application for local development and for running our RSpec specs. But my question is about Java development. We are rewriting a part of our application in Java. I have been told that you do not need any database for development if you write integration tests covering all functionality, and that HSQL is sufficient for that. As I am used to having a database for local development in Rails, I am wondering how you debug any issues later on. It is quite helpful to analyze an issue if you can replicate the data and scenario in the local environment. How do you do the same in Java/Spring if you do not use any database for the development environment and rely completely on HSQL for testing?
For me, I never use any database, including HSQLDB, to write a unit test.
I prefer to create interfaces such as *Repository and let the SUT communicate with them. I then write implementation classes that implement those interfaces, and the class hierarchy looks like this:
        <<uses>>
SUT ----------------> Repository
                          ^
                          |  <<implements>>
                          |
        +---------+-------+--------+
        |         |       |        |
       JPA    Hibernate  JDBC    etc.
This approach is known as Separation of Concerns: the application domain is one concern, data access is another. Following it results in plug-compatible components and independent modules (domain, jpa, jdbc, etc.), and, most importantly, it makes your code much more testable.
Then I use Test Doubles to mock/stub out the SUT's collaborators in the unit test and check that they work together as expected, roughly like this (using Mockito and JUnit):
Repository repo = mock(Repository.class);
when(repo.find(id)).thenReturn(entity);

SUT it = new SUT(repo);

assertEquals(expectedResult, it.exercise());
assertEquals(expectedState, it.currentState());
But you must also write some integration tests, using a database, to test each Repository implementation that operates on the third-party API; this is what Martin calls Test Isolation.
The answer to your question: it is very common to have your test environment database as close to the development environment as possible.
I suppose that you are preoccupied with performance; there are more crucial things that you could improve before considering an in-memory database.
Usually while TDD-ing you would only run the tests involved and later run your whole suite to check that you didn't break anything. If you are using RSpec you can use tags for this.
Another important thing is to clean the database at the beginning of every test, since tests should be isolated and never depend on the result of previous tests. This will also help with the complex search queries that you may have in your system; there is a gem that can help you here.
Finally, if you are using some sort of continuous integration tool, remember to set it up using rake db:schema:load instead of rake db:migrate. This will load your schema file in a single step instead of running every single migration each time you commit. (Remember to keep the schema file version-controlled and always up to date.)
You are getting the terminology mixed up. TDD is about writing test cases in general, but most of the time, and also in your question, one thinks of using TDD for unit testing.
And unfortunately, terms are not very clear. When you turn to wikipedia, you find there (my words): "anything you do to test a piece of software" can be called a unit test.
But that isn't helpful. You should rather look for definitions such as here. And the main aspect there: unit tests work in isolation. Quoting from that link:
Runs in memory (no DB or File access, for example)
Thus:
when doing unit testing, you should not use any database
when you run integration tests, you want to ensure that your solution works "end to end". In that sense you might be using a special instance of your database, but not a different kind of database.

Integration test per layer is a good practice?

I have an application that uses Spring MVC. Basically we have a presentation layer (controllers), a service layer (business units, helpers), an integration layer and a data access layer (JDBC/JPA repositories). We want to ensure through testing that future additions to the code won't break anything that was previously working; to do this we are using unit testing (Mockito) and integration testing (spring-test, spring-test-mvc).
Unit testing is done per class/component. Basically we try to have good coverage of the incoming inputs and possible flows within these components, and this is working fine; I have no doubts here, as unit tests are about ensuring the units work as expected.
Integration testing is a different story and a very debatable one. For now we sometimes reuse the scenarios we used to design our unit tests, but run them against the entire system on a real platform, and I have doubts about the best practices here.
As we have controller, service and data layers, one approach is to write an IT per layer; for example, for a UserService class we would have UserServiceTest (the unit test) and UserServiceIT. But maintainability is not ideal: I feel we sometimes repeat the same test scenarios, only now against the real system. Does this practice really make sense, or in which scenarios does it make sense? If we already have 100% test coverage of a class with unit tests, why do we need an IT for it; is it only there to ensure the real component starts up? Does it make sense to have all the same scenarios, or what is a good criterion to decide?
Another approach is to cover only the most important test cases via integration tests, and only from the controller layer, which means invoking the REST services and verifying the JSON output. Is this enough? Don't we need to verify more in the other layers? I know calling the real REST API exercises all the layers underneath (controller, service, dao), but is this enough? Any considerations here?
If we have a helper class, I don't think it makes sense to have both unit and integration tests; as most of the methods are there for a single purpose, I think unit testing is enough here. Do you agree?
Some classes in the data layer use the Criteria API or QueryDSL; for those I go with integration tests, as unit testing them is in some cases extremely difficult. Is that a valid justification?
I am trying to find the approaches, tips and practices that make ensuring the system's integrity a real and valuable process, while keeping all of this maintainable.
You're really touching on the entire test strategy needed for your application. Testing is not only about coverage and layers. As an example:
we want to ensure through testing that future additions to the code won't break anything that was previously working; to do this we are using unit testing (Mockito) and integration testing (spring-test, spring-test-mvc).
this is how you actually support regression testing, which is a test type in its own right. If we look at a (detailed) test pyramid,
it's easy to see that the integration tests take a good portion of it (5-15% is commonly recommended). Integration goes across layers, but also across component APIs. It's natural that your business components live in the same layer, but you still need to assure that they work together as expected. Moving towards a microservice-oriented architecture (mSOA) will push you to support extensive interface integration testing.
I agree with you on this one
Integration testing is a different story and a very debatable one
Some experts even suggest that you keep only the unit tests and the GUI end-to-end ones. IMHO there are no strict best practices, only good ones. If you are happy with the trade-offs, use whatever suits your case.
I feel we sometimes repeat the same test scenarios, only now against the real system. Does this practice really make sense, or in which scenarios does it make sense? If we already have 100% test coverage of a class with unit tests, why do we need an IT for it; is it only there to ensure the real component starts up? Does it make sense to have all the same scenarios, or what is a good criterion to decide?
It looks like you need to draw a line between those scenarios. To keep a long story short: unit testing and mock objects go together naturally. Component tests require some real system behavior; they can be used to check the handling of data passed between various units or subsystem components, like your component/service DB or messaging, which is not a unit-level task.
only from the controller layer, which means invoking the REST services and verifying the JSON output. Is this enough? Don't we need to verify more in the other layers? I know calling the real REST API exercises all the layers underneath (controller, service, dao), but is this enough?
Not quite true: testing through the presentation layer will exercise the underlying layers too... so why bother with all the rest of the testing? If you are OK with such an approach, the Selenium team suggests this kind of DB validation approach.
If you're talking about Beans and ViewHelpers here
If we have a helper class, I don't think it makes sense to have both unit and integration tests; as most of the methods are there for a single purpose, I think unit testing is enough here. Do you agree?
you'll need both unit and IT, for all the reasons that are valid for other components. Having a single responsibility doesn't remove the need for IT testing.
as unit testing them is in some cases extremely difficult, is that a valid justification?
The same goes for all your encapsulated private (and static) classes, methods, properties, etc. But there is a way of testing those as well, for example via reflection. That, of course, is for the special case of unit testing legacy code or an API you can't change. If you need it for your own code, the lack of testability may point to a design smell.
The approach I would recommend, based on recent experience of testing Java EE 7 and Spring-based codebases, is:
Use per-feature integration tests, and avoid unit tests and mocking. Each integration test should cover code from all layers, from the presentation layer down to the infrastructure layer (the latter containing reusable components that are not application- or domain-specific, but appropriate to the chosen architecture).
Most integration tests should be based on actual business requirements and input data. Others may be created to exercise remaining parts of the codebase, according to the code coverage report generated from each execution of the integration test suite.
So, assuming "full" code coverage is achieved with integration tests, and they run sufficiently fast, there isn't much reason to have unit tests at all. My experience is that when writing unit tests, developers tend to use too much mocking, often creating brittle tests that verify unnecessary implementation details. Also, unit tests can never provide the same level of confidence as integration tests can, since they usually don't cover things like database queries, ORM mapping, and so on.
Unit testing applies, as you did, to classes and components. Its purpose is to:
Write code (TDD).
Illustrate the code usage and make it sustainable over time and changes.
Cover as many border cases as possible.
When you encounter an issue with some specific usage or parameters, first reproduce it with a new test, then fix it.
Mocking should only be used when it is needed to test a class or component's standalone behavior and the mocked feature comes, in production, from outside your application (an email server, for instance). It is overkill and useless when the code is already covered and the mocking overlaps with the responsibility of other kinds of tests, such as integration tests.
Now that you know every piece of code works, how do the pieces work together?
This is where integration testing comes in: it is about how the components interact in various conditions and environments. There is sometimes little difference between UT and IT, for instance when testing the data access layer.
Integration tests are used for the same purposes as unit tests but at a higher, less atomic level, to illustrate the use cases of APIs, services...
What do you call the "integration layer"?
The presentation layer testing is rather the responsibility of functional testing, not unit nor integration.
You also did not talk about performance testing.
Finally, the goal is to get all code written along with its tests, bugs fixed after being reproduced with new tests, and maximum coverage accumulated across all kinds of tests in all possible conditions (OS, databases, browsers...).
So you validate your overall testing quality with:
a tool calculating the coverage. You will likely have to instrument the code to evaluate coverage from functional testing, or use advanced JDK tools.
the number of bugs resulting from a lack of tests on some components, services...
I usually consider a set of tests to be good when reading them immediately leaves me with no doubt about how to use the code they cover, and full confidence in its contract, capabilities, inputs and outputs, coverage of use cases, and strength and stability with regard to error management and reporting.
Coverage is an important thing too, but it can be better to have slightly fewer tests if you focus on their quality: thread-safe, made of unordered methods and classes, testing real conditions (no "if test condition" hacks).
To answer your question: I would say that given the above considerations, you don't have to write an integration test per layer since you will rather choose a different testing strategy (unit, integration, functional, performance, smoke, mocked...) for each layer.

Writing tests for DAOs

I'm currently assigned to write tests for a project; is it necessary to write tests for the DAO classes?
It depends :-)
If your DAO classes contain only the code needed to fetch entities from the DB, it is better to test them in separate integration tests*. The code to be unit tested is the "business logic" which you can unit test using mock DAOs.
[Update] E.g. with EasyMock you can easily set up a mock for a specific class (with its class extension, even concrete classes can be mocked), configure it to return a specific object from a certain method call, and inject it into your class to be tested.
The EasyMock website seems to be down right now; hopefully it will come back soon, and then you can check the documentation, which is IMHO quite clean and thorough, with lots of code examples. Without more details in your question, I can't give a more concrete answer. [/Update]
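A minimal sketch of that idea with EasyMock (the service, DAO and entity names, and the expected greeting, are hypothetical):

import org.junit.Test;

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;

public class UserServiceTest {

    @Test
    public void greetsExistingUser() {
        // Set up the mock DAO to return a specific object for a specific call
        UserDao dao = createMock(UserDao.class);
        expect(dao.findById(1L)).andReturn(new User("alice"));
        replay(dao);

        // Inject the mock into the class under test
        UserService service = new UserService(dao);
        assertEquals("Hello, alice", service.greet(1L));

        // Check that the expected DAO call actually happened
        verify(dao);
    }
}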
If, OTOH, the DAOs also contain business logic, your best choice - if you can do it - would be to refactor them and move the business logic out of the DAOs; then you can apply the previous strategy.
But the bottom line is, always keep in mind the unit testing motto "test everything which could possibly break". In other words, we need to prioritize our tasks and concentrate our efforts on writing the tests which provide the most benefit with the least effort. Write unit tests for the most critical, most bug-risky code parts first. Code which - in your view - is so simple it can't possibly break is further down the list. Of course it is advisable to consult with experienced developers on concrete pieces of code - they may know and notice possible traps and problems you aren't aware of.
* unit tests are supposed to be lightweight, fast and isolated from the environment as much as possible. Therefore tests which include calls to a real DB are not unit tests but integration tests. Even though technically they can be built and executed with JUnit (and e.g. DbUnit), they are much more complex and orders of magnitude slower than genuine unit tests. Sometimes this makes them unsuitable to be executed after every small code change, as regular unit tests could (and often should) be used.
Yes. But some folks would argue that it doesn't come into the category of unit tests, because it does not conform to the definition of a unit test per se. We call it an integration test, where we test the integration of the code with the database.
Moreover, I agree with Bruno's idea here. Further, there are APIs available just to do that; one of them is DbUnit.
Yes, you should write unit tests for DAO's.
These unit tests can use an in-memory database. See for example: HyperSQL
Article on how to use HyperSQL to write persistence unit tests in Java:
http://www.mikebosch.com/?p=8
It's not necessary to write tests for anything. Do you get benefit from writing tests for your DAO classes? Probably.
Yes. There are several benefits of doing so. Once you are sure that your DAO layer is working fine, defect fixing in later stages becomes easy.
I would argue that we should write unit tests for DAOs, and one of the biggest challenges in doing so is the test data setup and cleanup. That is where, I think, frameworks such as Spring's JDBC testing framework can help us out, by letting us control the transaction using annotations (for example, @Rollback(true)).
For example, if you are testing a "create/insert" operation, Spring allows you to completely roll back the transaction after the execution of the test method, thereby always leaving the database in its original state.
You may take a look at this link for more information: Spring Testing
This can be even more useful when for your integration tests where you don't want one test to spoil the data integrity, which can cause another test to fail.
The book xUnit Test Patterns offers a lot of great insights into this very question.

in-memory DBs evaluation

I am trying to improve the overall integration test execution time and I am currently evaluating various in-memory DB solutions. The idea is to have the DAOs hit an in-memory DB during the tests, as opposed to hitting a real DB. This is a Java app using Hibernate for persistence.
I'd be interested to see your experience with one of these products H2, Derby, HSQLDB, Oracle Berkeley DB.
Some of my concerns are: will in-memory DBs be able to execute stored procedures and custom native SQL? Can you selectively choose which of your services should hit the real DB versus the in-memory DB?
And overall, since this approach involves DB bootstrapping (pre-loading/pre-creating all tables with data), I am now wondering whether it would simply be easier to mock out the DAO layer and not even worry about all the unknown problems that an in-memory DB may bring...
thanks.
My suggestion is to test everything, including the DAO layer as you mention. But see if you can test it in pieces. Services, DAOs, UI.
For service layer testing, mock out the DAOs. That way the service layer tests are independent of whether the DAOs are working. If the service layer tests use DAOs and a real database, then I'd argue that they're not really unit tests but integration tests. Those are valuable too, but when they fail they don't pinpoint the problem the way a unit test does.
For our DAO layer tests we use DbUnit with HSQLDB. (Using Unitils helps if you are using Spring/Hibernate/DbUnit to tie it all together.) Our DAO tests execute nice and quickly (which is important when you have 500+ tests). The memory db schema is built from our schema creation scripts so as a side effect we are testing those as well. We load/refresh a known set of data from some flat files into the memory database. (Compared to when we were using the DEV database and some data would get removed which then broke tests). This solution is working great for us and I would recommend it to anyone.
Note, however, that we are not able to test the DAO that uses a stored proc this way (but we only have one). I disagree somewhat with the poster who mentioned that using different databases is "bad" -- just be aware of the differences and know the implications of doing so.
You didn't mention if you are using Hibernate or not -- that is one important factor in that it abstracts us away from modifying any SQL that may be specific to Oracle or SQLServer or HSQLDB which another poster mentioned.
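A minimal sketch of the DbUnit load/refresh step described above (the file name, JDBC URL and table contents are hypothetical):

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;

public class DbUnitSetup {

    // Loads a known data set into the in-memory HSQLDB before the DAO tests run.
    // users-dataset.xml is a hypothetical FlatXml file, e.g.:
    //   <dataset>
    //     <users id="1" name="alice"/>
    //   </dataset>
    public static void loadKnownData() throws Exception {
        Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        IDatabaseConnection connection = new DatabaseConnection(jdbc);
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("src/test/resources/users-dataset.xml"));

        // CLEAN_INSERT deletes existing rows and inserts the data set,
        // leaving the tables in a known state for every test run
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }
}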
Mock out the DAO layer.
Despite what some claim, unless you are only using trivial SQL, the subtle implementation differences and differing feature sets between databases will limit what you can do (stored procedures, views, etc.) and also, to some extent, invalidate the tests.
My personal mocking framework of choice is Mockito. But there are lots that do the job and mocking out the DAO is standard practice so you'll find lots of documentation.
It is a bad idea to have different databases for unit testing and for production.
BTW, testing against a real database should be fast; you are probably doing something wrong in your tests.
I just came across the Oracle TimesTen in-memory DB.
http://www.oracle.com/technology/products/timesten/index.html
This may be the most painless solution, since no additional mocking/configuration is required. You still have all of your integration tests intact, hitting the DB, but now the data is delivered faster. What do you guys think?
