J2EE service layer automated test cases - Java

I am working on a big Spring, iBATIS, and Velocity based J2EE project. There are thousands of classes, including a large number of service layer classes, and I need to create test cases for all of the service layer classes. Is there any automated tool that can write test cases for all of the service layer classes?

No.
You need to figure out what your application does, split it up into components that are testable, and then write tests for those components.
Sorry.

Unfortunately no automated test case generator tool exists.

Related

Spring Boot REST application testing approach

I have a Spring Boot + REST application. When I need to write unit tests, should I directly invoke the service beans or call the REST controller? If I invoke the REST controller directly, I have to use RestTemplate and invoke the REST API as a client, right?
What would be the best and required practice?
If I invoke the service beans directly, it will result in less code coverage, because the controller methods' code will not be covered. Is that acceptable?
Hmm, this is a complex question, but I'll answer as best I can. A lot of this will depend on your (or your organization's) risk tolerance and how much time you want to invest in tests. I believe in a lot of testing, but there is such a thing as too much.
A unit test tests the unit of code. Great, but what's a unit? This article is a pretty good discussion: http://martinfowler.com/bliki/UnitTest.html but a unit is basically the smallest testable part of your application.
Much literature (e.g. https://www.amazon.ca/Continuous-Delivery-Reliable-Deployment-Automation/dp/0321601912/ ) describes multiple phases of testing including unit tests which are very low level and mock externalities such as DBs or file systems or remote systems, and "api acceptance tests" (sometimes called integration tests although this is a vague term that can mean other things). This latter type fires up a test instance of your application, invokes APIs and asserts on responses.
The short answer is as follows: for unit tests, focus on the units (probably services or more granular), but the other set of tests you describe, wherein the test behaves like a client and invokes your api, are worthwhile too. My suggestion: do both, but don't call both unit tests.
The best approach is to test via the controllers. Web service requests come in and values are returned there, so the controller plays quite an important role. It can also contain small pieces of logic that you would otherwise miss.
You can try using MockMvc for testing controllers.
Or use RestTemplate, as you mentioned in the question.
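To make the MockMvc suggestion concrete, here is a minimal sketch. WidgetController, WidgetService, Widget, the route, and the JSON shape are all hypothetical stand-ins invented for the example, not part of the original answer:

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    import org.junit.Test;
    import org.mockito.Mockito;
    import org.springframework.test.web.servlet.MockMvc;
    import org.springframework.test.web.servlet.setup.MockMvcBuilders;

    public class WidgetControllerTest {

        // Hypothetical service and controller; substitute your own classes.
        private final WidgetService widgetService = Mockito.mock(WidgetService.class);
        private final MockMvc mockMvc =
                MockMvcBuilders.standaloneSetup(new WidgetController(widgetService)).build();

        @Test
        public void returnsWidgetAsJson() throws Exception {
            Mockito.when(widgetService.find(42L)).thenReturn(new Widget(42L, "sprocket"));

            // Exercises the controller's request mapping, binding, and JSON
            // serialization without starting a servlet container.
            mockMvc.perform(get("/widgets/42"))
                   .andExpect(status().isOk())
                   .andExpect(jsonPath("$.name").value("sprocket"));
        }
    }

This also covers the controller code that direct service-bean tests would miss.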
It depends on what you want to test. You can separate your tests, especially if you have a team of developers: write test cases that exercise your business services, and separate integration test cases that use RestTemplate. That way you can find your bugs faster and more easily.
It depends on what you want to do.
One approach would be to unit test the units of work, such as the service and the MVC controller. These tests only cover the logic found in those classes and try to reach high branch coverage, where applicable.
Besides this, you can write an integration test that makes a real HTTP request, goes through to the real service bean, and mocks only the resource access.
For integration tests you can use Spring's support, see here: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/integration-testing.html#spring-mvc-test-framework
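As a rough sketch of that integration test using Spring Boot's test support (WidgetRepository is a hypothetical resource-access bean that gets mocked; the endpoint is made up):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.boot.test.mock.mockito.MockBean;
    import org.springframework.boot.test.web.client.TestRestTemplate;
    import org.springframework.http.ResponseEntity;
    import org.springframework.test.context.junit4.SpringRunner;

    @RunWith(SpringRunner.class)
    @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
    public class WidgetApiIntegrationTest {

        @Autowired
        private TestRestTemplate restTemplate;

        // Hypothetical resource-access bean, mocked so no real database is needed.
        @MockBean
        private WidgetRepository widgetRepository;

        @Test
        public void requestGoesThroughControllerAndRealServiceBean() {
            ResponseEntity<String> response =
                    restTemplate.getForEntity("/widgets/42", String.class);
            assertEquals(200, response.getStatusCodeValue());
        }
    }
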

Regression component tests with Cucumber. Is there any boundary to the layers that should be tested?

I found myself last week having to start thinking about how to refactor an old application that only contains unit tests. My first idea was to add some component test scenarios with Cucumber to familiarise myself with the business logic and to ensure I don't break anything with my changes. But at that point I had a conversation with one of the architects at the company I work for that made me wonder whether it was worth it, and what code I actually had to test.
This application has many different types of endpoints: REST endpoints that are called by clients and that call out to other services, Oracle stored procedures, and JMS topics and queues. It's deployed as a WAR file to a Tomcat server, and the connection factory to the broker and the datasource to the database are configured in the server and fetched using JNDI.
My first idea was to load the whole application inside an embedded Jetty, pointing at the real web.xml so that everything is loaded as it would be in a production environment, but with the connection factory and the datasource mocked. That way, all the connectivity logic to the infrastructure where the application is deployed would be tested. Thinking about hexagonal architecture, though, this seems like too much effort, given that those are only ports whose logic should be limited to transforming received data into application data. Shouldn't those just be unit tested?
My next idea was to just mock the stored procedures and load the Spring XMLs in my test without any web server, which makes it easier to mock classes. For this I would use libraries like Spring's MockMvc for the REST endpoints and Mockrunner for JMS. But again, this approach would still test some adapters and complicate the tests, as their results would be XML and JSON payloads. The transformations done in this application are quite heavy: the same message type can contain different versions of a class (each message can contain many complex objects that implement several interfaces).
So now I am thinking that the best approach might be to create my tests from the entry points to the application (the services called from the adapters) and verify that the services responsible for sending messages to the broker or calling other REST endpoints are actually invoked. Then ensure there are proper unit tests for the endpoints, and verify everything works once deployed with a few smoke tests executed in a real environment. This would test the connectivity logic, and the business logic would be tested in isolation, without caring whether an adapter is added or removed.
Is this approach correct? Would I be leaving something without testing this way? Or is it still too much and I should just trust the unit tests?
Thanks.
Your application and environment sound quite complicated. I would definitely want integration tests. I'd test the app outside-in as follows:
Write a smoke-test suite that runs against the application in the actual production environment. Cucumber would be a good tool to use. That suite should only do things that are safe in production, and should be as small as possible while giving you confidence that the application is correctly installed and configured and that its integrations with other systems are working.
Write an acceptance test suite that runs against the entire application in a test environment. Cucumber would be a good choice here too.
I would expect the acceptance-test environment to include a Tomcat server with test versions of all services that exist in your production Tomcat and a database with a schema, stored procedures, etc. identical to production (but not production data). Handle external dependencies that you don't own by stubbing and mocking, by using a record/replay library such as Betamax, and/or by implementing test versions of them yourself. Acceptance tests should be free to do anything to data, and they shouldn't have to worry about availability of services that you don't own.
Write enough acceptance tests to both describe the app's major use cases and to test all of the important interactions between the parts of the application (both subsystems and classes). That is, use your acceptance tests as integration tests. I find that there is very little conflict between the goals of acceptance and integration tests. Don't write any more acceptance tests than you need for specification and integration coverage, however, as they're relatively slow.
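As a sketch of what an acceptance-test step class might look like with cucumber-jvm (the Java flavour of Cucumber), where the TestClient helper, the step wording, and the endpoint are all assumptions invented for the example:

    import static org.junit.Assert.assertEquals;

    import cucumber.api.java.en.Given;
    import cucumber.api.java.en.Then;
    import cucumber.api.java.en.When;

    public class WidgetAcceptanceSteps {

        private int responseStatus;

        @Given("^the catalogue contains a widget named \"([^\"]*)\"$")
        public void theCatalogueContainsAWidget(String name) {
            // Hypothetical helper that seeds data through the app's own API.
            TestClient.createWidget(name);
        }

        @When("^a client requests the widget \"([^\"]*)\"$")
        public void aClientRequestsTheWidget(String name) {
            // Hypothetical helper that performs an HTTP GET against the app.
            responseStatus = TestClient.getStatus("/widgets/" + name);
        }

        @Then("^the response status is (\\d+)$")
        public void theResponseStatusIs(int expected) {
            assertEquals(expected, responseStatus);
        }
    }
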
Unit-test each class that does anything interesting whatsoever, leaving out only classes that are fully tested by your acceptance tests. Since you're already integration-testing, your unit tests can be true unit tests which stub or mock their dependencies. (Although there's nothing wrong with letting a unit-tested class use real dependencies that are simple enough not to cause issues in the unit tests.)
Measure code coverage to ensure that the combination of acceptance and unit tests exercises all your code.

How to test a JSP-based web application?

What sort of framework can be used to test a JSP web application?
All the Java classes and servlets need to be tested too. What would be the best approach, and which frameworks can be used for this?
Update
Any other suggestions?
For the Java part I still recommend JUnit as the unit test framework.
For testing the web part, one can consider a web test framework like Selenium. However, this part is more difficult to automate during (for example) a nightly build.
You can test your application's front end with Selenium WebDriver automation testing; see the sketch below.
You can test your Java code with the JUnit framework, or with TestNG, which is becoming more and more popular these days.
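A minimal Selenium WebDriver + JUnit sketch. The URL, element ids, and expected text are assumptions; adjust them to your JSP pages, and note that a local browser/driver installation is assumed:

    import static org.junit.Assert.assertTrue;

    import org.junit.After;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginPageTest {

        private final WebDriver driver = new FirefoxDriver();

        @Test
        public void loginPageRendersWelcomeMessage() {
            // Hypothetical login page served by the JSP application under test.
            driver.get("http://localhost:8080/myapp/login.jsp");
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();
            assertTrue(driver.getPageSource().contains("Welcome"));
        }

        @After
        public void quitBrowser() {
            driver.quit();
        }
    }
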
JUnit was selected as the testing framework for the Java components. Still looking for a method to test the application front end?
There is a tool called JspTester (http://jsptester.com) which integrates as a servlet into your webapp and allows you to inject values directly into beans, model objects, request parameters, and session variables. It is useful when there is a need to test the JSPs in isolation from the business logic or MVC controllers.

Integration Test of REST APIs with Code Coverage

We have built a REST API that exposes a bunch of business services; a business service may invoke other platform/utility services to perform database reads and writes, to perform service authorization, etc.
We have deployed these services as WAR files in Tomcat.
We want to test this whole setup using an integration test suite, which we would also like to treat as a regression test suite.
What would be a good approach to perform integration testing on this, and are there any tools that can speed up the development of the suite? Here are a few requirements we think we need to address:
Ability to define integration test cases which exercise business scenarios.
Set up the DB with test data before suite is run.
Invoke the REST API that is running on a remote server (Tomcat)
Validate the DB post test execution for verifying expected output
Have code coverage report of REST API so that we know how confident we should be in the scenarios covered by the suite.
At my work we have recently put together a couple of test suites to test some RESTful APIs we built. Like your services, ours can invoke other RESTful APIs they depend on. We split it into two suites.
Suite 1 - Testing each service in isolation
Mocks any peer services the API depends on using restito. Other alternatives include rest-driver, wiremock, pre-canned and betamax.
The tests, the service we are testing and the mocks all run in a single JVM
Launches the service we are testing in Jetty
I would definitely recommend doing this. It has worked really well for us. The main advantages are:
Peer services are mocked, so you needn't perform any complicated data setup. Before each test you simply use restito to define how you want peer services to behave, just like you would with classes in unit tests with Mockito.
The suite is super fast as mocked services serve pre-canned in-memory responses. So we can get good coverage without the suite taking an age to run.
The suite is reliable and repeatable, as it's isolated in its own JVM, so there's no need to worry about other suites or people mucking about with a shared environment while the suite is running and causing tests to fail.
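The answer above uses restito; as a sketch of the same idea, here is WireMock, one of the alternatives it lists. The peer endpoint, port, and payload are made up for the example:

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.getRequestedFor;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
    import static org.junit.Assert.assertEquals;

    import java.net.HttpURLConnection;
    import java.net.URL;

    import com.github.tomakehurst.wiremock.WireMockServer;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class PeerServiceStubTest {

        private WireMockServer peerService;

        @Before
        public void startPeerStub() {
            peerService = new WireMockServer(8089); // arbitrary test port
            peerService.start();
            peerService.stubFor(get(urlEqualTo("/inventory/42"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\": 42, \"stock\": 7}")));
        }

        @Test
        public void stubServesCannedResponse() throws Exception {
            // In the real suite the service under test (running in Jetty) would
            // make this call; here we call the stub directly to show the round trip.
            URL url = new URL("http://localhost:8089/inventory/42");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            assertEquals(200, conn.getResponseCode());
            peerService.verify(getRequestedFor(urlEqualTo("/inventory/42")));
        }

        @After
        public void stopPeerStub() {
            peerService.stop();
        }
    }
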
Suite 2 - Full End to End
Suite runs against a full environment deployed across several machines
API deployed on Tomcat in environment
Peer services are real 'as live' full deployments
This suite requires us to do data setup in peer services, which means tests generally take more time to write. As much as possible we use REST clients to do the data setup in peer services.
Tests in this suite usually take longer to write, so we put most of our coverage in Suite 1. That being said there is still clear value in this suite as our mocks in Suite 1 may not be behaving quite like the real services.
With regards to your points, here is what we do:
Ability to define integration test cases which exercise business scenarios.
We use cucumber-jvm to define business scenarios for both of the above suites. These scenarios are English plain text files that business users can understand and also drive the tests.
Set up the DB with test data before suite is run.
We don't do this for our integration suites, but in the past I have used unitils with dbunit for unit tests and it worked pretty well.
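For reference, a bare DbUnit sketch for the "set up the DB before the suite" requirement (the JDBC URL, credentials, and dataset path are assumptions):

    import java.io.FileInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;

    public class TestDataLoader {

        public static void loadTestData() throws Exception {
            // Hypothetical in-memory test database; point this at your test DB.
            Connection jdbc = DriverManager.getConnection(
                    "jdbc:hsqldb:mem:testdb", "sa", "");
            IDatabaseConnection connection = new DatabaseConnection(jdbc);
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(new FileInputStream("src/test/resources/test-data.xml"));
            // CLEAN_INSERT wipes the listed tables, then inserts the dataset rows.
            DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        }
    }
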
Invoke the REST API that is running on a remote server (Tomcat)
We use rest-assured, which is a great HTTP client geared specifically for testing REST APIs.
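A small rest-assured sketch against a remote Tomcat. The base URI, endpoints, and payloads are assumptions, not from the original answer:

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.equalTo;

    import io.restassured.RestAssured;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class WidgetApiTest {

        @BeforeClass
        public static void pointAtRemoteTomcat() {
            // Hypothetical base URI of the API deployed on the remote Tomcat.
            RestAssured.baseURI = "http://test-server:8080/api";
        }

        @Test
        public void createdWidgetCanBeRead() {
            given()
                .contentType("application/json")
                .body("{\"name\": \"sprocket\"}")
            .when()
                .post("/widgets")
            .then()
                .statusCode(201);

            given()
            .when()
                .get("/widgets/sprocket")
            .then()
                .statusCode(200)
                .body("name", equalTo("sprocket"));
        }
    }
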
Validate the DB post test execution for verifying expected output
I can't provide any recommendations here as we don't use any libraries to help make this easier, we just do it manually. Let me know if you find anything.
Have code coverage report of REST API so that we know how confident we should be in the scenarios covered by the suite.
We do not measure code coverage for our integration tests, only for our unit tests, so again I can't provide any recommendations here.
Keep your eyes peeled on our tech blog, as there may be more details on there in the future.
You may also check out a tool named Arquillian. It's a bit difficult to set up at first, but it provides a complete runtime for integration tests (i.e. it starts its own container instance and deploys your application along with the tests) and offers extensions that address your requirements (invoking REST endpoints, feeding the databases, comparing results after the tests).
The JaCoCo extension generates coverage reports that can later be displayed, e.g. by the Sonar tool.
I've used it for a relatively small-scale JEE6 project and, once I had managed to set it up, I was quite happy with how it works.
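For a flavour of what an Arquillian test looks like, here is a minimal sketch. WidgetService is a hypothetical CDI bean, and the deployment details will vary with your container adapter:

    import javax.inject.Inject;

    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.asset.EmptyAsset;
    import org.jboss.shrinkwrap.api.spec.WebArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    import static org.junit.Assert.assertNotNull;

    @RunWith(Arquillian.class)
    public class WidgetServiceArquillianTest {

        @Deployment
        public static WebArchive createDeployment() {
            // Package only the classes under test into a throwaway WAR.
            return ShrinkWrap.create(WebArchive.class, "widget-test.war")
                    .addClass(WidgetService.class) // hypothetical bean
                    .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml"); // enables CDI
        }

        @Inject
        private WidgetService widgetService;

        @Test
        public void serviceDeploysAndInjects() {
            // This assertion runs inside the container Arquillian started for us.
            assertNotNull(widgetService);
        }
    }
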

What does a mocking framework do for me?

I have heard that some people whom I cannot talk to are big fans of jMock. I've done test-centered development for years, so I went through the website and looked at some of the docs, but I still can't figure out what good it is.
I had the same problem with spring. Their docs do a great job explaining it if you already understand what it is, so I'm not assuming that jmock is of no value. I just don't understand what it does for me.
So if jmock provides me with the ability to mock out stubbed data, let's go with an example of how I do things and see how jmock would be better.
Let's say my UI layer asks the widget service to create a widget, and the widget service, when creating one, initializes it and stores its pieces in the three tables necessary to make up a widget.
When I write my tests, here's how I go about it.
First, I re-point Hibernate at my test Hypersonic database so I don't have to do a bunch of database setup; Hibernate creates my tables for me.
All of my classes have static factory methods that construct a test instance of the class for me. Each of my DAOs creates a test version that points to the test schema, and my service class has one that constructs itself with DAOs created by those test factories.
Now, when I run my test of the UI controller that calls the service, I am testing my code all the way through the application. Granted that this is not the total isolation generally wanted when doing a unit test, but it provides me, in my opinion, a better unit test because it executes the real code all the way through all of the supporting layers.
Because Hypersonic under Hibernate is slow, running all of my tests takes slightly longer, but a full build and packaging still completes in less than five minutes on an older computer, which I find pretty acceptable.
How would I do things differently with jmock?
In your example, there are two interfaces where one would use a mocking framework to do proper unit tests:
The interface between the UI layer and the widget service - replacing the widget service with a mock would allow you to test the UI layer in isolation, with the service returning manually created data and the mock verifying that the expected service calls (and no others) happen.
The interface between the widget service and the DAO - by replacing the DAO with a mock, any service methods that contain complex logic can be tested in isolation.
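To sketch the second case in jMock itself (the framework the question asks about): WidgetService, WidgetDao, and Widget are hypothetical stand-ins for your classes, and the three save methods mirror the three tables described in the question.

    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class WidgetServiceTest {

        private final Mockery context = new Mockery();

        @Test
        public void createWidgetStoresAllThreeParts() {
            // jMock generates a mock implementation from the DAO interface.
            final WidgetDao dao = context.mock(WidgetDao.class);

            context.checking(new Expectations() {{
                // The expected DAO calls (and, implicitly, no others).
                oneOf(dao).saveCore(with(any(Widget.class)));
                oneOf(dao).saveAttributes(with(any(Widget.class)));
                oneOf(dao).saveLinks(with(any(Widget.class)));
            }});

            new WidgetService(dao).createWidget("sprocket");

            // Fails the test if any expected call did not happen.
            context.assertIsSatisfied();
        }
    }

No database, no Hibernate, no Hypersonic: the test runs in milliseconds and pins down exactly how the service uses its DAO.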
Granted that this is not the total isolation generally wanted when doing a unit test, but it provides me, in my opinion, a better unit test because it executes the real code all the way through all of the supporting layers.
This seems to be the core of your question. The answer has a number of facets:
If you're not testing components in isolation, you don't have unit tests; you have integration tests. As you observe, these are quite valuable, but they have their drawbacks:
Since they test more things at the same time, they tend to break more often, and they tend to break in large groups (when there's a problem with common functionality); when they do break, it is harder to find out where the actual problem lies.
They are more constrained in what kinds of scenarios you can test. It can be hard or impossible to simulate certain edge cases in an integration test.
Sometimes a full integration test cannot be automated because some component is not sufficiently under your control (e.g. a third-party webservice) to set up the test data you need. In such a case you might even end up using a mocking framework in what is otherwise a high-level integration test.
I haven't looked at JMock in particular (I use Mockito) but in general mock frameworks allow you to "mock" external services such that you only need to test a single class at a time. Any dependencies of that class can be mocked, meaning the real method calls are not made, and instead stubs are called that return or throw constants. This is a good thing, because external calls can be slow, inconsistent, or unreliable--all bad things for unit testing.
To give a single example of how this works, imagine you have a service class that has a dependency on a web service client. If you test with the real web service client, it might be down, the connection might be slow, or the data behind the web service might change over time. How are you going to write a reliable test against that? (You can't). So you use a mock framework to mock/stub the web service client, and you create fake responses, fake errors, fake exceptions, to mimic the web service behavior. The difference is the result is always fast and consistent.
Also, you'd like to test all the failure cases that a given dependency might have, but without mocking that's hard to do. Consider the example I gave above. You'd like to be sure your code does the right thing if the web service throws an IOException because the web service is down (or times out), but it's not so easy to force that condition. With mocking this becomes trivial.
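Since that answer mentions Mockito, here is a sketch of forcing exactly that failure condition. WeatherClient, WeatherService, Forecast, and their methods are hypothetical names invented for the example (fetchForecast is assumed to declare IOException):

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.io.IOException;

    import org.junit.Test;

    public class WeatherServiceFailureTest {

        @Test
        public void fallsBackWhenWebServiceIsDown() throws Exception {
            // Stub the web service client to simulate a timeout on every call.
            WeatherClient client = mock(WeatherClient.class);
            when(client.fetchForecast("Ottawa"))
                    .thenThrow(new IOException("connection timed out"));

            // The service is expected to catch the failure and degrade gracefully.
            Forecast forecast = new WeatherService(client).forecastFor("Ottawa");
            assertTrue(forecast.isFallback());
        }
    }
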
