Best approach to integration test a Flex/Java web application via Maven? - java

I am working on a project that is developing a webapp with a 100% Flex UI that talks via Blaze to a Java backend running on an application server. The team has already created many unit tests, but has only created integration tests for the persistence module. Now we are wondering about the best way to integration test the other parts. Here are the Maven modules we have now; I believe this is a very typical design:
Server Side:
1) a Java domain module -- this only has unit tests
2) a Java persistence module (DAO) -- right now this only has integration tests that talk to a live database to test the DAOs, nothing really to unit test here
3) a Java service module -- right now this only has unit tests
Client Side:
4) a Flex services module that is packaged as a SWC and talks to the Java backend -- currently this has no tests at all
5) a Flex client module that implements the Flex UI on top of the Flex services module - this has only unit tests currently (we used MATE to create a loosely coupled client with no logic in the views).
These 5 modules are packaged up into a WAR that can be deployed in an application server or servlet container.
Here are the 4 questions I have:
1) Should we add integration tests to the service module, or is this redundant given that the persistence module has integration tests and the service module already has unit tests? It also seems that integration testing the Flex services module is a higher priority and would exercise the services module at the same time.
2) We like the idea of keeping the integration tests within their modules, but there is a circularity between the Flex services module and the WAR module. Integration tests for the Flex services module cannot run without an app server, and therefore those tests will have to come AFTER the WAR is built, yes?
3) What is a good technology to integration test the Flex client UIs (e.g. something like Selenium, but for Flex)?
4) Should we put the final integration tests in the WAR module or create a separate integration testing module that gets built after the WAR?
Any help/opinions are greatly appreciated!

More a hint than a strong answer, but maybe have a look at fluint (formerly dpUInt) and the "Continuous Integration with Maven, Flex, Fluint, and Hudson" blog post.

First off, just some clarification. When you say "4) Flex services module packaged as a SWC", you mean a Flex services library that I gather is loaded as an RSL. That is an important distinction from writing the services as a runtime module, because the latter could (and typically would) instantiate the services controller itself and distribute the service connection to the other modules. Your alternative, a library compiled into each module, means they all create their own instance of a service controller. You're better off putting the services logic into a module that the application loads before the other modules, and which manages the movement of services between them.
Eg.
Application.swf - starts, initialises IoC container, loads Services.swf, injects any dependencies it requires
Services.swf loads, establishes connection to server, manages required service collection
Application.swf adds managed instances from Services.swf into its container (using some form of contextual awareness so as to prevent conflicts)
Application.swf loads ModuleA.swf, injects any dependencies it requires
ModuleA.swf loads (its listed dependencies that come from Services.swf are injected) and uses those dependencies to contact the services it requires.
That said, sticking with your current structure, I will answer your questions as accurately as possible.
What do you want to test in integration? That your services are there and return what you expect, I gather. If you are using Remote Objects in BlazeDS, you could write tests to ensure you can find the endpoint, that the channels can be found, that the destination(s) exist, and that all remote methods return what is expected. The server team is testing the data store (from them to the DB and back), but you are testing that the contract between your client and the server still holds: Value Objects returned in payloads, remote methods existing, and so on.
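To make that contract test concrete: BlazeDS ships a Java AMF client (flex.messaging.io.amf.client.AMFConnection) that can exercise a remote destination from a plain JUnit test. A minimal sketch, assuming the test runs against a deployed WAR; the endpoint URL, destination and method names are hypothetical:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;

import flex.messaging.io.amf.client.AMFConnection;

public class ServiceContractIT {

    @Test
    public void remoteMethodReturnsExpectedPayload() throws Exception {
        AMFConnection connection = new AMFConnection();
        try {
            // Hypothetical endpoint; must match the channel in services-config.xml.
            connection.connect("http://localhost:8080/app/messagebroker/amf");
            // "destination.method" syntax: calls findAll() on the userService destination.
            Object result = connection.call("userService.findAll");
            assertNotNull(result);
        } finally {
            connection.close();
        }
    }
}

A failing test here tells you the client/server contract broke (endpoint moved, destination renamed, method changed) before any Flex code runs.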
(See #4 below.) The tests should live within their module; however, I would say here that you really should have a module to do the services (instead of a library, as I suggested above). Regardless: yes, deploy the testing artifacts to a local web server (using Jetty or some such) and ensure the integration-test goal depends on the packaged WAR.
I find some developers interchange UI/functional testing with integration testing. Whilst you can indeed perform the two together, there is still room for automated integration tests in Flex, where a web server is started and the core services are checked to ensure they exist and return what is required. For the UI/functional tests, Adobe maintains a good collection of resources: http://www.adobe.com/products/flex/related/#ftesting. The integration tests, as I mentioned, should have their own goal that depends on the packaged WAR project.
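To illustrate the Jetty suggestion: the integration-test phase can boot the already-packaged WAR in an embedded Jetty before the tests run. A minimal sketch, assuming Jetty 9 on the test classpath; the WAR path is whatever your WAR module produced:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

public class WarLauncher {

    // Boots the packaged WAR on the given port; call from a test setup method.
    public static Server start(String warPath, int port) throws Exception {
        Server server = new Server(port);
        WebAppContext context = new WebAppContext();
        context.setContextPath("/app");
        context.setWar(warPath);  // e.g. ../webapp/target/app.war (hypothetical path)
        server.setHandler(context);
        server.start();
        return server;
    }
}

The same thing can be done declaratively by binding the Maven Jetty plugin to pre-integration-test, which is the more common setup; the programmatic variant is handy when tests need to control the server lifecycle themselves.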

Related

Change architecture of module to correctly handle transactions

I have an application with a modular architecture. The modules communicate with each other over an RPC protocol. The application is built on top of a custom framework based on Spring and Hibernate. Each module internally has a three-layer architecture: an application layer that exposes an API to other modules and to the UI, a service layer that defines the business logic and handles transactions, and a data access layer that accesses and manages data in the DB.
Additionally, each module of this application is deployed as a separate service (WAR) on a dedicated port of the application server, and each module has its own DB connection pool. We are currently working on performance, and there is the following scenario in which performance should be improved:
Given: Module1.Component1 is marked @Transactional
And: Component1.method1() makes a call to Module2.Component1
And: Module2.Component1 responds in 1 sec, 30 sec, or 1 min, does not respond at all, or responds with an exception
The load on Module1.Component1 is 1 request/sec or 10 requests/sec.
Under high load on Module1 and with slow responses from Module2, both JDBC connection pools fill up and both modules become unable to react to further requests.
As an idea to improve the performance of Module1, the following change could be made:
Extract the external call to the other module out of Module1's open transaction.
This would fix the performance issue: Module1 would behave appropriately even if Module2 responds too slowly while a huge number of requests hits Module1. The fix can be made in the service layer by moving the @Transactional annotation so that the external call happens outside the transaction. The question, though, is how to do this correctly in order to avoid similar issues in the future when developers make similar changes. Could you please advise how to restrict this through architecture, namely how to force remote RPC calls to be made outside of a DB transaction in future development? (One possible guard is sketched below.)
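One way to enforce this architecturally, rather than by convention, is a guard in front of the RPC client layer that fails fast whenever a database transaction is still open. A sketch, assuming Spring AOP is available and that all remote calls go through one package; the package name is hypothetical:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Aspect
public class NoRpcInsideTransactionAspect {

    // Matches every method in the (hypothetical) RPC client package.
    @Before("execution(* com.example.rpc.client..*.*(..))")
    public void rejectCallsInsideTransaction() {
        if (TransactionSynchronizationManager.isActualTransactionActive()) {
            throw new IllegalStateException(
                "Remote RPC calls must not run inside an open DB transaction");
        }
    }
}

With the aspect active, any future code that reintroduces an RPC call inside an @Transactional method fails immediately in tests instead of surfacing later as pool exhaustion.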

Writing Java/Maven integration tests using webservices, a database and JMS

I'm maintaining a Java Enterprise Application that exposes a few webservices and interacts with a few JMS queues and a database.
I've got most of the code unit tested, and I thought all was good until a refactoring in which I moved a class to another package; that changed a namespace in my webservice without me noticing and broke all clients.
An Integration Test would have caught that, so I'm trying to write one.
The application is deployed on JBoss EAP 6.4, how can I make a webservice call in my mvn verify step?
What about JMS Queues? They are configured on the Application Server.
Is the application supposed to be already deployed?
Am I supposed to deploy the application with maven to a JBoss installation before the verify step or start an embedded webserver?
Most of the docs around are confusing to me, and I often see suggestions to mock stuff, which is not integration testing (and which I already do in unit tests).
Someone told me "just use Jenkins" and I read the docs, installed it and still don't understand how that is supposed to help me write integration tests since all it does is run mvn verify anyway.
This topic is too broad; there might be many different correct answers, and a lot will depend on the technologies you're using, so I'll focus first on this part only:
that changed a namespace in my webservice without me noticing and
breaking all clients
You can create unit tests for endpoints too; I do that all the time with spring-boot. For example, the code below starts the application, runs the testEndpoint() test and shuts the application down right after.
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.test.IntegrationTest;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = {MyApplication.class})
@WebAppConfiguration
@IntegrationTest("server.port:0")
public class MyControllerTest {

    // Spring Boot picks a random free port and injects it here.
    @Value("${local.server.port}")
    private int port;

    @Test
    public void testEndpoint() {
        String endpointUrl = "http://localhost:" + port;
        // make an HTTP request here to test the endpoint
    }
}
I believe this can be done with any spring-mvc application, but the code would be a bit different and not as easy as with spring-boot.
Although this would usually catch most bugs in endpoints, it doesn't eliminate the need of integration tests against a deployed project.
So focusing on the bigger picture now, if you want to create end-to-end tests, including WebServices, JMS queues and databases, I suggest creating a separate project with tests only, probably using Cucumber or something similar. This project should be triggered in Jenkins (or any other CI tool) whenever needed (e.g. before a deployment, every hour, every commit and/or every night) and it will require that all applications are already deployed.
Alternatively, you could have Jenkins deploy an application and run integration tests only against this one application. In this case the tests will depend on all other applications to be already deployed.
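For illustration, a step definition in such a separate Cucumber project can stay very small; a sketch, assuming the cucumber-java binding and JUnit, with the URL and step wording as hypothetical placeholders:

import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class WebServiceSteps {

    private int status;

    @When("the client calls the orders webservice")
    public void callEndpoint() throws Exception {
        // Hypothetical URL of the already-deployed application under test.
        URL url = new URL("http://test-env.example.com/app/ws/orders?wsdl");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        status = conn.getResponseCode();
    }

    @Then("the service responds with HTTP {int}")
    public void assertStatus(int expected) {
        assertEquals(expected, status);
    }
}

Because the project contains only tests, it can run against any environment Jenkins points it at, without rebuilding the applications themselves.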

How to mock webservice responses in a maven application?

My application consumes external third-party webservices (I'm successfully using CXF for this). How can I mock these webservices using local files that hold pre-saved responses (for test purposes)?
More specifically:
I was thinking of using 2 maven projects: dao-ws and dao-ws-mock, both having the same interface.
The first, dao-ws, really calls the webservices using CXF, whereas the second, dao-ws-mock, uses local files to build pre-saved responses (used for test purposes).
mvn install builds the webapp project, whereas mvn install -DuseMock builds the webapp project with the dao-ws-mock dependency. Is this the correct way to do it? Is there a better/simpler way to do it?
Depending on the properties used, I would produce the same .war but with different behavior. That sounds like bad practice to me (for example, I don't want to push a war with mock dependencies to our internal Nexus). What do you think? (A sketch of the two-artifact idea follows below.)
Best regards,
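To make the two-artifact idea concrete, a minimal sketch of the shared interface and a file-backed mock; all names are hypothetical:

// Shared interface; lives in a module that both dao-ws and dao-ws-mock depend on.
public interface QuoteDao {
    String fetchQuote(String symbol);
}

// dao-ws-mock: replays a pre-saved response from the classpath instead of calling CXF.
public class FileBackedQuoteDao implements QuoteDao {

    @Override
    public String fetchQuote(String symbol) {
        // Pre-saved payload, e.g. src/main/resources/responses/GOOG.xml (hypothetical layout).
        String resource = "/responses/" + symbol + ".xml";
        try (java.util.Scanner s = new java.util.Scanner(
                FileBackedQuoteDao.class.getResourceAsStream(resource), "UTF-8")) {
            return s.useDelimiter("\\A").next();  // read the whole file
        }
    }
}

A Maven profile (activated by -DuseMock) can then swap which implementation is packaged, though, as the answers below note, hosting the mocks outside the WAR avoids shipping test code at all.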
You could use SoapUI's built-in mock services: http://www.soapui.org/Getting-Started/mock-services.html
You can generate a mock service based on a WSDL, specify default responses, and even create dynamic responses that differ depending on the request.
You can then build your mock services into a .war and deploy them: http://www.soapui.org/Service-Mocking/deploying-mock-services-as-war-files.html (This link shows how to do it in the GUI, but it can be done using maven as well)
You could use Sandbox - mock services are hosted and always available so there is no need to launch another server before running tests (disclaimer: I'm a founder).
You can generate mocks from service specifications (wsdl, Apiary, Swagger) and add dynamic behaviour as needed.

Deploying multi-tier application

I want to build and deploy my first Java EE 6 multi-tier application, with web and business tiers running on separate physical servers on Glassfish 3.1.
I think I understand what's required from a theoretical high-level view, but am not clear on the specifics and small details.
My current plan is as follows:
Create a Maven Enterprise Application in NetBeans 7.
Expose Session Facade EJBs via remote interface.
Have JSF Backing Beans utilise Session Facade EJBs via JNDI lookup.
Deploy EJB JAR to one server and web WAR to the other.
I'd really appreciate some pointers on:
Application structure.
Correct JNDI lookup with separate servers. Is injection possible?
Building appropriate archives.
Deployment configuration to allow tiers to communicate.
Unless you know you will be serving many requests per second, or have very data- and/or CPU-heavy business logic, you should be perfectly fine starting out by deploying both tiers on the same application server. Starting out with a single Glassfish application server using local interfaces lets you skip a lot of complexity in the runtime environment.
This in turn allows you to use the simplest form of @EJB injection in the web tier to access the session facades in the business tier. Local interfaces are faster because the application server can pass references rather than RMI proxies between the tiers, and injection lets you skip the JNDI lookups. You can always change the annotation later on, or introduce remote interfaces, if you later find other reasons to deploy the tiers on separate servers.
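As an illustration of that local-interface injection, a minimal sketch; facade and method names are hypothetical:

import javax.ejb.EJB;
import javax.ejb.Local;
import javax.ejb.Stateless;
import javax.faces.bean.ManagedBean;

@Local
interface OrderFacade {
    void placeOrder(String item);
}

@Stateless
class OrderFacadeBean implements OrderFacade {
    public void placeOrder(String item) { /* business logic */ }
}

@ManagedBean
public class OrderBacking {

    @EJB  // the container injects the local facade; no JNDI lookup needed
    private OrderFacade orderFacade;

    public void submit() {
        orderFacade.placeOrder("book");
    }
}

If the tiers are later split, the interface's annotation (@Local to @Remote) and the deployment descriptors change, while the backing-bean code stays the same.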
Glassfish supports clustering, so you might never have to explicitly separate the two tiers; it all depends on the actual usage patterns, so performance monitoring is key.
Deploying the web tier as a WAR and the business logic as an EJB JAR is the right thing to do. Depending on the size and the logical structure of your application, you might want to break that down further into multiple modules.
Maven takes care of building the archives. Make sure you define a sub-project for each WAR and JAR archive, plus a sub-project for assembling the EAR file. The latter project pulls in the WAR and JAR files produced by the other sub-projects. String all the projects together with a master Maven project, and voila: you have the flexibility to build each component separately, build the entire thing, or any combination in between.
You have chosen a hard path, as others have pointed out in comments and answers...
Let's start with the structure of your app(s). You are going to end up with four archives, two of which you will deploy:
a "regular" jar for the Remote interface of your EJB (jar-of-interfaces)
an EJB jar that has the implementation of your EJB
an EAR archive that will contain the jar-of-interfaces (in the /lib subdirectory) and the EJB jar (in the 'root').
a WAR file that contains the code that uses the Remote interface of your EJB. This will have the jar-of-interfaces in WEB-INF/lib.
The rest of this answer is based on the EJB FAQ. The most applicable part of that document is here.
You can inject the EJB into the ManagedBean. You will not need to use a dot-lookup method in your ManagedBean.
You will need to use the corbaname for your bean in the glassfish-web.xml file.
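For reference, a manual lookup of the remote facade from the web tier could look like this sketch; the host, port (3700 is Glassfish's default IIOP port) and JNDI name are all assumptions, and in practice the corbaname usually lives in glassfish-web.xml as an ejb-ref rather than in code:

import javax.naming.InitialContext;

public class RemoteFacadeLocator {

    public static Object locate() throws Exception {
        InitialContext ctx = new InitialContext();
        // Hypothetical corbaname pointing at the business-tier server.
        return ctx.lookup(
            "corbaname:iiop:businesshost:3700#java:global/myapp/OrderFacadeBean");
    }
}

The caller then casts the result to the remote interface from the jar-of-interfaces.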

Should integration testing of DAOs be done in an application server?

I have a three tier application under development and am creating integration tests for DAOs in the persistence layer. When the application runs in Websphere or JBoss I expect to use the connection pooling and transaction manager of those application servers. When the application runs in Tomcat or Jetty, we'll be using C3P0 for pooling and Atomikos for transactions.
Because of these different subsystems, should the DAOs be tested in a fully configured application server environment, or should we handle those concerns when integration testing the service layer? Currently we plan to set up a simple JDBC data source with non-JTA (i.e. resource-local) transactions for DAO integration testing, so no application server is involved... but this leaves me wondering about environmental problems we won't uncover.
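For reference, the resource-local setup described in the question can be as small as this sketch, assuming an in-memory H2 test database and a hypothetical UserDao under test:

import static org.junit.Assert.assertNotNull;

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.Test;

public class UserDaoIT {

    @Test
    public void findByIdReturnsRow() throws Exception {
        // Plain JDBC connection with a local (non-JTA) transaction.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "")) {
            conn.setAutoCommit(false);
            conn.createStatement().execute(
                "create table users(id int primary key, name varchar(50))");
            conn.createStatement().execute("insert into users values (1, 'alice')");

            UserDao dao = new UserDao(conn);  // hypothetical DAO under test
            assertNotNull(dao.findById(1));

            conn.rollback();                  // leave no state behind
        }
    }
}

This exercises the SQL and mapping logic but, as the question suspects, says nothing about the app server's pooling or JTA behavior; those concerns only surface in container-level tests.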
Besides testing each module using unit tests, the integration tests should test groups of modules.
I don't want to be pedantic, but in theory this is followed by system tests for black-box testing by QA.
For smaller projects this may not be feasible.
I think you're on the right track with this line of thinking. If possible, you should set up a continuous integration server (e.g. Hudson) that mirrors your production environment. That way you can develop with pretty high confidence using Tomcat etc., running tests against your local setup, and be sure when you check in your code that those same tests are run against the real deal.
