My application consumes external third-party web services (I'm successfully using CXF for this). How can I mock these web services using local files to build pre-saved responses (for test purposes)?
More specifically:
I was thinking of using two Maven projects: dao-ws and dao-ws-mock, both implementing the same interface.
The first, dao-ws, actually calls the web services using CXF, whereas the second, dao-ws-mock, uses local files to build pre-saved responses (used for test purposes).
mvn install builds the webapp project, whereas mvn install -DuseMock builds the webapp project with the dao-ws-mock dependency. Is this the correct way to do it? Is there a better/simpler way?
Depending on the properties used, I would produce the same .war but with different behavior. That sounds like bad practice to me (for example, I don't want to push a WAR with mock dependencies to our internal Nexus). What do you think?
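For illustration, the dependency switch I describe could be sketched with Maven profiles roughly like this (the profile ids and groupId are made up; useMock is the property from the command above):

<!-- Sketch: pick the real or the mock DAO depending on -DuseMock -->
<profiles>
  <profile>
    <id>real-ws</id>
    <activation>
      <property>
        <!-- active when -DuseMock is NOT given -->
        <name>!useMock</name>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>dao-ws</artifactId>
        <version>${project.version}</version>
      </dependency>
    </dependencies>
  </profile>
  <profile>
    <id>mock-ws</id>
    <activation>
      <property>
        <!-- active when built with mvn install -DuseMock -->
        <name>useMock</name>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>dao-ws-mock</artifactId>
        <version>${project.version}</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>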
You could use SoapUI's built-in mock services: http://www.soapui.org/Getting-Started/mock-services.html
You can generate a mock service based on a WSDL, specify default responses, and even create dynamic responses that vary depending on the request.
You can then build your mock services into a .war and deploy them: http://www.soapui.org/Service-Mocking/deploying-mock-services-as-war-files.html (This link shows how to do it in the GUI, but it can be done using Maven as well.)
You could use Sandbox: mock services are hosted and always available, so there is no need to launch another server before running tests (disclaimer: I'm a founder).
You can generate mocks from service specifications (WSDL, Apiary, Swagger) and add dynamic behaviour as needed.
I am working on setting up a Lagom application in production. I have tried contacting Lightbend for ConductR license but haven't heard back in ages. So, now I am looking for an alternative approach. I have multiple questions.
Since the scale of the application is pretty small right now, I think using a static service locator works for me right now (open to other alternatives). Also, I am using MySQL as my event store instead of the default configuration of Cassandra (Reasons not relevant to this thread).
To suppress Cassandra and Lagom's Service Locator, I have added the following line to my build.sbt:
lagomCassandraEnabled in ThisBuild := false
I have also added the following to the application.conf of the service1-impl module.
lagom.services {
  service1 = "http://0.0.0.0:8080"
}
For the dev environment, I have been able to successfully run my application using sbt runAll in a tmux session. With this configuration, no service locator is running on the default port 8000, but I can individually hit service1 on port 8080. (Not sure if this is the expected behaviour. Comments?)
I ran sbt dist to create a zip file, then unzipped it and ran the executable inside. Interestingly, the zip was created within the service1-impl folder. So, if I have multiple modules (services?), will sbt dist create an individual zip file for each of the services?
When I run the executable created via sbt dist, it tries to connect to Cassandra and also launches a service locator, ignoring the static service locator configuration that I added. Basically, it looks like it ignores the lines I added to build.sbt. Can anyone explain this?
Lastly, if I were to have two services, service1 and service2, and two nodes in the cluster, with node 1 running service1 and node 2 running both services, how would my static service locator look in application.conf? And since each service would have its own application.conf, would I have to copy the same static service locator configuration into all of the application.confs?
Would it be something like this?
lagom.services {
  service1 = "http://0.0.0.0:8080"
  service1 = "http://1.2.3.4:8080"
  service2 = "http://1.2.3.4:8081"
}
Since each specific actor would be spawned on one of the nodes, how would it work with this service locator configuration?
Also, I don't want to run this in a tmux session in production. What would be the best way to finally run this code in production?
You can get started with ConductR in dev mode immediately, for free, without contacting sales. Instructions are at: https://www.lightbend.com/product/conductr/developer
You do need to register (read: provide a valid email) and accept TnC to access that page. The sandbox is free to use for dev mode today so you can see if ConductR is right for you quickly and easily.
For production, I'm thrilled to say that soon you'll be able to deploy up to 3 nodes in production if you register with Lightbend.com (same as above) and generate a 'free tier' license key.
Lagom is opinionated about microservices. There's always Akka and Play if those opinions aren't shared by a project. Part of that opinion is that deployment should be easy. Good tools feel 'right' in the hand. You are of course free to deploy the app as you like, but be prepared to write more polyfill the further off the marked trails you go.
Regarding service lookup, ConductR provides redirection for HTTP service lookups for use with 'withFollowRedirects' on Play WS [1]
Regarding sbt dist, each sub-project service will be a package. You can see this in the Chirper example [2], for which sbt dist generates chirp-impl.zip, friend-impl.zip, activity-stream-impl.zip, etc., as defined in Chirper's top-level build.sbt file.
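For illustration, a multi-service Lagom build.sbt looks roughly like this (a sketch with made-up project names); each *-impl sub-project then gets its own zip from sbt dist:

// Sketch: two services in one build; sbt dist packages each impl separately
lazy val service1Api = (project in file("service1-api"))
  .settings(libraryDependencies += lagomJavadslApi)

lazy val service1Impl = (project in file("service1-impl"))
  .enablePlugins(LagomJava)
  .dependsOn(service1Api)

lazy val service2Api = (project in file("service2-api"))
  .settings(libraryDependencies += lagomJavadslApi)

lazy val service2Impl = (project in file("service2-impl"))
  .enablePlugins(LagomJava)
  .dependsOn(service2Api)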
As ConductR is the clean and lighted path, you can reference how it does things in order to better understand how to replace Lagom's deployment polyfill with your own. That's the interface Lagom knows best. Much of ConductR apart from the core is already OSS, so you can try GitHub if the docs don't cover something.
Disclosure: I am a ConductR-ing Lightbender.
[1] http://conductr.lightbend.com/docs/1.1.x/ResolvingServices
[2] git@github.com:lagom/activator-lagom-java-chirper.git
I have developed a microservice (a Spring Boot REST service, deployed as an executable JAR) to track all activities from third-party projects, as my requirement, and it is working now.
It is currently working as part of some projects, and now I have updated the service with some additional features.
But I can't move the update to the live server without restarting the existing service, as it is deployed as a JAR. I'm afraid to restart my service; a restart may lead to losing data from the integrated projects.
What improvements can I make in my architecture to solve my problem?
What about the JRebel plugin? It worked perfectly for me, but unfortunately it's not a free app. As an alternative (I used this approach with Spring MVC; with Spring Boot it could be different), I set up a soft link from the working directory to the compiled path in JBoss (in my case, a directory named target containing the *.class and *.jar files). In my opinion, the first solution, JRebel, is the most appropriate for you.
Finally got a solution, as commented by @Gimby.
We can do it by deploying multiple instances of the service bound to a service registry. Here I achieved it by using Eureka as the registry service and Zuul as a proxy.
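For illustration, a minimal sketch of a Spring Boot service that registers itself with a Eureka registry (assuming spring-cloud-starter-netflix-eureka-client is on the classpath; the class name is made up). With two such instances registered, one can be restarted while Zuul keeps routing traffic to the other:

// Sketch: a service instance that registers with the configured discovery registry
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient // registers this instance with Eureka on startup
public class TrackingServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(TrackingServiceApplication.class, args);
    }
}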
I have 2 app engine projects that I have created in the Developers Console. One project is my production application and the other I plan to use for staging. I am developing my application using Cloud Endpoints.
I would like to have the applicationId, WEB_CLIENT_ID, ANDROID_CLIENT_ID, etc. all be configurable such that in the terminal I can specify a 'stage' or 'prod' flag to use different configurations and push to each respective project.
Something like:
mvn appengine:update -env=production
and
mvn appengine:update -env=stage
To do this I figure I'll need to parameterize <application> inside appengine-web.xml and also have the Constants.java file read from a config file.
How can I have different configurations for each environment?
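For reference, a sketch of the profile approach described above (profile and property names are illustrative): Maven profiles select per-environment values, and the war plugin filters appengine-web.xml so <application>${appengine.app.id}</application> is substituted at build time:

<!-- Sketch: one profile per environment, selected with -Denv=production or -Denv=stage -->
<profiles>
  <profile>
    <id>production</id>
    <activation>
      <property><name>env</name><value>production</value></property>
    </activation>
    <properties>
      <appengine.app.id>my-prod-app-id</appengine.app.id>
    </properties>
  </profile>
  <profile>
    <id>stage</id>
    <activation>
      <property><name>env</name><value>stage</value></property>
    </activation>
    <properties>
      <appengine.app.id>my-stage-app-id</appengine.app.id>
    </properties>
  </profile>
</profiles>

<!-- Filter WEB-INF resources so the property is replaced in appengine-web.xml -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <configuration>
        <webResources>
          <resource>
            <directory>src/main/webapp/WEB-INF</directory>
            <filtering>true</filtering>
            <targetPath>WEB-INF</targetPath>
          </resource>
        </webResources>
      </configuration>
    </plugin>
  </plugins>
</build>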
A different approach would be to put all your CLIENT_IDs into the @Api and @ApiMethod annotations. This would allow the same code to be accessed from different clients via the authentication mechanisms.
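For illustration, a sketch of what that might look like with Cloud Endpoints annotations (the API name and endpoint class are made up; the Constants fields are the ones mentioned in the question):

// Sketch: whitelisting several client IDs directly on the endpoint
import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;

@Api(
    name = "myApi",
    version = "v1",
    clientIds = {Constants.WEB_CLIENT_ID, Constants.ANDROID_CLIENT_ID}
)
public class MyEndpoint {
    @ApiMethod(name = "ping")
    public String ping() {
        return "pong"; // trivial method to keep the sketch self-contained
    }
}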
I'm developing a JAX-WS annotated web service and deploying it to Axis2 (1.5.1) running on Tomcat (6.0.20) in a folder named 'servicejars'. So far so good. But it is not deployable to the SimpleAxis2Server for JUnit tests.
Deploying it as a service archive (.aar) doesn't work for a JAX-WS web service, as discussed here: https://issues.apache.org/jira/browse/AXIS2-4611.
How can I write JUnit tests for a JAX-WS service with Axis2? Any suggestions?
Your description contains two problems.
The first problem is that bug. If your web services cannot be deployed at all and no client can call them, you have to find a workaround! I cannot help with that part.
The second problem is finding the right JUnit test strategy. My advice is the following: if you can avoid it, don't call real web services from JUnit tests on the client side. Find a way to call your annotated methods from JUnit tests sitting on the server side. Your unit tests will be more efficient and won't depend on a JAX-WS client.
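For illustration, a minimal sketch of that server-side strategy (the service class and method are made up): the @WebService-annotated class is a plain object, so its methods can be exercised directly, with no SOAP stack involved:

// Sketch: unit-testing the annotated service class directly
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorServiceTest {
    @Test
    public void addReturnsSum() {
        CalculatorService service = new CalculatorService(); // @WebService-annotated POJO
        assertEquals(5, service.add(2, 3));
    }
}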
I've solved my problem.
I use the solution built into Java 6 (Endpoint.publish(..)) to publish the web service from within JUnit.
It is very easy.
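For reference, such a test could look roughly like this (a sketch; the service class, address, and port are illustrative):

// Sketch: publish the JAX-WS service in-process for the duration of the test
import javax.xml.ws.Endpoint;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class CalculatorServiceIT {
    private Endpoint endpoint;

    @Before
    public void publishEndpoint() {
        // Uses the JDK's built-in lightweight HTTP server; no Axis2/Tomcat needed
        endpoint = Endpoint.publish("http://localhost:9999/calc", new CalculatorService());
    }

    @After
    public void stopEndpoint() {
        endpoint.stop();
    }

    @Test
    public void serviceIsReachable() {
        // A JAX-WS client generated from the WSDL could call the service here,
        // e.g. via the WSDL published at http://localhost:9999/calc?wsdl
    }
}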
I am working on a project that is developing a webapp with a 100% Flex UI that talks via BlazeDS to a Java backend running on an application server. The team has already created many unit tests, but has only created integration tests for the persistence module. Now we are wondering about the best way to integration test the other parts. Here are the Maven modules we have now; I believe this is a very typical design:
Server Side:
1) a Java domain module -- this only has unit tests
2) a Java persistence module (DAO) -- right now this only has integration tests that talk to a live database to test the DAOs, nothing really to unit test here
3) a Java service module -- right now this only has unit tests
Client Side:
4) a Flex services module that is packaged as a SWC and talks to the Java backend -- currently this has no tests at all
5) a Flex client module that implements the Flex UI on top of the Flex services module - this has only unit tests currently (we used MATE to create a loosely coupled client with no logic in the views).
These 5 modules are packaged up into a WAR that can be deployed in an application server or servlet container.
Here are the 4 questions I have:
1) Should we add integration tests to the service module, or is this redundant given that the persistence module has integration tests and the service module already has unit tests? It also seems that integration testing the Flex services module is a higher priority and would exercise the services module at the same time.
2) We like the idea of keeping the integration tests within their modules, but there is a circularity with the Flex services module and the WAR module. Integration tests for the Flex services module cannot run without an app server, and therefore those tests will have to come AFTER the war is built, yes?
3) What is a good technology to integration test the Flex client UIs (e.g. something like Selenium, but for Flex)?
4) Should we put final integration tests in the WAR module or create a separate integration testing module that gets built after the WAR?
Any help or opinions are greatly appreciated!
More a hint than a strong answer, but maybe have a look at fluint (formerly dpUInt) and the "Continuous Integration with Maven, Flex, fluint, and Hudson" blog post.
First off, just some clarification. When you say "4) a Flex services module that is packaged as a SWC", you mean a Flex services library that, I gather, is loaded as an RSL. It's an important distinction from writing the services as a runtime module, because the latter could (and typically would) instantiate the services controller itself and distribute the service connection to the other modules. Your alternative, simply a library you build into each module, means they all create their own instance of a service controller. You're better off putting the services logic into a module that the application can load prior to the other modules, and that manages the movement of services between them.
Eg.
Application.swf - starts, initialises IoC container, loads Services.swf, injects any dependencies it requires
Services.swf loads, establishes connection to server, manages required service collection
Application.swf adds managed instances from Services.swf into its container (using some form of contextual awareness so as to prevent conflicts)
Application.swf loads ModuleA.swf, injects any dependencies it requires
ModuleA.swf loads (has the dependencies listed that come from Services.swf injected), and uses those dependencies to contact the services it requires.
That said, sticking with your current structure, I will answer your questions as accurately as possible.
1) What do you want to test in integration? That your services are there and returning what you expect, I gather. As such, if using Remote Objects in BlazeDS, you could write tests to ensure you can find the endpoint, that the channels can be found, that the destination(s) exist, and that all remote methods return as expected. The server team are testing the data store (from them to the DB and back), but you are testing that the contract between your client and the server still holds. This contract covers any assumptions - such as Value Objects returned on payloads, remote methods existing, etc.
2) (See #4 below.) The tests should be within their module; however, I would say here that you really should have a module to do the services (instead of a library, as I suggested above). Regardless, yes: still deploy the testing artifacts to a local web server (using Jetty or some such) and ensure the integration tests goal depends on the WAR packager you use.
3) I find some developers interchange UI/functional testing with integration testing. Whilst you can indeed perform the two together, there is still room for automated integration tests in Flex, where a web server is loaded up and core services are checked to ensure they exist and are returning what is required. For the UI/functional tests, Adobe maintain a good collection of resources: http://www.adobe.com/products/flex/related/#ftesting. For integration tests, see my comments above.
4) Integration tests should have their own goal that depends on the packaged WAR project.
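To illustrate that last point, a sketch of how such a goal can be wired up with the Maven Failsafe plugin (assuming integration tests follow the *IT.java naming convention), so they run in the integration-test phase after the WAR has been packaged:

<!-- Sketch: run *IT.java tests after the package phase -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- integration-test runs after package; verify fails the build on test failures -->
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>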