I have been implementing CI on a legacy project that has very little JUnit test coverage. I have explained that, given where we are, the quickest bang for the buck is to deploy and start the code as part of the weekly build and then hit the running Tomcat instance with a number of HTTP requests. I plan to do this via JUnit.
The tests will be basic, but will cover:
Has the server started, and can it process HTTP requests?
Are we getting the expected data back from the server?
...
I appreciate that this isn't ideal, but at present there are no tests that enable the dev team to deduce whether the weekly build is good and actually runs.
All of the examples I have found to date seem to mock the actual HTTP request. Is there a framework out there that will make real HTTP requests and handle blocking on asynchronous results until they can be checked?
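For concreteness, something like this plain-JUnit sketch is what I have in mind; the URL, path, and expected content are placeholders for our app:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class WeeklyBuildSmokeTest {

    // Placeholder: base URL of the Tomcat instance started by the weekly build
    private static final String BASE_URL = "http://localhost:8080/myapp";

    @Test
    public void serverIsUpAndReturnsExpectedData() throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(BASE_URL + "/status").openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(10000);

        // Has the server started, and can it process HTTP requests?
        assertEquals(200, conn.getResponseCode());

        // Are we getting the expected data back from the server?
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String body = in.lines().reduce("", String::concat);
            assertTrue(body.contains("OK")); // placeholder assertion
        }
    }
}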
You can automate a browser with various tools. A common one is Selenium. We couple it with BDD tooling (Cucumber/SpecFlow) for regression testing and CI.
Others include Robot Framework.
For tools rather than frameworks, you could use Badboy, JMeter, or The Grinder.
JMeter could be used with some assertions, though it's really a performance-testing tool. Why not curl or wget?
I decided that using SpringJUnit4ClassRunner and autowiring the required dependencies into each of the test cases makes it simple to create isolated integration tests.
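A minimal sketch of that setup; the context file and the autowired service are hypothetical stand-ins for the project's real beans:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical context file
public class OrderServiceIntegrationTest {

    @Autowired
    private OrderService orderService; // hypothetical dependency under test

    @Test
    public void contextLoadsAndServiceIsWired() {
        assertNotNull(orderService);
        // exercise the service in isolation here
    }
}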
We are developing a microservice-based application using JHipster. Several components have to run at the same time: the service registry, the UAA server, the gateway, and the other services. Running all of these components on my PC consumes all of its resources (16 GB of RAM), and other developers don't have sufficient resources on their machines, so continuous development as a team has become a problem.
We are therefore looking for options to make our development team more efficient.
Currently, if someone wants to add or change a feature, they need to work with both the microservice and the gateway (for the frontend).
So what happens when multiple developers are working on the gateway and a service at the same time in the development environment?
How are they going to debug and test? Does each of them have to deploy the gateway individually?
We are planning to deploy the microservices on our own VPS server; in the near future Heroku, Kubernetes, Jenkins, or Cloud Foundry could be used for production.
Correct me if I am wrong: is there a better option for smooth development?
I read in Sam Newman's microservices book about the problem of a single gateway-based application during development, so now I am very curious about how JHipster resolves this problem.
It seems to me that you are trying to work with your microservices as if they were a monolith. One of the most important features of the microservice architecture is the ability to work on each microservice independently, which means that you do not need to run the whole infrastructure to develop a feature. Imagine a developer at Netflix who needed to run several hundred microservices on their PC to develop a feature - that would be crazy.
Microservices are all about ownership. Usually, different teams work on different microservices or some set of microservices, which makes it really important to have a good test design so that whenever all the microservices are up and running as one cohesive system, everything works as expected.
All that said, when you are developing your microservice you don't have to rely on other microservices. Instead, you are better off mocking all the interactions with other microservices and writing tests to check whether your microservice is doing what it has to do. I would suggest WireMock, as it has out-of-the-box support for Spring Boot. Another reason is that it is also supported by Spring Cloud Contract, which enables another powerful technique called Consumer-Driven Contracts; this makes it possible to ensure that the contract between two microservices is not broken at any given time.
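A minimal WireMock sketch of that idea, stubbing a collaborator microservice from a JUnit 4 test; the endpoint, port, and payload are invented:

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import org.junit.Rule;
import org.junit.Test;

import com.github.tomakehurst.wiremock.junit.WireMockRule;

public class InventoryClientTest {

    // Starts a stub HTTP server on port 8089 for the duration of each test
    @Rule
    public WireMockRule inventoryService = new WireMockRule(8089);

    @Test
    public void handlesStockResponseFromInventoryService() {
        // Stand-in for the real inventory microservice (hypothetical endpoint)
        inventoryService.stubFor(get(urlEqualTo("/api/stock/42"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"productId\": 42, \"inStock\": true}")));

        // Point the microservice under test at http://localhost:8089
        // and assert that it processes the stubbed response correctly.
    }
}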
These are integration tests (they run at the level of a single microservice), and they give very fast feedback because of the mocks. On the other hand, you can't guarantee that your application works fine after running all of them. That's why there should be another category of more coarse-grained tests, aka end-to-end tests. These tests run against the whole application, meaning that all the infrastructure must be up and running and ready to serve your requests. This kind of testing is usually performed automatically by your CI, so you do not need all the microservices running on your PC. This is where you check whether your API gateway works fine with the other services.
So, ideally, your test design should follow the test pyramid: the more coarse-grained a category of tests is, the fewer of them should be kept in the system. There is no silver bullet when it comes to proportions, but rough numbers are:
Unit tests - 65%
Integration tests - 25%
End-to-end tests - 10%
P.S.: Sorry for not answering your question directly; I had to use the testing analogy to make things clearer. All I wanted to say is that, in general, you don't need to take care of the whole system while developing. You need to take care of your particular microservice and mock out all the interactions with the other services.
Developing against a shared gateway makes little sense, as it means that you cannot use the webpack dev server to hot-reload your UI changes. Gateways and microservices can run without the registry: just use local application properties and define static Zuul routes. If your microservices are well defined, most developers will only need to run a small subset of them to develop new features or fix bugs.
The UAA server can be shared, but alternatively you can create a security configuration that simulates authentication and activate it through a specific profile. This way, when a developer works on a single web service, she can test it with a simple REST client like curl or Swagger without having to worry about tokens.
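A sketch of such a configuration, assuming the pre-Spring-Security-5.7 WebSecurityConfigurerAdapter style; the profile name is arbitrary:

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

// Active only when the service is started with spring.profiles.active=no-auth,
// so a developer can hit the API with curl or Swagger without obtaining tokens.
@Configuration
@EnableWebSecurity
@Profile("no-auth") // hypothetical profile name
public class NoAuthSecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable()
            .authorizeRequests().anyRequest().permitAll();
    }
}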
Another possibility, if you want to share the registry, is to assign a Spring profile per developer, but that might be overkill compared to the above approach.
I have to automate REST API testing in my project and integrate it into the existing CI in Jenkins.
I am about to start coding using REST-assured. However, I happened to see the SOAP UI REST tutorial and understand that SOAP UI has a Maven plugin to help with Jenkins integration. Before I progress, I just wanted to know if there is an obvious advantage to using SOAP UI over REST-assured.
I have to complete the automation of around 30 requests with complex JSON responses in about a month, including schema validation of the responses.
I haven't used REST-assured, but I had a quick look and I see it's a Java DSL for testing REST services. Given that it does what it says it does, here's my answer...
I've used SOAP UI for testing web services. Generally, it has been very good for manual testing, but I found it difficult to use for automated testing.
The main reason was that many of the file paths are hard-coded into SOAP UI projects, so a project referring to c:\development\myproject\wsdl\myservice.wsdl would suddenly not work on another developer's machine at /dev/myproject/wsdl/myservice.wsdl.
I also found that not being able to edit SOAP UI projects effectively in IntelliJ meant I was constantly alt-tabbing.
Yes, the SOAP UI Maven plugin did work, but I found it cumbersome.
Note that I haven't used SOAP UI REST, just "normal" SOAP UI, but if your use case is purely to implement automated testing, and the REST-assured framework does what it says, I would certainly recommend using the DSL.
Given your current use case, the simpler of the two would be REST-assured (points for the Java DSL, and bonus readability in tests; you can always use other clients if you want to). Given that you intend to automate your tests and integrate them into CI, you can simply create a module that runs your test suite during a given build phase and gathers the results.
P.S.: I currently use JBehave + REST-assured.
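For what it's worth, a REST-assured sketch covering the schema-validation requirement from the question; the host, endpoint, and schema file are placeholders, and the json-schema-validator module of REST-assured is assumed to be on the classpath:

import static io.restassured.RestAssured.given;
import static io.restassured.module.jsv.JsonSchemaValidator.matchesJsonSchemaInClasspath;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class UserApiTest {

    @Test
    public void getUserMatchesJsonSchema() {
        given()
            .baseUri("http://localhost:8080") // placeholder host
        .when()
            .get("/api/users/1")              // placeholder endpoint
        .then()
            .statusCode(200)
            .body("id", equalTo(1))
            // validate the whole response against a schema on the classpath
            .body(matchesJsonSchemaInClasspath("schemas/user-schema.json"));
    }
}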
I am aware that RabbitMQ is written in Erlang and thus can't be embedded in a JVM the way we would embed the ActiveMQ JMS broker, for example.
But there are some projects written in another language that can easily be embedded for integration tests.
For example, MongoDB, written in C++, can easily be started/stopped in the context of a JVM integration test with:
https://github.com/flapdoodle-oss/embedmongo.flapdoodle.de
There is also someone porting it to Java:
https://github.com/thiloplanz/jmockmongo/
So I just wonder how we can do integration tests when my application is written in Java and the other technology is in another language (like Erlang for RabbitMQ).
In general, what are the good practices?
I see 3 main solutions:
Starting a real RabbitMQ
Embedding a JVM port of the technology in the language currently used
Using standard technologies, so that a technology in Erlang can have the same behavior and communication layer as another one in Java (RabbitMQ / Qpid / StormMQ all implementing AMQP)
Is there a Maven/SBT/Ant plugin to start up a temporary RabbitMQ broker?
Is there any project that supports starting RabbitMQ before a JUnit/TestNG test class?
I have seen that there is an open-source implementation of AMQP in Java: Apache Qpid.
Does anyone have experience using this implementation for integration testing while production runs RabbitMQ? Is it even possible?
I am using Spring Integration.
By the way, I just noticed that the Spring-AMQP project mentions in its GitHub readme:
Many of the "integration" tests here require a running RabbitMQ server
- they will be skipped if the broker is not detected.
If it were me, I would look at mocking the stack component in question. "What's the best mock framework for Java?" (although not a great Stack Overflow question) and "Mock Object" might help get you going.
Mocking the components makes testing much easier (IMHO) than trying to "live test" everything.
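For example, with Mockito you could stub the broker-facing API instead of talking to a live RabbitMQ; Spring AMQP's AmqpTemplate is used here, and OrderPublisher is a hypothetical class under test:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.springframework.amqp.core.AmqpTemplate;

public class OrderPublisherTest {

    @Test
    public void publishesOrderToExchange() {
        // Mock the messaging layer instead of running a real broker
        AmqpTemplate amqpTemplate = mock(AmqpTemplate.class);

        OrderPublisher publisher = new OrderPublisher(amqpTemplate); // hypothetical
        publisher.publish("order-123");

        // Verify the interaction that would have reached RabbitMQ
        verify(amqpTemplate).convertAndSend("orders.exchange", "orders.new", "order-123");
    }
}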
In the past I have used a virtual machine with a fresh install of RabbitMQ running. Developers would run it constantly, and our CI server would start a new VM for each revision of the code. Our tests would fail, rather than be skipped, if they could not connect to the server, since a lack of integration tests is a serious problem.
This tends to work reasonably well and avoids needing to start and stop RabbitMQ for the tests. Our tests were split up to use vhosts for isolation, plus a few calls to create vhosts on demand, so we could parallelize tests if we needed to.
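A sketch of the vhost-per-test idea; it assumes the management plugin on its default port 15672 and the default guest/guest account, and error handling is omitted:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class VhostPerTest {

    public static Connection connectToFreshVhost(String vhost) throws Exception {
        // Create the vhost via the management HTTP API: PUT /api/vhosts/{name}.
        // Depending on your setup you may also need to grant permissions via
        // PUT /api/permissions/{vhost}/{user}.
        URL url = new URL("http://localhost:15672/api/vhosts/" + vhost);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString("guest:guest".getBytes("UTF-8")));
        conn.getResponseCode(); // 201 Created, or 204 if it already existed

        // Connect the AMQP client to the isolated vhost
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setVirtualHost(vhost);
        return factory.newConnection();
    }
}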
I will give you my opinion on Apache Qpid vs RabbitMQ, as I have worked with both: they both 'extend' AMQP, but they are very different. If you will have RabbitMQ in production, then use the same broker in tests; also the same version, patches, fixes, etc. There are things Apache Qpid can do that RabbitMQ cannot, and vice versa.
Stating the obvious: integration tests are done so that you can test the integration of the application, so I would start a real instance of RabbitMQ.
Building on Philip Christiano's answer that says to use VMs: now we have Docker, and I think it is the way to go for embedding different technologies in containers.
One can start a Docker container running a RabbitMQ server, and it will be faster than using VMs, because containers are lightweight and start up much more quickly.
There are some Maven plugins that can start Docker containers, for example: http://www.alexecollins.com/content/docker-maven-plugin/
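If you'd rather not depend on a plugin, a crude sketch of the same idea is to shell out to the Docker CLI from test setup code; the image tag and port mapping are assumptions:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.concurrent.TimeUnit;

public class DockerRabbitMq {

    private String containerId;

    public void start() throws Exception {
        // Start a disposable RabbitMQ container, mapping the default AMQP port
        Process run = new ProcessBuilder(
                "docker", "run", "-d", "-p", "5672:5672", "rabbitmq:3").start();
        run.waitFor(30, TimeUnit.SECONDS);
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(run.getInputStream(), "UTF-8"))) {
            containerId = out.readLine(); // `docker run -d` prints the container id
        }
        // In real code: poll port 5672 until the broker accepts connections
    }

    public void stop() throws Exception {
        // Force-remove the container and all of its state
        new ProcessBuilder("docker", "rm", "-f", containerId).start()
                .waitFor(30, TimeUnit.SECONDS);
    }
}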
Since I was in your exact same situation and couldn't find a good solution, I developed one myself (inspired by Embedded-Redis, Embedded-Mongo, etc.).
This library is a wrapper around the process of downloading, extracting, starting and managing RabbitMQ so it can work like an embedded service controlled by any JVM project.
Check it out: https://github.com/AlejandroRivera/embedded-rabbitmq
It's as simple as:
EmbeddedRabbitMqConfig config = new EmbeddedRabbitMqConfig.Builder()
        .version(PredefinedVersion.V3_5_7)
        // ...
        .build();

EmbeddedRabbitMq rabbitMq = new EmbeddedRabbitMq(config);
rabbitMq.start();  // downloads, extracts and starts the broker
...
rabbitMq.stop();   // shuts it down again
Works on Linux, Mac and Windows.
I'm very new to JUnit, but I want to set up some tests which do the following:
Test a range of server-to-server API calls, verifying the responses are correct - I can do that fine.
Open a web page, enter data into it, and verify what happens on submit - this I am struggling with. Is it even possible?
I am thinking that I could fetch a web page using a server-side HTTP request, but I'm not sure how I can interact with the site itself, i.e. enter data into the forms.
Any thoughts?
Thanks
Steve
You could use Selenium for this. I suggest you use version 2, which is currently in development and should have a beta available soon (alphas are already available).
Have a look at Selenium; it's a system for testing web applications (and, de facto, websites), and you can write all your tests in Java. There is another project named Tellurium, based on Selenium, but Tellurium works with Groovy and a DSL; it might be easier to handle at first.
How does this work?
First you create tests in Java (Selenium) or Groovy (Tellurium).
Then you start your tests. They will drive your web browser, interacting with it to test every inch of your application (as you coded it).
At the end you get a report about your tests, just as JUnit gives you.
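A small Selenium (WebDriver) sketch of the form scenario from the question; the URL and element names are invented:

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ContactFormSeleniumTest {

    @Test
    public void submittingTheFormShowsConfirmation() {
        WebDriver driver = new FirefoxDriver(); // drives a real browser
        try {
            driver.get("http://localhost:8080/contact.html"); // invented URL

            // Enter data into the form and submit it
            driver.findElement(By.name("email")).sendKeys("steve@example.com");
            driver.findElement(By.name("message")).sendKeys("Hello");
            driver.findElement(By.id("submit")).click();

            // Verify what happens on submit
            assertTrue(driver.getPageSource().contains("Thank you"));
        } finally {
            driver.quit();
        }
    }
}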
You can also exploit the nature of the web. There's no real reason to render a form, fill it out, and submit it just to test the form-processing code. The display of the form is one HTTP request, and the submission is another. It's perfectly reasonable to test form-submission code by mocking up what a browser would send and asserting that it's handled correctly.
You do need to make sure that the form rendering and the submission-handling test code stay in sync, but you don't necessarily need a full integration test for this either.
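A sketch of that approach: post exactly what the browser would send and assert on the response, no browser involved (the URL and form fields are invented):

import static org.junit.Assert.assertEquals;

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class FormSubmissionTest {

    @Test
    public void formHandlerAcceptsValidSubmission() throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8080/contact").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // The same body a browser would produce for the filled-in form
        byte[] body = "email=steve%40example.com&message=Hello".getBytes("UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }

        // Assert the submission is handled correctly, without rendering the form
        assertEquals(200, conn.getResponseCode());
    }
}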
There are tools that allow testing without booting up a browser; one that springs to mind is HtmlUnit (and there are others). If you find that Selenium is a pain to write, or the tests are brittle or flaky, look for simpler tools like this.
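A headless HtmlUnit sketch of the same flow (recent HtmlUnit versions; the URL, form, and field names are invented):

import static org.junit.Assert.assertTrue;

import org.junit.Test;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class ContactFormHtmlUnitTest {

    @Test
    public void fillsAndSubmitsFormWithoutABrowser() throws Exception {
        try (WebClient webClient = new WebClient()) {
            HtmlPage page = webClient.getPage("http://localhost:8080/contact.html");

            // Fill in the form fields (names are invented)
            HtmlForm form = page.getFormByName("contact");
            form.getInputByName("email").setValueAttribute("steve@example.com");
            form.getInputByName("message").setValueAttribute("Hello");

            // Click submit and inspect the resulting page
            HtmlPage result = form.getInputByName("send").click();
            assertTrue(result.asText().contains("Thank you"));
        }
    }
}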
I suggest you try Robot Framework. This is an open-source testing framework developed by engineers at Nokia Siemens Networks.
It is primarily built on Python and the Selenium testing libraries. It also includes support for testing Java/J2EE server-side code through Jython libraries. I personally use it in my work sometimes, and writing a test case is as easy as describing an end-to-end flow through the use of keywords (most of the required ones are already built in). You could give this a shot if you find Selenium a little tough to work with. Robot Framework provides a fairly simple abstraction over raw Selenium, coupled with the ability to make Java/J2EE server-side calls too.
Regards,
Nagendra U M
Any idea how to use JMeter for performance testing of a standalone Java app?
Thanks
JMeter is used for simulating network traffic to a server and testing the responsiveness of the other end under heavy load. It will be of some use to you if your application exposes a network interface (HTTP, TCP, FTP, SOAP). You could then add a sampler and configure a scenario where a lot of requests are sent at the same time.
One easy way is to run a JUnit test from JMeter via its JUnit Request sampler.
If your standalone app makes any HTTP requests, then yes, you can use JMeter to performance-test it. Find out which HTTP requests your app makes, add them to an HTTP Request sampler in a JMeter test plan along with the necessary data, and then generate load against them.
I came across JMH for benchmarking standalone applications: https://github.com/openjdk/jmh
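A minimal JMH sketch for benchmarking a code path in a standalone app; the benchmarked logic is a placeholder:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ParserBenchmark {

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public int parse() {
        // Placeholder for the code path you want to measure;
        // returning the result prevents dead-code elimination
        return Integer.parseInt("123456");
    }

    public static void main(String[] args) throws Exception {
        Options opts = new OptionsBuilder()
                .include(ParserBenchmark.class.getSimpleName())
                .forks(1)
                .build();
        new Runner(opts).run();
    }
}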