I am aware RabbitMQ is written in Erlang and thus can't be embedded in a JVM like we would do with the ActiveMQ JMS broker, for example.
But there are some projects written in other languages that can still be easily embedded for integration tests.
For example, MongoDB, written in C++, can easily be started/stopped in the context of a JVM integration test with:
https://github.com/flapdoodle-oss/embedmongo.flapdoodle.de
There is also someone porting it to Java:
https://github.com/thiloplanz/jmockmongo/
So I wonder: how can we do integration tests when my application is written in Java and the other technology is in another language (like Erlang for RabbitMQ)?
In general, what are the good practices?
I see 3 main solutions:
Starting a real RabbitMQ
Embedding a JVM port of the technology in the language currently used (Java)
Using standard protocols, so that a technology written in Erlang has the same behavior and communication layer as one written in Java (RabbitMQ / Qpid / StormMQ all implement AMQP)
Is there a Maven/SBT/Ant plugin to start up a temporary RabbitMQ broker?
Is there any project that supports starting RabbitMQ before a JUnit/TestNG test class?
I have seen that there is an open-source implementation of AMQP in Java: Apache Qpid.
Does anyone have experience using this implementation for integration testing while production runs RabbitMQ? Is it even possible?
I am using Spring Integration.
By the way, I just noticed that the Spring AMQP project mentions in its GitHub readme:
Many of the "integration" tests here require a running RabbitMQ server
- they will be skipped if the broker is not detected.
If it were me, I would look at mocking the stack component in question. The questions "What's the best mock framework for Java?" (although not a great Stack Overflow question) and "Mock Object" might help get you going.
Mocking the components makes testing much easier (IMHO) than trying to "live test" everything.
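To make that concrete, here is a minimal, framework-free sketch of the idea (the `MessageSender` interface and the class names are hypothetical, not part of Spring AMQP or any library): the code under test depends on an interface, and the test swaps in a recording double instead of a real RabbitMQ client.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical port that application code uses instead of a concrete broker client.
interface MessageSender {
    void send(String routingKey, String payload);
}

// Test double: records messages instead of talking to RabbitMQ.
class RecordingSender implements MessageSender {
    final List<String> sent = new ArrayList<>();
    @Override
    public void send(String routingKey, String payload) {
        sent.add(routingKey + "=" + payload);
    }
}

public class MockingExample {
    // Code under test only sees the interface, so no broker is needed.
    static void placeOrder(MessageSender sender, String orderId) {
        sender.send("orders.created", orderId);
    }

    public static void main(String[] args) {
        RecordingSender sender = new RecordingSender();
        placeOrder(sender, "42");
        System.out.println(sender.sent); // [orders.created=42]
    }
}
```

A mocking framework such as Mockito generates this kind of double for you, but the principle is the same.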
In the past I have used a virtual machine with a fresh install of RabbitMQ running. Developers would run it constantly, and our CI server would start a new VM for each revision of the code. Our tests would fail if they could not connect to the server, instead of skipping the tests, because a lack of integration tests is a serious problem.
This tends to work reasonably well and avoids needing to start and stop RabbitMQ for the tests. Our tests used vhosts for isolation, with a few calls to create vhosts on demand so we could parallelize tests if we needed to.
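The vhost-on-demand step can be scripted against RabbitMQ's management HTTP API (a vhost is created with PUT /api/vhosts/{name}). A minimal sketch, assuming the management plugin on its default port 15672 and the default guest/guest credentials:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class VhostPerTest {

    // URL of the management-API endpoint that creates a vhost.
    static String vhostUrl(String baseUrl, String vhost) {
        try {
            return baseUrl + "/api/vhosts/" + URLEncoder.encode(vhost, StandardCharsets.UTF_8.name());
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError(e); // UTF-8 is always supported
        }
    }

    // Basic-auth header value for the management API.
    static String basicAuth(String user, String pass) {
        return "Basic " + Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String vhost = "test-" + System.currentTimeMillis(); // unique vhost per test run
        System.out.println(vhostUrl("http://localhost:15672", vhost));
        // To actually create it: open an HttpURLConnection on that URL,
        // setRequestMethod("PUT"), add the Authorization header from
        // basicAuth("guest", "guest"), and expect a 201 Created response.
    }
}
```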
I will give you my opinion on Apache Qpid vs RabbitMQ, as I have worked with both. My opinion is that they both 'extend' AMQP, but are very different. If you will have RabbitMQ in production, then use the same one in tests: also the same version, patches, fixes, etc. There are things Apache Qpid can do that RabbitMQ cannot, and vice versa.
Stating the obvious: integration tests are done so that you can test the integration of the application. I would start a real instance of RabbitMQ.
Building on Philip Christiano's answer, which says to use VMs: now we have Docker, and I think it is the way to go for embedding different technologies in containers.
One can start a Docker container running a RabbitMQ server; it will be faster than using VMs because Docker containers are lightweight and their startup time is much better.
There are some Maven plugins that can start Docker containers, for example: http://www.alexecollins.com/content/docker-maven-plugin/
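If a plugin doesn't fit your build, a test fixture can also shell out to the Docker CLI itself. A minimal sketch (the image tag and port mapping are assumptions; adjust to your setup):

```java
import java.util.Arrays;
import java.util.List;

public class RabbitMqContainer {

    // Assemble the docker command line for a throwaway broker container.
    static List<String> runCommand(String image, int hostPort) {
        return Arrays.asList("docker", "run", "-d", "--rm",
                "-p", hostPort + ":5672", image);
    }

    public static void main(String[] args) {
        List<String> cmd = runCommand("rabbitmq:3", 5672);
        System.out.println(String.join(" ", cmd));
        // In a test fixture you would launch it with:
        //   Process p = new ProcessBuilder(cmd).start();
        // then poll port 5672 until it accepts connections before running tests,
        // and `docker stop` the container in teardown.
    }
}
```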
Since I was in your exact same situation and couldn't find a good solution, I developed one myself (inspired by Embedded-Redis, Embedded-Mongo, etc.).
This library is a wrapper around the process of downloading, extracting, starting and managing RabbitMQ so it can work like an embedded service controlled by any JVM project.
Check it out: https://github.com/AlejandroRivera/embedded-rabbitmq
It's as simple as:
EmbeddedRabbitMqConfig config = new EmbeddedRabbitMqConfig.Builder()
        .version(PredefinedVersion.V3_5_7)
        // ...
        .build();
EmbeddedRabbitMq rabbitMq = new EmbeddedRabbitMq(config);
rabbitMq.start();
// ... run your tests against the broker ...
rabbitMq.stop();
Works on Linux, Mac and Windows.
Let us assume an application has 10+ Spring Boot microservices. Which of the two options below is the best way to deploy to a production environment?
Using an embedded server per service, run through java -jar xyz.jar?
Using an external application server (JBoss or Tomcat), with services running on their own ports?
The recommended approach is the 1st option, because:
You can use any of the lightweight servers, e.g. Undertow.
You can dockerize it and scale up as needed.
It is easy to maintain, and saves time and money.
The 2nd option has some limitations:
You are not using Spring Boot's self-deployment feature.
Multiple applications may be deployed to the same server, which can make your application slower.
Generally option 1 is preferable if you are on modern infrastructure, but there is no single "best way" to do deployments. Both approaches have trade-offs:
Option 1 gives you better isolation, and when implemented with containers or a PaaS it allows for immutable deployments, which is a big improvement when it comes to testing. The downside is a more complicated deployment process, which should be automated, and higher server resource consumption.
Option 2 usually simplifies the architecture and is better suited to a manual deployment process. If your organization doesn't buy into continuous delivery, this approach will be much easier to live with. The downside is that over time the application servers on different environments (DEV, QAT, PROD) will usually end up with different configuration setups, making testing much harder.
I have been testing out microservices lately, using Spring Boot to create microservice projects. The more I understand about the setup, the more questions I am confronted with:
How are all the running microservices managed? How do developers manage, deploy or update microservices from a central location?
When deploying multiple instances of a microservice, do you leave the port to be decided at runtime, or should it be predefined?
I am sure many more questions will pop up later.
Links used:
http://www.springboottutorial.com/creating-microservices-with-spring-boot-part-1-getting-started
https://fernandoabcampos.wordpress.com/2016/02/04/microservice-architecture-step-by-step-tutorial/
Thanks in advance.
Microservices do tend to get out of control sooner rather than later. With so many services floating around, you need to think about deployment and monitoring strategies ahead of time.
Neither of these is an easy problem, but you have quite a few tools at your disposal.
Start with CI/CD; search around and you will find your way. One option is to use Jenkins for blue/green deployments.
In this case Jenkins will be the one central place where you manage your deployments (but this is just an example; there are quite a lot of tools built around this that may serve you better, depending on your needs).
The other part of the problem is where you deploy. Different cloud providers have their own specific ways of handling microservices, so it really depends on your host. But one alternative is to use containers.
If you go with raw containers like Docker directly, you will have to take care of mapping ports (if services are deployed on the same host machine), but you can then use an abstraction on top of this: if you are on AWS you can consider ECS or Docker Swarm, or, my personal preference, Kubernetes. With those you do not need to worry about which ports services are on, and you can talk to a service directly through a load balancer. There is a lot missing here, and you really need to pick one such tool and dig deep, but there are options out there for you to explore.
Next is monitoring. If you are going with Kubernetes, you get a lot of monitoring tools out of the box that help you access and query service logs. But you also need to make sure that, from a development perspective, you provide correlation IDs, API metrics and response times, because you will need them to debug issues in microservices, especially latency-related ones. If you are not on Kubernetes you can still add all these features, but individually: the ELK stack for log monitoring (as you do not want to go to each service to check its logs), Zipkin for tracing, and an API gateway and load balancers for service discovery and talking to containers.
Hope this helps you get started.
You can start with the following:
Monitoring:
Start with spring-boot-admin and Prometheus.
https://github.com/codecentric/spring-boot-admin
Deployment:
Start with Docker and docker-compose, then move to Kubernetes.
A few examples for docker-compose:
https://github.com/jinternals/spring-cloud-stream
https://github.com/jinternals/spring-micrometer-demo
There are container services/container-management systems available, for example Amazon ECS, Azure Container Service and Kubernetes. They take care of automated deployment from centralized repositories such as Amazon ECR, automatically scale microservice instances up and down, take advantage of dynamic port allocation to run multiple instances of the same service on a single host, and give you a centralized dashboard to monitor resource usage and infrastructure events.
Any one of them should answer all of your questions, as they provide most of the functionality needed for managing your microservices.
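On the dynamic-port question specifically: with Spring Boot, a common approach is to let each instance bind a random free port and advertise it through the registry; a configuration sketch:

```properties
# application.properties: port 0 tells Spring Boot's embedded server to pick
# any free port at startup; the chosen port is then available at runtime as
# the local.server.port property (or via @LocalServerPort in tests).
server.port=0
```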
We are developing a microservice-based application using JHipster. For that, several components have to run at the same time: the service registry, the UAA server, the gateway and various other services. Running all these components on my PC consumes all its resources (16 GB of RAM), and other developers don't have sufficient resources on their PCs, which is causing problems for continuous development in the team.
So we are looking for options to solve this problem and make our development team more efficient.
Currently, if someone wants to add or change features in the application, he needs to work with both a microservice and the gateway (for the frontend).
So what happens in this case? Suppose multiple developers are working on the gateway and a service at the same time in the development environment.
How are they going to debug/test? Do they each have to deploy the gateway individually?
We are planning to deploy the microservices on our own VPS server; in the near future Heroku, Kubernetes, Jenkins or Cloud Foundry could be used for production.
Correct me if I am wrong, and is there a better option for smooth development?
I had read in Sam Newman's microservices book about the problem of a single-gateway-based application during development. Now I am very curious how JHipster resolves this problem.
It seems to me that you are trying to work with your microservices as if they were a monolith. One of the most important features of the microservice architecture is the ability to work on each microservice independently, which means that you do not need to run the whole infrastructure to develop a feature. Imagine a developer at Netflix who needed to run several hundred microservices on their PC to develop a feature: that would be crazy.
Microservices are all about ownership. Usually, different teams work on different microservices, or on some set of microservices, which makes it really important to build a good test design, so that whenever all the microservices are up and running as one cohesive system, everything works as expected.
All that said, when you are developing your microservice you don't have to rely on other microservices. Instead, you had better mock all the interactions with other microservices and write tests to check whether your microservice is doing what it has to do. I would suggest WireMock, as it has out-of-the-box support for Spring Boot. Another reason is that it is also supported by Spring Cloud Contract, which enables another powerful technique called Consumer-Driven Contracts, making it possible to ensure that the contract between two microservices is not broken at any given time.
These are integration tests (they run at the microservice level), and they give very fast feedback because of the mocks; on the other hand, you can't guarantee that your application works fine after running only them. That's why there should be another category of more coarse-grained tests, aka end-to-end tests. These tests run against the whole application, meaning that all the infrastructure must be up, running and ready to serve your requests. This kind of test is usually performed automatically by your CI, so you do not need all the microservices running on your PC. This is where you can check whether your API gateway works fine with the other services.
So, ideally, your test design should follow the test pyramid: the more coarse-grained a test category is, the fewer of those tests the system should keep. There is no silver bullet for the proportions, but rough numbers are:
Unit tests - 65%
Integration tests - 25%
End-to-end tests - 10%
P.S.: Sorry for not answering your question directly; I had to use the analogy with tests to make it clearer. All I wanted to say is that, in general, you don't need to take care of the whole system while developing. You need to take care of your particular microservice and mock out all the interactions with other services.
Developing against a shared gateway makes little sense, as it means that you cannot use webpack dev server to hot-reload your UI changes. Gateways and microservices can run without the registry: just use local application properties and define static Zuul routes. If your microservices are well defined, most developers will only need to run a small subset of them to develop new features or fix bugs.
The UAA server can be shared, but alternatively you can create a security configuration simulating authentication that you activate through a specific profile. This way, when a developer works on one single web service, she can test it with a simple REST client like curl or Swagger without having to worry about tokens.
Another possibility, if you want to share the registry, is to assign a Spring profile per developer, but that might be overkill compared to the above approach.
I am using an embedded ActiveMQ in my application, and the queue works very well.
Now we want a way to monitor this ActiveMQ; due to its embedded nature, we cannot use the default web console provided by ActiveMQ.
I have had a look at http://activemq.apache.org/how-can-i-monitor-activemq.html, but the options provided there haven't helped much in my case, for the following reasons:
Using JConsole is not a nice option because it uses up a lot of server resources and slows the JVM down.
The StatisticsPlugin is a nice approach, but it doesn't provide a UI view and gets reset on every server restart. (This is what we will use if nothing else is found.)
I have also had a look at the similar SO question ActiveMQ: how to programmatically monitor embedded broker, but this is not what I want.
Recently I have heard about a tool called hawtio, but it also seems to be useful only when ActiveMQ is running as a standalone instance. (Please correct me if I am wrong on this; any pointers will definitely be helpful.)
So the help that I need is:
Is hawtio really useful for an embedded instance?
Are there any other tools available to achieve this goal?
Any help is really appreciated.
I have used hawtio; it is a very useful tool for monitoring ActiveMQ. Internally it uses JMX, attaching to your running Java process to monitor it.
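Since hawtio reads everything over JMX, you can also query the same MBeans yourself from inside the JVM that embeds the broker. A sketch using only the platform MBean server (the ActiveMQ object name in the comment assumes JMX is enabled on the embedded broker via useJmx; the query below uses the always-present memory MBean as a stand-in):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxPeek {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // An embedded ActiveMQ broker with JMX enabled registers MBeans such as:
        //   org.apache.activemq:type=Broker,brokerName=localhost
        // which you would read the same way as the platform MBean below.
        ObjectName memory = new ObjectName("java.lang:type=Memory");
        Object heap = mbs.getAttribute(memory, "HeapMemoryUsage");
        System.out.println("HeapMemoryUsage: " + heap);
    }
}
```

The same calls work remotely through a JMXConnector if you expose a JMX port on the daemon.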
I am considering using Java 6's embedded HTTP server for a sort of IPC with a Java daemon. It works pretty well, and it's nice that it's already bundled with every Java 6 installation: no need for additional libraries.
However, I would like to know whether someone has tried this in a production environment with heavy load. Does it perform well? Should I be looking at something more robust, such as Tomcat or Jetty?
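For context, a minimal sketch of the server in question (com.sun.net.httpserver, bundled with the JDK since Java 6; the path and pool size here are arbitrary). Note that without an explicit executor all requests are handled sequentially on a single background thread, so setting one is worthwhile:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DaemonIpc {
    // Start the built-in server on a free port with a small worker pool.
    static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.setExecutor(Executors.newFixedThreadPool(4));
        server.createContext("/status", exchange -> {
            byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }

    // Simple client call, standing in for the other side of the IPC.
    static String get(int port, String path) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:" + port + path).openConnection();
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = start();
        try {
            System.out.println(get(server.getAddress().getPort(), "/status")); // prints OK
        } finally {
            server.stop(0);
            ((ExecutorService) server.getExecutor()).shutdownNow();
        }
    }
}
```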
Well, as much as it saddens me to say bad things about Java, I'd really not recommend it for production use, or any kind of heavy-use scenario. Even though it works well for small stuff like unit/integration tests, it has big memory issues when used intensively, especially with a large number of simultaneous connections. I've had issues similar to the ones described here:
http://neopatel.blogspot.com/2010/05/java-comsunnethttpserverhttpserver.html
And Jetty is not that good for heavy production usage, for pretty much the same reason. I'd go with Tomcat if I were you.
As an alternative, you could consider the Java Message Service (JMS) for inter-process communication and just have a JMS server running (like ActiveMQ).
If you want something that ships with Java, have a look at RMI or RMI/IIOP.