Development only with testing - never actually running the app - good practice? - java

I just started on a new team. My task right now is to migrate an old application to their new microservice framework (mostly Java). It runs in quite a complex environment: it consumes other web services, exchanges messages via several queues, and provides a public API for other services to use.
The new thing for me is that their development process doesn't involve actually running the application even once. Instead they rely solely on testing: unit tests and integration tests. For the integration tests they employ a couple of Docker containers that mimic the actual production system. There is no staging system - only the production system, to which I have no access.
To me it feels weird to never actually run the application. Is this common? Have you encountered the same? Is this considered good practice?
Edit: For this system it is not critical that it runs 24/7 without interruption. An outage after a new deployment is not nice, but if it is monitored and handled (e.g. by switching back to an older version) it's not a big problem.

Related

Spring Boot microservice production environment?

Let us assume an application has 10+ Spring Boot microservices. Which is the best way of deploying to a production environment, given the two options below?
Using the embedded server per service, run through java -jar xyz.jar?
Using an external application server (JBoss or Tomcat), with services running on their own ports?
The recommended approach is the 1st option, because:
You can use any of the lightweight servers, e.g. Undertow.
You can dockerize it and scale up as needed.
It is easy to maintain and saves time and money.
The 2nd option has some limitations:
You are not using Spring Boot's self-contained deployment feature.
Multiple applications may be deployed to the same server, which can make your application slower.
Generally option 1 is preferable if you are on modern infrastructure; however, there is no "best way" to do deployments. Both approaches have trade-offs:
Option 1 gives you better isolation, and when implemented with containers or PaaS it allows for immutable deployments, which is a big improvement when it comes to testing. The downside is a more complicated deployment process, which should be automated, and higher server resource consumption.
Option 2 usually simplifies the architecture and is better suited to a manual deployment process. If your organization hasn't bought into Continuous Delivery, this approach will be much easier to live with. The downside is that over time the application servers on different environments (DEV, QAT, PROD) will usually end up with different configuration setups, making testing much harder.
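The "embedded server" idea behind option 1 can be sketched using only the JDK's built-in HttpServer; Spring Boot's embedded Tomcat/Undertow works analogously, but the class and endpoint names here are illustrative, not the actual Spring Boot API:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative only: a service that owns its own HTTP listener, the way a
// Spring Boot jar owns its embedded Tomcat/Undertow.
public class EmbeddedServerDemo {

    // Start a server on an ephemeral port, call its /health endpoint, stop it.
    static String healthCheck() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(
                            URI.create("http://localhost:" + port + "/health")).build(),
                    HttpResponse.BodyHandlers.ofString());
            return resp.body();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        // "java -jar xyz.jar" boils down to exactly this: main() brings up
        // the server; no external application server is involved.
        System.out.println(healthCheck());
    }
}
```

Because the listener lives inside the process, the service is a single deployable unit, which is what makes it easy to wrap in a container image and scale horizontally.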

Docker containerized monolithic application

We have just started a new project using Spring Boot, which will have a monolithic architecture. There is some talk about using Docker to containerize the application.
Are there any benefits other than easier deployment across different platforms?
I would also like to understand whether auto scaling applies here. If yes, how?
Thanks in advance!
I would just like to add that lots of people tend to focus on deployment, but the advantages for cyber security are enormous. The process isolation, within what could be seen as an advanced jail, could by itself make the case for Docker.
Another advantage is how it complements your CI/CD efforts and methodologies. By including image building in the app build process, you get better control of the overall process, including a better view of the cycles.
Besides that, you also expand the number of ecosystems where your application can be easily installed and run. With the added support of a swarm or Kubernetes, you get access to the current hardened and managed cloud solutions.
With a stretch we can talk about scalability, if your image is meant to cooperate with replicas of itself, or if you put your containers where the hardware itself is elastic. Scalability also comes into the discussion when you use means to control hardware usage in order to prevent services from competing for resources. This is also true if you do not have a cluster, as you can also manage hardware usage within a single host.
Now it really depends on your needs and niche. Some environments, for instance, would benefit in obvious ways even if scalability is not a concern. The internal networks you can create, for instance, are an excellent reason to adopt Docker: you get process isolation and network isolation inside a small host. Of course Docker is not meant to be a cybersec solution, but it adds to the ones you already have.
I think it really depends on the scale of your application. The main benefit will certainly be ease of deployments and development, either on premise or on a cloud provider.
If you are running other applications along with Spring, like a database, cache server or other applications, you should have a look at docker-compose. It would really simplify not just the deployment of the Spring app, but also of all its dependencies.
Docker could also help a lot if you plan to scale your application to multiple nodes, using Docker Swarm.
As for autoscaling, it is not really supported by docker out of the box, but you could achieve it with other tools on top of docker swarm.

How to work on a single-gateway-based microservice in a team?

We are developing a microservice-based application using JHipster. For that, several components must run at the same time, i.e. the service registry, the UAA server, the gateway, and various other services. Running all these components on my PC consumes all of its resources (16 GB of RAM). Other developers don't have sufficient resources on their PCs, which is why we are facing problems with continuous development in the team.
So we are looking for options to solve this problem and make our development team more efficient.
Currently, if someone wants to add or change a feature in the application, they need to work with both a microservice and the gateway (for the frontend).
So what happens in this case? Suppose multiple developers are working on the gateway and a service at the same time in the development environment.
How are they going to debug and test? Do they each have to deploy the gateway individually?
We are planning to deploy the microservices on our own VPS server, and in the near future Heroku, Kubernetes, Jenkins, or Cloud Foundry may be used for production.
Correct me if I am wrong and is there any better option for smooth development?
I had read in Sam Newman's microservices book about the problem of single-gateway-based applications during development. Now I am very curious how JHipster resolves this problem.
It seems to me that you are trying to work with your microservices as if they were a monolith. One of the most important features of the microservice architecture is the ability to work on each microservice independently, which means that you do not need to run the whole infrastructure to develop a feature. Imagine a developer at Netflix who needs to run several hundred microservices on their PC to develop a feature - that would be crazy.
Microservices are all about ownership. Usually, different teams work on different microservices, or on some set of microservices, which makes it really important to build a good test design to make sure that whenever all the microservices are up and running as one cohesive system, everything works as expected.
All that said, when you are developing your microservice you don't have to rely on other microservices. Instead, you are better off mocking all the interactions with other microservices and writing tests to check whether your microservice is doing what it has to do. I would suggest WireMock, as it has out-of-the-box support for Spring Boot. Another reason is that it is also supported by Spring Cloud Contract, which enables another powerful technique called Consumer-Driven Contracts, making it possible to ensure that the contract between two microservices is not broken at any given time.
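As a plain-JDK sketch of that stubbing idea (WireMock gives you the same with request matching, verification, and Spring Boot integration; the downstream "billing service" and its /invoice path here are hypothetical):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: test our microservice's client code against a local stub that
// plays the role of a downstream service, so nothing else needs to run.
public class MockedDownstreamTest {

    // The code under test: our service's call to the downstream service.
    static String fetchInvoiceStatus(String baseUrl) throws Exception {
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(baseUrl + "/invoice/42")).build(),
                HttpResponse.BodyHandlers.ofString());
        return resp.body();
    }

    static String runTest() throws Exception {
        // Stand up a stub on an ephemeral port with a canned response.
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/invoice/42", exchange -> {
            byte[] body = "PAID".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        stub.start();
        try {
            String baseUrl = "http://localhost:" + stub.getAddress().getPort();
            return fetchInvoiceStatus(baseUrl); // no real downstream needed
        } finally {
            stub.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTest());
    }
}
```

The point is that the developer's machine only ever runs one microservice plus a lightweight stub, not the whole system.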
These are integration tests (they run at the microservice level) and they give very fast feedback because of the mocks; on the other hand, you can't guarantee that your application works fine after running all of them. That's why there should be another category of more coarse-grained tests, aka end-to-end tests. These tests should run against the whole application, meaning that all of the infrastructure must be up and running and ready to serve your requests. This kind of test is usually performed automatically by your CI, so you do not need all the microservices running on your PC. This is where you can check whether your API gateway works fine with the other services.
So, ideally, your test design should follow the test pyramid: the more coarse-grained the tests, the fewer of them should be kept within the system. There is no silver bullet when it comes to proportions; rough numbers are:
Unit tests - 65%
Integration tests - 25%
End-to-end tests - 10%
P.S.: Sorry for not answering your question directly; I had to use the analogy with tests to make it clearer. All I wanted to say is that, in general, you don't need to take care of the whole system while developing. You need to take care of your particular microservice and mock out all the interactions with other services.
Developing using a shared gateway makes little sense, as it means that you cannot use the webpack dev server to hot-reload your UI changes. Gateways and microservices can run without the registry: just use local application properties and define static Zuul routes. If your microservices are well defined, most developers will only need to run a small subset of them to develop new features or fix bugs.
The UAA server can be shared, but alternatively you can create a security configuration that simulates authentication, activated through a specific profile. This way, when a developer works on a single web service, she can test it with a simple REST client like curl or Swagger without having to worry about tokens.
Another possibility, if you want to share the registry, is to assign a Spring profile per developer, but that might be overkill compared to the above approach.

Advice deploying war files vs executable jar with embedded container

There seems to be a current trend in the Java space to move away from deploying Java web applications to a servlet container (or application server) in the form of a war file (or ear file), and instead to package the application as an executable jar with an embedded servlet/HTTP server like Jetty. And I mean this more in the way newer frameworks are influencing how new applications are developed and deployed, rather than how applications are delivered to end users (because, for example, I get why Jenkins uses an embedded container - very easy to grab and go). Examples of frameworks adopting the executable jar option:
Dropwizard, Spring Boot, and Play (well, it doesn't run on a servlet container, but the HTTP server is embedded).
My question is: coming from an environment where we have deployed our (up to this point mostly Struts2) applications to a single Tomcat application server, what changes, best practices, or considerations need to be made if we plan on using an embedded container approach? Currently we have about 10 homegrown applications running on a single Tomcat server, and for these smallish applications
the ability to share resources and be managed on one server is nice. Our applications are not intended to be distributed to end users to run within their environment. However, moving forward, if we decide to leverage a newer Java framework, should this approach change? Is the shift to executable jars spurred on by the increasing use of cloud deployments (e.g. Heroku)?
If you've had experience managing multiple applications in the Play style of deployment versus traditional war file deployment on a single application server, please share your insight.
An interesting question. This is just my view on the topic, so take everything with a grain of salt. I have occasionally deployed and managed applications using both servlet containers and embedded servers. I'm sure there are still many good reasons for using servlet containers but I will try to just focus on why they are less popular today.
Short version: Servlet containers are great to manage multiple applications on a single host but don't seem very useful to manage just one single application. With cloud environments, a single application per virtual machine seems preferable and more common. Modern frameworks want to be cloud compatible, therefore the shift to embedded servers.
So I think cloud services are the main reason for abandoning servlet containers. Just like servlet containers let you manage applications, cloud services let you manage virtual machines, instances, data storage and much more. This sounds more complicated, but with cloud environments, there has been a shift to single app machines. This means you can often treat the whole machine like it is the application. Each application runs on a machine with appropriate size. Cloud instances can pop up and vanish at any time which is great for scaling. If an application needs more resources, you create more instances.
Dedicated servers, on the other hand, are usually powerful but of a fixed size, so you run multiple applications on a single machine to maximize the use of resources. Managing dozens of applications - each with their own configurations, web servers, routes, connections, etc. - is not fun, so using a servlet container helps you keep everything manageable and yourself sane. It is harder to scale, though. Servlet containers in the cloud don't seem very useful. They would have to be set up for each tiny instance, without providing much value, since they would only manage a single application.
Also, clouds are cool and non-cloud stuff is boring (if we still believe the hype). Many frameworks try to be scalable by default, so that they can easily be deployed to the clouds. Embedded servers are fast to deploy and run so they seem like a reasonable solution. Servlet containers are usually still supported but require a more complicated set up.
Some other points:
The embedded server could be optimized for the framework or be better integrated with the framework's tooling (like the Play console, for example).
Not all cloud environments come with customizable machine images. Instead of writing initialization scripts to download and set up servlet containers, using dedicated software for cloud application deployments is much simpler.
I have yet to find a Tomcat setup that doesn't greet you with a PermGen space error every few redeployments of your app. Taking a bit longer to (re)start embedded servers is no problem when you can almost instantly switch between staging and production instances without any downtime.
As already mentioned in the question, it's very convenient for the end user to just run the application.
Embedded servers are portable and convenient for development. Today everything is rapid, prototypes and MVPs need to be created and delivered as fast as possible. No one wants to spend too much time setting up an environment for every developer.

Proxy for test automation and eventual regression testing, or other ideas

I was wondering if anyone could recommend some existing software for working around a slow shared mainframe that is being connected to in a testing and development environment. Recently I've been refactoring some webpages that are dependent on this server, and I'm stuck with massive delays from making the same queries. Ideally I want to plug a solution in between the test mainframe and the server on the development system. For the record, I've been told it's a quirk of the test environment and bears no relation to the performance of the production systems.
My initial thought would be using a caching proxy to generate automated responses to commonly used paths. Ideally, if done right, having access to both sets of data could eventually lead to a regression suite.
I guess I'm hoping for existing solutions along these lines, or alternate ideas that I might have missed. I'm going to try it as a solution first, and since it will be limited to a dev environment, there should be room for flexibility.
The current development environment tends to be java and windows machines, but linux machines are available and with a clean solution, the technology behind it shouldn't matter as much.
tl;dr: I'm wondering about the easiest way to set up a caching proxy to limit repeated interactions with a slow server, ideally with a means to access the cached results for later regression testing.
There is distributed cache software available on the market, e.g. Oracle Coherence or TIBCO ActiveSpaces. It is typically used with an RDBMS, but you can also couple it with mainframe adapters and integration servers from the same vendors.
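As a minimal sketch of the record-and-replay idea from the question (the class name and string-based request/response shapes are hypothetical; a real proxy would sit at the HTTP or protocol layer):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical record-and-replay cache: the first call for a given query
// goes to the slow backend; later identical queries are served from the
// cache. The recorded map doubles as fixtures for a regression suite.
public class RecordingCache {
    private final Map<String, String> recorded = new ConcurrentHashMap<>();
    private final Function<String, String> backend;

    public RecordingCache(Function<String, String> backend) {
        this.backend = backend;
    }

    // Hit the backend only on a cache miss; replay the recording otherwise.
    public String query(String request) {
        return recorded.computeIfAbsent(request, backend);
    }

    // Snapshot of everything recorded so far, e.g. to serialize as test data.
    public Map<String, String> recordings() {
        return Map.copyOf(recorded);
    }
}
```

Wrapping the mainframe client in something like this would cut the repeated-query delays immediately, and dumping recordings() after a session gives you the canned responses a later regression suite could replay.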
