Currently I have a Tomcat web server hosting multiple .war microservices (if it matters: Spring Boot applications). When upgrading an application, I use Tomcat's parallel deployment feature, adding myapp##005.war, myapp##006.war, etc. to get zero-downtime deployment.
I'd like to dockerize the applications. But what suits Java web service applications best?
Is it better to package a war file directly into the container, so that each redeployment requires a new Docker container? Or should Tomcat run as a container without applications and mount the war files from a shared host folder (thus allowing redeployment without rebuilding the image)?
I can think of the following 3 possibilities:
1. Run each war file as a jar instead, with embedded Tomcat, each in its own Docker container? Then each app is decoupled, but I can no longer use the parallel deployment feature, as I have to kill the jar before another can take its place. If this is the best approach, the question becomes: how could I still get zero-downtime deployment with Docker containers?
2. Run each war file in a standalone Tomcat, each in its own Docker container? Each app would then be decoupled and could still use parallel deployment. But I'd have to launch a full Tomcat server for each application in every Docker container, which might have a negative impact on host system performance?
3. Run a single standalone Tomcat as a Docker container and place all *.war files in a shared folder for deployment? Here I could still use the parallel deployment feature. But isn't this against the idea of Docker? Shouldn't the war application be packed inside the container? Performance and resource requirements would probably be best here, as this requires only a single Tomcat.
Which approach suits Java microservices?
Deployment using a single JAR per Docker container is definitely the best approach. As you mentioned, low coupling is something you want from a microservice. Rolling deployments, canary releases, etc. can easily be done with container orchestration tools like Docker Swarm and Kubernetes.
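For example, each service becomes its own image built from a minimal Dockerfile (base image and jar name are assumptions):

FROM openjdk:8-jre-alpine
# copy the Spring Boot fat JAR produced by the build
COPY target/myapp.jar /app/myapp.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/myapp.jar"]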
If you want to play around with these concepts, Docker Swarm is fairly easy:
In your compose file:
version: '3'
services:
  example:
    build: .
    image: example-image:1.0
    ports:
      - 8080:8080
    networks:
      - mynet
    deploy:
      replicas: 6
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
# the network referenced above must also be declared at the top level
networks:
  mynet:
The deploy section in your compose file is all Docker Swarm needs.
replicas tells Swarm that 6 instances of your application will be deployed in the swarm.
parallelism tells it to update 2 instances at a time (instead of all 6).
Between updates there will be a 10-second grace period (delay).
There are lots of other things you can do; take a look at the documentation.
If you update your service, there will be no downtime, as Docker Swarm will serve all requests through the 4 containers that will still be running.
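For example, deploying the stack and later rolling out a new image version (stack and service names are assumptions):

docker stack deploy -c docker-compose.yml mystack
# roll out a new version using the update_config policy above
docker service update --image example-image:1.1 mystack_example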
I don't recommend Docker Swarm in a production environment, but it's a great way to play around with the concepts of container orchestration.
Kubernetes' learning curve is quite steep. If you're in the cloud (AWS, for example), services like EKS and Fargate can take a lot of that complexity away for you.
Related
We have a Wildfly domain environment with 1 Wildfly master server and 2 Wildfly slave servers, each slave with 2 application instances.
We want now to transform the domain into a standalone environment so we will remain with 2 Wildfly standalone servers and we will decommission the Wildfly master server.
What will be the best way to approach this task?
Should we install Wildfly from scratch on the old Slaves and configure the standalone XML files or should we use the current installation?
Is there a way to convert/migrate all the parameters currently set at the domain level into the standalone files, or is this a manual task?
Also, can we run the 2 application instances on each standalone server?
Are the server-groups used in the standalone environment?
There is an operation 'read-config-as-xml' that should provide the XML of your servers.
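A sketch of how that might look with jboss-cli (the host name is an assumption; check how the operation is scoped on your WildFly version):

$JBOSS_HOME/bin/jboss-cli.sh --connect
# on a standalone server, dumps the full configuration as XML:
:read-config-as-xml
# in domain mode the operation can be addressed per host, e.g.:
/host=slave1:read-config-as-xml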
After some radical changes to our schema, and after reading some posts on why you should avoid in-memory databases, we have decided to use MySQL locally for testing and development, using a MySQL Docker container with a volume for persistence.
This is fairly straightforward; however, the issues we are having are the following:
It requires the container to be started separately from the Spring Boot application (a manual docker run).
The same goes for stopping the container; it's an independent process.
My question is essentially: is it possible to have Spring Boot (when using a dev config profile) manage this Docker container?
I.e. I start development work in IntelliJ and run the service; the service checks whether the container is running and, if not, starts it up.
If this idea is bad, then please let me know.
For testing it's not an issue, because we are using a Maven Docker plugin to create the container during the Maven lifecycle.
It's more for devs working locally and getting the service running locally with ease.
Any suggestions welcomed!
Bonus for Intellij setup!
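A minimal sketch of the idea described in the question, assuming the Docker CLI is on the PATH and the container was created once beforehand with docker run; the container name mysql-dev is hypothetical:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import javax.annotation.PostConstruct;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

@Component
@Profile("dev")
public class DevMySqlContainerManager {

    private static final String CONTAINER = "mysql-dev"; // hypothetical container name

    @PostConstruct
    public void ensureContainerRunning() throws Exception {
        // "docker inspect -f {{.State.Running}}" prints "true" if the container is up
        Process inspect = new ProcessBuilder(
                "docker", "inspect", "-f", "{{.State.Running}}", CONTAINER)
                .redirectErrorStream(true)
                .start();
        String state;
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(inspect.getInputStream()))) {
            state = reader.readLine();
        }
        inspect.waitFor();
        if (!"true".equals(state)) {
            // start the pre-created container (its volume keeps the data persistent)
            new ProcessBuilder("docker", "start", CONTAINER)
                    .inheritIO().start().waitFor();
        }
    }
}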
I have created a Spring Boot application and an Angular project (Angular 1) separately. Locally I am using npm to serve the client app, and it calls my back-end services. I am using embedded Tomcat in the Spring Boot application.
Now I want to host my application on a server. How do I do that?
Can I keep the embedded Tomcat and build a jar, or do I have to install a standalone Tomcat on the server and deploy my application as a war?
How do I configure my client code? For example, in GoDaddy I have pointed www.xyz.com to the IP xx.xx.xx.xx, which is my production cloud server. How do I route requests to the Angular application so that it can call the APIs exposed by the server?
I cannot have a single application that contains the client code; I have to work with two separate applications. Please help me deploy this in the best way. If embedded Tomcat doesn't help, I can install a standalone Tomcat on the server, build my app as a war, and deploy it.
The current best practice is to embed the servlet container (Tomcat, Jetty, other) into the artifact and build a fat JAR. The main advantage is a simplified deployment process: it's enough to push the fat JAR to the environment and execute it. Unlike the usual servlet-container-with-WAR deployment model, the embedded approach doesn't have to deal with additional configuration layers, e.g. thread pools or data sources shared between different WARs.
One example of how to build a fat JAR with an embedded servlet container is the spring-boot-starter-web dependency together with the spring-boot-maven-plugin repackage goal. In this setup it's enough to run mvn clean package (the repackage goal is bound to the package phase).
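For reference, the Maven side is just the plugin declaration in the pom.xml (the version is typically inherited from the Spring Boot parent POM):

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>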
If you are developing the web client code locally, you will most likely run into issues with the same-origin policy. You will most likely need a CORS filter; conveniently, one is provided by Tomcat.
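A minimal web.xml sketch using Tomcat's built-in CORS filter; the allowed origin (here a local npm dev server port) is an assumption:

<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <!-- the origin of the local npm dev server; adjust the port to your setup -->
    <param-name>cors.allowed.origins</param-name>
    <param-value>http://localhost:3000</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>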
My system is split across a 2- or 3-server architecture, and only one server hosts a webapp.
Since we use JavaMelody to monitor the webapp server: is there a way to use it on a regular, non-webapp JVM, and if so, how can we access the data?
I would say wrap your application in a webapp (a war file, for example) and then deploy it in a server like Tomcat or Jetty. By the way, Tomcat and Jetty can be embedded.
Otherwise, there is currently no way to do this.
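For reference, embedding JavaMelody in such a webapp is the classic filter registration in web.xml:

<filter>
  <filter-name>javamelody</filter-name>
  <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>javamelody</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>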
Assuming my clients are running my J2EE WAR application on their intranet, and I have an update for them... how do I push the updated war file to them?
I'd like it to be automatic and require no human interaction on the client's side.
Can this be done?
Any help would be appreciated.
Tomcat (if this is your target container...) offers a manager interface that will allow you to deploy/start/stop applications.
I have used both Ant and Maven tasks to great effect for deploying wars remotely, all while being built into the build process.
Depending on your deployment process, this may not work for you, but for dev & qa: highly recommended.
Edit: of course Tomcat has to be configured to allow this type of access (a user with the appropriate manager role in tomcat-users.xml).
See: Deployer how-to
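A sketch of the Ant side, using the Catalina Ant tasks that ship in Tomcat's lib directory (URL, credentials, and paths are placeholders; the manager path is /manager/text on Tomcat 7+, /manager on older versions):

<!-- the task classes live in catalina-ant.jar -->
<taskdef name="deploy" classname="org.apache.catalina.ant.DeployTask"
         classpath="${tomcat.home}/lib/catalina-ant.jar"/>
<deploy url="http://example.com:8080/manager/text"
        username="${manager.username}" password="${manager.password}"
        path="/myapp" war="file:${build.dir}/myapp.war" update="true"/>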
Glassfish has documentation on deployment here.
Ant tasks are also available here.
GlassFish uses Tomcat internally, but the Tomcat Manager is not available, as it is a separate application.
If the GlassFish admin console can be accessed, it can be used to upload and deploy war files.
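The same can be scripted from the command line with asadmin, which ships with GlassFish (the path is a placeholder):

# --force=true redeploys the application if it is already deployed
asadmin deploy --force=true /path/to/myapp.war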
I'm not sure if you're comfortable giving them access to your source code repository...even in read-only mode.
If you are, then you could script something up in Ant to check out the latest version of the source code (using the cvs task) and then build the .war file (using the war task).
The only trick would be automatically deploying it once the war has been built. Tomcat will automatically deploy applications copied into its webapps directory. For WebSphere, see this question and this question.
For other J2EE servers I don't know how it would be done.
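For the Tomcat case, a sketch of the final Ant steps (the directory properties are placeholders):

<!-- build the war, then drop it into Tomcat's autodeploy directory -->
<war destfile="${build.dir}/myapp.war" webxml="web/WEB-INF/web.xml">
  <classes dir="${classes.dir}"/>
  <fileset dir="web"/>
</war>
<copy file="${build.dir}/myapp.war" todir="${tomcat.home}/webapps"/>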