How to identify a terminating pod in Kubernetes using Java

I am using fabric8 to get the status of a Kubernetes pod with the following code:
KubernetesHelper.getPodStatusText(pod);
I deploy an app within a container, and there is a one-to-one mapping between a container and a pod. My requirement is to redeploy the application. So after deleting the pod I check its status, and the method returns "Running" while the pod is being deleted.
I am unable to tell that the pod has been deleted, since the newly deployed app also returns a status of "Running". Are there any other attributes of a pod that can be used to distinguish a healthy pod from a terminating one?

One way of doing this is to perform a rolling update. This ensures that your deployed application incurs no downtime (new pods are started before old pods are stopped). One caveat is that you must be using a replication controller or replica set to do so. Most rolling deployments simply involve updating the container's image to the new version of the software.
You can do this from Java via fabric8's Kubernetes client. Here's an example:
client.replicationControllers()
      .inNamespace("thisisatest")
      .withName("nginx-controller")
      .rolling()
      .updateImage("nginx");
You can change any part of the replication controller's configuration (replicas, environment variables, etc.). The call returns once the pods running the new version are Ready and the old replication controller and its pods have been stopped and deleted.
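If you do need to tell a terminating pod from a healthy one directly, the pod's metadata is more useful than its status text: Kubernetes stamps metadata.deletionTimestamp on a pod as soon as deletion is requested, while the phase can still read "Running" during the graceful-shutdown period. A minimal sketch with the fabric8 client (the helper class and method names are mine, not part of the API):

import io.fabric8.kubernetes.api.model.Pod;

public final class PodChecks {

    // True while the pod is shutting down: the API server sets
    // deletionTimestamp the moment a delete is requested, even though
    // the status may still report "Running" during the grace period.
    public static boolean isTerminating(Pod pod) {
        return pod.getMetadata() != null
                && pod.getMetadata().getDeletionTimestamp() != null;
    }
}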

Related

Java Docker container with embedded or standalone Tomcat?

Currently I have a Tomcat webserver that hosts multiple .war microservices (if it matters: Spring Boot applications). When upgrading an application, I use the Tomcat parallel deployment feature by adding myapp##005.war, myapp##006.war, etc., to get zero-downtime deployment.
I'd like to dockerize the applications. But what suits Java webservice applications best?
Is it better to package a war file directly into the container, so that each redeployment requires a new Docker container? Or should Tomcat run as a container without applications and mount the war files from a shared host folder (thus allowing redeployment without rebuilding the image)?
I could think of the following 3 possibilities:
run each war file as a jar instead, with embedded Tomcat, each as its own Docker container? Then each app is decoupled, but I cannot use the parallel deployment feature anymore, as I have to kill the jar before another can take its place. If this is the best approach, then the question is: how could I still get zero-downtime deployment with Docker containers?
run each war file in a standalone Tomcat, each as its own Docker container? Each app would then be decoupled and could also make use of parallel deployment. But I'd have to launch a dedicated Tomcat webserver for each application in every Docker container, which might have a negative impact on host system performance?
run a standalone Tomcat as a Docker container and place all *.war files in a shared folder for deployment? Here I could still use the parallel deployment feature. But isn't this against the idea of Docker? Shouldn't the war application be packed inside the container? Performance and resource requirements would probably be best here, as this requires only a single Tomcat.
Which approach suits Java microservices?
Deploying a single jar per Docker container is definitely the best approach. As you mentioned, low coupling is something you want from a microservice. Rolling deployments, canary releases, etc. can easily be done with container orchestration tools like Docker Swarm and Kubernetes.
If you want to play around with these concepts, Docker Swarm is fairly easy:
In your compose file:
version: '3'
services:
  example:
    build: .
    image: example-image:1.0
    ports:
      - "8080:8080"
    networks:
      - mynet
    deploy:
      replicas: 6
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
networks:
  mynet:    # top-level definition, so the network referenced by the service exists
The deploy section in your compose file is all Docker Swarm needs.
replicas means that 6 instances of your application will be deployed in the swarm
parallelism means that 2 instances will be updated at a time (instead of all 6 at once)
Between updates there is a 10-second grace period (delay)
There are lots of other things you can do; take a look at the documentation.
If you update your service, there will be no downtime, as Docker Swarm keeps serving all requests through the 4 containers that are still running while 2 are being updated.
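To try this end to end, you would deploy the stack and later point the service at a new image tag to trigger the rolling update described above; assuming the stack is named mystack, the standard Docker CLI commands look like:
docker stack deploy -c docker-compose.yml mystack
docker service update --image example-image:1.1 mystack_example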
I don't recommend Docker Swarm in a production environment, but it's a great way to play around with the concepts of container orchestration.
Kubernetes' learning curve is quite steep. If you're in the cloud (AWS, for example), services like EKS and Fargate can take a lot of that complexity away for you.

Kubernetes Java Spring microservices - generating a unique identifier for each container/service replica

I'm working with microservices in Java Spring MVC. With Kubernetes, the pods containing this microservice application logic can scale/replicate based on incoming load. In short, there can be 2 or more copies of my application running.
I need a mechanism that identifies the specific pod replica / container running the application. I was thinking of generating a random number at runtime and storing it as an identifier on the container. But I was wondering if there is a better way; considering that I am working with Spring, Tomcat and Kubernetes, I would expect some part of this stack to do something like this for me?
Kubernetes can do this. Each pod has a unique name that you can access as the hostname or through an environment variable. If you use a standard Deployment resource, though, this name changes when a pod dies and is recreated. It sounds like you want a StatefulSet, in which pods are assigned unique ordinal indexes and retain them when recreated - https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity
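As a concrete illustration, a minimal sketch of reading that identity from inside the container (plain Java, no Spring required; HOSTNAME is set by the container runtime in Linux containers):

import java.net.InetAddress;
import java.net.UnknownHostException;

public final class PodIdentity {

    // Kubernetes sets the pod name as the container's hostname, so for a
    // StatefulSet pod this yields a stable ordinal name like "myapp-0".
    public static String podName() {
        String fromEnv = System.getenv("HOSTNAME");
        if (fromEnv != null) {
            return fromEnv;
        }
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            throw new IllegalStateException("Cannot determine pod name", e);
        }
    }
}

Alternatively, the Downward API lets you inject metadata.name explicitly as an environment variable of your choosing, which avoids relying on the hostname.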

Triggering schedule job only once in multiple node environment

I am using Docker Swarm mode to run multiple instances of a Java (Spring Boot) application, and I would like to run a scheduled job twice a day, but it needs to be triggered by only one instance of the application.
Is there any mechanism to configure the Spring Boot application and Docker Swarm so that the scheduled task runs just once?
I've seen this property in Jive:
<property name="allNodes" value="false"/>
and now I am wondering if I can do something similar on my infrastructure.
The application instances are on the same network, so I suppose network discovery wouldn't be a problem.
You can designate one node as the master, so the scheduled job runs only on the master node. On failure, another node is promoted to master and becomes eligible to run the job.
Or you can use a distributed lock (Hazelcast supports distributed locks). Every node calls tryLock(), and the node that wins is allowed to run the job.
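A minimal sketch of the distributed-lock variant with Spring's scheduler and Hazelcast 3.x (the class name, cron expression and lock name are illustrative; @EnableScheduling must be present somewhere in your configuration, and Hazelcast 4+ replaces ILock with the CP Subsystem's FencedLock):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class TwiceDailyJob {

    // One embedded member per application instance; instances on the same
    // network discover each other and form a single cluster.
    private final HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();

    // Fires on every instance at 06:00 and 18:00, but only the instance
    // that wins the cluster-wide lock actually does the work.
    @Scheduled(cron = "0 0 6,18 * * *")
    public void runTwiceADay() {
        ILock lock = hazelcast.getLock("twice-daily-job");
        if (lock.tryLock()) { // non-blocking: the losers simply skip this run
            try {
                doWork();
            } finally {
                lock.unlock();
            }
        }
    }

    private void doWork() {
        // job logic goes here
    }
}

Hazelcast releases a lock automatically if the member holding it leaves the cluster, so a crashed winner does not block future runs.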

JBPM6: How to resume a process from the last successful node after the server crash?

I'm trying to implement a failover strategy for executing jBPM 6 processes. My setup is the following:
I'm using jBPM 6.2.0.Final (the latest stable release) with persistence enabled
I construct an instance of org.kie.spring.factorybeans.RuntimeManagerFactoryBean with type SINGLETON to get a KSession to start/abort processes and complete/abort work items
all beans are wired by Spring 3.2
DB2 is used as the database engine
I use Tomcat 7.0.27
In the positive scenario everything works as I expect. But I would like to know how to resume a process in the case of a server crash. To reproduce it, I started my process (described as a BPMN2 file), got to some middle step, and killed the Tomcat process. After that I see the uncompleted process instance in the PROCESS_INSTANCE_INFO table and the uncompleted work item in the WORK_ITEM_INFO table. There is also a session in the SESSION_INFO table.
My question is: could you show me example code that would take that remaining process and resume it from the last successful node (if that is possible)?
Update
I forgot to mention that I'm not using jbpm-console; I'm embedding jBPM in my Java EE application.
If you initialize your RuntimeManager on startup of your application server, it should take care of reloading and resuming the processes.
You need not worry about reloading them yourself.
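For illustration, here is a minimal sketch of that initialization using the plain jBPM 6 API (the persistence unit, process file and manager identifier are placeholders; with kie-spring, RuntimeManagerFactoryBean does the equivalent). Because the manager is built against the same persistence unit, the uncompleted instances in PROCESS_INSTANCE_INFO and WORK_ITEM_INFO are picked up rather than started from scratch:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.kie.api.io.ResourceType;
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.internal.io.ResourceFactory;

public class JbpmBootstrap {

    public RuntimeManager init() {
        // Same persistence unit the crashed server used, so jBPM sees the
        // pending process instances and work items in the database.
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

        RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
                .newDefaultBuilder()
                .entityManagerFactory(emf)
                .addAsset(ResourceFactory.newClassPathResource("myProcess.bpmn2"),
                          ResourceType.BPMN2)
                .get();

        // SINGLETON manager, matching the RuntimeManagerFactoryBean setup.
        return RuntimeManagerFactory.Factory.get()
                .newSingletonRuntimeManager(environment, "my-runtime");
    }
}

The process does not re-execute completed nodes: it continues waiting at the node whose work item was left unfinished, and completing that work item (ksession.getWorkItemManager().completeWorkItem(id, results)) moves it forward.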

Integration tests with Oracle Coherence

We have a set of integration tests that use Oracle Coherence. All of them use the same config, and the problem is that when they run in parallel, their Coherence nodes join into one cluster and one test can affect the others. Is there a simple way to prevent this joining?
Thanks!
We use LittleGrid in our tests rather than start Coherence natively. You can programmatically set up the grid and set the configuration.
For creating different clusters on a single machine for testing, you can use different tangosol-override config files. Just keep a tangosol-override file on the classpath of each cluster, give the clusters different names, and specify a different multicast address for each (not strictly mandatory). If you are using Coherence 12c, you can also create separate managed clusters in a single WebLogic Server domain.
When you start a Coherence node, it reads the tangosol-override file and issues multicast messages to the address mentioned in the file. When it doesn't find any other node or cluster with the same cluster name, it starts its own cluster and identifies itself as the senior (master) node.
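If you prefer not to maintain a separate override file per cluster, the same values can be set programmatically, since the override entries map to standard system properties; a sketch (the property names are the standard pre-12c tangosol.* ones, and the port is an arbitrary per-test value):

import java.util.UUID;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class IsolatedCoherenceTest {

    // Must run before the first CacheFactory call in the test JVM.
    public static NamedCache startIsolatedCache(String cacheName, int clusterPort) {
        // A unique cluster name keeps this test's node from joining
        // nodes started by tests running in parallel.
        System.setProperty("tangosol.coherence.cluster", "it-" + UUID.randomUUID());
        // A per-test multicast port adds another layer of separation.
        System.setProperty("tangosol.coherence.clusterport", String.valueOf(clusterPort));
        // TTL 0 keeps the multicast traffic on the local machine.
        System.setProperty("tangosol.coherence.ttl", "0");
        return CacheFactory.getCache(cacheName);
    }
}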
