Setup JMX Metrics for Kafka Consumer Docker - java

I've read that if I'm using a Java-based Kafka consumer, I can expose JMX metrics from it by adding some JVM parameters (mentioned here and in some other posts).
My Kafka consumer runs inside a Docker container.
Here are the four parameters I'm adding:
-Dcom.sun.management.jmxremote.port=1100
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false
I'm adding them in the ENTRYPOINT of my Dockerfile, but JConsole can't connect.
Here are my Dockerfile and the related docker-compose service:
FROM openjdk:8u181-jre
ADD ./app /app
ENTRYPOINT [ "java", "-jar", "-Dcom.sun.management.jmxremote.port=1100", "-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.ssl=false", "-Dcom.sun.management.jmxremote.local.only=false", "/app/KafkaConsumer.jar" ]
jk_cons:
  build: ./micro_services/jk_Cons
  ports:
    - "1100:1100"
  volumes:
    - /neito/shared/linux_shared/historian/logging:/app
Can someone enlighten me on how to expose Kafka consumer metrics through JMX?
Have a nice day
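For reference, remote JMX runs over RMI, which by default opens a second, random port for the actual connection and advertises the container's internal hostname; both break through a Docker port mapping. A sketch of a Dockerfile that pins the RMI port to the same value and advertises a reachable hostname (using `localhost` assumes JConsole runs on the Docker host; substitute the host's address otherwise):

```dockerfile
FROM openjdk:8u181-jre
ADD ./app /app
# Pin the RMI port to the JMX port so the single 1100:1100 mapping suffices,
# and advertise an address the JMX client can actually reach.
ENTRYPOINT [ "java", \
  "-Dcom.sun.management.jmxremote", \
  "-Dcom.sun.management.jmxremote.port=1100", \
  "-Dcom.sun.management.jmxremote.rmi.port=1100", \
  "-Dcom.sun.management.jmxremote.authenticate=false", \
  "-Dcom.sun.management.jmxremote.ssl=false", \
  "-Dcom.sun.management.jmxremote.local.only=false", \
  "-Djava.rmi.server.hostname=localhost", \
  "-jar", "/app/KafkaConsumer.jar" ]
```

In JConsole you would then connect to localhost:1100 as a remote process.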

Related

Connect Java Mission Control to Flink App in Docker

I'm trying to expose JMX in several Docker-based Flink task managers, using the --scale option of docker-compose, for use with Java Mission Control (e.g. docker-compose -f docker/flink-job-cluster.yml up -d --scale taskmanager=2).
My basic issue is that even though I have configured JMX with:
env.java.opts: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.host='0.0.0.0' -Djava.rmi.server.hostname='0.0.0.0'
env.java.opts.jobmanager: -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999
env.java.opts.taskmanager: -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999
Source: https://github.com/aedenj/apache-flink-kotlin-starter/blob/multiple-schemas/conf/flink/flink-conf.yaml
And then map the ports in the docker-compose file like so,
...
jobmanager:
  image: flink:1.14.3-scala_2.11-java11
  hostname: jobmanager
  ports:
    - "8081:8081" # Flink Web UI
    - "9999:9999" # JMX Remote Port
  command: "standalone-job"
....
taskmanager:
  image: flink:1.14.3-scala_2.11-java11
  depends_on:
    - jobmanager
  command: "taskmanager.sh start-foreground"
  ports:
    - "10000-10005:9999" # JMX Remote Port
  environment:
    - JOB_MANAGER_RPC_ADDRESS=jobmanager
.....
Source: https://github.com/aedenj/apache-flink-kotlin-starter/blob/multiple-schemas/docker/flink-job-cluster.yml
I am unable to connect Java Mission Control to the task managers using any of the mapped ports, e.g. localhost:10000 or localhost:10001. However, I can connect to the job manager using localhost:9999, and I can only connect to a task manager if I set its port mapping explicitly, not via the dynamic port range in the docker-compose file. Ideally the dynamic mapping would work so I can keep using the --scale option of docker-compose.
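For context, the dynamic mapping fails because JMX runs over RMI: the server hands the client a stub containing the port configured via -Dcom.sun.management.jmxremote.rmi.port (9999 here), so even if JMC reaches a task manager on host port 10000, the follow-up RMI connection goes back to port 9999. One workaround is explicit per-replica services whose host and container JMX ports match; a sketch (the service name is invented, and the FLINK_PROPERTIES usage is an assumption based on the official Flink images, which append that variable to flink-conf.yaml):

```yaml
taskmanager1:
  image: flink:1.14.3-scala_2.11-java11
  command: "taskmanager.sh start-foreground"
  depends_on:
    - jobmanager
  environment:
    JOB_MANAGER_RPC_ADDRESS: jobmanager
    # Give this replica its own JMX/RMI port so the advertised
    # RMI port matches the host mapping below.
    FLINK_PROPERTIES: |
      env.java.opts: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=10000 -Dcom.sun.management.jmxremote.rmi.port=10000 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost
  ports:
    - "10000:10000" # host and container ports must match
```

This gives up the convenience of --scale, but each replica then advertises a port that is actually reachable from the host.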

Config kie smart router for kie server

I use the jboss/kie-server-showcase image to run a KIE server. I want to know how I can configure the KIE Smart Router for this image. I pass KIE_SERVER_ROUTER as an environment variable when executing docker run:
docker run -p 8180:8080 -d --name kie-server -e KIE_SERVER_ROUTER=http://172.17.0.1:9000 --link jbpm-workbench:kie-wb jboss/kie-server-showcase:latest
but it doesn't work and the KIE server can't register with the KIE Smart Router.
I cloned https://github.com/jboss-dockerfiles/business-central, and in kieserver/showcase/etc I changed the start_kie-server.sh file, modifying JBOSS_ARGUMENTS:
JBOSS_ARGUMENTS=" -b $JBOSS_BIND_ADDRESS -Dorg.kie.server.id=$KIE_SERVER_ID -Dorg.kie.server.user=$KIE_SERVER_USER -Dorg.kie.server.pwd=$KIE_SERVER_PWD -Dorg.kie.server.location=$KIE_SERVER_LOCATION -Dorg.kie.server.router=http://172.17.0.1:9000"
After adding -Dorg.kie.server.router=http://172.17.0.1:9000 to JBOSS_ARGUMENTS, the KIE server registered with the Smart Router successfully. But I don't want to modify the original image just to register the KIE server.
Is there any way to register a KIE server running as a Docker container with the KIE Smart Router?
Hello, I had the same issue.
First, build the KIE router image.
You need the KIE Smart Router jar, which you can download from the Red Hat site.
####### BASE ############
FROM jboss/base-jdk:8
####### ENVIRONMENT ############
ENV ROUTER_HOME=/opt/jboss \
KIE_SERVER_ROUTER_REPO_DIR=/opt/jboss \
ROUTER_HOSTNAME_EXTERNAL=http://localhost:9000/
# ENV KIE_SERVER_CONTROLLER
ENV KIE_SERVER_CONTROLLER_USER admin
ENV KIE_SERVER_CONTROLLER_PWD admin
####### EXPOSE ROUTER PORT ############
EXPOSE 9000
####### Kie server router CUSTOM CONFIGURATION ############
ADD kie-server-router.jar $ROUTER_HOME/kie-server-router.jar
ADD start_kie-server-router.sh $ROUTER_HOME/start_kie-server-router.sh
###### SWITCH USER root ######
USER root
###### CHANGE FILE PROPERTIES ######
RUN chmod +x $ROUTER_HOME/start_kie-server-router.sh
RUN chgrp -R 0 /opt/jboss
RUN chmod -R g+rw /opt/jboss
RUN find /opt/jboss -type d -exec chmod g+x {} +
####### CUSTOM JBOSS USER ############
# Switchback to jboss user
USER jboss
####### RUNNING Kie server router ############
WORKDIR $ROUTER_HOME
CMD ["./start_kie-server-router.sh"]
start_kie-server-router.sh :
#!/usr/bin/env bash
# Default arguments for running the KIE Execution server.
ROUTER_ARGUMENTS="-Dorg.kie.server.router.host=$HOSTNAME -Dorg.kie.server.router.port=9000 -Dorg.kie.server.router.repo=$KIE_SERVER_ROUTER_REPO_DIR -Dorg.kie.server.router.url.external=$ROUTER_HOSTNAME_EXTERNAL"
# Controller argument for the KIE Server router. Only enabled if set the environment variable/s or detected container linking.
if [ -n "$KIE_SERVER_CONTROLLER" ]; then
echo "Using '$KIE_SERVER_CONTROLLER' as KIE server controller"
ROUTER_ARGUMENTS="$ROUTER_ARGUMENTS -Dorg.kie.server.controller=$KIE_SERVER_CONTROLLER -Dorg.kie.server.controller.user=$KIE_SERVER_CONTROLLER_USER -Dorg.kie.server.controller.pwd=$KIE_SERVER_CONTROLLER_PWD"
fi
echo "Router arguments '$ROUTER_ARGUMENTS' "
exec java -jar $ROUTER_ARGUMENTS $ROUTER_HOME/kie-server-router.jar
exit $?
Then build the KIE server image where you will use the smart router option.
Clone this repo: https://github.com/jboss-dockerfiles/business-central/tree/master/kie-server/showcase
In the Dockerfile, edit:
ENV KIE_SERVER_LOCATION http://kie-server:8080/kie-server/services/rest/server
In start_kie-server.sh, add:
# If Kie server router is configured then use it
if [ -n "$KIE_SERVER_ROUTER" ]; then
echo "Using '$KIE_SERVER_ROUTER' as KIE server router"
JBOSS_ARGUMENTS="$JBOSS_ARGUMENTS -Dorg.kie.server.router=$KIE_SERVER_ROUTER"
fi
Finally, the docker-compose file:
business-central:
  image: jboss/business-central-workbench-showcase
  container_name: business-central
  ports:
    - 8080:8080
    - 8001:8001
kie-server:
  image: jboss/kie-server-showcase:{yourTag}
  container_name: kie-server
  environment:
    KIE_MAVEN_REPO: http://nexus:8081/repository/rules
    JVM_OPTS: -Xmx1g -Xms1g
    KIE_SERVER_CONTROLLER: http://business-central:8080/business-central/rest/controller
    KIE_SERVER_ROUTER: http://kie-smart-router:9000
  depends_on:
    - nexus
    - business-central
    - kie-smart-router
  ports:
    - 8180:8080
kie-smart-router:
  image: jboss/kie-smart-router:latest
  container_name: kie-smart-router
  environment:
    KIE_SERVER_CONTROLLER: http://business-central:8080/business-central/rest/controller
  depends_on:
    - business-central
  ports:
    - 9000:9000

How to have the pod created run an application (command and args) and at the same time have a deployment and service referring to it?

Context:
Tech: Java, Docker Toolbox, Minikube.
I have a java web application (already packaged as web-tool.jar) that I want to run while having all the benefits of kubernetes.
In order to instruct kubernetes to take the image locally I use an image tag:
docker build -t despot/web-tool:1.0 .
and then make it available for minikube by:
docker save despot/web-tool:1.0 | (eval $(minikube docker-env) && docker load)
The Dockerfile is:
FROM openjdk:11-jre
ADD target/web-tool-1.0-SNAPSHOT.jar app.jar
EXPOSE 1111
EXPOSE 2222
1. How can I have the pod created, run the Java application, and at the same time have a deployment and service referring to it?
1.1. Can I have a deployment created that will propagate a command and arguments when creating the pod? (best for me, as it ensures the deployment and service exist before the pod is created)
1.2. If 1.1 is not feasible, can I kubectl apply some pod configuration with a command and args to the already created deployment/pod/service? (worse, as it adds manual steps)
1.3. If 1.2 is not feasible, is it possible to create a deployment/service and attach it to an already running pod (one started with "kubectl run ... java -jar app.jar reg")?
What I tried is:
a) Have a deployment created (that automatically starts a pod) and exposed (service created):
kubectl create deployment reggo --image=despot/web-tool:1.0
With this, a pod is created with a CrashLoopBackoff state as it doesn't have a foreground process running yet.
b) Tried the following in the hope of the deployment accepting a command and args that will propagate to the pod creation (1.1.):
kubectl create deployment reggo --image=despot/web-tool:1.0 -- java -jar app.jar reg
The same outcome of the pod, as the deployment doesn't accept command and args.
c) Tried applying a pod configuration with a command and args after the deployment created the pod, so I ran the command from a), found the id (reggo-858ccdcddd-mswzs) of the pod with (kubectl get pods) and then I executed:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: reggo-858ccdcddd-mswzs
spec:
  containers:
  - name: reggo-858ccdcddd-mswzs
    command: ["java"]
    args: ["-jar", "app.jar", "reg"]
EOF
but I got:
Warning: kubectl apply should be used on resource created by either
kubectl create --save-config or kubectl apply The Pod
"reggo-858ccdcddd-mswzs" is invalid:
* spec.containers[0].image: Required value
* spec.containers: Forbidden: pod updates may not add or remove containers
which makes me think that I can't set the command by applying a command/args configuration to an existing pod.
Solution (using Arghya answer):
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reggo
spec:
  selector:
    matchLabels:
      app: reggo-label
  template:
    metadata:
      labels:
        app: reggo-label
    spec:
      containers:
      - name: reggo
        image: "despot/web-tool:1.0"
        command: ["java"]
        args: ["-jar", "app.jar", "reg"]
        ports:
        - containerPort: 1111
EOF
and executing:
kubectl expose deployment reggo --type=NodePort --port=1111
You could set the java -jar command as the ENTRYPOINT in the Dockerfile itself, which tells Docker to run the Java application:
FROM openjdk:11-jre
ADD target/web-tool-1.0-SNAPSHOT.jar app.jar
EXPOSE 1111
EXPOSE 2222
ENTRYPOINT ["java", "-jar", "app.jar", "reg"]
Alternatively, the same can be achieved via the command and args sections in a Kubernetes yaml:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  command: ["java"]
  args: ["-jar", "app.jar", "reg"]
Now, coming to the Forbidden: pod updates may not add or remove containers error: it happens because you are trying to modify the containers section of an existing Pod object, which is not allowed. Instead, export the entire deployment yaml, edit it to add the command section, delete the existing deployment, and finally apply the modified deployment yaml to the cluster.
kubectl get deploy reggo -o yaml --export > deployment.yaml
Delete the existing deployment via kubectl delete deploy reggo
Edit deployment.yaml to add right command
Apply the yaml to the cluster kubectl apply -f deployment.yaml
As already mentioned, the recommended approach is to add an ENTRYPOINT to your Dockerfile with the command you want to run.
However, if you want to create a deployment using a kubectl command, you can run:
kubectl run $DEPLOYMENT_NAME --image=despot/web-tool:1.0 --command -- java -jar app.jar reg
Additionally, if you want to expose the deployment with the same command, you can pass:
--expose=true, which creates a Service of type ClusterIP, and
--port=$PORT_NUMBER to choose the port on which it will be exposed.
To change the Service type to NodePort, run:
kubectl run $DEPLOYMENT_NAME --image=despot/web-tool:1.0 --expose=true --service-overrides='{ "spec": { "type": "NodePort" } }' --port=$PORT_NUMBER --command -- java -jar app.jar reg

Connect local container docker

I have a docker-compose configuration file with the following settings:
version: '3.6'
services:
  mongo:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    networks:
      - devnetwork
  discovery:
    image: discovery
    ports:
      - "8761:8761"
    networks:
      - devnetwork
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - DISCOVERY_SERVER=discovery
      - DISCOVERY_PORT=8761
networks:
  devnetwork:
After bringing up the compose stack, I build and run the following Dockerfile:
FROM openjdk:12
#RUN echo $(grep $(hostname) /etc/hosts | cut -f1) api >> /etc/hosts
ADD gubee-middleware/target/*.jar /usr/share/middlewareservice/middleware-service.jar
EXPOSE 9670
ENTRYPOINT ["/usr/bin/java", "-XX:+UnlockExperimentalVMOptions","-XX:+UseContainerSupport","-Dspring.profiles.active=dev", "-jar", "/usr/share/middlewareservice/middleware-service.jar"]
How do I make my service on port 9670 see the other services that I start with docker-compose?
You need to attach the container to the network created by docker-compose when you start it.
This can be done by passing --network=<network-name> (obtained by running docker network ls) to your docker run command.
Once attached, your app will be able to reach the containers started by compose using their service names as hostnames, e.g. mongodb://mongo:27017.
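A sketch of those steps with the Dockerfile above (the network name prefix depends on your compose project directory, so check docker network ls first; the image tag here is an assumption):

```shell
# List networks; compose names them <project>_devnetwork
docker network ls

# Build and run the middleware image on that network; other
# compose services are then reachable by their service names
docker build -t middleware-service .
docker run --network=myproject_devnetwork -p 9670:9670 middleware-service
```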

docker-compose java application connection to mongodb

Two containers: a Java application and MongoDB.
If I run my Java app locally and MongoDB in a container, it connects, but when both run inside containers, the Java app can't connect to MongoDB.
My docker-compose file is as follows; am I missing something?
version: "3"
services:
  user:
    image: jboss/wildfly
    container_name: "user"
    restart: always
    ports:
      - 8081:8080
      - 65194:65193
    volumes:
      - ./User/target/User.war:/opt/jboss/wildfly/standalone/deployments/User.war
    environment:
      - JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,address=0.0.0.0:65193,suspend=n,server=y -Djava.net.preferIPv4Stack=true
      - MONGO_HOST=localhost
      - MONGO_PORT=27017
      - MONGO_USERNAME=myuser
      - MONGO_PASSWORD=mypass
      - MONGO_DATABASE=mydb
      - MONGO_AUTHDB=admin
    command: >
      bash -c "/opt/jboss/wildfly/bin/add-user.sh admin Admin#007 --silent && /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0"
    links:
      - mongo
  mongo:
    image: mongo:4.0.10
    container_name: mongo
    restart: always
    volumes:
      - ./assets:/docker-entrypoint-initdb.d/
    environment:
      - MONGO_INITDB_ROOT_USERNAME=myuser
      - MONGO_INITDB_ROOT_PASSWORD=mypass
    ports:
      - 27017:27017
      - 27018:27018
      - 27019:27019
Edit
I'm also confused about the following.
links:
  - mongo
depends_on:
  - mongo
As of July 2019, the official Docker documentation notes that links is a legacy option; services on the same network can already reach each other by service name.
Source: https://docs.docker.com/compose/compose-file/#links
Solution #1 : environment file before start
Basically, we centralize all configuration in one file of environment variables and source it before running docker-compose up.
The following approach helped me in these scenarios:
Your docker-compose.yml has several containers with complex dependencies between them
Some of your services in your docker-compose needs to connect to another process in the same machine. This process could be a docker container or not.
You need to share variables (hosts, passwords, etc.) between several docker-compose files
Steps
1.- Create one file to centralize configurations
This file could be named /env/company_environments (with or without an extension).
export MACHINE_HOST=$(hostname -I | awk '{print $1}')
export GLOBAL_LOG_PATH=/my/org/log
export MONGO_PASSWORD=mypass
export MY_TOKEN=123456
2.- Use the env variables in your docker-compose.yml
container A
app_who_needs_mongo:
  environment:
    - MONGO_HOST=$MACHINE_HOST
    - MONGO_PASSWORD=$MONGO_PASSWORD
    - TOKEN=$MY_TOKEN
    - LOG_PATH=$GLOBAL_LOG_PATH/app1
container B
app_who_needs_another_db_in_same_host:
  environment:
    - POSTGRESS_HOST=$MACHINE_HOST
    - LOG_PATH=$GLOBAL_LOG_PATH/app1
3.- Start up your containers
Just add source before docker-compose commands:
source /env/company_environments
docker-compose up -d
Solution #2 : host.docker.internal
https://stackoverflow.com/a/63207679/3957754
Basically, this uses a Docker feature in which host.docker.internal resolves to the IP of the host machine on which your docker-compose containers are running.
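Applied to the example above, a sketch of a service reaching a process on the host via host.docker.internal (the extra_hosts entry with host-gateway is needed on Linux with Docker 20.10+; Docker Desktop resolves the name out of the box):

```yaml
app_who_needs_mongo:
  environment:
    # Resolves to the host machine from inside the container
    - MONGO_HOST=host.docker.internal
  extra_hosts:
    - "host.docker.internal:host-gateway"
```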
You probably can't connect because you set MONGO_HOST to localhost, while mongo is a separate linked service.
To use the linked service's network, you must set MONGO_HOST to the name of the service, mongo, like this:
MONGO_HOST=mongo
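Applied to the compose file in the question, only the host entry changes (everything else in the environment block stays as posted):

```yaml
user:
  environment:
    # Use the service name: each container has its own loopback,
    # so localhost inside `user` never reaches mongo.
    - MONGO_HOST=mongo
    - MONGO_PORT=27017
```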
