I use the jboss/kie-server-showcase image to run a KIE server, and I want to know how to configure the KIE smart router for this image. I pass KIE_SERVER_ROUTER as an environment variable when executing docker run:
docker run -p 8180:8080 -d --name kie-server -e KIE_SERVER_ROUTER=http://172.17.0.1:9000 --link jbpm-workbench:kie-wb jboss/kie-server-showcase:latest
but it doesn't work and the KIE server can't register with the KIE smart router.
I cloned https://github.com/jboss-dockerfiles/business-central and, in kie-server/showcase/etc, changed the start_kie-server.sh file by modifying JBOSS_ARGUMENTS:
JBOSS_ARGUMENTS=" -b $JBOSS_BIND_ADDRESS -Dorg.kie.server.id=$KIE_SERVER_ID -Dorg.kie.server.user=$KIE_SERVER_USER -Dorg.kie.server.pwd=$KIE_SERVER_PWD -Dorg.kie.server.location=$KIE_SERVER_LOCATION -Dorg.kie.server.router=http://172.17.0.1:9000"
That is, I added -Dorg.kie.server.router=http://172.17.0.1:9000 to JBOSS_ARGUMENTS, and the KIE server registered with the KIE smart router successfully. But I don't want to modify the original image just to register the KIE server.
Is there any way to register the KIE server, running as a Docker container, with the KIE smart router?
Hello, I had the same issue.
First, build the KIE router image.
You need the KIE smart router JAR; you can download it from the Red Hat site.
####### BASE ############
FROM jboss/base-jdk:8
####### ENVIRONMENT ############
ENV ROUTER_HOME=/opt/jboss \
KIE_SERVER_ROUTER_REPO_DIR=/opt/jboss \
ROUTER_HOSTNAME_EXTERNAL=http://localhost:9000/
# ENV KIE_SERVER_CONTROLLER
ENV KIE_SERVER_CONTROLLER_USER admin
ENV KIE_SERVER_CONTROLLER_PWD admin
####### EXPOSE ROUTER PORT ############
EXPOSE 9000
####### Kie server router CUSTOM CONFIGURATION ############
ADD kie-server-router.jar $ROUTER_HOME/kie-server-router.jar
ADD start_kie-server-router.sh $ROUTER_HOME/start_kie-server-router.sh
###### SWITCH USER root ######
USER root
###### CHANGE FILE PROPERTIES ######
RUN chmod +x $ROUTER_HOME/start_kie-server-router.sh
RUN chgrp -R 0 /opt/jboss
RUN chmod -R g+rw /opt/jboss
RUN find /opt/jboss -type d -exec chmod g+x {} +
####### CUSTOM JBOSS USER ############
# Switchback to jboss user
USER jboss
####### RUNNING Kie server router ############
WORKDIR $ROUTER_HOME
CMD ["./start_kie-server-router.sh"
start_kie-server-router.sh :
#!/usr/bin/env bash
# Default arguments for running the KIE Server router.
ROUTER_ARGUMENTS="-Dorg.kie.server.router.host=$HOSTNAME -Dorg.kie.server.router.port=9000 -Dorg.kie.server.router.repo=$KIE_SERVER_ROUTER_REPO_DIR -Dorg.kie.server.router.url.external=$ROUTER_HOSTNAME_EXTERNAL"
# Controller arguments for the KIE Server router. Only enabled if the environment variable is set or container linking is detected.
if [ -n "$KIE_SERVER_CONTROLLER" ]; then
echo "Using '$KIE_SERVER_CONTROLLER' as KIE server controller"
ROUTER_ARGUMENTS="$ROUTER_ARGUMENTS -Dorg.kie.server.controller=$KIE_SERVER_CONTROLLER -Dorg.kie.server.controller.user=$KIE_SERVER_CONTROLLER_USER -Dorg.kie.server.controller.pwd=$KIE_SERVER_CONTROLLER_PWD"
fi
echo "Router arguments '$ROUTER_ARGUMENTS' "
exec java -jar $ROUTER_ARGUMENTS $ROUTER_HOME/kie-server-router.jar
exit $?
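With the Dockerfile, kie-server-router.jar and start_kie-server-router.sh together in one directory, the router image can be built with a plain docker build; a minimal sketch, where the tag jboss/kie-smart-router:latest is simply chosen to match the compose file further down:
# build from the directory containing the Dockerfile, the router JAR and the start script
docker build -t jboss/kie-smart-router:latest .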
Then build the KIE server image where you will be using the smart router option.
Clone this repo: https://github.com/jboss-dockerfiles/business-central/tree/master/kie-server/showcase
In the Dockerfile, edit:
ENV KIE_SERVER_LOCATION http://kie-server:8080/kie-server/services/rest/server
In start_kie-server.sh, add:
# If Kie server router is configured then use it
if [ -n "$KIE_SERVER_ROUTER" ]; then
echo "Using '$KIE_SERVER_ROUTER' as KIE server router"
JBOSS_ARGUMENTS="$JBOSS_ARGUMENTS -Dorg.kie.server.router=$KIE_SERVER_ROUTER"
fi
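With those two edits in place, rebuild the showcase image from the cloned directory; a sketch, where the tag router-enabled is a hypothetical stand-in for {yourTag} used in the compose file below:
# rebuild the KIE server showcase image with the router support added above
cd business-central/kie-server/showcase
docker build -t jboss/kie-server-showcase:router-enabled .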
Finally, the docker-compose file:
business-central:
  image: jboss/business-central-workbench-showcase
  container_name: business-central
  ports:
    - 8080:8080
    - 8001:8001
kie-server:
  image: jboss/kie-server-showcase:{yourTag}
  container_name: kie-server
  environment:
    KIE_MAVEN_REPO: http://nexus:8081/repository/rules
    JVM_OPTS: -Xmx1g -Xms1g
    KIE_SERVER_CONTROLLER: http://business-central:8080/business-central/rest/controller
    KIE_SERVER_ROUTER: http://kie-smart-router:9000
  depends_on:
    - nexus
    - business-central
    - kie-smart-router
  ports:
    - 8180:8080
kie-smart-router:
  image: jboss/kie-smart-router:latest
  container_name: kie-smart-router
  environment:
    KIE_SERVER_CONTROLLER: http://business-central:8080/business-central/rest/controller
  depends_on:
    - business-central
  ports:
    - 9000:9000
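To bring the stack up and confirm that the registration happened, something like the following works; the grep is just one way to spot the router registration message in the KIE server log:
# start the whole stack in the background
docker-compose up -d
# follow the KIE server log and look for the router registration
docker logs -f kie-server 2>&1 | grep -i router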
Related
I'm trying to expose JMX in several Docker-based Flink task managers, using the --scale option of docker-compose (e.g. docker-compose -f docker/flink-job-cluster.yml up -d --scale taskmanager=2), for use with Java Mission Control.
My basic issue is that even though I have configured JMX with,
env.java.opts: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.host='0.0.0.0' -Djava.rmi.server.hostname='0.0.0.0'
env.java.opts.jobmanager: -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999
env.java.opts.taskmanager: -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999
Source: https://github.com/aedenj/apache-flink-kotlin-starter/blob/multiple-schemas/conf/flink/flink-conf.yaml
And then map the ports in the docker-compose file like so,
...
jobmanager:
  image: flink:1.14.3-scala_2.11-java11
  hostname: jobmanager
  ports:
    - "8081:8081" # Flink Web UI
    - "9999:9999" # JMX Remote Port
  command: "standalone-job"
....
taskmanager:
  image: flink:1.14.3-scala_2.11-java11
  depends_on:
    - jobmanager
  command: "taskmanager.sh start-foreground"
  ports:
    - "10000-10005:9999" # JMX Remote Port
  environment:
    - JOB_MANAGER_RPC_ADDRESS=jobmanager
.....
Source: https://github.com/aedenj/apache-flink-kotlin-starter/blob/multiple-schemas/docker/flink-job-cluster.yml
I am unable to connect Java Mission Control to the task managers using any of the ports, e.g. localhost:10000 or localhost:10001. However, I can connect to the job managers using localhost:9999, and I can only connect to a task manager if I explicitly set its port mapping separately rather than using the dynamic port range in the docker-compose file. Ideally the dynamic port mapping would work so I can continue to use the --scale option of docker-compose.
I have a docker-compose configuration file with the following settings:
version: '3.6'
services:
  mongo:
    image: mongo
    restart: always
    ports:
      - "27017:27017"
    networks:
      - devnetwork
  discovery:
    image: discovery
    ports:
      - "8761:8761"
    networks:
      - devnetwork
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - DISCOVERY_SERVER=discovery
      - DISCOVERY_PORT=8761
networks:
  devnetwork:
After bringing up the compose stack, I build and run the following Dockerfile:
FROM openjdk:12
#RUN echo $(grep $(hostname) /etc/hosts | cut -f1) api >> /etc/hosts
ADD gubee-middleware/target/*.jar /usr/share/middlewareservice/middleware-service.jar
EXPOSE 9670
ENTRYPOINT ["/usr/bin/java", "-XX:+UnlockExperimentalVMOptions","-XX:+UseContainerSupport","-Dspring.profiles.active=dev", "-jar", "/usr/share/middlewareservice/middleware-service.jar"]
How do I make the service built from this Dockerfile (listening on port 9670) see the other services that I start with docker-compose?
You need to attach the container to the network created by docker-compose when you start it.
This can be done by passing --network=<network-name> (obtained by running docker network ls) to your docker run command.
Once attached, your app will be able to reach the containers started by compose using their service names as hostnames, e.g. mongo:27017
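A minimal sketch of that flow, assuming the compose project directory is called gubee (compose prefixes the network name with the project directory by default, so the network would show up as gubee_devnetwork; use whatever docker network ls actually reports) and a hypothetical image tag of middleware-service:
# list the networks created by docker-compose; the one defined in the
# example compose file will appear as <project>_devnetwork
docker network ls
# build the middleware image and attach it to that network at run time
docker build -t middleware-service .
docker run -d --name middleware-service \
  --network=gubee_devnetwork \
  -p 9670:9670 \
  middleware-service
Once the container is on that network, the compose services are reachable by name from inside it, e.g. mongo:27017 and discovery:8761.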
I have 2 containers, one running a Java application and the second MongoDB.
If I run my Java app locally and MongoDB in a container, it connects, but if both run inside containers, the Java app can't connect to MongoDB.
The docker-compose file is as follows; am I missing something?
version: "3"
services:
user:
image: jboss/wildfly
container_name: "user"
restart: always
ports:
- 8081:8080
- 65194:65193
volumes:
- ./User/target/User.war:/opt/jboss/wildfly/standalone/deployments/User.war
environment:
- JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,address=0.0.0.0:65193,suspend=n,server=y -Djava.net.preferIPv4Stack=true
- MONGO_HOST=localhost
- MONGO_PORT=27017
- MONGO_USERNAME=myuser
- MONGO_PASSWORD=mypass
- MONGO_DATABASE=mydb
- MONGO_AUTHDB=admin
command: >
bash -c "/opt/jboss/wildfly/bin/add-user.sh admin Admin#007 --silent && /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0"
links:
- mongo
mongo:
image: mongo:4.0.10
container_name: mongo
restart: always
volumes:
- ./assets:/docker-entrypoint-initdb.d/
environment:
- MONGO_INITDB_ROOT_USERNAME=myuser
- MONGO_INITDB_ROOT_PASSWORD=mypass
ports:
- 27017:27017
- 27018:27018
- 27019:27019
Edit
I'm also confused about the following.
links:
- mongo
depends_on:
- mongo
As of July 2019, the official Docker documentation describes links as a legacy option that is not required for services to communicate: by default, any service can reach any other service at that service's name.
Source: https://docs.docker.com/compose/compose-file/#links
Solution #1: environment file sourced before start
Basically, we centralize all configuration in a file of environment variables and source it before docker-compose up.
The following approach helped me in these scenarios:
Your docker-compose.yml has several containers with complex dependencies between them
Some of the services in your docker-compose need to connect to another process on the same machine; this process could be a docker container or not
You need to share variables (hosts, passwords, etc.) between several docker-compose files
Steps
1.- Create one file to centralize configuration
This file could be named, for example, /env/company_environments (with or without an extension).
export MACHINE_HOST=$(hostname -I | awk '{print $1}')
export GLOBAL_LOG_PATH=/my/org/log
export MONGO_PASSWORD=mypass
export MY_TOKEN=123456
2.- Use the env variables in your docker-compose.yml
container A
app_who_needs_mongo:
  environment:
    - MONGO_HOST=$MACHINE_HOST
    - MONGO_PASSWORD=$MONGO_PASSWORD
    - TOKEN=$MY_TOKEN
    - LOG_PATH=$GLOBAL_LOG_PATH/app1
container B
app_who_needs_another_db_in_same_host:
  environment:
    - POSTGRESS_HOST=$MACHINE_HOST
    - LOG_PATH=$GLOBAL_LOG_PATH/app1
3.- Start up your containers
Just source the file before your docker-compose commands:
source /env/company_environments
docker-compose up -d
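To avoid forgetting the source step, both commands can be wrapped in a small script; a sketch, where up.sh is a hypothetical name and the env file path is the one from step 1:
#!/usr/bin/env bash
# up.sh - load the shared environment, then start the stack with any extra arguments
set -euo pipefail
source /env/company_environments
docker-compose up -d "$@"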
Solution #2: host.docker.internal
https://stackoverflow.com/a/63207679/3957754
Basically, use a Docker feature by which host.docker.internal resolves to the IP of the host machine on which your docker-compose started the containers.
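A sketch of what that looks like for the MongoDB case above, assuming Docker 20.10+ on Linux, where host.docker.internal has to be mapped explicitly (on Docker Desktop for Mac/Windows it resolves out of the box); the image and MONGO_* variable names just mirror the compose file from the question:
# run the database (or any other process) directly on the host, then point the
# containerized app at host.docker.internal instead of localhost
docker run -d --name user-app \
  --add-host=host.docker.internal:host-gateway \
  -e MONGO_HOST=host.docker.internal \
  -e MONGO_PORT=27017 \
  jboss/wildfly
In a compose file the equivalent is the extra_hosts key with the same host.docker.internal:host-gateway mapping.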
You probably can't connect because you set MONGO_HOST to localhost while mongo is a linked service.
In order to use the linked service's network, you must specify MONGO_HOST as the name of the service, mongo, like this:
MONGO_HOST=mongo
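A quick way to confirm the service name resolves on the compose network (a sketch; getent should be available in the jboss/wildfly base image, and the /dev/tcp redirection is a bash-only feature):
# the service name "mongo" should resolve to the mongo container's address
docker-compose exec user getent hosts mongo
# and the MongoDB port should be reachable from inside the "user" container
docker-compose exec user bash -c 'echo > /dev/tcp/mongo/27017 && echo mongo is reachable'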
I heard that if I'm using a Java-based Kafka consumer, I can expose JMX metrics from it by adding some parameters (heard that here and in some other posts).
My Kafka consumer runs inside a Docker container.
Here are the four parameters I'm adding:
-Dcom.sun.management.jmxremote.port=1100
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false
I'm adding them in the ENTRYPOINT of my Dockerfile, but JConsole can't connect to it.
Here are my Dockerfile and the related docker-compose service:
FROM openjdk:8u181-jre
ADD ./app /app
ENTRYPOINT [ "java", "-jar", "-Dcom.sun.management.jmxremote.port=1100", "-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.ssl=false", "-Dcom.sun.management.jmxremote.local.only=false", "/app/KafkaConsumer.jar" ]
jk_cons:
  build: ./micro_services/jk_Cons
  ports:
    - "1100:1100"
  volumes:
    - /neito/shared/linux_shared/historian/logging:/app
Can someone enlighten me on the way to expose Kafka consumer metrics through JMX?
Have a nice day
I am trying to run a UPnP service in my Docker container using the Cling UPnP library (http://4thline.org/projects/cling/). There is a simple program that creates a device (in software) that hosts some service. It is written in Java, and when I try to run the program I get the following exception (note: this runs perfectly fine directly on my Ubuntu machine):
Jun 5, 2015 1:47:24 AM org.teleal.cling.UpnpServiceImpl <init>
INFO: >>> Starting UPnP service...
Jun 5, 2015 1:47:24 AM org.teleal.cling.UpnpServiceImpl <init>
INFO: Using configuration: org.teleal.cling.DefaultUpnpServiceConfiguration
Jun 5, 2015 1:47:24 AM org.teleal.cling.transport.RouterImpl <init>
INFO: Creating Router: org.teleal.cling.transport.RouterImpl
Exception occured: org.teleal.cling.transport.spi.InitializationException: Could not discover any bindable network interfaces and/or addresses
org.teleal.cling.transport.spi.InitializationException: Could not discover any bindable network interfaces and/or addresses
at org.teleal.cling.transport.impl.NetworkAddressFactoryImpl.<init>(NetworkAddressFactoryImpl.java:99)
For anyone that finds this and needs the answer:
Your container is obscuring your external network. In other words, by default containers are isolated and cannot see the outer network, which is of course required in order to open the ports on your IGD.
You can run your container on the host network to make it non-isolated; simply add --network host to your container creation command.
Example (taken from https://docs.docker.com/network/network-tutorial-host/#procedure):
docker run --rm -d --network host --name my_nginx nginx
I have tested this using a docker-compose.yml, which looks a bit different:
version: "3.4"
services:
upnp:
container_name: upnp
restart: on-failure:10
network_mode: host # works while the container runs
build:
context: .
network: host # works during the build if needed
Version 3.4 is very important; network: host will not work otherwise!
My upnp container Dockerfile looks like so:
FROM alpine:latest
RUN apk update
RUN apk add bash
RUN apk add miniupnpc
RUN mkdir /scripts
WORKDIR /scripts
COPY update_upnp .
RUN chmod a+x ./update_upnp
# cron entry to refresh the UPnP mappings every 10 minutes
RUN echo -e "*/10\t*\t*\t*\t*\tbash /scripts/update_upnp 8080 TCP" >> /etc/crontabs/root
# on start, open the needed ports, then keep cron running in the foreground
# (a shell-form ENTRYPOINT ignores CMD, so crond is chained here instead of a separate CMD ["crond", "-f"])
ENTRYPOINT bash update_upnp 80 TCP && bash update_upnp 8080 TCP && crond -f
The update_upnp script simply uses upnpc (installed via the miniupnpc package in the Dockerfile above) to open the ports.
Hopefully this will help somebody.
Edit: Here is what the update_upnp script may look like:
#!/bin/bash
port=$1
protocol=$2
echo "Updating UPnP entry with port [$port] and protocol [$protocol]"
gateway=$(ip route | head -1 | awk '{print $3}')
echo "Detected gateway is [$gateway]"
# port - e.g. 80
# protocol - TCP or UDP
upnpc -e 'your-app-name' -r $port $protocol
echo "Done updating UPnP entry with port [$port] and protocol [$protocol]"