Cannot connect to Wildfly in Dockerfile - java

I'm creating a custom Dockerfile that extends the official Keycloak Docker image. I want to change the web-context and add some custom providers.
Here's my Dockerfile:
FROM jboss/keycloak:7.0.0
COPY startup-config.cli /opt/jboss/tools/cli/startup-config.cli
RUN /opt/jboss/keycloak/bin/jboss-cli.sh --connect --controller=localhost:9990 --file="/opt/jboss/tools/cli/startup-config.cli"
ENV KEYCLOAK_USER=admin
ENV KEYCLOAK_PASSWORD=admin
and the startup-config.cli file:
/subsystem=keycloak-server/:write-attribute(name=web-context,value="keycloak/auth")
/subsystem=keycloak-server/:add(name=providers,value="module:module:x.y.z.some-custom-provider")
But unfortunately I receive the following error:
The controller is not available at localhost:9990: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed: Connection refused
The command '/bin/sh -c /opt/jboss/keycloak/bin/jboss-cli.sh --connect --controller=localhost:9990 --file="/opt/jboss/tools/cli/startup-config.cli"' returned a non-zero code: 1
Is it a matter of invalid localhost? How should I refer to the management API?
Edit: I also tried with ENTRYPOINT instead of RUN, but the same error occurred during container initialization.

You are trying to have WildFly load your custom config file at build time. The trouble is that the WildFly server is not running while the Dockerfile is being built.
WildFly actually already has you covered here: there is built-in support for automatically loading custom config. You simply need to put your config file in a "magic location" inside the image.
You need to drop your config file here:
/opt/jboss/startup-scripts/
So that your Dockerfile looks like this:
FROM jboss/keycloak:7.0.0
COPY startup-config.cli /opt/jboss/startup-scripts/startup-config.cli
ENV KEYCLOAK_USER=admin
ENV KEYCLOAK_PASSWORD=admin
Excerpt from the keycloak documentation:
Adding custom script using Dockerfile
A custom script can be added by creating your own Dockerfile:
FROM keycloak
COPY custom-scripts/ /opt/jboss/startup-scripts/
Now you can simply start the image, and the built-in feature in Keycloak (really a WildFly feature) will look for config scripts in that specific directory and attempt to load them.
Edit from comment with final solution:
While the original answer solved the problem of getting configuration to the server at all, an issue remained with the content of the script. The following error was received when starting the container:
=========================================================================
Executing cli script: /opt/jboss/startup-scripts/startup-config.cli
No connection to the controller.
=========================================================================
The issue turned out to be in the startup-config.cli script itself: the JBoss CLI command embed-server, needed to start an embedded server and connect to it, was missing, as was the closing stop-embedded-server command. More about configuring JBoss this way is in the docs here: CHAPTER 8. EMBEDDING A SERVER FOR OFFLINE CONFIGURATION
The final script:
embed-server --std-out=echo
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheThemes,value=false)
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheTemplates,value=false)
stop-embedded-server

WildFly management interfaces are not available when building the Docker image. Your only option is to start the CLI in embedded mode, as discussed here: Running CLI commands in WildFly Dockerfile.
A more advanced approach is to use the S2I installation scripts to trigger CLI commands.

What causes this dependabot issue?

I am following this: https://bobbybouwmann.nl/blog/dependabot-on-gitlab to try to get Dependabot working with GitLab. I get the following error:
Using docker image sha256:bc6c0ffef6650bcfbb0afd5a07b813b5ccf1d00ecddccadb85123c6ee57a7995 for docker with digest docker#sha256:63107bd6ce718f130bb2668704239467b74f034c446f9e9c4ae1ffa5ddf4e3dd ...
$ docker build -t "dependabot/dependabot-script" -f Dockerfile .
error during connect: Post "http://docker:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=dependabot%2Fdependabot-script&target=&ulimits=null&version=1": dial tcp: lookup docker on 172.31.0.2:53: no such host
So I checked, and the specified Docker image does not exist: https://hub.docker.com/u/dependabot
But if I use a different, publicly available image, I get this:
Executing "step_script" stage of the job script
00:03
Using docker image sha256:ed97757f85d791b7e0a967622f0d671b810d1ad45aef30d5314dcaef94e7c457 for sethjones/dependabot-script with digest sethjones/dependabot-script#sha256:209a0952bfeb845f67f2eeb9a647c25e058bbae1b9b0e343d7891884840f17e0 ...
/usr/local/lib/site_ruby/2.7.0/rubygems/core_ext/kernel_require.rb:85:in `require': cannot load such file -- dependabot/file_fetchers (LoadError)
from /usr/local/lib/site_ruby/2.7.0/rubygems/core_ext/kernel_require.rb:85:in `require'
from ./generic-update-script.rb:4:in `<main>'
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
I'd recommend that you check out the Dependabot GitLab repository. They provide a CI/CD template that is kept up-to-date, so instead of having to write your own jobs you can include their template using:
include:
  - project: 'dependabot-gitlab/dependabot-standalone'
    file: '.gitlab-ci.yml'
They also provide instructions on their page for how to configure it to automatically create MRs.

ActiveMQ web console failing

I am trying to upgrade my ActiveMQ from 5.11.1 to 5.15.14 and also to pack it in a Docker image.
I faced some issues with Jetty which I was able to solve.
Now my app is working with the new ActiveMQ, but for some reason I cannot use the web console.
I am able to view it, but once I try to log in to see the queues I get an exception, both in my browser and in the Docker container:
WARN | /admin/
javax.servlet.ServletException: javax.servlet.ServletException: org.apache.jasper.JasperException: /index.jsp (line: [18], column: [0]) null
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:162)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.server.Server.handle(Server.java:516)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:380)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:773)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:905)[jetty-all-9.4.35.v20201120-uber.jar:9.4.35.v20201120]
I read some comments here about using Java 8, but I am already using Java 8, so I guess that is not the issue.
My Dockerfile is:
FROM openjdk:8-jdk
# Copy the whole directory of activemq into the image
COPY apache-activemq-5.15.14 /opt/apache-activemq-5.15.14
# Set the working directory to the bin folder
WORKDIR /opt/apache-activemq-5.15.14/bin
# Start up the activemq server
ENTRYPOINT ["./activemq","console"]
Of course I am exposing port 8161, since I am able to view the console; I just get an exception once I log in.
Any help would be appreciated.
After investigating, I decided to go down to version 5.15.9.
With this version everything works as expected.

Use Artifactory proxy to Docker Hub with testcontainers inside a firewall for testing

I'm trying to run tests of a Spring Boot application that writes to MongoDB, using the Testcontainers test library. Testcontainers should spin up a Docker container running MongoDB. Then I run my test; it connects to the data store, writes something, and I have assertions that make sure the data got stored. Then it all goes away.
The test needs to run on a Jenkins build agent (on Red Hat Linux 7.5) inside our corporate network which is pretty well locked down.
We have Artifactory set up as a proxy to Docker Hub. When I normally do a docker login I give it https://artifactory.example.com, or I just do docker run with "artifactory.example.com/docker-all/image:1.2.3".
The log on the Jenkins run has this in it:
00:02:13.052 2019-05-22 00:15:59.647 INFO 83570 --- [ main] o.t.d.DockerClientProviderStrategy : Found Docker environment with Environment variables, system properties and defaults. Resolved:
00:02:13.052 dockerHost=unix:///var/run/docker.sock
00:02:13.052 apiVersion='{UNKNOWN_VERSION}'
00:02:13.052 registryUrl='https://index.docker.io/v1/'
00:02:13.052 registryUsername='cicduser'
00:02:13.052 registryPassword='null'
00:02:13.052 registryEmail='null'
00:02:13.052 dockerConfig='DefaultDockerClientConfig[dockerHost=unix:///var/run/docker.sock,registryUsername=cicduser,registryPassword=<null>,registryEmail=<null>,registryUrl=https://index.docker.io/v1/,dockerConfigPath=/home/cicduser/.docker,sslConfig=<null>,apiVersion={UNKNOWN_VERSION},dockerConfig=<null>]'
Question: I don't know how to get the registryUrl listed there to be "https://artifactory.example.com/docker-all", with the registryUsername and registryPassword set correctly (in case our Artifactory gets locked down for reads).
There is lots of info to find online about using an HTTP proxy providing access to the internet at large. I think I have found how to do that. But that's not what I need to do.
It seems you can't change the URL.
You can log in to your Artifactory Docker Hub remote in the same way in Jenkins, probably by using the credentials plugin (assuming you have Jenkinsfiles).
Hopefully you can also reach your Artifactory from your Jenkins Docker agent. If not, that is a separate problem.
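If the agent can authenticate against Artifactory (for example via a docker login on the build node), one workable approach is to reference the image through the proxy path from your question directly in the test, so Testcontainers never resolves anything against index.docker.io. A minimal sketch, assuming a reasonably recent Testcontainers version and that artifactory.example.com/docker-all mirrors Docker Hub; Testcontainers picks the credentials up from the ~/.docker/config.json that docker login writes:

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class MongoFromArtifactoryTest {
    public static void main(String[] args) {
        // Image is pulled through the Artifactory remote repo instead of Docker Hub.
        // Registry host, repo path and tag are assumptions for illustration.
        DockerImageName image =
                DockerImageName.parse("artifactory.example.com/docker-all/mongo:4.0.10");

        try (GenericContainer<?> mongo = new GenericContainer<>(image).withExposedPorts(27017)) {
            mongo.start();
            // Connection string the Spring Boot test can point at.
            String uri = "mongodb://" + mongo.getHost() + ":" + mongo.getMappedPort(27017);
            System.out.println("MongoDB available at " + uri);
        }
    }
}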

What is a simple, effective way to debug custom Kafka connectors?

I'm working on a couple of Kafka connectors and I don't see any errors in their creation/deployment in the console output; however, I am not getting the result that I'm looking for (no results whatsoever, for that matter, desired or otherwise). I made these connectors based on Kafka's example FileStream connectors, so my debugging technique was based on the use of the SLF4J Logger that is used in the example. I've searched for the log messages that I thought would be produced in the console output, but to no avail. Am I looking in the wrong place for these messages? Or perhaps is there a better way to go about debugging these connectors?
Example uses of the SLF4J Logger that I referenced for my implementation:
Kafka FileStreamSinkTask
Kafka FileStreamSourceTask
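Those example tasks just obtain an org.slf4j.Logger and log from their lifecycle methods, and the messages end up in the Connect worker's log rather than anywhere connector-specific. A minimal sketch of the same pattern in a custom sink task (the class name and log messages are made up for illustration):

import java.util.Collection;
import java.util.Map;

import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyFirstSinkTask extends SinkTask {
    private static final Logger log = LoggerFactory.getLogger(MyFirstSinkTask.class);

    @Override
    public void start(Map<String, String> props) {
        log.info("Starting MyFirstSinkTask with props {}", props);
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        // Shows up only if the worker's log level allows DEBUG (see the log4j notes below).
        log.debug("Received {} records", records.size());
    }

    @Override
    public void stop() {
        log.info("Stopping MyFirstSinkTask");
    }

    @Override
    public String version() {
        return "0.0.1";
    }
}

If you see none of these lines in the worker output, the log level in the worker's log4j configuration is usually the first thing to check.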
I will try to reply to your question in a broad way. A simple way to do Connector development could be as follows:
Structure and build your connector source code by looking at one of the many Kafka Connectors available publicly (you'll find an extensive list available here: https://www.confluent.io/product/connectors/ )
Download the latest Confluent Open Source edition (>= 3.3.0) from https://www.confluent.io/download/
Make your connector package available to Kafka Connect in one of the following ways:
Store all your connector jar files (the connector jar plus dependency jars, excluding the Connect API jars) in a location in your filesystem and enable plugin isolation by adding this location to the plugin.path property in the Connect worker properties. For instance, if your connector jars are stored in /opt/connectors/my-first-connector, you will set plugin.path=/opt/connectors in your worker's properties (see below).
Store all your connector jar files in a folder under ${CONFLUENT_HOME}/share/java. For example: ${CONFLUENT_HOME}/share/java/kafka-connect-my-first-connector. (Needs to start with kafka-connect- prefix to be picked up by the startup scripts). $CONFLUENT_HOME is where you've installed Confluent Platform.
Optionally, increase your logging by changing the log level for Connect in ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties to DEBUG or even TRACE.
Use Confluent CLI to start all the services, including Kafka Connect. Details here: http://docs.confluent.io/current/connect/quickstart.html
Briefly: confluent start
Note: The Connect worker's properties file currently loaded by the CLI is ${CONFLUENT_HOME}/etc/schema-registry/connect-avro-distributed.properties. That's the file you should edit if you choose to enable classloading isolation but also if you need to change your Connect worker's properties.
Once you have Connect worker running, start your connector by running:
confluent load <connector_name> -d <connector_config.properties>
or
confluent load <connector_name> -d <connector_config.json>
The connector configuration can be either in java properties or JSON format.
Run
confluent log connect to open the Connect worker's log file, or navigate directly to where your logs and data are stored by running
cd "$( confluent current )"
Note: change where your logs and data are stored during a session of the Confluent CLI by setting the environment variable CONFLUENT_CURRENT appropriately. E.g. given that /opt/confluent exists and is where you want to store your data, run:
export CONFLUENT_CURRENT=/opt/confluent
confluent current
Finally, to interactively debug your connector, a possible way is to apply the following before starting Connect with the Confluent CLI:
confluent stop connect
export CONNECT_DEBUG=y; export DEBUG_SUSPEND_FLAG=y;
confluent start connect
and then connect with your debugger (for instance, remotely to the Connect worker; default port: 5005). To stop running Connect in debug mode, just run unset CONNECT_DEBUG; unset DEBUG_SUSPEND_FLAG; when you are done.
I hope the above will make your connector development easier and ... more fun!
I love the accepted answer. One thing - the environment variables didn't work for me... I'm using Confluent Community Edition 5.3.1...
Here's what I did that worked...
I installed the Confluent CLI from here:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
I ran Confluent using the command confluent local start
I got the Connect app details using the command ps -ef | grep connect
I copied the resulting command to an editor and added the following arg (right after java):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
Then I stopped Connect using the command confluent local stop connect
Then I ran the Connect command with the added arg
Brief intermission ---
VS Code development is led by Erich Gamma - of Gang of Four fame, who also wrote Eclipse. VS Code is becoming a first-class Java IDE; see https://en.wikipedia.org/wiki/Erich_Gamma
Intermission over ---
Next I launched VS Code and opened the Debezium Oracle connector folder (cloned from https://github.com/debezium/debezium-incubator).
Then I chose Debug - Open Configurations
and entered the debugging configuration (a remote Java debug configuration attaching to port 5005, matching the agentlib arg above),
and then ran the debugger - it will hit your breakpoints!
The connect command should look something like this:
/Library/Java/JavaVirtualMachines/jdk1.8.0_221.jdk/Contents/Home/bin/java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 -Xms256M -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/logs -Dlog4j.configuration=file:/Users/myuserid/confluent-5.3.1/bin/../etc/kafka/connect-log4j.properties -cp /Users/myuserid/confluent-5.3.1/share/java/kafka/*:/Users/myuserid/confluent-5.3.1/share/java/confluent-common/*:/Users/myuserid/confluent-5.3.1/share/java/kafka-serde-tools/*:/Users/myuserid/confluent-5.3.1/bin/../share/java/kafka/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/dependant-libs-2.12.8/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/libs/*:/usr/share/java/support-metrics-client/* org.apache.kafka.connect.cli.ConnectDistributed /var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/connect.properties
The connector module is executed by the Kafka Connect framework. For debugging, we can use standalone mode and configure the IDE to use the ConnectStandalone main function as the entry point.
Create a debug configuration as follows. Remember to tick "Include dependencies with 'Provided' scope" if it is a Maven project.
The connector properties file needs to specify the connector class name via "connector.class" for debugging.
The worker properties file can be copied from the Kafka folder, e.g. /usr/local/etc/kafka/connect-standalone.properties.
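As a rough sketch of what such an IDE entry point amounts to (both property file paths below are placeholders for your own configs), you can also launch the standalone runtime from a tiny main class so breakpoints inside your connector are hit in the same JVM:

import org.apache.kafka.connect.cli.ConnectStandalone;

public class DebugConnectorMain {
    public static void main(String[] args) throws Exception {
        // Same arguments connect-standalone.sh would pass:
        // first the worker properties, then one or more connector properties files.
        ConnectStandalone.main(new String[] {
                "config/connect-standalone.properties",  // worker config (copied as described above)
                "config/my-connector.properties"         // sets connector.class and its settings
        });
    }
}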

Apache Connection Refused when running Docker-client Java API

I am trying to install the Docker-client Remote API library ( https://github.com/spotify/docker-client ) to do some image searches and inspect image data (all in public repositories). I have the boot2docker VM downloaded, installed and running. Commands such as "Docker pull ubuntu" work fine but I would like to do this via a Java program now. I used the Eclipse IDE Egit plugin to import the github project and created a Maven/Java project from the existing Master branch. The source code is completely imported and no errors reported. I then tried writing a simple test:
final DockerClient docker = DefaultDockerClient.fromEnv().build();
//docker.pull("busybox");
List<ImageSearchResult> results = docker.searchImages("ubuntu");
for (ImageSearchResult res : results) {
    System.out.println(res.getName());
}
However, when running the code in Eclipse I get the following error:
Exception in thread "main" com.spotify.docker.client.DockerException: java.util.concurrent.ExecutionException: javax.ws.rs.ProcessingException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:2375 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect
at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1109)
at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1028)
at com.spotify.docker.client.DefaultDockerClient.searchImages(DefaultDockerClient.java:653)
at com.spotify.docker.client.main.Test.main(Test.java:28)
I tried setting up an apache server on that port but then I get:
Exception in thread "main" com.spotify.docker.client.DockerRequestException: Request error: GET http://localhost:2375/v1.12/images/search?term=ubuntu: 404
at com.spotify.docker.client.DefaultDockerClient.propagate(DefaultDockerClient.java:1100)
at com.spotify.docker.client.DefaultDockerClient.request(DefaultDockerClient.java:1028)
at com.spotify.docker.client.DefaultDockerClient.searchImages(DefaultDockerClient.java:653)
at com.spotify.docker.client.main.Test.main(Test.java:28)
Can anyone tell me what I am supposed to do here to make my search/pull call work? This is my first try with Docker and I've searched through the documentation and googled the problem but can't find anyone with a similar problem.
Thank you!
EDIT: I am running docker in Windows 7 via the pre-built VM Boot2Docker. Maybe the Docker daemon running inside that is not accessible from programs outside of the VM such as Eclipse?
EDIT: I solved it by upgrading to v1.6 instead of v1.5, which makes the daemon available on the Windows host. The current problem is that all my API calls return "The server failed to respond with a valid HTTP response".
I encountered a similar issue and managed to solve it by building up the DockerClient in the following way:
final DockerClient docker = DefaultDockerClient.builder()
.uri(URI.create("unix:///var/run/docker.sock"))
.build();
I had been getting the same exception but adding the above URI part helped me to solve the issue.
A better explanation of an issue similar to the above, and how to solve it, is provided in the following issue tracker:
https://github.com/spotify/docker-maven-plugin/issues/61
The Java program essentially does a docker search: that can only work in an environment where the Docker engine is present.
Either in the boot2docker VM.
Or in a full Linux host.
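If you do run the Java program on the Windows host, fromEnv() can still reach the daemon inside the boot2docker VM, provided DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY are exported in the environment Eclipse runs in (boot2docker shellinit prints them). A minimal sketch to verify connectivity, assuming those variables are set:

import com.spotify.docker.client.DefaultDockerClient;
import com.spotify.docker.client.DockerClient;

public class PingDaemon {
    public static void main(String[] args) throws Exception {
        // fromEnv() reads DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY,
        // so this points at the boot2docker VM rather than localhost:2375.
        try (DockerClient docker = DefaultDockerClient.fromEnv().build()) {
            System.out.println("Daemon says: " + docker.ping());
        }
    }
}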
I encountered the same problem on Mac with Eclipse and Docker version 1.10.3. I searched for a solution before settling for a workaround: using the Docker CLI docker-machine to create a new VirtualBox machine, getting the DOCKER_HOST and DOCKER_CERT_PATH values of that machine, and creating a new builder.
In my case, I created a VirtualBox machine default2 using the Docker CLI command docker-machine create -d virtualbox default2
Docker CLI
$ docker-machine env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.103:2376"
export DOCKER_CERT_PATH="/Users/XXXX/.docker/machine/machines/default2"
export DOCKER_MACHINE_NAME="default2"
Docker-client JAVA
DockerCertificates defaultCertificates = new DockerCertificates(Paths.get("/Users/XXXX/.docker/machine/machines/default2"));
DockerClient docker = DefaultDockerClient.builder()
.uri("https://192.168.99.103:2376")
.dockerCertificates(defaultCertificates)
.build();
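With a client built that way, the search call from the original question should go through. A small end-to-end sketch reusing the host and certificate path shown above (adjust both to your own machine):

import java.nio.file.Paths;
import java.util.List;

import com.spotify.docker.client.DefaultDockerClient;
import com.spotify.docker.client.DockerCertificates;
import com.spotify.docker.client.DockerClient;
import com.spotify.docker.client.messages.ImageSearchResult;

public class SearchImages {
    public static void main(String[] args) throws Exception {
        // Host and cert path come from `docker-machine env default2`.
        DockerCertificates certs = new DockerCertificates(
                Paths.get("/Users/XXXX/.docker/machine/machines/default2"));
        try (DockerClient docker = DefaultDockerClient.builder()
                .uri("https://192.168.99.103:2376")
                .dockerCertificates(certs)
                .build()) {
            List<ImageSearchResult> results = docker.searchImages("ubuntu");
            for (ImageSearchResult res : results) {
                System.out.println(res.getName());
            }
        }
    }
}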
