Deploying a custom HAPI FHIR server using docker compose - java

I cloned the hapi-fhir-jpaserver-starter, modified the code to meet my requirements, and then did as the README says:
mvn clean install
docker-compose up -d --build
It did deploy a server, but a fresh stock HAPI server, not the one I modified and built.
How can I use docker compose to deploy my build instead of the version it pulls from the Docker repository?

You most likely need to update the Dockerfile to use your modified code from your fork rather than from the original repository:
ARG HAPI_FHIR_STARTER_URL=https://github.com/path/to/your/repo
ARG HAPI_FHIR_STARTER_BRANCH=your_branch
Alternatively, perhaps have a look at whether this commit resolves the issue for you: https://github.com/hapifhir/hapi-fhir-jpaserver-starter/commit/213bda7cfcc2d5f6f150b8781093b315a17a43c2
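If your docker-compose.yml pulls a prebuilt image (e.g. hapiproject/hapi) instead of building locally, you can also point the service at your own checkout. A minimal sketch, assuming the service name used in the starter's compose file (check yours; it may differ):

services:
  hapi-fhir-jpaserver-start:
    # build from the local checkout instead of pulling the published image
    build: .
    ports:
      - "8080:8080"

After that, docker-compose up -d --build rebuilds the image from your modified sources.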

Related

How to deploy Spring Boot + gradle service on Openshift

I've read tons of documentation and tutorials, but still cannot get through this.
I want to start developing a service using Spring Boot and Gradle and deploy it on OpenShift.
With fabric8, there is a handy command, mvn clean install -Dfabric8.mode=openshift, to run the deployment.
This uses Maven, though, and I'm using Gradle.
How can I do it? I know that I need an S2I builder, but I cannot understand how to use one.
I know that fabric8 uses jboss-fuse-6/fis-java-openshift as its S2I builder; I may want to use the same for my builds.
Also, I would like to know if there is a way to redeploy from local files (this should be called a binary deploy) for dev purposes. Lastly, the very next step for me is setting up Jenkins, but to get started I just really want to know how to proceed.
I've this simple Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-jar", "/app.jar"]
I'm using this plugin: "gradle.plugin.com.palantir.gradle.docker:gradle-docker:0.13.0", which gives me the Gradle task ./gradlew build docker.
The container gets built successfully, and if I run it locally with docker run -p 8080:8080 it.example/microservice it runs perfectly fine. I added this bit just to show that I don't feel I'm too far off.

GitLab CI Maven dependency resolution fails

I want to set up a CI pipeline in GitLab for my Java project managed with Maven.
This is my gitlab-ci.yml
image: maven:3-jdk-9

variables:
  MAVEN_CLI_OPTS: "--batch-mode"

stages:
  - build

compile:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile
I always get the following exception:
I tried many things like changing versions of the plugins, various docker images, including a settings.xml and local repository in the project itself, but nothing works.
Thanks in advance for any help!
UPDATE:
Using the latest docker image everything works.
It seems like the CI server has no connection to the internet. Check this using a curl command in your .gitlab-ci.yml file.
But I'm pretty sure you guys at Daimler have a local mirror, something like Artifactory.
In that case you have to use a settings.xml file.
Here is the official GitLab tutorial.
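Following that tutorial, the usual pattern for a local mirror is to commit a settings.xml containing the mirror (and credentials) and point Maven at it via the CLI options. A sketch, assuming the file is kept at .m2/settings.xml in the repository:

image: maven:3-jdk-9

variables:
  # point Maven at the committed settings.xml and keep the local repo inside the project so it can be cached
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/

compile:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile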

how to use docker-compose and maven snapshot dependencies from external repos

I have several Java components (WARs); all of them expose web services, and they happen to use the same messaging objects (DTOs).
These components all share a common Maven dependency for the DTOs; let's call it "messaging-dtos.jar". This common dependency has a version number, for example messaging-dtos-1.2.3.jar, where 1.2.3 is the Maven version for that artifact, which is published in a Nexus repository and the like.
In the Maven world, Docker aside, it can get tedious to work with closed-version dependencies. The solution for that is Maven SNAPSHOTs. When you use, for example, the Eclipse IDE and set a dependency to a SNAPSHOT version, the IDE will take the version from your current workspace instead of Nexus, saving time by not having to close a version each time you make a small change.
Now, I don't know how to make this development cycle work with Docker and docker-compose. I have "Component A", which lives in its own git repo, and messaging-dtos.jar, which lives in another git repo and is published in Nexus.
My Dockerfile simply does a RUN mvn clean install at some point, bringing in the closed version of this dependency (we are using Dockerfiles for the actual deployments, but for local environments we use docker-compose). This works for closed versions, but not for SNAPSHOTs (at least not for local SNAPSHOTs; I could publish the SNAPSHOT in Nexus, but that creates another set of problems, with different content overwriting the same SNAPSHOT and such. Been there, and I would rather not go back).
I've been thinking about using docker-compose volumes at some point, maybe to mount whatever is in my local .m2 so Component A can find the snapshot dependency when it builds, but this doesn't feel "clean" enough: the build would depend partially on whatever is specified in the Dockerfile and partially on things built locally. I'm not sure that'd be the correct way.
Any ideas? Thanks!
I propose maintaining two approaches: one for your local development environment (i.e., your machine) and another for building in your current CI tool.
For your local dev environment:
A Dockerfile that provides what your WAR application needs at the system level (i.e. Tomcat)
docker-compose to mount a volume with the WAR built from Eclipse or whatever IDE.
For CI (not your dev environment):
A very similar Dockerfile, but one that can build your application (with Maven installed)
A practical example
I use Docker's multi-stage build feature.
A single Dockerfile for both the dev and CI environments; it could be split, but I prefer to maintain only one:
FROM maven as build
ARG LOCAL_ENV=false
COPY ./src /app/
RUN mkdir /app/target/
RUN touch /app/target/app.war
WORKDIR /app
# Run the following only if we are not in Dev Environment:
RUN if [ "$LOCAL_ENV" = "false" ]; then mvn clean install; fi
FROM tomcat
COPY --from=build /app/target/app.war /usr/local/tomcat/webapps
The multi-stage build saves a lot of disk space by discarding everything from the build stage except what is pulled in with COPY --from.
Then, the docker-compose.yml used in the dev environment:
version: "3"
services:
app:
build:
context: .
args:
LOCAL_ENV: true
volumes:
- ./target/app.war:/usr/local/tomcat/webapps/app.war
Build in CI (not your local machine):
# Will run `mvn clean install`, fetching whatever it needs from Nexus and so on.
docker build .
Run in local env (your local machine):
# Will inject the war that should be available after a build from your IDE
docker-compose up

Execute JUnit tests inside Docker container

I would like to build a test environment with Docker, where I can remotely send JUnit test classes (including the code that is tested), execute the tests and retrieve the results.
I found some articles which explain how to use Docker for testing database connections or writes into a Redis, but not how I can simply run my tests in Docker and retrieve the results.
Do you have any recommendations for how you would actually achieve this?
I don't know much about Jenkins, but might it solve my problem?
Is there any good framework out there for this?
In a Dockerfile, check out your code and run a "mvn test" command, redirecting the results to a file on a mounted directory.
Each time you build the Dockerfile, you run the unit tests for your project.
With Docker you also have a "docker test" command; I don't know if there is a plugin to use it on Jenkins.
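A minimal sketch of that idea (image tag and paths are illustrative, not from the answer above): build an image that runs the tests, and mount a host directory over the report folder so the results survive the container:

FROM maven:3-jdk-8
WORKDIR /app
COPY . .
# running the tests is the container's job; Surefire writes its reports to target/surefire-reports
CMD ["mvn", "test"]

Build and run it with the report directory mounted from the host:

docker build -t my-tests .
docker run --rm -v "$PWD/results:/app/target/surefire-reports" my-tests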
One way I found that works (using Gradle) is as follows. I know you are specifically referencing JUnit as your testing framework, but I actually think something similar to this could work.
Dockerfile (I called mine Dockerfile.UnitTests):
FROM gradle:jdk8 AS test-stage
WORKDIR /app
COPY . ./
RUN gradle clean
RUN gradle test
FROM scratch AS export-stage
COPY --from=test-stage /app/build/reports/tests/test/* /
I then run this with (in Gitbash on Windows 10):
> DOCKER_BUILDKIT=1 docker build -f Dockerfile.UnitTests --output type=tar,dest=UnitTests.tar .
This results in a tar file containing the test results displayed in an html file.
I executed the above in a Gitlab CI/CD pipeline and then sent the results to a web API for analysis.
A couple of assumptions:
My project is set up for Gradle builds so I have the structure from the root of my project src/test/java/groupname/projectname/testfile.java
I am working in Windows 10 targeting Linux containers and using Gitbash.

Using Docker in development for Java EE applications

I will add 300 points as bounty
I have recently started to take a closer look at Docker and how I can use it to get new team members up and running with the development environment faster, as well as to ship new versions of the software to production.
I have some questions regarding how and at what stage I should add the Java EE application to the container. As I see it there are multiple ways of doing this.
This WAS the typical workflow (in my team) before Docker:
Developer writes code
Developer builds the code with Maven producing a WAR
Developer uploads the WAR via the JBoss admin console, or with a Maven plugin
Now that Docker has come around, I am a little confused about whether I should create the images I need and configure them so that all that is left to do, once the JBoss WildFly container is running, is to deploy the application through the web admin console. Or should I create a new container each time I build the application with Maven, add the WAR with the ADD command in the Dockerfile, and then just run the container without ever deploying to it once it is started?
In production I guess the latter approach is preferred? Correct me if I am wrong.
But in development, how should it be done? Are there other workflows?
With the latest version of Docker, you can achieve that easily with Docker links, volumes and Docker Compose. More information about these tools is available on the Docker site.
Back to the workflow you mentioned: a typical Java EE application requires an application server and a database server. Since you do not mention in your post how the database is set up, I will assume that your development environment has a separate database server for each developer.
With these assumptions, I would suggest the following workflow:
Build the base WildFly application server from the official image. You can achieve that with: docker pull jboss/wildfly
Run the base application server with:
docker run -d -it -p 8080:8080 -p 9990:9990 --name baseWildfly jboss/wildfly
The application server is now running; you need to configure it to connect to your database server, set up the datasource, and apply any other configuration necessary to start your Java EE application.
For this, you need to log into a bash terminal inside the JBoss container:
docker exec -i -t baseWildfly /bin/bash
You are now in the container's terminal and can configure the application server as you would in any Linux environment.
You can test the configuration by manually deploying the WAR file to WildFly. This can be done easily with the admin console, a Maven plugin, or the ADD command as you said. I usually do it with the admin console, just to test quickly. Once you have verified that the configuration works, you can remove the WAR file and create a snapshot of your container:
docker commit -m "add base settings and configurations" baseWildfly yourRepository:tag
You can now push the created image to your private repository and share it with your development team. They can pull the image and run the application server, ready to deploy right away.
We don't want to deploy the WAR file through the admin console for every Maven build, as that is too cumbersome, so the next task is to automate it with a Docker volume.
Assuming that you have configured Maven to build the WAR file into "../your_project/deployments/", you can mount that onto the deployment directory of the JBoss container as follows:
docker run -d -p 8080:8080 -v ../your_project/deployments:/opt/jboss/wildfly/standalone/deployments yourRepository:tag
Now, every time you rebuild the application with Maven, the application server will scan for changes and redeploy your WAR file.
It is also quite problematic to have a separate database server for each developer, as everyone has to configure it themselves inside the container and they might end up with different settings (e.g. the DB URL, username, password, etc.). So it's good to dockerize that eventually.
Assuming you use Postgres as your DB server, you can pull it from the official postgres repository. When you have the image ready, you can run the DB server:
docker run -d -p 5432:5432 -t --name postgresDB postgres
or run the database server with the "data" directory mounted from the host:
docker run -d -p 5432:5432 -v ../your_postgres/data:/var/lib/postgresql -t --name postgresDB postgres
The first command keeps your data inside the container, while the latter keeps it on the host.
Now you can link your database container with the Wildfly:
docker run -d -p 8080:8080 --link postgresDB:database -t baseWildfly
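Since Docker Compose was mentioned at the start, the same wiring can be captured in a compose file so nobody has to remember the docker run flags. A sketch using the names and paths from the commands above (adjust them to your setup):

version: "2"
services:
  database:
    image: postgres
    ports:
      - "5432:5432"
    volumes:
      - ../your_postgres/data:/var/lib/postgresql
  app:
    # the WildFly image committed above
    image: yourRepository:tag
    ports:
      - "8080:8080"
    links:
      - database
    volumes:
      - ../your_project/deployments:/opt/jboss/wildfly/standalone/deployments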
Now you can have the same environment for all members in developer's team and they can start coding with minimal set up.
The same base images can be used for the production environment, so whenever you want to release a new version you just need to copy the WAR file to the "your_deployment" folder on the host.
The good thing about dockerizing the application server and DB server is that you can easily cluster them in the future, to scale out or to provide high availability.
I've used Docker with GlassFish extensively for a long time now, and wrote a blog post on the subject a while ago here.
It's a great tool for Java EE development.
For your production image I prefer to bundle everything together, building off the static base image and layering in the new WAR. I like to use the CI server to do the work and have a CI configuration for production branches that grabs a base, layers in the release build, and then publishes the artifact. Typically we deploy into production manually, but if you really want to get fancy you can even automate that, with the CI server deploying into a production environment and using proxy servers to ensure that new sessions that come in get the updated version.
In development I like to take the same approach when it comes time to locally run anything that relies on the container (e.g. Arquillian integration tests) prior to checking in code. That keeps the environment as close to production as possible, which I think is important when it comes to testing. That's one big reason I am against approaches like testing with embedded containers but deploying to non-embedded ones. I've seen plenty of cases where a test will pass in the embedded environment and fail in the production/non-embedded one.
During a develop/deploy/manual-test cycle, prior to committing code, I think the approach of deploying into a container (which is part of a base image) is more cost-effective, in terms of the speed of that dev cycle, than building your WAR into a new image each time. It's also a better approach if your dev environment uses a tool like JRebel or XRebel, where you can hot-deploy your code and simply refresh your browser to see the changes.
You might want to have a look at rhuss/docker-maven-plugin. It allows a seamless integration for using docker as your deployment unit:
Use a standard Maven assembly descriptor for building images with docker:build, so your generated WAR file or your microservice can easily be added to a Docker image.
You can push the created image with docker:push
With docker:start and docker:stop you can utilize your image during unit tests.
This plugin comes with comprehensive documentation; if there are any open questions, please open an issue.
And as you might have noticed, I'm the author of this plugin ;-). And frankly, there are other docker-maven-plugins out there, which all have a slightly different focus. For a simple check, you can have a look at shootout-docker-maven which provides sample configurations for the four most active maven-docker-plugins.
The workflow then simply shifts the artifact boundary from WAR/EAR files to Docker images. mvn docker:push moves them to a Docker registry, from where they are pulled during the various testing stages of a continuous delivery pipeline.
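As a rough idea of what an image definition for this plugin can look like (the coordinates and schema vary between plugin versions, so treat this as a sketch and check the plugin's documentation):

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>example/${project.artifactId}</name>
        <build>
          <from>jboss/wildfly</from>
          <assembly>
            <!-- predefined descriptor that copies the project's main artifact (the WAR) into the image -->
            <descriptorRef>artifact</descriptorRef>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>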
The way you would normally deploy anything with Docker is by producing a new image atop of the platform base image. This way you follow Docker dependency bundling philosophy.
In terms of Maven, you can produce a tarball assembly (let's say it's called jars.tar) and then call ADD jars.tar /app/lib in Dockerfile. You might also implement a Maven plugin that generates a Dockerfile as well.
This is the most sensible approach with Docker today; other approaches, such as building an image FROM scratch, are not really applicable to Java applications.
See also Java JVM on Docker/CoreOS.
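A minimal sketch of the tarball approach described above; the base image, paths and main class are illustrative only:

# assumes the Maven assembly produced jars.tar next to the Dockerfile
FROM openjdk:8-jre
# ADD unpacks the tarball into /app/lib
ADD jars.tar /app/lib
# hypothetical main class, just to show where the entry point goes
ENTRYPOINT ["java", "-cp", "/app/lib/*", "com.example.Main"]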
The blog post about setting up JRebel with Docker by Arun Gupta would probably be handy here: http://blog.arungupta.me/configure-jrebel-docker-containers/
I have tried a similar scenario, using Docker to run my application. In my situation I wanted to start Docker with Tomcat running the WAR, and then, in the integration-test phase of Maven, run the Cucumber/PhantomJS integration tests against the container.
The example implementation is documented at https://github.com/abroer/cucumber-integration-test. You could extend this example to push the Docker image to your private repo when the tests are successful. The pushed image can be used in any environment, from development to production.
For my current deployment process I use GlassFish and this trick, which works very nicely:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>${plugin.exec.version}</version>
  <executions>
    <execution>
      <id>docker</id>
      <phase>package</phase>
      <goals>
        <goal>exec</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <executable>docker</executable>
    <arguments>
      <argument>cp</argument>
      <argument>${project.build.directory}/${project.build.finalName}</argument>
      <argument>glassfish:/glassfish4/glassfish/domains/domain1/autodeploy</argument>
    </arguments>
  </configuration>
</plugin>
Once you run mvn clean package, the container kicks in and starts deploying the latest WAR.
