Bitcoinj OverlappingFileLockException in docker compose - java

I am testing a Bitcoinj application in docker compose with two wallets. One starts correctly; the second one throws Caused by: java.io.IOException: org.bitcoinj.store.BlockStoreException: java.nio.channels.OverlappingFileLockException.
Wallet and SPV files are mounted into each docker container and are located, relative to the workspace root, in ./docker-volumes/wallet-app/... and ./docker-volumes/wallet-test/....
As far as I understand, OverlappingFileLockException should be thrown when the same JVM tries to lock the same file multiple times. In my case it is thrown when two containers seemingly lock two different files in different directories, so it looks like a docker compose issue rather than a Bitcoinj or other code issue.
My volumes are mounted as - ./docker-volumes/wallet-app:/wallet-app and - ./docker-volumes/wallet-test:/wallet-test in docker-compose.yml.
Either wallet starts correctly on its own.
After restarting the crashed container with docker-compose restart wallet-... a couple of times, it starts fine and I can run tests.
I suspect it has to do with the volume mounts (although my containers should still run in two separate JVMs, I would imagine), but I have not been able to find anything particularly useful in the docker docs. Any tips would be appreciated. Thanks.
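For what it's worth, the per-JVM semantics can be reproduced in isolation: OverlappingFileLockException is thrown by java.nio when a lock request overlaps a region the same JVM already holds, regardless of the channel used. A minimal sketch (the file name is illustrative; this is not Bitcoinj code):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("spvchain", ".lock");
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            FileLock first = ch.tryLock();  // first lock succeeds
            try {
                ch.tryLock();               // same JVM, same region: rejected in-process
            } catch (OverlappingFileLockException e) {
                System.out.println("second lock rejected within this JVM");
            }
            first.release();
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```

Two genuinely separate JVMs contending for the same file would instead see tryLock() return null (or block on lock()), not this exception, which is what makes it surprising across containers.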
EDIT:
relevant docker-compose.yml services
version: '3'
services:
  listener:
    image: listener:0.1
    environment:
      - NOTIFIER_URL=http://notifications:9000
      - COIN_NET=test
      - WALLET_PATH=/wallet
      - MONGO_HOST=mongo
      - MONGO_PORT=27017
      - MONGO_DB=test_db
      - RMQ_HOST=rabbit
      - RMQ_PORT=5672
      - RMQ_USERNAME=user
      - RMQ_PASSWORD=password
      - RMQ_ROUTING=event_producer
      - RMQ_EXCHANGE=amq.direct
    volumes:
      - ./docker-volumes/wallet-app:/wallet
    depends_on:
      - mongo
      - notifications
      - rabbit
  listener-test:
    image: test:0.1
    volumes:
      - ./docker-volumes/wallet-test:/wallet-test

Related

Docker Binding parse exception with testcontainers

I would like to run some integration tests that set up a complete environment with the org.testcontainers Docker Compose Module. I am new to Windows and Docker testing, as well as to Testcontainers.
Using versions:
Docker desktop community: 2.5.0.0
org.testcontainers:testcontainers:1.15.0
org.springframework.boot 2.3.4.
My code looks like the following:
@ClassRule
public static DockerComposeContainer environment = new DockerComposeContainer(
        new File("C:\\dev\\myproject\\myapp\\docker-compose\\docker-compose.env.yml"),
        new File("C:\\dev\\myproject\\myapp\\docker-compose\\docker-compose.yml"))
    .withExposedService("myservice_1", 9999)
    .withLocalCompose(true);
My compose file looks something like this:
services:
  myservice:
    image: myapp/myservice:latest
    hostname: myservice
    volumes:
      - ../volumeDir:/app/volumeDir
      - ../config:/app/config
    expose:
      - 9999
    ports:
      - 9999:9999
    command: -Dspring.liquibase.enabled=true
    networks:
      - internet
It looks like some binding error; the most significant part of the stack trace:
> java.lang.RuntimeException: java.lang.RuntimeException: org.testcontainers.shaded.com.fasterxml.jackson.databind.exc.ValueInstantiationException:
> Cannot construct instance of `com.github.dockerjava.api.model.Binds`,
> problem: Error parsing Bind
> 'C:\dev\myproject\myapp\volumeDir:/app/volumeDir:rw'
> at [Source: (org.testcontainers.shaded.okio.RealBufferedSource$1); line: 1,
> column: 1369] (through reference chain:
> com.github.dockerjava.api.command.InspectContainerResponse["HostConfig"]->com.github.dockerjava.api.model.HostConfig["Binds"])
> at org.rnorth.ducttape.timeouts.Timeouts.callFuture(Timeouts.java:68)
> at org.rnorth.ducttape.timeouts.Timeouts.doWithTimeout(Timeouts.java:60)
> at org.testcontainers.containers.wait.strategy.WaitAllStrategy.waitUntilReady(WaitAllStrategy.java:53)
> ...
I have tried changing the path to an absolute one, with no difference. Do you have any idea what could make this bind unparseable?
This error is due to a current issue with Testcontainers and the recent Docker for Windows version. They are already aware of it, and a fix seems close to being merged.
UPDATE: Version 1.15.1 is now available and fixes this bug.

When a k8s pod restarts, do the files on it still exist and can they be read and written?

I run a Java program on k8s, but it got OOMKilled. I want to add JVM params like
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath="/tmp/dump.hprof"
but I wonder whether the dump file still exists after the pod restarts.
The local filesystem of a container is ephemeral, so it will be gone once the container is killed. You should use volume mounts to write the heap dump to external block or file storage, or at least to the host's filesystem (not recommended for production), so that you can get the dump files after the container is restarted.
spec:
  containers:
    - name: a-jvm-container
      image: openjdk:11.0.1-jdk-slim-sid
      command: ["java", "-XX:+HeapDumpOnOutOfMemoryError", "-XX:HeapDumpPath=/dumps/oom.bin", "-jar", "yourapp.jar"]
      volumeMounts:
        - name: heap-dumps
          mountPath: /dumps
  volumes:
    - name: heap-dumps
      emptyDir: {}
Quoting from the Kubernetes docs:
An emptyDir volume is first created when a Pod is assigned to a Node,
and exists as long as that Pod is running on that node. As the name
says, it is initially empty. Containers in the Pod can all read and
write the same files in the emptyDir volume, though that volume can be
mounted at the same or different paths in each Container. When a Pod
is removed from a node for any reason, the data in the emptyDir is
deleted forever
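To catch a missing or read-only mount before an OOM ever happens, the application can verify the dump directory at startup. A minimal sketch assuming the container mounts /dumps as in the spec above (the class name and check are illustrative, not part of any library):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DumpPathCheck {
    // Returns true if the directory exists and a file can be created in it.
    static boolean writable(Path dir) {
        if (!Files.isDirectory(dir)) {
            return false;
        }
        try {
            Path probe = Files.createTempFile(dir, "probe", ".tmp");
            Files.delete(probe);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Must match the directory part of -XX:HeapDumpPath
        Path dumps = Paths.get("/dumps");
        if (!writable(dumps)) {
            System.err.println("Heap dump dir " + dumps
                    + " is missing or read-only; dumps would be lost on restart.");
        }
    }
}
```

Logging this once at boot makes a misconfigured volume visible in the pod logs rather than being discovered only after a crash.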

Testcontainer issue with Bitbucket pipelines

I configured bitbucket-pipelines.yml and used image: gradle:6.3.0-jdk11. My project builds on Java 11 and Gradle 6.3. Everything was OK until the test cases started, because I use Testcontainers to test the application. Bitbucket could not start up the Testcontainers containers.
The error is:
org.testcontainers.containers.ContainerLaunchException: Container startup failed
How can this issue be fixed?
If you use Testcontainers inside Bitbucket Pipelines, you may run into issues like the one above. This one can be fixed by putting the following into bitbucket-pipelines.yml.
The key part is an environment variable:
TESTCONTAINERS_RYUK_DISABLED=true
The full pipeline might be like this:
pipelines:
  default:
    - step:
        script:
          - export TESTCONTAINERS_RYUK_DISABLED=true
          - mvn clean install
        services:
          - docker
definitions:
  services:
    docker:
      memory: 2048

Keycloak docker container fails to import realm from volume

I want to run a keycloak container with the docker compose file below.
version: '2.1'
services:
  # keycloak
  keycloak:
    container_name: keycloak
    image: jboss/keycloak:latest
    restart: always
    ports:
      - 8080:8080
    volumes:
      - C:\logs\keycloak:/usr/app/logs
      - C:\settings:/etc/settings
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - KEYCLOAK_IMPORT=/etc/settings/realm.json
Everything except the realm import works fine in this case.
This is a shortened version of the error thrown during the container run:
Caused by: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /etc/settings/realm.json (Is a directory)
Caused by: java.io.FileNotFoundException: /etc/settings/realm.json (Is a directory)
I am sure the file exists in this location.
I have checked several different configurations for the import, e.g. specifying the imported file directly: C:\settings\realm.json:/etc/settings/realm.json, but the result is the same.
Do you have any ideas what the proper configuration should look like?
I had the same issue. It was caused by the fact that I was attempting to mount a volume using a relative path. I resolved it by replacing all relative paths with absolute paths.
SOLVED
It appears the error can be described as follows.
error: the file is mounted as a directory, or mounted directories are empty.
reason: the OS password was changed.
explanation: Docker cannot access files on the system it runs on (it is, after all, a virtual machine) after the system's password changes. It does not report the failure to access the file system either; it just displays the mounted directories in an invalid manner.
UPDATE
So the problem in my case was that the system password (Windows) was changed and the credentials stored by Docker were not updated. Since it was some time ago, I no longer remember exactly how to change the saved credentials in Docker (I recall it was easy in the UI), but that was the solution: updating the system credentials stored by Docker.

Can the docker image of sonarqube use env for configuring any settings?

I am trying to configure this image with LDAP.
In the documentation, they say you can configure JDBC with:
SONARQUBE_JDBC_USERNAME: sonar.jdbc.username*
SONARQUBE_JDBC_PASSWORD: sonar.jdbc.password*
SONARQUBE_JDBC_URL: sonar.jdbc.url*
I wonder how I could do the same for LDAP.
Is it possible to set any setting through its environment variable name?
E.g. SONAR_LOG_LEVEL=DEBUG
Otherwise, there is a /opt/sonarqube/conf/sonar.properties inside the container; is that the place, and how should I go about editing it?
Another way of achieving what you want is to create your own sonar.properties file and copy it into the container along with the wrapper.properties.
In docker-compose.yml:
volumes:
  - ./sonar-properties:/opt/sonarqube/conf
> Otherwise, there is a /opt/sonarqube/conf/sonar.properties inside the container; is that the place, and how should I go about editing it?
No; generally, what you want to do is possible by adding information to your docker-compose file.
In particular, in your YML file under the "environment" key you can add whatever variables you want.
Here is an example docker-compose.yml file:
version: "3"
services:
  registry:
    image: registry:2
    ports:
      - 0.0.0.0:5000:5000
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
    volumes:
      - /data/reg:/var/lib/registry
    hostname: "myhost.registry"
Then use the compose file to deploy the stack with your custom environment.
The solution I found is to take the configuration file (sonar.properties), parameterize it, and reference it in docker-compose.yml:
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.url=
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - ./sonar.properties:/opt/sonarqube/conf/sonar.properties
With that, the local config file is placed into the container.
I hope it helps.
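To illustrate the two configuration routes the answers describe (a mounted sonar.properties file versus environment-supplied values), here is a hypothetical sketch of file-then-override precedence. The helper, file path, and keys are illustrative; this is not SonarQube's actual loading code:

```java
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.Properties;

public class SonarConfig {
    // Loads a properties file, then lets environment-style entries override it.
    static Properties load(Path file, Map<String, String> overrides) throws IOException {
        Properties props = new Properties();
        try (Reader r = Files.newBufferedReader(file)) {
            props.load(r);
        }
        overrides.forEach(props::setProperty);  // env-provided values win
        return props;
    }
}
```

The design point is simply that whichever source is applied last wins, which is why mounting a complete sonar.properties and sprinkling environment variables on top can coexist without conflict.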
