Issue with Redis cluster-mode setup in Docker (Windows 7) - java

I am trying to set up Redis in cluster mode, and when I try to connect to Redis using the Jedis API I see the exception below.
Exception in thread "main" redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnection(JedisSlotBasedConnectionHandler.java:57)
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:74)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:116)
at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:31)
at redis.clients.jedis.JedisCluster.set(JedisCluster.java:103)
at com.redis.main.Main.main(Main.java:18)
I am using the command below to start Redis:
$ docker run -v /d/redis.conf:/usr/bin/redis.conf --name myredis redis redis-server /usr/bin/redis.conf
And my simple redis.conf looks like this:
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
And below are the Redis startup logs.
$ docker run -v /d/redis.conf:/usr/bin/redis.conf --name myredis redis redis-server /usr/bin/redis.conf
1:C 11 Oct 18:06:01.657 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 11 Oct 18:06:01.663 # Redis version=4.0.2, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 11 Oct 18:06:01.664 # Configuration loaded
1:M 11 Oct 18:06:01.685 * Running mode=standalone, port=6379.
1:M 11 Oct 18:06:01.690 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 11 Oct 18:06:01.692 # Server initialized
1:M 11 Oct 18:06:01.696 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 11 Oct 18:06:01.697 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 11 Oct 18:06:01.700 * Ready to accept connections
And below is the simple Java program.
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class Main {
    public static void main(String[] args) {
        Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();
        jedisClusterNodes.add(new HostAndPort("127.0.0.1", 6379));
        JedisCluster jc = new JedisCluster(jedisClusterNodes);
        //Jedis jc = new Jedis("192.168.99.100");
        jc.set("prime", "1 is prime");
        String keyVal = jc.get("prime");
        System.out.println(keyVal);
    }
}
I am not really sure what is going wrong here and would appreciate any help.

You need to publish the port when starting the Redis container:
docker run -v /d/redis.conf:/usr/bin/redis.conf -p 6379:6379 --name myredis redis redis-server /usr/bin/redis.conf
Also note that on Windows 7 Docker runs inside the Docker Toolbox VM, so the Java client has to connect to the VM's IP (see docker-machine ip, typically 192.168.99.100) rather than 127.0.0.1.

Related

Jenkins in Azure VM "This site can’t be reached" at 8080

I attempted to add the nvm-wrapper plugin (https://plugins.jenkins.io/nvm-wrapper/) to Jenkins and rebooted, and I am no longer able to reach the site ("This site can’t be reached"). It is deployed as an Azure VM. The VM is running, I can ssh in, and systemctl status jenkins.service shows:
jenkins.service - LSB: Start Jenkins at boot time
Loaded: loaded (/etc/init.d/jenkins; generated)
Active: failed (Result: exit-code) since Mon 2022-04-04 21:04:43 UTC; 14h ago
Docs: man:systemd-sysv-generator(8)
Process: 2530 ExecStart=/etc/init.d/jenkins start (code=exited, status=7)
Apr 04 21:04:42 pipeline jenkins[2530]: Correct java version found
Apr 04 21:04:42 pipeline jenkins[2530]: * Starting Jenkins Automation Server jenkins
Apr 04 21:04:42 pipeline su[2587]: Successful su for jenkins by root
Apr 04 21:04:42 pipeline su[2587]: + ??? root:jenkins
Apr 04 21:04:42 pipeline su[2587]: pam_unix(su:session): session opened for user jenkins by (uid=0)
Apr 04 21:04:42 pipeline su[2587]: pam_unix(su:session): session closed for user jenkins
Apr 04 21:04:43 pipeline jenkins[2530]: ...fail!
Apr 04 21:04:43 pipeline systemd[1]: jenkins.service: Control process exited, code=exited status=7
Apr 04 21:04:43 pipeline systemd[1]: jenkins.service: Failed with result 'exit-code'.
Apr 04 21:04:43 pipeline systemd[1]: Failed to start LSB: Start Jenkins at boot time.
I had previously been running into "Incorrect Java version" due to having installed Java 12, so I modified /etc/init.d/jenkins like so:
# Which Java versions can be used to run Jenkins
JAVA_ALLOWED_VERSIONS=( "18" "110" "120" )
# Work out the JAVA version we are working with:
JAVA_VERSION=$($JAVA -version 2>&1 | sed -n ';s/.* version "\([0-9]*\)\.\([0-9]*\)\..*".*/\1\2/p;')
to allow for Java 12. Any thoughts on how or why the plugin addition, the Java version, and/or the /etc/init.d/jenkins edit could be impacting things? My sense is that the initial reboot failed due to the "Incorrect Java version" issue, but I am not sure how to resolve things and get it back up and running. It should, by default, be available at 8080, and that is where I am seeing "This site can’t be reached".
I also have the networking set up so that port 8080 should allow traffic. I have attempted a restart, a hard start, and a stop/start, to no avail. "Resource health" says the VM is available, which should be obvious since it is running and I can ssh in. Do I need to redeploy, perhaps?
First, please run the service as a non-root user.
Second, this is Azure, i.e. reachable over the internet, so please use SSL; the times without SSL are gone.
Third, look at your allowed versions:
JAVA_ALLOWED_VERSIONS=( "18" "110" "120" )
You are trying to run Java 12, but it is configured as "120". I think you should fix that to:
JAVA_ALLOWED_VERSIONS=( "18" "11" "12" )
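Before editing the allowed list, it is worth checking what value that sed expression actually computes for your JVM, since it concatenates the first two version components (and some Java 12 builds print just "12", with no minor version, which this pattern does not match at all). A quick check against sample version strings (the strings here are assumptions; run java -version to see your real one):

```shell
# replicate the init script's version extraction on sample strings
extract() {
  printf '%s\n' "$1" | sed -n 's/.* version "\([0-9]*\)\.\([0-9]*\)\..*".*/\1\2/p'
}
extract 'java version "1.8.0_292"'             # -> 18
extract 'openjdk version "12.0.1" 2019-04-16'  # -> 120
```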

Getting error "ErrImagePull" in kubernetes deployment in simple print hello world program

helloworld.java:
import java.util.*;

public class helloworld {
    public static void main(String[] a) {
        int index;
        Scanner scan = new Scanner(System.in);
        for (index = 0; index < 20; index++)
            System.out.println("helloworld\t" + (index + 1));
    }
}
Dockerfile.file:
FROM openjdk:7
LABEL maintainer="Arun kumar"
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
RUN javac helloworld.java
CMD ["java", "helloworld"]
This prepares a Docker image, which I build using this command (in the directory where the files are present):
docker build -t javaprogram .
Then I run this image using this command:
docker run -it -d javaprogram
Then I commit the container as a new image:
docker commit containerId arunkumarduraisamy66/javacontprogram
Then I push the image to my docker hub repository in public mode using this command:
docker push arunkumarduraisamy66/javacontprogram
Then I start MiniKube:
minikube start
Then I try to create a container using this command:
kubectl create deployment javakubedeployment --image=arunkumarduraisamy66/javacontprogram
It gives the following error: ErrImagePull
I am not sure how to resolve this.
C:\Users\thula\Documents\kubernetes and docker\docker java\helloworld> kubectl describe pod javaprogram-64b48854-ns7jp
Name:         javaprogram-64b48854-ns7jp
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Wed, 30 Dec 2020 09:04:42 +0530
Labels:       app=javaprogram
              pod-template-hash=64b48854
Annotations:  <none>
Status:       Running
IP:           172.17.0.3
IPs:
  IP:  172.17.0.3
Controlled By:  ReplicaSet/javaprogram-64b48854
Containers:
  javacontprogram:
    Container ID:   docker://8cb0722bde94704a3dfbec2514958c1cea88bd0f5df0afb2678292835c4f871e
    Image:          arunkumarduraisamy66/javacontprogram
    Image ID:       docker-pullable://arunkumarduraisamy66/javacontprogram@sha256:fe00f09ebc6a6bc651343a807b1adf9e48b62596ddf9424abc11ef3c6f713291
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 30 Dec 2020 09:05:50 +0530
      Finished:     Wed, 30 Dec 2020 09:05:51 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 30 Dec 2020 09:05:16 +0530
      Finished:     Wed, 30 Dec 2020 09:05:17 +0530
    Ready:          False
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bs9b6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-bs9b6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bs9b6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  79s                default-scheduler  Successfully assigned default/javaprogram-64b48854-ns7jp to minikube
  Normal   Pulled     74s                kubelet            Successfully pulled image "arunkumarduraisamy66/javacontprogram" in 3.4663937s
  Normal   Pulled     68s                kubelet            Successfully pulled image "arunkumarduraisamy66/javacontprogram" in 3.5957121s
  Normal   Pulled     46s                kubelet            Successfully pulled image "arunkumarduraisamy66/javacontprogram" in 3.8325583s
  Normal   Pulling    16s (x4 over 77s)  kubelet            Pulling image "arunkumarduraisamy66/javacontprogram"
  Normal   Pulled     13s                kubelet            Successfully pulled image "arunkumarduraisamy66/javacontprogram" in 3.5564947s
  Normal   Created    12s (x4 over 74s)  kubelet            Created container javacontprogram
  Normal   Started    12s (x4 over 73s)  kubelet            Started container javacontprogram
  Warning  BackOff    10s (x5 over 66s)  kubelet            Back-off restarting failed container
I've recreated your application's image and, as mentioned in the comments by @mmking, it is terminated because the pod has finished its task. When you check the logs of the pod you can see that it ran correctly, printed helloworld 20 times, and then got terminated, as there is nothing more for this pod to do:
$ kubectl logs javakubedeployment-7bcfb44b74-8b5zz
helloworld 1
helloworld 2
helloworld 3
[...]
helloworld 19
helloworld 20
To keep the pod running you can add a simple sleep command to it.
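For example, assuming the Dockerfile from the question, replacing the CMD like this keeps the container alive after the program prints its output (sleep infinity is just a sketch to stop the restarts; a real workload would run a long-lived process instead):

```dockerfile
FROM openjdk:7
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
RUN javac helloworld.java
# print helloworld 20 times, then block forever so Kubernetes
# does not treat the clean exit as a crash and restart the pod
CMD ["sh", "-c", "java helloworld && sleep infinity"]
```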

Unable to install Jetty9 as a service in Ubuntu

I've followed the docs in order to install Jetty 9 as a service, but whenever I run
service jetty start
it fails with no messages. My JETTY_HOME is /opt/jetty9 and contains the home distribution of version 9.4.14. I've also created my JETTY_BASE at /usr/share/jetty9 with my webapp and modules.
Both Jetty Home and Base are owned by the user jetty. I've then symlinked to my init.d folder as:
ln -s /opt/jetty9/bin/jetty.sh /etc/init.d/jetty
Then I created a /etc/default/jetty file with the following content:
# change to 1 to prevent Jetty from starting
NO_START=0
# change to 'no' or uncomment to use the default setting in /etc/default/rcS
VERBOSE=yes
# Run Jetty as this user ID (default: jetty)
# Set this to an empty string to prevent Jetty from starting automatically
JETTY_USER=jetty
# The home directory of the Java Runtime Environment (JRE). You need at least
# Java 6. If JAVA_HOME is not set, some common directories for OpenJDK and
# the Oracle JDK are tried.
#JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
# Extra options to pass to the JVM
#JAVA_OPTIONS="-Xmx256m -Djava.awt.headless=true"
# Timeout in seconds for the shutdown of all webapps
#JETTY_SHUTDOWN=30
# Additional arguments to pass to Jetty
#JETTY_ARGS=
# Jetty uses a directory to store temporary files like unpacked webapps
TMPDIR=/opt/jetty9/tmp
JETTY_HOME=/opt/jetty9
JETTY_BASE=/usr/share/jetty9
# Default for number of days to keep old log files in /var/log/jetty9/
#LOGFILE_DAYS=14
# If you run Jetty on port numbers that are all higher than 1023, then you
# do not need authbind. It is used for binding Jetty to lower port numbers.
# (yes/no, default: no)
#AUTHBIND=yes
JETTY_HOST=0.0.0.0
If I start Jetty using java -jar $JETTY_HOME/start.jar in my base folder, it works with no problem. Also, if I run
service jetty supervise
it also runs with no issues, but when I call start it fails with:
root@app:/usr/share/jetty9# service jetty start
Job for jetty.service failed because the control process exited with error code.
See "systemctl status jetty.service" and "journalctl -xe" for details.
root@app:/usr/share/jetty9# service jetty status
● jetty.service - LSB: Jetty start script.
Loaded: loaded (/etc/init.d/jetty; generated)
Active: failed (Result: exit-code) since Mon 2018-12-03 15:05:26 UTC; 14s ago
Docs: man:systemd-sysv-generator(8)
Process: 21162 ExecStop=/etc/init.d/jetty stop (code=exited, status=0/SUCCESS)
Process: 21202 ExecStart=/etc/init.d/jetty start (code=exited, status=1/FAILURE)
Dec 03 15:05:22 app systemd[1]: Stopped LSB: Jetty start script..
Dec 03 15:05:22 app systemd[1]: Starting LSB: Jetty start script....
Dec 03 15:05:26 app jetty[21202]: Starting Jetty: FAILED Mon Dec 3 15:05:26 UTC 2018
Dec 03 15:05:26 app systemd[1]: jetty.service: Control process exited, code=exited status=1
Dec 03 15:05:26 app systemd[1]: jetty.service: Failed with result 'exit-code'.
Dec 03 15:05:26 app systemd[1]: Failed to start LSB: Jetty start script..
This is the output of service jetty check:
root@app:/usr/share/jetty9# service jetty check
Jetty NOT running
JAVA = /usr/bin/java
JAVA_OPTIONS = -Djetty.home=/opt/jetty9 -Djetty.base=/usr/share/jetty9 -Djava.io.tmpdir=/opt/jetty9/tmp
JETTY_HOME = /opt/jetty9
JETTY_BASE = /usr/share/jetty9
START_D = /usr/share/jetty9/start.d
START_INI = /usr/share/jetty9/start.ini
JETTY_START = /opt/jetty9/start.jar
JETTY_CONF = /opt/jetty9/etc/jetty.conf
JETTY_ARGS = jetty.state=/usr/share/jetty9/jetty.state jetty-started.xml
JETTY_RUN = /var/run/jetty
JETTY_PID = /var/run/jetty/jetty.pid
JETTY_START_LOG = /var/run/jetty/jetty-start.log
JETTY_STATE = /usr/share/jetty9/jetty.state
JETTY_START_TIMEOUT = 60
RUN_CMD = /usr/bin/java -Djetty.home=/opt/jetty9 -Djetty.base=/usr/share/jetty9 -Djava.io.tmpdir=/opt/jetty9/tmp -jar /opt/jetty9/start.jar jetty.state=/usr/share/jetty9/jetty.state jetty-started.xml
Any ideas?
UPDATE
Changing the user in /etc/default/jetty to root solves the issue, but that is not a real solution, is it?
# Run Jetty as this user ID (default: jetty)
# Set this to an empty string to prevent Jetty from starting automatically
JETTY_USER=root
I finally got this working. The jetty user should have permissions on the following folders, and /usr/sbin/nologin as its shell, as described here:
JETTY_HOME
JETTY_BASE
/var/run/jetty <-- couldn't find a reference to this folder in the docs
And add the following to your /etc/default/jetty:
JETTY_SHELL=/bin/sh
JETTY_LOGS=/usr/share/jetty9/logs
JETTY_START_LOG=/usr/share/jetty9/logs/jetty-start-log.log
Also, you should double-check that there are no remaining log files owned by a user other than jetty in your folders.
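Concretely, the permission and shell changes above might look like this (paths taken from the question; the commands need root and are only a sketch, so adjust them to your layout before running):

```shell
# create the runtime and log directories referenced by jetty.sh
mkdir -p /var/run/jetty /usr/share/jetty9/logs
# give the jetty user ownership of home, base, and the runtime directory
chown -R jetty:jetty /opt/jetty9 /usr/share/jetty9 /var/run/jetty
# give the jetty user a nologin shell
usermod -s /usr/sbin/nologin jetty
```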

cassandra 3 throws Snitch class exception in debian docker container during startup

I am unable to start Cassandra 3.0.9 in Debian containers.
Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: Unable to find snitch class 'org.apache.cassandra.locator.GossippingPropertyFileSnitch'
org.apache.cassandra.exceptions.ConfigurationException: Unable to find snitch class 'org.apache.cassandra.locator.GossippingPropertyFileSnitch'
at org.apache.cassandra.utils.FBUtilities.classForName(FBUtilities.java:480)
at org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:513)
at org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:747)
at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:119)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:543)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:696)
I am using a Cassandra cluster of 3 nodes, of which 2 are seed nodes.
I followed the link below:
http://docs.datastax.com/en/cassandra/3.0/cassandra/initialize/initSingleDS.html
Below is my OS:
root@2e8538746e9e:/etc/cassandra# uname -a
Linux 2e8538746e9e 4.4.39-moby #1 SMP Fri Dec 16 07:34:12 UTC 2016 x86_64 GNU/Linux
root@2e8538746e9e:/etc/cassandra#
Are there any issues with my installation, or should I choose another snitch type?
No, the GossipingPropertyFileSnitch should be fine, but you have an extra 'p'.
Unable to find snitch class 'org.apache.cassandra.locator.GossippingPropertyFileSnitch'
Run this command, and make sure there's only one 'p' in "Gossiping."
$ grep endpoint_snitch cassandra.yaml
# endpoint_snitch -- Set this to a class that implements
endpoint_snitch: GossipingPropertyFileSnitch
Correcting the name of the snitch in your cassandra.yaml file should fix this issue.
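If you prefer to fix it non-interactively, a one-line sed works. The demo below runs on a scratch copy; on a real node the file is typically /etc/cassandra/cassandra.yaml (path assumed, adjust to your installation):

```shell
# scratch copy of the offending line
printf 'endpoint_snitch: GossippingPropertyFileSnitch\n' > /tmp/cassandra.yaml
# drop the extra 'p' from the snitch class name
sed -i 's/GossippingPropertyFileSnitch/GossipingPropertyFileSnitch/' /tmp/cassandra.yaml
grep endpoint_snitch /tmp/cassandra.yaml   # -> endpoint_snitch: GossipingPropertyFileSnitch
```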

Trouble running UPNP on Docker

I am trying to run a UPnP service in my Docker container using the Cling UPnP library (http://4thline.org/projects/cling/). There is a simple program that creates a device (in software) that hosts some service. It is written in Java, and when I try to run the program I get the following exception (note: it runs perfectly fine directly on my Ubuntu machine):
Jun 5, 2015 1:47:24 AM org.teleal.cling.UpnpServiceImpl <init>
INFO: >>> Starting UPnP service...
Jun 5, 2015 1:47:24 AM org.teleal.cling.UpnpServiceImpl <init>
INFO: Using configuration: org.teleal.cling.DefaultUpnpServiceConfiguration
Jun 5, 2015 1:47:24 AM org.teleal.cling.transport.RouterImpl <init>
INFO: Creating Router: org.teleal.cling.transport.RouterImpl
Exception occured: org.teleal.cling.transport.spi.InitializationException: Could not discover any bindable network interfaces and/or addresses
org.teleal.cling.transport.spi.InitializationException: Could not discover any bindable network interfaces and/or addresses
at org.teleal.cling.transport.impl.NetworkAddressFactoryImpl.<init>(NetworkAddressFactoryImpl.java:99)
For anyone who finds this and needs the answer:
Your container is isolated from the outer network. By default containers cannot see the external network, which is of course required in order to open the ports in your IGD.
You can run your container in "host" networking mode to make it non-isolated; simply add --network host to your container creation command.
Example (taken from https://docs.docker.com/network/network-tutorial-host/#procedure):
docker run --rm -d --network host --name my_nginx nginx
I have tested this using a docker-compose.yml, which looks a bit different:
version: "3.4"
services:
  upnp:
    container_name: upnp
    restart: on-failure:10
    network_mode: host # works while the container runs
    build:
      context: .
      network: host # works during the build if needed
version "3.4" is very important; network: host will not work otherwise!
My upnp container's Dockerfile looks like this:
FROM alpine:latest
RUN apk update
RUN apk add bash
RUN apk add miniupnpc
RUN mkdir /scripts
WORKDIR /scripts
COPY update_upnp .
RUN chmod a+x ./update_upnp
# cron job to refresh the UPnP entries every 10 minutes
RUN echo -e "*/10\t*\t*\t*\t*\tbash /scripts/update_upnp 8080 TCP" >> /etc/crontabs/root
# on start, open the needed ports, then keep crond in the foreground
# (with a shell-form ENTRYPOINT the CMD is ignored, so crond is chained here)
ENTRYPOINT bash update_upnp 80 TCP && bash update_upnp 8080 TCP && crond -f
The update_upnp script simply uses upnpc (installed as miniupnpc in the Dockerfile above) to open the ports.
Hopefully this will help somebody.
Edit: here is how the update_upnp script may look:
#!/bin/bash
# arguments: port (e.g. 80) and protocol (TCP or UDP)
port=$1
protocol=$2
echo "Updating UPnP entry with port [$port] and protocol [$protocol]"
# the IGD is usually the default gateway on home networks
gateway=$(ip route | head -1 | awk '{print $3}')
echo "Detected gateway is [$gateway]"
upnpc -e 'your-app-name' -r "$port" "$protocol"
echo "Done updating UPnP entry with port [$port] and protocol [$protocol]"
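The gateway detection relies on the default route being the first line of ip route output, which holds on typical single-homed hosts but is worth sanity-checking. A quick demo on sample output (the sample text is an assumption; real output varies per host):

```shell
# sample `ip route` output with the default route first
sample='default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 scope link'
# same extraction the script uses
gateway=$(printf '%s\n' "$sample" | head -1 | awk '{print $3}')
echo "$gateway"   # -> 192.168.1.1
```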
