Trouble running UPnP on Docker - Java

I am trying to run a UPnP service in my Docker container using the Cling UPnP library (http://4thline.org/projects/cling/). There is a simple program that creates a device (in software) that hosts some services. It is written in Java, and when I run the program I get the following exception (note: it runs perfectly fine directly on my Ubuntu machine):
Jun 5, 2015 1:47:24 AM org.teleal.cling.UpnpServiceImpl <init>
INFO: >>> Starting UPnP service...
Jun 5, 2015 1:47:24 AM org.teleal.cling.UpnpServiceImpl <init>
INFO: Using configuration: org.teleal.cling.DefaultUpnpServiceConfiguration
Jun 5, 2015 1:47:24 AM org.teleal.cling.transport.RouterImpl <init>
INFO: Creating Router: org.teleal.cling.transport.RouterImpl
Exception occured: org.teleal.cling.transport.spi.InitializationException: Could not discover any bindable network interfaces and/or addresses
org.teleal.cling.transport.spi.InitializationException: Could not discover any bindable network interfaces and/or addresses
    at org.teleal.cling.transport.impl.NetworkAddressFactoryImpl.<init>(NetworkAddressFactoryImpl.java:99)

For anyone who finds this and needs the answer:
Your container cannot see your external network. By default, containers are network-isolated and cannot reach the outer (LAN) network, which is of course required in order to open the ports on your IGD (Internet Gateway Device).
You can run your container with "host" networking to make it non-isolated; simply add --network host to your container creation command.
Example (taken from https://docs.docker.com/network/network-tutorial-host/#procedure):
docker run --rm -d --network host --name my_nginx nginx
I have tested this using a docker-compose.yml, which looks a bit different:
version: "3.4"
services:
upnp:
container_name: upnp
restart: on-failure:10
network_mode: host # works while the container runs
build:
context: .
network: host # works during the build if needed
The version: "3.4" line is very important; the build-time network: host option will not work with older compose file versions!
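If you build and run with plain docker instead of Compose, the equivalent would be roughly the following (a sketch; the upnp image tag is just an example):
# build with host networking, in case the build itself needs to reach the LAN
docker build --network host -t upnp .
# run the container non-isolated so it can reach the IGD
docker run -d --network host --name upnp upnp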
My upnp container's Dockerfile looks like this:
FROM alpine:latest
RUN apk update
RUN apk add bash
RUN apk add miniupnpc
RUN mkdir /scripts
WORKDIR /scripts
COPY update_upnp .
RUN chmod a+x ./update_upnp
# cron job to update the UPnP entries every 10 minutes
RUN echo -e "*/10\t*\t*\t*\t*\tbash /scripts/update_upnp 8080 TCP" >> /etc/crontabs/root
CMD ["crond", "-f"]
# on start, open needed ports
ENTRYPOINT bash update_upnp 80 TCP && bash update_upnp 8080 TCP
The update_upnp script simply uses upnpc (installed via the miniupnpc package in the Dockerfile above) to open the ports.
Hopefully this will help somebody.
Edit: here is what the update_upnp script may look like:
#!/bin/bash
# port - e.g. 80
# protocol - TCP or UDP
port=$1
protocol=$2
echo "Updating UPnP entry with port [$port] and protocol [$protocol]"
# the gateway is only logged here; upnpc discovers the IGD on its own
gateway=$(ip route | head -1 | awk '{print $3}')
echo "Detected gateway is [$gateway]"
upnpc -e 'your-app-name' -r "$port" "$protocol"
echo "Done updating UPnP entry with port [$port] and protocol [$protocol]"

Related

Connect Java Mission Control to Flink App in Docker

I'm trying to expose JMX in several Docker-based Flink task managers using the --scale option of docker-compose, for use with Java Mission Control (e.g. docker-compose -f docker/flink-job-cluster.yml up -d --scale taskmanager=2).
My basic issue is that even though I have configured JMX with:
env.java.opts: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.host='0.0.0.0' -Djava.rmi.server.hostname='0.0.0.0'
env.java.opts.jobmanager: -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999
env.java.opts.taskmanager: -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.rmi.port=9999
Source: https://github.com/aedenj/apache-flink-kotlin-starter/blob/multiple-schemas/conf/flink/flink-conf.yaml
and then map the ports in the docker-compose file like so:
...
jobmanager:
  image: flink:1.14.3-scala_2.11-java11
  hostname: jobmanager
  ports:
    - "8081:8081" # Flink Web UI
    - "9999:9999" # JMX Remote Port
  command: "standalone-job"
....
taskmanager:
  image: flink:1.14.3-scala_2.11-java11
  depends_on:
    - jobmanager
  command: "taskmanager.sh start-foreground"
  ports:
    - "10000-10005:9999" # JMX Remote Port
  environment:
    - JOB_MANAGER_RPC_ADDRESS=jobmanager
.....
Source: https://github.com/aedenj/apache-flink-kotlin-starter/blob/multiple-schemas/docker/flink-job-cluster.yml
I am unable to connect Java Mission Control to the task managers using any of the ports (e.g. localhost:10000, localhost:10001, etc.). However, I can connect to the job manager using localhost:9999, and I can only connect to a task manager if I explicitly set its ports separately, not when using the dynamic port mapping in the docker-compose file. Ideally the port mapping would work so that I can continue to use the --scale option of docker-compose.
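For reference, a quick way to see which host port Compose assigned to each scaled task manager (the container name below is an example and depends on your Compose project name):
# show the published ports of the scaled taskmanager containers
docker ps --filter name=taskmanager --format '{{.Names}} {{.Ports}}'
# or query the mapping of container port 9999 directly
docker port flink_taskmanager_1 9999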

Remote debugging Java 9 in a docker container from IntelliJ IDEA

I have a Dockerfile with this content:
FROM openjdk:9
WORKDIR /project
ADD . /project
EXPOSE 5005
My docker-compose.yml looks like this:
version: "3.2"
services:
some-project:
build: .
ports:
- target: 5005
published: 5005
protocol: tcp
mode: host
command: "java '-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005' SomeClass"
When I do docker-compose up I see the message "Listening for transport dt_socket at address: 5005". But when I try to connect with jdb or IDEA I get "java.io.IOException: handshake failed - connection prematurally closed".
Everything works fine if I change openjdk:9 to openjdk:8. However, I need Java 9 for my project.
Since Java 9, the JDWP socket connector accepts only local connections by default. See:
http://www.oracle.com/technetwork/java/javase/9-notes-3745703.html#JDK-8041435
So, to enable debug connections from outside the container, specify *:<port> as the address:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005
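As a quick check (assuming port 5005 is published on localhost as in the compose file above), jdb can confirm that the agent now accepts outside connections before you configure IDEA:
# attach to the JDWP agent published on the host
jdb -attach localhost:5005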

How can I retrieve the access token by calling to the URI inside vagrant box?

I ran into a problem when I tried to run the example here. I got the source code, built it and ran it. That works fine in my local environment (Windows 10), but when I created a Vagrantfile with an Ubuntu image and tried to run it in the Ubuntu environment, I hit an issue. Because that example uses Spring Security to authenticate the user, I have to run the command below to get an access token:
curl http://acme:acmesecret@localhost:9999/uaa/oauth/token \
-d grant_type=password -d client_id=acme \
-d username=user -d password=password -ks | jq .
That's fine if I use ssh and run it inside the Vagrant box. But if I run it outside the box (log out of the Ubuntu box, return to Windows and use Postman to submit the request for the access token at localhost:9999/uaa/oauth/token), I can never reach port 9999 inside the Vagrant box. I have the port-forwarding config in the Vagrantfile as below:
config.vm.network "forwarded_port", guest: 8765, host: 8888 # API gateway port
config.vm.network "forwarded_port", guest: 8761, host: 8761 # eureka port
config.vm.network "forwarded_port", guest: 7979, host: 7979 # hytrix port
config.vm.network "forwarded_port", guest: 9999, host: 9999 # auth port, I try to portward 999 outside but could not sucess
Could anyone help me with that? I'm really stuck at the moment :(
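One thing worth checking inside the box (a minimal sketch using ss, which ships with Ubuntu) is whether the auth service listens on all interfaces, since a service bound only to 127.0.0.1 in the guest is a common reason a forwarded port appears closed from the host:
vagrant ssh -c "ss -tlnp | grep 9999"
# a LISTEN entry on 0.0.0.0:9999 or *:9999 is reachable through the forward; 127.0.0.1:9999 is not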

jvisualvm connect to remote jstatd not showing applications

I started jstatd on the remote server (Ubuntu Server 14.04):
jstatd -J-Djava.security.policy=.jstatd.all.policy -J-Djava.rmi.server.logCalls=true -p 9099
and tried to connect to it with jvisualvm on Windows. I checked netstat, the connection is established, and on the remote side it logs the calls:
Sep 11, 2015 12:48:51 PM sun.rmi.server.UnicastServerRef logCall
FINER: RMI TCP Connection(4)-10.82.199.0: [10.82.199.0: sun.rmi.registry.RegistryImpl[0:0:0, 0]: java.rmi.Remote lookup(java.lang.String)]
Sep 11, 2015 12:48:55 PM sun.rmi.server.UnicastServerRef logCall
FINER: RMI TCP Connection(4)-10.82.199.0: [10.82.199.0: sun.rmi.registry.RegistryImpl[0:0:0, 0]: java.rmi.Remote lookup(java.lang.String)]
Sep 11, 2015 12:48:59 PM sun.rmi.server.UnicastServerRef logCall
FINER: RMI TCP Connection(4)-10.82.199.0: [10.82.199.0: sun.rmi.registry.RegistryImpl[0:0:0, 0]: java.rmi.Remote lookup(java.lang.String)]
All signs say that it's working, but no applications are showing in jvisualvm.
Apparently VisualVM expects a consistent DNS name for the server you're trying to connect to remotely (the Ubuntu Server 14.04 in your case). Hence, if you're specifying an IP address instead of a DNS name to VisualVM you should add the following to your jstatd startup line:
-J-Djava.rmi.server.hostname=<the IP address to your Ubuntu server here>
Additionally, I found out that specifying the port option (-p 9099 in your case) is not supported in some VisualVM releases:
Known limitation: In this VisualVM release the jstatd's default port and rminame must be used when starting the jstatd utility, i.e. the use of the -p and -n options is not supported.
VisualVM Troubleshooting Guide
All in all, you should try running the following jstatd line on your Ubuntu Server:
jstatd -J-Djava.security.policy=.jstatd.all.policy -J-Djava.rmi.server.hostname=10.82.83.117 -J-Djava.rmi.server.logCalls=true
Sources:
http://www.catify.com/2012/09/26/remote-monitoring-with-visualvm/
It worked for me :)
jstatd -p 1099 -J-Djava.rmi.server.hostname=10.250.105.112 -J-Djava.security.policy=<(echo 'grant codebase "file:${java.home}/../lib/tools.jar" {permission java.security.AllPermission;};')
Works for Me Perfectly
In case this helps someone else...
I was running into problems where neither jstatd nor adding a plain JMX connection in VisualVM worked. The former would not give any error messages; it just wouldn't list any apps. The latter would give me an error saying "Cannot connect to some-server:30648 using service:jmx:rmi:///jndi/rmi://some-server:30648/jmxrmi".
Trying to use the excellent sjk-plus tool to manually connect to the JMX service gave the following error:
$ java --add-opens java.base/jdk.internal.perf=ALL-UNNAMED \
--add-opens jdk.attach/sun.tools.attach=ALL-UNNAMED \
-Dsjk.breakCage=false \
-jar scripts/sjk-plus-0.14.jar \
mx --get --allMatched -b com.acme.some.package:name=* -f Count \
-s some-server:30648
JMX Connection failed: java.rmi.ConnectException: Connection refused to host: 127.0.1.1; nested exception is:
java.net.ConnectException: Connection refused (Connection refused)
Do you see it? 127.0.1.1, what is that weird IP address doing there?
This was caused by a particular entry in the /etc/hosts file on the server:
user@some-server:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 some-server
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Changing the some-server entry in the hosts file and restarting the process made it work with sjk-plus, and made it discoverable with jstatd as well.
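As a quick way to see which address the JVM will pick up for the machine's hostname (the 127.0.1.1 line is the default on Debian/Ubuntu installs):
getent hosts "$(hostname)"
# prints 127.0.1.1 <hostname> while the default hosts entry is still in place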
In Java 11+:
jstatd -J-Djava.rmi.server.logCalls=true \
-J-Djava.security.policy=.jstatd.all.policy \
-J-Djava.net.preferIPv4Stack=true \
-J-Djava.security.policy=<(echo 'grant codebase "jrt:/jdk.jstatd" {permission java.security.AllPermission;}; grant codebase "jrt:/jdk.internal.jvmstat" {permission java.security.AllPermission;};')

jps can't connect to a remote jstatd

I'm trying to query a remote JVM with jps using jstatd, in order to eventually monitor it using VisualVM.
I got jstatd running with the following security policy:
grant codebase "file:${java.home}/../lib/tools.jar" {
permission java.security.AllPermission;
};
jstatd is running on a 64-bit Linux box with a 1.6.0_10 HotSpot VM. The jstatd command is:
jstatd -J-Djava.security.policy=jstatd.tools.policy -J-Djava.rmi.server.logCalls=true
I'm trying to run jps from a Windows 7 machine. Due to firewall restrictions, I'm tunneling the RMI data through an SSH tunnel to my Windows machine such that the jps command line is:
.\jps.exe -m -l rmi://localhost
When I run jps, I see the connection attempt in the jstatd log, which looks like this:
Feb 1, 2011 11:50:34 AM sun.rmi.server.UnicastServerRef logCall
FINER: RMI TCP Connection(3)-127.0.0.1: [127.0.0.1: sun.rmi.registry.RegistryImpl[0:0:0, 0]: java.rmi.Remote lookup(ja va.lang.String)]
but on the jps side I get the following error:
Error communicating with remote host: Connection refused to host: 192.168.1.137; nested exception is:
java.net.ConnectException: Connection refused: connect
Based on the connection attempt listed in the jstatd log, I think jps is actually reaching the host, but for some reason is getting blocked. Is there some security policy I have set or some other setting somewhere I can change so that I can get jps to pull stats from the remote jstatd?
My guess is that you're only forwarding the RMI registry port (1099), but you need to also open another port.
Check which ports jstatd is listening on at the remote side:
# netstat -nap | grep jstatd
tcp 0 0 :::1099 :::* LISTEN 453/jstatd
tcp 0 0 :::58204 :::* LISTEN 453/jstatd
In this case you will need to forward port 58204 as well as 1099.
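For the SSH tunnel mentioned in the question, that means forwarding both ports, for example (note that the second, ephemeral port changes every time jstatd restarts):
ssh -L 1099:localhost:1099 -L 58204:localhost:58204 user@remote-host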
Here is how you could easily do this.
1. Launch ejstatd on your remote host this way (executing from the ejstatd folder): mvn exec:java -Dexec.args="-pr 2000 -ph 2001 -pv 2002"
2. Open those 3 ports on your remote host and make them available to your local machine: 2000, 2001 and 2002
3. On your local machine, you will be able to use jps, replacing <remotehost> with your remote host name: jps -m -l rmi://<remotehost>:2000
Disclaimer: I'm the author of the open source ejstatd tool
