How to fix java.net.SocketException: Permission denied on Ubuntu [duplicate]
It's very annoying to have this limitation on my development box, when there won't ever be any users other than me.
I'm aware of the standard workarounds, but none of them do exactly what I want:
authbind (The version in Debian testing, 1.0, only supports IPv4)
Using the iptables REDIRECT target to redirect a low port to a high port (the "nat" table is not yet implemented for ip6tables, the IPv6 version of iptables)
sudo (Running as root is what I'm trying to avoid)
SELinux (or similar). (This is just my dev box, I don't want to introduce a lot of extra complexity.)
Is there some simple sysctl variable to allow non-root processes to bind to "privileged" ports (ports less than 1024) on Linux, or am I just out of luck?
EDIT: In some cases, you can use capabilities to do this.
Okay, thanks to the people who pointed out the capabilities system and the CAP_NET_BIND_SERVICE capability. If you have a recent kernel, it is indeed possible to use this to start a service as non-root and still bind low ports. The short answer is that you do:
setcap 'cap_net_bind_service=+ep' /path/to/program
And then any time the program is executed thereafter, it will have the CAP_NET_BIND_SERVICE capability. setcap is in the Debian package libcap2-bin.
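For illustration, here is a minimal sketch of a Java server that binds a privileged port; it assumes the java binary itself (not the jar) has been granted cap_net_bind_service as above, and the class name is hypothetical:
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class LowPortServer {
    public static void main(String[] args) throws IOException {
        // Binding below 1024 normally requires root; with
        // cap_net_bind_service on the JVM binary it works as a plain user.
        try (ServerSocket server = new ServerSocket(80)) {
            System.out.println("Listening on port 80 as " + System.getProperty("user.name"));
            try (Socket client = server.accept()) {
                System.out.println("Connection from " + client.getRemoteSocketAddress());
            }
        }
    }
}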
Now for the caveats:
You will need at least a 2.6.24 kernel
This won't work if your file is a script (i.e. uses a #! line to launch an interpreter). In this case, as far as I understand, you'd have to apply the capability to the interpreter executable itself, which of course is a security nightmare, since any program using that interpreter will have the capability. I wasn't able to find any clean, easy way to work around this problem.
Linux will disable LD_LIBRARY_PATH on any program that has elevated privileges like setcap or suid. So if your program uses its own .../lib/, you might have to look into another option like port forwarding.
Resources:
capabilities(7) man page. Read this long and hard if you're going to use capabilities in a production environment. There are some really tricky details of how capabilities are inherited across exec() calls that are detailed here.
setcap man page
"Bind ports below 1024 without root on GNU/Linux": The document that first pointed me towards setcap.
Note: RHEL first added this in v6.
Update 2017:
Use authbind
Disclaimer (update per 2021): Note that authbind works via LD_PRELOAD, which only takes effect if your program uses libc; that may not be the case if your program is compiled with Go or any other toolchain that avoids C. If you use Go, set the kernel parameter for the protected port range instead; see the bottom of this post.
Authbind is much better than CAP_NET_BIND_SERVICE or a custom kernel.
CAP_NET_BIND_SERVICE grants trust to the binary but provides no control over per-port access.
Authbind grants trust to the user/group, provides control over per-port access, and supports both IPv4 and IPv6 (IPv6 support was added in later versions).
Install: apt-get install authbind
Configure access to relevant ports, e.g. 80 and 443 for all users and groups:
sudo touch /etc/authbind/byport/80
sudo touch /etc/authbind/byport/443
sudo chmod 777 /etc/authbind/byport/80
sudo chmod 777 /etc/authbind/byport/443
Execute your command via authbind
(optionally specifying --deep or other arguments, see man authbind):
authbind --deep /path/to/binary command line args
e.g.
authbind --deep java -jar SomeServer.jar
As a follow-up to Joshua's fabulous (= not recommended unless you know what you're doing) recommendation to hack the kernel:
I first posted it here.
Simple. With a normal or old kernel, you don't.
As pointed out by others, iptables can forward a port.
As also pointed out by others, CAP_NET_BIND_SERVICE can also do the job.
Of course, CAP_NET_BIND_SERVICE will fail if you launch your program from a script, unless you set the cap on the shell interpreter, which is pointless; you could just as well run your service as root...
e.g. for Java, you have to apply it to the JVM binary
sudo /sbin/setcap 'cap_net_bind_service=ep' /usr/lib/jvm/java-8-openjdk/jre/bin/java
Obviously, that then means any Java program can bind system ports.
Ditto for mono/.NET.
I'm also pretty sure xinetd isn't the best of ideas.
But since both methods are hacks, why not just remove the restriction altogether?
Nobody said you have to run a normal kernel, so you can just run your own.
You just download the source for the latest kernel (or the same you currently have).
Afterwards, you go to:
/usr/src/linux-<version_number>/include/net/sock.h:
There you look for this line
/* Sockets 0-1023 can't be bound to unless you are superuser */
#define PROT_SOCK 1024
and change it to
#define PROT_SOCK 0
If you don't want an insecure ssh situation, alter it to this instead:
#define PROT_SOCK 24
Generally, I'd use the lowest setting that you need, e.g. 80 for HTTP, or 25 when using SMTP on port 25 (the kernel check is snum < PROT_SOCK, so the port equal to PROT_SOCK itself is already bindable).
That's all.
Compile the kernel, and install it.
Reboot.
Finished - that stupid limit is GONE, and that also works for scripts.
Here's how you compile a kernel:
https://help.ubuntu.com/community/Kernel/Compile
# You can get the kernel-source via package `linux-source`, no manual download required
apt-get install linux-source fakeroot
mkdir ~/src
cd ~/src
tar xjvf /usr/src/linux-source-<version>.tar.bz2
cd linux-source-<version>
# Apply the changes to PROT_SOCK define in /include/net/sock.h
# Copy the kernel config file you are currently using
cp -vi /boot/config-`uname -r` .config
# Install the ncurses library, if you want to run menuconfig
apt-get install libncurses5 libncurses5-dev
# Run menuconfig (optional)
make menuconfig
# Define the number of threads you want to use when compiling (should be <number CPU cores> - 1), e.g. for quad-core
export CONCURRENCY_LEVEL=3
# Now compile the custom kernel
fakeroot make-kpkg --initrd --append-to-version=custom kernel-image kernel-headers
# And wait a long long time
cd ..
In a nutshell,
use iptables if you want to stay secure,
compile the kernel if you want to be sure this restriction never bothers you again.
sysctl method
Note:
On kernels from 4.11 onwards, updating the kernel is no longer required.
You can now set
sysctl net.ipv4.ip_unprivileged_port_start=80
(sysctl -w net.ipv4.ip_unprivileged_port_start=80 is equivalent).
Note that this does not survive a reboot. To persist the setting, add net.ipv4.ip_unprivileged_port_start=80 to /etc/sysctl.conf (or a file under /etc/sysctl.d/).
or via procfs
echo 80 | sudo tee /proc/sys/net/ipv4/ip_unprivileged_port_start
You can do a port redirect. This is what I do for a Silverlight policy server running on a Linux box:
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 943 -j REDIRECT --to-port 1300
For some reason, no one has mentioned lowering sysctl net.ipv4.ip_unprivileged_port_start to the value you need.
Example: we need to bind our app to port 443.
sysctl net.ipv4.ip_unprivileged_port_start=443
Some may say there is a potential security problem: unprivileged users may now bind to the other formerly privileged ports (444-1023).
But you can solve this problem easily with iptables, by blocking other ports:
iptables -I INPUT -p tcp --dport 444:1024 -j DROP
iptables -I INPUT -p udp --dport 444:1024 -j DROP
Comparison with other methods. This method:
is arguably even more secure than setting CAP_NET_BIND_SERVICE/setuid, since the application doesn't setuid at all, not even partly (capabilities effectively are a partial setuid).
For example, to catch a coredump of a capability-enabled application, you will need to change sysctl fs.suid_dumpable (which leads to other potential security problems).
Also, when CAP/suid is set, the /proc/PID directory is owned by root, so your non-root user will not have full information about, or control of, the running process; for example, the user will not (in the common case) be able to determine which connections belong to the application via /proc/PID/fd/ (netstat -aptn | grep PID).
has a security disadvantage: while your app (or any app that uses ports 443-1023) is down for some reason, another app could take the port. But this problem also applies to CAP/suid (in case you set it on an interpreter, e.g. java/nodejs) and iptables-redirect. Use the systemd-socket method to exclude this problem. Use the authbind method to only allow a specific user to bind.
doesn't require setting CAP/suid every time you deploy new version of application.
doesn't require application support/modification, like systemd-socket method.
doesn't require kernel rebuild (if running version supports this sysctl setting)
doesn't do LD_PRELOAD like the authbind/privbind method, which could potentially affect performance, security, or behavior (does it? I haven't tested). Otherwise, authbind is a really flexible and secure method.
outperforms the iptables REDIRECT/DNAT method, since it doesn't require address translation, connection state tracking, etc. This is only noticeable on high-load systems.
Depending on the situation, I would choose between sysctl, CAP, authbind and iptables-redirect. And this is great that we have so many options.
Or patch your kernel and remove the check.
(Option of last resort, not recommended).
In net/ipv4/af_inet.c, remove the two lines that read
if (snum && snum < PROT_SOCK && !capable(CAP_NET_BIND_SERVICE))
goto out;
and the kernel won't check privileged ports anymore.
The standard way is to make them "setuid" so that they start up as root, and then they throw away that root privilege as soon as they've bound to the port but before they start accepting connections to it. You can see good examples of that in the source code for Apache and INN. I'm told that Lighttpd is another good example.
Another example is Postfix, which uses multiple daemons that communicate through pipes, and only one or two of them (which do very little except accept or emit bytes) run as root and the rest run at a lower privilege.
Modern Linux supports /sbin/sysctl -w net.ipv4.ip_unprivileged_port_start=0.
You can setup a local SSH tunnel, eg if you want port 80 to hit your app bound to 3000:
sudo ssh $USERNAME@localhost -L 80:localhost:3000 -N
This has the advantage of working with script servers, and being very simple.
I know this is an old question, but now with recent (>= 4.3) kernels there is finally a good answer to this - ambient capabilities.
The quick answer is to grab a copy of the latest (as-yet-unreleased) version of libcap from git and compile it. Copy the resulting progs/capsh binary somewhere (/usr/local/bin is a good choice). Then, as root, start your program with
/usr/local/bin/capsh --keep=1 --user='your-service-user-name' \
--inh='cap_net_bind_service' --addamb='cap_net_bind_service' \
-- -c 'your-program'
In order, we are
Declaring that when we switch users, we want to keep our current capability sets
Switching user & group to 'your-service-user-name'
Adding the cap_net_bind_service capability to the inheritable & ambient sets
Forking bash -c 'your-command' (since capsh automatically starts bash with the arguments after --)
There's a lot going on under the hood here.
Firstly, we are running as root, so by default, we get a full set of capabilities. Included in this is the ability to switch uid & gid with the setuid and setgid syscalls. However, ordinarily when a program does this, it loses its set of capabilities - this is so that the old way of dropping root with setuid still works. The --keep=1 flag tells capsh to issue the prctl(PR_SET_KEEPCAPS) syscall, which disables the dropping of capabilities when changing user. The actual changing of users by capsh happens with the --user flag, which runs setuid and setgid.
The next problem we need to solve is how to set capabilities in a way that carries on after we exec our children. The capabilities system has always had an 'inheritable' set of capabilities, which is "a set of capabilities preserved across an execve(2)" [capabilities(7)]. Whilst this sounds like it solves our problem (just set the cap_net_bind_service capability to inheritable, right?), this actually only applies to privileged processes - and our process is not privileged anymore, because we already changed user (with the --user flag).
The new ambient capability set works around this problem - it is "a set of capabilities that are preserved across an execve(2) of a program that is not privileged." By putting cap_net_bind_service in the ambient set, when capsh exec's our server program, our program will inherit this capability and be able to bind listeners to low ports.
If you're interested to learn more, the capabilities manual page explains this in great detail. Running capsh through strace is also very informative!
File capabilities are not ideal, because they can break after a package update.
The ideal solution, IMHO, should be an ability to create a shell with inheritable CAP_NET_BIND_SERVICE set.
Here's a somewhat convoluted way to do this:
sg $DAEMONUSER "capsh --keep=1 --uid=`id -u $DAEMONUSER` \
--caps='cap_net_bind_service+pei' -- \
YOUR_COMMAND_GOES_HERE"
capsh utility can be found in libcap2-bin package in Debian/Ubuntu distributions. Here's what goes on:
sg changes the effective group ID to that of the daemon user. This is necessary because capsh leaves the GID unchanged, and we definitely do not want to keep the original group.
Sets bit 'keep capabilities on UID change'.
Changes UID to $DAEMONUSER
Drops all caps (at this moment all caps are still present because of --keep=1), except inheritable cap_net_bind_service
Executes your command ('--' is a separator)
The result is a process with specified user and group, and cap_net_bind_service privileges.
As an example, a line from ejabberd startup script:
sg $EJABBERDUSER "capsh --keep=1 --uid=`id -u $EJABBERDUSER` --caps='cap_net_bind_service+pei' -- $EJABBERD --noshell -detached"
Two other simple possibilities: Daemon and Proxy
Daemon
There is an old (unfashionable) solution to this: a daemon that binds the low port and hands control to your daemon. It's called inetd (or xinetd).
The cons are:
your daemon needs to talk on stdin/stdout (if you don't control the daemon -- if you don't have the source -- then this is perhaps a showstopper, although some services may have an inetd-compatibility flag)
a new daemon process is forked for every connection
it's one extra link in the chain
Pros:
available on any old UNIX
once your sysadmin has set up the config, you're good to go about your development (when you re-build your daemon you might lose setcap capabilities, and then you'd have to go back to your admin: "please sir...")
daemon doesn't have to worry about that networking stuff, just has to talk on stdin/stdout
can configure to execute your daemon as a non-root user, as requested
Proxy
Another alternative: a hacked-up proxy (netcat or even something more robust) from the privileged port to some arbitrary high-numbered port where you can run your target daemon. (Netcat is obviously not a production solution, but "just my dev box", right?). This way you could continue to use a network-capable version of your server, would only need root/sudo to start proxy (at boot), wouldn't be relying on complex/potentially fragile capabilities.
My "standard workaround" uses socat as the user-space redirector:
socat tcp6-listen:80,fork tcp6:8080
Beware that this won't scale: forking is expensive, but it's the way socat works.
Linux supports capabilities to support more fine-grained permissions than just "this application is run as root". One of those capabilities is CAP_NET_BIND_SERVICE which is about binding to a privileged port (<1024).
Unfortunately I don't know how to exploit that to run an application as non-root while still giving it CAP_NET_BIND_SERVICE (probably using setcap, but there's bound to be an existing solution for this).
systemd is a sysvinit replacement which has an option to launch a daemon with specific capabilities. Options Capabilities=, CapabilityBoundingSet= in systemd.exec(5) manpage.
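Note: newer systemd (v229 and later) also offers AmbientCapabilities=, which does survive the User= switch; a minimal sketch, with placeholder service path and user name:
[Service]
User=someuser
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStart=/usr/local/bin/your-daemon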
TLDR: For "the answer" (as I see it), jump down to the >>TLDR<< part in this answer.
OK, I've figured it out (for real this time), the answer to this question, and this answer of mine is also a way of apologizing for promoting another answer (both here and on twitter) that I thought was "the best", but after trying it, discovered that I was mistaken about that. Learn from my mistake kids: don't promote something until you've actually tried it yourself!
Again, I reviewed all the answers here. I've tried some of them (and chose not to try others because I simply didn't like the solutions). I thought that the solution was to use systemd with its Capabilities= and CapabilityBoundingSet= settings. After wrestling with this for some time, I discovered that this is not the solution because:
Capabilities are intended to restrict root processes!
As the OP wisely stated, it is always best to avoid that (for all your daemons if possible!).
You cannot use the Capabilities-related options with User= and Group= in systemd unit files, because capabilities are ALWAYS reset when execve is called. In other words, when systemd forks and drops its perms, the capabilities are reset. There is no way around this, and all that binding logic in the kernel is based around uid=0, not capabilities. This means that it is unlikely that Capabilities will ever be the right answer to this question (at least any time soon). Incidentally, setcap, as others have mentioned, is not a solution. It didn't work for me, it doesn't work nicely with scripts, and it is reset anyway whenever the file changes.
In my meager defense, I did state (in the comment I've now deleted), that James' iptables suggestion (which the OP also mentions), was the "2nd best solution". :-P
>>TLDR<<
The solution is to combine systemd with on-the-fly iptables commands, like this (taken from DNSChain):
[Unit]
Description=dnschain
After=network.target
Wants=namecoin.service
[Service]
ExecStart=/usr/local/bin/dnschain
Environment=DNSCHAIN_SYSD_VER=0.0.1
PermissionsStartOnly=true
ExecStartPre=/sbin/sysctl -w net.ipv4.ip_forward=1
ExecStartPre=-/sbin/iptables -D INPUT -p udp --dport 5333 -j ACCEPT
ExecStartPre=-/sbin/iptables -t nat -D PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 5333
ExecStartPre=/sbin/iptables -A INPUT -p udp --dport 5333 -j ACCEPT
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 5333
ExecStopPost=/sbin/iptables -D INPUT -p udp --dport 5333 -j ACCEPT
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 5333
User=dns
Group=dns
Restart=always
RestartSec=5
WorkingDirectory=/home/dns
PrivateTmp=true
NoNewPrivileges=true
ReadOnlyDirectories=/etc
# Unfortunately, capabilities are basically worthless because they're designed to restrict root daemons. Instead, we use iptables to listen on privileged ports.
# Capabilities=cap_net_bind_service+pei
# SecureBits=keep-caps
[Install]
WantedBy=multi-user.target
Here we accomplish the following:
The daemon listens on 5333, but connections are successfully accepted on 53 thanks to iptables
We can include the commands in the unit file itself, and thus we save people headaches. systemd cleans up the firewall rules for us, making sure to remove them when the daemon isn't running.
We never run as root, and we make privilege escalation impossible (at least systemd claims to), supposedly even if the daemon is compromised and sets uid=0.
iptables is still, unfortunately, quite an ugly and difficult-to-use utility. If the daemon is listening on eth0:0 instead of eth0, for example, the commands are slightly different.
With systemd, you just need to slightly modify your service to accept preactivated sockets.
You can then use systemd socket activation.
No capabilities, iptables, or other tricks are needed.
This is the content of the relevant systemd files from this example of a simple Python http server:
File httpd-true.service
[Unit]
Description=Httpd true
[Service]
ExecStart=/usr/local/bin/httpd-true
User=subsonic
PrivateTmp=yes
File httpd-true.socket
[Unit]
Description=HTTPD true
[Socket]
ListenStream=80
[Install]
WantedBy=default.target
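To use it, enable and start the socket unit; systemd itself binds port 80 as root and hands the socket to the unprivileged service on the first connection:
sudo systemctl enable --now httpd-true.socket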
Port redirect made the most sense for us, but we ran into an issue where our application would resolve a URL locally that also needed to be re-routed (that means you, Shindig).
This will also allow you to be redirected when accessing the url on the local machine.
iptables -A PREROUTING -t nat -p tcp --dport 80 -j REDIRECT --to-port 8080
iptables -A OUTPUT -t nat -p tcp --dport 80 -j REDIRECT --to-port 8080
At startup:
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
Then you can bind to the port you forward to.
There is also the 'djb way'. You can use this method to start your process as root running on any port under tcpserver, then it will hand control of the process to the user you specify immediately after the process starts.
#!/bin/sh
# Note: UID is a read-only variable in bash, so use different variable names.
DAEMON_UID=$(id -u username)
DAEMON_GID=$(id -g username)
exec tcpserver -u "${DAEMON_UID}" -g "${DAEMON_GID}" -RHl0 0 port /path/to/binary &
For more info, see: http://thedjbway.b0llix.net/daemontools/uidgid.html
Use the privbind utility: it allows an unprivileged application to bind to reserved ports.
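A sketch of typical usage (check man privbind for your version; the user name is a placeholder):
sudo apt-get install privbind
sudo privbind -u www-data /path/to/server args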
Redirect port 80 to 8080, and open port 80:
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
and then run program on port 8080 as a normal user.
you will then be able to access http://127.0.0.1 on port 80
I tried the iptables PREROUTING REDIRECT method. In older kernels it seems this type of rule wasn't supported for IPv6. But apparently it is now supported in ip6tables v1.4.18 and Linux kernel v3.8.
I also found that PREROUTING REDIRECT doesn't work for connections initiated within the machine. To make it work for connections from the local machine, add an OUTPUT rule as well; see iptables port redirect not working for localhost. E.g. something like:
iptables -t nat -I OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080
I also found that PREROUTING REDIRECT also affects forwarded packets. That is, if the machine is also forwarding packets between interfaces (e.g. if it's acting as a Wi-Fi access point connected to an Ethernet network), then the iptables rule will also catch connected clients' connections to Internet destinations, and redirect them to the machine. That's not what I wanted—I only wanted to redirect connections that were directed to the machine itself. I found I can make it only affect packets addressed to the box, by adding -m addrtype --dst-type LOCAL. E.g. something like:
iptables -A PREROUTING -t nat -p tcp --dport 80 -m addrtype --dst-type LOCAL -j REDIRECT --to-port 8080
One other possibility is to use TCP port forwarding. E.g. using socat:
socat TCP4-LISTEN:www,reuseaddr,fork TCP4:localhost:8080
However one disadvantage with that method is, the application that is listening on port 8080 then doesn't know the source address of incoming connections (e.g. for logging or other identification purposes).
Since the OP's context is just development/testing, less-than-sleek solutions may be helpful:
setcap can be used on a script's interpreter to grant capabilities to scripts. If running setcap on the global interpreter binary is not acceptable, make a local copy of the binary (any user can) and get root to run setcap on that copy. Python 2 (at least) works properly with a local copy of the interpreter in your script development tree. No suid is needed, so the root user can control which capabilities users have access to.
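Concretely, the one-time setup might look like this (using the same names as the watcher script below; my_script.py is a placeholder):
cp /usr/bin/python2.7 ./python_net_raw
sudo setcap cap_net_raw+ep ./python_net_raw
./python_net_raw my_script.py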
If you need to track system-wide updates to the interpreter, use a shell script like the following to run your script:
#!/bin/sh
#
# Watch for updates to the Python2 interpreter
PRG=python_net_raw
PRG_ORIG=/usr/bin/python2.7
cmp $PRG_ORIG $PRG || {
echo ""
echo "***** $PRG_ORIG has been updated *****"
echo "Run the following commands to refresh $PRG:"
echo ""
echo " $ cp $PRG_ORIG $PRG"
echo " # setcap cap_net_raw+ep $PRG"
echo ""
exit
}
./$PRG $*
Answer as of September 2015:
ip6tables now supports IPV6 NAT: http://www.netfilter.org/projects/iptables/files/changes-iptables-1.4.17.txt
You will need kernel 3.7+
Proof:
[09:09:23] root@X:~# ip6tables -t nat -vnL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 REDIRECT tcp eth0 * ::/0 ::/0 tcp dpt:80 redir ports 8080
0 0 REDIRECT tcp eth0 * ::/0 ::/0 tcp dpt:443 redir ports 1443
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 6148 packets, 534K bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 6148 packets, 534K bytes)
pkts bytes target prot opt in out source destination
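The rules in that listing were created with commands along these lines (the interface name is an assumption based on the output above):
ip6tables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
ip6tables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 1443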
There is a worked example of doing this with a file capable shared library linked to an unprivileged application on the libcap website. It was recently mentioned in an answer to a question about adding capabilities to shared libraries.
Related
Connection to localhost refused while running dockerised app [duplicate]
I have Nginx running inside a docker container. I have MySQL running on the host system. I want to connect to MySQL from within my container. MySQL is only binding to the localhost device. Is there any way to connect to this MySQL, or any other program on localhost, from within this docker container? This question is different from "How to get the IP address of the docker host from inside a docker container" because the IP address of the docker host could be the public IP or the private IP in the network, which may or may not be reachable from within the docker container (I mean public IP if hosted at AWS or something). Even if you have the IP address of the docker host, it does not mean you can connect to the docker host from within the container given that IP address, as your Docker network may be overlay, host, bridge, macvlan, none, etc., which restricts the reachability of that IP address.
Edit: If you are using Docker-for-mac or Docker-for-Windows 18.03+, connect to your MySQL service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).
If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option. Otherwise, read below.
TLDR
Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.
Note: This mode only works on Docker for Linux, per the documentation.
Note on docker container networking modes
Docker offers different networking modes when running containers. Depending on the mode you choose, you would connect to your MySQL database running on the docker host differently.
docker run --network="bridge" (default)
Docker creates a bridge named docker0 by default. Both the docker host and the docker containers have an IP address on that bridge.
On the Docker host, type sudo ip addr show docker0 and you will have an output looking like:
[vagrant@docker:~] $ sudo ip addr show docker0
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::5484:7aff:fefe:9799/64 scope link
       valid_lft forever preferred_lft forever
So here my docker host has the IP address 172.17.42.1 on the docker0 network interface.
Now start a new container and get a shell on it: docker run --rm -it ubuntu:trusty bash, and within the container type ip addr show eth0 to discover how its main network interface is set up:
root@e77f6a1b3740:/# ip addr show eth0
863: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 66:32:13:f0:f1:e3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.1.192/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6432:13ff:fef0:f1e3/64 scope link
       valid_lft forever preferred_lft forever
Here my container has the IP address 172.17.1.192. Now look at the routing table:
root@e77f6a1b3740:/# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.17.42.1     0.0.0.0         UG    0      0        0 eth0
172.17.0.0      *               255.255.0.0     U     0      0        0 eth0
So the IP address of the docker host, 172.17.42.1, is set as the default route and is accessible from your container.
root@e77f6a1b3740:/# ping 172.17.42.1
PING 172.17.42.1 (172.17.42.1) 56(84) bytes of data.
64 bytes from 172.17.42.1: icmp_seq=1 ttl=64 time=0.070 ms
64 bytes from 172.17.42.1: icmp_seq=2 ttl=64 time=0.201 ms
64 bytes from 172.17.42.1: icmp_seq=3 ttl=64 time=0.116 ms
docker run --network="host"
Alternatively you can run a docker container with network settings set to host. Such a container will share the network stack with the docker host, and from the container's point of view, localhost (or 127.0.0.1) will refer to the docker host.
Be aware that any port opened in your docker container would be opened on the docker host, and this without requiring the -p or -P docker run option.
IP config on my docker host:
[vagrant@docker:~] $ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:98:dc:aa brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe98:dcaa/64 scope link
       valid_lft forever preferred_lft forever
and from a docker container in host mode:
[vagrant@docker:~] $ docker run --rm -it --network=host ubuntu:trusty ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:98:dc:aa brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe98:dcaa/64 scope link
       valid_lft forever preferred_lft forever
As you can see, the docker host and docker container share the exact same network interface and as such have the same IP address.
Connecting to MySQL from containers
bridge mode
To access MySQL running on the docker host from containers in bridge mode, you need to make sure the MySQL service is listening for connections on the 172.17.42.1 IP address. To do so, make sure you have either bind-address = 172.17.42.1 or bind-address = 0.0.0.0 in your MySQL config file (my.cnf).
If you need to set an environment variable with the IP address of the gateway, you can run the following code in a container:
export DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}')
then in your application, use the DOCKER_HOST_IP environment variable to open the connection to MySQL.
Note: if you use bind-address = 0.0.0.0 your MySQL server will listen for connections on all network interfaces. That means your MySQL server could be reached from the Internet; make sure to set up firewall rules accordingly.
Note 2: if you use bind-address = 172.17.42.1 your MySQL server won't listen for connections made to 127.0.0.1. Processes running on the docker host that want to connect to MySQL would have to use the 172.17.42.1 IP address.
host mode
To access MySQL running on the docker host from containers in host mode, you can keep bind-address = 127.0.0.1 in your MySQL configuration and connect to 127.0.0.1 from your containers:
[vagrant@docker:~] $ docker run --rm -it --network=host mysql mysql -h 127.0.0.1 -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 36
Server version: 5.5.41-0ubuntu0.14.04.1 (Ubuntu)
mysql>
Note: do use mysql -h 127.0.0.1 and not mysql -h localhost; otherwise the MySQL client would try to connect using a unix socket.
For all platforms
Docker v20.10 and above (since December 14th 2020)
Use your internal IP address or connect to the special DNS name host.docker.internal, which will resolve to the internal IP address used by the host.
On Linux, add --add-host=host.docker.internal:host-gateway to your Docker command to enable this feature.
To enable this in Docker Compose on Linux, add the following lines to the container definition:
extra_hosts:
    - "host.docker.internal:host-gateway"
For older macOS and Windows versions of Docker
Docker v18.03 and above (since March 21st 2018)
Use your internal IP address or connect to the special DNS name host.docker.internal, which will resolve to the internal IP address used by the host.
Linux support pending: https://github.com/docker/for-linux/issues/264
For older macOS versions of Docker
Docker for Mac v17.12 to v18.02
Same as above but use docker.for.mac.host.internal instead.
Docker for Mac v17.06 to v17.11
Same as above but use docker.for.mac.localhost instead.
Docker for Mac 17.05 and below
To access the host machine from the docker container, you must attach an IP alias to your network interface. You can bind whichever IP you want; just make sure you're not using it for anything else.
sudo ifconfig lo0 alias 123.123.123.123/24
Then make sure that your server is listening on the IP mentioned above or 0.0.0.0. If it's listening on localhost 127.0.0.1, it will not accept the connection.
Then just point your docker container to this IP and you can access the host machine!
To test, you can run something like curl -X GET 123.123.123.123:3000 inside the container.
The alias will reset on every reboot, so create a start-up script if necessary.
Solution and more documentation here: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
Use host.docker.internal instead of localhost
I'm doing a hack similar to the above posts: get the local IP and map it to an alias name (DNS) in the container. The major problem is getting the host IP address dynamically, with a simple script that works on both Linux and OSX. This script works in both environments (even on Linux distributions with "$LANG" != "en_*" configured):
ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1
So, using Docker Compose, the full configuration will be:
Startup script (docker-run.sh):
export DOCKERHOST=$(ifconfig | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)
docker-compose -f docker-compose.yml up
docker-compose.yml:
myapp:
  build: .
  ports:
    - "80:80"
  extra_hosts:
    - "dockerhost:$DOCKERHOST"
Then change http://localhost to http://dockerhost in your code.
For a more advanced guide on customizing the DOCKERHOST script, take a look at this post with an explanation of how it works.
Solution for Linux (kernel >= 3.6).
OK, your localhost server has the default docker interface docker0 with IP address 172.17.0.1. Your container started with default network settings --net="bridge".
Enable route_localnet for the docker0 interface:
$ sysctl -w net.ipv4.conf.docker0.route_localnet=1
Add these rules to iptables:
$ iptables -t nat -I PREROUTING -i docker0 -d 172.17.0.1 -p tcp --dport 3306 -j DNAT --to 127.0.0.1:3306
$ iptables -t filter -I INPUT -i docker0 -d 127.0.0.1 -p tcp --dport 3306 -j ACCEPT
Create a MySQL user with access from '%', which means from anyone, excluding localhost:
CREATE USER 'user'@'%' IDENTIFIED BY 'password';
Change the mysql-server address in your script to 172.17.0.1.
From the kernel documentation:
route_localnet - BOOLEAN: Do not consider loopback addresses as martian source or destination while routing. This enables the use of 127/8 for local routing purposes (default FALSE).
This worked for me on an NGINX/PHP-FPM stack without touching any code or networking, where the app was just expecting to be able to connect to localhost: mount mysqld.sock from the host to inside the container.
Find the location of the mysql.sock file on the host running mysql:
netstat -ln | awk '/mysql(.*)?\.sock/ { print $9 }'
Mount that file to where it's expected in the docker container:
docker run -v /hostpath/to/mysqld.sock:/containerpath/to/mysqld.sock
Possible locations of mysqld.sock:
/tmp/mysqld.sock
/var/run/mysqld/mysqld.sock
/var/lib/mysql/mysql.sock
/Applications/MAMP/tmp/mysql/mysql.sock # if running via MAMP
Until host.docker.internal is working for every platform, you can use my container acting as a NAT gateway without any manual setup: https://github.com/qoomon/docker-host
Simplest solution for Mac OSX:
Just use the IP address of your Mac. On the Mac, run this to get the IP address and use it from within the container:
ifconfig | grep 'inet 192' | awk '{ print $2}'
As long as the server running locally on your Mac, or in another docker container, is listening to 0.0.0.0, the docker container will be able to reach out at that address.
If you just want to access another docker container that is listening on 0.0.0.0, you can use 172.17.0.1.
Very simple and quick: check your host IP with ifconfig (Linux) or ipconfig (Windows) and then create a docker-compose.yml:
version: '3' # specify docker-compose version
services:
  nginx:
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "8080:80" # specify port mapping
    extra_hosts:
      - "dockerhost:<yourIP>"
This way, your container will be able to access your host. When accessing your DB, remember to use the name you specified before (in this case dockerhost) and the port of your host on which the DB is running.
Several solutions come to mind:
Move your dependencies into containers first
Make your other services externally accessible and connect to them with that external IP
Run your containers without network isolation
Avoid connecting over the network; use a socket that is mounted as a volume instead
The reason this doesn't work out of the box is that containers run with their own network namespace by default. That means localhost (or 127.0.0.1 pointing to the loopback interface) is unique per container. Connecting to this will connect to the container itself, and not services running outside of docker or inside of a different docker container.
Option 1: If your dependency can be moved into a container, I would do this first. It makes your application stack portable as others try to run your container on their own environment. And you can still publish the port on your host where other services that have not been migrated can still reach it. You can even publish the port to the localhost interface on your docker host to avoid it being externally accessible, with a syntax like -p 127.0.0.1:3306:3306 for the published port.
Option 2: There are a variety of ways to detect the host IP address from inside of the container, but each works only in a limited number of scenarios (e.g. requiring Docker for Mac). The most portable option is to inject your host IP into the container with something like an environment variable or configuration file, e.g.:
docker run --rm -e "HOST_IP=$(ip route get 1 | sed -n 's/^.*src \([0-9.]*\) .*$/\1/p')" ...
This does require that your service is listening on that external interface, which could be a security concern. For other methods to get the host IP address from inside of the container, see this post.
Slightly less portable is to use host.docker.internal. This works in current versions of Docker for Windows and Docker for Mac. And in 20.10, the capability has been added to Docker for Linux when you pass a special host entry with:
docker run --add-host host.docker.internal:host-gateway ...
The host-gateway is a special value added in Docker 20.10 that automatically expands to a host IP. For more details see this PR.
Option 3: Running without network isolation, i.e. running with --net host, means your application is running on the host network namespace. This is less isolation for the container, and it means you cannot access other containers over a shared docker network with DNS (instead, you need to use published ports to access other containerized applications). But for applications that need to access other services on the host that are only listening on 127.0.0.1 on the host, this can be the easiest option.
Option 4: Various services also allow access over a filesystem-based socket. This socket can be mounted into the container as a bind-mounted volume, allowing you to access the host service without going over the network. For access to the docker engine, you often see examples of mounting /var/run/docker.sock into the container (giving that container root access to the host). With mysql, you can try something like -v /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysql.sock and then connect to localhost, which mysql converts to using the socket.
Solution for Windows 10
Docker Community Edition 17.06.0-ce-win18 2017-06-28 (stable)
You can use the DNS name of the host, docker.for.win.localhost, to resolve to the internal IP. (Warning: some sources mention windows, but it should be win.)
Overview
I needed to do something similar, that is, connect from my Docker container to my localhost, which was running the Azure Storage Emulator and CosmosDB Emulator. The Azure Storage Emulator by default listens on 127.0.0.1; while you can change the IP it's bound to, I was looking for a solution that would work with default settings.
This also works for connecting from my Docker container to SQL Server and IIS, both running locally on my host with default port settings.
For Windows, I changed the database URL in my Spring configuration:
spring.datasource.url=jdbc:postgresql://host.docker.internal:5432/apidb
Then build the image and run. It worked for me.
None of the answers worked for me when using Docker Toolbox on Windows 10 Home, but 10.0.2.2 did, since it uses VirtualBox which exposes the host to the VM on this address.
This is not an answer to the actual question; this is how I solved a similar problem. The solution comes totally from: Define Docker Container Networking so Containers can Communicate (thanks to Nic Raboy). Leaving this here for others who might want to do REST calls between one container and another. Answers the question: what to use in place of localhost in a docker environment?
See how your network looks: docker network ls
Create a new network: docker network create my-net
Start the first container: docker run -d -p 5000:5000 --network="my-net" --name "first_container" <MyImage1:v0.1>
Check network settings for the first container: docker inspect first_container. "Networks": should have 'my-net'.
Start the second container: docker run -d -p 6000:6000 --network="my-net" --name "second_container" <MyImage2:v0.1>
Check network settings for the second container: docker inspect second_container. "Networks": should have 'my-net'.
ssh into your second container: docker exec -it second_container sh or docker exec -it second_container bash.
Inside the second container, you can ping the first container with ping first_container. Also, code calls such as http://localhost:5000 can be replaced by http://first_container:5000.
If you're running with --net=host, localhost should work fine. If you're using default networking, use the static IP 172.17.0.1. See this - https://stackoverflow.com/a/48547074/14120621
For those on Windows, assuming you're using the bridge network driver, you'll want to specifically bind MySQL to the IP address of the hyper-v network interface. This is done via the configuration file under the normally hidden C:\ProgramData\MySQL folder. Binding to 0.0.0.0 will not work. The address needed is shown in the docker configuration as well, and in my case was 10.0.75.1.
Edit: I ended up prototyping out the concept on GitHub. Check out: https://github.com/sivabudh/system-in-a-box
First, my answer is geared towards 2 groups of people: those who use a Mac, and those who use Linux.
The host network mode doesn't work on a Mac. You have to use an IP alias, see: https://stackoverflow.com/a/43541681/2713729
What is a host network mode? See: https://docs.docker.com/engine/reference/run/#/network-settings
Secondly, for those of you who are using Linux (my direct experience was with Ubuntu 14.04 LTS and I'm upgrading to 16.04 LTS in production soon), yes, you can make the service running inside a Docker container connect to localhost services running on the Docker host (e.g. your laptop).
How? The key is that when you run the Docker container, you have to run it in host mode. The command looks like this:
docker run --network="host" -id <Docker image ID>
When you do an ifconfig (you will need to apt-get install net-tools in your container for ifconfig to be callable) inside your container, you will see that the network interfaces are the same as the ones on the Docker host (e.g. your laptop).
It's important to note that I'm a Mac user, but I run Ubuntu under Parallels, so using a Mac is not a disadvantage. ;-)
And this is how you connect an NGINX container to the MySQL running on localhost.
For Linux, where you cannot change the interface the localhost service binds to
There are two problems we need to solve:
Getting the IP of the host
Making our localhost service available to Docker
The first problem can be solved using qoomon's docker-host image, as given by other answers. You will need to add this container to the same bridge network as your other container so that you can access it. Open a terminal inside your container and ensure that you can ping dockerhost:
bash-5.0# ping dockerhost
PING dockerhost (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.523 ms
Now, the harder problem: making the service accessible to docker. We can use telnet to check if we can access a port on the host (you may need to install this). The problem is that our container will only be able to access services that bind to all interfaces, such as SSH:
bash-5.0# telnet dockerhost 22
SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
But services bound only to localhost will be inaccessible:
bash-5.0# telnet dockerhost 1025
telnet: can't connect to remote host (172.20.0.2): Connection refused
The proper solution here would be to bind the service to docker's bridge network. However, this answer assumes that it is not possible for you to change this. So we will instead use iptables.
First, we need to find the name of the bridge network that docker is using, with ifconfig. If you are using an unnamed bridge, this will just be docker0. However, if you are using a named network, you will have a bridge starting with br- that docker will be using instead. Mine is br-5cd80298d6f4.
Once we have the name of this bridge, we need to allow routing from this bridge to localhost. This is disabled by default for security reasons:
sysctl -w net.ipv4.conf.<bridge_name>.route_localnet=1
Now to set up our iptables rule. Since our container can only access ports on the docker bridge network, we are going to pretend that our service is actually bound to a port on this network. To do this, we will forward all requests to <docker_bridge>:port to localhost:port.
iptables -t nat -A PREROUTING -p tcp -i <docker_bridge_name> --dport <service_port> -j DNAT --to-destination 127.0.0.1:<service_port>
For example, for my service on port 1025:
iptables -t nat -A PREROUTING -p tcp -i br-5cd80298d6f4 --dport 1025 -j DNAT --to-destination 127.0.0.1:1025
You should now be able to access your service from the container:
bash-5.0# telnet dockerhost 1025
220 127.0.0.1 ESMTP Service Ready
First see this answer for the options that you have to fix this problem. But if you use docker-compose, you can add network_mode: host to your service and then use 127.0.0.1 to connect to the local host. This is just one of the options described in the answer above. Below you can find how I modified docker-compose.yml from https://github.com/geerlingguy/php-apache-container.git:
---
version: "3"
services:
  php-apache:
+   network_mode: host
    image: geerlingguy/php-apache:latest
    container_name: php-apache
    ...
+ indicates the line I added.
[Additional info] This has also worked in version 2.2, and both network_mode: "host" and network_mode: host work in docker-compose.
I disagree with the answer from Thomasleveil. Making mysql bind to 172.17.42.1 will prevent other programs using the database on the host to reach it. This will only work if all your database users are dockerized. Making mysql bind to 0.0.0.0 will open the db to outside world, which is not only a very bad thing to do, but also contrary to what the original question author wants to do. He explicitly says "The MySql is running on localhost and not exposing a port to the outside world, so its bound on localhost" To answer the comment from ivant "Why not bind mysql to docker0 as well?" This is not possible. The mysql/mariadb documentation explicitly says it is not possible to bind to several interfaces. You can only bind to 0, 1, or all interfaces. As a conclusion, I have NOT found any way to reach the (localhost only) database on the host from a docker container. That definitely seems like a very very common pattern, but I don't know how to do it.
Try this:
version: '3.5'
services:
  yourservice-here:
    container_name: container_name
    ports:
      - "4000:4000"
    extra_hosts: # <---- here
      - localhost:192.168.1.202
      - or-vitualhost.local:192.168.1.202
To get 192.168.1.202, use ifconfig.
This worked for me. Hope this helps!
In the 7 years since the question was asked, either docker has changed or no one tried this way, so I will include my own answer. I found that all the other answers use complex methods. Today, I needed this and found 2 very simple ways:
Use ipconfig or ifconfig on your host and make a note of all IP addresses. At least two of them can be used by the container. I have a fixed local network address on the WiFi LAN adapter: 192.168.1.101 (this could be 10.0.1.101; the result will change depending on your router). I use WSL on Windows, and it has its own vEthernet address: 172.19.192.1.
Use host.docker.internal. Most answers have this or another form of it, depending on OS. The name suggests it is now globally used by docker.
A third option is to use the WAN address of the machine, or in other words the IP given by the service provider. However, this may not work if the IP is not static, and it requires routing and firewall settings.
You need to know the gateway! My solution with a local server was to expose it under 0.0.0.0:8000, then run docker with a subnet and run the container like:
docker network create --subnet=172.35.0.0/16 --gateway 172.35.0.1 SUBNET35
docker run -d -p 4444:4444 --net SUBNET35 <container-you-want-run-place-here>
So now you can access your loopback through http://172.35.0.1:8000
Connect to the gateway address.
❯ docker network inspect bridge | grep Gateway
"Gateway": "172.17.0.1"
Make sure the process on the host is listening on this interface, or on all interfaces, and is started after docker. If using systemd, you can add the below to make sure it is started after docker:
[Unit]
After=docker.service
Example:
❯ python -m http.server &> /dev/null &
[1] 149976
❯ docker run --rm python python -c "from urllib.request import urlopen;print(b'Directory listing for' in urlopen('http://172.17.0.1:8000').read())"
True
Here is my solution; it works for my case:
Set the local mysql server to public access by commenting out bind-address = 127.0.0.1 in /etc/mysql/mysql.conf.d
Restart the mysql server: sudo /etc/init.d/mysql restart
Run the following commands to open root access from any host:
mysql -uroot -proot
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Create an sh script, run_docker.sh:
#!/bin/bash
HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print \$2}' | cut -d / -f 1`
docker run -it -d --name web-app \
    --add-host=local:${HOSTIP} \
    -p 8080:8080 \
    -e DATABASE_HOST=${HOSTIP} \
    -e DATABASE_PORT=3306 \
    -e DATABASE_NAME=demo \
    -e DATABASE_USER=root \
    -e DATABASE_PASSWORD=root \
    sopheamak/springboot_docker_mysql
Or run with docker-compose:
version: '2.1'
services:
  tomcatwar:
    extra_hosts:
      - "local:10.1.2.232"
    image: sopheamak/springboot_docker_mysql
    ports:
      - 8080:8080
    environment:
      - DATABASE_HOST=local
      - DATABASE_USER=root
      - DATABASE_PASSWORD=root
      - DATABASE_NAME=demo
      - DATABASE_PORT=3306
You can get the host IP using the alpine image:
docker run --rm alpine ip route | awk 'NR==1 {print $3}'
This would be more consistent, as you're always using alpine to run the command.
Similar to Mariano's answer, you can use the same command to set an environment variable:
DOCKER_HOST=$(docker run --rm alpine ip route | awk 'NR==1 {print $3}') docker-compose up
You can use a net alias for your machine:
OSX
sudo ifconfig lo0 alias 123.123.123.123/24 up
Linux
sudo ifconfig lo:0 123.123.123.123 up
Then from the container you can reach the machine at 123.123.123.123.
CGroups and Namespaces play a major role in the container ecosystem. Namespaces provide a layer of isolation: each container runs in a separate namespace, and its access is limited to that namespace. CGroups control the resource utilization of each container, whereas Namespaces control what a process can see and access of the respective resource.
Here is the basic understanding of the solution approach you could follow.
Use a Network Namespace
When a container spawns out of an image, a network interface is defined and created. This gives the container a unique IP address and interface.
$ docker run -it alpine ifconfig
By changing the namespace to host, the container's network does not remain isolated to its interface; the process will have access to the host machine's network interface.
$ docker run -it --net=host alpine ifconfig
If the process listens on ports, they'll be listened on the host interface and mapped to the container.
Use a PID Namespace
Changing the PID namespace allows a container to interact with processes beyond its normal scope. This container will run in its own namespace:
$ docker run -it alpine ps aux
By changing the namespace to the host, the container can also see all the other processes running on the system:
$ docker run -it --pid=host alpine ps aux
Sharing a Namespace
It is a bad practice to do this in production because you are breaking out of the container security model, which might open up vulnerabilities and easy access to eavesdroppers. It is only for debugging tools and understanding the loopholes in container security.
The first container is an nginx server. This will create a new network and process namespace. This container will bind itself to port 80 of the newly created network interface:
$ docker run -d --name http nginx:alpine
Another container can now reuse this namespace:
$ docker run --net=container:http mohan08p/curl curl -s localhost
Also, this container can see the interface with the processes in the shared container:
$ docker run --pid=container:http alpine ps aux
This will allow you to give more privileges to containers without changing or restarting the application. In a similar way you can connect to mysql on the host, and run and debug your application. But it's not recommended to go this way. Hope it helps.
Until the fix is merged into the master branch, to get the host IP just run this from inside of the container:
ip -4 route list match 0/0 | cut -d' ' -f3
(as suggested by @Mahoney here).
I solved it by creating a user in MySQL for the container's IP:
$ sudo mysql
mysql> create user 'username'@'172.17.0.2' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on database_name.* to 'username'@'172.17.0.2' with grant option;
Query OK, 0 rows affected (0.00 sec)
$ sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf
bind-address = 172.17.0.1
$ sudo systemctl restart mysql.service
Then on the container:
jdbc:mysql://172.17.0.1:3306/database_name
Cannot use both Tomcat server and my internet connection
I cannot have my Tomcat server started and at the same time use the internet. Either I can start Tomcat (in Eclipse) and the internet is not available, or I can access the internet but Tomcat cannot be started.
Here is the original problem I had when I first wanted to use Tomcat and display my html page on localhost:
GRAVE: StandardServer.await: create[localhost:8005]:
I found a way to start the Tomcat server. In the terminal:
sudo lsof -i : 8005 # checks port 8005
sudo route -n flush
sudo route add default 192.168.1.1
Then I can use Tomcat and localhost:8080, but my internet connection is dead.
If I want my internet connection, I stop the Tomcat server by clicking on the red square in Eclipse, and then in the terminal I do:
sudo route -n flush
sudo route add default 192.168.0.1
Then I can use the internet, but Tomcat cannot be restarted; I have to go through the first process again.
This is of course a very boring process, and I would like to know what's wrong and how I could fix it. I use Tomcat 9 / Mac OS Sierra / Eclipse Neon 3.
When you say "my internet connexion is dead", do you mean that your network connection drops or that your DNS lookups fail? (What do you think this command is doing and why are you performing it: sudo route add default 192.168.1.1?) If your program is modifying your system's connectivity settings, I would strongly recommend against preventing it from doing that. There's no reason for it to do so at that level, a more appropriate place to set settings would be at some deploy stage. Alternatively, you could run your app in a Docker container which I strongly suspect will solve your problem. Visit www.docker.com to learn more.
How to block java application from sending email during development?
My application sends emails to our customers to warn them about errors while we process their files. However, I would like to disable this feature for development/test purposes, without altering my code. Is there any argument I can pass to my JVM to block it from sending emails?
You can replace the JavaMail provider with one that "mocks" a real provider, just by adding a jar to your classpath. In addition to blocking outbound mail, it lets you unit-test your application's email functions. This library was created by Kohsuke Kawaguchi, creator of Hudson/Jenkins.
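As a hedged illustration (the Mailbox class and its static get() lookup are from that mock-javamail library as I understand it; treat the exact names and the addresses below as assumptions), a test might look like this:

// Sketch of sending mail against the mocked JavaMail transport. With the
// mock-javamail jar on the classpath, Transport.send() delivers into an
// in-memory Mailbox instead of contacting a real SMTP server.
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import org.jvnet.mock_javamail.Mailbox;

public class MailMockSketch {
    public static void main(String[] args) throws Exception {
        Session session = Session.getInstance(new Properties());
        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("noreply@example.com"));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress("customer@example.com"));
        msg.setSubject("File processing error");
        msg.setText("Details...");
        Transport.send(msg); // intercepted by the mock provider, nothing leaves the box

        // The message is retrievable from the in-memory mailbox:
        System.out.println(Mailbox.get("customer@example.com").size()); // expect 1
    }
}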
If the SMTP server's hostname is hardcoded in the code, for example server = "smtp.example.com", you can edit /etc/hosts to override the DNS lookup. Add this to your hosts file:

127.0.0.1 smtp.example.com

This prevents your program from reaching the real mail server. Make sure to delete that line when you are done. If instead the IP address is what's hardcoded, you can use a firewall; the exact procedure depends on your operating system. On a Linux kernel, you can use iptables to block that IP address:

iptables -I OUTPUT 1 --destination 1.2.3.4 -j REJECT

Or, for a more specific rule that only blocks SMTP:

iptables -I OUTPUT 1 --destination 1.2.3.4 -p tcp --dport 25 -j REJECT --reject-with tcp-reset

Again, remember to change it back when you're done:

iptables -D OUTPUT 1
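To verify the hosts-file override took effect before trusting it, a quick sketch using the standard InetAddress API:

// Sketch: confirm that smtp.example.com now resolves to the loopback
// address. With the /etc/hosts entry above, this should print 127.0.0.1.
import java.net.InetAddress;

public class CheckSmtpOverride {
    public static void main(String[] args) throws Exception {
        InetAddress addr = InetAddress.getByName("smtp.example.com");
        System.out.println(addr.getHostAddress());
    }
}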
How to connect to Java instances running on EC2 using JMX
We are having problems connecting to our Java applications running in Amazon's EC2 cluster. We have definitely allowed both the "JMX port" (which is usually the RMI registry port) and the server port (which does most of the work) in the security group for the instances in question. JConsole connects but seems to hang and never shows any information. We run our Java with something like the following:

java -server -jar foo.jar other parameters here > java.log 2>&1

We have tried:
Telnetting to the ports; the connections succeed but no information is displayed.
Running jconsole on the instance itself using remote X11 over ssh; it connects and shows information, so the JRE is exporting JMX locally.
Opening all ports in the security group. Weeee.
Using tcpdump to make sure the traffic is not going to other ports.
Simulating it locally; we can always connect to our local JREs, or those running elsewhere on our network, using the same application parameters.

java -version outputs:

OpenJDK Runtime Environment (IcedTea6 1.11.5) (amazon-53.1.11.5.47.amzn1-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)

As an aside, we are using my SimpleJMX package, which allows us to set both the RMI registry and server ports, which are typically semi-randomly chosen by the RMI registry. You can also force this with a JMX URI like the following:

"service:jmx:rmi://localhost:" + serverPort + "/jndi/rmi://:" + registryPort + "/jmxrmi"

These days we use the same port for both the server and the registry. In the past we have used X as the registry port and X+1 as the server port to make the security-group rules easy. You connect to the registry port in jconsole or whatever JMX client you are using.
We are having problems connecting to our Java applications running in Amazon's EC2 cluster. It turns out that the problem was a combination of two missing settings. The first forces the JRE to prefer IPv4 over IPv6; this was necessary (I guess) since we are trying to connect to it via a v4 address:

-Djava.net.preferIPv4Stack=true

The real blocker was the fact that JMX works by first contacting the RMI port, which responds with the hostname and port for the JMX client to connect to. With no additional settings it will use the local IP of the box, which is a 10.X.X.X virtual address that a remote client cannot route to. We needed to add the following setting, which is the external hostname or IP of the server -- in this case the elastic hostname of the server:

-Djava.rmi.server.hostname=ec2-107-X-X-X.compute-1.amazonaws.com

The trick, if you are trying to automate your EC2 instances (and why the hell would you not), is how to find this address at runtime. To do that you need to put something like the following in your application boot script:

# get our _external_ hostname
RMI_HOST=`wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname`
...
java -server \
    -Djava.net.preferIPv4Stack=true -Djava.rmi.server.hostname=$RMI_HOST \
    -jar foo.jar other parameters here > java.log 2>&1

The mysterious 169.254.169.254 IP in the wget command above provides information that the EC2 instance can request about itself. I'm disappointed that this does not include tags, which are only available in an authenticated call.

I was initially using the external IPv4 address, but it looks like the JDK tries to make a connection to the server port when it starts up; if it uses the external IP, this slows the application boot until that connection times out. The public-hostname resolves to the 10-net address locally and to the public IPv4 address externally, so the application now starts fast and JMX clients still work. Woo hoo! Hope this helps someone else. Cost me 3 hours today.

To force your JMX server to start the server and the RMI registry on designated ports so you can open them in the EC2 security groups, see this answer: How to close rmiregistry running on particular port?

Edit: We just had this problem re-occur. It seems that the Java JMX code does some hostname lookups on the hostname of the box and uses them to try to connect and verify the JMX connection. The issue seems to be a requirement that the local hostname of the box resolve to the local IP of the box. For example, if your /etc/sysconfig/network has HOSTNAME=server1.foobar.com, then a DNS lookup on server1.foobar.com should get you to the 10-NET virtual address. We were generating our own /etc/hosts file and the hostname of the local host was missing from it. This caused our applications to either pause on startup or not start up at all.

Lastly, one way to simplify your JMX creation is to use my SimpleJMX package.
Per the second answer to Why does JMX connection to Amazon EC2 fail?, the difficulty here is that by default the RMI port is selected at random, and clients need access to both the JMX and RMI ports. If you're running JDK 7u4 or later, the RMI port can be specified via a system property. Starting my server with the following JMX settings worked for me.

Without authentication:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.rmi.port=9998
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Djava.rmi.server.hostname=<public EC2 hostname>

With authentication:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.rmi.port=9998
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote.password
-Djava.rmi.server.hostname=<public EC2 hostname>

I also opened ports 9998-9999 in the EC2 security group for my instance.
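To sanity-check those settings from your workstation, here is a minimal hedged client sketch using the standard javax.management.remote API (the hostname is a placeholder for your instance's public EC2 hostname; port 9999 matches the flags above):

// Sketch: connect to the JMX agent configured above and read one
// attribute from the standard Runtime MBean.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxSmokeTest {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://ec2-x-x-x-x.compute-1.amazonaws.com:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            Object uptime = mbsc.getAttribute(
                    new ObjectName("java.lang:type=Runtime"), "Uptime");
            System.out.println("JVM uptime (ms): " + uptime);
        }
    }
}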
A slightly different approach, using ssh tunnels.

(On the remote machine) Pass the following flags to the JVM:

-Dcom.sun.management.jmxremote.port=1099
-Djava.net.preferIPv4Stack=true
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Djava.rmi.server.hostname=127.0.0.1

(On the remote machine) Check which ports Java started to use:

$ netstat -tulpn | grep java
tcp 0 0 0.0.0.0:37484 0.0.0.0:* LISTEN 2904/java
tcp 0 0 0.0.0.0:1099 0.0.0.0:* LISTEN 2904/java
tcp 0 0 0.0.0.0:45828 0.0.0.0:* LISTEN 2904/java

(On the local machine) Make ssh tunnels for all the ports (the three -L forwards can also be combined into a single ssh invocation):

ssh -N -L 1099:127.0.0.1:1099 ubuntu@<ec2_ip>
ssh -N -L 37484:127.0.0.1:37484 ubuntu@<ec2_ip>
ssh -N -L 45828:127.0.0.1:45828 ubuntu@<ec2_ip>

(On the local machine) Connect with Java Mission Control to localhost:1099.
The answer given by Gray worked for me; however, I found that I had to open TCP ports 0 to 65535 or I couldn't get in. I think you connect on the main JMX port and then get another one assigned. I got that from this blog post, which has always worked well for me.
We use AWS Elastic Container Service to run our Spring Boot services. The config below allowed us to connect to our Docker containers.

Without authentication:

-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=9090 \
-Dcom.sun.management.jmxremote.rmi.port=9090 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.rmi.server.hostname=$(/usr/bin/curl -s --connect-timeout 2 \
    http://169.254.169.254/latest/meta-data/public-ipv4)

I found it crisp, and it doesn't require any other server-side init script.
PPP Server on Windows
We have a solution where some hardware connects to a COM port on a Win7 machine and interacts with our Java app. The hardware wants to use a PPP server to transparently connect to another server over TCP/IP. Does anyone have a suggestion on how to do this? Start an OS-native PPP server from the Java app, with a connection to the COM port? How is this done?
You may be surprised to find that Win7 still supports PPP natively. Follow these steps (or something like them) and you should be mostly good to go. I haven't actually set up a PPP connection since probably Win98, maybe Win2k, but the steps look pretty similar to what they were back then. It's not straightforward, but these should get you 80 or 90% of the way (the last 10-20% will be the normal hair-pulling irritation of getting the serial connection properly configured; there are way too many options involved in serial communications and PPP for it to go right on the first connection attempt).

First set up the 'modem':
Open Control Panel and select "Phone and Modem". If it asks you about location, type in whatever information it needs to make the dialog box happy (I think it just needs your area code, but maybe other stuff; it doesn't matter, we won't be using it).
Tell it you want to install a modem, and don't worry if it can't find one - you'll be selecting one from a list.
Click the "Add" button, and tell it not to bother detecting one automatically.
Under "(Standard Modem Types)" select "Communications cable between two computers" and tell it which serial port to use.

Now set up the 'network adapter' for the PPP connection:
Go to the "Network and Sharing Center" in Control Panel and click "Set up a new connection or network".
Select "Set up a dial-up connection".
If it asks which modem to use, select the "Communications cable between two computers" modem you just set up (this shouldn't happen unless you have an actual modem in your computer).
Give the "Create a Dial-up Connection" dialog a bogus phone number so it will let you continue, and give it a connection name you like instead of "Dial-up Connection".
Click "Connect" and it'll try to dial. Of course it'll fail. Click "Set up the connection anyway".

Now configure the various PPP settings on the new network adapter:
Click the "Change adapter settings" link in the "Network and Sharing Center" control panel.
Right-click the network adapter you just created ("Dial-up Connection" or whatever name you gave it) and select "Properties".
Configure the "Communications cable between two computers" device (mainly this lets you set the speed), and look through the other tabs for the various other options you might need to control.
Don't forget to configure the TCP/IPv4 properties you might need on the "Networking" tab; if you're using IPv6, make sure that stuff is configured too.

Once the hardware device establishes a PPP connection to the Win7 COM port, the Java application should be able to communicate over the PPP link as if it were a regular network adapter. Good luck!
This is a workaround using VirtualBox; I couldn't figure out how to run a PPP server natively on Win7. The chain is:

pppd - Ubuntu ttyS0 - VirtualBox Port 1 - Win7 COM1 -- RS232 -- target's PPP client

Prepare VirtualBox 5 with Ubuntu 16 as a guest OS on Win7, then go to the VirtualBox Settings -> Serial Ports -> Port 1:

Check: Enable Serial Port
Port Number: COM1
IRQ: 4
I/O Port: 0x3F8
Port Mode: Host Device
Check: Connect to existing pipe/socket
Path/Address: COM1

Open a Ubuntu terminal:

sudo apt-get install ppp
sudo stty -F /dev/ttyS0 raw
sudo stty -F /dev/ttyS0 -a
sudo pppd /dev/ttyS0 115200 192.168.17.1:192.168.17.2 proxyarp local noauth nodetach dump nocrtscts passive persist maxfail 0 holdoff 1

pppd options in effect:
nodetach          # (from command line)
holdoff 1         # (from command line)
persist           # (from command line)
maxfail 0         # (from command line)
dump              # (from command line)
noauth            # (from command line)
/dev/ttyS0        # (from command line)
115200            # (from command line)
lock              # (from /etc/ppp/options)
nocrtscts         # (from command line)
local             # (from command line)
asyncmap 0        # (from /etc/ppp/options)
passive           # (from command line)
lcp-echo-failure 4    # (from /etc/ppp/options)
lcp-echo-interval 30  # (from /etc/ppp/options)
hide-password     # (from /etc/ppp/options)
proxyarp          # (from command line)
192.168.17.1:192.168.17.2  # (from command line)
noipx             # (from /etc/ppp/options)

Using interface ppp0
Connect: ppp0 <--> /dev/ttyS0
Cannot determine ethernet address for proxy ARP
local IP address 192.168.17.1
remote IP address 192.168.17.2
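Once pppd reports the link up, the Java side can treat it like any other network. A minimal hedged sketch (the port and payload are my assumptions, not part of this setup) connecting from the Ubuntu guest to a TCP service on the PPP peer at the address pppd assigned above:

// Sketch: open a TCP connection across the PPP link to the peer
// (192.168.17.2 from the pppd invocation). Substitute whatever port
// and protocol the target device actually serves.
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PppLinkCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("192.168.17.2", 12345), 5000);
            OutputStream out = socket.getOutputStream();
            out.write("hello over ppp\n".getBytes(StandardCharsets.US_ASCII));
            out.flush();
            System.out.println("Connected via " + socket.getLocalAddress());
        }
    }
}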
Using Google, based on @hari's comment about javax.comm, I found a tutorial on TINI that may be useful for your purposes: the guide sets up a PPP connection through a COM port with the TINI library, much as you want to do.