I cannot connect to HBase running in Docker on Windows (the banno/hbase-standalone image), although I can connect to a locally installed HBase.
The banno/hbase-standalone image is run with:
docker run -d -p 2181:2181 -p 60000:60000 -p 60010:60010 -p 60020:60020 -p 60030:60030 banno/hbase-standalone
I also set up the port forwarding on the boot2docker VM, which is required when running on Windows.
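(The forwarding rules themselves are omitted here; a typical setup, assuming the default VM name boot2docker-vm, uses VirtualBox NAT rules like these:)

VBoxManage controlvm boot2docker-vm natpf1 "tcp-2181,tcp,127.0.0.1,2181,,2181"
VBoxManage controlvm boot2docker-vm natpf1 "tcp-60000,tcp,127.0.0.1,60000,,60000"
# ...and likewise for 60010, 60020 and 60030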
I can successfully telnet to all those ports on my localhost.
Next, here is a code sample that we use in our tests:
Configuration config = HBaseConfiguration.create();
config.clear();
config.setInt("timeout", 12000);
// Point the client at the locally forwarded ZooKeeper and master ports.
config.set("zookeeper.znode.parent", "/hbase");
config.set("hbase.zookeeper.quorum", "127.0.0.1");
config.set("hbase.zookeeper.property.clientPort", "2181");
config.set("hbase.master", "127.0.0.1:60000");
final Configuration configuration = HBaseConfiguration.create(config);
// dumpProperties is our own test helper that prints the hbase.* properties.
JobDefinition.Buildable.dumpProperties(configuration, newArrayList("hbase.*"));
// Throws MasterNotRunningException if the master cannot be reached.
HBaseAdmin.checkHBaseAvailable(config);
This causes the following exception:
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: a3e6c240af20
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1651)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:900)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2366)
at com.xxx.compute.hadoop.jobs.transaction.OurTest.main(OurTest.java:24)
Caused by: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: a3e6c240af20
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
... 5 more
Caused by: java.net.UnknownHostException: unknown host: a3e6c240af20
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:386)
at org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:352)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1526)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1438)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1657)
... 10 more
This is explainable. We run Windows, which requires the boot2docker VM, and that VM runs behind NAT. The Docker container is in turn running inside the boot2docker VM, also behind NAT. However, the ports are still "visible" to the host machine running the tests, because the Docker container exposes them and the boot2docker VM forwards them to the host. The name a3e6c240af20 is actually the Docker container ID, so a3e6c240af20 is presumably the container's hostname:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3e6c240af20 banno/hbase-standalone:latest "/bin/sh -c '/opt/hb 24 minutes ago Up 24 minutes 0.0.0.0:2181->2181/tcp, 0.0.0.0:60000->60000/tcp, 0.0.0.0:60010->60010/tcp, 0.0.0.0:60020->60020/tcp, 0.0.0.0:60030->60030/tcp agitated_wozniak
I am not sure exactly how HBase communication works, but apparently the client makes RPC calls to the master. The HBase master inside Docker registers its own hostname, expecting the client to call it there. But since both the boot2docker VM and the Docker container run behind NAT, the host machine cannot see (or resolve) the Docker container.
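This can be demonstrated without HBase at all; the client simply cannot resolve the name it was handed. A minimal sketch (the container ID is hard-coded purely for illustration):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
    public static void main(String[] args) {
        try {
            // The HBase client performs an equivalent lookup on the hostname
            // the master registered (the container ID in this case).
            InetAddress addr = InetAddress.getByName("a3e6c240af20");
            System.out.println("Resolved to " + addr.getHostAddress());
        } catch (UnknownHostException e) {
            // The same root cause that MasterNotRunningException wraps above.
            System.out.println("Cannot resolve: " + e.getMessage());
        }
    }
}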
I tried to add a3e6c240af20 to my hosts file:
127.0.0.1 a3e6c240af20
Then I get a different error, also during the RPC call, which does not help me much:
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.lang.NullPointerException
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1651)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:900)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2366)
at com.xxx.compute.hadoop.jobs.transaction.OurTest.main(OurTest.java:24)
Caused by: com.google.protobuf.ServiceException: java.lang.NullPointerException
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
... 5 more
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1051)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1440)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1657)
... 10 more
Does anyone have a suggestion for how this can be solved?
Try adding [boot2docker IP] a3e6c240af20 to your hosts file instead of 127.0.0.1, because the HBase Java client needs to reach your Docker host, not exactly localhost, to reach ZooKeeper (correct me if I'm wrong; the boot2docker IP can be printed with the boot2docker ip command). I'm not entirely sure it will work for you, but it works on my Windows machine.
I used the oddpoet/hbase-cdh5 Docker image to avoid this issue.
docker run -d -p 2181:2181 -p 60000:60000 -p 60010:60010 -p 60020:60020 -p 60030:60030 -h hbase oddpoet/hbase-cdh5
fig.yml:

hbase:
  image: oddpoet/hbase-cdh5
  hostname: hbase
  ports:
    - "3181:2181"
    - "60000:60000"
    - "60010:60010"
    - "60020:60020"
    - "60030:60030"
My client configuration:
conf.set("hbase.zookeeper.quorum", zkPath);
conf.set("hbase.zookeeper.property.clientPort","2181");
conf.set("zookeeper.znode.parent", "/hbase");
conf.set("hbase.client.retries.number", "3"); // default 35
conf.set("hbase.rpc.timeout", "10000"); // default 60 secs
conf.set("hbase.rpc.shortoperation.timeout", "5000"); // default 10 secs
I have a Spring Boot app, and I am exposing some metrics to Prometheus. My next goal is to get these metrics into Grafana in order to build some nice dashboards. I am using Docker on WSL (Ubuntu) and typed the following commands for Prometheus and Grafana:
docker run -d --name=prometheus -p 9090:9090 -v /mnt/d/Projects/Msc-Thesis-Project/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus --config.file=/etc/prometheus/prometheus.yml
docker run -d --name=grafana -p 3000:3000 grafana/grafana
The Prometheus dashboard in my browser shows that everything is up and running. My problem is in the Grafana configuration, where I have to configure Prometheus as a data source.
In the URL field I provide http://localhost:9090, but I get the following error:
Error reading Prometheus: Post "http://localhost:9090/api/v1/query": dial tcp 127.0.0.1:9090: connect: connection refused
I've searched everywhere and saw some workarounds, but none of them apply to me. Specifically, I tried http://host.docker.internal:9090, http://server-ip:9090, and of course my system's IP address from ipconfig, http://<ip_address>:9090. Nothing works.
I am not using docker-compose, just a prometheus.yml file, which is as follows:
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'Spring Boot Application input'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 2s
    scheme: http
    static_configs:
      - targets: ['192.168.1.233:8080']
        labels:
          application: "MSc Project Thesis"
Can you advise me on this?
You can use the docker inspect command to find the IP address of the Prometheus container, and then replace localhost with it.
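For example (assuming the container is named prometheus, as in the docker run command above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' prometheus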
I suggest using docker-compose, which has better support for DNS resolution; your localhost issue will then be resolved.
This works, as described in https://stackoverflow.com/a/74061034/4841138.
Also, if you deploy the stack with docker compose and all the containers are on the same network, you can do this:
URL: http://prometheus:9090
Here, prometheus is the DNS name of the Prometheus container, which can be resolved by every container on the same network.
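A minimal compose file illustrating this (service names and the volume path are assumed from the docker run commands above):

version: "3"
services:
  prometheus:
    image: prom/prometheus
    command: --config.file=/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    volumes:
      - /mnt/d/Projects/Msc-Thesis-Project/prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"

Both services join the default network that compose creates, so Grafana can reach Prometheus at http://prometheus:9090.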
I have an instance of NiFi in Docker on a virtual machine, with ports 8080 and 10000 exposed.
On this instance I created a simple pipeline with an output port named "flink", and I want to read its data using the flink-nifi connector:
SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
.url("http://vm-address:8080/nifi")
.portName("flink")
.requestBatchCount(100)
.buildConfig();
DataStream<NiFiDataPacket> nifi = environment.addSource(new NiFiSource(clientConfig));
nifi.map(new MapFunction<NiFiDataPacket, JsonNode>() {
@Override
public JsonNode map(NiFiDataPacket value) throws Exception {
return DataConverter.byte2Json(value.getContent());
}
}).print();
In this case I get this error:
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: java.net.UnknownHostException
If I add a localAddress to the config:
SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
.url("http://vm-address:8080/nifi")
.localAddress(InetAddress.getByName("vm-address"))
.portName("flink")
.requestBatchCount(100)
.buildConfig();
I got this error:
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: java.net.BindException: Cannot assign requested address: JVM_Bind
I run this code from my local PC on Windows; Flink is started in standalone mode.
I also tried to run it directly on the virtual machine, but I got the same error.
The logs show a lot of retries:
execchain.RetryExec: I/O exception (java.net.BindException) caught when processing request to /vm-address->{}->http://vm-address:8080: Cannot assign requested address: JVM_Bind
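The JVM_Bind error itself is consistent: localAddress presumably tells the site-to-site client which local interface to bind its socket to, and vm-address is not a local interface on the PC running the job. A plain-socket sketch of the same failure:

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BindCheck {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        // Binding to an address that does not belong to a local interface fails
        // with java.net.BindException: Cannot assign requested address.
        socket.bind(new InetSocketAddress(InetAddress.getByName("vm-address"), 0));
    }
}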
Finally, solved it!
The problem was in my Docker configuration. At first I ran NiFi like this:
docker run --name nifi -p 8008:8080 -p 10000:10000 -d apache/nifi:1.7.1
The network, by default, was bridge. In this mode the container gets a random hostname, and I was not communicating with the container directly but through the Docker NAT.
When I chose network=host:
docker run --name nifi --network host -d apache/nifi:1.7.1
everything worked.
Probably I could have solved it another way (maybe by explicitly resolving the container hostname), but this was the easiest.
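That other way would presumably be to give the container a fixed, resolvable hostname, similar to the -h flag used for the HBase image earlier (untested here; the hostname must then resolve to the VM from the client machine):

docker run --name nifi -p 8080:8080 -p 10000:10000 -h vm-address -d apache/nifi:1.7.1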
I have a new Spring Boot application that I just finished, and I am trying to deploy it to Docker. Inside the container the application works fine. It uses port 9000 for user-facing requests and 9100 for administrative tasks such as health checks. When I start a Docker instance and try to access port 9000, I get the following error:
curl: (56) Recv failure: Connection reset by peer
After a lot of experimentation (via curl), I confirmed with several different configurations that the application functions fine inside the container, but when I try to map ports to the host, it doesn't connect. I've tried starting it with the following commands; none of them allow me to access the ports from the host.
docker run -P=true my-app
docker run -p 9000:9000 my-app
The workaround
The only approach that works is using the --net=host option, but this doesn't allow me to run more than one such container on the host.
docker run -d --net=host my-app
Experiments with ports and expose
I've used various versions of the Dockerfile, exposing different ports such as 9000 and 9100, or just 9000. None of that helped. Here's my latest version:
FROM ubuntu
MAINTAINER redacted
RUN apt-get update
RUN apt-get install openjdk-7-jre-headless -y
RUN mkdir -p /opt/app
WORKDIR /opt/app
ADD ./target/oauth-authentication-1.0.0.jar /opt/app/service.jar
ADD config.properties /opt/app/config.properties
EXPOSE 9000
ENTRYPOINT java -Dext.properties.dir=/opt/app -jar /opt/app/service.jar
Hello World works
To make sure I can run a Spring Boot application, I tried Simplest-Spring-Boot-MVC-HelloWorld and it worked fine.
Nmap results
I've used nmap to do port scans from the host and from the container:
From the host
root@my-docker-host:~# nmap 172.17.0.71 -p9000-9200
Starting Nmap 6.40 ( http://nmap.org ) at 2014-11-14 19:19 UTC
Nmap scan report for my-docker-host (172.17.0.71)
Host is up (0.0000090s latency).
Not shown: 200 closed ports
PORT STATE SERVICE
9100/tcp open jetdirect
MAC Address: F2:1A:ED:F4:07:7A (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 1.48 seconds
From the container
root@80cf20c0c1fa:/opt/app# nmap 127.0.0.1 -p9000-9200
Starting Nmap 6.40 ( http://nmap.org ) at 2014-11-14 19:20 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0000070s latency).
Not shown: 199 closed ports
PORT STATE SERVICE
9000/tcp open cslistener
9100/tcp open jetdirect
Nmap done: 1 IP address (1 host up) scanned in 2.25 seconds
The container runs Ubuntu.
The hosts I've reproduced this on are CentOS and Ubuntu.
This SO question seems similar, but it had very few details and no answers, so I thought I'd document my scenario a bit more thoroughly.
I had a similar problem, where specifying a host IP address of '127.0.0.1' wouldn't properly forward the port to the host.
Setting the web server's bind address to '0.0.0.0' fixes the problem.
E.g., for my Node app, the following doesn't work:
app.listen(3000, '127.0.0.1')
whereas the following does work:
app.listen(3000, '0.0.0.0')
I guess this means that Docker's port mapping talks to the container's external interface, so a server bound only to 127.0.0.1 inside the container is unreachable; it has to listen on 0.0.0.0 for the mapped port to work.
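Since the original question is about Spring Boot, the equivalent setting there is the standard server.address property (shown with the question's port; binding to all interfaces is typically already the default, so this matters only if it was overridden):

# application.properties - bind to all interfaces, not just loopback
server.address=0.0.0.0
server.port=9000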
You should run with docker run -P to have Docker automatically publish the ports declared with EXPOSE in the Dockerfile (use docker port to see which host ports were assigned). Please see http://docs.docker.com/reference/run/#expose-incoming-ports
Has anyone else run across this exception? We saw it during a load test last night. The hostname is correct and normally resolves fine; it just started throwing this exception last night. Either it was a random DNS failure on Amazon's part, or the AWS SDK for Java does something unexpected under load.
Caused by: java.net.UnknownHostException: sdb.amazonaws.com
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:867)
at java.net.InetAddress.getAddressFromNameService(InetAddress.java:1246)
at java.net.InetAddress.getAllByName0(InetAddress.java:1197)
at java.net.InetAddress.getAllByName(InetAddress.java:1128)
at java.net.InetAddress.getAllByName(InetAddress.java:1064)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.resolveHostname(DefaultClientConnectionOperator.java:242)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:130)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:149)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:561)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:732)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:266)
I was facing the same problem: Caused by: java.net.UnknownHostException: ec2.sa-east-1.amazonaws.com while doing lein pallet up to upload files to an AWS bucket, or while trying to get the IPs of remote machines.
1. First try: cleaning the project, waiting a few minutes/hours, and then re-running lein pallet up -P aws-ec2 with the same AWS configuration worked for me.
2. Second try: run lein pallet up -P aws-ec2 for single groups instead of the whole cluster.
Change /etc/hosts the following way:
old
127.0.0.1 localhost localhost.localdomain
new
127.0.0.1 localhost localhost.localdomain add-your-localhost-name-here
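One more thing worth checking when UnknownHostException appears only under load (not from the answers above, but standard JVM behavior): the JVM caches failed DNS lookups for 10 seconds by default, so a single transient DNS failure can poison many subsequent requests. The cache is controlled by JDK security properties:

import java.security.Security;

public class DnsCacheSettings {
    public static void main(String[] args) {
        // Must be set early, before the first lookup is performed.
        // Cache successful lookups for 30 seconds instead of the provider default.
        Security.setProperty("networkaddress.cache.ttl", "30");
        // Do not cache failed lookups, so one transient DNS error does not
        // fail every request for the next 10 seconds.
        Security.setProperty("networkaddress.cache.negative.ttl", "0");
    }
}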
I tried to install Solr using:
java -jar start.jar
However, I had downloaded the source code and hadn't compiled it (I didn't pay attention), and the error was:
http://localhost:8983/solr/admin/
HTTP ERROR: 404
Problem accessing /solr/admin/. Reason:
NOT_FOUND
Then I downloaded the compiled version of Solr, but when trying to run the example configuration I get this exception:
java.net.BindException: Address already in use
Is there a way to reset the Solr configuration and start from scratch? It looks like the configuration got messed up, and I don't see anything related to this in the manual.
Here is the error:
2011-07-10 22:41:27.631:WARN::failed SocketConnector@0.0.0.0:8983: java.net.BindException: Address already in use
2011-07-10 22:41:27.632:WARN::failed Server@c4e21db: java.net.BindException: Address already in use
2011-07-10 22:41:27.632:WARN::EXCEPTION
java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:383)
at java.net.ServerSocket.bind(ServerSocket.java:328)
at java.net.ServerSocket.<init>(ServerSocket.java:194)
at java.net.ServerSocket.<init>(ServerSocket.java:150)
at org.mortbay.jetty.bio.SocketConnector.newServerSocket(SocketConnector.java:80)
at org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
at org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:283)
at org.mortbay.jetty.bio.SocketConnector.doStart(SocketConnector.java:147)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.Server.doStart(Server.java:235)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.mortbay.start.Main.invokeMain(Main.java:194)
at org.mortbay.start.Main.start(Main.java:534)
at org.mortbay.start.Main.start(Main.java:441)
at org.mortbay.start.Main.main(Main.java:119)
Jul 10, 2011 10:41:27 PM org.apache.solr.core.SolrCore registerSearcher
INFO: [] Registered new searcher Searcher@5b6b9e62 main
This means you already have an application running on that particular port.
Run:
$ lsof -i :8983
This gives you a list of any applications running on that port. In my case Solr was already running, and I got back:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 10289 patricia 111u IPv6 399410 0t0 TCP *:8983 (LISTEN)
Kill this process by filling in your PID:
$ kill 10289
And then try running Solr again.
The java.net.BindException means that you are attempting to restart Solr while an earlier instance is still running, or (less probably) that you have something else running on port 8983. You should find that process, kill it, and then start Solr again.
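The error is easy to reproduce with a plain ServerSocket: the second bind on an already-taken port fails exactly the way Jetty's connector does above (a minimal sketch):

import java.net.ServerSocket;

public class BindDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket first = new ServerSocket(8983);  // takes the port, like the running Solr
        // This second bind throws java.net.BindException: Address already in use.
        ServerSocket second = new ServerSocket(8983);
    }
}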
The port is bound by some other application. If that application is important, you can change Jetty's default port using the following:
java -Djetty.port=8181 -jar start.jar
To find the existing process, run:
ps aux | grep solr
or:
ps aux | grep start.jar
Then take the process ID and kill it:
kill -9 <PID>
(for example: kill -9 4989)
UPDATE:
After killing the process, if you want to reinstall Solr you should first uninstall it; the following is one way to uninstall it:
sudo service solr stop
sudo rm -r /var/solr
sudo rm -r /opt/solr-5.3.1
sudo rm -r /opt/solr
sudo rm /etc/init.d/solr
sudo deluser --remove-home solr
sudo deluser --group solr
And now you can reinstall it with no problem.
If sudo lsof -i:8983 doesn't help you find an application running on the same port, a common mistake is a Tomcat misconfiguration (if you're using it).
For example, by default Tomcat listens on port 8005 for the SHUTDOWN command, and if you set another Connector to listen on the same port, you'll get a port conflict.
So please double-check in server.xml that these ports are different:
<Server port="8005" shutdown="SHUTDOWN">
<Connector port="8983" protocol="HTTP/1.1"
A perhaps crazy idea is to use Docker to get a complete, repeatable, step-by-step installation:
Here is Docker Hub, to select the specific version to run in Docker: Docker Hub Solr
And here is GitHub, to read the Docker recipe: Solr docker Recipe