Has anyone else run across this exception? We saw it during a load test last night. The hostname is correct and normally works fine; it just started throwing this exception last night. Either it was a random DNS failure on Amazon's end, or the AWS SDK for Java does something unexpected under load.
> Caused by: java.net.UnknownHostException: sdb.amazonaws.com
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:867)
at java.net.InetAddress.getAddressFromNameService(InetAddress.java:1246)
at java.net.InetAddress.getAllByName0(InetAddress.java:1197)
at java.net.InetAddress.getAllByName(InetAddress.java:1128)
at java.net.InetAddress.getAllByName(InetAddress.java:1064)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.resolveHostname(DefaultClientConnectionOperator.java:242)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:130)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:149)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:561)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:732)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:266)
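For what it's worth, here is a minimal JDK-only loop we can run alongside the load test to see whether name resolution itself fails under load, independent of the SDK. The iteration count and sleep are arbitrary, and the JVM caches lookups per networkaddress.cache.ttl, so cached successes can mask transient failures:
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    public static void main(String[] args) throws InterruptedException {
        // Resolve the SimpleDB endpoint repeatedly; a failure here points at
        // the OS resolver rather than at the AWS SDK.
        for (int i = 0; i < 1000; i++) {
            try {
                InetAddress[] addrs = InetAddress.getAllByName("sdb.amazonaws.com");
                System.out.println(i + ": resolved " + addrs.length + " address(es)");
            } catch (UnknownHostException e) {
                System.err.println(i + ": resolution failed: " + e.getMessage());
            }
            Thread.sleep(100); // small pause so we don't hammer the resolver
        }
    }
}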
I was facing the same problem:
Caused by: java.net.UnknownHostException: ec2.sa-east-1.amazonaws.com
while running lein pallet up to upload files to an AWS bucket, or while trying to get the IPs of remote machines.
1. First try: clean the project, wait a few minutes or hours, and then re-run lein pallet up -P aws-ec2 with the same AWS configuration. That worked for me.
2. Second try: run lein pallet up -P aws-ec2 for single groups instead of the whole cluster, and change /etc/hosts the following way:
old:
127.0.0.1 localhost localhost.localdomain
new:
127.0.0.1 localhost localhost.localdomain add-your-localhost-name-here
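To confirm from the JVM's point of view that the hosts change took effect, a plain-JDK sketch like this should stop throwing UnknownHostException once your local hostname is listed in /etc/hosts:
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LocalHostCheck {
    public static void main(String[] args) {
        try {
            // Throws UnknownHostException when the machine's own hostname is not
            // resolvable, which is exactly what the /etc/hosts entry fixes.
            InetAddress self = InetAddress.getLocalHost();
            System.out.println("Local hostname resolves to " + self.getHostAddress());
        } catch (UnknownHostException e) {
            System.err.println("Local hostname does not resolve: " + e.getMessage());
        }
    }
}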
I am trying to run PySpark in Jupyter (via Anaconda) on Windows. I am facing the below-mentioned error while trying to create a SparkSession.
Exception: Java gateway process exited before sending its port number
Error snapshot 1
Error snapshot 2
I even tried adding the JAVA_HOME, SPARK_HOME and HADOOP_HOME paths as environment variables:
JAVA_HOME: C:\Java\jdk-11.0.16.1
SPARK_HOME: C:\Spark\spark-3.1.3-bin-hadoop3.2
HADOOP_HOME: C:\Spark\spark-3.1.3-bin-hadoop3.2
Even after this, I am facing the same issue.
PS: My PySpark version is 3.3.1 and my Python version is 3.8.6.
As per the Spark documentation, the string for setting the master should be "local[*]" to use all cores, or "local[N]" to use only N cores. If you leave out the master setting, it defaults to "local[*]".
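The question uses PySpark, but just to illustrate the master setting, here is a minimal sketch with the JVM-side SparkSession API (PySpark's SparkSession.builder.master(...) is the analogue; the app name is arbitrary):
import org.apache.spark.sql.SparkSession;

public class LocalSparkCheck {
    public static void main(String[] args) {
        // "local[*]" uses all available cores; "local[2]" would use two.
        SparkSession spark = SparkSession.builder()
                .appName("local-check")
                .master("local[*]")
                .getOrCreate();
        System.out.println("Spark version: " + spark.version());
        spark.stop();
    }
}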
After several attempts, I finally figured out the issue: the Windows firewall had blocked Java, which caused this error. Once I granted it access, the error was resolved!
I have installed ONOS 2.3.0 on an Ubuntu Server 18.04.4 virtual machine running on Hyper-V, following these steps (taken from here and here):
Firstly, I have installed Java 11 (openjdk-11-jdk and openjdk-11-jre), maven and curl;
then I have downloaded ONOS 2.3.0 from here and extracted it with tar xzf onos-2.3.0.tar.gz;
lastly, I exported the required environment variable export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64.
When I try to launch it using the command ./onos-service start (tested both as a normal user and with sudo), it gives me the following errors:
21:54:57.869 ERROR [onos-core-net] FrameworkEvent ERROR - org.onosproject.onos-core-net
org.osgi.framework.ServiceException: Service factory returned null. (Component: org.onosproject.store.cfg.DistributedComponentConfigStore (6))
at org.apache.felix.framework.ServiceRegistrationImpl.getFactoryUnchecked(ServiceRegistrationImpl.java:380)
at org.apache.felix.framework.ServiceRegistrationImpl.getService(ServiceRegistrationImpl.java:247)
at org.apache.felix.framework.EventDispatcher.fireEventImmediately(EventDispatcher.java:834)
[...]
at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1373)
at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:308)
at java.base/java.lang.Thread.run(Thread.java:834)
[...]
21:54:57.881 WARN [NettyMessagingService] Failed to bind TCP server to port 0.0.0.0:9876 due to {}
java.net.BindException: Address already in use
at java.base/sun.nio.ch.Net.bind0(Native Method)
[...]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.lang.Thread.run(Thread.java:834)
21:54:57.899 ERROR [onos-core-primitives] bundle org.onosproject.onos-core-primitives:2.3.0 (192)[org.onosproject.store.atomix.impl.AtomixManager(115)] : The activate method has thrown an exception
java.util.concurrent.CompletionException: java.net.BindException: Address already in use
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
[...]
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.net.BindException: Address already in use
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:455)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:132)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:563)
... 12 more
Connecting to the Karaf instance with ssh -p 8101 karaf@localhost confirms that ONOS is working (at least partially); the web interface login page loads, but after login it hangs, saying ONOS GUI not ready yet... please stand by....
Does anyone have an idea how to solve this problem?
Thanks in advance.
UPDATE 19-03-2020: I have prepared another virtual machine on another PC, following exactly the same steps but using VirtualBox and fewer virtual resources, and it works. Honestly, I don't understand why it fails on the Hyper-V configuration.
UPDATE 20-03-2020: I have reinstalled Ubuntu, configuring the network directly from the installer, installed the prerequisites and dependencies of ONOS offline (downloaded on another machine via sudo apt install --download-only <package-name>), and it worked. I think the problem was related to something in the network configuration that didn't let ONOS recognize its own process on port 9876 (see the WARN above).
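For anyone chasing the same WARN, a JDK-only way to check whether something is already bound to 9876 before starting ONOS is a sketch like this (the bind address mirrors the one in the log line):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortCheck {
    public static void main(String[] args) {
        // If the bind fails, something (possibly a leftover ONOS/Atomix process)
        // is already listening on 9876, which matches the BindException above.
        try (ServerSocket socket = new ServerSocket()) {
            socket.bind(new InetSocketAddress("0.0.0.0", 9876));
            System.out.println("Port 9876 is free");
        } catch (IOException e) {
            System.out.println("Port 9876 is already in use: " + e.getMessage());
        }
    }
}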
Hope this can be helpful for others.
I had this problem. ONOS is locked to the IP it sees at first install. I grepped for my IP in the ~/onos folder and was able to reset the binding by deleting the following files that contained the IP. They were rebuilt on the next ONOS run.
grep -rl 192.168. --exclude=*.log ~/onos
rm ~/onos/apache-karaf-4.2.9/data/db/partitions/data/partitions/1/raft-partition-1.conf
rm ~/onos/apache-karaf-4.2.9/data/db/partitions/data/partitions/1/raft-partition-1.meta
rm ~/onos/apache-karaf-4.2.9/data/db/partitions/data/partitions/1/.raft-partition-1.lock
rm ~/onos/apache-karaf-4.2.9/data/db/partitions/system/partitions/1/.system-partition-1.lock
rm ~/onos/apache-karaf-4.2.9/data/db/partitions/system/partitions/1/system-partition-1.conf
rm ~/onos/apache-karaf-4.2.9/data/db/partitions/system/partitions/1/system-partition-1.meta
I have faced this issue after changing the IP address of the controller (host machine).
The quick way to solve it is to set the controller IP back to what it was (static),
then reboot your machine.
After opening the URL (YourIP:8181/onos/ui/index.html),
Karaf will ask you for login credentials; use username karaf / password karaf.
Then on ONOS's login page, use onos/rocks as credentials.
Good luck.
I cannot connect to HBase running in Docker on Windows (banno/hbase-standalone image). However, I can connect to locally installed HBase.
The banno/hbase-standalone image is run using:
docker run -d -p 2181:2181 -p 60000:60000 -p 60010:60010 -p 60020:60020 -p 60030:60030 banno/hbase-standalone
I also set up the port forwarding on the boot2docker-vm (which is required when running on Windows):
I can successfully telnet to all those ports on my localhost.
Next, here is a code sample that we use in our tests:
Configuration config = HBaseConfiguration.create();
config.clear();
config.setInt("timeout", 12000);
config.set("zookeeper.znode.parent", "/hbase");
config.set("hbase.zookeeper.quorum", "127.0.0.1");
config.set("hbase.zookeeper.property.clientPort", "2181");
config.set("hbase.master", "127.0.0.1:60000");
final Configuration configuration = HBaseConfiguration.create(config);
JobDefinition.Buildable.dumpProperties(configuration, newArrayList("hbase.*"));
HBaseAdmin.checkHBaseAvailable(config);
This causes the following exception:
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: a3e6c240af20
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1651)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:900)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2366)
at com.xxx.compute.hadoop.jobs.transaction.OurTest.main(OurTest.java:24)
Caused by: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: a3e6c240af20
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
... 5 more
Caused by: java.net.UnknownHostException: unknown host: a3e6c240af20
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:386)
at org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:352)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1526)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1438)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1657)
... 10 more
This is explainable. We run Windows, which requires the boot2docker-vm virtual machine running behind NAT. The Docker container for the image runs inside boot2docker-vm, also behind NAT. However, the ports are "visible" to the host machine running the tests, since the Docker container exposes the ports and boot2docker-vm forwards them to the host machine. The name a3e6c240af20 actually comes from the Docker container ID, so a3e6c240af20 is probably the hostname of the Docker container:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3e6c240af20 banno/hbase-standalone:latest "/bin/sh -c '/opt/hb 24 minutes ago Up 24 minutes 0.0.0.0:2181->2181/tcp, 0.0.0.0:60000->60000/tcp, 0.0.0.0:60010->60010/tcp, 0.0.0.0:60020->60020/tcp, 0.0.0.0:60030->60030/tcp agitated_wozniak
I am not sure how exactly HBase communication works, but apparently it makes RPC calls to the instance. The HBase in Docker returns its hostname, expecting the client to call it back there. But since both boot2docker-vm and the Docker container are running behind NAT, the host machine cannot see the Docker container.
I tried to add a3e6c240af20 to my hosts file:
127.0.0.1 a3e6c240af20
Then I get a different error, also during the RPC call, which actually does not help me much:
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.lang.NullPointerException
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1651)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1677)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1885)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.isMasterRunning(HConnectionManager.java:900)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2366)
at com.xxx.compute.hadoop.jobs.transaction.OurTest.main(OurTest.java:24)
Caused by: com.google.protobuf.ServiceException: java.lang.NullPointerException
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1674)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1715)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1688)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1597)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1623)
... 5 more
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1051)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1440)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1657)
... 10 more
Does anyone have a suggestion how this can be solved?
Try adding [boot2docker IP] a3e6c240af20 instead of 127.0.0.1, because the HBase Java client needs to reach your Docker host, not exactly localhost, to reach ZooKeeper (correct me if I'm wrong). I'm not completely sure it will work, but it works on my Windows machine.
I used the oddpoet/hbase-cdh5 Docker image to avoid this issue.
docker run -d -p 2181:2181 -p 60000:60000 -p 60010:60010 -p 60020:60020 -p 60030:60030 -h hbase oddpoet/hbase-cdh5
fig.yml
hbase:
image: oddpoet/hbase-cdh5
hostname: hbase
ports:
- "3181:2181"
- "60000:60000"
- "60010:60010"
- "60020:60020"
- "60030:60030"
My configuration file:
conf.set("hbase.zookeeper.quorum", zkPath);
conf.set("hbase.zookeeper.property.clientPort","2181");
conf.set("zookeeper.znode.parent", "/hbase");
conf.set("hbase.client.retries.number", "3"); // default 35
conf.set("hbase.rpc.timeout", "10000"); // default 60 secs
conf.set("hbase.rpc.shortoperation.timeout", "5000"); // default 10 secs
We are using the Ganymed SSH-2 library and facing this error while SSHing to another machine.
[root@XXXX test]# java -classpath .:ganymed-ssh2-build210.jar Basic
ERROR:java.io.IOException: There was a problem while connecting to 10.X.X.X:22
java.io.IOException: There was a problem while connecting to 10.X.X.X:22
at ch.ethz.ssh2.Connection.connect(Connection.java:699)
at ch.ethz.ssh2.Connection.connect(Connection.java:490)
at Basic.main(Basic.java:27)
Caused by: java.io.IOException: Key exchange was not finished, connection is closed.
at ch.ethz.ssh2.transport.KexManager.getOrWaitForConnectionInfo(KexManager.java:91)
at ch.ethz.ssh2.transport.TransportManager.getConnectionInfo(TransportManager.java:229)
at ch.ethz.ssh2.Connection.connect(Connection.java:655)
... 2 more
Caused by: java.io.IOException: Cannot read full block, EOF reached.
at ch.ethz.ssh2.crypto.cipher.CipherInputStream.getBlock(CipherInputStream.java:81)
at ch.ethz.ssh2.crypto.cipher.CipherInputStream.read(CipherInputStream.java:108)
at ch.ethz.ssh2.transport.TransportConnection.receiveMessage(TransportConnection.java:231)
at ch.ethz.ssh2.transport.TransportManager.receiveLoop(TransportManager.java:669)
at ch.ethz.ssh2.transport.TransportManager$1.run(TransportManager.java:468)
at java.lang.Thread.run(Thread.java:636)
Can anyone explain what the issue could be here? Where should we start debugging?
SSH access from normal shell works correctly.
There is probably some problem with your public key.
SSH failed after the 141742-01/02 patch on Solaris 10!
The newly enabled aes192/aes256 support in ssh/sshd does not work on S10u3 or older releases.
A workaround is to disable the use of aes192/aes256 ciphers for ssh and sshd. Change the two config files /etc/ssh/ssh_config and /etc/ssh/sshd_config and add the following line:
Ciphers aes128-ctr,aes128-cbc,arcfour,3des-cbc,blowfish-cbc
You’ll have to restart sshd to pick up the change (“svcadm restart ssh”).
Source:
http://blog.mydream.com.hk/howto/matching-cipher-is-not-supported-aes256-cbc
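If you cannot touch the server's sshd_config, Ganymed also lets the client restrict the ciphers it proposes, so the connection never negotiates aes192/aes256 in the first place. A rough sketch follows; the host and credentials are placeholders, and the setter and field names are from memory of the Ganymed Connection/ConnectionInfo API, so double-check them against the Javadoc of your build:
import ch.ethz.ssh2.Connection;
import ch.ethz.ssh2.ConnectionInfo;

public class RestrictedCipherConnect {
    public static void main(String[] args) throws Exception {
        Connection conn = new Connection("10.0.0.1"); // placeholder host

        // Propose only ciphers known to work on the patched Solaris sshd.
        String[] ciphers = {"aes128-ctr", "aes128-cbc", "3des-cbc", "blowfish-cbc"};
        conn.setClient2ServerCiphers(ciphers);
        conn.setServer2ClientCiphers(ciphers);

        // connect() performs the key exchange; the negotiated algorithms are
        // reported in the returned ConnectionInfo.
        ConnectionInfo info = conn.connect();
        System.out.println("kex: " + info.keyExchangeAlgorithm);
        System.out.println("c2s cipher: " + info.clientToServerCryptoAlgorithm);

        boolean ok = conn.authenticateWithPassword("user", "password"); // placeholders
        System.out.println("authenticated: " + ok);
        conn.close();
    }
}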
I am trying to debug a simple Java application on my machine using Eclipse as an IDE. When I try to debug the application by entering the Debug perspective, I set a breakpoint and start debugging. Within a few seconds, the following pop-up window appears:
Launching unicodeRead has encountered a problem. Cannot connect to VM.
The message dumped on the console is as follows:
ERROR: transport error 202: connect failed: Connection refused
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:708]
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
How do I correct this? Why does this happen?
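Before digging into Eclipse settings, it can help to rule out basic loopback networking, since the debugger connection is just a TCP socket on localhost. A minimal JDK-only sketch (the port is picked by the OS):
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LoopbackCheck {
    public static void main(String[] args) {
        try (ServerSocket server = new ServerSocket(0, 1, InetAddress.getByName("127.0.0.1"))) {
            int port = server.getLocalPort();
            // Eclipse and the debugged JVM talk over a plain TCP connection on
            // loopback; if this connect fails, the problem is the OS, hosts file
            // or firewall setup rather than Eclipse itself.
            try (Socket client = new Socket("localhost", port)) {
                System.out.println("Loopback connection to port " + port + " succeeded");
            }
        } catch (IOException e) {
            System.err.println("Loopback connection failed: " + e);
        }
    }
}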
I just had the same problem.
Yesterday everything worked fine; now nothing, same error as you describe. I found out that the network admins had made some changes in the meantime, some firewall stuff. The problem is that Eclipse tries to establish a connection to the JVM at "localhost" (and some random port). When I tried pinging localhost (or 127.0.0.1) I got the following:
C:\Windows\system32>ping 127.0.0.1
Pinging 127.0.0.1 with 32 bytes of data:
PING: transmit failed. General failure.
PING: transmit failed. General failure.
PING: transmit failed. General failure.
PING: transmit failed. General failure.
and
C:\Windows\system32>ping localhost
Ping request could not find host localhost. Please check the name and try again.
It seems that in some cases DNS is expected to resolve this, and if the firewall prevents localhost lookups from reaching DNS, stuff breaks. I had to alter the hosts file and uncomment the following lines, so I would not rely on DNS for this anymore:
# 127.0.0.1 localhost
# ::1 localhost
Although it is written that hosts file changes take effect immediately, I think some process had this locked, and a restart was necessary in my case. After that, everything worked again.
I had the same problem, but the solution was to run the application with the server=y option and not with server=n.
Before:
java -agentlib:jdwp=transport=dt_socket,server=n,suspend=y,address=localhost:5005
After:
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=localhost:5005
Looks like the same problem as here. A reboot of the PC fixed the problem there. I haven't found any other solutions.
I was seeing an error while using the -X format:
java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=n myapp
The error went away when I switched to the newer format:
java -agentlib:jdwp=transport=dt_socket,server=y,address=4000,suspend=n myapp
It's very simple; just make the following changes in the eclipse.ini file:
-vm
binary\com.sun.java.jdk.win32.x86_1.6.0.u43\jre\bin\javaw.exe
I changed
-agentlib:jdwp=transport=dt_socket,address=9009,server=n,suspend=y
to
-agentlib:jdwp=transport=dt_socket,address=9009,server=y,suspend=n
and that did the trick!
In my case I had a bunch of domains referring to 127.0.0.1 in the hosts file, like this:
127.0.0.1 localhost domian1.local domain2.local domain3.local
One day I added another new domain referring to 127.0.0.1. By mistake, I put the domain in front of "localhost", like this:
127.0.0.1 domain4.local localhost domian1.local domain2.local domainx.local
After this, I always got an alert window in Eclipse while debugging:
Cannot connect to VM
com.sun.jdi.connect.TransportTimeoutException
In console:
ERROR: transport error 202: connect failed: Connection refused
ERROR: JDWP: Failed to initialize transport via localhost:50470, trying localhost via 127.0.0.1:50470
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
ERROR: transport error 202: connect failed: Connection refused
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:690]
The solution is to keep "localhost" in the first position at all times.
127.0.0.1 localhost domian1.local domain2.local domainx.local domain4.local
What solved it for me was deleting the entire domain1 folder inside the domains folder in the main GlassFish folder.
Eclipse will ask you to recreate a domain and then everything works again.
In Eclipse select the Run menu -> Debug Configurations -> JUnit -> select your test name ->
Environment tab -> add the variable server=y.
I was getting the same error on my Ubuntu machine because of a mishap with the /etc/hosts file. I had commented out the mapping of localhost to 127.0.0.1, and to complicate matters further there was a swap file hanging around.
This was the first line of my /etc/hosts:
127.0.0.1 #localhost
Deleting the # fixed the problem, whereas rebooting understandably had not.
My cause & solution were completely different.
I think in my case it was due to the installation of JProfiler. I fixed it by uninstalling JProfiler and launching eclipse with the -clean option. I suspect that JProfiler was inserting itself in the debugger. The -clean option forces Eclipse to re-assess its plugins, so that alone might have been sufficient.
Continuing @gonadarian's answer, it seems Eclipse uses the loopback address 127.0.0.1 (also known as localhost) for debug purposes. This error can be removed by ensuring that no other processes or services are bound on that address. The way to do this on Linux is:
As root, enter the command:
netstat -tulpn | grep 127.0.0.1
If there are processes listening on that address, they will show up in the format process_id/process_name.
Kill those processes like so: kill -KILL process_id
Restart the computer for these changes to take effect. The error should no longer occur.