Apache Flink (stable version 1.6.2) does not work - java

Recently, the stable version (1.6.2) of Apache Flink was released. I followed these instructions. But when I run the following command:
./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
I get the following error:
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Job failed. (JobID: 264564a337d4c6705bde681b34010d28)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:268)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:486)
at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
at org.apache.flink.streaming.examples.socket.SocketWindowWordCount.main(SocketWindowWordCount.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:816)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:290)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:216)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1053)
at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1129)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1129)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:265)
... 20 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction.run(SocketTextStreamFunction.java:96)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:94)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
I found this link: Flink program cannot submit when i follow flink-1.4's quickstart and use "./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000". However, it didn't help. I tried Flink 1.6.2 with Hadoop 2.8 as well as Flink 1.5.5 with Hadoop 2.8, on macOS and Ubuntu, but I got the same error.

Works fine for me. The only difference I can see is that I'm using the version of Flink without Hadoop, but I doubt that's the issue.
java.net.ConnectException: Connection refused usually means that there is no service listening on port 9000. You should have netcat running on this port via
$ nc -l 9000
(see https://ci.apache.org/projects/flink/flink-docs-stable/quickstart/setup_quickstart.html#run-the-example). If netcat is running, and it's not working, maybe try another port? It might also help to check all the log files for additional clues.
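Concretely, the quickstart flow uses two terminals, with the socket source started before the job is submitted. A minimal sketch, assuming a local cluster already started with ./bin/start-cluster.sh as in the linked guide:
# Terminal 1: start the text source first, so something is listening on port 9000
$ nc -l 9000
# Terminal 2: then submit the example job against that port
$ ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
If the job still fails with the source running, the TaskManager logs under log/ usually contain the underlying exception.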

I solved this annoying problem by reading nc -h (on Linux):
connect to somewhere: nc [-options] hostname port[s] [ports] ...
listen for inbound: nc -l -p port [-options] [hostname] [port]
So to listen on a port, -p is mandatory, and I tried the following command:
nc -l -p 9000
and it worked.
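This likely comes down to the netcat variant in use. With the traditional Linux netcat, listening requires the -p flag:
nc -l -p 9000
while the BSD netcat shipped with macOS (and many newer distributions) takes the port directly:
nc -l 9000
Checking nc -h on your system, as above, is the reliable way to tell which form applies.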

Related

Connection refused when connecting to host mysql db from docker container

I am trying to connect to a MySQL server running on the host machine (localhost) from a Spring app running inside a Docker container.
This is my Dockerfile:
FROM openjdk:11
ADD build/libs/demo-0.0.1-SNAPSHOT.jar demo.jar
EXPOSE 8000
RUN mkdir -p tmp/scripts
RUN mkdir -p tmp/logs
ENTRYPOINT ["java","-jar","/demo.jar"]
I am using Docker desktop for mac v2.3.0.3 with engine version 19.03.8
These are my HikariCP datasource connection properties:
spring.datasource.jdbcUrl=jdbc:mysql://host.docker.internal:3306/freshid
spring.datasource.username=sa
spring.datasource.password=test
When I build with docker build -f Dockerfile -t demo . and run the image with docker run -p 8000:8000 demo,
I get the following error on application startup:
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:835)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:455)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:199)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:354)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:202)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:473)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:554)
... 64 common frames omitted
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:91)
at com.mysql.cj.NativeSession.connect(NativeSession.java:152)
at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:955)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:825)
... 72 common frames omitted
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:609)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:155)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65)
... 75 common frames omitted
I uninstalled and re-installed Docker (19.03.8) and it worked. The final connection string is jdbc:mysql://host.docker.internal:3306/db. This doesn't establish any technical root cause, but it worked. Thanks.
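For anyone debugging the same symptom, a quick way to check whether a container can reach the host's MySQL at all is a throwaway client container. A diagnostic sketch; the mysql:8 image tag is an assumption, and the credentials are the ones from the question:
# run a disposable MySQL client and probe the host-gateway address
$ docker run --rm -it mysql:8 mysql -h host.docker.internal -P 3306 -u sa -p
If this also reports "Connection refused", the problem lies between Docker and the host's MySQL (for example, MySQL bound only to 127.0.0.1), not in the Spring application.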
I would suggest a simple solution.
Step 1: Troubleshooting: try to run the Spring Boot application on your machine, outside the container. If it works fine, then there is no issue in your application. (Make sure that while testing locally you change your DB connection string from host.docker.internal to localhost.)
Step 2: Publish your DB port, which is 3306 by default for MySQL, when you run the Docker container with the imperative command
docker run -p 8000:8000 -p 3306:3306 demo
This should resolve your issue 😎😎

Cassandra won't start on OSX "Reason: Connection refused."

I'm working on a Java project in which Cassandra is included in the repository itself. I'm having trouble getting it to run, however, receiving the following error:
/Users/xxx/dev/xxxx/build/cassandra/bin/cassandra-cli -h localhost -p 9052 -f
/Users/xxx/dev/xxxx/schema.txt
return code: 0
stderr: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
at org.apache.thrift.transport.TSocket.open(TSocket.java:183)
at org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
at org.apache.cassandra.cli.CliMain.connect(CliMain.java:73)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:249)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.thrift.transport.TSocket.open(TSocket.java:178)
... 3 more
Exception connecting to localhost/9052. Reason: Connection refused.
stdout: Not connected to a cassandra instance.
Not connected to a cassandra instance.
I've tried altering the port, and changing the localhost hostname to 127.0.0.1 and 0.0.0.0, but this makes no real difference.
I'm using java version "1.7.0_71".
Any ideas would be appreciated, thanks.
The issue was Cassandra trying to start on a hostname that was not in my /etc/hosts file.
I found out which hostname by running /Users/xxx/dev/xxxx/build/cassandra/bin/cassandra -f, which gave better output as to what the issue was.
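In other words, once the foreground output names the unresolvable hostname, a mapping along these lines in /etc/hosts fixes it (myhost is a placeholder for whatever hostname the output reports):
127.0.0.1   localhost myhost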

ConnectException when submitting hadoop job from eclipse

I'm trying to submit a job (a simple word count) to hadoop-2.5.0 (installed on an Ubuntu 14.04.1 server running in a virtual machine) from Eclipse on Windows. In the job configuration, I've set "fs.defaultFS" to "hdfs://192.168.2.216:8020" (as suggested in this thread), but when I run the main program I get the following exception:
WARN - NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ERROR - Shell - Failed to locate the winutils binary in the hadoop binary path
Exception in thread "main" java.net.ConnectException: Call From EL-OUED/192.168.2.8 to 192.168.2.216:8020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at com.heavenize.hadoop.WordCountMR.main(WordCountMR.java:55)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 28 more
Also, when checking the connection configuration on Hadoop, it seems it is listening for/accepting connections only on 127.0.0.1:8020.
$netstat -lent | grep 8020
tcp 0 0 127.0.0.1:8020 0.0.0.0:* LISTEN 1001 10380
This is the content of core-site.xml. I wonder if it is the source of this problem, and how can I fix it?
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost</value>
  </property>
</configuration>
Basically your namenode is listening on the localhost interface, so it allows connections only from 127.0.0.1. As you suggested, the error is indeed in the fs.default.name parameter, which should be modified to use the hostname instead of localhost.
Beware that /etc/hosts should contain a line like
192.168.2.216 hostname.fully.qualified.domain.com hostname
You can verify that the hostname is properly set by running the commands "hostname" and "hostname -f". "hostname" should return the name of the system as returned by gethostname, while "hostname -f" should return the FQDN of the system.
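Putting this together, a corrected core-site.xml would look like the following sketch, using the example FQDN from above (substitute your namenode's resolvable hostname, and keep the port consistent with what the client uses):
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hostname.fully.qualified.domain.com:8020</value>
  </property>
</configuration>
(On Hadoop 2.x, fs.defaultFS is the preferred name for this property; fs.default.name is its deprecated alias.) After changing it, restart the namenode and re-check with netstat that it now listens on the external interface rather than 127.0.0.1.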

ConnectException (timed out) running groovy Koans with gradle wrapper

I'm trying to run the Groovy Koans (http://groovykoans.org/), and when I use the gradlew script, it tries to download Gradle from the internet (from http://services.gradle.org/distributions/gradle-1.8-bin.zip).
But it crashes with a connection timed out exception. I'm able to download the file fine from Firefox. I've included the HTTP proxy args on the command line as per the instructions, and I can ping services.gradle.org from my machine.
I'm on Windows.
C:\Users\me\My Documents\documents\work\build_system\groovykoans-master>gradlew removeSolutions -Dhttp.proxyHost=proxy.blah.com -Dhttp.proxyPort=8000
Downloading http://services.gradle.org/distributions/gradle-1.8-bin.zip
Exception in thread "main" java.lang.RuntimeException: java.net.ConnectException: Connection timed out: connect
at org.gradle.wrapper.ExclusiveFileAccessManager.access(ExclusiveFileAccessManager.java:78)
at org.gradle.wrapper.Install.createDist(Install.java:47)
at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:129)
at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:48)
Caused by: java.net.ConnectException: Connection timed out: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)
at sun.net.www.http.HttpClient.New(HttpClient.java:290)
at sun.net.www.http.HttpClient.New(HttpClient.java:306)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1299)
at org.gradle.wrapper.Download.downloadInternal(Download.java:59)
at org.gradle.wrapper.Download.download(Download.java:45)
at org.gradle.wrapper.Install$1.call(Install.java:60)
at org.gradle.wrapper.Install$1.call(Install.java:47)
at org.gradle.wrapper.ExclusiveFileAccessManager.access(ExclusiveFileAccessManager.java:65)
... 3 more
If I can't solve the connection issue, is there a way I can manually install the Gradle distribution that I've successfully downloaded via my browser and bypass the download step of the Gradle wrapper?
Well, I managed to work out where the Gradle zip was being put (C:\Users\me\.gradle\wrapper\dists\gradle-1.8-bin\vruqmccc8532n7gr46qavsii8), so I dropped my separately downloaded zip in there and it got me past the issue.
However, since then I've also realized I was specifying the -Dhttp properties after the task name and not before it, so I suspect that had I done that, it would have worked. (I haven't retried it with a cleaned install area, though.) I.e. I should have used:
gradlew -Dhttp.proxyHost=proxy.blah.com -Dhttp.proxyPort=8000 removeSolutions
instead of:
gradlew removeSolutions -Dhttp.proxyHost=proxy.blah.com -Dhttp.proxyPort=8000
duh!
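As an aside, a more permanent alternative is to put the proxy settings in a gradle.properties file, either in the project directory or in %USERPROFILE%\.gradle, so they apply to every invocation without command-line flags. A sketch using the proxy host and port from the question:
systemProp.http.proxyHost=proxy.blah.com
systemProp.http.proxyPort=8000
systemProp.https.proxyHost=proxy.blah.com
systemProp.https.proxyPort=8000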

hadoop running application- ERROR security.UserGroupInformation: PriviledgedActionException

I have written the Hadoop WordCount code as a Java application in Eclipse to test running applications on Hadoop, but when I try to run it as the hdfs user, this error appears:
./hadoop jar /home/masi/eclipse_workspace/WordCount_apacheSample/bin/test2.jar WordCountApacheSample /user/hdfs/wordCountInput /user/hdfs/wordCountOutput
13/10/02 17:14:50 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
13/10/02 17:14:50 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
13/10/02 17:14:50 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.net.ConnectException: Call From virtual-machine/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Exception in thread "main" java.net.ConnectException: Call From virtual-machine/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:780)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:727)
at org.apache.hadoop.ipc.Client.call(Client.java:1239)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:630)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1559)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:811)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1345)
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:140)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:418)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:333)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1236)
at WordCountApacheSample.main(WordCountApacheSample.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:597)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:508)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:603)
at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:253)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1288)
at org.apache.hadoop.ipc.Client.call(Client.java:1206)
... 29 more
Although I have tested the input and output paths with hdfs://localhost:9000/, there is no difference!
BTW, I have studied many posts related to my problem, but they were not useful.
Any help is appreciated.
Thanks.
Finally I solved the problem by myself, and I've decided to share the reason here to help others :) The reason sounds somewhat silly, but the problem was this: the Hadoop daemons were stopped! My VM shut down suddenly, and after restarting it I had forgotten to start the daemons (datanode, namenode, ...) again. So the cause of this problem was that the datanode, namenode, and other daemons were not running.
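For reference, on a typical Hadoop 2.x single-node setup the daemons come back with the standard scripts. A sketch, assuming a stock install with the sbin directory on the PATH:
$ start-dfs.sh     # starts the namenode, secondary namenode and datanodes
$ start-yarn.sh    # starts the resourcemanager and nodemanagers
$ jps              # verify the expected daemon processes are listed
Once jps lists the NameNode and DataNode processes, the ConnectException on localhost:9000 should go away.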
If you discover that your HDFS is corrupt, then you can do the following:
sudo -su hdfs
hadoop fsck /
hadoop dfsadmin -safemode leave
... then delete the corrupted files, if any, using the following:
hadoop fs -rmr -skipTrash <folder with your files>
hadoop fsck -files delete /
Check the status:
hadoop fsck /
The status should be HEALTHY after this. Then manually restart everything in Ambari.
I tried this on a small cluster and managed to get it up and running again after having a similar error to the one mentioned above.
