ConnectException when submitting hadoop job from eclipse - java

I'm trying to submit a job (a simple word count) to hadoop-2.5.0 (installed on an Ubuntu 14.04.1 server running in a virtual machine) from Eclipse on Windows. In the job configuration, I've set "fs.defaultFS" to "hdfs://192.168.2.216:8020" (as suggested in this thread), but when I run the main program I get the following exception:
WARN - NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ERROR - Shell - Failed to locate the winutils binary in the hadoop binary path
Exception in thread "main" java.net.ConnectException: Call From EL-OUED/192.168.2.8 to 192.168.2.216:8020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at com.heavenize.hadoop.WordCountMR.main(WordCountMR.java:55)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 28 more
Also, when checking the connection configuration on Hadoop, it seems it is listening for/accepting connections only on 127.0.0.1:8020.
$netstat -lent | grep 8020
tcp 0 0 127.0.0.1:8020 0.0.0.0:* LISTEN 1001 10380
This is the content of core-site.xml; I wonder if it is the source of this problem and, if so, how to fix it:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost</value>
</property>
</configuration>

Basically, your namenode is listening on the localhost interface, so it accepts connections only from 127.0.0.1. As you suspected, the error is indeed in the fs.default.name parameter, which should be modified to use the hostname instead of localhost.
Beware that /etc/hosts should contain a line like
192.168.2.216 hostname.fully.qualified.domain.com hostname
You can verify that the hostname is properly set by running the commands "hostname" and "hostname -f". "hostname" should return the name of the system as returned by gethostname, while "hostname -f" should return the FQDN of the system.
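For illustration, a minimal core-site.xml along those lines (a sketch assuming the hostname from the /etc/hosts example above and the default NameNode port 8020; substitute your real FQDN) would be:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://hostname.fully.qualified.domain.com:8020</value>
</property>
</configuration>
After changing it, restart the namenode so it rebinds to the new address; netstat should then show it listening on 192.168.2.216:8020 rather than 127.0.0.1:8020.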

Related

Apache flink (Stable version 1.6.2) does not work

Recently, the stable version (1.6.2) of Apache Flink was released. I read these instructions, but when I run the following command:
./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
I get the following error:
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Job failed. (JobID: 264564a337d4c6705bde681b34010d28)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:268)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:486)
at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
at org.apache.flink.streaming.examples.socket.SocketWindowWordCount.main(SocketWindowWordCount.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:816)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:290)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:216)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1053)
at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1129)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1129)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:265)
... 20 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction.run(SocketTextStreamFunction.java:96)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:94)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
I found this link: Flink program cannot submit when i follow flink-1.4's quickstart and use "./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000". However, it didn't help. I tried Flink 1.6.2 with Hadoop 2.8 as well as Flink 1.5.5 with Hadoop 2.8, on macOS and Ubuntu, but I just got the same error.
Works fine for me. Only difference I can see is that I'm using the version of Flink without hadoop, but I doubt that's the issue.
java.net.ConnectException: Connection refused usually means that there is no service listening on port 9000. You should have netcat running on this port via
$ nc -l 9000
(see https://ci.apache.org/projects/flink/flink-docs-stable/quickstart/setup_quickstart.html#run-the-example). If netcat is running, and it's not working, maybe try another port? It might also help to check all the log files for additional clues.
I solved the annoying problem by reading the nc -h output (on Linux):
connect to somewhere: nc [-options] hostname port[s] [ports] ...
listen for inbound: nc -l -p port [-options] [hostname] [port]
So for listening on a port, -p is mandatory, and I tried the following command:
nc -l -p 9000
and it worked.
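Putting the two answers together, a quick end-to-end check (a sketch using port 9000 from the question; GNU netcat requires -p for listening, while BSD/macOS netcat does not) might look like:
# terminal 1: start the listener the Flink job will connect to
nc -l -p 9000
# terminal 2: confirm the port is open, then submit the job
netstat -lnt | grep 9000
./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000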

Embedded Firebird and Log4j configuration

I am trying to configure and use embedded Firebird with log4j. Essentially, I want to log my entries to a DB table (Firebird). I am unable to do so; I get a "Connection refused" error (complete call stack pasted below).
One possible cause of this error is a mismatch between 32-bit and 64-bit libraries being used/invoked, but if I write a simple Java program and use Jaybird-full-2.2.9.jar, I am able to connect and get data. It seems to be a problem only when using log4j's properties file.
Any help regarding this is appreciated.
log4j:ERROR Failed to excute sql
org.firebirdsql.jdbc.FBSQLException: GDS Exception. 335544721. Unable to complete network request to host "localhost".
at org.firebirdsql.jdbc.FBDataSource.getConnection(FBDataSource.java:120)
at org.firebirdsql.jdbc.AbstractDriver.connect(AbstractDriver.java:138)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.log4j.jdbc.JDBCAppender.getConnection(JDBCAppender.java:251)
at org.apache.log4j.jdbc.JDBCAppender.execute(JDBCAppender.java:215)
at org.apache.log4j.jdbc.JDBCAppender.flushBuffer(JDBCAppender.java:289)
at org.apache.log4j.jdbc.JDBCAppender.append(JDBCAppender.java:186)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.info(Category.java:666)
at com.xip.engines.Log4jAuditLoggerImpl.insert_AuitdLog(Log4jAuditLoggerImpl.java:20)
at com.xip.engines.Log4jAuditLoggerImpl.main(Log4jAuditLoggerImpl.java:40)
Caused by: org.firebirdsql.gds.GDSException: Unable to complete network request to host "localhost".
at org.firebirdsql.gds.impl.wire.AbstractJavaGDSImpl.connect(AbstractJavaGDSImpl.java:1876)
at org.firebirdsql.gds.impl.wire.AbstractJavaGDSImpl.internalAttachDatabase(AbstractJavaGDSImpl.java:431)
at org.firebirdsql.gds.impl.wire.AbstractJavaGDSImpl.iscAttachDatabase(AbstractJavaGDSImpl.java:411)
at org.firebirdsql.jca.FBManagedConnection.<init>(FBManagedConnection.java:105)
at org.firebirdsql.jca.FBManagedConnectionFactory.createManagedConnection(FBManagedConnectionFactory.java:509)
at org.firebirdsql.jca.FBStandAloneConnectionManager.allocateConnection(FBStandAloneConnectionManager.java:65)
at org.firebirdsql.jdbc.FBDataSource.getConnection(FBDataSource.java:118)
... 14 more
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.firebirdsql.gds.impl.wire.AbstractJavaGDSImpl.openSocket(AbstractJavaGDSImpl.java:1969)
at org.firebirdsql.gds.impl.wire.AbstractJavaGDSImpl.connect(AbstractJavaGDSImpl.java:1852)
... 20 more
Here is the log4j properties file I use.
# Define the root logger with file appender
log4j.rootLogger = ALL, DB
#log4j.category.org.firebirdsql=ALL, stdout
# Define the file appender
log4j.appender.DB=org.apache.log4j.jdbc.JDBCAppender
#log4j.appender.DB.URL=jdbc\:firebirdsql\://localhost\:3050/C\:\\xxxx\\xxxxx.FDB
log4j.appender.DB.URL=jdbc\:firebirdsql\://embedded\:C\:\\C\:\\xxxx\\xxxxx.FDB
#log4j.appender.DB.URL=jdbc\:firebirdsql\://local\:C\:\\C\:\\xxxx\\xxxxx.FDB
# Set Database Driver
log4j.appender.DB.driver=org.firebirdsql.jdbc.FBDriver
# Set database user name and password
log4j.appender.DB.user=SYSDBA
#log4j.appender.DB.password=masterkey
log4j.appender.DB.password=
# Set the SQL statement to be executed.
log4j.appender.DB.sql=INSERT INTO AUDITLOG VALUES (null,'%d{yyyy-MM-dd HH:mm:ss}', '%m', '%X{Type}', '%X{UserName}')
# Define the xml layout for file appender
log4j.appender.DB.layout=org.apache.log4j.PatternLayout
You are using an incorrect URL; the URL format for embedded is:
jdbc:firebirdsql:embedded:<database>
The URL shown in your config (after removing the colon escapes) is:
jdbc:firebirdsql://embedded:C:\\C\:\\xxxx\\xxxxx.FDB
Removing // should fix this (the same applies to the jdbc:firebirdsql:local URL shown commented out).
URLs with jdbc:firebirdsql://... are handled by the pure Java network protocol implementation. Given the config, I would have expected a different error (because parsing the URL would fail), but I assume this might be a result of trying different configurations.
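As a sketch, assuming the database file really sits at C:\xxxx\xxxxx.FDB (the placeholder path from the question), the corrected appender URL in the properties file would be:
# embedded Jaybird URL format is jdbc:firebirdsql:embedded:<database>, with colons escaped for the properties file
log4j.appender.DB.URL=jdbc\:firebirdsql\:embedded\:C\:\\xxxx\\xxxxx.FDB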

"Unable to execute HTTP Request: Broken Pipe" with Hadoop / s3 on Amazon EMR

I've developed a custom JAR that I'm using to process data in Elastic MapReduce. The data is several hundred thousand files coming from Amazon S3. The JAR doesn't do anything terribly funky to read data - it's just using CombineFileInputFormat.
When I run the job against a small amount of test data, everything executes flawlessly. However, when I run it against my full data set, a (random) amount of time into my job, I'll run into some sort of HTTP or socket error that's seemingly not getting properly handled.
During one job, I got the following in the SYSLOG:
2015-11-16 21:47:17,504 INFO com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem (main): exhausted retry un-registered class com.amazonaws.AmazonClientException
2015-11-16 21:47:17,504 INFO org.apache.hadoop.mapreduce.JobSubmitter (main): Cleaning up the staging area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1447686616083_0001
This was accompanied by the following in Standard Error:
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to execute HTTP request: Remote host closed connection during handshake
A second job threw a similar error in the SYSLOG, but I got this in Standard Error:
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to execute HTTP request: Broken pipe
(Full stack trace included at the bottom.)
I've built this for Hadoop 2.6.0, and I'm using the latest AWS build of Hadoop 2.6.0, so I'm not sure what's causing these errors. Does anybody have ideas for how I can get started troubleshooting this?
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to execute HTTP request: Broken pipe
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:500)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3604)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:999)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:977)
at com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:199)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy21.retrieveMetadata(Unknown Source)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.listStatus(S3NativeFileSystem.java:907)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.listStatus(S3NativeFileSystem.java:892)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.listStatus(EmrFileSystem.java:343)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1498)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1505)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1505)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1524)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1569)
at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1746)
at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1745)
at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1723)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:299)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:263)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:177)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at com.rw.legion.Legion.main(Legion.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
at sun.security.ssl.OutputRecord.writeBuffer(OutputRecord.java:377)
at sun.security.ssl.OutputRecord.write(OutputRecord.java:363)
at sun.security.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:837)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:808)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:679)
at sun.security.ssl.Handshaker.sendChangeCipherSpec(Handshaker.java:999)
at sun.security.ssl.ClientHandshaker.sendChangeCipherAndFinish(ClientHandshaker.java:1161)
at sun.security.ssl.ClientHandshaker.serverHelloDone(ClientHandshaker.java:1073)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:341)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:901)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:837)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1023)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:535)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:403)
at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:128)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:728)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
... 41 more
Set the Amazon client's configuration to the following to increase the timeouts; a network timeout may be the issue in this situation.
configuration.setMaxErrorRetry(3);
configuration.setConnectionTimeout(50*1000);
configuration.setSocketTimeout(50*1000);
configuration.setProtocol(Protocol.HTTP);
Also, you need to check the certificates allowed for the connection.
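For reference, a minimal sketch of wiring those settings into a client (assuming the answer's configuration object is a com.amazonaws.ClientConfiguration from the AWS SDK for Java v1 and that you construct the S3 client yourself) might look like:
import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.services.s3.AmazonS3Client;
public class S3ClientConfigExample {
    public static void main(String[] args) {
        // Build a client configuration with more retries and longer timeouts.
        ClientConfiguration configuration = new ClientConfiguration();
        configuration.setMaxErrorRetry(3);              // retry transient request failures a few times
        configuration.setConnectionTimeout(50 * 1000);  // 50 s to establish a connection
        configuration.setSocketTimeout(50 * 1000);      // 50 s for socket reads/writes
        configuration.setProtocol(Protocol.HTTP);       // plain HTTP sidesteps the failing TLS handshake
        // Hand the configuration to the S3 client (default credential chain).
        AmazonS3Client s3 = new AmazonS3Client(configuration);
        System.out.println("S3 client created: " + s3);
    }
}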

hadoop running application- ERROR security.UserGroupInformation: PriviledgedActionException

I have written the Hadoop WordCount code as a Java application in Eclipse to test running applications on Hadoop, but when I try to run it as the hdfs user, this error appears:
./hadoop jar /home/masi/eclipse_workspace/WordCount_apacheSample/bin/test2.jar WordCountApacheSample /user/hdfs/wordCountInput /user/hdfs/wordCountOutput
13/10/02 17:14:50 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
13/10/02 17:14:50 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
13/10/02 17:14:50 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.net.ConnectException: Call From virtual-machine/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Exception in thread "main" java.net.ConnectException: Call From virtual-machine/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:780)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:727)
at org.apache.hadoop.ipc.Client.call(Client.java:1239)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:630)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1559)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:811)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1345)
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:140)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:418)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:333)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1236)
at WordCountApacheSample.main(WordCountApacheSample.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:597)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:508)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:603)
at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:253)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1288)
at org.apache.hadoop.ipc.Client.call(Client.java:1206)
... 29 more
Although I have tested the input and output paths with hdfs://localhost:9000/, there is no difference!
BTW, I have studied many posts related to my problem, but they were not useful.
Any help is appreciated.
Thanks.
Finally, I solved the problem myself and decided to share the reason here to help others :) The reason sounds somewhat silly, but the problem was this: the Hadoop daemons were stopped! My VM shut down suddenly, and after restarting the VM I had forgotten to start the daemons (datanode, namenode, ...) again. So the cause of this problem is that the datanode, namenode, and other daemons are not running.
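For reference, a typical way to bring the daemons back up on a Hadoop 2.x install (a sketch assuming the standard scripts under the Hadoop sbin directory) is:
# restart HDFS daemons (namenode, datanode, secondary namenode)
sbin/start-dfs.sh
# restart YARN daemons (resourcemanager, nodemanager)
sbin/start-yarn.sh
# verify that the expected daemons are running
jps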
If you discover that your HDFS is corrupt, then you can do the following:
sudo -su hdfs
hadoop fsck /
hadoop dfsadmin -safemode leave
... then delete the corrupted files, if any, using the following:
hadoop fs -rmr -skipTrash <folder with your files>
hadoop fsck -files delete /
Check the status:
hadoop fsck /
The status should be HEALTHY after this; then manually restart everything in Ambari.
I tried this on a small cluster and managed to get it up and running again after having a similar error to the one mentioned above.

Hadoop Connection Refused Error

I was trying to install the RMR (RHadoop) package and somehow managed to mess up my Hadoop setup. Now it gives a connection refused error that I just can't find a solution for. Any help would be appreciated. Thanks.
java.net.ConnectException: Call to master/***.***.***.***:54310 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095)
at org.apache.hadoop.ipc.Client.call(Client.java:1071)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy2.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.mapred.Child$4.run(Child.java:254)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1202)
at org.apache.hadoop.ipc.Client.call(Client.java:1046)
... 18 more
When you see this, it basically means that you are unable to connect to the NameNode. It's either not running or running on a different port. If you backed up your working *-site.xml files, you may be able to go back to the working version without the complete re-install you suggest in the comment on your question.
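A quick way to check both possibilities (a sketch assuming port 54310 from the stack trace) is:
# is the NameNode process running at all?
jps | grep NameNode
# is anything listening on the port the client is trying to reach?
netstat -lent | grep 54310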
I struggled for two days and the night in between to find the answer to this problem.
In my case (and I'm sure this is the problem in most cases) I had to create the Hadoop temporary folders by hand and add them to hdfs-site.xml:
<property>
<name>dfs.data.dir</name>
<value>/home/stefan/Downloads/hadoop-2.7.1/tmp/dfs/name/data</value>
<final>true</final>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/stefan/Downloads/hadoop-2.7.1/tmp/dfs/name</value>
<final>true</final>
</property>
I hope this helps you guys avoid going through the same hell as I did.
Besides that, run:
chown user_name hadoop_folder hadoop_temp_folder
chmod 755 hadoop_folder hadoop_temp_folder
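Putting it together, a sketch using the paths from the hdfs-site.xml above (adjust the user and paths to your own setup):
# create the name and data directories by hand before starting HDFS
mkdir -p /home/stefan/Downloads/hadoop-2.7.1/tmp/dfs/name/data
# give your user ownership and workable permissions
chown -R stefan /home/stefan/Downloads/hadoop-2.7.1/tmp
chmod -R 755 /home/stefan/Downloads/hadoop-2.7.1/tmp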
