When I start HBase, the HRegionServer and HMaster on my master machine start successfully, but the slaves' HRegionServers cannot start.
My Hadoop version is 2.8.0, HBase version is 1.2.6, and ZooKeeper version is 3.4.9.
My hbase-site.xml is:
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master</name>
<value>60000</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/hbase/zoodata</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop,slave03,slave02</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.regionserver.port</name>
<value>60020</value>
</property>
and my regionservers file is:
hadoop
slave03
slave02
Here is the error message:
2017-06-16 15:09:34,730 ERROR [main]
regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2682)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2697)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2680)
... 5 more
Caused by: java.io.IOException: Problem binding to slave02/119.29.83.97:60020 : Cannot assign requested address. To switch ports use the 'hbase.regionserver.port' configuration property.
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:938)
at org.apache.hadoop.hbase.regionserver.HRegionServer.createRpcServices(HRegionServer.java:647)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:531)
... 10 more
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.hbase.ipc.RpcServer.bind(RpcServer.java:2592)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener.<init>(RpcServer.java:585)
at org.apache.hadoop.hbase.ipc.RpcServer.<init>(RpcServer.java:2045)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:930)
... 12 more
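A BindException of "Cannot assign requested address" means the IP that the hostname resolves to is not assigned to any local network interface. Below is a minimal diagnostic sketch (an editorial assumption, not part of the original question) that can be run on the failing slave02 to compare what the hostname resolves to against the local address:

import java.net.InetAddress;

// Diagnostic sketch: print what "slave02" resolves to on this host. The bind
// fails because the resolved address (119.29.83.97 in the log above) is not
// bound to any local interface, which usually points to a stale /etc/hosts or
// DNS entry for the slave's hostname.
public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        for (InetAddress addr : InetAddress.getAllByName("slave02")) {
            System.out.println("slave02 resolves to: " + addr.getHostAddress());
        }
        System.out.println("local host address: "
                + InetAddress.getLocalHost().getHostAddress());
    }
}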
I hope for some help, thank you so much!
Hi, I am a beginner to Hadoop.
I just installed Hive 2.3.7 and set up the metastore with MySQL
according to this tutorial https://www.guru99.com/hive-metastore-configuration-mysql.html
and this one https://ravi-chamarthy.medium.com/apache-hive-configuration-with-mysql-metastore-3ecb9a0df3a1.
Here is my hdfs-site.xml file:
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
<description>metadata is stored in a MySQL server</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
<description>MySQL JDBC driver class</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
<description>user name for connecting to mysql server</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hivepassword</value>
<description>password for connecting to mysql server</description>
</property>
</configuration>
When I executed schematool -initSchema -dbType mysql,
everything was fine; it initialized the Hive 2.3.0 schema.
But when I started Hive and executed show databases; (or any other command),
I got these errors:
hive> show databases;
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is
generally unnecessary.
Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.common.collect.Iterators.emptyIterator()Lcom/google/common/collect/UnmodifiableIterator; from class org.apache.hadoop.hive.ql.exec.FetchOperator
at org.apache.hadoop.hive.ql.exec.FetchOperator.<init>(FetchOperator.java:108)
at org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:87)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:541)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Note: I am using MySQL 8.0.22, mysql-connector-java.jar (mysql-connector-java-8.0.22.jar),
Ubuntu 18.04, and Hadoop 3.1.4.
What version of Hadoop are you using? This might be caused by an incompatible Hadoop version: an IllegalAccessError on com.google.common.collect.Iterators.emptyIterator() is typical of a Guava version conflict between the Hive and Hadoop jars.
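One way to confirm such a clash (a hedged diagnostic sketch, not a definitive fix) is to print which guava jar the JVM actually loads the conflicting class from:

// Diagnostic sketch: print the jar that supplies the Guava Iterators class
// named in the IllegalAccessError above. If it comes from Hadoop's
// share/hadoop/common/lib rather than Hive's lib, the two ship different
// Guava versions.
public class GuavaCheck {
    public static void main(String[] args) {
        System.out.println(com.google.common.collect.Iterators.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}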
Given the following local setup:
IBM WebSphere MQ Advanced for Developers V8.0
Payara 4.1.2.172
I'd like to connect to the local queue manager via JMS on a port other than the default (1414).
Although I have added several properties to the connection factory to configure port 1415, the server still seems to try to connect via port 1414, as Payara constantly throws java.net.ConnectException.
The relevant part of my domain.xml:
<connector-connection-pool resource-adapter-name="wmq.jmsra" name="jms/testCP" connection-definition-name="javax.jms.ConnectionFactory" transaction-support="XATransaction"></connector-connection-pool>
<connector-resource pool-name="jms/testCP" jndi-name="jms/testCF">
<property name="transportType" value="CLIENT"></property>
<property name="port" value="1415"></property>
<property name="channel" value="CHANNEL1"></property>
<property name="hostName" value="localhost"></property>
<property name="localAddress" value="localhost(1415)"></property>
<property name="connectionNameList" value="localhost(1415)"></property>
<property name="queuemanager" value="testQM"></property>
<property name="username" value="mqm"></property>
</connector-resource>
However, the exception in server.log suggests that the resource adapter still wants to connect via port 1414:
[2017-08-20T12:41:47.366+0200] [Payara 4.1] [SEVERE] [] [javax.enterprise.system.core] [tid: _ThreadID=63 _ThreadName=AutoDeployer] [timeMillis: 1503225707366] [levelValue: 1000] [[
Exception while loading the app : EJB Container initialization error
java.lang.Exception
at com.sun.enterprise.connectors.inbound.ConnectorMessageBeanClient.setup(ConnectorMessageBeanClient.java:215)
at org.glassfish.ejb.mdb.MessageBeanContainer.<init>(MessageBeanContainer.java:244)
at org.glassfish.ejb.mdb.MessageBeanContainerFactory.createContainer(MessageBeanContainerFactory.java:63)
at org.glassfish.ejb.startup.EjbApplication.loadContainers(EjbApplication.java:224)
at org.glassfish.ejb.startup.EjbDeployer.load(EjbDeployer.java:290)
at org.glassfish.ejb.startup.EjbDeployer.load(EjbDeployer.java:100)
at org.glassfish.internal.data.ModuleInfo.load(ModuleInfo.java:206)
at org.glassfish.internal.data.ApplicationInfo.load(ApplicationInfo.java:314)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:497)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:220)
at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:487)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:539)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:535)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:534)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:565)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:556)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1464)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1300(CommandRunnerImpl.java:109)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1846)
at org.glassfish.deployment.autodeploy.AutoOperation.run(AutoOperation.java:164)
at org.glassfish.deployment.autodeploy.AutoDeployer.deploy(AutoDeployer.java:597)
at org.glassfish.deployment.autodeploy.AutoDeployer.deployAll(AutoDeployer.java:484)
at org.glassfish.deployment.autodeploy.AutoDeployer.run(AutoDeployer.java:412)
at org.glassfish.deployment.autodeploy.AutoDeployer.run(AutoDeployer.java:403)
at org.glassfish.deployment.autodeploy.AutoDeployService$1.run(AutoDeployService.java:233)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Caused by: com.ibm.mq.connector.DetailedResourceAdapterInternalException: MQJCA1011: Failed to allocate a JMS connection., error code: MQJCA1011 An internal error caused an attempt to allocate a connection to fail. See the linked exception for details of the failure.
at com.ibm.mq.connector.services.JCAExceptionBuilder.buildException(JCAExceptionBuilder.java:174)
at com.ibm.mq.connector.services.JCAExceptionBuilder.buildException(JCAExceptionBuilder.java:135)
at com.ibm.mq.connector.inbound.ConnectionHandler.allocateConnection(ConnectionHandler.java:393)
at com.ibm.mq.connector.inbound.MessageEndpointDeployment.acquireConnection(MessageEndpointDeployment.java:288)
at com.ibm.mq.connector.inbound.MessageEndpointDeployment.<init>(MessageEndpointDeployment.java:228)
at com.ibm.mq.connector.ResourceAdapterImpl.endpointActivation(ResourceAdapterImpl.java:531)
at com.sun.enterprise.connectors.inbound.ConnectorMessageBeanClient.setup(ConnectorMessageBeanClient.java:207)
... 31 more
Caused by: com.ibm.msg.client.jms.DetailedIllegalStateException: JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Client' and host name 'localhost(1414)'.
Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:489)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:413)
at com.ibm.msg.client.wmq.internal.WMQXAConnection.<init>(WMQXAConnection.java:67)
at com.ibm.msg.client.wmq.factories.WMQXAConnectionFactory.createV7ProviderConnection(WMQXAConnectionFactory.java:188)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:7814)
at com.ibm.msg.client.wmq.factories.WMQXAConnectionFactory.createProviderXAConnection(WMQXAConnectionFactory.java:98)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createXAConnectionInternal(JmsConnectionFactoryImpl.java:347)
at com.ibm.mq.jms.MQXAConnectionFactory.createXAConnection(MQXAConnectionFactory.java:131)
at com.ibm.mq.connector.inbound.ConnectionHandler.allocateConnection(ConnectionHandler.java:268)
... 35 more
Caused by: com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2538' ('MQRC_HOST_NOT_AVAILABLE').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:203)
... 43 more
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2538;AMQ9204: Connection to host 'localhost(1414)' rejected. [1=com.ibm.mq.jmqi.JmqiException[CC=2;RC=2538;AMQ9213: A communications error for 'TCP' occurred. [1=java.net.ConnectException[Connection refused (Connection refused)],3=connnectUsingLocalAddress,4=TCP,5=Socket.connect]],3=localhost(1414),5=RemoteTCPConnection.connnectUsingLocalAddress]
at com.ibm.mq.jmqi.remote.api.RemoteFAP.jmqiConnect(RemoteFAP.java:2282)
at com.ibm.mq.jmqi.remote.api.RemoteFAP.jmqiConnect(RemoteFAP.java:1294)
at com.ibm.mq.ese.jmqi.InterceptedJmqiImpl.jmqiConnect(InterceptedJmqiImpl.java:376)
at com.ibm.mq.ese.jmqi.ESEJMQI.jmqiConnect(ESEJMQI.java:560)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:346)
... 42 more
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2538;AMQ9213: A communications error for 'TCP' occurred. [1=java.net.ConnectException[Connection refused (Connection refused)],3=connnectUsingLocalAddress,4=TCP,5=Socket.connect]
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.connnectUsingLocalAddress(RemoteTCPConnection.java:838)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.protocolConnect(RemoteTCPConnection.java:1277)
at com.ibm.mq.jmqi.remote.impl.RemoteConnection.connect(RemoteConnection.java:863)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionSpecification.getSessionFromNewConnection(RemoteConnectionSpecification.java:409)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionSpecification.getSession(RemoteConnectionSpecification.java:305)
at com.ibm.mq.jmqi.remote.impl.RemoteConnectionPool.getSession(RemoteConnectionPool.java:146)
at com.ibm.mq.jmqi.remote.api.RemoteFAP.jmqiConnect(RemoteFAP.java:1730)
... 46 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection$5.run(RemoteTCPConnection.java:823)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection$5.run(RemoteTCPConnection.java:814)
at java.security.AccessController.doPrivileged(Native Method)
at com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection.connnectUsingLocalAddress(RemoteTCPConnection.java:814)
... 52 more
]]
I'm out of ideas as to why the exception says:
JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Client' and host name 'localhost(1414)'.
Question 1: Why is queue manager '' (empty)?
Question 2: Why is host name 'localhost(1414)'?
Any help would be very much appreciated!
In reviewing the IBM MQ v8 Knowledge Center page "Installing and testing the resource adapter in GlassFish Server", it appears you have the wrong property names for the host name and queue manager. Try the following config, and check out the documentation link above to make sure you have installed the RA and completed the other steps.
<connector-connection-pool resource-adapter-name="wmq.jmsra" name="jms/testCP" connection-definition-name="javax.jms.ConnectionFactory" transaction-support="XATransaction"></connector-connection-pool>
<connector-resource pool-name="jms/testCP" jndi-name="jms/testCF">
<property name="transportType" value="CLIENT"></property>
<property name="port" value="1415"></property>
<property name="channel" value="CHANNEL1"></property>
<property name="host" value="localhost"></property>
<property name="queueManager" value="testQM"></property>
</connector-resource>
I have found that the article referenced by JoshMC is incorrect. Step 6.f should not form part of the "Connector Resource" (connection factory) configuration, but of the "Connector Connection Pool" configuration, i.e. between steps 5.f and 5.g.
If you create a new connection pool using the Admin Console, it displays many properties that can be populated. The following are the important ones:
channel
port
hostName
queueManager
transportType
username
such that the domain.xml should contain the following:
<connector-connection-pool resource-adapter-name="wmq.jmsra" name="jms/testCP" connection-definition-name="javax.jms.ConnectionFactory" transaction-support="XATransaction">
<property name="channel" value="CHANNEL1"></property>
<property name="port" value="1415"></property>
<property name="hostName" value="localhost"></property>
<property name="queueManager" value="testQM"></property>
<property name="transportType" value="CLIENT"></property>
<property name="username" value="mqm"></property>
</connector-connection-pool>
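For completeness, here is a minimal sketch (an editorial assumption, not from the thread) of how application code running inside the container would exercise the pool configured above, via the jms/testCF JNDI name:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.InitialContext;

// Sketch: look up the factory bound above and open a connection.
// createConnection() fails fast if the channel/host/port settings are wrong,
// so this verifies the pool can actually reach the queue manager on 1415.
public class MqConnectivityCheck {
    public void verify() throws Exception {
        ConnectionFactory cf =
                (ConnectionFactory) new InitialContext().lookup("jms/testCF");
        try (Connection conn = cf.createConnection()) { // JMS 2.0: AutoCloseable
            conn.start();
        }
    }
}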
We found that we had been using the wrong version of the wmq.jmsra adapter:
7.5.0.4-p750-004-140807. With the newer version 8.0.0.8-p800-008-171121, everything worked fine.
I have followed this site https://blogs.msdn.microsoft.com/arsen/2016/08/05/accessing-azure-data-lake-store-using-webhdfs-with-oauth2-from-spark-2-0-that-is-running-locally/ to connect ADLS storage to my Azure VM.
Created an Azure VM and installed my application on it
Created an Azure Data Lake Store and a service principal
Here is my core-site.xml:
<configuration>
<property>
<name>dfs.webhdfs.oauth2.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.webhdfs.oauth2.access.token.provider</name>
<value>org.apache.hadoop.hdfs.web.oauth2.ConfRefreshTokenBasedAccessTokenProvider</value>
</property>
<property>
<name>dfs.webhdfs.oauth2.refresh.url</name>
<value>https://login.windows.net/tenant-id-here/oauth2/token</value>
</property>
<property>
<name>dfs.webhdfs.oauth2.client.id</name>
<value>Client id</value>
</property>
<property>
<name>dfs.webhdfs.oauth2.refresh.token.expires.ms.since.epoch</name>
<value>0</value>
</property>
<property>
<name>dfs.webhdfs.oauth2.refresh.token</name>
<value>Refresh token</value>
</property>
</configuration>
I have installed my application on the Azure VM, and I get the following error when I upload a file in my application.
2017-01-27 12:54:25.963 GMT+0000 WARN [admin-1fd467a4c41f43fe9f30ab446a5c93ac-84-b6792518109848bead029c9144603d04-libraryService.importDataFiles] LibraryImpl - Failed to write data file partID: 0 at: library/51dc056c0a634beba243120501fe70d6/545ca95c2a894f948b1f5184b013a53e/5c68d893090f471d81f3cdfc810bc4f7/b6d5ceb64bfd4d65ba4ea24d24f99e90
java.io.IOException: Mkdirs failed to create file:/clusters/myapp/library/51dc056c0a634beba243120501fe70d6/545ca95c2a894f948b1f5184b013a53e/5c68d893090f471d81f3cdfc810bc4f7/b6d5ceb64bfd4d65ba4ea24d24f99e90/data (exists=false, cwd=file:/home/palmtree/work/software/myapp-2.5-SNAPSHOT/myapp)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:450)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
at parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:150)
at parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:176)
at parquet.avro.AvroParquetWriter.<init>(AvroParquetWriter.java:93)
at com.myapp.hadoop.common.PaxParquetWriterImpl.doWriteRow(PaxParquetWriterImpl.java:52)
at com.myapp.hadoop.common.PaxParquetWriterImpl.access$000(PaxParquetWriterImpl.java:19)
at com.myapp.hadoop.common.PaxParquetWriterImpl$1.run(PaxParquetWriterImpl.java:43)
at com.myapp.hadoop.common.PaxParquetWriterImpl$1.run(PaxParquetWriterImpl.java:40)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at com.myapp.hadoop.common.PaxParquetWriterImpl.writeRow(PaxParquetWriterImpl.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.myapp.hadoop.core.DistributionManager$$anon$10.invoke(DistributionManager.scala:313)
at com.sun.proxy.$Proxy56.writeRow(Unknown Source)
at com.myapp.library.stacks.DataFileWriter.write(DataFileWriter.java:49)
at com.myapp.library.LibraryImpl.pullImportData(LibraryImpl.java:747)
at com.myapp.library.LibraryImpl.importDataFile(LibraryImpl.java:631)
at com.myapp.frontend.server.LibraryAPI.importDataFile(LibraryAPI.java:269)
at com.myapp.frontend.server.LibraryWebSocketDelegate.importDataFile(LibraryWebSocketDelegate.java:189)
at com.myapp.frontend.server.LibraryWebSocketDelegate.importDataFiles(LibraryWebSocketDelegate.java:204)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.myapp.frontend.util.PXWebSocketProtocolHandler$PXMethodHandler.call(PXWebSocketProtocolHandler.java:144)
at com.myapp.frontend.util.PXWebSocketEndpoint.performMethodCall(PXWebSocketEndpoint.java:284)
at com.myapp.frontend.util.PXWebSocketEndpoint.access$200(PXWebSocketEndpoint.java:47)
at com.myapp.frontend.util.PXWebSocketEndpoint$1.run(PXWebSocketEndpoint.java:169)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2017-01-27 12:54:25.966 GMT+0000 WARN [admin-1fd467a4c41f43fe9f30ab446a5c93ac-84-b6792518109848bead029c9144603d04-libraryService.importDataFiles] LibraryImpl - Failed to import acquisition da73b76755c34c74a1643a324e41e156
com.myapp.iface.service.RequestFailedException
at com.myapp.library.LibraryImpl.pullImportData(LibraryImpl.java:754)
at com.myapp.library.LibraryImpl.importDataFile(LibraryImpl.java:631)
at com.myapp.frontend.server.LibraryAPI.importDataFile(LibraryAPI.java:269)
at com.myapp.frontend.server.LibraryWebSocketDelegate.importDataFile(LibraryWebSocketDelegate.java:189)
at com.myapp.frontend.server.LibraryWebSocketDelegate.importDataFiles(LibraryWebSocketDelegate.java:204)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.myapp.frontend.util.PXWebSocketProtocolHandler$PXMethodHandler.call(PXWebSocketProtocolHandler.java:144)
at com.myapp.frontend.util.PXWebSocketEndpoint.performMethodCall(PXWebSocketEndpoint.java:284)
at com.myapp.frontend.util.PXWebSocketEndpoint.access$200(PXWebSocketEndpoint.java:47)
at com.myapp.frontend.util.PXWebSocketEndpoint$1.run(PXWebSocketEndpoint.java:169)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Kindly help me solve this.
Update 1:
I tried the following to connect my application on the Azure VM to ADLS:
Added azure-data-lake-store-sdk to lib
Followed this Service-to-service authentication guide to create an application in Azure Active Directory
Assigned the Azure AD application to the ADLS account root directory
Root directory -> /clusters/myapp
Updated core-site.xml based on values from the above documentation:
<configuration>
<property>
<name>dfs.adls.home.hostname</name>
<value>dev.azuredatalakestore.net</value>
</property>
<property>
<name>dfs.adls.home.mountpoint</name>
<value>/clusters</value>
</property>
<property>
<name>fs.adl.impl</name>
<value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>
<property>
<name>fs.AbstractFileSystem.adl.impl</name>
<value>org.apache.hadoop.fs.adl.Adl</value>
</property>
<property>
<name>dfs.adls.oauth2.refresh.url</name>
<value>https://login.windows.net/[tenantId]/oauth2/token</value>
</property>
<property>
<name>dfs.adls.oauth2.client.id</name>
<value>[CLIENT ID]</value>
</property>
<property>
<name>dfs.adls.oauth2.credential</name>
<value>[CLIENT KEY]</value>
</property>
<property>
<name>dfs.adls.oauth2.access.token.provider.type</name>
<value>ClientCredential</value>
</property>
<property>
<name>fs.azure.io.copyblob.retry.max.retries</name>
<value>60</value>
</property>
<property>
<name>fs.azure.io.read.tolerate.concurrent.append</name>
<value>true</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>adl://dev.azuredatalakestore.net</value>
<final>true</final>
</property>
<property>
<name>fs.trash.interval</name>
<value>360</value>
</property>
</configuration>
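One minimal sketch (an editorial assumption, not from the question) for checking whether this configuration actually makes Hadoop pick AdlFileSystem for the adl:// scheme; the stack trace below shows WebHdfsFileSystem, which suggests the adl configuration is not being picked up:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: load the core-site.xml above (the path here is hypothetical) and
// print which FileSystem implementation handles the adl:// URI. Expect
// org.apache.hadoop.fs.adl.AdlFileSystem; seeing WebHdfsFileSystem means the
// old webhdfs settings are still in effect.
public class AdlFsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/path/to/core-site.xml")); // hypothetical path
        FileSystem fs = FileSystem.get(
                URI.create("adl://dev.azuredatalakestore.net/"), conf);
        System.out.println(fs.getClass().getName());
    }
}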
I am getting the following error when I start my application server in the VM:
2017-02-02 07:40:27.527 GMT+0000 INFO [main] DistributionManager - Looking for class loader for distroName=adl kerberized=false
2017-02-02 07:40:28.428 GMT+0000 ERROR [main] SimpleHdfsFileSystem - Failed to initialize HDFS file storage on null as hdfs root /myapp
org.apache.hadoop.security.AccessControlException: Unauthorized
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:347)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:98)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:623)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:472)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:502)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:498)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.mkdirs(WebHdfsFileSystem.java:919)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
at com.myapp.hadoop.common.HdfsFileSystem$1.run(HdfsFileSystem.java:98)
at com.myapp.hadoop.common.HdfsFileSystem$1.run(HdfsFileSystem.java:91)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at com.myapp.hadoop.common.HdfsFileSystem.__initialize(HdfsFileSystem.java:91)
at com.myapp.hadoop.common.SimpleHdfsFileSystem.initialize(SimpleHdfsFileSystem.java:40)
at com.myapp.hadoop.hdp2.HadoopDistributionImpl.initializeHdfs(HadoopDistributionImpl.java:63)
at com.myapp.hadoop.hdp2.UnsecureHadoopDistributionImpl.connectToFileSystem(UnsecureHadoopDistributionImpl.java:22)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.myapp.hadoop.core.DistributionManager$$anon$1.invoke(DistributionManager.scala:135)
at com.sun.proxy.$Proxy22.connectToFileSystem(Unknown Source)
at com.myapp.library.LibraryStorageImpl.parseSimpleAuthFileSystem(LibraryStorageImpl.scala:126)
at com.myapp.library.LibraryStorageImpl.initializeStorageWithPrefix(LibraryStorageImpl.scala:64)
at com.myapp.library.LibraryStorageImpl.initialize(LibraryStorageImpl.scala:39)
at com.myapp.library.LibraryStorageImpl.initialize(LibraryStorageImpl.scala:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1581)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1522)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:274)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1106)
at com.myapp.container.PxBeanContext.getBean(PxBeanContext.java:156)
at com.myapp.library.streaming.files.UploadFileServiceImpl.initialize(UploadFileServiceImpl.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1581)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1522)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:609)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:469)
at com.myapp.container.PxBeanContext.startup(PxBeanContext.java:42)
at com.myapp.jetty.FrontendServer.main(FrontendServer.java:124)
2017-02-02 07:40:28.462 GMT+0000 WARN [main] server - HQ222113: On ManagementService stop, there are 1 unexpected registered MBeans: [core.acceptor.dc9ff2aa-e91a-11e6-9a51-09b76b4431e6]
2017-02-02 07:40:28.479 GMT+0000 INFO [main] server - HQ221002: HornetQ Server version 2.5.0.SNAPSHOT (Wild Hornet, 124) [7039110c-dd57-11e6-b90d-2bc6685808f5] stopped
2017-02-02 07:40:28.480 GMT+0000 ERROR [main] FrontendServer - Fatal error trying to start server
java.lang.RuntimeException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.myapp.library.streaming.files.UploadFileServiceImpl#0' defined in class path resource [system-config.xml]: Invocation of init method failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.myapp.library.LibraryStorageImpl#0' defined in class path resource [system-config.xml]: Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Unauthorized
at com.myapp.container.PxBeanContext.startup(PxBeanContext.java:44)
at com.myapp.jetty.FrontendServer.main(FrontendServer.java:124)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.myapp.library.streaming.files.UploadFileServiceImpl#0' defined in class path resource [system-config.xml]: Invocation of init method failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.myapp.library.LibraryStorageImpl#0' defined in class path resource [system-config.xml]: Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Unauthorized
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1455)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:609)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:469)
at com.myapp.container.PxBeanContext.startup(PxBeanContext.java:42)
... 1 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.myapp.library.LibraryStorageImpl#0' defined in class path resource [system-config.xml]: Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Unauthorized
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1455)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:274)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1106)
at com.myapp.container.PxBeanContext.getBean(PxBeanContext.java:156)
at com.myapp.library.streaming.files.UploadFileServiceImpl.initialize(UploadFileServiceImpl.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1581)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1522)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
... 11 more
Caused by: java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Unauthorized
at com.myapp.hadoop.common.SimpleHdfsFileSystem.initialize(SimpleHdfsFileSystem.java:45)
at com.myapp.hadoop.hdp2.HadoopDistributionImpl.initializeHdfs(HadoopDistributionImpl.java:63)
at com.myapp.hadoop.hdp2.UnsecureHadoopDistributionImpl.connectToFileSystem(UnsecureHadoopDistributionImpl.java:22)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.myapp.hadoop.core.DistributionManager$$anon$1.invoke(DistributionManager.scala:135)
at com.sun.proxy.$Proxy22.connectToFileSystem(Unknown Source)
at com.myapp.library.LibraryStorageImpl.parseSimpleAuthFileSystem(LibraryStorageImpl.scala:126)
at com.myapp.library.LibraryStorageImpl.initializeStorageWithPrefix(LibraryStorageImpl.scala:64)
at com.myapp.library.LibraryStorageImpl.initialize(LibraryStorageImpl.scala:39)
at com.myapp.library.LibraryStorageImpl.initialize(LibraryStorageImpl.scala:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1581)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1522)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
... 28 more
Caused by: org.apache.hadoop.security.AccessControlException: Unauthorized
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:347)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:98)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:623)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:472)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:502)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:498)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.mkdirs(WebHdfsFileSystem.java:919)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
at com.myapp.hadoop.common.HdfsFileSystem$1.run(HdfsFileSystem.java:98)
at com.myapp.hadoop.common.HdfsFileSystem$1.run(HdfsFileSystem.java:91)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at com.myapp.hadoop.common.HdfsFileSystem.__initialize(HdfsFileSystem.java:91)
at com.myapp.hadoop.common.SimpleHdfsFileSystem.initialize(SimpleHdfsFileSystem.java:40)
... 47 more
I am using the following jars in my project:
azure-data-lake-store-sdk-2.1.4.jar
commons-cli-1.2.jar
commons-configuration-1.6.jar
hadoop-auth-2.7.1.jar
hadoop-azure-datalake-3.0.0-alpha1.jar
hadoop-common-2.5-SNAPSHOT.jar
hadoop-common-2.7.1.jar
hadoop-hdfs-2.7.3.jar
hadoop-hdp2-2.5-SNAPSHOT.jar
Clarification:
My intention is to connect my application on the Azure VM to the Data Lake Store without needing an HDInsight cluster. Is that possible? If so, what steps should I follow, and what configuration needs to be present in core-site.xml?
File preview fails with an AccessControlException error in ADLS:
Log in to the HDInsight cluster associated with the Data Lake Store using ssh - ssh [user]@[cluster2]-ssh.azurehdinsight.net
Copy a file to the cluster using wget - wget http://www.sample-videos.com/csv/Sample-Spreadsheet-10-rows.csv
Create a new folder in your Data Lake Store account
Now upload the file using the put command:
hdfs dfs -put Sample-Spreadsheet-10-rows.csv adl://dev2.azuredatalakestore.net/new
View the file in the Azure Portal
Actual result: the file is uploaded and shows in the Azure Portal, but the file preview is broken and I see the error below:
AccessControlException
OPEN failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.). [4f97235c-0852-44c8-a8d4-cbe190ffdb34]
How do I solve this issue?
Firstly, we really do not recommend that you use the swebhdfs path. As called out in Arsen’s blog, the adl client is much more performant. Here are directions for configuring the adl filesystem:
Hadoop Azure Data Lake Support
For your specific error, it looks like mkdir is being invoked on the local file system, as shown by the "file:" scheme in the Mkdirs error message.
To solve the error, follow the steps mentioned in Arsen's blog. After configuration, run an hdfs command against the swebhdfs path, like:
bin/hadoop fs -ls swebhdfs://avdatalake2.azuredatalakestore.net:443/
One more thing: Since posting that blog, Azure Data Lake now has full support for the Java SDK. Here is an article that describes how to use the Java SDK to perform basic file operations:
Get started with Azure Data Lake Store using Java
-- Cathy
The easiest way to connect to ADLS is to use the Java SDK that Cathy mentioned in her response.
Get started with Azure Data Lake Store using Java
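Here is a minimal sketch following that article's pattern, plugged into this question's own placeholders (the account name, tenant ID, client ID, and client key below are the question's placeholders, not real values):

import com.microsoft.azure.datalake.store.ADLStoreClient;
import com.microsoft.azure.datalake.store.DirectoryEntry;
import com.microsoft.azure.datalake.store.oauth2.AccessTokenProvider;
import com.microsoft.azure.datalake.store.oauth2.ClientCredsTokenProvider;

// Sketch: authenticate with the service principal and list a directory using
// the ADLS Java SDK directly, with no Hadoop client involved.
public class AdlsQuickStart {
    public static void main(String[] args) throws Exception {
        AccessTokenProvider provider = new ClientCredsTokenProvider(
                "https://login.windows.net/[tenantId]/oauth2/token",
                "[CLIENT ID]", "[CLIENT KEY]");
        ADLStoreClient client =
                ADLStoreClient.createClient("dev.azuredatalakestore.net", provider);
        for (DirectoryEntry entry : client.enumerateDirectory("/clusters")) {
            System.out.println(entry.fullName);
        }
    }
}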
In your example, why are you trying to connect from your Azure VM to the Data Lake Store using the Hadoop client? The Hadoop client is a more convoluted way to achieve the seemingly simple scenario of connecting from your app on an Azure VM to ADLS.
The Hadoop client is typically used to connect existing Hadoop clusters to ADLS. I have a feeling that is not what you are trying to do. Let us know if this is not the case.
As described in the title, I deployed a Hadoop v2.6.3 cluster on an internal network with static IPs like 10.0.0.x.
Then I ran the example WordCount program. However, the shell just gives the following output and hangs:
hadoop jar wc.jar WordCount /user/alex/data/kaggle.sample /user/alex/wc/output
16/04/06 10:44:29 INFO client.RMProxy: Connecting to ResourceManager at master/10.0.0.7:8032
16/04/06 10:44:29 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/04/06 10:44:30 INFO input.FileInputFormat: Total input paths to process : 1
16/04/06 10:44:30 INFO mapreduce.JobSubmitter: number of splits:1
16/04/06 10:44:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1459942813464_0002
16/04/06 10:44:30 INFO impl.YarnClientImpl: Submitted application application_1459942813464_0002
16/04/06 10:44:30 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1459942813464_0002/
16/04/06 10:44:30 INFO mapreduce.Job: Running job: job_1459942813464_0002
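As an aside, the JobResourceUploader warning in the output above refers to the driver class itself; a minimal sketch of the change it suggests is below (this silences the warning only and is unrelated to the hang):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Sketch: a driver that implements Tool and launches through ToolRunner, so
// generic options (-D, -files, ...) are parsed and the WARN above goes away.
public class WordCountDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // job setup (Job.getInstance(getConf()), input/output paths) goes here
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}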
Then I went to the Hadoop cluster web UI and found that the job status is ACCEPTED, not RUNNING. I checked the log file of the YARN ResourceManager, and its last ERROR message is this:
2016-04-06 10:34:42,466 ERROR org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: Error trying to assign container token and NM token to an allocated container container_1459942813464_0001_02_000001
java.lang.IllegalArgumentException: java.net.UnknownHostException: worker14.alex
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerToken(BuilderUtils.java:256)
at org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.createContainerToken(RMContainerTokenSecretManager.java:220)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.pullNewlyAllocatedContainersAndNMTokens(SchedulerApplicationAttempt.java:448)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.getAllocation(FiCaSchedulerApp.java:269)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:896)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:937)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:930)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:755)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:106)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:842)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:823)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:182)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: worker14.alex
... 19 more
My Hadoop configuration files are the following:
#core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:8020/</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/alex/hadoop-2.6.3/tmp/</value>
</property>
</configuration>
#yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/home/alex/hadoop-2.6.3/tmp/nm.local</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/home/alex/hadoop-2.6.3/log/nm.log</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
#mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>10.0.0.7:10020</value>
</property>
<property>
<name>yarn.app.mapreduce.am.staging-dir</name>
<value>/home/alex/hadoop-2.6.3/tmp/staging</value>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/home/alex/hadoop-2.6.3/tmp/mr-history/tmp</value>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/home/alex/hadoop-2.6.3/tmp/mr-history/done</value>
</property>
</configuration>
My /etc/hosts file maps IPs to master and worker1 through worker14,
and the slaves file lists master and worker1 through worker14.
It seems that my hostname resolution goes wrong: the host is reported as worker14.alex rather than worker14 (alex is my Linux username).
So what's wrong with my configuration? Do I need to restart all the servers, or just some of the services (e.g. service networking restart)?
Were you able to get to a resolution? I'm seeing the exact same issue; I see a Caused by: java.net.UnknownHostException: var exception. – Nishant Kelkar
Check this value in your yarn-site.xml:
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/var/log/hadoop-yarn/apps</value>
If you put "hdfs://" before the path, the error occurs.
I am trying to connect to HBase using Java (in Eclipse) in the Cloudera VM, but I am getting the error below. I am able to run the same program from the command line (by packaging it into a jar).
My Java program:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
//import org.apache.hadoop.mapred.MapTask;
import java.io.FileWriter;
import java.io.IOException;
public class HbaseConnection {
public static void main(String[] args) throws IOException {
Configuration config = HBaseConfiguration.create();
config.addResource("/usr/lib/hbase/conf/hbase-site.xml");
HTable table = new HTable(config, "test_table");
byte[] columnFamily = Bytes.toBytes("colf");
byte[] idColumnName = Bytes.toBytes("id");
byte[] groupIdColumnName = Bytes.toBytes("g_id");
Put put = new Put(Bytes.toBytes("testkey"));
put.add(columnFamily, idColumnName, Bytes.toBytes("test id"));
put.add(columnFamily, groupIdColumnName, Bytes.toBytes("test group id"));
table.put(put);
table.close();
}
}
And I have kept hbase-site.xml in the source folder in Eclipse.
hbase-site.xml:
<property>
<name>hbase.rest.port</name>
<value>8070</value>
<description>The port for the HBase REST server.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://quickstart.cloudera:8020/hbase</value>
</property>
<property>
<name>hbase.regionserver.ipc.address</name>
<value>0.0.0.0</value>
</property>
<property>
<name>hbase.master.ipc.address</name>
<value>0.0.0.0</value>
</property>
<property>
<name>hbase.thrift.info.bindAddress</name>
<value>0.0.0.0</value>
</property>
And I am getting the error below while running the program in Eclipse:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:389)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:366)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:247)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:188)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:150)
at com.aig.gds.hadoop.platform.idgen.hbase.HBaseTest.main(HBaseTest.java:34)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:387)
... 5 more
Caused by: java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.hdfs.DistributedFileSystem could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:224)
at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2400)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2411)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:197)
at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:69)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:801)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:633)
... 10 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations([Lorg/apache/hadoop/conf/Configuration$DeprecationDelta;)V
at org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:66)
at org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:31)
at org.apache.hadoop.hdfs.DistributedFileSystem.<clinit>(DistributedFileSystem.java:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:373)
... 26 more
Thanks in advance.
The root cause of your problem is in the stack trace:
NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations
This means that your hadoop-common-* jar version is not in sync with your hadoop-hdfs-* jar version, or you may have a mix of different versions on your classpath.
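A quick way to check for such a mix (a hedged diagnostic sketch) is to print which jars the two sides of the failing call are loaded from:

// Sketch: print the jars supplying hadoop-common's Configuration and
// hadoop-hdfs's DistributedFileSystem. The version strings in the two jar
// paths should match; if they differ, the classpath mixes Hadoop releases,
// which produces exactly this NoSuchMethodError.
public class HadoopJarCheck {
    public static void main(String[] args) {
        System.out.println(org.apache.hadoop.conf.Configuration.class
                .getProtectionDomain().getCodeSource().getLocation());
        System.out.println(org.apache.hadoop.hdfs.DistributedFileSystem.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}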
Note that addDeprecations is present in Hadoop 2.3.0 and later:
https://hadoop.apache.org/docs/r2.3.0/api/org/apache/hadoop/conf/Configuration.html
but was missing in 2.2.0 and prior:
https://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/conf/Configuration.html