Hive error after the command 'show databases;' - java

Hi, I am a beginner to Hadoop.
I just installed Hive 2.3.7 and set up the metastore with MySQL
according to this tutorial https://www.guru99.com/hive-metastore-configuration-mysql.html
and this one https://ravi-chamarthy.medium.com/apache-hive-configuration-with-mysql-metastore-3ecb9a0df3a1.
Here is my hive-site.xml file (these are the metastore connection properties both tutorials configure):
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
    <description>metadata is stored in a MySQL server</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>MySQL JDBC driver class</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>user name for connecting to mysql server</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
    <description>password for connecting to mysql server</description>
  </property>
</configuration>
When I executed schematool -initSchema -dbType mysql,
everything was fine; it initialized the Hive 2.3.0 schema.
But when I started Hive and ran show databases; (or any other command),
I got these errors:
hive> show databases;
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.common.collect.Iterators.emptyIterator()Lcom/google/common/collect/UnmodifiableIterator; from class org.apache.hadoop.hive.ql.exec.FetchOperator
at org.apache.hadoop.hive.ql.exec.FetchOperator.<init>(FetchOperator.java:108)
at org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:87)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:541)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Note: I am using MySQL 8.0.22 (with mysql-connector-java.jar / mysql-connector-java-8.0.22.jar),
Ubuntu 18.04, and Hadoop 3.1.4.

What version of Hadoop are you using? This might have to do with an incompatible version of Hadoop.
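If it is the known Guava clash between Hive 2.3.x and Hadoop 3.x, that would explain this exact trace: Hive bundles Guava 14, while Hadoop 3.1.4 ships a much newer Guava in which Iterators.emptyIterator() is no longer public, and Hadoop's copy presumably wins on the classpath, so Hive's FetchOperator dies with an IllegalAccessError. A commonly suggested workaround, sketched below assuming plain tarball installs with HIVE_HOME and HADOOP_HOME set (exact jar versions vary by release), is to keep only one Guava on the classpath:
# Sketch of a community workaround, not an official fix: back up Hive's
# bundled Guava, then use the one Hadoop ships.
mv $HIVE_HOME/lib/guava-*.jar /tmp/
cp $HADOOP_HOME/share/hadoop/common/lib/guava-*.jar $HIVE_HOME/lib/
If that does not help, the cleaner route is to pair Hive 2.3.x with a Hadoop 2.x release it was built against.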

Related

HBase access from Java API Client

I'm having some trouble accessing HBase from the Java API client and I can't figure out what I'm doing wrong.
I'm using HBase 1.1.2 in standalone mode on a VM (10.166.205.41) with RHEL 6 and Java 1.7.
Here is my HBase configuration from the hbase-site.xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>10.166.205.41</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>9091</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///usr/local/hbaserootdir/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/hbaserootdir/zookeeper</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
My regionservers file is defined as follows:
10.166.205.41
The HBase shell client works fine, and I can access the HBase master UI at 10.166.205.41:16010.
Here is my Java API client, running in Eclipse on Windows 7.
pom.xml:
<dependencies>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.1.2</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>1.1.2</version>
    <type>pom</type>
  </dependency>
</dependencies>
Source code:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.log4j.Logger;

public class InsertData {
    final static Logger logger = Logger.getLogger(InsertData.class);

    public static void main(String[] args) throws IOException {
        Configuration config = HBaseConfiguration.create();
        config.setInt("timeout", 120000);
        config.set("hbase.zookeeper.quorum", "10.166.205.41");
        config.set("hbase.zookeeper.property.clientPort", "9091");
        Connection connection = ConnectionFactory.createConnection(config);
        Table table = connection.getTable(TableName.valueOf("emp"));
        try {
            Get g = new Get(Bytes.toBytes("1"));
            Result result = table.get(g);
            byte[] name = result.getValue(Bytes.toBytes("personal data"), Bytes.toBytes("name"));
            logger.info("name : " + Bytes.toString(name));
        } finally {
            table.close();
            connection.close();
        }
    }
}
During execution, the connection to server 10.166.205.41:41571 fails:
2018-12-14 16:35:19 DEBUG FailedServers:56 - Added failed server with address hlzudd5hdf01.yres.ytech/10.166.205.41:41571 to list caused by org.apache.hbase.thirdparty.io.netty.channel.ConnectTimeoutException: connection timed out: hlzudd5hdf01.yres.ytech/10.166.205.41:41571
2018-12-14 16:35:19 DEBUG ClientCnxn:843 - Reading reply sessionid:0x167ad4d815d000b, packet:: clientPath:/hbase/meta-region-server serverPath:/hbase/meta-region-server finished:false header:: 3,4 replyHeader:: 3,4697,0 request:: '/hbase/meta-region-server,F response:: #ffffffff0001a726567696f6e7365727665723a3431353731ffffffa0ffffffe9ffffff80fffffffd5611ffffff8c6a50425546a24a17686c7a7564643568646630312e797265732e797465636810ffffffe3ffffffc4218ffffffceffffff93ffffffb6ffffffeafffffffa2c100183,s{4519,4519,1544800813291,1544800813291,0,0,0,0,77,0,4519}
2018-12-14 16:35:19 DEBUG ClientCnxn:742 - Got ping response for sessionid: 0x167ad4d815d000b after 38ms
2018-12-14 16:35:19 DEBUG AbstractRpcClient:349 - Not trying to connect to hlzudd5hdf01.yres.ytech/10.166.205.41:41571 this server is in the failed servers list
2018-12-14 16:35:19 DEBUG ClientCnxn:843 - Reading reply sessionid:0x167ad4d815d000b, packet:: clientPath:/hbase/meta-region-server serverPath:/hbase/meta-region-server finished:false header:: 4,4 replyHeader:: 4,4697,0 request:: '/hbase/meta-region-server,F response:: #ffffffff0001a726567696f6e7365727665723a3431353731ffffffa0ffffffe9ffffff80fffffffd5611ffffff8c6a50425546a24a17686c7a7564643568646630312e797265732e797465636810ffffffe3ffffffc4218ffffffceffffff93ffffffb6ffffffeafffffffa2c100183,s{4519,4519,1544800813291,1544800813291,0,0,0,0,77,0,4519}
On the HBase master UI this is the region server address, and clicking that link does not load the page either.
I packaged my program as a jar file, and running it on the VM works fine, which makes me think it could be a port access issue.
Running netstat -tanp | grep LISTEN on my RHEL 6 VM shows that my region server port is listening:
tcp 0 0 10.166.205.41:41571 0.0.0.0:* LISTEN 26322/java
I don't seem to have any firewall running, so I don't know why the connection fails. Maybe it is something else.
I'm out of ideas for fixing this issue, so any help would be much appreciated ^^
Thanks a lot.

HBase slave's HRegionServer cannot start

When I start HBase, the HRegionServer and HMaster on my master machine start successfully, but the slaves' HRegionServer does not start.
My Hadoop version is 2.8.0, HBase is 1.2.6, and ZooKeeper is 3.4.9.
My hbase-site.xml is:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoop:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/usr/local/hbase/zoodata</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop,slave03,slave02</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.regionserver.port</name>
  <value>60020</value>
</property>
and my regionservers file is:
hadoop
slave03
slave02
Here is the error message:
2017-06-16 15:09:34,730 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2682)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2697)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2680)
... 5 more
Caused by: java.io.IOException: Problem binding to slave02/119.29.83.97:60020 : Cannot assign requested address. To switch ports use the 'hbase.regionserver.port' configuration property.
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:938)
at org.apache.hadoop.hbase.regionserver.HRegionServer.createRpcServices(HRegionServer.java:647)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:531)
... 10 more
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.hbase.ipc.RpcServer.bind(RpcServer.java:2592)
at org.apache.hadoop.hbase.ipc.RpcServer$Listener.<init>(RpcServer.java:585)
at org.apache.hadoop.hbase.ipc.RpcServer.<init>(RpcServer.java:2045)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:930)
... 12 more
I am hoping for some help, thank you so much!
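The "Cannot assign requested address" part of the trace usually means that slave02 resolves, on that slave, to an IP address that is not configured on any of its local interfaces; 119.29.83.97 looks like a public address rather than the node's own LAN IP. A sketch of the kind of /etc/hosts mapping each node would need, with illustrative internal addresses (replace them with the real interface IPs of your machines, e.g. as shown by ip addr on each node):
# /etc/hosts (the same on every node); the addresses below are placeholders
192.168.1.10  hadoop
192.168.1.11  slave02
192.168.1.12  slave03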

Hadoop Job hangs at ACCEPTED, with yarn resourcemanager log java.net.UnknownHostException

As described in the title, I deployed a Hadoop v2.6.3 cluster on an internal network with static IPs like 10.0.0.x.
Then I ran the example WordCount program. However, the shell just gives this output and hangs:
hadoop jar wc.jar WordCount /user/alex/data/kaggle.sample /user/alex/wc/output
16/04/06 10:44:29 INFO client.RMProxy: Connecting to ResourceManager at master/10.0.0.7:8032
16/04/06 10:44:29 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/04/06 10:44:30 INFO input.FileInputFormat: Total input paths to process : 1
16/04/06 10:44:30 INFO mapreduce.JobSubmitter: number of splits:1
16/04/06 10:44:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1459942813464_0002
16/04/06 10:44:30 INFO impl.YarnClientImpl: Submitted application application_1459942813464_0002
16/04/06 10:44:30 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1459942813464_0002/
16/04/06 10:44:30 INFO mapreduce.Job: Running job: job_1459942813464_0002
Then I went to the Hadoop cluster web UI and found that the job status is ACCEPTED, not RUNNING. I checked the log file of the YARN ResourceManager; its last ERROR message is this:
2016-04-06 10:34:42,466 ERROR org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: Error trying to assign container token and NM token to an allocated container container_1459942813464_0001_02_000001
java.lang.IllegalArgumentException: java.net.UnknownHostException: worker14.alex
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerToken(BuilderUtils.java:256)
at org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.createContainerToken(RMContainerTokenSecretManager.java:220)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.pullNewlyAllocatedContainersAndNMTokens(SchedulerApplicationAttempt.java:448)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.getAllocation(FiCaSchedulerApp.java:269)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:896)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:937)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:930)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:755)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:106)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:842)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:823)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:182)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: worker14.alex
... 19 more
The Hadoop configuration files are as follows:
#core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/alex/hadoop-2.6.3/tmp/</value>
  </property>
</configuration>
#yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/alex/hadoop-2.6.3/tmp/nm.local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/alex/hadoop-2.6.3/log/nm.log</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
#mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>10.0.0.7:10020</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/home/alex/hadoop-2.6.3/tmp/staging</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/home/alex/hadoop-2.6.3/tmp/mr-history/tmp</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/home/alex/hadoop-2.6.3/tmp/mr-history/done</value>
  </property>
</configuration>
My /etc/hosts file maps the IPs to either master or worker1 - worker14.
The slaves file contains master and worker1 - worker14.
It seems that my hostname resolution goes wrong: it resolves to worker14.alex rather than worker14 (alex is my Linux username).
So what's wrong with my configuration? Do I need to restart all the servers, or just restart some of the services, like service networking restart?
Were you able to get to a resolution? I'm seeing the exact same issue; I see a Caused by: java.net.UnknownHostException: var exception. – Nishant Kelkar
Check this value in your yarn-site.xml:
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/var/log/hadoop-yarn/apps</value>
If you put "hdfs://" before the path, the error occurs: the first path segment is then treated as a hostname, which matches the UnknownHostException: var in the comment above.
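A sketch of the intended property, based on the answer above (a plain path, which is resolved against fs.defaultFS; with an hdfs:// prefix, "var" would be parsed as the NameNode host):
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <!-- correct: a plain path -->
  <value>/var/log/hadoop-yarn/apps</value>
  <!-- wrong: hdfs://var/log/hadoop-yarn/apps -->
</property>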

Unable to connect to HBase using Java (in Eclipse) in Cloudera VM

I am trying to connect to HBase using Java (in Eclipse) in the Cloudera VM, but I get the error below. I am able to run the same program from the command line (after packaging it as a jar).
My Java program:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
//import org.apache.hadoop.mapred.MapTask;
import java.io.FileWriter;
import java.io.IOException;

public class HbaseConnection {
    public static void main(String[] args) throws IOException {
        Configuration config = HBaseConfiguration.create();
        config.addResource("/usr/lib/hbase/conf/hbase-site.xml");
        HTable table = new HTable(config, "test_table");
        byte[] columnFamily = Bytes.toBytes("colf");
        byte[] idColumnName = Bytes.toBytes("id");
        byte[] groupIdColumnName = Bytes.toBytes("g_id");
        Put put = new Put(Bytes.toBytes("testkey"));
        put.add(columnFamily, idColumnName, Bytes.toBytes("test id"));
        put.add(columnFamily, groupIdColumnName, Bytes.toBytes("test group id"));
        table.put(put);
        table.close();
    }
}
And I have kept hbase-site.xml in the source folder in Eclipse.
hbase-site.xml:
<property>
  <name>hbase.rest.port</name>
  <value>8070</value>
  <description>The port for the HBase REST server.</description>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://quickstart.cloudera:8020/hbase</value>
</property>
<property>
  <name>hbase.regionserver.ipc.address</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>hbase.master.ipc.address</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>hbase.thrift.info.bindAddress</name>
  <value>0.0.0.0</value>
</property>
And I am getting the error below when running the program in Eclipse:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:389)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:366)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:247)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:188)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:150)
at com.aig.gds.hadoop.platform.idgen.hbase.HBaseTest.main(HBaseTest.java:34)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:387)
... 5 more
Caused by: java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.hdfs.DistributedFileSystem could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:224)
at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2400)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2411)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:197)
at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:69)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:801)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:633)
... 10 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations([Lorg/apache/hadoop/conf/Configuration$DeprecationDelta;)V
at org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:66)
at org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:31)
at org.apache.hadoop.hdfs.DistributedFileSystem.<clinit>(DistributedFileSystem.java:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:373)
... 26 more
Thanks in advance.
The root cause of your problem is in the stack trace:
NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations
This means that your hadoop-common-* jar version is not in sync with your hadoop-hdfs-* jar version, or you may have a mix of different versions on your classpath.
Note that addDeprecations is present in hadoop 2.3.0 and later :
https://hadoop.apache.org/docs/r2.3.0/api/org/apache/hadoop/conf/Configuration.html
but was missing in 2.2.0 and prior:
https://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/conf/Configuration.html
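As a sketch of what keeping the jars in sync looks like in the pom (the version below is illustrative; use the single Hadoop version, 2.3.0 or later, that matches your cluster for both artifacts):
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.6.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.6.0</version>
</dependency>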

Java Web Service Contacting Hive - DataNucleus ClassLoaderResolver Error

I have a Java web service (built with a proprietary technology at my company) that services requests/responses, and while processing a request it attempts to talk to Hadoop's Hive and execute a query. However, it fails immediately when I simply try to initialize the connection.
Here is the line of code it fails on. I am largely using the code sample from https://cwiki.apache.org/confluence/display/Hive/HiveClient:
String connString = "jdbc:hive://";
Connection con = DriverManager.getConnection(connString, "", "");
Here is the stack trace:
javax.jdo.JDOFatalInternalException: Unexpected exception caught.
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1186)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:246)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:275)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:208)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:183)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:70)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:130)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:407)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:359)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:504)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:266)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:228)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.<init>(HiveServer.java:131)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.<init>(HiveServer.java:121)
at org.apache.hadoop.hive.jdbc.HiveConnection.<init>(HiveConnection.java:76)
at org.apache.hadoop.hive.jdbc.HiveDriver.connect(HiveDriver.java:104)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at (...my package...).RemoteCtrbTest.kickOffRemoteTest(RemoteCtrbTest.java:52)
NestedThrowablesStackTrace:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:246)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:275)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:208)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:183)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:70)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:130)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:407)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:359)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:504)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:266)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:228)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.<init>(HiveServer.java:131)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.<init>(HiveServer.java:121)
at org.apache.hadoop.hive.jdbc.HiveConnection.<init>(HiveConnection.java:76)
at org.apache.hadoop.hive.jdbc.HiveDriver.connect(HiveDriver.java:104)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at (...my package...).RemoteCtrbTest.kickOffRemoteTest(RemoteCtrbTest.java:52)
Caused by: org.datanucleus.exceptions.NucleusUserException: Persistence process has been specified to use a ClassLoaderResolver of name "jdo" yet this has not been found by the DataNucleus plugin mechanism. Please check your CLASSPATH and plugin specification.
at org.datanucleus.OMFContext.getClassLoaderResolver(OMFContext.java:319)
at org.datanucleus.OMFContext.<init>(OMFContext.java:165)
at org.datanucleus.OMFContext.<init>(OMFContext.java:137)
at org.datanucleus.ObjectManagerFactoryImpl.initialiseOMFContext(ObjectManagerFactoryImpl.java:132)
at org.datanucleus.jdo.JDOPersistenceManagerFactory.initialiseProperties(JDOPersistenceManagerFactory.java:363)
at org.datanucleus.jdo.JDOPersistenceManagerFactory.<init>(JDOPersistenceManagerFactory.java:307)
at org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:255)
at org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
... 35 more
I found one other question with a similar error message, but it was about Maven and didn't involve Hive (which is what uses DataNucleus in its code): Datanucleus, JDO and executable jar - how to do it?
I am using a hive-site.xml file to specify some properties for Hive and DataNucleus. The DataNucleus ones are below; I added the last two while trying to fix the issue, and whenever I change the value of datanucleus.classLoaderResolverName, the quoted name in the error message changes to match.
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.fixedDatastore</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.classLoaderResolverName</name>
  <value>jdo</value>
</property>
<property>
  <name>javax.jdo.PersistenceManagerFactoryClass</name>
  <value>org.datanucleus.jdo.JDOPersistenceManagerFactory</value>
</property>
The part that I cannot figure out is whether the service is somehow re-bundling the jars, as in the other Stack Overflow question linked above, messing up the location of plugin.xml and/or the MANIFEST.MF file. I'm also not sure how the plugin file interacts with the hive-site file.
The classpath here requires adding specific jars rather than just a directory. I am using the following DataNucleus jars:
* datanucleus-connectionpool-2.0.3.jar
* datanucleus-enhancer-2.0.3.jar
* datanucleus-rdbms-2.0.3.jar
* datanucleus-core-2.0.3.jar
Any input you can give to help me would be greatly appreciated. I can provide more information if you need it, so please do ask.
In case you are using the Spring Framework in the application built with your company's proprietary technology, you can take advantage of Spring for Apache Hadoop support.
All you have to do is add the configuration below to your applicationContext:
<hdp:configuration>
  fs.default.name=${fs.default.name.url}
  mapred.job.tracker=${mapred.job.tracker.url}
</hdp:configuration>
<hdp:hive-client-factory host="${hadoop.hive.host.url}" port="10000"
    xmlns="http://www.springframework.org/schema/hadoop" />
<hdp:hive-template />
After that, autowire the HiveTemplate:
@Autowired
HiveTemplate hiveTemplate;
And then query Hive as shown below:
List<String> list = hiveTemplate.query(queryString, parameterMap);
DataNucleus apparently uses an OSGi-based plugin mechanism. If you are not running it in an OSGi container and are just using a standard Maven project, what's probably happening is that the plugins are on the classpath but are not registered, due to an issue with manifests. You might try something like this:
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <version>2.4</version>
      <executions>
        <execution>
          <id>copy-dependencies</id>
          <phase>package</phase>
          <goals>
            <goal>copy-dependencies</goal>
          </goals>
          <configuration>
            <outputDirectory>${project.build.directory}/jars</outputDirectory>
            <overWriteReleases>false</overWriteReleases>
            <overWriteSnapshots>false</overWriteSnapshots>
            <overWriteIfNewer>true</overWriteIfNewer>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
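The point of copy-dependencies, rather than repackaging everything into one fat jar, is that each DataNucleus jar keeps its own MANIFEST.MF and plugin.xml, which the plugin mechanism reads. You would then run against the copied jars, for example (the jar and main class names here are hypothetical):
java -cp "target/myapp.jar:target/jars/*" com.example.Main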
