HBase calling HTable hangs - Java

There is a well-known sample Java program for HBase connectivity, the "HbaseTest" class, which has been available on the internet for a long time.
I compiled the code on my server and compilation was successful. When I run my Java class, I can see that it hangs at this particular line: "HTable table = new HTable(conf, tableName);"
It prints the messages below while running.
Jun 18, 2015 12:16:14 PM org.apache.hadoop.util.NativeCodeLoader
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Jun 18, 2015 12:16:15 PM org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper
INFO: The identifier of this process is pid#servername
I identified that it is stuck on that particular line by adding print statements.
Please let me know what to do. I have checked that HBase is running properly.
Kindly share your thoughts and ideas.
#hive #hbase #hadoop
Thanks in advance, Sam

I had a similar problem before; it turned out to be network issues. Try setting retry and timeout parameters, e.g.
hbase.client.retries.number=2
zookeeper.session.timeout=2000
zookeeper.recovery.retry=0
hbase.rpc.timeout=100
ipc.socket.timeout=100
hbase.client.pause=100
zookeeper.recovery.retry.intervalmill=100
timeout=100
You may need to modify your network settings according to the errors that are thrown.
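For reference, here is a minimal sketch (assuming the old HTable client API used in the question; the ZooKeeper hostname and table name are placeholders) of applying these settings programmatically, so that a connectivity problem fails fast with an exception instead of hanging:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class HBaseFailFastTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Point the client at your ZooKeeper quorum (placeholder hostname/port).
        conf.set("hbase.zookeeper.quorum", "zookeeper-host");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // Fail fast instead of hanging: few retries, short timeouts.
        conf.setInt("hbase.client.retries.number", 2);
        conf.setInt("zookeeper.session.timeout", 2000);
        conf.setInt("zookeeper.recovery.retry", 0);
        conf.setInt("hbase.rpc.timeout", 100);
        conf.setInt("ipc.socket.timeout", 100);
        conf.setInt("hbase.client.pause", 100);

        HTable table = new HTable(conf, "testtable");
        System.out.println("Connected to table testtable");
        table.close();
    }
}

With these values, an unreachable ZooKeeper quorum or region server makes the HTable constructor throw after a few retries rather than block indefinitely, and the exception usually names the host or port that needs fixing.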

Related

WebLogic doesn't load class despite being on $domain/lib

I'm trying to add an AS400 jar to the WebLogic classpath. I'm putting jt400.jar inside $domain/lib/ as indicated by the readme.txt found there. When the server starts I can see the line:
<Sep 14, 2017 7:12:28 PM CST> <Notice> <WebLogicServer> <BEA-000395> <The following extensions directory contents added to the end of the classpath:
{$domain}/lib/jt400.jar.>
But when I test my datasource, it still throws the error message:
weblogic.common.resourcepool.ResourceSystemException: Cannot load driver class com.ibm.as400.access.AS400JDBCDriver for datasource 'MyDataSource'
I have verified that the class is indeed inside the given jar.
What am I doing wrong?
Finally I understood what was happening. I forgot to add to the question that the WebLogic domain had several managed cluster servers. After trying various approaches, I realized the error only occurred when I selected one of the clusters I wanted that datasource to serve.
So in the end, the servers in that cluster also need the jar in order to connect to the selected datasource.
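Incidentally, here is a small sketch (the jar path is an assumption; point it at the copy under the managed server's domain lib) for verifying from a plain JVM on a given machine that the driver class really is loadable from that jar:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class CheckDriverJar {
    public static void main(String[] args) throws Exception {
        // Path to the deployed jar is an assumption -- adjust to the actual domain lib.
        File jar = new File("/path/to/domain/lib/jt400.jar");
        try (URLClassLoader loader = new URLClassLoader(new URL[] { jar.toURI().toURL() })) {
            // Load without initializing, just to prove the class resolves from this jar.
            Class<?> driver = Class.forName("com.ibm.as400.access.AS400JDBCDriver", false, loader);
            System.out.println("Loaded " + driver.getName() + " from " + jar.getAbsolutePath());
        }
    }
}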

How to ignore the 'java.io.serialization' logger in Java

To address security vulnerability CVE-2017-3241 (Java RMI Registry.bind() Unvalidated Deserialization), which affects JRE versions prior to 1.8.0_121, we upgraded to JRE 1.8.0_121 and also added the lines below to the java.security file.
jdk.serialFilter=*
sun.rmi.registry.registryFilter=*
sun.rmi.transport.dgcFilter=\
java.rmi.server.ObjID;\
java.rmi.server.UID;\
java.rmi.dgc.VMID;\
java.rmi.dgc.Lease;\ maxdepth=2147483647;maxarray=2147483647;maxrefs=2147483647;maxbytes=2147483647
Once we add these lines, we get the log lines below whenever we make an RMI call.
Feb 13, 2017 1:00:53 AM sun.misc.ObjectInputFilter$Config lambda$static$0
INFO: Creating serialization filter from *
We want to suppress this INFO message; can somebody suggest a solution?
I use -Djdk.serialFilter=maxbytes=10000;!org.* as a JVM argument and don't see any log output.
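Another option, sketched below, is to raise the level of the 'java.io.serialization' logger named in the title (which appears to be where the ObjectInputFilter$Config trace in the question goes) before any RMI calls are made; the logger name and its routing through java.util.logging are assumptions here:

import java.util.logging.Level;
import java.util.logging.Logger;

public class SuppressSerialFilterInfo {
    // Keep a static reference so the logger is not garbage collected
    // and the level setting silently discarded.
    private static final Logger SERIAL_FILTER_LOGGER =
            Logger.getLogger("java.io.serialization");

    public static void main(String[] args) {
        // Drop INFO records such as "Creating serialization filter from *".
        SERIAL_FILTER_LOGGER.setLevel(Level.WARNING);
        // ... perform RMI calls as usual ...
    }
}

The same effect can be achieved declaratively by putting java.io.serialization.level = WARNING in a logging.properties file and passing it with -Djava.util.logging.config.file.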

Exception starting AntiSamyFilter

I'm unable to run my group's web application on my local machine with the AntiSamy filter in place. The filter works on WebSphere on our development and production servers, and also runs fine on one of my colleagues' machines with what should be the same setup as mine, using Tomcat. However, on my machine and several other colleagues' machines, if we try to start our server with the AntiSamy filter in place, we get this error:
Oct 12, 2015 11:06:32 PM org.apache.catalina.core.StandardContext filterStart
SEVERE: Exception starting filter AntiSamyFilter
java.lang.IllegalStateException: java.io.FileNotFoundException: C:\Users\myUsername\Documents\myWorkspaceDirectory\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\mobile3\WEB-INF\classes\antisamy-default.xml (The system cannot find the path specified)
I've looked in the directory cited, and there is an "antisamy-default.xml" file there, so I'm not sure where to begin to figure out what's causing the error.
Any help will be much appreciated!

need to use hadoop native

I am invoking a MapReduce job from my Java program.
When I set the MapReduce job's input format to LzoTextInputFormat,
the job fails:
Could not load native gpl library
java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1738)
at java.lang.Runtime.loadLibrary0(Runtime.java:823)
at java.lang.System.loadLibrary(System.java:1028)
at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)
at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:67)
at com.hadoop.mapreduce.LzoTextInputFormat.listStatus(LzoTextInputFormat.java:58)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
at com.hadoop.mapreduce.LzoTextInputFormat.getSplits(LzoTextInputFormat.java:85)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at company.Validation.run(Validation.java:99)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at company.mapreduceTest.main(mapreduceTest.java:18)
Apr 5, 2012 4:40:29 PM com.hadoop.compression.lzo.LzoCodec <clinit>
SEVERE: Cannot load native-lzo without native-hadoop
java.lang.IllegalArgumentException: Wrong FS: hdfs://D-SJC-00535164:9000/local/usecases/gbase014/outbound/seed_2012-03-12_06-34-39/1_1.lzo.index, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
at com.hadoop.compression.lzo.LzoIndex.readIndex(LzoIndex.java:169)
at com.hadoop.mapreduce.LzoTextInputFormat.listStatus(LzoTextInputFormat.java:69)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
at com.hadoop.mapreduce.LzoTextInputFormat.getSplits(LzoTextInputFormat.java:85)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at company.Validation.run(Validation.java:99)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at company.stopTransfer.mapreduceTest.main(mapreduceTest.java:18)
Apr 5, 2012 4:40:29 PM company.Validation run
SEVERE: LinkExtractor: java.lang.IllegalArgumentException: Wrong FS: hdfs://D-SJC-00535164:9000/local/usecases/gbase014/outbound/seed_2012-03-12_06-34-39/1_1.lzo.index, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
at com.hadoop.compression.lzo.LzoIndex.readIndex(LzoIndex.java:169)
at com.hadoop.mapreduce.LzoTextInputFormat.listStatus(LzoTextInputFormat.java:69)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
at com.hadoop.mapreduce.LzoTextInputFormat.getSplits(LzoTextInputFormat.java:85)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at company.Validation.run(Validation.java:99)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at company.stopTransfer.mapreduceTest.main(mapreduceTest.java:18)
But in lib/native there are some files with extensions .a, .la, .so...
I tried to add them to my path environment variable, but it still doesn't work.
Could anyone please give me a suggestion?
Thank you very much!
Your error relates to the actual shared library for LZO not being present in the Hadoop native library folder.
The code in GPLNativeCodeLoader is looking for a shared library called gplcompression, so Java is actually looking for a file named libgplcompression.so. If this file doesn't exist in your lib/native/${arch} folder then you'll see this error.
In a terminal, navigate to your Hadoop base directory and execute the following to list the native libraries installed, and post the output back to your original question:
uname -a
find lib/native
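Once you know where libgplcompression.so lives, the sketch below shows one way to wire it into the job; the directory path and the hadoop-lzo config keys are assumptions for a setup of that era, so substitute the directory reported by the find command. Note that the submitting JVM itself also needs -Djava.library.path=<that directory> on its command line, since the GPLNativeCodeLoader error above occurs while the client computes splits.

import org.apache.hadoop.conf.Configuration;

public class LzoJobSetup {
    // Directory is an assumption -- use the path reported by `find lib/native`.
    private static final String NATIVE_DIR = "/usr/lib/hadoop/lib/native/Linux-amd64-64";

    public static void configureLzo(Configuration conf) {
        // Make the child task JVMs see the native library as well.
        conf.set("mapred.child.java.opts", "-Djava.library.path=" + NATIVE_DIR);

        // Register the LZO codecs so LzoTextInputFormat can resolve them.
        conf.set("io.compression.codecs",
                "org.apache.hadoop.io.compress.DefaultCodec,"
              + "com.hadoop.compression.lzo.LzoCodec,"
              + "com.hadoop.compression.lzo.LzopCodec");
        conf.set("io.compression.codec.lzo.class",
                "com.hadoop.compression.lzo.LzoCodec");
    }
}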
If you are using Cloudera Hadoop, you can install LZO easily by following these instructions:
http://www.cloudera.com/content/cloudera/en/documentation/cloudera-impala/v1/v1-0-1/Installing-and-Using-Impala/ciiu_lzo.html

Errors when running Mahout examples

I downloaded the latest version of the examples for chapter 09 of "Mahout in Action". I can successfully run several examples, but three files, NewsKMeansClustering.java, ReutersToSparseVectors.java, and NewsFuzzyKMeansClusteing.java, fail. Running these three programs gives similar error messages:
Aug 3, 2011 2:03:54 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Initializing JVM Metrics with processName=JobTracker, sessionId=
Aug 3, 2011 2:03:54 PM org.apache.hadoop.mapred.JobClient configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications should
implement Tool for the same.
Aug 3, 2011 2:03:54 PM org.apache.hadoop.mapred.JobClient configureCommandLineOptions
WARNING: No job jar file set. User classes may not be found. See JobConf(Class) or
JobConf#setJar(String).
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/home/user1/workspaceMahout1/recommender/inputDir
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:55)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
at org.apache.mahout.vectorizer.DocumentProcessor.tokenizeDocuments(DocumentProcessor.java:93)
at mia.clustering.ch09.NewsKMeansClustering.main(NewsKMeansClustering.java:54)
Regarding the above messages, I do not quite understand what those two warnings mean. Moreover, it looks like the "input path" should have been created; how can I create this type of input? Thanks.
You can ignore the warnings. The error is that the input directory you have specified does not exist. Does it exist? What is your command line?
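As a quick way to check, here is a small sketch (the path is taken from the error message above) that tests whether the expected input directory exists and creates it if it does not:

import java.io.File;

public class CheckInputDir {
    public static void main(String[] args) {
        // Path taken from the "Input path does not exist" message above.
        File inputDir = new File("/home/user1/workspaceMahout1/recommender/inputDir");
        if (inputDir.isDirectory()) {
            System.out.println(inputDir + " exists with " + inputDir.list().length + " entries");
        } else if (inputDir.mkdirs()) {
            System.out.println("Created " + inputDir + " -- copy the documents you want to cluster into it.");
        } else {
            System.out.println("Could not create " + inputDir);
        }
    }
}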
I ran into a similar mismatch. The MiA files at https://github.com/tdunning/MiA have some cases where a .csv file is kept in the same directory as the Java source, for example https://github.com/tdunning/MiA/tree/master/src/main/java/mia/recommender/ch02 ... However, when running via Eclipse, loading it with DataModel model = new FileDataModel(new File("intro.csv")); doesn't find it.
Adding
System.out.println("CWD: "+System.getProperty("user.dir"));
...will reveal where Eclipse is looking (in my case, a couple levels up the filetree, but this might vary depending on how exactly you've set things up).
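Building on that, here is a minimal sketch (the relative path assumes the program is run from the repository root; adjust it to your checkout) that prints the working directory and then loads intro.csv with a path that does not depend on Eclipse's launch directory:

import java.io.File;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.model.DataModel;

public class LoadIntroCsv {
    public static void main(String[] args) throws Exception {
        // Show where the JVM is actually running from, as suggested above.
        System.out.println("CWD: " + System.getProperty("user.dir"));

        // Resolve the file relative to the project root instead of the bare
        // file name, so the same code works from Eclipse or the command line.
        File csv = new File("src/main/java/mia/recommender/ch02/intro.csv");
        DataModel model = new FileDataModel(csv);
        System.out.println("Loaded data for " + model.getNumUsers() + " users");
    }
}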
