JMeter plugin execution throws ArrayIndexOutOfBoundsException - java

I've looked for answers to this problem but couldn't find any on the internet. Maybe someone here has run into this issue before.
I have a CentOS machine with JMeter 3.1 where everything works fine. I created a new VM and copied the JMeter directory to the new machine with everything set up. Test execution works fine, but when I try to use any of the plugins (via cmdrunner-2.0.jar or JMeterPluginsCMD.sh),
I get an exception back with little information about what's wrong:
[root@box bin]# java -jar "/opt/apache-jmeter-3.1/lib/cmdrunner-2.0.jar" -n --tool Reporter --input-jtl "/tmp/data.csv" --plugin-type SynthesisReport --generate-csv "/tmp/report.csv"
WARN 2017-10-22 12:41:57.204 [jmeter.u] (): Exception 'null' occurred when fetching String property:'sampleresult.default.encoding', defaulting to:ISO-8859-1
WARN 2017-10-22 12:41:57.224 [jmeter.u] (): Exception 'null' occurred when fetching String property:'jmeterPlugin.prefixPlugins'
INFO 2017-10-22 12:41:57.224 [kg.apc.j] (): Using JMeterPluginsCMD v. N/A
INFO 2017-10-22 12:41:57.229 [jmeter.u] (): Setting Locale to en_US
INFO 2017-10-22 12:41:57.238 [kg.apc.j] (): Loading user properties from: /opt/apache-jmeter-3.1/bin/user.properties
INFO 2017-10-22 12:41:57.238 [kg.apc.j] (): Loading system properties from: /opt/apache-jmeter-3.1/bin/system.properties
ERROR: java.lang.ArrayIndexOutOfBoundsException: 0
*** Problem's technical details go below ***
Home directory was detected as: /opt/apache-jmeter-3.1/lib
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75)
at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93)
at sun.font.FontDesignMetrics.initMatrixAndMetrics(FontDesignMetrics.java:359)
...
...
ERROR: java.lang.ArrayIndexOutOfBoundsException: 0
That's all I get. The only differences between the two machines are:
Working platform:
kernel 3.10.0-514.26.2.el7.x86_64
java (build 1.8.0_131-b12)
Not working:
kernel 3.10.0-693.2.2.el7.x86_64
java (build 1.8.0_144-b01)
There are no environment variables missing.
Any suggestions are more than welcome.

This is a Java bug on this particular platform:
https://bugzilla.redhat.com/show_bug.cgi?id=1484079

I can't believe it - yum update java solved my issue. The thing is, I just updated Java last week...
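For anyone hitting the same trace, a minimal sketch of the resolution sequence (assuming the stock CentOS OpenJDK packages; the cmdrunner invocation matches the question above):
# Pull the patched OpenJDK build, then confirm the version changed from 1.8.0_144-b01
yum update java
java -version
# Re-run the plugin command that previously failed
java -jar "/opt/apache-jmeter-3.1/lib/cmdrunner-2.0.jar" -n --tool Reporter --input-jtl "/tmp/data.csv" --plugin-type SynthesisReport --generate-csv "/tmp/report.csv"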

Related

PySpark error: Java gateway process exited before sending its port number error in Windows [SOLVED]

I am trying to run PySpark in Jupyter (via Anaconda) on Windows. I am facing the below error while trying to create a SparkSession:
Exception: Java gateway process exited before sending its port number
I even tried adding the JAVA_HOME, SPARK_HOME and HADOOP_HOME paths to the environment variables:
JAVA_HOME: C:\Java\jdk-11.0.16.1
SPARK_HOME: C:\Spark\spark-3.1.3-bin-hadoop3.2
HADOOP_HOME: C:\Spark\spark-3.1.3-bin-hadoop3.2
Even after this I am facing the same issue.
PS: My PySpark version is 3.3.1 and my Python version is 3.8.6.
As per the Spark documentation, the string for setting the master should be "local[*]", or "local[N]" to use only N cores. If you leave out the master setting, it defaults to "local[*]".
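For example, the master can be passed explicitly when launching the shell (a sketch; --master is standard pyspark/spark-submit syntax):
# Start PySpark with an explicit local master using all available cores
pyspark --master "local[*]"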
After several attempts, I finally figured out the issue: the Windows firewall had blocked Java, which caused this error. Once I gave it access permission, the error was rectified!
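A hedged sketch of granting that permission from an elevated Windows prompt (the java.exe path assumes the JAVA_HOME above; adjust to your install):
:: Allow the JDK's java.exe through the Windows firewall
netsh advfirewall firewall add rule name="Allow Java" dir=in action=allow program="C:\Java\jdk-11.0.16.1\bin\java.exe" enable=yes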

GDAL error on CentOS 7

I'm no IT guy, so it's possible that I'm doing something very wrong, but I've been struggling with this issue for days…
I'm working in a VM with CentOS 7.
When running something in GeoKettle, I get this error that points to GDAL:
Native library load failed.
java.lang.UnsatisfiedLinkError: /home/geoairc/QGIS_Install/geokettle/libswt/linux/x86_64/libogrjni.so: liblcms.so.1: cannot open shared object file: No such file or directory
INFO 29-05 10:10:37,639 - OGR Input - wfs xml geodomus.0 - Finished processing (I=0, O=0, R=0, W=0, U=0, E=0)
Exception in thread "OGR Input - wfs xml geodomus.0 (Thread-10)" java.lang.UnsatisfiedLinkError: org.gdal.ogr.ogrJNI.RegisterAll()V
at org.gdal.ogr.ogrJNI.RegisterAll(Native Method)
at org.gdal.ogr.ogr.RegisterAll(ogr.java:110)
at org.pentaho.di.core.geospatial.OGRReader.open(OGRReader.java:75)
at org.pentaho.di.trans.steps.ogrfileinput.OGRFileInputMeta.getOutputFields(OGRFileInputMeta.java:277)
at org.pentaho.di.trans.steps.ogrfileinput.OGRFileInput.processRow(OGRFileInput.java:172)
at org.pentaho.di.trans.steps.ogrfileinput.OGRFileInput.run(OGRFileInput.java:342)
Someone pointed out to me that the error was caused by the lack of GDAL bindings for Java.
So I installed the gdal-java RPM:
https://www.rpmfind.net/linux/RPM/epel/7/x86_64/g/gdal-java-1.11.4-1.el7.x86_64.html
I tried to install it, but I get successive dependency errors that I cannot get past (this is the first; when I try to install one of these dependencies, I get another set of dependency errors):
[root@srvlgis01 tmp]# rpm -Uvh gdal-java-1.11.4-1.el7.x86_64.rpm
error: Failed dependencies:
gdal-libs(x86-64) = 1.11.4-1.el7 is needed by gdal-java-1.11.4-1.el7.x86_64
libgeotiff.so.1.2()(64bit) is needed by gdal-java-1.11.4-1.el7.x86_64
My GDAL version: gdal.x86_64 0:1.11.4-10.rhel7
Thanks in advance,
Pedro
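The dependency chain itself suggests one way forward (a sketch, assuming the EPEL repository is available, so yum can resolve gdal-libs, libgeotiff and the rest automatically instead of installing a single RPM by hand):
# Enable EPEL, then let yum resolve the whole gdal-java dependency chain
yum install epel-release
yum install gdal-java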

JMeter error trying to record a sample: java.lang.IllegalArgumentException: Failed marshalling

I have one Linux VPS dedicated just to running JMeter. The tests run fine, but failed requests are not written to error.jtl by the SimpleDataWriter; a Java error is written to jmeter.log instead.
I run the tests in non-gui mode:
jmeter -n -t om5.jmx -j results-tmp1/t3-l1-jmeter.log
The error:
2016/07/08 16:59:35 ERROR - jmeter.reporters.ResultCollector:
Error trying to record a sample java.lang.IllegalArgumentException: Failed marshalling:class:class
org.apache.jmeter.samplers.SampleResult,content:org.apache.jmeter.samplers.SampleResult@1f605bfa[saveConfig=org.apache.jmeter.samplers.SampleSaveConfiguration@b4a9237e,parent=<null>,
responseData={},responseCode=500,label=03 add to cart,resultFileName=,samplerData=<null>,threadName=Thread Group 1-149,responseMessage=Number of samples in transaction : 2, number of failing samples : 1,
responseHeaders=,contentType=,requestHeaders=,timeStamp=1467989884109,startTime=1467989884109,endTime=1467989975286,idleTime=1001,pauseTime=0,assertionResults=<null>,subResults=[reset basketItems, /some-page],
dataType=,success=false,files=[res-tmp/t400-l5-errors.jtl],dataEncoding=<null>,elapsedTime=90176,latency=0,connectTime=0,startNextThreadLoop=false,stopThread=false,stopTest=false,
stopTestNow=false,isMonitor=false,sampleCount=1,bytes=806,headersSize=192,bodySize=614,groupThreads=400,allThreads=400,nanoTimeOffset=1467988012523,useNanoTime=true,nanoThreadSleep=5000,location=<null>]
at org.apache.jmeter.save.SaveService.saveSampleResult(SaveService.java:345)
at org.apache.jmeter.reporters.ResultCollector.sampleOccurred(ResultCollector.java:557)
at org.apache.jmeter.threads.ListenerNotifier.notifyListeners(ListenerNotifier.java:67)
at org.apache.jmeter.threads.JMeterThread.notifyListeners(JMeterThread.java:819)
at org.apache.jmeter.threads.JMeterThread.doEndTransactionSampler(JMeterThread.java:534)
at org.apache.jmeter.threads.JMeterThread.triggerEndOfLoopOnParentControllers(JMeterThread.java:342)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:258)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
I cannot find what is wrong. It is only that one machine that gives me this error; everywhere else I try it, it works correctly.
The OS is Debian Jessie (8.5) - minimal from official repositories. I tried many different versions of java (1.7, 1.8) - currently running on the latest 1.8 (1.8.0_92-b14) and I have the latest apache-jmeter 3.0 r1743807, but previously I used 2.13 r1665067. I don't use any third-party plugins.
I didn't notice exactly when it stopped working or what change was made (some OS update, maybe some change in the jmx), but I have my tests in git, so I checked out an older version from the date of my last full error.jtl, and it does not write the errors to the jtl either.
I reinstalled the Debian to Ubuntu 16.04 and nothing changed.
I don't know how to debug that problem, what should I do, because on any other machine I have an access to, it works fine.
I don't know if you've fixed this problem by now, but I had the same issue and just fixed it.
I guess that in your JMeter script you enabled saving the hostname with sample results. In jmeter.log you may find a message saying JMeter couldn't get the Linux local IP, so the hostname is null and the exception is thrown.
So what you need to do is remove 'Save Hostname' from the 'Sample Result Save Configuration' tab.
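In non-GUI setups the same toggle can be applied via properties (a sketch; jmeter.save.saveservice.hostname is the property behind 'Save Hostname'):
# Disable hostname saving without touching the GUI, then re-run the test
echo "jmeter.save.saveservice.hostname=false" >> user.properties
jmeter -n -t om5.jmx -j results-tmp1/t3-l1-jmeter.log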

PIG/Hadoop issue: ERROR 2081: Unable to setup the load function [duplicate]

This question already has answers here:
how to load files on hadoop cluster using apache pig?
(3 answers)
Closed 3 years ago.
I'm running Pig 0.13.0 and Hadoop 2.5.1, both installed from the Apache distros; they're not packages from Hortonworks or Cloudera or anything.
I'm working with a tutorial and can get it to work fine when running Pig locally ($> ./pig -x local), but when trying to run it on the Hadoop instance I get an error that I'm having a hard time researching on the internet.
This command:
movies = LOAD '/home/hduser/pig-tutorial-master/movies_data.csv' USING PigStorage(',') as (id,name,year,rating,duration);
DUMP movies;
Works fine running locally. When I run it in Hadoop/MR mode, it seems to work fine when I run the first line of code:
grunt> movies = LOAD '/home/hduser/pig-tutorial-master/movies_data.csv' USING PigStorage(',') as (id,name,year,rating,duration);
2014-10-29 18:16:26,281 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2014-10-29 18:16:26,281 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
But when I try to DUMP movies, it gives me this trace:
grunt> dump movies
2014-10-29 18:17:15,419 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2014-10-29 18:17:15,420 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier]}
2014-10-29 18:17:15,445 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2014-10-29 18:17:15,469 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2081: Unable to setup the load function.
Details at logfile: /usr/local/pig/pig_1414606194436.log
The ERROR 2081 is what I'm trying to diagnose, but I can't find anything that helps point me in the right direction. Any ideas of where to start? I assume it's something to do with my Hadoop installation and not Pig, but I don't know. Any suggestions will be helpful.
Thanks,
Mark
EDIT: Here is the full log output:
ERROR 2081: Unable to setup the load function.
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias movies
at org.apache.pig.PigServer.openIterator(PigServer.java:912)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:542)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias movies
at org.apache.pig.PigServer.storeEx(PigServer.java:1015)
at org.apache.pig.PigServer.store(PigServer.java:974)
at org.apache.pig.PigServer.openIterator(PigServer.java:887)
... 12 more
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: movies: Store(hdfs://localhost:54310/tmp/temp-1276361014/tmp-2000190966:org.apache.pig.impl.io.InterStorage) - scope-1 Operator Key: scope-1): org.apache.pig.backend.executionengine.ExecException: ERROR 2081: Unable to setup the load function.
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:289)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POStore.getNextTuple(POStore.java:143)
at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.runPipeline(FetchLauncher.java:160)
at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.launchPig(FetchLauncher.java:81)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:275)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1367)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1352)
at org.apache.pig.PigServer.storeEx(PigServer.java:1011)
... 14 more
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2081: Unable to setup the load function.
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLoad.getNextTuple(POLoad.java:127)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:281)
... 21 more
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:54310/home/hduser/pig-tutorial-master/movies_data.csv
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextInputFormat.listStatus(PigTextInputFormat.java:36)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:190)
at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:146)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLoad.setUp(POLoad.java:95)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLoad.getNextTuple(POLoad.java:123)
... 22 more
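Note the final Caused by: in MapReduce mode the LOAD path is resolved against HDFS (hdfs://localhost:54310/...), not the local filesystem. A hedged sketch of one fix, assuming the CSV currently exists only on local disk:
# Copy the input file into HDFS at the path the LOAD statement expects
hdfs dfs -mkdir -p /home/hduser/pig-tutorial-master
hdfs dfs -put /home/hduser/pig-tutorial-master/movies_data.csv /home/hduser/pig-tutorial-master/
hdfs dfs -ls /home/hduser/pig-tutorial-master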
If you are running the Pig commands from the grunt shell on a Hadoop cluster, set the property:
set opt.fetch false;
With this property set, dump will run in MapReduce mode; by default the property is set to true.
If you are working with Hadoop 2.6.0 and Pig 0.14, downgrading Pig to 0.13 may help. This worked for me.

HBase: MiniDFSCluster.java Fails in Certain Environments

I'm working on some code to access HBase and I am writing unit tests that create a MiniDFSCluster as part of the test setup.
(defn test-config [& options]
(let [testing-utility (HBaseTestingUtility.)]
(.startMiniCluster testing-utility 1)
(let [config (.getConfiguration testing-utility)]
(if (not= options nil)
(doseq [[key value] options]
(.set config key value)))
config)))
;; For those who don't read Clojure, lines 2 and 3 cause
;; the failure and are equivalent to the following Java
;;
;; HBaseTestingUtility testingUtility = new HBaseTestingUtility();
;; testingUtility.startMiniCluster(1); // blows up on Linux but not Mac OSX
This runs fine on Mac OSX with Java HotSpot:
$ java -version
java version "1.6.0_51"
Java(TM) SE Runtime Environment (build 1.6.0_51-b11-457-11M4509)
Java HotSpot(TM) 64-Bit Server VM (build 20.51-b01-457, mixed mode)
$ lein test
lein test hbase.config-test
lein test hbase.table-test
2013-07-12 17:44:13.488 java[27384:1203] Unable to load realm info from SCDynamicStore
Starting DataNode 0 with dfs.data.dir: /Users/dwilliams/Desktop/Repos/mobiusinversion/hbase/target/test-data/fe0199fd-0168-48d9-98ce-b4a5e62d3257/dfscluster_bbad1095-58d1-4571-ba12-4d4f1c24203f/dfs/data/data1,/Users/dwilliams/Desktop/Repos/mobiusinversion/hbase/target/test-data/fe0199fd-0168-48d9-98ce-b4a5e62d3257/dfscluster_bbad1095-58d1-4571-ba12-4d4f1c24203f/dfs/data/data2
Cluster is active
Ran 11 tests containing 14 assertions.
0 failures, 0 errors.
But when this is run in a Linux environment, the following error occurs:
ERROR in (create-table) (MiniDFSCluster.java:426)
Uncaught exception, not in assertion.
expected: nil
actual: java.lang.NullPointerException: null
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes (MiniDFSCluster.java:426)
org.apache.hadoop.hdfs.MiniDFSCluster.<init> (MiniDFSCluster.java:284)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster (HBaseTestingUtility.java:444)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster (HBaseTestingUtility.java:612)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster (HBaseTestingUtility.java:568)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster (HBaseTestingUtility.java:555)
I filed a travis-ci ticket, since this first manifested itself there and I thought it might be due to their environment.
https://github.com/travis-ci/travis-ci/issues/1240
However, after discussion with Travis support, I was able to reproduce the error on CentOS. I tried both the Sun JDK and OpenJDK on Linux, and both produced the same error. What's going on here? Is this a trivial configuration problem? Perhaps something not set in the Linux environment that is set in Mac OSX's?
If you would like to run the tests, please clone the repo
https://github.com/mobiusinversion/hbase
And run lein test. Help is greatly appreciated!
Update:
Filed this HBASE Jira ticket
https://issues.apache.org/jira/browse/HBASE-8944
Short answer: set "umask 022" prior to running the tests.
Long answer: This is a common environmental issue with running MiniDFSCluster from the Hadoop 1.x releases, which HBaseTestingUtility uses internally. It has been effectively fixed in Hadoop 0.22+ (including 2.0+, but not 1.x at the moment).
The underlying problem is https://issues.apache.org/jira/browse/HDFS-2556.
When the MiniDFSCluster starts up, it creates the temporary storage directories to use for the datanode processes (configured as "dfs.data.dir"). These will be created with your currently set umask. When each datanode starts up, it checks that the directories configured in "dfs.data.dir" both exist and that the directory permissions match the expected value (set as "dfs.datanode.data.dir.perm"). If the directories permissions do not match the expected value ("755" by default), then the datanode process exits.
By default, in Hadoop 1.x, this value is set to "755", so if you set your umask to "022", the data directories will wind up with the correct permissions. If, however, the permissions do not match the expected value, the datanode will abort and you will see errors like the following in the test log file:
WARN [main] datanode.DataNode(1577): Invalid directory in dfs.data.dir: Incorrect permission for /.../dfs/data/data2, expected: rwxr-xr-x, while actual: rwxrwxr-x
In later versions of Hadoop, the datanode will attempt to change the directory permissions to the expected value if they do not match. Only if this operation fails will the datanode abort. HDFS-2556 proposes backporting this change to the 1.x releases, but has not yet been fixed.
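A minimal sketch of the short answer above (run both commands in the same shell so the umask is inherited by the test JVM):
# Ensure MiniDFSCluster's data dirs are created with the expected 755 permissions
umask 022
lein test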
