JMeter HTML report: show more than one (not only the first) failed assertion - java

Does anyone know if it is possible to configure the JMeter HTML report so that it shows not only the first failed assertion, but all of them?
The generated xml_log.jtl looks like this:
<assertionResult>
<name>Response Assertion [202]</name>
<failure>true</failure>
<error>false</error>
<failureMessage>Test failed: code expected to equal /
received : [2]00
comparison: [3]00
/</failureMessage>
</assertionResult>
<assertionResult>
<name>Duration Assertion [5ms] request</name>
<failure>true</failure>
<error>false</error>
<failureMessage>The operation lasted too long: It took 293 milliseconds, but should not have lasted longer than 5 milliseconds.</failureMessage>
</assertionResult>
And the generated report:
Thanks.

The point is that the HTML Reporting Dashboard can only be generated from .jtl results files in CSV format:
The dashboard generator is a modular extension of JMeter. Its default behavior is to read and process samples from CSV files to generate HTML files containing graph views.
and a .jtl results file in CSV format stores information about the first failed assertion only.
You can work around this by adding a JSR223 Listener which walks through all the assertion results, combines the failure messages into a single one, and substitutes the first assertion's failure message with this combined cumulative synthetic one. Example code:
def message = new StringBuilder()
prev.getAssertionResults().each { assertionResult ->
    // skip assertions that passed - their failure message is null
    if (assertionResult.getFailureMessage() != null) {
        message.append(assertionResult.getFailureMessage()).append(System.getProperty('line.separator'))
    }
}
if (prev.getAssertionResults().size() > 0) {
    prev.getAssertionResults().first().setFailureMessage(message.toString())
}
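Note also that the failure message only reaches the CSV .jtl file when JMeter is configured to save it; this is the default in recent JMeter versions, but it can be set explicitly in user.properties:

```properties
# save assertion failure messages into the CSV results file
jmeter.save.saveservice.assertion_results_failure_message=true
```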
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It

Related

cTAKES parser output

I am trying to understand the result generated by the cTAKES parser, and there are certain points I am unable to understand.
The cTAKES parser is invoked via the Tika app, and we get the following result:
ctakes:AnatomicalSiteMention: liver:77:82:C1278929,C0023884
ctakes:ProcedureMention: CT scan:24:31:C0040405,C0040405,C0040405,C0040405
ctakes:ProcedureMention: CT:24:26:C0009244,C0009244,C0040405,C0040405,C0009244,C0009244,C0040405,C0009244,C0009244,C0009244,C0040405
ctakes:ProcedureMention: scan:27:31:C0034606,C0034606,C0034606,C0034606,C0441633,C0034606,C0034606,C0034606,C0034606,C0034606,C0034606
ctakes:RomanNumeralAnnotation: did:47:50:
ctakes:SignSymptomMention: lesions:62:69:C0221198,C0221198
ctakes:schema: coveredText:start:end:ontologyConceptArr
resourceName: sample
and the parsed document contains:
The patient underwent a CT scan in April which did not reveal lesions in his liver
I have the following questions:
Why is the UMLS id repeated, as in ctakes:ProcedureMention: scan:27:31:C0009244,C0009244,C0040405,C0040405,C0009244,C0009244,C0040405,C0009244,C0009244,C0009244,C0040405? (The cTAKES configuration properties file has annotationProps=BEGIN,END,ONTOLOGY_CONCEPT_ARR.)
What does RomanNumeralAnnotation indicate?
In a concept unique identifier like C0040405, do the seven digits have any meaning? How are they generated?
System information:
Apache Tika 1.10
Apache cTAKES 3.2.2
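Regarding the repeated identifiers: given the schema line shown above (coveredText:start:end:ontologyConceptArr), the duplicated CUIs can at least be collapsed in post-processing. A minimal sketch, assuming the colon-delimited line shape from the output above; the class and method names here are made up for illustration:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;

public class CtakesLineSketch {
    // Parse one "ctakes:<Mention>: <coveredText>:<start>:<end>:<cui,cui,...>" line
    // and return the distinct CUIs, keeping their order of first appearance.
    static LinkedHashSet<String> distinctCuis(String line) {
        String[] parts = line.split(":");
        // the ontologyConceptArr is the last colon-separated field
        return new LinkedHashSet<>(Arrays.asList(parts[parts.length - 1].split(",")));
    }

    public static void main(String[] args) {
        String line = "ctakes:ProcedureMention: CT scan:24:31:C0040405,C0040405,C0040405,C0040405";
        System.out.println(distinctCuis(line)); // [C0040405]
    }
}
```

This only dedups the output; it does not change what cTAKES itself emits into ONTOLOGY_CONCEPT_ARR.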

apache PIG with datafu: Cannot resolve UDF's

I'm trying the quickstart from here: http://datafu.incubator.apache.org/docs/datafu/getting-started.html
I tried nearly everything, but I'm sure the fault must be mine somewhere. I have already tried:
exporting PIG_HOME, CLASSPATH, PIG_CLASSPATH
starting pig with -cp datafu-pig-incubating-1.3.0.jar
registering datafu-pig-incubating-1.3.0.jar locally and in HDFS => both successful (at least no error shown)
Nothing helped.
Trying this on pig:
register datafu-pig-incubating-1.3.0.jar
DEFINE Median datafu.pig.stats.StreamingMedian();
data = load '/user/hduser/numbers.txt' using PigStorage() as (val:int);
data2 = FOREACH (GROUP data ALL) GENERATE Median(data);
or directly
data2 = FOREACH (GROUP data ALL) GENERATE datafu.pig.stats.StreamingMedian(data);
I get this name-resolve error:
2016-06-04 17:22:22,734 [main] ERROR org.apache.pig.tools.grunt.Grunt
- ERROR 1070: Could not resolve datafu.pig.stats.StreamingMedian using imports: [, java.lang., org.apache.pig.builtin.,
org.apache.pig.impl.builtin.] Details at logfile:
/home/hadoop/pig_1465053680252.log
When I look inside datafu-pig-incubating-1.3.0.jar it looks OK, everything is in place. I also tried some Bag functions, with the same error.
I think it's the kind of newbie error I just don't see (I did not find DataFu-specific answers on SO or Google), so thanks in advance for shedding some light on this.
The Pig script is correct; the only thing that could break is that while registering DataFu some class dependencies could not be met.
Try running locally (pig -x local) and look at the detailed log.
Also check the version of Pig - it should be newer than 0.14.0.

How to read multiple ORC & OBR segment from HL7 message using HAPI

I have the following HL7 message to parse.
MSH|^~\&|LIS|LAB1|APP2|LAB2|20140706163250||OML^O21|20140706163252282|P|2.4
PID|1||7015||LISTESTPATIENT12^LISTESTPATIENT12||19730901000000|F
PV1|1||||||LISPHYCDE1^LISPHY001^LISCARE TEST
ORC|NW|LISCASEID15|||||||||||||||NJ||||TCL^TCL
OBR|1|LISCASEID15||28259^Her2^STAIN|||20140706162713|||||||20140706162713|Breast|patho^pathl^pathf|||image1^image1^image1|blk1^blk1^blk1|SPEC14^SPEC14^SPEC14
ORC|XO|LISCASEID15|||||||||||||||NJ||||TCL^TCL
OBR|2|LISCASEID15||28260^Her2^STAIN|||20140706162713|||||||20140706162713|Breast|patho^pathl^pathf|||image2^image2^image|blk2^blk2^blk2|SPEC14^SPEC14^SPEC14
I am trying to fetch values from both the OBR & ORC segments using the HAPI Terser.get() method as follows:
Terser t = new Terser(h7msg);
t.get("/.ORDER_OBSERVATION(0)/ORC-1-1"); // Should return NW
t.get("/.ORDER_OBSERVATION(1)/ORC-1-1"); // Should return XO
t.get("/.ORDER_OBSERVATION(0)/OBR-4-1"); // Should return 28259
t.get("/.ORDER_OBSERVATION(1)/OBR-4-1"); // Should return 28260
But all of the above statements give the following error:
"End of message reached while iterating without loop"
I don't know what I am doing wrong here.
Please help me with the proper input to the Terser.get() method to get the above values.
The issue here is that the OML^O21 message structure does not contain multiple ORDER_OBSERVATION groups. This means you cannot access the element ORDER_OBSERVATION(1), because it does not exist.
Here is a representation within 7edit:
When you parse your OML message to XML, you can see the real structure of the HL7:
<?xml version="1.0" encoding="UTF-8"?><OML_O21 xmlns="urn:hl7-org:v2xml">
<MSH>
<MSH.1>|</MSH.1>
<MSH.2>^~\&</MSH.2>
<MSH.3>
<HD.1>LIS</HD.1>
</MSH.3>
<MSH.4>
<HD.1>LAB1</HD.1>
</MSH.4>
<MSH.5>
<HD.1>APP2</HD.1>
</MSH.5>
<MSH.6>
<HD.1>LAB2</HD.1>
</MSH.6>
<MSH.7>
<TS.1>20140706163250</TS.1>
</MSH.7>
<MSH.9>
<MSG.1>OML</MSG.1>
<MSG.2>O21</MSG.2>
</MSH.9>
<MSH.10>20140706163252282</MSH.10>
<MSH.11>
<PT.1>P</PT.1>
</MSH.11>
<MSH.12>
<VID.1>2.4</VID.1>
</MSH.12>
</MSH>
<OML_O21.PATIENT>
<PID>
<PID.1>1</PID.1>
<PID.3>
<CX.1>7015</CX.1>
</PID.3>
<PID.5>
<XPN.1>
<FN.1>LISTESTPATIENT12</FN.1>
</XPN.1>
<XPN.2>LISTESTPATIENT12</XPN.2>
</PID.5>
<PID.7>
<TS.1>19730901000000</TS.1>
</PID.7>
<PID.8>F</PID.8>
</PID>
<OML_O21.PATIENT_VISIT>
<PV1>
<PV1.1>1</PV1.1>
<PV1.7>
<XCN.1>LISPHYCDE1</XCN.1>
<XCN.2>
<FN.1>LISPHY001</FN.1>
</XCN.2>
<XCN.3>LISCARE TEST</XCN.3>
</PV1.7>
</PV1>
</OML_O21.PATIENT_VISIT>
</OML_O21.PATIENT>
<OML_O21.ORDER_GENERAL>
<OML_O21.ORDER>
<ORC>
<ORC.1>NW</ORC.1>
<ORC.2>
<EI.1>LISCASEID15</EI.1>
</ORC.2>
<ORC.17>
<CE.1>NJ</CE.1>
</ORC.17>
<ORC.21>
<XON.1>TCL</XON.1>
<XON.2>TCL</XON.2>
</ORC.21>
</ORC>
</OML_O21.ORDER>
<OML_O21.ORDER>
<ORC>
<ORC.1>XO</ORC.1>
<ORC.2>
<EI.1>LISCASEID15</EI.1>
</ORC.2>
<ORC.17>
<CE.1>NJ</CE.1>
</ORC.17>
<ORC.21>
<XON.1>TCL</XON.1>
<XON.2>TCL</XON.2>
</ORC.21>
</ORC>
<OML_O21.OBSERVATION_REQUEST>
<OBR>
<OBR.1>1</OBR.1>
<OBR.2>
<EI.1>LISCASEID15</EI.1>
</OBR.2>
<OBR.4>
<CE.1>28259</CE.1>
<CE.2>Her2</CE.2>
<CE.3>STAIN</CE.3>
</OBR.4>
<OBR.7>
<TS.1>20140706162713</TS.1>
</OBR.7>
<OBR.14>
<TS.1>20140706162713</TS.1>
</OBR.14>
<OBR.15>
<SPS.1>
<CE.1>Breast</CE.1>
</SPS.1>
</OBR.15>
<OBR.16>
<XCN.1>patho</XCN.1>
<XCN.2>
<FN.1>pathl</FN.1>
</XCN.2>
<XCN.3>pathf</XCN.3>
</OBR.16>
<OBR.19>image1</OBR.19>
<OBR.20>blk1</OBR.20>
<OBR.21>SPEC14</OBR.21>
</OBR>
<OML_O21.PRIOR_RESULT>
<OML_O21.ORDER_PRIOR>
<OBR>
<OBR.1>2</OBR.1>
<OBR.2>
<EI.1>LISCASEID15</EI.1>
</OBR.2>
<OBR.4>
<CE.1>28260</CE.1>
<CE.2>Her2</CE.2>
<CE.3>STAIN</CE.3>
</OBR.4>
<OBR.7>
<TS.1>20140706162713</TS.1>
</OBR.7>
<OBR.14>
<TS.1>20140706162713</TS.1>
</OBR.14>
<OBR.15>
<SPS.1>
<CE.1>Breast</CE.1>
</SPS.1>
</OBR.15>
<OBR.16>
<XCN.1>patho</XCN.1>
<XCN.2>
<FN.1>pathl</FN.1>
</XCN.2>
<XCN.3>pathf</XCN.3>
</OBR.16>
<OBR.19>image2</OBR.19>
<OBR.20>blk2</OBR.20>
<OBR.21>SPEC14</OBR.21>
</OBR>
</OML_O21.ORDER_PRIOR>
</OML_O21.PRIOR_RESULT>
</OML_O21.OBSERVATION_REQUEST>
</OML_O21.ORDER>
</OML_O21.ORDER_GENERAL>
</OML_O21>
This is unfortunately a problem with many parsers like HAPI: they verify the structure of a message depending on its type (OML_O21) and also its version, because if you change from 2.4 to 2.5 you will get a completely different structure.
If you don't care about that structure, you may use a different HL7 parser like HL7X, which transforms the HL7 to XML like a delimited file, independent of the HL7 message type or version.
Here is a similar problem on Stack Overflow:
How to parse the Multiple OBR Segment in HL7 using HAPI TERSER
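If the Terser paths keep fighting the parsed group structure, the few values in question can also be pulled straight from the raw ER7 text. This is only a rough sketch under the assumption that plain segment/field splitting is acceptable for your messages; the class and method names are made up, and real-world messages can contain escape sequences and field repetitions that this ignores:

```java
import java.util.ArrayList;
import java.util.List;

public class Hl7FieldSketch {
    // For every occurrence of the given segment, return one component of one
    // field: `field` is the 1-based field index after the segment name,
    // `component` the 1-based index within the '^'-separated components.
    static List<String> fieldComponent(String msg, String segment, int field, int component) {
        List<String> out = new ArrayList<>();
        for (String line : msg.split("\r|\n")) {
            if (line.startsWith(segment + "|")) {
                String[] fields = line.split("\\|", -1);
                if (field < fields.length) {
                    String[] comps = fields[field].split("\\^", -1);
                    out.add(component - 1 < comps.length ? comps[component - 1] : "");
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String msg = String.join("\n",
            "ORC|NW|LISCASEID15",
            "OBR|1|LISCASEID15||28259^Her2^STAIN",
            "ORC|XO|LISCASEID15",
            "OBR|2|LISCASEID15||28260^Her2^STAIN");
        System.out.println(fieldComponent(msg, "ORC", 1, 1)); // [NW, XO]
        System.out.println(fieldComponent(msg, "OBR", 4, 1)); // [28259, 28260]
    }
}
```

This sidesteps the message-model validation entirely, which is essentially what a structure-agnostic parser like HL7X does for you.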

error 1200 mismatched input 'as' expecting SEMI_COLON when using DayExtractor in Pig

I'm trying to follow this tutorial to analyze Apache access log files using Pig:
http://venkatarun-n.blogspot.com/2013/01/analyzing-apache-logs-with-pig.html
And I'm stuck at this Pig script:
grpd = GROUP logs BY DayExtractor(dt) as day;
When I execute that in the grunt terminal, I get the following error:
ERROR 1200: mismatched input 'as' expecting
SEMI_COLON Failed to parse: mismatched input 'as'
expecting SEMI_COLON
The function DayExtractor is defined from piggybank.jar in this manner:
DEFINE DayExtractor
org.apache.pig.piggybank.evaluation.util.apachelogparser.DateExtractor('yyyy-MM-dd');
Ideas anyone?
I've been searching for a while about this. Any help would be greatly appreciated.
I am not sure how the author of the blog post got it to work, but as far as I know, you cannot use as in GROUP BY in Pig. Also, I don't think you can use UDFs in GROUP BY. Maybe the author had a different version of Pig that supported such operations. To get the same effect, you can split it into two steps:
logs_day = FOREACH logs GENERATE ....., DayExtractor(dt) as day;
grpd = GROUP logs_day BY day;

Creating index and adding mapping in Elasticsearch with java api gives missing analyzer errors

Code is in Scala. It is extremely similar to Java code.
Code that our map indexer uses to create index: https://gist.github.com/a16e5946b67c6d12b2b8
Utilities that the above code uses to create index and mapping: https://gist.github.com/4f88033204cd761abec0
Errors that java gives: https://gist.github.com/d6c835233e2b606a7074
Response of http://elasticsearch.domain/maps/_settings after running code and getting errors: https://gist.github.com/06ca7112ce1b01de3944
JSON FILES:
https://gist.github.com/bbab15d699137f04ad87
https://gist.github.com/73222e300be9fffd6380
Attached are the JSON files I'm loading in. I have confirmed that it is loading the right JSON files and properly outputting them as strings into .loadFromSource and .setSource.
Any ideas why it can't find the analyzers even though they are in _settings? If I run these JSON files via curl they work fine and properly set up the mapping.
The code I was using to create the index (found here: Define custom ElasticSearch Analyzer using Java API) was creating settings in the index like:
"index.settings.analysis.filter.my_snow.type": "stemmer"
i.e. it had an extra settings element in the settings path.
I changed my indexing code to the following to fix this:
def createIndex(client: Client, indexName: String, indexFile: String) {
  // Create the index, passing the JSON file contents directly as the request source
  client.admin().indices().prepareCreate(indexName)
    .setSource(Utils.loadFileAsString(indexFile))
    .execute()
    .actionGet()
}
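For comparison, when the JSON body is passed as the create-index source like this, Elasticsearch expects the analysis definition nested under a top-level settings object, roughly as in the following sketch (the my_snow stemmer name is taken from the broken setting above; the analyzer name and the language value are assumptions):

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "my_snow": { "type": "stemmer", "language": "English" }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "my_snow"]
        }
      }
    }
  }
}
```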
