JMeter as code assertions are not being considered in test results - java

I'm using JMeter as code (a programmatic approach with a Java Maven project instead of the GUI) in order to stress-test an AWS Lambda serverless API.
I've already built a test plan, thread group, HTTPSamplerProxy and so on.
The execution of the calls to the API works perfectly, but that is not the case for, e.g., the DurationAssertion I've added to the HTTP sampler.
I've also set a CSV file for the output, and after execution everything there looks OK (status code 200, etc.), but the test should fail because it exceeds the threshold of the DurationAssertion I've configured (in addition to other assertion test elements).
I thought that perhaps I had to set "enabled" = true on the DurationAssertion object, but that had no effect. I've also tried to access the JMeter context in this way:
JMeterContextService.getContext().getPreviousResult()
I expected the code above to retrieve a SampleResult (which has an AssertionResult collection), but the SampleResult is null.
A test plan with test elements (a DurationAssertion in this case) but without the corresponding analysis of those assertions' results makes no sense. I want to see a failure message for each call that exceeds a certain threshold. If I were using the JMeter GUI, I would add a View Results Tree, which shows a sampler result view with the detail of the request, the response, and the associated assertions. In addition to the assertion result for each request, I want to see the request payload, the full response and the headers, but programmatically (without using the GUI).
So I would highly appreciate it if anyone could give me a hint on how to accomplish this goal in code.
UPDATE 1: I'm sharing a GitHub gist with the entire source code, as user UBIK LOAD PACK suggested:
https://gist.github.com/svillarreal/5eb90a66b8972633b95c249abb3566da
UPDATE 2: Inspection of the context object (evaluated after the JMeter engine finished its run) - all properties inside are null.
UPDATE 3
i) I've recently found a jmeter.properties file, where I've configured the following properties:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.assertion_results=all
Now the output, as XML instead of CSV, shows at least the sent request payload and the response data, which is VERY useful for analysing error cases.
ii) I inspected JMeterContextService.getContext() inside the JMeterEngine execution instead of after it finished its run, and I realized that there is one context per thread group which is only populated while the test is running, so now it's clear why all the properties were null in UPDATE 2.
Best regards and thanks!

I can think of at least one use case where your approach will not work: when JMeter doesn't receive a response from the server at all.
For example, if your server gets overloaded, JMeter might never get the response back, so your Duration Assertion will simply not be applied, since PostProcessors, Listeners and Assertions are not fired when the SampleResult is null.
So, in order to be on the safe side, I would recommend applying connect and response timeouts to your HTTP Request sampler(s):
HTTPSamplerProxy httpSampler = new HTTPSamplerProxy();
httpSampler.setConnectTimeout("3000");
httpSampler.setResponseTimeout("3000");
//etc.
If you have more than one HTTP Request sampler in the test plan, it makes sense to go for HTTP Request Defaults instead of setting the timeouts individually.
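A minimal sketch of what HTTP Request Defaults could look like in code, assuming a typical JMeter-as-code setup (threadGroupHashTree is an illustrative name for the Thread Group node of your HashTree; the constants come from JMeter's TestElement and HTTPSamplerBase classes):
// HTTP Request Defaults: timeouts applied to every HTTP sampler in scope
ConfigTestElement httpDefaults = new ConfigTestElement();
httpDefaults.setName("HTTP Request Defaults");
httpDefaults.setProperty(TestElement.TEST_CLASS, ConfigTestElement.class.getName());
httpDefaults.setProperty(TestElement.GUI_CLASS, HttpDefaultsGui.class.getName());
httpDefaults.setProperty(HTTPSamplerBase.CONNECT_TIMEOUT, "3000");
httpDefaults.setProperty(HTTPSamplerBase.RESPONSE_TIMEOUT, "3000");
// add it as a child of the Thread Group node so it applies to its samplers
threadGroupHashTree.add(httpDefaults);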

Finally I managed to fix this. The issue was that I was handling incorrectly the tree that is passed to the StandardJMeterEngine.
In JMeter everything is based on this tree, and, just like in the GUI, we have to take care of how the elements are positioned in its hierarchy.
By analysing the library and debugging it intensely, I got a much deeper understanding of how JMeter works and understood that everything is managed starting from the HashTree. So the solution was to add the DurationAssertion and ResponseAssertion as children of the HTTPSamplerProxy node instead of setting them as test elements of the HTTPSamplerProxy.
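In code, the working structure looks roughly like this (a minimal sketch; testPlan, threadGroup, httpSampler, durationAssertion and responseAssertion are illustrative names for the elements already built in the plan):
HashTree testPlanTree = new HashTree();
HashTree testPlanSubTree = testPlanTree.add(testPlan);
HashTree threadGroupSubTree = testPlanSubTree.add(threadGroup);
// the assertions hang from the sampler node in the tree,
// instead of being merged into the sampler as test elements
HashTree samplerSubTree = threadGroupSubTree.add(httpSampler);
samplerSubTree.add(durationAssertion);
samplerSubTree.add(responseAssertion);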
In particular, the method that gathers the assertions to check after the execution is the following (and it is what showed me how to manage the HashTree):
// org.apache.jmeter.threads.TestCompiler
private void saveSamplerConfigs(Sampler sam) {
    List<ConfigTestElement> configs = new LinkedList<>();
    List<Controller> controllers = new LinkedList<>();
    List<SampleListener> listeners = new LinkedList<>();
    List<Timer> timers = new LinkedList<>();
    List<Assertion> assertions = new LinkedList<>();
    LinkedList<PostProcessor> posts = new LinkedList<>();
    LinkedList<PreProcessor> pres = new LinkedList<>();
    for (int i = stack.size(); i > 0; i--) {
        addDirectParentControllers(controllers, stack.get(i - 1));
        List<PreProcessor> tempPre = new LinkedList<>();
        List<PostProcessor> tempPost = new LinkedList<>();
        List<Assertion> tempAssertions = new LinkedList<>();
        for (Object item : testTree.list(stack.subList(0, i))) {
            if (item instanceof ConfigTestElement) {
                configs.add((ConfigTestElement) item);
            }
            if (item instanceof SampleListener) {
                listeners.add((SampleListener) item);
            }
            if (item instanceof Timer) {
                timers.add((Timer) item);
            }
            if (item instanceof Assertion) {
                tempAssertions.add((Assertion) item);
            }
            if (item instanceof PostProcessor) {
                tempPost.add((PostProcessor) item);
            }
            if (item instanceof PreProcessor) {
                tempPre.add((PreProcessor) item);
            }
        }
        assertions.addAll(0, tempAssertions);
        pres.addAll(0, tempPre);
        posts.addAll(0, tempPost);
    }
    SamplePackage pack = new SamplePackage(configs, listeners, timers, assertions,
            posts, pres, controllers);
    pack.setSampler(sam);
    pack.setRunningVersion(true);
    samplerConfigMap.put(sam, pack);
}
I also had to activate the following property:
jmeter.save.saveservice.assertion_results_failure_message=true
As a consequence, I now have my CSV report with the assertion result messages included in a dedicated column.
Well, issue resolved. ** I've updated the GitHub gist with the final solution ** Many thanks to all who read this post and tried to help.
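In case you prefer to keep these save-service settings in code instead of jmeter.properties, a hedged sketch using JMeterUtils (the properties have to be set before the engine runs):
// assumption: call these before StandardJMeterEngine.configure()/run()
JMeterUtils.setProperty("jmeter.save.saveservice.output_format", "xml");
JMeterUtils.setProperty("jmeter.save.saveservice.assertion_results", "all");
JMeterUtils.setProperty("jmeter.save.saveservice.assertion_results_failure_message", "true");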
Best regards,

Related

Mutiny - Discarding Failures when Combining Results of Several Unis

I'm using Mutiny to try to fetch data from several external sources. Each call produces a list of results, and I am currently combining these results into one list in a Uni as follows:
List<Uni<List<Result>>> unis = new ArrayList<>();
for (Source source : sources) {
    unis.add(source.getResults());
}
return Uni.combine().all().unis(unis).combinedWith(
    responses -> {
        List<Result> res = new ArrayList<>();
        for (List<Result> response : (List<List<Result>>) responses) {
            res.addAll(response);
        }
        return res;
    }
);
When one of these Unis fails, though, the entire final Uni fails.
I want to be able to get the combined list of results from all of the calls that do not fail, and just log failures or something, but I can't figure out how to do this from the Mutiny documentation. Any help would be much appreciated.
You should have an error-handling strategy for each Uni then, so that the combination only sees succeeding Unis (see the sketch after the links below).
See:
https://quarkus.io/blog/mutiny-failure-handling/
https://smallrye.io/smallrye-mutiny/2.0.0/tutorials/handling-failures/
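A minimal sketch of that idea, assuming it is acceptable to fall back to an empty list for a failing source (LOG is an illustrative logger):
List<Uni<List<Result>>> unis = new ArrayList<>();
for (Source source : sources) {
    unis.add(source.getResults()
            .onFailure().invoke(failure -> LOG.warn("source failed, ignoring it", failure))
            .onFailure().recoverWithItem(Collections.emptyList()));
}
// combine as before: every Uni now succeeds, possibly with an empty list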

how to convert Flux<pojo> to ArrayList<String>

In my Spring Boot service class, I have created the following code, which is not working as desired:
Service class:
Flux<Workspace> mWorkspace = webClient.get().uri(WORKSPACEID)
.retrieve().bodyToFlux(Workspace.class);
ArrayList<String> newmWorkspace = new ArrayList();
newmWorkspace = mWorkspace.blockLast();
return newmWorkspace;
Please can someone help me convert this list of JSON values into an ArrayList?
JSON:
[
  {
    "id": "123abc"
  },
  {
    "id": "123abc"
  }
]
Why the code is not working as desired
mWorkspace is a publisher of one or many items of type Workspace.
Calling mWorkspace.blockLast() will get a Workspace from that publisher:
an object of type Workspace, not of type ArrayList<String>.
That's why you get: Type mismatch: cannot convert from Workspace to ArrayList<String>.
Converting from a Flux to an ArrayList
First of all, in reactive programming a Flux is not meant to be blocked; the blockXxx methods are made for testing purposes. If you find yourself using them, you may not need reactive logic at all.
In your service, you could try this:
// initialize the list
ArrayList<String> newmWorkspace = new ArrayList<>();
Flux<Workspace> mWorkspace = webClient.get().uri(WORKSPACEID)
        .retrieve().bodyToFlux(Workspace.class)
        .map(workspace -> {
            // feed the list
            newmWorkspace.add(workspace.getId());
            return workspace;
        });
// this line will trigger the publication of items, hence feeding the list
mWorkspace.subscribe();
Just in case you want to convert a JSON String to a POJO:
String responseAsjsonString = "[{\"id\": \"123abc\"},{\"id\": \"123cba\"}] ";
Workspace[] workspaces = new ObjectMapper().readValue(responseAsjsonString, Workspace[].class);
You would usually want to avoid blocking in a non-blocking application. However, if you are just migrating from blocking to non-blocking and doing so step by step (as long as you are not mixing blocking and non-blocking in your production code), or you are on a servlet-stack app and only want to use the WebFlux client, it should be fine.
With that being said, a Flux is a Publisher that represents an asynchronous sequence of 0..N emitted items. When you call blockLast, you wait until the last signal completes, which resolves to a Workspace object.
You want to collect each resolved item into a list and return that. For this purpose, there is a useful method called collectList, which does the job without blocking the stream. You can then block on the Mono<List<Workspace>> returned by this method to retrieve the list.
So this should give you the result you want:
List<Workspace> workspaceList = workspaceFlux.collectList().block();
If you must use a blocking call in the reactive stack, then to avoid blocking the event loop you should subscribe to it on a different scheduler; for I/O purposes, use the boundedElastic scheduler. You almost never want to call block on a reactive stack; instead, subscribe to it, or better, let WebFlux handle the subscription by returning the publisher from your controller (or handler).
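For example, a hedged sketch of the fully non-blocking variant, returning the publisher and letting WebFlux subscribe (the mapping path and method name are illustrative; webClient and WORKSPACEID are the ones from the question):
@GetMapping("/workspaces/ids")
public Mono<List<String>> workspaceIds() {
    // collectList gathers every emitted Workspace id into a Mono<List<String>>
    return webClient.get().uri(WORKSPACEID)
            .retrieve()
            .bodyToFlux(Workspace.class)
            .map(Workspace::getId)
            .collectList();
}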

CommandExecuteIn Background throws a "Not an (encodable) value" error

I am currently trying to implement file exports in the background so that the user can do other actions while the file is downloading.
I used the Apache Isis CommandExecuteIn.BACKGROUND action attribute. However, I get an error,
"Not an (encodable) value"; this error is thrown by the ScalarValueRenderer class.
This is what my method looks like:
@Action(semantics = SemanticsOf.SAFE,
        command = CommandReification.ENABLED,
        commandExecuteIn = CommandExecuteIn.BACKGROUND)
public Blob exportViewAsPdf() {
    final Contact contact = this;
    final String filename = this.businessName + " Contact Details";
    final Map<String, Object> parameters = new HashMap<>();
    parameters.put("contact", contact);
    final String template = templateLoader.buildFromTemplate(Contact.class, "ContactViewTemplate", parameters);
    return pdfExporter.exportAsPdf(filename, template);
}
I think the error has something to do with the command not actually invoking the action but returning the persisted background command instead.
This implementation actually worked on a method with no return type. Did I miss something? Or is there a way to implement a background command and still get the expected result?
Interesting use case, but it's not one I anticipated when that part of the framework was implemented, so I'm not surprised it doesn't work. Obviously the error message you are getting here is pretty obscure, so I've raised a
JIRA ticket to see if we can at least improve that.
I'm interested to know what user experience you think the framework should provide here.
In the Estatio application that we work on (which has driven out many of the features added to the framework over the last few years) we have a somewhat similar requirement: to obtain PDFs from a reporting server (which takes 5 to 10 seconds) and then download them. This is for all the tenants in a shopping centre, so there could be 5 to 50 of these to generate in a single go. The design we went with was to move the rendering into a background command (similar to the templateLoader.buildFromTemplate(...) and pdfExporter.exportAsPdf(...) method calls in your code fragment), and to capture the output as a Document via the document module. We then use the pdfbox addon to stitch all the document PDFs together into a single downloadable PDF for printing.
Hopefully that gives you some ideas for a different way to support your use case.
Thx
Dan

Job parameters are getting cached

I am facing a problem with jobParameters in Spring Batch. I have a jobParameter which is optional. The first time, when I pass the job parameter through CommandLineJobRunner, it works. The second time I do not pass any jobParameter, but it still takes the previous jobParameter. When I clear my metadata, the jobParameter comes through as null, since I am not passing it. How can I fix this without clearing the metadata? Does this happen normally in Spring Batch?
Edited code:
I am using MapJobRegistry, and -next is used while launching the job. When I debugged, I observed that, in order to increment the run.id, it loads all the previous parameters:
public JobParameters getNext(JobParameters parameters) {
    if (parameters == null) {
        parameters = new JobParameters();
    }
    long id = parameters.getLong(key, 0L) + 1;
    return new JobParametersBuilder(parameters).addLong(key, id).toJobParameters();
}
The first thing to mention is that you should not use the Map... classes. They are not intended for production and, therefore, you are better off using the various JDBC implementations. If you don't want to use a real DB, you can always use an in-memory DB.
But about your initial question:
You are using the CommandLineJobRunner together with the option -next.
Having a look at the method CommandLineJobRunner.start() you find the following lines:
if (opts.contains("-next")) {
    JobParameters nextParameters = getNextJobParameters(job);
    Map<String, JobParameter> map = new HashMap<String, JobParameter>(nextParameters.getParameters());
    map.putAll(jobParameters.getParameters());
    jobParameters = new JobParameters(map);
}
You can see that getNextJobParameters is called. Inside this method you can see that the data of the previous run is loaded via 'jobExplorer.getJobInstances(jobIdentifier, 0, 1);' (if there was a previous run). If there was a previous run, the job parameters of that old run are returned after applying the incrementer's next method; hence, this is the reason you get your old parameters.
Now, this is the technical explanation, but the question that follows is how you should use the -next and -restart options in order to get what you want.
Using next:
- next works as expected only if you launch a job with the same name and the same job parameters. Otherwise the results can be confusing. Actually, I use next only inside unit and integration tests.
Using restart:
- you can use restart if the previous job execution with the same job name failed. Here too, the job parameters will be taken from your previous launch.
For a normal start of a job, you shouldn't be using next or restart. A normal start of a job should always have a unique job parameter, for instance a "runId" whose value is changed with every start of the job (otherwise you would get a JobInstanceAlready... exception).
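A hedged sketch of launching with a unique parameter per run instead of relying on -next (the parameter name and the jobLauncher/job variables are illustrative):
// every start gets a fresh JobInstance because "runId" is unique
JobParameters params = new JobParametersBuilder()
        .addLong("runId", System.currentTimeMillis())
        .toJobParameters();
jobLauncher.run(job, params); // handle/declare the checked exceptions thrown by run()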
In the case of unit tests, I use a unique "runId" for every test case, and there I do use the -next option.

How to deal with code that runs before foreach block in Apache Spark?

I'm trying to deal with some code that runs differently in Spark stand-alone mode and on a Spark cluster. Basically, for each item in an RDD, I'm trying to add it to a list, and once this is done, I want to send this list to Solr.
This works perfectly fine when I run the following code in stand-alone mode, but it does not work when the same code is run on a cluster. When I run it on a cluster, it is as if the "send to Solr" part of the code is executed before the list to be sent to Solr is filled with items. I tried to force the execution with solrInputDocumentJavaRDD.collect(); after the foreach, but it seems to have no effect.
// For each RDD
solrInputDocumentJavaDStream.foreachRDD(
    new Function<JavaRDD<SolrInputDocument>, Void>() {
        @Override
        public Void call(JavaRDD<SolrInputDocument> solrInputDocumentJavaRDD) throws Exception {
            // For each item in a single RDD
            solrInputDocumentJavaRDD.foreach(
                new VoidFunction<SolrInputDocument>() {
                    @Override
                    public void call(SolrInputDocument solrInputDocument) {
                        // Add the solrInputDocument to the list of SolrInputDocuments
                        SolrIndexerDriver.solrInputDocumentList.add(solrInputDocument);
                    }
                });
            // Try to force execution
            solrInputDocumentJavaRDD.collect();
            // After having finished adding every SolrInputDocument to the list,
            // add it to the solrServer and commit, waiting for the commit to be flushed
            try {
                if (SolrIndexerDriver.solrInputDocumentList != null
                        && SolrIndexerDriver.solrInputDocumentList.size() > 0) {
                    SolrIndexerDriver.solrServer.add(SolrIndexerDriver.solrInputDocumentList);
                    SolrIndexerDriver.solrServer.commit(true, true);
                    SolrIndexerDriver.solrInputDocumentList.clear();
                }
            } catch (SolrServerException | IOException e) {
                e.printStackTrace();
            }
            return null;
        }
    }
);
What should I do so that the sending-to-Solr part executes after the SolrInputDocuments have been added to solrInputDocumentList (and so that it also works in cluster mode)?
As I mentioned on the Spark Mailing list:
I'm not familiar with the Solr API but provided that 'SolrIndexerDriver' is a singleton, I guess that what's going on when running on a cluster is that the call to:
SolrIndexerDriver.solrInputDocumentList.add(elem)
is happening on different singleton instances of the SolrIndexerDriver on different JVMs while
SolrIndexerDriver.solrServer.commit
is happening on the driver.
In practical terms, the lists on the executors are being filled in but they are never committed, and on the driver the opposite is happening.
The recommended way to handle this is to use foreachPartition like this:
rdd.foreachPartition { iter =>
  // prepare connection
  Stuff.connect(...)
  // add elements
  iter.foreach(elem => Stuff.add(elem))
  // submit
  Stuff.commit()
}
This way you can add the data of each partition and commit the results in the local context of each executor. Be aware that this add/commit must be thread-safe in order to avoid data loss or corruption.
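A hedged Java sketch of that pattern for this case, assuming a SolrJ client can be created on the executors (the Solr URL is illustrative, and the lambda form assumes a Java 8 / current Spark API):
solrInputDocumentJavaDStream.foreachRDD(rdd ->
    rdd.foreachPartition(documents -> {
        // one client per partition, created locally on the executor
        HttpSolrClient solr = new HttpSolrClient.Builder("http://solr-host:8983/solr/collection1").build();
        List<SolrInputDocument> batch = new ArrayList<>();
        while (documents.hasNext()) {
            batch.add(documents.next());
        }
        if (!batch.isEmpty()) {
            solr.add(batch);
            solr.commit(true, true);
        }
        solr.close();
    })
);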
Have you checked in the Spark UI to see the execution plan of this job?
Check how it is getting split into stages and what their dependencies are. That should hopefully give you an idea.
