I wrote a custom Java request that extends AbstractJavaSamplerClient to measure the performance of a Java API invocation. However, now I need to measure the performance of multiple APIs that are part of the same use case.
i.e.
Server serverInst = new Server();
serverInst.api1();
serverInst.api2();
serverInst.api3();
I need to get the metrics in JMeter for each API invocation (api1, api2, api3). However, I cannot split those API calls, since the api2 call depends on api1 (and likewise api3 depends on api2). If I could split them, I could write a different "Java Sampler Client" for each API. Since all these APIs are inter-dependent, I have to invoke all of them at once.
The runTest method returns only one SampleResult, but I need to return multiple SampleResults. I tried SampleResult.setParent() and SampleResult.storeSubResult(), but no luck.
Any pointers on this would be helpful.
Thanks
How about creating three different tests? Each one collects the time for the required APIs. So, in test 1 you'd have:
startTiming();
api1();
api2();
api3();
completeSample();
Then in the second test:
api1();
startTiming();
api2();
api3();
completeSample();
and so on.
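If you'd rather maintain a single sampler class, here is a minimal sketch of the same idea, parameterized by which call to start timing at. Server and the api*() methods follow the question's example; the class and parameter names are made up:

import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

public class ChainedApiSampler extends AbstractJavaSamplerClient {

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        args.addArgument("startAt", "1"); // 1, 2 or 3: first call included in the timing
        return args;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        int startAt = Integer.parseInt(context.getParameter("startAt", "1"));
        Server serverInst = new Server();
        SampleResult result = new SampleResult();

        if (startAt > 1) serverInst.api1(); // untimed prerequisite
        if (startAt > 2) serverInst.api2(); // untimed prerequisite

        result.sampleStart();
        if (startAt <= 1) serverInst.api1();
        if (startAt <= 2) serverInst.api2();
        serverInst.api3();
        result.sampleEnd();

        result.setSuccessful(true);
        return result;
    }
}

Three copies of the test element, configured with startAt = 1, 2 and 3, then give you the three timings; subtracting adjacent results isolates each API call.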
Related
Is it possible for a Java request to have a summary report attached to it? I tried attaching a TPS listener, a results tree, and a results table, but could not see the report populated after running in JMeter.
It is not explicitly mentioned in the JMeter docs, but I assume it should be supported. However, I am not able to see it even after a successful run of the test, as seen from the logs (the runTest() method gets called successfully).
It is supported. The runTest() function is supposed to return a SampleResult, and it's your job to call the necessary functions:
create a new SampleResult instance
call the sampleStart() function when you want to start the measurement
call the sampleEnd() function when you want to stop the measurement
call the setSuccessful() function to mark the sampler as passed or failed
call the setResponseCode() and setResponseData() functions to set the response code/response body if needed
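Putting those calls together, a minimal skeleton might look like this (the measured work is a placeholder):

import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

public class MySampler extends AbstractJavaSamplerClient {
    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult(); // create a new instance
        result.sampleStart();                     // start the measurement
        try {
            // ... the code you want to measure goes here ...
            result.sampleEnd();                   // stop the measurement
            result.setSuccessful(true);
            result.setResponseCode("200");
            result.setResponseData("OK", "UTF-8");
        } catch (Exception e) {
            result.sampleEnd();
            result.setSuccessful(false);
            result.setResponseCode("500");
            result.setResponseData(String.valueOf(e.getMessage()), "UTF-8");
        }
        return result;
    }
}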
See JavaTest and SleepTest example implementations for reference.
You may also find JSR223 Sampler with Groovy easier to use (Java syntax should work in the majority of cases)
I am using Gatling for the first time. I have functional tests written in Java/Cucumber. I want to run these functional tests from a Gatling Scala script to do performance testing of my application. Is there any way to do so?
The idea is to use the existing functional tests and wrap them in Gatling scripts so that they can be executed concurrently for multiple users.
What you want to do is to call a Java method from Scala.
Make sure that the method you want to call is available on the classpath Scala sees, then simply refer to it.
This blog post may help you.
Since you are using Gatling for the first time, have you considered other performance tools that provide such options? As an alternative to Gatling for your case (writing functional tests in Java and running them later with a load-testing tool), I would recommend checking out Locust.
Using Locust you can write the tests in Java or even Kotlin. You can find a handy tutorial at this link:
https://www.blazemeter.com/blog/locust-performance-testing-using-java-and-kotlin
Another option worth considering is the Taurus framework, which allows you to run JUnit/TestNG tests right away:
https://gettaurus.org/docs/JUnit/
https://www.blazemeter.com/blog/navigating-your-first-steps-using-taurus
Gatling is primarily for HTTP testing. What I would do is call Java code from within a Gatling test and have it return a value that I check. For example, below I return a boolean from Java code for a performance test (the same works for a functional test, which requires extending GatlingHttpFunSpec instead of the Simulation class). You will also need a dummy endpoint (like a health-check URL that always returns 200).
val myJavaTest: MyJavaTest = new MyJavaTest()
val baseURL = "http://localhost:8080"
val endpoint_headers = Map("header1" -> "val1")
val endpoint = "/myurl/healthcheck"

setUp(
  scenario("Scenario")
    .exec(
      http("run first test")
        .get(endpoint)
        .headers(endpoint_headers)
        .check(bodyString.transform(str => {
          myJavaTest.runTest1() // should return a boolean
        }).is(true)))
    .inject(atOnceUsers(1)))
  .protocols(http.baseURL(baseURL))
Currently I'm using pact-jvm-consumer/provider-junit_2.11 from the au.com.dius library. I got my consumer pact working and generating pacts, but the problem comes when I try to use these in my provider service.
The idea is to make all pacts an integral part of the JUnit tests, so everyone can run their unit tests locally without worrying about additional pact tests.
The main question is:
How should this be handled, assuming the service under test requires another service (an authorization one) and a DB as a data feeder? I'm not quite convinced that running these instances locally each time and then killing them does the trick. (I would like to perform the tests before even deploying to any environment.)
Should this be handled with some kind of 'hack switch' that always returns true, as an authorized user in 'some circumstances', and a mocked data feeder? Or should it be handled in some other way?
Secondly (the side question):
Once I have my pact ready, how should I test it against a consumer? So far I have things like the following, which works just fine, but I'm also not sure about it:
assertThat(result, instanceOf(DataStructure.class)); // as an example
The above is to make sure that the data I've received and pushed to my consumer is in the exact format I've been expecting. Is that OK, or is the correct approach to unpack all of these and check separately whether they are e.g. Maps or Strings?
Thanks in advance!
Here are some thoughts on stubbing service during verification:
https://github.com/pact-foundation/pact-ruby/wiki/FAQ#should-the-database-or-any-other-part-of-the-provider-be-stubbed
The pact authors' experience with using pacts to test microservices has been that using the set_up hooks to populate the database, and running pact:verify with all the real provider code has worked very well, and gives us full confidence that the end to end scenario will work in the deployed code.
However, if you have a large and complex provider, you might decide to stub some of your application code. You will definitely need to stub calls to downstream systems or to set up error scenarios. Make sure, if you stub, that you don't stub the code that actually parses the request and pulls the expected data out, because otherwise the consumer could be sending absolute rubbish, and the pact:verify won't fail because that code won't get executed. If the validation happens when you insert a record into the datasource, either don't stub anything, or rethink your validation code.
I personally would stub an authentication service (assuming you already have some other tests to show that you're invoking the authentication service correctly) but I generally use the real database, unless this complicates things such that using a mock is "cheaper" (in time, effort, maintainability).
In regards to your second question, I'm not exactly sure what you're talking about, but I think you're talking about making assertions about the properties of the object that has been unmarshalled from the mocked response (in the consumer tests). I would have one test that checked every property, to make sure that I was using the correct property names in my unmarshalling code. But as I said, I would only do this once (or however many times it was required to make sure I had checked every property name once). In the rest of the tests, I would just assert that the correct object class was returned.
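For example, with Hamcrest (DataStructure comes from your snippet; client.fetch() and the getters stand in for your actual consumer code and property names):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;
import org.junit.Test;

public class ConsumerContractTest {

    @Test
    public void unmarshalsEveryProperty() {
        DataStructure result = client.fetch(); // runs against the pact mock server
        assertThat(result.getId(), is("42"));
        assertThat(result.getName(), is("example"));
        // ... one assertion per property, once, to pin down the property names
    }

    @Test
    public void someOtherInteraction() {
        Object result = client.fetch();
        assertThat(result, instanceOf(DataStructure.class)); // class check is enough here
    }
}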
I want to count how many times I make an HTTP GET when I use websockets and when I do not. I expect once when using websockets and n times otherwise. I want to do this via JUnit, and I happen to be using Spring too. Are there any creative ways to count the times I make a GET with Jersey?
client.target(.....).get(....)
I don't know how to do this without cluttering my production code with test specific code.
If your code is defined using an interface, then I would use the Decorator pattern to add the additional behavior. In this case the additional behavior would be keeping track of the count of calls.
This approach is easy to configure if your concrete class is configured through Spring: in the Spring resources for the JUnit test, modify the configuration to inject the decorated class. There is no impact on existing production code.
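A minimal sketch of such a decorator, assuming the production code calls the service through an interface (all names here are hypothetical):

// ApiClient.java
import javax.ws.rs.core.Response;

public interface ApiClient {
    Response get(String path);
}

// CountingApiClient.java: counts GET calls without touching the real implementation.
import java.util.concurrent.atomic.AtomicInteger;
import javax.ws.rs.core.Response;

public class CountingApiClient implements ApiClient {
    private final ApiClient delegate;
    private final AtomicInteger count = new AtomicInteger();

    public CountingApiClient(ApiClient delegate) {
        this.delegate = delegate;
    }

    @Override
    public Response get(String path) {
        count.incrementAndGet(); // record the call
        return delegate.get(path);
    }

    public int getCount() {
        return count.get();
    }
}

The JUnit test then injects a CountingApiClient wrapping the real client and asserts on getCount() after exercising the code under test.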
If you add one static variable COUNT and increment it with every call, it will not hurt production at all. And you can use this variable not only for unit testing but also for production monitoring.
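A minimal sketch of this variant (the class and field names are made up; an AtomicLong avoids races under concurrent requests):

import java.util.concurrent.atomic.AtomicLong;

public final class GetMetrics {
    public static final AtomicLong GET_COUNT = new AtomicLong();
}

// At the production call site:
GetMetrics.GET_COUNT.incrementAndGet();
client.target(url).request().get();

// In the JUnit test:
long before = GetMetrics.GET_COUNT.get();
// ... exercise the code under test ...
assertEquals(1, GetMetrics.GET_COUNT.get() - before);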
I have a question regarding unit testing.
I am going to test a module which is an adapter to a web service. The purpose of the test is not to test the web service but the adapter.
One function that calls the service provider looks like this:
class MyAdapterClass {
    WebService webservice;

    MyAdapterClass(WebService webservice) {
        this.webservice = webservice;
    }

    void myBusinessLogic() {
        List<VeryComplicatedClass> result = webservice.getResult();
        // <business logic here>
    }
}
If I want to unit test the myBusinessLogic function, the normal way is to inject a mocked version of the webservice with the getResult() function set up to return some predefined value.
But here my question is: the real web service returns a list of very complicated classes, each with tens of properties, and the list could contain hundreds or even thousands of elements.
If I were to manually set up such a result using Mockito or something like that, it would be a huge amount of work.
What do people normally do in this scenario? What I do now is simply connect to the real web service and test against the real service. Is that a good thing to do?
Many thanks.
You could write code that calls the real web service and serializes the List<VeryComplicatedClass> to a file on disk, and then, in the setup for your mock, deserialize it and have mockwebservice.getResult() return that object. That will save you from manually constructing the object hierarchy.
Update: this is basically the approach which Gilbert has suggested in his comment as well.
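A minimal sketch of that record-and-replay idea, assuming VeryComplicatedClass implements Serializable (the file name and variable names are illustrative):

import java.io.*;
import java.util.*;
import org.mockito.Mockito;

// Recording step: run once against the real service and save the result.
List<VeryComplicatedClass> real = realWebservice.getResult();
try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("result.ser"))) {
    out.writeObject(new ArrayList<>(real));
}

// Test setup: replay the saved result through a Mockito mock.
List<VeryComplicatedClass> canned;
try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("result.ser"))) {
    canned = (List<VeryComplicatedClass>) in.readObject();
}
WebService mockWebservice = Mockito.mock(WebService.class);
Mockito.when(mockWebservice.getResult()).thenReturn(canned);
MyAdapterClass adapter = new MyAdapterClass(mockWebservice);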
But really, you don't want to set up a list of very complicated classes, each with tens of properties, where the list contains hundreds or even thousands of elements; you want to set up a mock or a stub that captures the minimum necessary to write assertions around your business logic. That way the test better communicates the details it actually cares about. More specifically, if the business logic calls 2 or 3 methods on VeryComplicatedClass, then you want the test to make explicit that those are the conditions required for the things the test asserts.
One thought I had reading the comments: introduce a new interface that wraps List<VeryComplicatedClass> and make myBusinessLogic use that instead.
Then it is easy (or at least easier) to stub or mock an implementation of your new interface rather than deal with a very complicated class that you have little control over.
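A sketch of that idea; the interface name and its methods are assumptions about what the business logic actually needs from the list:

import java.util.List;

// Exposes only what myBusinessLogic needs from the web service result.
public interface Results {
    int count();
    String nameAt(int index);
}

// Production implementation wrapping the real list:
public class WebServiceResults implements Results {
    private final List<VeryComplicatedClass> raw;

    public WebServiceResults(List<VeryComplicatedClass> raw) {
        this.raw = raw;
    }

    public int count() { return raw.size(); }
    public String nameAt(int i) { return raw.get(i).getName(); } // getName() is assumed
}

// In tests, a tiny stub replaces the whole complicated list:
class StubResults implements Results {
    public int count() { return 1; }
    public String nameAt(int i) { return "expected-name"; }
}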