Is there a load testing framework I could use where I can supply my own Java class and test the performance of that class? So basically the framework would spawn threads, record when those threads finished running, and then generate a report with the final results.
Apache JMeter is exactly the project you want. You can point it at a running process or have it spin up multiple threads each starting a process. It will monitor the throughput, error rate and anything else you are interested in and render it all in a set of charts.
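Under the hood, the mechanism described in the question is not hard to sketch by hand; the following is a minimal, illustrative harness (MyClassUnderTest and doWork() are hypothetical placeholders, not part of JMeter or any framework):
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class TinyLoadHarness {
    public static void main(String[] args) throws InterruptedException {
        int threads = 50;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        AtomicLong totalMillis = new AtomicLong();

        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                long start = System.nanoTime();
                try {
                    new MyClassUnderTest().doWork(); // hypothetical class under test
                } finally {
                    totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        System.out.println("avg per-thread time: " + (totalMillis.get() / threads) + " ms");
    }
}
JMeter does essentially this, plus ramp-up, error counting, and charting.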
Take a look at Metrics (http://metrics.codahale.com/). You can use it to instrument your app and get interesting reports after a test-suite run, or even have them published to a metrics server.
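A minimal sketch of what that instrumentation looks like with the Codahale/Dropwizard Metrics API (the timer name and loop are illustrative):
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class MetricsExample {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        Timer timer = registry.timer("myClass.doWork");

        for (int i = 0; i < 100; i++) {
            // Timer.Context is Closeable, so try-with-resources records the duration.
            try (Timer.Context ignored = timer.time()) {
                // call the code under test here
            }
        }
        // Prints call rates and latency percentiles to stdout.
        ConsoleReporter.forRegistry(registry).build().report();
    }
}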
Assuming you have a Java class and a test method like below:
import org.junit.Test;

public class AnyTestEndPoint {
    @Test
    public void anyTestMethod() throws Exception {
        // your code goes here for a single user
    }
}
Your above test can be fed to the load generator with the following config.
You can spawn virtual users from a simple properties config file like the one below.
# my_load_config.properties
#############################
number.of.threads=50
ramp.up.period.in.seconds=10
loop.count=1
In the above config, number.of.threads is the number of virtual users to ramp up concurrently.
Then your load test looks like below, pointing at the above test:
@LoadWith("my_load_config.properties")
@TestMapping(testClass = AnyTestEndPoint.class, testMethod = "anyTestMethod")
@RunWith(ZeroCodeLoadRunner.class)
public class LoadTest {
}
This can be achieved for both JUnit 4 and JUnit 5 load generation. See the running examples in the HelloWorld GitHub repo.
You could try JUnit or TestNG. I have used them in the past. Not sure if that is exactly what you are looking for.
I have set up an embedded Mongo via Flapdoodle (de.flapdoodle.embed).
There are quite a lot of Mongo operations, hence I would like to run all of them as a suite and set up Mongo just once in the test suite.
Now when I run the test cases via mvn install, it seems to run them individually.
Is there a way to run test cases only from the suite and not as a class?
baeldung.com describes the use of JUnit 5 Tags, which are very well suited for your case.
You can mark tests with two different tags:
@Test
@Tag("MyMongoTests")
public void testThatThisHappensWhenThatHappens() {
}
@Test
@Tag("MyTestsWithoutMongo")
public void testThatItDoesNotHappen() {
}
And execute either set in a suite, e.g.
@IncludeTags("MyMongoTests")
public class MyMongoTestSuite {
}
In your case, the tests could be categorized by whether Mongo is in the application context or not. So, theoretically, it might be possible to create a JUnit 5 Extension to add the tag. That would be the more complex solution though.
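For completeness, a runnable version of such a suite on the JUnit 4 JUnitPlatform runner could look like the sketch below (the annotations come from the junit-platform-runner and junit-platform-suite-api artifacts; the package name com.example.tests is a placeholder):
import org.junit.platform.runner.JUnitPlatform;
import org.junit.platform.suite.api.IncludeTags;
import org.junit.platform.suite.api.SelectPackages;
import org.junit.runner.RunWith;

// Discovers all tests in the given package and runs only the tagged ones.
@RunWith(JUnitPlatform.class)
@SelectPackages("com.example.tests")
@IncludeTags("MyMongoTests")
public class MyMongoTestSuite {
}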
I am running into trouble with JUnit 5 (5.0 or 5.1) and a custom extension.
We are using service loader to load all implementations which then modify how our extension is bootstrapped. These implementations can be loaded just once, so I was thinking of using ExtensionContext.Store and placing it there. Every subsequent test instance would then just load it from Store instead of via service loader.
Now, I am aware of the hierarchical context structure, and I know that there is some "root" context which you can get through ExtensionContext.getRoot(). But this "root" context (an instance of JupiterEngineExtensionContext) isn't really a root - there is a different one for every test instance.
Say you have FooTest and BarTest, then printing out getRoot() for each of them yields:
org.junit.jupiter.engine.descriptor.JupiterEngineExtensionContext#1f9e9475
org.junit.jupiter.engine.descriptor.JupiterEngineExtensionContext#6c3708b3
And hence trying to retrieve previously stored information from Store fails.
Is this limitation intended? It makes the boundary between ClassExtensionContext and JupiterEngineExtensionContext pretty blurry.
Is there another way to globally store some information via extension?
Here is a (very) simplified version of how I tried working with the store (basically cutting out all other information). I also added some System.out.println() calls to underline what I am seeing. Executing this extension on two test classes results in what I described above:
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ExtensionContext.Namespace;

public class MyExtension implements BeforeAllCallback {
    @Override
    public void beforeAll(ExtensionContext context) throws Exception {
        System.out.println(context.getRoot());
        if (context.getRoot().getStore(Namespace.create(MyExtension.class)).get("someIdentifier", String.class) == null) {
            context.getRoot().getStore(Namespace.create(MyExtension.class)).put("someIdentifier", "SomeFooString");
        } else {
            // this is never executed
            System.out.println("Found it, no need to store anything again!");
        }
    }
}
EDIT: Here is a minimal project on GH(link), run with mvn clean install, which displays the behaviour I see.
I just copied your MyExtension verbatim (i.e., with zero changes) and ran both FooTest and BarTest.
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(MyExtension.class)
class FooTest {
    @Test
    void test() {
    }
}
and
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(MyExtension.class)
class BarTest {
    @Test
    void test() {
    }
}
And the result is:
org.junit.jupiter.engine.descriptor.JupiterEngineExtensionContext#2280cdac
org.junit.jupiter.engine.descriptor.JupiterEngineExtensionContext#2280cdac
Found it, no need to store anything again!
Thus, getRoot() works as documented.
The only explanation for why you see two different roots is that you must be executing the tests in different processes.
Please keep in mind that the root ExtensionContext instance is bound to the current execution of your test suite.
So if you run FooTest and BarTest one after the other in an IDE, that will actually result in two "test suites" with different roots. The same is true if you configure your build tool to fork between test classes.
Whereas, if you execute both test classes together in a single "test suite" (e.g., by telling your IDE to run all tests in the same package or same source tree) you will then see that there is one root like in the output I provided above.
Note, however, that there was an issue with the junit-platform-surefire-provider prior to version 1.0.3, whereby the provider launched the JUnit Platform for each test class. This would give the appearance of forking even though Surefire did not actually start a new JVM process. For details, see https://github.com/junit-team/junit5/pull/1137.
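As an aside, the check-then-put in MyExtension can be collapsed into a single call with the store's getOrComputeIfAbsent method, which computes the value only once per root context; a minimal sketch:
import org.junit.jupiter.api.extension.BeforeAllCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ExtensionContext.Namespace;

public class MyExtension implements BeforeAllCallback {
    @Override
    public void beforeAll(ExtensionContext context) {
        // The lambda runs only the first time; later calls return the cached value.
        String value = context.getRoot()
                .getStore(Namespace.create(MyExtension.class))
                .getOrComputeIfAbsent("someIdentifier", key -> "SomeFooString", String.class);
        System.out.println(value);
    }
}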
For our use case, as an example, we need to run a JUnit test even if it is added multiple times within a test suite, without it being skipped.
Currently we notice that the JUnit test runner skips a test with the same name if it finds the test somewhere else within a test suite. Here is an example screenshot showing test "Case_A" within "Procedure_A" being skipped within a test suite.
Could this behaviour be overridden? If so, could someone point us in the right direction?
I did some research around this problem.
Simple setting - one test "TestCase_A" and one suite "TestProcedure_A" that runs TestCase_A twice:
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

public class TestCase_A {
    @Test
    public void test() throws Exception {
        System.out.println("Case_A RUN");
        Assert.assertTrue(true);
    }
}

@RunWith(Suite.class)
@Suite.SuiteClasses({ TestCase_A.class, TestCase_A.class })
@SuppressWarnings("all")
public class TestProcedure_A {
}
I ran the test suite using both Eclipse and Maven.
Finding: the sysout statement shows that TestCase_A actually runs twice!
Therefore, the Eclipse view is misleading. Tests are run multiple times - the tree also reflects this. However, the status of the individual invocations is not displayed properly in the Eclipse JUnit view.
I presume the view is based on junit.runner.TestRunListener. It is probably worth looking into that.
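A quick way to confirm this outside the IDE is to attach a JUnit 4 RunListener and count test completions; a small sketch:
import java.util.concurrent.atomic.AtomicInteger;
import org.junit.runner.Description;
import org.junit.runner.JUnitCore;
import org.junit.runner.notification.RunListener;

public class CountRuns {
    public static void main(String[] args) {
        AtomicInteger finished = new AtomicInteger();
        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener() {
            @Override
            public void testFinished(Description description) {
                finished.incrementAndGet();
            }
        });
        core.run(TestProcedure_A.class);
        // Prints 2: the duplicated test really does run twice.
        System.out.println("tests finished: " + finished.get());
    }
}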
When writing code that interacts with external resources (such as using a web service or other network operation), I often structure the classes so that it can also be "stubbed" using a file or some other input method. So then I end up using the stubbed implementation to test other parts of the system and then one or two tests that specifically test calling the web service.
The problem is I don't want to be calling these external services either from Jenkins or when I run all of the tests for my project (e.g. "gradle test"). Some of the services have side effects, or may not be accessible to all developers.
Right now I just uncomment and then re-comment the @Test annotation on these particular test methods to enable and disable them: enable it, run it manually to check it, then remember to comment it out again.
// Uncomment to test external service manually
//@Test
public void testSomethingExternal() {
Is there is a better way of doing this?
EDIT: For manual unit testing, I use Eclipse and am able to just right-click on the test method and do Run As -> JUnit test. But that doesn't work without the (uncommented) annotation.
I recommend using JUnit categories. See this blog for details: https://community.oracle.com/blogs/johnsmart/2010/04/25/grouping-tests-using-junit-categories-0.
Basically, you can annotate some tests as being in a special category and then set up two test suites: one that runs the tests in that category and one that ignores tests in that category but runs everything else (see the suite sketch after the examples below).
@Category(IntegrationTests.class)
public class AccountIntegrationTest {
    @Test
    public void thisTestWillTakeSomeTime() {
        ...
    }
    @Test
    public void thisTestWillTakeEvenLonger() {
        ...
    }
}
You can even annotate individual tests:
public class AccountTest {
    @Test
    @Category(IntegrationTests.class)
    public void thisTestWillTakeSomeTime() {
        ...
    }
}
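And here is a sketch of the two suites themselves, using JUnit 4's Categories runner (IntegrationTests is just a marker interface; the suite class names are illustrative):
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interface referenced by the @Category annotations above.
interface IntegrationTests {}

// Runs only the tests annotated with @Category(IntegrationTests.class).
@RunWith(Categories.class)
@IncludeCategory(IntegrationTests.class)
@SuiteClasses({ AccountIntegrationTest.class, AccountTest.class })
class IntegrationSuite {}

// Runs everything else.
@RunWith(Categories.class)
@ExcludeCategory(IntegrationTests.class)
@SuiteClasses({ AccountIntegrationTest.class, AccountTest.class })
class FastSuite {}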
Anytime I see something manually getting turned on or off I cringe.
As far as I can see you use Gradle, and the JUnit API says that the @Ignore annotation disables a test. I would add a Gradle task which adds @Ignore to those tests.
If you're just wanting to disable tests for functionality that hasn't been written yet or otherwise manually disable some tests temporarily, you can use #Ignore; the tests will be skipped but still noted in the report.
If you want something like Spring Profiles, where you can define rule sets for which tests get run when, you should either split your tests into separate test cases or use a Filter (see the sketch below).
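A minimal sketch of the Filter approach, assuming a naming convention (method names containing "External") that is purely illustrative:
import org.junit.runner.Description;
import org.junit.runner.JUnitCore;
import org.junit.runner.Request;
import org.junit.runner.manipulation.Filter;

public class SkipExternalTests {
    public static void main(String[] args) {
        Filter noExternal = new Filter() {
            @Override
            public boolean shouldRun(Description description) {
                // Suite/class descriptions have no method name; let them through.
                String name = description.getMethodName();
                return name == null || !name.contains("External");
            }

            @Override
            public String describe() {
                return "skip external-service tests";
            }
        };
        // Runs AccountTest with the external-service tests filtered out.
        new JUnitCore().run(Request.aClass(AccountTest.class).filterWith(noExternal));
    }
}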
You can use the @Ignore annotation to prevent them from running automatically during the test run. If required, you can still trigger such ignored tests manually.
@Test
public void wantedTest() {
    checkMyFunction(10);
}

@Ignore
@Test
public void unwantedTest() {
    checkMyFunction(11);
}
In the above example, unwantedTest will be excluded.
What is the best way run a lot of integration tests using JUnit?
I crudely discovered that the code below can run all the tests... but it has a massive flaw: the tearDown() method in each of those classes is not called until they have all been run.
import junit.framework.Test;
import junit.framework.TestSuite;
import junit.textui.TestRunner;

public class RunIntegrationTests extends TestSuite {
    public RunIntegrationTests() {
    }

    public static void main(String[] args) {
        TestRunner.run(testSuite());
    }

    public static Test testSuite() {
        TestSuite result = new TestSuite();
        result.addTest(new TestSuite(AgreementIntegrationTest.class));
        result.addTest(new TestSuite(InterestedPartyIntegrationTest.class));
        result.addTest(new TestSuite(WorkIntegrationTest.class));
        // further tests omitted for readability
        return result;
    }
}
The classes being run connect to the database, load an object, and display it in a JFrame. I overrode the setVisible method to enable testing. On our build machine, the Java VM runs out of memory when running the code above, as the objects it has to load from the database are pretty large. If the tearDown() method were called after each class finished, it would solve the memory problems.
Is there a better way to run them? I'm having to use JUnit 3.8.2 by the way - we're still on Java 1.4 :(
Not sure if this is the problem, but according to the JUnit Primer you should just add the tests directly, instead of wrapping them in a TestSuite:
result.addTest(new AgreementIntegrationTest());
result.addTest(new InterestedPartyIntegrationTest());
result.addTest(new WorkIntegrationTest());
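If the underlying goal is to free the heavy objects as soon as each class finishes, another JUnit 3.8 idiom worth trying is the junit.extensions.TestSetup decorator, which adds a per-class hook; a minimal sketch (the body of tearDown is a placeholder):
import junit.extensions.TestSetup;
import junit.framework.Test;
import junit.framework.TestSuite;

public class RunIntegrationTests {
    public static Test testSuite() {
        TestSuite result = new TestSuite();
        // The decorator's tearDown runs as soon as the wrapped class's tests
        // finish, so each class can release its large objects before the next.
        result.addTest(new TestSetup(new TestSuite(AgreementIntegrationTest.class)) {
            protected void tearDown() throws Exception {
                // release cached database objects / dispose frames here
            }
        });
        return result;
    }
}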
That's very strange. setUp and tearDown should bookend the running of each test method, regardless of how the methods are bundled up into suites.
I typically do it slightly differently.
TestSuite suite = new TestSuite("Suite for ...");
suite.addTestSuite(JUnit_A.class);
suite.addTestSuite(JUnit_B.class);
And I just verified that tearDown was indeed being called the correct number of times. But your method should work just as well.
Are you sure tearDown is properly specified -- e.g. it's not "teardown"? When you run one test class on its own, is tearDown properly called?
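For reference, this is what a correctly wired JUnit 3 test class looks like; the method names and signatures must match exactly, since JUnit 3 finds them reflectively (the class name is taken from the question):
import junit.framework.TestCase;

public class AgreementIntegrationTest extends TestCase {
    // A method named "teardown", or one that is private or takes arguments,
    // would be silently ignored by the JUnit 3 runner.
    protected void setUp() throws Exception {
        super.setUp();
    }

    protected void tearDown() throws Exception {
        super.tearDown();
    }
}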