The current project I'm working on requires me to write a tool which runs functional tests on a web application, and outputs method coverage data, recording which test case traversed which method.
Details:
The web application under test will be a Java EE application running in a servlet container (e.g. Tomcat). The functional tests will be written in Selenium using JUnit. Some methods will be annotated so that they are instrumented prior to deployment into the test environment. Once the Selenium tests are executed, the execution of annotated methods will be recorded.
Problem: The big obstacle of this project is finding a way to relate an execution of a test case with the traversal of a method, especially since the tests and the application run on different JVMs: there's no way to transmit the name of the test case down to the application, and no way to use thread information to relate a test with code execution.
Proposed solution: My solution would use the time of execution: I extend the JUnit framework to record the time each test case was executed, and I instrument the application so that it saves the time each annotated method was traversed. I then use correlation on the timestamps to link each test case with the methods it covered.
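To illustrate the JUnit side, a sketch of what the extension could look like is a TestWatcher rule that logs a start and end timestamp for every test (this is only a sketch; the class name and log format are made up):
import org.junit.Rule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public abstract class TimestampedTest {

    // Logs a start/end timestamp around every test method, so the
    // timeline can later be correlated with the server-side method log.
    @Rule
    public final TestWatcher timestamps = new TestWatcher() {
        @Override
        protected void starting(Description d) {
            System.out.printf("%d START %s%n", System.currentTimeMillis(), d.getMethodName());
        }

        @Override
        protected void finished(Description d) {
            System.out.printf("%d END %s%n", System.currentTimeMillis(), d.getMethodName());
        }
    };
}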
Expected problems: This solution assumes that test cases are executed sequentially, and that a test case ends before the next one starts. Is this assumption reasonable with JUnit?
Question: Simply, can I have your input on the proposed solution, and perhaps suggestions on how to improve it and make it more robust and functional on most Java EE applications? Or pointers to already-implemented solutions?
Thank you
Edit: To add more requirements, the tool should be able to work on any Java EE application and require the least amount of configuration or change in the application. While I know it isn't a realistic requirement, the tool should at least not require any huge modification of the application itself, like adding classes or lines of code.
Have you looked at existing coverage tools (Cobertura, Clover, Emma, ...)? I'm not sure whether one of them is able to link the coverage data to test cases, but at least with Cobertura, which is open source, you might be able to do the following:
instrument the classes with Cobertura
deploy the instrumented web app
start a test suite
after each test, invoke a URL on the web app which saves the coverage data to a file named after the test which has just been run, and resets the coverage data
after the test suite, generate a Cobertura report for every saved file. Each report will tell which code has been run by the test
If you need a merged report, I guess it shouldn't be too hard to generate one from the set of saved files, using the Cobertura API.
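For what it's worth, the JUnit side of the "invoke a URL after each test" step could be as small as this sketch; the /coverage-flush endpoint is a hypothetical servlet you would add to the instrumented deployment to save and reset Cobertura's data:
import java.io.InputStream;
import java.net.URL;

import org.junit.After;
import org.junit.Rule;
import org.junit.rules.TestName;

public abstract class CoverageFlushingTest {

    @Rule
    public final TestName testName = new TestName();

    // After each test, ask the web app to dump its coverage data to a
    // file named after the test and reset the counters. The
    // /coverage-flush servlet is hypothetical and would be part of the
    // instrumented deployment.
    @After
    public void flushCoverage() throws Exception {
        URL url = new URL("http://localhost:8080/app/coverage-flush?test="
                + testName.getMethodName());
        try (InputStream in = url.openStream()) {
            while (in.read() != -1) {
                // drain the response
            }
        }
    }
}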
Your proposed solution seems like a reasonable one, except for the idea of relating the test and the request by timing. I've tried to do this sort of thing before, and it works. Most of the time. Unless you write your JUnit code very carefully, you'll have lots of issues, because of clock differences between the two machines, or, if you've only got one machine, just from matching one timestamp against another.
A better solution would be to implement a Tomcat Valve, which you can insert into the request lifecycle via the server.xml for your webapp. Valves have the advantage that you define them in server.xml, so you're not touching the webapp at all.
You will need to implement invoke(). The best place to start is probably with AccessLogValve. This is the implementation in AccessLogValve:
/**
 * Log a message summarizing the specified request and response, according
 * to the format specified by the <code>pattern</code> property.
 *
 * @param request Request being processed
 * @param response Response being processed
 *
 * @exception IOException if an input/output error has occurred
 * @exception ServletException if a servlet error has occurred
 */
public void invoke(Request request, Response response) throws IOException,
        ServletException {

    if (started && getEnabled()) {
        // Pass this request on to the next valve in our pipeline
        long t1 = System.currentTimeMillis();
        getNext().invoke(request, response);
        long t2 = System.currentTimeMillis();
        long time = t2 - t1;
        if (logElements == null || condition != null
                && null != request.getRequest().getAttribute(condition)) {
            return;
        }
        Date date = getDate();
        StringBuffer result = new StringBuffer(128);
        for (int i = 0; i < logElements.length; i++) {
            logElements[i].addElement(result, date, request, response, time);
        }
        log(result.toString());
    } else {
        getNext().invoke(request, response);
    }
}
All this does is log the fact that you've accessed it.
You would implement a new Valve. For your requests, you pass a unique id as a URL parameter, which is used to identify the test that you're running. Your valve would do all of the heavy lifting before and after the invoke(). You could strip the unique parameter before getNext().invoke() if needed.
To measure the coverage, you could use a coverage tool as suggested by JB Nizet, based on the unique id that you're passing over.
So, from JUnit, if your original call was:
@Test
public void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14");
}
You would change this to be:
@Test
public void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14&testId=testSomething");
}
Then you'd pick up the parameter testId in your valve.
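A stripped-down sketch of such a valve might look like this (the class name and logging are placeholders; the valve is declared in server.xml, not in the webapp):
import java.io.IOException;

import javax.servlet.ServletException;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

// Declared in server.xml, e.g. <Valve className="com.example.TestIdValve"/>,
// so the webapp itself stays untouched.
public class TestIdValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        // Pick up the test id that the JUnit code appended to the URL
        String testId = request.getParameter("testId");
        if (testId != null) {
            // Placeholder: record that this request belongs to testId,
            // e.g. mark the coverage data before and after the call
            System.out.println("Request belongs to test: " + testId);
        }
        getNext().invoke(request, response);
    }
}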
Related
I have a Java app built with Maven.
I use JUnit for tests, with the Failsafe and Surefire plugins.
I have more than 2000 integration tests.
To speed up the test run, I use Failsafe's JVM fork to run my tests in parallel.
I have some heavy test classes, and they typically run at the end of my test execution, which slows down my CI verify process.
Failsafe's runOrder=balanced would be a good option for me, but I can't use it because of the JVM fork.
Renaming the test classes, or moving them to another package and running them alphabetically, is not an option.
Any suggestion how I can run my slow test classes at the beginning of the verify process?
In JUnit 5 (from version 5.8.0 onwards) test classes can be ordered too.
src/test/resources/junit-platform.properties:
# ClassOrderer$OrderAnnotation sorts classes based on their @Order annotation
junit.jupiter.testclass.order.default=org.junit.jupiter.api.ClassOrderer$OrderAnnotation
Other JUnit built-in class orderer implementations:
org.junit.jupiter.api.ClassOrderer$ClassName
org.junit.jupiter.api.ClassOrderer$DisplayName
org.junit.jupiter.api.ClassOrderer$Random
For other ways (besides the junit-platform.properties file) to set configuration parameters, see the JUnit 5 user guide.
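With the OrderAnnotation orderer configured as above, slow classes can then be pulled to the front by annotating them; for example (class names here are made up):
import org.junit.jupiter.api.Order;

// Lower values run first, so the heavy classes get the smallest values
@Order(1)
class HeavyIntegrationTests { /* ... */ }

@Order(2)
class FastTests { /* ... */ }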
You can also provide your own orderer. It must implement ClassOrderer interface:
package foo;

import java.util.Collections;

import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;

public class MyOrderer implements ClassOrderer {

    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.shuffle(context.getClassDescriptors());
    }
}
junit.jupiter.testclass.order.default=foo.MyOrderer
Note that @Nested test classes cannot be ordered by a ClassOrderer.
Refer to the JUnit 5 documentation and the ClassOrderer API docs to learn more about this.
I gave the combination of answers I found a try:
Running JUnit4 Test classes in specified order
Running JUnit Test in parallel on Suite Level
The second answer is based on these classes of this github project, which is available under the BSD-2 license.
I defined a few test classes:
public class LongRunningTest {

    @Test
    public void test() {
        System.out.println(Thread.currentThread().getName() + ":\tlong test - started");
        long time = System.currentTimeMillis();
        do {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
            }
        } while (System.currentTimeMillis() - time < 1000);
        System.out.println(Thread.currentThread().getName() + ":\tlong test - done");
    }
}
@Concurrent
public class FastRunningTest1 {

    @Test
    public void test1() {
        try {
            Thread.sleep(250);
        } catch (InterruptedException e) {
        }
        System.out.println(Thread.currentThread().getName() + ":\tfrt1-test1 - done");
    }

    // +7 more repetitions of the same method
}
Then I defined the test suites:
(FastRunningTest2 is a copy of the first class with adjusted output)
@SuiteClasses({LongRunningTest.class, LongRunningTest.class})
@RunWith(Suite.class)
public class SuiteOne {}

@SuiteClasses({FastRunningTest1.class, FastRunningTest2.class})
@RunWith(Suite.class)
public class SuiteTwo {}

@SuiteClasses({SuiteOne.class, SuiteTwo.class})
@RunWith(ConcurrentSuite.class)
public class TopLevelSuite {}
When I execute the TopLevelSuite I get the following output:
TopLevelSuite-1-thread-1: long test - started
FastRunningTest1-1-thread-4: frt1-test4 - done
FastRunningTest1-1-thread-2: frt1-test2 - done
FastRunningTest1-1-thread-1: frt1-test1 - done
FastRunningTest1-1-thread-3: frt1-test3 - done
FastRunningTest1-1-thread-5: frt1-test5 - done
FastRunningTest1-1-thread-3: frt1-test6 - done
FastRunningTest1-1-thread-1: frt1-test8 - done
FastRunningTest1-1-thread-5: frt1-test7 - done
FastRunningTest2-2-thread-1: frt2-test1 - done
FastRunningTest2-2-thread-2: frt2-test2 - done
FastRunningTest2-2-thread-5: frt2-test5 - done
FastRunningTest2-2-thread-3: frt2-test3 - done
FastRunningTest2-2-thread-4: frt2-test4 - done
TopLevelSuite-1-thread-1: long test - done
TopLevelSuite-1-thread-1: long test - started
FastRunningTest2-2-thread-5: frt2-test8 - done
FastRunningTest2-2-thread-2: frt2-test6 - done
FastRunningTest2-2-thread-1: frt2-test7 - done
TopLevelSuite-1-thread-1: long test - done
This basically shows that the LongRunningTest is executed in parallel to the FastRunningTests. The default number of threads used for parallel execution, defined by the @Concurrent annotation, is 5, which can be seen in the output of the parallel execution of the FastRunningTests.
The downside is that these threads are not shared between FastRunningTest1 and FastRunningTest2.
This behaviour shows that it is "somewhat" possible to do what you want to do (whether it works with your current setup is a different question).
Also, I am not sure whether this is actually worth the effort,
as you need to prepare those test suites manually (or write something that autogenerates them),
and you need to add the @Concurrent annotation to all those classes (maybe with a different number of threads for each class).
As this basically shows that it is possible to define the execution order of classes and trigger their parallel execution, it should also be possible to get the whole process to use only one thread pool (but I am not sure what the implications of that would be).
As the whole concept is based on a ThreadPoolExecutor, using a PriorityBlockingQueue that gives long-running tasks a higher priority would get you closer to your ideal outcome of executing the long-running tests first.
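A rough sketch of that idea (class names and the way the expected runtime is obtained are made up here):
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A task that knows its expected cost, so the queue can order by it
class WeightedTask implements Runnable, Comparable<WeightedTask> {

    final long expectedMillis;
    final Runnable body;

    WeightedTask(long expectedMillis, Runnable body) {
        this.expectedMillis = expectedMillis;
        this.body = body;
    }

    @Override
    public void run() {
        body.run();
    }

    // Longer expected runtime = higher priority = dequeued earlier
    @Override
    public int compareTo(WeightedTask other) {
        return Long.compare(other.expectedMillis, this.expectedMillis);
    }
}

class PrioritizedTestRun {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 5, 0L, TimeUnit.MILLISECONDS,
                new PriorityBlockingQueue<>()); // orders queued tasks by compareTo

        // Note: use execute(), not submit() -- submit() wraps tasks in a
        // non-comparable FutureTask, which would break the priority queue.
        pool.execute(new WeightedTask(60_000, () -> { /* run slow test class */ }));
        pool.execute(new WeightedTask(500, () -> { /* run fast test class */ }));
        pool.shutdown();
    }
}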
I experimented around a bit more and implemented my own custom suite runner and JUnit runner. The idea is to have your JUnit runner submit the tests into a queue which is handled by a single ThreadPoolExecutor. Because I didn't implement a blocking operation in the RunnerScheduler#finish method, I ended up with a solution where the tests from all classes were passed to the queue before the execution even started. (That might look different if there are more test classes and methods involved.)
At least it proves the point that you can mess with JUnit at this level if you really want to.
The code of my PoC is a bit messy and too lengthy to put here, but if someone is interested I can push it to a GitHub project.
In our project we created a few marker interfaces, for example:
public interface SlowTestsCategory {}
and put them into JUnit's @Category annotation on the test classes containing slow tests:
@Category(SlowTestsCategory.class)
After that, we created some special Gradle tasks to run tests by category, or a few categories in a custom order:
task unitTest(type: Test) {
description = 'description.'
group = 'groupName'
useJUnit {
includeCategories 'package.SlowTestsCategory'
excludeCategories 'package.ExcludedCategory'
}
}
This solution relies on Gradle, but maybe it'll be helpful for you.
Let me summarize everything before I will provide a recommendation.
Integration tests are slow. This is fine and it's natural.
A CI build doesn't run tests that assume deployment of a system, since there is no deployment in CI; we care about deployment in the CD process.
So I assume your integration tests don't require deployment.
CI build runs unit tests first. Unit tests are extremely fast because they use only RAM.
We have good and quick feedback from unit tests.
At this moment we are sure we don't have a problem with getting a quick feedback. But we still want to run integration tests faster.
I would recommend the following solutions:
Improve the actual tests. Quite often they are not effective and can be sped up significantly.
Run integration tests in the background (i.e. don't wait for real-time feedback from them).
It's natural for them to be much slower than unit tests.
Split integration tests into groups and run them separately if you need faster feedback from some of them.
Run integration tests in different JVMs. Not different threads within the same JVM!
In this case you don't have to care about thread safety, and you shouldn't need to.
Run integration tests on different machines and so on.
I have worked on many different projects (some of them had CI builds running for 48 hours), and the first 3 steps were enough, even for crazy cases. Step #4 is rarely needed if the tests are good. Step #5 is for very specific situations.
You see that my recommendation relates to the process and not to the tool, because the problem is in the process.
Quite often people ignore the root cause and try to tune the tool (Maven in this case). They get cosmetic improvements, but with a high maintenance cost for the created solution.
There is a solution for that from version 5.8.0-M1 of JUnit.
Basically you need to create your own orderer. I did something like that.
Here is an annotation which you will use inside your test classes:
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
public @interface TestClassesOrder {
    public int value() default Integer.MAX_VALUE;
}
Then you need to create a class which implements org.junit.jupiter.api.ClassOrderer:
import java.util.Collections;
import java.util.Comparator;

import org.junit.jupiter.api.ClassDescriptor;
import org.junit.jupiter.api.ClassOrderer;
import org.junit.jupiter.api.ClassOrdererContext;

public class AnnotationTestsOrderer implements ClassOrderer {

    @Override
    public void orderClasses(ClassOrdererContext context) {
        Collections.sort(context.getClassDescriptors(), new Comparator<ClassDescriptor>() {
            @Override
            public int compare(ClassDescriptor o1, ClassDescriptor o2) {
                TestClassesOrder a1 = o1.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                TestClassesOrder a2 = o2.getTestClass().getDeclaredAnnotation(TestClassesOrder.class);
                // Classes without the annotation sort last
                if (a1 == null) {
                    return 1;
                }
                if (a2 == null) {
                    return -1;
                }
                return Integer.compare(a1.value(), a2.value());
            }
        });
    }
}
To get it working you need to tell JUnit which class to use for ordering the descriptors. Create the file "junit-platform.properties" in your resources folder; it just needs one line naming your orderer class:
junit.jupiter.testclass.order.default=org.example.tests.AnnotationTestsOrderer
Now you can use your annotation like the @Order annotation, but at class level:
@TestClassesOrder(1)
class Tests {...}

@TestClassesOrder(2)
class MainTests {...}

@TestClassesOrder(3)
class EndToEndTests {...}
I hope that this will help someone.
You can use annotations in JUnit 5 to set the test order you wish to use:
From JUnit 5's user guide:
https://junit.org/junit5/docs/current/user-guide/#writing-tests-test-execution-order
import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(OrderAnnotation.class)
class OrderedTestsDemo {

    @Test
    @Order(1)
    void nullValues() {
        // perform assertions against null values
    }

    @Test
    @Order(2)
    void emptyValues() {
        // perform assertions against empty values
    }

    @Test
    @Order(3)
    void validValues() {
        // perform assertions against valid values
    }
}
Upgrading to JUnit 5 can be done fairly easily, and the documentation linked at the beginning of this post contains all the information you might need.
I have this method that I am using in a NetBeans plugin:
public static SourceCodeFile getCurrentlyOpenedFile() {
    MainProjectManager mainProjectManager = new MainProjectManager();
    Project openedProject = mainProjectManager.getMainProject();

    /* Get the Java file currently displayed in the IDE if there is an opened project */
    if (openedProject != null) {
        TopComponent activeTC = TopComponent.getRegistry().getActivated();
        DataObject dataLookup = activeTC.getLookup().lookup(DataObject.class);
        File file = FileUtil.toFile(dataLookup.getPrimaryFile()); // currently opened file

        // Check if the opened file is a Java file
        if (FilenameUtils.getExtension(file.getAbsoluteFile().getAbsolutePath()).equalsIgnoreCase("java")) {
            return new SourceCodeFile(file);
        } else {
            return null;
        }
    } else {
        return null;
    }
}
Basically, using NetBeans API, it detects the file currently opened by the user in the IDE. Then, it loads it and creates a SourceCodeFile object out of it.
Now I want to unit test this method using JUnit. The problem is that I don't know how to test it.
Since it doesn't receive any arguments, I can't test how it behaves given wrong arguments. I also thought about trying to manipulate openedProject in order to test the method's behaviour given different values for that object, but as far as I know, I can't manipulate a variable that way in JUnit. I also cannot check what the method returns, because in the unit test it will always return null, since it doesn't detect any opened file in NetBeans.
So, my question is: how can I approach the unit testing of this method?
Well, your method does take parameters, "between the lines":
MainProjectManager mainProjectManager = new MainProjectManager();
Project openedProject = mainProjectManager.getMainProject();
basically fetches the object to work on.
So the first step would be to change that method signature, to:
public static SourceCodeFile getCurrentlyOpenedFile(Project project) {
...
Of course, that object isn't used, except for that null check. So the next level would be to have a distinct method like:
SourceCodeFile lookup(DataObject dataLookup) {
In other words: your real problem is that you wrote hard-to-test code. The "default" answer is: you have to change your production code to make it easier to test.
For example by ripping it apart, and putting all the different aspects into smaller helper methods.
You see, that last method, lookup(), takes a parameter, and now it becomes (somehow) possible to think up test cases for it. Probably you will have to use a mocking framework such as Mockito to pass mocked instances of that DataObject class within your test code.
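For illustration, once the Project is a parameter (as suggested above), even the null-project branch becomes trivially testable without any IDE running; OpenedFileResolver is a stand-in name for whatever class hosts the refactored method:
import static org.junit.Assert.assertNull;

import org.junit.Test;

public class OpenedFileResolverTest {

    // No NetBeans runtime needed: with no opened project, the method
    // returns null (OpenedFileResolver is a placeholder name for the
    // class hosting the refactored getCurrentlyOpenedFile(Project)).
    @Test
    public void returnsNullWhenNoProjectIsOpen() {
        assertNull(OpenedFileResolver.getCurrentlyOpenedFile(null));
    }
}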
Long story short: there are no detours here. You can't test your code (in reasonable ways) as it is currently structured. Re-structure your production code, then all your ideas about "when I pass X, then Y should happen" can work out.
Disclaimer: yes, theoretically, you could test the above code by heavily relying on frameworks like PowerMock(ito) or JMockit. These frameworks allow you to control (mock) calls to static methods, or to new(). So they would give you full control over everything in your method. But that would basically force your tests to know everything that is going on in the method under test, which is a really bad thing.
I have a test case I'm trying to finish. It should try to find location ABC, but that doesn't actually exist in the DB. Essentially, it should not load the data I'm trying to find. I've tried a bunch of things, and haven't figured it out yet. Here is my code:
@Test
public void testFindByInvalidLocABC() {
    System.out.println("findByInvalidLocABC");
    Storage result = StorageFacadeTest.facade.findByLoc("ABC");
    assertNotNull(result);
    assertEquals("NOK-0000001402", result.getId());
    assertEquals("ABC", result.getLoc());
}
Any suggestions are greatly appreciated!
I have a test case I'm trying to finish. It should try to find
location ABC, but that doesn't actually exist in the DB
To ensure that data is present or not present during test execution, you cannot rely on an applicative or shared database.
Automated tests have to be repeatable. Otherwise, they will be reliable today but useless and error-prone tomorrow.
So I encourage you to clean/populate a test database (or schema) before the tests are executed.
Besides, as others commented, your test doesn't look like a "not found" scenario: you assert the retrieved Storage content, which makes no sense for a location that shouldn't exist.
It should rather look like :
@Test
public void findByLoc_with_invalidLoc_returns_null() {
    Storage result = StorageFacadeTest.facade.findByLoc("ABC");
    assertNull(result);
}
Some improvements for your unit test:
1) Returning Optional instead of null from the method under test is probably better, but you don't use it in your actual implementation, so I kept null in my example (see the sketch after this list).
2) System.out is really not advised in test code.
3) The test prefix in the test method name is not advised either; it is a legacy convention from before Java annotations existed.
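For completeness, if you later switch the facade to return Optional (improvement 1), the test would read along these lines; the Optional-returning findByLoc signature is an assumption, not your current code:
import static org.junit.Assert.assertFalse;

import java.util.Optional;

import org.junit.Test;

public class StorageFacadeNotFoundTest {

    // Assumes findByLoc is changed to return Optional<Storage>
    @Test
    public void findByLoc_with_invalidLoc_returns_empty() {
        Optional<Storage> result = StorageFacadeTest.facade.findByLoc("ABC");
        assertFalse(result.isPresent());
    }
}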
I have a scenario where I need to run my Selenium tests in parallel using the same data provider. From what I have read it is possible, but I could not get it to work. I have a hub and a node running on one machine, and another node running on another machine.
My DataProvider
// Data provider for Storage Rule Suite
@DataProvider(name = "StorageRuleDataProvider", parallel = true)
public static Object[][] getStorageData(Method m) {
    return TestUtil.getData(m.getName(), TestBase.storageSuite);
}
My Test
@Test(groups = { "CreateNewStorageRule" }, dependsOnGroups = { "StoragePage" },
        dataProviderClass = TestDataProvider.class, dataProvider = "StorageRuleDataProvider",
        threadPoolSize = 20)
public void createNewStorageRuleTest(Hashtable<String, String> data) {}
XML
<suite name="Storage Rule Suite" parallel="tests" data-provider-thread-count="20" >
When I run the test from the XML file, I have two sets of browsers opening on each node, but when it attempts to log in, sometimes it enters the credentials twice in one browser and nothing in the other, and sometimes nothing gets entered in one browser at all.
What you describe is a classic example of a non-thread-safe Selenium test automation framework. In most cases you solve this by having one driver instance per test class, and running all tests from that class in a single thread.
However, if you want to run the contents of a single test class in multiple parallel threads, you need to redesign the is-a and has-a relationships in your framework. Here is a detailed example of how this can be done:
http://automatictester.co.uk/2015/04/11/parallel-execution-on-method-level-in-selenium-testng-framework
This may add extra work and additional complexity to your test automation, though. I'd think twice about why you want to run Selenium test methods using a data provider in parallel, and try to answer the question of whether you really need to do that.
In my experience, if you start combining data providers with Selenium, you may have a problem with the overall test approach. Perhaps you are trying to automate too much at the UI level, instead of pushing the tests down the stack, e.g. to the API level.
First, you have to use parallel="methods" to run your @Test methods in parallel. Second, I had a similar problem where multiple test methods were executed in the same browser, and I solved it by making my WebDriver thread-safe, as sketched below.
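A common way to do that is one driver instance per thread via a ThreadLocal, roughly like this sketch (the hub URL and browser choice are assumptions; adapt them to your own grid setup):
import java.net.MalformedURLException;
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public final class DriverFactory {

    // One WebDriver instance per thread, so parallel data-provider
    // invocations don't type into each other's browser.
    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(DriverFactory::createDriver);

    private static WebDriver createDriver() {
        try {
            // Hub URL is an assumption; point it at your own grid
            return new RemoteWebDriver(
                    new URL("http://localhost:4444/wd/hub"),
                    DesiredCapabilities.firefox());
        } catch (MalformedURLException e) {
            throw new IllegalStateException(e);
        }
    }

    public static WebDriver getDriver() {
        return DRIVER.get();
    }

    public static void quitDriver() {
        DRIVER.get().quit();
        DRIVER.remove();
    }
}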
I have Struts 1 action and want to test it in isolation.
What this action does is as follows:
load data using parameters from request
build xml-based representation of this data
send this response directly to client
I use jMock for testing but have one doubt here.
My first test is
@Test
public void shouldActionInvocationPrintValidResponse() {
    ProcessingAction action = new ProcessingAction();

    DBService service = mock(DBService.class);
    List records = new ArrayList();
    when(service.loadData()).thenReturn(records);

    ResponseBuilder builder = mock(ResponseBuilder.class);
    when(builder.buildResponse(records)).thenReturn("fake response");

    action.execute(null, null, null, null);

    assertEquals("fake response", writer.getContentWritten());
}
And my production code ended up like this:
public String execute(...) {
    List recordsList = service.loadData();
    String responseBody = responseBuilder.buildResponse(recordsList);
    // 'response' here is the HttpServletResponse parameter of execute()
    response.getWriter().print(responseBody);
}
My doubt here is whether such a test isn't too big. I check the whole successful flow here. Shouldn't there be separate tests checking every single dependency call, each in its own test?
I wonder because I had trouble naming this test. My ideas at the beginning were something like:
shouldFetchDataThenFormatThemAndSendResponse
As this is all the test does, the name suggests it probably does too much (note the "and" in the test name, for example).
And should I write the whole test at once, or just add the dependency calls step by step?
EDIT:
Detailed code for test and action provided.
I think you are on the right track. shouldFetchDataThenFormatThemAndSendResponse: this says it all. In your test naming you are talking about implementation details. This is how your first test should have been written:
ProcessingAction action = new ProcessingAction();
Response response = action.execute();
assertTrue(response.isValid());
Try: shouldGetResponseWhenActionExecuted.
Now you can look at how to get a response when executing an action.
I would bet you dollars to donuts that you didn't TDD this.
Remember: Intent over Implementation! Stop showing your crusty underwear.
It is hard to answer your question without seeing the code; however, I will give it a stab. For the test to be a unit test, it should not exercise code other than the code in the class under test. If you have mocked every other class that the action calls, and what you are verifying is only done within the class under test, then no, the test is not too big. I have written unit tests with a large number of verification statements, because all those things happen in the class under test due to a single invocation of the method.
My unit test rules are:
1. Exercise code only in the class under test
2. Only enter the method under test once per test method
I agree with John B.
Also, if you use a mock test runner (e.g. Mockito's) and write the test correctly, you may not need an assertion at all; verifying the interactions can be enough.
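For example, with Mockito's JUnit runner the verified interaction is the check, so no assertEquals is needed. This is a sketch based on the action code above; it assumes the action's collaborators can be injected (e.g. via constructor or setters):
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.List;

import javax.servlet.http.HttpServletResponse;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class ProcessingActionTest {

    @Mock
    private DBService service;

    @Mock
    private ResponseBuilder responseBuilder;

    @Mock
    private HttpServletResponse httpResponse;

    // Assumes ProcessingAction lets Mockito inject its collaborators
    @InjectMocks
    private ProcessingAction action;

    // The verify() call is the test: no assertEquals needed when the
    // interesting behaviour is an interaction with a collaborator.
    @Test
    public void buildsResponseFromLoadedRecords() throws Exception {
        List records = new ArrayList();
        when(service.loadData()).thenReturn(records);
        when(responseBuilder.buildResponse(records)).thenReturn("fake response");
        when(httpResponse.getWriter()).thenReturn(new PrintWriter(new StringWriter()));

        action.execute(null, null, null, httpResponse);

        verify(responseBuilder).buildResponse(records);
    }
}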