I am trying to test a given Java application, and for that purpose I want to use JUnit.
The problem I am facing is the following: once the code I am trying to test finishes its work, it calls System.exit(), which closes the application. This also stops my tests from completing, as it closes the JVM.
Is there any way around this problem without modifying the original code? Initially I tried launching the application I'm testing from a new thread, although that obviously didn't make much difference, since System.exit() takes down the whole JVM rather than just one thread.
You can use System Rules: "A collection of JUnit rules for testing code which uses java.lang.System."
Among its rules you have ExpectedSystemExit; below is an example of how to use it. I believe it is a very clean solution.
import org.junit.Rule;
import org.junit.Test;
import org.junit.contrib.java.lang.system.Assertion;
import org.junit.contrib.java.lang.system.ExpectedSystemExit;

public class SystemExitTest {

    @Rule
    public final ExpectedSystemExit exit = ExpectedSystemExit.none();

    @Test
    public void noSystemExit() {
        // passes
    }

    @Test
    public void executeSomeCodeAFTERsystemExit() {
        System.out.println("This is executed before everything.");
        exit.expectSystemExit();
        exit.checkAssertionAfterwards(new Assertion() {
            @Override
            public void checkAssertion() throws Exception {
                System.out.println("This is executed AFTER System.exit()"
                        + " and, if it exists, the @org.junit.After annotated method!");
            }
        });
        System.out.println("This is executed right before System.exit().");
        System.exit(0);
        System.out.println("This is NEVER executed.");
    }

    @Test
    public void systemExitWithArbitraryStatusCode() {
        exit.expectSystemExit();
        System.exit(0);
    }

    @Test
    public void systemExitWithSelectedStatusCode0() {
        exit.expectSystemExitWithStatus(0);
        System.exit(0);
    }

    @Test
    public void failSystemExit() {
        exit.expectSystemExit();
        //System.exit(0);
    }
}
If you use Maven, you can add this to your pom.xml:
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
</dependency>
<dependency>
    <groupId>com.github.stefanbirkner</groupId>
    <artifactId>system-rules</artifactId>
    <version>1.3.0</version>
</dependency>
System.exit(status) actually delegates the call to the Runtime class. Before proceeding with this shutdown request, Runtime invokes checkExit(status) on the JVM's current SecurityManager, which can prevent the impending shutdown by throwing a SecurityException.
Usually the SecurityManager needs to establish whether the current thread has the shutdown privilege defined by the current security policy, but since all we need is to recover from this exit call, we simply throw a SecurityException, which we then have to catch in our JUnit test case.
In your JUnit test class, install a SecurityManager in the setUp() method:
securityManager = System.getSecurityManager();
System.setSecurityManager(new SecurityManager() {
    @Override
    public void checkExit(int status) {
        super.checkExit(status); // This is IMPORTANT!
        throw new SecurityException("Overriding shutdown...");
    }
});
In tearDown(), replace the SecurityManager again with the instance that we saved before. Failing to do so would prevent JUnit itself from shutting down! :)
References:
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/SecurityManager.html
http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/SecurityManager.html#checkExit(int)
The SecurityManager class contains many methods with names that begin with the word check. These methods are called by various methods in the Java libraries before those methods perform certain potentially sensitive operations. The invocation of such a check method typically looks like this:
SecurityManager security = System.getSecurityManager();
if (security != null) {
security.checkXXX(argument, . . . );
}
The security manager is thereby given an opportunity to prevent completion of the operation by throwing an exception. A security manager routine simply returns if the operation is permitted, but throws a SecurityException if the operation is not permitted.
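Putting the pieces together, a minimal self-contained sketch of this approach might look like the following. AppUnderTest.run() is a hypothetical stand-in for whatever code calls System.exit(); this variant skips the policy check entirely and allows all other operations via a no-op checkPermission, so the tests themselves are unaffected:

import static org.junit.Assert.fail;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ExitRecoveryTest {

    private SecurityManager original;

    @Before
    public void setUp() {
        original = System.getSecurityManager();
        System.setSecurityManager(new SecurityManager() {
            @Override
            public void checkExit(int status) {
                // Veto the shutdown: the pending System.exit() becomes a catchable exception.
                throw new SecurityException("Intercepted System.exit(" + status + ")");
            }

            @Override
            public void checkPermission(java.security.Permission perm) {
                // Allow everything else so the tests themselves are unaffected.
            }
        });
    }

    @After
    public void tearDown() {
        // Restore the original manager, otherwise JUnit itself cannot exit.
        System.setSecurityManager(original);
    }

    @Test
    public void recoversFromExit() {
        try {
            AppUnderTest.run(); // hypothetical stand-in for code that ends in System.exit()
            fail("Expected a System.exit() call");
        } catch (SecurityException expected) {
            // The application tried to exit; the test carries on.
        }
    }
}

Note that SecurityManager is deprecated as of Java 17, so this technique only applies to older JVMs.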
There is no way around System.exit() except for running the application as a separate process (outside your JVM).
You can do this from your unit test and observe the exit code that comes back from it. Whether that gives enough feedback on the passing of the test is up to your judgement.
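As a rough sketch of that idea (com.example.App is a placeholder for the real main class; ProcessBuilder is part of the standard library):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ExitCodeTest {

    @Test
    public void appExitsWithZero() throws Exception {
        // Launch the application in its own JVM, so its System.exit()
        // cannot take down the test runner.
        Process process = new ProcessBuilder(
                "java", "-cp", System.getProperty("java.class.path"),
                "com.example.App") // placeholder main class
                .inheritIO()
                .start();
        int exitCode = process.waitFor();
        assertEquals(0, exitCode);
    }
}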
Related
I have a test case as below:
@Test
public void checkSomething()
{
    //line1
    //line2
    //line3
    //line4 [Exception occurs here]
    //line5
    //line6
    //line7 homepage.Logout();
}
Now if an exception occurs at line4, for example, then my application will never log out [line7]. This will cause my further test cases to fail, since they will not be able to log in while the user session is still active.
How do I make it possible that logout always happens when a test fails prematurely?
I tried putting the logout logic in @AfterMethod. It works fine, but is it best practice to write test code in a configuration method like @AfterMethod?
Putting logout in @AfterMethod is fine, but make sure you are doing it in an efficient way:
check for logout only if the test failed;
avoid wrapping findElement in try/catch, because it waits the full implicit wait time before falling into the catch block; use findElements and check the returned List instead.
Refer to the code below using @AfterMethod:
@AfterMethod
public void screenShot(ITestResult result) {
    if (ITestResult.FAILURE == result.getStatus()) {
        // element which is displayed only while a user is logged in
        List<WebElement> username = driver.findElements(By.id("username")); // placeholder locator
        if (!username.isEmpty()) {
            // steps to log out go here
        }
    }
}
Another alternative is to go with a TestNG listener. Implement ITestListener in a class and override the onTestFailure method as below (onTestFailure is only invoked for failed tests, so no extra status check is needed):
@Override
public void onTestFailure(ITestResult result) {
    // element which is displayed only while a user is logged in
    List<WebElement> username = driver.findElements(By.id("username")); // placeholder locator
    if (!username.isEmpty()) {
        // steps to log out go here
    }
}
Add the tag below to testng.xml (the class-name is your listener class, including its package):
<listeners>
    <listener class-name="com.pack.listeners.TestListener"/>
</listeners>
I work in C#, but the concept is most likely the same across all languages. In my case, I use a so-called "TearDown" attribute in my base class to mark one method which should always run after a test. All the tests inherit this method from the base class and are handled accordingly. For the past years this has worked out well, and to my knowledge this concept is considered best practice.
In pseudo-code:
[TearDown]
public void Cleanup()
{
    try
    {
        Logout();
        OtherStuffLikeClosingDriver();
    }
    catch (Exception ex)
    {
        Log(ex); // This logging function needs to generate logs that are easily readable, based on the given exception.
        FinishTest(testInstance, testName); // Handles critical flows that should always be finished (and "should" not be able to error out).
        throw; // Re-throwing ("throw;" rather than "throw ex;" preserves the original stack trace) makes sure the exception shows up directly in the test output, which often speeds up the first diagnosis of a failed run.
    }
}
Just make sure to handle exceptions and such accordingly: the logic in your @AfterMethod should not be interrupted by unexpected issues.
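For the TestNG setup in the question, a minimal Java equivalent of that teardown might look like this (logout() and closeDriver() are hypothetical stand-ins for your own helpers):

import org.testng.annotations.AfterMethod;

public abstract class TestBase {

    @AfterMethod(alwaysRun = true) // run even when the test failed or was skipped
    public void cleanup() {
        try {
            logout();      // stand-in for your logout helper
            closeDriver(); // stand-in for other teardown work
        } catch (Exception ex) {
            // Never let teardown failures mask the original test failure;
            // log and continue so the next test starts from a clean state.
            ex.printStackTrace();
        }
    }

    protected abstract void logout();

    protected abstract void closeDriver();
}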
I have a question about TestNG with Java; I am completely new to TestNG. How are all the test cases executed using TestNG in Java without a main() method? Please suggest ideas if you have any. The following code is an example of a sample test case using TestNG in Java. But if you look closely, you will notice there is no main() method in the code. How, then, do the test cases get executed?
A related question: is a main() method needed for the Selenium WebDriver and TestNG combination to execute a script, or can we execute test cases without a main() method? If we can, how is that possible?
package com.first.example;

import org.testng.annotations.Test;

public class demoOne {

    @Test
    public void firstTestCase() {
        System.out.println("im in first test case from demoOne Class");
    }

    @Test
    public void secondTestCase() {
        System.out.println("im in second test case from demoOne Class");
    }
}
This is a valid question many testers have, because a main() method is needed to run a Java program, yet when writing tests in TestNG we don't use a main() method; we use annotations instead.
Annotations in TestNG are lines of code that control how the method below them will be executed. So, in short, you don't need to write a main() method; TestNG does that by itself. Refer to the code at the end of the Annotations documentation to get an idea of how this happens.
As rightly pointed out in this answer: https://stackoverflow.com/a/1918154/3619412
Annotations are meta-meta-objects which can be used to describe other meta-objects. Meta-objects are classes, fields and methods. Asking an object for its meta-object (e.g. anObj.getClass()) is called introspection. The introspection can go further and we can ask a meta-object what are its annotations (e.g. aClass.getAnnotations). Introspection and annotations belong to what is called reflection and meta-programming.
Also, it's not necessary to have a main() method in your tests, but you can use a main() method to run the TestNG tests if you want. Refer to this.
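For instance, a minimal sketch of driving TestNG from your own main() method, using the public org.testng.TestNG API and the test class from the question:

import org.testng.TestNG;

public class RunFromMain {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        // Point TestNG at the test classes instead of a testng.xml file.
        testng.setTestClasses(new Class[] { com.first.example.demoOne.class });
        testng.run();
        System.exit(testng.getStatus()); // non-zero status signals failures
    }
}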
To run a script from the command prompt, we use the statement below:
java org.testng.TestNG testng1.xml
This is how the main method in TestNG's own TestNG.java class accepts the command line arguments:
public static void main(String[] argv) {
    TestNG testng = privateMain(argv, null);
    System.exit(testng.getStatus());
}
You saw it right. Test cases get executed through TestNG, a testing framework inspired by JUnit, without a main() method; it extensively uses annotations instead.
Annotations
As per the documentation in Annotations, many APIs require a huge amount of boilerplate code; to write a web service, for example, you need to provide a paired interface and implementation. This boilerplate could be generated automatically by a tool if the program were decorated with annotations indicating which methods are remotely accessible. Annotations don't affect program semantics directly, but they do affect the way programs are treated by tools and libraries, which can in turn affect the semantics of the running program.
TestNG
TestNG is a simple annotation-based test framework which uses a marker annotation type to indicate that a method is a test method and should be run by the testing tool. As an example (the method is static here so that the reflective tool below can invoke it without an instance):
import org.testng.annotations.Test;

public class FooTests {

    @Test
    public static void foo() {
        System.out.println("With in foo test");
    }
}
The testing tool which is being used is as follows:
import java.lang.reflect.Method;

import org.testng.annotations.Test;

public class RunTests {
    public static void main(String[] args) throws Exception {
        int passed = 0, failed = 0;
        for (Method m : Class.forName(args[0]).getMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                try {
                    m.invoke(null); // works for static, parameterless test methods
                    passed++;
                } catch (Throwable ex) {
                    System.out.printf("Test %s failed: %s %n", m, ex.getCause());
                    failed++;
                }
            }
        }
        System.out.printf("Passed: %d, Failed %d%n", passed, failed);
    }
}
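Assuming the FooTests class above has been compiled onto the classpath, the tool could then be invoked as (hypothetical session):
java RunTests FooTests
It reflectively scans the named class for @Test-annotated methods, invokes each one, and prints a pass/fail summary.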
Our test environment has a variety of integration tests that rely on middleware (CMS platform, underlying DB, Elasticsearch index).
They're automated, and we manage our middleware with Docker, so we don't have issues with unreliable networks. However, sometimes our DB crashes, and our tests fail.
The problem is that this failure shows up as a litany of org.hibernate.exception.JDBCConnectionException messages, which come about via a timeout. When that happens, we end up with hundreds of tests failing with this exception, each one taking many seconds to fail. As a result, it takes an age for our tests to complete; indeed, we generally just kill these builds manually when we realise what is happening.
My question: in a Maven-driven Java testing environment, is there a way to direct the build system to watch for specific kinds of exceptions and kill the whole process should they appear (or reach some kind of threshold)?
We could watchdog our containers and kill the build process that way, but I'm hoping there's a cleaner way to do it with Maven.
If you use TestNG instead of JUnit, there are further possibilities for defining tests as dependent on other tests.
For example, as others have mentioned above, you can have a method that checks your database connection and declare all other tests as dependent on this method:
@Test
public void serverIsReachable() {}

@Test(dependsOnMethods = { "serverIsReachable" })
public void queryTestOne() {}
With this, if the serverIsReachable test fails, all other tests which depend on it will be skipped rather than marked as failed. Skipped methods are reported as such in the final report, which is important since skipped methods are not necessarily failures. But since your initial test serverIsReachable failed, the build should fail completely.
The positive effect is that none of your other tests will be executed, so the build should fail very fast.
You could also extend this logic with groups. Let's say your database queries are used by some domain logic tests afterwards; you can declare each database test to be part of a group, like
@Test(groups = { "jdbc" })
public void queryTestOne() {}
and declare your domain logic tests as dependent on those tests, with
@Test(dependsOnGroups = { "jdbc.*" })
public void domainTestOne() {}
TestNG will therefore guarantee the order of execution of your tests.
Hope this helps to make your tests a bit more structured. For more info, have a look at the TestNG dependency documentation.
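Put together, a sketch of such a dependency chain might look like this (the method bodies are placeholders):

import org.testng.annotations.Test;

public class DependentTests {

    @Test
    public void serverIsReachable() {
        // placeholder: open a connection and run the cheapest query, e.g. "SELECT 1"
    }

    @Test(dependsOnMethods = { "serverIsReachable" }, groups = { "jdbc" })
    public void queryTestOne() {
        // a database-backed test; skipped if serverIsReachable fails
    }

    @Test(dependsOnGroups = { "jdbc.*" })
    public void domainTestOne() {
        // domain logic on top of the queries; skipped if any jdbc test fails
    }
}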
I realize this is not exactly what you are asking for, but it could nonetheless help to speed up the build:
JUnit assumptions allow a test to be skipped when an assumption fails. You could have an assumption like assumeTrue(db.isReachable()) that would skip those tests when a timeout is reached.
In order to actually speed things up, and not repeat this over and over, you could put this in a @ClassRule:
A failing assumption in a @Before or @BeforeClass method will have the same effect as a failing assumption in each @Test method of the class.
Of course you would then have to mark your build as unstable via another way, but that should be easily doable.
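A minimal sketch of that @ClassRule idea, assuming you provide your own cheap reachability check (databaseIsReachable() below is a stand-in):

import org.junit.Assume;
import org.junit.ClassRule;
import org.junit.rules.ExternalResource;

public abstract class DatabaseTestBase {

    @ClassRule
    public static final ExternalResource DB_AVAILABLE = new ExternalResource() {
        @Override
        protected void before() {
            // Runs once per class; a failing assumption skips every test in the class.
            Assume.assumeTrue("Database unreachable, skipping", databaseIsReachable());
        }
    };

    private static boolean databaseIsReachable() {
        // stand-in: try the cheapest possible query, e.g. "SELECT 1", with a short timeout
        return true;
    }
}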
I don't know if you can fail-fast the build itself, or even want to, since the administrative aspects of the build might not then complete, but you could do this:
In all your test classes that depend on the database (or their parent classes, because something like this is inheritable), add this:
@BeforeClass
public void testJdbc() throws Exception { // note: with JUnit 4 this method must also be static
    Executors.newSingleThreadExecutor()
            .submit(new Callable<Object>() {
                public Object call() throws Exception {
                    // execute the simplest SQL you can, e.g. "SELECT 1"
                    return null;
                }
            })
            .get(100, TimeUnit.MILLISECONDS);
}
If the simple JDBC query fails to return within 100 ms, the entire test class won't run and will show as a failure to the build.
Make the wait time as small as you can while still being reliable.
One thing you could do is write a new test runner which stops if such an error occurs. Here is an example of what that might look like:
import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;

public class StopAfterSpecialExceptionRunner extends BlockJUnit4ClassRunner {

    private boolean failedWithSpecialException = false;

    public StopAfterSpecialExceptionRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected void runChild(final FrameworkMethod method, RunNotifier notifier) {
        Description description = describeChild(method);
        if (failedWithSpecialException || isIgnored(method)) {
            // Once the special exception has been seen, skip all remaining tests.
            notifier.fireTestIgnored(description);
        } else {
            runLeaf(methodBlock(method), description, notifier);
        }
    }

    @Override
    protected Statement methodBlock(FrameworkMethod method) {
        return new FeedbackIfSpecialExceptionOccurs(super.methodBlock(method));
    }

    private class FeedbackIfSpecialExceptionOccurs extends Statement {

        private final Statement next;

        public FeedbackIfSpecialExceptionOccurs(Statement next) {
            super();
            this.next = next;
        }

        @Override
        public void evaluate() throws Throwable {
            try {
                next.evaluate();
            } catch (AssumptionViolatedException e) {
                throw e;
            } catch (SpecialException e) {
                // Remember the failure so runChild() ignores everything that follows.
                StopAfterSpecialExceptionRunner.this.failedWithSpecialException = true;
                throw e;
            }
        }
    }
}
Then annotate your test classes with @RunWith(StopAfterSpecialExceptionRunner.class).
Basically what this does is check for a certain exception (here it's SpecialException, an exception I wrote myself); if it occurs, the test that threw it fails and all following tests are skipped. You could of course limit that to tests annotated with a specific annotation if you liked.
It is also possible that a similar behavior could be achieved with a Rule, and if so that may be a lot cleaner.
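For instance, a sketch of that Rule idea, reusing the hypothetical SpecialException from above (a shared flag plus an assumption skips everything after the first occurrence):

import java.util.concurrent.atomic.AtomicBoolean;

import org.junit.Assume;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class FailFastOnSpecialExceptionRule implements TestRule {

    // Shared across all tests that use this rule.
    private static final AtomicBoolean SEEN = new AtomicBoolean(false);

    @Override
    public Statement apply(final Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // Skip (rather than fail) once the special exception has been seen.
                Assume.assumeFalse("Skipped after an earlier SpecialException", SEEN.get());
                try {
                    base.evaluate();
                } catch (SpecialException e) { // hypothetical exception type from above
                    SEEN.set(true);
                    throw e;
                }
            }
        };
    }
}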
I'm in the process of migrating a test framework from JUnit to TestNG. This framework is used to perform large end-to-end integration tests with Selenium that take several minutes to run and consist of several hundred steps across dozens of browser pages.
DISCLAIMER: I understand that this makes unit testing idealists very uneasy, but this sort of testing is required at most large service oriented companies and using unit testing tools to manage these integration tests is currently the most widespread solution. It wasn't my decision. It's what I've been asked to work on and I'm attempting to make the best of it.
At any rate, these tests fail very frequently (surprise) and making them easy to debug is of high importance. For this reason we like to detect test failures before they're reported, append some information about the failure, and then allow JUnit to fail with this extra information. For instance, without this information a failure may look like:
java.lang.<'SomeObscureException'>: <'Some obscure message'> at <'StackTrace'>
But with the added information it will look like:
java.lang.AssertionError:
Reproduction Seed: <'Random number used to generate test case'>
Country: <'Country for which test was set to run'>
Language: <'Localized language used by test'>
Step: <'Test step where the exception occurred'>
Exception Message: <'Message explaining probable cause of failure'>
Associated Exception Type: <'SomeObscureException'>
Associated Exception Message: <'Some obscure message'>
Associated Exception StackTrace: <'StackTrace'>
Exception StackTrace: <'StackTrace where we appended this information'>
It's important to note that we add this information before the test actually fails. Because our reporting tool is based entirely on the exceptions thrown by JUnit, this ensures that the information we need is present in those exceptions. Ideally, I'd like to add this information to an HTML or XML document using a reporter class after the test fails but before teardown is performed, and then modify our reporting tool to pick up this extra information and append it to our e-mail reports. However, this has been a hard sell at our sprint planning meetings, and I have not been allotted any time to work on it (running endless regressions for the developers is given higher priority than working on the test framework itself; such is the life of the modern SDET). I also believe strongly in balance and refuse to cut into other parts of my life to get this done outside of tracked time.
What we're currently doing is this:
public class SomeTests extends TestBase {

    @Test
    public void someTest() {
        // Test code
    }

    // More tests
}

public abstract class TestBase {

    @Rule
    public MyWatcher watcher = new MyWatcher();

    // More rules and variables

    @Before
    public final void setup() {
        // Read config, generate test data, create Selenium WebDriver, etc.
        // Send references to all test objects to MyWatcher
    }
}

public class MyWatcher extends TestWatcher {

    // Test object references

    @Override
    public void failed(Throwable throwable, Description description) {
        StringBuilder sb = new StringBuilder();
        // Append custom test information to sb.
        String exceptionSummary = sb.toString();
        Assert.fail(exceptionSummary);
    }

    @Override
    public void finished(Description description) {
        // Shut down Selenium WebDriver, kill proxy server, etc.
    }

    // Miscellaneous teardown and logging methods
}
1. JUnit starts.
2. SomeTests inherits from the TestBase class. TestBase instantiates our own TestWatcher instance via the @Rule annotation (MyWatcher).
3. Test setup runs in the TestBase class.
4. References to test objects are sent to MyWatcher.
5. JUnit begins the someTest() method.
6. someTest() fails at some point.
7. JUnit calls the overridden failed() method in MyWatcher.
8. failed() appends custom test information to a new message using the references passed by TestBase.
9. failed() calls JUnit's Assert.fail() method with the customized message.
10. JUnit throws a java.lang.AssertionError for this new failure with the customized message. This is the exception that actually gets recorded in the test results.
11. JUnit calls the overridden finished() method.
12. finished() performs test teardown.
Our reporting tool picks up the summarized errors thrown by JUnit and includes them in the e-mails we receive. This makes life easier than debugging the original exceptions would be without any of the extra information added by MyWatcher after the original failure.
I'd now like to implement a similar mechanism using TestNG. I first tried adding an IInvokedMethodListener via the @Listeners annotation on our TestBase class, as a replacement for the TestWatcher we were using in JUnit. Unfortunately the methods in this listener were called after every @BeforeMethod and @AfterMethod call as well as for the actual tests. This caused quite a mess when I called Assert.fail from inside the IInvokedMethodListener, so I opted to scrap this approach and put the code directly into an @AfterMethod call in our TestBase class.
Unfortunately, TestNG does not appear to handle the 'failing twice' approach we were using in JUnit. When I call Assert.fail in the @AfterMethod of a test that has already failed, it gets reported as an additional failure. It seems we'll have to come up with another way of doing this until I can get authorization to write a proper test reporter that includes the information we need for debugging.
In the meantime, we still need to dress up the exceptions thrown by TestNG so that the debugging information appears in our e-mail reports. One idea I have is to wrap every single test in a try/catch block. If the test fails (an exception is thrown), we can catch that exception, wrap it in a summary exception with the debugging information added to its message, and call Assert.fail with our new summarized exception. That way TestNG only ever sees that one exception and should only report one failure. This feels like a kludge on top of a kludge, though, and I can't help but feel there's a better way of doing this.
Does anybody know of a better method for modifying what gets reported by TestNG? Is there some kind of trick for replacing the original exception with my own using ITestContext or ITestResult? Can I dive in somewhere and remove the original failure from some list, or is it already too late to stop TestNG's internal reporting by the time the @AfterMethod functions run?
Do you have any other advice regarding this sort of testing or exception handling in general? I don't have many knowledgeable co-workers to help with this stuff, so I'm pretty much just winging it.
Implement IInvokedMethodListener:
import org.apache.commons.lang3.reflect.FieldUtils;
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class InvokedMethodListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult result) {
        if (method.isTestMethod() && ITestResult.FAILURE == result.getStatus()) {
            Throwable throwable = result.getThrowable();
            String originalMessage = throwable.getMessage();
            String newMessage = originalMessage + "\nReproduction Seed: ...\nCountry: ...";
            try {
                // Rewrite Throwable's private detailMessage field via reflection
                // (FieldUtils comes from Apache Commons Lang 3).
                FieldUtils.writeField(throwable, "detailMessage", newMessage, true);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
Register it in your test:
@Listeners(InvokedMethodListener.class)
public class YourTest {

    @Test
    public void test() {
        Assert.fail("some message");
    }
}
or in testng.xml.
If you execute it, you should get:
java.lang.AssertionError: some message
Reproduction Seed: ...
Country: ...
You can use the SoftAssert class in TestNG to implement the above scenario. SoftAssert keeps a map of all the error messages from the asserts in a test case and reports them together at the end of the test. You can also extend the Assertion class to implement methods as per your requirements.
More information regarding the SoftAssert class and its implementation can be found here.
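As a brief illustrative sketch (the asserted values are placeholders):

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SoftAssertDemo {

    @Test
    public void collectsAllFailures() {
        SoftAssert soft = new SoftAssert();
        soft.assertEquals("DE", "US", "Country mismatch");     // recorded, does not throw yet
        soft.assertTrue(false, "Step 4 precondition failed");  // also recorded
        soft.assertAll(); // throws a single AssertionError listing every recorded failure
    }
}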
I have a class that makes native Windows API calls through JNA. How can I write JUnit tests that will execute on a Windows development machine but will be ignored on a Unix build server?
I can easily get the host OS using System.getProperty("os.name").
I can write guard blocks in my tests:
@Test
public void testSomeWindowsAPICall() throws Exception {
    if (isWindows()) {
        // do tests...
    }
}
This extra boilerplate code is not ideal.
Alternatively, I have created a JUnit rule that only runs the test method on Windows:
public class WindowsOnlyRule implements TestRule {

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                if (isWindows()) {
                    base.evaluate();
                }
            }
        };
    }

    private boolean isWindows() {
        return System.getProperty("os.name").startsWith("Windows");
    }
}
And this can be enforced by adding this annotated field to my test class:
@Rule
public WindowsOnlyRule runTestOnlyOnWindows = new WindowsOnlyRule();
Both these mechanisms are deficient, in my opinion, in that on a Unix machine they will silently pass. It would be nicer if they could be marked somehow at execution time with something similar to @Ignore.
Does anybody have an alternative suggestion?
In JUnit 5, there are built-in annotations for enabling or disabling tests on specific operating systems:
@Test
@EnabledOnOs({ LINUX, MAC })
void onLinuxOrMac() {
    // ...
}

@Test
@DisabledOnOs(WINDOWS)
void notOnWindows() {
    // ...
}
(These annotations live in org.junit.jupiter.api.condition; the OS constants are typically statically imported.)
Have you looked into assumptions? In a @Before method you can do this:
@Before
public void windowsOnly() {
    org.junit.Assume.assumeTrue(isWindows());
}
Documentation: http://junit.sourceforge.net/javadoc/org/junit/Assume.html
Have you looked at JUnit assumptions?
useful for stating assumptions about the conditions in which a test is meaningful. A failed assumption does not mean the code is broken, but that the test provides no useful information. The default JUnit runner treats tests with failing assumptions as ignored
(which seems to meet your criteria for ignoring these tests).
If you use Apache Commons Lang's SystemUtils:
In your @Before method, or inside tests that you only want to run on Windows, you can add:
Assume.assumeTrue(SystemUtils.IS_OS_WINDOWS);
Presumably you do not need to actually call the Windows API as part of the JUnit test; you only care that the class which is the target of the unit test calls what it thinks is the Windows API.
Consider mocking the Windows API calls as part of the unit tests.
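As a rough sketch of that idea, assuming the native calls are hidden behind an interface of your own (all names here are hypothetical; Mockito is used for the mocking):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class NativeCallerTest {

    // Hypothetical wrapper around the JNA calls.
    interface WindowsApi {
        int getTickCount();
    }

    // Hypothetical class under test, taking the wrapper as a dependency.
    static class Uptime {
        private final WindowsApi api;

        Uptime(WindowsApi api) {
            this.api = api;
        }

        long uptimeSeconds() {
            return api.getTickCount() / 1000L;
        }
    }

    @Test
    public void computesUptimeFromTickCount() {
        WindowsApi api = mock(WindowsApi.class);
        when(api.getTickCount()).thenReturn(5000);
        // Runs on any OS because no real native call is made.
        assertEquals(5, new Uptime(api).uptimeSeconds());
    }
}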