I'm using Spring Test with TestNG to test our DAOs, and I want to run a specific test fixture script before certain methods, with the modifications rolled back after every method so that the tests are free to do anything with the fixture data.
Initially I thought that 'groups' would fit, but I've since realized they're not intended for that (see this question: TestNG BeforeMethod with groups).
Is there any way to configure a @BeforeMethod method to run only before specific @Test methods? The only ways I see are workarounds:
Define an ordinary setup method and call it at the beginning of every @Test method;
Move the @BeforeMethod method to a new class (top level or inner class), along with all the methods that depend on it.
Neither is ideal: I'd like to keep my tests naturally grouped and clean, not split up for lack of alternatives.
You could add a parameter to your @BeforeMethod with the type java.lang.reflect.Method. TestNG will then inject the reflection information for the current test method, including the method name, which you can use for switching.
If you add another 'Object' parameter, you will also get the invocation parameters of the test method.
You'll find all the possible parameters for TestNG-annotated methods in chapter 5.18.1 of the TestNG documentation.
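The switching idea can be sketched without a running TestNG harness. Everything here except java.lang.reflect.Method is illustrative: the class name, the withFixture naming convention, and the fixture loader are made up, and the TestNG annotations are shown in comments:

```java
import java.lang.reflect.Method;

public class FixtureSwitchSketch {
    // In TestNG this method would carry @BeforeMethod; declaring a
    // java.lang.reflect.Method parameter makes TestNG inject the test
    // method that is about to run.
    public void setUp(Method testMethod) {
        if (needsFixture(testMethod)) {
            // runFixtureScript();  // hypothetical fixture loader
            System.out.println("fixture loaded for " + testMethod.getName());
        }
    }

    // Switch on the method name. A naming convention is one option; a
    // custom annotation looked up on the Method is another.
    static boolean needsFixture(Method testMethod) {
        return testMethod.getName().startsWith("withFixture");
    }

    // Example test methods; in real code these would carry @Test.
    public void withFixtureFindsSeededRows() {}

    public void plainValidation() {}

    public static void main(String[] args) throws Exception {
        System.out.println(needsFixture(
                FixtureSwitchSketch.class.getMethod("withFixtureFindsSeededRows"))); // true
        System.out.println(needsFixture(
                FixtureSwitchSketch.class.getMethod("plainValidation"))); // false
    }
}
```

An annotation-based check tends to age better than a name prefix, since renaming a test can otherwise silently detach it from its fixture.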
Tests are simply not designed to do this. Technically speaking, a single test is supposed to be self-contained: it sets up, tests, and tears down. That is a single test. However, many tests share the same set-up and tear-down methods, while other tests need one set-up before they all run. That is the purpose of the @Before-style annotations.
If you don't like set-up and tear-down inside your test, you're more than welcome to architect your own system, but technically speaking, if certain methods require specific set-ups or tear-downs, that really should be embodied IN the test, since it is a requirement for the test to pass. It is OK to call a set-up method, but ultimately it should be OBVIOUS that a test needs a specific set-up in order to pass. After all, if you're using specific set-ups, aren't you actually testing states rather than code?
Related
The following test is one of several tests that fail when I run my tests in random order using this Maven command: mvn -Dsurefire.runOrder=random clean test
@Test
public void ShouldReturnCorrectAccountLoanSumForDebtRatioWhenRedemptionAmountIsNull() {
    AccountVO account = mock(AccountVO.class);
    CustomerGroupInformationVO group = mock(CustomerGroupInformationVO.class);
    when(group.getCustomerIds()).thenReturn(Set.of("199406208123"));
    when(account.getAccountOwners()).thenReturn(List.of((new AccountOwnerVO(null, "199406208123", null))));
    when(account.getAmount()).thenReturn(BigDecimal.valueOf(500000));
    when(account.getRedemptionAmount()).thenReturn(null);
    assertEquals(BigDecimal.valueOf(500000), getAdjustedAccountLoanSumForDebtRatio(account, group, caseClientVO));
}
More specifically this is the line mentioned:
when(account.getAccountOwners()).thenReturn(List.of((new AccountOwnerVO(null, "199406208123", null))));
Any idea what is causing this and how I can fix it? When I run my tests normally using mvn clean install there are no issues at all. The reason I want it to work with a random order is that our build tool seems to use it and it can't build. Like I said it works fine locally.
Because Mockito works through side-effects stored in ThreadLocal variables, it is particularly subject to test pollution: If your tests fail when run in random order, it may be because some previous test left a mock in a state it wasn't expecting to be in. Also, Mockito stubbing relies on observing method calls in a specific order, which can cause odd exceptions if it can't observe the method calls (such as when they're final) or if you interact with a different mock while preparing a thenVerb argument.
One of your first lines of defense is to use validateMockitoUsage, which is meant to be run at the end of every test and confirms that no interaction with Mockito is left unfinished. You could put this in an @After method, but Mockito does so automatically if using MockitoJUnitRunner or MockitoRule. I'd recommend either of those latter options if possible, particularly MockitoRule.
This should help you confirm which test(s) are problematic. Once you're there, try these:
Double-check that you're not trying to mock final methods without Mockito's opt-in final support. If Mockito can't override your mock method, it won't be able to detect your stubbing calls in its expected order. If any of your methods are written in Kotlin, remember that unlike Java the methods are final unless declared open.
Your test doesn't seem to use Matchers, but if you do, make sure you use them for all arguments in a method if you use them for any argument in a method.
Be careful about calling real methods in the middle of stubbing. new AccountOwnerVO(...) shouldn't interact with mocks, but if it does, Mockito might interpret it as if your when call never got a thenReturn (when in reality it just hasn't gotten to it yet). Extracting your return value into a local variable is a reasonable step to try.
I have a superclass that defines @BeforeEach and @AfterEach methods. The class also has a method that should run only when a system property is set - basically @EnabledIfSystemProperty. Normal tests with the @Test annotation are in subclasses.
This works well, but the conditional test shows up as skipped in the test report - which I want to avoid. Is there any way I can run a test method on a condition but not count it as a test in normal situations?
This question is not about inheritance of @Test annotations. The basic question is: is there any way I can run a test method on a condition but not count it as a test in normal situations?
The easiest way to achieve what you want is to check the condition manually inside the test case (just as @Paizo advised in the comments). JUnit has several features that let you skip test execution, such as the junit-ext project with its @RunIf annotation, or the Assume clause, which forces a test to be skipped when the assumption is not met. But these features also mark the test as skipped, which is not what you want. Another possibility you might think about is modifying your code with the magic of reflection to add/remove annotations at runtime, but that is not possible. Theoretically I can imagine a convoluted way of using cglib to subclass your test class at runtime and manage annotations just as Spring does, but first ask yourself whether it's worth it.
My personal feeling is that you're trying to fix something that is working perfectly and isn't broken. A skipped test in the report is not a failed test. It's useful to be able to tell from the report whether a test was executed or not.
Hope it helps!
In most of my unit-tests on a component with JUnit, I encounter the same issue:
I want to run a first test on the component with no setup to check that it initializes correctly, then I want all my other tests to be done on the component setup in a given way.
I usually end up with a method that I call at the beginning of all my tests but one, which is ugly and error-prone. I could also create two different classes, with one containing only one test, but I don't think that is the most convenient solution.
I tried searching around and found this answer: Exclude individual test from 'before' method in JUnit, but as far as I could tell, the base.evaluate() call will always include the @Before statement.
Is there a better way to do this first unSetup test?
What the answer you linked to meant is that you do not use a @Before annotation at all. Instead, you put whatever you would have done in the @Before method in place of the
// run the before method here
comment.
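One way to make the choice explicit is a marker annotation that the rule checks before running the setup code. The sketch below uses plain reflection so it stands alone; in a real JUnit TestRule you would ask the Description for the annotation instead, and the @NoSetup name is made up:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class SkipSetupSketch {
    // Marker for the one test that must see the un-set-up component.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface NoSetup {}

    // In the linked answer's rule, this check would guard the setup code
    // placed where the "// run the before method here" comment is.
    static boolean shouldRunSetup(Method testMethod) {
        return !testMethod.isAnnotationPresent(NoSetup.class);
    }

    // Example test methods; in real code these would carry @Test.
    @NoSetup
    public void initializesCorrectlyWithoutSetup() {}

    public void behavesCorrectlyAfterSetup() {}

    public static void main(String[] args) throws Exception {
        System.out.println(shouldRunSetup(
                SkipSetupSketch.class.getMethod("initializesCorrectlyWithoutSetup"))); // false
        System.out.println(shouldRunSetup(
                SkipSetupSketch.class.getMethod("behavesCorrectlyAfterSetup"))); // true
    }
}
```

This keeps the "no setup for this one test" decision visible at the test itself rather than buried in the rule.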
I'm using Mockito to do some mocking/testing. My scenario is simple: I have a class mocked using mock(), and I'm invoking this class (indirectly) a large number of times (i.e. ~100k).
Mockito seems to hold some data for every invocation, so I run out of memory at a certain point.
I'd like to tell Mockito not to hold any data (I don't intend to call verify(), etc.; I just don't care, for these specific tests, what reaches that mock). I don't want to create new mocks for every invocation.
You can use Mockito.reset(mock), just be aware that after you call it, your mock will forget all stubbing as well as all interactions, so you would need to set it up again. Mockito's documentation on the method has these usage instructions:
List mock = mock(List.class);
when(mock.size()).thenReturn(10);
mock.add(1);
reset(mock);
//at this point the mock forgot any interactions & stubbing
They do also discourage use of this method, like the comments on your question do. Usually it means you could refactor your test to be more focused:
Instead of reset() please consider writing simple, small and focused test methods over lengthy, over-specified tests. First potential code smell is reset() in the middle of the test method. This probably means you're testing too much. Follow the whisper of your test methods: "Please keep us small & focused on single behavior". There are several threads about it on mockito mailing list.
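To see why the memory grows in the first place: Mockito records every invocation on a mock so that a later verify() can inspect it. A toy recorder (hand-rolled, not Mockito) makes the growth and the effect of a reset visible:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for a recording mock: every call is stored so a later
// verification could inspect it, which is why ~100k invocations add up.
public class RecordingStub {
    private final List<String> invocations = new ArrayList<>();

    public void call(String argument) {
        invocations.add(argument); // Mockito keeps a similar per-call record
    }

    public int recordedCount() {
        return invocations.size();
    }

    public void reset() {
        invocations.clear(); // roughly what Mockito.reset(mock) frees,
                             // though reset also discards stubbing
    }

    public static void main(String[] args) {
        RecordingStub stub = new RecordingStub();
        for (int i = 0; i < 100_000; i++) {
            stub.call("payload-" + i);
        }
        System.out.println(stub.recordedCount()); // 100000
        stub.reset();
        System.out.println(stub.recordedCount()); // 0
    }
}
```

So calling reset(mock) periodically (say, every few thousand invocations) keeps the record from growing, at the cost of having to re-apply your stubbing each time.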
I'm trying to cleanup my tests by always resetting to a known state before each test. In JUnit it seems that the best way to do this is to have a setup() method that sets the values for some fields. When running tests in parallel the field is always correct since each test is executed in a new instance of the test.
However in TestNG this doesn't seem to be the case. According to a post on their mailing list, setting fields in #BeforeMethod in a multithreaded testing doesn't guarantee their value.
As I need the classes I'm testing to be in a known state, is there a cleaner solution than using a DataProvider or saying "Don't ever run tests in multithreaded mode"?
There is only one difference between TestNG and JUnit in this specific area: JUnit will create a brand new instance of your test before each test method, TestNG will not.
What this means is that with TestNG, values stored in fields by test methods will be preserved between invocations, which is very useful if this object is complex and takes time to create. It also helps speed up test runs since you don't have to recreate this state from scratch every time.
If you want this state to be reset every time, simply put the initialization code in a @BeforeMethod, like you do with JUnit (except there it's called @Before).
As for multithreading, I don't understand why you are saying that there is no guarantee about that value, can you be more specific?
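The instance-per-method difference described above can be sketched without either framework; the class and counter below are illustrative stand-ins for a test class with field state:

```java
public class InstanceReuseSketch {
    // Field state a test method might mutate; package-private so the
    // demo can read it directly.
    int counter = 0;

    // Stand-in for a test method that mutates instance state.
    public void testMethod() {
        counter++;
    }

    public static void main(String[] args) {
        // TestNG-style: one instance is reused for all test methods,
        // so field state accumulates between invocations.
        InstanceReuseSketch shared = new InstanceReuseSketch();
        shared.testMethod();
        shared.testMethod();
        System.out.println(shared.counter); // 2

        // JUnit-style: a brand new instance per test method, so each
        // method sees freshly initialized fields.
        int lastSeen = 0;
        for (int i = 0; i < 2; i++) {
            InstanceReuseSketch fresh = new InstanceReuseSketch();
            fresh.testMethod();
            lastSeen = fresh.counter;
        }
        System.out.println(lastSeen); // 1
    }
}
```

This is why, under TestNG, any per-test state must be explicitly reinitialized in @BeforeMethod if you want JUnit-like isolation.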