Functions (side-effect free ones) are such a fundamental building block, but I don't know of a satisfying way of testing them in Java.
I'm looking for pointers to tricks that make testing them easier. Here's an example of what I want:
public void setUp() {
myObj = new MyObject(...);
}
// This is sooo 2009 and not what I want to write:
public void testThatSomeInputGivesExpectedOutput () {
assertEquals(expectedOutput, myObj.myFunction(someInput));
assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput));
// I don't want to repeat/write the following checks to see
// that myFunction is behaving functionally.
assertEquals(expectedOutput, myObj.myFunction(someInput));
assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput));
}
// The following two tests are more in spirit of what I'd like
// to write, but they don't test that myFunction is functional:
public void testThatSomeInputGivesExpectedOutput () {
assertEquals(expectedOutput, myObj.myFunction(someInput));
}
public void testThatSomeOtherInputGivesExpectedOutput () {
assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput));
}
I'm looking for some annotation I can put on the test(s), MyObject or myFunction to make the test framework automatically repeat invocations to myFunction in all possible permutations for the given input/output combinations I've given, or some subset of the possible permutations in order to prove that the function is functional.
For example, above the (only) two possible permutations are:
myObj = new MyObject();
myObj.myFunction(someInput);
myObj.myFunction(someOtherInput);
and:
myObj = new MyObject();
myObj.myFunction(someOtherInput);
myObj.myFunction(someInput);
I should be able to only provide the input/output pairs (someInput, expectedOutput), and (someOtherInput, someOtherOutput), and the framework should do the rest.
I haven't used QuickCheck, but it seems like a non-solution. It is documented as a generator. I'm not looking for a way to generate inputs to my function, but rather a framework that lets me declaratively specify what part of my object is side-effect free and invoke my input/output specification using some permutation based on that declaration.
Update: I'm not looking to verify that nothing changes in the object, a memoizing function is a typical use-case for this kind of testing, and a memoizer actually changes its internal state. However, the output given some input always stays the same.
If you are trying to test that the functions are side-effect free, then calling with random arguments isn't really going to cut it. The same applies to a random sequence of calls with known arguments, or a pseudo-random one with random or fixed seeds. There's a good chance that a (harmful) side effect will only occur with a sequence of calls that your randomizer never selects.
There is also a chance that the side effects won't actually be visible in the outputs of any of the calls that you are making ... no matter what the inputs are. The side effects could be on some other related objects that you didn't think to examine.
If you want to test this kind of thing, you really need to implement a "white-box" test where you look at the code, try to figure out what might cause (unwanted) side effects, and create test cases based on that knowledge. But I think that a better approach is careful manual code inspection, or using an automated static code analyser ... if you can find one that would do the job for you.
OTOH, if you already know that the functions are side-effect free, implementing randomized tests "just in case" is a bit of a waste of time, IMO.
I'm not quite sure I understand what you are asking, but it seems like Junit Theories (http://junit.sourceforge.net/doc/ReleaseNotes4.4.html#theories) could be an answer.
In this example, you could create a Map of key/value pairs (input/output) and call the method under test several times with values picked from the map. This will not prove that the method is functional, but it will increase the probability - which might be sufficient.
Here's a quick example of such an additional probably-functional test:
@Test
public void probablyFunctionalTestForMethodX() {
    Map<Object, Object> inputOutputMap = initMap(); // this loads the input/output values
    for (int i = 0; i < maxIterations; i++) {
        Map.Entry test = pickAtRandom(inputOutputMap); // this picks a map entry randomly
        assertEquals(test.getValue(), myObj.myFunction(test.getKey()));
    }
}
Problems with a higher complexity could be solved based on the Command pattern: You could wrap the test methods in command objects, add the command object to a list, shuffle the list and execute the commands (= the embedded tests) according to that list.
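A minimal sketch of that Command-pattern idea, assuming JUnit 4 and Java 8 lambdas (myObj, someInput and the expected values are the asker's placeholders):

@Test
public void probablyFunctionalInAnyOrder() {
    // Wrap each input/output check in a command object...
    List<Runnable> checks = new ArrayList<>();
    checks.add(() -> assertEquals(expectedOutput, myObj.myFunction(someInput)));
    checks.add(() -> assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput)));

    // ...shuffle the list, then execute the embedded tests in that order.
    Collections.shuffle(checks);
    for (Runnable check : checks) {
        check.run();
    }
}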
It sounds like you're attempting to test that invoking a particular method on a class doesn't modify any of its fields. This is a somewhat odd test case, but it's entirely possible to write a clear test for it. For other "side effects", like invoking other external methods, it's a bit harder. You could replace local references with test stubs and verify that they weren't invoked, but you still won't catch static method calls this way. Still, it's trivial to verify by inspection that you're not doing anything like that in your code, and sometimes that has to be good enough.
Here's one way to test that there are no side effects in a call:
public void test_MyFunction_hasNoSideEffects() {
MyClass systemUnderTest = makeMyClass();
MyClass copyOfOriginalState = systemUnderTest.clone(); // assumes clone() is overridden to return MyClass
systemUnderTest.myFunction();
assertEquals(systemUnderTest, copyOfOriginalState); //Test equals() method elsewhere
}
It's somewhat unusual to try to prove that a method is truly side effect free. Unit tests generally attempt to prove that a method behaves correctly and according to contract, but they're not meant to replace examining the code. It's generally a pretty easy exercise to check whether a method has any possible side effects. If your method never sets a field's value and never calls any non-functional methods, then it's functional.
Testing this at runtime is tricky. What might be more useful would be some sort of static analysis. Perhaps you could create a @Functional annotation, then write a program that would examine the classes of your program for such methods and check that they only invoke other @Functional methods and never assign to fields.
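A declaration of such a marker annotation might look like this (@Functional is the hypothetical name proposed above; retention is chosen so a bytecode analyser could still see it):

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker for methods that are claimed to be side-effect free.
@Documented
@Retention(RetentionPolicy.CLASS) // kept in the .class file for static analysis
@Target(ElementType.METHOD)
public @interface Functional {}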
Randomly googling around, I found somebody's master's thesis on exactly this topic. Perhaps he has working code available.
Still, I will repeat that it is my advice that you focus your attention elsewhere. While you CAN mostly prove that a method has no side effects at all, it may be better in many cases to quickly verify this by visual inspection and focus the remainder of your time on other, more basic tests.
Have a look at http://fitnesse.org/: it is often used for acceptance testing, but I found it is an easy way to run the same tests against a huge amount of data.
In JUnit you can write your own test runner. This code is not tested (I'm not sure whether methods that take arguments will be recognized as test methods; maybe some more runner setup is needed?):
public class MyRunner extends BlockJUnit4ClassRunner {
@Override
protected Statement methodInvoker(final FrameworkMethod method, final Object test) {
return new Statement() {
@Override
public void evaluate() throws Throwable {
Iterable<Object[]> permutations = getPermutations();
for (Object[] permutation : permutations) {
method.invokeExplosively(test, permutation[0], permutation[1]);
}
}
};
}
}
It should then only be a matter of providing a getPermutations() implementation. For example, it could take data from some List<Object[]> field annotated with a custom annotation and produce all the permutations, as in the sketch below.
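A hedged sketch of such a permutation generator (the class and method names are illustrative, not part of JUnit); the runner would then iterate over each ordering and invoke the test method once per pair:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class Permutations {
    // Returns every ordering of the declared input/output pairs.
    static List<List<Object[]>> of(List<Object[]> pairs) {
        List<List<Object[]>> result = new ArrayList<>();
        permute(new ArrayList<>(pairs), 0, result);
        return result;
    }

    private static void permute(List<Object[]> pairs, int k, List<List<Object[]>> out) {
        if (k == pairs.size()) {
            out.add(new ArrayList<>(pairs));
            return;
        }
        for (int i = k; i < pairs.size(); i++) {
            Collections.swap(pairs, k, i);
            permute(pairs, k + 1, out);
            Collections.swap(pairs, k, i); // restore order before next branch
        }
    }
}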
I think the term you're missing is "Parameterized Tests". However, it seems to be more tedious in JUnit than in the .NET flavor. In NUnit, the following test executes 6 times, with all combinations:
[Test]
public void MyTest(
[Values(1,2,3)] int x,
[Values("A","B")] string s)
{
...
}
For Java, your options seem to be:
JUnit supports this with version 4. However, it's a lot of code (it seems JUnit is adamant about test methods not taking parameters). This is the least invasive option; see the sketch after this list.
DDSteps, a JUnit plugin. See this video that takes values from an appropriately named Excel spreadsheet. You also need to write a mapper/fixture class that maps values from the spreadsheet into members of the fixture class, which are then used to invoke the SUT.
Finally, you have Fit/Fitnesse. It's as good as DDSteps, except that the input data is in HTML/wiki form. You can paste from an Excel sheet into Fitnesse and it formats it correctly at the push of a button. You need to write a fixture class here too.
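Here is a sketch of the JUnit 4 version (MyObject/myFunction are the asker's names; the string pairs are placeholder data):

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class MyFunctionTest {
    private final String input;
    private final String expected;

    // JUnit constructs the test once per row of data().
    public MyFunctionTest(String input, String expected) {
        this.input = input;
        this.expected = expected;
    }

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { "someInput", "expectedOutput" },
            { "someOtherInput", "expectedOtherOutput" }
        });
    }

    @Test
    public void givesExpectedOutput() {
        assertEquals(expected, new MyObject().myFunction(input));
    }
}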
I'm afraid I can't find the link anymore, but JUnit 4 has some helper functions to generate test data. It's something like this (presumably the @Parameters mechanism of parameterized tests):

@Parameters
public static Collection<Object[]> testData() {
    return Arrays.asList(new Object[][] {
        { 2, 3, 4 },
        { 3, 4, 5 }
        // ...
    });
}

JUnit will then test your methods with this data. But as I said, I can't find the link anymore (I forgot the keywords) for a detailed (and correct) example.
Related
I'm in a java context and am using Mockito (but I'm not bound to it) for basic mocking needs.
I have code like this
public class AuditInfoSerializer {
[..]
public Map<String, Object> doStuff(Object a) {
doOtherStuff("hello", new TempClass(someField, <someParams>));
doOtherStuff("world", new TempClass(someField, <otherParams>));
return getResult();
}
}
and in a test I want to verify that there are two instances of TempClass created with the correct set of parameters when I call the doStuff method.
Is this possible somehow?
You don't want to verify temporary data on the object under test. You want to mock dependencies and assert the object under test's behavior: that with this input you get this output.
Verifying a mock is a trade-off for mocked methods that return nothing and only produce side effects.
So use it only when you don't have the choice.
In your unit test, what you want is to assert what the method under test returns, that is, the result of getResult().
Do that with Assert.assertEquals(...), not with Mockito.verify(...).
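For example, a sketch of such a state-based test, assuming the "hello" and "world" labels end up as keys in the returned map (that mapping is a guess from the snippet, not confirmed by the question):

@Test
public void doStuff_buildsExpectedResult() {
    AuditInfoSerializer serializer = new AuditInfoSerializer();

    Map<String, Object> result = serializer.doStuff(new Object());

    // Assert on the observable output instead of verifying TempClass construction.
    assertTrue(result.containsKey("hello"));
    assertTrue(result.containsKey("world"));
}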
For the most part I agree with @davidxxx's point about the mock-verifying trade-off. If you have a setup that allows you to make assertions about an outcome, like a map that is created as a result, go for it!
From an API perspective doStuff is a simple straight-forward method: Throw something at it, get something back. The information you are interested in will be contained in the map (this would be your assertion).
There is a lot going on under the hood before doStuff returns something. Many people tend to want to break encapsulation when testing stuff. They are constantly looking for ways to uncover what is going on behind the curtains. I believe that's totally natural. But of course, it's also an anti-pattern. It doesn't matter what tool you (mis)use to break natural boundaries (mocking frameworks, custom reflection, "back doors" in your code base, etc.). It is always wrong. As @Michael already pointed out, the call to doOtherStuff is indeed an implementation detail. Take the perspective of client code that makes a call to doStuff. Is it interested in how the map is created? I doubt it. This should also be your testing perspective.
One last thing about using verification in tests. I would like to soften the trade-off statement; I really don't like the generalization here. Verification is not always the less attractive choice compared to real assertions:
// Valid test without any verification
@Test
void method_foo_returns_gibberish(@Mock SomeInput someInput) {
    // Maybe this is just to prevent an NPE ...
    when(someInput.readStuff()).thenReturn("bla");
    assertEquals("gibberish", Foo.foo(someInput));
}

// Test made possible by verification
@Test
void method_foo_is_readonly(@Mock SomeInput someInput) {
    Foo.foo(someInput);
    verify(someInput).readStuff();
    verifyNoMoreInteractions(someInput);
}
This is just the most obvious example that I could think of. There is a faction of BDD geniuses who strive to build their whole architecture around verification-driven tests! Here is an excellent article by Martin Fowler.
When talking about testing, most of the time, there is no black and white. Using mocks and verification means writing different tests.
As always, it's about picking the right tool.
In my understanding, code testing is about checking whether results are right; like with a calculator, I need to write a test case to verify that the result of 1+1 is 2.
But I have read many test cases about verifying the number of times a method is called. I'm very confused about that. The best example is what I just saw in Spring in Action:
public class BraveKnight implements Knight {
private Quest quest;
public BraveKnight(Quest quest) {
this.quest = quest;
}
public void embarkOnQuest() {
quest.embark();
}
}
public class BraveKnightTest {
@Test
public void knightShouldEmbarkOnQuest() {
Quest mockQuest = mock(Quest.class);
BraveKnight knight = new BraveKnight(mockQuest);
knight.embarkOnQuest();
verify(mockQuest, times(1)).embark();
}
}
I really have no idea why they need to verify that the embark() function is called one time. Don't you think that embark() will certainly be invoked after embarkOnQuest() is called? Or, if some error occurs, I will notice error messages in the logs showing the error line number, which can help me quickly locate the wrong code.
So what's the point of verifying like above?
The need is simple: to verify that the correct number of invocations were made. There are scenarios in which method calls should not happen, and others in which they should happen more or less than the default.
Consider the following modified version of embarkOnQuest:
public void embarkOnQuest() {
quest.embark();
quest.embarkAgain();
}
And suppose you are testing error cases for quest.embark():
@Test
public void knightShouldEmbarkOnQuest() {
Quest mockQuest = mock(Quest.class);
Mockito.doThrow(RuntimeException.class).when(mockQuest).embark();
...
}
In this case you want to make sure that quest.embarkAgain is NOT invoked (or is invoked 0 times):
verify(mockQuest, times(0)).embarkAgain(); // or use never()
Of course, this is just one other simple example. There are many other examples that could be added:
A database connector that should cache entries on first fetch, one can make multiple calls and verify that the connection to the database was called just once (per test query)
A singleton object that does initialization on load (or lazily), one can test that initialization-related calls are made just once.
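A sketch of the first example, with hypothetical CachingConnector/Database types (the verify call at the end is the point):

import static org.mockito.Mockito.*;
import org.junit.Test;

public class CachingConnectorTest {
    @Test
    public void secondFetchIsServedFromCacheNotTheDatabase() {
        Database db = mock(Database.class);
        when(db.load("key")).thenReturn("value");
        CachingConnector connector = new CachingConnector(db);

        connector.fetch("key");
        connector.fetch("key"); // the second call should hit the cache

        // The underlying database must have been queried exactly once.
        verify(db, times(1)).load("key");
    }
}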
Consider the following code:
public void saveFooIfFlagTrue(Foo foo, boolean flag) {
if (flag) {
fooRepository.save(foo);
}
}
If you don't check the number of times that fooRepository.save() is invoked, then how can you know whether this method is doing what you want it to?
This applies to other void methods. If there is no return to a method, and therefore no response to validate, checking which other methods are called is a good way of validating that the method is behaving correctly.
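For instance, hedged sketches of tests for saveFooIfFlagTrue (service, foo and the mocked fooRepository are assumed to be set up elsewhere in the test class):

@Test
public void savesFooWhenFlagIsTrue() {
    service.saveFooIfFlagTrue(foo, true);
    verify(fooRepository, times(1)).save(foo);
}

@Test
public void doesNotSaveFooWhenFlagIsFalse() {
    service.saveFooIfFlagTrue(foo, false);
    verify(fooRepository, never()).save(foo);
}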
Good question. You raise a good point that mocking can be overly circuitous when you can just check the results. However, there are contexts where this does lead to more robust tests.
For example, if a method needs to make a call to an external API, there are several problems with simply testing the result:
Network I/O is slow. If you have many checks like this, it will slow down your test case
Any round-trip like this would have to rely on the code making the request, the API, and the code interpreting the API's response all to work correctly. This is a lot of failure points for a single test.
If something stupid happens and you accidentally make multiple requests, this could cause performance issues with your program.
To address your sub-questions:
Don't you think that embark() will certainly be invoked after embarkOnQuest() is called?
Tests also have value in letting you refactor without worrying about breaking things. This is obvious now, yes. Will it be obvious in 6 months?
I really have no idea about why they need to verify the embark() function is called one time
Verifying an invocation on a mock a specific number of times is the standard way Mockito works when you invoke Mockito.verify().
In fact, this:
verify(mockQuest, times(1)).embark();
is just a verbose way to write :
verify(mockQuest).embark();
In a general way, the verification for a single call on the mock is what you need.
In some uncommon scenarios you may want to verify that a method was invoked a specific number of times (more than one).
But you want to avoid overly specific verifications.
In fact, you want to use verification as little as possible.
If you need to verify the number of invocations on the mock, it generally means one of two things: the mocked dependency is too tightly coupled to the class under test, and/or the method under test performs too many unitary tasks that produce only side effects.
The test is then not necessarily readable or maintainable at a glance. It is as if you had coded the mock's flow into the verification calls.
As a consequence, it also makes the tests more brittle, as they check invocation details rather than the overall logic and state.
In most cases, refactoring is the remedy and removes the need to specify a number of invocations.
I'm not saying it is never required, but use it only when it happens to be the only decent choice for the class under test.
I have to deal with a legacy application that has no tests. So before I begin refactoring I want to make sure everything works as it is.
Now imagine the following situation:
public SomeObject doSomething(final OtherObject x, final String something) {
if(x == null) throw new RuntimeException("x may not be null!");
...
}
Now I want to test that null check, so to be sure it works and I don't lose it once I refactor.
So I did this
@Test(expected = RuntimeException.class)
public void ifOtherObjectIsNullExpectRuntimeException() {
myTestObject.doSomething(null, "testString");
}
Now, this works of course.
But instead of "testString" I'd like to pass in a random String.
So I tried with:
@Test(expected = RuntimeException.class)
public void ifOtherObjectIsNullExpectRuntimeException() {
myTestObject.doSomething(null, Mockito.anyString());
}
But this is not allowed, as I get an org.mockito.exceptions.misusing.InvalidUseOfMatchersException:
... You cannot use argument matchers outside of verifications or stubbing
I do understand the meaning of this, but I wonder whether I can still manage to do what I want without parameterizing my test or the like.
The only libraries I may use are Junit, AssertJ, Mockito and Powermock.
Any ideas?
Tests should be deterministic. Using random values in a test makes it difficult to reproduce behavior when debugging a failed test. I suggest that you just create a String constant for the test, such as "abcdefg".
Well, like Mockito is trying to tell you via that exception, that's not really how you'd use anyString. Such methods are only to be used by mocks.
So, why not try testing with an actual random string? My personal favorite in such a scenario: java.util.UUID.randomUUID().toString(). This will virtually always generate a brand new string that has never been used for your test before.
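Applied to the test above, that would look like this (no Mockito involved):

@Test(expected = RuntimeException.class)
public void ifOtherObjectIsNullExpectRuntimeException() {
    // A fresh, effectively unique string on every run.
    myTestObject.doSomething(null, java.util.UUID.randomUUID().toString());
}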
I'd also like to add that if you are writing tests for your SomeObject class that you should avoid mocking SomeObject's behavior. From your example, you weren't exactly doing that, but it looked like you might be going down that route. Mock the dependencies of the implementation you're trying to test, not the implementation itself! This is very important; otherwise you aren't actually testing anything.
You are mixing up concepts here.
All those "mocking" helpers like anyString() are meant to be used when configuring a mock object.
But when you check your testing code:
@Test(expected = RuntimeException.class)
public void ifOtherObjectIsNullExpectRuntimeException() {
myTestObject.doSomething(null, "testString");
}
you will find: there is absolutely no mocking involved for this test. You simply can't use those Mockito calls in that place; because "there is no Mockito" in that place.
And just for the record: no need to go overboard here anyway. Your logic is very clear: when the first argument is null, you throw that exception. Thus it really doesn't matter at all what comes in as the second argument. So thinking for an hour about how to test null with any second argument is, well, in my eyes, a waste of your time.
Final hint: there is java.util.Objects.
And that class has a nice check for null, so my production code only looks like:

public SomeObject doSomething(final OtherObject x, final String something) {
    Objects.requireNonNull(x, "x must not be null");
    Objects.requireNonNull(something, "something must not be null");
    ...
}

The only difference there: requireNonNull() throws NullPointerException instead.
Final finally: some people suggest putting final on every parameter, but I wouldn't do that. It adds no value in 99% of all cases; it just means you have more code to read, for no good reason. But that is a question of style.
EDIT on the comment about having a test to check for potential future changes: you shouldn't do that:
To a certain degree, how your input is verified is an implementation detail. You don't test for implementation details. In other words:
Your method has a certain contract (one that you, for example, specify informally by writing Javadoc that says "throws NPE on null input"). Your tests should verify exactly that current contract. And the contract is: throws if the first argument is null.
And maybe another point of view, as I still think you are wasting your time here: you should make sure that all your interfaces are clear, easy to understand, and easy to use. That they allow users of your code to do the right thing easily, and prevent them from doing wrong things. That is what you should focus on: the quality of your interfaces as a whole!
So instead of worrying how you could write a test for potential future changes; just make sure that your code base is overall consistent.
Well, I do not have much knowledge of Mockito, but you can always create your own random string generator. Maybe that can work, and you can extend it to generate more types of input. For example:
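A minimal sketch of such a generator (length and character set are arbitrary choices):

import java.util.Random;

final class RandomStrings {
    private static final String CHARS = "abcdefghijklmnopqrstuvwxyz0123456789";
    private static final Random RND = new Random();

    // Builds a random string of the requested length from CHARS.
    static String next(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(CHARS.charAt(RND.nextInt(CHARS.length())));
        }
        return sb.toString();
    }
}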
What I am trying to do is to test some Lucene code I wrote, and I would like some information on best practices when using JUnit for testing. Lucene, BTW, is a search engine library which you can use to create a flat-file index of a bunch of data.
So what I would like to test is the creation of this inverted index, then search through the index to verify that some data is present.
My question is in the code:
public class IndexTest {
@Test
public void testWriteIndexFromDB() {
//run test
assertTrue(something in this test); // some test for this method
// Is using a function like so a proper way of writing a test?
checkSomeDataIsReturned();
}
@Test
public void testWriteIndexFromExcelFile() {
//run test
assertTrue(something in this test); // some test for this method
// Is using a function like so a proper way of writing a test?
checkSomeDataIsReturned();
}
@Test
public void testRefreshIndexWithNewData() {
//run test
assertTrue(something in this test); // some test for this method
// Is using a function like so a proper way of writing a test?
checkSomeDataIsReturned();
}
// this function checks that data is returned after writing an index
public void checkSomeDataIsReturned(){ // not a test but does a check anyways
results = myIndex.searchForStuff(some input);
assertTrue(results.length > 0); // if length is zero, there is no data and something went wrong
}
}
To summarize, I have three options to write an index, I am testing that each of them writes. Is the re-usable function that is not a test the proper way to write a test? Or is there a better practice?
It is of course a good thing to write reusable code in tests, but more important than that is writing tests that are easy to understand. In general, the asserts are in the test method itself; moving the asserts to helper methods can make your tests difficult to understand.
One way to write reusable code for checking expectations is using Hamcrest (https://code.google.com/p/hamcrest/wiki/Tutorial) and building matchers (the library also comes with some very useful matchers for collections and stuff like that).
For example, you can write something like this:
public void test_can_index_from_database() {
    // create your index from database
    assertThat(myIndex, containsWord("expected_word_in_index"));
}
The matchers "containsWord(String)" its a matcher that you write using hamcrest and you can re-use this logic in all your test. And with hamcrest you can write really easy to understand test.
Well, good practices such as reusable code are to be used in unit tests, too.
However, please consider that if you find yourself needing to repeat code in unit tests, it may (and often does) mean that your tested methods take on too many responsibilities.
I don't know if that is really the case here, but think about refactoring your code (splitting your tested methods into smaller methods) so you don't feel the need to repeat the same tests all over.
When each method takes on a single responsibility only and delegates the shared code to another method/class, you test that functionality somewhere else, and here you just test (using mocking and spying) whether your method calls the corresponding method/object.
I have some code that consists of a lot (several hundred LOC) of ugly conditionals, i.e.
SomeClass someClass = null;
if("foo".equals(fooBar)) {
// do something possibly involving more if-else statments
// and possibly modify the someClass variable among others...
} else if("bar".equals(fooBar)) {
// Same as above but with some slight variations
} else if("baz".equals(fooBar)) {
// and yet again as above
}
//... lots of more else ifs
} else {
// and if nothing matches it is probably an error...
// so there is some error handling here
}
// Some code that acts on someClass
GenerateOutput(someClass);
Now I had the idea of refactoring this kind of code something along the lines of:
abstract class CheckPerform<S, T, Q> {
    private CheckPerform<S, T, Q> next;

    CheckPerform(CheckPerform<S, T, Q> next) {
        this.next = next;
    }

    protected abstract T perform(S arg);
    protected abstract boolean check(Q toCheck);

    public T checkPerform(S arg, Q toCheck) {
        if (check(toCheck)) {
            return perform(arg);
        }
        // Check if this CheckPerform is the last in the chain...
        return next == null ? null : next.checkPerform(arg, toCheck);
    }
}
And for each if statement, generate a subclass of CheckPerform, e.g.
class CheckPerformFoo extends CheckPerform<SomeInput, SomeClass, String> {
    CheckPerformFoo(CheckPerform<SomeInput, SomeClass, String> next) {
        super(next);
    }

    protected boolean check(String toCheck) {
        // same check as in the if-statement with "foo" above
        return "foo".equals(toCheck);
    }

    protected SomeClass perform(SomeInput arg) {
        // Perform same actions (as in the "foo" if-statement)
        // and return a SomeClass instance (that is in the
        // same state as in the "foo" if-statement)
    }
}
I could then inject the different CheckPerforms into each other so that the same order of checks is made and the corresponding actions taken. And in the original class I would only need to inject one CheckPerform object. Is this a valid approach to this type of problem? The number of classes in my project is likely to explode, but at least I will get more modular and testable code. Should I do this some other way?
Since these if-else-if-...-else-if-else statements are what I would call a recurring theme of the code base, I would like to do this refactoring as automagically as possible. So what tools could I use to automate it?
a) Some customizable refactoring feature hidden somewhere in an IDE that I have missed (either in Eclipse or IDEA preferably)
b) Some external tool that can parse Java code and give me fine grained control of transformations
c) Should I hack it myself using Scala?
d) Should I manually go over each class and do the refactoring using the features I am familiar with in my IDE?
Ideally the output of the refactoring should also include some basic test code template that I can run (preferably also test cases for the original code that can be run on both new and old as a kind of regression test... but that I leave for later).
Thanks for any input and suggestions!
What you have described is the Chain of Responsibility Pattern and this sounds like it could be a good choice for your refactor. There could be some downsides to this.
Readability: Because you are going to be injecting the order of the CheckPerformers using Spring or some such, it is difficult to see what the code will actually do at first glance.
Maintenance: If someone after you wants to add a new condition, then as well as adding a whole new class, they also have to edit some Spring config. Choosing the correct place to add their new CheckPerformer could be difficult and error-prone.
Many classes: Depending on how many conditions you have and how much repeated code there is within those conditions, you could end up with a lot of new classes. Even though the long list of if-else isn't very pretty, the logic is in one place, which again aids readability.
To answer the more general part of your question, I don't know of any tools for automatic refactoring beyond basic IDE support, but if you want to know what to look for to refactor, have a look at the Refactoring catalog. The specifics of your question are covered by Replace Conditional with Polymorphism and Replace Conditional with Visitor.
To me the easiest approach would involve a Map<String, Action>, i.e. mapping various strings to specific actions to perform. This way the lookup would be simpler and more performant than the manual comparison in your CheckPerform* classes, getting rid of much duplicated code.
The actions can be implemented similar to your design, as subclasses of a common interface, but it may be easier and more compact to use an enum with overridden method(s). You may see an example of this in an earlier answer of mine.
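A hedged sketch of the map-based dispatch, using the question's SomeClass placeholder and Java 8 lambdas rather than the enum variant (the Dispatcher name and handler bodies are illustrative):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

class Dispatcher {
    private final Map<String, Supplier<SomeClass>> actions = new HashMap<>();

    Dispatcher() {
        actions.put("foo", () -> { /* same work as the "foo" branch */ return new SomeClass(); });
        actions.put("bar", () -> { /* same work as the "bar" branch */ return new SomeClass(); });
        actions.put("baz", () -> { /* same work as the "baz" branch */ return new SomeClass(); });
    }

    SomeClass dispatch(String fooBar) {
        Supplier<SomeClass> action = actions.get(fooBar);
        if (action == null) {
            // nothing matched: probably an error, as in the original else branch
            throw new IllegalArgumentException("Unknown key: " + fooBar);
        }
        return action.get();
    }
}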
Unfortunately I don't know of any automatic refactoring which could help you much in this. Earlier when I did somewhat similar refactorings, I wrote unit tests and did the refactoring step-by-step, manually, using automated support at the level of Move Method et al. Of course since the unit tests were pretty similar to each other in their structure, I could reuse part of the code there.
Update
@Sebastien pointed out in his comment that I missed the possible sub-ifs within the bigger if blocks. One can indeed use a hierarchy of maps to resolve this. However, if the hierarchy starts to be really complex, with a lot of duplicated functionality, a further improvement might be to implement a DSL, to move the whole mapping out of code into a config file or DB. In its simplest form it might look something like:
foo -> com.foo.bar.SomeClass.someMethod
biz -> com.foo.bar.SomeOtherClass.someOtherMethod
baz -> com.foo.bar.YetAnotherClass.someMethod
bar -> com.foo.bar.SomeOtherClass.someMethod
biz -> com.foo.bar.DifferentClass.aMethod
baz -> com.foo.bar.AndAnotherClass.anotherMethod
where the indented lines configure the sub-conditions for each bigger case.