CacheManager.clearAll throws "CacheManager has been shut down" in a JUnit program - java

I am working on a project in which most of the JUnit tests are failing. My job is to fix them and make them run. I have fixed around 200 JUnit classes, but around 136 tests are still failing and I have no idea why; sometimes they fail and sometimes they pass. I tried to drill down into the problem and it is Ehcache: it is being shut down.
Can anybody please explain why this exception occurs in JUnit testing, and why it does not happen every time?
Please note we also have test cases for "Action" classes (which deal with the ServletContext),
but interestingly all the action test classes pass.
The error message is:
java.lang.IllegalStateException: The CacheManager has been shut down. It can no longer be used.
at net.sf.ehcache.CacheManager.checkStatus(CacheManager.java:1504)
at net.sf.ehcache.CacheManager.getCacheNames(CacheManager.java:1491)
at net.sf.ehcache.CacheManager.clearAll(CacheManager.java:1526)

Some part of your code is shutting down the cache manager (probably in the tear down of the unit test) and then trying to clear the cache. You can see this in the stack trace:
at net.sf.ehcache.CacheManager.clearAll(CacheManager.java:1526)
But once the manager is shut down you cannot invoke operations on it. Without seeing the code of one of the unit tests, it's hard to be more specific.
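For illustration, a minimal sketch (hypothetical test class, assuming Ehcache 2.x and JUnit 4) of the kind of interaction that produces this error:

import net.sf.ehcache.CacheManager;
import org.junit.After;
import org.junit.Test;

public class ExampleDaoTest {

    // One CacheManager instance shared by every test in the JVM
    // (in a real project this often comes from a cached Spring context).
    private static final CacheManager CACHE_MANAGER = CacheManager.getInstance();

    @After
    public void tearDown() {
        // Shuts the shared instance down after EVERY test...
        CACHE_MANAGER.shutdown();
    }

    @Test
    public void firstTest() {
        CACHE_MANAGER.clearAll();
    }

    @Test
    public void secondTest() {
        // ...so whichever test runs second hits:
        // java.lang.IllegalStateException: The CacheManager has been shut down.
        CACHE_MANAGER.clearAll();
    }
}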

I figured out what went wrong. One of the JUnit tests, a DAO test, was taking around 40 to 50 minutes to complete. In that time the CacheManager session timed out, and when other JUnit tests tried to access it I got this error.
I fixed the JUnit test, basically by making the DAO query run quicker, and everything works now.

Related

Why is Spring @Transactional unreliable with AspectJ?

TLDR: this project reproduces the issue: https://github.com/moreginger/aspectj-no-tx
Edit: the above is now reproduced without jOOQ, i.e. using plain JDBC.
Edit: Spring bug - https://github.com/spring-projects/spring-framework/issues/28368
I've been trying to use AspectJ to weave Spring @Transactional for a Kotlin codebase. I've seen it working with both load-time and compile-time weaving (the latter using io.freefair.aspectj.post-compile-weaving). However, when I wrote a test asserting that the transaction would be unwound after an exception, something strange happened: the test would sometimes fail. It turned out that it always failed on the first test suite run; then, the second time around, it would run first (due to failing the first time) and pass. After much investigation, a "minimal" case in my codebase is (in the same suite run):
1. Run a test which calls a method marked @Transactional on an @Autowired bean.
2. Run a test in a different class that calls another method marked @Transactional on a different @Autowired bean, which will make some changes and then throw an exception. Assert that the changes made aren't visible. This fails.
Then if you run this suite again it runs the second test first due to failing the first time and passes. It continues to run in this order and pass until something else fails or gradle clean is run. (Note that to avoid the order swapping you can make the first test also fail).
The same thing happens whether I use load-time or compile-time weaving.
How can this be? Especially with compile-time weaving, the transactions have been incorporated into the class files. How can accessing a different @Transactional method cause one that we access later to stop being @Transactional o_O? I've verified that simply removing @Transactional from the first "fixes" the issue.

Spring and JUnit 5 - if a test fails to start, it just gets ignored

Today I noticed something really strange.
When I have an implementation of the interface org.springframework.test.context.TestExecutionListener and it fails in the beforeTestClass method, the whole test class gets skipped.
I understand it's just a listener and it should not be required for running a test, but the test would fail too if it got a chance to run.
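For illustration, a minimal sketch (hypothetical names, assuming JUnit 5 with spring-test) of the setup I mean:

import org.junit.jupiter.api.Test;
import org.springframework.context.annotation.Configuration;
import org.springframework.test.context.TestContext;
import org.springframework.test.context.TestExecutionListener;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.TestExecutionListeners.MergeMode;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;

// A listener that fails before the test class even starts.
class BrokenListener implements TestExecutionListener {
    @Override
    public void beforeTestClass(TestContext testContext) {
        throw new IllegalStateException("listener setup failed");
    }
}

@SpringJUnitConfig(SkippedTest.Config.class)
@TestExecutionListeners(value = BrokenListener.class, mergeMode = MergeMode.MERGE_WITH_DEFAULTS)
class SkippedTest {

    @Configuration
    static class Config {
    }

    @Test
    void wouldFail() {
        // Never executed: as described above, the class is reported as skipped
        // rather than failed, so the CI build stays green.
        throw new AssertionError("this failure is never seen");
    }
}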
So my CI build went green even though a potentially failing test never ran.
Shouldn't TestContextManager log errors instead of warnings in these cases? Or should I rework my test architecture somehow? This is kind of scary to me.
Thanks in advance.

Espresso tests can fail when running in succession, but tend to pass when run individually?

I'm running Espresso tests with JUnit 4 on Android 9. When I run tests individually, they pass successfully. However, when I run them as classes or suites of classes, one or two tests tend to fail (out of ~100). This does not happen every time I try.
When tests do fail, they're never the same ones that failed previously. It is seemingly random. When I rerun such a test individually, it of course passes. This makes it extremely hard to debug. The stack trace looks "normal", in that it is usually a NoMatchingView or AmbiguousView exception, and doesn't mention any other issue.
I've read so many SO posts and articles at this point that I feel dizzy. I've implemented nearly every applicable solution but haven't found a definitive fix. Some of the solutions I've tried that resulted in more stable tests:
Turning off animations via developer options in the emulator
Disabling virtual, on-screen keyboard
Using androidx orchestrator, which clears package data after every test:
testInstrumentationRunnerArguments clearPackageData: 'true'
Adding idling resources for certain screens (see the sketch after this list).
Calling the finishActivity() method on my activityTestRule after every test:
@After
fun finishActivity() {
    mainActivityTestRule.finishActivity()
}
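For reference, the idling-resource approach I mean is roughly this - a minimal sketch in Java for illustration (hypothetical resource name, assuming androidx.test.espresso.idling and JUnit 4):

import androidx.test.espresso.IdlingRegistry;
import androidx.test.espresso.idling.CountingIdlingResource;
import org.junit.After;
import org.junit.Before;

public class ScreenIdlingResourceSetup {

    // In a real app this instance lives somewhere production code can reach it.
    public static final CountingIdlingResource SCREEN_LOADING =
            new CountingIdlingResource("screenLoading");

    @Before
    public void registerIdlingResource() {
        IdlingRegistry.getInstance().register(SCREEN_LOADING);
    }

    @After
    public void unregisterIdlingResource() {
        IdlingRegistry.getInstance().unregister(SCREEN_LOADING);
    }

    // App code calls SCREEN_LOADING.increment() before loading a screen and
    // SCREEN_LOADING.decrement() once it is rendered, so onView(...) matchers
    // don't run against a half-loaded screen.
}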
Generally, these failures tend to happen less when I reinstall the app and/or cold boot the emulator. This makes me suspect it could be app storage or cache. But that seems unlikely when I'm using the orchestrator to clear data. Any thoughts?

Cucumber - How to mark expected fails as known issues?

I successfully use Cucumber to process my Java-based tests.
Sometimes these tests catch regression issues, and it takes time to fix them (depending on issue priority, it can be weeks or even months). So I'm looking for a way to mark some Cucumber tests as known issues. I don't want these tests to fail the entire set of tests; I just want to mark them, for example, as pending with a yellow color in the report instead.
I know that I can specify a @tag for failed tests and exclude them from the execution list, but that's not what I want to do, as I still need these tests to run continuously. Once the issue is fixed, the corresponding test should be green again without any additional tag manipulation.
Some other frameworks provide such functionality (run the test but ignore its result if it fails). Is it possible to do the same trick somehow using Cucumber?
The solution I use now is to mark known issues with a specific tag, exclude these tests from the regular run, and run them separately. But I don't believe that's the best solution.
Any ideas appreciated. Thanks in advance.
I would consider throwing a pending exception in the step that causes the known failure. This would allow the step to be executed and not be forgotten.
I would also consider rewriting the failing steps in such a way that when the failure occurs, it is caught and a pending exception is thrown instead of the actual failure. This would mean that when the issue is fixed and the reason for throwing the pending exception is gone, you have a passing suite.
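For example, a minimal sketch (hypothetical step and message, assuming cucumber-java and JUnit assertions) of wrapping a known failure so the step is reported as pending instead of failed:

import static org.junit.Assert.assertEquals;

import io.cucumber.java.PendingException;
import io.cucumber.java.en.Then;

public class KnownIssueSteps {

    @Then("the order total is {int}")
    public void theOrderTotalIs(int expectedTotal) {
        try {
            // The assertion that currently fails because of the known regression.
            assertEquals(expectedTotal, currentOrderTotal());
        } catch (AssertionError knownIssue) {
            // Report the step as pending instead of failed while the bug is open.
            // Once the fix lands, the assertion passes and this branch is never hit.
            throw new PendingException("Known issue: order total is wrong");
        }
    }

    private int currentOrderTotal() {
        return 0; // placeholder for the real call into the application
    }
}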
Another thing I would work hard on is not allowing a problem to get old. Problems are like kids: when they grow up, they get harder and harder to fix. Fixing a problem while it is young, perhaps a few minutes old, is usually easy. Fixing problems that are months old is harder.
You shouldn't.
My opinion is that if you have tests that fail, you should add a bug/task ticket for these scenarios and add them to a build status page with the related tag.
Another thing you could do is add the ticket number as a tag and remove it after the ticket is fixed.
If you have scenarios that fail due to a bug, then the report should show that; if a scenario is not fully implemented, it is better not to run it at all.
One thing you could do is add a specific tag/name to those scenarios, then, in a before-scenario hook, get the tags, check for that specific tag/name and throw a pending exception.
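For example, a minimal sketch (hypothetical tag name, assuming cucumber-java hooks) of such a before-scenario check:

import io.cucumber.java.Before;
import io.cucumber.java.PendingException;
import io.cucumber.java.Scenario;

public class KnownIssueHooks {

    @Before
    public void flagKnownIssues(Scenario scenario) {
        // @known_issue is a made-up tag; check the scenario's tags for it
        // and report the scenario as pending instead of letting it fail.
        if (scenario.getSourceTagNames().contains("@known_issue")) {
            throw new PendingException("Known issue, tracked separately");
        }
    }
}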
What I suggest is to keep those scenarios running if there is a bug and to document that on the status page.
I think the client would understand better if those scenarios are red because they are failing, rather than showing some yellow color that gives a "grey" picture of what is actually happening.
If you need the status of the run to trigger some CI job, then maybe it is better to change the condition there.
As I see it, the thing you need to give some thought to is: what is the difference between yellow and red, pending and failed, for you or the client? You would want to keep a clear difference and keep track of the real status.
You should address these issues in an email, discuss them with the project team and the QA team, and after a final decision is made, also get feedback from the customer.

Maven-surefire-plugin tests fail in Jenkins build but run successfully locally?

I have a Maven project whose tests are executed by the maven-surefire-plugin. An odd phenomenon I've observed and have been dealing with is that running locally
mvn clean install
which executes my tests, results in a successful build with 0 Failures and 0 Errors.
Now when I deploy this application to our remote repo that Jenkins attempts to build, I get all sorts of random EasyMock errors, typically of the sort:
java.lang.IllegalStateException: 3 matchers expected, 4 recorded. at org.easymock.internal.ExpectedInvocation.createMissingMatchers
This is an inherited legacy application, and we are aware that many of these tests are flawed, if not plainly using EasyMock incorrectly, but I'm in a state where test execution gives me a successful build locally but not in Jenkins.
I know that the order of execution of these tests is not guaranteed, but I am wondering how I can introspect what is different in the Jenkins build pipeline vs. locally to help identify the issue.
Is there anything I can do to force the tests to execute the way they do locally? At this point, I have simply excluded many troublesome test classes, but it seems that no matter how many times I see a Jenkins failure and either fix the problem or exclude the test class, I only find it complaining about some other test class it didn't mention before.
Any ideas how to approach a situation like this?
I have experienced quite a similar situation, and the cause of mine was obviously some concurrency problems in the test implementations.
And, after reading your comment:
What I actually did that fixed it (like magic, am I right?) is, for the maven-surefire plugin, I set the properties reuseForks=false and forkCount=1C, which is just 1 × (number of CPUs of the machine).
... I am more convinced that you have concurrency problems with your tests. Concurrency is not easy to diagnose, especially when your experiment runs OK on one CPU. But race conditions might arise when you run it on another system (which is usually faster or slower).
I strongly recommend that you review your tests one by one and ensure that each of them is logically isolated:
They should not rely upon an expected previous state (files, database, etc.). Instead, they should prepare the proper setup before each execution.
If they concurrently modify a common resource which might interfere with another test's execution (files, database, singletons, etc.), every assertion must be made with as much synchronization as needed, taking into account that the resource's initial state is unknown:
Wrong test:
MySingleton.getInstance().put(myObject);
assertEquals(1, MySingleton.getInstance().size());
Right test:
synchronized (MySingleton.getInstance()) {
    MySingleton.getInstance().put(myObject);
    assertTrue(MySingleton.getInstance().contains(myObject));
}
A good starting point for the review is to pick one of the failing tests and trace the execution backwards to find the root cause of the failure.
Explicitly setting the tests' order is not a good practice, and I wouldn't recommend it even if I knew it was possible, because it would only hide the actual cause of the problem. Keep in mind that, in a real production environment, the execution order is not usually guaranteed.
JUnit test run order is non-deterministic.
Are the versions of Java and Maven the same on the 2 machines? If yes, make sure you're using the most recent maven-surefire-plugin version. Also, make sure to use a Freestyle Jenkins job with a Maven build step instead of the Maven project type. Using the proper Jenkins build type can either fix build problems outright or give you a better error so you can diagnose the actual issue.
You can turn on Maven debug logging to see the order the tests are being run in. Each test should set up (and perhaps tear down) its own test data to make sure the tests can run independently. Perhaps seeing the test order will give you some clues as to which classes depend on others inappropriately. And - if the app uses caching, ensure the cache is cleaned out between tests (or explicitly populated, depending on what the test needs to do). Also consider running the tests one package at a time to isolate the culprits - multiple surefire plugin executions might be useful.
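For example, a minimal sketch (hypothetical cache name, assuming Ehcache 2.x with a defaultCache configured and JUnit 4) of a test that prepares and cleans its own cache state so it doesn't depend on execution order:

import static org.junit.Assert.assertEquals;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class WidgetCacheTest {

    private CacheManager cacheManager;

    @Before
    public void setUp() {
        // Build the state this test needs instead of relying on an earlier test.
        cacheManager = CacheManager.getInstance();
        cacheManager.addCacheIfAbsent("widgets");
        cacheManager.clearAll();
    }

    @After
    public void tearDown() {
        // Leave nothing behind for whichever test happens to run next.
        cacheManager.clearAll();
    }

    @Test
    public void startsFromAnEmptyCache() {
        Cache widgets = cacheManager.getCache("widgets");
        widgets.put(new Element("id-1", "widget one"));
        assertEquals(1, widgets.getSize());
    }
}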
Also check the app for classpath problems. This answer has some suggestions for cleaning the classpath.
And another possibility: Switching to a later version of JUnit might help - unless the app is using Spring 2.5.6.x. If the app is using Spring 2.5.6.x and cannot upgrade, the highest possible version of JUnit 4.x that may be used is 4.4. Later versions of JUnit are not compatible with Spring Test 2.5.6 and may lead to hard-to-diagnose test errors.
