So long story short, we have some legacy code that causes problems due to static initialization of constants. Some of our tests depend on that and we would like to isolate them into separate JVM instances.
I know that this is fairly easy to do with plain Maven Surefire:
<forkCount>1</forkCount>
<reuseForks>false</reuseForks>
In theory the above configuration should fork off a new JVM for each test class. As I take it, this should get rid of our problems, since each test class then runs in a fresh JVM instance, and hence all the static initialization/class loading is done again.
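As an illustration (not from the original question), here is a contrived JUnit 4 sketch of the kind of static initialization that only behaves when each test class gets a fresh JVM; LegacyConfig and the endpoint property are made-up names:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical legacy class: ENDPOINT is fixed once per JVM, at class-load time.
class LegacyConfig {
    static final String ENDPOINT = System.getProperty("endpoint", "prod");
}

public class EndpointTest {
    @Test
    public void usesTestEndpoint() {
        // Takes effect only if LegacyConfig has not been loaded yet in this JVM.
        System.setProperty("endpoint", "test");
        // Passes in a fresh JVM; fails if an earlier test already triggered class loading.
        assertEquals("test", LegacyConfig.ENDPOINT);
    }
}

With reuseForks set to false, LegacyConfig is loaded anew for every test class, so tests like this stay independent.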
So far so good. Unfortunately, we are using tycho-surefire (0.16), which does not seem to have that option. My question is whether there is any trick that would allow us to overcome this problem.
For example, how does the parallel option of the JUnit runner provider work for Tycho?
<parallel>classes</parallel>
<useUnlimitedThreads>true</useUnlimitedThreads>
Would the above configuration achieve a similar result? Is there any guarantee that each test class will run in its own JVM? I assume that if we allow unlimited threads, the number of threads will equal the number of test classes when our parallelism granularity is "classes".
I hope there is someone who can help me a bit with this mess.
+++++++++++++++++++++++++++ Some New Findings +++++++++++++++++++++++++++++++++
Interestingly enough, the following options fix the problem.
<threadCount>10</threadCount>
<perCoreThreadCount>true</perCoreThreadCount>
<parallel>classes</parallel>
I really cannot explain to myself why this is the case. These options do not fork a separate JVM for each test class; each class actually runs in a separate thread within the same JVM. Forking a JVM is not possible, as it does not seem to be supported by tycho-surefire. Our main problems stem from the construction of the Eclipse OSGi container, which is built with the statically initialized values that are causing trouble. Could it be that when you parallelize tests this way in Tycho, it actually forks the JVM, or does something strange that reconstructs the OSGi container and reloads certain classes?
Could that be the reason the problem disappears? All of this seems quite strange. I guess I should take a peek at the tycho-surefire source code.
There is currently no version of Tycho which supports forking more than one VM. The feature request is tracked as bug 380171.
I don't think Surefire will execute each test suite class in a separate JVM.
<parallel>classes</parallel>
If the above property is set, the JVM is launched once, and the runner spawns as many threads as there are test suite classes; the test methods within each class run sequentially.
If you are using static utility methods in such a case, they are highly likely the root cause of your troubles :)
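To illustrate (a made-up example, not from the question): SimpleDateFormat is a classic static-utility trap, because the class is not thread-safe and a single static instance is shared by all parallel test threads:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical utility: one static SimpleDateFormat shared by every thread.
final class DateUtil {
    private static final SimpleDateFormat FORMAT = new SimpleDateFormat("yyyy-MM-dd");

    static Date parse(String s) throws ParseException {
        // Races when two test classes parse concurrently; results can be garbage.
        return FORMAT.parse(s);
    }
}

Creating the format per call, or holding it in a ThreadLocal, would remove the shared state.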
I have been assigned to test a MongoDB database from a Java backend. I was told that I had to create the database entirely from a script for this task.
But I have difficulty understanding the benefit of creating a database from scratch with a script instead of having a permanent test database, since I imagine data should be inserted on startup and cleaned on teardown in both cases.
Why is it beneficial, from a testing perspective, to create and delete the database when testing?
Sometimes tests fail and therefore it may happen that the teardown phase will never be reached.
Furthermore, deleting the database is the simplest and most thorough way to clean it, although perhaps not the most efficient. It guarantees that you do not forget something in your cleanup routine.
And in particular for performance tests it is important that the database is in exactly the same state for each run; otherwise the run times cannot be compared with each other. An apparent improvement in a subsequent run could be caused merely by tablespaces having already been grown, or similar effects, and not because the code optimisation worked …
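As a sketch of the create-and-drop pattern, assuming a local MongoDB instance, the MongoDB Java driver, and JUnit 4 (database, collection and field names below are invented):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class UserRepositoryTest {
    private MongoClient client;
    private MongoDatabase db;

    @Before
    public void setUp() {
        client = MongoClients.create("mongodb://localhost:27017"); // assumed local test instance
        db = client.getDatabase("myapp_test");                     // hypothetical database name
        db.getCollection("users").insertOne(new Document("name", "alice")); // seed a known state
    }

    @After
    public void tearDown() {
        db.drop();       // dropping the whole database guarantees nothing is left behind
        client.close();
    }

    @Test
    public void findsSeededUser() {
        long count = db.getCollection("users").countDocuments(new Document("name", "alice"));
        Assert.assertEquals(1, count);
    }
}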
Most of the time a test means a predefined environment and an expected reaction of that environment to our assumed states. To verify this, we need a process that is as automated and repeatable as possible, without interference from manual setup or configuration.
In the software development process we try to cover as many test cases as possible for the QA of the product. With that many test cases, each one should be isolated from the others. If a test is not well isolated, its result may vary from one execution round to the next, eventually invalidating the testing process.
They need not be. However:
You lose portability.
You don't have a known start state for your test.
We have a weird problem. We are using an automatic test tool whose DSL is implemented in Scala. The system we test with this tool is written in Java, and the interface between the two components is RMI. Indeed, the interface part of the automatic test tool is also Java (the rest is Scala). We have full control of the source code of these components.
We already have on the order of a thousand test cases. We execute these test cases automatically once every night, using Jenkins on a Linux server. The problem is that we sporadically receive a java.lang.NoClassDefFoundError. This typically happens when trying to access Java artifacts from Scala code.
If we execute the same test case manually, or check the result of the next nightly run, the problem typically resolves itself, but sometimes it happens again in a completely different place. In some runs no such problem appears at all. The biggest problem is that the error is not reproducible; furthermore, as it happens during an automatic run, we have hardly any information about the exact circumstances, just the test case and the log.
Has somebody already encountered such a problem? Do you have any idea how to proceed? Any hint or piece of information would be helpful, not only a complete solution. Thank you!
I found the reason for the error (99% sure). We had the following 2 Jenkins jobs:
Job1: Performs a full clean build of the tested system, written in Java, then performs a full clean build of the DSL, and finally executes the test cases. This is a long-running job (~5 hours).
Job2: Performs a full clean build of the tested system, and then executes something else on it. The DSL is not involved. This is a shorter job (~1 hour).
We have one single Maven repository for all the jobs. Furthermore, some parts of the tested system are part of the interface between the two components.
Considering the timestamps, the following happened:
Job1 performed the full build of both components, and started a test suite containing several test cases, whose execution lasts about half an hour.
The garbage collector might have swept out the components not used yet.
Job2 started its build, and it also rebuilt the interface parts, including the one swept out by Job1's garbage collector.
Job1 reached a test case which uses an interface component that had already been swept out.
The solution was the following: we moved Job2 to an earlier time; now it finishes the job before Job1 starts the tests.
All,
Recently I developed a class that is supposedly thread-safe. I say 'supposedly' because, even after using synchronized blocks, immutable data structures and concurrent classes, I was not able to test the code for some cases because of the JVM's thread scheduling; i.e. I only had test cases on paper but could not replicate the same test environment. Are there any specific guidelines the experienced members here can share about how to test code in a multi-threaded environment?
First thing is, you can't ensure only with testing that your class is fully thread-safe. Whatever tests you run on it, you still need to have your code reviewed by as many experienced eyes as you can get, to detect subtle concurrency issues.
That said, you can devise specific test scenarios to try to cover all possible inter-thread timing scenarios, as you did. For ideas on this (and for designing thread-safe classes in general), it is recommended to read Java Concurrency in Practice.
Moreover, you can run stress tests, executing many threads simultaneously over an extended period of time. The number of threads should be way over the reasonable limit, to make sure that thread contention happens often; this raises the chances of potential concurrency bugs manifesting over time.
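A rough sketch of such a stress test with JUnit 4; Counter is a stand-in for whatever class you are actually testing, and the thread and iteration counts are arbitrary:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import org.junit.Assert;
import org.junit.Test;

public class CounterStressTest {
    // Hypothetical class under test; replace with your own supposedly thread-safe class.
    static class Counter {
        private final AtomicInteger value = new AtomicInteger();
        void increment() { value.incrementAndGet(); }
        int get() { return value.get(); }
    }

    @Test
    public void hammerWithManyThreads() throws Exception {
        final int threads = 200;      // deliberately far beyond a "reasonable" thread count
        final int iterations = 10_000;
        final Counter counter = new Counter();
        final CountDownLatch start = new CountDownLatch(1);
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                start.await(); // release all threads at once to maximize contention
                for (int j = 0; j < iterations; j++) counter.increment();
                return null;
            });
        }
        start.countDown();
        pool.shutdown();
        Assert.assertTrue(pool.awaitTermination(1, TimeUnit.MINUTES));
        Assert.assertEquals(threads * iterations, counter.get()); // lost updates fail here
    }
}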
Also, another thing I would recommend is to use code coverage measuring tools and set a high standard as your goal. For example, set a high goal for modified condition/decision coverage.
We use GroboUtils to create multi-threaded tests.
If you have code that you plan to test in order to make it reliable, then make it single threaded.
Threading should be reserved for code that either doesn't particularly need to work, or is simple enough to be statically analysed and proven correct without testing.
The root of our problem is Singletons. But Singletons are hard to break and in the meantime we have a lot of unit tests that use Singletons without being careful to completely clear them in the tearDown() method. I figure that a good way to detect tests like these is to look for memory leaks. If the memory used after tearDown() and System.gc() is more than when the test started, either the test leaked or more classes were loaded by the classloader. Is there any way to automatically detect this sort of problem?
Could you introduce a subclass, between TestCase and your individual test classes, that did the cleanup? Then subclasses would only be responsible for calling super.tearDown() - and only those that had a tearDown() of their own.
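Something along these lines, in JUnit 3 style; SingletonRegistry is a hypothetical helper standing in for whatever actually resets your singletons:

import junit.framework.TestCase;

// Invented helper: knows how to clear every singleton the tests touch.
final class SingletonRegistry {
    static void resetAll() {
        // clear each known singleton here
    }
}

// Intermediate base class between TestCase and the individual test classes.
abstract class CleanSingletonTestCase extends TestCase {
    @Override
    protected void tearDown() throws Exception {
        SingletonRegistry.resetAll();
        super.tearDown();
    }
}

// A concrete test only needs to remember super.tearDown() if it adds its own:
class SomeLegacyTest extends CleanSingletonTestCase {
    @Override
    protected void tearDown() throws Exception {
        // test-specific cleanup here ...
        super.tearDown(); // delegates the singleton cleanup
    }
}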
I completely agree with other posters that monitoring the memory usage isn't a viable way to track this - System.gc() is not going to behave as you expect, or with enough precision to achieve your goal.
You're going to need a tool that lets you inspect the reference graph and show allocation call stacks.
I've used OptimizeIt from Borland and JProfiler from ej-technologies, both with success (a quick Google search reveals that OptimizeIt may now be dead).
There's also the possibility of using JVMTI to throw together a better monitor for this specific problem.
Edit: Weird, but as I was reviewing this answer, I got a phone call from Embarcadero, who has apparently purchased OptimizeIt, done some updating, and is now marketing it under the name J Optimizer.
Just a thought: if you run two empty tests right after one another, the second one should not show different memory usage after tearDown(). If it does, you (probably) have a leak somewhere in your setUp()/tearDown() system.
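If you want to try this despite the System.gc() caveats raised elsewhere in this thread, a rough helper might look like the following; treat the numbers as noisy estimates, never exact values:

// Sketch only: gc() is merely a hint, so measurements will fluctuate between runs.
public final class MemoryCanary {
    private MemoryCanary() {}

    public static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // request (not guarantee) a collection before measuring
        return rt.totalMemory() - rt.freeMemory();
    }
}

You would record usedHeap() in setUp(), compare in tearDown(), and treat only large, repeatable growth as a signal.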
I don't think this is a good approach. System.gc() is not guaranteed to fully clean up any unused objects as you think it will.
If your root problem is that you have unit tests which end up using global data (singletons) without properly cleaning them up, you should attack the root problem: those unit tests. It shouldn't be too hard to find all tests that aren't using tearDown(), or to find all tests that use a particular singleton.
If your Singletons are only intended to be initialized once, you could have code that checks for reinitialization and logs the current stack when it detects that. Then if you check the stack, you will see which test got the ball rolling, and you can check the JUnit logs to see which test ran right before that.
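A sketch of such a guard (AppContext is a hypothetical singleton; the Throwable exists purely to capture the current stack in the log):

import java.util.logging.Level;
import java.util.logging.Logger;

public final class AppContext {
    private static final Logger LOG = Logger.getLogger(AppContext.class.getName());
    private static AppContext instance;

    public static synchronized void initialize() {
        if (instance != null) {
            // Log the current stack so the offending test shows up in the JUnit logs.
            LOG.log(Level.WARNING, "AppContext re-initialized",
                    new Throwable("re-initialization stack"));
        }
        instance = new AppContext();
    }

    private AppContext() {}
}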
In terms of solving this problem more thoroughly, instead of detecting it, I would recommend a singleton initializer that remembers what it initialized and has one teardown method that tears down everything it initialized. That way tests can be made to initialize only via this class, and only have to do one thing in tearDown().
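A minimal version of such an initializer might track reset actions and undo them all in one place (the class and method names here are invented):

import java.util.ArrayDeque;
import java.util.Deque;

public final class SingletonInitializer {
    private static final Deque<Runnable> RESETS = new ArrayDeque<>();

    // Every singleton a test initializes registers a matching reset action.
    public static synchronized void register(Runnable reset) {
        RESETS.push(reset);
    }

    // The one call tests make in tearDown(): undo in reverse initialization order.
    public static synchronized void tearDownAll() {
        while (!RESETS.isEmpty()) {
            RESETS.pop().run();
        }
    }

    private SingletonInitializer() {}
}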
I also think Carl Manaster's suggestion is a good one, but if you were using JUnit4, you could have a tearDown method that runs in the superclass without having to remember to call super. Unless you use the JUnit3 GUI, JUnit4 should be a drop-in replacement. The only catch is that to take advantage of its new features you have to migrate the whole test class; you can't have both styles live in the same class. So tests that interact with these singletons would have to be migrated one whole test class at a time.
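A small JUnit 4 sketch of that arrangement; JUnit 4 runs superclass @After methods automatically, after the subclass's own, so no super call is needed:

import org.junit.After;
import org.junit.Test;

abstract class SingletonCleanupTest {
    @After
    public void cleanUpSingletons() {
        // reset global/singleton state here
    }
}

public class SomeMigratedTest extends SingletonCleanupTest {
    @Test
    public void doesSomethingWithASingleton() {
        // ... test body; cleanUpSingletons() runs afterwards without any super call
    }
}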
You could use the Eclipse Memory Analyzer to automate analyzing heap dumps taken after each test or probably better after all tests. MAT can find memory leaks fairly automatically.
I am writing a simple checkers game in Java. When I mouse over the board my processor ramps up to 50% (100% on a core).
I would like to find out what part of my code (assuming it's my fault) is executing during this.
I have tried debugging, but step-through debugging doesn't work very well in this case.
Is there any tool that can tell me where my problem lies? I am currently using Eclipse.
This is called "profiling". Your IDE probably comes with one: see Open Source Profilers in Java.
Use a profiler (e.g. YourKit).
Profiling? I don't know what IDE you are using, but Eclipse has a decent profiler, and there is also a list of some open-source profilers at java-source.
In a nutshell, profilers will tell you which parts of your program are called how often.
I don't profile my programs much, so I don't have too much experience, but I have played around with the NetBeans IDE profiler when I was testing it out. (I usually use Eclipse as well. I will also look into the profiling features in Eclipse.)
The NetBeans profiler will tell you which thread was executing for how long, and which methods have been called and for how long, and will give you bar graphs showing how much time each method has taken. This should give you a hint as to which method is causing problems. You can take a look at the Java profiler that the NetBeans IDE provides, if you are curious.
Profiling is a technique usually used to measure which parts of a program are taking up the most execution time, which in turn can be used to evaluate whether or not performing optimizations would be beneficial to the program's performance.
Good luck!
1) It is your fault :)
2) If you're using Eclipse or NetBeans, try using the profiling features -- they should pretty quickly tell you where your code is spending a lot of time.
3) failing that, add console output where you think the inner loop is -- you should be able to find it quickly.
Yes, there are such tools: you have to profile the code. You can either try TPTP in eclipse or perhaps try JProfiler. That will let you see what is being called and how often.
Use a profiler. There are many. Here is a list: http://java-source.net/open-source/profilers.
For example, you can use JIP, a Java-coded profiler.
Clover will give a nice report showing hit counts for each line and branch. For example, this line was executed 7 times.
Plugins for Eclipse, Maven, Ant and IDEA are available. It is free for open source, or you can get a 30 day evaluation license.
If you're using Sun Java 6, then the most recent JDK releases come with JVisualVM in the bin directory. This is a capable monitoring and profiling tool that will require very little effort to use - you don't even need to start your program with special parameters - JVisualVM simply lists all the currently running java processes and you choose the one you want to play with.
This tool will tell you which methods are using all the processor time.
There are plenty of more powerful tools out there, but have a play with a free one first. Then, when you read about what other features are available out there, you'll have an inkling of how they might help you.
This is a typical 'high CPU' problem.
There are two kinds of high-CPU problems:
a) One thread is using 100% of one core (this is your scenario).
b) CPU usage is 'abnormally high' when we execute certain actions. In such cases the CPU may not be at 100%, but it will be abnormally high. Typically this happens when the code contains CPU-intensive operations like XML parsing or serialization/de-serialization.
Case (a) is easy to analyze. When you experience 100% CPU, take 5-6 thread dumps at 30-second intervals. Look for a thread which is active (in "runnable" state) and which stays inside the same method (you can infer that by comparing the thread stacks). Most probably you will see a 'busy wait' (see the code below for an example):
while (true) {
    if (status) break;
    // Thread.sleep(60000); // such a statement would have avoided the busy wait
}
Case (b) can also be analyzed using thread dumps taken at equal intervals. If you are lucky, you will be able to find the problem code; if you cannot identify it from the thread dumps, you need to resort to profilers. In my experience the YourKit profiler is very good.
I always try thread dumps first; profilers are only a last resort. In 80% of cases the problem can be identified from thread dumps alone.
Or use JUnit test cases and a code coverage tool for some common components of yours. If there are components that call other components, you'll quickly see those executed many more times.
I use Clover with JUnit test cases, but for open-source, I hear EMMA is pretty good.
In single-threaded code, I find adding some statements like this:
System.out.println("A: "+ System.currentTimeMillis());
is simpler than, and as effective as, using a profiler. You can soon narrow down the part of the code causing the problem.
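For instance, bracketing a suspect call with timestamps quickly shows whether that region is where the time goes (doSuspectWork is a placeholder for your own code):

public class TimingProbe {
    public static void main(String[] args) {
        long t0 = System.currentTimeMillis();
        doSuspectWork(); // hypothetical stand-in for the code under suspicion
        System.out.println("doSuspectWork took " + (System.currentTimeMillis() - t0) + " ms");
    }

    static void doSuspectWork() {
        for (int i = 0; i < 1_000_000; i++) Math.sqrt(i); // placeholder workload
    }
}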