Sporadic java.lang.NoClassDefFoundError in Scala - java

We have a weird problem. We are using an automatic test tool whose DSL is implemented in Scala. The system we test with this tool is written in Java, and the interface between the two components is RMI; accordingly, the interface part of the automatic test tool is also Java (the rest is Scala). We have full control over the source code of all these components.
We already have on the order of a thousand test cases. We execute them automatically once every night, using Jenkins on a Linux server. The problem is that we sporadically receive a java.lang.NoClassDefFoundError. This typically happens when Scala code tries to access a Java artifact.
If we execute the same test case manually, or check the result of the next nightly run, the problem typically disappears on its own, but sometimes it reappears in a completely different place. In some runs no such problem occurs at all. The biggest problem is that the error is not reproducible; furthermore, since it happens during an automatic run, we have hardly any information about the exact circumstances, just the test case and the log.
Has anybody encountered such a problem? Do you have any idea how to proceed? Any hint or piece of information would be helpful, not only a full solution. Thank you!

I found the reason for the error (99% sure). We had the following two Jenkins jobs:
Job1: Performs a full clean build of the tested system, written in Java, then a full clean build of the DSL, and finally executes the test cases. This is a long-running job (~5 hours).
Job2: Performs a full clean build of the tested system, and then executes something else on it. The DSL is not involved. This is a shorter job (~1 hour).
We have one single Maven repository for all the jobs. Furthermore, some parts of the tested system are part of the interface between the two components.
Judging by the timestamps, the following happened:
Job1 performed the full build of both components and started a test suite containing several test cases, whose execution takes about half an hour.
The garbage collector might have swept out the components that had not been used yet.
Job2 started its build and also rebuilt the interface parts, including the ones swept out by Job1's garbage collector.
Job1 reached a test case that uses an interface component which had already been swept out.
The solution was the following: we moved Job2 to an earlier time slot; it now finishes before Job1 starts executing the tests.
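For what it's worth, the underlying mechanism can be reproduced in isolation: the JVM loads classes lazily, so a class whose definition disappears from the classpath after the JVM has started, but before the class is first used, only fails with NoClassDefFoundError at the point of first use. That matches both the sporadic nature of our failures and the fact that they appear in different places. Below is a minimal, hypothetical sketch of that behaviour (all class names invented; it requires a JDK so the runtime compiler is available), not our actual setup:

import javax.tools.ToolProvider;
import java.lang.reflect.InvocationTargetException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class LazyLoadingDemo {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("lazy-demo");
        Files.writeString(dir.resolve("B.java"),
                "public class B { public static String hello() { return \"hi\"; } }");
        Files.writeString(dir.resolve("A.java"),
                "public class A { public static String use() { return B.hello(); } }");

        // Compile both classes, then simulate the concurrent rebuild by removing B.class.
        ToolProvider.getSystemJavaCompiler().run(null, null, null,
                dir.resolve("A.java").toString(), dir.resolve("B.java").toString());
        Files.delete(dir.resolve("B.class"));

        try (URLClassLoader loader =
                     new URLClassLoader(new URL[]{dir.toUri().toURL()}, null)) {
            Class<?> a = Class.forName("A", true, loader); // loading A itself still succeeds
            a.getMethod("use").invoke(null);               // the first actual use of B fails here
        } catch (InvocationTargetException e) {
            System.out.println("Got: " + e.getCause());    // java.lang.NoClassDefFoundError: B
        }
    }
}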

Related

How to improve GroovyClassLoader.loadClass performance?

I'm using Groovy 3.0.6 in my Java project as a scripting language, allowing users to extend the application at runtime via Groovy scripting. Some users have several hundred script snippets. I'm using GroovyClassLoader to load them (also CompileStatic, but that seems to have negligible impact).
My problem is that the parseClass method is extremely slow (more than 1 second compile time per script). Here's the profiler output of a JUnit test that compiles a bunch of scripts:
[Screenshot: the call tree.] Note that I let the test run for 724 seconds and groovy is consuming 87% of the time for compilation. It strikes me as odd that LexerATNSimulator.getReachableConfigSet consumes so much time...
[Screenshot: the hot spots of the test run.] Unsurprisingly, nothing but groovy (and java.util) here.
The source code is given to the GroovyClassLoader.parseClass(...) method as a regular String. I do have a custom code cache in place, but the compilation of each individual script hits my application hard when there are hundreds of snippets to compile.
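For context, the compile-and-cache setup described above amounts to roughly the following hedged sketch (the ScriptCache class and its layout are invented for illustration, not the actual code):

import groovy.lang.GroovyClassLoader;
import org.codehaus.groovy.control.CompilerConfiguration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ScriptCache {
    private final GroovyClassLoader gcl;
    private final Map<String, Class<?>> compiled = new ConcurrentHashMap<>();

    public ScriptCache() {
        CompilerConfiguration config = new CompilerConfiguration();
        // compilation customizers (e.g. for CompileStatic) would be registered here
        this.gcl = new GroovyClassLoader(Thread.currentThread().getContextClassLoader(), config);
    }

    // Compiles a snippet only on its first request; later calls return the cached class.
    public Class<?> get(String scriptSource) {
        return compiled.computeIfAbsent(scriptSource, src -> gcl.parseClass(src));
    }
}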
Is there anything I can do to reduce compilation times? They're crippling for the user experience at the moment.

Why should a test database be created/deleted when testing?

I have been assigned to test a MongoDB database from a Java backend. I was told that I had to create the database entirely from a script for this task.
But I have difficulty understanding the benefit of creating a database from scratch with a script instead of having a permanent test database. In both cases I imagine data would be inserted on startup and cleaned up on teardown.
Why is it beneficial, from a testing perspective, to create and delete a database when testing?
Sometimes tests fail, and therefore the teardown phase may never be reached.
Furthermore, deleting a database is the simplest and most thorough way to clean it, although perhaps not the most efficient. It guarantees that you do not forget something in your cleanup routine.
And in particular for performance tests it is important that the database is in exactly the same state for each run; otherwise the run times cannot be compared with each other. An improvement in a later run could have been caused simply because tablespaces had already been extended, or similar effects, and not because the code optimisation worked.
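To make that concrete, here is a hedged JUnit 5 sketch of the create-from-scratch approach, assuming the synchronous MongoDB Java driver and a local test server (the connection string, database and collection names are placeholders):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class OrderRepositoryIT {
    private MongoClient client;
    private MongoDatabase db;

    @BeforeEach
    void setUp() {
        client = MongoClients.create("mongodb://localhost:27017");
        db = client.getDatabase("orders_test");
        db.drop();   // defensive: remove leftovers of a run whose teardown never ran
        db.getCollection("orders").insertOne(new Document("sku", "abc").append("qty", 1));
    }

    @AfterEach
    void tearDown() {
        db.drop();   // dropping the whole database is the simplest complete cleanup
        client.close();
    }

    @Test
    void findsSeededOrder() {
        Assertions.assertEquals(1, db.getCollection("orders").countDocuments());
    }
}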
Most of the time, a test means a predefined environment and an expected reaction of that environment to our assumed states. To verify this, we need a process that is as automated and repeatable as possible, without manual setup or configuration interfering.
In the software development process we try to cover as many test cases as possible for the QA of the product. When there are that many test cases, each one should be isolated from the others. If they are not well isolated, the results may vary from one execution round to the next, eventually invalidating the testing process.
They need not be. However:
You lose portability.
You don't have a known start state for your test.

JMH forks, threads and debug

I started working with JMH recently and wondered whether there is a way to debug it.
The first thing I did was try to debug it like any other program, but it threw "No transports initialized", so I couldn't debug it the old-fashioned way.
The next thing I did was search the internet, where I found someone saying you need to set the forks to 0; I tried it and it worked.
Unfortunately, I couldn't really understand why the forks affect debugging, or how forks change what I see in JMH.
All I know so far is that .forks(number) on the OptionsBuilder specifies how many processes the benchmark will run in. But if I put .forks(3), does it run each @Benchmark method in 3 processes asynchronously?
An explanation of .forks(number) and .threads(number), how they change the way benchmarks run and how they affect debugging, would really clear things up.
A normal JMH run has two processes running at any given time. The first ("host") process handles the run. The second ("forked") process is the one that runs a single trial of a given benchmark, which is how isolation is achieved. Requesting N forks, either via annotation or command line (which takes precedence over annotations), makes N consecutive invocations of the forked process. Requesting zero forks runs the workload straight in the host process itself.
So, there are two things to do with debugging:
a) Normally, you can attach to the forked process and debug it there. It helps to configure the workload to run longer, so you have plenty of time to attach and look around. The forked process usually has ForkedMain as its entry point, visible in jps.
b) If the above does not work, ask for -f 0 and attach to the host process itself.
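To illustrate option b) in code, here is a hedged sketch that runs a benchmark with zero forks from a plain main() method, so an ordinary IDE debugger attached to that process hits breakpoints inside the @Benchmark method (class and method names are placeholders, and scores measured without forking are not representative):

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class DebugRun {

    @Benchmark
    public int sumUnderTest() {
        int s = 0;
        for (int i = 0; i < 1_000; i++) {
            s += i;
        }
        return s; // returning the result keeps the loop from being dead-code eliminated
    }

    public static void main(String[] args) throws RunnerException {
        Options opts = new OptionsBuilder()
                .include(DebugRun.class.getSimpleName())
                .forks(0)   // same effect as -f 0: no forked VM, so breakpoints hit directly
                .build();
        new Runner(opts).run();
    }
}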
This was quite a pain in the soft side to get to work (now that I wrote this, it sounds trivial)
I was trying to debug DTraceAsmProfiler (perfasm, but for Mac), using Gradle.
I have a Gradle task called jmh that looks like this:
task jmh(type: JavaExec, dependsOn: jmhClasses) {
    // the actual class to run
    main = 'com.eugene.AdditionPollution'
    classpath = sourceSets.jmh.compileClasspath + sourceSets.jmh.runtimeClasspath

    // I build JMH from sources, thus need this
    repositories {
        mavenLocal()
        maven {
            url '/Users/eugene/.m2/repository'
        }
    }
}
Then I needed to create a Debug Configuration in IntelliJ (nothing unusual), and then simply run:
sudo gradle -Dorg.gradle.debug=true jmh --debug-jvm

JVM instance per test for Tycho - surefire

So long story short, we have some legacy code that causes problems due to static initialization of constants. Some of our tests depend on that and we would like to isolate them into separate JVM instances.
I know that this is fairly easy to do in plain maven-surefire:
<forkCount>1</forkCount>
<reuseForks>false</reuseForks>
In theory the above configuration should fork a new JVM for each test class. As I take it, this should get rid of our problems, as presumably each test class then runs in a fresh JVM instance, so all the static initialization/class loading is done again.
So far so good. Unfortunately, we are using tycho-surefire (0.16), which does not seem to have that option. My question is whether there is any trick that can allow us to overcome this problem.
For example, how does the parallel option of the JUnit runner provider work with Tycho?
<parallel>classes</parallel>
<useUnlimitedThreads>true</useUnlimitedThreads>
Would the above piece of configuration achieve a similar result? Is there a guarantee that each test class will run in its own JVM? I assume that if we specify unlimited threads, the number of threads will equal the number of test classes when our parallelism granularity is "classes".
I hope there is someone who can help me a bit with this mess.
+++++++++++++++++++++++++++ Some New Findings +++++++++++++++++++++++++++++++++
Interestingly enough, the following options fix the problem.
<threadCount>10</threadCount>
<perCoreThreadCount>true</perCoreThreadCount>
<parallel>classes</parallel>
I really cannot explain to myself why this is the case. These options do not fork a separate JVM for each test class; each class actually runs in a separate thread within the same JVM. Forking a JVM does not seem to be supported by tycho-surefire at all. Our main problems stem from the construction of the Eclipse OSGi container, which is built with the statically initialized values that are causing trouble. Could it be that when you parallelize tests this way in Tycho, it actually forks the JVM, or does something strange that reconstructs the OSGi container and reloads certain classes?
Could that be the reason the problem disappears? All of this seems quite strange. I guess I should take a peek at the tycho-surefire source code.
There is currently no version of Tycho which supports forking more than one VM. The feature request is tracked as bug 380171.
I don't think surefire will execute each test suite class in a separate JVM.
<parallel>classes</parallel>
If the above property is set, the JVM is launched once and the runner spawns as many threads as there are test suite classes; the test methods within each class run sequentially.
If you are using static utility methods in such a case, they are highly likely the root cause of your troubles :)
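To make the static-initialization hazard concrete, here is a hedged, minimal illustration of the kind of cross-test leakage that only a fresh JVM per test class avoids (all class and property names are invented for the example):

// The Config class is initialized exactly once, by whichever test class touches it
// first, and every later test in the same JVM sees that frozen value.
final class Config {
    static final String ENDPOINT = System.getProperty("endpoint", "http://default");
    private Config() { }
}

class FirstIT {
    @org.junit.Test
    public void usesDefaultEndpoint() {
        // Triggers Config's static initializer with whatever properties are set right now.
        org.junit.Assert.assertEquals("http://default", Config.ENDPOINT);
    }
}

class SecondIT {
    @org.junit.Test
    public void wantsADifferentEndpoint() {
        System.setProperty("endpoint", "http://other");
        // If FirstIT already ran in this JVM, Config is already initialized and ENDPOINT
        // is still "http://default", so this fails. In a fresh JVM per class it would pass.
        org.junit.Assert.assertEquals("http://other", Config.ENDPOINT);
    }
}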

How to signal a test to stop in TestNG?

I have written tests for my Windows-based application in TestNG. Sometimes my tests result in a Windows BSOD, so the test keeps waiting for Windows to come back up until it times out and fails.
I have written a listener for BSOD detection and handling, so that whenever a test starts, the BSOD detector starts as a listener and handles the BSOD if it occurs. But I still have no way to notify my test that it has to halt; it continues to execute until the timeout.
How can I solve this problem? Is there a way to notify the test from the listener so that it terminates?
I think you may be out of luck.
The discussion at https://groups.google.com/forum/#!topic/testng-users/BkIQeI0l0lc clearly indicates that, as of August 2012, there was no way to stop a running test suite.
That, combined with a much older post (https://groups.google.com/forum/#!topic/testng-users/FhC3rqs1yDM) from February 2010, suggests it has been a shortcoming of TestNG for a while.
The current TestNG documentation does not describe any methods for stopping the suite.
From what I can see in the source code (although I have not traced through all paths) at https://github.com/cbeust/testng/blob/master/src/main/java/org/testng/TestNG.java, it simply creates new tasks for each test and runs them, providing no way to break early.
Perhaps you could modify your tests to have access to some flag that indicates whether subsequent tests should be skipped (set by, e.g., the BSOD detector). Each subsequent test is then responsible for marking itself as skipped if the flag is set, as sketched below. It might not be ideal, but one way to think about it is: any test should be skipped if a BSOD (or other terminal event) was detected before it ran.
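A hedged sketch of that flag idea, assuming the BSOD detector can reach a shared flag (the class, field and message below are placeholders, not an existing TestNG feature):

import java.util.concurrent.atomic.AtomicBoolean;
import org.testng.SkipException;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class AbortAwareTest {

    // Set by the BSOD-detection listener when it decides the run cannot continue.
    public static final AtomicBoolean ABORT_REQUESTED = new AtomicBoolean(false);

    @BeforeMethod
    public void skipIfAborted() {
        if (ABORT_REQUESTED.get()) {
            throw new SkipException("Run aborted after a BSOD was detected");
        }
    }

    @Test
    public void someWindowsScenario() {
        // normal test body
    }
}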
You could request this feature on the TestNG Google Group (linked above). If you are so inclined, you could also perhaps customize the TestNG source.
