When I run one of my unit tests through my Ant build file's junit target:
<junit showoutput="true" printsummary="yes" haltonfailure="yes" fork="yes" timeout="60000">
junit shows me:
[junit] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.958 sec
indicating successful completion of the test. However, it hangs after that line and sits there indefinitely (until I kill it).
Within the test, a new thread is created, but based on System.out.println output, I can see that it never completes even though it shouldn't take more than a couple of seconds. If I explicitly call join() then everything completes as expected.
If I run the same junit test in Eclipse (without the explicit join()) nothing seems amiss.
My question is: why do I need to call join() before my unit test's method returns?
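For context, a stripped-down sketch of the pattern described above (JUnit 4 assumed; the class name and the thread's body are made up for illustration):

import org.junit.Test;

public class BackgroundWorkerTest {

    @Test
    public void startsBackgroundWorker() throws Exception {
        // hypothetical background work standing in for the real thread's job
        Thread worker = new Thread(() -> System.out.println("worker finished"));
        worker.start();

        // without this join() the Ant <junit> task prints its summary and then hangs;
        // with it, the build completes normally
        worker.join();
    }
}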
I'm new to the Stack Exchange forum.
I'm trying to execute a test in Java using TestNG. To give you a brief background, I'm executing the Java package through an XML file and running the package as a TestNG suite. I am also going through a VPN and have passed the arguments into the run configurations. When I try to run any test I receive the following error:
Total tests run: 0, Failures: 0, Skips: 0
Picked up _JAVA_OPTIONS: -Djavax.net.ssl.trustStore=c:\windows\sun\java\deployment\trusted.cacerts
I have executed this test successfully before, and I'm not sure what changed. Let me know if you have seen this issue before. Also, let me know if you need any more details, or if you have any questions. I'll take all the help I can get at this point.
I'm new to Jenkins and I have a problem with builds. I'm writing UI tests with Selenium, Java and TestNG.
My problem is that Jenkins always shows Finished: SUCCESS even if some tests fail.
===============================================
TestAll
Total tests run: 10, Failures: 1, Skips: 0
===============================================
[SSH] exit-status: 0
TestNG Reports Processing: START
Looking for TestNG results report in workspace using pattern: **/testng-results.xml
Did not find any matching files.
Started calculate disk usage of build
Finished Calculation of disk usage of build in 0 seconds
Started calculate disk usage of workspace
Finished Calculation of disk usage of workspace in 0 seconds
Notifying upstream projects of job completion
No emails were triggered.
Finished: SUCCESS
How can I resolve my problem?
I assume you are building a Maven Project.
To stop a build on test failure, go to your project's configuration, then to the Build section, and in the "Goals and options" line add:
-Dmaven.test.failure.ignore=false
This should fail the build if test failures are found.
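If you prefer to keep this setting in the project itself rather than in the Jenkins goals line, the same behaviour can be configured in the POM's Surefire section; a minimal sketch (the surrounding build/plugins elements are omitted, and the plugin coordinates are the standard Surefire ones):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- do not ignore test failures: any failing test fails the build -->
    <testFailureIgnore>false</testFailureIgnore>
  </configuration>
</plugin>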
I have been searching for answers for the last few days and have been unable to find one. The closest answer I could find is this, which does not exactly answer the questions I have.
By the way, I have a Selenium test project which is based on Gradle. We build the project on Jenkins and run the tests in 20 concurrent threads. The total number of unique test classes I have is 87, so I expect Gradle to execute at least 5 batches. The test project is built with Cucumber JVM; the build is triggered by Jenkins and the tests run against a Selenium hub. I tried to increase the parallelism of the tests by utilizing the grid as much as possible, but the problem started when the number of tests started growing.
When I start the tests from Jenkins, the first batch executes all 20 test processes and I see the second batch also start with the same number of processes. After the second batch the processes go back to single mode and the entire job takes 14 hours to complete, which defeats the purpose of having parallel test execution.
Gradle properties:
jvmArgs '-Xms128m', '-Xmx1024m', '-XX:MaxPermSize=128m'
// default to 15 parallel forks when the test.parallel property is not set
// (another variant used Runtime.runtime.availableProcessors().toString() as the fallback)
maxParallelForks = PropertyUtils.getProperty('test.parallel', '15') as int
forkEvery = PropertyUtils.getProperty('test.forkEvery', '0') as int
CLI:
gradle clean test -Dtest.single=*TestRun --info
I have read all the documentation I can possibly find but failed to get an answer. It would be greatly appreciated if someone could help me with these questions:
1. How does Gradle batch the test classes internally? For example, if 20 executors start and test classes 1, 2 and 3 finish executing faster than the others, do those three executors get three more test classes, or do they wait for the entire batch to finish executing?
2. Can forkEvery impact how the execution works during parallel testing?
Jenkins log
Successfully started process 'Gradle Test Executor 6'
Successfully started process 'Gradle Test Executor 13'
Successfully started process 'Gradle Test Executor 14'
Successfully started process 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 16'
Successfully started process 'Gradle Test Executor 8'
Successfully started process 'Gradle Test Executor 19'
Successfully started process 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 11'
Successfully started process 'Gradle Test Executor 10'
Successfully started process 'Gradle Test Executor 18'
Successfully started process 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 20'
Successfully started process 'Gradle Test Executor 7'
Successfully started process 'Gradle Test Executor 9'
Successfully started process 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 15'
Successfully started process 'Gradle Test Executor 17'
Successfully started process 'Gradle Test Executor 12'
Gradle Test Executor 13 started executing tests.
Gradle Test Executor 14 started executing tests.
Gradle Test Executor 6 started executing tests.
Gradle Test Executor 5 started executing tests.
Gradle Test Executor 16 started executing tests.
Gradle Test Executor 19 started executing tests.
Gradle Test Executor 8 started executing tests.
Gradle Test Executor 4 started executing tests.
Gradle Test Executor 2 started executing tests.
Gradle Test Executor 10 started executing tests.
Gradle Test Executor 11 started executing tests.
Gradle Test Executor 18 started executing tests.
Gradle Test Executor 1 started executing tests.
Gradle Test Executor 20 started executing tests.
Gradle Test Executor 7 started executing tests.
Gradle Test Executor 3 started executing tests.
Gradle Test Executor 9 started executing tests.
Gradle Test Executor 17 started executing tests.
Gradle Test Executor 15 started executing tests.
Gradle Test Executor 12 started executing tests.
The default value of forkEvery is 0.
According to the documentation, forkEvery is:
The maximum number of test classes to execute in a forked test process. The forked test process will be restarted when this limit is reached. The default value is 0 (no maximum).
So Gradle (and probably JUnit) forks by test class, not by the tests within a class. It sounds like a few of the 87 test classes have long-running tests or a large number of tests, and they end up in one forked test process. I would consider setting forkEvery to 1. This ensures that each test class is sent to a new fork. If there is still an issue, you may need to find which test classes are taking the most time. Consider splitting those classes up into smaller groups of tests so the tests get spread over each JVM. If it is one test that takes forever, consider redesigning it and possibly creating smaller tests from it.
I do not believe that Gradle runs tests in batches. As a worker becomes available, it takes a test class from the queue of remaining test classes. You would really have to look at how JUnit works, as I'm sure Gradle is simply passing these configurations on to JUnit.
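A minimal sketch of the suggested change, reusing the property helper from the question (the fallback value of 20 is an assumption, chosen to match the 20 grid slots described above):

test {
    // restart the forked JVM after every test class, so no single fork
    // ends up accumulating all of the slow classes
    forkEvery = 1
    // keep the fork count configurable, as in the original build script
    maxParallelForks = PropertyUtils.getProperty('test.parallel', '20') as int
}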
Building the project on Windows 7 takes a huge amount of time, especially when leaving the test suite. On my Windows machine 172 tests take 230 seconds, while on Jenkins (Ubuntu) they take about 19 seconds.
I ran Maven with the -X argument to see what it is hanging on, but no error appeared; after that time it just goes on to run the next plugin.
I tried to speed it up by setting the Surefire plugin to run on 4 threads, but that is not the cause - Jenkins has exactly the same project as I do.
I found that it can sometimes hang on calls to external processes, but the project is not calling any external processes (which wouldn't even be that easy, given that it runs on two different operating systems).
When I run the tests one by one on Windows 7 the running time is definitely lower than running them as part of a whole rebuild. This behaviour is the same on other Windows 7 machines.
How can I figure out what is keeping Maven from leaving the tests and going on to the next step?
Windows 7 output
Last test output
<--- gets stuck here
Tests run: 172, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 230.682 sec - in TestSuite
Results :
Tests run: 172, Failures: 0, Errors: 0, Skipped: 0
Next plugin run
Ubuntu output
Last test output
Tests run: 172, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.954 sec - in TestSuite
Results :
Tests run: 172, Failures: 0, Errors: 0, Skipped: 0
Next plugin run
Updating the Surefire plugin version from 2.16 to the latest one, 2.19, helped. Now on Windows 7 it takes about 12 seconds, but I still have no idea what the original cause of the hang was.
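For reference, a minimal sketch of the corresponding pom.xml change (the coordinates are the standard Surefire ones; 2.19 was the latest release at the time):

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.19</version>
    </plugin>
  </plugins>
</build>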
In order to understand what is going on in your OS, you have to somehow debug your OS. I recommend taking a look at Microsoft Sysinternals and trying Procmon (Process Monitor) to see what is going on. Sadly, it won't show you all the syscalls the way strace does on Linux, but it might help you understand more about what is going on.
You can also debug the JVM that executes the tests; that might also give you some answers.
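For example, a thread dump of the forked test JVM, taken while the build is stuck, will often show what it is waiting on. A sketch using the standard JDK tools (the output file name is just an example):

jps -lv                        # find the process id of the forked test JVM
jstack <pid> > thread-dump.txt # capture a thread dump while the build is hanging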
I'm writing an Ant build script to run regression tests for an application. I need to run the test cases sequentially and only if the previous test run was successful. Is there a way I can look at the output of the build to decide whether the next target can be called?
[exec] [revBuild] RC = 1
[exec] -------------------------------------------------
[exec] Result: 1
BUILD SUCCESSFUL
Total time: 3 minutes 23 seconds
In the above output, the called application has failed. Is there a way I can check the application's return code in the build output and, based on that, decide whether the next Ant target (which runs the next test case) should be called?
You probably just want to set the failonerror attribute of the exec task to true. If you do this and the executable's return status code is anything other than 0, the build will fail.
You could also store this status code in a property using the resultproperty attribute, and execute some task only if this property has (or does not have) a particular value.
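A minimal sketch of both approaches (the executable, target, and property names are made up for illustration):

<!-- Option 1: fail the whole build as soon as a test command returns non-zero -->
<exec executable="runTest1.sh" failonerror="true"/>

<!-- Option 2: capture the return code and only run the next target when it is 0 -->
<target name="test1">
    <exec executable="runTest1.sh" resultproperty="test1.rc"/>
    <condition property="test1.ok">
        <equals arg1="${test1.rc}" arg2="0"/>
    </condition>
</target>

<target name="test2" depends="test1" if="test1.ok">
    <exec executable="runTest2.sh" failonerror="true"/>
</target>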