I would like to frequently run the tests I'm working on, unit or integration, but whatever I try, I see npm messages scrolling by (npm run webapp:build:dev, npm install, webapp:test) and, of course, a bunch of .spec.ts tests being executed.
Even right-clicking and running a single JUnit test starts all of this, which takes a while to execute.
So, what's the way, in a monolithic JHipster application, to run tests ignoring the frontend?
Related
I created a project using Cucumber to perform e2e tests of various APIs I consume. I would like to know if I can run these tests through endpoints, to further automate the application that was created.
That way I would be able to deploy this app and would not need to keep running the tests locally.
You can do that if you create a REST API with a GET method that executes the test runner when called.
How to run cucumber feature file from java code not from JUnit Runner
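For illustration only, here is a minimal sketch of that idea, assuming Spring Web and Cucumber-JVM are on the classpath; the endpoint path, the glue package and the feature location are made-up placeholders, not taken from the question:

import io.cucumber.core.cli.Main
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

@RestController
class CucumberRunController {

    @GetMapping("/run-tests")  // hypothetical trigger endpoint
    fun runTests(): String {
        // Main.run returns 0 when every scenario passes, non-zero otherwise
        val exitStatus = Main.run(
            arrayOf("--glue", "com.example.steps", "classpath:features"),
            Thread.currentThread().contextClassLoader
        )
        return if (exitStatus.toInt() == 0) "All scenarios passed" else "Some scenarios failed"
    }
}

Keep in mind that running tests from a live endpoint blocks the request thread for the whole test run, which is one more reason to prefer a pipeline.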
But I don't recommend doing that, since what you are trying to achieve seems to me similar to a pipeline definition.
If you're in touch with the developers of these APIs, you can speak with them about including your test cases in their pipeline, since they probably have one in place.
If, for some reason, you still want to trigger your tests remotely and set it up on your own, I would recommend reading about Jenkins. You can host it on any machine, run your tests from there, and access your Jenkins instance from any other machine:
https://www.softwaretestinghelp.com/cucumber-jenkins-tutorial/
If your code is hosted on a platform like GitHub or GitLab, they already have their own ways of creating pipelines, and you can use those to run your tests. Read about GitLab pipelines or GitHub Actions.
I'm really not sure what code to paste here. I'm including a link to my GitHub below, to the specific file with the error.
So, all of a sudden, a unit test that had previously been working fine started failing, and the failure makes no sense whatsoever. I'm using Spring's MockMvc utility to simulate web API calls, and my tests with this tool mostly revolve around specific web logic, such as my security rules. The security rules are super important to me in these tests; I've got unit tests for all the access rules to all my APIs.
Anyway, this test, which should be testing a successfully authenticated request, is now returning a 401, which causes the test to fail. Looking at the code, I can't find anything wrong with it. I'm passing in a valid API token. However, I don't believe that any of my logic is to blame.
The reason I say that is because I did a test. Two computers, both on the develop branch of my project. I deleted my entire .m2 from both machines, did a clean compile, and then ran the tests. On one machine, all the tests pass. On the other machine, this one test fails.
This leads me to think one of two things is happening. Either something is seriously wrong on one of the machines, or it's a test order thing, meaning something is not being properly cleaned up between my tests.
This is reinforced by the fact that if I only run this one test file (mvn clean test -Dtest=VideoFileControllerTest), it works on both machines.
So... what could it be? I'm at a loss because I felt I was cleaning up everything properly between tests, I'm usually quite good at this. Advice and feedback would be appreciated.
https://github.com/craigmiller160/VideoManagerServer/blob/develop/src/test/kotlin/io/craigmiller160/videomanagerserver/controller/VideoFileControllerTest.kt
testAddVideoFile()
I have checked out your project and run the tests. Although I cannot pinpoint the exact cause of the failure, it indeed looks like it has something to do with a form of test (data) contamination.
The tests started to fail after I randomized their order by modifying the Maven Surefire configuration. I added the following snippet to the build section of your pom.xml in order to randomize the test order:
<build>
  ...
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
      <runOrder>random</runOrder>
    </configuration>
  </plugin>
  ...
</build>
I ran the mvn clean test command ten times using the following (Linux) Bash script (if you use Windows, a similar script might work in PowerShell):
#!/bin/bash
for i in {1..10}
do
  mvn clean test
  if [[ "$?" -ne 0 ]] ; then # the exit code from mvn clean test was non-zero
    echo "Error during test ${i}" >> results.txt
  else
    echo "Test ${i} went fine" >> results.txt
  fi
done
Without the plugin snippet, the results.txt file merely contained ten lines of Test x went fine, while after applying the plugin about half of the runs failed. Unfortunately, the randomized tests all succeed when using mvn clean test -Dtest=VideoFileControllerTest, so my guess is that the contamination originates somewhere else in your code.
I hope the above gives you more insight into the test failure. I would suggest searching for the culprit by @Ignore-ing half of the active test classes and running the tests. If all tests succeed, retry this process on the other half, and keep cutting the active tests in half until you have found the cause of the failure. Be sure to always include the failing test, though.
[edit]
You could add @DirtiesContext on the involved test classes/methods to prevent reuse of the ApplicationContext between tests.
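As an illustration only, a minimal sketch of that annotation on a hypothetical test class (the class name and the chosen class mode are assumptions, not taken from the project):

import org.springframework.boot.test.context.SpringBootTest
import org.springframework.test.annotation.DirtiesContext

// AFTER_EACH_TEST_METHOD rebuilds the ApplicationContext after every test,
// trading test-run speed for isolation
@SpringBootTest
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
class SomeContaminatedControllerTest {
    // ...
}

Note that this only resets Spring-managed state; statics like SecurityContextHolder live outside the ApplicationContext.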
Alright, thanks for the advice, I figured it out.
So, the main purpose of my controller tests was to validate my API logic, including authentication, which meant there was logic making static method calls to SecurityContextHolder. I had another test class that was also testing logic involving SecurityContextHolder, and it was doing this:
@Mock
private lateinit var securityContext: SecurityContext

@Before
fun setup() {
    SecurityContextHolder.setContext(securityContext)
}
So it was setting a Mockito mock object as the security context. After much investigation, I found that all my authentication logic was working fine, even in the test that was returning a 401 on my laptop (but not on my desktop). I also noticed that the test file with the code snippet above was running right before my controller test on my laptop, but after it on my desktop.
Furthermore, I had plenty of tests for unauthenticated calls, which is why only this one test was failing: the unauthenticated test that followed it cleared the context again.
The solution to this was to add the following logic to the test file from above:
@After
fun after() {
    SecurityContextHolder.clearContext()
}
This cleared the mock and got everything to work again.
I have around 200 TestNG test cases that can be executed through Maven via a suite.xml file. But I want to convert these test cases into a web service (or something similar), so that anybody can call any test case from their machine and know whether that particular functionality is working fine at that moment.
But what if no one calls the test web services for a long time? You won't know the state of your application if you have any failures/regressions.
Instead, you can use:
- continuous integration to run the tests automatically on every code push; see Jenkins for a complete solution, or, more hacky, create your own cron job/daemon/git hook on a server to run your tests automatically
- a Maven plugin that displays the results of the last execution of the automated tests; see Surefire for an HTML report on the state of the last execution of each test
I'd like test classes to be aware of whether they are currently being executed inside a Suite or not.
I have a test suite which starts a server before all tests and shuts it down after all tests, using ExternalResource. This works perfectly.
While writing new tests, I want running a single test (or all tests from the same test class) to also start and stop the server. So I wanted those classes to become aware of whether they are currently being executed inside a suite or not, and to initiate the server start accordingly.
That seems to be impossible: the suite Description is never passed to @Rules or @ClassRules, it seems, and neither can I get a reference to the current Runner that runs the current test.
Is there a way to do that?
Update due to the first proposed answer:
Please note that there are many other tests that may run before the test suite and after it. Therefore I can't rely on a JVM shut down.
My main demand is that I'd like to run a single test from within my IDE (Eclipse) right after writing it, and still have the server start up and shut down. That should not happen for every test within the suite, though.
Make all your test classes extend some AbstractServerTest.
Add a @BeforeClass method in it to check whether the server is already running, and start it if it is not.
Remember that @BeforeClass will be called for every subclass, so the if-statement is necessary.
When the last test finishes, the JVM will shut down, so as long as your server is running in the same JVM (and not as separate process), you don't need to do any cleanup.
If your server is running as a separate process, your @BeforeClass method can add a shutdown hook that executes some command to stop the server when the JVM shuts down. A minimal sketch of this idea follows below.
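Here is such a sketch (JUnit 4, Kotlin); TestServer and its isRunning/start/stop API are assumptions invented for the example:

import org.junit.BeforeClass

abstract class AbstractServerTest {
    companion object {
        @JvmStatic
        @BeforeClass
        fun startServerIfNeeded() {
            // runs once per subclass, hence the guard
            if (!TestServer.isRunning()) {
                TestServer.start()
                // only needed when the server is a separate process
                Runtime.getRuntime().addShutdownHook(Thread { TestServer.stop() })
            }
        }
    }
}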
May I simply suggest using port-probing: check whether a port is already bound, inside a JUnit test rule (or a @Before/@BeforeClass annotated method), and only initialize the service if the port is still available. This way you can start the service externally or within the suite and reuse it in your test, or initialize it before the test if the port is still free. This of course requires a fixed port assignment that is only used for the specific services needed while testing. A sketch of such a probe follows below.
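As an illustration, a minimal sketch of the probe (JUnit 4, Kotlin); the port number and the TestServer helper are assumptions:

import java.io.IOException
import java.net.ServerSocket
import org.junit.Before

abstract class PortProbingTest {
    @Before
    fun startServiceIfPortFree() {
        val portFree = try {
            ServerSocket(8085).use { true }  // probe the fixed test port
        } catch (e: IOException) {
            false  // already bound: the service runs externally or suite-wide
        }
        if (portFree) {
            TestServer.start()  // hypothetical helper that binds the port
        }
    }
}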
As you are obviously performing an integration test, you might also be interested in how to set up Maven or Gradle to execute such tests automatically within the integration-test phase.
I have set up an end-to-end component test in JUnit. It tests that objects generated at application startup are received at the other end of my application, right before they would be sent over the network. I assert that the number of objects received is the same as the number generated. To be clear, this only tests within the single application component; the network is mocked out.
When I run this test in Eclipse IDE, I send 4 and receive 4. When I run the test from Apache Ant, I send 4 but only 3 are received.
Does anybody know what could cause this? The test probably runs quicker with Ant, but my application is single-threaded, so I don't see how that would make a difference.
Thanks!