I know there is remote debugging, but I want to go one step further: I would like to run tests in my Eclipse that execute within another JVM - i.e. have access to static fields, resources, instances, etc. from that JVM.
More specifically: I have an Apache server running locally on my machine and I would like to execute tests as if they were running natively in that very server.
Currently, I have implemented my own JUnit test runner that runs within that server/JVM and creates test report XMLs, which are written to a folder that I can inspect. But that's a bit cumbersome, so I would like to be able to run the tests directly with a mouse click from Eclipse and have them presented there nicely in the JUnit view.
So my question is:
Is there a way to run (not debug) code from Eclipse within another JVM?
If so - is this also possible with tests, i.e. run and check the test reports with the JUnit view?
I presume that you are talking about an Apache Tomcat server. (Running unit tests within an arbitrary Apache server doesn't make much sense.)
A Google search didn't yield a lot of leads, but I did come across this:
JUnit-Tomcat: No Mocking Just Testing
Be aware that the downloadable source code hasn't been updated since 2006. It apparently doesn't support JUnit 4.0, and it was developed for Tomcat 5.5.
Another option would be to use an embedded Tomcat server within your unit tests; e.g.
https://github.com/mjeanroy/junit-servers
Note: this is not a recommendation.
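To illustrate the embedded-server idea, here is a minimal sketch of starting Tomcat inside a test process. This assumes tomcat-embed-core 9.x (the javax.servlet API) on the classpath; the servlet and paths are made up for illustration, and a real test would use JUnit lifecycle methods instead of main:

```java
import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;
import java.net.URL;

// Sketch: embedded Tomcat started in-process, exercised, then shut down.
public class EmbeddedTomcatSketch {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setBaseDir("target/embedded-tomcat");
        tomcat.setPort(0);          // pick a free port
        tomcat.getConnector();      // force connector creation (Tomcat 9)

        Context ctx = tomcat.addContext("", new File(".").getAbsolutePath());
        Tomcat.addServlet(ctx, "ping", new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws java.io.IOException {
                resp.getWriter().print("pong");
            }
        });
        ctx.addServletMappingDecoded("/ping", "ping");
        tomcat.start();

        int port = tomcat.getConnector().getLocalPort();
        URL url = new URL("http://localhost:" + port + "/ping");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            System.out.println(in.readLine());
        }
        tomcat.stop();
        tomcat.destroy();
    }
}
```

Because the server lives in the same JVM as the test, breakpoints, static fields, and instances are all directly accessible from the IDE.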
Related
I created a project using Cucumber to perform e2e tests of various APIs I consume. I would like to know whether I can trigger these tests through endpoints, to further automate the application that was created.
That way I would be able to deploy this app and would not need to keep running the tests locally.
You can do that if you create a REST API with a GET method that executes the test runner when called.
How to run cucumber feature file from java code not from JUnit Runner
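As a sketch of the shape such an endpoint could take, here is a self-contained example using the JDK's built-in HttpServer. The runSuite() method is a hypothetical placeholder; in a real setup it would invoke the Cucumber/JUnit runner and return its summary:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TestTriggerEndpoint {
    // Placeholder for the actual test-runner invocation.
    static String runSuite() {
        return "2 scenarios passed";
    }

    public static void main(String[] args) throws Exception {
        // GET /run-tests kicks off the suite and returns its summary.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/run-tests", exchange -> {
            byte[] body = runSuite().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // Smoke-test the endpoint from the same process.
        URL url = new URL("http://localhost:"
                + server.getAddress().getPort() + "/run-tests");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            System.out.println(in.readLine());
        }
        server.stop(0);
    }
}
```

Note that a long-running suite would normally be run asynchronously, with the endpoint returning a job id to poll instead of blocking.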
But I don't recommend doing that, since what you are trying to achieve looks to me like a pipeline definition.
If you're in touch with the developers of these APIs, you can speak with them about including your test cases in their pipeline, since they probably have one in place.
If, for some reason, you still want to trigger your tests remotely and set it up on your own, I recommend reading about Jenkins. You can host it on any machine, run your tests from there, and access your Jenkins instance from anywhere:
https://www.softwaretestinghelp.com/cucumber-jenkins-tutorial/
If your code is hosted on a platform like GitHub or GitLab, these already have their own ways of creating pipelines, and you can use them to run your tests. Read about GitLab pipelines or GitHub Actions.
So this is my situation:
I am fairly new to gitlab-ci. I don't host my own GitLab instance but rather push everything to GitLab itself. I am not using, and am not familiar with, build tools like Maven. I usually work on and run my programs from an IDE rather than the terminal.
This is my problem:
When I push my Java project, I want my pipeline to run the JUnit tests I wrote. While I've found various simple commands for running unit tests in other languages, I didn't come across anything for JUnit. I've only found people using Maven, or running the tests locally and then pushing the test reports to GitLab. Is it even possible to easily run JUnit tests on the GitLab server with the pipeline, without build tools like Maven? Do I have to run them locally? Do I have to learn to start them with a Java terminal command? I've been searching for days now.
The documentation is clear:
To enable the Unit test reports in merge requests, you need to add artifacts:reports:junit in .gitlab-ci.yml, and specify the path(s) of the generated test reports.
The reports must be .xml files, otherwise GitLab returns an Error 500.
You then have various examples in Ruby, Go, Java (Gradle or Maven), and other languages.
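For a Maven project, the job could look something like this (a sketch assuming the default Surefire report location; image tag and job name are illustrative):

```yaml
test:
  image: maven:3.8-openjdk-11
  script:
    - mvn test
  artifacts:
    when: always          # upload reports even when tests fail
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml
```

With this in .gitlab-ci.yml, GitLab parses the XML and shows test results in merge requests.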
But with GitLab 13.12 (May 2021), this gets better:
Failed test screenshots in test report
GitLab makes it easy for teams to set up end-to-end testing with automation tools like Selenium that capture screenshots of failed tests as artifacts.
This is great until you have to sort through a huge archive of screenshots looking for the specific one you need to debug a failing test.
Eventually, you may give up due to frustration and just re-run the test locally to try and figure out the source of the issue instead of wasting more time.
Now, you can link directly to the captured screenshot from the details screen in the Unit Test report on the pipeline page.
This lets you quickly review the captured screenshot alongside the stack trace to identify what failed as fast as possible.
See Documentation and Issue.
Avoid restarting the server for every tiny change to integration tests
I'm new to Spring and feeling a lot of pain writing integration tests in Spring.
For example, say I'm running an integration test and change the code below.
As you can see, nothing in the change is related to the server code.
To run the updated integration test, I have to launch the web server and run the data seeding again, which can take as long as 5 minutes.
It's hard to imagine how people manage development this way.
I'm not sure whether it is possible to launch the web server separately via bootRun and have the integration tests communicate with that dedicated server, without having to reboot it for every test run.
Usually, which part of the config file defines this behavior?
I took over this project and have to figure it out on my own.
Before
serverResp.then()
          .statusCode(203)
          .body("query.startDateTime", equalTo("2018-07-01T00:00:00"));
After
serverResp.then()
          .statusCode(200)
          .body("query.endDateTime", equalTo("2020-07-01T00:00:00"));
There are many different ways to do integration testing.
Spring has a built-in framework that runs the integration test in the same JVM as the real server.
If the application is heavy (usually the case for monoliths), it can indeed take time to start. The best you can do then is to choose which parts of the application to load, namely those relevant to the test. Spring has ways to achieve this; the question is whether your application code allows such separation.
Then there is a way to write integration tests so that they communicate with a remote server that is already up and running "in advance". During the build, the server can be started once before the testing phase and shut down when the tests are done.
Usually tests like this have some way to specify the server host/port for communication (putting aside security, credentials, etc.).
So you can check whether some special flag/system property is set and read the host/port from there.
A good thing about this approach is that you won't need to restart the server before every test.
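The host/port lookup described above can be sketched as follows; the property names ("it.server.host"/"it.server.port") and defaults are made up for illustration:

```java
// Resolve the target server for integration tests from system properties,
// falling back to local defaults when no -D flags are given.
public class TestTarget {
    static String host() {
        return System.getProperty("it.server.host", "localhost");
    }

    static int port() {
        return Integer.getInteger("it.server.port", 8080);
    }

    static String baseUrl() {
        return "http://" + host() + ":" + port();
    }

    public static void main(String[] args) {
        // With no -D flags set, this prints the local default.
        System.out.println(baseUrl());
    }
}
```

The build can then pass -Dit.server.host=... -Dit.server.port=... to point the same tests at a CI server or a locally running instance.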
The bad thing is that it doesn't always make testing easy: if your test deploys some test data, it must also remove that data at the end of the test.
Tests must be designed carefully.
A third approach is a kind of hybrid, and generally not mainstream IMO:
You can create a special setup that runs the tests in a different JVM (externally), but once a test starts, its bytecode gets uploaded to the running server (the server must have a backdoor for this) and is actually executed on the server. Again, the server is already up and running.
I once wrote a library to do this with Spock, but that was a long time ago and we didn't end up using it (that project was closed).
I don't want to self-advertise, but you can check it out and maybe borrow technical ideas for how to do this.
I'm currently writing a Java program that is an interface to another server. The majority of the functions (close to 90%) do something on the server. Currently, I'm just writing simple classes that run some actions on the server and then checking the results myself, or adding methods to the test that read back the written information.
Currently, I'm developing on my own computer, and have a version of the server running locally on a VM.
I don't want to run the tests continually at every build, as I don't want to keep modifying the server I am connected to. I am not sure of the best way to go about my testing. I have my JUnit tests (on simple functions that do not interact externally) that run on every build, but I can't find an established way in JUnit to write tests that do not have to run at every build (perhaps only when their functions change?).
Can anyone point me in the right direction on how best to handle my testing?
Thanks!
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to
This should have raised the alarms for you. Running the tests is what gives you feedback on whether you broke stuff. Not running them means you're blind. It does not mean that everything is fine.
There are several approaches, depending on how much access you have to the server code.
Full Access
If you're writing the server yourself, or you have access to the code, then you can create a test-kit for the server - a modified version of the server that runs completely in-memory and allows you to control how the server responds, so you can simulate different scenarios.
This kind of test-kit is created by separating the logic parts of the server from its surroundings, and then mocking them or creating in-memory versions of them (such as databases, queues, file-systems, etc.). This allows the server to run very quickly and it can then be created and destroyed within the test itself.
Limited/No Access
If you have to write tests for integration with a server that's out of your control, such as a 3rd party API, then the approach is to write a "mock" of the remote service, and a contract test to check that the mock still behaves the same way as the real thing. I usually put those in a different build, and run that occasionally just to know that my mock server hasn't diverged from the real server.
Once you have your mock server, you can write an adapter layer for it, covered by integration tests. The rest of your code will only use the adapter, and therefore can be tested using plain unit tests.
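This mock-plus-adapter structure can be sketched with the JDK's built-in HttpServer standing in for the remote service; all names here are made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AdapterSketch {
    // The rest of the codebase depends only on this interface.
    interface RemoteService {
        String status();
    }

    // HTTP-backed adapter, covered by integration tests against the mock
    // (and by the occasional contract test against the real server).
    static class HttpRemoteService implements RemoteService {
        private final String baseUrl;
        HttpRemoteService(String baseUrl) { this.baseUrl = baseUrl; }

        @Override
        public String status() {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    new URL(baseUrl + "/status").openStream()))) {
                return in.readLine();
            } catch (java.io.IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Mock of the remote service: a fixed canned response.
        HttpServer mock = HttpServer.create(new InetSocketAddress(0), 0);
        mock.createContext("/status", exchange -> {
            byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        mock.start();

        RemoteService service = new HttpRemoteService(
                "http://localhost:" + mock.getAddress().getPort());
        System.out.println(service.status());
        mock.stop(0);
    }
}
```

Plain unit tests can then substitute an in-memory RemoteService implementation, never touching HTTP at all.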
The second approach can, of course, be employed when you have full access as well, but usually writing the test-kit is better: those kinds of tests tend to be duplicated across projects and teams, and when the server changes, a whole bunch of people need to fix their tests, whereas if the test-kit is written as part of the server code, it only has to be altered in one place.
We are developing a Java based Play Framework application.
I'm the only eclipse user in my team. My colleagues are using IntelliJ and they are able to run JUnit tests purely from within the IDE.
I wouldn't see this as a problem, since running the tests via ./activator and attaching a remote debugger is no big deal, but:
They can run one single @Test-annotated method of the whole test class and debug it. And I can't even find a way to run a single method using the activator command.
Can anyone point me to a solution?