I have a simple game which consists of two projects: client and server. Now I want to test whether they interact correctly.
The setup is one Maven parent project with server and client as child modules.
I don't know how to test the interaction: can I start both projects somehow (how exactly?) in JUnit tests or integration tests, so that I can listen to one project's output, verify it, and send it to the other? Or should I go another way?
For integration tests, create a separate POM and add both projects as test dependencies. Then start the server in a TestNG @BeforeClass hook (or via the Maven Failsafe Plugin's lifecycle hooks) before the tests start, and shut the server down after the tests.
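A minimal sketch of the TestNG variant, assuming a hypothetical GameServer class from your server module with blocking start()/stop() methods and a made-up port:

import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class ClientServerIT {

    private GameServer server; // hypothetical class from your server module

    @BeforeClass
    public void startServer() throws Exception {
        server = new GameServer(9000);
        server.start(); // should return once the server accepts connections
    }

    @AfterClass(alwaysRun = true)
    public void stopServer() throws Exception {
        server.stop();
    }

    @Test
    public void clientCanConnect() throws Exception {
        // exercise the real client against localhost:9000 here
    }
}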
This sounds like a good case for an integration test. I would do it as follows:
create a third Maven module called IntegrationTests
use a Maven profile integrationtests and activate it with -Dintegrationtests
pull in the client and server modules as dependencies in the new module
write my integration tests using JUnit (@Before, @Test, etc.)
As Evgeniy suggested, you could of course put the integration tests in one of the existing modules, whereas my suggestion is to split it out to a third module.
If you need help on how to use maven profiles for integration testing, just google around, or feel free to ask if you have specific questions.
Good luck.
Typically you start the server from the test and send requests from the test, emulating client behaviour and checking that the server's responses are what you expect.
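A minimal sketch of that idea, with the test acting as the client over a plain socket (the port and the PING/PONG exchange stand in for your real protocol):

import static org.junit.Assert.assertEquals;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

import org.junit.Test;

public class ServerProtocolTest {

    @Test
    public void serverAnswersPingWithPong() throws Exception {
        // assumes the server was started beforehand, e.g. in a @BeforeClass hook
        try (Socket socket = new Socket("localhost", 9000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("PING");
            assertEquals("PONG", in.readLine());
        }
    }
}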
I have a multi-module project built with Maven. Each module has integration tests. Within a module the tests share the same container; currently I use Testcontainers. But there is a problem with the long startup of the MySQL container: sometimes it takes too long, and building the whole project with tests can take several dozen minutes. Is there a way to launch one container and share it between submodules? After each submodule's run I would erase the database and configure it for the next submodule.
I see there is something like org.junit.platform.launcher from JUnit, but I can't find much documentation and don't know whether it could help me in some way.
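One widely used workaround is the "singleton container" pattern: hold the container in a static field of a shared base class, so it is started once per JVM rather than once per test class. To share it across the separate JVMs that the submodules fork, Testcontainers' reuse feature can be layered on top. A sketch, assuming the org.testcontainers:mysql module:

import org.testcontainers.containers.MySQLContainer;

public abstract class SharedMySqlTestBase {

    // Started once per JVM and shared by every test class extending this base.
    protected static final MySQLContainer<?> MYSQL =
            new MySQLContainer<>("mysql:8.0")
                    // with reuse enabled here and testcontainers.reuse.enable=true
                    // in ~/.testcontainers.properties, the container outlives the
                    // JVM and can be picked up again by the next submodule's tests
                    .withReuse(true);

    static {
        MYSQL.start();
    }
}

Each submodule's tests can then extend the base class and erase and re-seed the schema in a @Before method, as you described.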
I created a project using Cucumber to perform e2e tests of various APIs I consume. I would like to know whether I can run these tests through endpoints, to further automate the application that was created.
That way I would be able to deploy this app and would not need to keep triggering the tests locally.
You can do that if you create a REST API with a GET method that executes the test runner when called.
How to run cucumber feature file from java code not from JUnit Runner
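A rough sketch of that endpoint idea, assuming Cucumber-JVM 5+ (which exposes the programmatic CLI entry point io.cucumber.core.cli.Main) and the JDK's built-in HTTP server; the glue package and feature path are placeholders for your project's own:

import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.sun.net.httpserver.HttpServer;

import io.cucumber.core.cli.Main;

public class TestRunnerEndpoint {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/run-tests", exchange -> {
            // Main.run returns 0 on success and non-zero if scenarios failed
            byte exitStatus = Main.run(
                    "--glue", "com.example.steps", // placeholder glue package
                    "classpath:features");         // placeholder feature path
            String body = exitStatus == 0 ? "PASSED" : "FAILED";
            exchange.sendResponseHeaders(200, body.length());
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body.getBytes());
            }
        });
        server.setExecutor(null); // use the default executor
        server.start();
    }
}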
But I don't recommend doing that, since what you are trying to achieve looks to me like a pipeline definition.
If you're in touch with the developers of these APIs, you can speak with them about including your test cases in their pipeline, since they probably have one in place.
If, for some reason, you still want to trigger your tests remotely and set it up on your own, I would recommend you start reading about Jenkins. You can host it on any machine, run your tests from there, and access your Jenkins instance from any machine:
https://www.softwaretestinghelp.com/cucumber-jenkins-tutorial/
If your code is hosted on a platform like GitHub or GitLab, they already have their own way of creating pipelines, and you can use it to run your tests. Read about GitLab pipelines or GitHub Actions.
I have written unit tests for a third-party REST API. These tests are what I would call live tests, in that they test the REST API responses and require valid credentials. This is required because the documentation provided by the third party is not up to date, so it's the only way of knowing what the response will be. Obviously, I can't use these as the unit tests because they actually connect externally. Where would be a good place to put these tests, or how should I separate them from mocked unit tests?
I have currently had to comment them out when I check them in so that they don't get run by the build process.
I tend to use assumeTrue for these sorts of tests and pass a system property to the tests. So the start of one of your tests would be:
import static org.junit.Assume.assumeTrue;

import org.junit.Test;

@Test
public void remoteRestTest()
{
    // Comparing this way round avoids a NullPointerException
    // when the property is not set at all.
    assumeTrue("true".equals(System.getProperty("run.rest.tests")));
    ...
}
This will only allow the test to run if you pass -Drun.rest.tests=true to your build.
What you are looking for are integration tests. While the scope of a unit test is usually a single class, the scope of an integration test is a whole component in its environment, and this includes the availability of external resources such as your remote REST service. Yes, you should definitely keep integration tests separate from unit tests. How this can be done in your environment depends on your build process.
For instance, in case you work with Maven, there is the Maven Failsafe Plugin, which targets integration testing in your build process.
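With Failsafe, the split can be done purely by naming convention: its default includes match classes named *IT.java (among others), which are run during mvn verify, while Surefire's mvn test leaves them alone. A sketch:

import org.junit.Test;

// The IT suffix matches Failsafe's default includes, so this class runs
// during `mvn verify` (integration-test phase) but not during the ordinary
// Surefire unit-test run.
public class ThirdPartyRestIT {

    @Test
    public void liveEndpointAnswers() throws Exception {
        // call the real, remote REST API here with valid credentials
        // and assert on the response
    }
}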
I'm going to execute some Selenium tests after each Bamboo build. As far as I can see, the best way is to store them in a separate repo and use a specific project (or a stage in an existing one) to run these tests. But there is an issue I can't figure out. I'm using deployment plans to deliver the product to the development environment after the build, so I'd like my tests to be executed only if the deployment was successful. Does anybody know how to express this properly in terms of Bamboo triggers? Thank you.
It's a rather confusing and complicated process. As we all know, Selenium needs a live website to point to in order to execute the tests. There are several ways to accomplish this using Bamboo. I assume you already have the build pipeline set up for automatic deployment. Depending on what you want and how you deploy, several agents can be used to execute the tests. Another way is to use Selenium Grid: you trigger the Selenium task after the deployment happens, using several slaves. A grid creates the hub-and-slaves relationship and tells the hub to execute the tests accordingly. Here is some info about the plug-in that can be used to trigger Selenium TestNG tests. And, of course, as you said, you want to make the Selenium task dependent on the deployment, so that if the deployment fails the tests will not run. Hope this helps!
I'm currently writing a Java client-server application. I want to implement two libraries, one for the client and one for the server. The client-server communication has a very strict protocol, which I want to test with JUnit.
As build tool I'm using Maven, and a Hudson server for continuous integration.
Actually I do not have any good idea how to test these client/server libraries.
I have the following approaches:
Just write a dummy client for testing the server, and write a dummy server to test the client.
Disadvantages: Unfortunately this results in a lot of extra work. And I could not be 100% sure that client and server work together, because I cannot be sure that the two sets of tests are completely identical.
Write a separate test project that tests the client and the server together.
Disadvantages: The unit tests do not belong to the projects themselves, so Hudson will not run them automatically. Everyone who changes anything in one of these libraries will have to run the tests manually to ensure everything is correct. Also, I will not receive any code coverage report.
Are there any better approaches to testing code like that?
Maybe a Maven multi-module project, or something like that.
I hope someone has a good solution for this issue.
Thanks.
Think of all your code as "transforms input to output": X -> [A] -> Y
X is the data that goes in, [A] is the transformer, Y is the output. In your case, you have this setup:
[Client] -> X -> [Server] -> Y -> [Client]
So the unit tests work like this:
You need a test that runs the client code to generate X. Verify that the code actually produces X with an assert. X should be a final static String in the code.
Use the constant X in a second test to call the server code which transforms it into Y (another constant).
A third test makes sure that the client code can parse the input Y.
This way, you can keep the tests independent and still make sure that the important parts work: The interface between the components.
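A sketch of those three tests, with ClientCodec and ServerHandler as hypothetical stand-ins for your real client and server classes, and a made-up login exchange as the protocol:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ProtocolContractTest {

    // The shared constants are the contract between the three tests.
    static final String X = "LOGIN alice";   // what the client must produce
    static final String Y = "WELCOME alice"; // what the server must answer

    @Test
    public void clientProducesX() {
        assertEquals(X, new ClientCodec().buildLoginRequest("alice"));
    }

    @Test
    public void serverTransformsXIntoY() {
        assertEquals(Y, new ServerHandler().handle(X));
    }

    @Test
    public void clientParsesY() {
        assertEquals("alice", new ClientCodec().parseWelcome(Y));
    }
}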
My suggestion would be to use two levels of testing:
For your client/server project, include some mocking in your unit tests to ensure the object interfaces are working as expected.
Following the build, have a more extensive integration test run, with automation to install the compiled client and server on one or more test systems. Then you can ensure that all the particulars of the protocol are tested thoroughly. Have this integration test project triggered on each successful build of the client/server project. You can use JUnit for this and still receive the conventional report from Hudson.
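For the first level, a sketch of what such a mocked unit test could look like, assuming Mockito; Transport and GameClient are hypothetical stand-ins for your own types:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class GameClientTest {

    // Hypothetical interface the client writes protocol messages to.
    interface Transport {
        void send(String message);
    }

    @Test
    public void loginSendsProtocolCompliantMessage() {
        Transport transport = mock(Transport.class);
        GameClient client = new GameClient(transport); // hypothetical client type
        client.login("alice");
        // the object interface is verified without any real server involved
        verify(transport).send("LOGIN alice");
    }
}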
The latest approach to solving this problem is to use Docker containers. Create a Dockerfile with a base image and all the necessary dependencies required for your client-server application. Create a separate container for each node type of your distributed client-server system, and test all the entry-point server API/client interactions using TestNG or JUnit. The best part of this approach is that you are not mocking any service calls; in most cases you can orchestrate all the end-to-end client-server interactions.
There is a bit of a learning curve involved in this approach, but Docker is becoming highly popular in the dev community, especially for solving this kind of problem.
Here is an example of how you could use the Docker client API to pull Docker images in your JUnit test:
https://github.com/influxdb/influxdb-java/blob/master/src/test/java/org/influxdb/InfluxDBTest.java
The approach described above is now an open-source product: Testcontainers.
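With Testcontainers, the JUnit side of the Docker approach can be as small as this (the image name mygame/server:latest and port 9000 are placeholders):

import org.junit.Test;
import org.testcontainers.containers.GenericContainer;

public class ServerContainerIT {

    @Test
    public void clientCanTalkToContainerisedServer() {
        try (GenericContainer<?> server =
                     new GenericContainer<>("mygame/server:latest") // placeholder image
                             .withExposedPorts(9000)) {
            server.start();
            // Testcontainers maps the container port to a random free host port
            String host = server.getHost();
            int port = server.getMappedPort(9000);
            // run the real client against host:port and assert on its replies
        }
    }
}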
So finally the resolution was to build a multi-module project, with a separate test module that includes the server and the client modules.
Works great in Hudson. And even better in the Eclipse IDE.
Thanks @Aaron for the hint.