I recently started looking into BDD (using Gherkin + RestAssured). I need to mock a third-party service; below is my use case.
Service-A internally calls Service-B
The application is written in Go.
The BDD tests are in Java.
We have a CI pipeline that builds an RPM and deploys it onto a VM.
On that VM we run the BDD tests (currently Service-A and Service-B are deployed on the same VM).
Is there a way I can mock Service-B, so that I don't have to depend on Service-B? If yes, what would be the best approach here?
I have tried Go's httptest to mock the service at the unit-test level.
But how can the mocking be done after the RPM is created in the pipeline, with the BDD tests in place?
Thanks
If your Service A is calling Service B internally, rather than via web or RPC, then you can use dependency injection to inject a "fake" version of your Service B. (Note that this doesn't necessarily involve a dependency injection framework; constructor-based and property-based injection are also valid.) If Service B has no interface, extract one and use a thin adapter that calls either the real service or the fake, depending on the environment.
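For illustration, here is a minimal sketch of that shape in Java (all names, such as ServiceBClient and FakeServiceBClient, are made up); the same pattern translates to Go as an interface with two implementing structs:

// Abstraction of Service B that Service A depends on (hypothetical contract).
interface ServiceBClient {
    String fetchAccountStatus(String accountId);
}

// Thin adapter that would call the real Service B over HTTP.
class HttpServiceBClient implements ServiceBClient {
    @Override
    public String fetchAccountStatus(String accountId) {
        // real HTTP call to Service B goes here
        throw new UnsupportedOperationException("not shown in this sketch");
    }
}

// Fake used in the BDD environment: returns canned, predictable responses.
class FakeServiceBClient implements ServiceBClient {
    @Override
    public String fetchAccountStatus(String accountId) {
        return "ACTIVE";
    }
}

// Service A receives the dependency through its constructor (constructor-based injection).
class ServiceA {
    private final ServiceBClient serviceB;

    ServiceA(ServiceBClient serviceB) {
        this.serviceB = serviceB;
    }

    boolean isAccountUsable(String accountId) {
        return "ACTIVE".equals(serviceB.fetchAccountStatus(accountId));
    }
}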
You won't need to change your scenarios as long as they are only interacting with Service A's user interface or API.
You will need to change the way the build pipeline works, so that it deploys with your fake instead of the real code.
You can even do this at runtime, switching over from the fake to the real thing by having the adapter call the relevant service. The switch or deployment can be triggered by environment variables or by build arguments.
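A sketch of such a switch, assuming a made-up environment variable USE_FAKE_SERVICE_B and the types from the sketch above:

// Hypothetical factory: choose the fake or the real adapter from an environment variable.
class ServiceBClientFactory {
    static ServiceBClient create() {
        boolean useFake = Boolean.parseBoolean(
                System.getenv().getOrDefault("USE_FAKE_SERVICE_B", "false"));
        return useFake ? new FakeServiceBClient() : new HttpServiceBClient();
    }
}

The RPM can then ship both implementations, and the BDD environment only needs to export the variable before Service A starts.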
Be careful not to deploy your test service to production though!
If you're using continuous deployment, then the last step in the build pipeline should ideally deploy and test interaction with the real service. If for some reason testing against the real service is the only way you can work, there are still a couple of things you can do that might help:
You can stub the data that Service B uses, so that it behaves in a predictable way
You can use a test instance. Reach out to your service provider and see if they have one for you. I recommend that you still check that deployment of the real service succeeds, ideally with an automated test of some sort, even if that has to be run in production. It only needs to be a basic smoke test to check that the system is wired up. Note that the easier it is to deploy, the easier it will be to recover from any mistakes; if you can't deploy quickly then you will need to be more thorough in your checking.
If the RPM is created and deployed without any kind of fake or test instance, and you have no way to configure the environment to use such a fake or test instance, then you will not be able to mock it out. The build pipeline has to be a part of deploying a fake. That won't be a problem if you have control over your CI pipeline; otherwise reach out to your build team. They may have experience or be able to point you to someone else who can help you. Great BDD is driven by conversations, after all!
Related
Avoid restarting the server every time when tiny changes on integration tests
I'm new to Spring and feeling a lot of pain on writing integration tests on Spring.
For example, say I'm running an integration test and change the code below.
As you can see, nothing in the change touches the server code.
To run the updated integration test code, I have to launch the web server and do the data seeding again, which can take around 5 minutes.
It's hard to imagine how people manage development this way.
I'm not sure if it is possible to launch the web server separately via bootRun and have the integration tests communicate with that dedicated server, without having to reboot the server for every test run.
Usually, which part of the config file defines this behavior?
I took over this project and have to figure it out on my own.
Before
serverResp.then()
    .statusCode(203)
    .body("query.startDateTime", equalTo("2018-07-01T00:00:00"))
After
serverResp.then()
    .statusCode(200)
    .body("query.endDateTime", equalTo("2020-07-01T00:00:00"))
There are many different ways to do integration testing.
Spring has a built-in framework that runs the integration test in the same JVM as the real server.
If the application is heavy (usually relevant for monoliths) it can indeed take time to start, so the best you can do is to choose which parts of the application to load, i.e. only those relevant to the test. In Spring there are ways to achieve this; the question is whether your application code allows such separation.
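For example, with Spring Boot's test slices you can load only the web layer for a single controller instead of the whole context (a sketch; OrderController and OrderService are made-up names):

import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.web.servlet.MockMvc;

// Starts only the MVC infrastructure and OrderController, not the full application.
@WebMvcTest(OrderController.class)
class OrderControllerSliceTest {

    @Autowired
    private MockMvc mockMvc;

    // Collaborators outside the web slice are replaced with mocks.
    @MockBean
    private OrderService orderService;

    @Test
    void returnsOkForAnExistingOrder() throws Exception {
        when(orderService.findName(42L)).thenReturn("book");

        mockMvc.perform(get("/orders/42"))
               .andExpect(status().isOk());
    }
}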
Then there is a way to write integration tests so that they communicate with a remote server which is already up and running "in advance". During the build this can be done once before the testing phase, and when the tests are done the server should be shut down.
Usually tests like this have some way to specify the server host/port for communication (leaving aside security, credentials, etc.).
So you can check whether some special flag/system property is set and read the host/port from there.
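For instance, with RestAssured (which the snippets in the question appear to use), a base test class can pick the target up from system properties and fall back to local defaults; the property names here are placeholders:

import io.restassured.RestAssured;
import org.junit.jupiter.api.BeforeAll;

class RemoteServerTestBase {

    @BeforeAll
    static void pointTestsAtTheRunningServer() {
        // e.g. ./gradlew integrationTest -Dtest.server.host=http://ci-box -Dtest.server.port=8080
        RestAssured.baseURI = System.getProperty("test.server.host", "http://localhost");
        RestAssured.port = Integer.getInteger("test.server.port", 8080);
    }
}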
A good thing in this approach is that you won't need to restart the server before every test.
The bad thing is that it doesn't always make testing easy: if your test deploys some test data, it must also remove that data at the end of the test.
Tests must be designed carefully.
A third approach is a kind of hybrid, and generally not mainstream IMO:
you can create a special setup that runs the test in a different JVM (externally), but once the test starts, its bytecode gets uploaded to the running server (the server must have a backdoor for this) and is actually executed on the server. Again, the server is up and running.
I once wrote a library to do this with Spock, but that was a long time ago and we didn't end up using it (that project was closed).
I don't want to self-advertise or anything, but you can check it out and maybe borrow technical ideas on how to do this.
I'm currently writing a Java program that is an interface to another server. The majority of the functions (over 90%) do something on the server. Currently, I'm just writing simple classes that run some actions on the server, and then I either check the results myself or add methods to the test that read back the written information.
Currently, I'm developing on my own computer, and have a version of the server running locally on a VM.
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to. I am not sure of the best way to go about my testing. I have my JUnit tests (on simple functions that do not interact externally) that run at every build. I can't seem to find an established way in JUnit to write tests that do not have to run at every build (perhaps only when the functions they cover change?).
Or, can anyone point me in the right direction of how best to handle my testing?
Thanks!
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to
This should have raised the alarms for you. Running the tests is what gives you feedback on whether you broke stuff. Not running them means you're blind. It does not mean that everything is fine.
There are several approaches, depending on how much access you have to the server code.
Full Access
If you're writing the server yourself, or you have access to the code, then you can create a test-kit for the server: a modified version of the server that runs completely in memory and allows you to control how the server responds, so you can simulate different scenarios.
This kind of test-kit is created by separating the logic parts of the server from its surroundings, and then mocking them or creating in-memory versions of them (such as databases, queues, file-systems, etc.). This allows the server to run very quickly and it can then be created and destroyed within the test itself.
Limited/No Access
If you have to write tests for integration with a server that's out of your control, such as a 3rd party API, then the approach is to write a "mock" of the remote service, and a contract test to check that the mock still behaves the same way as the real thing. I usually put those in a different build, and run that occasionally just to know that my mock server hasn't diverged from the real server.
Once you have your mock server, you can write an adapter layer for it, covered by integration tests. The rest of your code will only use the adapter, and therefore can be tested using plain unit tests.
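One common shape for the mock plus contract test (a sketch; AccountGateway and the test names are made up) is an abstract test class whose cases run unchanged against both the mock and the real service:

import static org.junit.jupiter.api.Assertions.assertNotNull;
import org.junit.jupiter.api.Test;

// The contract both the mock and the real adapter must honour.
interface AccountGateway {
    String lookupOwner(String accountId);
}

// Shared assertions: whatever implementation the subclass supplies must pass them.
abstract class AccountGatewayContractTest {

    protected abstract AccountGateway gateway();

    @Test
    void knownAccountHasAnOwner() {
        assertNotNull(gateway().lookupOwner("ACC-1"));
    }
}

// Runs on every build: fast, offline, uses the mock.
class MockAccountGatewayTest extends AccountGatewayContractTest {
    @Override
    protected AccountGateway gateway() {
        return accountId -> "mock-owner";
    }
}

// A RealAccountGatewayTest subclass would point at the real 3rd party API
// and live in the occasional build mentioned above.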
The second approach can, of course, be employed when you have full access as well, but usually writing the test-kit is better, since those kinds of tests tend to be duplicated across projects and teams, and then when the server changes a whole bunch of people need to fix their tests, whereas if the test-kit is written as part of the server code, it only has to be altered in one place.
We are currently improving the test coverage of a set of database-backed applications (or 'services') we are running by introducing functional tests. For me, functional tests treat the system under test (SUT) as a black box and test it through its public interface (be it a Web interface, REST, or our potential adventure into the messaging realm using AMQP).
For that, the test cases either A) bootstrap an instance of the application or B) use an instance that is already running.
The A version allows for test cases to easily test the current version of the system through the test phase of a build tool or inside a CI job. That is what e.g. the Grails functional test phase is for. Or Maven could be set up to do this.
The B version requires the system to already run but the system could be inside (or at least closer to) a production environment. Grails can do this through the -baseUrl option when executing functional tests.
What now puzzles me is how to achieve a required state of the service prior to the execution of every test case?
If I e.g. want to test a REST interface that does basic CRUD, how do I create an entity in the database so that I can test the HTTP GET for it?
I see different possibilities:
Using the same API (e.g. HTTP POST) to create the entity; a sketch of this follows after this list of options. Downside: changing the creation method breaks two test cases. Furthermore, there might not be a creation method for all APIs.
Adding an additional CRUD API for testing and only activating that in non-production environments. That API is then used for testing. Downside: adds additional code to the production system, API logic might not be trivial, e.g. creation of complex entity graphs (through aggregation/composition), and we need to make sure the API is not activated for production.
Basically the same approach is followed by the Grails Remote Control plugin. It allows you to "grab into your application" and invoke arbitrary code through serialisation. Downside: Feels "brittle". There might be similar mechanisms for different languages/frameworks (this question is not Grails specific).
Directly accessing the relational database and creating/deleting content, e.g. using DbUnit or just manually creating entities through JDBC. Downside: you duplicate creation/deletion logic and/or ORM inside the test case. Refactoring the DB breaks the test case though the SUT still works.
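For the first option, a sketch with RestAssured (the /widgets resource and its JSON shape are made up) could look like this, with the POST supplying the id that the GET assertion uses:

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import org.junit.jupiter.api.Test;

class WidgetCrudTest {

    @Test
    void getReturnsWhatWasPosted() {
        // Arrange: create the entity through the same public API.
        String id = given()
                .contentType("application/json")
                .body("{\"name\": \"gadget\"}")
            .when()
                .post("/widgets")
            .then()
                .statusCode(201)
                .extract().jsonPath().getString("id");

        // Act and assert: the GET under test sees the entity just created.
        given()
            .when()
                .get("/widgets/" + id)
            .then()
                .statusCode(200)
                .body("name", equalTo("gadget"));
    }
}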
Besides these possibilities, Grails, when using the -inline option for functional tests, allows accessing Spring services (since the application instance runs inside the same JVM as the test case). The same applies to Spring Boot "integration tests". But I cannot run the tests against an already running application version (as described as option B above).
So how do you do that? Did I miss any option for that?
Also, how do you guarantee that each test case cleans up after itself properly so that the next test case sees the SUT in the same state?
As with unit testing, you want to have a "clean" database before you run a functional test. You will need some setup/teardown functionality to bring the database into a defined state.
The easiest/fastest solution to clean the database is to delete all content with an SQL script. (For debugging it is also useful to run this in the test setup, so that the state of the database is kept after a test failure.) This can be maintained manually (it just contains delete <table> statements). If your database changes often you could try to generate the clean script (disable foreign keys to avoid ordering problems, then delete the tables).
To generate test data you can use an SQL script too, but that will be hard to maintain; better to create it in code. The code can be placed in ordinary services. If you don't need real production data, the build-test-data plugin is a great help at simplifying test data creation. If you are on the code side it also makes sense to re-use the production code to create test data, to avoid duplication.
To call the test data setup, simply use remote-control. I don't think it is more brittle than all the HTTP & Ajax stuff ;-). Since we now have all the creation code in a service, the only thing you need to call with remote-control is the service that creates the data. It does not have to get more complicated than remote { ctx.testDataService.setupDataForXyz() }. If it is that simple you can even drop remote-control and use a controller/action to run it.
Do not test too much detail with functional tests, to keep them from getting more complicated than they already are. :)
I have a web service which calls a third party web service.
Now I want to unit-test my web service. For this, should I mock the third party web service or is it fine to call it during the tests?
Is there any standard document on unit testing?
Yes, you should mock the third party web service for unit testing. Here are some advantages:
Your unit tests are independent of other components and really only test the unit and not also the web service. If you include the service in your unit tests and they fail, you won't know whether the problem is in your code or in the foreign web service.
Your tests are independent of the environment. If the Internet connection is ever down or you want to test on a machine that doesn't have Internet access at all, your test will still work.
Your tests will be much faster without actually connecting to the web service. This might not seem like a big thing but if you have many many tests (which you should), it can really get annoying.
You can also test the robustness of your unit by having the mock send unexpected things. This can always happen in the real world and if it does, your web service should react nicely and not just crash. There is no way to test that when using the real web service.
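A sketch of that last point with Mockito (the ThirdPartyClient interface and WeatherService are made up):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class WeatherServiceTest {

    interface ThirdPartyClient {
        String currentTemperature(String city);
    }

    // The unit under test: wraps the third-party call and must not crash when it fails.
    static class WeatherService {
        private final ThirdPartyClient client;

        WeatherService(ThirdPartyClient client) {
            this.client = client;
        }

        String describe(String city) {
            try {
                return "It is " + client.currentTemperature(city) + " in " + city;
            } catch (RuntimeException e) {
                return "Temperature for " + city + " is currently unavailable";
            }
        }
    }

    @Test
    void handlesAnUnreachableThirdPartyService() {
        ThirdPartyClient client = mock(ThirdPartyClient.class);
        when(client.currentTemperature("Oslo")).thenThrow(new RuntimeException("connection refused"));

        assertEquals("Temperature for Oslo is currently unavailable",
                new WeatherService(client).describe("Oslo"));
    }
}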
However, there is also integration testing. There, you test the complete setup, with all components put together (integrated). This is where you would use the real web service instead of the mock.
Both kinds of testing are important, but unit testing is typically done earlier and more frequently. It can (and should) be done before all components of the system are created. Integration testing requires a minimal amount of progress in most or all of the components.
This depends on what you're unit-testing.
So if you're testing whether you can successfully communicate with the third-party web service, you obviously wouldn't try to mock it. However, if you're unit-testing some business use cases that are part of your web service offering (unrelated to what the other modules/web services are doing), then you might want to mock the third-party web service.
You should test both, but both test cases do not fall under unit testing.
Unit testing is primarily used for testing individual, smaller pieces, i.e. classes and methods. Unit testing is something that should preferably happen during the build process without any human intervention. So as part of unit testing you should mock out the third-party web service. The advantage of mocking is that you can make the mocked object behave in many possible ways and test your classes/methods to make sure they handle all possible cases.
When multiple systems are involved, the test case falls under System/Integration/Functional testing. So as part of your System/Integration/Functional testing, you should call the methods in other webservice from your webservice and make sure everything works as expected.
Mocking is usually essential in unit testing components which have dependent components. This is so because in unit testing, you are limited to testing that your code works correctly (does what its contract says it will do). If this method, in trying to fulfil its part of the contract, depends upon other components doing their part correctly, it is perfectly fine to mock that part out (assuming that they work correctly).
If you don't mock the other dependent parts, you soon run into problems. First, you cannot be certain what behavior is exhibited by that component. Second, you cannot predict the results of your own test (because you don't know what inputs were supplied to your test).
I am working on a Java project that is split up into a web project and a back-end project. The web talks to the back-end via web service calls.
There is one class in the web project that makes all of the web service calls, and I would like to add testing around this class. I want to do unit testing, not functional testing, so I do not want the web service to actually be running in order to run the tests. If this class were simply passing the calls through to the back-end, I might be willing to overlook testing it; however, there is caching happening at this point, so I want to test that it is working correctly.
When the web service is generated with jax-ws wsgen, it creates an interface that the front end uses. I have used this generated interface to create a fake object for testing. This works pretty well, but there are issues with this approach.
I am currently the only one on my team doing unit testing, and so am the only one maintaining the test code. I would like the test code to be built when the rest of the code is built, but if someone else introduces a new method into one of the web service classes, then the interface will have the new method on it, and my fake object will not implement it and will therefore be broken.
The web and the back end code projects are not dependent on one another, and I do not want to introduce a dependency between them. So, introducing an interface on top of the web service endpoint does not seem plausible since if I put it in the back-end, my web code needs to reference it, and if I put it in the front-end, my back-end code needs to reference it. I also cannot extend the endpoint since this will also introduce a dependency between the projects.
I am unfamiliar with how web services work, and how the classes are generated for the web project to be able to refer to them. So, I do not know how to create an interface in the back end that will be available for me to use in the web project.
So, my question is, how would I get the interface available to my front-end project without introducing a project dependency (in Eclipse build path)? Or, is there another, better way to fake out the back-end web service that I am calling?
First off, I'd break out the caching code into a testable unit that does not directly depend upon the web service calls.
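A sketch of what that separation could look like (the names are hypothetical): the cache becomes a decorator over a small interface, so it can be unit-tested with a hand-written fake and no running web service.

import java.util.HashMap;
import java.util.Map;

// Small abstraction over the back-end web service call.
interface BackendClient {
    String fetchReport(String reportId);
}

// The caching logic to test, independent of JAX-WS and of any running server.
class CachingBackendClient implements BackendClient {
    private final BackendClient delegate;
    private final Map<String, String> cache = new HashMap<>();

    CachingBackendClient(BackendClient delegate) {
        this.delegate = delegate;
    }

    @Override
    public String fetchReport(String reportId) {
        return cache.computeIfAbsent(reportId, delegate::fetchReport);
    }
}

A unit test can then wrap a fake BackendClient that counts invocations and assert that repeated calls for the same id only hit the delegate once.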
As for the web services, I find it useful to have both functional tests that exercise the web services and other tests that mock out the web services. The functional tests can help you find edge cases that your mocks may miss.
For instance, I'm using Axis2 and generating stubs from the WSDL. For the mocks, I just implement or extend the generated stubs. In our case the real web service is implemented by an outside organization. Probing their web service through exploratory functional tests has revealed some exceptions that needed to be handled that were not apparent by just examining the generated stubs. I used this information to better mock these edge cases.