I am working on integration tests that get responses from an API A.
API A interacts with another API B, which in turn calls a web service to get data.
The problem is that the data may change in the future, so the integration tests may fail, and every time the data changes I have to edit the tests to keep them passing.
I want to mock the web service that provides the data, but I don't know how to tell API B to call the mock only during tests.
Does anyone have an idea of the best way to do this?
You can use tools like http://rest-assured.io/ or http://wiremock.org/.
With these, your API calls are made the same way they normally would be (you probably just need to change the hostname). You can then return a given result for a particular URI, Content-Type, etc.
It is even possible to do assertions, to see whether the request actually took place, and to do some checking on the request content.
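For example, a minimal WireMock sketch (JUnit 4) of what this could look like; the port, endpoint path, and payload are invented for illustration, and API B's client would be pointed at http://localhost:8089 in the test configuration:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.junit.WireMockRule;
import org.junit.Rule;
import org.junit.Test;

public class ExternalServiceMockTest {

    // Starts a WireMock server on port 8089 for the duration of each test
    @Rule
    public WireMockRule wireMockRule = new WireMockRule(8089);

    @Test
    public void returnsCannedDataAndVerifiesTheCall() {
        // Stub: a GET on /data/42 returns a fixed JSON body
        stubFor(get(urlEqualTo("/data/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": 42, \"value\": \"stable test data\"}")));

        // ... exercise API A here, which makes API B hit the stub instead of the real service ...

        // Verify the stubbed endpoint was actually called
        verify(getRequestedFor(urlEqualTo("/data/42")));
    }
}
```

Because the mock only differs from the real service by hostname/port, switching between them is usually just a matter of test configuration.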
Related
I have a set of Spring services that use a configuration database to change how they perform their tasks.
Whenever a change is made to that database I want to kick off integration tests that will test that the services are still working and that the changed configuration didn't break everything.
My idea is to make another spring service that will be called after a successful database change is made and can check if things are good to go (using Rest Assured to call the services, etc.).
I'm not sure if this is the right approach as I've never done something like this.
Looking for any alternative ideas or pretty much anything that could assist with this task.
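For reference, a minimal Rest Assured check of the kind described above might look like this; the base URI, endpoint, and expected response field are hypothetical:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import io.restassured.RestAssured;
import org.junit.BeforeClass;
import org.junit.Test;

public class PostConfigChangeSmokeTest {

    @BeforeClass
    public static void setUp() {
        // Assumed location of the service under test
        RestAssured.baseURI = "http://localhost:8080";
    }

    @Test
    public void serviceStillAnswersAfterConfigChange() {
        given()
        .when()
            .get("/orders/123")                // hypothetical endpoint
        .then()
            .statusCode(200)
            .body("status", equalTo("OK"));    // hypothetical response field
    }
}
```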
I need to get mock data for a client calling Spring RESTful web services. I know that for unit testing we can use mocks, but my case is not testing.
Use hard-coded data, an external data file, or an external data source to store the mock data. I understand the need to host a service that responds but may not be fully wired up, to allow early integration with downstream clients. These are the techniques I use; each has pros and cons:
Hard-coded data - as you say, not intuitive or easy to change. Okay as a temporary measure.
External data file - can be updated dynamically as needed (see the sketch below).
External data source - lets you create multiple scenarios with dynamic mock payloads and change them on demand.
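As a sketch of the external data file option, a Spring controller can serve whatever JSON currently sits in a file on disk, so the mock payload can be changed without redeploying; the file path and endpoint below are assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MockDataController {

    // External file that can be edited on the fly to change the mock payload
    private static final String MOCK_FILE = "/opt/mock-data/customer.json";

    @GetMapping(value = "/mock/customer", produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<String> customer() throws Exception {
        // Read the file on every request so edits take effect immediately
        String body = new String(Files.readAllBytes(Paths.get(MOCK_FILE)), StandardCharsets.UTF_8);
        return ResponseEntity.ok(body);
    }
}
```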
Recently there was a discussion in my team about how to properly test a component of our system where the output is stored in a database. We use DDD to create our system, so the component ultimately talks to a repository that has different stores implemented to talk to MongoDB. As testing framework we use Cucumber, and the database we use for testing is an in-memory version of Mongo.
Up until now, all our scenarios had a command as input and the output was an event, so our assertions were done on the event. But now we have a scenario where the event is processed and the result is stored in a database. The result can be retrieved using a REST call after that happens.
The discussion was about the way to test these last two scenarios. For some, the correct way is to check the in-memory database after the event is processed, because that is the output of the system. The final part of the system is the stores, and they have to be tested as part of the scenario as well. Testing what the in-memory database contains is the right way, as the stores still use the same production-ready logic to write the output. For convenience, we would use the repositories to retrieve this data, as it is easier that way, even though it means using something not related to the scenario at hand.
On the other hand, some people think we shouldn't be checking the database, as that is another component which we shouldn't be accessing in the test. Instead, because in this case the REST call just retrieves the data, we should use the REST call as part of the test to verify the output. This way, our scenario would include these two parts, the storing and the retrieving, instead of splitting the tests.
Is there any correct answer to this? Are we missing any point here?
Thanks.
I'd say verifying with a REST call is the correct way to do it here. Otherwise it wouldn't really be black-box testing, and your test would depend on internal implementation details (your database structure). You usually want to see what effect your application has on the "outside world", and your database is not part of this IMO.
This is all assuming the tests you are creating are intended to be blackbox tests. If it's an integration test (~grey box I guess?) then IMO checking the database using the repository is probably a better idea.
If it's intended to be a unit test, the dependencies of your component should be mocked. You can then use the mocks to verify that your component called the repository correctly.
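As a rough illustration of that unit-test case with Mockito (the component and repository below are minimal stand-ins with hypothetical names, not your real types):

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class EventProcessorTest {

    // Minimal stand-ins for the real domain types (hypothetical names)
    interface OrderRepository {
        void save(String orderId);
    }

    static class EventProcessor {
        private final OrderRepository repository;

        EventProcessor(OrderRepository repository) {
            this.repository = repository;
        }

        void handle(String orderCreatedEvent) {
            repository.save(orderCreatedEvent);
        }
    }

    @Test
    public void storesResultWhenEventIsProcessed() {
        // The repository dependency is mocked, so no database is touched
        OrderRepository repository = mock(OrderRepository.class);
        EventProcessor processor = new EventProcessor(repository);

        processor.handle("order-1");

        // Assert the interaction: the component asked the repository to save
        verify(repository).save("order-1");
    }
}
```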
If I misunderstood something, do let me know. :)
I have a number of low-level methods in my Play! 2.0 application (Java) that call an external web service (Neo4j via REST, to be specific). Each of them returns a Promise<WS.Response>. To test these methods I am currently installing a Callback<WS.Response> on their return values via onRedeem. The Callbacks contain the assertions to perform on the individual WS.Responses. Each test relies on some specific fixtures that I am installing/removing via setUpClass and tearDownClass, respectively.
The problem that I am facing is that due to my test code being fully asynchronous, the tear-down logic ends up getting called before all of the Callbacks have had a chance to run. As a result, not all fixtures are being removed, and the database is left in a state that is different from the state it was in before running the tests.
One way to fix this problem would be to call get() with some arbitrary timeout on the Promise objects returned by the functions that are being tested, but that solution seems fairly brittle and unreliable to me. (What if, for some reason not under my application's control, the Web Service calls do not complete within the timeout? In that case, my tests would fail or error out even though my code is actually correct.)
So my question is: Is there a way of testing code that calls external Web Services that is non-blocking and still ensures database consistency? And if there isn't, which of the two approaches outlined above is the "canonical"/accepted way of testing this kind of code?
What if, for some reason not under my application's control, the Web Service calls do not complete within the timeout?
That is a problem for any test that calls external web services, whether asynchronous or not. That is why you should mock out your web service calls in some way, either using a fake web service or a fake implementation of the code that accesses the web service.
You can use e.g. Betamax for that.
I have written tests for asynchronous code before, and I believe your "brittle" approach is actually the right one.
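As a sketch of that blocking approach: the test thread waits (with a generous timeout) until the WS call has completed, so the assertions run before tearDownClass removes the fixtures. The Neo4j URL and timeout are made up, and the exact signature of the blocking get() varies between Play versions:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import play.libs.F.Promise;
import play.libs.WS;

public class NodeEndpointTest {

    @Test
    public void nodeCanBeFetched() {
        // Fire the asynchronous call under test (hypothetical Neo4j REST URL)
        Promise<WS.Response> promise = WS.url("http://localhost:7474/db/data/node/1").get();

        // Block until the response arrives (or the timeout expires), then
        // run the assertions synchronously on the test thread
        WS.Response response = promise.get(5000L);

        assertEquals(200, response.getStatus());
    }
}
```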
I have a requirement where I need to call an existing Java function that is currently invoked from the UI through a JSP. The data is provided by a form.
I want to call that function from Java code.
Should I create an API that can be called from the Java code, passing all the parameters the function requires,
OR
create a mock of the form (if that is possible) and pass it to the JSP?
What is the recommended way?
If your code is within the same Web Application, you may want to get a handle to that JSP via a request dispatcher, then call that with wrapped request/response objects, suitably tweaked to hold just the parameters the JSP needs.
Using HttpClient may lead you to all kinds of issues, as this would go all the way down to the network layer (for starters: are you sure you can connect to your own app from the server? Are you sure you really know the IP/port? Are you sure there's no login or session required? And no security filter that makes sure your request comes via the load balancer? And so on...)
Going with an API (even if it means a code change to expose the function as one) may look cleaner, though. And if you're already using REST or SOAP, it may not be that difficult.
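A rough sketch of the request-dispatcher approach, assuming you are inside a servlet or filter in the same web application; the JSP path and parameter names are made up for illustration:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpServletResponse;

public class JspInvoker {

    // Wraps the current request so the JSP sees exactly the form parameters it expects
    static class FormSimulatingRequest extends HttpServletRequestWrapper {
        private final Map<String, String> params;

        FormSimulatingRequest(HttpServletRequest request, Map<String, String> formParams) {
            super(request);
            this.params = formParams;
        }

        @Override
        public String getParameter(String name) {
            return params.containsKey(name) ? params.get(name) : super.getParameter(name);
        }
    }

    // Call this from a servlet/controller in the same web application
    public static void invoke(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Map<String, String> formParams = new HashMap<>();
        formParams.put("customerId", "42");   // parameters the JSP normally gets from the form
        formParams.put("action", "update");

        RequestDispatcher dispatcher = request.getRequestDispatcher("/legacyForm.jsp");
        dispatcher.include(new FormSimulatingRequest(request, formParams), response);
    }
}
```

This stays inside the servlet container, so no network round-trip, login, or load balancer gets in the way.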