Test a non-idempotent method of a webservice - java

I am searching for a clean and simple way to write tests for a single non-idempotent method of a web service. So far, I haven't found a satisfying way to handle this.
For example, I have a DELETE method, which deletes entities in a database. It returns a 200 in case of successful deletion of the given entity. In the test, I call it with a specific entity id which is then deleted. In case of a second test run, it will fail, because the entity doesn't exist anymore.
To work around that, I would need to put e.g. a POST call within the test to create an entity before deleting it. But that mixes up my API tests: if I get a test failure, I can't be sure whether the POST or the following DELETE failed. If possible, I only want one single endpoint to be called in one single test.
Is there a better way around this? Is there a standard pattern?

Related

How to make this delete test independent?

I want to check the DELETE response. Should I create the employee in this test, or should that happen in a different place (a Before method, for example)?
@Test(description = "positive")
public static void deleteEmployeeEducation() {
    Response res = given().
            spec(Specifications.getRequestSpecAsEmpl()).
            body(Payloads.addEmployeeEducation("GGGG", "ingieneer", 2005, 2010)).
            when().
            put(Endpoints.createEmployeeEducation()).
            then().log().all().extract().response();
    JsonPath js = ReusableMethods.rawToJson(res);
    Integer educationId = js.get("[0].id");

    given().
            spec(Specifications.getRequestSpecAsEmpl()).
            delete(Endpoints.deleteEmployeeEducation() + educationId).
            then().log().all().
            assertThat().statusCode(204);
}
If you want your delete test to be independent, then yes, it makes sense to first create the user in this test and then assert that deleting it works. The logical prerequisite is that the insert operation has already been tested successfully.
I wouldn't necessarily advise that you seek to make this delete test independent, however. Typically, in cases like this where you're testing CRUD operations provided by an API, it makes sense to have an integration test, with a logical order. If your API exposes insert, select, delete, update operations, then an integration test could be this sequence of tests, in this order:
Test insert works, with user 123
Test select works and returns user 123
Test update works, and updates user 123 to user 234. Probably add a select assertion within this test, and assert the select returns user 234
Test delete works. Probably add a select assertion which asserts no user is returned.
That would be my advice for the design of your test class, if I am correct in understanding that you're simply testing different CRUD operations.
The @Before method should be used for setting up prerequisites for the tests. It can make sense to create your employee there if all your tests relate to the same employee, as in the sketch below. Otherwise, I would recommend my approach with the integration test.
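A rough sketch of that setup approach, reusing the helpers from your snippet (the success status code of the PUT and the instance fields are assumptions; adjust to your API):

// Sketch only: create a fresh education record before every test,
// so the delete test never depends on another test having run first.
private Integer educationId;

@BeforeMethod
public void createEducation() {
    Response res = given().
            spec(Specifications.getRequestSpecAsEmpl()).
            body(Payloads.addEmployeeEducation("GGGG", "ingieneer", 2005, 2010)).
            when().
            put(Endpoints.createEmployeeEducation()).
            then().statusCode(200).extract().response();   // assumed success code
    educationId = ReusableMethods.rawToJson(res).get("[0].id");
}

@Test(description = "positive")
public void deleteEmployeeEducation() {
    // The test itself only exercises the DELETE endpoint
    given().
            spec(Specifications.getRequestSpecAsEmpl()).
            delete(Endpoints.deleteEmployeeEducation() + educationId).
            then().
            assertThat().statusCode(204);
}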
Let me know if this helps.

Data dependent tests

I'm writing tests for my "MySQL Requests Manager", and the problem is that some of the tests depend on data contained in the database. So if another test deletes the required records, or someone else deletes them, those tests will fail even if the code is correct.
I'm thinking about two approaches here:
1. In the test itself, back up all the needed data beforehand, run the test, and restore the data from the backup afterwards. But this is more error-prone and "heavier", in my humble opinion.
2. Before running one of the tests, or even all of them, create a whole new database with the structure and required data (from a previously made dump, I think). This involves only two 'global' actions: creating the database and dropping it. Of course, I need a totally isolated MySQL user and database for this.
What do you think, and what can you recommend? How do other programmers deal with this kind of issue?
Here's a different idea if you want to check it out: there's a Java framework, Mockito, that is pretty helpful in cases like this. With it, you can create 'mock' instances of certain objects/services so that you avoid actually instantiating them. With a mock, you can return a custom/hard-coded result and test that your service handles that response correctly. For example, say you have a class 'SQLTestService' with a method called 'getData()'. You can create a mock of 'SQLTestService' and have it return a specific value when 'getData()' is called. That way your tests are never actually dependent on the data in the DB, and you can test for a specific outcome that you know your service should be able to handle.
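A minimal sketch of that idea (SQLTestService, getData() and the consuming ReportService are hypothetical names, just to illustrate the pattern):

import static org.mockito.Mockito.*;
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import org.junit.Test;

public class ReportServiceTest {

    @Test
    public void buildsReportFromServiceData() {
        // Mock the data source instead of hitting a real MySQL instance
        SQLTestService dataSource = mock(SQLTestService.class);
        when(dataSource.getData()).thenReturn(Arrays.asList("row1", "row2"));

        ReportService report = new ReportService(dataSource);

        // The outcome depends only on the hard-coded stub, never on DB state
        assertEquals(2, report.countRows());
        verify(dataSource).getData();
    }
}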
When writing unit tests, one should:
Use a test DB
Load data before each test (or load data before all tests)
@Before
public void setUp() {
    // insert your test data here
}
Drop data after each test (or drop data after all tests)
@After
public void tearDown() {
    // drop your test data here
}
That way the DB is system-independent, and each test runs in isolation, without fear of losing data or of interference between tests.
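A rough sketch of what those hooks could look like with plain JDBC against a dedicated test database (the table name, columns, connection URL and credentials are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.After;
import org.junit.Before;

public class RequestsManagerTest {

    private Connection conn;

    @Before
    public void setUp() throws Exception {
        // Dedicated, isolated test database -- never the production one
        conn = DriverManager.getConnection("jdbc:mysql://localhost/testdb", "test_user", "test_pw");
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("INSERT INTO requests (id, status) VALUES (1, 'NEW')");
        }
    }

    @After
    public void tearDown() throws Exception {
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("DELETE FROM requests");
        }
        conn.close();
    }

    // ... tests that rely only on the known row inserted above ...
}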

Pact provider test running app instances

Currently I'm using pact-jvm-consumer/provider-junit_2.11 from the au.com.dius lib. I got my consumer pact working and generating pacts, but the problem comes when I try to use these in my provider service.
The idea is to make all the pacts part of the JUnit tests, so everyone could run their unit tests locally without worrying about additional pact tests.
The main question is:
How do I handle this, assuming the service under test requires another service (an authorization one) and a DB as a data feeder? I'm not quite convinced that spinning up these instances locally each time and then killing them does the trick. (I would like to perform the tests before even deploying to any environment.)
Should this be handled with some kind of 'hack switch' that always returns true, i.e. an authorized user under 'some circumstances', and a mocked data feeder? Or should it be handled in some other way?
Secondly (the side question):
Once I have my pact ready, how should I test it against the consumer? So far I have things like the following (which works just fine, but I'm also not sure about it):
assertThat(result, instanceOf(DataStructure.class)); // as an example
The above is to make sure that the data I've received and pushed to my consumer is in the exact format I've been expecting. Is that OK, or is the correct approach to unpack all of these and check separately whether they are e.g. Maps or Strings?
Thanks in advance!
Here are some thoughts on stubbing services during verification:
https://github.com/pact-foundation/pact-ruby/wiki/FAQ#should-the-database-or-any-other-part-of-the-provider-be-stubbed
The pact authors' experience with using pacts to test microservices has been that using the set_up hooks to populate the database, and running pact:verify with all the real provider code has worked very well, and gives us full confidence that the end to end scenario will work in the deployed code.
However, if you have a large and complex provider, you might decide to stub some of your application code. You will definitely need to stub calls to downstream systems or to set up error scenarios. Make sure, if you stub, that you don't stub the code that actually parses the request and pulls the expected data out, because otherwise the consumer could be sending absolute rubbish, and the pact:verify won't fail because that code won't get executed. If the validation happens when you insert a record into the datasource, either don't stub anything, or rethink your validation code.
I personally would stub an authentication service (assuming you already have some other tests to show that you're invoking the authentication service correctly) but I generally use the real database, unless this complicates things such that using a mock is "cheaper" (in time, effort, maintainability).
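For illustration, a provider verification test with pact-jvm-provider-junit roughly looks like the sketch below, where the @State method plays the role of pact-ruby's set_up hook. The provider name, pact folder, port and state description are assumptions, and the annotation/package names may differ between versions of the 2.11 artifact, so check the docs for your release:

import au.com.dius.pact.provider.junit.PactRunner;
import au.com.dius.pact.provider.junit.Provider;
import au.com.dius.pact.provider.junit.State;
import au.com.dius.pact.provider.junit.loader.PactFolder;
import au.com.dius.pact.provider.junit.target.HttpTarget;
import au.com.dius.pact.provider.junit.target.Target;
import au.com.dius.pact.provider.junit.target.TestTarget;
import org.junit.runner.RunWith;

@RunWith(PactRunner.class)
@Provider("my-provider")          // assumed provider name
@PactFolder("target/pacts")       // wherever the consumer pacts were written
public class ProviderPactTest {

    @TestTarget
    public final Target target = new HttpTarget(8080); // provider started locally for the test

    @State("user 123 has education records")
    public void userWithEducationRecords() {
        // Equivalent of a set_up hook: insert the rows this interaction
        // expects into the (real or in-memory) test database, and/or
        // switch the stubbed authorization service to "authorized".
    }
}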
In regards to your second question, I'm not exactly sure what you're talking about, but I think you're talking about making assertions about the properties of the object that has been unmarshalled from the mocked response (in the consumer tests). I would have one test that checked every property, to make sure that I was using the correct property names in my unmarshalling code. But as I said, I would only do this once (or however many times it was required to make sure I had checked every property name once). In the rest of the tests, I would just assert that the correct object class was returned.
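For example (DataStructure, its getters and the fetching client are hypothetical, matching the assertion from your question):

// One test that pins down every property, so a renamed or mis-mapped
// field is caught by the unmarshalling check...
@Test
public void unmarshalsEveryField() {
    DataStructure result = client.fetchDataStructure();
    assertThat(result.getId(), is(123));
    assertThat(result.getName(), is("expected name"));
    assertThat(result.getTags(), contains("a", "b"));
}

// ...and in the remaining tests, asserting the type is enough.
@Test
public void returnsDataStructure() {
    assertThat(client.fetchDataStructure(), instanceOf(DataStructure.class));
}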

Test via JUnit DAO class

I have a DAO class in which I need to test a method called getItemById(), which returns an Item object from a DB table.
As far as I understand, I have to create an Item object in that test and check whether it equals the one returned from the method? Or do I just have to check that it returns an Item object?
What if the table is empty, or there is no row with that id at all?
Sorry, this is quite a newbie question, but I can't make it clear in my head. Please help!
Running tests against a database where you can't predict what's in it is not effective; any test that is resilient enough to accommodate changing data is going to be worthless for the purpose of confirming whether the code under test actually does the right thing. I would make the test use its own database instance, so that there's no question of interference from other users mucking up my test, or my test changing data out from under somebody else. The ideal choice would be an in-memory database like H2, that the test can instantiate and throw away when it's done with it. That way the test can run anywhere (for instance on a CI server), with the same results.
The test needs to run the DDL to create the schema and populate the database before executing. There are different tools you can use for this. DbUnit is popular; there is also an alternative called DbSetup which is supposed to be less complicated. You can have separate test data for different scenarios. DbUnit has tools to extract data from a database to make it easier to create your test data.
Since the database is under your control and you can populate it as you wish, you should verify that the returned object's fields are what you expect based on the populated data. Make the test as specific as possible.
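A sketch of such a test against an in-memory H2 database (the schema, the injected-connection constructor, the Item getters and the null return for a missing row are all assumptions; adapt to your DAO):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ItemDaoTest {

    private Connection conn;
    private ItemDao dao;

    @Before
    public void setUp() throws Exception {
        // Fresh in-memory H2 database, thrown away when the connection closes
        conn = DriverManager.getConnection("jdbc:h2:mem:itemtest");
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE item (id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("INSERT INTO item VALUES (42, 'widget')");
        }
        dao = new ItemDao(conn);   // assumes the connection can be injected
    }

    @After
    public void tearDown() throws Exception {
        conn.close();
    }

    @Test
    public void returnsItemWithExpectedFields() {
        Item item = dao.getItemById(42);
        assertEquals(42, item.getId());
        assertEquals("widget", item.getName());
    }

    @Test
    public void returnsNothingWhenRowIsMissing() {
        // Covers the "no row with that id" case from the question;
        // adjust if your DAO throws or returns an Optional instead.
        assertNull(dao.getItemById(999));
    }
}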
For testing the SQL and how the object is mapped from the ResultSet, it makes sense to use a database. For some parts, though, it makes sense to use a unit test that doesn't touch the database and uses mocks. For instance, it would be good to confirm that the connection gets closed in all cases; it's easier to use mocks than it is to cause a SQLException in your code.
Testing using mocks would be easier if the DBConnection class was injected instead of being instantiated within the method. If you changed the code to inject the DBConnection then you could write a unit test (one using mocks that doesn't use a database) that checks whether the connection gets closed.
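A sketch of that mock-based check, assuming the connection is injected as above and that the failure path can be triggered through the statement (the DAO internals are assumptions):

import static org.mockito.Mockito.*;

import java.sql.Connection;
import java.sql.SQLException;
import org.junit.Test;

public class ItemDaoConnectionTest {

    @Test
    public void closesConnectionEvenWhenQueryFails() throws Exception {
        Connection conn = mock(Connection.class);
        // Force the failure path without needing a broken database
        when(conn.prepareStatement(anyString())).thenThrow(new SQLException("boom"));

        ItemDao dao = new ItemDao(conn);
        try {
            dao.getItemById(42);
        } catch (Exception expected) {
            // the DAO may wrap or rethrow; we only care about cleanup here
        }

        verify(conn).close();
    }
}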
To perform a unit test, you should walk through three steps:
Prepare the test environment (e.g. populate the DB with known test data), so you won't have to ask whether the table is empty or not.
Perform the test and assert the result.
Do cleanup, so the test won't influence other tests.
Besides, you should test all scenarios, because you need to handle all of them.

DAO test: the right way?

I want to test my class MyTypeDAO, implemented with Hibernate 4.1, using JUnit 4.9. I have the following question:
In my DAO, I have a findById method that retrieves an instance of my type by its ID. How do I test this method?
What I've done:
I create an instance of my type.
Then, I need to persist this instance, but how? Can I rely on my saveMyType method? I don't think so, since I'm in the test case and this method is not tested.
Then, I need to call the findById method with the ID of the instance created in step 1.
Finally, I check that the instance created in step 1 equals the one I get in step 3.
Any idea? What are the best practices?
I have the same question for the save method, since after running it, I need to retrieve the saved instance. Here too, I don't think I can rely on my findById method, since it hasn't been tested yet.
Thanks
One possible way is:
Create an in-memory DB for testing, load the contents of this DB from a predefined SQL script, and then test your DAO classes against this database.
Every time you start the tests, the database will be created from scratch using the SQL script, and you will know which id should return a result and which should not.
See DbUnit (from satoshi's comment).
I don't think you have much choice here. It's not good practice to have orthogonal tests (tests that test two things or that depend on each other). Nevertheless, you should consider this a valid exception, and a fast one. You are right: persisting an object and then retrieving it is a good way to test this DAO layer.
Other options include having a record in the database that you are sure about and testing the retrieval (findById) on it, and then a second test that persists an object and removes it in the teardown method.
But really, it would be simpler to test loading and saving together, and it makes a lot of sense.
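A minimal sketch of that combined round trip (MyTypeDAO, saveMyType and findById are the names from your question; the constructor, the getters and a hibernate-test.cfg.xml pointing at an in-memory database are assumptions):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MyTypeDAOTest {

    private static SessionFactory sessionFactory;

    @BeforeClass
    public static void setUpClass() {
        // hibernate-test.cfg.xml is assumed to point at an in-memory DB (e.g. H2)
        sessionFactory = new Configuration().configure("hibernate-test.cfg.xml").buildSessionFactory();
    }

    @AfterClass
    public static void tearDownClass() {
        sessionFactory.close();
    }

    @Test
    public void saveThenFindByIdRoundTrip() {
        MyTypeDAO dao = new MyTypeDAO(sessionFactory);   // constructor is an assumption

        MyType original = new MyType();
        original.setName("test-entity");

        dao.saveMyType(original);                         // save under test...
        MyType loaded = dao.findById(original.getId());   // ...and load under test, together

        assertNotNull(loaded);
        assertEquals(original.getName(), loaded.getName());
    }
}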
