Testing asynchronous Web Service calls in Play Framework 2.0 - java

I have a number of low-level methods in my play! 2.0 application (Java) that are calling an external Web Service (Neo4j via REST, to be specific). Each of them returns a Promise<WS.Response>. To test these methods I am currently installing a Callback<WS.Response> on their return values via onRedeem. The Callbacks contain the assertions to perform on individual WS.Responses. Each test relies on some specific fixtures that I am installing/removing via setUpClass and tearDownClass, respectively.
The problem that I am facing is that due to my test code being fully asynchronous, the tear-down logic ends up getting called before all of the Callbacks have had a chance to run. As a result, not all fixtures are being removed, and the database is left in a state that is different from the state it was in before running the tests.
One way to fix this problem would be to call get() with some arbitrary timeout on the Promise objects returned by the functions that are being tested, but that solution seems fairly brittle and unreliable to me. (What if, for some reason not under my application's control, the Web Service calls do not complete within the timeout? In that case, my tests would fail or error out even though my code is actually correct.)
So my question is: Is there a way of testing code that calls external Web Services that is non-blocking and still ensures database consistency? And if there isn't, which of the two approaches outlined above is the "canonical"/accepted way of testing this kind of code?

What if, for some reason not under my application's control, the Web Service calls do not complete within the timeout?
That is a problem for any test that calls external web services, whether asynchronous or not. That is why you should mock out your web service calls in some way, either using a fake web service or a fake implementation of the code that accesses the web service.
You can use e.g. Betamax for that.
I have written tests for asynchronous code before, and I believe your "brittle" approach is actually the right one.
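To make that concrete, here is a minimal sketch of the blocking approach under discussion, written against the Play 2.0 Java API (where Promise.get(Long) takes a timeout in milliseconds, as the question describes). The findNode helper and the Neo4j URL are hypothetical stand-ins for the methods under test:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import play.libs.F.Promise;
import play.libs.WS;

public class NodeServiceTest {

    // Hypothetical stand-in for one of the low-level methods under test:
    // a REST call against a local Neo4j server, returning a Promise.
    static Promise<WS.Response> findNode(long id) {
        return WS.url("http://localhost:7474/db/data/node/" + id).get();
    }

    @Test
    public void findNodeReturnsOk() {
        Promise<WS.Response> promise = findNode(42);

        // Block for at most 5 seconds: if the external call hangs, the test
        // errors out instead of letting tearDownClass race the callbacks.
        WS.Response response = promise.get(5000L);

        assertEquals(200, response.getStatus());
    }
}
```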

Related

Tracking Spring method calls without using Aspects

For a few days now I have been stuck on a (for me) quite challenging problem.
In my current project we have a big SOA-based architecture. Our goal is to monitor and log all incoming requests, the invoked services, the invoked DAOs, and their results. For certain reasons we can't use aspects, so our idea is to attach directly to the JVM and observe what's going on.
In our research we found Byteman and Byte Buddy, both of which use the JVM Tool Interface (JVMTI) to attach to the VM and inject code into it.
Looking closer at Byteman, we discovered that we would have to specify a Byteman rule for every class we want to observe, which in our case is simply impossible.
Would there be a better, more efficient way to log all incoming requests, the invoked services, the invoked DAOs, and their results? Should we write our own agent that connects via JVMTI? What would you guys recommend?
I think that trying to track down every specific service method call yourself can quickly become overwhelming. Wouldn't it be simplest and smarter to use an APM (application performance monitoring) tool?
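If you do end up writing your own agent, here is a minimal sketch of what one built on Byte Buddy (mentioned in the question) could look like. The package prefix and matchers are hypothetical, and a real agent JAR additionally needs a Premain-Class entry in its manifest:

```java
import java.lang.instrument.Instrumentation;

import net.bytebuddy.agent.builder.AgentBuilder;
import net.bytebuddy.asm.Advice;
import net.bytebuddy.matcher.ElementMatchers;

public class LoggingAgent {

    public static void premain(String arguments, Instrumentation instrumentation) {
        new AgentBuilder.Default()
                // Hypothetical package prefix; widen or narrow as needed.
                .type(ElementMatchers.nameStartsWith("com.example.service"))
                .transform(new AgentBuilder.Transformer.ForAdvice()
                        .advice(ElementMatchers.isPublic(), LogAdvice.class.getName()))
                .installOn(instrumentation);
    }

    public static class LogAdvice {

        @Advice.OnMethodEnter
        public static long enter(@Advice.Origin String method) {
            System.out.println("ENTER " + method);
            return System.nanoTime();
        }

        @Advice.OnMethodExit(onThrowable = Throwable.class)
        public static void exit(@Advice.Origin String method, @Advice.Enter long start) {
            System.out.println("EXIT  " + method + " took "
                    + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }
}
```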

I need to build a REST client in Java that makes 10k REST API calls per execution of the application with the best performance. Any useful links would be helpful

Hi, I am using the Spring 4 AsyncRestTemplate to make 10k REST API calls to a web service. I have a method that creates the request object and a method that calls the web service. I am using the ListenableFuture classes, and the two methods to create and call are enclosed in another method where the response is handled as a future. Any useful links for such a task would be greatly appreciated.
First, set up your testing environment.
Then benchmark what you have.
Then adjust your code and compare (repeat as necessary).
Whatever you do, there is a cost associated with it. You need to be sure that your costs are measured and understood, every step of the way.
A simple Tomcat application might outperform a Spring application or be equivalent depending on what aspects of Spring's inversion of control are being leveraged. Using a Future might be fast or slow, depending on what it is being compared to. Using non-NIO might be faster or slower, depending on the implementation and the data being processed.
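As a starting point for such a benchmark, here is a minimal sketch of the AsyncRestTemplate/ListenableFuture pattern the question describes. The endpoint URL is hypothetical, and a real benchmark would also bound the number of in-flight requests:

```java
import java.util.concurrent.CountDownLatch;

import org.springframework.http.ResponseEntity;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
import org.springframework.web.client.AsyncRestTemplate;

public class BulkRestClient {

    public static void main(String[] args) throws InterruptedException {
        AsyncRestTemplate template = new AsyncRestTemplate();
        final int calls = 10_000;
        final CountDownLatch done = new CountDownLatch(calls);
        long start = System.nanoTime();

        for (int i = 0; i < calls; i++) {
            ListenableFuture<ResponseEntity<String>> future =
                    template.getForEntity("http://localhost:8080/api/items/{id}",
                            String.class, i);
            future.addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
                @Override
                public void onSuccess(ResponseEntity<String> response) { done.countDown(); }
                @Override
                public void onFailure(Throwable t) { done.countDown(); }
            });
        }

        done.await(); // wait until every response (or failure) has arrived
        System.out.printf("Completed %d calls in %d ms%n",
                calls, (System.nanoTime() - start) / 1_000_000);
    }
}
```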

Mock a web service only for test purposes

I am working on integration tests, asserting on responses from an API A.
API A interacts with another API B, which in turn calls a web service to get data from it.
The problem is that the data may change in the future, so the integration tests may fail, and whenever the data changes I have to edit the tests to keep them working.
I want to mock the web service that the data comes from, but I don't know how to tell API B to call the mock only during tests.
Does anyone have an idea about the best way to do this?
You can use tools like http://rest-assured.io/ or http://wiremock.org/.
With these, your API calls are made the same way they normally would be (you probably just need to change the hostname). You can then serve a predefined result for a given URI, Content-Type, and so on.
It is even possible to make assertions that a request actually took place, and to do some checking on the content of that request.
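For example, here is a minimal WireMock sketch (endpoint and payload are hypothetical) that stubs the web service, lets API B talk to the stub for the duration of the test run, and verifies the request afterwards:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.getRequestedFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class StubbedWebService {

    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Hypothetical endpoint: always return the same fixed data set,
        // so the tests no longer depend on data that changes over time.
        server.stubFor(get(urlEqualTo("/data/items"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("[{\"id\":1,\"name\":\"fixed\"}]")));

        // ... run the integration tests here, with API B configured to call
        // http://localhost:8089 instead of the real web service ...

        // Assert that the stubbed endpoint was actually called.
        server.verify(getRequestedFor(urlEqualTo("/data/items")));
        server.stop();
    }
}
```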

Should my Play services always be asynchronous calls with promises?

I just read that it is recommended to use asynchronous method calls on the server via promises when executing long-running requests. The documentation says this is because the Play server would otherwise block on the request and not be able to handle concurrent requests.
Does this mean all of my web requests should be asynchronous?
I'm just thinking that if I want to improve my web pages' rendering times, I would make a series of AJAX calls to fetch the needed page regions concurrently. Since I would potentially make multiple AJAX calls, my Play controller methods would need to be asynchronous.
Am I understanding this correctly? The syntax is quite verbose, so I want to make certain I don't take this concept overboard. It seems strange to me that I have to do this, given that other web servers such as GlassFish or IIS handle pooling automatically.
Here are some detailed docs on Play's thread pools, various different configurations, how to tune them, best practices etc:
http://www.playframework.com/documentation/2.2.x/ThreadPools
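For reference, here is a minimal sketch of one such asynchronous action in Play's Java API, written against the 2.2.x style those docs cover; the backend URL is hypothetical:

```java
import play.libs.F.Function;
import play.libs.F.Promise;
import play.libs.WS;
import play.mvc.Controller;
import play.mvc.Result;

public class PageRegions extends Controller {

    // Returns immediately with a Promise; Play completes the HTTP response
    // once the WS call finishes, without tying up a request thread.
    public static Promise<Result> sidebar() {
        Promise<WS.Response> response =
                WS.url("http://backend.example.com/sidebar").get();
        return response.map(new Function<WS.Response, Result>() {
            @Override
            public Result apply(WS.Response r) {
                return ok(r.getBody()).as("text/html");
            }
        });
    }
}
```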

What does a mocking framework do for me?

I have heard that some people whom I cannot talk to are big fans of jMock. I've done test-centered development for years, so I went through the website and looked at some of the docs, and I still can't figure out what good it is.
I had the same problem with Spring. Their docs do a great job of explaining it if you already understand what it is, so I'm not assuming that jMock is of no value. I just don't understand what it does for me.
So if jmock provides me with the ability to mock out stubbed data, let's go with an example of how I do things and see how jmock would be better.
Let's say I have my UI layer that says, create me a widget and the widget service, when creating a widget, initializes the widget and stores pieces of it in the three tables necessary to make up a widget.
When I write my tests, here's how I go about it.
First, I re-point Hibernate at my test Hypersonic (HSQLDB) database so I don't have to do a bunch of database setup; Hibernate creates my tables for me.
All of my classes have static factory methods that construct a test instance of the class for me. Each of my DAOs creates a test version that points at the test schema, and my service class has one that constructs itself with DAOs generated by those test factory methods.
Now, when I run my test of the UI controller that calls the service, I am testing my code all the way through the application. Granted that this is not the total isolation generally wanted when doing a unit test, but it provides me, in my opinion, a better unit test because it executes the real code all the way through all of the supporting layers.
Because Hypersonic under Hibernate is slow, it takes slightly longer to run all of my tests, but my entire build, including full packaging, still runs in less than five minutes on an older computer, so I find that pretty acceptable.
How would I do things differently with jmock?
In your example, there are two interfaces where one would use a mocking framework to do proper unit tests:
The interface between the UI layer and the widget service - replacing the widget service with a mock would allow you to test the UI layer in isolation, with the service returning manually created data and the mock verifying that the expected service calls (and no others) happen.
The interface between the widget service and the DAO - by replacing the DAO with a mock, any service methods that contain complex logic can be tested in isolation.
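A minimal jMock sketch of the first case (all class names are hypothetical) might look like this:

```java
import static org.junit.Assert.assertEquals;

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class WidgetControllerTest {

    // Hypothetical service interface between the UI layer and persistence.
    public interface WidgetService {
        Widget createWidget(String name);
    }

    public static class Widget {
        final String name;
        public Widget(String name) { this.name = name; }
    }

    // Hypothetical controller under test.
    public static class WidgetController {
        private final WidgetService service;
        public WidgetController(WidgetService service) { this.service = service; }
        public String handleCreate(String name) {
            return "created:" + service.createWidget(name).name;
        }
    }

    @Test
    public void controllerDelegatesToService() {
        Mockery context = new Mockery();
        final WidgetService service = context.mock(WidgetService.class);

        // Expect exactly one call to createWidget and stub its return value;
        // any unexpected call on the mock fails the test.
        context.checking(new Expectations() {{
            oneOf(service).createWidget("gear");
            will(returnValue(new Widget("gear")));
        }});

        WidgetController controller = new WidgetController(service);
        assertEquals("created:gear", controller.handleCreate("gear"));

        context.assertIsSatisfied();
    }
}
```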
Granted that this is not the total isolation generally wanted when doing a unit test, but it provides me, in my opinion, a better unit test because it executes the real code all the way through all of the supporting layers.
This seems to be the core of your question. The answer has a number of facets:
If you're not testing components in isolation, you do not have unit tests; you have integration tests. As you observe, these are quite valuable, but they have their drawbacks:
Because they test more things at the same time, they tend to break more often, and they tend to break in large groups (when there's a problem with common functionality); when they do break, it is harder to find out where the actual problem lies.
They are also more constrained in what kinds of scenarios you can test: it can be hard or impossible to simulate certain edge cases in an integration test.
Sometimes a full integration test cannot be automated because some component is not sufficiently under your control (e.g. a third-party webservice) to set up the test data you need. In such a case you might even end up using a mocking framework in what is otherwise a high-level integration test.
I haven't looked at jMock in particular (I use Mockito), but in general mocking frameworks allow you to "mock" external services so that you only need to test a single class at a time. Any dependencies of that class can be mocked, meaning the real method calls are not made; instead, stubs are called that return or throw constants. This is a good thing, because external calls can be slow, inconsistent, or unreliable, all bad things for unit testing.
To give a single example of how this works, imagine you have a service class that has a dependency on a web service client. If you test with the real web service client, it might be down, the connection might be slow, or the data behind the web service might change over time. How are you going to write a reliable test against that? (You can't). So you use a mock framework to mock/stub the web service client, and you create fake responses, fake errors, fake exceptions, to mimic the web service behavior. The difference is the result is always fast and consistent.
Also, you'd like to test all the failure cases that a given dependency might have, but without mocking that's hard to do. Consider the example I gave above. You'd like to be sure your code does the right thing if the web service throws an IOException because the web service is down (or times out), but it's not so easy to force that condition. With mocking this becomes trivial.
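To illustrate, here is a minimal Mockito sketch of exactly that failure case; WebServiceClient and WidgetService are hypothetical stand-ins for the classes discussed above:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.io.IOException;

import org.junit.Test;

public class WidgetServiceTest {

    // Hypothetical collaborator: the dependency we mock instead of
    // calling the real web service.
    public interface WebServiceClient {
        String fetchWidgetData(String id) throws IOException;
    }

    // Hypothetical class under test: falls back to a default on failure.
    public static class WidgetService {
        private final WebServiceClient client;
        public WidgetService(WebServiceClient client) { this.client = client; }
        public String loadWidget(String id) {
            try {
                return client.fetchWidgetData(id);
            } catch (IOException e) {
                return "default";
            }
        }
    }

    @Test
    public void returnsFallbackWhenWebServiceIsDown() throws Exception {
        WebServiceClient client = mock(WebServiceClient.class);
        when(client.fetchWidgetData("w-1")).thenThrow(new IOException("timeout"));

        WidgetService service = new WidgetService(client);

        // The failure condition that is hard to force against a real
        // service is trivial to simulate with a mock.
        assertEquals("default", service.loadWidget("w-1"));
        verify(client).fetchWidgetData("w-1");
    }
}
```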
