Pact provider test running app instances - java

Currently I'm using pact-jvm-consumer-junit_2.11 and pact-jvm-provider-junit_2.11 from au.com.dius. I've got my consumer tests working and generating pacts, but the problem comes when I try to use these in my provider service.
The idea is to make all the pact tests part of the JUnit tests, so everyone can run their unit tests locally without worrying about additional pact tests.
The main question is:
How do I handle this, assuming the service under test requires another service (an authorization one) and a DB as a data feeder? I'm not quite convinced that running these instances locally each time and then killing them does the trick. (I'd like to perform the tests before even deploying to any environment.)
Should this be handled with some kind of 'hack-switch' that always returns true for an authorized user in 'some circumstances', with the data feeder mocked? Or should it be handled in some other way?
Secondly (the side question):
Once I've got my pacts ready, how should I test them against the consumer? So far I have things like the following, which works just fine, but I'm also not sure about it:
assertThat(result, instanceOf(DataStructure.class)); // as an example
The above is to make sure that the data I've received and pushed to my consumer is in the exact format I've been expecting. Is that OK, or is the correct approach to unpack it all and check each part separately, e.g. whether they are Maps or Strings?
Thanks in advance!

Here are some thoughts on stubbing services during verification:
https://github.com/pact-foundation/pact-ruby/wiki/FAQ#should-the-database-or-any-other-part-of-the-provider-be-stubbed
The pact authors' experience with using pacts to test microservices has been that using the set_up hooks to populate the database, and running pact:verify with all the real provider code has worked very well, and gives us full confidence that the end to end scenario will work in the deployed code.
However, if you have a large and complex provider, you might decide to stub some of your application code. You will definitely need to stub calls to downstream systems or to set up error scenarios. Make sure, if you stub, that you don't stub the code that actually parses the request and pulls the expected data out, because otherwise the consumer could be sending absolute rubbish, and the pact:verify won't fail because that code won't get executed. If the validation happens when you insert a record into the datasource, either don't stub anything, or rethink your validation code.
I personally would stub an authentication service (assuming you already have some other tests to show that you're invoking the authentication service correctly) but I generally use the real database, unless this complicates things such that using a mock is "cheaper" (in time, effort, maintainability).
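For illustration, a provider state hook in pact-jvm-provider-junit could look roughly like this (a sketch only; the state name, testDatabase, and the mocked authClient are illustrative stand-ins, not from the question):

@State("user user-1 exists and is authorized")
public void userExistsAndIsAuthorized() {
    testDatabase.insertUser("user-1");                           // seed the real DB in the set_up hook
    when(authClient.isAuthorized(anyString())).thenReturn(true); // stub only the downstream auth call
}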
In regards to your second question, I'm not exactly sure what you're talking about, but I think you're talking about making assertions about the properties of the object that has been unmarshalled from the mocked response (in the consumer tests). I would have one test that checked every property, to make sure that I was using the correct property names in my unmarshalling code. But as I said, I would only do this once (or however many times it was required to make sure I had checked every property name once). In the rest of the tests, I would just assert that the correct object class was returned.
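As a sketch, that one exhaustive consumer test could look something like this (DataStructure's getters and the client are illustrative; the call runs against the pact mock server):

@Test
public void unmarshalsEveryProperty() {
    DataStructure result = client.fetchData();
    assertThat(result.getId(), is("1234"));         // one assertion per property,
    assertThat(result.getName(), is("some-name"));  // so a mismatched property name fails here
    // ...and so on for each field; the remaining tests can just assert the returned class.
}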

Related

TAINTED_SOURCE - os_command_sink

Further to Tainted_source JAVA, I want to add more information regarding the os_command_sink error I am getting.
Below is the section of code that is the entry point for data from the front end and marks the parameter as a tainted_source.
Now, when the DTO - CssEmailWithAttachment - is sent to the static method of CommandUtils, it reports an os_command_sink issue. Below is the code for the method.
I tried various ways to sanitize the source in the controller method - referenceDataExport - i.e. using an allowlist and using the @Pattern annotation, but Coverity reports os_command_sink every time.
I understand the reason: any data coming from HTTP is marked as tainted by default, and the code is using that data to construct an OS command, hence the issue is reported.
Coverity provides the below information regarding the issue.
So I tried strict validation that entityType should be one of the known values only, but that also doesn't remove the issue.
Is there any way this can be resolved?
Thanks
The main issue is that the code, as it currently stands, is insecure. To summarize the Coverity report:
entityType comes from an HTTP parameter, hence is under attacker control.
entityType is concatenated into tagline.
tagline is passed as the body and subject of CssEmailWithAttachment. (You haven't included the constructor of that class, so this is partially speculation on my part.)
The subject and body are concatenated into an sh command line. Consequently, anyone who can invoke your HTTP service can execute arbitrary command lines on your server backend!
There is an attempt at validation in sendEmailWithAttachment, where certain shell metacharacters are filtered out. However, the filtering is incomplete (missing at least single and double quote) and is not applied to the subject.
So, your first task here is to fix the vulnerability. The Coverity tool has correctly reported that there is a problem, but making Coverity happy is not the goal, and even if it stops reporting after you make a change, that does not necessarily mean the vulnerability is fixed.
There are at least two straightforward ways I see to fix this code:
Use a whitelist filter on entityType, rejecting the request if the value is not among a fixed list of safe strings. You mentioned trying the @Pattern annotation, and that could work if used correctly. Be sure to test that your filter works and provides a sensible error message.
Instead of invoking mailx via sh, invoke it directly using ProcessBuilder. This way you can safely transport arbitrary data into mailx without the risks of a shell command line.
Personally, I would do both of these. It appears that entityType is meant to be one of a fixed set of values, so should be validated regardless of any vulnerability potential; and using sh is both risky from a security perspective and makes controlling the underlying process difficult (e.g., implementing a timeout).
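A minimal sketch combining both fixes (the class name, the allowed values, and the exact mailx flags are assumptions on my part; flag behaviour varies between mailx implementations, so check yours):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Set;

public class SafeMailSender {
    // Fix 1: entityType must be one of a fixed set of known-safe values (hypothetical values here).
    private static final Set<String> ALLOWED_ENTITY_TYPES = Set.of("CUSTOMER", "ORDER", "PRODUCT");

    public static void send(String entityType, String recipient, String subject,
                            String body, String attachmentPath)
            throws IOException, InterruptedException {
        if (!ALLOWED_ENTITY_TYPES.contains(entityType)) {
            throw new IllegalArgumentException("Unknown entity type");
        }
        // Fix 2: invoke mailx directly via ProcessBuilder instead of through sh.
        // Each argument reaches the process untouched; no shell ever parses it,
        // so metacharacters in the subject or body cannot become commands.
        Process p = new ProcessBuilder("mailx", "-s", subject, "-a", attachmentPath, recipient)
                .redirectErrorStream(true)
                .start();
        try (OutputStream stdin = p.getOutputStream()) {
            stdin.write(body.getBytes(StandardCharsets.UTF_8)); // body goes via stdin, not the command line
        }
        if (p.waitFor() != 0) {
            throw new IOException("mailx exited with a non-zero status");
        }
    }
}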
Whatever you decide to do, test the fix. In fact, I recommend first (before changing the code) demonstrating that the code is vulnerable by constructing an exploit, as that will be needed later to test any fix, and is a valuable exercise in its own right. When you think you have fixed the problem, write more tests to really be sure. Think like an attacker; be devious!
Finally, I suspect you may be inexperienced at dealing with potential security vulnerabilities (I apologize if I'm mistaken). If so, please understand that code security is very important, and getting it right is difficult. If you have the option, I recommend consulting with someone in your organization who has more experience with this topic. Do not rely only on Coverity.

Data dependent tests

I'm writing tests for my "MySQL Requests Manager", and the problem is that some of the tests depend on the data contained in the database. So if another test deletes the required records, or someone else deletes them, those tests will fail even if the code is correct.
I'm thinking about two approaches here:
1. In the test itself, back up all the needed data beforehand, run the test, and restore the data from the backup afterwards. But this is much more error-prone and "heavier", in my humble opinion.
2. Before running one of the tests, or even all of them, create a whole new database with the structure and required data (from a previously made dump, I think). This involves only about two 'global' actions: creating the database and dropping it. Of course, I need a totally isolated MySQL user and database for this.
What do you think, and what can you recommend? How do other programmers deal with this kind of issue?
Here's a different idea if you want to check it out: there's a Java framework, Mockito, that is pretty helpful in cases like this. With it, you can create 'mock' instantiations of certain objects/services that allow you to avoid actually instantiating them. With a mock, you can return custom/hard-coded results and test that your service handles that response correctly. For example, say you have a class 'SQLTestService' that has a method called 'getData()'. You can instantiate a mock of 'SQLTestService' and have it return a specific value when 'getData()' is called. That way your tests are never actually dependent on the data in the DB, and you can test for a specific outcome that you know your service should be able to handle.
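As a rough sketch (assuming getData() returns a List<String>; ReportService is a hypothetical consumer of SQLTestService):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import java.util.Arrays;
import org.junit.Test;

public class ReportServiceTest {
    @Test
    public void countsRowsFromCannedData() {
        SQLTestService sql = mock(SQLTestService.class);
        when(sql.getData()).thenReturn(Arrays.asList("row1", "row2")); // canned result, no DB involved
        ReportService service = new ReportService(sql);
        assertEquals(2, service.countRows());
    }
}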
When writing unit tests, one should:
Use a test DB
Load data before each test (or load data before all tests)
@Before
public void setUp() {
    // insert your test data here
}
Drop data after each test (or drop data after all tests)
@After
public void tearDown() {
    // drop your test data here
}
That way the DB is system-independent and each test runs in isolation, without the fear of losing data or of interference between tests.

How do people write unit tests in this scenario

I have a question regarding unit testing.
I am going to test a module which is an adapter to a web service. The purpose of the test is not to test the web service but the adapter.
A function that calls the service looks like this:
class MyAdapterClass {
    WebService webservice;

    MyAdapterClass(WebService webservice) {
        this.webservice = webservice;
    }

    void myBusinessLogic() {
        List<VeryComplicatedClass> result = webservice.getResult();
        // <business logic here>
    }
}
If I want to unit test the myBusinessLogic function, the normal way is to inject a mocked version of the web service with the getResult() function set up to return some predefined value.
But here is my question: the real web service will return a list of very complicated classes, each with tens of properties, and the list could contain hundreds or even thousands of elements.
If I am going to manually set up a result using Mockito or something like that, it is a huge amount of work.
What do people normally do in this scenario? What I simply do is connect to the real web service and test against the real service. Is that a good thing to do?
Many thanks.
You could write the code to call the real web service, serialize the List<VeryComplicatedClass> to a file on disk, and then in the setup for your mock deserialize it and have mockWebService.getResult() return that object. That will save you manually constructing the object hierarchy.
Update: this is basically the approach which Gilbert has suggested in his comment as well.
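Something along these lines (a sketch; it assumes VeryComplicatedClass and the list implementation are Serializable, and the file name is arbitrary):

import java.io.*;
import java.util.List;

public class Fixtures {
    // Run this once against the real service to capture a response.
    static void capture(WebService real, File fixture) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(fixture))) {
            out.writeObject(real.getResult());
        }
    }

    // Replay the captured response in tests.
    @SuppressWarnings("unchecked")
    static List<VeryComplicatedClass> load(File fixture) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(fixture))) {
            return (List<VeryComplicatedClass>) in.readObject();
        }
    }
}

// In the test setup:
// when(mockWebService.getResult()).thenReturn(Fixtures.load(new File("webservice-fixture.bin")));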
But really, you don't want to set up a list of very complicated classes, each with tens of properties, containing hundreds or even thousands of elements; you want to set up a mock or a stub that captures the minimum necessary to write assertions around your business logic. That way the test better communicates the details that it actually cares about. More specifically, if the business logic calls 2 or 3 methods on VeryComplicatedClass, then you want the test to be explicit that those are the conditions required for the things that the test asserts.
One thought I had reading the comments would be to introduce a new interface which can wrap List<VeryComplicatedClass> and make myBusinessLogic use that instead.
Then it is easy (or at least easier) to stub or mock an implementation of your new interface rather than deal with a very complicated class that you have little control over.
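A sketch of that wrapper idea (the interface and its methods are invented for illustration; expose only what the business logic actually needs):

interface ResultSource {
    int resultCount();
    boolean containsId(String id);
}

class WebServiceResultSource implements ResultSource {
    private final List<VeryComplicatedClass> results;

    WebServiceResultSource(List<VeryComplicatedClass> results) {
        this.results = results;
    }

    public int resultCount() {
        return results.size();
    }

    public boolean containsId(String id) {
        // assumes VeryComplicatedClass exposes some identifier accessor
        return results.stream().anyMatch(r -> id.equals(r.getId()));
    }
}

A test can then stub ResultSource in a couple of lines instead of building up VeryComplicatedClass instances.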

Do I need to unit test web service request dispatcher (Java)?

This class is simply a request dispatcher. It takes request and response objects and passes the work down according to the request type. The application logic is already tested. Mocking has to be avoided. How can I write a unit test for this dispatcher without turning the test into an integration or system test? How are dispatchers usually tested?
EDIT: I was told to avoid mocking. I don't think I can change that decision.
There should be two parts to the code: the first is the marshalling of data between the web layer and the dispatcher; the second is the dispatching to handlers.
Dispatching can be tested using "plain" unit testing; it's just logic that maps arbitrary criteria to handlers.
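For example, a plain test of the routing logic might look like this (Dispatcher, Handler, and the request-type string are hypothetical names):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class DispatcherTest {
    @Test
    public void routesRequestTypeToExpectedHandler() {
        Dispatcher dispatcher = new Dispatcher();
        Handler handler = dispatcher.resolve("CREATE_ORDER"); // pure mapping logic, no web layer involved
        assertTrue(handler instanceof CreateOrderHandler);
    }
}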
The marshalling layer requires either mocking, or enough integration to create a web request and watch its routing, what's returned from its handler, etc. HtmlUnit is one solution, there are a ton of others.
Use mocks. Do the unit tests.
If you start picking and choosing which parts to test and which not to test, you might as well not test anything at all.
Then again, you might just name them "Bootstraps" or "Imposters" or some other name and get around the restriction. Alternatively, you might be able to hand-code the mocked objects and get around the restriction that way.
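A hand-coded "imposter" can be as small as this (Handler, Request, and Response are stand-ins for whatever your dispatcher dispatches to):

class RecordingHandler implements Handler {
    String lastRequestType;

    public void handle(Request request, Response response) {
        lastRequestType = request.getType(); // record the call so the test can assert on it
    }
}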

JUnit dynamic method dispatch?

Here is the problem I am facing. I have been tasked with testing the query-parsing engine of a piece of software through negative testing. That is, I must write a large number of queries that will fail, and test that they do indeed fail and that they produce the expected error message for the particular error in the query. These are defined in an XML file. I've written a simple wrapper around the parsing of the XML document, and struct-like classes for these test cases.
Now, given that I am using JUnit as the testing framework, I'm running into this issue: the act of running through all of these externally defined tests lives in a single method. If a single test fails, then no more will be run. Is there any way to dynamically dispatch a method to handle each of the tests as I encounter them? That way, if a test fails, we can still run the remaining ones while getting a report on what did and did not fail.
The other alternative is, of course, writing all of the JUnit tests by hand. I'd like to avoid this for many reasons, one of which is that the number of tests to be run is extremely large, and a test case is 99% boilerplate code.
Thanks.
You should look into JUnit's Parameterized runner.
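As a sketch, each XML-defined case becomes one independent test, so a single failure no longer stops the rest (XmlTestCaseLoader, ParseResult, and QueryParser are stand-ins for your wrapper and engine classes):

import static org.junit.Assert.*;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class NegativeQueryTest {
    @Parameters
    public static Collection<Object[]> cases() {
        // each array is {query, expectedError}, loaded once from the XML file
        return XmlTestCaseLoader.load("negative-queries.xml");
    }

    private final String query;
    private final String expectedError;

    public NegativeQueryTest(String query, String expectedError) {
        this.query = query;
        this.expectedError = expectedError;
    }

    @Test
    public void failsWithExpectedMessage() {
        ParseResult result = QueryParser.parse(query);
        assertFalse(result.isSuccess());
        assertEquals(expectedError, result.getErrorMessage());
    }
}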
If I understand correctly, the input data and expected results are all defined in XML, so you don't need specific code to handle each test case?
If you use JUnit 4, you could write your own Runner implementation. You could either implement Runner directly or extend ParentRunner. All you need to implement is one method that returns a description of the tests and another method that runs them.
