Customize formatting of passed-in parameter in an expectation - java

I'm using jmock to mock out an OutputStream and set expectations on the data that gets written to it.
So I have an expectation that looks something like this
oneOf(stream).write(byteArrayMatching("Some string"));
byteArrayMatching is a factory method for a custom matcher.
This all works fine, except that when the test fails because the class under test writes incorrect data, I get an error looking something like this:
java.lang.AssertionError: unexpected invocation: stream.write([<60>, <63>, ...])
It's pretty hard to identify what exactly is wrong with the data by looking at the sequence of bytes (I haven't gotten around to memorizing ASCII yet). This pretty much forces me to run the test in a debugger to figure out what's wrong.
My question is: is there a way to register a formatter of sorts with the mock object or the Mockery object which can pretty-print a parameter value? It's clear that jMock is already doing some pretty-printing, since the above is not the output of byte[].toString(), but I can't find anything in the API docs that suggests a way to customize this pretty-printing logic.

There is currently no way to do this in the current (2.5.1) jMock library. I would suggest you log an enhancement request with jMock.

The cute answer is that mocking makes more sense against a type that you've defined, one that has some domain structure to it, rather than against an external API.
The next answer is to look at the new version of Hamcrest, which includes support for reporting a mismatch (see the sketch below).
The next answer, unless there's a sequence involved, is that in this case it might be better to write to an in-memory byte stream and assert on the string afterwards.
And file an issue too, please :)
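To illustrate the Hamcrest suggestion: with Hamcrest 1.2+, a TypeSafeMatcher can override describeMismatchSafely to decode the bytes before printing them. A minimal sketch, assuming the written data is UTF-8 text; byteArrayMatching is modeled on the factory named in the question, and whether jMock actually uses the mismatch description depends on the jMock version, per the answer above:

import org.hamcrest.Description;
import org.hamcrest.Factory;
import org.hamcrest.TypeSafeMatcher;

public class ByteArrayMatching extends TypeSafeMatcher<byte[]> {
    private final String expected;

    public ByteArrayMatching(String expected) {
        this.expected = expected;
    }

    @Factory
    public static ByteArrayMatching byteArrayMatching(String expected) {
        return new ByteArrayMatching(expected);
    }

    @Override
    protected boolean matchesSafely(byte[] actual) {
        return expected.equals(new String(actual)); // assumes text data
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("a byte array decoding to ").appendValue(expected);
    }

    @Override
    protected void describeMismatchSafely(byte[] actual, Description mismatch) {
        // Report the actual bytes as text instead of a list of numbers.
        mismatch.appendText("was ").appendValue(new String(actual));
    }
}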

TAINTED_SOURCE - os_command_sink

Further to Tainted_source JAVA, I want to add more information regarding the os_command_sink error I am getting.
Below is the section of code that's the entry point of data from the front end and marks the parameter as tainted_source.
Now when the DTO, CssEmailWithAttachment, is sent to the static method of CommandUtils, it reports an os_command_sink issue. Below is the code for the method.
I tried various ways to sanitize the source in the controller method referenceDataExport, i.e. using an allowlist and using the @Pattern annotation, but Coverity reports os_command_sink every time.
I understand the reason as any data coming from http is marked as tainted by default. And the code is using the data to construct an OS command hence the issue is reported.
Coverity provides the below information regarding the issue.
So I tried strict validation that entityType should be one of the known values only, but that also doesn't remove the issue.
Is there any way this can be resolved?
Thanks
The main issue is that the code, as it currently stands, is insecure. To summarize the Coverity report:
entityType comes from an HTTP parameter, hence is under attacker control.
entityType is concatenated into tagline.
tagline is passed as the body and subject of CdsEmailWithAttachment. (You haven't included the constructor of that class, so this is partially speculation on my part.)
The subject and body are concatenated into an sh command line. Consequently, anyone who can invoke your HTTP service can execute arbitrary command lines on your server backend!
There is an attempt at validation in sendEmailWithAttachment, where certain shell metacharacters are filtered out. However, the filtering is incomplete (missing at least single and double quote) and is not applied to the subject.
So, your first task here is to fix the vulnerability. The Coverity tool has correctly reported that there is a problem, but making Coverity happy is not the goal, and even if it stops reporting after you make a change, that does not necessarily mean the vulnerability is fixed.
There are at least two straightforward ways I see to fix this code:
Use a whitelist filter on entityType, rejecting the request if the value is not among a fixed list of safe strings. You mentioned trying the @Pattern annotation, and that could work if used correctly. Be sure to test that your filter works and provides a sensible error message.
Instead of invoking mailx via sh, invoke it directly using ProcessBuilder. This way you can safely transport arbitrary data into mailx without the risks of a shell command line.
Personally, I would do both of these. It appears that entityType is meant to be one of a fixed set of values, so it should be validated regardless of any vulnerability potential; and using sh is both risky from a security perspective and makes controlling the underlying process difficult (e.g., implementing a timeout). A sketch combining both ideas follows below.
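As an illustration of both fixes, here is a minimal sketch. All names in it (the method, the allowlist values, the mailx arguments, the recipient) are assumptions on my part, since the original code is not shown:

import java.io.IOException;
import java.util.Set;
import java.util.concurrent.TimeUnit;

public class ReferenceDataExporter {
    // Fixed allowlist: reject anything else before it reaches any command.
    private static final Set<String> ALLOWED_ENTITY_TYPES =
            Set.of("CUSTOMER", "ORDER", "PRODUCT");

    public void sendExportMail(String entityType) throws IOException, InterruptedException {
        if (!ALLOWED_ENTITY_TYPES.contains(entityType)) {
            throw new IllegalArgumentException("Unsupported entity type: " + entityType);
        }
        String subject = "Reference data export: " + entityType;

        // Invoke mailx directly rather than via sh: each argument is handed
        // to the process as-is, so shell metacharacters are never interpreted.
        Process process = new ProcessBuilder("mailx", "-s", subject, "ops@example.com")
                .redirectErrorStream(true)
                .start();
        process.getOutputStream().close(); // empty message body in this sketch
        if (!process.waitFor(30, TimeUnit.SECONDS)) {
            process.destroyForcibly(); // timeouts are easy without a shell in between
        }
    }
}

Whether Coverity recognizes the allowlist check as a sanitizer is a separate question, but with ProcessBuilder there is no shell to inject into in the first place.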
Whatever you decide to do, test the fix. In fact, I recommend first (before changing the code) demonstrating that the code is vulnerable by constructing an exploit, as that will be needed later to test any fix, and is a valuable exercise in its own right. When you think you have fixed the problem, write more tests to really be sure. Think like an attacker; be devious!
Finally, I suspect you may be inexperienced at dealing with potential security vulnerabilities (I apologize if I'm mistaken). If so, please understand that code security is very important, and getting it right is difficult. If you have the option, I recommend consulting with someone in your organization who has more experience with this topic. Do not rely only on Coverity.

Zuul filter return value

What is the possible usage of ZuulFilter.run() return value?
All the examples (for instance the Spring example) return null.
The official documentation says:
Some arbitrary artifact may be returned. Current implementation ignores it.
So why have it at all?
I've used this lib in multiple projects and never thought to look into this, but I stumbled upon this question, so I had to look. Just tracing the code in IntelliJ, it does look like the results are pointless.
I'm on zuul-core:1.3.1:
Looking at FilterProcessor, when the routing methods are called to route based on the type, they all call runFilters(sType), which ultimately gets the return Object in question from the implementing IZuulFilter classes. The trail seems to stop here.
I then stopped to look at their test classes, and nothing seems to do anything with the return Object either, nor with the ZuulFilterResult that wraps it.
I then thought, ok, well maybe there is a way to pass data from one IZuulFilter to another (e.g. from pre to route), but that doesn't seem possible either, since FilterProcessor.processZuulFilter(ZuulFilter) doesn't do anything with the result and just passes it back to runFilters(sType), which we know ignores it.
My next line of questioning was, "well, perhaps you can provide your own FilterProcessor implementation, swap it in, and actually use the Object somewhere". But alas, it looks like that isn't the case either, unless you want/need to implement a lot more, even into the ZuulServlet.
Lastly, I thought, "well, maybe it's just a convention thing". But java.lang.Runnable.run() is void and javax.servlet.Filter.doFilter is also void.
So for now, my best guess is that like all of us at some point in our careers, we sometimes fall into a YAGNI situation; perhaps this is just one example.
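For reference, a minimal filter in the style of the Spring examples. The Object returned from run() is simply ignored by FilterProcessor, so returning null is the convention:

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;

public class LoggingPreFilter extends ZuulFilter {
    @Override
    public String filterType() {
        return "pre"; // run before routing
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true;
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        System.out.println("Routing " + ctx.getRequest().getRequestURI());
        return null; // discarded by FilterProcessor.runFilters(sType)
    }
}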

java unit test of a method interacting with binary files in filesystem

I'm quite new to Java programming, but I'll try to use the correct terms and avoid misunderstandings as much as possible.
I've found some answers to topics quite similar to my problem, but either I just cannot see how they really fit my problem, or maybe they really just don't fit. Some of them use mocked objects, but I'm not sure that is the right option in my case.
General description
I need to have an array of objects whose information is loaded from randomly accessed binary files. The first bytes of each binary file are a header which defines how the data is stored in the file; basically, it gives the length of some fields, which helps to compute the position of the desired data in the file.
So now I want to test the method that will be called to load the desired data, which is specified by a UnitListElement object, into the Unit object. For this I only focus on a single read of a binary file.
More detailed view
I have a Java class called Unit with some attributes, let's say a, b and c. The values for these attributes are loaded with a method called getDataFromBinFile:
public class Unit {
    public double[] a;
    public double[] b;
    public double[] c;

    public void getDataFromBinFile(UnitListElement element) {
        // <here loads the data from the binary file with random access>
    }
}
The method for loading the data opens the binary file and accesses the desired data with random access. The desired data to be read is specified in a UnitListElement object:
public class UnitListElement {
    public String pathOfFile;
    public int beginToReadAt; // info related to where the desired data begins
    public int finishReading; // info related to where the desired data ends
}
The attributes beginToReadAt and finishReading are time references which are used, along with the binary file's header, to compute the first and last byte positions to read from the binary file.
So what I need to do is a test where I call the method getDataFromBinFile(unitListEl) and test whether the info returned is correct or not.
Options for solutions
1st option
Some posts with similar problems propose using mock objects. I've tried to find documentation about mocking objects, but I haven't found any easy beginner's guide. So although I don't understand mock objects very well, my impression is that they do not fit this case, since what I want to test is the reading of the binary file, not just the interaction with other objects.
2nd option
Another option is to create the binary file for the test inside the test with a helper method, e.g. in an @BeforeClass method, run the test against this temporary file, and then delete it with an @AfterClass method (a sketch of this follows below).
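For concreteness, such a test might look like this (JUnit 4; the header and data bytes written here are invented for illustration):

import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class UnitReadTest {
    private static File testFile;

    @BeforeClass
    public static void createTestFile() throws IOException {
        testFile = File.createTempFile("unit-data", ".bin");
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(testFile))) {
            out.writeInt(8);      // invented header: a field length
            out.writeDouble(1.5); // invented payload
        }
    }

    @AfterClass
    public static void deleteTestFile() {
        testFile.delete();
    }

    @Test
    public void loadsDataFromBinFile() {
        UnitListElement element = new UnitListElement();
        element.pathOfFile = testFile.getAbsolutePath();
        element.beginToReadAt = 0;
        element.finishReading = 1;

        Unit unit = new Unit();
        unit.getDataFromBinFile(element);
        // assert on unit.a / unit.b / unit.c according to your format
    }
}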
Question
What do you think is the best practice considering a TDD approach? Do mock objects really fit this case? If they do, is there any documentation with basic examples for total beginners?
Or, on the other hand, is the creation of the file more suitable for testing reading methods?
Lots of thanks in advance.
Mocking can be applied to your case, but it is in fact not strictly necessary here. All you need is to decouple the actual data-processing logic in getDataFromBinFile from the code reading the bytes from files.
You can achieve this in (at least) two ways:
With mocks: hide the file reading code behind an interface method which takes a UnitListElement and returns a byte array, then use this in getDataFromBinFile. Then you can mock this interface in your tests with a mock reader which just returns some predefined bytes without accessing any files. (Alternatively, you can move the file reading logic into UnitListElement itself, as for now it seems to be a POD class.)
Without mocks: change the signature of getDataFromBinFile to take a byte array parameter instead of a UnitListElement. In your real production code, you can read the data from the file position described by the UnitListElement, then pass it to getDataFromBinFile. In your unit tests, you can just pass any binary data to it directly. (Note that in this case, it makes sense to rename your method to something like getDataFromBytes; a sketch follows below.)
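A minimal sketch of the second way. The parsing shown is made up, and beginToReadAt/finishReading are treated as plain byte offsets to keep the example short:

import java.io.IOException;
import java.io.RandomAccessFile;

public class Unit {
    public double[] a;

    // Pure parsing logic: unit-testable with any in-memory byte[].
    public void getDataFromBytes(byte[] data) {
        a = new double[data.length];
        for (int i = 0; i < data.length; i++) {
            a[i] = data[i]; // placeholder; real code would use the header
        }
    }

    // Thin production wrapper that performs the actual file access.
    public void getDataFromBinFile(UnitListElement element) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(element.pathOfFile, "r")) {
            byte[] data = new byte[element.finishReading - element.beginToReadAt];
            file.seek(element.beginToReadAt);
            file.readFully(data);
            getDataFromBytes(data);
        }
    }
}

A unit test can then call new Unit().getDataFromBytes(new byte[]{1, 2, 3}) without touching the file system at all.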
For mocking, I have been using EasyMock so far. I find its documentation fairly easy to understand, hope that helps.
I don't have much experience in TDD. It is not required to use mocking when you are testing reads/writes to a file; the best option is to have a test version of the file on which your test will run. Mocking is meant to be used when you cannot easily create a testable object for your use case, e.g. if you are testing interaction with a server.
I don't prefer creating test binary files, as any change in the format of the file being read means changing the test files as well (and thus the tests).
Since you are following a TDD approach, you must already have tests written for the UnitListElement class, hence for this situation mocking seems the better solution. Your objective is to test the getDataFromBinFile method and not the UnitListElement class methods (currently), hence you can mock the UnitListElement class (or an interface inherited by it and passed to the getDataFromBinFile method). Mocking UnitListElement means you can return predefined or specific values from any of its method calls whenever it is accessed in getDataFromBinFile. Finally, you could use the returned values from your mock in getDataFromBinFile and assert on the return value of the method after your business logic is performed. I haven't used too many mocking frameworks; most often I have been using the EasyMock framework. For a start you can get a basic example of EasyMock over here.
Just make a test binary file.
This process is reading a file, so there is no reason to worry about the file system. The file will always be deterministic (if you altered the file during reading, that would be another story).
If you want to do a test with the objects after you've read them in, I would suggest just creating them in your test (unless this is very hard to do, like with a sound file).
Also, I would suggest the abstraction of a stream instead of a file, but I would STILL test this with a test file. Btw: make sure the test file is small; it's a test after all.
Some people might argue "tests aren't supposed to hit the file system", but where do you think the .class files are loaded from?
Also, I would get the stream via the Java ClassLoader:
this.getClass().getResourceAsStream("yourfile.name");
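A test using that classpath stream might then look like this (getDataFromStream is a hypothetical stream-based variant of the reading method, per the abstraction suggested above):

import java.io.IOException;
import java.io.InputStream;

import org.junit.Test;

public class UnitStreamTest {
    @Test
    public void readsDataFromClasspathResource() throws IOException {
        // "yourfile.name" must sit next to this class on the test classpath.
        try (InputStream in = getClass().getResourceAsStream("yourfile.name")) {
            Unit unit = new Unit();
            unit.getDataFromStream(in); // hypothetical stream-based variant
            // assert on the loaded fields here
        }
    }
}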
happy testing!
Llewellyn Falco
http://www.approvaltests.com

Save object in debug and then use it as stub in tests

My application connects to a db and gets a tree of categories from it. In debug mode I can see this big tree object, and I thought of the ability to save this object somewhere on disk to use in test stubs. Like this:
mockedDao = mock(MyDao.class);
when(mockedDao.getCategoryTree()).thenReturn(mySavedObject);
Assume mySavedObject is huge enough that I don't want to generate it manually or write special generation code. I just want to be able to serialize and save it somewhere during a debug session, then deserialize it and pass it to thenReturn in tests.
Is there a standard way to do so? If not, what is the best way to implement such an approach?
I do love your idea, it's awesome!
I am not aware of a library that would offer that feature out of the box. You can try using ObjectOutputStream and ObjectInputStream (i.e. the standard Java serialization) if your objects all implement Serializable. Typically they do not. In that case, you might have more luck using XStream or one of its friends.
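A minimal sketch of the standard-serialization route (class and path names are illustrative):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ObjectSnapshot {
    // Evaluate ObjectSnapshot.save(tree, "tree.ser") in the debugger
    // while the real category tree is in memory.
    public static void save(Serializable object, String path) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
            out.writeObject(object);
        }
    }

    // Restore the saved object inside the test.
    @SuppressWarnings("unchecked")
    public static <T> T load(String path) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(path))) {
            return (T) in.readObject();
        }
    }
}

The stub then becomes when(mockedDao.getCategoryTree()).thenReturn(ObjectSnapshot.load("tree.ser"));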
We usually mock the entire DB in such scenarios, reusing (and implicitly testing) the code that loads the categories from the DB.
Specifically, our unit tests run against an in-memory database (HSQLDB), which we initialize prior to each test run by importing test data.
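For illustration, an in-memory HSQLDB needs nothing more than a JDBC URL; the schema and rows here are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class InMemoryDb {
    public static Connection open() throws Exception {
        // "mem:" keeps the database entirely in memory; it vanishes on shutdown.
        Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE category (id INT PRIMARY KEY, parent_id INT, name VARCHAR(100))");
            st.execute("INSERT INTO category VALUES (1, NULL, 'root')");
        }
        return conn;
    }
}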
Have a look at Dynamic Managed Beans: this offers a way to change values of a running Java application. Maybe there's a way to define an MBean that holds your tree, read the tree, store it somewhere, and inject it again later.
I've run into this same problem and considered possible solutions. A few months ago I wrote custom code to print a large binary object as hex-encoded strings. My toJava() method returns a String which is source code for a field definition of the required object. This wasn't hard to implement. I put in log statements to print the result to the log file, then cut and pasted from the log file into a test class. New unit tests reference that file, giving me the ability to dig into operations on an object that would be very hard to build another way.
This has been extremely useful but I quickly hit the limit on the size of bytecode in a compilation unit.

Writing long test method names to describe tests vs using in code documentation

For writing unit tests, I know it's very popular to write test methods that look like
public void Can_User_Authenticate_With_Bad_Password()
{
...
}
While this makes it easy to see what the test is testing for, I think it looks ugly and doesn't display well in auto-generated documentation (like Sandcastle or Javadoc).
I'm interested to see what people think about using a naming scheme where the name is the method being tested, an underscore, "Test", and then the test number, and then using the XML code documentation (.NET) or Javadoc comments to describe what is being tested:
/// <summary>
/// Tests for user authentication with a bad password.
/// </summary>
public void AuthenticateUser_Test1()
{
...
}
By doing this I can easily group my tests together by the methods they are testing, I can see how many tests I have for a given method, and I still have a full description of what is being tested.
We have some regression tests that run against a data source (an XML file), and these files may be updated by someone without access to the source code (QA monkey); they need to be able to read what is being tested, and where, in order to update the data sources.
I prefer the "long names" version - although only to describe what happens. If the test needs a description of why it happens, I'll put that in a comment (with a bug number if appropriate).
With the long name, it's much clearer what's gone wrong when you get a mail (or whatever) telling you which tests have failed.
I would write it in terms of what it should do though:
LogInSucceedsWithValidCredentials
LogInFailsWithIncorrectPassword
LogInFailsForUnknownUser
I don't buy the argument that it looks bad in autogenerated documentation - why are you running JavaDoc over the tests in the first place? I can't say I've ever done that, or wanted generated documentation. Given that test methods typically have no parameters and don't return anything, if the method name can describe them reasonably that's all the information you need. The test runner should be capable of listing the tests it runs, or the IDE can show you what's available. I find that more convenient than navigating via HTML - the browser doesn't have a "Find Type" which lets me type just the first letters of each word of the name, for example...
Does the documentation show up in your test runner? If not that's a good reason for using long, descriptive names instead.
Personally I prefer long names and rarely see the need to add comments to tests.
I've done my dissertation on a related topic, so here are my two cents: Any time you rely on documentation to convey something that is not in your method signature, you are taking the huge risk that nobody would read the documentation.
When developers are looking for something specific (e.g., scanning a long list of methods in a class to see if what they're looking for is already there), most of them are not going to bother to read the documentation. They want to deal with one type of information that they can easily see and compare (e.g., names), rather than have to start redirecting to other materials (e.g., hover long enough to see the JavaDocs).
I would strongly recommend conveying everything relevant in your signature.
Personally I prefer using the long method names. Note you can also have the method name inside the expression, as in:
Can_AuthenticateUser_With_Bad_Password()
I suggest smaller, more focussed (test) classes.
Why would you want to javadoc tests?
What about changing
Can_User_Authenticate_With_Bad_Password
to
AuthenticateDenieTest
AuthenticateAcceptTest
and naming the suite something like User?
As a group, how do we feel about a hybrid naming scheme like this?
/// <summary>
/// Tests for user authentication with a bad password.
/// </summary>
public void AuthenticateUser_Test1_With_Bad_Password()
{
...
}
This way we get the best of both.
