Is it possible to measure how many distinct inputs were passed to the methods of a class under test from existing test cases?
I'd like to measure something like code coverage, but for inputs instead.
I don't know of any COTS tools that compute input coverage, so I'd expect you to have to build a tool that did what you wanted.
My technical paper Branch Coverage for Arbitrary Languages Made Easy describes an approach for building test coverage tools for arbitrary languages using a program transformation system to insert arbitrary probes into source code.
The paper is naturally focused on building code coverage, but the probe insertion technique is general and you can decide where to place probes and what they do. In your case, you want to place probes only at method entry, and you want the probes to track the input argument instances. The paper shows how to place probes anywhere by using a source code pattern to indicate the point of insertion; method entry is easy to describe as a pattern.
Capturing the input instances is more awkward but doable. You'll have to decide what an "input" is: is it just the argument values, or some kind of deep copy of the arguments? Likely what you need to do is create, per instrumented method, an object type whose data members correspond to the parameters, instantiate such an object with a copy (to the appropriate depth) of the arguments, and store that object in a per-method hash table. (The transformation rules can insert all this once you know what you want to do as a code idiom.) With all that in place, at execution time your hash table builds up the set of argument instances, which is the key to what you want.
You can (continuously) count unique argument-set instances by controlling what happens when you insert duplicates into the hash table; that count (per method) can be managed in a global array that is exported at program completion. The paper discusses such a global array, and the various ways to export/display it in general.
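To make this concrete, here is a rough, hand-written sketch (plain Java, hypothetical names, no transformation system involved) of what the inserted probes plus the per-method hash table could amount to; a real probe would also have to decide how deeply to copy mutable arguments:

import java.util.*;

// Hypothetical runtime support that inserted probes could call into.
// Each method gets its own set of distinct argument snapshots.
final class InputCoverageRegistry {
    private static final Map<String, Set<List<Object>>> distinctInputs = new HashMap<>();

    // Called from a probe inserted at method entry. Shallow copies only;
    // deep copying is a design decision left open here.
    static synchronized void record(String methodId, Object... args) {
        distinctInputs.computeIfAbsent(methodId, k -> new HashSet<>())
                      .add(Arrays.asList(args));   // duplicates are ignored by the set
    }

    // Exported at program completion, e.g. from a shutdown hook.
    static synchronized void dump() {
        for (Map.Entry<String, Set<List<Object>>> e : distinctInputs.entrySet()) {
            System.out.println(e.getKey() + ": " + e.getValue().size() + " distinct argument sets");
        }
    }
}

// What an instrumented method entry might look like after probe insertion:
class Account {
    void deposit(String owner, int amount) {
        InputCoverageRegistry.record("Account.deposit(String,int)", owner, amount);
        // ... original method body ...
    }
}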
Our line of test coverage tools and profilers is built using the techniques in the paper. The profilers keep counts/times in such global arrays (essentially what you need) and export them to a display engine that draws heat histograms, showing where the hot spots are. Those display engines are language- and probe-data-source-agnostic and come off the shelf with any of our (profiler) tools, including the Java profiler, so you could press one of them into service for the display task.
How can a method be tested with random input in a systematic way? What I mean by this is: if the input data for the units being tested changes each run, how can you effectively find out which values caused the failure and retest those same values after the code has been changed? For example, take the function int[] foo(int[] data), whose return value can be checked for correctness. After writing test cases for "hard-coded inputs" (e.g. {1,2,3}), how can random input be tested? If it's random each time, any errors won't be reproducible. I could print the random input to the screen, but that would become messy for each function. Any suggestions?
Also, is it still unit testing if the whole program is being tested? For example, calling the constructor and all methods of a class in one @Test? The class only has one public method to begin with so this can't really be avoided, but I'm wondering if JUnit is the best tool for it.
How can a method be tested with random input in a systematic way?
In the general case and in the most strict sense, you cannot.
By "strict sense" I mean: validating the correctness of the output no matter which input is presented.
Assuming it were possible, 'strictness' implies that your test case can compute the result of the function for each and every (random) input, which means you would need to write a piece of code that replicates the method to be tested - theoretically possible, but it leads to a paradoxical situation:
assume you find multiple bugs in the method under test. What is the cheapest way to correct them? Of course, substituting the method code with the testing code;
and now the tester (the author of the currently implemented method) needs to... what?... write another "incarnation" of the function in order to test her own implementation?
However, "fuzzing" is still a valid method: except that it is never to be taken in the strict sense; the tests expects the results to expose certain traits/invariants/etc, something that can be defined/checked not matter what the input is. For example, "the method never throws", or "the returned array has the same length as the input array (or double the size, or all elements are odd, or whatever" or "the result is alway a proper HTML page which passes the W3C markup validator".
tested with random input in a systematic way?
You almost have an oxymoron here, mate, like "honest politician" or "aeroplane-safe 2016-made Galaxy Note 7". If testing is "systematic", it means "there is a (rule) system that governs the way the tests are conducted" - almost the exact opposite of "random input".
The trick to reconcile the two: you still have a (rule-based) system to categorize your input (e.g. equivalence partitioning), except that instead of taking a certain (constant) representative for your categories, you pick the representatives at random. I'm going to repeat this: inside each category, you pick your representative at random (as opposed to "pick a random input and see which category it belongs to").
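A minimal sketch of that, again for an int[] input (the categories here are invented for illustration): the categories are fixed by the test design, and only the representative inside each category is drawn at random.

import java.util.Arrays;
import java.util.Random;

// Sketch: rule-based categories, random representative inside each category.
// The category definitions themselves are made-up examples.
public class InputPartitions {
    private final Random rnd = new Random();

    int[] emptyArray()    { return new int[0]; }

    int[] singleElement() { return new int[] { rnd.nextInt() }; }

    int[] allEqual() {
        int[] a = new int[1 + rnd.nextInt(20)];
        Arrays.fill(a, rnd.nextInt());
        return a;
    }

    int[] mixedPositiveNegative() {
        int[] a = new int[2 + rnd.nextInt(50)];
        for (int i = 0; i < a.length; i++) a[i] = rnd.nextInt();
        a[0] = -1 - rnd.nextInt(1000);   // force at least one negative value
        a[1] = rnd.nextInt(1000);        // and at least one non-negative value
        return a;
    }
}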
How is this useful? There is not much added value, because according to the equivalence system, picking one representative is as good as picking any other.
Sorry, my QA mate, you don't get off the hook with regard to your responsibility as a tester to design and plan the tests (no matter whether you use random techniques to generate your input).
If it's random each time, any errors won't be reproducible. I could print the random input to the screen, but that would become messy for each function. Any suggestions?
This is a frivolous reason to avoid random input if random input is deemed necessary: just use some tools to visually organize the testing logs if the simple flowing text format is so hard to read.
E.g. output your test log formatted as JSON with a certain structure and use/write a visualisation tool to represent/explore/fold/unfold it in such a way that human exploration is not a pain in the nether back part of the body.
If your job is to automate the testing, you are supposed to be able to code, aren't you?
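A throwaway sketch of that logging idea, using nothing beyond java.util: emit one JSON line per generated case (including the seed), so a failing input can be replayed later or fed into whatever viewer you prefer.

import java.util.Arrays;

// Sketch only: a structured, one-line-per-case test log.
// A real setup would use a proper JSON library instead of String.format.
final class TestLog {
    static void logCase(String testName, long seed, int[] input, String outcome) {
        System.out.println(String.format(
                "{\"test\":\"%s\",\"seed\":%d,\"input\":%s,\"outcome\":\"%s\"}",
                testName, seed, Arrays.toString(input), outcome));
    }
}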
Also, is it still unit testing if the whole program is being tested?
"Whole program is being tested" exactly how? What is the specific goal of this "whole system testing"?
There is a distinction between "unit testing" (even a more-than-comprehensive 10000% coverage unit testing) and "functional testing" or "integration testing" or "performance testing" or "usability testing" or "localization testing" (this including the gra'ma' of the UI/error messages) - all the latter belonging to "whole program testing" without being unit testing.
Hint: in defining the type of testing, the specific goal one has in mind when designing/executing the tests takes precedence over the means used in testing; I've seen testing performed manually using GUI test harnesses, in which the testers were manually entering values to unit test the SDK underneath.
On the other hand, there are categories of non-unit testing which may make use of unit testing techniques (e.g. a Web or REST service: wrap and present it as a proxy-function API and then you can write your tests using JUnit/TestNG or whatever unit test framework you fancy. And yet, you are doing functional or integration testing).
Property-based testing could be a solution for you. Basically, the idea is to have a framework generating all sorts of input data (random, edge cases) which is then fed into your implementation.
It's true that with random data you may end up with test cases behaving differently with every run. But at least the test framework usually would show you which input is used when tests are failing and you can have a further look into the reasons for failing. It's not a guarantee that your method will work in 100% of the cases, but at least you get some coverage and it's still better than nothing.
Typically, such a framework also allows you to restrict the generated data to a set which makes sense for your implementation. Or you can implement your own generator providing data for your tests.
For Java there is e.g. JUnit-Quickcheck which integrates with JUnit: http://pholser.github.io/junit-quickcheck/site/0.6.1/
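To give a feel for it, here is an untested sketch of a property, assuming the runner-based API documented for the junit-quickcheck 0.6 line and reusing the int[] foo(int[] data) example from the question (Foo is hypothetical):

import static org.junit.Assert.assertEquals;

import com.pholser.junit.quickcheck.Property;
import com.pholser.junit.quickcheck.runner.JUnitQuickcheck;
import org.junit.runner.RunWith;

@RunWith(JUnitQuickcheck.class)
public class FooProperties {

    // The framework generates many random arrays; on failure it reports the
    // offending input, which addresses the reproducibility concern above.
    @Property
    public void lengthIsPreserved(int[] data) {
        assertEquals(data.length, new Foo().foo(data).length);   // Foo: hypothetical class under test
    }
}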
A lot has already been written about the differences between unit tests/integration tests etc. Maybe have a look here: What is the difference between integration and unit tests?
How does one go about determining the different types of sorting algorithms without being able to look at code?
The scenario is that our lecturer showed a .jar file with 4 sorting methods in it (sortA, sortB...) that we can call to sort a vector.
So, as far as I know, we can only measure and compare the times for the sorts. What's the best method to determine the sorts?
The main issue is that the times for the sorts don't take very long to begin with (differing by ~1000 ms), so comparing them by time isn't really an option, and so far all the data sets I've used (ascending, descending, nearly sorted) haven't really been giving much variation in the sort time.
I would create a data structure and do the following:
Sort the data on paper using known sorting algorithms and document the expected behavior on paper. Be very specific on what happens to your data after each pass.
Run a test on debug mode and step through the sorting process.
Observe how the elements are being sorted and compare it with your predictions (notes) obtained in step 1.
I think this should work. Basically, you are using the Scientific Method to determine which algorithm is being used in a given method. This way, you don't have to resort to "cheating" by decompiling code or rely on imprecise methodologies like using execution time. The three-step process I outlined relies on solid, empirical data to arrive at your conclusions.
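If the sort methods in the .jar happen to accept arbitrary comparable elements (that is an assumption about the assignment), one more way to gather empirical data without a debugger is to count comparisons rather than wall-clock time, since comparison counts on sorted, reversed and random inputs separate O(n log n) algorithms from O(n^2) ones far more reliably than millisecond timings:

// Sketch: an element type that counts how often it is compared.
class CountingInt implements Comparable<CountingInt> {
    static long comparisons = 0;
    final int value;

    CountingInt(int value) { this.value = value; }

    @Override
    public int compareTo(CountingInt other) {
        comparisons++;
        return Integer.compare(this.value, other.value);
    }
}

// Usage idea: fill a vector with ascending, descending and random CountingInt
// data, reset CountingInt.comparisons to 0, call sortA/sortB/... on it, and
// compare the observed counts with what insertion sort, quicksort, mergesort,
// etc. are expected to do on those inputs.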
I have several java classes which implements a quite complicated (non-linear) business logic. Basically the user provides several (numeric) input parameters, and the application computes a scalar.
I would like to do a parameter scan on the input data, that is I would like to know what parameter values create the maximum output value.
The easiest and most time-consuming method would be to create some simple loops with "small" steps on the input parameters, and constantly check the output one.
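For concreteness, the brute-force scan I have in mind looks roughly like this (class, method, parameter names, ranges and step sizes are all made up):

// Rough sketch of the brute-force scan over two input parameters.
public class GridScan {
    public static void main(String[] args) {
        BusinessLogic logic = new BusinessLogic();   // hypothetical existing class
        double bestOutput = Double.NEGATIVE_INFINITY;
        double bestA = 0, bestB = 0;

        for (double a = 0.0; a <= 10.0; a += 0.01) {
            for (double b = -5.0; b <= 5.0; b += 0.01) {
                double output = logic.compute(a, b); // hypothetical method returning the scalar
                if (output > bestOutput) {
                    bestOutput = output;
                    bestA = a;
                    bestB = b;
                }
            }
        }
        System.out.printf("max = %f at a = %f, b = %f%n", bestOutput, bestA, bestB);
    }
}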
But as I said, this takes quite a long time; there are several mathematical solutions for this problem (e.g. Newton's method).
My question is: are there any free/open-source Java libraries which provide this parameter-scanning functionality?
Thanks,
krisy
You might be able to adjust OptaPlanner for this. Call your business logic in a SimpleScoreCalculator and wrap the returned scalar in a Score instance.
Make sure you use at least 6.1.0.Beta2 as that supports IntValueRange and DoubleValueRange.
As for optimization algorithms: if the variables are few, integer and small in range, "Branch And Bound" can work, guaranteeing you the optimal solution (but much faster than Brute Force).
Otherwise, you'll need to go with a limited selection Construction Heuristic followed by Local Search with custom Move implementations.
Given the source code of a program, how do I analyze it and count the function points within it?
Thanks!
You might find this tutorial on FPA of interest. Personally, I don't put much stock in this estimation method. From my perspective, it attempts to provide a precise estimate for things that have repeatedly been shown not to be precisely measurable. I much prefer planning poker or something similar that tries to group things within a similar order of magnitude and provide an estimate based on your previous estimations for similarly sized stories.
If you're doing this for a class, simply follow the rules given in the text book and crank out the answer. If you're really intending to try this as a software development estimation method, my advice is to simplify the process rather than make it more complex. I would imagine that members of the International Function Point User Group (yes, there is one), will disagree.
With a code analysis tool. If you want to write one yourself, you might want to start with cglib or ASM.
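As a starting point with ASM, something like the following sketch shows where your own counting rules would hook in; counting methods is of course nowhere near a real function point count, it only illustrates the visitor mechanism:

import java.io.FileInputStream;
import java.io.IOException;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

// Sketch: walk a compiled class with ASM and count its methods.
// Real function point analysis needs far more than this; the visitor is
// simply where you would implement your own counting rules.
public class MethodCounter {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in = new FileInputStream(args[0])) {   // path to a .class file
            final int[] methods = { 0 };
            ClassReader reader = new ClassReader(in);
            reader.accept(new ClassVisitor(Opcodes.ASM5) {
                @Override
                public MethodVisitor visitMethod(int access, String name, String desc,
                                                 String signature, String[] exceptions) {
                    methods[0]++;
                    return null;   // not interested in the method bodies here
                }
            }, 0);
            System.out.println(args[0] + ": " + methods[0] + " methods");
        }
    }
}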
Scenario
I am attempting to implement supervised learning over a data set within a Java GUI application. The user will be given a list of items or 'reports' to inspect and will label them based on a set of available labels. Once the supervised learning is complete, the labelled instances will then be given to a learning algorithm. This will attempt to order the rest of the items on how likely it is the user will want to view them.
To get the most from the user's time I want to pre-select the reports that will provide the most information about the entire collection of reports, and have the user label them. As I understand it, to calculate this, it would be necessary to find the sum of all the mutual information values for each report, and order them by that value. The labelled reports from supervised learning will then be used to form a Bayesian network to find the probability of a binary value for each remaining report.
Example
Here, an artificial example may help to explain, and may clear up confusion when I've undoubtedly used the wrong terminology :-) Consider an example where the application displays news stories to the user. It chooses which news stories to display first based on the preferences the user has shown. Features of a news story which have a correlation are country of origin, category and date. So if a user labels a single news story as interesting when it came from Scotland, it tells the machine learner that there's an increased chance other news stories from Scotland will be interesting to the user. Similarly for a category such as Sport, or a date such as December 12th 2004.
This preference could be calculated by choosing any order for all news stories (e.g. by category, by date) or randomly ordering them, then calculating the preference as the user goes along. What I would like to do is to get a kind of "head start" on that ordering by having the user look at a small number of specific news stories and say whether they're interested in them (the supervised learning part). To choose which stories to show the user, I have to consider the entire collection of stories. This is where mutual information comes in. For each story I want to know how much it can tell me about all the other stories when it is classified by the user. For example, if there is a large number of stories originating from Scotland, I want to get the user to classify (at least) one of them. Similarly for other correlating features such as category or date. The goal is to find examples of reports which, when classified, provide the most information about the other reports.
Problem
Because my math is a bit rusty, and I'm new to machine learning, I'm having some trouble converting the definition of mutual information into an implementation in Java. Wikipedia describes the equation for the mutual information of two discrete random variables X and Y as:

I(X; Y) = Σ_{y in Y} Σ_{x in X} p(x, y) log( p(x, y) / ( p(x) p(y) ) )
However, I'm unsure if this can actually be used when nothing has been classified, and the learning algorithm has not calculated anything yet.
As in my example, say I had a large number of new, unlabelled instances of this class:
public class NewsStory {
private String countryOfOrigin;
private String category;
private Date date;
// constructor, etc.
}
In my specific scenario, the correlation between fields/features is based on an exact match so, for instance, one day and 10 years difference in date are equivalent in their inequality.
The factors for correlation (e.g. is date more correlating than category?) are not necessarily equal, but they can be predefined and constant. Does this mean that the result of the function p(x,y) is the predefined value, or am I mixing up terms?
The Question (finally)
How can I go about implementing the mutual information calculation given this (fake) example of news stories? Libraries, javadoc, code examples etc. are all welcome information. Also, if this approach is fundamentally flawed, explaining why that is the case would be just as valuable an answer.
PS. I am aware of libraries such as Weka and Apache Mahout, so just mentioning them is not really useful for me. I'm still searching through documentation and examples for both these libraries looking for stuff on Mutual Information specifically. What would really help me is pointing to resources (code examples, javadoc) where these libraries help with mutual information.
I am guessing that your problem is something like...
"Given a list of unlabeled examples, sort the list by how much the predictive accuracy of the model would improve if the user labelled the example and added it to the training set."
If this is the case, I don't think mutual information is the right thing to use because you can't calculate MI between two instances. The definition of MI is in terms of random variables and an individual instance isn't a random variable, it's just a value.
The features and the class label can be thought of as random variables. That is, they have a distribution of values over the whole data set. You can calculate the mutual information between two features, to see how 'redundant' one feature is given the other one, or between a feature and the class label, to get an idea of how much that feature might help prediction. This is how people usually use mutual information in a supervised learning problem.
I think ferdystschenko's suggestion that you look at active learning methods is a good one.
In response to Grundlefleck's comment, I'll go a bit deeper into terminology by using his idea of a Java object analogy...
Collectively, we have used the terms 'instance', 'thing', 'report' and 'example' to refer to the object being classified. Let's think of these things as instances of a Java class (I've left out the boilerplate constructor):
class Example
{ String f1;
String f2;
}
Example e1 = new Example("foo", "bar");
Example e2 = new Example("foo", "baz");
The usual terminology in machine learning is that e1 is an example, that all examples have two features f1 and f2 and that for e1, f1 takes the value 'foo' and f2 takes the value 'bar'. A collection of examples is called a data set.
Take all the values of f1 for all examples in the data set; this is a list of strings, and it can also be thought of as a distribution. We can think of the feature as a random variable and of each value in the list as a sample taken from that random variable. So we can, for example, calculate the MI between f1 and f2. The pseudocode would be something like:
N = total number of examples in the data set
mi = 0
for each value x taken by f1:
{ for each value y taken by f2:
  { p_xy = (number of examples where f1=x and f2=y) / N
    p_x  = (number of examples where f1=x) / N
    p_y  = (number of examples where f2=y) / N
    if p_xy > 0:
      mi += p_xy * log(p_xy/(p_x*p_y))
  }
}
However you can't calculate MI between e1 and e2, it's just not defined that way.
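For completeness, here is one possible (unoptimised) Java translation of that pseudocode, assuming each feature is handed over as a List<String> with one value per example in the same order; for instance, mi(countries, categories) would give the MI between the countryOfOrigin and category features over the whole data set:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class MutualInformation {

    // Mutual information (in nats; divide by Math.log(2) for bits) between two
    // features, each given as one value per example, same length and order.
    public static double mi(List<String> f1, List<String> f2) {
        int n = f1.size();
        Map<String, Integer> countX = new HashMap<>();
        Map<String, Integer> countY = new HashMap<>();
        Map<String, Map<String, Integer>> countXY = new HashMap<>();

        for (int i = 0; i < n; i++) {
            String x = f1.get(i);
            String y = f2.get(i);
            countX.merge(x, 1, Integer::sum);
            countY.merge(y, 1, Integer::sum);
            countXY.computeIfAbsent(x, k -> new HashMap<>()).merge(y, 1, Integer::sum);
        }

        double mi = 0.0;
        for (Map.Entry<String, Map<String, Integer>> ex : countXY.entrySet()) {
            double pX = countX.get(ex.getKey()) / (double) n;
            for (Map.Entry<String, Integer> ey : ex.getValue().entrySet()) {
                double pY = countY.get(ey.getKey()) / (double) n;
                double pXY = ey.getValue() / (double) n;   // only observed (x, y) pairs, so pXY > 0
                mi += pXY * Math.log(pXY / (pX * pY));
            }
        }
        return mi;
    }
}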
I know information gain only in connection with decision trees (DTs), where in the construction of a DT, the split to make on each node is the one which maximizes information gain. DTs are implemented in Weka, so you could probably use that directly, although I don't know if Weka lets you calculate information gain for any particular split underneath a DT node.
Apart from that, if I understand you correctly, I think what you're trying to do is generally referred to as active learning. There, you first need some initial labeled training data which is fed to your machine learning algorithm. Then you have your classifier label a set of unlabeled instances and return confidence values for each of them. Instances with the lowest confidence values are usually the ones which are most informative, so you show these to a human annotator and have him/her label these manually, add them to your training set, retrain your classifier, and do the whole thing over and over again until your classifier has a high enough accuracy or until some other stopping criterion is met. So if this works for you, you could in principle use any ML-algorithm implemented in Weka or any other ML-framework as long as the algorithm you choose is able to return confidence values (in case of Bayesian approaches this would be just probabilities).
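As a rough sketch of the core of that loop (Weka only because it was already mentioned; the sketch assumes the data is already in Weka's Instances format with the class attribute set):

import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;

// Sketch of uncertainty sampling: train on the labelled pool, then pick the
// unlabelled instance the classifier is least confident about and hand it
// to the human annotator.
public class ActiveLearningStep {

    public static int leastConfidentIndex(Instances labelled, Instances unlabelled)
            throws Exception {
        NaiveBayes classifier = new NaiveBayes();
        classifier.buildClassifier(labelled);

        int bestIndex = -1;
        double lowestConfidence = Double.MAX_VALUE;
        for (int i = 0; i < unlabelled.numInstances(); i++) {
            double[] dist = classifier.distributionForInstance(unlabelled.instance(i));
            double confidence = max(dist);   // probability of the most likely class
            if (confidence < lowestConfidence) {
                lowestConfidence = confidence;
                bestIndex = i;
            }
        }
        return bestIndex;   // show this instance to the user next
    }

    private static double max(double[] values) {
        double m = values[0];
        for (double v : values) m = Math.max(m, v);
        return m;
    }
}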
With your edited question I think I'm coming to understand what you're aiming at. If what you want is calculating MI, then StompChicken's answer and pseudocode couldn't be much clearer in my view. I also think that MI is not what you want and that you're trying to re-invent the wheel.
Let's recapitulate: you would like to train a classifier which can be updated by the user. This is a classic case for active learning. But for that, you need an initial classifier (you could basically just give the user random data to label but I take it this is not an option) and in order to train your initial classifier, you need at least some small amount of labeled training data for supervised learning. However, all you have are unlabeled data. What can you do with these?
Well, you could cluster them into groups of related instances, using one of the standard clustering algorithms provided by Weka or some specific clustering tool like Cluto. If you now take the x most central instances of each cluster (x depending on the number of clusters and the patience of the user) and have the user label them as interesting or not interesting, you can adopt this label for the other instances of that cluster as well (or at least for the central ones). Voila, now you have training data which you can use to train your initial classifier and kick off the active learning process by updating the classifier each time the user marks a new instance as interesting or not. I think what you're trying to achieve by calculating MI is essentially similar, but it may just be the wrong tool for the job.
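A sketch of that clustering step with Weka's SimpleKMeans (again assuming the reports have already been converted to an Instances object, with no class attribute set):

import weka.clusterers.SimpleKMeans;
import weka.core.Instances;

// Sketch: cluster the unlabelled reports, then ask the user about a few
// instances per cluster and propagate their answers to the rest of the cluster.
public class ClusterBootstrap {

    public static int[] clusterAssignments(Instances reports, int numClusters)
            throws Exception {
        SimpleKMeans kMeans = new SimpleKMeans();
        kMeans.setNumClusters(numClusters);
        kMeans.buildClusterer(reports);

        int[] assignment = new int[reports.numInstances()];
        for (int i = 0; i < reports.numInstances(); i++) {
            assignment[i] = kMeans.clusterInstance(reports.instance(i));
        }
        return assignment;   // pick a few members of each cluster for manual labelling
    }
}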
Not knowing the details of your scenario, I should think that you may not even need any labeled data at all, except if you're interested in the labels themselves. Just cluster your data once, let the user pick an item interesting to him/her from the central members of each cluster, and suggest other items from the selected clusters as perhaps being interesting as well. Also suggest some random instances from other clusters here and there, so that if the user selects one of these, you may assume that the corresponding cluster might generally be interesting, too. If there is a contradiction and a user likes some members of a cluster but not others of the same one, then you try to re-cluster the data into finer-grained groups which discriminate the good from the bad ones. The re-training step could even be avoided by using hierarchical clustering from the start and travelling down the cluster hierarchy at every contradiction the user's input causes.