I have a web tool which, when queried, returns generated Java classes based upon the arguments in the URL.
The classes we retrieve from the webserver change daily and we need to ensure that they still can process known inputs.
Note that these classes do not test the webserver; they run locally and transform XML into a custom format. I am not testing the webserver itself.
These classes must then be placed in a specific package structure, compiled, and run against a known set of input data, and their output compared against known output data.
I would like to do this automatically each night to make sure that the generated classes are correct.
What is the best way to achieve this?
Specifically, what's the best way to:
retrieve the code from a webserver and place it in a file
compile the code and then call it
I'm sure a mix of JUnit and Ant will be able to achieve this, but is there a standard solution/approach for this?
First up, to answer your question: No, I do not think that there is a standard approach for this. This sounds like quite an unusual situation ;-)
Given that, what I would do is write your JUnit tests so that they all call a class named GeneratedCode; then, once you download the code, rename the downloaded class to GeneratedCode, compile it, and run your unit tests.
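A rough sketch of that download-compile-call cycle could look like the following. It is not a definitive implementation: the URL, package name, and transform(...) method are placeholders for whatever your web tool actually produces, and it assumes a JDK (not just a JRE) is installed so the system compiler is available.

// Minimal sketch; URL, package and method names are placeholders.
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class GeneratedCodeFetcher {
    public static void main(String[] args) throws Exception {
        // 1. Retrieve the generated source and place it in the expected package folder.
        Path sourceDir = Paths.get("build/generated/com/example");
        Files.createDirectories(sourceDir);
        Path sourceFile = sourceDir.resolve("GeneratedCode.java");
        URL url = new URL("http://example.com/generate?type=Foo");
        try (InputStream in = url.openStream()) {
            Files.copy(in, sourceFile, StandardCopyOption.REPLACE_EXISTING);
        }

        // 2. Compile it with the JDK's built-in compiler.
        Path classesDir = Paths.get("build/classes");
        Files.createDirectories(classesDir);
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int result = compiler.run(null, null, null,
                "-d", classesDir.toString(), sourceFile.toString());
        if (result != 0) {
            throw new IllegalStateException("Compilation of generated code failed");
        }

        // 3. Load and call it reflectively; the nightly JUnit tests can do the same.
        try (URLClassLoader loader = new URLClassLoader(
                new URL[] { classesDir.toUri().toURL() })) {
            Class<?> generated = loader.loadClass("com.example.GeneratedCode");
            Object instance = generated.getDeclaredConstructor().newInstance();
            // e.g. invoke a known transform(String xml) method here and compare
            // its output against the known output data.
        }
    }
}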
You have the same goal as continuous integration ;-)
Maybe a bit overkill for this simple task, but this is the standard way to get something, compile something and test something regularly.
E.g. you could try Hudson.
You should be creating a "mock" interface for your web service that (a) behaves the same way and (b) returns a known answer.
You should then do some other integration testing with the live web service where a person looks at the results and decides if they worked.
Can you only test the generated classes after they have been published on the web service? Do you have no way to test during or just after generation?
One idea, if the generated code isn't too complex, is to load it via the GroovyClassLoader and run your tests against it. See this page for examples.
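As a rough illustration of that idea, driven from Java: Groovy can compile most plain Java source in memory, so something like the following could work. The transform method name is an assumption about the generated class's API.

// Sketch only; "transform" is an assumed method name on the generated class.
import groovy.lang.GroovyClassLoader;

public class GeneratedCodeRunner {
    public static Object run(String generatedSource, String xmlInput) throws Exception {
        try (GroovyClassLoader loader = new GroovyClassLoader()) {
            Class<?> generated = loader.parseClass(generatedSource); // compiles in memory
            Object instance = generated.getDeclaredConstructor().newInstance();
            return generated.getMethod("transform", String.class).invoke(instance, xmlInput);
        }
    }
}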
Current Situation
As a pre-build test, I am trying to check that any commits/changes to Java fixtures will not break any fitnesse tests I have in my fitnesse server.
The approach I have taken is to grab all the test files (context.txt) that I want to verify the commit will not break, parse them as best I can, and compare them with the available methods I can grab via reflection from my projects.
Currently, I have a HashMap of 'class name' to 'class object' for all my available java fixtures. I also have been able to get access to all the fitnesse tests as File objects. It looks a little like this:
HashMap<String, Class<?>> availableJavaClassesMap = initJavaMap();
List<File> allFitnesseTestFiles = initTestFiles();
Goal
Now I want to be able to do something like this:
HashMap<String, Method> parsedMethodsFromFitnesseMap = new HashMap<>();
for (File file : allFitnesseTestFiles) {
    parsedMethodsFromFitnesseMap.putAll( parseFile( file ) );
}
Then I would simply compare the two HashMaps and make sure that parsedMethodsFromFitnesseMap is a subset of availableJavaClassesMap.
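A rough sketch of that comparison is below. It assumes the parsed map is keyed by fixture class name; that key structure is my assumption here, so the lookup would need adjusting if the keys are structured differently.

import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Map;

public class FixtureMethodChecker {
    // Assumes parsedMethodsFromFitnesseMap maps fixture class name -> parsed Method.
    static void assertParsedMethodsExist(Map<String, Class<?>> availableJavaClassesMap,
                                         Map<String, Method> parsedMethodsFromFitnesseMap) {
        for (Map.Entry<String, Method> entry : parsedMethodsFromFitnesseMap.entrySet()) {
            Class<?> fixture = availableJavaClassesMap.get(entry.getKey());
            String methodName = entry.getValue().getName();
            boolean present = fixture != null
                    && Arrays.stream(fixture.getMethods())
                             .anyMatch(m -> m.getName().equals(methodName));
            if (!present) {
                throw new AssertionError("Fixture method not found: "
                        + entry.getKey() + "." + methodName);
            }
        }
    }
}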
Worries
Include files: how to handle parsing those first/other approaches
Scenarios: create my own list of known scenarios and how they work
Ideal Solution
Is there an already made parser that will do this?
I have found the Slim Code and think it could be refactored to my needs.
Is this the best approach for this pre-build check?
Note
Each test is very expensive to run, therefore simply running all the tests to see if they still work is not an option for my needs.
Ideas? Thanks.
My Solution
The approach I settled on was to use SVNKit to programmatically pull from svn into the system's temp folder, do my parsing and comparison of methods, and then delete the directory when I was done.
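A minimal sketch of that SVNKit checkout step (the repository URL and credentials below are placeholders, and it assumes an http/https repository):

// Checkout into a temp folder; URL and credentials are placeholders.
import java.io.File;
import java.nio.file.Files;
import org.tmatesoft.svn.core.SVNDepth;
import org.tmatesoft.svn.core.SVNURL;
import org.tmatesoft.svn.core.internal.io.dav.DAVRepositoryFactory;
import org.tmatesoft.svn.core.wc.SVNClientManager;
import org.tmatesoft.svn.core.wc.SVNRevision;

public class FitnesseTestPuller {
    public static File pullTests() throws Exception {
        DAVRepositoryFactory.setup(); // enables http/https repository access
        File tempDir = Files.createTempDirectory("fitnesse-tests").toFile();
        SVNClientManager manager = SVNClientManager.newInstance(null, "user", "password");
        manager.getUpdateClient().doCheckout(
                SVNURL.parseURIEncoded("http://svn.example.com/fitnesse/FitNesseRoot"),
                tempDir, SVNRevision.HEAD, SVNRevision.HEAD, SVNDepth.INFINITY, false);
        return tempDir; // parse the test files here, then delete the directory when done
    }
}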
For execution time, if I pulled and parsed 200 tests, it took about 20 seconds to do so. However, about 90% of that time was taken up by simply pulling the tests from the svn repo.
Holding on to the repo instead of deleting it would solve this timing issue (besides the first pull, obviously); however, for my JUnit-style approach, I will probably take the longer time hit for the sake of encapsulation.
I made sure that the import files were parsed beforehand so that the scenarios in those could be used when parsing the actual tests.
I ended up not using any of the Slim code (however, if you want to give it a shot, the InstructionExecutor class was what I thought was the most useful/closest to a parser).
This was a fairly easy approach and I would recommend it.
Note
I never tried the RESTful approach, which might have yielded a faster execution time, but I did enjoy the simplicity once the repo was successfully pulled onto the machine.
I had to implement some code to traverse a directory structure and return a list of files found. The requirements were pretty simple:
Given a base directory, find all files (which are not directories themselves) within.
If a directory is found, repeat step 1 for it.
I wanted to develop the code using TDD. As I started writing the tests, I realized that I was mocking class File, so I could intercept calls to File.isDirectory() and so on. In this way, I was forcing myself to use a solution where I would call that method.
I didn't like it because this test is definitely tightly coupled to the implementation. If I ever change the way in which I ask whether a file is a directory, then this test is going to fail even if I keep the contract working. Looking at it as a Private Unit Test made me feel uneasy, for all the reasons expressed in that post. I'm not sure if this is one of those cases where I need to use that kind of testing. On the other hand, I really want to be sure that it returns every file that is not also a directory, traversing the entire structure. To me, that requires a nice, simple test.
I wanted to avoid having to create a testing directory structure with real testing files "on disk", as I found it rather clumsy and against some of the best practices that I have read.
Bear in mind that I don't need to do anything with the contents, so tricks like using StringReader instead of FileReader do not apply here. I thought I could do something equivalent, though, like being able to create a directory structure in memory when I set up the test, then tearing it down. I haven't found a way to do it.
How would you develop this code using TDD?
Thanks!
The mistake you have made is to mock File. There is a testing anti-pattern that assumes that if your class delegates to class X, you must mock class X to test your class. There is also a general rule to be cautious of writing unit tests that do file I/O, because they tend to be too slow. But there is no absolute prohibition on file I/O in unit tests.
In your unit tests, have a temporary directory set up and torn down, and create test files and directories within that temporary directory. Yes, your tests will be slower than pure CPU tests, but they will still be fast. JUnit even has support code to help with this very scenario: a @Rule using a TemporaryFolder.
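For example (JUnit 4; FileFinder and findAllFiles are made-up names standing in for your traversal code):

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.util.List;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class FileFinderTest {

    @Rule
    public TemporaryFolder tempFolder = new TemporaryFolder(); // created and deleted per test

    @Test
    public void findsFilesButNotDirectories() throws Exception {
        tempFolder.newFile("a.txt");                  // file at the top level
        File subDir = tempFolder.newFolder("sub");    // nested directory
        new File(subDir, "b.txt").createNewFile();    // file inside it

        List<File> found = new FileFinder().findAllFiles(tempFolder.getRoot());

        assertEquals(2, found.size()); // both files found, the directory itself excluded
    }
}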
Just this week I implemented, using TDD, some housekeeping code that had to scan through a directory and delete files, so I know this works.
As someone who gets very antsy about unit tests that take longer than a few milliseconds to complete, I strongly recommend mocking out the file I/O.
However, I don't think you should mock the File class directly. Instead, look at your use of the File class as the "how", and try to identify the "what". Then codify that with an interface.
For example: you mentioned that one of the things you do is intercept calls to File.isDirectory. Instead of interacting with the File class, what if your code interacted with some implementation of an interface like:
public interface FileSystemNavigator {
public boolean isDirectory(String path);
// ... other relevant methods
}
This hides the use of File.isDirectory from the rest of your code, while simultaneously reframing the problem into something more relevant to your program.
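In your tests you can then substitute a trivial in-memory implementation instead of mocking File. Just an illustration; the real interface would carry whatever other methods you need:

import java.util.HashSet;
import java.util.Set;

// In-memory test double for the interface above; purely illustrative.
public class FakeFileSystemNavigator implements FileSystemNavigator {
    private final Set<String> directories = new HashSet<>();

    public void addDirectory(String path) {
        directories.add(path);
    }

    @Override
    public boolean isDirectory(String path) {
        return directories.contains(path);
    }
}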
In the TDD (Test-Driven Development) process, how do you deal with test data?
Assume a scenario where I parse a log file to get the needed column. For a strong test, how do I prepare the test data? And is it proper for me to locate such files next to the test class files?
Maven, for example, uses a convention for folder structures that takes care of test data:
src
  main
    java        <-- java source files of main application
    resources   <-- resource files for application (logger config, etc)
  test
    java        <-- test suites and classes
    resources   <-- additional resources for testing
If you use Maven for building, you'll want to place the test resources in the right folder; if you're building with something different, you may still want to use this structure, as it is more than just a Maven convention: in my opinion it's close to 'best practice'.
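A test can then load such a file from src/test/resources via the classpath. A small sketch (sample.log is just an assumed file name):

import static org.junit.Assert.assertNotNull;

import java.io.InputStream;

import org.junit.Test;

public class LogParserTest {
    @Test
    public void readsSampleLogFromTestResources() {
        // Files under src/test/resources end up on the test classpath.
        InputStream in = getClass().getClassLoader().getResourceAsStream("sample.log");
        assertNotNull("sample.log should be on the test classpath", in);
        // feed 'in' to the parser under test and assert on the extracted column
    }
}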
Another option is to mock out your data, eliminating any dependency on external sources. This way it's easy to test various data conditions without having to have multiple instances of external test data. I then generally use full-fledged integration tests for lightweight smoke testing.
Hard code them in the tests so that they are close to the tests that use them, making the test more readable.
Create the test data from a real log file. Write a list of the tests intended to be written, tackle them one by one and tick them off once they pass.
getClass().getClassLoader().getResourceAsStream("....xml");
inside the test worked for me. But
getClass().getResourceAsStream("....xml");
didn't work.
Don't know why, but maybe it helps others.
When my test data must be an external file - a situation I try to avoid, but can't always - I put it into a reserved test-data directory at the same level as my project, and use getClass().getClassLoader().getResourceAsStream(path) to read it. The test-data directory isn't a requirement, just a convenience. But try to avoid needing to do this; as @philippe points out, it's almost always nicer to have the values hard-coded in the tests, right where you can see them.
I am continuing the development of a serialization layer generator. The user enters a description of types (currently in XSD or in WSDL), and the software produces code in a certain target language (currently, Java and ansi C89) which is able to represent the types described and which is also able to serialize (turn into a byte-sequence) and deserialize these values.
Generating code is tricky (I mean, writing code is hard; writing code that writes code is writing code to do a hard thing, which is a whole new land of hardness :) ). Thus, in the project which preceded my master's thesis, we decided that we wanted some system tests in place.
These system tests know a type and a number of pairs of values and byte sequences. In order to execute a system test in a certain language, the type is run through the system, resulting in code as described above. This code is then linked with some handwritten host code, which is capable of reading these pairs of a byte sequence and a value, and which has functions to read values of the given type from a string. The resulting executable is then run, the byte-value pairs are fed into it, and it is checked whether all such bindings result in the output "Y". If this is the case, then these example values for the type serialize into the previously defined byte sequence, and we can conclude that the generated code compiles and runs correctly, and thus, overall, that the part of the system handling this type is correct. This is a very good thing.
However, right now I am a bit unhappy with the current implementation. Currently, I have written a custom JUnit runner which uses quite a lot of reflection sorcery in order to read these byte-value bindings from a class's attributes. Also, the overall stack to generate the code requires a lot of boilerplate code and boilerplate classes which do little more than contain two or three strings. Even worse, it is quite hard to get a good integration with all the tools which build on JUnit's test descriptions and which generate test failure reports. It is quite hard to actually debug what is happening if the helpful Maven JUnit test runner or the Eclipse test runner gobbles up whatever errors the compiler threw, just because the format of those errors is different from JUnit's own assertion errors.
Even worse, a single failed test in the generated code causes the Maven build to fail. This is very annoying. I like it when the Maven build fails if a certain test of a different unit fails, because (for example) if a certain depth-first preorder calculation fails for some reason, everything will go haywire. However, if I just want to show someone some generated code for a type I know works, then it is very annoying that I cannot quickly build my application because the type I am working on right now is not finished.
So, given this background, how can I get a nice automated system which checks these generation specifications? Possibilities I have considered:
A JUnit-integrated solution appears to be less than ideal, unless I can improve the integration between Maven, JUnit, my runner and everything else.
We used FitNesse earlier, but ultimately ditched it, because it caused more problems than it solved. The major issues we had were integration with Maven and Hudson.
A solution using texttest. I am not entirely convinced, because this mostly wants an executable, strings to put on stdin and strings to expect on stdout. Adding the whole "run application, link with host code and THEN run the generated executable" seems kinda complicated.
Writing my own solution. This will of course work and do what I want. However, this will be the most time consuming task, as usual.
So... do you see another possible way to do this while avoiding writing something of my own?
You can run Maven with -Dmaven.test.skip=true. NetBeans has a way to set this automatically unless you explicitly hit one of the commands to test the project; I don't know about Eclipse.
I'm learning JUnit. Since my app includes graphical output, I want the ability to eyeball the output and manually pass or fail the test based on what I see. It should wait for me for a while, then fail if it times out.
Is there a way to do this within JUnit (or its extensions), or should I just throw up a dialog box and assertTrue on the output? It seems like it might be a common problem with an existing solution.
Edit: If I shouldn't be using JUnit for this, what should I be using? I want to manually verify the build every so often, and unit test automatically, and it'd be great if the two testing frameworks got along.
Manually accepting/rejecting a test defeats the purpose of using an automated test framework. JUnit is not made for this kind of stuff. Unless you find a way to create and inject a mock-up of the object representing your output device, you should consider alternatives (I don't really know any there, sorry).
I once wrote automated tests for a video decoding component. I dumped the decoded data to a file using some other decoder as a reference, and then compared the output of my decoder to that using the PSNR of each pair of images. This is not 100% self contained (needs external files as resources), but was automated at least, and worked fine for me.
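For reference, the PSNR comparison itself boils down to something like this (illustrative only; it assumes two same-size frames of 8-bit samples):

// Illustrative PSNR between a decoded frame and a reference frame,
// both given as 8-bit samples in byte arrays of equal length.
public class PsnrCheck {
    public static double psnr(byte[] decoded, byte[] reference) {
        double mse = 0;
        for (int i = 0; i < decoded.length; i++) {
            int diff = (decoded[i] & 0xFF) - (reference[i] & 0xFF);
            mse += diff * (double) diff;
        }
        mse /= decoded.length;
        // Identical frames give an infinite PSNR; higher means closer to the reference.
        return 10 * Math.log10(255.0 * 255.0 / mse);
    }
}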
Although you could probably code that, that is not what JUnit is about. It is about automated tests, not guided manual tests. Generally that "does it look right" test is regarded as an integration test, as it is something that is very hard to automate correctly in a way that doesn't break for trivial changes all the time.
Take a look at Abbot to give you a more robust way to test your GUI.
Unit tests shouldn't require human intervention. If you need a user to take an action then I think you're doing it wrong.
If you need a human to verify things, then don't do this as part of your unit tests. Just make it a required step for your test department to carry out when QA'ing builds. (This still works if your QA department is just you.)
I recommend writing unit tests for the models if you're using MVC, or for any utility methods (e.g. with Swing it's common to have color-mapping methods). If you have a good set of unit tests on things like model behavior, they'll help narrow your search when you do hit a UI bug.
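For example, a color-mapping helper like that can be unit tested with no GUI at all (ColorMapper and colorFor are hypothetical names for whatever your helper looks like):

import static org.junit.Assert.assertEquals;

import java.awt.Color;

import org.junit.Test;

public class ColorMapperTest {
    @Test
    public void negativeValuesMapToRed() {
        // ColorMapper is a plain, GUI-free helper class in this sketch.
        assertEquals(Color.RED, new ColorMapper().colorFor(-1.0));
    }
}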
Visual-based unit tests are very difficult. At a company I worked at, they tried these visual tests, but slight differences in video cards could produce failed tests. In the end, this is where a good QA team is required.
Take a look at FEST-Swing. It provides an easy way to automatically test your GUIs.
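A typical FEST-Swing test looks roughly like the sketch below. MyFrame and the component names are placeholders, and the lookups assume the components have their names set via setName.

import org.fest.swing.edt.GuiActionRunner;
import org.fest.swing.edt.GuiQuery;
import org.fest.swing.fixture.FrameFixture;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class MyFrameTest {
    private FrameFixture window;

    @Before
    public void setUp() {
        // Create the frame on the EDT, as FEST requires.
        MyFrame frame = GuiActionRunner.execute(new GuiQuery<MyFrame>() {
            protected MyFrame executeInEDT() {
                return new MyFrame();
            }
        });
        window = new FrameFixture(frame);
        window.show();
    }

    @After
    public void tearDown() {
        window.cleanUp();
    }

    @Test
    public void copiesTextWhenButtonClicked() {
        window.textBox("input").enterText("hello");
        window.button("copy").click();
        window.label("output").requireText("hello");
    }
}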
The other thing you'll want to do is separate the code which does the bulk of the work from your GUI code as much as possible. You can then write unit tests on this work code without having to deal with the user interface. You'll also find that you'll run these tests much more frequently, as they run quickly.