I'm looking for some kind of test management (mostly JUnit, but not exclusively). What I have in mind is a tool that would allow me to easily enable/disable individual tests based on configuration (an XML file, a properties file, a DB, etc.).
I was thinking that there must be some maven plugin that could do that, or some tool. Can you suggest anything?
There are Maven plugins for JUnit and TestNG that allow you to define which tests to run from the command line (or all).
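For example, with the Maven Surefire plugin you can pick tests by name or pattern straight from the command line (the class names here are placeholders):

mvn test -Dtest=AccountServiceTest
mvn test -Dtest=Account*Test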
JUnit allows the creation of test suites. With some care, you can tailor your test suites to match your "to be enabled / disabled" sets of tests.
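For instance, a JUnit 4 suite is just an annotated class, so you can keep one suite per set of tests you want enabled together (the member test classes here are hypothetical):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// One suite per "enabled" set; swap the members to change what runs.
@RunWith(Suite.class)
@Suite.SuiteClasses({ ParserTest.class, DatabaseTest.class })
public class EnabledTests {
}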
Failing that, you could wrap the test suite idea with something that generates / references an XML file for configuration (if that's really the path you want to take). Considering that it's all in a Java class, you might find it easier to do by another means (perhaps a properties file or just in-class handling).
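As a sketch of the in-class handling idea: JUnit's Assume lets tests skip themselves based on a system property, or on a value you load from a properties file (the property name below is made up):

import static org.junit.Assume.assumeTrue;

import org.junit.Before;
import org.junit.Test;

public class SlowDatabaseTest {

    @Before
    public void onlyRunWhenEnabled() {
        // Skips (rather than fails) every test in this class unless
        // the JVM was started with -Dtests.database.enabled=true.
        assumeTrue(Boolean.getBoolean("tests.database.enabled"));
    }

    @Test
    public void roundTripsARecord() {
        // the actual test body
    }
}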
I unfortunately may have two questions in one, or rather the solution may go two different ways. I have log4j loggers set up in a few classes that I unit test. When I run mvn clean install, it obviously runs those tests and in turn creates a log file (which is usually empty, as nothing exciting is being logged). This isn't necessarily a problem, except that Jenkins doesn't seem to like it when I do Perform Maven Release: it complains about the workspace having local changes, citing the log file, before declaring failure.
I know it's the unit tests, because if I change them to integration tests or ignore them, everything works fine. But I'd like a solution, not a workaround.
Are there configurations in Jenkins that can allow me to remedy this?
Or is there a strategy for mocking or ignoring logging in unit tests?
I don't necessarily want to ignore them, but it is interfering with creating a release.
I am not quite aware of what Perform Maven Release does, but I can suggest a couple of solutions:
Remove the log file from your source code repository (as the log file gets regenerated on every run, I don't think it should reside in your source code repository).
Add the path of the offending log file to the list of files that are ignored by version control (e.g. git uses a file called .gitignore - https://git-scm.com/docs/gitignore)
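For example, if the test run produced the log at logs/test.log (a made-up path), the .gitignore entry could be:

# log files generated by the test run (paths are illustrative)
logs/
*.log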
Hope this helps!
Amit has some good ideas, and I'll suggest a few more:
Use slf4j for logging in your project, and don't bind a logging implementation during the test phase
Going one better, bind slf4j-test during the test phase, which logs to memory. Then you can also write tests against your logging to ensure it happens when you expect it to.
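A rough sketch of that approach, assuming slf4j-test's TestLoggerFactory / getLoggingEvents API (do check the library's docs for the exact signatures); Importer is a made-up class under test:

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import uk.org.lidalia.slf4jtest.TestLogger;
import uk.org.lidalia.slf4jtest.TestLoggerFactory;

// A made-up class under test that logs through slf4j.
class Importer {
    private static final Logger log = LoggerFactory.getLogger(Importer.class);

    void process(String row) {
        log.warn("malformed row: {}", row);
    }
}

public class ImporterLoggingTest {

    private final TestLogger logger = TestLoggerFactory.getTestLogger(Importer.class);

    @Test
    public void logsAWarningForAMalformedRow() {
        new Importer().process("not-a-valid-row");
        // slf4j-test records events in memory instead of writing a log file
        assertEquals(1, logger.getLoggingEvents().size());
    }
}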
I've got some Java software that reads settings from properties files and a database, reads input files from one directory, and creates output files in another directory. It also makes modifications to the database.
I need to improve the testing of this software from manual to automatic. Currently the user copies some files to the input directory, executes the program, and inspects the files in the output directory. I'd like to automate this down to just running the tests and inspecting the test result file. The test platform would have expected result file(s) for each input file. The test results should be readable by people who are not programmers :)
I don't want to do this in a JUnit test in the build phase, because the tests have to be executed against both development and test environments. Are there any tools/platforms that could help me with this, or should I build this kind of thing from scratch?
I'd recommend using the TestNG testing framework.
It is a functional testing framework that provides functionality similar to JUnit's, but it has a number of features specific to functional testing, like test dependencies, groups, etc.
The test results should be readable by people that are not programmers :)
You can implement your own test listener and use it to build custom test report.
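A minimal sketch of such a listener, extending TestNG's TestListenerAdapter (the wording and output destination are illustrative; register it under <listeners> in testng.xml):

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Emits one plain-language line per test so non-programmers can read the result.
public class PlainTextReportListener extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult result) {
        System.out.println("PASSED:  " + result.getName());
    }

    @Override
    public void onTestFailure(ITestResult result) {
        System.out.println("FAILED:  " + result.getName() + " - " + result.getThrowable());
    }

    @Override
    public void onTestSkipped(ITestResult result) {
        System.out.println("SKIPPED: " + result.getName());
    }
}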
In the TDD (Test-Driven Development) process, how do you deal with test data?
Assume a scenario: parsing a log file to get the needed columns. For a strong test, how do I prepare the test data? And is it appropriate to locate such files alongside the test class files?
Maven, for example, uses a convention for folder structures that takes care of test data:
src
  main
    java       <-- Java source files of the main application
    resources  <-- resource files for the application (logger config, etc.)
  test
    java       <-- test suites and classes
    resources  <-- additional resources for testing
If you use Maven for building, you'll want to place the test resources in the right folder. If you're building with something different, you may still want to use this structure, as it is more than just a Maven convention; in my opinion it's close to best practice.
Another option is to mock out your data, eliminating any dependency on external sources. This way it's easy to test various data conditions without having to have multiple instances of external test data. I then generally use full-fledged integration tests for lightweight smoke testing.
Hard code them in the tests so that they are close to the tests that use them, making the test more readable.
Create the test data from a real log file. Write a list of the tests intended to be written, tackle them one by one and tick them off once they pass.
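As a sketch of the hard-coding approach (LogLineParser is hypothetical; the minimal stand-in implementation is included only so the example is complete):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A made-up class under test, with the simplest implementation that could work.
class LogLineParser {
    String column(String line, int index) {
        return line.split(" ")[index];
    }
}

public class LogLineParserTest {

    @Test
    public void extractsTheLevelColumn() {
        // Taken from a real log file, then trimmed to the smallest
        // input that still exercises the behaviour under test.
        String line = "2010-06-01 12:00:00 ERROR Connection refused";
        assertEquals("ERROR", new LogLineParser().column(line, 2));
    }
}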
getClass().getClassLoader().getResourceAsStream("....xml");
inside the test worked for me. But
getClass().getResourceAsStream("....xml");
didn't work.
Don't know why but maybe it helps some others.
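The likely reason: Class.getResourceAsStream resolves the path relative to the class's own package unless it starts with a slash, while ClassLoader.getResourceAsStream always resolves from the classpath root. A small illustration (the file name is a placeholder):

import java.io.InputStream;

public class ResourcePathDemo {

    public void show() {
        // Resolved from the classpath root, e.g. src/test/resources/testdata.xml
        InputStream fromRoot = getClass().getClassLoader().getResourceAsStream("testdata.xml");

        // Resolved relative to this class's package, because there is no leading '/'
        InputStream packageRelative = getClass().getResourceAsStream("testdata.xml");

        // With a leading '/', Class.getResourceAsStream also starts at the root
        InputStream alsoFromRoot = getClass().getResourceAsStream("/testdata.xml");
    }
}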
When my test data must be an external file - a situation I try to avoid, but can't always - I put it into a reserved test-data directory at the same level as my project, and use getClass().getClassLoader().getResourceAsStream(path) to read it. The test-data directory isn't a requirement, just a convenience. But try to avoid needing to do this; as #philippe points out, it's almost always nicer to have the values hard-coded in the tests, right where you can see them.
I recently read (link text) about a way to statically add tests to a test suite in JUnit 4. What about a dynamic way, i.e. how do I add a test class if its name is not known until run-time, e.g. its name is read from an XML file?
I found out how to do it: I can use the JUnitExt library (http://junitext.sourceforge.net). It supports "declarative test configurations (as provided by TestNG)". See junitext.sourceforge.net/tutorial.html (How to parameterize tests using XML).
I don't know of a way to add to an existing suite, but you can create your own suite at runtime. The JUnitCore class lets you pass in a list of classes you want run. These can be read in from anywhere you like including XML.
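A minimal sketch (here the class names arrive as program arguments, but they could just as well be parsed out of your XML file first):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class DynamicTestRunner {

    public static void main(String[] args) throws ClassNotFoundException {
        // Fully qualified test class names; swap this for names read from XML.
        Class<?>[] classes = new Class<?>[args.length];
        for (int i = 0; i < args.length; i++) {
            classes[i] = Class.forName(args[i]);
        }
        Result result = JUnitCore.runClasses(classes);
        System.out.println("Ran " + result.getRunCount() + " tests, "
                + result.getFailureCount() + " failed");
    }
}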
I want to run my unit tests automatically when I save my Eclipse project. The project is built automatically whenever I save a file, so I think this should be possible in some way.
How do I do it? Is the only option really to get an ant script and change the project build to use the ant script with targets build and compile?
Update: I will try two different approaches now:
Running an additional builder for my project that executes the Ant target test (I have an Ant script anyway)
ct-eclipse, recommended by Thorbjørn
For sure it is unwise to run all tests, because we may have, for example, 20,000 tests whereas our change could affect only, let's say, 50 of them: tests for the class we have changed and tests for classes that collaborate with our class.
There is a useful plugin called Infinitest (http://improvingworks.com/products/infinitest/) which runs only some tests (those related to the class we've changed) just after we save changes. It also integrates quite nicely with the editor (using annotations) and the problem view, displaying failing tests like errors.
Right click on your project > Properties > Builders > New, and there add your Ant builder.
But, in my opinion, it is unwise to run the unit tests on each save.
See if Eclipse has a plugin for Infinitest.
I'd also consider TestNG as an alternative to JUnit. It has a lot of features that might be helpful in partitioning your unit test classes into shorter and longer running groups.
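For example, TestNG groups make that partitioning explicit (the group names here are made up); you then select which groups to run in testng.xml or in the Surefire configuration:

import org.testng.annotations.Test;

public class PartitionedTests {

    @Test(groups = { "fast" })
    public void quickValidationCheck() {
        // runs in the short, save-triggered suite
    }

    @Test(groups = { "slow" })
    public void fullEndToEndScenario() {
        // reserved for the longer nightly / CI run
    }
}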
I believe you are looking for http://ct-eclipse.tigris.org/
I've experimented with the concept earlier, and my personal conclusion was that in order for this to be useful you need a lot of tests which take time. Personally I save very frequently so this would happen frequently, and I didn't find it to be an advantage. It might be different for you.
Instead we bit the bullet and set up a "build server" which watches our CVS repository and builds projects as they change. If the compilation fails or the tests fail we are notified quickly so we can remedy it.
It is as always a matter of taste what works for you. This is what I've found.
I would recommend Infinitest for the described situation. Infinitest is nowadays a GPL v3 licensed product. Eclipse update site: http://infinitest.github.com
Then you should use Infinitest. Infinitest helps you do continuous testing.
Whenever you make a change, Infinitest runs tests for you.
It selects tests intelligently, and only runs the ones you need. It reports unit test failures like compiler errors, and provides additional information that helps you write better tests.