Integration Test for JUnit - Java

I am very new to TDD in general, so please forgive me if my question does not make a lot of sense.
After looking around for a bit, it seems that JUnit is capable of implementing integration tests. I am hoping that the community can provide me some guidance on how to write integration tests. Here is a simple overview of my design.
I have Main1, which accepts a list of zip files. Main1 will extract the zip files, edit the content of the PDFs inside the zip files, and put the final PDF files into folder X. If the number of PDFs reaches a THRESHOLD, then Main2Processor (not a main class) gets invoked: it zips all the PDF files and also creates a report text file with the same name as the newly created zip file.
If I run Main2, it also kicks off Main2Processor, which zips the PDF files and creates the text file reports (even though the number of PDFs in folder X has not reached the THRESHOLD).
How do I write integration tests verifying the correctness of the above design?

You're right; JUnit can be used to write tests that would be called integration tests. All you have to do is relax the rules regarding tests not touching external resources.
First, I would refactor the main() of your application to do as little as you can possibly make it do; there isn't a really good way to test the code in a main() function. Have it construct and run an object (that object can be the object containing main(), if you wish), passing that new object your list of ZIP files. That object can now be tested using JUnit by just instantiating it.
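For instance, the shell might look something like the sketch below; ZipBatchProcessor and process() are placeholder names, not your actual design:

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class ZipBatchProcessor {

    // All the real work lives here, where a test can reach it directly.
    public void process(List<File> zipFiles) {
        // extract the zips, edit the PDFs, move the results to folder X...
    }

    // main() does nothing but translate arguments and delegate.
    public static void main(String[] args) {
        List<File> zips = new ArrayList<>();
        for (String arg : args) {
            zips.add(new File(arg));
        }
        new ZipBatchProcessor().process(zips);
    }
}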
Now, you just have to architect the test to set up a constant test environment and then perform a repeatable test. Create, or clear out, a temp directory somewhere, then copy over some test ZIP files into that directory. Then, run your main processor.
To detect that the proper behavior occurs when the threshold is reached, you just test for the existence of a zip file (and/or its absence if the threshold isn't reached).
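Put together, such a test might look like the sketch below. Every name in it (ZipBatchProcessor, the fixture zip, the output file names) is an assumption standing in for your actual design:

import static org.junit.Assert.assertTrue;

import java.io.File;
import java.util.Arrays;
import org.junit.Before;
import org.junit.Test;

public class ZipBatchProcessorIntegrationTest {

    // Stand-in for "folder X".
    private final File outputDir = new File("target/test-output");

    @Before
    public void resetOutputDirectory() {
        // Constant starting environment: the directory exists and is empty.
        outputDir.mkdirs();
        for (File f : outputDir.listFiles()) {
            f.delete();
        }
    }

    @Test
    public void zipsAndReportsWhenThresholdIsReached() throws Exception {
        // A fixture with enough PDFs inside to cross the threshold.
        File input = new File("src/test/resources/enough-pdfs.zip");

        new ZipBatchProcessor().process(Arrays.asList(input));

        // The observable contract: a zip plus a same-named report appear in folder X.
        assertTrue(new File(outputDir, "batch-001.zip").exists());
        assertTrue(new File(outputDir, "batch-001.txt").exists());
    }
}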

Do you really want an "integration test"? (That term is overloaded beyond comprehension now, so if you could state your end goals, that would help.) How about an acceptance test, where you use this console/GUI app like a real user, with specific input, and check for the expected output?
JUnit is just a test runner and is oblivious to what the test actually does. So yes, you could use it to write any kind of test. However, it was built for unit testing, and that will leak through sometimes, e.g. in the fact that a test stops at the first error, the first assert that does not succeed. Coarse-grained tests usually like to push ahead and report a bunch of errors at the end.
If you want an integration test, you will have to redesign your app to be callable from tests (if it is not already). Raise specific obstacles to writing this test and I can offer more specific advice.

I think you should create some utility methods for your tests, for example running the application, checking a directory, clearing a directory, etc.
Then you will be able to implement tests like the following:
@Test
public void mytest1() {
    exec(Main1.class, "f1.zip", "f2.zip");
    Assert.assertTrue(getFileCount(OUTPUT_DIR) < THRESHOLD);
    // perform verification of your files etc...
}
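The helpers themselves could be as simple as the sketch below (statically imported into the test class). exec() here just invokes the class's main() reflectively; the names match the snippet above but are otherwise assumptions:

import java.io.File;
import java.lang.reflect.Method;

public final class TestUtils {

    private TestUtils() {}

    // Invoke a class's main() with the given arguments.
    public static void exec(Class<?> mainClass, String... args) throws Exception {
        Method main = mainClass.getMethod("main", String[].class);
        main.invoke(null, (Object) args);
    }

    // Count regular files (not subdirectories) in a directory.
    public static int getFileCount(File dir) {
        int count = 0;
        File[] entries = dir.listFiles();
        if (entries != null) {
            for (File entry : entries) {
                if (entry.isFile()) {
                    count++;
                }
            }
        }
        return count;
    }

    // Delete every file in a directory, leaving the directory itself.
    public static void clearDirectory(File dir) {
        File[] entries = dir.listFiles();
        if (entries != null) {
            for (File entry : entries) {
                entry.delete();
            }
        }
    }
}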

First of all, you might describe your above specification from a "test sequence" point of view. For example, test one would provide Main1 with a set of N PDF files, N being under the threshold. Then your test code, after Main1 returns, would check the contents of folder X, as well as the reports, to verify that your expectations are met.
JUnit itself just helps run test cases; it does not really help write the tests.
And JUnit is "unit test" oriented (you can use it for integration tests as well, although some situations do not suit it well, for example when a global setup is required, or when the test cases are expected to run in a specific order...).
Some additional libraries can help greatly to interact easily with the rest of your code: DbUnit, HttpUnit and so on.


Is there a way to have one test function appearing as multiple unit tests in JUnit?

I have a kx q project with unit tests. This test suite produces a table with the test results, and I need to integrate it with Atlassian Bamboo to see which tests failed and why. The easiest way would be to dump this test result table into a CSV that would then be converted to the JUnit XML output that Atlassian Bamboo plugins already understand.
To this end, I think the best approach would be a Java project with a single test suite, or a single function, that reads each line of this CSV (each line is a separate test case) and asserts on whether the corresponding line passed or failed. The important thing is that each assertion is treated as a separate test within the test suite, so that the JUnit XML output corresponds one-to-one to the CSV file dump.
Can this be done using JUnit and how?
Can this be done using JUnit
I wouldn't expect so, mostly because the behavior being described isn't strictly a unit test per se. Perhaps I'm misunderstanding the question (correct me if that's the case), but it sounds like what you want to do is:
Repeat a single test for every line of an input file, and indicate the results individually.
That's a test of sorts, validating a file as input into the system. But it's not testing the behavior of the code, which puts it a bit outside the scope of unit tests.
Unless there's a tool which does something similar to this, I'd be inclined to simply build one. All it really needs to do is receive the file as input, invoke a "test" over each line (which could be a unit test method, but doesn't necessarily need to be), and output results to a file of its own. As an implementation detail, that output happens to be XML in the same format that JUnit outputs its results in, so that the same tools can read the results.
As unit tests alone, I imagine it would necessitate treating each line of the input as a separate, statically defined test (where each line of input is the "arrange" step), which certainly wouldn't be ideal. I imagine writing a one-off tool invoked on its own would be a shorter path to the goal than trying to force JUnit into a pattern it wasn't intended for.
Unless as a concern of automated build/deploy steps it's necessary for this validation to be unit tests, a one-off tool seems more straightforward.
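A sketch of such a one-off tool, assuming a simple CSV layout of test name, PASS/FAIL status, and an optional message (the column layout is a guess; adapt it to the real q dump):

import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class CsvToJUnitXml {

    // Usage: java CsvToJUnitXml results.csv results.xml
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        StringBuilder cases = new StringBuilder();
        int tests = 0;
        int failures = 0;

        for (String line : lines) {
            if (line.trim().isEmpty()) {
                continue;
            }
            // Assumed layout: testName,PASS|FAIL[,message]
            String[] cols = line.split(",", 3);
            tests++;
            boolean passed = "PASS".equalsIgnoreCase(cols[1].trim());
            cases.append("  <testcase name=\"").append(cols[0]).append("\"");
            if (passed) {
                cases.append("/>\n");
            } else {
                failures++;
                String msg = cols.length > 2 ? cols[2] : "failed";
                // XML escaping omitted for brevity.
                cases.append(">\n    <failure message=\"").append(msg)
                     .append("\"/>\n  </testcase>\n");
            }
        }

        try (PrintWriter out = new PrintWriter(args[1], "UTF-8")) {
            out.printf("<testsuite name=\"q-tests\" tests=\"%d\" failures=\"%d\">%n",
                    tests, failures);
            out.print(cases);
            out.println("</testsuite>");
        }
    }
}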

Using TDD to develop file traversing code in Java

I had to implement some code to traverse a directory structure and return a list of files found. The requirements were pretty simple:
Given a base directory, find all files (which are not directories themselves) within.
If a directory is found, repeat step 1 for it.
I wanted to develop the code using TDD. As I started writing the tests, I realized that I was mocking the File class so I could intercept calls to File.isDirectory() and so on. In this way, I was forcing myself toward a solution where I would call that method.
I didn't like it, because this test is definitely tightly coupled to the implementation. If I ever change the way I ask whether a file is a directory, this test is going to fail even if I keep the contract working. Looking at it as a Private Unit Test made me feel uneasy, for all the reasons expressed in that post. I'm not sure if this is one of those cases where I need that kind of testing. On the other hand, I really want to be sure that it returns every file that is not also a directory, traversing the entire structure. To me, that requires a nice, simple test.
I wanted to avoid having to create a testing directory structure with real test files "on disk", as I saw it as rather clumsy and against some of the best practices I have read.
Bear in mind that I don't need to do anything with the contents, so tricks like using a StringReader instead of a FileReader do not apply here. I thought I could do something equivalent, though, like creating a directory structure in memory when I set up the test, then tearing it down. I haven't found a way to do it.
How would you develop this code using TDD?
Thanks!
The mistake you have made is to mock File. There is a testing anti-pattern that assumes that if your class delegates to class X, you must mock class X to test your class. There is also a general rule to be cautious of writing unit tests that do file I/O, because they tend to be too slow. But there is no absolute prohibition on file I/O in unit tests.
In your unit tests, have a temporary directory set up and torn down, and create test files and directories within that temporary directory. Yes, your tests will be slower than pure CPU tests, but they will still be fast. JUnit even has support code to help with this very scenario: a @Rule on a TemporaryFolder.
Just this week I implemented, using TDD, some housekeeping code that had to scan through a directory and delete files, so I know this works.
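For example, a traversal test built on that rule might look like this (FileFinder and findFiles() are hypothetical names for the class under test):

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.util.List;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class FileFinderTest {

    // JUnit creates a fresh folder before each test and deletes it afterwards.
    @Rule
    public TemporaryFolder tmp = new TemporaryFolder();

    @Test
    public void findsFilesInNestedDirectories() throws Exception {
        tmp.newFile("top.txt");
        File sub = tmp.newFolder("sub");
        new File(sub, "nested.txt").createNewFile();

        // FileFinder is the hypothetical class under test.
        List<File> found = new FileFinder().findFiles(tmp.getRoot());

        assertEquals(2, found.size()); // both files found, the directory excluded
    }
}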
As someone who gets very antsy about unit tests that take longer than a few milliseconds to complete, I strongly recommend mocking out the file I/O.
However, I don't think you should mock the File class directly. Instead, look at your use of the File class as the "how", and try to identify the "what". Then codify that with an interface.
For example: you mentioned that one of the things you do is intercept calls to File.isDirectory. Instead of interacting with the File class, what if your code interacted with some implementation of an interface like:
public interface FileSystemNavigator {
    public boolean isDirectory(String path);
    // ... other relevant methods
}
This hides the use of File.isDirectory from the rest of your code, while simultaneously reframing the problem into something more relevant to your program.
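In the tests themselves, a hand-rolled in-memory fake of that interface is then enough; a minimal sketch:

import java.util.HashSet;
import java.util.Set;

// An in-memory fake of the (hypothetical) interface above: tests register
// paths as directories instead of touching the real file system.
public class FakeFileSystemNavigator implements FileSystemNavigator {

    private final Set<String> directories = new HashSet<>();

    public void addDirectory(String path) {
        directories.add(path);
    }

    @Override
    public boolean isDirectory(String path) {
        return directories.contains(path);
    }
}

The production implementation would simply delegate to new File(path).isDirectory().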

Why use JUnit for testing?

Maybe my question is a newbie one, but I cannot really understand the circumstances under which I would use JUnit.
Whether I write simple applications or larger ones, I test them with System.out statements and it seems quite easy to me.
Why create test classes with JUnit, and unnecessary folders in the project, if we still have to call the same methods, check what they return, and then have the overhead of annotating everything?
Why not write a class and test it at once with System.out, instead of creating test classes?
P.S. I have never worked on large projects; I am just learning.
So what is the purpose?
That's not testing, that's "looking manually at output" (known in the biz as LMAO). More formally it's known as "looking manually for abnormal output" (LMFAO). (See note below)
Any time you change code, you must run the app and LMFAO for all code affected by those changes. Even in small projects, this is problematic and error-prone.
Now scale up to 50k, 250k, 1m LOC or more, and LMFAO any time you make a code change. Not only is it unpleasant, it's impossible: you've scaled up the combinations of inputs, outputs, flags, conditions, and it's difficult to exercise all possible branches.
Worse, LMFAO might mean visiting pages upon pages of web app, running reports, poring over millions of log lines across dozens of files and machines, reading generated and delivered emails, checking text messages, checking the path of a robot, filling a bottle of soda, aggregating data from a hundred web services, checking the audit trail of a financial transaction... you get the idea. "Output" doesn't mean a few lines of text, "output" means aggregate system behavior.
Lastly, unit and behavior tests define system behavior. Tests can be run by a continuous integration server and checked for correctness. Sure, so can System.outs, but the CI server isn't going to know if one of them is wrong, and if it does, they're unit tests, and you might as well use a framework.
No matter how good we think we are, humans aren't good unit test frameworks or CI servers.
Note: LMAO is testing, but in a very limited sense. It isn't repeatable in any meaningful way across an entire project or as part of a process. It's akin to developing incrementally in a REPL, but never formalizing those incremental tests.
We write tests to verify the correctness of a program's behaviour.
Verifying the correctness of a program's behaviour by inspecting the content of output statements using your eyes is a manual, or more specifically, a visual process.
You could argue that:
"Visual inspection works; I check that the code does what it's meant to do for these scenarios, and once I can see it's correct, we're good to go."
Now first up, it's great that you are interested in whether or not the code works correctly. That's a good thing. You're ahead of the curve! Sadly, there are problems with this as an approach.
The first problem with visual inspection is that you're a bad welding accident away from never being able to check your code's correctness again.
The second problem is that the pair of eyes used is tightly coupled with the brain of the owner of the eyes. If the author of the code also owns the eyes used in the visual inspection process, the process of verifying correctness has a dependency on the knowledge about the program internalised in the visual inspector's brain.
It is difficult for a new pair of eyes to come in and verify the correctness of the code, simply because they are not partnered up with the brain of the original coder. The owner of the second pair of eyes will have to converse with the original author of the code in order to fully understand the code in question. Conversation as a means of sharing knowledge is notoriously unreliable. The point is moot if the original coder is unavailable to the new pair of eyes; in that instance the new pair of eyes has to read the original code.
Reading other people's code that is not covered by unit tests is more difficult than reading code that has associated unit tests. At best, reading other people's code is tricky work; at its worst, it is the most turgid task in software engineering. There's a reason that employers, when advertising job vacancies, stress that a project is a greenfield (or brand new) one: writing code from scratch is easier than modifying existing code, and it makes the advertised job appear more attractive to potential employees.
With unit testing we divide code up into its component parts. For each component we then set out our stall stating how the program should behave. Each unit test tells a story of how that part of the program should act in a specific scenario. Each unit test is like a clause in a contract that describes what should happen from the client code's point of view.
This then means that a new pair of eyes has two strands of live and accurate documentation on the code in question.
First they have the code itself, the implementation, how the code was done; second they have all of the knowledge that the original coder described in a set of formal statements that tell the story of how this code is supposed to behave.
Unit tests capture and formally describe the knowledge that the original author possessed when they implemented the class. They provide a description of how that class behaves when used by a client.
You are correct to question the usefulness of doing this, because it is possible to write unit tests that are useless, do not cover all of the code in question, become stale or out of date, and so on. How do we ensure that unit testing not only mimics but improves upon the process of a knowledgeable, conscientious author visually inspecting their code's output statements at runtime? Write the unit test first, then write the code to make that test pass. When you are finished, let the computers run the tests; they're fast, they're great at repetitive tasks, and they're ideally suited to the job.
Ensure test quality by reviewing the tests each time you touch the code they cover, and run the tests on every build. If a test fails, fix it immediately.
We automate the process of running tests so that they run each time we build the project. We also automate the generation of code coverage reports detailing what percentage of the code is covered and exercised by tests; we strive for high percentages. Some companies will prevent code changes from being checked in to source control if they do not have sufficient unit tests written to describe any changes in behaviour. Typically a second pair of eyes will review code changes in conjunction with the author of the changes. The reviewer goes through the changes, ensuring that they are understandable and sufficiently covered by tests. The review process is manual, but once the tests (unit and integration tests, and possibly user acceptance tests) pass this manual review, they become part of the automatic build process and are run each time a change is checked in. A continuous-integration server carries out this task as part of the build.
Tests that are automatically run maintain the integrity of the code's behaviour and help prevent future changes to the code base from breaking the code.
Finally, having tests allows you to refactor code aggressively, because you can make big code improvements safe in the knowledge that your changes do not break existing tests.
There is a caveat to Test-Driven Development: you have to write code with an eye to making it testable. This involves coding to interfaces and using techniques such as Dependency Injection to instantiate collaborating objects. Check out the work of Kent Beck, who describes TDD very well. Also look up coding to interfaces and study design patterns.
When you test using something like System.out, you're only testing a small subset of possible use-cases. This is not very thorough when you're dealing with systems that could accept a near infinite amount of different inputs.
Unit tests are designed to allow you to quickly run tests on your application using a very large and diverse set of different data inputs. Additionally, the best unit tests also account for boundary cases, such as the data inputs that lie right on the edge of what is considered valid.
For a human being to test all of these different inputs could take weeks whereas it could take minutes for a machine.
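For instance, a boundary test around a hypothetical validator that accepts ages 0 through 120 might look like this (AgeValidator and isValid() are assumed names, not a real API):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AgeValidatorTest {

    @Test
    public void acceptsValuesOnTheBoundary() {
        AgeValidator v = new AgeValidator(); // hypothetical class under test
        assertTrue(v.isValid(0));    // lowest valid value
        assertTrue(v.isValid(120));  // highest valid value
    }

    @Test
    public void rejectsValuesJustOutsideTheBoundary() {
        AgeValidator v = new AgeValidator();
        assertFalse(v.isValid(-1));
        assertFalse(v.isValid(121));
    }
}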
Think of it like this: You're also not "testing" something that will be static. Your application is most likely going through constant changes. Therefore, these unit tests are designed to run at different points in the compile or deployment cycle. Perhaps the biggest advantage is this:
If you break something in your code, you'll know about it right now: not after you deploy, not when a QA tester catches a bug, not when your clients have cancelled. You'll also have a better chance of fixing the glitch immediately, since it's clear that whatever broke the code in question most likely changed since your last compile. Thus, the amount of investigative work required to fix the problem is greatly reduced.
Here are some other things System.out can NOT do:
Make each test case independent (this is important; see the sketch after this list).
JUnit can do it: a new test case instance is created for each test, and @Before is called each time.
Separate testing code from source.
JUnit can do it.
Integrate with CI.
JUnit can do it, with Ant and Maven.
Arrange and combine test cases easily.
JUnit can do it with @Ignore and test suites.
Check results easily.
JUnit offers many Assert methods (assertEquals, assertSame...).
Use mocks and stubs to focus on the module under test.
JUnit can do it: using mocks and stubs lets you set up the correct fixture and focus on the logic of the module under test.
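A short illustration of that independence: JUnit instantiates the test class anew and re-runs @Before for every test method, so neither test below can leak state into the other.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Before;
import org.junit.Test;

public class IndependenceTest {

    private List<String> items;

    @Before
    public void freshFixture() {
        // Runs before EVERY test method, on a brand-new instance.
        items = new ArrayList<>();
    }

    @Test
    public void firstTestMutatesItsOwnCopy() {
        items.add("a");
        assertEquals(1, items.size());
    }

    @Test
    public void secondTestStartsClean() {
        assertEquals(0, items.size());
    }
}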
Unit tests ensure that code works as intended. They are also very helpful for ensuring that the code still works as intended when you later have to change it, whether to build new functionality or to fix a bug. Having high test coverage of your code allows you to continue developing features without having to perform lots of manual tests.
Your manual approach with System.out is good, but not the best one. It is a one-time test that you perform. In the real world, requirements keep changing, and most of the time you make a lot of modifications to existing functions and classes; you won't manually retest the already-written pieces of code every time.
There are also some more advanced features in JUnit, such as:
Assert statements
JUnit provides methods to test for certain conditions. These methods typically start with "assert" and allow you to specify an error message as well as the expected and the actual result.
Some of these methods are
fail([message]) - Lets the test fail. Might be used to check that a certain part of the code is not reached, or to have a failing test before the test code is implemented.
assertTrue(true) / assertTrue(false) - Will always pass / fail. Can be used to predefine a test result if the test is not yet implemented.
assertTrue([message,] condition) - Checks that the boolean condition is true.
assertEquals([message,] expected, actual) - Tests whether two values are equal (according to the equals method, if implemented, otherwise using == reference comparison). Note: for arrays it is the reference that is checked, not the contents; use assertArrayEquals([message,] expected, actual) for that.
assertEquals([message,] expected, actual, delta) - Tests whether two float or double values are within a certain distance of each other, controlled by the delta value.
assertNull([message,] object) - Checks that the object is null.
and so on. See the Assert class's Javadoc for the full list.
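For illustration, a few of these assertions in one test method (a minimal sketch):

import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AssertExamplesTest {

    @Test
    public void assertionFlavours() {
        assertTrue("flag should be set", 1 + 1 == 2);
        assertEquals("unexpected sum", 4, 2 + 2);
        assertEquals(0.333, 1.0 / 3.0, 0.001);          // delta form for doubles
        assertNull(System.getProperty("no.such.key"));  // expected to be unset
        assertArrayEquals(new int[] { 1, 2 }, new int[] { 1, 2 });
    }
}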
Suites
With test suites, you can, in a sense, combine multiple test classes into a single unit so you can execute them all at once. A simple example, combining the test classes MyClassTest and MySecondClassTest into one suite called AllTests:
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Suite.class)
@SuiteClasses({ MyClassTest.class, MySecondClassTest.class })
public class AllTests { }
The main advantage of JUnit is that it is automated, rather than you manually having to check with your print-outs. Each test you write stays with your system. This means that if you make a change that has an unexpected side effect, your tests will catch it and fail, rather than you having to remember to manually test everything after each change.
JUnit is a unit testing framework for the Java programming language. It is important in test-driven development, and is one of a family of unit testing frameworks collectively known as xUnit.
JUnit promotes the idea of "first testing, then coding", which emphasizes setting up the test data for a piece of code that can be tested first and then implemented. This approach is like "test a little, code a little, test a little, code a little...", which increases programmer productivity and the stability of program code, reducing programmer stress and the time spent on debugging.
Features
JUnit is an open source framework which is used for writing and running tests.
Provides annotations to identify the test methods.
Provides assertions for testing expected results.
Provides test runners for running tests.
JUnit tests allow you to write code faster while increasing quality.
JUnit is elegantly simple. It is less complex and takes less time.
JUnit tests can be run automatically; they check their own results and provide immediate feedback. There's no need to manually comb through a report of test results.
JUnit tests can be organized into test suites containing test cases and even other test suites.
JUnit shows test progress in a bar that stays green while the tests are passing and turns red when a test fails.
I have a slightly different perspective on why JUnit is needed.
You could actually write all the test cases yourself, but it's cumbersome. Here are the problems:
Instead of System.out we could write if (value1.equals(value2)) and return 0 or -1 or an error message. In this case, we need a "main" test class that runs all these methods, checks the results, and keeps track of which test cases failed and which passed.
If you want to add some more tests, you need to add them to this "main" test class as well: changes to existing code. If you want to auto-detect test cases from test classes, you need to use reflection.
All your tests, and your main class to run them, are not detected by Eclipse, and you need to write custom debug/run configurations to run these tests. You still don't see those pretty green/red outputs, though.
Here is what JUnit does:
It has assertXXX() methods which are useful for producing helpful error messages from the conditions and communicating results to the "main" class.
The "main" class is called a runner, and it is provided by JUnit, so we don't have to write one. It detects test methods automatically by reflection: if you add new tests with the @Test annotation, they are detected automatically.
JUnit has Eclipse integration, and Maven/Gradle integration as well, so it is easy to run tests and you won't have to write custom run configurations.
I'm not an expert in JUnit, so that's what I understand as of now; I will add more in the future.
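To illustrate the runner point, here is the smallest possible JUnit test class: no main() method and no registration anywhere; the runner discovers the method purely through its annotation.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class SmokeTest {

    // The JUnit runner finds this method by reflection,
    // via the @Test annotation alone.
    @Test
    public void additionWorks() {
        assertEquals(4, 2 + 2);
    }
}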
You cannot really write test cases without using a testing framework, or else you will have to write your own testing framework to do your test cases justice.
Here is some info about the JUnit framework; apart from that, you can also use the TestNG framework.
What is JUnit?
JUnit is a widely used testing framework for the Java programming language. You can use this automation framework for both unit testing and UI testing. It helps us define the flow of execution of our code with different annotations. JUnit is built on the idea of "first testing, then coding", which helps us increase the productivity of test cases and the stability of the code.
Important features of JUnit testing:
It is an open source testing framework allowing users to write and run test cases effectively.
Provides various types of annotations to identify test methods.
Provides different types of assertions to verify the results of test case execution.
It also gives you test runners for running tests effectively.
It is very simple and hence saves time.
It provides ways to organize your test cases in the form of test suites.
It gives test case results in a simple and elegant way.
You can integrate JUnit with Eclipse, Android Studio, Maven & Ant, Gradle and Jenkins.
JUnit is the approach usually adopted by Java developers.
With it, they can provide known inputs to a function and check that the expected output comes back, deciding accordingly whether the code is written correctly; if a test case fails, a different implementation approach may be needed.
JUnit makes development faster and helps drive defects out of the functions under test.
JUNIT : OBSERVE AND ADJUST
Here is my perspective on JUNIT.
JUNIT can be used to:
1) Observe a system's behaviour when a new unit is added to that system.
2) Make adjustments in the system to welcome the "new" unit.
What? Exactly.
A real-life example: when your relatives visit your college hostel room,
1) You will pretend to be more responsible.
2) You will keep everything where it should be: shoes in the shoe rack, not on the chair; clothes in the cupboard, not on the chair.
3) You will get rid of all the contraband.
4) You will start a cleanup on every device you possess.
In programming terms:
System: your code.
UNIT: the new functionality.
As the JUnit framework is used with the Java language, JUNIT = JAVA UNIT (maybe).
Suppose you already have bulletproof code, but a new requirement comes in and you have to add it to your code. This new requirement may break your code for some input (test case).
The easy way to adapt to this change is unit testing (JUnit).
For that, you should write multiple test cases for your code as you build your codebase. Whenever a new requirement comes in, you just run all the test cases to see if any of them fail.
If none fail, you are a BadA** artist and you are ready to deploy the new code.
If any of the test cases fail, you change your code and run the test cases again until you get the green status.

Managing test cases, existing in a single file

I have a lot of test cases written in a single file. They are actually a sort of instruction set, which can be read and run from inside Java.
The problem is that this approach is not good: the file becomes big and unmanageable with a lot of test cases. How should I manage them? I was thinking of splitting them across different files, with an XML database for the metadata. Any better ways?
P.S.: They are not plain-English test cases; they are instructions of a sort which can be run inside Java.
Update: They are not unit tests, more like functional tests, and they are not test classes. A program reads the different test cases from a file and runs them.
Look at the approach used by Cucumber. There you find human-readable "feature" descriptions that each contain a number of different scenarios to test that feature out completely. No single feature file is one test, none are test classes themselves, and a program reads all the feature files and runs them.
The overall pattern here would probably be instructive for you as well.
http://cukes.info/
Note that a significant amount of work has recently gone into making these cuke tests easy to write in Java as well as in Cucumber's original native language, Ruby.
The Java port of Cucumber uses a JUnit 4 custom test runner like this:
@RunWith(Cucumber.class)
@Feature("create_user_account.feature")
public class CreateUserAccountTest {
}
You can run this class as a JUnit test, and the console output looks very similar to what you see on the Cucumber website. So you basically have one of these "test classes" for every feature. You can then run a whole package's worth of features, a single feature, or the entire project's worth of features at once, by either grouping them into test suites or using Eclipse's test run batching.

How to deal with the test data in Junit?

In the TDD (Test-Driven Development) process, how do you deal with test data?
Assume a scenario: parse a log file to get the needed columns. For a robust test, how do I prepare the test data? And is it proper to locate such files next to the test class files?
Maven, for example, uses a convention for folder structures that takes care of test data:
src
  main
    java          <-- Java source files of the main application
    resources     <-- resource files for the application (logger config, etc.)
  test
    java          <-- test suites and classes
    resources     <-- additional resources for testing
If you use Maven for building, you'll want to place the test resources in the corresponding folder. If you're building with something different, you may still want to use this structure, as it is more than just a Maven convention; in my opinion it's close to best practice.
Another option is to mock out your data, eliminating any dependency on external sources. This way it's easy to test various data conditions without needing multiple instances of external test data. I then generally use full-fledged integration tests for lightweight smoke testing.
Hard code them in the tests so that they are close to the tests that use them, making the test more readable.
Create the test data from a real log file. Write a list of the tests intended to be written, tackle them one by one and tick them off once they pass.
Inside the test,
getClass().getClassLoader().getResourceAsStream("....xml");
worked for me, but
getClass().getResourceAsStream("....xml");
didn't work.
I don't know why, but maybe it helps someone else. (The likely reason: Class.getResourceAsStream resolves a relative path against the class's own package, while ClassLoader.getResourceAsStream resolves from the classpath root; a leading "/" makes the first form behave like the second.)
When my test data must be an external file - a situation I try to avoid, but can't always - I put it into a reserved test-data directory at the same level as my project, and use getClass().getClassLoader().getResourceAsStream(path) to read it. The test-data directory isn't a requirement, just a convenience. But try to avoid needing to do this; as @philippe points out, it's almost always nicer to have the values hard-coded in the tests, right where you can see them.
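A typical shape for that kind of test, assuming the fixture lives under src/test/resources (the file name sample-log.xml is hypothetical):

import static org.junit.Assert.assertNotNull;

import java.io.InputStream;
import org.junit.Test;

public class TestDataLoadingTest {

    @Test
    public void loadsTestDataFromTheClasspath() throws Exception {
        // Resolved from the classpath root; with the Maven layout above this
        // finds src/test/resources/sample-log.xml.
        try (InputStream in = getClass().getClassLoader()
                .getResourceAsStream("sample-log.xml")) {
            assertNotNull("test fixture missing from classpath", in);
            // parse the log and assert on the extracted columns...
        }
    }
}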
