My question is from the perspective of language design.
Why is assert treated differently, i.e. why does it raise an error and not an exception, why is it not enabled by default, etc.?
It does seem elegant (a very subjective opinion) and easy to read (again subjective) for doing validations, and there are also tools (IDEs) which can live-evaluate assertions and provide warnings based on them.
I'd say that the reason is that the defaults for Java are meant for production code (the "release" version of the software): if users need to build your code they will use the provided defaults, and if you are a developer and want better reporting you can always make some additional effort.
Usually you don't want to ship assertions in a release version. Why? You can always design your code to perform some unobtrusive background error handling, and throwing an AssertionError in the user's face is not always the way to go.
Most of the time I see them used as additional code testing: when you run regression tests and code coverage is high, the absence of assertion errors suggests that there are no (obvious to spot) errors in your code. If one does occur, you can deduce from the stack trace what went wrong and why. On the other hand, clients shouldn't be bothered with descriptive error information.
So how should you actually use them? In my experience, you should design code that does not rely on assertions for error handling. If you want an exception to be thrown somewhere, throw it explicitly yourself. Once the code can handle itself, you can add assertions to check pre- and postconditions as well as invariants; basically, use them to check algorithm correctness instead of data correctness. They have value for developers rather than users. Once you have enough confidence in your solution, you can disable assertions; your program still works fine, and your users don't have to run it with additional runtime overhead.
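For example, a minimal sketch of that split (the class and method are made up for illustration): input validation throws explicitly and stays in production, while the assertions check the algorithm and disappear when assertions are disabled.

public final class Search {

    // Data correctness: validate the caller's input explicitly; this
    // check must survive in production, so it throws a real exception.
    public static int indexOf(int[] sorted, int key) {
        if (sorted == null)
            throw new IllegalArgumentException("sorted must not be null");

        // Algorithm correctness: developer-only precondition, skipped with -da.
        assert isSorted(sorted) : "caller must pass a sorted array";

        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (sorted[mid] < key) lo = mid + 1;
            else if (sorted[mid] > key) hi = mid - 1;
            else return mid;
        }
        return -1;
    }

    private static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++)
            if (a[i - 1] > a[i]) return false;
        return true;
    }
}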
Asserts are tools for the developer.
The core reason it's not enabled by default is that assertions via assert are not meant to provide run-time validation/protection for production code.
assert is a tool for use during development and testing that should not impact performance when the application actually runs in a production environment.
Imagine very heavyweight assertions that are critical to check when building out a new feature or a change against the entire range of allowable inputs, but that, once the feature is built and properly tested, don't need to run until the code changes again.
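A hedged sketch of what such a check might look like (the cache/store names are hypothetical):

import java.util.Map;

class CacheValidation {

    // Exhaustive O(n) consistency check: far too slow for production,
    // but cheap insurance while the feature is still in flux.
    static boolean cacheMatchesStore(Map<String, String> cache, Map<String, String> store) {
        for (Map.Entry<String, String> e : cache.entrySet())
            if (!e.getValue().equals(store.get(e.getKey()))) return false;
        return true;
    }

    static void put(Map<String, String> cache, Map<String, String> store, String key, String value) {
        store.put(key, value);
        cache.put(key, value);
        // The whole scan is skipped entirely unless the JVM runs with -ea.
        assert cacheMatchesStore(cache, store) : "cache out of sync with store";
    }
}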
It raises an error because the severity of an assertion violation is high enough to warrant one.
An example of an exception is something like ArrayIndexOutOfBoundsException. This might be reasonable in some cases, and you might even expect (and handle) such a case.
But an assertion violation is (like running out of memory, for example) nothing you would ever expect or want to deal with. It's an error, and there is no excuse.
Assertions are not enabled by default because they should always be fulfilled. You enable them to test for that, but then you "know" (as far as you can know) that they are not violated. So you don't need to check the conditions (which might be performance-intensive) every time in production code.
The good thing about assertions in Java is that the code actually performing the checks is never executed when assertions are not enabled.
For example:
if (!doComplexChecks()) throw new AssertionError("Damn!");
might take a lot of time, and you want to verify this when unit testing or debugging, but in production you just don't want that.
But this code
assert doComplexChecks();
is only executed when assertions are enabled, so it saves you a lot of time in production code.
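For reference, assertions are toggled per JVM invocation and can even be scoped to package trees or single classes (the class names here are illustrative):

java -ea com.example.Main                      # enable assertions everywhere
java -da com.example.Main                      # disable them (the default)
java -ea:com.example.core... com.example.Main  # enable only for one package tree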
Related
I'd like to mark unit-tested methods as tested in my Java code, so that a compiler error occurs if the corresponding test method is missing.
Is there a way to do that? I couldn't find a satisfying solution.
Of course it's possible: you can write your own postprocessor that parses your code, finds the marked methods, runs the tests, gets the coverage results, compares them with the marked methods, and fails the build if something is wrong (a sketch of such a marker annotation follows below).
But:
It's a lot of work.
The benefit is very small. Why? Because what matters is not the coverage itself but the quality of the tests. You could even write a test generator that covers all or most of your code automatically, but it would have no value. A good test checks for specific behaviour, and only the devs know what behaviour is expected.
The quality of tests is usually covered by code reviews. On the pull request screen you can also display the stats of the pull request, i.e. code coverage etc., but I'm not sure automating it further is worth the effort.
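For illustration only, the marker itself could be as simple as a custom annotation; everything around it (the parser, the coverage comparison, the build failure) is the hard part described above, and this annotation is entirely hypothetical:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker a custom build-time check could scan for.
@Retention(RetentionPolicy.CLASS)   // visible to bytecode tools, not needed at runtime
@Target(ElementType.METHOD)
public @interface Tested {
    String by();   // name of the test method expected to cover this one
}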
I successfully use Cucumber to process my Java-based tests.
Sometimes these tests run into regression issues, and it takes time to fix the issues found (depending on issue priority, it can be weeks or even months). So I'm looking for a way to mark some Cucumber tests as known issues. I don't want these tests to fail the entire test suite; I just want to mark them, for example, as pending with a yellow color in the report instead.
I know that I can specify a @tag for failed tests and exclude them from the execution list, but that's not what I want, as I still need these tests to run continuously. Once the issue is fixed, the appropriate test should be green again without any additional tag manipulation.
Some other frameworks provide such functionality (run the test but ignore its result if it fails). Is it possible to do the same trick somehow using Cucumber?
The final solution I use now is to mark known issues with a specific tag, exclude these tests from the regular round, and run them separately. But I believe that's not the best solution.
Any ideas appreciated. Thanks in advance.
I would consider throwing a pending exception in the step that causes the known failure. This would allow the step to be executed and not be forgotten.
I would also consider rewriting the failing steps in such a way that when the failure occurs, it is caught and a pending exception is thrown instead of the actual failure. This would mean that when the issue is fixed and the reason for throwing the pending exception is gone, you have a passing suite.
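A hedged sketch of that rewrite, assuming the older cucumber-java bindings (package names differ across Cucumber versions, and the step and helper method are made up):

import cucumber.api.PendingException;   // io.cucumber.java.PendingException in newer versions
import cucumber.api.java.en.Then;
import static org.junit.Assert.assertEquals;

public class ReportSteps {

    @Then("^the report shows the correct total$")
    public void theReportShowsTheCorrectTotal() {
        try {
            assertEquals(100, computeTotal());   // the check that currently regresses
        } catch (AssertionError knownIssue) {
            // Known regression: report the step as pending (yellow) so the
            // suite stays green, while the step still runs on every build.
            // Once computeTotal() is fixed, this catch is never entered and
            // the step passes again without any tag manipulation.
            throw new PendingException("Known issue: " + knownIssue.getMessage());
        }
    }

    private int computeTotal() {
        return 99;   // placeholder for the real, currently broken logic
    }
}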
Another thing I would work hard on is not allowing a problem to get old. Problems are like kids: when they grow up, they get harder and harder to fix. Fixing a problem while it is young, perhaps a few minutes old, is usually easy. Fixing problems that are months old is harder.
You shouldn't.
My opinion is that if you have tests that fail, you should add a bug/task ticket for these scenarios and add them to a build status page with the related tag.
Another thing you could do is add the ticket number as a tag and remove it after the ticket is fixed.
If you have scenarios that fail due to a bug, then the report should show that; if a scenario is not fully implemented, then it is better not to run it at all.
One thing you could do is add a specific tag/name for those scenarios, then in a before-scenario method get the tags, check for that specific tag/name, and throw a pending exception, as sketched below.
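Something along these lines, assuming the cucumber-java hook API (the tag name is illustrative, and package names vary between Cucumber versions):

import cucumber.api.PendingException;
import cucumber.api.Scenario;
import cucumber.api.java.Before;

public class KnownIssueHooks {

    @Before
    public void markKnownIssuesPending(Scenario scenario) {
        // If the team tagged the scenario as a tracked known issue,
        // flag it as pending up front instead of letting it fail the run.
        if (scenario.getSourceTagNames().contains("@known_issue")) {
            throw new PendingException("Tracked known issue: " + scenario.getName());
        }
    }
}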
What I suggest is to keep those scenarios running if there is a bug, and to document that on the status page.
I think the client would understand better if those scenarios are red because they are failing, rather than some yellow color with a "grey" status of what is happening.
If you need the status of the run to trigger some CI job, then maybe it is better to change the condition there.
As I see it, the thing you need to think about is: what is the difference between yellow and red, pending and failed, for you or the client? You want to keep a clear difference and to track the real status.
You should address these issues in an email, discuss them with the project team and the QA team, and after a final decision is taken, also get feedback from the customer.
Java assert statements are selectively enabled/disabled based on config flags (e.g., -ea). I read online that with JUnit assertions (e.g., Assert.assertTrue), you don't need to worry about managing the assertion config yourself. What exactly does this mean? Either:
1) JUnit assertions (mixed together with production code) are never disabled. They behave just like Validate statements and always get run.
or
2) The JUnit test framework automatically enables all JUnit assertions when running JUnit tests. Whenever the application/code is run outside of JUnit, the JUnit assert methods do nothing at all.
or
3) JUnit assertions are always enabled/disabled by default, but there is a custom config/flag that can be used to disable/enable them
Which of the above is true?
--
Edit: Changed the wording a little bit to clear up any ambiguity.
This question seems to have opened up a can of worms regarding coding guidelines on when to use or not use asserts. I'll post here the answer to my immediate question about whether JUnit Assert statements get disabled. Further down, you will find a discussion of the philosophical aspects of using assertions.
Summary: JUnit assert statements are never disabled, even when running in production with the disable-assertions flag.
Experiment: I created a small main function which throws an error on the very first line. The error was implemented in several different ways. I then ran the application from the command line like so: mvn clean package; java $JAVA_OPTS -cp ....
Induced error by throwing an IllegalArgumentException. Application failed with an IllegalArgumentException.
Induced error by adding assert false and ran with assertions enabled (-ea). Application failed.
Induced error by adding assert false and ran with assertions disabled (-da). Application succeeded.
Induced error by adding Assert.assertTrue(false); and ran with assertions enabled (-ea). Application failed with AssertionError.
Induced error by adding Assert.assertTrue(false); and ran with assertions disabled (-da). Application failed with AssertionError.
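For reference, a minimal reconstruction of the kind of main used in the experiment (assuming JUnit 4's org.junit.Assert is on the runtime classpath; comment variants in or out to reproduce each row above):

import org.junit.Assert;

public class AssertExperiment {
    public static void main(String[] args) {
        // Variant A: the assert keyword -- honors -ea/-da, so this line
        // is a no-op when the JVM runs with assertions disabled.
        assert false : "plain assert fired";

        // Variant B: the JUnit assertion -- an ordinary method call, so it
        // throws an AssertionError regardless of the -ea/-da flags.
        Assert.assertTrue(false);

        System.out.println("reached only with -da and Variant B removed");
    }
}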
I did a quick Google search to see if there's any way to disable JUnit's Assert. I didn't find anything. If anyone knows of a way to do so, please let me know, because I believe it's an important distinction.
Conclusion: the assert keyword can be enabled/disabled using the -ea command-line flag. JUnit asserts cannot be enabled/disabled. If you want to add assertions to your production code, this becomes a very important distinction. If you want the ability to disable them, use the assert keyword. If you want something that you know will always be enabled, consider JUnit's Assert, or some other similar framework that throws AssertionErrors.
Regarding the philosophy behind assert usage:
It's good to add assertions of some form to your production code. See: https://docs.oracle.com/javase/8/docs/technotes/guides/language/assert.html#usage
Putting Assertions Into Your Code. There are many situations where it is good to use assertions, including:
Internal Invariants
Control-Flow Invariants
Preconditions, Postconditions, and Class Invariants
JUnit asserts cannot be treated as a universal replacement for the Java assert keyword. The latter gives you the power to disable the checks; the former doesn't. Hence, there is a design choice to be made in choosing which of the two to use.
Exceptions and Assertions serve very different purposes and should not be used interchangeably. The following discussion clarifies this further: When to use an assertion and when to use an exception
Related StackOverflow discussions:
assert vs. JUnit Assertions
I think you're confusing JUnit assertions with Java's built-in assertions.
JUnit assertions are methods that throw an AssertionError and should only be used in your test code. JUnit code should not be used in production, only when running your JUnit tests during/between builds. The methods are named like assertEquals(expected, actual), assertNull(Object), etc. You can't disable these assertions, since they are method calls. Besides, your test code would be pretty useless if you could disable all the JUnit assert method calls.
Java's built-in assertion is a boolean expression preceded by the assert keyword. If the expression evaluates to false, it too throws an AssertionError. This kind of assertion can be disabled in production to help speed up your code, or turned on to help track down bugs. You can read more about the built-in Java assert in the Java Programming Guide.
I am building an Android app. My client is asking me to include assertions in the Android code. I have googled around and found that CommonsWare says that assertions should be avoided in Android code. But I need a strong reason why using assertions is avoided.
Please let me know whether I should use assertions or not, and if so, what the rules or suggestions are for using assertions in Android.
But I need a strong reason why using assertions is avoided.
An assertion intentionally introduces an unhandled, uncatchable error (AssertionError, specifically).
Assertions do not work in Android by default. You have to specifically enable them. This means that whatever logic you are trying to validate via assertions will not be employed on production devices.
Hence, I agree with this assessment:
Step zero: Refactor comments into assertions
Step one: Refactor assertions out of the code into unit tests
Step two: Escalate the remaining assertions into program exceptions
So you are certainly welcome to validate inputs, confirm outputs, etc. Just do not use assert. Instead, handle the condition in some other fashion. If nothing else, throw a RuntimeException like an IllegalArgumentException, so that your top-level unhandled-exception logic can get control.
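For example, a minimal sketch of that guard style (the setter and field are made up):

public class Poller {
    private long refreshIntervalMs;

    public void setRefreshInterval(long intervalMs) {
        // Not `assert intervalMs > 0;` -- that check would silently vanish
        // on production devices, where assertions are disabled by default.
        if (intervalMs <= 0) {
            throw new IllegalArgumentException("intervalMs must be positive, got " + intervalMs);
        }
        this.refreshIntervalMs = intervalMs;
    }
}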
A trivial search of the Internet will turn up many articles regarding the choice of assertions versus exceptions, such as:
https://softwareengineering.stackexchange.com/questions/137158/is-it-better-to-use-assert-or-illegalargumentexception-for-required-method-param
https://stackoverflow.com/a/1276318/866172
What would be the point of such an assert?
In production code, asserts are not going to do much good: they will only tell you that what you considered an invariant does not hold as you expected, but they won't prevent your app from crashing.
Furthermore, the Assert class is part of junit.framework.Assert; it is there to be used in a unit test.
So it is a much better practice to establish unit tests that exercise these invariants; that way you can react when a future development breaks them.
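A sketch of such a test with JUnit 4 (the class under test is hypothetical):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShoppingCartInvariantTest {

    @Test
    public void totalIsAlwaysTheSumOfLineItems() {
        ShoppingCart cart = new ShoppingCart();   // hypothetical class under test
        cart.add("book", 2, 1000);                // quantity 2, price in cents
        cart.add("pen", 1, 150);
        // The invariant we might otherwise have asserted inline in production:
        assertEquals(2150, cart.totalCents());
    }
}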
My line coverage for unit tests, as measured by Cobertura, is suffering because I have assert statements which are not covered by tests. Should I be testing assertions, and is there any way to get Cobertura to ignore them so they do not affect my test coverage?
The line coverage of your Java assert statements should simply be covered by running your test suite with assertions enabled, i.e., passing -ea as an argument to the JVM.
If you do this, you'll see that Cobertura easily reports 100% line coverage, provided the rest of your lines are covered as well.
Nevertheless, the assert lines will still be colored red, suggesting insufficient coverage.
This is because your assertions are usually always true, so you never hit the false branch.
Thus, branch coverage in Cobertura is messed up by using assert, since assert lines will have 50% branch coverage, rendering the overall branch coverage percentage hard to interpret (or useless).
Clover has a nice feature to ignore asserts when computing coverage. I haven't seen this feature in any open-source Java coverage tool.
If you use assertions in a design-by-contract style, there is no need to add tests that make your Java assert statements fail. As a matter of fact,
for many assertions (e.g., invariants, post-conditions) you cannot even create objects that would make them fail, so it is impossible to write such tests.
What you can do, though, is use the invariants/postconditions to derive test cases exercising their boundaries -- see Robert Binder's invariant boundaries pattern. But this won't make your assertions fail.
Only if you have a very tricky precondition for a given method might you want to consider writing a test aimed at making the precondition fail. But then rethinking your precondition may be a better idea.
There's a feature request (1959691) already logged for Cobertura to add the ability to ignore assert lines; feel free to watch it in case they implement it, and add a comment to the page if you think it's a good idea. It's a shame none of the open-source coverage tools support this yet, as I definitely think assert statements are a good idea, but testing them usually isn't practical/possible.
Code coverage is a tool which allows you to improve your testing; it is not some kind of proof of the validity of your tests. You get value from it by aiming for 100% code coverage, looking at what is not covered, and reflecting on how to improve your tests.
Finding that there are uncovered assert statements is just an example of a case where you should say: "OK, no need to cover this." A similar example is tracing macros and debug code, which add hidden "if statements" that are never covered.
I guess what you don't like about this answer is that you want a minimal code-coverage requirement for your code; I faced the same issue with some of my projects. If one must not lose the coverage data, one solution is a "coverage build" where the problematic statements (assert, trace, etc.) are replaced with empty ones (say, through macro or linker magic). But I believe it is usually a better trade-off just to accept the imperfect coverage.
Good luck.
Presumably you're using assertions to verify pre-conditions or some other set of conditions. Consider whether your situation is similar to Argument Exceptions should be Unit Tested.
In any case, I expect you're ultimately trying to determine if you should test the negative-path branches of these statements.
Yes, by all means, test those. Your assertions are themselves logic about your assumptions, and it's a Good Thing to verify that your guard statements prevent/protect against the scenarios you think they do, as sketched below.
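With JUnit 4, the negative path can be pinned down directly; for assert-based guards, the test JVM has to run with -ea so the false branch actually executes (the method under test is made up):

import org.junit.Test;

public class PreconditionTest {

    // Covers the 'false' branch of the guard. Requires the test JVM to
    // run with -ea, otherwise the assert never fires.
    @Test(expected = AssertionError.class)
    public void assertionRejectsNullSlot() {
        new Schedule().book(null);   // hypothetical method containing `assert slot != null;`
    }
}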