In JavaFX, we see the following statement in the initialize method of each generated controller class, for each control with an fx:id.
assert type != null : "fx:id=\"type\" was not injected: check your FXML file 'FinancialReport.fxml'.";
I understand that the statement is there to ensure that, at the time the FXML is loaded, a control with this fx:id is present in the FXML layout file; if the control is not present, it throws an exception and quits the FXML loading process.
But then, referring to this, I learned that assertions are not recommended for use in production code. Studying this tutorial, on the other hand, it seems that assertions are useful, especially when debugging (though they should not be used to validate the arguments of public methods).
I need more knowledge of the following:
Is it fine to use assertions for input validation and similar purposes in production code?
Can we do something other than the usual behavior when the Boolean expression turns out to be false, like calling some alternative method (an example would be nice)?
Is it fine to use assertions for input validation and similar purposes in production code?
No. Refer to the first link you posted (which actually says that assertions should never be triggered in production code, not that they should never be used): assertions are not even switched on by default when running the JVM, so your validation would simply be skipped most of the time. Assertions are specifically there as a debugging tool, to check that your code is correct. The assertions added to the generated controller code are good examples: they check that the @FXML-annotated fields in the controller have elements in the FXML file with matching fx:id attributes. If such a check fails, it's a programming error, not a data-validation error.
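To illustrate the distinction, here is a minimal sketch (the class, methods, and checks are made up purely for illustration) of where each kind of check belongs:

public class AccountService {

    // Input validation: always enforced, whether or not assertions are enabled.
    public void withdraw(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive: " + amount);
        }
        // ...
    }

    // Internal sanity check: documents an invariant of your own code. It only
    // runs when the JVM is started with -ea (assertions enabled), so it must
    // never be the thing standing between bad input and your data.
    private void applyInterest(double rate) {
        assert rate >= 0 : "negative rate should have been rejected earlier";
        // ...
    }
}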
Can we do something other than the usual behavior when the Boolean expression turns out to be false, like calling some alternative method (an example would be nice)?
Just use an if (...) { ... } else { ... } construct?
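For example (a hypothetical sketch; the controller name, control type, and fallback behaviour are assumptions, only the fx:id and file name come from your question), the controller could react to a missing control instead of asserting:

import javafx.fxml.FXML;
import javafx.scene.control.Label;

public class FinancialReportController {

    @FXML
    private Label type;   // injected from FinancialReport.fxml

    @FXML
    private void initialize() {
        if (type != null) {
            type.setText("Ready");   // normal path: the control was injected
        } else {
            // alternative path: log and fall back instead of failing outright
            System.err.println("fx:id=\"type\" was not injected; falling back to defaults");
        }
    }
}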
Related
Is there a chance to automate the search for non-localized text on startup in Thymeleaf Templates and log occurrences?
My Infrastructure: Ant, Spring, Thymeleaf.
Unfortunately there is no clean, documented way to do it (that I know of).
Having said that, I have done something similar where I wanted to check the template for something and log an occurrence; however, the implementation is ugly.
I have to warn you that this is beyond horrible and, because it is not standard, is likely to break in future releases, so I would use it sparingly and definitely not in any production code.
This requires your template resolver's cacheable flag to be true - org.thymeleaf.templateresolver.TemplateResolver#setCacheable(true) - which is the default.
I was able to do it by extending org.thymeleaf.cache.StandardCacheManager (you need to set the cache manager on the org.thymeleaf.TemplateEngine) and overriding initializeTemplateCache() to return a custom version of org.thymeleaf.cache.StandardCache. My implementation of the cache overrides the put(..) method, which is passed an org.thymeleaf.Template as the value.
The Template then has an org.thymeleaf.dom.Document accessible via getDocument(), and from there you can recursively iterate through the children (some of which will be org.thymeleaf.dom.AbstractTextNode). In your case you may also want to iterate through all the attributes on the element nodes.
You will then have to write some logic to determine whether the text is not going to be localised - working out whether a #{} expression is not being used, or whether the expression is not inside a th:...="#{}" attribute or inlined as [[#{}]].
Ugly, I know, but it works for me. If anyone has a cleaner solution, I'm all ears.
Is there any benefit to using checkState over assert? I remember reading somewhere that I should prefer checkState, but I can't remember why.
checkState and assert have totally different purposes.
checkState is a precondition check that throws an IllegalStateException if a caller called your method when the program is in a state in which that method may not be called. (Meaning that they are using your code incorrectly; they should have been able to avoid calling that method at the wrong time by using it correctly.)
assert is generally at most a sanity check of something that you know must be true at that point in the program (kind of a compiled comment). Additionally, assert may be enabled or disabled depending on a flag when starting the JVM. It's typical to have it disabled in production. So it's not something you can rely on to break the flow of your method even if you do somehow get in a state that you're asserting is not possible.
Guava's new (as of 17.0) Verify class is something like an assert that is always enabled, but not exactly; it's for cases that should not occur, but could possibly if some outside service (i.e. one that your code is calling, not the code that's calling you) behaves in a way that it claims it shouldn't. See its Javadoc for more on the differences between Preconditions, assert and Verify.
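To make the contrast concrete, here is a hypothetical sketch (the Connection class and its limits are invented for illustration) showing where each kind of check would sit:

import static com.google.common.base.Preconditions.checkState;
import static com.google.common.base.Verify.verify;

public class Connection {

    private boolean open;

    public void open() {
        open = true;
    }

    public void send(String message) {
        // Precondition: the caller misused the API if the connection is not open.
        // Always enforced; throws IllegalStateException.
        checkState(open, "send() called before open()");

        // Sanity check of something we believe must already hold at this point.
        // Only runs with 'java -ea'; silently skipped otherwise.
        assert message != null : "message was validated upstream";

        // Verify (Guava 17+): always enforced; for things that should be impossible
        // unless an outside collaborator misbehaves. Throws VerifyException.
        verify(message.length() <= 1024, "frame too large: %s chars", message.length());
    }
}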
All JUnit assert methods have an optional first parameter: the message printed when the assertion fails.
How do I ensure the parameter is always passed, and developers in my project never lazily skip describing what the assertion is doing?
Is there an inspection tool that can check for that?
Is there anything I can do programmatically?
My project is maven-friendly.
As code inspection seems to be your aim, I would recommend a tool called PMD. If there is not already a rule for this, I would think it is fairly trivial to create one. Furthermore, this will help you detect other code smells your developers may be creating.
Here is a link:
http://pmd.sourceforge.net/
In our team, we do code reviews of pull requests. If asserts are not following the standard, you could flag it in the review. Only pull requests that have sufficient approvals are allowed to be merged.
That said, I would probably not enforce this rule on my team. Instead, I'd tell them to write shorter tests where the method name clearly states the intent, like:
@Test(expected = IllegalArgumentException.class)
public void shouldThrowExceptionWhenInputIsNegative() {}

@Test
public void shouldFilterOutNulls() {}

@Test
public void shouldCreateAdditionalRecordWhenBankBalanceIsOver10000() {}
etc...
In Java, I will occasionally throw an AssertionError directly, to assert that a particular line will not be reached. An example of this would be to assert that the default case in a switch statement cannot be reached (see this JavaSpecialists page for an example).
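For reference, the pattern looks something like this (a sketch in the spirit of the linked example; the class name and returned values are made up):

import java.util.Random;

public class UnreachableDefault {

    private static final Random RANDOM = new Random();

    public static String pick() {
        int n = RANDOM.nextInt(3);   // always 0, 1 or 2
        switch (n) {
            case 0: return "rock";
            case 1: return "paper";
            case 2: return "scissors";
            default:
                // Every legal value is handled above, so reaching this line means an
                // invariant has been violated. Unlike 'assert', this cannot be switched
                // off, and execution stops instead of continuing in a corrupt state.
                throw new AssertionError("Unreachable: nextInt(3) returned " + n);
        }
    }
}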
I would like to use a similar mechanism in .Net. Is there an equivalent exception that I could use? Or is there another method that could be used with the same effect?
Edit - To clarify, I'm looking for a mechanism to flag failures at runtime, in released code, to indicate that there has been a (possibly catastrophic) failure of some invariant in the code. The linked example generates a random integer between 0 and 2 (inclusive) and asserts that the generated number is always 0, 1 or 2. If this assertion doesn't hold, it would be better to stop execution completely rather than continue with some unknown corrupt state of the system.
I'd normally throw InvalidOperationException or ArgumentOutOfRangeException depending on where the value came from.
Alternatively, there's Debug.Assert (which will only fail when you've got the DEBUG preprocessor symbol defined) or in .NET 4.0 you could use Contract.Fail, Contract.Assert or Contract.Assume depending on the situation. Explicitly throwing an exception has the benefit that the compiler knows that the next statement is unreachable though.
I'm not a big fan of Debug.Assert - it's usually inappropriate for a release (as it throws up an assertion box rather than just failing) and by default it won't be triggered in release anyway. I prefer exceptions which are always thrown, as they prevent your code from carrying on regardless after the opportunity to detect that "stuff is wrong".
Code Contracts changes the game somewhat, as there are all kinds of options for what gets preserved at execution time, and the static checker can help to prove that you won't get into that state. You still need to choose the execution time policy though...
You can use the Trace.Assert method, which will work on release builds (if you have the TRACE compilation symbol defined, which it is by default in Visual Studio projects). You can also customize the way your application reacts to assertion failures by way of a TraceListener. The default is (unsurprisingly) the DefaultTraceListener, which will show the assertion in a dialog box if the application is running in interactive mode. If you want to throw an exception instead, you can create your own TraceListener and throw it from the Fail method. You can then remove the DefaultTraceListener and add your own, either programmatically or in the configuration file.
This looks like a lot of trouble, and is only justifiable if you want to dynamically change the way your application handles assertions by way of the trace listeners. For violations that you always want to fail, create your own AssertionException class and throw it right away.
For .NET 4.0, I'd definitely look at the Contract.Assert method. But this method is only compiled when the symbols DEBUG or CONTRACTS_FULL are defined. DEBUG won't work on release builds, and CONTRACTS_FULL will also turn on all other contract checking, some of which you might not want present in release builds.
For writing unit tests, I know it's very popular to write test methods that look like
public void Can_User_Authenticate_With_Bad_Password()
{
...
}
While this makes it easy to see what the test is testing for, I think it looks ugly and doesn't display well in auto-generated documentation (like Sandcastle or Javadoc).
I'm interested to see what people think about a naming schema where the name is the method being tested, followed by an underscore, "Test", and the test number, and where the XML doc comments (.NET) or Javadoc comments describe what is being tested.
/// <summary>
/// Tests for user authentication with a bad password.
/// </summary>
public void AuthenticateUser_Test1()
{
...
}
By doing this I can easily group my tests together by the methods they are testing, I can see how many tests I have for a given method, and I still have a full description of what is being tested.
We have some regression tests that run against a data source (an XML file), and these files may be updated by someone without access to the source code (a QA monkey), who needs to be able to read what is being tested and where in order to update the data sources.
I prefer the "long names" version - although only to describe what happens. If the test needs a description of why it happens, I'll put that in a comment (with a bug number if appropriate).
With the long name, it's much clearer what's gone wrong when you get a mail (or whatever) telling you which tests have failed.
I would write it in terms of what it should do though:
LogInSucceedsWithValidCredentials
LogInFailsWithIncorrectPassword
LogInFailsForUnknownUser
I don't buy the argument that it looks bad in autogenerated documentation - why are you running JavaDoc over the tests in the first place? I can't say I've ever done that, or wanted generated documentation. Given that test methods typically have no parameters and don't return anything, if the method name can describe them reasonably that's all the information you need. The test runner should be capable of listing the tests it runs, or the IDE can show you what's available. I find that more convenient than navigating via HTML - the browser doesn't have a "Find Type" which lets me type just the first letters of each word of the name, for example...
Does the documentation show up in your test runner? If not that's a good reason for using long, descriptive names instead.
Personally I prefer long names and rarely see the need to add comments to tests.
I've done my dissertation on a related topic, so here are my two cents: any time you rely on documentation to convey something that is not in your method signature, you are taking the huge risk that nobody will read the documentation.
When developers are looking for something specific (e.g., scanning a long list of methods in a class to see if what they're looking for is already there), most of them are not going to bother to read the documentation. They want to deal with one type of information that they can easily see and compare (e.g., names), rather than have to start redirecting to other materials (e.g., hover long enough to see the JavaDocs).
I would strongly recommend conveying everything relevant in your signature.
Personally I prefer using the long method names. Note that you can also include the name of the method under test, as in:
Can_AuthenticateUser_With_Bad_Password()
I suggest smaller, more focussed (test) classes.
Why would you want to javadoc tests?
What about changing
Can_User_Authenticate_With_Bad_Password
to
AuthenticateDenyTest
AuthenticateAcceptTest
and naming the suite something like User?
As a group, how do we feel about a hybrid naming schema like this:
/// <summary>
/// Tests for user authentication with a bad password.
/// </summary>
public void AuthenticateUser_Test1_With_Bad_Password()
{
...
}
and we get the best of both.