One of the most useful features of Java 8 is the new default methods on interfaces. There are essentially two reasons (there may be others) why they have been introduced:
Providing actual default implementations. Example: Iterator.remove()
Allowing for JDK API evolution. Example: Iterable.forEach()
From an API designer's perspective, I would have liked to be able to use other modifiers on interface methods, e.g. final. This would be useful when adding convenience methods, preventing "accidental" overrides in implementing classes:
interface Sender {

    // Convenience method to send an empty message
    default final void send() {
        send(null);
    }

    // Implementations should only implement this method
    void send(String message);
}
The above is already common practice if Sender were a class:
abstract class Sender {

    // Convenience method to send an empty message
    final void send() {
        send(null);
    }

    // Implementations should only implement this method
    abstract void send(String message);
}
Now, default and final are obviously contradictory keywords, but the default keyword itself would not have been strictly required, so I'm assuming that this contradiction is deliberate, to reflect the subtle differences between "class methods with body" (just methods) and "interface methods with body" (default methods), i.e. differences which I have not yet understood.
At some point, support for modifiers like static and final on interface methods had not yet been fully explored. Citing Brian Goetz:
The other part is how far we're going to go to support class-building
tools in interfaces, such as final methods, private methods, protected
methods, static methods, etc. The answer is: we don't know yet
Since that time in late 2011, obviously, support for static methods in interfaces was added. Clearly, this added a lot of value to the JDK libraries themselves, such as with Comparator.comparing().
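For illustration, a quick sketch of such a static interface method in use (the class name and sample values here are made up):

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class StaticInterfaceMethodDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Charlie", "Al", "Bob");
        // Comparator.comparing() is a static method declared on the
        // Comparator interface itself -- possible since Java 8.
        names.sort(Comparator.comparing(String::length));
        System.out.println(names); // [Al, Bob, Charlie]
    }
}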
Question:
What is the reason final (and also static final) never made it to Java 8 interfaces?
This question is, to some degree, related to What is the reason why “synchronized” is not allowed in Java 8 interface methods?
The key thing to understand about default methods is that the primary design goal is interface evolution, not "turn interfaces into (mediocre) traits". While there's some overlap between the two, and we tried to be accommodating to the latter where it didn't get in the way of the former, these questions are best understood when viewed in this light. (Note too that class methods are going to be different from interface methods, no matter what the intent, by virtue of the fact that interface methods can be multiply inherited.)
The basic idea of a default method is: it is an interface method with a default implementation, and a derived class can provide a more specific implementation. And because the design center was interface evolution, it was a critical design goal that default methods be able to be added to interfaces after the fact in a source-compatible and binary-compatible manner.
The too-simple answer to "why not final default methods" is that then the body would not simply be the default implementation, it would be the only implementation. While that's a little too simple an answer, it gives us a clue that the question is already heading in a questionable direction.
Another reason why final interface methods are questionable is that they create impossible problems for implementors. For example, suppose you have:
interface A {
    default void foo() { ... }
}

interface B {
}

class C implements A, B {
}
Here, everything is good; C inherits foo() from A. Now supposing B is changed to have a foo method, with a default:
interface B {
    default void foo() { ... }
}
Now, when we go to recompile C, the compiler will tell us that it doesn't know what behavior to inherit for foo(), so C has to override it (and could choose to delegate to A.super.foo() if it wanted to retain the same behavior.) But what if B had made its default final, and A is not under the control of the author of C? Now C is irretrievably broken; it can't compile without overriding foo(), but it can't override foo() if it was final in B.
This is just one example, but the point is that finality for methods is really a tool that makes more sense in the world of single-inheritance classes (which generally couple state to behavior) than in interfaces, which merely contribute behavior and can be multiply inherited. It's too hard to reason about "what other interfaces might be mixed into the eventual implementor", and allowing an interface method to be final would likely cause these problems (and they would blow up not on the person who wrote the interface, but on the poor user who tries to implement it.)
Another reason to disallow them is that they wouldn't mean what you think they mean. A default implementation is only considered if the class (or its superclasses) don't provide a declaration (concrete or abstract) of the method. If a default method were final, but a superclass already implemented the method, the default would be ignored, which is probably not what the default author was expecting when declaring it final. (This inheritance behavior is a reflection of the design center for default methods -- interface evolution. It should be possible to add a default method (or a default implementation to an existing interface method) to existing interfaces that already have implementations, without changing the behavior of existing classes that implement the interface, guaranteeing that classes that already worked before default methods were added will work the same way in the presence of default methods.)
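To illustrate that last point with a minimal sketch (all names here are hypothetical): a concrete method inherited from a superclass always takes precedence over an interface default, so adding the default does not change the existing class's behavior:

interface Greeter {
    // A default added to an existing interface after the fact.
    default String greet() { return "hello from the interface"; }
}

class Base {
    public String greet() { return "hello from the class"; }
}

// Derived inherits greet() from Base; the interface default is never
// consulted, because the class hierarchy already declares the method.
class Derived extends Base implements Greeter { }

public class ClassWinsDemo {
    public static void main(String[] args) {
        System.out.println(new Derived().greet()); // hello from the class
    }
}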
In the lambda mailing list there are plenty of discussions about it. One of those that seems to contain a lot of discussion about all that stuff is the following: On Varied interface method visibility (was Final defenders).
In this discussion, Talden, the author of the original question, asks something very similar to your question:
The decision to make all interface members public was indeed an
unfortunate decision. That any use of interface in internal design
exposes implementation private details is a big one.
It's a tough one to fix without adding some obscure or compatibility
breaking nuances to the language. A compatibility break of that
magnitude and potential subtlety would seem unconscionable so a
solution has to exist that doesn't break existing code.
Could reintroducing the 'package' keyword as an access-specifier be
viable? Its absence in an interface would imply
public-access and the absence of a specifier in a class implies
package-access. Which specifiers make sense in an interface is unclear
- especially if, to minimise the knowledge burden on developers, we have to ensure that access-specifiers mean the same thing in both
class and interface if they're present.
In the absence of default methods I'd have speculated that the
specifier of a member in an interface has to be at least as visible as
the interface itself (so the interface can actually be implemented in
all visible contexts) - with default methods that's not so certain.
Has there been any clear communication as to whether this is even a
possible in-scope discussion? If not, should it be held elsewhere?
Eventually Brian Goetz's answer was:
Yes, this is already being explored.
However, let me set some realistic expectations -- language / VM
features have a long lead time, even trivial-seeming ones like this.
The time for proposing new language feature ideas for Java SE 8 has
pretty much passed.
So, most likely it was never implemented because it was never part of the scope. It was never proposed in time to be considered.
In another heated discussion about final defender methods on the subject, Brian said again:
And you have gotten exactly what you wished for. That's exactly what
this feature adds -- multiple inheritance of behavior. Of course we
understand that people will use them as traits. And we've worked hard
to ensure that the model of inheritance they offer is simple and
clean enough that people can get good results doing so in a broad
variety of situations. We have, at the same time, chosen not to push
them beyond the boundary of what works simply and cleanly, and that
leads to "aw, you didn't go far enough" reactions in some cases. But
really, most of this thread seems to be grumbling that the glass is
merely 98% full. I'll take that 98% and get on with it!
So this reinforces my theory that it simply was not part of the scope or part of their design. What they did was to provide enough functionality to deal with the issues of API evolution.
It will be hard to find and identify "THE" answer, for the reasons mentioned in the comments from @EJP: there are roughly 2 (+/- 2) people in the world who can give the definite answer at all. And in doubt, the answer might just be something like "Supporting final default methods did not seem to be worth the effort of restructuring the internal call resolution mechanisms". This is speculation, of course, but it is at least backed by subtle evidence, like this statement (by one of the two persons) on the OpenJDK mailing list:
"I suppose if "final default" methods were allowed, they might need rewriting from internal invokespecial to user-visible invokeinterface."
and trivial facts, like a method simply not being considered a (really) final method when it is a default method, as currently implemented in the Method::is_final_method method in the OpenJDK.
Further, really "authoritative" information is indeed hard to find, even with extensive web searches and by reading commit logs. I thought that it might be related to potential ambiguities during the resolution of interface method calls with the invokeinterface instruction versus class method calls, corresponding to the invokevirtual instruction: for the invokevirtual instruction, there may be a simple vtable lookup, because the method must either be inherited from a superclass or implemented by the class directly. In contrast to that, an invokeinterface call must examine the respective call site to find out which interface this call actually refers to (this is explained in more detail on the InterfaceCalls page of the HotSpot Wiki). However, final methods either do not get inserted into the vtable at all, or replace existing entries in the vtable (see klassVtable.cpp, line 333), and similarly, default methods replace existing entries in the vtable (see klassVtable.cpp, line 202). So the actual reason (and thus, the answer) must be hidden deeper inside the (rather complex) method call resolution mechanisms, but maybe these references will nevertheless be considered helpful, be it only for others who manage to derive the actual answer from them.
I wouldn't think it is necessary to specify final on a convenience interface method; I can agree, though, that it may be helpful, but seemingly the costs have outweighed the benefits.
What you are supposed to do, either way, is to write proper Javadoc for the default method, stating exactly what the method is and is not allowed to do. That way the classes implementing the interface "are not allowed" to change the implementation, though there are no guarantees.
Anyone could write a Collection that adheres to the interface and then does things in the methods that are absolutely counterintuitive; there is no way to shield yourself from that, other than writing extensive unit tests.
We add the default keyword to a method inside an interface when we know that the class implementing the interface may or may not override our implementation. But what if we want to add a method that we don't want any implementing class to override? Well, two options were available to us:
Add a default final method.
Add a static method.
Now, Java says that if we have a class implementing two or more interfaces that have a default method with exactly the same name and signature, i.e. they are duplicates, then we need to provide an implementation of that method in our class. Now, in the case of default final methods, we couldn't provide an implementation and we would be stuck. And that's why the final keyword isn't used in interfaces. A sketch of the conflict is shown below.
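A minimal sketch of that diamond conflict (the interface and class names are hypothetical):

interface A {
    default void foo() { System.out.println("A.foo"); }
}

interface B {
    default void foo() { System.out.println("B.foo"); }
}

class C implements A, B {
    // The compiler forces C to resolve the ambiguity with an override;
    // had foo() been allowed to be final in A or B, no legal override
    // would exist and C could not compile at all.
    @Override
    public void foo() {
        A.super.foo(); // explicitly delegate to one of the defaults
    }
}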
What are the best practices for using Java's @Override annotation and why?
It seems like it would be overkill to mark every single overridden method with the @Override annotation. Are there certain programming situations that call for using @Override and others that should never use it?
Use it every time you override a method, for two benefits. First, it lets you take advantage of the compiler checking to make sure you actually are overriding a method when you think you are. This way, if you make the common mistake of misspelling a method name or not correctly matching the parameters, you will be warned that your method does not actually override as you think it does. Secondly, it makes your code easier to understand because it is more obvious when methods are overridden.
Additionally, in Java 1.6 you can use it to mark when a method implements an interface, for the same benefits. I think it would be better to have a separate annotation (a hypothetical @Implements, say), but it's better than nothing.
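As a small illustration of the first benefit (Widget and FancyWidget are hypothetical names):

class Widget {
    void render() { }
}

class FancyWidget extends Widget {
    @Override
    void render() { } // genuinely overrides Widget.render()

    // Had this been misspelled as "void rendr()" with @Override attached,
    // the compiler would have rejected it: "method does not override or
    // implement a method from a supertype".
}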
I think it is most useful as a compile-time reminder that the intention of the method is to override a parent method. As an example:
protected boolean displaySensitiveInformation() {
    return false;
}
You will often see something like the above method that overrides a method in the base class. This is an important implementation detail of this class -- we don't want sensitive information to be displayed.
Suppose this method is changed in the parent class to
protected boolean displaySensitiveInformation(Context context) {
    return true;
}
This change will not cause any compile time errors or warnings - but it completely changes the intended behavior of the subclass.
To answer your question: you should use the @Override annotation if the lack of a method with the same signature in a superclass is indicative of a bug.
There are many good answers here, so let me offer another way to look at it...
There is no overkill when you are coding. It doesn't cost you anything to type @Override, but the savings can be immense if you misspelled a method name or got the signature slightly wrong.
Think about it this way: in the time you navigated here and typed this post, you pretty much used more time than you will spend typing @Override for the rest of your life; but one error it prevents can save you hours.
Java does all it can to make sure you didn't make any mistakes at edit/compile time, and this is a virtually free way to solve an entire class of mistakes that aren't preventable in any other way outside of comprehensive testing.
Could you come up with a better mechanism in Java to ensure that when the user intended to override a method, he actually did?
Another neat effect is that if you don't provide the annotation, the IDE or compiler can warn you at compile time that you accidentally overrode a parent method -- something that could be significant if you didn't intend to do it.
I always use the tag. It is a simple compile-time flag to catch little mistakes that I might make.
It will catch things like tostring() instead of toString().
The little things help in large projects.
Using the @Override annotation acts as a compile-time safeguard against a common programming mistake. It will produce a compilation error if you have the annotation on a method that does not actually override a superclass method.
The most common case where this is useful is when you are changing a method in the base class to have a different parameter list. A method in a subclass that used to override the superclass method will no longer do so, due to the changed method signature. This can sometimes cause strange and unexpected behavior, especially when dealing with complex inheritance structures. The @Override annotation safeguards against this.
To take advantage of compiler checking you should always use the @Override annotation. But don't forget that the Java 1.5 compiler will not allow this annotation when overriding interface methods; you can only use it when overriding class methods (abstract or not).
Some IDEs, such as Eclipse, even when configured with a Java 1.6 runtime or higher, maintain compliance with Java 1.5 and don't allow the use of @Override as described above. To avoid that behaviour, go to: Project Properties -> Java Compiler -> check "Enable Project Specific Settings" -> choose "Compiler Compliance Level" = 6.0 or higher.
I like to use this annotation every time I am overriding a method, regardless of whether the base is an interface or a class.
This helps you avoid some typical errors, as when you think you are overriding an event handler and then see nothing happening. Imagine you want to add an event listener to some UI component:
someUIComponent.addMouseListener(new MouseAdapter(){
    public void mouseEntered() {
        // ...do something...
    }
});
The above code compiles and runs, but if you move the mouse inside someUIComponent, the "do something" code will not run, because you are not actually overriding the base method mouseEntered(MouseEvent ev); you are just creating a new parameterless method mouseEntered(). If you had used the @Override annotation, you would have seen a compile error and would not have wasted time wondering why your event handler was not running.
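For comparison, a corrected sketch (someUIComponent is assumed from the surrounding context; MouseAdapter and MouseEvent come from java.awt.event): with the right signature, and the annotation to enforce it, the handler actually overrides MouseAdapter's method:

someUIComponent.addMouseListener(new MouseAdapter() {
    @Override
    public void mouseEntered(MouseEvent ev) { // correct signature
        // ...do something...
    }
});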
@Override on interface implementation is inconsistent, since there is no such thing as "overriding an interface" in Java.
@Override on interface implementation is useless, since in practice it catches no bugs that the compilation wouldn't catch anyway.
There is only one, far-fetched scenario where override on implementers actually does something: if you implement an interface, and the interface REMOVES methods, you will be notified at compile time that you should remove the unused implementations. Notice that if the new version of the interface has NEW or CHANGED methods you'll obviously get a compile error anyway, as you're not implementing the new stuff.
@Override on interface implementers should never have been permitted in 1.6, and with Eclipse sadly choosing to auto-insert the annotations as default behavior, we get a lot of cluttered source files. When reading 1.6 code, you cannot see from the @Override annotation whether a method actually overrides a method in the superclass or just implements an interface.
Using @Override when actually overriding a method in a superclass is fine.
It's best to use it for every method intended as an override and, in Java 6+, for every method intended as an implementation of an interface.
First, it catches misspellings like "hashcode()" instead of "hashCode()" at compile time. It can be baffling to debug why the result of your method doesn't seem to match your code when the real cause is that your code is never invoked.
Also, if a superclass changes a method signature, overrides of the older signature can be "orphaned", left behind as confusing dead code. The @Override annotation will help you identify these orphans so that they can be modified to match the new signature.
If you find yourself overriding (non-abstract) methods very often, you probably want to take a look at your design. The annotation is very useful when the compiler would not otherwise catch the error -- for instance when trying to override initValue() in ThreadLocal, which I have done.
Using @Override when implementing interface methods (a 1.6+ feature) seems a bit overkill to me. If you have loads of methods, some of which override and some of which don't, that's probably bad design again (and your editor will probably show which is which if you don't know).
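Sketching out the ThreadLocal slip mentioned above (the class and constant names are made up): the real hook is initialValue(), and the annotation guarantees you actually hit it:

public class Counters {
    // ThreadLocal's hook method is initialValue(); @Override guarantees
    // we actually override it. A misspelled "initValue()" would still
    // compile without the annotation, but would never be invoked.
    static final ThreadLocal<Integer> COUNTER = new ThreadLocal<Integer>() {
        @Override
        protected Integer initialValue() {
            return 0;
        }
    };
}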
@Override on interface implementations is actually helpful, because you will get warnings if the interface changes.
Another thing it does is make it more obvious, when reading the code, that a method is changing the behavior of the parent class. That can help in debugging.
Also, in Joshua Bloch's book Effective Java (2nd edition), Item 36 gives more details on the benefits of the annotation.
It makes absolutely no sense to use @Override when implementing an interface method. There's no advantage to using it in that case -- the compiler will already catch your mistake, so it's just unnecessary clutter.
Whenever a method overrides another method, or a method implements a signature in an interface.
The @Override annotation assures you that you did in fact override something. Without the annotation, you risk a misspelling or a difference in parameter types and number.
I use it every time. It's more information that I can use to quickly figure out what is going on when I revisit the code in a year and I've forgotten what I was thinking the first time.
The best practice is to always use it (or have the IDE fill them in for you).
@Override's usefulness is to detect changes in parent classes which have not been propagated down the hierarchy.
Without it, you can change a method signature and forget to alter its overrides; with @Override, the compiler will catch it for you.
That kind of safety net is always good to have.
I use it everywhere.
On the topic of the effort for marking methods, I let Eclipse do it for me so, it's no additional effort.
I'm religious about continuous refactoring.... so, I'll use every little thing to make it go more smoothly.
Used only on method declarations.
Indicates that the annotated method
declaration overrides a declaration
in a supertype.
If used consistently, it protects you from a large class of nefarious bugs.
Use the @Override annotation to avoid these bugs:
(Spot the bug in the following code:)
public class Bigram {
    private final char first;
    private final char second;

    public Bigram(char first, char second) {
        this.first = first;
        this.second = second;
    }

    public boolean equals(Bigram b) {
        return b.first == first && b.second == second;
    }

    public int hashCode() {
        return 31 * first + second;
    }

    public static void main(String[] args) {
        Set<Bigram> s = new HashSet<Bigram>();
        for (int i = 0; i < 10; i++)
            for (char ch = 'a'; ch <= 'z'; ch++)
                s.add(new Bigram(ch, ch));
        System.out.println(s.size());
    }
}
source: Effective Java
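The bug, for reference: equals(Bigram) overloads rather than overrides equals(Object), so HashSet falls back to Object's identity-based equals and the program prints 260 instead of the expected 26. As Effective Java shows, annotating the method forces the correct signature:

@Override
public boolean equals(Object o) {
    if (!(o instanceof Bigram)) {
        return false;
    }
    Bigram b = (Bigram) o;
    return b.first == first && b.second == second;
}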
Be careful when you use @Override, because afterwards you can't reverse engineer the code in StarUML; make the UML first.
It seems that the wisdom here is changing. Today I installed IntelliJ IDEA 9 and noticed that its "missing @Override inspection" now catches not just implemented abstract methods, but implemented interface methods as well. In my employer's code base and in my own projects, I've long had the habit of only using @Override for the former -- implemented abstract methods. However, rethinking the habit, the merit of using the annotation in both cases becomes clear. Despite being more verbose, it does protect against the fragile base class problem (not as grave as C++-related examples) where the interface method name changes, orphaning the would-be implementing method in a derived class.
Of course, this scenario is mostly hyperbole; the derived class would no longer compile, now lacking an implementation of the renamed interface method, and today one would likely use a Rename Method refactoring operation to address the entire code base en masse.
Given that IDEA's inspection is not configurable to ignore implemented interface methods, today I'll change both my habit and my team's code review criteria.
The @Override annotation is used to help check whether the developer is overriding the correct method in the parent class or interface. When the name of the super's method changes, the compiler can notify you of that, which keeps the subclass consistent with the super.
By the way, if we don't declare the annotation @Override in the subclass but do override some method of the super, the method still functions as if it had @Override. But the compiler cannot notify the developer when the super's method is changed, because it does not know the developer's purpose: override the super's method, or define a new method?
So when we want to override a method to make use of polymorphism, we had better add @Override above the method.
I use it as much as I can to identify when a method is being overridden. If you look at the Scala programming language, they also have an override keyword. I find it useful.
It does allow you (well, the compiler) to catch when you've used the wrong spelling on a method name you are overriding.
The Override annotation is used to take advantage of the compiler's checking of whether you actually are overriding a method from the parent class. It notifies you if you make a mistake like misspelling the method name or not correctly matching the parameters.
I think it's best to code @Override whenever allowed; it helps with coding. However, note that in Eclipse Helios, with either SDK 5 or 6, the @Override annotation for implemented interface methods is allowed, whereas in Galileo, with either 5 or 6, it is not allowed.
Annotations provide metadata about the code to the compiler, and the @Override annotation is used in the case of inheritance when we are overriding a method of the base class. It just tells the compiler that you are overriding a method. It can avoid some common kinds of mistakes, like not following the proper signature of the method or misspelling its name. So it's a good practice to use the @Override annotation.
For me, @Override ensures I have the signature of the method correct. If I put in the annotation and the method is not correctly spelled, the compiler complains, letting me know something is wrong.
Simple: when you want to override a method present in your superclass, use the @Override annotation to make a correct override. The compiler will warn you if you don't override it correctly.
As the official statement about virtual extension methods (default methods) states:
The purpose of virtual extension methods (default methods) is to
enable interfaces to be evolved in a compatible manner after their
initial publication.
Can I see some good examples in the JDK for this in action?
So far I've found it in the java.util.Collection interface and several others, but it would be nice to have a more thorough list.
Edit: I was not looking for whether using default methods is safe or not (since I've already read the question which is linked). I was looking for practical examples in the JDK.
There are several places in the standard classes, beyond the collections, that I know of:
The Java SQL API has interface enhancements. ResultSet.updateObject, PreparedStatement.setObject, PreparedStatement.executeLargeUpdate (which differs from executeUpdate by returning long instead of int) and so on were added. By default they throw UnsupportedOperationException or some other exception. This is "interface evolution": a way to enhance an older interface with new features.
Functions in java.util.function have default methods which allow you to combine them (a short sketch of such combinators follows this list). These methods are not expected to be redefined, as functional interfaces are mostly intended to work with lambdas; the methods are just for convenience. This is not "interface evolution", as these interfaces first appeared in Java 8. However, had these methods not been default, you would not be able to express these interfaces with lambdas, as the single-abstract-method rule would be violated.
Several default methods were added to java.util.Comparator: reversed(), thenComparing() and so on. Again, the default implementations are fine and usually not intended to be replaced. This is similar to the functions, but the Comparator interface existed before Java 8, so it's also a kind of interface evolution.
The java.util.Spliterator interface has default methods. This is actually a good example, as default methods of different purposes are shown here. The getComparator() method throws by default, but it must be overridden if the spliterator uses the SORTED characteristic. The hasCharacteristics() method is just a convenient way to use the non-default characteristics() method and probably should never be redefined. The forEachRemaining() default implementation will always work correctly, but overriding it may produce a much more efficient result for some spliterators. The getExactSizeIfKnown() default implementation is also always correct, and there is usually no need to redefine it; however, if your spliterator is never SIZED, you can optimize it a little by just returning -1.
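To make the combinator style above concrete, a small sketch (the sample values are made up for illustration):

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

public class DefaultCombinatorsDemo {
    public static void main(String[] args) {
        // java.util.function: andThen() and and() are default methods,
        // so the interfaces keep a single abstract method for lambdas.
        Function<Integer, Integer> plusOne = x -> x + 1;
        Function<Integer, Integer> doubled = x -> x * 2;
        System.out.println(plusOne.andThen(doubled).apply(3)); // (3+1)*2 = 8

        Predicate<String> nonNull = s -> s != null;
        Predicate<String> nonEmpty = s -> !s.isEmpty();
        System.out.println(nonNull.and(nonEmpty).test("hi")); // true

        // java.util.Comparator: thenComparing() is a default method
        // retrofitted onto a pre-Java-8 interface.
        List<String> words = Arrays.asList("pear", "fig", "apple");
        words.sort(Comparator.comparingInt(String::length)
                             .thenComparing(Comparator.<String>naturalOrder()));
        System.out.println(words); // [fig, pear, apple]
    }
}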
Is it a safe practice to use default methods as a poor man's version of traits in Java 8?
Some claim it may make pandas sad if you use them just for the sake of it, because it's cool, but that's not my intention. It is also often pointed out that default methods were introduced to support API evolution and backward compatibility, which is true, but this does not make it wrong or twisted to use them as traits per se.
I have the following practical use case in mind:
import org.slf4j.Logger;        // assuming SLF4J
import org.slf4j.LoggerFactory;

public interface Loggable {
    default Logger logger() {
        return LoggerFactory.getLogger(this.getClass());
    }
}
Or perhaps, define a PeriodTrait:

import java.util.Date;

public interface PeriodTrait {
    Date getStartDate();
    Date getEndDate();

    // Note: the original snippet omitted the return type; boolean is implied.
    default boolean isValid(Date atDate) {
        ...
    }
}
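For what it's worth, a hypothetical usage sketch of the Loggable example above (OrderService is made up; SLF4J is assumed on the classpath):

class OrderService implements Loggable {
    void process() {
        // logger() is inherited from the trait-like interface.
        logger().info("processing an order");
    }
}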
Admittedly, composition could be used (or even helper classes), but it seems more verbose and cluttered, and does not allow one to benefit from polymorphism.
So, is it ok/safe to use default methods as basic traits, or should I be worried about unforeseen side effects?
Several questions on SO are related to Java vs Scala traits; that's not the point here. I'm not asking merely for opinions either. Instead, I'm looking for an authoritative answer or at least field insight: if you've used default methods as traits on your corporate project, did it turn out to be a timebomb?
The short answer is: it's safe if you use them safely :)
The snarky answer: tell me what you mean by traits, and maybe I'll give you a better answer :)
In all seriousness, the term "trait" is not well-defined. Many Java developers are most familiar with traits as they are expressed in Scala, but Scala is far from the first language to have traits, either in name or in effect.
For example, in Scala, traits are stateful (can have var variables); in Fortress they are pure behavior. Java's interfaces with default methods are stateless; does this mean they are not traits? (Hint: that was a trick question.)
Again, in Scala, traits are composed through linearization; if class A extends traits X and Y, then the order in which X and Y are mixed in determines how conflicts between X and Y are resolved. In Java, this linearization mechanism is not present (it was rejected, in part, because it was too "un-Java-like".)
The proximate reason for adding default methods to interfaces was to support interface evolution, but we were well aware that we were going beyond that. Whether you consider that to be "interface evolution++" or "traits--" is a matter of personal interpretation. So, to answer your question about safety ... so long as you stick to what the mechanism actually supports, rather than trying to wishfully stretch it to something it does not support, you should be fine.
A key design goal was that, from the perspective of the client of an interface, default methods should be indistinguishable from "regular" interface methods. The default-ness of a method, therefore, is only interesting to the designer and implementor of the interface.
Here are some use cases that are well within the design goals:
Interface evolution. Here, we are adding a new method to an existing interface, which has a sensible default implementation in terms of existing methods on that interface. An example would be adding the forEach method to Collection, where the default implementation is written in terms of the iterator() method.
"Optional" methods. Here, the designer of an interface is saying "Implementors need not implement this method if they are willing to live with the limitations in functionality that entails". For example, Iterator.remove was given a default which throws UnsupportedOperationException; since the vast majority of implementations of Iterator have this behavior anyway, the default makes this method essentially optional. (If the behavior from AbstractCollection were expressed as defaults on Collection, we might do the same for the mutative methods.)
Convenience methods. These are methods that are strictly for convenience, again generally implemented in terms of non-default methods on the class. The logger() method in your first example is a reasonable illustration of this.
Combinators. These are compositional methods that instantiate new instances of the interface based on the current instance. For example, the methods Predicate.and() or Comparator.thenComparing() are examples of combinators.
If you provide a default implementation, you should also provide some specification for the default (in the JDK, we use the @implSpec javadoc tag for this) to aid implementors in understanding whether they want to override the method or not. Some defaults, like convenience methods and combinators, are almost never overridden; others, like optional methods, are often overridden. You need to provide enough specification (not just documentation) about what the default promises to do, so the implementor can make a sensible decision about whether they need to override it.
Why
1) can you test postconditions with assertions in both public and non-public methods,
but you are advised to
2) not use assertions to check the parameters of a public method (preconditions)?
I understand that 2) is caused by:
- convention: the method guarantees that it will always enforce the argument checks (e.g. with checked exceptions), so it must check its arguments whether or not assertions are enabled.
- the assert construct does not throw an exception of a specified type; it can throw only an AssertionError, which is not quite user friendly.
But I don't understand why 1) is allowed for public methods as well.
Thanks.
The above question rephrased :)
A contract is made of two parts :
- requirements upon the caller made by the class
- promises made by the class to the caller
Why Sun advises you not to use assertions as preconditions for public methods (because assertions could be disabled, leaving the requirements imposed on the caller unchecked) but allows you to use assertions to test postconditions of public methods (testing the return value to see that you are returning a correct result) is still an enigma to me.
Put another way, you must be very careful when enforcing requirements, but you can close an eye when verifying the promises.
Is there a technical reason for this metaphorical "lack of ethics" that you demonstrate when you force your client to respect the requirements, but are not so harsh with yourself in respecting the promises? :)
Point 2) states you shouldn't use assertions because they might not be turned on. Instead, it appears to recommend using checks which are performed every time and throw a checked exception.
If you want to throw different exceptions when assertions are enabled, you can do:

boolean assertions = false;
// The assignment only executes when assertions are enabled (-ea),
// so 'assertions' ends up recording whether they are on.
assert assertions = true;
if (assertions && myCheckHere) throw new MyExceptionOrError();
Technically, a method has parameters, e.g. String name, and a method is invoked passing arguments, e.g. "Hello". So it's the arguments you want to check; the compiler will check the parameters.
You use the "assert" keyword to assert your assumptions, and in the case of public methods the preconditions should always hold (so that the method has correct data to perform its task), whether running in production or not.
Since assertions will be disabled in production, you can't use the assert keyword to enforce the preconditions of a public method, so you should throw an unchecked exception instead. As for the postconditions, there is no such contract for the method (say f1) performing the task: the output of this f1() method might be an input to another method (say f2), in which case the preconditions in f2() might enforce the assumptions.
On a side note, I suggest looking at the Google Guava library's Preconditions class for checking preconditions and throwing unchecked exceptions, both through a simple and intuitive API; a small sketch follows.
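A small sketch contrasting the two (the class, method, and messages are hypothetical; Guava is assumed on the classpath): the precondition is checked unconditionally, while the postcondition check may legitimately disappear when assertions are off:

import com.google.common.base.Preconditions;

public class Account {
    /** Returns the new balance after a deposit. */
    public static long deposit(long balance, long amount) {
        // Precondition: always enforced, even with assertions disabled,
        // because callers must be rejected in production too.
        Preconditions.checkArgument(amount > 0, "amount must be positive");

        long result = balance + amount;

        // Postcondition: an internal sanity check on our own promise;
        // it is acceptable for this to vanish when -ea is off.
        assert result > balance : "balance did not increase";
        return result;
    }
}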
Not sure where you got this information, but assertions are excellent for checking the parameters of public methods. They enforce the very contract that the method claims to live up to.
It's not a technical reason but a reflection of the fact that asserts aren't a design-by-contract implementation, which is something they say in the guide.
On balance, we came to the conclusion that a simple boolean assertion
facility was a fairly straight-forward solution and far less risky.
Not everyone sees things the way that Meyer does, and Java has language goals which conflict with Eiffel's in some places. In Program Development in Java, Liskov stresses defensive programming; that is, if something can go wrong, contract-related or not, you check for it as you go. Contracts are "asserted" in comments that go at the top of the method body. Now, that book predates the addition of boolean asserts to Java, and who knows what they would have said if it had been written a couple of years later. My point is that:
more than one treatment of contracts exists
asserts are primitive and probably best used where they're practical, instead of trying to do real DbC with them
As you noted, the designers suggest that you could do some DbC with postconditions, nonpublic preconditions, occasional invariant checks, but they don't sound all that serious about it. Is it unethical? I understand the objection, but it's not a given that everyone will want to program with contracts in the first place.
Java assertions are worthless, in my opinion.
If you read Meyer's "Object-Oriented Software Construction", he makes the case that assertions should always be enabled, especially for production code. So, if they were always enabled by default, no matter what, it would be much better. Maybe I'm niggling about that a bit too much, but it's definitely a detriment; a static method is "safer" since you know it will always execute, and you can still control that with a system property, so the default is always 'safe mode'.
DbC seems like it would be hard to fit into the Java language: we can't put preconditions/postconditions on an interface and have them automatically work on any implementation. There are some frameworks that attempt to provide this behavior, but I haven't managed to use them in any of my projects.
Instead, I write my own "Contract" class full of static methods and use a lot of assertions throughout the production code. It has methods called 'precondition' and 'postcondition', but really all you care about is assert(). There is no mechanism in Java's design whereby you can implement 'pre' and 'post' hooks on an inheritor's code; that's why some of the frameworks out there use a precompiler or AOP.