Consider the two enums below: which is better? Both can be used in exactly the same way, but what are their advantages over each other?
1. Overriding abstract method:
public enum Direction {
    UP {
        @Override
        public Direction getOpposite() {
            return DOWN;
        }
        @Override
        public Direction getRotateClockwise() {
            return RIGHT;
        }
        @Override
        public Direction getRotateAnticlockwise() {
            return LEFT;
        }
    },
    /* DOWN, LEFT and RIGHT skipped */
    ;

    public abstract Direction getOpposite();
    public abstract Direction getRotateClockwise();
    public abstract Direction getRotateAnticlockwise();
}
2. Using a single method:
public enum Orientation {
    UP, DOWN, LEFT, RIGHT;

    public Orientation getOpposite() {
        switch (this) {
            case UP:
                return DOWN;
            case DOWN:
                return UP;
            case LEFT:
                return RIGHT;
            case RIGHT:
                return LEFT;
            default:
                return null;
        }
    }

    /* getRotateClockwise and getRotateAnticlockwise skipped */
}
Edit: I really hope to see some well-reasoned/elaborated answers, with evidence/sources for particular claims. Most existing answers regarding performance aren't really convincing due to the lack of proof.
You can suggest alternatives, but it has to be clear how they are better than the ones stated and/or how the stated ones are worse, and provide evidence where needed.
Forget about performance in this comparison; it would take a truly massive enum for there to be a meaningful performance difference between the two methodologies.
Let's focus instead on maintainability. Suppose you finish coding your Direction enum and eventually move on to a more prestigious project. Meanwhile, another developer is given ownership of your old code including Direction - let's call him Jimmy.
At some point, requirements dictate that Jimmy add two new directions: FORWARD and BACKWARD. Jimmy is tired and overworked and does not bother to fully research how this would affect existing functionality - he just does it. Let's see what happens now:
1. Overriding abstract method:
Jimmy immediately gets a compiler error (actually he probably would've spotted the method overrides right below the enum constant declarations). In any case, the problem is spotted and fixed at compile time.
2. Using a single method:
Jimmy doesn't get a compiler error, or even an incomplete switch warning from his IDE, since your switch already has a default case. Later, at runtime, a certain piece of code calls FORWARD.getOpposite(), which returns null. This causes unexpected behavior and at best quickly causes a NullPointerException to be thrown.
Let's back up and pretend you added some future-proofing instead:
default:
throw new UnsupportedOperationException("Unexpected Direction!");
Even then the problem wouldn't be discovered until runtime. Hopefully the project is properly tested!
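Incidentally, a middle ground between the two snippets above is to drop the default case entirely, so that an IDE which checks for incomplete enum switches (most do) can flag the missing constants, and to satisfy the compiler's definite-return rule with a throw after the switch. A sketch:

public Direction getOpposite() {
    switch (this) {
        case UP:    return DOWN;
        case DOWN:  return UP;
        case LEFT:  return RIGHT;
        case RIGHT: return LEFT;
    }
    // Reached only if a constant is missing above; javac cannot prove the switch is exhaustive.
    throw new UnsupportedOperationException("Unhandled direction: " + this);
}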
Now, your Direction example is pretty simple, so this scenario might seem exaggerated. In practice, though, enums can grow into a maintenance problem as easily as other classes. In a larger, older code base with multiple developers, resilience to refactoring is a legitimate concern. Many people talk about optimizing code but forget that dev time needs to be optimized too - and that includes coding to prevent mistakes.
Edit: A note under JLS Example §8.9.2-4 seems to agree:
Constant-specific class bodies attach behaviors to the constants. [This] pattern is much safer than using a switch statement in the base type... as the pattern precludes the possibility of forgetting to add a behavior for a new constant (since the enum declaration would cause a compile-time error).
I actually do something different. Your solutions have drawbacks: abstract overridden methods introduce quite a lot of overhead, and switch statements are pretty hard to maintain.
I suggest the following pattern (applied to your problem):
public enum Direction {
    UP, RIGHT, DOWN, LEFT;

    static {
        Direction.UP.setValues(DOWN, RIGHT, LEFT);
        Direction.RIGHT.setValues(LEFT, DOWN, UP);
        Direction.DOWN.setValues(UP, LEFT, RIGHT);
        Direction.LEFT.setValues(RIGHT, UP, DOWN);
    }

    private void setValues(Direction opposite, Direction clockwise, Direction anticlockwise) {
        this.opposite = opposite;
        this.clockwise = clockwise;
        this.anticlockwise = anticlockwise;
    }

    Direction opposite;
    Direction clockwise;
    Direction anticlockwise;

    public final Direction getOpposite() { return opposite; }
    public final Direction getRotateClockwise() { return clockwise; }
    public final Direction getRotateAnticlockwise() { return anticlockwise; }
}
With such a design you:
never forget to set a direction when a constructor enforces it (in this case, since the values are set in a static block, you actually could)
have little method call overhead, because the methods are final, not virtual
get clean and short code
can, however, forget to set one direction's values (see the sketch below)
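One way to soften that last drawback would be to let the static block verify itself, so that a skipped constant fails as soon as the class is loaded instead of on some later lookup. A sketch, assuming the fields and setValues method shown above:

static {
    Direction.UP.setValues(DOWN, RIGHT, LEFT);
    Direction.RIGHT.setValues(LEFT, DOWN, UP);
    Direction.DOWN.setValues(UP, LEFT, RIGHT);
    Direction.LEFT.setValues(RIGHT, UP, DOWN);

    // Fail fast at class-initialization time if any constant was forgotten.
    for (Direction d : values()) {
        if (d.opposite == null || d.clockwise == null || d.anticlockwise == null) {
            throw new IllegalStateException("setValues(..) not called for " + d);
        }
    }
}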
The first variant is faster and probably more maintainable, because all the properties of a direction are described where the direction itself is defined. Nevertheless, putting non-trivial logic into enums looks odd to me.
The second variant will probably be a little bit faster: with more than two concrete constants, the polymorphism of the first form forces a full virtual call through the overridden method, versus a direct call plus an index lookup for the latter.
The first form is the object-oriented approach.
The second form is a pattern-matching approach.
As such the first form, being object-oriented, makes it easy to add new enums, but hard to add new operations. The second form does the opposite.
Most experienced programmers I know would recommend using pattern-matching over object-orientation. As enums are closed, adding new enums is not an option; therefore, I would definitely go with the latter approach myself.
The enum values can be considered as independent classes. So, following object-oriented concepts, each enum value should define its own behaviour. I would therefore recommend the first approach.
You could also simply implement it once like this (you need to keep the enum constants in the appropriate order):
public enum Orientation {
    UP, RIGHT, DOWN, LEFT; // Order is important: must be clockwise

    public Orientation getOpposite() {
        int position = ordinal() + 2;
        return values()[position % 4];
    }

    public Orientation getRotateClockwise() {
        int position = ordinal() + 1;
        return values()[position % 4];
    }

    public Orientation getRotateAnticlockwise() {
        int position = ordinal() + 3; // Not -1, to avoid a negative position
        return values()[position % 4];
    }
}
The first version is probably much faster. The Java JIT compiler can apply aggressive optimizations to it because enums are final (so all methods in them are final, too). The code:
Orientation o = Orientation.UP.getOpposite();
should actually become (at runtime):
Orientation o = Orientation.DOWN;
i.e. the compiler can remove the overhead for the method call.
From a design perspective, it's the proper way to do these things with OO: move knowledge close to the object that needs it. So UP should know about its opposite, not some code elsewhere.
The advantage of the second method is that it's more readable since all related things are grouped better (i.e. all the code related to "opposite" is in one place instead of a bit here and a bit there).
EDIT: My first argument depends on how smart the JIT compiler is. My solution for the problem would look like this:
public enum Orientation {
    UP, DOWN, LEFT, RIGHT;

    private static Orientation[] opposites = {
        DOWN, UP, RIGHT, LEFT
    };

    public Orientation getOpposite() {
        return opposites[ordinal()];
    }
}
This code is compact and fast, no matter what the JIT can or could do. It clearly communicates intent and, given the rules of ordinals, it will always work.
I would also suggest adding a test which makes sure that when calling getOpposite() for each value of the enum, you always get a different result and none of the results is null. That way, you can be sure that you covered every case.
The only problem left is when you change the order of values. To prevent problems in this case, assign each value an index and use that to look up values in an array or even in Orientation.values().
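A minimal sketch of such a test (JUnit 4 assumed); the round-trip assertion also catches many accidental reorderings of the constants:

import static org.junit.Assert.*;

import org.junit.Test;

public class OrientationTest {

    @Test
    public void everyValueHasAConsistentOpposite() {
        for (Orientation o : Orientation.values()) {
            Orientation opposite = o.getOpposite();
            assertNotNull("opposite of " + o, opposite);
            assertNotSame("opposite of " + o + " must differ from itself", o, opposite);
            assertSame("opposite of the opposite must be " + o, o, opposite.getOpposite());
        }
    }
}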
Here is another way to do it:
public enum Orientation {
    UP(1), DOWN(0), LEFT(3), RIGHT(2);

    private int opposite;

    private Orientation(int opposite) {
        this.opposite = opposite;
    }

    public Orientation getOpposite() {
        return values()[opposite];
    }
}
I don't like this approach, though.
It's too hard to read (you have to count the index of each value in your head) and too easy to get wrong. It would need a unit test per value in the enum and per method that you can call (so 4*3 = 12 in your case).
Answer: It Depends
IF your method definitions are simple:
This is the case with your very simple example methods, which just hard-code an enum output for each enum input
implement definitions specific to an enumeration value right next to that enumeration value
implement definitions common to all enumeration values at the bottom of the class in the "common area"; if the same method signature is to be available for all enum values but none/part of logic is common, use abstract method definitions in the common area
i.e. Option 1
Why?
readability, consistency, maintainability: the code directly related to a definition is right next to the definition
compile-time checking if abstract methods declared in common area, but not specified in enum value area
Note that the North/South/East/West example could be considered to represent a very simple state (of current direction), and the methods opposite/rotateClockwise/rotateAnticlockwise could be considered to represent user commands to change state. Which raises the question: what do you do for a real-life, typically complex state machine?
IF your method definitions are complex:
State machines are often complex, relying on the current (enumerated value) state, command input, timers, and a fairly large number of rules and business exceptions to determine the new (enumerated value) state. At other, rarer times, methods may even determine the enumerated value output via calculations (e.g. scientific/engineering/insurance rating categorisation). Or they could use data structures such as a map, or a complex data structure suited to an algorithm. When the logic is complex, extra care is required, and the balance between "common" logic and "enum value-specific" logic changes.
avoid putting excessive code volume, complexity, and repeated 'cut & paste' sections right next to the enum value
try to refactor as much logic as possible into the common area - possibly putting 100% of the logic there, but if that is not possible, employing the Gang of Four "Template Method" pattern to maximise the amount of common logic while still flexibly allowing a small amount of specific logic against each enum value (see the sketch after this list)
i.e. As much as possible of Option 1, with a little of Option 2 allowed
Why?
readability, consistency, maintainability: avoids code bloat, duplication, poor textual formatting with masses of code interspersed amongst enum values, allows the full set of enum values to be quickly seen and understood
compile-time checking if using Template Method pattern and abstract methods declared in common area, but not specified in enum value area
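Applied to an enum, the Template Method shape might look roughly like this (ReportType, Data and the method names are made up purely for illustration):

public enum ReportType {

    SUMMARY {
        @Override
        String body(Data data) { return data.headline(); }
    },
    DETAILED {
        @Override
        String body(Data data) { return data.fullText(); }
    };

    // Template method in the "common area": the shared framing is written once;
    // each constant overrides only the small varying step below.
    public final String render(Data data) {
        return "== " + name() + " ==\n" + body(data) + "\n-- end --";
    }

    // Constant-specific hook; forgetting to implement it for a new constant
    // is a compile-time error.
    abstract String body(Data data);

    // Minimal data holder, just for this illustration.
    public interface Data {
        String headline();
        String fullText();
    }
}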
Note: you could put ALL logic into a separate helper class, but I personally don't see any advantages to this (not performance/maintainability/readability). It breaks encapsulation a little and once you have all the logic in one place, what difference does it make to add a simple enum definition back to the top of the class? Splitting code across multiple classes is a different matter and is to be encouraged where appropriate.
Related
When I see code from others, I mainly see two styles of structuring methods.
One looks like this, having many nested ifs:
void doSomething(Thing thing) {
    if (thing.hasOwner()) {
        Entity owner = thing.getOwner();
        if (owner instanceof Human) {
            Human humanOwner = (Human) owner;
            if (humanOwner.getAge() > 20) {
                //...
            }
        }
    }
}
And the other style looks like this:
void doSomething(Thing thing) {
    if (!thing.hasOwner()) {
        return;
    }
    Entity owner = thing.getOwner();
    if (!(owner instanceof Human)) {
        return;
    }
    Human humanOwner = (Human) owner;
    if (humanOwner.getAge() <= 20) {
        return;
    }
    //...
}
My question is: are there names for these two code styles? And if so, what are they called?
The early-returns in the second example are known as guard clauses.
Prior to the actual thing the method is going to do, some preconditions are checked, and if they fail, the method immediately returns. It is a kind of fail-fast mechanism.
There's a lot of debate around those return statements. Some think that it's bad to have multiple return statements within a method. Others think that it avoids wrapping your code in a bunch of if statements, like in the first example.
My own humble opinion is in line with this post: minimize the number of returns, but use them if they enhance readability.
Related:
Should a function have only one return statement?
Better Java syntax: return early or late?
Guard clauses may be all you need
I don't know if there is a recognized name for the two styles, but in structured programming terms, they can be described as "single exit" versus "multiple exit" control structures. (This also includes continue and break statements in loop constructs.)
The classical structured programming paradigm advocated single exit over multiple exit, but most programmers these days are happy with either style, depending on the context. Even classically, relaxation of the "single exit" rule was acceptable when the resulting code was more readable.
(One needs to remember that structured programming was viewed as the antidote to "spaghetti" programming, particularly in assembly language, where the sole control constructs were conditional and non-conditional branches.)
I would say it's about readability. The second style, which I prefer, gives you the opportunity to, for example, send messages to the user/program for any check that should stop the program.
One could call it "multiple returns" and "single return". But I wouldn't call it a style, you may want to use both approaches, depending on readability in any particular case.
Single return is considered a better practice in general, since it allows you to write more readable code with the least surprise for the reader. In a complex method, it may be quite complicated to understand at which point the program will exit for any particular arguments, and what side effects may occur.
But if in any particular case you feel multiple returns improve readability of your code, there's nothing wrong with using them.
I'm working with an external library that decided to handle collections on its own. Not working with it, or updating it, is outside my control. To work with elements of this third-party "collection", it only returns iterators.
A question came up during a code review about having multiple returns in the code to gain performance. We all agree (within the team) the code is more readable with a single return, but some are worried about optimizations.
I'm aware premature optimization is bad. That is a topic for another day.
I believe the JIT compiler can handle this and skip the unneeded iterations, but could not find any info to back this up. Is JIT capable of such a thing?
A code sample of the issue at hand:
public boolean contains(MyThings things, String valueToFind) {
    Iterator<Thing> thingIterator = things.iterator();
    boolean valueFound = false;
    while (thingIterator.hasNext()) {
        Thing thing = thingIterator.next();
        if (valueToFind.equals(thing.getValue())) {
            valueFound = true;
        }
    }
    return valueFound;
}
VS
public boolean contains(MyThings things, String valueToFind) {
    Iterator<Thing> thingIterator = things.iterator();
    while (thingIterator.hasNext()) {
        Thing thing = thingIterator.next();
        if (valueToFind.equals(thing.getValue())) {
            return true;
        }
    }
    return false;
}
We all agree the code is more readable with a single return.
Not really. This is just old school structured programming when functions were typically not kept small and the paradigms of keeping values immutable weren't popular yet.
Although subject to debate, there is nothing wrong with having very small methods (a handful of lines of code), which return at different points. For example, in recursive methods, you typically have at least one base case which returns immediately, and another one which returns the value returned by the recursive call.
Often you will find that creating an extra result variable just to hold the return value, and then making sure no other part of the function overwrites it when you already know you could simply return, just creates noise which makes the code less readable, not more. The reader has to deal with the cognitive overhead of checking that the result is not modified further down, and during debugging this increases the pain even more.
I don't think your example is premature optimisation. It is a logical and critical part of your search algorithm. That is why you can break out of loops, or, in your case, just return the value. I don't think the JIT could easily realise that it should break out of the loop: it doesn't know whether you want to change the variable back to false if you find something else in the collection (I doubt it is smart enough to prove that valueFound never changes back to false).
In my opinion, your second example is not only more readable (the valueFound variable is just extra noise) but also faster, because it just returns when it does its job. The first example would be as fast if you put a break after setting valueFound = true. If you don't do this, and you have a million items to check, and the item you need is the first, you will be comparing all the others just for nothing.
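For reference, that single-return-plus-break variant of the first example would look something like this (a sketch of the loop body only, reusing the iterator from the snippets above):

// First example, modified so the scan stops as soon as the value is found.
boolean valueFound = false;
while (thingIterator.hasNext()) {
    Thing thing = thingIterator.next();
    if (valueToFind.equals(thing.getValue())) {
        valueFound = true;
        break; // no need to look at the remaining elements
    }
}
return valueFound;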
The Java compiler cannot do an optimization like that, because doing so in the general case would change the logic of the program.
Specifically, adding an early return would change the number of invocations of thingIterator.hasNext(), because your first code block continues iterating the collection to the end.
Java could potentially replace a break with an early return, but that would hardly have any effect on the timing of the program.
I'm building a scripting language in Java for a game, and I'm currently working on the parser. The language is to be utilized by players/modders/myself to create custom spells and effects. However, I'm having difficulty imagining how to smoothly implement static typing in the current system (a painful necessity driven by performance needs). I don't care so much if compilation is fast, but actual execution needs to be as fast as I can get it (within reason, at least. I'm hoping to get this done pretty soon.)
So the parser has next() and peek() methods to iterate through the stream of tokens. It's currently built of a hierarchy of methods that call each other in a fashion that preserves precedence (the "bottom-most" method returning a constant, variable, etc). Each method returns an IResolve that has a generic type <T> it "resolves" to. For example, here's a method that handles "or" expressions, with "and" binding more tightly:
protected final IResolve checkGrammar_Or() throws ParseException
{
    IResolve left = checkGrammar_And();
    if (left == null)
        return null;
    if (peek().type != TokenType.IDENTIFIER || !"or".equals((String) peek().value))
        return left;
    next();
    IResolve right = checkGrammar_Or();
    if (right == null)
        throwExpressionException();
    return new BinaryOperation(left, right, new LogicOr());
}
The problem is when I need to implement a function that depends on the type. As you probably noticed, the generic type isn't being specified by the parser, and is part of the design problem. In this function, I was hoping to do something like the following (though this wouldn't work due to generic types' erasure...)
protected final IResolve checkGrammar_Comparison() throws ParseException
{
    IResolve left = checkGrammer_Term();
    if (left == null)
        return null;
    IBinaryOperationType op;
    switch (peek().type)
    {
        default:
            return left;
        case LOGIC_LT:
            // This ain't gonna work because of erasure
            if (left instanceof IResolve<Double>)
                op = new LogicLessThanDouble();
            break;
        // And the same for these
        case LOGIC_LT_OR_EQUAL:
        case LOGIC_GT:
        case LOGIC_GT_OR_EQUAL:
    }
    next();
    IResolve right = checkGrammar_Comparison();
    if (right == null)
        throwExpressionException();
    return new BinaryOperation(left, right, op);
}
The problem spot, where I'm wishing I could make the connection, is in the switch statement. I'm already certain I'll need to make IResolve non-generic and give it a "getType()" method that returns an int or something, especially if I want to support user-defined classes in the future.
The question is:
What's the best way to achieve static typing given my current structure and the desire for mixed inheritance (user-defined classes and interfaces, like Java and C#)? If there is no good way, how can I alter or even rebuild my structure to achieve it?
Note: I don't claim to have any idea what I've gotten myself into, constructive criticism is more than welcome. If I need to clarify anything, let me know!
Another note: I know you're thinking "Why static typing?", and normally I'd agree with you-- however, the game world is composed of voxels (it's a Minecraft mod to be precise) and working with them needs to be fast. Imagine a script that's an O(n^2) algorithm iterating over 100 blocks twenty times a second, for 30+ players on a cheap server that's already barely squeaking by... or a single, massive explosion affecting thousands of blocks, inevitably causing a horrendous lag spike. Hence, backend type checking or any form of duck-typing ain't gonna cut it (though I'm desperately aching for it atm.) The low-level benefits are a necessity in this particular case, painful though it is.
You can get the best of both worlds by adding a method Class<T> getType() to IResolve; its implementers should simply return the appropriate Class object. (If the implementers themselves are generic, you need to get a reference to that object in the constructor or something.)
You can then do left.getType().equals(Double.class), etc.
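A rough sketch of that shape (the resolve() method and the literal class are assumptions about the surrounding code; only getType() is the point here):

// Hypothetical shape of IResolve with the suggested type token added.
public interface IResolve<T> {
    T resolve();
    Class<T> getType();
}

// A literal node knows its class statically and can just return it.
final class DoubleLiteral implements IResolve<Double> {
    private final double value;

    DoubleLiteral(double value) { this.value = value; }

    @Override public Double resolve() { return value; }
    @Override public Class<Double> getType() { return Double.class; }
}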
This is entirely separate from the question of whether you should build your own parser with static typing, which is very much worth asking.
The solution I'm going with, as some have suggested in the comments, was to separate parsing and typing into separate phases, along with using an enum to represent type as I originally felt I should.
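A sketch of what that could look like (illustrative only, not the actual code):

// Illustrative type tag used by a separate typing pass.
public enum ValueType {
    NUMBER, BOOLEAN, STRING, OBJECT
}

// IResolve made non-generic: the parser builds the tree first, and the
// typing phase asks each node for its ValueType afterwards.
public interface IResolve {
    Object resolve();
    ValueType getType();
}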
While I appreciate Taymon's answer, I can't use it if I hope to support user defined classes in the future.
If someone has a better solution, I'd be more than happy to accept it!
Sometimes I extract boolean checks into local variables to achieve better readability.
What do you think?
Any disadvantages?
Does the compiler inline it or something if the variable isn't used anywhere else? I also thought about reducing the scope with an additional block "{}".
if (person.getAge() > MINIMUM_AGE && person.getTall() > MAXIMUM_SIZE && person.getWeight() < MAXIMUM_WEIGHT) {
    // do something
}

final boolean isOldEnough = person.getAge() > MINIMUM_AGE;
final boolean isTallEnough = person.getTall() > MAXIMUM_SIZE;
final boolean isNotToHeavy = person.getWeight() < MAXIMUM_WEIGHT;

if (isOldEnough && isTallEnough && isNotToHeavy) {
    // do something
}
I do this all the time. The code is much more readable that way. The only reason for not doing this is that it inhibits the runtime from doing short-circuit optimisation, although a smart VM might figure that out.
The real risk in this approach is that it loses responsiveness to changing values.
Yes, people's age, weight, and height don't change very often relative to the runtime of most programs, but they do change, and if, for example, the age changes while the object from which your snippet reads is still alive, your final isOldEnough could now hold a wrong answer.
And yet I don't believe putting isEligible into Person is appropriate either, since the knowledge of what constitutes eligibility seems to be of a larger scope. One must ask: eligible for what?
All in all, in a code review, I'd probably recommend that you add methods in Person instead.
boolean isOldEnough(int minimumAge) { return this.getAge() > minimumAge; }
And so on.
Your two blocks of code are inequivalent.
There are many cases that could be used to show this, but I will use one. Suppose that person.getAge() > MINIMUM_AGE were false and person.getTall() threw an exception.
In the first case, the && short-circuits, getTall() is never called, and the if block is simply skipped; in the second case, the exception is thrown while the local variables are being evaluated. In computability theory, an expression that throws (or never terminates) is said to evaluate to the bottom element. It has been shown that if a program terminates (does not resolve to bottom) under eager evaluation semantics (as in your second example), then it is also guaranteed to terminate under a lazy evaluation strategy (your first example); the converse does not hold. This is an important tenet of programming. Notice that you cannot write Java's && as an ordinary method yourself, precisely because a method's arguments are evaluated eagerly.
While it is unlikely that your getTall() method will throw an exception, you cannot apply your reasoning to the general case.
I think the checks probably belong in the Person class. You could pass in the min/max values, but calling person.isEligible() would be a better solution in my opinion.
You could go one step further and create subtypes of the Person:
Teenager extends Person
ThirdAgePerson extends Person
Kid extends Person
Subclasses will be overriding Person's methods in their own way.
One advantage to the latter case is that you will have the isOldEnough, isTallEnough, and isNotToHeavy (sic) variables available for reuse later in the code. It is also more easily readable.
You might want to consider abstracting those boolean checks into their own methods, or combining the check into a method. For example a person.isOldEnough() method which would return the value of the boolean check. You could even give it an integer parameter that would be your minimum age, to give it more flexible functionality.
I think this is a matter of personal taste. I find your refactoring quite readable.
In this particular case I might refactor the whole test into a
isThisPersonSuitable()
method.
If there were a lot of such code I might even create a PersonInterpreter (maybe inner) class which holds a person and answers questions about their eligibility.
Generally I would tend to favour readability over any minor performance considerations.
The only possible negative is that you lose the benefits of the AND being short-circuited. But in reality this is only really of any significance if one of your checks is much more expensive than the others, for example if person.getWeight() were a significant operation and not just an accessor.
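If that ever matters, a middle ground (just a sketch, reusing the hypothetical accessors and constants from the question) is to name the checks as small methods instead of local variables, so they are only evaluated when reached:

private boolean isOldEnough(Person person)   { return person.getAge() > MINIMUM_AGE; }
private boolean isTallEnough(Person person)  { return person.getTall() > MAXIMUM_SIZE; }
private boolean isNotTooHeavy(Person person) { return person.getWeight() < MAXIMUM_WEIGHT; }

// The && still short-circuits: isNotTooHeavy() is never called
// if one of the earlier checks has already failed.
if (isOldEnough(person) && isTallEnough(person) && isNotTooHeavy(person)) {
    // do something
}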
I have nothing against your construct, but it seems to me that in this case the readability gain could be achieved by simply putting in line breaks, i.e.
if (person.getAge() > MINIMUM_AGE
        && person.getTall() > MAXIMUM_SIZE
        && person.getWeight() < MAXIMUM_WEIGHT)
{
    // do something
}
The bigger issue that other answers brought up is whether this belongs inside the Person object. I think the simple answer to that is: If there are several places where you do the same test, it belongs in Person. If there are places where you do similar but different tests, then they belong in the calling class.
Like, if this is a system for a site that sells alcohol and you have many places where you must test if the person is of legal drinking age, then it makes sense to have a Person.isLegalDrinkingAge() function. If the only factor is age, then having a MINIMUM_DRINKING_AGE constant would accomplish the same result, I guess, but once there's other logic involved, like different legal drinking ages in different legal jurisdictions or there are special cases or exceptions, then it really should be a member function.
On the other hand, if you have one place where you check if someone is over 18 and somewhere else where you check if he's over 12 and somewhere else where you check if he's over 65 etc etc, then there's little to be gained by pushing this function into Person.
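Purely as an illustration of that point (the Jurisdiction type and the ages here are made up, not from the question), the member function might grow like this once jurisdiction rules are involved:

// Hypothetical member function on Person; jurisdictions and ages are examples only.
public boolean isLegalDrinkingAge(Jurisdiction jurisdiction) {
    switch (jurisdiction) {
        case USA:
            return getAge() >= 21;
        default:
            return getAge() >= 18;
    }
}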
I have a variable that I'm using like a constant (it will never change). I can't declare it as a constant because the value gets added at runtime.
Would you capitalize the variable name to help yourself understand that data's meaning?
Or would you not because this defies convention and make things more confusing?
The larger question:
Do you follow conventions even if the scenario isn't typical of the convention, but close enough that it might help you, personally, to understand things?
If it will aid you (and everybody else) in understanding your code six months down the line, do it. If it won't, don't. It's really that simple.
Personally, I would capitalise it. This is the convention in Java, where constants are always allocated at runtime due to its object-oriented nature. I'd be much more comfortable knowing that if I accidentally assigned to it, I'd definitely notice the next time I scanned through that chunk of code.
I don't consider my personal needs to be paramount here -- if I've written the code, I'm already better placed than anybody else to retrace it in the future if and when that's needed; so it's the "anybody else" I put first and foremost -- a present or future teammate who will need to understand the code (ideally) as thoroughly as I do.
Besides, with mandatory code reviews as a prereq to committing ANYthing to the codebase (an excellent practice, and the unfailing rule at my present employer), I'm likely to be called up on it should I ever let my attention slip (it does happen -- which is why I LOVE those mandatory code reviews, as applied to myself as well as everybody else!-).
A "variable set only once at startup" is a special-enough case that may be worth adding to your team's guidelines -- treating it as "closer to a constant than a variable" may make a lot of sense, but that only helps if the same rule/guideline is used consistently across the codebase. If the rule is not there I would check if there's consensus about adding it; otherwise, I would NOT break the guidelines for the sake of my personal tastes... that's the root of "egoless programming" and "team ownership of the codebase", two principles I serve with burning ardor.
BTW, were I on a single-person team in terms of coding guidelines (it happens, though it's not an optimal situation;), I think I'd have no trouble gaining unanimous consensus with myself that "set-once at startup" variables should be treated as constants in terms of naming conventions!-). But with a larger team, that's more work, and it could go either way.
Encapsulate it.
#include <iostream>

class ParamFoo
{
public:
    static void initializeAtStartup(double x);
    static double getFoo();

private:
    static double foo_;
};

double ParamFoo::foo_;

void ParamFoo::initializeAtStartup(double x)
{
    foo_ = x;
}

double ParamFoo::getFoo()
{
    return foo_;
}

int main(void)
{
    ParamFoo::initializeAtStartup(0.4);
    std::cout << ParamFoo::getFoo() << std::endl;
}
This should make it pretty clear that you shouldn't be setting this value anywhere else but at the startup of the application. If you want added protection, you can add some private guard boolean variable to throw an exception if initializeAtStartup is called more than once.
I would name it as a variable; I prefer to keep my naming very consistent.
As Rob already suggested, what about readonly (available in C# at least).
Or a property with no setter.
My immediate impression is that something that you "set at runtime, then never change" is a constant, only so far as the business rules are constant. Also, you should be using mutators/accessors, since using ALL CAPS can hardly guarantee "constness".
public class BadClass
{
    public static final double PI = 3.1;
    // PI is very constant. Not according to the business rules modeled by my
    // application, but by nature. I don't have a problem making this publicly
    // accessible--except that [Math] already does, with much better precision.

    public static /*final*/ Integer FOO = null;
    // FOO is constant only by convention. I cannot even enforce its "constness".
    // Making it public means that my enemies (overtime, for example) can change
    // the value (late-night programming) without telling me.
}
Instead,
public class BetterClass
{
    public static final double PI = 3.1;

    private /*final*/ Integer foo = null;

    public int getFoo() {
        return this.foo.intValue();
    }

    public void setFoo(int value) {
        // The business rules say that foo can be set only once.
        // If the business rules change, we can remove this condition
        // without breaking old code.
        if (null == this.foo) {
            this.foo = value;
        } else {
            throw new IllegalStateException("Foo can be set only once.");
        }
    }
}
If you always use the mutator to set the value, even within [BetterClass] itself, you know that the foo's "constness" will not be violated. Of course, if someone is going to set the value of foo directly (I need to quit working before 2:00 am!), there are still no guarantees. But something like that should be pointed out at code review.
So my recommendation is to treat foo as a normal member variable--there doesn't need to be a special naming convention for something that is almost const.
However, use mutators/accessors, even on private variables. These are typically very fast, and you can enforce business rules inside of them. This should be your convention.
(If you are writing code for embedded medical devices, pretend that you never saw this posting).
Is it possible to mark it as readonly? Then conventions are not as important.
Do you follow conventions even if the scenario isn't typical of the convention, but close enough that it might help you, personally, to understand things?
Following a convention when the scenario is atypical might confuse or slow down others (or even you, after a while.) I would avoid giving a variable the guise of something that it isn't.
Also, the fact that you have this atypical scenario could be an indication that perhaps some other, more typical paradigm could be followed. Though, I don't have any immediate suggestions for an alternative.
I would make it capitalized (since it's more constant than variable from a design perspective) and add a comment around it stating its uniqueness to the application.
FWIW my own convention is to use all caps for #defines and for enums. For const variables I either use no particular convention, or when I do it's to prefix the name with a 'k' (for 'konstant' - not 'c' which is already over used for things like 'count' or 'char').
I'm finding that I like the 'k' convention and will probably use it more often, and may even use it for enums, reserving the screaming, all-caps identifiers for the dreaded preprocessor macros.
Conventions are just that, conventions. They are there to help make the code understandable. They usually do if they are not too badly chosen and if they are applied consistently. The last point is probably the most important thing about them: they should be applied consistently.
One thing which prevents some conventions from making code more readable, even when they are applied consistently -- at least for newcomers and people switching between code bases -- is when they conflict with other conventions. In C and C++, I'm aware of two common conventions about the use of names in ALL_CAPS:
reserve them for the preprocessor; that one has my preference, as preprocessor identifiers are special: they don't obey the usual scoping rules, and preventing clashes with them is important
use them for constants (macros and enumerators)
Two problems come, in addition to the unfamiliarity, if you use them for logically constant things which are in fact variables:
they aren't usable in places (like array sizes) where the language expects a constant expression
my experience teaches me that maintenance will tend to make them even less constant than they are now
Create a wrapper class with a single private static field. Create an initField(..) and a getField(..) static method. initField throws/asserts/otherwise errors if the static field is not null. (For primitives, you may have to use a primitive and a boolean to track initialization.)
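A minimal sketch of that wrapper (the class and method names are illustrative):

public final class StartupParam {

    private static Integer field; // stays null until initialized once at startup

    private StartupParam() {} // no instances; this is just a holder

    public static void initField(int value) {
        if (field != null) {
            throw new IllegalStateException("initField(..) may only be called once");
        }
        field = value;
    }

    public static int getField() {
        if (field == null) {
            throw new IllegalStateException("initField(..) has not been called yet");
        }
        return field;
    }
}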
In Java, I prefer to pass these types of variables in as system properties. A class can then do something like:
public static final int MY_INT = Integer.getInteger("a.property.name");
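Integer.getInteger(String) looks the value up as a system property, so it would typically be supplied on the command line (e.g. with -Da.property.name=42). Note that it returns null when the property is missing, so the two-argument form Integer.getInteger("a.property.name", someDefault) is safer when assigning to a primitive int field.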
You could also use a property file (see java.util.Properties) instead of using -D to specify it. Then you get:
public class Foo {
    public static final int MY_INT;

    static {
        Properties p = new Properties();
        try {
            p.load(new FileInputStream("app.props"));
        } catch (IOException e) {
            // SWALLOW or RETHROW AS ERROR
        }
        MY_INT = Integer.parseInt(p.getProperty("my.int", "17")); // 17 is the default if you swallow the IOException
    }
    ...
}
First of all, follow your project's coding standards. You should be coding for other people reading the code, not yourself. Your personal preferences should not take precedence over project-wide rules and conventions, etc.
In the absence of a project coding standard you should follow "best practice" for the language you are dealing with.
In Java, best practice is that you should declare a pseudo-constant with a camel case identifier. That's what the Sun Java coding standard says, and that is what the vast majority of professional Java developers use.
In C and C++ the (classical) convention is that all-caps is used for constants defined as preprocessor symbols. So since this is not a preprocessor symbol, you should use whatever your coding standard says is appropriate for a variable.
The fact that the pseudo-constant is not supposed to change won't stop someone from modifying the code so that it actually changes, accidentally or deliberately. If you use / abuse a coding convention that makes the identifier look like a real constant, you will be part of the problem:
Someone trying to read / debug your code will first assume the identifier is a real constant and not investigate the possibility that it is not.
Then, when they do look at the declaration, there will be a lot of shouting and threats of defenestration.
Actually, a better way to deal with a pseudo-constant is to encapsulate it. In Java, you would declare it as private member and provide a getter and setter. The setter should do something to prevent the pseudo-constant from being changed after it has been set the first time. Any decent Java JIT compiler will inline a simple getter, so this should not affect runtime performance.
Giving wrong information is generally not best practice.
Implicitly claiming something is a constant, when it is merely currently not changed, is giving out wrong information.
I'm not sure if this is legal in your language of choice, but in C++, this would work for your purpose:
#include <cstdlib>
#include <iostream>

int main()
{
    int i = 0;
    std::cin >> i;
    const int CONST = i;
    std::cout << CONST; // displays i
    system("PAUSE");
    return 0;
}
I'm not sure if this is a moral thing to do, but this does solve your problem (unless you really need your memory).
Just like anything else - scope and context are required to know in what way something is constant. So - there's no way to satisfy everyone.
Follow the style used in your language of choice - 80% of the time, that will be clear enough. The alternative is a highly over-thought naming system that sacrifices productivity for ideal technical correctness (which few people will even really appreciate if you can ever achieve it).
One question would be: what kind of variable?
In the case of static variables that don't change after what I'd call "boot time", for lack of a better term, I use ALL_CAPS ... same thing for global variables (if the language supports them at all) ...
Communicating semantics is actually the point of naming conventions, and seeing ALL_CAPS clearly states that a) I will not write to it, and b) I can cache it (in a local variable for example, or in AS3 even an instance variable makes sense, since static access is very slow) ...
Whether it's a "real constant" or not does not really matter ... that's more of an implementation detail that should be hidden away (reliably! information hiding is good and important, but it is crucial that the information that is shared can be trusted!) ... it can really be exchanged ... for example, I often start building apps against some hardcoded config containing some static constants ... later, I decide that I don't want this to be hardcoded, but rather coming from some config file, so I load it, and during the boot process I init all the pseudo-constants ... the actual app still treats them as constants, because after booting, that is what these values are ... this seems perfectly valid to me ...
At instance level, I am not 100% sure I ever ran into a case where I could be very certain that some field would never change ... usually, this makes the class inflexible ...
Other than that, you can usually declare readonly properties, to get compile-time errors, which is also a good thing to have ...