From a language-design point of view, what kind of practice is supporting operator overloading?
What are the pros & cons (if any)?
EDIT: it has been mentioned that std::complex is a much better example than std::string for "good use" of operator overloading, so I am including an example of that as well:
std::complex<double> c;
c = 10.0;
c += 2.0;
c = std::complex<double>(10.0, 1.0);
c = c + 10.0;
Aside from the constructor syntax, it looks and acts just like any other built-in type.
The primary pro is that you can create new types which act like the built-in types. A good example of this is std::string in C++ (see above for a better example), which is implemented in the library and is not a basic type. Yet you can write things like:
std::string s = "hello"
s += " world";
if(s == "hello world") {
//....
}
The downside is that it is easy to abuse. Poor choices in operator overloading can lead to accidentally inefficient or unclear code. Imagine if std::list had an operator[]. You may be tempted to write:
for(int i = 0; i < l.size(); ++i) {
    l[i] += 10;
}
that's an O(n^2) algorithm! Ouch. Fortunately, std::list does not provide operator[], precisely because users assume indexing is an efficient (constant-time) operation.
The initial driver is usually maths. Programmers want to add vectors and quaternions by writing a + b and multiply matrices by vectors with M * v.
It has much broader scope than that. For instance, smart pointers look syntactically like normal pointers, streams can look a bit like Unix pipes and various custom data structures can be used as if they were arrays (with strings as indexes, even!).
The general principle behind overloading is to allow library implementors to enhance the language in a seamless way. This is both a strength and a curse. It provides a very powerful way to make the language seem richer than it is out of the box, but it is also highly vulnerable to abuse (more so than many other C++ features).
Pros: you can end up with code which is simpler to read. Few people would argue that:
BigDecimal a = x.add(y).divide(z).add(m.multiply(n));
is as easy to read as
BigDecimal a = ((x + y) / z) + (m * n); // Brackets added for clarity
Cons: it's harder to tell what's being called. I seem to remember that in C++, a statement of the form
a = b[i];
can end up performing 7 custom operations, although I can't remember what they all are. I may be slightly "off" on the example, but the principle is right: operator overloading and custom conversions can "hide" code.
You can do some interesting tricks with syntax (e.g. streams in C++) and you can make some code very concise. However, you can also make code very difficult to debug and understand: you have to be constantly on your guard about what the various operators do for each kind of object.
You might view operator overloading as a kind of method/function overloading; it is part of polymorphism in object-oriented languages.
With overloading, each class of objects can work like a primitive type, which makes classes more natural to use: as easy as 1 + 2.
Say a complex number class, Complex.
Complex(1,2i) + Complex(2,3i) yields Complex(3,5i).
5 + Complex(3, 2i) yields Complex(8, 2i).
Complex(2, 4i) + -1.8 yields Complex(0.2, 4i).
Classes are much easier to use this way. Without operator overloading, you have to use a bunch of class methods to denote 'add', which makes the notation clumsy, as the sketch below shows.
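To make the clumsiness concrete, here is a minimal Java sketch (Java has no operator overloading, and the class and method names here are made up for illustration):
public class Complex {
    private final double re, im;

    public Complex(double re, double im) {
        this.re = re;
        this.im = im;
    }

    public Complex add(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }

    public Complex multiply(Complex other) {
        return new Complex(re * other.re - im * other.im,
                           re * other.im + im * other.re);
    }

    @Override
    public String toString() {
        return re + (im < 0 ? " - " : " + ") + Math.abs(im) + "i";
    }

    public static void main(String[] args) {
        Complex a = new Complex(1, 2);
        Complex b = new Complex(2, 3);
        // Instead of writing a + b * a, you are stuck with method chains:
        System.out.println(a.add(b.multiply(a)));
    }
}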
Operator overloading ought to be defined carefully; otherwise confusion results. For example, the '+' operator is natural for adding numbers, times, or dates, or for concatenating arrays or text. Adding a '+' operator to a Mouse or Car class might not make any sense. Sometimes an overload might not seem natural to some people. For example, Array(1,2,3) + Array(3,4,5): some might expect Array(4,6,8), but others expect Array(1,2,3,3,4,5).
This is a question I read in some lecture notes about dynamic programming that I randomly found on the internet. (I have already graduated and know the basics of dynamic programming.)
In the section explaining why memoization is needed, i.e.
// pseudocode
int F[100000] = {0};
int fibonacci(int x){
    if(x <= 1) return x;
    if(F[x] > 0) return F[x];
    return F[x] = fibonacci(x-1) + fibonacci(x-2);
}
If memoization is not used, many subproblems are recalculated over and over, which makes the complexity exponential.
Then on one page, the notes pose a question without an answer, which is exactly what I want to ask. Here I am using the exact wording and the examples they show:
Automated memoization: Many functional programming languages (e.g. Lisp) have built-in support for memoization.
Why not in imperative languages (e.g. Java)?
Lisp example the notes provide (which they claim is efficient):
(defun F (n)
  (if (<= n 1)
      n
      (+ (F (- n 1)) (F (- n 2)))))
Java example they provide (which they claim is exponential):
static int F(int n) {
    if (n <= 1) return n;
    else return F(n-1) + F(n-2);
}
Before reading this, I did not even know that some programming languages have built-in support for memoization.
Is the claim in the notes true? If so, why don't imperative languages support it?
The claims about "LISP" are very vague; they don't even mention which Lisp dialect or implementation they mean. None of the Lisp dialects I'm familiar with does automatic memoization, but Lisp makes it easy to write a wrapper function which transforms any existing function into a memoized one.
Fully automatic, unconditional memoization would be a very dangerous practice and would lead to out-of-memory errors. In imperative languages it would be even worse because return values are often mutable, therefore not reusable. Imperative languages don't usually support tail-recursion optimization, further reducing the applicability of memoization.
The support for memoization is nothing more than having first-class functions.
If you want to memoize the Java version for one specific case, you can write it explicitly: create a hashtable, check for existing values, etc. Unfortunately, you cannot easily generalize this in order to memoize any function. Languages with first-class functions make writing functions and memoizing them almost orthogonal problems.
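For instance, a hand-memoized version of the Fibonacci function in Java might look like this (a sketch; the cache field and the names are mine):
import java.util.HashMap;
import java.util.Map;

public class Fib {
    // explicit cache: argument -> previously computed result
    private static final Map<Integer, Long> cache = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;
        Long cached = cache.get(n);   // check for an existing value
        if (cached != null) return cached;
        long result = fib(n - 1) + fib(n - 2);
        cache.put(n, result);         // remember it for later calls
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(90)); // fast, despite the naive-looking recursion
    }
}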
The basic case is easy, but you have to take into account recursive calls.
In statically typed functional languages like OCaml, a memoized function cannot just call itself recursively, because it would call the non-memoized version. However, the only change to your existing function is to accept a function as an argument, named for example self, which should be called whenever your function wants to recurse. The generic memoization facility then provides the appropriate function. A full example of this is available in this answer.
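The same open-recursion pattern can be imitated in Java with a functional interface. Here is a rough sketch (the memoize helper and its names are mine, not a standard API):
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

public class Memo {
    // Turn an open-recursive function (one that takes "self" as an argument)
    // into a memoized, closed one.
    static <A, R> Function<A, R> memoize(BiFunction<Function<A, R>, A, R> open) {
        Map<A, R> cache = new HashMap<>();
        return new Function<A, R>() {
            @Override
            public R apply(A a) {
                R hit = cache.get(a);
                if (hit != null) return hit;
                R result = open.apply(this, a); // recurse through the memoized wrapper
                cache.put(a, result);
                return result;
            }
        };
    }

    public static void main(String[] args) {
        Function<Integer, Long> fib = memoize(
            (self, n) -> n <= 1 ? (long) n
                                : self.apply(n - 1) + self.apply(n - 2));
        System.out.println(fib.apply(90));
    }
}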
The Lisp version has two features that make memoizing an existing function even more straightforward:
You can manipulate functions like any other value
You can redefine functions at runtime
So for example, in Common Lisp, you define F:
(defun F (n)
  (if (<= n 1)
      n
      (+ (F (- n 1))
         (F (- n 2)))))
Then, you see that you need to memoize the function, so you load a library:
(ql:quickload :memoize)
... and you memoize F:
(org.tfeb.hax.memoize:memoize-function 'F)
The facility accepts arguments to specify which input should be cached and which test function to use. Then, the function F is replaced by a fresh one, which introduces the necessary code to use an internal hash-table. Recursive calls to F inside F are now calling the wrapping function, not the original one (you don't even recompile F). The only potential problem is if the original F was subject to tail-call optimization. You should probably declare it notinline or use DEF-MEMOIZED-FUNCTION.
Although I'm not sure any widely-used Lisps have supported automatic memoization, I think there are two reasons why memoization is more common in functional languages, and an additional one for Lisp-family languages.
First of all, people write functions in functional languages: computations whose result depends only on their arguments and which do not side-effect the environment. Anything which doesn't meet that requirement isn't amenable to memoization at all. And, well, imperative languages are just those languages in which those requirements are not, or may not be, met, because they would not be imperative otherwise!
Of course, even in merely functional-friendly languages like (most) Lisps you have to be careful: you probably should not memoize the following, for instance:
(defvar *p* 1)

(defun foo (n)
  (if (<= n 0)
      *p*
      (+ (foo (1- n)) (foo (- n *p*)))))
Second, functional languages generally want to talk about immutable data structures. This means two things:
It is actually safe to memoize a function which returns a large data structure
Functions which build very large data structures often need to cons an enormous amount of garbage, because they can't mutate interim structures.
(2) is slightly controversial: the received wisdom is that GCs are now so good that it's not a problem, copying is very cheap, compilers can do magic and so on. Well, people who have written such functions will know that this is only partly true: GCs are good, copying is cheap (but pointer-chasing large structures to copy them is often very hostile to caches), but it's not actually enough (and compilers almost never do the magic they are claimed to do). So you either cheat by gratuitously resorting to non-functional code, or you memoize. If you memoize the function then you only build all the interim structures once, and everything becomes cheap (other than in memory, but suitable weakness in the memoization can handle that).
Thirdly: if your language does not support easy metalinguistic abstraction, it's a serious pain to implement memoization. Or to put it another way: you need Lisp-style macros.
To memoize a function you need to do at least two things:
You need to control which arguments are the keys for the memoization -- not all functions have just one argument, and not all functions with multiple arguments should be memoized on the first;
You need to intervene inside the function to disable any self-tail-call optimization, which will completely subvert memoization.
Although it's kind of cruel to do so because it's so easy, I will demonstrate this by poking fun at Python.
You might think that decorators are what you need to memoize functions in Python. And indeed, you can write memoizing tools using decorators (and I have written a bunch of them). And these even sort-of work, although they do so mostly by chance.
For a start, a decorator can't easily know anything about the function it is decorating. So you end up either trying to memoize based on a tuple of all the arguments to the function, or having to specify in the decorator which arguments to memoize on, or something equally grotty.
Secondly, the decorator gets the function it is decorating as an argument: it doesn't get to poke around inside it. That's actually OK, because Python, as part of its 'no concepts invented after 1956' policy, of course does not assume that calls to f lexically within the definition of f (and with no intervening bindings) are in fact self-calls. But perhaps one day it will, and all your memoization will then break.
So in summary: to memoize functions robustly, you need Lisp-style macros. Probably the only imperative languages which have those are Lisps.
This question is about the strategy for defining a square-root algorithm against a generic numerical interface. I am aware of the existence of algorithms solving the problem under various conditions. I'm interested in algorithms that:
Solve the problem using only selected functions;
Don't care whether the objects manipulated are integers, floating points or something else, provided those objects can be added, multiplied and compared;
Return an exact solution if the input is a perfect square.
Because of the subtlety of the distinction, and for the sake of clarity, I will define the problem in a very verbose way. Beware the wall of text!
Suppose we have a Java interface Constant<C extends Constant<C>> with the following abstract methods, which we will call base functions:
C add(C a);
C subtract(C a);
C multiply(C a);
C[] divideAndRemainder(C b);
C additiveInverse();
C multiplicativeInverse();
C additiveIdentity();
C multiplicativeIdentity();
int compareTo(C arg1);
It is not known whether C represents an integer or a floating-point number, nor should this be relevant in the following discussion.
Using only those methods it is possible to create static or default implementations of some mathematical algorithms on numbers: for example, divideAndRemainder(C b) and compareTo(C arg1) allow one to build algorithms for the greatest common divisor, the Bézout identity, and so on, as in the sketch below.
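For example, here is a sketch of Euclid's algorithm written purely against the base functions (the cast, the method name, and the assumption that divideAndRemainder returns {quotient, remainder} are mine):
public default C gcd(C other){
    @SuppressWarnings("unchecked")
    C a = (C) this;
    C b = other;
    // Euclid: replace (a, b) with (b, a mod b) until the remainder is zero
    while(b.compareTo(b.additiveIdentity()) != 0)
    {
        C remainder = a.divideAndRemainder(b)[1];
        a = b;
        b = remainder;
    }
    return a;
}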
Now suppose our Interface has a default method for the exponentiation:
public default C pow(int n){
    // x^(-n) = (1/x)^n, so a negative exponent uses the multiplicative inverse
    if(n < 0) return this.multiplicativeInverse().pow(-n);
    // x^0 is the multiplicative identity, 1
    if(n == 0) return multiplicativeIdentity();
    // square-and-multiply over the bits of n, least significant bit first
    @SuppressWarnings("unchecked")
    C base = (C) this;
    C output = multiplicativeIdentity();
    int m = n;
    while(m > 0)
    {
        if(m % 2 == 1) output = output.multiply(base);
        base = base.multiply(base);
        m = m / 2;
    }
    return output;
}
The goal is to define two default methods, C root(int n) and C maximumErrorAllowed(), such that:
x.equals(y.pow(n)) implies x.root(n).equals(y);
C root(int n); is actually implemented using only base functions and methods created from the base functions;
The interface can still be applied to any kind of numbers, including but not limiting at both integers and floating points.
this.subtract(this.root(n).pow(n)).compareTo(maximumErrorAllowed()) == -1 for all this such that this.root(n) != null, i.e. any approximation has an error smaller than C maximumErrorAllowed();
Is that possible? If so, how, and what would be an estimate of the computational complexity?
I spent some time working on a custom number interface for Java; it's amazingly hard, and one of the most disappointing experiences I've had with Java.
The problem is that you have to start over from scratch. You can't really re-use anything in Java, so if you want implementations for int, float, long, BigInteger, rational, Complex and Vector you have to implement all the methods yourself for every single class, and don't expect the Math package to be of much help.
It got particularly nasty implementing the "Composed" classes like "Complex" which is made from two of the "Generic" floating point types, or "Rational" which composes two generic integer types.
And math operators are right out--this can be especially frustrating.
The way I got it to work reasonably well was to implement the classes in Java and then write some of the higher-level stuff in Groovy. If you name the operations correctly, Groovy can just pick them up: if your class implements .plus(), then Groovy will let you write instance1 + instance2.
IIRC, because it is dynamic, Groovy often handled cross-class pieces nicely: if you wrote Complex + Integer, you could supply a conversion from Integer to Complex, and Groovy would promote the Integer to a Complex to do the operation and return a Complex.
Groovy is pretty interchangeable with Java; you can usually just rename a Java class ".groovy" and compile it and it will work, so it was a pretty good compromise.
This was a long time ago, though; now you might get some traction with Java 8's default methods in your "Number" interface. That could make implementing some of the classes easier, but it might not help; I'd have to try it again to find out, and I'm not sure I want to re-open that can o' worms.
Is that possible? If yes, how?
In theory, yes. There are approximation algorithms for root(), for example the n-th root algorithm (a sketch follows below). You will run into problems with precision, however, which you might want to solve on a case-by-case basis (e.g. use a look-up table for integers). As such, I'd recommend against a default implementation in an interface.
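For concreteness, here is a minimal sketch of the Newton iteration for the integer square-root case, using only addition, division and comparison; plain long stands in for the generic C:
// floor(sqrt(x)) by Newton's iteration; exact when x is a perfect square
static long isqrt(long x) {
    if (x < 0) throw new ArithmeticException("negative argument");
    if (x < 2) return x;
    long guess = x / 2;                  // a good enough starting point here
    while (true) {
        long next = (guess + x / guess) / 2;
        if (next >= guess) break;        // converged: guess == floor(sqrt(x))
        guess = next;
    }
    return guess;
}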
What would be an estimation of the computational complexity?
This, too, varies with your implementation and type of number, and depends on your precision. For integers, you can create an implementation with a look-up table (sketched below), and the complexity would be O(1).
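As a sketch of the look-up-table idea (the bound and names are mine): precompute floor(sqrt(x)) for all x below some limit once, then each query is a single array read.
// precomputed table of floor(sqrt(x)) for 0 <= x < LIMIT
static final int LIMIT = 1 << 20;
static final int[] SQRT = new int[LIMIT];
static {
    for (int r = 0; (long) r * r < LIMIT; r++) {
        // every x in [r*r, (r+1)*(r+1)) has floor(sqrt(x)) == r
        for (long x = (long) r * r; x < (long) (r + 1) * (r + 1) && x < LIMIT; x++) {
            SQRT[(int) x] = r;
        }
    }
}
static int sqrtLookup(int x) { return SQRT[x]; } // O(1) per query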
If you want a better answer for the complexity of the operation itself, you might want to check out Computational complexity of calculating the nth root of a real number.
I program in Java and have been trying to understand exactly what operator overloading is. I'm still a bit puzzled.
Can an operator take on different meanings depending on which class uses it? I've read that this is called "name polymorphism".
Java apparently does not support it and there seems to be a lot of controversy around this. Should I worry about this?
As a last question: in an assignment, the teacher has stated that the assignment uses operator overloading. He is mainly a C++ programmer, but we are allowed to write the assignment in Java. Since Java does not support overloading, is there something I should be wary of?
Operator overloading basically means using the same operator for different data types, and getting different but analogous behaviour as a result.
Java indeed doesn't support this, but in any situation where something like it could be useful, you can easily work around it in Java.
The only overloaded operator in Java is the arithmetic + operator. When used with numbers (int, long, double etc.), it adds them, just as you would expect. When used with String objects, it concatenates them. For example:
String a = "This is ";
String b = " a String";
String c = a + b;
System.out.print (c);
This would print the following on the screen: This is a String.
This is the only situation in Java in which you can talk about operator overloading.
Regarding your assignment: if the requirement is to do something that involves operator overloading, you can't do this in Java. Ask your teacher exactly what language you are allowed to use for this particular assignment. You will most likely need to do it in C++.
PS: in the case of Integer, Long, Double etc. objects, + would also work, because of unboxing.
Java doesn't allow you to overload operators. The language itself uses a very limited kind of operator overloading, though, since + performs addition or concatenation depending on the context.
If your assignment asks you to implement something by overloading operators, you won't be able to do it in Java. Maybe you should ask the teacher why he allows Java for such an assignment.
If your assignment only asks you to use an overloaded operator, then having your program use + for both concatenation and addition (as in the snippet below) would fit the bill. But I would ask the teacher, because I doubt that's what he expects.
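For example, the following tiny program uses the same + token both ways (the class name is mine):
public class PlusDemo {
    public static void main(String[] args) {
        int sum = 1 + 2;          // arithmetic addition
        String text = "1" + "2";  // string concatenation
        System.out.println(sum);  // prints 3
        System.out.println(text); // prints 12
    }
}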
Java apparently does not support it and there seems to be a lot of controversy around this.
There is no controversy about this. Some people might disagree with the decision, but James Gosling and others decided from day one to leave operator overloading by class developers out of the language. It's not likely to change.
As pointed out by others here, they reserved the right for the JVM to overload operators on a limited basis. The point is that you can't do it when you're developing your own classes.
They did it because there were examples of C++ developers abusing the capability (e.g. overloading operators to do surprising, unrelated things).
Should I worry about this?
No. Java won't explode. You just won't be able to do it for your classes. If you feel like you need to, you'll just have to write C++ or some other language.
As to your query about the difference between operator overloading and polymorphism: polymorphism is a standard OOP concept where an instance of a class may exhibit different behaviour depending on the underlying type. For example, in C++:
class Shape {
public:
    // ...
    virtual void draw() = 0;
};
class Circle : public Shape {
public:
    virtual void draw() { /* ...draw a circle... */ }
};
class Square : public Shape {
public:
    virtual void draw() { /* ...draw a square... */ }
};
// ...
Shape *s = new Circle();
s->draw();        // calls Circle::draw()
s = new Square();
s->draw();        // calls Square::draw()
Hence s is exhibiting polymorphism.
This is different from operator overloading but you already have been explained what that is in the other answers.
You can either use the natural a != b (a is not equal to b) or a.equals(b). In a language with value semantics and an overloadable assignment operator, a = b copies b's value into a, so the following would hold:
b.set(1, 0);
a = b;
b.set(2, 0);
assert( !a.equals(b) );
In Java, however, a = b only makes a refer to the same object as b, so the assertion above would fail.
But Java has a more limited set of overloaded operators than other languages: http://en.wikipedia.org/wiki/Operator_overloading
Referring to a ~2-year old discussion of the fact that there is no operator overloading in Java ( Why doesn't Java offer operator overloading? ), and coming from many intense C++ years myself to Java, I wonder whether there is a more fundamental reason that operator overloading is not part of the Java language, at least in the case of assignment, than the highest-rated answer in that link states near the bottom of the answer (namely, that it was James Gosling's personal choice).
Specifically, consider assignment.
// C++
#include <iostream>
class MyClass
{
public:
    int x;
    MyClass(const int _x) : x(_x) {}
    MyClass & operator=(const MyClass & rhs) { x = rhs.x; return *this; }
};

int main()
{
    MyClass myObj1(1), myObj2(2);
    MyClass & myRef = myObj1;
    myRef = myObj2;
    std::cout << "myObj1.x = " << myObj1.x << std::endl;
    std::cout << "myObj2.x = " << myObj2.x << std::endl;
    return 0;
}
The output is:
myObj1.x = 2
myObj2.x = 2
In Java, however, the line myRef = myObj2 (assuming the declaration of myRef in the previous line was MyClass myRef = myObj1, as Java requires, since all such variables are automatically Java-style 'references') behaves very differently: it would not cause myObj1.x to change, and the output would be
myObj1.x = 1
myObj2.x = 2
This difference between C++ and Java leads me to think that the absence of operator overloading in Java, at least in the case of assignment, is not a 'matter of personal choice' on the part of James Gosling, but rather a fundamental necessity given Java's syntax that treats all object variables as references (i.e. MyClass myRef = myObj1 defines myRef to be a Java-style reference). I say this because if assignment in Java causes the left-hand side reference to refer to a different object, rather than allowing the possibility that the object itself change its value, then it would seem that there is no possibility of providing an overloaded assignment operator.
In other words - it's not simply a 'choice', and there's not even the possibility of 'holding your breath' with the hope that it will ever be introduced, as the aforementioned high-rated answer also states (near the end). Quoting: "The reasons for not adding them now could be a mix of internal politics, allergy to the feature, distrust of developers (you know, the saboteur ones), compatibility with the previous JVMs, time to write a correct specification, etc.. So don't hold your breath waiting for this feature.". <-- So this isn't correct, at least for the assignment operator: the reason there's no operator overloading (at least for assignment) is instead fundamental to the nature of Java.
Is this a correct assessment on my part?
ADDENDUM
Assuming the assignment operator is a special case, then my follow-up question is: Are there any other operators, or more generally any other language features, that would by necessity be affected in a similar way as the assignment operator? I would like to know how 'deep' the difference goes between Java and C++ regarding variables-as-values/references. i.e., in C++, variable tokens represent values (and note, even if the variable token was declared initially as a reference, it's still treated as a value essentially wherever it's used), whereas in Java, variable tokens represent honest-to-goodness references wherever the token is later used.
There is a big misconception when talking about similarities and differences between Java and C++, that arises in your question. C++ references and Java references are not the same. In Java a reference is a resettable proxy to the real object, while in C++ a reference is an alias to the object. To put it in C++ terms, a Java references is a garbage collected pointer not a reference. Now, going back to your example, to write equivalent code in C++ and Java you would have to use pointers:
struct type { int x; type(int v) : x(v) {} };

int main() {
    type a(1), b(2);
    type *pa = &a, *pb = &b;
    pa = pb;
    // a is still 1, b is still 2, pa == pb == &b
}
Now the examples are the same: the assignment operator is being applied to the pointers to the objects, and in that particular case you cannot overload the operator in C++ either. It is important to note that operator overloading can be easily abused, and that is a good reason to avoid it in the first place. Now if you add the two different types of entities: objects and references, things become more messy to think about.
If you were allowed to overload operator= for a particular object in Java, then you would not be able to have multiple references to the same object, and the language would be crippled:
Type a = new Type(1);
Type b = new Type(2);
a = b; // dispatched to Type.operator=( Type )??
a.foo();
a = new Type(3); // do you want to copy Type(3) into a, or work with a new object?
That in turn would make the type unusable in the language: containers store references, and they reassign them (even the first time, just when an object is created), and functions don't really use pass-by-reference semantics, but rather pass the references by value (which is a completely different issue; again, the difference is void foo( type* ) versus void foo( type& ): the proxy entity is copied, so you cannot modify the reference passed in by the caller).
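A quick Java sketch of that last point, pass-by-value of references (the class and method names are mine):
class Box { int value; Box(int v) { value = v; } }

public class PassDemo {
    static void reassign(Box b) {
        b = new Box(42);   // changes only the local copy of the reference
    }
    static void mutate(Box b) {
        b.value = 42;      // changes the shared object itself
    }
    public static void main(String[] args) {
        Box box = new Box(1);
        reassign(box);
        System.out.println(box.value); // 1  -- the caller's reference is untouched
        mutate(box);
        System.out.println(box.value); // 42 -- the object was mutated
    }
}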
The problem is that the language is trying really hard to hide the fact that a and the object that a refers to are not the same thing (same happens in C#), and that in turn means that you cannot explicitly state that one operation is to be applied to the reference/referent, that is resolved by the language. The outcome of that design is that any operation that can be applied to references can never be applied to the objects themselves.
As for the rest of the operators, the decision is most probably arbitrary. Because the language hides the reference/object difference, it could have been designed such that a+b was translated into type* operator+( type*, type* ) by the compiler. Since you cannot do arithmetic on references, there would be no problem: the compiler would recognize that a+b is an operation that must be applied to the objects (it does not make sense for references). But then it could be considered a little awkward that you can overload +, but you cannot overload =, ==, !=...
That is the path that C# took, where assignment cannot be overloaded for reference types. Interestingly, in C# there are value types, and the sets of operators that can be overloaded for reference and value types are different. Not having coded C# in large projects, I cannot really tell whether that is a source of confusion in practice or whether people just get used to it (but if you search SO, you will find that a few people do ask why X cannot be overloaded in C# for reference types, where X is one of the operations that can be applied to the reference itself).
That doesn't explain why they couldn't have allowed overloading of other operators like + or -. Considering James Gosling designed the Java language, and he said it was his personal choice, which he explains in more detail at the link provided in the question you linked, I think that's your answer:
There are some things that I kind of feel torn about, like operator overloading. I left out operator overloading as a fairly personal choice because I had seen too many people abuse it in C++. I've spent a lot of time in the past five to six years surveying people about operator overloading and it's really fascinating, because you get the community broken into three pieces: Probably about 20 to 30 percent of the population think of operator overloading as the spawn of the devil; somebody has done something with operator overloading that has just really ticked them off, because they've used like + for list insertion and it makes life really, really confusing. A lot of that problem stems from the fact that there are only about half a dozen operators you can sensibly overload, and yet there are thousands or millions of operators that people would like to define -- so you have to pick, and often the choices conflict with your sense of intuition. Then there's a community of about 10 percent that have actually used operator overloading appropriately and who really care about it, and for whom it's actually really important; this is almost exclusively people who do numerical work, where the notation is very important to appealing to people's intuition, because they come into it with an intuition about what the + means, and the ability to say "a + b" where a and b are complex numbers or matrices or something really does make sense. You get kind of shaky when you get to things like multiply because there are actually multiple kinds of multiplication operators -- there's vector product, and dot product, which are fundamentally very different. And yet there's only one operator, so what do you do? And there's no operator for square-root. Those two camps are the poles, and then there's this mush in the middle of 60-odd percent who really couldn't care much either way. The camp of people that think that operator overloading is a bad idea has been, simply from my informal statistical sampling, significantly larger and certainly more vocal than the numerical guys. So, given the way that things have gone today where some features in the language are voted on by the community -- it's not just like some little standards committee, it really is large-scale -- it would be pretty hard to get operator overloading in. And yet it leaves this one community of fairly important folks kind of totally shut out. It's a flavor of the tragedy of the commons problem.
UPDATE: Re: your addendum, the other assignment operators +=, -=, etc. would also be affected. You also can't write a swap function like C's void swap(int *a, int *b), among other things.
Is this a correct assessment on my part?
The lack of operator overloading in general is a "personal choice". C#, which is a very similar language, does allow operator overloading. But you still can't overload assignment. What would that even do in a reference-semantics language?
Are there any other operators, or more generally any other language features, that would by necessity be affected in a similar way as the assignment operator? I would like to know how 'deep' the difference goes between Java and C++ regarding variables-as-values/references.
The most obvious is copying. In a reference-semantics language, clone() isn't that common, and isn't needed at all for immutable types like String. But in C++, where the default assignment semantics are based around copying, copy constructors are very common, and automatically generated if you don't define one. A Java sketch of the contrast follows below.
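Here is a short Java sketch of that contrast (the Point class and the choice of a hand-written copy constructor are illustrative):
public class Point {
    int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    Point(Point other) {        // copy constructor: must be written by hand in Java
        this(other.x, other.y);
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = a;            // no copy: b is just another reference to a
        Point c = new Point(a); // explicit copy
        a.x = 99;
        System.out.println(b.x); // 99 -- b sees the change
        System.out.println(c.x); // 1  -- c does not
    }
}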
A more subtle difference is that it's a lot harder for a reference-semantics language to support RAII than a value-semantics language, because object lifetime is harder to track. Raymond Chen has a good explanation.
The reason why operator overloading is abused in the C++ language is that it is too complex a feature. Here are some aspects that make it complex:
expressions are a tree
operator overloading is the interface/documentation for those expressions
interfaces are basically an invisible feature in C++
free functions/static functions/friend functions are a big mess in C++
function prototypes are already a complex feature
the choice of syntax for operator overloading is less than ideal
there is no other comparable API in the C++ language
user-defined types/function names are handled differently from built-in types/function names in function prototypes
it involves advanced machinery, like operator<<(ostream&, ostream& (*fptr)(ostream&));
even the simplest examples of it use polymorphism
it's the only C++ feature that has a 2d array in it
the this-pointer is invisible, and whether your operators are member functions or outside the class is an important choice for programmers
Because of this complexity, only a very small number of programmers actually understand how it works. I'm probably missing many important aspects, but the list above is a good indication that it is a very complex feature.
Update: some explanation of #4. The argument goes pretty much as follows:
class A { friend void f(); }; class B { friend void f(); };
void f() { /* use both A and B members inside this function */ }
With static functions, you can do this:
class A { static void f(); }; void A::f() { /* use only class A members here */ }
And with free functions:
class A { }; void f() { /* you have no special access to any classes */ }
Update #2: for #10, the example I was thinking of looks like this in the standard library:
ostream &operator<<(ostream &o, std::string s) { ... } // inside stdlib
int main() { std::cout << "Hello World" << std::endl; }
Now, the polymorphism in this example happens because you can choose between std::cout, std::ofstream and std::stringstream. This is possible because operator<<'s first parameter is a reference to ostream. This is normal runtime polymorphism in this example.
Update #3: more about the prototypes. The real interaction between operator overloading and prototypes arises because the overloaded operators become part of the class's interface. This brings us to the 2d-array point: inside the compiler, the class interface is a 2d data structure which holds quite complex data, including booleans, types and function names. Rule #4 is needed so that you can choose when your operators are inside this 2d data structure and when they're outside of it. Rule #8 deals with the booleans stored in the 2d data structure. Rule #7 is there because the class's interface is used to represent elements of an expression tree.
Why doesn't Java need operator overloading? Is there any way it can be supported in Java?
Java only allows arithmetic operations on elementary numeric types. It's a mixed blessing, because although it's convenient to define operators on other types (like complex numbers, vectors etc), there are always implementation-dependent idiosyncrasies. So operators don't always do what you expect them to do. By avoiding operator overloading, it's more transparent which function is called when. A wise design move in some people's eyes.
Java doesn't "need" operator overloading, because no language needs it.
a + b is just "syntactic sugar" for a.Add(b) (actually, some would argue that a.Add(b) is just syntactic sugar for Add(a,b))
This related question might help. In short, operator overloading was intentionally avoided when Java was designed because of issues with overloading in C++.
Scala, a newer JVM language, has a syntax that allows method definitions to function very much like operator overloading, without the limitations of C++ operator overloading. In Scala, it's possible to define a method named +, for example. It's also possible to omit the . operator and parentheses in method calls:
case class A(value: Int) {
  def +(other: A) = new A(value + other.value)
}
scala> new A(1) + new A(3)
res0: A = A(4)
No language needs operator overloading. Some believe that Java would benefit from adding it, but its omission has been publicized as a benefit for so long that adding it is almost certainly politically unacceptable (and it's only since the Oracle buyout that I'd even include the "almost").
The counterpoint generally consists of postulating some meaningless (or even counterintuitive) overload, such as adding together two employees or overloading '+' to do division. While operator overloading in a language such as C++ would allow this, the lack of operator overloading in Java does little to prevent or even mitigate the problem. someEmployee.Add(anotherEmployee) is no improvement over someEmployee + anotherEmployee. Likewise, a myLargeInteger.Add(anotherLargeInteger) that actually does division instead of addition is no less misleading than the equivalent operator. At least to me, this line of argument appears thoroughly unconvincing at best.
There is, however, another respect in which omitting operator overloading does (almost certainly) have a real benefit. Its omission keeps the language easier to process, which makes it much easier (and quicker) to develop tools that process the language. Just for an obvious example, refactoring tools for Java are much more numerous and comprehensive than for C++. I doubt that this can or should be credited specifically and solely to support for operator overloading in C++ and its omission in Java. Nonetheless, the general attitude of keeping Java simple (including omission of operator overloading) is undoubtedly a major contributing factor.
The possibility of simplifying parsing by requiring spaces between identifiers and operators (e.g., a+b prohibited, but a + b allowed) has been raised. At least in my opinion, this is unlikely to make any real difference in most cases. The reason is fairly simple: at least in a typical compiler, the parser is preceded by a lexer. The lexer extracts tokens from the input stream and feeds them to the parser. With such a structure, the parser wouldn't see any difference at all between a+b and a + b. Either way, it would receive exactly three tokens: identifier, +, and identifier.
Requiring the spaces might simplify the lexer a tiny bit--but to the extent it did, it would be completely independent of operator overloading, at least assuming the operator overloading was done like it is in C++, where only existing tokens are used.[1]
So, if that's not the problem, what is? The problem with operator overloading is that you can't hard-code a parser to know the meaning of an operator. With Java, for some given a = b + c, there are exactly two possibilities: a, b and c are each chosen from a small, limited set of types, and the meaning of that + is baked into the language, or else you have an error. So, a tool that needs to look at b + c and make sense of it can do a very minimal parse to assure that b and c are of types that can be added. If they are, it knows what the addition means, what kind of result it produces, and so on. If they aren't, it can underline it in red squiggles (or whatever) to indicate an error.
For C++, things are quite different. For an expression like a = b + c;, b and c could be of almost entirely arbitrary types. The + could be implemented as a member function of b's type, or it could be a free function. In some cases, we might have a number of operator overloads (some of which could be templates) that could carry out that operation, so we need to do overload resolution to determine which one the compiler would actually select based on the types of the parameters (and if some of them are templates, the overload resolution rules get even more complex).
That lets us determine the type of the result from b + c. From there we basically repeat the whole process again to figure out what (if any) overload is used to assign that result to a. It might be built-in, or it might be another operator overload, and there might be multiple possible overloads that could do the job, so we have to do overload resolution again to figure out the right operator to use here.
In short, just figuring out what a = b + c; means in C++ requires nearly an entire compiler front-end. We can do the same in Java with a much smaller subset of a compiler.[2]
[1] I suppose things could be somewhat different if you allowed operator overloading like, for example, ML does, where a more or less arbitrary token can be designated as an operator, and that operator can be given a more or less arbitrary associativity and/or precedence. I believe ML handles this entirely in parsing, not lexing, but if you took this basic concept far enough, I can believe it might start to affect lexing, not just parsing.
[2] Not to mention that most Java tools will use the JDK, which has a complete Java compiler built into the JVM, so tools can normally do most such analysis without dealing directly with parsing and such at all.
The java-oo compiler plugin can add operator-overloading support to Java.
It's not that Java doesn't "need" operator overloading; it's just a choice made by its creators, who wanted to keep the language simpler.
Java does not support operator overloading by programmers. This is not the same as stating that Java does not need operator overloading.
Operator overloading is syntactic sugar to express an operation using (arithmetic) symbols. For obvious reasons, the designers of the Java programming language chose to omit support for operator overloading in the language. This declaration can be found in the Java Language Environment whitepaper:
There are no means provided by which programmers can overload the standard arithmetic operators. Once again, the effects of operator overloading can be just as easily achieved by declaring a class, appropriate instance variables, and appropriate methods to manipulate those variables. Eliminating operator overloading leads to great simplification of code.
In my personal opinion, that is a wise decision. Consider the following piece of code:
String b = "b";
String c = "c";
String a = b + c;
Now, it is fairly evident that b and c are concatenated to yield a. But when one considers the following snippet, written in a hypothetical language that supports operator overloading for user classes, it is fairly evident that operator overloading does not always make for readable code.
Person b = new Person("B");
Person c = new Person("C");
Person a = b + c;
In order to understand the result of the above operation, one must view the implementation of the overloaded addition operator for the Person class. Surely, that makes for a tedious debugging session, and the code is better implemented as:
Person b = new Person("B");
Person c = new Person("C");
Person a = b.copyAttributesFrom(c);
OK, well... this is a much-discussed and common issue. Today, in the software industry, there are mainly two different types of languages:
Low level languages
High level languages
This distinction was useful about ten years ago; at present, the situation is a bit different.
Today we talk about business-ready applications.
Business models are particular models where programs need to meet many requirements. They are so complex and so strict that coding an application in a language like C or C++ would be very time-consuming. For this reason hybrid languages were invented.
We commonly know two types of languages:
Compiled
Interpreted
Well, today there is another one:
Compiled/Interpreted: in one word: MANAGED.
Managed languages are languages that are compiled in order to produce another code, different from the original one, but much more complex to handle. This INTERMEDIATE LANGUAGE is then INTERPRETED by a program that runs the final program.
This is the dynamic we came to know from Java... It is a winning approach for business-ready applications.
Well, now going to your question...
Operator overloading is a matter that also concerns multiple inheritance and other advanced characteristics of low-level languages.
Java, like C#, Python and so on, is a managed language, made to be easy to write and useful for building complex applications in very little time.
If we included operator overloading in Java, the language would become more complex and difficult to handle.
If you program in C++ you surely understand that operator overloading is a very, very delicate matter, because it can lead to very complex situations, and sometimes the compiler might refuse to compile because of conflicts and so on... Introducing operator overloading is to be done carefully. IT IS POWERFUL, but we pay for this power with an incredibly big load of problems to handle.
OK, OK, IT IS TRUE, you might tell me: "HEY, but C# uses operator overloading... What the hell are you telling me? Why does C# support it and Java not?"
Well, this is the answer. C#, yes, implements operator overloading, but it is not like C++. There are many operators that cannot be overloaded in C#, like "new" and many others that you can overload in C++... So C# supports operator overloading, but at a much lower level than C++ or other languages that fully support it. But this is not a good answer to the earlier question...
The real answer is that C# is more complex than Java. This is a pro but also a con. It is a matter of deciding where to place the language: high level, higher level, very high level?
Well, Java does not support op overloading because it wants to be fast and easy to manage and use. When introducing op overloading, a language must also carry a large amount of problems caused by this new functionality.
It is exactly like questioning: "Why does Java not support multiple inheritance?"
Because it is tremendously complex to manage. Think about it... IT WOULD BE IMPOSSIBLE for a managed language to support multiple inheritance... No common class tree, no Object class as a common base class for all classes, no possibility of (safe) upcasting, and many problems to handle, manage, foresee and account for...
Java wants to be simple.
Even if I believe that future versions of this language may end up supporting operator overloading, you will see that the overloading mechanics will involve a smaller set of the possibilities you have with overloading in C++.
Many others here have also told you that overloading is useless.
Well, I belong to those who think this is not true.
Well, if you think this way (that operator overloading is useless), then many other features of managed languages are useless too. Think about interfaces, classes and so on; you do not really need them. You can use abstract classes for interface implementations... Look at C#... so much syntactic sugar, LINQ and so on; they are not really necessary, BUT THEY SPEED UP YOUR WORK...
Well, in managed languages everything that speeds up the development process is welcome and does not imply uselessness. If you think that such features are not useful, then the entire language itself would be useless, and we would all go back to programming complex applications in C++, Ada, etc. The added value of managed languages is measured precisely on these elements.
Operator overloading is a very useful feature. It could be implemented in languages like Java, and this would change the language's structure and purposes; it would be both a good thing and a bad thing, just a matter of taste.
But today, Java is simpler than C# even for this reason: because Java does not support operator overloading.
I know, maybe I was a little long, but hope it helps. Bye
Check "Java Features Removed from C and C++", section 2.2.7, "No More Operator Overloading":
There are no means provided by which programmers can overload the standard arithmetic operators. Once again, the effects of operator overloading can be just as easily achieved by declaring a class, appropriate instance variables, and appropriate methods to manipulate those variables. Eliminating operator overloading leads to great simplification of code.
Java doesn't support operator overloading (one reference is the Wikipedia Operator Overloading page). This was a design decision by Java's creators to avoid perceived problems seen with operator overloading in other languages (especially C++).