I program in Java and have been trying to understand exactly what operator overloading is. I'm still a bit puzzled.
So an operator can take on different meanings depending on which class uses it? I've read that this is called "name polymorphism".
Java apparently does not support it and there seems to be a lot of controversy around this. Should I worry about this?
As a last question: in an assignment, the teacher has stated that the assignment uses operator overloading. He is mainly a C++ programmer, but we are allowed to write the assignment in Java. Since Java does not support overloading, is there something I should be wary of?
Operator overloading basically means using the same operator for different data types, and getting different but similar behaviour because of this.
Java indeed doesn't support this but any situation where something like this could be useful, you can easily work around it in Java.
The only overloaded operator in Java is the arithmetic + operator. When used with numbers (int, long, double etc.), it adds them, just as you would expect. When used with String objects, it concatenates them. For example:
String a = "This is ";
String b = " a String";
String c = a + b;
System.out.print (c);
This would print the following on the screen: This is a String.
This is the only situation in Java in which you can talk about operator overloading.
Regarding your assignment: if the requirement is to do something that involves operator overloading, you can't do this in Java. Ask your teacher exactly what language you are allowed to use for this particular assignment. You will most likely need to do it in C++.
PS: In the case of Integer, Long, Double etc. objects, it would also work because of unboxing.
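For instance, a small sketch of that last point (PlusDemo is just an illustrative name): the wrapper objects are unboxed before the arithmetic + is applied, while + on a String still means concatenation:
public class PlusDemo {
    public static void main(String[] args) {
        Integer x = 3, y = 4;               // autoboxed from int literals
        int sum = x + y;                    // x and y are unboxed, then the arithmetic + is applied
        System.out.println(sum);            // prints 7
        System.out.println("sum = " + sum); // here + means String concatenation instead
    }
}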
Java doesn't allow you to overload operators. The language itself does use a very limited kind of operator overloading, though, since + does addition or concatenation depending on the context.
If your assignment asks you to implement something by overloading operators, you won't be able to do it in Java. Maybe you should ask the teacher why he allows Java for such an assignment.
If your assignment only asks you to use an overloaded operator, then having your program use + for concatenation and addition would fit the bill. But I would ask the teacher, because I doubt that it's what he expects.
Java apparently does not support it and there seems to be a lot of controversy around this.
There is no controversy about this. Some people might disagree with the decision, but James Gosling and others decided from day one to leave operator overloading by class developers out of the language. It's not likely to change.
As pointed out by others here, they reserved the right for the JVM to overload operators on a limited basis. The point is that you can't do it when you're developing your own classes.
They did it because there were examples of C++ developers abusing the capability (e.g. giving overloaded operators meanings that had nothing to do with their conventional ones).
Should I worry about this?
No. Java won't explode. You just won't be able to do it for your classes. If you feel like you need to, you'll just have to write C++ or some other language.
As to your query about the difference between operator overloading and polymorphism: polymorphism is a standard OOP concept where an instance of a class may exhibit different behaviour depending on the underlying type. For example, in C++:
class Shape {
public:
    virtual void draw() = 0;
};
class Circle : public Shape {
public:
    virtual void draw() { /* ...draw a circle... */ }
};
class Square : public Shape {
public:
    virtual void draw() { /* ...draw a square... */ }
};
...
Shape *s = new Circle();
s->draw();         // calls Circle::draw()
s = new Square();
s->draw();         // calls Square::draw()
Hence s is exhibiting polymorphism.
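The same idea can be sketched in Java (roughly; no virtual keyword is needed, since Java instance methods can be overridden by default):
abstract class Shape {
    abstract void draw();
}
class Circle extends Shape {
    void draw() { /* ...draw a circle... */ }
}
class Square extends Shape {
    void draw() { /* ...draw a square... */ }
}
// somewhere in a method:
Shape s = new Circle();
s.draw();          // calls Circle.draw()
s = new Square();
s.draw();          // calls Square.draw()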
This is different from operator overloading, but what that is has already been explained in the other answers.
You can either use the natural a != b (a is not equal to b) or a.Equals(b):
b.set(1, 0);
a = b;
b.set(2, 0);
assert( !a.Equals(b) );
But Java has a more limited set of overloaded operators than other languages: http://en.wikipedia.org/wiki/Operator_overloading
Referring to a ~2-year old discussion of the fact that there is no operator overloading in Java ( Why doesn't Java offer operator overloading? ), and coming from many intense C++ years myself to Java, I wonder whether there is a more fundamental reason that operator overloading is not part of the Java language, at least in the case of assignment, than the highest-rated answer in that link states near the bottom of the answer (namely, that it was James Gosling's personal choice).
Specifically, consider assignment.
// C++
#include <iostream>

class MyClass
{
public:
    int x;
    MyClass(const int _x) : x(_x) {}
    MyClass & operator=(const MyClass & rhs) { x = rhs.x; return *this; }
};

int main()
{
    MyClass myObj1(1), myObj2(2);
    MyClass & myRef = myObj1;
    myRef = myObj2;
    std::cout << "myObj1.x = " << myObj1.x << std::endl;
    std::cout << "myObj2.x = " << myObj2.x << std::endl;
    return 0;
}
The output is:
myObj1.x = 2
myObj2.x = 2
In Java, however, the line myRef = myObj2 (assuming the declaration of myRef in the previous line was MyClass myRef = myObj1, as Java requires, since all such variables are automatically Java-style 'references') behaves very differently: it would not cause myObj1.x to change, and the output would be
myObj1.x = 1
myObj2.x = 2
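For concreteness, here is a minimal Java sketch of the code I have in mind (the Main wrapper class is just for illustration):
class MyClass {
    int x;
    MyClass(int x) { this.x = x; }
}
public class Main {
    public static void main(String[] args) {
        MyClass myObj1 = new MyClass(1), myObj2 = new MyClass(2);
        MyClass myRef = myObj1;   // myRef refers to the same object as myObj1
        myRef = myObj2;           // rebinds myRef only; the object myObj1 refers to is untouched
        System.out.println("myObj1.x = " + myObj1.x);  // myObj1.x = 1
        System.out.println("myObj2.x = " + myObj2.x);  // myObj2.x = 2
    }
}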
This difference between C++ and Java leads me to think that the absence of operator overloading in Java, at least in the case of assignment, is not a 'matter of personal choice' on the part of James Gosling, but rather a fundamental necessity given Java's syntax that treats all object variables as references (i.e. MyClass myRef = myObj1 defines myRef to be a Java-style reference). I say this because if assignment in Java causes the left-hand side reference to refer to a different object, rather than allowing the possibility that the object itself change its value, then it would seem that there is no possibility of providing an overloaded assignment operator.
In other words - it's not simply a 'choice', and there's not even the possibility of 'holding your breath' with the hope that it will ever be introduced, as the aforementioned high-rated answer also states (near the end). Quoting: "The reasons for not adding them now could be a mix of internal politics, allergy to the feature, distrust of developers (you know, the saboteur ones), compatibility with the previous JVMs, time to write a correct specification, etc.. So don't hold your breath waiting for this feature.". <-- So this isn't correct, at least for the assignment operator: the reason there's no operator overloading (at least for assignment) is instead fundamental to the nature of Java.
Is this a correct assessment on my part?
ADDENDUM
Assuming the assignment operator is a special case, then my follow-up question is: are there any other operators, or more generally any other language features, that would by necessity be affected in a similar way as the assignment operator? I would like to know how 'deep' the difference goes between Java and C++ regarding variables-as-values/references. That is, in C++, variable tokens represent values (and note that even if the variable token was declared initially as a reference, it is still treated as a value essentially wherever it is used), whereas in Java, variable tokens represent honest-to-goodness references wherever the token is later used.
There is a big misconception, which arises in your question, when talking about similarities and differences between Java and C++: C++ references and Java references are not the same. In Java a reference is a resettable proxy to the real object, while in C++ a reference is an alias for the object. To put it in C++ terms, a Java reference is a garbage-collected pointer, not a reference. Now, going back to your example, to write equivalent code in C++ and Java you would have to use pointers:
struct type { int value; type(int v) : value(v) {} };  // minimal stand-in for the 'type' placeholder

int main() {
    type a(1), b(2);
    type *pa = &a, *pb = &b;
    pa = pb;
    // a is still 1, b is still 2, pa == pb == &b
}
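For comparison, a rough Java sketch of the same situation (type and Demo are just illustrative names, mirroring the placeholder above); here the Java references themselves play the role of the C++ pointers:
class type { int value; type(int v) { value = v; } }
public class Demo {
    public static void main(String[] args) {
        type a = new type(1);  // 'a' is a reference: the Java analogue of the C++ pointer pa
        type b = new type(2);
        a = b;                 // rebinds the reference; the object holding 1 is untouched (and now garbage)
        System.out.println(a.value + " " + b.value + " " + (a == b));  // 2 2 true
    }
}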
Now the examples are the same: the assignment operator is being applied to the pointers to the objects, and in that particular case you cannot overload the operator in C++ either. It is important to note that operator overloading can be easily abused, and that is a good reason to avoid it in the first place. Now if you add the two different types of entities: objects and references, things become more messy to think about.
If you were allowed to overload operator= for a particular object in Java, then you would not be able to have multiple references to the same object, and the language would be crippled:
Type a = new Type(1);
Type b = new Type(2);
a = b; // dispatched to Type.operator=( Type )??
a.foo();
a = new Type(3); // do you want to copy Type(3) into a, or work with a new object?
That in turn would make the type unusable in the language: containers store references, and they reassign them (even the first time, just when an object is created); functions don't really use pass-by-reference semantics, but rather pass the references by value (which is a completely different issue; again, the difference is void foo( type* ) versus void foo( type& )): the proxy entity is copied, and you cannot modify the reference passed in by the caller.
The problem is that the language is trying really hard to hide the fact that a and the object that a refers to are not the same thing (the same happens in C#), and that in turn means that you cannot explicitly state whether an operation is to be applied to the reference or to the referent; that is resolved by the language. The outcome of that design is that any operation that can be applied to references can never be applied to the objects themselves.
As for the rest of the operators, the decision is most probably arbitrary. Because the language hides the reference/object difference, it could have been designed such that a+b was translated into type* operator+( type*, type* ) by the compiler. Since there is no pointer arithmetic on references, there would be no problem, as the compiler would recognize that a+b is an operation that must be applied to the objects (it does not make sense for references). But then it could be considered a little awkward that you can overload +, but you cannot overload =, ==, !=...
That is the path that C# took, where assignment cannot be overloaded for reference types. Interestingly, in C# there are value types, and the sets of operators that can be overloaded for reference and value types are different. Not having coded C# on large projects, I cannot really tell whether that is a real source of confusion or whether people just get used to it (but if you search SO, you will find that a few people do ask why X cannot be overloaded in C# for reference types, where X is one of the operations that can be applied to the reference itself).
That doesn't explain why they couldn't have allowed overloading of other operators like + or -. Considering that James Gosling designed the Java language and said it was his personal choice, which he explains in more detail at the link provided in the question you linked to, I think that's your answer:
There are some things that I kind of feel torn about, like operator overloading. I left out operator overloading as a fairly personal choice because I had seen too many people abuse it in C++. I've spent a lot of time in the past five to six years surveying people about operator overloading and it's really fascinating, because you get the community broken into three pieces: Probably about 20 to 30 percent of the population think of operator overloading as the spawn of the devil; somebody has done something with operator overloading that has just really ticked them off, because they've used like + for list insertion and it makes life really, really confusing. A lot of that problem stems from the fact that there are only about half a dozen operators you can sensibly overload, and yet there are thousands or millions of operators that people would like to define -- so you have to pick, and often the choices conflict with your sense of intuition. Then there's a community of about 10 percent that have actually used operator overloading appropriately and who really care about it, and for whom it's actually really important; this is almost exclusively people who do numerical work, where the notation is very important to appealing to people's intuition, because they come into it with an intuition about what the + means, and the ability to say "a + b" where a and b are complex numbers or matrices or something really does make sense. You get kind of shaky when you get to things like multiply because there are actually multiple kinds of multiplication operators -- there's vector product, and dot product, which are fundamentally very different. And yet there's only one operator, so what do you do? And there's no operator for square-root. Those two camps are the poles, and then there's this mush in the middle of 60-odd percent who really couldn't care much either way. The camp of people that think that operator overloading is a bad idea has been, simply from my informal statistical sampling, significantly larger and certainly more vocal than the numerical guys. So, given the way that things have gone today where some features in the language are voted on by the community -- it's not just like some little standards committee, it really is large-scale -- it would be pretty hard to get operator overloading in. And yet it leaves this one community of fairly important folks kind of totally shut out. It's a flavor of the tragedy of the commons problem.
UPDATE: Re: your addendum, the other assignment operators +=, -=, etc. would also be affected. You also can't write a swap function that swaps the caller's variables (the C-style void swap(int *a, int *b)), among other things.
Is this a correct assessment on my part?
The lack of operator overloading in general is a "personal choice". C#, which is a very similar language, does allow operator overloading. But you still can't overload assignment; what would that even do in a reference-semantics language?
Are there any other operators, or more generally any other language features, that would by necessity be affected in a similar way as the assignment operator? I would like to know how 'deep' the difference goes between Java and C++ regarding variables-as-values/references.
The most obvious is copying. In a reference-semantics language, clone() isn't that common, and isn't needed at all for immutable types like String. But in C++, where the default assignment semantics are based around copying, copy constructors are very common. And automatically generated if you don't define one.
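To make that concrete, here is a small sketch (Point and CopyDemo are hypothetical names): assignment never copies object state in Java, so a copy has to be requested explicitly, e.g. via a copy constructor or clone():
class Point {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    Point(Point other) { this(other.x, other.y); }   // an explicit "copy constructor"
}
public class CopyDemo {
    public static void main(String[] args) {
        Point p = new Point(1, 2);
        Point q = p;             // no copy: q and p refer to the same object
        Point r = new Point(p);  // explicit copy: r is a distinct object with the same state
        p.x = 42;
        System.out.println(q.x + " " + r.x);  // 42 1
    }
}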
A more subtle difference is that it's a lot harder for a reference-semantics language to support RAII than a value-semantics language, because object lifetime is harder to track. Raymond Chen has a good explanation.
The reason why operator overloading is abused in the C++ language is that it is too complex a feature. Here are some aspects of it which make it complex:
expressions are a tree
operator overloading is the interface/documentation for those expressions
interfaces are basically invisible feature in c++
free functions/static functions/friend functions are a big mess in C++
function prototypes are already complex feature
choice of the syntax for operator overloading is less than ideal
there is no other comparable api in c++ language
user-defined types/function names are handled differently than built-in types/function names in function prototypes
it uses advanced math, like the operator<<(ostream&, ostream & (*fptr)(ostream &));
even the simplest examples of it uses polymorphism
It's the only c++ feature that has 2d array in it
this-pointer is invisible and whether your operators are member functions or outside the class is important choice for programmers
Because of this complexity, only a very small number of programmers actually understand how it works. I'm probably missing many important aspects of it, but the list above is a good indication that it is a very complex feature.
Update: some explanation about #4: the argument pretty much goes as follows:
class A { friend void f(); }; class B { friend void f(); };
void f() { /* use both A and B members inside this function */ }
With static functions, you can do this:
class A { static void f(); }; void A::f() { /* use only class A here */ }
And with free functions:
class A { }; void f() { /* you have no special access to any classes */ }
Update#2: The #10, the example I was thinking looks like this in stdlib:
ostream &operator<<(ostream &o, std::string s) { ... } // inside stdlib
int main() { std::cout << "Hello World" << std::endl; }
Now the polymorphism in this example happens because you can choose between std::cout and std::ofstream and std::stringstream. This is possible because operator<<'s first parameter takes a reference to ostream. This is normal runtime polymorphism in this example.
Update #3: About the prototypes still. The real interaction between operator overloading and prototypes is because the overloaded operators becomes part of the class' interface. This brings us to the 2d array thing, because inside the compiler the class interface is a 2d data structure which has quite complex data in it, including booleans, types, function names. The rule #4 is needed so that you can choose when your operators are inside this 2d data structure and when they're outside of it. Rule #8 deals with the booleans stored in the 2d data structure. Rule #7 is because class' interface is used to represent elements of an expression tree.
I've been using Lisp on and off, and I'm catching up with clojure.
The good thing about Clojure is that I can use all the Java functions naturally, and the bad thing is that I also have to know the Java functions.
For example, I had to spend some time (googling) to find the square root function in Java (Math/sqrt in Clojure notation).
Could you recommend me some good information resource for Java functions (libraries) for clojure users that are not so familiar with Java?
It can be anything - good books, webpages, forums or whatever.
I had similar problems when I first started using Clojure. I had done some Java development years ago, but was still pretty unfamiliar with the libraries out there.
Intro
I find the easiest way to use Java is to not really use it. I think a book would be a little bit much to just get started using Java from Clojure. There isn't that much you really need to know, unless you really start getting down into the JVM/Java libraries. Let me explain.
Spend more time learning how to use Clojure inside and out, and become familiar with Clojure-Contrib. For instance, sqrt is in generic.math-functions in clojure.contrib.
Many of the things you'll need are in fact already in Clojure–but still plenty are not.
Become familiar with the calling conventions and syntactic sugar in Clojure for using Java. E.g. Math/sqrt, as per your example, calls the static method (which is just a function, basically) sqrt from the class Math.
Anyway, here's a guide that should help you get started if you find yourself really needing to use Java. I'm going to assume you've done some imperative OO programming, but not much else. And even if you haven't, you should be okay.
Isaac's Clojurist's Guide to Java
Classes
A class is a bundle of methods (functions which act on the class) that can also be a data type: e.g. to create a new instance of the class Double: (Double. 1.2), which initializes a Double (the period is the syntactic sugar for calling the class constructor, which initializes the object with the values you provide) with the value 1.2.
Now, look at the Double class in the Java 6 API:
Double
public Double(double value)
Constructs a newly allocated Double object that represents the primitive double argument.
Parameters:
value - the value to be represented by the Double.
So you can see what happened there. You "built" a new Double with value 1.2, which is a double. A little confusing there, but really a Double is a class that represents a double and can do things relating to doubles.
Static Methods
For instance, to parse a Double value out of a string, we can use the static method (meaning we don't need a particular instance of Double, we can just call it like we called sqrt) parseDouble(String s):
(Double/parseDouble "1.2") => 1.2
Not too tricky there.
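For comparison, the same call written directly in Java looks like this (ParseDemo is just an illustrative wrapper around the standard Double.parseDouble):
public class ParseDemo {
    public static void main(String[] args) {
        double d = Double.parseDouble("1.2");  // static method call: no Double instance needed
        System.out.println(d);                 // 1.2
    }
}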
Nonstatic Methods
Say we want to use a Java class that we initialized to something. Not too difficult:
(-> (String. "Hey there") ;; make a new String object
(.toUpperCase)) ;; pass it to .toUpperCase (look up -> to see what it does)
;; toUpperCase is a non-static method
=> "HEY THERE"
So now we've used a method which is not static, and which requires a real, live String object to deal with. Let's look at how the docs say it works:
toUpperCase
public String toUpperCase()
Converts all of the characters in this String to upper case using the rules of the default locale. This method is equivalent to toUpperCase(Locale.getDefault()).
Returns:
the String, converted to uppercase.
So here we have a method which returns a String (as shown by the "String" after public in the definition) and takes no parameters. But wait! It does take a parameter. In Python, it'd be the implicit parameter self; in Java it is called this.
We could also use the method like this: (.toUpperCase (String. "Hey there")) and get the same result.
More on Methods
Since you deal with mutable data and classes in Java, you need to be able to apply functions to Classes (instances of Classes, really) and not expect a return value.
For instance, say we're dealing with a JFrame from the javax.swing library. We might need to do a number of things to it, not with it (you generally operate with values, not on them in functional languages). We can, like this:
(doto (JFrame. "My Frame!");; clever name
(.setContentPane ... here we'd add a JPanel or something to the JFrame)
(.pack) ;; this simply arranges the stuff in the frame–don't worry about it
(.setVisibleTrue)) ;; this makes the Frame visible
doto just passes its first argument as the first argument to each of the other forms you supply, then returns it. So here we're just doing a lot of things to the JFrame that don't return anything in particular. All these methods are listed as methods of JFrame in the documentation (or its superclasses… don't worry about those yet).
Wrapping up
This should prepare you to explore the JavaDocs yourself. There you'll find everything that is available to you in a standard Java 1.6 install. There will be new concepts, but a quick Google search should answer most of your questions, and you can always come back here with specific ones.
Be sure to look into the other important Clojure functions like proxy and reify as well as extend-type and its friends. I don't often use them, but when I need to, they can be invaluable. I still am understanding them myself, in fact.
There's a ton out there, but it's mostly a problem of volume rather than complexity. It's not a bad problem to have.
Additional reading:
Static or Nonstatic? ;; a guide to static vs. nonstatic methods
The Java Class Library ;; an overview of what's out there, with a nice picture
The JavaDocs ;; linked above
Clojure Java Interop Docs ;; from the Clojure website
Best Java Books ;; as per clartaq's answer
Really, any good Java book can get you started. See for example the answer to the question about the best Java book people have read so far. There are lots of good sources there.
Once you have a little Java under your belt, using it is all just a matter of simple Clojure syntax.
Mastering the content of the voluminous Java libraries is a much bigger task than figuring out how to use them in Clojure.
My first question would be: what do you exactly need? There are many Java libraries out there. Or do you just need the standard libraries? In that case the answer given by dbyrne should be enough.
Keep in mind that in general you are better off using the Clojure data structures, like sequences, instead of the Java equivalents.
Start with the Sun (now Oracle) Java Tutorials: http://download.oracle.com/javase/tutorial/index.html
Then dive into the Java 6 API docs:
http://download-llnw.oracle.com/javase/6/docs/
Then ask questions on #clojure IRC or the mailing list, and read blogs.
For a deep dive into Java the language, I recommend Bruce Eckel's free Thinking in Java:
http://www.mindview.net/Books/TIJ/
I think the plain old Java 6 API Specification should be pretty much all you need.
Why doesn't Java need operator overloading? Is there any way it can be supported in Java?
Java only allows arithmetic operations on elementary numeric types. It's a mixed blessing, because although it's convenient to define operators on other types (like complex numbers, vectors etc), there are always implementation-dependent idiosyncrasies. So operators don't always do what you expect them to do. By avoiding operator overloading, it's more transparent which function is called when. A wise design move in some people's eyes.
Java doesn't "need" operator overloading, because no language needs it.
a + b is just "syntactic sugar" for a.Add(b) (actually, some would argue that a.Add(b) is just syntactic sugar for Add(a,b))
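A quick Java sketch of that desugaring (Complex and add are hypothetical names here, since Java has no user-defined operators to begin with):
class Complex {
    final double re, im;
    Complex(double re, double im) { this.re = re; this.im = im; }
    // what an overloaded "a + b" would desugar to
    Complex add(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }
    public static void main(String[] args) {
        Complex a = new Complex(1, 2), b = new Complex(3, 4);
        Complex c = a.add(b);   // spelled out, instead of a + b
        System.out.println(c.re + " + " + c.im + "i");  // 4.0 + 6.0i
    }
}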
This related question might help. In short, operator overloading was intentionally avoided when Java was designed because of issues with overloading in C++.
Scala, a newer JVM language, has a syntax that allows method overloading that functions very much like operator overloading, without the limitations of C++ operator overloading. In Scala, it's possible to define a method named +, for example. It's also possible to omit the . operator and parentheses in method calls:
case class A(value: Int) {
def +(other: A) = new A(value + other.value)
}
scala> new A(1) + new A(3)
res0: A = A(4)
No language needs operator overloading. Some believe that Java would benefit from adding it, but its omission has been publicized as a benefit for so long that adding it is almost certainly politically unacceptable (and it's only since the Oracle buyout that I'd even include the "almost").
The counterpoint generally consists of postulating some meaningless (or even counterintuitive) overload, such as adding together two employees or overloading '+' to do division. While operator overloading in a language like C++ would allow this, the lack of operator overloading in Java does little to prevent or even mitigate the problem: someEmployee.Add(anotherEmployee) is no improvement over someEmployee + anotherEmployee, and likewise nothing stops myLargeInteger.Add(anotherLargeInteger) from actually doing division instead of addition. At least to me, this line of argument appears thoroughly unconvincing at best.
There is, however, another respect in which omitting operator overloading does (almost certainly) have a real benefit. Its omission keeps the language easier to process, which makes it much easier (and quicker) to develop tools that process the language. Just for an obvious example, refactoring tools for Java are much more numerous and comprehensive than for C++. I doubt that this can or should be credited specifically and solely to support for operator overloading in C++ and its omission in Java. Nonetheless, the general attitude of keeping Java simple (including omission of operator overloading) is undoubtedly a major contributing factor.
The possibility of simplifying parsing by requiring spaces between identifiers and operators (e.g., a+b prohibited, but a + b allowed) has been raised. At least in my opinion, this is unlikely to make any real difference in most cases. The reason is fairly simple: at least in a typical compiler, the parser is preceded by a lexer. The lexer extracts tokens from the input stream and feeds them to the parser. With such a structure, the parser wouldn't see any difference at all between a+b and a + b. Either way, it would receive exactly three tokens: identifier, +, and identifier.
Requiring the spaces might simplify the lexer a tiny bit--but to the extent it did, it would be completely independent of operator overloading, at least assuming the operator overloading was done like it is in C++, where only existing tokens are used.[1]
So, if that's not the problem, what is? The problem with operator overloading is that you can't hard-code a parser to know the meaning of an operator. With Java, for some given a = b + c, there are exactly two possibilities: a, b and c are each chosen from a small, limited set of types, and the meaning of that + is baked into the language, or else you have an error. So, a tool that needs to look at b + c and make sense of it can do a very minimal parse to assure that b and c are of types that can be added. If they are, it knows what the addition means, what kind of result it produces, and so on. If they aren't, it can underline it in red squiggles (or whatever) to indicate an error.
For C++, things are quite different. For an expression like a = b + c;, b and c could be of almost entirely arbitrary types. The + could be implemented as a member function of b's type, or it could be a free function. In some cases, we might have a number of operator overloads (some of which could be templates) that could carry out that operation, so we need to do overload resolution to determine which one the compiler would actually select based on the types of the parameters (and if some of them are templates, the overload resolution rules get even more complex).
That lets us determine the type of the result from b + c. From there we basically repeat the whole process again to figure out what (if any) overload is used to assign that result to a. It might be built-in, or it might be another operator overload, and there might be multiple possible overloads that could do the job, so we have to do overload resolution again to figure out the right operator to use here.
In short, just figuring out what a = b + c; means in C++ requires nearly an entire compiler front-end. We can do the same in Java with a much smaller subset of a compiler.[2]
[1] I suppose things could be somewhat different if you allowed operator overloading like, for example, ML does, where a more or less arbitrary token can be designated as an operator, and that operator can be given a more or less arbitrary associativity and/or precedence. I believe ML handles this entirely in parsing, not lexing, but if you took this basic concept enough further, I can believe it might start to affect lexing, not just parsing.
[2] Not to mention that most Java tools will use the JDK, which has a complete Java compiler built into the JVM, so tools can normally do most such analysis without dealing directly with parsing and such at all.
The java-oo compiler plugin can add operator overloading support to Java.
It's not that Java doesn't "need" operator overloading; it's just a choice made by its creators, who wanted to keep the language simpler.
Java does not support operator overloading by programmers. This is not the same as stating that Java does not need operator overloading.
Operator overloading is syntactic sugar to express an operation using (arithmetic) symbols. For obvious reasons, the designers of the Java programming language chose to omit support for operator overloading in the language. This declaration can be found in the Java Language Environment whitepaper:
There are no means provided by which programmers can overload the standard arithmetic operators. Once again, the effects of operator overloading can be just as easily achieved by declaring a class, appropriate instance variables, and appropriate methods to manipulate those variables. Eliminating operator overloading leads to great simplification of code.
In my personal opinion, that is a wise decision. Consider the following piece of code:
String b = "b";
String c = "c";
String a = b + c;
Now, it is fairly evident that b and c are concatenated to yield a. But when one considers the following snippet, written using a hypothetical language that supports operator overloading, it is fairly evident that using operator overloading does not make for readable code.
Person b = new Person("B");
Person c = new Person("C");
Person a = b + c;
In order to understand the result of the above operation, one must view the implementation of the overloaded addition operator for the Person class. Surely, that makes for a tedious debugging session, and the code is better implemented as:
Person b = new Person("B");
Person c = new Person("C");
Person a = b.copyAttributesFrom(c);
OK, well... we have a much-discussed and common issue here. Today, in the software industry, there are mainly two different types of languages:
Low level languages
High level languages
This distinction was useful about 10 years ago; at present, the situation is a bit different.
Today we talk about business-ready applications.
Business models are particular models where programs need to meet many requirements. They are so complex and so strict that coding an application with a language like C or C++ would be very time-consuming. For this reason, hybrid languages were invented.
We commonly know two types of languages:
Compiled
Interpreted
Well, today there is another one:
Compiled/Interpreted: in one word, MANAGED.
Managed languages are languages that are compiled in order to produce another code, different from the original one, but much more complex to handle. This INTERMEDIATE LANGUAGE is then INTERPRETED by a program that runs the final program.
It is the common dynamic we have come to know from Java... It is a winning approach for business-ready applications.
Well, now going to your question...
Operator overloading is a matter that also concerns multiple inheritance and other advanced characteristics of low-level languages.
Java, as well as C#, Python and so on, is a managed language, made to be easy to write and useful for building complex applications in very little time.
If we included operator overloading in Java, the language would become more complex and difficult to handle.
If you program in C++ you surely understand that operator overloading is a very, very delicate matter, because it can lead to very complex situations and sometimes the compiler might refuse to compile because of conflicts and so on... Introducing operator overloading is something to be done carefully. IT IS POWERFUL, but we pay for this power with an incredibly big load of problems to handle.
OK, OK, IT IS TRUE, you might tell me: "HEY, but C# uses operator overloading... What the hell are you telling me? Why does C# support it and Java not?".
Well, this is the answer. C#, yes, implements operator overloading, but it is not like C++. There are many operators that cannot be overloaded in C#, like "new" and many others that you can overload in C++... So C# supports operator overloading, but at a much lower level than C++ or other languages that fully support it. But this is not a good answer to the earlier question...
The real answer is that C# is more complex than Java. This is a pro but also a con. It is a matter of deciding where to place the language: high level, higher level, very high level?
Well, Java does not support op overloading because it wants to be fast and easy to manage and use. When introducing op overloading, a language must also carry a large amount of problems caused by this new functionality.
It is exactly like questioning: "Why does Java not support multiple inheritance?"
Because it is tremendously complex to manage. Think about it... IT WOULD BE IMPOSSIBLE for a managed language to support multiple inheritance... No common class tree, no Object class as a common base class for all classes, no possibility of (safe) upcasting, and many problems to handle, manage, foresee, keep track of...
Java wants to be simple.
Even though I believe that future implementations of this language will end up supporting operator overloading, you will see that the overloading dynamics will involve a smaller set of all the possibilities you have with overloading in C++.
Many others, here, also told you that overloading is useless.
Well, I belong to those who think this is not true.
Well, if you think this way (that operator overloading is useless), then many other features of managed languages are useless too. Think about interfaces, classes and so on; you do not really need them. You can use abstract classes for interface implementations... Let's look at C#... so much syntactic sugar, LINQ and so on; they are not really necessary, BUT THEY SPEED UP YOUR WORK...
Well, in managed languages everything that speeds up the development process is welcome and does not imply uselessness. If you think that such features are not useful, then the entire language itself would be useless and we would all go back to programming complex applications in C++, Ada, etc. The added value of managed languages is to be measured precisely on these elements.
Operator overloading is a very useful feature. It could be implemented in languages like Java, and this would change the language's structure and purposes; it would be a good thing but a bad thing too, just a matter of taste.
But today, Java is simpler than C# partly for this very reason: because Java does not support operator overloading.
I know, maybe I was a little long, but hope it helps. Bye
Check Java Features Removed from C and C++, section 2.2.7, No More Operator Overloading:
There are no means provided by which programmers can overload the standard arithmetic operators. Once again, the effects of operator overloading can be just as easily achieved by declaring a class, appropriate instance variables, and appropriate methods to manipulate those variables. Eliminating operator overloading leads to great simplification of code.
Java doesn't support operator overloading (one reference is the Wikipedia Operator Overloading page). This was a design decision by Java's creators to avoid perceived problems seen with operator overloading in other languages (especially C++).
Some people say that every programming language has its "complexity budget" which it can use to accomplish its purpose. But if the complexity budget is depleted, every minor change becomes increasingly complicated and hard to implement in a backward-compatible way.
After reading the current provisional syntax for Lambda (≙ Lambda expressions, exception transparency, defender methods and method references) from August 2010 I wonder if people at Oracle completely ignored Java's complexity budget when considering such changes.
These are the questions I'm thinking about - some of them more about language design in general:
Are the proposed additions comparable in complexity to approaches other languages chose?
Is it generally possible to add such additions to a language while protecting the developer from the complexity of the implementation?
Are these additions a sign of reaching the end of the evolution of Java-as-a-language or is this expected when changing a language with a huge history?
Have other languages taken a totally different approach at this point of language evolution?
Thanks!
I have not followed the process and evolution of the Java 7 lambda proposal, and I am not even sure what the latest proposal wording is. Consider this a rant/opinion rather than a statement of truth. Also, I have not used Java for ages, so the syntax might be rusty and incorrect in places.
First, what are lambdas to the Java language? Syntactic sugar. While in general lambdas enable code to create small function objects in place, that support was already present --to some extent-- in the Java language through the use of inner classes.
So how much better is the syntax of lambdas? Where does it outperform previous language constructs? Where could it be better?
For starters, I dislike the fact that there are two available syntaxes for lambda functions (but this goes in line with C#, so I guess my opinion is not widespread). I guess if we want to sugar-coat, then #(int x)(x*x) is sweeter than #(int x){ return x*x; }, even if the double syntax does not add anything else. I would have preferred the second syntax, more generic at the extra cost of writing return and ; in the short version.
To be really useful, lambdas can take variables from the scope in which they are defined and form a closure. Being consistent with inner classes, lambdas are restricted to capturing 'effectively final' variables. Consistency with the previous features of the language is nice, but for sweetness it would be nice to be able to capture variables that can be reassigned. For that purpose, they are considering that variables present in the context and annotated with #Shared will be captured by-reference, allowing assignments. To me this seems weird, as how a lambda can use a variable is determined at the place of declaration of the variable rather than where the lambda is defined. A single variable could be used in more than one lambda, and this forces the same behavior in all of them.
Lambdas try to simulate actual function objects, but the proposal does not get completely there: to keep the parser simple (since up to now an identifier denotes either an object or a method, and that has been kept consistent), calling a lambda requires using a ! after the lambda name: #(int x)(x*x)!(5) will return 25. This brings in a new syntax for lambdas that differs from the rest of the language, where ! stands somehow as a synonym for .execute on a virtual generic interface Lambda<Result,Args...>. But why not make it complete?
A new generic (virtual) interface Lambda could be created. It would have to be virtual, as the interface is not a real interface but a family of them: Lambda<Return>, Lambda<Return,Arg1>, Lambda<Return,Arg1,Arg2>... They could define a single execution method, which I would like to be like C++ operator(), but if that is a burden then any other name would be fine, embracing the ! as a shortcut for the method execution:
interface Lambda<R> {
    R exec();
}
interface Lambda<R,A> {
    R exec( A a );
}
Then the compiler need only translate identifier!(args) to identifier.exec( args ), which is simple. The translation of the lambda syntax would require the compiler to identify the proper interface being implemented and could be matched as:
#( int x )(x *x)
// translated to
new Lambda<int,int>{ int exec( int x ) { return x*x; } }
This would also allow users to define inner classes that can be used as lambdas in more complex situations. For example, if the lambda function needed to capture a variable annotated as #Shared in a read-only manner, or maintain the state of the captured object at the place of capture, a manual implementation of the Lambda would be available:
new Lambda<int,int>{ int value = context_value;
    int exec( int x ) { return x * value; }
};
In a manner similar to what the current Inner classes definition is, and thus being natural to current Java users. This could be used, for example, in a loop to generate multiplier lambdas:
Lambda<int,int>[] array = new Lambda<int,int>[10];
for (int i = 0; i < 10; ++i ) {
    array[i] = new Lambda<int,int>{ final int multiplier = i;
        int exec( int x ) { return x * multiplier; }
    };
}
// note this is disallowed in the current proposal, as `i` is
// not effectively final and as such cannot be 'captured'. Also
// if `i` was marked #Shared, then all the lambdas would share
// the same `i` as the loop and thus would produce the same
// result: multiply by 10 --probably quite unexpectedly.
//
// I am aware that this can be rewritten as:
// for (int ii = 0; ii < 10; ++ii ) { final int i = ii; ...
//
// but that is not simplifying the system, just pushing the
// complexity outside of the lambda.
This would allow usage of lambdas and methods that accept lambdas both with the new simple syntax: #(int x){ return x*x; } or with the more complex manual approach for specific cases where the sugar coating interferes with the intended semantics.
Overall, I believe that the lambda proposal can be improved in different directions, that the way it adds syntactic sugar is a leaking abstraction (you have to deal externally with issues that are particular to the lambda), and that by not providing a lower-level interface it makes user code less readable in use cases that do not perfectly fit the simple use case.
Modulo some scope-disambiguation constructs, almost all of these methods follow from the actual definition of a lambda abstraction:
λx.E
To answer your questions in order:
I don't think there are any particular things that make the proposals by the Java community better or worse than anything else. As I said, it follows from the mathematical definition, and therefore all faithful implementations are going to have almost exactly the same form.
Anonymous first-class functions bolted onto imperative languages tend to end up as a feature that some programmers love and use frequently, and that others ignore completely - therefore it is probably a sensible choice to give it some syntax that will not confuse the kinds of people who choose to ignore the presence of this particular language feature. I think hiding the complexity and particulars of implementation is what they have attempted to do by using syntax that blends well with Java, but which has no real connotation for Java programmers.
It's probably desirable for them to use some bits of syntax that are not going to complicate existing definitions, and so they are slightly constrained in the symbols they can choose to use as operators and such. Certainly Java's insistence on remaining backwards-compatible limits the language evolution slightly, but I don't think this is necessarily a bad thing. The PHP approach is at the other end of the spectrum (i.e. "let's break everything every time there is a new point release!"). I don't think that Java's evolution is inherently limited except by some of the fundamental tenets of its design - e.g. adherence to OOP principles, VM-based.
I think it's very difficult to make strong statements about language evolution from Java's perspective. It is in a reasonably unique position. For one, it's very, very popular, but it's relatively old. Microsoft had the benefit of at least 10 years worth of Java legacy before they decided to even start designing a language called "C#". The C programming language basically stopped evolving at all. C++ has had few significant changes that found any mainstream acceptance. Java has continued to evolve through a slow but consistent process - if anything I think it is better-equipped to keep on evolving than any other languages with similarly huge installed code bases.
It's not much more complicated than lambda expressions in other languages.
Consider...
int square(int x) {
    return x*x;
}
Java:
#(x){x*x}
Python:
lambda x:x*x
C#:
x => x*x
I think the C# approach is slightly more intuitive. Personally I would prefer...
x#x*x
Maybe this is not really an answer to your question, but this may be comparable to the way Objective-C (which of course has a very narrow user base in contrast to Java) was extended by blocks (examples). While the syntax does not fit the rest of the language (IMHO), it is a useful addition, and the added complexity in terms of language features is rewarded, for example, with lower complexity of concurrent programming (simple things like concurrent iteration over an array or complicated techniques like Grand Central Dispatch).
In addition, many common tasks are simpler when using blocks, for example making one object a delegate (or - in Java lingo - "listener") for multiple instances of the same class. In Java, anonymous classes can already be used for that cause, so programmers know the concept and can just spare a few lines of source code using lambda expressions.
In objective-c (or the Cocoa/Cocoa Touch frameworks), new functionality is now often only accessible using blocks, and it seems like programmers are adopting it quickly (given that they have to give up backwards compatibility with old OS versions).
This is really, really close to the lambda functions proposed in the new generation of C++ (C++0x), so I think the Oracle guys looked at the other implementations before cooking up their own.
http://en.wikipedia.org/wiki/C%2B%2B0x
[](int x, int y) { return x + y; }