Java has primitive data types which, unlike in Ruby, don't derive from Object. So can we consider Java a 100% object-oriented language? Another question: why doesn't Java design primitive data types the object way?
When Java first appeared (versions 1.x) the JVM was really, really slow.
Not implementing primitives as first-class objects was a compromise made for speed, although I think in the long run it was a really bad decision.
"Object oriented" also means lots of things for lots of people.
You can have class-based OO (C++, Java, C#), or you can have prototype-based OO (Javascript, Lua).
100% object oriented doesn't mean much, really. Ruby also has problems that you'll encounter from time to time.
What bothers me about Java is that it doesn't provide the means to abstract ideas efficiently, to extend the language where it has problems. And whenever this issue was raised (see Guy Steele's "Growing a Language") the "oh noes, but what about Joe Sixpack?" argument is given. Even if you design a language that prevents shooting yourself in the foot, there's a difference between accidental complexity and real complexity (see No Silver Bullet) and mediocre developers will always find creative ways to shoot themselves.
For example Perl 5 is not object-oriented, but it is extensible enough that it allows Moose, an object system that allows very advanced techniques for dealing with the complexity of OO. And syntactic sugar is no problem.
No, because it has data types that are not objects (such as int and byte). I believe Smalltalk is truly object-oriented but I have only a little experience with that language (about two months worth some five years ago).
I've also heard claims from the Ruby crowd but I have zero experience with that language.
This is, of course, using the definition of "truly OO" meaning it only has objects and no other types. Others may not agree with this definition.
It appears, after a little research into Python (I had no idea about the name/object distinction despite having coded in it for a year or so - more fool me, I guess), that it may indeed be truly OO.
The following code works fine:
#!/usr/bin/python
# Python 2: even a plain integer literal has an identity, a type, and methods
i = 7
print id(i)        # the object's identity
print type(i)      # its class
print i.__str__()  # calling a method on the int directly
outputting:
6701648
<type 'int'>
7
so even the base integers are objects here.
To get to true 100% OO think Smalltalk for instance, where everything is an object, including the compiler itself, and even if statements: ifTrue: is a message sent to a Boolean with a block of code parameter.
The problem is that object-oriented is not really well defined and can mean a lot of things. This article explains the problem in more detail:
http://www.paulgraham.com/reesoo.html
Also, Alan Kay (the inventor of Smalltalk and coiner(?) of the term "object-oriented") famously said that he didn't have C++ in mind when he thought about OOP. So I think this could apply to Java as well.
The language being fully OO (whatever that means) is desirable, because it means better orthogonality, which is a good thing. But given that Java is not very orthogonal anyway in other respects, the small bit of its OO incompleteness probably doesn't matter in practice.
Java is not 100% OO.
Java may be going towards 99% OO (think of auto-boxing, or of Scala).
I would say Java is now 87% OO.
Why doesn't Java design primitive data types the object way?
Back in the '90s there were performance reasons, and at the same time Java has stayed backward compatible, so they cannot take them out now.
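To make the auto-boxing mentioned above concrete, here is a minimal sketch (Java 5 and later); the variable names are just for illustration:
Integer boxed = 42;        // autoboxing: the int value is wrapped in an Integer object
int unboxed = boxed + 1;   // unboxing: the Integer is converted back to an int for the arithmetic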
No, Java is not, since it has primitive data types, which are different from objects (they don't have methods, instance variables, etc.). Ruby, on the other hand, is completely OOP. Everything is an object. I can do this:
1.class
And it will return the class of 1 (Fixnum, which is basically a number). You can't do this in Java.
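For comparison, a small Java sketch of the same idea (just an illustration): the primitive itself has no methods, but its boxed wrapper does.
// int i = 1; i.getClass();                  // does not compile: primitives have no methods
Class<?> c = Integer.valueOf(1).getClass();  // works on the boxed wrapper
System.out.println(c);                       // prints: class java.lang.Integer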
Java, for the reason you mentioned, having primitives, doesn't make it a purely object-oriented programming language. However, the enforcement of having every program be a class makes it very oriented toward object-oriented programming.
Ruby, as you mentioned, and happened to be the first language that came to my mind as well, is a language that does not have primitives, and all values are objects. This certainly does make it more object-oriented than Java. On the other hand, to my knowledge, there is no requirement that a piece of code must be associated with a class, as is the case with Java.
That said, Java does have objects that wrap around the primitives, such as Integer, Boolean, Character and so on. The reason for having primitives is probably the reason given in Peter's answer -- back when Java was introduced in the mid-90's, memory on systems was mostly in the double-digit megabytes, so having each and every value be an object was a large overhead.
(Large, of course, is relative. I can't remember the exact numbers, but an object's overhead was around 50-100 bytes of memory -- definitely more than the few bytes required for primitive values.)
Today, with desktops having multiple gigabytes of memory, the overhead of objects is less of an issue.
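A rough, hedged way to picture that overhead (the exact numbers are JVM-dependent):
// One million primitives: a single contiguous block of roughly 4 MB.
int[] primitives = new int[1_000_000];

// One million boxed values: an array of references, each pointing to a separate
// Integer object that carries its own per-object header.
Integer[] boxed = new Integer[1_000_000];
for (int i = 0; i < boxed.length; i++) {
    boxed[i] = i;   // autoboxing allocates (or reuses a cached) Integer object
}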
"Why java doesn't design primitive data types as object way ?"
At Sun developer days, quite a few years ago I remember James Gosling answering this. He said that they'd liked to have totally abstracted away from primitives - to leave only standard objects, but then ran out of time and decided to ship with what they had. Very sad.
So can we consider Java a 100% object-oriented language?
No.
Another question: why doesn't Java design primitive data types the object way?
Mainly for performance reasons, possibly also to be more familiar to people coming from C++.
One reason Java can't obviously do away with non-object primitives (int, etc.) is that it does not support native data members. Imagine:
class int extends Object
{
    // need some data member here, but of what type?
    public native int();
    public native int plus(int x);
    // several more non-mutating methods
}
On second thought, we know Java maintains internal data per object (locks, etc.). Maybe we could define class int with no data members, but with native methods that manipulate this internal data.
Remaining issues: constants - but these can be dealt with similarly to strings. Operators are just syntactic sugar: + would be mapped to the plus method at compile time, although we need to be careful that int.plus(float) returns float, as does float.plus(int), and so on.
Ultimately I think the justification for primitives is efficiency: the static analysis needed to determine that an int object can be treated purely as JVM integer value may have been considered too big a problem when the language was designed.
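To make the thought experiment a bit more concrete, here is a hedged sketch of such a wrapper written as an ordinary value class (hypothetical name); note that it still needs a primitive field inside, which is exactly the data-member problem raised above:
// Purely illustrative; this is not how java.lang.Integer is defined.
final class Int {
    private final int value;                 // something primitive still has to hold the bits

    Int(int value) { this.value = value; }

    Int plus(Int other) {                    // what the + operator would be sugar for
        return new Int(this.value + other.value);
    }
}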
I'd say that full-OO languages are those which have their elements (classes, methods) accessible as objects to work with.
From this POV, Java is not a fully OO language, and JavaScript is (even though it has primitives).
According to the book Concepts in Programming Languages, there is something called the Ingalls test, proposed by Dan Ingalls, a leader of the Smalltalk group. That is:
Can you define a new kind of integer, put your new integers into rectangles (which are already part of the window system), ask the system to blacken a rectangle, and have everything work?
And again, according to the book, Smalltalk passes this test but C++ and Java do not. Unfortunately the book is not available online, but here are some supporting slides (slide 33 answers your question).
No. Javascript, for example, is.
What would those Integer and Long and Boolean classes be written in?
How would you write an ArrayList or HashMap without primitive arrays?
This is one of those questions that really only matters in an academic sense. Ruby is optimized to treat ints, longs, etc. as primitives whenever possible. Java just made this explicit. If Java's primitives were objects, there would be IntPrimitive, LongPrimitive, etc. classes (by whatever name), which would most likely be final and without special methods (e.g. no IntPrimitive.factorial), which would mean that for most purposes they would still be primitives.
Java clearly is not 100% OO. You can easily program it in a procedural style. Most people do. It's true that the libraries and containers tend not to be as forgiving of this paradigm.
Java is not fully object oriented. I would consider Smalltalk and Eiffel the most popular fully object oriented languages.
In Java, for example, there is the primitive data type "int", which represents a 32-bit value, and there is "Integer", which is just a class with a single "int" property (and some methods, of course). That means that the Java "Integer" class still uses primitives behind the scenes. And that's the reason Java is not a pure object-oriented programming language.
Where could a value be stored if there were no primitives? For example, I imagine this pseudo class:
class Integer
{
    private Integer i = 12;

    public Integer getInteger()
    {
        return this.i;
    }
}
This would be recursive.
How can a programming language be implemented without primitives?
I appreciate any help solving my confusion.
Behind the scenes there will always be primitives, because in the end it is just bits in memory. But some languages hide the primitives so that you can work only with objects. Java allows you to work with both objects and primitives.
If by primitives you mean value types, then yes, you can live without them as a user: use Integer instead of int and pay the overhead of heap allocation and GC. But this doesn't come for free; you have to pay the cost. Primitives like 32-bit/64-bit integers and IEEE-754 floating point will always be faster because there is hardware support for them.
From a compiler writer point of view you have to use what the machine supports to make things work.
LISP is a very simple functional language. Basic LISP did not have a primitive int, and one solution for integers was to represent 3 as successor of successor of successor of zero.
This actually had some advantages: integers being open-ended, no overflow, so operations really are commutative, associative, and so on. Some nice optimizations become possible. And of course succ(succ(succ(zero))) could be encoded in a more tuple-like way (probably better not in LISP).
In a later, normal LISP, '3' would be an atom, as would 123, with math operators.
Then there are symbol-manipulating languages (SNOBOL) that could do math on numerical strings: ['4', '0'] * ['3'].
So names are objects (atoms), like a char 'a' or an int 42.
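The successor-of-successor idea carries over to any object-oriented language; here is a hedged toy version in Java (class names are purely illustrative):
interface Nat {}
final class Zero implements Nat {}
final class Succ implements Nat {
    final Nat pred;
    Succ(Nat pred) { this.pred = pred; }
}

class PeanoDemo {
    static int toInt(Nat n) {                // count the Succ wrappers
        int i = 0;
        while (n instanceof Succ) { n = ((Succ) n).pred; i++; }
        return i;
    }
    public static void main(String[] args) {
        Nat three = new Succ(new Succ(new Succ(new Zero())));
        System.out.println(toInt(three));    // prints 3
    }
}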
It might help to show you the analogous code in a language that takes the "everything is an object" design principle much more seriously than Java does. Namely, Smalltalk. Imagine what it would be like if Java had only int, not Integer, but everything you used to need to use Integer for was possible with int. That's Smalltalk.
This is an excerpt of the code defining the SmallInteger class in Squeak 5.0:
Integer immediateSubclass: #SmallInteger
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: 'Kernel-Numbers'!
!SmallInteger commentStamp: 'eem 11/20/2014 08:41' prior: 0!
My instances are at least 31-bit numbers, stored in twos complement
form. The allowable range in 32-bits is approximately +- 10^9
(+- 1billion). In 64-bits my instances are 61-bit numbers,
stored in twos complement form. The allowable range is
approximately +- 10^18 (+- 1 quintillion). The actual
values are computed at start-up. See SmallInteger class startUp:,
minVal, maxVal.!
!SmallInteger methodsFor: 'arithmetic' stamp: 'di 2/1/1999 21:31'!
+ aNumber
"Primitive. Add the receiver to the argument and answer with the result
if it is a SmallInteger. Fail if the argument or the result is not a
SmallInteger.
Essential, No Lookup. See Object documentation whatIsAPrimitive."
<primitive: 1>
^ super + aNumber! !
!SmallInteger class methodsFor: 'instance creation' stamp: 'tk 4/20/1999 14:17'!
basicNew
self error: 'SmallIntegers can only be created by performing arithmetic'! !
Don't sweat the fine details of syntax or semantics. What you should get out of this is: SmallInteger is defined as an object class just like everything else in the language, and arithmetic operations are methods just like every other piece of code in the language. But it's a little odd. It has no instance variables, you can only create instances by performing arithmetic, and most of the methods look like they're being defined circularly.
"Under the hood", the implementation maps arithmetic to the appropriate machine instructions (the <primitive: 1> thing is a hint to the implementation about that) and stores SmallIntegers as nothing more than the integer itself. The restricted range, relative to the hardware, is because a couple of bits are reserved to mark memory words as integers, rather than pointers to objects ("tagged pointers").
Without being able eventually to access real data (e.g. primitives or actual bits) on a machine, directly or indirectly, it is no longer a programming language; it is an interface description language.
(I'll rephrase the question to what I believe you're asking. If you think I've got it wrong, feel free to comment.)
How can a type system that's based on composition and inheritance define any useful type, if there are no intrinsic types to start from? Unless the language implementation knows about at least one intrinsic type to start from, any defined types would be doomed to be either recursive or empty. Is this inevitable?
Yes, in every C-family language that I know of, this is pretty much inevitable.
If every type is composed of other types then, at the very least, you need to have an intrinsic type to build upon - for example, an intrinsic type that represents a bit, in order to construct the byte type out of it through composition, then the word type, then various integer types, and so on. Then you'd need to define the operations that can be performed on these types, by manipulating the bits that make up their internal representation.
And even though all you need is one intrinsic type to build upon, it would likely be terribly inefficient - you don't want to waste space or CPU cycles and you do want to take advantage of the various storage locations and instructions that your target architecture offers, including FP registers and other stuff.
Thus, a good compromise between performance and "purity" is to offer in the language some intrinsic types that are likely to be recognizable by modern CPUs (like int32, int64, float, double, etc) and build the rest of the type system upon them. In Java, they decided to call these intrinsic types primitives and make them separate from classes.
Eventually everything comes back to bits in memory and instructions to the computer. The difference between assembler, compiled, procedural, object oriented, and all the other things is how much abstraction there is between you and the bits and how much benefit (or cost) you get from that abstraction.
I can't see the reason why the Boolean wrapper class was made immutable.
Why was the Boolean wrapper not implemented like MutableBoolean in Commons Lang, which can actually be reset?
Does anyone have any idea or understanding about this? Thanks.
Because 2 is 2. It won't be 3 tomorrow.
Immutable is always preferred as the default, especially in multithreaded situations, and it makes for easier to read and more maintainable code. Case in point: the Java Date API, which is riddled with design flaws. If Date were immutable the API would be very streamlined. I would know Date operations would create new dates and would never have to look for APIs that modify them.
Read Concurrency in Practice to understand the true importance of immutable types.
But also note that if for some reason you want mutable types, use AtomicInteger, AtomicBoolean, etc. Why Atomic? Because by introducing mutability you have introduced a need for thread safety, which you wouldn't have needed if your types stayed immutable. So in using mutable types you also pay the price of thinking about thread safety and using types from the concurrent package. Welcome to the wonderful world of concurrent programming.
Also, for Boolean - I challenge you to name a single operation that you might want to perform that cares whether Boolean is mutable. Set it to true? Use myBool = true. That is a re-assignment, not a mutation. Negate it? myBool = !myBool. Same rule. Note that immutability is a feature, not a constraint, so if you can offer it, you should - and in these cases, of course you can.
Note this applies to other types as well. The most subtle thing with integers is count++, but that is just count = count + 1, unless you care about getting the value atomically... in which case use the mutable AtomicInteger.
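A small sketch of the two options described above (nothing more than an illustration):
import java.util.concurrent.atomic.AtomicInteger;

public class CountDemo {
    public static void main(String[] args) {
        // Plain int: count++ is just sugar for count = count + 1, a re-assignment
        // (and not atomic across threads).
        int count = 0;
        count++;

        // AtomicInteger: a mutable holder with an atomic read-modify-write.
        AtomicInteger atomicCount = new AtomicInteger(0);
        atomicCount.incrementAndGet();

        System.out.println(count + " " + atomicCount.get());   // prints: 1 1
    }
}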
Wrapper classes in Java are immutable so the runtime can have only two Boolean objects - one for true, one for false - and every variable is a reference to one of those two. And since they can never be changed, you know they'll never be pulled out from under you. Not only does this save memory, it makes your code easier to reason about - since the wrapper classes you're passing around you know will never have their value change, they won't suddenly jump to a new value because they're accidentally a reference to the same value elsewhere.
Similarly, Integer has a cache of all signed byte values - -128 to 127 - so the runtime doesn't have to have extra instances of those common Integer values.
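You can see that cache at work (a hedged example; the upper bound of the cache can be configured on some JVMs):
Integer a = Integer.valueOf(100);
Integer b = Integer.valueOf(100);
System.out.println(a == b);        // true: both references point to the same cached instance

Integer c = Integer.valueOf(1000);
Integer d = Integer.valueOf(1000);
System.out.println(c == d);        // usually false: 1000 is outside the default -128..127 cache
System.out.println(c.equals(d));   // true: value comparison is what you normally want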
Patashu is the closest. Many of the goofy design choices in Java were because of the limitations of how they implemented a VM. I think originally they tried to make a VM for C or C++ but it was too hard (impossible?), so they made this other, similar language. Write once, run anywhere!
Any computer-sciency justification like those other dudes spout is just after-the-fact folderol. As you now know, Java and C# are evolving to be as powerful as C. Sure, they were cleaner. They ought to be, for languages designed decade(s) later!
Simple trick is to make a "holder" class. Or use a closure nowadays! Maybe Java is evolving into JavaScript. LOL.
Boolean, like any other wrapper class, is immutable in Java. Since wrapper classes are used as variables for storing simple data, they should be safe and data integrity must be maintained to avoid inconsistent or unwanted results. Also, immutability saves lots of memory by avoiding duplicate objects. More can be found in the article Why Strings & Wrapper classes are designed immutable in java?
Why doesn't Java need operator overloading? Is there any way it can be supported in Java?
Java only allows arithmetic operations on elementary numeric types. It's a mixed blessing, because although it's convenient to define operators on other types (like complex numbers, vectors etc), there are always implementation-dependent idiosyncrasies. So operators don't always do what you expect them to do. By avoiding operator overloading, it's more transparent which function is called when. A wise design move in some people's eyes.
Java doesn't "need" operator overloading, because no language needs it.
a + b is just "syntactic sugar" for a.Add(b) (actually, some would argue that a.Add(b) is just syntactic sugar for Add(a,b))
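Java's own BigInteger is a handy illustration of living without the sugar: the "operator" is just an ordinary method call.
import java.math.BigInteger;

BigInteger a = BigInteger.valueOf(2);
BigInteger b = BigInteger.valueOf(3);
BigInteger sum = a.add(b);          // what a + b would mean if Java overloaded + for BigInteger
System.out.println(sum);            // prints 5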
This related question might help. In short, operator overloading was intentionally avoided when Java was designed because of issues with overloading in C++.
Scala, a newer JVM language, has a syntax that allows method overloading that functions very much like operator overloading, without the limitations of C++ operator overloading. In Scala, it's possible to define a method named +, for example. It's also possible to omit the . operator and parentheses in method calls:
case class A(value: Int) {
  def +(other: A) = new A(value + other.value)
}
scala> new A(1) + new A(3)
res0: A = A(4)
No language needs operator overloading. Some believe that Java would benefit from adding it, but its omission has been publicized as a benefit for so long that adding it is almost certainly politically unacceptable (and it's only since the Oracle buyout that I'd even include the "almost").
The counterpoint generally consists of postulating some meaningless (or even counterintuitive) overload, such as adding together two employees, or overloading '+' to do division. While operator overloading in a language like C++ would allow this, the lack of operator overloading in Java does little to prevent or even mitigate the problem. someEmployee.Add(anotherEmployee) is no improvement over someEmployee + anotherEmployee, and likewise if myLargeInteger.Add(anotherLargeInteger) actually does division instead of addition. At least to me, this line of argument appears thoroughly unconvincing at best.
There is, however, another respect in which omitting operator overloading does (almost certainly) have a real benefit. Its omission keeps the language easier to process, which makes it much easier (and quicker) to develop tools that process the language. Just for an obvious example, refactoring tools for Java are much more numerous and comprehensive than for C++. I doubt that this can or should be credited specifically and solely to support for operator overloading in C++ and its omission in Java. Nonetheless, the general attitude of keeping Java simple (including omission of operator overloading) is undoubtedly a major contributing factor.
The possibility of simplifying parsing by requiring spaces between identifiers and operators (e.g., a+b prohibited, but a + b allowed) has been raised. At least in my opinion, this is unlikely to make any real difference in most cases. The reason is fairly simple: at least in a typical compiler, the parser is preceded by a lexer. The lexer extracts tokens from the input stream and feeds them to the parser. With such a structure, the parser wouldn't see any difference at all between a+b and a + b. Either way, it would receive exactly three tokens: identifier, +, and identifier.
Requiring the spaces might simplify the lexer a tiny bit -- but to the extent it did, it would be completely independent of operator overloading, at least assuming the operator overloading was done like it is in C++, where only existing tokens are used. [1]
So, if that's not the problem, what is? The problem with operator overloading is that you can't hard-code a parser to know the meaning of an operator. With Java, for some given a = b + c, there are exactly two possibilities: a, b and c are each chosen from a small, limited set of types, and the meaning of that + is baked into the language, or else you have an error. So, a tool that needs to look at b + c and make sense of it can do a very minimal parse to ensure that b and c are of types that can be added. If they are, it knows what the addition means, what kind of result it produces, and so on. If they aren't, it can underline it in red squiggles (or whatever) to indicate an error.
For C++, things are quite different. For an expression like a = b + c;, b and c could be of almost entirely arbitrary types. The + could be implemented as a member function of b's type, or it could be a free function. In some cases, we might have a number of operator overloads (some of which could be templates) that could carry out that operation, so we need to do overload resolution to determine which one the compiler would actually select based on the types of the parameters (and if some of them are templates, the overload resolution rules get even more complex).
That lets us determine the type of the result from b + c. From there we basically repeat the whole process again to figure out what (if any) overload is used to assign that result to a. It might be built-in, or it might be another operator overload, and there might be multiple possible overloads that could do the job, so we have to do overload resolution again to figure out the right operator to use here.
In short, just figuring out what a = b + c; means in C++ requires nearly an entire compiler front-end. We can do the same in Java with a much smaller subset of a compiler. [2]
[1] I suppose things could be somewhat different if you allowed operator overloading like, for example, ML does, where a more or less arbitrary token can be designated as an operator, and that operator can be given a more or less arbitrary associativity and/or precedence. I believe ML handles this entirely in parsing, not lexing, but if you took this basic concept far enough, I can believe it might start to affect lexing, not just parsing.
[2] Not to mention that most Java tools will use the JDK, which has a complete Java compiler built into the JVM, so tools can normally do most such analysis without dealing directly with parsing and such at all.
The java-oo compiler plugin can add operator overloading support to Java.
It's not that Java doesn't "need" operator overloading; it's just a choice made by its creators, who wanted to keep the language simpler.
Java does not support operator overloading by programmers. This is not the same as stating that Java does not need operator overloading.
Operator overloading is syntactic sugar to express an operation using (arithmetic) symbols. For obvious reasons, the designers of the Java programming language chose to omit support for operator overloading in the language. This declaration can be found in the Java Language Environment whitepaper:
There are no means provided by which programmers can overload the standard arithmetic operators. Once again, the effects of operator overloading can be just as easily achieved by declaring a class, appropriate instance variables, and appropriate methods to manipulate those variables. Eliminating operator overloading leads to great simplification of code.
In my personal opinion, that is a wise decision. Consider the following piece of code:
String b = "b";
String c = "c";
String a = b + c;
Now, it is fairly evident that b and c are concatenated to yield a. But when one considers the following snippet, written in a hypothetical language that supports operator overloading, it is fairly evident that operator overloading does not make for readable code.
Person b = new Person("B");
Person c = new Person("C");
Person a = b + c;
In order to understand the result of the above operation, one must view the implementation of the overloaded addition operator for the Person class. Surely, that makes for a tedious debugging session, and the code is better implemented as:
Person b = new Person("B");
Person c = new Person("C");
Person a = b.copyAttributesFrom(c);
OK, well... this is a much-discussed and common issue. Today, in the software industry, there are mainly two different types of languages:
Low level languages
High level languages
This distinction was useful about ten years ago; at present the situation is a bit different.
Today we talk about business-ready applications.
Business models are particular models where programs need to meet many requirements. They are so complex and so strict that coding an application in a language like C or C++ would be very time-consuming. For this reason hybrid languages were invented.
We commonly know two types of languages:
Compiled
Interpreted
Well, today there is another one:
Compiled/Interpreted: in one word: MANAGED.
Managed languages are languages that are compiled in order to produce another code, different from the original one, but much more complex to handle. This INTERMEDIATE LANGUAGE is then INTERPRETED by a program that runs the final program.
This is the common dynamic we know from Java... It is a winning approach for business-ready applications.
Well, now going to your question...
Operator overloading is a matter that goes along with multiple inheritance and other advanced characteristics of low-level languages.
Java, as well as C#, Python and so on, is a managed language, made to be easy to write and useful for building complex applications in very little time.
If we included operator overloading in Java, the language would become more complex and difficult to handle.
If you program in C++ you surely understand that operator overloading is a very, very delicate matter, because it can lead to very complex situations and sometimes the compiler might refuse to compile because of conflicts and so on... Introducing operator overloading is to be done carefully. IT IS POWERFUL, but we pay for this power with an incredibly big load of problems to handle.
OK, OK, IT IS TRUE, you might tell me: "HEY, but C# uses operator overloading... What the hell are you telling me? Why does C# support it and Java not?"
Well, this is the answer. C#, yes, implements operator overloading, but it is not like C++. There are many operators that cannot be overloaded in C#, like "new" and many others that you can overload in C++... So C# supports operator overloading, but at a much lower level than C++ or other languages that fully support it. But this is not a good answer to the earlier question...
The real answer is that C# is more complex than Java. This is a pro but also a con. It is a matter of deciding where to place the language: high level, higher level, very high level?
Well, Java does not support op overloading because it wants to be fast and easy to manage and use. When introducing op overloading, a language must also carry a large amount of problems caused by this new functionality.
It is exactly like questioning: "Why does Java not support multiple inheritance?"
Because it is tremendously complex to manage. Think about it... IT WOULD BE IMPOSSIBLE for a managed language to support multiple inheritance... No common class tree, no Object class as a common base class for all classes, no possibility of (safely) upcasting, and many problems to handle, manage, foresee, take into account...
Java wants to be simple.
Even if I believe that future versions of this language will end up supporting operator overloading, you will see that the overloading mechanics will allow a smaller set of possibilities than what you have with overloading in C++.
Many others, here, also told you that overloading is useless.
Well, I belong to those who think this is not true.
Well, if you think this way (that operator overloading is useless), then many other features of managed languages are useless too. Think about interfaces, classes and so on; you do not really need them. You can use abstract classes for interface implementations... Look at C#: so much syntactic sugar, LINQ and so on. They are not strictly necessary, BUT THEY SPEED UP YOUR WORK...
Well, in managed languages everything that speeds up the development process is welcome and does not imply uselessness. If you think such features are not useful, then the entire language itself would be useless and we would all go back to writing complex applications in C++, Ada, etc. The added value of managed languages is measured by exactly these elements.
Operator overloading is a very useful feature. It could be implemented in languages like Java, and this would change the language's structure and purpose; it would be a good thing and a bad thing at once, just a matter of taste.
But today, Java is simpler than C# even for this reason: because Java does not support operator overloading.
I know, maybe I was a little long, but hope it helps. Bye
Check Java Features Removed from C and C++, section 2.2.7, "No More Operator Overloading":
There are no means provided by which programmers can overload the standard arithmetic operators. Once again, the effects of operator overloading can be just as easily achieved by declaring a class, appropriate instance variables, and appropriate methods to manipulate those variables. Eliminating operator overloading leads to great simplification of code.
Java doesn't support operator overloading (one reference is the Wikipedia Operator Overloading page). This was a design decision by Java's creators to avoid perceived problems seen with operator overloading in other languages (especially C++).
Respected Sir!
I have not learnt Java yet, but most people say that C++ has more OOP features than Java. I would like to know what features C++ has that Java doesn't. Please explain.
From java.sun.com
Java omits many rarely used, poorly understood, confusing features of C++ that in our experience bring more grief than benefit. These omitted features primarily consist of operator overloading (although the Java language does have method overloading), multiple inheritance, and extensive automatic coercions.
For a more detailed comparison check out this Wikipedia page.
This might be controversial, but some authors say that using free functions might be more object-oriented than writing methods for everything. So from those authors' point of view, free functions in C++ make it more OO than Java (which does not have them).
The explanation is that some operations are not really performed on an instance of an object, but rather externally, and that having externally defined operations for those cases improves the OO design. Some of these cases are operations on two objects that are not naturally an operation of either one. Incrementing a value is clearly an operation on the value, but creating a new value as the sum of two others (or a concatenation) is not really an operation on either instance. When you write:
String a = "Hello";
String b = " World";
String c = a.concat(b);
The concat operation is not performed on a: after the operation a is still "Hello". The operation is not performed on b either; it is an external operation performed on both a and b. In this particular example, the most OO way of implementing the operation would be to provide a new constructor that takes two arguments (after all, the operation is performed on the new string), but another solution would be to provide an external function append that takes two strings and returns a third one.
In this case, where both instances are of the same type, the operation can naturally be implemented as a static method of the type, but when you mix different types the operation is not really part of either one, and in some cases the result might even be of a completely different type. In some cases free functions are faked in Java, as in the java.util.Collections class: it does not represent any OO element, but is rather simple glue that exposes free functions as static methods, because the language has no support for the former. Note that none of those algorithms are performed on the collection itself nor on an instance of the contained type.
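In Java terms, the closest thing to such a free function is a static helper in a utility class, in the style of java.util.Collections (the class and method below are hypothetical, just to illustrate):
// Hypothetical utility class; the operation belongs to neither argument.
final class Strings {
    private Strings() {}                     // no instances: it only holds static functions

    static String append(String left, String right) {
        return left + right;                 // builds a new String; neither argument changes
    }
}

// usage: String c = Strings.append("Hello", " World");   // both arguments stay unchanged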
Multiple inheritance
Template Metaprogramming
C++ is a huge language and it is common for C++ developers to only use a small subset during development. These language features are often cited as being the most dangerous/difficult part of C++ to master and are often avoided.
In C++ you can bypass the OO model and make up your own stuff, whereas in Java, the VM decides that you cannot. Very simplified, but you know... who has the time.
I suppose some would consider operator overloading an object-oriented feature (if you view binary operators as not much different from class methods).
Some links, that give some good answers:
Java is not a pure OOP language (... but I don't care ;) )
Comparing C++ and Java (Java Coffee Break article)
Comparing Java and C++ (Wikipedia comprehensive comparison)
Be careful. There are multiple definitions of OOP out there. For example, the definitions in Wegner 87 and Booch et al. 91 are different from what people say in "Java is not a pure OOP language".
All this "my language is more OO than your language" stuff is a bit pointless, IMO.
In Beyond Java (section 2.2.9), Bruce Tate claims that the "typing model" is one of the problems of C++. What does that mean?
What he means is that objects in C++ don't intrinsically have types. While you might write
struct Dog {
    const char* name;
    int breed;                    // e.g. an enum constant such as POODLE
};

Dog ralph = {"Ralph", POODLE};    // aggregate initialization
in truth ralph doesn't have a type; it's just a bunch of bits, and the CPU doesn't give a damn about the fact that you call that collection of bits a Dog. For example, the following is valid:
struct Cat {
    int color;
    char* country_of_origin;
};

Cat ralph_is_that_you = * (Cat*) &ralph;
Watch in wonder as Professor C performs trans-species mutations between Dogs and Cats! The point here is that since ralph is just a sequence of bits, you can claim that that sequence of bits is really a Cat and nothing would go wrong... except that the "Cat"'s color would be some random large integer, and you had better not try to read its country of origin. The fundamental problem is that while a variable (as in, the name, not the object it represents) has a type, the underlying object does not.
Compare this with Java, where not only variables but also objects have intrinsic types. This may be due partly to the fact that there are no pointers and thus no raw access to memory, but the fact nonetheless remains that if you cast a Dog to an Object, you can't cast it back down to a Cat, because the object knows that deep down it's actually a Dog, not a Cat.
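A hedged Java sketch of that last point (Dog and Cat here are hypothetical, unrelated classes):
class Dog { }
class Cat { }

class CastDemo {
    public static void main(String[] args) {
        Object o = new Dog();
        Dog d = (Dog) o;    // fine: the object really is a Dog
        Cat c = (Cat) o;    // throws ClassCastException at runtime: the object knows it is a Dog
    }
}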
The weak typing present in C++ is rather detrimental, because it makes the compiler's static type checks nearly useless if you want to truly abuse-proof your application, and it also makes secure and robust software hard to write. For example, you need to be very careful whenever you access a "pointer", because it could really be any random bit pattern.
EDIT 1: The comments had very good points, and I'd like to add them here.
kts points out that Sun's Java does indeed have pointers if you look deeply enough. Thanks! I hadn't known, and that's rather cool. However, the fundamental point is that Java objects are typed and C types aren't. Yes, you can circumvent this, but this is the same as the difference between opt-in and opt-out spam: yes, you could abuse the Java pointers, but the default is that no abuse is possible. You'd have to opt in.
martin-york points out that the example I showed is a purely C phenomenon. This is true, but
C++ mostly contains C as a subset (the differences are usually too minor to list).
C++ includes reinterpret_cast<T> specifically to allow hacks like this.
Just because it's discouraged doesn't mean it isn't pervasive or dangerous. Basically, even if Java has opt-in pointers (as I'll call them), the fact is that the person using them has probably thought of the consequences. C's casts are so easy that they're at times done without thinking (to quote Stroustrup, "But the new syntax was made deliberately ugly, because casting is still an ugly and often unsafe operation."). There's also the fact that the work needed to circumvent Java's type system is far more than what would make for a clever hack, while circumventing the C type system (and, yes, the C++ type system) is easy enough that I've seen it done just for a minor performance boost.
Anyway, discouraging something doesn't make it not happen. I discourage bad coding, but I haven't seen it get me anywhere...
As for the feature being useful, it admittedly is (just look up "fast inverse square root" on Google or Wikipedia), but it is dangerous enough that, following Stroustrup's maxim that ugly operations should be ugly, the difficulty threshold should be significantly higher.
That it is hard to type C++ code. :-p
Seriously though, they are probably referring to the fact that C++ has a weak static type system, that can be easily circumvented. Some examples: typedefs are not real types, enumerated types are just ints, booleans and integers are equivalent in many cases, and so on.