Why will there be no native properties in Java 7?

Is there any rational reason, why native properties will not be part of Java 7?

There are some high-level reasons related to schedule and resources, of course. Implementing properties and understanding all of their ramifications and intersections with other language features is a large task, comparable in size to the various Java 5 language changes.
But I think the real reason Sun is not pushing properties is the same as closures:
1) There is no consensus on what the implementation should look like. Or rather, there are many competing alternatives and people who are passionate about properties disagree about crucial parts of the implementation.
2) Perhaps more importantly, there is a significant lack of consensus about whether the feature is wanted at all. While many people want properties, there are also many people who don't think it's necessary or useful (in particular, I think server-side people see properties as far less crucial to their daily life than Swing programmers do).
Properties history here:
http://tech.puredanger.com/java7#property

Doing properties "right" in Java will not be easy. Rémi Forax's work in particular has been valuable in figuring out what this might look like, and in uncovering a lot of the "gotchas" that will have to be dealt with.
Meanwhile, Java 7 has already taken too long. The closures debate was a huge, controversial distraction that wasted a lot of mind-power that could have been used to develop features (like properties) that have broad consensus of support. Eventually, the decision was made to limit major changes to modularization (Project Jigsaw). Only "small change" is being considered for the language (under Project Coin).
JavaFX has beautiful property support, so Sun clearly understands the value of properties and knows how to implement them. But having been spoiled by JavaFX properties, developers are less likely to settle for a half-baked implementation in Java. If they are worth doing, they are worth doing right.
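To make concrete what a native property would have to replace: below is a rough sketch of what a single bound property costs in Java today, using the standard java.beans plumbing (the Person/name names are just for illustration). A native property syntax would collapse essentially all of this into one declaration.

import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

public class Person {
    // Listener plumbing, a field, a getter, and a change-firing setter:
    // the boilerplate a native property feature would generate for us.
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        String old = this.name;
        this.name = name;
        pcs.firePropertyChange("name", old, name); // notify observers
    }

    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }
}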

Any given thing is "not done" by default, so no particular reason is needed for something to remain not done. Rather some compelling reason is needed to move something from "not done" to "planned" or "done". No sufficiently compelling reason has yet arisen for this language feature.

There are two more reasons to avoid properties in any language:
Properties are not very object-oriented. Making them easy to write encourages the pattern where the object just serves up its internal state and the caller manipulates it. The object should provide higher-level methods and keep its internals private. Next time you're tediously implementing a getter, consider what the caller will do with the data and whether you can just provide that functionality directly.
Properties encourage mutable state (through setters), which makes a program less parallelizable. As the number of cores goes up, we should all be trying to make our objects immutable to make concurrent reasoning easier. Next time you're tediously implementing a setter, consider removing it and making the object immutable.
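A minimal sketch of both suggestions combined (all names here are illustrative): rather than exposing a getter/setter pair, the object offers the higher-level operation directly and returns a new instance instead of mutating.

// Illustrative only: behaviour instead of getters, immutability instead
// of setters. Safe to share between threads without locking.
public final class Account {
    private final long balanceCents;

    public Account(long balanceCents) {
        this.balanceCents = balanceCents;
    }

    // Higher-level operation replacing getBalance()/setBalance()
    public Account deposit(long amountCents) {
        return new Account(balanceCents + amountCents);
    }

    // Answers the caller's real question instead of serving up state
    public boolean canAfford(long amountCents) {
        return balanceCents >= amountCents;
    }
}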

Not enough time?
Not yet specced properly?
Difficult to add to Java due to Java's implementation?
Deemed not important enough, i.e. other things were prioritised?

Related

Why do standard classes sometimes have seemingly unrelated methods?

While studying the standard Java library and its classes, I couldn't help noticing that some of those classes have methods that, in my opinion, have next to no relevance to those classes' purpose.
The methods I'm talking about are, for example, Integer#getInteger, which retrieves a value of some "system property", or System#arraycopy, whose purpose is well defined by its name.
Still, both of these methods seem out of place, especially the first one, which for some reason ties working with system resources to a primitive-type wrapper class.
From my current point of view, such method placement looks like a violation of a fundamental OOP design principle: each class should be dedicated to solving its particular set of problems, not turn itself into a Swiss army knife.
But since I don't think the Java designers are idiots, I assume that there's some logic behind the decision to place those methods right where they are. So I'd be grateful if someone could explain what that logic really is.
Thanks!
Update
A few people have hinted at the fact that Java does have its illogical things that are simply remnants of a turbulent past. I'll reformulate my question then: why is Java so unwilling to mark its architectural flaws as deprecated? It's not as if existing deprecated features are likely to be removed in any foreseeable future, and marking things deprecated really helps people refrain from using them in newly created code.
This is a good thing to wonder about. For more recent features (such as generics and lambdas) there are several blogs and mailing-list posts that explain the choices made by the library designers. These are very interesting to read.
In your case I expect the answer isn't too exciting. It is hard to tell exactly why these methods were placed where they are, but both classes have existed since JDK 1.0. In those days common practices for Java and OO programming in general were still forming, and library makers had to invent many paradigms themselves. There were also other constraints at the time, such as object creation being expensive.
Many of those awkwardly designed methods and classes now have a better alternative. (See Date and the package java.time)
You would expect arraycopy to have been added to the Arrays class, but unfortunately it is not there.
Ideally the original method would be deprecated for a while and then removed. Many libraries follow this strategy. Java, however, is very conservative about this and only deprecates things that really should not be used (such as Thread.stop()). I don't think a method has ever been removed from Java due to deprecation. This makes it fairly easy to upgrade your software to a newer version of Java, but it comes at the cost of leaving some clutter in the libraries.
The fact that Java is so conservative about keeping new JDK/JRE versions compatible with older source code and binaries is both loved and hated. For your hobby project, or a small actively developed project, upgrading to a new JVM that removes deprecated functions after a few years is not too difficult. But don't forget that many projects are not actively developed, or their developers have a hard time making changes safely, for instance because they lack a proper regression test suite. In those projects API changes cost a lot of time to adapt to, and they run the risk of introducing bugs.
Also, libraries often try to support older versions of Java as well as newer ones; they would have a problem doing so if methods had been deleted.
The Integer example is probably just a design decision. If you want to interpret a system property as an Integer, use java.lang.Integer. Otherwise System would have to provide a getter method for each java.lang type, something like:
System.getPropertyAsBoolean(String)
System.getPropertyAsByte(String)
System.getPropertyAsInteger(String)
...
And for each data type, you'd need one additional overload taking a default value:
System.getPropertyAsBoolean(String, boolean)
System.getPropertyAsByte(String, byte)
...
Since the java.lang types already have some parsing abilities (Integer.valueOf(String)), I am not too surprised to find a getProperty-style method there: convenience, in exchange for bending principles a tiny bit.
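For reference, this is how the existing method behaves; both overloads are in java.lang.Integer, and only the property name below is made up:

// Reads the system property "app.threads" and parses it as an Integer;
// yields null if the property is unset or not a valid number.
Integer threads = Integer.getInteger("app.threads");

// Same lookup, but falling back to 4 when the property is missing.
int threadCount = Integer.getInteger("app.threads", 4);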
As for System.arraycopy, I guess it is an operation whose implementation depends on the platform: it probably copies memory from one location to another in a very efficient way. If I wanted to copy an array like that, java.lang.System is where I'd look for it.
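For completeness, a small usage sketch of the signature: source array and offset, destination array and offset, and the number of elements to copy.

int[] src = {1, 2, 3, 4, 5};
int[] dest = new int[5];

// Copy 3 elements, starting at src[1], into dest starting at dest[0].
System.arraycopy(src, 1, dest, 0, 3); // dest is now {2, 3, 4, 0, 0}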
"I assume that there's some logic behind a decision to place those
methods right where they are."
While that is often true, I have found that when something's off, this assumption is typically where you are misled.
A language is in constant development, from the day someone proposes it to the day it is antiquated. In between those extremes are phases that the language goes through. Especially if someone is spending money on it and wants people to use it, a very peculiar phase often occurs, just before or after the first release:
The "we need this to work yesterday" phase.
This is where stuff like this happens: you have an almost complete language, but the programmers need to do something to show what the language can do, or a specific application needs a feature that was not designed into the language.
So where do we add this feature?
- well, wherever it makes the most sense to the particular programmer whose task it is to "make it work yesterday".
The logic may be that this is where the function makes the most sense, since it doesn't belong anywhere else and it doesn't deserve a class of its own. It could also be something like: so far, we have never done an array copy without using System... let's put arraycopy in there and save everyone an extra import...
In the next generation of the language, people will not move the feature, since some experienced programmers will complain. So the feature may be duplicated, and also found in a place where it makes more sense.
Much later, it will be marked as deprecated, and deleted, if anyone cares to clean it up...

Good software engineering vs. Security

The Security and Design guidelines go to great lengths outlining various methods to make it more difficult for an attacker to compromise an in-app billing implementation.
Especially noted is how easy it is to reverse-engineer an .apk file, even if obfuscated via ProGuard. So they even recommend modifying all sample application code, especially "known entry points and exit points".
What I find missing is any reference to the risk of wrapping verification steps in a single method, like a static Security.verify() that returns a boolean: a good design practice (reducing code duplication, reusable, easier to debug, self-documenting, etc.), but now all an attacker needs to do is identify that method and make it always return true... So regardless of how many times I use it, delayed or not, randomly or not, it simply doesn't matter.
On the other hand, Java doesn't have macros as in C/C++: macros would allow reducing source-code duplication without creating a single exit point for a verify() function.
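To make the concern concrete, here is a sketch of such a choke point; the class and method shapes are my own illustration, not the actual in-app billing sample code:

import java.security.PublicKey;
import java.security.Signature;

// A single, reusable verification method: good engineering (no
// duplication, easy to test), but also a single patch target. An
// attacker who rewrites this to "return true" defeats every call
// site at once, however often or randomly it is invoked.
public final class SecurityCheck {
    private SecurityCheck() {}

    public static boolean verify(PublicKey key, byte[] data, byte[] sig) {
        try {
            Signature verifier = Signature.getInstance("SHA1withRSA");
            verifier.initVerify(key);
            verifier.update(data);
            return verifier.verify(sig);
        } catch (Exception e) {
            return false; // treat any failure as "not verified"
        }
    }
}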
So my questions:
Is there an inherent tension between well-known software engineering/coding practices and design for so-called security? (in the context of Java/Android/secure transactions, at least)
What can be done to mitigate the side effects of "design for security", which seems like shooting oneself in the foot in terms of over-complicating software that could have been simpler, more maintainable, and easier to debug?
Can you recommend good sources for further studying this subject?
As usual, it's a tradeoff. Making your code harder to reverse-engineer/crack involves making it less readable and harder to maintain. You decide how far to go, based on your intended user base, your own skills in the area, time/cost, etc. This is not specific to Android. Watch this Google I/O presentation for various stages of obfuscating and making your code tamper resistant. Then decide how far you are willing to go for your own apps.
On the other hand, you don't have to obfuscate/harden, etc. all of your code, just the part that deals with licensing, etc. That is usually a very small part of the whole codebase and doesn't really need to change that often, so you could probably live with it being hard to follow/maintain, etc. Just keep some notes on how it works, so you remind yourself 2 years later :).
The counter-productivity you are describing is just the tip of the iceberg... No software is 100% bug-free at release, so what do you do when users start reporting problems?
How do you troubleshoot or debug field problems after you have disabled logging, stack tracing, and all the other kinds of information that help reverse-engineers but also help the legitimate development team?
However tough the obfuscation methods are, there is always a way to reverse-engineer them. I mean, if your software gets popular enough among the hacker community, eventually someone will try to reverse-engineer it.
Obfuscation is just a method of making the process of reverse engineering tougher. So is packing. Many packing methods are available, but so are the processes to reverse-engineer them.
You can check www.tuts4you.com to see how many guides are available.
I am not an expert like many others, but this is my experience from learning reverse engineering. Also, recently I have seen a lot of guides on reverse-engineering Android applications. I have even seen an Android reversing challenge in the nullc0n CTF (not sure). If you want, I can mention the site after searching.

Practices for managing complexity of meta-programming (AOP/reflection/macros) techniques

Aspects, Macros, Reflection, and other niceties - the good parts
I've noticed that "meta programming" tricks (in the Clojure world, functions have metadata; in the OO world, we have concepts like reflection, AOP, etc.) can be a good way to decouple and extend the functionality of existing code without editing it. Such tricks allow us to intercept, redirect, and wrap functional pieces of our code so it can be extended in a highly dynamic way.
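In the Java world, the JDK's dynamic proxies give a small taste of this: any interface can be wrapped with interception logic at runtime, without editing the wrapped code. A minimal sketch (the Greeter interface is made up for illustration):

import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

public class ProxyDemo {
    public static void main(String[] args) {
        Greeter real = name -> "Hello, " + name;

        // Wrap the real object: every call is intercepted, logged,
        // and forwarded, without editing Greeter or its implementation.
        Greeter wrapped = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] {Greeter.class},
                (proxy, method, methodArgs) -> {
                    System.out.println("calling " + method.getName());
                    return method.invoke(real, methodArgs);
                });

        System.out.println(wrapped.greet("world")); // logs, then greets
    }
}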
The scary part
However, as many have claimed, overuse of macros can make code difficult to understand. The "blackboard" software architecture pattern, where several agents modify or edit a common resource, can be dangerous if we don't manage the creation of those agents carefully. Finally, I would informally note that the long-standing popularity of C++ and Java is at least partially due to the fact that they are "no surprises" languages, where code is clear, explicit, and procedural.
The problem: the promise of dynamic code-injection techniques for reducing boilerplate and decoupling feature sets seems to require a "new" way of thinking about documentation, class design, and software engineering.
My Questions
Does the way we document and deploy normal code, manage source packages, and integrate libraries require different or new techniques when we begin accommodating meta-programming methods alongside our more traditional OO methodologies?
For example, should we consider the use of meta programming as an alternative to other, more conventional OO programming techniques?
Is there a general set of known red flags introduced by meta-programming, and how can we avoid them?
What are the best use cases for aspects, reflection, and other dynamic software techniques?
I find that AOP is something that needs to be used very carefully in a software project and should have a well-defined purpose. I find it useful for some boilerplate processes like transaction demarcation, security, and logging, but it is really easy to get yourself into trouble with AOP, and it can become a major source of accidental complexity.
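A minimal sketch of that kind of boilerplate use, assuming AspectJ (or Spring AOP) is on the classpath; the package name in the pointcut is made up:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// One aspect adds entry/exit logging around every method in the
// service package, without editing any of those classes.
@Aspect
public class LoggingAspect {
    @Around("execution(* com.example.service..*.*(..))")
    public Object logCall(ProceedingJoinPoint pjp) throws Throwable {
        System.out.println("entering " + pjp.getSignature());
        try {
            return pjp.proceed();
        } finally {
            System.out.println("leaving " + pjp.getSignature());
        }
    }
}

Note that the same mechanism is what makes the trouble possible: behaviour is injected far from the code it affects, which is exactly the accidental complexity warned about above.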
"It depends" :) ... That's what is probably the best answer for all subjective questions in programming world.
I would suggest that before using any technique like AOP or DI, you give it very serious thought as to whether you really, really need it. We as programmers tend to get fascinated by these new tricks and techniques, which make us see a superficial beauty in code. The real beauty of code that we should strive for is simplicity, and nothing else.
Remember every new trick/technique/framework you add to a system will increase the complexity of the system (probably exponentially).
I personally go by the idea of: Build Programs not Applications, Build libraries not frameworks.
Here's a quote from SICP that might be relevant to the discussion:
"It is no exaggeration to regard this as the most fundamental idea in programming:
The evaluator, which determines the meaning of expressions in a programming language, is just another program.
To appreciate this point is to change our images of ourselves as programmers. We come to see ourselves as designers of languages, rather than only users of languages designed by others."

Java - Robustness and Code Reuse

I have a few doubts about Java concepts:
Is code reuse in Java similar to using functions as defined in other programming languages like C?
Is Java robust by nature, or does it provide a way to write robust code?
Can anyone explain the above two? I have read a few books and I did not get a clear picture.
Code Reuse
I'd like to point you to some links on this topic.
A Realistic Look at Object-Oriented Reuse
What exactly is OO reuse?
Does OOP fulfill the promise of code reuse? What alternatives are there to achieve code reuse?
Is code reuse a lie?
Some points about code reuse from the first link.
Code reuse, the most common kind of reuse, refers to the reuse of source code within sections of an application and potentially across multiple applications. At its best, code reuse is accomplished by sharing common classes or collections of functions and procedures. At its worst, code reuse is accomplished by copying and then modifying existing code. A sad reality of our industry is that code copying is often the only form of reuse practised by developers.
Robust
Quoted from Core Java, Volume I, Fundamentals.
"Java is intended for writing programs that must be reliable in a
variety of ways. Java puts a lot of emphasis on early checking for
possible problems, later dynamic (runtime) checking, and eliminating
situations that are error-prone. . . . The single biggest difference
between Java and C/C++ is that Java has a pointer model that
eliminates the possibility of overwriting memory and corrupting data."
This feature is also very useful. The Java compiler detects many
problems that, in other languages, would show up only at runtime. As
for the second point, anyone who has spent hours chasing memory
corruption caused by a pointer bug will be very happy with this
feature of Java.
If you are coming from a language like Visual Basic that doesn’t
explicitly use pointers, you are probably wondering why this is so
important. C programmers are not so lucky. They need pointers to
access strings, arrays, objects, and even files. In Visual Basic, you
do not use pointers for any of these entities, nor do you need to
worry about memory allocation for them. On the other hand, many data
structures are difficult to implement in a pointerless language. Java
gives you the best of both worlds. You do not need pointers for
everyday constructs like strings and arrays. You have the power of
pointers if you need it, for example, for linked lists. And you always
have complete safety, because you can never access a bad pointer, make
memory allocation errors, or have to protect against memory leaking
away.
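A small illustration of that difference in practice: the same off-by-one slip that can silently corrupt memory in C fails loudly and safely in Java.

public class BoundsDemo {
    public static void main(String[] args) {
        int[] values = new int[3];
        // In C, writing past the end of an array tramples whatever
        // happens to live next in memory. The Java runtime checks
        // every access and throws instead of corrupting data.
        values[3] = 42; // throws ArrayIndexOutOfBoundsException: 3
    }
}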
If by "code reuse" you mean including other files from the same project: yes. Otherwise, no.
As for the second, Java is robust, as taken from here: http://java.sun.com/docs/overviews/java/java-overview-1.html:
Java: A simple, object-oriented, network-savvy, interpreted, robust, secure, architecture neutral, portable, high-performance, multithreaded, dynamic language.

Use the right tool for the job: embedded programming

I'm interested in programming languages well suited for embedded programming.
In particular:
Is it possible to program embedded systems in C++?
Or is it better to use pure C?
Or is C++ OK only if some features of the language (e.g. RTTI, exceptions and templates) are excluded?
What about Java in this domain?
Thanks.
Is it possible to program embedded systems in C++?
Yes, of course, even on 8-bit systems. C++ has only slightly different run-time initialisation requirements than C: before main() is invoked, the constructors of any static objects must be called. The overhead (not including the constructors themselves, which are within your control) is tiny, though you do have to be careful, since the order of construction is not defined.
With C++ you only pay for what you use (and much that is useful may be free). That is to say, for example, a piece of C code that is also C++-compilable will generally require no more memory and execute no slower when compiled as C++ than when compiled as C. There are some elements of C++ that you may need to be careful with, but many of the most useful features come at little or no cost, and great benefit.
Or is it better to use pure C?
Possibly, in some cases. Some smaller 8-bit and even 16-bit targets have no C++ compiler (or at least not one of any repute); using C will give greater portability should that be an issue. Moreover, on severely resource-constrained targets with small applications, the benefits that C++ can bring over C are minimal. The extra features in C++ (primarily those that enable OOP) make it suited to relatively large and complex software construction.
Or is C++ OK only if some features of the language (e.g. RTTI, exceptions and templates) are excluded?
The language features that may be acceptable depend entirely on the target and the application. If you are memory constrained, you might avoid expensive features or libraries, and even then it may depend on whether it is code or data space you are short of (on targets where these are separate). If the application is hard real-time, you would avoid those features and library classes that are non-deterministic.
In general, I suggest that if your target will be 32-bit, always use C++ in preference to C, then cut your C++ to suit the memory and performance constraints. For smaller parts, be a little more circumspect when choosing C++, though do not discount it altogether; it can make life easier.
If you do choose to use C++, make sure you have decent debugger hardware/software that is C++ aware. The relative ease with which complex software can be constructed in C++, make a decent debugger even more valuable. Not all tools in the embedded arena are C++ aware or capable.
I always recommend digging in the archives at Embedded.com on any embedded subject; it has a wealth of articles, including a number on just this question:
Poor reasons for rejecting C++
Real men program in C
Dive in to C++ and survive
Guidelines for using C++ as an alternative to C in embedded designs
Why C++ is a viable alternative to C in embedded systems design
Better even at the lowest levels
Regarding Java, I am no expert, but it has significant run-time requirements that make it unsuited to resource constrained systems. You will probably constrain yourself to relatively expensive hardware using Java. Its primary benefit is platform independence, but that portability does not extend to platforms that cannot support Java (of which there are many), so it is arguably less portable than a well designed C or C++ implementation with an abstracted hardware interface.
[edit] Coincidentally, I just received this in the TechOnline newsletter: Using C++ Efficiently in Embedded Applications
More often than not in embedded systems, the language you're programming in is determined by which compiler is actually available.
If your hardware only has a C compiler, that's what you're going to use. If it has a decent C++ compiler, then there is really no reason to prefer C over C++.
I would dare say that Java isn't a very popular choice in embedded systems.
Embedded programming these days spans a large range of applications.
Roughly, it goes from sensors/switches up to complete security systems.
You should base your language on the complexity and the hardware resources.
It is one of the choices, alongside the hardware (CPU, ...), OS, protocols, ...
Possible choices:
switches: assembler
router-like devices: C and/or C++
handhelds: Java or QT/C++
complete systems: combinations C and/or C++ with python
Or is C++ OK only if some features of the language (e.g. RTTI, exceptions and templates) are excluded?
It's good to be thinking along these lines. Compile-time complexity is not a big deal, but runtime complexity has a resource cost.
C++ facilitates class/namespace modularity (e.g. method foo() in more than one context) and instance modularity (member field bar belonging to more than one object), both of which are a big advantage in software design. There are also features like const, references, static casts, and templates, which can help enforce constraints and have little or no runtime cost.
I would not exclude templates. They're complex to think about and you need a compiler that handles them well, but the resource cost is almost all at compile time. What's going to "cost" you is that each time you use a template with different class parameters, you produce a new set of code to instantiate the member functions; but you would almost certainly have to do the same thing without templates.
Furthermore, templates allow you to design and test libraries for general circumstances, in separate files that are instantiated at compile time rather than link time. To clarify: templates let you have a file A.h that you test, then use it with file B.h or B.c to instantiate it at compile time. (A library would be linked in rather than compiled, which makes it less flexible: template methods can be optimized out so they do not incur a function call.)
I've used templates in embedded systems to implement CRC code and fixed-point math: I can test the general code, put it in version control, and then reuse it multiple times by writing a simple class that derives from a template or has a template member field. The classic example, of course, is the STL.
RTTI and exceptions: these add run-time complexity. I don't have a good idea of the resource cost but I expect RTTI would be fairly simple (just adds a type tag, costing extra space) whereas exceptions are probably beastly, involving stack unwinding.
virtual functions: I used to rule these out because of the memory and execution-time costs (minimal but still there), as well as the complexity of debugging, but they allow you to decouple objects from each other. If you don't use virtual functions, then when an instance of one class (e.g. Foo) needs to execute code associated with an instance of another class (e.g. Bar), the first class needs to know everything about the second (to compile Foo you need static linkage to all the methods in Bar), which adds a lot of tight coupling.
dynamic memory allocation: this is another big thing (present in C as well) which we avoid like the plague at my company. Not only are there all sorts of errors that can arise, but the big runtime cost is the allocator/deallocator, and you've got to be willing and able to know what that cost is and accept it.
edit: I would love to use Java instead of C++ in the embedded world. Unfortunately my choices are limited and the runtime resource costs (code size, memory size, garbage-collection time constraints) are too high in the space I work in. My reason for using Java would be less its runtime goodies and more the fact that its software design is much cleaner and the tools are much better (OMG! refactoring! woohoo!)... The key seems to lie in two things, which make C/C++ feel very clunky in comparison:
that everything is an object and all the methods are virtual, so you can lean heavily on the abstractions of interfaces.
the interface/implementation separation in Java is not this clunky, ugly .c/.h file-splitting thing that makes compilers so slow. In contrast, I use Java in Eclipse and it automatically compiles the code right as I edit it. This is huge! I find most of my errors right away. In C/C++ I have to wait for a whole compile cycle.
Someday I hope there will be a language between C/C++ and Java that provides the advantages of Java for software development, without requiring the bells and whistles that make Java so attractive for desktop/server applications but unattractive in most of the embedded world.
Both C and C++ can be used on embedded systems. If you limit the features of C++ that you use, it will use roughly the same speed and space as C. As for using the additional features, it really depends on your constraints.
For example, if you are building a real-time system, exceptions might not be a good idea, simply because reasoning about the propagation time of exceptions, and all the paths through which they can possibly propagate, can make hard real-time guarantees quite tough (although, then again, the ACE/TAO real-time CORBA implementation uses exceptions). Templates and RTTI can lead to larger programs, but there is a lot of variability in embedded systems, so depending on your resource constraints this could be perfectly fine or unacceptable.
The same goes for Java. Java (well, technically, the Java programming language with a subset of the Java API) runs on Android, for example, using the Dalvik VM, and J2ME runs on a variety of embedded devices. Whether it will work for your application depends on your particular constraints.
C is the most common language used in embedded systems.
However, I think C++'s future lies in the embedded-systems domain, and the C++0x standard will help in that respect. So I wouldn't be surprised to see C++ used a lot more in this field.
I think you already have a great answer from Clifford, but I'll add my experience to it. As was pointed out, the type of embedded system is the main driver. In defense/aerospace, C and Ada are the most popular embedded languages I encounter, although as time goes on I am seeing more C++ as model-based development becomes popular with tools such as Rhapsody. Seeing requirements for object-oriented design experience in job listings also leads me to believe that the market is slowly shifting to follow the trends of mainstream development.
If you're really interested in embedded development, I would start with C. It is pretty universal. Almost every OS, including real-time OSes like Integrity, Nucleus, VxWorks, and embedded Linux, has a compiler and tools for it. The things you'll learn about pointers, memory management, etc. will translate well into C++ development, in the embedded world at least.
As for Java, if you're interested in development for mobile platforms like smart phones, this is a solid choice (Android comes to mind). But in the world of Real Time Operating Systems, I haven't seen it.
That brings me to my last point of advice. If I wanted to jump into embedded programming (which I did just 4 years ago), I would focus on learning C from a low-level point of view, as mentioned above. Then I would also learn what makes real-time programming so difficult/different. I found the book below quite good at teaching you to think like an embedded programmer rather than an application developer. Good luck!
An Embedded Software Primer
It really boils down to what hardware platform you're operating on and hence what software platforms are open to you. For a lot of recent embedded kit - designed around a system-on-chip, a megabyte or two of RAM, a few devices - you really want a small operating system to manage the low level hardware while you concentrate on your application. Your choice of OS then constrains your available language set.
It's certainly possible to use C++ in the embedded space, but the full feature set of the language takes a lot of work to port correctly. For example, eCos is implemented in a mixture of C and what you might call the structural aspects of C++; support for RTTI, exceptions and the STL is lacking in the free version, though there are people working on this (and a commercial port is available).
Similarly, it's possible to use Java - I know, I've ported a JVM to an embedded environment; it wasn't fun - though you usually get a cut-down Java language configuration, often something based on J2ME.
