Aspects, Macros, Reflection, and other niceties - the good parts
I've noticed that "metaprogramming" tricks (in the Clojure world, functions have metadata; in the OO world, we have concepts like reflection, AOP, etc.) can be a good way to decouple and extend the functionality of existing code without editing it. Such tricks allow us to intercept, redirect, and wrap functional pieces of our code so it can be extended in a highly dynamic way.
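To make the interception idea concrete, here is a minimal sketch using the JDK's built-in dynamic proxies (java.lang.reflect.Proxy); the Greeter interface and SimpleGreeter class are invented for the example, but the proxy API itself is standard. The wrapped object gains logging without SimpleGreeter ever being edited.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    public class InterceptionDemo {

        // A made-up interface we want to extend without touching its implementation.
        interface Greeter {
            String greet(String name);
        }

        static class SimpleGreeter implements Greeter {
            public String greet(String name) { return "Hello, " + name; }
        }

        public static void main(String[] args) {
            final Greeter target = new SimpleGreeter();

            // Wrap the target in a proxy that logs every call before delegating.
            Greeter wrapped = (Greeter) Proxy.newProxyInstance(
                    Greeter.class.getClassLoader(),
                    new Class<?>[] { Greeter.class },
                    new InvocationHandler() {
                        public Object invoke(Object proxy, Method method, Object[] methodArgs) throws Throwable {
                            System.out.println("intercepted: " + method.getName());
                            return method.invoke(target, methodArgs); // delegate to the real object
                        }
                    });

            System.out.println(wrapped.greet("world"));
        }
    }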
The scary part
However, as many have claimed, overuse of macros can make code difficult to understand. The "blackboard" software architecture pattern, where several agents modify or edit a common resource, can be dangerous if we don't manage the creation of those agents carefully. Finally, I would informally note that the long-standing popularity of C++ and Java is at least partially due to the fact that they are "no-surprises" languages, where code is clear, explicit, and procedural.
The problem: does the promise of dynamic code injection techniques for reducing boilerplate and decoupling feature sets require a "new" way of thinking about documentation, class design, and software engineering?
My Questions
Does the way we document and deploy normal code, manage source packages, and integrate libraries require different or new techniques when we begin accommodating metaprogramming methods alongside our more traditional OO methodologies?
For example, should we consider the use of metaprogramming as an alternative to other, more conventional OO programming techniques?
Is there a general set of known red flags introduced by metaprogramming, and how can we avoid them?
What are the best use cases for aspects, reflection, and other dynamic software techniques?
I find that AOP is something that needs to be used very carefully in a software project and should have a well-defined purpose. It is useful for boilerplate concerns like transaction demarcation, security, and logging, but it is really easy to get yourself into trouble with AOP, and it can become a major source of accidental complexity.
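For what it's worth, here is a rough sketch, in AspectJ's annotation style, of the boilerplate-only kind of aspect meant above; the com.example.service package in the pointcut is a placeholder, and a real project would use a proper logger and the usual weaving or Spring AOP configuration rather than System.out.

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class TimingAspect {

        // Wraps every public method in the (hypothetical) com.example.service package.
        @Around("execution(public * com.example.service..*.*(..))")
        public Object logTiming(ProceedingJoinPoint joinPoint) throws Throwable {
            long start = System.nanoTime();
            try {
                return joinPoint.proceed(); // run the intercepted method unchanged
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1000000;
                System.out.println(joinPoint.getSignature() + " took " + elapsedMs + " ms");
            }
        }
    }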
"It depends" :) ... That's what is probably the best answer for all subjective questions in programming world.
I would suggest that before using any technique like AOP or DI, you give very serious thought to whether you really, really need it. We as programmers tend to get fascinated by these new tricks and techniques, which make us see superficial beauty in code. The real beauty of code that we should strive for is simplicity and nothing else.
Remember that every new trick/technique/framework you add to a system will increase its complexity (probably exponentially).
I personally go by the idea: build programs, not applications; build libraries, not frameworks.
Here's a quote from SICP (Structure and Interpretation of Computer Programs) that might be relevant to the discussion:
"It is no exaggeration to regard this as the most fundamental idea in programming:
The evaluator, which determines the meaning of expressions in a programming language, is just another program.
To appreciate this point is to change our images of ourselves as programmers. We come to see ourselves as designers of languages, rather than only users of languages designed by others."
Related
Good evening,
I am currently in the process of getting a degree in programming at an academic institution (not a university) in Germany. We also do web development with Java EE there. This particular course started with using servlets and progressed to JSP. Using servlets to handle business logic and then printing the results with JSP, using some of the basics JSPs provide (e.g. looping over collections), seemed to make sense (roughly the pattern sketched below). But recently we dove deeper into the world of JSP and did scriptlets and similar things, which boiled down to putting more and more business logic into the JSP file and ditching servlets altogether. This entanglement of Java business logic with the templates, and the praise for doing it this way, is somewhat beyond me. I always thought one of the main goals of web application development was to keep the main business logic separated from frontend matters (a thing which Django and its template language do very well, IMHO).
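(To show what I mean by separation, here is a minimal sketch of the servlet-plus-JSP pattern as I understood it; the class name, the "products" attribute and the JSP path are just invented for the example. The JSP itself would then only loop over ${products} with JSTL's <c:forEach>, with no scriptlets at all.)

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/products")
    public class ProductListServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Business logic stays here (or in a service class the servlet delegates to).
            List<String> products = Arrays.asList("Keyboard", "Mouse", "Monitor");
            request.setAttribute("products", products);
            // The JSP only renders the prepared data; it computes nothing itself.
            request.getRequestDispatcher("/WEB-INF/products.jsp").forward(request, response);
        }
    }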
I find it somewhat mind-boggling that in one subject they teach us to keep loose coupling in mind when coding, while in another subject we are being taught to move more and more business logic into the templates.
What bothers me even more is that if one googles solutions to Java EE problems, a high number of results shows solutions where lots of logic happens in a template file, somewhat confirming that this mixing of template and programming language is an accepted way of doing things in the EE world and encouraging aspiring developers to adopt such practices.
Now, from what I've heard, Java for the web doesn't seem to be as big a thing anymore, and if you look at the most popular web apps, hardly any of them are implemented in Java, yet this aspect still amazes me.
So the concrete question here is: why is this high degree of coupling between template and business logic considered good practice in Java EE?
Greetings,
derelektrischemoench
Actually, it is not good practice. I think that a lot of the code you can find on the Internet was written the wrong way for various reasons: it was probably developed to test some functionality rather than to be deployed in a production environment, without any attention to loose coupling, quality issues, et cetera. Moreover, I too always have to search the Internet for various problems. Most of the time I find a solution, but it suffers from tight class coupling, unsafe methods, and so on.
The point is: do not take the code you find on the Internet as an example of how something should be done. Just use it as a suggestion for how you can solve the problem you have, and apply an improved version of that code in your production code. This applies not only to JSP or Java, but more generally to every kind of code you read. Always remember that the code you found somewhere with the help of Google was probably quick "trial and error" code that will never go into production and will never be changed. Your work as a developer is not copying and pasting that code, but organizing it in the most maintainable way possible.
I encourage you to take a look at the SOLID principles. To me, the SOLID principles enforce decoupling and other aspects that help you write better code, which is very important when you write a real-world product, because you are probably going to change it many times in the future. Internet examples are not designed to be improved, just to be quickly understood.
Whenever I need to design an API in Java, I normally start off by opening up my IDE and creating the packages, classes, and interfaces. The method implementations are all dummy, but the Javadocs are detailed, something like the sketch below.
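For example (a purely hypothetical PreferenceStore API, only to show the level of Javadoc detail I mean, with a do-nothing stub standing in as the dummy implementation):

    /** Stores and retrieves user preferences (hypothetical API, for illustration only). */
    public interface PreferenceStore {

        /**
         * Returns the value stored under the given key.
         *
         * @param key          the preference key; must not be null
         * @param defaultValue the value to return when the key is unknown
         * @return the stored value, or {@code defaultValue} if the key has never been set
         * @throws IllegalArgumentException if {@code key} is null
         */
        String get(String key, String defaultValue);

        /**
         * Associates {@code value} with {@code key}, replacing any previous value.
         *
         * @param key   the preference key; must not be null
         * @param value the value to store; must not be null
         */
        void put(String key, String value);
    }

    /** Dummy implementation: compiles, documents intent, does nothing useful yet. */
    class InMemoryPreferenceStore implements PreferenceStore {
        public String get(String key, String defaultValue) {
            throw new UnsupportedOperationException("not implemented yet");
        }
        public void put(String key, String value) {
            throw new UnsupportedOperationException("not implemented yet");
        }
    }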
Is this the best way to go about things? I am beginning to feel that the API documentation should be the first thing to be churned out, even before the first .java file is written. This has a few advantages:
The API designer can complete the design & specification and then split up the implementation among several implementors.
It is more flexible: a change in design does not require one to bounce around among .java files looking for the place to edit the Javadoc comment.
Are there others who share this opinion? And if so, how do you go about starting off with the API design?
Further, are there any tools out there which might help? Probably even some sort of annotation-based tool which generates documentation and then the skeleton source (kind of like model-to-code generators)? I came across Eclipse PDE API tooling - but this is specific to Eclipse plugin projects. I did not find anything more generic.
For an API (and for many types of problems IMO), a top-down approach for problem partitioning and analysis is the way to go.
However (and this is just my 2c based on my own personal experience, so take it with a grain of salt), focusing on the Javadoc part of it is a good thing to do, but that is still not sufficient, and cannot reliably be the starting point. In fact, that is very implementation oriented. So what happened to the design, the modeling and reasoning that should take place before that (however brief that might be)?
You have to do some sort of modeling to identify the entities (the nouns, roles and verbs) that make up your API. And no matter how "agile" one would like to be, such things cannot be prototyped without having a clear picture of the problem statement (even if it is just a 10K foot view of it.)
The best starting point is to specify what you are trying to implement, or more precisely, what type of problems your API is trying to address. BDD might be of help (more of that below). That is, what is it that your API will provide (datum elements), and to whom, performing what actions (the verbs) and under what conditions (the context). That leads then to identify what entities provide these things and under what roles (interfaces, specifically interfaces with a single, clear role or function, not as catch-all bags of methods). That leads to an analysis on how they are orchestrated together (inheritance, composition, delegation, etc.)
Once you have that, then you might be in a good position to start doing some preliminary Javadoc. Then you can start working on the implementation of those interfaces, of those roles. More Javadoc follows (in addition to other documentation that might not fall within Javadoc, i.e. tutorials, how-tos, etc.)
You start your implementation with use cases and verifiable requirements and behavioral descriptions of what each thing should do alone or in collaboration. BDD would be extremely helpful here.
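As a sketch of what such a verifiable behavioural description can look like in practice, here is a plain JUnit test named in a given/when/then style; the ShoppingCart class is invented for the example and kept inline so the snippet stands alone.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ShoppingCartBehaviourTest {

        // Minimal class under design, kept inline so the sketch is self-contained.
        static class ShoppingCart {
            private long totalInCents = 0;
            void add(String item, long priceInCents) { totalInCents += priceInCents; }
            long totalInCents() { return totalInCents; }
        }

        // "Given an empty cart, when an item is added, then the total reflects that item."
        @Test
        public void givenEmptyCart_whenItemAdded_thenTotalEqualsItemPrice() {
            ShoppingCart cart = new ShoppingCart();
            cart.add("book", 1250); // price in cents
            assertEquals(1250, cart.totalInCents());
        }
    }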
As you work, you continuously refactor, ideally guided by some metrics (cyclomatic complexity and some variant of LCOM). These two tell you where you should refactor.
The development of an API should not be inherently different from the development of an application. After all, an API is a utilitarian application for a user (who happens to have a development role.)
As a result, you should not treat API engineering any differently from general software-intensive application engineering. Use the same practices, tune them according to your needs (which everyone who works with software should do), and you'll do fine.
Google has been uploading its "Google Tech Talk" video lecture series on YouTube for quite some time. One of them is an hour-long lecture titled "How To Design A Good API and Why it Matters". You might want to check it out as well.
Some links for you that might help:
Google Tech Talk's "Beyond Test Driven Development: Behaviour Driven Development" : http://www.youtube.com/watch?v=XOkHh8zF33o
Behavior Driven Development : http://behaviour-driven.org/
Website Companion to the book "Practical API Design" : http://wiki.apidesign.org/wiki/Main_Page
Going back to the Basics - Structured Design#Cohesion and Coupling : http://en.wikipedia.org/wiki/Structured_Design#Structured_Design
Defining the interface first is the programming-by-contract style of declaring preconditions, postconditions and invariants. I find it combines well with Test-Driven-Development (TDD), because the invariants and postconditions you write first are the behaviours that your tests can check for.
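As a small, hypothetical sketch of what such a contract-carrying interface can look like (the Account type is invented): each documented postcondition and invariant maps directly to an assertion a first TDD test would make, e.g. withdraw 30 from a balance of 100 and assert that balance() returns 70.

    /** Hypothetical interface used only to illustrate contract-style documentation. */
    public interface Account {

        /**
         * Withdraws the given amount.
         *
         * Precondition:  amount > 0 and amount <= balance()
         * Postcondition: balance() decreases by exactly {@code amount}
         * Invariant:     balance() >= 0
         *
         * @throws IllegalArgumentException if the precondition is violated
         */
        void withdraw(long amount);

        /** @return the current balance, never negative */
        long balance();
    }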
As an aside, it seems the Behaviour-Driven Development elaboration of TDD came about because of programmers who did not habitually think of the interface first.
As for myself, I always prefer to start by writing the interfaces along with their documentation, and only then start on the implementation.
In the past I took another approach, which was starting with UML and then using automatic code generation.
The best tool I encountered for this was Rational Rose, which is not free, but I'm sure there are plenty of free plugins and utilities.
The advantage of Rational Rose over other designers I bumped into was that you can "attach" the design to your code and then modify either the code or the design, and the other will update.
I jump right into coding with a prototype. Any required interfaces soon pop out at you, and you can mould your prototype into a final product. Get feedback along the way from whoever is going to be using your API, if you can.
There is no 'best way' of approaching API design; do whatever works for you. Domain knowledge also has a large part to play.
I'm a great fan of programming to the interface. It forms a contract between the implementors and the users of your code.
Rather than diving straight into code, I usually start with a basic model of my system (UML diagrams etc, depending on the complexity). Not only does this serve as good documentation, it provides a visual clarification of the system structure. Having this makes the coding part much easier to do. This kind of design documentation also makes it easier to understand the system when you come back to it in 6 months, or try to fix bugs :)
Prototyping also has its merits, but be prepared to throw it away and start again.
There seems to be some debate within my current team over refactoring to utilize Java generics. The question I have is: what are the current industry standards in terms of refactoring older Java code to take advantage of some of these features? Of course, by industry standards I am referring to best practices. A link to a book or a site with these listed will be awarded the answer vote, as that is the least subjective way to handle this question.
I don't think that blindly following what somebody else declares to be "best practice" or "industry standard" is ever a good idea. You're in the best position to decide whether changing your code is worthwhile or not.
The questions you need to answer are what benefits will you get from upgrading the old code, what will it cost, and what are the risks?
The main benefit is that you will have improved compile-time type checking, which should help to detect bugs in new code that uses the updated code. It may even highlight bugs in existing code. Code that uses generics, while sometimes quite verbose, is typically more readable as it is explicit about which types are valid in which contexts. You'll also no longer have to suppress/ignore compiler warnings.
The cost is the amount of time it will take to make and test the necessary changes to introduce generics. Any time you make code changes there is a chance that you might introduce bugs, so that's a risk. Do the benefits outweigh the costs? That depends on how much code you have, how it's being used and what other demands you have on your time.
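As a small illustration of the type-checking benefit (the class and variable names are made up): the raw version below compiles with nothing worse than an unchecked warning, while the generic version turns the same mistake into a compile-time error and removes the cast.

    import java.util.ArrayList;
    import java.util.List;

    public class GenericsBeforeAfter {
        public static void main(String[] args) {
            // Before: raw type, cast needed, mistakes only show up at runtime.
            List rawNames = new ArrayList();
            rawNames.add("Alice");
            rawNames.add(Integer.valueOf(42));          // compiles, but is almost certainly a bug
            String first = (String) rawNames.get(0);    // explicit cast required

            // After: generic type, no cast, the bad add() no longer compiles.
            List<String> names = new ArrayList<String>();
            names.add("Alice");
            // names.add(Integer.valueOf(42));          // compile-time error with generics
            String firstName = names.get(0);            // no cast needed

            System.out.println(first + " / " + firstName);
        }
    }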
The papers from this MIT research group might provide you with some useful guidelines:
Efficiently Refactoring Java Applications to Use Generic Libraries. Robert Fuhrer, Frank Tip, Julian Dolby, Adam Kiezun and Markus Keller. ECOOP 2005, Object-Oriented Programming, 19th European Conference (Glasgow, Scotland), July 25-29, 2005.
Refactoring Techniques for Migrating Applications to Generic Java Container Classes. Frank Tip, Robert Fuhrer, Julian Dolby, and Adam Kiezun. IBM T.J. Watson Research Center, IBM Research Report RC 23238 (Yorktown Heights, NY, USA), June 2, 2004.
Utilizing Java generics is definitely a good idea. It is backward compatible, so the code base you cannot convert will continue to work with the new code.
EDIT: I should have mentioned type erasure as the reason it is backward compatible.
Best practices for adopting generics? The first best practice is "do". Try to eliminate as many casts as you can from your code. If you want to make your life easier, use IntelliJ's "Generify" refactoring -- just point it at your entire codebase, let it do its thing, and then do a little post-cleanup.
Your team should be balancing the benefits of a large-scale refactoring against the cost and the technical and business risks of doing this ... and the other priorities that your team has.
"Best practice" arguments and opinions from people who don't understand your project and the business context are simply not relevant here.
Best practices don't exist. That's a weird term that suggests the door closes on the 'bestness' of a particular solution... Use generics? Yes. Immediately. It's an awkward journey, since so many of the big libraries (Hibernate, Spring) still fail to embrace them completely... but in my experience, dealing with a mix of generics and brave casts still makes for a better code base than not using them at all.
I'd also make it policy to convert-as-you-touch instead of some sort of giant refactoring mission.
It is usually a good idea to refactor to use generics.
It's not required, so don't treat this as an urgent task, but you can consider not using generics a mild form of technical debt; if you have time available and the code base is expected to have a long, ongoing life, then it is worth investing in upgrading it.
The main benefits are:
Better compile-time type checking, which will reduce errors
Removal of unnecessary casts from your source code, which makes the code more readable
It is still backwards compatible with old code (thanks to type erasure)
There is no real downside - the only potential issue I can think of is if you ever wanted to port the code back to earlier versions of Java without generic support. But that would be a very unusual thing to do!
If you do decide to refactor into generics then I recommend the following steps (a small before/after sketch follows the list):
Turn on all your compiler warnings (Eclipse has pretty good warnings)
Add the generic type to your class first e.g. MyClass<T>
Then change the type of any method signatures / internal fields / data structures to use T
You will probably have many warnings / errors throughout the code at this point. This is OK, just work through them and fix them all. Often your IDE may be able to "quick fix" many of them.
Write / refactor test cases that demonstrate the generic features
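A minimal before/after sketch of steps 2 and 3 above; the container class is made up, and the 'before' version is renamed RawHolder only so that both versions can sit side by side and compile.

    // Before: a raw container that forces callers to cast.
    class RawHolder {
        private Object value;
        void set(Object value) { this.value = value; }
        Object get() { return value; }
    }

    // After: the same class once the type parameter T has been threaded through
    // the field and the method signatures.
    class Holder<T> {
        private T value;
        void set(T value) { this.value = value; }
        T get() { return value; }
    }

    // Client code goes from:  String s = (String) rawHolder.get();
    // to:                     String s = holder.get();   // no cast, checked at compile time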
It is fairly quick to do all this - I think I managed to refactor about 10,000 lines of Java library code to use generics in less than one day, which included updating some client code.
Is there any rational reason why native properties will not be part of Java 7?
There are some high-level reasons related to schedule and resources of course. Implementation of properties and understanding all of the ramifications and intersections with other language features is a large task similar to the size of various Java 5 language changes.
But I think the real reason Sun is not pushing properties is the same as closures:
1) There is no consensus on what the implementation should look like. Or rather, there are many competing alternatives and people who are passionate about properties disagree about crucial parts of the implementation.
2) Perhaps more importantly, there is a significant lack of consensus about whether the feature is wanted at all. While many people want properties, there are also many people that don't think it's necessary or useful (in particular, I think server-side people see properties as far less crucial to their daily life than swing programmers).
Properties history here:
http://tech.puredanger.com/java7#property
Doing properties "right" in Java will not be easy. RĂ©mi Forax's work especially has been valuable in figuring out what this might look like, and uncovering a lot of the "gotchas" that will have to be dealt with.
Meanwhile, Java 7 has already taken too long. The closures debate was a huge, controversial distraction that wasted a lot of mind-power that could have been used to develop features (like properties) that have broad consensus of support. Eventually, the decision was made to limit major changes to modularization (Project Jigsaw). Only "small change" is being considered for the language (under Project Coin).
JavaFX has beautiful property support, so Sun clearly understands the value of properties and knows how to implement them. But having been spoiled by JavaFX properties, developers are less likely to settle for a half-baked implementation in Java. If they are worth doing, they are worth doing right.
Any given thing is "not done" by default, so no particular reason is needed for something to remain not done. Rather some compelling reason is needed to move something from "not done" to "planned" or "done". No sufficiently compelling reason has yet arisen for this language feature.
There are two more reasons to avoid properties in any language:
Properties are not very object-oriented. Making them easy to write encourages the pattern where the object just serves up its internal state and the caller manipulates it. The object should provide higher-level methods and keep its internals private. Next time you're tediously implementing a getter, consider what the caller will do with the data and whether you can just provide that functionality directly.
Properties encourage mutable state (through setters), which makes a program less parallelizable. As the number of cores goes up, we should all be trying to make our objects immutable to make concurrent reasoning easier. Next time you're tediously implementing a setter, consider removing it and making the object immutable.
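A small, hypothetical illustration of both points: instead of exposing a getAmount()/setAmount() pair and letting callers do the arithmetic, the object below offers the higher-level operation itself and stays immutable.

    public final class Money {
        private final long cents;
        private final String currency;

        public Money(long cents, String currency) {
            this.cents = cents;
            this.currency = currency;
        }

        /** Returns a new Money; the original is never modified. */
        public Money plus(Money other) {
            if (!currency.equals(other.currency)) {
                throw new IllegalArgumentException("currency mismatch");
            }
            return new Money(cents + other.cents, currency);
        }

        @Override
        public String toString() {
            return cents + " " + currency;
        }
    }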
Not enough time?
Not yet specced properly?
Difficult to add to Java due to Java's implementation?
Deemed not important enough, i.e. other things were prioritised?
I am primarily a Java developer, and I've been reading a lot of in-depth work on threads and concurrency. Many very smart people (Doug Lea, Brian Goetz, etc) have authored books on these topics and made contributions to new concurrency libraries for Java.
As I start to learn more about Python, Ruby, and other languages, I'm wondering: does all of that work have to be re-created for these languages?
Will there be, or does there need to be, a "Doug Lea of Python," or a "Brian Goetz of Ruby," who make similarly powerful contributions to the concurrency features of those languages?
Does all of this concurrency work done in Java have to be re-created for future languages? Or will the work done in Java establish lessons and guidance for future languages?
The basic principles of concurrent programming existed before Java and were summarized in those Java books you're talking about. The java.util.concurrent library was similarly derived from previous code and research papers on concurrent programming.
However, some implementation issues are specific to Java. It has a specified memory model, and the concurrent utilities in Java are tailored to the specifics of that. With some modification those can be ported to other languages/environments with different memory model characteristics.
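To give a tiny, concrete example of the kind of guarantee the Java memory model pins down (and that another language or runtime may define differently): without the volatile keyword below, the worker thread is not guaranteed ever to observe the write made by the main thread; with it, the visibility is guaranteed. The class is invented purely for illustration.

    public class StopFlag {

        // Without volatile, the Java Memory Model does not guarantee that the worker
        // thread ever sees the write made by main(); with volatile, it must.
        private static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(new Runnable() {
                public void run() {
                    while (running) {
                        // busy work
                    }
                    System.out.println("worker observed the stop signal");
                }
            });
            worker.start();

            Thread.sleep(100);
            running = false;   // this write is guaranteed to become visible to the worker
            worker.join();
        }
    }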
So, you might need a book to teach you the canonical usage of the concurrency tools in other languages but it wouldn't be reinventing the wheel.
Keep in mind that threads are just one of several possible models for dealing with "concurrency". Python, for example, has one of the most advanced asynchronous (event-based), non-threaded models in Twisted. Non-blocking models are quite powerful and are used as alternatives to threads in most of the highest-scaling apps out there (e.g. nginx, lighttpd).
Your assumption that other popular languages need threads may simply be a symptom of a Java-centric (and hence thread-centric) world view. Take a look at the C10K page for a slightly dated but highly informative look at several models for handling large volumes of concurrent requests.
I think the answer is both yes and no. Java arguably has the most well-defined memory model and execution semantics of the most commonly used imperative languages (Java, C++, Python, Ruby, etc). In some sense, other languages either lack this completely or are playing catch-up (if that's even possible given the immaturity of the threading models).
C++ is probably the notable exception - it has been treading the same ground for C++0x and has possibly gone beyond the current state of the Java model from my impression.
I say no because the communities are not isolated. Many of the guys working on this stuff are involved (at least from a guidance point of view, if not from a direct hand in the specs) in more than one language. So, there is a lot of crosstalk between guys working on JMM and guys working on C++0x specs as they are essentially solving the same problems with many of the same underlying drivers (from the hardware guys at the bottom and the users at the top). And I'm pretty sure there is cross-talk at some level between the JVM / CLR camps as well.
As others have mentioned, there are also other models for concurrency: actors in Erlang and Scala, agents/STM in Clojure, FP's rise in F#, Scala, Haskell, the CCR and PLINQ stuff in CLR land, etc. It's an exciting time right now! We can use as many concurrency experts as we can find I think.... :)
This is not flame bait, but IMHO Java has one of the simpler and more restricted models for threading and concurrency available.
That's not necessarily a bad thing, but at the level of granularity it offers it means that the perspective it gives you of what concurrency is and how to deal with it is inherently limited if you have a "java centric" view (as someone else put it).
If you're serious about concurrency, then it's worth exploring other languages precisely because different models and idioms exist.
Some of the hottest areas are lock-free programming (you'll see a lot of it, but often done badly, in C++) and functional programming (which has been around for a while but arguably, is becoming increasingly relevant. A prime example in the case of concurrency is probably Erlang).
Kamaelia is a project (which I started, and continue to work on) that has specifically the goal of making concurrency a tool you want to use, rather than a pain to use. In practical terms this means that it is primarily a shared-nothing, message-passing model (based on a world view from Occam & Unix pipes).
Underlying that goal is a desire to make concurrency easy to use for the average developer, shielding them from the nastier problems caused by a number of approaches to concurrency. (There's a bunch of presentations on slideshare explaining the why & how there)
Additionally it provides a simple software transactional memory model for the situations where you must share data, and uses a deliberately simple API.
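This is not Kamaelia code, but to give a rough flavour of the shared-nothing, message-passing idea in plain Java (since this thread is rather Java-centric): the two components below share nothing except the queue they exchange messages over, and the "STOP" sentinel is just an ad-hoc convention for the sketch.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class PipelineSketch {

        public static void main(String[] args) throws InterruptedException {
            // The only thing the two components share is the queue they pass messages over.
            final BlockingQueue<String> outbox = new ArrayBlockingQueue<String>(10);

            Thread producer = new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int i = 0; i < 3; i++) {
                            outbox.put("message " + i);   // send, never touch the consumer's state
                        }
                        outbox.put("STOP");
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });

            Thread consumer = new Thread(new Runnable() {
                public void run() {
                    try {
                        String msg;
                        while (!(msg = outbox.take()).equals("STOP")) {
                            System.out.println("received: " + msg);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }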
Kamaelia's primary implementation is in Python, with toy implementations in Ruby & C++. Someone else has ported the underlying approach to E and also to Java (though the Java person has disappeared). The toy implementations are sanity checks that the ideas can work in other languages, even if they need to be recast as local idioms.
Perhaps your question shouldn't be "what can these languages learn?", but "what can the Java community learn by looking elsewhere?". Many people who learn Python are linguistic immigrants from elsewhere and bring their knowledge of other languages with them, so from where I sit it looks like Python already looks to other languages for inspiration.
Picking something concrete: this speak-and-write application, a tool for teaching a small child to read and write, based around pen input, handwriting recognition, and speech synthesis, uses several dozen concurrent subsystems, runs at an acceptable speed on a single-core machine, and would be easily amenable to running on a many-core machine. However, the reason for the number of concurrent subsystems has nothing to do with "wanting to make the application parallel" and more to do with "How can I make the application easier to write, extend and maintain?". The fact that it ends up embarrassingly parallel is a secondary bonus.
There's a full tutorial - Pragmatic Concurrency - linked from the front page. (Notes, slides, video & code bundle)
The model can be improved, and suggestions are welcome; life would be very boring if we all just "stopped" trying to make better tools, but ignoring what already exists seems a little... parochial. If that seems a little harsh, please look at today's Dilbert.