Why does the microservices architecture mindset see benefit in duplicating all REST/JMS clients so each service has its own independent code?
Does the trade-off really work out, given the comfort and maintainability of a single library/adapter for the server spec that everybody uses as an abstraction?
What are the benefits in practice?
As this is on the brink of asking for opinion, let's stick to the facts at hand.
The core thing to understand: no concept should ever be seen as unalterable law. You don't follow rules because they are rules, but because they provide helpful guidance for your design decisions.
In that sense this is about balancing. When you can really abstract common infrastructure elements, then it is good practice to avoid code duplication where possible.
So rather than creating more and more copies of the same class, you turn that into an "internal library" and have your service instances use that. The downside here is of course that all services using this library now have a dependency on it.
Coming from there, you would strive to really provide a framework. Meaning: the base, common parts are implemented only once, but in a fashion that allows the different services to configure/extend/enhance the framework where necessary.
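To make that concrete, here is a minimal sketch of such a framework-style base (all class names and URLs are invented for illustration): the shared client logic is written once, and each service overrides a single hook instead of copying the whole client.

```java
// Common client code, implemented once for all services.
abstract class BaseRestClient {

    private final String baseUrl;

    protected BaseRestClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Shared behaviour: URL assembly, and in a real client also
    // retries, logging, error handling, and so on.
    public String get(String path) {
        return execute(baseUrl + path);
    }

    // Extension hook: services enhance behaviour here instead of
    // duplicating the base code.
    protected String execute(String url) {
        return "GET " + url; // placeholder for the actual HTTP call
    }
}

// One service's specialization: it only adds its own concerns.
class OrderServiceClient extends BaseRestClient {
    OrderServiceClient() {
        super("https://orders.example.internal");
    }

    @Override
    protected String execute(String url) {
        // service-specific auth, tracing, etc. would go here
        return super.execute(url);
    }
}
```

The dependency on the shared library remains, but each service now owns only its deltas, not a full copy of the client.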
Related
I have heard that tightly coupled code is hard to unit test. I don't understand how. Can somebody explain with an example?
Tight coupling means that you use implementations instead of interfaces, reducing the array of options when it comes to creating mock implementations and other testing utilities. This can be mitigated by using mocking frameworks (like Mockito for Android), but it should nonetheless be avoided, as it is bad practice.
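Here is a minimal sketch of the difference (all names invented for illustration). The tightly coupled class constructs its collaborator itself, so a test cannot substitute a fake; the loosely coupled one depends on an interface, which a lambda or a Mockito mock can stand in for.

```java
// Imagine this class performs real network I/O.
class HttpWeatherApi {
    int fetchTemperature() { return 0; /* a real HTTP call in practice */ }
}

// Tightly coupled: the concrete dependency is hard-wired, so every
// test of today() drags the network code along with it.
class TightlyCoupledWeather {
    private final HttpWeatherApi api = new HttpWeatherApi();
    int today() { return api.fetchTemperature(); }
}

// Loosely coupled: depend on an interface and let the caller inject it.
interface WeatherApi {
    int fetchTemperature();
}

class Weather {
    private final WeatherApi api;
    Weather(WeatherApi api) { this.api = api; }
    int today() { return api.fetchTemperature(); }
}

class WeatherTest {
    public static void main(String[] args) {
        // A lambda serves as a trivial fake; a Mockito mock would play
        // the same role with more features.
        Weather weather = new Weather(() -> 21);
        System.out.println(weather.today() == 21); // true, no network needed
    }
}
```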
However, this is probably the least problematic aspect of highly coupled code. It is generally discouraged because it limits possibilities for refactoring and/or extension. You should always keep some level of abstraction in your code, to be able to easily implement new modules and change current implementations. But do not overdo it, because programs with lots of one-to-one interface-implementation pairs are very redundant and hard to debug.
In general, you should have a look at some open-source projects and see how those are tested (for Android, check out the Google I/O app for example) and how the testing approach is reflected in the code. It all comes with experience and there is no better way to learn it than by analyzing how pros do it :-)
Our company has purchased an SDK that can be used for Android apps. The SDK is written in Java and it's very big.
I have to create a wrapper for this SDK so our partners don't need to use the SDK directly, but call our wrapper functions instead.
I am wondering what the best design pattern is for a job like this? I have been looking at the proxy design pattern, but I'm not sure if it's the correct one.
Many thanks for any suggestions,
The one that comes to mind is the Facade pattern.
From the Wikipedia description:
A facade is an object that provides a simplified interface to a larger
body of code, such as a class library.
The facade pattern is one of the original design patterns that was described in the GoF book. There the description reads:
Provide a unified interface to a set of interfaces in a subsystem.
Facade defines a higher-level interface that makes the subsystem
easier to use.
Seems to fit your use case perfectly.
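A minimal sketch of what that could look like (the Sdk* classes stand in for the purchased SDK; all names are invented): partners see one small facade class, while the multi-step SDK choreography stays hidden behind it.

```java
// Stand-ins for the purchased SDK's classes.
class SdkSession  { void open(String key) { /* SDK internals */ } }
class SdkUploader { void send(SdkSession s, byte[] data) { /* SDK internals */ } }

// The only class partners ever see.
class PartnerFacade {
    private final SdkSession session = new SdkSession();
    private final SdkUploader uploader = new SdkUploader();

    public void connect(String apiKey) {
        session.open(apiKey);           // hides SDK session handling
    }

    public void upload(byte[] data) {
        uploader.send(session, data);   // one call instead of several SDK steps
    }
}
```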
This sounds like the classic case for the facade pattern. You want to design an API for your partners that captures your high-level use cases. The implementation would delegate to this purchased SDK, but without allowing its implementation details to propagate up to your partner's code.
http://en.wikipedia.org/wiki/Facade_pattern
The proxy pattern looks less relevant to this problem. Proxy tends to be a one-for-one mapping to the wrapped API just for infrastructure requirements. The classic example is remote method invocation.
http://en.wikipedia.org/wiki/Proxy_pattern
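For contrast, a proxy keeps the exact interface of what it wraps and only adds an infrastructure concern, which is rarely what an SDK wrapper needs. A sketch with invented names:

```java
interface ReportService {
    String generate(String id);
}

class RealReportService implements ReportService {
    public String generate(String id) { return "report-" + id; }
}

// Same interface, one-for-one delegation; only logging is added.
class LoggingReportServiceProxy implements ReportService {
    private final ReportService target;

    LoggingReportServiceProxy(ReportService target) { this.target = target; }

    public String generate(String id) {
        System.out.println("generate(" + id + ")"); // infrastructure concern
        return target.generate(id);                 // unchanged delegation
    }
}
```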
The Facade pattern sounds like the most appropriate one.
However, choosing the best design pattern is not a guarantee of success. Whether you will succeed really depends on the nature of the SDK that you are attempting to "wrap", and what you are really trying to achieve by wrapping it.
For example, in a past project I made the mistake of using facade-like wrappers to abstract over two different triple-store APIs. I wanted to have the freedom to switch triple-store implementations. It was a bad idea. I essentially ended up mirroring a large subset of the API functionality in the facade. It was a major implementation effort, and the end result was no simpler. In hindsight, it was simply not worth it.
In summary:
If you can make your facade API small and simple, this is a good idea. (The ideal facade is one that simply hides a whole bunch of complexity or configurability that you don't want the clients to use.)
If your facade is large and complicated, and/or entails complicated "mapping" to the actual APIs you are wrapping, you are liable to have a lot of coding to do, and you may end up with something that is objectively worse than what you started with.
I don't think this question can be answered without knowing exactly how your partners are going to use your wrapper. Design an API that's suitable for them and then just delegate the calls to the appropriate (series of) SDK calls.
You might call that a facade if you wish, but I think your task is bigger. I'd say it's a new layer in a layered architecture. In that layer you can use all the patterns you see fit.
Aspects, Macros, Reflection, and other niceties - the good parts
I've noticed that "meta programming" tricks (in the Clojure world, functions have metadata; in the OO world, we have concepts like reflection, AOP, etc...) can be a good way to decouple and extend the functionality of existing code without editing it. Such tricks allow us to intercept, redirect, and wrap functional pieces of our code so it can be extended in a highly dynamic way.
The scary part
However, as many have claimed, overuse of macros can make code difficult to understand. The "blackboard" software architecture pattern, where several agents modify or edit a common resource, can be dangerous if we don't manage the creation of those agents carefully. Finally, I would informally note that the long-standing popularity of C++ and Java is at least partially due to the fact that they are "no-surprises" languages, where code is clear, explicit, and procedural.
The problem: does the promise of dynamic code injection techniques for reducing boilerplate and decoupling feature sets require a "new" way of thinking about documentation, class design, and software engineering?
My Questions
Does the way we document/deploy normal code, manage source packages, and integrate libraries require different or new techniques when we begin accommodating meta-programming methods alongside our more traditional OO methodologies?
For example, should we consider the use of meta-programming as an alternative to other, more conventional OO programming techniques?
Is there a general set of known red flags introduced by meta-programming, and how can we avoid them?
What are the best use cases for aspects, reflection, and other dynamic software techniques?
I find that AOP is something that needs to be used very carefully in a software project and should have a well-defined purpose. I find it useful for some boilerplate concerns like transaction demarcation, security, and logging, but it is really easy to get yourself in trouble with AOP, and it can become a major source of accidental complexity.
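As an illustration of the boilerplate case, here is a sketch of a timing/logging aspect, assuming Spring AOP with AspectJ annotations (the pointcut expression and package name are invented):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// One aspect adds timing around every service method, so none of the
// business classes have to carry that code themselves.
@Aspect
public class TimingAspect {

    @Around("execution(* com.example.service..*(..))")
    public Object time(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.nanoTime();
        try {
            return joinPoint.proceed(); // run the intercepted method
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(joinPoint.getSignature() + " took " + elapsedMs + " ms");
        }
    }
}
```

The flip side is exactly the accidental complexity mentioned above: nothing in the service classes hints that this interception happens.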
"It depends" :) ... That's what is probably the best answer for all subjective questions in programming world.
I would suggest that before you use any technique like AOP or DI, please give very serious thought to whether you really, really need it. We as programmers tend to get very fascinated by these new tricks and techniques, which make us see (superficial) beauty in code. The real beauty of code that we should strive for is simplicity and nothing else.
Remember every new trick/technique/framework you add to a system will increase the complexity of the system (probably exponentially).
I personally go by the idea of: Build Programs not Applications, Build libraries not frameworks.
Here's a quote from SICP (Structure and Interpretation of Computer Programs) that might be relevant to the discussion:
"It is no exaggeration to regard this as the most fundamental idea in programming:
The evaluator, which determines the meaning of expressions in a programming language, is just another program.
To appreciate this point is to change our images of ourselves as programmers. We come to see ourselves as designers of languages, rather than only users of languages designed by others."
Whenever I have the need to design an API in Java, I normally start off by opening up my IDE and creating the packages, classes, and interfaces. The method implementations are all dummy, but the Javadocs are detailed.
Is this the best way to go about things? I am beginning to feel that the API documentation should be the first thing to be churned out, even before the first .java file is written. This has a few advantages:
The API designer can complete the design & specification and then split up the implementation among several implementors.
More flexible: a change in design does not require one to bounce around among .java files looking for the place to edit the Javadoc comment.
Are there others who share this opinion? And if so, how do you go about starting off with the API design?
Further, are there any tools out there which might help? Perhaps even some sort of annotation-based tool which generates documentation and then the skeleton source (kind of like model-to-code generators)? I came across Eclipse PDE API Tooling, but it is specific to Eclipse plug-in projects. I did not find anything more generic.
For an API (and for many types of problems IMO), a top-down approach for problem partitioning and analysis is the way to go.
However (and this is just my 2c based on my own personal experience, so take it with a grain of salt), focusing on the Javadoc part of it is a good thing to do, but it is still not sufficient and cannot reliably be the starting point. In fact, it is very implementation-oriented. So what happened to the design, modeling, and reasoning that should take place before that (however brief it might be)?
You have to do some sort of modeling to identify the entities (the nouns, roles, and verbs) that make up your API. And no matter how "agile" one would like to be, such things cannot be prototyped without having a clear picture of the problem statement (even if it is just a 10,000-foot view of it).
The best starting point is to specify what you are trying to implement, or more precisely, what type of problems your API is trying to address. BDD might be of help (more of that below). That is, what is it that your API will provide (datum elements), and to whom, performing what actions (the verbs) and under what conditions (the context). That leads then to identify what entities provide these things and under what roles (interfaces, specifically interfaces with a single, clear role or function, not as catch-all bags of methods). That leads to an analysis on how they are orchestrated together (inheritance, composition, delegation, etc.)
Once you have that, then you might be in a good position to start doing some preliminary Javadoc. Then you can start working on the implementation of those interfaces, of those roles. More Javadoc follows (in addition to other documentation that might not fall within Javadoc, i.e. tutorials, how-tos, etc.)
You start your implementation with use cases and verifiable requirements and behavioral descriptions of what each thing should do alone or in collaboration. BDD would be extremely helpful here.
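A sketch of what such a behavioural description can look like as an executable test, assuming JUnit 5 (the Account type and its methods are invented here just to show the given/when/then shape; the test is written first and the minimal implementation follows from it):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class AccountBehaviourTest {

    @Test
    void withdrawingReducesTheBalance() {
        // given an account with funds
        Account account = new Account(100);
        // when the holder withdraws part of it
        account.withdraw(30);
        // then the balance reflects the withdrawal
        assertEquals(70, account.balance());
    }
}

// The minimal implementation the behaviour drives out:
class Account {
    private int balance;
    Account(int balance) { this.balance = balance; }
    void withdraw(int amount) { balance -= amount; }
    int balance() { return balance; }
}
```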
As you work on it, you continuously refactor, ideally guided by some metrics (cyclomatic complexity and some variant of LCOM). These two tell you where you should refactor.
The development of an API should not be inherently different from the development of an application. After all, an API is a utilitarian application for a user (who happens to have a development role).
As a result, you should not treat API engineering any differently from general software-intensive application engineering. Use the same practices, tune them according to your needs (as everyone who works with software should), and you'll do fine.
Google has been uploading its "Google Tech Talk" video lecture series on YouTube for quite some time. One of them is an hour-long lecture titled "How To Design A Good API and Why it Matters". You might want to check it out as well.
Some links for you that might help:
Google Tech Talk's "Beyond Test Driven Development: Behaviour Driven Development" : http://www.youtube.com/watch?v=XOkHh8zF33o
Behavior Driven Development : http://behaviour-driven.org/
Website Companion to the book "Practical API Design" : http://wiki.apidesign.org/wiki/Main_Page
Going back to the Basics - Structured Design#Cohesion and Coupling : http://en.wikipedia.org/wiki/Structured_Design#Structured_Design
Defining the interface first is the programming-by-contract style of declaring preconditions, postconditions and invariants. I find it combines well with Test-Driven-Development (TDD), because the invariants and postconditions you write first are the behaviours that your tests can check for.
As an aside, the Behaviour-Driven-Development elaboration of TDD seems to have come about because of programmers who did not habitually think of the interface first.
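A small sketch of what "interface first" can look like when the contract lives in the Javadoc and the postconditions double as test assertions (all names are invented for illustration):

```java
/** A stack of ints, specified contract-first. */
interface IntStack {
    /** Postcondition: size() grows by one and peek() returns value. */
    void push(int value);

    /** Precondition: size() > 0. Returns the latest value without removing it. */
    int peek();

    /** Invariant: the result is never negative. */
    int size();
}

// The postconditions translate directly into a check that any
// implementation of the interface must pass.
class IntStackContractCheck {
    static void check(IntStack stack) {
        int before = stack.size();
        stack.push(42);
        if (stack.size() != before + 1) throw new AssertionError("size postcondition");
        if (stack.peek() != 42)         throw new AssertionError("peek postcondition");
    }
}
```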
As for myself, I always prefer starting with writing the interfaces along with their documentation, and only then starting on the implementation.
In the past I took another approach, which was starting with UML and then using automatic code generation.
The best tool I encountered for this was Rational Rose, which is not free, but I'm sure there are plenty of free plugins and utilities.
The advantage of Rational Rose over other designers I bumped into was that you can "attach" the design to your code and then modify either the code or the design, and the other will update.
I jump right in with the coding, with a prototype. Any required interfaces soon pop out at you, and you can mould your prototype into a final product. Get feedback along the way from whoever is going to be using your API, if you can.
There is no 'best way' to approach API design; do whatever works for you. Domain knowledge also has a large part to play.
I'm a great fan of programming to the interface. It forms a contract between the implementors and the users of your code.
Rather than diving straight into code, I usually start with a basic model of my system (UML diagrams etc, depending on the complexity). Not only does this serve as good documentation, it provides a visual clarification of the system structure. Having this makes the coding part much easier to do. This kind of design documentation also makes it easier to understand the system when you come back to it in 6 months, or try to fix bugs :)
Prototyping also has its merits, but be prepared to throw it away and start again.
In web development we use Java, .NET, or PHP5.
All are OO and have all the OO features.
But in real-world websites, do these OO features play any major role other than inheritance?
Can anybody list some, with real-world examples?
In application development there may be vast use of OO.
But in the case of websites, does OO play a major role?
How?
Back in 1995 the book Design Patterns managed to produce this phenomenal insight on page 20:
Favor object composition over class inheritance
Since then, proper object-orientation has been about that instead of inheritance. Particularly, the SOLID principles describe a set of principles for object-orientation that are applicable for any code base where maintainability is important. Polymorphism is important, but inheritance is irrelevant.
That applies to web applications as well as any other type of application.
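A minimal sketch of what "composition over inheritance" looks like in code (all names invented for illustration): behaviour is an interface the class composes, and polymorphism works without any subclassing.

```java
interface DiscountPolicy {
    double apply(double price);
}

class Checkout {
    private final DiscountPolicy discount; // composed, not inherited

    Checkout(DiscountPolicy discount) { this.discount = discount; }

    double total(double price) {
        return discount.apply(price);
    }
}

class Demo {
    public static void main(String[] args) {
        // Behaviour is swapped by composing a different policy.
        Checkout regular = new Checkout(p -> p);
        Checkout sale    = new Checkout(p -> p * 0.9);
        System.out.println(regular.total(100)); // 100.0
        System.out.println(sale.total(100));    // 90.0
    }
}
```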
I make heavy use of encapsulation and polymorphism. Specifically I use the Strategy Pattern (among others) pretty heavily to compartmentalize a great deal of my functionality. When combined with dependency injection, it makes it really easy for me to separate functionality of say, my persistence layer, from my business logic or presentation layer.
For instance, it's trivial to swap out a Hibernate implementation for a JDBC implementation, etc. Actually, recently I switched from an e-mail service that operates synchronously with the web request to one that uses a message queue to send mail asynchronously. All I had to do once I'd implemented the new layer was change which class was injected into the beans that use it.
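That mail swap, reduced to a minimal strategy sketch (all class names invented for illustration): both senders share one interface, the consumer only sees the interface, and swapping implementations is a change in the wiring, not in the consumer.

```java
interface MailSender {
    void send(String to, String body);
}

class SynchronousMailSender implements MailSender {
    public void send(String to, String body) {
        // talk to the SMTP server during the web request
    }
}

class QueueingMailSender implements MailSender {
    public void send(String to, String body) {
        // enqueue the message; a worker delivers it later
    }
}

class SignupController {
    private final MailSender mailSender; // injected; this class never changes

    SignupController(MailSender mailSender) { this.mailSender = mailSender; }

    void register(String email) {
        mailSender.send(email, "Welcome!");
    }
}
```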
Edit: To address your comment, #zod, I don't use it so much with regard to the pages that are executed, although that does happen from time to time (for instance, I have different classes for HTML email and plain-text email depending on what the user has requested); I primarily make use of OO principles in the configuration of the application. Does that make sense?
As a start, the main principles of OO come in handy. Take, for example, data encapsulation in an MVC pattern: the fact that you can have a User model that does all the user stuff means that everything that has to do with users is encapsulated in that one model. This makes it easier to add and modify features later on. The extensibility also comes in handy when you want to extend your program with code other people have written. As long as you know the public interface of their classes, you can use them.
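A tiny sketch of that encapsulation (the User model and its methods are invented for illustration): callers only see the public interface, and every user-related rule lives inside the one model.

```java
class User {
    private String email;           // hidden state
    private boolean emailVerified;

    User(String email) { this.email = email; }

    // All verification rules are encapsulated here; controllers just call it.
    public void verifyEmail(String token) {
        this.emailVerified = checkToken(token);
    }

    public boolean canLogIn() { return emailVerified; }

    private boolean checkToken(String token) { return token != null; }
}
```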
To give a simple example, we as a company use it heavily for security. We have plugins for various frameworks that control who can reach which class and method. In addition, we can prevent a user from reaching a class or method without adding extra lines to the class.
Other than that, we use it to clarify the code as well.
Besides all of that, OOP is a good way to build a big project with a big team.