I have some doubts about how to implement a simple scenario for my style exercises following the Hexagonal architecture (Ports & Adapters pattern).
I have a use case (or service) that has to send a Contact (received from a contact form) to an external API service.
public class SendContactUseCase {

    // output port interface
    private final SendContact sendContact;

    public SendContactUseCase(SendContact sendContact) {
        this.sendContact = sendContact;
    }

    public void sendContact(ContactRequest req) {
        sendContact.send(req);
    }
}
So I have created an output port with a method send(ContactRequest req); after that, I implemented this interface as a driven adapter, where I put the code for the communication with the API.
// driven adapter
public class SendContactToAPIAdapter implements SendContact {

    // private final PossibleAPILib possibleAPIlib...;

    @Override
    public boolean send(ContactRequest req) {
        // HERE the code to communicate with the API
    }
}
Here is my doubt: what if, at a later moment, the need arises to send the "Contact" to another channel as well, for example as an XML attachment to a specific email address?
If I have understood the Ports & Adapters pattern correctly, to hide the infrastructure "logic" from the use case I should implement a new driven adapter from the same port, in order to inject the correct adapter into the use case.
public class SendContactXMLAdapter implements SendContact
But what if I should invoke both adapters, because I have to send the contact to both systems?
Should I create a third adapter that hides the logic of calling both systems?
I hope I've been clear.
Thanks, folks.
The answer depends on what you actually want to accomplish.
To me, your anticipation of the 'need to send the "Contact" also to another channel' sounds like an additional feature of your business logic.
Based on that assumption, I would implement two separate port/adapter pairs, one for the API and one for XML. Your use case can then decide which ports to call.
If you really always want to send the contact to the API and also export it as XML, you could leave your use case untouched and just extend the implementation of your SendContactToAPIAdapter to do both. But then you should also consider renaming it.
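A minimal sketch of the two-port option (the port names and the ContactRequest shape are illustrative, not from the question):

```java
// Two separate output ports; the use case decides which channels to call.
interface SendContactToApi {
    void send(ContactRequest req);
}

interface ExportContactAsXml {
    void export(ContactRequest req);
}

class ContactRequest {
    final String name;
    final String email;

    ContactRequest(String name, String email) {
        this.name = name;
        this.email = email;
    }
}

class SendContactUseCase {
    private final SendContactToApi api;
    private final ExportContactAsXml xmlExport;

    SendContactUseCase(SendContactToApi api, ExportContactAsXml xmlExport) {
        this.api = api;
        this.xmlExport = xmlExport;
    }

    // The business rule "send to both channels" lives in the Core,
    // not in an adapter.
    void sendContact(ContactRequest req) {
        api.send(req);
        xmlExport.export(req);
    }
}
```

Since the ports are interfaces owned by the Core, the adapters behind them can be swapped without the use case noticing.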
public interface ContactRegisterer {
    public void registerContact(Contact contactToRegister) throws ContactRegisterException;
}
You could use this interface (it represents a port used by the higher-level logic of your application) so that where and how the contact is saved can be changed independently from the client. This interface could be used to register a contact to a single destination or to multiple different destinations. For instance, you could register to a file, a database, a service, or all of them.
When you register to a single destination, all you need is a different implementation of the interface for that destination.
When you register to multiple destinations, you could use the Decorator design pattern: create a wrapper that implements the interface and also contains different objects implementing the same interface. The wrapper is given to the client (it has the same interface, so the client is not aware of the fact that it has been given a wrapper); the wrapper in turn could use multiple threads for registering to each destination, or a simple loop within one thread.
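A single-threaded sketch of that wrapper (Contact and ContactRegisterException are assumed to exist alongside the interface above):

```java
import java.util.List;

class Contact {
    final String name;
    Contact(String name) { this.name = name; }
}

class ContactRegisterException extends Exception {}

interface ContactRegisterer {
    void registerContact(Contact contactToRegister) throws ContactRegisterException;
}

// The wrapper implements the same port and fans out to every destination,
// so the client cannot tell it apart from a single-destination registerer.
class CompositeContactRegisterer implements ContactRegisterer {
    private final List<ContactRegisterer> destinations;

    CompositeContactRegisterer(List<ContactRegisterer> destinations) {
        this.destinations = destinations;
    }

    @Override
    public void registerContact(Contact contactToRegister) throws ContactRegisterException {
        for (ContactRegisterer destination : destinations) {
            destination.registerContact(contactToRegister);
        }
    }
}
```

A threaded variant would submit each destination's call to an ExecutorService instead of looping.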
Another alternative is to use a modified version of the Chain of Responsibility pattern. In this pattern, you chain multiple (open-ended) instances of the interface, and every instance calls the next one after it is done registering to its own destination.
It depends if the change is a technical change or a business logic change.
The ports and adapters architecture allows you to change the adapters without changing the Core of the application. If you have tests that validate your business logic, as you should, you'll see that you can swap adapters, everything will work fine and you won't have to change your Core tests. In this case, you can have the two adapters and configure one or the other based on some setting, or as you say, have an adapter that basically calls the other two.
On the other hand, if your business logic requirements change (depending on some business rule you have to send the contact to A or to B), you'll notice that you'll have to change your tests to verify the new requirements. In that case, your Core will define a different Port that will be more suited to the new business logic.
I am reading Uncle Bob's Clean Architecture book. One of the main points throughout the book is that you should depend on abstractions, not on implementations.
For example, he mentions that the higher layers of the software shouldn't know anything from the lower layers (which I agree). He also points out that when a higher layer needs to communicate with a lower layer, the lower layer must implement an interface that the higher layer uses. For example, if the Use Case layer needs to call the Presenter layer, it should be done through an interface OutputBoundary that is implemented by the Presenter, so the Use Case layer does not depend on the presenter. If you do it without an interface, it is really bad because the Use Case layer is depending on the Presenter layer.
How is that true? If in the future the Presenter layer needs more or different data to be sent by the Use Cases, you not only will have to modify the Presenter, but the OutputBoundary Interface and the Use Cases as well. So no, the Use Cases are never completely independent from the Presenter.
If the Presenter changes the way it presents data by just changing the body of the method, then the Use Case layer won't have to change anything. If the Presenter changes the way it presents data by changing the method declaration, it won't matter whether you have an interface or not, because you will have to modify the method call in the Use Case layer. In neither case does the use of an interface really matter.
Is there something that I am missing here? I know the uses of an interface, and I know that if you have or plan to have multiple presenters, then having them to implement a common interface would be the right way to go, but even so I can't see the independence from lower layers that he mentions on his books.
How is that true? If in the future the Presenter layer needs more or different data to be sent by the Use Cases, you not only will have to modify the Presenter, but the OutputBoundary Interface and the Use Cases as well. So no, the Use Cases are never completely independent from the Presenter.
If the presenter layer needs more or different data then the use cases layer must change to provide that data.
But, if you decide to make changes in the presentation layer, or replace it for a new technology, but the information is the same, you would not need to modify the Use cases layer.
Same idea with data access: you use interfaces so that it depends on the application instead of the other way around. When you need to change the data access to use a different database technology, as long as your data access layer keeps implementing the same interfaces, you don't need to make changes in the application.
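A tiny sketch of that dependency direction (names are illustrative): the application layer owns the interface, and the infrastructure implements it, so swapping the database technology never touches the use case.

```java
// Owned by the application layer.
interface ContactRepository {
    void save(String contact);
}

class RegisterContactUseCase {
    private final ContactRepository repository;

    RegisterContactUseCase(ContactRepository repository) {
        this.repository = repository;
    }

    void register(String contact) {
        repository.save(contact);
    }
}

// Lives in the data access layer; could be replaced by a JDBC or JPA
// implementation without changing RegisterContactUseCase at all.
class InMemoryContactRepository implements ContactRepository {
    final java.util.List<String> saved = new java.util.ArrayList<>();

    @Override
    public void save(String contact) {
        saved.add(contact);
    }
}
```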
Imagine that Radio is higher level of software, then Battery will be lower level of dependency. In real life, Radio does not have tight coupling to Duracell battery, you can use any battery you want.
Let me show an example in C# (sorry, I do not have a Java editor installed, but the code is really simple):
class Radio
{
    private IBattery _battery;

    public Radio(IBattery battery)
    {
        _battery = battery;
    }

    public void TurnOn()
    {
        _battery.GiveEnergy();
    }
}

interface IBattery
{
    void GiveEnergy();
}

class Duracell : IBattery
{
    public void GiveEnergy()
    {
    }
}
and usage:
IBattery battery = new Duracell();
var radio = new Radio(battery);
radio.TurnOn();
Your higher level, Radio, does not depend on the implementation of the lower-level Battery, so you can use any battery you want.
How is that true? If in the future the Presenter layer needs more or different data to be sent by the Use Cases, you not only will have to modify the Presenter, but the OutputBoundary Interface and the Use Cases as well. So no, the Use Cases are never completely independent from the Presenter.
The answer is yes and no.
Yes, it is true, since all layers point inwards, not outwards, which makes the communication between layers based on interfaces/abstractions. The application layer, for example, implements all of its code against an interface and is not aware of the implementation of that interface; but since the higher layer points inwards, it consumes the interface of the lower layer and then implements it, which makes the answer yes.
Every layer by itself depends on the one below it, so the Application depends on Core/Entities, and the Infrastructure depends on the Application. But here is the trick: in your presentation layer you will actually use DI to bind the implementation of every interface to that interface, which makes them depend on each other only at run time.
If the Presenter changes the way he presents data just changing the body of the method, then the Use Case layer won't have to change anything.
Yes, since the data coming from the application layer is still the same. But if the presentation layer requires something completely different, say a new view model, then the application layer will have to change the data it provides, which might require a change in the persistence/infrastructure layer, which in turn leads to a change in the interface in the application layer that persistence/infrastructure will need to implement.
I actually recommend you watch this video by Jason Taylor:
https://www.youtube.com/watch?v=dK4Yb6-LxAk
Let me know if you have any questions.
I'll give a brief overview of my goals below just in case there are any better, alternative ways of accomplishing what I want. This question is very similar to what I need, but not quite exactly what I need. My question...
I have an interface:
public interface Command<T> extends Serializable {}
..plus an implementation:
public class EchoCommand implements Command<String> {

    private final String stringToEcho;

    public EchoCommand(String stringToEcho) {
        this.stringToEcho = stringToEcho;
    }

    public String getStringToEcho() {
        return stringToEcho;
    }
}
If I create another interface:
public interface AuthorizedCommand {
String getAuthorizedUser();
}
..is there a way I can implement the AuthorizedCommand interface on EchoCommand at runtime without knowing the subclass type?
public <C extends Command<T>, T> C authorize(C command) {
    // can this be done so the returned Command is also an
    // instance of AuthorizedCommand?
    return (C) theDecoratedCommand;
}
The why... I've used Netty to build myself a very simple proof-of-concept client / server framework based on commands. There's a one-to-one relationship between a command, shared between the client and server, and a command handler. The handler is only on the server and they're extremely simple to implement. Here's the interface.
public interface CommandHandler<C extends Command<T>, T> {
    public T execute(C command);
}
On the client side, things are also extremely simple. Keeping things simple in the client is the main reason I decided to try a command-based API. A client dispatches a command and gets back a Future. It's clear the call is asynchronous, plus the client doesn't have to deal with things like wrapping the call in a SwingWorker. Why build a synchronous API against asynchronous calls (anything over the network) just to wrap the synchronous calls in asynchronous helper methods? I'm using Guava for this.
public <T> ListenableFuture<T> dispatch(Command<T> command)
Now I want to add authentication and authorization. I don't want to force my command handlers to know about authorization, but, in some cases, I want them to be able to interrogate something with regards to which user the command is being executed for. Mainly I want to be able to have a lastModifiedBy attribute on some data.
I'm looking at using Apache Shiro, so the obvious answer seems to be to use their SubjectAwareExecutor to get authorization information into ThreadLocal, but then my handlers need to be aware of Shiro or I need to abstract it away by finding some way of mapping commands to the authentication / authorization info in Shiro.
Since each Command is already carrying state and getting passed through my entire pipeline, things are much simpler if I can just decorate commands that have been authorized so they implement the AuthorizedCommand interface. Then my command handlers can use the info that's been decorated in, but it's completely optional.
if (command instanceof AuthorizedCommand) {
    // We can interrogate the command for the extra metadata
    // we're interested in.
}
That way I can also develop everything related to authentication / authorization independent of the core business logic of my application. It would also (I think) let me associate session information with a Netty Channel or ChannelGroup which I think makes more sense for an NIO framework, right? I think Netty 4 might even allow typed attributes to be set on a Channel which sounds well suited to keeping track of things like session information (I haven't looked into it though).
The main thing I want to accomplish is to be able to build a prototype of an application very quickly. I'd like to start with a client side dispatcher that's a simple map of command types to command handlers and completely ignore the networking and security side of things. Once I'm satisfied with my prototype, I'll swap in my Netty based dispatcher (using Guice) and then, very late in the development cycle, I'll add Shiro.
I'd really appreciate any comments or constructive criticism. If what I explained makes sense to do and isn't possible in plain old Java, I'd consider building that specific functionality in another JVM language. Maybe Scala?
You could try doing something like this:
Java: Extending Class At Runtime
At runtime your code would extend the class of the Command to be instantiated and implement the AuthorizedCommand interface. This would make the class an instance of AuthorizedCommand while retaining the original Command class structure.
One thing to watch out for: you wouldn't be able to extend any classes marked with the final keyword.
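If runtime subclassing feels too heavy, and since Command is itself an interface, a JDK dynamic proxy can add AuthorizedCommand at runtime. One caveat: a proxy only implements interfaces, so the result is a Command plus AuthorizedCommand, not an instance of the concrete class such as EchoCommand (which the question's `C authorize(C command)` signature would require); for that you would need the runtime-subclassing approach from the linked answer, or a bytecode tool such as cglib. A hedged sketch (the Authorizer class is made up for illustration):

```java
import java.io.Serializable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;

interface Command<T> extends Serializable {}

interface AuthorizedCommand {
    String getAuthorizedUser();
}

class Authorizer {
    // Returns a proxy implementing all of the command's interfaces plus
    // AuthorizedCommand; every other call is forwarded to the wrapped command.
    @SuppressWarnings("unchecked")
    static <T> Command<T> authorize(Command<T> command, String user) {
        Class<?>[] own = command.getClass().getInterfaces();
        Class<?>[] all = Arrays.copyOf(own, own.length + 1);
        all[own.length] = AuthorizedCommand.class;

        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getDeclaringClass() == AuthorizedCommand.class) {
                return user; // the decorated-in metadata
            }
            return method.invoke(command, args);
        };
        return (Command<T>) Proxy.newProxyInstance(
                command.getClass().getClassLoader(), all, handler);
    }
}
```

The `command instanceof AuthorizedCommand` check from the question then works unchanged on the proxied command.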
I have the following situation:
Three concrete service classes implement a service interface: one is for persistence, the other deals with notifications, the third deals with adding points to specific actions (gamification). The interface has roughly the following structure:
public interface IPhotoService {
    void upload();
    Photo get(Long id);
    void like(Long id);
    // etc...
}
I did not want to mix the three types of logic into one service (or, even worse, in the controller class) because I want to be able to change them (or shut them off) without any problems. The problem comes when I have to inject a concrete service into the controller. Usually, I create a fourth class, named roughly ApplicationNamePhotoService, which implements the same interface and works as a wrapper (mediator) between the other three services: it gets input from the controller and calls each service correspondingly. It is a working approach, though one which creates a lot of boilerplate code.
Is this the right approach? Currently I am not aware of a better one, although I would highly appreciate knowing whether it is possible to declare the execution sequence declaratively (in the context) and to inject the controller with an on-the-fly generated wrapper instance.
Also, it would be nice to cache some stuff between the three services. For example, all of them use DAOs, i.e. they sometimes make the same calls to the DB over and over again. If all the logic were in one place, that could have been avoided, but now... I know that it is possible to enable some request- or session-based caching. Can you suggest some example code? BTW, I am using Hibernate for the persistence part. Is there already some caching provided (probably, if the calls reside in the same transaction or something; with that one I am totally lost)?
The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction. It sounds like you are mixing service classes when they could be in the same class and method. You can inject service classes into one another when required too, rather than create another "mediator".
It is perfectly acceptable to "mix the three types of logic", in fact it is preferable if they form an expected use case/unit of work
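For instance (hypothetical collaborators, sketched without Spring annotations): the controller-facing method is the unit of work, and the other services are injected into it rather than hidden behind a fourth wrapper class:

```java
class Photo {
    final Long id;
    Photo(Long id) { this.id = id; }
}

class NotificationService {
    int notifications = 0;
    void notifyFollowers(Photo photo) { notifications++; }
}

class PointsService {
    int points = 0;
    void awardUploadPoints() { points += 10; }
}

class PhotoService {
    private final NotificationService notifications;
    private final PointsService points;

    PhotoService(NotificationService notifications, PointsService points) {
        this.notifications = notifications;
        this.points = points;
    }

    // One use case, one transactional unit of work: persist, notify, gamify.
    Photo upload() {
        Photo photo = new Photo(1L); // persistence would happen here
        notifications.notifyFollowers(photo);
        points.awardUploadPoints();
        return photo;
    }
}
```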
For caching, I would look at Ehcache, which is, I believe, well integrated with Hibernate.
I would like to improve the design of one part of my application. It is a Swing application in which the user interface is updated by data received from a server (TCP/UDP). At the moment I pass my domain object (POJO) to the constructor of the class that will connect to the server, receive and send data, and use its getter and setter methods directly.
public static void main(String[] args) {
    TcpClient client = new TcpClient(server, port, domainModel);
}

public class TcpClient {
    public void connect() {
        // Code to create the socket and connect to the server.
        new Thread(new TcpProtocol(socket, domainModel)).start();
    }
}
I'm using a sort of Factory class inside the TcpProtocol class to create the right object.
I would like to decouple my domain object from the network programming part. Is there some common pattern to use for this? I was thinking about DAO and Value Object, commonly used in JavaEE applications. Those are the only ones I see, but if someone has a better proposition, please let me know.
Thank you.
As a general rule I consider non-trivial code in the constructor to be a design smell since it complicates inheritance and other forms of extensibility.
If you do not need the object to update itself from the network after it is created, then you can use a Factory to create your POJO. I tend to avoid Value Objects unless you are wrapping something with very little data, like currency/dates; otherwise you will end up with a bunch of getters/setters.
If you need the object to access the network during its use, then you can make it a Proxy and pass in a DAO that handles all the network code. For testing you pass in a Mock/Stub instead.
A start to reaching your goal would be to remove the network access code from your constructor (I'm assuming you have one class that contacts the network and you're passing your POJO to its constructor). You would instead create a utility that contacts the network.
To instantiate the correct object type from your data stream, you might consider creating a Factory that knows how to adapt the stream to specific object types.
Maybe using the java.io ObjectInputStream and ObjectOutputStream might be a simple way to do this since it does not depend on the class of the object transferred?
What I would suggest is to apply the Observer pattern. You might also need to apply the Mediator pattern along with your Observer, but check carefully whether your design warrants this kind of complexity.
The idea behind the Observer is that your UI code "listens" for changes in the presented model. When changes take place (presumably from your network-bound code), an event is fired and your UI is notified in order to update itself accordingly.
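A minimal sketch of that Observer wiring (names are illustrative; in a real Swing app the notification should be marshalled onto the EDT with SwingUtilities.invokeLater):

```java
import java.util.ArrayList;
import java.util.List;

interface ModelListener {
    void modelChanged(String newValue);
}

class DomainModel {
    private final List<ModelListener> listeners = new ArrayList<>();
    private String value;

    void addListener(ModelListener listener) {
        listeners.add(listener);
    }

    // Called by the network-bound code; the UI never polls, it is notified.
    void setValue(String newValue) {
        this.value = newValue;
        for (ModelListener listener : listeners) {
            listener.modelChanged(newValue);
        }
    }

    String getValue() {
        return value;
    }
}
```

The TcpProtocol thread then only talks to DomainModel, and the Swing code registers a listener; neither side references the other directly.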
I have a class that implements an interface:
class TopLevel implements TopLevelOperations
Inside TopLevel the operations are implemented in two different ways: some of the operations in TopLevelOperations need to be made as SOAP client calls and some as RESTful calls.
What would be the best way to model this? Create two additional interfaces, SOAPOperations and RESTOperations, to specify the responsibilities of REST and SOAP respectively? Then use two other classes internally that implement those interfaces? The motivation is that I may one day want to swap out SOAP for some other approach.
Is there a better way?
Edit: I also don't want different client code jumbled together in TopLevel, as it currently is.
What you need to do is separate the transport layer from the payload.
Both requests go over HTTP, but the payload is different: it is wrapped in a SOAP envelope in one case and is plain XML data in the REST case.
So for the higher-level code you should have an interface that just sends a message, encapsulated in an object.
The implementation of this interface converts the message to XML (via DOM or JAXB, etc.).
Then this XML is either passed to a transport layer to be sent over HTTP, or wrapped in a SOAP envelope before being passed to the transport layer.
The transport layer can be just a concrete class that is as simple as:
public class HttpClient {
    public String sendMsg(String xml) {
        // Use some HTTP client to send the message and return the response.
        // The input could be SOAP or plain XML app data.
        // The output could be SOAP or plain XML app data.
    }
}
So the user configures your objects to use SOAP or not.
The application code is only aware of the interface to send message.
In your implementation, because of the separation of the XML transformation and HTTP transport layers, you can swap implementations or add new ones.
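Sketched with hypothetical names (the envelope below is deliberately simplified, not a real SOAP 1.1 envelope, and the transport is stubbed so the example is self-contained):

```java
// Stand-in for the transport class above; a real one would POST over HTTP.
class HttpTransport {
    String sendMsg(String xml) {
        return xml; // echo instead of a network round-trip
    }
}

// The only interface the application code sees.
interface MessageSender {
    String send(String payloadXml);
}

// REST-style: the payload goes to the transport as-is.
class PlainXmlSender implements MessageSender {
    private final HttpTransport transport = new HttpTransport();

    @Override
    public String send(String payloadXml) {
        return transport.sendMsg(payloadXml);
    }
}

// SOAP-style: the same payload is wrapped in an envelope first.
class SoapSender implements MessageSender {
    private final HttpTransport transport = new HttpTransport();

    @Override
    public String send(String payloadXml) {
        String envelope = "<soap:Envelope><soap:Body>" + payloadXml
                + "</soap:Body></soap:Envelope>";
        return transport.sendMsg(envelope);
    }
}
```

Which MessageSender is wired in becomes a configuration choice, invisible to the calling code.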
Not sure the solution needs to be that complex (assuming I understand the problem of course...):
Make sure TopLevelOperations declares all the methods a client can use. Do so in a protocol-independent way, e.g. TopLevelOperations.doFoo(), not TopLevelOperations.doFooOverSOAP()
Implement TopLevel as your first version of the interface using SOAP/REST as appropriate.
Ensure clients only ever reference TopLevelOperations when declaring references - never the implementing class
Use whatever mechanism is appropriate for your app to inject the appropriate implementation into the clients (Dependency Injection / Factory / ...).
If / when you want to re-implement the methods using a different transport/protocol just create another class (TopLevelNew) that implements TopLevelOperations. Then inject it into clients instead of TopLevel in step 4 above.
Deciding which implementation to use is then an application-level configuration decision, not something the clients have to be aware of.
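A compact sketch of those steps (method names are made up):

```java
// Protocol-independent interface: doFoo(), not doFooOverSOAP().
interface TopLevelOperations {
    String doFoo();
}

// First implementation, using SOAP/REST internally as appropriate.
class TopLevel implements TopLevelOperations {
    @Override
    public String doFoo() {
        return "result via SOAP";
    }
}

// A later re-implementation over a different transport/protocol.
class TopLevelNew implements TopLevelOperations {
    @Override
    public String doFoo() {
        return "result via the new transport";
    }
}

// Clients only ever reference the interface, never the implementing class.
class Client {
    private final TopLevelOperations ops;

    Client(TopLevelOperations ops) { // injected, not constructed here
        this.ops = ops;
    }

    String run() {
        return ops.doFoo();
    }
}
```

Swapping TopLevel for TopLevelNew is then a one-line change at the injection point; the Client class never changes.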
hth.
[You may also want/need to use some helpers classes for the implementation, e.g. separating content from payload as per #user384706's answer. But that's complementary to above (i.e. how to design the implementation vs. how to keep interface consistent for clients).]