I am looking at microservices, and the possibility of migrating some of our code to this architecture. I understand the general concept but am struggling to see how it would work for our example.
Suppose I have an interface called RatingEngine and an implementation called RatingEngineImpl, both running inside my monolithic application. The principle is simple: the RatingEngineImpl could run on a different machine and be accessed by the monolithic application via (say) a REST API, serializing the DTOs as JSON over HTTP. We even have an interface to help with this decoupling.
But how do I actually go about this? As far as I can see, I need to create a new implementation of the interface for the rump monolith (i.e. now the client), which takes calls to the interface methods, converts them into REST calls, and sends them over the network to the new 'rating engine service'. Then I also need to implement a new HTTP server, with an endpoint for each interface method, which deserializes the DTOs (method parameters) and routes the call to our original RatingEngineImpl, which sits inside the server. Then it serializes the response and sends it back to the client.
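In other words, something like this client-side proxy (a rough sketch; the rate method, the DTOs and the endpoint URL are invented for illustration, since I haven't shown the real interface):

import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client-side proxy: implements the existing interface but
// forwards each call to the remote rating engine service over HTTP/JSON.
public class RestRatingEngine implements RatingEngine {

    private final HttpClient http = HttpClient.newHttpClient();
    private final ObjectMapper mapper = new ObjectMapper();
    private final URI endpoint = URI.create("http://rating-service.example/rate"); // assumed URL

    @Override
    public RatingResult rate(RatingRequest request) { // method name and DTOs are illustrative
        try {
            HttpRequest httpRequest = HttpRequest.newBuilder(endpoint)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(mapper.writeValueAsString(request)))
                    .build();
            HttpResponse<String> response = http.send(httpRequest, HttpResponse.BodyHandlers.ofString());
            return mapper.readValue(response.body(), RatingResult.class);
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException("Call to rating engine service failed", e);
        }
    }
}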
So that seems like an awful lot of plumbing code. It also adds maintenance overhead, since if you tweak a method in the interface you need to make changes in two more places.
Am I missing something? Is there some clever way we can automate this boilerplate code construction?
The microservice pattern does not suggest you move every single service you have to its own deployable. Only move self-sustaining pieces of logic that will benefit from their own release cycle. For example, if your RatingEngine needs rating-logic updates weekly but the rest of your system is fairly stable, it will likely benefit from being a service of its own.
And yes, microservices add complexity, but not really boilerplate code for HTTP servers. There are a lot of frameworks around to deal with that. Vert.x is a good one; others are Spring Boot, Apache Camel, etc. A complete microservice setup could look like this with Vert.x:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class RatingService extends AbstractVerticle implements RatingEngine {

    @Override
    public void start() {
        // Expose the rating over HTTP as a small JSON endpoint.
        vertx.createHttpServer().requestHandler(req -> {
            req.response()
               .putHeader("content-type", "application/json")
               .end(computeCurrentRating().encodePrettily());
        }).listen(8080);
    }

    @Override
    public int getRating() {
        return 4; // or whatever.
    }

    protected JsonObject computeCurrentRating() {
        return new JsonObject().put("rating", getRating());
    }
}
Even JAX-RS, the standard Java API for REST services, lets you build a microservice in not too many lines of code.
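For example, a minimal JAX-RS resource could look like this (a sketch; the path and hard-coded payload are illustrative):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Minimal JAX-RS endpoint serving the rating as JSON; deploy in any JAX-RS container.
@Path("/rating")
public class RatingResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String currentRating() {
        return "{\"rating\": 4}"; // illustrative payload
    }
}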
The really hard work with microservices is adding error-handling logic to the clients. Some common pitfalls:
Microservice may go down. If a call to the RatingService gives a connection-refused exception, can you deal with it? Can you estimate a rating in the client so that further processing is not blocked? Can you reuse old responses to estimate the rating? At the very least, you need to signal the error to support staff. (See the sketch after this list.)
Reactive app? How long can you wait for a response? A call to an in-memory method returns within nanoseconds; a call to an external HTTP service may take seconds or minutes depending on a number of factors. As long as the application is "reactive" and can continue to work without a "Rating", presenting the rating to the user once it's available, it's fine. If you block on a call to the rating service for more than a few milliseconds, response time becomes an obstacle. It's not as convenient or common to build reactive apps in Java as in Node.js, and a reactive approach will likely trigger a remake of your entire system.
Tolerant client. Unit/integration testing a single project is easy; testing a complex net of microservices is not. The best thing you can do about it is to make your client calls less picky. Strict schema validation can actually be harmful here: in XML, use single XPath expressions to extract exactly the data you want from the response, no more, no less. That way a change in the microservice response will not require updates to all clients. JSON is a bit easier to deal with than XML in this respect.
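A rough sketch of a client that copes with both pitfalls, a downed service and an evolving response format (the endpoint URL, timeout, fallback value and DTO are all assumptions):

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class TolerantRatingClient {

    private final HttpClient http = HttpClient.newHttpClient();
    // Tolerant reader: ignore unknown fields so the service can evolve
    // without breaking this client.
    private final ObjectMapper mapper = new ObjectMapper()
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
    private volatile int lastKnownRating = 3; // assumed default/fallback value

    public int currentRating() {
        try {
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create("http://rating-service.example/rating")) // assumed URL
                    .timeout(Duration.ofMillis(500)) // don't block callers for long
                    .GET()
                    .build();
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            RatingDto dto = mapper.readValue(response.body(), RatingDto.class);
            lastKnownRating = dto.rating; // cache the last good answer
            return dto.rating;
        } catch (IOException | InterruptedException e) {
            // Service down or too slow: reuse the last known rating and
            // signal the failure to monitoring/support staff here.
            return lastKnownRating;
        }
    }

    static class RatingDto {
        public int rating;
    }
}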
No, unfortunately you do not miss anything substantial. The microservice architecture comes with its own cost. The one that caught your eye (boilerplate code) is one well-known item from the list. This is a very good article from Martin Fowler explaining the various advantages and disadvantages of the idea. It includes topics like:
added complexity
increased operational maintenance cost
struggle to keep consistency (while allowing special cases to be treated in exceptional ways)
... and many more.
There are some frameworks out there to reduce such boilerplate code. I use Spring Boot in a current project (though not for microservices). If you already have Spring-based projects, then it really simplifies the development of microservices (or any other not-microservice application based on Spring). Check out some of the examples: https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples
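To give a feel for how little code is involved, a complete Spring Boot service could be sketched like this (the endpoint and payload are invented for illustration):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.Map;

// A whole runnable service: embedded HTTP server and JSON serialization included.
@SpringBootApplication
@RestController
public class RatingApplication {

    public static void main(String[] args) {
        SpringApplication.run(RatingApplication.class, args);
    }

    @GetMapping("/rating") // illustrative endpoint
    public Map<String, Integer> rating() {
        return Map.of("rating", 4); // Spring serializes the map to JSON
    }
}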
Related
I am making a Blackjack REST service, and currently have 4 GET endpoints:
/hit
/stand
/double-down
/surrender
I could also make just one POST/PATCH endpoint that uses a DTO to send a move. Should I keep using these four URIs, or switch to the single endpoint?
Which option would be more RESTful/better?
Thanks in advance.
Use one endpoint. All of these actions/commands are working with the same set of data.
I'd suggest having a single REST endpoint that takes a request body and returns links as part of the response.
When links are returned as part of the response, we are implementing HATEOAS, which increases the RESTfulness of the API (reaching Level 3 of Richardson's Maturity Model).
The client can interpret the links received in the response and take the next action accordingly. This prevents the client from having to make multiple REST calls just to accomplish one task as a whole.
(Personal development experience.) If we maintain too many REST APIs, especially GET APIs, code maintenance grows over time. It is also not extensible: if a new requirement comes up and we do not have a DTO, we have to define multiple new endpoints. On the other hand, with fewer endpoints and a DTO, we have the flexibility to enhance the DTO itself with attributes that help us achieve the new functionality.
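A minimal sketch of such a single endpoint in Spring (the path, DTO fields and link values are assumptions, not a prescribed design):

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;
import java.util.Map;

@RestController
public class BlackjackController {

    // One endpoint for all moves; the DTO says which move is being played.
    @PostMapping("/games/{gameId}/moves") // hypothetical path
    public Map<String, Object> play(@PathVariable String gameId, @RequestBody MoveDto move) {
        // ... apply the move to the game state here ...
        return Map.of(
                "hand", List.of("7H", "9S"),         // illustrative game state
                "links", List.of("/hit", "/stand")); // next legal moves (HATEOAS)
    }

    static class MoveDto {
        public String move; // "hit", "stand", "double-down" or "surrender"
    }
}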
For further reading, this is a good article on REST.
I'm implementing a series of REST microservices in Java - let's call them "adapters".
Every service reads the data from a particular source type, and provides result in the same way. The main idea is to have the same interface (service contract) for all of them, to get interchangeability. I would like to avoid code duplication and reuse the service contract for the services.
And it seems that I'm reinventing the wheel. Is there a standard approach for this?
I tried to extract the service contract in the form of a Java interface for the Spring MVC controller class and an accompanying DAO class CustomObject:
public interface AdapterController {

    @RequestMapping(method = RequestMethod.GET, value = "/objects/{name}")
    CustomObject getObject(@PathVariable final String name);
}
Then I put them into a separate Maven project, set it as a dependency in the original project, and rewrote the REST controller class as follows:
@RestController
public class DdAdapterController implements AdapterController {

    @Override
    public CustomObject getObject(String name) {
        return model.getByName(name);
    }
}
I can reuse the DAO object in client code as well, but the interface class is useless on the client side.
1) Summarizing: is it OK to reuse/share a service contract between different service implementations? What's the cost of this? Is there a best practice for sharing service contracts?
2) The next question is about the service contract and the consuming client. Is it OK to share the contract between service and client? Are there tools in Java or an established approach for this?
This goes against the microservice mentality, and in the long run sharing code is a bad idea.
If you start sharing code you will slowly build a distributed monolith, where multiple services are dependent on each other.
Many have talked about this earlier:
microservices-dont-create-shared-libraries
The evils of too much coupling between services are far worse than the problems caused by code duplication
Micro services: shared library vs code duplication
The key to build microservices is:
One service should be very good at one thing
Keep them small
Have an extremely well documented api
When you need to delete a microservice, this should require as few updates to other services as possible
Avoid code sharing, and treat all libraries like 3rd party libraries even your own
Microservices should be loosely coupled, i.e. have minimal dependencies.
Microservices is an architectural style that structures an application as a collection of services that are:
Highly maintainable and testable
Loosely coupled
Independently deployable
Organized around business capabilities.
https://microservices.io/
A contract can be defined with WADL.
Using a contract between client and server means fewer bugs and fewer misunderstandings when implementing the client. That is what the contract is good for.
At the moment I'm using multiple REST services to process my data.
A workflow would be like this:
User requests the speed of a car:
Ask the SpeedService for the most recent speed => the SpeedService requests the latest positions of the car from the PositionService, and the PositionService in turn calls the DatabaseService to get the raw, unprocessed data from the car. The part where I'm having issues is calling a service from within another service. I've achieved this for now by making use of the Invocation API.
Example:
// JAX-RS client: call the speed service and deserialize the JSON response with Genson.
Client client = ClientBuilder.newClient();
WebTarget target = client.target("http://mylocalRestserver.example/webresources/").path("speed");
target = target.queryParam("carId", carId);
Invocation invocation = target.request(MediaType.APPLICATION_JSON).buildGet();
Speed speed = new Genson().deserialize(invocation.invoke(String.class), Speed.class);
return speed;
However, whenever I try to simulate concurrent users by running multiple curl queries, the REST service breaks due to SocketTimeouts, I assume because multiple requests are sent over the same server socket? Is there any way to achieve this chaining of services?
Your idea is sound but cannot be implemented with such a naive approach.
What you are trying to achieve is location transparency while keeping the system responsive, which is not an easy task.
There are big frameworks that deal with this problem; Akka comes to mind.
If your objective is just separation of concerns (each service deals with its part of the problem and invokes other services to get what it needs), you can just use the relevant classes directly, without making HTTP requests from the server to itself.
If instead you want to be able to distribute your services across several nodes, you should rely on frameworks like Akka; doing it yourself is an unfeasible task.
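A sketch of the in-process approach (class names follow the question; the wiring and method signatures are assumptions):

import java.util.List;

// Compose the services with plain method calls instead of HTTP round-trips
// to the same server. PositionService, Position and Speed follow the question;
// their exact signatures are assumed.
public class SpeedService {

    private final PositionService positionService;

    public SpeedService(PositionService positionService) {
        this.positionService = positionService;
    }

    public Speed currentSpeed(String carId) {
        // Plain in-process call; no socket, no serialization, no timeout handling.
        List<Position> positions = positionService.latestPositions(carId);
        return Speed.fromPositions(positions); // hypothetical factory method
    }
}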
I'm relatively new to the Play framework. I'm trying to design a "Test/Quiz/Exam" app. It mostly consists of a few CRUD operations on multiple tables (right now). It has a
REST-based backend --> AngularJS frontend.
Let's say for a GET request of the format /users/{id} the following code is mapped:
public Result getUser(Long id) {
// Get Info from DB using Spring Data JPA and return result.
}
Now that I've come across the Akka actor model: is it better to rewrite the getUser function so that it delegates the work to an actor which retrieves the data from the DB and returns it? Should I follow the actor model for the rest of the CRUD operations too, or is it overkill to use Akka here (assuming Play takes care of the concurrency for each request)? FYI, I just started looking into Akka.
Design tips would be appreciated.
It's overkill to use Akka here, because Play handles the inter-request concurrency, and from your description of the problem it doesn't appear that you have any intra-request concurrency (which is where you'd use Akka, e.g. if you were making a thousand independent database queries you could distribute them across a dozen actors or something along those lines). If you just want to make the Play actions asynchronous, see the JavaAsync documentation.
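For reference, a minimal sketch of an asynchronous Play action (userRepository and its findById are stand-ins for your Spring Data JPA lookup):

import play.libs.Json;
import play.mvc.Result;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import static play.mvc.Results.ok;

// Returning CompletionStage<Result> makes the action asynchronous:
// the blocking DB lookup runs off the request thread.
public CompletionStage<Result> getUser(Long id) {
    return CompletableFuture
            .supplyAsync(() -> userRepository.findById(id)) // hypothetical repository
            .thenApply(user -> ok(Json.toJson(user)));
}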
I have a web service layer that is written in Java/Jersey, and it serves JSON.
For the front-end of the application, I want to use Rails.
How should I go about building my models?
Should I do something like this?
response = api_client.get_user(123)
user = User.new(response)
What is the best approach to mapping the JSON to the Ruby object?
What options do I have? Since this is a critical part, I want to know my options, because performance is a factor. Mapping JSON to a Ruby object, and going from a Ruby object to JSON, is a common occurrence in the application.
Would I still be able to make use of validations? Or wouldn't it make sense since I would have validation duplicated on the front-end and the service layer?
Models in Rails do not have to do database operations; they are just normal classes. Normally they are imbued with ActiveRecord magic when you subclass them from ActiveRecord::Base.
You can use a gem such as Virtus that will give you models with attributes. And for validations you can go with Vanguard. If you want something closer to ActiveRecord, but without the database, and you are running Rails 3+, you can also include ActiveModel into your model to get attributes and validations, as well as have them working in forms. See Yehuda Katz's post for details on that.
In your case it will depend on the data you consume. If all the data sources have the same basic format, for example, you could create your own base class to keep all the logic you want to share across the individual classes (inheritance).
If you have a few different types of data coming in you could create modules to encapsulate behavior for the different types and include the models you need in the appropriate classes (composition).
Generally, though, you probably want to end up with one class per resource in the remote API that maps 1-to-1 with whatever domain logic you have. You can do this in many different ways, but following the method naming used by ActiveRecord might be a good idea: you learn ActiveRecord while building your class structure, and it will help other Rails developers later if your API looks and works like ActiveRecord's.
Think about it in terms of what you want to be able to do to an object (this is where TDD comes in). You want to be able to fetch a collection Model.all, a specific element Model.find(identifier), push a changed element to the remote service updated_model.save and so on.
What the actual logic inside these methods will have to be will depend on the remote service. But you will probably want each model class to hold a URL to its resource endpoint, and you will definitely want to keep the logic in your models. So instead of:
response = api_client.get_user(123)
user = User.new(response)
you will do
class User
  ...
  def self.find(id)
    @api_client.get_user(id)
  end
  ...
end

User.find(123)
or more probably
class ApiClient
  ...
  def self.uri(resource_uri)
    @uri = resource_uri
  end

  def self.get(id)
    # basically whatever code you envisioned for api_client.get_user
  end
  ...
end

class User < ApiClient
  uri 'http://path.to.remote/resource.json'
  ...
  def self.find(id)
    get(id)
  end
  ...
end

User.find(123)
Basic principles: Collect all the shared logic in a class (ApiClient). Subclass that on a per resource basis (User). Keep all the logic in your models, no other part of your system should have to know if it's a DB backed app or if you are using an external REST API. Best of all is if you can keep the integration logic completely in the base class. That way you have only one place to update if the external datasource changes.
As for going the other way, Rails has several good ways to convert objects to JSON, from the to_json method to using a gem such as RABL to have actual views for your JSON objects.
You can get validations by using part of the ActiveRecord modules. As of Rails 4 this is a module called ActiveModel, but you can do it in Rails 3 and there are several tutorials for it online, not least of all a RailsCast.
Performance will not be a problem beyond what you incur when calling a remote service; if the network is slow, you will be too. Some of that can probably be helped with caching (see another answer of mine for details), but that is also dependent on the data you are using.
Hope that puts you on the right track. And if you want a more thorough grounding in how to design these kinds of structures, you should pick up a book on the subject, for example Practical Object-Oriented Design in Ruby: An Agile Primer by Sandi Metz.