Chaining of services - Java

At the moment I'm using multiple REST services to process my data.
A workflow would be like this:
User requests the speed of a car:
Ask the SpeedService for the most recent speed => the SpeedService requests the latest positions of the car from the PositionService, and the PositionService gets them by calling the DatabaseService for the raw, unprocessed data from the car.
The part where I'm having issues is calling a service from within another service. For now I've achieved this by using the Invocation API.
Example:
// Build a JAX-RS client and target the "speed" resource.
// Note: Client instances are heavyweight; they should be reused or closed,
// otherwise connection resources can leak under concurrent load.
Client client = ClientBuilder.newClient();
WebTarget target = client.target("http://mylocalRestserver.example/webresources/").path("speed");
target = target.queryParam("carId", carId);
// Prepare a GET request for JSON and invoke it synchronously
Invocation invocation = target.request(MediaType.APPLICATION_JSON).buildGet();
// Deserialize the raw JSON response with Genson
Speed speed = new Genson().deserialize(invocation.invoke(String.class), Speed.class);
return speed;
However, whenever I try to simulate concurrent users - by running multiple curl queries - the REST service breaks with SocketTimeoutExceptions, I assume because multiple requests are sent over the same server socket? Is there any way to achieve this "chaining of services"?

Your idea is sound but cannot be implemented with such a naive approach.
What you are trying to achieve is location transparency while keeping the system responsive, which is not an easy task.
There are big frameworks that deal with this problem; Akka comes to mind.
If your objective is just separation of concerns (each service deals with its part of the problem and invokes other services to get what it needs), you can just use the relevant classes directly without making HTTP requests from the server to itself, as in the sketch below.
If instead you want to be able to distribute your services across several nodes, you should rely on frameworks like Akka; doing it yourself is an unfeasible task.
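For the separation-of-concerns case, here is a minimal sketch of the direct-call approach, modeled on the question's services (the class names, method signatures, and conversion helper are all illustrative assumptions, not the original code):

import java.util.List;

// Hypothetical sketch: SpeedService uses PositionService as a plain Java
// collaborator instead of calling its own REST endpoint over HTTP.
public class SpeedService {

    private final PositionService positionService; // injected, e.g. via constructor

    public SpeedService(PositionService positionService) {
        this.positionService = positionService;
    }

    public Speed getMostRecentSpeed(long carId) {
        // Direct in-process call: no sockets, no serialization, no timeouts
        List<Position> latest = positionService.getLatestPositions(carId);
        return Speed.fromPositions(latest); // hypothetical conversion helper
    }
}

Only the outermost REST resource then needs to touch HTTP; the chain SpeedService -> PositionService -> DatabaseService stays in-process.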

Related

For RESTfulness, should I use one method for different GET requests or separate them

I am making a Blackjack REST service, and currently have four GET endpoints:
/hit
/stand
/double down
/surrender
I could also make just one POST/PATCH endpoint that uses a DTO to send a move. Should I keep using these four URIs?
Which option would be more RESTful/better?
Thanks in advance.
Use one endpoint. All of these actions/commands are working with the same set of data.
I'd suggest having a single REST endpoint that takes a RequestBody and returns links as part of the response.
When links are returned as part of the response, we are implementing HATEOAS, which increases the RESTfulness of the API (reaching Level 3 of the Richardson Maturity Model).
The client can interpret the links received in the response and take the next actions accordingly. This prevents the client from having to make multiple REST calls just to accomplish one task as a whole.
(Personal development experience) If we maintain too many REST APIs, especially GET APIs, code maintenance increases over time. It is also not extensible: if a new requirement comes up and we do not have a DTO, we have to define multiple new endpoints. On the other hand, if we have fewer endpoints with a DTO, we have the flexibility to enhance the DTO itself with attributes that help us achieve the new functionality.
For further reading, this is a good article on REST.
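As a rough illustration of the single-endpoint idea, here is a hedged sketch using Spring MVC (the question names no framework, so the annotations, URIs, the MoveRequest DTO, and the returned links are all assumptions):

import java.util.List;
import java.util.Map;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class BlackjackController {

    // Hypothetical DTO carrying the move: HIT, STAND, DOUBLE_DOWN or SURRENDER
    public record MoveRequest(String move) {}

    // One endpoint for all moves; the move type travels in the request body
    @PostMapping("/games/{gameId}/moves")
    public Map<String, Object> play(@PathVariable long gameId,
                                    @RequestBody MoveRequest move) {
        // ... apply the move to the game state (omitted) ...
        // Return the new state plus HATEOAS-style links to the legal next moves
        return Map.of(
                "state", "PLAYER_TURN",
                "links", List.of("/games/" + gameId + "/moves"));
    }
}

Adding a new move then only extends the DTO's accepted values instead of requiring a new endpoint.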

Use the response from one RESTful webservice endpoint call later on in another webservice endpoint call

I want the response from one webservice call to be used later on by some other webservice call. How do I implement the code flow for it? Do I have to maintain a session?
I am creating the RESTful webservices using Spring and Java.
If user 1 calls an endpoint /getUserRecord and one minute later calls /checkUserRecord, which uses data from the first call, how do I handle it, given that user 2 can call /getUserRecord before user 1 calls /checkUserRecord?
Thanks,
Arpit
Technically, you can pass the UserRecord from the get call to the check call:
GET /userrecords/ --> returns one or more records
POST /checkUserRecord with the record you want to check as the request body.
But I strongly advise you not to do this. Data provided by the client is unreliable and cannot be trusted by your backend code. What if some JavaScript has altered the original data? Besides, what if you have a list of data or a heterogeneous payload to pass back and forth? It would end up a complete mess of payload exchanges between client and server.
So, as Veselin Davidov said, you should probably stick with the clean stateless REST paradigm and rely on an identifier:
GET /userrecords/ --> [ { "id": 1, "data":"myrecorddata"}, ...]
GET /checkUserRecord/{id}, e.g. /checkUserRecord/1
And yes, you will have to make two calls to the database. If your concern is performance, you can set up a caching mechanism, as piy26 points out, but caching can lead to other issues (how do you define a proper and reliable caching strategy?).
Unless you manage a tremendous amount of data, I think you should first focus on providing a clear, maintainable and safely usable REST API with a stateless design.
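A minimal Spring MVC sketch of that stateless, identifier-based design (the controller, repository, and check logic are illustrative assumptions):

import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserRecordController {

    private final UserRecordRepository repository; // hypothetical repository

    public UserRecordController(UserRecordRepository repository) {
        this.repository = repository;
    }

    // First call: list the records, each carrying its id
    @GetMapping("/userrecords")
    public List<UserRecord> getUserRecords() {
        return repository.findAll();
    }

    // Second call: the client sends back only the id, never the record itself
    @GetMapping("/checkUserRecord/{id}")
    public boolean checkUserRecord(@PathVariable long id) {
        UserRecord record = repository.findById(id); // re-read trusted data
        return record.isValid(); // hypothetical check
    }
}

Because each request carries everything the server needs (the id), no session state is required and it does not matter in which order users 1 and 2 call the endpoints.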
If you are using Spring Boot, it provides a way to enable caching on your Repository object, which is what you are trying to solve:
@EnableCaching (from org.springframework.cache.annotation.EnableCaching)
You can use the user ID as a hash attribute when creating your cache key so that responses from different users remain unique.
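For example, a hedged sketch of what that could look like with Spring's cache abstraction (the cache name and repository method are assumptions):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Repository;

@Repository
public class UserRecordRepository {

    // Key the cache entry on the user id so each user's response stays distinct
    @Cacheable(value = "userRecords", key = "#userId")
    public UserRecord findByUserId(long userId) {
        return loadFromDatabase(userId); // hypothetical database lookup
    }

    private UserRecord loadFromDatabase(long userId) {
        throw new UnsupportedOperationException("illustrative only");
    }
}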

Global Resource Object in Spring

I'm just getting into Spring (and Java), and despite quite a bit of research, I can't seem to even express the terminology for what I'm trying to do. I'll just explain the task, and hopefully someone can point me to the right Spring terms.
I'm writing a Spring-WS application that will act as middleware between two APIs. It receives a SOAP request, does some business logic, calls out to an external XML API, and returns a SOAP response. The external API is weird, though. I have to perform "service discovery" (make some API calls to determine the valid endpoints -- a parameter in the XML request) under a variety of situations (more than X hours since last request, more than Y requests since last discovery, etc.).
My thought was that I could have a class/bean/whatever (not sure of the best terminology) that could handle all this service discovery stuff in the background. Then, the request handlers can query this "thing" to get a valid endpoint without needing to perform their own discovery and slow down request processing. (Service discovery only rarely needs to be re-performed, so doing it on every request would be needlessly costly.)
I thought I had found the answer with singleton beans, but every resource says those shouldn't have state and concurrency will be a problem -- both of which kill the idea.
How can I create an instance of "something" that can:
1) Wake up at a defined interval and run a method (i.e. check whether service discovery needs to be performed after X hours and, if so, do it).
2) Provide something like a getter method that can return some strings.
3) Provide a way in #2 to execute a method in the background without delaying the return (basically, detect that an instance property exceeds a value and execute -- or rather, issue a request to execute -- an instance method).
I have experience with multi-threaded programming, and I have no problem using threads and mutexes. I'm just not sure that's the proper way to go in Spring.
Singletons ideally shouldn't have state because of multithreading issues. However, it sounds like what you're describing is essentially a periodic query that returns an object describing the results of the discovery mechanism, and you're implementing a cache. Here's what I'd suggest:
Create an immutable (value) object MyEndpointDiscoveryResults to hold the discovery results (e.g., endpoint address(es) or whatever other information is relevant to the SOAP consumers).
Create a singleton Spring bean MyEndpointDiscoveryService.
On the discovery service, keep an AtomicReference<MyEndpointDiscoveryResults> (or even just a plain volatile variable). This ensures that all threads see updated results, while a single, atomically updated field holding an immutable object limits the scope of the concurrency interactions.
Use @Scheduled or another mechanism to run the appropriate discovery protocol. When there's an update, construct the entire result object, then save it into the field. A sketch follows.
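Putting those pieces together, a hedged sketch of the discovery service (the scheduling interval, the result fields, and the discovery call itself are assumptions; scheduling must be enabled, e.g. with @EnableScheduling on a configuration class):

import java.util.concurrent.atomic.AtomicReference;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

// Immutable value object holding the discovery results
record MyEndpointDiscoveryResults(String endpointUrl) {}

@Service
public class MyEndpointDiscoveryService {

    // Atomically swapped reference: readers always see a consistent snapshot
    private final AtomicReference<MyEndpointDiscoveryResults> results =
            new AtomicReference<>();

    // Re-run discovery every X hours (here 6, expressed in milliseconds)
    @Scheduled(fixedDelay = 6 * 60 * 60 * 1000)
    public void refresh() {
        MyEndpointDiscoveryResults fresh = performDiscovery();
        results.set(fresh); // publish the complete, immutable result in one step
    }

    // Cheap getter for request handlers; no locking needed
    public MyEndpointDiscoveryResults current() {
        return results.get();
    }

    private MyEndpointDiscoveryResults performDiscovery() {
        // ... call the external API's discovery endpoints (hypothetical) ...
        return new MyEndpointDiscoveryResults("https://example.invalid/endpoint");
    }
}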

How do I migrate a Java interface to a microservice?

I am looking at microservices, and the possibility of migrating some of our code to this architecture. I understand the general concept but am struggling to see how it would work for our example.
Supposing I have an interface called RatingEngine and an implementation called RatingEngineImpl, both running inside my monolithic application. The principle is simple: the RatingEngineImpl could run on a different machine and be accessed by the monolithic application via (say) a REST API, serializing the DTOs as JSON over HTTP. We even have an interface to help with this decoupling.
But how do I actually go about this? As far as I can see, I need to create a new implementation of the interface for the rump monolith (i.e. now the client), which takes calls to the interface methods, converts them into REST calls, and sends them over the network to the new 'rating engine service'. Then I also need to implement a new HTTP server, with an endpoint for each interface method, which deserializes the DTOs (method parameters) and routes the call to our original RatingEngineImpl, which sits inside the server. Then it serializes the response and sends it back to the client.
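For concreteness, a hedged sketch of that client-side adapter using a JAX-RS client (the getRating() method matches the answer's example below; the URL, path, and text/plain wire format are assumptions):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

// Hypothetical client-side implementation: same interface, but each
// method call becomes an HTTP request to the extracted rating service.
public class RemoteRatingEngine implements RatingEngine {

    private final Client client = ClientBuilder.newClient();
    private final String baseUrl; // e.g. http://rating-service:8080

    public RemoteRatingEngine(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public int getRating() {
        // A real service would likely return JSON and use a mapper;
        // plain text keeps the sketch self-contained.
        String body = client.target(baseUrl).path("rating")
                .request(MediaType.TEXT_PLAIN)
                .get(String.class);
        return Integer.parseInt(body.trim());
    }
}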
So that seems like an awful lot of plumbing code. It also adds maintenance overhead, since if you tweak a method in the interface you need to make changes in two more places.
Am I missing something? Is there some clever way we can automate this boilerplate code construction?
The microservice pattern does not suggest you move every single service you have to its own deployable. Only move self-sustaining pieces of logic that will benefit from their own release cycle. I.e. if your RatingEngine needs rating-logic updates weekly, but the rest of your system is pretty stable, it will likely benefit from being a service of its own.
And yes, microservices add complexity, but not really boilerplate code for HTTP servers. There are a lot of frameworks around to deal with that. Vert.x is a good one; others are Spring Boot, Apache Camel etc. A complete microservice setup could look like this with Vert.x:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class RatingService extends AbstractVerticle implements RatingEngine {

    public void start() {
        // Minimal HTTP server answering every request with the current rating
        vertx.createHttpServer().requestHandler(req -> {
            req.response()
               .putHeader("content-type", "application/json")
               .end(computeCurrentRating().encodePrettily());
        }).listen(8080);
    }

    @Override
    public int getRating() {
        return 4; // or whatever.
    }

    protected JsonObject computeCurrentRating() {
        return new JsonObject().put("rating", getRating());
    }
}
Even plain JAX-RS, the standard Java REST API, lets you build a microservice in not too many lines of code.
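For comparison, a hedged JAX-RS equivalent of the Vert.x example above (the resource path and hand-rolled JSON are illustrative):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Illustrative JAX-RS resource exposing the same rating over HTTP
@Path("/rating")
public class RatingResource {

    private final RatingEngine engine = new RatingEngineImpl(); // the original implementation

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String getRating() {
        // Hand-rolled JSON for brevity; a real service would use a mapper
        return "{\"rating\": " + engine.getRating() + "}";
    }
}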
The really hard work with microservices is adding error-handling logic in the clients. Some common pitfalls:
Microservice may go down. If a call to the RatingService gives a connection-refused exception, can you deal with it? Can you estimate a rating in the client so further processing is not blocked? Can you reuse old responses to estimate the rating? ... Or at least, you need to signal the error to support staff.
Reactive app? How long can you wait for a response? A call to an in-memory method returns within nanoseconds; a call to an external HTTP service may take seconds or minutes depending on a number of factors. As long as the application is "reactive" and can continue to work without a rating, presenting it to the user once it's available, that's fine. If you are blocking on a call to the rating service for more than a few milliseconds, response time becomes an obstacle. It's not as convenient/common to make reactive apps in Java as in Node.js, and a reactive approach will likely trigger a remake of your entire system.
Tolerant client. Unit/integration testing a single project is easy; testing a complex net of microservices is not. The best thing you can do about it is to make your client calls less picky. Strict schema validations can actually hurt here. In XML, use single XPath expressions to get exactly the data you want from the response, no more, no less. That way, a change in the microservice response will not require updates to all clients. JSON is a bit easier to deal with than XML in this respect.
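As a small illustration of the tolerant-client point, a sketch that uses Jackson to pull out only the one field the client needs (the response shape matches the Vert.x example above; the library choice is an assumption):

import com.fasterxml.jackson.databind.ObjectMapper;

public class RatingClientParser {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Extract just the "rating" field; extra or reordered fields in the
    // response are simply ignored, so the service can evolve freely.
    public static int parseRating(String json) throws Exception {
        return MAPPER.readTree(json).path("rating").asInt();
    }
}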
No, unfortunately you are not missing anything substantial. The microservice architecture comes with its own costs. The one that caught your eye (boilerplate code) is a well-known item on the list. There is a very good article by Martin Fowler explaining the various advantages and disadvantages of the idea. It includes topics like:
added complexity
increased operational maintenance cost
struggle to keep consistency (while allowing special cases to be treated in exceptional ways)
... and many more.
There are some frameworks out there to reduce such boilerplate code. I use Spring Boot in a current project (though not for microservices). If you already have Spring-based projects, then it really simplifies the development of microservices (or any other non-microservice application based on Spring). Check out some of the examples: https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples

Concurrency : Designing a REST API using Play Framework/Akka

I'm relatively new to the Play framework. I'm trying to design a Test/Quiz/Exam app. It mostly consists of a few CRUD operations on multiple tables (right now). It has a
REST-based back end --> AngularJS front end.
Let's say that for a GET request of the form /users/{id} the following code is mapped:
public Result getUser(Long id) {
// Get Info from DB using Spring Data JPA and return result.
}
Now that I have come across the Akka actor model, is it better to rewrite the getUser function so that it delegates the work to an actor which retrieves the data from the DB and returns it? Should I follow the actor model for the rest of the CRUD operations too? Or is it overkill to use Akka here (assuming Play takes care of the concurrency for each request)? FYI, I just started looking into Akka.
Design tips would be appreciated.
It's overkill to use Akka here, because Play handles the inter-request concurrency, and from your description of the problem it doesn't appear that you have any intra-request concurrency (which is where you'd use Akka, e.g. if you were making a thousand independent database queries, you could distribute these across a dozen actors or something along those lines). If you just want to make the Play actions asynchronous, see the JavaAsync documentation; a sketch of an asynchronous action follows.
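A hedged sketch of an asynchronous version of getUser in Play's Java API (the repository is an assumption; in a real app the blocking JDBC call should run on a dedicated executor rather than the common pool):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import play.libs.Json;
import play.mvc.Controller;
import play.mvc.Result;

public class UserController extends Controller {

    private final UserRepository repository; // hypothetical Spring Data JPA repository

    public UserController(UserRepository repository) {
        this.repository = repository;
    }

    // Returning a CompletionStage lets Play release the request thread
    // while the blocking database call runs elsewhere.
    public CompletionStage<Result> getUser(Long id) {
        return CompletableFuture
                .supplyAsync(() -> repository.findById(id)) // off the request thread
                .thenApply(user -> ok(Json.toJson(user)));
    }
}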
