Got a question on actors within the Play Framework. Disclaimer: I am still new to actors/Akka and have been spending quite a while now reading through the documentation. I apologise if the answer to any of the below is already documented somewhere and I have missed it.
What I would like to verify is that I am implementing a correct/idiomatic solution to the below scenario:
Case:
Using the Play Framework, I need to execute code that may block (an SQL query) in such a way that it does not hinder the rest of my web server.
Below is my current solution and some questions:
static ActorRef actorTest = Akka.system().actorOf(
        Props.create(ActorTest.class));

public static Promise<Result> runQuery(final String query) {
    Promise<Result> r = Promise.wrap(
            Patterns.ask(actorTest, query, 600000)).map(
                new Function<Object, Result>() {
                    public Result apply(Object response) {
                        return ok(response.toString());
                    }
                });
    return r;
}
Now if I get many requests, will they simply enter an unbounded queue as they are dealt with by the actor? Or,
I have read some docs on actor routing. Would I have to take care of this myself, i.e. make a router actor instead, which uses some kind of routing logic to send queries to child actors? Or is the above all taken care of by the Play Framework?
How can I configure the number of threads dedicated to the above actor? (I read something on this referring to the application.conf file.)
Any clarification on the above will be greatly appreciated.
I mostly use Scala with Akka and Play, so I may be misguiding you, but let's give it a try.
First of all, you can ditch actors for the task at hand. I would just run the computation in a Future.
Use actors when you need to keep some state. Running a query asynchronously will do just fine with a Future.
Futures and actors run on an ExecutionContext; the default one is available in Scala by importing it and referring to it implicitly. This may look different in Java, but probably not by much. That default ExecutionContext is configured in application.conf, just like you said.
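In Play's Java API, that advice would look roughly like the sketch below: wrap the blocking call in a Promise so it runs on an ExecutionContext instead of the request thread. This is a minimal sketch assuming Play 2.2+; runBlockingQuery is a hypothetical stand-in for your JDBC call.

import play.libs.F;
import play.libs.F.Promise;
import play.mvc.Result;
import static play.mvc.Results.ok;

public static Promise<Result> runQuery(final String query) {
    // Promise.promise(...) schedules the blocking work on Play's
    // default ExecutionContext rather than the request thread.
    return Promise.promise(new F.Function0<String>() {
        public String apply() {
            return runBlockingQuery(query); // hypothetical blocking JDBC call
        }
    }).map(new F.Function<String, Result>() {
        public Result apply(String rows) {
            return ok(rows);
        }
    });
}

As for thread counts: in Play 2.x the default execution context is backed by Akka's default dispatcher, so its pool sizes can be tuned in application.conf with keys along these lines (check your Play version's thread-pool docs for the exact paths):

akka.actor.default-dispatcher.fork-join-executor {
  parallelism-min = 8
  parallelism-factor = 3.0
  parallelism-max = 64
}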
I would like to know whether there is, or whether we can build, any functionality to log the actual time taken by a function returning a Mono/Flux. For example, something like creating a @Timed annotation to log the actual time it takes.
I know that with a Flux/Mono return type the function itself returns immediately; that is why I want to know whether we can do something like this, so we can see which modules/sub-modules are taking how much time.
We want to migrate our blocking Spring Boot service to Spring WebFlux, so I am going through all the possible options to understand them better.
P.S. I am new to reactive programming, still learning the paradigm.
You can use the metrics() operator to time a publisher. You can combine it with the name() and tag() operators to customise the metrics that get published.
listenToEvents()
    .name("events")
    .tag("source", "kafka")
    .metrics()
    .doOnNext(event -> log.info("Received {}", event))
    .delayUntil(this::processEvent)
    .subscribe();
Publisher metrics docs
I am looking at microservices, and the possibility of migrating some of our code to this architecture. I understand the general concept but am struggling to see how it would work for our example.
Supposing I have an interface called RatingEngine and an implementation called RatingEngineImpl, both running inside my monolithic application. The principle is simple: the RatingEngineImpl could run on a different machine, and be accessed by the monolithic application via (say) a REST API, serializing the DTOs as JSON over HTTP. We even have an interface to help with this decoupling.
But how do I actually go about this? As far as I can see, I need to create a new implementation of the interface for the rump monolith (i.e. now the client), which takes calls to the interface methods, converts them into REST calls, and sends them over the network to the new 'rating engine service'. Then I also need to implement a new HTTP server, with an endpoint for each interface method, which deserializes the DTOs (the method parameters) and routes the call to our original RatingEngineImpl, which sits inside the server. It then serializes the response and sends it back to the client.
So that seems like an awful lot of plumbing code. It also adds maintenance overhead, since if you tweak a method in the interface you need to make changes in two more places.
Am I missing something? Is there some clever way we can automate this boilerplate code construction?
The Microservice pattern does not suggest you move every single service you have to its own deployable. Only move self-sustaining pieces of logic that will benefit from their own release cycle. I.e. if your RatingEngine needs rating-logic updates weekly, but the rest of your system is pretty stable, it will likely benefit from being a service of its own.
And yes, microservices add complexity, but not really boilerplate code for HTTP servers. There are a lot of frameworks around to deal with that. Vert.x is a good one; others are Spring Boot, Apache Camel, etc. A complete microservice setup could look like this with Vert.x:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class RatingService extends AbstractVerticle implements RatingEngine {

    @Override
    public void start() {
        vertx.createHttpServer().requestHandler(req -> {
            req.response()
               .putHeader("content-type", "application/json")
               .end(computeCurrentRating().encodePrettily());
        }).listen(8080);
    }

    @Override
    public int getRating() {
        return 4; // or whatever.
    }

    protected JsonObject computeCurrentRating() {
        return new JsonObject().put("rating", getRating());
    }
}
Even the standard Java EE framework JAX-RS helps you build a microservice in not too many lines of code.
The really hard work with microservices is adding error-handling logic to the clients. Some common pitfalls:
Microservice may go down. If a call to RatingService gives a connection-refused exception, can you deal with it? Can you estimate a "rating" in the client so further processing is not prevented? Can you reuse old responses to estimate the rating (see the sketch after this list)? ... Or at the very least, you need to signal the error to support staff.
Reactive app? How long can you wait for a response? A call to an in-memory method returns within nanoseconds; a call to an external HTTP service may take seconds or minutes depending on a number of factors. As long as the application is "reactive" and can continue to work without a "rating", presenting the rating to the user once it's available, that's fine. If you block on a call to the rating service for more than a few milliseconds, response time becomes an obstacle. It's not as convenient/common to make reactive apps in Java as in Node.js, and a reactive approach will likely trigger a remake of your entire system.
Tolerant client. Unit/integration testing a single project is easy; testing a complex net of microservices is not. The best thing you can do about it is to make your client calls less picky. Schema validations and the like are actually bad things here. In XML, use single XPaths to get the data you want from the response, no more, no less. That way, a change in the microservice response will not require updates of all clients. JSON is a bit easier to deal with than XML in this respect.
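Here is the fallback sketch referred to above: degrade gracefully to the last known rating when the rating microservice is unreachable. RatingClient is a hypothetical wrapper around the HTTP call.

import java.io.IOException;

public class TolerantRatingClient {
    private final RatingClient ratingClient;  // hypothetical HTTP wrapper
    private volatile int lastKnownRating = 0; // or some sensible default

    public TolerantRatingClient(RatingClient ratingClient) {
        this.ratingClient = ratingClient;
    }

    public int currentRating() {
        try {
            int rating = ratingClient.fetchRating(); // remote HTTP call
            lastKnownRating = rating;                // remember the last good answer
            return rating;
        } catch (IOException e) {                    // connection refused, timeout, ...
            // degrade gracefully instead of failing the whole request;
            // also a natural place to alert support staff
            return lastKnownRating;
        }
    }
}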
No, unfortunately you are not missing anything substantial. The microservice architecture comes with its own costs. The one that caught your eye (boilerplate code) is one well-known item on the list. This is a very good article from Martin Fowler explaining the various advantages and disadvantages of the idea. It covers topics like:
added complexity
increased operational maintenance cost
struggle to keep consistency (while allowing special cases to be treated in exceptional ways)
... and many more.
There are some frameworks out there to reduce such boilerplate code. I use Spring Boot in a current project (though not for microservices). If you already have Spring-based projects, it really simplifies the development of microservices (or any other Spring-based non-microservice application). Check out some of the examples: https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples
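As an illustration of how far that boilerplate can be squeezed, the client-side plumbing the question describes can be written once for all interfaces with a JDK dynamic proxy. This is only a sketch: RestInvoker is a hypothetical helper that would perform the HTTP request and the JSON (de)serialization.

import java.lang.reflect.Proxy;

public final class RestClients {

    // Turn any service interface into a REST-calling client.
    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> iface, String baseUrl) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                (proxy, method, args) ->
                        // e.g. POST baseUrl + "/" + method.getName(),
                        // serializing args and deserializing the reply
                        RestInvoker.invoke(baseUrl, method, args));
    }
}

// usage: RatingEngine engine = RestClients.create(RatingEngine.class, "http://ratings:8080");

Declarative HTTP client libraries such as OpenFeign build on exactly this idea, so the per-interface plumbing disappears into annotations.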
I've read up on the command bus a lot and have used it on a couple of projects; it's awesome. I keep reading, though, that the command is not supposed to return anything to the controller; however, there are certain times when I feel I absolutely must return a value, for example:
$product = $this->dispatch(AddProductCommand::class);
return redirect()->route('route', $attributes = ['product_slug' => $product->slug]);
I need to grab the slug of the newly created product because the route for the redirect needs the slug. Is this bad practice, and if so, what would be a cleaner way to go about it?
It's not possible to implement this in a completely asynchronous style, as you are using a web framework which is synchronous by design.
If you use a framework that allows async requests or (even better) you have separated UI concerns (like the redirect) from the backend, you can subscribe to a ProductAdded event with a callback that fires the redirect.
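A commonly used alternative (my own suggestion, not from the answer above) is to generate the identifier in the controller before dispatching, so the command can stay fire-and-forget and the redirect is built from data you already have. A sketch in Java with invented names, since the idea is framework-agnostic:

import java.util.Locale;
import java.util.UUID;

interface CommandBus {
    void dispatch(Object command); // fire-and-forget: returns nothing
}

class AddProductCommand {
    final String slug;
    final String name;

    AddProductCommand(String slug, String name) {
        this.slug = slug;
        this.name = name;
    }
}

class ProductController {
    private final CommandBus commandBus;

    ProductController(CommandBus commandBus) {
        this.commandBus = commandBus;
    }

    String addProduct(String name) {
        // generate the slug up front instead of reading it back from the bus
        String slug = name.toLowerCase(Locale.ROOT).replace(' ', '-')
                + "-" + UUID.randomUUID().toString().substring(0, 8);
        commandBus.dispatch(new AddProductCommand(slug, name));
        return "redirect:/products/" + slug; // built entirely from known data
    }
}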
I have created several tasks; each takes an input, has an execution function which keeps updating its status, and a function to get the output of the task. They will execute serially or in parallel. Some outputs are Lists, so there will be loops as well.
public class Task1 { // each task looks like this
    void addInput(String key, String value) {
        ....
    }

    void run() {
        ....
        updateStatus();
        ....
    }

    HashMap getOutput() {
        ....
    }

    Status getStatus() {
        ....
    }
}
I want to make a workflow from these tasks and then I will use the workflow structure information to build a dynamic GUI and monitor outputs of each task. Do I have to write a workflow execution system from scratch or is there any simple alternative available?
Is there any workflow engine to which I can give (maybe in XML) my Java classes, inputs, outputs and execution functions, and let it execute them?
In the Java world, your use case is known as BPM (business process management).
In the .NET world, it is called Windows Workflow Foundation (WF).
There are many Java-based open-source BPM tools. The one I like is jBPM.
It is powerful and can be integrated with rule engines like Drools.
Also, Activiti is another good choice.
Check out Activiti. It is not strictly designed to solve your use case, but you may be able to use and adapt its process engine, since it's written purely in Java. Activiti is primarily a process-modelling engine, so it's not designed to control tasks running in parallel at runtime; however, you will get many things you can reuse out of the box.
You get task linking based on an XML file.
You get a GUI for linking tasks for free (based on Eclipse).
You get a GUI in the web browser to browse running processes, start new ones, and see the current status of tasks: http://activiti.org/userguide/index.html#N12E60
You get a reporting engine for free, where you can see reports and charts ("how long did the tasks take", "how long was the process running").
You get a REST API for free; other applications will be able to get the current state of your application via simple REST calls.
So, going in this direction, you get many things for free. From a programmer's point of view you can, for example, inherit from the Task class in the Activiti API. Later, when the task is completed, call
taskService.complete(task.getId(), taskVariables);
You can also go the other way around. Supposing that your class which calculates in the background is called CalculationTask, you can connect each CalculationTask with a new instance of an Activiti Task. This gives you a bridge to the Activiti process engine. So you can do something like:
class CustomActivityTask extends Task { // inherit from the Activiti Task class to add your own fields
    private int someStateOne;
    private String someOtherState;
    (...)
    // getters and setters
}

class CalculationTask {
    private CustomActivityTask activityTask; // updating this task's state updates the task in the Activiti process engine
    private TaskService activityTaskService;

    public void run() { // this is your execution function
        while (true) {
            // calculate
            activityTask.setSomeStateOne(45);
            activityTask.setSomeOtherState("Task is almost completing...");
            (...)
            if (allCompleted) {
                activityTaskService.complete(activityTask.getId(), taskVariables);
                break;
            }
        }
    }
}
Apache Camel is an open-source integration framework that can be used as a lightweight workflow system. Routes can be defined with a Java, XML, Groovy or Scala DSL. Apache Camel includes integrated monitoring capabilities. Besides that, you may use external monitoring tools such as Hawtio.
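For a taste of the Java DSL, here is a minimal sketch of two tasks chained as a Camel route, with a split step for a task whose output is a List. The processors stand in for the question's task classes; all names are illustrative.

import java.util.Arrays;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class TaskPipeline {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext ctx = new DefaultCamelContext();
        ctx.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:start")
                    .process(exchange -> {
                        // "task 1": produce a list of work items from the input
                        String input = exchange.getIn().getBody(String.class);
                        exchange.getIn().setBody(Arrays.asList(input + "-a", input + "-b"));
                    })
                    .split(body())               // loop over the List output
                        .process(exchange ->     // "task 2": handle one item
                            exchange.getIn().setBody("done: " + exchange.getIn().getBody()))
                        .to("log:tasks")         // monitor progress in the log
                    .end();
            }
        });
        ctx.start();
        ctx.createProducerTemplate().sendBody("direct:start", "input");
        ctx.stop();
    }
}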
Also have a look at Work Flow in Camel vs BPM.
Take a look at the Copper Engine: http://copper-engine.org/
Unlike Activiti and the like, it does not require you to write a ton of XML just to get a simple workflow going.
Perhaps State Chart XML (SCXML) can help you. Currently it is a Working Draft specification published by the W3C.
SCXML provides a generic state-machine based execution environment based on Harel State Tables.
The Apache foundation provides a Java implementation that we (my company) are currently using to perform state transitions on "jobs". Here is the Apache Commons SCXML implementation.
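For a flavour of the format, a minimal chart for a job lifecycle might look like this (the states are invented for illustration):

<!-- A hypothetical "job" lifecycle as an SCXML state chart -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="queued">
  <state id="queued">
    <transition event="start" target="running"/>
  </state>
  <state id="running">
    <transition event="complete" target="done"/>
    <transition event="error" target="failed"/>
  </state>
  <state id="failed">
    <transition event="retry" target="queued"/>
  </state>
  <final id="done"/>
</scxml>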
If your analysis is time-consuming and you don't need immediate feedback, then perhaps you don't need a workflow engine at all, but a batch processor. Have a look at the Spring Batch project, which can be used together with Apache Camel (see my other answer for more information about that option).
I would definitely consider the Apache Storm project. Storm is designed to be an easily extensible, parallel computation engine. Among its many features, its ease of management, fault tolerance and general simplicity to set up (compared with similar technologies like Hadoop, I believe) are probably going to be attractive to you for a prototype system.
Workflows would be analogous to Storm topologies; the different tasks would be streams; and the different methods in tasks would correspond to spouts and bolts. Also, Storm supports several programming languages in its API, like Hadoop.
Storm was initially designed by Twitter and then open-sourced, similarly to other projects like Cassandra (Facebook) and Hadoop itself (Yahoo!). What that means for you as a user is that it was built for actual use, instead of as a purely theoretical concept. It's also pretty battle tested.
I hope this was useful to you, and wish you the best of luck with your project!
I'll give a brief overview of my goals below, just in case there are better alternative ways of accomplishing what I want. This question is very similar to what I need, but not quite exactly what I need. My question...
I have an interface:
public interface Command<T> extends Serializable {}
..plus an implementation:
public class EchoCommand implements Command<String> {
    private final String stringToEcho;

    public EchoCommand(String stringToEcho) {
        this.stringToEcho = stringToEcho;
    }

    public String getStringToEcho() {
        return stringToEcho;
    }
}
If I create another interface:
public interface AuthorizedCommand {
    String getAuthorizedUser();
}
..is there a way I can implement the AuthorizedCommand interface on EchoCommand at runtime without knowing the subclass type?
public <C extends Command<T>, T> C authorize(C command) {
    // can this be done so the returned Command is also an
    // instance of AuthorizedCommand?
    return (C) theDecoratedCommand;
}
The why... I've used Netty to build myself a very simple proof-of-concept client/server framework based on commands. There's a one-to-one relationship between a command, shared between the client and server, and a command handler. The handler exists only on the server, and handlers are extremely simple to implement. Here's the interface:
public interface CommandHandler<C extends Command<T>, T> {
    public T execute(C command);
}
On the client side, things are also extremely simple. Keeping things simple in the client is the main reason I decided to try a command-based API. A client dispatches a command and gets back a Future. It's clear the call is asynchronous, and the client doesn't have to deal with things like wrapping the call in a SwingWorker. Why build a synchronous API on top of asynchronous calls (anything over the network), just to wrap the synchronous calls in asynchronous helper methods? I'm using Guava for this.
public <T> ListenableFuture<T> dispatch(Command<T> command)
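On the calling side that works out to something like the snippet below (a sketch against Guava's older two-argument addCallback; dispatcher stands for the client-side dispatcher described above):

import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

ListenableFuture<String> future = dispatcher.dispatch(new EchoCommand("hello"));
Futures.addCallback(future, new FutureCallback<String>() {
    public void onSuccess(String result) {
        System.out.println("echoed: " + result); // runs when the reply arrives
    }
    public void onFailure(Throwable t) {
        t.printStackTrace(); // network failure, server-side error, ...
    }
});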
Now I want to add authentication and authorization. I don't want to force my command handlers to know about authorization, but in some cases I want them to be able to interrogate something about the user on whose behalf the command is being executed. Mainly I want to be able to have a lastModifiedBy attribute on some data.
I'm looking at using Apache Shiro, so the obvious answer seems to be to use its SubjectAwareExecutor to get authorization information into a ThreadLocal; but then my handlers need to be aware of Shiro, or I need to abstract it away by finding some way of mapping commands to the authentication/authorization info in Shiro.
Since each Command is already carrying state and getting passed through my entire pipeline, things are much simpler if I can just decorate commands that have been authorized so they implement the AuthorizedCommand interface. Then my command handlers can use the info that's been decorated in, but it's completely optional.
if (command instanceof AuthorizedCommand) {
    // We can interrogate the command for the extra metadata
    // we're interested in.
}
That way I can also develop everything related to authentication / authorization independent of the core business logic of my application. It would also (I think) let me associate session information with a Netty Channel or ChannelGroup which I think makes more sense for an NIO framework, right? I think Netty 4 might even allow typed attributes to be set on a Channel which sounds well suited to keeping track of things like session information (I haven't looked into it though).
The main thing I want to accomplish is to be able to build a prototype of an application very quickly. I'd like to start with a client side dispatcher that's a simple map of command types to command handlers and completely ignore the networking and security side of things. Once I'm satisfied with my prototype, I'll swap in my Netty based dispatcher (using Guice) and then, very late in the development cycle, I'll add Shiro.
I'd really appreciate any comments or constructive criticism. If what I explained makes sense to do and isn't possible in plain old Java, I'd consider building that specific functionality in another JVM language. Maybe Scala?
You could try doing something like this:
Java: Extending Class At Runtime
At runtime your code would extend the class of the Command to be instantiated and implement the AuthorizedCommand interface. This would make the class an instance of AuthorizedCommand while retaining the original Command class structure.
One thing to watch out for: you wouldn't be able to extend any classes marked with the final keyword.
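To make that concrete, here is a minimal sketch of one way to do it with cglib's Enhancer (an assumption on my part, not from the linked answer): it builds a runtime subclass of the command's own class that also implements AuthorizedCommand and delegates every call to the original instance. It assumes cglib is on the classpath and that the command class is non-final with an accessible no-arg constructor (otherwise use Enhancer.create(Class[], Object[]) with constructor arguments).

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;

public final class CommandAuthorizer {

    @SuppressWarnings("unchecked")
    public static <C extends Command<T>, T> C authorize(final C command, final String user) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(command.getClass()); // keep the concrete command type
        enhancer.setInterfaces(new Class<?>[] { AuthorizedCommand.class });
        enhancer.setCallback((MethodInterceptor) (obj, method, args, proxy) -> {
            if (method.getDeclaringClass() == AuthorizedCommand.class) {
                return user;                         // answer getAuthorizedUser()
            }
            return method.invoke(command, args);     // delegate to the original
        });
        return (C) enhancer.create();
    }
}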