I've read up on the command bus pattern a lot and have used it on a couple of projects; it's awesome. I keep reading, though, that the command is not supposed to return anything to the controller. However, there are certain times when I feel I absolutely must return a value, for example:
$product = $this->dispatch(AddProductCommand::class);
return redirect()->route('route', ['product_slug' => $product->slug]);
I need to grab the slug of the newly created product because the redirect route needs it. Is this bad practice, and if so, what would be a cleaner way to go about it?
It's not possible to implement this in a completely asynchronous style, because you're using a web framework that is synchronous by design.
If you use a framework that allows async requests, or (even better) you have separated UI concerns (like the redirect) from the backend, you can subscribe to a ProductAdded event with a callback that fires the redirect.
I am looking at microservices, and the possibility of migrating some of our code to this architecture. I understand the general concept but am struggling to see how it would work for our example.
Suppose I have an interface called RatingEngine and an implementation called RatingEngineImpl, both running inside my monolithic application. The principle is simple: the RatingEngineImpl could run on a different machine and be accessed by the monolithic application via (say) a REST API, serializing the DTOs as JSON over HTTP. We even have an interface to help with this decoupling.
But how do I actually go about this? As far as I can see, I need to create a new implementation of the interface for the rump monolith (i.e. now the client), which takes calls to the interface methods, converts them into REST calls, and sends them over the network to the new 'rating engine service'. Then I also need to implement a new HTTP server, with an endpoint for each interface method, which deserializes the DTOs (method parameters) and routes the call to our original RatingEngineImpl, which sits inside the server. Then it serializes the response and sends it back to the client.
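For concreteness, the client-side half of that plumbing might look something like the following minimal sketch (class name, endpoint path, and response format are all assumptions, not anything from an actual project):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical client-side proxy: the rump monolith keeps programming
// against the RatingEngine interface, and this implementation forwards
// each call over HTTP to the remote rating engine service.
public class RestRatingEngineClient implements RatingEngine {

    private final String baseUrl; // e.g. "http://rating-service:8080"

    public RestRatingEngineClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public int getRating() {
        try {
            URL url = new URL(baseUrl + "/rating");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                // Assumes the endpoint returns a bare number; a real
                // implementation would deserialize a JSON DTO here.
                return Integer.parseInt(in.readLine().trim());
            }
        } catch (Exception e) {
            throw new RuntimeException("Call to rating service failed", e);
        }
    }
}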
So that seems like an awful lot of plumbing code. It also adds maintenance overhead, since if you tweak a method in the interface you need to make changes in two more places.
Am I missing something? Is there some clever way to automate the construction of this boilerplate code?
The microservice pattern does not suggest you move every single service you have into its own deployable. Only move self-sustaining pieces of logic that will benefit from their own release cycle. For example, if your RatingEngine needs rating-logic updates weekly but the rest of your system is pretty stable, it will likely benefit from being a service of its own.
And yes, microservices add complexity, but not really the boilerplate code of HTTP servers. There are a lot of frameworks around to deal with that. Vert.x is a good one; others are Spring Boot, Apache Camel, etc. A complete microservice setup could look like this with Vert.x:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class RatingService extends AbstractVerticle implements RatingEngine {

    @Override
    public void start() {
        // Expose the current rating as JSON over HTTP on port 8080.
        vertx.createHttpServer().requestHandler(req -> {
            req.response()
                .putHeader("content-type", "application/json")
                .end(computeCurrentRating().encodePrettily());
        }).listen(8080);
    }

    @Override
    public int getRating() {
        return 4; // or whatever.
    }

    protected JsonObject computeCurrentRating() {
        return new JsonObject().put("rating", getRating());
    }
}
Even JAX-RS, the standard Java EE API for REST, helps you build a microservice in not too many lines of code. A hypothetical rating resource (path and payload are illustrative assumptions) could be as small as this:
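import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Minimal JAX-RS resource exposing the same kind of rating endpoint.
@Path("/rating")
public class RatingResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String getRating() {
        // A real service would delegate to RatingEngineImpl and
        // serialize a proper DTO instead of a hand-written string.
        return "{\"rating\": 4}";
    }
}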
The really hard work with microservices is adding error-handling logic in the clients. Some common pitfalls:
Microservice may go down. If a call to RatingService gives a connection-refused exception, can you deal with it? Can you estimate a rating in the client so further processing isn't blocked? Can you reuse old responses to estimate the rating? (See the sketch after this list.) Or, at the very least, you need to signal the error to support staff.
Reactive app? How long can you wait for a response? A call to an in-memory method returns within nanoseconds; a call to an external HTTP service may take seconds or minutes depending on a number of factors. As long as the application is "reactive" and can continue to work without a rating, then present the rating to the user once it's available, that's fine. If you are blocking on a call to the rating service for more than a few milliseconds, response time becomes an obstacle. It's not as convenient or common to make reactive apps in Java as in Node.js, and a reactive approach will likely trigger a remake of your entire system.
Tolerant client. Unit/integration testing a single project is easy; testing a complex net of microservices is not. The best thing you can do about it is to make your client calls less picky. Schema validations etc. are actually bad things here. In XML, use single XPaths to get the data you want from the response, no more, no less. That way, a change in the microservice response will not require updates to all clients. JSON is a bit easier to deal with than XML in this respect.
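To make the first pitfall concrete, here is a hedged sketch of a client-side fallback: remember the last good response and reuse it when the remote call fails. Class and method names are assumptions:

// Wraps any RatingEngine (e.g. the REST proxy sketched earlier) and falls
// back to the last known rating when the remote service is unreachable.
public class ResilientRatingClient {

    private final RatingEngine remote;
    private volatile int lastKnownRating = 0; // default/estimated rating

    public ResilientRatingClient(RatingEngine remote) {
        this.remote = remote;
    }

    public int getRatingOrFallback() {
        try {
            lastKnownRating = remote.getRating(); // cache good responses
            return lastKnownRating;
        } catch (RuntimeException e) {
            // Connection refused, timeout, etc.: reuse the old response and
            // signal the error to support staff via logging/alerting here.
            return lastKnownRating;
        }
    }
}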
No, unfortunately you are not missing anything substantial. The microservice architecture comes with its own costs; the one that caught your eye (boilerplate code) is a well-known item on the list. There is a very good article by Martin Fowler explaining the various advantages and disadvantages of the idea. It includes topics like:
added complexity
increased operational maintenance cost
struggle to keep consistency (while allowing special cases to be treated in exceptional ways)
... and many more.
There are some frameworks out there to reduce such boilerplate code. I use Spring Boot in a current project (though not for microservices). If you already have Spring-based projects, it really simplifies the development of microservices (or any other non-microservice application based on Spring). To give a feel for how little code it takes, a sketch follows. Check out some of the examples: https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples
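A hedged illustration of what the Spring Boot equivalent of the earlier Vert.x service might look like; class name and path are assumptions:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// One class is enough: auto-configuration starts an embedded HTTP
// server, and the method below becomes a JSON endpoint.
@SpringBootApplication
@RestController
public class RatingServiceApplication {

    @RequestMapping("/rating")
    public int rating() {
        return 4; // or whatever, as in the Vert.x example
    }

    public static void main(String[] args) {
        SpringApplication.run(RatingServiceApplication.class, args);
    }
}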
I've got a question on actors within the Play framework. Disclaimer: I am still new to actors/Akka and have been spending quite a while now reading through documentation. I apologise if the answers to any of the below are already documented somewhere I have missed.
What I would like to verify is that I am implementing a correct/idiomatic solution to the below scenario:
Case:
Using the Play framework, I need to execute code that may block (an SQL query) in such a way that it does not hinder the rest of my web server.
Below is my current solution and some questions:
static ActorRef actorTest = Akka.system().actorOf(
        Props.create(ActorTest.class));

public static Promise<Result> runQuery() {
    Promise<Result> r = Promise.wrap(
            Patterns.ask(actorTest, query, 600000)).map(
                new Function<Object, Result>() {
                    public Result apply(Object response) {
                        return ok(response.toString());
                    }
                });
    return r;
}
Now, if I get many requests, will they simply enter an unbounded queue while they are dealt with by the actor? Or:
I have read some docs on actor routing. Would I have to take care of this myself, i.e. make a router actor instead, which uses some kind of routing logic to send queries to child actors? Or is all of the above taken care of by the Play framework?
How can I configure the number of threads dedicated to the above actor? (I read something on this referring to the application.conf file.)
Any clarification on the above will be greatly appreciated.
I'm mostly using Scala with Akka and Play, so I may be misguiding you, but let's give it a try.
First of all, you can ditch actors for the task you describe. I would just run the computation in a Future.
Use actors when you need to hold some state; running a query asynchronously works just fine with a Future.
Futures and actors run on an ExecutionContext; the default one is available in Scala by importing it and passing it by reference. This may look different in Java, but probably not by much. That default ExecutionContext is configured in application.conf, just as you said.
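A minimal sketch of what that could look like in Play's Java API (Play 2.3-era F.Promise; the dispatcher name "contexts.jdbc" and the query method are assumptions):

import play.libs.Akka;
import play.libs.F.Promise;
import play.mvc.Result;
import scala.concurrent.ExecutionContext;
import static play.mvc.Results.ok;

public class QueryController {

    public static Promise<Result> runQuery() {
        // A dedicated thread pool for blocking JDBC work, configured in
        // application.conf, e.g.:
        //   contexts.jdbc.fork-join-executor.parallelism-max = 10
        ExecutionContext jdbcContext =
                Akka.system().dispatchers().lookup("contexts.jdbc");

        // Run the blocking call off Play's default thread pool.
        return Promise.promise(() -> ok(runBlockingSqlQuery()), jdbcContext);
    }

    private static String runBlockingSqlQuery() {
        return "result"; // placeholder for the real JDBC call
    }
}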
I followed the NetBeans tutorial below on creating an enterprise application using the IDE. I just want to know why a message-driven bean is preferred here for the save (persist) method, and why not for the other database operations such as findAll?
https://netbeans.org/kb/docs/javaee/maven-entapp.html
Message-driven beans are asynchronous components. To illustrate the concept: asynchronous communication works pretty much like email. You send the email and that's it; you can only hope for the best and expect that the recipient processes your mail as soon as possible and replies if necessary (in a separate communication). Synchronous communication, on the other hand, works pretty much like a phone call: you get your response during the same communication, without the need to start a new one.
In your case, when a client invokes findAll it is quite likely expecting to get a list of results in the same communication (synchronously: "server, give me right now all the customers in the system"), in which case an MDB (asynchronous) is simply useless. On the other hand, when a client invokes save it might not want to wait for an answer (asynchronously: "server, just try to save this info, I don't need to know right now whether you succeeded").
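As a hedged sketch of what the asynchronous side looks like (queue configuration and payload type are assumptions, not the tutorial's exact code):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

// The client drops a "save" request on a JMS queue and returns
// immediately; the container delivers it to this bean later.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class SaveProductBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            Object product = ((ObjectMessage) message).getObject();
            // persist(product): delegate to an EntityManager in a real bean.
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}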
There's a lot more info here.
I need help and it's really confusing.
I've tried to follow all the examples on the web about IPC - passing parameters between portlets using events.
Here's my code if I only want to pass an attribute using an event:
QName qName = new QName("http://liferay.com/events", "ipc.send");
response.setEvent(qName, pitchType);
And then in my receiving event portlet:
@ProcessEvent(qname = "{http://liferay.com/events}ipc.send")
public void catchBall(EventRequest request, EventResponse response) {
    Event event = request.getEvent();
    String send = (String) event.getValue();
    response.setRenderParameter("send", send);
}
It only passes one parameter, and only a String. I've tried passing an object like Foo as the parameter, but no luck; it won't run. Any idea how to pass an object via an event? I really need help here. :(
Passing custom objects as event parameters can be tricky, especially when you go across plugin boundaries: the class must be available to both plugins in that case (and its payload must be serializable), otherwise the event naturally cannot be received properly.
A common recommendation is to keep the communication on the UI layer (e.g. in portlet events) very shallow and not rely on heavy objects. Keep in mind that this communication should not really be business-layer communication, so it's fine to pass around identifiers, primary keys, or other placeholders for the real data. Assume nobody might be interested in receiving the event; then you shouldn't spend too much effort building the event in the first place.
Alternatively, you can cache the interesting object on your business layer, so that it is quickly available if it is indeed being worked on (e.g. if the event is received).
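A minimal sketch of the identifier approach, based on the code in the question (the Foo lookup is hypothetical; substitute whatever your business layer offers):

// Sender: publish only the primary key, not the Foo object itself.
QName qName = new QName("http://liferay.com/events", "ipc.send");
response.setEvent(qName, String.valueOf(foo.getPrimaryKey()));

// Receiver: resolve the identifier back into the real object.
@ProcessEvent(qname = "{http://liferay.com/events}ipc.send")
public void catchBall(EventRequest request, EventResponse response) {
    long fooId = Long.parseLong((String) request.getEvent().getValue());
    // Foo foo = FooLocalServiceUtil.getFoo(fooId); // hypothetical service call
    response.setRenderParameter("fooId", String.valueOf(fooId));
}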
In my app I have lots of GET, POST, and PUT requests. Right now I have a singleton class that holds my downloaded data and has many inner classes that extend AsyncTask.
In my singleton class I also have a few interfaces like this:
/**
 * Handler for notifying listeners when data is downloaded.
 */
public interface OnQuestionsLoadedListener {
    void onDataLoadComplete();
    void onDataLoadingError();
}
Is there something wrong with this pattern (many inner classes that extend AsyncTask)?
Could it be done more efficiently with maybe just one inner class per HTTP verb (one for GET, one for POST, ...)? If so, how do I decide what to do after e.g. a GET request?
On the whole, you should move away from AsyncTasks for performing network requests.
Your AsyncTasks are tied to your Activity: if your Activity stops, your AsyncTask stops.
This isn't the biggest problem when fetching data to show in that Activity, since you won't care that the fetching has stopped. But when you want to send some saved data to the server and your user presses 'back' or something like that before everything is sent, the data could be lost and never sent.
What you want instead is a Service, which will keep running regardless of what happens to your Activities.
I'd advise you to take a look at RoboSpice. Even if you decide not to use it, reading what it does and why will give you good insight into the pretty long list of reasons not to use AsyncTasks for network requests, and why Services are better.
If you use it, the rest of your question about efficient network requesting is obsolete too, since it will handle that for you in the best way possible.
There's nothing wrong with many async classes.
What I do is have a network layer: a service class. Send an Intent to the service with a ResultReceiver object as part of the Intent; then, in the service, make the HTTP request and send the result back through the ResultReceiver.
A good design is to abstract the UI (Activity or Fragment) from network access, as in the sketch below.
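A hedged sketch of that service-plus-ResultReceiver wiring (class and extra names are assumptions; an IntentService already runs on its own worker thread, so no nested AsyncTask is needed):

import android.app.IntentService;
import android.content.Intent;
import android.os.Bundle;
import android.os.ResultReceiver;

public class NetworkService extends IntentService {

    public static final String EXTRA_RECEIVER = "receiver";
    public static final String EXTRA_URL = "url";

    public NetworkService() {
        super("NetworkService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        ResultReceiver receiver = intent.getParcelableExtra(EXTRA_RECEIVER);
        String url = intent.getStringExtra(EXTRA_URL);

        // Perform the blocking HTTP call here; we are already off the
        // main thread inside an IntentService.
        Bundle result = new Bundle();
        result.putString("response", "..."); // hypothetical response body

        // Hand the result back to the Activity's ResultReceiver.
        receiver.send(0, result);
    }
}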
In a recently developed app I followed a similar scheme, but in addition implemented a WebRequest class doing the actual GET, POST, PUT, etc.
What I have now is a "Connector" class with a whole lot of AsyncTask subclasses inside it.
In my implementation, however, I made them accept a Callback object to which each of those subclasses passes the HTTP result.
I think this is a valid, if perhaps not ideal, way.
What I imagine could be an improvement is a single AsyncTask subclass to which I would pass the request body (which is now built within those different tasks), the request URL, and the method, as well as the callback (which is, in my opinion, a rather nice way to get the results).
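A hedged sketch of that single generic task (all names are assumptions; error handling is deliberately minimal):

import android.os.AsyncTask;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class HttpRequestTask extends AsyncTask<Void, Void, String> {

    public interface Callback {
        void onResult(String response);
        void onError(Exception error);
    }

    private final String url;
    private final String method; // "GET", "POST", "PUT", ...
    private final String body;   // may be null, e.g. for GET
    private final Callback callback;
    private Exception error;

    public HttpRequestTask(String url, String method, String body,
                           Callback callback) {
        this.url = url;
        this.method = method;
        this.body = body;
        this.callback = callback;
    }

    @Override
    protected String doInBackground(Void... ignored) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod(method);
            if (body != null) {
                conn.setDoOutput(true);
                conn.getOutputStream().write(body.getBytes("UTF-8"));
            }
            // Read the whole response body as one string.
            Scanner s = new Scanner(conn.getInputStream()).useDelimiter("\\A");
            return s.hasNext() ? s.next() : "";
        } catch (Exception e) {
            error = e;
            return null;
        }
    }

    @Override
    protected void onPostExecute(String response) {
        if (error != null) {
            callback.onError(error);
        } else {
            callback.onResult(response);
        }
    }
}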