I have the following setup:
An HTTP request arrives on a REST endpoint and I receive it in my application service.
The application service maps the request to command C1 and forwards it to the aggregate using commandGateway.sendAndWait(new C1(restPostBody));.
The aggregate is loaded from the repository, the command is handled, and a new event is produced and saved to the event store.
At this point I need to enrich this event and use it as the response of the REST call.
So far I can see these options:
Use a view projector and project the new event into a view model that can be returned as the response of the REST call. I guess here I would need to use queryGateway.subscriptionQuery(...) and sqr.updates().blockFirst() to wait for the event to be processed by the projector before creating the response. I also guess this should be synchronous, since the projection could get out of sync if the system fails between storing the event to the DB and storing the projection to the DB?
Use some event enricher after the event is published from the aggregate, add the needed properties to it, and return the result as the REST response. This is similar to the projection, but in this case I would not save it to the DB, since the only time I need the data is as the response of the REST endpoint at the time the command is issued. This should definitely be synchronous, since if something fails I would lose the response. In the async case I would need to have the aggregate handle duplicate events and still emit events to the event enricher without storing them to the DB. This seems to make things a lot more complicated.
Are there any best practices related to this?
UPDATE
What I currently have is:
@Autowired
public void configure(EventProcessingConfigurer configurer) {
    configurer.usingSubscribingEventProcessors();
}
for synchronous event processing in the aggregate and the view model. Then I can query the view model using the following (it looks a bit ugly; is there a better way?):
SomeView sc = null;
try {
    sc = queryGateway.query(new MyQuery("123", "123"),
            ResponseTypes.instanceOf(SomeView.class)).get();
}
catch (InterruptedException e) {
    e.printStackTrace();
}
catch (ExecutionException e) {
    e.printStackTrace();
}
And I can return this SomeView as the response of the REST API.
So, @bojanv55, you're trying to spoof your application into a synchronous set-up, whilst the command-event-query approach with Axon Framework pushes you to go the other way.
Ideally, your front end would be able to deal with this situation.
Thus, if you hit an endpoint which publishes a command, you'd do a fire-and-forget. The events updating your query model would be pushed to the front end as updates, as they happen. In short, embracing the fact that it's asynchronous should, in the end, make everything feel more seamless.
However, that's easier said than done; you're asking this question for a reason, of course.
I personally like using the subscription query, which you're also pointing towards, to spoof the operation into appearing synchronous.
This repository by Frans shows how to do this with Axon Framework quite nicely I think.
What he does is handle the REST operation by first dispatching a subscription query for the thing you know will be updated pretty soon.
Secondly, the command is dispatched to the aggregate, the aggregate decides to publish an event, and the event updates the query model.
The query model update then results in an update being emitted towards your subscription query, allowing you to return the result only once the query model has actually been adjusted.
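Put together, a REST handler following that pattern could look roughly like the sketch below. It is only a sketch, assuming Axon 4 and reusing the C1/MyQuery/SomeView types from your question; RestPostBody, the query parameters and the endpoint path are placeholders, and the projector is expected to emit updates through a QueryUpdateEmitter:
@PostMapping("/products")
public SomeView handle(@RequestBody RestPostBody restPostBody) {
    // 1. Subscribe first, so an update emitted by the projector cannot be missed.
    SubscriptionQueryResult<SomeView, SomeView> result = queryGateway.subscriptionQuery(
            new MyQuery("123", "123"),
            ResponseTypes.instanceOf(SomeView.class),
            ResponseTypes.instanceOf(SomeView.class));
    try {
        // 2. Dispatch the command; the aggregate publishes the event and the
        //    projector updates the view model, emitting an update for this query.
        commandGateway.sendAndWait(new C1(restPostBody));
        // 3. Block until the first update arrives and use it as the REST response.
        return result.updates().blockFirst(Duration.ofSeconds(5));
    } finally {
        result.cancel();
    }
}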
Concluding, I'd always recommend my first suggestion to embrace the asynchronous situation you're in. Secondly though, I think the subscription query solution I've just shared could resolve the problem you're having as well.
Hope this helps you out!
Does the following code block the call, and if so, how do I make it non-blocking (otherwise the use of a reactive stream seems pointless)? How can I paginate without making the call blocking?
Currently I have a webClient calling a backend service that returns a Flux<Item>, and according to the specs I need to return a ResponseEntity<ItemListResponse> where I have to provide a paginated response.
The code inside the controller method looks like this:
// getItems was initially returning a Flux, but my method was failing to paginate it,
// so that method now returns a Mono<List<Item>>.
// I would really like to see how I can make this work with Flux!
return webClient.getItems(/* ...required params... */)
        .map(r -> {
            // offset and limit come from the query params
            var paginatedItems = listPaginator.applyPagination(r, offset, limit);
            // assembleItemResponse maps the values from the backend service
            // to the new response required by the client
            List<ItemResponse> itemResponseList = paginatedItems.stream()
                    .map(this::assembleItemResponse)
                    .collect(Collectors.toList());
            return ResponseEntity.ok()
                    .body(ItemListResponse.builder()
                            .itemCount(r.size())
                            .pagesize(itemResponseList.size())
                            .listItems(itemResponseList)
                            .build());
        });
Does applying pagination on the response using Spring WebFlux make it blocking?
There's nothing inherently blocking about pagination, but the normal Spring Data way to achieve it is to use a PagingAndSortingRepository to query the data layer for only the results you need for that particular page (as opposed to querying the data layer for all possible results and then filtering, which could have a huge performance impact depending on the data size). Unfortunately that approach is blocking: the PagingAndSortingRepository doesn't, as of yet, have a reactive equivalent.
However, that doesn't appear to be what you're doing here. A complete example would be helpful, but it looks like r is a list of items, and you're applying a listPaginator to that entire list to reduce it down. If that's the case then there's no reason why it would necessarily be blocking, so it should be safe. However, it's impossible to say for certain without knowing the specific behaviour of the applyPagination and assembleItemResponse methods. If those methods are blocking then no, you're not safe here, and you need to consider another approach (exactly what approach would require a more complete example). If those methods are non-blocking (they just deal with the data they have and don't call any further web/DB services, etc.) then you're OK.
Separately, you might consider looking at BlockHound, which will be able to tell you for certain whether you have any blocking calls where they shouldn't be.
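As a side note on the "I would really like to see how I can make this work with Flux" comment in your snippet, a sketch like the following keeps the chain reactive end to end. It assumes getItems can return a Flux<Item> again and reuses the builder and helper names from your code; the cache() call replays the stream for both the count and the page window, which is only reasonable for modestly sized results:
Flux<Item> items = webClient.getItems(/* ...required params... */).cache(); // replayed twice below

return Mono.zip(
                items.count(),                 // total number of items returned by the backend
                items.skip(offset).take(limit) // just the requested page window
                     .map(this::assembleItemResponse)
                     .collectList())
           .map(tuple -> ResponseEntity.ok()
                   .body(ItemListResponse.builder()
                           .itemCount(tuple.getT1().intValue())
                           .pagesize(tuple.getT2().size())
                           .listItems(tuple.getT2())
                           .build()));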
I have defined a Microservice:
AccountManagementService
Handles CreateAccountCommand
CreateAccountCommand shoots AccountCreatedEvent
The new account is then stored in a MySQL DB
Axon Server is used to store the event
Now I want another microservice B (that is connected to the same Axon Server) to handle the AccountCreatedEvent too.
Can I simply put
@EventHandler
protected void on(AccountCreatedEvent event) {
    // ...
}
in the second microservice B (for example in an Aggregate)? I tried that under the assumption that the event is stored in Axon Server and my second microservice automatically listens to that published event. It didn't work unfortunately.
How can microservice B subscribe to events (published by AccountManagementService) in the Axon Server?
The key issue you are experiencing stems from one of your final sentences:
Can I simply put [insert-code-snippet] in the second microservice B (for example in an Aggregate)?
Well, yes, you can simply add an @EventHandler annotated function to a regular component, but not to an Aggregate. The Aggregate functionality provided by Axon Framework is aimed at being a Command Model, as would be used when employing CQRS.
This makes the Aggregate a component which can solely handle command messages.
The only exception to this is when you are adding Event Sourcing into the mix too. When event sourcing, you source a given Command Model from the events it has published. Thus, an @EventSourcingHandler annotated function inside your Aggregate will only ever handle the events which originated from that Aggregate.
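For contrast, this is roughly what that looks like inside the Command Model itself, i.e. the Account aggregate in AccountManagementService. The field and getter names are illustrative; only the command and event types come from your question:
@Aggregate
public class Account {

    @AggregateIdentifier
    private String accountId;

    @CommandHandler
    public Account(CreateAccountCommand command) {
        // decision making happens here; the event records the decision
        AggregateLifecycle.apply(new AccountCreatedEvent(command.getAccountId()));
    }

    @EventSourcingHandler
    protected void on(AccountCreatedEvent event) {
        // only ever invoked for events this aggregate itself published,
        // used to (re)build its state, never to drive other services
        this.accountId = event.getAccountId();
    }

    protected Account() {
        // required by Axon for event-sourced reconstruction
    }
}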
To circle back to your original question, yes you can simply put the:
@EventHandler
protected void on(AccountCreatedEvent event) {
    // My event handling operation
}
in (micro)service B. But, that method cannot reside in an Aggregate. This is by design, to follow the CQRS and Event Sourcing paradigm as intended.
Any other component will do just fine though. If you are combining Axon with Spring, simply having your event handling functions in a Spring-managed component is sufficient. If you are not using Spring, you will have to register the components containing your event handlers with the Configurer or EventProcessingConfigurer.
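For the non-Spring case, a registration could look roughly like the following sketch (Axon 4 style; ServiceB and AccountEventHandler are just illustrative names):
import org.axonframework.config.Configuration;
import org.axonframework.config.DefaultConfigurer;
import org.axonframework.eventhandling.EventHandler;

public class ServiceB {

    // Plain event handling component; deliberately not an Aggregate.
    public static class AccountEventHandler {

        @EventHandler
        protected void on(AccountCreatedEvent event) {
            // react to accounts created by AccountManagementService
        }
    }

    public static void main(String[] args) {
        Configuration configuration = DefaultConfigurer.defaultConfiguration()
                // register the component so an event processor will invoke it
                .eventProcessing(processing ->
                        processing.registerEventHandler(conf -> new AccountEventHandler()))
                .buildConfiguration();
        configuration.start();
    }
}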
Hoping this clarifies things for you.
I've been trying to use Java Observer and Observable in a multi-user XPages application, but I'm running into identity conflicts. I'll explain.
Say A and B have the same view on their screens: a list of documents with Readers fields. We want to keep those screens synchronised as much as possible. If A changes something, B might receive updates, depending on his rights and roles. We managed to do this using WebSockets, but I want to see if there's a better way, i.e. without sending a message to the client telling it to re-fetch the screen.
Using the Observer mechanism, B can observe changes and have the changed screen pushed to him. The tricky part here is that if I call notifyObservers as user A and the notification walks through all the observers, it is A who ends up executing each Observer.update() method, not B.
I also thought of using a Timer-like solution, but I'd probably end up with the same conflicts.
Question: is there any way I can properly switch sessions in XPages? Or should I wait for Publish/Subscribe in the XPages server?
I can see 3 possible actions:
Use the SudoUtils from XPages-Scaffolding to run code on behalf of another user
Use DominoJNA to access the data with a different user id (not for the faint of heart)
Just notify the client using the WebSocket, preferably via a web worker. The client then makes a fetch (the artist formerly known as Ajax) to see if changes are needed in the client UI. While this has the disadvantage of an extra network round trip (WebSocket + fetch), it has the advantage that you don't need to mess with impersonation, which always carries the risk of something going wrong.
For the first two, I would pack the code into an OSGi bundle to be independent of the particularities of Java loaded from an NSF.
Old answer
Your observable (the thing you call notifyObservers on) needs to live in the application scope, so it can reach any user's observer. The observer would then use a WebSocket to tell the client: update this ONE record.
The tricky part needs planning: have individual WebSocket addresses, so you notify only the clients that need notification.
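A very rough sketch of that application-scoped idea, combined with the third option above: only the document's UNID is pushed, and each client re-fetches the data as itself, so Readers fields are still evaluated per user. DocumentChangePublisher and ClientSocket are made-up names standing in for whatever WebSocket plumbing is already in place:
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class DocumentChangePublisher {

    // One entry per logged-in user; registered when that user's page opens its socket.
    private final Map<String, Set<ClientSocket>> socketsByUser = new ConcurrentHashMap<>();

    public void register(String userName, ClientSocket socket) {
        socketsByUser.computeIfAbsent(userName, k -> new CopyOnWriteArraySet<>()).add(socket);
    }

    // Called in user A's session after a save; only the UNID is pushed,
    // each notified client re-reads the document in its own session.
    public void documentChanged(String unid, Collection<String> usersToNotify) {
        for (String user : usersToNotify) {
            for (ClientSocket socket : socketsByUser.getOrDefault(user, Collections.emptySet())) {
                socket.send("{\"updated\":\"" + unid + "\"}");
            }
        }
    }

    // Minimal abstraction over the real WebSocket endpoint.
    public interface ClientSocket {
        void send(String message);
    }
}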
I'm just getting into Spring (and Java), and despite quite a bit of research, I can't seem to even express the terminology for what I'm trying to do. I'll just explain the task, and hopefully someone can point me to the right Spring terms.
I'm writing a Spring-WS application that will act as middleware between two APIs. It receives a SOAP request, does some business logic, calls out to an external XML API, and returns a SOAP response. The external API is weird, though. I have to perform "service discovery" (make some API calls to determine the valid endpoints -- a parameter in the XML request) under a variety of situations (more than X hours since last request, more than Y requests since last discovery, etc.).
My thought was that I could have a class/bean/whatever (not sure of the best terminology) that could handle all this service discovery stuff in the background. Then the request handlers can query this "thing" to get a valid endpoint without needing to perform their own discovery and slow down request processing. (Service discovery only needs to be re-performed rarely, so it would be wasteful to do it for every request.)
I thought I had found the answer with singleton beans, but every resource says those shouldn't have state and concurrency will be a problem -- both of which kill the idea.
How can I create an instance of "something" that can:
1) Wake up at a defined interval and run a method (i.e. to check if Service discovery needs to be performed after X hours and if so do it).
2) Provide something like a getter method that can return some strings.
3) Provide a way in #2 to execute a method in the background without delaying return (basically detect that an instance property exceeds a value and execute -- or I suppose, issue a request to execute -- an instance method).
I have experience with multi-threaded programming, and I have no problem using threads and mutexes. I'm just not sure that's the proper way to go in Spring.
Singletons ideally shouldn't have state because of multithreading issues. However, it sounds like what you're describing is essentially a periodic query that returns an object describing the results of the discovery mechanism, and you're implementing a cache. Here's what I'd suggest:
Create an immutable (value) object MyEndpointDiscoveryResults to hold the discovery results (e.g., endpoint address(es) or whatever other information is relevant to the SOAP consumers).
Create a singleton Spring bean MyEndpointDiscoveryService.
On the discovery service, hold an AtomicReference<MyEndpointDiscoveryResults> (or even just a plain volatile field). This ensures that all threads see the updated results, and restricting them to a single, atomically updated field containing an immutable object keeps the scope of the concurrency interactions small.
Use @Scheduled or another mechanism to run the appropriate discovery protocol. When there's an update, construct the entire result object first, then save it into that field.
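Put together, a minimal sketch of that service could look like this. EndpointDiscoveryClient, its discover() method, the empty() factory and the property name are placeholders for whatever actually performs the discovery calls, and @Scheduled requires @EnableScheduling somewhere in the configuration:
import java.util.concurrent.atomic.AtomicReference;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Service
public class MyEndpointDiscoveryService {

    // Single, atomically swapped reference to an immutable results object.
    private final AtomicReference<MyEndpointDiscoveryResults> latest =
            new AtomicReference<>(MyEndpointDiscoveryResults.empty());

    private final EndpointDiscoveryClient client; // hypothetical collaborator doing the API calls

    public MyEndpointDiscoveryService(EndpointDiscoveryClient client) {
        this.client = client;
    }

    // Background refresh: build the whole result object first, then publish it in one step.
    @Scheduled(fixedDelayString = "${discovery.refresh-ms:3600000}")
    public void refresh() {
        latest.set(client.discover());
    }

    // Request handlers just read the cached value; they never trigger discovery themselves.
    public MyEndpointDiscoveryResults current() {
        return latest.get();
    }
}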
I've read up on the command bus a lot and used it on a couple of projects; it's awesome. I keep reading, though, that the command is not supposed to return anything to the controller; however, there are certain times when I feel like I absolutely must return a value, for example:
$product = $this->dispatch(AddProductCommand::class);
return redirect()->route('route', $attributes = ['product_slug' => $product->slug]);
I need to grab the slug of the newly created product, because the route for the redirect needs it. Is this bad practice, and if so, what would be a cleaner way to go about it?
It's not possible to implement this in a completely asynchronous style, as you are using a web framework that is synchronous by design.
If you use a framework that allows async requests, or (even better) you have separated UI concerns (like the redirect) from the backend, you can subscribe to a ProductAdded event with a callback that fires the redirect.