I own a DDD/CQRS application.
My question concerns the handling of an item creation through POST (Rest).
CQRS (based on CQS principle) promotes that commands should never return a value.
Queries are there for that.
So I wonder how to handle the use case of Item creation.
Here's my current command handler pattern (simplified for the sample: no interfaces, etc.):
@Service
@Transactional
public class CreateItem {

    public void handle(CreateItemCommand command) {
        Customer customer = customerRepository.findById(command.customerId);
        ItemId generatedItemId = itemRepository.nextIdentity(); // generating the GUID
        customer.createItem(generatedItemId, .....);
    }
}
From reading this article, an easy approach would be to declare an output property on the command, populated at the end of the handle method like this:
public void handle(CreateItemCommand command) {
    Customer customer = customerRepository.findById(command.customerId);
    ItemId generatedItemId = itemRepository.nextIdentity(); // generating the GUID
    customer.createItem(generatedItemId, .....);
    command.itemId = generatedItemId; // populating the output property
}
However, I see one drawback with this approach:
- A command, in theory, is meant to be immutable.
This itemId would then be returned by the calling controller (webapp) in the Location header with status 201 or 202 (depending on whether I expect asynchronous handling or not).
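Something like this, as a rough sketch assuming a Spring MVC controller (the request class, mapping and handler wiring are illustrative only):

@PostMapping("/customers/{customerId}/items")
public ResponseEntity<Void> createItem(@PathVariable String customerId,
                                       @RequestBody CreateItemRequest request) {
    CreateItemCommand command = new CreateItemCommand(customerId, request.getName());
    createItem.handle(command); // the handler populates command.itemId
    URI location = URI.create("/customers/" + customerId + "/items/" + command.itemId);
    return ResponseEntity.created(location).build(); // 201; ResponseEntity.accepted() for the async 202 case
}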
Another solution would be to let the controller generate the GUID by accessing the repository itself, thus keeping the command immutable:
// in my controller:
ItemId generatedItemId = itemRepository.nextIdentity(); // controller generating the GUID
createItem.handle(command);
// set the Location header here (201/202) with the URL to the newly created item, using the previous itemId
Drawback: the controller (adapter layer) accesses the repository directly, which is too low-level IMO.
Since my outermost client is a JavaScript application, another option would be to let the JavaScript itself generate the GUID and populate CreateItemCommand with it before sending the whole command to the server.
Advantage: No more issues about potential violation of CQ(R)S guidelines.
Drawback: the validity of the passed id has to be checked server side, although a unique index on it would prevent an unexpected insertion into the database.
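For that last option, the server-side check could look roughly like this (a sketch: ItemId.fromString, itemRepository.exists and the conflict exception are hypothetical names):

public void handle(CreateItemCommand command) {
    ItemId itemId = ItemId.fromString(command.itemId); // rejects a malformed GUID
    if (itemRepository.exists(itemId)) {
        throw new ConflictingItemIdException(itemId); // surfaced by the controller as 409 Conflict
    }
    Customer customer = customerRepository.findById(command.customerId);
    customer.createItem(itemId, .....);
}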
What is the best (or just a good) strategy to handle this?
I am the developer of a CRM application based on the CQRS pattern. I tend to see commands as immutable. The team decided early on that all IDs are generated on the client, to keep commands immutable. This is perfectly OK, as we are using UUIDs, so we are quite confident that the IDs are valid and there are no ID collisions. That approach has served us well up to this point - I can definitely recommend it. In that scenario the client simply knows the IDs.
Sometimes it happens though - especially in manual testing - that a create command is dispatched twice with the same ID. In that case, appending the events to the event store fails (we use event sourcing) with a duplicate key exception. The exception is passed back to the controller. In fact, we do return results from command executions via a callback, even though it's just "everything ok" most of the time - so no exception thrown. Command validation is also done this way. We do this using a command bus concept.
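For illustration, dispatching with feedback through Axon's CommandGateway looks roughly like this (a sketch: the command and its fields are placeholders, and older Axon versions use a CommandCallback instead of the CompletableFuture shown here):

commandGateway.send(new CreateItemCommand(clientGeneratedId, customerId))
              .whenComplete((result, exception) -> {
                  if (exception != null) {
                      // e.g. the duplicate key from the event store: report the conflict to the caller
                  } else {
                      // "everything ok" - acknowledge, nothing else to return
                  }
              });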
I would recommend taking a look at the Axon framework. We use it, it provides the common building blocks, and it just works. Maybe you can get some inspirations from that!
Related
I'm building my first event-sourced system. It will have multiple domains using projects, with a publication lifecycle at its core. How can I effectively replay or re-apply events of two domains to a new aggregate inside a third domain?
To be more specific, imagine 4 domains, each with its own bounded context and purpose. A short description of these contexts:
Project - A project is a complex object at the core of the system, almost every domain requires project data to operate. A project has one or more ProductTypes which contain the limited supply of Products.
Media - The media domain covers operations around images, documents and generated reports and functions as a file server.
Delivery - Delivery allows for the configuration of which content channels to publish all publications to.
Publication - The publication domain handles the complex tasks of verifying if a project can be published to the requested status in its current state.
The states of publication follow the lifecycle: concept (not yet published) > announced (optional) > sale > sold-out (publication ended). In my description I focus on the announced status. Concept is not actually a thing for the publication domain, since a project is always in concept if publication does not know about it yet.
My first attempt was setting up a normal aggregate which handled the incoming event AnnouncementPublishedEvent. This requires a project to meet some basic requirements like 'it has a name', 'it has a description', 'it has at least one image' and so on. This means I need to validate this information before the event is applied and therefore I somehow need to supply a project instance in the command.
While doing this I suspected this method breaks the purpose of CQRS and I should be looking at the real data source: events. My next attempt was creating a Saga that starts on the AnnouncementPublicationRequestedEvent. This saga needs to review which events occurred around the given projectId and apply those to this new 'published project' projection in order to (at least) validate whether the request can be accepted.
I researched and experimented with tracking processors but could not find a good example of how this is done in version 4 of Axon. I also started reading several other questions on Stack Overflow that made me think I might need to reconsider my approach.
Unfortunately, the exact code cannot be shared as it's not open source, and even if I could share it, it's far from a working state. I can use example code to show what I'm trying to do.
@Saga
@ProcessingGroup("AnnouncementPublication")
public class AnnouncementPublicationSaga {

    private static int NUMBER_OF_ALLOWED_IMAGES;

    private PublicationId publicationId;
    private ProjectId projectId;
    private int numberOfImages = 0;
    //...other fields

    @StartSaga
    @SagaEventHandler(associationProperty = "projectId")
    public void handle(AnnouncementPublicationRequestedEvent event) {
        publicationId = generatePublicationId();
        //set parameters from event for saga to use
        projectId = event.getProjectId();
        targetPublicationStatus = event.getPublicationStatus();
        date = event.getDate();
        //initialize the 'publicated project' aggregate
        //start a replay of associated events for this @ProcessingGroup
    }
    ...
    @SagaEventHandler(associationProperty = "projectId")
    public void handle(ProjectCreatedEvent event) {
        //Verify the project exists and has a valid name
    }
    ...
    /* Assumption* on how AssociationResolver works: */
    @SagaEventHandler(associationResolver = MediaProjectAssociator.class)
    public void handle(ProjectImageAdded event) {
        numberOfImages += 1;
    }

    /* Assumption* on how AssociationResolver works: */
    @SagaEventHandler(associationResolver = MediaProjectAssociator.class)
    public void handle(ProjectImageRemoved event) {
        numberOfImages -= 1;
    }
    ...
    /* In my head this should trigger once all events have been replayed
       up to the PublicationRequestedEvent. Or maybe ...
    */
    @SagaEventHandler(associationProperty = "publicationId")
    public void handle(ValidationRequestCompleted event) {
        //ValidationResult result = ValidationResult.builder();
        ...
        if (numberOfImages > NUMBER_OF_ALLOWED_IMAGES) {
            //reason to trigger PublicationRequestDeniedEvent
            //update validationResult
        }
        ...
        if (validationResult.isAcceptable()) {
            //Trigger AnnouncementPublicationAcceptedEvent
        } else {
            //Trigger AnnouncementPublicationDeniedEvent
        }
    }
    ...
    @EndSaga
    @SagaEventHandler(associationProperty = "publicationId")
    public void handle(AnnouncementPublicationDeniedEvent event) {
        //do stuff to inform why the publication failed
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "publicationId")
    public void handle(AnnouncementPublicationAcceptedEvent event) {
        //do stuff to notify success to user
        //choice: delegate to delivery for actual sharing of data
        //        or delivery itself listens for these events
    }
}
*The associationResolver code is an assumption about its actual behaviour, as I'm not even close to that part yet. My media context uses a file id as aggregate identifier, as not every event is bound to a project. But all the media events this saga needs to replay will have a projectId field in them. Any feedback on this is welcome, but it's not my main problem right now.
In the end the result should be: a record of the publication or a record of the attempt and why it failed.
The record of the publication contains all data from project or media events that are relevant to a publication. This is mostly information that potential buyers need to make a decision.
For the purpose of this question I don't expect the above to be solved completely. I just want to know if I'm on the right track with thinking in events, if my approach of replaying relevant events is the right way to go, and if so, how this can be done in Axon 4.
From your problem description Martin, I assume you have several distinct Bounded Contexts. Following the definition of Bounded Context:
Explicitly define the context within which a model applies.
Explicitly set boundaries in terms of team organization,
usage within specific parts of the application,
and physical manifestations such as code bases and database schemas.
Keep the model strictly consistent within these bounds,
but don’t be distracted or confused by issues outside.
From this I'd like to emphasize that within a given Bounded Context, you speak the same language/API with any component.
Between contexts, you will however share very consciously, using dedicated context mappings, for example an anti-corruption layer, to ensure another domain doesn't enter your domain.
Having said the above, events are part of a specific Bounded Context.
Thus, using multiple streams of events from other contexts to recreate/replay an aggregate in another context should ideally be out of the question.
On top of this, in Axon an Aggregate can only ever be recreated based on events it has published itself.
To still arrive to a solution where a given application ingests events from other applications to re-hydrate an Aggregate, I would take the following steps:
Have a dedicated component (e.g. the anti-corruption layer) which translates the incoming events into a different form of message within your application.
If these events should result in the reconstruction of an Aggregate, you are required to translate the events into commands, as sketched below. The Aggregate infrastructure components in Axon are meant for the Command Model when talking about CQRS.
Said Aggregate would then handle the commands, perform some business logic and publish an event (or several) as a result.
From here on out, the Framework will deal with replaying all events for the given Aggregate, granted you follow Event Sourcing practices to update the Aggregate's state.
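A minimal sketch of such a translating component (the event, command and component names here are placeholders, not Axon-prescribed types):

@Component
public class ProjectEventTranslator {

    private final CommandGateway commandGateway;

    public ProjectEventTranslator(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    // Listens to the event published by the other context...
    @EventHandler
    public void on(ProjectCreatedEvent externalEvent) {
        // ...and translates it into a command expressed in this context's own language.
        commandGateway.send(new RegisterPublishableProjectCommand(
                externalEvent.getProjectId(), externalEvent.getName()));
    }
}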
Lastly, I'd like to point out that any specifics Axon provides around replaying, tied to the TrackingEventProcessor, are meant for Event Processing on the Query side of a CQRS application.
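For completeness, the snippet commonly used to trigger such a query-side replay in Axon 4 looks roughly like this (treat it as a sketch and check the reference guide for your exact version; "configuration" is the Axon Configuration and the processor name must match your processing group):

configuration.eventProcessingConfiguration()
             .eventProcessor("AnnouncementPublication", TrackingEventProcessor.class)
             .ifPresent(processor -> {
                 processor.shutDown();
                 processor.resetTokens(); // replay from the start of the stream
                 processor.start();
             });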
Hope this clarifies things for you Martin! If not, feel free to comment under this answer and I'll update my response accordingly.
I have a service that saves a tree-like structure to a database. Before persisting the tree, the tree gets validated, and during validation, a number of things can go wrong. The tree can have duplicate nodes, or a node can be missing an important field (such as its abbreviation, full name, or level).
In order to communicate to the service what went wrong, I'm using exceptions. When the validateTree() method encounters a problem, it throws the appropriate exception. The HttpService class then uses this exception to form the appropriate response (e.g. in response to an AJAX call).
public class HttpService {

    private Service service;
    private Logger logger;

    // ...

    public HttpServiceResponse saveTree(Node root) {
        try {
            service.saveTree(root);
            return HttpServiceResponse.success(); // assuming some success factory exists
        } catch (DuplicateNodeException e) {
            return HttpServiceResponse.failure(DUPLICATE_NODE);
        } catch (MissingAbbreviationException e) {
            return HttpServiceResponse.failure(MISSING_ABBREV);
        } catch (MissingNameException e) {
            return HttpServiceResponse.failure(MISSING_NAME);
        } catch (MissingLevelException e) {
            return HttpServiceResponse.failure(MISSING_LEVEL);
        } catch (Exception e) {
            logger.log(e.getMessage(), Logger.ERROR);
            return HttpServiceResponse.failure(INTERNAL_SERVER_ERROR);
        }
    }
}
public class Service {

    private TreeDao dao;

    public void saveTree(Node root)
            throws DuplicateNodeException, MissingAbbreviationException, MissingNameException, MissingLevelException {
        validateTree(root);
        dao.saveTree(root);
    }

    private void validateTree(Node root)
            throws DuplicateNodeException, MissingAbbreviationException, MissingNameException, MissingLevelException {
        // validate and throw checked exceptions if needed
    }
}
I want to know, is this a good use of exceptions? Essentially, I'm using them to convey error messages. An alternative would be for my saveTree() method to return an integer, and that integer would convey the error. But in order to do this, I would have to document what each return value means. That seems to be more in the style of C/C++ than Java. Is my current use of exceptions a good practice in Java? If not, what's the best alternative?
No, exceptions aren't a good fit for the validation you need to do here. You will likely want to display multiple validation error messages, so that the user can see all the validation errors at once, and throwing a separate exception for each invalid input won't allow that.
Instead create a list and put errors in it. Then you can show the user the list of all the validation errors.
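A sketch of what that could look like in your Service (the traversal helpers here are illustrative):

public List<String> validateTree(Node root) {
    List<String> errors = new ArrayList<>();
    if (hasDuplicateNodes(root)) {
        errors.add("Tree contains duplicate nodes");
    }
    for (Node node : flatten(root)) {
        if (node.getAbbreviation() == null) errors.add("Node " + node.getId() + " is missing an abbreviation");
        if (node.getName() == null) errors.add("Node " + node.getId() + " is missing a name");
        if (node.getLevel() == null) errors.add("Node " + node.getId() + " is missing a level");
    }
    return errors; // an empty list means the tree is valid
}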
Waiting until your request has gotten all the way to the DAO seems like the wrong time to do this validation. A server-side front controller should be doing validation on these items before they get passed along any farther, as protection against attacks such as injection or cross-site scripting.
TL;DR The Java-side parts you showed us are nearly perfect. But you could add an independent validation check and use that from the client side before trying to save.
There are many software layers involved, so let's have a look at each of them - there's no "one size fits all" answer here.
For the Service object, it's the perfect solution to have it throw exceptions from the saveTree() method if it wasn't able to save the tree (for whatever reason, not limited to validation). That's what exceptions are meant for: to communicate that some method couldn't do its job. And the Service object shouldn't rely on some external validation, but make sure itself that only valid data are saved.
The HttpService.saveTree() should also communicate to its caller if it couldn't save the tree (typically indicated by an exception from the Service). But as it's an HTTP service, it can't throw exceptions; it has to return a result code plus a text message, just the way you do it. This can never contain the full information from the Java exception, so it's a good decision that you log any unclear errors here (but you should make sure that the stack trace gets logged too!) before you pass an error result to the HTTP client.
The web client UI software should of course present detailed error lists to the user and not just a translated single exception. So, I'd create an HttpService.validateTree(...) method that returns a list of validation errors and call that from the client before trying to save. This gives you the additional possibility to check for validity independent of saving.
Why do it this way?
You never have control what happens in the client, inside some browser, you don't even know whether the request is coming from your app or from something like curl. So you can't rely on any validation that your JavaScript (?) application might implement. All of your service methods should reject invalid data, by doing the validation themselves.
Implementing the validation checks in a JavaScript client application still needs the same validation inside the Java service (see above), so you'd have to maintain two pieces of code in different languages doing exactly the same business logic - don't repeat yourself! Only if the additional round trip isn't tolerable would I regard that an acceptable solution.
Error messages should be "Visible and highly noticeable, both in terms of the message itself and how it indicates which dialogue element users must repair." (from usability guru Jakob Nielsen, https://www.nngroup.com/articles/error-message-guidelines/)
I would like to use placeholders in a feature file, like this:
Feature: Talk to two servers

  Scenario: Forward data from Server A to Server B
    Given MongoDB collection "${db1}/foo" contains the following record:
      """
      {"key": "value"}
      """
    When I send GET "${server1}/data"
    And I forward the response to PUT "${server2}/data"
    Then MongoDB collection "${db2}/bar" MUST contain the following record:
      """
      {"key": "value"}
      """
The values of ${server1} etc. would depend on the environment in which the test is to be executed (dev, uat, stage, or prod). Therefore, Scenario Outlines are not applicable in this situation.
Is there any standard way of doing this? Ideally there would be something which maintains a Map<String, String> that can be filled in a @Before hook or similar, and that runs automatically between Cucumber and the step definition, so that no code is needed inside the step definitions.
Given the following step definitions
public class MyStepdefs {

    @When("^I send GET \"(.*)\"$")
    public void performGET(final String url) {
        // …
    }
}
And an appropriate setup, when performGET() is called, the placeholder ${server1} in the String url should already have been replaced with a value looked up in a Map.
Is there a standard way or feature of Cucumber-Java for doing this? I do not mind if this involves dependency injection. If dependency injection is involved, I would prefer Spring, as Spring is already in use for other reasons in my use case.
The simple answer is that you can't.
The solution to your problem is to remove the incidental details from your scenario altogether and access the specific server information in the step definitions.
The server and database obviously belong together, so let's describe them as a single entity, a service.
The details about the REST calls don't really help to convey what you're actually doing. Features don't describe implementation details, they describe behavior.
Testing whether records have been inserted into the database is another bad practice, and again doesn't describe behavior. You should be able to replace that with another API call that fetches the data, or some other process that proves the other server has received the information. If no such means to extract the data are available, you should create them. If they can't be created, you can wonder whether the information even needs to be stored (your service would then appear to have the same properties as a black hole :) ).
I would resolve this all by rewriting the story such that:
Feature: Talk to two services

  Scenario: Forward foobar data from Service A to Service B
    Given "Service A" has key-value information
    When I forward the foobar data from "Service A" to "Service B"
    Then "Service B" has received the key-value information
Now that we have two entities, Service A and Service B, you can create a ServiceInformationService to look up information about them, and inject this ServiceInformationService into your step definitions.
So whenever you need some information about Service A, you do

Service a = serviceInformationService.lookup("A");
String apiHost = a.getApiHost();
String dbHost = a.getDatabaseHost();
In the implementation of the service you look up the property for that service, e.g. System.getProperty(serviceName + "_" + apiHostKey), and you make sure that your CI sets A_APIHOST, A_DBHOST, B_APIHOST, B_DBHOST, etc.
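A minimal sketch of that lookup (the property keys and the Service value object are illustrative):

public class ServiceInformationService {

    public Service lookup(String serviceName) {
        String apiHost = System.getProperty(serviceName + "_APIHOST");
        String dbHost = System.getProperty(serviceName + "_DBHOST");
        return new Service(serviceName, apiHost, dbHost);
    }
}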
You can put the names of the collections in a property file that you look up in a similar way to the system properties. Though I would avoid direct interaction with the DB if possible.
The feature you are looking for is supported in Gherkin with QAF. It supports using properties defined in a properties file via ${prop.key}. In addition, it offers strong resource-configuration features to work with different environments, and it also supports web services.
I have been wrestling with this problem for a while. I would like to use the same Stripes ActionBean for show and update actions. However, I have not been able to figure out how to do this in a clean way that allows reliable binding, validation, and verification of object ownership by the current user.
For example, let's say our action bean takes a postingId. The posting belongs to a user, who is logged in. We might have something like this:
@UrlBinding("/posting/{postingId}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
Now, for the show action, we could define:
private int postingId; // assume the parameter in @UrlBinding above was renamed
private Posting posting;
And now use @After(stages = LifecycleStage.BindingAndValidation) to fetch the Posting. Our @After function can verify that the currently logged-in user owns the posting. We must use @After, not @Before, because the postingId won't have been bound to the parameter beforehand.
However, for an update action, you want to bind the Posting object to the Posting variable using @Before, not @After, so that the submitted form entries get applied on top of the existing Posting object, instead of onto an empty stub.
A custom TypeConverter<T> would work well here, but because the session isn't available from the TypeConverter interface, it's difficult to validate ownership of the object during binding.
The only solution I can see is to use two separate action beans, one for show and one for update. If you do this, however, the <stripes:form> tag and its downstream tags won't correctly populate the values of the form, because the beanclass or action tags must map back to the same ActionBean.
As far as I can see, the Stripes model only holds together when manipulating simple (non-POJO) parameters. In any other case, you seem to run into a catch-22 of binding your object from your data store and overwriting it with updates sent from the client.
I've got to be missing something. What is the best practice from experienced Stripes users?
In my opinion, authorisation is orthogonal to object hydration. By this, I mean that you should separate the concern of object hydration (in this case, using a postingId and turning it into a Posting) from determining whether a user has authorisation to perform operations on that object (like show, update, delete, etc.).
For object hydration, I use a TypeConverter<T>, and I hydrate the object without regard to the session user. Then inside my ActionBean I have a guard around the setter, thus...
public void setPosting(Posting posting) {
    if (accessible(posting)) this.posting = posting;
}
where accessible(posting) looks something like this...
private boolean accessible(Posting posting) {
    return authorisationChecker.isAuthorised(whoAmI(), posting);
}
Then your show() event method would look like this...
public Resolution show() {
    if (posting == null) return NOT_FOUND;
    return new ForwardResolution("/WEB-INF/jsp/posting.jsp");
}
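For reference, the hydration-only converter mentioned above could look roughly like this (a sketch: check the exact TypeConverter signature of your Stripes version, and PostingDao is an illustrative dependency):

public class PostingTypeConverter implements TypeConverter<Posting> {

    private PostingDao postingDao; // obtained however you wire your dependencies

    public void setLocale(Locale locale) {
        // the locale is not needed for an id lookup
    }

    public Posting convert(String input, Class<? extends Posting> targetType,
                           Collection<ValidationError> errors) {
        Posting posting = postingDao.findById(Long.valueOf(input));
        if (posting == null) {
            errors.add(new SimpleError("Posting {0} does not exist", input));
        }
        return posting; // no authorisation here; the guarded setter takes care of that
    }
}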
Separately, when I use Stripes I often have multiple events (like "show", or "update") within the same Stripes ActionBean. For me it makes sense to group operations (verbs) around a related noun.
Using clean URLs, your ActionBean annotations would look like this...
@UrlBinding("/posting/{$event}/{posting}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
...where {$event} is the name of your event method (i.e. "show" or "update"). Note that I am using {posting}, and not {postingId}.
For completeness, here is what your update() event method might look like...
public Resolution update() {
    if (posting == null) throw new UnauthorisedAccessException();
    postingService.saveOrUpdate(posting);
    message("posting.save.confirmation");
    return new RedirectResolution(PostingsAction.class);
}
To be specific, let me illustrate the question with a Spring HTTP remoting example.
Suppose we have such implementation of a simple interface:
public class SearchServiceImpl implements SearchService {

    public SearchJdo processSearch(SearchJdo search) {
        search.name = "a funky name";
        return search;
    }
}
SearchJdo is itself a simple POJO.
Now when we call the method from a client through HTTP remoting (Spring's mechanism for calling remote objects, much like EJB, which uses serialization), we'll get:
public class HTTPClient {

    public static void main(final String[] arguments) {
        final ApplicationContext context = new ClassPathXmlApplicationContext(
                "spring-http-client-config.xml");
        final SearchService searchService =
                (SearchService) context.getBean("searchService");

        SearchJdo search = new SearchJdo();
        search.name = "myName";

        // this method actually returns the same object it gets as an argument
        SearchJdo search2 = searchService.processSearch(search);

        System.out.println(search == search2); // prints "false"
    }
}
The problem is that the search objects are different because of serialization, although from a logical perspective they are the same.
The question is whether there is some technique that allows supporting or emulating object identity across VMs.
You said it - object identity is different from logical equality.
object identity is compared with ==
logical equality is compared with .equals(..)
So override the equals() method and all will be fine. Remember to override hashCode() based on the same field(s) as well. Use your IDE to generate these 2 methods for you.
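For example, if SearchJdo carries some business identifier (the id field below is hypothetical), the generated methods would look roughly like this:

public class SearchJdo implements Serializable {

    private Long id;
    public String name;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SearchJdo)) return false;
        SearchJdo other = (SearchJdo) o;
        return id != null && id.equals(other.id);
    }

    @Override
    public int hashCode() {
        return id == null ? 0 : id.hashCode();
    }
}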
(Terracotta VM clustering allows sharing objects between VMs, but that doesn't fit your case.)
IMHO attempting to preserve object identity equality across VMs is a losing proposition.
To the best of my knowledge the language specification does not require a VM to support that, so you would be limited in where you could pull it off if you truly want to be portable.
May I ask why you don't just use some unique ID that you supply yourself? Java GUIDs, while expensive, are serializable.
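A sketch of that idea: give the object its own serializable identifier at creation time and compare on it (or base equals() on it, as in the previous answer):

public class SearchJdo implements Serializable {

    private final UUID id = UUID.randomUUID(); // generated once, survives serialization

    public UUID getId() {
        return id;
    }
}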
I did this once, but I'm not quite sure if it is the right approach:
Every user had a username, a session id, roles, and a login date attached to a user object. Every time a user logged into a VM, the system would load a User object into memory, and I would also return the user object to the application.
If I needed to execute an action on the application server, I would send the user object as an argument. If the VM had a User loaded with the same session ID, it would use the object stored in the VM to determine the assigned roles. Otherwise, the application would be capable of changing the roles on the user, and that wouldn't be secure.
If the application had to switch to another application server, it would send the user object to the new server, and the new server wouldn't find the user within its records.
HERE IS THE SECRET: The session ID is created hashing the username, the login date and a secret password shared among all of the servers.
Once the new server verifies that the session ID is coherent, it loads the roles from the database as a reliable source of information.
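A sketch of how such a session id could be computed (a hypothetical helper; an HMAC with a strong shared secret is preferable to a bare hash):

import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.time.Instant;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class SessionIds {

    // Any server that knows the shared secret can recompute this value and compare it
    // with the session id carried by the incoming User object.
    public static String sessionIdFor(String username, Instant loginDate, String sharedSecret)
            throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] digest = mac.doFinal((username + "|" + loginDate).getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }
}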
Sorry I couldn't write this up better, but I hope it helps someone.