I use the Achilles library to work with a Cassandra database. The problem is that when I create an entity method that affects fields, Achilles does not "see" these changes. See the example below.
import java.util.UUID;

import info.archinnov.achilles.persistence.PersistenceManager;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class AhilesTest {

    private static final UUID ID = UUID.fromString("083099f6-e423-498d-b810-d6c564228724");

    // This is the Achilles persistence manager
    @Autowired
    private PersistenceManager persistenceManager;

    public void test() {
        // user creation and persistence
        User toInsert = new User();
        toInsert.setId(ID);
        toInsert.setName("name");
        toInsert.setVersion(0L);
        persistenceManager.insert(toInsert);

        // find user
        User user = persistenceManager.find(User.class, ID);
        user.changeName("newName");
        persistenceManager.update(user);

        User updatedUser = persistenceManager.find(User.class, ID);
        // here the old "name" value is returned
        updatedUser.getName();
    }

    public class User {

        private UUID id;
        private String name;
        private long version;

        public void changeName(String newName) {
            this.name = newName;
            this.version++;
        }

        // getters and setters are omitted
    }
}
user.changeName("newName"); do not affect entity and "old" values are persisted. For my opinion (I have seen debug call stack) this happens because actual User entity is wrapper with Achilles proxy which react to gettter/setter calls. Also when I replace changeName: call to direct getter/setter invocation - user.setName("newName"); user.setVersion(user.getVersion()+1); updating became work.
So why it is happens and is there a way to configure Achilles to react of non getter/setter methods calls?
You have to use the setter methods explicitly.
According to the documentation, it intercepts the setter methods only.
"As a consequence of this design, internal calls inside an entity cannot be intercepted
and will escape dirty check mechanism. It is thus recommended to change state of the
entities using setters"
It is probably a design choice in Achilles, and I suggest you raise it as an issue on the project's issues page, so it may receive some attention from the author.
Before doing any actions with the user you should get a user proxy from info.archinnov.achilles.persistence.PersistenceManager, and only after that use setters/getters to modify the 'user' entity.
User user = persistenceManager.getProxy(User.class, UUID.fromString(id));
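For comparison, a minimal sketch of the setter-based update that the question reports as working, on an entity loaded with find (explicit setter calls go through the proxy, so Achilles detects the changes):

User user = persistenceManager.find(User.class, ID);
// explicit setter calls are intercepted by the Achilles proxy and flagged as dirty
user.setName("newName");
user.setVersion(user.getVersion() + 1);
persistenceManager.update(user);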
I'm looking to keep a record of different fields of an entity for use in some business logic, which can easily be done using the @PreUpdate annotation. However, this is only triggered when fields that are directly on the entity are changed, not when related entities change. Here is an example User entity class:
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity
public class User {

    @Transient
    private User loadState;

    @Column
    private String phoneNumber;

    @Column
    private LocalDate phoneNumberChanged;

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Address> addresses;

    @Column
    private LocalDate addressesChanged;

    @PreUpdate
    public void updateRecords() {
        keepLastPhoneNumber(); // save date that phone number was changed
        keepLastAddresses();   // save date that addresses were changed
    }

    @PostLoad
    private void storeState() {
        loadState = new User();
        loadState.setPhoneNumber(this.getPhoneNumber());
        loadState.setAddresses(new ArrayList<>(this.getAddresses()));
    }

    private void keepLastPhoneNumber() {
        if (!loadState.getPhoneNumber().equals(this.getPhoneNumber())) {
            this.setPhoneNumberChanged(LocalDate.now());
        }
    }

    private void keepLastAddresses() {
        if (!loadState.getAddresses().equals(this.getAddresses())) {
            this.setAddressesChanged(LocalDate.now());
        }
    }
}
So here when phoneNumber is changed the updateRecords() function is run, but when the addresses field is changed it is not. Is there a way to run that function when a change is done on any fields of User, including the related fields? If there is a way beyond the typical JPA annotations I would love to learn about it.
EDIT: below are a bit more details; I have also added more fields to the entity above to make it clearer.
So if I were to change only the phoneNumber field, the updateRecords() function would automatically run because of the @PreUpdate annotation, and would then update the phoneNumberChanged field, because the logic has checked that phoneNumber is indeed different from the one in loadState.
If I were to change both the phoneNumber and addresses fields, then the updateRecords() function would also automatically run, see that both addresses and phoneNumber were changed, and alter the fields accordingly.
However, if I change only addresses, @PreUpdate is not triggered, so the addressesChanged field is not updated even though the addresses list has changed. I am looking for a way to detect when changes are made to any field of the User entity, so that the function can be called and run properly.
Let's say I have a domain Java class representing a person:
class Person {

    private final String id; // government id
    private String name;
    private String status;

    private Person(String id, String name) {
        this.id = id;
        this.name = name;
        this.status = "NEW";
    }

    static Person createNew(String id, String name) {
        return new Person(id, name);
    }

    void validate() {
        // logic
        this.status = "VALID";
    }

    public static final class Builder {

        private String id;
        private String name;
        private String status;

        private Builder() {
        }

        public static Builder aPerson() {
            return new Builder();
        }

        public Builder id(String id) {
            this.id = id;
            return this;
        }

        public Builder name(String name) {
            this.name = name;
            return this;
        }

        public Builder status(String status) {
            this.status = status;
            return this;
        }

        public Person build() {
            Person person = new Person(id, name);
            person.status = this.status;
            return person;
        }
    }
}
I store this domain class object in a database via a regular class with the same fields plus getters and setters. Currently, when I want to store an object, I create a new PersonDocument (the data is stored in Mongo), use getters and setters, and save it. It gets complicated when I want to fetch it from the DB. I would like my domain object to expose only what is necessary; for the business logic that is currently only creation and validation. Simply:
Person p = Person.createNew("1234", "John");
p.validate();
repository.save(p);
The other way around it gets complicated as well: currently there is a builder which allows creation of the object in any state. We do believe that data stored in the DB is in a proper state, so the object can be created that way, but the downside is that there is a public API available, letting anyone do anything.
The initial idea was to use the MapStruct Java mapping library, but it uses setters to create objects, and exposing setters in the domain class (as far as I can tell) should be avoided.
Any suggestions on how to do this properly?
Your problem likely comes from two conflicting requirements:
You want to expose only business methods.
You want to expose data too, since you want to be able to implement serialization/deserialization external to the object.
One of those has to give. To be honest, most people faced with this problem ignore the first one and just introduce setters/getters. The alternative is of course to ignore the second one and move the serialization/deserialization into the object.
For example, you can introduce a method Document toDocument() into the object that produces the Mongo-compatible JSON document, and also a Person fromDocument(Document) to deserialize.
Most people don't like this sort of solution, because it "couples" the technology to the object. Is that a good or bad thing? It depends on your use-case. Which one do you want to optimize for: changing business logic or changing technologies? If you're not planning to change technologies very often and don't plan on using the same class in a completely different application, there's no reason to separate technology.
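A minimal sketch of that idea against the question's Person class, using the driver's org.bson.Document (the static fromDocument factory is an assumption, mirroring the existing createNew method):

import org.bson.Document;

class Person {

    // ... fields, constructor and behaviour as in the question ...

    // Serialize the current state into a Mongo document.
    Document toDocument() {
        return new Document("_id", id)
                .append("name", name)
                .append("status", status);
    }

    // Rebuild a Person from a stored document without exposing setters.
    static Person fromDocument(Document document) {
        Person person = new Person(document.getString("_id"), document.getString("name"));
        person.status = document.getString("status");
        return person;
    }
}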
Robert Bräutigam's sentence is good:
Two conflicting requirements
But there is another sentence, by Alan Kay, that is better:
“I’m sorry that I long ago coined the term “objects” for this topic
because it gets many people to focus on the lesser idea. The big idea
is messaging.” ~ Alan Kay
So, instead of dealing with the conflict, let's just change the approach to avoid it. The best way I have found is to take a functional approach and avoid unnecessary state and mutations in classes by expressing the domain changes as events.
Instead of mapping classes (aggregates, V.O.'s and/or entities) to persistence, I do this:
Build an aggregate with the data needed (V.O.'s and entities) to apply the aggregate rules and invariants for a given action. This data comes from persistence. The aggregate does not expose getters or setters; just actions.
Call the aggregate's action with the command data as a parameter. This will call inner entities' actions in case the overall rules need it. This allows responsibility segregation and decoupling, as the Aggregate Root does not have to know how its inner entities are implemented (Tell, don't ask).
Actions (in Aggregate Roots and inner entities) do not modify their inner state; they instead return events expressing the domain change. The aggregate's main action coordinates and checks the events returned by its inner entities to apply rules and invariants (the aggregate has the "big picture") and builds the final Domain Event that is the output of the main action call.
Your persistence layer has an apply method for every Domain Event it has to handle (Persistence.Apply(event)). This way your persistence knows what has happened and, as long as the event has all the data needed to persist the change, can apply the change (even with behaviour if needed!).
Publish your Domain Event. Let the rest of your system know that something has just happened.
Check this post (it is worth checking the whole DDD series on this blog) to see a similar implementation.
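A minimal sketch of that flow, with hypothetical PersonRenamed and PersonPersistence types (the names are illustrative, not taken from any particular library):

// Domain event expressing what changed.
final class PersonRenamed {
    final String personId;
    final String newName;

    PersonRenamed(String personId, String newName) {
        this.personId = personId;
        this.newName = newName;
    }
}

// The aggregate exposes actions only; an action validates and returns an event instead of silently mutating state.
final class Person {
    private final String id;
    private final String name;

    Person(String id, String name) {
        this.id = id;
        this.name = name;
    }

    PersonRenamed rename(String newName) {
        // aggregate rules and invariants would be checked here
        return new PersonRenamed(id, newName);
    }
}

// The persistence layer applies the event; it knows exactly what happened.
interface PersonPersistence {
    void apply(PersonRenamed event);
}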
I do it this way:
The person as a domain entity has status (in the sense of the entity fields that define the entity, not your "status" field) and behaviour (methods).
What is stored in the DB is just the status. So I create a "PersonStatus" interface in the domain (with getter methods for the fields that we need to persist), so that PersonRepository deals with the status.
The Person entity implements PersonStatus (or, instead of this, you can add a static method that returns the status).
In the infrastructure I have a PersonDB class implementing PersonStatus too, which is the persistence model.
So:
DOMAIN MODEL:
// ENTITY
public class Person implements PersonStatus {

    // Fields that define status
    private String id;
    private String name;
    ...

    // Constructors and behaviour
    ...
    ...

    // Methods implementing PersonStatus
    @Override
    public String id() {
        return this.id;
    }

    @Override
    public String name() {
        return this.name;
    }

    ...
}

// STATUS OF ENTITY
public interface PersonStatus {

    public String id();
    public String name();
    ...
}

// REPOSITORY
public interface PersonRepository {

    public void add ( PersonStatus personStatus );
    public PersonStatus personOfId ( String anId );
}
INFRASTRUCTURE:
public class PersonDB implements PersonStatus {

    private String id;
    private String name;
    ...

    public PersonDB ( String anId, String aName, ... ) {
        this.id = anId;
        this.name = aName;
        ...
    }

    @Override
    public String id() {
        return this.id;
    }

    @Override
    public String name() {
        return this.name;
    }

    ...
}
// AN IN-MEMORY REPOSITORY IMPLEMENTATION
public class InmemoryPersonRepository implements PersonRepository {

    private Map<String,PersonDB> inmemoryDb;

    public InmemoryPersonRepository() {
        this.inmemoryDb = new HashMap<String,PersonDB>();
    }

    @Override
    public void add ( PersonStatus personStatus ) {
        PersonDB personDB = new PersonDB ( personStatus.id(), personStatus.name(), ... );
        this.inmemoryDb.put ( personDB.id(), personDB );
    }

    @Override
    public PersonStatus personOfId ( String anId ) {
        return this.inmemoryDb.get ( anId );
    }
}
APPLICATION LAYER:
...
Person person = new Person ( "1", "John Doe", ... );
personRepository.add ( person );
...
PersonStatus personStatus = personRepository.personOfId ( "1" );
Person person = new Person ( personStatus.id(), personStatus.name(), ... );
...
It basically boils down to two options, depending on how much extra work you are willing to put into the necessary infrastructure and how constraining your ORM/persistence is.
Use CQRS+ES pattern
The most obvious choice, used in bigger and more complex domains, is the CQRS (Command/Query Responsibility Segregation) plus Event Sourcing pattern. This means that each mutating action generates an event, which is persisted.
When your aggregate is loaded, all the events will be loaded from the database and applied in chronological order. Once applied, your aggregate will have its current state.
CQRS just means that you separate read and write operations. Write operations happen in the aggregate by creating events (by applying commands), which are stored/read via Event Sourcing.
The "Query" side consists of queries on projected data, which uses the events to create a current state of the object that is used for querying and reading only. Aggregates are still read by reapplying all the events from the event sourcing storage.
Pros
You have a history of all the changes that were made to the aggregate. This can be seen as added value for the business and for auditing.
If your projected database is corrupted or in an invalid state, you can restore it by replaying all the events and generating the projection anew.
It's easy to revert to a previous state in time (e.g. by applying compensating events that do the opposite of what a previous event did).
It's easy to fix a bug (e.g. in calculating the state of the aggregate) and then replay all the events to get the new, corrected value.
Assume you have a BankingAccount aggregate that calculates the balance, and you used regular rounding instead of "round-to-even". You can fix the calculation, then reapply all the events, and you get the new, correct account balance.
Cons
Aggregates with thousands of events can take some time to materialize (the Snapshot/Memento pattern can be used here to load a snapshot and apply only the events after that snapshot)
Initially more time to implement the necessary infrastructure
You can't query event-sourced aggregates without a read store; this requires a projection and a message queue to publish the event sourcing events, so they can be processed and applied to a projection (SQL or document table) which can be used for queries
Map directly to Domain Entities
Some ORM and document database providers allow you to map directly to backing fields, e.g. via reflection.
In the MongoDB C# driver it can be done via something like the approach in the linked answer.
The same applies to the EF Core ORM. I'm sure there's something similar in the Java world too.
This may limit your choice of persistence library and technology, since it requires one which supports such APIs via fluent or code configuration. You can't use attributes/annotations for this, because these are usually database specific and would leak persistence knowledge into your domain.
It MAY also limit your ability to use the strongly typed querying API (LINQ in C#, Streams in Java), because that generally requires getters and setters, so you may have to use magic strings (with the names of the fields or properties in the storage) in the persistence layer.
It may be acceptable for smaller/less complex domains. But CQRS+ES should always be preferred, if possible and within budget/timeline, since it is the most flexible and works with all persistence storages and frameworks (even key-value stores).
Pros
No need to build more complex infrastructure (CQRS, ES, pub/sub messaging/queues)
No leaking of persistence knowledge into your models and no need to break encapsulation
Cons
No history of changes
No way to restore a previous state
May require magic strings when querying in the persistence layer (depends on framework/orm)
Can require a lot of fluent/code configuration in the persistence layer, to map it to the backing field
May break when you rename the backing field
The problem is that one day we discovered that if we save an object via a Spring Boot repository, other objects that were changed in the same method are also updated and persisted in the database.
I am really curious to find out why this actually happens. I created a sample project using Spring Initializr and some template code to show the actual situation (I tried to keep the number of dependencies as low as possible).
I am using Spring Boot version 1.5.11 (SNAPSHOT) and the project has the following dependencies:
dependencies {
compile('org.springframework.boot:spring-boot-starter-data-jpa')
compile('org.springframework.boot:spring-boot-starter-web')
compile('org.mariadb.jdbc:mariadb-java-client:2.1.0')
testCompile('org.springframework.boot:spring-boot-starter-test')
}
Now to the point:
Project has two entities, Pet:
@Entity
@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id", scope = Pet.class)
public class Pet {

    @Id
    @GeneratedValue
    private long id;

    private String type;

    public Pet() {}

    public String getType() { return type; }
    public void setType(String type) { this.type = type; }
}
and User:
@Entity
@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id", scope = User.class)
public class User {

    @Id
    @GeneratedValue
    private long id;

    private String name;

    public User() {}

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
Both entities also have repositories, Pet:
@Repository
public interface PetRepository extends CrudRepository<Pet, Long> {
    Pet findPetById(Long id);
}
User:
@Repository
public interface UserRepository extends CrudRepository<User, Long> {
    User findUserById(Long id);
}
And one simple service where the magic actually happens (I have pre-saved one Pet and one User object, with a different name and type):
@Service
public class UserService {

    @Autowired
    UserRepository userRepository;

    @Autowired
    PetRepository petRepository;

    public User changeUserAndPet() {
        User user = userRepository.findUserById(1L);
        Pet pet = petRepository.findPetById(1L);

        user.setName("Kevin");
        pet.setType("Cow");

        userRepository.save(user);
        return user;
    }
}
Right after calling userRepository.save(user), the Pet object is also updated in the database with the new type 'Cow'. Why exactly does this happen if I only saved the User object? Is it intended to work like this?
There's also one simple controller with a simple test endpoint to call the service method, which most likely is not important to the question, but I'll still add it here for the sake of completeness.
@RestController
public class UserController {

    @Autowired
    UserService userService;

    @RequestMapping(value = "/test", method = RequestMethod.GET)
    public User changeUserAndPet() {
        return userService.changeUserAndPet();
    }
}
Any explanations / tips are appreciated, and feel free to ask for extra information / code on GitHub.
The Spring Data repository is a wrapper around the JPA EntityManager. When an entity is loaded, you get the instance, but a copy of the object is stored inside the EntityManager. When your transaction commits, the EntityManager iterates all managed entities, and compares them to the version it returned to your code. If you have made any changes to your version, JPA calculates which updates should be performed in the database to reflect your changes.
Unless you know JPA quite well, it can be tricky to predict when calls are propagated to the database, since flush() is called internally. For instance, every time you run a query, JPA performs a pre-query flush, because any pending inserts must be sent to the database, or the query would not find them.
If you had defined a transaction using @Transactional on your method, then the pet would be updated even if the user was not saved. When you don't have a transaction, the call to save must trigger the EntityManager to propagate your update to the database. It's a bit of a mystery to me why this happens. I know that Spring creates the EntityManager inside OpenEntityManagerInViewInterceptor before the controller is called, but since the transaction is not explicit, it must be created implicitly, and there could potentially be multiple transactions.
I always encourage developers to use explicit transactions in Spring, and to qualify them with readOnly = true when appropriate.
That's how JPA and the EntityManager work. If you look up an entity through the repository, it is attached to the EntityManager as a managed entity. Any changes that you make to that object are picked up when a flush is executed by the EntityManager. In fact, you wouldn't even need to call the save method on the repository in your case.
You can find more information about the lifecycle of JPA entities e.g. here: https://dzone.com/articles/jpa-entity-lifecycle
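A minimal sketch of what both answers describe, assuming the service method from the question is wrapped in an explicit transaction; both loaded entities are managed, so dirty checking flushes both changes at commit without any explicit save call:

@Service
public class UserService {

    @Autowired
    UserRepository userRepository;

    @Autowired
    PetRepository petRepository;

    @Transactional
    public User changeUserAndPet() {
        // Both entities become managed by the EntityManager when loaded.
        User user = userRepository.findUserById(1L);
        Pet pet = petRepository.findPetById(1L);

        user.setName("Kevin");
        pet.setType("Cow");

        // No save() needed: at commit, dirty checking issues an UPDATE for both entities.
        return user;
    }
}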
So I have looked at various tutorials about JPA with Spring Data, and this has been done differently on many occasions, so I am not quite sure what the correct approach is.
Assume there is the following entity:
package stackoverflowTest.dao;

import javax.persistence.*;

@Entity
@Table(name = "customers")
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private long id;

    @Column(name = "name")
    private String name;

    public Customer(String name) {
        this.name = name;
    }

    public Customer() {
    }

    public long getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
We also have a DTO which is retrieved in the service layer and then handed to the controller/client side.
package stackoverflowTest.dto;
public class CustomerDto {
private long id;
private String name;
public CustomerDto(long id, String name) {
this.id = id;
this.name = name;
}
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
So now assume the Customer wants to change his name in the webui - then there will be some controller action, where there will be the updated DTO with the old ID and the new name.
Now I have to save this updated DTO to the database.
Unfortunately, there is currently no way to update an existing customer (other than deleting the entry in the DB and creating a new Customer with a new auto-generated id).
However, this is not feasible (especially considering such an entity could potentially have hundreds of relations), so two straightforward solutions come to my mind:
Make a setter for the id in the Customer class, thus allowing the id to be set, and then save the Customer object via the corresponding repository.
or
Add the id field to the constructor, and whenever you want to update a customer, always create a new object with the old id but the new values for the other fields (in this case only the name).
So my question is whether there is a general rule on how to do this?
And maybe what the drawbacks of the two methods I described are?
Even better than @Tanjim Rahman's answer: using Spring Data JPA you can use the method T getOne(ID id).
Customer customerToUpdate = customerRepository.getOne(id);
customerToUpdate.setName(customerDto.getName());
customerRepository.save(customerToUpdate);
It's better because getOne(ID id) gets you only a reference (proxy) object and does not fetch it from the DB. On this reference you can set what you want, and on save() it will do just an SQL UPDATE statement like you expect. In comparison, when you call find() as in @Tanjim Rahman's answer, Spring Data JPA will do an SQL SELECT to physically fetch the entity from the DB, which you don't need when you are just updating.
In Spring Data you can simply define an update query if you have the ID:

@Repository
public interface CustomerRepository extends JpaRepository<Customer, Long> {

    @Modifying // required by Spring Data JPA for update/delete queries
    @Query("update Customer c set c.name = :name WHERE c.id = :customerId")
    void setCustomerName(@Param("customerId") Long id, @Param("name") String name);
}
Some solutions claim to use Spring Data and then do old-school JPA anyway (even in a manner that allows lost updates) instead.
A simple JPA update:

Customer customer = em.find(Customer.class, id); // consider em to be the JPA EntityManager
customer.setName(customerDto.getName());
em.merge(customer);
This is more an object initialization question than a JPA question. Both methods work, and you can even have both at the same time. Usually, if the data member's value is ready before instantiation, you pass it as a constructor parameter; if the value can be updated after instantiation, you should have a setter.
If you need to work with DTOs rather than entities directly then you should retrieve the existing Customer instance and map the updated fields from the DTO to that.
Customer entity = //load from DB
//map fields from DTO to entity
So now assume the Customer wants to change his name in the webui -
then there will be some controller action, where there will be the
updated DTO with the old ID and the new name.
Normally, you have the following workflow:
The user requests his data from the server and obtains it in the UI;
The user corrects his data and sends it back to the server with the already present ID;
On the server you obtain the DTO with the data updated by the user, find the corresponding entity in the DB by ID (otherwise throw an exception) and transform DTO -> Entity with all the given data, foreign keys, etc.;
Then you just merge it, or, if using Spring Data, invoke save(), which in turn will merge it (see this thread);
P.S. This operation will inevitably issue 2 queries: a select and an update. Again, 2 queries, even if you want to update a single field. However, if you use Hibernate's proprietary @DynamicUpdate annotation on top of the entity class, it will help you not to include all the fields in the update statement, but only those that actually changed.
P.P.S. If you do not want to pay for the first select statement and prefer to use Spring Data's @Modifying query, be prepared to lose the L2C cache region related to the modified entity; the situation is even worse with native update queries (see this thread), and of course be prepared to write those queries manually, test them and support them in the future.
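A minimal sketch of the @DynamicUpdate hint mentioned above (Hibernate-specific, shown here on the question's Customer entity):

import org.hibernate.annotations.DynamicUpdate;
import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table(name = "customers")
@DynamicUpdate // Hibernate generates UPDATE statements containing only the changed columns
public class Customer {
    // fields, getters and setters as in the question
}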
I have encountered this issue!
Luckily, I determined 2 ways and understand some things, but the rest is not clear. I hope someone can discuss it or add to it if you know more.
Use a repository that extends JpaRepository and call save(entity). Example:

Person person = this.personRepository.findById(0L).orElseThrow();
person.setName("Neo");
this.personRepository.save(person);

This block of code updates the name of the record which has id = 0.
Use @Transactional from javax or the Spring Framework. Put @Transactional on your class or on a specific method; both are OK. I read somewhere that this annotation performs a "commit" at the end of your method flow. So everything you modified on the entity will be updated in the database.
There is a method in JpaRepository
getOne
It is deprecated at the moment in favor of
getById
So the correct approach would be:

Customer customerToUpdate = customerRepository.getById(id);
customerToUpdate.setName(customerDto.getName());
customerRepository.save(customerToUpdate);
I worked out a concept to conditionally validate using JSR 303 groups. "Conditionally" means that I have some fields which are only relevant if another field has a specific value.
Example: There is an option to select whether to register as a person or as a company. When selecting company, the user has to fill a field containing the name of the company.
Now I thought I would use groups for that:
class RegisterForm
{
    public interface BasicCheck {}
    public interface UserCheck {}
    public interface CompanyCheck {}

    @NotNull(groups = BasicCheck.class)
    private Boolean isCompany;

    @NotNull(groups = UserCheck.class)
    private String firstName;

    @NotNull(groups = UserCheck.class)
    private String lastName;

    @NotNull(groups = CompanyCheck.class)
    private String companyName;

    // getters / setters ...
}
In my controller, I validate step by step depending on the respective selection:
@Autowired
SmartValidator validator;

public void onRequest(@ModelAttribute("registerForm") RegisterForm registerForm, BindingResult result)
{
    validator.validate(registerForm, result, RegisterForm.BasicCheck.class);
    if (result.hasErrors())
        return;

    // basic check successful => we can process fields which are covered by this check
    if (registerForm.getIsCompany())
    {
        validator.validate(registerForm, result, RegisterForm.CompanyCheck.class);
    }
    else
    {
        validator.validate(registerForm, result, RegisterForm.UserCheck.class);
    }

    if (!result.hasErrors())
    {
        // process registration
    }
}
I only want to validate what must be validated. If the user selects "company", fills a field with invalid content and then switches back to "user", the invalid company-related content must be ignored by the validator. A solution would be to clear those fields using JavaScript, but I also want my forms to work with JavaScript disabled. This is why I really like the approach shown above.
But Spring breaks this idea due to data binding. Before validation starts, Spring binds the data to registerForm. It adds errors to result if, for instance, types are incompatible (an int value was expected, but the user filled the form with letters). This is a problem, as these errors are shown in the JSP view by <form:errors /> tags.
Now I have found a way to prevent Spring from adding those errors to the binding result by implementing a custom BindingErrorProcessor. If a field contains null, I know that there was a validation error. In my concept null is not allowed - every field gets annotated with @NotNull plus the respective validation group.
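A minimal sketch of such a processor, assuming Spring's DefaultBindingErrorProcessor as the base class and registration via @InitBinder (the details may differ from the asker's actual implementation):

import org.springframework.beans.PropertyAccessException;
import org.springframework.validation.BindingResult;
import org.springframework.validation.DefaultBindingErrorProcessor;
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.InitBinder;

public class IgnoreBindingErrorsProcessor extends DefaultBindingErrorProcessor
{
    @Override
    public void processPropertyAccessException(PropertyAccessException ex, BindingResult bindingResult)
    {
        // Swallow the binding error: the field stays null, and the @NotNull
        // constraint of the currently active validation group reports it instead.
    }
}

// In the controller:
@InitBinder("registerForm")
void initBinder(WebDataBinder binder)
{
    binder.setBindingErrorProcessor(new IgnoreBindingErrorsProcessor());
}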
As I am new to Spring and JSR-303, I wonder whether I am totally on the wrong path. The fact that I have to implement a couple of things on my own makes me uncertain. Is this a clean solution? Is there a better solution for the same problem, as I think this is a common one?
EDIT
Please see my answer here if you are interested in my solution in detail: https://stackoverflow.com/a/30500985/395879
You are correct that Spring MVC is a bit picky in this regard, and it is a common problem. But there are workarounds:
Make all your backing fields strings, and do number/date etc. conversions and null checks manually.
Use JavaScript to set fields to null when they become irrelevant.
Use JavaScript to validate fields when they are entered. This will fix almost all of your problems.
Good luck!
I know this question is old, but I came upon it looking for an answer for a different situation.
I think for your situation you could use inheritance for the forms and then use two controller methods:
The forms would look like this:
public class RegistrationForm
{
    // Common fields go here.
}

public class UserRegistrationForm
    extends RegistrationForm
{
    @NotNull
    private String firstName;

    @NotNull
    private String lastName;

    // getters / setters ...
}

public class CompanyRegistrationForm
    extends RegistrationForm
{
    @NotNull
    private String companyName;

    // getters / setters ...
}
The controller methods would look like this:
@RequestMapping(method = RequestMethod.POST, params = "isCompany=false")
public void onRequest(
    @ModelAttribute("registerForm") @Valid UserRegistrationForm form,
    BindingResult result)
{
    if (!result.hasErrors())
    {
        // process registration
    }
}

@RequestMapping(method = RequestMethod.POST, params = "isCompany=true")
public void onRequest(
    @ModelAttribute("registerForm") @Valid CompanyRegistrationForm form,
    BindingResult result)
{
    if (!result.hasErrors())
    {
        // process registration
    }
}
Notice that the @RequestMapping annotations include a params attribute, so the value of the isCompany parameter determines which method is called.
Also notice that the @Valid annotation is placed on the form parameter.
Finally, no groups are needed in this case.