I'm experimenting with Spring Data REST and so far it's going relatively well. I'm able to query and manipulate the entities, and I have reached a point where I'd like to filter the retrieved data by a variable number of parameters. For this purpose I've been reading and decided on QueryDSL, which is integrated nicely with Spring, and it works (almost) flawlessly when using fields from the entities.
However, my filtering form contains some parameters which have no direct mapping to the entity, leading to this question. For the sake of brevity, I'll be using an over-simplified example, hence my use of a person's age instead of birth date and so on.
Supposing we have the following Person entity:
@Data
@NoArgsConstructor
@Entity
public class Person {

    @Id
    @GeneratedValue
    private UUID id;

    private String name;
    private String lastName;
    private Integer age;
}
... and the appropriate repo
@RepositoryRestResource
public interface PersonRepository extends CrudRepository<Person, UUID>,
        QuerydslPredicateExecutor<Person>, QuerydslBinderCustomizer<QPerson> {

    @RestResource
    Page<Person> findAll(@QuerydslPredicate Predicate predicate, Pageable pageable);

    @Override
    default void customize(QuerydslBindings bindings, QPerson person) {
        bindings.bind(String.class)
                .first((SingleValueBinding<StringPath, String>) StringExpression::containsIgnoreCase);
    }
}
... one can access and filter persons by name or last name (case insensitive) via http://<server>/persons?name=whatever, so far so good.
Next step, I would like to see only the people that are "pensionable", let's say over 65 years old, so the URL would look like http://<server>/persons?pensionable=true. However, pensionable is not an attribute in the Person entity, so adding it as a request param doesn't do anything.
I've been trying to figure out how this can be achieved or if this is currently a limitation of the framework(s), but my searches haven't been successful so far. Eventually via trial and error, I've come up with something that seems to work but feels more like a hack:
Create a different PersonExtendedFilter bean (not entity) which includes the extra/arbitrary params:
@Data
@NoArgsConstructor
public class PersonExtendedFilter {

    private Boolean pensionable;
}
... create a BooleanPath using the above, and use it to define a binding inside the repo's customize method:
@Override
default void customize(QuerydslBindings bindings, QPerson person) {
    bindings.bind(String.class)
            .first((SingleValueBinding<StringPath, String>) StringExpression::containsIgnoreCase);

    BooleanPath pensionable = new PathBuilder<>(PersonExtendedFilter.class, "personExtendedFilter")
            .getBoolean("pensionable");
    bindings.bind(pensionable)
            .first((path, value) -> new BooleanBuilder().and(value ? person.age.gt(65) : person.age.loe(65)));
}
Bottom line, I'm wondering whether there is an elegant way of doing this or if I'm missing something, be it from a logical POV, a RTFM one, or something else.
I want to select just a few columns from a table. The catch is that I'm using a Specification and pagination from the front-end filter, and I don't think I can combine those with CriteriaBuilder. My original idea was to create a @MappedSuperclass with the attributes I wanted (in this case, just the id and date), and fetch using a DAO repository from an empty subclass. I have done something similar before and it worked, but the subclasses used different tables, so it was a different ball game. In this case, since both subclasses use the same table, and there is nothing to differentiate the classes other than one not having any extra attributes, it keeps fetching the original, bigger class. I want to avoid creating a view with just the columns I want, or processing the data in the backend after fetching, but I think those may be the only possible solutions.
Superclass
@MappedSuperclass
public class Superclass
{
    @Column( name = "id" )
    private Integer id;

    @Column( name = "date" )
    private Date date;
}
Original Subclass
@Entity
@Table( name = "table" )
public class OriginalSubclass
    extends Superclass
{
    @Column( name = "code" )
    private Integer code;

    @Column( name = "name" )
    private String name;
}
New Subclass
@Entity
@Table( name = "table" )
public class NewSubclass
    extends Superclass
{
}
I created a new DAO for the new subclass:
@Repository
public interface NewSubclassDao
    extends JpaRepository<NewSubclass, Integer>, JpaSpecificationExecutor<NewSubclass>
{
}
Is there a way to get only the attributes I want with something similar to my idea?
Or is it possible to do it with CriteriaBuilder?
If none of the options are viable, would you prefer to use a view or process the data?
EDIT
To make it perfectly clear, I want Spring to bring me only the id and date attributes, using JPA findAll or something very similar, without messing up the pagination or the filter from the Specification.
You should be able to use @Query to do something like:
@Repository
@Transactional(readOnly = true)
public interface NewSubclassDao
    extends JpaRepository<NewSubclass, Integer>, JpaSpecificationExecutor<NewSubclass>
{
    @Query("SELECT table.code FROM #{#entityName} table")
    public Set<Integer> findAllCodes();
}
There are many ways to do this, but I think this is a perfect use case for Blaze-Persistence Entity Views.
I created the library to allow easy mapping between JPA models and custom interface or abstract class defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure (domain model) the way you like and map attributes (getters) via JPQL expressions to the entity model.
A DTO model for your use case could look like the following with Blaze-Persistence Entity-Views:
@EntityView(User.class)
public interface UserDto {
    @IdMapping
    Long getId();
    String getName();
    Set<RoleDto> getRoles();

    @EntityView(Role.class)
    interface RoleDto {
        @IdMapping
        Long getId();
        String getName();
    }
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
UserDto a = entityViewManager.find(entityManager, UserDto.class, id);
The Spring Data integration allows you to use it almost like Spring Data Projections: https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features
Page<UserDto> findAll(Pageable pageable);
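For context, a hedged sketch of how that method could be declared once the Blaze-Persistence Spring Data integration is configured (the repository name here is illustrative):

@Repository
public interface UserDtoRepository extends Repository<User, Long> {

    // Returns entity views instead of full entities; only the mapped attributes are fetched
    Page<UserDto> findAll(Pageable pageable);
}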
The best part is, it will only fetch the state that is actually necessary!
Let's say I have a domain java class representing a person:
class Person {

    private final String id; // government id
    private String name;
    private String status;

    private Person(String id, String name) {
        this.id = id;
        this.name = name;
        this.status = "NEW";
    }

    static Person createNew(String id, String name) {
        return new Person(id, name);
    }

    void validate() {
        // logic
        this.status = "VALID";
    }

    public static final class Builder {
        private String id;
        private String name;
        private String status;

        private Builder() {
        }

        public static Builder aPerson() {
            return new Builder();
        }

        public Builder id(String id) {
            this.id = id;
            return this;
        }

        public Builder name(String name) {
            this.name = name;
            return this;
        }

        public Builder status(String status) {
            this.status = status;
            return this;
        }

        public Person build() {
            Person person = new Person(id, name);
            person.status = this.status;
            return person;
        }
    }
}
I store this domain object in a database via a regular class with the same fields plus getters and setters. Currently, when I want to store an object, I create a new PersonDocument (the data is stored in Mongo), use the getters and setters, and save it. It gets complicated when I want to fetch it from the DB. I would like my domain object to expose only what is necessary; for the business logic that is currently only creation and validation. Simply:
Person p = Person.createNew("1234", "John");
p.validate();
repository.save(p);
Going the other way, it also gets complicated: currently there is a builder which allows creation of the object in any state. We do believe that the data stored in the DB is in a proper state, so the object can be created that way, but the downside is that there is a public API available letting anyone do anything.
The initial idea was to use the MapStruct Java mapping library, but it uses setters to create objects, and exposing setters in the domain class (as far as I can tell) should be avoided.
Any suggestions on how to do it properly?
Your problem likely comes from two conflicting requirements:
You want to expose only business methods.
You want to expose data too, since you want to be able to implement serialization/deserialization external to the object.
One of those has to give. To be honest, most people faced with this problem ignore the first one, and just introduce setter/getters. The alternative is of course to ignore the second one, and just introduce the serialization/deserialization into the object.
For example, you can introduce a method Document toDocument() into the objects that produces a Mongo-compatible JSON document, and also a Person fromDocument(Document) to deserialize.
Most people don't like this sort of solution, because it "couples" the technology to the object. Is that a good or a bad thing? It depends on your use case. Which one do you want to optimize for: changing business logic or changing technologies? If you're not planning to change technologies very often and don't plan to use the same class in a completely different application, there's no reason to separate the technology.
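A minimal sketch of that approach, assuming the MongoDB Java driver's org.bson.Document and reusing the fields from the Person example above (the exact document keys are illustrative):

import org.bson.Document;

class Person {

    private final String id; // government id
    private String name;
    private String status;

    private Person(String id, String name) {
        this.id = id;
        this.name = name;
        this.status = "NEW";
    }

    // Serialization lives inside the object, so no getters/setters are exposed
    Document toDocument() {
        return new Document("_id", id)
                .append("name", name)
                .append("status", status);
    }

    static Person fromDocument(Document document) {
        Person person = new Person(document.getString("_id"), document.getString("name"));
        person.status = document.getString("status");
        return person;
    }
}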
Robert Bräutigam's answer makes a good point:
Two conflicting requirements
But there is another sentence, by Alan Kay, that is better:
“I’m sorry that I long ago coined the term “objects” for this topic
because it gets many people to focus on the lesser idea. The big idea
is messaging.” ~ Alan Kay
So, instead of dealing with the conflict, let's just change the approach to avoid it. The best way I have found is to take a functional approach and avoid unnecessary state and mutation in classes by expressing domain changes as events.
Instead of mapping classes (aggregates, V.O.'s and/or entities) to persistence, I do this:
Build an aggregate with the data needed (V.O.'s and entities) to apply the aggregate's rules and invariants for a given action. This data comes from persistence. The aggregate does not expose getters or setters; just actions.
Call the aggregate's action with the command data as a parameter. This will call inner entities' actions in case the overall rules need it. This allows responsibility segregation and decoupling, as the Aggregate Root does not have to know how its inner entities are implemented (Tell, don't ask).
Actions (in Aggregate Roots and inner entities) do not modify their inner state; instead they return events expressing the domain change. The aggregate's main action coordinates and checks the events returned by its inner entities to apply rules and invariants (the aggregate has the "big picture") and builds the final Domain Event that is the output of the main action call.
Your persistence layer has an apply method for every Domain Event it has to handle (Persistence.Apply(event)). This way your persistence knows what has just happened and, as long as the event has all the data needed to persist the change, can apply it (even with behaviour if needed!).
Publish your Domain Event. Let the rest of your system know that something has just happened.
Check this post (it is worth checking the whole DDD series on this blog) to see a similar implementation.
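A minimal, illustrative sketch of that flow; every name here is invented for the example and is not taken from the linked post:

// Domain Event: carries all the data needed to persist the change
final class PersonRegistered {
    final String id;
    final String name;

    PersonRegistered(String id, String name) {
        this.id = id;
        this.name = name;
    }
}

// Aggregate: exposes actions only, no getters or setters
final class PersonAggregate {

    PersonRegistered register(String id, String name) {
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException("name is required"); // invariant check
        }
        return new PersonRegistered(id, name); // the action returns an event, it does not mutate state
    }
}

// Persistence: one apply method per Domain Event it handles
interface PersonPersistence {
    void apply(PersonRegistered event);
}

Usage would then be: call the aggregate action, hand the returned event to persistence.apply(...), and finally publish the event so the rest of the system knows what happened.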
I do it this way:
The Person domain entity has status (in the sense of the entity fields that define it, not your "status" field) and behaviour (methods).
What is stored in the DB is just the status. Then I create a "PersonStatus" interface in the domain (with getter methods for the fields that we need to persist), so that PersonRepository deals only with the status.
The Person entity implements PersonStatus (or, instead of this, you can put a static method that returns the state).
In the infrastructure I have a PersonDB class implementing PersonStatus too, which is the persistence model.
So:
DOMAIN MODEL:
// ENTITY
public class Person implements PersonStatus {

    // Fields that define status
    private String id;
    private String name;
    ...

    // Constructors and behaviour
    ...
    ...

    // Methods implementing PersonStatus
    @Override
    public String id() {
        return this.id;
    }

    @Override
    public String name() {
        return this.name;
    }
    ...
}

// STATUS OF ENTITY
public interface PersonStatus {

    public String id();
    public String name();
    ...
}

// REPOSITORY
public interface PersonRepository {

    public void add ( PersonStatus personStatus );
    public PersonStatus personOfId ( String anId );
}
INFRASTRUCTURE:
public class PersonDB implements PersonStatus {

    private String id;
    private String name;
    ...

    public PersonDB ( String anId, String aName, ... ) {
        this.id = anId;
        this.name = aName;
        ...
    }

    @Override
    public String id() {
        return this.id;
    }

    @Override
    public String name() {
        return this.name;
    }
    ...
}

// AN IN-MEMORY REPOSITORY IMPLEMENTATION
public class InmemoryPersonRepository implements PersonRepository {

    private Map<String,PersonDB> inmemoryDb;

    public InmemoryPersonRepository() {
        this.inmemoryDb = new HashMap<String,PersonDB>();
    }

    @Override
    public void add ( PersonStatus personStatus ) {
        PersonDB personDB = new PersonDB ( personStatus.id(), personStatus.name(), ... );
        this.inmemoryDb.put ( personDB.id(), personDB );
    }

    @Override
    public PersonStatus personOfId ( String anId ) {
        return this.inmemoryDb.get ( anId );
    }
}
APPLICATION LAYER:
...
Person person = new Person ( "1", "John Doe", ... );
personRepository.add ( person );
...
PersonStatus personStatus = personRepository.personOfId ( "1" );
Person person = new Person ( personStatus.id(), personStatus.name(), ... );
...
It basically boils down to two things, depending on how much extra work you are willing to put into the necessary infrastructure and how constraining your ORM/persistence framework is.
Use CQRS+ES pattern
The most obvious choice, used in bigger and more complex domains, is the CQRS (Command Query Responsibility Segregation) plus Event Sourcing pattern. This means that each mutating action generates an event that is persisted.
When your aggregate is loaded, all the events will be loaded from the database and applied in chronological order. Once applied, your aggregate will have its current state.
CQRS just means that you separate read and write operations. Write operations happen in the aggregate by creating events (by applying commands), which are stored and read via Event Sourcing.
The "Query" side consists of queries on projected data, which uses the events to build a current state of the object that is used for querying and reading only. Aggregates are still read by reapplying all the events from the event sourcing storage.
Pros
You have a history of all changes made to the aggregate. This can be seen as added value for the business and for auditing.
If your projected database is corrupted or in an invalid state, you can restore it by replaying all the events and generating the projection anew.
It's easy to revert to a previous point in time (e.g. by applying compensating events that do the opposite of what a previous event did).
It's easy to fix a bug (e.g. in calculating the state of the aggregate) and then replay all the events to get the new, corrected value.
Assume you have a BankingAccount aggregate that calculates the balance and you used regular rounding instead of "round to even". You can fix the calculation, then reapply all the events, and you get the new, correct account balance.
Cons
Aggregates with thousands of events can take some time to materialize (the Snapshot/Memento pattern can be used here to load a snapshot and apply only the events recorded after that snapshot)
Initially more time to implement the necessary infrastructure
You can't query event-sourced aggregates without a read store; this requires a projection and a message queue to publish the event sourcing events so they can be processed and applied to a projection (an SQL or document table) which can be used for queries
Map directly to Domain Entities
Some ORM and Document database providers allow you to directly map to backing fields, i.e. via reflection.
In the MongoDB C# driver it can be done via something like the linked answer.
The same applies to the EF Core ORM. I'm sure there's something similar in the Java world too.
This may limit your database persistence library and technology usage, since it will require you to use one which supports such APIs via fluent or code configuration. You can't use attributes/annotations for this, because these are usually database specific and it would leak persistence knowledge into your domain.
It also MAY limit your ability to use the strongly typed querying API (LINQ in C#, Streams in Java), because that generally requires getters and setters, so you may have to use magic strings (with the names of the fields or properties in the storage) in the persistence layer.
It may be acceptable for smaller/less complex domains. But CQRS+ES should always be preferred, if possible and within budget/timeline, since it is the most flexible and works with all persistence storages and frameworks (even with key-value stores).
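As a Java-side illustration, here is a minimal sketch assuming JPA with field access, where the provider populates the private fields via reflection and no setters are exposed (the entity and its fields are invented for the example; newer Jakarta versions use jakarta.persistence instead of javax.persistence):

import javax.persistence.Access;
import javax.persistence.AccessType;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
@Access(AccessType.FIELD) // the provider reads and writes the private fields directly
public class CustomerOrder {

    @Id
    private Long id;

    private String status;

    protected CustomerOrder() {
        // required by JPA; not part of the public API
    }

    public CustomerOrder(Long id) {
        this.id = id;
        this.status = "NEW";
    }

    // Only business behaviour is exposed; no setters
    public void confirm() {
        this.status = "CONFIRMED";
    }
}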
Pros
Not necessary to leverage more complex infrastructure (CQRS, ES, Pub/Sub messaging/queues)
No leaking of persistence knowledge into your models and no need to break encapsulation
Cons
No history of changes
No way to restore a previous state
May require magic strings when querying in the persistence layer (depends on framework/orm)
Can require a lot of fluent/code configuration in the persistence layer, to map it to the backing field
May break when you rename the backing field
I am in a situation where I have to store data belonging to multiple entities in a single collection. But when I query them back, I don't want unwanted records in my result. How can we achieve this using Spring? Below is what I have done so far.
1. I give the same collection name in both entities, as shown below.
@Document(collection = "livingThings")
@Data
public class AnimalEntity {
    // contains id, type, bla, bla
}

@Document(collection = "livingThings")
@Data
public class HumanEntity {
    // contains id, gender, address
}
2. I create independent MongoRepository interfaces
public interface AnimalRepository extends MongoRepository<AnimalEntity, String> {
}

public interface HumanRepository extends MongoRepository<HumanEntity, String> {
}
3. And the problem is
when I do animalRepo.findAll or humanRepo.findAll, I get all records available in the collection.
4. What I expect
animalRepo.findAll returns only those records whose document structure is the same as AnimalEntity.
Thank you very much for your time and patience to attend this query.
Spring Data MongoDB automatically adds a _class field to the documents in a collection. Even though it is not the best solution, you can try this:
#Query("_class:your package name here.AnimalEntity")
public AnimalEntity findAllAnimals();
My Document is
@QueryEntity
@Data
@Document(collection = "MyCol")
public class MyCol {

    @Id
    private String _id;
    private String version;
}
I want to get all the distinct versions stored in the DB.
My attempts:
public interface MyColDao extends MongoRepository<MyCol, String>, QueryDslPredicateExecutor<MyCol> {

    @Query("{ distinct : 'MyCol', key : 'version'}")
    List<String> findDistinctVersion();
}
Or just findDistinctVersion without the query annotation.
Most of the examples on GitHub have a By field, like:
List<Person> findDistinctPeopleByLastnameOrFirstname(String lastname, String firstname);
I don't need a By field.
Another example I found here.
#Query("{ distinct : 'channel', key : 'game'}")
public JSONArray listDistinctGames();
This doesn't seem to work for me.
I can't seem to find queryDSL/Morphia's documentation to do this.
public interface MyColDao extends MongoRepository<MyCol, String>, QueryDslPredicateExecutor<MyCol> {

    @Query("{'yourdbfieldname':?0}")
    List<String> findDistinctVersion(String version);
}
Here, version replaces your DB field name.
You can see more here.
This Spring documentation provides the details on how to form an expression when you want to fetch distinct values.
Link
I had a similar problem, but I couldn't work out how to do it within the MongoRepository (as far as I can tell, it's not currently possible), so I ended up using MongoTemplate instead.
I believe the following would meet your requirement.
@Autowired
MongoTemplate mongoTemplate;

public List<String> getVersions() {
    return mongoTemplate.findDistinct("version", MyCol.class, String.class);
}
I worked out a concept to conditionally validate using JSR 303 groups. "Conditionally" means that I have some fields which are only relevant if another field has a specific value.
Example: There is an option to select whether to register as a person or as a company. When selecting company, the user has to fill a field containing the name of the company.
Now I thought I'd use groups for that:
class RegisterForm
{
    public interface BasicCheck {}
    public interface UserCheck {}
    public interface CompanyCheck {}

    @NotNull(groups = BasicCheck.class)
    private Boolean isCompany;

    @NotNull(groups = UserCheck.class)
    private String firstName;

    @NotNull(groups = UserCheck.class)
    private String lastName;

    @NotNull(groups = CompanyCheck.class)
    private String companyName;

    // getters / setters ...
}
In my controller, I validate step by step depending on the respective selection:
@Autowired
SmartValidator validator;

public void onRequest(@ModelAttribute("registerForm") RegisterForm registerForm, BindingResult result)
{
    validator.validate(registerForm, result, RegisterForm.BasicCheck.class);
    if (result.hasErrors())
        return;

    // basic check successful => we can process fields which are covered by this check
    if (registerForm.getIsCompany())
    {
        validator.validate(registerForm, result, RegisterForm.CompanyCheck.class);
    }
    else
    {
        validator.validate(registerForm, result, RegisterForm.UserCheck.class);
    }
    if (!result.hasErrors())
    {
        // process registration
    }
}
I only want to validate what must be validated. If the user selects "company", fills a field with invalid content, and then switches back to "user", the invalid company-related content must be ignored by the validator. A solution would be to clear those fields using JavaScript, but I also want my forms to work with JavaScript disabled. This is why I really like the approach shown above.
But Spring breaks this idea due to data binding. Before validation starts, Spring binds the data to registerForm. It adds errors to result if, for instance, types are incompatible (an int value was expected, but the user filled the form with letters). This is a problem, as these errors are shown in the JSP view by <form:errors /> tags.
Now I found a way to prevent Spring from adding those errors to the binding result by implementing a custom BindingErrorProcessor. If a field contains null, I know that there was a validation error. In my concept null is not allowed: every field gets annotated with @NotNull plus the respective validation group.
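For illustration, a minimal sketch of such a processor, assuming type-mismatch errors are simply swallowed so the affected field stays null and is later reported by @NotNull (the class name is made up):

import org.springframework.beans.PropertyAccessException;
import org.springframework.validation.BindingResult;
import org.springframework.validation.DefaultBindingErrorProcessor;

public class NullingBindingErrorProcessor extends DefaultBindingErrorProcessor {

    @Override
    public void processPropertyAccessException(PropertyAccessException ex, BindingResult bindingResult) {
        // Intentionally do nothing: the field keeps its null value, and the
        // @NotNull constraint of the relevant validation group reports it later.
    }
}

It would then be registered in the controller, e.g. via an @InitBinder method calling binder.setBindingErrorProcessor(new NullingBindingErrorProcessor()).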
As I am new to Spring and JSR-303 I wonder, whether I am totally on the wrong path. The fact that I have to implement a couple of things on my own makes me uncertain. Is this a clean solution? Is there a better solution for the same problem, as I think this is a common problem?
EDIT
Please see my answer here if you are interested in my solution in detail: https://stackoverflow.com/a/30500985/395879
You are correct that Spring MVC is a bit picky in this regard, and it is a common problem. But there are workarounds:
Make all your backing fields strings, and do number/date etc. conversions and null checks manually (see the sketch after this list).
Use JavaScript to set fields to null when they become irrelevant.
Use JavaScript to validate fields when they are entered. This will fix almost all of your problems.
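A minimal sketch of the first workaround, reusing field names from the question; the conversion helpers are invented for illustration:

public class RegisterForm {

    // Bound as plain strings, so type-mismatch binding errors cannot occur
    private String isCompany;
    private String companyName;

    // Manual conversion and null/format checks happen after binding
    public boolean isCompanySelected() {
        return Boolean.parseBoolean(isCompany);
    }

    public String companyNameOrNull() {
        return (companyName == null || companyName.trim().isEmpty()) ? null : companyName.trim();
    }

    // plain getters / setters for data binding ...
}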
Good luck!
I know this question is old, but I came upon it looking for an answer for a different situation.
I think for your situation you could use inheritance for the forms and then use two controller methods:
The forms would look like this:
public class RegistrationForm
{
    // Common fields go here.
}

public class UserRegistrationForm
    extends RegistrationForm
{
    @NotNull
    private String firstName;

    @NotNull
    private String lastName;

    // getters / setters ...
}

public class CompanyRegistrationForm
    extends RegistrationForm
{
    @NotNull
    private String companyName;

    // getters / setters ...
}
The controller methods would look like this:
@RequestMapping(method = RequestMethod.POST, params = "isCompany=false")
public void onRequest(
        @ModelAttribute("registerForm") @Valid UserRegistrationForm form,
        BindingResult result)
{
    if (!result.hasErrors())
    {
        // process registration
    }
}

@RequestMapping(method = RequestMethod.POST, params = "isCompany=true")
public void onRequest(
        @ModelAttribute("registerForm") @Valid CompanyRegistrationForm form,
        BindingResult result)
{
    if (!result.hasErrors())
    {
        // process registration
    }
}
Notice that the @RequestMapping annotations include a params attribute, so the value of the isCompany parameter determines which method is called.
Also notice that the @Valid annotation is placed on the form parameter.
Finally, no groups are needed in this case.