How to create a custom validator annotation in Spring Boot?

I would like to validate both the @RequestBody and path parameters. Ideally, I'd like to create my own @CustomValid annotation instead of using the @Valid one.
In addition to running checks like @NotNull on the DTO itself, I want to check whether, say, the id of the object to be updated actually exists in the database. For example, for this request:
{
"id": "2348291983918",
"name": "Carol"
}
I'd like to look up whether the users Mongo collection actually contains an object with the given id, and reject the request if it doesn't.
Does anyone know how to do this?
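For illustration only, here is a minimal sketch of what such a constraint could look like, assuming a Spring-managed ConstraintValidator with an injected repository (every name here, UserExists, UserExistsValidator, UserRepository, is made up for the example; jakarta.validation is javax.validation on older Boot versions):
import java.lang.annotation.*;
import jakarta.validation.Constraint;
import jakarta.validation.ConstraintValidator;
import jakarta.validation.ConstraintValidatorContext;
import jakarta.validation.Payload;
import org.springframework.beans.factory.annotation.Autowired;

@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = UserExistsValidator.class)
public @interface UserExists {
    String message() default "no user with this id exists";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

public class UserExistsValidator implements ConstraintValidator<UserExists, String> {

    @Autowired
    private UserRepository userRepository; // hypothetical Mongo repository

    @Override
    public boolean isValid(String id, ConstraintValidatorContext context) {
        // reject the request when no document with this id exists
        return id != null && userRepository.existsById(id);
    }
}
The DTO field would then carry @UserExists and be triggered by the usual @Valid on the request body; Spring Boot wires ConstraintValidator instances through the application context, which is what makes the repository injection work.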

Related

SpringBoot: Better way to create multiple entities from a single JSON with all data

I am working on a REST API with the following structure:
controller: classes that define the endpoints to obtain/create entities.
model: classes that represent the entities that are stored in each database table.
repository: classes that extend JpaRepository, providing the methods to perform HQL queries on each model.
service / serviceimpl: classes that define the logic to obtain or create an entity from a model.
There is a table in the database that has multiple @OneToMany relationships with other tables. From the front-end, I will receive a JSON with the data to create a new entity for this table, but this JSON will also contain information to create entities in other tables that are related to the main one. This gives me the following problems:
The model class for the main entity has a lot of @Transient attributes, because the front-end sends information that shouldn't be mapped directly to a DB table and I have to implement the logic to create the actual instances. (Where should I do it? Currently the logic to build the child instances lives in the parent's ServiceImpl class, so the code is very long and hard to maintain.)
I must persist each instance separately: to create the child entities I must provide the id of the parent entity. Because of this, I need to call JpaRepository's .save() method once to insert the parent entity and get its id. Then, using that id, I create all the child entities and persist each one. If something fails in the middle of the method, some instances will have been persisted and others not, which means saving incomplete data to the DB.
The result is a very dirty, hard-to-maintain model and ServiceImpl class. But I have to do it this way, since the front-end devs want to send a single JSON with the information of everything that needs to be created, and they decided that the back-end implements all the creation logic.
In what classes and in what order would you define the methods to do this as cleanly and safely as possible?
If you use @Transactional (with auto-commit off), the changes are committed at the end of the transaction.
So if you create the main object and then all the subsequent objects, and any of them fails, the whole transaction rolls back.
Regarding the order of creation:
I would make a creation manager that handles this. So, for example, the JSON that you receive from the FE is:
{
    "name": "abc",
    "children-ofTypeA": [{
        "name": "abc-child-a"
    }],
    "children-ofTypeB": [{
        "name": "abc-child-b"
    }],
    "some-other-prop-that-we-don't-care": {..}
}
class MainObject {
    private String name;
    private List<A> childrenA;
    private List<B> childrenB;
}
You get this JSON and you pass it to a CreationManager, for example:
class CreationManager {

    @Transactional
    public void create(StructureAbove json) {
        // use a mapper or something to create the object
        var mainObj = createMainObjFrom(json);
        // then apply what is described in the posted link:
        // save the parent first, then create and save the children in this same transaction
    }
}
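A slightly fuller sketch of that idea, assuming Spring Data repositories for the parent and children (all names below are illustrative, and mapToParent/mapToChildrenA stand in for whatever mapper is used):
class CreationManager {

    private final ParentRepository parentRepository; // hypothetical
    private final ChildARepository childARepository; // hypothetical

    CreationManager(ParentRepository parentRepository, ChildARepository childARepository) {
        this.parentRepository = parentRepository;
        this.childARepository = childARepository;
    }

    @Transactional // an unchecked exception below rolls back every save in this method
    public void create(StructureAbove json) {
        MainObject parent = parentRepository.save(mapToParent(json)); // parent id is generated here
        for (A child : mapToChildrenA(json)) {
            child.setParent(parent); // the children need the parent's id
            childARepository.save(child);
        }
        // same pattern for childrenB ...
    }
}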

How to handle self-reference in Spring Data JPA with Spring Boot 2?

I have a Spring Boot 2 application in which I have the following User entity:
@Data
... JPA and other annotations
class User {
    ... many fields including Integer id

    @ManyToOne(fetch = FetchType.LAZY)
    public User createBy;

    @ManyToOne(fetch = FetchType.LAZY)
    public User updateBy;
}
Now, the main problem I am facing is the self-reference of User (from User), which causes either a StackOverflowError or an InvalidDefinitionException depending on the annotations I use on User. This issue is very common, and several solutions are discussed on the internet:
1. Annotate both fields with @JsonBackReference
Annotating with @JsonBackReference omits the updateBy and createBy fields altogether, meaning I don't get them in my API responses when I want them.
2. Annotate the class with @JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id") (or None.class, IntSequenceGenerator.class, UUIDGenerator.class)
This approach works fine until the serializer finds the same user object somewhere further down the JSON; instead of writing the object out again, it writes a reference to it based on the generator selected above, e.g.:
[
    {"id": 1, ..., "createBy": {"id": 2, ...}},
    2 // instead of a user object, I get an integer id reference (let's ignore how user 1 is created by user 2)
]
This means a client parsing this data will often expect an object and get a number instead, causing parsing errors.
3. Implementing a custom serializer (or extending an existing one)
Now, I am really unsure whether this is the right approach to achieve my goals (mentioned below). But if it is, how would I go about handling this self-reference?
GOALS:
Serialize the data so that at least certain fields of the child object (user) are passed back, preventing further recursive calls:
{
    "id": 1, "name": "old user", .. many other fields .., "createBy": {"id": 2, "name": "2nd user"}
}
When the client sends a user object as the request body, the application needs only the id of the child entity, not the whole object, as below:
{
    "name": "new user", ...., "createBy": {"id": 1}
}
I know that self-referencing is integral to ORMs and there are a lot of use cases for it. But how do professional developers/applications handle this issue, especially in the Spring Framework? If a custom serializer is the only way to go, how do I make it work appropriately?
Also, is it advisable to exclude these fields (createBy and updateBy) from the equals/hashCode and toString methods?
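If it helps, here is a purely illustrative sketch of option 3: a Jackson serializer that writes only a shallow summary of the referenced user, so serialization can never recurse (getter names are assumed):
import java.io.IOException;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.ser.std.StdSerializer;

public class UserSummarySerializer extends StdSerializer<User> {

    public UserSummarySerializer() {
        super(User.class);
    }

    @Override
    public void serialize(User user, JsonGenerator gen, SerializerProvider provider)
            throws IOException {
        // write only id and name, nothing nested, so recursion is impossible
        gen.writeStartObject();
        gen.writeNumberField("id", user.getId());
        gen.writeStringField("name", user.getName());
        gen.writeEndObject();
    }
}
The createBy and updateBy fields would then be annotated with @JsonSerialize(using = UserSummarySerializer.class). This covers the first goal; the request-body side (accepting just an id) would still need its own handling, for example a small custom deserializer.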

Spring JPA - RESTful partial update and validation for entity

I have a simple RESTful API based on Spring MVC using a JPA-connected MySQL database. Until now this API supports only complete updates of an entity, meaning all fields must be provided inside the request body.
@ResponseBody
@PutMapping(value = "{id}")
public ResponseEntity<?> update(@Valid @RequestBody Article newArticle, @PathVariable("id") long id) {
    return service.updateById(id, newArticle);
}
The real problem here is the validation: how can I validate only the provided fields, while still requiring all fields during creation?
@Entity
public class Article {
    @NotEmpty @Size(max = 100) String title;
    @NotEmpty @Size(max = 500) String content;
    // Getters and Setters
}
Example for a partial update request body {"content": "Just a test"} instead of {"title": "Title", "content": "Just a test"}.
The actual partial update is done by checking if the given field is not null:
if(newArticle.getTitle() != null) article.setTitle(newArticle.getTitle());
But the validation of course won't work! I had to deactivate the validation for the update method to run the RESTful service. I essentially have two questions:
How can I validate only an "existing" subset of properties in the update method, while still requiring all fields during creation?
Is there a more elegant way to update partially than checking for null?
The complexity of partial updates with Spring JPA is that you may receive only half of the fields populated; even then, you need to pull the entire entity from the database and "merge" it with the POJO, because otherwise you risk sending null values to the database.
But merging itself is tricky, because you need to look at each field and decide whether to send the new value to the database or keep the current one. And as you add fields, the validation needs to be updated and the tests get more complex. In one single statement: it doesn't scale. The idea is to always write code that is open for extension and closed for modification; if you add more fields, the validation block ideally doesn't need to change.
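For concreteness, here is a minimal sketch of the load-merge-save pattern being described (and criticized) here, assuming a Spring Data repository; all names are illustrative:
import jakarta.persistence.EntityNotFoundException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class ArticleService {

    private final ArticleRepository repository; // hypothetical Spring Data repository

    ArticleService(ArticleRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public Article partialUpdate(long id, Article patch) {
        Article existing = repository.findById(id)
                .orElseThrow(() -> new EntityNotFoundException("Article " + id));
        // one null check per field: this is the part that does not scale
        if (patch.getTitle() != null) existing.setTitle(patch.getTitle());
        if (patch.getContent() != null) existing.setContent(patch.getContent());
        return repository.save(existing); // fields absent from the request keep their stored values
    }
}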
The way you deal with this in a REST model is by operating over the entire entity each time. Let's say you have users; you first pull a user:
GET /user/100
Now your web page has all the fields of user id=100. You change the last name and propagate the change by calling the same resource URL with the PUT verb:
PUT /user/100
And you send all the fields, or rather the "same entity", back with a new lastname. Then you can forget about validation: it just works as a black box. If you add more fields, you add more @NotNull or whatever validation you need. Of course there may be situations where you actually need to write blocks of code for validation. Even then the validation isn't affected, because you have one main for-loop for your validation and each field has its own validator. If you add fields, you add validators, but the main validation block remains untouched.
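A rough sketch of that "one loop, one validator per field" idea; the FieldValidator interface is invented for illustration and is not a Spring or Bean Validation type:
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

class ArticleValidation {

    interface FieldValidator<T> {
        Optional<String> validate(T target); // empty Optional means the field is valid
    }

    // adding a field means adding a validator here, not editing the loop below
    private static final List<FieldValidator<Article>> VALIDATORS = List.of(
            a -> a.getTitle() == null || a.getTitle().isEmpty()
                    ? Optional.of("title must not be empty") : Optional.empty(),
            a -> a.getContent() != null && a.getContent().length() > 500
                    ? Optional.of("content is longer than 500 characters") : Optional.empty());

    static List<String> validate(Article article) {
        List<String> errors = new ArrayList<>();
        for (FieldValidator<Article> v : VALIDATORS) { // the main block stays untouched
            v.validate(article).ifPresent(errors::add);
        }
        return errors;
    }
}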

How to make a dynamic DTO without to send a complete Payload in Spring

I have the updateProvider(ProviderUpdateDto providerUpdt) method in my Spring controller, but I do not see the need to send the whole payload of the provider entity if, for example, the client only updates the name or another attribute. It is not necessary to send the whole entity to update a single field, and doing so produces excessive bandwidth consumption.
What is the better practice for sending only the fields that are going to be updated, building the DTO dynamically? And how would I do this if I'm using Spring Boot to build my API?
You can use the Jackson library; it provides the annotation @JsonInclude(Include.NON_NULL), and with this only properties with non-null values will be passed to your client.
See http://www.baeldung.com/jackson-ignore-null-fields for an example.
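A minimal sketch of what that looks like on a DTO (the class name comes from the question; the fields are made up for the example):
import com.fasterxml.jackson.annotation.JsonInclude;

@JsonInclude(JsonInclude.Include.NON_NULL)
public class ProviderUpdateDto {
    private String name;    // left out of the JSON whenever it is null
    private String address; // same here

    // getters and setters omitted
}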
There are many techniques to improve bandwidth usage:
don't pretty-print the JSON
enable HTTP GZIP compression
However, it is more important to ensure your API is logically sound. Omitting some fields may break the business rules, and a too fine-grained API design will also increase the interface complexity.
Another option would be to have a DTO object for field changes which would work for every entity you have. E.g.:
class EntityUpdateDTO {
    // the class of the object you are updating, or just use a custom identifier
    private Class<? extends DTO> entityClass;
    // the id of that object
    private Long entityId;
    // the fields you are updating
    private String[] updateFields;
    // the values of those fields
    private Object[] updateValues;
}
Example of a JSON object:
{
    "entityClass": "MyEntityDTO",
    "entityId": 324123,
    "updateFields": [
        "property1",
        "property2"
    ],
    "updateValues": [
        "blabla",
        25
    ]
}
This might bring some issues if any of your updateValues are complex objects themselves, though...
Your API would become updateProvider(EntityUpdateDTO update);.
Of course you should leave out the entityClass field if you have an update API for each DTO, as you'd already know which entity class you are working on...
Still, unless you are working with huge objects, I wouldn't worry about bandwidth.
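To make the idea concrete, here is a rough, purely illustrative sketch of applying such a DTO with reflection; it assumes getter methods on EntityUpdateDTO, that the field names match the entity's declared fields, and it skips all error handling:
import java.lang.reflect.Field;

class UpdateApplier {

    static void applyUpdate(Object entity, EntityUpdateDTO dto) throws ReflectiveOperationException {
        for (int i = 0; i < dto.getUpdateFields().length; i++) {
            Field field = entity.getClass().getDeclaredField(dto.getUpdateFields()[i]);
            field.setAccessible(true);                   // allow writing private fields
            field.set(entity, dto.getUpdateValues()[i]); // assumes a compatible value type
        }
    }
}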

Internal object representation when designing JSON apis

I've got an object design question.
I'm building a JSON API in Java. My system uses POJOs to represent JSON objects and translates between JSON and POJOs using Jackson. Each object needs to take different forms in different contexts, and I can't decide whether to create a bunch of separate classes, one for each context, or try to make a common class work in all circumstances.
Let me give a concrete example.
The system has users. The API has a service to add, modify and delete users. There is a table of users in a database. The database record looks like this:
{
id: 123, // autoincrement
name: "Bob",
passwordHash: "random string",
unmodifiable: "some string"
}
When you POST/add a user, your pojo should not include an id, because that's autogenerated. You also want to be able to include a password, which gets hashed and stored in the db.
When you PUT/update a user, your pojo shouldn't include the unmodifiable field, but it must include the id, so you know what user you're modifying.
When you GET/retrieve the user, you should get all fields except the passwordHash.
So the pojo that represents the user has different properties depending on whether you're adding, updating, or retrieving the user. And it has different properties in the database.
So, should I create four different pojos in my system and translate among them? Or create one User class and try to make it look different in different circumstances, using Jackson views or some other mechanism?
I'm finding the latter approach really hard to manage.
In my opinion you should create only one POJO, User, which has all the needed properties. Then you should decide whether your API is rigorous or lenient. A rigorous API returns an error when it receives wrong JSON data; a lenient API can skip superfluous (unnecessary) properties.
Before I provide an example, let me change the 'passwordHash' property to 'password'.
Add new user/POST
JSON data from client:
{
    "id": 123,
    "name": "Bob",
    "password": "random string",
    "unmodifiable": "some string"
}
The rigorous version can return, for example, something like this:
{
"status": "ERROR",
"errors": [
{
"errorType": 1001,
"message": "Id field is not allowed in POST request."
}
]
}
The lenient version can return, for example, something like this:
{
"status": "SUCCESS",
"warnings": [
"Id field was omitted."
]
}
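A minimal sketch of how the rigorous variant could be enforced in a Spring controller, matching the error shape above (the endpoint and all names are invented for the example):
import java.util.List;
import java.util.Map;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    @PostMapping("/users")
    public ResponseEntity<Map<String, Object>> addUser(@RequestBody User user) {
        if (user.getId() != null) { // rigorous: a client-supplied id is an error
            return ResponseEntity.badRequest().body(Map.of(
                    "status", "ERROR",
                    "errors", List.of(Map.of(
                            "errorType", 1001,
                            "message", "Id field is not allowed in POST request."))));
        }
        // hash the password, persist the user, etc.
        return ResponseEntity.ok(Map.of("status", "SUCCESS"));
    }
}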
For each CRUD method you can write a set of unit tests that record which way you chose and what is and is not allowed.
