I am aware that in a Spring Boot project, I can filter out null-valued attributes in a response using @JsonInclude(JsonInclude.Include.NON_NULL). But what if I want to return null values for certain use cases, driven by input from consumers?
I have a search API being consumed by multiple consumers. Below are the scenarios I want my API to be able to handle using the same response object.
Scenario: Null values are expected in the response
Expected Request: { "nullsInResponse": true }
Expected Response: { "attribute1": "value1", "attribute2": null }

Scenario: Null values are not expected in the response
Expected Request: { "nullsInResponse": false }
Expected Response: { "attribute1": "value1" }
On the class level you can do:

@JsonSerialize(include = JsonSerialize.Inclusion.ALWAYS)

On the attribute level:

@XmlElement(nillable = true)
@JsonProperty("error")
private Error error;
In my opinion, the best option in your case is to create two different DTOs, one for responses with null values and one without them, based on the @JsonSerialize annotation.
It's reasonable because you have two different business cases, so you can provide different transfer objects named after those cases (ResponseWithNullValues and Response, for example).
Of course, you could always use reflection to add the annotation dynamically, but we should avoid reflection whenever possible. Your business logic doesn't say anything about building specific rules for every field, so for me this is a clear place to use two separate DTOs.
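A minimal sketch of the two-DTO approach. Note that JsonSerialize.Inclusion is deprecated in current Jackson versions in favour of @JsonInclude, which is used here; the class and field names are illustrative, matching the scenarios above:

```java
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;

public class NullHandlingDemo {

    // DTO for consumers that want nulls: Jackson includes null fields by default
    public static class ResponseWithNullValues {
        public String attribute1;
        public String attribute2;
        public ResponseWithNullValues(String a1, String a2) { attribute1 = a1; attribute2 = a2; }
    }

    // DTO for consumers that don't: null-valued fields are dropped from the output
    @JsonInclude(JsonInclude.Include.NON_NULL)
    public static class Response {
        public String attribute1;
        public String attribute2;
        public Response(String a1, String a2) { attribute1 = a1; attribute2 = a2; }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // includes "attribute2":null
        System.out.println(mapper.writeValueAsString(new ResponseWithNullValues("value1", null)));
        // -> {"attribute1":"value1"}
        System.out.println(mapper.writeValueAsString(new Response("value1", null)));
    }
}
```

The controller would then pick which DTO to build based on the `nullsInResponse` flag from the request.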
I am trying to generate swagger on a spring boot rest service, with the following sample schema.
{
  "title": "HouseholdOperationsRequest",
  "type": "object",
  "properties": {
    "operation": {
      "type": "string",
      "enum": ["Electrical", "Plumbing", "Paint", "Handyman", ""]
    }
  }
}
When testing directly (hitting the server), the validation works fine with an empty string sent in the request. However, the swagger that is generated from this represents the empty enum at the end as "__EMPTY__" and causes a validation failure for the clients sending the request with the operation value as "".
Is there a setting with swagger that can help get around this problem?
Edit: removed "Is it a bad practice to use an empty string in the enum?"
My requirement is a bit unusual since the downstreams treat nulls differently than empty strings. The field itself is not required and is nullable.
In my opinion, it's not good to have empty strings in an enum; rather, you should keep this field optional in Swagger (if it isn't already) so the user can send a null value in that case.
Enums are generally a group of constants, and an empty string is not a constant. It should generally be replaced with a null value.
I have a simple RESTful API based on Spring MVC using a JPA-connected MySQL database. Until now this API supports complete updates of an entity only, meaning all fields must be provided inside the request body.
@ResponseBody
@PutMapping(value = "{id}")
public ResponseEntity<?> update(@Valid @RequestBody Article newArticle, @PathVariable("id") long id) {
    return service.updateById(id, newArticle);
}
The real problem here is validation: how can I validate only the provided fields while still requiring all fields during creation?
@Entity
public class Article {
    @NotEmpty @Size(max = 100) String title;
    @NotEmpty @Size(max = 500) String content;
    // Getters and Setters
}
Example for a partial update request body {"content": "Just a test"} instead of {"title": "Title", "content": "Just a test"}.
The actual partial update is done by checking if the given field is not null:
if(newArticle.getTitle() != null) article.setTitle(newArticle.getTitle());
But the validation of course won't work! I have to deactivate the validation for the update method to run the RESTful service. Essentially, I have two questions:
1. How can I validate only an "existing" subset of properties in the update method while still requiring all fields during creation?
2. Is there a more elegant way to update partially than checking for null?
The complexity of partial updates with Spring JPA is that you may receive only half of the fields populated, and even then you will need to pull the entire entity from the database and "merge" the entity and the POJO, because otherwise you risk your data by sending null values to the database.
But merging itself is tricky, because you need to operate on each field and decide whether to send the new value to the database or keep the current one. And as you add fields, the validation needs to be updated and tests get more complex. In a single statement: it doesn't scale. The idea is to always write code that is open for extension and closed for modification; if you add more fields, the validation block ideally shouldn't need to change.
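For illustration only, the field-by-field merge described above can be written once, generically, with reflection; this is a plain-Java sketch (the helper name is hypothetical), not a recommendation over the full-entity approach that follows:

```java
import java.lang.reflect.Field;

public class MergeDemo {

    public static class Article {
        public String title;
        public String content;
    }

    // Hypothetical helper: copy every non-null field of 'patch' onto 'target',
    // generalizing the "if (x != null) set(x)" blocks so they don't multiply per field.
    public static <T> void mergeNonNull(T target, T patch) throws IllegalAccessException {
        for (Field f : target.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            Object value = f.get(patch);
            if (value != null) {
                f.set(target, value);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Article stored = new Article();
        stored.title = "Title";
        stored.content = "Old content";

        Article patch = new Article();
        patch.content = "Just a test";   // title left null -> stored value kept

        mergeNonNull(stored, patch);
        System.out.println(stored.title + " / " + stored.content); // Title / Just a test
    }
}
```

Note this only sidesteps the merging boilerplate, not the validation problem: bean validation still sees the half-empty patch object.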
The way you deal with this in a REST model is by operating on the entire entity each time you need to. Let's say you have users; then you first pull a user:
GET /user/100
Then your web page has all the fields of user id=100. You change the last name and propagate the change by calling the same resource URL with the PUT verb:
PUT /user/100
And you send all the fields, or rather the "same entity", back with a new last name. And you can forget about validation: it will just work as a black box. If you add more fields, you add more @NotNull or whatever validation you need. Of course, there may be situations where you actually need to write blocks of code for validation. Even in this case the validation isn't affected, as you will have a main loop for your validation and each field will have its own validator. If you add fields, you add validators, but the main validation block remains untouched.
I stumbled upon some code that adds @JsonIgnoreProperties for a property that doesn't exist in the class but exists in the JSON, e.g.:
@JsonIgnoreProperties({"ignoreprop"})
public class VO {
    public String prop;
}
When JSON is
{ "prop":"1", "ignoreprop":"9999"}
I wonder: does ignoring properties have any performance advantage, or is it just redundant code?
Annotation that can be used to either suppress serialization of properties (during serialization), or ignore processing of JSON properties read (during deserialization).
EDIT
Is there an advantage to ignoring a specific property over ignoring all unknown ones (with @JsonIgnoreProperties(ignoreUnknown=true))?
I wonder if ignoring properties has any advantage
Yes, it is used a lot for forward compatibility in services. Let's say you have services A and B, and currently A sends requests to B with some JSON objects.
Now you want to support a new property in the JSON. With this feature, you can let A start sending the new property before B knows how to handle it, decoupling the development processes of the two services.
ignoring specific property over all
This case does have some minor performance advantages. First, it doesn't try to parse this property, which can be a simple string or a complex object/array. Second, it helps you avoid handling an exception. Consider that all of the following can be valid calls and you only care about prop:
{ "prop":"1", "ignoreprop":"9999"}
{ "prop":"1", "ignoreprop":{ "a": { "key": "value", "foo": false }}}
{ "prop":"1", "ignoreprop":[1,2,3,4,5,6..... 1000000]}
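A small sketch, reusing the VO class from the question, showing that payloads of all three shapes deserialize without error when the property is ignored (the huge-array payload is shortened here):

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

public class IgnoreDemo {

    @JsonIgnoreProperties({"ignoreprop"})
    public static class VO {
        public String prop;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        String[] inputs = {
            "{\"prop\":\"1\",\"ignoreprop\":\"9999\"}",
            "{\"prop\":\"1\",\"ignoreprop\":{\"a\":{\"key\":\"value\",\"foo\":false}}}",
            "{\"prop\":\"1\",\"ignoreprop\":[1,2,3,4,5,6]}"
        };
        for (String json : inputs) {
            // ignoreprop is skipped without being bound, whatever its shape
            VO vo = mapper.readValue(json, VO.class);
            System.out.println(vo.prop); // prints "1" each time
        }
    }
}
```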
From the documentation, the main purpose of this is "to ignore any unknown properties in JSON input without exception": it is better not to throw an exception when a property is found in the JSON but not in the class, and it may also help deserialization run slightly faster (see the docs).
Example:
// to prevent specified fields from being serialized or deserialized
// (i.e. not included in JSON output; or not being set even if they were included)
@JsonIgnoreProperties({ "internalId", "secretKey" })

// to ignore any unknown properties in JSON input without exception:
@JsonIgnoreProperties(ignoreUnknown=true)
Starting with 2.0, this annotation can be applied both to classes and to properties. If used for both, actual set will be union of all ignorals: that is, you can only add properties to ignore, not remove or override. So you can not remove properties to ignore using per-property annotation.
I have the updateProvider(ProviderUpdateDto providerUpdt) method in my Spring controller, but I do not see the need to send the whole payload of the provider entity if, for example, the client can only update the name or some other attribute. That is, it is not necessary to send the whole entity when only one field needs to be updated; this produces excessive bandwidth consumption when it is not necessary.
What is the better practice for sending only the fields that are going to be updated, while being able to build a DTO dynamically? And how would I do this if I'm using Spring Boot to build my API?
You can use the Jackson library; it provides the annotation @JsonInclude(Include.NON_NULL), and with this only properties with non-null values will be passed to your client.
See http://www.baeldung.com/jackson-ignore-null-fields for an example.
There are many techniques to improve bandwidth usage:
- don't pretty-print the JSON
- enable HTTP GZIP compression
However, it is more important to ensure your API is logically sound. Omitting some fields may break business rules, and a too fine-grained API design will also increase the interface's complexity.
Another option would be to have a DTO object for field changes that would work for every entity you have. E.g.:
class EntityUpdateDTO {
    // The class of the object you are updating. Or just use a custom identifier
    private Class<? extends DTO> entityClass;
    // the id of such object
    private Long entityId;
    // the fields you are updating
    private String[] updateFields;
    // the values of those fields...
    private Object[] updateValues;
}
Example of a JSON object:
{
  "entityClass": "MyEntityDTO",
  "entityId": 324123,
  "updateFields": [
    "property1",
    "property2"
  ],
  "updateValues": [
    "blabla",
    25
  ]
}
Might bring some issues if any of your updateValues are complex objects themselves though...
Your API would become updateProvider(EntityUpdateDTO update);.
Of course you should leave out the entityClass field if you have an update API for each DTO, as you'd already know which class entity you are working on...
Still, unless you are working with huge objects I wouldn't worry about bandwidth.
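A hypothetical sketch of how such a generic update payload could be applied, using the field and value names from the example above; the parallel arrays are walked with reflection (complex-object values, as noted, would need more care):

```java
import java.lang.reflect.Field;

public class UpdateApplier {

    // Stand-in for the entity/DTO being patched (illustrative)
    public static class MyEntityDTO {
        public String property1;
        public Integer property2;
    }

    // Hypothetical helper: set each named field to the value at the same index
    public static void apply(Object entity, String[] updateFields, Object[] updateValues)
            throws ReflectiveOperationException {
        for (int i = 0; i < updateFields.length; i++) {
            Field f = entity.getClass().getField(updateFields[i]);
            f.set(entity, updateValues[i]);
        }
    }

    public static void main(String[] args) throws Exception {
        MyEntityDTO dto = new MyEntityDTO();
        apply(dto,
              new String[]{"property1", "property2"},
              new Object[]{"blabla", 25});
        System.out.println(dto.property1 + " " + dto.property2); // blabla 25
    }
}
```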
I'm creating an API with Spring Boot and I'm trying to think of the best way to handle two requests that have the same endpoint: if the request has parameters X, Y and Z it does thing A, otherwise it does thing B.
What is the best way to handle this? Maybe a middleware that inspects the request and routes to the right method? Or maybe checking whether parameter X is null? Thanks for helping.
For Example:
Request A:
/payment
{
  "holder_name": "John Doe",
  "payment_method": {
    "card_number": "5478349021823961",
    "exp_date": "2018-10-16",
    "cvv": "713"
  }
}
Request B:
/payment
{
  "holder_name": "John Doe",
  "payment_method": {
    "boleto_number": "123456789"
  }
}
These two requests have the same endpoint but the payment methods are different.
The best way depends on the specific details and needs of the application. Checking whether values are missing or null may not be the best thing to do; e.g. if you receive all four fields card_number, exp_date, cvv and boleto_number, what would be the correct action to perform?
You could require the type in the JSON, in an extra field at root level:
{
  "holder_name": "John Doe",
  "payment_method": "TICKET",
  "payment_details": {
    "ticket_number": "123456789"
  }
}
Or with a field inside the current payment_method object:
{
  "holder_name": "John Doe",
  "payment_method": {
    "type": "TICKET",
    "ticket_number": "123456789"
  }
}
This way you can first check the method type, then check that the other fields and details match what you expect for that type, and then redirect to the correct handler.
Also, you should decide whether to allow unknown fields or fields that do not belong to the specified method. For instance, if you decide not to be that strict, then receiving the four fields mentioned before with the type CREDIT_CARD can be processed as a credit card payment, with the extra field ignored. If you decide to be stricter, an error could be thrown indicating that unexpected fields were received.
You could use an enum to define the types:
public enum PaymentType {
    TICKET,
    CREDIT_CARD,
    CASH
}
Finally, depending on whether you are using classes or map-like objects as the structure for the processed JSON, you could use custom converters to transform the object easily, depending on the type.
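A minimal sketch of the discriminator-based routing described above, assuming Jackson for parsing; the handle method and its return strings are hypothetical stand-ins for real payment handlers:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PaymentDispatchDemo {

    public enum PaymentType { TICKET, CREDIT_CARD, CASH }

    // Hypothetical dispatcher: read the "type" discriminator, then route
    public static String handle(String json) throws Exception {
        JsonNode root = new ObjectMapper().readTree(json);
        JsonNode method = root.path("payment_method");
        PaymentType type = PaymentType.valueOf(method.path("type").asText());
        switch (type) {
            case TICKET:
                return "processing ticket " + method.path("ticket_number").asText();
            case CREDIT_CARD:
                return "processing card " + method.path("card_number").asText();
            default:
                return "processing cash payment";
        }
    }

    public static void main(String[] args) throws Exception {
        String request = "{\"holder_name\":\"John Doe\","
            + "\"payment_method\":{\"type\":\"TICKET\",\"ticket_number\":\"123456789\"}}";
        System.out.println(handle(request)); // processing ticket 123456789
    }
}
```

An unknown type string makes valueOf throw IllegalArgumentException, which a controller would translate into a 400 response; strictness about extra fields would be enforced before the switch.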