How to define forms to persist complex objects in Spring? - java

I need to define a form to create instances of one of my objects. This is an easy task and I already have some, but this specific object has a reference to another object that I must set in the form. How can this be done? I know I could ask the user to enter the id, store it in a DTO and fetch the real object later, but I suppose that this is not the best way to accomplish this. What can I do?
Here are my entities:
@Entity
public class Route {

    @Id
    @GeneratedValue
    private Long id;

    @Column(nullable = false)
    private Long distance;

    @Column(nullable = false)
    private String name;

    private String description;

    @ManyToOne
    @JoinColumn
    private Place origin;
}
And this is the referenced object:
@Entity
public class Place {

    @Id
    @GeneratedValue
    private Long id;

    @Column(nullable = false)
    private String name;

    private Long latitude;
    private Long longitude;
    private String imagePath;

    @OneToMany(mappedBy = "origin", cascade = CascadeType.REMOVE)
    private Set<Route> originRoutes;
}

It actually depends on the use case in more detail. Let's consider a couple of possibilities:
Create a Place together with some Routes in one go - in this case I would implement the view so that it creates a complex structure reflecting the entities one to one, like you have them defined for JPA, and passes it in a single POST request. We basically assume that there is a limited number of routes that are always created together with places. Routes cannot be shared among different Places, as that makes no sense.
@RequestMapping(method = POST, value = "/places")
public CreatePlaceResponse createPlace(@RequestBody Place place) {
    ...
}
Create a Place and provide Routes in subsequent requests - if we want more flexibility and/or expect the number of routes assigned to each place to be large, we may first create a Place and then use another request to assign Routes to it (referring to the place id). This way you let the user create the whole structure step by step, and you also give them the possibility to add a Route later on.
@RequestMapping(method = POST, value = "/places")
public CreatePlaceResponse createPlace(@RequestBody Place place) {
    ...
}

@RequestMapping(method = POST, value = "/places/{placeId}")
public AddRouteResponse addRoute(@PathVariable Long placeId, @RequestBody Route route) {
    ...
}
Depending on the case you can also think of bulk creation of Routes, i.e. passing a list of Routes to an already created Place - see the sketch below.
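A rough sketch of such a bulk endpoint, assuming Spring Data style repositories (PlaceRepository, RouteRepository) and a setOrigin(..) setter on Route - none of which are shown in the question:

import java.util.List;
import org.springframework.web.bind.annotation.*;

@RestController
public class RouteBulkController {

    private final PlaceRepository placeRepository;   // assumed Spring Data repositories
    private final RouteRepository routeRepository;

    public RouteBulkController(PlaceRepository placeRepository, RouteRepository routeRepository) {
        this.placeRepository = placeRepository;
        this.routeRepository = routeRepository;
    }

    @PostMapping("/places/{placeId}/routes")
    public List<Route> addRoutes(@PathVariable Long placeId, @RequestBody List<Route> routes) {
        Place place = placeRepository.findById(placeId)
                .orElseThrow(() -> new IllegalArgumentException("No place with id " + placeId));
        routes.forEach(route -> route.setOrigin(place)); // attach every route to the existing place
        return routeRepository.saveAll(routes);
    }
}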

On the UI side I can see this working using either a hidden field that stores the place id in response to the text entered, or a 'fancy' select such as the one linked below, which lets you type into the select to filter and won't load thousands of records into memory at once:
http://silviomoreto.github.io/bootstrap-select/
Either way, you are going to bind the hidden field or the selected option to the Route's Place reference (the origin field in your entity):
route.origin
e.g.
<form:hidden path="origin" value="id_of_place_updated_by_javascript" />
or
<form:select path="origin">
You will then register a converter which will convert the submitted value to the required type i.e. Place. On submit your converter takes the Place ID and retrieves the corresponding Place from the Database. The framework will then bind the Place returned by the Converter to the Route backing the form.
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/validation.html#core-convert
See here for an example using a Formatter to do the conversion:
http://springinpractice.com/2012/01/07/making-formselect-work-nicely-using-spring-3-formatters
In the final example you would just go off to the database to get the relevant entity rather than creating a new instance.
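A rough sketch of such a converter (PlaceRepository is an assumed Spring Data repository; in plain Spring MVC you also register the converter via WebMvcConfigurer#addFormatters, while Spring Boot picks up Converter beans automatically):

import org.springframework.core.convert.converter.Converter;
import org.springframework.stereotype.Component;

@Component
public class StringToPlaceConverter implements Converter<String, Place> {

    private final PlaceRepository placeRepository;

    public StringToPlaceConverter(PlaceRepository placeRepository) {
        this.placeRepository = placeRepository;
    }

    @Override
    public Place convert(String id) {
        // the form submits the Place id; load the existing entity instead of creating a new one
        return placeRepository.findById(Long.valueOf(id))
                .orElseThrow(() -> new IllegalArgumentException("Unknown place id: " + id));
    }
}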

Related

Spring's Entity inside nodejs

The way I have managed persistent state in my backends in the past is by using Spring's @Entity. Basically, I define a regular Java class like this:
@Entity
@Table(name = "users")
class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "user_id")
    public Long user_id;

    @Column(name = "email")
    public String email;
}
Then I can use Hibernate to retrieve a managed Java object:
User user = userFactory.getUserFromId(3L); // uses Hibernate internally to get a managed instance
user.email = "abc@company.com"; // gets auto-saved to the MySQL database
This is highly convenient, as I can just change the fields without explicitly worrying about saving data. It's also quite convenient that I can specify a number of fields to be used as an index, so I can search for matching email addresses quite fast.
How would I go about doing the same thing using NodeJS? I need to use Node as my team is not familiar with Java at all. I want to be able to store complex JSON objects fast, and have a cached version in memory that ideally persists the data at regular intervals.
My current plan is to use Redis for this. Getting a user's object should look something like this:
class User {
    constructor(json) {
        this.json = json;
    }

    async save() {
        await redis.set("user-" + this.json.id, JSON.stringify(this.json));
    }
}

async function user(id) {
    let json = JSON.parse(await redis.get("user-" + id));
    return new User(json);
}

const u3 = await user(3);
u3.json.email = "def@company.com";
await u3.save();
To get a user by name, I'd create my own index (a mapping from email to user id), and potentially store this index inside Redis as well.
All of this seems clunky, and it feels like I'm reimplementing basic database features. So before I do it like this: are there different ways to manage JSON objects well in JS, so that the coding experience is roughly the same as in Spring?
What you need is an ORM, i.e. the tool that, in every language, maps your objects to database records.
With a quick search you can find Sequelize, which is very popular in the NodeJS world.

Should I use model classes or payload classes to serialize a JSON response

I'm using Spring Boot with MySQL to create a RESTful API. Here's an example of how I return a JSON response.
First, I have a model:
@Entity
public class Movie extends DateAudit {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
    private Date releaseDate;
    private Time runtime;
    private Float rating;
    private String storyline;
    private String poster;
    private String rated;

    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieMedia> movieMedia = new ArrayList<>();

    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieReview> movieReviews = new ArrayList<>();

    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieCelebrity> movieCelebrities = new ArrayList<>();

    // Setters & Getters
}
and the corresponding repository:
@Repository
public interface MovieRepository extends JpaRepository<Movie, Long> {
}
I also have a payload class MovieResponse which represents a movie instead of the Movie model, for example when I need extra fields or need to return only specific fields.
public class MovieResponse {

    private Long id;
    private String name;
    private Date releaseDate;
    private Time runtime;
    private Float rating;
    private String storyline;
    private String poster;
    private String rated;

    private List<MovieCelebrityResponse> cast = new ArrayList<>();
    private List<MovieCelebrityResponse> writers = new ArrayList<>();
    private List<MovieCelebrityResponse> directors = new ArrayList<>();

    // Constructors, getters and setters

    public void setCelebrityRoles(List<MovieCelebrityResponse> movieCelebrities) {
        this.setCast(movieCelebrities.stream().filter(movieCelebrity -> movieCelebrity.getRole().equals(CelebrityRole.ACTOR)).collect(Collectors.toList()));
        this.setDirectors(movieCelebrities.stream().filter(movieCelebrity -> movieCelebrity.getRole().equals(CelebrityRole.DIRECTOR)).collect(Collectors.toList()));
        this.setWriters(movieCelebrities.stream().filter(movieCelebrity -> movieCelebrity.getRole().equals(CelebrityRole.WRITER)).collect(Collectors.toList()));
    }
}
As you can see, I divide the movieCelebrities list into 3 lists (cast, directors and writers).
And to map a Movie to a MovieResponse I'm using a ModelMapper class:
public class ModelMapper {

    public static MovieResponse mapMovieToMovieResponse(Movie movie) {
        // Create a new MovieResponse and assign the Movie data to it
        MovieResponse movieResponse = new MovieResponse(movie.getId(), movie.getName(), movie.getReleaseDate(),
                movie.getRuntime(), movie.getRating(), movie.getStoryline(), movie.getPoster(), movie.getRated());

        // Get the MovieCelebrities for the current Movie
        List<MovieCelebrityResponse> movieCelebrityResponses = movie.getMovieCelebrities().stream().map(movieCelebrity -> {
            // Get the Celebrity for the current MovieCelebrity
            CelebrityResponse celebrityResponse = new CelebrityResponse(movieCelebrity.getCelebrity().getId(),
                    movieCelebrity.getCelebrity().getName(), movieCelebrity.getCelebrity().getPicture(),
                    movieCelebrity.getCelebrity().getDateOfBirth(), movieCelebrity.getCelebrity().getBiography(), null);
            return new MovieCelebrityResponse(movieCelebrity.getId(), movieCelebrity.getRole(), movieCelebrity.getCharacterName(), null, celebrityResponse);
        }).collect(Collectors.toList());

        // Assign movieCelebrityResponses to movieResponse
        movieResponse.setCelebrityRoles(movieCelebrityResponses);
        return movieResponse;
    }
}
And finally, here's my MovieService implementation, which I call from the controller:
@Service
public class MovieServiceImpl implements MovieService {

    private MovieRepository movieRepository;

    @Autowired
    public void setMovieRepository(MovieRepository movieRepository) {
        this.movieRepository = movieRepository;
    }

    public PagedResponse<MovieResponse> getAllMovies(Pageable pageable) {
        Page<Movie> movies = movieRepository.findAll(pageable);

        if (movies.getNumberOfElements() == 0) {
            return new PagedResponse<>(Collections.emptyList(), movies.getNumber(),
                    movies.getSize(), movies.getTotalElements(), movies.getTotalPages(), movies.isLast());
        }

        List<MovieResponse> movieResponses = movies.map(ModelMapper::mapMovieToMovieResponse).getContent();
        return new PagedResponse<>(movieResponses, movies.getNumber(),
                movies.getSize(), movies.getTotalElements(), movies.getTotalPages(), movies.isLast());
    }
}
So the question here: is it fine to have a payload class for each model for JSON serialization, or is there a better way?
Also, if there's anything wrong with my code, feel free to comment.
I had this dilemma not so long back; this was my thought process. I have written it up here: https://stackoverflow.com/questions/44572188/microservices-restful-api-dtos-or-not
The Pros of Just exposing Domain Objects
The less code you write, the fewer bugs you produce.
Despite having extensive (arguably) test cases in our code base, I have come across bugs due to missing/wrong copying of fields from domain to DTO or vice versa.
Maintainability - less boilerplate code.
If I have to add a new attribute, I don't have to add it in the Domain, the DTO, the Mapper and the test cases, of course. Don't tell me that this can be achieved using reflection-based bean-copy utils like Dozer or MapStruct; it defeats the whole purpose.
Lombok, Groovy, Kotlin - I know, but they will only save me the getter/setter headache.
DRY
Performance
I know this falls under the category of "premature optimization is the root of all evil". But still, this will save some CPU cycles by not having to create (and later garbage collect) at least one more object per request.
Cons
DTOs will give you more flexibility in the long run
If only I ever needed that flexibility. At least, whatever I have come across so far are CRUD operations over HTTP which I can manage using a couple of @JsonIgnores. Or if there are one or two fields that need a transformation which cannot be done with a Jackson annotation, as I said earlier, I can write custom logic to handle just that.
Domain Objects getting bloated with Annotations.
This is a valid concern. If I use JPA or MyBatis as my persistence framework, the domain object might have those annotations, and then there will be Jackson annotations too. If you are using Spring Boot you can get away with application-wide properties like mybatis.configuration.map-underscore-to-camel-case: true and spring.jackson.property-naming-strategy: SNAKE_CASE.
Short story: at least in my case, the cons didn't outweigh the pros, so it did not make any sense to repeat myself by having a new POJO as a DTO. Less code, fewer chances of bugs. So I went ahead with exposing the domain object and not having a separate "view" object.
Disclaimer: This may or may not be applicable in your use case. This observation is per my use case (basically a CRUD API with ~15 endpoints).
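As a concrete illustration of the @JsonIgnore route mentioned above, a sketch reusing the question's Movie entity (only the relevant parts shown):

import com.fasterxml.jackson.annotation.JsonIgnore;

@Entity
public class Movie extends DateAudit {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    // keep the association out of the serialized JSON without introducing a DTO
    @JsonIgnore
    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieMedia> movieMedia = new ArrayList<>();
}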
We should keep each layer separate from the others. As in your case, you have defined the entity and response classes. This is the right way to separate things; we should never send the entity in the response. Even for the request we should have a separate class.
What is the issue if we send the entity instead of a response DTO?
We are no longer free to modify the entity, because we have already exposed it to our clients.
Sometimes we don't want to serialize some fields and send them in the response.
There is some overhead in translating request to domain, entity to domain, etc., but it is worth it to keep things organized. ModelMapper is a good choice for the translation.
Try to use constructor injection instead of setter injection for mandatory dependencies, as sketched below.
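For example, a minimal sketch of MovieServiceImpl with constructor injection (with a single constructor, Spring does not even need @Autowired):

@Service
public class MovieServiceImpl implements MovieService {

    private final MovieRepository movieRepository; // mandatory dependency, can now be final

    public MovieServiceImpl(MovieRepository movieRepository) {
        this.movieRepository = movieRepository;
    }

    // ... getAllMovies(..) stays as in the question
}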
It is always recommended to separate the DTO and the Entity.
The Entity should interact with the DB/ORM and the DTO should interact with the client layer (the layer for request and response), even if the structure of the Entity and the DTO is the same.
Here the Entity is Movie and the DTO is MovieResponse.
Use your existing class MovieResponse for request & response.
Never use the Movie class for request & response.
And the class MovieServiceImpl should contain the business logic for converting the Entity to the DTO, or you can use the Dozer API to do the conversion automatically.
The reasons for separating:
In case you need to add/remove elements in the request/response, you don't have to change much code.
If 2 entities have a two-way mapping (e.g. a one-to-many/many-to-many relationship), then the JSON object can't be created if the objects have nested data; this will throw an error while serializing.
If anything changes in the DB or the Entity, this will not affect the JSON response (most of the time).
The code will be clearer and easier to maintain.
On one side, you should separate them because sometimes some of the JPA annotations which you use in your model don't work well with the JSON processor annotations. And yes, you should keep the things separated.
What if you later decide to change your data layer? Will you have to rewrite all your client side?
On the other side, there is the problem of mapping. For that, you can use a library, with a small performance penalty - see the sketch below.
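For example, a minimal sketch using the ModelMapper library (org.modelmapper; the MovieMapper class name is made up, and any other mapping library would work the same way):

import org.modelmapper.ModelMapper;

public class MovieMapper {

    private static final ModelMapper MAPPER = new ModelMapper();

    // copies matching properties from the entity to the DTO by name
    public static MovieResponse toResponse(Movie movie) {
        return MAPPER.map(movie, MovieResponse.class);
    }
}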
DTO is a design pattern and solves the problem of fetching as much useful data from a service as possible.
In the case of a simple application like yours, the DTOs tend to be similar to the Entity classes. However, for certain complex applications, DTOs can be extended to combine data from various entities, avoiding multiple requests to the server and thus saving valuable resources and request-response time.
I would suggest not duplicating the code in a simple case like this and using the model classes in the API responses as well. Separate response classes as DTOs would not serve any purpose here and would only make the code harder to maintain.
While most people have answered with the pros and cons of using DTO objects, I would like to give my 2 cents. In my case a DTO was necessary because not all fields persisted in the database were captured from the user. A few fields were computed based on the user's input (of other fields) and were not exposed to users. Also, a DTO can reduce the size of the payload, which could result in better performance in such cases.
I advocate for separating the "Payload" or "Data" object from the "Model" or "Display" object. Pretty much always. This just keeps things easier to manage.
Here's an example:
Let's say you need to hit an API that gives you data about cats for sale. Then you parse the data into a cat model object and populate a list of cats that is then displayed to the user. Cool.
But now you want to integrate another API and pull cats from 2 databases. But you run into a problem. One API returns furColor for the color and the new one returns catColor for the color.
If you were using the same object to also display the info, you have some options:
Add both furColor and catColor to the model object, make them both optional, and do some kind of computed property to check which one is set and use that one to display the color
In reality, this is rarely an option because the responses will usually differ by much more than just one value, so you would likely need a whole new parser anyway.
Add a new data object and also a new adapter, and then do some kind of check to know which adapter to use when.
Something else that still isn't pretty or fun to work with.
However, if you create a data object that captures the response, and then a display object that has only the info needed to populate the list, this becomes really easy:
You have a data object that captures the response from the first API
Now make a data object that captures the response from the second API
Now all you need is some kind of simple mapper to map the response to the Display Object
Now both will be converted to a common simple display object, and the same adapter can be used to display the new cats without additional work
This also will make storing the data locally much cleaner.
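A minimal sketch of that idea (all class and field names here are invented for illustration):

// raw payload objects, one per API
class ApiOneCat { String name; String furColor; }
class ApiTwoCat { String name; String catColor; }

// the single display model the rest of the UI works with
class CatDisplay {
    final String name;
    final String color;

    CatDisplay(String name, String color) {
        this.name = name;
        this.color = color;
    }
}

// simple mappers; the list adapter only ever sees CatDisplay
class CatMappers {
    static CatDisplay fromApiOne(ApiOneCat cat) { return new CatDisplay(cat.name, cat.furColor); }
    static CatDisplay fromApiTwo(ApiTwoCat cat) { return new CatDisplay(cat.name, cat.catColor); }
}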

How to return full json using Manual ObjectID reference with Morphia

Normally I use this:
public class Person {
    ...
    @Id ObjectId id;
    String name;
    @Reference User user;
    ...
}
and it stores the $ref and ObjectId of the user... and when I request it, I get this JSON:
Person {
    id...
    User {
        login:
        password:
    }
}
but "they" say to not use #Reference, to use the Manual Reference storing
so instead of #Referente to use something like ObjectId userID;
but if i use this, how can i build the json to return the full User? since i cant do something person.setUser(userFromDataBaseByStoredReferenceId);
or i really have to work using 2 attbrs in the class, one for storing the ID of the user, and another "User user", so i can set it and create the full json?
something like:
public class Person {
    ...
    @Id ObjectId id;
    String name;
    ObjectId userID;
    User user; // so I have to fill this to build the full JSON, after doing an extra lookup by userID
}
Who is "they"? It will depend on your use case:
#Reference eagerly loads all referenced entities by default. If you always need all of this data, this is the right approach.
If you don't need all the (referenced) data most of the time, you can add the lazy attribute to the reference annotation. It will only load the data if you are actually accessing it.
If you have very big entities and rarely need to traverse entities, you can roll your own reference approach by simply adding the ObjectId of a user. However, you will need to issue a query for that yourself and you cannot access it like person.getUser() any longer.
PS: Your JSON example looks a bit like an embedded entity. Maybe this is a better approach for your scenario?
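For instance, a sketch of the lazy variant (Morphia's @Reference supports a lazy attribute; the class otherwise mirrors the one from the question):

public class Person {
    @Id ObjectId id;
    String name;

    // the referenced User document is only fetched when it is first accessed
    @Reference(lazy = true)
    User user;
}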
You can use a @PostLoad method to load the referenced entity by its id.
@PostLoad
void postLoad() {
    this.user = UserDAO.get(userID);
}

How to tell Hibernate to conditionally ignore columns in CRUD operations

Is it possible to somehow tell Hibernate to conditionally ignore a missing column in a database table while doing the CRUD operations?
I've got a Java application using Hibernate as the persistence layer. I'd like to be able to somehow tell Hibernate: if the database version is < 50, then ignore this column annotation (or treat the property as transient).
This situation arises because different clients run different database versions, but the entity code is the same for all sites. For example, I've got a class where the column description2 might be missing in some databases.
@Entity
@Table(name = "MY_TABLE")
public class MyTable implements java.io.Serializable {

    private Integer serialNo;
    private String pickCode;
    private String description1;
    private String description2;

    @Id
    @Column(name = "Serial_No", nullable = false)
    @GenericGenerator(name = "generator", strategy = "increment")
    @GeneratedValue(generator = "generator")
    public Integer getSerialNo() {
        return this.serialNo;
    }

    @Column(name = "Pick_Code", length = 25)
    public String getPickCode() {
        return this.pickCode;
    }

    @Column(name = "Description1")
    public String getDescription1() {
        return this.description1;
    }

    @Column(name = "Description2") // <- this column might be missing in some databases
    //@TransientIf(...) <- something like this would be nice, or any other solution
    public String getDescription2() {
        return this.description2;
    }
}
Background: I have a large application with a lot of customizations for different clients. From time to time one client (out of, let's say, 500) gets a new feature that requires a database structure update (e.g. a new field in a table). I release a new version for them, they run a database schema update and can use the new feature. But all the other clients won't do an incremental database update every time some user gets a new feature. They just want to use the latest version, yet they are affected by the new feature (for that one client) that they will never use.
I think it is only possible if you separate the mapping definition from the entities so that you can replace it. Thus you cannot use annotation-based mapping.
Instead I would suggest using XML-based mapping and creating different XML mapping files for each client. Since you have about 500 clients you might want to create groups of clients who all share the same mapping file.
At least I think it will be very hard to maintain the different clients' needs with one entity model, and it will lead to a complex code structure. E.g. if you add properties to the entities that can be null for some clients, then you will also add a lot more null checks to your code - one null check for each client-specific property.
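A rough sketch of how a per-client mapping file could be picked when the SessionFactory is built (the file names and the dbVersion lookup are assumptions):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionFactoryProvider {

    public SessionFactory build(int dbVersion) {
        Configuration configuration = new Configuration().configure(); // reads hibernate.cfg.xml

        // choose the XML mapping that matches the client's schema version
        if (dbVersion < 50) {
            configuration.addResource("mappings/MyTable-v49.hbm.xml");     // without Description2
        } else {
            configuration.addResource("mappings/MyTable-current.hbm.xml"); // with Description2
        }
        return configuration.buildSessionFactory();
    }
}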

Avoid having JPA to automatically persist objects

Is there any way to avoid having JPA automatically persist objects?
I need to use a third-party API and I have to pull/push data from/to it. I've got a class responsible for interfacing with the API and I have a method like this:
public User pullUser(int userId) {
    Map<String, String> userData = getUserDataFromApi(userId);
    return new UserJpa(userId, userData.get("name"));
}
Where the UserJpa class looks like:
@Entity
@Table
public class UserJpa implements User {

    @Id
    @Column(name = "id", nullable = false)
    private int id;

    @Column(name = "name", nullable = false, length = 20)
    private String name;

    public UserJpa() {
    }

    public UserJpa(int id, String name) {
        this.id = id;
        this.name = name;
    }
}
When I call the method (e.g. pullUser(1)), the returned user is automatically stored in the database. I don't want this to happen; is there a solution to avoid it? I know a solution could be to create a new class implementing User and return an instance of that class from pullUser() - is this good practice?
Thank you.
A newly created instance of UserJpa is not persisted in pullUser. I also assume there is no odd implementation in getUserDataFromApi that actually persists something for the same id.
In your case the entity manager knows nothing about the new instance of UserJpa. Generally, entities are persisted via merge/persist calls or as a result of a cascaded merge/persist operation. Check for these elsewhere in the code base.
The only way a new entity gets persisted in JPA is by explicitly calling the EntityManager's persist() or merge() methods. Look in your code for calls to either of them; that's where the persist operation is happening, and refactor the code to perform the persistence elsewhere.
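To make that concrete, a sketch assuming an injected EntityManager and a transaction in progress (the "name-from-api" value is a placeholder):

import javax.persistence.EntityManager;

public class UserPuller {

    private final EntityManager entityManager;

    public UserPuller(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    public UserJpa pullUserWithoutSaving(int userId) {
        // a plain new object: not managed, nothing is written to the database
        return new UserJpa(userId, "name-from-api");
    }

    public UserJpa pullUserAndSave(int userId) {
        UserJpa user = new UserJpa(userId, "name-from-api");
        entityManager.persist(user); // only this explicit call schedules the INSERT
        return user;
    }
}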
Generally, JPA objects are managed objects: they reflect their changes into the database when the transaction completes (and, before that, into the first-level cache), but obviously these objects need to become managed in the first place.
I really think that a best practice is to use a DTO object to handle the data transfer and use the entity just for persistence purposes. That way the design is more cohesive and less coupled - no objects poking their nose where they shouldn't; see the sketch below.
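A minimal sketch of that approach (UserDto is a made-up name, assuming the User interface only exposes id and name):

// plain transfer object, never seen by JPA
public class UserDto implements User {

    private final int id;
    private final String name;

    public UserDto(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() { return id; }
    public String getName() { return name; }
}

pullUser would then return a UserDto, and a separate persistence service would copy it into a UserJpa entity only when you actually want to store it.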
Hope it helps.
