So I'm writing a personal project to learn about web programming and I came across the DAO pattern. I've built some classes (models) and, like in almost any program, they are nested (e.g. the class Payment has a reference to an Author instance).
Just for reference, I'm not using any mappers (I will add them in a later iteration), and I'm using JDBC, not JPA.
My question is this:
When I created the PaymentJdbcDao I had a method that returns some Payment, but in order to build this payment from the data stored in the database I must also fetch the Author (stored in a separate table).
Should I call the UserJdbcDao from the PaymentJdbcDao in order to get the payment's author, should I alter the query with a join to retrieve the fields of both entities, should the PaymentJdbcDao call the UserService (I think this isn't good since the services are on the layer above the DAOs), or should I remove the author reference as an object and just hold the author_id?
Which of the above is the most appropriate way to accomplish this? Or is there another way that is better practice?
Thanks.
I call my DAOs (Data Access Objects) "Repositories".
Spring Data JPA is doing this as well.
So I would create a UserRepository and a PaymentRepository.
Repositories can be called by other Repositories or Services.
Services should never be called by Repositories.
UI -> Service -> Repository.
Your PaymentRepository could return an Entity like this
public class PaymentEntity {
    private long id;
    private DateTime dateTime;
    private UserEntity user;
}
Your UserRepository could return an Entity like this
public class UserEntity {
    private long id;
    private DateTime lastLogin;
    private List<PaymentEntity> payments;
}
Your Repositories could look like this.
public interface PaymentRepository {
    PaymentEntity getPaymentById(long id);
    List<PaymentEntity> getAllPayments();
}
public interface UserRepository {
    UserEntity getUserById(long id);
    List<UserEntity> getAllUsers();
}
So your PaymentRepository would call the UserRepository to get the User for your Payment.
And your UserRepository would call the PaymentRepository to get all of a user's Payments.
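Since you are on plain JDBC, a minimal sketch of a JDBC-backed implementation of this PaymentRepository, delegating to the UserRepository for the nested user, could look like the following (the table and column names, the setters on PaymentEntity, and the injected DataSource are assumptions for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;

public class JdbcPaymentRepository implements PaymentRepository {

    private final DataSource dataSource;         // assumed to be configured elsewhere
    private final UserRepository userRepository;  // delegate for the nested UserEntity

    public JdbcPaymentRepository(DataSource dataSource, UserRepository userRepository) {
        this.dataSource = dataSource;
        this.userRepository = userRepository;
    }

    @Override
    public PaymentEntity getPaymentById(long id) {
        // "payment", "id" and "user_id" are assumed table/column names
        String sql = "SELECT id, user_id FROM payment WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null;
                }
                PaymentEntity payment = new PaymentEntity();
                payment.setId(rs.getLong("id"));
                // delegate loading of the nested object to the UserRepository
                payment.setUser(userRepository.getUserById(rs.getLong("user_id")));
                return payment;
            }
        } catch (SQLException e) {
            throw new IllegalStateException("Could not load payment " + id, e);
        }
    }

    @Override
    public List<PaymentEntity> getAllPayments() {
        throw new UnsupportedOperationException("omitted for brevity");
    }
}

Note that if the UserRepository in turn calls back into the PaymentRepository to load a user's payments, you will need to break the resulting cycle, for example by loading the payments lazily or by populating only one direction here.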
I hope I was able to help you
I know how to persist an entity with a @ManyToMany relation in pure Hibernate. But how do I do it correctly in Spring?
In Hibernate that would be:
EntityManager em = getEntityManager();
em.getTransaction().begin();
Book book = new Book("title", "isbn");
Category category = new Category("horror");
category.addBook(book);
em.persist(book);
em.getTransaction().commit();
em.close();
Something like this. But what about Spring? Let's say that I have a @Service which receives a Book or BookDto from a @RestController.
public void saveBook(Book book) {
    // what now?
}
What should I do here? Would bookRepository.save(book); be enough to save this @ManyToMany relation? Don't I need additional methods like addBook, addCategory, removeBook, etc.?
Thank you in advance.
Spring's repository does nothing more than wrap the entityManager and execute its persist method. You can have a look at the default implementation.
That's why whatever is possible in "pure Hibernate" is also possible with Spring's repositories, with a couple of notes.
Note 1: With Spring you will most probably replace the transaction boilerplate with @Transactional annotations, which is why you need to be careful about entities passed between methods.
Note 2: The code you referred to as "pure Hibernate" is actually "pure JPA"; no Hibernate was mentioned. Probably Hibernate was your JPA implementation.
The standard recipe to handle state management of your entities with the use of Spring Boot is:
Create entity classes annotated with @Entity (which you have already done, I presume) - for example a Book entity;
Create a repository interface that extends JpaRepository<T, ID>, where T is your entity class (create one for Book and another one for Category) and ID is the type of your entity's primary key. This will give you the advantage of having a default implementation of many useful repository methods, like save(), findById(), etc.;
public interface BookRepository extends JpaRepository<Book, Long> {}
assuming that you have set Long as the primary key type of your Book entity;
Inject the repository into the service class;
Now you can use your service class and method like so:
@RequiredArgsConstructor
@Service
public class BookService {

    private final BookRepository bookRepository;

    @Transactional
    public void saveBook() {
        // just copying the logic from your question,
        // but normally you would pass it as an argument to the method
        Book book = new Book("title", "isbn");
        Category category = new Category("horror");
        category.addBook(book);
        bookRepository.save(book);
    }
}
Answering your second question: yes, you have to take care of keeping both sides of the relationship in sync; there is no getting away from that here. So in a @ManyToMany relationship, you have to add the category to the book's categories collection and vice versa.
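A minimal sketch of such helper methods, assuming Category maps the inverse side of the association and Book owns it via a categories collection (the field names and the owning side are assumptions):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToMany;

@Entity
public class Category {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToMany(mappedBy = "categories")
    private Set<Book> books = new HashSet<>();

    // keeps both sides of the association in sync; assumes Book exposes getCategories()
    public void addBook(Book book) {
        books.add(book);
        book.getCategories().add(this);
    }

    public void removeBook(Book book) {
        books.remove(book);
        book.getCategories().remove(this);
    }
}

With both sides kept in sync like this, bookRepository.save(book) inside a @Transactional method is enough, provided the cascade settings cover the new Category or the Category is saved through its own repository as well.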
For this question, I am not looking for a solution, but for a direction from which I can take things forward, hence I am not sharing any code.
I am preparing a REST API and I have a PostgreSQL database set up locally, which has 2 tables and one view built from those 2 tables.
Normally when I want to get any data from the DB, I use the following code (shown for the sake of clarity):
DataRepository class:
public interface DataRepository extends CrudRepository<Data, String>{}
DataService class:
@Service
public class DataService {

    @Autowired
    private DataRepository repo;

    public Data getData(String id) {
        return repo.findById(id).orElse(null);
    }
}
DataController class:
@RestController
public class DataController {

    @Autowired
    private DataService service;

    @RequestMapping("/{id}")
    public Data getData(@PathVariable String id) {
        return service.getData(id);
    }
}
Data class:
@Entity
public class Data {

    @Id
    private String id;
    private String name;

    // respective getter and setter methods
}
Now I want to retrieve data from a view, so what should the approach be for that?
Should we use the same approach of creating Model, Service, Controller, and Repository classes?
Can we use CrudRepository to achieve the same?
I searched in a lot of places, but didn't find anything useful.
Let me know if anyone has any clue on this.
The reading methods of a CrudRepository should work fine with a view. For the writing methods, the view needs to be updatable.
If you only want to read, but not to write to the repository, you can create a ReadOnlyRepository by copying the source code of the CrudRepository and removing all the writing methods.
Note that JPA will still try to persist changes made to managed entities.
To avoid that, and also to avoid the cost of dirty checking, you can mark your entities as immutable if you are using Hibernate.
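One way to sketch that with Spring Data and Hibernate (the view name, entity fields, and repository names are illustrative; @NoRepositoryBean keeps Spring from instantiating the base interface itself):

import java.util.List;
import java.util.Optional;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.Immutable;
import org.springframework.data.repository.NoRepositoryBean;
import org.springframework.data.repository.Repository;

// base interface exposing only reading methods
@NoRepositoryBean
interface ReadOnlyRepository<T, ID> extends Repository<T, ID> {
    Optional<T> findById(ID id);
    List<T> findAll();
}

// entity mapped onto the database view; @Immutable tells Hibernate
// not to track or persist changes to it
@Entity
@Immutable
@Table(name = "data_view") // assumed view name
class DataView {

    @Id
    private String id;
    private String name;

    // getters only
}

interface DataViewRepository extends ReadOnlyRepository<DataView, String> {}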
I am using Spring Data JPA for my project and I have the following pieces of code:
My Repository:
@Repository
@Transactional
public interface StudentRepository extends PagingAndSortingRepository<Student, Long> {
    List<Student> findByIdIn(List<Long> ids);
}
and my entity is:
@Entity
public class Student implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    private Long id;
    // other fields
    // getters/setters
}
Somewhere in my service class, I have
@Autowired
StudentRepository studentRepository;
and then I call findByIdIn from my service layer like:
studentRepository.findByIdIn(listOfIds);
The findByIdIn(listOfIds) method works perfectly fine and everything works as expected.
I know that the implementation of the findByIdIn() method is provided by Spring Data JPA.
But I am not able to figure out where its implementation is.
What exactly is its implementation?
Are such methods generated at runtime depending on the method name? If so, how are they generated and executed dynamically?
Thanks!
You can dig a little bit into the core of the Spring code to see how it works (https://github.com/spring-projects/spring-data-commons/tree/master/src/main/java/org/springframework/data/repository), but basically it parses the interface method names into queries at startup time.
You can test this just by editing a method name to reference a field which doesn't exist, and you'll get an error at startup time saying that there is no such field.
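Roughly speaking, the derived method is equivalent to a query you could write by hand against an injected EntityManager (a sketch; Spring Data actually builds a criteria query from the parsed method name rather than a JPQL string):

// hand-written equivalent of studentRepository.findByIdIn(listOfIds);
// entityManager is assumed to be an injected javax.persistence.EntityManager
TypedQuery<Student> query = entityManager.createQuery(
        "select s from Student s where s.id in :ids", Student.class);
query.setParameter("ids", listOfIds);
List<Student> students = query.getResultList();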
The main question is how to convert DTOs to entities and entities to DTOs without breaking SOLID principles.
For example, we have this JSON:
{
    "id": 1,
    "name": "user",
    "role": "manager"
}
DTO is:
public class UserDto {
    private Long id;
    private String name;
    private String roleName;
}
And entities are:
public class UserEntity {
    private Long id;
    private String name;
    private RoleEntity role;
}
public class RoleEntity {
    private Long id;
    private String roleName;
}
And there is a useful Java 8 DTO converter pattern.
But in their example there are no relations to other entities. In order to create a UserEntity I need to get the Role by roleName using the DAO layer (or service layer). Can I inject the RoleRepository (or RoleService) into the converter? It seems that the converter component would then break the SRP: it must only convert, and must not know about services or repositories.
Converter example:
@Component
public class UserConverter implements Converter<UserEntity, UserDto> {

    @Autowired
    private RoleRepository roleRepository;

    @Override
    public UserEntity createFrom(final UserDto dto) {
        UserEntity userEntity = new UserEntity();
        RoleEntity role = roleRepository.findByRoleName(dto.getRoleName());
        userEntity.setName(dto.getName());
        userEntity.setRole(role);
        return userEntity;
    }
    ....
Is it good to use a repository in the converter class? Or should I create another service/component that will be responsible for creating entities from DTOs (like a UserFactory)?
Try to decouple the conversion from the other layers as much as possible:
public class UserConverter implements Converter<UserEntity, UserDto> {

    private final Function<String, RoleEntity> roleResolver;

    public UserConverter(Function<String, RoleEntity> roleResolver) {
        this.roleResolver = roleResolver;
    }

    @Override
    public UserEntity createFrom(final UserDto dto) {
        UserEntity userEntity = new UserEntity();
        RoleEntity role = roleResolver.apply(dto.getRoleName());
        userEntity.setName(dto.getName());
        userEntity.setRole(role);
        return userEntity;
    }
}

@Configuration
class MyConverterConfiguration {

    @Bean
    public Converter<UserEntity, UserDto> userEntityConverter(
            @Autowired RoleRepository roleRepository
    ) {
        return new UserConverter(roleRepository::findByRoleName);
    }
}
You could even define a custom Converter<RoleEntity, String> but that may stretch the whole abstraction a bit too far.
As others have pointed out, this kind of abstraction hides a part of the application that may perform very poorly when used for collections (since DB queries can normally be batched). I would advise you to define a Converter<List<UserEntity>, List<UserDto>>, which may seem a little cumbersome when converting a single object, but you are now able to batch your database requests instead of querying one by one - the user cannot use said converter incorrectly (assuming no ill intention).
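A sketch of such a batched converter, assuming the same custom Converter interface as above and a derived RoleRepository.findByRoleNameIn(...) query method (the query method and the getters/setters on the DTO and entities are assumptions):

import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

public class UserListConverter implements Converter<List<UserEntity>, List<UserDto>> {

    private final RoleRepository roleRepository;

    public UserListConverter(RoleRepository roleRepository) {
        this.roleRepository = roleRepository;
    }

    @Override
    public List<UserEntity> createFrom(final List<UserDto> dtos) {
        // one query for all distinct role names instead of one query per user
        Set<String> roleNames = dtos.stream()
                .map(UserDto::getRoleName)
                .collect(Collectors.toSet());
        Map<String, RoleEntity> rolesByName = roleRepository.findByRoleNameIn(roleNames).stream()
                .collect(Collectors.toMap(RoleEntity::getRoleName, Function.identity()));

        return dtos.stream().map(dto -> {
            UserEntity user = new UserEntity();
            user.setName(dto.getName());
            user.setRole(rolesByName.get(dto.getRoleName()));
            return user;
        }).collect(Collectors.toList());
    }
}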
Take a look at MapStruct or ModelMapper if you would like to have some more comfort when defining your converters. And last but not least, give datus a shot (disclaimer: I am the author); it lets you define your mapping in a fluent way without any implicit functionality:
@Configuration
class MyConverterConfiguration {

    @Bean
    public Mapper<UserDto, UserEntity> userDtoConverter(@Autowired RoleRepository roleRepository) {
        Mapper<UserDto, UserEntity> mapper = Datus.forTypes(UserDto.class, UserEntity.class)
                .mutable(UserEntity::new)
                .from(UserDto::getName).into(UserEntity::setName)
                .from(UserDto::getRoleName).map(roleRepository::findByRoleName).into(UserEntity::setRole)
                .build();
        return mapper;
    }
}
(This example would still suffer from the DB bottleneck when converting a Collection<UserDto>.)
I would argue this is the most SOLID approach, but the given context/scenario suffers from unextractable dependencies with performance implications, which makes me think that forcing SOLID might be a bad idea here. It's a trade-off.
If you have a service layer, it would make more sense to use it to do the conversion or make it delegate the task to the converter.
Ideally, converters should be just converters: a mapper object, not a service.
Now if the logic is not too complex and the converters are not reusable, you may mix service processing with mapping processing, and in this case you could replace the Converter suffix with Service.
It would also seem nicer if only the services communicate with the repository.
Otherwise the layers become blurred and the design messy: we no longer really know who invokes whom.
I would do things in this way:
controller -> service -> converter
                      -> repository
or a service that performs the conversion itself (if the conversion is not too complex and it is not reusable):
controller -> service -> repository
Now, to be honest, I hate DTOs as they are often just data duplicates.
I introduce them only when the client's requirements in terms of information differ from the entity representation, and in that case it really is clearer to have a custom class (which is then not a duplicate).
Personally, I think converters should sit between your controllers and services; the only thing DTOs should worry about is the data in your service layer and which information to expose to your controllers.
controllers <-> converters <-> services ...
In your case, you can make use of JPA to populate the roles of your users at the persistence layer.
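A sketch of what that mapping could look like on the entity (the fetch type and join column name are assumptions):

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;

@Entity
public class UserEntity {

    @Id
    private Long id;
    private String name;

    // the role is loaded together with the user by the persistence layer
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "role_id") // assumed column name
    private RoleEntity role;
}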
I suggest that you just use MapStruct to solve this kind of entity-to-DTO conversion issue. Through an annotation processor, the mappings from DTO to entity and vice versa are generated automatically, and you just have to inject a reference to your mapper into your controller, just like you normally would with your repositories (@Autowired).
You can also check out this example to see if it fits your needs.
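A minimal MapStruct sketch for the classes from the question (the role.roleName-to-roleName mapping is illustrative; the reverse DTO-to-entity direction with the repository lookup would still need a custom method or an @AfterMapping hook):

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;

@Mapper(componentModel = "spring") // the generated implementation becomes a Spring bean
public interface UserMapper {

    // entity -> DTO: flatten the nested role into the roleName field
    @Mapping(source = "role.roleName", target = "roleName")
    UserDto toDto(UserEntity entity);
}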
That's the way I'd likely do it. The way I'd conceptualize it is that the user converter is responsible for user/user DTO conversions, and as such it rightly shouldn't be responsible for role/role DTO conversion. In your case, the role repository is acting implicitly as a role converter that the user converter delegates to. Maybe someone with more in-depth knowledge of SOLID can correct me if I'm wrong, but personally I feel like that checks out.
The one hesitation I would have, though, is that you're tying the notion of conversion to a DB operation, which isn't necessarily intuitive, and I'd want to be careful that months or years into the future some developer doesn't inadvertently grab the component and use it without understanding the performance considerations (assuming you're developing on a larger project, anyway). I might consider creating some decorator class around the role repository that incorporates caching logic.
I think the clean way to do it is to include a Role DTO that you convert to the RoleEntity. I might use a simplified User DTO in the case where it is read-only, for example in the case of unprivileged access.
To expand your example:
public class UserDto {
    private Long id;
    private String name;
    private RoleDto role;
}
with the Role DTO as
public class RoleDto {
    private Long id;
    private String roleName;
}
And the JSON:
{
    "id": 1,
    "name": "user",
    "role": {
        "id": 123,
        "roleName": "manager"
    }
}
Then you can convert the RoleDto to a RoleEntity while converting the User in your UserConverter, and remove the repository access.
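A sketch of the corresponding createFrom, assuming setters on both entities (whether setting only the id is enough to attach an existing role depends on how you persist the result):

@Override
public UserEntity createFrom(final UserDto dto) {
    UserEntity userEntity = new UserEntity();
    userEntity.setName(dto.getName());

    // build the RoleEntity straight from the nested RoleDto, no repository call needed
    RoleEntity role = new RoleEntity();
    role.setId(dto.getRole().getId());
    role.setRoleName(dto.getRole().getRoleName());
    userEntity.setRole(role);

    return userEntity;
}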
Instead of creating a separate converter class, you can give that responsibility to the entity class itself.
public class UserEntity {

    // properties

    public static UserEntity valueOf(UserDTO userDTO) {
        UserEntity userEntity = new UserEntity();
        // set values
        return userEntity;
    }

    public UserDTO toDto() {
        UserDTO userDTO = new UserDTO();
        // set values
        return userDTO;
    }
}
Usage:
UserEntity userEntity = UserEntity.valueOf(userDTO);
UserDTO userDTO = userEntity.toDto();
In this way you have your domain logic in one place. You can use Spring's BeanUtils to set the properties.
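For instance, valueOf could use BeanUtils.copyProperties, assuming the DTO and entity property names line up (a sketch):

import org.springframework.beans.BeanUtils;

public static UserEntity valueOf(UserDTO userDTO) {
    UserEntity userEntity = new UserEntity();
    // copies all matching readable/writable properties (e.g. id, name) from the DTO
    BeanUtils.copyProperties(userDTO, userEntity);
    return userEntity;
}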
You can do the same for RoleEntity and decide whether to lazy or eager load it when loading the UserEntity with your ORM tool.
My question is about passing the interface of a DTO to a DAO.
For example, I have the following code:
public interface User {
    String getName();
}
public class SimpleUser implements User {

    protected String name;

    public SimpleUser(String name) {
        this.name = name;
    }

    @Override
    public String getName() {
        return name;
    }
}
// Mapped by Hibernate
public class PersistentUser extends SimpleUser {

    private Long id;

    // Constructor
    // Getters for id and name
    // Setters for id and name
}
I'm using a generic DAO. Is it OK if I create the DAO using the interface User instead of PersistentUser?
User user = new PersistentUser(name);
UserDao.create(user);
I have read a lot of topics on Stack Overflow but haven't figured out whether this approach is OK or not. Please help me. Maybe this is a bad idea and will only cause me problems.
About separating the beans: I did this because I want to share some classes via an API module that can be used externally to create entities and pass them to my application. Because they use the interface I developed, I can pass them to my DAO for persisting.
Generally, I would say it is OK, but there are a few hidden problems. A developer could downcast the object or access some state via a toString method that shouldn't be accessible. If you aren't careful, state that shouldn't be serialized could end up serialized as JSON/XML in web services. The list goes on.
I created Blaze-Persistence Entity Views for exactly that use case. You essentially define DTOs for JPA entities as interfaces and apply them to a query. It supports mapping nested DTOs, collections, etc., essentially everything you'd expect, and on top of that it will improve your query performance, as it generates queries that fetch just the data you actually require for the DTOs.
The entity views for your example could look like this:
@EntityView(PersistentUser.class)
interface User {
    String getName();
}
Querying could look like this:
List<User> dtos = entityViewManager.applySetting(
        EntityViewSetting.create(User.class),
        criteriaBuilderFactory.create(em, PersistentUser.class)
).getResultList();