Automatically fill POJO using annotations with OrientDB - Java

First point: I am using OrientDB version 2.1.7.
I am trying to realize a project with OrientDB including JPA. As described in this doc, automatic loading, saving and deleting works for simple POJOs.
But there are two points that did not work out.
I want to make a property unique. I know it works when I do it programmatically like this:
OrientVertexType vertexType = graph.createVertexType(vertexName);
vertexType.createProperty("id", OType.STRING);
vertexType.createIndex("ididx", OClass.INDEX_TYPE.UNIQUE, "id");
But is there a way to do this via annotations? The JPA annotation (@Column(unique = true, nullable = false)) does not seem to work.
I have two VertexTypes which are connected by Edges. I also want the collection to be loaded automatically via an annotation. Example (getters and setters are not listed):
The User Object:
public class MyUser implements iMyUser {
    private String id;
    private String name;
    private Set<MyGroup> groups;
    ...
}
The Group Object:
public class MyGroup implements iMyGroup {
    private String name;
    private String id;
    ...
}
In JPA you can add something like @JoinTable(name = "table", joinColumns = { @JoinColumn(name = "colname") }, inverseJoinColumns = { @JoinColumn(name = "colname") }) to the groups property in MyUser, and when you call the method getGroups() you get the groups which have a relation. Is there an annotation in OrientDB that supports such behaviour?
I think this (@Adjacency) might be a solution, but so far I haven't had any success implementing it.
Also, is there a list or something of which annotations are supported?
Regards,
foo

The second point is possible with TinkerPop Frames: the @Adjacency annotation for direct relations between objects and @GremlinGroovy for queries. It is a bit strange to have the annotations on the interface, but it works.
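For reference, a rough sketch of what that could look like with Frames (the "memberOf" edge label is an assumption, and the question's interfaces are redeclared here with the Frames annotations; Frames proxies the interfaces directly, so no MyUser/MyGroup classes are needed):

import com.tinkerpop.blueprints.impls.orient.OrientGraph;
import com.tinkerpop.frames.Adjacency;
import com.tinkerpop.frames.FramedGraph;
import com.tinkerpop.frames.FramedGraphFactory;
import com.tinkerpop.frames.Property;

interface iMyGroup {
    @Property("name")
    String getName();
}

interface iMyUser {
    @Property("name")
    String getName();

    // "memberOf" is an assumed edge label between user and group vertices.
    @Adjacency(label = "memberOf")
    Iterable<iMyGroup> getGroups();

    @Adjacency(label = "memberOf")
    void addGroup(iMyGroup group);
}

class FramesExample {
    void example(OrientGraph graph) {
        FramedGraph<OrientGraph> framed = new FramedGraphFactory().create(graph);
        iMyUser user = framed.addVertex(null, iMyUser.class);
        iMyGroup group = framed.addVertex(null, iMyGroup.class);
        user.addGroup(group);                         // creates the "memberOf" edge
        Iterable<iMyGroup> groups = user.getGroups(); // traverses the edges lazily
    }
}

Queries via @GremlinGroovy work the same way once the GremlinGroovyModule is registered with the FramedGraphFactory.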
For the first point there might be no solution via annotations, but you can create the unique index via the OrientDB backend.

Related

Perform Mapstruct mapping outside of Hibernate session

I am using Spring Data and MapStruct, and I don't want Hibernate to blindly load all the elements while mapping an entity to a DTO.
Example:
public class VacancyEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Integer id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "job_category_id", nullable = false)
    JobCategoryEntity jobCategory;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "company_id", nullable = false)
    CompanyEntity company;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "employer_created_by", nullable = false)
    EmployerProfileEntity employerCreatedBy;

    @Column(nullable = false)
    String title;
    .... }
DTO:
public class VacancyDto {
    Integer id;
    String title;
    CompanyDto company;
    EmployerProfileDto employerCreatedBy;
    JobCategoryDto jobCategory;
    ...}
So I have two methods, findByIdWithCompanyAndCity and findByIdWithJobAndCityAndEmployer, in VacancyRepository so that each performs only one SQL request.
And two @Transactional methods in my VacancyService: findWithCompanyAndCity and findWithCompanyAndCityAndEmployer.
Best practice is to return DTOs from the service layer, so we need to map the entity to a DTO in the service.
And I really don't want to leave the whole mapping inside @Transactional (i.e. inside the session), because if I add some field deep down in my entity, MapStruct will just trigger the N+1 problem.
The best I came up with is to pass each inner entity into the mapping method and check manually that MapStruct doesn't generate any new nested lookups (it is faster than checking names).
Ex:
@Mapping(target = "id", source = "entity.id")
@Mapping(target = "description", source = "entity.description")
@Mapping(target = "jobCategory", source = "jobCategoryDto")
@Mapping(target = "employerCreatedBy", source = "employerProfileDto")
@Mapping(target = "city", source = "cityDto")
@Mapping(target = "company", ignore = true)
VacancyDto toDto(VacancyEntity entity,
                 JobCategoryDto jobCategoryDto,
                 EmployerProfileDto employerProfileDto,
                 CityDto cityDto);
....
But this doesn't fix the real issue. There is still a session open while mapping, so it can lead to the N+1 problem.
So I came up with several solutions:
Use a special method in the service to trigger the @Transactional method and then map to the DTO outside the session scope. But it seems really ugly to double the methods in the service.
Return the entity from the service (which is bad practice) and map to the DTO there.
I know that I'll get a LazyInitializationException in both cases, but that seems more robust and scalable to me than unpredictable SELECTs.
How do I perform the mapping from entity to DTO in the service layer but outside the Hibernate session in an elegant way?
You didn't ask a question, but it seems the question is supposed to be:
How do I perform the mapping from entity to DTO in the service layer but outside the Hibernate session in an elegant way?
I'd recommend the TransactionTemplate for this.
Usage looks like this:
@Autowired
VacancyRepository repo;

@Autowired
TransactionTemplate tx;

VacancyDto someMethod(String company, String city) {
    // the repository call runs inside a programmatic transaction ...
    VacancyEntity vac = tx.execute(status -> repo.findWithCompanyAndCity(company, city));
    // ... while the mapping happens outside the transaction/session
    return mapToDto(vac);
}
That said, I think you are using the wrong approach to solve the underlying problem.
I suggest you take a look at having a test to verify the number of SQL statements executed.
See https://vladmihalcea.com/how-to-detect-the-n-plus-one-query-problem-during-testing/ for a way to do that.
To avoid the N + 1 problem you still need to use an entity graph, although I think this is a perfect use case for Blaze-Persistence Entity Views.
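For the entity-graph route, a rough sketch with Spring Data JPA could look like this (the repository method name and attribute paths are illustrative, not from the question):

import java.util.Optional;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;

public interface VacancyRepository extends JpaRepository<VacancyEntity, Integer> {

    // Fetches the listed associations in the same query, so mapping outside the session is safe.
    @EntityGraph(attributePaths = {"company", "jobCategory", "employerCreatedBy"})
    Optional<VacancyEntity> findWithAssociationsById(Integer id);
}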
I created Blaze-Persistence Entity Views to allow easy mapping between JPA models and custom interface or abstract class defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure (domain model) the way you like and map attributes (getters) via JPQL expressions to the entity model.
A DTO model for your use case could look like the following with Blaze-Persistence Entity-Views:
#EntityView(VacancyEntity.class)
public interface VacancyDto {
#IdMapping
Integer getId();
String getTitle();
CompanyDto getCompany();
EmployerProfileDto getEmployerCreatedBy();
JobCategoryDto getJobCategory();
#EntityView(CompanyEntity.class)
interface CompanyDto {
#IdMapping
Integer getId();
String getName();
}
#EntityView(EmployerProfileEntity.class)
interface EmployerProfileDto {
#IdMapping
Integer getId();
String getName();
}
#EntityView(JobCategoryEntity.class)
interface JobCategoryDto {
#IdMapping
Integer getId();
String getName();
}
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
VacancyDto a = entityViewManager.find(entityManager, VacancyDto.class, id);
The Spring Data integration allows you to use it almost like Spring Data Projections: https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features
Page<VacancyDto> findAll(Pageable pageable);
The best part is, it will only fetch the state that is actually necessary!

Design of the layers of a Spring Boot RESTful API and its entities mapping

I have been thinking about the architecture of my server for days since I am developing my first RESTful API from scratch, with Spring Boot.
I am using Hibernate and I created several entities and their relationships using Hibernate/JPA annotations; however, I am not sure whether I should use these entities as the business models, since they are "dirty" with the extra fields Hibernate would recommend.
This is my tentative REST API layered architecture
This is an example taken directly from Hibernate's docs.
@Entity(name = "Person")
public static class Person implements Serializable {

    @Id
    @GeneratedValue
    private Long id;

    @NaturalId
    private String registrationNumber;

    @OneToMany(
        mappedBy = "person",
        cascade = CascadeType.ALL,
        orphanRemoval = true
    )
    private List<PersonAddress> addresses = new ArrayList<>();

    // ...

    public void addAddress(Address address) {
        PersonAddress personAddress = new PersonAddress( this, address );
        addresses.add( personAddress );
        address.getOwners().add( personAddress );
    }

    public void removeAddress(Address address) {
        PersonAddress personAddress = new PersonAddress( this, address );
        address.getOwners().remove( personAddress );
        addresses.remove( personAddress );
        personAddress.setPerson( null );
        personAddress.setAddress( null );
    }
}

@Entity(name = "PersonAddress")
public static class PersonAddress implements Serializable {

    @Id
    @ManyToOne
    private Person person;

    @Id
    @ManyToOne
    private Address address;

    // ...
}

@Entity(name = "Address")
public static class Address implements Serializable {

    @Id
    @GeneratedValue
    private Long id;

    private String street;

    @Column(name = "`number`")
    private String number;

    private String postalCode;

    @OneToMany(
        mappedBy = "address",
        cascade = CascadeType.ALL,
        orphanRemoval = true
    )
    private List<PersonAddress> owners = new ArrayList<>();

    // ....
}
What I mean is, if I were to make a class diagram, it would not look exactly like these entities because of the constraints you have to satisfy so that Hibernate can map them to tables with the ORM. For example, in a hypothetical class diagram, Person would have a list of Address, not a list of PersonAddress as Hibernate suggests in this case (for mapping performance).
My question is whether I should separate the Person model into two separate classes, one for the business logic layer (services) and one for the data access layer (repositories). Personally, I don't think that's a problem, because Hibernate helps me ignore all the table creation, but maybe it's not good practice and I should split it into two different classes.
In general I use an API model and an entity model.
The API model is used to exchange data between services, and the entity object is used to persist the data. This keeps your architecture more flexible. If something in your business logic changes, the entity is not automatically affected. Also, sometimes you get data from the client and don't want to expose the whole database object, so you can provide just the fields you need and complete the rest in the entity object. This is also recommended by static code analysis tools such as SonarQube.
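A minimal sketch of that split (class and field names are illustrative; fields are public only to keep the example short):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// API model: what the REST layer exposes.
class PersonApi {
    public Long id;
    public String registrationNumber;
}

// Entity model: what Hibernate persists; mapping concerns stay internal.
@Entity
class PersonEntity {
    @Id
    @GeneratedValue
    public Long id;
    public String registrationNumber;
}

// Mapping kept in one place, so API changes don't leak into the persistence model.
class PersonApiMapper {
    PersonApi toApi(PersonEntity entity) {
        PersonApi api = new PersonApi();
        api.id = entity.id;
        api.registrationNumber = entity.registrationNumber;
        return api;
    }
}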
What you are referring to is a split between the persistence model and the business/domain model, which is quite common. People often refer to this as the DTO approach.
The approach has many benefits and if you implement it right, almost no downsides.
Implementing it efficiently can be done with Blaze-Persistence Entity-Views, a library on top of JPA/Hibernate that will handle all the fetching for you transparently. Take a look at the Spring Data integration, which allows you to get started very quickly, or try out a sample project through an archetype to get a feeling for the benefits.

How to map a DTO to multiple entities?

I'm writing a Spring application which has two entities related by a one-to-many relationship; let's call them mother and kid.
When I create a mother entity via a POST request, I want a kid entity to be created automatically. Using the @OneToMany and @ManyToOne annotations, that works fine, at least as long as I provide the kid information within the MotherService.
Here is my code
Mother.java
@Entity
@Table(name = "mother")
public class Mother {

    @Id
    @Column(name = "id", updatable = false, nullable = false)
    private Long id;

    @Column(name = "name")
    private String name;

    @OneToMany(mappedBy = "mother", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<Kid> kidList = new ArrayList<>();

    //constructor, getter, setter

    // public so the service can call it
    public void addKid(Kid kid) {
        this.kidList.add(kid);
        kid.setMother(this);
    }
}
Kid.java
@Entity
@Table(name = "kid")
public class Kid {

    @Id
    @Column(name = "id", updatable = false, nullable = false)
    private Long id;

    @Column(name = "name")
    private String name;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "mother_id", nullable = false)
    private Mother mother;

    //constructor, getter, setter
}
MotherController.java
@RestController
@RequestMapping("mothers")
public class MotherController {

    @Autowired
    private MotherService motherService;

    MotherController(MotherService motherService) {
        this.motherService = motherService;
    }

    @PostMapping
    Mother createMother(@RequestBody Mother mother) {
        return this.motherService.createMother(mother);
    }
}
MotherService.java
@Service
public class MotherService {

    private MotherRepository motherRepository;

    @Autowired
    public MotherService(MotherRepository motherRepository) {
        super();
        this.motherRepository = motherRepository;
    }

    public Mother createMother(Mother mother) {
        Kid kid = new Kid("Peter");
        mother.addKid(kid);
        return this.motherRepository.save(mother);
    }
}
The repositories for mother and kid extend the JpaRepository without any custom methods so far.
My POST request is something like (using Postman)
{
"name":"motherName"
}
Now a mother is created with a name "motherName" and a kid with the name of "Peter".
My idea: Using a DTO
I am now trying to implement a DTO that contains the mother's name and the kid's name, map this information in the MotherService to the entities and save them via the corresponding repositories, so I can define both names in the POST request.
MotherDto.java
public class MotherDto {
    private String motherName;
    private String kidName;
    //getter, setter
}
So when I POST
{
"motherName":"Susanne",
"kidName":"Peter"
}
or even better
{
    "mother": {
        "name": "Susanne"
    },
    "kid": {
        "name": "Peter"
    }
}
a mother with name Susanne and a kid with name Peter are created.
My question is
How do I map a DTO to two entities?
Or am I not getting something right? Is there an easier way to achieve my goal?
I know this is old and probably long solved, but let me offer a different take on the subject.
Another option would be to design a DTO solely for the purpose of creating the two entities you mentioned. You could call it MotherChildCreationDTO or something like that, so the name already conveys its use, and maybe create a REST endpoint consuming the DTO.
Asymmetric DTOs (receiving and sending) are an established pattern, and the DTOs are closely coupled to the REST controller anyway.
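A minimal sketch of that idea, reusing the classes from the question (the Mother and Kid String constructors are assumptions; fields on the DTO are public only for brevity):

// Creation-only DTO whose name conveys its purpose.
public class MotherChildCreationDTO {
    public String motherName;
    public String kidName;
}

The controller consumes the DTO, and the service maps it to both entities in one place:

// In MotherController:
@PostMapping
Mother createMother(@RequestBody MotherChildCreationDTO dto) {
    return this.motherService.createMother(dto);
}

// In MotherService (assumes Mother(String name) and Kid(String name) constructors):
public Mother createMother(MotherChildCreationDTO dto) {
    Mother mother = new Mother(dto.motherName);
    mother.addKid(new Kid(dto.kidName));
    return this.motherRepository.save(mother);
}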
First solution:
You can avoid using a DTO and send your JSON with the same structure as Mother and its kids, and Jackson in Spring MVC will deserialize it correctly for you.
{
    "id": 2,
    "name": "sarah",
    "kidList": [{"id": 546, "name": "bob"}, {"id": 478, "name": "tom"}]
}
Second solution:
If you want a different structure in your JSON than in your models, you can use Jackson annotations like @JsonProperty or @JsonDeserialize. Read this link for more information.
Third solution:
You can use Dozer for complex mapping between your DTOs and your models. You define XML files for mapping each model to your DTO, and Dozer maps your DTO to your models. Read this link for more information.
You have two ways:
Map the DTO to entities yourself. In this case, you should create a custom mapper and define exactly how the DTO should be converted to entities, then just inject and use your custom mapper in the service (a sketch of such a mapper follows below).
Use one of the existing mapper libraries. For example, MapStruct and ModelMapper are good candidates. You can find usage examples in the corresponding getting started guides.
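A minimal sketch of the hand-written mapper option (class names follow the question; the constructor signatures are assumptions):

import org.springframework.stereotype.Component;

@Component
public class MotherDtoMapper {

    // Converts the incoming DTO into the Mother aggregate, including the Kid.
    public Mother toEntity(MotherDto dto) {
        Mother mother = new Mother(dto.getMotherName()); // assumes Mother(String name)
        mother.addKid(new Kid(dto.getKidName()));        // assumes Kid(String name)
        return mother;
    }
}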

JPA/validation: @ManyToOne relations should not create new rows

I have a JPA entity that contains a @ManyToOne reference to another table; a simplified version of that entity is shown below:
@Entity
@Table(name = "ENTITIES")
public class Entity implements Serializable {

    @Id @NotNull
    private String id;

    @JoinColumn(name = "REFERENCE", referencedColumnName = "ID")
    @ManyToOne(optional = false)
    private ReferencedEntity referencedEntity;
}

@Entity
@Table(name = "REFERENCES")
public class ReferencedEntity implements Serializable {

    @Id @NotNull @Column(name = "ID")
    private String id;

    @Size(max = 50) @Column(name = "DSC")
    private String description;
}
Finding entities works fine. Persisting entities also works fine, a bit too well in my particular setup; I need some extra validation.
Problem
My requirement is that the rows in table REFERENCES are static and should not be modified or new rows added.
Currently when I create a new Entity instance with a non-existing (yet) ReferencedEntity and persist that instance, a new row is added to REFERENCES.
Right now I've implemented this check in my own validate() method before calling the persist(), but I'd rather do it more elegantly.
Using an enum instead of a real entity is not an option; I want to be able to add rows myself, without a rebuild/redeployment, several times in the future.
My question
What is the best way to implement a check like this?
Is there some BV annotation/constraint that helps me restrict this? Maybe a third party library?
It sounds like you need to first do a DB query to check if the value exists and then insert the record. This must be done in a transaction in order to ensure that the result of the query is still true at the time of insertion. I had a similar problem half a year back which might provide you with some leads on how to set up locking. Please see this SO question.
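A minimal sketch of that check (a Spring-managed service, an injected EntityManager and the accessor names on the simplified entities above are all assumptions):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.transaction.annotation.Transactional;

public class EntityService {

    @PersistenceContext
    private EntityManager em;

    // Runs the lookup and the insert in one transaction, as suggested above.
    @Transactional
    public void create(Entity entity) {
        ReferencedEntity existing =
                em.find(ReferencedEntity.class, entity.getReferencedEntity().getId());
        if (existing == null) {
            throw new IllegalArgumentException("Unknown reference");
        }
        entity.setReferencedEntity(existing); // reuse the existing row instead of adding a new one
        em.persist(entity);
    }
}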
You should add insertable = false, updatable = false to the @JoinColumn.
And remove optional = false, and maybe try nullable = true.
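Applied to the mapping above, that suggestion would look roughly like this:

@JoinColumn(name = "REFERENCE", referencedColumnName = "ID", insertable = false, updatable = false)
@ManyToOne
private ReferencedEntity referencedEntity;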

Hibernate: enforcing unique data members

I am having an issue working with Hibernate and enforcing unique data members when inserting.
Here are my abridged Entity objects:
Workflow:
@Entity
public class Workflow {

    private long wfId;
    private Set<Service> services;

    /** Getter/Setter for wfId */
    ...

    @OneToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "workflow_services",
        joinColumns = @JoinColumn(name = "workflow_id"),
        inverseJoinColumns = @JoinColumn(name = "service_id"))
    public Set<Service> getServices() {
        return services;
    }
Service:
@Entity
public class Service {

    private long serviceId;
    private String serviceName;

    /** Getter/Setter for serviceId */
    ...

    @Column(unique = true, nullable = false)
    public String getServiceName() {
        return serviceName;
    }

    @OneToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "service_operations",
        joinColumns = { @JoinColumn(name = "serviceId") },
        inverseJoinColumns = { @JoinColumn(name = "operationId") })
    public Set<Operation> getOperations() {
        return operations;
    }
Operation:
@Entity
public class Operation {

    private long operationId;
    private String operationName;

    /** Getter/Setter for operationId */

    @Column(unique = true, nullable = false)
    public String getOperationName() {
        return operationName;
    }
My issue:
Although I have stated in each object what is SUPPOSED to be unique, it is not being enforced.
Inside my Workflow object, I maintain a Set of Services. Each Service maintains a list of Operations. When a Workflow is saved to the database, I need it to check if the Services and Operations it currently uses are already in the database, if so, associate itself with those rows.
Currently I am getting repeats within my Services and Operations tables.
I have tried using the annotation:
@Table(uniqueConstraints = ...)
but have had zero luck with it.
Any help would be greatly appreciated
The unique and uniqueConstraints attributes are not used to enforce uniqueness in the DB, but to create the correct DDL if you generate it from Hibernate (and for documentation too, but that's arguable).
If you declare something as unique in Hibernate, you should also declare it in the DB by adding a constraint.
Taking this to the extreme, you could create a mapping in which the PK is not unique in the DB, and Hibernate would throw an exception when it tries to load one item by calling Session.load and suddenly finds that there are 2 items.
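For example, to actually enforce unique service names you would both declare the constraint on the entity and make sure it exists in the database (a sketch; the constraint name is illustrative):

// Drives the generated DDL only; the database itself must carry the constraint, e.g.:
// ALTER TABLE Service ADD CONSTRAINT uk_service_name UNIQUE (serviceName);
@Entity
@Table(uniqueConstraints = @UniqueConstraint(columnNames = "serviceName"))
public class Service {
    // ...
}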
Inside my Workflow object, I maintain a Set of Services. Each Service maintains a list of Operations. When a Workflow is saved to the database, I need it to check if the Services and Operations it currently uses are already in the database, if so, associate itself with those rows.
I think you're asking Hibernate to detect duplicate objects when you add them to the Set, yes? In other words, when you put an object in the Set, you want Hibernate to go look for a persistent version of that object and use it. However, this is not the way Hibernate works. If you want it to "reuse" an object, you have to look it up yourself and then use it. Hibernate doesn't do this.
I would suggest having a helper method on a DAO-like object that takes the parent and the child object, and then does the lookup and setting for you.
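A minimal sketch of such a helper (the JPQL query and the injection style are assumptions):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class ServiceDao {

    @PersistenceContext
    private EntityManager em;

    // Attach an existing Service with the same name if one exists, otherwise keep the new instance.
    public void addService(Workflow workflow, Service service) {
        List<Service> existing = em
                .createQuery("select s from Service s where s.serviceName = :name", Service.class)
                .setParameter("name", service.getServiceName())
                .getResultList();
        workflow.getServices().add(existing.isEmpty() ? service : existing.get(0));
    }
}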
