Cyclic serialisation with Many to Many relationship with Hibernate - java

I have a parent (Program) pojo with a many-to-many relationship to its children (Subscriber).
The problem is when it serialises a Program, it also serialises the Program's Subscribers, which involves serialising their Programs, which involves serialising their Subscribers, until it has serialised every single Program & Subscriber in the database.
The ERD looks like: Program <-> Subscriber
This means that what was a tiny 17 KB block of JSON being returned has become a 6.9 MB return, which in turn blows out the time to serialise the data and return it.
Why is my parent returning children returning parents returning children? How can I stop this so I only get the Subscribers for each Program? I'm assuming I've done something wrong with my annotations. I would like to keep the many-to-many relationship, but without this deeply nested data retrieval.
(Note: I have already tried adding every Lazy annotation I could find, just to see if that helps. It doesn't. Perhaps I'm doing that wrong too?)
Program.java
@Entity
@Table(name="programs")
public class Program extends Core implements Serializable, Cloneable {
    ...
    @ManyToMany()
    @JoinTable(name="program_subscribers",
        joinColumns={@JoinColumn(name="program_uid")},
        inverseJoinColumns={@JoinColumn(name="subscriber_uid")})
    public Set<Subscriber> getSubscribers() { return subscribers; }
    public void setSubscribers(Set<Subscriber> subscribers) { this.subscribers = subscribers; }
Subscriber.java
@Entity
@Table(name="subscribers")
public class Subscriber extends Core implements Serializable {
    ...
    @ManyToMany(mappedBy="subscribers")
    public Set<Program> getPrograms() { return programs; }
    public void setPrograms(Set<Program> programs) { this.programs = programs; }
Implementation
public Collection<Program> list() {
    return Programs.findAll();
}

You didn't mention which framework you are using for JSON serialization, so I'll assume JAXB. Either way, the idea is to make Subscriber.getPrograms(..) transient in some way, so that it's not serialized. Hibernate takes care of these 'loops', but serialization frameworks don't. So:
@XmlTransient
@ManyToMany(..)
public Set<Program> getPrograms()...
If you use another framework, it may have a different annotation or configuration for specifying transient fields, such as honoring Java's transient keyword.
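For example, if the JSON is produced by Jackson (not confirmed by the question, so treat this as a sketch), the equivalent is @JsonIgnore on the inverse side:
import com.fasterxml.jackson.annotation.JsonIgnore;

@Entity
@Table(name="subscribers")
public class Subscriber extends Core implements Serializable {
    // Programs still serialize their Subscribers, but a Subscriber no
    // longer serializes its Programs, which breaks the cycle.
    @JsonIgnore
    @ManyToMany(mappedBy="subscribers")
    public Set<Program> getPrograms() { return programs; }
}
Jackson also offers @JsonManagedReference/@JsonBackReference if you want one direction kept in the output and only the back-reference suppressed.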
The other way is to customize your mapper to handle the cycle manually, but this is tedious.

1) How does your serialization work? I mean, is it JAXB, custom serialization or something else?
2) Almost all frameworks let you limit the depth of serialization; you can, for example, set the depth to 2.
3) I advise you not to serialize objects together with their children: mark the children transient and serialize them separately, as sketched below.
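A sketch of point 3, assuming a serializer that honors Java's transient keyword (Gson does by default; Jackson only does with MapperFeature.PROPAGATE_TRANSIENT_MARKER enabled):
// the back-reference is skipped by serializers that honor `transient`
private transient Set<Program> programs;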

How about using annotations? http://thinkinginsoftware.blogspot.com/2010/08/json-and-cyclical-references.html

Use @EqualsAndHashCode from the Lombok library, or override equals and hashCode yourself. Use only unique fields (e.g. id) inside hashCode.
@EqualsAndHashCode(callSuper = false, of = {"id"})
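If Lombok is not an option, a hand-written equivalent keyed only on the id might look like this (assuming the Core base class exposes getId()):
import java.util.Objects;

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Subscriber)) return false;
    // compare only the unique id, never the related collections,
    // so equality checks don't walk the cyclic graph
    return getId() != null && getId().equals(((Subscriber) o).getId());
}

@Override
public int hashCode() {
    return Objects.hashCode(getId());
}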

Both Bozho and ponkin are on the right track. I needed to stop serialising the whole graph down the wire, but the big problem is that I am unable to change the pojo -> toJSON class/method where the serialisation takes place. I was also wary of investing time in the toJSON() method: given the performance hit at the point of serialisation, I wanted a fix that takes effect before I have the data rather than afterwards.
Also, due to the bidirectional many-to-many design I had listed, I was always going to have this cyclic programs/subscribers/programs/... problem.
Resolution: (for now at least) I have removed the Subscriber.getPrograms() method and created a finder method on the ProgramDAO which returns the Programs for a given Subscriber.
public List<Program> findBySubscriber(Subscriber subscriber) {
    String hql = "select p " +
                 "from Program p " +
                 "  join p.subscribers s " +
                 "where s = :sub";
    Query q = getSession().createQuery(hql);
    q.setEntity("sub", subscriber);
    List<Program> l = q.list();
    return l;
}
For any CRUD work I think I'm just going to have to loop over Program.getSubscribers(), or write more HQL helper methods along the lines sketched below.
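For instance, a hypothetical helper that unsubscribes a Subscriber everywhere now has to work from the owning side, using the finder above (programDAO stands for an instance of the ProgramDAO):
public void unsubscribeFromAll(Subscriber subscriber) {
    // the join table rows are owned by Program, so removals go through it
    for (Program program : programDAO.findBySubscriber(subscriber)) {
        program.getSubscribers().remove(subscriber);
    }
}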

Related

How to deal with transient entities after deserialization

Let's say I have a simple REST app with Controller, Service and Data layers. In my Controller layer I do something like this:
@PostMapping("/items")
void save(ItemDTO dto){
    Item item = map(dto, Item.class);
    service.validate(item);
    service.save(item);
}
But then I get errors because my Service layer looks like this:
public void validate(Item item) {
    if(item.getCategory().getCode().equals(5)){
        throw new IllegalArgumentException("Items with category 5 are not currently permitted");
    }
}
I get a NullPointerException at .equals(5), because the Item entity was deserialized from a DTO that only contains a category_id and nothing else (everything is null except the id).
The solutions we have found and have experimented with, are:
Make a special deserializer that takes the ids and automatically fetches the required entities. This, of course, resulted in massive performance problems, similar to those you would get if you marked all your relationships with FetchType.EAGER.
Make the Controller layer fetch all the entities the Service layer will need. The problem is, the Controller needs to know how the underlying service works exactly, and what it will need.
Have the Service layer verify if the object needs fetching before running any validations. The problem is, we couldn't find a reliable way of determining whether an object needs fetching or not. We end up with ugly code like this everywhere:
(sample)
if(item.getCategory().getCode() == null)
    // findById returns an Optional in Spring Data 2.x
    item.setCategory(categoryRepo.findById(item.getCategory().getId()).orElse(null));
What other ways would you do it to keep Services easy to work with? It's really counterintuitive for us having to check every time we want to use a related entity.
Please note this question is not about finding any way to solve this problem. It's more about finding better ways to solve it.
From my understanding, it would be very difficult for ModelMapper to map an id in the DTO to the actual entity.
The problem is that ModelMapper or some service would have to do a lookup and inject the entity.
If the categories are a finite set, could you use an enum and static enum mapping?
You could switch the logic to read
if(listOfCategoriesToAvoid.contains(item.getCategory())){ throw new IllegalArgumentException("Items with category 5 are not currently permitted"); }
and populate listOfCategoriesToAvoid with a small query, or even store it in a properties file/table where it could be a CSV, as sketched below.
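One hedged way to wire that up in Spring, with the property name and class invented for illustration, checking ids so the unfetched Category.code is never touched:
@Service
public class ItemValidator {

    // assumed property, e.g. validation.blocked-category-ids=5,7;
    // Spring converts the comma-separated value into a collection
    @Value("${validation.blocked-category-ids}")
    private Set<Long> blockedCategoryIds;

    public void validate(Item item) {
        if (blockedCategoryIds.contains(item.getCategory().getId())) {
            throw new IllegalArgumentException(
                    "Items with this category are not currently permitted");
        }
    }
}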
When you call service.save(item), wouldn't it still fail because the category wouldn't be populated? Maybe you can send the category as a CategoryDTO inside the ItemDTO, so that the Category entity gets populated on the model.map() call.
Not sure if any of these would work for you.
From what I can gather the map(dto, Item.class) method does something like this:
Long categoryId = itemDto.getCategoryId();
Category cat = new Category();
cat.setId(categoryId);
outItem.setCategory(cat);
The simplest solution would be to have it do this inside:
Long categoryId = itemDto.getCategoryId();
Category cat = categoryRepo.getById(categoryId);
outItem.setCategory(cat);
Another option: since you are hardcoding the category code 5 until it's finished, you could hard-code the category IDs that have it instead, if those are not something you expect users to change.
Why aren't you just using the code as the primary key for Category? That way you don't have to fetch anything for this kind of check. The underlying problem, though, is that the object mapper simply cannot cope with the managed nature of JPA objects, i.e. it doesn't know that it should actually retrieve objects by PK through e.g. EntityManager#getReference. If it did, you wouldn't have this problem, because the proxy returned by that method is lazily initialized on the first call to getCode.
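A sketch of that getReference idea inside the mapping step (entityManager being an injected EntityManager; the surrounding names follow the question):
Long categoryId = itemDto.getCategoryId();
// returns a lazy proxy without hitting the database; a SELECT only
// happens on the first access to a non-id property such as getCode()
Category cat = entityManager.getReference(Category.class, categoryId);
outItem.setCategory(cat);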
I suggest you look at something like Blaze-Persistence Entity Views which has first class support for something like that.
I created the library to allow easy mapping between JPA models and custom interface or abstract class defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure (domain model) the way you like and map attributes (getters) via JPQL expressions to the entity model.
A DTO model for your use case could look like the following with Blaze-Persistence Entity-Views:
@EntityView(Item.class)
// You can omit the strategy to default to QUERY when using the code as PK of Category
@UpdatableEntityView(strategy = FlushStrategy.ENTITY)
public interface ItemDTO {
    @IdMapping
    Long getId();
    String getName();
    void setName(String name);
    CategoryDTO getCategory();
    void setCategory(CategoryDTO category);

    @EntityView(Category.class)
    interface CategoryDTO {
        @IdMapping
        Long getId();
    }
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
ItemDTO a = entityViewManager.find(entityManager, ItemDTO.class, id);
The Spring Data integration allows you to use it almost like Spring Data Projections: https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features
Page<ItemDTO> findAll(Pageable pageable);
The best part is, it will only fetch the state that is actually necessary!
And in your case of saving data, you can use the Spring WebMvc integration
that would look something like the following:
@PostMapping("/items")
void save(ItemDTO dto){
    service.save(dto);
}

class ItemService {
    @Autowired
    ItemRepository repository;

    @Transactional
    public void save(ItemDTO dto) {
        repository.save(dto);
        Item item = repository.getOne(dto);
        validate(item);
    }
    // other code...
}

Should I use model classes or payload classes to serialize a JSON response

I'm using Spring Boot with MySQL to create a RESTful API. Here's an example of how I return a JSON response.
First, I have a model:
@Entity
public class Movie extends DateAudit {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private Date releaseDate;
    private Time runtime;
    private Float rating;
    private String storyline;
    private String poster;
    private String rated;

    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieMedia> movieMedia = new ArrayList<>();

    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieReview> movieReviews = new ArrayList<>();

    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieCelebrity> movieCelebrities = new ArrayList<>();

    // Setters & Getters
}
and the corresponding repository:
@Repository
public interface MovieRepository extends JpaRepository<Movie, Long> {
}
I also have a payload class MovieResponse which represents a movie instead of the Movie model, for cases where I need extra fields or need to return only specific fields.
public class MovieResponse {
    private Long id;
    private String name;
    private Date releaseDate;
    private Time runtime;
    private Float rating;
    private String storyline;
    private String poster;
    private String rated;
    private List<MovieCelebrityResponse> cast = new ArrayList<>();
    private List<MovieCelebrityResponse> writers = new ArrayList<>();
    private List<MovieCelebrityResponse> directors = new ArrayList<>();

    // Constructors, getters and setters

    public void setCelebrityRoles(List<MovieCelebrityResponse> movieCelebrities) {
        this.setCast(movieCelebrities.stream()
                .filter(movieCelebrity -> movieCelebrity.getRole().equals(CelebrityRole.ACTOR))
                .collect(Collectors.toList()));
        this.setDirectors(movieCelebrities.stream()
                .filter(movieCelebrity -> movieCelebrity.getRole().equals(CelebrityRole.DIRECTOR))
                .collect(Collectors.toList()));
        this.setWriters(movieCelebrities.stream()
                .filter(movieCelebrity -> movieCelebrity.getRole().equals(CelebrityRole.WRITER))
                .collect(Collectors.toList()));
    }
}
As you can see, I divide the movieCelebrities list into 3 lists (cast, directors and writers).
And to map a Movie to a MovieResponse I'm using a ModelMapper class:
public class ModelMapper {
    public static MovieResponse mapMovieToMovieResponse(Movie movie) {
        // Create a new MovieResponse and assign the Movie data to it
        MovieResponse movieResponse = new MovieResponse(movie.getId(), movie.getName(), movie.getReleaseDate(),
                movie.getRuntime(), movie.getRating(), movie.getStoryline(), movie.getPoster(), movie.getRated());
        // Get MovieCelebrities for the current Movie
        List<MovieCelebrityResponse> movieCelebrityResponses = movie.getMovieCelebrities().stream().map(movieCelebrity -> {
            // Get the Celebrity for the current MovieCelebrity
            CelebrityResponse celebrityResponse = new CelebrityResponse(movieCelebrity.getCelebrity().getId(),
                    movieCelebrity.getCelebrity().getName(), movieCelebrity.getCelebrity().getPicture(),
                    movieCelebrity.getCelebrity().getDateOfBirth(), movieCelebrity.getCelebrity().getBiography(), null);
            return new MovieCelebrityResponse(movieCelebrity.getId(), movieCelebrity.getRole(),
                    movieCelebrity.getCharacterName(), null, celebrityResponse);
        }).collect(Collectors.toList());
        // Assign movieCelebrityResponses to movieResponse
        movieResponse.setCelebrityRoles(movieCelebrityResponses);
        return movieResponse;
    }
}
And finally, here's my MovieService implementation, which I call from the controller:
@Service
public class MovieServiceImpl implements MovieService {
    private MovieRepository movieRepository;

    @Autowired
    public void setMovieRepository(MovieRepository movieRepository) {
        this.movieRepository = movieRepository;
    }

    public PagedResponse<MovieResponse> getAllMovies(Pageable pageable) {
        Page<Movie> movies = movieRepository.findAll(pageable);
        if(movies.getNumberOfElements() == 0) {
            return new PagedResponse<>(Collections.emptyList(), movies.getNumber(),
                    movies.getSize(), movies.getTotalElements(), movies.getTotalPages(), movies.isLast());
        }
        List<MovieResponse> movieResponses = movies.map(ModelMapper::mapMovieToMovieResponse).getContent();
        return new PagedResponse<>(movieResponses, movies.getNumber(),
                movies.getSize(), movies.getTotalElements(), movies.getTotalPages(), movies.isLast());
    }
}
So the question: is it fine to have a payload class for each model for JSON serialization, or is there a better way?
Also, if there's anything wrong with my code, feel free to comment.
I had this dilemma not so long back; this was my thought process. I have it here: https://stackoverflow.com/questions/44572188/microservices-restful-api-dtos-or-not
The Pros of Just exposing Domain Objects
The less code you write, the fewer bugs you produce.
Despite having extensive (arguably) test cases in our code base, I have come across bugs caused by missed or wrong copying of fields from domain to DTO or vice versa.
Maintainability - less boilerplate code.
If I have to add a new attribute, I don't have to add it in the Domain, the DTO, the Mapper and, of course, the test cases. And don't tell me this can be achieved using a reflection-based bean-copy utility like Dozer or MapStruct; that defeats the whole purpose.
Lombok, Groovy, Kotlin, I know, but they will only save me the getter/setter headache.
DRY
Performance
I know this falls under the category of "premature performance optimization is the root of all evil". But still, this will save some CPU cycles by not having to create (and later garbage-collect) at least one more object per request.
Cons
DTOs will give you more flexibility in the long run
If only I ever needed that flexibility. At least, whatever I have come across so far are CRUD operations over HTTP which I can manage with a couple of @JsonIgnores. And if there are one or two fields that need a transformation which cannot be done with a Jackson annotation, as I said earlier, I can write custom logic to handle just that.
Domain Objects getting bloated with Annotations.
This is a valid concern. If I use JPA or MyBatis as my persistence framework, the domain object might carry those annotations, and then there will be Jackson annotations too. If you are using Spring Boot you can get away with application-wide properties like mybatis.configuration.map-underscore-to-camel-case: true and spring.jackson.property-naming-strategy: SNAKE_CASE.
Short story: at least in my case, the cons didn't outweigh the pros, so it made no sense to repeat myself with a new POJO as a DTO. Less code, fewer chances of bugs. So I went ahead with exposing the domain object and not having a separate "view" object.
Disclaimer: this may or may not be applicable to your use case. The observation is specific to mine (basically a CRUD API with about 15 endpoints).
We should keep each layer separate from the others. As in your case, you have defined entity and response classes; that is the right way to separate things. We should never send the entity in the response, and even for the request we should have a dedicated class.
What's the issue if we send the entity instead of a response DTO?
We are not free to modify the entity later, because we have already exposed it to our clients.
Sometimes we don't want to serialize some fields and send them in the response.
There is some overhead in translating request to domain, entity to domain, etc., but it's worth it to keep things organized. ModelMapper is a good choice for the translation.
Also, try to use constructor injection instead of setter injection for mandatory dependencies, for example:
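A constructor-injected version of the MovieServiceImpl from the question might look like this (with a single constructor, recent Spring versions don't even require the @Autowired annotation):
@Service
public class MovieServiceImpl implements MovieService {

    private final MovieRepository movieRepository;

    public MovieServiceImpl(MovieRepository movieRepository) {
        this.movieRepository = movieRepository;
    }

    // ... methods as before
}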
It is always recommended to separate DTOs and entities.
Entities should interact with the DB/ORM, and DTOs should interact with the client layer (the layer for request and response), even if the structure of the entity and the DTO is the same.
Here the entity is Movie and the DTO is MovieResponse.
Use your existing class MovieResponse for requests and responses.
Never use the Movie class for requests and responses.
The class MovieServiceImpl should contain the business logic for converting entity to DTO, or you can use the Dozer API to do the conversion automatically.
The reasons for separating them:
If you need to add or remove elements in the request/response, you don't have to change much code.
If two entities have a two-way mapping (e.g. a one-to-many/many-to-many relationship), a JSON object can't be created from the nested data and serialization will throw an error.
If anything changes in the DB or the entity, it will (most of the time) not affect the JSON response.
The code will be clearer and easier to maintain.
On one side, you should separate them because some of the JPA annotations you use in your model don't work well with the JSON processor annotations. And yes, you should keep the things separated: what if you later decide to change your data layer? Will you have to rewrite all your client side?
On the other side, there is the problem of mapping. For that, you can use a library, with a small performance penalty.
DTO is a design pattern that solves the problem of fetching as much useful data from a service as possible in one go.
In a simple application like yours, the DTOs tend to be similar to the entity classes. However, in complex applications DTOs can be extended to combine data from various entities, avoiding multiple requests to the server and thus saving valuable resources and request-response time.
I would suggest not duplicating the code in a simple case like this and using the model classes in the API responses as well. Separate response classes as DTOs would not serve any purpose here and would only make the code harder to maintain.
While most people have answered with the pros and cons of using DTO objects, I would like to give my 2 cents. In my case a DTO was necessary because not all fields persisted in the database were captured from the user. There were a few fields computed from user input (of other fields) that were not exposed to users. Also, DTOs can reduce the size of the payload, which can improve performance.
I advocate for separating the "Payload" or "Data" object from the "Model" or "Display" object. Pretty much always. This just keeps things easier to manage.
Here's an example:
Let's say you need to hit an API that gives you data about cats for sale. Then you parse the data into a cat model object and populate a list of cats that is then displayed to the user. Cool.
But now you want to integrate another API and pull cats from 2 databases. But you run into a problem. One API returns furColor for the color and the new one returns catColor for the color.
If you were using the same object to also display the info, you have some options:
Add both furColor and catColor to the model object, make them both optional, and do some kind of computed property to check which one is set and use that one to display the color
In reality, this is rarely an option because the responses will usually differ by much more than one field like this, so you would likely need a whole new parser anyway
Add a new data object and then also a new adapter and then have to do some kind of check to know which adapter to use when
Something else that still isn't pretty or fun to work with
However, if you create a data object that catches the response, and then a display object that has only the info needed to populate the list, this becomes really easy:
You have a data object that captures the response from the first API
Now make a data object that captures the response from the second API
Now all you need is some kind of simple mapper to map the response to the Display Object
Now both will be converted to a common simple display object, and the same adapter can be used to display the new cats without additional work
This also will make storing the data locally much cleaner.
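A hypothetical sketch of that setup, with every name below invented for illustration:
// one data object per API, each mirroring its response shape
class ApiOneCat { String name; String furColor; }
class ApiTwoCat { String name; String catColor; }

// the single display object the list adapter knows about
class CatDisplay {
    final String name;
    final String color;
    CatDisplay(String name, String color) { this.name = name; this.color = color; }
}

// one small mapper per source; the adapter never sees the difference
final class CatMapper {
    static CatDisplay fromApiOne(ApiOneCat c) { return new CatDisplay(c.name, c.furColor); }
    static CatDisplay fromApiTwo(ApiTwoCat c) { return new CatDisplay(c.name, c.catColor); }
}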

Wicket - Serialization of persisted and non-persisted JPA entities

I know that when using Wicket with JPA frameworks it is not advisable to serialize entities that have already been persisted to the database (because of problems with lazy fields and to save space). In such cases we are supposed to use LoadableDetachableModel. But what about the following use-case?
Suppose we want to create a new entity (say, a Contract) which will consist, among other things, of persisted entities (say, a Client which is selected from a list of clients stored in the DB). The entity under creation is a model object of some Wicket component (say, a Wizard). In the end (when we finish our wizard) we save the new entity to the DB. So my question is: what is the best generic solution to the serialization problem of such model objects? We can't use LDM because the entity is not in the DB yet but we don't want our inner entities (like Client) to be serialized wholly, too.
My idea was to implement a custom Wicket serializer that checks whether the object is an entity and whether it is persisted. If so, store only its id; otherwise use the default serialization. Similarly, when deserializing, use the stored id to fetch the entity from the DB, or fall back to the default mechanism. I'm not sure, though, how to do that in a generic way. My next thought was that if we can do this, then we don't need any LDM anymore: we can just store all our entities in simple org.apache.wicket.model.Model models and our serialization logic will take care of them, right?
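For what it's worth, that id-swapping idea can be sketched with Java's own serialization hooks; everything below is hypothetical, not working code:
import java.io.ObjectStreamException;
import java.io.Serializable;

// written to the stream in place of a persisted entity
final class EntityRef implements Serializable {
    private final Class<?> entityClass;
    private final Long id;

    EntityRef(Class<?> entityClass, Long id) {
        this.entityClass = entityClass;
        this.id = id;
    }

    // on deserialization, swap the reference back for the real entity
    private Object readResolve() throws ObjectStreamException {
        return DAO.find(entityClass, id); // hypothetical lookup
    }
}

// and inside a common entity base class:
private Object writeReplace() throws ObjectStreamException {
    // persisted (id assigned): serialize only a reference;
    // not yet persisted: fall back to default serialization
    return getId() != null ? new EntityRef(getClass(), getId()) : this;
}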
Here's some code:
@Entity
class Client {
    String clientName;
    @ManyToOne(fetch = FetchType.LAZY)
    ClientGroup group;
}

@Entity
class Contract {
    Date date;
    @ManyToOne(fetch = FetchType.LAZY)
    Client client;
}
class ContractWizard extends Wizard {
    ContractWizard(String markupId, IModel<Contract> model) {
        super(markupId);
        setDefaultModel(model);
    }
}
Contract contract = DAO.createEntity(Contract.class);
ContractWizard wizard = new ContractWizard("wizard", ?);
How to pass the contract? If we just say Model.of(contract) the whole contract will be serialized along with inner client (and it can be big), moreover if we access contract.client.group after deserialization we can bump into the problem: https://en.wikibooks.org/wiki/Java_Persistence/Relationships#Serialization.2C_and_Detaching
So I wonder how people go about solving such issues, I'm sure it's a fairly common problem.
I guess there are two approaches to your problem:
a) Only keep the stuff the user actually sees in Models. In your example that might be contractStartDate, contractEndDate and a list of clientIds. That's the main approach if you don't want your database objects in your view.
b) Write your own LoadableDetachableModel and make sure you only serialize transient objects. For example (assuming that any negative id is not saved to the database):
public class MyLoadableDetachableModel extends LoadableDetachableModel {
    private Object myObject;
    private Integer id;

    public MyLoadableDetachableModel(Object myObject) {
        this.myObject = myObject;
        this.id = myObject.getId();
    }

    @Override
    protected Object load() {
        if (id < 0) {
            return myObject;
        }
        return myObjectDao.getMyObjectById(id);
    }

    @Override
    protected void onDetach() {
        super.onDetach();
        if (myObject != null) { // guard: myObject is nulled out once persisted
            id = myObject.getId();
            if (id >= 0) {
                myObject = null;
            }
        }
    }
}
The downside of this is that you'll have to make your database objects Serializable, which is not really ideal and can lead to all kinds of problems. You would also need to decouple the references to other entities from the transient object by using a ListModel.
Having worked with both approaches, I personally prefer the first. From my experience, the whole business of injecting DAO objects into Wicket can lead to disaster. :) I would only use it in view-only projects that aren't too big.
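Hypothetical wiring of approach b) into the wizard from the question (the sketch above is raw-typed, so this compiles only with unchecked warnings):
Contract contract = DAO.createEntity(Contract.class); // id still negative, i.e. unsaved
ContractWizard wizard = new ContractWizard("wizard",
        new MyLoadableDetachableModel(contract));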
Most projects I know of just accept serializing referenced entities (e.g. your Clients) along with the edited entity (Contract).
Using conversations (keeping a Hibernate/JPA session open over several requests) is a nice alternative for applications with complex entity relations:
The Hibernate session and its entities are kept separate from the page and are never serialized. The component just keeps an identifier to fetch its conversation.

JPA handle merge() of relationship

I have a unidirectional relation Project -> ProjectType:
#Entity
public class Project extends NamedEntity
{
#ManyToOne(optional = false)
#JoinColumn(name = "TYPE_ID")
private ProjectType type;
}
#Entity
public class ProjectType extends Lookup
{
#Min(0)
private int progressive = 1;
}
Note that there's no cascade.
Now, when I insert a new Project I need to increment the type progressive.
This is what I'm doing inside an EJB, but I'm not sure it's the best approach:
public void create(Project project)
{
    em.persist(project);
    /* is it necessary to merge the type? */
    ProjectType type = em.merge(project.getType());
    /* is it necessary to set the type again? */
    project.setType(type);
    int progressive = type.getProgressive();
    type.setProgressive(progressive + 1);
    project.setCode(type.getPrefix() + progressive);
}
I'm using eclipselink 2.6.0, but I'd like to know if there's a implementation independent best practice and/or if there are behavioral differences between persistence providers, about this specific scenario.
UPDATE
to clarify the context when entering the EJB create method (it is invoked by a JSF @ManagedBean):
project.projectType is DETACHED
project is NEW
no transaction is active (I'm using JTA/CMT)
I am not asking about the difference between persist() and merge(). I'm asking:
whether em.persist(project) automatically "reattaches" project.projectType (I suppose not)
whether the call order, first em.persist(project) and then em.merge(projectType), is legal, or whether it should be inverted
since em.merge(projectType) returns a different instance, whether it is required to call project.setType(managedProjectType)
An explanation of why it works one way and not another is also welcome.
You need merge(...) only to make a detached or transient entity managed by your entity manager. Depending on the JPA implementation (not sure about EclipseLink), the instance returned by the merge call might be a different copy of the original object.
MyEntity unmanaged = new MyEntity();
MyEntity managed = entityManager.merge(unmanaged);
assert(entityManager.contains(managed)); // true if everything worked out
assert(managed != unmanaged); // probably true, depending on JPA impl.
If you call merge(entity) where entity is already managed, nothing will happen.
Calling persist(entity) will also make your entity managed, but it returns no copy; the original object itself becomes managed. It might also invoke an ID generator (e.g. a sequence), which is not the case when using merge.
See this answer for more details on the difference between persist and merge.
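To illustrate the contrast with the merge snippet above (same MyEntity placeholder, a sketch only):
MyEntity entity = new MyEntity();
entityManager.persist(entity);          // no copy: entity itself becomes managed
assert(entityManager.contains(entity)); // true within the transaction
// unlike with merge, an ID generator (e.g. a sequence) may already
// have been invoked for "entity" at this point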
Here's my proposal:
public void create(Project project) {
    ProjectType type = project.getType(); // maybe check if null
    if (!entityManager.contains(type)) { // type is transient
        type = entityManager.merge(type); // or load the type
        project.setType(type); // update the reference
    }
    int progressive = type.getProgressive();
    type.setProgressive(progressive + 1); // mark as dirty, update on flush

    // set "code" before persisting "project" ...
    project.setCode(type.getPrefix() + progressive);
    entityManager.persist(project);
    // ... now no additional UPDATE is required after the
    // INSERT on "project".
}
UPDATE
whether em.persist(project) automatically "reattaches" project.projectType (I suppose not)
No. You'll probably get an exception (Hibernate throws one, anyway) stating that you're trying to merge a transient reference.
Correction: I tested it with Hibernate and got no exception. The project was created with the unmanaged project type (which was managed and then detached before persisting the project). But the project type's progressive was not incremented, as expected, since it wasn't managed. So yeah, merge it before persisting the project.
whether the call order, first em.persist(project) and then em.merge(projectType), is legal, or whether it should be inverted
It's best practice to do so. But when both statements are executed within the same batch (before the entity manager gets flushed) it may even work to merge the type after persisting the project; in my test it did. Still, as I said, it's better to merge the entities before persisting new ones.
since em.merge(projectType) returns a different instance, whether it is required to call project.setType(managedProjectType)
Yes. See example above. A persistence provider may return the same reference, but it isn't required to. So to be sure, call project.setType(mergedType).
Do you need to merge? Well, it depends. According to the merge() javadoc:
"Merge the state of the given entity into the current persistence context."
How did you get hold of the ProjectType instance you attach to your Project? If that instance is already managed then all you need to do is
type.setProgressive(type.getProgressive() + 1)
and JPA will automatically issue an update on the next context flush.
Otherwise, if the type is not managed, you need to merge it first.
Although not directly related, this question has some good insight on persist vs merge: JPA EntityManager: Why use persist() over merge()?
Regarding the call order of em.persist(project) vs em.merge(projectType): you should probably ask yourself what should happen if the type is gone from the database. If you merge the type first, it will get re-inserted; if you persist the project first and you have an FK constraint, the insert will fail (because nothing is cascading).
In this code, merge basically stores the record in a different object. Let's say there is an Account pojo:
Account account = null;
account = entityManager.merge(account);
Then you can store the result of this.
But in your code you are using merge in a different situation:
public void create(Project project)
{
    em.persist(project);
    /* is it necessary to merge the type? */
    ProjectType type = em.merge(project.getType());
}
Here Project and ProjectType are two different pojos; you use merge on the same pojo whose state you want merged. If there is a relationship between your pojos, then you can also use it that way.

Spring Data Neo4J #Fetch Issue seen with Jackson Serialization

I'm trying to figure out why the Jackson JSON serialization of a collection of 250 objects is taking 40 seconds, and I think I have narrowed it down to SDN lazy loading. I'm using @Fetch, but it still seems as if it is asking the database for the delegate for every attribute of every node in the collection. Please ignore any typos, as I have to hand-type this (copy-paste isn't an option); rest assured the class compiles as expected. The (simplified) class being serialized:
@NodeEntity
public class NodeWithDelegate {
    @RelatedTo(type="REL_NAME", direction=Direction.OUTGOING)
    @Fetch private DelegateNode delegate;

    private DelegateNode getInitializedDelegate() {
        if (delegate == null) {
            delegate = new DelegateNode();
        }
        return delegate;
    }

    public String getDelegateAttribute1() {
        return delegate == null ? null : delegate.getAttribute1();
    }

    public void setDelegateAttribute1(String attribute1) {
        getInitializedDelegate().setAttribute1(attribute1);
    }

    ....

    public String getDelegateAttribute15() {
        return delegate == null ? null : delegate.getAttribute15();
    }

    public void setDelegateAttribute15(String attribute15) {
        getInitializedDelegate().setAttribute15(attribute15);
    }
}
The DelegateNode class is exactly what you would expect: just a simple @NodeEntity POJO containing 15 String, Integer or Boolean attributes.
So two questions really:
how can I tell for sure whether an object is actually being eagerly loaded? I'm using Eclipse.
for debugging purposes: if the objects are all eagerly loaded, and I put a breakpoint between the fetching of the collection from the database and the serializer which calls all the delegate getters, and shut down the database while paused, should it still work? Is there any reason the objects would need to talk to the database at that point if everything is eagerly loaded?
I guess I should mention I'm using the REST API for Neo4j.
Many thanks in advance!
I am assuming you are using the 3.x version of Spring Data Neo4j.
This version is not very optimized for the REST API; if you enable logging of the Cypher queries you will see many of them. Example for log4j:
log4j.category.org.springframework.data.neo4j.support.query=DEBUG
You can work around this limitation by using a custom Cypher query and mapping the result with the @QueryResult annotation.
As for your first question: using that logging you should see whether your objects are being loaded eagerly.
As for the second: it should work, unless there is something "lazy" in the DelegateNode itself.
