Wicket - Serialization of persisted and non-persisted JPA entities - java

I know that when using Wicket with JPA frameworks it is not advisable to serialize entities that have already been persisted to the database (because of problems with lazy fields and to save space). In such cases we are supposed to use LoadableDetachableModel. But what about the following use-case?
Suppose we want to create a new entity (say, a Contract) which will consist, among other things, of persisted entities (say, a Client which is selected from a list of clients stored in the DB). The entity under creation is a model object of some Wicket component (say, a Wizard). In the end (when we finish our wizard) we save the new entity to the DB. So my question is: what is the best generic solution to the serialization problem of such model objects? We can't use LDM because the entity is not in the DB yet but we don't want our inner entities (like Client) to be serialized wholly, too.
My idea was to implement a custom wicket serializer that checks if the object is an entity and if it is persisted. If so, store only its id, otherwise use the default serialization. Similarly, when deserializing use the stored id and get the entity from the DB or deserialize using the default mechanism. Not sure, though, how to do that in a generic way. My next thought was that if we can do it, then we do not need any LDM anymore, we can just store all our entities in simple org.apache.wicket.model.Model models and our serialization logic will take care of them, right?
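One way to picture that idea before looking at the answers: Java serialization's writeReplace/readResolve hooks can swap a persisted entity for a lightweight id holder. This is only a sketch under assumptions of my own; EntityReference, DaoLocator and getId() are hypothetical names, not part of Wicket, JPA, or the question.

import java.io.Serializable;

// Hypothetical id-holding stand-in written out instead of a persisted entity.
class EntityReference implements Serializable {

    private final Class<?> entityClass;
    private final Long id;

    EntityReference(Class<?> entityClass, Long id) {
        this.entityClass = entityClass;
        this.id = id;
    }

    // Java serialization hook: on deserialization, swap the reference
    // back for the managed entity loaded from the database.
    private Object readResolve() {
        return DaoLocator.get().find(entityClass, id);
    }
}

// Stub for whatever mechanism locates the DAO/EntityManager at runtime (hypothetical).
interface DaoLocator {
    static DaoLocator get() { throw new UnsupportedOperationException("wire to your DI container"); }
    <T> T find(Class<T> entityClass, Long id);
}

// Hypothetical base class for entities taking part in the scheme: persisted
// entities (id != null) are replaced by an EntityReference, transient ones
// are serialized as usual.
abstract class AbstractEntity implements Serializable {

    public abstract Long getId();

    private Object writeReplace() {
        return getId() != null ? new EntityReference(getClass(), getId()) : this;
    }
}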
Here's some code:
@Entity
class Client {
    String clientName;

    @ManyToOne(fetch = FetchType.LAZY)
    ClientGroup group;
}

@Entity
class Contract {
    Date date;

    @ManyToOne(fetch = FetchType.LAZY)
    Client client;
}
class ContractWizard extends Wizard {

    ContractWizard(String markupId, IModel<Contract> model) {
        super(markupId);
        setDefaultModel(model);
    }
}
Contract contract = DAO.createEntity(Contract.class);
ContractWizard wizard = new ContractWizard("wizard", ?);
How to pass the contract? If we just say Model.of(contract) the whole contract will be serialized along with inner client (and it can be big), moreover if we access contract.client.group after deserialization we can bump into the problem: https://en.wikibooks.org/wiki/Java_Persistence/Relationships#Serialization.2C_and_Detaching
So I wonder how people go about solving such issues, I'm sure it's a fairly common problem.

I guess there are 2 approaches to your problem:
a.) Only keep the values the user actually sees in Models. In your example that might be contractStartDate, contractEndDate and a list of clientIds (see the sketch at the end of this answer). That's the main approach if you don't want your database objects in your view.
b.) Write your own LoadableDetachableModel and make sure you only serialize transient objects, for example like this (assuming that any negative id means the object is not saved to the database yet):
public class MyLoadableDetachableModel extends LoadableDetachableModel<MyObject> {

    private MyObject myObject;
    private Integer id;

    private MyObjectDao myObjectDao; // injected, e.g. via @SpringBean

    public MyLoadableDetachableModel(MyObject myObject) {
        this.myObject = myObject;
        this.id = myObject.getId();
    }

    @Override
    protected MyObject load() {
        if (id < 0) {
            return myObject;
        }
        return myObjectDao.getMyObjectById(id);
    }

    @Override
    protected void onDetach() {
        super.onDetach();
        if (myObject != null) {
            id = myObject.getId();
            if (id >= 0) {
                myObject = null;
            }
        }
    }
}
The downside of this is that you'll have to make your database objects Serializable, which is not really ideal and can lead to all kinds of problems. You would also need to decouple the references to other entities from the transient object, for example by using a ListModel.
Having worked with both approaches I personally prefer the first. From my experience, injecting DAO objects into Wicket components can lead to disaster. :) I would only use that in view-only projects that aren't too big.
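To make approach a) concrete, here is a minimal sketch; ContractFormData and its fields are assumptions of mine, not from the answer:

import java.io.Serializable;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Form-backing bean holding only plain serializable values, no JPA entities.
public class ContractFormData implements Serializable {

    private Date contractStartDate;
    private Date contractEndDate;
    private List<Long> clientIds = new ArrayList<>();

    public Date getContractStartDate() { return contractStartDate; }
    public void setContractStartDate(Date contractStartDate) { this.contractStartDate = contractStartDate; }

    public Date getContractEndDate() { return contractEndDate; }
    public void setContractEndDate(Date contractEndDate) { this.contractEndDate = contractEndDate; }

    public List<Long> getClientIds() { return clientIds; }
    public void setClientIds(List<Long> clientIds) { this.clientIds = clientIds; }
}

The wizard would then operate on Model.of(formData); the real Contract and its Client references are only assembled and saved in the final step, loading the clients by id.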

Most projects I know of just accept serializing referenced entities (e.g. your Clients) along with the edited entity (Contract).
Using conversations (keeping a Hibernate/JPA session open over several requests) is a nice alternative for applications with complex entity relations:
The Hibernate session and its entities are kept separate from the page and are never serialized. The component just keeps an identifier to fetch its conversation.

Related

Returning Entity from Service method is a bad practice?

I've heard that when you want to return some object from a service method, you should define a DTO object (or a POJO generated from a JSON Schema) instead of using an Entity.
To make it clear, here is the example:
We have an entity and a jpa repository for it:
@Data
@Entity
@Table(name = "tables")
public class Table {

    @Id
    private Long id;

    private String brand;
}
This is a bad practice:
@Service
public class MyService {

    @Autowired
    private TableRepository tableRepository;

    @Transactional
    public Table create() {
        Table table = new Table();
        // Some logic for creating and saving table
        return table;
    }
}
This is a good practice:
@Service
public class MyService {

    @Autowired
    private TableRepository tableRepository;

    @Transactional
    public TableDTO create() {
        Table table = new Table();
        // Some logic for creating and saving table
        // Logic for converting Table object to TableDTO object
        return tableDTO;
    }
}
Why is this so?
Thank you!
You probably mean a DTO (Data Transfer Object), not a DAO (Data Access Object). Let me clarify the terms:
Data Transfer Object:
A POJO that represents a piece of information, usually aggregating data from several sources.
Data Access Object:
An object that performs access to some kind of persistence storage to retrieve information; some consider it a synonym of Repository, others do not.
Entity:
An object that represents data that has been retrieved from the database.
Why is returning an Entity from the Service considered a bad practice?
The reason is that an Entity is something very close to the database. It contains the primary key, someone could guess your database structure from it, and the data returned for a query can be verbose. Hence, it is preferable to have some kind of logic, usually a mapper, that hides the primary key and aggregates data so that it is less verbose and does not expose the DB structure. Also, while the Entity is built on the table structure, the DTO can be customized based on the caller's needs. Usually it contains exactly the data needed for some action and nothing more. Suppose you have third-party software that calls your backend services: you should not expose the DB structure (Entities) to this service. It is better to define a contract with the minimal information needed for this third-party service to operate, and expose only that part of the information, hiding all the rest.
Hope that's a little bit more clear now.
Edit:
Of course there are other good reasons for using DTOs instead of Entities; this is only an introductory explanation of the subject.
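As an illustrative sketch of the "good practice" variant above (the TableDTO fields and the mapper method are assumptions, not from the original post), the DTO typically exposes only what the caller needs and hides the primary key:

// Hypothetical DTO for the Table entity from the question.
public class TableDTO {

    private final String brand;

    public TableDTO(String brand) {
        this.brand = brand;
    }

    public String getBrand() {
        return brand;
    }

    // Minimal mapper that hides the primary key and any other persistence details.
    public static TableDTO fromEntity(Table table) {
        return new TableDTO(table.getBrand());
    }
}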

Should I use model classes or payload classes to serialize a JSON response?

I'm using Spring Boot with MySQL to create a RESTful API. Here's an example of how I return a JSON response.
First, I have a model:
@Entity
public class Movie extends DateAudit {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
    private Date releaseDate;
    private Time runtime;
    private Float rating;
    private String storyline;
    private String poster;
    private String rated;

    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieMedia> movieMedia = new ArrayList<>();

    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieReview> movieReviews = new ArrayList<>();

    @OneToMany(mappedBy = "movie", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<MovieCelebrity> movieCelebrities = new ArrayList<>();

    // Setters & Getters
}
and the corresponding repository:
@Repository
public interface MovieRepository extends JpaRepository<Movie, Long> {
}
I also have a payload class MovieResponse which represents a movie instead of the Movie model, for cases where I need extra fields or need to return only specific fields.
public class MovieResponse {

    private Long id;
    private String name;
    private Date releaseDate;
    private Time runtime;
    private Float rating;
    private String storyline;
    private String poster;
    private String rated;
    private List<MovieCelebrityResponse> cast = new ArrayList<>();
    private List<MovieCelebrityResponse> writers = new ArrayList<>();
    private List<MovieCelebrityResponse> directors = new ArrayList<>();

    // Constructors, getters and setters

    public void setCelebrityRoles(List<MovieCelebrityResponse> movieCelebrities) {
        this.setCast(movieCelebrities.stream()
                .filter(movieCelebrity -> movieCelebrity.getRole().equals(CelebrityRole.ACTOR))
                .collect(Collectors.toList()));
        this.setDirectors(movieCelebrities.stream()
                .filter(movieCelebrity -> movieCelebrity.getRole().equals(CelebrityRole.DIRECTOR))
                .collect(Collectors.toList()));
        this.setWriters(movieCelebrities.stream()
                .filter(movieCelebrity -> movieCelebrity.getRole().equals(CelebrityRole.WRITER))
                .collect(Collectors.toList()));
    }
}
As you can see, I divide the movieCelebrities list into 3 lists (cast, directors and writers).
And to map a Movie to a MovieResponse I'm using a ModelMapper class:
public class ModelMapper {

    public static MovieResponse mapMovieToMovieResponse(Movie movie) {
        // Create a new MovieResponse and assign the Movie data to it
        MovieResponse movieResponse = new MovieResponse(movie.getId(), movie.getName(), movie.getReleaseDate(),
                movie.getRuntime(), movie.getRating(), movie.getStoryline(), movie.getPoster(), movie.getRated());

        // Get MovieCelebrities for the current Movie
        List<MovieCelebrityResponse> movieCelebrityResponses = movie.getMovieCelebrities().stream().map(movieCelebrity -> {
            // Get the Celebrity for the current MovieCelebrity
            CelebrityResponse celebrityResponse = new CelebrityResponse(movieCelebrity.getCelebrity().getId(),
                    movieCelebrity.getCelebrity().getName(), movieCelebrity.getCelebrity().getPicture(),
                    movieCelebrity.getCelebrity().getDateOfBirth(), movieCelebrity.getCelebrity().getBiography(), null);
            return new MovieCelebrityResponse(movieCelebrity.getId(), movieCelebrity.getRole(),
                    movieCelebrity.getCharacterName(), null, celebrityResponse);
        }).collect(Collectors.toList());

        // Assign movieCelebrityResponses to movieResponse
        movieResponse.setCelebrityRoles(movieCelebrityResponses);
        return movieResponse;
    }
}
And finally, here's my MovieService implementation which I call from the controller:
@Service
public class MovieServiceImpl implements MovieService {

    private MovieRepository movieRepository;

    @Autowired
    public void setMovieRepository(MovieRepository movieRepository) {
        this.movieRepository = movieRepository;
    }

    public PagedResponse<MovieResponse> getAllMovies(Pageable pageable) {
        Page<Movie> movies = movieRepository.findAll(pageable);

        if (movies.getNumberOfElements() == 0) {
            return new PagedResponse<>(Collections.emptyList(), movies.getNumber(),
                    movies.getSize(), movies.getTotalElements(), movies.getTotalPages(), movies.isLast());
        }

        List<MovieResponse> movieResponses = movies.map(ModelMapper::mapMovieToMovieResponse).getContent();
        return new PagedResponse<>(movieResponses, movies.getNumber(),
                movies.getSize(), movies.getTotalElements(), movies.getTotalPages(), movies.isLast());
    }
}
So the question is: is it fine to have a payload class for each model for JSON serialization, or is there a better way?
Also, if there's anything wrong with my code, feel free to comment.
I had this dilemma not so long ago; this was my thought process (I also posted it here: https://stackoverflow.com/questions/44572188/microservices-restful-api-dtos-or-not).
The Pros of Just exposing Domain Objects
The less code you write, the less bugs you produce.
Despite having extensive (arguably) test cases in our code base, I have come across bugs due to missed or wrong copying of fields from domain to DTO or vice versa.
Maintainability - less boilerplate code.
If I have to add a new attribute, I don't have to add it to the Domain, the DTO, the Mapper and the test cases. And don't tell me this can be achieved with reflection/bean-copy utilities like Dozer or MapStruct; that defeats the whole purpose.
Lombok, Groovy, Kotlin, I know, but they only save me the getter/setter headache.
DRY
Performance
I know this falls under the category of "premature optimization is the root of all evil". But still, this saves some CPU cycles by not having to create (and later garbage collect) at least one more object per request.
Cons
DTOs will give you more flexibility in the long run
If only I ever needed that flexibility. At least, whatever I have come across so far are CRUD operations over HTTP which I can manage with a couple of @JsonIgnore annotations (see the sketch after this list). And if there are one or two fields that need a transformation which cannot be done with a Jackson annotation, as I said earlier, I can write custom logic to handle just that.
Domain Objects getting bloated with Annotations.
This is a valid concern. If I use JPA or MyBatis as my persistence framework, the domain object might have those annotations, and then there will be Jackson annotations too. If you are using Spring Boot you can get away with application-wide properties like mybatis.configuration.map-underscore-to-camel-case: true and spring.jackson.property-naming-strategy: SNAKE_CASE.
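As a minimal sketch of the @JsonIgnore point mentioned above (the Account entity and its fields are hypothetical, purely for illustration):

import com.fasterxml.jackson.annotation.JsonIgnore;
import javax.persistence.Entity;
import javax.persistence.Id;

// Entity exposed directly over HTTP; the sensitive field is simply hidden from the JSON view.
@Entity
public class Account {

    @Id
    private Long id;

    private String username;

    @JsonIgnore // never serialized into the response
    private String passwordHash;

    // getters/setters omitted
}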
Short story: at least in my case, the cons didn't outweigh the pros, so it did not make any sense to repeat myself by having a new POJO as a DTO. Less code, fewer chances of bugs. So I went ahead with exposing the domain object and not having a separate "view" object.
Disclaimer: this may or may not be applicable in your use case. This observation is based on my use case (basically a CRUD API with around 15 endpoints).
We should keep each layer separate from the others. In your case, you have defined entity and response classes; that is the right way to separate things, and we should never send the entity in the response. Even for requests we should have a dedicated class.
What is the issue if we send the entity instead of a response DTO?
We are no longer free to modify the entity, because we have already exposed it to our clients.
Sometimes we don't want to serialize some fields and send them in the response.
There is some overhead in translating request to domain, entity to DTO, etc., but it is worth it to keep things organized. ModelMapper is a good choice for the translation.
Try to use constructor injection instead of setter injection for mandatory dependencies.
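A minimal sketch of that last point, reusing the class names from the question (the constructor itself is my assumption, not the original code):

import org.springframework.stereotype.Service;

@Service
public class MovieServiceImpl implements MovieService {

    private final MovieRepository movieRepository;

    // Constructor injection: the dependency is mandatory and the field can be final.
    public MovieServiceImpl(MovieRepository movieRepository) {
        this.movieRepository = movieRepository;
    }

    // ... the rest of the service stays the same
}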
It is always recommended to separate DTO and Entity.
The Entity should interact with the DB/ORM and the DTO should interact with the client layer (the layer for request and response), even if the structure of the Entity and the DTO is the same.
Here the Entity is Movie and
the DTO is MovieResponse.
Use your existing class MovieResponse for request and response.
Never use the Movie class for request and response.
The class MovieServiceImpl should contain the business logic for converting Entity to DTO, or you can use the Dozer API to do the conversion automatically.
The reasons for separating them:
If you need to add/remove elements in the request/response, you don't have to change much code.
If two entities have a two-way mapping (e.g. a one-to-many/many-to-many relationship), the JSON object can't be created from the nested data; serialization will throw an error.
If anything changes in the DB or the Entity, it will not affect the JSON response (most of the time).
The code will be clearer and easier to maintain.
On one side, you should separate them because sometimes some of the JPA annotations you use in your model don't play well with the JSON processor annotations. And yes, you should keep the two separated.
What if you later decide to change your data layer? Will you have to rewrite all of your client side?
On the other side, there is the problem of mapping. For that, you can use a library, at a small performance cost.
DTO is a design pattern that solves the problem of fetching as much useful data from a service as possible in a single call.
In a simple application like yours, the DTOs tend to be similar to the Entity classes. However, for more complex applications, DTOs can be extended to combine data from various entities to avoid multiple requests to the server and thus save valuable resources and request-response time.
I would suggest not duplicating the code in a simple case like this and using the model classes in the API responses as well. Using separate response classes as DTOs will not serve any purpose and will only make the code harder to maintain.
While most people have covered the pros and cons of using DTO objects, I would like to give my 2 cents. In my case a DTO was necessary because not all fields persisted in the database were captured from the user: a few fields were computed from the user's input (of other fields) and were not exposed to users. A DTO can also reduce the size of the payload, which can result in better performance in such cases.
I advocate for separating the "Payload" or "Data" object from the "Model" or "Display" object. Pretty much always. This just keeps things easier to manage.
Here's an example:
Let's say you need to hit an API that gives you data about cats for sale. Then you parse the data into a cat model object and populate a list of cats that is then displayed to the user. Cool.
But now you want to integrate another API and pull cats from 2 databases. But you run into a problem. One API returns furColor for the color and the new one returns catColor for the color.
If you were using the same object to also display the info, you have some options:
Add both furColor and catColor to the model object, make them both optional, and do some kind of computed property to check which one is set and use that one to display the color
In reality, this is rarely an option because the responses will usually differ by much more than just one value like this, so you would likely need a whole new parser anyway
Add a new data object and then also a new adapter and then have to do some kind of check to know which adapter to use when
Something else that still isn't pretty or fun to work with
However, if you create a data object that catches the response, and then a display object that has only the info needed to populate the list, this becomes really easy:
You have a data object that captures the response from the first API
Now make a data object that captures the response from the second API
Now all you need is some kind of simple mapper to map the response to the Display Object
Now both will be converted to a common simple display object, and the same adapter can be used to display the new cats without additional work
This also will make storing the data locally much cleaner.
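A minimal sketch of that idea; all class and field names here are hypothetical, for illustration only:

// One response object per API.
class ApiOneCat {
    String furColor;
}

class ApiTwoCat {
    String catColor;
}

// The single display object the UI actually consumes.
class CatDisplay {

    final String color;

    CatDisplay(String color) {
        this.color = color;
    }
}

// Each API response is converted to the common display object, so the same
// adapter/list code can render cats from either source.
class CatMappers {

    static CatDisplay fromApiOne(ApiOneCat cat) {
        return new CatDisplay(cat.furColor);
    }

    static CatDisplay fromApiTwo(ApiTwoCat cat) {
        return new CatDisplay(cat.catColor);
    }
}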

How to maintain bi-directional relationships with Spring Data REST and JPA?

Working with Spring Data REST, if you have a OneToMany or ManyToOne relationship, the PUT operation returns 200 on the "non-owning" entity but does not actually persist the joined resource.
Example Entities:
@Entity(name = 'author')
@ToString
class AuthorEntity implements Author {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    String fullName

    @ManyToMany(mappedBy = 'authors')
    Set<BookEntity> books
}

@Entity(name = 'book')
@EqualsAndHashCode
class BookEntity implements Book {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    @Column(nullable = false)
    String title

    @Column(nullable = false)
    String isbn

    @Column(nullable = false)
    String publisher

    @ManyToMany(fetch = FetchType.LAZY, cascade = [CascadeType.ALL])
    Set<AuthorEntity> authors
}
If you back them with a PagingAndSortingRepository, you can GET a Book, follow the authors link on the book and do a PUT with the URI of an author to associate it with. You cannot go the other way.
If you do a GET on an Author and do a PUT on its books link, the response returns 200, but the relationship is never persisted.
Is this the expected behavior?
tl;dr
The key to that is not so much anything in Spring Data REST - as you can easily get it to work in your scenario - but making sure that your model keeps both ends of the association in sync.
The problem
The problem you see here arises from the fact that Spring Data REST basically modifies the books property of your AuthorEntity. That itself doesn't reflect this update in the authors property of the BookEntity. This has to be worked around manually, which is not a constraint that Spring Data REST makes up but the way that JPA works in general. You will be able to reproduce the erroneous behavior by simply invoking setters manually and trying to persist the result.
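A sketch of reproducing the behavior in plain code; the repositories here are hypothetical Spring Data interfaces assumed for the question's entities, not part of the original post:

// Hypothetical Spring Data repositories for the question's entities.
interface AuthorRepository extends org.springframework.data.repository.CrudRepository<AuthorEntity, Long> {}
interface BookRepository extends org.springframework.data.repository.CrudRepository<BookEntity, Long> {}

// Test-style reproduction of the problem: updating only the inverse side.
void reproduce(AuthorRepository authors, BookRepository books, Long authorId, Long bookId) {
    AuthorEntity author = authors.findById(authorId).orElseThrow();
    BookEntity book = books.findById(bookId).orElseThrow();

    author.getBooks().add(book);   // only AuthorEntity.books (the inverse side) is touched
    authors.save(author);          // nothing is written to the join table, because the
                                   // owning side (BookEntity.authors) was never updated
}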
How to solve this?
If removing the bi-directional association is not an option (see below on why I'd recommend this) the only way to make this work is to make sure changes to the association are reflected on both sides. Usually people take care of this by manually adding the author to the BookEntity when a book is added:
class AuthorEntity {

    void add(BookEntity book) {
        this.books.add(book);
        if (!book.getAuthors().contains(this)) {
            book.add(this);
        }
    }
}
The additional if clause would have to be added on the BookEntity side as well if you want to make sure that changes from the other side are propagated, too. The if is basically required as otherwise the two methods would constantly call each other.
Spring Data REST, by default, uses field access, so there's actually no method you can put this logic into. One option would be to switch to property access and put the logic into the setters. Another option is to use a method annotated with @PreUpdate/@PrePersist that iterates over the entities and makes sure the modifications are reflected on both sides.
Removing the root cause of the issue
As you can see, this adds quite a lot of complexity to the domain model. As I joked on Twitter yesterday:
#1 rule of bi-directional associations: don't use them… :)
It usually simplifies the matter if you try not to use bi-directional relationship whenever possible and rather fall back to a repository to obtain all the entities that make up the backside of the association.
A good heuristics to determine which side to cut is to think about which side of the association is really core and crucial to the domain you're modeling. In your case I'd argue that it's perfectly fine for an author to exist with no books written by her. On the flip side, a book without an author doesn't make too much sense at all. So I'd keep the authors property in BookEntity but introduce the following method on the BookRepository:
interface BookRepository extends Repository<Book, Long> {

    List<Book> findByAuthor(Author author);
}
Yes, that requires all clients that previously could just have invoked author.getBooks() to now work with a repository. But on the positive side you've removed all the cruft from your domain objects and created a clear dependency direction from book to author along the way. Books depend on authors but not the other way round.
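A call site then becomes something like this sketch:

// Instead of author.getBooks():
List<Book> booksByAuthor = bookRepository.findByAuthor(author);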
I faced a similar problem: while sending my POJO (containing a bi-directional @OneToMany and @ManyToOne mapping) as JSON via a REST API, the data was persisted in both the parent and child entities but the foreign key relation was not established. This happens because bidirectional associations need to be maintained manually.
JPA provides the @PrePersist annotation, which ensures that the annotated method is executed before the entity is persisted. Since JPA first inserts the parent entity into the database followed by the child entity, I included a method annotated with @PrePersist which iterates through the list of child entities and manually sets the parent entity on each of them.
In your case it would be something like this:
class AuthorEntity {

    @PrePersist
    public void populateBooks() {
        for (BookEntity book : books) {
            book.addToAuthorList(this);
        }
    }
}

class BookEntity {

    @PrePersist
    public void populateAuthors() {
        for (AuthorEntity author : authors) {
            author.addToBookList(this);
        }
    }
}
After this you might get an infinite recursion error; to avoid that, annotate the parent side with @JsonManagedReference and the child side with @JsonBackReference. This solution worked for me, hopefully it will work for you too.
This link has a very good tutorial on how you can navigate the recursion problem: Bidirectional Relationships
I was able to use @JsonManagedReference and @JsonBackReference and it worked like a charm.
I believe one can also utilize @RepositoryEventHandler by adding a @HandleBeforeLinkSave handler to cross-link the bidirectional relation between entities. This seems to be working for me.
@Component
@RepositoryEventHandler
public class BiDirectionalLinkHandler {

    @HandleBeforeLinkSave
    public void crossLink(Author author, Collection<Book> books) {
        for (Book b : books) {
            b.setAuthor(author);
        }
    }
}
Note: @HandleBeforeLinkSave is dispatched based on the first parameter; if you have multiple relations in your equivalent of an Author class, the second parameter should be Object and you will need to test within the method for the different relation types.

Avoid having JPA automatically persist objects

Is there any way to avoid having JPA automatically persist objects?
I need to use a third-party API and I have to pull/push data from/to it. I've got a class responsible for interfacing with the API, with a method like this:
public User pullUser(int userId) {
    Map<String, String> userData = getUserDataFromApi(userId);
    return new UserJpa(userId, userData.get("name"));
}
Where the UserJpa class looks like:
@Entity
@Table
public class UserJpa implements User {

    @Id
    @Column(name = "id", nullable = false)
    private int id;

    @Column(name = "name", nullable = false, length = 20)
    private String name;

    public UserJpa() {
    }

    public UserJpa(int id, String name) {
        this.id = id;
        this.name = name;
    }
}
When I call the method (e.g. pullUser(1)), the returned user is automatically stored in the database. I don't want this to happen, is there a solution to avoid it? I know a solution could be to create a new class implementing User and return an instance of this class in the pullUser() method, is this a good practice?
Thank you.
A newly created instance of UserJpa is not persisted in pullUser. I also assume there is no odd implementation in getUserDataFromApi that actually persists something for the same id.
In your case the entity manager knows nothing about the new instance of UserJpa. Generally, entities are persisted via merge/persist calls or as a result of a cascaded merge/persist operation. Check for these elsewhere in the code base.
The only way a new entity gets persisted in JPA is by explicitly calling the EntityManager's persist() or merge() methods. Look in your code for calls to either one of them; that's where the persist operation is occurring, and refactor the code to perform the persistence elsewhere.
Generally, JPA objects are managed objects: they reflect their changes to the database when the transaction completes (and before that, in the first-level cache), but obviously the objects need to become managed in the first place.
I really think the best practice is to use a DTO object to handle the data transfer and use the entity just for persistence purposes. That way the design is more cohesive and has lower coupling, with no object sticking its nose where it shouldn't.
Hope it helps.
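A minimal sketch of that DTO suggestion; UserDto is a hypothetical class, and I am assuming the question's User interface exposes the id and name:

// Plain DTO with no JPA annotations, so it can never become managed.
public class UserDto implements User {

    private final int id;
    private final String name;

    public UserDto(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public int getId() { return id; }

    @Override
    public String getName() { return name; }
}

pullUser would then return new UserDto(userId, userData.get("name")), and a UserJpa entity would be created only at the point where the data actually needs to be persisted.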

Cyclic serialisation with Many to Many relationship with Hibernate

I have a parent (Program) pojo with a many-to-many relationship with their children (Subscriber).
The problem is when it serialises a Program, it also serialises the Program's Subscribers, which involves serialising their Programs, which involves serialising their Subscribers, until it has serialised every single Program & Subscriber in the database.
The ERD looks like: Program <-> Subscriber
This means what was a tiny 17KB block of data (JSON) being returned has become a 6.9MB return. This in turn blows out the time to serialise the data and then return it.
Why is my parent returning children returning parents returning children? How can I stop this so I only get the Subscribers for each Program? I'm assuming I've done something wrong with my annotations, perhaps? I would like to maintain a many-to-many relationship but without this deeply nested data retrieval.
(Note: I have previously tried adding as many Lazy annotations as I could find, just to see if that helps. It doesn't. Perhaps I'm doing that wrong too?)
Program.java
@Entity
@Table(name = "programs")
public class Program extends Core implements Serializable, Cloneable {
    ...

    @ManyToMany()
    @JoinTable(name = "program_subscribers",
            joinColumns = {@JoinColumn(name = "program_uid")},
            inverseJoinColumns = {@JoinColumn(name = "subscriber_uid")})
    public Set<Subscriber> getSubscribers() { return subscribers; }

    public void setSubscribers(Set<Subscriber> subscribers) { this.subscribers = subscribers; }
}
Subscriber.java
@Entity
@Table(name = "subscribers")
public class Subscriber extends Core implements Serializable {
    ...

    @ManyToMany(mappedBy = "subscribers")
    public Set<Program> getPrograms() { return programs; }

    public void setPrograms(Set<Program> programs) { this.programs = programs; }
}
Implementation
public Collection<Program> list() {
    return Programs.findAll();
}
You didn't mention the framework you are using for JSON serialization, so I'll assume JAXB. Anyway, the idea is to make the Subscriber.getPrograms(..) transient in some way, so that it's not serialized. Hibernate takes care of these 'loops', but others don't. So:
@XmlTransient
@ManyToMany(..)
public Set<Program> getPrograms() ...
If you use another framework, it may have a different annotation/configuration for specifying transient fields. Like the transient keyword.
The other way is to customize your mapper to handle the cycle manually, but this is tedious.
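If the serializer turns out to be Jackson rather than JAXB (an assumption about the asker's setup, not stated in the question), the equivalent marker on the back side of the association would be a sketch like:

@JsonIgnore
@ManyToMany(mappedBy = "subscribers")
public Set<Program> getPrograms() { return programs; }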
1) How does "your" serialization work? I mean, is it JAXB, custom serialization, or something else?
2) Almost all frameworks let you set the depth of serialization; you can set, for example, a depth of 2.
3) I advise you not to serialize objects together with their children; mark the children transient and serialize them separately.
How about using annotations? http://thinkinginsoftware.blogspot.com/2010/08/json-and-cyclical-references.html
Use @EqualsAndHashCode(callSuper = false, of = {"id"}) from the Lombok library, or override equals and hashCode yourself and use only unique fields (e.g. id) inside hashCode.
Both Bozho and ponkin are on the right track. I needed to stop serialising the data down the wire, but the big problem is that I am unable to change the pojo -> toJSON class/method where the serialisation takes place. I was also wary of investing time in the toJSON() method; given the performance hit I was taking at the point of serialisation, I wanted a fix that would occur before I had the data rather than afterwards.
Also, due to the nature of the bidirectional many-to-many design I had listed, I was always going to have this cyclic programs/subscribers/programs/... problem.
Resolution (for now at least): I have removed the Subscriber.getPrograms() method and created a finder method on the ProgramDAO which returns the Programs by Subscriber.
public List<Program> findBySubscriber(Subscriber subscriber) {
    String hql = "select p " +
            "from Program p " +
            " join p.subscribers s " +
            "where s = :sub";
    Query q = getSession().createQuery(hql);
    q.setEntity("sub", subscriber);
    List<Program> l = q.list();
    return l;
}
For any CRUD work I think I'm just going to have to loop over Program.getSubscribers, or write more HQL helper methods.
