Hibernate - ConstraintViolation when persisting entity - java

I have an entity that contains a relationship to another entity in a manner I've never had to encounter before, and I'm getting an exception: "org.hibernate.exception.ConstraintViolationException: could not execute statement".
The parent entity is called "Post". A post can contain several Keyword entities. Keyword entities are unique by value, that is, if two posts contain the same keyword, both posts reference the same keyword entity.
My thought process was that there are many posts, each referencing many keywords, and any one keyword can be referenced by multiple posts, so it should be a @ManyToMany relationship. Obviously, it's not working. Inspecting the database shows that it is successfully persisting a few posts before it starts failing. As long as all the keywords are unique, it seems to be fine, but I'm thinking that it is dying whenever it tries to persist a post with a keyword that is already referenced by another post. Not sure how to fix this.
Here is what the classes look like (short version):
Post:
@Entity
public class Post implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "post_id_seq")
    @SequenceGenerator(name = "post_id_seq", sequenceName = "post_id_seq", allocationSize = 1)
    private Long id;

    @ElementCollection(fetch = FetchType.EAGER)
    @ManyToMany(cascade = CascadeType.ALL)
    private Set<Keyword> keywords = new HashSet<>();
}
Keyword:
@Entity
public class Keyword implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "keyword_id_seq")
    @SequenceGenerator(name = "keyword_id_seq", sequenceName = "keyword_id_seq", allocationSize = 1)
    private Long id;

    @Column(name = "KEYWORD_VALUE")
    private String value;

    private int count = 1;
}
UPDATE:
Here is the code I use in my service class to add a keyword to a post. Basically I have a Post object already that has Keywords filled in (request comes in via AJAX from a web front end and Spring unmarshals it automatically to a Post object). I have to loop through each keyword and see if an entity with the same value already exists in persistence. If so, increment the count for that keyword, merge it, then add that entity to the set that will end up replacing the Set that came in the request. If it doesn't already exist, I just use the Keyword that came in the request. Previously, I wasn't saving/merging the Keywords independently before adding them to the Post and persisting the post, but I started getting errors stating:
org.hibernate.TransientObjectException: object references an unsaved
transient instance - save the transient instance before flushing:
com.saic.jswe.clients.swtc.domain.social.Keyword
Anyway, here is my service code:
public void addPost(Post post) {
    Set<Keyword> keywords = new HashSet<>();
    for (Keyword keyword : post.getKeywords()) {
        Keyword persistedKeyword = keywordDao.findByValue(keyword.getValue());
        if (persistedKeyword != null) {
            persistedKeyword.setCount(persistedKeyword.getCount() + 1);
            keywordDao.merge(persistedKeyword);
            keywords.add(persistedKeyword);
        } else {
            keywordDao.persist(keyword);
            keywords.add(keyword);
        }
    }
    post.setKeywords(keywords);
    postDao.persist(post);
}
Also, during my testing when I'm getting this error, it's just a single thread attempting to add test Post objects one at a time.
Checking the logs, here is the actual constraint violation:
org.postgresql.util.PSQLException: ERROR: insert or update on table
"keyword" violates foreign key constraint
"fk_3tcnkw7v196mudsgmy3nriibl" Detail: Key (id)=(1) is not present
in table "post".
Hmmm... per the above code, it should only be adding a reference to a Keyword object with an ID if it did in fact find it in persistence. The keyword objects coming in with the Post object via the request should all have null IDs as they're not yet persisted.

I found where the issue came into play. A join table was being created called "post_keywords". It had 2 columns, one called "post" and one called "keyword". Each row represented the ID of a post and the ID of a keyword contained in that post. If there were multiple keywords in a post, there could be duplicate entries in the post column. However, as soon as a different post entity attempted to reference a keyword that was already used, it would complain about that ID already being present. Here's the visual:
post | keyword
-----+--------
1 | 1
1 | 2
1 | 4
2 | 3
2 | 4 <--- this would be a problem since keyword 4 is already related to post 1
So my knowledge/understanding of JPA is pretty weak, but I've only ever needed real basic relationships. Given that I understood where the problem was happening, I decided to quit playing and experimenting and start reading.
For a minute, I thought I found a solution just using a OneToMany relationship, because I didn't necessarily care or need the keyword entity to directly know which posts reference it. This was incorrect, however. I could get that code to execute without error, but I ended up with each keyword only being owned by one entity. As each post tried to reference that keyword, it would just override the previous ownership of the keyword. Anyway, I really did need a ManyToMany relationship.
I ended up finding examples (http://en.wikibooks.org/wiki/Java_Persistence/ManyToMany) showing tables where multiple child entities reference the same parent entity, so I implemented the same JPA annotations in my code and, voilà, it worked. Here is what the code looks like now:
@Entity
public class Post implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "post_id_seq")
    @SequenceGenerator(name = "post_id_seq", sequenceName = "post_id_seq", allocationSize = 1)
    @Column(name = "POST_ID")
    private Long id;

    @ManyToMany(fetch = FetchType.EAGER)
    @JoinTable(
        name = "POST_KEYWORD",
        joinColumns = {@JoinColumn(name = "POST_ID", referencedColumnName = "POST_ID")},
        inverseJoinColumns = {@JoinColumn(name = "KEYWORD_ID", referencedColumnName = "ID")})
    private Set<Keyword> keywords = new HashSet<>();
}
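Since keywords are meant to be unique by value, it may also be worth adding a unique constraint on KEYWORD_VALUE and, optionally, mapping the inverse side, so the database itself rejects duplicate keyword rows. This is a sketch beyond the original fix, not part of the accepted code:
@Entity
public class Keyword implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "keyword_id_seq")
    @SequenceGenerator(name = "keyword_id_seq", sequenceName = "keyword_id_seq", allocationSize = 1)
    private Long id;

    // unique = true mirrors the "unique by value" rule at the database level
    @Column(name = "KEYWORD_VALUE", unique = true)
    private String value;

    // Optional inverse side; Post stays the owning side of the POST_KEYWORD join table
    @ManyToMany(mappedBy = "keywords", fetch = FetchType.LAZY)
    private Set<Post> posts = new HashSet<>();

    private int count = 1;
}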

Related

Hibernate good practice, lazy/eager loading and saving/deleting children (help me Hibernate sensei)

So, I have found myself in quite a pickle regarding Hibernate. When I started developing my web application, I used "eager" loading everywhere so I could easily access children, parents etc.
After a while, I ran into my first problem - re-saving of deleted objects. Multiple Stack Overflow threads suggested that I should remove the object from all the collections it's in. Reading those suggestions made my "spidey sense" tingle, as my relations weren't really simple: I had to iterate over multiple objects, which made my code look kind of ugly and made me wonder if this was the best approach.
For example, take deleting an Employee (which belongs to a User, in the sense that a User can act as multiple different Employees). Let's say an Employee can leave Feedback for a Party, so an Employee can have multiple Feedback entries and a Party can have multiple Feedback entries. Additionally, both Employee and Party belong to some kind of parent object, let's say an Organization. Basically, we have:
class User {
    // Has many
    Set<Employee> employees;
    // Has many
    Set<Organization> organizations;
    // Has many through employees
    Set<Organization> associatedOrganizations;
}

class Employee {
    // Belongs to
    User user;
    // Belongs to
    Organization organization;
    // Has many
    Set<Feedback> feedbacks;
}

class Organization {
    // Belongs to
    User user;
    // Has many
    Set<Employee> employees;
    // Has many
    Set<Party> parties;
}

class Party {
    // Belongs to
    Organization organization;
    // Has many
    Set<Feedback> feedbacks;
}

class Feedback {
    // Belongs to
    Party party;
    // Belongs to
    Employee employee;
}
Here's what I ended up with when deleting an employee:
// First remove feedbacks related to employee
Iterator<Feedback> iter = employee.getFeedbacks().iterator();
while (iter.hasNext()) {
    Feedback feedback = iter.next();
    iter.remove();
    feedback.getParty().getFeedbacks().remove(feedback);
    session.delete(feedback);
}
session.update(employee);

// Now remove employee from organization
Organization organization = employee.getOrganization();
organization.getEmployees().remove(employee);
session.update(organization);
This is, by my definition, ugly. I would've assumed that by using
@Cascade({CascadeType.ALL})
then Hibernate would magically remove Employee from all associations by simply doing:
session.delete(employee);
instead I get:
Error during managed flush [deleted object would be re-saved by cascade (remove deleted object from associations)]
So, in order to get my code a bit cleaner and maybe even optimized (sometimes a lazy fetch is enough, sometimes I need eager), I tried lazily fetching almost everything, hoping that if I do, for example:
employee.getFeedbacks()
then the feedbacks are nicely fetched without any problem but nope, everything breaks:
failed to lazily initialize a collection of role: ..., could not initialize proxy - no Session
The next thing I thought about was removing the possibility for objects to insert/delete their related children objects but that would probably be a bad idea performance-wise - inserting every object separately with
child.parent=parent
instead of in a bulk with
parent.children().add(children).
Finally, I saw that multiple people recommend creating my own custom queries and such, but at that point, why should I even bother with Hibernate? Is there really no reasonably clean way to handle my problem, or am I missing something, or am I an idiot?
If I understood the question correctly, it's all about cascading through simple 1:N relations. In that case, Hibernate can do the job rather well:
@Entity
public class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @OneToMany(cascade = CascadeType.ALL,
               mappedBy = "post", orphanRemoval = true)
    private List<Comment> comments = new ArrayList<>();
}

@Entity
public class Comment {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @ManyToOne
    private Post post;
}
Code:
Post post = newPost();

doInTransaction(session -> {
    session.delete(post);
});
Generates:
delete from Comment where id = 1
delete from Comment where id = 2
delete from Post where id = 1
But if the deleted entity is referenced from some other (synthetic) collections, Hibernate has no way of knowing which ones, so you have to handle those yourself.
As for Hibernate and custom queries, Hibernate provides HQL, which is more compact than traditional SQL, but still less transparent than annotations.
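To illustrate that last point, a hypothetical HQL bulk delete for the Post/Comment mapping above might look like this (the query and variable names are invented for the example):
// HQL: delete all comments of one post; more compact than the equivalent SQL,
// but the mapping behind "Comment" and "post" is less visible than with annotations
int removed = session.createQuery(
        "delete from Comment c where c.post.id = :postId")
        .setParameter("postId", postId)
        .executeUpdate();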

Hibernate Inheritance.JOINED generated FK name

I am currently trying to use inheritance with Hibernate and came across InheritanceType.JOINED. I like the idea of concentrating all data in one table and sharing IDs rather than having duplicate columns in all the subtype tables (@MappedSuperclass). But Hibernate automatically generates indexes on the id column of my subclass tables, like FK_idx3wiwdm8yp2qkkddi726n8o, every time I initialize my Hibernate singleton. I noticed that by hitting the 64-key limit on my MySQL table, as the names are generated differently on every startup.
What is the proper way to handle this? Can this be fixed by annotations? What else could I try?
I know that there are countless similar questions on SO, but I haven't been able to find one that solves my specific problem.
I am not going to disable hbm2ddl.auto during dev mode.
I am using MyISAM. There are no actual foreign keys. This is why Hibernate generates default indexes, I think. Anyway, the problem would be identical with InnoDB and real foreign keys, as the names would still be quite random. Or maybe Hibernate would actually check for existence in that case; I don't really see why it does not do this for MyISAM tables.
As I've hit similar problems before, the solution could also be to specify a name for that single-column index. But how?
Super Class: FolderItem
@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public abstract class FolderItem implements Comparable<FolderItem>
{
    @Id
    @GeneratedValue
    protected int id;

    protected String name;

    @OneToOne
    @ForeignKey(name = "fkParent")
    protected Folder parent;
    ...
}
Sub Class: Folder
@Entity
public class Folder extends FolderItem
{
    @OneToMany(mappedBy = "parent")
    @OrderBy(value = "sortOrder")
    private List<FolderItem> children;
    ...
}
What I tried
add @Index to FolderItem.id - this created an index on the FolderItem table as one would expect, but didn't affect the Folder table
copy protected int id; into Folder and add an @Index to it, which resulted in an exception similar to "duplicate definition of ID"
add @Table(appliesTo = "Folder", indexes = { @Index(name = "fkId", columnNames = { "id" }) }) to the Folder class, which actually created my specified index as expected, but still created its own FK_9xcia6idnwqdi9xx8ytea40h3, which is identical to mine except for the name
try the @PrimaryKeyJoinColumn(name = "folder_item_id") annotation on the Folder class
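If the underlying goal is just to pin down the generated name so hbm2ddl stops inventing a new one on every startup, JPA 2.1 (Hibernate 4.3+) lets you name the foreign key on the subclass join column. A sketch under that assumption, with an invented constraint name; whether the MyISAM index actually picks up this name depends on the dialect, so treat it as something to try:
import javax.persistence.ForeignKey; // JPA 2.1 annotation, not org.hibernate.annotations.ForeignKey

@Entity
@PrimaryKeyJoinColumn(name = "folder_item_id",
        foreignKey = @ForeignKey(name = "fk_folder_folder_item"))
public class Folder extends FolderItem
{
    @OneToMany(mappedBy = "parent")
    @OrderBy(value = "sortOrder")
    private List<FolderItem> children;
    ...
}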

Duplicate entries in Hibernate

I am facing a strange problem in Hibernate. Operating in a multithreaded environment, when trying to insert into one of the tables I am getting duplicate entries in the table. Only the primary key is different; all other fields are exact duplicates.
I am using Hibernate + Oracle with Spring's HibernateTemplate object.
Here's the relevant portion of my BO class, and below it the code used to save the object. I am not using any transient fields.
I have checked other posts related to this, but none of them addresses the root cause of the problem. I don't want to introduce any constraints/unique indexes on the DB table.
@Entity
@Table(name = "ADIRECIPIENTINTERACTION")
@Lazy(value = true)
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
@GenericGenerator(name = "recipientInteractionSeq", strategy = "native",
        parameters = { @Parameter(name = "sequence", value = "SEQiRecipientInteractId") })
public class RecipientInteractionBO extends BusinessObject {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(generator = "recipientInteractionSeq", strategy = GenerationType.AUTO)
    @Column(name = "IRECIPIENTINTERACTIONID")
    private long lId; ....
And here's the Code used to save the BO.
-----------------------------------------------------
RecipientInteractionBO recInt = (RecipientInteractionBO) objectPS
        .getUniqueResult(detachedCriteria);
if (recInt == null) {
    recInt = new RecipientInteractionBO();
    ....
    hibernateTemplateObj.insertObject(recInt);
} else {
    ...
    hibernateTemplateObj.saveOrUpdate(recInt);
}
Please let me know if any other details are required.
Check your data persistence code for possible race conditions between multiple threads. You are checking for the existence of the RecipientInteractionBO, which presumably queries the database. If two threads run simultaneously, both check for its existence, and since neither finds it, both persist a new entity. You might need synchronization so that the check and the insert/update are performed by only one thread at a time.
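A minimal sketch of that synchronized check-then-insert, reusing the question's objectPS and hibernateTemplateObj helpers (the method and lock names are invented). Note this only serializes threads within a single JVM, so a database unique constraint would still be the more robust fix:
private final Object interactionLock = new Object();

public void saveInteraction(DetachedCriteria detachedCriteria) {
    // Serialize the check-then-act so two threads cannot both observe "null"
    // and both insert a new RecipientInteractionBO for the same criteria.
    synchronized (interactionLock) {
        RecipientInteractionBO recInt = (RecipientInteractionBO) objectPS
                .getUniqueResult(detachedCriteria);
        if (recInt == null) {
            recInt = new RecipientInteractionBO();
            // ... populate fields
            hibernateTemplateObj.insertObject(recInt);
        } else {
            // ... update fields
            hibernateTemplateObj.saveOrUpdate(recInt);
        }
    }
}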

Hibernate One to One mapping not updating the child table

I have the following associated classes with a one-to-one mapping.
@Entity
public class EmployeeEntity
{
    @Id
    private String id;
    private String name;

    @OneToOne(mappedBy = "employeeEntity", fetch = FetchType.EAGER, cascade = CascadeType.ALL)
    @Fetch(FetchMode.SELECT)
    @JoinColumn(name = "empid")
    private AddressEntity addressEntity;
    ...
    ...
    getters & setters
}

@Entity
public class AddressEntity
{
    @Id
    @Column(unique = true, nullable = false)
    @GeneratedValue(generator = "gen")
    @GenericGenerator(name = "gen", strategy = "foreign", parameters = @Parameter(name = "property", value = "employeeEntity"))
    private String empId;

    @OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
    @PrimaryKeyJoinColumn
    private EmployeeEntity employeeEntity;
    ...
    getters & setters
}
I am using Postgres and have tables (employeeentity, addressentity) with the following foreign key constraint on the addressentity table:
Foreign-key constraints:
"fkakhilesh" FOREIGN KEY (empid) REFERENCES employeeentity(id) ON DELETE CASCADE
I have the following requirements for different REST calls:
1. POST call - should create an employee with an address.
2. POST call - should create an employee without an address.
3. GET call - should retrieve an employee. The address should also come back if it exists.
4. PUT call - should update an employee and address (if the address exists).
5. PUT call - should update an employee and address (when an address is passed and one already exists in the addressentity table for that empid).
6. PUT call - should update an employee and create the address (when an address is passed and one does not exist in the addressentity table for that empid).
I am able to perform operations 1 to 5 without any issues.
The main problem is with 6, and the following questions come to mind:
1. When I do getSession().update(object), I get Hibernate's StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1.
Is this not possible with "update" if the address does not exist? Can't I create a new address during an update?
Do I need to change my ServiceImpl call to getSession().merge(object)? Can this case only be handled by calling "merge"? How does it impact performance?
If I do merge, I get Hibernate's IdentifierGenerationException: attempted to assign id from null one-to-one property.
Am I missing something here?
Can this be solved by changing the Hibernate mapping, or is it something related to cascade?
What is the importance of @GeneratedValue(generator = "gen") here? Why is @Parameter used in @GenericGenerator?
I am new to Hibernate and trying to get into the depths of Hibernate mapping.
Also, I would be delighted if you could advise on the design and what the best way to handle this would be.
I found the fix for this. One-to-one mapping is somewhat tricky and not as simple as I initially thought.
I have used a bidirectional one-to-one mapping, so it is important to call the setters of both EmployeeEntity and AddressEntity so they reference each other during an update. For example:
employeeEntity.setAddressEntity(addressEntity) and addressEntity.setEmployeeEntity(employeeEntity) have to be called explicitly.
Setting only employeeEntity.setAddressEntity(addressEntity) will not work.
Always use an integer id, and use getSession().saveOrUpdate(entity) for save or update.
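A small helper on the owning side keeps callers from forgetting one of the two setter calls; a minimal sketch, assuming the getters/setters from the question's entities (the method name is invented):
// Hypothetical convenience method on EmployeeEntity; keeps both ends of the
// bidirectional one-to-one in sync so neither setter call can be forgotten.
public void attachAddress(AddressEntity address) {
    this.addressEntity = address;
    if (address != null) {
        address.setEmployeeEntity(this);
    }
}
Calling employeeEntity.attachAddress(addressEntity) then replaces the two explicit setter calls.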
In a one-to-one mapping you should specify constrained=true on the child. It makes the child id the same as the parent id.
Use these lines for the child id. I don't know the Java annotation syntax.
<generator class="foreign">
    <param name="property">employeeEntity</param>
</generator>
Also remove the fetch type and CascadeType.ALL from the child. I think the default fetch mode is select, which is fine. Cascade is usually used on the parent side, which is responsible for the parent-child relation.
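For reference, the question's AddressEntity mapping already carries the annotation equivalent of that XML: @GenericGenerator with the "foreign" strategy plays the role of <generator class="foreign">, and sharing the parent's id via @PrimaryKeyJoinColumn roughly corresponds to constrained="true":
// Same as the question's AddressEntity id mapping
@Id
@Column(unique = true, nullable = false)
@GeneratedValue(generator = "gen")
@GenericGenerator(name = "gen", strategy = "foreign",
        parameters = @Parameter(name = "property", value = "employeeEntity"))
private String empId;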

Found shared references to a collection org.hibernate.HibernateException

I got this error message:
error: Found shared references to a collection: Person.relatedPersons
When I tried to execute addToRelatedPersons(anotherPerson):
person.addToRelatedPersons(anotherPerson);
anotherPerson.addToRelatedPersons(person);
anotherPerson.save();
person.save();
My domain:
class Person {
    static hasMany = [relatedPersons: Person]
}
Any idea why this happens?
Hibernate shows this error when you attempt to persist more than one entity instance sharing the same collection reference (i.e. the collection identity in contrast with collection equality).
Note that it means the same collection, not collection element - in other words relatedPersons on both person and anotherPerson must be the same. Perhaps you're resetting that collection after entities are loaded? Or you've initialized both references with the same collection instance?
I had the same problem. In my case, the issue was that someone used BeanUtils to copy the properties of one entity to another, so we ended up having two entities referencing the same collection.
Given that I spent some time investigating this issue, I would recommend the following checklist:
Look for scenarios like entity1.setCollection(entity2.getCollection()) where getCollection returns the internal reference to the collection (if getCollection() returns a new instance of the collection, then you don't need to worry); a defensive-copy sketch follows this checklist.
Look if clone() has been implemented correctly.
Look for BeanUtils.copyProperties(entity1, entity2).
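For the first checklist item, the usual fix is a defensive copy so that each entity owns its own collection instance; a minimal sketch, using the placeholder setCollection/getCollection names from above:
// Bad: both entities end up holding the same (possibly Hibernate-managed) collection instance
entity1.setCollection(entity2.getCollection());

// Better: copy the elements into a fresh collection owned only by entity1
entity1.setCollection(new HashSet<>(entity2.getCollection()));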
An explanation in practice. If you try to save your object, e.g.:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
message.setFiles(folders);
MESSAGESDAO.getMessageDAO().save(message);
you don't need to set the updated collection back on the parent object:
message.setFiles(folders);
Simply save your parent object like:
Set<Folder> folders = message.getFolders();
folders.remove(inputFolder);
folders.add(trashFolder);
// Not set updated object here
MESSAGESDAO.getMessageDAO().save(message);
Reading online, the cause of this error can also be a Hibernate bug; a workaround that seems to work is to put a:
session.clear()
You must put the clear() after getting the data and before the commit and close; see the example:
// getting data
SrReq sr = (SrReq) crit.uniqueResult();
SrSalesDetailDTO dt = SrSalesDetailMapper.INSTANCE.map(sr);
// CLEAR
session.clear();
// close session
session.getTransaction().commit();
session.close();
return dt;
I use this solution for selects from the database; for update or insert I don't know whether this solution works or causes problems.
My problem is 100% the same as this one: http://www.progtown.com/topic128073-hibernate-many-to-many-on-two-tables.html
I ran into a great example of how to reproduce this problem.
Maybe my experience will help someone one day.
Short version
Check that the @Embedded Id of the containing entity has no possible collisions.
Long version
When Hibernate instantiates a collection wrapper, it searches for an already-instantiated collection by CollectionKey in an internal Map.
For an entity with an @Embedded id, CollectionKey wraps EmbeddedComponentType and uses the @Embedded Id properties for equality checks and hashCode calculation.
So if you have two entities with equal @Embedded Ids, Hibernate will instantiate and store a new collection for the first key, then find that same collection for the second key.
So two entities with the same @Embedded Id will be populated with the same collection.
Example
Suppose you have an Account entity which has a lazy set of loans.
And Account has an @Embedded Id consisting of several parts (columns).
@Entity
@Table(schema = "SOME", name = "ACCOUNT")
public class Account {

    @OneToMany(fetch = FetchType.LAZY, mappedBy = "account")
    private Set<Loan> loans;

    @Embedded
    private AccountId accountId;
    ...
}

@Embeddable
public class AccountId {

    @Column(name = "X")
    private Long x;

    @Column(name = "BRANCH")
    private String branchId;

    @Column(name = "Z")
    private String z;
    ...
}
Then suppose that Account has an additional property mapped by the @Embedded Id, but with a relation to another entity, Branch.
@ManyToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "BRANCH")
@MapsId("accountId.branchId")
@NotFound(action = NotFoundAction.IGNORE) // Look at this!
private Branch branch;
It could happen that you have no FK for the Account-to-Branch relation in the DB, so the Account.BRANCH column can hold any value not present in the Branch table.
According to @NotFound(action = NotFoundAction.IGNORE), if the value is not present in the related table, Hibernate will load null for the property.
If the X and Z columns of two Accounts are the same (which is fine), but BRANCH is different and not present in the Branch table, Hibernate will load null for both and the Embedded Ids will be equal.
So the two CollectionKey objects will be equal and will have the same hashCode for different Accounts.
result = {CollectionKey#34809} "CollectionKey[Account.loans#Account#43deab74]"
role = "Account.loans"
key = {Account#26451}
keyType = {EmbeddedComponentType#21355}
factory = {SessionFactoryImpl#21356}
hashCode = 1187125168
entityMode = {EntityMode#17415} "pojo"
result = {CollectionKey#35653} "CollectionKey[Account.loans#Account#33470aa]"
role = "Account.loans"
key = {Account#35225}
keyType = {EmbeddedComponentType#21355}
factory = {SessionFactoryImpl#21356}
hashCode = 1187125168
entityMode = {EntityMode#17415} "pojo"
Because of this, Hibernate will load the same PersistentSet for the two entities.
In my case, I was copying and pasting code from my other classes, so I did not notice that the getter code was badly written:
@OneToMany(fetch = FetchType.LAZY, mappedBy = "credito")
public Set getConceptoses() {
    return this.letrases;
}

public void setConceptoses(Set conceptoses) {
    this.conceptoses = conceptoses;
}
Everything references conceptoses, but if you look at the getter, it returns letrases.
I got the same issue too; someone had used BeanUtils.copyProperties(source, target). Here both source and target were using the same collection as an attribute.
So I just used a deep copy, as below:
How to Clone Collection in Java - Deep copy of ArrayList and HashSet
Consider an entity:
public class Foo {
    private List<User> user;
    /* with getters and setters */
}
And consider a business logic class:
class Foo1 {
    void doSomething(Foo foo) {
        List<User> user = new ArrayList<>();
        user = foo.getUser(); // "user" now points at the exact same list held by foo
    }
}
Here user and foo.getUser() share the same reference, and saving both entities with that shared reference creates the conflict.
The proper usage should be:
class Foo1 {
    void doSomething(Foo foo) {
        List<User> user = new ArrayList<>();
        user.addAll(foo.getUser()); // copies the elements; "user" is a distinct collection instance
    }
}
This avoids the conflict.
I faced a similar exception in my application. After looking into the stack trace, it was clear the exception was thrown from within a FlushEntityEventListener class.
In Hibernate 4.3.7 the MSLocalSessionFactory bean no longer supports the eventListeners property. Hence, one has to explicitly fetch the service registry from the individual Hibernate session beans and then set the required custom event listeners.
In the process of adding custom event listeners, we need to make sure the corresponding default event listeners are removed from the respective Hibernate session.
If the default event listener is not removed, you end up with two event listeners registered against the same event. In that case, while iterating over the listeners, any collection in the session is flagged as processed by the first listener, and processing the same collection again for the second listener throws this Hibernate exception.
So make sure that, when registering custom listeners, the corresponding default listeners are removed from the registry.
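On Hibernate 4.3 that registration can look roughly like the sketch below (the listener class name is invented); setListeners replaces the default listeners for the event type, whereas appendListeners would add a second one and reintroduce the double processing described above:
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.event.service.spi.EventListenerRegistry;
import org.hibernate.event.spi.EventType;

SessionFactoryImplementor factory = (SessionFactoryImplementor) sessionFactory;
EventListenerRegistry registry =
        factory.getServiceRegistry().getService(EventListenerRegistry.class);

// setListeners REPLACES the default flush-entity listener instead of adding a second one
registry.setListeners(EventType.FLUSH_ENTITY, new MyCustomFlushEntityEventListener());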
My problem was that I had set up a @ManyToOne relationship. If the answers above don't fix your problem, you might want to check the relationship that was mentioned in the error message.
Posting here because it's taken me over 2 weeks to get to the bottom of this, and I still haven't fully resolved it.
There is a chance that you're also just running into this bug, which has been around since 2017 and hasn't been addressed.
I honestly have no clue how to get around this bug. I'm posting here for my sanity and hopefully to shave a couple of weeks off your googling. I'd love any input anyone may have, but my particular "answer" to this problem was not listed in any of the above answers.
I had to replace the following collection initialization:
challenge.setGoals(memberChallenge.getGoals());
with
challenge.setGoals(memberChallenge.getGoals()
        .stream()
        .map(dmo -> {
            final ChallengeGoal goal = new ChallengeGoalImpl();
            goal.setMemberChallenge(challenge);
            goal.setGoalDate(dmo.getGoalDate());
            goal.setGoalValue(dmo.getGoalValue());
            return goal;
        })
        .collect(Collectors.toList()));
I changed
@OneToMany(cascade = CascadeType.ALL)
@JoinColumn(
        name = "some_id",
        referencedColumnName = "some_id"
)
to
@OneToMany(mappedBy = "some_id", cascade = CascadeType.ALL)
You're using references (indirectly), so sometimes you end up copying the reference instead of the object/collection you want. Hibernate checks for this and throws that error. Here's what you can do:
Don't copy the object/collection;
Instantiate a new empty one;
Write a function to copy its contents and call it;
For example:
public Entity copyEntity(Entity e) {
    Entity copy = new Entity();
    copy.setName(e.getName());
    copy.setCollection2(null);
    copy.setCollection3(copyCollection(e.getCollection3()));
    return copy;
}
This error will also occur in a one-to-many / many-to-one relationship if you try to attach the same collection instance to more than one owning entity.
For example, each person can have many books, but each of those books can be owned by only one person; if you give more than one owner the same collection of books, this issue is raised.
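In code, the mistake usually looks something like this sketch (Person/Book follow the example above; the names are illustrative only):
List<Book> sharedBooks = new ArrayList<>(someBooks);

// Both persons now reference the SAME collection instance -> "shared references to a collection"
personA.setBooks(sharedBooks);
personB.setBooks(sharedBooks);

// Give each owner its own collection instead
personA.setBooks(new ArrayList<>(someBooks));
personB.setBooks(new ArrayList<>(someBooks));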
