JPA MapKey with Nested Attributes

I have three entities like this:
public class ItemType {
    @Id
    private Long id = null;
    ...
    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, orphanRemoval = true, mappedBy = "itemTypeVO")
    @MapKey(name = "company.id")
    private Map<Long, ItemTypePurpose> purposeHash = null;
    ...
}
public class ItemTypePurpose {
    @Id
    private Long id = null;
    ...
    @ManyToOne(fetch = FetchType.LAZY, optional = false)
    @JoinColumn(name = "idcompany")
    private Company company = null;
    ...
}
public class Company {
    @Id
    private Long id = null;
    ...
}
My problem is that I want the ID of Company to be the key of the map inside ItemType.
I can compile and deploy the application without errors, persist ItemType, and everything reaches the database correctly. But when I read it back, the map key is "wrong": I don't know what value is being used, but it is definitely not the Company id. Perhaps it is the ItemTypePurpose's ID.
The Company itself is loaded into the map correctly; only the map key is wrong. I've tried to google this but can't find anything. Is there any way to make JPA build my map key from this "nested attribute"?
(Sorry about my English; if you understand what I need, feel free to edit my question into better wording.)

This doesn't exactly answer the question, but it solves my needs for now.
Since the ID of Company is present in the ItemTypePurpose table, I could change the map key to:
public class ItemType {
    @Id
    private Long id = null;
    ...
    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, orphanRemoval = true, mappedBy = "itemTypeVO")
    @MapKeyColumn(name = "idcompany", insertable = false, updatable = false)
    private Map<Long, ItemTypePurpose> purposeHash = null;
    ...
}
Instead of @MapKey, I used @MapKeyColumn. The insertable = false, updatable = false attributes in @MapKeyColumn(name = "idcompany", insertable = false, updatable = false) make the key column read-only and avoid a mapping conflict, since the same column is already used in ItemTypePurpose to map the Company entity.
Not exactly an answer, but a workaround that solves my needs. This solution does not cover the case where you want a field other than the ID as the map key.

Late reply, but it may be helpful to someone else.
@MapKeyColumn seems to be the official solution here. As per the documentation, the annotation to use depends on the key type of the Map, regardless of the mapped fields. In your case the key type is Long, so the following applies:
https://docs.oracle.com/cd/E19226-01/820-7627/giqvn/index.html
Using Map Collections in Entities
Collections of entity elements and relationships may be represented by java.util.Map collections. A Map consists of a key and a value.
If the key type of a Map is a Java programming language basic type, use the javax.persistence.MapKeyColumn annotation to set the column mapping for the key. By default, the name attribute of @MapKeyColumn is of the form RELATIONSHIP FIELD/PROPERTY NAME_KEY. For example, if the referencing relationship field name is image, the default name attribute is IMAGE_KEY.
In summary:
For nested fields, go for @MapKeyColumn(name = "myNestedField_key") and then set the key manually in your code, like:
itemType.getPurposeHash().put(itemTypePurpose.getCompany().getId(), itemTypePurpose);
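If you populate the map through a small helper on ItemType, the key stays in sync with the Company automatically. Below is a minimal sketch; the addPurpose method name is illustrative, and it assumes the usual setter for the itemTypeVO field exists:
public void addPurpose(ItemTypePurpose purpose) {
    if (purposeHash == null) {
        purposeHash = new HashMap<>(); // java.util.HashMap
    }
    purpose.setItemTypeVO(this); // keep the owning side of the @OneToMany in sync
    purposeHash.put(purpose.getCompany().getId(), purpose); // key the entry by the Company id
}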

Related

Hibernate Query for Object based on key value pairs of the Object's Java Map<String, String>

I am trying to query for a Java object using HQL, filtering on the Java Map that the object holds.
Essentially what I want to ask is "give me all the error reports where mapkey1=val_x and mapkey2=val_y".
I have this object (stripped down):
@Entity
@Table(name = "error_report")
public class ErrorReport implements Serializable {
    private static final long serialVersionUID = 1L;
    @Id
    @Column(name = "id", length = 50)
    private String id = UUID.randomUUID().toString();
    @ElementCollection
    @CollectionTable(name = "error_property", joinColumns = {@JoinColumn(name = "error_id", referencedColumnName = "id")})
    @MapKeyColumn(name = "prop", length = 50)
    @Column(name = "prop_val")
    @Type(type = "text")
    private Map<String, String> reportedProperties = new HashMap<>();
}
So I want to fetch ErrorReports based on their reportedProperties. I have set up a unit test, and everything works perfectly when the reportedProperties map has only one entry per ErrorReport. This is the HQL I used:
from ErrorReport as model where KEY(model.reportedProperties) = :A1 and VALUE(model.reportedProperties) = :A2
When the ErrorReport has two entries in the reportedProperties map, the query fails with the following error:
could not extract ResultSet
caused by
HsqlException: cardinality violation
When I look at the generated SQL and try to run it manually, I can see it will not work, because the inner select returns multiple results.
SELECT error_report_.id AS id1_2_, error_report_.product_url AS product_2_2_, error_report_.audit_id AS audit_id3_2_, error_report_.category_id AS
category4_2_, error_report_.error_desc AS error_de5_2_, error_report_.notifier_id AS notifier6_2_, error_report_.product_name AS
product_7_2_, error_report_.product_version AS product_8_2_, error_report_.error_time AS error_ti9_2_
FROM error_report error_report_
CROSS JOIN error_property reportedpr1_
CROSS JOIN error_property reportedpr2_
WHERE error_report_.id=reportedpr1_.error_id
AND error_report_.id=reportedpr2_.error_id
AND reportedpr1_.prop=?
AND
(SELECT reportedpr2_.prop_val FROM error_property reportedpr2_ WHERE error_report_.id=reportedpr2_.error_id)=?
Clearly there is something wrong with my HQL, but it seems to follow other examples I have found. Does anyone know what the correct syntax is?
I am using Hibernate 5.4.9.Final.
For anyone in the future: I had a similar problem and solved it by applying a join on the collection table combined with INDEX().
SELECT DISTINCT(model.id), model.product_url, ...
FROM ErrorReport as model
...
JOIN model.reportedProperties rProp
WHERE INDEX(rProp) = :A1 AND rProp = :A2
Here INDEX(rProp) is the key and rProp is the value. DISTINCT was also needed because the query was returning duplicate records for me due to the map join.
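To cover the original requirement of matching two key/value pairs at once ("mapkey1=val_x and mapkey2=val_y"), the same idea should extend to one join per required entry, each with its own alias. A sketch, with illustrative parameter names:
SELECT DISTINCT model
FROM ErrorReport model
JOIN model.reportedProperties p1
JOIN model.reportedProperties p2
WHERE INDEX(p1) = :key1 AND p1 = :val1
AND INDEX(p2) = :key2 AND p2 = :val2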

JPA mapping issue when mapping two entities with same `JoinColumn`

I'm in a somewhat odd situation: I have an entity called Article, which has a relation to Supplier, but also to the supplier's contact person. For example:
Supplier is linked to Article by Supplier_Id, while ContactpersonSupplier is linked to Article by both Supplier_Id (to SupplierId) and Supplier_Contactperson_Id (to Id).
So, right now we mapped all relations on Article:
@JoinColumn(name = "Supplier_Id")
private Supplier supplier;
@JoinColumns({
    @JoinColumn(name = "Supplier_Id"),
    @JoinColumn(name = "Supplier_Contactperson_Id")
})
private SupplierContactperson supplierContactperson;
This does not work because we're mapping Supplier_Id twice, once for supplier and once for supplierContactperson. If you do this, you get the following exception:
org.hibernate.MappingException: Repeated column in mapping for entity: Article column: Supplier_Id (should be mapped with insert="false" update="false")
In a normal situation you would link them up like this: Article -> ContactpersonSupplier -> Supplier, and then there would be no problems.
However, ContactpersonSupplier is not required, while Supplier is required. This means that if we leave the contact person out, we can't provide a supplier.
We cannot use insertable = false, updatable = false for the very same reason: if we put these attributes on supplier, we cannot set a supplier when the contact person is not provided.
We cannot add them on supplierContactperson either, because JPA/Hibernate requires you to put them on every @JoinColumn inside a @JoinColumns, and if we do that, we can't save a contact person.
One idea we have is to simply map the IDs instead of using related entities, but we're wondering if there's an alternative approach that might work. So the question is: how should we solve this mapping issue?
One thing to mention, though: the data structure cannot be changed.
This worked for me:
@JoinColumn(name = "Supplier_Id", insertable = false, updatable = false)
private Supplier supplier;
@JoinColumns({
    @JoinColumn(name = "Supplier_Id", insertable = false, updatable = false),
    @JoinColumn(name = "Supplier_Contactperson_Id", insertable = false, updatable = false)
})
private SupplierContactperson supplierContactperson;
@Column(name = "Supplier_Id")
private String supplier_id;
@Column(name = "Supplier_Contactperson_Id")
private String supplier_contact_Person_id;
and then in the setters:
public void setSupplierContactPerson(SupplierContactperson contactPerson) {
    this.supplierContactperson = contactPerson;
    if (contactPerson != null) {
        this.supplier_id = contactPerson.getSupplierID();
        this.supplier_contact_Person_id = contactPerson.getSupplierContactPersonID();
    }
}
public void setSupplier(Supplier supplier) {
    this.supplier = supplier;
    if (supplier != null) {
        this.supplier_id = supplier.getId();
    }
}
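With this mapping the plain @Column id fields are the writable side and both associations are read-only, so the setters are the only place the foreign keys get populated. A usage sketch under that assumption (the entityManager variable and the getters follow the answer's code):
Article article = new Article();
article.setSupplier(supplier);                   // fills Supplier_Id through the plain column
article.setSupplierContactPerson(contactPerson); // fills both id columns when a contact person is present
entityManager.persist(article);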
Mapping only the id of ContactpersonSupplier has a problem: you could set a contact person belonging to supplier A together with supplier B, and the database would not complain.
Since supplier is required, I'd try:
1. Put insertable = false, updatable = false on the @JoinColumn("Supplier_Id") of the contact person field, to avoid the complaints from JPA.
2. Modify (if you haven't already) setSupplierContactPerson() with:
    if (contactPerson != null) {
        setSupplier(contactPerson.getSupplier());
    } else {
        setSupplier(null);
    }
Another option is to modify getSupplier() with:
    if (contactPerson != null) {
        return contactPerson.getSupplier();
    }
    return supplier;

Mapping multiple tables to one List Hibernate

I've been searching the web for a solution to this. It seems nobody has the answer... I'm starting to think I'm approaching the problem the wrong way.
Let's see if I can explain it simply.
I'm developing a contract maintenance module (table: contrat_mercan). For the contract we select a category (table: categoria), and each category has qualities (table: calidad) in a 1-N relation (relationship table categoria_calidad).
These qualities must have a value for each contract where the category is selected, so I created a table to cover this relationship: contrato_categoria_calidad.
@Entity
@Table(name = "contrato_categoria_calidad")
public class ContratoCategoriaCalidad implements Serializable {
    // Constants --------------------------------------------------------
    private static final long serialVersionUID = -1821053251702048097L;
    // Fields -----------------------------------------------------------
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "CCC_ID")
    private int id;
    @Column(name = "CONTRAT_MERCAN_ID")
    private int contratoId;
    @Column(name = "CATEGORIA_ID")
    private int categoriaId;
    @Column(name = "CALIDAD_ID")
    private int calidadId;
    @Column(name = "VALOR")
    private double valor;
    // ... getters / setters
In this table I wanted to avoid having an id; the three fields are marked as FKs in the database, and my first attempts were with @JoinColumn on the three fields. But that did not work with Hibernate.
Anyway, ContratoCategoriaCalidad now behaves fine as an independent entity. But I would need to implement all the maintenance, updates, and deletes for each case manually... :(
What I really want (and I think it is better practice) is a cascade when I saveOrUpdate the contract, as the other entities do, but I can't find a way to map such a List in the contrat_mercan entity.
This is working perfectly for other relationships in the same entity:
@OneToOne
@JoinColumn(name = "CONDICION")
private Condicion condicion;
@OneToMany(cascade = {CascadeType.ALL})
@JoinTable(
    name = "contrato_mercan_condicion",
    joinColumns = @JoinColumn(name = "CONTRATO_MERCAN_ID"),
    inverseJoinColumns = @JoinColumn(name = "CONDICION_ID")
)
private List<Condicion> condiciones;
But all my attempts to map this have failed. What I want is to have a field like this in my contrat_mercan Java entity:
private List<ContratoCategoriaCalidad> relacionContratoCategoriaCalidad;
It is not a real column in the database, just a representation of the relationship.
I found solutions for joining multiple fields of the same table (here and here), but not for making a relationship involving three tables...
Any ideas? Am I doing something wrong? Maybe I have to use the intermediate table categoria_calidad for this?
Thanks!!
If you want to access a list of related ContratoCategoriaCalidad objects from the Contrato entity, you need to declare a relationship between those two entities using the proper annotations.
In the ContratoCategoriaCalidad class, change the field to:
@ManyToOne
@JoinColumn(name = "CONTRATO_ID")
private Contrato contrato;
In the Contrato class, add the field:
@OneToMany(mappedBy = "contrato")
private List<ContratoCategoriaCalidad> relacionContratoCategoriaCalidad;
If you want to enable cascading updates and removals, consider adding the cascade = CascadeType.ALL and orphanRemoval = true attributes to the @OneToMany annotation.
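For example, a sketch of that variant on Contrato, cascading saves and deletes from the contract and removing orphaned rows:
@OneToMany(mappedBy = "contrato", cascade = CascadeType.ALL, orphanRemoval = true)
private List<ContratoCategoriaCalidad> relacionContratoCategoriaCalidad = new ArrayList<>();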
Hope this helps!

How do I maintain consistency of cached ManyToOne collections with READ_WRITE CacheConcurrencyStrategy in Hibernate?

I'm running into a difference between NONSTRICT_READ_WRITE and READ_WRITE CacheConcurrencyStrategy when writing "denormalized" collections... the idea being that I have a join table modeled as an entity but it also contains read only links to the tables it joins to.
My entities, roughly:
@Entity
@org.hibernate.annotations.Entity(dynamicUpdate = true)
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
class Actor {
    @Id
    Integer id;
    @Column
    String name;
}
@Entity
@org.hibernate.annotations.Entity(dynamicUpdate = true)
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
class Movie {
    @Id
    Integer id;
    @Column
    String title;
}
@Entity
@org.hibernate.annotations.Entity(dynamicUpdate = true)
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
class Credit {
    @Column
    String roleName;
    @ManyToOne(targetEntity = Movie.class, optional = true)
    @JoinColumn(name = "movie_id", insertable = false, updatable = false)
    @NotFound(action = NotFoundAction.IGNORE)
    Movie movie;
    @Column(name = "movie_id")
    Long movieId;
    @ManyToOne(targetEntity = Actor.class, optional = true)
    @JoinColumn(name = "actor_id", insertable = false, updatable = false)
    @NotFound(action = NotFoundAction.IGNORE)
    Actor actor;
    @Column(name = "actor_id")
    Long actorId;
}
Second level object cache is enabled (with ehcache).
My application writes Movies and Actors... and sometime later, it links them together by writing Credit. When Credit is written, I only fill in the roleName, movieId, and actorId fields, I do not provide the Movie and Actor objects.
Using NONSTRICT_READ_WRITE caching, I am then able to read back that Credit object and it will contain the referenced Movie and Actor objects.
Using READ_WRITE caching, reading back the Credit will return a Credit with empty Movie and Actor fields. If I clear the hibernate cache, reading back that Credit then contains the Movie and Actor objects as expected. This is also the behavior with TRANSACTIONAL caching (but of course not with NONE caching).
So it would seem that Hibernate is inserting the Credit into the second-level cache with null Actor and Movie fields when using the READ_WRITE cache. Is there a way to prevent this from happening and always read these joined fields back from the database? I've tried annotating just the fields with CacheConcurrencyStrategy.NONE, but this does not work.
I think you have probably stumbled across a Hibernate bug triggered by your unusual (or at least non-standard) mapping. There is no real reason to have two fields, one holding the id and one holding the entity.
You can turn an id into an entity reference using session.load, which just creates a proxy and does not load the data from the DB.
If you get rid of the movieId and actorId fields and remove the insertable/updatable = false on the movie/actor fields, it should work the same irrespective of READ_WRITE or NONSTRICT_READ_WRITE:
Credit c = new Credit();
Movie m = session.load(Movie.class, movieId);
Actor a = session.load(Actor.class, actorId);
c.setMovie(m);
c.setActor(a);
session.save(c);
Hibernate doesn't store the complete objects in the second-level cache. It stores them in a flattened (hydrated) form, closer to the DB tables, and reconstructs the objects from that. So the assertion that it stored the object with nulls in the cache and is not updating it is incorrect; something else is going on.
The main difference between READ_WRITE and NONSTRICT_READ_WRITE is that the cache entry is locked while updating in the READ_WRITE case and is not locked with NONSTRICT_READ_WRITE. This only matters if you are updating the entity from multiple threads concurrently.
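For reference, a sketch of the simplified Credit mapping suggested above; the duplicate id columns are dropped, and an explicit @Id is assumed since the question's snippet omits it:
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
class Credit {
    @Id
    Integer id;
    @Column
    String roleName;
    @ManyToOne(optional = true)
    @JoinColumn(name = "movie_id")
    Movie movie;
    @ManyToOne(optional = true)
    @JoinColumn(name = "actor_id")
    Actor actor;
}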

Hibernate - #ElementCollection - Strange delete/insert behavior

@Entity
public class Person {
    @ElementCollection
    @CollectionTable(name = "PERSON_LOCATIONS", joinColumns = @JoinColumn(name = "PERSON_ID"))
    private List<Location> locations;
    [...]
}
@Embeddable
public class Location {
    [...]
}
Given the class structure above, when I try to add a new location to the Person's list of locations, it always results in the following SQL queries:
DELETE FROM PERSON_LOCATIONS WHERE PERSON_ID = :idOfPerson
And
a lot of inserts into the PERSON_LOCATIONS table
Hibernate (3.5.x / JPA 2) deletes all associated records for the given Person and re-inserts all previous records, plus the new one.
I thought that implementing equals/hashCode on Location would solve the problem, but it didn't change anything.
Any hints are appreciated!
The problem is explained on the ElementCollection page of the JPA wikibook:
Primary keys in CollectionTable
The JPA 2.0 specification does not provide a way to define the Id in the Embeddable. However, to delete or update an element of the ElementCollection mapping, some unique key is normally required. Otherwise, on every update the JPA provider would need to delete everything from the CollectionTable for the Entity, and then insert the values back. So, the JPA provider will most likely assume that the combination of all of the fields in the Embeddable are unique, in combination with the foreign key (JoinColumn(s)). This however could be inefficient, or just not feasible if the Embeddable is big, or complex.
And this is exactly what happens here: Hibernate doesn't generate a primary key for the collection table, has no way to detect which element of the collection changed, and so deletes the old content of the table in order to insert the new content.
However, if you define an @OrderColumn (to specify a column used to maintain the persistent order of a list, which would make sense since you're using a List), Hibernate will create a primary key (made of the order column and the join column) and will be able to update the collection table without deleting the whole content.
Something like this (if you want to use the default column name):
@Entity
public class Person {
    ...
    @ElementCollection
    @CollectionTable(name = "PERSON_LOCATIONS", joinColumns = @JoinColumn(name = "PERSON_ID"))
    @OrderColumn
    private List<Location> locations;
    ...
}
References
JPA 2.0 Specification: Section 11.1.12 "ElementCollection Annotation" and Section 11.1.39 "OrderColumn Annotation"
JPA Wikibook: Java Persistence/ElementCollection
In addition to Pascal's answer, you also have to set at least one column as NOT NULL:
@Embeddable
public class Location {
    @Column(name = "path", nullable = false)
    private String path;
    @Column(name = "parent", nullable = false)
    private String parent;
    public Location() {
    }
    public Location(String path, String parent) {
        this.path = path;
        this.parent = parent;
    }
    public String getPath() {
        return path;
    }
    public String getParent() {
        return parent;
    }
}
This requirement is documented in AbstractPersistentCollection:
Workaround for situations like HHH-7072. If the collection element is a component that consists entirely of nullable properties, we currently have to forcefully recreate the entire collection. See the use of hasNotNullableColumns in the AbstractCollectionPersister constructor for more info. In order to delete row-by-row, that would require SQL like "WHERE ( COL = ? OR ( COL is null AND ? is null ) )", rather than the current "WHERE COL = ?" (fails for null for most DBs). Note that the param would have to be bound twice. Until we eventually add "parameter bind points" concepts to the AST in ORM 5+, handling this type of condition is either extremely difficult or impossible. Forcing recreation isn't ideal, but not really any other option in ORM 4.
We discovered that the entities we were defining as our ElementCollection types did not have an equals or hashCode method defined and had nullable fields. We provided those (via Lombok, for what it's worth) on the entity type, and that allowed Hibernate (v5.2.14) to identify whether the collection was dirty or not.
Additionally, this error manifested for us because we were inside a service method marked with @Transactional(readOnly = true). Since Hibernate would attempt to clear the related element collection and insert it all over again, the transaction would fail when being flushed, and things broke with this very difficult-to-trace message:
HHH000346: Error during managed flush [Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1]
Here is an example of our entity model that had the error
@Entity
public class Entity1 {
    @ElementCollection @Default private Set<Entity2> relatedEntity2s = Sets.newHashSet();
}
public class Entity2 {
    private UUID someUUID;
}
Changing it to this
@Entity
public class Entity1 {
    @ElementCollection @Default private Set<Entity2> relatedEntity2s = Sets.newHashSet();
}
@EqualsAndHashCode
public class Entity2 {
    @Column(nullable = false)
    private UUID someUUID;
}
Fixed our issue. Good luck.
I had the same issue but wanted to map a list of enums: List<EnumType>.
I got it working like this:
@ElementCollection
@CollectionTable(
    name = "enum_table",
    joinColumns = @JoinColumn(name = "some_id")
)
@OrderColumn
@Enumerated(EnumType.STRING)
private List<EnumType> enumTypeList = new ArrayList<>();
public void setEnumList(List<EnumType> newEnumList) {
    this.enumTypeList.clear();
    this.enumTypeList.addAll(newEnumList);
}
The issue in my case was that the List object was always replaced via the default setter, and therefore Hibernate treated it as a completely "new" collection even though the enums had not changed.
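For completeness, a usage sketch of why the in-place mutation matters; the entity name, id variable, and enum constants are illustrative:
// inside a transaction
MyEntity entity = entityManager.find(MyEntity.class, id);
// clear()/addAll() mutate the Hibernate-managed collection instance instead of replacing it,
// so dirty checking sees element-level changes rather than a brand-new collection
entity.setEnumList(Arrays.asList(EnumType.A, EnumType.B));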
