This is basically a follow-up or repost, as the question "How to use enums with JPA" seems to have been solved only in a proprietary way. This is the 2012 JPA version of essentially the same question, here using EclipseLink 2.3.2.
The question is how to map a Java enum to DB strings whose values differ from the enum constant names. There is a DB column with the VARCHAR values 'M', 'C', 'N', and 'F'. They are cryptic and mix English and German. To improve the business layer I want to map a Java enum to these DB strings:
public enum TradingSector
{
    EMPLOYEE( "M" ), // Mitarbeiter
    CUSTOMER( "C" ),
    NOSTRO( "N" ),
    FUND( "F" );

    // this works:
    // M( "M" ),
    // C( "C" ),
    // N( "N" ),
    // F( "F" );

    private final String ch;

    private TradingSector( String ch )
    {
        this.ch = ch;
    }

    @Override
    public String toString()
    {
        return this.ch;
    }
}
Here's the entity:
@Entity
@Table( name = "TRADES" )
@Inheritance( strategy = InheritanceType.SINGLE_TABLE )
@DiscriminatorColumn( name = "NOSTRO_FLAG", discriminatorType = DiscriminatorType.STRING, length = 1 )
public class Trade implements Serializable
{
    @Enumerated( EnumType.STRING )
    @Column( name = "NOSTRO_FLAG", insertable = false, updatable = false )
    private TradingSector discriminator;
    ...
}
This is kind of special: mapping an enum to the column that is already the inheritance discriminator. It should work nonetheless, as the commented-out variant with enum constants named exactly like the DB strings shows.
The TRADES table is the most monolithic one in the whole context right now, and it's more than desirable to split the entity up into subclasses.
From what I had read, I expected simply overriding toString() to be enough, but this results in:
Caused by: Exception [EclipseLink-116] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DescriptorException
Exception Description: No conversion value provided for the value [M] in field [TRADES.TRADING_SECTOR].
Mapping: org.eclipse.persistence.mappings.DirectToFieldMapping[discriminator-->TRADES.TRADING_SECTOR]
Descriptor: RelationalDescriptor(com.company.project.model.Trade --> [DatabaseTable(TRADES)])
at org.eclipse.persistence.exceptions.DescriptorException.noFieldValueConversionToAttributeValueProvided(DescriptorException.java:1052)
at org.eclipse.persistence.mappings.converters.ObjectTypeConverter.convertDataValueToObjectValue(ObjectTypeConverter.java:140)
at org.eclipse.persistence.mappings.converters.EnumTypeConverter.convertDataValueToObjectValue(EnumTypeConverter.java:137)
at org.eclipse.persistence.mappings.foundation.AbstractDirectMapping.getAttributeValue(AbstractDirectMapping.java:699)
at org.eclipse.persistence.mappings.foundation.AbstractDirectMapping.valueFromRow(AbstractDirectMapping.java:1299)
at org.eclipse.persistence.mappings.DatabaseMapping.readFromRowIntoObject(DatabaseMapping.java:1326)
Here's what the EclipseLink error docs at http://wiki.eclipse.org/EclipseLink_Exception_Error_Reference_%28ELUG%29 say:
ECLIPSELINK-00116: No conversion value provided for the value [{0}] in field [{1}].
Cause: The attribute conversion value for the fieldValue was not given in the object type mapping.
Action: Verify the field value, and provide a corresponding attribute value in the mapping.
Ah, yes. :-/
Q:
What's the non-proprietary (JPA 2) way to solve this (if that's possible at all, without going too far into the hacking trenches)?
PS: It's not possible to change the DB values.
Damn, I missed Pascal Thivent's great answer here: https://stackoverflow.com/a/2751896/396732
Here's the final solution:
Enum:
public enum TradingSector
{
    EMPLOYEE( "M" ),
    CUSTOMER( "C" ),
    NOSTRO( "N" ),
    FUND( "F" );

    private final String ch;

    private TradingSector( String ch )
    {
        this.ch = ch;
    }

    public String getCharacter()
    {
        return this.ch;
    }

    public static TradingSector translate( String ch )
    {
        for ( TradingSector ts : TradingSector.values() )
        {
            if ( ch.equals( ts.getCharacter() ) )
            {
                return ts;
            }
        }
        return null;
    }
}
Entity:
@Entity
@Table( name = "TRADES" )
@Inheritance( strategy = InheritanceType.SINGLE_TABLE )
@DiscriminatorColumn( name = "NOSTRO_FLAG", discriminatorType = DiscriminatorType.STRING, length = 1 )
public abstract class Trade implements Serializable
{
    @Column( name = "NOSTRO_FLAG", insertable = false, updatable = false )
    protected String discriminator;

    @Id
    @Column( name = "TRADE_ID" )
    protected Long id;

    ...

    protected Trade()
    {
    }

    public String getDiscriminator()
    {
        return this.discriminator;
    }

    public TradingSector getTradingSector()
    {
        return TradingSector.translate( this.discriminator );
    }

    ...
}
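For illustration, a hedged usage sketch of the translated discriminator; the EntityManager wiring, the example trade id, and the example class name are assumptions and not part of the original code:
import javax.persistence.EntityManager;

public class TradeSectorExample
{
    public void printSector( EntityManager em, Long tradeId )
    {
        Trade trade = em.find( Trade.class, tradeId );      // resolves the concrete subclass via NOSTRO_FLAG

        String raw = trade.getDiscriminator();              // e.g. "M"
        TradingSector sector = trade.getTradingSector();    // e.g. TradingSector.EMPLOYEE

        System.out.println( raw + " -> " + sector );
    }
}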
Still useful to some degree.
I want to create an array in Java like the lists in Python 3, but I don't know how to do it.
Ex: {"John", 1, True, 1.7}
Is it possible to create an array of more than one datatype in java
If it is possible, how is it done?
No, not possible. But you're mistaken - in python, this is also impossible.
The trick is, in Python all objects are just 'object', and you have a list of objects. In java, expressions do have a type. Nevertheless, all objects are, well, objects. So:
Object[] o = {"John", 1, true, 1.7};
works. You really don't want this - arrays are low-level constructs. You'd want List<Object> o = List.of("John", 1, true, 1.7); no doubt.
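To show why this is awkward, a small hypothetical sketch: everything comes back as Object, so the caller has to know what sits at each index and cast:
import java.util.List;

public class MixedListExample {
    public static void main(String[] args) {
        List<Object> row = List.of("John", 1, true, 1.7);

        String name = (String) row.get(0);   // caller must know the position and the type
        int id = (Integer) row.get(1);       // unboxed from the stored Integer

        System.out.println(name + " / " + id);
    }
}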
Also, why do you want to store this in a list? It SOUNDS like you want this:
class Person {
    String name;
    int id;
    boolean enrolled;
    double gpa;
}
and then a List<Person>; a quick usage sketch follows below. That is 'the java way'. Junking that stuff in an Object[] is not the java way. When in Rome, act like the Romans. When coding java, write it like java people would. If you don't, nobody else can read your code, and libraries don't work for you or feel weird and unwieldy. If you insist on programming 'python style', then just use python.
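Here is that sketch; the four-argument Person constructor is an assumption, not part of the class above:
import java.util.List;

public class PersonListExample {
    public static void main(String[] args) {
        // Assumes a matching Person(String, int, boolean, double) constructor.
        List<Person> people = List.of(
                new Person("John", 1, true, 1.7),
                new Person("Jane", 2, false, 2.1));

        for (Person p : people) {
            System.out.println(p.name);   // fields are package-private in the sketch above
        }
    }
}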
tl;dr
Use records in Java 16 and later.
record Person ( String name , int talentCode , boolean isAvailable , double grading ) {}
Tuple
Given your example of {"John", 1, True, 1.7}, I suspect you have a set of properties related to a single entity rather than a collection of siblings. In other words, you have a tuple.
Class
If this is the case, such as all four of those values describing a single person, then in Java we would create a class named Person to hold four member fields. Each field would be of the appropriate type.
package work.basil.example.recs;
import java.util.Objects;
public final class Person
{
    private final String name;
    private final int talentCode;
    private final boolean isAvailable;
    private final double grading;

    public Person ( String name , int talentCode , boolean isAvailable , double grading )
    {
        this.name = name;
        this.talentCode = talentCode;
        this.isAvailable = isAvailable;
        this.grading = grading;
    }

    public String name ( ) { return name; }

    public int talentCode ( ) { return talentCode; }

    public boolean isAvailable ( ) { return isAvailable; }

    public double grading ( ) { return grading; }

    @Override
    public boolean equals ( Object obj )
    {
        if ( obj == this ) return true;
        if ( obj == null || obj.getClass() != this.getClass() ) return false;
        var that = ( Person ) obj;
        return Objects.equals( this.name , that.name ) &&
                this.talentCode == that.talentCode &&
                this.isAvailable == that.isAvailable &&
                Double.doubleToLongBits( this.grading ) == Double.doubleToLongBits( that.grading );
    }

    @Override
    public int hashCode ( )
    {
        return Objects.hash( name , talentCode , isAvailable , grading );
    }

    @Override
    public String toString ( )
    {
        return "Person[" +
                "name=" + name + ", " +
                "talentCode=" + talentCode + ", " +
                "isAvailable=" + isAvailable + ", " +
                "grading=" + grading + ']';
    }
}
Using this class.
Person alice = new Person( "Alice" , 1 , true , 1.7 ) ;
System.out.println( alice.name() ) ;
Alice
record
Writing such a class is much easier in Java 16, if the main purpose of such a class is to transparently and immutably hold this data.
The new records feature lets us define a class briefly, declaring the member fields only. The compiler implicitly creates the constructor, getters, equals & hashCode, and toString.
Records can be thought of as nominal tuples, to quote JEP 395. The “nominal” means each value is named, and “tuple” means a finite ordered sequence of values.
public record Person ( String name , int talentCode , boolean isAvailable , double grading )
{
}
Using this record.
Person alice = new Person( "Alice" , 1 , true , 1.7 ) ;
System.out.println( alice.name() ) ;
Alice
A record can be written as a separate class, nested within another class, and can even be defined locally within a method.
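For example, a record declared locally inside a method; the class and record names here are made up purely for illustration:
public class LocalRecordExample
{
    public static void main ( String[] args )
    {
        // A record can be declared right inside a method body (Java 16+).
        record Point ( int x , int y ) { }

        Point p = new Point( 3 , 4 );
        System.out.println( p );   // Point[x=3, y=4]
    }
}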
I have a problem updating my entity. As you can see I have three entities.
LabelValueEntity holds a list of LabelSwitchEntity objects.
LabelSwitchEntity holds a list of SwitchCaseEntity objects.
As you can see from my SQL statement, the combination of the fields name and labelValueUUID is unique; only one row with that combination is allowed in my table.
When I update the parent entity (LabelValueEntity) with a new list of LabelSwitchEntity objects, I want Hibernate to delete the old list itself and create new entities. It's kinda hard to update the children one by one. That's why I want to delete all the related children directly.
When I update a LabelValueEntity and provide it with a list containing a LabelSwitchEntity whose name and labelValueUUID combination already exists in the DB, I get a unique constraint violation exception. Well, that error is clear, because the combination exists in the DB as I said. I would expect Hibernate to be smart enough to delete the child before inserting it.
What am I doing wrong?
@Entity
@Table(name = "label_value")
class LabelValueEntity(uuid: UUID? = null,
    ...
    @OneToMany(
        mappedBy = "labelValueUUID",
        cascade = [CascadeType.ALL],
        fetch = FetchType.EAGER,
        orphanRemoval = true)
    @Fetch(FetchMode.SUBSELECT)
    val labelSwitchEntities: List<LabelSwitchEntity>? = emptyList()
) : BaseEntity(uuid)

@Entity
@Table(name = "label_switch")
class LabelSwitchEntity(uuid: UUID? = null,
    @Column(name = "label_value_uuid", nullable = false)
    val labelValueUUID: UUID,

    @OneToMany(
        mappedBy = "labelSwitchUUID",
        cascade = [CascadeType.ALL],
        fetch = FetchType.EAGER,
        orphanRemoval = true
    )
    val switchCaseEntities: List<SwitchCaseEntity>,

    @Column
    val name: String,
    ...
) : BaseEntity(uuid)

@Entity
@Table(name = "switch_case")
class SwitchCaseEntity(uuid: UUID? = null,
    ...
    @Column(name = "label_switch_uuid", nullable = false)
    val labelSwitchUUID: UUID
) : BaseEntity(uuid)
CREATE TABLE label_switch
(
uuid UUID NOT NULL PRIMARY KEY,
label_value_uuid UUID REFERENCES label_value(uuid) ON DELETE CASCADE,
name CHARACTER VARYING (255) NOT NULL,
UNIQUE (label_value_uuid, name)
);
CREATE TABLE switch_case
(
uuid UUID NOT NULL PRIMARY KEY,
label_switch_uuid UUID NOT NULL REFERENCES label_switch(uuid) ON DELETE CASCADE
);
BaseEntity
@MappedSuperclass
abstract class BaseEntity(givenId: UUID? = null) : Persistable<UUID> {

    @Id
    @Column(name = "uuid", length = 16, unique = true, nullable = false)
    private val uuid: UUID = givenId ?: UUID.randomUUID()

    @Transient
    private var persisted: Boolean = givenId != null

    override fun getId(): UUID = uuid

    @JsonIgnore
    override fun isNew(): Boolean = !persisted

    override fun hashCode(): Int = uuid.hashCode()

    override fun equals(other: Any?): Boolean {
        return when {
            this === other -> true
            other == null -> false
            other !is BaseEntity -> false
            else -> getId() == other.getId()
        }
    }

    @PostPersist
    @PostLoad
    private fun setPersisted() {
        persisted = true
    }
}
Here is where transactions come in. If you handle all that logic inside a method annotated with @Transactional (be it from javax or Spring), Hibernate will do the dirty checking within its bounds, but not validation.
In other words, as long as there is a transaction, Hibernate keeps track of changes made to the managed entities, but it does not validate them; that is the responsibility of the database. While being processed in memory, entities are not checked for consistency. Just make sure you're flushing a consistent state to the database (entities at the commit phase of the transaction should be consistent).
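As a rough illustration (not the original poster's code: the service class, the EntityManager wiring, a mutable children collection and pre-populated labelValueUUIDs on the new children are all assumptions here), an explicit flush inside the transaction can force the orphan DELETEs to hit the database before the replacement rows are inserted:
import java.util.List;
import java.util.UUID;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LabelValueService {

    @PersistenceContext
    private EntityManager em;

    @Transactional
    public void replaceSwitches(UUID labelValueUuid, List<LabelSwitchEntity> newSwitches) {
        LabelValueEntity parent = em.find(LabelValueEntity.class, labelValueUuid);

        // Orphan removal marks the old children for deletion once they leave the collection...
        parent.getLabelSwitchEntities().clear();
        // ...and the explicit flush pushes those DELETEs to the database now,
        // before the replacement rows can violate the unique (label_value_uuid, name) constraint.
        em.flush();

        // The new children are assumed to already carry the parent's uuid.
        parent.getLabelSwitchEntities().addAll(newSwitches);
    }
}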
Other options are possible when your database allows for it; for example, in Postgres you can mark the constraints as deferred. When a constraint is defined as deferred, it's not checked within an open transaction, only on commit.
After some struggle I came up with this solution:
fun updateLabelValue(updatedLabelValue: LabelValueDTO): LabelValueDTO {
val updatedEntity = updatedLabelValue.copy(labelSwitches = emptyList())
labelValueRepository.saveAndFlush(labelValueMapper.map(updatedEntity))
val labelValueEntity = labelValueMapper.map(updatedLabelValue)
return labelValueMapper.map(labelValueRepository.save(labelValueEntity))
}
It's pretty ugly and not really what I wanted. I'm saving the entity first with an empty list so that Hibernate can delete the orphans; after that I save the updated labelValue with the new children.
If you have suggestions or a better solution, drop it down below, thanks.
Previously I was not getting any error, but suddenly I started getting this error:
javax.persistence.EntityExistsException: A different object with the same identifier value was already associated with the session : [baag.betl.dbimporter.esmatrans.db.EquSecurityReferenceData#baag.db.SECU#59c70ceb]
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:116) ~[hibernate-core-5.2.10.Final.jar:5.2.10.Final]
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:155) ~[hibernate-core-5.2.10.Final.jar:5.2.10.Final]
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:162) ~[hibernate-core-5.2.10.Final.jar:5.2.10.Final]
at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:787) ~[hibernate-core-5.2.10.Final.jar:5.2.10.Final]
at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:765) ~[hibernate-core-5.2.10.Final.jar:5.2.10.Final]
There is no sequence created for any column in the Oracle database table. There is also a unique key combination of the columns esn and techid. I don't want to create a new sequence column in my database table.
I think I cannot use @GeneratedValue set to AUTO for the unique columns, otherwise I will get a Hibernate sequence error.
I am also doing a flush and clear after every 1000 processed records.
if (secuData != null) {
sessionFactory.getCurrentSession().persist(secuData);
}
i++;
if (i % 1000 == 0) {
sessionFactory.getCurrentSession().flush();
sessionFactory.getCurrentSession().clear();
}
To have a composite primary key, you can use either the @IdClass approach or an @Embeddable key with @EmbeddedId.
To continue with the @IdClass approach you need to follow some rules:
The composite primary key class must be public
It must have a no-arg constructor
It must define equals() and hashCode() methods
It must be Serializable
So in your case, the class would look like,
import java.io.Serializable;
import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.util.Objects;
import javax.persistence.*;

@Entity
@Table( name = "SECU" )
@IdClass( SECU.class )
public class SECU implements Serializable
{
    @Id
    @Column(name = "columnName") // use the correct column name if it varies from the variable name provided
    protected String esn;

    @Id
    @Column(name = "columnName") // use the correct column name if it varies from the variable name provided
    protected BigDecimal techrcrdid;

    @Column(name = "columnName") // use the correct column name if it varies from the variable name provided
    protected BigDecimal preTradLrgInScaleThrshld;

    @Column(name = "columnName") // use the correct column name if it varies from the variable name provided
    protected LocalDateTime CreDt; // note: @Temporal does not apply to LocalDateTime, so it is omitted here

    @Column(name = "columnName") // use the correct column name if it varies from the variable name provided
    protected String fullnm;

    @Override
    public boolean equals( Object o )
    {
        if( this == o ) return true;
        if( !( o instanceof SECU ) ) return false;
        SECU secu = ( SECU ) o;
        return Objects.equals( esn, secu.esn ) &&
                Objects.equals( techrcrdid, secu.techrcrdid ) &&
                Objects.equals( preTradLrgInScaleThrshld, secu.preTradLrgInScaleThrshld ) &&
                Objects.equals( CreDt, secu.CreDt ) &&
                Objects.equals( fullnm, secu.fullnm );
    }

    @Override
    public int hashCode()
    {
        return Objects.hash( esn, techrcrdid, preTradLrgInScaleThrshld, CreDt, fullnm );
    }

    // getters and setters
}
Also double-check your getters and setters in each entity class; the ones provided in your question seem incorrect.
Your entity class appears to be modelling a composite primary key, but I only see two @Id annotations and no @IdClass annotation, which is necessary.
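For reference, the more conventional variant keeps the id class separate from the entity. A minimal sketch, assuming the two key fields esn and techrcrdid from the question (the class name SecuId is made up):
import java.io.Serializable;
import java.math.BigDecimal;
import java.util.Objects;

public class SecuId implements Serializable
{
    protected String esn;
    protected BigDecimal techrcrdid;

    public SecuId() { } // no-arg constructor required for an id class

    public SecuId( String esn, BigDecimal techrcrdid )
    {
        this.esn = esn;
        this.techrcrdid = techrcrdid;
    }

    @Override
    public boolean equals( Object o )
    {
        if( this == o ) return true;
        if( !( o instanceof SecuId ) ) return false;
        SecuId that = ( SecuId ) o;
        return Objects.equals( esn, that.esn ) && Objects.equals( techrcrdid, that.techrcrdid );
    }

    @Override
    public int hashCode()
    {
        return Objects.hash( esn, techrcrdid );
    }
}
The entity itself would then be annotated with @IdClass( SecuId.class ) and keep @Id on both esn and techrcrdid.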
I have been trying to implement entity updating using @DynamicUpdate. Documentation (http://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html_single/) says:
dynamicInsert / dynamicUpdate (defaults to false): specifies that INSERT / UPDATE SQL should be generated at runtime and contain only the columns whose values are not null.
But I didn't manage to make it work, so I dug into the sources of AbstractEntityPersister, and saw this:
generateUpdateString( propsToUpdate, j, oldFields, j == 0 && rowId != null )
It is a method for generating the SQL query string, based on 'propsToUpdate', which is a boolean array obtained from:
getPropertiesToUpdate( dirtyFields, hasDirtyCollection );
where 'dirtyFields' is an integer array of the indices of 'dirty' columns.
But nowhere in
getPropertiesToUpdate(final int[] dirtyProperties, final boolean hasDirtyCollection)
was I able to find a mechanism that checks whether a modified column is null, so even when the entity I want to merge has some null fields, all of them are updated in the DB, overwriting existing data with nulls.
My question is: where is an error in my reasoning?
EDIT:
Here's my code, as Chaitanya requested:
Entity:
@SuppressWarnings("serial")
@Entity(name = "t_quiz_groups")
@DynamicUpdate
@SelectBeforeUpdate
public class QuizGroupEntity implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "group_id")
    private long id;

    @Column(name = "description", nullable = false)
    private String description;

    @Column(name = "create_date")
    private Calendar createDate;

    // + setters and getters
}
Then, I have a service (@Transactional) method:
public void update(long groupId, String newDescription) throws ServiceException {
QuizGroupEntity quizGroupEntity = quizRepository.getGroupById(groupId);
ExpectedNotNull.of(quizGroupEntity, RegistryErrorCodes.QUIZ_GROUP_NOT_EXISTS);
quizGroupEntity.setCreateDate(null); // for testing purposes I am setting another field to null (shouldn't be updated then, right?)
quizRepository.lock(quizGroupEntity, LockModeType.WRITE);
quizGroupEntity.updateDescription(newDescription); // new description
quizRepository.merge(quizGroupEntity);
}
And now, in AbstractEntityPersister, the method:
getPropertiesToUpdate( dirtyFields, hasDirtyCollection );
is called, and dirtyFields[] consists of two 'dirty' columns: [0, 2]. That is correct, those two columns were modified (date and description, the date set to null).
Then, generateUpdateString( propsToUpdate, j, oldFields, j == 0 && rowId != null ) is called, with 'propsToUpdate' looking like this:
[true, false, true, false, true]
Which is wrong, because indeed the first and third columns were modified, but the first one is set to null. That is why
generateUpdateString( propsToUpdate, j, oldFields, j == 0 && rowId != null )
generates the query:
[update t_quiz_groups set create_date=?, description=?, version=? where group_id=? and version=?]
That will modify create_date and overwrite it with a null value.
Could someone explain how the Any-related annotations (@Any, @AnyMetaDef, @AnyMetaDefs and @ManyToAny) work in practice? I have a hard time finding any useful documentation (the JavaDoc alone isn't very helpful) about these.
I have thus far gathered that they somehow enable referencing abstract and extended classes. If this is the case, why is there no @OneToAny annotation? And does this 'any' refer to a single 'any', or multiple 'any'?
A short, practical and illustrating example would be very much appreciated (doesn't have to compile).
Edit: as much as I would like to accept replies as answers and give credit where due, I found both Smink's and Sakana's answers informative. Because I can't accept several replies as the answer, I will unfortunately mark neither as the answer.
Hope this article brings some light to the subject:
Sometimes we need to map an association property to different types of entities that don't have a common ancestor entity - so a plain polymorphic association doesn't do the work.
For example, let's assume three different applications which manage a media library - the first application manages book borrowing, the second one DVDs, and the third VHSs. The applications have nothing in common. Now we want to develop a new application that manages all three media types and reuses the existing Book, DVD, and VHS entities. Since the Book, DVD, and VHS classes came from different applications, they don't have any ancestor entity - the common ancestor is java.lang.Object. Still, we would like to have one Borrow entity which can refer to any of the possible media types.
To solve this type of reference we can use the any mapping. This mapping always involves more than one column: one column holds the type of the entity the mapped property currently refers to, and the other holds the identity of the entity. For example, if we refer to a book, the first column will hold a marker for the Book entity type and the second will hold the ID of the specific book.
@Entity
@Table(name = "BORROW")
public class Borrow {

    @Id
    @GeneratedValue
    private Long id;

    @Any(metaColumn = @Column(name = "ITEM_TYPE"))
    @AnyMetaDef(idType = "long", metaType = "string",
        metaValues = {
            @MetaValue(targetEntity = Book.class, value = "B"),
            @MetaValue(targetEntity = VHS.class, value = "V"),
            @MetaValue(targetEntity = DVD.class, value = "D")
        })
    @JoinColumn(name = "ITEM_ID")
    private Object item;

    // .......

    public Object getItem() {
        return item;
    }

    public void setItem(Object item) {
        this.item = item;
    }
}
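To make the direction of the mapping concrete, a hypothetical usage sketch (the helper class, the EntityManager and the book id are assumptions; Book is one of the entities referenced in the meta values above):
import javax.persistence.EntityManager;

public class BorrowExample {

    public Borrow borrowBook(EntityManager em, Long bookId) {
        Book book = em.find(Book.class, bookId);

        Borrow borrow = new Borrow();
        borrow.setItem(book);   // stored as ITEM_TYPE = "B" plus ITEM_ID = the book's id
        em.persist(borrow);
        return borrow;
    }

    public void printItem(Borrow borrow) {
        Object item = borrow.getItem();   // comes back as a plain Object that must be inspected
        if (item instanceof Book) {
            System.out.println("Borrowed a book: " + item);
        }
    }
}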
The @Any annotation defines a polymorphic association to classes from multiple tables. This type of mapping always requires more than one column. The first column holds the type of the associated entity. The remaining columns hold the identifier. It is impossible to specify a foreign key constraint for this kind of association, so this is most certainly not meant as the usual way of mapping (polymorphic) associations. You should use this only in very special cases (e.g. audit logs, user session data, etc.).
The @Any annotation describes the column holding the metadata information. To link the value of the metadata information to an actual entity type, the @AnyMetaDef and @AnyMetaDefs annotations are used.
@Any( metaColumn = @Column( name = "property_type" ), fetch = FetchType.EAGER )
@AnyMetaDef(
    idType = "integer",
    metaType = "string",
    metaValues = {
        @MetaValue( value = "S", targetEntity = StringProperty.class ),
        @MetaValue( value = "I", targetEntity = IntegerProperty.class )
    } )
@JoinColumn( name = "property_id" )
public Property getMainProperty() {
    return mainProperty;
}
idType represents the target entity's identifier property type and metaType the metadata type (usually String).
Note that @AnyMetaDef can be shared and reused. It is recommended to place it as package metadata in this case.
//on a package
@AnyMetaDef( name = "property",
    idType = "integer",
    metaType = "string",
    metaValues = {
        @MetaValue( value = "S", targetEntity = StringProperty.class ),
        @MetaValue( value = "I", targetEntity = IntegerProperty.class )
    } )
package org.hibernate.test.annotations.any;

//in a class
@Any( metaDef = "property", metaColumn = @Column( name = "property_type" ), fetch = FetchType.EAGER )
@JoinColumn( name = "property_id" )
public Property getMainProperty() {
    return mainProperty;
}
@ManyToAny allows polymorphic associations to classes from multiple tables. This type of mapping always requires more than one column. The first column holds the type of the associated entity. The remaining columns hold the identifier. It is impossible to specify a foreign key constraint for this kind of association, so this is most certainly not meant as the usual way of mapping (polymorphic) associations. You should use this only in very special cases (e.g. audit logs, user session data, etc.).
@ManyToAny(
    metaColumn = @Column( name = "property_type" ) )
@AnyMetaDef(
    idType = "integer",
    metaType = "string",
    metaValues = {
        @MetaValue( value = "S", targetEntity = StringProperty.class ),
        @MetaValue( value = "I", targetEntity = IntegerProperty.class ) } )
@Cascade( { org.hibernate.annotations.CascadeType.ALL } )
@JoinTable( name = "obj_properties", joinColumns = @JoinColumn( name = "obj_id" ),
    inverseJoinColumns = @JoinColumn( name = "property_id" ) )
public List<Property> getGeneralProperties() {
    return generalProperties;
}
Src: Hibernate Annotations Reference Guide 3.4.0GA
Hope it Helps!
The @Any annotation defines a polymorphic association to classes from multiple tables, right, but polymorphic associations such as these are an SQL anti-pattern! The main reason is that you can't define a FK constraint if a column can refer to more than one table.
One of the solutions, pointed out by Bill Karwin in his book, is to create intersection tables for each type of "Any", instead of using one column with "type", and to use a unique constraint to avoid duplicates. This solution may be a pain to work with in JPA.
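Roughly, per media type you would get a join entity like the following (a hypothetical sketch with made-up names; the unique constraint keeps a single item from being lent out twice):
import javax.persistence.*;

@Entity
@Table(name = "BORROW_BOOKS",
       uniqueConstraints = @UniqueConstraint(columnNames = "BOOK_ID"))
public class BorrowBook {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    @JoinColumn(name = "BORROW_ID")
    private Borrow borrow;

    @ManyToOne
    @JoinColumn(name = "BOOK_ID")
    private Book book;
}
The same pattern would have to be repeated for DVD and VHS, which is exactly the duplication that makes this approach tedious.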
Another solution, also proposed by Karwin, is to create a super-type for the connected elements. Taking the example of borrowing a Book, DVD or VHS, you could create a super-type Item and make Book, DVD and VHS inherit from Item with the joined-table strategy. Borrow then points to Item. This way you completely avoid the FK problem. I translated the book example to JPA below:
@Entity
@Table(name = "BORROW")
public class Borrow {
    // ... id, ...
    @ManyToOne Item item;
    // ...
}

@Entity
@Table(name = "ITEMS")
@Inheritance(strategy = InheritanceType.JOINED)
public class Item {
    // id, ....
    // you can add a reverse OneToMany here to Borrow.
}

@Entity
@Table(name = "BOOKS")
public class Book extends Item {
    // book attributes
}

@Entity
@Table(name = "VHS")
public class VHS extends Item {
    // VHS attributes
}

@Entity
@Table(name = "DVD")
public class DVD extends Item {
    // DVD attributes
}
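A hypothetical usage sketch of this variant (the helper class, the EntityManager wiring, a setter on Borrow and the book id are assumptions): since Borrow now holds a plain @ManyToOne to Item, a real foreign key to the ITEMS table is possible.
import javax.persistence.EntityManager;

public class JoinedInheritanceExample {

    public Borrow borrow(EntityManager em, Long bookId) {
        Item item = em.find(Book.class, bookId);   // a Book *is an* Item

        Borrow borrow = new Borrow();
        borrow.setItem(item);   // setter assumed; plain association backed by an FK to ITEMS
        em.persist(borrow);
        return borrow;
    }
}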
Have you read the Hibernate Annotations documentation for @Any? Haven't used that one myself yet, but it looks like some extended way of defining references. The link includes an example, though I don't know if it's enough to fully understand the concept...