Cannot insert NULL into column - java

I am attempting to get Hibernate to lazy load some clobs. The loading portion is working just fine. The issue is when I try to create a new one. I started with advice from Blob lazy loading
Here are my mappings. (Note: the table structure is really, really bad; there are multiple clobs on this table -- this example is simplified from my real model...)
@Entity
@Table(name = "TABLE_1")
public class BadDBDesign {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "table_id")
    private long key;

    @OneToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @JoinColumn(name = "table_id", referencedColumnName = "table_id",
            insertable = true, updatable = false)
    private BlobWrapperA blobWrapperA;

    @OneToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @JoinColumn(name = "table_id", referencedColumnName = "table_id",
            insertable = true, updatable = false)
    private BlobWrapperB blobWrapperB;
}
@Entity
@Table(name = "TABLE_1")
public class BlobWrapperA {
    @Lob
    @Column(name = "col_A", nullable = false)
    @Type(type = "org.springframework.orm.hibernate3.support.BlobByteArrayType")
    private byte[] blobColA;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "table_id")
    private long key;
}

@Entity
@Table(name = "TABLE_1")
public class BlobWrapperB {
    @Lob
    @Column(name = "col_B", nullable = false)
    @Type(type = "org.springframework.orm.hibernate3.support.BlobByteArrayType")
    private byte[] blobColB;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "table_id")
    private long key;
}
The application boots just fine, and I am able to retrieve the data without loading the clobs (they are retrieved via lazy loading when needed), but when I attempt to create new rows I receive the following stack trace:
Hibernate:
insert
into
TABLE_1
(key, col_A, col_B)
values
(?, ?, ?)
2011-08-31 17:35:09,089 [http-8080-1] DEBUG org.springframework.jdbc.support.lob.DefaultLobHandler IP134.167.141.34 CV#f2a597b2-a185-4e89 P#71252 - Set bytes for BLOB with length 7136
2011-08-31 17:35:16,441 [http-8080-1] DEBUG org.springframework.jdbc.support.lob.DefaultLobHandler IP134.167.141.34 CV#f2a597b2-a185-4e89 P#71252 - Set bytes for BLOB with length 10946
Aug 31, 2011 5:35:50 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet online threw exception java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot insert NULL into ("SCHEMA"."TABLE_1"."COL_A")
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1010)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3576)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3657)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1350)
at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
Note the important piece: the clob lengths are logged immediately after the insert statement generated by Hibernate.
Edit: After looking at this early this morning, I realized that the issue was that one of the blobs had to be mapped with @JoinColumn(insertable = false, updatable = false), otherwise Hibernate would not start. So of course it was attempting to insert NULL into that column. The new question becomes: can you lazily load MULTIPLE clobs on a single table (using the same key)? I'm guessing that without a table redesign I'm pretty much out of luck unless Oracle fixes the driver.

As much as it makes me want to vomit, we needed to get this functionality working without modifying the database.
So I pulled the common pieces out into an abstract mapped superclass, like so:
@MappedSuperclass
public abstract class BadDBDesign {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "table_id")
    private long key;

    @Column(name = "small_value")
    private String smallVarChar2Field;
}
The problem is I then have to extend this class for each of our blobs :( Thus our extended classes look like:
@Entity
@Table(name = "TABLE_1")
public class BlobA extends BadDBDesign {
    @Lob
    @Column(name = "col_a")
    @Type(type = "org.springframework.orm.hibernate3.support.BlobByteArrayType")
    private byte[] blobColA;
}

@Entity
@Table(name = "TABLE_1")
public class BlobB extends BadDBDesign {
    @Lob
    @Column(name = "col_b")
    @Type(type = "org.springframework.orm.hibernate3.support.BlobByteArrayType")
    private byte[] blobColB;
}
Luckily we don't have any place where we need more than one clob on any given page. This is still a maintenance nightmare, but it was an acceptable trade-off (for the time being) to get the loads done more efficiently. I created DAOs for these, which the project didn't have before; hopefully this will push the team in a good direction towards a proper abstraction layer, and we can remove these wasted POJOs entirely in a future release.
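For reference, a minimal sketch of what one of those DAOs could look like -- the class and method names here are illustrative rather than the project's actual code, and it assumes the SessionFactory is wired in from the existing Spring/Hibernate 3 setup:

import org.hibernate.SessionFactory;

public class BlobADao {

    private final SessionFactory sessionFactory;

    public BlobADao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Loads only the columns mapped on BlobA (table_id, small_value, col_a);
    // the other clob columns are not part of this mapping, so they are never read.
    public BlobA findByKey(long key) {
        return (BlobA) sessionFactory.getCurrentSession().get(BlobA.class, key);
    }

    public void save(BlobA blobA) {
        sessionFactory.getCurrentSession().saveOrUpdate(blobA);
    }
}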

Looks like in your BlobWrapperA class you have nullable = false set on that column. Or the column has a NOT NULL constraint on the table itself in the database.

Oracle and Hibernate hate each other when it comes to LOB types, which stems from the fact that the Oracle driver is garbage. I believe I've run across this before; try setting the following Hibernate properties:
hibernate.jdbc.use_streams_for_binary=true
hibernate.jdbc.batch_size=0
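If the SessionFactory is built through Spring's Hibernate 3 support (the same package the BlobByteArrayType above comes from), a sketch of where those properties could go looks roughly like this; they can equally be set in hibernate.cfg.xml:

import java.util.Properties;
import org.springframework.orm.hibernate3.LocalSessionFactoryBean;

public class SessionFactoryConfig {

    public LocalSessionFactoryBean sessionFactory() {
        LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
        Properties props = new Properties();
        // property names and values as suggested above
        props.setProperty("hibernate.jdbc.use_streams_for_binary", "true");
        props.setProperty("hibernate.jdbc.batch_size", "0");
        factory.setHibernateProperties(props);
        // dataSource and mapping configuration omitted
        return factory;
    }
}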

Related

Hibernate OneToOne between PK's with lazy behaviour

I'm trying to have an entity called MyEntity along with another entity called MyEntityInfo, using Hibernate 5.3.13.Final with annotations under WildFly 18.
The idea is that MyEntity stores some commonly requested fields and MyEntityInfo stores some rarely requested fields. Both share the same primary key, called SID (Long), and there is an FK from Info's SID to Entity's SID. There can be entities without info.
Normally you will not require the additional info. For example, I don't want the info entity to be fetched when I query my entity like this:
MyEntityImpl entity = em.find(MyEntityImpl.class, 1L);
However, when I run this code, I find that there's a second query fetching the Info entity along with the main one, as if it were EAGER.
I'm mapping the relationship using @OneToOne. I've tried several combinations of FetchType, optional and @LazyToOne, but so far without success.
Here is the code for both MyEntity and MyEntityInfo classes (additional getters and setters removed):
MyEntity (ID generator is a custom sequence generator):
@Entity
@Table(name = MyEntityImpl.TABLE_NAME)
public class MyEntityImpl {
    public static final String TABLE_NAME = "TMP_MY_ENTITY";

    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "GEN_" + TABLE_NAME)
    @GenericGenerator(name = "GEN_" + TABLE_NAME, strategy = CoreIdGenerator.ID_GENERATOR, parameters = {
            @Parameter(name = "tableName", value = TABLE_NAME) })
    @Column(name = "sid", nullable = false, unique = true)
    private Long sid;

    @OneToOne(mappedBy = "myEntity", cascade = CascadeType.ALL, fetch = FetchType.LAZY, optional = true)
    @LazyToOne(LazyToOneOption.NO_PROXY)
    private MyEntityInfoImpl info;

    @Column
    private String field;
MyEntityInfo:
@Entity
@Table(name = MyEntityInfoImpl.TABLE_NAME)
public class MyEntityInfoImpl {
    public static final String TABLE_NAME = "TMP_MY_ENTITY_INFO";

    @Id
    @Column(name = "SID", nullable = false, unique = true)
    private Long sid;

    @OneToOne(fetch = FetchType.EAGER, optional = false)
    @JoinColumn(name = "SID", referencedColumnName = "SID", insertable = false, updatable = false, nullable = false)
    private MyEntityImpl myEntity;

    @Column(name = "INFO_FIELD")
    private String infoField;
I've tried this solution, but as I said, it didn't work for me:
Hibernate lazy loading for reverse one to one workaround - how does this work?
I've managed to do something somewhat similar using @OneToMany and managing the data manually, but that's not what I'd like to do. However, other alternatives, information on whether this can be achieved at all with @OneToOne, or the right design pattern for this are also welcome.
PS: Database table creation script for SQL Server, in case you want to try it:
create table TMP_MY_ENTITY (SID NUMERIC(19,0) NOT NULL, FIELD VARCHAR(100));
go
ALTER TABLE TMP_MY_ENTITY ADD CONSTRAINT PK_TMP_MY_ENTITY PRIMARY KEY CLUSTERED (SID);
go
create table TMP_MY_ENTITY_INFO (SID NUMERIC(19,0) NOT NULL, INFO_FIELD VARCHAR(100));
go
ALTER TABLE TMP_MY_ENTITY_INFO ADD CONSTRAINT PK_TMP_MY_ENTITY_INFO PRIMARY KEY CLUSTERED (SID);
go
CREATE SEQUENCE SEQ_TMP_MY_ENTITY START WITH 1 INCREMENT BY 1 MINVALUE 1 CACHE 20;
alter table TMP_MY_ENTITY_INFO add constraint FK_TMP_MY_ENT_INFO_MY_ENT FOREIGN KEY (SID) references TMP_MY_ENTITY(SID);
go
insert into TMP_MY_ENTITY(SID, FIELD) VALUES (NEXT VALUE FOR SEQ_TMP_MY_ENTITY, 'Field 1');
insert into TMP_MY_ENTITY_INFO(SID, INFO_FIELD) VALUES ((SELECT MAX(SID) FROM TMP_MY_ENTITY), 'Info 1');
insert into TMP_MY_ENTITY(SID, FIELD) VALUES (NEXT VALUE FOR SEQ_TMP_MY_ENTITY, 'Field 2');
insert into TMP_MY_ENTITY_INFO(SID, INFO_FIELD) VALUES ((SELECT MAX(SID) FROM TMP_MY_ENTITY), 'Info 2');
insert into TMP_MY_ENTITY(SID, FIELD) VALUES (NEXT VALUE FOR SEQ_TMP_MY_ENTITY, 'Field 3 no info');
-- DELETE ALL
drop table TMP_MY_ENTITY_INFO;
drop table TMP_MY_ENTITY;
drop sequence SEQ_TMP_MY_ENTITY;
After following @SternK's link, and upgrading to WildFly 19 and Hibernate 5.4.14, it finally worked by using @MapsId.
The right mapping to use is this:
MyEntity:
public class MyEntityImpl {
    @OneToOne(mappedBy = "myEntity", cascade = CascadeType.REMOVE, fetch = FetchType.LAZY, optional = true)
    @JoinColumn(name = "SID")
    private MyEntityInfoImpl info;
MyEntityInfo:
public class MyEntityInfoImpl {
    @OneToOne(fetch = FetchType.EAGER, optional = false)
    @MapsId
    @JoinColumn(name = "SID", referencedColumnName = "SID", insertable = false, updatable = false, nullable = false)
    private MyEntityImpl myEntity;
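For completeness, a rough sketch of persisting the pair under this mapping, inside an open transactional EntityManager em (the setter and getter names are assumptions; with @MapsId the info row derives its SID from the parent, so nothing sets info's id explicitly):

MyEntityImpl entity = new MyEntityImpl();
entity.setField("Field 4");

MyEntityInfoImpl info = new MyEntityInfoImpl();
info.setInfoField("Info 4");
info.setMyEntity(entity);   // @MapsId copies the parent's generated SID into info's primary key
entity.setInfo(info);

em.persist(entity);
em.persist(info);           // persisted explicitly, since the parent side only cascades REMOVE here

// In a fresh persistence context, this should no longer trigger the extra select for info:
MyEntityImpl found = em.find(MyEntityImpl.class, entity.getSid());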

Hibernate 5.x memory leak - looks like HQL queries caching?

We have a bog standard REST-based Hibernate application.
Recently we noticed that it repeatedly dies with Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded. We've emulated what happens in a test that collects items from a collection of elements. After about 1000 runs the application goes kaput with the pattern below. Production displays similar behavior, although on a much longer time scale:
A Java profiler shows an abundance of Hibernate Node elements in a HashMap:
When drilling deeper into what these nodes are, I can see the following:
It looks like Hibernate caches HQL queries (even if we run the same query over and over again).
The class that comprises the offending collection is below:
@Entity
@Audited
@Table(name = "benefit_package", uniqueConstraints = {
        @UniqueConstraint(name = "uk01_benefit_package", columnNames = {"employer_guid", "benefit_package_name"}),
        @UniqueConstraint(name = "uk02_benefit_package", columnNames = {"benefit_package_guid"})})
@SequenceGenerator(name = "benefit_package_sequence", sequenceName = "benefit_package_id_seq", allocationSize = 1, initialValue = 1)
@EqualsAndHashCode(of = {"employerId", "name"})
@ToString(of = {"id", "employerId", "name"})
public class BenefitPackage {
    @Id
    @Column(name = "benefit_package_id")
    @GeneratedValue(generator = "benefit_package_sequence", strategy = SEQUENCE)
    private Long id;

    @Column(name = "benefit_package_guid", nullable = false, columnDefinition = "binary(16)")
    private UUID guid;

    @Column(name = "benefit_package_name", nullable = false)
    private String name;

    @Column(name = "employer_guid", nullable = false, columnDefinition = "binary(16)")
    private UUID employerId;

    @ManyToOne
    @JoinColumn(name = "selection_control_id")
    private SelectionControl selectionControl;

    @OneToMany(mappedBy = "benefitPackage")
    private List<BenefitPackageVersion> versions = new ArrayList<>();

    @Column(name = "last_modified_date")
    private LocalDateTime lastModifiedDate;

    @Version
    @Column(name = "optlock")
    private Long optLock;
This entity is used with pagination, but we do not use any custom JPA queries (at least overtly) for this operation.
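For context, the pagination is the standard Spring Data variety; a hypothetical repository for the entity above (not our actual code) is no more than this:

import org.springframework.data.jpa.repository.JpaRepository;

// Only inherited/derived methods are used; there are no @Query annotations anywhere.
public interface BenefitPackageRepository extends JpaRepository<BenefitPackage, Long> {
}

A typical read is then just benefitPackageRepository.findAll(new PageRequest(page, size)).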
Application details: Spring Boot 1.2.8, Hibernate 5.1 (upgrading to 5.2.18.Final had no effect). No second-level cache is used.
Questions:
Why should Hibernate cache the same query over and over again?
Has this ever been noticed or addressed? Is there a fix for this problem?
I know this ticket might be a duplicate, but I have not found a definitive answer to that question anywhere.

Hibernate reloading entities in a fetch join

I am having a problem with Hibernate reloading the entities in a query even though they are being fetched as part of the main query.
The entities are as follows (simplified)
class Data {
    @Id
    String guid;

    @ManyToOne(fetch = FetchType.LAZY)
    @NotFound(action = NotFoundAction.IGNORE)
    DataContents contents;
}

class DataClosure {
    @Id
    @ManyToOne(fetch = FetchType.EAGER)
    @Fetch(FetchMode.JOIN)
    @JoinColumn(name = "ancestor_id", nullable = false)
    private Data ancestor;

    @Id
    @ManyToOne(fetch = FetchType.EAGER)
    @Fetch(FetchMode.JOIN)
    @JoinColumn(name = "descendant_id", nullable = false)
    private Data descendant;

    private int length;
}
This is modelling a closure table of parent / child relationships.
I have set up some criteria as follows
final Criteria criteria = getSession()
.createCriteria(DataClosure.class, "dc");
criteria.createAlias("dc", "a");
criteria.createAlias("dc.descendant", "d");
criteria.setFetchMode("a", FetchMode.JOIN);
criteria.setFetchMode("d", FetchMode.JOIN);
criteria.add(Restrictions.eq("d.metadataGuid",guidParameter));
criteria.add(Restrictions.ne("a.metadataGuid",guidParameter));
This results in the following SQL query
select
this_.descendant_id as descenda2_21_2_,
this_.ancestor_id as ancestor3_21_2_,
this_.length as length1_21_2_,
d2_.guid as metadata1_20_0_,
d2_.name as name5_20_0_,
a1_.guid as metadata1_20_1_,
a1_.name as name6_20_1_
from
data_closure this_
inner join
data d2_
on this_.descendant_id=d2_.metadata_guid
inner join
data a1_
on this_.ancestor_id=a1_.metadata_guid
where
d2_.guid=?
and a1_.guid<>?
which looks like it is correctly implementing the join fetch. However when I execute
List list = criteria.list();
I see one of these entries in the SQL log per row in the result set
Result set row: 0
DEBUG Loader - Loading entity: [Data#testGuid19]
DEBUG SQL -
select
data0_.guid as guid1_20_0_,
data0_.title as title5_20_0_,
from
data data0_
where
data0_.guid=?
Hibernate:
(omitted)
DEBUG Loader - Result set row: 0
DEBUG Loader - Result row: EntityKey[Data#testGuid19]
DEBUG TwoPhaseLoad - Resolving associations for [Data#testGuid19]
DEBUG Loader - Loading entity: [DataContents#7F1134F890A446BBB47F3EB64C1CF668]
DEBUG SQL -
select
dataContents0_.guid as guid_12_0_,
dataContents0_.isoCreationDate as isoCreat2_12_0_,
from
dataContents dataContents0_
where
dataContents0_.guid=?
Hibernate:
(omitted)
It looks like even though DataContents is marked as lazily loaded, it's being loaded eagerly.
So I either want some way in my query to fetch join DataClosure and Data and lazily fetch DataContents, or to fetch join the DataContents if that is not possible.
Edit:
Modelling the closure table like this
class DataClosure {
    @Id
    @Column(name = "ancestor_id", nullable = false, length = 36)
    private String ancestorId;

    @Id
    @Column(name = "descendant_id", nullable = false, length = 36)
    private String descendantId;

    @ManyToOne(fetch = FetchType.EAGER)
    @Fetch(FetchMode.JOIN)
    @JoinColumn(name = "ancestor_id", nullable = false)
    private Data ancestor;

    @ManyToOne(fetch = FetchType.EAGER)
    @Fetch(FetchMode.JOIN)
    @JoinColumn(name = "descendant_id", nullable = false)
    private Data descendant;

    private int length;
}
fixed the problem. In other words, having the @Id annotation on associations to entities from other tables seemed to cause the issue, even though there was nothing wrong with the queries generated.
I think your problem here might be this
@NotFound(action = NotFoundAction.IGNORE)
There are plenty of Google results where using that causes lazy loading to become eager. I think it is a bug in Hibernate.
Adding this to the list of annotations should fix the problem:
@LazyToOne(value = LazyToOneOption.NO_PROXY)
since that informs Hibernate that you will not try to use that property later on, so no proxy is required.
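Applied to the Data entity from the question, that would look roughly like the sketch below. Whether NO_PROXY actually takes effect can depend on bytecode enhancement in your Hibernate version, so treat this as a sketch rather than a guaranteed fix.

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

import org.hibernate.annotations.LazyToOne;
import org.hibernate.annotations.LazyToOneOption;
import org.hibernate.annotations.NotFound;
import org.hibernate.annotations.NotFoundAction;

@Entity
public class Data {

    @Id
    String guid;

    @ManyToOne(fetch = FetchType.LAZY)
    @NotFound(action = NotFoundAction.IGNORE)   // from the question; this is what tends to force eager loading
    @LazyToOne(LazyToOneOption.NO_PROXY)        // the suggested addition
    DataContents contents;
}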

@Column insertable, updatable don't go well with Spring JPA?

Scenario :
I have 3 tables, Offer, Channel and Offer_Channels.
Basically Channel is a lookup table, i.e., the values in that table can neither be inserted nor updated by the application. An offer can contain one or many channels. I use the Channel table values to populate dynamic checkboxes. Anyway, here is what I have:
@Entity
@Table(name = "OFFER")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class Offer implements Serializable {
    // Offer Id
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO, generator = "offer_seq_gen")
    @Column(name = "OFFER_ID")
    private long offerId;

    @ManyToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "OFFER_CHANNELS", joinColumns = { @JoinColumn(name = "OFFER_ID") }, inverseJoinColumns = { @JoinColumn(name = "CHANNEL_ID") })
    private Set<Channel> channels = new HashSet<Channel>();

    // Other fields and corresponding getters and setters
}
Here is the Channel entity:
@Entity
@Table(name = "CHANNEL")
public class Channel implements Serializable {
    private static final long serialVersionUID = 1L;

    @NotNull
    @Id
    @Column(name = "CHANNEL_ID", insertable = false, updatable = false)
    private Long channelId;

    @Column(name = "CHANNEL_NAME", insertable = false, updatable = false)
    private String channelName;

    // getters and setters
}
Now, when a user creates an offer, I need to insert a row into the Offer and Offer_Channels tables and do nothing (no updates/inserts) to the Channel table. Initially all three would happen, so to achieve the "do nothing to the Channel table" part I put insertable=false and updatable=false on the Channel table columns, and that worked like a charm. I used plain Hibernate for this: I wrote a standalone Java application with a main class to add an offer using Hibernate's session.save(offer). It ran the following queries:
Hibernate: insert into OFFER
Hibernate: insert into OFFER_CHANNELS
Alright, now I have a REST service where I am using Spring's JPA repository to save the information, with the same domain objects set up. Now, when I add an offer, it runs:
Hibernate: insert into OFFER
Hibernate: insert into CHANNEL (it is failing here, obviously; I don't want this step to happen)
My questions:
1. Why is it trying to write something to the Channel table even though I set insertable=false in its domain object, and why does this only happen with the Spring JPA setup? With the plain Hibernate setup it works fine.
2. Don't @JoinTable / @OneToMany / insertable / updatable go well with a Spring JPA repository?
What am I missing here?
UPDATE:
@Service
@Transactional
public class OfferService {
    @Inject
    private OfferRepository offerRepository;

    public Offer saveOfferInformation(Offer offer) {
        log.debug("Saving Offer Info..");
        log.debug("Offer object: " + offer);
        return offerRepository.save(offer);
    }
}
Repo:
public interface OfferRepository extends JpaRepository<Offer, Long> {
    List<Offer> findByBuySku(String buySku);
}
And in the REST service I'm just injecting the service and calling it, so there is no business logic in the REST service. Right now I'm getting the following, and the reason is that it is trying to insert a record into the Channel table:
exception: "org.springframework.dao.DataIntegrityViolationException"
message: "could not execute statement; SQL [n/a]; constraint [PVS_OWNER.CHANNEL_PK]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement"
Have you tried adding insertable and updatable on the @JoinColumn? This works with one-to-many relationships. I'm not 100% sure if it works with a many-to-many relationship.
@JoinTable(name = "OFFER_CHANNELS", joinColumns = { @JoinColumn(name = "OFFER_ID", insertable = false, updatable = false) }, inverseJoinColumns = { @JoinColumn(name = "CHANNEL_ID", insertable = false, updatable = false) })

Handling creation of ORM objects prior to persistence/generation of primary keys?

Bear with me as I try to simplify my issue as much as possible.
I am creating a new ORM object. This object has an auto-generated primary key, which is created on the database as an identity. Within this object is a child object with a many-to-one relationship to the parent object. One of the attributes I need to set to create the child object is the primary key of the parent object, which has not been generated yet. It is important to note that the primary key of the child object is a composite key that includes the primary key of the parent object.
Diagram http://xs941.xs.to/xs941/09291/fieldrule.1degree221.png
In this diagram FieldRule is the child table and SearchRule is the parent table. The problem is that SearchRuleId has not been generated when I am creating FieldRule objects. So there is no way to link them.
How do I solve this problem?
Here are some relevant snippets from the entity classes, which use annotation-based mappings.
From SearchRule.java (Parent Class):
public class SearchRule implements Serializable {
    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Basic(optional = true)
    @Column(name = "ID")
    private Integer id;

    @Basic(optional = false)
    @Column(name = "Name", unique = true)
    private String name;

    @Basic(optional = false)
    @Column(name = "Threshold")
    private int threshold;

    @Basic(optional = false)
    @Column(name = "LastTouched", insertable = false, updatable = false)
    @Temporal(TemporalType.TIMESTAMP)
    private Date lastTouched;

    @Column(name = "TouchedBy")
    private String touchedBy;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "searchRule", fetch = FetchType.LAZY)
    private Collection<FieldRule> fieldRuleCollection;

    @JoinColumn(name = "IndexTemplateId", referencedColumnName = "ID")
    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    private IndexTemplate indexTemplateId;
From FieldRule.java (Child Class):
public class FieldRule implements Serializable {
    private static final long serialVersionUID = 1L;

    @EmbeddedId
    protected FieldRulePK fieldRulePK;

    @Basic(optional = false)
    @Column(name = "RuleValue")
    private String ruleValue;

    @JoinColumns({ @JoinColumn(name = "IndexTemplateId", referencedColumnName = "IndexTemplateId", insertable = false, updatable = false), @JoinColumn(name = "FieldNumber", referencedColumnName = "FieldNumber", insertable = false, updatable = false) })
    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    private Field field;

    @JoinColumn(name = "SearchRuleId", referencedColumnName = "ID", insertable = false, updatable = false)
    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    private SearchRule searchRule;
From FieldRulePK.java (Child PK Class):
@Embeddable
public class FieldRulePK implements Serializable {
    @Basic(optional = false)
    @Column(name = "IndexTemplateId")
    private Integer indexTemplateId;

    @Basic(optional = false)
    @Column(name = "FieldNumber")
    private Integer fieldNumber;

    @Basic(optional = false)
    @Column(name = "SearchRuleId")
    private Integer searchRuleId;
Why do you have to set the primary key of the initial object in the sub-objects? With a proper mapping, the reference will get set by the JPA implementation automatically.
So the answer is: do a correct mapping.
If you need a more detailed answer, provide a more detailed question, including:
source code of the involved classes
source code used to create and persist the instances
exceptions experienced
information on which JPA implementation you use
Edit, after more details were provided in the question:
I think your embeddable PK should look something like this:
@Embeddable
public class FieldRulePK implements Serializable {
    @Basic(optional = false)
    @Column(name = "IndexTemplateId")
    private Integer indexTemplateId;

    @Basic(optional = false)
    @Column(name = "FieldNumber")
    private Integer fieldNumber;

    @ManyToOne( ... some not so trivial details here .. )
    private SearchRule searchRule;
}
And the searchRule property of your FieldRule should be dropped. The entity reference in the embeddable should result in an id field in the database.
This is a database design issue, I think. If the FieldRule can be created independently of the SearchRule (in other words, SearchRuleId is not a "not null" field) then you need to not include it in your composite primary key. If SearchRuleId cannot be null, then you just have to save the objects in the right order, which your ORM should handle for you if your mapping is correct.
I think the problem is with the way you're doing your mapping, where you're trying to pull too many database concepts into your OO model. ORM was a little confusing to me as well, when I started doing it. What you need to understand is that the concept of a primary key field is a database concept and not an OO concept. In OO, each object reference is unique, and that's what you use to identify instances.
Object references do not really map well to the database world, and that's why we have primary key properties. With that said, the use of primary key properties should be kept to a minimal. What I find helpful is to minimize the type of primary key properties that map directly to the primary key columns (usually, integer properties that map to a primary key column).
Anyway, based on that, here's how I think you should do your mapping (changes marked with comments):
From FieldRule.java (Child Class):
public class FieldRule implements Serializable {
    private static final long serialVersionUID = 1L;

    @EmbeddedId
    protected FieldRulePK fieldRulePK;

    @Basic(optional = false)
    @Column(name = "RuleValue")
    private String ruleValue;

    // Removed field and searchRule mapping as those are already in the
    // primary key object, updated setters/getters to pull properties from
    // primary key object

    public Field getField() {
        return fieldRulePK != null ? fieldRulePK.getField() : null;
    }

    public void setField(Field field) {
        // ... parameter validation ...
        if (fieldRulePK == null) fieldRulePK = new FieldRulePK();
        fieldRulePK.setField(field);
    }

    public SearchRule getSearchRule() {
        return fieldRulePK != null ? fieldRulePK.getSearchRule() : null;
    }

    public void setSearchRule(SearchRule searchRule) {
        // ... parameter validation ...
        if (fieldRulePK == null) fieldRulePK = new FieldRulePK();
        fieldRulePK.setSearchRule(searchRule);
    }
From FieldRulePK.java (Child PK Class):
@Embeddable
public class FieldRulePK implements Serializable {
    // Map relationships directly to objects instead of using integer primary keys

    @JoinColumns({ @JoinColumn(name = "IndexTemplateId", referencedColumnName = "IndexTemplateId", insertable = false, updatable = false), @JoinColumn(name = "FieldNumber", referencedColumnName = "FieldNumber", insertable = false, updatable = false) })
    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    private Field field;

    @JoinColumn(name = "SearchRuleId", referencedColumnName = "ID", insertable = false, updatable = false)
    @ManyToOne(optional = false, fetch = FetchType.LAZY)
    private SearchRule searchRule;
SearchRule.java should be fine as it is.
I hope this all makes sense.
Note that this is untested, it would take too much time for me to set up a test database and create all the necessary test code, but I hope it gives you an idea on how to proceed.
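To make the intent concrete, here is a rough usage sketch under that mapping, assumed to run inside an open transaction; setter names such as setName, setRuleValue and setFieldRulePK are assumptions, and existingField stands for a Field instance loaded elsewhere. The point is that the parent is persisted first and the entity references inside the embedded id resolve to its generated key at flush time:

SearchRule searchRule = new SearchRule();
searchRule.setName("my-search-rule");

FieldRulePK pk = new FieldRulePK();
pk.setField(existingField);     // a Field already present in the database (hypothetical variable)
pk.setSearchRule(searchRule);   // entity reference instead of a raw Integer id

FieldRule fieldRule = new FieldRule();
fieldRule.setFieldRulePK(pk);
fieldRule.setRuleValue("some value");

Collection<FieldRule> rules = new ArrayList<FieldRule>();
rules.add(fieldRule);
searchRule.setFieldRuleCollection(rules);

entityManager.persist(searchRule);  // cascade = ALL on the collection saves the FieldRule as well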
Posting this mostly because I can't leave this complicated of comment... but anyway...
Normally when I look at EmbeddedId-type things, I see things like this example of embeddable keys. Normally I'd expect something like:
From ChildPK.java:
@Basic(optional = false)
@Column(name = "ParentId")
private Parent parent;
But here I guess we've got two other FKs being made into a composite PK, IndexTemplateId and FieldNumber... and this Parent object's ID is auto-generated using a sequence.
Now, I suppose you must already be persisting the Parent object prior to trying to persist the child object, or you must mark the Parent object in the child as cascading; that should ensure the id gets populated. The composite keys seem to greatly complicate the problem.
Since this is a new ORM model, I would suggest that you use a single PK on each table instead of composite ids and simply have FK relations between the tables.
Apologies if I'm not grasping something here, but I'm not quite sure there is enough information -- I would ask for the entire entity field declarations for each of your 3 classes, just to see how you're trying to put this together...
Something is a bit fishy here. Generally speaking, if you have parent entity A and child entity B and you are persisting A with some children, the correct order of operations is to first insert A into the database and then insert the children (I am assuming a proper cascade from A to B). So in this general case the ids will be properly generated and everything should be OK.
However, it appears that in your case the children (FieldRules) are saved first. The only reasonable explanation for this I can think of is that you have an additional entity C (in your case probably the Field entity) which is already saved when your code runs and which has a cascade to FieldRule. In that case you have two conflicting cascades: one SearchRule -> FieldRule and another Field -> FieldRule. Since JPA doesn't perform smart analysis of this, it is a matter of chance (and loading order) which one gets invoked first. In your case Field -> FieldRule is probably invoked first, which causes the children to be inserted before the parent.
So I would search for any additional cascades TO FieldRule in your code and try to remove those. If you can remove them all, it will probably solve your problem.
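In other words, the kind of mapping to look for would be something like the sketch below on the Field entity (hypothetical, it is not shown in the question); if something like it exists with a cascade, Hibernate has a second, competing path for saving FieldRule rows:

@Entity
public class Field implements Serializable {

    // ... id and other columns ...

    // If this cascade exists, Field -> FieldRule may fire before SearchRule -> FieldRule does.
    @OneToMany(cascade = CascadeType.ALL, mappedBy = "field", fetch = FetchType.LAZY)
    private Collection<FieldRule> fieldRuleCollection;
}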
Bottom line, your searchRule MUST be saved before your fieldRules can be.
However, rather than having the column definition on the field, you could try having it on a getter...
@Embeddable
public class FieldRulePK implements Serializable {
    // snip other columns

    @Basic(optional = false)
    @Column(name = "SearchRuleId")
    private Integer getSearchRuleId() {
        return this.fieldRule.searchRule.getId();
    }

    private void setSearchRuleId(Integer id) {
        this.fieldRule.searchRule = new SearchRule(id);
    }
This would mean that when saveSearchRule(searchRule) cascades into the FieldRuleCollection to save it, the searchRuleId is automatically retrieved from the searchRule after it is saved, rather than having to be hackily added in.
It means whatever creates your FieldRulePK object has to pass a reference to its parent, but otherwise your hacky setSearchRuleId() loop is unnecessary.
Why does the "sub-object" (I think you mean "child") need to have the key to the parent object? If you have a OneToMany on the Parent object and a ManyToOne on the Child object with mappedBy, your child object will already have a foreign key (and a reference to the parent object).
Also, you need to check the cascade in your Parent object's OneToMany annotation.
Simple answer: don't rely on your persistence layer generating the IDs at the time of persistence. Create the entity IDs at the time you create the objects.
Unless you are coding some specific meaning into your keys (a database anti-pattern), they can be any random, unique value such as a UUID (GUID for the Microsofties).
And here's something to think about when you use your persistence layer to generate the ID/primary key: do you use the entity's primary key in the hashcode or equals method?
If you do use the ID/primary key in the hashcode/equals method then you will break the contract expected of objects when stored in a Java collection. See this Hibernate page for more details.
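A minimal sketch of that suggestion (the class name is made up, not taken from the question): the id is assigned when the object is constructed, so equals/hashCode stay stable before and after the entity is persisted or added to collections.

import java.util.UUID;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class SearchRuleWithAssignedId {

    @Id
    @Column(name = "ID", length = 36)
    private String id = UUID.randomUUID().toString();  // assigned at creation time, not by the database

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof SearchRuleWithAssignedId)) return false;
        return id.equals(((SearchRuleWithAssignedId) other).id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }
}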
Right now my workaround is doing something like this:
Collection<FieldRule> fieldRules = searchRule.getFieldRuleCollection();
if (searchRule.getId() == null) {
    // null out the collection so it doesn't cascade on persist
    searchRule.setFieldRuleCollection(null);
    // save to get the id
    dao.saveSearchRule(searchRule);
    for (FieldRule fr : fieldRules) {
        fr.getFieldRulePK().setSearchRuleId(searchRule.getId());
    }
}
// re-set the collection
searchRule.setFieldRuleCollection(fieldRules);
// remove the double reference to the FieldRuleCollection, which JPA doesn't like
fieldRules = null;
// save again, this time for real
dao.saveSearchRule(searchRule);
That seems really hacky to me, but it does work (maybe; I'm hitting some other issues, but they may be unrelated).
There must be a better way to turn off cascade for a single persist.
