JPA throwing "multiple assignments to same column" during save operation

I have a model class that references another model class, and I seem to be hitting an issue where the @OneToOne annotation fixes one problem but causes another. Removing it causes the inverse.
JPA throws "multiple assignments to same column" when trying to save changes to the model. The generated SQL has duplicate columns and I'm not sure why.
Here's a preview of what the classes look like:
The parent class references look like this:
public class Appliance {
    public Integer locationId;

    @Valid
    @OneToOne
    public Location location;
}
The child Location class has an id field and a few other text fields -- very simple:
public class Location {
    public Integer id;
    public String name;
}
When I attempt to perform a save operation, does anyone know why JPA is creating an insert statement for the Appliance table that contains two fields named "location_id"?
I need to annotate the reference to the child class with @OneToOne if I want to be able to retrieve data from the corresponding database table and display it on screen. However, if I remove @OneToOne, the save works fine, but then it obviously won't load the Location data into the child object when I query the database.
Thanks in advance!

It appears you did not define an inheritance strategy (@Inheritance) on the parent class. Since you did not, the default is to combine the parent and the child class into the same table using the single-table strategy.
Since both entities are going into the same table, I think that @OneToOne is trying to write the id twice, regardless of which side it is on.
If you want the parent to be persisted in its own table, look at InheritanceType.JOINED.
Or consider refactoring so that you are not persisting the parent separately, as JOINED is not considered a safe option with some JPA providers.
See the official Oracle documentation below.
http://docs.oracle.com/javaee/7/tutorial/doc/persistence-intro002.htm#BNBQR
37.2.4.1 The Single Table per Class Hierarchy Strategy
With this strategy, which corresponds to the default InheritanceType.SINGLE_TABLE, all classes in the hierarchy are mapped to a single table in the database. This table has a discriminator column containing a value that identifies the subclass to which the instance represented by the row belongs.
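For illustration only (this snippet is not from the answer and uses placeholder class names), declaring the JOINED strategy on a parent entity looks roughly like this:

    // Hypothetical hierarchy, used only to illustrate @Inheritance; not the asker's classes.
    @Entity
    @Inheritance(strategy = InheritanceType.JOINED)  // each class in the hierarchy maps to its own table
    public class Vehicle {
        @Id
        @GeneratedValue
        public Long id;
    }

    @Entity
    public class Truck extends Vehicle {  // stored in its own table, joined to the parent table by primary key
        public Integer payloadKg;
    }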

In OpenJPA, according to the docs (http://openjpa.apache.org/builds/1.0.1/apache-openjpa-1.0.1/docs/manual/jpa_overview_mapping_field.html), section 8.4, the foreign key column in a one-to-one mapping:
Defaults to the relation field name, plus an underscore, plus the name
of the referenced primary key column.
And the JPA JoinColumn API documentation seems to concur with this (http://docs.oracle.com/javaee/6/api/javax/persistence/JoinColumn.html).
I believe this means that in a one-to-one mapping, the default join column name is the relation field name plus an underscore plus the referenced primary key column name, which works out to location_id in your case. If so, the location_id column you are defining via the locationId field in your Appliance class conflicts with the location_id join column generated by default for the location relationship.
You should be able to correct this by using the @Column(name="someColumnName") annotation, or the @JoinColumn annotation on your @OneToOne relationship, to force one of the column names to be something unique.
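For example, a minimal sketch of that fix (the replacement column name below is purely illustrative, not from the question):

    public class Appliance {
        @Column(name = "location_fk")  // illustrative name; anything other than "location_id" avoids the clash
        public Integer locationId;

        @Valid
        @OneToOne
        @JoinColumn(name = "location_id", referencedColumnName = "id")
        public Location location;
    }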

Ok gang, I figured it out.
Here's what the new code looks like, followed by a brief explanation...
Parent Class:
public class Appliance {
    public Integer locationId;

    @Valid
    @OneToOne(cascade = CascadeType.ALL)
    @JoinColumn(name="location_id", referencedColumnName="id")
    public Location location;
}
Child Class:
public class Location {
    public Integer id;
    public String name;
}
The first part of the puzzle was the explicit addition of cascade = CascadeType.ALL in the parent class. This resolved the initial "multiple assignments to same column" error by allowing the child object to be persisted.
However, I encountered an issue during update operations, due to some sort of conflict between Ebean and JPA whereby a save() operation is triggered on nested child objects rather than a cascading update(). I got around this by issuing an explicit update on the child object and then setting it to null before the parent update ran. It's something of a hack, but it seems like all these persistence frameworks solve one set of problems while causing others; I guess that's why I've been old school and always rolled my own persistence code until now.
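To make that workaround concrete, here is a rough sketch of what the update path could look like, assuming Play's Ebean Model API; the method is hypothetical and not the asker's actual code:

    // Illustrative only: update the child explicitly, then detach it so the parent
    // update does not trigger a save() on the nested Location.
    public static void updateAppliance(Appliance appliance) {
        if (appliance.location != null) {
            appliance.location.update();  // persist changes to the child first
            appliance.location = null;    // prevent Ebean from re-saving the child with the parent
        }
        appliance.update();               // now update the parent row
    }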

Related

Hibernate: custom entity name

Let's say I have an entity with a very long name:
@Entity
public class SupercalifragilisticexpialidociousPanda
{
    ...
}
Using Hibernate to persist it to a Postgres DB works flawlessly. Oracle, however, doesn't allow for table/column/index names longer than 30 characters.
That should be easy to fix, since I can just specify the table name manually, like this:
@Entity
@Table(name="SuperPanda")
public class SupercalifragilisticexpialidociousPanda
{
    ...
}
Now everything is back to working perfectly... except that any references I have to the entity in other tables still use the long class name ("SupercalifragilisticexpialidociousPanda") instead of the short table name ("SuperPanda").
For instance, if the entity has an embedded ElementCollection, like this:
@ElementCollection
private Set<String> nicknames;
Hibernate will try to create a table like this: create table SupercalifragilisticexpialidociousPanda_nicknames, which will naturally cause an ORA-00972: identifier is too long error.
The same thing also happens for @OneToOne associations, where the lookup column would be called something like supercalifragilisticexpialidociousPanda_uuid, which also fails with Oracle.
Now, one option would be to add a @CollectionTable(name="SuperPanda_nicknames") and @Column(name="...") annotation manually to every field that references this entity, but that's a lot of work and really error-prone.
Is there a way to just tell Hibernate once to use the short name everywhere a reference to the entity is required?
I also tried setting the entity name, like this:
@Entity(name="SuperPanda")
@Table(name="SuperPanda")
public class SupercalifragilisticexpialidociousPanda
{
    ...
}
... but it doesn't fix the issue.
What does one normally do in such a case?
Usually people name every database object (table, column, index) themselves. Letting Hibernate decide for you can lead to problems in the future when you decide to refactor something.
Every reference can be configured one way or another to use the names you choose.
Ask a specific question if you cannot figure out how to do a particular one yourself.
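As an illustration of naming those references explicitly (this sketch is mine, not the answerer's, and the table/column names are only suggestions):

    @Entity
    @Table(name = "SuperPanda")
    public class SupercalifragilisticexpialidociousPanda
    {
        @Id
        @GeneratedValue
        private Long uuid;

        // Explicit collection table and join column, so Hibernate does not derive
        // names from the long class name.
        @ElementCollection
        @CollectionTable(name = "SuperPanda_nicknames",
                         joinColumns = @JoinColumn(name = "superPanda_uuid"))
        private Set<String> nicknames;
    }

    // On the owning side of an association in another entity, name the foreign key column explicitly as well:
    @OneToOne
    @JoinColumn(name = "superPanda_uuid")
    private SupercalifragilisticexpialidociousPanda panda;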

Java Map<Entity1, Entity2> mapping to the database using JPA

In a project I'm working on, there seems to be a problem mapping a (Hash)Map to the database using JPA.
The Map (named 'racers', within the entity 'Race') consists of key-value pairs <User, Racestats>, both custom entities in JEE.
The map is annotated with "@ElementCollection".
When trying to persist the map to the database, an error is given: "Data truncation: Data too long for column 'RACERS'".
When checking the database, we see a table 'Race_RACERS' is created, which consists of three columns: two bigints (representing the id of the Race object and the User object) and one varchar, which contains the Racestats object.
Of course, this last column should also contain references to the Racestats, instead of embedding these Racestats objects.
We have already tried fixing the issue using several other annotations, but none of them seem to work.
Could anyone please provide us with the correct syntax to persist our objects.
Keys will obviously be unique in each map, but within different Race objects, the Maps could contain the same key.
No 2 values will ever be the same. Even within different Race objects, maps will never contain the same value.
I don't have all the information about your use cases, but it looks to me like it would be simpler to put the User reference inside the Racestats entity, e.g.:
@Entity
public class Race {
    @OneToMany(mappedBy="race")
    Set<Racestats> racestats;
}

@Entity
public class Racestats {
    @ManyToOne
    User user;

    @ManyToOne
    Race race;

    // Other race stats fields
    ...
}
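To get back the map-style lookup (racers.get(user)) with this structure, a query along these lines should work; this sketch is mine and not part of the original answer:

    // Hypothetical lookup: the Racestats entry for a given race and user.
    TypedQuery<Racestats> query = em.createQuery(
        "SELECT rs FROM Racestats rs WHERE rs.race = :race AND rs.user = :user",
        Racestats.class);
    query.setParameter("race", race);
    query.setParameter("user", user);
    Racestats stats = query.getSingleResult();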

Does it make sense to create entity table with only ID attribute?

Does it make sense to create a single entity when it should only contain the #Id value as a String?
@Entity
class CountryCode {
    @Id
    String letterCode; // GBR, FRA, etc.
}

@Entity
class Payment {
    CountryCode code;
    // or directly, without a further table: String countryCode;
}
Or would you just use the letterCode as a string value instead of creating the CountryCode entity?
It should later be possible, for example, to fetch all payments with a specific country code. This might be possible with both solutions, but which one is better, and why?
Yes, you can if you are using the entity as a lookup table. In your example, you may want to add a description column containing the full name (France, Great Britain, etc.) for the letter code, a column indicating whether it is active, and maybe columns for when the row was inserted and when it was last changed.
It makes sense to create such a table to ensure data consistency, that is, so that no Payment is created with a non-existent CountryCode. Having a separate entity (and thus table) together with a foreign key on Payment lets the database enforce that consistency.
Another possible approach is a check constraint on the code field, but this is error-prone if codes are added or deleted, and/or there is more than one column of this type.
Adding the letterCode to the Payment class as a String attribute (or an enum, to prevent typos) will improve fetch performance, as you do not need a join to your CountryCode table.
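For reference, a minimal sketch of the lookup-table variant plus the "payments by country code" query; the relationship annotation and the query are my additions, not from the answers above:

    @Entity
    class Payment {
        @Id
        @GeneratedValue
        Long id;

        @ManyToOne(optional = false)        // foreign key to the CountryCode lookup table
        @JoinColumn(name = "country_code")
        CountryCode code;
    }

    // Fetch all payments for a specific country code:
    List<Payment> payments = em.createQuery(
            "SELECT p FROM Payment p WHERE p.code.letterCode = :code", Payment.class)
        .setParameter("code", "GBR")
        .getResultList();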

Jdo (datanucleus) integer_idx column and databinding

I've been trying to do a simple one-to-many object binding in DataNucleus JDO. It's just two classes (I stripped a few simple fields):
@PersistenceCapable(table="ORDER", schema="mgr")
public class Order {
    @PrimaryKey(column="id")
    @Persistent(valueStrategy=IdGeneratorStrategy.NATIVE, column="id")
    private Long id;

    @Persistent(defaultFetchGroup="false", column="customer_id")
    @Element(column="customer_id")
    private Customer customer;
}
And a class Customer having a list of orders
@PersistenceCapable(table="customer", schema="mgr", identityType=IdentityType.DATASTORE)
@DatastoreIdentity(strategy=IdGeneratorStrategy.NATIVE)
public class Customer {
    @PrimaryKey
    @Persistent(valueStrategy=IdGeneratorStrategy.NATIVE, column="id")
    private Long id;

    @Persistent(mappedBy="customer")
    private List<Order> orders;
}
The database table setup is extremely simple (a table for customers and a table for orders with a foreign key (customer_id) referencing customer). Yet, when I try to insert some orders for a customer, I receive an error:
javax.jdo.JDODataStoreException: Insert of object "test.Order#17dd585" using statement
"INSERT INTO ORDER (USER_COMMENT,ORDER_DATE,STATUS,CUSTOMER_ID,ORDERS_INTEGER_IDX) VALUES (?,?,?,?,?)"
failed : Unknown column 'ORDERS_INTEGER_IDX' in 'field list'
Somehow DataNucleus is assuming there is a column ORDERS_INTEGER_IDX (no such column exists in the database). The only idea that came to my mind is from http://www.datanucleus.org/products/datanucleus/jdo/metadata_xml.html:
In some situations DataNucleus will add a special datastore column to
a join table so that collections can allow the storage of duplicate
elements. This extension allows the specification of the column name
to be used. This should be specified within the field at the
collection end of the relationship. JDO2 doesnt allow a standard place
for such a specification and so is an extension tag.
So cool, 'in some situations'. I have no idea how to make my situation not be a subset of 'some situations', and I have no idea how to get this working. Perhaps someone has already met the "INTEGER_IDX" problem? Or (it is also highly possible) I'm not binding the data correctly :/
So you create the schema yourself, your schema is inconsistent with the metadata, and you run persistence without validating the metadata against the schema, so an exception results. DataNucleus provides SchemaTool to create the schema from, or validate it against, your metadata, which would let you detect this problem up front.
You're using an indexed list, so it needs an index column for each element (how else is it to know what position an element is in?). How can it assume there is an index? Well, there's a thing called the JDO spec (publicly available), which defines indexed lists. If you don't want element positions stored, then don't use a List (the java.util class for retaining the position of elements); I'd suggest using a Set, since that doesn't need position info (hence no index column).
You also have the class marked as datastore identity and then also a primary key field. That is a contradiction: you have one or the other. The docs define all of that, as well as how to map a 1-N List relation ("JDO API" -> "Mapping" -> "Fields/Properties" -> "1-N Relations" -> "Lists" or "Sets").
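Putting both suggestions together, a minimal sketch of the Customer side (my interpretation of the answer, not code taken from it) could look like this:

    // Datastore identity only (no @PrimaryKey field) and a Set instead of a List,
    // so DataNucleus no longer needs an ORDERS_INTEGER_IDX ordering column.
    @PersistenceCapable(table="customer", schema="mgr", identityType=IdentityType.DATASTORE)
    @DatastoreIdentity(strategy=IdGeneratorStrategy.NATIVE, column="id")
    public class Customer {
        @Persistent(mappedBy="customer")
        private Set<Order> orders;
    }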

Chicken or egg type hibernate mapping problem

I've got a class with a one-to-one relation. Basically, in "class A" I have a one-to-one relation to "class B". This relation uses a primary key join column. Now my issue is as follows: if I try to create an instance of A, I can't save it because I haven't added an instance of B to it yet. But I can't create an instance of B because I need the id of A first.
An easy solution would be to make the ID in B an automatically generated one, so I could then create an instance of B before creating an instance of A. However, I'm sure there is a better way of doing this? :)
I can see in the database that Hibernate created an additional index on the id column of A, which I'm guessing is a foreign key constraint. And I can see in the documentation that the XML version of the one-to-one mapping has an attribute to specify whether the relation is constrained or not; however, the @OneToOne annotation doesn't seem to have this option? :S
It seems you have two relationships between the A and B tables (you have: A has a_id, b_id; B has b_id, a_id). To model one-to-one you need only one relationship. Determine which table is 'main' and then drop the column from the 'secondary' table (so that: A has a_id, b_id; B has b_id). After that, Hibernate (and any other schema client) will be able to insert into B first, then into A with a reference to the B table.
Take the egg and chicken example. There are multiple possible relations between eggs and chickens (one chicken can lay many eggs; one egg can produce one chicken). For the one-to-one relationship egg-produces-chicken, it is reasonable to have a parent_egg_id column in the chicken table, so an egg can be created first and then a chicken with a reference to that egg.
Hibernate mapping could look like the following:
In Chicken class:
@OneToOne
@JoinColumn(name = "parent_egg_id")
public Egg getParentEgg() {
    return parentEgg;
}
In Egg class:
@OneToOne(mappedBy = "parentEgg")
public Chicken getChildChicken() {
    return childChicken;
}
Update:
The same thing as constrained in the XML mapping can be done with the optional property on the @OneToOne annotation. It defaults to true, so the relationship is nullable by default.
/**
 * (Optional) Whether the association is optional. If set
 * to false then a non-null relationship must always exist.
 */
boolean optional() default true;
According to your comments, rows in A are inserted first. I would consider having the dependency from B to A, not from A to B. In that case, creating an item in A and then in B requires only two insert statements (with the relation from A to B, an additional update of A is required).
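A minimal sketch of that suggested direction (my illustration, with placeholder field names): B owns the foreign key, so an A row can be inserted first and a B row referencing it second, with no extra update:

    @Entity
    public class A {
        @Id
        @GeneratedValue
        private Long id;

        @OneToOne(mappedBy = "a")  // inverse side; no foreign key column in table A
        private B b;
    }

    @Entity
    public class B {
        @Id
        @GeneratedValue
        private Long id;

        @OneToOne(optional = false)  // owning side; foreign key column a_id lives in table B
        @JoinColumn(name = "a_id")
        private A a;
    }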
