I've configured Hibernate to use a PostgreSQL sequence (via annotations) to generate values for the primary key id column, as follows:
@Id
@SequenceGenerator(name="pk_sequence", sequenceName="entity_id_seq")
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="pk_sequence")
@Column(name="id", unique=true, nullable=false)
public int getId() {
    return this.id;
}
What I see with this configuration is that Hibernate is already assigning id values > 3000 on persisting, whereas querying the sequence shows the following:
database=# select last_value from entity_id_seq;
last_value
------------
69
(1 row)
Questions:
Is there anything wrong here or not?
Should Hibernate stay in sync with the sequence?
If not, where does it store the last generated id?
Thank you.
I had the same problem. It is related to Hibernate's id allocation strategies. When you choose GenerationType.SEQUENCE, Hibernate uses a hi/lo strategy, which allocates IDs in blocks of 50 by default (50 is the JPA default for allocationSize). You can explicitly set the allocationSize value like this:
@Id
@SequenceGenerator(name="pk_sequence", sequenceName="entity_id_seq", allocationSize=1)
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="pk_sequence")
@Column(name="id", unique=true, nullable=false)
public int getId() {
    return this.id;
}
Though, I've also heard opinions that using a hi/lo strategy with allocationSize=1 is not good practice. Some people recommend using GenerationType.AUTO instead when you have to deal with database-managed sequences.
Update: I did end up going with allocationSize=1, and things seem to work as I expect now. My application is such that I don't really need blocks of IDs anyway, so YMMV.
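To see why the in-memory ids can race so far ahead of the sequence, the block allocation can be sketched in plain Java. This is a simplified model of a hi/lo-style optimizer, not Hibernate's actual implementation, and every class and method name below is made up for illustration:

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for the database sequence (entity_id_seq in the question).
class DatabaseSequence {
    private final AtomicLong lastValue = new AtomicLong(0);
    long nextval() { return lastValue.incrementAndGet(); }
    long lastValue() { return lastValue.get(); }
}

// Simplified hi/lo-style generator: one sequence round-trip per block of ids.
class HiLoGenerator {
    private final DatabaseSequence seq;
    private final int allocationSize;
    private long next = 0; // next id to hand out
    private long hi = 0;   // exclusive upper bound of the current block

    HiLoGenerator(DatabaseSequence seq, int allocationSize) {
        this.seq = seq;
        this.allocationSize = allocationSize;
    }

    long nextId() {
        if (next >= hi) {                             // current block exhausted
            long hiValue = seq.nextval();             // one sequence call per block
            next = (hiValue - 1) * allocationSize + 1;
            hi = next + allocationSize;
        }
        return next++;
    }
}

public class HiLoDemo {
    public static void main(String[] args) {
        DatabaseSequence seq = new DatabaseSequence();
        HiLoGenerator gen = new HiLoGenerator(seq, 50);
        long id = 0;
        // Draw ids until the sequence reaches 69, as in the question.
        while (seq.lastValue() < 69) {
            id = gen.nextId();
        }
        // prints: last_value = 69, last id handed out = 3401
        System.out.println("last_value = " + seq.lastValue()
                + ", last id handed out = " + id);
    }
}
```

With the sequence at last_value = 69 and blocks of 50, the generator is handing out ids around (69 - 1) * 50 + 1 = 3401, which matches the "ids > 3000" symptom in the question. With allocationSize=1, every id costs one sequence call and the two stay in step.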
DO NOT USE GenerationType.SEQUENCE for Postgres sequences!
It's completely counter-intuitive, but the Hibernate folks messed this one up. You must use GenerationType.AUTO, or Hibernate will demolish your sequences if you have to restart/rebuild your DB. It's almost criminally negligent that they let this code into a production build, but the Hibernate team is rather famous for their bull-headed stances on flatly-wrong positions (check out their position on LEFT JOINs, for instance).
First, you have to determine which version of Hibernate you are using. In terms of hibernate-core versions, 3.2 onwards introduced more consistent support for id generators, especially with regard to those defined via annotations. See http://in.relation.to/Bloggers/New323HibernateIdentifierGenerators for a discussion.
Next, 3.6 introduced a setting ('hibernate.id.new_generator_mappings') which makes the generators discussed in that blog the default way JPA annotations are handled. The setting is false by default because Hibernate has to maintain backwards compatibility with older versions. If you want the new behavior (which is recommended), simply set that setting to true.
How GenerationType is handled depends on which version you are using and whether you have 'hibernate.id.new_generator_mappings' set to true. I will assume you are using 3.6+ (since anything older is, well, old) and do have 'hibernate.id.new_generator_mappings' set to true (since that is the recommendation for new apps):
GenerationType.AUTO -> treated as GenerationType.SEQUENCE
GenerationType.SEQUENCE -> maps to the org.hibernate.id.enhanced.SequenceStyleGenerator class discussed in the blog
GenerationType.TABLE -> maps to the org.hibernate.id.enhanced.TableGenerator class discussed in the blog
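For reference, opting in via persistence.xml looks something like this (the persistence-unit name is a placeholder; only the property line matters):

```xml
<persistence-unit name="my-unit">
  <properties>
    <!-- Use the new (recommended) identifier generator mappings -->
    <property name="hibernate.id.new_generator_mappings" value="true"/>
  </properties>
</persistence-unit>
```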
In Postgres I would do this:
@Id
@SequenceGenerator(name="pk_sequence", sequenceName="\"entity_id_seq\"")
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="pk_sequence")
@Column(name="\"id\"", unique=true)
private int id;
(Note that the generator attribute refers to the @SequenceGenerator name on the Java side and must match it exactly, so it is not escaped; only database identifiers like the sequence and column names need the quotes.)
Mostly with uppercase or mixed-case names, Hibernate needs to be passed escaped quotes in order for Postgres to find the table, column, or sequence names.
Related
I have a mapped entity like this:
@OneToMany(targetEntity = Child.class)
@JoinColumn(name = "PARENT_ID", referencedColumnName = "PARENT_ID")
@OrderBy("orderNumber")
private List<Child> children;
I would like to specify NULLS LAST in the @OrderBy annotation of my mapped collection.
I am using Oracle database, which considers NULL values larger than any non-NULL values.
The problem is in my integration test, which uses h2 database and it seems the NULL values are evaluated differently.
So far, I came up with a hack using the nvl2() function inside @OrderBy, like this:
@OrderBy("nvl2(orderNumber, orderNumber, 100000000)")
This hack works, but it seems nasty, and I don't like the idea of having this code there just because of the integration tests. As I mentioned above, Oracle returns the rows in the correct order by default, so the basic @OrderBy("orderNumber") without handling nulls works fine. On the other hand, I want it tested in case the app ever uses a different database.
Is there any way how to solve this issue in a better way?
To enable Oracle compatibility mode in H2, you need to append ;MODE=Oracle;DEFAULT_NULL_ORDERING=HIGH to the JDBC URL, as suggested in the documentation:
https://h2database.com/html/features.html#compatibility
The DEFAULT_NULL_ORDERING setting changes the default ordering of NULL values (when neither NULLS FIRST nor NULLS LAST is used). There are four possible options; LOW is the default.
This setting can also be used on its own, without the compatibility mode, if you don't need it.
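For example, an integration-test datasource URL might look like this (the property key is a Spring Boot convention and just illustrative; the URL suffix is what matters, and DEFAULT_NULL_ORDERING is available in recent H2 2.x releases):

```properties
# In-memory H2 in Oracle compatibility mode, with NULLs sorting high as in Oracle
spring.datasource.url=jdbc:h2:mem:testdb;MODE=Oracle;DEFAULT_NULL_ORDERING=HIGH
```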
I am using Hibernate Envers to audit my entity. My entity has the following fields:
public class Settings {
    @Id
    @Column(length = 80)
    private String key;

    @NotNull
    @Column(length = 1200)
    private String value;

    @Version
    @Column(columnDefinition = "numeric")
    private Integer version;
}
It contains key-value pairs. Some of the fields in this table are updated automatically. The question is: is it possible to insert, or skip inserting, a record into the _AUDIT table depending on the value of the 'key' property?
Example:
There are records in my table:
|KEY |VALUE |VERSION
_________________________________
|laskCheckDate|12-01-2017|0
|numberOfsmth |3 |0
I want to insert record to _AUDIT table if numberOfsmth is updated/deleted, but NOT insert if laskCheckDate is updated.
What you would need to do is extend the EnversPostUpdateEventListenerImpl event listener class and add your logic to check for the necessary entity type and values and decide whether to call into the super-class to audit the update or not.
Unfortunately, the above approach is a bit intrusive for the novice user, and I would certainly not recommend doing this if you're not thoroughly familiar with Hibernate ORM and Envers.
There are some thoughts on conditional auditing in HHH-11326 which is tentatively planned for Envers 6.0 where you can influence auditing based on hooks you tie into your entities through annotations.
Should you decide to move forward and extend the listeners in 5.x, just be mindful that you should always allow the INSERT of your entity to be audited. This becomes extremely important if you're using the ValidityAuditStrategy, as the UPDATE expects an INSERT revision type to exist in the table, or else the strategy fails an assertion.
If all you want to control are UPDATEs, then this should not be a problem for you regardless of which strategy you leverage.
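A rough, untested sketch of the listener approach against the Hibernate 5.x Envers APIs (the Settings.getKey() accessor is assumed to exist on your entity, and you would still need to register this listener in place of the default one through Hibernate's EventListenerRegistry):

```java
import org.hibernate.envers.boot.internal.EnversService;
import org.hibernate.envers.event.spi.EnversPostUpdateEventListenerImpl;
import org.hibernate.event.spi.PostUpdateEvent;

public class ConditionalEnversPostUpdateListener extends EnversPostUpdateEventListenerImpl {

    public ConditionalEnversPostUpdateListener(EnversService enversService) {
        super(enversService);
    }

    @Override
    public void onPostUpdate(PostUpdateEvent event) {
        Object entity = event.getEntity();
        // Skip auditing updates of rows we don't care about.
        if (entity instanceof Settings
                && "laskCheckDate".equals(((Settings) entity).getKey())) {
            return;
        }
        super.onPostUpdate(event); // audit everything else as usual
    }
}
```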
I have a table with primary key generation of TO_NUMBER(TO_CHAR(SYSDATE@!,'YYDDD')||LPAD(TO_CHAR(SEQ_REFID.NEXTVAL),11,'0'))
This has been set as the default value for the column. When I insert through JDBC, I can leave the column NULL, so the PK gets generated/defaulted, and I retrieve the key using the getGeneratedKeys() method.
I require similar behavior using JPA. I'm a beginner in JPA. Please help.
Database used is Oracle 11g.
EDIT: The above-mentioned value is not required to be a table default. It can be applied from the JPA layer if that is possible.
Other entities depend on this entity for the PK. The PK must be passed on to all child tables.
@Entity
public class Entity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
}
Can also be
GenerationType.AUTO
GenerationType.SEQUENCE
GenerationType.TABLE
This reference describes the various strategies
Add the following annotation to the id field:
@Column(insertable = false)
This way, JPA will ignore the field when inserting new values and the database automatically generates the desired key.
However, you shouldn't use such a primary key. It effectively contains two different kinds of data in one column, which would better be split into two separate columns.
Make a simple id column with an ascending integer (and absolutely no meaning other than "this is entry nr. x"). Then add an additional column with the current timestamp. This timestamp can have a default value and be protected against updates.
This is how it's supposed to be done, and it not only simplifies your queries but also improves performance. You can query the table for entries of a specific hour, week, and so on, or generate detailed statistics.
Don't try to put multiple information into one column. There's no advantage.
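Such a split mapping might look like this (a sketch only; the sequence name SEQ_REFID is taken from the question, while the column and field names are made up):

```java
@Id
@SequenceGenerator(name = "seq_refid", sequenceName = "SEQ_REFID", allocationSize = 1)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "seq_refid")
private Long id;

// Creation time lives in its own column: set once, never updated.
@Column(name = "CREATED_AT", updatable = false)
private java.sql.Timestamp createdAt = new java.sql.Timestamp(System.currentTimeMillis());
```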
Where did you get the idea that this default PK was a good idea?
If you want the creation time of the row, add a column to your table. Don't embed it in the PK like this.
I've bumped into this example in the JPA 2.0 FR Specification, section 11.1.37 (OneToOne Annotation), page 403:
@OneToOne(optional=false)
@JoinColumn(name="CUSTREC_ID", unique=true, nullable=false, updatable=false)
public CustomerRecord getCustomerRecord() { return customerRecord; }
Is there any reason that I should put @OneToOne(optional=false) and, at the same time, @JoinColumn(..., nullable=false)?
Aren't these two declarations the same? Isn't one of them redundant?
Are both of them used in DDL schema generation?
Formally optional=false is a runtime instruction to the JPA implementation, and nullable=false is an instruction to the DDL generator. So they are not strictly redundant.
The difference can become significant when entity inheritance is involved. If a particular mapping exists only on a subclass and you use the single-table-per-hierarchy strategy, then the OneToOne mapping may be optional=false on the particular subclass that contains the mapping. However, the actual join column cannot be made not-null, since other subclasses that share the table could then not be inserted!
In practice different versions of different providers may or may not interpret either one at either time, caveat emptor.
What's the difference between @Basic(optional = false) and @Column(nullable = false) in JPA persistence?
Gordon Yorke (EclipseLink Architecture Committee Member, TopLink Core Technical Lead, JPA 2.0 Expert Group Member) wrote a good answer on this topic so instead of paraphrasing him, I'll quote his answer:
The difference between optional and nullable is the scope at which they are evaluated. The definition of 'optional' talks about property and field values and suggests that this feature should be evaluated within the runtime. 'nullable' is only in reference to database columns.

If an implementation chooses to implement optional, then those properties should be evaluated in memory by the Persistence Provider and an exception raised before SQL is sent to the database; otherwise, when using 'updatable=false', 'optional' violations would never be reported.
So I tried the @Basic(optional=false) annotation using JPA 2.1 (EclipseLink), and it turns out the annotation is ignored in actual usage (at least for a String field), e.g. in entityManager.persist calls.
So I went to the specification and read up about it.
Here is what the spec has to say:
http://download.oracle.com/otndocs/jcp/persistence-2.0-fr-oth-JSpec/
Basic(optional): Whether the value of the field or property may be null. This is a hint and is disregarded for primitive types; it may be used in schema generation.
So I think this sentence explains the real use case for Basic(optional): it is used in schema generation, that is, when you generate CREATE TABLE SQL from Java entity classes (something Hibernate can do, for example).