Where to set hibernate.id.new_generator_mappings property? - java

I've wasted too much time on this ...
I'm using oracle and I have a sequence (MY_TABLE_SEQ) defined which increments by 1.
In my Pojo I have:
@SequenceGenerator(name = "MY_SEQ", sequenceName="MY_TABLE_SEQ", allocationSize=50)
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="MY_SEQ")
This gives me a unique constraint issue. From my understanding I need to set the following property:
hibernate.id.new_generator_mappings=true
I've tried setting it in my hibernate.cfg.xml file, but it does not seem to make any difference. I've come across several posts saying to place it in persistence.xml, but this is a standalone app with no web container.
Setting allocationSize=1 works, but of course it hits the DB on each insert to get the next sequence value. Setting the above property is supposed to resolve that.

I haven't tried Oracle, but I had similar issues to yours inserting into an AS400 DB2 table.
I had to remove the identity flag on the id column of the DB2 table and instead used a custom JPA/Hibernate sequence generator. This is set up via annotations on the @Id entity field, as you've done.
DB2 had been giving me errors about a missing SYSIBM.SYSSEQUENCES table, so evidently Hibernate (version 5.2) doesn't recognize the native DB2 identity designation. A custom sequence generator was an effective workaround.
On the @Id entity field:
@GeneratedValue(generator = "table", strategy=GenerationType.TABLE)
@TableGenerator(name = "table", allocationSize = 20)
This example allocates a pool of 20 sequence numbers each time it queries the table.
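(The generator can also name its backing table and columns explicitly; a hedged sketch of the same @Id field, assuming the "hibernate_sequences" table and the column names created in the next step:)
@Id
@GeneratedValue(generator = "table", strategy = GenerationType.TABLE)
@TableGenerator(
    name = "table",
    table = "hibernate_sequences",              // table created in the next step
    pkColumnName = "sequence_name",             // column holding the generator name
    valueColumnName = "sequence_next_hi_value", // column holding the next hi value
    allocationSize = 20)
private Long id;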
Next, create the table Hibernate needs, with column names that match what the Hibernate 5 API expects. The names must be in lower case, so put quotes around them to work around the automatic upper-casing that DB2 defaults to; the API will error out if these names are in caps.
Table:
"hibernate_sequences"
Example of the two columns used:
"sequence_next_hi_value" (integer, not nullable, 0 default)
"sequence_name" (character, sample length 20, not nullable, natural default)
In the configuration code for the dialect used (e.g., in Spring Boot, programmatically), add these properties:
properties.put("hibernate.supportsSequences","false");
properties.put("hibernate.id.new_generator_mappings","false");
and in the *.properties file:
spring.jpa.properties.hibernate.dialect.supportsSequences=false
spring.jpa.properties.hibernate.id.new_generator_mappings=false
Database systems can be case-sensitive about schema/table/field names. Also watch for typos everywhere, including property names.
Be sure your pojo/entity only contains private fields that will be mapped to the table. Static finals such as serialVersionUID are ok.
I will be doing something similar for SQL Server soon.
For MySQL, I had no issues using an identity column defined on the table's ID field to insert records, so I didn't have to make all these changes. It's a simpler setup, since Hibernate recognizes the identity designation in MySQL.
@GeneratedValue(strategy=GenerationType.IDENTITY)
was all that was needed in the pojo.
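(For completeness, a minimal sketch of such an id field, assuming the MySQL column is AUTO_INCREMENT:)
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id; // maps to the AUTO_INCREMENT id column in MySQL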
I'm a newbie at all this, so always looking for better ways ... but this worked for now.

I set the property like this in the hibernate.cfg.xml file and it works!
<property name="hibernate.jpa.compliance.global_id_generators" value="true"/>
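In a standalone app, configuration properties like this can also be set programmatically on the Hibernate Configuration before the SessionFactory is built. A minimal sketch, assuming Hibernate 5, a hibernate.cfg.xml on the classpath, and the property from the question (the class name is just illustrative):
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionFactoryHolder {
    public static SessionFactory build() {
        Configuration cfg = new Configuration().configure(); // loads hibernate.cfg.xml
        cfg.setProperty("hibernate.id.new_generator_mappings", "true"); // same effect as the XML property
        return cfg.buildSessionFactory();
    }
}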

Related

Hibernate throwing validation exception "wrong column type encountered in column" even when my DDL script and JPA entity are in sync

I am starting my spring container in validate mode
autoddl=validate
I am getting a validation exception like this
Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Schema-validation: wrong column type encountered in column [amount] in table [Balance]; found [numeric (Types#NUMERIC)], but expecting [int8 (Types#BIGINT)]
and my DDL script goes like this
CREATE TABLE Balance(stratr VARCHAR(25), histFromDate TIMESTAMP WITHOUT TIME ZONE, amount numeric(11, 0))
and my attribute in JPA entity goes like this
@Column(name="amount", precision=11, scale=0) // have specified precision and scale
private Long amount ;
where I have used import javax.persistence.Column. Since I have annotated the exact precision and scale, shouldn't Hibernate validate using the info that I have provided through the column annotation? What could I have missed?
I cannot do the following
@Column(columnDefinition = "NUMERIC(11,0)")
private Long amount;
because I don't know the data store of this JPA entity.
I also tried generating the script with the following properties:
<prop key="javax.persistence.schema-generation.scripts.action">drop-and-create</prop>
<prop key="javax.persistence.schema-generation.scripts.create-target">./l/create.sql</prop>
<prop key="javax.persistence.schema-generation.scripts.drop-target">./l/drop.sql</prop>
This also generates int8 and not numeric(11,0). What can be done to solve this?
It's really quite difficult to grasp what you're trying to accomplish, but if I understood correctly:
you want to keep your application portable by not fixing the column definition on the entity level to be NUMERIC(11,0), which would make it Postgres-specific
at the same time, you want your column to use NUMERIC(11,0) for Postgres and not INT8 that Hibernate would normally use for a Long in Postgres (and is hoping to find in your schema upon validation)
In short, you want a per-database customization that is not reflected in your entity mapping. The only way to accomplish that is to customize the dialect that Hibernate is using for your version of Postgres. What you need to do is:
determine which dialect version is being selected for your Postgres database (it will be one of the following: PostgresPlusDialect, PostgreSQL81Dialect, PostgreSQL82Dialect, PostgreSQL91Dialect, PostgreSQL92Dialect, PostgreSQL93Dialect, PostgreSQL94Dialect, PostgreSQL95Dialect, PostgreSQL9Dialect)
extend that class, adding the following definition (shown here assuming PostgreSQL95Dialect was the dialect selected; substitute whichever one applies):
import java.sql.Types;
import org.hibernate.dialect.PostgreSQL95Dialect;
public class MyCustomPostgresDialect extends PostgreSQL95Dialect {
    public MyCustomPostgresDialect() {
        super();
        registerColumnType(Types.BIGINT, "NUMERIC(11, 0)");
    }
}
(If you want to be able to control the precision and scale using @Column(precision = ..., scale = ...), use registerColumnType(Types.BIGINT, "NUMERIC($p, $s)") instead)
add the hibernate.dialect property to persistence.xml, pointing to the fully qualified class name of your custom dialect
Note that this will, of course, affect all Long properties in your data model, not just the specific field in question.
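(If the dialect is configured programmatically instead of via persistence.xml, the same property applies; a small sketch, where com.example.MyCustomPostgresDialect is a hypothetical fully qualified name for the custom class above:)
properties.put("hibernate.dialect", "com.example.MyCustomPostgresDialect");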
The only reason I can think of is that in your entity the amount type is Long, but in your JPA creation script the DDL declares amount numeric(11, 0), where the second parameter is the scale (number of decimal digits).
Java tries to write the data as a Long (roughly BIGINT on the database side), but the column is declared as numeric(11, 0), so the two types do not line up.
You should be able to resolve it by either changing your Java code so the entity's amount is an integer type, or changing the DDL to use a non-zero scale, e.g. NUMERIC(11,5).
However, the best bet would be to use a DECIMAL type for any non-integer value.
http://www.h2database.com/html/datatypes.html#decimal_type
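(As an illustration, a hedged sketch of mapping a decimal column explicitly with a java.math.BigDecimal field and javax.persistence.Column; the scale of 2 is just an example:)
@Column(name = "amount", precision = 11, scale = 2) // schema generation emits NUMERIC/DECIMAL(11,2) instead of BIGINT
private BigDecimal amount;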

Alter table from jpa annotations in Java

I have a definition for a column like that:
@Column
private String my_column;
And by default in Postgres database type for this field is character varying(255).
Now, I want to change the data type for this column.
How can I do this without going into the database and running an ALTER TABLE myself?
I tried this:
@Lob
@Column
private String my_column;
And
@Column(columnDefinition = "TEXT")
private String my_column;
But, without results.
The thing is that JPA does not handle schema changes.
JPA maps your existing DB to Java classes; it does not manage the database itself.
As for schema change management:
A common practice is to have a schema migration tool handle that; for example, Flyway and Liquibase are popular solutions.
There you can write an SQL script to change the DB column type to "text",
and it will apply those changes when you run the DB migration process.
Or you can always just access your DB and modify it manually.
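(As an illustration only, a minimal sketch of running such a migration with Flyway's Java API, assuming Flyway 6+ and a migration file you would write yourself, e.g. V2__change_my_column_to_text.sql containing "ALTER TABLE my_table ALTER COLUMN my_column TYPE TEXT;" for Postgres; the file name, table, column, and connection details are placeholders:)
import org.flywaydb.core.Flyway;

public class RunMigration {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost:5432/mydb", "user", "password") // placeholder connection details
                .load();
        flyway.migrate(); // applies pending SQL migrations, including the ALTER TABLE script
    }
}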

More than one table found in namespace (, ) - SchemaExtractionException

I have been facing this weird exception while trying to persist some values into a table using Hibernate in a Java application. However, this exception occurs only for one particular table/entity; for the rest of the tables I am able to perform CRUD operations via Hibernate.
Please find the stack trace below and let me know whether this is in any way related to the Java code or whether it is a database design error.
2016-04-28 11:52:34 ERROR XXXXXDao:44 - Failed to create sessionFactory object.org.hibernate.tool.schema.extract.spi.SchemaExtractionException: More than one table found in namespace (, ) : YYYYYYY
Exception in thread "main" java.lang.ExceptionInInitializerError
at com.XX.dao.XXXXXXXDao.main(XXXXXXXXDao.java:45)
Caused by: org.hibernate.tool.schema.extract.spi.SchemaExtractionException: More than one table found in namespace (, ) : YYYYYYY
at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.processGetTableResults(InformationExtractorJdbcDatabaseMetaDataImpl.java:381)
at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.getTable(InformationExtractorJdbcDatabaseMetaDataImpl.java:279)
at org.hibernate.tool.schema.internal.exec.ImprovedDatabaseInformationImpl.getTableInformation(ImprovedDatabaseInformationImpl.java:109)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.performMigration(SchemaMigratorImpl.java:252)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigration(SchemaMigratorImpl.java:137)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigration(SchemaMigratorImpl.java:110)
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.performDatabaseAction(SchemaManagementToolCoordinator.java:176)
at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.process(SchemaManagementToolCoordinator.java:64)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:458)
at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:465)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:708)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:724)
at com.xx.dao.zzzzzzzzzzzzDAOFactory.configureSessionFactory(zzzzzzzDAOFactory.java:43)
at com.xx.dao.zzzzzzzzzzzzDAOFactory.buildSessionFactory(zzzzzzzzzDAOFactory.java:27)
at com.xx.dao.XXXXXXXXDao.main(XXXXXXXXDao.java:41)
Thanks in advance for your help
I have had the same problem and was able to dig down into the code to find the cause, at least in my case. I don't know whether it will be the same issue for you, but this may be helpful.
From your stack trace I can see you have hibernate.hbm2ddl.auto set to update the schema. As part of this, it tries to look up the metadata for all the tables Hibernate knows about, and for one of them it gets an ambiguous answer because the metadata query returns more than a single row of table or view metadata.
In my case this was caused by our naming convention for tables. We had a table called (say) "AAA_BBB" for which this was going wrong. Now, the use of an underscore in the table name is perfectly acceptable as far as I am aware and is quite common practice. However, the underscore is also the SQL wildcard for a single character; looking at the database metadata code, I can see it is doing a "WHERE table_name LIKE ..." in the DatabaseMetaData.getTables(...) method, which is what Hibernate is using here.
Now, in my schema I also had a second table called "AAA1BBB", and hence both of these matched the metadata lookup, so it returned a metadata row for each of these tables. The Hibernate method is written to simply fail if the result set from the table metadata lookup returns more than one row. I would guess it should examine the available row(s) and find whether there is one which is an exact match with the specified table name.
I tested this for both Oracle and MySQL with the same result.
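(To see the wildcard behaviour directly, a small hedged sketch using plain JDBC; the connection details and table names are placeholders:)
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class MetadataWildcardCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XE", "user", "password")) {
            DatabaseMetaData md = conn.getMetaData();
            // "_" is treated as a single-character LIKE wildcard by getTables(),
            // so the pattern "AAA_BBB" can match both AAA_BBB and AAA1BBB.
            try (ResultSet rs = md.getTables(null, null, "AAA_BBB", new String[] { "TABLE" })) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}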
It seems the property hibernate.hbm2ddl.auto set to update is causing the issue here. Try removing it from your Hibernate config XML.
This will work:
Check your database schema/s and your database user privileges;
The Hibernate update mechanism may fail with this exception if there is another database schema/user with the same table name and the DB user has sufficient privileges to view that table.
So in your case, the table 'YYYYYYY' may be found in more than one database user/schema, and your db user has 'DBA' privileges.
To solve this you can either find and delete the ambiguous table or remove the user's redundant privileges.
Another situation may occur besides what RichB has stated.
In Oracle, every user has a separate schema.
Therefore there are probably two tables with the same name in two different schemas,
so you should specify your default schema in persistence.xml with the property below:
<property name="hibernate.default_schema" value="username"/>
Use the catalog value with @Table, i.e.:
@Entity
@Table(catalog = "MY_DB_USER", name = "LOOKUP")
public class Lookup implements Serializable {
}
I don't have this error now.
Hope this works.
We had a Spring Data / JPA application and this error started happening after upgrading to Postgres 10.6 (from 10).
Our solution was as follows, in our JPA configuration class: note the new commented line,
props.put("hibernate.hbm2ddl.auto", "none"); //POSTGRES 10 --> 10.6 migration
Class:
@Configuration
@EnableJpaRepositories(basePackages = "app.dao")
@ComponentScan(basePackages = { "app.service" })
@EnableTransactionManagement
public class JpaConfig {

    @Autowired
    DataSource dataSource;

    @Bean
    public Map<String, Object> jpaProperties() {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("hibernate.dialect", PostgreSQL95Dialect.class.getName());
        props.put("hibernate.hbm2ddl.auto", "none"); // POSTGRES 10 --> 10.6 migration.
        return props;
    }
}
So after having the same issue, it turns out that I needed to update my OJDBC driver from ojdbc6 to ojdbc8. Hopefully this helps.
I had the same issue with this configuration:
@Entity
@Table(name = "NOTIFICATION")
public class Notification {
...
}
The issue was resolved for me when I moved the table name from @Table to @Entity:
@Entity(name = "NOTIFICATION")
@Table
public class Notification {
...
}
Simply put, if you are using two schemas then you will get this error. To resolve it you can use these steps:
1. Delete the extra schema.
2. Or define the default schema (the schema you are actually using):
spring.jpa.properties.hibernate.default_schema=nameOfSchema
and
jdbc:postgresql://localhost:5432/databaseName?currentSchema=nameOfSchema
I also came across this issue. Here is my solution:
the error:
https://gist.github.com/wencheng1994
I solved it. It was mainly because the DB account had too much authority. I had set hibernate.hbm2ddl.auto=update, so when hbm2ddl ran it tried to look up every existing schema I had defined. But two schemas contained a table with the same name, and the DB account could see both; that is why it found "more than one table in the namespace".
All I needed to do was give the DB account lower authority so that it cannot see tables in other schemas (one schema per DB account).

Is it possible to generate a default value for a certain database column using hbm2ddl

Env: JPA 1, Hibernate 3.3.x, MySQL 5.x
We auto-generate the database schema using the hbm2ddl export operation. Would it be possible to generate a default value for a certain @Entity member during SQL generation (e.g., the archive field in the mytable entity class)?
create table mytable (
...
`archive` tinyint(1) default '0',
...
)
There is no portable way to do that, and the columnDefinition "trick" is definitely not a good solution. Actually, setting defaults in the generated DDL is just not a good idea; it would require the provider to go back to the database to see the result after an insert [1]. Better to set the default in your Java code.
[1] Just in case, note that you can tell Hibernate to do that using the @Generated annotation.
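(A hedged sketch of both approaches, assuming org.hibernate.annotations.Generated and a hypothetical MyTable entity; Option 1 is the portable Java-level default, Option 2 keeps the default in the DDL and lets Hibernate re-read the value after the INSERT:)
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Generated;
import org.hibernate.annotations.GenerationTime;

@Entity
public class MyTable {
    @Id
    private Long id;

    // Option 1 (portable): default the value in Java code instead of the DDL.
    // private boolean archive = false;

    // Option 2: the default lives in the DDL; Hibernate reads it back after insert.
    @Generated(GenerationTime.INSERT)
    @Column(name = "archive", columnDefinition = "tinyint(1) default '0'", insertable = false)
    private Boolean archive;
}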

Why does Hibernate ignore the name attribute of the @Column annotation?

Using Hibernate 3.3.1 and Hibernate Annotations 3.4, the database is DB2/400 V6R1, running that on WebSphere 7.0.0.9
I have the following class
@Entity
public class Ciinvhd implements Serializable {
    @Id
    private String ihinse;

    @Id
    @Column(name="IHINV#")
    private BigDecimal ihinv;
    ....
}
For reasons I can't figure out, Hibernate ignores the specified column name and uses 'ihinv' to generate the SQL:
select
ciinvhd0_.ihinse as ihinse13_,
ciinvhd0_.ihinv as ihinv13_,
...
Which of course gives me the following error:
Column IHINV not in table CIINVHD
Edit: I switched the Hibernate log level to DEBUG, and I see that it does not process the column annotation for that field. I tried several random things; it just doesn't work.
Has anyone had this problem before? I have other entities that are very similar in that they use # in their database field names and are part of the PK, and I don't have this problem with them.
You could try some kind of quoting:
For example:
@Column(name="`IHINV#`")
or
@Column(name="'IHINV#'")
Another option would be to dig in to source code Hibernate dialect for DB2 and see if it contains anything helpful.
Of course, the easiest way would be to remove the hash from column name if possible.
I suspect that the problem is the hash in the column name. A similar question on the hibernate forums suggests that backticks can be useful here.
