I'm working with a Spring project. It uses Hibernate Envers for entity auditing and Flyway for DB migration. If I insert values into an entity that is tracked by Envers using Flyway scripts (i.e. native queries), does the change get added to the audit table for that entity, or should I add a separate query for the audit table insertion?
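For illustration, a minimal sketch of the kind of setup I mean (the entity name and fields are hypothetical, not from my actual project):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.envers.Audited;

// An Envers-tracked entity: changes made through Hibernate/JPA are
// recorded in the corresponding customer_AUD audit table.
@Entity
@Audited
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // getters/setters omitted
}

The Flyway migration in question would then just be a plain SQL script with INSERT INTO customer ... statements.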
We have a multi-tenant Spring Web app with Hibernate as the JPA provider. We use one schema per tenant. We would like to use a tool like Envers or JaVers for audit logging, such that every DB schema has its own audit tables.
How have you implemented this, or how would you implement it?
I have changed the JaVers code a bit so that when it retrieves the DB schema, I return the schema that belongs to the logged-in user. This only works partly, because some tables are contained in the shared public schema, and at the place where JaVers retrieves the DB schema I don't have this entity/table info. I am thinking about changing the JaVers code further so that I do have this entity/table info, but isn't there a better library or approach to support multi-tenancy?
Sorry for not posting any code 🙈
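To give an idea of the direction, here is a minimal sketch of schema-based tenant resolution on the Hibernate side; the TenantContext holder is an assumption, not our actual code:

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;
import org.springframework.stereotype.Component;

// Tells Hibernate which schema to use for the current request, so that both
// the regular tables and the audit tables are resolved against the tenant's schema.
@Component
public class SchemaPerTenantResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        // TenantContext is a hypothetical ThreadLocal holder populated
        // from the logged-in user; fall back to the shared public schema.
        String tenant = TenantContext.getCurrentTenant();
        return tenant != null ? tenant : "public";
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}

With schema-based multi-tenancy, Envers writes its audit rows through the same tenant-scoped connection, so in principle each schema gets its own audit tables; the tables in the shared public schema remain the awkward part.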
I am a newbie who only knows the basics of core Java. I have a test task where I need to create a simple REST web service with Spring Boot.
I wrote all the logic in plain Java, and now I am trying to wrap it in all these technologies.
I am using this guide:
https://spring.io/guides/tutorials/rest/
Here they have JPA entities and the @Table annotation, where the table name is specified, but there are no SQL scripts to create tables in this guide.
So I thought JPA would create the database and tables for the entities by itself, but when I uncomment the @Table annotation it says "Cannot resolve table '<table_name>'".
I am using IntelliJ IDEA with a Spring Boot Maven project with Spring Web, H2 and JPA imported (as the guide says to do).
I also configured the H2 data source and tested the connection: it works fine. There is a schema, but no tables.
Here is my application.properties:
spring.h2.console.enabled=true
spring.h2.console.path=/h2_console
spring.datasource.url=jdbc:h2:~/kaylemains
spring.datasource.platform=h2
spring.datasource.initialization-mode=always
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.driverClassName=org.h2.Driver
spring.jpa.generate-ddl=true
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.hibernate.ddl-auto = update
spring.jpa.show-sql=true
logging.level.org.hibernate.SQL=DEBUG
logging.level.org.hibernate.type.descriptor.sql.BasicBinder=TRACE
As in the guide, I add entities in the LoadDatabase class like this:
@Bean
CommandLineRunner initTournaments(TournamentRepository repository) {
    return args -> {
        // saves one Tournament on startup and logs the persisted entity
        log.info("Preloading " + repository.save(new Tournament("Kayle Mains Competition: Summoner's Gorge", 16)));
    };
}
So my question is: can I have a file-stored H2 database and do everything with it (including table creation) from my Java code?
Or is it necessary to create the tables manually (by writing SQL scripts with CREATE TABLE) and construct them so that all the entities work (that means defining foreign key columns etc.), and only after that will JPA be able to work with this database? Do I need to add a @Column annotation to every field, or will JPA map the fields of its entities by itself?
Why am I getting this "Cannot resolve table" error? Of course it cannot be resolved, because the table does not exist yet; I thought JPA and Hibernate would create it for me based on the entity classes...
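For reference, a Tournament entity in the style of that guide would look roughly like this (the field names are assumptions based on the constructor used above):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// With spring.jpa.hibernate.ddl-auto=update, Hibernate is expected to derive
// the tournament table from this mapping at startup, without any
// hand-written CREATE TABLE script.
@Entity
@Table(name = "tournament")
public class Tournament {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private int participants;

    protected Tournament() {}

    public Tournament(String name, int participants) {
        this.name = name;
        this.participants = participants;
    }
}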
Baeldung has all the information about the properties for DDL generation:
Spring provides a JPA-specific property which Hibernate uses for DDL generation: spring.jpa.hibernate.ddl-auto.
create – Hibernate first drops existing tables, then creates new tables
update – the object model created based on the mappings (annotations or XML) is compared with the existing schema, and then Hibernate updates the schema according to the diff. It never deletes the existing tables or columns, even if they are no longer required by the application
create-drop – similar to create, with the addition that Hibernate will drop the database after all operations are completed. Typically used for unit testing
validate – Hibernate only validates whether the tables and columns exist; otherwise it throws an exception
none – this value effectively turns off DDL generation
We have to set the value carefully or use one of the other mechanisms to initialize the database.
If the problem is still present, go to Settings -> Inspections and uncheck the option "Unresolved database references in annotation".
I'm building a small Spring Boot application. I'm using Spring Data JPA to create the database schema and Liquibase to populate it with test data.
application.properties:
spring.datasource.url=jdbc:postgresql://localhost:5432/book-db
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.username=admin
spring.datasource.password=lTIDDYz3n3jD3BeYaAJz
spring.jpa.generate-ddl=true
spring.jpa.hibernate.ddl-auto=create
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
According to the documentation, no configuration for Liquibase is required if I have the Gradle dependency and the master changelog under the default path.
db.changelog-master.yaml:
databaseChangeLog:
  - changeSet:
      id: 1
      author: jb
      changes:
        - sqlFile:
            path: db/migration/insert-books.sql
insert-books.sql:
--liquibase formatted sql
--changeset admin:1
delete from book;
insert into book (id, title)
values (nextval('seq'), 'Functional Programming for Mortals');
commit;
I have tried it with and without the commit. The tables databasechangelog and databasechangeloglock are created successfully, and databasechangelog contains the migration (insert-books).
The migration goes through, because if I add an invalid insert (to some table that does not exist), I get the exception:
ERROR: relation "xxx" does not exist
How do I populate the database with the data in the insert-books.sql script using Liquibase?
Don't use both Liquibase and JPA to manage the DB structure. If you want to use Liquibase, set JPA (Hibernate) to just validate the schema and manage the schema with Liquibase:
spring.jpa.hibernate.ddl-auto=validate
The problem with your setup is the order of operations. When your application starts, it first runs Liquibase, which inserts the data; then JPA starts and the schema is created from scratch, wiping that data.
Try dropping the schema before running the app; I bet the migration (Liquibase) will fail.
Liquibase has to be in charge of the schema. There is a way to add Liquibase to an existing database, but that again makes Liquibase the owner of the schema:
Using Liquibase on an existing database
I recently switched from Ebean to Hibernate and I want to enable Hibernate table auto-generation.
When I used Ebean, it was quite simple: I just activated the evolutions and Ebean created all my database schemas.
How can I do that with Hibernate?
You can use hibernate.hbm2ddl.auto, which automatically validates or exports schema DDL to the database when the SessionFactory is created. With create-drop, the database schema will be dropped when the SessionFactory is closed explicitly.
e.g. validate | update | create | create-drop
So the possible options are:
validate: validates the schema, makes no changes to the database.
update: updates the schema.
create: creates the schema, destroying previous data.
create-drop: drops the schema at the end of the session.
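If you are configuring Hibernate programmatically rather than through hibernate.cfg.xml or Spring properties, here is a minimal sketch of where the property goes (the entity class and connection settings are placeholders):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {

    public static SessionFactory buildSessionFactory() {
        return new Configuration()
                // let Hibernate create or alter tables from the mapped entities
                .setProperty("hibernate.hbm2ddl.auto", "update")
                .setProperty("hibernate.connection.url", "jdbc:h2:~/testdb")
                .setProperty("hibernate.connection.driver_class", "org.h2.Driver")
                .addAnnotatedClass(MyEntity.class) // placeholder entity
                .buildSessionFactory();
    }
}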
How do I implement the MariaDB dynamic columns concept using Hibernate mappings?
I have configured MariaDB in my project, but it is not clear to me how to implement the Hibernate tables, insert data into them, and fetch data using Criteria.
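As far as I know, Hibernate has no built-in mapping type for MariaDB dynamic columns. A common workaround is to map the dynamic-column blob as a plain byte array and to read and write the individual attributes through native queries using MariaDB's COLUMN_CREATE and COLUMN_GET functions. A rough sketch, assuming a hypothetical product table with an attrs dynamic-column blob:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.Table;

// The dynamic columns live in a single BLOB column; Hibernate only sees bytes.
@Entity
@Table(name = "product")
public class Product {

    @Id
    private Long id;

    private String name;

    @Column(name = "attrs", columnDefinition = "BLOB")
    private byte[] attrs; // MariaDB dynamic-column blob

    // getters/setters omitted
}

class ProductDao {

    // Inserts a row, building the dynamic columns in SQL with COLUMN_CREATE.
    void insert(EntityManager em, long id, String name, String color) {
        em.createNativeQuery(
                "INSERT INTO product (id, name, attrs) "
              + "VALUES (?1, ?2, COLUMN_CREATE('color', ?3))")
          .setParameter(1, id)
          .setParameter(2, name)
          .setParameter(3, color)
          .executeUpdate();
    }

    // Reads one dynamic attribute back with COLUMN_GET.
    String findColor(EntityManager em, long id) {
        return (String) em.createNativeQuery(
                "SELECT COLUMN_GET(attrs, 'color' AS CHAR) FROM product WHERE id = ?1")
          .setParameter(1, id)
          .getSingleResult();
    }
}

The Criteria API itself has no notion of dynamic columns, so filtering on them typically also falls back to native SQL fragments (for example sqlRestriction in the legacy Hibernate Criteria API), or to storing the attributes as JSON instead and mapping that.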