Creating SpringSource Tool Suite (STS) Hibernate Template - java

I've created a Hibernate project using a Spring Template Project. Two domain objects, a JUnit test, app-context.xml and persistence-context.xml were created. Now I noticed this line
<jdbc:embedded-database id="dataSource"></jdbc:embedded-database>
and assume that the following happens
A default HSQL database is used
For the two created models, Order.java and Item.java, the in-memory tables T_ORDER and T_ITEM will be created automatically, and these will be mapped as per the annotations on the objects. Inside the auto-created test class, one of the test methods is as follows:
@Test
@Transactional
public void testSaveAndGet() throws Exception {
    Session session = sessionFactory.getCurrentSession();
    Order order = new Order();
    order.getItems().add(new Item());
    session.save(order);
    session.flush();
    // Otherwise the query returns the existing order
    // (and we didn't set the parent in the item)...
    session.clear();
    Order other = (Order) session.get(Order.class, order.getId());
    assertEquals(1, other.getItems().size());
    assertEquals(other, other.getItems().iterator().next().getOrder());
}
Questions ...
Am I correct to think that the in-memory tables are created from the domain models (Order/Item) and mapped? Therefore session.flush() synchronizes the object to the physical (in-memory) table?
Are these tables auto-mapped? Because if I do the following
session.save(order);
session.flush();
session.clear();
Order other = (Order) session
        .createQuery("from T_ORDER where ORDER_ID =: orderid")
        .setLong("orderid", order.getId())
        .uniqueResult();
I get an exception...
org.hibernate.hql.ast.QuerySyntaxException: \
T_ORDER is not mapped [from T_ORDER where ORDER_ID =: orderid]
............
............
If these tables are not mapped automatically, how is flushing working in the first place?

Table creation is a feature of Hibernate (and other JPA providers). It takes place when the application/test starts and has nothing to do with any particular query. Even if you only start your test, with Hibernate running and configured, it can create the tables.
Whether Hibernate creates the tables, drops old ones, and so on depends on its configuration: the property hibernate.hbm2ddl.auto controls what Hibernate does at startup. For example, the value update will add missing tables and columns.
More details can be found in the documentation.
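As a rough, illustrative sketch (in this template the property would normally be set in persistence-context.xml; the programmatic bootstrap below is only an assumption to show where the setting lives):
// Typical hibernate.hbm2ddl.auto values: validate, update, create, create-drop.
SessionFactory sessionFactory = new AnnotationConfiguration()
        .addAnnotatedClass(Order.class)
        .addAnnotatedClass(Item.class)
        .setProperty("hibernate.hbm2ddl.auto", "update")
        .setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect")
        .buildSessionFactory();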
Your Exception
When you use Hibernate and write Hibernate query statements, you have to use HQL, not SQL. The main difference is that HQL is based on the classes, not on the tables. So in your case you must not use T_ORDER but Order (the same goes for the id: you need to use the property/field name, not the column name).
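For example, the failing query from the question could be rewritten against the entity and its property names (assuming the Order entity's identifier property is named id, as in the template):
// HQL refers to the Order class and its id property, not to T_ORDER/ORDER_ID.
Order other = (Order) session
        .createQuery("from Order o where o.id = :orderid")
        .setLong("orderid", order.getId())
        .uniqueResult();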

Related

Hibernate order_inserts not working as expected when cascading

I have configured hibernate to batch insert/update entities via the following properties:
app.db.props.hibernate.jdbc.batch_size=50
app.db.props.hibernate.batch_versioned_data=true
app.db.props.hibernate.order_inserts=true
app.db.props.hibernate.order_updates=true
(Ignore the app.db.props prefix; it is removed by Spring.) I can confirm that the properties are making it to Hibernate, because simple batches work as expected, confirmed by logging via the datasource directly. The proxy below produces logging that shows batches are happening.
ProxyDataSourceBuilder.create(dataSource)
.asJson().countQuery()
.logQueryToSysOut().build();
Logs (notice batchSize)...
{"name":"", "connection":121, "time":1, "success":true, "type":"Prepared", "batch":true, "querySize":1, "batchSize":18, "query":["update odm.status set first_timestamp=?, last_timestamp=?, removed=?, conformant=?, event_id=?, form_id=?, frozen=?, group_id=?, item_id=?, locked=?, study_id=?, subject_id=?, verified=? where id=?"], "params":[...]}
However when inserting a more complex object model, involving a hierarchy of 1-* relationships, hibernate is not ordering inserts (and thus not batching). With a model like EntityA -> EntityB -> EntityC, hibernate is inserting each parent and child and then iterating, rather than batching each entity class.
I.e. what I see is multiple inserts for each type...
insert EntityA...
insert EntityB...
insert EntityC...
insert EntityA...
insert EntityB...
insert EntityC...
repeat...
But what I would expect is a single iteration, using a bulk insert, for each type.
It seems like the cascading relationship is preventing the ordering of inserts (and the bulk insert), but I can't figure out why. Hibernate should be capable of understanding that all instances of EntityA can be inserted at once, then EntityB, and so on.
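For reference, the hierarchy looks roughly like the sketch below; the property names and generation strategy are assumptions, only the cascading one-to-many structure is taken from the question:
// EntityA -> EntityB (EntityB -> EntityC is shaped the same way).
// The cascade makes Hibernate persist each parent together with its children.
@Entity
public class EntityA {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
    private List<EntityB> children = new ArrayList<>();
}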

HibernateException when updating a collection configured with delete orphan : can't save the parent object

I work on a Java project and I have to write a new module in order to copy some data from one database to another (same tables).
I have an entity Contrat containing several fields and the following field:
@OneToMany(mappedBy = "contrat", fetch = FetchType.LAZY)
@Fetch(FetchMode.SUBSELECT)
@Cascade({ org.hibernate.annotations.CascadeType.ALL, org.hibernate.annotations.CascadeType.DELETE_ORPHAN })
@BatchSize(size = 50)
private Set<MonElement> elements = new HashSet<MonElement>();
I must read some "Contrat" objects from a database and write them in another database.
I hesitate between two solutions:
Use JDBC to query the first database, get the objects, and then write those objects into the second database (paying attention to the order and the different keys). This would take a long time.
Since the project currently uses Hibernate and contains all the Hibernate mapping classes, I was thinking about opening a first session to the first database, reading the Contrat object, setting the ids to null on the child elements, and writing the object to the destination database with a second session. It should be quicker.
I wrote a test class for the second use case and the process fails with the following exception:
org.hibernate.HibernateException: Don't change the reference to a
collection with cascade="all-delete-orphan"
I think the reference must change when I set the ids to null, but I am not sure: I don't understand how changing a field of a Collection member can change the Collection reference.
Note that if I remove DELETE_ORPHAN from the configuration, everything works: all the objects and their dependencies are written to the database.
So I would like to use the Hibernate solution, which is faster, but I have to keep the DELETE_ORPHAN feature because the application currently relies on it to ensure that every MonElement removed from the elements Set is deleted from the database.
I don't need this feature but cannot remove it.
Also, I need to set the MonElement ids to null in order to generate new ones, because their ids in the first database may already exist in the target database.
Here is the code I wrote, which works well when I remove the DELETE_ORPHAN option.
SessionFactory sessionFactory = new AnnotationConfiguration().configure("/hibernate.cfg.src.xml").buildSessionFactory();
Session session = sessionFactory.openSession();

// search the Contrat object
Criteria crit = session.createCriteria(Contrat.class);
CriteriaUtil.addEqualCriteria(crit, "column", "65465454");
Contrat contrat = (Contrat) crit.list().get(0);
session.close();

SessionFactory sessionFactoryDest = new AnnotationConfiguration().configure("/hibernate.cfg.dest.xml").buildSessionFactory();
Session sessionDest = sessionFactoryDest.openSession();
Transaction transaction = sessionDest.beginTransaction();

// setting id to null, also for the elements in the elements Set
contrat.setId(null);
for (MonElement element : contrat.getElements()) {
    element.setId(null);
}

// writing the object in the database
sessionDest.save(contrat);
transaction.commit();
sessionDest.flush();
sessionDest.close();
This is way faster than managing the queries and the primary/foreign keys and dependencies between objects myself.
Does anyone have an idea how to get rid of this exception?
Or maybe I should change the state of the Set.
In fact I'm not trying to delete any element of this Set; I just want them to be considered new objects.
If I don't find a solution, I will do something dirty: duplicate all the Hibernate entity objects in my new project and remove the DELETE_ORPHAN parameter in the newly created Contrat.
So the application will continue using its mapping and my new project will use my specific mapping. But I want to avoid that.
Thanks
A correct solution was written by crizzis as a comment to my question.
I quote him:
I'd try wrapping the contrat.elements in a new collection (contrat.setElements(new HashSet<>(contrat.getElements()))) before trying to persist the contract with the new session
It works well.
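In code form the fix is simply the following (a minimal sketch based on the quoted comment; setElements is assumed to be the standard setter on Contrat):
// Replace the Hibernate-managed PersistentSet with a plain HashSet so the
// orphan-delete tracking of the original collection is no longer involved.
contrat.setElements(new HashSet<MonElement>(contrat.getElements()));
sessionDest.save(contrat);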

Why do we have to use the @Modifying annotation for queries in Spring Data JPA

For example, I have a method in my CRUD interface which deletes a user from the database:
public interface CrudUserRepository extends JpaRepository<User, Integer> {

    @Transactional
    @Modifying
    @Query("DELETE FROM User u WHERE u.id=:id")
    int delete(@Param("id") int id, @Param("userId") int userId);
}
This method will work only with the annotation @Modifying. But what is the need for the annotation here? Why can't Spring analyze the query and understand that it is a modifying query?
CAUTION!
Using @Modifying(clearAutomatically=true) will drop any pending updates on the managed entities in the persistence context. Spring states the following:
Doing so triggers the query annotated to the method as an updating
query instead of selecting one. As the EntityManager might contain
outdated entities after the execution of the modifying query, we do
not automatically clear it (see the JavaDoc of EntityManager.clear()
for details), since this effectively drops all non-flushed changes
still pending in the EntityManager. If you wish the EntityManager to
be cleared automatically, you can set the #Modifying annotation’s
clearAutomatically attribute to true.
Fortunately, starting from Spring Data JPA 2.0.4.RELEASE, a flushAutomatically flag was added (https://jira.spring.io/browse/DATAJPA-806) to automatically flush any managed entities in the persistence context before executing the modifying query; see the reference: https://docs.spring.io/spring-data/jpa/docs/2.0.4.RELEASE/api/org/springframework/data/jpa/repository/Modifying.html#flushAutomatically
So the safest way to use @Modifying is:
@Modifying(clearAutomatically=true, flushAutomatically=true)
What happens if we don't use those two flags?
Consider the following code:
repo {
    @Modifying
    @Query("delete User u where u.active=0")
    public void deleteInActiveUsers();
}
Scenario 1: why flushAutomatically
service {
    User johnUser = userRepo.findById(1); // stored in first-level cache
    johnUser.setActive(false);
    repo.save(johnUser);
    repo.deleteInActiveUsers(); // BAM, it won't delete JOHN right away
    // JOHN still exists, since John with active=false was not
    // flushed into the database when @Modifying kicked in.
    // So imagine that after the `deleteInActiveUsers` line you called a native
    // query or started a new transaction; in both cases John
    // was not deleted, which can lead to faulty business logic.
}
Scenario 2: why clearAutomatically
In the following, consider that johnUser.active is already false:
service {
    User johnUser = userRepo.findById(1); // stored in first-level cache
    repo.deleteInActiveUsers(); // you think that John is deleted now
    System.out.println(userRepo.findById(1).isPresent()); // TRUE!!!
    System.out.println(userRepo.count()); // 1 !!!
    // JOHN still exists: in this transaction's persistence context,
    // John's object was not cleared upon the @Modifying query execution,
    // so John's object is still fetched from the first-level cache.
    // `clearAutomatically` takes care of doing the
    // clear part on the objects being modified for the current
    // transaction's persistence context.
}
So if, in the same transaction, you are playing with modified objects before or after the line which does the @Modifying query, then use clearAutomatically and flushAutomatically; if not, you can skip using these flags.
BTW this is another reason why you should always put the @Transactional annotation on the service layer, so that you can have one persistence context for all your managed entities in the same transaction.
Since the persistence context is bound to the Hibernate session, you need to know that a session can contain a couple of transactions; see this answer for more info: https://stackoverflow.com/a/5409180/1460591
The way Spring Data works is that it joins the transactions together (known as transaction propagation) into one transaction (the default propagation is REQUIRED); see this answer for more info: https://stackoverflow.com/a/25710391/1460591
To connect things together: if you have multiple isolated transactions (e.g. not having a @Transactional annotation on the service), you have multiple sessions and therefore multiple persistence contexts (aka first-level caches). That means that, even with flushAutomatically, an entity you delete/modify in one persistence context might already be fetched and cached in another transaction's persistence context, which would cause wrong business decisions due to wrong or unsynced data.
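A rough sketch of that advice, reusing the repository from the pseudo-code above (the service class and method names are assumptions; findById is used in its Optional-returning Spring Data 2.x form):
@Service
public class UserService {

    private final UserRepo repo;

    public UserService(UserRepo repo) {
        this.repo = repo;
    }

    // One transaction, hence one persistence context, for both the entity change
    // and the @Modifying bulk delete; flushAutomatically/clearAutomatically keep
    // the first-level cache consistent with the database.
    @Transactional
    public void deactivateAndPurge(int userId) {
        User john = repo.findById(userId).orElseThrow(IllegalArgumentException::new);
        john.setActive(false);
        repo.save(john);
        repo.deleteInActiveUsers();
    }
}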
This will trigger the query annotated to the method as an updating query instead of a selecting one. As the EntityManager might contain outdated entities after the execution of the modifying query, it is automatically cleared (see the JavaDoc of EntityManager.clear() for details). This effectively drops all non-flushed changes still pending in the EntityManager. If you don't wish the EntityManager to be cleared automatically, you can set the @Modifying annotation's clearAutomatically attribute to false.
For further detail you can follow this link:
http://docs.spring.io/spring-data/jpa/docs/1.3.4.RELEASE/reference/html/jpa.repositories.html
Queries that require a @Modifying annotation include INSERT, UPDATE, DELETE, and DDL statements.
Adding the @Modifying annotation indicates that the query is not a SELECT query.
When you use only the @Query annotation, you should use SELECT queries.
However, with the @Modifying annotation you can use INSERT, DELETE, and UPDATE queries above the method.

Load Entity from View in JPA/Hibernate

I have an application which uses Spring and Hibernate. In my database there are some views that I need to load into some entities. So I'm trying to execute a native query and load the class with the data retrieved from the view:
// In my DAO class (@Repository)
public List<MyClass> findMyEntities() {
    Query query = em.createNativeQuery("SELECT * FROM V_myView", MyClass.class);
    return query.getResultList();
}
and MyClass has the same fields as the column names of the view.
The problem is that Hibernate can't recognize MyClass because it's not an entity (it's not annotated with @Entity)
org.hibernate.MappingException: Unknown entity
If I make MyClass an entity, the system will try to create/update a table for that entity, because I have configured it with:
<property name="hibernate.hbm2ddl.auto" value="update"/>
So I come to these questions:
Can I disable "hibernate.hbm2ddl.auto" just for a single entity?
Is there any way to load the data from a view into a non-entity class?
If not, what would be the best way in my case for loading the data from a view into a class in hibernate?
Thanks
Place this on your class:
@Entity
@Immutable
@Subselect(QUERY)
public class MyClass { ....... }
Hibernate executes the query to retrieve the data but does not create the table or view. The downside of this is that the mapping is read-only.
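A slightly fuller sketch of the same idea (the select statement, columns, and field names here are assumptions based on the question's V_myView):
@Entity
@Immutable
@Subselect("select id, name from V_myView")
public class MyClass {

    @Id
    private Long id;

    private String name;

    // getters omitted; this mapping is read-only
}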
You may use axtavt's solution. You may also just execute your query and transform the List<Object[]> it will return into a List<MyClass> explicitly. Or you may map your view as a read-only entity, which is probably the best solution, because it would allow for associations with other tables, querying through JPQL, Criteria, etc.
In my opinion, hibernate.hbm2ddl.auto should only be used for quick-and-dirty prototypes. Use the Hibernate tools to generate the SQL file that creates the schema, and modify it to remove the creation of the view. Anyway, if it's set to update, shouldn't it skip the table creation since it already exists (as a view)?
You can use AliasToBeanResultTransformer. Since it's a Hibernate-specific feature, you need to access the underlying Hibernate Session:
return em.unwrap(Session.class)
        .createSQLQuery("...")
        .setResultTransformer(new AliasToBeanResultTransformer(MyClass.class))
        .list();

How do you update a foreign key value directly via Hibernate?

I have a couple of objects that are mapped to tables in a database using Hibernate, BatchTransaction and Transaction. BatchTransaction's table (batch_transactions) has a foreign key reference to transactions, named transaction_id.
In the past I have used a batch runner that used internal calls to run the batch transactions and complete the reference from BatchTransaction to Transaction once the transaction is complete. After a Transaction has been inserted, I just call batchTransaction.setTransaction(txn), so I have a @ManyToOne mapping from BatchTransaction to Transaction.
I am changing the batch runner so that it executes its transactions through a Web service. The ID of the newly inserted Transaction will be returned by the service and I'll want to update transaction_id in BatchTransaction directly (rather than using the setter for the Transaction field on BatchTransaction, which would require me to load the newly inserted item unnecessarily).
It seems like the most logical way to do it is to use SQL rather than Hibernate, but I was wondering if there's a more elegant approach. Any ideas?
Here's the basic mapping.
BatchQuery.java
@Entity
@Table(name = "batch_queries")
public class BatchQuery
{
    @ManyToOne
    @JoinColumn(name = "query_id")
    public Query getQuery()
    {
        return mQuery;
    }
}
Query.java
@Entity
@Table(name = "queries")
public class Query
{
}
The idea is to update the query_id column in batch_queries without setting the "query" property on a BatchQuery object.
Using a direct SQL update, or an HQL update, is certainly feasible.
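For instance, something along these lines (a sketch only; the table and column names follow the question's batch_queries/query_id mapping, while the id column and the variables are assumptions):
// Bypass the association and set the foreign key column directly.
int updated = session.createSQLQuery(
        "update batch_queries set query_id = :queryId where id = :batchQueryId")
    .setParameter("queryId", newQueryId)
    .setParameter("batchQueryId", batchQueryId)
    .executeUpdate();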
Not seeing the full problem, it looks to me like you might be making a modification to your domain that's worth reflecting in your domain model: you may be moving to having a BatchTransaction that has just the TransactionId as a member, and not the full Transaction.
If, in other activities, the BatchTransaction will still need to hydrate that Transaction, I'd consider adding a separate mapping for the TransactionId and having that be the managing mapping (make the Transaction association insertable and updatable false).
If BatchTransaction will no longer be concerned with the full Transaction, just remove that association after adding the TransactionId field.
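A sketch of that dual mapping (field names are illustrative; the table and column names follow the question's BatchTransaction example):
@Entity
@Table(name = "batch_transactions")
public class BatchTransaction {

    // Managing mapping: set this field directly with the id returned by the service.
    @Column(name = "transaction_id")
    private Long transactionId;

    // Read-only association kept for hydration; the column is owned by transactionId above.
    @ManyToOne
    @JoinColumn(name = "transaction_id", insertable = false, updatable = false)
    private Transaction transaction;
}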
As you have written, we can use SQL to solve the above problem. But I would suggest not updating the primary keys via SQL.
Since you are changing the key, you are effectively creating an altogether new object. For this, you can first delete the existing object with the previous key, and then insert a new object with the updated key (in your case transaction_id).
