ID of Java objects not synchronizing with database

The only link I found that's close to what I am experiencing is this one: How do you synchronize the id of a java object to its associated db row?
and there's not much of a solution in it.
My problem is that my Java objects aren't updated after being added to my database, despite the .commit():
em.getTransaction().begin();
System.out.println(eleve.getID());
em.persist(eleve);
em.getTransaction().commit();
System.out.println(eleve.getID());
which refers to this class
public class Eleve {
    private String _nom;
    private String _prenom;
    private float _ptsMerite;

    @Id
    private int _IDEleve;
and yields this output:
0
0
I think I've set up the persistence properly, since it does create the rows in the database (MySQL) with correct IDs, which I've set to auto-increment.
I am using javax.persistence for everything (annotations and such).

Did you try adding the @GeneratedValue annotation to your ID field?
There are four possible strategies you can choose from:
GenerationType.AUTO: The JPA provider will choose an appropriate strategy for the underlying database.
GenerationType.IDENTITY: Relies on an auto-increment column in your database.
GenerationType.SEQUENCE: Relies on a database sequence.
GenerationType.TABLE: Uses a generator table in the database.
More info: https://www.baeldung.com/jpa-strategies-when-set-primary-key
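For example, since the column is already auto-increment in MySQL, a minimal sketch of the annotated field (based only on the snippet above, not your full mapping) could be:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY) // matches the MySQL auto-increment column
private int _IDEleve;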
If you ever switch to a more powerful framework, it is likely that the container will manage your transactions (CMT), so you can't (or don't want to) commit every time you want to access the ID of a new entity. In these cases you can use EntityManager#flush to synchronize the entity manager with the database.
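As a hedged sketch of that flush-based variant (same entity and EntityManager as above):
// Inside a (container-managed) transaction: flush() forces the pending INSERT,
// so the generated ID is populated without committing yet.
em.persist(eleve);
em.flush();
System.out.println(eleve.getID()); // now prints the database-generated ID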

Related

Weblogic to Liberty with JPA upgrade - related entities intermittently not being queried

Just a quick question, in case something stands out to someone immediately.
We're migrating an EAR/EJB application from Weblogic 11g to the latest WS Liberty (22.x), also upgrading several of the frameworks, including JPA to 2.2. This also changes the JPA implementation to EclipseLink. We came from com.oracle.weblogic.11g.modules:javax.persistence:1.0.0.0_1-0-2. The underlying DB is MS-SQL Server.
And I'm running into some weirdness with regard to related objects not being resolved/queried intermittently.
Just as an example, we have entities whose columns hold reference-data codes or similar lookups. Say I have an entity called PaymentRecordT and it has a status code which refers to a ref table that also holds a textual description. Something like this:
SQL:
CREATE TABLE [PAYMENT_RECORD_T](
[PAYMENT_ID] [int] NOT NULL,
...
[PAYMENT_STATUS_CD] [CHAR](8) NOT NULL,
...
)
ALTER TABLE [PAYMENT_RECORD_T] WITH CHECK ADD CONSTRAINT [FK_PAYM4] FOREIGN KEY([PAYMENT_STATUS_CD])
REFERENCES [RECORD_STATUS_T] ([REC_STAT_CD])
GO
CREATE TABLE [RECORD_STATUS_T] (
[RECORD_STAT_CD] [CHAR](8) NOT NULL,
[RECORD_STAT_DSC] [VARCHAR](60) NOT NULL
CONSTRAINT [PK_RECORD_STATUS_T] PRIMARY KEY CLUSTERED (
[RECORD_STAT_CD] ASC
)WITH (PAD_INDEX = OFF...) ON [PRIMARY]
) ON [PRIMARY]
GO
Java:
#Table(name = "PAYMENT_RECORD_T")
#Entity
public class PaymentRecordT {
...
#ManyToOne
#PrimaryKeyJoinColumn(name = "payment_status_cd", referencedColumnName = "REC_STAT_CD")
private RecordStatusT recordStatusT;
}
#Table(name = "RECORD_STATUS_T")
#Entity
public class RecordStatusT {
#Column(name = "REC_STAT_CD")
#Id
private String recStatCd;
#Column(name = "REC_STAT_DSC")
#Basic
private String recStatDsc;
}
Other relations in our app might not be primary-key relations but loose relations, in which case it's just @JoinColumn, but the pattern would be the same.
My 'weirdness' is the following:
In this example I have a list of 10 payment records, each of which has such a record status, which is NOT NULL in the database. When I do the initial retrieval via an EJB method, it grabs the 10 records and I also get the correctly resolved/queried record statuses.
Then I add a new record via an EJB method (TRANSACTION_REQUIRED). After the add method returns, I can query the new payment record in the database via SSMS. It's committed, it looks 100% correct, and it contains a correct record status code.
Now I run the retrieval method again and I get the 11 records as I would expect. Only the 11th (newly inserted) record will have recordStatusT as null.
When I restart the app all goes well again for the retrieval of all 11 records. But for subsequent additions the outcome seems again 'undefined'.
In the JDBC logging I can see that during the original retrieval of the records the record_status_t table was queried, but the second time around it was not, and I have no explanation why.
I played with FetchType.EAGER and read up on caching etc., but I'm not getting anywhere.
Any ideas?
Thanks for your time
Carsten
I solved the problem by ensuring that after inserts/updates the objects aren't being read from the cache.
In the end, rather than doing it with a query hint, I disabled caching for the entity involved using the @Cacheable annotation, like so:
#Table(name = "PAYMENT_RECORD_T")
#Entity
#Cacheable(false)
public class PaymentRecordT {
...
#ManyToOne
#PrimaryKeyJoinColumn(name = "payment_status_cd", referencedColumnName = "REC_STAT_CD")
private RecordStatusT recordStatusT;
}
I still feel like there should be a better solution. EclipseLink tracks the inserts/updates, so it should be able to track what needs re-reading from the DB and what doesn't. I still feel like I don't fully understand the entire picture, but this works for me and it's reasonably clean.
I can leave the considerable amount of read-only data/objects cacheable and mark the few that are changeable as non-cacheable.
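(For comparison, a hedged sketch of the query-hint route I decided against, assuming EclipseLink's "eclipselink.refresh" hint:)
TypedQuery<PaymentRecordT> query = em.createQuery(
        "SELECT p FROM PaymentRecordT p", PaymentRecordT.class);
// EclipseLink-specific hint: bypass the shared cache and re-read from the database
query.setHint("eclipselink.refresh", "true");
List<PaymentRecordT> records = query.getResultList();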
Thanks for reading
Carsten

Schema migration of ids and n to m relationships in objectify

I have a User entity which previously had a String id. I'd like to migrate to a Long id, which seems simple:
public class UserEntity {
    /* @Id */
    @Index String oldId;
    @Id Long newId;
    /* other indexed fields I use for loading the entity */
    private List<Ref<ReferencedEntity>> collections;
}

public class ReferencedEntity {
    private List<Ref<User>> owners;
}
Since I load the user via different fields, I can check whether the user has a null newId and, if so, null the old one and save it back so the auto-generator will set a new id in the newId field.
The problem now is my n-to-m relationship to other entities. How should I migrate those? I have a Ref on both sides, so I guess I can just load the refs on the user entity side and replace the other side with the new id.
The general question is: how do you migrate an n-to-m relationship if one side needs a new id?
You are probably thinking with a relational-database strategy. When you update a value on one side, the other side will not be updated, so you have to update both entities. Since this is NoSQL, you have to think differently.
I would take this strategy.
First load the "old" user entity and save it again in the new structure. Once you have confirmed that all the data you loaded has been converted to the new structure (I suggest using BigQuery to verify), spawn a task for each referenced entity, using the indexed oldId, and update the owners reference in the ReferencedEntity.
It will take a while to load but it is probably a safe way to do it.
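A hedged sketch of that second step, using Objectify's Ref/load/save APIs (the accessor names and the lookup-by-oldId helper are assumptions, not your actual code):
// For one ReferencedEntity: replace owner Refs that still point at the old
// String-keyed user with a Ref to the migrated, Long-keyed entity.
// findMigratedUserByOldId() is a hypothetical helper that queries UserEntity
// on the indexed oldId field.
void migrateOwners(ReferencedEntity entity) {
    List<Ref<UserEntity>> migratedOwners = new ArrayList<>();
    for (Ref<UserEntity> ownerRef : entity.getOwners()) {
        UserEntity owner = ownerRef.get();
        UserEntity target = (owner != null && owner.getNewId() != null)
                ? owner
                : findMigratedUserByOldId(ownerRef.getKey().getName());
        migratedOwners.add(Ref.create(target));
    }
    entity.setOwners(migratedOwners);
    ofy().save().entity(entity).now();
}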

How to refresh entity after "manual" backend query update [duplicate]

Let's suppose we have this situation:
We have Spring Data configured in the standard way: there is a Repository object, an Entity object, and all works well.
Now, for some complex reasons, I have to use EntityManager (or JdbcTemplate, or whatever sits at a lower level than Spring Data) directly to update the table associated with my Entity, with a native SQL query. So I'm not using the Entity object, but simply doing a manual database update on the table I use as the entity (more precisely, the table from which I get values; see the next lines).
The reason is that I had to bind my Spring Data Entity to a MySQL view that makes a UNION of multiple tables, not directly to the table I need to update.
What happens is:
In a functional test, I call the "manual" update method (on the table from which the MySQL view is created) as previously described (through the entity manager), and if I then make a simple Repository.findOne(objectId), I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
Is there a way to "synchronize" objects out of the box (or force some refresh) in Spring Data? Or am I asking for a miracle?
I'm not being ironic; maybe I'm just not expert enough and it's (probably) my ignorance. If so, please explain why and (if you want) share some advanced knowledge about this amazing framework.
If I make a simple Repository.findOne(objectId) I get the old object (not the updated one). I have to call EntityManager.refresh(object) to get the updated object.
Why?
The first-level cache is active for the duration of a session. Any entity previously retrieved in the context of a session will be served from the first-level cache unless there is a reason to go back to the database.
Is there a reason to go back to the database after your SQL update? Well, as the book Pro JPA 2 notes (p199) regarding bulk update statements (either via JPQL or SQL):
The first issue for developers to consider when using these [bulk update] statements is that the persistence context is not updated to reflect the results of the operation. Bulk operations are issued as SQL against the database, bypassing the in-memory structures of the persistence context.
which is what you are seeing. That is why you need to call refresh to force the entity to be reloaded from the database as the persistence context is not aware of any potential modifications.
The book also notes the following about using Native SQL statements (rather than JPQL bulk update):
CAUTION: Native SQL update and delete operations should not be executed on tables mapped by an entity. The JPQL operations tell the provider what cached entity state must be invalidated in order to remain consistent with the database. Native SQL operations bypass such checks and can quickly lead to situations where the in-memory cache is out of date with respect to the database.
Essentially then, if you have a second-level cache configured, updating any entity currently in the cache via a native SQL statement is likely to result in stale data in the cache.
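A minimal sketch of that sequence (table, entity, and repository names are placeholders, assuming a plain JPA EntityManager alongside the Spring Data repository):
// The native UPDATE bypasses the persistence context, so findOne() may still
// return the stale managed instance; refresh() re-reads it from the database.
entityManager.createNativeQuery(
        "UPDATE my_backing_table SET status = ?1 WHERE id = ?2")
    .setParameter(1, "DONE")
    .setParameter(2, objectId)
    .executeUpdate();

MyEntity entity = myRepository.findOne(objectId); // may still reflect the cached state
entityManager.refresh(entity);                    // now reflects the manual update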
In Spring Boot's JpaRepository:
If our modifying query changes entities contained in the persistence context, that context becomes outdated.
To fetch the entities from the database with the latest state, use @Modifying(clearAutomatically = true).
The @Modifying annotation has a clearAutomatically attribute which defines whether the underlying persistence context should be cleared after executing the modifying query.
Example:
@Modifying(clearAutomatically = true)
@Query("UPDATE NetworkEntity n SET n.network_status = :network_status WHERE n.network_id = :network_id")
int expireNetwork(@Param("network_id") Integer network_id, @Param("network_status") String network_status);
Based on the way you described your usage, fetching from the repository should retrieve the updated object without needing to refresh it, as long as the method which used the entity manager to merge is annotated with @Transactional.
Here's a sample test:
@DirtiesContext(classMode = ClassMode.AFTER_CLASS)
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ApplicationConfig.class)
@EnableJpaRepositories(basePackages = "com.foo")
public class SampleSegmentTest {

    @Resource
    SampleJpaRepository segmentJpaRepository;

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    @Test
    public void test() {
        Segment segment = new Segment();
        ReflectionTestUtils.setField(segment, "value", "foo");
        ReflectionTestUtils.setField(segment, "description", "bar");
        segmentJpaRepository.save(segment);

        assertNotNull(segment.getId());
        assertEquals("foo", segment.getValue());
        assertEquals("bar", segment.getDescription());

        ReflectionTestUtils.setField(segment, "value", "foo2");
        entityManager.merge(segment);

        Segment updatedSegment = segmentJpaRepository.findOne(segment.getId());
        assertEquals("foo2", updatedSegment.getValue());
    }
}

redundant id values inserted despite using @Inheritance

In a Spring MVC app using Hibernate, JPA, and MySQL, I have a BaseEntity that contains an id field that is unique across all classes that inherit from BaseEntity, using @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS). Some data is imported into the MySQL database using an external dml.sql file run from the command line. The imported data is carefully planned so that all the ids that need to be managed as part of the BaseEntity inheritance group are unique within their inheritance group.
The problem is that Hibernate is not taking the values of the ids already in the database into account when it inserts a new record. Instead, Hibernate is saving an id value in one of the descendant entities which is identical to an id stored in one of the other descendant entities.
How can I configure hibernate to respect the id values already in the database when it saves a new entity within the same inheritance group?
Some relevant facts are:
All of the objects in the MySQL database were created directly from the hibernate mappings in the app by using hbm2ddl.
I cannot use @MappedSuperclass for BaseEntity because BaseEntity is used as a property of one of the entities in the app, so that entities of various types can be stored in the same property of that entity. When I was using @MappedSuperclass, Eclipse was giving compile errors saying that BaseEntity cannot be instantiated directly because it has the @MappedSuperclass annotation.
Note: The file sharing site seems to be center-justifying all the code. You can fix this by simply cutting and pasting it into a text editor.
You can read the code for BaseEntity by clicking on this link.
The code for the entity whose id values are being set incorrectly by hibernate can be read by clicking on this link.
The JPA code for saving the entity whose id is being set incorrectly is as follows:
@Override
@Transactional
public void saveCCD(HL7ConsolidatedCareDocument ccd) {
    if (ccd.getId() == null) {
        this.em.persist(ccd);
        this.em.flush();
    }
    else {
        this.em.merge(ccd);
        this.em.flush();
    }
}
I have never done this using Hibernate or MySQL, but I have done something similar with EclipseLink + PostgreSQL, so there might be some mistakes below.
With generation type TABLE you might want to explicitly specify some additional parameters using the TableGenerator annotation. That way you are certain where Hibernate is storing things.
@Id
@GeneratedValue(
    strategy = GenerationType.TABLE,
    generator = "TBL_GEN")
@javax.persistence.TableGenerator(
    name = "TBL_GEN",
    table = "GENERATOR_TABLE",
    pkColumnName = "mykey",
    valueColumnName = "hi",
    pkColumnValue = "BaseEntity_Id",
    allocationSize = 20
)
What you need to do when you bypass Hibernate is to reserve the ids you need by updating the row with mykey 'BaseEntity_Id' in the GENERATOR_TABLE table, as sketched below.
For details on the annotations see paragraph 5.1.2.2
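As a hedged sketch (using the table/column names from the generator definition above and a hypothetical highestImportedId), reserving the imported range could be a one-off native update:
// Push the generator's stored value above the highest id used by the dml.sql
// import, so Hibernate never hands out an id the import already used.
em.createNativeQuery(
        "UPDATE GENERATOR_TABLE SET hi = ?1 WHERE mykey = 'BaseEntity_Id'")
  .setParameter(1, highestImportedId + 20)   // margin of one allocation block
  .executeUpdate();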

How do you update a foreign key value directly via Hibernate?

I have a couple of objects that are mapped to tables in a database using Hibernate, BatchTransaction and Transaction. BatchTransaction's table (batch_transactions) has a foreign key reference to transactions, named transaction_id.
In the past I have used a batch runner that used internal calls to run the batch transactions and complete the reference from BatchTransaction to Transaction once the transaction is complete. After a Transaction has been inserted, I just call batchTransaction.setTransaction(txn), so I have a @ManyToOne mapping from BatchTransaction to Transaction.
I am changing the batch runner so that it executes its transactions through a Web service. The ID of the newly inserted Transaction will be returned by the service and I'll want to update transaction_id in BatchTransaction directly (rather than using the setter for the Transaction field on BatchTransaction, which would require me to load the newly inserted item unnecessarily).
It seems like the most logical way to do it is to use SQL rather than Hibernate, but I was wondering if there's a more elegant approach. Any ideas?
Here's the basic mapping.
BatchQuery.java
@Entity
@Table(name = "batch_queries")
public class BatchQuery
{
    @ManyToOne
    @JoinColumn(name = "query_id")
    public Query getQuery()
    {
        return mQuery;
    }
}
Query.java
@Entity
@Table(name = "queries")
public class Query
{
}
The idea is to update the query_id column in batch_queries without setting the "query" property on a BatchQuery object.
Using a direct SQL update, or an HQL update, is certainly feasible.
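(A hedged sketch of what such an update could look like in JPQL, assuming BatchQuery exposes an id and the web service returned the new Query's id:)
// getReference() returns a proxy without issuing a SELECT, so the referenced
// Query is never actually loaded; only the FK column is updated.
em.createQuery("UPDATE BatchQuery b SET b.query = :query WHERE b.id = :batchId")
  .setParameter("query", em.getReference(Query.class, returnedQueryId))
  .setParameter("batchId", batchQueryId)
  .executeUpdate();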
Not seeing the full problem, it looks to me like you might be making a modification to your domain model that's worth reflecting in your mappings. You may be moving toward a BatchTransaction that has just the TransactionId as a member, rather than the full Transaction.
If, in other activities, the BatchTransaction will still need to hydrate that Transaction, I'd consider adding a separate mapping for the TransactionId and having that be the managing mapping (make the Transaction association insert=false and update=false).
If BatchTransaction will no longer be concerned with the full Transaction, just remove that association after adding the TransactionId field.
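A hedged sketch of that dual mapping on the BatchQuery/Query example from the question (the scalar field name and type are illustrative):
@Entity
@Table(name = "batch_queries")
public class BatchQuery
{
    // Read-only association, kept for callers that still need the full Query;
    // Hibernate will neither insert nor update through it.
    @ManyToOne
    @JoinColumn(name = "query_id", insertable = false, updatable = false)
    private Query query;

    // Writable mapping of the same column: set this to the id returned by the
    // web service to update the FK without loading the Query.
    @Column(name = "query_id")
    private Long queryId;
}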
As you have written, we can use SQL to achieve a solution for the above problem, but I would suggest not updating the keys via SQL.
Since you are changing the key, which effectively means you are creating an altogether new object, you can first delete the existing object with the previous key and then insert a new object with the updated key (in your case, transaction_id).
