I am using Spring Data JPA to store and read entities across different applications. These are the steps that are executed:

1. Application 1 creates an entity via a JPA repository and stores it into the DB with the saveAndFlush() method.
2. It then sends an event to a queue saying the entity has been created.
3. This event is read by another application (say, Application 2), which then tries to read the entity and process it.
Following is the example method used to store the object:
@Transactional
public Entity createEntity(final Entity entity) {
    return entityRepository.saveAndFlush(entity);
}
As per the documentation, the @Transactional annotation should ensure the object is persisted once method execution has finished. However, when Application 2 receives the event and tries to look up the entity (by id), it is not found. I am using MariaDB and Spring Data JPA 1.9.4.
Do we need to do anything else to force a hard commit after the saveAndFlush call?
In the above code I have used Hibernate with MySQL, and the Hibernate session is managed by SpringSessionContext (that's why I am using sessionFactory.getCurrentSession() inside the transactional boundary).

The DAO-layer code in the image below is a straightforward use case, but the exception does not roll the transaction back. I call this method from a simple service layer (i.e. the service layer calls the DAO layer for CRUD operations).

I learned about Spring's proxy mechanism for transaction management: the class in the image below implements a DAO interface, so Spring creates a proxy bean using a JDK dynamic proxy, and the method is called from the service layer (a non-transactional class). The expectation was that the data should not be persisted and the exception should trigger a rollback, but the data was persisted in the DB.
Hibernate persists dirty objects only after the whole transaction has completed. You should examine the flow from the first method called to the last. The persisting operation is not executed against the database when save() is called: the entity is stored in a buffer (the persistence context), and the SQL is issued once the transaction completes. Are there any transactions or try-catch blocks in your flow?
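To make that buffering concrete, here is a minimal sketch (the service, repository, and entity names are illustrative, not from the question):

```java
@Service
public class OrderService {

    private final OrderRepository orderRepository; // hypothetical Spring Data repository

    public OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    @Transactional
    public void createOrder(Order order) {
        orderRepository.save(order);
        // No INSERT has necessarily reached the database yet: the entity
        // sits in the persistence context (Hibernate's "buffer map").
        // The SQL is flushed when the transaction commits at the end of
        // this method -- or earlier if you call saveAndFlush()/flush().
    }
}
```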
Hello, my problem is that I can't avoid the cache. I'm using Spring Data JPA with Spring Boot 1.5.4.
What am I doing:
I have a case where a client requests my REST endpoint with some data, based on which I create an entity and save it into the database. Next I request another REST endpoint, which immediately responds OK, but the original request I received isn't finished yet. I then wait for another service to call a further REST endpoint of mine (the first client is on the wire the whole time). That endpoint modifies the entity created by the first request, and here I hit the problem.
So basically, the first request creates the entity and saves it using saveAndFlush. While the first request is waiting, another thread modifies this entity using Spring Data JPA:
@Modifying(clearAutomatically = true)
@Query("UPDATE QUERY ")
@Transactional
int updateMethod();
But after that (when the first request is released from waiting), calling findOne in the first thread gives me the old entity. I have also tried overriding the method:
@Transactional
default MyEntity findOneWithoutCache(Long id) {
    return findOne(id);
}
But this does not work either. I also added:
@Cacheable(false)
public class MyEntity {
And that does not work either.
The only way that works is selecting the entity with @Query like this:
@Query("SELECT STATEMENT "
    + "WHERE p.id = ?1")
MyEntity findEntityById(Long id);
Could you explain how to solve this problem?
The question is: what transaction isolation level do you have? What database, settings, and driver?
Theoretically, in a perfect ACID transaction, after starting a transaction you cannot see changes made in other transactions (see Repeatable Read).
On the other hand, you typically do not have the full I in ACID, and isolation is weaker (like Read Committed).
If the query works, it suggests you do not have Repeatable Read, so maybe you should simply get the EntityManager (via JpaContext) and try to clear() the session (in the first thread)?
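A minimal sketch of that suggestion (assuming a Spring Data JPA setup; the entity and repository names are placeholders):

```java
@Service
public class RefreshingReader {

    private final JpaContext jpaContext;
    private final MyEntityRepository repository; // hypothetical repository

    public RefreshingReader(JpaContext jpaContext, MyEntityRepository repository) {
        this.jpaContext = jpaContext;
        this.repository = repository;
    }

    public MyEntity findFresh(Long id) {
        // Look up the EntityManager that manages MyEntity
        EntityManager em = jpaContext.getEntityManagerByManagedType(MyEntity.class);
        // Detach everything so the next read goes to the database
        // instead of the first-level cache
        em.clear();
        return repository.findOne(id);
    }
}
```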
Summary (details below):
I'd like to make a stored proc call before any entities are saved/updated/deleted using a Spring/JPA stack.
Boring details:
We have an Oracle/JPA(Hibernate)/Spring MVC (with Spring Data repos) application that is set up to use triggers to record history of some tables into a set of history tables (one history table per table we want audited). Each of these entities has a modifiedByUser being set via a class that extends EmptyInterceptor on update or insert. When the trigger archives any insert or update, it can easily see who made the change using this column (we're interested in which application user, not database user). The problem is that for deletes, we won't get the last modified information from the SQL that is executed because it's just a plain delete from x where y.
To solve this, we'd like to execute a stored procedure to tell the database which app user is logged in before executing any operation. The audit trigger would then look at this value when a delete happens and use it to record who executed the delete.
Is there any way to intercept the begin transaction or some other way to execute SQL or a stored procedure to tell the db what user is executing the inserts/updates/deletes that are about to happen in the transaction before the rest of the operations happen?
I'm light on details about how the database side will work but can get more if necessary. The gist is that the stored proc will create a context that will hold session variables and the trigger will query that context on delete to get the user ID.
From the database end, there is some discussion on this here:
https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvprxy.htm#i1010372
Many applications use session pooling to set up a number of sessions
to be reused by multiple application users. Users authenticate
themselves to a middle-tier application, which uses a single identity
to log in to the database and maintains all the user connections. In
this model, application users are users who are authenticated to the
middle tier of an application, but who are not known to the
database.....in these situations, the application typically connects
as a single database user and all actions are taken as that user.
Because all user sessions are created as the same user, this security
model makes it very difficult to achieve data separation for each
user. These applications can use the CLIENT_IDENTIFIER attribute to
preserve the real application user identity through to the database.
From the Spring/JPA side of things see section 8.2 at the below:
http://docs.spring.io/spring-data/jdbc/docs/current/reference/html/orcl.connection.html
There are times when you want to prepare the database connection in
certain ways that aren't easily supported using standard connection
properties. One example would be to set certain session properties in
the SYS_CONTEXT like MODULE or CLIENT_IDENTIFIER. This chapter
explains how to use a ConnectionPreparer to accomplish this. The
example will set the CLIENT_IDENTIFIER.
The example given in the Spring docs uses XML config. If you are using Java config, it looks like this:
@Component
@Aspect
public class ClientIdentifierConnectionPreparer implements ConnectionPreparer
{
    @AfterReturning(pointcut = "execution(* *.getConnection(..))", returning = "connection")
    public Connection prepare(Connection connection) throws SQLException
    {
        String webAppUser = ...; // from Spring Security Context or wherever
        CallableStatement cs = connection.prepareCall(
            "{ call DBMS_SESSION.SET_IDENTIFIER(?) }");
        cs.setString(1, webAppUser);
        cs.execute();
        cs.close();
        return connection;
    }
}
Enable AspectJ via a Configuration class:
@Configuration
@EnableAspectJAutoProxy
public class SomeConfigurationClass
{
}
Note that while this is hidden away in a section specific to Spring's Oracle extensions, there is nothing in section 8.2 (unlike 8.1) that is Oracle-specific other than the statement executed, so the general approach should be feasible with any database simply by specifying the relevant procedure call or SQL.
Postgres, for example, has SET ROLE, so I don't see why anyone using Postgres couldn't take this approach with the below:
https://www.postgresql.org/docs/8.4/static/sql-set-role.html
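A sketch of what that might look like for Postgres, assuming the same ConnectionPreparer hook as above (the helper that resolves the user is hypothetical):

```java
@Component
@Aspect
public class SetRoleConnectionPreparer implements ConnectionPreparer {

    @AfterReturning(pointcut = "execution(* *.getConnection(..))", returning = "connection")
    public Connection prepare(Connection connection) throws SQLException {
        // Hypothetical helper: look up the app user, e.g. from Spring Security.
        // The value must be a vetted role name -- never raw user input,
        // since it is concatenated into the SQL statement.
        String webAppUser = resolveCurrentUser();
        try (Statement st = connection.createStatement()) {
            // SET ROLE switches the current role for this session; a trigger
            // can then read it back via current_user
            st.execute("SET ROLE " + webAppUser);
        }
        return connection;
    }
}
```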
Unless your stored procedure does more than what you described, the cleaner solution is to use Envers (entity versioning). Hibernate can automatically store the versions of an entity in a separate table and keep track of all the CRUD operations for you, and you don't have to worry about failed transactions since this all happens within the same session.
As for keeping track of who made the change, add a new column (updatedBy) and get the login ID of the user from the Security Principal (e.g. Spring Security's User).
Also check out @CreationTimestamp and @UpdateTimestamp.
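A minimal sketch of what such an entity might look like (assuming hibernate-envers is on the classpath; the field names are illustrative):

```java
@Entity
@Audited // Envers records every change to this entity in an _AUD table
public class MyEntity {

    @Id
    @GeneratedValue
    private Long id;

    // Populated by application code from the Spring Security principal
    private String updatedBy;

    @CreationTimestamp // set by Hibernate on insert
    private Instant createdAt;

    @UpdateTimestamp // set by Hibernate on every update
    private Instant updatedAt;
}
```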
I think what you are looking for is a TransactionalEvent:
@Service
public class TransactionalListenerService {

    @Autowired
    SessionFactory sessionFactory;

    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
    public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
        // use sessionFactory to run a stored procedure
    }
}
Registering a regular event listener is done via the @EventListener
annotation. If you need to bind it to the transaction use
@TransactionalEventListener. When you do so, the listener will be
bound to the commit phase of the transaction by default.
Then in your transactional services you register the event where necessary:
@Service
public class MyTransactionalService {

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @Transactional
    public void insertEntityMethod(Entity entity) {
        // insert
        // Publish event after insert operation
        applicationEventPublisher.publishEvent(new CreationEvent(this, entity));
        // more processing
    }
}
This can also work outside the boundaries of a transaction if you have that requirement:
If no transaction is running, the listener is not invoked at all since
we can’t honor the required semantics. It is however possible to
override that behaviour by setting the fallbackExecution attribute of
the annotation to true.
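A sketch of that override, applied to the same listener as above:

```java
// fallbackExecution = true runs the listener even when no transaction
// is active; otherwise it is still bound to the given phase
@TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT, fallbackExecution = true)
public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
    // use sessionFactory to run a stored procedure
}
```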
I'm reading about Hibernate and using Spring Data. In one chapter I've read about the difference between get() and load(): supposedly, in Hibernate, load() returns a proxy placeholder and only hits the database when you access an attribute of the entity.
In my application I often just need to return an entity to add as a dependency of another entity, and for that specific case a proxy would be more than enough. But in a Spring Data repository I cannot find a get() or load() method, so I guess they don't implement the same feature as Hibernate. Does Spring Data offer Hibernate's proxy-placeholder feature?
Regards.
The JpaRepository interface has two methods for this: getOne(id), which is the alternative to Hibernate's load(), and findById(id), which is the alternative to Hibernate's get().
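A short sketch of the difference (assuming standard JpaRepository instances; the entity and repository names are illustrative):

```java
// getOne() returns a lazy proxy immediately -- no SQL is issued yet.
// Useful when you only need a reference to set up an association.
Author authorRef = authorRepository.getOne(authorId);
book.setAuthor(authorRef);
bookRepository.save(book); // only the statement for book is executed

// findById() hits the database right away and returns an empty
// Optional if the row does not exist.
Author loaded = authorRepository.findById(authorId)
        .orElseThrow(IllegalArgumentException::new);
```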
I'm using GlassFish v2ur1 (it's the one our company uses and I cannot upgrade it at this point). I have an EJB (ejbA) which gets called periodically from a timer. In the call, I'm reading a file, creating an entity bean for each line, and persisting the entity bean to a db (PostgreSQL v9.2). After calling entitymanager.persist(entityBean), an HTTP call is made to a servlet, passing the entityBean's ID, which in turn calls into another EJB (ejbB). ejbB sends a JMS message to another EJB, ejbC. This is a production system and I must make the HTTP call; it processes the data further. ejbC is in the same enterprise application as ejbA, but uses a different EntityManager. ejbC receives the id, reads the record from the db, modifies the record, and persists it.
The problem I'm having is that the entity bean's data isn't stored into the db until the transaction from the timer call completes (see below) (I understand that's the way EJBs work). When ejbC is called, it fails to find the record in the db with the id it receives. I've tried a couple of approaches to get the data stored into the db so ejbC can find it:
1) I tried setting the flush mode to COMMIT when persisting the entityBean in ejbA:
- em.setFlushMode(FlushModeType.COMMIT)
- instantiate entity bean
- em.persist(entityBean)
- em.flush()
However, the results are the same, by the time ejbC is called, no record is in the db.
2) I created ejbD and added a storeRecord method (which persists entityBean) in it with TransactionAttributeType.REQUIRES_NEW. This is supposed to suspend ejbA's transaction, start ejbD's transaction, commit it, and resume ejbA's transaction. Again, the results here are the same, by the time ejbC is called, no record is in the db. I'm also seeing a problem with this solution where the ejbA call just stops when I call the storeRecord method. No exceptions are thrown, but I don't see the EJB processing any more lines from the file even though there are more lines. It seems to abort the EJB call and roll back the transaction with no indications. Not sure if this is a GlassFish v2ur1 bug or not.
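For reference, a minimal sketch of approach 2 in EJB 3 style (the class and method names are illustrative):

```java
@Stateless
public class EjbD {

    @PersistenceContext
    private EntityManager em;

    // Suspends the caller's transaction, runs in its own transaction,
    // and commits (or rolls back) independently when this method returns.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void storeRecord(EntityBean entityBean) {
        em.persist(entityBean);
    }
}
```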
How can I ensure that the data is stored into the db in ejbA so when ejbC is called, it can find the record in the db? BTW, there are other things going on in ejbA which I don't necessarily want to commit. I'd like to only persist the entityBeans I'm trying to store into the db.
ejbA
ejbTimer called (txn starts)
read file contents
for each line
create entity bean
persist entity bean to db
make HTTP call to ejbB, passing id
<see ejbC>
return (txn ends)
ejbB
Processes data based on id
Looks up JMS queue for ejbC
Passes ejbC the id
ejbC
ejb method called (txn starts)
read record based on received id
modify record and persist
return (txn ends)
When using a transaction isolation level of Read Committed, no other transaction can see changes made by an uncommitted transaction. You can specify a lower isolation level, but this has no effect on PostgreSQL: its weakest level is Read Committed, so you simply can't do it with PostgreSQL. And nor should you:
ejbA should not call ejbB via HTTP. Servlets should only be used to service remote client requests, not to provide internal services. ejbA should connect and invoke ejbB directly. If the method in ejbB is annotated TransactionAttributeType.MANDATORY or TransactionAttributeType.REQUIRED, ejbB will see the entity created by ejbA because it is under the same transaction.
In ejbB, persisting is unnecessary: simply load the entity with an EntityManager and make the change.
If you are completely at the mercy of this HTTP mechanism, you could use bean-managed transactions, but this is a terrible way of doing things:
read file contents
for each line
start transaction
create entity bean
persist entity bean to db
commit transaction
make HTTP call
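With bean-managed transactions, that loop might look roughly like this (a sketch only; the parser and HTTP helper are hypothetical):

```java
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class EjbA {

    @Resource
    private UserTransaction utx;

    @PersistenceContext
    private EntityManager em;

    public void processFile(List<String> lines) throws Exception {
        for (String line : lines) {
            utx.begin();
            EntityBean entityBean = parse(line); // hypothetical parser
            em.persist(entityBean);
            utx.commit(); // the row is now visible to other transactions
            makeHttpCall(entityBean.getId()); // hypothetical HTTP call to the servlet
        }
    }
}
```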
There are two things I did to solve this. 1) I added remote methods to ejbB that perform the same functionality as the HTTP call. This way the call to ejbB is within the same transaction.
The root of the problem, which Glenn Lane pointed out, is that the transaction continues in the call from ejbA to ejbB, but it ends when ejbB sends the JMS message to ejbC...the transaction doesn't extend to the call to ejbC. This means when the id gets to ejbC, it's in a new transaction, one which cannot see the data persisted to the db by ejbA.
2) I stored the entity bean to the db in ejbA in a special state. The entity bean is committed when the timer call to ejbA returns (and therefore the txn commits). When ejbA is called again by the timer, it looks for records in the db in this special state. It then calls ejbB. ejbB sends a JMS message. When ejbC gets the id, it finds the record in the db (as it was committed in a previous txn), changes its state, and continues processing.