Hello, my problem is that I can't avoid the cache. I'm using Spring Data JPA with Spring Boot 1.5.4.
What am I doing:
I have a case where a client calls my REST endpoint with some data, based on which I create an entity and save it to the database. I then call another REST endpoint, which immediately responds OK, but the request I received isn't finished yet. Next I wait for another service to call a second REST endpoint of mine (the first client stays on the wire the whole time). This second endpoint modifies the entity created by the first request, and this is where the problem occurs.
So basically, the first request creates the entity and saves it using the "saveAndFlush" method. While the first request is waiting, another thread modifies this entity using Spring Data JPA:
@Modifying(clearAutomatically = true)
@Query("UPDATE QUERY ")
@Transactional
int updateMethod();
But after that (when the first request resumes), calling the findOne method in the first thread returns the old entity. I have also tried overriding the method:
@Transactional
default MyEntity findOneWithoutCache(Long id) {
    return findOne(id);
}
But this is not working either. I also added
@Cacheable(false)
public class MyEntity {
and that is not working either.
There is only one way that works: selecting the entity with @Query like this:
@Query("SELECT STATEMENT "
        + "WHERE p.id = ?1")
MyEntity findEntityById(Long id);
Could you explain how to solve this problem?
The question is: what transaction isolation level do you have? What database, settings, and driver?
Theoretically, in a perfect ACID transaction, after starting a transaction you cannot see changes made in other transactions (see Repeatable Read).
On the other hand, you typically do not have the I in ACID, and isolation is weaker (like Read Committed).
If the query works, it suggests you do not have Repeatable Read, so maybe you should simply get the EntityManager (via JpaContext) and try to clear() the session in the first thread?
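A minimal sketch of that suggestion (the service, repository, and entity names here are hypothetical; it assumes Spring Data JPA's JpaContext is available as a bean):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.jpa.repository.JpaContext;
import org.springframework.stereotype.Service;

@Service
public class EntityRefreshService {

    @Autowired
    private JpaContext jpaContext;

    @Autowired
    private MyEntityRepository repository;

    public MyEntity findFresh(Long id) {
        // Detach all managed entities so the next find hits the database
        // instead of returning the stale first-level-cache copy.
        jpaContext.getEntityManagerByManagedType(MyEntity.class).clear();
        return repository.findOne(id);
    }
}
```

Note that clear() detaches everything in the persistence context, so any unflushed changes in that session are lost; calling refresh(entity) on a single instance is a narrower alternative.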
Related
There is a scenario I have encountered where I'm returning the API response (request thread) and delegating the task to a background thread.
In the background thread, I'm calling Hibernate's T getOne(ID id); to fetch some information, which results in:
org.hibernate.LazyInitializationException: could not initialize proxy - no Session in Thread class
But when performing DB operations with JPA queries @Query("some query"), native queries @Query(value = "some query", nativeQuery = true), and JdbcTemplate, it works fine in the background thread.
Can someone help me understand this behaviour?
FYI, I'm using Spring Boot 1.4.2 and Hibernate 5.0.11.
T getOne(ID id) relies on EntityManager.getReference(), which performs lazy loading of the entity. To trigger the actual load, you must invoke a method on the returned proxy.
Basically, your thread is unable to find an active session context. Hibernate throws a LazyInitializationException when it needs to initialize a lazily fetched association to another entity without an active session context.
You could use FetchType.EAGER on the associations of the object you are trying to fetch, but that has its own repercussions, such as unwanted query execution every time you load the object.
The best solution is to use Optional<T> findById(ID id): you can check whether the entity exists with obj.isPresent() and continue.
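A sketch of that approach (repository and entity names are hypothetical; findById is the Spring Data 2.x API, while on the 1.x line used in the question the eager-loading equivalent is findOne):

```java
import java.util.Optional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class BackgroundProcessor {

    @Autowired
    private EmployeeRepository employeeRepository; // hypothetical repository

    @Async
    public void processInBackground(Long id) {
        // findById issues the SELECT immediately, so the returned object is a
        // fully loaded instance, not a lazy proxy that needs an open session.
        Optional<Employee> maybeEmployee = employeeRepository.findById(id);
        maybeEmployee.ifPresent(employee -> {
            // work with the fully loaded entity here
        });
    }
}
```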
I need to send a request to another microservice once an object has been created in the database. I only send the object id, so the other microservice needs to query the DB again for the info, along with a bunch of other stuff.
But when the other microservice tries to look up the record using the received id, it cannot find the saved record in the database.
I tried debugging; it seems the record is not persisted even though @PostPersist is called. It is only saved after @PostPersist has executed.
Could anyone suggest a workaround for this? I really need to query the database again, as this is a custom requirement. I use MySQL and Spring Boot.
public class EmployeeListener {

    @PostPersist
    public void sendData(Employee employee) {
        Long id = employee.getEmployeeId();
        RestTemplate restTemplate = new RestTemplate();
        restTemplate.exchange("http://localhost:8081/service/employee" + id,
                HttpMethod.POST, null, String.class);
    }
}
@Entity
@EntityListeners(EmployeeListener.class)
public class Employee {
    // ...
}
The problem is that JPA lifecycle events happen in the same transaction as your save operation, but the lookup, since it happens from a different server, can only succeed after your transaction has committed.
I therefore recommend the following setup: gather the ids that need informing in a collection, and send the data once the transaction has completed.
If you want the send operation and the save operation in one method, a TransactionTemplate might be nicer to use than annotation-based transaction management.
You might also consider Domain Events. Note that they are only triggered when save is actually called. The benefit of these events is that they are published via an ApplicationEventPublisher, whose listeners are Spring beans, so you may inject whatever beans you find helpful. They still need a way to break out of the transaction, as described above.
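A sketch of the TransactionTemplate approach (class and repository names are hypothetical): the save runs inside an explicit transaction, and the REST call is made only after execute(...) returns, i.e. after the commit, so the record is visible to the other service.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;
import org.springframework.http.HttpMethod;
import org.springframework.web.client.RestTemplate;

@Service
public class EmployeeService {

    private final TransactionTemplate transactionTemplate;
    private final EmployeeRepository employeeRepository;
    private final RestTemplate restTemplate = new RestTemplate();

    public EmployeeService(PlatformTransactionManager txManager,
                           EmployeeRepository employeeRepository) {
        this.transactionTemplate = new TransactionTemplate(txManager);
        this.employeeRepository = employeeRepository;
    }

    public void createAndNotify(Employee employee) {
        // The commit happens when execute(...) returns.
        Long id = transactionTemplate.execute(status ->
                employeeRepository.save(employee).getEmployeeId());

        // At this point the row is committed and visible to other services.
        restTemplate.exchange("http://localhost:8081/service/employee" + id,
                HttpMethod.POST, null, String.class);
    }
}
```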
A @PostPersist-annotated method is called within the same transaction, and the default flush mode is AUTO; that's why you don't see the record in the database. You need to force a flush:
@Component
public class EmployeeListener {

    @PersistenceContext
    private EntityManager entityManager;

    @PostPersist
    public void sendData(Employee employee) {
        // Push the pending insert to the database
        entityManager.flush();
        Long id = employee.getEmployeeId();
        RestTemplate restTemplate = new RestTemplate();
        restTemplate.exchange("http://localhost:8081/service/employee" + id,
                HttpMethod.POST, null, String.class);
    }
}
Notice that EmployeeListener needs to be a Spring managed bean.
Summary (details below):
I'd like to make a stored proc call before any entities are saved/updated/deleted using a Spring/JPA stack.
Boring details:
We have an Oracle/JPA(Hibernate)/Spring MVC (with Spring Data repos) application that is set up to use triggers to record history of some tables into a set of history tables (one history table per table we want audited). Each of these entities has a modifiedByUser being set via a class that extends EmptyInterceptor on update or insert. When the trigger archives any insert or update, it can easily see who made the change using this column (we're interested in which application user, not database user). The problem is that for deletes, we won't get the last modified information from the SQL that is executed because it's just a plain delete from x where y.
To solve this, we'd like to execute a stored procedure to tell the database which app user is logged in before executing any operation. The audit trigger would then look at this value when a delete happens and use it to record who executed the delete.
Is there any way to intercept the begin transaction or some other way to execute SQL or a stored procedure to tell the db what user is executing the inserts/updates/deletes that are about to happen in the transaction before the rest of the operations happen?
I'm light on details about how the database side will work but can get more if necessary. The gist is that the stored proc will create a context that will hold session variables and the trigger will query that context on delete to get the user ID.
From the database end, there is some discussion on this here:
https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvprxy.htm#i1010372
Many applications use session pooling to set up a number of sessions
to be reused by multiple application users. Users authenticate
themselves to a middle-tier application, which uses a single identity
to log in to the database and maintains all the user connections. In
this model, application users are users who are authenticated to the
middle tier of an application, but who are not known to the
database.....in these situations, the application typically connects
as a single database user and all actions are taken as that user.
Because all user sessions are created as the same user, this security
model makes it very difficult to achieve data separation for each
user. These applications can use the CLIENT_IDENTIFIER attribute to
preserve the real application user identity through to the database.
From the Spring/JPA side of things see section 8.2 at the below:
http://docs.spring.io/spring-data/jdbc/docs/current/reference/html/orcl.connection.html
There are times when you want to prepare the database connection in
certain ways that aren't easily supported using standard connection
properties. One example would be to set certain session properties in
the SYS_CONTEXT like MODULE or CLIENT_IDENTIFIER. This chapter
explains how to use a ConnectionPreparer to accomplish this. The
example will set the CLIENT_IDENTIFIER.
The example given in the Spring docs uses XML config. If you are using Java config then it looks like:
@Component
@Aspect
public class ClientIdentifierConnectionPreparer implements ConnectionPreparer {

    @AfterReturning(pointcut = "execution(* *.getConnection(..))", returning = "connection")
    public Connection prepare(Connection connection) throws SQLException {
        String webAppUser = ...; // from the Spring Security context or wherever
        CallableStatement cs = connection.prepareCall(
                "{ call DBMS_SESSION.SET_IDENTIFIER(?) }");
        cs.setString(1, webAppUser);
        cs.execute();
        cs.close();
        return connection;
    }
}
Enable AspectJ via a Configuration class:
@Configuration
@EnableAspectJAutoProxy
public class SomeConfigurationClass {
}
Note that while this is hidden away in a section specific to Spring's Oracle extensions, it seems to me that nothing in section 8.2 (unlike 8.1) is Oracle-specific (other than the statement executed), and the general approach should be feasible with any database simply by specifying the relevant procedure call or SQL. Postgres, for example, has SET ROLE, so I don't see why anyone using Postgres couldn't take this approach with the following:
https://www.postgresql.org/docs/8.4/static/sql-set-role.html
Unless your stored procedure does more than what you described, the cleaner solution is to use Envers (entity versioning). Hibernate can automatically store the versions of an entity in a separate table and keep track of all the CRUD operations for you, and you don't have to worry about failed transactions since this all happens within the same session.
As for keeping track of who made the change, add a new column (updatedBy) and get the login ID of the user from the security principal (e.g. the Spring Security User).
Also check out @CreationTimestamp and @UpdateTimestamp.
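A minimal Envers sketch (assumes hibernate-envers is on the classpath; the entity name and the updatedBy wiring are illustrative, not prescribed):

```java
import java.time.LocalDateTime;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.annotations.UpdateTimestamp;
import org.hibernate.envers.Audited;

@Entity
@Audited                       // Envers records every insert/update/delete in an audit table
public class Document {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    private String updatedBy;  // e.g. filled in from the Spring Security principal

    @UpdateTimestamp
    private LocalDateTime updatedAt;

    // getters and setters omitted
}
```

Because the audit rows are written by Hibernate in the same session, deletes are captured too, which is the case the trigger-based setup struggled with.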
I think what you are looking for is a TransactionalEvent:
@Service
public class TransactionalListenerService {

    @Autowired
    SessionFactory sessionFactory;

    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
    public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
        // use sessionFactory to run a stored procedure
    }
}
Registering a regular event listener is done via the #EventListener
annotation. If you need to bind it to the transaction use
#TransactionalEventListener. When you do so, the listener will be
bound to the commit phase of the transaction by default.
Then in your transactional services you register the event where necessary:
@Service
public class MyTransactionalService {

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @Transactional
    public void insertEntityMethod(Entity entity) {
        // insert
        // Publish event after insert operation
        applicationEventPublisher.publishEvent(new CreationEvent(this, entity));
        // more processing
    }
}
This can also work outside the boundaries of a transaction if you have that requirement:
If no transaction is running, the listener is not invoked at all since
we can’t honor the required semantics. It is however possible to
override that behaviour by setting the fallbackExecution attribute of
the annotation to true.
I am using Spring Data JPA to store and read entities across different applications. These are the steps that are executed:
Application 1 creates an entity via a JPA repository and stores it in the DB with the saveAndFlush() method.
It then sends an event to a queue saying the entity has been created.
This event is read by another application (let's say Application 2), which then tries to read the entity and process it.
The following example method is used to store the object:
@Transactional
public Entity createEntity(final Entity entity) {
    return entityRepository.saveAndFlush(entity);
}
As per the documentation, the @Transactional annotation should ensure the object is persisted once method execution is finished. However, when Application 2 receives the event and tries to look up the entity (by id), it is not found. I am using MariaDB and Spring Data JPA 1.9.4.
Do we need to do anything else to force a hard commit after the saveAndFlush call?
I have a simple Spring 3.2 web application connected to a MySQL DB. My question is simple: I have a method in a DAO annotated with @Cacheable. Is there a way to log whether the method goes to the DB or its result is loaded from the cache? For example, I'd like to see the following log:
Object with id 'x' was retrieved from database at 23:44:30 / 2015...
Object with id 'x' was retrieved from cache at...
Thank you
Spring logs out cache hits and misses under the org.springframework.cache category at TRACE level.
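For example, in a Spring Boot application this can be switched on in application.properties (a plain Spring app would configure the equivalent logger in its logging framework instead; whether the cache interceptor emits these entries depends on your Spring version):

```properties
# Log cache hits and misses from Spring's cache abstraction
logging.level.org.springframework.cache=TRACE
```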
You can add a log message in your service layer when there's going to be a call to your persistence layer. Here's a method from some code I'm working on where I'm doing this; I only get a log entry when the cache isn't hit.

@Cacheable(value = CACHE_PICO)
@Transactional(readOnly = true)
@PostAuthorize(PICO_READ + OR + ALLOWED_FOR_ADMIN)
public Pico get(long id) {
    log.info("cache missed for pico {}", id);
    return _picoRepository.findOne(id);
}
You could use log4jdbc (docu) - it is a wrapper around your normal JDBC connection that logs every executed statement.