In an EJB3 container-managed bean, I want to allow an extended timeout for nightly jobs.
How can I change the TransactionTimeout setting for such use-cases?
Currently, the code looks like this:
@TransactionTimeout(300)
public Result getResult() {
    // code goes here
}
Simply annotate the EJB method that is executed within the transaction, as you did above. My only suggestion is to be more explicit about the units. In this example the timeout is one hour; any of the TimeUnit enumerated values can be used.
import org.jboss.ejb3.annotation.TransactionTimeout;
import java.util.concurrent.TimeUnit;
@TransactionTimeout(value = 1, unit = TimeUnit.HOURS)
public void doSomethingForALongTime() {
    // ...
}
I want to publish an event if and only if there were changes to the DB. I'm running under @Transactional in a Spring context and I came up with this check:
Session session = entityManager.unwrap(Session.class);
session.isDirty();
That seems to fail for new (Transient) objects:
@Transactional
public Entity save(Entity newEntity) {
    Entity entity = entityRepository.save(newEntity);
    Session session = entityManager.unwrap(Session.class);
    session.isDirty(); // <-- returns `false` ):
    return entity;
}
Based on the answer here https://stackoverflow.com/a/5268617/672689 I would expect it to work and return true.
What am I missing?
UPDATE
Considering @fladdimir's answer: although this function is called in a transaction context, I did add @Transactional (from org.springframework.transaction.annotation) on the function, but I still encounter the same behaviour; isDirty() returns false.
Moreover, as expected, the new entity does not show up in the DB while the program is held on a breakpoint at the session.isDirty() line.
UPDATE_2
I also tried changing the session flush modes before calling the repository save, also without any effect:
session.setFlushMode(FlushModeType.COMMIT);
session.setHibernateFlushMode(FlushMode.MANUAL);
First of all, Session.isDirty() has a different meaning than what I understood. It tells whether the current session is holding in-memory changes that still haven't been sent to the DB, while I thought it tells whether the transaction contains modifying statements. When saving a new entity, even in a transaction, the insert query must be sent to the DB in order to get the new entity's id, therefore isDirty() will always be false after it.
So I ended up creating a class that extends SessionImpl and holds the change status for the session, updating it on the persist and merge calls (the methods Hibernate uses).
This is the class I wrote:
import org.hibernate.HibernateException;
import org.hibernate.internal.SessionCreationOptions;
import org.hibernate.internal.SessionFactoryImpl;
import org.hibernate.internal.SessionImpl;
public class CustomSession extends SessionImpl {

    private boolean changed;

    public CustomSession(SessionFactoryImpl factory, SessionCreationOptions options) {
        super(factory, options);
        changed = false;
    }

    @Override
    public void persist(Object object) throws HibernateException {
        super.persist(object);
        changed = true;
    }

    @Override
    public void flush() throws HibernateException {
        changed = changed || isDirty();
        super.flush();
    }

    public boolean isChanged() {
        return changed || isDirty();
    }
}
In order to use it I had to:
extend SessionFactoryImpl.SessionBuilderImpl to override the openSession function and return my CustomSession
extend SessionFactoryImpl to override the withOptions function to return the extended SessionFactoryImpl.SessionBuilderImpl
extend AbstractDelegatingSessionFactoryBuilderImplementor to override the build function to return the extended SessionFactoryImpl
implement SessionFactoryBuilderFactory so that getSessionFactoryBuilder returns the extended AbstractDelegatingSessionFactoryBuilderImplementor
add an org.hibernate.boot.spi.SessionFactoryBuilderFactory file under META-INF/services containing the fully qualified class name of my SessionFactoryBuilderFactory implementation, so it is picked up by the service loader (a sketch of these last two steps follows below).
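For reference, this is roughly what those last two steps might look like; CustomSessionFactoryBuilder stands for the hypothetical subclass of AbstractDelegatingSessionFactoryBuilderImplementor described above, and the package name is made up:

import org.hibernate.boot.SessionFactoryBuilder;
import org.hibernate.boot.spi.MetadataImplementor;
import org.hibernate.boot.spi.SessionFactoryBuilderFactory;
import org.hibernate.boot.spi.SessionFactoryBuilderImplementor;

public class CustomSessionFactoryBuilderFactory implements SessionFactoryBuilderFactory {

    @Override
    public SessionFactoryBuilder getSessionFactoryBuilder(MetadataImplementor metadata,
                                                          SessionFactoryBuilderImplementor defaultBuilder) {
        // hand back the custom builder, which delegates everything to the default one
        // except build(), where it returns the extended SessionFactoryImpl
        return new CustomSessionFactoryBuilder(metadata, defaultBuilder);
    }
}

The file META-INF/services/org.hibernate.boot.spi.SessionFactoryBuilderFactory then contains a single line with the fully qualified name of that factory (e.g. com.example.CustomSessionFactoryBuilderFactory).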
UPDATE
There was a bug with capturing the "merge" calls (as tremendous7 commented), so I ended up capturing the isDirty state before any flush, and also checking it once more in isChanged().
The following is a different way you might be able to leverage to track dirtiness.
Though architecturally different from your sample code, it may be more to the point of your actual goal (I want to publish an event if and only if there were changes to the DB).
Maybe you could use an Interceptor listener to let the entity manager do the heavy lifting and just TELL you what's dirty. Then you only have to react to it, instead of prodding it to sort out what's dirty in the first place.
Take a look at this article: https://www.baeldung.com/hibernate-entity-lifecycle
It has a lot of test cases that check for dirtiness of objects being saved in various contexts. They rely on a piece of code called the DirtyDataInspector, which listens for any items that are flagged dirty on flush and simply remembers them (i.e. keeps them in a list), so the unit tests can assert that the things that SHOULD have been dirty were actually flushed as dirty.
The dirty data inspector code is on their github. Here's the direct link for ease of access.
Here is the code where the interceptor is applied to the factory so it can take effect. You might need to wire this up in your injection framework accordingly.
The Interceptor it is based on has a TON of lifecycle methods you can probably exploit to get the perfect behavior for "do this if there was actually a dirty save that occurred".
You can see the full docs of it here.
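To give a rough idea of the shape of this approach, here is a hedged sketch in the spirit of that DirtyDataInspector (class and method names here are assumptions, not the article's exact code): it records every entity Hibernate flushes as dirty or inserts, so your code can publish an event only if the list is non-empty.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class DirtyTrackingInterceptor extends EmptyInterceptor {

    private final List<Object> dirtyEntities = new ArrayList<>();

    @Override
    public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
                                Object[] previousState, String[] propertyNames, Type[] types) {
        dirtyEntities.add(entity); // an existing entity was modified and is being flushed
        return false;              // we did not alter the state ourselves
    }

    @Override
    public boolean onSave(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        dirtyEntities.add(entity); // a new entity is being inserted
        return false;
    }

    public List<Object> getDirtyEntities() {
        return dirtyEntities;
    }
}

Publishing your event then becomes a check for a non-empty list after the flush; the interceptor itself is registered on the SessionFactory, e.g. via the hibernate.session_factory.interceptor property.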
We do not know your complete setup, but as @Christian Beikov suggested in the comment, is it possible that the insertion was already flushed before you call isDirty()?
This would happen when you called repository.save(newEntity) without a running transaction, since the SimpleJpaRepository's save method is itself annotated with @Transactional:
@Transactional
@Override
public <S extends T> S save(S entity) {
    ...
}
This will wrap the call in a new transaction if none is already active, and flush the insertion to the DB at the end of the transaction just before the method returns.
You might choose to annotate the method where you call save and isDirty with #Transactional, so that the transaction is created when your method is called, and propagated to the repository call. This way the transaction would not be committed when the save returns, and the session would still be dirty.
(Edit, just for completeness: in case of using an identity ID generation strategy, the insertion of a newly created entity is flushed during the repository's save call to generate the ID, before the running transaction is committed.)
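To illustrate that last point, a hedged sketch with a hypothetical MyEntity: with IDENTITY generation the id comes from the database, so Hibernate has to issue the INSERT inside save() to obtain it, which is why the session no longer looks dirty afterwards; a sequence-based strategy would let the INSERT stay pending until flush/commit.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class MyEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // forces an immediate INSERT on persist to obtain the id
    private Long id;
}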
I have an interesting task where I need to cache the results of my method, which is really simple with the Spring cache abstraction:
@Cacheable(...)
public String getValue(String key) {
    return restService.getValue(key);
}
The restService.getValue() call targets a REST service, which may or may not answer if the endpoint is down.
I need to set a specific TTL for the cache value, let's say 5 minutes, but in case the server is down I need to return the last value, even if it is older than 5 minutes.
I was thinking about having a second cacheable method which has no TTL and always returns the last value; it would be called from getValue if restService returns nothing, but maybe there is a better way?
I've been interested in doing this for a while too. Sorry to say, I have not found any trivial way of doing it. Spring will not do this for you; it's more a question of whether the cache implementation Spring is wrapping can do it. I assume you are using the EhCache implementation. Unfortunately this functionality does not come out of the box as far as I know.
There are various ways one can achieve something similar depending on your problem:
1) Use an eternal cache time and have a second Thread class which periodically loops over the cached data, refreshing it. I have not done this exactly, but the Thread would need to look something like this:
@Autowired
EhCacheCacheManager ehCacheCacheManager;
...
// in the infinite loop
Ehcache ehcache = (Ehcache) ehCacheCacheManager.getCache("test").getNativeCache();
List keys = ehcache.getKeys();
for (int i = 0; i < keys.size(); i++) {
    Object key = keys.get(i);
    Element item = ehcache.get(key);
    // re-fetch the data based on some info in the value, and if no exceptions occur:
    ehcache.put(new Element(item.getObjectKey(), newValue));
}
Benefits: this is very fast for the @Cacheable caller. Downside: your server might get more hits than necessary.
2) You could make a CacheListener to listen for the eviction event and store the data temporarily. Should the server call fail, use that data and return it from the method.
The ehcache.xml:
<cacheEventListenerFactory class="caching.MyCacheEventListenerFactory"/>
</cache>
</ehcache>
The factory:
import net.sf.ehcache.event.CacheEventListener;
import net.sf.ehcache.event.CacheEventListenerFactory;
import java.util.Properties;
public class MyCacheEventListenerFactory extends CacheEventListenerFactory {

    @Override
    public CacheEventListener createCacheEventListener(Properties properties) {
        return new CacheListener();
    }
}
The Pseudo-implementation
import net.sf.ehcache.CacheException;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Element;
import net.sf.ehcache.event.CacheEventListener;
import java.util.concurrent.ConcurrentHashMap;
public class CacheListener implements CacheEventListener {

    // prob bad practice to use a global static here - but it's just for demo purposes
    public static ConcurrentHashMap myMap = new ConcurrentHashMap();

    @Override
    public void notifyElementPut(Ehcache ehcache, Element element) throws CacheException {
        // we can remove it since the put happens after a method return
        myMap.remove(element.getKey());
    }

    @Override
    public void notifyElementExpired(Ehcache ehcache, Element element) {
        // expired item, we should store this
        myMap.put(element.getKey(), element.getValue());
    }

    //....
}
A challenge here is that the key is not very useful; you might need to store something about the key in the returned value to be able to pick it up if the server call fails. This feels a bit hacky, and I have not determined whether it is exactly bulletproof. It might need some testing.
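As a hedged illustration of that "store something about the key in the returned value" idea (names here are hypothetical):

import java.io.Serializable;

public class CachedValue implements Serializable {

    private final String key;    // the business key this payload was cached under
    private final String value;  // the actual payload returned to callers

    public CachedValue(String key, String value) {
        this.key = key;
        this.value = value;
    }

    public String getKey() {
        return key;
    }

    public String getValue() {
        return value;
    }
}

With the key embedded in the value, the fallback path can look the last result up by its own business key rather than by whatever cache key Spring generated.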
3) A lot of effort, but it works:
@Cacheable("test")
public MyObject getValue(String data) {
    try {
        MyObject result = callServer(data);
        storeResultSomewhereLikeADatabase(result);
        return result;
    } catch (Exception ex) {
        return getStoredResult(data);
    }
}
A pro here is that it will work between server restarts, and you can extend it simply to allow shared caches between clustered servers.
I had a version in a 12-node clustered environment where each node checked the database first to see if any other node had already fetched the "expensive" data, and then reused that rather than make the server call.
A slight variant would be to use a second @Cacheable method together with @CachePut, rather than a DB, to store the data (see the sketch below). But this would mean doubling up on memory usage. That might be acceptable depending on your result sizes.
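A hedged sketch of that in-memory variant, assuming a primary cache "test" with the 5-minute TTL, a backup cache "testBackup" configured without expiry, and a hypothetical RestService client. The backup lives in its own bean because Spring cache annotations are not applied on self-invocation:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
class BackupCache {

    @CachePut(value = "testBackup", key = "#key") // "testBackup" assumed to be configured as eternal
    public String update(String key, String value) {
        return value;
    }

    @Cacheable(value = "testBackup", unless = "#result == null")
    public String get(String key) {
        return null; // nothing stored yet and the endpoint is down
    }
}

@Service
class CachedValueService {

    @Autowired
    private RestService restService; // hypothetical client for the remote endpoint

    @Autowired
    private BackupCache backup;

    @Cacheable("test") // primary cache with the 5-minute TTL
    public String getValue(String key) {
        try {
            String result = restService.getValue(key);
            backup.update(key, result);  // always keep the last good value
            return result;
        } catch (Exception ex) {
            return backup.get(key);      // endpoint down: serve the last known value
        }
    }
}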
Maybe you can use SpEL to change which cache is used (one with a TTL and a second without) depending on whether the condition (is the service up?) is true or false. I've never used SpEL this way (I used it to change the key based on some request params), but I think it could work:
@Cacheable(value = "T(com.xxx.ServiceChecker).checkService()", ...)
where checkService() is a static method that returns the name of the cache that should be used
I'm working on a Spring application that downloads data from different APIs. For that purpose I need a Fetcher class that interacts with an API to fetch the needed data. One of the requirements of this class is that it has to have a method to start the fetching and a method to stop it. Also, it must download everything asynchronously, because users must be able to interact with a dashboard while fetching data.
What is the best way to accomplish this? I've been reading about task executors and the different Spring annotations to schedule tasks and execute them asynchronously, but these solutions don't seem to solve my problem.
Asynchronous task execution is what you're after, and since Spring 3.0 you can achieve this using annotations too, directly on the method you want to run asynchronously.
There are two ways of implementing this, depending on whether you are interested in getting a result from the async process:
@Async
public Future<ReturnPOJO> asyncTaskWithReturn() {
    //..
    return new AsyncResult<ReturnPOJO>(yourReturnPOJOInstance);
}
or not:
@Async
public void asyncTaskNoReturn() {
    //..
}
In the former method, the result of your computation, conveyed by the yourReturnPOJOInstance object, is stored in an instance of org.springframework.scheduling.annotation.AsyncResult<V>, which in turn implements java.util.concurrent.Future<V>, which the caller can use to retrieve the result of the computation later on.
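From the caller's side that might look like this (someService being a hypothetical, Spring-proxied bean exposing the method above):

Future<ReturnPOJO> future = someService.asyncTaskWithReturn(); // returns immediately
// ... do other work while the task runs ...
ReturnPOJO result = future.get(); // blocks until the async computation completes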
To activate the above functionality in Spring you have to add to your XML config file:
<task:annotation-driven />
along with the needed task namespace.
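If you prefer Java config over XML, the equivalent (available since Spring 3.1) would be a configuration class annotated with @EnableAsync, roughly like this:

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;

@Configuration
@EnableAsync // enables detection of @Async methods on Spring beans
public class AsyncConfig {
}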
The simplest way to do this is to use the Thread class. You supply a Runnable object that performs the fetching functionality in the run() method and when the Thread is started, it invokes the run method in a separate thread of execution.
So something like this:
public class Fetcher implements Runnable {
    public void run() {
        // do fetching stuff
    }
}
//in your code
Thread fetchThread = new Thread(new Fetcher());
fetchThread.start();
Now, if you want to be able to cancel, you can do that a couple of ways. The easiest (albeit most violent and non-advisable) way to do it is to interrupt the thread:
fetchThread.interrupt();
The correct way to do it would be to implement logic in your Fetcher class that periodically checks a variable to see whether it should stop doing whatever it's doing or not.
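A minimal sketch of that cooperative approach, assuming the flag is checked at safe points in the fetch loop:

public class Fetcher implements Runnable {

    private volatile boolean running = true; // volatile so the fetching thread sees updates

    @Override
    public void run() {
        while (running) {
            // fetch the next chunk of data from the API here
        }
    }

    public void stop() {
        running = false; // the loop exits at its next check
    }
}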
Edit: To your question about getting Spring to run it automatically: if you want it to run periodically, you'll need to use a scheduling framework like Quartz. However, if you just want it to run once, you could use the @PostConstruct annotation. The method annotated with @PostConstruct will be executed after the bean is created. So you could do something like this:
@Service
public class Fetcher implements Runnable {

    public void run() {
        // do stuff
    }

    @PostConstruct
    public void goDoIt() {
        Thread trd = new Thread(this);
        trd.start();
    }
}
Edit 2: I actually didn't know about this, but check out the @Async discussion in the Spring documentation if you haven't already. It might also be what you want to do.
You might only need certain methods to run on a separate thread rather than the entire class. If so, the @Async annotation is very simple and easy to use.
Simply add it to any method you want to run asynchronously; you can also use it on methods with return types, thanks to Java's Future.
Check out this page: http://www.baeldung.com/spring-async
JBoss 4.x
EJB 3.0
I've seen code like the following (greatly abbreviated):
@Stateless
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class EJB1 implements IEJB1
{
    @EJB
    private IEJB1 self;

    @EJB
    private IEJB2 ejb2;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean someMethod1()
    {
        return someMethod2();
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean someMethod2()
    {
        return self.someMethod3();
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean someMethod3()
    {
        return ejb2.someMethod1();
    }
}
And say EJB2 is almost an exact copy of EJB1 (same three methods), and EJB2.someMethod3() calls into EJB3.someMethod1(), which then finally in EJB3.someMethod3() writes to the DB.
This is a contrived example, but I have seen similar code to the above in our codebase. The code actually works just fine.
However, it feels like terrible practice and I'm concerned about the @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) on every method that doesn't even actually perform any DB writes. Does this actually create a new transaction every single time for every method call, with the result of:
new transaction
-new transaction
--new transaction
---new transaction
...(many more)
-------new transaction (DB write)
And then unwraps at that point? Would this ever be a cause for performance concern? Additional thoughts?
Does this actually create a new transaction every single time for
every method call
No, it doesn't. A new transaction will be created only when calling a method through an EJB reference from another bean. Invoking method2 from method1 within the same bean won't spawn a new transaction.
See also here and here. The latter is an exceptionally good article explaining transaction management in EJB.
Edit:
Thanks @korifey for pointing out that method2 actually calls method3 on a bean reference, thus resulting in a new transaction.
It really creates a new JTA transaction in every EJB, and this can have a serious performance effect on read-only methods (which issue only SELECTs, no updates). Use @TransactionAttribute(TransactionAttributeType.SUPPORTS) for read-only methods.
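A hedged sketch of what that looks like on a read-only finder (ReportQueryBean, ReportDao and Report are hypothetical names): SUPPORTS joins the caller's transaction if one exists, but never forces a new one.

import java.util.List;

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class ReportQueryBean {

    @EJB
    private ReportDao reportDao; // hypothetical DAO that only issues SELECTs

    @TransactionAttribute(TransactionAttributeType.SUPPORTS)
    public List<Report> findReports() {
        return reportDao.findAll(); // runs in the caller's transaction, or none at all
    }
}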
My application loads entities from a Hibernate DAO, with OpenSessionInViewFilter to allow rendering.
In some cases I want to make a minor change to a field -
Long orderId ...
link = new Link("cancel") {
    @Override
    public void onClick() {
        Order order = orderDAO.load(orderId);
        order.setCancelledTime(timeSource.getCurrentTime());
    }
};
but such a change is not persisted, as the OSIV doesn't flush.
It seems a real shame to have to call orderDAO.save(order) in these cases, but I don't want to go as far as changing the FlushMode on the OSIV.
Has anyone found any way of declaring a 'request handling' (such as onClick) as requiring a transaction?
Ideally I suppose the transaction would be started early in the request cycle, and committed by the OSIV, so that all logic and rendering would take place in same transaction.
I generally prefer to use an additional 'service' layer of code that wraps the basic DAO logic and provides transactions via @Transactional. That gives me better separation of presentation vs business logic and is easier to test.
But since you already use OSIV, maybe you can just put an AOP interceptor around your code and have it do flush()?
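A hedged sketch of that idea: an aspect that flushes the current Hibernate session after methods of a (hypothetical, Spring-managed) service layer return. The pointcut and package are guesses to adapt to your project, the advised beans must be Spring-managed for the proxy to apply, and AspectJ auto-proxying has to be enabled.

import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class FlushAfterServiceCallAspect {

    @Autowired
    private SessionFactory sessionFactory;

    // flush pending changes once a service-layer call completes
    @AfterReturning("execution(* com.example.service..*.*(..))")
    public void flushSession() {
        sessionFactory.getCurrentSession().flush();
    }
}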
Disclaimer: I've never actually tried this, but I think it would work. This also may be a little bit more code than you want to write. Finally, I'm assuming that your WebApplication subclasses SpringWebApplication. Are you with me so far?
The plan is to tell Spring that we want to run the statements of your onClick method in a transaction. In order to do that, we have to do three things.
Step 1 : inject the PlatformTransactionManager into your WebPage:
@SpringBean
private PlatformTransactionManager platformTransactionManager;
Step 2 : create a static TransactionDefinition in your WebPage that we will later reference:
protected static final TransactionDefinition TRANSACTION_DEFINITION;
static {
    TRANSACTION_DEFINITION = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    ((DefaultTransactionDefinition) TRANSACTION_DEFINITION).setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE);
}
Feel free to change the TransactionDefinition settings and/or move the definition to a shared location as appropriate. This particular definition instructs Spring to start a new transaction even if there's already one started and to use the maximum transaction isolation level.
Step 3 : add transaction management to the onClick method:
link = new Link("cancel") {
    @Override
    public void onClick() {
        new TransactionTemplate(platformTransactionManager, TRANSACTION_DEFINITION).execute(new TransactionCallback() {
            @Override
            public Object doInTransaction(TransactionStatus status) {
                Order order = orderDAO.load(orderId);
                order.setCancelledTime(timeSource.getCurrentTime());
                return null;
            }
        });
    }
};
And that should do the trick!