My application is running on TomEE and I have an EJB timer that triggers the timeout method every two minutes. The first invocation of the timeout method was still running when the timer tried to trigger the same method a second time, and it threw the following exception:
javax.ejb.ConcurrentAccessTimeoutException: Unable to get write lock on 'timeout' method for: com.abc.xyz
at org.apache.openejb.core.singleton.SingletonContainer.aquireLock(SingletonContainer.java:298)
at org.apache.openejb.core.singleton.SingletonContainer._invoke(SingletonContainer.java:217)
at org.apache.openejb.core.singleton.SingletonContainer.invoke(SingletonContainer.java:197)
at org.apache.openejb.core.timer.EjbTimerServiceImpl.ejbTimeout(EjbTimerServiceImpl.java:769)
at org.apache.openejb.core.timer.EjbTimeoutJob.execute(EjbTimeoutJob.java:39)
at org.quartz.core.JobRunShell.run(JobRunShell.java:207)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:560)
My log fills up with the same stack trace, and it keeps occurring until I stop the server.
Can I make the timer service skip triggering the method if it is already running?
Or is there a way to time out the first call before the method is triggered again?
Thanks,
Is your timed EJB a singleton bean?
By default singletons use container managed concurrency with write locks that guarantee exclusive access for all methods.
The AccessTimeout for a singleton EJB is configured in openejb.xml; once that timeout expires, the exception you are seeing is thrown. Please see here as well: http://tomee.apache.org/singleton-beans.html
Solutions might be:
Use a stateless session bean as the timer bean
Define a read lock on the timer method (see the sketch after this list)
Don't use a repeating timer but schedule the next execution of your timer at the end of the current execution.
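For the read-lock option mentioned above, here is a minimal sketch; the bean and method names are illustrative, assuming a singleton timer bean like yours:

import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;
import javax.ejb.Timeout;
import javax.ejb.Timer;

@Singleton
public class ReadLockedTimerBean {

    // A read lock allows concurrent access, so a new timeout no longer blocks
    // (and eventually times out) while the previous run is still in progress.
    // Note that executions may then overlap.
    @Lock(LockType.READ)
    @Timeout
    public void timeoutCallback(Timer timer) {
        // ... long-running work
    }
}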
If you want to avoid running multiple times in parallel, but also want to avoid the scheduled runs queuing up, here is another proposal.
This way, a scheduled run is "skipped" if the previous one is still running:
import java.util.concurrent.atomic.AtomicBoolean;

import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class Example {

    private final AtomicBoolean alreadyRunning = new AtomicBoolean(false);

    @Schedule(minute = "*", hour = "*", persistent = false)
    public void doWork() {
        // Skip this run if the previous one has not finished yet.
        if (alreadyRunning.getAndSet(true)) return;
        try {
            // ... your code
        } finally {
            alreadyRunning.set(false);
        }
    }
}
Related
I have a project which executes multiple scheduled methods at start-up.
I noticed that after the scheduled methods are executed, the threads they opened do not close but remain in a 'parking' state.
Is this normal behavior?
Aren't the threads supposed to close after the method is executed? (Keeping multiple threads open just slows down the application and consumes more RAM.)
Here are my code configurations:
@EnableScheduling
@Configuration
@ConditionalOnProperty(name = "scheduling.enabled", matchIfMissing = true)
public class SchedulingConfiguration implements SchedulingConfigurer {
}
Here is an example of method called in service:
@Scheduled(cron = "0 0 4 * * *")
protected void updateExchangeRates() {
    if (enablePostConstruct) {
        countryService.updateCountryExchangeRates();
    }
}
I would like to run the scheduled methods asynchronously, with a thread pool capped at around 10-15 threads, and have each thread released after execution and picked up again the next time the method needs to run.
Can you guide me, please, on how this can be achieved?
I tried implementing SchedulingConfigurer and calling executorService.shutdown(), but it did not work.
You could use a method annotated with @PreDestroy to invoke executorService.shutdown(). I wouldn't worry about the parking state; you probably want those threads to be ready for the next invocation, so it is not really harmful that they are parked.
Nothing wrong with the code.
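A minimal sketch of that @PreDestroy clean-up, assuming the ExecutorService backing your scheduled tasks is itself a Spring-managed bean:

import java.util.concurrent.ExecutorService;

import javax.annotation.PreDestroy;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SchedulerShutdownConfig {

    @Autowired
    private ExecutorService executorService; // the pool used by the scheduled methods

    // Invoked when the application context is closed; shuts the pool down cleanly.
    @PreDestroy
    public void shutdownExecutor() {
        executorService.shutdown();
    }
}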
We have the following simple app: one @Singleton bean with a @Timeout method and one servlet which starts the timer. After the first deploy we see that the method is called once every 2 seconds, which is expected. Then, after a hot redeploy, we see that the method is called twice within 2 seconds. After a few redeploys the method is called multiple times within the same 2 seconds. Restarting the server doesn't help. See the code below:
import javax.ejb.*;

@Remote(TimerRemote.class)
@Singleton
public class TimerBean implements TimerRemote {

    @Resource
    private SessionContext context;

    public void startTimer() {
        context.getTimerService().createTimer(2000, 2000, null);
    }

    @Timeout
    public void timeoutCallback(javax.ejb.Timer timer) {
        System.out.println("timeoutCallback is called: " + timer);
    }
}
The @Timeout method should be called once per interval. Currently the method is getting called multiple times within a second.
The timer is persistent by default and is never cancelled, so every deployment that calls startTimer() adds another timer on top of the ones that survived.
Please refer to the official Java EE 6 Tutorial: Using the Timer Service.
Prefer using @Schedule with persistent = false if you don't need the timer to be persistent, or try a programmatic approach and control the lifecycle of the timer yourself.
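A minimal sketch of the @Schedule variant, assuming you still want a run every 2 seconds as with the programmatic timer:

import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class TimerBean {

    // Non-persistent automatic timer: it is recreated on each deployment,
    // so redeploys no longer leave old persistent timers behind.
    @Schedule(second = "*/2", minute = "*", hour = "*", persistent = false)
    public void timeoutCallback() {
        System.out.println("timeoutCallback is called");
    }
}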
I have defined a bean which needs to do some heavy processing during the @PostConstruct lifecycle phase (during start-up).
As it stands, I submit a new Callable to an executor service with each iteration of the processing loop. I keep a list of the Future objects returned from these submissions in a member variable.
@Component
@Scope("singleton")
public class StartupManager implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    private ExecutorService executorService;

    private final Map<Class<?>, Optional<Action>> actionMappings = new ConcurrentHashMap<>();
    private final List<Future> processingTasks = Collections.synchronizedList(new ArrayList<>());

    @PostConstruct
    public void init() throws ExecutionException, InterruptedException {
        this.controllers.getHandlerMethods().entrySet().stream().forEach(handlerItem -> {
            processingTasks.add(executorService.submit(() -> {
                // processing
            }));
        });
    }
}
This same bean implements the ApplicationListener interface, so it can listen for a ContextRefreshedEvent which allows me to detect when the application has finished starting up. I use this handler to loop through the list of Futures and invoke the blocking get method which ensures that all of the processing has occurred before the application continues.
@Override
public void onApplicationEvent(ContextRefreshedEvent applicationEvent) {
    for (Future task : this.processingTasks) {
        try {
            task.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e.getMessage());
        }
    }
}
My first question: is changing the actionMappings stream to a parallelStream going to achieve the same thing as submitting tasks to the executor service? Is there a way I can pass an existing executor service into a parallel stream so that it uses the thread pool size I've defined for the bean?
Secondly, as part of the processing the actionMappings map is read and entries are put into it. Is making this map a ConcurrentHashMap sufficient to make it thread-safe in this scenario?
Finally, is implementing the ApplicationListener interface and listening for the ContextRefreshedEvent the best way to detect when the application has started up, and therefore to force completion of the unprocessed tasks by blocking? Or can this be done another way?
Thanks.
About using parallelStream(): No, and this is precisely the main drawback of using this method. It should be used only when the thread pool size doesn't matter, so I think your ExecutorService-based approach is fine.
Since you are working with Java 8, you could as well use the CompletableFuture.supplyAsync() method, which has an overload that takes an Executor. Since ExecutorService extends Executor, you can pass it your ExecutorService and you're done!
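A minimal sketch of that approach (runAsync() is shown because the tasks here return nothing; supplyAsync() has the same Executor-taking overload). The class, method, and parameter names are illustrative:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.stream.Collectors;

public class StartupProcessing {

    // Submits each unit of work to the given pool and blocks until all are done.
    public void processAll(List<Runnable> work, ExecutorService executorService) {
        List<CompletableFuture<Void>> tasks = work.stream()
                // ExecutorService extends Executor, so it can be passed directly.
                .map(r -> CompletableFuture.runAsync(r, executorService))
                .collect(Collectors.toList());

        CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
    }
}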
I think a ConcurrentHashMap is fine. It ensures thread safety in all its operations, especially when comes the time to add or modify entries.
When is a ContextRefreshedEvent fired? According to the Javadoc:
Event raised when an ApplicationContext gets initialized or refreshed.
which doesn't guarantee that your onApplicationEvent() method is called once and only once, that is, right after your bean has been properly initialized (which includes execution of the @PostConstruct-annotated method).
I suggest you implement the BeanPostProcessor interface and put your Future-checkup logic in the postProcessAfterInitialization() method. The two BeanPostProcessor methods are called before and after the InitializingBean.afterPropertiesSet() method (if present), respectively.
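A minimal sketch of that idea; awaitProcessingTasks() is a hypothetical method on your StartupManager that calls get() on each stored Future:

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;

@Component
public class StartupManagerPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        // Runs after the bean has been fully initialized (@PostConstruct included).
        if (bean instanceof StartupManager) {
            ((StartupManager) bean).awaitProcessingTasks(); // hypothetical: get() on each Future
        }
        return bean;
    }
}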
I hope this will be helpful...
Cheers,
Jeff
I have created a bean of a class with default (Singleton) scope. Within the class I have a method which is scheduled to be run every hour.
public class TaskService implements InitializingBean {

    @Scheduled(cron = "0 0 */1 * * ?")
    public void hourlyReportTask() {
        // ... code here ...
    }

    public void performAllTasks() {
        hourlyReportTask();
        // ...
        // ...
    }
}
My application config looks something like this,
<bean id="reportService"
class="com.tasks.TaskService" />
I am assuming the thread running the scheduled task will use the same TaskService bean, since it is created in singleton scope. What happens if the application is currently running hourlyReportTask() and the Spring container kicks off a background scheduled thread to run hourlyReportTask() at the same time? Will it wait to get access to the TaskService instance?
The exact same instance is used by both your application and the scheduling service. There is no synchronization so the scheduling service may run that method while your application invokes it.
It behaves pretty much the same way as if you had injected TaskService into something that can be accessed by multiple threads at the same time, with those threads calling that method concurrently.
There's no black magic behind @Scheduled: it invokes your method the same way you would manually. If that method is not thread-safe, you need to fall back on the regular synchronization mechanisms in Java (for instance by adding the synchronized keyword to your method declaration).
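A minimal sketch of that, based on the TaskService from the question:

import org.springframework.scheduling.annotation.Scheduled;

public class TaskService {

    // synchronized ensures that a manual performAllTasks() call and the
    // scheduled invocation never execute this method at the same time;
    // one of them simply blocks until the other has finished.
    @Scheduled(cron = "0 0 */1 * * ?")
    public synchronized void hourlyReportTask() {
        // ... report generation ...
    }

    public void performAllTasks() {
        hourlyReportTask();
    }
}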
Spring singleton does not mean what you expect from the Design Patterns singleton. In Spring, singleton means that only one instance of the bean is created (without preventing another from being created) and that instance is used whenever Spring needs that type.
In your case your hourlyReportTask() method would execute twice.
I have a method which needs to be executed every day at 07:00.
For that purpose I created a bean with the method and annotated it with @Scheduled(cron="0 0 7 * * ?").
In this bean I created a main function which initializes the Spring context, gets the bean, and invokes the method (at least for the first time), like this:
public static void main(String[] args) {
    ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(args[0]);
    SchedulerService schedulerService = context.getBean(SchedulerService.class);
    schedulerService.myMethod();
}
This works just fine, but only once.
I think I understand why: the main thread ends, and so does the Spring context, so even though myMethod is annotated with @Scheduled it won't run again.
I thought of a way to get around this, namely not letting the main thread die, perhaps like this:
while (true) {
    Thread.sleep(500);
}
That way, I think, the application context will stay alive and so will my bean.
Am I right?
Is there a better way to solve this?
I'm using spring 3.1.2.
Thanks.
The JVM should stay alive as long as any non-daemon threads are running. If you have a <task:annotation-driven/> tag in your application, Spring should start up an executor with a small pool of non-daemon threads for you, and the main application should not terminate.
The only thing you will need to do is also register a shutdown hook to ensure a clean-up when the VM ends:
context.registerShutdownHook()
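A minimal sketch of the resulting main method, assuming your XML config contains <task:annotation-driven/>:

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {

    public static void main(String[] args) {
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(args[0]);
        // Close the context (and its scheduler) cleanly when the JVM shuts down.
        context.registerShutdownHook();
        // No sleep loop needed: the scheduler's non-daemon threads keep the JVM alive.
    }
}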
The join method is ideal for this:
try {
    Thread.currentThread().join();
} catch (InterruptedException e) {
    logger.warn("Interrupted", e);
}
Alternatively, here's the old school wait method:
final Object sync = new Object();
synchronized (sync) {
    try {
        sync.wait();
    } catch (InterruptedException e) {
        logger.warn("Interrupted", e);
    }
}