When undeploying an application from Tomcat, there are threads left open:
org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/services] appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak.
org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/services] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak.
The application maintains a map of DataSources and runs a ScheduledExecutorService to update the map every 5 minutes.
@WebListener
public class DataSourceFactory implements ServletContextListener
{
    private static Map<String, DataSource> rdsDataSourceMap;
    private static ScheduledExecutorService scheduler;
    private static final long CONNECTION_MAP_REFRESH_INTERVAL = 5;

    @Override
    public void contextInitialized(ServletContextEvent event)
    {
        scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                cacheDatasourceMap();
            }
        }, 0, CONNECTION_MAP_REFRESH_INTERVAL, TimeUnit.MINUTES);
    }

    @Override
    public void contextDestroyed(ServletContextEvent event)
    {
        scheduler.shutdownNow();
        if (localPool != null) {
            localPool.close();
        }
        for (DataSource ds : rdsDataSourceMap.values()) {
            if (ds != null) {
                ds.close();
            }
        }
    }

    private void cacheDatasourceMap()
    {
        ...
    }

    ....
}
The DataSources are created using TomcatJDBC with the following parameters:
driver=com.mysql.jdbc.Driver
jmxEnabled=true
testWhileIdle=true
testOnBorrow=true
validationQuery=SELECT 1
testOnReturn=false
validationInterval=30000
timeBetweenEvictionRunsMillis=5000
maxActive=100
maxIdle=20
initialSize=10
maxWait=100000
removeAbandonedTimeout=60
minEvictableIdleTimeMillis=30000
minIdle=10
logAbandoned=true
removeAbandoned=true
jdbcInterceptors=org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer;org.apache.tomcat.jdbc.pool.interceptor.SlowQueryReportJmx(threshold=10000)
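As a point of reference (my sketch, not from the original post): these parameters map one-to-one onto Tomcat JDBC's PoolProperties API. The URL is a placeholder assumption, and the interceptor string is copied from the configuration above:

import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolBuilder {
    public static org.apache.tomcat.jdbc.pool.DataSource build() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder
        p.setDriverClassName("com.mysql.jdbc.Driver");
        p.setJmxEnabled(true);
        p.setTestWhileIdle(true);
        p.setTestOnBorrow(true);
        p.setValidationQuery("SELECT 1");
        p.setTestOnReturn(false);
        p.setValidationInterval(30000);
        p.setTimeBetweenEvictionRunsMillis(5000);
        p.setMaxActive(100);
        p.setMaxIdle(20);
        p.setInitialSize(10);
        p.setMaxWait(100000);
        p.setRemoveAbandonedTimeout(60);
        p.setMinEvictableIdleTimeMillis(30000);
        p.setMinIdle(10);
        p.setLogAbandoned(true);
        p.setRemoveAbandoned(true);
        p.setJdbcInterceptors("org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"
                + "org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"
                + "org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer;"
                + "org.apache.tomcat.jdbc.pool.interceptor.SlowQueryReportJmx(threshold=10000)");
        // org.apache.tomcat.jdbc.pool.DataSource (unlike javax.sql.DataSource)
        // has a close() method, which is what contextDestroyed() relies on
        return new org.apache.tomcat.jdbc.pool.DataSource(p);
    }
}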
UPDATE
After getting rid of the ScheduledExecutorService I am still seeing the Timer thread being left open. I have added a logging statement at the end of contextDestroyed() and verified that it gets past closing the DataSources.
I have also verified that the MySQL driver is in Tomcat's lib directory and not in the WAR.
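(Editorial side note, not from the original post: the [Abandoned connection cleanup thread] is started by MySQL Connector/J itself, not by the pool. In Connector/J 5.1.23+ it can be stopped explicitly when the context is destroyed; treat the exact class and method as version-dependent, and note that if the driver lives in Tomcat's lib the thread belongs to the container rather than to any one webapp:)

// Assumes MySQL Connector/J 5.1.x; in 8.x the class moved to
// com.mysql.cj.jdbc.AbandonedConnectionCleanupThread
try {
    com.mysql.jdbc.AbandonedConnectionCleanupThread.shutdown();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt flag
}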
First of all, there is nothing Tomcat can do about this: you are creating an Executor using plain Java SE, so the application server (a Java EE container) cannot and should not manage an ExecutorService you created yourself. If you want a Java EE ExecutorService, consider a ManagedScheduledExecutorService, which you will not need to worry about shutting down because it uses the app server's thread pool.
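For illustration, a minimal sketch of that container-managed variant on a full Java EE 7 server (plain Tomcat does not provide this resource, so the target environment is an assumption, as is the class name):

import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedScheduledExecutorService;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import java.util.concurrent.TimeUnit;

@WebListener
public class ManagedDataSourceFactory implements ServletContextListener {

    @Resource // injected and owned by the container
    private ManagedScheduledExecutorService scheduler;

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // same refresh logic as in the question, minus the lifecycle worries
        scheduler.scheduleAtFixedRate(this::cacheDatasourceMap, 0, 5, TimeUnit.MINUTES);
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        // nothing to shut down: the server manages the executor's threads
    }

    private void cacheDatasourceMap() { /* ... */ }
}

With that out of the way, on to the question...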
You are using shutdownNow(), which is a "quick and dirty" way of shutting down an ExecutorService. If you want to bring your app down gently, I would recommend using ExecutorService.shutdown() in combination with ExecutorService.awaitTermination() instead.
According to the Javadoc, shutdownNow() makes no guarantees about what can actually be stopped:
This method does not wait for actively executing tasks to terminate.
...
There are no guarantees beyond best-effort attempts to stop processing actively executing tasks.
If you care about waiting for tasks to stop, you need to use awaitTermination(). The only thing shutdown() or shutdownNow() can do is call interrupt(), which may or may not actually stop the Thread. To await termination, do this:
executor.shutdown(); // or shutdownNow()
if (!executor.isTerminated())
    executor.awaitTermination(10, TimeUnit.SECONDS); // wait up to 10s; throws the checked InterruptedException
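Applied to the listener from the question, contextDestroyed() might look like this (a sketch; the 10-second bound is an arbitrary choice):

@Override
public void contextDestroyed(ServletContextEvent event) {
    scheduler.shutdownNow(); // interrupts the running refresh task
    try {
        if (!scheduler.awaitTermination(10, TimeUnit.SECONDS)) {
            // the task ignored the interrupt; log it, nothing more we can do
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    // ... then close the DataSources as before
}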
Related
In my application, I have multiple scheduler threads that create tasks. For example, each scheduler thread can create a bunch of tasks:
TaskCreator taskCreator;
for (Report report : reports) {
    taskCreator.createTask(report);
}
The scheduler threads can run concurrently as you can see from the logs:
15:57:20.107 INFO [ scheduler-4] c.task.ReportExportSchedulerTask : Task created
15:57:20.107 INFO [ scheduler-2] c.task.ReportExportSchedulerTask : Task created
I have a TaskCreator component as follows that passes the task to executeJob():
@Component
public class TaskCreator {

    @Autowired
    private SftpTaskExecutor sftpTaskExecutor;

    @Autowired
    SftpConfig sftpConfig;

    @Autowired
    private SFTPConnectionManager connectionManager;

    public void createTask(Report report) {
        sftpTaskExecutor.executeJob(new JobProcessorTask(...));
    }

    public void validateTasksExecution() {
        sftpTaskExecutor.getExecutorService().shutdown();
        while (!sftpTaskExecutor.getExecutorService().isTerminated()) ;
        connectionManager.disconnect();
    }
}
I have an SftpTaskExecutor component that constructs the ExecutorService to which I submit the above tasks:
@Component
public class SftpTaskExecutor {

    private ExecutorService executorService = Executors.newSingleThreadExecutor();

    public void executeJob(JobProcessorTask jobProcessorTask) {
        executorService.execute(jobProcessorTask);
    }

    public ExecutorService getExecutorService() {
        return executorService;
    }
}
My question is: if two or more scheduler threads are creating tasks and submitting them to the executor service concurrently, the above throws a RejectedExecutionException and leaves one scheduler's task unfinished (i.e. its file is not sent).
For each scheduler thread, I need to be able to call validateTasksExecution() without interfering with the other scheduler threads. In other words, it must not disconnect while another scheduler is still processing.
Am I using the ExecutorService correctly in this regard? How can I change the above to be thread safe?
My question is: if two or more scheduler threads are creating tasks and submitting them to the executor service concurrently, the above throws a RejectedExecutionException and leaves one scheduler's task unfinished (i.e. its file is not sent)
Let's take a look at the javadocs for ExecutorService.execute(...):
RejectedExecutionException - if this task cannot be accepted for execution.
In looking at the ThreadPoolExecutor (and associated) code, the jobs get rejected for 2 reasons:
The queue for the jobs is full (this doesn't apply to you because the queues are by default unbounded)
The executor service is no longer running (ding ding ding)
I believe that your executor service has been shut down, most likely because the first of your threads called validateTasksExecution() before the 2nd thread called executeJob(...). Your code is incorrect if you are trying to reuse that thread-pool. That you are also closing the connectionManager makes me wonder whether you want to re-use the SftpTaskExecutor at all.
If you want each thread to see if its operation is done but have the thread-pool stay running then you need to be saving the Future(s) from the ExecutorService.submit(...) method and call get() on them. That will tell you when the jobs are done.
Something like:
public Future<Void> createTask(Report report) {
    return sftpTaskExecutor.executeJob(new JobProcessorTask(...));
}

public void validateTasksExecution(Future<Void> future) {
    // there are some exceptions here you need to handle
    future.get();
}

public void shutdown() {
    sftpTaskExecutor.shutdown();
    connectionManager.disconnect();
}
...
// assumes JobProcessorTask implements Callable<Void>, so that submit(...)
// returns a Future<Void>
public Future<Void> executeJob(JobProcessorTask jobProcessorTask) {
    return executorService.submit(jobProcessorTask);
}
If you need to monitor multiple jobs then you should store them in a collection and call get() on them serially although the jobs will be running in parallel.
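For example, something along these lines (a sketch; reports and taskCreator are assumed from the question):

List<Future<Void>> futures = new ArrayList<>();
for (Report report : reports) {
    futures.add(taskCreator.createTask(report));
}
// get() blocks until each job is done; the jobs themselves still run in parallel
for (Future<Void> future : futures) {
    try {
        future.get();
    } catch (InterruptedException | ExecutionException e) {
        // handle per-job failure / interruption here
    }
}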
The alternative would be for you to have a separate ExecutorService for each transaction, which is wasteful but maybe not so bad considering that it is managing sftp calls.
while (!sftpTaskExecutor.getExecutorService().isTerminated()) ;
Yeah, you don't want to spin like that. See the awaitTermination(...) javadocs.
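Something along these lines instead (sketch):

executorService.shutdown();
try {
    // blocks until the queued jobs finish, instead of burning a CPU core
    if (!executorService.awaitTermination(5, TimeUnit.MINUTES)) {
        // timed out: decide whether to shutdownNow() or keep waiting
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}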
I have a spring-boot application.
I have implemented the SmartLifecycle interface in my bean, which starts an async SNMP server in its start method and stops it in its stop method.
Everything works fine, except that the main application context stops right after starting, so my server bean also stops right after starting.
All I need is to make the Spring context stop only when the shutdown hook is fired.
This is not a web application, so I don't need spring-boot-starter-web, which solves this problem by starting a web server that prevents the context from stopping until the web server stops.
I could use something like a CountDownLatch and wait for it to reach zero in my main method right after the context starts. Something like this:
public static void main(String[] args) throws InterruptedException {
    ConfigurableApplicationContext ctx = SpringApplication.run(SnmpTrapRetranslatorApplication.class, args);
    CountDownLatch snmpServerCloseLatch = ctx.getBean("snmpServerCloseLatch", CountDownLatch.class);
    snmpServerCloseLatch.await();
}
And my server bean's start method will create this latch with count 1, while its stop method will call snmpServerCloseLatch.countDown().
This technique is described here.
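For context, the server-bean side of that technique might look roughly like this (my sketch, assuming Spring 5+, where SmartLifecycle provides defaults for isAutoStartup(), getPhase() and stop(Runnable); the latch would additionally be exposed as the snmpServerCloseLatch bean that main() looks up):

@Component
public class SnmpServerBean implements SmartLifecycle {

    private final CountDownLatch closeLatch = new CountDownLatch(1); // count 1
    private volatile boolean running;

    @Override
    public void start() {
        // start the async SNMP server here
        running = true;
    }

    @Override
    public void stop() {
        // stop the SNMP server, then release main()
        running = false;
        closeLatch.countDown();
    }

    @Override
    public boolean isRunning() {
        return running;
    }
}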
But what's wrong with this is that my main method is responsible for waiting for my custom server bean to stop. That just doesn't feel right.
How does spring-boot-starter-web do this, for example? When it starts Tomcat, it keeps running until the shutdown hook fires, and it doesn't need any managing code in the main method. It stops only when the context receives a shutdown signal.
The behaviour is the same when, for example, I have a @Scheduled method in my bean: Spring doesn't stop the context automatically either, only on CTRL-C.
I want to achieve a similar effect. My main method should have only one line: start the context. The context should start and stop my async server when it starts or stops (already achieved via SmartLifecycle) and should not stop until shutdown is requested (CTRL-C, SIGINT, etc.).
My investigation led me to the core of the problem: daemon threads.
The SNMP server implementation I use (snmp4j) uses daemon threads internally. So once the SNMP server has started, there are no live user threads left in the JVM, and it exits.
TL;DR:
Just add this method to any bean (the SNMP server bean is a good candidate):
@Scheduled(fixedDelay = 1000 * 60 * 60) // every hour
public void doNothing() {
    // Forces Spring Scheduling managing thread to start
}
(Do not forget to add @EnableScheduling to your Spring configuration.)
Explanation:
To prevent the Spring context from stopping while the SNMP server is still running, we need some non-daemon thread to stay alive in the JVM. It doesn't have to be the main thread, so we can let the main method finish.
We can start a new non-daemon thread from our server bean's start method. This thread waits on a lock inside a while loop that checks a running flag, while our stop method sets running to false and calls notifyAll on the lock.
This way, our non-daemon thread stays alive until the shutdown hook is triggered, and prevents the JVM from exiting.
After the shutdown hook fires, the Spring context's close method calls every SmartLifecycle bean's stop method; that calls the SNMP server bean's stop method, which sets running to false, which stops our non-daemon thread, which allows the JVM to exit gracefully.
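A sketch of that manual approach (my illustration of the description above, not the author's code):

public class KeepAliveThread {

    private final Object lock = new Object();
    private boolean running;

    public void start() { // call from the server bean's start()
        synchronized (lock) {
            running = true;
        }
        Thread t = new Thread(() -> {
            synchronized (lock) {
                while (running) {
                    try {
                        lock.wait(); // parked until stop() notifies
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }, "keep-alive");
        t.setDaemon(false); // the non-daemon thread that keeps the JVM alive
        t.start();
    }

    public void stop() { // call from the server bean's stop()
        synchronized (lock) {
            running = false;
            lock.notifyAll();
        }
    }
}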
Or we can use Spring's scheduling thread in a similar way. It is also a non-daemon thread, so it prevents the JVM from exiting. And Spring manages this thread itself, so it automatically stops it when the shutdown hook is triggered.
To make Spring's scheduling thread start, we need any @Scheduled method in any bean.
I think the first (manual) approach is still more "correct", although it requires more async coding (which, as we all know, is error-prone). Who knows how Spring will change its scheduling implementation in the future.
SpringApplication app = new SpringApplication(Main.class);
app.setRegisterShutdownHook(false);
ConfigurableApplicationContext applicationContext = app.run();
Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
    @Override
    public void run() {
        // do your things
        applicationContext.close();
    }
}));
I'm experimenting with Spring's DeferredResult on Tomcat, and I'm getting crazy results. Am I doing something wrong, or is there some bug in Spring or Tomcat? My code is simple enough.
@Controller
public class Test {

    private DeferredResult<String> deferred;

    static class DoSomethingUseful implements Runnable {
        public void run() {
            try { Thread.sleep(2000); } catch (InterruptedException e) { }
        }
    }

    @RequestMapping(value="/test/start")
    @ResponseBody
    public synchronized DeferredResult<String> start() {
        deferred = new DeferredResult<>(4000L, "timeout\n");
        deferred.onTimeout(new DoSomethingUseful());
        return deferred;
    }

    @RequestMapping(value="/test/stop")
    @ResponseBody
    public synchronized String stop() {
        deferred.setResult("stopped\n");
        return "ok\n";
    }
}
So. The start request creates a DeferredResult with a 4 second timeout. The stop request will set a result on the DeferredResult. If you send stop before or after the deferred result times out, everything works fine.
However if you send stop at the same time as start times out, things go crazy. I've added an onTimeout action to make this easy to reproduce, but that's not necessary for the problem to occur. With an APR connector, it simply deadlocks. With a NIO connector, it sometimes works, but sometimes it incorrectly sends the "timeout" message to the stop client and never answers the start client.
To test this:
curl http://localhost/test/start & sleep 5; curl http://localhost/test/stop
I don't think I'm doing anything wrong. The Spring documentation seems to say it's okay to call setResult at any time, even after the request has already expired, and from any thread ("the application can produce the result from a thread of its choice").
Versions used: Tomcat 7.0.39 on Linux, Spring 3.2.2.
This is an excellent bug find! Just adding more information about the bug (which has since been fixed) for better understanding.
There was a synchronized block inside setResult() that extended up to the point of submitting a dispatch. This can cause a deadlock if a timeout occurs at the same time, since the Tomcat timeout thread has its own locking that permits only one thread to do timeout or dispatch processing.
Detailed explanation:
When you call "stop" at the same time as the request times out, two threads attempt to lock the DeferredResult object 'deferred':
1. The thread that executes the "onTimeout" handler.
Here is the excerpt from the Spring doc:
This onTimeout method is called from a container thread when an async request times out before the DeferredResult has been set. It may invoke setResult or setErrorResult to resume processing.
2. Another thread that executes the "stop" service.
If the dispatch processing called during the stop() service obtains the 'deferred' lock, it waits for a Tomcat lock (say, TomcatLock) to finish the dispatch.
And if the other thread doing timeout handling has already acquired TomcatLock, that thread waits to acquire the lock on 'deferred' to complete setResult()!
So we end up in a classic deadlock situation!
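For readers less familiar with the pattern, a stripped-down illustration of this kind of lock-order inversion (a generic sketch, not the actual Spring/Tomcat code):

public class LockOrderDeadlock {

    private static final Object deferred = new Object();
    private static final Object tomcatLock = new Object();

    public static void main(String[] args) {
        // "stop" thread: takes deferred first, then needs tomcatLock
        new Thread(() -> {
            synchronized (deferred) {
                pause();
                synchronized (tomcatLock) { /* dispatch */ }
            }
        }).start();
        // timeout thread: takes tomcatLock first, then needs deferred
        new Thread(() -> {
            synchronized (tomcatLock) {
                pause();
                synchronized (deferred) { /* setResult */ }
            }
        }).start();
        // each thread now holds the lock the other one needs: deadlock
    }

    private static void pause() { // widen the race window
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}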
I'm wondering whether a java.util.concurrent.ExecutorService should be shut down after all tasks have been completed or cancelled.
I have a method like this:
public void testProxies() {
    // 5 threads
    ExecutorService exec = Executors.newFixedThreadPool(5);
    try {
        while (condition) {
            exec.execute(new Runnable() {
                @Override
                public void run() {
                    // some task
                }
            });
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        exec.shutdown(); // should be shutdown here?
    }
}
Is that a correct way of using an ExecutorService?
How can I reuse the ExecutorService?
Should the ExecutorService be shut down, or should I just let it go?
If you shut it down, you can't reuse it.
If you don't shut it down, your program won't be able to exit, because there will be live non-daemon threads.
So you need to call shutdown at some stage to let your program exit, but only when you know that you don't need to submit additional tasks to your executor.
What I generally do:
I make the ExecutorService a field of my class.
I provide a stop or shutdown method, which the user of my class needs to call, and which calls the shutdown method of the executor. Note that the executor won't actually shut down until all the submitted tasks have completed (or have been successfully cancelled).
An alternative is to add a shutdown hook which will shut down your executor when the JVM exits.
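A sketch of both options combined (the class and method names are mine):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Worker {

    private final ExecutorService executor = Executors.newFixedThreadPool(5);

    public Worker() {
        // option 2: a shutdown hook so the pool dies when the JVM exits
        Runtime.getRuntime().addShutdownHook(new Thread(executor::shutdown));
    }

    public void submit(Runnable task) {
        executor.execute(task);
    }

    // option 1: explicit stop method for users of this class to call
    public void stop() {
        executor.shutdown(); // already-submitted tasks still run to completion
    }
}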
Yes. An ExecutorService should be shut down if you don't want to execute any more tasks.
On a single instance of Tomcat I have a thread that is started when the context is initialized. Something like this:
public class MyContextListener implements ServletContextListener {

    private MyThread thread = null;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Start thread...
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Stop thread...
    }
}
This thread performs some important jobs in the system every 10 minutes, and it was working fine.
Now I have switched to a cluster of two instances of Tomcat, and this thread runs on both instances. I'm trying to achieve different behavior.
What I'm trying to achieve:
This thread should be running on only one instance at a time.
If the first instance fails (the one on which the thread was running), the thread should be started on the second instance.
I would be grateful for any hint.
What is my application logic?
The application logic executed by the thread is as follows:
1. Read something from the DB.
2. Analyze the DB information.
3. Make an HTTP request to an external system, if needed.
4. Sleep the thread for another 10 minutes.
The point is: if I have 2 instances of Tomcat, only one should execute this logic.
If I understand correctly, you are not really allowed to start new threads yourself in your application when using a web application server. All threads must be managed by the application server.