I'm currently using Jedis version 2.9.0 in my Maven webapp. It consists of a loop that retrieves data for caching every five minutes. For that purpose, I'm creating a JedisPool in the app's scheduler class when the whole app starts. It runs only once at app startup and then never again.
@WebListener
public class appScheduler implements ServletContextListener {

    private static JedisPool jedisPool;
    private ScheduledExecutorService scheduler;

    public void contextInitialized(ServletContextEvent event) {
        logger.info("contextInitialized: " + event);
        logger.info("Creating scheduler for cache updates...");
        boolean loadStatus = PropertiesLoader.getInstance().load();
        if (!loadStatus) {
            logger.error("Error loading properties");
            System.exit(1);
        }
        scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new UpdateCacheJob(), 1, 5, TimeUnit.MINUTES);
        int port = 6379;
        try {
            port = Integer.parseInt(PropertiesLoader.getInstance().getCachePort());
        } catch (NumberFormatException e) {
            logger.error("Invalid port in properties file for cache.");
        }
        jedisPool = new JedisPool(PropertiesLoader.getInstance().getPoolConfig(),
                PropertiesLoader.getInstance().getCacheEndpoint(), port);
    }

    public static Jedis getJedisResource() {
        return jedisPool.getResource();
    }
}
My pool configuration:
final JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxTotal(128);
poolConfig.setMaxIdle(128);
poolConfig.setMinIdle(16);
poolConfig.setTestOnBorrow(true);
poolConfig.setTestOnReturn(true);
poolConfig.setTestWhileIdle(true);
poolConfig.setMinEvictableIdleTimeMillis(60000);
poolConfig.setTimeBetweenEvictionRunsMillis(30000);
poolConfig.setNumTestsPerEvictionRun(3);
poolConfig.setBlockWhenExhausted(true);
return poolConfig;
Every five minutes a pipeline is created in a different class to sync the new data to the cache:
Jedis jedis = schedulerFunction.getJedisResource(); //retrieve resource from scheduler class
Pipeline pipeline = jedis.pipelined(); //create the pipeline
for (String key : list.keySet()) {
pipeline.setex(key, 3600, data_to_fill_cache);
}
pipeline.sync(); //sync pipeline
logger.info("Cache synched... ");
Everything works fine for many hours, but then suddenly stops with the following error:
Could not get a resource from the pool
It happens on the line that retrieves the resource to create the Jedis instance in the five-minute loop. It doesn't happen consistently at the same time, though: it could happen after two hours or after ten, it's never the same. The pipeline follows the same process every five minutes and writes the same data to Redis, so it's not a problem of data integrity or a sudden change that could affect the process; I've ruled out other causes. The resource has been retrieved hundreds of times before the error suddenly appears.
I've been looking through all the documentation I can find, but I have been unable to identify a reason or a solution.
I think all your resources have been borrowed from the pool but none of them have been returned.
You have to return each resource after using it. If you're using a recent enough version of Jedis, you can return the resource by calling jedis.close() or by using try-with-resources.
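For example, a minimal sketch of the five-minute job using try-with-resources (assuming the same schedulerFunction helper, key map and logger from the question):
try (Jedis jedis = schedulerFunction.getJedisResource()) { // Jedis implements Closeable in 2.9.0
    Pipeline pipeline = jedis.pipelined();
    for (String key : list.keySet()) {
        pipeline.setex(key, 3600, data_to_fill_cache);
    }
    pipeline.sync();
    logger.info("Cache synched... ");
} // close() runs here and returns the connection to the pool, even if an exception was thrown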
Related
In my Spring application there is a scheduler for executing some task. The @Scheduled annotation is not used because the schedule is quite complicated: it is dynamic and it uses some data from the database. So a simple endless loop with thread sleeping is used instead, and the sleeping interval is changed according to some rules. Maybe all this could be done with @Scheduled, but the question is not about that.
Below is simple example:
@Service
public class SomeService {

    @PostConstruct
    void init() {
        new Thread(() -> {
            while (true) {
                System.out.println(new Date());
                try {
                    Thread.sleep(1000);
                } catch (Exception ex) {
                    System.out.println("end");
                    return;
                }
            }
        }).start();
    }
}
The code works fine, but there is some trouble with killing that new thread. When I stop the application from Tomcat, the new thread keeps running: the Tomcat manager page shows the application as stopped, but in the Tomcat log files I still see output from the thread.
So what is the problem? How should I change the code so that the thread is killed when the application is stopped?
Have you tried implementing a @PreDestroy method, which will be invoked before the WebApplicationContext is closed, to change a boolean flag used in your loop? Though it seems strange that your objects are not discarded even when the application is stopped...
class Scheduler {

    private AtomicBoolean booleanFlag = new AtomicBoolean(true);

    @PostConstruct
    private void init() {
        new Thread(() -> {
            while (booleanFlag.get()) {
                // do whatever you want
            }
        }).start();
    }

    @PreDestroy
    private void destroy() {
        booleanFlag.set(false);
    }
}
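The flag alone will not wake a thread that is blocked in Thread.sleep(), as in the question's loop. A possible variation (my own sketch, not part of the original answer) keeps a reference to the thread and also interrupts it in @PreDestroy:
class Scheduler {

    private final AtomicBoolean running = new AtomicBoolean(true);
    private Thread worker;

    @PostConstruct
    private void init() {
        worker = new Thread(() -> {
            while (running.get()) {
                try {
                    System.out.println(new Date());
                    Thread.sleep(1000);
                } catch (InterruptedException ex) {
                    // interrupted during shutdown: restore the flag and leave the loop
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        worker.start();
    }

    @PreDestroy
    private void destroy() {
        running.set(false);
        worker.interrupt(); // wakes the thread if it is currently sleeping
    }
}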
I am trying to change Quartz sequential execution to parallel execution.
It is working fine and performance-wise it seems good, but the spawned (created) threads are not destroyed.
They are still in the RUNNABLE state; why, and how can I fix that?
Please guide me.
Code is here:
@Override
protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
    logger.error("Result Processing executed");
    List<Object[]> lstOfExams = examService.getExamEntriesForProcessingResults();
    String timeZone = messageService.getMessage("org.default_timezone", null, Locale.getDefault());
    if (lstOfExams != null && !lstOfExams.isEmpty()) {
        ThreadPoolTaskExecutor threadPoolExecuter = new ThreadPoolTaskExecutor();
        threadPoolExecuter.setCorePoolSize(lstOfExams.size());
        threadPoolExecuter.setMaxPoolSize(lstOfExams.size() + 1);
        threadPoolExecuter.setBeanName("ThreadPoolTaskExecutor");
        threadPoolExecuter.setQueueCapacity(100);
        threadPoolExecuter.setThreadNamePrefix("ThreadForUpdateExamResult");
        threadPoolExecuter.initialize();
        for (Object[] obj : lstOfExams) {
            if (StringUtils.isNotBlank((String) obj[2])) {
                timeZone = obj[2].toString();
            }
            try {
                Userexams userexams = examService.findUserExamById(Long.valueOf(obj[0].toString()));
                if (userexams.getExamresult() == null) {
                    UpdateUserExamDataThread task = new UpdateUserExamDataThread(obj, timeZone);
                    threadPoolExecuter.submit(task);
                }
                // testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(), obj[3]==null?null:obj[3].toString(), timeZone, obj[5].toString(), obj[1].toString());
                // logger.error("Percentage Marks:::::"+result.getPercentageCatScore());
            } catch (Exception e) {
                logger.error("Exception at ResultProcessingJob extends QuartzJobBean executeInternal(JobExecutionContext context) throws JobExecutionException", e);
                continue;
            }
        }
        threadPoolExecuter.shutdown();
    }
}
UpdateUserExamDataThread.class
@Component
//@Scope(value="prototype", proxyMode=ScopedProxyMode.TARGET_CLASS)
//public class UpdateUserExamDataThread extends ThreadLocal<String> //implements Runnable {
public class UpdateUserExamDataThread implements Runnable {

    private Logger log = Logger.getLogger(UpdateUserExamDataThread.class);

    @Autowired
    ExamService examService;

    @Autowired
    TestEvaluator testEvaluator;

    private Object[] obj;
    private String timeZone;

    public UpdateUserExamDataThread(Object[] obj, String timeZone) {
        super();
        this.obj = obj;
        this.timeZone = timeZone;
    }

    @Override
    public void run() {
        String threadName = String.valueOf(obj[0]);
        log.info("UpdateUserExamDataThread Start For:::::" + threadName);
        testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(),
                obj[3] == null ? null : obj[3].toString(), timeZone, obj[5].toString(), obj[1].toString());
        // update examResult
        log.info("UpdateUserExamDataThread End For:::::" + threadName);
    }
}
TestEvaluatorImpl.java
@Override
@Transactional
public Examresult generateTestResultAsPerEvaluator(Long userExamId, String evaluatorType, String codingLanguage,
        String timeZoneFollowed, String inctenceId, String userId) {
    dbSchema = messageService.getMessage("database.default_schema", null, Locale.getDefault());
    try {
        // Some Methods
        return examResult;
    } catch (Exception e) {
        log.error(e);
    }
}
I can provide a thread dump file if needed.
It seems you create a thread pool the same size as the number of exams, which is not optimal.
// Core pool size is = number of exams
threadPoolExecuter.setCorePoolSize(lstOfExams.size());
// Max pool size is just 1 + exam size.
threadPoolExecuter.setMaxPoolSize(lstOfExams.size()+1);
You have to consider that:
- If you create and start a thread pool, as many threads as the core pool size are started immediately.
- The max pool size only takes effect when you submit more tasks than the core pool threads can process right now AND the queue is full (in this case 100). That means a new thread is only created when the number of submitted tasks exceeds 100 + the exam count.
In your case I would set the core pool size to 5 or 10 (it actually depends on how many cores your target CPU has and/or how IO-bound the submitted tasks are).
The max pool size can be double that, but it has no effect until the queue is full.
To let the number of live threads decrease after the submitted work is done, you have to set two parameters.
setKeepAliveSeconds(int keepAliveSeconds) lets threads shut down automatically if they have not been used for the given number of seconds (60 by default, which is usually fine), BUT this normally only applies to non-core pool threads.
To also shut down core threads after keepAliveSeconds, you have to set setAllowCoreThreadTimeOut(boolean allowCoreThreadTimeOut) to true; it is false by default so that the core pool stays alive as long as the application is running.
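Putting those settings together, a minimal sketch of such an executor configuration could look like this (the pool sizes are illustrative assumptions, not values taken from the question):
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(10);              // e.g. roughly the number of CPU cores
executor.setMaxPoolSize(20);               // only used once the queue is full
executor.setQueueCapacity(100);
executor.setKeepAliveSeconds(60);          // idle threads die after 60 seconds
executor.setAllowCoreThreadTimeOut(true);  // let core threads time out as well
executor.setThreadNamePrefix("ThreadForUpdateExamResult");
executor.initialize();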
I hope it helps.
I suspect that one of your threads waits indefinitely for an IO request answer. For example, you try to connect to a remote host where you did not set a connection timeout and the host does not answer. In this case, you can shut down all executing tasks forcefully by calling the shutdownNow method of the underlying ExecutorService, and then analyze the InterruptedIOException thrown by the offending threads.
Replace
threadPoolExecuter.shutdown();
with the code below so you can examine the errors.
ExecutorService executorService = threadPoolExecuter.getThreadPoolExecutor();
executorService.shutdownNow();
This will send interrupt signal to all running threads.
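For the interrupt to be visible, the task itself has to react to it. A hedged sketch of how the question's run() method could surface this (my own illustration, using the same fields as UpdateUserExamDataThread):
@Override
public void run() {
    String threadName = String.valueOf(obj[0]);
    try {
        testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(),
                obj[3] == null ? null : obj[3].toString(), timeZone, obj[5].toString(), obj[1].toString());
    } catch (Exception e) {
        // an InterruptedIOException (or a set interrupt flag) points at a task that was
        // blocked on IO when shutdownNow() delivered the interrupt
        if (e instanceof InterruptedIOException || Thread.currentThread().isInterrupted()) {
            log.error("UpdateUserExamDataThread interrupted while blocked, exam id " + threadName, e);
        } else {
            log.error("UpdateUserExamDataThread failed for exam id " + threadName, e);
        }
    }
}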
The threads are not waiting on IO from some remote server, because in that case the method being executed would be in some JDBC driver class; they are currently all in UpdateUserExamDataThread.run(), line 37.
Now the question is: what is the code at UpdateUserExamDataThread.java line 37?
Unfortunately, the UpdateUserExamDataThread.java given at the moment is incomplete and/or not the version really executed: the package declaration is missing and it ends at line 29.
I suspect the issue is simply that you are calling run() instead of execute() when spawning the task thread using submit(). There is probably some expectation when using submit that threads kill themselves when the task is finished rather than terminating at the end of the run method.
I just needed to increase the priority of the threads and create a number of threads matching the number of cores in the processor.
protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
    logger.error("Result Processing executed");
    List<Object[]> lstOfExams = examService.getExamEntriesForProcessingResults();
    String timeZone = messageService.getMessage("org.default_timezone", null, Locale.getDefault());
    int cores = Runtime.getRuntime().availableProcessors();
    if (lstOfExams != null && !lstOfExams.isEmpty()) {
        ThreadPoolTaskExecutor threadPoolExecuter = new ThreadPoolTaskExecutor();
        threadPoolExecuter.setCorePoolSize(cores);
        // threadPoolExecuter.setMaxPoolSize(Integer.MAX_VALUE);
        threadPoolExecuter.setBeanName("ThreadPoolTaskExecutor");
        // threadPoolExecuter.setQueueCapacity(Integer.MAX_VALUE);
        threadPoolExecuter.setQueueCapacity(lstOfExams.size() + 10);
        threadPoolExecuter.setThreadNamePrefix("ThreadForUpdateExamResult");
        threadPoolExecuter.setWaitForTasksToCompleteOnShutdown(true);
        threadPoolExecuter.setThreadPriority(10);
        threadPoolExecuter.initialize();
        for (Object[] obj : lstOfExams) {
            if (StringUtils.isNotBlank((String) obj[2])) {
                timeZone = obj[2].toString();
            }
            try {
                Userexams userexam = examService.findUserExamById(Long.valueOf(obj[0].toString()));
                if (userexam.getExamresult() == null) {
                    UpdateUserExamDataThread task = new UpdateUserExamDataThread(obj, timeZone, testEvaluator);
                    // threadPoolExecuter.submit(task);
                    threadPoolExecuter.execute(task);
                }
                // testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(), obj[3]==null?null:obj[3].toString(), timeZone, obj[5].toString(), obj[1].toString());
                // logger.error("Percentage Marks:::::"+result.getPercentageCatScore());
            } catch (Exception e) {
                logger.error("Exception at ResultProcessingJob extends QuartzJobBean executeInternal(JobExecutionContext context) throws JobExecutionException", e);
                continue;
            }
        }
        threadPoolExecuter.shutdown();
    }
}
I have a web app (with Spring/Spring Boot) running on Tomcat 7. There are some ExecutorService instances defined like:
public static final ExecutorService TEST_SERVICE = new ThreadPoolExecutor(10, 100, 60L,
TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1000), new ThreadPoolExecutor.CallerRunsPolicy());
The tasks are important and must complete properly. I catch exceptions and save the failures to the DB for retry, like this:
try {
    ThreadPoolHolder.TEST_SERVICE.submit(new Runnable() {
        @Override
        public void run() {
            try {
                boolean isSuccess = false;
                int tryCount = 0;
                while (++tryCount < CAS_COUNT_LIMIT) {
                    isSuccess = doWork(param);
                    if (isSuccess) {
                        break;
                    }
                    Thread.sleep(1000);
                }
                if (!isSuccess) {
                    saveFail(param);
                }
            } catch (Exception e) {
                log.error("test error! param : {}", param, e);
                saveFail(param);
            }
        }
    });
} catch (Exception e) {
    log.error("test error! param:{}", param, e);
    saveFail(param);
}
So, when Tomcat is shutting down, what will happen to the threads of the pool (running or waiting in the queue)? How can I make sure that all tasks are either completed properly before shutdown or saved to the DB for retry?
Tomcat has built-in thread-leak detection, so you should get an error when the application is undeployed. As a developer it is your responsibility to tie any object you create to the web application's lifecycle, which means you should never have static state that is not a constant.
If you are using Spring Boot, your Spring context is already tied to the application's lifecycle, so the best way is to create your executor as a Spring bean and let Spring shut it down when the application stops. Here is an example you can put in any @Configuration class.
@Bean(destroyMethod = "shutdownNow", name = "MyExecutorService")
public ThreadPoolExecutor executor() {
    ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(10, 100, 60L,
            TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1000),
            new ThreadPoolExecutor.CallerRunsPolicy());
    return threadPoolExecutor;
}
As you can see the #Bean annotation allows you to specify a destroy method which will be executed when the Spring context is closed. In addition I have added the name property, this is because Spring typically creates a number of ExecutorServices for stuff like async web processing. When you need to use the executor, just Autowire it as any other spring bean.
@Autowired
@Qualifier(value = "MyExecutorService")
ThreadPoolExecutor executor;
Remember, static is EVIL: you should only use static for constants and potentially immutable objects.
EDIT
If you need to block Tomcat's shutdown procedure until the tasks have been processed, you need to wrap the executor in a component for more control, like this.
@Component
public class ExecutorWrapper implements DisposableBean {

    private final ThreadPoolExecutor threadPoolExecutor;

    public ExecutorWrapper() {
        threadPoolExecutor = new ThreadPoolExecutor(10, 100, 60L,
                TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1000), new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public <T> Future<T> submit(Callable<T> task) {
        return threadPoolExecutor.submit(task);
    }

    public void submit(Runnable runnable) {
        threadPoolExecutor.submit(runnable);
    }

    @Override
    public void destroy() throws Exception {
        threadPoolExecutor.shutdown();
        boolean terminated = threadPoolExecutor.awaitTermination(1, TimeUnit.MINUTES);
        if (!terminated) {
            List<Runnable> runnables = threadPoolExecutor.shutdownNow();
            // log the runnables that were not executed
        }
    }
}
With this code you call shutdown first so no new tasks can be submitted, then wait some time for the executor to finish the current tasks and drain the queue. If it does not finish in time, you call shutdownNow to interrupt the running tasks and get the list of unprocessed ones.
Note: DisposableBean does the trick, but the best solution is actually to implement the SmartLifecycle interface. You have to implement a few more methods, but you get greater control, because no threads are started until all beans have been instantiated and the entire bean hierarchy is wired together; it even allows you to specify the order in which components should be started.
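A rough sketch of what the SmartLifecycle variant could look like (my own illustration of the interface, not code from the original answer):
@Component
public class ExecutorLifecycle implements SmartLifecycle {

    private final ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(10, 100, 60L,
            TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1000), new ThreadPoolExecutor.CallerRunsPolicy());

    private volatile boolean running;

    @Override
    public void start() {
        // nothing to warm up: the pool creates threads lazily as tasks arrive
        running = true;
    }

    @Override
    public void stop() {
        threadPoolExecutor.shutdown();
        try {
            if (!threadPoolExecutor.awaitTermination(1, TimeUnit.MINUTES)) {
                threadPoolExecutor.shutdownNow();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        running = false;
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public int getPhase() {
        // a higher phase starts later and stops earlier than lower-phase components
        return Integer.MAX_VALUE;
    }
}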
Tomcat, like any Java application, will not exit until all non-daemon threads have ended. The ThreadPoolExecutor in the example above uses the default thread factory and will create non-daemon threads.
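If you actually want the pool's threads not to block JVM exit, one option (my own suggestion, not something the answers above prescribe) is to pass a thread factory that marks its threads as daemons; note that daemon threads can be killed mid-task, so this only suits work you can afford to lose:
ThreadPoolExecutor executor = new ThreadPoolExecutor(10, 100, 60L,
        TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1000),
        new ThreadFactory() {
            @Override
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r);
                t.setDaemon(true); // daemon threads do not keep the JVM alive
                return t;
            }
        },
        new ThreadPoolExecutor.CallerRunsPolicy());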
I need to send an email during the registration process, so I am using the Java Mail API. This is working fine, but I observed that the email process takes nearly 6 seconds (which is too long), so the Ajax call makes the user wait too long for a response.
For this reason I have decided to use a background thread for sending the email, so the user does not have to wait for the Ajax call response (a Jersey REST web service call).
My question: is it good practice to create threads in a web application for every request?
@Path("/insertOrUpdateUser")
public class InsertOrUpdateUser {

    final static Logger logger = Logger.getLogger(InsertOrUpdateUser.class);

    @GET
    @Consumes("application/text")
    @Produces("application/json")
    public String getSalesUserData(@QueryParam(value = "empId") String empId)
            throws JSONException, SQLException {
        JSONObject final_jsonobject = new JSONObject();
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        executorService.execute(new Runnable() {
            public void run() {
                try {
                    SendEmailUtility.sendmail(emaildummy);
                } catch (IOException e) {
                    logger.error("failed", e);
                }
            }
        });
    }
    } catch (SQLException e) {
    } catch (Exception e) {
    } finally {
    }
    return response;
    }
}
And this is my Utility class for sending email
public class SendEmailUtility {

    public static String sendmail(String sendto) throws IOException {
        String result = "fail";
        Properties props_load = getProperties();
        final String username = props_load.getProperty("username");
        final String password = props_load.getProperty("password");
        Properties props_send = new Properties();
        props_send.put("mail.smtp.auth", "true");
        props_send.put("mail.smtp.starttls.enable", "true");
        props_send.put("mail.smtp.host", props_load.getProperty("mail.smtp.host"));
        props_send.put("mail.smtp.port", props_load.getProperty("mail.smtp.port"));
        Session session = Session.getInstance(props_send,
                new javax.mail.Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication(username, password);
                    }
                });
        try {
            Message message = new MimeMessage(session);
            message.setFrom(new InternetAddress(props_load.getProperty("setFrom")));
            message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(sendto));
            message.setText("Some Text to be send in mail");
            Transport.send(message);
            result = "success";
        } catch (MessagingException e) {
            result = "fail";
            logger.error("Exception Occured - sendto: " + sendto, e);
        }
        return result;
    }
}
Could you please let me know if this is good practice in a web application?
There are a host of ways you can handle it, and it all depends on whether your application server has enough resources (memory, threads, etc.) for your implementation, so you are the best person to decide which approach to take.
As such it is not bad practice to spawn parallel threads for doing something if it is justified by the design, but typically you should go with a controlled number of threads.
Please note that whether you use newSingleThreadExecutor() or newFixedThreadPool(nThreads), under the hood a ThreadPoolExecutor object is always created.
My recommendation is the second option in the list below, "Controlled number of threads", and there you can specify the max thread count as you see fit.
One thread for each request
In this approach one thread is created for each incoming request from the GUI, so if you get 10 requests for inserting/updating users then 10 threads are spawned to send the emails.
The downside of this approach is that there is no control over the number of threads, so you can end up running out of memory (for example an OutOfMemoryError when the JVM cannot create more native threads).
Please make sure to shut down your executor service, else you will end up wasting JVM resources.
// inside your getSalesUserData() method
ExecutorService emailExecutor = Executors.newSingleThreadExecutor();
emailExecutor.execute(new Runnable() {
    @Override
    public void run() {
        try {
            SendEmailUtility.sendmail(emaildummy);
        } catch (IOException e) {
            logger.error("failed", e);
        }
    }
});
emailExecutor.shutdown(); // it is very important to shutdown your non-singleton ExecutorService.
Controlled number of threads
In this approach a pre-defined number of threads is present, and those process your email-sending work. In the example below I start a thread pool with a maximum of 10 threads and use a LinkedBlockingQueue, which ensures that if there are more than 10 requests while all 10 threads are busy, the excess requests are queued rather than lost; that is the advantage of the LinkedBlockingQueue implementation of Queue.
You can initialize your singleton ThreadPoolExecutor when the application server starts; if there are no requests then no threads will be present, so it is safe to do so. In fact I use a similar configuration for my production application.
I am using a time-to-live of 1 second, so if a thread is idle in the JVM for more than 1 second it will die.
Please note that since the same thread pool is used for processing all your requests, it should be a singleton; do not shut down this thread pool, else your tasks will never be executed.
// creating a thread pool with 10 threads, a max alive time of 1 second, and a linked blocking queue for unlimited queuing of requests.
// if you want to process with 100 threads then replace both instances of 10 with 100, the rest can remain the same...
// this should be a singleton
ThreadPoolExecutor executor = new ThreadPoolExecutor(10, 10, 1, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

// inside your getSalesUserData() method
executor.execute(new Runnable() {
    @Override
    public void run() {
        try {
            SendEmailUtility.sendmail(emaildummy);
        } catch (IOException e) {
            logger.error("failed", e);
        }
    }
});
Java's default cached thread pool
This approach is much like the one above, except that Java initializes the ThreadPoolExecutor for you as ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
Here the maximum number of threads is Integer.MAX_VALUE, so threads are created as needed, and the time-to-live is 60 seconds.
If you want to go this way, here is how:
// this should be a singleton
ExecutorService emailExecutor = Executors.newCachedThreadPool();

// from your getSalesUserData() method
emailExecutor.execute(new Runnable() {
    @Override
    public void run() {
        try {
            SendEmailUtility.sendmail(emaildummy);
        } catch (IOException e) {
            logger.error("failed", e);
        }
    }
});
Manually creating an ExecutorService on a Java web server is a bad idea. In your implementation you create a pool of 10 threads for each request.
A better solution is to use ManagedExecutorService (example) if you work with Java EE 7, or ThreadPoolTaskExecutor if you work with Spring (docs).
If you work with Tomcat you should read this thread.
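For the Spring case, a hedged sketch of what a container-managed ThreadPoolTaskExecutor bean might look like (the bean name and pool sizes here are my own illustrative choices):
@Configuration
public class MailExecutorConfig {

    @Bean(name = "mailTaskExecutor")
    public ThreadPoolTaskExecutor mailTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(500);
        executor.setThreadNamePrefix("mail-");
        executor.setWaitForTasksToCompleteOnShutdown(true); // try to drain queued mails on shutdown
        return executor;
    }
}
Because ThreadPoolTaskExecutor is both an InitializingBean and a DisposableBean, Spring initializes it and shuts it down together with the application context, so the resource method never has to create or shut down a pool itself.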
The best practice is to use a single ExecutorService to provide a thread pool for all requests. You probably want to configure the ExecutorService with a non-zero but limited number of threads.
The idea here is that you have some threads that are reused throughout the lifetime of the application. You get the added benefit that if there is a temporary slowdown (or halt) in sending emails, you don't end up with a growing number of threads; instead, you end up with a growing number of pieces of work (emails to send) waiting to be executed, which is much less resource-intensive than extra threads.
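As a minimal sketch of that idea (reusing the question's SendEmailUtility and emaildummy), the pool is created once at startup and every request merely submits work to it:
// one shared, bounded pool for the whole application, created once at startup
ExecutorService mailPool = Executors.newFixedThreadPool(4); // non-zero but limited

// per request: hand the slow work to the pool and return immediately
mailPool.submit(() -> SendEmailUtility.sendmail(emaildummy));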
I am using a Java EmailSender class.
I simply started a new thread to send the mail, because it was blocking the main thread and I was getting a timeout exception.
String link = "http://localhost:PORT/api/v1/registration/confirm?token=" +token;
// Sending mail in a thread because it blocks the main thread
new Thread(
        () -> emailSender.sendMail(request.getEmail(), buildEmail(request.getFirstName(), link))).start();
Many JDBC calls (querying the DB and getting results) are executed through an ExecutorService. I found that when those calls are executed, the JDBC connections take a long time to get closed even though they are closed correctly in the code. What makes me say so is that when a load test is run through JMeter, the database shows many connections in "idle in transaction". If the number of threads running the test is high, the number of connections in idle in transaction goes up. If the test is run slowly, the connections get closed slowly (1, 2 minutes); that is, there are connections in idle in transaction, but after a few minutes they become idle. I use a connection pool here too.

If I run the JDBC querying functions in sequence (one after another), the database doesn't show any connections in idle in transaction.

Below is how I run my runnable tasks which run the JDBC queries. The TaskManager class handles all the ExecutorService-related functions.
public class TaskManager {

    final private ThreadServiceFactory threadFactory;
    private int concurrentThreadCount;
    private ExecutorService executerSV;
    private final CountDownLatch latch;

    // I keep a count of proposed tasks as serviceCount
    public TaskManager(int serviceCount) {
        threadFactory = new ThreadServiceFactory();
        this.concurrentThreadCount = serviceCount;
        latch = new CountDownLatch(serviceCount);
    }

    public void execute(ThreadService runnableTask) {
        Object rv = null;
        runnableTask.setCountDownLatch(latch);
        if (executerSV == null) {
            executerSV = Executors.newFixedThreadPool(this.concurrentThreadCount, getThreadFactory());
        }
        executerSV.execute(runnableTask);
    }

    public boolean holdUntilComplete() {
        try {
            latch.await();
            executerSV.shutdown();
            return true;
        } catch (InterruptedException e) {
            e.printStackTrace();
            return false;
        }
    }

    private ThreadServiceFactory getThreadFactory() {
        threadFactory.setDeamon(Boolean.FALSE);
        return threadFactory;
    }
}
In my test class:
public void test() {
    TaskManager tm = new TaskManager(3);
    tm.execute(queryTask1);
    tm.execute(queryTask2);
    tm.holdUntilComplete();
}
queryTask1 is a Runnable and it runs a JDBC select query.
If I run queryTask1.run(); queryTask2.run(); directly, then there are no idle-in-transaction connections in the DB.
I use Java 7. Can anyone please let me know where the problem is?
There is no code in your question that opens any connection to a database, so it is difficult to suggest an answer. However, since you state that you are using a connection pool, you should look at the pool configuration parameters, since they dictate how long an idle connection may stay open before becoming eligible for eviction. For instance, if you are running a connection pool in Tomcat, look particularly at the "minIdle", "maxIdle" and "minEvictableIdleTimeMillis" properties. See https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
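As an illustration only (these setters come from the Tomcat JDBC pool linked above; the values are assumptions, not tuned recommendations), such a configuration might look like:
// sketch using org.apache.tomcat.jdbc.pool
PoolProperties p = new PoolProperties();
p.setUrl("jdbc:postgresql://localhost:5432/mydb");   // hypothetical connection details
p.setDriverClassName("org.postgresql.Driver");
p.setUsername("user");
p.setPassword("secret");
p.setMaxActive(20);                        // upper bound on open connections
p.setMinIdle(2);                           // keep only a couple of idle connections around
p.setMaxIdle(10);                          // idle connections above this count are closed
p.setMinEvictableIdleTimeMillis(30000);    // idle for 30 s => eligible for eviction
p.setTimeBetweenEvictionRunsMillis(5000);  // evictor thread runs every 5 s
DataSource ds = new DataSource();          // org.apache.tomcat.jdbc.pool.DataSource
ds.setPoolProperties(p);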