Many JDBC calls (querying the DB and fetching results) are executed through an ExecutorService. I found that when those calls run this way, the JDBC connections take a long time to be closed, even though the code closes them correctly. I say so because, when a load test is run through JMeter, the database shows many connections in "idle in transaction". The more threads the test runs with, the more connections sit in "idle in transaction". If the test is run slowly, connections close slowly (1-2 minutes): there are connections in "idle in transaction", but after a few minutes they become idle. I use a connection pool here too. If I run the JDBC query functions in sequence (one after another), the database doesn't show any connections in "idle in transaction".
Below is how I run the runnable tasks that execute the JDBC queries. The TaskManager class handles all the ExecutorService-related work.
public class TaskManager {
final private ThreadServiceFactory threadFactory;
private int concurrentThreadCount;
private ExecutorService executerSV;
private final CountDownLatch latch;
// I keep a count of the proposed tasks as serviceCount
public TaskManager(int serviceCount) {
threadFactory = new ThreadServiceFactory();
this.concurrentThreadCount = serviceCount;
latch = new CountDownLatch(serviceCount);
}
public void execute( ThreadService runnableTask) {
Object rv = null;
runnableTask.setCountDownLatch(latch);
if(executerSV == null) {
executerSV = Executors.newFixedThreadPool(this.concurrentThreadCount, getThreadFactory());
}
executerSV.execute(runnableTask);
}
public boolean holdUntilComplete(){
try {
latch.await();
executerSV.shutdown();
return true;
} catch (InterruptedException e) {
e.printStackTrace();
return false;
}
}
private ThreadServiceFactory getThreadFactory(){
threadFactory.setDeamon( Boolean.FALSE);
return threadFactory;
}
}
In my test class:
public void test(){
TaskManager tm = new TaskManager(3);
tm.execute(queryTask1);
tm.execute(queryTask2);
tm.holdUntilComplete();
}
queryTask1 is a Runnable that runs a JDBC select query.
If I run queryTask1.run(); queryTask2.run(); directly instead, there are no "idle in transaction" connections in the DB.
I use Java 7. Can anyone let me know where the problem is?
There is no code in your question that opens any connection to a database, so it is difficult to suggest an answer. However, since you state that you are using a connection pool, you should look at the pool configuration parameters, since they dictate how long an idle connection may stay open before being eligible for eviction. For instance, if you are running a connection pool in Tomcat, look particularly at the "minIdle", "maxIdle" and "minEvictableIdleTimeMillis" properties. See https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
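For illustration, a minimal sketch of setting those properties programmatically with the Tomcat JDBC pool; the URL, driver and every value here are assumptions, not recommendations for your setup:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolConfigExample {
    public static DataSource buildDataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        p.setDriverClassName("org.postgresql.Driver");     // placeholder driver
        p.setMaxActive(20);                        // hard cap on open connections
        p.setMinIdle(2);                           // keep only a couple of idle connections around
        p.setMaxIdle(10);                          // surplus idle connections above this are closed
        p.setMinEvictableIdleTimeMillis(30000);    // idle for 30s => eligible for eviction
        p.setTimeBetweenEvictionRunsMillis(10000); // the evictor thread runs every 10s
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}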
Related
I'm currently using Jedis version 2.9.0 in my Maven webapp. It consists of a loop that retrieves data for caching every five minutes. For that purpose, I'm creating a JedisPool in the app's Scheduler class when the whole app starts. It runs only once on app start and then never again.
@WebListener
public class appScheduler implements ServletContextListener {
private static JedisPool jedisPool;
private ScheduledExecutorService scheduler;
public void contextInitialized(ServletContextEvent event) {
logger.info("contextInitialized: " + event);
logger.info("Creating scheduler for cache updates...");
boolean loadStatus = PropertiesLoader.getInstance().load();
if(!loadStatus){
logger.error("Error loading properties");
System.exit(1);
}
scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(new UpdateCacheJob(), 1, 5, TimeUnit.MINUTES);
int port = 6379;
try {
port = Integer.parseInt(PropertiesLoader.getInstance().getCachePort());
} catch (NumberFormatException e) {
logger.error("Invalid port in properties file for cache.");
}
jedisPool = new JedisPool(PropertiesLoader.getInstance().getPoolConfig(), PropertiesLoader.getInstance().getCacheEndpoint(), port);
}
public static Jedis getJedisResource() {
return jedisPool.getResource();
}
}
My pool configuration:
final JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxTotal(128);
poolConfig.setMaxIdle(128);
poolConfig.setMinIdle(16);
poolConfig.setTestOnBorrow(true);
poolConfig.setTestOnReturn(true);
poolConfig.setTestWhileIdle(true);
poolConfig.setMinEvictableIdleTimeMillis(60000);
poolConfig.setTimeBetweenEvictionRunsMillis(30000);
poolConfig.setNumTestsPerEvictionRun(3);
poolConfig.setBlockWhenExhausted(true);
return poolConfig;
Every five minutes a pipeline is created, in a different class, to sync the new data to the cache:
Jedis jedis = schedulerFunction.getJedisResource(); //retrieve resource from scheduler class
Pipeline pipeline = jedis.pipelined(); //create the pipeline
for (String key : list.keySet()) {
pipeline.setex(key, 3600, data_to_fill_cache);
}
pipeline.sync(); //sync pipeline
logger.info("Cache synched... ");
Everything works fine for many hours, but then suddenly stops with the following error:
Could not get a resource from the pool
It happens on the line that retrieves the resource to create the Jedis instance in the five-minute loop. It doesn't happen consistently at the same time, though: it could happen after two hours or after ten, it's never the same. The pipeline follows the same process every five minutes and writes the same data to Redis, so it's not a problem of data integrity or a sudden change that could affect the process; I've ruled out other causes. The resource has been retrieved hundreds of times before the error suddenly appears.
I've been looking through all documentation on the internet but I have been unable to find a reason or a solution.
I think all your resources have been retrieved but none of them have been returned.
You have to return each resource after using it. If you're using a recent enough version of Jedis, you can return the resource by calling jedis.close() or by using try-with-resources.
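For example, a sketch of the five-minute job using try-with-resources (in Jedis 2.9.0 Jedis implements Closeable, and close() returns a pooled resource to its pool); the names getJedisResource, list and data_to_fill_cache are taken from the question:

// try-with-resources guarantees the resource goes back to the pool,
// even if building or syncing the pipeline throws.
try (Jedis jedis = appScheduler.getJedisResource()) {
    Pipeline pipeline = jedis.pipelined();
    for (String key : list.keySet()) {
        pipeline.setex(key, 3600, data_to_fill_cache);
    }
    pipeline.sync();
}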
I am trying to change Quartz from sequential to parallel execution.
It is working fine, and performance-wise it seems good, but the spawned (created) threads are not destroyed.
They are still in the RUNNABLE state; why, and how can I fix that?
Please guide me.
The code is here:
@Override
protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
logger.error("Result Processing executed");
List<Object[]> lstOfExams = examService.getExamEntriesForProcessingResults();
String timeZone = messageService.getMessage("org.default_timezone", null, Locale.getDefault());
if(lstOfExams!=null&&!lstOfExams.isEmpty()){
ThreadPoolTaskExecutor threadPoolExecuter = new ThreadPoolTaskExecutor();
threadPoolExecuter.setCorePoolSize(lstOfExams.size());
threadPoolExecuter.setMaxPoolSize(lstOfExams.size()+1);
threadPoolExecuter.setBeanName("ThreadPoolTaskExecutor");
threadPoolExecuter.setQueueCapacity(100);
threadPoolExecuter.setThreadNamePrefix("ThreadForUpdateExamResult");
threadPoolExecuter.initialize();
for(Object[] obj : lstOfExams){
if(StringUtils.isNotBlank((String)obj[2]) ){
timeZone = obj[2].toString();
}
try {
Userexams userexams=examService.findUserExamById(Long.valueOf(obj[0].toString()));
if(userexams.getExamresult()==null){
UpdateUserExamDataThread task=new UpdateUserExamDataThread(obj,timeZone);
threadPoolExecuter.submit(task);
}
// testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(), obj[3]==null?null:obj[3].toString(),timeZone ,obj[5].toString() ,obj[1].toString());
// logger.error("Percentage Marks:::::"+result.getPercentageCatScore());
} catch (Exception e) {
Log.error("Exception at ResultProcessingJob extends QuartzJobBean executeInternal(JobExecutionContext context) throws JobExecutionException",e);
continue;
}
}
threadPoolExecuter.shutdown();
}
}
UpdateUserExamDataThread.java:
@Component
//@Scope(value="prototype", proxyMode=ScopedProxyMode.TARGET_CLASS)
//public class UpdateUserExamDataThread extends ThreadLocal<String> //implements Runnable {
public class UpdateUserExamDataThread implements Runnable {
private Logger log = Logger.getLogger(UpdateUserExamDataThread.class);
@Autowired
ExamService examService;
@Autowired
TestEvaluator testEvaluator;
private Object[] obj;
private String timeZone;
public UpdateUserExamDataThread(Object[] obj,String timeZone) {
super();
this.obj = obj;
this.timeZone = timeZone;
}
@Override
public void run() {
String threadName=String.valueOf(obj[0]);
log.info("UpdateUserExamDataThread Start For:::::"+threadName);
testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(), obj[3]==null?null:obj[3].toString(),timeZone ,obj[5].toString() ,obj[1].toString());
//update examResult
log.info("UpdateUserExamDataThread End For:::::"+threadName);
}
}
TestEvaluatorImpl.java
@Override
@Transactional
public Examresult generateTestResultAsPerEvaluator(Long userExamId, String evaluatorType, String codingLanguage,String timeZoneFollowed ,String inctenceId ,String userId) {
dbSchema = messageService.getMessage("database.default_schema", null, Locale.getDefault());
try {
//Some Methods
return examResult;
} catch (Exception e) {
    log.error(e);
    return null; // must also return on the exception path for the method to compile
}
}
I can provide a thread dump file if needed.
It seems you create a thread pool with the same size as the number of exams, which is not optimal.
// Core pool size is = number of exams
threadPoolExecuter.setCorePoolSize(lstOfExams.size());
// Max pool size is just 1 + exam size.
threadPoolExecuter.setMaxPoolSize(lstOfExams.size()+1);
You have to consider that:
- When you create a thread pool, up to core-pool-size threads are created as tasks come in and are then kept alive.
- The max pool size only becomes effective when you submit more tasks than the core pool threads can process right now AND the queue is full (in this case 100). That means an extra thread is only created once the number of pending tasks exceeds 100 + exam size.
In your case I would set the core pool size to 5 or 10 (it really depends on how many cores your target CPU has and/or how IO-bound the submitted tasks are).
The max pool size can be double that, but it has no effect until the queue is full.
To let the number of live threads decrease after the submitted work is done, you have to set two parameters.
setKeepAliveSeconds(int keepAliveSeconds): lets threads shut down automatically when they have been idle for the given number of seconds (60 by default, which is fine), BUT this normally only applies to non-core threads.
To shut down core threads after keepAliveSeconds as well, set setAllowCoreThreadTimeOut(boolean allowCoreThreadTimeOut) to true. It is false by default so that the core pool stays alive as long as the application is running.
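A minimal sketch of that configuration on Spring's ThreadPoolTaskExecutor (the pool sizes are example values, not tuned for your workload):

ThreadPoolTaskExecutor threadPoolExecuter = new ThreadPoolTaskExecutor();
threadPoolExecuter.setCorePoolSize(5);               // roughly the number of CPU cores
threadPoolExecuter.setMaxPoolSize(10);               // only used once the queue is full
threadPoolExecuter.setQueueCapacity(100);
threadPoolExecuter.setKeepAliveSeconds(60);          // idle threads die after 60 seconds...
threadPoolExecuter.setAllowCoreThreadTimeOut(true);  // ...including the core threads
threadPoolExecuter.initialize();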
I hope it helps.
I suspect that one of your threads is waiting indefinitely for an IO response. For example, you may be connecting to a remote host without setting a connection timeout and the host never answers. In this case, you can shut down all executing tasks forcefully by calling shutdownNow() on the underlying ExecutorService and then analyze the InterruptedIOException thrown by the offending threads.
Replace
threadPoolExecuter.shutdown();
with the following, so you can examine the errors.
ExecutorService executorService = threadPoolExecuter.getThreadPoolExecutor();
executorService.shutdownNow();
This will send interrupt signal to all running threads.
The threads are not waiting on IO from some remote server, because then the methods executing on those threads would be somewhere in the JDBC driver classes; instead they are currently all in UpdateUserExamDataThread.run(), line 37.
Now the question is: what is the code at UpdateUserExamDataThread.java line 37 ?
Unfortunately, the UpdateUserExamDataThread.java given at the moment is incomplete and/or not the version really executed: the package declaration is missing and it ends at line 29.
I suspect the issue is simply that you are calling submit() instead of execute() when spawning the task thread. There is probably some expectation when using submit() that threads kill themselves when the task is finished rather than terminating at the end of the run() method.
I just needed to increase the priority of the threads and create as many threads as there are processor cores.
protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
logger.error("Result Processing executed");
List<Object[]> lstOfExams = examService.getExamEntriesForProcessingResults();
String timeZone = messageService.getMessage("org.default_timezone", null, Locale.getDefault());
int cores = Runtime.getRuntime().availableProcessors();
if(lstOfExams!=null&&!lstOfExams.isEmpty()){
ThreadPoolTaskExecutor threadPoolExecuter = new ThreadPoolTaskExecutor();
threadPoolExecuter.setCorePoolSize(cores);
// threadPoolExecuter.setMaxPoolSize(Integer.MAX_VALUE);
threadPoolExecuter.setBeanName("ThreadPoolTaskExecutor");
// threadPoolExecuter.setQueueCapacity(Integer.MAX_VALUE);
threadPoolExecuter.setQueueCapacity(lstOfExams.size()+10);
threadPoolExecuter.setThreadNamePrefix("ThreadForUpdateExamResult");
threadPoolExecuter.setWaitForTasksToCompleteOnShutdown(true);
threadPoolExecuter.setThreadPriority(10);
threadPoolExecuter.initialize();
for(Object[] obj : lstOfExams){
if(StringUtils.isNotBlank((String)obj[2]) ){
timeZone = obj[2].toString();
}
try {
Userexams userexam=examService.findUserExamById(Long.valueOf(obj[0].toString()));
if(userexam.getExamresult()==null){
UpdateUserExamDataThread task=new UpdateUserExamDataThread(obj,timeZone,testEvaluator);
// threadPoolExecuter.submit(task);
threadPoolExecuter.execute(task);
}
// testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(), obj[3]==null?null:obj[3].toString(),timeZone ,obj[5].toString() ,obj[1].toString());
// logger.error("Percentage Marks:::::"+result.getPercentageCatScore());
} catch (Exception e) {
logger.error("Exception at ResultProcessingJob extends QuartzJobBean executeInternal(JobExecutionContext context) throws JobExecutionException",e);
continue;
}
}
threadPoolExecuter.shutdown();
}
}
I have server which receives requests from clients and based on the requests connects to some external website & does some operations.
I am using Apache Commons HttpClient (v 2.0.2) to do these connections (I know it's old, but I have to use it because of other restrictions).
My server is not going to get frequent requests. There may be a lot of requests when it's first deployed; after that it's only going to be a few requests a day, with occasional spurts of many requests.
All connections are going to be to one of 3 URLs - they may be http or https.
I was thinking of using a separate instance of HttpClient for each request.
Is there any need for me to use a common HttpClient object together with MultiThreadedHttpConnectionManager for the different connections?
How exactly does MultiThreadedHttpConnectionManager help - does it keep the connection open even after you call releaseConnection? How long will it keep it open?
All my connections are going to be GETs and they return 10-20 bytes at most. I am not downloading anything. The reason I am using HttpClient rather than the core Java libraries is that I may occasionally want to use HTTP 1.0 (I don't think the core Java classes support this) and I may also want to follow HTTP redirects automatically.
I think it all depends on what your SLAs are and whether the performance is within the acceptable/expected response times. Your solution will work without any issues, but it is not scalable if your application's demands grow over time.
Using a MultiThreadedHttpConnectionManager is a much more elegant/scalable solution than having to manage 3 independent HttpClient objects.
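A rough sketch of what that could look like, based on the Commons HttpClient 3.x-style API (double-check the class and method names against 2.0.2; the URL handling is made up):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.GetMethod;

public class SharedHttpClient {
    // One client for the whole application; the manager makes it safe to use from many threads.
    private static final MultiThreadedHttpConnectionManager manager =
            new MultiThreadedHttpConnectionManager();
    private static final HttpClient client = new HttpClient(manager);

    public static String fetch(String url) throws Exception {
        GetMethod get = new GetMethod(url);
        try {
            client.executeMethod(get);
            return get.getResponseBodyAsString();
        } finally {
            // releaseConnection() hands the connection back to the manager for reuse;
            // the manager decides how long to keep it open.
            get.releaseConnection();
        }
    }
}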
I use a PoolingHttpClientConnectionManager in a considerably multi-threaded environment and it works very well.
Here's an implementation of a Client pool:
public class HttpClientPool {
// Single-element enum to implement Singleton.
private static enum Singleton {
// Just one of me so constructor will be called once.
Client;
// The thread-safe client.
private final CloseableHttpClient threadSafeClient;
// The pool monitor.
private final IdleConnectionMonitor monitor;
// The constructor creates it - thus late
private Singleton() {
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
// Increase max total connection to 200
cm.setMaxTotal(200);
// Increase default max connection per route to 200
cm.setDefaultMaxPerRoute(200);
// Make my builder.
HttpClientBuilder builder = HttpClients.custom()
.setRedirectStrategy(new LaxRedirectStrategy())
.setConnectionManager(cm);
// Build the client.
threadSafeClient = builder.build();
// Start up an eviction thread.
monitor = new IdleConnectionMonitor(cm);
// Start up the monitor.
Thread monitorThread = new Thread(monitor);
monitorThread.setDaemon(true);
monitorThread.start();
}
public CloseableHttpClient get() {
return threadSafeClient;
}
}
public static CloseableHttpClient getClient() {
// The thread safe client is held by the singleton.
return Singleton.Client.get();
}
public static void shutdown() throws InterruptedException, IOException {
// Shutdown the monitor.
Singleton.Client.monitor.shutdown();
}
// Watches for stale connections and evicts them.
private static class IdleConnectionMonitor implements Runnable {
// The manager to watch.
private final PoolingHttpClientConnectionManager cm;
// Use a BlockingQueue to stop everything.
private final BlockingQueue<Stop> stopSignal = new ArrayBlockingQueue<Stop>(1);
IdleConnectionMonitor(PoolingHttpClientConnectionManager cm) {
this.cm = cm;
}
public void run() {
try {
// Holds the stop request that stopped the process.
Stop stopRequest;
// Every 5 seconds.
while ((stopRequest = stopSignal.poll(5, TimeUnit.SECONDS)) == null) {
// Close expired connections
cm.closeExpiredConnections();
// Optionally, close connections that have been idle too long.
cm.closeIdleConnections(60, TimeUnit.SECONDS);
}
// Acknowledge the stop request.
stopRequest.stopped();
} catch (InterruptedException ex) {
// terminate
}
}
// Pushed up the queue.
private static class Stop {
// The return queue.
private final BlockingQueue<Stop> stop = new ArrayBlockingQueue<Stop>(1);
// Called by the process that is being told to stop.
public void stopped() {
// Push me back up the queue to indicate we are now stopped.
stop.add(this);
}
// Called by the process requesting the stop.
public void waitForStopped() throws InterruptedException {
// Wait until the callee acknowledges that it has stopped.
stop.take();
}
}
public void shutdown() throws InterruptedException, IOException {
// Signal the stop to the thread.
Stop stop = new Stop();
stopSignal.add(stop);
// Wait for the stop to complete.
stop.waitForStopped();
// Close the pool.
HttpClientPool.getClient().close();
// Close the connection manager.
cm.close();
}
}
}
All you need to do is CloseableHttpResponse conversation = HttpClientPool.getClient().execute(request); and when you've finished with it, just close it and it will be returned to the pool.
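For instance, with try-with-resources (CloseableHttpResponse is Closeable, so closing it releases the connection back to the pool); the request URL here is a placeholder:

HttpGet request = new HttpGet("https://example.com/resource"); // placeholder URL
try (CloseableHttpResponse conversation = HttpClientPool.getClient().execute(request)) {
    // Consume the entity fully so the connection can be reused.
    System.out.println(EntityUtils.toString(conversation.getEntity()));
} catch (IOException e) {
    e.printStackTrace();
}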
I am developing an application in Enterprise JavaBeans 3.1 that receives data from a socket. The application acts as a listener and processes the data once it is received. It was originally single threaded and processed data slowly, so it was reimplemented with threads; as a multi-threaded application it now runs much faster.
However, there are two threads and both access the database to insert and update rows. I face a concurrency problem where one thread inserts while the other updates. To deal with it, I added a synchronized block to lock on an object, making sure the whole block is executed atomically. By doing this, the application is now as slow as the single threaded version was. The insert and update are done through JDBC.
Is there anything else that can be done so the data is still processed very quickly without slowing the application down? Below is sample code:
@Startup
@Singleton
public class Listener {
private ServerSocket serverSocket;
private Socket socket;
private Object object;
private InetAddress server;
@Resource
private ScheduledExecutorService executor;
@PostConstruct
public void init() {
object = new Object();
serverSocket = new ServerSocket("somePortNumber");
Runnable runnable = new Runnable() {
public void run() {
checkDatabase();
if(!isServerActive()) {
// send e-mail
listen();
}
else {
listen();
}
}
};
executor.scheduleAtFixedRate(runnable, 0, 0, TimeUnit.SECONDS);
}
public void listen() {
if(socket == null) {
socket = serverSocket.accept();
}
else if(socket.isClosed()) {
socket = serverSocket.accept();
}
startThread(socket);
}
public void startThread(Socket socket) {
Runnable runnable = new Runnable() {
public void run() {
processMessage(socket);
}
};
new Thread(runnable).start();
}
public void processMessage(Socket socket) {
synchronized(object) {
// build data from Socket
// insert into database message, sentDate
// do other things
// update processDate
}
}
public void checkDatabase() {
synchronized(object) {
// get data and further update
}
}
public boolean isServerActive() {
boolean isActive = true;
if(server == null) {
server = InetAddress.getByName("serverName");
}
if(!server.isReachable(5000)) {
isActive = false;
if(socket != null) {
socket.close();
}
}
return isActive;
}
}
EDIT:
Table name: Audit
Message: VARCHAR NOT NULL
SentDate: DATE NOT NULL
ProcessedDate: DATE
AnotherDate: DATE
Query: INSERT INTO AUDIT (message, sentDate, processedDate, receivedDate) VALUES (?, java.sql.Timestamp, null, null)
Assume a record is inserted without the synchronized block, containing only the message and sentDate. The other thread then runs, finds this record, and updates it further. The problem is that after the initial insert the processedDate should be updated first, and only then should the other thread run.
processMessage() sends the data over HTTPS asynchronously.
One of the reasons to use threads was that only one piece of data at a time was reaching the Java side; by introducing threads the full set of data arrives.
Even with a single thread you can get much better speed by using JDBC batching and wrapping the batch in a single transaction instead of committing every individual insert/update statement.
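A minimal sketch of that idea; the table and columns come from your EDIT, while the method, its parameters and the way the batch is built are assumptions:

void insertBatch(Connection connection, List<String> messages) throws SQLException {
    connection.setAutoCommit(false); // one transaction around the whole batch
    try (PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO AUDIT (message, sentDate, processedDate, receivedDate) VALUES (?, ?, NULL, NULL)")) {
        for (String message : messages) {
            ps.setString(1, message);
            ps.setTimestamp(2, new java.sql.Timestamp(System.currentTimeMillis()));
            ps.addBatch();
        }
        ps.executeBatch();     // one round trip instead of one statement/commit per row
        connection.commit();
    } catch (SQLException e) {
        connection.rollback(); // undo the whole batch on failure
        throw e;
    }
}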
In a multi threaded environment you can avoid concurrency problems if you ensure no two threads act on the same database row at the same time. You can use row level locks to avoid multiple threads updating the same row.
It is not possible to give you any more inputs with the information you have given. You may get more ideas if you provide information about the data you are processing.
The application behaves as if it were single threaded because the processMessage and checkDatabase methods synchronize on the same object: whichever thread currently holds the lock forces the other threads to wait until the message is processed, which slows the application down. Instead of putting synchronized blocks on the same lock in two separate methods, create separate threads outside the class that check this condition and invoke them separately based on that condition, or try using wait() and notifyAll() within your synchronized blocks.
Assume the following pseudo code for a simple two thread scenario:
I have two threads and I would like to insert data into different tables in a database. On thread 1 I would like to insert into some table; at the same time, I want to insert other data from thread 2. My question is how/where to place connection.close(): if I place it on thread 1, it may execute while thread 2 is still processing, or vice versa, thread 2 may finish and close the connection while thread 1 hasn't finished yet.
Note: the database is just an example; it could be anything, such as a file or a logger.
class Thread1{
DataBaseConnection connection;
main(){
threadPool = Executors.newFixedThreadPool(1);
connection.open();
if(ThisMightTakeSomeTime)
threadPool.submit(new MyRunnable(connection));
InsertDataToDataBase(Table A, Table B));
connection.Close(); //What if thread2 isn't done yet?
}
}
public class MyRunnable implements Runnable {
MyRunnable(connection){}
@Override
public void run() { ... }
void TaskThatMayTakeWhile(){
...get data ...
...Connection.InsertToTables(table X, table Y)
}
}
My question is how/where to place connection.close(),
To start, as far as I know, you should not be sharing a single connection between 2 different threads. Each thread should have its own database connection, possibly obtained from a database connection pool such as Apache's DBCP.
Once you have multiple connections, I would have each thread manage and release its own connection back to the pool. You should make sure this is done in a finally block so that, if there is a database exception, the connection is still released.
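A sketch of that pattern inside the task's run() method, assuming a pooled javax.sql.DataSource (DBCP or similar) has been set on the task; the dataSource field name is made up:

public void run() {
    Connection connection = null;
    try {
        connection = dataSource.getConnection(); // each thread borrows its own connection
        // insert into table X / table Y here
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        if (connection != null) {
            try {
                connection.close();              // returns the connection to the pool
            } catch (SQLException ignored) {
            }
        }
    }
}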
If you are forced to have multiple threads share the same connection then they will have to use synchronized to make sure they have an exclusive lock to it:
synchronized (connection) {
// use the connection
}
As to when to close it if it is shared, you could keep a shared usage counter (maybe an AtomicInteger) and close the connection when the counter reaches 0. Or, as others have recommended, you could use a thread pool and free the connection once the thread pool is done.
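A sketch of that reference-count idea; the wrapper class below is made up for illustration:

class SharedConnection {
    private final Connection connection;
    private final AtomicInteger users;

    SharedConnection(Connection connection, int users) {
        this.connection = connection;
        this.users = new AtomicInteger(users);
    }

    Connection get() {
        return connection;
    }

    // Every thread calls release() when it is done; the last one closes the connection for real.
    void release() throws SQLException {
        if (users.decrementAndGet() == 0) {
            connection.close();
        }
    }
}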
Note: the database is just an example; it could be anything, such as a file or a logger.
In terms of a more generic answer, I always try to mirror where the thing is created: if a method opens a stream, then that same method should contain the finally block that closes it.
public void someMethod() {
InputStream stream = ...
try {
// process the stream here probably by calling other methods
} finally {
// stream should be closed in the same method for parity
stream.close();
}
}
The exception to this pattern is a thread handler. Then the Thread should close the stream or release connection in a finally block at the end of the run() or call() method.
public void serverLoopMethod() {
while (weAcceptConnections) {
Connection connection = accept(...);
threadPool.submit(new ConnectionHandler(connection));
}
}
...
private static class ConnectionHandler implements Runnable {
private Connection connection;
public ConnectionHandler(Connection connection) {
this.connection = connection;
}
// run (or call) method executed in another thread
public void run() {
try {
// work with the connection probably by calling other methods
} finally {
// connection is closed at the end of the thread run method
connection.close();
}
}
}
If you run your code as written, it's likely that the database connection will be closed before the insert statement executes, and of course the insert will then fail.
Proper solutions
If you have multiple insert tasks:
Use ExecutorService instead of Executor
Submit all tasks
Invoke executorService.shutdown() followed by awaitTermination(), which waits until all submitted tasks are done
Close connection
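A sketch of the multiple-task steps above, keeping the pseudocode's DataBaseConnection type and MyRunnable task (the timeout value is arbitrary):

void runInserts(ExecutorService threadPool, DataBaseConnection connection) throws InterruptedException {
    threadPool.submit(new MyRunnable(connection));     // submit all the tasks first
    threadPool.shutdown();                             // stop accepting new tasks
    threadPool.awaitTermination(1, TimeUnit.MINUTES);  // block until the submitted tasks finish
    connection.Close();                                // now it is safe to close
}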
If you have only one task to submit:
You should close the connection after Connection.InsertToTables(table X, table Y) in your task.
Good for both scenarios and recommended:
Each task has its own connection.
Example:
class Thread1 {
private static DataSource dataSource; // initialize it
public static void main(String[] args){
ExecutorService threadPool = Executors.newFixedThreadPool(1);
threadPool.submit(new MyRunnable(dataSource));
}
}
class MyRunnable implements Runnable {
private final DataSource dataSource;
MyRunnable(DataSource dataSource) {
this.dataSource = dataSource;
}
public void run() {
    // try-with-resources closes the connection even if the work throws
    try (Connection connection = dataSource.getConnection()) {
        // do something with the connection
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
}
class Thread1{
DataBaseConnection connection;
main(){
threadPool = Executors.newFixedThreadPool(1);
connection.open();
if(ThisMightTakeSomeTime)
Future f = threadPool.submit(new MyRunnable(connection));
InsertDataToDataBase(Table A, Table B));
f.get(); // this will hold the program until the Thread finishes.
connection.Close(); //What if thread2 isn't done yet?
}
}
The Future is the reference returned by the submit call. Calling Future.get() blocks the current thread until the submitted task finishes.