I'm writing a stock quote processing engine that receives async notifications from a PostgreSQL database via the pgjdbc-ng-0.6 library. I'd like to know whether my database connection is still alive, so I wrote the following in the thread's run() method:
while (this.running) {
    try {
        this.running = pgConnection.isValid(Database.CONNECTION_TIMEOUT);
        Thread.sleep(10000);
    } catch (SQLException e) {
        log.warn(e.getMessage(), e);
        gracefullShutdown();
    } catch (InterruptedException e) {
        gracefullShutdown();
    }
}
I read the isValid() declaration and it states that the method will return false once the timeout is reached. However, isValid() just hangs indefinitely and run() is never exited. To simulate connectivity issues I disable the VPN my application uses to connect to the database. A rather crude approach, but the method should still return false... Is this a bug in the driver, or is there another method I can use?
I also tried setNetworkTimeout() on the PGDataSource, but without any success.
One possible way to handle this, for enterprise developers, is to use an existing thread pool and run the check in a separate thread: submit a Callable whose call() executes a test query against the DB (for Oracle it may be "SELECT 'OK' FROM dual") using Statement.execute(...) (not executeUpdate(), because that may cause the call to get stuck), and then wait for the result with, for example, future.get(3, TimeUnit.SECONDS).
With this approach you need to catch InterruptedException (and the timeout) around the get() call. If the Callable throws, you mark the database as unavailable; if it returns normally, it is available.
Before using this approach inside an enterprise application, make sure you have access to the application server's thread pool executor in some bean and inject it there with @Autowired.
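A minimal sketch of this check, assuming a DataSource and an already-configured ExecutorService are injected; the class name and query are illustrative, not part of any framework:

import java.sql.Connection;
import java.sql.Statement;
import java.util.concurrent.*;
import javax.sql.DataSource;

public class DatabaseHealthCheck {

    private final ExecutorService executor; // e.g. the application server's pool
    private final DataSource dataSource;

    public DatabaseHealthCheck(ExecutorService executor, DataSource dataSource) {
        this.executor = executor;
        this.dataSource = dataSource;
    }

    public boolean isDatabaseAvailable() {
        Callable<Boolean> probe = () -> {
            try (Connection con = dataSource.getConnection();
                 Statement st = con.createStatement()) {
                // execute(), not executeUpdate(), as noted above;
                // for PostgreSQL a plain "SELECT 1" does the same job
                st.execute("SELECT 'OK' FROM dual");
                return Boolean.TRUE;
            }
        };
        Future<Boolean> future = executor.submit(probe);
        try {
            return future.get(3, TimeUnit.SECONDS);
        } catch (TimeoutException | ExecutionException e) {
            future.cancel(true); // query hung or failed: treat the DB as down
            return false;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            future.cancel(true);
            return false;
        }
    }
}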
Related
I am getting the error below in my PROD environment. Some DB calls work fine, but others throw an exception because the package has been discarded / its state has been invalidated, among other reasons. I don't have permission to recompile the package, i.e. I have read-only access only. I am using Java 8, JDBI, and an Oracle database with connection pooling.
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package body "USER.PKG_MY_PACKAGE" has been invalidated
ORA-04065: not executed, altered or dropped package body "USER.PKG_MY_PACKAGE"
ORA-06508: PL/SQL: could not find program being called: "USER.PKG_MY_PACKAGE"
ORA-06512: at line 34
I would like to retry (2 times) by handling this exception in the catch block and retrying from there. From the many forums I've read, the connection holds the old compiled stored procedure, or the object's state is dirty, so if I retry with a fresh connection a few times I may not get this error. Please advise whether a fresh connection or a fresh session will do the trick. I am going to try the code below:
public void callDB(int userName, int retryCount) throws Exception {
    try (Handle handle = dbInstance.open()) {
        OutputParameters parameters = handle.createCall(MY_STORED_PROC)
                                            .bind(0, userName)
                                            .bind(1, Oracle.CURSOR);
        Employee employee = parameters.getObject(2);
    } catch (Exception e) {
        logger.error(e);
        if (retryCount != 2 && e.getMessage().contains("ORA-04068")) {
            retryCount = retryCount + 1;
            callDB(userName, retryCount);
        } else {
            throw e;
        }
    }
}
I am getting connections from a connection pool via the JDBI API. Please let me know whether the code above will work, or whether I need to retry in a different way.
Also, when I retry, do I need to create a new session, or is a fresh connection from the connection pool enough?
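For comparison, here is a loop-based sketch of the same "retry on a fresh pooled connection" idea; invokeStoredProc() is a hypothetical placeholder for the createCall(...) chain shown above, and MAX_RETRIES caps the attempts:

private static final int MAX_RETRIES = 2;

public void callDB(int userName) throws Exception {
    Exception lastError = null;
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
        // open() borrows a (possibly different) connection from the pool each time,
        // so a retry does not reuse the handle that saw the discarded package state
        try (Handle handle = dbInstance.open()) {
            invokeStoredProc(handle, userName); // hypothetical: the stored-proc call from above
            return; // success, stop retrying
        } catch (Exception e) {
            lastError = e;
            boolean stateDiscarded = e.getMessage() != null
                    && e.getMessage().contains("ORA-04068");
            if (!stateDiscarded) {
                throw e; // only the "package state discarded" error is worth retrying
            }
            logger.error(e);
        }
    }
    throw lastError;
}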
I am using Realm in my Firebase service and closing the Realm instance in my finally block. The problem arises when I perform an asynchronous operation in the try block: the finally block executes before the async work completes and closes the Realm instance, which crashes the app because the async operation performs Realm-related tasks.
try {
    // perform async task that requires Realm
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (realm != null && !realm.isClosed()) {
        realm.close();
    }
}
This is roughly what the code looks like. If I try closing the Realm instance anywhere else, I get an error saying that I am accessing the Realm instance from the incorrect thread. Is there a way to wait until the async operation has completed and only then close the Realm instance?
So, you really can't perform an asynchronous task inside a try/catch block.
... and by "can't" I don't mean that it is bad practice, I mean that it is simply not possible, by the very definition of "asynchronous".
What you are doing inside the try/catch block is enqueuing the task, for later execution. Once the task is enqueued (not executed!) the try/catch block is exited.
If you want the try/catch block around the asynchronously executed code, you need to execute it as part of the asynchronous task.
Furthermore, as you will see in the documentation, you cannot pass most Realm objects between threads. You cannot open the realm on some thread and then pass it, open, to an asynchronous task.
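A minimal sketch of that idea: the Realm is opened, used, and closed entirely inside the asynchronous task, so the try/finally runs on the same thread as the Realm access. The executor and doRealmWork() are placeholders for however your task is actually scheduled:

executor.execute(() -> {
    Realm realm = Realm.getDefaultInstance(); // opened on the worker thread
    try {
        doRealmWork(realm); // the Realm-related work from the question
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (!realm.isClosed()) {
            realm.close(); // closed on the same thread that opened it
        }
    }
});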
I have a main thread that runs periodically. It opens a connection with setAutoCommit(false) and passes it by reference to a few child threads, which perform various database read/write operations. A reasonably large number of operations are performed in the child threads. After all the child threads have completed their DB operations, the main thread commits the transaction on that connection. Note that I run the threads via an ExecutorService. My question: is it advisable to share a connection across threads? If "yes", check whether the code below implements it correctly. If "no", what are other ways to perform a transaction in a multi-threaded scenario? Comments/advice/new ideas are welcome. Pseudo code:
Connection connection = getPrimaryDatabaseConnection();
// let me decide whether to commit or rollback
connection.setAutoCommit(false);

ExecutorService executorService = getExecutor();
// the connection is passed via the job's constructor/set-method;
// each job uses the provided connection for its DB operations
Callable[] jobs = getJobs(connection);

List futures = new ArrayList();
// note: generics are omitted just to keep this simple
for (Callable job : jobs) {
    futures.add(executorService.submit(job));
}
executorService.shutdown();

// wait till the jobs complete
while (!executorService.isTerminated()) {
    ;
}

List results = new ArrayList();
for (Future future : futures) {
    try {
        results.add(future.get());
    } catch (InterruptedException | ExecutionException e) {
        try {
            // a job has failed: roll back the transaction and throw an exception
            connection.rollback();
            results = null;
            throw new SomeException();
        } catch (Exception ex) {
            // exception
        } finally {
            try {
                connection.close();
            } catch (Exception ex) { /* nothing to do */ }
        }
    }
}

// all the jobs completed successfully!
try {
    // some other checks
    connection.commit();
    return results;
} finally {
    try {
        connection.close();
    } catch (Exception e) { /* nothing to do */ }
}
I wouldn't recommend sharing a connection between threads: operations on a connection are quite slow, and the overall performance of your application may suffer.
I would rather suggest using an Apache connection pool and giving each thread its own connection.
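A small sketch of that suggestion, assuming the Apache pool in question is Commons DBCP2; the JDBC URL and credentials are placeholders:

import java.sql.Connection;
import org.apache.commons.dbcp2.BasicDataSource;

public class PerThreadConnectionDemo {

    public static void main(String[] args) {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        ds.setUsername("app_user");
        ds.setPassword("secret");
        ds.setMaxTotal(10); // at most 10 pooled connections

        // each worker borrows its own connection and returns it to the pool when done
        Runnable job = () -> {
            try (Connection con = ds.getConnection()) {
                // this worker's DB operations run on its own connection
            } catch (Exception e) {
                e.printStackTrace();
            }
        };
        new Thread(job).start();
    }
}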
You could create a proxy class that holds the JDBC connection and gives synchronized access to it. The threads should never access the connection directly.
Depending on the usage and the operations you provide, you could use synchronized methods, or lock on objects if the proxy needs to stay locked until it leaves a certain state.
For those not familiar with the proxy design pattern, here is the wiki article. The basic idea is that the proxy instance hides another object but offers the same functionality.
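As a rough illustration of that proxy (just a sketch; the method set and names are made up for the example), the workers would only ever see a wrapper like this:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SynchronizedConnectionProxy {

    private final Connection connection; // the single shared JDBC connection

    public SynchronizedConnectionProxy(Connection connection) {
        this.connection = connection;
    }

    // workers run their statements through the proxy, one at a time
    public synchronized int executeUpdate(String sql, Object... params) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            for (int i = 0; i < params.length; i++) {
                ps.setObject(i + 1, params[i]);
            }
            return ps.executeUpdate();
        }
    }

    // only the owning (main) thread ends the transaction
    public synchronized void commit() throws SQLException {
        connection.commit();
    }

    public synchronized void rollback() throws SQLException {
        connection.rollback();
    }
}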
In this case, consider creating a separate connection for each worker. If any one worker fails, roll back all the connections. If all pass, commit all connections.
If you're going to have hundreds of workers, then you'll need to provide synchronized access to the Connection objects, or use a connection pool as @mike and @NKukhar suggested.
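A rough sketch of the one-connection-per-worker coordination, where getConnection() and createJob() are hypothetical helpers (the latter builds a Callable around the connection it is given):

public List runJobsOnSeparateConnections(int jobCount) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(jobCount);
    List<Connection> connections = new ArrayList<>();
    List<Future<?>> futures = new ArrayList<>();

    for (int i = 0; i < jobCount; i++) {
        Connection c = getConnection();          // hypothetical connection factory
        c.setAutoCommit(false);
        connections.add(c);
        futures.add(pool.submit(createJob(c)));  // each worker gets its own connection
    }
    pool.shutdown();

    List results = new ArrayList();
    boolean allSucceeded = true;
    for (Future<?> f : futures) {
        try {
            results.add(f.get());
        } catch (InterruptedException | ExecutionException e) {
            allSucceeded = false;                // one failure rolls back everything
        }
    }

    for (Connection c : connections) {
        try {
            if (allSucceeded) {
                c.commit();
            } else {
                c.rollback();
            }
        } finally {
            c.close();
        }
    }
    return allSucceeded ? results : null;
}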
I am having difficulty getting my application to behave the way I want.
Currently, my application (a Java servlet) queries the database for a list of items to process. For every item in the list, it submits an HTTP POST request. I am trying to create a way to stop this processing (and even terminate the HTTP POST request in progress) if the user requests it. There can be simultaneous threads separately processing different queries. Right now, I stop processing in all threads.
My current attempt implements the database query and the HTTP POST in a Callable class. I then submit the Callable via the ExecutorService to get a Future object.
However, in order to stop the processing properly, I need to abort the HttpPost and close the database Connection, Statement and ResultSet, because Future.cancel() will not do this for me. How can I do this when I call cancel() on the Future object? Do I have to store a list of arrays containing the Future object, HttpPost, Connection, Statement, and ResultSet? That seems like overkill; surely there must be a better way?
Here is some code I have right now that only aborts the HttpPost (and not any database objects).
private static final ExecutorService pool = Executors.newFixedThreadPool(10);

public static Future<HttpClient> upload(final String url) {
    CallableTask ctask = new CallableTask();
    ctask.setFile(largeFile);
    ctask.setUrl(url);
    // this will create an HttpPost that posts 'largeFile' to the 'url'
    Future<HttpClient> f = pool.submit(ctask);
    // storing the objects for when I cancel later
    linklist.add(new tuple<Future<HttpClient>, HttpPost>(f, ctask.getPost()));
    return f;
}

// This method cancels all running Future tasks and aborts any POSTs in progress
public static void cancelAll() {
    System.out.println("Checking status...");
    for (tuple<Future<HttpClient>, HttpPost> t : linklist) {
        Future<HttpClient> f = t.getFuture();
        HttpPost post = t.getPost();
        if (f.isDone()) {
            System.out.println("Task is done!");
        } else if (f.isCancelled()) {
            System.out.println("Task was cancelled!");
        } else {
            while (!f.isDone()) {
                f.cancel(true);
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("!Aborting Post!");
                try {
                    post.abort();
                } catch (Exception ex) {
                    System.out.println("Aborted Post, swallowing exception: ");
                    ex.printStackTrace();
                }
            }
        }
    }
}
Is there an easier way or a better design? Right now I terminate all processing threads - in the future, I would like to terminate individual threads.
I think keeping a list of all the resources to be closed is not the best approach. In your current code, the HTTP request is initiated by the CallableTask but the closing is done by somebody else. Closing a resource is the responsibility of whoever opened it, in my opinion.
I would let the CallableTask initiate the HTTP request, connect to the database and do its work, and when it finishes or is aborted it should close everything it opened. That way you only have to keep track of the Future instances representing your currently running tasks.
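A rough sketch of such a self-cleaning task, assuming Apache HttpClient and a JDBC DataSource; the field names, query and POST details are placeholders rather than the question's actual code:

import java.io.File;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.Callable;
import javax.sql.DataSource;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class CallableTask implements Callable<HttpClient> {

    private final DataSource dataSource;
    private final String url;
    private final File largeFile;
    private volatile HttpPost post; // exposed so a cancel routine could still abort it

    public CallableTask(DataSource dataSource, String url, File largeFile) {
        this.dataSource = dataSource;
        this.url = url;
        this.largeFile = largeFile;
    }

    public HttpPost getPost() {
        return post;
    }

    @Override
    public HttpClient call() throws Exception {
        CloseableHttpClient client = HttpClients.createDefault();
        post = new HttpPost(url);
        // the Connection, Statement and ResultSet are opened and closed here,
        // inside the task, whether it completes or is cancelled
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT item FROM work_queue")) {
            while (rs.next()) {
                // ... post largeFile / the current item to 'url' using client ...
            }
        }
        return client;
    }
}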
I think your approach is correct. You would need to handle the rollback yourself when you cancel the thread.
cancel(true) just calls interrupt() on the already-executing thread. Have a look here:
http://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html:
As it says
An interrupt is an indication to a thread that it should stop what it
is doing and do something else. It's up to the programmer to decide
exactly how a thread responds to an interrupt, but it is very common
for the thread to terminate.
An interrupted thread will throw an InterruptedException
when a thread is waiting, sleeping, or otherwise paused for a long
time and another thread interrupts it using the interrupt() method in
class Thread.
So in the executing thread you need to code explicitly for scenarios like the one you mentioned, wherever an interruption is possible.
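For example, inside the task's processing loop it might look roughly like this (a fragment only; the resultSet, connection and post variables stand for whatever the task actually holds):

while (resultSet.next()) {
    if (Thread.currentThread().isInterrupted()) {
        // Future.cancel(true) only sets this flag; the task has to react to it
        connection.rollback();  // undo the partial work
        post.abort();           // stop the in-flight POST
        throw new InterruptedException("processing cancelled by user");
    }
    // ... process the current row and issue the POST ...
}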
My team has to make some changes and renew an old web application. This application has one main thread and 5 to 15 daemon threads used as workers to retrieve and insert data in a DB.
All those threads have this design (here simplified for convenience):
public class MyDaemon implements Runnable {
    // initialization and some other stuff

    public void run() {
        ...
        while (isEnabled) {
            Engine.doTask1();
            Engine.doTask2();
            ...
            Thread.sleep(someTime);
        }
    }
}
The Engine class provides a series of static methods that call methods of the DataAccessor classes, some of which are themselves static:
public class Engine {

    public static void doTask1() {
        ThisDataAccessor.retrieve(data);
        // some complicated operations
        ThisDataAccessor.insertOrUpdate(data);
    }

    public static void doTask2() {
        ThatDataAccessor da = new ThatDataAccessor();
        da.retrieve(data);
        // etc.
    }
    ...
}
The DataAccessor classes usually interact with the DB using simple JDBC statements enclosed in synchronized methods (static in some classes). The DataSource is configured in the server.
public class ThatDataAccessor {

    public synchronized void retrieve(DataType data) {
        Connection conn = DataSource.getConnection();
        // JDBC stuff
        conn.close();
    }
    ...
}
The problem is that the main thread also needs to connect to the DB, and when these daemon threads are working we easily run out of available connections in the pool, getting "waiting for connection timeout" exceptions. Sometimes even the daemon threads themselves get the same exception.
We have to get rid of this problem.
We have a connection pool configured with 20 connections, and no more can be added since that "20" is our production environment standard. Some blocks of code need to be synchronized, though we plan to keep the "synchronized" keyword only where it is really needed; I don't think that alone would really make the difference.
We are not experienced in multithreaded programming and we've never faced this connection pooling problem before, which is why I'm asking: is the problem due to the design of those threads? Is there some flaw we haven't noticed?
I have profiled the thread classes one by one, and as long as they are not running in parallel there seems to be no bottleneck that would justify those "waiting for connection timeout" errors.
The app is running on WebSphere 7, using Oracle 11g.
You are likely missing a finally block somewhere that returns the connections to the pool. With Hibernate, I think this is done when you call close() or, for transactions, possibly when you call rollback(). But I would call close() anyway.
For example, I wrote a quick and dirty pool myself to extend an old app to make it multithreaded, and here is some of the handling code (which should be meaningless to you except for the finally block):
try {
    connection = pool.getInstance();
    connection.beginTransaction();
    processFile(connection, ...);
    connection.endTransaction();
    logger_multiThreaded.info("Done processing file: " + ...);
} catch (IOException e) {
    logger_multiThreaded.severe("Failed to process file: " + ...);
    e.printStackTrace();
} finally {
    if (connection != null) {
        pool.releaseInstance(connection);
    }
}
It is fairly common for people to fail to use finally blocks properly. For example, look at this Hibernate tutorial and skip to the very bottom example. You will see that in the try{} he uses tx.commit() and in the catch{} he uses tx.rollback(), but there is no session.close() and no finally. So even if he added session.close() in the try and in the catch, his connection would not be closed if the try block threw something other than a RuntimeException, or if the catch caused an additional exception before the rollback(). And without session.close(), I don't think that is actually very good code. Even if the code seemingly works, a finally block gives you assurance that you are protected from this type of problem.
So I would rewrite his methods that use Session to match the idiom shown on the Hibernate documentation page (and I also don't recommend throwing a RuntimeException, but that is a different topic).
If you are using Hibernate, I think the above is good enough. Otherwise you'll need to be more specific if you want specific code help, but the simple idea is the same: use a finally block (or try-with-resources) to ensure the connection is always closed and returned to the pool.
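Applied to the plain-JDBC accessor from the question, that idea looks roughly like this (a sketch only; DataSource.getConnection() is the same static call used in the question, and the query and the DataType mapping are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ThatDataAccessor {

    public synchronized void retrieve(DataType data) throws SQLException {
        // try-with-resources is an implicit finally: the connection goes back
        // to the pool even if the query or the mapping throws
        try (Connection conn = DataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT ... FROM ... WHERE id = ?")) {
            ps.setLong(1, data.getId()); // hypothetical accessor on DataType
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // populate data from the result set
                }
            }
        }
    }
}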