Is it good practice to put all Java JDBC select statements in a try-catch block? Currently I write most of my code without it; however, I do use try-catch for insert/update/delete.
Note: currently using Spring Boot.
String sqlQuery = "Select productId, productName, productStartDate from dbo.product where productId = 5";

public List<Product> getProductData() {
    ....
    List<Product> productList = namedJdbcTemplate.query(sqlQuery, new ProductMapper());
Since this question is tagged with spring-boot and you are using JdbcTemplate, I'm giving you a Spring-specific answer.
One of the goals of Spring is to take boilerplate off the developer's plate. If you find yourself adding things repetitively, like putting try-catch blocks around code executing DML, that's cause to suspect you're doing something wrong. Adding your own try-catch blocks in code using Spring isn't always wrong, but it usually is.
In the Spring reference doc https://docs.spring.io/spring-framework/docs/current/reference/html/data-access.html#jdbc there is a table showing which actions are the developer's responsibility and which are Spring's. Processing exceptions, handling transactions, and closing JDBC resources are all shown as Spring's responsibility.
Spring JDBC takes care of a lot of things for you. It closes JDBC resources, returns connections to their pool, and converts SQLExceptions to a hierarchy of unchecked DataAccessExceptions. In Spring, an unchecked exception thrown from a method wrapped in a transactional proxy causes the transaction to be rolled back. If you add your own try-catch logic and catch the exception so that the proxy never sees it, you can prevent a rollback that needs to happen. Adding try-catch logic can cause problems if you don't understand what Spring is doing.
Exceptions do need to be caught somewhere. In a Spring web application, you can set up an exception handler that catches anything thrown from the controller layer, so that you can log it. That way the action in progress gets broken off cleanly, the current transaction rolls back, and the exception gets handled in a consistent way. If you have other entry points, such as reading messages from a queue, those will need their own exception handler.
Exceptions are thrown in order to escape the current context, which isn't able to deal with the problem, and to relocate control somewhere safe. Most exceptions coming from JDBC aren't anything you can fix; you just want to let them be thrown, let the current transaction roll back, then let the central exception handler catch and log them.
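For example, a minimal sketch of such a central handler in a Spring Boot web app (the class, names, and response mapping are illustrative, not from the question):

import org.springframework.dao.DataAccessException;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(DataAccessException.class)
    public ResponseEntity<String> handleDataAccess(DataAccessException e) {
        // By the time we get here, the transactional proxy has already rolled back.
        // Log e here (logger omitted from this sketch), then answer consistently.
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                             .body("A database error occurred");
    }
}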
First of all, if you're working with the raw JDBC API, you should always use PreparedStatement.
Yes, you'll have to wrap the code in a try-catch block at some point, though it's good practice to catch exceptions either right away or at the point where it logically fits. In the case of SQL queries, you should wrap them all in a service class that gives you access to your database objects without going through the JDBC API every time. For example:
public class UserService {

    private static final String CREATE_USER_SQL = "...";

    private final Connection jdbcConnection;

    public UserService(final Connection jdbcConnection) {
        this.jdbcConnection = jdbcConnection;
    }

    public @Nullable User createUser(final String name) {
        try (final PreparedStatement stmt = jdbcConnection.prepareStatement(CREATE_USER_SQL)) {
            jdbcConnection.setAutoCommit(false);
            stmt.setString(1, name);
            stmt.executeUpdate(); // DML, so executeUpdate(), not executeQuery()
            jdbcConnection.commit();
            return new User(name);
        } catch (final SQLException createException) {
            System.out.printf("User CREATE failed: %s%n", createException.getMessage());
            try {
                jdbcConnection.rollback();
            } catch (final SQLException rollbackException) {
                System.out.printf("Rollback failed: %s%n", rollbackException.getMessage());
            }
            return null;
        }
    }
}
This solves two problems right away:
You won't need to put boilerplate JDBC code everywhere;
It will log any JDBC errors right away, so you won't need to go through a complex debugging process.
Brief explanation:
First of all, any resource involving I/O access (and database access is I/O access) must always be closed, or it will cause a resource leak.
Secondly, it is better to rely on try-with-resources to close any resource. Calling the .close() method manually always carries the risk that it is never executed at runtime, because an Exception/RuntimeException/Error may be thrown beforehand. Even closing the resource in a finally block is not preferable: a finally block executes at a different point than try-with-resources auto-closure (try-with-resources closes the resource at the end of the try block, while finally runs after the whole try/catch), and a throw inside the finally block itself can prevent the close from completing.
This said, you always need to close:
Statement/PreparedStatement/CallableStatement
any ResultSet
the whole Connection when you don't need DB access anymore
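With try-with-resources, all of that closing happens automatically and in the right order. A minimal sketch (assuming an open connection and a userId variable; the query is made up for illustration):

final String sql = "select id, name from users where id = ?"; // illustrative query

try (final PreparedStatement stmt = connection.prepareStatement(sql)) {
    stmt.setLong(1, userId);
    try (final ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            // read rs.getLong("id"), rs.getString("name"), ...
        }
    } // ResultSet closed here
} // PreparedStatement closed here, even if something above threw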
Try-catch for DB-layer code is important if you're querying with JDBC.
Think about it: what if the connection breaks? Or the database crashes? Or some other unfortunate scenario comes up?
For these reasons, I recommend always keeping the DB-layer code within a try-catch.
It's also recommended to have some fallback mechanism in case of the above events.
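For example, a minimal sketch of such a fallback (localCache is an illustrative assumption, not part of the original code):

List<Product> productList;
try {
    productList = namedJdbcTemplate.query(sqlQuery, new ProductMapper());
} catch (DataAccessException e) {
    // Fallback: serve stale data from a local cache instead of failing outright.
    productList = localCache.getProducts();
}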
You should always handle it with try-catch.
Why: for example, you start a connection to the DB and then some exception happens. If you don't roll back your transaction, it stays open on the DB, performance degrades, and you leak resources.
Imagine your connection limit is 100, and 100 exceptions are thrown after transactions have been started. If you never roll them back, your system will lock up, because you can't create any new connections to your database.
But if you want an alternative to "try catch finally", you can use something like this:
EmUtil.consEm(em -> {
    System.out.println(em.createNativeQuery("select * from temp").getResultList().size());
});
Source code:
public final class EmUtil {

    interface EmCons {
        void cons(EntityManager em);
    }

    public static void consEm(EmCons t) {
        EntityManager em = null;
        try {
            em = getEmf().createEntityManager();
            t.cons(em);
        } finally {
            // Roll back anything the consumer left uncommitted...
            if (em != null && em.getTransaction().isActive())
                em.getTransaction().rollback();
            // ...and always close the EntityManager.
            if (em != null && em.isOpen())
                em.close();
        }
    }

    private static EntityManagerFactory getEmf() {
        //TODO
    }
}
Spring translates those exceptions into its DataAccessException hierarchy (see the Spring documentation for more detail). It's good to catch those exceptions, and you can roll back with @Transactional.
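A minimal sketch of that combination (the service, repository, and method names are illustrative assumptions):

import org.springframework.dao.DataAccessException;
import org.springframework.transaction.annotation.Transactional;

@Transactional
public void transferProduct(long productId) {
    // DataAccessException is unchecked: if it escapes this method,
    // the transactional proxy rolls the whole transaction back.
    productRepository.move(productId);
}

// Caller, outside the transaction boundary:
try {
    productService.transferProduct(5);
} catch (DataAccessException e) {
    // The transaction has already rolled back; log or map to an error response here.
}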
I'm using jOOQ inside an existing project which also uses some custom JDBC code. Inside a jOOQ transaction I need to call some other JDBC code and I need to pass through the active connection so everything gets inside the same transaction.
I don't know how to retrieve the underlying connection inside a jOOQ transaction.
create.transaction(configuration -> {
    DSLContext ctx = DSL.using(configuration);

    // standard jOOQ code
    ctx.insertInto(...);

    // now I need a Connection
    Connection c = ctx.activeConnection(); // not real, this is what I need
    someOtherCode(c, ...);
});
Reading the docs and peeking a bit at the source code, my best bet is this:
configuration.connectionProvider().acquire()
But the name is a bit misleading in this particular use case. I don't want a new connection, just the current one. I think this is the way to go because the configuration is derived and I will always get the same connection, but I'm not sure and I can't find the answer in the documentation.
jOOQ's API makes no assumptions about the existence of a "current" connection. Depending on your concrete implementations of ConnectionProvider, TransactionProvider, etc., this may or may not be possible.
Your workaround is generally fine, though. Just make sure you follow the ConnectionProvider's SPI contract:
Connection c = null;
try {
    c = configuration.connectionProvider().acquire();
    someOtherCode(c, ...);
}
finally {
    configuration.connectionProvider().release(c);
}
The above is fine when you're using jOOQ's DefaultTransactionProvider, for instance.
Note there is a pending feature request #4552 that will allow you to run code in the context of a ConnectionProvider and its calls to acquire() and release(). This is what it will look like:
DSL.using(configuration)
.connection(c -> someOtherCode(c, ...));
I have developed a JDBC connection pool using synchronized methods like getConnection and returnConnection. This works well and is fast enough for my purposes. The problem is that this connection pool now has to be shared with other packages of our application, so other developers will use it as well. I feel it is a bit confusing, as they always need to call returnConnection, and I am afraid they may forget to do so.
Thinking about it, I came up with the idea of exposing only one method in my connection pool and forcing the other developers to encapsulate their code, so that I handle getConnection/returnConnection inside the connection pool.
It would be something like this:
public class MyConnectionPool {

    private Connection getConnection() {
        //return connection
    }

    private void returnConnection(Connection connection) {
        //add connection to list
    }

    public void executeDBTask(DBTaskIF task) throws SQLException {
        Connection connection = getConnection();
        try {
            task.execute(connection);
        } finally {
            returnConnection(connection); // always return the connection, even on failure
        }
    }
}
where:
public interface DBTaskIF {
    void execute(Connection connection) throws SQLException;
}
with an example of this DBTaskIF:
connectionPool.executeDBTask(new DBTaskIF() {
    public void execute(Connection connection) throws SQLException {
        PreparedStatement preStmt = null;
        try {
            preStmt = connection.prepareStatement(Queries.A_QUERY);
            preStmt.setString(1, baseName);
            preStmt.executeUpdate();
        } finally {
            if (preStmt != null) {
                try {
                    preStmt.close();
                } catch (SQLException e) {
                    log.error("Failed to close statement", e);
                }
            }
        }
    }
});
I hope you get the idea. What I want to know is your opinion about this approach. I want to propose it to the development team, and I worry someone will come up saying that this is not standard, or not OOP, or something else...
Any comments are much appreciated.
I feel it is a bit confusing, as they always need to call returnConnection, and I am afraid they may forget to do so.
Thinking about it, I came up with the idea of exposing only one method in my connection pool and forcing the other developers to encapsulate their code, so that I handle getConnection/returnConnection inside the connection pool.
I'm concerned by this statement. APIs should not (never?) assume that whoever uses them will do so in some way that is not contractually enforced by the methods they expose.
And java.sql.Connection is a widely used interface so you'll be making enemies by telling people how to use it with your pool.
Instead, you should assume that your Connection instances will be used correctly, i.e., that they will be closed (connection.close() in a finally block) once their use is over (see, for instance, Connecting with DataSource Objects):
Connection con = null;
PreparedStatement stmt = null;
try {
    con = pool.getConnection();
    con.setAutoCommit(false);
    stmt = con.prepareStatement(...);
    stmt.setFloat(1, ...);
    stmt.setString(2, ...);
    stmt.executeUpdate();
    con.commit();
} catch (SQLException e) {
    if (con != null) {
        try {
            con.rollback();
        } catch (SQLException rollbackException) {
            ...
        }
    }
} finally {
    try {
        if (stmt != null)
            stmt.close();
        if (con != null)
            con.close();
    } catch (SQLException e) {
        ...
    }
}
And the Connection implementation of your pool should be recycled when closed.
I second @lreeder's comment that you're really reinventing the wheel here: most connection pools already available are definitely fast enough for most purposes and have undergone a lot of fine-tuning over time. This also applies to embedded databases.
Disclaimer; this is just my opinion, but I have written custom connection pools before.
I find Java code where you have to create inner-class implementations a little clunky; with Java 8 lambdas or Scala anonymous functions, however, this would be a clean design. I would probably also expose returnConnection() as a public method and allow callers to use it directly.
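For instance, with a Java 8 lambda the call site from the question shrinks to something like this (a sketch; it relies on DBTaskIF having a single abstract method):

connectionPool.executeDBTask(connection -> {
    try (PreparedStatement preStmt = connection.prepareStatement(Queries.A_QUERY)) {
        preStmt.setString(1, baseName);
        preStmt.executeUpdate();
    }
});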
Third option: use a utility class that takes care of most of the administration.
Not only can forgetting to close a Connection cause trouble; forgetting to close a Statement or a ResultSet can cause trouble too. This is similar to using various I/O streams in a method: at some point you write an extra utility class in which you register all opened streams, so that if an error occurs you can call close on the utility class and be sure that everything you opened gets closed.
Such a utility class will not cover all use cases but there is always the option to write another one for other (more complex) use cases. As long as they keep the same kind of contract, using them should just make things easier (and will not feel forced).
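A sketch of such a utility (the class is illustrative; it registers AutoCloseables and closes them in reverse order of registration):

import java.util.ArrayDeque;
import java.util.Deque;

public class ResourceCloser implements AutoCloseable {

    private final Deque<AutoCloseable> resources = new ArrayDeque<>();

    // Register a resource right after opening it.
    public <T extends AutoCloseable> T register(T resource) {
        resources.push(resource);
        return resource;
    }

    @Override
    public void close() {
        while (!resources.isEmpty()) {
            try {
                resources.pop().close();
            } catch (Exception e) {
                // log and keep closing the remaining resources
            }
        }
    }
}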
Wrapping or proxying a Connection, so that close returns the Connection to the pool instead of actually closing it, is in general how connection pools prevent connections from really being closed. But if a connection pool is not used, the application is usually written differently: a connection (or two) is created at startup and used wherever a query is executed, and the connection is only closed when it is known that it will not be needed for a while (at shutdown). In contrast, when a pool is used, the connection is "closed" as soon as possible so that other processes can re-use it. This, together with the option to use a utility class, made me decide NOT to wrap or proxy a connection, but instead to let the utility class actually return the connection to the pool if a pool was used (i.e. not call connection.close() but call pool.release(connection)). A usage example of such a utility class is here; the utility class itself is here.
Proxying causes small delays, which is why, for example, BoneCP decided to wrap Connection and DataSource (wrapping causes very little overhead). The DataSource interface changes with each Java version (at least from 1.6 to 1.7), which means the code will not compile with older/newer versions of Java. That made me decide to proxy the DataSource, because it is easier to maintain, even though it is not easy to set up (see the various proxy helper classes here). Proxying also has the drawback of making stack traces harder to read (which makes debugging harder) and sometimes makes exceptions disappear (I have seen this happen in JBoss, where the underlying object threw a runtime exception from the constructor).
tl;dr If you make your own specialized pool, also deliver a utility class which makes it easy to use the pool and takes care of most of the administration that is required (like closing used resources) so that it is unlikely to be forgotten. If a utility class is not an option, wrapping or proxying is the standard way to go.
I have code that saves a bean and updates another bean in a DB via Hibernate. It must be done in the same transaction, because if something goes wrong (e.g. an exception is thrown), a rollback must be executed for both operations.
public class BeanDao extends ManagedSession {

    public Integer save(Bean bean) {
        Session session = null;
        try {
            session = createNewSessionAndTransaction();
            Integer idBean = (Integer) session.save(bean); // SAVE
            doOtherAction(bean);                           // UPDATE
            commitTransaction(session);
            return idBean;
        } catch (RuntimeException re) {
            log.error("save failed", re);
            if (session != null) {
                rollbackTransaction(session);
            }
            throw re;
        }
    }

    private void doOtherAction(Bean bean) {
        Integer idOtherBean = bean.getIdOtherBean();
        OtherBeanDao otherBeanDao = new OtherBeanDao();
        OtherBean otherBean = otherBeanDao.findById(idOtherBean);
        .
        . (doing operations)
        .
        otherBeanDao.attachDirty(otherBean);
    }
}
The problem is: if session.save(bean) throws an error, I then get an AssertionFailure, because the function doOtherAction (which is also used in other parts of the project) uses the session after an exception has been thrown.
The first thing I thought of was extracting the code of the doOtherAction function, but then I would have duplicated code, which does not seem like best practice either.
What is the best way to refactor this?
It's common practice to manage transactions one level above the DAOs, in services or other business-logic classes. That way you can, depending on the business/service logic, in one case run two DAO operations in one transaction and, in another case, run them in separate transactions.
I'm a huge fan of declarative transaction management, if you can spare the time to get it working (a piece of cake with an application server such as GlassFish or JBoss, and easy with Spring). If you annotate your business method with @TransactionAttribute(REQUIRED) (it can even be set as the default) and it calls the two DAO methods, you will get exactly what you want: everything gets committed at once or rolled back on an exception.
This solution is about as loosely coupled as it gets.
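A minimal sketch of the EJB flavor (BeanService and the injected DAOs are illustrative assumptions; the DAOs would have to be beans the container can inject):

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class BeanService {

    @EJB private BeanDao beanDao;
    @EJB private OtherBeanDao otherBeanDao;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public Integer saveAndUpdate(Bean bean, OtherBean otherBean) {
        // Both calls join the same container-managed transaction;
        // an unchecked exception from either one rolls back everything.
        Integer idBean = beanDao.save(bean);
        otherBeanDao.attachDirty(otherBean);
        return idBean;
    }
}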
The others are correct in that they take into account what is common practice currently.
But that doesn't really help you with your current practice.
What you should do is create two new DAO methods, such as createGlobalSession and commitGlobalSession (plus a matching rollbackGlobalSession).
These do the same thing as your current create and commit routines.
The difference is that they set a "global" session variable (most likely best done with a ThreadLocal). Then you change the current routines so that they check whether this global session already exists: if your create detects the global session, it simply returns it; if your commit detects the global session, it does nothing.
Now when you want to use it you do this:
try {
    dao.createGlobalSession();
    beanA.save();
    beanB.save();
    dao.commitGlobalSession();
} finally {
    dao.rollbackGlobalSession(); // a no-op if the commit already cleared the global session
}
Make sure you wrap the process in a try block so that you can reset your global session if there's an error.
While the other techniques are considered best practice, and ideally you could one day evolve to something like that, this will get you over the hump with little more than three new methods and changes to two existing methods. After that, the rest of your code stays the same.
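A sketch of what those global-session methods could look like (illustrative; it reuses the helper names from your DAO):

public class BeanDao extends ManagedSession {

    // One "global" session per thread.
    private static final ThreadLocal<Session> GLOBAL_SESSION = new ThreadLocal<Session>();

    public void createGlobalSession() {
        if (GLOBAL_SESSION.get() == null) {
            GLOBAL_SESSION.set(createNewSessionAndTransaction());
        }
    }

    public void commitGlobalSession() {
        Session session = GLOBAL_SESSION.get();
        if (session != null) {
            commitTransaction(session);
            GLOBAL_SESSION.remove();
        }
    }

    public void rollbackGlobalSession() {
        Session session = GLOBAL_SESSION.get();
        if (session != null) {
            rollbackTransaction(session);
            GLOBAL_SESSION.remove();
        }
    }

    // The existing create routine then checks for the global session first:
    private Session currentOrNewSession() {
        Session global = GLOBAL_SESSION.get();
        return (global != null) ? global : createNewSessionAndTransaction();
    }
}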
I'm using JPA (TopLink Essentials), building a REST web app.
I have a servlet that finds one entity and deletes it.
With the code below, I thought I could catch the optimistic lock exception at the servlet level, but it is not caught!
Instead a RollbackException is thrown, which is what the documentation says should happen.
But when I look at the GlassFish log in the NetBeans IDE, an OptimisticLockException is thrown somewhere. It's just not being caught in my code (my System.out message doesn't get displayed, so I'm sure execution never goes in there).
I tried importing each of these packages (one at a time, of course) and testing the catch clause, but in both cases execution does not enter the catch block, even though the log says "optimistic exception".
import javax.persistence.OptimisticLockException;
import oracle.toplink.essentials.exceptions.OptimisticLockException;
So where is the OptimisticLockException thrown?
#Path("delete")
#DELETE
#Consumes("application/json")
public Object planDelete(String content) {
try {
EntityManager em = EmProvider.getInstance().getEntityManagerFactory().createEntityManager();
EntityTransaction txn = em.getTransaction();
txn.begin();
jObj = new JSONObject(content);
MyBeany bean = em.find(123);
bean.setVersion(Integer.parseInt(12345));
em.remove(bean);
//here commit!!!!!
em.getTransaction().commit();
}
catch(OptimisticLockException e) { //this is not caught here :(
System.out.pritnln("here");
//EntityTransactionManager.rollback(txn);
return HttpStatusHandler.sendConflict();
}
catch(RollbackException e) {
return HttpStatusHandler.sendConflict();
}
catch(Exception e) {
return HttpStatusHandler.sendServerError(e);
}
finally {
if(em != null) {
em.close();
}
}
Error msg:
[TopLink Warning]: 2011.01.28 05:11:24.007--UnitOfWork(22566987)
--Exception [TOPLINK-5006]
(Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))):
oracle.toplink.essentials.exceptions.OptimisticLockException
[TopLink Warning]: 2011.02.01 08:50:15.095--UnitOfWork(681660)--
javax.persistence.OptimisticLockException: Exception [TOPLINK-5006] (Oracle TopLink
Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))):
oracle.toplink.essentials.exceptions.OptimisticLockException
Not 100% sure, but could it be that you're catching javax.persistence.OptimisticLockException (notice the package), while the thrown exception is oracle.toplink.essentials.exceptions.OptimisticLockException, so it's not getting caught? Even though the exception classes have the same name, they're not the same class.
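A sketch of one way to handle both (assuming the TopLink Essentials class from your log is on the classpath; otherwise inspect the cause chain by class name):

try {
    em.getTransaction().commit();
} catch (javax.persistence.RollbackException e) {
    Throwable cause = e.getCause();
    if (cause instanceof javax.persistence.OptimisticLockException
            || cause instanceof oracle.toplink.essentials.exceptions.OptimisticLockException) {
        return HttpStatusHandler.sendConflict(); // stale version: report a conflict
    }
    throw e;
}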
I would guess that it is thrown by the em.getTransaction().commit(); statement,
because the Javadoc of RollbackException says:
Thrown by the persistence provider when EntityTransaction.commit() fails.
I strongly believe that this is not the code you really use (it would not compile; for example, Integer.parseInt(12345) takes a String, not an int), but I "hope" that the real code has the same problem.
Have you tried calling entityManager.flush() inside your try block? The OptimisticLockException is thrown when JPA flushes, so forcing the flush inside the try lets you catch it right there.
Also, you need not commit the transaction in the manner you did; you could simply have written txn.commit(); instead of em.getTransaction().commit();.
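A sketch of that, reusing the names from your snippet:

EntityTransaction txn = em.getTransaction();
try {
    txn.begin();
    MyBeany bean = em.find(MyBeany.class, 123);
    em.remove(bean);
    em.flush(); // an optimistic lock failure surfaces here, inside the try
    txn.commit();
} catch (javax.persistence.OptimisticLockException e) {
    txn.rollback();
    return HttpStatusHandler.sendConflict();
}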
I have a similar situation where I am able to catch javax.persistence.OptimisticLockException. In my case I made the REST endpoint a stateless session bean (SSB) and injected the entity manager. I then call a method on another SSB, also injected, which acts as a controller for this piece of business logic. This controller performs a flush() and catches the OptimisticLockException, rethrowing an ApplicationException which the REST endpoint/SSB catches and retries. Using this pattern you also need to make sure to specify TransactionAttributeType.REQUIRES_NEW, so that each retry occurs in a fresh transaction, since the OptimisticLockException invalidates the old one.
I was not able to find any advice on catch blocks in Java that involve some cleanup operations which themselves could throw exceptions.
The classic example is that of stream.close(), which we usually call in the finally clause; if that throws an exception, we either ignore it by wrapping the call in a try-catch block or declare it to be rethrown.
But in general, how do I handle cases like:
public void doIt() throws ApiException { //ApiException is my "higher level" exception
    try {
        doLower();
    } catch(Exception le) {
        doCleanup(); //this throws exception too, which I can't communicate to caller
        throw new ApiException(le);
    }
}
I could do:
catch(Exception le) {
    try {
        doCleanup();
    } catch(Exception le1) {
        //ignore?
        //log?
    }
    throw new ApiException(le); //I must throw le
}
But that means I will have to do some log analysis to understand why the cleanup failed.
If I did:
catch(Exception le) {
    try {
        doCleanup();
    } catch(Exception le1) {
        throw new ApiException(le1);
    }
}

It results in losing the le that got me into the catch block in the first place.
What are some of the idioms people use here?
Declare the lower level exceptions in throws clause?
Ignore the exceptions during cleanup operation?
First, decide if you actually need to throw from the finally block - think carefully on that - in the case of a failed close() call... well, logging it is fine - but what can a higher layer of your API really do about the problem? So for 99% of the cases, you will log the secondary then re-throw the primary.
Next, if you do need to throw the secondary, decide if the various causes of the secondary exception are important. It will be rare that they are. So set the cause of the secondary to be the primary (either using an appropriate constructor, or initCause() ).
Finally, if you must throw the secondary and retain the full stack traces of both the secondary and the primary, then you are into creating a custom Exception to handle the situation. It's ugly, because you will probably want to derive from different parent classes. If this does arise, I suggest creating a helper class that can fill the stack trace of a target exception in such a way that it produces a meaningful trace based on both exceptions (be sure to use indentation for the secondary so nested exceptions are easy to pull apart).
But mostly, I suggest that you use the log-and-rethrow-primary paradigm. Focus on fixing the primary, and the secondary issues generally take care of themselves (the classic example here is an IO exception that munges things up so badly that the call to close() can't succeed either).
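A sketch of that idiom, using the names from the question (log is an assumed logger; on Java 7+ you can also attach the secondary via Throwable.addSuppressed so neither trace is lost):

public void doIt() throws ApiException {
    try {
        doLower();
    } catch (Exception primary) {
        try {
            doCleanup();
        } catch (Exception secondary) {
            log.warn("Cleanup failed", secondary); // log the secondary...
            primary.addSuppressed(secondary);      // ...and keep it attached (Java 7+)
        }
        throw new ApiException(primary);           // the primary is what the caller cares about
    }
}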
Use a finally block. If you are worried about swallowing the original stack trace, pass the first exception along as the cause, which preserves its trace:

Exception ex = new Exception(le);