Proper way to handle JDBC connection in EJB 3 (SLSB) - java

I ask this question especially for a Stateless Session Bean. I know that I can easily inject the DataSource with the @Resource annotation, but I don't know what the proper way to get the Connection is. Should it be done in each method of the bean, or in the method annotated with @PostConstruct? The same goes for closing the Connection: do I have to close it within a finally block in each method call, or in the method annotated with @PreDestroy?
Is it safe to create an instance variable for the Connection, for example:
@Stateless
public class MyBean {
    @Resource private DataSource ds;
    private Connection conn;

    @PostConstruct
    public void onCreate() {
        conn = ds.getConnection(); // within try catch block
    }

    @PreDestroy
    public void onDestroy() {
        conn.close(); // within try catch block
    }
}
Or should I create them locally in each method like this:
@Stateless
public class MyBean {
    @Resource private DataSource ds;

    public void method1() {
        Connection conn = null;
        // get and close connection...
    }

    public void method2() {
        Connection conn = null;
        // get and close connection...
    }
}
Some people on the Internet do it one way, and others the other. What is the proper approach for an application with high request traffic? When the bean instance is returned to the EJB pool, does the Connection remain open, or is it returned to the database pool?
Note: The application uses the native JDBC API; there is no JPA, JDO, etc. The application server is WildFly.

TL;DR
The second approach is the correct one. Just make sure to close the connection so it is returned to the pool.
The DataSource fronts a pool of connections: every time you call getConnection() you borrow one from the pool, and closing it hands it back. You therefore want to release the connection as soon as possible.
With the first approach you retain the connection for as long as the EJB instance lives in memory. Since a stateless session bean stays alive for a long time and is reused by different consumers, you would keep at least one connection open per live bean instance, which makes this approach impractical.
With the second approach the bean only holds a connection while it is actually in use.
@Stateless
public class MyBean {
    @Resource private DataSource ds;

    public void method1() throws SQLException {
        try (Connection conn = ds.getConnection()) {
            // Do anything you need with the connection
        }
    }

    public void method2() throws SQLException {
        Connection conn = ds.getConnection();
        try {
            // Do anything you need with the connection
        } finally {
            conn.close();
        }
    }
}
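To make the borrow-and-return behaviour concrete, here is a toy model of what a pooling DataSource does under the hood. All names are illustrative, not a real JDBC API; a real pool such as the one WildFly manages also validates connections, enforces timeouts, and so on.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy model of a connection pool: getConnection() borrows a connection,
// close() returns it. Strings stand in for real connections.
public class ToyPool {
    private final BlockingQueue<String> idle;

    public ToyPool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add("conn-" + i);
        }
    }

    public String borrow() throws InterruptedException {
        return idle.take(); // blocks when the pool is exhausted
    }

    public void release(String conn) {
        idle.add(conn); // "closing" just hands the connection back
    }

    public int available() {
        return idle.size();
    }
}
```

This is why holding a connection in an instance field is wasteful: a borrowed connection is unavailable to everyone else until it is released.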

Related

How to catch a broken database connection

I'm working on an app that retrieves information from and enters it into a database, using Spring's JDBC template. On the service tier, I would like to set up some logic to catch an exception if the database goes down. However, I have no idea how to do this. I'm able to set up the methods to catch when they fail, but I'd like to set up specific logic for the server going down.
As an option, you can create a scheduler that checks database connectivity.
Database connectivity can be checked by executing a simple query or via the Connection interface:
boolean isValid(int timeout) throws SQLException
Returns true if the connection has not been closed and is still valid.
The driver shall submit a query on the connection or use some other
mechanism that positively verifies the connection is still valid when
this method is called. The query submitted by the driver to validate
the connection shall be executed in the context of the current
transaction.
An example of checking database connectivity via Spring scheduler:
@Service
public class ConnectionListener {
    private Connection connection;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @PostConstruct
    public void init() throws SQLException {
        connection = jdbcTemplate.getDataSource().getConnection();
    }

    @Scheduled(fixedRate = 60000) // check every 60 sec
    public void checkConnection() {
        try {
            connection.isValid(10);
        } catch (SQLException e) { // Or just handle it here
            throw new ConnectionTimeoutException(e);
        }
    }
}
You need some additional configuration to handle exceptions thrown from the Spring scheduler:
@EnableScheduling
@Configuration
class SchedulingConfiguration implements SchedulingConfigurer {
    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        taskRegistrar.setScheduler(...);
    }
}
The scheduler could also be implemented with an ExecutorService:
@Service
class ConnectionListener {
    private ScheduledExecutorService service = Executors.newScheduledThreadPool(2);
    private Connection connection;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @PostConstruct
    public void init() throws SQLException {
        connection = jdbcTemplate.getDataSource().getConnection();
        checkConnection();
    }

    @PreDestroy
    public void destroy() {
        service.shutdown();
    }

    public void checkConnection() {
        service.scheduleAtFixedRate(() -> {
            try {
                connection.isValid(10);
            } catch (Exception e) {
                // handle your exception
            }
        }, 60, 60, TimeUnit.SECONDS);
    }
}
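As a minimal, database-free sketch of the scheduleAtFixedRate pattern used above (the class name is mine; the periodic task here just counts executions where the real listener would call connection.isValid(...)):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Runs a periodic "check" and blocks until it has executed `count` times.
public class PeriodicCheck {
    public static int runChecks(int count, long periodMillis) throws InterruptedException {
        ScheduledExecutorService service = Executors.newScheduledThreadPool(1);
        AtomicInteger ran = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(count);
        ScheduledFuture<?> task = service.scheduleAtFixedRate(() -> {
            ran.incrementAndGet(); // stand-in for the connectivity check
            latch.countDown();
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
        latch.await(); // wait until the check has run `count` times
        task.cancel(false);
        service.shutdown();
        return ran.get();
    }
}
```

Note the shutdown() call at the end: just like the @PreDestroy method above, forgetting it leaves the scheduler thread alive.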
That's a general overview and just a couple of hints for doing further research.
Just one note: if a server goes down you need disaster recovery; catching an exception will not help. That is a big infrastructure and architecture task, not the responsibility of a single application.

Java Connection Pooling - garbage collection or call close()?

My fear is that I have a fundamental issue with understanding connection pooling in Java.
I'm using DBCP's BasicDataSource as a connection pool.
At the entry point of my application I instantiate a BasicDataSource with, for instance, setMaxActive=50. That DataSource instance is then handed to various DAOs that are used by some business logic.
Each DAO calls getConnection(), but close() is never called. My assumption was that once a DAO is no longer used, the garbage collector closes the connections.
My issue is that I'm constantly running out of connections (i.e. code waits for an available connection).
Now let's say I add a close() call at the end of each database operation. What happens with thrown exceptions? I would have to catch every exception in the DAO, make sure to close the connection, and then re-throw the exception!
Example - Current Approach:
public class MyDAO {
    private Connection con;

    public MyDAO(DataSource ds) throws SQLException {
        con = ds.getConnection();
    }

    public MyReturnClass execSomeQuery() throws SQLException {
        String sql = String.format("SELECT * FROM foo");
        PreparedStatement ps = con.prepareStatement(sql);
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            // ...
        }
        return result;
    }
}
public class MyAppLogic {
    DataSource ds;

    public MyAppLogic(DataSource ds) {
        this.ds = ds;
    }

    public void doSomeStuff() throws SQLException {
        MyDAO myDAO = new MyDAO(ds);
        myDAO.execSomeQuery();
    }
}
You need to close the connections so that they are returned to the connection pool. The GC will not call close() on your connections!
You could create a wrapper or parent class that manages the connection, so that you don't have to replicate the logic in each method. Here's an example (note that I haven't actually compiled or tested this):
public interface DAOClass {
    void execSomeQuery(Connection con) throws SQLException;
}

public class MyDAOWrapper {
    private final DAOClass dao;
    private final DataSource ds;

    public MyDAOWrapper(DataSource ds, DAOClass dao) {
        this.dao = dao;
        this.ds = ds;
    }

    public void exec() throws SQLException {
        Connection con = ds.getConnection();
        try {
            dao.execSomeQuery(con); // the DAO works with the borrowed connection
        } finally {
            con.close(); // always returned to the pool, even on exception
        }
    }
}

// usage
public void doSomeStuff() throws SQLException {
    MyDAOWrapper dao = new MyDAOWrapper(ds, new MyDAO());
    dao.exec();
}
Regarding error handling, you don't need to re-throw an exception unless you catch it. Your finally clause should close the connection (if it exists), and when it exits, the exception will continue propagating up.
try {
    do_something();
} finally {
    cleanup();
    // no re-throw is necessary
}
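A minimal, self-contained demonstration of this point (the names are mine, not from the question): the finally block runs even though the exception is in flight, and the exception keeps propagating without any re-throw.

```java
public class FinallyDemo {
    static boolean cleaned = false;

    static void doSomething() {
        throw new IllegalStateException("boom");
    }

    static void run() {
        try {
            doSomething();
        } finally {
            // Runs while the exception propagates; no re-throw needed.
            cleaned = true;
        }
    }

    public static void main(String[] args) {
        try {
            run();
        } catch (IllegalStateException e) {
            // The original exception reached the caller, and cleanup ran.
            System.out.println("caught: " + e.getMessage() + ", cleaned=" + cleaned);
        }
    }
}
```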

Injecting datasource in EJB

When you inject a datasource in your application and get a connection by invoking getConnection() on it, are you supposed to close the connection?
Even though the datasource itself is container managed, the API indeed requires the programmer to close connections. This differs from some other container-managed resources (like the entity manager), where the container takes care of closing. Note that in the majority of cases closing doesn't actually close the physical connection, but returns it to a connection pool.
As a rule of thumb: if you use a factory-like resource to obtain other closeable resources, you have to close those yourself; otherwise the container does it.
Since Connection implements AutoCloseable, you can use a try-with-resources block for this:
@Stateless
public class MyBean {
    @Resource(lookup = "java:/app/datasource")
    private DataSource dataSource;

    public void doStuff() {
        try (Connection connection = dataSource.getConnection()) {
            // Work with connection here
        } catch (SQLException e) {
            throw new SomeRuntimeException(e);
        }
    }
}
Yes, you are supposed to close it; otherwise you'll exhaust your connection pool. It's best to do this in a finally block:
@Resource(mappedName = "jndi/yourDatasource")
DataSource ds;
// ...
Connection conn = null;
try {
    conn = ds.getConnection();
    // PERFORM QUERY, ETC.
} catch (SQLException ex) {
    // EXCEPTION HANDLING
} finally {
    try {
        if (conn != null)
            conn.close();
    } catch (SQLException ex) { /* ignore */ }
}

Using BoneCP: Handling connections from the pool

I have just started using BoneCP and this is my first time using a connection pool. I'm somewhat confused as to how I'm supposed to use it. Currently I save the BoneCP object in a static variable, so I can share it between different connections.
When I'm done with the connection, I close it with connection.close().
Should I do this, or should I not close it to enable it to be reused by the pool?
This is my current implementation to get a connection:
private static BoneCP connectionPool;

public Connection getConnection() throws SQLException {
    if (connectionPool == null) {
        initPool();
    }
    return connectionPool.getConnection();
}

private void initPool() throws SQLException {
    BoneCPConfig config = new BoneCPConfig();
    config.setJdbcUrl(DB_URL);
    config.setUsername(DB_USERNAME);
    config.setPassword(DB_PASSWORD);
    config.setMinConnectionsPerPartition(5);
    config.setMaxConnectionsPerPartition(10);
    config.setPartitionCount(1);
    connectionPool = new BoneCP(config);
}
Does this seem correct or have I misunderstood how I am supposed to use BoneCP?
Other than making your connectionPool field private static final and moving the initialization into a static block (or alternatively making getConnection() synchronized), you are OK.
You are correct: you MUST call connection.close() to return the connection to the pool. When your app shuts down, shut down the connection pool as well.

Multithreaded Java server: allowing one thread to access another one

Hopefully the code itself explains the issue here:
class Server {
    private static List<Session> sessions = new ArrayList<>();

    public static void main(String[] args) {
        // ...
        ServerSocket serverSocket = new ServerSocket(PORT);
        while (true) {
            Socket socket = serverSocket.accept();
            Thread thread = new Thread(new Session(socket));
            thread.start();
        }
        // ...
    }

    public static synchronized Session findByUser(String user) {
        for (int i = 0; i < sessions.size(); i++) {
            Session session = sessions.get(i);
            if (session.getUserID().equals(user)) {
                return session;
            }
        }
        return null;
    }
}
class Session implements Runnable {
    public Session(Socket socket) {
        attach(socket);
    }

    public void attach(Socket socket) {
        // get socket's input and output streams
        // start another thread to handle messaging (if not already started)
    }

    public void run() {
        // ...
        // user logs in and if he's got another session opened, attach to it
        Session session = Server.findByUser(userId);
        if (session != null) {
            // close input and output streams
            // ...
            session.attach(socket);
            return;
        }
        // ...
    }
}
My question here is: is it safe to publish the session reference through Server.findByUser, and doesn't it violate OO style, etc.?
Or should I reference sessions through some immutable id and encapsulate the whole thing? Anything else you would change here?
String sessionId = Server.findByUser(userId);
if (sessionId != null && sessionId.length() > 0) {
    // close input and output streams
    // ...
    Server.attach(sessionId, socket);
    return;
}
Thomas:
Thanks for your answer.
I agree that in the real world it would be a good idea to use dependency injection when creating a new instance of Session, but then probably also with an interface, right (code below)? Even though I probably should have unit tests for that, let's assume I don't. Then I need exactly one instance of Server. Would it be a huge OO crime to use static methods instead of a singleton?
interface Server {
    Session findByUser(String user);
}

class ServerImpl implements Server {
    public Session findByUser(String user) { return null; /* stub */ }
}

class Session {
    public Session(Server server, Socket socket) { }
}
Good point on the attach(...) method: I've never even considered subclassing the Session class, which is probably why I haven't thought about how risky it might be to call a public method in the constructor. But I actually need some public method to attach the session to a different socket, so maybe a pair of methods?
class Session {
    public Session(Socket socket) {
        attach_socket(socket);
    }

    public void attach(Socket socket) {
        attach_socket(socket);
    }

    private void attach_socket(Socket socket) {
        // ...
    }
}
It's true that allowing clients of Session to call attach(...) doesn't seem right. That's probably one of those serious methods only the Server should have access to. How do I do that without C++'s friend relationship, though? Inner classes came to mind, but I haven't given it much thought, so it may be a completely wrong path.
Every time I receive a new connection I spawn a new thread (and create a new Session instance associated with it) to handle transmission. That way, while the user sends in a login command, the Server is ready to accept new connections. Once the user's identity is verified, I check whether by any chance he isn't already logged in (has another ongoing session). If he is, I detach the ongoing session from its socket, close that socket, attach the ongoing session to the current socket, and close the current session. I hope this is a clearer explanation of what actually happens. Maybe the use of the word session is a bit unfortunate here. What I really have is four different objects created for each connection (and three threads): a socket handler, a message sender, a message receiver, and a session (whether that's a good solution is a different question...). I just tried simplifying the source code to focus on the question.
I totally agree it makes no sense to iterate over a session list when you can use a map. But I'm afraid that's probably one of the smaller issues (believe me) the code I'm working on suffers from. I should have mentioned it's actually a legacy system that, no surprise, has quite recently been discovered to have some concurrency and performance issues. My task is to fix it... Not an easy task when you have pretty much only theoretical knowledge of multithreading, or have merely used it to display a progress bar.
If, after this rather lengthy clarification, you have more insight into the architecture, I'd be more than willing to listen.
You should start by making the Server class OO (i.e. not static) and using dependency injection in the Session class:
class Server {
    public Session findByUser(String user) { }
}

class Session {
    public Session(Server server, Socket socket) { }
}
attach(...) has to be private to ensure encapsulation and proper initialization; otherwise a subclass could break the Session class like this:
class BadSession extends Session {
    @Override
    public void attach(Socket socket) {
        // "this" is not initialized at this point;
        // now the instance is broken
    }
}
Calling attach from a client seems to be invalid, too.
The responsibility for attaching the Socket to the Session should be part of the Server; that is the right place to decide which Session gets which Socket. As far as I understand your code, you create a Session with a Socket. Somehow you find out that the user already has a Session (with another Socket), and you then attach the current Session to that Socket. Now there is the old Socket with two Sessions and the new Socket without a Session. I think a session should traditionally have multiple sockets, not the other way around:
Session session = findSession(userId);
session.attach(socket);

class Session {
    List<Socket> sockets;
}
After this change, threads would not be assigned to Sessions but to socket handlers, which process the input stream for one socket and change the Session accordingly.
Using synchronized on public static synchronized Session findByUser(String user) is not sufficient to ensure thread safety. You have to make sure that looking up a session (by user) and registering a session (if the user is not known) happen atomically. The semantics should be analogous to putIfAbsent of ConcurrentMap. (Iterating over the session List is not efficient anyway; you should use a Map<Id, Session>.)
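A sketch of that putIfAbsent-style atomic look-up-or-register using ConcurrentHashMap (class and method names here are illustrative, not from the question's code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical session registry keyed by user id. computeIfAbsent makes
// "find existing session or register a new one" a single atomic step,
// so two threads logging in the same user cannot both create a session.
public class SessionRegistry {
    static class Session {
        final String userId;
        Session(String userId) { this.userId = userId; }
    }

    private final Map<String, Session> sessions = new ConcurrentHashMap<>();

    public Session findOrRegister(String userId) {
        return sessions.computeIfAbsent(userId, Session::new);
    }

    public Session findByUser(String userId) {
        return sessions.get(userId); // O(1), unlike iterating a List
    }
}
```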
I hope this helps.
