Under .NET (specifically C#), is there an equivalent to Java's DataSource class? I'm used to creating a single DataSource (pooled or non-pooled) and passing it around to objects that need to create new database connections. This is helpful in decoupled/dependency injection situations.
Under .NET, however, a newly instantiated SqlConnection seems to come from a pool as long as you use the same connection string. Does that mean you should pass a connection string (or connection string builder) around to your DAO classes, pass around a single Connection object, or create a ConnectionProvider-like class?
For example:
class SomethingDao {
    DataSource dataSource;

    Something GetSomething(int id) {
        var connection = dataSource.GetConnection();
        var command = connection.CreateCommand();
        // ... etc.
    }
}
The Enterprise Library takes care of virtually all of these details for you, so I recommend you consider using it and following the example code shown here:
http://msdn.microsoft.com/en-us/library/ff953187%28v=PandP.50%29.aspx
This link walks you through using it step-by-step. The equivalent using Ent Lib would be the Database class. It has all the code examples, so I won't repeat them here.
I have one class named DBManager.java; it implements the singleton design pattern and is used for all DB operations.
This worked perfectly while I only had to connect to one data source. Now, in my project, I have to connect to two different data sources, and this class behaves incorrectly because it always returns a connection to the same data source.
How can I manage this in a better way? One approach would be to create another class, DBManager2.java, and use it for the second data source, but I don't think that's a good way.
Any recommendations?
Use a Map<Key, DataSource> to store the data sources by some key, and then use a key object (database URL, database user, or some other identifier) to look up the corresponding DataSource.
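A minimal sketch of that idea; the class name and the choice of String keys here are illustrative, not prescribed:

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class DataSourceRegistry {

    private final Map<String, DataSource> dataSources = new ConcurrentHashMap<>();

    public void register(String key, DataSource dataSource) {
        dataSources.put(key, dataSource);
    }

    public Connection getConnection(String key) throws SQLException {
        DataSource ds = dataSources.get(key);
        if (ds == null) {
            throw new IllegalArgumentException("No DataSource registered for key: " + key);
        }
        return ds.getConnection();
    }
}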
One way is to create an enum with the different databases as different enum constants:
public enum Databases {
    DB1,
    DB2
}
And then use that in your DBManager.getConnection() method:
public final class DBManager {
    // singleton stuff

    public Connection getConnection(Databases d) {
        switch (d) {
            case DB1:
                // return connection to db1
            case DB2:
                // return connection to db2
        }
    }
}
By using a switch you can just add a new branch for every database.
Another way would be to store all the information needed for the connection in the enum itself, although that would introduce security flaws, because you would be hardcoding database credentials into your code (which should not be done).
I'm using jOOQ inside an existing project which also uses some custom JDBC code. Inside a jOOQ transaction I need to call some other JDBC code, and I need to pass through the active connection so that everything runs in the same transaction.
I don't know how to retrieve the underlying connection inside a jOOQ transaction.
create.transaction(configuration -> {
    DSLContext ctx = DSL.using(configuration);

    // standard jOOQ code
    ctx.insertInto(...);

    // now I need a Connection
    Connection c = ctx.activeConnection(); // not real, this is what I need
    someOtherCode(c, ...);
});
Reading the docs and peeking a bit at the source code, my best bet is this:
configuration.connectionProvider().acquire()
But the name is a bit misleading in this particular use case: I don't want a new connection, just the current one. I think this is the way to go, because the configuration is derived and I will always get the same connection, but I'm not sure, and I can't find the answer in the documentation.
jOOQ's API makes no assumptions about the existence of a "current" connection. Depending on your concrete implementations of ConnectionProvider, TransactionProvider, etc., this may or may not be possible.
Your workaround is generally fine, though. Just make sure you follow the ConnectionProvider's SPI contract:
Connection c = null;

try {
    c = configuration.connectionProvider().acquire();
    someOtherCode(c, ...);
}
finally {
    configuration.connectionProvider().release(c);
}
The above is fine when you're using jOOQ's DefaultTransactionProvider, for instance.
Note there is a pending feature request #4552 that will allow you to run code in the context of a ConnectionProvider and its calls to acquire() and release(). This is what it will look like:
DSL.using(configuration)
.connection(c -> someOtherCode(c, ...));
I'm storing it in a public static field
public class DB {
    private static final String url = "jdbc:sqlite:file:target/todo";
    public static final DBI dbi = new DBI(url);

    public static void migrate() {
        Flyway flyway = new Flyway();
        flyway.setDataSource(url, "", "");
        flyway.migrate();
    }
}
And I never close it. Is there a better option?
This amounts to how you get hold of any dependency in your application. The best general model, IMHO, is passing it in to the constructor of the things that need it. If you want to put some kind of DAO facade around your database access, pass the DBI to the constructor of your DAO. If you are using a DI framework, bind the DBI instance to the framework and @Inject it.
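For example, with constructor injection the DAO might look like this (TodoDao and the todo table are made up for illustration; this assumes JDBI v2):

import org.skife.jdbi.v2.DBI;
import org.skife.jdbi.v2.Handle;

public class TodoDao {

    private final DBI dbi;

    public TodoDao(DBI dbi) {
        this.dbi = dbi;
    }

    public void insert(String title) {
        // Open a Handle per operation and close it promptly (see below).
        try (Handle handle = dbi.open()) {
            handle.execute("insert into todo (title) values (?)", title);
        }
    }
}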
For your specific question about Connections: the DBI equivalent of a JDBC Connection is the Handle. You should obtain a Handle, use it, and close it as soon as you are done. Typical use of a DBI instance is to give it a DataSource which manages the actual database connections; by releasing the Handle as soon as you finish with it, you make better use of the connection pool.
In most cases, you would only close the DBI instance if you want to close the DataSource; that is all closing the DBI instance does. 98% of the time, in a Java-for-server world, closing the DataSource doesn't make sense, so worrying about closing the DBI (as opposed to the Handle) is not a big deal.
When using JDBI, keep in mind:
DBI -> DataSource
Handle -> Connection
Query/SQLStatement -> Statement
This doc elaborates on these.
The best option is to make a handler class. This class "hands out" handles as someone needs them. What you most need to worry about is closing the handles. If you really want a fast system, something like c3p0 is great. Normally, it is best to make mutable objects private and final, using getters/setters. You can keep the DBI static if you want. When you check out a Handle, you should use try-with-resources.
public Handle getHandle() {
    // Assumes the DBI was constructed with the DataSource.
    return dbi.open();
}

public void doSomething() {
    try (Handle handle = getHandle()) {
        // Do something
    } catch (DBIException e) {
        // TODO Handle it...
    }
}
I'd probably make my handler AutoCloseable and close everything left over (like any connection pools) when it closes. This, by the way, lets you pull your credentials into the handler and keep that data safe there.
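A rough sketch of such a handler; the class name and the choice of a pooled DataSource such as c3p0 behind it are assumptions, not from the answer:

import javax.sql.DataSource;
import org.skife.jdbi.v2.DBI;
import org.skife.jdbi.v2.Handle;

public class DbHandler implements AutoCloseable {

    private final DataSource dataSource; // e.g. a pooled c3p0 DataSource
    private final DBI dbi;

    public DbHandler(DataSource dataSource) {
        this.dataSource = dataSource;
        this.dbi = new DBI(dataSource);
    }

    public Handle getHandle() {
        return dbi.open();
    }

    @Override
    public void close() {
        // Close whatever is left over, e.g. shut down the pool:
        // with c3p0 this would be ((ComboPooledDataSource) dataSource).close().
    }
}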
I have developed a JDBC connection pool using synchronized methods like getConnection and returnConnection. This works well and is fast enough for my purposes. The problem arises now that this connection pool has to be shared with other packages of our application, so other developers will make use of it as well. I feel it is a bit confusing, as they always need to call returnConnection, and I am afraid they may forget to do so.
Thinking about it, I came up with the idea of exposing only one method in my connection pool and forcing the other developers to encapsulate their code, so that I handle getConnection/returnConnection inside the connection pool.
It would be something like this:
public class MyConnectionPool {

    private Connection getConnection() {
        // return connection
    }

    private void returnConnection(Connection connection) {
        // add connection to list
    }

    public void executeDBTask(DBTaskIF task) throws SQLException {
        Connection connection = getConnection();
        try {
            task.execute(connection);
        } finally {
            // Always return the connection, even if the task throws.
            returnConnection(connection);
        }
    }
}
where:
public interface DBTaskIF {
    void execute(Connection connection) throws SQLException;
}
with an example of this DBTaskIF:
connectionPool.executeDBTask(new DBTaskIF() {
    public void execute(Connection connection) throws SQLException {
        PreparedStatement preStmt = null;
        try {
            preStmt = connection.prepareStatement(Queries.A_QUERY);
            preStmt.setString(1, baseName);
            preStmt.executeUpdate();
        } finally {
            if (preStmt != null) {
                try {
                    preStmt.close();
                } catch (SQLException e) {
                    log.error(e.getStackTrace());
                }
            }
        }
    }
});
I hope you get the idea. What I want to know is your opinion of this approach. I want to propose it to the development team, and I worry someone will object that this is not standard, not OOP, or something else...
Any comments are much appreciated.
I feel it is a bit confusing, as they always need to call returnConnection, and I am afraid they may forget to do so.
Thinking about it, I came up with the idea of exposing only one method in my connection pool and forcing the other developers to encapsulate their code, so that I handle getConnection/returnConnection inside the connection pool.
This statement concerns me. APIs should not (ever?) assume that whoever uses them will do so in some particular way that is not enforced contractually by the methods they expose.
And java.sql.Connection is a widely used interface so you'll be making enemies by telling people how to use it with your pool.
Instead, you should assume that your Connection instances will be used correctly, i.e., that they will be closed (connection.close() in a finally block) once their use is over (see, for instance, Connecting with DataSource Objects):
Connection con = null;
PreparedStatement stmt = null;
try {
    con = pool.getConnection();
    con.setAutoCommit(false);
    stmt = con.prepareStatement(...);
    stmt.setFloat(1, ...);
    stmt.setString(2, ...);
    stmt.executeUpdate();
    con.commit();
} catch (SQLException e) {
    if (con != null)
        con.rollback();
} finally {
    try {
        // Close the statement before the connection.
        if (stmt != null)
            stmt.close();
        if (con != null)
            con.close();
    } catch (SQLException e) {
        // ...
    }
}
And your pool's Connection implementation should recycle the underlying connection when it is closed.
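One common way to implement that recycling is a JDK dynamic proxy whose close() returns the connection to the pool instead of closing it. A minimal sketch, assuming the pool from the question makes its returnConnection method accessible:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;

public final class PooledConnections {

    public static Connection wrap(Connection raw, MyConnectionPool pool) {
        InvocationHandler handler = (proxy, method, args) -> {
            if ("close".equals(method.getName())) {
                pool.returnConnection(raw); // recycle instead of really closing
                return null;
            }
            return method.invoke(raw, args);
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                handler);
    }
}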
I second @lreeder's comment that you're really reinventing the wheel here; most connection pools already available are definitely fast enough for most purposes and have been fine-tuned over time. This also applies to embedded databases.
Disclaimer: this is just my opinion, but I have written custom connection pools before.
I find Java code where you have to create inner-class implementations a little clunky. However, with Java 8 lambdas or Scala anonymous functions this would be a clean design. I would probably also expose returnConnection() as a public method and allow callers to use it directly.
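For example, assuming the DBTaskIF from the question stays a single-method interface (declaring throws SQLException), a Java 8 caller could simply write:

connectionPool.executeDBTask(connection -> {
    try (PreparedStatement preStmt = connection.prepareStatement(Queries.A_QUERY)) {
        preStmt.setString(1, baseName);
        preStmt.executeUpdate();
    }
});

The try-with-resources also removes the manual finally block around the PreparedStatement.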
Third option: use a utility class that takes care of most of the administration.
Not only can forgetting to close a Connection cause trouble; forgetting to close a Statement or ResultSet can cause trouble too. This is similar to using various IO streams in a method: at some point you create an extra utility class in which you register all opened IO streams, so that if an error occurs you can call close on the utility class and be sure that all opened streams are closed.
Such a utility class will not cover all use cases, but there is always the option to write another one for other (more complex) use cases. As long as they keep the same kind of contract, using them should just make things easier (and will not feel forced). A rough sketch follows.
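This is only an illustrative sketch; the name and API are made up here and are not the linked implementation:

import java.util.ArrayDeque;
import java.util.Deque;

public final class DbResources implements AutoCloseable {

    // Resources are closed in reverse order of registration (LIFO).
    private final Deque<AutoCloseable> resources = new ArrayDeque<>();

    public <T extends AutoCloseable> T register(T resource) {
        resources.push(resource);
        return resource;
    }

    @Override
    public void close() {
        while (!resources.isEmpty()) {
            try {
                resources.pop().close();
            } catch (Exception e) {
                // Log and keep closing the remaining resources.
            }
        }
    }
}

A caller registers the Connection, Statement, and ResultSet as it opens them, and a single close() in a finally block (or try-with-resources) then closes them all.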
Wrapping or proxying a Connection to change the behavior of close() so that it returns the Connection to the pool is, in general, how connection pools prevent connections from actually being closed. But if a connection pool is not used, the application is usually written in a different manner: a connection (or two) is created at startup and used wherever a query is executed, and the connection is only closed when it is known that it will not be needed for a while (at shutdown). In contrast, when a pool is used, the connection is "closed" as soon as possible so that other processes can re-use it.
This, together with the option to use a utility class, made me decide NOT to wrap or proxy a connection, but instead let the utility class actually return the connection to the pool if a pool was used (i.e. not call connection.close() but call pool.release(connection)). A usage example of such a utility class is here; the utility class itself is here.
Proxying causes small delays, which is why, for example, BoneCP decided to wrap Connection and DataSource (wrapping causes very little overhead). The DataSource interface changes with each Java version (at least from 1.6 to 1.7), which means wrapper code will not compile with older/newer versions of Java. This made me decide to proxy the DataSource, because it is easier to maintain, even though it is not easy to set up (see the various proxy helper classes here). Proxying also has the drawback of making stack traces harder to read (which makes debugging harder) and sometimes makes exceptions disappear (I have seen this happen in JBoss where the underlying object threw a runtime exception from the constructor).
tl;dr If you make your own specialized pool, also deliver a utility class which makes it easy to use the pool and takes care of most of the administration that is required (like closing used resources) so that it is unlikely to be forgotten. If a utility class is not an option, wrapping or proxying is the standard way to go.
I have a question about writing unit tests for web methods which actually communicate with a database and return some value.
Say, for example, I have a web service named "StudentInfoService".
That web service provides an API getStudentInfo(studentId).
Here is a sample snippet:
public class StudentInfoService {

    public StudentInfo getStudentInfo(long studentId) {
        // Communicates with the DB, creates a StudentInfo object
        // with the necessary information, and returns it to the caller.
    }
}
How do we actually write unit tests for this getStudentInfo method?
In general, how do we write unit tests for methods which involve a connection to a resource (database, files, JNDI, etc.)?
Firstly, the class StudentInfoService in your example is not testable, or at least not easily. The reason is simple: there is no way to pass a database connection object into the class, at least not in the method you've listed.
Making the class testable would require you to build your class in the following manner:
public class StudentInfoService {

    private Connection conn;

    public StudentInfoService(Connection conn) {
        this.conn = conn;
    }

    public StudentInfo getStudentInfo(long studentId) {
        // Uses the conn object to communicate with the DB, creates a
        // StudentInfo object with the necessary information, and
        // returns it to the caller.
    }
}
The above code allows for dependency injection via the constructor. You may use setter injection instead of constructor injection if that is more suitable, but it usually isn't for DAO/Repository classes, as the class cannot be considered fully formed without a connection.
Dependency injection allows your test cases to create a connection to a database (which is a collaborator of your class/system under test) instead of having the class/system itself create its collaborator objects. In simpler words, you are decoupling the mechanism of establishing database connections from your class. If your class had looked up a JNDI data source and then created a connection itself, it would have been untestable unless you deployed it to a container using Apache Cactus or a similar framework like Arquillian, or used an embedded container. By isolating the concern of creating the connection from the class, you are now free to create connections in your unit tests outside the class and provide them to the class on an as-needed basis, allowing you to run tests in a Java SE environment.
This would enable you to use a database-oriented unit testing framework like DbUnit, which would allow you to setup the database in a known state before every test, then pass in the connection to the StudentInfoService class, and then assert the state of the class (and also the collaborator, i.e. the database) after the test.
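For illustration, a DbUnit-based test could look roughly like this. The H2 URL, the students.xml dataset, the sample id, and the getName() accessor are all assumptions for the sketch, and the schema is assumed to exist already:

import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class StudentInfoServiceTest {

    private Connection jdbcConnection;

    @Before
    public void setUp() throws Exception {
        // In-memory database keeps the test self-contained; the student
        // table is assumed to have been created already (e.g. by a script).
        jdbcConnection = DriverManager.getConnection("jdbc:h2:mem:students");
        IDatabaseConnection dbUnitConnection = new DatabaseConnection(jdbcConnection);
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/students.xml"));
        // Put the database into a known state before every test.
        DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);
    }

    @Test
    public void returnsStudentFromDataset() throws Exception {
        StudentInfoService service = new StudentInfoService(jdbcConnection);
        StudentInfo info = service.getStudentInfo(42L);
        assertEquals("Jane Doe", info.getName());
    }
}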
It must be emphasized that when you unit test your classes, your classes alone must be the systems under test. Items like connections and data sources are mere collaborators that could, and ought to, be mocked. Some projects use in-memory databases like H2, HSQL, or Derby for unit tests, and production-equivalent database installations for integration and functional testing.
Try http://www.dbunit.org/intro.html.
The main idea: create a stub database with a known dataset, run your tests against it, and assert the results.
You will need to reload the dataset before each run to restore the initial state.
We are using the in-memory HSQL database. It is very fast and SQL-92 compliant. In order to make our PostgreSQL queries run on HSQL, we rewrite the queries using a self-written test SessionFactory (Hibernate). Advantages over a real database are:
much faster, which is important for unit tests
requires no configuration
runs everywhere, including our continuous integration server
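For illustration, standing up such a database takes nothing more than a JDBC URL; the testdb alias and the table below are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HsqlMemExample {
    public static void main(String[] args) throws Exception {
        // "mem:" gives a private in-memory database; "testdb" is an arbitrary alias.
        try (Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
             Statement st = con.createStatement()) {
            st.execute("create table student (id bigint primary key, name varchar(100))");
        }
    }
}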
When working with "legacy code", it can be difficult to write unit tests without some level of refactoring. When writing objects, I try to adhere to SOLID. As part of SOLID, the "D" stands for dependency inversion.
The problem with legacy code is that you may already have numerous clients using the no-arg constructor of StudentInfoService, which can make it difficult to add a constructor that takes a Connection parameter.
What I would suggest isn't generally best practice, because you're exposing test code in your production system, but it is sometimes optimal for working with legacy code.
public class StudentInfoService {

    private final Connection conn;

    /**
     * This no-arg constructor will automatically establish a connection for
     * you. It will remain around to support legacy code that depends on a
     * no-arg constructor.
     */
    public StudentInfoService() throws Exception {
        conn = new ConcreteConnectionObject( ... );
    }

    /**
     * This constructor may be used by your unit tests (or new code).
     */
    public StudentInfoService(Connection conn) {
        this.conn = conn;
    }

    public StudentInfo getStudentInfo(long studentId) {
        // This method will need to be slightly refactored to use the class
        // variable "conn" instead of establishing its own connection inline.
    }
}