Using SET Statements with MyBatis

(I am using MyBatis v3, Java SE v6, Tomcat v6 and Spring v3 all over Teradata v12.)
One of the technical requirements for my current project is to use the query banding feature in Teradata. This is done by running a statement like the following whenever required:
SET QUERY_BAND='someKey=someValue;' FOR TRANSACTION;
I want to have a query band for all of my calls. However, I am unsure how to add this functionality in a clean and reusable manner without having to add it to each of my <select> statements in my mapper file like the following:
<sql id="queryBand">
SET QUERY_BAND='k=v;' FOR TRANSACTION;
</sql>
<select ...>
<include refid="queryBand"/>
... some SQL performing a SELECT
</select>
My issues with the above are:
1) The format of the query band is identical across all my mapper XML files with the exception of k & v, which I would want to customise on a per <select> (etc.) basis. I'm not sure how I can do this customisation without having to pass in the k and v values, which muddies my mapper interface.
2) There is duplication in the above code that makes me uneasy. Developers have to remember to include the queryBand SQL, which someone will forget at some stage (Murphy's Law).
Can someone point me in the direction of the solution to implementing the query banding in a cleaner way?

The solution is to use MyBatis Interceptor plug-ins. For example, the following:
import java.sql.Connection;
import java.sql.Statement;
import java.util.Properties;
import org.apache.ibatis.executor.statement.StatementHandler;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Plugin;
import org.apache.ibatis.plugin.Signature;
@Intercepts({@Signature(
        type = StatementHandler.class,
        method = "prepare",
        args = { Connection.class })})
public class StatementInterceptor implements Interceptor {

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Connection conn = (Connection) invocation.getArgs()[0];
        // Issue the query band on the same connection, closing the helper
        // statement before the intercepted prepare() proceeds.
        Statement stmt = conn.createStatement();
        try {
            stmt.executeUpdate("SET QUERY_BAND = 'k=v;' FOR TRANSACTION;");
        } finally {
            stmt.close();
        }
        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }

    @Override
    public void setProperties(Properties properties) {}
}
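For the plugin to take effect it must also be registered with MyBatis, e.g. in mybatis-config.xml (the package name below is a placeholder); with mybatis-spring it can instead be passed to the SqlSessionFactoryBean via its plugins property:
<plugins>
    <plugin interceptor="com.example.interceptor.StatementInterceptor"/>
</plugins>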

Let's say that every SQL string should have the query band prepended. I would try to find the method inside MyBatis/Spring that produces the final SQL. Using Spring's AOP, that method could be intercepted, its result combined with the query band, and the whole returned for further processing.
Finding a method to intercept can be hard, but not impossible. Download all the dependency sources and link them properly (using Maven this should be trivial; in Eclipse it is not that hard either), run the code in debug mode, and look for an appropriate method.
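As a rough illustration of the idea, a sketch (the pointcut target is hypothetical; whether two semicolon-joined statements can be sent in one string depends on the driver, and plain Spring AOP only advises Spring-managed beans, so weaving MyBatis internals would need full AspectJ):
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class QueryBandAspect {

    // Hypothetical pointcut: replace with whatever method your debugging
    // session identifies as the one producing the final SQL string.
    @Around("execution(String com.example.SomeSqlSource.getSql(..))")
    public Object prependQueryBand(ProceedingJoinPoint pjp) throws Throwable {
        String sql = (String) pjp.proceed();
        return "SET QUERY_BAND='k=v;' FOR TRANSACTION;" + sql;
    }
}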

Related

Can EJB TransactionTimeout be changed at runtime?

In an EJB3 container-managed bean, I want to be able to allow an extended timeout for nightly jobs.
How can I change TransactionTimeout setting for such use-cases?
Currently, code looks like this:
@TransactionTimeout(300)
public Result getResult() {
    //code goes here
}
Simply annotate the EJB method that is being executed within the transaction, as you noted above. My only suggestion is to be more explicit in terms of the units. In this example the timeout is one hour; the various TimeUnit.XXX enumerated values are available.
import org.jboss.ejb3.annotation.TransactionTimeout;
import java.util.concurrent.TimeUnit;
@TransactionTimeout(value = 1, unit = TimeUnit.HOURS)
public void doSomethingForALongTime() {
}

Fastest way to get ResultSet to private strings?

I have a class that gets run every time an action happens; for example, I log in and the User class gets run. This class is passed a ResultSet containing that particular user's information.
Now what I'm trying to accomplish is to take the result and split it into "class variables" (I believe they're called fields). I've tried the following:
public User(ResultSet resultSet) throws SQLException {
    this.username = resultSet.getString("username");
    this.firstname = resultSet.getString("firstname");
    // etc etc.
}
and that works, but since I have about two dozen elements in there, this would become a long list.
I also thought of a method where I'd loop through the results and check whether each one is, for example, a String. If it is, assign it to a variable whose name corresponds to the key. But that would only cut my problem in half, since I'd still need to declare all those variables.
I was wondering if there's a faster, perhaps more elegant way to tackle something like this.
Thanks.
You create a class for each type of information you retrieve from the database. You created a User class for user information. There's no shortcut where your database tables create Java objects, unless you use an ORM like Hibernate.
I have recently been working on a simple tool that does this. It accepts a Map<String, Object> and a class, and returns an instance of that class. Have a look at my repository for examples. If you use Maven, you can just add a dependency:
<dependency>
    <groupId>uk.co.jpawlak</groupId>
    <artifactId>map-to-object-converter</artifactId>
    <version>1.1</version>
</dependency>
This will of course require you to convert the ResultSet into a Map first; however, this is pretty simple and you will only have to write that code once.
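The conversion is a few lines of standard JDBC; a minimal sketch (the method name is mine):
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
...
public static Map<String, Object> rowToMap(ResultSet resultSet) throws SQLException {
    ResultSetMetaData meta = resultSet.getMetaData();
    Map<String, Object> row = new HashMap<String, Object>();
    // JDBC column indexes are 1-based
    for (int i = 1; i <= meta.getColumnCount(); i++) {
        row.put(meta.getColumnLabel(i), resultSet.getObject(i));
    }
    return row;
}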
There is nothing out of the box to help you accomplish something like that.
There are a couple of ways you could do it:
Using ORM would be the best option - if you are willing to spend the time and effort to configure and set up the framework and update your objects as necessary.
Using Java reflection along with ResultSetMetaData to map the resultSet directly to Objects would be another option. Something along these lines http://oprsteny.com/?p=900
If you are lucky enough to have the object's field names exactly match the SQL column names, as in your example, you could write something like this:
import java.lang.reflect.Field;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
...
private void getUserDetails(ResultSet resultSet) throws SQLException, NoSuchFieldException,
        SecurityException, IllegalArgumentException, IllegalAccessException {
    ResultSetMetaData rsmd = resultSet.getMetaData();
    while (resultSet.next()) {
        // JDBC column indexes are 1-based, so count down to 1, not 0
        for (int i = rsmd.getColumnCount(); i > 0; i--) {
            // *this* being the user object
            String colName = rsmd.getColumnName(i);
            Field field = this.getClass().getDeclaredField(colName);
            field.setAccessible(true); // allow writing to private fields
            field.set(this, resultSet.getObject(colName));
        }
    }
}

Is there any way to use OrmLite with Postgres hstores?

We're currently using a PostgreSQL database and OrmLite. We now have a use case for using a Postgres hstore, but can't find any way of accessing that table through OrmLite. I'd prefer to avoid opening a separate database connection just to select from and insert into that one table, but I'm not seeing any other options.
At the very least I'd like a handle to the existing connection OrmLite is using so I can reuse it to build a prepared statement, but I haven't found a way to get a java.sql.Connection starting from an OrmLite ConnectionSource.
I see that OrmLite has a JdbcCompiledStatement, but that's just a wrapper around a PreparedStatement and requires the PreparedStatement to be passed in to the constructor. (Not sure what the use case for that is.)
I've tried to use DatabaseConnection.compileStatement(...), but that requires knowledge of the field types being used and OrmLite doesn't seem to know what an hstore is.
I've tried to use updateRaw(), but that function only exists on an OrmLite dao that I don't have because the table I would link the dao to has a field type OrmLite doesn't recognize. Is there some way to get a generic dao to issue raw queries?
I get that hstores are database specific and probably won't be supported by OrmLite, but I'd really like to find a way to transfer data to and from the database using unsupported fields instead of just unsupported queries.
It sounds like your ConnectionSource may actually be implemented by JdbcConnectionSource, and will likely return a JdbcDatabaseConnection. That object has a getInternalConnection method that looks like what you are looking for.
@Gray I submitted an ORMLite patch on SourceForge that enables the "Other" data type. The patch ID is 3566779. With this patch, it's possible to support hstores.
Users will need to add the PGHStore class to their projects. The code for this class is here.
Users will also need to add a persister class as shown here:
package com.mydomain.db.persister;
import com.mydomain.db.PGHStore;
import com.j256.ormlite.field.FieldType;
import com.j256.ormlite.field.SqlType;
import com.j256.ormlite.field.types.BaseDataType;
import com.j256.ormlite.support.DatabaseResults;
import java.sql.SQLException;
public class PGHStorePersister extends BaseDataType {

    private static final PGHStorePersister singleton = new PGHStorePersister();

    public static PGHStorePersister getSingleton() {
        return singleton;
    }

    protected PGHStorePersister() {
        super(SqlType.OTHER, new Class<?>[] { PGHStore.class });
    }

    protected PGHStorePersister(SqlType sqlType, Class<?>[] classes) {
        super(sqlType, classes);
    }

    @Override
    public Object parseDefaultString(FieldType ft, String string) throws SQLException {
        return new PGHStore(string);
    }

    @Override
    public Object resultToSqlArg(FieldType fieldType, DatabaseResults results, int columnPos) throws SQLException {
        return results.getString(columnPos);
    }

    @Override
    public Object sqlArgToJava(FieldType fieldType, Object sqlArg, int columnPos) throws SQLException {
        return new PGHStore((String) sqlArg);
    }

    @Override
    public boolean isAppropriateId() {
        return false;
    }
}
Lastly, users will need to annotate their data to use the persister.
@DatabaseField(columnName = "myData", persisterClass = PGHStorePersister.class)
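In context, a hypothetical entity might look like this (table and field names are illustrative):
import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.table.DatabaseTable;
import com.mydomain.db.PGHStore;
import com.mydomain.db.persister.PGHStorePersister;

@DatabaseTable(tableName = "my_table")
public class MyEntity {

    @DatabaseField(generatedId = true)
    private int id;

    @DatabaseField(columnName = "myData", persisterClass = PGHStorePersister.class)
    private PGHStore myData;
}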
At the very least I'd like a handle to the existing connection OrmLite is using so I can reuse it to build a prepared statement...
Ok, that's pretty easy. As @jsight mentioned, the ORMLite ConnectionSource for JDBC is JdbcConnectionSource. When you get a connection from that class using connectionSource.getReadOnlyConnection(), you will get a DatabaseConnection that is really a JdbcDatabaseConnection and can be cast to it. There is a JdbcDatabaseConnection.getInternalConnection() method which returns the associated java.sql.Connection.
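Putting that together, a sketch (the JDBC URL and table are placeholders, and in some ORMLite versions getReadOnlyConnection() takes a table-name argument):
import java.sql.Connection;
import java.sql.PreparedStatement;
import com.j256.ormlite.jdbc.JdbcConnectionSource;
import com.j256.ormlite.jdbc.JdbcDatabaseConnection;
import com.j256.ormlite.support.DatabaseConnection;
...
JdbcConnectionSource connectionSource =
        new JdbcConnectionSource("jdbc:postgresql://localhost/mydb");
DatabaseConnection dbConn = connectionSource.getReadOnlyConnection();
Connection jdbcConn = ((JdbcDatabaseConnection) dbConn).getInternalConnection();
// from here on it is plain JDBC
PreparedStatement stmt = jdbcConn.prepareStatement(
        "INSERT INTO my_table (my_data) VALUES (?::hstore)");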
I've tried to use updateRaw(), but that function only exists on an OrmLite dao that I don't have ...
You really can use any DAO class to perform a raw statement against any table. It is convenient to think of a raw update as an unstructured update to the DAO object's own table, but if you have any DAO, you can perform a raw update on any other table.
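For example, a sketch (the Account DAO and the target table are illustrative):
import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.dao.DaoManager;
...
Dao<Account, Integer> accountDao = DaoManager.createDao(connectionSource, Account.class);
// a raw statement against a completely different table, issued through the account DAO
accountDao.updateRaw("INSERT INTO hstore_table (data) VALUES (?::hstore)", "k=>v");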
find a way to transfer data to and from the database using unsupported fields instead of just unsupported queries
If you are using unsupported fields, then you are going to have to do it as a raw statement -- either SELECT or UPDATE. If you edit your post to show the raw statement you've tried, I can help more specifically.

What is the best way to reset the database to a known state while testing database operations?

I'm writing tests with JUnit for some methods operating on a test database.
I need to reset the database to its original state after each @Test. I'm wondering what's the best way to do that.
Is there some method in the EntityManager? Or should I just delete everything manually or with an SQL statement? Would it be better to just drop and recreate the whole database?
One technique that I have used in the past is to recreate the database from scratch by simply copying the database from a standard 'test database', and using this in the tests.
This technique works if:
Your schema doesn't change much (otherwise it's a pain to keep in line)
You're using something like Hibernate, which is reasonably database independent.
This has the following advantages:
It works with code that manages its own transactions. My integration tests run under JUnit. For instance, when I'm testing a batch process I call Batch.main() from JUnit, and test stuff before and after. I wouldn't want to change the transaction processing in the code under test.
It's reasonably fast. If the files are small enough, then speed is not a problem.
It makes running integration tests on a CI server easy. The database files are checked in with the code. No need for a real database to be up and running.
And the following disadvantages:
The test database files need to be maintained along with the real database. If you're adding columns all of the time, this can be a pain.
There is code to manage the JDBC URLs, because they change for every test.
I use this with Oracle as the production/integration database and hsqldb as the test database. It works pretty well. hsqldb is a single file, so it is easy to copy.
So, in the @Before, using hsqldb, you copy the file to a location such as target/it/database/name_of_test.script. This is picked up by the test.
In the @After, you delete the file (or just leave it, who cares). With hsqldb, you'll need to do a SHUTDOWN as well, so that you can delete the file.
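A minimal sketch of that @Before/@After pair, assuming HSQLDB file paths and a JDBC URL of my own choosing:
import java.io.File;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.After;
import org.junit.Before;

public class DatabaseResetTest {

    private static final File PRISTINE = new File("src/test/resources/name_of_test.script");
    private static final File WORKING = new File("target/it/database/name_of_test.script");

    @Before
    public void copyPristineDatabase() throws Exception {
        WORKING.getParentFile().mkdirs();
        Files.copy(PRISTINE.toPath(), WORKING.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }

    @After
    public void shutDownAndDelete() throws Exception {
        // HSQLDB keeps the files open until it receives SHUTDOWN
        Connection conn = DriverManager.getConnection("jdbc:hsqldb:file:target/it/database/name_of_test");
        conn.createStatement().execute("SHUTDOWN");
        conn.close();
        WORKING.delete();
    }
}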
You can also use a @Rule which extends from ExternalResource, which is a better way to manage your resources.
One other tip: if you're using Maven or something like it, you can create the database in target. I use target/it. This way, the copies of the databases get removed when I do mvn clean. For my batches, I actually copy all of my other properties files etc. into this directory as well, so I don't get files appearing in strange places either.
The easiest way is simply rolling back all changes after each test. This requires a transactional RDBMS and a custom test runner (or similar) that wraps each test in its own transaction. Spring's AbstractTransactionalJUnit4SpringContextTests does exactly that.
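A minimal sketch (the context location, script and table names are placeholders):
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

@ContextConfiguration("classpath:test-context.xml")
public class UserDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    public void insertIsRolledBackAfterTheTest() {
        executeSqlScript("classpath:insert-test-user.sql", false);
        // visible inside this test's transaction...
        assertEquals(1, countRowsInTable("users"));
        // ...but rolled back automatically when the test ends
    }
}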
DBUnit can reset your database between tests and even fill it with predefined test data.
I am answering this more for my own reference, but here goes. The answer assumes a per-developer SQL Server DB.
Basic approach
Use DBUnit to store an XML file of the known state. You can extract this file once you've set up the DB, or you can create it from scratch. Put this file in your version control along with scripts that call DBUnit to populate the DB with it.
In your tests, call the aforementioned scripts using @Before.
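A sketch of such a @Before with the standard DBUnit API (the data set file name and the JDBC connection lookup are placeholders):
import java.io.FileInputStream;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.Before;
...
@Before
public void resetDatabase() throws Exception {
    // getJdbcConnection() is a placeholder for however you obtain a java.sql.Connection
    IDatabaseConnection connection = new DatabaseConnection(getJdbcConnection());
    IDataSet knownState = new FlatXmlDataSetBuilder().build(new FileInputStream("known-state.xml"));
    DatabaseOperation.CLEAN_INSERT.execute(connection, knownState);
}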
Speedup 1
Once this is working, tweak the approach to speed things up. Here's an approach for a SQL Server DB.
Before DBUnit, totally wipe out the DB:
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
EXEC sp_MSforeachtable 'ALTER TABLE ? DISABLE TRIGGER ALL';
EXEC sp_MSforeachtable 'SET QUOTED_IDENTIFIER ON SET ANSI_NULLS ON DELETE FROM ?';
After DBUnit, restore the constraints
EXEC sp_MSforeachtable 'ALTER TABLE ? CHECK CONSTRAINT ALL';
EXEC sp_MSforeachtable 'ALTER TABLE ? ENABLE TRIGGER ALL';
Speedup 2
Use SQL Server's RESTORE functionality. In my tests, this runs in 25% of the time DBUnit takes. If (and only if) this is a major factor in your test duration, it's worth investigating this approach.
The following classes show an implementation using Spring JDBC, jTDS, and CDI injection. This is designed for in-container tests, where the container may be making its own connections to the DB that need to be stopped.
import java.io.File;
import java.sql.SQLException;
import javax.inject.Inject;
import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;
/**
 * Allows the DB to be reset quickly using SQL RESTORE, at the price of
 * additional complexity. Recommend vanilla DBUnit unless speed is
 * necessary.
 *
 * @author aocathain
 */
@SuppressWarnings({ "PMD.SignatureDeclareThrowsException" })
public abstract class DbResetterSO {

    protected final Logger logger = LoggerFactory.getLogger(getClass());

    /**
     * Deliberately created in the target dir, so that on mvn clean it is
     * deleted and will be recreated.
     */
    private final File backupFile = new File(
            "target\\test-classes\\db-backup.bak");

    @Inject
    private OtherDbConnections otherDbConnections;

    /**
     * Backs up the database, if a backup doesn't exist.
     *
     * @param masterDataSource
     *            a datasource with sufficient rights to do BACKUP DATABASE. It
     *            must not be connected to the database being backed up, so
     *            should have db master as its default db.
     */
    public void backup(final DataSource masterDataSource) throws Exception {
        final JdbcTemplate masterJdbcTemplate = new JdbcTemplate(masterDataSource);
        if (backupFile.exists()) {
            logger.debug("File {} already exists, not backing up", backupFile);
        } else {
            otherDbConnections.start();
            setupDbWithDbUnit();
            otherDbConnections.stop();
            logger.debug("Backing up");
            masterJdbcTemplate.execute("BACKUP DATABASE [" + getDbName()
                    + "] TO DISK ='" + backupFile.getAbsolutePath() + "'");
            logger.debug("Finished backing up");
            otherDbConnections.start();
        }
    }

    /**
     * Restores the database.
     *
     * @param masterDataSource
     *            a datasource with sufficient rights to do RESTORE DATABASE. It
     *            must not be connected to the database being restored, so
     *            should have db master as its default db.
     */
    public void restore(final DataSource masterDataSource) throws SQLException {
        final JdbcTemplate masterJdbcTemplate = new JdbcTemplate(masterDataSource);
        if (!backupFile.exists()) {
            throw new IllegalStateException(backupFile.getAbsolutePath()
                    + " must have been created already");
        }
        otherDbConnections.stop();
        logger.debug("Setting to single user");
        masterJdbcTemplate.execute("ALTER DATABASE [" + getDbName()
                + "] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;");
        logger.info("Restoring");
        masterJdbcTemplate.execute("RESTORE DATABASE [" + getDbName()
                + "] FROM DISK ='" + backupFile.getAbsolutePath()
                + "' WITH REPLACE");
        logger.debug("Setting to multi user");
        masterJdbcTemplate.execute("ALTER DATABASE [" + getDbName()
                + "] SET MULTI_USER;");
        otherDbConnections.start();
    }

    /**
     * @return Name of the DB on the SQL Server instance
     */
    protected abstract String getDbName();

    /**
     * Sets up the DB to the required known state. Can be slow, since it's only
     * run once, during the initial backup. Can use the DB connections from
     * otherDbConnections.
     */
    protected abstract void setupDbWithDbUnit() throws Exception;
}
import java.sql.SQLException;

/**
 * To SQL RESTORE the db, all other connections to that DB must be stopped.
 * Implementations of this interface must have control of all other connections.
 *
 * @author aocathain
 */
public interface OtherDbConnections {

    /**
     * Restarts all connections.
     */
    void start() throws SQLException;

    /**
     * Stops all connections.
     */
    void stop() throws SQLException;
}
import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.enterprise.inject.Produces;
import javax.inject.Named;
import javax.inject.Singleton;
import javax.sql.DataSource;
import net.sourceforge.jtds.jdbcx.JtdsDataSource;
import org.springframework.jdbc.datasource.DelegatingDataSource;
import org.springframework.jdbc.datasource.SingleConnectionDataSource;
/**
 * Implements OtherDbConnections for the DbResetter and provides the DataSource
 * during in-container tests.
 *
 * @author aocathain
 */
@Singleton
@SuppressWarnings({ "PMD.AvoidUsingVolatile" })
public abstract class ResettableDataSourceProviderSO implements OtherDbConnections {

    private volatile Connection connection;
    private volatile SingleConnectionDataSource scds;
    private final DelegatingDataSource dgds = new DelegatingDataSource();

    @Produces
    @Named("in-container-ds")
    public DataSource resettableDatasource() throws SQLException {
        return dgds;
    }

    @Override
    @PostConstruct
    public void start() throws SQLException {
        final JtdsDataSource ds = new JtdsDataSource();
        ds.setServerName("localhost");
        ds.setDatabaseName(dbName());
        connection = ds.getConnection(username(), password());
        scds = new SingleConnectionDataSource(connection, true);
        dgds.setTargetDataSource(scds);
    }

    protected abstract String password();

    protected abstract String username();

    protected abstract String dbName();

    @Override
    @PreDestroy
    public void stop() throws SQLException {
        if (null != connection) {
            scds.destroy();
            connection.close();
        }
    }
}
It's an old topic, I know, but times have changed in the last ten years ;)
A solution I like a lot is to create a pre-patched Docker image and create a container from it with Testcontainers. It may take a moment to start the container (not that much time), but this way you are able to run all tests in parallel, because it's up to you how many database instances you want to use (I use one per CPU, dynamically), which speeds up the whole test suite a lot.
The nice thing here is that if your application relies on other dependencies (e.g. other servers such as SSH, FTP, LDAP, REST services, whatever), you can deal with them in the same way.
In addition, you can of course combine this solution with any of the other solutions to speed the whole thing up a little more.
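For illustration, a sketch with the Testcontainers JUnit 5 extension and its PostgreSQL module (the image tag is only an example; point it at your own pre-patched image instead):
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class RepositoryIT {

    // one fresh, disposable database per test class (or per test, if non-static)
    @Container
    private static final PostgreSQLContainer<?> POSTGRES =
            new PostgreSQLContainer<>("postgres:15-alpine");

    @Test
    void connectsToFreshDatabase() {
        String jdbcUrl = POSTGRES.getJdbcUrl();
        // ... point your DataSource / migrations at jdbcUrl
    }
}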

Using Dynamic Proxies to centralize JPA code

Actually, this is not a question, but I really need your opinions on a matter...
I put this post here because I know you are always active, so please don't consider this a bad question, and share your opinions with me.
I've used Java dynamic proxies to centralize the JPA code that I use in standalone mode. Here's the dynamic proxy code:
package com.forat.service;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;
import com.forat.service.exceptions.DAOException;
/**
 * Example of usage:
 * <pre>
 * OnlineFromService onfromService =
 *         (OnlineFromService) DAOProxy.newInstance(new OnlineFormServiceImpl());
 * try {
 *     Student s = new Student();
 *     s.setName("Mohammed");
 *     s.setNationalNumber("123456");
 *     onfromService.addStudent(s);
 * } catch (Exception ex) {
 *     System.out.println(ex.getMessage());
 * }
 * </pre>
 *
 * @author mohammed hewedy
 */
public class DAOProxy implements InvocationHandler {

    private Object object;
    private Logger logger = Logger.getLogger(this.getClass().getSimpleName());

    private DAOProxy(Object object) {
        this.object = object;
    }

    public static Object newInstance(Object object) {
        return Proxy.newProxyInstance(object.getClass().getClassLoader(),
                object.getClass().getInterfaces(), new DAOProxy(object));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        EntityManagerFactory emf = null;
        EntityManager em = null;
        EntityTransaction et = null;
        Object result = null;
        try {
            emf = Persistence.createEntityManagerFactory(Constants.UNIT_NAME);
            em = emf.createEntityManager();
            Method entityManagerSetter = object.getClass()
                    .getDeclaredMethod(Constants.ENTITY_MANAGER_SETTER_METHOD, EntityManager.class);
            entityManagerSetter.invoke(object, em);
            et = em.getTransaction();
            et.begin();
            result = method.invoke(object, args);
            et.commit();
            return result;
        } catch (Exception ex) {
            // guard: the failure may occur before the transaction is obtained or started
            if (et != null && et.isActive()) {
                et.rollback();
            }
            Throwable cause = ex.getCause();
            logger.log(Level.SEVERE, cause.getMessage());
            if (cause instanceof DAOException)
                throw new DAOException(cause.getMessage(), cause);
            else
                throw new RuntimeException(cause.getMessage(), cause);
        } finally {
            if (em != null) {
                em.close();
            }
            if (emf != null) {
                emf.close();
            }
        }
    }
}
And here's the link that contains more info: http://m-hewedy.blogspot.com/2010/04/using-dynamic-proxies-to-centralize-jpa.html
So, please give me your opinions.
Thanks.
So you've encapsulated the transaction demarcation logic in one place and use a dynamic proxy to enhance existing services with transaction management and reduce boilerplate code, right?
That sounds rather OK to me. Actually, what containers such as Spring or EJB do when we speak of declarative transaction demarcation is very similar. Implementation-wise, you can do it with a dynamic proxy, byte code instrumentation, or even AspectJ. I did something very similar for a tiny testing framework once. Here is a blog post about it.
The tricky parts that I see are:
1) Rollback only. As per the JPA spec, an entity transaction can be flagged as "rollback only". Such a transaction can never commit. So I feel like you should check for that between these two lines:
result = method.invoke(object, args);
et.commit();
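A sketch of that check:
result = method.invoke(object, args);
if (et.getRollbackOnly()) {
    // a rollback-only transaction must never commit
    et.rollback();
} else {
    et.commit();
}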
2) Re-entrancy. Most systems that have declarative transactions implement semantics in which a transaction is started only if there isn't one already active (see "Required" in this list of EJB annotations). It looks like you should check for that in your logic with isActive.
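A sketch, assuming the EntityManager (and hence its transaction) is shared across nested calls:
boolean startedHere = !et.isActive();
if (startedHere) {
    et.begin();
}
result = method.invoke(object, args);
if (startedHere) {
    et.commit();
}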
3) Exception handling. Be very careful with exception propagation in a dynamic proxy. The proxy is supposed to be as transparent as possible for the client. If an exception other than DAOException leaks out of the DAO, the proxy will transform it into a RuntimeException, which doesn't sound optimal to me. Also, don't confuse an exception thrown because invoke itself failed with an exception wrapped by the invocation, which I think you should re-throw as-is:
catch (InvocationTargetException e) {
    Throwable nested = e.getTargetException();
    throw nested;
}
Conclusion: the idea of using a dynamic proxy in this scenario sounds OK to me. But I suspect there are a few things to double-check in your code (I don't remember all the details of the JPA spec and exception handling with dynamic proxies, but there are some tricky cases). This kind of code can hide subtle bugs, so it's worth taking the time to make it bullet-proof.
I've used something similar in the past, but coded to the Hibernate API (this was pre-JPA). Data access for most types of DAO was managed by an interface named after the object type, e.g. CustomerPersistence for managing Customer instances. Methods such as findXXX mapped to named queries, with parameter names in the method mapped to parameters in the query.
The implementations of the interfaces were proxies, which used the interface name, method names, parameter names etc. to invoke the appropriate methods in the Hibernate API.
It saves a lot of boilerplate coding, with an intuitive mapping to the underlying data access framework, and makes for very easy mocking of the data access layer.
So, I'm definitely "thumbs up" on using proxies.
