We're currently using a PostgreSQL database and OrmLite. We now have a use case for a Postgres hstore column, but can't find any way of accessing that table through OrmLite. I'd prefer to avoid opening a separate database connection just to select and insert to that one table, but I'm not seeing any other options.
At the very least I'd like a handle to the existing connection OrmLite is using so I can reuse it to build a prepared statement, but I haven't found a way to get a java.sql.Connection starting from an OrmLite ConnectionSource.
I see that OrmLite has a JdbcCompiledStatement, but that's just a wrapper around a PreparedStatement and requires the PreparedStatement to be passed in to the constructor. (Not sure what the use case for that is.)
I've tried to use DatabaseConnection.compileStatement(...), but that requires knowledge of the field types being used and OrmLite doesn't seem to know what an hstore is.
I've tried to use updateRaw(), but that function only exists on an OrmLite dao that I don't have because the table I would link the dao to has a field type OrmLite doesn't recognize. Is there some way to get a generic dao to issue raw queries?
I get that hstores are database specific and probably won't be supported by OrmLite, but I'd really like to find a way to transfer data to and from the database using unsupported fields instead of just unsupported queries.
It sounds like ConnectionSource may actually be implemented by JdbcConnectionSource, and will likely return a JdbcDatabaseConnection. That object has a getInternalConnection method that looks like what you are looking for.
@Gray I submitted an ORMLite patch on SourceForge that enables the "Other" data type. The patch ID is 3566779. With this patch, it's possible to support hstores.
Users will need to add the PGHStore class to their projects. The code for this class is here.
Users will also need to add a persister class as shown here:
package com.mydomain.db.persister;

import com.mydomain.db.PGHStore;
import com.j256.ormlite.field.FieldType;
import com.j256.ormlite.field.SqlType;
import com.j256.ormlite.field.types.BaseDataType;
import com.j256.ormlite.support.DatabaseResults;

import java.sql.SQLException;

public class PGHStorePersister extends BaseDataType {

    private static final PGHStorePersister singleton = new PGHStorePersister();

    public static PGHStorePersister getSingleton() {
        return singleton;
    }

    protected PGHStorePersister() {
        super(SqlType.OTHER, new Class<?>[] { PGHStore.class });
    }

    protected PGHStorePersister(SqlType sqlType, Class<?>[] classes) {
        super(sqlType, classes);
    }

    @Override
    public Object parseDefaultString(FieldType fieldType, String defaultStr) throws SQLException {
        return new PGHStore(defaultStr);
    }

    @Override
    public Object resultToSqlArg(FieldType fieldType, DatabaseResults results, int columnPos) throws SQLException {
        return results.getString(columnPos);
    }

    @Override
    public Object sqlArgToJava(FieldType fieldType, Object sqlArg, int columnPos) throws SQLException {
        return new PGHStore((String) sqlArg);
    }

    @Override
    public boolean isAppropriateId() {
        return false;
    }
}
Lastly, users will need to annotate their data to use the persister.
@DatabaseField(columnName = "myData", persisterClass = PGHStorePersister.class)
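For context, a field using this persister might be declared like the following sketch. The entity and table names here are illustrative, not from the original post:

```java
import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.table.DatabaseTable;
import com.mydomain.db.PGHStore;
import com.mydomain.db.persister.PGHStorePersister;

// Hypothetical entity showing the persister wired to an hstore column.
@DatabaseTable(tableName = "my_table")
public class MyTable {

    @DatabaseField(generatedId = true)
    private long id;

    // Stored as a Postgres hstore, converted to/from PGHStore by the persister
    @DatabaseField(columnName = "myData", persisterClass = PGHStorePersister.class)
    private PGHStore myData;
}
```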
At the very least I'd like a handle to the existing connection OrmLite is using so I can reuse it to build a prepared statement...
Ok, that's pretty easy. As @jsight mentioned, the ORMLite ConnectionSource for JDBC is JdbcConnectionSource. When you get a connection from that class using connectionSource.getReadOnlyConnection(), you will get a DatabaseConnection that is really a JdbcDatabaseConnection and can be cast to it. There is a JdbcDatabaseConnection.getInternalConnection() method which returns the associated java.sql.Connection.
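As a sketch, the unwrapping looks like this. Note that getReadOnlyConnection() may take a table-name argument in newer OrmLite versions, and the table/SQL here are illustrative; remember to release the connection when done:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

import com.j256.ormlite.jdbc.JdbcDatabaseConnection;
import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.support.DatabaseConnection;

public class RawHstoreAccess {

    // Borrow OrmLite's underlying JDBC connection for a statement it can't model.
    public static void insertHstore(ConnectionSource connectionSource) throws Exception {
        DatabaseConnection dbConn = connectionSource.getReadOnlyConnection();
        try {
            Connection jdbcConn = ((JdbcDatabaseConnection) dbConn).getInternalConnection();
            PreparedStatement ps = jdbcConn.prepareStatement(
                    "INSERT INTO my_table (my_data) VALUES (?)");
            // ... bind an hstore value and execute ...
            ps.close();
        } finally {
            connectionSource.releaseConnection(dbConn);
        }
    }
}
```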
I've tried to use updateRaw(), but that function only exists on an OrmLite dao that I don't have ...
You really can use any DAO class to perform a raw function on any table. It is convenient to think of it as an unstructured update to a DAO object's table, but if you have any DAO, you can perform a raw update on any other table.
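For example (a sketch; accountDao stands in for whatever DAO you already have, and the table and SQL are illustrative):

```java
// Any DAO can issue raw SQL against any table, not just its own.
// updateRaw takes the statement plus string arguments for the '?' markers.
accountDao.updateRaw(
        "INSERT INTO my_hstore_table (id, my_data) VALUES (?, hstore(?, ?))",
        "1", "someKey", "someValue");
```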
find a way to transfer data to and from the database using unsupported fields instead of just unsupported queries
If you are using unsupported fields, then you are going to have to do it as a raw statement -- either SELECT or UPDATE. If you edit your post to show the raw statement you've tried, I can help more specifically.
I have one class named DBManager.java; this class implements the singleton design pattern and is used for all DB operations.
This works perfectly when I have to connect to one data source. Now, in my project, I have to connect to two different data sources, and this class behaves incorrectly because it always returns a connection to the same data source.
How can I manage this in a better way? One approach would be to create another DBManager2.java class and use it for the second data source, but I don't think that's a good way.
Any recommendations?
Use a Map<Key, DataSource> to store data sources by key, then use a key object (database URL, database user, or some other identifier) to look up the corresponding data source.
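A minimal sketch of that idea (class and key names are illustrative):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.sql.DataSource;

// Singleton manager keyed by an identifier instead of hardcoding one source.
public final class DBManager {

    private static final DBManager INSTANCE = new DBManager();
    private final Map<String, DataSource> dataSources = new ConcurrentHashMap<String, DataSource>();

    private DBManager() {}

    public static DBManager getInstance() {
        return INSTANCE;
    }

    // Register each data source once at startup, e.g. "orders", "reporting".
    public void register(String key, DataSource ds) {
        dataSources.put(key, ds);
    }

    public Connection getConnection(String key) throws SQLException {
        DataSource ds = dataSources.get(key);
        if (ds == null) {
            throw new IllegalArgumentException("No data source registered for key: " + key);
        }
        return ds.getConnection();
    }
}
```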
One way is to create an enum with the different databases as different enum constants:
public enum Databases {
    DB1,
    DB2
}
And then use that in your DBManager.getConnection() method:
public final class DBManager {
    // singleton stuff

    public Connection getConnection(Databases d) {
        switch (d) {
            case DB1:
                // return connection to db1
            case DB2:
                // return connection to db2
        }
    }
}
By using a switch you can just create a new branch for every database.
Another way would be to store all the information needed for the connection in the enum itself, although that way there'd be security flaws, because you'd be hardcoding database credentials into your code (which should not be done).
I am facing a situation while accessing my DB layer from the Android code. I have my app project, and for the database I have created an internal library that takes care of DB operations.
I have an interface exposed from DB layer, which is implemented by the DB manager class in DB library.
The interface has methods related to common SQL operations, such as insert, select etc.
Now, when I am calling one of these methods to pass my data from the app to the DB library, I want to do this with objects. In my case I have a common base class from which all model classes inherit.
However, when I try to add a method in my DB interface which takes this base object as a parameter, Android Studio complains of a circular dependency.
For the time being, I may use a Map or some other data structure to send and receive data to and from my DB library. However, I want to solve this problem in a standard fashion.
I know this is related to the dependency inversion principle, but I am just not getting a hint on how I can loosen the coupling in this case by using abstractions.
Can someone please give me a hint on how to proceed?
Interface:
public interface DbItf {
    public void close();

    // For country table
    public Map<String, String> selectCtrs(Context m_context, String qry);
    long saveCtrList(Map<String, String> ctrMp, String qry, Context appContext);
}
Instead I want to do this:
public interface DbItf {
    public void close();

    // For country table
    public List<MyObject> selectCtrs(Context m_context, String qry);
    long saveCtrList(List<MyObject> ctrList, String qry, Context appContext);
}
My app project model classes accesses this interface in following way:
@Override
public long saveToLocal(String qry) {
    AppCtrl.getInstance().initDB();
    long retc = 0;
    retc = AppCtrl.getInstance().getQeeDbItf().saveCtrList(AppCtrl.getInstance().getCtrMp(), qry, AppCtrl.getInstance().getAppContext());
    return retc;
}
@Override
public void openFrmLocal(String qry) {
    AppCtrl.getInstance().initDB();
    Map<String, String> locMp = AppCtrl.getInstance().getQeeDbItf().selectCtrs(AppCtrl.getInstance().getAppContext(), qry);
    if (locMp.size() > 0) {
        Log.d("openFrmLocal", "" + String.valueOf(locMp.size()));
        AppCtrl.getInstance().setCtrMp(locMp);
    }
}
Thanks
I hope I have understood your problem / system architecture.
Anyway, just define a DataBaseObject interface, implemented by the objects you will write to the database and used by the database layer.
This way, you can import the interface from both classes/libraries, but only the DataBaseObject will have a direct pointer to the database, and not vice versa.
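A minimal sketch of that idea (names are illustrative, and Android's Context parameter is left out for brevity):

```java
import java.util.List;
import java.util.Map;

// Lives in the DB library (or a shared module). The DB layer depends only on
// this interface; the app's model classes implement it, so neither side needs
// a direct class reference to the other.
interface DataBaseObject {
    Map<String, String> toRow();           // entity -> column/value pairs
    void fromRow(Map<String, String> row); // column/value pairs -> entity
}

// The DB interface can then be written against the abstraction:
interface DbItf {
    void close();
    List<? extends DataBaseObject> selectCtrs(String qry);
    long saveCtrList(List<? extends DataBaseObject> items, String qry);
}
```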
There are countless questions here about how to solve the "could not initialize proxy" problem: via eager fetching, keeping the transaction open, opening another one, OpenEntityManagerInViewFilter, and so on.
But is it possible to simply tell Hibernate to ignore the problem and pretend the collection is empty? In my case, not fetching it before simply means that I don't care.
This is actually an XY problem with the following Y:
I'm having classes like
class Detail {
    @ManyToOne(optional = false) Master master;
    ...
}

class Master {
    @OneToMany(mappedBy = "master") List<Detail> details;
    ...
}
and want to serve two kinds of requests: One returning a single master with all its details and another one returning a list of masters without details. The result gets converted to JSON by Gson.
I've tried session.clear() and session.evict(master), but they don't touch the proxy used in place of details. What worked was
master.setDetails(nullOrSomeCollection)
which feels rather hacky. I'd prefer the "ignorance" as it'd be applicable generally without knowing what parts of what are proxied.
Writing a Gson TypeAdapter ignoring instances of AbstractPersistentCollection with initialized=false could be a way, but this would depend on org.hibernate.collection.internal, which is surely no good thing. Catching the exception in the TypeAdapter doesn't sound much better.
Update after some answers
My goal is not to "get the data loaded instead of the exception", but "how to get null instead of the exception"
Dragan raises a valid point that forgetting to fetch and returning wrong data would be much worse than an exception. But there's an easy way around it:
- do this for collections only
- never use null for them
- return null rather than an empty collection as an indication of unfetched data
This way, the result can never be wrongly interpreted. Should I ever forget to fetch something, the response will contain null which is invalid.
You could utilize Hibernate.isInitialized, which is part of the Hibernate public API.
So, in the TypeAdapter you can add something like this:
if ((value instanceof Collection) && !Hibernate.isInitialized(value)) {
    result = new ArrayList();
}
However, in my modest opinion your approach in general is not the way to go.
"In my case, not fetching it before simply means that I don't care."
Or it means you forgot to fetch it and now you are returning wrong data (worse than getting the exception; the consumer of the service thinks the collection is empty, but it is not).
I would not like to propose "better" solutions (it is not topic of the question and each approach has its own advantages), but the way that I solve issues like these in most use cases (and it is one of the ways commonly adopted) is using DTOs: Simply define a DTO that represents the response of the service, fill it in the transactional context (no LazyInitializationExceptions there) and give it to the framework that will transform it to the service response (json, xml, etc).
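As a sketch of that DTO approach (the DTO names are illustrative, and getters on the Master/Detail entities are assumed):

```java
import java.util.ArrayList;
import java.util.List;

// A plain shape filled inside the transaction; Gson then serializes this
// instead of the entity, so it never touches an uninitialized proxy.
public class MasterDto {

    public long id;
    public List<DetailDto> details; // null = not fetched, [] = fetched but empty

    public static MasterDto of(Master master, boolean withDetails) {
        MasterDto dto = new MasterDto();
        dto.id = master.getId();
        if (withDetails) {
            dto.details = new ArrayList<DetailDto>();
            for (Detail detail : master.getDetails()) {
                dto.details.add(DetailDto.of(detail));
            }
        }
        return dto;
    }
}
```

Note that this also matches the asker's convention: a null details list signals unfetched data, while an empty list means fetched and genuinely empty.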
What you can try is a solution like the following.
Create an interface named LazyLoader:
@FunctionalInterface // Java 8
public interface LazyLoader<T> {
    void load(T t);
}
And in your Service
public class Service {
    List<Master> getWithDetails(LazyLoader<Master> loader) {
        // Code to get masterList from session
        for (Master master : masterList) {
            loader.load(master);
        }
        return masterList;
    }
}
And call this service like below
service.getWithDetails(new LazyLoader<Master>() {
    public void load(Master master) {
        for (Detail detail : master.getDetails()) {
            detail.getId(); // This will load detail
        }
    }
});
And in Java 8 you can use Lambda as it is a Single Abstract Method (SAM).
service.getWithDetails((master) -> {
    for (Detail detail : master.getDetails()) {
        detail.getId(); // This will load detail
    }
});
You can use the solution above with session.clear and session.evict(master)
I have raised a similar question in the past (why a dependent collection isn't evicted when the parent entity is), and it resulted in an answer which you could try for your case.
The solution for this is to use queries instead of associations (one-to-many or many-to-many). Even one of the original authors of Hibernate said that Collections are a feature and not an end-goal.
In your case you can get better flexibility of removing the collections mapping and simply fetch the associated relations when you need them in your data access layer.
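A sketch of the query-based alternative (JPQL; identifiers are illustrative, and depending on your Hibernate version the untyped session.createQuery(...).list() form may be what's available instead):

```java
// Instead of navigating master.getDetails(), fetch details on demand:
List<Detail> details = session
        .createQuery("select d from Detail d where d.master.id = :id", Detail.class)
        .setParameter("id", masterId)
        .getResultList();
```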
You could create a Java proxy for every entity, so that every method is surrounded by a try/catch block that returns null when a LazyInitializationException is caught.
For this to work, all your entities would need to implement an interface and you'd need to reference this interface (instead of the entity class) all throughout your program.
If you can't (or just don't want to) use interfaces, then you could try to build a dynamic proxy with javassist or cglib, or even manually, as explained in this article.
If you go by common Java proxies, here's a sketch:
public static <T> T ignoringLazyInitialization(
        final Object entity, final Class<T> entityInterface) {

    return (T) Proxy.newProxyInstance(
            entityInterface.getClassLoader(),
            new Class[] { entityInterface },
            new InvocationHandler() {
                @Override
                public Object invoke(Object proxy, Method method, Object[] args)
                        throws Throwable {
                    try {
                        return method.invoke(entity, args);
                    } catch (InvocationTargetException e) {
                        Throwable cause = e.getTargetException();
                        if (cause instanceof LazyInitializationException) {
                            return null;
                        }
                        throw cause;
                    }
                }
            });
}
So, if you have an entity A as follows:
public interface A {
    // getters & setters and other method DEFINITIONS
}
with its implementation:
public class AImpl implements A {
    // getters & setters and other method IMPLEMENTATIONS
}
Then, assuming you have a reference to the entity class (as returned by Hibernate), you could create a proxy as follows:
AImpl entityAImpl = ...; // some query, load, etc
A entityA = ignoringLazyInitialization(entityAImpl, A.class);
NOTE 1: You'd need to proxy collections returned by Hibernate as well (left as an exercise for the reader) ;)
NOTE 2: Ideally, you should do all this proxying stuff in a DAO or in some type of facade, so that everything is transparent to the user of the entities
NOTE 3: This is by no means optimal, since it creates a stack trace for every access to a non-initialized field
NOTE 4: This works, but adds complexity; consider if it's really necessary.
I am attempting to use Unitils to assist me in database testing. I would like to use the Unitils/DBMaintain functionality for disabling constraints; however, there are a few problems with this. I do not wish to use DBMaintain to create my databases, but I do wish to use its constraint-disabling functionality. I was able to achieve this through the use of a custom module, listed below:
public class DisableConstraintModule implements Module {

    private boolean disableConstraints = false;

    public void afterInit() {
        if (disableConstraints) {
            DatabaseUnitils.disableConstraints();
        }
    }

    public void init(Properties configuration) {
        disableConstraints = PropertyUtils.getBoolean("Database.disableConstraints", false, configuration);
    }
}
This partially solves what I want however I wish to be able to only disable constraints for tables I will be using in my test. My tests will be running against a database with multiple schemas and each schema has hundreds of different tables. DatabaseUnitils.disableConstraints() disables the constraints for every table in every schema which would be far too time consuming and is unnecessary.
Upon searching the dbmaintain code I found that the Db2Database class does indeed contain a function for disabling constraints on a specific schema and table name basis; however, this method is protected. I could access it by either extending the Db2Database class or using reflection.
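For the reflection route, a sketch like the following could work. The method name and parameter list here are assumptions and must be checked against the dbmaintain version in use:

```java
import java.lang.reflect.Method;

// Invoke a protected per-schema/per-table disabling method reflectively.
// "disableReferentialConstraints" is a guess at the protected method's name.
Method m = db2Database.getClass().getDeclaredMethod(
        "disableReferentialConstraints", String.class, String.class);
m.setAccessible(true);
m.invoke(db2Database, "MY_SCHEMA", "MY_TABLE");
```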
Next I need to be able to determine which schemas and tables I am interested in. I could do this by observing the @DataSet annotation to determine which schemas and tables are important based on what is in the xml. In order to do this I need to override the TestListener so I can instruct it to disable the constraints using the xml before it attempts to insert the dataset. This was my attempt at this:
public class DisableConstraintModule extends DbUnitModule {

    private boolean disableConstraints = false;
    private TableBasedConstraintsDisabler disabler;

    public void afterInit() {
    }

    public void init(Properties configuration) {
        disableConstraints = PropertyUtils.getBoolean("Database.disableConstraints", false, configuration);
        PropertyUtils.getInstance("org.unitils.dbmaintainer.structure.ConstraintsDisabler.implClassName", configuration);
    }

    public void disableConstraintsForDataSet(MultiSchemaDataSet dataSet) {
        disabler.disableConstraints(dataSet);
    }

    protected class DbUnitCustomListener extends DbUnitModule.DbUnitListener {
        @Override
        public void beforeTestSetUp(Object testObject, Method testMethod) {
            disableConstraintsForDataSet(getDataSet(testMethod, testObject));
            insertDataSet(testMethod, testObject);
        }
    }
}
This is what I would like to do; however, I am unable to get the @DataSet annotation to trigger my DbUnitCustomListener, and instead it calls the default DbUnitModule DbUnitListener. Is there any way for me to override which listener gets called when using the @DataSet annotation, or is there a better approach altogether for disabling constraints on a specific schema and table level for a DB2 database?
Thanks
You have to tell Unitils to use your subclass of DbUnitModule. You do this using the unitils.module.dbunit.className property in your unitils.properties file. It sounds like you've got this part figured out.
The second part is to override DbUnitModule's getTestListener() in order to return your custom listener.
See this post for an example.
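A sketch of that override, assuming the DbUnitCustomListener inner class from the question (the exact visibility of getTestListener() may vary between Unitils versions):

```java
// In the custom module, return the custom listener so that @DataSet
// processing goes through it instead of the default DbUnitListener.
@Override
public TestListener getTestListener() {
    return new DbUnitCustomListener();
}
```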
(I am using MyBatis v3, Java SE v6, Tomcat v6 and Spring v3 all over Teradata v12.)
One of the technical requirements for my current project is to use the query banding feature in Teradata. This is done by running a statement like the following whenever required:
SET QUERY_BAND='someKey=someValue;' FOR TRANSACTION;
I want to have a query band for all of my calls. However, I am unsure how to add this functionality in a clean and reusable manner without having to add it to each of my <select> statements in my mapper file like the following:
<sql id="queryBand">
SET QUERY_BAND='k=v;' FOR TRANSACTION;
</sql>
<select ...>
<include refid="queryBand"/>
... some SQL performing a SELECT
</select>
My issues with the above are:
1) The format of the query band is identical across all my mapper XML files with the exception of k & v, which I would want to customise on a per <select> (etc.) basis. I'm not sure how I can do this customisation without having to pass in the k and v values, which muddies my mapper interface.
2) There is duplication in the above code that makes me uneasy. Developers have to remember to include the queryBand SQL, which someone will forget at some stage (Murphy's Law).
Can someone point me in the direction of the solution to implementing the query banding in a cleaner way?
The solution is to use a MyBatis Interceptor plug-in. For example, the following:
import java.sql.Connection;
import java.sql.Statement;
import java.util.Properties;

import org.apache.ibatis.executor.statement.StatementHandler;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Plugin;
import org.apache.ibatis.plugin.Signature;

@Intercepts({@Signature(
    type = StatementHandler.class,
    method = "prepare",
    args = { Connection.class })})
public class StatementInterceptor implements Interceptor {

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Connection conn = (Connection) invocation.getArgs()[0];
        Statement stmt = conn.createStatement();
        try {
            stmt.executeUpdate("SET QUERY_BAND = 'k=v;' FOR TRANSACTION;");
        } finally {
            stmt.close(); // don't leak the statement (the question targets Java 6, so no try-with-resources)
        }
        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }

    @Override
    public void setProperties(Properties properties) {}
}
Let's say that every SQL string should have a query band prepended. I would try to find a method inside MyBatis/Spring that produces the SQL. Using Spring's AOP, this method could be intercepted and its result combined with the query band and returned for further computation.
Finding a method to intercept can be hard, but not impossible. Download all the dependency sources and link them properly (using Maven this should be trivial; in Eclipse it's not that hard either), run the code in debug mode, and look for an appropriate method.