The variable in the query in my MyBatis mapper is not being replaced.
In the case of "select" queries everything is fine, but I need to be able to create a table by name.
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Update;
import org.springframework.stereotype.Repository;
@Mapper
@Repository
public interface TableMapper {
@Update("""
create table if not exists #{name}
(table_name varchar(50) NOT NULL,
update_date varchar(20) NOT NULL)
""")
void createTableByName(@Param("name") String name);
}
What am I doing wrong? Or is this approach simply not possible with MyBatis?
In the logs:
The error occurred while setting parameters
SQL: create table if not exists ? (table_name varchar(50) NOT NULL, update_date varchar(20) NOT NULL)
The variable passed to the method is neither empty nor null.
The list of columns in the table itself is not important; only the ability to create a table with a given name matters.
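From the logged SQL it looks like #{} is turned into a JDBC bind parameter (the ?), and as far as I know identifiers such as table names cannot be supplied that way. Would switching to ${}, which does plain string substitution, be the right approach? A rough sketch of what I mean (assuming the name is validated elsewhere, since ${} is interpolated directly into the SQL):
@Update("""
        create table if not exists ${name}
        (table_name varchar(50) NOT NULL,
        update_date varchar(20) NOT NULL)
        """)
void createTableByName(@Param("name") String name);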
I'm using Gradle and https://github.com/etiennestuder/gradle-jooq-plugin, configured like this:
jooq {
version = '3.10.6'
foo(sourceSets.main) {
jdbc {
url = "jdbc:sqlite:${projectDir.absolutePath}/foo.sqlite"
}
}
}
I have the following dependencies of interest
jooqRuntime 'org.xerial:sqlite-jdbc:3.21.0.1'
compile 'org.xerial:sqlite-jdbc:3.21.0.1'
My foo DB contains the following
CREATE TABLE tree (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
grove_id INTEGER,
type INTEGER NOT NULL,
latitude FLOAT NOT NULL,
longitude FLOAT NOT NULL,
FOREIGN KEY (grove_id) REFERENCES grove (id)
ON DELETE CASCADE
ON UPDATE CASCADE
);
CREATE TABLE grove (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT
);
When creating a new tree client-side, I want to persist it in my SQLite DB using the 'record' way. So I'm doing
DSLContext dslContext = DslContext.get("jdbc:sqlite:"+path_to_sqlite_file);
TreeRecord treeRecord = dslContext.newRecord(TREE);
treeRecord.setGroveId(10);
treeRecord.setType(1);
treeRecord.setLatitude(0.1);
treeRecord.setLongitude(0.2);
treeRecord.store(); // this works, when inspecting the SQLite file afterwards
treeRecord.getId(); // this is null, even though the DB correctly has an ID value assigned.
I didn't find anything saying that this kind of feature is not supported by jOOQ on a SQLite DB. Is it?
Ok, the problem was due to my DslContext singleton, which was:
import org.jooq.Configuration;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DataSourceConnectionProvider;
import org.jooq.impl.DefaultConfiguration;
import org.sqlite.SQLiteDataSource;
public class DslContext {
private static org.jooq.DSLContext INSTANCE;
public static org.jooq.DSLContext get(String dbUrl) {
if (INSTANCE == null) {
INSTANCE = instantiate(dbUrl);
}
return INSTANCE;
}
private static org.jooq.DSLContext instantiate(String dbUrl) {
SQLiteDataSource ds = new SQLiteDataSource();
ds.setUrl(dbUrl);
Configuration configuration = new DefaultConfiguration()
.set(SQLDialect.SQLITE)
.set(new DataSourceConnectionProvider(ds));
return DSL.using(configuration);
}
private DslContext() {
}
}
Not sure why this was not working.
I fell back to using DSL.using("jdbc:sqlite:"+path_to_sqlite_file); in my code snippet instead of DslContext.get("jdbc:sqlite:"+path_to_sqlite_file);
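For reference, the working version of the snippet looks roughly like this (just a sketch of my earlier code with the singleton removed; TREE and TreeRecord are the classes produced by the jOOQ generator):
// Build the DSLContext straight from the JDBC URL instead of going through the singleton
DSLContext ctx = DSL.using("jdbc:sqlite:" + path_to_sqlite_file);
TreeRecord treeRecord = ctx.newRecord(TREE);
treeRecord.setGroveId(10);
treeRecord.setType(1);
treeRecord.setLatitude(0.1);
treeRecord.setLongitude(0.2);
treeRecord.store();
treeRecord.getId(); // now returns the generated key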
I have a mysql table like this:
CREATE TABLE `sezione_menu` (
`id_sezione_menu` int(11) unsigned NOT NULL AUTO_INCREMENT,
`nome` varchar(256) NOT NULL DEFAULT '',
`ordine` int(11) DEFAULT NULL,
PRIMARY KEY (`id_sezione_menu`)
)ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8;
I use apache dbutils to query my database, with these methods:
public static List<SezioneMenu> getSezioniMenu() {
String sql = "SELECT * FROM sezione_menu";
try {
QueryRunner qr = new QueryRunner(createDataSource());
ResultSetHandler rsh = new BeanListHandler(SezioneMenu.class);
List<SezioneMenu> sezioni = (List<SezioneMenu>)qr.query(sql, rsh);
return sezioni;
} catch (SQLException e) {
e.printStackTrace();
}
return null;
}
private static DataSource createDataSource() {
BasicDataSource d = new BasicDataSource();
d.setDriverClassName(DRIVER);
d.setUsername(USERNAME);
d.setPassword(PASSWORD);
d.setUrl(DB_URL);
return d;
}
Now, if I run my application, it doesn't throw an exception, but some fields (not all!) of my Java bean SezioneMenu are empty (integer fields equal zero and string fields are empty).
This also happens with other tables and beans.
I used this method in the past in another system configuration without problems.
You can fix it in two ways:
As per the DbUtils docs:
Alias the column names in the SQL so they match the Java names: select social_sec# as socialSecurityNumber from person
Subclass BeanProcessor and override the mapColumnsToProperties() method to strip out the offending characters.
If you have a class like this:
public class SezioneMenuBean implements Serializable {
private int idSezioneMenu;
private String nome;
private int ordine;
public SezioneMenuBean() {
}
// Getters and setters for bean values
}
As per the first solution, write your queries like this: SELECT id_sezione_menu AS idSezioneMenu, nome, ordine FROM sezione_menu.
Or
Based on the second solution, you can use GenerousBeanProcessor, a subclass of BeanProcessor that ignores underscores and case in column names, so you don't have to implement your own custom BeanProcessor.
GenerousBeanProcessor is available since version 1.6 of commons-dbutils.
Usage:
// Initialize with a DataSource, e.g. the createDataSource() shown in the question
QueryRunner queryRunner = new QueryRunner(createDataSource());
ResultSetHandler<List<SezioneMenuBean>> resultSetHandler =
new BeanListHandler<SezioneMenuBean>(SezioneMenuBean.class, new BasicRowProcessor(new GenerousBeanProcessor()));
// best practice is specifying only required columns in the query
// SELECT id_sezione_menu, nome, ordine FROM sezione_menu
final List<SezioneMenuBean> sezioneMenuBeans = queryRunner.query("SELECT * FROM sezione_menu", resultSetHandler);
for (SezioneMenuBean sezioneMenuBean : sezioneMenuBeans) {
System.out.println(sezioneMenuBean.getIdSezioneMenu());
}
I faced the same issue of BeanHandler/BeanListHandler returning null or 0 for database columns.
As mentioned by @aelfric5578 in the comments, once I updated the bean class to use the same names as the database columns, DbUtils returned the values correctly.
Defining the bean class like this will solve your problem:
public class SezioneMenuBean{
int id_sezione_menu;
String nome;
int ordine;
public SezioneMenuBean(){
}
// Getters and setters for bean values
}
In a web app, a Hibernate Criteria query is taking too long against an Oracle DB. I enabled log4j.logger.org.hibernate.SQL=debug, took the generated SQL, and ran it with the same bind variables in SQL*Plus, where the result is instant. I have also enabled Hibernate logging and gone through the logs. What could cause Hibernate to take so long to run a query? Any suggestions?
Update 1:
It appears that Oracle uses a different execution plan when I run the same query that Hibernate generates through SQL*Plus.
SQL Query From Hibernate:
select count(*) as y0_ from SUMMARY_VIEW this_ where this_.ser_id like :1 and this_.TYPE=:2 and this_.TIME_LOCAL>=:3 and this_.TIME_LOCAL<=:4
SQL Query Run on SQLPlus:
select count(*) as y0_ from SUMMARY_VIEW this_ where this_.ser_id like :ser_id and this_.TYPE=:type and this_.TIME_LOCAL>=:startdate and this_.TIME_LOCAL<=:enddate
Update 2:
Further investigation revealed that the startdate and enddate bind variables are passed as VARCHAR2 from SQL*Plus but as TIMESTAMP from the app ( :) ). Because of this, the execution plans are different.
select sql_text, v.sql_id, name, value_string, datatype_string from v$sql_bind_capture vbc join v$sql v using (hash_value) where v.sql_id in (?)
Does the bind variable type affect the execution plan? If so, is there any other way to pass the date variables as bind parameters to the query?
Update 3:
The performance issue is due to incompatible data types. It appears that the mismatch between the column data type (DATE) and the Hibernate data type (TIMESTAMP) causes an implicit data type conversion: Oracle wraps the DATE column in INTERNAL_FUNCTION to convert it to the TIMESTAMP type of the bind variable passed by Hibernate.
Similar issues :
Non-negligible execution plan difference with Oracle when using jdbc Timestamp or Date
Why is Oracle so slow when I pass a java.sql.Timestamp for a DATE column?
If you're using Hibernate, you can solve this problem with the solution that is indicated here:
http://blog.jooq.org/2014/12/29/leaky-abstractions-or-how-to-bind-oracle-date-correctly-with-hibernate/
Essentially, you'll need to use a UserType:
import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.sql.Types;
import java.util.Objects;
import oracle.sql.DATE;
import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.usertype.UserType;
public class OracleDate implements UserType {
@Override
public int[] sqlTypes() {
return new int[] { Types.TIMESTAMP };
}
@Override
public Class<?> returnedClass() {
return Timestamp.class;
}
@Override
public Object nullSafeGet(
ResultSet rs,
String[] names,
SessionImplementor session,
Object owner
)
throws SQLException {
return rs.getTimestamp(names[0]);
}
@Override
public void nullSafeSet(
PreparedStatement st,
Object value,
int index,
SessionImplementor session
)
throws SQLException {
// The magic is here: oracle.sql.DATE!
st.setObject(index, new DATE(value));
}
// The other method implementations are omitted
}
And annotate all your entities with that type:
@Entity
@TypeDefs(
value = @TypeDef(
name = "oracle_date",
typeClass = OracleDate.class
)
)
public class Rental {
@Id
@Column(name = "rental_id")
public Long rentalId;
@Column(name = "rental_date")
@Type(type = "oracle_date")
public Timestamp rentalDate;
}
A lot of repetitive boilerplate, I'm afraid, but at least you can get the execution plan up to speed again.
JDBC and other APIs
For the record, a similar article presents how to solve this problem on the JDBC layer, or with jOOQ in particular:
http://blog.jooq.org/2014/12/22/are-you-binding-your-oracle-dates-correctly-i-bet-you-arent/
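As a rough sketch of the JDBC-level idea (assuming a connection to an Oracle database and the driver's oracle.sql.DATE on the classpath; the rental table and column names are just reused from the entity above for illustration):
// Binding a java.sql.Timestamp against a DATE column can trigger the implicit conversion
// described above; binding an oracle.sql.DATE avoids it.
java.sql.Timestamp ts = java.sql.Timestamp.valueOf("2015-01-01 00:00:00");
try (PreparedStatement ps = connection.prepareStatement(
        "SELECT rental_id FROM rental WHERE rental_date > ?")) {
    ps.setObject(1, new oracle.sql.DATE(ts));
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // process rows
        }
    }
}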
Let's say I have to fire queries like these:
Select primarykey, columnname, old_value, new_value from first_audit_log;
Select primarykey, columnname, old_value, new_value from second_audit_log;
Select primarykey, columnname, old_value, new_value from third_audit_log; ...and so on
The audit log tables are not mapped as JPA entities to any class, and I strictly can't create n classes for n *_audit_log tables.
Using the native query feature, how best can I map this to a generic class? I tried the SELECT NEW feature, but I'm not sure about it... Any help is appreciated.
Since your audit log tables share the same columns, you can create a view that "unifies" those tables and map a single Java class to that view. I believe you can, since you don't need to write updates.
As an alternative, using native queries would be a good choice.
EDIT:
1) If your audit logs are already views, you can create a view based on those other views if you don't want to create a mapping Java class for each of them. Just remember to add a dummy column whose value is 1 if the row comes from the "first" audit log, 2 if it comes from the second, and so on, so you can tell them apart.
2) In order to use native queries, assuming your persistence provider is Hibernate, you can do it as in this example:
EntityManagerFactory emf = Persistence.createEntityManagerFactory("test");
EntityManager em = emf.createEntityManager();
Session sess = em.unwrap(Session.class); // <-- Use Hibernate-specific features
SQLQuery query = sess.createSQLQuery(
"SELECT AVG(age) AS averageAge, AVG(salary) as averageSalary FROM persons");
query.setResultTransformer(Transformers.aliasToBean(MyResult.class));
MyResult result = (MyResult) query.list().get(0);
where MyResult is declared as follows:
public class MyResult {
private BigDecimal averageAge;
private BigDecimal averageSalary;
public BigDecimal getAverageAge() {
return averageAge;
}
public void setAverageAge(BigDecimal averageAge) {
this.averageAge = averageAge;
}
public BigDecimal getAverageSalary() {
return averageSalary;
}
public void setAverageSalary(BigDecimal averageSalary) {
this.averageSalary = averageSalary;
}
}
and the persons table is like this (MySQL syntax):
CREATE TABLE `persons` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`firstname` varchar(255) NOT NULL,
`lastname` varchar(255) NOT NULL,
`age` int(11) NOT NULL,
`salary` int(11) NOT NULL,
PRIMARY KEY (`id`)
);
You can easily adapt this example to your needs, just replace persons and MyResult with what you want.
The aliases in the SQL query are automatically converted to upper case, and Hibernate then looks for the setter in upper case, so an org.hibernate.PropertyNotFoundException is thrown. Any suggestions would be greatly appreciated.
For instance, the statement below looks for the setter ID instead of Id/id (Could not find setter for ID on class Data):
List<Data> result = entityManager.unwrap(Session.class)
.createSQLQuery("Select id as id from table")
.setParameter("day", date.getDayOfMonth())
.setParameter("month", date.getMonthOfYear())
.setParameter("year", date.getYear())
.setResultTransformer(Transformers.aliasToBean(Data.class))
.list();
class Data {
Integer id;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
}
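One workaround I'm considering, based on the fact that Oracle only upper-cases unquoted identifiers, is to quote the alias so its case is preserved in the result set metadata. A rough sketch (the parameter setters are omitted here, and I haven't verified this against every driver):
List<Data> result = entityManager.unwrap(Session.class)
    .createSQLQuery("Select id as \"id\" from table")
    .setResultTransformer(Transformers.aliasToBean(Data.class))
    .list();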
I am trying to implement MyBatis in my project at work. It is a legacy system, which uses vanilla JDBC to access the database, solely through stored procedures. I understand that to call a stored procedure, MyBatis requires an object which contains the input parameters for the stored procedure and another that will hold the result set. Not sure if this is entirely true.
To avoid creating too many data entities in the system, I want to reuse the existing ones, and this is where the problem arises. Let me explain the typical situation/scenario I am facing, and then how I am trying to solve it.
Let's say I have the following data entity(ies) in the system:
class Account {
private int accountID;
private String accountName;
private OrganizationAddress address;
// Getters-Setters Go Here
}
class OrganizationAddress extends Address {
// ... some attributes here
// Getters-Setters Go Here
}
class Address {
private String address;
private String city;
private String state;
private String country;
// Getters-Setters Go Here
}
I am using annotations, so my Mapper class has something like this:
@Select(value = "{call Get_AccountList(#{accountType, mode=IN, jdbcType=String})}")
@Options(statementType = StatementType.CALLABLE)
@Results(value = {
@org.apache.ibatis.annotations.Result
(property = "accountID", column = "Account_ID"),
@org.apache.ibatis.annotations.Result
(property = "accountName", column = "Organization_Name"),
@org.apache.ibatis.annotations.Result
(property = "state", column = "State", javaType=OrganizationAddress.class)
})
List<Account> getAccountList(Param param);
Problem: when I make the call to the stored procedure, the state in the Account object is always null.
To add insult to injury, I do not have access to the source of the above data entities, so I couldn't try the solution provided at this link either: Mybatis select with nested objects
My query:
Is it possible for me to use the data entities already present in the system, or do I have to create new ones and then map the data to the existing ones?
If yes, how do I go about it? Any references would help.
If no, is there a way to reduce the number of data entities I would create to call the stored procedures (for both in and out parameters)?
I think the best solution for your situation (if I understand it correctly) is to use a MyBatis TypeHandler that will map the state column to an OrganizationAddress object.
I've put together an example based on the information you provided, and it works. Here is the revised annotated mapper:
// Note: you have an error in the @Select line => maps to VARCHAR not "String"
@Select(value = "{call Get_AccountList(#{accountType, mode=IN, jdbcType=VARCHAR})}")
@Options(statementType = StatementType.CALLABLE)
@Results(value = {
@org.apache.ibatis.annotations.Result
(property = "accountID", column = "Account_ID"),
@org.apache.ibatis.annotations.Result
(property = "accountName", column = "Organization_Name"),
@org.apache.ibatis.annotations.Result
(property = "address", column = "State", typeHandler=OrgAddressTypeHandler.class)
})
List<Account> getAccountList(Param param);
You need to map the address field of Account to the "state" column and use a TypeHandler to create an OrganizationAddress with its "state" property filled in.
The OrgAddressTypeHandler I created looks like this:
import java.sql.CallableStatement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.ibatis.type.BaseTypeHandler;
import org.apache.ibatis.type.JdbcType;
public class OrgAddressTypeHandler extends BaseTypeHandler<OrganizationAddress> {
@Override
public OrganizationAddress getNullableResult(ResultSet rs, String colName) throws SQLException {
OrganizationAddress oa = new OrganizationAddress();
oa.setState(rs.getString(colName));
return oa;
}
@Override
public OrganizationAddress getNullableResult(ResultSet rs, int colNum) throws SQLException {
OrganizationAddress oa = new OrganizationAddress();
oa.setState(rs.getString(colNum));
return oa;
}
@Override
public OrganizationAddress getNullableResult(CallableStatement cs, int colNum) throws SQLException {
OrganizationAddress oa = new OrganizationAddress();
oa.setState(cs.getString(colNum));
return oa;
}
@Override
public void setNonNullParameter(PreparedStatement arg0, int arg1, OrganizationAddress arg2, JdbcType arg3) throws SQLException {
// not needed for this example
}
}
If you need a more complete working example than this, I'll be happy to send more of it. Or if I have misunderstood your example, let me know.
With this solution you can use your domain objects without modification. You just need the TypeHandler to do the mapping and you don't need an XML mapper file.
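If it helps, here is a rough sketch of how I call it end to end in my test (AccountMapper is just the name I gave the mapper interface, mybatis-config.xml and the getters are assumptions about your setup, and param stands for whatever object carries the accountType):
// Build a SqlSessionFactory from a standard MyBatis config that registers the mapper interface
InputStream in = Resources.getResourceAsStream("mybatis-config.xml");
SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(in);
SqlSession session = factory.openSession();
try {
    AccountMapper mapper = session.getMapper(AccountMapper.class);
    List<Account> accounts = mapper.getAccountList(param);
    for (Account account : accounts) {
        // address holds an OrganizationAddress built by OrgAddressTypeHandler, with only state set
        System.out.println(account.getAddress().getState());
    }
} finally {
    session.close();
}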
Also I did this with MyBatis-3.1.1 in MySQL. Here is the simple schema and stored proc I created to test it:
DROP TABLE IF EXISTS account;
DROP TABLE IF EXISTS organization_address;
CREATE TABLE account (
account_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
organization_name VARCHAR(45) NOT NULL,
account_type VARCHAR(10) NOT NULL,
organization_address_id SMALLINT UNSIGNED NOT NULL,
PRIMARY KEY (account_id)
)ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE organization_address (
organization_address_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
address VARCHAR(45) NOT NULL,
city VARCHAR(45) NOT NULL,
state VARCHAR(45) NOT NULL,
country VARCHAR(45) NOT NULL,
PRIMARY KEY (organization_address_id)
)ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO organization_address VALUES(1, '123 Foo St.', 'Foo City', 'Texas', 'USA');
INSERT INTO organization_address VALUES(2, '456 Bar St.', 'Bar City', 'Arizona', 'USA');
INSERT INTO organization_address VALUES(3, '789 Quux Ave.', 'Quux City', 'New Mexico', 'USA');
INSERT INTO account VALUES(1, 'Foo', 'Type1', 1);
INSERT INTO account VALUES(2, 'Bar', 'Type1', 2);
INSERT INTO account VALUES(3, 'Quux', 'Type2', 3);
DROP PROCEDURE IF EXISTS Get_AccountList;
DELIMITER $$
CREATE PROCEDURE Get_AccountList(IN p_account_type VARCHAR(10))
READS SQL DATA
BEGIN
SELECT a.account_id, a.organization_name, o.state
FROM account a
JOIN organization_address o ON a.organization_address_id = o.organization_address_id
WHERE account_type = p_account_type
ORDER BY a.account_id;
END $$
DELIMITER ;