Converting raw JDBC code to use JdbcTemplate - java

I have recently started my first software dev job out of university, and one of my tasks at the moment is to convert raw JDBC code to use JdbcTemplate in order to get rid of boilerplate code.
I have written an example of a DAO class using JdbcTemplate which retrieves a user's address.
Would someone be able to tell me if this looks like the right pattern/approach, or am I missing anything here?
public class accountsDAO extends StoredProcedure {

    private static final String STORED_PROC = "accounts.getAccountDetails";

    public accountsDAO(JdbcTemplate jdbcTemplate) {
        super(jdbcTemplate, STORED_PROC);
        declareParameter(new SqlParameter("client_id", OracleTypes.VARCHAR));
        declareParameter(new SqlOutParameter("accounts_csr", OracleTypes.CURSOR, new AccountAddressExtractor()));
        setFunction(false);
        compile();
    }

    public List<String> getAccountAddress(String account) {
        Map<String, Object> params = new HashMap<String, Object>();
        Map<String, Object> results;
        List<String> data = new LinkedList<String>();
        try {
            params.put("client_id", account);
            results = execute(params);
            // the cursor rows come back under the out-parameter name, one List<String> per row
            List<List<String>> rows = (List<List<String>>) results.get("accounts_csr");
            if (rows != null && !rows.isEmpty()) {
                data = rows.get(0);
            }
        } catch (Exception e) {
            // report error
        }
        return data;
    }

    private static class AccountAddressExtractor implements RowMapper<List<String>> {
        @Override
        public List<String> mapRow(ResultSet rs, int i) throws SQLException {
            List<String> data = new ArrayList<String>();
            data.add(rs.getString(1));
            data.add(rs.getString(2));
            data.add(rs.getString(3));
            return data;
        }
    }
}
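For comparison, the same call can also be expressed without subclassing StoredProcedure by using SimpleJdbcCall; a minimal sketch under the same assumptions (the same package/procedure name, cursor parameter, and the AccountAddressExtractor above):
// Sketch only: assumes the same stored procedure and out-cursor as above.
SimpleJdbcCall call = new SimpleJdbcCall(jdbcTemplate)
        .withCatalogName("accounts")
        .withProcedureName("getAccountDetails")
        .declareParameters(
                new SqlParameter("client_id", OracleTypes.VARCHAR),
                new SqlOutParameter("accounts_csr", OracleTypes.CURSOR, new AccountAddressExtractor()));

Map<String, Object> results = call.execute(Collections.singletonMap("client_id", account));
List<List<String>> rows = (List<List<String>>) results.get("accounts_csr");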

Related

Time Optimization For Feed API (Inside A List Of different API calls)

There is a REST API for a dashboard feed page. The page contains different activities with pagination. The different APIs get data from different database collections as well as some third-party HTTP APIs.
public List<Map<String, Object>> getData(params...) {
    List<Map<String, Object>> uhfList = new ArrayList<>();
    Map<String, Object> uhf = null;
    for (MasterModel masterModel : pageActivities) { // taking time n (which I need to reduce)
        uhf = new HashMap<String, Object>();
        uhf.put("Key", getItemsByMethodName(params..));
        uhfList.add(uhf);
    }
    return uhfList;
}
private List<? extends Object> getItemsByMethodName(params...) {
    java.lang.reflect.Method method = null;
    List<? extends Object> data = null;
    try {
        method = uhfRelativeService.getClass().getMethod(params...);
        data = (List<? extends Object>) method.invoke(params...);
    } catch (Exception e) {
        LOG.error("Error occurred in getItemsByMethodName :: {}", e.getMessage());
    }
    return data;
}
I tried a different approach using CompletableFuture, but it was not much more effective:
private CompletableFuture<List<? extends Object>> getItemsByMethodName(UserDetail userIdentity, UHFMaster uhfMaster) {
    java.lang.reflect.Method method = null;
    CompletableFuture<List<? extends Object>> data = null;
    try {
        method = uhfRelativeService.getClass().getMethod(uhfMaster.getMethodName().trim(), params...);
        data = (CompletableFuture<List<? extends Object>>) method.invoke(uhfRelativeService, userIdentity);
    } catch (Exception e) {
        LOG.error("Error :: {}", e.getMessage());
    }
    return data;
}
// MasterModel class
public class MasterModel {
    @Id
    private ObjectId id;
    private String param;
    private String param1;
    private String param2;
    private Integer param3;
    private Integer param4;
    private Integer param5;
    private Integer param6;
    private Integer param7;
    private Integer param8;
    private Integer param9;
    private String param10;
    private String param11;
    // getters & setters
}
But the time is not reduced much. I need a way to perform this operation with a lower response time. Please help me with this.
If you want to do multithreading, then just casting to a CompletableFuture won't help. To actually run a process asynchronously in a separate thread, you can do something like:
public List<Map<String, Object>> getData(params...) {
    UHFMaster master = null; // assume present
    List<UserDetail> userDetails = null; // assume list present
    // just an example
    // kick off the asynchronous tasks
    List<CompletableFuture<List<? extends Object>>> futures =
            userDetails.stream()
                       .map(u -> getItemsByMethodName(u, master))
                       .collect(Collectors.toList());
    // allOf() gives a single future which completes when all the futures in the list complete;
    // join() blocks until that happens
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
    // every future is now complete, so join() below returns immediately
    List<Map<String, Object>> listToReturn = new ArrayList<>();
    for (CompletableFuture<List<? extends Object>> f : futures) {
        Map<String, Object> uhf = new HashMap<>();
        uhf.put("Key", f.join());
        listToReturn.add(uhf);
    }
    return listToReturn;
}
private CompletableFuture<List<? extends Object>> getItemsByMethodName(UserDetail userIdentity, UHFMaster uhfMaster) {
    try {
        java.lang.reflect.Method method =
                uhfRelativeService.getClass().getMethod(uhfMaster.getMethodName().trim(), params...);
        return CompletableFuture.supplyAsync(() -> {
            try {
                return (List<? extends Object>) method.invoke(uhfRelativeService, userIdentity);
            } catch (Exception e) {
                LOG.error("Error :: {}", e.getMessage());
                return null;
            }
        });
    } catch (Exception e) {
        LOG.error("Error :: {}", e.getMessage());
        return CompletableFuture.completedFuture(null);
    }
}
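One more note (an addition, not part of the original answer): supplyAsync without an executor runs on the common ForkJoinPool, which is sized for CPU-bound work. For I/O-bound service calls like these, passing a dedicated executor to the supplyAsync(Supplier, Executor) overload is usually worthwhile; the pool size below is a placeholder to tune:
// Hypothetical sizing; tune for your workload.
ExecutorService ioPool = Executors.newFixedThreadPool(20);
// runs the task on the given pool instead of ForkJoinPool.commonPool()
CompletableFuture<List<? extends Object>> future =
        CompletableFuture.supplyAsync(() -> fetchItems(), ioPool); // fetchItems() stands in for the reflective call above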

Spring JDBC - Stored Procedure returning an empty array

I've done plenty of research on figuring out a way to return a result set using the Spring JDBC framework. I am new to Spring and I'm working on an implementation that calls a stored function that uses multiple input parameters and returns a ref cursor, but I can't seem to figure out the proper way of doing this. The code that I have now keeps returning an empty array. Can someone suggest to me what would be the best way of handling this?
Here's some of my code snippets:
public class StandardRodsDAOImplementation implements StandardRodsDAO {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    private final String schema_name = "xxlt";
    private final String procedure_catalog = "xxlt_bpg_std_width_pkg";
    private final String procedure_name = "get_std_rods";

    public List<StandardRods> getDefaultRodMatAndColor(String series, String style, String material, String color) {
        SimpleJdbcCall simpleJdbcCall = new SimpleJdbcCall(jdbcTemplate)
                .withSchemaName(schema_name)
                .withCatalogName(procedure_catalog)
                .withFunctionName(procedure_name)
                .declareParameters(
                        new SqlOutParameter("lref", OracleTypes.CURSOR, new StandardRodsMapper()),
                        new SqlParameter("p_series_ind", Types.VARCHAR),
                        new SqlParameter("p_style_ind", Types.VARCHAR),
                        new SqlParameter("p_material_ind", Types.VARCHAR),
                        new SqlParameter("p_color_ind", Types.VARCHAR));
        simpleJdbcCall.compile();
        Map<String, Object> inParams = new HashMap<>();
        inParams.put("p_series_ind", Types.VARCHAR);
        inParams.put("p_style_ind", Types.VARCHAR);
        inParams.put("p_material_ind", Types.VARCHAR);
        inParams.put("p_color_ind", Types.VARCHAR);
        Map output = simpleJdbcCall.execute(inParams);
        return (List) output.get("lref");
    }
}
Mapper function:
public class StandardRodsMapper implements RowMapper<StandardRods> {
    public StandardRods mapRow(ResultSet rs, int rowNum) throws SQLException {
        StandardRods standardRods = new StandardRods();
        standardRods.setSeries(rs.getString("SERIES_IND"));
        standardRods.setStyle(rs.getString("BELT_STYLE_IND"));
        standardRods.setMaterial(rs.getString("MATERIAL_IND"));
        standardRods.setColor(rs.getString("COLOR_IND"));
        return standardRods;
    }
}
Interface:
public interface StandardRodsDAO {
    List<StandardRods> getDefaultRodMatAndColor(String series, String style, String material, String color);
}
Junit Test:
@Configuration
@ComponentScan
public class SpringDemoTestingApplication {

    StandardRodsDAOImplementation standardRodsDAOImplementation;

    @Before
    public void setUp() {
        ApplicationContext applicationContext = new ClassPathXmlApplicationContext(
                "spring-call-proc-simplejdbccall.xml");
        standardRodsDAOImplementation = (StandardRodsDAOImplementation) applicationContext.getBean("standardRodsDao");
    }

    @Test
    public void testGetStandardRod() {
        List<StandardRods> standardRods = standardRodsDAOImplementation.getDefaultRodMatAndColor("550", "TIGHT TRANSFER FLAT TOP", "NYLON", "DARK BROWN");
        System.out.println(standardRods);
    }
}
Procedure:
FUNCTION get_std_rods (p_series_ind IN varchar2, p_style_ind IN varchar2, p_material_ind IN varchar2, p_color_ind IN varchar2)
return SYS_REFCURSOR IS lref SYS_REFCURSOR;
BEGIN
OPEN lref FOR
SELECT bbv.default_rod_mat, bbv.default_rod_color
FROM xxlt.xxlt_bpg_belt_val bbv
WHERE bbv.series_ind = p_series_ind AND
bbv.belt_style_ind = p_style_ind AND
bbv.material_ind = p_material_ind AND
bbv.color_ind = p_color_ind;
RETURN lref;
END;
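One thing worth noting about the snippet above: the input map binds java.sql.Types constants rather than the actual argument values, so the cursor is opened with values that match no row. A minimal sketch of binding the real values, reusing the names from the question:
// Sketch: bind the method arguments, not the Types.VARCHAR constants
Map<String, Object> inParams = new HashMap<>();
inParams.put("p_series_ind", series);
inParams.put("p_style_ind", style);
inParams.put("p_material_ind", material);
inParams.put("p_color_ind", color);
Map<String, Object> output = simpleJdbcCall.execute(inParams);
List<StandardRods> rods = (List<StandardRods>) output.get("lref");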

How to use Aggregation Query with MongoItemReader in spring batch

Somehow the requirement changed and I have to use an aggregation query instead of a basic query in setQuery(). Is this even possible?
Please suggest how I can do that. My aggregation query is ready, but I'm not sure how to use it in Spring Batch.
public ItemReader<ProfileCollection> searchMongoItemReader() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
    MongoItemReader<MyCollection> mongoItemReader = new MongoItemReader<>();
    mongoItemReader.setTemplate(myMongoTemplate);
    mongoItemReader.setCollection(myMongoCollection);
    mongoItemReader.setQuery(" Some Simple Query - Basic");
    mongoItemReader.setTargetType(MyCollection.class);
    Map<String, Sort.Direction> sort = new HashMap<>();
    sort.put("field4", Sort.Direction.ASC);
    mongoItemReader.setSort(sort);
    return mongoItemReader;
}
Extend MongoItemReader and provide your own implementation for the method doPageRead(). This way you will have full pagination support, and this reading of documents will be part of a step.
public class CustomMongoItemReader<T, O> extends MongoItemReader<T> {
    private MongoTemplate template;
    private Class<? extends T> inputType;
    private Class<O> outputType;
    private MatchOperation match;
    private ProjectionOperation projection;
    private String collection;

    @Override
    protected Iterator<T> doPageRead() {
        // page and pageSize come from the AbstractPaginatedDataItemReader that MongoItemReader extends
        Pageable pageable = PageRequest.of(page, pageSize);
        Aggregation agg = newAggregation(match, projection,
                skip((long) pageable.getPageNumber() * pageable.getPageSize()),
                limit(pageable.getPageSize()));
        return (Iterator<T>) template.aggregate(agg, collection, outputType).iterator();
    }
}
Add the remaining getters, setters, and other methods; just have a look at the source code for MongoItemReader here.
I also removed Query support from it; you can keep that in the same method by copying it from MongoItemReader, and the same goes for Sort.
And in the class where you have a reader, you would do something like:
public MongoItemReader<T> reader() {
    CustomMongoItemReader reader = new CustomMongoItemReader();
    reader.setTemplate(mongoTemplate);
    reader.setName("abc");
    reader.setTargetType(input.class);
    reader.setOutputType(output.class);
    reader.setCollection(myMongoCollection);
    reader.setMatch(Aggregation.match(new Criteria()....));
    reader.setProjection(Aggregation.project("..", ".."));
    return reader;
}
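As mentioned above, the reader then plugs into a step like any other ItemReader; a minimal sketch of the wiring (the bean name, chunk size, and the input/output placeholder types are assumptions):
@Bean
public Step aggregationStep(StepBuilderFactory stepBuilderFactory, ItemWriter<output> writer) {
    return stepBuilderFactory.get("aggregationStep")
            .<input, output>chunk(100) // chunk size is an assumption
            .reader(reader())
            .writer(writer)
            .build();
}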
To be able to use aggregation in a job, making use of all the features that Spring Batch has, you have to create a custom ItemReader.
Extending AbstractPaginatedDataItemReader, we can use all the elements from pageable operations.
Here's a simple version of that custom class:
public class CustomAggregationPaginatedItemReader<T> extends AbstractPaginatedDataItemReader<T> implements InitializingBean {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\?(\\d+)");

    private MongoOperations template;
    private Class<? extends T> type;
    private Sort sort;
    private String collection;

    public CustomAggregationPaginatedItemReader() {
        super();
        setName(ClassUtils.getShortName(CustomAggregationPaginatedItemReader.class));
    }

    public void setTemplate(MongoOperations template) {
        this.template = template;
    }

    public void setTargetType(Class<? extends T> type) {
        this.type = type;
    }

    public void setSort(Map<String, Sort.Direction> sorts) {
        this.sort = convertToSort(sorts);
    }

    public void setCollection(String collection) {
        this.collection = collection;
    }

    @Override
    @SuppressWarnings("unchecked")
    protected Iterator<T> doPageRead() {
        Pageable pageRequest = new PageRequest(page, pageSize, sort);
        BasicDBObject cursor = new BasicDBObject();
        cursor.append("batchSize", 100);
        SkipOperation skipOperation = skip(Long.valueOf(pageRequest.getPageNumber()) * Long.valueOf(pageRequest.getPageSize()));
        Aggregation aggregation = newAggregation(
                // include here all your aggregation operations,
                skipOperation,
                limit(pageRequest.getPageSize())
        ).withOptions(newAggregationOptions().cursor(cursor).build());
        return (Iterator<T>) template.aggregate(aggregation, collection, type).iterator();
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        Assert.state(template != null, "An implementation of MongoOperations is required.");
        Assert.state(type != null, "A type to convert the input into is required.");
        Assert.state(collection != null, "A collection is required.");
    }

    private String replacePlaceholders(String input, List<Object> values) {
        Matcher matcher = PLACEHOLDER.matcher(input);
        String result = input;
        while (matcher.find()) {
            String group = matcher.group();
            int index = Integer.parseInt(matcher.group(1));
            result = result.replace(group, getParameterWithIndex(values, index));
        }
        return result;
    }

    private String getParameterWithIndex(List<Object> values, int index) {
        return JSON.serialize(values.get(index));
    }

    private Sort convertToSort(Map<String, Sort.Direction> sorts) {
        List<Sort.Order> sortValues = new ArrayList<Sort.Order>();
        for (Map.Entry<String, Sort.Direction> curSort : sorts.entrySet()) {
            sortValues.add(new Sort.Order(curSort.getValue(), curSort.getKey()));
        }
        return new Sort(sortValues);
    }
}
If you look carefully, you can see it was created from MongoItemReader in the Spring framework (org.springframework.batch.item.data.MongoItemReader). That class shows how to create a whole new class extending AbstractPaginatedDataItemReader; if you have a look at its doPageRead() method, you should be able to see that it only uses the find operation of MongoTemplate, making it impossible to use aggregation operations with it.
Here is how it looks to use our custom reader:
@Bean
public ItemReader<YourDataClass> reader(MongoTemplate mongoTemplate) {
    CustomAggregationPaginatedItemReader<YourDataClass> customAggregationPaginatedItemReader = new CustomAggregationPaginatedItemReader<>();
    Map<String, Direction> sort = new HashMap<String, Direction>();
    sort.put("id", Direction.ASC);
    customAggregationPaginatedItemReader.setTemplate(mongoTemplate);
    customAggregationPaginatedItemReader.setCollection("collectionName");
    customAggregationPaginatedItemReader.setTargetType(YourDataClass.class);
    customAggregationPaginatedItemReader.setSort(sort);
    return customAggregationPaginatedItemReader;
}
As you may notice, you also need an instance of MongoTemplate; here's how that looks too:
@Bean
public MongoTemplate mongoTemplate(MongoDbFactory mongoDbFactory) {
    return new MongoTemplate(mongoDbFactory);
}
MongoDbFactory is an object autowired by the Spring framework.
Hope that's enough to help you.

Java - How to Avoid Creating List and Then Copying Into it

I have this general function to populate an ArrayList of objects from a database. The problem is that I'm getting a general ArrayList class back from the DB, and then creating the specific subclass of the ArrayList I need to create, and then copying from the generic ArrayList to my subclass. I want to eliminate that unnecessary step of copying from one array to the other, since the performance won't be great with hundreds of rows. How can I eliminate that step using generics?
So, to use a more specific example, I have a data class like
public class UserData {}
and then a class like
public class UserSet extends ArrayList<UserData>
and I would populate the UserSet object by using a function call like
UserSet s = selectAll("SELECT * FROM users", UserSet.class);
and my general function to query the DB and return a UserSet instance is like this.
public static <T, S extends List<T>> S selectAll(String sql, Class<S> listType, Object... args) throws Exception {
    // t = UserData.class in my example
    Class<T> t = (Class<T>) ((ParameterizedType) listType.getGenericSuperclass()).getActualTypeArguments()[0];
    // From Apache's DbUtils project
    QueryRunner run = new QueryRunner();
    // AnnotatedDataRowProcessor is my class that just converts a DB row into a data object
    ResultSetHandler<List<T>> h = new BeanListHandler<T>(t, new AnnotatedDataRowProcessor());
    Connection conn = DB.getConnection();
    try {
        // creates the new instance of my specific subclass of ArrayList
        S result = listType.newInstance();
        // returns the ArrayList which I then copy into result
        result.addAll(run.query(conn, sql, h, args));
        return result;
    } finally {
        DbUtils.close(conn);
    }
}
You can customize your BeanListHandler, something like this:
ResultSetHandler<List<T>> h = new BeanListHandler<T>(t, new AnnotatedDataRowProcessor()) {
    @Override
    public List<T> handle(ResultSet rs) throws SQLException {
        try {
            List<T> rows = listType.newInstance();
            while (rs.next()) {
                rows.add(this.handleRow(rs));
            }
            return rows;
        } catch (InstantiationException | IllegalAccessException e) {
            throw new SQLException(e);
        }
    }
};
You will probably need some casts to make this compile, but this is the general idea.
Then calling run.query(conn, sql, h, args) will directly create the type you're looking for.
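On Java 8+, a variant of the same idea (my suggestion, not from the original answer) is to pass a Supplier<S> such as UserSet::new instead of the Class object; the factory call replaces newInstance() and avoids both the reflection and its checked exceptions:
// listFactory is an assumed Supplier<S> parameter,
// e.g. selectAll("SELECT * FROM users", UserSet::new, ...)
ResultSetHandler<List<T>> h = new BeanListHandler<T>(t, new AnnotatedDataRowProcessor()) {
    @Override
    public List<T> handle(ResultSet rs) throws SQLException {
        List<T> rows = listFactory.get(); // no newInstance(), no InstantiationException
        while (rs.next()) {
            rows.add(this.handleRow(rs));
        }
        return rows;
    }
};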
I actually had to pass in the Class type into the constructor of AnnotatedDataRowProcessor so I wouldn't break the interface methods in this BeanListHandler.
private Class<?> type;

public <T> AnnotatedDataRowProcessor(Class<T> type) {
    this.type = type;
}

@Override
public <T> List<T> toBeanList(ResultSet rs, Class<T> type) throws SQLException {
    try {
        List<T> list = (List<T>) this.type.newInstance();
        while (rs.next()) {
            list.add(toBean(rs, type));
        }
        return list;
    } catch (IllegalAccessException ex) {
        throw new SQLException(ex.getMessage());
    } catch (InstantiationException ex) {
        throw new SQLException(ex.getMessage());
    }
}

Mapping a JDBC ResultSet to an object

I have a user class that has 16 attributes, things such as firstname, lastname, dob, username, password etc... These are all stored in a MySQL database and when I want to retrieve users I use a ResultSet. I want to map each of the columns back to the user attributes but the way I am doing it seems terribly inefficient.
For example I am doing:
//ResultSet rs;
while (rs.next()) {
    String uid = rs.getString("UserId");
    String fname = rs.getString("FirstName");
    ...
    User u = new User(uid, fname, ...);
    //ArrayList<User> users
    users.add(u);
}
i.e I retrieve all the columns and then create user objects by inserting all the column values into the User constructor.
Does anyone know of a faster, neater way of doing this?
If you don't want to use any JPA provider such as OpenJPA or Hibernate, you can just give Apache DbUtils a try.
http://commons.apache.org/proper/commons-dbutils/examples.html
Then your code will look like this:
QueryRunner run = new QueryRunner(dataSource);
// Use the BeanListHandler implementation to convert all
// ResultSet rows into a List of Person JavaBeans.
ResultSetHandler<List<Person>> h = new BeanListHandler<Person>(Person.class);
// Execute the SQL statement and return the results in a List of
// Person objects generated by the BeanListHandler.
List<Person> persons = run.query("SELECT * FROM Person", h);
There is no need to store the ResultSet values in Strings and then set them on the POJO; set them directly as you retrieve them.
Or, best of all, switch to an ORM tool like Hibernate instead of JDBC, which maps your POJO objects directly to the database.
But for now, use this:
List<User> users = new ArrayList<User>();
while (rs.next()) {
    User user = new User();
    user.setUserId(rs.getString("UserId"));
    user.setFName(rs.getString("FirstName"));
    ...
    users.add(user);
}
Let's assume you want to use core Java, without any frameworks. If you can guarantee that the field names of an entity are equal to the columns in the database, you can use the Reflection API (otherwise, create an annotation and define the mapping name there).
By FieldName
/**
 * Class<T> clazz - the type of the objects you want fetched
 * ResultSet resultSet - pointer to your retrieved results
 */
List<Field> fields = Arrays.asList(clazz.getDeclaredFields());
for (Field field : fields) {
    field.setAccessible(true);
}

List<T> list = new ArrayList<>();
while (resultSet.next()) {
    T dto = clazz.getConstructor().newInstance();
    for (Field field : fields) {
        String name = field.getName();
        try {
            String value = resultSet.getString(name);
            field.set(dto, field.getType().getConstructor(String.class).newInstance(value));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    list.add(dto);
}
By annotation
@Retention(RetentionPolicy.RUNTIME)
public @interface Col {
    String name();
}
DTO:
class SomeClass {
    @Col(name = "column_in_db_name")
    private String columnInDbName;

    public SomeClass() {}
    // ..
}
Same, but
while (resultSet.next()) {
    T dto = clazz.getConstructor().newInstance();
    for (Field field : fields) {
        Col col = field.getAnnotation(Col.class);
        if (col != null) {
            String name = col.name();
            try {
                String value = resultSet.getString(name);
                field.set(dto, field.getType().getConstructor(String.class).newInstance(value));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    list.add(dto);
}
Thoughts
In fact, iterating over all fields might seem inefficient, so I would store the mapping somewhere rather than rebuilding it each time. However, if our T is a DTO whose only purpose is transferring data and it doesn't contain loads of unnecessary fields, that's fine. In the end it's much better than using boilerplate methods all the way.
Hope this helps someone.
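That "store the mapping somewhere" idea can be as small as a per-class cache of the annotated fields; a sketch, reusing the Col annotation from above:
// Build the column -> Field map once per class and reuse it on every row.
private static final Map<Class<?>, Map<String, Field>> MAPPING_CACHE = new ConcurrentHashMap<>();

private static Map<String, Field> mappingFor(Class<?> clazz) {
    return MAPPING_CACHE.computeIfAbsent(clazz, c -> {
        Map<String, Field> mapping = new HashMap<>();
        for (Field field : c.getDeclaredFields()) {
            Col col = field.getAnnotation(Col.class);
            if (col != null) {
                field.setAccessible(true);
                mapping.put(col.name(), field);
            }
        }
        return mapping;
    });
}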
Complete solution using @TEH-EMPRAH's ideas and generic casting from "Cast Object to Generic Type" for the return value:
import annotations.Column;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.sql.SQLException;
import java.util.*;

public class ObjectMapper<T> {

    private Class clazz;
    private Map<String, Field> fields = new HashMap<>();
    Map<String, String> errors = new HashMap<>();

    public ObjectMapper(Class clazz) {
        this.clazz = clazz;
        List<Field> fieldList = Arrays.asList(clazz.getDeclaredFields());
        for (Field field : fieldList) {
            Column col = field.getAnnotation(Column.class);
            if (col != null) {
                field.setAccessible(true);
                fields.put(col.name(), field);
            }
        }
    }

    public T map(Map<String, Object> row) throws SQLException {
        try {
            T dto = (T) clazz.getConstructor().newInstance();
            for (Map.Entry<String, Object> entity : row.entrySet()) {
                if (entity.getValue() == null) {
                    continue; // don't set DB NULL
                }
                String column = entity.getKey();
                Field field = fields.get(column);
                if (field != null) {
                    field.set(dto, convertInstanceOfObject(entity.getValue()));
                }
            }
            return dto;
        } catch (IllegalAccessException | InstantiationException | NoSuchMethodException | InvocationTargetException e) {
            e.printStackTrace();
            throw new SQLException("Problem with data mapping. See logs.");
        }
    }

    public List<T> map(List<Map<String, Object>> rows) throws SQLException {
        List<T> list = new LinkedList<>();
        for (Map<String, Object> row : rows) {
            list.add(map(row));
        }
        return list;
    }

    private T convertInstanceOfObject(Object o) {
        try {
            return (T) o;
        } catch (ClassCastException e) {
            return null;
        }
    }
}
and then in terms of how it ties in with the database, I have the following:
// connect to database (autocloses)
try (DataConnection conn = ds1.getConnection()) {
    // fetch rows
    List<Map<String, Object>> rows = conn.nativeSelect("SELECT * FROM products");
    // map rows to class
    ObjectMapper<Product> objectMapper = new ObjectMapper<>(Product.class);
    List<Product> products = objectMapper.map(rows);
    // display the rows
    System.out.println(rows);
    // display it as products
    for (Product prod : products) {
        System.out.println(prod);
    }
} catch (Exception e) {
    e.printStackTrace();
}
I would like to point to q2o. It is a JPA-based Java object mapper which helps with many of the tedious SQL and JDBC ResultSet related tasks, but without all the complexity an ORM framework comes with. With its help, mapping a ResultSet to an object is as easy as this:
while (rs.next()) {
    users.add(Q2Obj.fromResultSet(rs, User.class));
}
More about q2o can be found here.
There are answers recommending https://commons.apache.org/proper/commons-dbutils/. Note that the default row processor implementation, org.apache.commons.dbutils.BasicRowProcessor, is not thread-safe as of db-utils 1.7. So if you are using org.apache.commons.dbutils.QueryRunner::query in a multi-threaded environment, you should write a custom row processor, either by implementing the org.apache.commons.dbutils.RowProcessor interface or by extending the org.apache.commons.dbutils.BasicRowProcessor class. Sample code extending BasicRowProcessor is given below:
class PersonResultSetHandler extends BasicRowProcessor {
    @Override
    public <T> List<T> toBeanList(ResultSet rs, Class<? extends T> type) throws SQLException {
        // Handle the ResultSet and return a List of Person
        List<Person> personList = .....
        return (List<T>) personList;
    }
}
Pass the custom row processor to the appropriate org.apache.commons.dbutils.ResultSetHandler implementation. A BeanListHandler has been used in the below code:
QueryRunner qr = new QueryRunner();
List<Person> personList = qr.query(conn, sqlQuery, new BeanListHandler<Person>(Person.class, new PersonResultSetHandler()));
However, https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-jdbc is another alternative with a cleaner API, although I am not sure about its thread-safety aspects.
Thank you @TEH-EMPRAH. His solution "By FieldName" in its final form:
/**
 * Helps convert SQL request data to your custom DTO Java class object.
 * Requirements: fields of your Java class should have type String and have the same names as in the SQL table.
 *
 * @param resultSet - SQL request result
 * @param clazz - your DTO class for mapping
 * @return List<T> - list of converted DTO Java class objects
 */
public static <T> List<T> convertSQLResultSetToObject(ResultSet resultSet, Class<T> clazz) throws SQLException, NoSuchMethodException, InvocationTargetException, InstantiationException, IllegalAccessException {
    List<Field> fields = Arrays.asList(clazz.getDeclaredFields());
    for (Field field : fields) {
        field.setAccessible(true);
    }
    List<T> list = new ArrayList<>();
    while (resultSet.next()) {
        T dto = clazz.getConstructor().newInstance();
        for (Field field : fields) {
            String name = field.getName();
            try {
                String value = resultSet.getString(name);
                field.set(dto, field.getType().getConstructor(String.class).newInstance(value));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        list.add(dto);
    }
    return list;
}
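A usage sketch for the method above (the query, connection, and UserDto class are assumptions; UserDto's fields are Strings named after the columns):
// Hypothetical usage; the enclosing method is assumed to declare the thrown exceptions.
try (Statement st = connection.createStatement();
     ResultSet rs = st.executeQuery("SELECT * FROM users")) {
    List<UserDto> users = convertSQLResultSetToObject(rs, UserDto.class);
}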
Using DbUtils...
The only problem I had with that lib is that sometimes you have relationships in your bean classes; DbUtils does not map those. It only maps the properties in the bean's class, so if you have other complex properties (referring to other beans due to DB relationships) you'd have to create "indirect setters", as I call them, which are setters that put values into those complex properties' properties.
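A sketch of such an "indirect setter", assuming a User bean that references an Address bean; the flat column value is forwarded into the nested object:
public class User {
    private Address address = new Address();

    // "indirect setter": lets the row processor populate a flat column
    // (e.g. one mapped to a property named addressCity) even though the
    // value really lives on the nested Address bean
    public void setAddressCity(String city) {
        this.address.setCity(city);
    }

    public Address getAddress() {
        return address;
    }
}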
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.json.simple.JSONObject;
import com.google.gson.Gson;

public class ObjectMapper {

    // generic method to convert a JDBC resultSet into the respective DTO class
    @SuppressWarnings("unchecked")
    public static Object mapValue(List<Map<String, Object>> rows, Class<?> className) throws Exception {
        List<Object> response = new ArrayList<>();
        Gson gson = new Gson();
        for (Map<String, Object> row : rows) {
            org.json.simple.JSONObject jsonObject = new JSONObject();
            jsonObject.putAll(row);
            String json = jsonObject.toJSONString();
            Object actualObject = gson.fromJson(json, className);
            response.add(actualObject);
        }
        return response;
    }

    public static void main(String[] args) throws Exception {
        List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
        // hardcoded data for testing
        Map<String, Object> row1 = new HashMap<String, Object>();
        row1.put("name", "Raja");
        row1.put("age", 22);
        row1.put("location", "India");
        Map<String, Object> row2 = new HashMap<String, Object>();
        row2.put("name", "Rani");
        row2.put("age", 20);
        row2.put("location", "India");
        rows.add(row1);
        rows.add(row2);
        @SuppressWarnings("unchecked")
        List<Dto> res = (List<Dto>) mapValue(rows, Dto.class);
    }
}

public class Dto {
    private String name;
    private Integer age;
    private String location;
    // getters and setters
}
Try the above code. It can be used as a generic method to map a JDBC result to the respective DTO class.
Use the Statement fetch size if you are retrieving a large number of records, like this:
Statement statement = connection.createStatement();
statement.setFetchSize(1000);
Apart from that, I don't see an issue with the way you are doing it in terms of performance.
In terms of neatness, always delegate to a separate method that maps the ResultSet to a POJO, which can then be reused within the same class, like:
private User mapResultSet(ResultSet rs) {
    User user = new User();
    // map results
    return user;
}
If you have the same name for both the column name and the object's field name, you could also write a reflection utility to load the records back into the POJO, using ResultSetMetaData to read the column names. For small-scale projects using reflection is not a problem, but as I said before, there is nothing wrong with the way you are doing it.
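A sketch of that reflection-plus-metadata utility (column labels are assumed to match field names, as described above; non-matching columns are skipped):
// Sketch: use ResultSetMetaData to discover column names, then set the
// matching fields reflectively on a new instance of the given type.
static <T> T mapRow(ResultSet rs, Class<T> type) throws Exception {
    T obj = type.getConstructor().newInstance();
    ResultSetMetaData md = rs.getMetaData();
    for (int i = 1; i <= md.getColumnCount(); i++) {
        String column = md.getColumnLabel(i);
        try {
            Field field = type.getDeclaredField(column);
            field.setAccessible(true);
            field.set(obj, rs.getObject(i));
        } catch (NoSuchFieldException ignored) {
            // no matching field for this column - skip it
        }
    }
    return obj;
}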
