I just want to be sure, when inserting a new DBObject into the DB, that it is really unique and that the collection doesn't contain duplicates of the key field.
Here is how it looks now:
public abstract class AbstractMongoDAO<ID, MODEL> implements GenericDAO<ID, MODEL> {
protected Mongo client;
protected Class<MODEL> model;
protected DBCollection dbCollection;
/**
* Contains model data: the unique key name and the name of its get method
*/
protected KeyField keyField;
@SuppressWarnings("unchecked")
protected AbstractMongoDAO() {
ParameterizedType genericSuperclass = (ParameterizedType) this.getClass().getGenericSuperclass();
model = (Class<MODEL>) genericSuperclass.getActualTypeArguments()[1];
getKeyField();
}
public void connect() throws UnknownHostException {
client = new MongoClient(Config.getMongoHost(), Integer.parseInt(Config.getMongoPort()));
DB clientDB = client.getDB(Config.getMongoDb());
clientDB.authenticate(Config.getMongoDbUser(), Config.getMongoDbPass().toCharArray());
dbCollection = clientDB.getCollection(getCollectionName(model));
}
public void disconnect() {
if (client != null) {
client.close();
}
}
@Override
public void create(MODEL model) {
Object keyValue = get(model);
try {
ObjectMapper mapper = new ObjectMapper();
String requestAsString = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(model);
// check that the key is not already present
BasicDBObject dbObject = new BasicDBObject((String) keyValue, requestAsString);
dbCollection.ensureIndex(dbObject, new BasicDBObject("unique", true));
dbCollection.insert(new BasicDBObject((String) keyValue, requestAsString));
} catch (Throwable e) {
throw new RuntimeException(String.format("Duplicate parameters '%s' : '%s'", keyField.id(), keyValue));
}
}
private Object get(MODEL model) {
Object result = null;
try {
Method m = this.model.getMethod(this.keyField.get());
result = m.invoke(model);
} catch (Exception e) {
throw new RuntimeException(String.format("Couldn't find method by name '%s' at class '%s'", this.keyField.get(), this.model.getName()));
}
return result;
}
/**
* Extract the name of the collection that is specified in the '@Entity' annotation.
*
* @param clazz is the model class object.
* @return the name of the collection that is specified.
*/
private String getCollectionName(Class<MODEL> clazz) {
Entity entity = clazz.getAnnotation(Entity.class);
String tableName = entity.value();
if (tableName.equals(Mapper.IGNORED_FIELDNAME)) {
// think about usual logger
tableName = clazz.getName();
}
return tableName;
}
private void getKeyField() {
for (Field field : this.model.getDeclaredFields()) {
if (field.isAnnotationPresent(KeyField.class)) {
keyField = field.getAnnotation(KeyField.class);
break;
}
}
if (keyField == null) {
throw new RuntimeException(String.format("Couldn't find key field at class : '%s'", model.getName()));
}
}
}
KeyField is a custom annotation:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface KeyField {
String id();
String get();
String statusProp() default "ALL";
}
But I'm not sure that this solution really guarantees it. I'm new to Mongo.
Any suggestions?
Uniqueness can be maintained in MongoDB using the _id field. If we do not provide a value for this field, MongoDB automatically creates a unique id for each document in the collection.
So, in your case, just create a property called _id in Java and assign your unique field value to it. If it is duplicated, the insert will throw an exception.
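For example, a minimal sketch against the legacy DBCollection API from the question (the "payload" field name is illustrative): put the unique key value into _id and let the server reject duplicates.
BasicDBObject doc = new BasicDBObject("_id", keyValue); // the unique key value becomes the primary key
doc.put("payload", requestAsString);
try {
    dbCollection.insert(doc);
} catch (DuplicateKeyException e) {
    // a document with this _id already exists (older 2.x drivers throw MongoException.DuplicateKey)
}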
With Spring Data MongoDB (the question was tagged with spring-data, that's why I suggest it), all you need is that:
// Your types
class YourType {
BigInteger id;
#Indexed(unique = true) String emailAddress;
…
}
interface YourTypeRepository extends CrudRepository<YourType, BigInteger> { }
// Infrastructure setup, if you use Spring as container prefer @EnableMongoRepositories
MongoOperations operations = new MongoTemplate(new MongoClient(), "myDatabase");
MongoRepositoryFactory factory = new MongoRepositoryFactory(operations);
YourTypeRepository repository = factory.getRepository(YourTypeRepository.class);
// Now use it…
YourType first = …; // set email address
YourType second = …; // set same email address
repository.save(first);
repository.save(second); // will throw an exception
The crucial part that's most related to your original question is @Indexed, as this causes the required unique index to be created when you create the repository.
What you get beyond that is:
no need to manually implement any repository (deleted code does not contain bugs \o/)
automatic object-to-document conversion
automatic index creation
powerful repository abstraction to easily query data by declaring query methods
For more details, check out the reference documentation.
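If you need to react to the violation, Spring translates the underlying Mongo error into its DataAccessException hierarchy; a small sketch:
try {
    repository.save(second);
} catch (org.springframework.dao.DuplicateKeyException e) {
    // the unique index on emailAddress was violated
}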
I have a DTO, CryptoNews, which contains
List<Currencies> currencies
I would like to save the "currencies" field to the SourceRecord when constructing it. I can't figure out how to:
Declare it in the schema.
Pass it to the Struct object when building the value.
My attempts end in this exception:
Invalid Java object for schema type STRUCT: class com.dto.Currencies
Kafka Connect doesn't provide an explicit example of how to handle the case where an object in a List requires its own Schema.
I also tried to apply a similar approach to the one in the Kafka test cases, but it doesn't work: https://github.com/apache/kafka/blob/trunk/connect/api/src/test/java/org/apache/kafka/connect/data/StructTest.java#L95-L98
How to do this?
kafka-connect-api version: 0.10.2.0-cp1
value and key converter: org.apache.kafka.connect.json.JsonConverter
no Avro used
public class CryptoNews implements Serializable {
// omitted fields
private List<Currencies> currencies;
}
class Currencies {
private String code;
private String title;
private String slug;
private String url;
}
SchemaConfiguration
public static final Integer FIRST_VERSION = 1;
public static final String CURRENCIES_SCHEMA_NAME = "currencies";
public static final Schema CURRENCIES_SCHEMA = SchemaBuilder.array(
        SchemaBuilder.struct()
                .field(CODE_FIELD, Schema.OPTIONAL_STRING_SCHEMA)
                .field(TITLE_FIELD, Schema.OPTIONAL_STRING_SCHEMA)
                .field(SLUG_FIELD, Schema.OPTIONAL_STRING_SCHEMA)
                .field(URL_FIELD, Schema.OPTIONAL_STRING_SCHEMA)
                .optional()
                .build())
        .optional()
        .name(CURRENCIES_SCHEMA_NAME)
        .version(FIRST_VERSION)
        .build();
// declared after CURRENCIES_SCHEMA so the field reference below is a legal (non-forward) reference
public static final Schema NEWS_SCHEMA = SchemaBuilder.struct().name("News")
        .version(FIRST_VERSION)
        .field(CURRENCIES_SCHEMA_NAME, CURRENCIES_SCHEMA)
        // simple fields omitted for brevity
        .build();
SourceTask
return new SourceRecord(
sourcePartition(),
sourceOffset(cryptoNews),
config.getString(TOPIC_CONFIG),
null,
CryptoNewsSchema.NEWS_KEY_SCHEMA,
buildRecordKey(cryptoNews),
CryptoNewsSchema.NEWS_SCHEMA,
buildRecordValue(cryptoNews),
Instant.now().toEpochMilli()
);
public Struct buildRecordValue(CryptoNews cryptoNews){
Struct valueStruct = new Struct(CryptoNewsSchema.NEWS_SCHEMA);
// Produces Invalid Java object for schema type STRUCT: class com.dto.Currencies
List<Currencies> currencies = cryptoNews.getCurrencies();
if (currencies != null) {
valueStruct.put(CurrenciesSchema.CURRENCIES_SCHEMA_NAME, currencies);
}
return valueStruct;
}
UPDATE:
worker.properties
bootstrap.servers=localhost:29092
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=true
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter.schemas.enable=true
rest.port=8086
rest.host.name=127.0.0.1
offset.storage.file.filename=offsets/standalone.offsets
offset.flush.interval.ms=10000
You need to provide a List<Struct>.
Here's a full unit test example.
First, an interface that will help:
public interface ConnectPOJOConverter<T> {
Schema getSchema();
T fromConnectData(Struct s);
Struct toConnectData(T t);
}
class ArrayStructTest {
public static final Schema CURRENCY_ITEM_SCHEMA = SchemaBuilder.struct()
.version(1)
.name(Currency.class.getName())
.doc("A currency item")
.field("code", Schema.OPTIONAL_STRING_SCHEMA)
.field("title", Schema.OPTIONAL_STRING_SCHEMA)
.field("slug", Schema.OPTIONAL_STRING_SCHEMA)
.field("url", Schema.OPTIONAL_STRING_SCHEMA)
.build();
static final ConnectPOJOConverter<Currency> CONVERTER = new CurrencyConverter();
@Test
void myTest() {
// Given
List<Currency> currencies = new ArrayList<>();
// TODO: Get from external source
currencies.add(new Currency("200", "Hello", "/slug", "http://localhost"));
currencies.add(new Currency("200", "World", "/slug", "http://localhost"));
// When: build Connect Struct data
Schema valueSchema = SchemaBuilder.struct()
.name("CryptoNews")
.doc("A record holding a list of currency items")
.version(1)
.field("currencies", SchemaBuilder.array(CURRENCY_ITEM_SCHEMA).required().build())
.build();
final List<Struct> items = currencies.stream()
.map(CONVERTER::toConnectData)
.collect(Collectors.toList());
// In the SourceTask, this is what goes into the SourceRecord along with the valueSchema
Struct value = new Struct(valueSchema);
value.put("currencies", items);
// Then
assertDoesNotThrow(value::validate);
Object itemsFromStruct = value.get("currencies");
assertInstanceOf(List.class, itemsFromStruct);
//noinspection unchecked
List<Object> data = (List<Object>) itemsFromStruct; // could also use List<Struct>
assertEquals(2, data.size(), "same size");
assertInstanceOf(Struct.class, data.get(0), "Object list still has type information");
Struct firstStruct = (Struct) data.get(0);
assertEquals("Hello", firstStruct.get("title"));
currencies = data.stream()
.map(o -> (Struct) o)
.map(CONVERTER::fromConnectData)
.filter(Objects::nonNull) // in case converter has errors, could return null
.collect(Collectors.toList());
assertTrue(currencies.size() <= data.size());
assertEquals("World", currencies.get(1).getTitle(), "struct parsing data worked");
}
static class CurrencyConverter implements ConnectPOJOConverter<Currency> {
@Override
public Schema getSchema() {
return CURRENCY_ITEM_SCHEMA;
}
@Override
public Currency fromConnectData(Struct s) {
// simple conversion, but more complex types could throw errors
return new Currency(
s.getString("code"),
s.getString("title"),
s.getString("slug"), // note: slug before url, matching the constructor order used in the test above
s.getString("url")
);
}
@Override
public Struct toConnectData(Currency c) {
Struct s = new Struct(getSchema());
s.put("code", c.getCode());
s.put("title", c.getTitle());
s.put("url", c.getUrl());
s.put("slug", c.getSlug());
return s;
}
}
}
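Applied back to your buildRecordValue, the essence of the fix is to convert each Currencies POJO into a Struct before putting the list. This is only a sketch: it assumes the array's element schema is extracted into a constant such as CurrenciesSchema.CURRENCY_ITEM_SCHEMA, and that Currencies has the usual getters.
public Struct buildRecordValue(CryptoNews cryptoNews) {
    Struct valueStruct = new Struct(CryptoNewsSchema.NEWS_SCHEMA);
    List<Currencies> currencies = cryptoNews.getCurrencies();
    if (currencies != null) {
        // each element must be a Struct built against the array's element schema
        List<Struct> structs = new ArrayList<>();
        for (Currencies c : currencies) {
            Struct s = new Struct(CurrenciesSchema.CURRENCY_ITEM_SCHEMA); // assumed constant
            s.put("code", c.getCode());
            s.put("title", c.getTitle());
            s.put("slug", c.getSlug());
            s.put("url", c.getUrl());
            structs.add(s);
        }
        valueStruct.put(CurrenciesSchema.CURRENCIES_SCHEMA_NAME, structs);
    }
    return valueStruct;
}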
The alternative approach is to just use a String schema and the Jackson ObjectMapper to get a JSON string, then let JsonConverter handle the rest.
final ObjectMapper om = new ObjectMapper();
final Schema valueSchema = Schema.STRING_SCHEMA;
// for-each currency
Map<String, JsonNode> output = new HashMap<>();
output.put("schema", new TextNode("TODO")); // replace with the JsonConverter schema
try {
output.put("payload", om.readTree(om.writeValueAsBytes(currency))); // write and parse to not double-encode
String value = om.writeValueAsString(output);
SourceRecord r = new SourceRecord(...., valueSchema, value);
records.add(r); // poll return result
} catch (IOException e) {
// TODO: handle
}
// end for-each
return records;
I made a class like this:
@BsonDiscriminator
public class User {
@BsonId
private Integer _id;
// some properties
// getter & setter
}
and registered it with the codec registry:
ClassModel<User> userModel = ClassModel.builder(User.class).enableDiscriminator(true).build();
PojoCodecProvider pojoCodecProvider = PojoCodecProvider.builder().register(userModel).build();
pojoCodecRegistry = fromRegistries(MongoClient.getDefaultCodecRegistry(), fromProviders(pojoCodecProvider));
mongoClient = new MongoClient("localhost", MongoClientOptions.builder().codecRegistry(pojoCodecRegistry).build());
mongoDatabase = mongoClient.getDatabase("bbs").withCodecRegistry(pojoCodecRegistry);
When I try to insert:
public int addOne(User user) {
try {
user.set_id(Db.getNextId("user"));
userCollection.insertOne(user);
} catch (Exception e) {
e.printStackTrace();
}
return user.get_id();
}
But when I find it in Mongo, its _id field type is ObjectId, not Int32. I declared _id as Integer, so why?
The default type of _id in MongoDB is ObjectId. It can be changed when you create the record; it is not decided by the class definition in Java.
Actually, MongoDB will implicitly generate an ObjectId for _id when your code runs if no value for it reaches the server.
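As a quick check with the plain Document API (a sketch; the collection name and values are illustrative): an explicitly supplied Integer _id is stored as Int32, while omitting _id makes the driver generate an ObjectId.
MongoCollection<Document> col = mongoDatabase.getCollection("user");
col.insertOne(new Document("_id", 42).append("name", "alice")); // _id stored as Int32
col.insertOne(new Document("name", "bob")); // no _id given: the driver generates an ObjectId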
I am receiving a simple list of values as part of a JSON request, which I want to save as comma-separated values. I tried the following, but it did not work.
@Column(nullable = true)
@GeneratedValue(strategy = GenerationType.AUTO)
private ArrayList<String> services = new ArrayList<String>();
and
@Column(nullable = true)
@ElementCollection(targetClass = String.class)
private List<String> services = new ArrayList<String>();
@ElementCollection threw an exception saying table "services" does not exist.
@ElementCollection requires a table to store multiple rows of values.
So you could define the field as a String column and join/explode it in the getters and setters, like this:
private String services;

public void setServices(List<String> services) {
    // join the list into a single comma-separated string
    this.services = services == null ? null : String.join(",", services);
}

public List<String> getServices() {
    // split the stored string back into a list
    return services == null ? Collections.emptyList() : Arrays.asList(services.split(","));
}
As mentioned by others in the comments, an AttributeConverter works pretty well. This one uses Jackson to serialize as a JSON array. I recommend JSON since it cleanly handles delimiter escaping, nulls, quotes, etc.:
@Converter
public class StringListAttributeConverter implements AttributeConverter<List<String>, String> {
private static final TypeReference<List<String>> TypeRef = new TypeReference<List<String>>(){};
@Override
public String convertToDatabaseColumn (List<String> attribute) {
if (attribute == null) {
return null;
}
try {
return ObjectMapperFactory.getInstance().writeValueAsString(attribute);
}
catch (IOException ex) {
throw new UncheckedIOException(ex);
}
}
@Override
public List<String> convertToEntityAttribute (String dbData) {
if (dbData == null) {
return null;
}
try {
return ObjectMapperFactory.getInstance().readValue(dbData, TypeRef);
}
catch (IOException ex) {
throw new UncheckedIOException(ex);
}
}
}
I've used this class and it works well in most cases. One caveat I've found is that using this converter can confuse some JPA criteria queries, because it expects a type List on the entity, but finds a String in the db.
A simpler variant did the trick for me: no Jackson, just trimmed strings:
public class CsvTrimmedStringsConverter implements AttributeConverter<List<String>, String> {
@Override
public String convertToDatabaseColumn(List<String> attribute) {
return attribute == null
? null
: attribute.stream().map(String::trim).collect(Collectors.joining(","));
}
@Override
public List<String> convertToEntityAttribute(String dbData) {
return dbData == null
? null
: Arrays.stream(dbData.split(",")).map(String::trim).collect(Collectors.toList());
}
}
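To apply either converter, annotate the entity field with JPA's @Convert; a minimal sketch (the column name is illustrative):
@Convert(converter = CsvTrimmedStringsConverter.class)
@Column(name = "services") // a single varchar/text column holding the CSV
private List<String> services = new ArrayList<>();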
I have a bean class
public class Group { String name; Type type; }
and another bean
public class Type { String name; }
Now, I want to bind Group using JDBI's @BindBean:
@SqlBatch("INSERT INTO (type_id,name) VALUES((SELECT id FROM type WHERE name=:m.type.name),:m.name)")
@BatchChunkSize(100)
int[] insertRewardGroup(@BindBean("m") Set<Group> groups);
How can I bind a user-defined object's property as a member of the bean?
You could implement your own Bind annotation here. I implemented one that I am adapting for this answer. It will unwrap all Type instances.
I think it could be made fully generic with a little more work.
Your code would look like this (please note that m.type.name changed to m.type):
@SqlBatch("INSERT ... WHERE name=:m.type),:m.name)")
@BatchChunkSize(100)
int[] insertRewardGroup(@BindTypeBean("m") Set<Group> groups);
This would be the annotation:
@BindingAnnotation(BindTypeBean.SomethingBinderFactory.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.PARAMETER})
public @interface BindTypeBean {
String value() default "___jdbi_bare___";
public static class SomethingBinderFactory implements BinderFactory {
public Binder build(Annotation annotation) {
return new Binder<BindTypeBean, Object>() {
public void bind(SQLStatement q, BindTypeBean bind, Object arg) {
final String prefix;
if ("___jdbi_bare___".equals(bind.value())) {
prefix = "";
} else {
prefix = bind.value() + ".";
}
try {
BeanInfo infos = Introspector.getBeanInfo(arg.getClass());
PropertyDescriptor[] props = infos.getPropertyDescriptors();
for (PropertyDescriptor prop : props) {
Method readMethod = prop.getReadMethod();
if (readMethod != null) {
Object r = readMethod.invoke(arg);
Class<?> c = readMethod.getReturnType();
if (prop.getName().equals("type") && r instanceof Type) {
r = ((Type) r).getName(); // unwrap the nested bean to its name property
c = r.getClass();
}
q.dynamicBind(c, prefix + prop.getName(), r);
}
}
} catch (Exception e) {
throw new IllegalStateException("unable to bind bean properties", e);
}
}
};
}
}
}
Doing this in JDBI is not possible; you have to bring the property out and pass it as its own argument.
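A sketch of that workaround (hedged: it relies on @SqlBatch iterating multiple Iterable arguments in lockstep, so the extracted type names must be aligned element-by-element with the groups; the table name is illustrative):
@SqlBatch("INSERT INTO reward_group (type_id, name) VALUES ((SELECT id FROM type WHERE name = :typeName), :m.name)")
@BatchChunkSize(100)
int[] insertRewardGroup(@BindBean("m") List<Group> groups, @Bind("typeName") List<String> typeNames);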
I have a User class that has 16 attributes, things such as firstname, lastname, dob, username, password, etc. These are all stored in a MySQL database, and when I want to retrieve users I use a ResultSet. I want to map each of the columns back to the user attributes, but the way I am doing it seems terribly inefficient.
For example I am doing:
//ResultSet rs;
while(rs.next()) {
String uid = rs.getString("UserId");
String fname = rs.getString("FirstName");
...
...
...
User u = new User(uid,fname,...);
//ArrayList<User> users
users.add(u);
}
That is, I retrieve all the columns and then create User objects by passing all the column values into the User constructor.
Does anyone know of a faster, neater way of doing this?
If you don't want to use any JPA provider such as OpenJPA or Hibernate, you can just give Apache DbUtils a try.
http://commons.apache.org/proper/commons-dbutils/examples.html
Then your code will look like this:
QueryRunner run = new QueryRunner(dataSource);
// Use the BeanListHandler implementation to convert all
// ResultSet rows into a List of Person JavaBeans.
ResultSetHandler<List<Person>> h = new BeanListHandler<Person>(Person.class);
// Execute the SQL statement and return the results in a List of
// Person objects generated by the BeanListHandler.
List<Person> persons = run.query("SELECT * FROM Person", h);
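BeanListHandler matches columns to bean properties by name, so it assumes a Person bean along these lines (a hypothetical sketch):
public class Person {
    private String firstName;
    // DbUtils needs a public no-arg constructor plus getters/setters named after the columns
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
}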
There is no need to store the ResultSet values in Strings and then set them on the POJO; instead, set them as you retrieve them.
Or, best of all, switch to an ORM tool like Hibernate instead of JDBC, which maps your POJO objects directly to the database.
But for now, use this:
List<User> users=new ArrayList<User>();
while(rs.next()) {
User user = new User();
user.setUserId(rs.getString("UserId"));
user.setFName(rs.getString("FirstName"));
...
...
...
users.add(user);
}
Let's assume you want to use core Java, without any frameworks. If you can guarantee that the field names of an entity are equal to the columns in the database, you can use the Reflection API (otherwise, create an annotation and define the mapping name there).
By FieldName
/**
Class<T> clazz - the type of the objects you want fetched
ResultSet resultSet - pointer to your retrieved results
*/
List<Field> fields = Arrays.asList(clazz.getDeclaredFields());
for(Field field: fields) {
field.setAccessible(true);
}
List<T> list = new ArrayList<>();
while(resultSet.next()) {
T dto = clazz.getConstructor().newInstance();
for(Field field: fields) {
String name = field.getName();
try{
String value = resultSet.getString(name);
field.set(dto, field.getType().getConstructor(String.class).newInstance(value));
} catch (Exception e) {
e.printStackTrace();
}
}
list.add(dto);
}
By annotation
@Retention(RetentionPolicy.RUNTIME)
public @interface Col {
String name();
}
DTO:
class SomeClass {
@Col(name = "column_in_db_name")
private String columnInDbName;
public SomeClass() {}
// ..
}
Same, but
while(resultSet.next()) {
T dto = clazz.getConstructor().newInstance();
for(Field field: fields) {
Col col = field.getAnnotation(Col.class);
if(col!=null) {
String name = col.name();
try{
String value = resultSet.getString(name);
field.set(dto, field.getType().getConstructor(String.class).newInstance(value));
} catch (Exception e) {
e.printStackTrace();
}
}
}
list.add(dto);
}
Thoughts
In fact, iterating over all Fields might seem inefficient, so I would store the mapping somewhere rather than iterate each time. However, if our T is a DTO whose only purpose is transferring data and which won't contain loads of unnecessary fields, that's OK. In the end it's much better than using boilerplate methods all the way.
Hope this helps someone.
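A sketch of that caching idea (the names are illustrative): build the column-to-field map once per class and reuse it across queries.
private static final Map<Class<?>, Map<String, Field>> FIELD_CACHE = new ConcurrentHashMap<>();

static Map<String, Field> fieldsFor(Class<?> clazz) {
    // computed once per class, then served from the cache on subsequent calls
    return FIELD_CACHE.computeIfAbsent(clazz, c -> {
        Map<String, Field> byName = new HashMap<>();
        for (Field f : c.getDeclaredFields()) {
            f.setAccessible(true);
            byName.put(f.getName(), f);
        }
        return byName;
    });
}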
Complete solution using @TEH-EMPRAH's ideas and generic casting from "Cast Object to Generic Type" for the return value:
import annotations.Column;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.sql.SQLException;
import java.util.*;
public class ObjectMapper<T> {
private Class<T> clazz;
private Map<String, Field> fields = new HashMap<>();
Map<String, String> errors = new HashMap<>();
public ObjectMapper(Class<T> clazz) {
this.clazz = clazz;
List<Field> fieldList = Arrays.asList(clazz.getDeclaredFields());
for (Field field : fieldList) {
Column col = field.getAnnotation(Column.class);
if (col != null) {
field.setAccessible(true);
fields.put(col.name(), field);
}
}
}
public T map(Map<String, Object> row) throws SQLException {
try {
T dto = (T) clazz.getConstructor().newInstance();
for (Map.Entry<String, Object> entity : row.entrySet()) {
if (entity.getValue() == null) {
continue; // Don't set DBNULL
}
String column = entity.getKey();
Field field = fields.get(column);
if (field != null) {
field.set(dto, convertInstanceOfObject(entity.getValue()));
}
}
return dto;
} catch (IllegalAccessException | InstantiationException | NoSuchMethodException | InvocationTargetException e) {
e.printStackTrace();
throw new SQLException("Problem with data Mapping. See logs.");
}
}
public List<T> map(List<Map<String, Object>> rows) throws SQLException {
List<T> list = new LinkedList<>();
for (Map<String, Object> row : rows) {
list.add(map(row));
}
return list;
}
private T convertInstanceOfObject(Object o) {
try {
return (T) o;
} catch (ClassCastException e) {
return null;
}
}
}
and then in terms of how it ties in with the database, I have the following:
// connect to database (autocloses)
try (DataConnection conn = ds1.getConnection()) {
// fetch rows
List<Map<String, Object>> rows = conn.nativeSelect("SELECT * FROM products");
// map rows to class
ObjectMapper<Product> objectMapper = new ObjectMapper<>(Product.class);
List<Product> products = objectMapper.map(rows);
// display the rows
System.out.println(rows);
// display it as products
for (Product prod : products) {
System.out.println(prod);
}
} catch (Exception e) {
e.printStackTrace();
}
I would like to point you to q2o. It is a JPA-based Java object mapper which helps with many of the tedious SQL and JDBC ResultSet related tasks, but without all the complexity an ORM framework comes with. With its help, mapping a ResultSet to an object is as easy as this:
while(rs.next()) {
users.add(Q2Obj.fromResultSet(rs, User.class));
}
More about q2o can be found here.
There are answers recommending https://commons.apache.org/proper/commons-dbutils/. Note that the default row processor implementation, org.apache.commons.dbutils.BasicRowProcessor, is not thread-safe in db-utils 1.7. So, if you are using org.apache.commons.dbutils.QueryRunner::query in a multi-threaded environment, you should write a custom row processor. This can be done either by implementing the org.apache.commons.dbutils.RowProcessor interface or by extending the org.apache.commons.dbutils.BasicRowProcessor class. Sample code extending BasicRowProcessor is given below:
class PersonResultSetHandler extends BasicRowProcessor {
@Override
public <T> List<T> toBeanList(ResultSet rs, Class<? extends T> type)
throws SQLException
{
//Handle the ResultSet and return a List of Person
List<Person> personList = .....
return (List<T>) personList;
}
}
Pass the custom row processor to the appropriate org.apache.commons.dbutils.ResultSetHandler implementation. A BeanListHandler has been used in the below code:
QueryRunner qr = new QueryRunner();
List<Person> personList = qr.query(conn, sqlQuery, new BeanListHandler<Person>(Person.class, new PersonResultSetHandler()));
However, https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-jdbc is another alternative with a cleaner API, although I am not sure about its thread-safety aspects.
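With the Spring alternative, the equivalent is a one-liner (a sketch assuming spring-jdbc's JdbcTemplate and a Person bean whose property names match the column names):
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
List<Person> persons = jdbcTemplate.query(
        "SELECT * FROM person",
        new BeanPropertyRowMapper<>(Person.class)); // maps columns to bean properties by name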
Thank you @TEH-EMPRAH. His solution "By FieldName" in its final form:
/**
* Helper method to convert an SQL ResultSet into objects of your custom DTO Java class.
* Requirements: fields of your Java class should have type String and the same names as in the SQL table
*
* @param resultSet - SQL request result
* @param clazz - your DTO class for mapping
* @return List<T> - list of converted DTO Java class objects
*/
public static <T> List <T> convertSQLResultSetToObject(ResultSet resultSet, Class<T> clazz) throws SQLException, NoSuchMethodException, InvocationTargetException, InstantiationException, IllegalAccessException {
List<Field> fields = Arrays.asList(clazz.getDeclaredFields());
for(Field field: fields) {
field.setAccessible(true);
}
List<T> list = new ArrayList<>();
while(resultSet.next()) {
T dto = clazz.getConstructor().newInstance();
for(Field field: fields) {
String name = field.getName();
try{
String value = resultSet.getString(name);
field.set(dto, field.getType().getConstructor(String.class).newInstance(value));
} catch (Exception e) {
e.printStackTrace();
}
}
list.add(dto);
}
return list;
}
Using DbUtils...
The only problem I had with that lib is that sometimes you have relationships in your bean classes; DbUtils does not map those. It only maps the properties in the bean's class, so if you have other complex properties (referencing other beans due to a DB relationship), you'd have to create "indirect setters", as I call them, which are setters that put values into those complex properties' properties.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.json.simple.JSONObject;
import com.google.gson.Gson;
public class ObjectMapper {
//generic method to convert JDBC resultSet into respective DTo class
@SuppressWarnings("unchecked")
public static Object mapValue(List<Map<String, Object>> rows,Class<?> className) throws Exception
{
List<Object> response=new ArrayList<>();
Gson gson=new Gson();
for(Map<String, Object> row:rows){
org.json.simple.JSONObject jsonObject = new JSONObject();
jsonObject.putAll(row);
String json=jsonObject.toJSONString();
Object actualObject=gson.fromJson(json, className);
response.add(actualObject);
}
return response;
}
public static void main(String args[]) throws Exception{
List<Map<String, Object>> rows=new ArrayList<Map<String, Object>>();
//Hardcoded data for testing
Map<String, Object> row1=new HashMap<String, Object>();
row1.put("name", "Raja");
row1.put("age", 22);
row1.put("location", "India");
Map<String, Object> row2=new HashMap<String, Object>();
row2.put("name", "Rani");
row2.put("age", 20);
row2.put("location", "India");
rows.add(row1);
rows.add(row2);
@SuppressWarnings("unchecked")
List<Dto> res=(List<Dto>) mapValue(rows, Dto.class);
}
}
public class Dto {
private String name;
private Integer age;
private String location;
//getters and setters
}
Try the above code. It can be used as a generic method to map a JDBC result to the respective DTO class.
Use the Statement fetch size if you are retrieving a large number of records, like this:
Statement statement = connection.createStatement();
statement.setFetchSize(1000);
Apart from that, I don't see an issue with the way you are doing it in terms of performance.
In terms of neatness, always delegate to a separate method that maps the ResultSet to the POJO, which can then be reused within the same class, like:
private User mapResultSet(ResultSet rs) throws SQLException {
    User user = new User();
    user.setUserId(rs.getString("UserId"));
    user.setFirstName(rs.getString("FirstName"));
    // ... map the remaining columns
    return user;
}
If the column names match the object's field names, you could also write a reflection utility that loads the records back into the POJO, using ResultSetMetaData to read the column names. For small-scale projects using reflection is not a problem, but as I said before, there is nothing wrong with the way you are doing it.
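A sketch of that reflection-plus-metadata idea (it assumes each column name matches a declared field name exactly and that the column values are assignable to the field types):
private User mapViaReflection(ResultSet rs) throws Exception {
    User user = new User();
    ResultSetMetaData md = rs.getMetaData();
    for (int i = 1; i <= md.getColumnCount(); i++) {
        // look up the field with the same name as the column
        Field field = User.class.getDeclaredField(md.getColumnName(i));
        field.setAccessible(true);
        field.set(user, rs.getObject(i));
    }
    return user;
}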