Inheritance and DTO object design - java

I am developing a standalone J2SE application (I can't add J2EE, Hibernate, Spring, etc.). I need ideas on designing the code architecture.
This is my code:
class PersonBean {
    String name;
    int id;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }
}
class PersonMapper {
    private final Connection conn;

    PersonMapper(Connection conn) {
        this.conn = conn;
    }

    PersonBean load(ResultSet rs) throws SQLException {
        PersonBean entity = new PersonBean();
        entity.setId(rs.getInt("id"));
        entity.setName(rs.getString("name"));
        return entity;
    }

    PersonBean findById(int id) throws SQLException {
        String query = "SELECT * FROM Person WHERE id = ?";
        PreparedStatement stmt = conn.prepareStatement(query);
        stmt.setInt(1, id);
        ResultSet rs = stmt.executeQuery();
        if (rs.next()) {
            return load(rs);
        } else {
            return null;
        }
    }

    List<PersonBean> findByName(String name) {}
}
I created an entity class for each MySQL table, with getters and setters, and named it *Bean (e.g. PersonBean.class).
I created a mapper class for each table to retrieve records for that table and populate the entity, and named it *Mapper (e.g. PersonMapper.class).
Is there anything I need to do to improve this design?
In the object world a Student extends Person, but in the database Student and Person are two different tables, which leads to two different classes, StudentBean and PersonBean, where StudentBean does not extend PersonBean. Do I need to create a business layer on top of the entity bean layer? If so, how should I design it?
I don't know where to start reading about this; any links would also be fine.

If you want to be able to have a hierarchy, you should implement a smarter mapper that can tell the difference between loading a Person and loading a Student bean. A more ambitious option would be to implement, on your own, something similar to Hibernate or Spring's JdbcTemplate ... which will take you a while.
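For example, a minimal sketch of such a mapper, assuming a Student table that shares its primary key with Person and adds a school column (all the names and the schema here are illustrative, and the java.sql imports are omitted as in the question):
// Hypothetical: StudentBean extends PersonBean in the object world,
// and StudentMapper joins the two tables to build it.
class StudentBean extends PersonBean {
    private String school;
    public String getSchool() { return school; }
    public void setSchool(String school) { this.school = school; }
}

class StudentMapper extends PersonMapper {
    private final Connection conn;

    StudentMapper(Connection conn) {
        super(conn);
        this.conn = conn;
    }

    StudentBean findStudentById(int id) throws SQLException {
        // Assumed schema: Student.person_id is both PK and FK to Person.id
        String query = "SELECT p.id, p.name, s.school "
                     + "FROM Person p JOIN Student s ON s.person_id = p.id "
                     + "WHERE p.id = ?";
        PreparedStatement stmt = conn.prepareStatement(query);
        stmt.setInt(1, id);
        ResultSet rs = stmt.executeQuery();
        if (!rs.next()) {
            return null;
        }
        StudentBean student = new StudentBean();
        student.setId(rs.getInt("id"));
        student.setName(rs.getString("name"));
        student.setSchool(rs.getString("school"));
        return student;
    }
}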
But before that, I would suggest trying out sql2java, which will generate a structure similar to yours for all the tables that you have. It also lets you customize the generated code, and if the data structure changes, the code can be regenerated.

I agree with the approach Albert describes and would recommend the same. You can use ResultSetMetaData to get all the column/pseudo-column names and map them on the fly, e.g. EMPLOYEE_NAME to the setter method setEmployeeName.
You can use Java reflection to call the setter method, or you can use Apache Commons BeanUtils.
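For illustration, a rough sketch of that idea with plain JDBC metadata and reflection (ReflectiveRowMapper and toSetterName are made-up names, not a library API; Apache Commons BeanUtils could replace the hand-rolled reflection):
import java.lang.reflect.Method;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;

// Derive the setter name from the column name (EMPLOYEE_NAME -> setEmployeeName)
// and invoke it reflectively. Sketch only, not production code.
class ReflectiveRowMapper {

    static <T> T mapRow(ResultSet rs, Class<T> beanClass) throws Exception {
        T bean = beanClass.getDeclaredConstructor().newInstance();
        ResultSetMetaData meta = rs.getMetaData();
        for (int i = 1; i <= meta.getColumnCount(); i++) {
            String setterName = toSetterName(meta.getColumnLabel(i)); // e.g. setEmployeeName
            Object value = rs.getObject(i);
            for (Method m : beanClass.getMethods()) {
                if (m.getName().equals(setterName) && m.getParameterCount() == 1) {
                    m.invoke(bean, value);
                    break;
                }
            }
        }
        return bean;
    }

    // EMPLOYEE_NAME -> setEmployeeName
    static String toSetterName(String column) {
        StringBuilder sb = new StringBuilder("set");
        for (String part : column.toLowerCase().split("_")) {
            if (part.isEmpty()) continue;
            sb.append(Character.toUpperCase(part.charAt(0))).append(part.substring(1));
        }
        return sb.toString();
    }
}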

Either for learning or for real usage, it is worth reviewing the source generated by DAO generators. You can learn many good things and also stay away from design failures.
Review the one here: FireStorm


How to properly map between persistence layer and domain object

Let's say I have a domain Java class representing a person:
class Person {
    private final String id; // government id
    private String name;
    private String status;

    private Person(String id, String name) {
        this.id = id;
        this.name = name;
        this.status = "NEW";
    }

    static Person createNew(String id, String name) {
        return new Person(id, name);
    }

    void validate() {
        // logic
        this.status = "VALID";
    }

    public static final class Builder {
        private String id;
        private String name;
        private String status;

        private Builder() {
        }

        public static Builder aPerson() {
            return new Builder();
        }

        public Builder id(String id) {
            this.id = id;
            return this;
        }

        public Builder name(String name) {
            this.name = name;
            return this;
        }

        public Builder status(String status) {
            this.status = status;
            return this;
        }

        public Person build() {
            Person person = new Person(id, name);
            person.status = this.status;
            return person;
        }
    }
}
I store this domain object in the database via a regular class with the same fields plus getters and setters. Currently, when I want to store the object, I create a new PersonDocument (the data is stored in MongoDB), use the getters and setters, and save it. It gets complicated when I want to fetch it from the DB. I would like my domain object to expose only what is necessary; for the business logic that is currently only creation and validation. Simply:
Person p = Person.createNew("1234", "John");
p.validate();
repository.save(p);
Going the other way it gets complicated: currently there is a builder which allows creation of the object in any state. We do believe that the data stored in the DB is in a proper state, so the object can be created that way, but the downside is that there is a public API available letting anyone do anything.
The initial idea was to use the MapStruct Java mapping library, but it uses setters to create objects, and exposing setters in the domain class (as far as I can tell) should be avoided.
Any suggestions how to do it properly?
Your problem likely comes from two conflicting requirements:
You want to expose only business methods.
You want to expose data too, since you want to be able to implement serialization/deserialization external to the object.
One of those has to give. To be honest, most people faced with this problem ignore the first one, and just introduce setter/getters. The alternative is of course to ignore the second one, and just introduce the serialization/deserialization into the object.
For example, you can introduce a method Document toDocument() into the object that produces the Mongo-compatible JSON document, and also a static Person fromDocument(Document) factory to deserialize.
Most people don't like this sort of solution, because it "couples" the technology to the object. Is that a good or bad thing? Depends on your use-case. Which one do you want to optimize for: Changing business logic or changing technologies? If you're not planning to change technologies very often and don't plan using the same class in a completely different application, there's no reason to separate technology.
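A rough sketch of that, using the MongoDB driver's org.bson.Document; the stored field names are assumptions for illustration:
import org.bson.Document;

class Person {
    private final String id; // government id
    private String name;
    private String status;

    private Person(String id, String name) {
        this.id = id;
        this.name = name;
        this.status = "NEW";
    }

    // Serialize only the state this object is willing to share with persistence.
    Document toDocument() {
        return new Document("_id", id)
                .append("name", name)
                .append("status", status);
    }

    // Rebuild a Person from the stored document without exposing setters.
    static Person fromDocument(Document document) {
        Person person = new Person(document.getString("_id"), document.getString("name"));
        person.status = document.getString("status");
        return person;
    }
}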
Robert Bräutigam's point is good:
Two conflicting requirements
But there is another sentence, by Alan Kay, that is better:
“I’m sorry that I long ago coined the term “objects” for this topic
because it gets many people to focus on the lesser idea. The big idea
is messaging.” ~ Alan Kay
So, instead of dealing with the conflict, let's just change the approach to avoid it. The best way I found is to take a functional approach and avoid unnecessary state and mutation in classes by expressing the domain changes as events.
Instead of mapping classes (aggregates, value objects and/or entities) to persistence, I do this:
Build an aggregate with the data needed (value objects and entities) to apply the aggregate's rules and invariants for a given action. This data comes from persistence. The aggregate does not expose getters or setters; just actions.
Call the aggregate's action with command data as a parameter. This will call inner entities' actions in case the overall rules need it. This allows responsibility segregation and decoupling, as the aggregate root does not have to know how its inner entities are implemented (tell, don't ask).
Actions (in aggregate roots and inner entities) do not modify their inner state; instead they return events expressing the domain change. The aggregate's main action coordinates and checks the events returned by its inner entities to apply rules and invariants (the aggregate has the "big picture") and builds the final domain event that is the output of the main action call.
Your persistence layer has an apply method for every domain event it has to handle (Persistence.Apply(event)). This way your persistence knows what happened and, as long as the event has all the data needed to persist the change, can apply the change (even with behaviour if needed!).
Publish your domain event. Let the rest of your system know that something has just happened.
Check this post (it is worth checking the whole DDD series on this blog) to see a similar implementation; a minimal sketch of the flow follows below.
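A minimal sketch of that flow, with made-up event and interface names (no framework involved):
interface DomainEvent {}

final class PersonRegistered implements DomainEvent {
    final String id;
    final String name;
    PersonRegistered(String id, String name) { this.id = id; this.name = name; }
}

final class PersonValidated implements DomainEvent {
    final String id;
    PersonValidated(String id) { this.id = id; }
}

// The aggregate exposes actions only; actions return events instead of mutating state.
final class PersonAggregate {
    private final String id;
    private final String name;

    PersonAggregate(String id, String name) { this.id = id; this.name = name; }

    DomainEvent register() {
        // rules / invariants would be checked here
        return new PersonRegistered(id, name);
    }

    DomainEvent validate() {
        // rules / invariants would be checked here
        return new PersonValidated(id);
    }
}

// Persistence applies each event it knows how to handle.
interface PersonPersistence {
    void apply(PersonRegistered event);  // e.g. INSERT
    void apply(PersonValidated event);   // e.g. UPDATE status
}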
I do it this way:
The Person domain entity has status (in the sense of the entity fields that define the entity, not your "status" field) and behaviour (methods).
What is stored in the DB is just the status. So I create a "PersonStatus" interface in the domain (with getter methods for the fields that we need to persist), so that PersonRepository deals with the status.
The Person entity implements PersonStatus (or, instead of this, you can add a static method that returns the status).
In the infrastructure I have a PersonDB class implementing PersonStatus too, which is the persistence model.
So:
DOMAIN MODEL:
// ENTITY
public class Person implements PersonStatus {

    // Fields that define the status
    private String id;
    private String name;
    ...

    // Constructors and behaviour
    ...
    ...

    // Methods implementing PersonStatus
    @Override
    public String id() {
        return this.id;
    }

    @Override
    public String name() {
        return this.name;
    }
    ...
}

// STATUS OF ENTITY
public interface PersonStatus {
    public String id();
    public String name();
    ...
}

// REPOSITORY
public interface PersonRepository {
    public void add ( PersonStatus personStatus );
    public PersonStatus personOfId ( String anId );
}
INFRASTRUCTURE:
public class PersonDB implements PersonStatus {

    private String id;
    private String name;
    ...

    public PersonDB ( String anId, String aName, ... ) {
        this.id = anId;
        this.name = aName;
        ...
    }

    @Override
    public String id() {
        return this.id;
    }

    @Override
    public String name() {
        return this.name;
    }
    ...
}

// AN IN-MEMORY REPOSITORY IMPLEMENTATION
public class InmemoryPersonRepository implements PersonRepository {

    private Map<String,PersonDB> inmemoryDb;

    public InmemoryPersonRepository() {
        this.inmemoryDb = new HashMap<String,PersonDB>();
    }

    @Override
    public void add ( PersonStatus personStatus ) {
        PersonDB personDB = new PersonDB ( personStatus.id(), personStatus.name(), ... );
        this.inmemoryDb.put ( personDB.id(), personDB );
    }

    @Override
    public PersonStatus personOfId ( String anId ) {
        return this.inmemoryDb.get ( anId );
    }
}
APPLICATION LAYER:
...
Person person = new Person ( "1", "John Doe", ... );
personRepository.add ( person );
...
PersonStatus personStatus = personRepository.personOfId ( "1" );
Person person = new Person ( personStatus.id(), personStatus.name(), ... );
...
It basically boils down to two things, depending on how much extra work you are willing to put into the necessary infrastructure and how constraining your ORM/persistence is.
Use the CQRS+ES pattern
The most obvious choice, used in bigger and more complex domains, is the CQRS (Command/Query Responsibility Segregation) plus Event Sourcing pattern. This means that each mutating action generates an event that is persisted.
When your aggregate is loaded, all the events will be loaded from the database and applied in chronological order. Once applied, your aggregate will have its current state.
CQRS just means that you separate read and write operations. Write operations happen in the aggregate by creating events (by applying commands), which are stored and read via event sourcing.
The "query" side consists of queries on projected data, which uses the events to build a current state of the object that is used for querying and reading only. Aggregates are still read by reapplying all the events from the event sourcing storage.
Pros
You have a history of all changes that were done on the aggregate. This can be seen as added value for the business and for auditing.
If your projected database is corrupted or in an invalid state, you can restore it by replaying all the events and generating the projection anew.
It's easy to revert to a previous state in time (e.g. by applying compensating events that do the opposite of what a previous event did).
It's easy to fix a bug (e.g. in calculating the state of the aggregate) and then replay all the events to get the new, corrected value.
Assume you have a BankingAccount aggregate that calculates the balance, and you used regular rounding instead of "round-to-even". Here you can fix the calculation, then reapply all the events, and you get the new and correct account balance.
Cons
Aggregates with thousands of events can take some time to materialize (the Snapshot/Memento pattern can be used here to load a snapshot and apply only the events after that snapshot)
Initially more time to implement the necessary infrastructure
You can't query event-sourced aggregates without a read store; this requires a projection and a message queue to publish the event sourcing events, so they can be processed and applied to a projection (a SQL or document table) which can be used for queries
Map directly to Domain Entities
Some ORM and document database providers allow you to map directly to backing fields, e.g. via reflection.
In the MongoDB C# driver it can be done via something like in the linked answer.
The same applies to the EF Core ORM. I'm sure there's something similar in the Java world too.
This may limit your database persistence library and technology choices, since it requires one which supports such APIs via fluent or code configuration. You can't use attributes/annotations for this, because these are usually database specific and would leak persistence knowledge into your domain.
It also MAY limit your ability to use the strongly typed querying API (LINQ in C#, Streams in Java), because that generally requires getters and setters, so you may have to use magic strings (with the names of the fields or properties in the storage) in the persistence layer.
It may be acceptable for smaller/less complex domains, but CQRS+ES should always be preferred if possible and within budget/timeline, since it is the most flexible and works with all persistence storages and frameworks (even with key-value stores).
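For illustration only, here is the bare idea of mapping to backing fields via reflection in plain Java; real ORMs/ODMs do this through their own fluent or code configuration, and the helper below is not any particular library's API:
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.util.Map;

class BackingFieldMapper {

    // Writes raw values straight into private fields, bypassing setters,
    // so the domain class keeps its encapsulation. Assumes the entity has a
    // (possibly private) no-arg constructor and field names matching the keys.
    static <T> T hydrate(Class<T> type, Map<String, Object> row) throws Exception {
        Constructor<T> constructor = type.getDeclaredConstructor();
        constructor.setAccessible(true);
        T instance = constructor.newInstance();
        for (Map.Entry<String, Object> column : row.entrySet()) {
            Field field = type.getDeclaredField(column.getKey());
            field.setAccessible(true);
            field.set(instance, column.getValue());
        }
        return instance;
    }
}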
Pros
Not necessary to leverage more complex infrastructure (CQRS, ES, Pub/Sub messaging/queues)
No leaking of persistence knowledge into your models and no need to break encapsulation
Cons
No history of changes
No way to restore a previous state
May require magic strings when querying in the persistence layer (depends on framework/orm)
Can require a lot of fluent/code configuration in the persistence layer, to map it to the backing field
May break, when you rename the backing field

MapStruct: join on id

I am using MapStruct to map from generated DTOs (Metro, XSD) to our business domain objects. My difficulty is that the DTOs don't actually reference child objects but instead use IDs to reference associated instances.
Trying to break this down to a simplified case, I have come up with an example:
SchoolDTO has a lists of teachers and courses. The teacher of a
course is only referenced through a teacherId in each course.
In the business domain School only has a list of teachers who each
hold a list of their courses.
Class diagram: UML: DTO / Domain
Initially I was hoping to solve this in MapStruct syntax with something like a join on the foreign id and the teacher id (or some qualifiedBy association), pseudo-code as follows:
@Mapping(source="courses", target="teachers.courses", where="teacher.id = course.teacherId")
DTOs:
public class SchoolDto {
    List<TeacherDto> teachers;
    List<CourseDto> courses;
}
public class TeacherDto {
    String id;
    String name;
}
public class CourseDto {
    String name;
    String teacherId;
}
Domain:
public class School {
    List<Teacher> teachers;
}
public class Teacher {
    String name;
    List<Course> courses;
}
public class Course {
    String name;
}
I am right now working around this with fairly big @AfterMapping methods, but I feel this isn't such an exceptional use case - so maybe I am missing something rather obvious. What is the correct/intended way to solve this type of "join" in a mapping with MapStruct?
I doubt that you can do this without an @AfterMapping. MapStruct is "just" for mapping one object to another; it doesn't support any kind of queries to find or join data.
If you are not already using it, this sounds like a good use case for a context. Then the @AfterMapping is not really big:
@Mapper
public abstract class SchoolMapper {

    public School toSchool(SchoolDto school) {
        return toSchool( school, school.getCourses() );
    }

    protected abstract School toSchool(SchoolDto school, @Context List<CourseDto> courses);

    @Mapping(target = "courses", ignore = true) // see afterMappingToTeacher
    protected abstract Teacher toTeacher(TeacherDto teacher, @Context List<CourseDto> courses);

    protected abstract Course toCourse(CourseDto course);

    @AfterMapping
    void afterMappingToTeacher(@MappingTarget Teacher target, TeacherDto source, @Context List<CourseDto> courses) {
        // omitted null-checks
        List<Course> teacherCourses = new ArrayList<>();
        for (CourseDto course : courses) {
            if (course.getTeacherId().equals(source.getId())) {
                teacherCourses.add( toCourse(course) );
            }
        }
        target.setCourses( teacherCourses );
    }
}
(when using Java >= 8 you can use an interface with default methods)
In case you need to query things multiple times, you can create your own class as a context, which for example has its own method for finding all courses by a teacher ID.
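For instance, a small hand-rolled context class along these lines (the class and method names are made up; it would be passed around via @Context instead of the raw list):
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Built once from the DTO, so lookups by teacher id are cheap and reusable.
class SchoolMappingContext {
    private final Map<String, List<CourseDto>> coursesByTeacherId = new HashMap<>();

    SchoolMappingContext(SchoolDto school) {
        for (CourseDto course : school.getCourses()) {
            coursesByTeacherId
                .computeIfAbsent(course.getTeacherId(), id -> new ArrayList<>())
                .add(course);
        }
    }

    List<CourseDto> coursesFor(String teacherId) {
        return coursesByTeacherId.getOrDefault(teacherId, Collections.emptyList());
    }
}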

How to beautifully update a JPA entity in Spring Data?

So I have looked at various tutorials about JPA with Spring Data, and this has been done differently on many occasions, and I am not quite sure what the correct approach is.
Assume there is the following entity:
package stackoverflowTest.dao;

import javax.persistence.*;

@Entity
@Table(name = "customers")
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private long id;

    @Column(name = "name")
    private String name;

    public Customer(String name) {
        this.name = name;
    }

    public Customer() {
    }

    public long getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
We also have a DTO which is retrieved in the service layer and then handed to the controller/client side.
package stackoverflowTest.dto;

public class CustomerDto {

    private long id;
    private String name;

    public CustomerDto(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
So now assume the Customer wants to change his name in the webui - then there will be some controller action, where there will be the updated DTO with the old ID and the new name.
Now I have to save this updated DTO to the database.
Unluckily, currently there is no way to update an existing customer (other than deleting the entry in the DB and creating a new Customer with a new auto-generated id).
However, this is not feasible (especially considering such an entity could potentially have hundreds of relations), so two straightforward solutions come to my mind:
make a setter for the id in the Customer class - and thus allow setting of the id and then save the Customer object via the corresponding repository.
or
add the id field to the constructor and whenever you want to update a customer you always create a new object with the old id, but the new values for the other fields (in this case only the name)
So my question is whether there is a general rule for how to do this?
And maybe what the drawbacks of the two methods I explained are?
Even better than @Tanjim Rahman's answer: using Spring Data JPA you can use the method T getOne(ID id)
Customer customerToUpdate = customerRepository.getOne(id);
customerToUpdate.setName(customerDto.getName());
customerRepository.save(customerToUpdate);
It's better because getOne(ID id) gets you only a reference (proxy) object and does not fetch it from the DB. On this reference you can set what you want, and on save() it will do just the SQL UPDATE statement you expect. In comparison, when you call find() as in @Tanjim Rahman's answer, Spring Data JPA will do an SQL SELECT to physically fetch the entity from the DB, which you don't need when you are just updating.
In Spring Data you simply define an update query if you have the ID
@Repository
public interface CustomerRepository extends JpaRepository<Customer, Long> {

    @Modifying // required in Spring Data JPA for update/delete queries
    @Query("update Customer c set c.name = :name WHERE c.id = :customerId")
    void setCustomerName(@Param("customerId") Long id, @Param("name") String name);
}
Some solutions claim to use Spring Data but then do old-school JPA instead (even in a manner prone to lost updates).
A simple JPA update:
Customer customer = em.find(Customer.class, id); // em is the JPA EntityManager
customer.setName(customerDto.getName());
em.merge(customer);
This is more an object initialzation question more than a jpa question, both methods work and you can have both of them at the same time , usually if the data member value is ready before the instantiation you use the constructor parameters, if this value could be updated after the instantiation you should have a setter.
If you need to work with DTOs rather than entities directly then you should retrieve the existing Customer instance and map the updated fields from the DTO to that.
Customer entity = //load from DB
//map fields from DTO to entity
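A minimal sketch of that, assuming the CustomerRepository and CustomerDto from the question (findById returning an Optional is the Spring Data 2.x style):
// Load the managed entity, copy the changed fields from the DTO, save.
Customer entity = customerRepository.findById(customerDto.getId())
        .orElseThrow(() -> new IllegalArgumentException("No customer " + customerDto.getId()));
entity.setName(customerDto.getName());
customerRepository.save(entity);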
So now assume the Customer wants to change his name in the webui -
then there will be some controller action, where there will be the
updated DTO with the old ID and the new name.
Normally, you have the following workflow:
User requests his data from the server and obtains it in the UI;
User corrects his data and sends it back to the server with the already present ID;
On the server you obtain the DTO with the data updated by the user, find the corresponding entity in the DB by ID (otherwise throw an exception), and transform DTO -> entity with all the given data, foreign keys, etc...
Then you just merge it, or, if using Spring Data, invoke save(), which in turn will merge it (see this thread);
P.S. This operation will inevitably issue 2 queries: a select and an update. Again, 2 queries, even if you want to update a single field. However, if you put Hibernate's proprietary @DynamicUpdate annotation on top of the entity class, it will help you include in the update statement not all the fields, but only those that actually changed.
P.P.S. If you do not want to pay for the first select statement and prefer to use Spring Data's @Modifying query, be prepared to lose the L2 cache region related to the modified entity; the situation is even worse with native update queries (see this thread). And of course be prepared to write those queries manually, test them, and support them in the future.
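For reference, a small sketch of the Hibernate-specific annotation mentioned above, applied to the Customer entity from the question:
import org.hibernate.annotations.DynamicUpdate;
import javax.persistence.Entity;
import javax.persistence.Table;

@Entity
@Table(name = "customers")
@DynamicUpdate // Hibernate-only: generated UPDATEs include only the changed columns
public class Customer {
    // ... same fields, constructors, getters and setters as above ...
}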
I have encountered this issue!
Luckily, I found two ways and understand some things, but the rest is not clear to me. I hope someone will discuss it or add to it if you know more.
Use a repository that extends JpaRepository and call save(entity). Example:
Person person = this.personRepository.findById(0).orElseThrow();
person.setName("Neo");
this.personRepository.save(person);
This block of code updates the name of the record with id = 0.
Use @Transactional from javax.transaction or the Spring Framework. Put @Transactional on your class or on a specific method; both are OK. I read somewhere that this annotation performs a "commit" at the end of your method flow, so everything you modified on a managed entity will be updated in the database.
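A small sketch of the second way, using Spring's @Transactional on a hypothetical service class; inside the transaction, dirty checking flushes the change on commit without an explicit save():
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CustomerService {

    private final CustomerRepository customerRepository;

    public CustomerService(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @Transactional
    public void rename(long id, String newName) {
        Customer customer = customerRepository.findById(id)
                .orElseThrow(() -> new IllegalArgumentException("No customer " + id));
        customer.setName(newName); // dirty checking writes this on commit
    }
}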
There is a method in JpaRepository, getOne, which is deprecated at the moment in favor of getById. So the correct approach would be:
Customer customerToUpdate = customerRepository.getById(id);
customerToUpdate.setName(customerDto.getName());
customerRepository.save(customerToUpdate);

How to create relationships between objects in Spring JDBC?

I want to implement JPA-style relationships with Spring JDBC. For instance, assume I have Account and Advert objects. The relationship between Account and Advert is @OneToMany according to JPA.
Account class:
public class Account {
    private Long id;
    private String username;
    private Set<Advert> adverts = new HashSet<Advert>();
    // getters + setters
}
Advert class:
public class Advert {
    private Long id;
    private String text;
    private Account account;
    // getters + setters
}
AccountMapper:
public class AccountMapper implements RowMapper<Account> {
    public Account mapRow(ResultSet rs, int rowNum) throws SQLException {
        Account account = new Account();
        account.setId(rs.getLong("id"));
        account.setUsername(rs.getString("username"));
        return account;
    }
}
Now, I am trying to create a Mapper for the Advert class. How can I map the account variable from the Advert class to a row? Many thanks
You can use Hibernate without affecting your application performance; just check out this Hibernate tutorial for hundreds of examples related to mapping entities.
As for doing that in JDBC, you need to do the following steps:
You need to use aliases for all selected columns so that the id columns won't clash.
You can define two row mappers, use a join from Advert to Account, and pass the AdvertMapper to the AccountMapper:
public class AdvertMapper implements RowMapper<Advert> {
    public Advert mapRow(ResultSet rs, int rowNum) throws SQLException {
        Advert advert = new Advert();
        advert.setId(rs.getLong("advert_id"));
        advert.setText(rs.getString("advert_text"));
        return advert;
    }
}
public class AccountMapper implements RowMapper<Account> {
    private final AdvertMapper advertMapper;

    public AccountMapper(AdvertMapper advertMapper) {
        this.advertMapper = advertMapper;
    }

    public Account mapRow(ResultSet rs, int rowNum) throws SQLException {
        Account account = new Account();
        account.setId(rs.getLong("account_id"));
        account.setUsername(rs.getString("account_username"));
        Advert advert = this.advertMapper.mapRow(rs, rowNum);
        advert.setAccount(account);
        account.getAdverts().add(advert);
        return account;
    }
}
The AccountMapper uses the AdvertMapper to create Adverts from the joined data.
Compare this to Hibernate, where all these mappings are resolved for you.
Well, if you do not use an ORM ... you have no object-relational mapping! After all, ORMs were created for exactly that reason :-)
More seriously, an ORM saves you from writing a lot of boilerplate code. Using direct JDBC instead of JPA is a code optimisation. Like any other code optimisation, it should be used when appropriate. It is relevant for:
libraries using few tables that do not want to rely on an ORM (e.g. users, roles, and ACLs in Spring Security)
identified bottlenecks in larger applications
My advice would be to first use JPA or native Hibernate hidden behind a DAO layer. Then carefully analyze your performance problems and rewrite the most expensive parts in JDBC.
Of course, you can directly code your DAO implementations in JDBC, but it will take much longer to write.
I almost forgot the essential part: in an ORM you map classes and relations; in JDBC you write independent SQL queries.
Solving the one-to-one case is easy, as Vlad answered. If you want to map a one-to-many, as your Account - Advert example suggests, you can't do that
with a RowMapper, because you would be trying to map multiple rows of your ResultSet to one Account with many Adverts.
You can do that manually (see the sketch just below), or you can use http://simpleflatmapper.org, which provides mapping from ResultSet to POJO with one-to-many support.
Beware that bidirectional relationships are not great there; if you really want them it's possible, but they won't be the same instance.
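For the manual route, a rough sketch of a Spring ResultSetExtractor that groups the joined rows by account id (reusing the column aliases from the answer above: account_id, account_username, advert_id, advert_text):
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.ResultSetExtractor;

public class AccountWithAdvertsExtractor implements ResultSetExtractor<List<Account>> {

    @Override
    public List<Account> extractData(ResultSet rs) throws SQLException, DataAccessException {
        Map<Long, Account> accountsById = new LinkedHashMap<>();
        while (rs.next()) {
            long accountId = rs.getLong("account_id");
            Account account = accountsById.get(accountId);
            if (account == null) {
                account = new Account();
                account.setId(accountId);
                account.setUsername(rs.getString("account_username"));
                accountsById.put(accountId, account);
            }
            long advertId = rs.getLong("advert_id");
            if (!rs.wasNull()) { // a LEFT JOIN may yield accounts without adverts
                Advert advert = new Advert();
                advert.setId(advertId);
                advert.setText(rs.getString("advert_text"));
                advert.setAccount(account);
                account.getAdverts().add(advert);
            }
        }
        return new ArrayList<>(accountsById.values());
    }
}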
Checkout
http://simpleflatmapper.org/0104-getting-started-springjdbc.html
and
https://arnaudroger.github.io/blog/2017/02/27/jooq-one-to-many.html
You will need to create a ResultSetExtractor - it's thread-safe, so you only need one instance:
private final ResultSetExtractor<List<Account>> mapper =
    JdbcTemplateMapperFactory
        .newInstance()
        .addKeys("id") // assuming the account id will be on that column
        .newResultSetExtractor(Account.class);

// in the method
String query =
    "SELECT ac.id as id, ac.username, ad.id as adverts_id, ad.text as adverts_text "
    + "FROM account ac LEFT OUTER JOIN advert ad ON ad.account_id = ac.id ORDER BY id";
    // the order by id is important here as it uses the break on id on the root object
    // to detect new root object creation
List<Account> results = template.query(query, mapper);
With that you should get a list of accounts with their lists of adverts populated, but the adverts won't have the account set.

Android - Is there a super comfortable way to store persistent data?

I want my data model and my getters and setters set up like a simple Java class: let Eclipse create all the getters and setters, and when I call them, I want the data to be stored persistently. There is a sort of way with the SQLiteDatabase class, but it's still not as comfortable as working with simple Java classes. Is there a framework for it (and not only for Android)? I got the idea from the web framework Grails.
// Define the data model
class StackOverflowUser {
    private String name;
    private int points;
    // getters, setters...
}

// store data persistently in a database:
dan.setPoints(dan.getPoints() + 5);
I don't understand why this OO language has such a comfortable way of using objects and getters and setters to easily define a data model, but when it comes to persistence I need dozens of helper classes. It's not a concrete problem, but I hope you have an idea.
Use SharedPreferences to store your data. Please read this tutorial --->
http://androidandandroid.wordpress.com/2010/07/25/android-tutorial-16/
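A minimal sketch of that approach (the preferences file name "user_data", the keys, and the context variable are arbitrary examples):
import android.content.Context;
import android.content.SharedPreferences;

SharedPreferences prefs = context.getSharedPreferences("user_data", Context.MODE_PRIVATE);

// write
prefs.edit()
     .putString("name", dan.getName())
     .putInt("points", dan.getPoints())
     .apply();

// read
int points = prefs.getInt("points", 0); // 0 is the default value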
Shared preferences may be an option for small amounts of data, but the data types are limited. I started an open-source project to eliminate boilerplate code while using them ( https://github.com/ko5tik/andject )
Another solution would be storing data in JSON form and using some databinding tool ( like: https://github.com/ko5tik/jsonserializer ) - JSON data can also be stored
Android Jetpack introduced Room, an elegant way to store data in SQLite using POJOs. Example:
User.java
@Entity
public class User {
    @PrimaryKey
    private int uid;

    @ColumnInfo(name = "first_name")
    private String firstName;

    @ColumnInfo(name = "last_name")
    private String lastName;

    // Getters and setters are ignored for brevity,
    // but they're required for Room to work.
}
UserDao.java
@Dao
public interface UserDao {
    @Query("SELECT * FROM user")
    List<User> getAll();

    @Query("SELECT * FROM user WHERE uid IN (:userIds)")
    List<User> loadAllByIds(int[] userIds);

    @Query("SELECT * FROM user WHERE first_name LIKE :first AND "
            + "last_name LIKE :last LIMIT 1")
    User findByName(String first, String last);

    @Insert
    void insertAll(User... users);

    @Delete
    void delete(User user);
}
AppDatabase.java
@Database(entities = {User.class}, version = 1)
public abstract class AppDatabase extends RoomDatabase {
    public abstract UserDao userDao();
}
After creating the files above, you get an instance of the created database using the following code:
AppDatabase db = Room.databaseBuilder(getApplicationContext(),
        AppDatabase.class, "database-name").build();
UserDao dao = db.userDao();
Time to play!
List<User> users = dao.getAll();
dao.insertAll(users);
// ...
If you use Kotlin (highly recommended), it is even more concise with data classes.
