I am currently developing a simple Java web application without using an ORM.
It has a layered architecture. I use annotations to 'map' the columns from my domain classes to my persistence layer, where the repositories are.
My DefaultRepository uses reflection to get the field names and build the queries. I use this single class instead of making a separate repository for each domain class (such as User, Order, Product).
It looks like this:
Repository:
public abstract class DefaultRepository<TYPE extends DefaultDomain<TYPE>> {
    public int insert(Connection con, TYPE entity) { ... }
    public int update(Connection con, TYPE entity) { ... }
    public int delete(Connection con, TYPE entity) { ... }
    public TYPE findById(Connection con, int id) { ... }
    ...
}
I want to implement lazy loading so that the orders of each user are loaded only when requested. This would preferably happen in the repository, in findById(), where I map the result set to my TYPE (domain class).
How would I go about implementing such foreign key relations (one-to-many/many-to-one) in general?
One idea was to create an annotation that points at the specific repository and then call that repository's findAll() method when the order list is accessed.
For example:
public class Order extends DefaultDomain<Order> {
    @ManyToOne(repository = UserRepository.class)
    private User user;
}
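To make that idea more concrete, this is roughly how I imagined the lazy part being wired up by findById(). The @OneToMany annotation, the LazyList wrapper and the findAllByForeignKey() call are only placeholders for the idea, none of this exists yet:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.List;
import java.util.function.Supplier;

// Placeholder annotation naming the repository that should be queried lazily.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface OneToMany {
    Class<?> repository();       // e.g. OrderRepository.class
    String foreignKeyColumn();   // e.g. "user_id"
}

// Wrapper that defers the actual query until the collection is first accessed.
class LazyList<T> {
    private final Supplier<List<T>> loader;
    private List<T> values;

    LazyList(Supplier<List<T>> loader) {
        this.loader = loader;
    }

    public List<T> get() {
        if (values == null) {
            values = loader.get(); // the repository query runs here, on first access
        }
        return values;
    }
}

findById() would then detect the annotation via reflection and set the field to something like new LazyList<>(() -> orderRepository.findAllByForeignKey(con, user.getId())). One open question is where that Connection should come from once the original one is closed.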
Is that the right way? Is there a different/better approach to this?
Related
I'm pretty new to jOOQ and I'm trying to implement the usual CRUD operations that we Java developers like to have in our DAOs/repositories. I have the following code for selecting a record by id:
public class JooqRepository<ID, E extends BaseObject<ID>, T extends Table<R>, R extends Record> {

    ...

    private final T table; // set in the constructor by concrete subclasses, referencing a generated table class

    ...

    protected Optional<E> findById(ID id) {
        final TableField<R, ID> idField = (TableField<R, ID>) table.getIdentity().getField();
        return dsl.fetchOptional(table, idField.eq(id)).map(toEntity()); // conversion method omitted here
    }

    ...
}
My question is, firstly: would this approach work for all kinds of tables/records, or only for ones that use an identity/auto-increment column?
What if I use a DBMS that doesn't have this feature (e.g. Oracle)?
What if a table has a composite key?
And lastly: is it even recommended to use jOOQ in that way, or should we explicitly craft dedicated queries for every table?
While it is possible to use jOOQ as a Spring repository implementation, you could also just use jOOQ's out-of-the-box DAO support, which works in a similar way. The main difference is that jOOQ DAOs are unopinionated auxiliary tools that do not impose DDD as a modeling paradigm; they just simplify the most common CRUD operations on each of your tables.
You can subclass the generated DAOs to add more functionality, and inject them into your services like Spring's repositories.
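As a rough sketch of what that could look like (AuthorDao, Author and the AUTHOR table reference stand in for your own generated classes):

import static com.example.generated.Tables.AUTHOR; // placeholder for your generated table reference

import java.util.List;
import org.jooq.Configuration;
import org.jooq.impl.DSL;

// Extends the jOOQ-generated AuthorDao and therefore inherits insert(), update(),
// delete(), findById(), findAll(), etc. from DAOImpl.
public class ExtendedAuthorDao extends AuthorDao {

    public ExtendedAuthorDao(Configuration configuration) {
        super(configuration);
    }

    // Custom query added on top of the generated CRUD methods.
    public List<Author> findByLastName(String lastName) {
        return DSL.using(configuration())
                  .selectFrom(AUTHOR)
                  .where(AUTHOR.LAST_NAME.eq(lastName))
                  .fetchInto(Author.class);
    }
}

Such a subclass can then be registered as a bean (or constructed with a Configuration) and injected into services, much like a Spring Data repository would be.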
Basically I am looking for a nice mechanism to do something like
@Query(value = "generateReport", nativeQuery = true)
public void generateCSVReport(Path filename, UUID managerUuid);
where generateReport is a parameterized query located in @EnableJpaRepositories(namedQueriesLocation = "classpath:/foo/bar/file.sql")
that includes a
COPY ( SELECT * FROM foo) TO file WITH (FORMAT CSV)
All of this without needing to define a repository for a real entity. I did try JpaRepository<Void, Void>, but it does not work.
If you don't have any entity attached to your repository, then you don't really need to implement JpaRepository, or even CrudRepository, since it doesn't make sense to have CRUD operations without an entity.
Try extending the base interface Repository<T, ID extends Serializable> instead.
Also, Repository<Void, Void> won't work, since Void does not extend Serializable, and also because Void is not a managed type (i.e. it is not an @Entity).
Using Spring Data repositories to do this you would have to create an empty dummy entity just to pass in. It would probably make sense to map this to the foo table you are querying in your SQL:
@Entity
@Table(name = "foo")
public class DummyEntity implements Serializable {
    // Blank
}
Then extend Repository<DummyEntity, Integer>. This probably indicates that Spring Data repos aren't the best solution for this problem though.
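A minimal sketch of that wiring (the interface and method names are made up, and the file path is hardcoded because PostgreSQL's COPY does not accept the target file as a bind parameter; whether your JPA provider passes the statement through cleanly is something you would have to verify):

import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.Repository;
import org.springframework.transaction.annotation.Transactional;

// Repository over the dummy entity, exposing only the custom native statement.
public interface FooReportRepository extends Repository<DummyEntity, Integer> {

    @Modifying
    @Transactional
    @Query(value = "COPY (SELECT * FROM foo) TO '/tmp/foo_report.csv' WITH (FORMAT CSV)",
           nativeQuery = true)
    void exportFooToCsv();
}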
I've got a generic DAO:
@NoRepositoryBean
interface ICrudDao<T, ID extends Serializable> extends Repository<T, ID> {
    void delete(T deleted);
    List<T> findAll();
    T findOne(ID id);
    T save(T persisted);
}
To let services work with it, I have to create an interface per entity so that each custom entity gets persistence, e.g.:
interface TodoDao extends ICrudDao<Todo, Long> {
}
I have a lot of DAOs like TodoDao. They don't declare any special methods.
Creating a lot of empty interfaces seems like a dumb idea. How can I create a generic one?
I don't think what you are trying to do is a good idea. At first, registering a repository for each entity seems like boilerplate code, but as the application grows, it will help you maintain it. Imagine your application evolving over time like this:
You create a simple entity Person and the interface PersonRepository. Luckily, all basic CRUD operations are included; so far it fits your needs, so there is nothing else to do.
As your application grows, Person gets a lot of associated relations, like Address, Job and Hobbies, and it would be very inefficient to fetch all associated data every time you access it, because not every association is always needed. To counter that, you create your own method in PersonRepository which executes your own named query to load only certain fields and store them in the DTO needed for the specific view ("SELECT new package.PersonDto(x, y) FROM Person WHERE ..."), as sketched below.
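Such a query method might look roughly like this (PersonDto and the selected fields are illustrative):

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface PersonRepository extends JpaRepository<Person, Long> {

    // Constructor-expression projection: only the listed fields are fetched.
    @Query("SELECT new com.example.PersonDto(p.firstName, p.lastName) "
         + "FROM Person p WHERE p.id = :id")
    PersonDto findNameById(@Param("id") Long id);
}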
As time passes, you find yourself in a situation where you need queries to be built dynamically, like pagination or restrictions that only need to be added under certain conditions. So you create a new interface PersonCustomRepository and a PersonCustomRepositoryImpl where you write queries in a programmatic way:
@PersistenceContext
private EntityManager entityManager;

@Transactional
public List<Person> foo() {
    // example for accessing Hibernate directly; you could also use QueryDSL and so on
    Criteria criteria = entityManager.unwrap(Session.class).createCriteria(Person.class);
    if (someCondition) {
        criteria.add(Restrictions.eq("foo", foo));
        ...
    }
    ...
    return criteria.list();
}
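Wired together, the custom fragment looks roughly like this; Spring Data picks up the Impl class by its naming convention and merges it into the repository (a sketch; each type would live in its own file):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.data.jpa.repository.JpaRepository;

public interface PersonCustomRepository {
    List<Person> foo();
}

// Found via the "Impl" suffix convention and merged into PersonRepository by Spring Data.
class PersonCustomRepositoryImpl implements PersonCustomRepository {

    @PersistenceContext
    private EntityManager entityManager;

    @Override
    public List<Person> foo() {
        // the Criteria-based implementation shown above goes here
        return entityManager.createQuery("SELECT p FROM Person p", Person.class).getResultList();
    }
}

// Exposes both the derived CRUD methods and foo().
interface PersonRepository extends JpaRepository<Person, Long>, PersonCustomRepository {
}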
Bottom line: Spring Data repositories already do a lot of work for you and they are easy to extend. Don't try to fight your framework, even if it might save you a few clicks at first.
You can avoid this by having your entities extend a common base class that the generic DAO is bounded to.
// you could annotate this with @MappedSuperclass
public class BaseBean {
    // you could declare the id here
}
public class Todo extends BaseBean {
}
@NoRepositoryBean
interface ICrudDao<T extends BaseBean, ID extends Serializable> extends Repository<T, ID> {
    void delete(T deleted);
    List<T> findAll();
    T findOne(ID id);
    T save(T persisted);
}
I don't think it's possible. See "How to create a Generic DAO class using Hibernate Context sessions" and "Hibernate: CRUD Generic DAO"; these might help.
I can also think of the Hibernate Session as an example of a single class that deals with the persistence of all types of objects; it just works with the Object type.
I'm implementing several DAO classes for a web project and for some reasons I have to use JDBC.
Now I'd like to return an entity like this:
public class Customer {
    // instead of int userId
    private User user;
    // instead of int activityId
    private Activity act;
    // ...
}
Using JPA, user and activity would be loaded easily (and automatically, by specifying the relations between entities).
But how do I do it using JDBC? Is there a common way to achieve this? Should I load everything in my CustomerDAO? Is it possible to implement lazy initialization for referenced entities?
My first idea was to implement in my UserDAO:
public void initUser(Customer customer);
and in my ActivityDAO:
public void initActivity(Customer customer);
to initialize variables in customer.
Active Record route
You could do this with AspectJ ITDs and essentially turn your entities into Active Record-like objects.
Basically, you make an aspect that advises classes implementing interfaces called "HasUser" and "HasActivity". Those interfaces will just define getters.
You will then make Aspects that will weave in the actual implementation of getUser() and getActivity().
Your aspects will do the actual JDBC work. Although the learning curve for AspectJ is initially steep, it will make your code far more elegant.
You can take a look at one of my answers in an AspectJ ITD Stack Overflow post.
You should also check out Spring's @Configurable, which will autowire your dependencies (such as your DataSource or JdbcTemplate) into non-managed Spring beans.
Of course, the best example to see this in action is Spring Roo. Just look at the AspectJ files it generates (granted, Roo uses JPA) to get an idea of how you would use @Configurable (make sure to use the active record annotation).
DAO Route
If you really want to go the DAO route, then you need to do this:
public class Customer {
    // instead of User user
    private Integer userId;
    // instead of Activity act
    private Integer activityId;
}
This is because, in the DAO pattern, your entity objects are not supposed to have behavior. Your services and/or DAOs will have to build transfer objects, to which you could attach the lazy loading.
I'm not sure if there is any automated approach to this. Without an ORM I usually write lazy getters: reference-type fields are initialized to null by default, i.e. my fetching code loads the primitives and Strings and leaves the references as null. Once getUser() is called, the getter checks whether the field is null and, if so, issues another SELECT based on the ID of the customer.
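In code, that pattern looks roughly like this (UserDao and its findById() are assumptions about your DAO layer):

public class Customer {

    private int userId;            // populated by the initial row mapping
    private User user;             // deliberately left null until first access

    private final UserDao userDao; // assumption: passed in or injected by the CustomerDAO

    public Customer(int userId, UserDao userDao) {
        this.userId = userId;
        this.userDao = userDao;
    }

    public User getUser() {
        if (user == null) {
            user = userDao.findById(userId); // the second SELECT, issued only on first access
        }
        return user;
    }
}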
I have written my own java.util.List implementation, and now I want to store it in MySQL using DataNucleus. My implementation consists of a public class that implements the List interface and a private class that implements the node for that list.
When I run the SchemaTool in Eclipse, only the table for my node implementation gets created, and when I run my app, I get the following error:
Persistent class "a.b.c.util.DtvList" has no table in the database, but the operation requires it. Please check the specification of the MetaData for this class.
Here's the beginning of my List implementing class...
@PersistenceCapable
@Inheritance(strategy = InheritanceStrategy.COMPLETE_TABLE)
public class DtvList<E extends Comparable<E>> implements List<E> {

    @Persistent
    private DtvListNode first = null;
    private DtvListNode last = null;
    private int length = 0;

    public DtvList() {}
Also, I only have an implementation for the add(E object) method; all the other methods throw a RuntimeException. Could that be the problem?
PS: I also tried implementing some more methods, such as getIterator and others, and I even tried writing a mapping plugin (http://www.datanucleus.org/extensions/rdbms_java_types.html), but to no avail. The table still does not get created in the database by the SchemaTool.
PS/2: I added the mapping class for the DtvListNode implementation; now I have a table for the DtvList, but not for the DtvListNode. It is still not working: I still get the exception org.datanucleus.store.exceptions.NoTableManagedException saying that the DtvList table does not exist.
I don't think DataNucleus supports custom List implementations for mapping relationships.
If your Lists are small and your implementation can support a copy constructor and a toList() method, you could map a standard List and implement LoadCallback and StoreCallback to manage the conversion. Obviously, if you have a lot of persistence operations on that List, it will get rather messy...
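A sketch of that callback-based conversion, assuming DtvList gets a copy constructor taking a List and a toList() method (the owning class and field names are made up):

import java.util.ArrayList;
import java.util.List;
import javax.jdo.annotations.NotPersistent;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.listener.LoadCallback;
import javax.jdo.listener.StoreCallback;

@PersistenceCapable
public class Schedule implements LoadCallback, StoreCallback {

    @Persistent
    private List<String> persistedEntries = new ArrayList<>(); // what DataNucleus actually maps

    @NotPersistent
    private DtvList<String> entries; // the custom list used by the application code

    // Called after the object is loaded: rebuild the custom list from the mapped one.
    public void jdoPostLoad() {
        entries = new DtvList<>(persistedEntries); // copy constructor (assumed)
    }

    // Called before the object is stored: copy the custom list back into the mapped one.
    public void jdoPreStore() {
        persistedEntries = entries.toList(); // conversion method (assumed)
    }
}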