I have an SLSB holding my business logic. How do I use generics to turn the following three methods into one generic method? The first two use the same DB; the third uses a different database. Also, do the methods require any further annotations relating to transactions?
@PersistenceContext(unitName = "db")
private EntityManager myEntityManager;

@PersistenceContext(unitName = "db2")
private EntityManager myDB2EntityManager;

@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void crud(MyEntity myEntity) throws MyException {
    myEntityManager.merge(myEntity);
}

public void crud(ADifferentEntity aDifferentEntity) throws MyException {
    myEntityManager.merge(aDifferentEntity);
}

public void crud(DB2Entity db2Entity) throws MyException {
    myDB2EntityManager.merge(db2Entity);
}
Many thanks in advance.
Cheers!
Not sure if I fully understand the question, but:
Since you have two different entity managers there and two different DBs (assuming you're not saving the same data in duplicate to both DBs at the same time, which it appears you are not), I think it's reasonable to have two different methods in your interface. (I would name them differently to avoid confusion.)
To merge the first two how about using a common interface or inherited abstract base class and changing the parameter type to that common type?
If you require merging of 2 entities from 2 different databases within the same method, you should have JTA configured - as the transaction will span the 2 databases.
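For example, a minimal sketch of that common-type idea (the DbEntity marker interface is hypothetical; the entity names come from your code), assuming both entities live in the same persistence unit:

public interface DbEntity { }

// JPA annotations omitted; both classes are mapped in the "db" persistence unit
public class MyEntity implements DbEntity { /* fields, getters, setters */ }
public class ADifferentEntity implements DbEntity { /* fields, getters, setters */ }

@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void crud(DbEntity entity) throws MyException {
    myEntityManager.merge(entity);
}

The DB2 entity would keep its own method, since it goes through the other entity manager.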
Not too sure what you're trying to do with the generic thing... Are you trying to provide a single method to crud e.g. a T extends AbstractEntity, and then inside that crud method do something like this?

public <T extends AbstractEntity> void crud(T entity) {
    if (entity instanceof DB1Entity) {
        em1.merge(entity);
    } else {
        em2.merge(entity);
    }
}
Or are you trying to do horizontal partitioning? For example:
Multi-user Datasources - Spring + Hibernate,
http://www.jroller.com/kenwdelong/entry/horizontal_database_partitioning_with_spring
This is one of those topics I don't even know how to search for on Google (I tried already; most of the results were for C#), so here I go:
I'm messing around with our huge application, trying to get a brand new DAO/Entity/Service/DTO... euh... thing to work. I've been left more or less on my own and, again more or less, I'm getting to understand some of the hows and maybe one or two of the whys.
The thing is that I've got everything wired all the way "up", from the DB to the Service:
I have a DAO class which executes a query stored on an Entity class. After executing it, it returns the Entity with the values.
The service receives the Entity and, somehow, transforms it into a DTO and returns it to wherever it is needed.
My problem is with the "somehow" part; the code goes like this:
DTOClass dto = ClassTransformerFromEntityToDTO.INSTANCE.apply(entityQueryResult);
I went into ClassTransformerFromEntityToDTO and found this:
public enum ClassTransformerFromEntityToDTO implements Function<EntityClass, DTOClass> {
    INSTANCE;

    @Override
    public DTOClass apply(EntityClass entityInstance) {
        /* code to transform the Entity into a DTO and return it */
    }
}
The interface that this... thing implements is this:
package com.google.common.base;

import com.google.common.annotations.GwtCompatible;
import javax.annotation.Nullable;

@GwtCompatible
public interface Function<F, T> {
    @Nullable
    T apply(@Nullable F paramF);

    boolean equals(@Nullable Object paramObject);
}
I'm in the classic "everyone who was there at the beginning of the project has fled" situation, and no one knows why this is here or what it is (the wisest one told me that maybe it had something to do with Spring). So, I have two main questions (which can more or less be answered together):
1) What is this? What's the point of using an enum with a function to perform a conversion?
2) What's the point of this? Why can't I just make a class with a single method and forget about this wizardry?
I'm not sure there's much to answer here... I'm adding an answer to illustrate my thoughts with some code I've seen, but what you have is horrible. I've actually seen similar stuff. My guess is that that code actually predates Spring. It's used as some sort of Singleton.
I have seen code like this, which is worse:
public interface DTO {
    Object find(Object args);
}

public class ConcreteDTO1 implements DTO {
    ...
}

public class ConcreteDTO2 implements DTO {
    ...
}

public enum DTOType {
    CONCRETE_DTO1(new ConcreteDTO1(someArgs)),
    CONCRETE_DTO2(new ConcreteDTO2(someOtherArgs));

    private final DTO dto;

    DTOType(DTO dto) {
        this.dto = dto;
    }

    public DTO dto() {
        return dto;
    }
}
and then the DTOs are basically accessed through the Enum Type:
DTOType.CONCRETE_DTO1.dto().find(args);
So everyone trying to get hold of a DTO accesses it through the enum. With Spring, you don't need any of that. The IoC container is meant to avoid this kind of nonsense; that's why my guess is that it predates Spring, from some ancient version of the app when Spring was not there. But it could be that someone was wired to do such things regardless of whether Spring was already in the app or not.
For the kind of thing you're trying to do, you're better off with the Visitor pattern. Here's an example from a different answer: passing different type of objects dynamically on same method
It's me, from the future.
It turns out that this construct is a proposed Singleton implementation, at least in "Effective Java, 2nd edition".
So, yeah, Ulise's guess was on the right track.
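For reference, this is roughly the idiom the book describes (a minimal sketch, not the project's actual code):

// Enum-based singleton, as proposed in Effective Java, 2nd edition (Item 3).
// The JVM guarantees exactly one INSTANCE, and it is serialization-safe for free.
public enum SingletonService {
    INSTANCE;

    public void doWork() {
        // business logic goes here
    }
}

// Callers always go through the single instance:
SingletonService.INSTANCE.doWork();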
I am writing a web service and one of the operations in the service is getShortURL(String longURL). In this method I first check whether the longURL exists in the database; if it does, I return it, otherwise I create a shortURL, insert it into the database and return it to the client.
My confusion is how to organize and name my classes. Apart from the web service class, right now I have 3 classes:
URLData: It just has URL attributes and getters and setters.
MongoDB: It connects to the database (right now the connection attributes are hard-coded in it), inserts into the database, and retrieves raw strings from the database.
MongoDBUtil: This class again has an insert(URLData) method, which calls MongoDB.insert() to insert into the database. It also has retrieveURLData, which in turn calls the equivalent MongoDB method to do the actual job.
The web service method calls the URLData setters and then MongoDBUtil.retrieve or insert.
I am thinking that URLData class should be named URLDataBusinessObject and along with setters and getters it can have insert, update and delete methods.
MongoDBUtil can be renamed to UrlDAO and it can have different kinds of retrieve methods.
MongoDB is more of a select/query kind of class; I'm not sure how to design and name it.
Please advise
URLData is fine. Don't bloat your class name with long irrelevant words. If you want to make clear that this is a business object, create a package like com.yourcompany.yourproject.bo for example, then put your URLData class in there.
Yes, UrlDAO is more specific than MongoDBUtil. You can create a com.yourcompany.yourproject.dao package for it.
Looks fine to me. However, if you use some kind of framework (e.g. Spring) you don't have to create your own class to hold the database connection configuration.
I suggest you google for some tutorials on the topic; you will learn both how to use the technology and how to name/organize your classes.
This question might be suited more for http://programmers.stackexchange.com.
Nevertheless: yes, I would change the naming.
1) URLDataBusinessObject? No, never. You're adding 14 characters to a class name without adding any value. URLData was just fine.
2) You should change the naming of your DAO classes to be non-DB specific, unless you explicitly have an architecture aiming at multiple databases and the DB-specific classes perform DB-specific tasks.
I'm assuming this isn't the case and thus you should give it a more general name.
Persistence can be just fine, DAO as well, anything that shows the intended usage without going into specifics is eligible.
3) MongoDBUtil is your interface to the persistence layer, it's not a utility class in heart and soul. What's the purpose of this class? If all you do is chain the method call to MongoDB you might as well drop it and go straight to the latter.
To create a simple layered design, build interfaces for all the persistence-specific operations and interfaces for all the domain objects, then code against those rather than their concrete implementations. That way it's easy to swap out the Mongo persistence layer for a different one, functionality is organised so that others can easily understand it, and you can test against interfaces rather than concrete implementations. You'd have something like:
URLData interface
URLDataDTO class (used in the business layer)
Persistence interface
MongoPersistence class (used in the persistence layer)
My current project does something similar and also works with Mongo. The persistence layer interface has methods like "void put(URLData)". When called, the Mongo implementation constructs a new MongoURLData from the URLData passed in, extracts the DBObject, then persists it. Methods like "URLData get(String id);" work the other way around: the Mongo layer queries the database and creates new URLDataDTO objects from Mongo DBObjects. The web service is then responsible for serialising/deserialising DTO objects that are sent to or received from client applications.
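A minimal sketch of that persistence interface (the put/get signatures follow the description above; the checked exception is an assumption borrowed from the toDTO() method below):

public interface Persistence {
    void put(URLData data) throws StorageException;
    URLData get(String id) throws StorageException;
}

MongoPersistence implements this interface and does the URLData/DBObject conversion using the domain objects shown next.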
My Mongo domain objects all inherit from something like this:
import org.bson.types.ObjectId;

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;

public abstract class MongoDO<T> {

    DBObject dbobject = null;

    public MongoDO(T dto) {
        this.dbobject = new BasicDBObject();
    }

    public MongoDO(DBObject obj) {
        this.setDBObject(obj);
    }

    // Converts this Mongo-backed object into its DTO representation.
    public abstract T toDTO() throws StorageException;

    public DBObject getDBObject() {
        return dbobject;
    }

    public void setDBObject(DBObject obj) {
        this.dbobject = obj;
    }

    public ObjectId getIdObject() {
        return (ObjectId) this.getDBObject().get("_id");
    }

    public void setIdObject(ObjectId id) {
        this.getDBObject().put("_id", id);
    }

    protected String getField(String field) {
        if (dbobject.containsField(field) && dbobject.get(field) != null) {
            return dbobject.get(field).toString();
        } else {
            return null;
        }
    }

    protected void setField(String field, String value) {
        dbobject.put(field, value);
    }
}
An example Mongo implementation would be:
public class MongoURLData extends MongoDO<URLData> implements URLData {

    private static final String FIELD_SHORT_URL = "surl";

    public MongoURLData(URLData dto) {
        super(dto);
    }

    public MongoURLData(DBObject obj) {
        super(obj);
    }

    public String getShortUrl() {
        return getField(FIELD_SHORT_URL);
    }

    public void setShortUrl(String shortUrl) {
        setField(FIELD_SHORT_URL, shortUrl);
    }

    public URLData toDTO() {
        URLDataDTO dto = new URLDataDTO();
        dto.setShortURL(getShortUrl());
        return dto;
    }
}
I have a class Validator, which manages all validation criteria from files and the database. But these criteria are loaded by a Loader, like this:
Validator validator = Loader.load("clients"); //get all from clients.cfg file
What is the best approach to determine from another class which criteria are currently loaded?
Importer importer;
Validator clientsValidator = Loader.load("clients");
Validator addressValidator = Loader.load("address"); ...
importer.validate(data, clientsValidator, addressValidator);
public class Importer {
    public void validate(Data data, Validator... validators) {
        ...
        validateClient(data, /* one of the validators */);
        validateAddress(data, /* another of the validators */);
        ...
    }
}
I need to know in Importer class, which Validator is for Clients, which for Addresses... Any good approaches?
The best way would be to add a field and an accompanying method to Validator that returns the identifier (e.g. "clients") with which it was created.
Alternatively, if by using a different identifier when calling Loader.load() you get back instances of different classes implementing the Validator interface, then you can use the Object.getClass() method to tell those classes apart. If those classes are within a pretty small set you might even get away with using instanceof directly.
We would need more information, such as what Loader does exactly, what Validator is and how much you are allowed to change their code before being able to provide a more concrete answer.
EDIT:
Quite honestly, perhaps you should consider redesigning your data model. As it stands, you can apparently mix clients and addresses without any checks. You should restructure your code so that it can rely on the type safety features of Java.
One way would be to have a generic class/interface Validator<T>, where T would be the class of the validated objects:
public interface Validator<T> {
public boolean validate(T object);
}
You could then have specific Data subclasses for your data, such as Address or Client, and set typed Validator objects on the Importer through specific methods:
public class Importer {
public void addAddressValidator(Validator<Address> validator) {
...
}
public void addClientValidator(Validator<Client> validator) {
...
}
}
This is far safer than mixing all validator objects in a single variadic method call, and it is also the preferred approach of most common frameworks in the wild.
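An illustrative usage sketch (the getZipCode()/getName() accessors are assumptions, not from the original post):

Importer importer = new Importer();

// With Java 8+, the typed validators can simply be lambdas:
importer.addAddressValidator(address -> address.getZipCode() != null);
importer.addClientValidator(client -> client.getName() != null);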
Why not have a getSource() in Validator, which gets set when the Loader loads the source?
Thinking more about the specific question below:
I need to know in Importer class, which Validator is for Clients,
which for Addresses... Any good approaches?
Actually, a better way to do this would be for Loader to return a ClientValidator (an implementation of Validator) for clients and an AddressValidator for addresses.
That way you can avoid the if-else conditions and call validate directly on the Validator.
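A rough sketch of that idea, assuming Validator is (or can be turned into) an interface with a single validate(Data) method:

public interface Validator {
    boolean validate(Data data);
}

public class ClientValidator implements Validator {
    @Override
    public boolean validate(Data data) {
        // client-specific checks go here
        return true;
    }
}

public class AddressValidator implements Validator {
    @Override
    public boolean validate(Data data) {
        // address-specific checks go here
        return true;
    }
}

Loader.load("clients") would return a ClientValidator, Loader.load("address") an AddressValidator, and the Importer can simply loop over the validators and call validate(data) on each.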
Pass the validators by position. You must also check whether a specific validator is null before you use it.
public void validate(Data data,
Validator clientsValidator,
Validator addressValidator) {
...
if (clientsValidator != null) {
validateClient(data, clientsValidator);
}
if (addressValidator != null) {
validateAddress(data, addressValidator);
}
...
}
The following code doesn't work (of course), because the marked line does not compile:
public class MyClass {
//singleton stuff
private static MyClass instance;
private MyClass () {}
public static MyClass getInstance() {
if(instance==null) {
instance = new MyClass ();
}
return instance;
}
// method creating problems
public NonGenericSuperClassOfGenericClass create(Class<?>... classes) {
if(someCondition)
return new GenericClass<classes[0],classes[1]>; // DOES NOT COMPILE
else
return new OtherGenericClass<classes[0]>;
}
}
Therefore, I actually don't know whether "create" will return
GenericClass<classes[0],classes[1]>
or
OtherGenericClass<classes[0]>
which have different numbers of parameters.
This happens because I'm using Spring and I plan to use MongoDB, but in the future I may need to switch to something different (e.g. Hibernate).
The class GenericClass is something like:
GenericClass<PersistentType1, Long>
or
GenericClass<PersistentType2, Long>
where PersistentType1/2 are classes that I need to finally store in the DB, while GenericClass is a sort of proxy to access the Mongo APIs. In fact, it looks like:
public MongoTemplate getTemplate();
public void save(T toInsert);
public List<T> select(Query selectionQuery);
public T selectById(ID id);
public WriteResult update(Query selectionQuery, Update updatedAttributes);
public void delete(T toRemove);
public void delete(Query selectionQuery);
Now, what?
From Controllers (or Entities, if you are picky) I need to instantiate the repository and invoke its methods. This causes the Controllers to be coupled with MongoDB, i.e. they explicitly have to instantiate such a GenericClass, which is actually called MongoRepository and is strictly dependent on Mongo (in fact it is a generic with exactly two "degrees of freedom").
So, I decided to create MyClass, which is a further proxy that isolates the Controllers. In this way, a Controller can get the single instance of MyClass and let it create a new instance of the appropriate repository. In particular, when "somecondition" is true, it means that we want to use MongoRepository (when it is false, we may need to instantiate a Hibernate proxy instead, i.e. HibernateRepository). However, MongoRepository is generic, therefore it requires some form of instantiation, which I hoped to pass as a parameter.
Unfortunately, generics are resolved at compile time, thus they don't work for me, I guess.
How can I fix that?
In order to decouple the underlying persistence store from your application logic I would use the DAO approach.
Define the interface of your DAO with the required methods, e.g. save, update, etc. Then provide an implementation for each persistence provider you might need, e.g. UserAccess might be the interface, which you could implement as HibernateUserAccess and MongoUserAccess. In each implementation you inject the appropriate template, e.g. Mongo or Hibernate, and use that to complete the persistence operation.
The issue you might have is that your load operation would return an instance of User; this would need to vary across persistence providers, i.e. the JPA annotations would be different from the Spring Data annotations needed for MongoDB (a leaky abstraction).
I would probably solve that by creating a User interface to represent the result of the persistence operation and having an implementation for each persistence provider. Either that or return a common model which you build from the results of a JPA or Mongo load.
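A rough sketch of that DAO split (UserAccess, HibernateUserAccess and MongoUserAccess are the names from this answer; the method set, the MongoTemplate wiring and the concrete User class are assumptions for illustration):

public interface UserAccess {
    void save(User user);
    User load(String id);
}

public class MongoUserAccess implements UserAccess {

    private final MongoTemplate template; // injected by Spring

    public MongoUserAccess(MongoTemplate template) {
        this.template = template;
    }

    @Override
    public void save(User user) {
        template.save(user);
    }

    @Override
    public User load(String id) {
        return template.findById(id, User.class);
    }
}

HibernateUserAccess would implement the same interface using an injected EntityManager or SessionFactory instead.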
Following my previous question, DAO and Service layers (JPA/Hibernate + Spring), I decided to use just a single DAO for my data layer (at least at the beginning) in an application using JPA/Hibernate, Spring and Wicket. The use of generic CRUD methods was proposed, but I'm not very sure how to implement this using JPA. Could you please give me an example or share a link regarding this?
Here is an example interface:
public interface GenericDao<T, PK extends Serializable> {
    T create(T t);
    T read(PK id);
    T update(T t);
    void delete(T t);
}
And an implementation:
public class GenericDaoJpaImpl<T, PK extends Serializable>
        implements GenericDao<T, PK> {

    protected Class<T> entityClass;

    @PersistenceContext
    protected EntityManager entityManager;

    public GenericDaoJpaImpl() {
        ParameterizedType genericSuperclass = (ParameterizedType) getClass()
                .getGenericSuperclass();
        this.entityClass = (Class<T>) genericSuperclass
                .getActualTypeArguments()[0];
    }

    @Override
    public T create(T t) {
        this.entityManager.persist(t);
        return t;
    }

    @Override
    public T read(PK id) {
        return this.entityManager.find(entityClass, id);
    }

    @Override
    public T update(T t) {
        return this.entityManager.merge(t);
    }

    @Override
    public void delete(T t) {
        t = this.entityManager.merge(t);
        this.entityManager.remove(t);
    }
}
Based on the article "Don't repeat the DAO" we used this kind of technique for many years. We always struggled with problems in our patterns, until we realized that we had made a big mistake.
By using an ORM tool such as Hibernate or JPA you do not have to think about the DAO and service layers separately. You can use the EntityManager from your service classes, since that is where you know the lifecycle of your transactions and the logic of your entity classes.
Do you save a single minute if you call myDao.saveEntity instead of simply entityManager.persist? No. You will have an unnecessary DAO class that does nothing but wrap the EntityManager. Don't be afraid to write selects in your service classes with the help of the EntityManager (or the Session in Hibernate).
One more note: you should define the borders of your service layer and not let programmers return or expect entity classes across them. The UI or WS layer programmers should not know about entity classes at all, only about DTOs. Entity objects have lifecycles that most programmers do not know about. You will have really serious issues if you store an entity object in session data and try to update it back to the database seconds or hours later. You might not do that, but a UI programmer who only knows the parameter and return types of your service layer would, to save some lines of code.
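A minimal sketch of that style, assuming Spring transaction management and hypothetical UrlEntity/UrlDto classes: the service talks to the EntityManager directly, and only a DTO crosses the service boundary.

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UrlService {

    @PersistenceContext
    private EntityManager em;

    @Transactional(readOnly = true)
    public UrlDto findUrl(Long id) {
        UrlEntity entity = em.find(UrlEntity.class, id);
        // map to a DTO so no managed entity leaks out of the service layer
        return entity == null ? null : new UrlDto(entity.getShortUrl(), entity.getLongUrl());
    }
}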
I was looking for this same thing. I found what appears to be exactly that: the Spring Data JPA project provided by SpringSource. It is a code port from Hades and has now (early 2011) been absorbed into Spring and better integrated.
It allows you to use a single DAO (SimpleJpaRepository) with a static create, or to extend the base JpaRepository interface to create an object-specific DAO with ready-made CRUD+ methods. It also allows Grails-like queries just by using property names in the method name, declared in the interface (no implementation required!), e.g. findByLastname(String lastName);
Looks very promising - being part of the Spring projects will certainly ensure some future for it too.
I have begun implementing this in my upcoming project now.
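For illustration, a repository in that style might look like this (User and the lastname property are placeholders, not from the original answer):

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

public interface UserRepository extends JpaRepository<User, Long> {

    // Spring Data derives the query from the method name; no implementation needed.
    List<User> findByLastname(String lastname);
}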
If you are looking for a third-party implementation, you can check
http://www.altuure.com/projects/yagdao/ . It is an annotation-based generic DAO framework which supports JPA and Hibernate.
You may also have a look at http://codeblock.engio.net/data-persistence-and-the-dao-pattern/
The related code can be found on github https://github.com/bennidi/daoism
It has integration with Spring and configuration examples for Hibernate and EclipseLink