I'm on a project that uses the latest Spring+Hibernate for persistence and for implementing a REST API.
The different tables in the database contain lots of records which are in turn pretty big as well. So, I've created a lot of DAOs to retrieve different levels of detail and their accompanying DTOs.
For example, say I have an Employee table in the database that contains tons of information about each employee. If I know that any client using my application would benefit greatly from retrieving different levels of detail of an Employee entity (instead of being bombarded by the entire entity every time), what I've been doing so far is something like this:
class EmployeeL1DetailsDto
{
String id;
String firstName;
String lastName;
}
class EmployeeL2DetailsDto extends EmployeeL1DetailsDto
{
Position position;
Department department;
PhoneNumber workPhoneNumber;
Address workAddress;
}
class EmployeeL3DetailsDto extends EmployeeL2DetailsDto
{
int yearsOfService;
PhoneNumber homePhoneNumber;
Address homeAddress;
BigDecimal salary;
}
And so on...
Here you see that I've divided the Employee information into different levels of detail.
The accompanying DAO would look something like this:
class EmployeeDao
{
...
public List<EmployeeL1DetailsDto> getEmployeeL1Detail()
{
...
// uses a criteria-select query to retrieve only L1 columns
return list;
}
public List<EmployeeL2DetailsDto> getEmployeeL2Detail()
{
...
// uses a criteria-select query to retrieve only L1+L2 columns
return list;
}
public List<EmployeeL3DetailsDto> getEmployeeL3Detail()
{
...
// uses a criteria-select query to retrieve only L1+L2+L3 columns
return list;
}
.
.
.
// And so on
}
I've been using Hibernate's aliasToBean() to auto-map the retrieved entities into the DTOs. Still, the amount of boiler-plate in the process as a whole (all the DTOs, DAO methods, URL parameters for the level of detail wanted, etc.) is a bit worrying and makes me think there might be a cleaner approach to this.
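For reference, each DAO method boils down to a criteria projection plus aliasToBean(); here is a rough sketch of the L1 variant (assuming an injected sessionFactory field):
public List<EmployeeL1DetailsDto> getEmployeeL1Detail() {
    Criteria criteria = sessionFactory.getCurrentSession()
        .createCriteria(Employee.class)
        .setProjection(Projections.projectionList()
            .add(Projections.property("id"), "id")
            .add(Projections.property("firstName"), "firstName")
            .add(Projections.property("lastName"), "lastName"))
        // maps the aliased columns onto the matching DTO fields by name
        .setResultTransformer(Transformers.aliasToBean(EmployeeL1DetailsDto.class));
    return criteria.list();
}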
So, my question is: Is there a better pattern to follow to retrieve different levels of detail from a persisted entity?
I'm pretty new to Spring and Hibernate, so feel free to point anything that is considered basic knowledge that you think I'm not aware of.
Thanks!
I would go with as few different queries as possible. I would rather make associations lazy in my mappings, and then let them be initialized on demand with appropriate Hibernate fetch strategies.
I think there is nothing wrong with having multiple different DTO classes per business model entity, and they often make the code more readable and maintainable.
However, if the number of DTO classes tends to explode, then I would strike a balance between readability (maintainability) and performance.
For example, if a DTO field is not used in a context, I would leave it as null, or fill it in anyway if that is really not expensive. Then, if it is null, you could instruct your object marshaller to exclude null fields when producing the REST service response (JSON, XML, etc.) if that really bothers the service consumer. Or, if you do fill it in, it's always welcome later when you add new features to the application and the field starts being used in a context.
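If the marshaller happens to be Jackson 2.x, for example, excluding null fields is a single annotation; a minimal sketch (the DTO and its fields are just illustrative):
import com.fasterxml.jackson.annotation.JsonInclude;

@JsonInclude(JsonInclude.Include.NON_NULL) // null fields are simply left out of the serialized JSON
public class EmployeeDto {
    private String id;
    private String firstName;
    private Address homeAddress; // stays null in "light" responses and is then omitted

    // getters and setters omitted for brevity
}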
You will have to define the different granularity versions in one way or another. You can try to have sub-objects that are not loaded / are set to null (as recommended in other answers), but it can easily get quite awkward, since you will start to structure your data by security concerns and not by the domain model.
So doing it with individual classes is after all not such a bad approach.
You might want to have it more dynamic (maybe because you want to extend your data model on the DB side with more data).
If that's the case, you might want to move the definition out of the code into some configuration (it could even be dynamic at runtime). This will of course require a dynamic data model on the Java side as well, like using a hashmap (see here on how to do that). You gain a dynamic data model, but lose type safety (at least to a certain extent). In other languages that would probably feel natural, but in Java it's less common.
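A minimal sketch of such a map-backed record, just to illustrate the trade-off (property names would then come from configuration instead of typed fields):
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// loosely typed record: each "level of detail" is just a configured list of property names
public class DynamicRecord {
    private final Map<String, Object> values = new HashMap<String, Object>();

    public void set(String property, Object value) {
        values.put(property, value);
    }

    public Object get(String property) {
        return values.get(property);
    }

    public Set<String> properties() {
        return values.keySet();
    }
}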
It would then be up to your HQL to define how you want to populate your object.
The path you want to take now depends a lot on the context and on how your object will get used.
Another approach is to use only domain objects at the DAO level, and to define the needed subsets of information as DTOs for each usage. Then convert the Employee entity to each of the DTOs using a generic DTO converter, as I have done lately in my professional Spring work. An MIT-licensed module is available in the Maven repository as the artifact dtoconverter,
with further info and user guidance on the author's wiki:
http://ratamaa.fi/trac/dtoconverter
The example page there gives you the quickest idea of how it is used.
Happy hunting...
Blaze-Persistence Entity Views have been created for exactly such a use case. You define the DTO structure as interface or abstract class and have mappings to your entity's attributes. When querying, you just pass in the class and the library will take care of generating an optimized query for the projection.
Here is a quick example:
@EntityView(Cat.class)
public interface CatView {
@IdMapping("id")
Integer getId();
String getName();
}
CatView is the DTO definition, and here comes the querying part:
CriteriaBuilder<Cat> cb = criteriaBuilderFactory.create(entityManager, Cat.class);
cb.from(Cat.class, "theCat")
.where("father").isNotNull()
.where("mother").isNotNull();
EntityViewSetting<CatView, CriteriaBuilder<CatView>> setting = EntityViewSetting.create(CatView.class);
List<CatView> list = entityViewManager
.applySetting(setting, cb)
.getResultList();
Note that the essential part is that the EntityViewSetting has the CatView type and is applied onto an existing query. The generated JPQL/HQL is optimized for CatView, i.e. it only selects (and joins!) what it really needs.
SELECT
theCat.id,
theCat.name
FROM
Cat theCat
WHERE theCat.father IS NOT NULL
AND theCat.mother IS NOT NULL
I don't understand how to use RepositoryItem in ATG. How should I construct customized logic on it?
Do I need to create a usual JavaBean over the RepositoryItem, or should I use it as is?
I will try to explain:
Logic on repositoryItem:
RepositoryItem store = getRepository().getItem(..);
String address = store.getPropertyValue(..);
Logic on JavaBean:
class StoreBean {
String address;
StoreBean(RepositoryItem store) {
address = store.getPropertyValue(..);
}
}
Then I can use StoreBean however I want, to get its fields (lazy loading them, for example).
What would be best practice in ATG?
It is a matter of preference.
What you do not get with RepositoryItem objects is strong type checking. You must either make assumptions about the type of RepositoryItem you are working with, or you have to do manual checks in your code (see example below). Additionally, since the RepositoryItem properties are stored as metadata, you have to know 1) the actual names of the properties from the XML repository descriptor, and 2) the types, which requires casting (example: String firstName = (String) item.getPropertyValue("firstName");). Here is an example of a validation to ensure the RepositoryItem object is of type "sku":
RepositoryItemDescriptor skuItemDescriptor = getCatalogTools().getCatalog().getItemDescriptor(getCatalogTools().getBaseSKUItemType());
if (!RepositoryUtils.isTypeOfItemDesc(itemDescriptor, skuItemDescriptor)) {
throw new IllegalArgumentException("RepositoryItem must be of type " + getCatalogTools().getBaseSKUItemType());
}
If you take the approach of not using "JavaBeans", then you are increasing the risk of runtime errors in your application. My suggestion is that you strike a healthy balance between using RepositoryItem objects and wrapper objects. For critical items that you plan to use across a large part of your code base, I suggest using a wrapper object.
I suggest that if you create wrapper objects, then for consistency you follow the same design pattern that Oracle Commerce uses. For example, the "order" item is wrapped by OrderImpl and implements the ChangedProperties interface.
public class OrderImpl
extends CommerceIdentifierImpl
implements Order, ChangedProperties
http://docs.oracle.com/cd/E52191_03/Platform.11-1/apidoc/atg/commerce/order/OrderImpl.html
ATG's out-of-the-box repository implementations do not use JavaBeans for the most part. One big disadvantage of using JavaBeans and lazy loading them into memory is that you lose many repository caching features and increase your memory footprint. For instance, you will not be able to monitor your cache statistics or invalidate the cache periodically. You will also have the overhead of instantiation when a query returns a huge RepositoryItem result set.
Instead you can also use DynamicBean, which lets you refer to repository properties similarly to Java beans, for instance Profile.city.
If you only want to wrap them so that developers don't accidentally parse them incorrectly, you can write a util class per repository for the various types of read/write operations and centralize your type safety.
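A minimal sketch of such a util class (the "address" property and the store item are just illustrative; getPropertyValue/setPropertyValue are the standard RepositoryItem API):
import atg.repository.MutableRepositoryItem;
import atg.repository.RepositoryItem;

public final class StoreRepositoryUtils {

    private StoreRepositoryUtils() { }

    // one typed accessor per property keeps the casts and property-name strings in one place
    public static String getAddress(RepositoryItem store) {
        return (String) store.getPropertyValue("address");
    }

    public static void setAddress(MutableRepositoryItem store, String address) {
        store.setPropertyValue("address", address);
    }
}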
I have implemented some REST APIs with Spring MVC + Jackson + Hibernate.
All I need to do is retrieve objects from the database and return them as a list; the conversion to JSON is implicit.
But there is one problem. What if I want to add some more information to those objects before returning the response? For example, I am returning a list of "store" objects, but I want to add the name of the person who is attending right now.
Java does not have dynamic types (which is how I would solve this problem in C#). So, how do we solve this problem in Java?
I thought about this, and have come up with a few not so elegant solution.
1. Use the factory pattern: define another class which contains the name of that person.
2. Convert the store objects to JSON objects (ObjectNode from Jackson), put a new attribute into the JSON objects, and return the JSON objects.
3. Use reflection to inject a new property into the store objects and return them; maybe the Spring MVC conversion will generate the JSON correctly?
Option 1 looks bad; it will end up with a lot of boilerplate classes that aren't really useful. Option 2 looks OK, but is this the best we can do with Spring MVC?
option 1
Actually your JSON domain is different from your core domain. I would decouple them and create a separate domain for your JSON objects, as this is a separate concern and you don't want to mix it. This however might require a lot of 1-to-1 mapping. This is your option 1, with boilerplate. There are frameworks that help you with the boilerplate (such as Dozer or MapStruct), but you will always have a performance penalty with frameworks that use generic reflection.
option 2, 3
If you really insist on hacking it in because it's only a few exceptions and not a common pattern, I would certainly not alter the JSON nodes or use reflection (your options 2 and 3). That is certainly not the way to do it in Java.
option 4 [hack]
What you could do is extend your core domain with new types that contain the extra information and in a post-processing step replace the old objects with the new domain objects:
UnaryOperator<Store> toJsonStores = domainStore -> toJsonStore(domainStore);
list.replaceAll(toJsonStores);
where JSONStore extends the domain Store and toJsonStore maps the domain Store to a JSONStore object by adding the person's name.
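A rough sketch of what that could look like, assuming the domain Store class is extendable and lookupAttendantName is a hypothetical helper that fetches the extra data:
// JSON-facing subtype that carries the extra field
public class JSONStore extends Store {
    private String attendantName;

    public String getAttendantName() { return attendantName; }
    public void setAttendantName(String attendantName) { this.attendantName = attendantName; }
}

private Store toJsonStore(Store domainStore) {
    JSONStore jsonStore = new JSONStore();
    // copy whatever fields the domain Store exposes, then add the extra data
    jsonStore.setAttendantName(lookupAttendantName(domainStore)); // lookupAttendantName is hypothetical
    return jsonStore;
}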
That way you preserve type safety and keep the codebase comprehensible. But if you have to do it in more than a few exceptional cases, you should change strategy.
Are you looking for a REST service that returns a list of objects containing not just one type, but many types of objects? If so, have you tried making the return type of that service method List<Object>?
I recommend creating an abstract class BaseRestResponse that is extended by all the items in the list you want to return from your REST service method.
Then make the return type List<BaseRestResponse>.
BaseRestResponse should have all the common properties, and the customized object can have the name property, as you said.
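A minimal sketch of that shape (StoreResponse and its attendantName field are made up for this example):
public abstract class BaseRestResponse {
    private String id;       // common properties shared by every item in the list
    private String address;

    // getters and setters omitted for brevity
}

public class StoreResponse extends BaseRestResponse {
    private String attendantName; // the store-specific extra property

    // getters and setters omitted for brevity
}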
I plan to move from PHP to Java for writing data-driven web apps. I obviously want to have a layer handling persistent data. In PHP with Doctrine (1.x) the following things can be done thru a single interface (PHP's ArrayAccess):
Representing data structures in code
Getting structured data from the database thru Doctrine
Representing structured data in an HTML form
So it is essential that I can have a layer for forms like:
$properties = array (
"minlength" => 2,
"maxlength" => 30,
);
new TextInput ("name", $properties);
... which is oblivious to the underlying mechanics. It can load and save (possibly structured) data from all the sources above thru a single interface.
When saving data to a record it can not call setName($value). It can only call set("name", $value). (Of course it could be done thru reflection, but I hope I don't have to elaborate on why it's a bad idea).
So is there any ORM in Java which:
Implements the native collection interfaces. java.util.Map for example.
Maps DB relations as collections like author.get("books").put(newBook)
Has the right triggers to implement complex logic (like permissions or external files attached to fields).
Map access for POJO classes can be achieved thru a superclass implementing Map via Hibernate's ClassMetadata interface, like:
abstract class MappedRecord implements java.util.Map<String, Object> {
    private ClassMetadata classMeta;

    public MappedRecord() {
        classMeta = mySessionFactory.getClassMetadata(this.getClass());
    }

    public Object put(String s, Object o) {
        Object previous = classMeta.getPropertyValue(this, s, EntityMode.POJO);
        classMeta.setPropertyValue(this, s, o, EntityMode.POJO);
        return previous; // Map.put is expected to return the previous value
    }
}
Then when you extend MappedRecord in your persistent classes, you can call:
User u = new User();
u.put("name", "John");
Safely getting hold of mySessionFactory is a tricky question, though.
You may want to have a look at Hibernate and JPA.
I think Hibernate is the choice, but I'm not sure I understood your requirement about triggers. I think that's more of an application-layer concern than an ORM-layer one.
I'm in a position where our company has a database search service that is highly configurable, for which it's very useful to configure queries in a programmatic fashion. The Criteria API is powerful but when one of our developers refactors one of the data objects, the criteria restrictions won't signal that they're broken until we run our unit tests, or worse, are live and on our production environment. Recently, we had a refactoring project essentially double in working time unexpectedly due to this problem, a gap in project planning that, had we known how long it would really take, we probably would have taken an alternative approach.
I'd like to use the Example API to solve this problem. The Java compiler can loudly indicate that our queries are borked if we are specifying 'where' conditions on real POJO properties. However, there's only so much functionality in the Example API and it's limiting in many ways. Take the following example:
Product product = new Product();
product.setName("P%");
Example prdExample = Example.create(product);
prdExample.excludeProperty("price");
prdExample.enableLike();
prdExample.ignoreCase();
Here, the property "name" is being queried against (where name like 'P%'), and if I were to remove or rename the field "name", we would know instantly. But what about the property "price"? It's being excluded because the Product object has some default value for it, so we're passing the "price" property name to an exclusion filter. Now if "price" got removed, this query would be syntactically invalid and you wouldn't know until runtime. LAME.
Another problem - what if we added a second where clause:
product.setPromo("Discounts up to 10%");
Because of the call to enableLike(), this example will match on the promo text "Discounts up to 10%", but also "Discounts up to 10,000,000 dollars" or anything else that matches. In general, the Example object's query-wide modifications, such as enableLike() or ignoreCase() aren't always going to be applicable to every property being checked against.
Here's a third, and major, issue - what about other special criteria? There's no way to get every product with a price greater than $10 using the standard example framework. There's no way to order results by promo, descending. If the Product object joined on some Manufacturer, there's no way to add a criterion on the related Manufacturer object either. There's no way to safely specify the FetchMode on the criteria for the Manufacturer either (although this is a problem with the Criteria API in general - invalid fetched relationships fail silently, even more of a time bomb)
For all of the above examples, you would need to go back to the Criteria API and use string representations of properties to make the query - again, eliminating the biggest benefit of Example queries.
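For comparison, the plain Criteria fallback for the price and ordering cases looks roughly like this, using Hibernate's org.hibernate.criterion.Restrictions and Order (note that "price" and "promo" are plain strings the compiler cannot check; the BigDecimal type for price is an assumption):
Criteria criteria = session.createCriteria(Product.class)
    .add(Restrictions.like("name", "P%"))
    .add(Restrictions.gt("price", new BigDecimal("10")))
    .addOrder(Order.desc("promo"));
List<Product> products = criteria.list();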
What alternatives exist to the Example API that can get the kind of compile-time advice we need?
My company gives developers days when we can experiment and work on pet projects (a la Google), and I spent some time working on a framework to use Example queries while getting around the limitations described above. I've come up with something that could be useful to other people interested in Example queries too. Here is a sample of the framework using the Product example.
Criteria criteriaQuery = session.createCriteria(Product.class);
Restrictions<Product> restrictions = Restrictions.create(Product.class);
Product example = restrictions.getQueryObject();
example.setName(restrictions.like("N%"));
example.setPromo("Discounts up to 10%");
restrictions.addRestrictions(criteriaQuery);
Here's an attempt to fix the issues in the code example from the question - the problem of the default value for the "price" field no longer exists, because this framework requires that criteria be explicitly set. The second problem of having a query-wide enableLike() is gone - the matcher is only on the "name" field.
The other problems mentioned in the question are also gone in this framework. Here are example implementations.
product.setPrice(restrictions.gt(10)); // price > 10
product.setPromo(restrictions.order(false)); // order by promo desc
Restrictions<Manufacturer> manufacturerRestrictions
= Restrictions.create(Manufacturer.class);
//configure manuf restrictions in the same manner...
product.setManufacturer(restrictions.join(manufacturerRestrictions));
/* there are also joinSet() and joinList() methods
for one-to-many relationships as well */
Even more sophisticated restrictions are available.
product.setPrice(restrictions.between(45,55));
product.setManufacturer(restrictions.fetch(FetchMode.JOIN));
product.setName(restrictions.or("Foo", "Bar"));
After showing the framework to a coworker, he mentioned that many data mapped objects have private setters, making this kind of criteria setting difficult as well (a different problem with the Example API!). So, I've accounted for that too. Instead of using setters, getters are also queryable.
restrictions.is(product.getName()).eq("Foo");
restrictions.is(product.getPrice()).gt(10);
restrictions.is(product.getPromo()).order(false);
I've also added some extra checking on the objects to ensure better type safety - for example, the relative criteria (gt, ge, le, lt) all require a value of type ? extends Comparable for the parameter. Also, if you use a getter in the style specified above and there's a @Transient annotation present on the getter, it will throw a runtime error.
But wait, there's more!
If you like that Hibernate's built-in Restrictions utility can be statically imported, so that you can do things like criteria.add(eq("name", "foo")) without making your code really verbose, there's an option for that too.
Restrictions<Product> restrictions = new Restrictions<Product>(){
public void query(Product queryObject){
queryObject.setPrice(gt(10));
queryObject.setPromo(order(false));
//gt() and order() inherited from Restrictions
}
};
That's it for now - thank you very much in advance for any feedback! We've posted the code on Sourceforge for those that are interested. http://sourceforge.net/projects/hqbe2/
The API looks great!
Restrictions.order(boolean) smells like control coupling. It's a little unclear what the values of the boolean argument represent.
I suggest replacing or supplementing with orderAscending() and orderDescending().
Have a look at Querydsl. Their JPA/Hibernate module requires code generation. Their Java collections module uses proxies but cannot be used with JPA/Hibernate at the moment.
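A rough sketch of what the JPA module gives you, assuming a QProduct metamodel class generated by Querydsl's annotation processor from the Product entity (Querydsl 4 style; the BigDecimal price type is an assumption):
import com.querydsl.jpa.impl.JPAQuery;

QProduct product = QProduct.product;

List<Product> result = new JPAQuery<Void>(entityManager)
    .select(product)
    .from(product)
    .where(product.name.startsWith("P"),
           product.price.gt(new BigDecimal("10"))) // renaming the entity field breaks this at compile time
    .fetch();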
First post here...
I normally develop using PHP and Symfony with Propel and ActionScript 3 (Flex 3), using AMF services. This weekend I'm trying my hand at creating a BlazeDS application using Java and Hibernate, and I'm beginning to like Java a lot!!!
After some research this weekend and using the Hibernate Synchronizer plugin for Eclipse, creating classes mapped (excuse my terminology) to tables seems fairly easy; I'm swiftly getting closer to understanding these aspects.
What I want to know, however, is how to develop a more comprehensive architecture for my database, specifically in terms of queries, their results, and iterating over them. Let me elaborate:
If I have for example an authors table, I'll be creating an Author class which is mapped to the table, with getters and setters etc. This part looks pretty standard in terms of Hibernate.
Furthermore, I would probably need a Peer class (like in Propel for PHP) to, for example, do queries which return a List/Array containing Author instances.
So I would have (speaking under correction) the following classes:
Author
- represents a single row with getters and setters.
AuthorsPeer
- has for example functions like AuthorsPeer::getAuthorsByCountry('USA'); or
- AuthorsPeer::getRetiredAuthors(); or AuthorsPeer::getAuthorsWithSwineFlu();
- ... get the picture. :)
- which return an AuthorsList as its result...see next point.
AuthorsList
- a collection/list of Author with functions for iterating getNext(), getPrevious() etc.
Is this the way it's meant to be done in Hibernate, or am I missing the plot? Am I noticing a design pattern here without recognizing it? Do I need to have something like AuthorsList, or does Hibernate provide a generic solution? All in all, what's the norm for dealing with these aspects?
Also, if I have a Books table, these are related to Authors; if I call, say,
Author myAuthor = Authors::getAuthor(primaryId, includeBooks);
can Hibernate deal with returning me a result that I can use as follows:
String title = myAuthor.books[0].title;
What I'm asking is: do queries to Authors relating to the Books table result in Hibernate returning Authors with their Books all nested inside the Author "value object", ready for me to pounce on with some iteration?
Thanks in advance!
The answer to most of your questions is yes. What you should really look into is how to map one-to-many relationships in Hibernate. In your case, the author is the "one" and the books are the many. Associative mappings are described there. If you are using Annotations with Hibernate (which I highly recommend over XML files), you can find out how to do associations with annotations here.
The great thing about Hibernate is that you don't have to do much to manage the relationships. The following is a code snippet of what you would need for your Author-Book relationship.
@Entity public class Author {
@OneToMany(mappedBy="author")
public List<Book> getBooks() {
return books;
}
...
}
@Entity public class Book {
public String getName() {
return bookName;
}
@ManyToOne
public Author getAuthor() {
return author;
}
...
}
To get the Authors from the table, you would use the Criteria API or HQL. I prefer the Criteria API because it goes with Java's general programming feel (unlike HQL). But your preference may differ.
On a side note: you should not create an "AuthorsList" class. Java generics do the job for you:
List<Author> myAuthors = new ArrayList<Author>();
//iterate like so
for(Author author: myAuthors) {
//do something with current author
}
//or just grab 1
Author firstAuthor = myAuthors.get(0);
Java handles compile time checks against the types so you don't have to.
Basically the answer is yes, kinda. :) If you fetch a single Author and then access the collection of Books, Hibernate will lazy-load the data for you. This actually isn't the most performant way to do it, but it is convenient for the programmer. Ideally you want to use something like HQL (Hibernate Query Language) or the Criteria API to execute a query and "eager fetch" the collection of books, so that all the data can be loaded with a single SQL query.
http://docs.jboss.org/hibernate/stable/core/reference/en/html/queryhql.html
Otherwise, if you access an Author that has, say, 500 Books associated with it, Hibernate may issue 500 + 1 SQL queries against the database, which is of course incredibly slow. If you want to read more about this, Google the "n + 1 selects problem", as it's commonly referred to.
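A minimal sketch of such an eager fetch in HQL (the session variable and the Author/books mapping follow the earlier snippets; the country property is assumed from the question's example):
// "join fetch" loads the authors' books in the same SQL statement instead of one query per collection access
List<Author> authors = session.createQuery(
        "select distinct a from Author a join fetch a.books where a.country = :country")
    .setParameter("country", "USA")
    .list();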