ATG JavaBean over RepositoryItem - java

I don't understand how to use RepositoryItem in ATG, or how to build customized logic on top of it.
Do I need to create a regular JavaBean over the RepositoryItem, or should I use it as is?
I will try to explain:
Logic directly on the RepositoryItem:
RepositoryItem store = getRepository().getItem(..);
String address = (String) store.getPropertyValue(..); // getPropertyValue returns Object, so a cast is needed
Logic on a JavaBean:
class StoreBean {
    String address;

    StoreBean(RepositoryItem store) {
        address = (String) store.getPropertyValue(..);
    }
}
Then I can use StoreBean however I want, e.g. to access its fields (lazy-loading them, for example).
What is the best practice in ATG?

It is a matter of preference.
What you do not get with RepositoryItem objects is strong type checking. You must either make assumptions about the type of RepositoryItem you are working with, or do manual checks in your code (see the example below). Additionally, since RepositoryItem properties are stored as metadata, you have to know 1) the actual names of the properties from the XML repository descriptor and 2) the property types, which requires type casting (example: String firstName = (String) item.getPropertyValue("firstName");). Here is an example of a validation to ensure the RepositoryItem object is of type "sku":
RepositoryItemDescriptor itemDescriptor = item.getItemDescriptor(); // 'item' is the RepositoryItem being validated
RepositoryItemDescriptor skuItemDescriptor = getCatalogTools().getCatalog().getItemDescriptor(getCatalogTools().getBaseSKUItemType());
if (!RepositoryUtils.isTypeOfItemDesc(itemDescriptor, skuItemDescriptor)) {
    throw new IllegalArgumentException("RepositoryItem must be of type " + getCatalogTools().getBaseSKUItemType());
}
If you take the approach of not using "JavaBeans", then you increase the risk of runtime errors in your application. My suggestion is to strike a healthy balance between using RepositoryItem objects and wrapper objects. For critical items you plan to use across a large amount of your code base, I suggest using a wrapper object.
I suggest that if you create wrapper objects, then for consistency you follow the same design pattern that Oracle Commerce uses. For example, the "order" item is wrapped by OrderImpl, which implements the ChangedProperties interface.
public class OrderImpl
    extends CommerceIdentifierImpl
    implements Order, ChangedProperties
http://docs.oracle.com/cd/E52191_03/Platform.11-1/apidoc/atg/commerce/order/OrderImpl.html
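For a simple item like the store in the question, a thin wrapper in that spirit might look like the sketch below. This is a hedged illustration, not ATG API: the "address" property name is assumed from the question, and the wrapper reads through to the RepositoryItem rather than copying values, so repository caching still applies.
import atg.repository.RepositoryItem;

public class StoreWrapper {
    private final RepositoryItem item;

    public StoreWrapper(RepositoryItem item) {
        this.item = item; // keep the item and read through it instead of copying properties
    }

    public String getAddress() {
        return (String) item.getPropertyValue("address"); // property name from the XML descriptor
    }
}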

ATG out-of-the-box repository implementations do not use JavaBeans for the most part. One big disadvantage of using JavaBeans and lazy-loading them into memory is that you lose many repository caching features and increase your memory footprint. For instance, you will not be able to monitor your cache statistics or invalidate the cache periodically. You will also have the overhead of instantiation when a query returns a huge RepositoryItem result set.
Instead you can also use DynamicBeans, which lets you refer to repository properties similarly to JavaBeans, for instance Profile.city.
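As a rough sketch of that style of access (assuming the atg.beans.DynamicBeans helper; the profile lookup and the "city" property are illustrative, not from the question):
import atg.beans.DynamicBeans;
import atg.repository.RepositoryItem;

RepositoryItem profile = getProfileRepository().getItem(profileId, "user"); // hypothetical lookup
// DynamicBeans resolves the property name at runtime and throws
// PropertyNotFoundException if the item has no such property.
String city = (String) DynamicBeans.getPropertyValue(profile, "city");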
If you only want to wrap them so that developers don't accidentally parse them incorrectly, you can write a util class per repository for the various read/write operations and centralize your type safety.
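A minimal sketch of such a util class for the store item from the question (the property name is assumed; a real class would cover every property in the descriptor):
import atg.repository.MutableRepositoryItem;
import atg.repository.RepositoryItem;

public final class StoreItemUtils {
    public static final String ADDRESS = "address"; // name from the XML repository descriptor

    private StoreItemUtils() {}

    public static String getAddress(RepositoryItem store) {
        return (String) store.getPropertyValue(ADDRESS); // the cast lives in exactly one place
    }

    public static void setAddress(MutableRepositoryItem store, String address) {
        store.setPropertyValue(ADDRESS, address);
    }
}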

Related

What is the best practice to return dynamic type from a REST API in SpringMVC

I have implemented some REST APIs with Spring MVC + Jackson + Hibernate.
All I need to do is retrieve objects from the database and return them as a list; the conversion to JSON is implicit.
But there is one problem: sometimes I want to add some more information to those objects before returning the response. For example, when returning a list of "store" objects, I also want to add the name of the person who is attending the store right now.
Java does not have dynamic types (which is how I would solve this problem in C#). So how do we solve this problem in Java?
I thought about this and came up with a few not-so-elegant solutions:
1. Use the factory pattern: define another class which contains the name of that person.
2. Convert the store objects to JSON objects (ObjectNode from Jackson), put a new attribute into the JSON objects, and return those.
3. Use reflection to inject a new property into the store object and return it; maybe the Spring MVC conversion will generate the JSON correctly?
Option 1 looks bad; it ends up with a lot of boilerplate classes that aren't really useful. Option 2 looks OK, but is this the best we can do with Spring MVC?
option 1
Actually your JSON domain is different from your core domain. I would decouple them and create a separate domain for your JSON objects, as this is a separate concern and you don't want to mix them. This however might require a lot of 1-to-1 mapping. This is your option 1, with boilerplate. There are frameworks that help you with the boilerplate (such as Dozer or MapStruct), but you will always have a performance penalty with frameworks that use generic reflection.
option 2, 3
If you really insist on hacking it in because it's only a few exceptions and not a common pattern, I would certainly not alter the JSON nodes or use reflection (your options 2 and 3). This is certainly not the way to do it in Java.
option 4 [hack]
What you could do is extend your core domain with new types that contain the extra information and in a post-processing step replace the old objects with the new domain objects:
UnaryOperator<Store> toJsonStores = domainStore -> toJsonStore(domainStore); // Store, not String: the list holds domain Store objects
list.replaceAll(toJsonStores);
where JSONStore extends the domain Store and toJsonStore maps the domain Store to the JSONStore object by adding the person's name.
That way you preserve type safety and keep the codebase comprehensible. But if you have to do this in more than a few exceptional cases, you should change strategy.
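Spelled out, the hack might look like this sketch (it assumes a Store domain class with a copy constructor, and findAttendantName is a hypothetical lookup):
// JSONStore carries the extra field; Jackson serializes it together with
// the inherited Store properties.
class JSONStore extends Store {
    private final String attendantName;

    JSONStore(Store store, String attendantName) {
        super(store); // assumes a copy constructor on the domain Store
        this.attendantName = attendantName;
    }

    public String getAttendantName() {
        return attendantName;
    }
}

// The post-processing step from above:
Store toJsonStore(Store domainStore) {
    return new JSONStore(domainStore, findAttendantName(domainStore)); // findAttendantName is hypothetical
}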
Are you looking for a REST service that returns a list of objects containing not just one type, but many types of objects? If so, have you tried making the return type of that service method List<Object>?
I recommend creating an abstract class BaseRestResponse that is extended by all the items in the list you want your REST service method to return.
Then make the return type List<BaseRestResponse>.
BaseRestResponse should have all the common properties, and the customized object can have the extra property (the person's name) as you said.
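A small sketch of that layout (the field names are illustrative):
// Common properties shared by every item in the response list.
public abstract class BaseRestResponse {
    private String id;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
}

// Store-specific response type adding the attendant's name.
public class StoreResponse extends BaseRestResponse {
    private String attendantName;

    public String getAttendantName() { return attendantName; }
    public void setAttendantName(String attendantName) { this.attendantName = attendantName; }
}
The controller method then declares List<BaseRestResponse> as its return type, and Jackson serializes each element with its concrete fields.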

DTOs with different granularity

I'm on a project that uses the latest Spring+Hibernate for persistence and for implementing a REST API.
The different tables in the database contain lots of records which are in turn pretty big as well. So, I've created a lot of DAOs to retrieve different levels of detail and their accompanying DTOs.
For example, suppose I have an Employee table in the database that contains tons of information about each employee, and I know that any client using my application would benefit greatly from retrieving different levels of detail of an Employee entity (instead of being bombarded by the entire entity every time). What I've been doing so far is something like this:
class EmployeeL1DetailsDto
{
    String id;
    String firstName;
    String lastName;
}

class EmployeeL2DetailsDto extends EmployeeL1DetailsDto
{
    Position position;
    Department department;
    PhoneNumber workPhoneNumber;
    Address workAddress;
}

class EmployeeL3DetailsDto extends EmployeeL2DetailsDto
{
    int yearsOfService;
    PhoneNumber homePhoneNumber;
    Address homeAddress;
    BigDecimal salary;
}
And so on...
Here you see that I've divided the Employee information into different levels of detail.
The accompanying DAO would look something like this:
class EmployeeDao
{
    ...
    public List<EmployeeL1DetailsDto> getEmployeeL1Detail()
    {
        ...
        // uses a criteria-select query to retrieve only L1 columns
        return list;
    }

    public List<EmployeeL2DetailsDto> getEmployeeL2Detail()
    {
        ...
        // uses a criteria-select query to retrieve only L1+L2 columns
        return list;
    }

    public List<EmployeeL3DetailsDto> getEmployeeL3Detail()
    {
        ...
        // uses a criteria-select query to retrieve only L1+L2+L3 columns
        return list;
    }

    // And so on
}
I've been using Hibernate's aliasToBean() to auto-map the retrieved entities into the DTOs. Still, the amount of boilerplate in the process as a whole (all the DTOs, DAO methods, URL parameters for the level of detail wanted, etc.) is a bit worrying and makes me think there might be a cleaner approach to this.
So, my question is: Is there a better pattern to follow to retrieve different levels of detail from a persisted entity?
I'm pretty new to Spring and Hibernate, so feel free to point anything that is considered basic knowledge that you think I'm not aware of.
Thanks!
I would go with as few different queries as possible. I would rather make associations lazy in my mappings, and then let them be initialized on demand with appropriate Hibernate fetch strategies.
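For illustration, a lazy association in JPA/Hibernate annotation mappings could look like this (entity names follow the question; the mapping details are assumed):
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class Employee {
    @Id
    private String id;

    @ManyToOne(fetch = FetchType.LAZY) // loaded on demand, not with every Employee query
    private Department department;
}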
I think that there is nothing wrong in having multiple different DTO classes per one business model entity, and that they often make the code more readable and maintainable.
However, if the number of DTO classes tends to explode, then I would make a balance between readability (maintainability) and performance.
For example, if a DTO field is not used in a context, I would leave it as null, or fill it in anyway if that is really not expensive. If it is null, you can instruct your object marshaller to exclude null fields when producing the REST service response (JSON, XML, etc.), if they really bother the service consumer. And if you do fill it in, it's always welcome later when you add new features to the application and the field starts being used in a context.
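With Jackson, for example, excluding null fields is a one-annotation change (a sketch with illustrative fields):
import com.fasterxml.jackson.annotation.JsonInclude;

@JsonInclude(JsonInclude.Include.NON_NULL) // null fields are omitted from the JSON output
public class EmployeeDto {
    private String firstName;
    private String lastName;
    private Address homeAddress; // left null unless the detailed view was requested

    // getters and setters omitted for brevity
}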
You will have to define, in one way or another, the different granularity versions. You can try to have sub-objects that are not loaded / set to null (as recommended in other answers), but it can easily get quite awkward, since you will start to structure your data by security concerns and not by the domain model.
So doing it with individual classes is after all not such a bad approach.
You might want to have it more dynamic (maybe because you want to extend your data model on the DB side with more data).
If that's the case, you might want to move the definition out of the code into some configuration (it could even be dynamic at runtime). This will of course require a dynamic data model on the Java side as well, like using a hashmap (see here on how to do that). You thereby gain a dynamic data model, but lose type safety (at least to a certain extent). In other languages that would probably feel natural, but in Java it's less common.
It would then be up to your HQL to define how you want to populate your objects.
The path you want to take depends a lot on the context and on how your objects will get used.
Another approach is to use only domain objects at the DAO level, and define the needed subsets of information as DTOs for each usage. Then convert the Employee entity to each DTO using a generic DTO converter, as I have used lately in my professional Spring work. An MIT-licensed module is available in the Maven repository as artifact dtoconverter,
with further info and user guidance on the author's wiki:
http://ratamaa.fi/trac/dtoconverter
The example page there gives you the quickest idea of how it works.
Happy hunting...
Blaze-Persistence Entity Views have been created for exactly such a use case. You define the DTO structure as an interface or abstract class and have mappings to your entity's attributes. When querying, you just pass in the class and the library takes care of generating an optimized query for the projection.
Here is a quick example:
@EntityView(Cat.class)
public interface CatView {
    @IdMapping("id")
    Integer getId();
    String getName();
}
CatView is the DTO definition, and here comes the querying part:
CriteriaBuilder<Cat> cb = criteriaBuilderFactory.create(entityManager, Cat.class);
cb.from(Cat.class, "theCat")
  .where("father").isNotNull()
  .where("mother").isNotNull();
EntityViewSetting<CatView, CriteriaBuilder<CatView>> setting = EntityViewSetting.create(CatView.class);
List<CatView> list = entityViewManager
    .applySetting(setting, cb)
    .getResultList();
Note that the essential part is that the EntityViewSetting has the CatView type and is applied onto an existing query. The generated JPQL/HQL is optimized for CatView, i.e. it only selects (and joins!) what it really needs.
SELECT
    theCat.id,
    theCat.name
FROM
    Cat theCat
WHERE theCat.father IS NOT NULL
  AND theCat.mother IS NOT NULL

How to handle dynamic JSON data with GWT Autobeans?

Currently I have an interface set up to be processed as an AutoBean:
public interface Asset extends Hit {
    String getGuid();
    String getHitType();
    Map<String, Serializable> getMetadata();
}
I tried using Object instead of Serializable:
Map<String, Object> getMetadata()
but this seems to blow up when trying to access data (because it's not 'reified').
The metadata map may contain other maps, strings, ints, etc. How do I retrieve data from an inner map of that metadata object?
Currently, if I call asset.getMetadata().get("title"), this returns a SerializableAutoBean, and performing toString() or String.valueOf(obj) on that object returns the in-memory object information and not the actual string value.
Can an AutoBean object be this dynamic, or do you specifically have to define every field?
AutoBeans aren't "dynamic" in the Java generics or RTTI sense.
In GWT, all types have to be known at compile time for anything which is auto-generated (which includes AutoBeans). This places restrictions on your designs which don't allow you to take full advantage of Java's language features (specifically, generics and other RTTI features). So, AutoBeans are not dynamic in the RTTI or Java generic sense. However, AutoBeans are simply a low-level way of wrapping your data, and you still have access to the data by using Splittables!
As stated in the previous comments, you can use Splittables for the parts of your JSON object whose type is not known at serialization/decode time. Sure, it would be nice to have everything happen at once, but nothing is stopping you from performing some post-processing on your data objects to get them into your desired state.
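As a rough sketch of that approach, the dynamic part of the bean can be declared as a Splittable and inspected after decoding (the method names follow the com.google.web.bindery.autobean.shared.Splittable API as I understand it; the "title" key is illustrative):
import com.google.web.bindery.autobean.shared.Splittable;

public interface Asset extends Hit {
    String getGuid();
    String getHitType();
    Splittable getMetadata(); // dynamic JSON subtree, inspected at runtime
}

// Post-processing after decode: drill into the raw JSON without predefined types.
Splittable metadata = asset.getMetadata();
Splittable title = metadata.get("title");
if (title != null && title.isString()) {
    String value = title.asString();
}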
A really good way for someone to "grok" what is going on with AutoBeans (and anything else which is auto-generated) is to look at the resulting generated code. The default location for Maven is: ${project.build.directory}/.generated.
If you look in there after you've compiled, you should find the code which the GWT compiler produces for your AutoBeans.

Java classes with dynamic fields

I'm looking for clever ways to build dynamic Java classes, that is, classes where you can add/remove fields at runtime. Usage scenario: I have an editor where users should be able to add fields to the model at runtime, or maybe even create the whole model at runtime.
Some design goals:
Type safe without casts if possible for custom code that works on the dynamic fields (that code would come from plugins which extend the model in unforeseen ways).
Good performance (can you beat HashMap? Maybe use an array and assign indexes to the fields during setup?)
Field "reuse" (i.e. if you use the same type of field in several places, it should be possible to define it once and then reuse it).
Calculated fields which depend on the value of other fields
Signals should be sent when fields change value (not necessarily via the Beans API)
"Automatic" parent child relations (when you add a child to a parent, then the parent pointer in the child should be set for "free").
Easy to understand
Easy to use
Note that this is a "think outside the circle" question. I'll post an example below to get you in the mood :-)
Type safe without casts if possible for custom code that works on the dynamic fields (that code would come from plugins which extend the model in unforeseen ways)
AFAIK, this is not possible. You can only get type-safety without type casts if you use static typing. Static typing means method signatures (in classes or interfaces) that are known at compile time.
The best you can do is have an interface with a bunch of methods like String getStringValue(String field), int getIntValue(String field) and so on. And of course you can only do that for a predetermined set of types. Any field whose type is not in that set will require a typecast.
The obvious answer is to use a HashMap (or a LinkedHashMap if you care for the order of fields). Then, you can add dynamic fields via a get(String name) and a set(String name, Object value) method.
This code can be implemented in a common base class. Since there are only a few methods, it's also simple to use delegation if you need to extend something else.
To avoid the casting issue, you can use a type-safe object map:
TypedMap map = new TypedMap();
String expected = "Hallo";
map.set( KEY1, expected );
String value = map.get( KEY1 ); // Look Ma, no cast!
assertEquals( expected, value );
List<String> list = new ArrayList<String>();
map.set( KEY2, list );
List<String> valueList = map.get( KEY2 ); // Even with generics
assertEquals( list, valueList );
The trick here is the key, which contains the type information:
TypedMapKey<String> KEY1 = new TypedMapKey<String>( "key1" );
TypedMapKey<List<String>> KEY2 = new TypedMapKey<List<String>>( "key2" );
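The answer doesn't spell out TypedMap and TypedMapKey; a minimal sketch could look like this, with the single unchecked cast hidden inside get() (it is safe because set() ties the key's and value's types together):
import java.util.HashMap;
import java.util.Map;

class TypedMapKey<T> {
    private final String name;

    public TypedMapKey( String name ) {
        this.name = name;
    }
}

class TypedMap {
    private final Map<TypedMapKey<?>, Object> values = new HashMap<TypedMapKey<?>, Object>();

    public <T> void set( TypedMapKey<T> key, T value ) {
        values.put( key, value );
    }

    @SuppressWarnings( "unchecked" ) // safe: set() guarantees the value matches the key's type
    public <T> T get( TypedMapKey<T> key ) {
        return (T) values.get( key );
    }
}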
The performance will be OK.
Field reuse works by using the same value type, or by extending the key class of the type-safe object map with additional functionality.
Calculated fields could be implemented with a second map that stores Future instances which do the calculation.
Since all the manipulation happens in just two (or at least a few) methods, sending signals is simple and can be done any way you like.
To implement automatic parent/child handling, install a signal listener on the "set parent" signal of the child and then add the child to the new parent (and remove it from the old one if necessary).
Since no framework is used and no tricks are necessary, the resulting code should be pretty clean and easy to understand. Not using String as keys has the additional benefit that people won't litter the code with string literals.
So basically you're trying to create a new kind of object model with more dynamic properties, a bit like a dynamic language?
Might be worth looking at the source code for Rhino (i.e. Javascript implemented in Java), which faces a similar challenge of implementing a dynamic type system in Java.
Off the top of my head, I suspect you will find that internal HashMaps ultimately work best for your purposes.
I wrote a little game (Tyrant - GPL source available) using a similar sort of dynamic object model featuring HashMaps. It worked great, and performance was not an issue. I used a few tricks in the get and set methods to allow dynamic property modifiers; I'm sure you could do the same kind of thing to implement your signals, parent/child relations, etc.
[EDIT] See the source of BaseObject for how it is implemented.
You can use bytecode manipulation libraries for this. The shortcoming of this approach is that you need to create your own classloader to load the changed classes dynamically.
I do almost the same; it's a pure Java solution:
1. Users generate their own models, which are stored as JAXB schemas.
2. Each schema is compiled into Java classes on the fly and stored in user JARs.
3. All classes are forced to extend one "root" class, where you can put every extra piece of functionality you want.
4. Appropriate classloaders are implemented, with "model change" listeners.
Speaking of performance (which is important in my case), you can hardly beat this solution. Reusability is the same as for an XML document.

What's the best pattern to handle a table row datastructure?

The Facts
I have the following datastructure consisting of a table and a list of attributes (simplified):
class Table {
List<Attribute> m_attributes;
}
abstract class Attribute {}
class LongAttribute extends Attribute {}
class StringAttribute extends Attribute {}
class DateAttribute extends Attribute {}
...
Now I want to do different actions with this datastructure:
print it in XML notation
print it in textual form
create an SQL insert statement
create an SQL update statement
initialize it from a SQL result set
First Try
My first attempt was to put all this functionality inside Attribute, but then Attribute was overloaded with very different responsibilities.
Alternative
It feels like a visitor pattern could do the job very well instead, but on the other side it looks like overkill for this simple structure.
Question
What's the most elegant way to solve this?
I would look at using a combination of JAXB and Hibernate.
JAXB will let you marshall and unmarshall from XML. By default, properties are converted to elements with the same name as the property, but that can be controlled via @XmlElement and @XmlAttribute annotations.
Hibernate (or JPA) is the standard way of moving data objects to and from a database.
The Command pattern comes to mind, or a small variation of it.
You have a bunch of classes, each of which is specialized to do a certain thing with your data class. You can keep these classes in a hashmap or some other structure where an external choice can pick one for execution. To do your thing, you call the selected Command's execute() method with your data as an argument.
Edit: Elaboration.
At the bottom level, you need to do something with each attribute of a data row. This indeed sounds like a case for the Visitor pattern: Visitor simulates a double-dispatch operation, insofar as you are able to combine a variable "victim" object with a variable "operation" encapsulated in a method.
Your attributes all want to be xml-ed, text-ed, insert-ed, updat-ed and initializ-ed. So you end up with a matrix of 5 x 3 classes to do each of these 5 operations to each of the 3 attribute types. The rest of the machinery of the visitor pattern will traverse your list of attributes for you and apply the correct visitor for the operation you chose in the right way for each attribute.
Writing 15 classes plus interface(s) does sound a little heavy. You can do this and have a very general and flexible solution. On the other hand, in the time you've spent thinking about a solution, you could have hacked together the code for the currently known structure and crossed your fingers that the shape of your classes won't change too much too often.
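A minimal sketch of that machinery for the classes in the question (one operation shown; the other visitors follow the same shape):
// One visit method per concrete attribute type gives the double dispatch.
interface AttributeVisitor {
    void visit(LongAttribute attribute);
    void visit(StringAttribute attribute);
    void visit(DateAttribute attribute);
}

abstract class Attribute {
    abstract void accept(AttributeVisitor visitor);
}

class LongAttribute extends Attribute {
    void accept(AttributeVisitor visitor) { visitor.visit(this); }
}
// StringAttribute and DateAttribute override accept() the same way.

// One visitor per operation, e.g. the XML printer:
class XmlVisitor implements AttributeVisitor {
    public void visit(LongAttribute attribute) { /* print the long value as XML */ }
    public void visit(StringAttribute attribute) { /* print the string value as XML */ }
    public void visit(DateAttribute attribute) { /* print the date value as XML */ }
}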
Where I thought of the command pattern was for choosing among a variety of similar operations. If the operation to be performed came in as a String, perhaps in a script or configuration file or such, you could then have a mapping from
"xml" -> XmlifierCommand
"text" -> TextPrinterCommand
"serial" -> SerializerCommand
...where each of those Commands would then fire up the appropriate Visitor to do the job. But as the operation is more likely to be determined in code, you probably don't need this.
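That lookup could be as small as the following sketch (the command class names come from the answer; the TableCommand interface itself is assumed):
import java.util.HashMap;
import java.util.Map;

interface TableCommand {
    void execute(Table table); // each command fires up the matching visitor
}

class CommandRegistry {
    private final Map<String, TableCommand> commands = new HashMap<String, TableCommand>();

    CommandRegistry() {
        commands.put("xml", new XmlifierCommand());
        commands.put("text", new TextPrinterCommand());
        commands.put("serial", new SerializerCommand());
    }

    void run(String operationName, Table table) {
        commands.get(operationName).execute(table); // operationName comes from a script or config
    }
}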
I dunno why you'd store stuff in a database yourself these days instead of just using Hibernate, but here's my call:
LongAttribute, DateAttribute, StringAttribute, etc. all have different internals (i.e. fields specific to them that are not present in the Attribute class), so you cannot create one generic method to serialize them all. XML, SQL and plain text also all have different properties when serializing to them. There's really no way you can avoid writing O(#subclasses of Attribute × #output formats) different serialization methods.
Visitor is not a bad pattern for serializing. True, it's a bit overkill if used on non-recursive structures, but a random programmer reading your code will immediately grasp what it is doing.
Now for deserialization (from XML to object, from SQL to object) you need a Factory.
One more hint: for the SQL update you probably want something that takes the old version of the object and the new version, and creates an update query covering only the difference between them.
In the end, I used the visitor pattern. Now looking back, it was a good choice.
