I am using Hibernate 5 & Spring Data.
Inside my PartyDao, I have the following method:
@Query("from Party where id in :partyIDs")
List<PartyTO> loadByIDs(@Param("partyIDs") List<Long> partyIDs);
I am calling it like this:
partyList = partyDao.loadByIDs(userPartyIDsList);
but I am getting a list of Hibernate proxy objects (with all the fields set to null and a handler field of type org.hibernate.proxy.pojo.javassist.JavassistLazyInitializer).
This makes no sense to me! Why is Hibernate not loading the objects FROM the query root I am specifying?
I changed it to:
@Query("select party from Party party where party.id in :partyIDs")
List<PartyTO> loadByIDs(@Param("partyIDs") List<Long> partyIDs);
to try to make it more explicit that I want this object fetched, but it still returns proxy objects. Is there something I'm missing? I don't know how I would make it fetch itself.
EDIT:
The proxy object actually has an attribute called target, which has all the attributes set. Why are they not placed into the object itself?
I am not getting a "lazy initialization exception", but a NullPointerException inside a Comparator that is sorting the parties by name:
...
return o1.name.compareTo(o2.name);
The problem is your direct access to the name property of your object.
...
return o1.name.compareTo(o2.name);
Hibernate will always return proxy objects, and the serialization of some more complex structures might lead to issues in the future, including the lazy initialization exceptions mentioned. However, the cause of your problem is direct field access: if you consistently use the getter methods within your comparators, you will not have this problem.
The proxy object is a runtime extension of your target class; it has the same interface as the target class, but in true OO fashion the internals are not visible or accessible. The only guarantee is the interface contract it presents, and that is what you should be coding against within your objects anyway.
Change your comparator and other code to match the following, and you won't have this issue again.
...
return o1.getName().compareTo(o2.getName());
The value will be read into the main object by calling the getter method instead of accessing the field directly.
The Comparator return statement must be changed to:
return o1.getName().compareTo(o2.getName());
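To make the fix concrete, here is a minimal self-contained sketch of a comparator that goes through the getter; the Party class here is a plain stand-in (the real one would be a Hibernate proxy, for which only the getter is guaranteed to initialize and return the underlying value):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PartySortExample {

    // Stand-in for the Party entity from the question. In the real code the
    // instances would be Hibernate proxies, so direct field access would see null.
    public static class Party {
        private final String name;

        public Party(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }

    // Comparator.comparing delegates to the getter, never to the field,
    // so it is safe to use on proxied entities.
    public static final Comparator<Party> BY_NAME =
            Comparator.comparing(Party::getName);

    // Returns the party names in ascending order, sorting via the getter.
    public static List<String> sortedNames(List<Party> parties) {
        List<Party> copy = new ArrayList<>(parties);
        copy.sort(BY_NAME);
        List<String> names = new ArrayList<>();
        for (Party p : copy) {
            names.add(p.getName());
        }
        return names;
    }
}
```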
I don't understand how to use a repositoryItem in ATG. How should I construct customized logic on top of it?
Do I need to create a regular JavaBean over the repositoryItem, or should I use it as-is?
I will try to explain:
Logic on repositoryItem:
RepositoryItem store = getRepository().getItem(..);
String address = store.getPropertyValue(..);
Logic on JavaBean:
class StoreBean {
String address;
StoreBean(RepositoryItem store) {
address = store.getPropertyValue(..);
}
}
Then I can use StoreBean however I want, e.g. to access its fields (and lazy-load them, for example).
What are the best practices in ATG?
It is a matter of preference.
What you do not get with RepositoryItem objects is strong type checking. You must either make assumptions about the type of RepositoryItem you are working with, or do manual checks in your code (see the example below). Additionally, since RepositoryItem properties are stored as metadata, you have to know 1) the actual names of the properties from the XML repository descriptor and 2) their types, which requires casting (example: String firstName = (String) item.getPropertyValue("firstName");). Here is an example of a validation to ensure the RepositoryItem object is of type "sku":
RepositoryItemDescriptor skuItemDescriptor = getCatalogTools().getCatalog().getItemDescriptor(getCatalogTools().getBaseSKUItemType());
if (!RepositoryUtils.isTypeOfItemDesc(itemDescriptor, skuItemDescriptor)) {
throw new IllegalArgumentException("RepositoryItem must be of type " + getCatalogTools().getBaseSKUItemType());
}
If you take the approach of not using "JavaBeans", you increase the risk of runtime errors in your application. My suggestion is to strike a healthy balance between using RepositoryItem objects and wrapper objects. For critical items you plan to use throughout a large part of your code base, I suggest using a wrapper object.
I suggest that if you create wrapper objects, that for consistency, you follow the same design pattern that Oracle Commerce uses. For example, the "order" item is wrapped by OrderImpl and implements the ChangedProperties interface.
public class OrderImpl
extends CommerceIdentifierImpl
implements Order, ChangedProperties
http://docs.oracle.com/cd/E52191_03/Platform.11-1/apidoc/atg/commerce/order/OrderImpl.html
ATG out-of-the-box repository implementations do not use JavaBeans for the most part. One big disadvantage of using JavaBeans and lazy-loading them into memory is that you lose many repository caching features and increase your memory footprint. For instance, you will not be able to monitor your cache statistics or invalidate the cache periodically. You will also have instantiation overhead when a query returns a huge RepositoryItem result set.
Instead, you can also use DynamicBean, which lets you refer to repository properties similarly to JavaBean properties, for instance Profile.city.
If you only want to wrap them so that developers don't accidentally parse them incorrectly, you can write a util class per repository for the various read/write operations and centralize your type safety.
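To illustrate the "centralize your type safety" idea without depending on the ATG classes, here is a hedged sketch in which RepositoryItem is simulated by a plain property map (all names here are hypothetical; a real helper would wrap atg.repository.RepositoryItem and its getPropertyValue method):

```java
import java.util.Map;

public class TypedPropertyReader {

    // Stand-in for a RepositoryItem: getPropertyValue returns Object,
    // forcing an unchecked cast at every call site unless centralized here.
    private final Map<String, Object> properties;

    public TypedPropertyReader(Map<String, Object> properties) {
        this.properties = properties;
    }

    // One checked cast in one place instead of scattered (String)/(Integer)
    // casts throughout the code base; fails fast with a clear message.
    public <T> T get(String name, Class<T> type) {
        Object value = properties.get(name);
        if (value != null && !type.isInstance(value)) {
            throw new IllegalArgumentException(
                    "Property '" + name + "' is not of type " + type.getName());
        }
        return type.cast(value);
    }
}
```

A call site then reads `reader.get("address", String.class)` and the type check lives in exactly one method.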
I have JPA entity with list like this:
@OneToMany(mappedBy = "scadaElement", orphanRemoval = true)
private List<ElementParameter> elementParameters;
and the mapping from ElementParameter:
@ManyToOne
@JoinColumn(name = "SCADAELEMENT_ID")
ScadaElement scadaElement;
When I get the entity and stream over the elementParameters list, the stream does nothing, even when I first trigger initialization of the list with .size(); but when I do the same with a for loop, it works.
System.out.println("elements size: " + s.getElementParameters().size());
s.getElementParameters()
.stream()
.forEach(
a -> {
System.out.println("elementId: " + a.getId());
}
);
Is there any solution to make the stream work? I use EclipseLink as my JPA provider.
Apparently, you are referring to this issue. These lazy lists, which use the anti-pattern of inheriting from a concrete implementation (here Vector), fail to adapt to the evolution of the base class. Note that there are two possible outcomes, depending on how the anti-pattern was realized:
If the lazily populated list populates itself (in terms of the inherited state) on first use, the new inherited methods will start working as soon as a triggering property has been accessed for the first time
But if the list overrides all accessor methods to enforce delegation to another implementation, without ever updating the state of the base class, the base class’ methods which have not been overridden will never start working, even if the list has been populated (from the subclass’ point of view)
Apparently, the second case applies to you. Triggering the population of the list does not make the inherited forEach method work. Note that turning off the lazy population via configuration might be the simpler solution here.
To me, the cleanest solution would be if IndirectList inherits from AbstractList and adheres to the Collection API standard, now, almost twenty years after the Collection API has superseded Vector (should I mention how much younger JPA actually is?). Unfortunately, the developers didn’t go that road. Instead, the anti-pattern was maxed out by creating another class that inherits from the class which already inherits from the class not designed for inheritance. This class overrides the methods introduced in Java 8 and perhaps gets another subclass in one of the next Java releases.
So the good news is that developers expecting every List to be a Vector do not have to change their minds; the bad news is that it doesn't always work: with EclipseLink 2.6 you will not get the extended, Java 8-aware version, but apparently EclipseLink 2.7 will provide it.
So you can derive a few alternative solutions:
Turn off lazy population
Stay with Java 7
Wait for EclipseLink 2.7
Just copy the collection, e.g.
List<ElementParameter> workList=new ArrayList<>(elementParameters);
This workList will support all Collection and Stream operations.
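A minimal sketch of the copy workaround, with ElementParameter replaced by a plain element type, since the point is only that a fresh ArrayList fully supports the Java 8 collection methods regardless of what Vector subclass the provider handed back:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class CopyWorkaround {

    // Copying into a plain ArrayList detaches the elements from whatever
    // lazy Vector subclass the persistence provider returned, so
    // stream()/forEach() behave normally on the copy.
    public static <T> List<T> toWorkList(List<T> source) {
        return new ArrayList<>(source);
    }

    // Mirrors the question's loop: formats each element id via a stream.
    public static List<String> ids(List<String> elementParameters) {
        return toWorkList(elementParameters).stream()
                .map(id -> "elementId: " + id)
                .collect(Collectors.toList());
    }
}
```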
Why not use the real JPA streaming support? For example:
Stream<User> findAllByName(String name);
Being new to Spring's security annotations, I need a clarification for the below code.
@PostFilter("hasPermission(filterObject, 'READ') or hasRole('ROLE_ADMIN')")
public List<User> getUsers(String orderByInsertionDate,
Integer numberDaysToLookBack) throws AppException
So this means that the list of users returned by getUsers will contain only those elements on which the caller has "READ" permission, or all elements if the caller has the role ROLE_ADMIN. Is that correct? Thanks.
@PreFilter and @PostFilter are designed to be used with Spring Security to filter collections or arrays based on authorization.
To get this working, you need to use expression-based access control in Spring Security (as you have in your example).
@PreFilter - filters the collection or array before the method is executed.
@PostFilter - filters the returned collection or array after the method is executed.
So, let's say your getUsers() returns a List of Users. Spring Security will iterate through the list and remove any elements for which the applied expression is false (e.g. the caller is not an admin and does not have read permission).
filterObject is the built-in object on which the filter operation is performed, and you can apply various conditions to it (basically all built-in expressions are available here, e.g. principal, authentication). For example, you can do:
@PostFilter("filterObject.owner == authentication.name")
Though these filters are useful, they are really inefficient with large data sets, and you basically lose control over your result; Spring controls it instead.
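Conceptually, what @PostFilter does to a returned list can be illustrated in plain Java; the User class below is hypothetical, and Spring Security performs the equivalent of this removeIf on the method's result collection:

```java
import java.util.ArrayList;
import java.util.List;

public class PostFilterSketch {

    // Hypothetical stand-in for the User entity in the question.
    public static class User {
        private final String owner;

        public User(String owner) {
            this.owner = owner;
        }

        public String getOwner() {
            return owner;
        }
    }

    // Rough equivalent of @PostFilter("filterObject.owner == authentication.name"):
    // every element for which the expression is false is removed from the result.
    public static List<User> filterByOwner(List<User> result, String authenticatedName) {
        List<User> filtered = new ArrayList<>(result);
        filtered.removeIf(u -> !u.getOwner().equals(authenticatedName));
        return filtered;
    }
}
```

This also makes the inefficiency point above visible: the full result set is materialized first, then trimmed element by element.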
Since the currently accepted answer didn't go into @PreFilter, here are my two cents:
(Quote from the JavaDocs)
Annotation for specifying a method filtering expression which will be evaluated before a method has been invoked. The name of the argument to be filtered is specified using the filterTarget attribute. This must be a Java Collection implementation which supports the remove method.
@PreFilter operates on the method arguments, not the return value. If the annotated method has only a single Collection argument, the filterTarget annotation attribute can be omitted.
The other answers are pretty clear about the rationale and effects of @PreFilter and @PostFilter.
I just feel obliged to add some information about the method output type: both these annotations work with collections, arrays, streams, and maps. Be careful when using them with, for example, Optional. Although to me it's quite natural to filter an Optional (you can think of it as a collection with at most one element), this throws the following exception as of spring-security-core 5.4.1:
java.lang.IllegalArgumentException: Filter target must be a collection, array, map or stream type, but was Optional[0]
I recently came across the following statement in the book Java Persistence with Hibernate. I was able to understand everything except the highlighted part.
Another issue to consider is dirty checking. Hibernate automatically detects
object state changes in order to synchronize the updated state with the database.
It’s usually safe to return a different object from the getter method than the
object passed by Hibernate to the setter. Hibernate compares the objects by
value—not by object identity—to determine whether the property’s persistent
state needs to be updated. For example, the following getter method doesn’t
result in unnecessary SQL UPDATEs:
public String getFirstname() {
return new String(firstname);
}
Query: My concern here is that we are creating a new instance. Is that really necessary? Kindly correct me if I'm wrong.
If you return a different object from the getter, it means you are creating a defensive copy.
From Hibernate's perspective, an object returned like this from a getter has no history with the Hibernate session. If you call save() on that object and it already exists in the database, you will get a ConstraintViolationException; you have to call saveOrUpdate() instead. A call to saveOrUpdate() causes Hibernate to issue a select statement to the database before committing.
If the object was already in the session and you call commit after making some changes, Hibernate will issue an update query.
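The by-value comparison the book describes can be seen in plain Java: the copy returned by such a getter is a different object identity-wise, but equals() still reports it unchanged, which is why no unnecessary UPDATE is triggered:

```java
public class ValueEqualityDemo {
    public static void main(String[] args) {
        String firstname = "Alice";
        // A defensive copy, as in the book's getter example.
        String copy = new String(firstname);

        // Different identity: == compares references.
        System.out.println(firstname == copy);      // false
        // Equal by value: this is what dirty checking compares.
        System.out.println(firstname.equals(copy)); // true
    }
}
```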
I'm converting my entity to DTO and I want to set NULL as DTO value for all fields, which are lazy-loaded and not initialized (because I do not want to transfer all the data all the time).
I've tried:
if (!(entity.getNationality() instanceof HibernateProxy))
this.setNationalityFromEntity(entity.getNationality());
But it did not seem to help.
Any suggestions are welcome!
Thank you!
The way we do this in our Entities is with boolean methods that perform the check in a way that will not trigger lazy loading. For example, if our Entity has an associated entity called 'associatedSomething', the method to check whether that associated Entity has been lazy-loaded would be:
public boolean isAssociatedSomethingLoaded() {
if (associatedSomething instanceof HibernateProxy) {
if (((HibernateProxy)associatedSomething).getHibernateLazyInitializer().isUninitialized()) {
return false;
}
}
return (getAssociatedSomething() != null);
}
NOTE: It's important not to use getAssociatedSomething() in the check, as this makes sure that the associated Entity does not get lazy-loaded during the check.
The class is always a proxy, whether it's initialized or not, so you will exclude it every time if you just check for instances of the proxy. Lazy loading does not replace the proxy reference on the entity with a reference to a new object; it just populates the proxy's fields.
To find out if it's actually initialized you need to ask it!
if (HibernateProxy.class.isInstance(entity.getNationality())) {
HibernateProxy proxy = HibernateProxy.class.cast(entity.getNationality());
if (!proxy.getHibernateLazyInitializer().isUninitialized()) {
this.setNationalityFromEntity(entity.getNationality());
}
}
The mere possibility of being able to invoke a getter for some state that shouldn't be available for a use case is problematic in my opinion, but that's a different story. I would suggest you implement a proper DTO approach instead to avoid accidental errors.
I created Blaze-Persistence Entity Views for exactly this use case. You essentially define DTOs for JPA entities as interfaces and apply them to a query. It supports mapping nested DTOs, collections, etc.; essentially everything you'd expect. On top of that, it will improve your query performance, as it generates queries that fetch only the data actually required for the DTOs.
The entity views for your example could look like this
@EntityView(Person.class)
interface PersonDto {
String getNationality();
}
Querying could look like this
List<PersonDto> dtos = entityViewManager.applySetting(
EntityViewSetting.create(PersonDto.class),
criteriaBuilderFactory.create(em, Person.class)
).getResultList();