Solving LazyInitializationException via ignorance - java

There are countless questions here about how to solve the "could not initialize proxy" problem: via eager fetching, keeping the transaction open, opening another one, OpenEntityManagerInViewFilter, and so on.
But is it possible to simply tell Hibernate to ignore the problem and pretend the collection is empty? In my case, not fetching it before simply means that I don't care.
This is actually an XY problem with the following Y:
I have classes like
class Detail {
    @ManyToOne(optional=false) Master master;
    ...
}
class Master {
    @OneToMany(mappedBy="master") List<Detail> details;
    ...
}
and want to serve two kinds of requests: One returning a single master with all its details and another one returning a list of masters without details. The result gets converted to JSON by Gson.
I've tried session.clear and session.evict(master), but they don't touch the proxy used in place of details. What worked was
master.setDetails(nullOrSomeCollection)
which feels rather hacky. I'd prefer the "ignorance" as it'd be applicable generally without knowing what parts of what are proxied.
Writing a Gson TypeAdapter ignoring instances of AbstractPersistentCollection with initialized=false could be a way, but this would depend on org.hibernate.collection.internal, which is surely no good thing. Catching the exception in the TypeAdapter doesn't sound much better.
Update after some answers
My goal is not "how to get the data loaded instead of the exception", but "how to get null instead of the exception".
Dragan raises a valid point that forgetting to fetch and returning wrong data would be much worse than an exception. But there's an easy way around it:
do this for collections only
never use null for them
return null rather than an empty collection as an indication of unfetched data
This way, the result can never be misinterpreted. Should I ever forget to fetch something, the response will contain null, which is invalid.

You could utilize Hibernate.isInitialized, which is part of the Hibernate public API.
So, in the TypeAdapter you can add something like this:
if ((value instanceof Collection) && !Hibernate.isInitialized(value)) {
    result = new ArrayList<>();
}
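Expanded into a complete Gson TypeAdapterFactory, that idea could look roughly like this (a sketch; the class name is made up, and you would register it via GsonBuilder.registerTypeAdapterFactory):

import java.io.IOException;
import java.util.Collection;

import org.hibernate.Hibernate;

import com.google.gson.Gson;
import com.google.gson.TypeAdapter;
import com.google.gson.TypeAdapterFactory;
import com.google.gson.reflect.TypeToken;
import com.google.gson.stream.JsonReader;
import com.google.gson.stream.JsonWriter;

public class UninitializedCollectionAdapterFactory implements TypeAdapterFactory {
    @Override
    public <T> TypeAdapter<T> create(Gson gson, TypeToken<T> type) {
        if (!Collection.class.isAssignableFrom(type.getRawType())) {
            return null; // not a collection: let Gson pick another adapter
        }
        final TypeAdapter<T> delegate = gson.getDelegateAdapter(this, type);
        return new TypeAdapter<T>() {
            @Override
            public void write(JsonWriter out, T value) throws IOException {
                if (value != null && !Hibernate.isInitialized(value)) {
                    out.beginArray(); // uninitialized: serialize as []
                    out.endArray();   // (use out.nullValue() if you prefer null)
                } else {
                    delegate.write(out, value);
                }
            }

            @Override
            public T read(JsonReader in) throws IOException {
                return delegate.read(in);
            }
        };
    }
}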
However, in my modest opinion your approach in general is not the way to go.
"In my case, not fetching it before simply means that I don't care."
Or it means you forgot to fetch it and now you are returning wrong data (worse than getting the exception; the consumer of the service thinks the collection is empty, but it is not).
I would not like to propose "better" solutions (it is not the topic of the question, and each approach has its own advantages), but the way I solve issues like these in most use cases (and one of the ways commonly adopted) is using DTOs: simply define a DTO that represents the response of the service, fill it in the transactional context (no LazyInitializationException there), and hand it to the framework that transforms it into the service response (JSON, XML, etc).
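For example, a list-of-masters response could be represented by a small DTO filled inside the transaction (a sketch; the Master getters are assumed for illustration):

public class MasterSummaryDto {

    private final long id;
    private final String name; // assumed field on Master

    public MasterSummaryDto(Master master) {
        this.id = master.getId();
        this.name = master.getName(); // details are never touched
    }

    public long getId() {
        return id;
    }

    public String getName() {
        return name;
    }
}

Since the DTO contains only plain values copied while the session is open, Gson never sees a Hibernate proxy at all.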

What you can try is a solution like the following.
Create an interface named LazyLoader:
@FunctionalInterface // Java 8
public interface LazyLoader<T> {
    void load(T t);
}
And in your service:
public class Service {
    public List<Master> getWithDetails(LazyLoader<Master> loader) {
        // Code to get masterList from the session
        for (Master master : masterList) {
            loader.load(master);
        }
        return masterList;
    }
}
And call this service like below (given a Service instance named service):
service.getWithDetails(new LazyLoader<Master>() {
    @Override
    public void load(Master master) {
        for (Detail detail : master.getDetails()) {
            detail.getId(); // This will load detail
        }
    }
});
And in Java 8 you can use a lambda, since LazyLoader is a functional interface with a single abstract method (SAM):
service.getWithDetails(master -> {
    for (Detail detail : master.getDetails()) {
        detail.getId(); // This will load detail
    }
});
You can use the solution above together with session.clear and session.evict(master).

I have raised a similar question in the past (why a dependent collection isn't evicted when the parent entity is), and it resulted in an answer which you could try for your case.

The solution for this is to use queries instead of associations (one-to-many or many-to-many). Even one of the original authors of Hibernate said that collections are a feature and not an end-goal.
In your case you gain flexibility by removing the collection mapping and simply fetching the associated relations when you need them in your data access layer.
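For example, after removing the details mapping, the data access layer might expose something like this (a sketch, assuming Hibernate 5.2+ where Session.createQuery takes a result class):

import java.util.List;

import org.hibernate.Session;

public class DetailQueries {

    /** Fetch the details for one master on demand, instead of mapping a collection. */
    public List<Detail> findDetailsForMaster(Session session, long masterId) {
        return session.createQuery(
                "select d from Detail d where d.master.id = :masterId", Detail.class)
            .setParameter("masterId", masterId)
            .getResultList();
    }
}

The list endpoint simply never calls this method, so there is no proxy to trip over.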

You could create a Java proxy for every entity, so that every method is surrounded by a try/catch block that returns null when a LazyInitializationException is caught.
For this to work, all your entities would need to implement an interface and you'd need to reference this interface (instead of the entity class) throughout your program.
If you can't (or just don't want to) use interfaces, then you could try to build a dynamic proxy with javassist or cglib, or even manually, as explained in this article.
If you go by common Java proxies, here's a sketch:
public static <T> T ignoringLazyInitialization(
        final Object entity,
        final Class<T> entityInterface) {
    return (T) Proxy.newProxyInstance(
        entityInterface.getClassLoader(),
        new Class<?>[] { entityInterface },
        new InvocationHandler() {
            @Override
            public Object invoke(
                    Object proxy,
                    Method method,
                    Object[] args)
                    throws Throwable {
                try {
                    return method.invoke(entity, args);
                } catch (InvocationTargetException e) {
                    Throwable cause = e.getTargetException();
                    if (cause instanceof LazyInitializationException) {
                        return null;
                    }
                    throw cause;
                }
            }
        });
}
So, if you have an entity A as follows:
public interface A {
    // getters & setters and other methods DEFINITIONS
}
with its implementation:
public class AImpl implements A {
    // getters & setters and other methods IMPLEMENTATIONS
}
Then, assuming you have a reference to the entity class (as returned by Hibernate), you could create a proxy as follows:
AImpl entityAImpl = ...; // some query, load, etc
A entityA = ignoringLazyInitialization(entityAImpl, A.class);
NOTE 1: You'd need to proxy collections returned by Hibernate as well (left as an exercise for the reader) ;)
NOTE 2: Ideally, you should do all this proxying stuff in a DAO or in some type of facade, so that everything is transparent to the user of the entities.
NOTE 3: This is by no means optimal, since it fills in a stack trace for every access to a non-initialized field.
NOTE 4: This works, but adds complexity; consider whether it's really necessary.

Related

Best way to sequence a pair of external service calls in Akka

I need to geocode an Address object, and then store the updated Address in a search engine. This can be simplified to taking an object, performing one long-running operation on the object, and then persisting the object. This means there is an order of operations requirement that the first operation be complete before persistence occurs.
I would like to use Akka to move this off the main thread of execution.
My initial thought was to use a pair of Futures to accomplish this, but the Futures documentation is not entirely clear on which combinator (fold, map, etc.) guarantees that one Future is executed before another.
I started out by creating two functions, deferredGeocode and deferredWriteToSearchEngine, which return Futures for the respective operations. I chain them together using Future<>.andThen(new OnComplete...), but this gets clunky very quickly:
Future<Address> geocodeFuture = deferredGeocode(ec, address);
geocodeFuture.andThen(new OnComplete<Address>() {
    public void onComplete(Throwable failure, Address geocodedAddress) {
        if (geocodedAddress != null) {
            Future<Address> searchEngineFuture =
                deferredWriteToSearchEngine(ec, addressSearchService, geocodedAddress);
            searchEngineFuture.andThen(new OnComplete<Address>() {
                public void onComplete(Throwable failure, Address savedAddress) {
                    // process search engine results
                }
            }, ec);
        }
    }
}, ec);
And then deferredGeocode is implemented like this:
private Future<Address> deferredGeocode(
        final ExecutionContext ec,
        final Address address) {
    return Futures.future(new Callable<Address>() {
        public Address call() throws Exception {
            log.debug("Geocoding Address...");
            // geocoding logic would go here
            return address;
        }
    }, ec);
}
deferredWriteToSearchEngine is pretty similar to deferredGeocode, except it takes the search engine service as an additional final parameter.
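It might look roughly like this (AddressSearchService.index is a stand-in for whatever the real persistence call is):

private Future<Address> deferredWriteToSearchEngine(
        final ExecutionContext ec,
        final AddressSearchService searchService,
        final Address address) {
    return Futures.future(new Callable<Address>() {
        public Address call() throws Exception {
            log.debug("Writing Address to search engine...");
            searchService.index(address); // the side effect
            return address;
        }
    }, ec);
}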
My understanding is that Futures are supposed to be used to perform calculations and should not have side effects. In this case, geocoding the address is a calculation, so I think using a Future is reasonable, but writing to the search engine is definitely a side effect.
What is the best practice here for Akka? How can I avoid all the nested calls, but ensure that both the geocoding and the search engine write are done off the main thread?
Is there a more appropriate tool?
Update:
Based on Viktor's comments below, I am trying this code out now:
ExecutionContext ec;

private Future<Address> addressBackgroundProcess(Address address) {
    Future<Address> geocodeFuture = addressGeocodeFutureFactory.deferredGeocode(address);
    return geocodeFuture.flatMap(new Mapper<Address, Future<Address>>() {
        @Override
        public Future<Address> apply(Address geoAddress) {
            return addressSearchEngineFutureFactory.deferredWriteToSearchEngine(geoAddress);
        }
    }, ec);
}
This seems to work ok except for one issue which I'm not thrilled with. We are working in a Spring IOC code base, and so I would like to inject the ExecutionContext into the FutureFactory objects, but it seems wrong for this function (in our DAO) to need to be aware of the ExecutionContext.
It seems odd to me that the flatMap() function needs an EC at all, since both futures provide one.
Is there a way to maintain the separation of concerns? Am I structuring the code badly, or is this just the way it needs to be?
I thought about creating an interface in the FutureFactories that would allow chaining of FutureFactories, so the flatMap() call would be encapsulated in a FutureFactory base class, but this seems like it would be deliberately subverting an intentional Akka design decision.
Warning: Pseudocode ahead.
Future<SomeResult> myFutureResult = deferredGeocode(ec, address).flatMap(
    new Mapper<Address, Future<Address>>() {
        public Future<Address> apply(Address geocodedAddress) {
            return deferredWriteToSearchEngine(ec, addressSearchService, geocodedAddress);
        }
    }, ec).map(
    new Mapper<Address, SomeResult>() {
        public SomeResult apply(Address savedAddress) {
            // Create SomeResult after deferredWriteToSearchEngine is done
        }
    }, ec);
See how it is not nested. flatMap and map are used for sequencing the operations. andThen is useful when you want a side-effecting-only operation to run to full completion before passing the result on. Of course, if you map twice on the SAME future instance then there is no ordering guaranteed, but since we are flatMapping and mapping on the returned futures (new ones, according to the docs), there is a clear data flow in our program.

Using Stripes, what is the best pattern for Show/Update/etc Action Beans?

I have been wrestling with this problem for a while. I would like to use the same Stripes ActionBean for show and update actions. However, I have not been able to figure out how to do this in a clean way that allows reliable binding, validation, and verification of object ownership by the current user.
For example, let's say our action bean takes a postingId. The posting belongs to a user, who is logged in. We might have something like this:
@UrlBinding("/posting/{postingId}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
Now, for the show action, we could define:
private int postingId; // assume the parameter in @UrlBinding above was renamed
private Posting posting;
And now use @After(stages = LifecycleStage.BindingAndValidation) to fetch the Posting. Our @After function can verify that the currently logged in user owns the posting. We must use @After, not @Before, because the postingId won't have been bound to the parameter beforehand.
However, for an update function, you want to bind the Posting object to the posting variable using @Before, not @After, so that the submitted form entries get applied on top of the existing Posting object, instead of onto an empty stub.
A custom TypeConverter<T> would work well here, but because the session isn't available from the TypeConverter interface, it's difficult to validate ownership of the object during binding.
The only solution I can see is to use two separate action beans, one for show, and one for update. If you do this however, the <stripes:form> tag and its downstream tags won't correctly populate the values of the form, because the beanclass or action tags must map back to the same ActionBean.
As far as I can see, the Stripes model only holds together when manipulating simple (non-POJO) parameters. In any other case, you seem to run into a catch-22 of binding your object from your data store and overwriting it with updates sent from the client.
I've got to be missing something. What is the best practice from experienced Stripes users?
In my opinion, authorisation is orthogonal to object hydration. By this, I mean that you should separate the concerns of object hydration (in this case, using a postingId and turning it into a Posting) away from determining whether a user has authorisation to perform operations on that object (like show, update, delete, etc.,).
For object hydration, I use a TypeConverter<T>, and I hydrate the object without regard to the session user. Then inside my ActionBean I have a guard around the setter, thus...
public void setPosting(Posting posting) {
    if (accessible(posting)) this.posting = posting;
}
where accessible(posting) looks something like this...
private boolean accessible(Posting posting) {
    return authorisationChecker.isAuthorised(whoAmI(), posting);
}
Then your show() event method would look like this...
public Resolution show() {
    if (posting == null) return NOT_FOUND;
    return new ForwardResolution("/WEB-INF/jsp/posting.jsp");
}
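For the hydration side, the TypeConverter<Posting> might look roughly like this (a sketch; postingService is a made-up lookup collaborator, and input validation is omitted):

import java.util.Collection;
import java.util.Locale;

import net.sourceforge.stripes.validation.SimpleError;
import net.sourceforge.stripes.validation.TypeConverter;
import net.sourceforge.stripes.validation.ValidationError;

public class PostingTypeConverter implements TypeConverter<Posting> {

    private PostingService postingService; // hypothetical lookup service

    @Override
    public void setLocale(Locale locale) {
        // locale not needed for ID lookups
    }

    @Override
    public Posting convert(String input,
                           Class<? extends Posting> targetType,
                           Collection<ValidationError> errors) {
        // Hydrate without regard to the session user; authorisation
        // happens later, in the ActionBean's guarded setter.
        Posting posting = postingService.findById(Long.valueOf(input));
        if (posting == null) {
            errors.add(new SimpleError("Posting not found"));
        }
        return posting;
    }
}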
Separately, when I use Stripes I often have multiple events (like "show", or "update") within the same Stripes ActionBean. For me it makes sense to group operations (verbs) around a related noun.
Using clean URLs, your ActionBean annotations would look like this...
@UrlBinding("/posting/{$event}/{posting}")
@RolesAllowed({ "USER" })
public class PostingActionBean extends BaseActionBean
...where {$event} is the name of your event method (i.e. "show" or "update"). Note that I am using {posting}, and not {postingId}.
For completeness, here is what your update() event method might look like...
public Resolution update() {
    if (posting == null) throw new UnauthorisedAccessException();
    postingService.saveOrUpdate(posting);
    message("posting.save.confirmation");
    return new RedirectResolution(PostingsAction.class);
}

Is getMethod on form value safe?

Recently I found a function like this in a generic JSR 245 portlet class:
public class MyGenericPortlet extends GenericPortlet {
    @Override
    public void processAction(ActionRequest rq, ActionResponse rs) throws PortletException {
        String actParam = rq.getParameter("myAction");
        if ((actParam != null) && (!"".equals(actParam))) {
            try {
                Method m = this.getClass().getMethod(actParam,
                        new Class[] { ActionRequest.class, ActionResponse.class });
                m.invoke(this, new Object[] { rq, rs });
            } catch (Exception e) {
                setRequestAttribute(rq.getPortletSession(), "error",
                        "Error in method: " + actParam);
                e.printStackTrace();
            }
        }
        else setRequestAttribute(rq.getPortletSession(), "error",
                "Error in method: " + actParam);
    }
}
How safe is such code? As far as I can see the following problems might occur:
A parameter transmitted from the client is used unchecked to call a function. This allows anyone who can transmit data to the corresponding portlet to call any matching function. On the other hand, the function to be called must have a specific signature; usually such functions are very rare.
A programmer might accidentally add a function with a matching signature. Since only public functions seem to be found, this is no problem as long as the function is private or protected.
The error message can reveal information about the software to the client. This shouldn't be a problem as the software itself is Open Source.
Obviously there is some room for programming errors that can be exploited. Are there other unwanted side effects that might occur? How should I (or the developers) judge the risk that comes from this function?
If you think it is safe, I'd like to know why.
The fact that only public methods with a specific signature can be invoked remotely is good. However, it could be made more secure by, for example, requiring a special annotation on action methods. This would indicate the developer specifically intended the method to be an invokable action.
A realistic scenario where the current implementation could be dangerous is when the developer adds an action that validates that the information in the request is safe, then passes the request and response to another method for actual processing. If an attacker could learn the name of the delegate method, he could invoke it directly, bypassing the parameter safety validation.
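A sketch of the annotation idea suggested above (the @InvokableAction annotation and its check are made up for illustration):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface InvokableAction {}

// in processAction, before invoking:
Method m = this.getClass().getMethod(actParam, ActionRequest.class, ActionResponse.class);
if (!m.isAnnotationPresent(InvokableAction.class)) {
    throw new PortletException("Action not allowed: " + actParam);
}
m.invoke(this, rq, rs);

This way, only methods the developer has explicitly marked can be reached from the request parameter.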

Restrict access to the owner of an object in DDD

Let's say there is an object TaskList which can be edited and deleted only by its owner. Other users should only be able to take a task and update its status.
The following options come to my mind:
check the ownership and access in the controller of the web application
let the repository return a proxy object which throws an exception on certain operations, but the controller (or view) would still need to know which actions (in the form of links or form fields) should be visible
pass the caller (user) to the method of the domain object, so that the domain object can itself check whether the caller is allowed or not
The technology used is Java.
Any other/better ideas?
Interesting articles about security and DDD
Domain Object Security with the Spring framework
Security in Domain-Driven Design
I have accepted my own answer now, because that is what I actually use, but further suggestions are welcome.
I would not encode the ownership/permissions model into the TaskList domain object. That sort of business logic should be external. I also don't like the idea of a proxy object. Although it would certainly work, it would confuse debugging and is, in this case at least, unnecessarily complex. I would also not check it in the controller.
Instead I would create a business logic object which oversees the permissions for TaskList. So the TaskList would have an owner field but you would have something like:
public class TaskListAccessor {

    private TaskList taskList;
    private User reader;

    public void updateStatus(Status status) {
        // everyone can do this
        taskList.updateStatus(status);
    }

    /** Return true if the delete operation is allowed, else false. */
    public boolean isDeleteAllowed() {
        return taskList.getOwner().equals(reader);
    }

    /** Delete the task list. Only owners can do this. Returns true if it worked, else false. */
    public boolean delete() {
        if (isDeleteAllowed()) {
            taskList.delete();
            return true;
        } else {
            return false;
        }
    }

    // ... other accessors with other is*Allowed methods
}
If you need to require that all operations on TaskList objects go through accessors, then you could create a factory class which is the only one that creates TaskList instances, using package-private constructors or something similar. Maybe the factory is also the only one that uses the DAO to look up a TaskList from the data store, as in the sketch below.
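A rough sketch of such a factory (assuming TaskListAccessor gains a package-private constructor, and TaskListDao is a hypothetical data access object):

public final class TaskListFactory {

    private final TaskListDao taskListDao; // hypothetical DAO

    public TaskListFactory(TaskListDao taskListDao) {
        this.taskListDao = taskListDao;
    }

    /** The only way to obtain a TaskList is wrapped in its accessor. */
    public TaskListAccessor accessorFor(long taskListId, User reader) {
        TaskList taskList = taskListDao.findById(taskListId);
        return new TaskListAccessor(taskList, reader); // assumed constructor
    }
}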
However, if there are too many methods to control in this fashion then a proxy might be easier. In both cases having TaskList be an interface would be recommended, with the implementation class hidden by the proxy or the accessor.
I found it unnecessarily complex to create accessor classes for each protected domain class as suggested by 'Gray'. My solution is probably not perfect, but it is simple to use and, more importantly, robust. You cannot forget to use a certain object or to check conditions outside.
public class TaskList {

    private SystemUser owner;
    private List<Task> tasks = new ArrayList<>();

    public TaskList(SystemUser owner) {
        this.owner = owner;
    }

    public void add(Task task) {
        Guard.allowFor(owner);
        tasks.add(task);
    }
}
The Guard knows the current user (from a ThreadLocal, for example) and compares it to the owner passed to allowFor(owner). If access is denied, a security exception is thrown.
That is simple, robust and even easy to maintain since only the guard has to be changed if the underlying authentication changes.
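A minimal sketch of such a Guard, assuming an authentication filter stores the current user in a ThreadLocal (all names are illustrative):

public final class Guard {

    private static final ThreadLocal<SystemUser> CURRENT_USER = new ThreadLocal<>();

    private Guard() {
    }

    /** Called by the authentication layer at the start of a request. */
    public static void setCurrentUser(SystemUser user) {
        CURRENT_USER.set(user);
    }

    /** Throws a security exception unless the current user is the owner. */
    public static void allowFor(SystemUser owner) {
        SystemUser current = CURRENT_USER.get();
        if (current == null || !current.equals(owner)) {
            throw new SecurityException("Access denied");
        }
    }
}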

How to do transactions without losing encapsulation?

I have code that saves a bean and updates another bean in a DB via Hibernate. It must be done in the same transaction, because if something goes wrong (e.g. an exception is thrown), both operations must be rolled back.
public class BeanDao extends ManagedSession {

    public Integer save(Bean bean) {
        Session session = null;
        try {
            session = createNewSessionAndTransaction();
            Integer idBean = (Integer) session.save(bean); // SAVE
            doOtherAction(bean); // UPDATE
            commitTransaction(session);
            return idBean;
        } catch (RuntimeException re) {
            log.error("save failed", re);
            if (session != null) {
                rollbackTransaction(session);
            }
            throw re;
        }
    }

    private void doOtherAction(Bean bean) {
        Integer idOtherBean = bean.getIdOtherBean();
        OtherBeanDao otherBeanDao = new OtherBeanDao();
        OtherBean otherBean = otherBeanDao.findById(idOtherBean);
        // ... (doing operations)
        otherBeanDao.attachDirty(otherBean);
    }
}
The problem is: in case session.save(bean) throws an error, I get an AssertionFailure, because the function doOtherAction (which is used in other parts of the project) uses the session after an Exception has been thrown.
The first thing I thought of was extracting the code of the doOtherAction function, but then I'd have duplicated code, which doesn't seem like the best practice.
What is the best way to refactor this?
It's a common practice to manage transactions at one level above DAOs, in services or other business logic classes. That way you can, based on the business/service logic, in one case do two DAO operations in one transaction and, in another case, do them in separate transactions.
I'm a huge fan of Declarative Transaction Management, if you can spare the time to get it working (a piece of cake with an application server such as GlassFish or JBoss, and easy with Spring). If you annotate your business method with @TransactionAttribute(REQUIRED) (which can even be made the default) and it calls the two DAO methods, you get exactly what you want: everything is committed at once, or rolled back on an Exception.
This solution is about as loosely coupled as it gets.
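With Spring, for example, the service layer could look roughly like this (a sketch; BeanService and the DAO method names are illustrative):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BeanService {

    private final BeanDao beanDao;
    private final OtherBeanDao otherBeanDao;

    public BeanService(BeanDao beanDao, OtherBeanDao otherBeanDao) {
        this.beanDao = beanDao;
        this.otherBeanDao = otherBeanDao;
    }

    @Transactional // both operations commit or roll back together
    public Integer saveBean(Bean bean) {
        Integer idBean = beanDao.save(bean);
        otherBeanDao.updateFor(bean.getIdOtherBean()); // illustrative update
        return idBean;
    }
}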
The others are correct in that they take into account what is common practice currently.
But that doesn't really help you with your current code.
What you should do is create two new DAO methods, such as createGlobalSession and commitGlobalSession.
These do the same thing as your current create and commit routines.
The difference is that they set a "global" session variable (most likely best done with a ThreadLocal). Then you change the current routines to check whether this global session already exists: if your create detects the global session, it simply returns it; if your commit detects the global session, it does nothing.
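A rough sketch of those methods (names and the ThreadLocal layout are illustrative, assuming plain Hibernate):

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class GlobalSessionDao {

    private static final ThreadLocal<Session> GLOBAL = new ThreadLocal<>();

    private final SessionFactory sessionFactory;

    public GlobalSessionDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void createGlobalSession() {
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        GLOBAL.set(session);
    }

    /** Existing create routines call this: join the global session if present. */
    protected Session createNewSessionAndTransaction() {
        Session global = GLOBAL.get();
        if (global != null) {
            return global; // a global session exists: simply return it
        }
        Session session = sessionFactory.openSession();
        session.beginTransaction();
        return session;
    }

    public void commitGlobalSession() {
        Session global = GLOBAL.get();
        if (global != null) {
            global.getTransaction().commit();
            closeGlobal(global);
        }
    }

    public void rollbackGlobalSession() {
        Session global = GLOBAL.get();
        if (global != null && global.getTransaction().isActive()) {
            global.getTransaction().rollback();
            closeGlobal(global);
        }
    }

    private void closeGlobal(Session session) {
        GLOBAL.remove();
        session.close();
    }
}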
Now when you want to use it you do this:
try {
    dao.createGlobalSession();
    beanA.save();
    beanB.save();
    dao.commitGlobalSession();
} finally {
    dao.rollbackGlobalSession();
}
Make sure you wrap the process in a try block so that you can reset your global session if there's an error.
While the other techniques are considered best practice, and ideally you could one day evolve to something like that, this will get you over the hump with little more than three new methods and changes to two existing methods. After that, the rest of your code stays the same.
