What is the purpose of `wire()` and `isWired()` in Seam?

I am checking out Seam 2.2.0.GA, and running seam generate-entities generated Home classes with the methods wire() and isWired(). What are these methods, and what purpose do they serve?

isWired() returns a value that indicates whether the object has all its required references to other objects filled (i.e. all required (not null) foreign keys have values). wire() tries to fill these fields with values from the relevant Home objects.
(I hope someone can post a better description.)
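For illustration, here is roughly the shape of the code seam-gen produces in a generated Home class; the Order/Customer/CustomerHome names are made up for this sketch:

import org.jboss.seam.annotations.In;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.framework.EntityHome;

@Name("orderHome")
public class OrderHome extends EntityHome<Order> {

    @In(create = true)
    CustomerHome customerHome;

    // fill the required Customer reference from the related Home component, if one is defined
    public void wire() {
        getInstance();
        Customer customer = customerHome.getDefinedInstance();
        if (customer != null) {
            getInstance().setCustomer(customer);
        }
    }

    // report whether all required references are set, e.g. so the UI can disable "Save" until then
    public boolean isWired() {
        if (getInstance().getCustomer() == null) return false;
        return true;
    }
}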

They serve to "wire" entity classes together. Later versions of Seam have them actually filled out. I've never actually used them in over a year of Seam programming.

Related

XPages: NotSerializableException on DateTime

I'm kind of desperate. We have a lot of code, and we also have a lot of variables, many of which are inside viewScope and other HashMaps. Every now and then we get the error that some DateTime object cannot be serialized. I understand the why, no problem there. But which variable? Which element of the HashMap? Since serialization happens automatically, out of my control, the problem could be anywhere. It could be a DateTime value the code puts into a viewScope variable (I think I checked them all), it could be my own beans' HashMaps, and it could even be lines with column values from a view. I just don't know...
Can anyone point me in the right direction to find out where that #$#%#! exception really occurs? For instance: can the stack trace be more informative about which HashMap it found the problem in, and maybe even which key?
#$#%#! - read: elusive...
One option would be to add a PhaseListener to your application that, in Render Response phase, iterates through all scopes and outputs the key and the output of getClass() for the value. The code could also do the same for the hash maps in the beans.
There are various examples of PhaseListeners on XSnippets.
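A minimal sketch of such a PhaseListener, assuming the standard JSF 1.x VariableResolver that XPages exposes (register it in faces-config.xml as a phase-listener):

import java.util.Map;
import javax.faces.context.FacesContext;
import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;
import javax.faces.event.PhaseListener;

public class ScopeDumpPhaseListener implements PhaseListener {
    private static final long serialVersionUID = 1L;

    public PhaseId getPhaseId() {
        return PhaseId.RENDER_RESPONSE;
    }

    public void beforePhase(PhaseEvent event) {
        FacesContext ctx = event.getFacesContext();
        dumpScope(ctx, "viewScope");
        dumpScope(ctx, "sessionScope");
    }

    public void afterPhase(PhaseEvent event) {
        // nothing to do after Render Response
    }

    @SuppressWarnings("unchecked")
    private void dumpScope(FacesContext ctx, String scopeName) {
        Object scope = ctx.getApplication().getVariableResolver().resolveVariable(ctx, scopeName);
        if (scope instanceof Map) {
            for (Map.Entry<String, Object> entry : ((Map<String, Object>) scope).entrySet()) {
                Object value = entry.getValue();
                System.out.println(scopeName + "[" + entry.getKey() + "] = "
                        + (value == null ? "null" : value.getClass().getName()));
            }
        }
    }
}

Any entry that turns out to be a HashMap could be walked recursively in the same way to find the offending DateTime key.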

Genson Caching of Bean Accessors

I have recently upgraded to Genson 1.3 and I am not 100% sure if this issue is new or not as previously I patched the 0.98 version to make it work.
Context
We are using our own implementation of BeanMutatorAccessorResolver. This is so that we can dynamically decide whether a property should be serialized or not. Basically we have integrated Genson into our generic Jersey REST API interface; Genson does all the serializing and deserializing. When doing GET requests it is possible for a user to pass fields in the URL in order to filter those he specifically needs (especially necessary for large objects where you only need 3 fields or so for displaying a table overview). For example: ?fields=field1, field2, field3. In our implementation of BeanMutatorAccessorResolver we then know exactly which fields to serialize and which ones to ignore. This is mainly intended to speed up requests and parsing, as we are then working with less data.
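As a rough illustration (this helper is made up, not part of Genson or Jersey), the per-request field list could be exposed to the custom BeanMutatorAccessorResolver through a plain ThreadLocal:

import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

public final class RequestedFields {

    private static final ThreadLocal<Set<String>> FIELDS = new ThreadLocal<Set<String>>();

    private RequestedFields() {
    }

    // called by the Jersey layer before serialization, e.g. from a request filter
    public static void set(String fieldsParam) {
        if (fieldsParam == null || fieldsParam.trim().isEmpty()) {
            FIELDS.remove();
            return;
        }
        Set<String> names = new LinkedHashSet<String>();
        for (String name : fieldsParam.split(",")) {
            names.add(name.trim());
        }
        FIELDS.set(Collections.unmodifiableSet(names));
    }

    // consulted by the resolver; an empty result means "serialize everything"
    public static Set<String> get() {
        Set<String> names = FIELDS.get();
        return names == null ? Collections.<String>emptySet() : names;
    }

    // must be called after the request so state does not leak between pooled threads
    public static void clear() {
        FIELDS.remove();
    }
}

The snag described below is that the resolver's answer is only consulted the first time a type is inspected, because the resulting converter gets cached.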
Problem
Unfortunately it seems that once Genson has read in all the fields via reflection (or however it does it), it caches them. This would be no problem if we were always requesting the same fields. Unfortunately, on some occasions we need more fields than before, but because Genson does not visit our BeanMutatorAccessorResolver a second time, it only returns the few fields that it has already cached.
Is there any way around this? Perhaps there is a better solution than turning caching off completely - because that would most probably hurt performance, right?
Update
It seems that I have found the location where this is happening. Basically, Genson returns a cached converter in Genson.provideConverter(Type forType) (line: 154).
Converter<T> converter = (Converter<T>) converterCache.get(forType);
At the top of the method I have noticed that it looks for a __GENSON$DO_NOT_CACHE_CONVERTER.
if (Boolean.TRUE.equals(ThreadLocalHolder.get("__GENSON$DO_NOT_CACHE_CONVERTER", Boolean.class))) {
Should I perhaps set this value or is there a better solution?
The problem has been solved thanks to Eugen. The solution can be found here: https://groups.google.com/forum/#!topic/genson/Z1oFHJfA-5w.
Basically you need to extend 3 classes to get this working:
GensonBundle, which you can register with the GensonBuilder.
BaseBeanDescriptorProvider, which gets created in GensonBundle.
BeanDescriptor, which gets created in BaseBeanDescriptorProvider and which contains the serialize method to adapt to your needs.

Migrate Struts-Tiles to Spring + tiles 3

I'm migrating a Struts 1 + Tiles project to Spring MVC and Apache Tiles 3. I know only a little about Struts 1 + Tiles; it is quite old, and I'm stuck on Controller and ComponentContext in Struts-Tiles. According to the documentation on the Apache website, they were replaced by ViewPreparer and AttributeContext, but I don't know what the following line means:
ComponentContext compContext = (ComponentContext) pageContext.getAttribute(ComponentConstants.COMPONENT_CONTEXT, PageContext.REQUEST_SCOPE);
What is ComponentConstants.COMPONENT_CONTEXT, and how do I change ComponentContext to AttributeContext?
Please help. Thanks.
Bidi, there are 2 ways of getting an AttributeContext:
The first one, as mck stated: through the "org.apache.tiles.AttributeContext.STACK" key in request scope. However, the value is a stack that contains 2 elements of AttributeContext type, and the one we need is the first element. IMHO this way is limited, because since the data structure is a stack, getting an element also means removing it (LIFO), so you can use the object only once.
I am using the second way in my project, because the execute() method of ViewPreparer already has a parameter of AttributeContext type, and this method is called each time a page is rendered, so you can use this object to do what you want (or put it in the request) when overriding the method.
AttributeContext is just a collection of key/value pairs. Normally, people use it to get access to values which are attributes in the template, so fetching the values and putting them into the request can save the overhead. You can also create static properties on the inheriting class and set the values on them.
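A minimal sketch of that second approach (the attribute and scope names are just examples; the preparer is referenced from a tiles definition via its preparer attribute):

import org.apache.tiles.Attribute;
import org.apache.tiles.AttributeContext;
import org.apache.tiles.preparer.ViewPreparer;
import org.apache.tiles.request.Request;

public class PageTitlePreparer implements ViewPreparer {

    public void execute(Request tilesRequest, AttributeContext attributeContext) {
        // read an attribute declared in the tiles definition (name is an example)
        Attribute title = attributeContext.getAttribute("pageTitle");
        if (title != null) {
            // copy the value into request scope so the JSP can read it without Tiles APIs
            tilesRequest.getContext("request").put("pageTitle", title.getValue());
        }
    }
}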
With the Spring 4 and Tiles 3 integration set up (there are Spring docs on this, as well as a number of good tutorials around), the properties you put into Spring's model map will be available in your JSPs; this is not related to the AttributeContext.
AttributeContext, on the other hand, is (basically) only for holding the map of attributes. Attributes here are defined within a definition, used to identify template or string attributes (as typically declared in your XML definitions), and come with properties of role, renderer, expression, and/or value.
If AttributeContext is what you are after, you can get hold of it through the current TilesContainer, and to get hold of the current container use the static TilesAccess, e.g.
TilesContainer tilesContainer = TilesAccess.getCurrentContainer(request);
AttributeContext attributeContext = tilesContainer.getAttributeContext(request);
Bidi,
take a read of http://tiles.apache.org/framework/tutorial/advanced/runtime.html, in particular the "Runtime Composition using APIs" section.
TilesContainer container = TilesAccess.getContainer(request.getSession().getServletContext());
Request tilesRequest = new ServletRequest(container.getApplicationContext(), request, response);
Otherwise I suggest you take a dive into the Tiles codebase; it's not complicated code, especially the TilesAccess, Request, and ApplicationContext stuff.

FreeMarker: customized TemplateCache storage, want to read information from TemplateKey

I have my own customized FreeMarker template storage configured, and it is working fine.
Recently I wanted to make some changes to the cache management, and I need to read the properties from the cache key, which is of type TemplateKey. Unfortunately, TemplateKey is a private final static class, so I have no access to it and cannot cast the key object back into a TemplateKey.
I see the simplest way is to make a source code change in TemplateCache.java to expose TemplateKey as a public class.
Question to the FreeMarker designers: is there any special reason to keep this TemplateKey unexposed? Is it possible to expose it in the next build?
Thanks.
Rocky
At first glance, I would keep the key class private, because exposing it would introduce a backward-compatibility constraint that can be in the way of further development. But what exactly is your use case that requires information from the key?
We disabled locale lookup; in this case, one FTL file works for all locales. But the key includes the locale, so the same template is cached once per locale and is duplicated in memory.
One solution is to remove the locale from the key when managing it in the cache, which is why I need to read the key properties. But it would still need to be combined with other changes, such as making the template cloneable.
Please refer to this post for details: Freemarker Template Cache are in same content when locale are different, is it a concern on wasting memory?
Thanks.
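For reference, a minimal sketch of the configuration being described (assuming FreeMarker 2.3.21+; the StringTemplateLoader stands in for the custom storage backend):

import freemarker.cache.StringTemplateLoader;
import freemarker.template.Configuration;

public class FreemarkerSetup {
    public static Configuration buildConfiguration() {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_21);
        // stand-in for the custom template storage mentioned above
        StringTemplateLoader loader = new StringTemplateLoader();
        loader.putTemplate("page.ftl", "Hello ${name}");
        cfg.setTemplateLoader(loader);
        // one FTL file serves every locale, so localized lookup (page_en_US.ftl, ...) is disabled
        cfg.setLocalizedLookup(false);
        return cfg;
    }
}

Even with localized lookup disabled, the cache key still carries the locale, which is the per-locale duplication described above.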

"Dynamic" java validation framework?

AFAIK JSR-303 is the standard bean validation system.
I don't know whether it can do validations like these (I guess not):
if an object has a deleted flag set, you cannot modify the object
you cannot change the start date property, after the date is passed
you cannot decrease some integer properties in the bean
So how can I handle validations, which depend on the previous state of an object?
I would like to solve problems like this in a Hibernate 3.5 / Spring 3 / JPA 2 environment.
Thanks
My solution was to mess with Hibernate and reload the object to see the old state (after evicting the new object). This time I need a smarter solution...
I don't think this can be done using JSR 303 validation (or any other validation framework I've used). Validation is usually stateless - you pass it an instance of an object, and your validation framework tests things to make sure the current values of your object are valid. There's no real knowledge of previous states of the object.
You can do this - just not with validation. You could use a constrained property, or you could make this work using the proxy pattern or AOP.
It sounds like the fields which you want to validate (with regard to previous state) are all metadata about the records as opposed to real data. All of these fields (isDeleted, createdDate, etc.) are better left out of your domain layer and therefore do not require validation. I would put the logic for determining and setting these values in your data-access layer, so that the systems using your repository interfaces do not need to know or care about getting them right.
If my assumption about these fields being metadata is not correct and you have user-entered data whose validation depends on previous state, then I do not think that an extra lookup for the previous values is absurd, and it should not be out of the question. It makes sense in your case. Hibernate itself does a lookup under the hood to determine whether to INSERT or UPDATE when using its save function.
Hope you find a reasonable solution.
how can I handle validations, which depend on the previous state of an object?
I'm not 100% sure it's doable, but the only way I can think of would be to create an object graph made of the "new state" and the "old state" (transient) and to validate the object graph as a whole using custom constraints. That's at least what I would try.
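A hedged sketch of that idea, with made-up class and field names: a wrapper holds both versions of the object, and a class-level custom constraint compares them.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Date;

import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = StartDateNotChangedAfterPassedValidator.class)
@interface StartDateNotChangedAfterPassed {
    String message() default "start date cannot be changed after it has passed";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

class Booking {
    Date startDate;
}

@StartDateNotChangedAfterPassed
class BookingChange {
    Booking oldState;   // loaded from the database before applying changes
    Booking newState;   // the incoming, modified version
}

class StartDateNotChangedAfterPassedValidator
        implements ConstraintValidator<StartDateNotChangedAfterPassed, BookingChange> {

    public void initialize(StartDateNotChangedAfterPassed annotation) {
    }

    public boolean isValid(BookingChange change, ConstraintValidatorContext context) {
        if (change == null || change.oldState == null || change.newState == null) return true;
        Date oldStart = change.oldState.startDate;
        boolean alreadyPassed = oldStart != null && oldStart.before(new Date());
        boolean modified = oldStart != null && !oldStart.equals(change.newState.startDate);
        return !(alreadyPassed && modified);
    }
}

Validating the BookingChange wrapper would then flag a change to a start date that has already passed.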
I would probably create a transient field, say previousVersion, which points to a copy of the data representing its previous state. This object is created on construction, but since it is marked transient it is not serialized. Then do the validations against it.
The simplest implementation would be to add a method called makeACopy() which makes a copy of the object and puts it into the field.
You can add complexity by implementing Cloneable or creating a utility class that does reflection, but that's up to you. I suggest makeACopy() and refactoring later, since it is easier to think about.
I don't know any ready-to-use solution either. As you suspect, JSR-303 won't do the job, because its validation is 'static'.
But...
An idea would be to use some AOP techniques to do that. So...
if an object has a deleted flag set, you cannot modify the object
This one I would implement as a proxy method registered around every setter. The proxy method would check the 'deleted' flag. If it was set to true, an exception would be thrown, otherwise the original method would be executed.
you cannot change the start date property, after the date is passed
This one is similar. This time you wouldn't access any other property in the intercepted setter, but rather the original (not yet changed) value of the field and the setter argument.
you cannot decrease some integer properties in the bean
That one is the same as with the dates; the only difference is the data type (date vs. integer).
One can argue whether AOP is a good choice for this task, but it is still a solution. I am doubtful too.
One more concern is that I guess you would want to enforce these constraints on JPA entities. So using Spring AOP wouldn't be that easy, since the entities wouldn't be Spring managed.
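As a rough sketch of the first idea (the Document bean and the exception choice are made up; as noted, plain Spring AOP would only intercept Spring-managed beans, so entities would need compile-time or load-time weaving):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// a made-up bean with a deleted flag, shown only so the aspect has something to target
class Document {
    private boolean deleted;
    private String title;

    public boolean isDeleted() { return deleted; }
    public void setDeleted(boolean deleted) { this.deleted = deleted; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

@Aspect
public class DeletedGuardAspect {

    // intercept every setter on Document and refuse the change once the deleted flag is set
    @Around("execution(void Document.set*(..)) && target(document)")
    public Object rejectSettersWhenDeleted(ProceedingJoinPoint pjp, Document document) throws Throwable {
        if (document.isDeleted()) {
            throw new IllegalStateException("object is deleted and cannot be modified");
        }
        return pjp.proceed(); // not deleted: run the original setter
    }
}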
A completely different approach is to put the validation checks into the setters of properties. The downside is that you would lose declarativeness.
Example:
public void setCounter(int newCounter) {
    if (newCounter < this.counter) {
        // IllegalOperationException is assumed to be an application-specific runtime exception
        throw new IllegalOperationException("Cannot decrease the counter");
    } else {
        this.counter = newCounter;
    }
}
You might want to look at OVal instead. We do this kind of validation all the time. Normally, it's done using the SimpleCheck where you get the object and the value and can do all kinds of cross-checking.
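A hedged sketch of that OVal approach, combining it with the previousVersion idea from an earlier answer (class and field names are made up):

import net.sf.oval.constraint.CheckWith;
import net.sf.oval.constraint.CheckWithCheck;

public class Task {

    // snapshot of the previous state (see the makeACopy() suggestion above); never persisted
    private transient Task previousVersion;

    @CheckWith(CounterNotDecreased.class)
    private int counter;

    public void makeACopy() {
        Task copy = new Task();
        copy.counter = this.counter;
        this.previousVersion = copy;
    }

    public void setCounter(int counter) {
        this.counter = counter;
    }

    public static class CounterNotDecreased implements CheckWithCheck.SimpleCheck {
        private static final long serialVersionUID = 1L;

        public boolean isSatisfied(Object validatedObject, Object value) {
            Task task = (Task) validatedObject;
            if (task.previousVersion == null) return true; // nothing recorded yet
            return ((Integer) value) >= task.previousVersion.counter;
        }
    }
}

Running new net.sf.oval.Validator().validate(task) would then report a violation whenever the counter has been lowered relative to the recorded previous state.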
