Clean way to wire variables after serialization? - java

I've got a web application (Tomcat 7, Spring 4.2, ZK 7). Since I have two servers that can "take over" each other's sessions, the sessions must be serializable, which leads to the problem that I somehow have to re-initialize the Spring services after deserialization. Due to the structure of ZK, the Composers (a kind of controller) need to be serialized, and these Composers use services.
For example, let's say I have an object that needs to be serialized. This object has a reference to a Spring service (which cannot be serialized, since in the end there's a reference to a DataSource, SqlSessionTemplate, etc., none of which are Serializable).
So, how do I solve this problem elegantly? Is there some way to integrate Spring into the deserialization process so that Spring automatically re-wires my (transient, autowired) variables after (or even during) deserialization?
The current solution is to have a singleton bean lying around that has an @Autowired reference to the ApplicationContext, so that I can access it via getInstance() to get a reference to a service, but this solution is not very elegant and also makes testing more complex (since I prefer to unit test without loading a Spring context).
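Roughly, that workaround looks like the sketch below (ServiceLocator and getService are placeholder names, not actual code from the project):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

// Placeholder sketch of the current workaround: a Spring-managed singleton that
// exposes itself statically so deserialized objects can look up services again.
@Component
public class ServiceLocator {

    private static ServiceLocator instance;

    @Autowired
    private ApplicationContext applicationContext;

    public ServiceLocator() {
        instance = this; // remembered when Spring instantiates the singleton bean
    }

    public static ServiceLocator getInstance() {
        return instance;
    }

    public <T> T getService(Class<T> type) {
        return applicationContext.getBean(type);
    }
}

// e.g. in the Composer, after deserialization:
// myService = ServiceLocator.getInstance().getService(MyService.class);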
Is there some other, preferably better, way to do this?

It seems that the most obvious and elegant answer is to declare the ScopedProxyMode of the bean, which wraps it in a proxy that dynamically resolves the non-serializable dependencies, for example...
@Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
More can be found in the Spring documentation. This has also been discussed on Stack Overflow already (with a link to the presentation where it was announced).
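A minimal sketch of what that might look like, assuming a service called MyService that gets injected into a serializable ZK composer (names are placeholders):

// MyService.java
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.stereotype.Service;

// The service itself is not serializable, but what gets injected elsewhere is a
// serializable proxy that re-resolves the real bean after deserialization.
@Service
@Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
public class MyService {
    public void doWork() {
        // uses DataSource, SqlSessionTemplate, ... internally
    }
}

// MyComposer.java
import java.io.Serializable;

// The composer only ever holds the proxy, so it can be serialized safely and the
// field no longer needs to be transient.
public class MyComposer implements Serializable {

    private MyService myService; // injected via ZK's Spring integration; holds the proxy

    public void onSomeEvent() {
        myService.doWork();
    }
}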

Related

Spring - is using new a bad practice?

Is creating objects by hand, i.e. using the new operator instead of registering a Spring bean and using dependency injection, considered bad practice? I mean, does the Spring IoC container have to know about all objects in the application? If so, why?
You want Spring to create beans for classes:
whose instance(s) you want/need to inject into other beans,
that need beans (or dependencies) injected into their own instances,
that you want to benefit from Spring features (instantiation management, transaction management, Spring-powered proxy classes such as repositories and interceptors, and so forth).
Services, controllers, and interceptors are examples of these.
For example, a controller may need to have a service or an interceptor injected.
Besides, you don't want to handle the instantiation of these classes yourself by implementing the singleton pattern for each one, which would be error-prone and require boilerplate code.
So you want all of these classes to be beans managed by Spring.
But you don't want Spring to create beans for classes:
whose instance(s) you don't want/need to inject into other beans,
that don't need beans (or dependencies) injected into their own instances,
that don't need to benefit from Spring features.
Entities, DTOs, and value objects are examples of these.
For example, an entity never needs to be injected into another entity or into a service as a dependency, because entities are not created at container startup; they are generally created inside a method and have a scope limited to that method's lifespan.
Likewise, you don't need Spring to create instances whose lifespan is a single method. The new operator does the job very well.
So defining them as beans makes no sense and even appears counterintuitive.
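To make that distinction concrete, here is a small sketch (OrderService, OrderRepository and Order are invented names): the service is a Spring bean because it needs a dependency injected, while the entity is simply created with new inside a method.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// Managed by Spring: it needs another bean injected and can benefit from
// container features such as transaction management.
@Service
public class OrderService {

    private final OrderRepository orderRepository;

    @Autowired
    public OrderService(OrderRepository orderRepository) { // constructor injection
        this.orderRepository = orderRepository;
    }

    public Order placeOrder(String product) {
        // Created with plain 'new': the entity lives only within this method
        // (until it is persisted), so there is no reason to make it a bean.
        Order order = new Order(product);
        return orderRepository.save(order);
    }
}

// Plain domain object: never registered as a bean.
class Order {
    private final String product;
    Order(String product) { this.product = product; }
    String getProduct() { return product; }
}

// Dependency of the service (a Spring bean in a real application).
interface OrderRepository {
    Order save(Order order);
}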
Spring implements the Dependency Injection pattern. You should register in the Spring container the beans that other classes depend on in order to work. Usually, classes that implement interfaces are injected, so that if you change the implementation, the classes that use the interface don't notice the change.
I recommend you read Martin Fowler's post about dependency injection.
Using new is not bad; you are just giving the IoC container the responsibility of using new under the hood. The IoC container will know about all the classes you register with it. When using frameworks, it's even more important to think about the application's architecture, because a framework makes bad design as easy to implement as good design.
If you don't need multiple implementations of a class, then use new.
If you think it's plausible that you may need to switch between implementations, consider your app design and find a suitable injection point so that refactoring won't be such a drain.
If you need multiple implementations of a class, then use a design pattern like a factory, or a DI framework.
Not every nook and cranny of an application needs to be highly configurable. That's what leads to over-engineered and hard-to-maintain code.
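For instance, a rough sketch of such an injection point (Notifier, EmailNotifier and AlertService are invented names): callers depend on the interface, so swapping in another implementation later only touches the wiring.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// The abstraction that is likely to grow more than one implementation.
interface Notifier {
    void send(String message);
}

// Current implementation; an SmsNotifier could replace it later without
// changing any of the callers.
@Component
class EmailNotifier implements Notifier {
    public void send(String message) {
        // send an e-mail here
    }
}

// The injection point: AlertService only knows the interface.
@Service
class AlertService {

    private final Notifier notifier;

    @Autowired
    AlertService(Notifier notifier) { // Spring injects the registered implementation
        this.notifier = notifier;
    }

    public void alert(String message) {
        notifier.send(message);
    }
}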

How many proxies are created in a Java application that uses Spring Core, Hibernate, and Spring AOP?

I'm reading about Java proxies, and as we know, Spring Core, Hibernate, Spring AOP, and Ehcache all make use of them. I'm confused because Spring Core will create a proxy, Hibernate will create a proxy, and Spring AOP or Ehcache will do the same if we use all of them in a Java project.
How many proxies will be created? Can someone help me out with this problem and give me an example?
Each of those frameworks creates a variable number of proxies, all based on certain design choices and configuration. That said, the only way to get any real idea would be to profile your application.
Most frameworks that use proxies leverage them for similar reasons. These proxies are meant to act as placeholders that look like an object our code knows about and works with; however, the internal implementation details are hidden and often supplemented with framework-specific logic.
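As a rough illustration of the general mechanism (not what any of these frameworks does verbatim), a plain JDK dynamic proxy hands out an object that looks like a known interface while hiding extra logic behind it:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    interface CustomerDao {
        String findName(long id);
    }

    public static void main(String[] args) {
        CustomerDao real = id -> "customer-" + id; // the "real" object

        // The proxy looks like a CustomerDao to callers, but every call passes
        // through the InvocationHandler, where framework-style logic can be added.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("before " + method.getName());
            Object result = method.invoke(real, methodArgs);
            System.out.println("after " + method.getName());
            return result;
        };

        CustomerDao proxied = (CustomerDao) Proxy.newProxyInstance(
                CustomerDao.class.getClassLoader(),
                new Class<?>[] { CustomerDao.class },
                handler);

        System.out.println(proxied.findName(42L)); // before/after logs wrap the call
    }
}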
For example, Hibernate may expose a lazily loaded collection of objects as a collection of proxies. Each proxy looks like the object our application expects in that collection; however, the internal state of that proxy is often not loaded until it is first accessed. In this case, the proxy saves on memory consumption, result-set parsing, database bandwidth, and a plethora of other things.
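For instance, with JPA/Hibernate a mapping roughly like the one below (entity names invented) produces such proxies: the lines collection is only fetched from the database the first time it is accessed.

// PurchaseOrder.java
import java.util.ArrayList;
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class PurchaseOrder {

    @Id
    @GeneratedValue
    private Long id;

    // Hibernate puts a proxy collection here; the actual rows are loaded
    // lazily, on first access, not when the PurchaseOrder itself is loaded.
    @OneToMany(mappedBy = "purchaseOrder", fetch = FetchType.LAZY)
    private List<OrderLine> lines = new ArrayList<>();

    public List<OrderLine> getLines() {
        return lines; // still a proxy until something iterates or sizes it
    }
}

// OrderLine.java -- the owning side of the association
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class OrderLine {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    private PurchaseOrder purchaseOrder;
}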

What is the correct @Scope for Components in Spring Boot desktop/CLI applications?

I've already written a couple of Spring Boot applications (at the moment, one for the web, one using JavaFX, and a handful of CLI applications).
While they all work as expected, I currently struggle with one particular concept of desktop or command-line applications: the @Scope annotation for @Services and @Components.
I recently read a lot about why singletons are "evil" or at least undesirable, but for desktop applications I currently see no other way to implement this, since most of the time a single instance is enough in these kinds of applications.
In Guice I would create a (non-static and non-final) instance in my module. In Spring I use @Scope("singleton").
What I want to know now: Is this a clean solution? Is there any other solution at all?
Regards,
Daniel
The articles you're reading are about the Singleton pattern. Many consider Singleton an anti-pattern and there's plenty of information out there on why. See this answer for some good reasons on why you should avoid the pattern.
What you're referring to is singleton as a scope. Spring does not follow the pattern; a scope of singleton simply indicates that the container will create only a single instance and use it to satisfy dependencies. There could be multiple containers, each with its own instance, or one container where the bean has singleton scope and another where it has prototype scope instead.
Singleton is the default scope in Spring, so you don't actually need to specify it. If you don't have a specific reason to use a different scope, then you probably want the default singleton. Sometimes I need a bean not to be shared, in which case I may use prototype. Please check the Spring documentation for more information on the available scopes and their meaning.
In any case, the key difference here is that this is not an implementation of the singleton pattern. If Spring implemented such a pattern, we would expect every container to hold the same instance, which is not the case at all.
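A small sketch of the two scopes mentioned above (ReportService and ReportJob are arbitrary example classes):

import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
public class AppConfig {

    // Default (singleton) scope: the container creates one instance and hands
    // out that same instance wherever the bean is injected. No annotation needed.
    @Bean
    public ReportService reportService() {
        return new ReportService();
    }

    // Prototype scope: a fresh instance is created every time the bean is
    // requested, so it is never shared.
    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public ReportJob reportJob() {
        return new ReportJob();
    }
}

// Plain example classes used above.
class ReportService { }
class ReportJob { }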

EJB - Home/Remote and LocalHome/Local interfaces

Revising some past exam papers for an exam mainly focused on component-oriented design and J2EE, I have come across the following question:
A preliminary investigation of scenario 3: “Exchange Request” suggests that two EJBs will provide a suitable solution: a session bean called EnterExchangeRequest to control the processing and an entity bean called ExchangeRequest to represent the persistent properties of the request. Discuss the role of the following interfaces:
Home
Remote
LocalHome
Local
and how they would provide access to the services of the EJBs described above.
I could try to explain how the Home and Remote interfaces would fit into the picture. I have also heard the lecturer say one could replace Home with LocalHome, and Remote with Local (why?), but why are they asking me to discuss the role of all four at the same time?
Do I get it right when I say that the EJB container (the application server) would see that an interface is Home or Remote and then decide that the bean can 'live' on any machine in the cluster, whereas if the interfaces are LocalHome and Local, the container will know that the beans can't be distributed across multiple machines and will therefore keep them 'alive' on one machine only?
I am totally lost in this enterprise Java jungle. I am experiencing a BeanOverflow. Could you please tell me which of my assumptions are wrong, point out my misconceptions and blunders.
Thank you all who are willing to help me with these EJB interfaces.
P.S. Note that I am not asking you to answer the question from the past exam paper. I'm just curious if you have any thoughts as to what they could be after when asking this.
As pointed out by Yishay, Home/Remote and LocalHome/Local are tied together and the Home interface functions as a constructor.
Local beans are tied to the JVM they live in; you cannot access them from the outside. Remote beans can be accessed from other JVMs.
I use a similar approach: I always deploy EARs. Beans used only within the EAR I make local; beans meant for use by other EARs I make remote. But it is possible to use the local beans from other EARs, as long as they are deployed in the same JVM.
Home is responsible for the creation of the Remote (kind of like its constructor), and LocalHome and Local have the same relationship.
In each case the container is giving you a proxy that references the real EJB class that you write.
If I had to guess, what the question was looking for was the use of remote for the session bean and local for the entity bean.
Anyway, although these concepts still exist, things have been simplified a great deal in EJB 3.
EDIT: In response to the comment: with EJB 3, the bean class itself can implement the remote and home interfaces directly (for the session beans). They are made EJBs with a single annotation. Stateful beans have a couple more annotations to deal with state issues. Entity beans do not have a Home interface and do not need a Local interface; you can interact with the Java object directly. There is an EntityManager that retrieves the right entity beans based on a query, and that EntityManager is injected via an annotation.
That kind of sums it up in a paragraph. There are great tutorials on the web for this stuff, but EJBs in general solve a class of problem that is hard to appreciate unless you deal with the problem. They aren't the only way to solve it, but unless you deal with this type of programming, just reading about it won't really help you relate to it.
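To give a rough idea of the EJB 3 style described in the edit above (interface, bean and entity names are made up to match the exam scenario, not taken from any real code):

// ExchangeRequestService.java -- remote business interface (a @Local one looks the same)
import javax.ejb.Remote;

@Remote
public interface ExchangeRequestService {
    void enterExchangeRequest(long requestId);
}

// EnterExchangeRequestBean.java -- one annotation turns the class into a session bean
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class EnterExchangeRequestBean implements ExchangeRequestService {

    @PersistenceContext
    private EntityManager em; // injected by the container

    public void enterExchangeRequest(long requestId) {
        ExchangeRequest request = em.find(ExchangeRequest.class, requestId);
        // work with the plain Java object directly; no Home/LocalHome needed
        request.getClass();
    }
}

// ExchangeRequest.java -- the former entity bean is now a plain JPA entity
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class ExchangeRequest {
    @Id
    private Long id;
}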

Do you really need stateless session beans in this case?

We have a project with a pretty considerable number of EJB 2 stateless session beans which were created quite a long time ago. These are not the first-line beans which are accessed from our client via RMI; rather, they are used by that code to perform specific functions. However, I've come to believe that there's nothing to be gained by having them as session beans at all.
They do not need to be accessed via RMI.
They do not retain any state; they are just code that was factored out of the first set of beans to reduce their complexity.
They don't have multiple different implementations which we are swapping out; each one has been as it is for years (barring bug fixes and feature additions).
None of them alter the transaction that comes into them from the bean calling them (that is, they don't require a new transaction, opt out of the existing one, or otherwise change things).
Why should these not all just be classes with a couple of static functions and no EJB trappings at all?
The only reason I can see is for clustering purposes (if you are doing clustering). That is, the hand-off to those beans could happen on another VM on another machine, if clustering is set up properly to spread the load around.
That is likely not the case, and the move to EJBs was just over-engineering. I'm suffering from that too.
Even transactions aren't really enough to justify it; you can have a single EJB that handles the transactions and call the different code through it via a Command-style pattern, as sketched below.
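A rough sketch of that Command-style approach (EJB 3 syntax for brevity; names are invented):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// The unit of work is a plain interface implemented by ordinary POJOs.
interface Command<T> {
    T execute();
}

// A single session bean supplies the transaction; the actual logic stays in POJOs.
@Stateless
public class TransactionalExecutorBean {

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public <T> T executeInTransaction(Command<T> command) {
        return command.execute();
    }
}

// Callers pass in whatever POJO logic needs to run inside a transaction, e.g.:
// executorBean.executeInTransaction(() -> somePojo.doWork());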
There seems to be no reason why they shouldn't just be simple POJOs rather than stateless session beans. I think this is the conclusion that people came to after using EJB 1.x in this manner as well.
It's also the reason why frameworks such as Spring exist as an alternative to EJBs.
I'd say change them over to standard POJOs, but make sure you have a safety net of unit and functional tests (which might be a little harder with EJBs) to help you.
