J2EE: Singleton vs keeping things in session - java

When should an object (e.g. an application-wide properties file) be kept in the session, as opposed to creating a singleton to hold it? When should each of these approaches be used?
Note: I am working on a clustered environment, if that makes any difference.

If it's supposed to be application-wide, then you should not store it in the session scope, but in the application scope. Storing it in the session scope would unnecessarily duplicate the same data for every visitor. A singleton is not needed at all either; just instantiate it once during server startup with the help of a ServletContextListener and store it in the application scope using ServletContext#setAttribute().
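A minimal sketch of that approach, assuming the javax.servlet API and a hypothetical AppProperties class with a load() factory (both names are illustrative, not from the question):

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class AppConfigListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // Load once at startup; every servlet in the application sees it.
        AppProperties props = AppProperties.load(); // hypothetical loader
        event.getServletContext().setAttribute("appProperties", props);
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        event.getServletContext().removeAttribute("appProperties");
    }
}
```

Any servlet can then read it back with `getServletContext().getAttribute("appProperties")` - one copy per application, not one per session.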

+1 to BalusC, but I suspect that was just a typo on your part.
As for singletons, it depends on what you mean by singleton. If you have an EJB annotated with @Singleton, then that's fine (other dependency-injection providers may also support this pattern).
If you're talking about the standard singleton pattern, where you keep the instance in a static variable, then that's a bad idea. You should generally avoid static variables in Java EE or servlet containers, because the class loading can be a bit tricky - you may wind up with multiple copies when you don't expect it, or you may be sharing a single copy between different applications, or you may be keeping stuff in memory when you redeploy your application. You can make an exception in cases where the variable isn't exposed outside the class, and you don't really care how many copies of it you have (for example, logger objects).

Note: I am working on a clustered environment, if that makes any difference.
I don't disagree with what Mike and BalusC have already written, but I feel you're entering territory where implementation details matter. What you do and how you do it will depend on the back-end services, what sort of clustering, and what the application requirements are. I think the question is too broad to give specific answers.
Furthermore...
All Java EE profiles share a set of common features, such as naming and resource injection, packaging rules, security requirements, etc. This guarantees a degree of uniformity across all products, and indirectly applications, that fall under the “Java EE platform” umbrella. This also ensures that developers who are familiar with a certain profile, or with the full platform, can move easily to other profiles, avoiding excessive compartmentalization of skills and experience.
Java EE specifications define a certain level of compliance, but the goal isn't to make every infrastructure heterogeneous; that sort of variation only adds complexity to an already nebulous problem domain.

Related

Is it better to hold a repository for every web application (context) or is it better to share a common instance by JNDI or a similar technique

Within our company it's more or less standard to create repositories for data originally stored in the database, as described for example in https://thinkinginobjects.com/2012/08/26/dont-use-dao-use-repository/.
Our web infrastructure consists of a few independent web applications within Tomcat 7 for printing, product description, product ordering (this is not persisted in the database!), category description etc.
They are all built on the Servlet 2 API.
So each repository implementation holds a specialised kind of data represented by serializable classes, and the instances of these serializable classes are set up/filled by a periodically executed database query (for every result row the setters of the fields are called; this reminds me of domain-oriented entity beans with CMP).
The repositories are initialized in the servlets' init sequences (so every servlet keeps its own set of instances).
Each context has its own connection to the Oracle database (set up by a resource description file on deployment).
All the data is read-only; we never need to write back to the database.
Because we need some of these data types in more than one web application (context), and some even in more than one servlet within the same web context, repositories with identical data types are instantiated more than once - e.g. four times, twice within the same application.
In the end some of the data is duplicated, and I'm not sure this is as clever and efficient as it should be. It should be possible to share the same repository object across more than one application (JNDI?), but at the very least it must be possible to share it between several servlets within the same application context.
Besides, I'm puzzled by the idea of using a "self-built" repository instead of something like a well-tested, openly developed cache (Ehcache, JCS, ...), since some of these caches also provide options for distributed caching (so it should also work within the same container).
When certain entries are searched for, the search algorithm iterates over all entries in the repository (see the link above). For every search pattern there are specialised functions which are called directly from the business logic classes using the "entity beans"; there's no specification object or interface.
In the end the application server as a whole does not perform well and uses a hell of a lot of RAM (for roughly 10,000 DB entries); in my opinion this is most probably correlated with the use of serializable XSD-to-JAXB-generated classes.
Additionally, every time an application is deployed for testing you have to wait at least two minutes until all entries from the database have been loaded into the repositories - when deploying to live there's a noticeable out-of-service phase on context/servlet start-up.
I tend to think all of this is closely related to the solutions I described above.
Because I haven't got any experience in this field and I'm new to the company, I don't want to be too obtrusive.
Maybe you can help me to evaluate ideas for a better setup:
Is it better for performance and memory to unify all the repositories into one "repository servlet" and request objects from it via HTTP (I don't think so, though it seems quite modular/distributed-system friendly), or should I try to go with JNDI (never did that before) and connect to the repository similar to a JDBC data source?
Wouldn't it be even more sensible, faster, and more efficient to use only one single connection pool for the whole Tomcat (and reference this connection pool from within each web app's deployment descriptor)? Or might that slow down connections or limit them in some other way?
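For reference, a rough sketch of how a container-wide pool can be declared in Tomcat 7 and linked into each web app (resource names, host, and credentials are all placeholders):

```xml
<!-- conf/server.xml: one pool shared by the whole container -->
<GlobalNamingResources>
  <Resource name="jdbc/SharedOracleDS" auth="Container"
            type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@//dbhost:1521/SERVICE"
            username="app" password="secret"
            maxActive="20" maxIdle="5"/>
</GlobalNamingResources>

<!-- META-INF/context.xml of each web app: link to the global resource -->
<Context>
  <ResourceLink name="jdbc/AppDS" global="jdbc/SharedOracleDS"
                type="javax.sql.DataSource"/>
</Context>
```

Each application then looks up `java:comp/env/jdbc/AppDS` as usual, but all contexts draw from the same pool.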
I was told that the cache system (Ehcache) didn't work well (at least not with the performance of the self-written solution - though I can't believe that). I imagine that repositories backed by a distributed (i.e. across all contexts) cache used in all web applications should not only reduce the memory footprint significantly, but shouldn't be significantly slower either - I believe it would be faster, have shorter start-up times, and wouldn't need to be redeployed as often.
I'm very grateful for every tip or hint and for your thoughts. It would be marvellous to get a peer review of my ideas based on practical experience.
So thank you very much in advance!
Is it better to hold a repository for every web application (context) or is it better to share a common instance by JNDI or a similar technique
Unless someone proves otherwise, I would say there is no way to do it in a standard way - meaning as defined in the Servlet Spec or in the rest of the Java EE spec canon.
There are technical ways to do it that probably depend on a specific application server implementation, but that cannot be "better" in any universal sense.
If you have two applications that operate on the same data, I wonder whether the partitioning of the applications is useful. Maybe all functionality operating on some kind of data needs to be in the same application?
Within our company it's more or less standard to create repositories for data originally stored in the database, as described for example in https://thinkinginobjects.com/2012/08/26/dont-use-dao-use-repository/.
I looked up Evans on our bookshelf. The blog post is quite odd: a repository and a DAO are basically the same thing - both provide CRUD operations for an object or a tree of objects (Evans says only for the aggregate roots).
The repositories are initialized in the servlets' init sequences (so every servlet keeps its own set of instances). Each context has its own connection to the Oracle database (set up by a resource description file on deployment). [ ... ]
In the end the application server as a whole does not perform well and uses a hell of a lot of RAM
When something performs badly, it's best to do profiling, e.g. with YourKit, or with perf and flame graphs if you are on Linux. If your applications need a lot of RAM, analyse the heap, e.g. with Eclipse MAT. Nobody can give you a recommendation or a best-practice hint without seeing a single line of code.
A general answer would have to cover everything about performance tuning for Oracle DBs, JDBC, Java collections and concurrent programming, networking, and operating systems.
I was told that the cache system (Ehcache) didn't work well (at least not with the performance of the self-written solution - though I can't believe that)
I can. Ehcache is 10-20 times slower than a simple HashMap (see: cache benchmarks). You only need a map when you do a complete preload and don't have any mutations.
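That "you only need a map" point can be sketched in plain Java (ProductRepository and the String payload are illustrative stand-ins for the JAXB-generated classes):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical read-only repository: preload everything once, then
// serve lookups straight from an in-memory map - no cache library needed.
public class ProductRepository {
    private final Map<Long, String> byId;

    public ProductRepository(Map<Long, String> preloaded) {
        // defensive copy; the map is never mutated after construction
        this.byId = new HashMap<>(preloaded);
    }

    public String find(long id) {
        return byId.get(id);
    }
}
```

Because the data never mutates after the preload, there is no invalidation, no eviction, and no locking to reason about - which is exactly why a cache library buys you nothing here.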
I imagine that repositories backed by a distributed (i.e. across all contexts) cache used in all web applications should not only reduce the memory footprint significantly, but shouldn't be significantly slower
Distributed caches have to go over the network and add serialization/deserialization overhead; that's probably another factor of 30 slower. And when is the distributed cache updated?
I'm very grateful for every tip or hint and your thoughts.
Wrap up:
Do the normal software engineering homework: profile, analyse, and spend the tuning effort in the right places
Ask specific questions on one topic on Stack Overflow and share your code and performance data. Ask about one thing at a time, and read https://stackoverflow.com/help/on-topic
You may also come to the conclusion that there is nothing to tune. There are applications out there that need a day to build up an in-memory data structure from persistent data. Maybe it's just a lot of data? If you don't like the downtime, use blue-green deployment. Also use smaller data sets for development and testing

Is JNDI bad as service locator design pattern?

I'm a Java EE newbie developer. According to many resources on the internet, the service locator design pattern is an anti-pattern because it hides a class's dependencies (among other things) and should be avoided as much as possible in favour of dependency injection. As we know, JNDI is an implementation of the service locator pattern.
I googled to check that JNDI is an implementation of service locator and found this response, which claims exactly that: Understanding JNDI
Although I see that JNDI is used in Java EE applications for many purposes (data sources, EJB lookup, ...), should I use it, or should I avoid it as much as possible? And if JNDI isn't bad, then is the service locator pattern not bad either?
I think the part of your question about whether service locator is a good pattern, and whether JNDI embodies it, is a bit esoteric. Having been a software architect for some years now, I can offer this general advice: a pattern by itself is neither good nor bad; it is just a piece of a solution that was used successfully in many cases and was therefore declared a pattern, to be applied to future cases that are similar. Also, as opposed to many years ago - when one had to know the GoF book by heart to survive an interview - nowadays it is much more important to understand the underlying concepts of a framework like Java EE than to implement all those patterns, because what you have to implement is often very simple and straightforward, while using it relies on those concepts.
Concerning the second part of your question: you almost never need to use JNDI directly. Instead, use the concepts built on top of it, such as injection - that is what you should use in your application.
It's a horrible pattern IMHO, since it is a massive security flaw. If dependencies are known at compile time and do not change, then it's much easier to audit, gate, and control possible vulnerabilities. Even within an organization, JNDI is a Trojan horse waiting to be put to nefarious use: if a bad actor can compromise some other area of your network, they can then load whatever they want via a poorly or unwittingly implemented app. The Log4j debacle is proof of that: don't allow apps to look up and load whatever, whenever. It's a stupid idea. It's unsafe.
In a business environment we end up needing different kinds of data across applications, so it makes sense to store them in a shared location. For instance, you may have a set of applications that share the same set of users, and we need authorization information for each of them listing what roles they have, so we know what they may access. That kind of thing goes into an LDAP data store, which you can think of as a hierarchical database optimized for fast read access.
All sorts of things can go in these datastores, it's normal for an application server to stash connection pools in them, for instance. A lot of these, like users, roles, and connection pools, are vital things you need to do your job.
JNDI is the standard Java API for accessing naming and directory services such as these LDAP data stores.
The nasty thing about the service locator design pattern is that the client code doing the lookup has to know too much about the thing it is querying (mainly, where to get it from), and having that lookup hard-coded in the client makes the code inflexible and hard to test. But if we use dependency injection (whether it's CDI, Spring, whatever) we can have the framework inject the value we want into the code, while the JNDI lookups are handled within the framework code and not in the application. That means you can use JNDI without your application code having to use the service locator pattern.
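To make that concrete, here is the same dependency obtained both ways (a sketch: the javax.* APIs are assumed to be provided by the container, and "jdbc/AppDS" is a placeholder resource name):

```java
import javax.annotation.Resource;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Service-locator style: the class itself knows where the pool lives.
class OrderDaoLookup {
    private final DataSource ds;

    OrderDaoLookup() throws NamingException {
        // hard-coded lookup: inflexible and awkward to unit-test
        ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/AppDS");
    }
}

// Injection style: the container resolves the same JNDI entry for us.
class OrderDao {
    @Resource(name = "jdbc/AppDS") // JNDI handled by the framework
    private DataSource ds;
}
```

In a test, OrderDao just needs its field populated with a stub DataSource, while OrderDaoLookup drags a whole naming context along with it.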

ThreadLocal Management in a Servlet 3.0 Asynchronous Environment

For a pilot project I want to implement a custom and distributed user session. It seems the perfect spot for a ThreadLocal binding, carefully managed by a request filter.
Such user session is going to be available both in Servlet and non-Servlet environments. In Servlet environments, it should be independent of the presence or absence of any underlying javax.servlet.http.HttpSession (that is, it won't be allowed to create or use HttpSession objects).
Unfortunately I'm not able to find exhaustive information about how to handle this scenario in a Servlet 3.0+ environment configured for asynchronous operations. I understand (at least, I think...) that a javax.servlet.Filter should add a javax.servlet.AsyncListener to the current javax.servlet.AsyncContext, but some dedicated resources / real examples would be extremely helpful (mostly to reveal nuances I would certainly miss).
I am well aware of the ThreadLocal pitfalls, but their actual benefits (in such a context) make me willing to find the proper way to implement them in such an asynchronous architecture (plus, passing a session reference to inner layers is not an option).
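A sketch of the filter-plus-listener idea described above (javax.servlet 3.0 APIs; the UserSessionFilter name, the String session payload, and resolveSession are illustrative assumptions):

```java
import java.io.IOException;
import javax.servlet.*;

public class UserSessionFilter implements Filter {

    public static final ThreadLocal<String> SESSION = new ThreadLocal<>();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain) throws IOException, ServletException {
        SESSION.set(resolveSession(req)); // hypothetical resolution step
        try {
            chain.doFilter(req, res);
            if (req.isAsyncStarted()) {
                // the request continues on other threads; clean up when it ends
                req.getAsyncContext().addListener(new AsyncListener() {
                    public void onComplete(AsyncEvent e) { SESSION.remove(); }
                    public void onTimeout(AsyncEvent e)  { SESSION.remove(); }
                    public void onError(AsyncEvent e)    { SESSION.remove(); }
                    public void onStartAsync(AsyncEvent e) { }
                });
            }
        } finally {
            // always unbind from the container thread itself, since pooled
            // threads would otherwise leak the value into the next request
            SESSION.remove();
        }
    }

    private String resolveSession(ServletRequest req) { return "..."; }

    @Override public void init(FilterConfig cfg) { }
    @Override public void destroy() { }
}
```

One nuance this exposes: code dispatched via AsyncContext runs on a different thread, so that thread has to rebind the ThreadLocal itself; the listener only guarantees cleanup, not propagation.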

EJB - Home/Remote and LocalHome/Local interfaces

Revising some past exam papers for an exam mainly focused on component-oriented design and J2EE, I came across the following question:
A preliminary investigation of scenario 3: “Exchange Request” suggests that two EJBs will provide a suitable solution: a session bean called EnterExchangeRequest to control the processing and an entity bean called ExchangeRequest to represent the persistent properties of the request. Discuss the role of the following interfaces:
Home
Remote
LocalHome
Local
and how they would provide access to the services of the EJBs described above.
I could try to explain how the Home and Remote interfaces fit into the picture. I have also heard the lecturer say one could replace Home with LocalHome, and Remote with Local (why?), but why are they asking me to discuss the role of all four at the same time?
Do I have it right when I say that the EJB container (the application server) sees that an interface is Home or Remote and then decides the bean can 'live' on any machine in the cluster, while if the interfaces are LocalHome and Local the container knows the beans can't be distributed across multiple machines and will therefore keep them 'alive' on one machine only?
I am totally lost in this enterprise Java jungle. I am experiencing a BeanOverflow. Could you please tell me which of my assumptions are wrong, point out my misconceptions and blunders.
Thank you all who are willing to help me with these EJB interfaces.
P.S. Note that I am not asking you to answer the question from the past exam paper. Just curious if you have any thoughts as to what could they be after when asking this.
As pointed out by Yishay, Home/Remote and LocalHome/Local are tied together and the Home interface functions as a constructor.
Local beans are tied to the JVM they live in, you can not access them from the outside. Remote beans can be accessed from other JVMs.
I use a similar approach: I always deploy EARs. Beans used only within the EAR I make local; beans meant for use by other EARs I make remote. But it is possible to use local beans from other EARs, as long as they are deployed in the same JVM.
Home is responsible for the creation of the Remote (kind of like its constructor) and LocalHome and Local have the same relationship.
In each case the container is giving you a proxy that references the real EJB class that you write.
If I had to guess, what the question was looking for was the use of remote for the session bean and local for the entity bean.
Anyway, although these concepts still exist, things have been simplified considerably in EJB3.
EDIT: In response to the comment: with EJB3, the bean class itself can implement the remote and home interfaces directly (for session beans). They are made EJBs with a single annotation. Stateful beans have a couple more annotations to deal with state issues. Entity beans no longer have a Home interface and don't need a local interface; you can interact with the Java object directly. There is an EntityManager that retrieves the right entity beans based on a query, and that EntityManager is injected via an annotation.
That kind of sums it up in a paragraph. There are great tutorials on the web for this stuff, but EJBs in general solve a class of problem that is hard to appreciate unless you've dealt with it. They aren't the only way to solve it, but unless you deal with this type of programming, just reading about it won't really help you relate to it.
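The EJB3 simplification described above can be sketched roughly like this (names are illustrative, and the javax.ejb / javax.persistence APIs are assumed to come from the container):

```java
import javax.ejb.Remote;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Remote
interface ExchangeRequestService {
    ExchangeRequest find(long id); // ExchangeRequest is a plain JPA entity
}

@Stateless // one annotation, no Home interface, no descriptor ceremony
class ExchangeRequestServiceBean implements ExchangeRequestService {

    @PersistenceContext // the container injects the EntityManager
    private EntityManager em;

    @Override
    public ExchangeRequest find(long id) {
        return em.find(ExchangeRequest.class, id);
    }
}
```

Compare that to EJB 2, where the same session bean needed a Home interface, a Remote interface, and deployment-descriptor entries, and the entity's persistence was handled by CMP rather than a plain Java object.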

Do you really need stateless session beans in this case?

We have a project with a pretty considerable number of EJB 2 stateless session beans, created quite a long time ago. These are not the first-line beans accessed from our client via RMI; rather, they are used by that code to perform specific functions. However, I've come to believe there's nothing to be gained by having them as session beans at all.
- They do not need to be accessed via RMI.
- They do not retain any state; they are just code that was factored out of the first set of beans to reduce their complexity.
- They don't have multiple different implementations which we are swapping out; each one has been as it is for years (barring bug fixes and feature additions).
- None of them alters the transaction that comes into it from the calling bean (that is, they don't require a new transaction, opt out of the existing one, or otherwise change things).
Why should these not all just be classes with a couple of static functions and no EJB trappings at all?
The only reason I can see is for clustering purposes (if you are doing clustering): the hand-off to those beans could be to another VM on another machine, if clustering is done right, to spread the load around.
That is likely not the case, and the move to EJBs was just over-engineering. I'm suffering from that too.
Even transactions aren't really enough to justify it; you can have a single EJB that handles the transactions and call the different code through it via a Command-type pattern.
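A plain-Java sketch of that Command idea (the begin/commit logging here merely simulates the transaction boundary; in a real deployment the gateway would be the single stateless session bean and the boundary would be container-managed):

```java
import java.util.ArrayList;
import java.util.List;

// One gateway owns the transaction boundary; the varying logic is
// passed in as a Command, so only this one class needs to be an EJB.
interface Command<T> {
    T execute();
}

class TransactionalGateway {
    final List<String> log = new ArrayList<>(); // stands in for the tx manager

    <T> T run(Command<T> command) {
        log.add("begin");
        try {
            T result = command.execute();
            log.add("commit");
            return result;
        } catch (RuntimeException e) {
            log.add("rollback");
            throw e;
        }
    }
}
```

All the factored-out helper code then becomes plain classes handed to `run(...)`, and only the gateway carries the EJB trappings.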
There seems to be no reason why they shouldn't just be simple POJOs rather than stateless session beans. I think this is the conclusion people came to after using EJB 1.x in this manner as well.
It's also the reason why frameworks such as Spring exist as an alternative to EJBs.
I'd say change them over to standard POJOs, but make sure you have a safety net of unit and functional tests (which might be a little harder to write with EJBs) to help you.
