We currently have a Stateful bean that is injected into a Servlet. The problem is that sometimes we get a Caused by: javax.ejb.ConcurrentAccessException: SessionBean is executing another request. [session-key: 7d90c02200a81f-752fe1cd-1] when executing a method on the stateful bean.
public class NewServlet extends HttpServlet {

    @EJB
    private ReportLocal reportBean;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            String[] parameters = fetchParameters(request);
            out.write(reportBean.constructReport(parameters));
        } finally {
            out.close();
        }
    }
}
In the above code, constructReport checks whether it needs to open a new connection to the database specified in the Report, after which an HTML report is constructed from a query built from the specified parameters.
The reason why we chose to use a stateful bean over a stateless bean was because we need to open a database connection to an unknown database and perform queries on it. With a stateless bean it seems terribly inefficient to repeatedly open and close database connections with each injected instance of the bean.
A few more details regarding the ConcurrentAccessException: as per the EJB spec, access to SLSB is synchronized by the app. server. However, this is not the case with SFSB. The burden of making sure that the SFSB is not accessed concurrently is on the application developer's shoulders.
Why? Well, synchronization of SLSB is only necessary at the instance level. That is, each particular instance of the SLSB is synchronized, but you may have multiple instances in a pool or on different nodes in a cluster, and concurrent requests on different instances are not a problem. This is unfortunately not so easy with SFSB because of passivation/activation of instances and replication across the cluster. This is why the spec doesn't enforce it. Have a look at this discussion if you are interested in the topic.
This means that using SFSB from a servlet is complicated. A user with multiple windows from the same session, or reloading a page before rendering has finished, can lead to concurrent access. Each access to the EJB that is done in a servlet theoretically needs to be synchronized on the bean itself. What I did was to create an InvocationHandler to synchronize all invocations on the particular EJB instance:
public class SynchronizationHandler implements InvocationHandler {

    private Object target; // the EJB

    public SynchronizationHandler(Object bean) {
        target = bean;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        synchronized (target) {
            // delegate the call to the target EJB while holding its lock
            return method.invoke(target, args);
        }
    }
}
Then, right after you obtain the remote reference to the EJB, you wrap it with the SynchronizationHandler. This way you are sure that this particular instance will not be accessed concurrently from your app (as long as it runs in only one JVM). You can also write a regular wrapper class which synchronizes all the methods of the bean.
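For illustration, here is a minimal sketch of that wrapping step; the ReportRemote business interface and the JNDI name are hypothetical placeholders, not taken from the original code:

// Hypothetical sketch: wrap the looked-up EJB reference so every call
// goes through the SynchronizationHandler above.
ReportRemote rawBean = (ReportRemote) new InitialContext().lookup("ReportBean/remote");
ReportRemote syncBean = (ReportRemote) Proxy.newProxyInstance(
        ReportRemote.class.getClassLoader(),
        new Class<?>[] { ReportRemote.class },
        new SynchronizationHandler(rawBean));
// use syncBean from here on instead of rawBean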
My conclusion is nevertheless: use SLSB whenever possible.
EDIT:
This answer reflects the EJB 3.0 specs (section 4.3.13):
Clients are not allowed to make concurrent calls to a stateful session
object. If a client-invoked business method is in progress on an
instance when another client-invoked call, from the same or different
client, arrives at the same instance of a stateful session bean class,
if the second client is a client of the bean’s business interface, the
concurrent invocation may result in the second client receiving the
javax.ejb.ConcurrentAccessException
Such restrictions have been removed in EJB 3.1 (section 4.3.13):
By default, clients are allowed to make concurrent calls to a stateful
session object and the container is required to serialize such
concurrent requests.
[...]
The Bean Developer may optionally specify that concurrent client
requests to a stateful session bean are prohibited. This is done using
the @AccessTimeout annotation or access-timeout deployment descriptor
element with a value of 0. In this case, if a client-invoked business
method is in progress on an instance when another client-invoked call,
from the same or different client, arrives at the same instance of a
stateful session bean, if the second client is a client of the bean’s
business interface or no-interface view, the concurrent invocation
must result in the second client receiving a
javax.ejb.ConcurrentAccessException
This is not what stateful session beans (SFSB) are intended to be used for. They are designed to hold conversation state, and are to be bound to the user's http session to hold that state, like a heavyweight alternative to storing state in the session directly.
If you want to hold things like database connections, then there are better ways to go about it.
The best option is to use a connection pool; you should always use one. If you're running inside an application server (which, if you're using EJBs, you are), then you can easily use your app server's datasource configuration to create a connection pool and use it inside your stateless session bean (SLSB).
Until you provide some code and the stacktrace, I'd suggest that you consider using a connection pool.
If by "unknown database" you mean a database whose parameters are supplied by the end user, and hence no preconfigured connection pool is possible, you can still use the connection pool concept, rather than opening a new connection each time.
Also, check this thread.
Session beans cannot be used concurrently; as skaffman said, they are meant to handle state corresponding to the client session and are typically stored in the session object, one per client.
Refactoring to use a database pool to handle concurrent requests to your resources is the way to go.
In the meantime, if all you need is this trivial use, you could synchronise the call to constructReport as in:
synchronized (reportBean) {
out.write(reportBean.constructReport(parameters));
}
Note that this is no solution if constructReport takes a significant amount of time relative to your number of clients.
You should never synchronize servlet or EJB access, since this causes requests to queue: if you have N concurrent users, the last one will wait a long time and will often get a timeout response. The synchronized mechanism is not intended for this purpose.
Related
Due to the remote-invocation nature of REST services, they are constantly at risk of running into race conditions with each other. One of the everyday resources to race for is the session. To be practical, you need to be able to put a lock on the resource at the beginning of your process and lift it whenever you are done with it.
Now my question is: does Spring Session have any feature to deal with race conditions over session entries?
Or any other library/framework in Java?
If you're using Spring Controllers, then you can use
RequestMappingHandlerAdapter.setSynchronizeOnSession(boolean)
This will make every Controller method synchronized in presence of a session.
HttpSession.setAttribute is thread safe. However, a getAttribute followed by a setAttribute (a read-modify-write) has to be made thread safe manually.
synchronized (session) {
    Object value = session.getAttribute("foo");
    // ... compute the new value from the old one ...
    session.setAttribute("foo", "bar");
}
The same can be done in the case of Spring session-scoped beans.
synchronized(session) {
//do something with the session bean
}
Edit:
In the case of multiple containers with normal Spring session beans you would have to use sticky sessions. That ensures that a given session's state is stored on one container, and that the same container is accessed every time that session is requested. This has to be done on the load balancer, with the help of something like BigIP cookies. The rest works the same way as for a single session on a single container, so locking the session would suffice.
If you would like to share sessions across instances, containers such as Tomcat and Jetty provide support for that.
These approaches use a back-end database or some other persistence mechanism to store state.
For the same purpose you can try Spring Session, which is trivial to configure with Redis. Since Redis is single threaded, it ensures that each instance of an entry is accessed atomically.
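A minimal configuration sketch, assuming the spring-session-data-redis dependency and a local Redis instance (this is not part of the original answer):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Hypothetical sketch: store HTTP sessions in Redis via Spring Session.
@Configuration
@EnableRedisHttpSession
public class SessionConfig {

    @Bean
    public LettuceConnectionFactory connectionFactory() {
        return new LettuceConnectionFactory(); // defaults to localhost:6379
    }
}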
The above approaches are non-invasive. Both the database and the Redis based approaches support transactions.
However if you want more control over the distributed state and locking you can try using the distributed data grids like Hazelcast and Gemfire.
I have personally worked with the Hazelcast and it does provide methods to lock entries made in the map.
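For example, a hedged sketch of per-entry locking with Hazelcast's IMap; the map name and the sessionId key are illustrative:

// Hypothetical sketch: lock a single session entry across the cluster.
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, Object> sessions = hz.getMap("sessions");

sessions.lock(sessionId); // blocks other threads/nodes for this key only
try {
    Object state = sessions.get(sessionId);
    // modify state ...
    sessions.put(sessionId, state);
} finally {
    sessions.unlock(sessionId);
}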
Edit 2:
Though I believe that handling transactions with Spring Session and Redis should suffice, to be sure you would need distributed locking. The lock object would have to be acquired from Redis itself. Since Redis is single threaded, a hand-rolled implementation would also work, using something like INCR.
The algorithm would go something like this:
// lock_num is the semaphore/lock key
while (true) {
    lock_count = INCR lock_num
    if (lock_count == 1) {
        break                   // lock acquired
    }
    DECR lock_num               // undo this attempt
    wait(wait_time_period)      // back off, then retry
}
// do processing in critical section
DECR lock_num                   // release the lock
However, thankfully Spring already provides this distributed lock implementation via RedisLockRegistry. More documentation on usage is here.
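A hedged usage sketch of RedisLockRegistry; the registry key, lock key, and connection factory variable are illustrative assumptions:

// Hypothetical sketch: a cluster-wide lock keyed by the session id.
RedisLockRegistry lockRegistry = new RedisLockRegistry(redisConnectionFactory, "session-locks");
Lock lock = lockRegistry.obtain(sessionId); // java.util.concurrent.locks.Lock
lock.lock();
try {
    // read, modify and write the session entry here
} finally {
    lock.unlock();
}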
If you decide to use plain Jedis without Spring, then here is a distributed lock for Jedis: Jedis Lock.
//from https://github.com/abelaska/jedis-lock
Jedis jedis = new Jedis("localhost");
JedisLock lock = new JedisLock(jedis, "lockname", 10000, 30000);
lock.acquire();
try {
// do some stuff
}
finally {
lock.release();
}
Both of these should work exactly like Hazelcast locking.
As a previous answer stated, if you are using Spring Session and you are concerned about thread safety on concurrent access of a session, you should set:
RequestMappingHandlerAdapter.setSynchronizeOnSession(true);
One example, EnableSynchronizeOnSessionPostProcessor, can be found here:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter;

public class EnableSynchronizeOnSessionPostProcessor implements BeanPostProcessor {

    private static final Logger logger = LoggerFactory
            .getLogger(EnableSynchronizeOnSessionPostProcessor.class);

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        // NO-OP
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof RequestMappingHandlerAdapter) {
            RequestMappingHandlerAdapter adapter = (RequestMappingHandlerAdapter) bean;
            logger.info("enable synchronizeOnSession => {}", adapter);
            adapter.setSynchronizeOnSession(true);
        }
        return bean;
    }
}
Sticky Sessions and Session Replication
With regards to a clustered application and Sessions, there is a very good post here on SO, that discusses this topic: Sticky Sessions and Session Replication
In my experience, you would want both Sticky Session and Session replication.
You use sticky session to eliminate the concurrent Session access across nodes, because sticky session will pin a session to a single node and each subsequent request for the same session will always be directed to that node. This eliminates the cross-node session access concern.
Replicated sessions are helpful mainly in case a node goes down. With replication, when a node goes down, future requests for existing sessions are directed to another node that has a copy of the original session, which makes the failover transparent to the user.
There are many frameworks that support session replication. The one I use for large projects is the open-source Hazelcast.
In response to your comments made on @11thdimension's post:
I think you are in a bit of a challenging area. Basically, you want to enforce all session operations to be atomic across nodes in a cluster. This leads me to lean towards a common session store across nodes, where access is synchronized (or something similar).
Multiple session store / replication frameworks surely support an external store concept, and I am sure Redis does. I am most familiar with Hazelcast and will use that as an example.
Hazelcast allows you to configure session persistence to use a common database.
If you look at the Map Persistence section, it shows an example and a description of the options.
The description for the concept states:
Hazelcast allows you to load and store the distributed map entries from/to a persistent data store such as a relational database. To do this, you can use Hazelcast's MapStore and MapLoader interfaces.
Data store needs to be a centralized system that is accessible from all Hazelcast Nodes. Persistence to local file system is not supported.
Hazelcast supports read-through, write-through, and write-behind persistence modes which are explained in below subsections.
The interesting mode is write-through:
Write-Through
MapStore can be configured to be write-through by setting the write-delay-seconds property to 0. This means the entries will be put to the data store synchronously.
In this mode, when the map.put(key,value) call returns:
MapStore.store(key,value) is successfully called so the entry is persisted.
In-Memory entry is updated.
In-Memory backup copies are successfully created on other JVMs (if backup-count is greater than 0).
The same behavior goes for a map.remove(key) call. The only difference is that MapStore.delete(key) is called when the entry will be deleted.
I think that by using this concept, plus setting up your database tables for the store to properly lock entries on inserts/updates/deletes, you can accomplish what you want.
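For illustration, a sketch of a JDBC-backed store along those lines; the class name, value type, table, and SQL hints are assumptions rather than anything from the Hazelcast documentation (extending MapStoreAdapter keeps the sketch short):

// Hypothetical sketch: write-through persistence for session entries.
public class SessionMapStore extends MapStoreAdapter<String, byte[]> {

    private final DataSource dataSource; // configured elsewhere

    public SessionMapStore(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void store(String key, byte[] value) {
        // UPSERT into e.g. a "sessions" table; row-level locking on this
        // statement provides the per-entry atomicity discussed above.
    }

    @Override
    public byte[] load(String key) {
        // SELECT data FROM sessions WHERE id = ? (optionally FOR UPDATE)
        return null;
    }

    @Override
    public void delete(String key) {
        // DELETE FROM sessions WHERE id = ?
    }
}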
Good Luck!
I am new to EJB so please don't mind anything silly in the question.
I have a doubt that someone might be able to solve hopefully.
I have the following Stateful Bean:
@Stateful
public class SessionBean implements SessionBeanRemote {

    private int count = 0;

    @Override
    public int getCount() {
        count++;
        return count;
    }
}
And this is the client that invokes the Bean (Servlet)
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    InitialContext ctx;
    HttpSession session = request.getSession();
    SessionBeanRemote obj;
    try {
        if (session.getAttribute("myBean") == null) {
            ctx = new InitialContext();
            obj = (SessionBeanRemote) ctx.lookup("SessionBean/remote");
            session.setAttribute("myBean", obj);
        } else {
            obj = (SessionBeanRemote) session.getAttribute("myBean");
        }
        System.out.println(obj.getCount());
    } catch (NamingException ex) {
        Logger.getLogger(TestServlet.class.getName()).log(Level.SEVERE, null, ex);
    }
}
Now I was wondering: if it is ultimately the HttpSession that has to hold the session bean, why use EJB at all? Why not just store whatever we want in the session directly, rather than putting it in the session bean first and then storing that bean in the session?
I was also wondering: say I change my @Stateful annotation to @Stateless, execute the same code on the client side, and store the bean in the session; I can then extract the same bean from the session, so what is the difference between stateless and stateful? I know that when a new lookup is done there is a chance that the same stateless bean might be returned to me, whereas with a stateful bean it is always a new one when we do a lookup. But is that it?
P.S. As I mentioned earlier, I am new to EJB and all these doubts are based on what I have understood from a few online tutorials and some questions on SO. I have also tried running it locally, but unfortunately I am not able to deploy the application on GlassFish because of the following error: "Exception while loading the app: EJB Container initialization error". I am trying to look into it.
They're two unrelated concepts.
If you separate concerns (and you should), then the HTTP session and EJB session(s) operate at logically distinct layers. The HTTP session is for holding an individual web browser user's state. An EJB session is used for holding transactional, scalable, fault-tolerant and "transparently" (possibly remote) references within the context of an Enterprise application client.
The fact that you're using EJB(s) to serve web content, does not mean you cannot also use those same EJB(s) to serve JFC/Swing (or JavaFX) clients.
I'll try to give you a simplified answer.
Yes, you have to store a reference to the SFSB somewhere, in the case of a web application in the HTTP session, but as Elliott wrote you may have other client types as well.
Some benefits of Session bean vs POJO:
container transaction management
container security enforcement
possible remote access via remote interface
can be deployed as a separate module (EJB) for potentially better reusability.
If your session bean relies on state, then it is more logical to hold the state in the bean rather than pass all state info in each method call.
Your example is extremely simple: you don't use transactions, persistence, or security, so there is no point in using SFSB or even EJB at all.
SFSB are considered rather heavyweight and in general should be avoided; I'd say the majority of web applications don't use them (it really depends on application requirements). So design your services to be stateless and use stateless beans rather than stateful ones; you will then get better performance and easier reusability.
So if you plan to use some of the features listed above, you may benefit from EJBs; otherwise you may be happy with just the HTTP session and business logic in POJOs.
EJB Sessions
A session in EJB is maintained using SessionBeans in the server's JVM. You design beans that can contain business logic or calculations or dynamic pages, and that can be used by clients. There are two kinds of session beans: Stateful and Stateless.
Stateful: each bean instance is tied to a single client. It maintains the state for that client, can be used only by that client, and when the client "dies" the session bean is "lost". A stateful bean's life-cycle is bound to the client. (Very useful in role-based applications/systems.)
Stateless: a stateless session bean doesn't maintain any state, and there is no guarantee that the same client will get the same stateless bean, even for two consecutive calls. The lifecycle of a stateless session EJB is slightly different from that of a stateful session EJB. It is the EJB container's responsibility to track each session, redirect a client's request to the correct instance of a session bean, and do the same for every client.
Where:
HttpSession: it is acquired through the request object. You cannot really instantiate a new HttpSession object, and it doesn't contain any business logic or calculations; it is more of a place to store objects (from the client as output or to the server as input) and a means of communication between two or more systems over the network.
Good evening! I have a web app with login/logout, where you can buy or sell some products. My queries are stored in a Java class called DBManager, which has just static methods that are called from servlets (I can't use JSPs in this project; the professor gives us this constraint).
So here's the problem: I used to manage the connection with a ServletContextListener. In contextInitialized I set up the connection, and in contextDestroyed I shut it down. The Connection is stored using ServletContext.setAttribute(Connection).
How can I get this attribute from the plain Java class (not a servlet) DBManager? Must I get the object using getServletContext() inside a servlet and then pass it along as a parameter, or is there a shortcut to avoid that?
Opening a connection in the method contextInitialized and closing it in contextDestroyed is a bad solution. Imagine that your web site has two visitors at the same time. They would now share the same database connection. If you work with transactions, you would end up with unexpected results where one visitor commits an intermediate state of a transaction executed by the other visitor. And if the connection is ever lost (maybe because the DB server is restarted), your application will fail because it does not re-establish the connection (unless you have a very smart JDBC driver).
A very expensive but safe solution would be to open a new connection for every database operation, and close it again immediately after the operation.
For a perfect solution, you would use some kind of connection pool. Whenever you need to execute a database statement (or a sequence of statements), you would borrow a connection from the pool. Once you're finished, you would return the connection to the pool. Most pool implementations take care of things like validation (check whether the connection is still "alive"), and multiple threads (different HTTP requests send by different visitors at the same time) can execute statements in parallel.
If you want to go for this solution, maybe the Commons DbUtils library is something for you: http://commons.apache.org/dbutils/
Or check out the documentation of your Java application server or servlet engine (Tomcat?) to see what built-in DB connection pooling features it provides.
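For illustration, a hedged sketch of borrowing from a container-managed pool; the JNDI name java:comp/env/jdbc/MyDB is an assumption and depends on your configuration:

// Hypothetical sketch: look up the pool once, borrow a connection per operation.
DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDB");
try (Connection conn = ds.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT 1")) {
    ps.executeQuery();
} // the connection is returned to the pool here, not physically closed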
Instead of arbitrarily getting the connection, you could change your class to receive one. That allows you to pass in any Connection instance (what happens if you need to get data from another database?).
Also, imagine that the code will be ported to desktop. With this approach you can reuse your DAO without any changes.
class DBManager {
private Connection connection;
public DBManager(Connection connection) {
this.connection = connection;
}
//methods that will use the connection
}
In your HttpServlet doPost or doGet methods you can call getServletContext().
ServletContext sc = getServletContext();
DataSource ds = (DataSource) sc.getAttribute("ds");
Connection conn = ds.getConnection();
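As a follow-up, a hedged sketch of handing that pooled connection to the DBManager variant above (the query method is hypothetical):

DBManager manager = new DBManager(conn);
// manager.findProducts(...);   // hypothetical query method
// close conn when done (or use try-with-resources) so it returns to the pool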
A servlet can bind an object attribute into the context by name. Any attribute bound into a context is available to any other servlet that is part of the same Web application.
Context attributes are LOCAL to the JVM in which they were created. This prevents ServletContext attributes from being a shared memory store in a distributed container. When information needs to be shared between servlets running in a distributed environment, the information should be placed into a session, stored in a database, or set in an Enterprise JavaBeans component.
Is there any way to start executing java Servlet code (specifically, in Websphere Application Server) (one session, one thread on the Servlet) and then pause to get more information from the calling client at various points? I require that the current session, and ongoing Servlet thread, not die until specified, and instead keep waiting (open) for information from the client.
Is this kind of ongoing conversation possible? Or can the Servlet call to "doPost" only be started - and then the Servlet ignores the client until it finishes?
As suggested, I would use an object stored in session to maintain the state needed. You can also modify the session on a servlet by servlet basis if you need certain actions to extend the session timeout beyond the webapp defaults using the following method in the HttpSession API:
public void setMaxInactiveInterval(int interval) Specifies the time, in seconds, between client requests before the servlet container will invalidate this session. A negative time indicates the session should never timeout.
You just need to establish your logic for your object setting/retrieval from session. Typically something like this:
HttpSession session = req.getSession();
MyBeanClass bean;
Object temp = null;
temp = session.getAttribute("myBean");
if(temp !=null) {
bean = (MyBeanClass) temp;
} else {
bean = new MyBeanClass();
}
// Logic
session.setAttribute("myBean", bean);
You can save/update your session state between requests and when the next request comes, you can restore and continue whatever you were doing.
I have not worked with this directly, but the underlying support is somewhat related to Jetty's continuation model and Servlet 3.0 suspend/resume support.
Web frameworks that work as the post describes (actually, they are resumed across different connections) are sometimes called continuation-based frameworks. I am not aware of any such frameworks in Java (as the Java language is not conducive to such models), but there are two rather well known examples of the general principle:
Seaside (for Smalltalk) and;
Lift (for Scala).
Hope this was somewhat useful.
I am writing some servlets with plain old mostly-JDBC patterns. I realized that I have several objects that would like to share a single transaction, and I'd like to enforce that one HTTP transaction = one database transaction.
I think I can do this via passing a Connection around in a ThreadLocal variable, and then having a servlet filter handling the creation/commit/rollback of said Connection.
Is there an existing framework that does this that I'm not privy to, or is this a reasonable late-00's way to do things?
Spring transaction management does exactly what you describe. It might be a little overwhelming at first glance, but all you will need (for the simplest case) is:
org.springframework.jdbc.datasource.DataSourceTransactionManager
org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy
org.springframework.transaction.support.TransactionTemplate
Wire up your existing DataSource and wrap it in the TransactionAwareDataSourceProxy, then create a DataSourceTransactionManager with the wrapped data source; keep these in your ServletContext. Then for each transaction create a TransactionTemplate, passing in the transaction manager, and call the execute(TransactionCallback) method to run your code, e.g.:
new TransactionTemplate(transactionManager).execute(new TransactionCallback() {
    public Object doInTransaction(TransactionStatus ts) {
        try {
            // run your code here... use the dataSource to get a connection and run statements
            Connection c = dataSourceProxy.getConnection();
            // to roll back, throw a RuntimeException out of this method or call:
            // ts.setRollbackOnly();
            return null;
        } catch (SQLException e) {
            throw new RuntimeException(e); // unchecked exception triggers rollback
        }
    }
});
The connection will be bound to a thread local, so as long as you always get the connection from the same data source, i.e. the wrapped one, you'll get the same connection in the same transaction.
Note that this is the simplest possible Spring transaction setup, not necessarily the best or recommended one; for that, have a look at the Spring reference docs or read Spring in Action.
... so I guess as a direct answer, yes it is a reasonable thing to be doing, it's what the spring framework has been doing for a long time.
Most app servers today support JTA (Java Transaction API): a transaction that spans multiple open/close JDBC connections. It does the "ThreadLocal" stuff for you and it's J2EE compliant.
You use it like this in your filter:
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
throws IOException, ServletException {
UserTransaction transaction = null;
try {
transaction = (UserTransaction)new InitialContext().lookup("java:comp/UserTransaction");
transaction.begin();
chain.doFilter(request, response);
transaction.commit();
} catch (final Exception errorInServlet) {
try {
transaction.rollback();
} catch (final Exception rollbackFailed) {
log("No ! Transaction failed !",rollbackFailed);
}
throw new ServletException(errorInServlet);
}
}
On the app server, declare a DataSource with a JNDI name, and use it in your code to retrieve a connection (do NOT call cx.commit(), cx.rollback() or cx.setAutoCommit(); it would interfere with JTA). You can open and close your connection several times in the same HTTP transaction; JTA will take care of it:
public void doingDatabaseStuff() throws Exception {
DataSource datasource = (DataSource)new InitialContext().lookup("/path/to/datasource");
Connection connection = datasource.getConnection();
try {
// doing stuff
} finally {
connection.close();
}
}
It is generally better to pass the object with "Parameterisation from Above" than to sleaze it through with a ThreadLocal. In the case of a ServletFilter, an attribute of the ServletRequest would be an obvious place. The interface to non-servlet-dependent code can extract the Connection into a meaningful context.
If you cannot rely on a "real" app server and you want to avoid the not-so-lightweight Spring, using a filter to provide a connection, keep it on the thread, and close it at the end of the request is indeed a practical and reasonable solution.
You would need some (essentially static) accessor class that allows you to get() a connection and call setRollbackOnly(); a sketch follows below.
At the end of the request, from the filter's perspective, make sure to catch exceptions (upon which you should log and set rollback-only) and then commit/rollback and close the connection accordingly.
In most applications and web containers (and JTA usually makes similar assumptions) a request will be processed by exactly one thread and associating the one database connection with the thread for re-use between layers during the request is just the right thing to do.
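A minimal sketch of such an accessor class, under the assumption that a filter (analogous to the JTA filter shown earlier) calls open() before chain.doFilter() and close() in a finally block; all names here are illustrative:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical sketch: one JDBC connection per request thread, managed by a filter.
public final class ConnectionHolder {

    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();
    private static final ThreadLocal<Boolean> ROLLBACK_ONLY =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    private ConnectionHolder() { }

    public static Connection get() {
        return CURRENT.get();
    }

    public static void setRollbackOnly() {
        ROLLBACK_ONLY.set(Boolean.TRUE);
    }

    // called by the filter before chain.doFilter()
    static void open(DataSource ds) throws SQLException {
        Connection c = ds.getConnection();
        c.setAutoCommit(false);
        CURRENT.set(c);
    }

    // called by the filter in a finally block; failed = an exception escaped the request
    static void close(boolean failed) throws SQLException {
        Connection c = CURRENT.get();
        if (c == null) {
            return;
        }
        try {
            if (failed || ROLLBACK_ONLY.get()) {
                c.rollback();
            } else {
                c.commit();
            }
        } finally {
            c.close();
            CURRENT.remove();
            ROLLBACK_ONLY.remove();
        }
    }
}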
Having a filter manage the transaction is a good approach to rolling your own transaction management.
The Java EE specification provides for transaction management, and alternative frameworks like Spring provide similar support (though that's not an endorsement; Spring doesn't necessarily do this well).
However, use of a ThreadLocal can create problems. For example, there are no guarantees that a single thread is used throughout a request, anything can access the Connection through the global variable, and testing can become more difficult if you are depending on some global state to be set up. I'd consider using a dependency injection container to explicitly pass a Connection to objects that need one.