JDO Transaction in Google App Engine - java

I am reading the following tutorial
Take the following code example.
import javax.jdo.Transaction;
import ClubMembers; // not shown

// ...
// PersistenceManager pm = ...;
Transaction tx = pm.currentTransaction();
try {
    tx.begin();
    ClubMembers members = pm.getObjectById(ClubMembers.class, "k12345");
    members.incrementCounterBy(1);
    pm.makePersistent(members);
    tx.commit();
} finally {
    if (tx.isActive()) {
        tx.rollback();
    }
}
Does this mean that only one process/thread at a time can execute the code between tx.begin() and tx.commit()? Are tx.begin() and tx.commit() similar to the synchronized keyword, except that the protection extends to the process level instead of the thread level?
For incrementCounterBy, do we have to explicitly declare the method as synchronized, to ensure that in the entire web environment only one process can execute incrementCounterBy at a time? Or is synchronized protection only applied at the thread level? Does the synchronized keyword help here, or is it redundant, so that we should rely solely on tx.begin() and tx.commit()?

No.
Yes, or use the synchronized (Object) {} notation.
The transaction only gives you the possibility to roll back any changes made since tx.begin().
You also have to consider that the ClubMembers entity can be accessed from outside the synchronized block while your transaction is ongoing.
Other functions accessing the ClubMembers with key "k12345" will see the old counter value until you commit.
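To see why the answer to the first question is "no": a datastore transaction gives you atomicity and rollback, not mutual exclusion. Two requests can be inside the tx.begin()/tx.commit() block at the same time; on App Engine, one of the conflicting commits then fails and should be retried. Below is a minimal sketch of that retry pattern (MAX_RETRIES and the broad JDOException catch are illustrative assumptions; which concrete exception subclass signals commit contention depends on the JDO implementation):

import javax.jdo.JDOException;
import javax.jdo.PersistenceManager;
import javax.jdo.Transaction;

public class CounterUpdater {

    private static final int MAX_RETRIES = 3;

    public void increment(PersistenceManager pm, String key) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            Transaction tx = pm.currentTransaction();
            try {
                tx.begin();
                ClubMembers members = pm.getObjectById(ClubMembers.class, key);
                members.incrementCounterBy(1);
                pm.makePersistent(members);
                tx.commit(); // contention surfaces here as an exception, not as blocking
                return;
            } catch (JDOException e) {
                // another request won the race: fall through and retry
            } finally {
                if (tx.isActive()) {
                    tx.rollback();
                }
            }
        }
        throw new IllegalStateException("increment failed after " + MAX_RETRIES + " attempts");
    }
}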

Related

Should Java JDBC Select statements always be in a Try-Catch block?

Is it good practice to put all Java JDBC select statements in a try-catch block? Currently I write most of my code without it. However, I do use try-catch for insert/update/delete.
Note: Currently using Spring Boot.
String sqlQuery = "Select productId, productName, productStartDate from dbo.product where productId = 5"
public getProductData() {
....
List<Product> productList = namedJdbcTemplate.query(sqlQuery, new ProductMapper());
Since this question is tagged with spring-boot and you are using JdbcTemplate, I'm giving you a Spring-specific answer.
One of the points of Spring is to avoid boilerplate from developers. If you find yourself adding things repetitively, like putting try-catch blocks around code executing DML, that's cause for suspecting you're not doing something right. Adding your own try-catches in code using Spring isn't always wrong, but it usually is.
In the Spring reference doc (https://docs.spring.io/spring-framework/docs/current/reference/html/data-access.html#jdbc) there is a table showing which actions are the developer's responsibility and which are Spring's. Processing exceptions, handling transactions, and closing JDBC resources are all shown as Spring's responsibility.
Spring JDBC takes care of a lot of things for you. It handles closing JDBC resources and returning connections to their pool, and converts exceptions from SQLException to a hierarchy of unchecked DataAccessExceptions. In Spring, an unchecked exception thrown from a method wrapped in a transactional proxy causes the transaction to be rolled back. If you do your own try-catch logic and catch the exception so that the proxy never sees it, you can prevent a rollback from occurring when it needs to. Adding try-catch logic can cause problems if you don't understand what Spring is doing.
Exceptions do need to be caught somewhere. In a Spring web application, you can set up an exception handler that catches anything thrown from the controller layer, so that you can log it. That way the action in progress gets broken off cleanly, the current transaction rolls back, and the exception gets handled in a consistent way. If you have other entry points, such as reading messages from a queue, those will need their own exception handler.
Exceptions are thrown in order to escape the current context, which isn't able to deal with the problem, and relocate control to somewhere safe. For most exceptions coming from JDBC, they aren't anything you can fix, you just want to let it be thrown, let the current transaction rollback, then let the central exception handler catch and log it.
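To make that concrete, here is a minimal sketch under the question's setup (Product and ProductMapper are assumed from the question; the class names and the handler are illustrative). The service method contains no try-catch at all; a DataAccessException propagates through the transactional proxy, which rolls back, and is logged once in a central handler:

import java.util.List;
import java.util.Map;

import org.springframework.dao.DataAccessException;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@Service
public class ProductService {

    private final NamedParameterJdbcTemplate namedJdbcTemplate;

    public ProductService(NamedParameterJdbcTemplate namedJdbcTemplate) {
        this.namedJdbcTemplate = namedJdbcTemplate;
    }

    // No try-catch here: a DataAccessException propagates through the
    // transactional proxy, rolls the transaction back, and is handled once
    // in the central handler below.
    @Transactional(readOnly = true)
    public List<Product> getProductData(long productId) {
        return namedJdbcTemplate.query(
                "select productId, productName, productStartDate from dbo.product where productId = :id",
                Map.of("id", productId),
                new ProductMapper());
    }
}

// One central place to log and translate data-access failures from the web layer:
@ControllerAdvice
class DataAccessExceptionHandler {

    @ExceptionHandler(DataAccessException.class)
    ResponseEntity<String> handle(DataAccessException e) {
        // log it here; the transaction has already been rolled back
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Database error");
    }
}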
First of all, if you're working with raw JDBC API, you should always use PreparedStatement.
Yes, you'll have to wrap the code with a try-catch block at some point, though it's good practice to catch exceptions right away or at the point where it logically fits. In the case of SQL queries, you should wrap them in some service class that gives you access to modify your database objects without going through the JDBC API every time. For example:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UserService {
    private static final String CREATE_USER_SQL = "...";

    private final Connection jdbcConnection;

    public UserService(final Connection jdbcConnection) {
        this.jdbcConnection = jdbcConnection;
    }

    public @Nullable User createUser(final String name) {
        try (final PreparedStatement stmt = jdbcConnection.prepareStatement(CREATE_USER_SQL)) {
            jdbcConnection.setAutoCommit(false);
            stmt.setString(1, name);
            stmt.executeUpdate(); // DML, so executeUpdate() rather than executeQuery()
            jdbcConnection.commit();
            return new User(name);
        } catch (final SQLException createException) {
            System.out.printf("User CREATE failed: %s%n", createException.getMessage());
            try {
                jdbcConnection.rollback();
            } catch (final SQLException rollbackException) {
                System.out.printf("Rollback failed: %s%n", rollbackException.getMessage());
            }
            return null;
        }
    }
}
This solves two problems right away:
You won't need to put boilerplate JDBC code everywhere;
It will log any JDBC errors right away, so you won't need to go through a complex debugging process.
Brief explanation:
First of all, any resource involving I/O access (and database access is I/O access) must always be closed, or it will cause a resource leak.
Secondly, it is better to rely on try-with-resources to close any resource: calling the .close() method manually always carries the risk of not being executed at runtime, because an Exception/RuntimeException/Error may be thrown beforehand. Even closing the resource in a finally block is not preferable, for two reasons: the auto-closure of try-with-resources happens at the end of the try block, while finally executes only after the whole try/catch block; and a throw can happen even inside the finally block itself, preventing it from completing correctly.
This said, you always need to close:
Statement/PreparedStatement/CallableStatement
any ResultSet
the whole Connection when you don't need DB access anymore
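For illustration, a minimal sketch (table and column names borrowed from the question; everything else is illustrative) showing try-with-resources closing the ResultSet first and then the PreparedStatement, even when an exception is thrown:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public final class ProductQuery {

    public List<String> productNames(Connection connection, int productId) throws SQLException {
        String sql = "select productName from dbo.product where productId = ?";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setInt(1, productId);
            try (ResultSet rs = stmt.executeQuery()) {
                List<String> names = new ArrayList<>();
                while (rs.next()) {
                    names.add(rs.getString("productName"));
                }
                return names;
            } // ResultSet closed here, then the PreparedStatement, even on exceptions
        }
    }
}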
Try-catch for DB-layer code is important if you're querying with JDBC.
Think about it: what if the connection breaks? Or the database crashes? Or some other unfortunate scenario comes up?
For these reasons, I recommend always keeping the DB-layer code within try-catch.
It's also recommended to have some fallback mechanism in case of the above events.
You should always handle it with try-catch.
Why: for example, if you start a connection to the DB and then an exception happens, and you don't roll back your transaction, it stays on the DB; performance degrades and resources leak.
Imagine your connection limit is 100, and 100 exceptions are thrown after transactions have started: if you never roll them back, your system will lock up because no new connection to the database can be created.
But if you want an alternative to "try catch finally", you can use something like this:
EmUtil.consEm(em -> {
    System.out.println(em.createNativeQuery("select * from temp").getResultList().size());
});
Source code:
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public final class EmUtil {

    interface EmCons {
        public void cons(EntityManager em);
    }

    public static void consEm(EmCons t) {
        EntityManager em = null;
        try {
            em = getEmf().createEntityManager();
            t.cons(em);
        } finally {
            if (em != null && em.getTransaction().isActive())
                em.getTransaction().rollback();
            if (em != null && em.isOpen())
                em.close();
        }
    }

    private static EntityManagerFactory getEmf() {
        //TODO: return the application's EntityManagerFactory
        throw new UnsupportedOperationException("TODO");
    }
}
Spring translates those exceptions to DataAccessException (see the link for more detail). It is good to catch those exceptions, and you can roll back with @Transactional.

JPA correct way to handle detached entity state in case of exceptions/rollback

I have this class, and I thought of three ways to handle detached entity state in case of persistence exceptions (which are handled elsewhere):
@ManagedBean
@ViewScoped
public class EntityBean implements Serializable
{
    @EJB
    private PersistenceService service;

    private Document entity;

    public void update()
    {
        // HANDLING 1. ignore errors
        service.transact(em ->
        {
            entity = em.merge(entity);
            // some other code that modifies [entity] properties:
            // entity.setCode(...);
            // entity.setResponsible(...);
            // entity.setSecurityLevel(...);
        }); // an exception may be thrown on method return (rollback),
            // but [entity] has already been reassigned with a "dirty" one.

        //------------------------------------------------------------------

        // HANDLING 2. ensure entity is untouched before flush is ok
        service.transact(em ->
        {
            Document managed = em.merge(entity);
            // some other code that modifies [managed] properties:
            // managed.setCode(...);
            // managed.setResponsible(...);
            // managed.setSecurityLevel(...);
            em.flush(); // an exception may be thrown here (rollback),
                        // forcing method exit without [entity] being reassigned.
            entity = managed;
        }); // an exception may be thrown on method return (rollback),
            // but [entity] has already been reassigned with a "dirty" one.

        //------------------------------------------------------------------

        // HANDLING 3. ensure entity is untouched before whole transaction is ok
        AtomicReference<Document> reference = new AtomicReference<>();
        service.transact(em ->
        {
            Document managed = em.merge(entity);
            // some other code that modifies [managed] properties:
            // managed.setCode(...);
            // managed.setResponsible(...);
            // managed.setSecurityLevel(...);
            reference.set(managed);
        }); // an exception may be thrown on method return (rollback),
            // and [entity] is safe, it's not been reassigned yet.
        entity = reference.get();
    }
    ...
}
PersistenceService#transact(Consumer<EntityManager> consumer) can throw unchecked exceptions.
The goal is to keep the state of the entity aligned with the state of the database, even in case of exceptions (preventing the entity from becoming "dirty" after a transaction fails).
Method 1 is obviously naive and doesn't guarantee coherence.
Method 2 asserts that nothing can go wrong after flushing.
Method 3 prevents the new entity assignment if there's an exception anywhere in the transaction.
Questions:
Is method 3. really safer than method 2.?
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Is there a standard way to handle this common problem?
Thank you
Note that I'm already able to roll back the transaction and close the EntityManager (PersistenceService#transact will do it gracefully), but I need to deal with the database state and the business objects getting out of sync. Usually this is not a problem. In my case it is the problem, because exceptions are usually generated by the BeanValidator (those on the JPA side, not on the JSF side, for computed values that depend on user inputs) and I want the user to input correct values and try again, without losing the values he entered before.
Side note: I'm using Hibernate 5.2.1
this is the PersistenceService (CMT)
@Stateless
@Local
public class PersistenceService implements Serializable
{
    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void transact(Consumer<EntityManager> consumer)
    {
        consumer.accept(em);
    }
}
@DraganBozanovic
That's it! Great explanation for points 1 and 2.
I'd just love you to elaborate a little more on point 3 and give me some advice on a real-world use case.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
When you have to modify a single entity, the transactional method would just take the detached entity as parameter and return the updated entity, easy.
public Document updateDocument(Document doc)
{
    Document managed = em.merge(doc);
    // managed.setXxx(...);
    // managed.setYyy(...);
    return managed;
}
But when you need to modify more than one in a single transaction, the method can become a real pain:
public LinkTicketResult linkTicket(Node node, Ticket ticket)
{
    LinkTicketResult result = new LinkTicketResult();

    Node managedNode = em.merge(node);
    result.setNode(managedNode);
    // modify managedNode

    Ticket managedTicket = em.merge(ticket);
    result.setTicket(managedTicket);
    // modify managedTicket

    Remark managedRemark = createRemark(...);
    result.setRemark(managedRemark);

    return result;
}
In this case, my pain:
I have to create a dedicated transactional method (maybe a dedicated @EJB too)
That method will be called only once (it will have just one caller) - a "one-shot", non-reusable public method. Ugly.
I have to create the dummy class LinkTicketResult
That class will be instantiated only once, in that method - it is "one-shot" too
The method could have many parameters (or another dummy class, LinkTicketParameters)
JSF controller actions, in most cases, will just call an EJB method, extract the updated entities from the returned container and reassign them to local fields
My code will be steadily polluted with "one-shotters", too many for my taste.
Probably I'm not seeing something big that's just in front of me, I'll be very grateful if you can point me in the right direction.
Is method 3. really safer than method 2.?
Yes. Not only is it safer (see point 2), but it is conceptually more correct, as you change transaction-dependent state only when you proved that the related transaction has succeeded.
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Yes. For example:
LockMode.OPTIMISTIC: "Optimistically assume that transaction will not experience contention for entities. The entity version will be verified near the transaction end."
It would be neither performant nor practically useful to check optimistic lock violation during each flush operation within a single transaction (a minimal sketch follows this list).
Deferred integrity constraints (enforced at commit time in db). Not used often, but are an illustrative example for this case.
Later maintenance and refactoring. You or somebody else may later introduce additional changes after the last explicit call to flush.
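As a minimal illustration of the first case, assuming the question's Document entity uses standard JPA optimistic locking (trimmed down; the @Version mapping is an assumption about the entity):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Document {

    @Id @GeneratedValue
    private Long id;

    // With optimistic locking, the version check may only run at commit time:
    // every flush can succeed and the commit itself can still fail with an
    // OptimisticLockException, rolling the transaction back.
    @Version
    private long version;
}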
Is there a standard way to handle this common problem?
Yes, I would say that your third approach is the standard one: Use the results of a complete and successful transaction.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
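Applied to the question's PersistenceService, that advice could look like the following sketch: a Function-based variant of transact. The commit happens when the EJB method returns through the container proxy, so if it fails, the exception propagates before the caller's assignment and [entity] stays clean, with no AtomicReference needed:

import java.io.Serializable;
import java.util.function.Function;
import javax.ejb.Local;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
@Local
public class PersistenceService implements Serializable
{
    @PersistenceContext
    private EntityManager em;

    // The caller receives the result only after a successful commit;
    // on rollback, the exception escapes before any assignment happens.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public <T> T transact(Function<EntityManager, T> work)
    {
        return work.apply(em);
    }
}

Usage then becomes entity = service.transact(em -> { Document managed = em.merge(entity); /* modify managed */ return managed; }); if the commit fails, the assignment never executes.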
Not sure if this is entirely to the point, but there is only one way to recover after exceptions: rollback and close the EM. From https://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html/transactions.html#transactions-basics-issues
An exception thrown by the Entity Manager means you have to rollback your database transaction and close the EntityManager immediately (discussed later in more detail). If your EntityManager is bound to the application, you have to stop the application. Rolling back the database transaction doesn't put your business objects back into the state they were at the start of the transaction. This means the database state and the business objects do get out of sync. Usually this is not a problem, because exceptions are not recoverable and you have to start over your unit of work after rollback anyway.
-- EDIT--
Also see http://piotrnowicki.com/2013/03/jpa-and-cmt-why-catching-persistence-exception-is-not-enough/
ps: downvote is not mine.

Is synchronization within an HttpSession feasible?

UPDATE: Solution right after question.
Question:
Usually, synchronization is serializing parallel requests within a JVM, e.g.
private static final Object LOCK = new Object();

public void doSomething() {
    ...
    synchronized (LOCK) {
        ...
    }
    ...
}
When looking at web applications, synchronization on a "JVM global" scope may become a performance bottleneck, and synchronizing only within the scope of the user's HttpSession would make more sense.
Is the following code a possibility? I doubt that synchronizing on the session object is a good idea but it would be interesting to hear your thoughts.
HttpSession session = getHttpServletRequest().getSession();
synchronized (session) {
    ...
}
Key Question:
Is it guaranteed that the session object is the same instance for all threads processing requests from the same user?
Summarized answer / solution:
It seems that the session object itself is not always the same as it depends on the implementation of the servlet container (Tomcat, Glassfish, ...) and the getSession() method might return just a wrapper instance.
So it is recommended to use a custom variable stored in the session to be used as locking object.
Here is my code proposal, feedback is welcome:
somewhere in a Helper Class, e.g. MyHelper:
private static final Object LOCK = new Object();

public static Object getSessionLock(HttpServletRequest request, String lockName) {
    if (lockName == null) lockName = "SESSION_LOCK";
    Object result = request.getSession().getAttribute(lockName);
    if (result == null) {
        // only if there is no session-lock object in the session do we apply the global lock
        synchronized (LOCK) {
            // as another thread may have updated the session-lock object in the meantime,
            // we have to read it again from the session and create it only if it is not there yet!
            result = request.getSession().getAttribute(lockName);
            if (result == null) {
                result = new Object();
                request.getSession().setAttribute(lockName, result);
            }
        }
    }
    return result;
}
and then you can use it:
Object sessionLock = MyHelper.getSessionLock(getRequest(), null);
synchronized (sessionLock) {
    ...
}
Any comments on this solution?
I found this nice explanation in spring-mvc JavaDoc for WebUtils.getSessionMutex():
In many cases, the HttpSession reference itself is a safe mutex as well, since it will always be the same object reference for the same active logical session. However, this is not guaranteed across different servlet containers; the only 100% safe way is a session mutex.
This method is used as a lock when synchronizeOnSession flag is set:
Object mutex = WebUtils.getSessionMutex(session);
synchronized (mutex) {
    return handleRequestInternal(request, response);
}
If you look at the implementation of getSessionMutex(), it actually uses some custom session attribute if present (under org.springframework.web.util.WebUtils.MUTEX key) or HttpSession instance if not:
Object mutex = session.getAttribute(SESSION_MUTEX_ATTRIBUTE);
if (mutex == null) {
    mutex = session;
}
return mutex;
Back to plain servlet spec - to be 100% sure use custom session attribute rather than HttpSession object itself.
See also
http://www.theserverside.com/discussions/thread.tss?thread_id=42912
In general, don't rely on HttpServletRequest.getSession() returning the same object. It's easy for servlet filters to create a wrapper around the session for whatever reason. Your code will only see this wrapper, and it will be a different object on each request. Put some shared lock into the session itself. (Too bad there is no putIfAbsent though.)
As people have already said, sessions can be wrapped by the servlet container, and this creates a problem: the session hashCode() differs between requests, i.e. the wrappers are not the same instance, and thus can't be synchronized on!
Many containers also allow sessions to be persisted. When a session has been idle for a while, it may be written to disk. Even when the session is later restored by deserialization, it is not the same object as before, because it doesn't occupy the same memory address it had before serialization. When the session is loaded from disk, it is put into memory for further access, until "maxInactiveInterval" is reached (expires).
Summing up: the session may not be the same object between web requests! It will be the same only while it stays in memory. Even if you put an attribute into the session to share a lock, it will not work, because it will be serialized as well in the persistence phase.
Synchronization occurs when a lock is placed on an object reference, so that threads that reference the same object will treat any synchronization on that shared object as a toll gate.
So your question raises an interesting point: does the HttpSession object in two separate web calls from the same session end up as the same object reference in the web container, or are they two objects that just happen to have similar data in them? I found this interesting discussion on stateful web apps which discusses HttpSession somewhat. Also, there is this discussion at CodeRanch about thread safety in HttpSession.
From those discussions, it seems like the HttpSession is indeed the same object. One easy test would be to write a simple servlet, look at the HttpServletRequest.getSession(), and see if it references the same session object on multiple calls. If it does, then I think your theory is sound and you could use it to sync between user calls.
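Such a test servlet could look like this sketch (class name and URL pattern are made up): it prints the session id together with the identity hash of the returned object, so you can compare instances across requests. The same id with a changing identity hash would indicate the container hands out wrapper instances:

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

@WebServlet("/session-check")
public class SessionCheckServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        HttpSession session = req.getSession();
        // Same id but different identity hashes across requests would mean
        // getSession() returns wrapper instances rather than one shared object.
        resp.getWriter().printf("id=%s identity=%08x%n",
                session.getId(), System.identityHashCode(session));
    }
}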
Here is my own solution, already summarized at the top of the question: since getSession() may return just a wrapper instance, store a custom lock object in the session and synchronize on that (see the MyHelper.getSessionLock code above).
Another solution suggested in "Murach's Java Servlets and JSP (3rd Edition)" book:
Cart cart;
final Object lock = request.getSession().getId().intern();
synchronized (lock) {
    cart = (Cart) request.getSession().getAttribute("cart");
}
Personally, I implement session-locking with the help of an HttpSessionListener*:
package com.example;

@WebListener
public final class SessionMutex implements HttpSessionListener {
    /**
     * HttpSession attribute name for the session mutex object. The target for
     * this attribute in an HttpSession should never be altered after creation!
     */
    private static final String SESSION_MUTEX = "com.example.SessionMutex.SESSION_MUTEX";

    public static Object getMutex(HttpSession session) {
        // NOTE: We cannot create the mutex object if it is absent from
        // the session in this method without locking on a global
        // constant, as two concurrent calls to this method may then
        // return two different objects!
        //
        // To avoid having to lock on a global even just once, the mutex
        // object is instead created when the session is created in the
        // sessionCreated method, below.
        Object mutex = session.getAttribute(SESSION_MUTEX);

        // A paranoia check here to ensure we never return a null
        // value. Theoretically, SESSION_MUTEX should always be set,
        // but some evil external code might unset it:
        if (mutex == null) {
            // sync on a constant to protect against concurrent calls to
            // this method
            synchronized (SESSION_MUTEX) {
                // mutex might have since been set in another thread
                // whilst this one was waiting for sync on SESSION_MUTEX,
                // so double-check it is still null:
                mutex = session.getAttribute(SESSION_MUTEX);
                if (mutex == null) {
                    mutex = new Object();
                    session.setAttribute(SESSION_MUTEX, mutex);
                }
            }
        }
        return mutex;
    }

    @Override
    public void sessionCreated(HttpSessionEvent hse) {
        hse.getSession().setAttribute(SESSION_MUTEX, new Object());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent hse) {
        // no-op
    }
}
When I need a session mutex, I can then use:
synchronized (SessionMutex.getMutex(request.getSession())) {
    // ...
}
*FWIW, I really like the solution proposed in the question itself, as it provides for named session locks so that requests for independent resources don't need to share the same session lock. But if a single session lock is what you want, then this answer might be right up your street.
The answers are correct. If you want to avoid having the same user execute 2 different (or the same) requests at the same time, you can synchronize on the HttpSession. The best way to do this is to use a Filter (a sketch follows the notes below).
Notes:
if your resources (images, scripts, and any non-dynamic files) also come through the servlet, you could create a bottleneck. Make sure the synchronization is only done on dynamic pages.
Try to avoid calling getSession() directly; it is better to test whether the session already exists, because a session is not automatically created for guests (as nothing has to be stored in the session). If you call getSession(), the session will be created and memory will be wasted. So use getSession(false) and deal with a null result if no session exists yet (in that case, don't synchronize).
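Putting both notes together, a Filter-based sketch (the class name and URL pattern are illustrative; MyHelper.getSessionLock is the helper from the question; assumes Servlet 4.0+, where Filter.init/destroy have default implementations):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

@WebFilter("/app/*") // map to dynamic pages only, not static resources
public class SessionSerializingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpSession session = request.getSession(false); // don't create one for guests
        if (session == null) {
            chain.doFilter(req, resp); // no session yet: nothing to serialize on
            return;
        }
        synchronized (MyHelper.getSessionLock(request, null)) {
            chain.doFilter(req, resp);
        }
    }
}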
The Spring framework solution as mentioned by Tomasz Nurkiewicz is accidentally correct in clustered environments only because the Servlet spec requires session consistency across multiple JVMs. Otherwise, it does not do any magic on its own for scenarios where multiple requests are spread across different machines. See the discussion in this thread that sheds some light on the subject.
Using
    private static final Object LOCK = new Object();
you are using the same lock for all sessions, and that was the core reason for a deadlock I faced.
So every session in your implementation contends on the same lock, which is bad.
It needs to change.
Other suggested answer:
Object mutex = session.getAttribute(SESSION_MUTEX_ATTRIBUTE);
if (mutex == null) {
    mutex = session;
}
return mutex;
seems much better.

How to do transactions without losing encapsulation?

I have code that saves a bean and updates another bean in a DB via Hibernate. This must be done in the same transaction, because if something goes wrong (e.g. an exception is thrown), both operations must be rolled back.
public class BeanDao extends ManagedSession {

    public Integer save(Bean bean) {
        Session session = null;
        try {
            session = createNewSessionAndTransaction();
            Integer idBean = (Integer) session.save(bean); // SAVE
            doOtherAction(bean);                           // UPDATE
            commitTransaction(session);
            return idBean;
        } catch (RuntimeException re) {
            log.error("save failed", re);
            if (session != null) {
                rollbackTransaction(session);
            }
            throw re;
        }
    }
    private void doOtherAction(Bean bean) {
        Integer idOtherBean = bean.getIdOtherBean();
        OtherBeanDao otherBeanDao = new OtherBeanDao();
        OtherBean otherBean = otherBeanDao.findById(idOtherBean);
        // ... (doing operations)
        otherBeanDao.attachDirty(otherBean);
    }
}
The problem is:
In case
    session.save(bean)
throws an error, I then get an AssertionFailure, because the function doOtherAction (which is used in other parts of the project) uses the session after an exception has been thrown.
The first thing I thought of was to extract the code of the function doOtherAction, but then I'd have the same code duplicated, and that doesn't seem to be the best practice.
What is the best way to refactor this?
It's a common practice to manage transactions at one level above DAOs, in services or other business logic classes. That way you can, based on the business/service logic, in one case do two DAO operations in one transaction and, in another case, do them in separate transactions.
I'm a huge fan of Declarative Transaction Management, if you can spare the time to get it working (a piece of cake with an application server such as GlassFish or JBoss, and easy with Spring). If you annotate your business method with @TransactionAttribute(REQUIRED) (it can even be made the default) and it calls the two DAO methods, you will get exactly what you want: everything gets committed at once or rolled back over an exception.
This solution is about as loosely coupled as it gets.
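As a sketch of what that looks like for the question's code (method and class names are illustrative, and it assumes the DAOs are reworked to use a container-managed persistence context instead of opening their own sessions):

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class BeanService {

    @EJB private BeanDao beanDao;
    @EJB private OtherBeanDao otherBeanDao;

    // Both DAO calls join the same container-managed transaction:
    // commit on normal return, rollback if an unchecked exception escapes.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public Integer saveAndUpdate(Bean bean) {
        Integer idBean = beanDao.save(bean);
        otherBeanDao.update(bean);
        return idBean;
    }
}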
The others are correct in that they take into account what is currently common practice.
But that doesn't really help you with your current practice.
What you should do is create two new DAO methods, such as createGlobalSession and commitGlobalSession (a sketch follows at the end of this answer).
What these do is the same thing as your current create and commit routines.
The difference is that they set a "global" session variable (most likely best done with a ThreadLocal). Then you change the current routines so that they check if this global session already exists. If your create detects the global session, then simply return it. If your commit detects the global session, then it does nothing.
Now when you want to use it you do this:
try {
    dao.createGlobalSession();
    beanA.save();
    beanB.save();
    dao.commitGlobalSession();
} finally {
    dao.rollbackGlobalSession();
}
Make sure you wrap the process in a try block so that you can reset your global session if there's an error.
While the other techniques are considered best practice, and ideally you could one day evolve to something like that, this will get you over the hump with little more than 3 new methods and changes to two existing methods. After that, the rest of your code stays the same.
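For reference, a sketch of the "global session" idea described above (ManagedSession and the createNewSessionAndTransaction/commitTransaction/rollbackTransaction helpers are the question's own, not shown; the ThreadLocal mechanics are an assumption about how to implement the suggestion):

import org.hibernate.Session;

public class BeanDao extends ManagedSession {

    // one "global" session per thread of execution
    private static final ThreadLocal<Session> GLOBAL_SESSION = new ThreadLocal<>();

    public void createGlobalSession() {
        GLOBAL_SESSION.set(createNewSessionAndTransaction());
    }

    public void commitGlobalSession() {
        Session s = GLOBAL_SESSION.get();
        if (s != null) {
            commitTransaction(s);
            GLOBAL_SESSION.remove(); // commit done: the finally-rollback becomes a no-op
        }
    }

    public void rollbackGlobalSession() {
        Session s = GLOBAL_SESSION.get();
        if (s != null) {
            rollbackTransaction(s);
            GLOBAL_SESSION.remove();
        }
    }

    // The existing create/commit routines are then changed to check the global
    // first: create returns GLOBAL_SESSION.get() when it is set, and commit does
    // nothing while a global session is in progress.
}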

How do synchronized static methods work in Java and can I use it for loading Hibernate entities?

If I have a util class with static methods that call Hibernate functions to accomplish basic data access, I am wondering if making the methods synchronized is the right approach to ensure thread-safety.
I want this to prevent access to the same DB instance. However, I'm not sure if the following code prevents getObjectById from being called for all classes when it is called by a particular class.
public class Utils {
    public static synchronized Object getObjectById(Class objclass, Long id) {
        // call hibernate class
        Session session = new Configuration().configure().buildSessionFactory().openSession();
        Object obj = session.load(objclass, id);
        session.close();
        return obj;
    }

    // other static methods
}
To address the question more generally...
Keep in mind that using synchronized on methods is really just shorthand (assume the class is SomeClass):
synchronized static void foo() {
    ...
}

is the same as

static void foo() {
    synchronized (SomeClass.class) {
        ...
    }
}
and
synchronized void foo() {
    ...
}

is the same as

void foo() {
    synchronized (this) {
        ...
    }
}
You can use any object as the lock. If you want to lock subsets of static methods, you can
class SomeClass {
    private static final Object LOCK_1 = new Object() {};
    private static final Object LOCK_2 = new Object() {};

    static void foo() {
        synchronized (LOCK_1) {...}
    }

    static void fee() {
        synchronized (LOCK_1) {...}
    }

    static void fie() {
        synchronized (LOCK_2) {...}
    }

    static void fo() {
        synchronized (LOCK_2) {...}
    }
}
(for non-static methods, you would want to make the locks be non-static fields)
By using synchronized on a static method you take the class-level lock, synchronizing access across class methods and attributes (as opposed to instance methods and attributes).
So your assumption is correct.
I am wondering if making the method synchronized is the right approach to ensure thread-safety.
Not really. You should let your RDBMS do that work instead. They are good at this kind of stuff.
The only thing you will get by synchronizing access to the database is making your application terribly slow. Furthermore, in the code you posted you're building a SessionFactory each time; that way, your application will spend more time accessing the DB than performing the actual job.
Imagine the following scenario:
Client A and B attempt to insert different information into record X of table T.
With your approach, the only thing you're getting is to make sure one is called after the other, which would happen anyway in the DB, because the RDBMS will prevent them from inserting half of A's information and half of B's at the same time. The result will be the same, but only 5 times (or more) slower.
It would probably be better to take a look at the "Transactions and Concurrency" chapter in the Hibernate documentation. Most of the time, the problems you're trying to solve have already been solved in a much better way.
Static methods use the class as the object for locking, which is Utils.class for your example. So yes, it is OK.
static synchronized means holding a lock on the class's Class object, whereas synchronized means holding a lock on the instance of the class. That means, while one thread is executing a non-static synchronized method, another thread can still execute a static synchronized method of the same class.
So two methods of the same kind (either two static or two non-static synchronized methods) cannot be executed by more than one thread at any point in time.
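A tiny runnable demo of that: the static synchronized method locks on Demo.class while the instance synchronized method locks on this, so two threads can be inside them at the same time (class and method names are made up for illustration):

public class Demo {

    static synchronized void staticMethod() {
        System.out.println("in staticMethod (lock: Demo.class)");
        sleep();
    }

    synchronized void instanceMethod() {
        System.out.println("in instanceMethod (lock: this)");
        sleep();
    }

    private static void sleep() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        Demo d = new Demo();
        new Thread(Demo::staticMethod).start();
        new Thread(d::instanceMethod).start();
        // Both lines print immediately: the two locks are independent,
        // so the threads do not block each other.
    }
}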
Why do you want to enforce that only a single thread can access the DB at any one time?
It is the job of the database driver to implement any necessary locking, assuming a Connection is only used by one thread at a time!
Most likely, your database is perfectly capable of handling multiple parallel accesses.
If it is something to do with the data in your database, why not utilize database isolation and locking to achieve it?
To answer your question, yes it does: your synchronized method cannot be executed by more than one thread at a time.
How the synchronized Java keyword works
When you add the synchronized keyword to a static method, the method can only be called by a single thread at a time.
In your case, every method call will:
create a new SessionFactory
create a new Session
fetch the entity
return the entity back to the caller
However, these were your requirements:
I want this to prevent access to the same DB instance.
preventing getObjectById being called for all classes when it is called by a particular class
So, even if the getObjectById method is thread-safe, the implementation is wrong.
SessionFactory best practices
The SessionFactory is thread-safe, and it's a very expensive object to create as it needs to parse the entity classes and build the internal entity metamodel representation.
So, you shouldn't create the SessionFactory on every getObjectById method call.
Instead, you should create a singleton instance for it.
private static final SessionFactory sessionFactory = new Configuration()
    .configure()
    .buildSessionFactory();
The Session should always be closed
You didn't close the Session in a finally block, and this can leak database resources if an exception is thrown when loading the entity.
According to its JavaDoc, the Session.load method might throw a HibernateException if the entity cannot be found in the database:
You should not use this method to determine if an instance exists (use get() instead). Use this only to retrieve an instance that you assume exists, where non-existence would be an actual error.
That's why you need to use a finally block to close the Session, like this:
public static synchronized Object getObjectById(Class objclass, Long id) {
    Session session = null;
    try {
        session = sessionFactory.openSession();
        return session.load(objclass, id);
    } finally {
        if (session != null) {
            session.close();
        }
    }
}
Preventing multi-thread access
In your case, you wanted to make sure only one thread gets access to that particular entity.
But the synchronized keyword only prevents two threads from calling the getObjectById concurrently. If the two threads call this method one after the other, you will still have two threads using this entity.
So, if you want to lock a given database object so no other thread can modify it, then you need to use database locks.
The synchronized keyword only works in a single JVM. If you have multiple web nodes, this will not prevent multi-thread access across multiple JVMs.
What you need to do is use LockModeType.PESSIMISTIC_READ or LockModeType.PESSIMISTIC_WRITE while applying the changes to the DB, like this:
Session session = null;
EntityTransaction tx = null;

try {
    session = sessionFactory.openSession();

    tx = session.getTransaction();
    tx.begin();

    Post post = session.find(
        Post.class,
        id,
        LockModeType.PESSIMISTIC_READ
    );

    post.setTitle("High-Performance Java Persistence");

    tx.commit();
} catch (Exception e) {
    LOGGER.error("Post entity could not be changed", e);
    if (tx != null) {
        tx.rollback();
    }
} finally {
    if (session != null) {
        session.close();
    }
}
So, this is what I did:
I created a new EntityTransaction and started a new database transaction
I loaded the Post entity while holding a lock on the associated database record
I changed the Post entity and committed the transaction
In the case of an Exception being thrown, I rolled back the transaction
