Can we write a custom Spring transaction manager? - java

Suppose that we have a businessLogic() method that does two things: it writes some information to a local cache and saves the same information in the DB using JDBC, so that the contents of the cache and the DB are always the same.
I know we can use Spring's JDBC DataSource transaction manager to automatically roll back the DB changes in case of an exception. However, how can we define a custom transaction manager that also rolls back the content of the cache in this case, so that the contents of the cache and the DB are always in sync?
Thanks all.

Gab's answer is right, except for the parts that aren't.
XA is indeed the standard way to coordinate updates of multiple resources... except that where the cache is local, i.e. in-process, it's not necessarily a resource.
A cache doesn't exactly 'implement JTA', it acts in one of two roles in the XA protocol, according to how it's deployed. It can be an XAResource, but that's usually only required where its lifecycle is distinct from that of the client process. For in-process use, it's more likely to be a Synchronization.
The key difference between these roles is: XAResource is fault-tolerant, but Synchronization is not. For a volatile cache that's in-memory with the client process, it's sufficient to rebuild the cache after a crash by querying the db. For a cache that's out of process, a client crash after the db tx commit but before the cache update would leave the cache out of sync, at least until it expired or was manually refreshed.
Depending on the cache implementation, there is no guarantee it will pick the right mode automatically. See the configuration reference for your chosen implementation e.g. https://infinispan.org/docs/stable/user_guide/user_guide.html#tx_sync_enlist
Spring isn't actually a JTA XA transaction manager either, though it does provide an abstraction layer over them. It's possible to use Spring to drive a database in non-XA mode, but then you have no standard hook for the cache Synchronizations and you need a proprietary interface instead. Or you can have the database do pseudo-XA via a one-phase resource adapter. Full-on 2PC is probably overkill for your use case.
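To make the in-process, non-XA case concrete, here is a minimal sketch of registering a Spring transaction synchronization so that a local cache is only touched once the JDBC transaction has actually committed. Only @Transactional, TransactionSynchronization and TransactionSynchronizationManager are Spring APIs here; the service name, the cache map and saveToDb() are hypothetical stand-ins for the question's businessLogic(). On Spring versions before 5.3 the interface lacks default methods, so extend TransactionSynchronizationAdapter instead.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: the class, cache map and saveToDb() are illustrative.
@Service
public class CacheAwareService {

    private final Map<String, String> cache = new ConcurrentHashMap<>();

    @Transactional
    public void businessLogic(String key, String value) {
        saveToDb(key, value); // JDBC write; rolled back by the DataSource tx manager on failure

        // Defer the cache write until the JDBC transaction actually commits;
        // on rollback, afterCommit() never runs and the cache stays untouched.
        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
            @Override
            public void afterCommit() {
                cache.put(key, value);
            }
        });
    }

    private void saveToDb(String key, String value) {
        // e.g. a JdbcTemplate insert/update
    }
}
```

Note the trade-off described above: afterCommit() runs outside the transaction, so a crash between the DB commit and the cache write leaves the cache stale until it is rebuilt.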

First of all, I believe that transaction management for the cache is redundant here. I advise you to update the cache only after the database-level transaction has been successfully committed (see the sketch below).
Most caching scenarios are perfectly acceptable if there is a small window between the update of an entity in the database and the update of its cached state.
If your case rejects any possibility of an outdated cache, then you should probably avoid caching altogether, or cache in something that supports transactions, possibly the same database that holds your original data. Otherwise you will have problems trying to maintain consistency between two different systems: the DB level and the cache level. Most of the time the best you can achieve is eventual consistency, meaning there will always be windows of inconsistent state, and only eventually will the data become consistent.
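A minimal sketch of that advice using Spring's transaction-bound events (available since Spring 4.2); the event class, cache map and service names are hypothetical:

```java
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical event carrying the data that was just persisted.
class EntitySavedEvent {
    final String key;
    final String value;
    EntitySavedEvent(String key, String value) { this.key = key; this.value = value; }
}

@Service
class SaveService {
    private final ApplicationEventPublisher publisher;
    SaveService(ApplicationEventPublisher publisher) { this.publisher = publisher; }

    @Transactional
    public void save(String key, String value) {
        // ... JDBC/JPA write goes here ...
        publisher.publishEvent(new EntitySavedEvent(key, value));
    }
}

@Component
class CacheUpdater {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Runs only if the surrounding transaction commits; on rollback the cache is never touched.
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onSaved(EntitySavedEvent event) {
        cache.put(event.key, event.value);
    }
}
```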

The standard way to deal with a transaction distributed among multiple resources is to use XA.
You must then access your database using an XA datasource and use a cache implementation that supports JTA, e.g. Ehcache.
I'm not very familiar with Spring Boot, but the transaction manager should manage transaction synchronization across both resources out of the box with the appropriate configuration (no need to override anything). A configuration sketch follows.
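For example, with standalone Atomikos as the JTA implementation, the Spring side reduces to exposing a JtaTransactionManager. This is a sketch, not a definitive setup; the bean names and timeout value are arbitrary, and the datasource and cache still need their own XA/JTA configuration:

```java
import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class JtaConfig {

    // Atomikos transaction service; init()/close() manage its lifecycle.
    @Bean(initMethod = "init", destroyMethod = "close")
    public UserTransactionManager atomikosTransactionManager() {
        UserTransactionManager utm = new UserTransactionManager();
        utm.setForceShutdown(false);
        return utm;
    }

    // Spring's JTA abstraction delegating to Atomikos; @Transactional then
    // spans every enlisted resource (the XA datasource and the JTA-aware cache).
    @Bean
    public JtaTransactionManager transactionManager() throws Exception {
        UserTransactionImp userTransaction = new UserTransactionImp();
        userTransaction.setTransactionTimeout(300); // seconds; arbitrary
        return new JtaTransactionManager(userTransaction, atomikosTransactionManager());
    }
}
```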

Related

How to check when the cache is empty and should be reloaded

I use technologies like Spring Boot, JPA and Java 8. How can I check whether the cache is empty, so that I know when I should send a query to the database to reload it?
Your question does not make clear what type of cache you are using.
JPA's first level of caching is the persistence context.
The EntityManager guarantees that within a single persistence context, for any particular database row, there will be only one object instance. However, the same entity could be managed in another user's transaction, so you should use either optimistic or pessimistic locking.
If you mean the second-level cache: this level of cache exists for performance reasons and sits between the EntityManager and the database. The persistence contexts share it, making the second-level cache available throughout the application. Database traffic is reduced considerably because entities are loaded into the shared cache and made available from there. So in practice you don't need to worry about reloading data from the database yourself when a cache miss happens.
If you are implementing your own caching logic, you need to research how caching actually works and the different caching algorithms such as LRU, MRU, etc. (which I would personally not recommend, since you can use existing providers such as Ehcache, Redis or Hazelcast, to name just a few, for second-level caching). If you use Spring's cache abstraction instead, a cache miss triggers the reload for you, as sketched below.
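With Spring's cache abstraction (enabled via @EnableCaching on a configuration class), a miss on an empty or evicted cache entry transparently falls through to the database. A sketch, where ProductService, ProductRepository, Product and the cache name are hypothetical:

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    private final ProductRepository repository; // hypothetical Spring Data JPA repository

    public ProductService(ProductRepository repository) {
        this.repository = repository;
    }

    // On a cache miss the method body runs (a database query) and the result is
    // stored under the key; on a hit the database is never queried.
    @Cacheable(value = "products", key = "#id")
    public Product findById(Long id) {
        return repository.findById(id).orElse(null);
    }
}
```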

JPA with Multiple Servers

I am currently working on a project that uses JPA (TopLink, currently) for its persistence. Currently, we are running a single application server, but, for redundancy, we would like to add a load balancer and another application server (and possibly more as it grows).
First, I'm running into the issue of JPA caching. Since two processes will be updating the same database, the JPA cache returns the cached value rather than going to the database. I see how to turn that off, and the database itself implements a level of caching. Is turning off the cache completely the way to go here? I see the ways to tell JPA to always get from the database at a query level, but in a multi-server environment, it seems that you'll always want that to happen.
Along with this specific question, I'm interested in anyone out there who has implemented a JPA solution with multiple application servers and what problems arose during the implementation (and any suggestions you have).
Thanks much.
As you have found, you can disable the shared cache, see http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching or http://wiki.eclipse.org/EclipseLink/FAQ/How_to_disable_the_shared_cache%3F
There are also other options available in EclipseLink depending on your data and requirements.
A list of options includes:
Disable shared cache
Enable cache coordination (see, http://www.eclipse.org/eclipselink/api/2.1/org/eclipse/persistence/config/PersistenceUnitProperties.html#COORDINATION_PROTOCOL)
Set a cache invalidation timeout (see, http://www.eclipse.org/eclipselink/api/2.1/org/eclipse/persistence/annotations/Cache.html#expiry%28%29)
Enable optimistic locking. This ensures that a stale object cannot be updated: when an update on stale data occurs it fails, and EclipseLink automatically invalidates the object in the cache.
Investigate the Oracle TopLink integration of EclipseLink and Oracle Coherence to provide a distributed cache.
See also, http://en.wikibooks.org/wiki/Java_Persistence/Caching#Caching_in_a_Cluster
There is no perfect solution; the right choice normally depends on the data/class. Typically an application has a set of read-only classes, read-mostly classes and write-mostly classes. Personally I would enable the cache for the read-only classes with a one-day timeout, enable the cache with cache coordination for the read-mostly classes, and disable the cache for the write-mostly ones. A per-class sketch of this policy follows.
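A sketch of that per-class policy using EclipseLink's @Cache annotation; the entity names and fields are illustrative, and expiry is in milliseconds:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;
import org.eclipse.persistence.annotations.Cache;

// Read-only/read-mostly entity: shared cache enabled, entries invalidated after one day.
@Entity
@Cache(expiry = 86400000) // 24 hours, in milliseconds
class Country {
    @Id private Long id;
    @Version private Long version; // optimistic locking: stale updates fail and invalidate the cache entry
    private String name;
}

// Write-mostly entity: excluded from the shared cache entirely.
@Entity
@Cache(shared = false)
class AuditEvent {
    @Id private Long id;
    private String payload;
}
```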

Update a single object across multiple processes in Java

A couple of relational DB tables are managed by a single object cache that resides in a process. When the cache is committed, the tables are updated. The DB tables are updated by regular SQL queries, not anything fancier like Hibernate.
Eventually, other processes got into the business of modifying this object without communicating with one another; i.e. each process would initialize this object (read from DB) and update it (commit to DB), and the other processes would not know about it, holding on to a stale cache.
I have to fix this workflow. I have thought of a couple of methods.
One is to make this object an MBean. The object would then reside in one process, and every other process would modify it via MBean method invocations.
However, this approach has a couple of problems.
1) Every object returned by this cache would have to be an MBean, which could make the method invocations quite chatty.
2) There is also a requirement that every process should see a consistent data model (cache) of the DB, and should merge its contents to the DB if possible (like a transaction). If the DB was updated significantly by some other process, it is OK for the merge to fail.
What technologies in Java will help to solve this problem?
You should have a look at Terracotta. They have technology that makes multiple JVMs (can be on different servers) appear unified. If you update an object on one JVM, Terracotta will update the instance transparently on all JVMs in the cluster in a safe way.
If you want to keep the object model, you could use a Java object cache for centralized storage before committing, or keep a shared lock using ZooKeeper.
But it sounds like you should really abandon the self-managed cache. Use Hibernate or another JPA implementation, as you mentioned. JPA addresses the cache issues and maintains a shared L2 cache, so this problem has already been thought through for you.
I agree with John: use a second-level cache in Hibernate with support for clustering. It is a much more straightforward way to manage data, using a simplified data access model and letting Hibernate manage the details.
Terracotta Ehcache is one such cache; so are JBoss Cache, Coherence, etc.
More info on the Hibernate second-level cache can be found in the official Hibernate docs, Chapter 19: Improving Performance (note that while the Hibernate docs do list second-level cache providers, the list is woefully out of date; for example, who uses SwarmCache? Its last release was in 2003). A minimal configuration sketch follows.
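As an illustration, assuming Ehcache as the provider (Hibernate 4.x property names; the entity and its fields are made up), enabling the second-level cache is a matter of two configuration properties plus a per-entity annotation:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// In hibernate.cfg.xml / persistence.xml:
//   hibernate.cache.use_second_level_cache = true
//   hibernate.cache.region.factory_class = org.hibernate.cache.ehcache.EhCacheRegionFactory

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // safe for entities that are both read and updated
public class Account {
    @Id private Long id;
    private String owner;
}
```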

Is it possible to use more than one persistence unit in a transaction, without it being XA?

Our application needs to use (read-only) a couple different persistence units pointing to different databases (different, commercial vendors as well).
We do not have the budget to enable 2pc on one of them (Sybase). Is there a way to use these in a transaction without it having to be an XA transaction?
We're using Websphere 6.1, Sybase 12.5.3, Oracle 10g, Java EE 5, and JPA with Hibernate Entity Manager.
Update: the Oracle PU is updated rarely, 1 or 2 times per month; the Sybase PU is updated very frequently, many times per day. Isolation is definitely a concern for the latter; consistency between the two is not necessary to enforce.
Careful.
Read-only does not always mean that 2PC does not apply. If you have two databases, and you read both but only update one, you need a transaction to guarantee consistent results. Suppose you have a scenario where you read database A, then use those results to read and update database B. If you fail to use a transaction with database A, then it is possible that while your operation is active, the data you have read from database A can be read and updated by another application. In this case you can get inconsistent data in database B.
If you truly are reading BOTH databases and updating neither, again you may think that a distributed transaction and its accompanying locking is unnecessary. Once again though, maybe not. You may get inconsistent reads in this scenario as well, if other applications are updating the same databases. It depends on your requirements and the other users of the database.
I would suggest reading up on isolation levels to get some insight into the locking that applies, even during read operations, for all durable stores like databases. Transactional locking may be unnecessary; for example it is unnecessary if you are dealing with data that effectively does not change (no writes by any app).
Maybe there is a business solution here - negotiate with your vendor to drop the price of XA enablement, and pay it. With the economy, you may get a deal you can afford. Side note: I am surprised that you can license a database and NOT get transactions. I was not aware that it was possible to license Sybase in that way.
Atomikos TransactionsEssentials is a free, open source JTA/XA implementation with connection pools for JDBC (and JMS).
One of its features is its support for non-XA datasources. For read-only use (your case), it is safe and easy to use our non-XA datasource to include your Sybase database in a JTA transaction; see the sketch below.
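A minimal sketch of wiring up such a datasource; the driver class and URL shape are the usual Sybase jConnect values, but the resource name, host, pool size and credentials here are placeholders:

```java
import com.atomikos.jdbc.nonxa.AtomikosNonXADataSourceBean;

public class SybaseDataSourceFactory {

    // Enlists a plain (non-XA) JDBC connection into JTA transactions via
    // Atomikos's non-XA emulation; fine for read-only participation.
    public static AtomikosNonXADataSourceBean create() {
        AtomikosNonXADataSourceBean ds = new AtomikosNonXADataSourceBean();
        ds.setUniqueResourceName("sybase-readonly");             // placeholder
        ds.setDriverClassName("com.sybase.jdbc3.jdbc.SybDriver");
        ds.setUrl("jdbc:sybase:Tds:dbhost:5000/mydb");           // placeholder host/port/db
        ds.setUser("reader");                                    // placeholder
        ds.setPassword("secret");                                // placeholder
        ds.setPoolSize(5);
        return ds;
    }
}
```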
Best
Guy

How to maintain Hibernate cache consistency running two Java applications?

Our design has one JVM that is a JBoss/webapp (read/write) used to maintain the data via Hibernate (using JPA) in the DB. The model has 10-15 persistent classes with 3-5 levels of depth in the relationships.
We then have a separate JVM that is the server using this data. As it runs continuously, we just have one long DB session (read-only).
There is currently no intra-JVM cache involved, so we manually signal one JVM from the other.
Now when the webapp changes some data, it signals the server to reload the changed data. What we have found is that we need to tell Hibernate to purge the data and then reload it; just doing a fetch/merge with the DB does not do the job, mainly with respect to the objects several layers down the hierarchy.
Any thoughts on whether there is anything fundamentally wrong with this design or if anyone is doing this and has had better luck with working with hibernate on the reloads.
Thanks,
Chris
A Hibernate Session loads all data it reads from the DB into what is called the first-level cache. Once a row is loaded from the DB, any subsequent fetch for a row with the same PK will return the data from this cache. Furthermore, Hibernate guarantees reference equality for objects with the same PK within a single Session.
From what I understand, your read-only server application never closes its Hibernate session. So when the DB gets updated by the read-write application, the Session on read-only server is unaware of the change. Effectively, your read-only application is loading an in-memory copy of the database and using that copy, which gets stale in due course.
The simplest and best course of action I can suggest is to close and open Sessions as needed. This sidesteps the whole problem. Hibernate Sessions are intended to be a window for a short-lived interaction with the DB. I agree that there is a performance gain by not reloading the object-graph again and again; but you need to measure it and convince yourself that it is worth the pains.
Another option is to close and reopen the Session periodically. This ensures that the read-only application works with data not older than a given time interval. But there definitely is a window where the read-only application works with stale data (although the design guarantees that it gets the up-to-date data eventually). This might be permissible in many applications - you need to evaluate your situation.
The third option is to use a second-level cache implementation together with short-lived Sessions. There are various caching packages that work with Hibernate, with their relative merits and demerits.
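A sketch of the first option above, a fresh Session per unit of work; the SessionFactory wiring and the Order entity are assumed for illustration, not given in the question:

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class OrderReader {

    private final SessionFactory sessionFactory; // assumed to be configured elsewhere

    public OrderReader(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // A short-lived Session per read, so the first-level cache never outlives
    // the interaction and cannot serve stale data.
    public Order loadFresh(Long orderId) {
        Session session = sessionFactory.openSession();
        try {
            return (Order) session.get(Order.class, orderId); // reflects committed DB state
        } finally {
            session.close(); // discards the first-level cache
        }
    }
}
```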
Chris, I'm a little confused about your circumstances. If I understand correctly, you have a both a web app (read/write) a standalone application (read-only?) using Hibernate to access a shared database. The changes you make with the web app aren't visible to the standalone app. Is that right?
If so, have you considered using a different second-level cache implementation? I'm wondering if you might be able to use a clustered cache that is shared by both the web application and the standalone application. I believe that SwarmCache, which is integrated with Hibernate, will allow this, but I haven't tried it myself.
In general, though, you should know that the contents of a given cache will never be aware of activity by another application (that's why I suggest having both apps share a cache). Good luck!
From my point of view, you should change your underlying Hibernate cache to one that supports clustered mode, such as JBoss Cache or SwarmCache. The first has better support for data synchronization (replication and invalidation) and also supports JTA.
Then you will be able to configure cache synchronization between the webapp and the server. Also look at the isolation level if you use JBoss Cache; I believe you should use READ_COMMITTED mode if you want to get new data on the server from the same session.
The most common practice is to have a container-managed EntityManager, so that two or more applications in the same container (i.e. GlassFish, Tomcat, WebSphere) can share the same caches.
But if you don't use an application container, because you use Play! for instance, then I would build some web services in the primary application to read/write consistently in the cache.
I think using stale data is an open door for disaster. Just like singletons become multitons, read-only applications often end up writing sometimes.
Belt and braces :)
