I have to create a mass-insertion feature for our user administration tool. We built a small in-house library using Spring LDAP, and everything works fine for single-user management (CRUD).
I would like to insert hundreds of records at a time and roll back if something goes wrong.
Is there a way to create transactions in LDAP like there is in databases?
Thanks for your ideas.
This is a follow-up to #adrianboimvaser.
Just a note that the Spring LDAP transaction support does not use XA transactions but "logical" compensating transactions, so an LDAP rollback is performed as a compensating action against LDAP. While this is an improvement over no transactions at all, be aware that it is not the same as a typical transaction "like it exists in databases", i.e. the ACID properties of transactions are not supported.
Note that even though the same logical transaction is used, this is not a JTA XA transaction; no two-phase commit will be performed, and thus commit and rollback may yield unexpected results.
For example: if you add 100 entries to LDAP, each record is added one by one. If the last add fails, the rollback action is to remove the 99 entries previously created within the transaction. However, if for some reason those 99 entries cannot actually be removed (e.g. network connectivity to LDAP is down, which is what caused the failure for the 100th entry), then even though you have attempted to roll back the transaction, you will have an inconsistency between the database and LDAP: there will be 99 records in LDAP (because they could not be deleted) which do not exist in the database (because those records were never inserted or were actually rolled back).
I'm not sure what your situation is, but if you have frequent large updates to LDAP, you may want to consider using an actual database, both to avoid the transaction headaches and to optimize performance, since LDAP is designed for fast reads with relatively slower writes.
Have a look at the documentation: http://static.springsource.org/spring-ldap/docs/1.2.0-rc1/reference/#transactions
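To make this concrete, here is a minimal sketch (not taken from the documentation or the original answers) of how a batch insert could be wrapped in Spring LDAP's compensating-transaction support. ContextSourceTransactionManager, TransactionAwareContextSourceProxy, LdapTemplate and LdapNameBuilder are real Spring LDAP classes; the User type, the buildDn helper, the ou=people base and the attribute values are assumptions for illustration, and an LdapContextSource bean is assumed to be defined elsewhere.

import java.util.List;
import javax.naming.Name;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.ldap.core.DirContextAdapter;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.LdapContextSource;
import org.springframework.ldap.support.LdapNameBuilder;
import org.springframework.ldap.transaction.compensating.manager.ContextSourceTransactionManager;
import org.springframework.ldap.transaction.compensating.manager.TransactionAwareContextSourceProxy;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.annotation.Transactional;

// Hedged sketch: Spring LDAP compensating transactions around a batch insert.
@Configuration
@EnableTransactionManagement
class LdapTxConfig {

    @Bean
    ContextSourceTransactionManager ldapTransactionManager(LdapContextSource contextSource) {
        ContextSourceTransactionManager tm = new ContextSourceTransactionManager();
        tm.setContextSource(contextSource);
        return tm;
    }

    @Bean
    LdapTemplate ldapTemplate(LdapContextSource contextSource) {
        // The proxy lets LdapTemplate participate in the surrounding (compensating) transaction.
        return new LdapTemplate(new TransactionAwareContextSourceProxy(contextSource));
    }
}

@Service
class BulkUserService {

    private final LdapTemplate ldapTemplate;

    BulkUserService(LdapTemplate ldapTemplate) {
        this.ldapTemplate = ldapTemplate;
    }

    // If one bind fails, Spring LDAP replays compensating unbinds for the entries
    // already created in this transaction; this is not a true ACID rollback (see the caveats above).
    @Transactional("ldapTransactionManager")
    public void createUsers(List<User> users) {
        for (User user : users) {
            DirContextAdapter ctx = new DirContextAdapter(buildDn(user));
            ctx.setAttributeValues("objectclass", new String[] {"top", "person", "inetOrgPerson"});
            ctx.setAttributeValue("cn", user.getCommonName());
            ctx.setAttributeValue("sn", user.getSurname());
            ldapTemplate.bind(ctx);
        }
    }

    private Name buildDn(User user) {
        return LdapNameBuilder.newInstance("ou=people").add("uid", user.getUid()).build();
    }
}

// Hypothetical value object used only for this example.
class User {
    private final String uid;
    private final String commonName;
    private final String surname;

    User(String uid, String commonName, String surname) {
        this.uid = uid;
        this.commonName = commonName;
        this.surname = surname;
    }

    String getUid() { return uid; }
    String getCommonName() { return commonName; }
    String getSurname() { return surname; }
}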
I am using redisson-hibernate as the Hibernate second-level (L2) cache provider. If the Redis server becomes unavailable while the application is running, the whole application goes down. I am trying to find a way to circuit-break the L2 cache in such a scenario.
When Redis is unavailable, Hibernate should work as if the L2 cache were disabled, and there should be some mechanism (or a check on subsequent requests) that tests Redis availability after a specified amount of time and re-enables the L2 cache. Is there already a way to do this?
If not, how can I build such a mechanism? I could try running a Quartz job that checks Redis connectivity and kills and restarts the application after modifying the app config, but this approach is complicated and not very clean. Please suggest.
I tried the scenario you mention in a Spring Boot application, and all the HTTP requests started to pile up when I shut down Redis. But when I restarted Redis, the application was able to reconnect and processed requests as usual.
The Hibernate and Redis connections are set up based on the configuration available in the application.properties file:
spring.jpa.properties.hibernate.cache.auto_evict_collection_cache=false
spring.jpa.properties.hibernate.generate_statistics=false
spring.jpa.properties.hibernate.cache.use_second_level_cache=true
spring.jpa.properties.hibernate.cache.use_query_cache=true
spring.jpa.properties.hibernate.javax.cache.missing_cache_strategy=create
spring.jpa.properties.hibernate.cache.redisson.config=redisson.json
spring.jpa.properties.javax.persistence.sharedCache.mode=ENABLE_SELECTIVE
I couldn't find any parameters in the Hibernate or Spring JPA documentation that would automatically detect when the L2 cache is down.
I am also eagerly awaiting such a mechanism in Hibernate / Spring JPA, as this scenario is a valid test case that we will surely face in load/production environments.
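One possible do-it-yourself approach (this is not something Hibernate or Spring provide out of the box, and all class names and the property below are made up for illustration): run a scheduled Redis health check and, while Redis is down, open your Hibernate sessions with CacheMode.IGNORE so reads and writes bypass the L2 cache entirely.

import java.util.concurrent.atomic.AtomicBoolean;

import org.hibernate.CacheMode;
import org.hibernate.Session;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hedged sketch of a hand-rolled circuit breaker for the L2 cache.
// It only helps for sessions your own code opens and configures via applyTo().
@Component
public class L2CacheCircuitBreaker {

    private final RedisConnectionFactory redisConnectionFactory;
    private final AtomicBoolean redisUp = new AtomicBoolean(true);

    public L2CacheCircuitBreaker(RedisConnectionFactory redisConnectionFactory) {
        this.redisConnectionFactory = redisConnectionFactory;
    }

    // The property name l2.healthcheck.delay is invented for this example.
    @Scheduled(fixedDelayString = "${l2.healthcheck.delay:10000}")
    public void checkRedis() {
        RedisConnection connection = null;
        try {
            connection = redisConnectionFactory.getConnection();
            connection.ping();
            redisUp.set(true);
        } catch (Exception e) {
            redisUp.set(false);
        } finally {
            if (connection != null) {
                connection.close();
            }
        }
    }

    // Call this right after opening (or unwrapping) a Session.
    public void applyTo(Session session) {
        if (!redisUp.get()) {
            // Skip the second-level cache for this session while Redis is down.
            session.setCacheMode(CacheMode.IGNORE);
        }
    }
}

This assumes scheduling is enabled (@EnableScheduling) and a Spring Data Redis connection factory is available; if you only have the Redisson client, the same idea works with whatever ping/health call your client offers.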
A bit late here, but I also face the same sort of problems with Hibernate L2 cache management/eviction with any underlying JCache provider (Caffeine, Ehcache, Hazelcast, Infinispan, Redisson...).
We have a very small database with 99.999% reads and maybe one write once or twice a week.
We cache everything (entities, queries, collections).
I tried many ways to make the Hibernate L2 cache auto-evict entries:
size- or time-based eviction configured in the underlying cache provider
a custom scheduled task that tries to evict the JPA cache, plus Hibernate interceptors to surgically evict entities at scheduled times
Debezium, to evict the cache when updates are detected at the database level
I still run into a "failed to load entity with id xxx" error from time to time.
I tried many configurations and many Java annotations (at the JPA or Hibernate level), and I am still struggling to find a working way of expiring the cache and making JPA/Hibernate fall back to querying the database directly.
The only way I was able to (barely) fix this was by disabling the Hibernate query cache and creating a Spring cache for it instead (for every JpaRepository method, with an aspect and a custom implementation)...
This, combined with Debezium to evict only the updated entities (changed from inside or outside the application), was the only working scenario.
=> My cached queries and entities are retrieved quickly when they are in the cache (Spring and Hibernate), and the surgical eviction of entities in the Debezium callback method makes any database modification immediately reflected in the application (all servers listen to Debezium, get notified, and expire their local cache).
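For reference, the surgical eviction described above can be done through the standard JPA Cache API. The sketch below leaves out the Debezium wiring; onRowChanged is a hypothetical hook invoked with the changed entity's class and primary key.

import javax.persistence.Cache;
import javax.persistence.EntityManagerFactory;

// Hedged sketch of a second-level cache evictor driven by change events
// (e.g. from a Debezium callback). The wiring that calls onRowChanged() is omitted.
public class SecondLevelCacheEvictor {

    private final EntityManagerFactory entityManagerFactory;

    public SecondLevelCacheEvictor(EntityManagerFactory entityManagerFactory) {
        this.entityManagerFactory = entityManagerFactory;
    }

    // Evict a single entity so the next read falls through to the database.
    public void onRowChanged(Class<?> entityClass, Object id) {
        Cache cache = entityManagerFactory.getCache();
        cache.evict(entityClass, id);
    }

    // Fallback when a change cannot be mapped to a specific entity/id: drop the whole shared cache.
    public void onUnmappedChange() {
        entityManagerFactory.getCache().evictAll();
    }
}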
This is frustrating, as this is certainly a really common scenario in production environments...
I'm trying to understand the use of a Java XA DataSource.
But I still can't figure out when to use it and when not to.
I read that an XA DataSource is used when we use two databases.
But I'm not sure what "two databases" means here.
For example:
I have two layers of classes (Service and DAO).
A method in the service layer, annotated as transactional, invokes two methods in the DAO.
Each DAO method opens a new connection to the database and closes it at the end of the method.
If I use one database instance, and each DAO method writes to a different table, do I have to use an XA DataSource, given that the transaction occurs in the service layer but involves only one database instance?
Systems such as databases, but also, for example, queueing systems that you use through JMS, have the concept of transactions. A transaction is treated as a unit of work; at the end of doing the work, such as inserting, updating, or deleting records in the database, you either commit the transaction, and the database definitively applies the work, or you roll back the transaction, and everything done in the transaction is cancelled.
In some cases, your software has to perform operations over multiple different systems. For example, you might need to insert data into multiple databases, or insert something in the database and put a message on a queue.
If you want to do such combinations of operations as if they are in one transaction, then you need a distributed transaction system - a system that can combine the transactions of the different systems into one. That way you can write your code as if it's running inside a single transaction; the distributed transaction system automatically commits or rolls back the transactions in the underlying systems.
To make it more concrete: suppose that you insert a record in a database and put a message on a queue, and you want to do this inside one transaction. When something goes wrong with putting the message on the queue, you also want the database transaction to be rolled back, so that you don't end up with a record in the database without a corresponding message on the queue. Instead of manually keeping track of the transactions of the database and the queue system (including handling all combinations of possible errors), you can use a distributed transaction system.
XA is a standard for working with distributed transactions. You can work with XA transactions in Java through the Java Transaction API (JTA). Java EE servers have support for this built-in. If you're not using a Java EE server, then you can use a separate library that implements JTA such as Narayana or Atomikos.
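For illustration, a rough sketch of the database-plus-queue case using bean-managed JTA in a Java EE / Jakarta EE server. The JNDI names ("jdbc/ordersXA", "jms/ordersXAFactory"), the orders table and the queue name are placeholders; both resources are assumed to be configured as XA-capable in the server.

import java.sql.Connection;
import java.sql.PreparedStatement;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.sql.DataSource;
import javax.transaction.Status;
import javax.transaction.UserTransaction;

// Hedged sketch: one JTA transaction spanning a JDBC insert and a JMS send.
// The transaction manager coordinates both resources; if either part fails,
// both are rolled back together.
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class OrderService {

    @Resource(lookup = "jdbc/ordersXA")
    private DataSource dataSource;                 // XA-capable datasource

    @Resource(lookup = "jms/ordersXAFactory")
    private ConnectionFactory connectionFactory;   // XA-capable JMS connection factory

    @Resource
    private UserTransaction userTransaction;

    public void placeOrder(String orderId) throws Exception {
        userTransaction.begin();
        try {
            try (Connection connection = dataSource.getConnection();
                 JMSContext jms = connectionFactory.createContext();
                 PreparedStatement ps =
                         connection.prepareStatement("insert into orders (id) values (?)")) {
                ps.setString(1, orderId);
                ps.executeUpdate();
                jms.createProducer().send(jms.createQueue("orders"), orderId);
            }
            // Both the insert and the send commit together (two-phase commit under the hood).
            userTransaction.commit();
        } catch (Exception e) {
            if (userTransaction.getStatus() == Status.STATUS_ACTIVE
                    || userTransaction.getStatus() == Status.STATUS_MARKED_ROLLBACK) {
                userTransaction.rollback();
            }
            throw e;
        }
    }
}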
Each DAO method opens a new connection to the database and closes it at the end of the method.
Normally this isn't how you should write DAOs. Opening a database connection is a relatively slow operation; if you open a new database connection for every method that you call in a DAO, your program is most likely going to run slowly. You should at least use a connection pool, which manages a number of database connections and allows you to reuse already-open connections.
If I use one database instance, and each DAO method writes to a different table, do I have to use an XA DataSource, given that the transaction occurs in the service layer but involves only one database instance?
If your DAO methods each open their own connection, then they will run in separate transactions. Whether this is a problem or not depends on what your application needs to do. An XA datasource is not the solution for making them run in one transaction; instead, you should let them both use the same connection and transaction.
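Since your service method is already annotated as transactional, the usual fix is to let the framework hand the DAOs the transaction-bound connection instead of opening their own. A minimal Spring-flavoured sketch (table, column and class names are made up):

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hedged sketch: both DAO methods use the connection bound to the service-level
// transaction (via JdbcTemplate) instead of opening and closing their own.
@Repository
class UserDao {

    private final JdbcTemplate jdbc;

    UserDao(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource);
    }

    void insertUser(String name) {
        jdbc.update("insert into users (name) values (?)", name);
    }

    void insertAuditEntry(String message) {
        jdbc.update("insert into audit_log (message) values (?)", message);
    }
}

@Service
class UserService {

    private final UserDao userDao;

    UserService(UserDao userDao) {
        this.userDao = userDao;
    }

    // One local (non-XA) transaction around both writes: if the second insert fails,
    // the first is rolled back as well.
    @Transactional
    public void register(String name) {
        userDao.insertUser(name);
        userDao.insertAuditEntry("registered " + name);
    }
}

This assumes a DataSourceTransactionManager (or JPA equivalent) is configured for the same DataSource; the key point is that both DAO calls run on the connection bound to the ongoing transaction, so no XA is needed.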
XA transactions are really only useful if you are using multiple database systems or other systems, and you want to be able to perform transactions that span across these systems.
My project connects to a database using Hibernate, getting connections from a connection pool on JBoss. I want to replace some of the reads/writes to tables with publish/consume on queues. I built a working example that uses OracleAQ; however, I am connecting to the DB using:
AQjmsFactory.getQueueConnectionFactory followed by createQueueConnection,
then using createQueueSession to get a (JMS) QueueSession on which I can call createProducer and createConsumer.
So I know how to do what I want using a jms.QueueSession. But with Hibernate I get a hibernate.Session, which doesn't have those methods.
I don't want to open a new connection every time I perform an action on a queue - which is what I am doing now in my working example. Is there a way to perform queue operations from a hibernate.Session? Only with SQL queries?
I think you're confusing a JMS (message queue) session with a Hibernate (database) session. The Hibernate framework doesn't have any overlap with JMS, so it can't be used to do both things.
You'll need 2 different sessions for this to work:
A Hibernate Session (org.hibernate.Session) for DB work
A JMS Session (javax.jms.Session) to do JMS/queue work
Depending on your use case, you may also want an XA transaction manager to do a proper two-phase commit across both sessions and maintain transactional integrity.
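A rough sketch of the two-sessions approach follows. How the QueueConnectionFactory is obtained (e.g. via AQjmsFactory.getQueueConnectionFactory) is left out, the entity, queue name and payload are placeholders, and note that without an XA transaction manager these are two separate transactions.

import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.hibernate.SessionFactory;

// Hedged sketch: database work through an org.hibernate.Session,
// queue work through a javax.jms.QueueSession. Error handling is minimal.
public class OrderWriter {

    private final SessionFactory sessionFactory;
    private final QueueConnectionFactory queueConnectionFactory;

    public OrderWriter(SessionFactory sessionFactory, QueueConnectionFactory queueConnectionFactory) {
        this.sessionFactory = sessionFactory;
        this.queueConnectionFactory = queueConnectionFactory;
    }

    public void saveAndPublish(Object orderEntity, String queueName, String payload) throws Exception {
        // 1. Database work: Hibernate Session and transaction.
        org.hibernate.Session dbSession = sessionFactory.openSession();
        try {
            dbSession.beginTransaction();
            dbSession.persist(orderEntity);
            dbSession.getTransaction().commit();
        } finally {
            dbSession.close();
        }

        // 2. Queue work: a separate JMS session (and a separate transaction).
        QueueConnection connection = queueConnectionFactory.createQueueConnection();
        try {
            QueueSession jmsSession = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = jmsSession.createSender(jmsSession.createQueue(queueName));
            TextMessage message = jmsSession.createTextMessage(payload);
            sender.send(message);
        } finally {
            connection.close();
        }
    }
}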
I was also looking for some "sane" way to use a JMS connection to manipulate database data. There isn't one. Dean is right: you have to use two different connections to the same data and have a distributed XA transaction between them.
This solution opens up a world of problems you have never seen before. In real life, distributed transactions can be genuinely non-trivial. Surprisingly, in some situations Oracle can detect that two connections point to the same database, and the two-phase commit can then be bypassed - even when using XA.
Hi,
If two resources are involved in a transaction, then the XA transaction setting should be enabled in the WebLogic server, and the XA drivers have to be chosen. Is there an alternative way to have these two resources in a transaction without enabling XA transactions?
Yes, you can use global transaction emulation. WebLogic has two modes:
Logging Last Resource (LLR) - WebLogic creates a table in each of your datasources and writes transaction data into this table. This is the preferred option.
From the official documentation:
With this option, the transaction branch in which the connection is used is processed as the last resource in the transaction and is processed as a local transaction. Commit records for two-phase commit (2PC) transactions are inserted in a table on the resource itself, and the result determines the success or failure of the prepare phase of the global transaction. This option offers some performance benefits and greater data safety than Emulate Two-Phase Commit, but it has some limitations.
see http://docs.oracle.com/cd/E15051_01/wls/docs103/jta/llr.html
Emulate Two-Phase Commit - the transaction branch always returns "SUCCESS" during the prepare phase. Select this option if your app can tolerate heuristic conditions.
see http://docs.oracle.com/cd/E23943_01/web.1111/e13737/transactions.htm for more information.
I prefer the LLR option, but if you work with a legacy DB and do not have the grant to create the table, you should use two-phase commit emulation.
I want to know how transactions are internally implemented in EJB, i.e. the logic used to create a transaction. If you could point out some articles, that would be helpful.
Hibernate doesn't implement transactions; it relies on and wraps JDBC transactions or JTA transactions (either container-managed or application-managed).
Regarding EJBs, if you want to understand the details of a JTA transaction manager, you'll need to be fluent with the JTA interfaces UserTransaction, TransactionManager, and XAResource, which are described in the JTA specification. The JDBC API Tutorial and Reference, Third Edition will also be useful for understanding the XA part of a JDBC driver.
Then, get the sources of an EJB container (like JBoss) or of a standalone JTA Transaction Manager (like Atomikos) to analyze the TM part. And good luck.
This question could have answers at many levels.
A general discussion of what's going on can be found here
My summary goes like this... First, somewhere there must be a transaction coordinator; the EJB container knows about the coordinator - typically it is part of the application server. So all the EJB container has to do is call
someObject.beginTransaction()
and that's it. The actual API the EJB container uses is JTA. EJBs can use either bean-managed transactions (BMT) or container-managed transactions (CMT). In the bean-managed case the implementer has to make the JTA calls. More usually we use container-managed transactions, in which case the container has logic that runs before the implementation is reached. For example:
if ( we're not already in a transaction )
    begin transaction
call the EJB implementation
and later the container has logic
if ( finished processing request )
    commit transaction
with other paths to abort the transaction if errors have happened.
Now that logic is more complex, because CMT EJBs are annotated with transaction control attributes. For example, you can say things like "if we already have a transaction, use it", so if one EJB calls another, only a single transaction is used. Read the EJB spec for the details.
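In code, those transaction control statements are the CMT transaction attributes. A minimal sketch (the bean and the Account entity referenced in the JPQL are invented for illustration):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hedged sketch of container-managed transaction (CMT) attributes.
// The container begins, commits or rolls back the JTA transaction around this method.
@Stateless
public class AccountService {

    @PersistenceContext
    private EntityManager entityManager;

    // REQUIRED (the default): join the caller's transaction if there is one,
    // otherwise start a new one, i.e. "if we already have a transaction, use it".
    // Other attributes include REQUIRES_NEW, MANDATORY, SUPPORTS, NOT_SUPPORTED and NEVER.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void credit(long accountId, long amount) {
        entityManager.createQuery(
                "update Account a set a.balance = a.balance + :amount where a.id = :id")
                .setParameter("amount", amount)
                .setParameter("id", accountId)
                .executeUpdate();
    }
}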
However, all of that is pretty obvious in any write-up of Java EE EJBs. So I suspect that you're asking more about what happens inside the JTA calls, how the transaction manager is implemented, and its relationship to the transactional resource managers (e.g. databases). That's a huge topic; you've actually got implementations of the XA distributed transaction protocol down there. Frankly, I doubt that you really need to know this; at some point you have to trust the APIs you're using. However, there is one key detail: your transaction manager (typically the app server itself) must be able to tell the resource managers the fate of any given transaction, and that information must survive a restart of the app server, hence some persistent store of transaction information must be kept. You will find transaction logs somewhere, and in setting up the app server you need to make sure those logs are well looked after.
From the EJB in Action book:
The protocol commonly used to commit across multiple resources is the two-phase commit. The two-phase commit protocol performs an additional preparatory step before the final commit. Each resource manager involved is asked whether the current transaction can be committed successfully. If any of the resource managers indicates that the transaction cannot be committed, the entire transaction is abandoned (rolled back). Otherwise, the transaction is allowed to proceed and all resource managers are asked to commit.
A resource manager can be a database, for instance. Other examples include a message service. The component that coordinates transactions is called the transaction manager.
Suppose you have an application that involves two distinct databases. How does the transaction manager perform its work using the two-phase commit protocol?
1. The transaction manager asks database 1 if it can commit the current transaction.
2. If so, it asks database 2 if it can commit the current transaction.
3. The transaction manager asks database 1 to commit.
4. The transaction manager asks database 2 to commit.
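In XA terms, the transaction manager drives each database's XAResource roughly as sketched below. This is heavily simplified: a real transaction manager also writes a recovery log between the phases and handles read-only votes, heuristic outcomes and recovery after crashes.

import java.util.ArrayList;
import java.util.List;

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Hedged, heavily simplified sketch of the two-phase commit driving loop.
public class TwoPhaseCommitSketch {

    public void commit(Xid xid, XAResource database1, XAResource database2) throws XAException {
        XAResource[] participants = {database1, database2};
        List<XAResource> prepared = new ArrayList<>();

        // Phase 1: ask every participant whether it can commit.
        for (XAResource participant : participants) {
            participant.end(xid, XAResource.TMSUCCESS);
            try {
                if (participant.prepare(xid) == XAResource.XA_OK) {
                    prepared.add(participant);   // XA_RDONLY voters have nothing left to commit
                }
            } catch (XAException cannotCommit) {
                // One "no" vote aborts the whole transaction.
                for (XAResource toRollBack : prepared) {
                    toRollBack.rollback(xid);
                }
                throw cannotCommit;
            }
        }

        // Phase 2: everyone voted yes, so tell each participant to commit.
        for (XAResource participant : prepared) {
            participant.commit(xid, false);   // false = not the one-phase optimisation
        }
    }
}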
Hibernate is built on top of the JDBC API, and it coordinates just one database. So if you call
session.getTransaction().commit();
behind the scenes, it calls
connection.commit();
If you really want to study transaction internals, my advice is the Java Transaction Processing book.
Hibernate has TransactionFactory:
An abstract factory for Transaction instances. Concrete implementations are specified by hibernate.transaction.factory_class.
It has implementations: JDBCTransactionFactory, JTATransactionFactory, CMTTransactionFactory. These factories create an instance of Transaction - for example JDBCTransaction.
I can't tell you what happens for JTA and CMT, but for JDBC it's as simple as setting auto-commit to false (when you begin a transaction):
connection.setAutoCommit(false);
And correspondingly, on transaction.commit() it calls connection.commit().
If any exception occurs when operating with the session, it invokes connection.rollback().
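In plain JDBC, that pattern boils down to something like the sketch below (connection acquisition and pooling are up to the caller; the helper name is made up):

import java.sql.Connection;
import java.sql.SQLException;

// Hedged sketch of the plain-JDBC transaction pattern described above:
// "begin" = setAutoCommit(false), then either commit() or rollback().
public final class JdbcTransactionSketch {

    private JdbcTransactionSketch() {
    }

    public static void inTransaction(Connection connection, SqlWork work) throws SQLException {
        connection.setAutoCommit(false);   // begin transaction
        try {
            work.run(connection);
            connection.commit();           // transaction.commit()
        } catch (SQLException | RuntimeException e) {
            connection.rollback();         // on any failure, undo the work
            throw e;
        } finally {
            connection.setAutoCommit(true);
        }
    }

    @FunctionalInterface
    public interface SqlWork {
        void run(Connection connection) throws SQLException;
    }
}

Usage would be along the lines of inTransaction(connection, c -> { /* several statements */ });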
Another good read would be the JTS articles by Brian Goetz; links:
http://www.ibm.com/developerworks/java/library/j-jtp0305.html
http://www.ibm.com/developerworks/java/library/j-jtp0410/index.html
http://www.ibm.com/developerworks/java/library/j-jtp0514.html