I am using redisson-hibernate as the Hibernate 2nd-level cache provider. If the Redis server becomes unavailable while the application is running, the whole application goes down. I am trying to find a way to circuit-break the L2 cache in such a scenario.
In case Redis is unavailable, Hibernate should work as if L2 caching were disabled, and there should be some mechanism (or subsequent requests) that checks for Redis availability after a specified amount of time and re-enables L2. Is there already a way to do this?
If not, how can I build such a mechanism? I could try running a Quartz job that checks for Redis connectivity, then kills and restarts the application after modifying the app config. But this approach is complicated and not very clean. Please suggest.
I tried the scenario you mention in a Spring Boot application, and all the HTTP requests started to pile up when I shut down Redis. But when I restarted Redis, the application was able to reconnect and process requests as usual.
Hibernate and the Redis connection are loaded based on the configuration in the application.properties file:
spring.jpa.properties.hibernate.cache.auto_evict_collection_cache=false
spring.jpa.properties.hibernate.generate_statistics=false
spring.jpa.properties.hibernate.cache.use_second_level_cache=true
spring.jpa.properties.hibernate.cache.use_query_cache=true
spring.jpa.properties.hibernate.javax.cache.missing_cache_strategy=create
spring.jpa.properties.hibernate.cache.redisson.config=redisson.json
spring.jpa.properties.javax.persistence.sharedCache.mode=ENABLE_SELECTIVE
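For reference, the redisson.json referenced above can be as minimal as this (a sketch assuming a single standalone Redis server; the address is illustrative):

{
  "singleServerConfig": {
    "address": "redis://127.0.0.1:6379"
  }
}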
I couldn't find any Hibernate or Spring JPA parameter in the Hibernate or Spring docs that could automatically detect when the L2 cache is down.
I am also eagerly awaiting such a mechanism in Hibernate / Spring JPA, as this scenario is a valid test case that we will surely face in LOAD/PROD environments.
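For anyone wanting to hand-roll something in the meantime, one possible direction (an untested sketch, not an existing Hibernate or Spring feature) is to pair a scheduled Redis probe with Hibernate's per-session CacheMode, so reads bypass the L2 cache while Redis is down. RedisHealthGate and the probe key are illustrative names:

import java.util.concurrent.atomic.AtomicBoolean;

import org.hibernate.CacheMode;
import org.hibernate.Session;
import org.redisson.api.RedissonClient;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class RedisHealthGate {

    private final RedissonClient redisson; // assumes access to the same client Redisson uses
    private final AtomicBoolean redisUp = new AtomicBoolean(true);

    public RedisHealthGate(RedissonClient redisson) {
        this.redisson = redisson;
    }

    @Scheduled(fixedDelay = 5000) // re-check every 5 seconds
    public void checkRedis() {
        try {
            // Any cheap round-trip works as a probe; it throws when Redis is down.
            redisson.getBucket("l2-health-probe").isExists();
            redisUp.set(true);
        } catch (Exception e) {
            redisUp.set(false);
        }
    }

    // Call when opening a session so reads skip the L2 cache while Redis is down.
    public void apply(Session session) {
        session.setCacheMode(redisUp.get() ? CacheMode.NORMAL : CacheMode.IGNORE);
    }
}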
A bit late here, but I also face the same sort of problems with Hibernate and L2 cache management/eviction with any underlying JCache implementation (Caffeine, Ehcache, Hazelcast, Infinispan, Redisson...).
We have a very small database with 99.999% reads and maybe one or two writes a week.
We cache everything (entities, queries, collections).
I tried many ways to make the Hibernate L2 cache auto-evict entries:
size- or time-based eviction in the underlying cache configuration
a custom scheduled task that tries to evict the JPA cache, plus Hibernate interceptors to surgically evict entities at scheduled intervals
Debezium, to evict the cache when a database-level update is detected...
From time to time I still hit a "failed to load entity with id xxx" error...
I tried many configurations and many Java annotations (at the JPA or Hibernate level), and I'm still struggling to find a working way of expiring the cache and making JPA/Hibernate fall back to querying the database directly...
The only way I was able to (barely) fix this was by disabling the Hibernate query cache and creating a Spring cache for it instead (for every JpaRepository method, with an aspect and a custom implementation _(--)'/ )...
This, combined with Debezium to evict only the updated entities (changed from inside or outside the app), was the only working scenario.
=> My cached queries and entities are retrieved quickly when in cache (Spring and Hibernate), and the surgical eviction of entities in the Debezium callback method makes any database modification immediately reflected in the application (all servers listen to Debezium, get notified, and expire their local caches).
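The eviction side of that Debezium callback looks roughly like this (a sketch; the table-to-entity mapping and the class/method names are mine, but the eviction call is the standard JPA Cache API):

import java.util.Map;

import javax.persistence.EntityManagerFactory;

public class L2CacheEvictor {

    private final EntityManagerFactory emf;
    private final Map<String, Class<?>> tableToEntity;

    public L2CacheEvictor(EntityManagerFactory emf, Map<String, Class<?>> tableToEntity) {
        this.emf = emf;
        this.tableToEntity = tableToEntity;
    }

    // Invoked from the Debezium notification callback on every server, with the
    // table name and primary key extracted from the change event.
    public void onRowChanged(String table, Object primaryKey) {
        Class<?> entity = tableToEntity.get(table);
        if (entity != null) {
            emf.getCache().evict(entity, primaryKey); // JPA Cache#evict(Class, Object)
        }
    }
}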
This is frustrating, as this is certainly a really common scenario in prod environments...
Specification
Each tenant has their own database which handles users in greater detail, and there needs to exist a central database which handles:
Tokens (OAuth2)
Users (limited level of detail)
Mapping users to their database
Problem
I've found solutions for multi-tenancy that allow me to determine the datasource depending on the user. However, I'm not sure how I can also link certain CRUD repositories to this central datasource, and others to the variable datasources.
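For context, the routing part of the solutions I found looks roughly like this (a sketch using Spring's AbstractRoutingDataSource; TenantContext is a hypothetical ThreadLocal holder, not a Spring class):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    // Illustrative per-request tenant holder (set by a filter/interceptor).
    public static final class TenantContext {
        private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
        public static void set(String tenantId) { CURRENT.set(tenantId); }
        public static String get() { return CURRENT.get(); }
        public static void clear() { CURRENT.remove(); }
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // Returning null falls back to the default DataSource (the central DB).
        return TenantContext.get();
    }
}

The part I'm missing is how to pin specific repositories to the central DataSource instead of routing them.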
Another solution involved updating the properties file and using a configuration server (i.e. via git) to trigger @RefreshScope-annotated configs. Though I'm not sure if this can work for DataSources, or if it could cause problems later on.
Extra Context
I'm using Spring Boot and Hibernate heavily in this project.
This blog gives a very good tutorial on how to do it.
After a lot of research it looks like Hibernate just isn't built for doing that, but by writing the schema manually I can inject it into new tenant databases using native queries.
I also had a problem with MS SQL Server DBs, as they don't allow simply appending ;createDatabaseIfNotExist to the JDBC URL, which meant even more native queries. (I'm moving the project over to MySQL anyway, so this is no longer a problem.)
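Roughly what the native-query provisioning ends up looking like (a sketch; MySQL syntax, names illustrative, and tenantId must be validated before being spliced into DDL):

import javax.persistence.EntityManager;

public class TenantProvisioner {

    // Creates the tenant database by hand, since Hibernate won't generate it.
    public void createTenantDatabase(EntityManager em, String tenantId) {
        em.getTransaction().begin();
        em.createNativeQuery("CREATE DATABASE IF NOT EXISTS tenant_" + tenantId)
          .executeUpdate();
        em.getTransaction().commit();
    }
}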
I would like to know if I can start an Ignite cache from a Java client. I am using Cassandra as the persistence store and use POJO configurations to work with the cache and Cassandra. Is this possible without providing any named cache configuration on the server side?
Please share your thoughts.
The cache itself can be dynamically started using the Ignite#createCache method. However, classes that are required for this cache need to be deployed explicitly in advance, before the servers are started.
In your case you will have to deploy POJO classes because they are currently required by Cassandra store. You will be able to skip this step though after this ticket is implemented: https://issues.apache.org/jira/browse/IGNITE-5270
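For illustration, starting a cache dynamically from a client node can look like this (a sketch; the Cassandra store wiring is omitted and the POJO/cache names are made up; note the POJO classes must already be on the server classpath):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class DynamicCacheStarter {

    public static void main(String[] args) {
        Ignition.setClientMode(true);
        try (Ignite ignite = Ignition.start("client-config.xml")) {
            CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");
            IgniteCache<Long, Person> cache = ignite.createCache(cfg); // Ignite#createCache
            cache.put(1L, new Person(1L, "Alice"));
        }
    }

    // Illustrative POJO.
    public static class Person {
        final Long id;
        final String name;
        Person(Long id, String name) { this.id = id; this.name = name; }
    }
}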
My project connects to a database using Hibernate, getting connections from a connection pool on JBoss. I want to replace some of the reads/writes to tables with publish/consume operations on queues. I built a working example that uses Oracle AQ; however, I am connecting to the DB using:
AQjmsFactory.getQueueConnectionFactory followed by createQueueConnection,
then using createQueueSession to get a (JMS) QueueSession on which I can call createProducer and createConsumer.
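In code, that sequence looks roughly like this (a sketch; the schema and queue names are illustrative):

import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.sql.DataSource;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;

public class AqProducerExample {

    // Opens a producer on an AQ queue; the caller closes session/connection.
    public MessageProducer openProducer(DataSource dataSource) throws Exception {
        QueueConnectionFactory qcf = AQjmsFactory.getQueueConnectionFactory(dataSource);
        QueueConnection conn = qcf.createQueueConnection();
        QueueSession session = conn.createQueueSession(true, Session.SESSION_TRANSACTED);
        Queue queue = ((AQjmsSession) session).getQueue("MY_SCHEMA", "MY_QUEUE");
        return session.createProducer(queue);
    }
}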
So I know how to do what I want using a jms.QueueSession. But with Hibernate I get a hibernate.Session, which doesn't have those methods.
I don't want to open a new connection every time I perform an action on a queue - which is what I am doing now in my working example. Is there a way to perform queue operations from a hibernate.session? Only with SQL queries?
I think you're confusing a JMS (message queue) session with a Hibernate (database) session. The Hibernate framework doesn't have any overlap with JMS, so it can't be used to do both things.
You'll need 2 different sessions for this to work:
A Hibernate Session (org.hibernate.Session) for DB work
A JMS Session (javax.jms.Session) to do JMS/queue work
Depending on your use case, you may also want an XA transaction manager to do a proper two-phase commit across both sessions and maintain transactional integrity.
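In practice that means something like this (a sketch; the factories are assumed to be configured elsewhere, e.g. via JNDI on JBoss):

// The two separate sessions side by side.
void doDbAndQueueWork(org.hibernate.SessionFactory sessionFactory,
                      javax.jms.Connection jmsConnection) throws javax.jms.JMSException {
    org.hibernate.Session dbSession = sessionFactory.openSession();      // DB work
    javax.jms.Session jmsSession = jmsConnection.createSession(
            true, javax.jms.Session.SESSION_TRANSACTED);                 // queue work
    // ... do the work, then commit/close both, ideally under an XA coordinator
    jmsSession.close();
    dbSession.close();
}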
I was also looking for some "sane" way to use a JMS connection to manipulate database data. There isn't any. Dean is right: you have to use two different connections to the same data and run a distributed XA transaction across them.
This solution opens up a world of problems you may never have seen before; in real life, distributed transactions can be genuinely non-trivial. Surprisingly, in some situations Oracle can detect that two connections point into the same database, and the two-phase commit can then be bypassed, even when using XA.
We've got a Spring based web application that makes use of Hibernate to load/store its entities to the underlying database.
Since it's a backend application, we want to allow not only our UI but also 3rd-party tools to manually initiate DB transactions. That's why the callers need to:
Call a StartTransaction method and in return get an ID that they can refer to
Do all DB relevant calls (e. g. creating, modifying, deleting) by referring to this ID to make clear which operations belong to the started transaction
Call the CommitTransaction method to signal to our backend that the transaction can be committed now (or in the negative case RollbackTransaction will be called)
So, keeping in mind that all database handling will be done internally via the Java persistence annotations, how can we open up transaction management to our UI, which behaves like a 3rd-party application that has no direct access to the backend entities and deals with data transfer objects only?
From the Spring Reference: Programmatic transaction management
I think this can be done but would be a royal pain to implement/verify. You would basically require a transaction manager which is not bound by the per-thread-transaction definition but spans multiple invocations from the same client.
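Very roughly, that could look like the following registry of client-visible transactions (illustrative only; it assumes RESOURCE_LOCAL persistence, and holding transactions open across remote calls has serious locking and timeout implications):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public class TransactionRegistry {

    private final EntityManagerFactory emf;
    private final Map<String, EntityManager> open = new ConcurrentHashMap<>();

    public TransactionRegistry(EntityManagerFactory emf) {
        this.emf = emf;
    }

    // StartTransaction: returns the id the caller refers to afterwards.
    public String begin() {
        String id = UUID.randomUUID().toString();
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        open.put(id, em);
        return id;
    }

    // Used by the DB-relevant calls to work inside the right transaction.
    public EntityManager get(String id) {
        return open.get(id);
    }

    public void commit(String id) {
        EntityManager em = open.remove(id);
        em.getTransaction().commit();
        em.close();
    }

    public void rollback(String id) {
        EntityManager em = open.remove(id);
        em.getTransaction().rollback();
        em.close();
    }
}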
JTA + Stateful session beans might be something you would want to have a look at.
Why don't you build services around your back-end application, for example a SOAP interface or a REST interface?
With this strategy you can manage your transactions in the backend.
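For example, a coarse-grained service method keeps the whole unit of work server-side (a sketch; UserDto, UserEntity, and UserRepository are illustrative names):

import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserBatchService {

    private final UserRepository repository;

    public UserBatchService(UserRepository repository) {
        this.repository = repository;
    }

    @Transactional // the whole batch commits or rolls back inside the backend
    public void createAll(List<UserDto> users) {
        for (UserDto dto : users) {
            repository.save(new UserEntity(dto.getName())); // illustrative mapping
        }
    }
}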
I have to create a mass-insertion feature for our user administration tool. We built a small in-house library using Spring LDAP, and everything works fine for single-user management (CRUD).
I would like to insert hundreds of records at a time and roll back if something goes wrong.
Is there a way to create transactions in LDAP like they exist in databases?
Thanks for your ideas.
This is a follow-up to @adrianboimvaser's answer.
Just a note that the Spring LDAP transaction support does not use XA transactions but "logical" compensating transactions, so a rollback is a compensating action performed against LDAP. While this is an improvement over no transactions, be aware that it is not the same as a typical transaction "like it exists in databases", i.e. the ACID properties of transactions are not supported.
From the Spring LDAP reference: "Note that even though the same logical transaction is used, this is not a JTA XA transaction; no two-phase commit will be performed, and thus commit and rollback may yield unexpected results."
For example: if you are adding 100 entries to LDAP, each record is added one by one. If the last add fails, the rollback action is to remove the 99 previously created entries within the transaction. However, if for some reason the first 99 entries cannot actually be removed (e.g. network connectivity to LDAP is down, which is what caused the failure of the 100th entry), then even though you have attempted to roll back the transaction, you end up with an inconsistency between the database and LDAP: there will be 99 records in LDAP (because they could not be deleted) which do not exist in the database (because those records were never inserted or were actually rolled back).
I'm not sure what your situation is but if you have frequent large updates to LDAP you may want to consider using an actual database to avoid the transaction headaches as well as to optimize performance since LDAP is designed for fast reads with relatively slower writes.
Have a look at the documentation: http://static.springsource.org/spring-ldap/docs/1.2.0-rc1/reference/#transactions
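The wiring those docs describe looks roughly like this in Java config (a sketch; URLs, credentials, and bean names are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.LdapContextSource;
import org.springframework.ldap.transaction.compensating.manager.ContextSourceTransactionManager;
import org.springframework.ldap.transaction.compensating.manager.TransactionAwareContextSourceProxy;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class LdapTxConfig {

    @Bean
    public LdapContextSource contextSource() {
        LdapContextSource cs = new LdapContextSource();
        cs.setUrl("ldap://localhost:389");
        cs.setUserDn("cn=admin,dc=example,dc=com");
        cs.setPassword("secret");
        return cs;
    }

    @Bean
    public LdapTemplate ldapTemplate(LdapContextSource contextSource) {
        // The proxy makes LDAP operations participate in the ongoing transaction.
        return new LdapTemplate(new TransactionAwareContextSourceProxy(contextSource));
    }

    @Bean
    public ContextSourceTransactionManager ldapTransactionManager(LdapContextSource contextSource) {
        ContextSourceTransactionManager tm = new ContextSourceTransactionManager();
        tm.setContextSource(contextSource);
        return tm;
    }
}

With this in place, a @Transactional method performing many ldapTemplate.bind(...) calls gets compensating rollback, with the caveats described above.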