I’m using the Spring Boot spring-data-redis 1.8.9.RELEASE RedisCacheManager implementation of CacheManager for caching. One metric I want visibility into is the cache hit/miss ratio. To get it, I’m extracting the keyspace_hits and keyspace_misses exposed by the Redis server, which can also be viewed via redis-cli with INFO STATS. The problem is that RedisCacheManager never registers cache misses, i.e. keyspace_misses never increments even when there is a cache "miss".
Debugging the code, I see that spring-data-redis actually checks whether the key EXISTS in Redis before retrieving it. I see the sense in this approach; however, when EXISTS is executed against the Redis server, it does not register a cache miss.
Is there any way to use RedisCacheManager and register cache misses? I know I can accomplish this with other Redis constructs, but I was wondering whether it can be done with the standard CacheManager implementation.
Edit
The ideal solution won't add a great deal of overhead, and I am unable to change the configuration of the Redis server.
This is the code RedisCacheManager runs when retrieving an element from the cache. Notice the Boolean exists check:
public RedisCacheElement get(final RedisCacheKey cacheKey) {
    Assert.notNull(cacheKey, "CacheKey must not be null!");
    Boolean exists = (Boolean) this.redisOperations.execute(new RedisCallback<Boolean>() {
        public Boolean doInRedis(RedisConnection connection) throws DataAccessException {
            return connection.exists(cacheKey.getKeyBytes());
        }
    });
    return !exists ? null : new RedisCacheElement(cacheKey, this.fromStoreValue(this.lookup(cacheKey)));
}
On a cache miss, the above code executes commands against Redis that are viewable via MONITOR; notice again that EXISTS is executed, as per the code. After those commands are executed, keyspace_misses is not incremented even though there was a cache miss.
The code mentioned in the question is part of the RedisCache provided by Spring.
1. Extend RedisCache and override the behavior of the get method to suit your need.
2. Extend RedisCacheManager and override createRedisCache to use the custom RedisCache you created in the first step instead of the default one (a sketch of both steps is shown below).
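A rough sketch of the two steps, assuming the Spring Data Redis 1.8.x signatures shown in the question (in 1.8.x the factory method is createCache; verify the exact method names and visibility against your version, and put each class in its own file):
import org.springframework.data.redis.cache.RedisCache;
import org.springframework.data.redis.cache.RedisCacheElement;
import org.springframework.data.redis.cache.RedisCacheKey;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.core.RedisOperations;
import org.springframework.util.Assert;

public class MissRecordingRedisCache extends RedisCache {

    public MissRecordingRedisCache(String name, byte[] prefix,
            RedisOperations<? extends Object, ? extends Object> redisOperations, long expiration) {
        super(name, prefix, redisOperations, expiration);
    }

    @Override
    public RedisCacheElement get(RedisCacheKey cacheKey) {
        Assert.notNull(cacheKey, "CacheKey must not be null!");
        // Skip the EXISTS check and read the value directly; a GET that
        // finds nothing is counted by Redis in keyspace_misses.
        Object value = lookup(cacheKey);
        return value == null ? null : new RedisCacheElement(cacheKey, fromStoreValue(value));
    }
}

public class MissRecordingRedisCacheManager extends RedisCacheManager {

    public MissRecordingRedisCacheManager(RedisOperations redisOperations) {
        super(redisOperations);
    }

    @Override
    protected RedisCache createCache(String cacheName) {
        long expiration = computeExpiration(cacheName);
        // Same construction as the default, but with the miss-recording cache.
        return new MissRecordingRedisCache(cacheName,
                isUsePrefix() ? getCachePrefix().prefix(cacheName) : null,
                getRedisOperations(), expiration);
    }
}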
I'm using Spring Boot with Ehcache for caching some data in the application.
The application is a rest service that caches some data that has high usage.
The code in our controllers looks like:
@Cacheable("CategoryModels")
@GetMapping("/category/{companyId}")
public List<CategoryViewModel> getAllCategories(@PathVariable(value = "companyId", required = true) long companyId,
        @RequestHeader("user") String user) {
    //custom code here
}
Now in some situations the users are getting different data sets back from the server. Can someone explain this in the above situation?
If data is changed in the database, I refresh the cache and the program automatically loads the updated data into the cache.
For refreshing the cache I use a custom-written method:
Cache categoryCache = (Cache) manager.getCache("CategoryModels").getNativeCache();
categoryCache.removeAll();
categoryController.getAllCategories(company.getCompanyId(), null);
I have the same behavior on other caches that are used and refreshed in the same way as the cache above.
You should try parameterizing your cache definition with:
@Cacheable(value="CategoryModels", key="{ #root.methodName, #companyId, #user.id }")
It may be a couple of things. First off, the default key generator that Spring provides does not consider anything but the method parameters. The cleanest way to fix this is to write your own key generator that considers both class and method; without this, it is possible to get back data from a completely different method that happens to share the same parameter list.
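A minimal sketch of such a KeyGenerator, registered as a bean (the bean name methodAwareKeyGenerator is illustrative):
import java.lang.reflect.Method;
import org.springframework.cache.interceptor.KeyGenerator;
import org.springframework.cache.interceptor.SimpleKey;
import org.springframework.stereotype.Component;

@Component("methodAwareKeyGenerator")
public class MethodAwareKeyGenerator implements KeyGenerator {

    @Override
    public Object generate(Object target, Method method, Object... params) {
        // Fold the declaring class and method name into the key so two methods
        // with identical parameter lists can never collide in the same cache.
        return new SimpleKey(target.getClass().getName(), method.getName(), params);
    }
}
It would then be referenced as @Cacheable(value = "CategoryModels", keyGenerator = "methodAwareKeyGenerator").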
Summary (details below):
I'd like to make a stored proc call before any entities are saved/updated/deleted using a Spring/JPA stack.
Boring details:
We have an Oracle/JPA(Hibernate)/Spring MVC (with Spring Data repos) application that is set up to use triggers to record the history of some tables into a set of history tables (one history table per table we want audited). Each of these entities has a modifiedByUser field that is set via a class extending EmptyInterceptor on update or insert. When the trigger archives any insert or update, it can easily see who made the change using this column (we're interested in which application user, not database user). The problem is that for deletes, we won't get the last-modified information from the SQL that is executed, because it's just a plain delete from x where y.
To solve this, we'd like to execute a stored procedure to tell the database which app user is logged in before executing any operation. The audit trigger would then look at this value when a delete happens and use it to record who executed the delete.
Is there any way to intercept the beginning of the transaction, or some other way to execute SQL or a stored procedure that tells the database which user is executing the inserts/updates/deletes about to happen in the transaction, before the rest of the operations happen?
I'm light on details about how the database side will work but can get more if necessary. The gist is that the stored proc will create a context that will hold session variables and the trigger will query that context on delete to get the user ID.
From the database end, there is some discussion on this here:
https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvprxy.htm#i1010372
Many applications use session pooling to set up a number of sessions to be reused by multiple application users. Users authenticate themselves to a middle-tier application, which uses a single identity to log in to the database and maintains all the user connections. In this model, application users are users who are authenticated to the middle tier of an application, but who are not known to the database. [...] In these situations, the application typically connects as a single database user and all actions are taken as that user. Because all user sessions are created as the same user, this security model makes it very difficult to achieve data separation for each user. These applications can use the CLIENT_IDENTIFIER attribute to preserve the real application user identity through to the database.
From the Spring/JPA side of things, see section 8.2 at the link below:
http://docs.spring.io/spring-data/jdbc/docs/current/reference/html/orcl.connection.html
There are times when you want to prepare the database connection in
certain ways that aren't easily supported using standard connection
properties. One example would be to set certain session properties in
the SYS_CONTEXT like MODULE or CLIENT_IDENTIFIER. This chapter
explains how to use a ConnectionPreparer to accomplish this. The
example will set the CLIENT_IDENTIFIER.
The example given in the Spring docs uses XML config. If you are using Java config then it looks like:
@Component
@Aspect
public class ClientIdentifierConnectionPreparer implements ConnectionPreparer
{
    @AfterReturning(pointcut = "execution(* *.getConnection(..))", returning = "connection")
    public Connection prepare(Connection connection) throws SQLException
    {
        // Obtain the application user from the Spring Security context or wherever
        String webAppUser = "...";
        CallableStatement cs = connection.prepareCall(
                "{ call DBMS_SESSION.SET_IDENTIFIER(?) }");
        cs.setString(1, webAppUser);
        cs.execute();
        cs.close();
        return connection;
    }
}
Enable AspectJ via a Configuration class:
@Configuration
@EnableAspectJAutoProxy
public class SomeConfigurationClass
{
}
Note that while this is hidden away in a section specific to Spring's Oracle extensions, it seems to me that nothing in section 8.2 (unlike 8.1) is Oracle-specific other than the statement executed, and the general approach should be feasible with any database simply by specifying the relevant procedure call or SQL. Postgres, for example, has SET ROLE, so I don't see why anyone using Postgres couldn't take the same approach with the statement below:
https://www.postgresql.org/docs/8.4/static/sql-set-role.html
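For instance, a hypothetical Postgres flavor of the prepare method above would swap the Oracle call for SET ROLE (the role name is illustrative):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public Connection prepare(Connection connection) throws SQLException
{
    try (Statement st = connection.createStatement())
    {
        // Postgres equivalent of the DBMS_SESSION.SET_IDENTIFIER call
        st.execute("SET ROLE 'webapp_user'");
    }
    return connection;
}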
Unless your stored procedure does more than what you described, the cleaner solution is to use Envers (entity versioning). Hibernate can automatically store the versions of an entity in a separate table and keep track of all the CRUD operations for you, and you don't have to worry about failed transactions since this all happens within the same session.
As for keeping track of who made the change, add a new column (updatedBy) and just get the login ID of the user from the security principal (e.g. the Spring Security User).
Also check out @CreationTimestamp and @UpdateTimestamp.
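If you go that route, here is a rough sketch of capturing the application user on each revision with Envers (UserRevision and UserRevisionListener are illustrative names; each class goes in its own file):
import javax.persistence.Entity;
import org.hibernate.envers.DefaultRevisionEntity;
import org.hibernate.envers.RevisionEntity;
import org.hibernate.envers.RevisionListener;
import org.springframework.security.core.context.SecurityContextHolder;

@Entity
@RevisionEntity(UserRevisionListener.class)
public class UserRevision extends DefaultRevisionEntity {

    private String username; // the application user who made the change

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}

public class UserRevisionListener implements RevisionListener {

    @Override
    public void newRevision(Object revisionEntity) {
        // Stamp the revision row with the logged-in user from Spring Security.
        ((UserRevision) revisionEntity).setUsername(
                SecurityContextHolder.getContext().getAuthentication().getName());
    }
}
Entities to be versioned are then annotated with @Audited.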
I think what you are looking for is a transactional event listener:
@Service
public class TransactionalListenerService {

    @Autowired
    SessionFactory sessionFactory;

    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
    public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
        // use sessionFactory to run a stored procedure
    }
}
Registering a regular event listener is done via the @EventListener annotation. If you need to bind it to the transaction use @TransactionalEventListener. When you do so, the listener will be bound to the commit phase of the transaction by default.
Then, in your transactional services, you publish the event where necessary:
@Service
public class MyTransactionalService {

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @Transactional
    public void insertEntityMethod(Entity entity) {
        // insert
        // Publish event after insert operation
        applicationEventPublisher.publishEvent(new CreationEvent(this, entity));
        // more processing
    }
}
This can also work outside the boundaries of a transaction, if you have that requirement:
If no transaction is running, the listener is not invoked at all since
we can’t honor the required semantics. It is however possible to
override that behaviour by setting the fallbackExecution attribute of
the annotation to true.
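For instance, a sketch reusing the listener from above:
// Runs in the commit phase when a transaction is active,
// and immediately when none is running.
@TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT, fallbackExecution = true)
public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
    // use sessionFactory to run a stored procedure
}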
Because REST services are invoked remotely, they are constantly at risk of running into race conditions with each other. One of the resources they commonly race for is the session. To be practical, you need to be able to put a lock on the resource at the beginning of your process and lift it when you are done with it.
Now my question is: does Spring Session have any feature to deal with race conditions over session entries?
Or does any other Java library/framework?
If you're using Spring controllers, then you can use RequestMappingHandlerAdapter.setSynchronizeOnSession(boolean).
This will make every controller method synchronized in the presence of a session.
HttpSession.setAttribute is thread-safe. However, a getAttribute followed by a setAttribute has to be made thread-safe manually:
synchronized (session) {
    session.getAttribute("foo");
    session.setAttribute("foo", "bar");
}
The same can be done in the case of Spring session beans:
synchronized(session) {
//do something with the session bean
}
Edit
In the case of multiple containers with normal Spring session beans, you would have to use sticky sessions. That ensures that one session's state is stored on one container, and that container is accessed every time the same session is requested. This has to be done on the load balancer, with the help of something like BigIP cookies. The rest works the same way: since for a single session there exists a single container, locking the session would suffice.
If you would like to use session sharing across instances, there is support in containers like Tomcat and Jetty.
These approaches use a back-end database or some other persistence mechanism to store state.
For the same purpose you can try Spring Session, which is trivial to configure with Redis. Since Redis is single-threaded, it ensures that one instance of an entry is accessed atomically.
The above approaches are non-invasive. Both the database and Redis-based approaches support transactions.
However, if you want more control over the distributed state and locking, you can try distributed data grids like Hazelcast and Gemfire.
I have personally worked with Hazelcast, and it does provide methods to lock entries made in a map; a sketch follows.
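For example, per-key locking on a Hazelcast IMap might look like this (map and key names are illustrative):
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, byte[]> sessions = hz.getMap("sessions");

sessions.lock(sessionId);
try {
    byte[] state = sessions.get(sessionId);
    // read-modify-write under the per-key lock
    sessions.put(sessionId, state);
} finally {
    sessions.unlock(sessionId);
}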
Edit 2
Though I believe that handling transactions with Spring Session and Redis should suffice, to be sure you would need distributed locking. The lock object would have to be acquired from Redis itself. Since Redis is single-threaded, a home-grown implementation would also work, using something like INCR.
The algorithm would go something like this:
// lock_num is the semaphore/lock key
while (true) {
    lock_count = INCR lock_num
    if (lock_count == 1) {
        break               // lock acquired
    }
    DECR lock_num           // undo our increment and retry after a pause
    wait(wait_time_period)
}
// do processing in critical section
DECR lock_num               // release the lock
However, thankfully, Spring already provides this distributed lock implementation via RedisLockRegistry; see its documentation for usage details. A brief sketch follows.
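A brief sketch of RedisLockRegistry from Spring Integration (the registry key and lock name are illustrative):
import java.util.concurrent.locks.Lock;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.integration.redis.util.RedisLockRegistry;

RedisLockRegistry registry = new RedisLockRegistry(connectionFactory, "session-locks");
Lock lock = registry.obtain("session:" + sessionId);
lock.lock();
try {
    // critical section: read-modify-write the session entry
} finally {
    lock.unlock();
}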
If you decide to use plain Jedis without Spring, then there is a distributed lock implementation for Jedis: Jedis Lock.
//from https://github.com/abelaska/jedis-lock
Jedis jedis = new Jedis("localhost");
JedisLock lock = new JedisLock(jedis, "lockname", 10000, 30000);
lock.acquire();
try {
    // do some stuff
} finally {
    lock.release();
}
Both of these should work exactly like Hazelcast locking.
As a previous answer stated, if you are using Spring Session and you are concerned about thread safety on concurrent access of a session, you should set:
RequestMappingHandlerAdapter.setSynchronizeOnSession(true);
One example can be found here: EnableSynchronizeOnSessionPostProcessor:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter;

public class EnableSynchronizeOnSessionPostProcessor implements BeanPostProcessor {

    private static final Logger logger = LoggerFactory
            .getLogger(EnableSynchronizeOnSessionPostProcessor.class);

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        // NO-OP
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof RequestMappingHandlerAdapter) {
            RequestMappingHandlerAdapter adapter = (RequestMappingHandlerAdapter) bean;
            logger.info("enable synchronizeOnSession => {}", adapter);
            adapter.setSynchronizeOnSession(true);
        }
        return bean;
    }
}
Sticky Sessions and Session Replication
With regards to a clustered application and sessions, there is a very good post on SO that discusses this topic: Sticky Sessions and Session Replication.
In my experience, you would want both Sticky Session and Session replication.
You use sticky session to eliminate the concurrent Session access across nodes, because sticky session will pin a session to a single node and each subsequent request for the same session will always be directed to that node. This eliminates the cross-node session access concern.
Replicated sessions are helpful mainly in case a node goes down. By replicating sessions, when a node goes down, future requests for existing sessions are directed to another node that has a copy of the original session, making the failover transparent to the user.
There are many frameworks that support session replication. The one I use for large projects is the open-source Hazelcast.
In response to your comments made on @11thdimension's post:
I think you are in a bit of a challenging area. Basically, you want to enforce all session operations to be atomic across nodes in a cluster. This leads me to lean towards a common session store across nodes, where access is synchronized (or something similar).
Multiple session store/replication frameworks surely support an external store concept, and I am sure Redis does. I am most familiar with Hazelcast and will use that as an example.
Hazelcast allows you to configure session persistence to use a common database.
If you look at the Map Persistence section, it shows an example and a description of the options.
The description for the concept states:
Hazelcast allows you to load and store the distributed map entries from/to a persistent data store such as a relational database. To do this, you can use Hazelcast's MapStore and MapLoader interfaces. The data store needs to be a centralized system that is accessible from all Hazelcast nodes. Persistence to the local file system is not supported.
Hazelcast supports read-through, write-through, and write-behind persistence modes, which are explained in the subsections below.
The interesting mode is write-through:
Write-Through
MapStore can be configured to be write-through by setting the write-delay-seconds property to 0. This means the entries will be put to the data store synchronously.
In this mode, when the map.put(key,value) call returns:
MapStore.store(key,value) is successfully called so the entry is persisted.
In-Memory entry is updated.
In-Memory backup copies are successfully created on other JVMs (if backup-count is greater than 0).
The same behavior goes for a map.remove(key) call; the only difference is that MapStore.delete(key) is called when the entry is deleted.
I think that using this concept, plus setting up your database tables for the store properly to lock entries on inserts/updates/deletes, will accomplish what you want; a condensed sketch follows.
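For reference, a condensed sketch of write-through map persistence using Hazelcast's MapStoreAdapter (the DataSource, the SESSION_STORE table, and the MERGE upsert syntax are illustrative):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;
import com.hazelcast.core.MapStoreAdapter;

public class SessionMapStore extends MapStoreAdapter<String, byte[]> {

    private final DataSource dataSource;

    public SessionMapStore(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void store(String key, byte[] value) {
        // With write-delay-seconds set to 0 this runs synchronously on map.put().
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "MERGE INTO SESSION_STORE (ID, DATA) KEY (ID) VALUES (?, ?)")) {
            ps.setString(1, key);
            ps.setBytes(2, value);
            ps.executeUpdate();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public byte[] load(String key) {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "SELECT DATA FROM SESSION_STORE WHERE ID = ?")) {
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getBytes(1) : null;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}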
Good Luck!
I have a Spring Boot/MVC app that should store some simple POJOs sent from users for 15 minutes. When that time expires, the object should be removed from the ConcurrentHashMap. At first, ConcurrentHashMap was what I wanted to implement this feature with, but then I thought leveraging Guava's cache might be a better option, since it has time-based eviction out of the box.
My service implementation
@CachePut(cacheNames = "teamConfigs", key = "#authAccessDto.accessToken")
@Override
public OAuthAccessDto saveConfig(OAuthAccessDto authAccessDto) {
    return authAccessDto;
}

@Override
@Cacheable(cacheNames = "teamConfigs")
public OAuthAccessDto getConfig(String accessToken) {
    // we assume that the cache is already populated
    return null;
}
As you can see, we save data with saveConfig and then when we need to retrieve it, we call getConfig.
Cache configuration in Spring boot is the following (yml file):
spring:
  cache:
    cache-names: teamConfigs
    guava:
      spec: expireAfterWrite=900s
However, after reading Guava's cache doc https://github.com/google/guava/wiki/CachesExplained, I found that Guava can clean up cache entries even before the time defined in expireAfterWrite elapses (and even before it runs out of memory).
How can I configure the Guava cache to keep objects until the time expires (given that it did not run out of memory)? Maybe I should opt for some other solution?
I don't know about Guava, but you could use any JSR-107 compliant provider with a simple configuration that would look like this:
@Bean
public JCacheManagerCustomizer cacheManagerCustomizer() {
    return cm -> {
        cm.createCache("teamConfigs", new MutableConfiguration<>()
                .setExpiryPolicyFactory(CreatedExpiryPolicy
                        .factoryOf(new Duration(MINUTES, 15))));
    };
}
(MINUTES is statically imported from java.util.concurrent.TimeUnit.)
Caffeine (a Java 8 rewrite of Guava's cache) has a JSR-107 provider, so you could use that. Maybe that version does not exhibit what you experience with Guava? If so, built-in support is expected in Spring Boot 1.4, but you can give the JSR-107 support a try right now.
If you don't want to use that, you could give expiringmap a try. It does not have a Cache implementation in the Spring abstraction, but since it's a Map you can easily wrap it; something like this, off the top of my head:
@Bean
public Cache teamConfigsCache() {
    // ExpiringMap implements ConcurrentMap, which ConcurrentMapCache expects
    ConcurrentMap<Object, Object> map = ExpiringMap.builder()
            .expiration(15, TimeUnit.MINUTES)
            .build();
    return new ConcurrentMapCache("teamConfigs", map, true);
}
If Spring Boot discovers at least one Cache bean in your configuration, it auto-creates a CacheManager implementation that wraps them. You can force that behaviour via the spring.cache.type property.
More specifically, I find that I'm implementing a custom AuthorizingRealm, which declares the template methods doGetAuthenticationInfo() and doGetAuthorizationInfo() for returning AuthenticationInfo and AuthorizationInfo objects, respectively.
However, when I retrieve the data for the AuthenticationInfo (a JPA entity) in doGetAuthenticationInfo(), I find that I already have the necessary AuthorizationInfo. Alas, there's no apparently good way to hang onto this data, so I have to throw it out, only to perform another JPA lookup when the authorization filter ultimately gets its turn in the filter chain.
Behold:
public class CustomRealm extends AuthorizingRealm {

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) {
        UsernamePasswordToken userPassToken = (UsernamePasswordToken) token;
        String username = userPassToken.getUsername();
        User user; // Contains username, password, and roles
        // Perform JPA lookup by username...
        return constructSimpleAuthenticationInfoFromUser(user);
    }

    @Override
    protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principals) {
        // Look up user again? :(
        ...
    }
}
I've considered a number of possibilities:
1. Use realm caching. The application will run in a distributed environment, so there could be any arbitrary number of JVMs running. The default realm cache manager implementations don't solve all of the inherent problems, and setting up an enterprise implementation seems out of scope for this project.
2. Use the subject's session. There is no server-side state, and I'd like to keep it that way if possible. Perhaps you can force the session to behave like request scope, but I wouldn't know how to do so, and that risks being obfuscated.
3. Implement my own Subject. There appears to typically be one Subject instance per request, but it's unclear how to bootstrap this, and I would risk losing a lot of potential functionality.
4. Use the Shiro ThreadContext object. I could attach the data to the ThreadContext as a thread-local property. Servlet containers generally follow a thread-per-request model, and the Subject instance itself seems to chill out here, awaiting its inevitable garbage collection. Shiro also appears to build up and tear down the context automatically. However, there's not much documentation on this, and the source code is hard for me to follow.
Finally, the default WebSecurityManager keeps singleton instances of the CustomRealm around, one per JVM it seems, so simply setting some local instance property is not thread-safe.
This seems like a common data retrieval option and a typical deployment scenario. So, what am I missing?
Thanks!
I would go with option 4, using a ThreadLocal, as your requirement clearly says that the object's lifetime must match that of the HTTP request. (A sketch follows after the links below.)
Have a look at this discussion: When and how should I use a ThreadLocal variable?
ThreadLocal doc: http://docs.oracle.com/javase/6/docs/api/java/lang/ThreadLocal.html
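To make option 4 concrete, here is a minimal sketch using a plain ThreadLocal in the realm itself (lookupUser, buildAuthorizationInfo, and loadAuthorizationInfoFromDatabase are hypothetical helpers standing in for your JPA code):
import org.apache.shiro.authc.AuthenticationInfo;
import org.apache.shiro.authc.AuthenticationToken;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.authz.AuthorizationInfo;
import org.apache.shiro.realm.AuthorizingRealm;
import org.apache.shiro.subject.PrincipalCollection;

public class CustomRealm extends AuthorizingRealm {

    // Holds the authorization data for the duration of the current request's thread.
    private static final ThreadLocal<AuthorizationInfo> AUTHZ = new ThreadLocal<>();

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) {
        UsernamePasswordToken userPassToken = (UsernamePasswordToken) token;
        User user = lookupUser(userPassToken.getUsername()); // single JPA lookup (hypothetical helper)
        AUTHZ.set(buildAuthorizationInfo(user));             // stash for the authorization phase
        return constructSimpleAuthenticationInfoFromUser(user);
    }

    @Override
    protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principals) {
        AuthorizationInfo stashed = AUTHZ.get();
        if (stashed != null) {
            AUTHZ.remove(); // don't leak state to the next request served by this thread
            return stashed;
        }
        // Different thread or nothing stashed: fall back to a fresh lookup.
        return loadAuthorizationInfoFromDatabase(principals);
    }
}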