I am using Spring Boot for my development. So far I have used 'EhCache' for caching, which is directly supported by Spring Boot. This is an "in-process" cache, i.e., it becomes part of your process. That is okay for now, but my server will run on multiple nodes in the near future, so I want to switch to 'Memcached' as a common caching layer.
After spending a good amount of time, I could not find a good sample of using Memcached from Spring Boot. I have looked at 'Simple Spring Memcached', which comes close to my requirement, but its examples use XML-based Spring configuration. Spring Boot avoids such XML configuration as far as possible; at least, I could not quickly map the example to the Spring Boot world.
I want to use Memcached (directly or via a cache abstraction layer) from Spring Boot. If anybody can point me to a relevant Spring Boot example, it will save me a lot of time.
You could also check the Memcached Spring Boot library. It provides a Memcached implementation of the Spring Cache Abstraction.
In other words, you use the same configuration and the same annotations as you would with any other Spring Cache implementation. You can check out the usage of the library here.
There are also example projects in Kotlin and Java.
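For illustration, here is a sketch of a service using the standard Spring Cache annotations; nothing Memcached-specific appears in the code itself (the cache name "books" and the lookup method are made up):
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class BookService {

    // The first call executes the method; subsequent calls with the same id are
    // served from the "books" cache, whichever cache backend is configured.
    @Cacheable(value = "books", key = "#id")
    public String findTitleById(String id) {
        return expensiveLookup(id);
    }

    private String expensiveLookup(String id) {
        // placeholder for a slow database or remote call
        return "Title-" + id;
    }
}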
I have already accepted the answer given by @ragnor, but I think I should post a complete example here that has worked for me.
Make sure you have caching enabled for your application by adding @EnableCaching.
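A minimal sketch, assuming a standard Spring Boot main class (the class name is made up):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@SpringBootApplication
@EnableCaching // without this, @Cacheable/@CacheEvict annotations are silently ignored
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}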
The POM.xml should have the following dependencies:
<dependency>
<groupId>com.google.code.simple-spring-memcached</groupId>
<artifactId>spring-cache</artifactId>
<version>3.6.1</version>
</dependency>
<dependency>
<groupId>com.google.code.simple-spring-memcached</groupId>
<artifactId>spymemcached-provider</artifactId>
<version>3.6.1</version>
</dependency>
Add a configuration class for your Memcached setup, say SSMConfig.java:
@Configuration
@EnableAspectJAutoProxy
@ImportResource("simplesm-context.xml") // This line may or may not be needed, not sure
public class SSMConfig
{
private String _memcachedHost; //Machine where memcached is running
private int _memcachedPort; //Port on which memcached is running
@Bean
public CacheManager cacheManager()
{
//Extended manager used as it will give custom-expiry value facility in future if needed
ExtendedSSMCacheManager ssmCacheManager = new ExtendedSSMCacheManager();
//We can create more than one cache, hence list
List<SSMCache>cacheList = new ArrayList<SSMCache>();
//First cache: Testcache
SSMCache testCache = createNewCache(_memcachedHost, _memcachedPort,
"testcache", 5);
//One more dummy cache
SSMCache dummyCache = createNewCache(_memcachedHost,_memcachedPort,
"dummycache", 300);
cacheList.add(testCache);
cacheList.add(dummyCache);
//Adding cache list to cache manager
ssmCacheManager.setCaches(cacheList);
return ssmCacheManager;
}
//expiryTimeInSeconds: time(in seconds) after which a given element will expire
//
private SSMCache createNewCache(String memcachedServer, int port,
String cacheName, int expiryTimeInSeconds)
{
//Basic client factory to be used. This is SpyMemcached for now.
MemcacheClientFactoryImpl cacheClientFactory = new MemcacheClientFactoryImpl();
//Memcached server address parameters
//"127.0.0.1:11211"
String serverAddressStr = memcachedServer + ":" + String.valueOf(port);
AddressProvider addressProvider = new DefaultAddressProvider(serverAddressStr);
//Basic configuration object
CacheConfiguration cacheConfigToUse = getNewCacheConfiguration();
//Create cache factory
CacheFactory cacheFactory = new CacheFactory();
cacheFactory.setCacheName(cacheName);
cacheFactory.setCacheClientFactory(cacheClientFactory);
cacheFactory.setAddressProvider(addressProvider);
cacheFactory.setConfiguration(cacheConfigToUse);
//Get Cache object
Cache object = null;
try {
object = cacheFactory.getObject();
} catch (Exception e) {
// At minimum surface the failure; otherwise a misconfigured cache fails silently
e.printStackTrace();
}
//allow/disallow remove all entries from this cache!!
boolean allowClearFlag = false;
SSMCache ssmCache = new SSMCache(object, expiryTimeInSeconds, allowClearFlag);
return ssmCache;
}
private CacheConfiguration getNewCacheConfiguration()
{
CacheConfiguration ssmCacheConfiguration = new CacheConfiguration();
ssmCacheConfiguration.setConsistentHashing(true);
//ssmCacheConfiguration.setUseBinaryProtocol(true);
return ssmCacheConfiguration;
}
}
OK, we are ready to use our configured cache.
Sample methods in some other class to read from cache and to remove from cache
@Cacheable(value="dummycache", key="#givenId.concat('-dmy')", unless="#result == null")
public String getDummyDataFromMemCached(String givenId)
{
logger.warn("getDummyDataFromMemCached: Inside DUMMY method to actually get data");
return "Sample-" + String.valueOf(givenId);
}
@CacheEvict(value="dummycache", key="#givenId.concat('-dmy')")
public void removeDummyDataFromMemCached(String givenId)
{
//Do nothing
return;
}
Note that we have added a suffix to the cache keys. As Memcached does not support cache zones, "dummycache" and "testcache" ultimately do not remain separate on a single server. (They may remain separate with some other cache implementation.) Hence, to avoid key conflicts, we append a unique suffix to the cache key.
If you want to cache objects of your own class, then make sure that they are serializable. Just change your class definition to 'XYZ implements Serializable'.
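A minimal sketch, with a made-up class name and fields:
import java.io.Serializable;

public class AreaInfo implements Serializable {

    private static final long serialVersionUID = 1L; // fix the version id so cached entries survive recompiles

    private String id;
    private String name;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}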
You can find some material on how to configure SSM using Java configuration instead of XML files here and here.
Basically, you have to move the definitions of all beans from XML to Java.
In Java Spring Boot, I can easily enable caching using the annotation @EnableCaching and make methods cache their result using @Cacheable. This way, any call to my method with the exact same parameters will NOT execute the method, but will return immediately using the cached result.
Is there something similar in C#?
What I did in the past was implement my own caching class and my own data structures; it's a big hassle. I just want an easy way for the program to cache the result and return the same result if the input parameters are the same.
EDIT: I don't want to use any third-party stuff, so no Memcached, no Redis, no RabbitMQ, etc. Just looking for a very simple and elegant solution like Java's @Cacheable.
Caches
A cache is a relatively small amount of storage that can be accessed very quickly. It essentially keeps information that is likely to be used again. For example, web browsers typically use a cache to make web pages load faster by storing a copy of the page files locally on your computer.
Caching
Caching is the process of storing data in the cache. Caching with the C# language is very easy: System.Runtime.Caching.dll provides the classes for working with caching in C#. In this illustration I am using the following classes:
ObjectCache
MemoryCache
CacheItemPolicy
ObjectCache
: The ObjectCache class is the abstract base class that represents an object cache and provides the base methods and properties for accessing it. It lives in System.Runtime.Caching.
MemoryCache
: This class also lives in System.Runtime.Caching and represents the type that implements an in-memory cache.
CacheItemPolicy
: Represents a set of eviction and expiration details for a specific cache entry.
.NET provides
System.Web.Caching.Cache - the default caching mechanism in ASP.NET. You can get an instance of this class via the property Controller.HttpContext.Cache, or via the singleton HttpContext.Current.Cache. This class is not expected to be created explicitly because under the hood it uses another caching engine that is assigned internally. The simplest way to make your code work is the following:
public class DataController : System.Web.Mvc.Controller{
public System.Web.Mvc.ActionResult Index(){
List<object> list = new List<Object>();
HttpContext.Cache["ObjectList"] = list; // add
list = (List<object>)HttpContext.Cache["ObjectList"]; // retrieve
HttpContext.Cache.Remove("ObjectList"); // remove
return new System.Web.Mvc.EmptyResult();
}
}
System.Runtime.Caching.MemoryCache - this class can be constructed in user code. It has a different interface and more features, like update/remove callbacks, regions, monitors, etc. To use it you need to reference the System.Runtime.Caching library. It can also be used in an ASP.NET application, but you will have to manage its lifetime yourself.
var cache = new System.Runtime.Caching.MemoryCache("MyTestCache");
cache["ObjectList"] = list; // add
list = (List<object>)cache["ObjectList"]; // retrieve
cache.Remove("ObjectList"); // remove
You can write a decorator with get-or-create functionality: first try to get the value from the cache; if it doesn't exist, calculate it and store it in the cache:
public static class CacheExtensions
{
public static async Task<T> GetOrSetValueAsync<T>(this ICacheClient cache, string key, Func<Task<T>> function)
where T : class
{
// try to get value from cache
var result = await cache.JsonGet<T>(key);
if (result != null)
{
return result;
}
// cache miss, run function and store result in cache
result = await function();
await cache.JsonSet(key, result);
return result;
}
}
ICacheClient is the interface you're extending. Now you can use:
await _cacheClient.GetOrSetValueAsync(key, () => Task.FromResult(value));
Using Cassandra, I want to create the keyspace and tables dynamically from a Spring Boot application. I am using Java-based configuration.
I have an entity annotated with @Table whose schema I want created before the application starts up, since it has fixed fields that are known beforehand.
However, depending on the logged-in user, I also want to create additional tables for those users dynamically and be able to insert entries into those tables.
Can somebody guide me to some resources I can make use of, or point me in the right direction on how to go about solving these issues? Thanks a lot for the help!
The easiest thing to do would be to add the Spring Boot Starter Data Cassandra dependency to your Spring Boot application, like so...
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-cassandra</artifactId>
<version>1.3.5.RELEASE</version>
</dependency>
In addition, this will add the Spring Data Cassandra dependency to your application.
With Spring Data Cassandra, you can configure your application's Keyspace(s) using the CassandraClusterFactoryBean (or more precisely, the subclass... CassandraCqlClusterFactoryBean) by calling the setKeyspaceCreations(:Set) method.
The KeyspaceActionSpecification class is pretty self-explanatory. You can even create one with the KeyspaceActionSpecificationFactoryBean, add it to a Set and then pass that to the setKeyspaceCreations(..) method on the CassandraClusterFactoryBean.
For generating the application's Tables, you essentially just need to annotate your application domain object(s) (entities) using the SD Cassandra @Table annotation, and make sure your domain objects/entities can be found on the application's CLASSPATH.
Specifically, you can have your application @Configuration class extend the SD Cassandra AbstractClusterConfiguration class. There, you will find the getEntityBasePackages():String[] method that you can override to provide the package locations containing your application domain object/entity classes, which SD Cassandra will then use to scan for @Table domain objects/entities.
With your application @Table domain objects/entities properly identified, you set the SD Cassandra SchemaAction to CREATE using the CassandraSessionFactoryBean method setSchemaAction(:SchemaAction). This will create Tables in your Keyspace for all domain objects/entities found during the scan, provided you identified the proper Keyspace on your CassandraSessionFactoryBean.
Obviously, if your application creates/uses multiple Keyspaces, you will need to create a separate CassandraSessionFactoryBean for each Keyspace, with the entityBasePackages configuration property set appropriately for the entities that belong to a particular Keyspace, so that the associated Tables are created in that Keyspace.
Now...
For the "additional" Tables per user, that is quite a bit more complicated and tricky.
You might be able to leverage Spring Profiles here; however, profiles are generally only applied on startup. If a different user logs into an already running application, you need a way to supply additional @Configuration classes to the Spring ApplicationContext at runtime.
Your Spring Boot application could inject a reference to an AnnotationConfigApplicationContext, and then use it on a login event to programmatically register additional @Configuration classes based on the user who logged into the application. You need to follow your register(Class...) call(s) with an ApplicationContext.refresh().
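A minimal sketch of that idea, with made-up class names; note that a GenericApplicationContext can only be refreshed once, so one option is to register the user-specific @Configuration in a child context rather than refreshing the already running one:
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

public class UserConfigRegistrar {

    // Hypothetical user-specific configuration; in practice it would define the
    // session/keyspace beans for the tables belonging to that user.
    @Configuration
    static class UserTablesConfig {
        @Bean
        public String userKeyspaceName() {
            return "user_keyspace";
        }
    }

    // Called on a login event; 'parent' is the already running Spring Boot context.
    public ApplicationContext registerUserConfig(ApplicationContext parent) {
        AnnotationConfigApplicationContext userContext = new AnnotationConfigApplicationContext();
        userContext.setParent(parent);
        userContext.register(UserTablesConfig.class);
        userContext.refresh();
        return userContext; // keep a handle so it can be closed when the user logs out
    }
}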
You also need to appropriately handle the situation where the Tables already exist.
This is not currently supported in SD Cassandra, but see DATACASS-219 for further details.
Technically, it would be far simpler to create all the possible Tables needed by the application for all users at runtime and use Cassandra's security settings to restrict individual user access by role and assigned permissions.
Another option might be just to create temporary Keyspaces and/or Tables as needed when a user logs in into the application, drop them when the user logs out.
Clearly, there are a lot of different choices here, and it boils down more to architectural decisions, trade-offs and considerations than it does to technical feasibility, so be careful.
Hope this helps.
Cheers!
The following Spring configuration class creates the keyspace and tables if they don't exist.
@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {
private static final String KEYSPACE = "my_keyspace";
private static final String USERNAME = "cassandra";
private static final String PASSWORD = "cassandra";
private static final String NODES = "127.0.0.1"; // comma-separated nodes
@Bean
@Override
public CassandraCqlClusterFactoryBean cluster() {
CassandraCqlClusterFactoryBean bean = new CassandraCqlClusterFactoryBean();
bean.setKeyspaceCreations(getKeyspaceCreations());
bean.setContactPoints(NODES);
bean.setUsername(USERNAME);
bean.setPassword(PASSWORD);
return bean;
}
@Override
public SchemaAction getSchemaAction() {
return SchemaAction.CREATE_IF_NOT_EXISTS;
}
@Override
protected String getKeyspaceName() {
return KEYSPACE;
}
@Override
public String[] getEntityBasePackages() {
return new String[]{"com.panda"};
}
protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
List<CreateKeyspaceSpecification> createKeyspaceSpecifications = new ArrayList<>();
createKeyspaceSpecifications.add(getKeySpaceSpecification());
return createKeyspaceSpecifications;
}
// The method below creates "my_keyspace" if it doesn't exist.
private CreateKeyspaceSpecification getKeySpaceSpecification() {
CreateKeyspaceSpecification pandaCoopKeyspace = new CreateKeyspaceSpecification();
DataCenterReplication dcr = new DataCenterReplication("dc1", 3L);
pandaCoopKeyspace.name(KEYSPACE);
pandaCoopKeyspace.ifNotExists(true).createKeyspace().withNetworkReplication(dcr);
return pandaCoopKeyspace;
}
}
Building on @Enes Altınkaya's answer:
@Value("${cassandra.keyspace}")
private String keySpace;
@Override
protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
return Arrays.asList(
CreateKeyspaceSpecification.createKeyspace()
.name(keySpace)
.ifNotExists()
.withNetworkReplication(new DataCenterReplication("dc1", 3L)));
}
To define your variables, use an application.properties or application.yml file:
cassandra:
  keyspace: your_keyspace_name
Using config files instead of hardcoded strings lets you publish your code (for example on GitHub) without exposing your passwords and entry points, which would be a security risk; just keep the config files out of version control via .gitignore.
The following Cassandra configuration will create the keyspace when it does not exist and also run the specified start-up script:
@Configuration
@PropertySource(value = {"classpath:cassandra.properties"})
@EnableCassandraRepositories
public class CassandraConfig extends AbstractCassandraConfiguration {
@Value("${cassandra.keyspace}")
private String cassandraKeyspace;
@Override
protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
return Collections.singletonList(CreateKeyspaceSpecification.createKeyspace(cassandraKeyspace)
.ifNotExists()
.with(KeyspaceOption.DURABLE_WRITES, true)
.withSimpleReplication());
}
@Override
protected List<String> getStartupScripts() {
return Collections.singletonList("CREATE TABLE IF NOT EXISTS "+cassandraKeyspace+".test(id UUID PRIMARY KEY, greeting text, occurrence timestamp) WITH default_time_to_live = 600;");
}
}
For table creation, you can use this in the application.properties file:
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
This answer is inspired by Viswanath's answer.
My cassandra.yml looks as follows:
spring:
  data:
    cassandra:
      cluster-name: Test Cluster
      keyspace-name: keyspace
      port: 9042
      contact-points:
        - 127.0.0.1
@Configuration
@PropertySource(value = { "classpath:cassandra.yml" })
@ConfigurationProperties("spring.data.cassandra")
@EnableCassandraRepositories(basePackages = "info.vishrantgupta.repository")
public class CassandraConfig extends AbstractCassandraConfiguration {
@Value("${keyspacename}")
protected String keyspaceName;
@Override
protected String getKeyspaceName() {
return this.keyspaceName;
}
@Override
protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
return Collections.singletonList(CreateKeyspaceSpecification
.createKeyspace(keyspaceName).ifNotExists()
.with(KeyspaceOption.DURABLE_WRITES, true)
.withSimpleReplication());
}
@Override
protected List<String> getStartupScripts() {
return Collections.singletonList("CREATE KEYSPACE IF NOT EXISTS "
+ keyspaceName + " WITH replication = {"
+ " 'class': 'SimpleStrategy', "
+ " 'replication_factor': '3' " + "};");
}
}
You might have to customize @ConfigurationProperties("spring.data.cassandra"); if your configuration starts with cassandra in the cassandra.yml file, then use @ConfigurationProperties("cassandra").
I have essentially the same question as here but am hoping to get a less vague, more informative answer.
I'm looking for a way to configure DropWizard programmatically, or at the very least, to be able to tweak configs at runtime. Specifically I have a use case where I'd like to configure metrics in the YAML file to be published with a frequency of, say, 2 minutes. This would be the "normal" default. However, under certain circumstances, I may want to speed that up to, say, every 10 seconds, and then throttle it back to the normal/default.
How can I do this, and not just for the metrics.frequency property, but for any config that might be present inside the YAML config file?
Dropwizard reads the YAML config file and configures all the components only once, on startup. Neither the YAML file nor the Configuration object is ever used again. That means there is no direct way to change the configuration at runtime.
It also doesn't provide special interfaces/delegates where you can manipulate the components. However, you can access the objects of the components (usually; if not you can always send a pull request) and configure them manually as you see fit. You may need to read the source code a bit but it's usually easy to navigate.
In the case of metrics.frequency, you can see that the MetricsFactory class creates ScheduledReporterManager objects per reporter type using the frequency setting, and it doesn't look like you can change them at runtime. But you can probably work around it somehow or, even better, modify the code and send a pull request to the Dropwizard community.
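As a rough workaround sketch (not a Dropwizard feature): you can always start an additional reporter yourself at whatever interval you need, using the underlying Metrics API. ConsoleReporter is used here purely for illustration; the registry would be the one Dropwizard exposes via environment.metrics() in your Application#run method.
import java.util.concurrent.TimeUnit;
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;

public class FastReporting {
    // Call with environment.metrics() from Application#run(config, environment).
    public static ConsoleReporter startFastReporter(MetricRegistry registry) {
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
        reporter.start(10, TimeUnit.SECONDS); // report every 10 seconds, independent of the YAML frequency
        return reporter; // caller can stop() it later to drop back to the normal cadence
    }
}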
Although this feature isn't supported out of the box by dropwizard, you're able to accomplish this fairly easy with the tools they give you. Note that the below solution definitely works on config values you've provided, but it may not work for built in configuration values.
Also note that this doesn't persist the updated config values to the config.yml. However, this would be easy enough to implement yourself simply by writing to the config file from the application. If anyone would like to write this implementation feel free to open a PR on the example project I've linked below.
Code
Start off with a minimal config:
config.yml
myConfigValue: "hello"
And it's corresponding configuration file:
ExampleConfiguration.java
public class ExampleConfiguration extends Configuration {
private String myConfigValue;
public String getMyConfigValue() {
return myConfigValue;
}
public void setMyConfigValue(String value) {
myConfigValue = value;
}
}
Then create a task which updates the config:
UpdateConfigTask.java
public class UpdateConfigTask extends Task {
ExampleConfiguration config;
public UpdateConfigTask(ExampleConfiguration config) {
super("updateconfig");
this.config = config;
}
@Override
public void execute(Map<String, List<String>> parameters, PrintWriter output) {
config.setMyConfigValue("goodbye");
}
}
Also for demonstration purposes, create a resource which allows you to get the config value:
ConfigResource.java
@Path("/config")
public class ConfigResource {
private final ExampleConfiguration config;
public ConfigResource(ExampleConfiguration config) {
this.config = config;
}
@GET
public Response handleGet() {
return Response.ok().entity(config.getMyConfigValue()).build();
}
}
Finally wire everything up in your application:
ExampleApplication.java (excerpt)
environment.jersey().register(new ConfigResource(configuration));
environment.admin().addTask(new UpdateConfigTask(configuration));
Usage
Start up the application then run:
$ curl 'http://localhost:8080/config'
hello
$ curl -X POST 'http://localhost:8081/tasks/updateconfig'
$ curl 'http://localhost:8080/config'
goodbye
How it works
This works simply by passing the same reference to the constructor of ConfigResource.java and UpdateConfigTask.java. If you aren't familiar with the concept see here:
Is Java "pass-by-reference" or "pass-by-value"?
The linked classes above are to a project I've created which demonstrates this as a complete solution. Here's a link to the project:
scottg489/dropwizard-runtime-config-example
Footnote: I haven't verified this works with the built in configuration. However, the dropwizard Configuration class which you need to extend for your own configuration does have various "setters" for internal configuration, but it may not be safe to update those outside of run().
Disclaimer: The project I've linked here was created by me.
I solved this with bytecode manipulation via Javassist.
In my case, I wanted to change the "influx" reporter, and modifyInfluxDbReporterFactory should be run BEFORE Dropwizard starts:
private static void modifyInfluxDbReporterFactory() throws Exception {
ClassPool cp = ClassPool.getDefault();
CtClass cc = cp.get("com.izettle.metrics.dw.InfluxDbReporterFactory"); // do NOT use InfluxDbReporterFactory.class.getName() as this will force the class into the classloader
CtMethod m = cc.getDeclaredMethod("setTags");
m.insertAfter(
"if (tags.get(\"cloud\") != null) tags.put(\"cloud_host\", tags.get(\"cloud\") + \"_\" + host);tags.put(\"app\", \"sam\");");
cc.toClass();
}
I am developing a REST application to read all the caches in a cluster that uses JCache with Hazelcast 3.3.3.
This application creates another Hazelcast node when I call the following line in the application:
cacheManager= Caching.getCachingProvider().getCacheManager();
The node gets clustered with the already created nodes. But when I try to get all the cache names of the cluster with the following call, it returns an empty iterable:
cacheManager.getCacheNames().iterator()
I went through the Javadoc of JCache, which says:
May not provide all of the Caches managed by the CacheManager. For example: internally defined or platform specific Caches that may be accessible by a call to getCache(java.lang.String) or getCache(java.lang.String, java.lang.Class, java.lang.Class) may not be present in an iteration.
But the caches that I am trying to access are not internally defined or platform specific; they are created by other nodes.
I want a way to get all the cache names present in the cluster. Is there a way to do this?
NB: No hazelcast.xml is used in the application; everything is initialized from the default XML configuration.
Update:
I can access a cache if I know its name, and after accessing it for the first time by giving the name directly, that cache now shows up in cacheManager.getCacheNames().iterator().
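A minimal sketch of that workaround (the cache name "myCache" is made up); once getCache has been called, the cache is managed by this CacheManager and therefore appears in getCacheNames():
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;

public class CacheByNameExample {
    public static void main(String[] args) {
        CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
        // Looking the cache up by its known name attaches it to this manager...
        Cache<Object, Object> cache = cacheManager.getCache("myCache");
        // ...so afterwards it is listed here as well.
        for (String name : cacheManager.getCacheNames()) {
            System.out.println(name);
        }
    }
}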
The CacheManager only provides the names of the caches it manages, so you cannot obtain all caches known to the cluster using the JCache API.
In Hazelcast 3.7 (the EA was released just yesterday), all Caches are available as DistributedObjects, so by invoking HazelcastInstance.getDistributedObjects() and then checking which objects are instances of javax.cache.Cache or the Hazelcast-specific subinterface com.hazelcast.cache.ICache, you should be able to get references to all Caches in the cluster:
// works for 3.7
Collection<DistributedObject> distributedObjects = hazelcastInstance.getDistributedObjects();
for (DistributedObject distributedObject : distributedObjects) {
if (distributedObject instanceof ICache) {
System.out.println("Found cache with name " + distributedObject.getName());
}
}
In Hazelcast 3.6 it is possible to obtain all cache names known to the cluster only using internal classes, so there is no guarantee this will work with any other version.
// works for 3.6 using internal classes, most probably will not work for other versions
public static void main(String[] args) {
// start a hazelcast instance
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
// create a CacheManager and Cache on this instance
CachingProvider hazelcastCachingProvider = Caching.getCachingProvider("com.hazelcast.cache.HazelcastCachingProvider",
HazelcastCachingProvider.class.getClassLoader());
CacheManager cacheManager = hazelcastCachingProvider.getCacheManager();
cacheManager.createCache("test1", new CacheConfig<Object, Object>());
// hacky: obtain a reference to internal cache service
CacheDistributedObject cacheDistributedObject = hz.getDistributedObject("hz:impl:cacheService", "setupRef");
ICacheService cacheService = cacheDistributedObject.getService();
// obtain all CacheConfig's in the cluster
Collection<CacheConfig> cacheConfigs = cacheService.getCacheConfigs();
for (CacheConfig cacheConfig : cacheConfigs) {
System.out.println("Cache name: " + cacheConfig.getName() +
", fully qualified name: " + cacheConfig.getNameWithPrefix());
}
hz.shutdown();
}
But the caches that I am trying to access are not internally defined or platform specific
That's good, because this method should return all the other caches, and even some of the internally defined or platform-specific ones.
I am trying to use Ehcache in my Java project; it's new to me.
Right now I am using Ehcache to retrieve an area list, and when my project adds a new area, I use the @TriggersRemove functionality to clear the cache once and then reload it.
For example: I have retrieved 10 areas using Ehcache, and when I add one more area I clear the cache and reload it.
Are there any other options to avoid reloading all the data in the cache?
My code:
@Cacheable(cacheName="retrieveAreas")
public List<AreaBO> retrieveAreas(){
//some code here
}
@TriggersRemove(cacheName="retrieveAreas", removeAll=true)
public long addArea(AreaBO areaBO) throws UserServiceException{
// some codes here
}
It seems that you are using the annotations from Ehcache. If you were to switch to the caching annotations provided by Spring since version 3.1, your code would be:
@Cacheable(value="retrieveAreas")
public List<AreaBO> retrieveAreas(){
//some code here
}
@CachePut(value="retrieveAreas")
public long addArea(AreaBO areaBO) throws UserServiceException{
// some codes here
}
The difference, as you can see, is in the @CachePut annotation, which adds the return value of the method to the specified cache.
I am not aware of a corresponding annotation in Ehcache.