Access single hazelcast instance on same JVM - java

I have a Java web application (running on Tomcat) and would like to share data between two different contexts in the application. I'd like to use Hazelcast since I'm already using it for clustering purposes.
Is there a way to access a single hazelcast instance running on the same JVM (and the same port)?
I've tried accessing the instance using the instance name, but this doesn't seem to work. For example:
public class HazelcastTest1 {
    public static void main(String[] args) {
        Config config = new Config();
        config.getNetworkConfig().setPort(5701);
        config.getNetworkConfig().setPortAutoIncrement(false);
        config.setInstanceName("hztest");
        HazelcastInstance hz = Hazelcast.getOrCreateHazelcastInstance(config);
        Map<String, String> mp = hz.getMap("vcutest");
        mp.put("test1", "test1");
        System.out.printf("put item in map");
        while (true) {
        }
    }
}
public class HazelcastTest2 {
    public static void main(String[] args) {
        Config config = new Config();
        config.getNetworkConfig().setPort(5701);
        config.getNetworkConfig().setPortAutoIncrement(false);
        config.setInstanceName("hztest");
        HazelcastInstance hz = Hazelcast.getOrCreateHazelcastInstance(config);
        Map<String, String> mp = hz.getMap("vcutest");
        System.out.printf("map value = %s%n", mp.get("test1"));
    }
}
When I start the 2nd instance (with the 1st already running) the following exception is thrown:
Exception in thread "main" com.hazelcast.core.HazelcastException: Port
[5701] is already in use and auto-increment is disabled. Hazelcast
cannot start.

You can retrieve the same instance using Hazelcast::getHazelcastInstanceByName, but this requires that the classes are visible to both webapp classloaders. You can achieve that by putting the Hazelcast JAR inside the Tomcat lib directory.
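For illustration, here is a minimal sketch of that lookup, assuming the instance name "hztest" from the question and the Hazelcast JAR sitting in Tomcat's lib directory; the class name SharedInstanceLookup is made up for the example:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class SharedInstanceLookup {
    public static HazelcastInstance lookup() {
        // Returns the instance created elsewhere in this JVM under the name "hztest",
        // or null if no instance with that name exists yet.
        HazelcastInstance existing = Hazelcast.getHazelcastInstanceByName("hztest");
        if (existing != null) {
            return existing;
        }
        // Fall back to creating it (the first webapp to run ends up here).
        Config config = new Config();
        config.setInstanceName("hztest");
        return Hazelcast.getOrCreateHazelcastInstance(config);
    }
}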
Apart from that, Hazelcast is not designed to run in single-instance mode; it will not perform well.

If you're deploying two separate WARs, then despite the fact that they are in the same JVM, they are in isolated classloaders.
You should just consider them as separate JVMs and use Hazelcast as intended (have each web app join the cluster).
In that case, enable port auto-increment, which should allow the second member to start on the next free port and join the same Hazelcast cluster.
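As a minimal sketch of that approach (reusing the port and map name from the question; the class name is made up), each web app starts its own member with auto-increment enabled, so the second one falls back to 5702 and joins the first:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class ClusterMemberExample {
    public static void main(String[] args) {
        Config config = new Config();
        // Start at 5701, but allow the second member to fall back to 5702, 5703, ...
        config.getNetworkConfig().setPort(5701);
        config.getNetworkConfig().setPortAutoIncrement(true);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        // Both members see the same distributed map once they have joined the cluster.
        Map<String, String> mp = hz.getMap("vcutest");
        System.out.println(mp.get("test1"));
    }
}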

Related

Create a database-service in AWS Lambda

I am developing a web application using Java and Spring Boot on AWS Lambda.
I am designing it to have one database-service. This will be a collection of Entity (table) and JpaRepository classes, so if I need any database schema changes I only have to make the change in this service.
The other services, which will be exposed through an API Gateway, will use this database-service as a Lambda layer.
parent-project
|
|---database-service
|
|---API-service1
|
|---API-service2
...
The problem is that I need to create the tables before any of the Lambda services are deployed, so that the API services can use them. One way to solve this is to deploy the database-service as a Lambda function and invoke it, which would call a method like the one below to create all the tables.
@SpringBootApplication
public class DatabaseServiceApplication implements CommandLineRunner {

    private DynamoDBMapper dynamoDBMapper;
    private final AmazonDynamoDB amazonDynamoDB;

    public DatabaseServiceApplication(AmazonDynamoDB amazonDynamoDB) {
        this.amazonDynamoDB = amazonDynamoDB;
    }

    public static void main(String[] args) {
        SpringApplication.run(DatabaseServiceApplication.class, args);
    }

    @Override
    public void run(String... strings) {
        dynamoDBMapper = new DynamoDBMapper(amazonDynamoDB);
        CreateTableRequest tableRequest = dynamoDBMapper
                .generateCreateTableRequest(Association.class);
        tableRequest.setProvisionedThroughput(
                new ProvisionedThroughput(1L, 1L));
        TableUtils.createTableIfNotExists(amazonDynamoDB, tableRequest);
    }
}
Or I could use a script to create the tables. I am not sure which is the better option, or whether there is a better option altogether.
Has anyone faced this problem before, and how did you fix it?
To me the best way to do this is on Lambda cold start. Your code needs to be idempotent, i.e. it should not care whether the DB is already correct. Based on the code you're showing, I would do something on the order of:
public class LambdaExample implements RequestStreamHandler {

    // fields added for completeness; the client can be built however you normally do it
    private final AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClientBuilder.defaultClient();
    private final DynamoDBMapper dynamoDBMapper;

    // only called on cold start
    public LambdaExample() {
        dynamoDBMapper = new DynamoDBMapper(amazonDynamoDB);
        CreateTableRequest tableRequest = dynamoDBMapper
                .generateCreateTableRequest(Association.class);
        tableRequest.setProvisionedThroughput(
                new ProvisionedThroughput(1L, 1L));
        TableUtils.createTableIfNotExists(amazonDynamoDB, tableRequest);
    }

    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) {
        // handle the request. This lambda type requires reading the inputStream
        // yourself, but use whatever you normally have here.
    }
}
If you're using a traditional relational database, you could use Flyway instead. It too knows if a DB has already been updated.
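As a rough sketch of that idea (not from the original answer), assuming a relational database reachable at a placeholder JDBC URL with placeholder credentials, Flyway can run from a static initializer so migrations execute once per cold start and are skipped if already applied:

import org.flywaydb.core.Flyway;

public class MigrateOnColdStart {
    // Runs once per container cold start; Flyway records applied migrations
    // and does nothing if the schema is already up to date.
    static {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://example-host:5432/exampledb", // placeholder URL
                            "exampleUser", "examplePassword")                // placeholder credentials
                .load();
        flyway.migrate();
    }
}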
Note that if you have thousands of Lambdas they will all call this, slowing the cold start of every single one of them. That is why @MarkB is suggesting a process to externalize the DB creation, as really only the very first Lambda kicked off does anything useful. After that you're wasting a bit of time/money with every new Lambda.
Since you are deploying via Terraform, the correct way to do this is to have Terraform create the DynamoDB tables as well. You would configure your aws_lambda_function resources in Terraform with a depends_on property referencing the aws_dynamodb_table resource, so that Terraform ensures the table is created before the Lambda functions.
Can you please answer the questions below?
1) Are you deploying your Spring Boot application in Lambda?
If yes, that doesn't sound like a good use of Spring Boot; a Spring Boot application should be hosted on an EC2/ECS instance so it is up and running 24/7.
Think of Lambda as a function that runs to handle a simple task. To achieve that, you can write a simple Java application and deploy the JAR as the Lambda function (a minimal handler sketch follows below).
2) CloudFormation, Terraform and other infrastructure-as-code tools are used to create the infrastructure; you usually run the infrastructure job first and the deployment after it.
Here's a link to a Terraform project structure I built for a personal project:
https://github.com/saifmasadeh/terraform-project-structure
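To illustrate point 1, here is a minimal plain-Java handler of the kind this answer has in mind; the class name and payload types are made up for the example, not taken from the original answer:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

// Plain handler, no Spring Boot: Lambda instantiates this class and calls handleRequest per invocation.
public class SimpleTaskHandler implements RequestHandler<Map<String, Object>, String> {
    @Override
    public String handleRequest(Map<String, Object> input, Context context) {
        context.getLogger().log("received " + input.size() + " fields");
        return "ok";
    }
}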

Unable to get good example of using memcached from Java boot

I am using Spring Boot for my development. For now I have used EhCache for caching, which is directly supported by Spring Boot. This is an "in-process" cache, i.e. it becomes part of your process. That is okay for now, but my server will run on multiple nodes in the near future, hence I want to switch to Memcached as a common caching layer.
After spending a good amount of time, I could not find a good sample of using Memcached from Spring Boot. I have looked at 'Simple Spring Memcached', which comes close to my requirement, but it gives examples using XML configuration in the Spring way. Spring Boot avoids such XML configuration as far as possible; at least I could not quickly map the example to the Spring Boot world.
I want to use Memcached (directly or via a cache abstraction layer) from Spring Boot. If anybody can point me to a relevant Spring Boot example, it will save a lot of time for me.
You could also check the Memcached Spring Boot library. It provides a Memcached implementation for the Spring Cache abstraction.
In other words, you use the same configuration and the same annotations as you would use with any other Spring Cache implementation. You can check out the usage of the library here.
There are also example projects in Kotlin and Java.
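For illustration, a minimal sketch of what the caching code can look like with any Spring Cache backend (including this library), assuming caching is enabled with @EnableCaching and a cache named "books" is configured; the service and method names here are made up:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class BookLookupService {
    // With a Memcached-backed CacheManager on the classpath, results of this method
    // are stored in the "books" cache and served from Memcached on later calls.
    @Cacheable(value = "books", key = "#isbn")
    public String findTitleByIsbn(String isbn) {
        // stand-in for a slow lookup (database, remote call, ...)
        return "Title for " + isbn;
    }
}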
I have already accepted the answer given by @ragnor. But I think I should post a complete example here which has worked for me.
Make sure you have caching enabled for your application by adding @EnableCaching.
Your pom.xml should have the following dependencies:
<dependency>
    <groupId>com.google.code.simple-spring-memcached</groupId>
    <artifactId>spring-cache</artifactId>
    <version>3.6.1</version>
</dependency>
<dependency>
    <groupId>com.google.code.simple-spring-memcached</groupId>
    <artifactId>spymemcached-provider</artifactId>
    <version>3.6.1</version>
</dependency>
Add a config class for your Memcached cache configuration, say SSMConfig.java:
@Configuration
@EnableAspectJAutoProxy
@ImportResource("simplesm-context.xml") // This line may or may not be needed, not sure
public class SSMConfig
{
    private String _memcachedHost; // Machine where memcached is running
    private int _memcachedPort;    // Port on which memcached is running

    @Bean
    public CacheManager cacheManager()
    {
        // Extended manager used as it will give a custom-expiry value facility in future if needed
        ExtendedSSMCacheManager ssmCacheManager = new ExtendedSSMCacheManager();
        // We can create more than one cache, hence the list
        List<SSMCache> cacheList = new ArrayList<SSMCache>();
        // First cache: testcache
        SSMCache testCache = createNewCache(_memcachedHost, _memcachedPort,
                "testcache", 5);
        // One more dummy cache
        SSMCache dummyCache = createNewCache(_memcachedHost, _memcachedPort,
                "dummycache", 300);
        cacheList.add(testCache);
        cacheList.add(dummyCache);
        // Adding cache list to cache manager
        ssmCacheManager.setCaches(cacheList);
        return ssmCacheManager;
    }

    // expiryTimeInSeconds: time (in seconds) after which a given element will expire
    private SSMCache createNewCache(String memcachedServer, int port,
            String cacheName, int expiryTimeInSeconds)
    {
        // Basic client factory to be used. This is SpyMemcached for now.
        MemcacheClientFactoryImpl cacheClientFactory = new MemcacheClientFactoryImpl();
        // Memcached server address parameters, e.g. "127.0.0.1:11211"
        String serverAddressStr = memcachedServer + ":" + String.valueOf(port);
        AddressProvider addressProvider = new DefaultAddressProvider(serverAddressStr);
        // Basic configuration object
        CacheConfiguration cacheConfigToUse = getNewCacheConfiguration();
        // Create cache factory
        CacheFactory cacheFactory = new CacheFactory();
        cacheFactory.setCacheName(cacheName);
        cacheFactory.setCacheClientFactory(cacheClientFactory);
        cacheFactory.setAddressProvider(addressProvider);
        cacheFactory.setConfiguration(cacheConfigToUse);
        // Get Cache object
        Cache object = null;
        try {
            object = cacheFactory.getObject();
        } catch (Exception e) {
            // ignored in this sample; log this in real code
        }
        // allow/disallow removing all entries from this cache!!
        boolean allowClearFlag = false;
        SSMCache ssmCache = new SSMCache(object, expiryTimeInSeconds, allowClearFlag);
        return ssmCache;
    }

    private CacheConfiguration getNewCacheConfiguration()
    {
        CacheConfiguration ssmCacheConfiguration = new CacheConfiguration();
        ssmCacheConfiguration.setConsistentHashing(true);
        // ssmCacheConfiguration.setUseBinaryProtocol(true);
        return ssmCacheConfiguration;
    }
}
OK, we are ready to use our configured cache.
Sample methods in some other class to read from the cache and to remove from the cache:
@Cacheable(value = "dummycache", key = "#givenId.concat('-dmy')", unless = "#result == null")
public String getDummyDataFromMemCached(String givenId)
{
    logger.warn("getDummyDataFromMemCached: Inside DUMMY method to actually get data");
    return "Sample-" + String.valueOf(givenId);
}

@CacheEvict(value = "dummycache", key = "#givenId.concat('-dmy')")
public void removeDummyDataFromMemCached(String givenId)
{
    // Do nothing
    return;
}
Note that we have added a suffix to the cache keys. As Memcached does not support cache zones, "dummycache" and "testcache" ultimately do not remain separate on a single server (they may remain separate with some other cache implementation). Hence, to avoid conflicts, we append a unique suffix to the cache key.
If you want to cache objects of your own class, then make sure that they are serializable. Just change your class definition to 'XYZ implements Serializable'.
You can find some material on how to configure SSM using Java configuration instead of XML files here and here.
Basically you have to move definitions of all beans from XML to Java.

Getting all the Cache Names

I am developing a REST application to read all the caches in a cluster that uses JCache with Hazelcast 3.3.3.
This application will create another Hazelcast node when I call the following line in the application:
cacheManager = Caching.getCachingProvider().getCacheManager();
The node gets clustered with the already-created nodes. But when I try to get all the cache names of the cluster with the following call, it returns an empty iterable:
cacheManager.getCacheNames().iterator()
I went through the Javadoc of JCache, which says:
May not provide all of the Caches managed by the CacheManager. For
example: Internally defined or platform specific Caches that may be
accessible by a call to getCache(java.lang.String) or
getCache(java.lang.String,java.lang.Class,java.lang.Class) may not be
present in an iteration.
But the caches that I am trying to access are not internally defined or platform specific. They are created by other nodes.
I want a way to get all the cache names present in the cluster. Is there a way to do this?
NB: No hazelcast.xml is used in the application. Everything is initialized by the default XML configuration.
Update:
I can access the cache if I know the name. And after accessing it for the first time by giving the name directly, that cache now shows up in cacheManager.getCacheNames().iterator().
CacheManager only provides the names of the caches it manages, so you cannot obtain all caches known to the cluster using the JCache API.
In Hazelcast 3.7 (the EA was released just yesterday), all caches are available as DistributedObjects, so by invoking HazelcastInstance.getDistributedObjects() and then checking for objects that are instances of javax.cache.Cache or the Hazelcast-specific subclass com.hazelcast.cache.ICache, you should be able to get references to all caches in the cluster:
// works for 3.7
Collection<DistributedObject> distributedObjects = hazelcastInstance.getDistributedObjects();
for (DistributedObject distributedObject : distributedObjects) {
    if (distributedObject instanceof ICache) {
        System.out.println("Found cache with name " + distributedObject.getName());
    }
}
In Hazelcast 3.6 it is possible to obtain all cache names known to the cluster only by using internal classes, so there is no guarantee this will work with any other version.
// works for 3.6 using internal classes, most probably will not work for other versions
public static void main(String[] args) {
    // start a hazelcast instance
    HazelcastInstance hz = Hazelcast.newHazelcastInstance();
    // create a CacheManager and Cache on this instance
    CachingProvider hazelcastCachingProvider = Caching.getCachingProvider("com.hazelcast.cache.HazelcastCachingProvider",
            HazelcastCachingProvider.class.getClassLoader());
    CacheManager cacheManager = hazelcastCachingProvider.getCacheManager();
    cacheManager.createCache("test1", new CacheConfig<Object, Object>());
    // hacky: obtain a reference to internal cache service
    CacheDistributedObject cacheDistributedObject = hz.getDistributedObject("hz:impl:cacheService", "setupRef");
    ICacheService cacheService = cacheDistributedObject.getService();
    // obtain all CacheConfigs in the cluster
    Collection<CacheConfig> cacheConfigs = cacheService.getCacheConfigs();
    for (CacheConfig cacheConfig : cacheConfigs) {
        System.out.println("Cache name: " + cacheConfig.getName() +
                ", fully qualified name: " + cacheConfig.getNameWithPrefix());
    }
    hz.shutdown();
}
But the caches that I am trying to access is not internally defined or
platform specific
It's good because this method should return all the others and some of the internally defined or platform-specific ones.

Standalone JBoss 7 uses more JVMs?

I would like to store some data in static variables and I want all the web services deployed on the same JBoss 7 to reach that data. I thought a standalone JBoss runs in a single JVM and all the services run in the same JVM, so they can access a static variable.
However, I noticed that I get a NullPointerException when my web service tries to get the data.
This is my storage class:
public enum OneJvmCacheImpl {
    INSTANCE;

    private ConcurrentHashMap<String, Object> values = new ConcurrentHashMap<String, Object>();

    public <T> T get(String key, Class<T> type) {
        return type.cast(values.get(key));
    }
    ...
}
OneJvmCacheImpl.INSTANCE.get(...);
Can you please advise me why I cannot access the values from my webservice?
Thanks,
V.
If by deployments you mean separate WAR files, the static variables will not be visible to the web services in other WAR files, as they are loaded by different classloaders. Each WAR has its own classloader, and hence its own "class instance" of the class. You could perhaps solve it by moving the class in question to a place where it's shared amongst the deployments, but I would suggest that you solve it otherwise anyway, either by using the database or a distributed cache.
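As a rough sketch of the distributed-cache option (using Hazelcast, which the other questions on this page already use; the class and map names are made up, and keys/values must be serializable), each WAR joins the cluster and reads/writes a shared map instead of static fields:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class SharedDataAccess {
    // In real code you would create or look up one instance per deployment, not per object.
    private final HazelcastInstance hz = Hazelcast.newHazelcastInstance();

    public void put(String key, Object value) {
        // Every deployment that joins the cluster sees the same "shared-data" map,
        // regardless of which classloader loaded this class.
        Map<String, Object> shared = hz.getMap("shared-data");
        shared.put(key, value);
    }

    public Object get(String key) {
        return hz.<String, Object>getMap("shared-data").get(key);
    }
}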
JBoss definitely won't allow you to share static variables across different deployments. That would be a huge security issue: what if I deployed a WAR next to yours and started changing your static variables?
You need to persist such values in something else, like a database, Memcached, or a shared file.

Vert.x and Neo4j in multithread cucumber environment

I am trying to run Cucumber tests in a JRuby environment. I configured the Cucumber rake task to start up an embedded Vert.x application server in another thread but in the same JVM.
During the application startup, an embedded instance of Neo4j is initialized.
So finally, there are Cucumber, Vert.x and Neo4j all running in the same JVM (tada!).
At the end of some test scenarios, I would like to check whether certain data has been placed in the database. And since the Neo4j docs say...
The EmbeddedGraphDatabase instance can be shared among multiple threads. Note however that you can’t create multiple instances pointing to the same database.
...I try to get the already initialized Neo4j instance and use it for these checks. To make that happen, I wrote the following factory.
public class ConcurrentGraphDatabaseFactory {

    private static HashMap<String, GraphDatabaseService> databases = new HashMap<String, GraphDatabaseService>();

    public static synchronized GraphDatabaseService getOrCreateDatabase(String path, String autoIndexFields) {
        System.out.println("databases: " + databases.toString());
        if (databases.containsKey(path)) {
            return databases.get(path);
        } else {
            final GraphDatabaseService database = new GraphDatabaseFactory().newEmbeddedDatabaseBuilder(path).
                    setConfig(GraphDatabaseSettings.node_keys_indexable, autoIndexFields).
                    setConfig(GraphDatabaseSettings.node_auto_indexing, GraphDatabaseSetting.TRUE).
                    newGraphDatabase();
            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    database.shutdown();
                }
            });
            databases.put(path, database);
            return database;
        }
    }
}
This factory should ensure that only one instance per path is initialized. But when getOrCreateDatabase is called the second time, the internal databases HashMap is still empty. That causes the code to initialize a second Neo4j instance on the same data, which fails with:
NativeException: java.lang.IllegalStateException: Unable to lock store
It's all running in the same JVM, but it seems that the different threads have separate memory.
What am I doing wrong here?
Are you sure you are only running one single Neo4j instance from all threads? Otherwise, several Neo4j instances will fight over locking the store files. Neo4j is thread-safe, but it does not support several embedded instances on the same store; to scale it you use the High Availability setup, see http://docs.neo4j.org/chunked/snapshot/ha.html
I've spent some time on the problem and finally found a solution.
The verticles in Vert.x create strictly isolated environments. This caused a second copy of my factory (see the code above) to be initialized, and the second factory tried to initialize a second Neo4j instance.
The solution was to separate the Neo4j code into a dedicated storage verticle and to write test code that accesses that verticle via the event bus.
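A rough sketch of that arrangement, assuming a Vert.x 3/4-style Java API (the original setup was JRuby, so treat this as an outline only); the event bus address and method names are made up, and the factory from the question is reused so only one Neo4j instance is ever opened:

import io.vertx.core.AbstractVerticle;
import org.neo4j.graphdb.GraphDatabaseService;

// The storage verticle is the only component that opens Neo4j;
// test code and other verticles talk to it over the event bus.
public class StorageVerticle extends AbstractVerticle {
    private GraphDatabaseService database;

    @Override
    public void start() {
        // Reuse the factory from the question so the same path always yields one instance.
        database = ConcurrentGraphDatabaseFactory.getOrCreateDatabase("/tmp/graph.db", "name");
        vertx.eventBus().consumer("storage.node-exists", message -> {
            boolean found = nodeExists(String.valueOf(message.body()));
            message.reply(found); // callers receive the result without touching Neo4j directly
        });
    }

    private boolean nodeExists(String key) {
        // placeholder: run the actual lookup against 'database' here
        return false;
    }
}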
