Two instances of my Java application are deployed on a server. One instance is live at any given point and the other is on standby. The live instance receives data from some receivers and does some processing. If the live instance shuts down due to an error, the standby becomes live.
Can the data (map/list) maintained/collected in the first instance somehow be shared with the second instance?
You can do this by using some kind of distributed caching mechanism such as Redis, Hazelcast, Ignite, etc.
You can maintain distributed collections in the cache itself. For example, Hazelcast provides Java-like abstractions of the standard collections.
Similarly, the Redisson Java client (on top of Redis) also provides distributed implementations of the Java collections and much more.
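For illustration, here is a minimal sketch of keeping the live instance's state in a Hazelcast distributed map instead of a local one, so the standby sees the same data after failover. The map name "receiver-data" and the key/value types are placeholders, not from the question:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class LiveInstance {
        public static void main(String[] args) {
            // Both instances join the same cluster; the data lives in the
            // cluster, not in either JVM alone, so it survives the loss of
            // one member.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // IMap implements java.util.Map, so it can replace a local map.
            IMap<String, String> data = hz.getMap("receiver-data");
            data.put("sensor-1", "42");

            // After failover, the standby reads the same entry:
            // hz.getMap("receiver-data").get("sensor-1") returns "42".
        }
    }

(Note: in Hazelcast 4.x+ IMap lives in com.hazelcast.map; in 3.x it was in com.hazelcast.core.)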
During some testing with multiple memcached instances, I realized that the spymemcached Java client was evenly distributing the key data across the configured instances. I know that memcached is distributed, but is there a way to configure a client to write key data to all configured instances? I know that memory-cache approaches like this are not designed to replace persistent storage (DB), but I have zero need for persistent storage and need a lightweight way to synchronize basic key/value data between two or more instances of my service.
The test Java code I prototyped worked great, and I feel the spymemcached API would integrate well, but I need to replicate the data between memcached instances. I had assumed that if I specified multiple memcached instances, the data would be replicated to all of them, not distributed across them. Thanks.
There are some memcached clients that allow data replication among multiple memcached servers. From what I can tell, spymemcached is not one of them.
I do not understand, however, why you want this. Lightweight synchronization works just as well without replication. Memcached clients generally (this includes spymemcached) use consistent hashing to map a key to a server, so every instance of your service will look for a given key on the same server.
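To make the consistent-hashing point concrete, here is a minimal sketch using spymemcached with consistent hashing; the server addresses cache1/cache2 are placeholders:

    import java.io.IOException;

    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.ConnectionFactoryBuilder;
    import net.spy.memcached.ConnectionFactoryBuilder.Locator;
    import net.spy.memcached.MemcachedClient;

    public class SharedLookup {
        public static void main(String[] args) throws IOException {
            // Consistent hashing: the key decides which server holds the
            // value, and every client computes the same answer.
            MemcachedClient client = new MemcachedClient(
                    new ConnectionFactoryBuilder()
                            .setLocatorType(Locator.CONSISTENT)
                            .build(),
                    AddrUtil.getAddresses("cache1:11211 cache2:11211"));

            client.set("service-state", 3600, "some-value");
            // Any other service instance configured with the same server
            // list will fetch "service-state" from the same server, so no
            // replication is needed for the instances to agree.
            Object value = client.get("service-state");
            client.shutdown();
        }
    }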
In my application I would like to have two cache maps:
one distributed cache (which should be available to all instances on the TCP/IP network, for global access),
and another, application-specific cache that should be available only to this instance.
How do I configure this?
Can anyone guide me?
Thanks in advance.
For the instance-local cache you could just use a regular ConcurrentHashMap, since a Hazelcast IMap on a single-member cluster is essentially just a map local to that JVM process anyway. But if you still need Hazelcast-specific features on that local cache, you can set it up as a single-member cluster by disabling all join mechanisms.
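Here is a minimal sketch of that setup, assuming Hazelcast 4.x; the map names are placeholders:

    import com.hazelcast.config.Config;
    import com.hazelcast.config.JoinConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class TwoCaches {
        public static void main(String[] args) {
            // Instance 1: joins the network cluster -> distributed cache.
            HazelcastInstance clustered = Hazelcast.newHazelcastInstance();
            IMap<String, String> globalCache = clustered.getMap("global-cache");

            // Instance 2: all join mechanisms disabled -> a single-member
            // "cluster" whose maps are visible only to this JVM.
            Config localConfig = new Config();
            localConfig.setInstanceName("local-only");
            JoinConfig join = localConfig.getNetworkConfig().getJoin();
            join.getMulticastConfig().setEnabled(false);
            join.getTcpIpConfig().setEnabled(false);
            // On Hazelcast 4.1+ also disable auto-detection:
            // join.getAutoDetectionConfig().setEnabled(false);
            HazelcastInstance local = Hazelcast.newHazelcastInstance(localConfig);
            IMap<String, String> localCache = local.getMap("local-cache");
        }
    }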
We are building an Apache Flink based data stream processing application in Java 8. We need to maintain a stateful list of objects whose characteristics are updated every ten seconds via a source stream.
Per the specs we must, if possible, use no distributed storage. So my question is about Flink's memory management: in a cluster configuration, does it replicate the memory used by a task manager? Or is there any way to use a distributed in-memory solution with Flink?
Have a look at Flink state. This way you can store the list in Flink's own state, which is integrated with internal mechanisms like checkpointing/savepointing, etc.
If you need to query it externally from other services, queryable state can be a good addition.
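For example, a KeyedProcessFunction can keep the per-object characteristics in Flink's keyed state, which is checkpointed and restored by Flink itself, so no external distributed storage is required. This is only a sketch; ObjectUpdate and ObjectStats are hypothetical stand-ins for the question's types:

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    public class StatefulTracker
            extends KeyedProcessFunction<String, ObjectUpdate, ObjectStats> {

        private transient ValueState<ObjectStats> stats;

        @Override
        public void open(Configuration parameters) {
            // Keyed state is checkpointed by Flink and restored on failure.
            stats = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("object-stats", ObjectStats.class));
        }

        @Override
        public void processElement(ObjectUpdate update, Context ctx,
                                   Collector<ObjectStats> out) throws Exception {
            ObjectStats current = stats.value();
            if (current == null) {
                current = new ObjectStats();
            }
            current.apply(update);
            stats.update(current);
            out.collect(current);
        }
    }

    // Minimal placeholder types for the sketch.
    class ObjectUpdate { double value; }

    class ObjectStats implements java.io.Serializable {
        double last;
        void apply(ObjectUpdate u) { last = u.value; }
    }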
In Java, I have a HashMap containing objects (which can be serializable, if it helps). Elsewhere on a network, I have another HashMap in another copy of the application that I would like to stay in sync with the first.
For example, if on computer A someone runs myMap.put("Hello", "World"); and on computer B someone runs myMap.put("foo", "bar");, then after some delay while the changes propagate, both computers would have myMap.get("Hello") == "World" and myMap.get("foo") == "bar".
Is this requirement met by an existing facility in the Java language, a library, or some other program? If this is already a "solved problem" it would be great not to have to write my own code for this.
If there are multiple ways of achieving this I would prefer, in priority order:
Changes are guaranteed to propagate 100% of the time (doesn't matter how long it takes)
Changes propagate rapidly
Changes propagate with minimal bandwidth use between computers.
(Note: I have had trouble searching for solutions as results are dominated by questions about synchronizing access to a Map from multiple threads in the same application. This is not what my question is about.)
You could look at the Hazelcast in-memory data grid.
It's an open source solution designed for distributed architectures.
It maps really well to your problem, since the Hazelcast IMap extends java.util.Map.
Link: Hazelcast IMap
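A minimal sketch of the scenario from the question, assuming default discovery so the two JVMs find each other on the network:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class SyncedMap {
        public static void main(String[] args) {
            // Run this same program on computer A and on computer B; the
            // two members discover each other and share the map "myMap".
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IMap<String, String> myMap = hz.getMap("myMap");

            myMap.put("Hello", "World");

            // On the other computer, myMap.get("Hello") returns "World"
            // once the cluster has formed; updates propagate automatically.
        }
    }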
What you are trying to do is called clustering between two nodes. Here are some options.

You can achieve your requirement using serialization: make your map serializable, read and write the state of the map at each interval of time, and sync it between the nodes. This is the core, basic way to achieve the functionality, but with serialization you have to manage the syncing of the map yourself (i.e., you have to write code for that); a rough sketch of this approach follows below.

Hazelcast is an open source distributed caching mechanism. It is a strong API with a rich library for building a cluster environment and sharing data between different nodes.

Coherence, from Oracle, also provides a mechanism to achieve clustering.

Ehcache is a cache library introduced in 2003 to improve performance by reducing the load on underlying resources. Ehcache can be used both for general-purpose caching and for caching with Hibernate (second-level cache), data access objects, security credentials, and web pages. It can also be used for SOAP and RESTful server caching, application persistence, and distributed caching.

Of all of the above, Hazelcast is the best API; go through it, it will surely help you.
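As a rough illustration of the first (manual serialization) option, here is a minimal sketch that ships a snapshot of the map to a peer over a plain socket. The host name and port are placeholders, and a real version would need scheduling, retries, and conflict handling; this is exactly the bookkeeping the libraries above do for you:

    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.HashMap;
    import java.util.Map;

    public class MapSync {

        // Sender side: serialize a snapshot of the map to the peer node.
        static void push(Map<String, String> map) throws Exception {
            try (Socket socket = new Socket("peer-node", 9000);
                 ObjectOutputStream out =
                         new ObjectOutputStream(socket.getOutputStream())) {
                out.writeObject(new HashMap<>(map)); // copy = consistent snapshot
            }
        }

        // Receiver side: accept one snapshot and return it so the caller
        // can replace its local map.
        @SuppressWarnings("unchecked")
        static Map<String, String> pull(ServerSocket server) throws Exception {
            try (Socket socket = server.accept();
                 ObjectInputStream in =
                         new ObjectInputStream(socket.getInputStream())) {
                return (Map<String, String>) in.readObject();
            }
        }
    }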
I have an administrative page in a web application that resets the cache, but it only resets the cache on the current JVM.
The web application is deployed as a cluster to two WAS servers.
Any way that I can elegantly have the "clear cache" button on each server trigger the method on both JVMs?
Edit:
The original developer just wrote a singleton holding a HashMap to be the cache in question. Lightweight and (previously) worked just fine for the requirements. It caches content pulled from six or seven web services for specified amounts of time.
Edit:
The entire application in question is three pages, so the elegant solution might well be the lightest solution.
Since the cache is internal to your application, you are going to need to provide an interface to clear it within your application. Quotidian says to use a JMS queue. That will not work, because only one instance will pick up the message from a queue, assuming you have clustered MQ queues (a pub/sub topic, by contrast, would reach every instance).
If you want to reuse the same implementation, then the only way to do this is to write a JMX MBean that you will be able to interact with at the instance level.
If not, you can either use the built-in WAS cache (which is JMX-enabled) or use a distributed cache like Ehcache.
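A minimal sketch of the JMX route, where ContentCache stands in for the singleton HashMap described in the question; you would then invoke clearCache() against each instance's MBean server (for example through jconsole or the WAS admin tooling):

    import java.lang.management.ManagementFactory;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class CacheAdmin implements CacheAdminMBean {

        @Override
        public void clearCache() {
            ContentCache.getInstance().clear();
        }

        // Call once at startup on every cluster member (e.g., from a
        // ServletContextListener), so each JVM exposes its own MBean.
        public static void register() throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new CacheAdmin(),
                    new ObjectName("com.example.cache:type=CacheAdmin"));
        }
    }

    // Standard MBean naming convention: interface = class name + "MBean".
    interface CacheAdminMBean {
        void clearCache();
    }

    // Stand-in for the existing singleton holding the HashMap.
    class ContentCache {
        private static final ContentCache INSTANCE = new ContentCache();
        private final Map<String, Object> map = new ConcurrentHashMap<>();
        static ContentCache getInstance() { return INSTANCE; }
        void clear() { map.clear(); }
    }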
In the past I created a subclassed LinkedHashMap that was kept in sync across all instances on the network using JBoss JGroups. Of course, reinventing the wheel is always painful.
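These days JGroups ships a ready-made ReplicatedHashMap, so the subclassing may no longer be necessary. A minimal sketch, assuming a recent JGroups version; the cluster name and state timeout are placeholders:

    import org.jgroups.JChannel;
    import org.jgroups.blocks.ReplicatedHashMap;

    public class ReplicatedCache {
        public static void main(String[] args) throws Exception {
            // Each node runs this; puts on one node are replicated to the
            // map on every other node in the cluster.
            JChannel channel = new JChannel();   // default UDP stack
            channel.connect("cache-cluster");

            ReplicatedHashMap<String, String> map =
                    new ReplicatedHashMap<>(channel);
            map.start(5000);                      // fetch initial state (ms)

            map.put("Hello", "World");
        }
    }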
I tend to use a JMS queue for doing exactly that.
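For what it's worth, here is a minimal sketch of this approach, assuming an ActiveMQ broker at a placeholder URL. Note that, as the answer above points out, it needs to be a pub/sub topic rather than a point-to-point queue so that every cluster member receives the invalidation message:

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.Topic;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class CacheInvalidator {
        public static void main(String[] args) throws Exception {
            Connection conn = new ActiveMQConnectionFactory("tcp://broker:61616")
                    .createConnection();
            conn.start();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("cache.clear");

            // Every JVM subscribes and clears its local cache on a message.
            MessageConsumer consumer = session.createConsumer(topic);
            consumer.setMessageListener(message -> clearLocalCache());

            // The "clear cache" button publishes once; all members react.
            MessageProducer producer = session.createProducer(topic);
            producer.send(session.createTextMessage("clear"));
        }

        static void clearLocalCache() {
            // Stand-in for clearing the singleton HashMap cache.
        }
    }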