I am new to Redis.
I have some objects which are Externalizable.
But Spring Data Redis is not working with these objects.
Does Spring Data Redis strictly require Serializable, or is there some way to work with Externalizable as well?
Spring Data Redis supports different serialization strategies to represent your objects in binary form so they can be stored in Redis.
One of the serialization formats uses Java's serialization mechanism via ObjectOutputStream; there are no Spring Data specifics when using Java serialization. Since Externalizable extends Serializable, objects that implement Externalizable are handled by Java serialization as well.
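For illustration, here is a minimal sketch (not from the original question) of a RedisTemplate wired to the JDK serializer; the class and method names are illustrative, and the connection factory is assumed to come from your own configuration:

```java
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.JdkSerializationRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

public class RedisTemplateConfig {

    // Builds a RedisTemplate whose values are written with plain Java serialization
    // (ObjectOutputStream under the hood), which also covers Externalizable types.
    public static RedisTemplate<String, Object> jdkSerializingTemplate(
            RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new JdkSerializationRedisSerializer());
        template.afterPropertiesSet();
        return template;
    }
}
```

Values are then stored and read back with the usual template.opsForValue().set(...) / get(...) calls.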
Related
I am using JCache with Redisson, and it's not clear to me how serialization/deserialization works when using the cache.
When I set up the cache via configuration I didn't configure anything about this. Is it done transparently?
The objects I am storing in the cache are lists, objects from java.time, and so on. I require that the classes of all objects I store in the cache implement Serializable; is this enough?
Looking at the data in Redis, it seems the data is stored serialized via Java's default serialization. Am I wrong?
Can I control this behaviour, or is it better to leave it as it is?
Thanks for the help.
As mentioned in my comment: according to the Redisson documentation, Redisson uses Kryo as its default data serializer/deserializer.
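If you do want to control this behaviour, Redisson lets you pick the codec on its Config before building the JCache. A rough sketch, assuming a local Redis and a placeholder cache name (JsonJacksonCodec is just one example codec, not a recommendation):

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

import org.redisson.codec.JsonJacksonCodec;
import org.redisson.config.Config;
import org.redisson.jcache.configuration.RedissonConfiguration;

public class JCacheCodecExample {

    public static Cache<String, Object> createCache() {
        // Redisson-level configuration: pick the server and, crucially, the codec
        // used to serialize/deserialize cached keys and values.
        Config redissonConfig = new Config();
        redissonConfig.useSingleServer().setAddress("redis://127.0.0.1:6379");
        redissonConfig.setCodec(new JsonJacksonCodec()); // instead of the default codec

        // Wrap it in a JCache configuration and create the cache as usual.
        MutableConfiguration<String, Object> jcacheConfig = new MutableConfiguration<>();
        CacheManager manager = Caching.getCachingProvider().getCacheManager();
        return manager.createCache("myCache",
                RedissonConfiguration.fromConfig(redissonConfig, jcacheConfig));
    }
}
```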
I am new to Redis and am planning to use it as an in-memory cache. I am using the Lettuce 5.2 client for it.
I have multiple applications which will use Redis as an in-memory cache. My idea is to write a wrapper library on top of Lettuce which can be used by multiple applications to interact with Redis. That library will manage connection pooling, Redis failover cases, command execution and so on, so that application writers don't have to worry about any of this and just need to use my library.
Now, for this library, I am confused about the points below:
1) Should I use Spring Data Redis (it also supports Lettuce)? If my objective is to create a library then, first of all, can I use Spring Data Redis?
2) What advantages will Spring Data Redis give me? I have checked the documentation: https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#reference
3) If I don't use Spring Data Redis, then I will just use Lettuce on its own and create the client, connection pool, etc. myself.
I am confused about whether I should use Spring Data Redis for creating the library or not.
Can you please help me clear up my confusion?
You are able to implement custom Repository methods in Spring Data, which has been outlined in other answers on SO such as here: How to add custom method to Spring Data JPA.
So you can easily combine the out-of-the-box Spring Data Redis functionality with custom Lettuce-based method code in a Spring Data repository. I would suggest starting with Spring Data, and if you need to fine-tune anything beyond that, write custom methods with Lettuce, as sketched below.
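As a rough sketch of that combination (Employee, the key format, and touchLastSeen are made-up examples, not a real API), the custom fragment can be implemented however you like, including against the Lettuce-backed RedisTemplate or the Lettuce API directly:

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.redis.core.RedisHash;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.repository.CrudRepository;

// Hypothetical entity stored via Spring Data Redis repositories.
@RedisHash("employee")
class Employee {
    @Id
    String id;
    String name;
}

// Custom fragment interface: methods Spring Data cannot derive for you.
interface EmployeeRepositoryCustom {
    void touchLastSeen(String employeeId);
}

// Implementation picked up by Spring Data via the "...Impl" naming convention.
class EmployeeRepositoryCustomImpl implements EmployeeRepositoryCustom {

    private final StringRedisTemplate redisTemplate;

    EmployeeRepositoryCustomImpl(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @Override
    public void touchLastSeen(String employeeId) {
        // A low-level command that the out-of-the-box repository does not offer.
        redisTemplate.opsForValue().set("employee:last-seen:" + employeeId,
                String.valueOf(System.currentTimeMillis()));
    }
}

// The main repository exposes both the standard CRUD methods and the custom one.
interface EmployeeRepository
        extends CrudRepository<Employee, String>, EmployeeRepositoryCustom {
}
```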
As long as Lettuce and Spring Data Redis can use the same connection pool, you should be able to share it as a resource, the same way you can treat threads as a resource.
No one can really give you a yes/no answer as to which libraries you should or shouldn't use, but hopefully you now have enough information to make progress going forward.
We are going to use a Redis cache for better performance. We want to create a single Java domain class (for example Employee.java) which we can use for both Redis and Sybase ASE, but the problem is that Redis is a NoSQL database and Sybase ASE is a relational database.
If we store an Employee object as a key-value pair in Redis and then want to store it in the database (Sybase ASE) after extracting it from the Redis cache, this creates a problem.
So, in short, we require a single Java domain class. How can we achieve this?
Just serialize your Employee into a byte-array or string value to put in Redis, for instance with the Kryo library. Then you just have to deserialize it when reading from Redis to rebuild your Java instance and use it with Sybase (the other way around works too).
Any process that serializes Java objects into a byte array or a plain string can be used, so you can look at Jackson (JSON serialization to and from Java), JSON Schema (which generates JSON-serializable Java classes), MessagePack (a compact, JSON-like binary format), FlatBuffers, and so on. Even vanilla traditional Java serialization can be used.
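A minimal sketch with Jackson, using a made-up stand-in for the Employee class (your real domain class just needs a default constructor and getters/setters):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class EmployeeJsonRoundTrip {

    // Minimal stand-in for the shared domain class used by both Redis and Sybase.
    public static class Employee {
        private String id;
        private String name;
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Serialize the shared domain object to a JSON string...
        Employee employee = new Employee();
        employee.setId("42");
        employee.setName("Ada");
        String json = mapper.writeValueAsString(employee);
        // ...store `json` in Redis under a key such as "employee:42"...

        // ...and later deserialize it back before handing it to the Sybase layer.
        Employee restored = mapper.readValue(json, Employee.class);
        System.out.println(restored.getName());
    }
}
```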
Does Hibernate internally use serialization for persisting POJO classes? If yes, how does it use it when persisting data? If no, then how does it persist data to the DB?
Hibernate persists data to the database using SQL. Java serialization is not used at all. (SQL) databases are language-agnostic. As such, they cannot depend on language-specific technology such as Java serialization.
Serialization is only relevant when you need to send a POJO over the wire to other servers running Java. For example, if you have some sort of cache of POJOs that spans multiple machines, you could use serialization to send copies of the POJO over the wire.
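To make that concrete, here is a rough sketch of what persisting an entity looks like (the Customer entity is made up, and the imports assume a recent Hibernate/JPA version). Hibernate turns the persist call into an SQL INSERT built from the mapped columns, not into a serialized byte stream:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

@Entity
class Customer {
    @Id
    Long id;
    String name;
}

class PersistExample {

    // Persisting an entity: Hibernate issues SQL along the lines of
    // "insert into Customer (name, id) values (?, ?)"; no Java serialization is involved.
    static void save(SessionFactory sessionFactory) {
        Customer customer = new Customer();
        customer.id = 1L;
        customer.name = "Ada";

        try (Session session = sessionFactory.openSession()) {
            Transaction tx = session.beginTransaction();
            session.persist(customer);
            tx.commit();
        }
    }
}
```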
See https://stackoverflow.com/a/2726387/14731 for a related discussion.
My application needs to cache non-serializable objects for performance reasons. These non-serializable objects are in-memory models built from an external resource. For example, a validation template is stored as XML in the database, and an in-memory model is constructed by parsing the XML. The in-memory model is relatively expensive to build, so caching improves performance. However, the in-memory model needs to be reloaded from the database when the underlying record is changed.
In a single-application scenario, I stored the objects in a simple map. When a record is changed in the database, the in-memory model is rebuilt and replaces the old entry in the map.
In a distributed scenario, I need the invalidation message to propagate across the cluster so that all nodes rebuild the in-memory model when the record changes. I have looked at Infinispan and Hazelcast and they both require all cached objects to be serializable. However, if the cache operates in an invalidation mode (where data is not sent across the wire), I don't see why the cached objects need to be serializable.
What techniques are commonly used in this scenario? Is this scenario unusual (i.e. should I be doing something different)?
"However, if the cache operates in an invalidation mode (where data is not sent across the wire)"
I'm not exactly sure what this means; why store objects in a distributed cache then?
And how did you get them into the cache in the first place?
Your objects do not have to be serializable in the pure Java sense, i.e., they do not have to implement the Serializable interface. But since your cache is distributed, be it Hazelcast or Memcached or EhCache, you need to get your Java objects across the wire, store them in the cache in some external format, and then be able to get them back from the cache and restore them as Java objects. This is called marshalling/unmarshalling, or serialization/deserialization. There are a variety of formats you can consider: XML, JSON, BSON, YAML, Thrift, etc. There are numerous frameworks and libraries that can help you work with these different serialization schemes: XStream, JAXB, Jackson, Apache Camel, etc.
As far as Hazelcast goes, its documentation explicitly says: "All your distributed objects such as your key and value objects, objects you offer into distributed queue and your distributed callable/runnable objects have to be Serializable." Maybe you could consider a Guava in-memory cache instead?
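If a purely local, per-node cache plus a lightweight "record changed" message that only carries the key fits your design, a Guava sketch could look like this (ValidationTemplateModel and buildModelFromDatabase are placeholders for your own model and loading code):

```java
import java.util.concurrent.ExecutionException;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

class LocalModelCache {

    // Per-node, in-process cache; the values never cross the wire,
    // so they do not need to be serializable.
    private final Cache<String, ValidationTemplateModel> cache =
            CacheBuilder.newBuilder().maximumSize(1_000).build();

    // Rebuild the expensive in-memory model locally on a cache miss.
    ValidationTemplateModel get(String templateId) throws ExecutionException {
        return cache.get(templateId, () -> buildModelFromDatabase(templateId));
    }

    // Called when a cluster-wide "record changed" message arrives; the message
    // only needs to carry the key, not the (non-serializable) model itself.
    void onRecordChanged(String templateId) {
        cache.invalidate(templateId);
    }

    private ValidationTemplateModel buildModelFromDatabase(String templateId) {
        // Placeholder: parse the XML from the database and build the model here.
        return new ValidationTemplateModel();
    }

    // Placeholder for the caller's non-serializable in-memory model.
    static class ValidationTemplateModel { }
}
```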