Storing Large Amounts of Data in Session - Should I Do This? - java

High level question here - I am developing a Spring MVC/Spring Data project to search tables in a database via Spring repositories. The tables I am searching will contain thousands, if not tens of thousands, of records. I would like to hold onto these records in a List for as long as the application is in use, until a condition is met that resets or removes the List.
My question is, is it feasible to store this List in my HttpSession?
Should I expect any performance issues with this method?
Is there a security concern if storing my List in the session?
Are there any best practices anyone can offer for a task such as this?
A quick view of the user session info alongside a small List of 4 objects (which could potentially be thousands):
Session Attribute Name   Session Attribute Value
searchResults            [com.mysite.dto.User#322bfb87,
                          com.mysite.dto.User#658e75cc,
                          com.mysite.dto.User#6bd7d82a,
                          com.mysite.dto.User#27b0e4b6]
userRoles                [ROLE_ADMIN]
username                 admin
Thanks in advance

I would store them in a cache, not in the session. You can use an external cache like BigMemory Terracotta (http://terracotta.org/products/bigmemory) or Memcached (Amazon provides an easy-to-use one if you are running your application on EC2). This will be much more scalable than your server's JVM heap.
Another idea would be to store in the cache a mapping from user id to user object, and keep only the ids in the session. That way your session holds only numbers instead of whole objects.
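A minimal sketch of that second idea, assuming a User class with a Long getId() accessor; the ConcurrentHashMap here just stands in for whatever external cache (Terracotta, Memcached, ...) you choose:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
import javax.servlet.http.HttpSession;

public class SearchResultHolder {

    // Shared user-id -> User lookup; stands in for an external cache
    private static final Map<Long, User> USER_CACHE = new ConcurrentHashMap<>();

    public void storeResults(HttpSession session, List<User> results) {
        results.forEach(u -> USER_CACHE.put(u.getId(), u));
        // The session now holds only numbers instead of whole objects
        session.setAttribute("searchResultIds",
                results.stream().map(User::getId).collect(Collectors.toList()));
    }

    @SuppressWarnings("unchecked")
    public List<User> loadResults(HttpSession session) {
        List<Long> ids = (List<Long>) session.getAttribute("searchResultIds");
        return ids.stream().map(USER_CACHE::get).collect(Collectors.toList());
    }
}
```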

Related

How to keep a Java list in memory synced with a table in a database?

I want to search for an input value in a list. That list resides in a database. I see two options for doing that:
Hit the db for each search and return the result.
Keep a copy in memory, synced with the table, search the copy, and return the result.
I like the second option as it will be faster. However, I am confused about how to keep the list in sync with the table.
Example: I have a list L = [12, 11, 14, 42, 56]
and I receive an input: 14
I need to return whether or not the input exists in the list. The list can be updated by other applications, so I need to keep it in sync with the table.
What would be the most optimized approach here, and how do I keep the list in sync with the database?
Is there any way my application can be informed of changes in the table so that I can reload the list on demand?
Instead of recreating your own implementation of something that already exists, I would leverage Hibernate's second-level cache (2LC) with an implementation such as EhCache.
By using a 2LC, you can specify a time-to-live expiration for your entities; once they expire, any query will reload them from the database. If a cached entity has not yet expired, Hibernate hydrates it from the 2LC application cache rather than the database.
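For illustration, enabling the 2LC for an entity might look like this, assuming Hibernate with an EhCache region factory already configured; the region name and TTL are made up for the example:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "userRegion")
public class User {
    @Id
    private Long id;
    private String name;
    // getters/setters omitted
}

// and in ehcache.xml, the matching region with its time-to-live:
// <cache name="userRegion"
//        maxEntriesLocalHeap="10000"
//        timeToLiveSeconds="600"/>
```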
If you are using Spring, you might also want to take a look at @Cacheable. This operates at the component / bean tier, allowing Spring to cache a result set into a named region. See their documentation for more details.
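A sketch of what that could look like at the service tier; the cache name and the repository method are illustrative, not from the question:

```java
import java.util.List;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserSearchService {

    private final UserRepository userRepository;

    public UserSearchService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // Runs the query only on a cache miss; afterwards the result set
    // is served from the "searchResults" region, keyed by 'criteria'.
    @Cacheable("searchResults")
    public List<User> search(String criteria) {
        return userRepository.findByLastNameContaining(criteria);
    }
}
```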
To satisfy your requirement, you should control reads and writes in one place; otherwise, there will always be cases where the data is out of sync.

Using Hazelcast / Redis for a DB-backed cache requirement

I am developing a distributed Java application that needs to check a list of blacklisted user ids on each request.
If a request fails some eligibility rules, the system should add the userid (a parameter of the request) to the blacklist.
I am trying to find a proper caching solution for the blacklist implementation. My requirements are:
querying the blacklist should be very fast
the blacklist persistence technology should be scalable
all blacklist data should also be persisted in an RDBMS for failover / reloading purposes.
Here are the possible solutions I see:
Option 1: I can use Redis for storing the blacklist data. Whenever a request fails the eligibility rules, I can easily add the userid to the Redis cache.
- advantages: extremely fast queries, easy to implement
- disadvantages: relying on Redis persistence; although it works, Redis is by design a cache solution, not a persistence layer.
Option 2: I can use Redis for storing the blacklist data while also maintaining tables in the RDBMS for the blacklist. Whenever a request fails the eligibility rules, I add the userid to the Redis cache and the RDBMS table together.
- advantages: extremely fast queries, the ability (possibility) to reload the Redis cache from the db
- disadvantages: there is a consistency issue between Redis and the db table.
Option 3: I can use Hazelcast as a Hibernate L2 cache, so that when I add any user id to the blacklist it is added both to the cache and to the db.
I have questions about option 3:
Is the Hazelcast L2 cache suitable for preserving such a list of blacklisted users?
Does Hibernate manage the consistency issue between the cache and the db?
When the application is restarted, how is the L2 cache reloaded?
And a last question:
- Do you have any other suggestions for such a use-case?
Edit:
There will be 100m records in the blacklist, and I have a couple of similar blacklists.
My read performance is important. I need to query the existence of a key within the blacklist in ~100ms.
Ygok,
Still waiting for clarification on the query requirements, but I can assume it is a lookup by key (since you mention Redis, and Redis doesn't have a query language; Hazelcast does have a Distributed Query / Predicate API).
Lookup by key is an extremely fast operation with Hazelcast.
In option 2 you need to maintain data consistency between your RDBMS and the Redis cache yourself. Using Hazelcast MapLoader / MapStore you can implement the write-through / read-through cache concepts instead. All you need to do is put the entry into the cache, and Hazelcast persists it to the RDBMS either immediately or with a configured delay (with batching).
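To make that concrete, here is a sketch of a MapStore backing the blacklist map with an RDBMS. It assumes Hazelcast 4.x/5.x (where the interface lives in com.hazelcast.map) and a plain JDBC DataSource; the table and column names are illustrative:

```java
import com.hazelcast.map.MapStore;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;

public class BlacklistMapStore implements MapStore<Long, Boolean> {

    private final DataSource ds;

    public BlacklistMapStore(DataSource ds) {
        this.ds = ds;
    }

    @Override
    public void store(Long userId, Boolean value) {
        // MySQL-style idempotent insert; adjust for your RDBMS
        execute("INSERT IGNORE INTO blacklist (user_id) VALUES (?)", userId);
    }

    @Override
    public void storeAll(Map<Long, Boolean> entries) {
        entries.keySet().forEach(id -> store(id, Boolean.TRUE));
    }

    @Override
    public void delete(Long userId) {
        execute("DELETE FROM blacklist WHERE user_id = ?", userId);
    }

    @Override
    public void deleteAll(Collection<Long> keys) {
        keys.forEach(this::delete);
    }

    @Override
    public Boolean load(Long userId) {
        try (Connection c = ds.getConnection();
             PreparedStatement ps =
                     c.prepareStatement("SELECT 1 FROM blacklist WHERE user_id = ?")) {
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? Boolean.TRUE : null; // null = not blacklisted
            }
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public Map<Long, Boolean> loadAll(Collection<Long> keys) {
        Map<Long, Boolean> result = new HashMap<>();
        for (Long key : keys) {
            Boolean hit = load(key);
            if (hit != null) {
                result.put(key, hit);
            }
        }
        return result;
    }

    @Override
    public Iterable<Long> loadAllKeys() {
        return null; // returning null skips eager pre-loading of ~100m keys
    }

    private void execute(String sql, Long userId) {
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setLong(1, userId);
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Wiring it in is then a matter of setting a MapStoreConfig on the map configuration; a write-delay of 0 seconds gives write-through, while a positive value gives batched write-behind.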
In terms of performance, please feel free to familiarize yourself with a recent Hazelcast / Redis benchmark.
Let me know if you have any questions.
I had a similar question before. First of all: how much data do you want to store, and how much memory can you spend? How many queries per second do you need? What is the data structure like, only the userId as a key?
Hazelcast queries were not very fast in my testing (you can verify this for yourself), but it can store a large amount of data in memory. Hazelcast uses Java's default serialization, which costs a lot of memory and IO.
Hazelcast provides a Hibernate L2 cache whose data is stored in Hazelcast itself (query cache only), so restarting your application does not affect the cache.
Redis provides persistence for in-memory data (RDB dumps and AOF); a bit of data may be lost when the server crashes, but it is very fast.
If you cannot afford to lose any data, store it on multiple MySQL servers (split the data by userId across servers, but consider the problems of adding a new server later). At the same time, you can add a local cache (e.g. Ehcache or Google's CacheBuilder) with an expiration time, which will improve performance, as sketched below.
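A sketch of that local-cache idea with Guava's CacheBuilder; BlacklistDao is an illustrative stand-in for the remote store (Redis, MySQL, ...):

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class LocalBlacklistCache {

    private final LoadingCache<Long, Boolean> cache;

    public LocalBlacklistCache(BlacklistDao blacklistDao) {
        this.cache = CacheBuilder.newBuilder()
                .maximumSize(1_000_000)                 // bound heap usage
                .expireAfterWrite(5, TimeUnit.MINUTES)  // tolerate slightly stale entries
                .build(new CacheLoader<Long, Boolean>() {
                    @Override
                    public Boolean load(Long userId) {
                        // remote lookup happens only on a local miss
                        return blacklistDao.isBlacklisted(userId);
                    }
                });
    }

    public boolean isBlacklisted(Long userId) {
        return cache.getUnchecked(userId);
    }
}
```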
It's possible to maintain consistency between the Redis cache and an RDBMS using the Redisson framework. It provides write-through and read-through strategies for its Map object via the MapWriter and MapLoader interfaces, which are what you would need in your case.
Please read this documentation section
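For illustration, a minimal write-through setup could look like this, assuming the Redisson 3.x MapOptions API; the JDBC-backed MapWriter and MapLoader implementations are elided:

```java
import org.redisson.api.MapOptions;
import org.redisson.api.RMap;
import org.redisson.api.RedissonClient;
import org.redisson.api.map.MapLoader;
import org.redisson.api.map.MapWriter;

public class RedissonBlacklist {

    public static RMap<Long, Boolean> blacklist(RedissonClient redisson,
                                                MapWriter<Long, Boolean> writer,
                                                MapLoader<Long, Boolean> loader) {
        MapOptions<Long, Boolean> options = MapOptions.<Long, Boolean>defaults()
                .writer(writer)   // every put is also persisted to the RDBMS
                .loader(loader)   // cache misses fall back to the RDBMS
                .writeMode(MapOptions.WriteMode.WRITE_THROUGH);
        return redisson.getMap("blacklist", options);
    }
}
```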

Where to keep user's id?

I am designing a web application which keeps the user's profile, which holds a lot of information;
obviously it reads the data from the database.
Currently, I have the username in my session, and every time I need the user's info I read the session and then create an instance of the profile class (which reads the data from the database again) to retrieve the user's info. Is this the best practice for such a case?
This is a typical trade-off between performance and memory consumption. If you want a fast application, keep the whole user profile in the HTTP session, but make sure you have enough memory in your server.
If you want to keep resource consumption low, store only user ID in session and load it from a database every time you need it. This also makes clustering simpler as there is less data to migrate.
A reasonable compromise is to use the latter approach with some caching. This way hot users (those currently using the system) are kept in memory, while idle users or those infrequently requesting new pages are swept out of the cache (assuming the cache is smaller than the number of HTTP sessions).
Agreed with Obe6's response.
Best practice is to check whether the profile is in the session; if it is not, retrieve it from a datasource and then attach it to the session.
When the session is invalidated, all information is removed from it.
There is a good article on this from IBM.
http://www.ibm.com/developerworks/websphere/library/bestpractices/store_objects_in_httpsession.html
Generally a 'best practice' is to maintain the user profile data in the session and load all the needed information from the database only the first time.
In other words, maintain an instance of the Profile class in your HTTP session (it must implement Serializable). Profile should hold all the information used most frequently.
Note that 'reading the session' is like reading a HashMap (so it has a minimal cost in terms of performance).
When the HttpSession expires, the Profile will be garbage collected.
UPDATE (based on your comments):
To count (or find) active users (the inactive ones being all the others), a typical solution is to make Profile implement the HttpSessionBindingListener interface. When a Profile is bound to a session it is notified, so you can increment a static counter (a static attribute of the Profile class, for example), and decrement it when the Profile is unbound (unbound programmatically, or because its session has expired).
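A minimal sketch of that listener approach (the counter field and accessor names are illustrative):

```java
import java.io.Serializable;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.http.HttpSessionBindingEvent;
import javax.servlet.http.HttpSessionBindingListener;

public class Profile implements Serializable, HttpSessionBindingListener {

    private static final AtomicInteger ACTIVE_USERS = new AtomicInteger();

    @Override
    public void valueBound(HttpSessionBindingEvent event) {
        ACTIVE_USERS.incrementAndGet(); // profile attached to a session
    }

    @Override
    public void valueUnbound(HttpSessionBindingEvent event) {
        ACTIVE_USERS.decrementAndGet(); // removed, or the session expired
    }

    public static int activeUsers() {
        return ACTIVE_USERS.get();
    }
}
```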
The session is generally a good enough place to keep user profile data. You need to quantify how much data you are talking about here. Let's say it's 5KB per session; then you could store up to 20,000 user profiles in memory using 100 MB of RAM. You can allocate heap to the JVM accordingly, based on the maximum number of active sessions you expect on your site.
This is just one aspect. When you plan to scale the app by adding more app servers, you can even think of moving the sessions out to an out-of-process cache / memory store such as Memcached.
If not all the user profile data you keep in the session gets rendered on each page, it may be a good idea to keep only the bare minimum in the session and fetch the other data as needed.

How much session data is too much?

We are running into unusually high memory usage issues, and I observed that in many places in our code we pull hundreds of records from the DB, pack them into custom data objects, add them to an ArrayList, and store that in the session. I wish to know the recommended upper limit for storing data in the session. Just a good-practice / bad-practice kind of thing.
I am using JRockit 1.5 and 1.6 GB of RAM. I profiled with JProbe and found that some parts of the app have a very heavy memory footprint. Most of this data is being kept in the session to be used later.
That depends entirely on how many sessions are typically present (which in turn depends on how many users you have, how long they stay on the site, and the session timeout) and how much RAM your server has.
But first of all: have you actually used a memory profiler to tell you that your "high memory usage" is caused by session data, or are you just guessing?
If the only problem you have is "high memory usage" on a production machine (i.e. it can handle the production load but is not performing as well as you'd like), the easiest solution is to get more RAM for the server - much quicker and cheaper than redesigning the app.
But caching entire result sets in the session is bad for a different reason as well: what if the data changes in the DB and the user expects to see that change? If you're going to cache, use one of the existing systems that do this at the DB request level - they'll allow you to cache results between users and they have facilities for cache invalidation.
If you're storing data in the session to improve performance, consider using true caching instead, since a cache is application-wide whereas the session is per-user, which results in unnecessary duplication of otherwise similar objects.
If, however, you're storing them for the user to edit those objects (which I doubt, since hundreds of objects is way too many), try minimizing the amount of data stored, or research optimistic concurrency control.
I'd say this heavily depends on the number of active sessions you expect. If you're writing an intranet application with < 20 users, it's certainly no problem to put a few MB in the session. However, if you're expecting 5,000 live sessions, for instance, each MB of data stored per session accounts for 5 GB of RAM.
However, I'd generally recommend not to store any data from DB in session. Just fetch from DB for every request. If performance is an issue, use an application-wide cache (e.g. Hibernate's 2nd level cache).
What kind of data is it? Is it really needed per session or could it be cached at application level? Do you really need all the columns or only a subset? How often is it being accessed? What pages does it need to be available on? And so on.
It may make much more sense to retrieve the records from the DB when you really need to. Storing hundreds of records in session is never a good strategy.
I'd say try to store the minimum amount of data that will be enough to recreate the necessary environment in a subsequent request. If you're storing in memory to avoid a database round-trip, then a true caching solution such as Memcache might be helpful.
If you're storing these sessions in memory instead of a database, then the round-trip is saved, and requests will be served faster as long as the memory load is low and there's no paging. Once the number of clients goes up and paging begins, most clients will see a huge degradation in response times. These two variables are inversely related.
It's better to measure the latency to your database server; it is usually low enough in most cases for the database to be considered a viable means of storage instead of in-memory.
Try to split the data you are currently storing in the session into user-specific and static data. Then implement caching for all the static parts. This will give you a lot of reuse application-wide and still allow you to cache the specific data a user is working on.
You could also create a per-user mini SQLite database, connect to it, and store the data the user is accessing in it; retrieve records from it while the user requests them, and delete the SQLite database once the user disconnects.

web application session cache

I want to cache data for a user session in a web application built on Struts. What is the best way to do it? Currently we store certain information from the DB in Java objects in the user's session. It works fine so far, but people are now concerned about memory usage etc.
Any thoughts on how best to get around this problem?
Works fine till now but people are now concerned about memory usage etc.
Being "concerned" is relatively meaningless - do they have any concrete reason for it? Statistics that show how much memory session objects are taking up? Along the same line: do you have concrete reasons for wanting to cache the data in the user session? Have you profiled the app and determined that fetching this data from the DB for each request is slowing down your app significantly?
Don't guess. Measure.
It's usually bad practice to store whole objects in the user session for this reason. You should probably store just the keys in the session and re-query the database when you need the objects again. This is a trade-off, but usually an acceptable one for most use-cases: querying the database by key on each request is usually cheap enough compared to holding the objects in the session.
If you must have them in the session, consider using something like an LRUMap (from Apache Commons Collections), as sketched below.
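A minimal sketch, assuming commons-collections4; LRUMap is not thread-safe by itself, hence the synchronized wrapper:

```java
import java.util.Collections;
import java.util.Map;
import org.apache.commons.collections4.map.LRUMap;

public class SessionRecordCache {

    // Holds at most 100 records; the least-recently-used entry is
    // evicted when the 101st is added.
    private final Map<Long, Object> records =
            Collections.synchronizedMap(new LRUMap<>(100));

    public void put(Long id, Object record) { records.put(id, record); }
    public Object get(Long id) { return records.get(id); }
}
```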
