I have millions of rows to be read from a database, and multiple users come in during the day to read the same data, so I want to create a cache so that I don't have to go to the database again for the same data.
I have seen many options but couldn't figure out which approach to use.
Should I create my own cache, saving a query's result and writing it to a file, or use a third-party in-memory cache such as Guava CacheBuilder, LRUMap, whirlycache, or cache4j?
You are not the first person to have requirements like this, which is why there are dozens of cache implementations available as open source projects, and even a standard set of Java APIs for caching (JCache). If your needs go beyond those solutions, there are even commercial solutions that handle tens of terabytes of data transparently across RAM, flash, database, etc. If none of those are sufficient, then you should definitely write your own.
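To show how little code the standard route takes, here is a minimal JCache (JSR-107) sketch for caching query results. It assumes a JSR-107 provider (e.g. Ehcache 3 or Hazelcast) is on the classpath, and loadFromDatabase is a hypothetical placeholder for your real JDBC code.

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import java.util.List;

    public class QueryResultCache {

        private final Cache<String, List<Object[]>> cache;

        public QueryResultCache() {
            CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
            MutableConfiguration<String, List<Object[]>> config =
                    new MutableConfiguration<String, List<Object[]>>()
                            .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.ONE_HOUR))
                            .setStoreByValue(false);   // keep references, avoid per-entry copy cost
            this.cache = cacheManager.createCache("queryResults", config);
        }

        // Return cached rows, or load them from the database on a miss.
        public List<Object[]> lookup(String sql) {
            List<Object[]> rows = cache.get(sql);
            if (rows == null) {
                rows = loadFromDatabase(sql);          // hypothetical DAO/JDBC call
                cache.put(sql, rows);
            }
            return rows;
        }

        private List<Object[]> loadFromDatabase(String sql) {
            throw new UnsupportedOperationException("replace with real JDBC access");
        }
    }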
It totally depends on multiple factors; the answer will be based on your environment, the size of the data, etc. Here are the main points:
You want to keep the cache in RAM as much as possible, because RAM is faster to access than the file system.
You can also use OS memory-mapped files, which balance access speed against memory utilization. I suggest using any proven solution rather than creating your own.
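To make the memory-mapped-file point concrete, here is a minimal java.nio sketch (the class name is illustrative); the OS page cache decides which pages actually stay in RAM.

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class MappedFileRead {
        // Read a file through an OS memory mapping; hot pages are served from the page cache.
        public static byte[] read(String file) throws IOException {
            try (FileChannel channel = FileChannel.open(Paths.get(file), StandardOpenOption.READ)) {
                MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
                byte[] bytes = new byte[buffer.remaining()];
                buffer.get(bytes);
                return bytes;
            }
        }
    }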
If you are running low on memory, then you may need to ask what is more important, for example caching the most frequently accessed data, since it is most likely to be requested again by clients.
So there is no sure or definite answer; you have to decide based on your constraints. Hope this helps.
I think you are overengineering the problem. It isn't trivial to write a performant, transparent cache, unless you only need a simple HashMap to hold some values. You should focus on writing code that solves your domain problem rather than writing too much framework code.
Stop reinventing the wheel: use either an in-memory cache (e.g. Infinispan or Redis) or a database (e.g. PostgreSQL). You will have less pain and better performance.
For reasons that are beside the point, a company has bought an Exadata Eighth Rack. Some of the managers thought this would improve the performance of current applications. The problem is that hardly any application does intensive database work (yes, this is a good moment for looking at facepalm animated GIFs). So, at the moment, migrations have shown little benefit.
The question is obvious. Most of the applications are written in Java, and some of them make intensive use of Solr and Cassandra. From what I know, Exadata is intended for storing data, while Exalogic can host applications too. Anyway, I'm wondering if there is some way of taking advantage of the mentioned infrastructure.
Replace Solr with Oracle Text.
Before I get down-voted: normally I would not recommend replacing existing code built with a popular, open-source program with a seldom-used, proprietary product. But if you want to use a lot of space and CPU on your database servers then Oracle Text can definitely help.
As more generic advice, the primary role of a database is not to store data. A file system can do that. Databases are built to join data. If an application is reading a large amount of data and doing ad hoc joins, those are the jobs you want to move to the database.
Exadata -> Oracle Database extreme performance.
Exalogic -> Fusion Middleware extreme performance. (Java goes here)
Your best move will be refactoring the application to put as much workload as possible on the DB (PL/SQL).
Another thing I can think of, though it is a radical approach I have never really tried myself (yes, I work with Exadatas too): maybe you can give it a shot and let us know here...
What about using all those GBs of RAM on the Exadata to start tuning your Java application's latency? With that enormous amount of memory you can try setting a really generous heap and avoid garbage-collection-induced latency. Please do let me know here what comes out of it if you actually try this.
Which protocol do the Java applications use to connect to Oracle?
If it's not IPC (inter-process communication, aka BEQUEATH, aka shared memory), but maybe TCP, and you have many fast and tiny round trips, then this would be your low-hanging fruit: eliminate the network stack.
Edit: I just realized that Exadata cannot run Java applications by default (only ODA does), so it wouldn't be possible to make use of IPC. However, perhaps you can test the impact of IPC in one of your applications using the former infrastructure?
Exadata cannot host any customer application. You cannot install anything there; you can only host the Oracle database on Exadata.
That means you can use database features like DBFS (a file system on top of the Oracle database) and the Java option (storing and executing Java code in the database). But you need to check which options you are licensed for. Also, the database's internal JVM is used, which cannot be customized or upgraded.
Exadata is a database appliance designed to work with large amounts of differently accessed data in a very effective and manageable way.
I'm part of a team architecting a Java web application wherein users will search for results in a relational database and then view them in tabular fashion in a browser. Users will then also have the option to subsequently view the same result set (or a subset of those results) in a separate browser window, using for example a charting tool. In other words, we need to give the user the ability to visualize the same result set records later (up to a limit of 24 hours).
Since searches on the system will be resource-intensive, and just out of good common sense, we would like a clean way to cache each result set so that it can be pulled later from memory (RAM or disk). We are looking for a good approach to this caching; we believe others have done it before, and we would prefer to use a best practice or framework rather than build such a thing from scratch. The server will have plenty of RAM, but since there could be hundreds of people using the system, we may need an approach that stores to RAM first and can also spill to hard disk if RAM is getting full.
I believe it makes most sense to persist as Java objects but I'm open to better advice. We would like a vendor-neutral approach, so that if the database team chooses to switch vendors later we aren't stuck with a proprietary solution. Thanks.
I think what you might be looking for is Terracotta Ehcache. This does everything you mentioned and more. It is a free product that can be used to cache things in memory, overflow to disk, specify max cache sizes by either MB or # of items, and expire based on last access time or entry time.
I've seen http://www.jboss.org/infinispan/ used to do exactly that. It can cache to memory, disk and or database. I wouldn't say I love it (the configuration is not super easy and documentation is somewhat lacking) but it most certainly works and is actively maintained.
Being vendor-neutral is all about writing an abstraction layer that is native to your application and then plugging the cache service you would like to use in behind that layer, while the layer that exposes these operations to your main code stays the same.
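For illustration, a minimal sketch of such a facade; the ResultCache name and the map-backed implementation are hypothetical, and an Ehcache- or Infinispan-backed class could later be swapped in behind the same interface without touching the calling code.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public interface ResultCache {

        void put(String key, Object results);

        Object get(String key);

        void evict(String key);

        // Simple in-memory implementation to start with; replace it without touching callers.
        class InMemoryResultCache implements ResultCache {
            private final Map<String, Object> store = new ConcurrentHashMap<>();

            @Override public void put(String key, Object results) { store.put(key, results); }
            @Override public Object get(String key)               { return store.get(key); }
            @Override public void evict(String key)               { store.remove(key); }
        }
    }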
There are plenty of ways to cache. Look into using various NoSQL solutions:
Redis
Memcached
Most of the time you will serialize your object and persist it to your cache layer.
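As a rough sketch of that serialize-and-store step (the cache client call itself is left out, since it depends on the product you pick):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public final class CacheSerializer {

        // Turn a result object into bytes suitable for a byte-oriented cache (Redis, Memcached, ...).
        public static byte[] toBytes(Serializable value) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeObject(value);
            }
            return bos.toByteArray();
        }

        // Rebuild the object from the cached bytes.
        public static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return in.readObject();
            }
        }
    }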
I have a java file server that serves file over http. Each file is uniquely addressable by an ID like so:
http://fileserver/id/123455555
I am looking to add a caching layer to this so that the most frequently accessed files stay in memory. I would also like to control the total size of the cache. I am thinking of using Ehcache or OSCache for this, but I have only used them to cache serialized objects before. Would they be a good choice, and are there any additional considerations for building a file cache?
Edit
Thanks for all the answers. Some more details about the file server to simplify (or complicate) the problem:
Once a file is saved, it is never modified.
An MD5 hash is used to avoid duplicating files on save (I am aware of possible collision and security concerns).
The file server runs on Linux boxes.
Edit 2
Though the server itself does not put any limitation on the file types it supports, the files are mostly images (JPG, GIF, PNG), Word, Excel, and PDF documents no bigger than 10 MB.
Guava cache? http://code.google.com/p/guava-libraries/wiki/CachesExplained
Nice API
Time-based eviction
Size-based eviction
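A minimal Guava CacheBuilder sketch along those lines, assuming files are keyed by their ID; loadFromDisk is a hypothetical placeholder for your existing file-store read, and the sizes are only illustrative.

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import com.google.common.cache.Weigher;
    import java.util.concurrent.TimeUnit;

    public class FileCache {

        private final LoadingCache<String, byte[]> cache = CacheBuilder.newBuilder()
                .maximumWeight(256L * 1024 * 1024)             // size-based eviction: ~256 MB of file bytes
                .weigher(new Weigher<String, byte[]>() {
                    @Override
                    public int weigh(String id, byte[] bytes) {
                        return bytes.length;                    // weight = actual file size
                    }
                })
                .expireAfterAccess(1, TimeUnit.HOURS)           // time-based eviction
                .build(new CacheLoader<String, byte[]>() {
                    @Override
                    public byte[] load(String id) throws Exception {
                        return loadFromDisk(id);                // hypothetical file-store read
                    }
                });

        public byte[] get(String id) {
            return cache.getUnchecked(id);
        }

        private byte[] loadFromDisk(String id) {
            throw new UnsupportedOperationException("replace with real file access");
        }
    }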
Take advantage of the HTTP protocol
Your most effective caching mechanism by far will be to move caching off your own server and as close to the client as possible (data locality ;)). Use the HTTP protocol effectively to allow clients and caching proxies to do the caching whenever they can appropriately do so:
Set ETags using some function of each file's content (e.g. its MD5 sum) - cache this info too, so you don't re-calculate it on each serve!
Set Expires / Last-Modified / Cache-Control headers as appropriate
Edit: You updated to say that the files are never modified, so I would suggest setting the Expires header to a far-future date.
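A minimal servlet-style sketch of those headers; md5Of and readFile are hypothetical placeholders for the file store you already have, and the URL parsing assumes the /id/<number> mapping from the question.

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;

    public class FileServlet extends HttpServlet {

        private static final long ONE_YEAR_MS = 365L * 24 * 60 * 60 * 1000;

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String id = req.getPathInfo().substring(1);     // /id/123455555 -> 123455555
            String etag = "\"" + md5Of(id) + "\"";          // you already store an MD5 per file

            // If the client already has this version, answer 304 and skip the body entirely.
            if (etag.equals(req.getHeader("If-None-Match"))) {
                resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
                return;
            }

            resp.setHeader("ETag", etag);
            resp.setHeader("Cache-Control", "public, max-age=31536000");   // files never change
            resp.setDateHeader("Expires", System.currentTimeMillis() + ONE_YEAR_MS);
            resp.getOutputStream().write(readFile(id));     // hypothetical file-store read
        }

        private String md5Of(String id)  { throw new UnsupportedOperationException(); }
        private byte[] readFile(String id) { throw new UnsupportedOperationException(); }
    }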
... Now to answer the question more directly ...
EhCache
My experience with EhCache is that it's a fine choice, and it can satisfy the requirements you've mentioned.
You mentioned "the most frequently accessed files stay in memory" so it seems relevant to mention that, according to some performance testing I did (several years ago now) the LFU (Least Frequently Used) eviction policy is a lot slower than LRU (Least Recently Used) on cache writes - something like 30 times slower in fact. This is a product of the additional complexity of LFU vs LRU.
It would be a good idea to check the data usage pattern you really see in production to understand which eviction policy works best for you. In most circumstances I would suggest LRU as a starting point, as it approximates to LFU under conditions where the cache is large enough and there are no significant bursts of unusual data access.
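For reference, a minimal Ehcache 2.x sketch of an LRU, size-bounded in-memory cache for the file bytes; the names and limits are illustrative, and the exact configuration API varies a little between 2.x releases, so check it against the version you use.

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;
    import net.sf.ehcache.config.CacheConfiguration;

    public class EhcacheFileCache {

        private final Cache cache;

        public EhcacheFileCache() {
            CacheManager manager = CacheManager.create();
            // Up to 10,000 entries on heap, evicted LRU (the default); idle entries expire after an hour.
            CacheConfiguration config = new CacheConfiguration("files", 10000)
                    .memoryStoreEvictionPolicy("LRU")
                    .timeToIdleSeconds(3600);
            cache = new Cache(config);
            manager.addCache(cache);
        }

        public void put(String id, byte[] bytes) {
            cache.put(new Element(id, bytes));
        }

        public byte[] get(String id) {
            Element element = cache.get(id);
            return element == null ? null : (byte[]) element.getObjectValue();
        }
    }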
OSCache
I have not used OSCache, so cannot say anything there.
Other considerations
In his answer Peter Lawrey suggested using the OS cache. Whilst this means that you pay a penalty for the read-through from Java to native, I think the idea has great merit, since it avoids a significant problem of caching in the Java heap: the garbage collector has extra work to do trawling the large heap. (An alternative solution to that is to use off-heap caching, for example via BigMemory, but that has its own tradeoffs.)
If the content is compressible you probably want to consider caching a compressed (gzip'd) version of the file (otherwise you will end up re-compressing it every time it is served!). This is one argument that goes against using the OS disk cache. Of course there are other caveats that go with compression (e.g. content is large enough to warrant compressing and compresses reasonably well) so it really does depend on what is in those files.
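A minimal sketch of that compress-once idea, so the gzipped bytes can be cached and served repeatedly without re-compressing per request:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    public final class Gzip {
        // Compress a file's bytes once; cache the result alongside (or instead of) the raw bytes.
        public static byte[] compress(byte[] raw) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
                gzip.write(raw);
            }
            return bos.toByteArray();
        }
    }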
Ehcache provides the ability to do web caching as well. You may want to try that: http://www.ehcache.org/documentation/user-guide/web-caching
IMHO, you are better off making use of the OS disk cache, as this has several advantages.
It's much simpler, as the OS does all the real work.
The OS can use all the available free memory, which can vary depending on what else the system does.
You don't double up with the disk cache (as it is the disk cache).
The OS keeps the most recently used files in memory anyway.
It just occurred to me: why not have most of the objects in a cache (in memory) when the application starts,
at least if it's not that large a web application? Or have a setting for how much I want to put in the cache/memory.
I would guess it could require something like below 1 GB of RAM, or a lot less.
All of this in order to speed up the application even more by not querying the database.
Is this a good idea?
Caching is definitely a good idea and is widely used, but it has to be implemented correctly. There are plenty of pitfalls if done incorrectly. Try looking into one of the big proven systems, like memcached.
Caching is definitely a good idea.
Databases are not a catch-all solution either, but with a cache you have to be careful about consistency between runs of your program: what if you change the data but your program crashes before you write the change back to the database?
There are also lightweight memory-resident databases that let you keep your current queries for now but run mostly from memory. Using an ORM tool instead of raw SQL is particularly effective for this, since the switch is almost transparent.
It quickly becomes a not-so-good idea when some other node starts updating the database.
In that case your cache will be holding stale data.
You can maintain a cache of frequently used objects in memory; just don't forget to add methods to refresh the cache when the underlying database state changes.
E.g., if you have a users table and you need user names on many, many pages, then load the entire table into the cache at application startup; just make sure to update the cache when you add new users online or modify/delete entries from the users table.
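A minimal sketch of that pattern; UserNameCache and UserDao are hypothetical names, and refresh() is what you would call after any insert/update/delete on the users table.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class UserNameCache {

        private final Map<Long, String> namesById = new ConcurrentHashMap<>();
        private final UserDao userDao;                       // hypothetical DAO

        public UserNameCache(UserDao userDao) {
            this.userDao = userDao;
            refresh();                                       // warm the cache at application startup
        }

        public String nameFor(long userId) {
            return namesById.get(userId);
        }

        // Call this whenever the users table changes.
        public final void refresh() {
            namesById.clear();
            namesById.putAll(userDao.loadAllUserNames());    // hypothetical: SELECT id, name FROM users
        }

        public interface UserDao {
            Map<Long, String> loadAllUserNames();
        }
    }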
You don't persist objects to a database; what you persist is an object's state, so that you can have exactly the same state even after your app stops/closes/restarts. If you want to keep your objects' state persisted, you have no choice but to use a DB (or anything else that allows you to write data to the file system).
The details are beyond the scope of an answer here, but we have had good experience of using ehCache ( http://ehcache.org/ )
The combination of support for distributed caches, and overflow to disk has allowed us to keep large numbers of computationally heavy, but fairly unchanging pages in the cache for a site being served from multiple tomcats.
Distribution addresses the question of staleness (if you invalidate your items correctly) and the disk overflow allows us to basically cache everything which was just not feasible with an in-memory cache.
Of course the implementation is not trivial for a real world application, but it improved our performance significantly once the caches were bubbling.
I am currently in need of a high performance java storage mechanism.
This means:
1) I have 10,000+ objects with 1 - Many Relationship.
2) The objects are updated every 5 seconds, with the most recent updates persistent in the case of system failure.
3) The objects need to be queryable in a reasonable time (1-5 seconds). (IE: Give me all of the objects with this timestamp or give me all of the objects within these location boundaries).
4) The objects need to be available across various Glassfish installs.
Currently:
I have been using JMS to distribute the objects, Hibernate as an ORM, and HSQLDB to provide the needed recoverability.
I am not exactly happy with the performance. Especially the JMS part of this.
After doing some Stack Overflow research, I am wondering if this would be a better solution. Keep in mind that I have no experience with what Terracotta gives me.
I would use Terracotta to distribute objects around the system, and something else would need to provide the ability to "query" attributes of those objects.
Does this sound reasonable? Would it meet these performance constraints? What other solutions should I consider?
I know it's not what you asked, but you may want to start by switching from HSQLDB to H2. H2 is a relatively new, pure-Java DB. It is written by the same guy who wrote HSQLDB, and he claims the performance is much better. I've been using it for some time now and I'm very happy with it. It should be a very quick transition (add a jar, change the connection string, create the database), so it's worth a shot.
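A minimal sketch of what the switch might look like, assuming a file-based database named appdb; essentially only the driver jar and the JDBC URL change.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class H2Smoke {
        public static void main(String[] args) throws SQLException {
            // was something like: jdbc:hsqldb:file:data/appdb
            try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/appdb", "sa", "");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("H2 reachable: " + rs.getInt(1));
            }
        }
    }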
In general, I believe in trying to get the most of what I have before rewriting the application in a different architecture. Try profiling it to identify the bottleneck first.
First of all, Lucene isn't your friend here (its indexes are effectively read-only, so it's a poor fit for constant updates).
Terracotta is for scaling at the logic layer! Your problem doesn't seem to be related to the processing logic; it's more about storage/communication.
Identify your bottleneck! Benchmark the storage/logic/JMS processing time and overhead!
Kill the JMS issues with a good JMS framework (e.g. ActiveMQ) and a good, tuned configuration.
Maybe a distributed key=>value store is your friend. Try Project Voldemort!
If you'd like to stay with Hibernate and HSQLDB, check out the Hibernate second-level cache and connection pooling (c3p0, container-driven, ...)!
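A hedged sketch of that tuning, assuming Hibernate 3.x with the Ehcache region factory and c3p0 on the classpath; property names can differ between Hibernate versions, so verify them against your release.

    import org.hibernate.cfg.Configuration;

    public class HibernateTuning {
        public static Configuration tuned() {
            return new Configuration()
                    // second-level and query cache (Ehcache-backed region factory assumed)
                    .setProperty("hibernate.cache.use_second_level_cache", "true")
                    .setProperty("hibernate.cache.use_query_cache", "true")
                    .setProperty("hibernate.cache.region.factory_class",
                                 "net.sf.ehcache.hibernate.EhCacheRegionFactory")
                    // c3p0 connection pooling
                    .setProperty("hibernate.c3p0.min_size", "5")
                    .setProperty("hibernate.c3p0.max_size", "20")
                    .setProperty("hibernate.c3p0.timeout", "300");
        }
    }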
Several Terracotta users have built systems like this in the past, so I can tell you by proof of existence that it can be done. :)
Compass does have support for clustering with Terracotta so that might help you. I suspect you might get further faster by just being careful with how you create your clustered data structures.
Regarding your requirements and Terracotta:
1) 10k objects is quite small from a Terracotta perspective
2) 5 sec update rate doesn't seem like an issue. Might depend how many nodes there are and whether there is any natural partitioning you can take advantage of. All updates will be persistent.
3) 1-5 second query time seems quite easy. Building your own well-organized data structures for lookup is the tricky part. Obviously you want to avoid scanning all the data.
4) Terracotta currently supports Glassfish v1 and v2.
If you post on the Terracotta forums, you could probably get more Terracotta eyeballs on the problem.
I am currently working on writing the client for a very (very) fast key/value distributed hash DB that provides set + list semantics. The DB is written in C99 and requires GCC, and right now I'm battling with good old Java network IO to break my current 30,000 gets/sets per second barrier. I hope to be done within the week. Drop me a line through my account and I'll get back to you when it's show time.
With such a high update rate, Lucene is almost definitely not what you're looking for, since there is no way to update a document once it's indexed. You'd have to keep all the object versions in the index and select the one with the latest time stamp, which will kill your performance.
I'm no DB expert, but I think you should look into any one of the distributed DB solutions that's been on the news lately. (CouchDB, Cassandra)
Maybe you should take a look at Prevayler.
Your objects are always in memory.
The "changes" to your objects are persisted.
From time to time you are able to take a snapshot: every object is persisted.
You don't say which vendor you are using for JMS, but it wouldn't surprise me if you have a bottleneck there. I couldn't get more than 100 messages a second from ActiveMQ, and whatever I tried in terms of configuring acknowledgements, queue size, etc., we were unable to push the CPU beyond a few percent.
The solution was to batch many queries into one JMS message. We had a simple class that sent the batch either when it reached 200 queries or when a timeout expired (we used 20 ms), which gave us a dramatic increase in message throughput.
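A minimal sketch of that batching trick against the javax.jms 1.1 API; the connection/queue setup is omitted, and a real implementation would also need a timer so a half-full batch is flushed when no new queries arrive.

    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.ObjectMessage;
    import javax.jms.Session;
    import java.io.Serializable;
    import java.util.ArrayList;

    public class BatchingSender {

        private static final int MAX_BATCH = 200;
        private static final long MAX_WAIT_MS = 20;

        private final Session session;
        private final MessageProducer producer;
        private final ArrayList<Serializable> batch = new ArrayList<>();
        private long firstQueuedAt;

        public BatchingSender(Session session, MessageProducer producer) {
            this.session = session;
            this.producer = producer;
        }

        // Collect queries; flush when the batch hits 200 entries or the oldest entry is 20 ms old.
        public synchronized void enqueue(Serializable query) throws JMSException {
            if (batch.isEmpty()) {
                firstQueuedAt = System.currentTimeMillis();
            }
            batch.add(query);
            if (batch.size() >= MAX_BATCH
                    || System.currentTimeMillis() - firstQueuedAt >= MAX_WAIT_MS) {
                flush();
            }
        }

        public synchronized void flush() throws JMSException {
            if (batch.isEmpty()) {
                return;
            }
            ObjectMessage message = session.createObjectMessage(new ArrayList<>(batch));
            producer.send(message);          // one JMS message carrying up to 200 queries
            batch.clear();
        }
    }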
Guaranteed messaging is going to be much slower than volatile messaging. Given every object is updated every few second, you might consider batching your updates (into say 500 changes or by time say 1-10 ms' worth), sending over volatile messaging, and batching your transactions. In this case you are more likely to be limited by bandwidth. Tuning your use case you may find smaller batch sizes also work efficiently. If bandwidth is critical (say you have a 10 MB connection or slower, then you could use compression over JMS)
You can achieve much higher performance with a custom solution (which might also be simpler), e.g. Hazelcast and JGroups are free (you can add a node or nodes that handle the database synchronization so your main app doesn't slow down). There are commercial products that handle on the order of half a million durable messages per second.
Terracotta + jofti = queryable persistent clustered data structures
Search Google for "terracotta querymap" or visit tusharkhairnar.blogspot.com for the QueryMap blog.
You may want to integrate timasync as well to update your database. The database is your system of record; use Terracotta as a caching and database-offloading mechanism. You can even batch async updates to make it faster, so that the DB contains fairly recent data.