Tables in RAM or DB tables, for best performance - Java

I am programming a server application (chat server side) in Java, which receives requests and forwards them to a target.
I have several tables in the database, and when the server application starts, I programmed it to copy all the content of the database into Map tables (in RAM) in order to speed up pulling and pushing data while the application is running.
Is this the correct way? Or would you suggest that I pull data from the database directly when I need a detail, and remove the Map<> tables from RAM?
I suffer from a memory leak.
Does dealing with the database slow the application down?

Whether caching all or some of the data in memory makes sense depends on your use case. It can improve performance, but it adds complexity which might not be needed.
You can load millions or even billions of records into a JVM, but for much more than that you need off-heap storage, such as a dedicated data store or a database. Using off-heap memory you can keep trillions of records in a JVM, but this is rarely needed.
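As an illustration of the approach described in the question, here is a minimal sketch of loading a table into a ConcurrentHashMap at startup and serving reads from it. Only standard JDBC calls are used; the users table, its columns, and the User record are hypothetical placeholders.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class UserCache {
        // Hypothetical value type for one cached row.
        public record User(long id, String nickname) {}

        private final Map<Long, User> byId = new ConcurrentHashMap<>();

        // Called once at server startup: copy the whole table into RAM.
        public void loadFrom(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, nickname FROM users")) {
                while (rs.next()) {
                    User u = new User(rs.getLong("id"), rs.getString("nickname"));
                    byId.put(u.id(), u);
                }
            }
        }

        // Reads are served from memory, not from the database.
        public User find(long id) {
            return byId.get(id);
        }
    }

This covers the read path only; every write while the server runs must also update the map, and a map that only ever grows is one common source of the kind of memory leak mentioned in the question.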

Related

Loading MySQL Table into Cache (and process data) vs constantly sending queries

I'm currently developing a Java server application which has a lot of tables with some data that is modified frequently. All of the data often has to be retrieved in full (to display it to the user or to index it). As MySQL queries seem to be expensive (and problematic, because I need the data asynchronously), I came up with the idea of literally loading the entire table as a local cache. That way I can always index the data from the local cache and only need to send changes to MySQL. Fortunately the data is not that big, so I don't have to worry about OutOfMemory exceptions.
What is more efficient: repeatedly sending queries to MySQL, or loading all the data into a local cache to do operations?
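A hedged sketch of that idea: serve reads from the in-memory map and write each change through to MySQL so the two copies stay in sync. The items table, its columns, and the DataSource wiring are assumptions for illustration; only standard JDBC calls are used.

    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ItemCache {
        private final Map<Long, String> items = new ConcurrentHashMap<>();
        private final DataSource ds; // assumed to point at the MySQL instance

        public ItemCache(DataSource ds) {
            this.ds = ds;
        }

        // Write-through: update the cache immediately, then send the same
        // change to MySQL so both copies stay consistent.
        public void update(long id, String value) throws SQLException {
            items.put(id, value);
            try (Connection conn = ds.getConnection();
                 PreparedStatement ps = conn.prepareStatement(
                         "UPDATE items SET value = ? WHERE id = ?")) {
                ps.setString(1, value);
                ps.setLong(2, id);
                ps.executeUpdate();
            }
        }

        // Reads never touch MySQL; the map is the source for queries.
        public String read(long id) {
            return items.get(id);
        }
    }

Whether this beats repeated queries depends on the read/write ratio: with frequent reads of the full data set and comparatively rare writes, the cache usually wins; with write-heavy data, the synchronization cost can cancel the gain.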

Load existing SQLite database to memory

I have an existing database in a file. I want to load the database into memory to speed up my queries, because I'm running a lot of them and the database isn't very large (<50 MB). Is there any way to do this?
50 MB easily fits in the OS file cache; you do not need to do anything.
If the file locking results in a noticeable overhead (which is unlikely), consider using the exclusive locking mode.
You could create a RAM drive and have the database use those files instead of your HDD/SSD-hosted files. If you have extreme performance requirements, you could go for an in-memory database as well.
Before you go for any in-memory solution: what is "a lot of queries", and what is the expected response time per query? Chances are that the database program isn't the performance bottleneck, but rather slow application code or inefficient queries / a lack of indexes / etc.
I think SQLite does not support concurrent access to the database, which would waste a lot of performance. If writes occur rather infrequently, you could boost your performance by keeping copies of the database and having different threads read different SQLite instances (I have never tried that).
Neither of the solutions suggested by CL and Ray will perform as well as a true in-memory database, due to the simple fact of file-system overhead (irrespective of whether the data is cached and/or on a RAM drive; those measures will help, but you can't beat getting the file system out of the way entirely).
SQLite allows multiple concurrent readers, but any write transaction will block readers until it is complete.
SQLite only allows a single process to use an in-memory database, though that process can have multiple threads.
You can't load (open) a persistent SQLite database as an in-memory database (at least, the last time I looked into it). You'll have to create a second in-memory database and read from the persistent database to populate it. But if the database is only 50 MB, that shouldn't be an issue. There are third-party tools that will then let you save that in-memory SQLite database and subsequently reload it.
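For Java specifically, here is a minimal sketch of that copy step, assuming the Xerial sqlite-jdbc driver (its driver-specific "restore from" statement copies an on-disk database into the current connection; app.db is a placeholder path):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LoadIntoMemory {
        public static void main(String[] args) throws Exception {
            // ":memory:" opens a fresh in-memory database for this connection.
            try (Connection conn =
                         DriverManager.getConnection("jdbc:sqlite::memory:");
                 Statement st = conn.createStatement()) {
                // Xerial extension: copy the on-disk file into this
                // in-memory database.
                st.executeUpdate("restore from app.db");

                // All queries on this connection now run against RAM.
                try (ResultSet rs =
                             st.executeQuery("SELECT count(*) FROM sqlite_master")) {
                    rs.next();
                    System.out.println("objects loaded: " + rs.getInt(1));
                }
            }
        }
    }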

is concurrency automatically handled in sqlite?

On my server, in order to speed things up, I have allocated a connection pool for my SQLite ODBC source.
What happens if two or more hosts want to alter my data?
Are these multiple connections automatically handled by SQLite?
You can read this thread
If most of those concurrent accesses are reads (e.g. SELECT), SQLite can handle them very well. But if you start writing concurrently, lock contention could become an issue. A lot would then depend on how fast your filesystem is, since the SQLite engine itself is extremely fast and has many clever optimizations to minimize contention. Especially SQLite 3.
For most desktop/laptop/tablet/phone applications, SQLite is fast enough as there's not enough concurrency. (Firefox uses SQLite extensively for bookmarks, history, etc.)
For server applications, somebody some time ago said that anything less than 100K page views a day could be handled perfectly by a SQLite database in typical scenarios (e.g. blogs, forums), and I have yet to see any evidence to the contrary. In fact, with modern disks and processors, 95% of web sites and web services would work just fine with SQLite.
If you want really fast read/write access, use an in-memory SQLite database. RAM is several orders of magnitude faster than disk.
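One common mitigation for that lock contention (not mentioned in the answers above) is SQLite's busy timeout, which makes a connection wait for a competing lock instead of failing immediately; the sketch assumes the Xerial sqlite-jdbc driver and a placeholder file name:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class BusyTimeoutDemo {
        public static void main(String[] args) throws Exception {
            // "app.db" is a placeholder for your database file.
            try (Connection conn =
                         DriverManager.getConnection("jdbc:sqlite:app.db");
                 Statement st = conn.createStatement()) {
                // Wait up to 5 seconds for a competing writer to finish
                // instead of failing immediately with SQLITE_BUSY.
                st.execute("PRAGMA busy_timeout = 5000");

                st.executeUpdate("CREATE TABLE IF NOT EXISTS log (msg TEXT)");
                st.executeUpdate("INSERT INTO log VALUES ('hello')");
            }
        }
    }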
And check this
In short: it is not a good solution.
Description:
SQLite supports an unlimited number of simultaneous readers, but it will only allow one writer at any instant in time.
For your situation that is not good.
Advice: use another RDBMS.

Write-behind caching solution for Java objects, using oracle stored procs for persistence

I'm currently working on a high-throughput, low-latency transaction engine. For audit reasons I need to maintain object state both locally and persisted to the DB (Oracle).
Our DBAs insist that raw SQL is not allowed, so we use stored procedures to read/write data to the database.
I've looked around, but cannot find any obvious solution.
Is there anything out there that will act as a write-behind cache (for performance) and allow me to specify (on a per-class basis) the code that is used to persist/retrieve objects (so I can inject the sproc-handling code)?
What I have done in the past in this situation is to write the data to Java Chronicle and have it forwarded to the database in another thread or process. Java Chronicle supports low-latency persisted IPC. You can persist objects at a rate of over one million per second with sub-microsecond latencies, and the reading process can pick up those objects/events within 100 nanoseconds. As you have to do the JDBC part yourself, you can do it in any manner you choose.
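For illustration, here is a hedged sketch of that write-behind shape with the Chronicle step swapped out for a plain in-process queue: callers enqueue state changes and return immediately, and a background thread drains the queue and calls the stored procedure through standard JDBC. The save_trade procedure, its parameters, and the Trade event are hypothetical.

    import javax.sql.DataSource;
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class WriteBehindAudit {
        // Hypothetical event carrying the state to persist.
        public record Trade(long id, String payload) {}

        private final BlockingQueue<Trade> queue = new LinkedBlockingQueue<>();
        private final DataSource ds; // assumed to point at the Oracle instance

        public WriteBehindAudit(DataSource ds) {
            this.ds = ds;
            Thread writer = new Thread(this::drain, "audit-writer");
            writer.setDaemon(true);
            writer.start();
        }

        // Hot path: enqueue and return; no database latency here.
        public void record(Trade t) {
            queue.add(t);
        }

        // Background thread: persist each event via the stored procedure.
        private void drain() {
            try (Connection conn = ds.getConnection();
                 CallableStatement cs =
                         conn.prepareCall("{call save_trade(?, ?)}")) {
                while (true) {
                    Trade t = queue.take(); // blocks until an event arrives
                    cs.setLong(1, t.id());
                    cs.setString(2, t.payload());
                    cs.execute();
                }
            } catch (Exception e) {
                // A real engine would retry and alert; this sketch only logs.
                e.printStackTrace();
            }
        }
    }

Note that, unlike Chronicle, an in-process queue is not persisted: events still queued when the JVM dies are lost, which matters for an audit trail.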

Reliable distributed cache on app engine (Java)

I need to keep some values in memory, a sort of in-memory DB. In terms of reliability, I am not afraid of system failure; I can live with that. However, I cannot use the memcache service, because the values can be evicted at any time. I need the values to be available on other machines when the application scales. I suppose that App Engine will not make memory scale, or will it (e.g. if I keep the value in an ordinary Java collection)?
What I am trying to achieve here is a "pick a nickname" service. This works in two steps. First, the user reserves a nickname. Then he registers the nickname. Nicknames are stored under one entity group (sic!). Therefore I need to avoid datastore contention.
As far as I understand from https://developers.google.com/appengine/articles/scaling/memcache, I can to a certain extent rely on values in memcache not being evicted for arbitrary reasons. However, I have to assume that this will happen from time to time (e.g. under high memory pressure), and these losses of values are very unpleasant for my users.
Your application shares a single instance of Memcache; it is not local to a "machine" (or rather, to an instance of your application).
So if you are running 2 instances and they both retrieve the same value from memcache, they will both get the same value.
Running an "in-memory" database is not feasible in the cloud - which memory were you planning to use, the memory in the instance that's about to shut down?
https://developers.google.com/appengine/articles/scaling/memcache
When designing your application, take the time to consider which datasets can be cached for future reuse. These could be commonly viewed pages or often read datastore entities, just to name a few. There may also be some data in your application which you would like to have shared among all instances of your app but does not need to be persisted forever. In such cases, memcache can improve the scalability of your app by providing a fast and efficient distributed storage system for transient data. Adding memcache logic to your server side code is often well worth the few extra lines of code.
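In that spirit, here is a minimal read-through sketch against the App Engine Java memcache API (com.google.appengine.api.memcache); the key scheme and the loadNicknameFromDatastore helper are hypothetical, and a miss must fall back to the datastore precisely because entries can be evicted:

    import com.google.appengine.api.memcache.Expiration;
    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    public class NicknameLookup {
        private final MemcacheService cache =
                MemcacheServiceFactory.getMemcacheService();

        public String ownerOf(String nickname) {
            String key = "nick:" + nickname; // hypothetical key scheme
            String owner = (String) cache.get(key);
            if (owner == null) {
                // Cache miss (or eviction): fall back to the datastore,
                // which remains the source of truth.
                owner = loadNicknameFromDatastore(nickname);
                if (owner != null) {
                    cache.put(key, owner, Expiration.byDeltaSeconds(3600));
                }
            }
            return owner;
        }

        // Hypothetical datastore read, omitted for brevity.
        private String loadNicknameFromDatastore(String nickname) {
            return null;
        }
    }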
You can use App Engine NDB when you use Python 2.7. NDB is a datastore API with automatic caching and much more.
Other machines? You mean shared between instances of the same app?
