Use of SQLite on a network share - Java

We are using SQLite (Xerial JDBC driver) in a Windows desktop-based Java application. Now we are moving to a client-server version of the same application, where multiple Java-based Swing clients will connect to the same SQLite db file on a designated server Windows PC. Please correct me if I'm wrong:
Is keeping the SQLite database file on a network share the only option to use SQLite in this mode, or is there some other solution that I am missing?
Will using SQLite increase the chances of DB corruption?
I don't see a lot of concurrent update operations. There will be 5-10 clients trying to read and update the same DB. In that case, is it better to use an enterprise-grade DB (MySQL, Postgres)?

From the FAQ paragraph before the one quoted:
SQLite uses reader/writer locks to control access to the database.
(Under Win95/98/ME which lacks support for reader/writer locks, a
probabilistic simulation is used instead.) But use caution: this
locking mechanism might not work correctly if the database file is
kept on an NFS filesystem. This is because fcntl() file locking is
broken on many NFS implementations. You should avoid putting SQLite
database files on NFS if multiple processes might try to access the
file at the same time. On Windows, Microsoft's documentation says that
locking may not work under FAT filesystems if you are not running the
Share.exe daemon. People who have a lot of experience with Windows
tell me that file locking of network files is very buggy and is not
dependable. If what they say is true, sharing an SQLite database
between two or more Windows machines might cause unexpected problems.
I would not network share an SQLite database file, as it appears you would be buying yourself nasty synchronization problems yielding hard-to-reproduce data corruption.
Put another way, you would be using a general file sharing mechanism as a substitute for the server capabilities of another DBMS. Those other DBMSs are specifically tested and field-hardened for multi-client access; SQLite has great merits, but this isn't one of them.

This is a FAQ:
[...] We are aware of no other embedded SQL database engine that
supports as much concurrency as SQLite. SQLite allows multiple
processes to have the database file open at once, and for multiple
processes to read the database at once. When any process wants to
write, it must lock the entire database file for the duration of its
update. But that normally only takes a few milliseconds. Other
processes just wait on the writer to finish then continue about their
business. Other embedded SQL database engines typically only allow a
single process to connect to the database at once. [...]
Also read SQLite is serverless.
Whether SQLite is sufficient for your needs is impossible to tell. If you have long-running update transactions, locking the whole database might be a serious issue. Since you're using JDBC to access it, there shouldn't be many problems switching to another database engine if necessary.
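Since JDBC hides the engine behind a driver and a URL, switching can be as small as a configuration change. A minimal sketch; the db.url property name and the example URLs are placeholders, not part of your setup:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DbSmokeTest {
        public static void main(String[] args) throws Exception {
            // Read the JDBC URL from configuration so the engine can be swapped
            // without touching code, e.g.:
            //   jdbc:sqlite:C:/data/app.db           (embedded SQLite)
            //   jdbc:postgresql://server:5432/app    (client-server PostgreSQL)
            String url = System.getProperty("db.url", "jdbc:sqlite:app.db");
            try (Connection conn = DriverManager.getConnection(url);
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("Connected via " + url);
            }
        }
    }

As long as the SQL you use stays close to the standard, only the URL (and the driver jar) changes between engines.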

Related

Implement a multithreaded application to speed up query time with JDBC?

The invoicing application I'm working on needs to be sped up. I want to improve the running time by dividing the work across multiple threads, but the app relies heavily on an Oracle database and was designed to make several queries to gather the information it needs and then perform the update statements that do the invoicing for each user.
The question is: will a multithreaded solution make it faster? If so, how can I implement it? Can you point me to some resources to read about the subject? And if not, how can I make the app faster?
For your invoicing application, there are many different ways to make it run faster. For example:
use database connection pooling (for example HikariCP) and send queries to the database from multiple threads (see the sketch after this list)
configure your database to use master/slave replication, so you can send some queries to the slave (to speed up responses from the database)
make sure that your tables have the correct indexes and that each query is indeed using those indexes (use a SQL query analyzer tool)
create specific views/tables for reporting (queries on those views/tables should be a lot faster than queries across multiple tables)
use an in-memory database (like Redis) to store some of the results (if you use that same information in multiple reports)
run your invoicing application on a separate computer (or virtual machine), to make sure that other processes on the same machine are not the cause of the delays you are experiencing.
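For the pooling suggestion in the first item, a minimal sketch with HikariCP and a fixed thread pool; the Oracle URL, credentials, and the invoices table/columns are made up for illustration:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PooledInvoicing {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder
            config.setUsername("app");
            config.setPassword("secret");
            config.setMaximumPoolSize(8); // size to what the DB can handle

            try (HikariDataSource ds = new HikariDataSource(config)) {
                ExecutorService pool = Executors.newFixedThreadPool(8);
                for (long userId : List.of(1L, 2L, 3L)) { // placeholder user ids
                    pool.submit(() -> {
                        // Each task borrows a pooled connection instead of opening one
                        try (Connection conn = ds.getConnection();
                             PreparedStatement ps = conn.prepareStatement(
                                     "SELECT amount FROM invoices WHERE user_id = ?")) {
                            ps.setLong(1, userId);
                            try (ResultSet rs = ps.executeQuery()) {
                                while (rs.next()) {
                                    // per-user invoicing work would go here
                                }
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.MINUTES); // finish before closing pool
            }
        }
    }

Whether this actually speeds things up depends on where the time goes: if the database itself is the bottleneck, more client threads will only add contention.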

Load existing SQLite database to memory

I have an existing database in a file. I want to load the database into memory to speed up my queries, because I'm doing a lot of them and the database isn't very large (<50 MB). Is there any way to do this?
50 MB easily fits in the OS file cache; you do not need to do anything.
If the file locking results in a noticeable overhead (which is unlikely), consider using the exclusive locking mode.
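With the Xerial JDBC driver that is a one-line PRAGMA on the connection; data.db is a placeholder path:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ExclusiveMode {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:data.db");
                 Statement st = conn.createStatement()) {
                // Keep the file lock for the lifetime of this connection, so
                // SQLite does not acquire and release it on every transaction.
                st.execute("PRAGMA locking_mode = EXCLUSIVE");
                // ... run the query-heavy workload on this connection ...
            }
        }
    }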
You could create a RAM drive and have the database use files there instead of your HDD/SSD-hosted files. If you have insane performance requirements, you could go for an in-memory database as well.
Before you go for any in-memory solution: what is "a lot of queries", and what is the expected response time per query? Chances are that the database engine isn't the performance bottleneck, but rather slow application code or inefficient queries / lack of indexes / ...
I think SQLite does not support concurrent access to the database, which would waste a lot of performance. If writes occur rather infrequently, you could boost your performance by keeping copies of the database and having different threads read different SQLite instances (never tried that).
Neither of the solutions suggested by CL and Ray will perform as well as a true in-memory database, due to the simple fact of file system overhead (irrespective of whether the data is cached and/or on a RAM drive; those measures will help, but you can't beat getting the file system out of the way entirely).
SQLite allows multiple concurrent readers, but any write transaction will block readers until it is complete.
SQLite only allows a single process to use an in-memory database, though that process can have multiple threads.
You can't load (open) a persistent SQLite database as an in-memory database (at least, the last time I looked into it). You'll have to create a second, in-memory database and read from the persistent database to populate it. But if the database is only 50 MB, that shouldn't be an issue. There are third-party tools that will then let you save that in-memory SQLite database and subsequently reload it.
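One way to do that copy with the Xerial JDBC driver is its nonstandard backup to / restore from statements, which wrap SQLite's online backup API (check that your driver version supports them); a sketch, with data.db as a placeholder file name:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LoadIntoMemory {
        public static void main(String[] args) throws Exception {
            // Open a fresh in-memory database, then copy the file database into it.
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
                 Statement st = conn.createStatement()) {
                st.executeUpdate("restore from data.db");
                // All queries on conn now run against RAM. Changes are lost when
                // the connection closes unless written back with: backup to data.db
            }
        }
    }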

Threads on Multiple VMs accessing a table on single Instance of DB causing low performance and Exceptions occasionally

The application is hosted on multiple virtual machines and the DB is on a single server. All VMs point to a single instance of the DB.
In this architecture, I have a table with very few records. But this table is accessed and updated very heavily by threads running on the VMs. This is causing a performance bottleneck and occasionally record-level exceptions. Database-level locking does not seem to be the best option, as it introduces significant delays in request processing.
Please suggest if there is any other technique to solve this problem.
A few questions first!
Is your application using connection pooling? If not, please use it. Creating a JDBC connection is expensive!
Is your application read-heavy or write-heavy?
What kind of storage engine are you using for your MySQL tables, InnoDB or MyISAM? If your application is write-heavy, please use InnoDB-based tables, as InnoDB uses row-level locking and will serve concurrent requests better.
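To see which engine a contended table uses and convert it if needed, something like the following; the schema and table names are made up:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class EngineCheck {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://dbhost/app", "app", "secret"); // placeholders
                 Statement st = conn.createStatement()) {
                String engine = null;
                try (ResultSet rs = st.executeQuery(
                        "SELECT ENGINE FROM information_schema.TABLES "
                        + "WHERE TABLE_SCHEMA = 'app' AND TABLE_NAME = 'hot_table'")) {
                    if (rs.next()) {
                        engine = rs.getString(1);
                    }
                }
                if (engine != null && !"InnoDB".equalsIgnoreCase(engine)) {
                    // Row-level locking instead of MyISAM's table-level locking
                    st.executeUpdate("ALTER TABLE hot_table ENGINE = InnoDB");
                }
            }
        }
    }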
One special case - if you are implementing queues on top of database tables, find a database that has a built-in queue operation and use that, or use a reliable messaging service. Building queues on top of databases is typically not efficient. See e.g. http://mikehadlow.blogspot.co.uk/2012/04/database-as-queue-anti-pattern.html
In general, running transactions against a database is slow because at the end of each transaction the database needs to be sure that enough has been written out to disk that, if the system died right then, the changes made by the transaction would be safely preserved. If you don't need this guarantee, you might find it faster to write a single non-database application that does what the database does but doesn't write anything to disk, or that still does database I/O but keeps it to the minimum possible. Then, instead of all of the VMs talking to the database directly, they would all talk to this application.
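A rough sketch of that idea: a single shared service that absorbs updates in memory and funnels persistence through one writer thread; all names here are invented:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    // The VMs call this service (e.g. over RMI or HTTP) instead of the DB.
    public class CounterService {
        private final ConcurrentHashMap<String, Long> state = new ConcurrentHashMap<>();
        private final BlockingQueue<String> dirty = new LinkedBlockingQueue<>();

        public CounterService() {
            Thread writer = new Thread(this::drain, "single-writer");
            writer.setDaemon(true);
            writer.start();
        }

        public void increment(String key) {
            state.merge(key, 1L, Long::sum); // fast in-memory update, no DB round trip
            dirty.add(key);                  // remember that this key needs persisting
        }

        public long get(String key) {
            return state.getOrDefault(key, 0L);
        }

        private void drain() {
            try {
                while (true) {
                    String key = dirty.take();
                    // Do the minimum possible database I/O here, e.g. collect
                    // keys and flush them to the DB in one batch every few seconds.
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }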

Is concurrency automatically handled in SQLite?

On my server, in order to speed things up, I have allocated a connection pool for my SQLite ODBC source.
What happens if two or more hosts want to alter my data?
Are these multiple connections automatically handled by SQLite?
You can read this thread
If most of those concurrent accesses are reads (e.g. SELECT), SQLite can handle them very well. But if you start writing concurrently, lock contention could become an issue. A lot would then depend on how fast your filesystem is, since the SQLite engine itself is extremely fast and has many clever optimizations to minimize contention. Especially SQLite 3.
For most desktop/laptop/tablet/phone applications, SQLite is fast enough as there's not enough concurrency. (Firefox uses SQLite extensively for bookmarks, history, etc.)
For server applications, somebody some time ago said that anything less than 100K page views a day could be handled perfectly by a SQLite database in typical scenarios (e.g. blogs, forums), and I have yet to see any evidence to the contrary. In fact, with modern disks and processors, 95% of web sites and web services would work just fine with SQLite.
If you want really fast read/write access, use an in-memory SQLite database. RAM is several orders of magnitude faster than disk.
And check this
In short: it is not a good solution.
Description:
SQLite supports an unlimited number of simultaneous readers, but it will only allow one writer at any instant in time.
For your situation, it is not a good fit.
Advice: Use another RDBMS.

Highly reliable storage for a 'log' / time series

In an application I'm working on, I need a write-behind data log. That is, the application accumulates data in memory, and can hold all the data in memory. It must, however, persist, tolerate reasonable faults, and allow for backup.
Obviously, I could write to a SQL database; Derby springs to mind for easy embedding. But I'm not tremendously fond of dealing with a SQL API (JDBC, however lipsticked), and I don't need any queries, indices, or other decoration. The records go out, and on restart, I need to read them all back.
Are there any other suitable alternatives?
Try using just a simple log file.
As data comes in, store it in memory and write (append) it to a file. A write() followed by fsync() will guarantee, on most systems (read your system and filesystem docs carefully), that the data has reached persistent storage (disk). These are the same mechanisms any database engine uses to get data into persistent storage.
On restart, reload the log. Occasionally, trim the front of the log file so its size doesn't grow without bound. Or model the log file as a circular buffer the same size as what you can hold in memory.
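In Java, the write()-then-fsync() pattern maps to an appending FileChannel plus force(); a minimal sketch, with the file name and record format made up:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class AppendLog implements AutoCloseable {
        private final FileChannel channel;

        public AppendLog(Path file) throws IOException {
            channel = FileChannel.open(file,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Append one record and force it to disk (the fsync() equivalent).
        public void append(String record) throws IOException {
            channel.write(ByteBuffer.wrap(
                    (record + "\n").getBytes(StandardCharsets.UTF_8)));
            channel.force(true); // true also flushes metadata such as the file
                                 // size, which matters for appends
        }

        @Override
        public void close() throws IOException {
            channel.close();
        }

        public static void main(String[] args) throws IOException {
            try (AppendLog log = new AppendLog(Path.of("app.log"))) {
                log.append("event-1");
            }
        }
    }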
Have you looked at (now Oracle) Berkeley DB for Java? The "Direct Persistence Layer" is actually quite simple to use. Docs here for DPL.
It has different options for backups and comes with a few utilities. It runs embedded.
(Licensing: a form of the BSD License, I believe.)
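For reference, a minimal DPL sketch of such a log store (the LogRecord shape is invented; the environment directory must already exist):

    import java.io.File;

    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.PrimaryIndex;
    import com.sleepycat.persist.StoreConfig;
    import com.sleepycat.persist.model.Entity;
    import com.sleepycat.persist.model.PrimaryKey;

    @Entity
    class LogRecord {
        @PrimaryKey(sequence = "seq") // auto-assigned, ordered id
        long id;
        byte[] payload;
    }

    public class DplLog {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true); // durable commits

            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setAllowCreate(true);
            storeConfig.setTransactional(true);

            Environment env = new Environment(new File("log-env"), envConfig);
            EntityStore store = new EntityStore(env, "log", storeConfig);
            PrimaryIndex<Long, LogRecord> byId =
                    store.getPrimaryIndex(Long.class, LogRecord.class);

            // Append a record; on restart, iterate byId.entities() to read all back.
            LogRecord rec = new LogRecord();
            rec.payload = new byte[] {1, 2, 3};
            byId.put(rec);

            store.close();
            env.close();
        }
    }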
