I'm working on a new application that will be used by dozens of clients, each client in turn with dozens of users. I really don't want to deal with multiple datasources (that may lead to performance issues), so I chose to use only one database for all tenants and to prepare the application to handle multitenancy through logical data segregation (adding a tenant ID to my entities and indexing all tables on this ID).
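Roughly, each entity would carry the tenant ID with an index on it; here is a minimal sketch, assuming JPA (the entity and column names are just examples):

```java
// Hypothetical tenant-discriminated entity: every row carries a tenant_id and
// the column is indexed so per-tenant queries stay fast.
import javax.persistence.*;

@Entity
@Table(name = "invoice",
       indexes = @Index(name = "idx_invoice_tenant", columnList = "tenant_id"))
public class Invoice {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // Discriminator column: every query must filter on this value.
    @Column(name = "tenant_id", nullable = false)
    private Long tenantId;

    @Column(name = "amount")
    private java.math.BigDecimal amount;

    // getters/setters omitted for brevity
}
```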
But I was wondering: if a client needs to restore its backup, is it possible (or viable) to avoid taking the entire system down by creating all tables partitioned by tenant ID? In that case, can I perform a backup/restore per partition on Oracle?
Yes, backup and restore of table partitions is possible. See here in the documentation: https://docs.oracle.com/database/121/BRADV/rcmresind.htm#BRADV696.
Also see here for examples:
https://www.thegeekdiary.com/oracle-database-12c-new-feature-rman-recover-table/
https://oracle-base.com/articles/12c/rman-table-point-in-time-recovery-12cr1
I have a situation wherein, as part of an online transaction, I have to save some data into another database; a slight latency (a few seconds) in updating the other database is fine. Since both databases are Oracle, I have the three options below and need some insight into which one is better.
Oracle database links: I convert the SQL into PL/SQL and have my database take care of writing into the other Oracle database. In the DEV environment both databases are on the same server as different schemas, while in production they are two separate Oracle RACs separated by a few routers and switches.
Spring Batch: use a batch job to pick the transactions from my source database, process them, and write them into the target database. This way my online transactions would not fail if the other database ever goes down, hits a performance issue, or has a network problem, and if the job ever fails I can code for restartability. Is Spring Batch well suited for such an event-publishing case? Would I hit any challenges in the future?
2-phase commit: I simply implement 2PC and save the data in both databases in one transaction. Or, to make it more future proof, save the data in a messaging system as well as my source database (a rough sketch of this option follows below).
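For illustration, a minimal sketch of that last variant, assuming Spring with JMS; the service, table, and queue names are made up. Note that without 2PC (or an outbox table) the database commit and the message send are not atomic, which is exactly the trade-off I'm trying to weigh:

```java
// Hypothetical sketch: commit to the source database, then publish an event
// that a separate consumer applies to the second Oracle database.
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final JdbcTemplate jdbcTemplate;   // points at the source database
    private final JmsTemplate jmsTemplate;

    public OrderService(JdbcTemplate jdbcTemplate, JmsTemplate jmsTemplate) {
        this.jdbcTemplate = jdbcTemplate;
        this.jmsTemplate = jmsTemplate;
    }

    @Transactional
    public void placeOrder(long orderId, String payload) {
        // 1. Save as part of the online transaction in the source database.
        jdbcTemplate.update("INSERT INTO orders (id, payload) VALUES (?, ?)", orderId, payload);

        // 2. Publish the change; a listener on "order.events" applies it to the
        //    target database, tolerating a few seconds of latency.
        jmsTemplate.convertAndSend("order.events", payload);
    }
}
```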
I am developing a Spring MVC application.
For now I am using MySQL with InnoDB, but I have to develop the application to support other databases as well.
Can anyone please suggest how to handle concurrent SQL updates on a single record?
Suppose two users are trying to update the same record; how should such a scenario be handled?
Note: my database structure depends on some configuration (it can change at runtime) and my Spring controller is a singleton.
Thanks.
Update:
Just for reference, I am going to implement versioning as described in https://stackoverflow.com/a/3618445/3898076.
Transactions are the way to go when it comes to concurrent SQL updates; in Spring you can use a transaction manager.
As for the database structure, as far as I know MySQL does not support transactions for DDL commands, so if you change the structure concurrently with updates you're likely to run into problems.
To handle multiple users working on the same data, you need to implement a manual "lock" or "version" field on the table to keep track of the last update.
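A minimal sketch of that version-field approach, assuming JPA (the entity name is made up); the persistence provider then enforces optimistic locking for you, and the same idea works across databases:

```java
// On a concurrent update, the second commit fails with an
// OptimisticLockException that you can catch and report to the user.
import javax.persistence.*;

@Entity
public class ConfigRecord {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String value;

    @Version            // incremented automatically on every successful update
    private long version;

    // getters/setters omitted
}
```

If you do it by hand instead, the equivalent is an UPDATE with "... WHERE id = ? AND version = ?" and a check that exactly one row was affected.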
Does the term 'embedded database' carry different meaning from 'database'?
There are two definitions of embedded databases I've seen:
Embedded database as in a database system particularly designed for the "embedded" space (mobile devices and so on.) This means they perform reasonably in tight environments (memory/CPU wise.)
Embedded database as in databases that do not need a server, and are embedded in an application (like SQLite.) This means everything is managed by the application.
I've personally never seen the term used exactly as Wikipedia defines it, but that's probably my fault; it resembles my number 2 above quite a bit, though.
The word 'embedded' does add meaning, basically that the database is dedicated to a specific application rather than shared among multiple applications, to a degree hidden from the user of the application, and completely controlled by the application.
An embedded database is conceptually just a part of the application rather than a separate thing.
Just look at the usage of, for example, an embedded H2 database. You don't need a server running on your machine; your whole database is stored in one local file (originally it was two files). It is opened and locked when you connect to your DB, and unlocked when you disconnect.
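A tiny sketch of what that looks like from Java (the file path and table are arbitrary):

```java
// Embedded H2: no server process, the data lives in a local file next to the
// application.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EmbeddedH2Demo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/appdb", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS notes(id INT PRIMARY KEY, text VARCHAR(255))");
            st.execute("MERGE INTO notes KEY(id) VALUES (1, 'hello embedded world')");
        } // the database file is unlocked again once the connection is closed
    }
}
```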
When a developer embeds a database library inside an application and there is no need for an administrator, it is called an embedded database. The database is hidden, but data management via SQL (e.g. ITTIA DB SQL) or NoSQL (e.g. Berkeley DB) is accessible through APIs. Embedded databases are common in web development and device applications.
I am not very familiar with databases and what they offer outside of the CRUD operations.
My research has led me to triggers. Basically it looks like triggers offer this type of functionality:
(from Wikipedia)
There are typically three triggering events that cause triggers to "fire":
INSERT event (as a new record is being inserted into the database).
UPDATE event (as a record is being changed).
DELETE event (as a record is being deleted).
My question is: is there some way I can be notified in Java (preferably including the data that changed) by the database when a record is Updated/Deleted/Inserted using some sort of trigger semantics?
What might be some alternate solutions to this problem? How can I listen to database events?
The main reason I want to do this is a scenario like this:
I have 5 client applications all in different processes/existing across different PCs. They all share a common database (Postgres in this case).
Let's say one client changes a record in the DB that all 5 of the clients are "interested" in. I am trying to think of ways for the clients to be "notified" of the change (preferably with the affected data attached) instead of having them query for the data at some interval.
Using Oracle you can set up a trigger on a table and then have the trigger send a JMS message. Oracle has two different JMS implementations. You can then have a process that will 'listen' for the message using the JDBC driver. I have used this method to push changes out to my application instead of polling.
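A rough sketch of the listening side with plain JMS; the JNDI names are placeholders and the exact setup depends on which of Oracle's JMS implementations you pick:

```java
// Hypothetical JMS listener: a trigger on the table publishes a message, and
// this process reacts to it instead of polling the database.
import javax.jms.*;
import javax.naming.InitialContext;

public class ChangeListener {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();                       // JNDI config assumed
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = (Destination) ctx.lookup("jms/tableChangeQueue");

        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> {
            try {
                // The trigger put the changed row's key (or the data itself) here.
                String payload = ((TextMessage) message).getText();
                System.out.println("Row changed: " + payload);
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        connection.start();   // begin delivering messages to the listener
    }
}
```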
If you are using a Java database (H2) you have additional options. In my current application (SIEM) I have triggers in H2 that publish change events using JMX.
Don't mix up the database (which contains the data), and events on that data.
Triggers are one way, but normally you will have a persistence layer in your application. This layer can choose to fire off events when certain things happen - say to a JMS topic.
Triggers are a last-ditch thing, as you're then operating on relational items rather than on "events" on the data. (For example, an "update" could in reality map to a "company changed legal name" event.) If you rely on the DB, you'll have to map the inserts and updates back to real-life events... which you already knew about!
You can then layer other stuff on top of these notifications - like event stream processing - to find events that others are interested in.
James
Hmm. So you're using PostgreSQL and you want to "listen" for events and be "notified" when they occur?
http://www.postgresql.org/docs/8.3/static/sql-listen.html
http://www.postgresql.org/docs/8.3/static/sql-notify.html
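Here's a rough idea of how the Java side might consume those notifications with the PostgreSQL JDBC driver (the channel name and connection details are made up):

```java
// LISTEN on a channel, then pick up NOTIFY events from the driver.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class PgListenDemo {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN record_changes");   // subscribe to the channel
        }

        PGConnection pgConn = conn.unwrap(PGConnection.class);
        while (true) {
            // The driver only picks up notifications while talking to the server,
            // so poll (or issue a dummy query) periodically.
            PGNotification[] notifications = pgConn.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    System.out.println("Channel " + n.getName() + ": " + n.getParameter());
                }
            }
            Thread.sleep(500);
        }
    }
}
```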
Hope this helps!
Calling external processes from the database is very vendor specific.
Just off the top of my head:
SQL Server can call CLR programs from triggers,
PostgreSQL can call arbitrary C functions loaded dynamically,
MySQL can call arbitrary C functions, but they must be compiled in,
Sybase can make system calls if set up to do so.
The simplest thing to do is to have the insert/update/delete triggers make an entry in some log table, and have your Java program monitor that table. Good columns to have in your log table would be things like EVENT_CODE, LOG_DATETIME, and LOG_MSG.
Unless you require very high performance or need to handle hundreds of thousands of records, that is probably sufficient.
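A rough sketch of the monitoring side, polling via JDBC; the table and column names follow the suggestion above, everything else is made up:

```java
// Poll the log table for rows newer than the last LOG_DATETIME we have seen.
import java.sql.*;

public class LogTablePoller {
    public static void main(String[] args) throws Exception {
        Timestamp lastSeen = new Timestamp(0);
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {
            String sql = "SELECT event_code, log_datetime, log_msg FROM change_log "
                       + "WHERE log_datetime > ? ORDER BY log_datetime";
            while (true) {
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setTimestamp(1, lastSeen);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastSeen = rs.getTimestamp("log_datetime");
                            System.out.println(rs.getString("event_code") + ": " + rs.getString("log_msg"));
                        }
                    }
                }
                Thread.sleep(5000);   // poll interval; tune to taste
            }
        }
    }
}
```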
I think you're confusing two things. They are both highly db vendor specific.
The first I shall call "triggers". I am sure there is at least one DB vendor who thinks triggers are different than this, but bear with me. A trigger is a server-side piece of code that can be attached to a table. For instance, you could run a PSQL stored procedure on every update in table X. Some databases allow you to write these in real programming languages, others only in their variant of SQL. Triggers are typically reasonably fast and scalable.
The other I shall call "events". These are triggers that fire in the database and allow you to define an event handler in your client program, i.e. any time there is an update to the client's database, fire updateClientsList in your program. For instance, using Python and Firebird, see http://www.firebirdsql.org/devel/python/docs/3.3.0/beyond-python-db-api.html#database-event-notification
I believe the previous suggestion to use a monitor is an equivalent way to implement this using some other database, maybe Oracle? MSSQL Notification Services, mentioned in another answer, is another implementation of this as well.
I would go so far as to say you'd better REALLY know why you want the database to notify your client program, otherwise you should stick with server side triggers.
What you're asking completely depends on both the database you're using and the framework you're using to communicate with your database.
If you're using something like Hibernate as your persistence layer, it has a set of listeners and interceptors that you can use to monitor records going in and out of the database.
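For example, one lightweight variant uses JPA lifecycle callbacks, which Hibernate supports; the listener and entity names here are purely illustrative, and note that this only sees changes made through the application, not changes made directly in the database:

```java
// A callback class attached to an entity; methods fire after persistence events.
import javax.persistence.*;

public class AuditListener {

    @PostPersist
    public void afterInsert(Object entity) {
        System.out.println("Inserted: " + entity);
    }

    @PostUpdate
    public void afterUpdate(Object entity) {
        System.out.println("Updated: " + entity);
    }

    @PostRemove
    public void afterDelete(Object entity) {
        System.out.println("Deleted: " + entity);
    }
}

@Entity
@EntityListeners(AuditListener.class)
class Customer {
    @Id @GeneratedValue
    private Long id;
    private String name;
}
```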
There are a few different techniques here depending on the database you're using. One idea is to poll the database (which I'm sure you're trying to avoid). Basically you could check for changes every so often.
Another solution (if you're using SQL Server 2005) is to use Notification Services, although this technology is supposedly being replaced in SQL 2008 (we haven't seen a pure replacement yet, but Microsoft has talked about it publicly).
This is usually what the standard client/server application is for. If all inserts/updates/deletes go through the server application, which then modifies the database, then client applications can find out much easier what changes were made.
If you are using PostgreSQL, it has the capability to send notifications that a JDBC client can listen for.
I would suggest using a "last updated" timestamp column, possibly together with the user who updated the record, and then letting the clients check their local record timestamp against that of the persisted record.
The added complexity of adding callback/trigger functionality is just not worth it in my opinion, unless it is supported by the database backend and the client library used, like for instance the Notification Services offered by SQL Server 2005 used together with ADO.NET.