Multitenancy using data segregation in Oracle - java

I'm working on a new application that will be used by dozens of clients, each client with dozens of users. I really don't want to deal with multiple datasources (that may lead to performance issues), so I chose to use a single database for all tenants and to prepare the application for multitenancy through logical data segregation (adding a tenant id to my entities and indexing all tables by this ID).
But I was wondering: if a client needs to restore its backup, is it possible (or viable) to avoid taking the entire system down by creating all tables partitioned by tenant id? In that case, can I perform a backup/restore per partition in Oracle?
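For context, the logical segregation described above usually boils down to an entity layout along these lines (a sketch only; the Invoice entity and column names are illustrative, not taken from the question):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.Table;

// Every tenant-owned table carries a TENANT_ID column with an index on it,
// and every query filters on that column.
@Entity
@Table(name = "INVOICE",
       indexes = @Index(name = "IDX_INVOICE_TENANT", columnList = "TENANT_ID"))
public class Invoice {

    @Id
    private Long id;

    @Column(name = "TENANT_ID", nullable = false)
    private Long tenantId;

    // ... business columns, getters and setters omitted
}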

Yes, backup and restore of table partitions is possible. See here in the documentation: https://docs.oracle.com/database/121/BRADV/rcmresind.htm#BRADV696.
Also see here for examples:
https://www.thegeekdiary.com/oracle-database-12c-new-feature-rman-recover-table/
https://oracle-base.com/articles/12c/rman-table-point-in-time-recovery-12cr1

Related

Synchronize tables in SpringBoot multitenant architecture

I have a Spring Boot application running on a multitenant architecture.
I have two databases, Admin and Client (both MySQL), and both databases have a User table.
The Client can add users to its User table, but I need them to be synchronized into the User table of the Admin database.
Is there a way I can achieve this?
I've read about Flyway migrations, but I think they deal with database schema changes rather than data.
Please ignore my mistakes as this is my first question; any help would be appreciated.
This looks like a solution to your problem:
SymmetricDS is software that replicates relational database tables between multiple databases. It can also be used to replicate files and directories between multiple hosts. It uses a light-weight, web-based protocol to send and receive data, which makes it easy to work with firewalls. Replication is done in the background asynchronously, allowing data changes in offline mode. It supports most commercial and open source database platforms.
How does it work?
Triggers are installed in the database to guarantee that data changes are captured. This means that applications continue to use the database as usual without any special driver software. The triggers are written to be as small and efficient as possible. Routing and syncing of data is done outside of the database in the SymmetricDS process.
SymmetricDS supports many databases and can replicate across different databases, including Oracle, MySQL, MariaDB, PostgreSQL, MS SQL and many more.
https://www.symmetricds.org/docs/faq
You need to raise some event in the flow where the client adds a user to the User table.
If this "client" flow is in the same Java service, you can use Spring's asynchronous event handling, or mark the method that does the data copy with @Async. This ensures the copy happens in a separate thread.
If the "client" flow is in different java service, then any publisher-subscriber model can be used (some opensource frameworks available are kafka, rabbitmq etc).
Now, to connect to two datasources at the same time, Spring's AbstractRoutingDataSource comes in handy, as it chooses the datasource based on a "lookup key". Alternatively, you can simply define two datasource beans in your config (since the set of databases is fixed in your case).
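A sketch of the routing approach (AbstractRoutingDataSource is Spring's class; the lookup-key holder and key names are illustrative):

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Picks the "admin" or "client" DataSource per call, based on a key held in a ThreadLocal.
public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    public static final ThreadLocal<String> CURRENT_DB = ThreadLocal.withInitial(() -> "client");

    @Override
    protected Object determineCurrentLookupKey() {
        return CURRENT_DB.get();
    }

    // Wires both target DataSources into the router.
    public static DataSource build(DataSource adminDs, DataSource clientDs) {
        TenantRoutingDataSource router = new TenantRoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put("admin", adminDs);
        targets.put("client", clientDs);
        router.setTargetDataSources(targets);
        router.setDefaultTargetDataSource(clientDs);
        router.afterPropertiesSet();
        return router;
    }
}

Code that needs the Admin database sets CURRENT_DB.set("admin") before the call and resets it afterwards.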

How to synchronize data between MongoDB and OpenLDAP databases

I have two databases for one system. One is OpenLDAP and the other is MongoDB. To be specific, the OpenLDAP instance is used by the Atlassian Crowd we run. I need to keep the users in these two databases synchronized. That is:
If I create a user, it is created in OpenLDAP by default, and it has to be created in MongoDB as well.
In the past there were issues handling this, so there may be users who exist in OpenLDAP but not in MongoDB. I need to find these users as well.
If I delete or update a user in one database, I need the delete or update to happen in both DBs.
I am also going to keep a cache copy of LDAP in Redis. What is the best way to synchronize data between these two databases to meet the above expectations?
If it helps, I am using Java in the backend.
Two possible ways:
(Preferred) Design your code so that you can "plug in" database handlers for the different databases and access them through a facade, so that callers never worry about the underlying stores. Creating a user, for example, would be something like this:
createUser() -> foreach dbhandle do dbhandle->createUser() forend
The same applies to deleting or updating any data (see the Java sketch below). This approach should also solve problem 2.
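A rough Java sketch of that facade (the handler interface and its implementations are hypothetical):

import java.util.List;

// One handler per backing store (OpenLDAP, MongoDB, ...).
interface UserStoreHandler {
    void createUser(String username);
    void deleteUser(String username);
}

// The rest of the application only talks to this facade, which fans each
// operation out to every registered store, keeping them in step.
class UserStoreFacade {

    private final List<UserStoreHandler> handlers;

    UserStoreFacade(List<UserStoreHandler> handlers) {
        this.handlers = handlers;
    }

    void createUser(String username) {
        for (UserStoreHandler handler : handlers) {
            handler.createUser(username);
        }
    }

    void deleteUser(String username) {
        for (UserStoreHandler handler : handlers) {
            handler.deleteUser(username);
        }
    }
}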
Alternatively, you can update just one database and have a script running in the background that updates the other databases. This approach lets you work with a single database and leaves the rest to the script, but it is more expensive and less reliable (you might read from a database that has not yet been updated from the master).

Concurrency of JPA when same DB used by other applications

I am developing a Spring MVC + JPA web application. When this application is deployed to production, the database it uses will also be used by two other applications (.NET and VB) at the same time. I am managing concurrency in my JPA app through a version column.
Will there be any issues when running these three apps against the same database at the same time? All systems use the same tables.
I will also have to build another application (most probably Spring MVC + JPA) against the same DB in the future. Will there be any issues running both apps at the same time (in terms of persisting to the same tables from both apps, etc.)?
Multiple applications accessing the same database tables can be (and usually is) a concurrency nightmare. Just adding a version column to tables does not help because each application may use different concurrency management mechanisms. Common problems encountered when sharing a database across applications (assuming read-write access for all and in no particular order):
Concurrency mismatch: Imagine one application using optimistic locking throughout, another using pessimistic locking and a third using no locking at all (since all are maintained by different teams). Even if there is a central architecture group handing out good application architecture advice to everyone, developers can do whatever they like and end up causing concurrency hell.
Deadlocks: Imagine one application using SERIALIZABLE isolation level for all transactions and performing long-running operations on the database, causing queues, deadlocks and timeouts. Even if other applications do not corrupt data, they may end up seeing too many errors due to deadlocks and timeouts, reducing their usefulness.
Data validity: It is common for developers to use in-memory caches for storing fixed or slow-changing data to save repeated database roundtrips. In a JPA application backed by Hibernate, developers could use a second-level cache. However, if another application updates the database, the cache would be holding stale (and therefore inaccurate) data.
Data integrity: Different applications may use different parts of the data. If all are allowed to update the common data independently, how is referential integrity maintained? What about business rules? Will they have to be duplicated across the applications?
Team communication overhead: Each team has to keep the other teams informed about changes they need to make to the schema so that they do not step on each other's toes. This may even reduce agility if other teams do not agree with changes required by a team or if priorities cannot be aligned.
Schema errors: What happens if someone deletes, renames, moves or archives tables or columns required by an application?
Access control: If the underlying data is sensitive and requires authentication and authorization, access control checks will have to be duplicated across the applications.
Ownership: Who owns the common tables becomes a challenge as each team may have its own stakeholders, roadmap, constraints and priorities.
Portability: If the underlying database (vendor and type) has to be changed some day, all applications will need to be changed.
Auditing and versioning: If common data needs to be audited and/or versioned (multiple versions of data rows), the code will either have to be duplicated across applications or built into the database (which may not be easy within the database, for example, knowing which application user changed a record). If done within the database, portability is affected again because the syntax may vary from one database vendor to another.
Database-specific optimizations: Some scenarios (for example, reporting) may require native queries or other optimizations that may not be possible or too hard to perform using an ORM technology like JPA. If multiple applications require access to optimized queries, such functionality will either have to be built into the database (stored procedures) or duplicated across the applications (in possibly different ways).
Archival and partitioning: Different applications may have different archival and partitioning requirements. Who makes sure that everyone's needs are met equally?
Standardization: If different teams are allowed to manage the common database objects on their own, how are things such as data dictionary, naming conventions, etc. managed? A table that is used by different teams may have differently (and mostly annoyingly) named columns, constraints, etc.
A better approach is to expose the common database tables using a service. The service can keep things consistent and controlled at all times. A common team handling changes to the service can ensure that unexpected changes are minimized and also communicated to all affected parties in a timely manner.
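For reference, the version-column (optimistic locking) approach mentioned in the question looks roughly like this in JPA (a sketch; the entity is illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class PurchaseOrder {

    @Id
    private Long id;

    // JPA increments this on every update and adds "AND version = ?" to the UPDATE;
    // if another transaction changed the row first, an OptimisticLockException is
    // thrown instead of silently overwriting the other change.
    @Version
    private Long version;

    // ...
}

Note that this only protects writes that go through a provider honoring the column; the .NET and VB applications would have to apply the same convention (check and increment the column) for the scheme to hold, which is exactly the concurrency-mismatch problem described above.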

How to handle concurrent sql updates, given database structure can change at runtime

I am developing a Spring MVC application.
For now I am using InnoDB/MySQL, but I have to develop the application to support other databases as well.
Can anyone please suggest how to handle concurrent SQL updates on a single record?
Suppose two users are trying to update the same record; how should such a scenario be handled?
Note: my database structure depends on some configuration (it can change at runtime) and my Spring controller is a singleton.
Thanks.
Update:
Just for reference, I am going to implement versioning as described in https://stackoverflow.com/a/3618445/3898076.
Transactions are the way to go when it comes to concurrent SQL updates; in Spring you can use a transaction manager.
As for the database structure: as far as I know, MySQL does not support transactions for DDL commands, so if you change the structure concurrently with updates you are likely to run into problems.
To handle multiple users working on the same data, you need to implement a manual "lock" or "version" field on the table to keep track of the last update.
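A minimal sketch of such a manual version check using Spring's JdbcTemplate (table and column names are illustrative); because it is plain SQL, it behaves the same way on InnoDB/MySQL and on other databases:

import org.springframework.dao.OptimisticLockingFailureException;
import org.springframework.jdbc.core.JdbcTemplate;

public class AccountUpdater {

    private final JdbcTemplate jdbcTemplate;

    public AccountUpdater(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Updates the row only if nobody changed it since we read it.
    public void updateBalance(long id, long newBalance, long expectedVersion) {
        int updated = jdbcTemplate.update(
                "UPDATE account SET balance = ?, version = version + 1 "
                        + "WHERE id = ? AND version = ?",
                newBalance, id, expectedVersion);
        if (updated == 0) {
            // Another user updated the record first: reload and retry, or report a conflict.
            throw new OptimisticLockingFailureException("account " + id + " was modified concurrently");
        }
    }
}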

database sharing between two servers

My current setup is a single dedicated server with a Java/Hibernate app running on Tomcat, the Apache HTTP server, and MySQL.
I need to add a second server to share the load, using the same database as the first server.
The backend processing (excluding DB transactions) is time-consuming, hence the second server for backend processing.
Will there be any unwanted consequences of this setup? Is this the optimal setup?
My apps do updates/deletes and have transaction control as follows:
beginTransaction();
getSession().save(obj);
//sessionFactory.openSession().save(obj);
commitTransaction();
As long as only one of the apps does database updates on a shared table you should be fine. What you definitely don't want to happen is:
app1: delete/update table1.record24
app2: delete/update table1.record24
because when Hibernate writes the records, one of the processes will notice the data has changed and throw an error. And as a classic Heisenbug, it is really difficult to reproduce.
When, on the other hand, the responsibilities are clearly separated (the apps share data for reading, but do not delete/update the same tables), it should be fine. Document that behavior, though, as a future upgrade may not take it into account.
EDIT 1: Answering the comments
You overcome concurrency issues by design. For any given table:
Both apps may insert
Both apps may select
One of the apps may also update/delete in that table
Your frontend will probably insert into tables, and the backend can read those tables, update rows where necessary, create new result rows, and delete rows as cleanup.
Alternatively, when the apps communicate, the frontend can transfer ownership of the records for a given task to the business backend, which gives the ownership back when finished. Make sure the Hibernate cache is flushed (the transaction is committed) and no Hibernate objects of that task are still in use before transferring ownership.
The trick of the game is to ensure that Hibernate never attempts to write records that have been changed by the other app, as that results in a StaleStateException.
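A sketch of that hand-off step (Task and notifyBackend are hypothetical; the point is committing and evicting before the other app takes over the records):

import org.hibernate.Session;
import org.hibernate.Transaction;

public class TaskHandoff {

    public void handOverToBackend(Session session, Task task) {
        Transaction tx = session.beginTransaction();
        session.saveOrUpdate(task);
        tx.commit();             // flush pending changes so the other app sees them
        session.evict(task);     // stop tracking the object, so we never write a stale copy later
        // notifyBackend(task.getId());  // however the two apps communicate the transfer
    }
}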
An example of how I solved a similar problem:
app 1 receives data, and writes it in table1
app 2 reads table1, processes it, and writes/updates table2
app 2 deletes the processed records in table1
Note that app 1 only writes to the shared table. It also reads from, writes to, and updates other tables, but those are not accessed by app 2, so that's no problem.
It is a fairly common approach, both for failover and load balancing.
Here's a short article describing the setup:
http://raibledesigns.com/tomcat/
Beware of singletons in this setup.
