I'm trying to change the default database at runtime. The program starts with the default database (main_database), and after selecting some tables I want to switch to another database (second_database).
I use this code, but it doesn't work:
String qlQuery = "USE second_database;";
Query query = entityManager.createNativeQuery(qlQuery);
query.getResultList();
Server server = serverService.findById(1);
but it seems like getResultList() only works for SELECT statements.
How can I solve that problem?
I'm using Spring Boot and JPA.
Thanks!
SOLUTION:
Multi-tenancy (for Spring Boot and JPA):
https://javadeveloperzone.com/spring-boot/spring-boot-jpa-multi-tenancy-example/
I can think of two options, but it all depends on your use case:
1. Creating live views in "default_database"
You are working with 31 child databases, but do you need access to all of their data? If this cannot be answered up front (e.g. you expect, as a future requirement, to need access to arbitrary tables from any of these 31 databases), live views are a no-go.
If it is determined that all the data your application will ever need is, say, tables A and B from DB 1, table C from DB 2, tables D and E from DB 3, and so on, it might be a good approach to create views.
You should also take into account table data size and the operations to be performed (e.g. read-only, or writes as well?).
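To illustrate option 1 on the JPA side, such a view can be mapped as a read-only entity. This is only a sketch: the view name v_second_db_server and its columns are made up, and the view itself is assumed to already exist in the default database.

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.Immutable;

@Entity
@Immutable                          // the view is read-only from JPA's side
@Table(name = "v_second_db_server") // e.g. CREATE VIEW v_second_db_server AS SELECT ... FROM second_database.server
public class ServerView {

    @Id
    private Long id;

    @Column(name = "name")
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
}
```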
2. Configuring Hibernate for multitenancy:
You can configure Hibernate to execute queries against different databases.
You need to take care of the following things:
the multi-tenancy strategy - for MySQL, use MultiTenancyStrategy.DATABASE
the MultiTenancyConnectionProvider implementation which you can pass via hibernate.multi_tenant_connection_provider property
the CurrentTenantIdentifierResolver implementation which you can pass via hibernate.tenant_identifier_resolver property
You can follow the official doc for more details and code samples as well as this excellent hands-on article by the master himself.
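As a minimal sketch of the resolver piece: the TenantContext thread-local holder below is a hypothetical helper, and main_database is the question's default. Wiring it up is then a matter of setting hibernate.multiTenancy=DATABASE plus the two properties listed above.

```java
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

// Hypothetical thread-local holder for the currently selected database.
class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
    static void set(String tenant) { CURRENT.set(tenant); }
    static String get() { return CURRENT.get(); }
}

public class DatabaseTenantResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        // Fall back to the default database when no tenant has been selected yet.
        String tenant = TenantContext.get();
        return tenant != null ? tenant : "main_database";
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}
```

Switching databases then becomes a call like TenantContext.set("second_database") before running the query, instead of issuing a native USE statement.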
Related
How do I migrate data from one schema to another (the tables have also changed), both belonging to different databases, meaning I have to establish two connections? Can someone help me with inputs on how I can achieve this using Java.
Can I use Liquibase to migrate data from one database to another? Please note I have to establish two DB connections since my schemas belong to different databases, and the table design has also changed.
Another option: let SQL do all the work, no Java needed. Let's call the databases dbfrom and dbto. Sign in to dbto and create a database link; your task then basically becomes an INSERT statement.
-- in database dbto
create database link link_to_dbfrom;
-- ensure user has appropriate access on both databases.
-- copy data from dbfrom to dbto
insert into schema_in_dbto.table_in_dbto( column list)
select (column list)
from schema_in_dbfrom.table_in_dbfrom#link_to_dbfrom;
I am given a situation where a database has been in use for the last 6 months. From now on, a new database will be used. All insert operations will go to the new database, but for retrievals (all gets), both the old and the new database have to be searched. How should a microservice be designed, and how can the database configuration be done to achieve this?
Though not ideal, you can define multiple DataSources in your Spring Boot project. Define a controller that intercepts the get call and routes it to your service, which holds the logic to talk to the two different sources and build the response for your REST queries. You can find an example here:
https://www.baeldung.com/spring-data-jpa-multiple-databases
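As a rough sketch of the routing logic: the entity and repository types below are hypothetical, and each repository is assumed to be bound to its own DataSource/EntityManagerFactory as in the linked article.

```java
import java.util.Optional;
import org.springframework.stereotype.Service;

// NewOrderRepository and OldOrderRepository are hypothetical Spring Data
// repositories, each wired to its own DataSource (new DB vs. old DB).
@Service
public class OrderLookupService {

    private final NewOrderRepository newOrderRepository;
    private final OldOrderRepository oldOrderRepository;

    public OrderLookupService(NewOrderRepository newOrderRepository,
                              OldOrderRepository oldOrderRepository) {
        this.newOrderRepository = newOrderRepository;
        this.oldOrderRepository = oldOrderRepository;
    }

    // Writes always go to the new database.
    public Order save(Order order) {
        return newOrderRepository.save(order);
    }

    // Reads check the new database first, then fall back to the old one.
    public Optional<Order> findById(Long id) {
        Optional<Order> result = newOrderRepository.findById(id);
        return result.isPresent() ? result : oldOrderRepository.findById(id);
    }
}
```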
Another thing you can do is introduce Elasticsearch: index all your old DB data, make the get/inquiry calls hit the index, and query Elasticsearch rather than the DB.
We are migrating a whole application, originally developed in Oracle Forms a few years back, to a Java (7) web-based application with Hibernate (4.2.7.Final) and Hibernate Search (4.1.1.Final).
One of the requirements is: while users are using the new migrated version, they must still be able to use the Oracle Forms version - so the Hibernate Search indexes will get out of sync. Is it feasible to implement a servlet so that some PL/SQL can call a link that updates the local indexes on the application server (AS)?
I thought of implementing some sort of clustering mechanism for Hibernate, but as I read through the documentation I realised that while clustering may be a good option for scalability and performance, it may be a bit overkill for keeping legacy data in sync.
Does anyone have any idea how to implement a service, accessible via a servlet, that updates the local AS indexes for a given model entity with a given ID?
I don't know what exactly you mean by the clustering part, but anyway:
It seems like you are facing a problem similar to mine. I am currently working on a Hibernate Search adaptation for JPA providers other than Hibernate ORM (meaning EclipseLink, TopLink, etc.), and I am building an automatic reindexing feature at the moment. Since JPA doesn't have an event system suitable for reindexing with Hibernate Search, I came up with the idea of using triggers at the database level to keep track of everything.
For a basic OneToOne relationship it's pretty straightforward, and for things like relation tables or anything that is not stored in the main table of an entity it gets a bit trickier, but once you have a system for OneToOne relationships the next step is not that hard. Let's start:
Imagine two entities, Place and Sorcerer, in the Lord of the Rings universe. To keep things simple, let's just say they are in a (quite restrictive :D) 1:1 relationship with each other. Normally you end up with two tables named SORCERER and PLACE.
Now you have to create 3 triggers (one for CREATE, one for DELETE and one for UPDATE) on each table (SORCERER and PLACE) that record which entity has changed (only the id; for mapping tables there are always multiple ids) and how (CREATE, UPDATE, DELETE) into special UPDATE tables. Let's call these PLACE_UPDATES and SORCERER_UPDATES.
In addition to the ID of the original object that changed and the event type, these tables need an ID field that is UNIQUE among all UPDATE tables. This is needed because, when you feed information from the update tables into the Hibernate Search index, you have to make sure the events are applied in the right order or you will break your index. How such a UNIQUE ID can be generated on your database should be easy to find on the internet/Stack Overflow.
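As a rough sketch of what one such trigger could look like, created here through plain JDBC: the syntax is Oracle's (chosen since the question involves Oracle Forms), and the shared sequence UPDATE_EVENT_SEQ providing the globally unique event id, as well as the column names, are assumptions.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TriggerSetup {

    public static void createSorcererInsertTrigger(Connection connection) throws SQLException {
        try (Statement statement = connection.createStatement()) {
            // One sequence shared by all *_UPDATES tables keeps event ids
            // unique across tables, so events can later be replayed in order.
            statement.execute(
                "CREATE OR REPLACE TRIGGER SORCERER_INS_TRG\n"
                + "AFTER INSERT ON SORCERER\n"
                + "FOR EACH ROW\n"
                + "BEGIN\n"
                + "  INSERT INTO SORCERER_UPDATES (EVENT_ID, ENTITY_ID, EVENT_TYPE)\n"
                + "  VALUES (UPDATE_EVENT_SEQ.NEXTVAL, :NEW.ID, 'CREATE');\n"
                + "END;");
            // The UPDATE and DELETE triggers are analogous, writing
            // 'UPDATE' / 'DELETE' as the event type.
        }
    }
}
```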
Okay. Now that you have set up the triggers correctly, you just have to find a way to access all the UPDATES tables in a feasible fashion (I do this by querying multiple tables at once, sorting each query by our UNIQUE id field, and then comparing the first result of each query with the others) and then update the index.
This can be a bit tricky, and you have to find the correct way of dealing with each specific update event, but it can be done (that's what I am currently working on).
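For illustration, here is a minimal sketch of draining one update table into the index with the Hibernate Search JPA API. The table and column names (SORCERER_UPDATES, EVENT_ID, ENTITY_ID, EVENT_TYPE) follow the naming above but are assumptions, and Sorcerer is the example entity.

```java
import java.util.List;
import javax.persistence.EntityManager;
import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;

public class SorcererIndexUpdater {

    private final EntityManager em;

    public SorcererIndexUpdater(EntityManager em) {
        this.em = em;
    }

    @SuppressWarnings("unchecked")
    public void applyPendingEvents() {
        FullTextEntityManager ftem = Search.getFullTextEntityManager(em);
        // Process events in the order they happened, via the globally unique id.
        List<Object[]> events = em.createNativeQuery(
                "SELECT ENTITY_ID, EVENT_TYPE FROM SORCERER_UPDATES ORDER BY EVENT_ID")
                .getResultList();
        for (Object[] event : events) {
            Long id = ((Number) event[0]).longValue();
            String type = (String) event[1];
            if ("DELETE".equals(type)) {
                ftem.purge(Sorcerer.class, id);   // remove the stale document
            } else {                              // CREATE or UPDATE
                Sorcerer sorcerer = em.find(Sorcerer.class, id);
                if (sorcerer != null) {
                    ftem.index(sorcerer);         // (re)index the current state
                }
            }
        }
        // Afterwards the processed rows can be deleted from SORCERER_UPDATES.
    }
}
```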
If you're interested in that part, you can find it here:
https://github.com/Hotware/Hibernate-Search-JPA/blob/master/hibernate-search-db/src/main/java/com/github/hotware/hsearch/db/events/IndexUpdater.java
The link to the whole project is:
https://github.com/Hotware/Hibernate-Search-JPA/
This uses Hibernate-Search 5.0.0.
I hope this was of help (at least a little bit).
And about your remote indexing problem:
The update tables can easily be used as a kind of event dump until you send the events to the remote machine whose index is to be updated.
I'm using MongoDB and PostgreSQL in my application. We need MongoDB because any number of new fields might be added, and we store the data for those in MongoDB.
We are storing our fixed field values in PostgreSQL and custom field values in MongoDB.
E.g.
**Employee Table (RDBMS):**

| id | Name  | Salary |
|----|-------|--------|
| 1  | Krish | 40000  |
**Employee Collection (MongoDB):**
{
    _id: <some autogenerated id of MongoDB>,
    instanceId: 1,    // the id from SQL (MANUALLY ASSIGNED)
    employeeCode: "A001"
}
We fetch the records from SQL and, using their ids, fetch the related records from MongoDB. Then we map the results to get the values of the new fields and send everything to the UI.
Now I'm searching for an optimized solution to get the MongoDB results into the PostgreSQL POJO/model, so I don't have to fetch the data from MongoDB manually by passing the SQL ids and then mapping them again.
Is there any way to connect MongoDB with PostgreSQL through columns (here, the id of the RDBMS and the instanceId of MongoDB), so that with one fetch I can get the related Mongo result too? Any kind of return type is acceptable, but I need all of the data in one call.
I'm using Hibernate and Spring in my application.
Using Spring Data might be the best solution for your use case, since it supports both:
JPA
MongoDB
You can still get all the data in one request, but that doesn't mean you have to use a single DB call: you can have one service call which spans two database calls. Because the PostgreSQL row is probably the primary entity, I advise you to share the PostgreSQL primary key with MongoDB too.
There's no need for separate IDs; this way you can simply fetch the SQL row and the Mongo document by the same ID. Sharing the same ID also lets you run those two lookups concurrently and merge the results before returning from the service call, so the service method's duration will not be the sum of the two repository calls but the maximum of the two.
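A minimal sketch of such a service call, assuming a JPA repository for the PostgreSQL entity and a Spring Data MongoDB repository for the document, both keyed by the shared ID (all type names here are hypothetical):

```java
import java.util.concurrent.CompletableFuture;
import org.springframework.stereotype.Service;

// EmployeeRepository (JPA/PostgreSQL), EmployeeDocumentRepository (MongoDB),
// Employee, EmployeeDocument and EmployeeView are hypothetical types.
@Service
public class EmployeeService {

    private final EmployeeRepository employeeRepository;
    private final EmployeeDocumentRepository documentRepository;

    public EmployeeService(EmployeeRepository employeeRepository,
                           EmployeeDocumentRepository documentRepository) {
        this.employeeRepository = employeeRepository;
        this.documentRepository = documentRepository;
    }

    public EmployeeView findById(long id) {
        // Fire both lookups concurrently; total latency is the max of the two.
        CompletableFuture<Employee> fixedFields = CompletableFuture.supplyAsync(
                () -> employeeRepository.findById(id).orElse(null));
        CompletableFuture<EmployeeDocument> customFields = CompletableFuture.supplyAsync(
                () -> documentRepository.findById(id).orElse(null));
        // Merge the fixed SQL columns and the dynamic Mongo fields into one view.
        return new EmployeeView(fixedFields.join(), customFields.join());
    }
}
```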
Astonishingly, yes, you potentially can. There's a foreign data wrapper named mongo_fdw that allows PostgreSQL to query MongoDB. I haven't used it and have no opinion as to its performance, utility or quality.
I would be very surprised if you could effectively use this via Hibernate, unless you can convince Hibernate that the FDW mapped "tables" are just views. You might have more luck with EclipseLink and their "NoSQL" support if you want to do it at the Java level.
Separately, this sounds like a monstrosity of a design. There are many sane ways to do what you want within a decent RDBMS, without going for a hybrid database platform. There's a time and a place for hybrid, but I really doubt your situation justifies the complexity.
Just use PostgreSQL's json/jsonb support for dynamic mappings. Or use traditional options like storing JSON as text fields, storing XML, or even EAV mapping. Don't build a Rube Goldberg machine.
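For comparison, a small sketch of the jsonb route: the dynamic fields live in a jsonb column next to the fixed ones, so a single PostgreSQL query returns both. Table and column names are illustrative.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class EmployeeJsonbDao {

    public String findEmployeeCode(Connection connection, long id) throws SQLException {
        // ->> extracts a jsonb field as text; fixed and dynamic fields
        // come back in one round trip, no second store involved.
        try (PreparedStatement ps = connection.prepareStatement(
                "SELECT name, salary, custom_fields ->> 'employeeCode' AS employee_code"
                + " FROM employee WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("employee_code") : null;
            }
        }
    }
}
```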
I have recently been working on a bilingual project, and for some reference tables I need to sort the data. Because it is bilingual, the data comes from two languages (in my case English and French), and I'd like to sort them all together; for example, Île should come before Inlet.
An ordinary ORDER BY puts Île at the end of the list. I finally came up with using a nativeQuery and sorting the data with the database engine's own facility (in Oracle, that means using NLS_SORT).
But that ties me to a database engine and version, so if, for example, I switch the database to PostgreSQL, the application will break. I was looking for a native JPA solution (if one exists) or any other solution.
To achieve this without using a native query in the JPA definition, I can see only two ways:
Create a DB view which includes escaped/translated columns based on DB functions. The DB differences are then confined to the CREATE VIEW statement. You can define a OneToOne relation property to the original entity.
Create an extra column which stores the escaped values and sort by it. The application can perform the escape/translate step before storing the data in the DB, using JPA entity listeners or in the persist/merge methods (see the sketch below).
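A sketch of that second option, assuming a simple entity with an extra accent-stripped sort column maintained by JPA lifecycle callbacks (the entity and column names are illustrative):

```java
import java.text.Normalizer;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;

@Entity
public class PlaceName {

    @Id
    private Long id;

    private String name;

    @Column(name = "sort_key")
    private String sortKey;

    @PrePersist
    @PreUpdate
    void updateSortKey() {
        // Decompose accented characters (NFD) and drop the combining marks,
        // so "Île" yields the key "ile" and sorts before "inlet" everywhere.
        sortKey = Normalizer.normalize(name, Normalizer.Form.NFD)
                            .replaceAll("\\p{M}", "")
                            .toLowerCase();
    }
}
```

With this in place, a plain JPQL "ORDER BY p.sortKey" behaves the same on Oracle, PostgreSQL, or any other engine.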
Good luck!