I am forced to discover my database endpoint at run time. Specifically, I have to discover the Cassandra contact points (not nodes, the actual contact points) via a web service before I can connect to Cassandra. There is currently no way for me to pre-load the contact points into a properties file prior to launch.
I am currently discovering the endpoints and then manually initializing the database driver, which is cumbersome and error-prone.
I would really, really like to use a JPA-like model for data access, and I'm fond of using Spring Data for straight SQL, but the only ways I can find to initialize Spring Data are through Spring configuration initialization.
Is there a way to delay initialization of Spring Data, override its initialization, or in any other way start up Spring Data after I have retrieved the contact points from an HTTP client call?
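One option is to do the discovery yourself before the Spring context comes up, then hand the result to Spring Data Cassandra. A minimal sketch of the discovery step (the discovery URL and the comma-separated response format are assumptions about your web service):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: resolve Cassandra contact points from a discovery web service
// before any Spring Data Cassandra configuration is created.
public class ContactPointDiscovery {

    // Parse a payload like "10.0.0.5:9042, 10.0.0.6:9042" into a host list.
    static List<String> parseContactPoints(String body) {
        return Arrays.stream(body.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }

    // Blocking HTTP call; run this once at startup, before the Spring
    // context (and hence Spring Data) initializes.
    static List<String> discover(String discoveryUrl) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(discoveryUrl)).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return parseContactPoints(response.body());
    }
}
```

The resolved list can then reach Spring Data in a couple of ways: set the contact-points property as a system property before calling SpringApplication.run (the exact property key depends on your Spring Boot version), or extend AbstractCassandraConfiguration and override getContactPoints() to return the discovered hosts. Either way the discovery happens first and Spring Data initializes normally afterwards.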
Related
We are currently looking into best practices for Cosmos DB and its Java SDK in combination with multitenancy. We have a simple Spring Boot based Java service that provides REST interfaces for CRUD operations on data in a Cosmos DB using the SQL API. Now we need to introduce multitenancy into it. The tenant classifier will be read from a JWT when a REST request is received and will be available in the request's ThreadLocal context.
Current considerations are to keep each tenant's data in either a separate container or a separate database under one database account, as described in the Microsoft document Multitenancy and Azure Cosmos DB.
Having all data in one container using the tenant classifier as a partition key will not work due to the amount of data, plus we need separation of the data to prevent the noisy neighbour problem.
Are there any best practices on how to design the data access layer, also with regard to performance pitfalls?
Our understanding at the moment
Use one CosmosClient per application (as the initialization takes time)
Container names are defined at the entity level (@Container)
The official sample (Azure Spring Boot Cosmos Samples) creates individual repositories and database configs, which sounds wrong to me
Basically
Should we create one CosmosDB client and dynamically (how?) set the container name or database name before executing a CRUD operation?
Should we create separate clients for every tenant and reuse those instances?
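One pattern that addresses the questions above is a lazily populated per-tenant client cache keyed by the tenant id from the ThreadLocal context. The sketch below is illustrative, not the SDK's API: TenantClients and the Function-based factory are assumptions, and in a real service the factory would build the SDK client for the tenant's database or container.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch: reuse one (expensive-to-create) client per tenant, creating it
// lazily on first use. The client type C stands in for the real SDK client.
public class TenantClients<C> {

    // Tenant id for the current request, populated from the JWT by a filter.
    public static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    private final Map<String, C> clients = new ConcurrentHashMap<>();
    private final Function<String, C> factory;

    public TenantClients(Function<String, C> factory) {
        this.factory = factory;
    }

    // Create the client at most once per tenant, then reuse it.
    public C forCurrentTenant() {
        String tenant = TENANT.get();
        if (tenant == null) {
            throw new IllegalStateException("no tenant bound to this request");
        }
        return clients.computeIfAbsent(tenant, factory);
    }
}
```

Whether you cache one client per tenant or one client per application with a dynamically chosen database/container name depends on whether tenants share a database account; the caching shape stays the same either way.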
I had some trouble wording the question, but basically: where I work, it is common to create a Java Spring REST API that connects to a database, and a front-end application then uses that API (web app -> service API -> database). This couples the service API to the data store, and the API is often specific to the front-end application's use case. I see many service APIs making the same GET calls against the same database, which seems wrong to me.
I believe it would be better to create an API for the database itself and run the service API against that (web app -> service API -> datastore API -> database). This would allow all services to access the database without coupling to it directly, and without having to manage access to that database for 30 applications. It would also allow any application that only needs the data to use the existing datastore API. I remember an article about how Amazon requires an API in front of every data store, and this is how I would see that being handled.
Is the idea of having a data store API and connecting to it through a service API the right mindset? Or is there some other way I should be handling this?
Reading what you just wrote, it seems to me that what you have is "microservices done wrong": as I understand it, you have several applications accessing the same database/datastore, which is wrong from a microservices perspective.
Each application should have its own database, split along its boundaries. If an application needs to query or update another application's data, it should use an API rather than accessing the database directly.
The bottom line is: instead of having an API to access a single shared database, each application should limit itself to its own boundaries and access its own database.
Of course, one size doesn't fit all, but I do believe it's a good guideline.
Suppose I have a service that makes multiple API calls to multiple microservices.
Currently I am fetching all the required endpoints from a database and setting them as system properties.
I want to remove this database call and at the same time externalize the endpoints.
Can you please suggest the best ways to externalize these endpoints?
In a microservice-based architecture, what you ideally need is a service registry and the server-side service discovery pattern to handle calls to multiple services.
If you are sure that your API endpoints are never going to change, then keeping them in a .properties file makes sense; but if they do change, you will have to redeploy your service.
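If the endpoints are stable enough for the .properties route, a minimal sketch looks like this (the file keys and service names are assumptions):

```java
import java.io.IOException;
import java.io.Reader;
import java.util.Properties;

// Sketch: load service endpoints from an external .properties file
// instead of fetching them from a database at startup.
public class Endpoints {
    private final Properties props = new Properties();

    public Endpoints(Reader source) throws IOException {
        props.load(source);
    }

    // Look up the endpoint for a named service, e.g. "orders".
    public String forService(String name) {
        String url = props.getProperty("endpoint." + name);
        if (url == null) {
            throw new IllegalArgumentException("no endpoint configured for " + name);
        }
        return url;
    }
}
```

With Spring Boot the same idea maps onto externalized configuration (application properties or a config server such as Spring Cloud Config), which also lets you change the values without rebuilding the service; a service registry, as mentioned above, removes the static file entirely.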
You need to get the API endpoints from a directory that always holds the latest endpoints. This has to be a global DB or directory. Try using JNDI.
I have a web application which uses Spring and connects to MongoDB using Spring Data MongoDB.
I want to extend it to connect to a separate database instance for each tenant. There are some solutions available on Stack Overflow. One was mentioned in Multi-tenant mongodb database based with spring data, where the code was on GitHub at the following link:
https://github.com/Loki-Afro/multi-tenant-spring-mongodb
I have a question regarding this: will this approach work for concurrent requests?
I am also doing the same thing, autowiring the MongoTemplate and the repositories.
Since these beans are singletons, they are the same for every request. How, then, will the database change for different requests, and what happens when multiple requests are being served in parallel?
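The singleton beans and per-request databases are compatible because the tenant is typically held in a ThreadLocal: the MongoTemplate is one object, but when it asks for the current database name it reads a value that each request thread set for itself, so parallel requests cannot see each other's tenant. A minimal sketch of that isolation (the class and method names are illustrative, not the linked repository's exact code):

```java
// Sketch: a singleton consults this resolver on every call; each thread
// reads back only the tenant it bound itself, so concurrent requests
// resolve to different databases without interfering.
public class TenantResolver {
    private static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    // Called by a servlet filter after reading the tenant from the request.
    public static void bind(String tenant) { TENANT.set(tenant); }

    // Must be called when the request finishes: threads are pooled and
    // reused, so a stale value would leak into the next request.
    public static void clear() { TENANT.remove(); }

    // Called by the (singleton) MongoDatabaseFactory to pick the database.
    public static String currentDatabase() { return "db_" + TENANT.get(); }
}
```

The one real pitfall is the pooled-thread reuse noted in the comment: always clear the ThreadLocal in a finally block (or equivalent filter hook) so a later request on the same thread cannot inherit the previous tenant.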
I'm using Play! Framework 1.2.5 for a multi-tenancy application. Tenants are resolved by domain and are kept as models. There are multiple types of tenants that don't share all of the functionality the application provides, so controllers are organized in packages namespaced by their corresponding tenant types.
Now I am trying to map some URLs to variable target actions, determined by the currently active tenant, without including the tenant's namespace in the URL.
I know that it is possible to execute scripts inside Play's routes file, so I'm wondering whether there is any way to pass tenant information (and thus database information) into the routes file.
Do you know of any way to do this?
And if not:
How can I realize this routing approach without providing extra namespace information and without re-inventing Play's request parameter binding?