Schema multitenancy in Dropwizard - java

Is there a way to implement schema-based multi-tenancy in Dropwizard?
The only solution I've found so far is https://github.com/flipkart-incubator/dropwizard-multitenancy, but that uses discriminator-based multi-tenancy.

We basically had the same problem. We wanted to support multi-tenancy, but not only at the database level: different customers have certain services configured differently. To avoid passing the tenancyId around everywhere, we came up with a custom scope using Guice. This way, every service that is @TenancyScoped can get its own predefined configuration, or simply the tenancyId, in its constructor. Your DAOs can then use different schemas based on the tenancyId.
It works quite well for us, even though it might not scale properly if you have a very large number of tenants (maybe more than 1000, depending on how complex your configuration is).
I have posted the details about Guice and custom scopes here: Multi tenancy with Guice Custom Scopes and Jersey.
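To make the idea more concrete, here is a minimal sketch of such a Guice custom scope. The annotation name @TenancyScoped matches the answer above, but the TenancyScope class, its ThreadLocal handling, and the enter/exit methods are illustrative assumptions rather than the exact code we use:

// Sketch of a Guice custom scope that caches instances per tenant.
import com.google.inject.Key;
import com.google.inject.Provider;
import com.google.inject.Scope;
import com.google.inject.ScopeAnnotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@ScopeAnnotation
@Retention(RetentionPolicy.RUNTIME)
@interface TenancyScoped {}

class TenancyScope implements Scope {
    // One object cache per tenant; entries are created lazily on first use.
    private final Map<String, Map<Key<?>, Object>> cachePerTenant = new ConcurrentHashMap<>();
    private final ThreadLocal<String> currentTenant = new ThreadLocal<>();

    public void enter(String tenantId) { currentTenant.set(tenantId); }
    public void exit() { currentTenant.remove(); }

    @Override
    public <T> Provider<T> scope(Key<T> key, Provider<T> unscoped) {
        return () -> {
            String tenantId = currentTenant.get();
            if (tenantId == null) {
                throw new IllegalStateException("Not inside a tenancy scope");
            }
            Map<Key<?>, Object> cache =
                cachePerTenant.computeIfAbsent(tenantId, id -> new ConcurrentHashMap<>());
            @SuppressWarnings("unchecked")
            T instance = (T) cache.computeIfAbsent(key, k -> unscoped.get());
            return instance;
        };
    }
}

In a Guice module you would call bindScope(TenancyScoped.class, tenancyScope), and a Jersey request filter would call enter(tenantId) before the resource method runs and exit() afterwards, so that every @TenancyScoped service resolved during the request belongs to that tenant.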

I had the same problem, so I created a multi-tenant Hibernate bundle by modifying the existing Hibernate bundle code. If you still have this requirement, you can check it out.
Here is the link: https://github.com/uditnarayan/dropwizard-hibernate-multitenant/

Related

Using Hibernate/JPA with multiple ClassLoaders to access multiple databases

Our application is a middle-tier application that provides a dozen or so front-end applications with access to a couple dozen databases (and other data sources) on the back end.
We decided on using OSGi to separate the unrelated bits of code into separate bundles. This ensures proper code encapsulation and even allows for hot-swapping of specific bundles.
One advantage of this is that any code speaking to a specific database is isolated to a single bundle. It also allows us to simply drop in a new bundle for a new destination and integrate the new code seamlessly. It also ensures that if a single back-end data source is down, requests to other data sources are unaffected. One complication is that each of those bundles is loaded by a separate ClassLoader.
We'd like to start using JPA for our new destinations that we're building. Previously, we have been using JDBC directly to send SQL queries and updates.
We've looked into Hibernate 4, but it seems that it was built on the assumption that everything is loaded using a single ClassLoader. Switching between ClassLoaders for different bundles does not appear to be something it can handle consistently.
While it seems that Hibernate 5 may have corrected that issue, all the tutorials/documentation I've found for it gloss over the complexities of configuration. Most simply assume you are using a single application-level configuration file, which will not suit our needs at all.
So, my questions are:
Does Hibernate 5 properly handle connecting to multiple databases, with the configuration/POJOs for each database loaded by a different ClassLoader?
How do we configure Hibernate to connect to multiple databases using multiple ClassLoaders?
Is there another JPA framework that might be better suited to our specific needs?
Hibernate is fine, but for OSGi usage you also need an intermediary. In the OSGi specifications this is defined by the OSGi JPA Service spec, which describes how to connect to a JPA provider in OSGi without a hard reference to it.
This spec is implemented by Aries JPA, which also provides additional support for Blueprint and Declarative Services. There is also the Aries Transaction Control service, which takes a similar approach to supporting JPA and transactions in OSGi; it uses the core of Aries JPA but is a bit different in usage.
The last part you might need is pax-jdbc, which lets you define an XA DataSource purely through configuration. The examples already use it.
To get started easily you can use Apache Karaf, which has features for all of the above.
Aries JPA allows you to use different databases in the same OSGi application.
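As a rough illustration of how a bundle would then consume its own persistence unit, here is a hedged Declarative Services sketch. The persistence unit name "customers" and the CustomerRepository class are made up for the example; the osgi.unit.name service property comes from the OSGi JPA Service spec:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = CustomerRepository.class)
public class CustomerRepository {

    // Aries JPA publishes one EntityManagerFactory service per persistence unit,
    // tagged with the osgi.unit.name property, so each bundle selects the
    // database it owns via a target filter.
    @Reference(target = "(osgi.unit.name=customers)")
    private EntityManagerFactory emf;

    public boolean isReachable() {
        EntityManager em = emf.createEntityManager();
        try {
            // Trivial round-trip to verify the unit's database is reachable.
            em.createNativeQuery("select 1").getSingleResult();
            return true;
        } finally {
            em.close();
        }
    }
}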

Implementing Caching in existing application

If I am supposed to implement caching in an existing Spring application for all web service calls as well as database calls, what would be the best way to implement it? I mean, which design patterns and caching mechanisms can be used, along with whatever else is required?
I would appreciate any suggestions.
Since you are already using the Spring stack, Spring Caching could be an alternative to consider, as it requires very little integration work and most things come out of the box. You can take a look at simple examples here and here to get a feel for how it works. However, if you want more control over the actual underlying cache implementation and the code that interacts with it, you can roll your own easily too, though that will require more code on your end.
If you are using Spring Boot, you can use the @EnableCaching and @Cacheable annotations, since Spring Boot automatically configures a suitable CacheManager to serve as a provider for the relevant cache.
You can find more at https://spring.io/guides/gs/caching/
In addition to Guru's answer.
You can find more info about Spring Boot Caching on https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-caching.html and https://docs.spring.io/spring/docs/4.3.14.RELEASE/spring-framework-reference/htmlsingle/#cache
@EnableCaching enables the cache configuration, and @Cacheable marks the methods whose results should be cached.
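A minimal Spring Boot sketch of the two annotations, assuming a made-up PriceService with an artificially slow method; the cache name "prices" is also just an example:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.stereotype.Service;

@SpringBootApplication
@EnableCaching   // turns on Spring's annotation-driven cache support
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@Service
class PriceService {
    // The result is stored in the "prices" cache keyed by isbn; subsequent
    // calls with the same isbn are served from the cache and skip the method body.
    @Cacheable("prices")
    public double lookupPrice(String isbn) {
        simulateSlowCall();
        return 42.0;
    }

    private void simulateSlowCall() {
        try { Thread.sleep(2000); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}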

How many proxies created in Java application use Spring core, Hibernate, Spring AOP?

I'm reading about Java proxies, and as we know, Spring Core, Hibernate, Spring AOP, and Ehcache all make use of them. I'm confused, because Spring Core will create a proxy, Hibernate will create a proxy, and Spring AOP or Ehcache will do the same if we use all of them in a Java project.
How many proxies will be created? Can someone help me out with this and give me an example?
Each of those frameworks creates a variable number of proxies, depending on certain design choices and configuration. That said, the only way to get a real number is to profile your application.
Most frameworks that use proxies leverage them for similar reasons. The proxies are meant to act as placeholders that look like an object our code knows about and works with; however, the internal implementation details are hidden, often supplemented with framework-specific logic.
For example, Hibernate may expose a lazily-loaded collection of objects as a collection of proxies. Each proxy looks like the object our application expects in that collection; however, the internal state of that proxy is typically not loaded until it is first accessed. In this case, the proxy saves on memory consumption, result-set parsing, database bandwidth, and a number of other things.
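To illustrate the mechanism these frameworks rely on, here is a small, self-contained sketch using the JDK's own java.lang.reflect.Proxy. It is not Hibernate's actual proxy code (Hibernate generates bytecode proxies), but it shows the same idea of a placeholder that defers the real work until first access; OrderLines and the loader are made-up names:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.function.Supplier;

interface OrderLines {
    List<String> items();
}

public class LazyProxyDemo {
    // Wraps the real object in a JDK dynamic proxy that only loads it on first use,
    // roughly what a lazy-loading framework proxy does behind the scenes.
    static OrderLines lazy(Supplier<OrderLines> loader) {
        return (OrderLines) Proxy.newProxyInstance(
            OrderLines.class.getClassLoader(),
            new Class<?>[] { OrderLines.class },
            new InvocationHandler() {
                private OrderLines target;  // loaded on demand

                @Override
                public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                    if (target == null) {
                        System.out.println("loading target now");
                        target = loader.get();
                    }
                    return method.invoke(target, args);
                }
            });
    }

    public static void main(String[] args) {
        OrderLines lines = lazy(() -> () -> List.of("book", "pen"));
        System.out.println("proxy created, nothing loaded yet");
        System.out.println(lines.items());  // triggers the load on first access
    }
}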

OSGi: how to securely share connection between bundles

I am trying to develop Java software based on OSGi (Apache Felix), in which different modules (each of which may contain more than one jar file) could be developed by different developers from different companies.
The question is: how should I provide database connections to these modules? If I share the same user credentials between modules, they may accidentally or intentionally use each other's tables or data, which should be avoided for reasons of information privacy. If I force each module to have its own connection with its own user credentials, then there will be many connections.
Note: I am using MariaDB as the backend.
I know this is not an OSGi-specific problem. I am wondering if anyone has faced such a problem and has a proven solution for this scenario (I have only described my development environment).
Any ideas?
Thanks
First of all, this kind of multi-tenancy isn't something any system (be it OSGi or not) handles for you out of the box, so you need to take care of it yourself. Most OSGi applications still use DataSources to connect to a database, via JPA for example. Usually those DataSources are registered as OSGi services.
Coming back to your multi-tenancy issue, you should make sure there is a separate DataSource for each tenant and use only that DataSource in your application. For example, make sure each tenant has its own configuration and therefore receives its own DataSource as defined there. This way you can keep the tenants separated from each other.
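As a rough sketch of that approach (not a complete security solution), each module could be handed its own DataSource, built with that module's own MariaDB credentials and published as an OSGi service tagged with a module property. The module name "billing", the credential values, and the use of HikariCP are illustrative assumptions:

import java.util.Dictionary;
import java.util.Hashtable;
import javax.sql.DataSource;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import com.zaxxer.hikari.HikariDataSource;

public class BillingDataSourceActivator implements BundleActivator {

    private HikariDataSource ds;

    @Override
    public void start(BundleContext context) {
        ds = new HikariDataSource();
        ds.setJdbcUrl("jdbc:mariadb://localhost:3306/billing");
        ds.setUsername("billing_user");     // MariaDB account limited to the billing schema
        ds.setPassword("change-me");

        Dictionary<String, Object> props = new Hashtable<>();
        props.put("module", "billing");     // consumers must filter on (module=billing)
        context.registerService(DataSource.class, ds, props);
    }

    @Override
    public void stop(BundleContext context) {
        if (ds != null) {
            ds.close();
        }
    }
}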
OSGi cannot achieve the level of security you need for this scenario. An OSGi framework is intended to represent a single logical application. If bundles exist in the same JVM and OSGi framework, it is very hard to prevent data leaks, especially against a determined attacker.
You need to isolate processes at the very least, and run those processes as separate user IDs.

What's the best way to share business object instances between Java web apps using JBoss and Spring?

We currently have a web application loading a Spring application context which instantiates a stack of business objects, DAO objects and Hibernate. We would like to share this stack with another web application, to avoid having multiple instances of the same objects.
We have looked into several approaches; exposing the objects using JMX or JNDI, or using EJB3.
The different approaches all have their issues, and we are looking for a lightweight method.
Any suggestions on how to solve this?
Edit: I have received comments requesting me to elaborate a bit, so here goes:
The main problem we want to solve is that we want to have only one instance of Hibernate. This is due to problems with invalidation of Hibernate's 2nd level cache when running several client applications working with the same datasource. Also, the business/DAO/Hibernate stack is growing rather large, so not duplicating it just makes more sense.
First, we tried to look at how the business layer alone could be exposed to other web apps. Spring offers JMX wrapping at the price of a tiny amount of XML, but we were unable to bind the JMX entities to the JNDI tree, so we couldn't look up the objects from the web apps.
Then we tried binding the business layer directly to JNDI. Spring didn't offer a ready-made method for this, but binding them with JndiTemplate was trivial. However, this led to several new problems: 1) The security manager denies access to the RMI classloader, so the client failed once we tried to invoke methods on the JNDI resource. 2) Once the security issues were resolved, JBoss threw IllegalArgumentException: object is not an instance of declaring class. A bit of reading reveals that we need stub implementations for the JNDI resources, but this seems like a lot of hassle (perhaps Spring can help us?)
We haven't looked too much into EJB yet, but after the first two tries I'm wondering if what we're trying to achieve is at all possible.
To sum up what we're trying to achieve: One JBoss instance, several web apps utilizing one stack of business objects on top of DAO layer and Hibernate.
Best regards,
Nils
Are the web applications deployed on the same server?
I can't speak for Spring, but it is straightforward to move your business logic into the EJB tier using Session Beans.
The application organization is straightforward. The logic goes into Session Beans, and these Session Beans are bundled within a single jar as a Java EE artifact with an ejb-jar.xml file (in EJB 3, this will likely be practically empty).
Then bundle your Entity classes into a separate jar file.
Next, you will build each web app into its own WAR file.
Finally, all of the jars and the wars are bundled into a Java EE EAR, with the associated application.xml file (again, this will likely be quite minimal, simply enumerating the jars in the EAR).
This EAR is deployed wholesale to the app server.
Each WAR is effectively independent, with its own sessions, its own context path, etc. But they share the common EJB back end, so you have only a single 2nd-level cache.
You also use local references and calling semantics to talk to the EJBs since they're in the same server. No need for remote calls here.
I think this solves the issue you're having quite well, and it is quite straightforward in Java EE 5 with EJB 3.
Also, you can still use Spring for much of your work, as I understand, but I'm not a Spring person so I cannot speak to the details.
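For what it's worth, a minimal sketch of such a shared session bean might look like the following; OrderService, OrderServiceBean, the "sharedPU" persistence unit, and the Order entity are made-up names for illustration:

// OrderService.java - the local business interface both WARs compile against
import javax.ejb.Local;

@Local
public interface OrderService {
    long countOrders();
}

// OrderServiceBean.java - the shared implementation packaged in the EJB jar
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class OrderServiceBean implements OrderService {

    // One persistence unit (and therefore one 2nd-level cache) serves every
    // caller, no matter which WAR in the EAR the call originated from.
    @PersistenceContext(unitName = "sharedPU")
    private EntityManager em;

    @Override
    public long countOrders() {
        // "Order" is a hypothetical mapped entity from the shared entity jar.
        return ((Number) em.createQuery("select count(o) from Order o")
                .getSingleResult()).longValue();
    }
}

Each WAR would then obtain the bean with a plain local injection, e.g. @EJB private OrderService orders; in a servlet or backing bean, so no remote calls are involved.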
What about Spring's parentContext?
Check out this article:
http://springtips.blogspot.com/2007/06/using-shared-parent-application-context.html
Terracotta might be a good fit here (disclosure: I am a developer for Terracotta). Terracotta transparently clusters Java objects at the JVM level, and integrates with both Spring and Hibernate. It is free and open source.
As you said, the problem with more than one client web app using an L2 cache is keeping those caches in sync. With Terracotta you can cluster a single Hibernate L2 cache. Each client node works with its copy of that clustered cache, and Terracotta keeps it in sync. This link explains more.
As for your business objects, you can use Terracotta's Spring integration to cluster your beans: each web app can share clustered bean instances, and Terracotta keeps the clustered state in sync transparently.
Actually, if you want a lightweight solution and don't need transactions or clustering, just use Spring's support for RMI. It allows you to expose Spring beans remotely using simple annotations in the latest versions. See http://static.springframework.org/spring/docs/2.0.x/reference/remoting.html.
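A hedged sketch of that remoting support, using a made-up AccountService interface and Java-based configuration (the linked 2.0.x docs show the equivalent XML; note that Spring's RMI remoting has since been deprecated and removed in recent versions):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiProxyFactoryBean;
import org.springframework.remoting.rmi.RmiServiceExporter;

interface AccountService {
    double balanceFor(String accountId);
}

@Configuration
class ServerRemotingConfig {
    @Bean
    AccountService accountService() {
        return accountId -> 0.0;   // placeholder implementation of the business bean
    }

    @Bean
    RmiServiceExporter accountServiceExporter(AccountService accountService) {
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("AccountService");   // bound as rmi://host:1199/AccountService
        exporter.setServiceInterface(AccountService.class);
        exporter.setService(accountService);
        exporter.setRegistryPort(1199);
        return exporter;
    }
}

@Configuration
class ClientRemotingConfig {
    @Bean
    RmiProxyFactoryBean accountService() {
        RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();
        proxy.setServiceUrl("rmi://localhost:1199/AccountService");
        proxy.setServiceInterface(AccountService.class);
        return proxy;
    }
}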
You should take a look at the Terracotta Reference Web Application - Examinator. It has most of the components you are looking for - it's got Hibernate, JPA, and Spring with a MySQL backend.
It's been pre-tuned to scale up to 16 nodes, 20k concurrent users.
Check it out here: http://reference.terracotta.org/examinator
Thank you for your answers so far. We're still not quite there, but we have tried a few things now and see things more clearly. Here's a short update:
The solution which appears to be the most viable is EJB. However, this will require some amount of changes in our code, so we're not going to fully implement that solution right now. I'm almost surprised that we haven't been able to find some Spring feature to help us out here.
We have also tried the JNDI route, which ends with the need for stubs for all shared interfaces. This feels like a lot of hassle, considering that everything is on the same server anyway.
Yesterday, we had a small breakthrough with JMX. Although JMX is definitely not meant for this kind of use, we have proven that it can be done, with no code changes and a minimal amount of XML (a big thank you to Spring for MBeanExporter and MBeanProxyFactoryBean). The major drawbacks to this method are performance and the fact that our domain classes must be shared through JBoss' server/lib folder. I.e., we have to remove some dependencies from our WARs and move them to server/lib, or else we get a ClassCastException when the business layer returns objects from our own domain model. I fully understand why this happens, but it is not ideal for what we're trying to achieve.
I thought it was time for a little update, because what appears to be the best solution will take some time to implement. I'll post our findings here once we've done that job.
Spring does have an integration point that might be of interest to you: the EJB 3 injection interceptor. This enables you to access Spring beans from EJBs.
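A hedged sketch of what that might look like on the EJB side, assuming a shared Spring context reachable through a beanRefContext.xml on the classpath and a made-up OrderDao bean:

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.ejb.interceptor.SpringBeanAutowiringInterceptor;

interface OrderDao {
    int countOpenOrders();   // hypothetical Spring-managed DAO
}

@Stateless
@Interceptors(SpringBeanAutowiringInterceptor.class)
public class ReportingBean {

    @Autowired
    private OrderDao orderDao;   // injected from the shared Spring context

    public int openOrders() {
        return orderDao.countOpenOrders();
    }
}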
I'm not really sure what you are trying to solve; at the end of the day each JVM will either have replicated instances of the objects, or stubs representing objects existing on another (logical) server.
You could set up a third 'business logic' server that has a remote API which your two web apps could call. The typical solution is to use EJB, but I think Spring has remoting options built into its stack.
The other option is to use some form of shared cache architecture... which will synchronize object changes between the servers, but you still have two sets of instances.
Take a look at JBossCache. It allows you to easily share/replicate maps of data between multiple JVM instances (same box or different). It is easy to use and has lots of wire-level protocol options (TCP, UDP multicast, etc.).
