Currently using JBoss 5.2 + Java 8 (a JBoss upgrade is coming soon). We have the opportunity to revitalize the application with a strong core for multi-tenancy support, so assume this can be started from scratch.
Our Java+Spring application is a simple web app:
Exposes various REST services to be utilized by our client implementations (mobile native + browser).
Connects various systems to complete a 'tenant' implementation.
We build out our tenant implementation utilizing our standardized client/server patterns with customization on each side. On the service side, this involves connecting to various external services including the specific tenant's back-end system for querying or end-product submission.
It's a domain-driven design, treating the various services as pluggable utilities based on an interface. The services basically act as translation and request/response handlers to and from our standardized domain objects. Currently the various service interfaces, common services, and tenant-specific modules are broken out into separate projects and assembled via Maven POM dependencies as JARs. Also note we currently have a single controller layer, since the client/server interfaces are standardized for our solution.
My question is: How can we support releases of various tenants without bringing down the JVM?
The current thought was to be able to dynamically swap the tenant JAR dependencies. Right now the tenant-specific services are selected via Spring bean injection, with the tenant configuration determined by the URL the request is made against (secured by a session token).
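A minimal sketch of how that per-request selection could look in Spring (the `TenantServiceRegistry` and `TenantBackendService` names are invented for illustration, not our real code):

```java
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Sketch: every tenant-specific implementation of the shared service
// interface is a Spring bean; the registry picks the right one for the
// tenant resolved from the incoming request (URL + session token).
// TenantBackendService stands in for the shared, per-tenant service interface.
@Component
public class TenantServiceRegistry {

    private final Map<String, TenantBackendService> servicesByTenant;

    // Spring injects every TenantBackendService bean, keyed by bean name,
    // e.g. "tenantA", "tenantB", "tenantC".
    @Autowired
    public TenantServiceRegistry(Map<String, TenantBackendService> servicesByTenant) {
        this.servicesByTenant = servicesByTenant;
    }

    public TenantBackendService forTenant(String tenantId) {
        TenantBackendService service = servicesByTenant.get(tenantId);
        if (service == null) {
            throw new IllegalArgumentException("Unknown tenant: " + tenantId);
        }
        return service;
    }
}
```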
Consider the following scenario of 3 tenants:
Tenant A releases monthly # 3am on a Saturday
Tenant B wants agile/security releases every 2 weeks # 9am Saturday
Tenant C wants planned development releases every quarter # 8pm on a Wednesday
We definitely need to keep the application secure, so we don't want to abuse the class loading to do anything shady.
Any help/direction would be appreciated; I'll update with what we end up with.
Thanks in advance!
So what I've come up with so far is a single JVM hosting multiple WARs.
Components separated into their own projects:
'core' - Controllers, web service factories, etc.
Service interfaces
Domain Objects
Tenant Implementations (each their own)
This split of projects allows them to be built individually and take advantage of Maven's versioning system (snapshots).
Each implementation (tenant) now has an 'app' project that pulls the various components and services together via Maven. This is a separate project from the tenant implementation itself, which allows the implementation to be reused by other projects that use the same domain objects. These 'app' projects provide all configuration for the app and its sub-components and are built into their own WARs.
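Purely as an illustration of the shape of an 'app' project (class names invented), its Spring Java config would import the shared core and bind that tenant's implementation of the common service interface:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// Hypothetical configuration in Tenant A's 'app' WAR: pulls in the shared
// core (controllers, web service factories, ...) and supplies the
// tenant-specific back-end adapter behind the common service interface.
@Configuration
@Import(CoreConfiguration.class)
public class TenantAAppConfiguration {

    @Bean
    public TenantBackendService tenantABackendService() {
        // Adapter translating between our domain objects and Tenant A's back end.
        return new TenantABackendAdapter("https://backend.tenant-a.example/api");
    }
}
```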
The deployment process is as described: a single JVM that doesn't get cycled while individual WARs are started/stopped. We currently run an older version of JBoss, so class loader isolation has been the main issue, but nothing too bad. We're looking to upgrade, or to separate the WARs into their own server areas under JBoss.
We're going to look into a custom class loader backed by database-stored JARs, or into moving to a Docker setup.
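For the database-stored JARs idea, the core mechanic would be roughly the sketch below; isolation, unloading, and verification of the JAR bytes would all still need real design work, so treat this only as a starting point.

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

// Rough sketch: materialize JAR bytes fetched from the database into a temp
// file and expose them through a URLClassLoader. Parent-first delegation,
// unloading, and security checks are deliberately not handled here.
public final class TenantJarLoader {

    public static ClassLoader load(byte[] jarBytes, ClassLoader parent) throws IOException {
        Path tempJar = Files.createTempFile("tenant-", ".jar");
        Files.write(tempJar, jarBytes);
        return new URLClassLoader(new URL[] { tempJar.toUri().toURL() }, parent);
    }
}
```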
Related
I'm looking for a way to create a free version and a paid version of an application. I was wondering if Spring has functionality to group/tag services so I can switch between them, i.e. stub services that don't do much for a free user and the real services for a paid user.
Is this the right approach, or is there another framework that lets me do this and works well with Spring?
Is there a way I can do the same in the front end, i.e. show or hide features/icons based on the type of user?
-- Edited --
The project is a multi-module Maven project with a WAR module and 3 JARs, using the Spring Framework with Spring Security (nothing fancy) and AngularJS.
The requirement is that I should be able to build the WAR file based on different configurations. For example, let's say a client doesn't want a particular feature; I should be able to turn it off by just changing some configuration, so the user will no longer see that feature.
Can it be done?
My advice is to do the licensing in your code. It's much more flexible, not difficult to implement, and easier to maintain.
You can use bean definition profiles to use different bean implementations depending on startup parameters, but that requires that you are in charge of the startup parameters used to launch the application (i.e. this would not be a suitable solution for an application that is downloaded and run by the customer on their own machine; there the startup profile settings could be hacked).
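For example (interface and implementation names invented here), bean definition profiles look roughly like this, selected at startup with something like -Dspring.profiles.active=paid:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Sketch: the same interface backed by different beans depending on the
// active profile ("free" vs "paid").
@Configuration
public class ReportingConfig {

    @Bean
    @Profile("free")
    public ReportingService freeReportingService() {
        return new LimitedReportingService();   // trimmed-down behaviour
    }

    @Bean
    @Profile("paid")
    public ReportingService paidReportingService() {
        return new FullReportingService();      // full feature set
    }
}
```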
More information on the intended application architecture is probably needed to give good advice here.
We have a Java enterprise application that is supposed to run on a cluster of servers. The application consists of different WARs hosted by some web containers running on these servers.
Now we have a lot of different configurations for this application, to name a few:
Relational DB host/port, credentials and so forth
Non-relational DB configurations - stuff like Mongo, Redis and so forth
Internal lookup configurations (how to obtain a web service in SOA architecture, stuff like that).
Logging related configuration, log4j.xml
Connection pooling configurations
Maybe in the future some internal settings for smart load balancing, maybe multi-tenancy support
Add to this multiple environments - test/staging/production/development and whatnot - with different hosts/ports for all the aforementioned examples, and we end up with dozens of configuration files.
As I see it, all these things are not something related directly to the business layer of the application, but rather can be considered "generic" for all applications, at least in the java enterprise world.
So I'm wondering whether a solution exists for dealing with configuration management of this kind?
Basically I'm looking for the following abilities:
Start my WAR on any of my servers in the cluster with the host/port of this configuration server.
The WAR will "register" itself and "download" all the needed configurations. Of course it will have adapters to apply this configuration.
This way, all my N WARs in different JVMs in the cluster start up (they're a share-nothing architecture, so I consider them independent pieces of deployment).
Now, if I want to change some setting, like, setting the log level of some logger to DEBUG, I go to the management console UI of this configuration server and apply the change.
Since this management center knows about all the wars (as they were registered), it should notify them about the setting change. I want to be able to change settings for one specific WAR or cluster wide. If one of the web servers that hosts the application gets restarted it will again ask for configuration and will get the configuration including the DEBUG level of that logger.
I'm not looking for a solution based on deployment systems like Puppet, Chef and so forth, since I want to change my settings at runtime as well.
So far I couldn't find any decent ready-made solution for this. Of course I could craft something like that myself, but I don't want to reinvent the wheel, so I'm asking for advice here; any help will be appreciated.
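To make it concrete, the client side of the scheme I have in mind would be roughly the sketch below; the /config/{application} endpoint is purely hypothetical, and runtime changes would need polling or a push channel on top of it.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Properties;

// Sketch only: on startup the WAR "registers" by fetching its configuration
// from a central server; a scheduled re-fetch (or a push channel) would pick
// up runtime changes such as a logger switched to DEBUG.
public class ConfigClient {

    private final String configServerUrl;   // e.g. http://config.internal:8888
    private final String applicationName;   // identifies this WAR to the server

    public ConfigClient(String configServerUrl, String applicationName) {
        this.configServerUrl = configServerUrl;
        this.applicationName = applicationName;
    }

    public Properties fetch() throws IOException {
        URL url = new URL(configServerUrl + "/config/" + applicationName);
        Properties props = new Properties();
        try (InputStream in = url.openStream()) {
            props.load(in);
        }
        return props;
    }
}
```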
Thanks in advance
I'm trying to properly understand the package-by-feature approach.
1 - Let's say I have 2 features that tap into the same data. For instance, one feature could be visualizing bank account information with various sophisticated possibilities. The other feature is about making transactions from the bank account (we could well imagine that this feature does not involve visualization; it could simply be provided as a REST service).
1.a - The data model is shared across the two features here. How does that impact packaging by feature? Shall we create redundant data model classes in the two packages? Shall we create a specific package for the data model instead?
Which leads me to the second question.
2 - In general, how are cross-cutting concerns dealt with?
2.a - For instance, the case above when it comes to the data model?
2.b - Or when it comes to database access, or some common access to an external service (shared by different features but each doing something different with it)?
2.c - Or the front end and the overall bundling of the application in general?
What I mean here is the following case: currently I have an application which has
(i) a message transfer capability (between participants of the system),
(ii) a message monitoring capability, whereby it automatically detects rule violations and issues penalties,
(iii) a visualization capability dedicated to the administrator of the system,
(iv) a notification capability for the administrator of the system to send messages to participants,
(v) a violation cancellation capability for the admin as well, and so on.
The point is that all of it has to be packaged into one application that I call the marketplace infrastructure. Should the marketplace infrastructure that wires everything together have its own package, even though it is not a feature?
I think the same applies somehow in a web application as well. There has to be one central point that bundles all the feature modules/packages together. If each module defines routes, controllers, etc., there should be a central place that imports all the routes, for instance.
If the application has a database behind it that is used by different features, who is going to start the database connection and wire all the modules together?
So the bottom line is: what about the cross-cutting stuff (data models, service access, etc.) and the bundling (wiring everything together)?
PS: By wiring I mean dependency injection; still, the object graph has to be defined somewhere.
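To make the wiring question concrete, what I picture is a small composition root per deployable that only assembles the feature packages plus the cross-cutting pieces (all configuration class names below are invented):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// Sketch: the "marketplace infrastructure" is just a composition root. Each
// feature package ships its own @Configuration; this class only assembles them
// together with the shared infrastructure (data source, domain mappers, routes).
@Configuration
@Import({
        MessagingConfig.class,       // message transfer feature
        MonitoringConfig.class,      // rule violation detection
        VisualizationConfig.class,   // admin dashboards
        NotificationConfig.class,    // admin-to-participant notifications
        PersistenceConfig.class      // shared database access
})
public class MarketplaceApplicationConfig {
}
```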
Many thanks for any help.
I would like to design an application with a core and multiple modules on top of the core. The core would have the responsibility to receive messages from the network, to parse the incoming messages and to distribute the messages to registered modules.
There are multiple types of messages, some modules may be interested to receive only some of the types. The modules can execute in parallel, or they can execute sequentially (e.g. when modules are interdependent with a well defined order of execution).
Also, it would be great if the modules could be deployed/undeployed even if the core is up.
This is completely new for me; I used to write modular applications, but with the various parts wired statically.
Which direction (i.e. framework, pattern, ...) should I take for such a design? I don't know if it's relevant to my question, but I should mention I'll be using Java.
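To make it concrete, the registration/dispatch part of the core I have in mind is roughly the plain-Java sketch below (invented names; the deploy/undeploy-at-runtime part would presumably need something like OSGi or a plugin framework on top):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch: the core parses incoming messages and dispatches each one to the
// modules registered for its type. register/unregister can be called while
// the core is running, which is the hook for deploying/undeploying modules.
public class MessageDispatcher {

    public interface Module {
        void onMessage(Object message);
    }

    private final Map<Class<?>, List<Module>> modulesByType = new ConcurrentHashMap<>();

    public void register(Class<?> messageType, Module module) {
        modulesByType.computeIfAbsent(messageType, t -> new CopyOnWriteArrayList<>()).add(module);
    }

    public void unregister(Class<?> messageType, Module module) {
        List<Module> modules = modulesByType.get(messageType);
        if (modules != null) {
            modules.remove(module);
        }
    }

    public void dispatch(Object message) {
        List<Module> modules = modulesByType.getOrDefault(message.getClass(), Collections.emptyList());
        for (Module module : modules) {
            module.onMessage(message); // sequential; hand off to an executor for parallel modules
        }
    }
}
```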
Thanks
You have a very good approach at the architecture level. But it will only be beneficial when your application layers/tiers run on separate instances, so that you can shut down one module/server while the other parts keep running. The point is: will you run the modules on separate instances?
Secondly, I would suggest building the application's core architecture with web services, either REST or SOAP, as that naturally follows a Service-Oriented Architecture (SOA). It gives you a producer-consumer relationship, and the parts can run on separate instances. And while deploying/undeploying one part, the services can keep running to support the other client instances.
Using web services will also give you a global information exchange layer that can communicate with several application views/front ends.
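As a trivial illustration of the producer side (Spring MVC shown here; the resource and store names are invented), each capability becomes an HTTP resource that separate consumer instances can call:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

// Sketch: the core exposes a capability as a REST resource; modules and front
// ends act as consumers and can be cycled independently of the producer.
@RestController
public class MessageResource {

    private final MessageStore messageStore; // hypothetical core component

    @Autowired
    public MessageResource(MessageStore messageStore) {
        this.messageStore = messageStore;
    }

    @RequestMapping(value = "/messages/{id}", method = RequestMethod.GET)
    public Message getMessage(@PathVariable("id") long id) {
        return messageStore.find(id);
    }
}
```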
Currently we are building web service applications with Spring, Hibernate, MySQL and Tomcat. We are not using a real application server or SOA architecture. Regarding the persistence layer: today we are using Hibernate with MySQL, but in a year we may end up with MongoDB and Morphia.
The idea is to design the system's architecture independently of the concrete database engine or persistence layer and get the maximum benefit.
Let me explain - https://s3.amazonaws.com/creately-published/gtp2dsmt1. We have two cases here:
Scenario one:
We have one database that is replicated (not in the beginning) and different applications. Each application is one WAR with its own controllers, application context and servlet XML. The domain and persistence layer is imported as a Maven library - there is one version of it that is included in each application.
Pros:
Small applications that are easy to maintain
Distributed solution - each application can be moved to its own Tomcat instance or a different machine, for example
Cons:
Possible problems when using the Hibernate session and keeping it in sync between different applications. I don't know whether that is possible at all with this implementation.
Scenario two - one application that has internal logic to split and organize different services - News and User.
Pros:
One persistence layer - the full feature set of Hibernate
More of a Java EE look, with options to extend to the next level - integrate EJB and move to an application server
Cons:
One huge WAR application, more effort to maintain
Not distributed as in the first scenario
I like the first scenario more, but I'm worried about Hibernate's behavior in that case and whether I keep all the benefits it gives me.
I'll be very thankful for your opinion on that case.
Cheers
Possible problems when using the Hibernate session and keeping it in sync between different applications. I don't know whether that is possible at all with this implementation.
There are a couple of solutions that solve this exact problem:
Terracotta
Take a look at the Hibernate Distributed Cache Tutorial
There is also a slightly older SlideShare deck, Scaling Hibernate with Terracotta, that delivers the point in pictures
Infinispan
Take a look at Using Infinispan as JPA-Hibernate Second Level Cache Provider
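As an illustration of what this means in code, the second-level cache is mostly entity annotations plus a couple of Hibernate properties (hibernate.cache.use_second_level_cache and hibernate.cache.region.factory_class); the exact region factory class depends on the Hibernate and Infinispan versions in play:

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Sketch: a cacheable entity; the actual cache provider (Infinispan,
// Terracotta-backed, Ehcache, ...) is wired in via the Hibernate properties
// mentioned above, not in the entity itself.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class NewsItem {

    @Id
    private Long id;

    private String title;

    // getters/setters omitted for brevity
}
```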
Going with the first solution (distributed) may be the right way to go.
It all depends on what the business problem is.
Of course distributed is cool and fault tolerant and so on, but RAM and disks are getting cheaper and cheaper, so "scaling up" (and having a couple of hot replicas) is actually NOT all that bad => these are points in favor of the "second" approach you described.
But let's say you go with approach #1. If you do that, you would find it easier to switch to NoSQL in the future, since you would already have replica sets / sharding, etc., and actually several nodes to support the concept.
But... is 100% consistency a must-have (e.g. does the product have to do with money)? How big are you planning to become => are you ready to maintain hundreds of servers? Do you have complex aggregate queries that need to run in less than umpteen hours?
These are the questions that, in addition to your understanding of the business, should help you land on #1 or #2.
So, this is a very late answer, but finally I'm ready to give one. I'll put some details here about the further development of the REST service application.
In the end I landed on solution #1 from tolitius's great answer, with the option to migrate to solution #2 at a later stage.
This is the application architecture - I'll add graphics later; a rough code sketch of the layering follows the list below.
Persistence layer - holds the domain model and all database operations. Generated from the database model with Spring Roo, with generated repository and service layers for easy migration later.
Business layer - all the business logic necessary for the operations lives here. This layer depends on the persistence layer.
Presentation layer - validation and controllers calling the business layer.
All of this runs on Tomcat without application server extras. In a later phase it can be moved to an application server and implement the service locator pattern fully.
Infrastructure - geo-located servers with a geo load balancer, a MySQL replication ring between all of them, and one backup server in case of failure.
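A simplified sketch of that layering (invented names; each type lives in its own module in the real project):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Service;

// Persistence layer: repository over the domain model (hand-written here;
// in the real project this is what Spring Roo generated).
interface NewsRepository {
    News findById(Long id);
    void save(News news);
}

// Business layer: depends only on the persistence layer.
@Service
class NewsService {

    private final NewsRepository repository;

    @Autowired
    NewsService(NewsRepository repository) {
        this.repository = repository;
    }

    public News publish(News news) {
        // business rules/validation live here
        repository.save(news);
        return news;
    }
}

// Presentation layer: validation and controllers calling the business layer.
@Controller
class NewsController {

    private final NewsService newsService;

    @Autowired
    NewsController(NewsService newsService) {
        this.newsService = newsService;
    }
}
```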
My idea was to build a more modern system architecture, but from my experience with Java technology this is the "normal risk" setup.
With more experience come more beautiful solutions :) Looking forward to it!