Communication between web applications in a weblogic cluster - java

I have a problem and I am wondering what the best way of solving it is.
Basically I have two web apps in a clustered environment (weblogic 11g).
The first web application is for uploading "documents" and managing which of them are viewable (or not) in the second web app. The documents are stored in a database which both web applications can read.
The second web application can be thought of as a document viewer.
Because loading these documents can be very slow, I'd like to load them as soon as I can rather than waiting for a request.
A pull model where the web application periodically checks the database for new/removed/updated documents doesn't seem to be very practical.
What would be the best way of signalling when a user (admin) of the first webapp has updated a document, so that the second webapp can retrieve the document from the database?
My first thoughts were to use a JMS Server, but that seems a little heavy for such a simple signalling system.
What would be the best fit for this scenario?
A JMS Server for the cluster?
A JNDI Object?

Why is JMS heavy? You already use an application server with integrated JMS.
You could use one queue dedicated to each cluster node.
On upload you could post one message to each queue.
On each cluster node there is a job which acts as a QueueReceiver and updates its local cache.
As an alternative you could try to have a servlet/web service which is called for each cluster node (which again updates the local cache).
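Whichever transport you pick (a JMS queue per node or a servlet call to each node), the per-node handler boils down to applying the signal to a local cache. A minimal sketch of that handler logic, assuming a hypothetical `DocumentCache` keyed by document id (the JMS/servlet wiring is omitted):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical per-node cache, updated when a change signal arrives.
public class DocumentCache {
    private final Map<Long, byte[]> docs = new ConcurrentHashMap<>();

    // Called by the JMS MessageListener (or the notification servlet)
    // on this node when the admin app signals a change.
    public void onSignal(String action, long docId, byte[] content) {
        switch (action) {
            case "ADDED":
            case "UPDATED":
                docs.put(docId, content); // eagerly load the new version
                break;
            case "REMOVED":
                docs.remove(docId);
                break;
            default:
                throw new IllegalArgumentException("Unknown action: " + action);
        }
    }

    public byte[] get(long docId) {
        return docs.get(docId);
    }

    public static void main(String[] args) {
        DocumentCache cache = new DocumentCache();
        cache.onSignal("ADDED", 1L, new byte[] {1, 2, 3});
        cache.onSignal("REMOVED", 1L, null);
        System.out.println(cache.get(1L) == null); // prints "true"
    }
}
```

The action names and payload shape here are illustrative; the important part is that the signal only carries "what changed", and each node pulls the actual document from the shared database.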

Related

Hosting Neo4j in GCP Compute engine

I want to host a Neo4j web service for a Wikipedia graph of pages and categories and basically get some recommendations out via cypher queries.
I have already created and populated the database.
How do I "ideally" set up such a service?
Should I keep 1 dedicated instance for the Neo4j server and separate
instances for running Tomcat or Jetty which receive the client’s
requests and then forward the request to the Neo4j server instance
via the REST API ?
Or directly send requests (cypher via REST) from the client to the 1 neo4j instance ?
Or should I choose unmanaged extensions provided by Neo4j?
Or is there any other way to set it up keeping scaling in mind?
I do plan to run load balancing and HA clusters in the future.
The web service will be accessed by browsers and mobile apps.
I have never hosted such a web service before so it would be great if someone helps me out :)
I would recommend that you create an API app that sits between your clients and Neo4j. Your clients would make requests to the API server, which would then make a Cypher request to Neo4j (could be one instance or an HA cluster).
The benefits of this include being able to implement caching at the API layer, authenticate requests before they hit your database server, being able to instantly update Cypher queries by deploying to the API server (imagine if the Cypher logic lived in your mobile app - you would be at the mercy of app store / user upgrades), and easily scaling your API by deploying more API servers.
I've skipped this layer and just used unmanaged extensions to extend the Neo4j REST API and have clients access Neo4j directly which works OK for rapidly implementing a prototype, but you lose many of the benefits listed above with one additional downside that you will have to restart your database in order to deploy new versions of the unmanaged extension.
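To illustrate the caching benefit of the API layer: the API server can memoize Cypher results so repeated client requests never reach Neo4j. A rough sketch, with the actual Neo4j REST call stubbed out as a hypothetical `runCypher` function:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical API-layer cache in front of Neo4j. A real runCypher
// would POST the query to Neo4j's REST/Cypher endpoint.
public class RecommendationApi {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> runCypher;

    public RecommendationApi(Function<String, String> runCypher) {
        this.runCypher = runCypher;
    }

    // Clients call this endpoint; Neo4j is only queried on a cache miss.
    public String recommendationsFor(String page) {
        return cache.computeIfAbsent(page, p ->
            runCypher.apply(
                "MATCH (p:Page {title:'" + p + "'})-[:LINKS_TO]->(r:Page) RETURN r.title"));
    }

    public static void main(String[] args) {
        RecommendationApi api = new RecommendationApi(q -> "[stubbed Neo4j response]");
        System.out.println(api.recommendationsFor("Java")); // prints "[stubbed Neo4j response]"
    }
}
```

Changing the Cypher string in this class and redeploying the API server updates every client at once, which is exactly the upgrade-control benefit described above.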

Load Balancing Tomcat 7 for Application Deployment

I am serving a Java app through Apache mod_jk and Tomcat 7. I want to be able to deploy a new instance of the application (on a separate Tomcat instance) that will accept all new sessions, while all existing sessions continue to be served by the old Tomcat. Then, after all users have logged off or after a certain time, the old server will be shut down and all traffic will be handled by the new Tomcat (I don't expect the load balancer to do this). This will allow me to deploy without disrupting any connected users.
I have read about mod_jk load balancing, which provides the sticky sessions that I need, but I have not found how to force all new sessions to be served from the new application. It looks simple enough to set up round-robin, but that is not what I want.
So the formal question is:
Are there any load balancers for tomcat7/apache that will allow me to customize balancing rules to respect sticky sessions but preferentially serve from one node?
Any thoughts on how to best achieve this?
Each node manages its own session data. To remove a node with minimal disruption to connected users you need to share session data across all nodes. Tomcat provides session replication for this. Even with replication, it is possible that a node may crash before it has shared its data. There are other solutions, as discussed here.
Tomcat supports running multiple versions of the one web application with the Parallel Deployment feature. When a new session is created, it will be using the most recent version of the web application. Existing sessions will continue to use the version of the web application that was the most recent at the session creation time.
Here is an article that discusses Parallel Deployment: http://www.objectpartners.com/2012/04/17/tomcat-v7-parallel-deployment/
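Parallel deployment works by appending a version suffix (`##nnn`) to the WAR file name; Tomcat routes new sessions to the highest version while existing sessions stay on the version they started with. A sketch of the deployment steps (the temp directory stands in for a real `CATALINA_BASE`, and the `touch`ed files stand in for real WAR builds):

```shell
# Simulated CATALINA_BASE for illustration; point this at your real Tomcat.
CATALINA_BASE=$(mktemp -d)
mkdir -p "$CATALINA_BASE/webapps"
touch myapp.war myapp-new.war   # stand-ins for the real WAR builds

# Deploy version 001; all sessions use it.
cp myapp.war "$CATALINA_BASE/webapps/myapp##001.war"

# Later, deploy version 002 alongside it. Both serve the same context
# path /myapp: new sessions get 002, existing sessions stay on 001.
cp myapp-new.war "$CATALINA_BASE/webapps/myapp##002.war"

# Once the old sessions have drained, undeploy 001 by removing its WAR.
rm "$CATALINA_BASE/webapps/myapp##001.war"
ls "$CATALINA_BASE/webapps"
```

Note that both versions run in the same Tomcat instance, so this avoids the separate-instance load balancer entirely for the zero-downtime deploy case.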

Is there any way to send the realtime data to JBOSS

I have a system which contains employee data, and this system exposes many web services and RMI endpoints through which any other system can request data. Now I have a web application hosted on JBoss. The problem is that I want to get the data from the system to JBoss in real time. Although that system has several web services and RMI services through which JBoss/web applications can request data, that is on-demand only. I am looking for a way for JBoss to be notified the moment there is any change in the system. One solution is a process which calls the web service at a fixed interval to check whether any employee data has changed. Is there another way to notify JBoss in real time?
One way is to have a job/separate process which polls your employee application periodically, and sends "data has changed events" to JBoss using JMS.
In JBoss you could have a message-driven bean (MDB) which listens to these events and stores them in a database.
Another possibility is to have this job running in JBoss itself, which just stores the events/results in memory (search for Quartz; for example, have a look at this tutorial).
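The polling job from the first suggestion can be as simple as a scheduled task that compares a change marker (e.g. a last-modified timestamp exposed by the employee system) and fires an event only on change. A stdlib-only sketch, with the remote web-service call stubbed as a hypothetical `fetchLastModified` supplier (in JBoss you would typically drive `poll()` from Quartz or an EJB timer, and publish to JMS instead of a list):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongSupplier;

// Hypothetical change detector: polls a version/timestamp and records
// "data has changed" events, which could then be sent as JMS messages.
public class ChangePoller {
    private final LongSupplier fetchLastModified; // stub for the web-service call
    private final List<Long> events = new ArrayList<>();
    private long lastSeen = -1;

    public ChangePoller(LongSupplier fetchLastModified) {
        this.fetchLastModified = fetchLastModified;
    }

    // Invoked on each tick of a scheduler (Quartz trigger, EJB timer, ...).
    public void poll() {
        long current = fetchLastModified.getAsLong();
        if (current != lastSeen) {
            lastSeen = current;
            events.add(current); // here: publish a JMS message instead
        }
    }

    public List<Long> events() { return events; }

    public static void main(String[] args) {
        long[] stamp = {100L};
        ChangePoller poller = new ChangePoller(() -> stamp[0]);
        poller.poll();
        stamp[0] = 200L;
        poller.poll();
        System.out.println(poller.events()); // prints "[100, 200]"
    }
}
```

This is still polling under the hood; true push would require the employee system itself to publish notifications (e.g. to a JMS topic), which only works if you can change that system.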

advantage and disadvantage of Tomcat clustering

Currently we have the same web application deployed on 4 different Tomcat instances, each one running on an independent machine. A load balancer distributes requests to these servers. Our web application makes database calls and maintains a cache (key-value pairs). All Tomcat instances read the same data (XML) from the same data source (another server) and serve it to clients. In the future, we are planning to collect some usage data from requests, process it and store it in a database. This functionality should be common (one module) between all Tomcat servers.
Now we are thinking of using Tomcat clustering. I've done some research, but I am not able to figure out how to pull the data-fetching operations, i.e. reading the same data (XML) from the same data source (another server), out of the individual Tomcat web apps and make them common. The goal is that once one server fetches the data, it keeps it (perhaps in a cache) and the same data can be used by another server to serve clients. This functionality could be implemented using a distributed cache. But there are other modules that could also be made common across all Tomcat instances.
So basically: is there any advantage to using Tomcat clustering? And if yes, how can I implement modules which are common to all Tomcat servers?
Read the Tomcat configuration reference and clustering guide. The available clustering features are as follows:
The tomcat cluster implementation provides session replication,
context attribute replication and cluster wide WAR file deployment.
So, by clustering, you'll gain:
High availability: when one node fails, another will be able to take over without losing access to the data. For example, a HTTP session can still be handled without the user noticing the error.
Farm deployment: you can deploy your .war to a single node, and the rest will synchronize automatically.
The costs are mainly in performance:
Replication implies object serialization between the nodes. This may be undesired in some cases, but it's also possible to fine tune.
If you just want to share some state between the nodes, then you don't need clustering at all (unless you're going to use context or session replication). Just use a database and/or a distributed cache model like ehcache (or anything else).
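To share the fetched XML across Tomcat instances without clustering, each node can go through a read-through cache backed by a common store. The sketch below shows the pattern in plain Java, with an in-memory map standing in for the distributed cache (with ehcache, memcached, etc., only the backing store changes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read-through cache: the first node to ask fetches the XML from the
// remote data source; later requests (from any node, if the backing
// store is distributed) reuse the cached copy.
public class XmlCache {
    private final Map<String, String> store;          // swap for a distributed cache
    private final Function<String, String> fetchXml;  // stub for the remote fetch

    public XmlCache(Map<String, String> store, Function<String, String> fetchXml) {
        this.store = store;
        this.fetchXml = fetchXml;
    }

    public String get(String key) {
        return store.computeIfAbsent(key, fetchXml);
    }

    public static void main(String[] args) {
        Map<String, String> shared = new ConcurrentHashMap<>();
        XmlCache node1 = new XmlCache(shared, k -> "<data/>");
        System.out.println(node1.get("feed")); // prints "<data/>"
    }
}
```

The class and method names are illustrative; the point is that the common module is the cache abstraction, not anything provided by Tomcat clustering itself.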

How to run multiple tomcats against the same Database with load balancing

Please suggest different ways of achieving load balancing on the database while more than one Tomcat is accessing the same database.
Thanks.
This is a detailed example of using multiple Tomcat instances with Apache-based load-balancing control.
Note: if you have a hardware load balancer, that is even preferable in my opinion (use it in place of Apache).
In short it works like this:
A request comes from some client to the Apache web server / hardware load balancer
the web server determines to which node it wants to redirect the request for further processing
the web server calls Tomcat and Tomcat gets the request
Tomcat processes the request and sends the response back.
Regarding the database:
- Tomcat itself has nothing to do with your database; it's your application that talks to the DB, not Tomcat.
Regardless of your application layer, you can establish a cluster of database servers (for example, google for Oracle RAC, but that's an entirely different story).
In general, when implementing application-layer load balancing, be aware that the common state of the application needs to be replicated.
The technique called "sticky sessions" partially handles the issue, but in general you should be aware of it.
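For reference, sticky sessions in mod_jk are configured in `workers.properties` on the Apache side, roughly like this (host names and ports are placeholders; each Tomcat also needs a matching `jvmRoute` on its `<Engine>` element in `server.xml`):

```properties
# workers.properties -- two Tomcat nodes behind one load-balancer worker
worker.list=lb

worker.node1.type=ajp13
worker.node1.host=tomcat1.example.com
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=tomcat2.example.com
worker.node2.port=8009

worker.lb.type=lb
worker.lb.balance_workers=node1,node2
worker.lb.sticky_session=true
```

With `sticky_session=true`, mod_jk routes each request to the node that created its session; requests without a session are balanced across the workers.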
Hope this helps
