We are using WAS 7.0 with a customized local cache in a clustered environment. In our application there is some very commonly used (and very seldom updated) reference information that is retrieved from the database and stored in a customized cache on server start-up (we do not use the cache through the application server). New reference values can be entered through the application; when this is done, the data in the database is updated and the cached data on that single server (1 of 3) is refreshed. If a user hits any of the other servers in the cluster, they will not see the newly entered reference values (unless the server is bounced). Restarting the cluster is not a feasible solution, as it would bring down production. My question is: how do I tell the other servers to update their local caches as well?
I looked at the JMS publish/subscribe mechanism. I was thinking of publishing a message every time I update the values of any reference table; the other servers would act as subscribers, receive the message, and refresh their caches. Each server would need to act as both a publisher and a subscriber. I am not sure how to incorporate this solution into my application. I am open to suggestions and other solutions as well.
In general you should consider the dynamic cache service provided by the application server. It already has replication options out of the box. Check "Using the DistributedMap and DistributedObjectCache interfaces for the dynamic cache" in the WebSphere documentation for more details.
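For illustration, a minimal sketch of holding the reference data in a DistributedMap; "services/cache/distributedmap" is the default cache instance's JNDI name, and loadCountryCodesFromDb is a hypothetical loader:

    import java.util.Collections;
    import java.util.List;
    import javax.naming.InitialContext;
    import com.ibm.websphere.cache.DistributedMap;

    public class ReferenceDataCache {
        // "services/cache/distributedmap" is the default instance; a custom
        // instance configured in the admin console gets its own JNDI name.
        public List<String> getCountryCodes() throws Exception {
            InitialContext ctx = new InitialContext();
            DistributedMap cache =
                (DistributedMap) ctx.lookup("services/cache/distributedmap");

            @SuppressWarnings("unchecked")
            List<String> codes = (List<String>) cache.get("countryCodes");
            if (codes == null) {
                codes = loadCountryCodesFromDb(); // hypothetical DB loader
                cache.put("countryCodes", codes); // visible cluster-wide when DRS replication is on
            }
            return codes;
        }

        private List<String> loadCountryCodesFromDb() {
            return Collections.emptyList(); // placeholder
        }
    }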
And regarding your custom JMS solution: you would have an MDB (message-driven bean) in your application, configured to listen on a topic. Once the cache is changed, your application publishes a change message to that topic; the MDB on each server reads the message and updates the local cache.
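A minimal sketch of that pattern, assuming EJB 3 annotations and illustrative JNDI names (jms/CacheCF, jms/CacheInvalidationTopic); on WebSphere the MDB is typically bound to an activation specification in the admin console, and ReferenceCache is a hypothetical holder for your custom cache:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;
    import javax.naming.InitialContext;

    // Subscriber side: one MDB per server, each refreshing its local cache.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Topic"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/CacheInvalidationTopic")
    })
    public class CacheRefreshMDB implements MessageListener {
        public void onMessage(Message message) {
            try {
                String table = ((TextMessage) message).getText();
                ReferenceCache.getInstance().reload(table); // hypothetical custom cache
            } catch (Exception e) {
                // log; a failed refresh leaves the stale entry in place
            }
        }

        // Publisher side: call this after updating a reference table.
        public static void publishChange(String table) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/CacheCF");
            Topic topic = (Topic) ctx.lookup("jms/CacheInvalidationTopic");
            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(topic).send(session.createTextMessage(table));
            } finally {
                conn.close();
            }
        }
    }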
But since this is quite a complex change, I'd strongly consider switching to the built-in cache.
I have a Java web application which is deployed on two VMs, with NLB (Network Load Balancing) set up for these VMs. My application uses sessions. I am confused about how a user session is managed across both VMs. For example: if I make a request that goes to VM1 and creates a user session, and my second request goes to VM2 and wants to access the session data, how would it find the session which was created on VM1?
Please help me clear up this confusion.
There are several solutions:
configure the load balancer to be sticky: i.e. requests belonging to the same session would always go to the same VM. The advantage is that this solution is simple. The disadvantage is that if one VM fails, half of the users lose their session
configure the servers to use persistent sessions. If sessions are saved to a central database and loaded from this central database, then both VMs will see the same data in the session. You might still want to have sticky sessions to avoid concurrent accesses to the same session
configure the servers in a cluster, and distribute/replicate the sessions across all the nodes of the cluster
avoid using sessions, and just use a signed cookie to identify the user (and possibly carry a little additional information). A JSON Web Token could be a good solution; see the sketch after this list. Get everything else from the database when you need it. This ensures scalability and failover, and, IMO, often makes things simpler on the server rather than more complicated.
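A minimal sketch of the signed-token option, using the jjwt library (an assumption; any JWT implementation works) and an illustrative cookie name:

    import java.util.Date;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletResponse;
    import io.jsonwebtoken.Jwts;
    import io.jsonwebtoken.SignatureAlgorithm;

    public class TokenUtil {
        // Both VMs must share the same key, e.g. loaded from configuration.
        private static final byte[] KEY = "change-me-to-a-real-secret".getBytes();

        public static void issue(HttpServletResponse resp, String userId) {
            String jwt = Jwts.builder()
                    .setSubject(userId)
                    .setExpiration(new Date(System.currentTimeMillis() + 30 * 60 * 1000L))
                    .signWith(SignatureAlgorithm.HS256, KEY)
                    .compact();
            Cookie cookie = new Cookie("auth", jwt); // cookie name is illustrative
            cookie.setHttpOnly(true);
            resp.addCookie(cookie);
        }

        public static String verify(String jwt) {
            // Throws if the signature is invalid or the token has expired;
            // either VM can verify it, so no shared session state is needed.
            return Jwts.parser()
                    .setSigningKey(KEY)
                    .parseClaimsJws(jwt)
                    .getBody()
                    .getSubject();
        }
    }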
You'll have to look in your server's documentation to see what it supports, or use a third-party solution.
We can use a distributed Redis instance to store the sessions, and that would solve this problem.
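For example, in a Spring application, Spring Session can transparently back HttpSession with Redis; a minimal sketch (host and port are assumptions):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
    import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

    // Both VMs point at the same Redis, so they see the same HttpSession data.
    @Configuration
    @EnableRedisHttpSession
    public class SessionConfig {
        @Bean
        public LettuceConnectionFactory connectionFactory() {
            return new LettuceConnectionFactory("redis-host", 6379); // assumed address
        }
    }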
There are two servers running a web service and the servers are load balanced using HAProxy.
The web service handles a POST and updates a global variable with a certain value, and another application running on the same servers reads this global value and does some processing.
My issue is that when I set the value of the global variable using the web service, only one server gets updated because of the load balancer. The application therefore sometimes reads the correct global value, but when the request goes to the second server it won't, because the global variable has not been updated on that server.
Can someone tell me how to handle a situation like this? Both the web service and the application are Java based.
First, relying on global data is probably a bad idea. A better solution would be to persist the data in some common store, such as a database or a (shared) Redis cache. Second, you could - if necessary - use sticky sessions on the load balancer so that subsequent requests always return to the same web server, as long as it is available. The latter qualification is one of the reasons why a shared cache or database solution should be preferred - the server may go down for maintenance or some other issue during the user's session.
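A minimal sketch of the shared-store approach, using the Jedis Redis client (an assumption; a database row would work the same way):

    import redis.clients.jedis.Jedis;

    public class SharedValue {
        private static final String REDIS_HOST = "redis-host"; // assumed address

        // Called by the web service on whichever server receives the POST.
        public static void set(String value) {
            try (Jedis jedis = new Jedis(REDIS_HOST, 6379)) {
                jedis.set("global-value", value);
            }
        }

        // Called by the application on either server; both see the same value.
        public static String get() {
            try (Jedis jedis = new Jedis(REDIS_HOST, 6379)) {
                return jedis.get("global-value");
            }
        }
    }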
I have an app which is deployed to a cluster with 2 JVMs. The web application has a cache implemented using MBeans, and the cache runs on each JVM. The cache is refreshed by a request matching the pattern */refresh. The problem is that when the request goes through the ODR, it is routed to only one server, so the cache on only that server is refreshed. How do I solve this problem? Cache replication? I think it might be a lot of work to implement cache replication. Any other solutions? WebSphere APIs?
If I can get the current instance of the application, I'm thinking of using AdminClient to get the cluster members and then invoke the refresh request on all the nodes on which the application is installed, except for the current instance.
The WebSphere way to do this is to use the DynaCache feature with DRS (Data Replication Service). DynaCache is a kind of hashmap which can be distributed across the DRS cluster members. It has an API, DistributedMap, which extends java.util.Map.
There are also plenty of configuration options (through the AdminConsole and cachespec.xml) and monitoring possibilities (PMI with the Tivoli Performance Viewer).
Technical overview: http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaag%2Fcache%2Fpubwasdynacachoverview.htm
DistributedMap API: http://pic.dhe.ibm.com/infocenter/adiehelp/v5r1m1/index.jsp?topic=%2Fcom.ibm.wasee.doc%2Finfo%2Fee%2Fjavadoc%2Fee%2Fcom%2Fibm%2Fwebsphere%2Fcache%2FDistributedMap.html
A good article from developerWorks: http://www.ibm.com/developerworks/websphere/library/techarticles/0906_salvarinov/0906_salvarinov.html
The crude way we did something similar was to hit each web container directly on its own port, if you're able to reach them, that is.
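A sketch of that crude approach; the node addresses and context path are assumptions:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ClusterCacheRefresher {
        // Direct web container ports, bypassing the ODR (assumed addresses).
        private static final String[] NODES = {
            "http://node1.example.com:9080",
            "http://node2.example.com:9080"
        };

        public static void refreshAll() throws IOException {
            for (String node : NODES) {
                HttpURLConnection conn =
                    (HttpURLConnection) new URL(node + "/app/refresh").openConnection();
                try {
                    if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                        // log the failing node and alert/retry as appropriate
                    }
                } finally {
                    conn.disconnect();
                }
            }
        }
    }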
I have a problem and I am wondering what the best way of solving it is.
Basically I have two web apps in a clustered environment (WebLogic 11g).
The first web application is for uploading "documents" and managing whether these documents are viewable (or not) in the second web app. The documents are stored in a database which both web applications can read.
The second web application can be thought of as a document viewer.
Because loading these documents can be very slow, I'd like to load them as soon as I can rather than waiting for a request.
A pull model where the web application periodically checks the database for new/removed/updated documents doesn't seem to be very practical.
What would be the best way of signalling that a user (admin) of the first web app has updated a document, so that the second web app can retrieve the document from the database?
My first thoughts were to use a JMS Server, but that seems a little heavy for such a simple signalling system.
What would be the best fit for this scenario?
A JMS Server for the cluster?
A JNDI Object?
Why is JMS heavy? You already use an application server with integrated JMS.
You could use one queue dedicated to each cluster node.
On upload, you would post one message to each queue
On each cluster node there is a job which acts as a QueueReceiver and which in turn updates its local cache (a sketch of the sender side follows)
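A minimal sketch of the sender side, with assumed JNDI names for the connection factory and the per-node queues:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class UploadNotifier {
        // One queue per cluster node; the JNDI names are assumptions.
        private static final String[] QUEUES = { "jms/refreshNode1", "jms/refreshNode2" };

        public void notifyUpload(String documentId) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                for (String name : QUEUES) {
                    Queue queue = (Queue) ctx.lookup(name);
                    session.createProducer(queue)
                           .send(session.createTextMessage(documentId));
                }
            } finally {
                conn.close();
            }
        }
    }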
As an alternative, you could have a servlet/web service which is called on each cluster node (and which again updates the local cache).
Currently we have the same web application deployed on 4 different Tomcat instances, each running on an independent machine. A load balancer distributes requests to these servers. Our web application makes database calls and maintains a cache (key-value pairs). All Tomcat instances read the same data (XML) from the same data source (another server) and serve it to clients. In the future, we plan to collect some usage data from requests, process it, and store it in the database. This functionality should be common (one module) across all Tomcat servers.
Now we are thinking of using Tomcat clustering. I've done some research but I cannot figure out how to separate the data-fetching operations, i.e. reading the same data (XML) from the same data source (another server), out of the individual web apps and make them common, so that once one server fetches the data, it maintains it (perhaps in a cache) and the same data can be used by another server to serve clients. This could be implemented using a distributed cache, but there are other modules that could also be made common to all Tomcat instances.
So basically, is there any advantage to using Tomcat clustering? And if yes, how can I implement modules which are common to all Tomcat servers?
Read the Tomcat configuration reference and clustering guide. The available clustering features are as follows:
The tomcat cluster implementation provides session replication,
context attribute replication and cluster wide WAR file deployment.
So, by clustering, you'll gain:
High availability: when one node fails, another will be able to take over without losing access to the data. For example, an HTTP session can still be handled without the user noticing the error.
Farm deployment: you can deploy your .war to a single node, and the rest will synchronize automatically.
The costs are mainly in performance:
Replication implies object serialization between the nodes. This may be undesirable in some cases, but it's also possible to fine-tune.
If you just want to share some state between the nodes, then you don't need clustering at all (unless you're going to use context or session replication). Just use a database and/or a distributed cache like Ehcache (or anything else); a minimal sketch follows.
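A minimal Ehcache sketch (Ehcache 2.x API); the cache name is an assumption, and its definition, plus any replication setup, would live in ehcache.xml:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class XmlDataCache {
        // "xmlData" is an assumed cache name defined in ehcache.xml; with an
        // RMI or JGroups replicator configured there, all four Tomcat
        // instances see each other's updates.
        private final Cache cache = CacheManager.getInstance().getCache("xmlData");

        public String get(String key) {
            Element element = cache.get(key);
            return element == null ? null : (String) element.getObjectValue();
        }

        public void put(String key, String xml) {
            cache.put(new Element(key, xml));
        }
    }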