Let me share our current setup first.
We have an application that runs on two servers (A and B) for load-balancing purposes. The application version on both A and B is exactly the same, and the two instances share a database.
We are currently encountering an issue where stored variable values seem to be shared between the two instances as well, which is not what we expected.
For example, server A and server B each have their own configuration file, and the contents sometimes differ. We have found that when accessing the application on server A and reading its configuration file, we sometimes also get values that are contained in the configuration file on server B.
Has anyone encountered a similar issue? Any tips on how to work around it?
Regards,
Philip
You can use the etcd service to share your configuration between any number of instances you want:
etcd is a distributed, consistent key-value store for shared configuration and service discovery, with a focus on being:
- Simple: curl'able user-facing API (HTTP+JSON)
- Secure: optional SSL client cert authentication
- Fast: benchmarked 1000s of writes/s per instance
- Reliable: properly distributed using Raft
We are using WAS 7.0 with a customized local cache in a clustered environment. Our application has some very commonly used (and very seldom updated) reference information that is retrieved from the database and stored in a customized cache on server start-up (we do not use the cache through the application server). New reference values can be entered through the application; when this is done, the data in the database is updated and the cached data on that single server (one of three) is refreshed. If a user hits any of the other servers in the cluster, they will not see the newly entered reference values (unless the server is bounced). Restarting the cluster is not a feasible solution, as it would bring down production. My question is: how do I tell the other servers to update their local cache as well?
I looked at the JMS publish/subscribe mechanism. I was thinking of publishing a message every time the values of any reference table are updated; the other servers would act as subscribers, receive the message, and refresh their cache. Each server needs to act as both a publisher and a subscriber. I am not sure how to incorporate this solution into my application, and I am open to other suggestions and solutions as well.
In general, you should consider the dynamic cache service provided by the application server. It has replication options out of the box. See "Using the DistributedMap and DistributedObjectCache interfaces for the dynamic cache" for more details.
As for your custom JMS solution: your application should contain an MDB (message-driven bean) configured on a topic. Once the cache changes, your application publishes the change to that topic; the MDB reads the message and updates the local cache.
But since that is quite a complex change, I'd strongly consider switching to the built-in cache.
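For reference, the publish/subscribe refresh idea can be sketched in plain Java. This is an in-process stand-in: in the real deployment `SimpleTopic` would be a JMS topic and `CacheRefreshSubscriber` an MDB, so all class names here are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Stand-in for a JMS topic: publishing notifies every subscriber.
class SimpleTopic {
    private final List<CacheRefreshSubscriber> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(CacheRefreshSubscriber s) { subscribers.add(s); }

    void publish(String key, String newValue) {
        for (CacheRefreshSubscriber s : subscribers) s.onMessage(key, newValue);
    }
}

// Stand-in for the MDB on each server; holds that server's local cache.
class CacheRefreshSubscriber {
    final Map<String, String> localCache = new ConcurrentHashMap<>();

    void onMessage(String key, String newValue) {
        localCache.put(key, newValue); // refresh this server's copy
    }
}

public class CacheRefreshDemo {
    public static void main(String[] args) {
        SimpleTopic topic = new SimpleTopic();
        CacheRefreshSubscriber serverA = new CacheRefreshSubscriber();
        CacheRefreshSubscriber serverB = new CacheRefreshSubscriber();
        topic.subscribe(serverA);
        topic.subscribe(serverB);

        // Server A updates a reference value: write the DB (omitted), then publish.
        serverA.localCache.put("country.US", "United States");
        topic.publish("country.US", "United States");

        // Server B's cache was refreshed without a restart.
        System.out.println(serverB.localCache.get("country.US"));
    }
}
```

The key point is that the publishing server also subscribes, so every node, including the one that made the change, converges on the same cache contents.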
There are two servers running a web service and the servers are load balanced using HAProxy.
The web service does a POST and updates a global variable with a certain value, and another application running on the same servers reads this global value and does some processing.
My issue is that when I set the global variable's value through the web service, only one server gets updated because of the load balancer. As a result, the other application sometimes reads the global value on the second server, where it has not been updated, and does not see the new value.
Can someone tell me how to handle a situation like this? Both the web service and the application are Java based.
First, relying on global data is probably a bad idea. A better solution would be to persist the data in some common store, such as a database or a (shared) Redis cache. Second, you could - if necessary - use sticky sessions on the load balancer so that subsequent requests always return to the same web server, as long as it is available. The latter qualification is one of the reasons why a shared cache or database solution should be preferred - the server may go down for maintenance or some other issue during the user's session.
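The divergence described above can be sketched in plain Java. Each `ServerInstance` below stands in for one load-balanced JVM: its `localGlobal` field models the per-JVM global variable, and `SharedStore` is a stand-in for a common store such as Redis or a database table. All names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for Redis or a database: one copy shared by all servers.
class SharedStore {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    void put(String k, String v) { data.put(k, v); }
    String get(String k) { return data.get(k); }
}

class ServerInstance {
    private String localGlobal;      // per-JVM state: NOT shared between servers
    private final SharedStore store; // common store: shared by all servers

    ServerInstance(SharedStore store) { this.store = store; }

    void handlePost(String value) {
        localGlobal = value;         // only this server sees it
        store.put("flag", value);    // every server sees it
    }

    String readLocal()  { return localGlobal; }
    String readShared() { return store.get("flag"); }
}

public class SharedStateDemo {
    public static void main(String[] args) {
        SharedStore store = new SharedStore();
        ServerInstance serverA = new ServerInstance(store);
        ServerInstance serverB = new ServerInstance(store);

        serverA.handlePost("enabled"); // the load balancer routed the POST to A

        System.out.println(serverB.readLocal());  // B's global was never set
        System.out.println(serverB.readShared()); // but the shared store has it
    }
}
```

Reading through the shared store instead of the per-JVM global makes the result independent of which server the load balancer happened to pick.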
Currently we have the same web application deployed on four different Tomcat instances, each running on an independent machine. A load balancer distributes requests to these servers. Our web application makes database calls and maintains a cache (key-value pairs). All Tomcat instances read the same data (XML) from the same data source (another server) and serve it to clients. In the future, we plan to collect some usage data from requests, process it, and store it in the database. This functionality should be common (one module) between all Tomcat servers.
Now we are thinking of using Tomcat clustering. I have done some research, but I cannot figure out how to separate the data-fetching operations, i.e. reading the same data (XML) from the same data source, from all the Tomcat web apps and make them common, so that once one server fetches the data, it maintains it (perhaps in a cache) and the same data can be used by another server to serve clients. This functionality could be implemented using a distributed cache, but there are other modules that could also be made common across all Tomcat instances.
So basically: is there any advantage to using Tomcat clustering? And if so, how can I implement modules that are common to all Tomcat servers?
Read the Tomcat configuration reference and clustering guide. The available clustering features are as follows:
The tomcat cluster implementation provides session replication,
context attribute replication and cluster wide WAR file deployment.
So, by clustering, you'll gain:
High availability: when one node fails, another will be able to take over without losing access to the data. For example, a HTTP session can still be handled without the user noticing the error.
Farm deployment: you can deploy your .war to a single node, and the rest will synchronize automatically.
The costs are mainly in performance:
Replication implies object serialization between the nodes. This may be undesired in some cases, but it can also be fine-tuned.
If you just want to share some state between the nodes, then you don't need clustering at all (unless you're going to use context or session replication). Just use a database and/or a distributed cache like Ehcache (or anything else).
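The "fetch once, reuse everywhere" idea can be sketched as a read-through cache. Here the cache is an in-process map standing in for a distributed cache such as Ehcache, and the fetch counter exists only to make the behavior visible, so both are illustrative stand-ins.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ReadThroughCacheDemo {
    // Stand-in for a distributed cache shared by all Tomcat instances.
    private static final Map<String, String> cache = new ConcurrentHashMap<>();
    private static final AtomicInteger fetches = new AtomicInteger();

    // Stand-in for the expensive call to the remote data source (the XML server).
    static String fetchFromDataSource(String key) {
        fetches.incrementAndGet();
        return "<data key='" + key + "'/>";
    }

    static String get(String key) {
        // computeIfAbsent goes to the data source only on a miss;
        // every later caller reuses the cached value.
        return cache.computeIfAbsent(key, ReadThroughCacheDemo::fetchFromDataSource);
    }

    public static void main(String[] args) {
        get("catalog"); // first request: fetched from the data source
        get("catalog"); // second request (another "server"): cache hit
        System.out.println(fetches.get());
    }
}
```

With a real distributed cache backing the map, one node's fetch populates the entry for all nodes, which is exactly the common data-fetching module the question asks about.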
We have already created two Java web applications, A and B, and hosted them on two different servers using Tomcat. These applications have already been moved to production, and we cannot make any major changes or move them to a single server, as that is not in our control.
Authentication is done through a common LDAP server that is also used by many other applications.
Now the client wants us to create a new application, C. Once logged in, end users should be able to access applications A and B (through links) without having to log in again (SSO).
Please advise on how to implement this.
CAS seems like the right solution for this, but I don't see how you could avoid touching anything; you will at least have to modify some config files.
Look at:
- Spring Security and CAS Integration
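As a rough illustration, the Spring Security side of a CAS integration is wired with a handful of beans. The class names below come from the Spring Security CAS module and the Jasig CAS client; the hostnames and URLs are hypothetical placeholders for your own CAS server and application C.

```xml
<!-- Sketch only: hostnames/URLs are hypothetical, adapt to your deployment -->
<bean id="serviceProperties"
      class="org.springframework.security.cas.ServiceProperties">
    <property name="service" value="https://app-c.example.com/login/cas"/>
</bean>

<bean id="casEntryPoint"
      class="org.springframework.security.cas.web.CasAuthenticationEntryPoint">
    <property name="loginUrl" value="https://cas.example.com/cas/login"/>
    <property name="serviceProperties" ref="serviceProperties"/>
</bean>

<bean id="casAuthenticationProvider"
      class="org.springframework.security.cas.authentication.CasAuthenticationProvider">
    <property name="serviceProperties" ref="serviceProperties"/>
    <property name="ticketValidator">
        <bean class="org.jasig.cas.client.validation.Cas20ServiceTicketValidator">
            <constructor-arg value="https://cas.example.com/cas"/>
        </bean>
    </property>
    <!-- userDetailsService / key properties omitted for brevity -->
</bean>
```

Applications A and B would similarly be pointed at the same CAS server (via their own config), which is the "touching config files" part mentioned above.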
Regards
Please suggest different ways of achieving load balancing on the database when more than one Tomcat instance accesses the same database.
Thanks.
This is a detailed example of using multiple Tomcat instances with Apache-based load-balancing control.
Note that if you have hardware that does the load balancing, that is an even more preferable option in my view (use it in place of Apache).
In short, it works like this:
- A request comes from a client to the Apache web server / hardware load balancer.
- The web server determines which node it wants to redirect the request to for further processing.
- The web server forwards the request to that Tomcat instance.
- Tomcat processes the request and sends the response back.
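The steps above map onto a short Apache configuration using mod_proxy_balancer. This is a sketch: the ports, routes, and paths are hypothetical, and it assumes mod_proxy, mod_proxy_ajp, and mod_proxy_balancer are loaded.

```apache
# Two Tomcat workers behind one balancer (hypothetical ports/routes)
<Proxy "balancer://mycluster">
    BalancerMember "ajp://127.0.0.1:8009" route=node1
    BalancerMember "ajp://127.0.0.1:8010" route=node2
    # Sticky sessions: keep a client on the node that created its session
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>

ProxyPass        "/app" "balancer://mycluster/app"
ProxyPassReverse "/app" "balancer://mycluster/app"
```

For the `route` values to work, each Tomcat's server.xml must set a matching `jvmRoute` on its Engine element, e.g. `<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">`.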
Regarding the database:
- Tomcat itself has nothing to do with your database; it's your application that talks to the DB, not Tomcat.
- Regardless of your application layer, you can establish a cluster of database servers (for example, google "Oracle RAC", but that's an entirely different story).
In general, when implementing application-layer load balancing, make sure that the common state of the application gets replicated. The technique called "sticky sessions" partially handles the issue, but in general you should be aware of it.
Hope this helps