Global Variables not updated due to load balancing - java

There are two servers running a web service, and the servers are load balanced using HAProxy.
The web service handles a POST and updates a global variable with a certain value, and another application running on the same servers reads this global value and does some processing.
My issue is that when I set the value of the global variable through the web service, only one server gets updated because of the load balancer. As a result, the application sometimes reads the global value, but when the request goes to the second server it won't find it, since the global variable is not updated on that server.
Can someone tell me how to handle a situation like this? Both the web service and the application are Java based.

First, relying on global data is probably a bad idea. A better solution would be to persist the data in some common store, such as a database or a (shared) Redis cache. Second, you could - if necessary - use sticky sessions on the load balancer so that subsequent requests always return to the same web server, as long as it is available. The latter qualification is one of the reasons why a shared cache or database solution should be preferred - the server may go down for maintenance or some other issue during the user's session.
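To make the "common store" idea concrete, here is a minimal sketch. The `SharedStore` interface and class names are illustrative, not from any library; the in-memory implementation only stands in for what would, in production, be a Redis- or database-backed implementation that every server behind the load balancer talks to.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Abstraction over the shared store; names are illustrative only.
interface SharedStore {
    void put(String key, String value);
    String get(String key);
}

// In-memory stand-in. In production this would delegate to Redis or a
// database table, so every server sees the same value regardless of
// which node the load balancer picked for the write.
class InMemorySharedStore implements SharedStore {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

public class SharedStoreDemo {
    public static void main(String[] args) {
        SharedStore store = new InMemorySharedStore();
        // The web service handler writes the value ...
        store.put("status", "READY");
        // ... and the other application reads it back.
        System.out.println(store.get("status"));
    }
}
```

The point of the interface is that both the web service and the reading application depend only on `SharedStore`, so swapping the in-memory stub for a Redis-backed one touches a single class.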

Related

Shared Database between 2 application instances in different servers

Let me share our current setup first.
We have an application that runs on two servers (server A and server B) for load-balancing purposes. The application version on both A and B is exactly the same, and the two instances share a database.
We are currently encountering an issue where stored variable values appear to be shared between the two instances as well, which is not what we expected.
For example, there is one configuration file on server A and another on server B, and in some cases their contents differ. We have found that sometimes, when accessing the application on server A and reading its configuration file, we also get values that belong to the configuration file on server B.
Has anyone encountered a similar issue? Any tips on how to get around it?
Regards,
Philip
You can use the etcd service to share your configuration between any number of instances.
etcd is a distributed, consistent key-value store for shared configuration and service discovery, with a focus on being:
- Simple: curl'able user-facing API (HTTP+JSON)
- Secure: optional SSL client cert authentication
- Fast: benchmarked 1000s of writes/s per instance
- Reliable: properly distributed using Raft

How to manage sessions in a distributed application

I have a Java web application deployed on two VMs, with NLB (Network Load Balancing) set up for them. My application uses sessions, and I am confused about how a user session is managed across both VMs. For example: if I make a request that goes to VM1, a session is created there. If my second request goes to VM2 and wants to access the session data, how will it find the session that was created on VM1?
Please help me clear up this confusion.
There are several solutions:
- Configure the load balancer to be sticky, so that requests belonging to the same session always go to the same VM. The advantage is that this solution is simple. The disadvantage is that if one VM fails, half of the users lose their session.
- Configure the servers to use persistent sessions. If sessions are saved to and loaded from a central database, both VMs see the same session data. You might still want sticky sessions to avoid concurrent access to the same session.
- Configure the servers in a cluster, and distribute/replicate the sessions across all the nodes of the cluster.
- Avoid server-side sessions altogether, and use a signed cookie to identify the user (possibly carrying some additional information). A JSON Web Token can be a good fit. Fetch everything else from the database when you need it. This ensures scalability and failover and, IMO, often makes things simpler on the server rather than more complicated.
You'll have to look in the documentation of your server to see what is possible with that server, or use a third-party solution.
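The signed-token option can be sketched with nothing but the JDK, assuming a secret shared by all VMs. This is a simplified HMAC-signed token in the spirit of a JWT, not a full JWT implementation; class and constant names are illustrative only.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal HMAC-signed token: the payload travels with each request, and
// any VM can verify it with the shared secret, so no server-side session
// state is needed. A production version would use constant-time
// comparison (MessageDigest.isEqual) and an expiry claim.
public class SignedToken {
    private static final String SECRET = "change-me"; // shared by all VMs

    public static String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String sig = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        String body = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes(StandardCharsets.UTF_8));
        return body + "." + sig;
    }

    public static String verify(String token) throws Exception {
        int dot = token.lastIndexOf('.');
        String payload = new String(
                Base64.getUrlDecoder().decode(token.substring(0, dot)),
                StandardCharsets.UTF_8);
        // Re-sign and compare: any tampering changes the expected token.
        if (!sign(payload).equals(token)) {
            throw new SecurityException("bad signature");
        }
        return payload;
    }
}
```

Whichever VM receives the request can call `verify` without any shared session storage, which is exactly what makes this option scale and fail over cleanly.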
You can also use a distributed Redis to store the sessions; that would solve this problem.

WebSphere propagating changes to customized cache across all servers in a cluster

We are using WAS 7.0 with a customized local cache in a clustered environment. Our application has some very commonly used (and very seldom updated) reference information that is retrieved from the database and stored in a customized cache on server start-up (we do not use the cache through the application server). New reference values can be entered through the application; when this is done, the data in the database is updated and the cached data on that single server (1 of 3) is refreshed. If a user hits any of the other servers in the cluster, they will not see the newly entered reference values (unless the server is bounced). Restarting the cluster is not a feasible solution, as it would bring down production. My question is: how do I tell the other servers to update their local cache as well?
I looked at the JMS publish/subscribe mechanism. I was thinking of publishing a message every time I update values in any reference table, and having the other servers act as subscribers that receive the message and refresh their cache. Each server would need to act as both a publisher and a subscriber. I am not sure how to incorporate this solution into my application, and I am open to other suggestions and solutions as well.
In general you should consider the Dynamic cache service provided by the application server; it has replication options out of the box. See "Using the DistributedMap and DistributedObjectCache interfaces for the dynamic cache" for more details.
As for your custom JMS solution: you would have an MDB in your application configured against a topic. Whenever the cache changes, the application publishes a message to that topic; the MDB reads the message and updates the local cache.
But since that is quite a complex change, I'd strongly consider switching to the built-in cache.
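For illustration, the local-cache side of the JMS approach could look like the sketch below. The class and method names are hypothetical; in the real application, the `onMessage()` of an MDB bound to the topic would call `onReferenceChanged()` with the key carried in the JMS message, and `loadFromDatabase` would be a real query.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Local reference cache plus the invalidation hook the MDB would call.
// Evicting on change (rather than pushing the new value) keeps the
// messages small and lets each node reload lazily from the database.
class ReferenceCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String get(String key) {
        // Load on first access, then serve from the local cache.
        return cache.computeIfAbsent(key, this::loadFromDatabase);
    }

    // Called by the topic subscriber when another node publishes a change.
    void onReferenceChanged(String key) {
        cache.remove(key); // the next get() reloads the fresh value
    }

    // Stand-in for the real database lookup.
    private String loadFromDatabase(String key) {
        return "value-for-" + key;
    }
}
```

Note that every node both publishes (when its own user updates a reference table) and subscribes (to hear about everyone else's updates), which matches the publisher-and-subscriber role you described.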

advantage and disadvantage of Tomcat clustering

Currently we have the same web application deployed on four different Tomcat instances, each running on an independent machine, with a load balancer distributing requests to these servers. Our web application makes database calls and maintains a cache (key-value pairs). All Tomcat instances read the same data (XML) from the same data source (another server) and serve it to clients. In the future, we are planning to collect usage data from requests, process it, and store it in a database. This functionality should be common (one module) between all the Tomcat servers.
Now we are thinking of using Tomcat clustering. I have done some research, but I cannot figure out how to factor the data-fetching operations, i.e. reading the same data (XML) from the same data source, out of the individual web apps into something common, so that once one server fetches the data it keeps it (perhaps in a cache) and the same data can be used by another server to serve a client. This particular functionality could be implemented with a distributed cache, but there are other modules that could also be made common to all Tomcat instances.
So basically: is there any advantage to using Tomcat clustering? And if so, how can I implement modules that are common to all the Tomcat servers?
Read the Tomcat configuration reference and clustering guide. The available clustering features are as follows:
The Tomcat cluster implementation provides session replication, context attribute replication and cluster-wide WAR file deployment.
So, by clustering, you'll gain:
- High availability: when one node fails, another can take over without losing access to the data. For example, an HTTP session can still be handled without the user noticing the error.
- Farm deployment: you can deploy your .war to a single node, and the rest will synchronize automatically.
The costs are mainly in performance:
- Replication implies object serialization between the nodes. This may be undesired in some cases, but it is also possible to fine-tune.
If you just want to share some state between the nodes, then you don't need clustering at all (unless you're going to use context or session replication). Just use a database and/or a distributed cache like Ehcache (or anything else).

How to run multiple tomcats against the same Database with load balancing

Please suggest different ways of achieving load balancing when more than one Tomcat instance is accessing the same database.
Thanks.
This is a detailed example of using multiple Tomcat instances with Apache-based load balancing.
Note that if you have hardware that performs load balancing, that is, in my opinion, even preferable (put it in place of Apache).
In short, it works like this:
- A request comes from a client to the Apache web server / hardware load balancer.
- The web server determines which node it should redirect the request to for further processing.
- The web server calls Tomcat, and Tomcat receives the request.
- Tomcat processes the request and sends the response back.
Regarding the database:
- Tomcat itself has nothing to do with your database; it is your application that talks to the DB, not Tomcat.
Regardless of your application layer, you can set up a cluster of database servers (for example, look up Oracle RAC, but that is an entirely different story).
In general, when implementing load balancing at the application layer, make sure that any shared application state gets replicated.
The technique called "sticky sessions" partially handles this issue, but in general you should be aware of it.
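The sticky-session idea mentioned above can be sketched as a toy router: the balancer remembers which node served a session and keeps routing it there, falling back to round-robin for new sessions. Class and method names are made up for illustration; real balancers (mod_jk, mod_proxy_balancer, HAProxy) typically do this via a route suffix in the session cookie rather than an in-memory map.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy sticky load balancer: first request of a session is assigned
// round-robin, every later request with the same session id goes to
// the same node. If that node dies, its sessions are lost -- the
// drawback of stickiness noted earlier.
class StickyBalancer {
    private final List<String> nodes;
    private final Map<String, String> sessionToNode = new ConcurrentHashMap<>();
    private int next = 0;

    StickyBalancer(List<String> nodes) {
        this.nodes = nodes;
    }

    synchronized String route(String sessionId) {
        // Assign a node on first sight, then always reuse it.
        return sessionToNode.computeIfAbsent(sessionId,
                id -> nodes.get(next++ % nodes.size()));
    }
}
```

A quick check of the behavior: two calls with the same session id return the same node, while a new session id gets the next node in rotation.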
Hope this helps
