About the WebSphere Application Server cluster? - java

Cluster:
A logical grouping of one or more functionally identical application server processes. A cluster provides ease of deployment, configuration, workload balancing, and failover redundancy. A cluster is a collection of servers working together as a single system to ensure that mission-critical applications and resources remain available to clients.
Clusters provide scalability. For more information, refer to additional documentation (which customer support may provide) describing vertical and horizontal clustering in a WebSphere Application Server distributed environment.
Above is the explanation of a WebSphere cluster. In the WebSphere world, a cell can have one or many clusters. I want to know: in which cases should one application be deployed to more than one cluster in WebSphere?

You cannot deploy exactly the same application to more than one cluster; if you need more processing power, you simply add members to the cluster.
One of the few reasons that comes to mind for deploying to a second cluster would be to run a different version of the application - check this post for more details on the restrictions around deploying multiple versions of the same application.

Related

Design considerations for J2EE webapp on Tomcat in Amazon Web Services

My project is looking to deploy a new J2EE application to Amazon's cloud. Elastic Beanstalk supports Tomcat apps, which seems perfect. Are there any particular design considerations to keep in mind when writing said app that might differ from just a standalone Tomcat on a server?
For example, I understand that the server is meant to scale automatically. Is this like a cluster? Our application framework tends to like to stick state in the HttpSession; is that a problem? Or when it says it scales automatically, does that just mean memory and CPU?
Automatic scaling on AWS is done by adding more servers, not by adding more CPU/RAM to an existing one. You can add CPU/RAM manually, but that requires shutting the server down for a minute to make the change and then configuring any software running on it to take advantage of the added resources, so that's not how automatic scaling works.
Elastic Beanstalk is basically a management interface for Amazon EC2 servers, Elastic Load Balancers and Auto Scaling Groups. It sets all that up for you and provides a convenient way of deploying new versions of your application easily. Elastic Beanstalk will create EC2 servers behind an Elastic Load Balancer and use an Auto Scaling configuration to add more servers as your application load increases. It handles adding the servers to the load balancer when they are ready to receive traffic, and removing them from the load balancer and deleting the extra servers when they are no longer needed.
For your Java application running on Tomcat you have a few options to handle horizontal scaling well. You can enable sticky sessions on the Load Balancer so that all requests from a specific user will go to the same server, thus keeping the HttpSession tied to the user. The main problem with this is that if a server is removed from the pool you may lose some HttpSessions and cause any users that were "stuck" to that server to be logged out of your application. The solution to this is to configure your Tomcat instances to store sessions in a shared location. There are Tomcat session store implementations out there that work with AWS services like ElastiCache (Redis) and DynamoDB. I would recommend using one of those, probably the Redis implementation if you aren't already familiar with DynamoDB.
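Whichever route you take (sticky sessions plus replication, or an external session store), anything placed in the HttpSession has to be serializable so it can leave the JVM. A minimal sketch, assuming a hypothetical UserPreferences attribute; the class and attribute names are placeholders:

    import java.io.Serializable;
    import javax.servlet.http.HttpSession;

    // Hypothetical session attribute: implementing Serializable lets Tomcat
    // replicate it or write it to an external store such as Redis or DynamoDB.
    public class UserPreferences implements Serializable {
        private static final long serialVersionUID = 1L;

        private final String theme;

        public UserPreferences(String theme) {
            this.theme = theme;
        }

        public String getTheme() {
            return theme;
        }

        // Typical usage from a servlet or controller:
        public static void remember(HttpSession session, String theme) {
            session.setAttribute("prefs", new UserPreferences(theme));
        }
    }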
Another consideration for moving a Java application to AWS is that you cannot use any tools or libraries that rely on multicast. You may not be using multicast for anything, but in my experience every Java app I've had to migrate to AWS relied on multicast for clustering, and I had to modify it to use a different clustering method.
Also, for a successful migration to AWS I suggest you read up a bit on VPCs, private IP versus public IP, and Security Groups. A solid understanding of those topics is key to setting up your network so that your web servers can communicate with your DB and cache servers in a secure and performant manner.

Infinispan Clustering applications on 2 servers

I have a scenario where I have 2 WebLogic servers, let's say WL1 and WL2.
In WL1 I have 2 applications deployed, APP1 and APP2.
In WL2 I have 2 applications deployed, APP3 and APP4.
I want to create an Infinispan configuration where APP1 from WL1 forms a cluster with APP3 in WL2, and APP2 from WL1 forms a cluster with APP4 in WL2.
So I tried the default UDP multicasting, and it looks like all 4 applications form a single cluster. I changed the multicast port to solve this, but is that the only way to get around this kind of situation?
I am wondering whether something can be done with TCPPING; since it is peer-to-peer, it would form a cluster between WL1 & WL2 rather than between the individual applications, right?
I am also considering remote caching, but I want to explore embedded caching before we completely rule it out, so any help will be highly appreciated.
Answering the question in the comment: remote vs. embedded.
The main drawback of a remote cache is the latency added by communication between the client and the server. Also, you can't use transactions in remote mode, and there may be other features missing as well.
On the other hand, with a remote cache you can easily upgrade the application without changing the data inside Infinispan; with embedded mode this would be more complicated. You can also size the tiers independently: although Infinispan aims at linear scalability, things are never quite that ideal, so you can run e.g. 20 application servers against only 4 Infinispan servers (provided that the application requires more computing power).
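For the original separation question: in embedded mode the usual lever is to give each application pair its own cluster name (and, if needed, its own JGroups stack, e.g. TCP with TCPPING) rather than only changing the multicast port. A minimal sketch, assuming Infinispan's embedded Java API; the cluster name and JGroups file name are placeholders:

    import org.infinispan.configuration.global.GlobalConfiguration;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.manager.EmbeddedCacheManager;

    public class App1CacheBootstrap {
        public static EmbeddedCacheManager start() {
            // Placeholder cluster name: only nodes using the same name join this
            // cluster, so APP2/APP4 would use e.g. "app2-app4-cluster" instead.
            GlobalConfiguration global = new GlobalConfigurationBuilder()
                .transport()
                    .defaultTransport()
                    .clusterName("app1-app3-cluster")
                    // Optional: switch from UDP multicast to a TCP/TCPPING stack.
                    // .addProperty("configurationFile", "jgroups-tcp.xml")
                .build();
            return new DefaultCacheManager(global);
        }
    }

Note that TCPPING on its own only changes how members discover each other; it is the distinct cluster name (or separate stacks/ports) that keeps the two application pairs in separate clusters.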

Advantages and disadvantages of Tomcat clustering

Currently we have the same web application deployed on 4 different Tomcat instances, each one running on an independent machine. A load balancer distributes requests to these servers. Our web application makes database calls and maintains a cache (key-value pairs). All Tomcat instances read the same data (XML) from the same data source (another server) and serve it to clients. In the future, we are planning to collect some usage data from requests, then process and store it in the database. This functionality should be common (one module) across all Tomcat servers.
Now we are thinking of using Tomcat clustering. I have done some research, but I am not able to figure out how to separate the data-fetching operation, i.e. reading the same data (XML) from the same data source (another server), out of the individual Tomcat web apps and make it common, so that once one server fetches the data, it maintains it (maybe in a cache) and the same data can be used by another server to serve clients. This functionality can be implemented using a distributed cache, but there are other modules that could also be made common across the Tomcat instances.
So basically, is there any advantage to using Tomcat clustering? And if yes, how can I implement modules that are common to all Tomcat servers?
Read the Tomcat configuration reference and clustering guide. The available clustering features are as follows:
The tomcat cluster implementation provides session replication,
context attribute replication and cluster wide WAR file deployment.
So, by clustering, you'll gain:
High availability: when one node fails, another will be able to take over without losing access to the data. For example, an HTTP session can still be handled without the user noticing the error.
Farm deployment: you can deploy your .war to a single node, and the rest will synchronize automatically.
The costs are mainly in performance:
Replication implies object serialization between the nodes. This may be undesirable in some cases, but it is also possible to fine-tune.
If you just want to share some state between the nodes, then you don't need clustering at all (unless you're going to use context or session replication). Just use a database and/or a distributed cache such as Ehcache (or anything else).
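For the "fetch the XML once and let every instance reuse it" part of the question, the usual pattern is cache-aside against a shared cache. A minimal sketch with hypothetical SharedCache and XmlDataSource interfaces standing in for whatever cache client (Ehcache, Redis, etc.) and data-source client you actually use:

    // Hypothetical cache-aside sketch: each Tomcat instance checks the shared
    // cache first and only goes to the XML data source on a miss, so data
    // fetched by one instance can be served by the others.
    public class XmlDataService {

        public interface SharedCache {          // placeholder for your cache client
            String get(String key);
            void put(String key, String value);
        }

        public interface XmlDataSource {        // placeholder for the remote XML source
            String fetch(String key);
        }

        private final SharedCache cache;
        private final XmlDataSource dataSource;

        public XmlDataService(SharedCache cache, XmlDataSource dataSource) {
            this.cache = cache;
            this.dataSource = dataSource;
        }

        public String getXml(String key) {
            String xml = cache.get(key);
            if (xml == null) {
                xml = dataSource.fetch(key);    // only one trip per key across the farm
                cache.put(key, xml);
            }
            return xml;
        }
    }

None of this needs Tomcat clustering itself; it works the same whether or not session replication is enabled.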

Application using Akka deployed in a WebLogic cluster

I am currently developing an application which will be deployed in a WebLogic application server cluster. This application consumes JMS messages through an MDB and processes some business logic through Akka actors.
Some of these agents are singletons, and others are grouped in a pool and contacted through a round-robin router.
I am trying to figure out how all these things will work in a clustered environment:
Is it possible to create a "unique" Akka system even if the application is deployed on several nodes in the cluster? Will agents created on each server know each other?
Is it possible to add a new WebLogic node to the cluster and have the Akka framework recognize these new resources?
How do I configure all these things?
From what I see in the Akka documentation about the cluster implementation, it seems that the supported architecture sits outside an application server, with Akka nodes started from a Java command line.
Sadly, I have not found any valuable information on the use of Akka in an application server environment.
Thanks for your help
When you say Akka agents, you mean actors? Also, I assume that round-robin dispatcher is a RoundRobinRouter :)
Akka does not have explicit support for application servers, but you should be able to instantiate an ActorSystem in your code.
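For example, a minimal sketch of bootstrapping an ActorSystem inside the deployed application, assuming the Akka 2.1-era Java API and a plain ServletContextListener; the class, system, and holder names are placeholders:

    import akka.actor.ActorSystem;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Hypothetical bootstrap: create one ActorSystem when the application is
    // deployed and shut it down when it is undeployed. The static holder is just
    // one way to let the MDB (which has no ServletContext) reach the system.
    public class AkkaBootstrapListener implements ServletContextListener {

        private static volatile ActorSystem system;

        public static ActorSystem actorSystem() {
            return system;
        }

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            system = ActorSystem.create("businessLogic");
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            if (system != null) {
                system.shutdown(); // replaced by terminate() in later Akka versions
                system = null;
            }
        }
    }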
As for "uniqueness", if you use clustering, the membership is maintained automatically for you so you can see which nodes are available, and you can add nodes easily. There is currently no naming service implemented on top of that, that is the target of a later version, so you have to take care of finding an actor inside the cluster yourself, or handling singletons global to the cluster.
I recommend reading the relevant sections in the documentation how you can set up and configure your cluster.
http://doc.akka.io/docs/akka/2.1.0/cluster/index.html

What's the difference between standalone and domain on JEE6?

I'm starting with JBoss for development, and I'm using it in standalone mode.
I read that in a production environment JBoss should be run as a domain.
I searched to understand the difference between them, but I didn't find any document that explains it well.
That's not really correct. Standalone is fine for production. It's commonly used in production, especially when you only need one instance of the server.
Domain is used when you run several instances of JBoss AS and you want a single point where you can control configuration from. You can read more about it in the documentation.
Update
The link has been changed to the latest version of WildFly as the JBoss AS 7 documentation has been archived, but is still available at https://docs.jboss.org/author/display/AS71/Admin%20Guide.html#8094211_AdminGuide-StandaloneServer
Standalone mode
each JBoss server has its own configuration
single JVM process
Domain mode
central control of multiple servers
central configuration for multiple servers
It's important to understand that the choice between a managed domain and standalone servers is all about how your servers are managed, not what capabilities they have to service end user requests. This distinction is particularly important when it comes to high availability clusters.
So, given all that:
A single server installation gains nothing from running in a managed domain, so running a standalone server is a better choice.
For multi-server production environments, the choice of running a managed domain versus standalone servers comes down to whether the user wants to use the centralized management capabilities a managed domain provides.
Running a standalone server is better suited for most development scenarios. Any individual server configuration that can be achieved in a managed domain can also be achieved in a standalone server, so even if the application being developed will eventually run in production on a managed domain installation, much (probably most) development can be done using a standalone server.
For the above explanation and more, follow this link.
