I know there are cache products that support clustering, JBoss Cache etc.
But JBoss Cache only works with the JBoss server and it isn't an application-level component.
Is it possible to write my own cache for a cluster in my application? Is it true that each application instance cannot know about the other instances in the cluster?
Have you checked Infinispan? It's from JBoss too, but it has an API to control it programmatically. To be clear, you don't need to run JBoss; you just need to add the Infinispan JAR to your app.
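For a feel of what that looks like, here's a minimal sketch of an embedded, clustered cache configured entirely in code (illustrative only; builder method names can vary slightly between Infinispan versions):

```java
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class ClusteredCacheExample {
    public static void main(String[] args) {
        // Cluster-wide settings: defaultClusteredBuilder() enables the JGroups transport,
        // so instances discover each other without application-level knowledge of peers.
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();

        // Cache-level settings: replicate every entry synchronously to all nodes.
        ConfigurationBuilder cacheConfig = new ConfigurationBuilder();
        cacheConfig.clustering().cacheMode(CacheMode.REPL_SYNC);

        DefaultCacheManager manager = new DefaultCacheManager(global.build(), cacheConfig.build());
        Cache<String, String> cache = manager.getCache();

        cache.put("key", "value");               // becomes visible on every node
        System.out.println(cache.get("key"));

        manager.stop();
    }
}
```

Start the same application on several machines on the same network and the caches form a cluster via JGroups; you don't have to enumerate the other instances yourself.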
In a few words: Terracotta.
Brilliant solution. Works fantastically, good support on the forums. Good error messages.
No source footprint; it's all a matter of configuration. Terracotta will instrument your code with the bytecode needed to communicate with the Terracotta server.
Downside: it's a hub-and-spoke design, so it needs a designated Terracotta server.
My project is looking to deploy a new J2EE application to Amazon's cloud. Elastic Beanstalk supports Tomcat apps, which seems perfect. Are there any particular design considerations to keep in mind when writing said app that might differ from just a standalone Tomcat on a server?
For example, I understand that the server is meant to scale automatically. Is this like a cluster? Our application framework tends to stick state in the HttpSession; is that a problem? Or when it says it scales automatically, does that just mean memory and CPU?
Automatic scaling on AWS is done via adding more servers, not adding more CPU/RAM. You can add more CPU/RAM manually, but it requires shutting down the server for a minute to make the change, and then configuring any software running on the server to take advantage of the added RAM, so that's not the way automatic scaling is done.
Elastic Beanstalk is basically a management interface for Amazon EC2 servers, Elastic Load Balancers and Auto Scaling Groups. It sets all that up for you and provides a convenient way of deploying new versions of your application easily. Elastic Beanstalk will create EC2 servers behind an Elastic Load Balancer and use an Auto Scaling configuration to add more servers as your application load increases. It handles adding the servers to the load balancer when they are ready to receive traffic, and removing them from the load balancer and deleting the extra servers when they are no longer needed.
For your Java application running on Tomcat you have a few options to handle horizontal scaling well. You can enable sticky sessions on the Load Balancer so that all requests from a specific user will go to the same server, thus keeping the HttpSession tied to the user. The main problem with this is that if a server is removed from the pool you may lose some HttpSessions and cause any users that were "stuck" to that server to be logged out of your application. The solution to this is to configure your Tomcat instances to store sessions in a shared location. There are Tomcat session store implementations out there that work with AWS services like ElastiCache (Redis) and DynamoDB. I would recommend using one of those, probably the Redis implementation if you aren't already familiar with DynamoDB.
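One practical consequence, regardless of which session store implementation you pick (a general illustration, not tied to a specific library): anything you put in the HttpSession must serialize cleanly, since the container now writes it to an external store instead of keeping it in local JVM memory.

```java
import java.io.Serializable;

// Hypothetical session attribute: with an external session store (ElastiCache/Redis,
// DynamoDB, etc.) everything placed in the HttpSession must be serializable, because
// the container ships it out of process rather than holding it in local heap.
public class UserPreferences implements Serializable {
    private static final long serialVersionUID = 1L;

    private String theme;
    private String locale;

    public String getTheme() { return theme; }
    public void setTheme(String theme) { this.theme = theme; }
    public String getLocale() { return locale; }
    public void setLocale(String locale) { this.locale = locale; }
}

// In a servlet or controller:
//   request.getSession().setAttribute("prefs", prefs);
```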
Another consideration for moving a Java application to AWS is that you cannot use any tools or libraries that rely on multi-cast. You may not be using multi-cast for anything, but in my experience every Java app I've had to migrate to AWS relied on multi-cast for clustering and I had to modify it to use a different clustering method.
Also, for a successful migration to AWS I suggest you read up a bit on VPCs, private IP versus public IP, and Security Groups. A solid understanding of those topics is key to setting up your network so that your web servers can communicate with your DB and cache servers in a secure and performant manner.
Hi, I am trying to use Infinispan as a remote caching solution, and when following the guide I see the following:
> This server provides easy to use RESTful HTTP access to the Infinispan
> data grid, built on JAX-RS. This application is delivered (currently)
> as a WAR file, which you can deploy to a servlet container (as many
> instances as you need).
I could not find the WAR in the 5.3.0.Final distribution.
But I see that the Infinispan Server installation can serve as a Remote Data Grid, so is the REST interface included in the server installation with the latest release?
If yes:
What server is it running on?
Do we need a licence to run the server at the enterprise level?
What is a good way to deploy it in any other application server?
Any help will be highly appreciated?
> But I see that the Infinispan Server installation can serve as a Remote Data Grid, so is the REST interface included in the server installation with the latest release?
We will be talking about this: https://github.com/infinispan/infinispan-server The answer is, I'd say, yes. When you use Infinispan Server, you will be able to access the Infinispan cache via the REST endpoint (see the readme, plus the endpoint subsystem in, for example, the standalone.xml configuration file). After starting this standalone server you can connect to http://127.0.0.1:8080/ (the REST server) and start using it according to the rules described in the documentation.
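For example, a rough sketch of talking to that REST endpoint from plain Java (the /rest/default path and the cache name are assumptions; check your endpoint subsystem configuration for the actual context path and cache name):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class InfinispanRestClient {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint layout: /rest/{cacheName}/{key}
        String base = "http://127.0.0.1:8080/rest/default";

        // Store a value with an HTTP PUT
        HttpURLConnection put = (HttpURLConnection) new URL(base + "/greeting").openConnection();
        put.setDoOutput(true);
        put.setRequestMethod("PUT");
        put.setRequestProperty("Content-Type", "text/plain");
        try (OutputStream out = put.getOutputStream()) {
            out.write("Hello, Infinispan".getBytes("UTF-8"));
        }
        System.out.println("PUT status: " + put.getResponseCode());

        // Read it back with an HTTP GET
        HttpURLConnection get = (HttpURLConnection) new URL(base + "/greeting").openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(get.getInputStream(), "UTF-8"))) {
            System.out.println("GET returned: " + in.readLine());
        }
    }
}
```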
> What server is it running on?
The whole Infinispan Server is heavily based on JBoss AS. Imagine the "big" JBoss AS minus all the unnecessary systems, subsystems and functionality. This "little boy" is Infinispan Server, which, for example, doesn't support deploying applications.
> Do we need a licence to run the server at the enterprise level?
No, this is an open source project. If you are still looking for an "officially" supported version, I'd suggest you check Red Hat's JBoss Data Grid, which is productized and supported Infinispan + Infinispan Server. See http://www.redhat.com/products/jbossenterprisemiddleware/data-grid/
> What is a good way to deploy it in any other application server?
There is no such way. As I mentioned earlier, Infinispan Server is itself a standalone server which already contains everything you need for caching and running a cluster of virtually 128 (or even more) nodes.
> Any help will be highly appreciated?
Maybe. I can't answer this question properly :(
I have an app which is deployed to a cluster with 2 JVMs. The web application has a cache implemented using MBeans, and the cache runs on each JVM. The cache is refreshed with the request pattern */refresh. The problem is that when the request goes through the ODR, it is routed to only one server, so the cache on only one server is refreshed. How do I solve this problem? Cache replication? I think it might be a lot of work to implement cache replication. Any other solutions? WebSphere APIs?
If I can get the current instance of the application, I'm thinking of using AdminClient to get the clusters and then calling the request on all the nodes on which the application is installed, except for the current instance.
The WebSphere way to do this is to use the DynaCache feature with DRS. DynaCache is a kind of hash map which can be distributed across the DRS cluster members. DynaCache has an API, DistributedMap, which extends java.util.Map.
There are also plenty of configuration (through the Admin Console and cachespec.xml) and monitoring possibilities (PMI with TPV).
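As a rough sketch, assuming the default DistributedMap instance bound at the JNDI name services/cache/distributedmap (you can also define your own cache instance in the Admin Console and look that up instead):

```java
import javax.naming.InitialContext;
import com.ibm.websphere.cache.DistributedMap;

public class CacheRefreshHelper {

    public void refreshEntry(String key, Object freshValue) throws Exception {
        // Look up the default DistributedMap instance; with DRS replication enabled on
        // the cache instance, puts and invalidations propagate to the other cluster JVMs.
        InitialContext ctx = new InitialContext();
        DistributedMap cache = (DistributedMap) ctx.lookup("services/cache/distributedmap");

        // Invalidate the stale entry cluster-wide, then store the fresh value
        cache.invalidate(key);
        cache.put(key, freshValue);
    }
}
```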
Technical overview:
http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaag%2Fcache%2Fpubwasdynacachoverview.htm
DistributedMap API
http://pic.dhe.ibm.com/infocenter/adiehelp/v5r1m1/index.jsp?topic=%2Fcom.ibm.wasee.doc%2Finfo%2Fee%2Fjavadoc%2Fee%2Fcom%2Fibm%2Fwebsphere%2Fcache%2FDistributedMap.html
A good article from developerworks
http://www.ibm.com/developerworks/websphere/library/techarticles/0906_salvarinov/0906_salvarinov.html
The crude way we did something similar was to directly hit each Web Container on its own port. If you're able to reach them, that is.
I'm starting to use JBoss for development, and I'm running it as standalone.
I read that in a production environment JBoss should run as a domain.
I searched to understand the difference between them, but I didn't find any document that explains it well.
That's not really correct. Standalone is fine for production. It's commonly used in production, especially when you only need one instance of the server.
Domain is used when you run several instances of JBoss AS and you want a single point where you can control configuration from. You can read more about it in the documentation.
Update
The link has been changed to the latest version of WildFly as the JBoss AS 7 documentation has been archived, but is still available at https://docs.jboss.org/author/display/AS71/Admin%20Guide.html#8094211_AdminGuide-StandaloneServer
Standalone mode:
- each JBoss server has its own configuration
- single JVM process

Domain mode:
- central control of multiple servers
- central configuration for multiple servers
It's important to understand that the choice between a managed domain and standalone servers is all about how your servers are managed, not what capabilities they have to service end user requests. This distinction is particularly important when it comes to high availability clusters.
So, given all that:
A single server installation gains nothing from running in a managed domain, so running a standalone server is a better choice.
For multi-server production environments, the choice of running a managed domain versus standalone servers comes down to whether the user wants to use the centralized management capabilities a managed domain provides.
Running a standalone server is better suited for most development scenarios. Any individual server configuration that can be achieved in a managed domain can also be achieved in a standalone server, so even if the application being developed will eventually run in production on a managed domain installation, much (probably most) development can be done using a standalone server.
For the above explanation and more, follow this link
We have a distributed architecture where our application runs on four Tomcat instances. I would like to know the various options available for communicating between these Tomcat instances.
The details: Say a user sends a request to stop listening to the incoming queues; this needs to be communicated to the other Tomcat instances so that they stop their listeners as well. How can this communication be made across Tomcats?
Thanks,
Midhun
Looks like you are facing a coordination problem.
I'd recommend using Apache ZooKeeper for this kind of problem.
Consider putting your configuration in ZooKeeper. ZooKeeper allows you to watch for changes: if the configuration is changed in ZooKeeper, each Tomcat instance is notified and you can adjust the behavior of your application on every node.
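A minimal sketch of that watch-and-react pattern with the plain ZooKeeper client API (the znode path, the flag format, and the listener-toggling hook are hypothetical placeholders):

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ListenerConfigWatcher implements Watcher {

    // Hypothetical znode holding "true"/"false" for whether queue listeners should run
    private static final String CONFIG_PATH = "/app/config/listeners-enabled";

    private final ZooKeeper zk;

    public ListenerConfigWatcher(String connectString) throws Exception {
        this.zk = new ZooKeeper(connectString, 30000, this);
        readAndWatch();
    }

    private void readAndWatch() throws Exception {
        // ZooKeeper watches are one-shot, so re-register the watch on every read
        byte[] data = zk.getData(CONFIG_PATH, this, null);
        boolean enabled = Boolean.parseBoolean(new String(data, "UTF-8"));
        // hypothetical hook: start or stop the local queue listeners based on 'enabled'
        System.out.println("Listeners enabled: " + enabled);
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                readAndWatch();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
```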
You can use any kind of external persistent storage to solve this problem, though.
Another possible way is to implement the communication between Tomcat nodes yourself, but in this case you'll have a problem managing your deployment topology: every Tomcat node has to know about the other nodes in the cluster.
What lies on the surface is RMI or HTTP requests. As well, IMHO, you could try to use MBeans. One more thing: you could use some non-Java options, like D-Bus or something, or even flat files... if all the Tomcats run on the same machine. Lots of options...
We use Hazelcast for this kind of scenario. They also have a handy HTTP session clustering feature.
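For the queue-listener scenario specifically, a Hazelcast distributed topic is a natural fit. A rough sketch against the Hazelcast 3.x API (the topic name and message constant are made up):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

public class ListenerControlBroadcast {
    public static void main(String[] args) {
        // Each Tomcat instance embeds a Hazelcast node; the nodes discover each other and
        // form a cluster (multicast by default, or a TCP/IP member list where multicast
        // isn't available, e.g. on AWS).
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed topic: every member that subscribes receives every published message.
        ITopic<String> control = hz.getTopic("listener-control");
        control.addMessageListener(new MessageListener<String>() {
            @Override
            public void onMessage(Message<String> message) {
                if ("STOP_LISTENERS".equals(message.getMessageObject())) {
                    // hypothetical hook: stop the local queue listeners here
                    System.out.println("Stopping local listeners");
                }
            }
        });

        // The instance that receives the user's request publishes to all members, itself included.
        control.publish("STOP_LISTENERS");
    }
}
```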