I have Tomcat session replication working in my server using the static-membership Tribes configuration. Now I would like to leverage the same setup in my application to send messages between members of the cluster, to support the event-driven architecture my app uses. I want to use Tribes for the following reasons:
Tribes is a peer-to-peer communication framework already built into Tomcat.
It reuses the existing configuration of peers.
There is no overhead from adding new libraries.
Is there a way to programmatically gain access to Tomcat's cluster Channel object to send messages over? Or is there a way to discover the members of the cluster so I can create my own channel without duplicating the configuration?
Here is an example of using JMX to find the cluster configuration. It's pretty hacky, but there may be a cleaner way to find this information via JMX.
http://www.coderanch.com/t/570194/Tomcat/find-Tomcat-cluster-members
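The JMX route mentioned above can be sketched with the standard javax.management API. This is a rough sketch, not a definitive recipe: the `Catalina:type=Cluster,*` query pattern is an assumption based on Tomcat's default Catalina MBean domain, and the exact object names vary between Tomcat versions, so inspect the MBean tree (e.g. with jconsole) before relying on it.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ClusterMemberLookup {
    // Queries an MBean server for Tomcat cluster MBeans. The
    // "Catalina:type=Cluster,*" pattern is an assumption about the default
    // Catalina JMX domain; adjust it if your server uses another domain.
    public static Set<ObjectName> findClusterMBeans(MBeanServer server) throws Exception {
        ObjectName pattern = new ObjectName("Catalina:type=Cluster,*");
        return server.queryNames(pattern, null);
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> names = findClusterMBeans(server);
        // Outside a running Tomcat this set is empty; inside Tomcat the
        // matching MBeans expose attributes describing the cluster members.
        System.out.println("Cluster MBeans found: " + names.size());
    }
}
```

From inside the Tomcat JVM you would pass the platform MBean server as shown; from outside you would connect through a JMX remote connector first.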
We have some Wildfly servers running in standalone mode.
Every single instance provides a bunch of stateless services that can be accessed through remote EJB calls (http-remoting) from several web applications.
The outbound connection of each web application points to an HTTP load balancer using round robin, with no stickiness. The balancer checks the availability of the service applications before connecting.
This works so far, including failover.
The problem:
The number of standalone servers can vary. Once an outbound connection is established from one of the webapps, it is never closed, so the same standalone server is reached until it dies.
The intent was that under heavy load we could simply start another VM running a standalone server, which the load balancer would then also use. This does not work, because the webapps never establish a new connection.
Question:
Is this a scenario that could work, and if so, is it possible to configure the webapps to open a new connection after some time, a number of requests, or some other trigger?
I tried disabling keep-alives for TCP and in the HTTP headers in Undertow, and setting a request idle time, but with no success so far.
Kind regards
Marcus
There is no easy way to dynamically load-balance remote EJB calls due to their binary nature. The JBoss EJB client lets you specify multiple remote connections, which are invoked in round-robin fashion, but the list is still hardcoded in your client configuration.
An example jboss-ejb-client.properties:
endpoint.name=client-endpoint
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
remote.connections=node1,node2
remote.connection.node1.host=192.168.1.105
remote.connection.node1.port=4447
remote.connection.node1.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connection.node1.username=appuser
remote.connection.node1.password=apppassword
remote.connection.node2.host=192.168.1.106
remote.connection.node2.port=4447
remote.connection.node2.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connection.node2.username=appuser
remote.connection.node2.password=apppassword
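For completeness, the same hardcoded node list can be assembled in code rather than shipped as a properties file; in older jboss-ejb-client versions such a Properties object can be handed to a PropertiesBasedEJBClientConfiguration (an assumption about your client version, check your client's API). Either way, the node list remains static:

```java
import java.util.Properties;

public class EjbClientProps {
    // Builds the same settings as the jboss-ejb-client.properties example
    // above. Hosts, ports, and credentials are the placeholder values from
    // that example; replace them with your own.
    public static Properties build() {
        Properties p = new Properties();
        p.put("endpoint.name", "client-endpoint");
        p.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "false");
        p.put("remote.connections", "node1,node2");
        p.put("remote.connection.node1.host", "192.168.1.105");
        p.put("remote.connection.node1.port", "4447");
        p.put("remote.connection.node1.username", "appuser");
        p.put("remote.connection.node1.password", "apppassword");
        p.put("remote.connection.node2.host", "192.168.1.106");
        p.put("remote.connection.node2.port", "4447");
        p.put("remote.connection.node2.username", "appuser");
        p.put("remote.connection.node2.password", "apppassword");
        return p;
    }
}
```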
I understand that your web application is also Java-based. Is there any reason not to run both the EJB layer and the web layer on the same server within a single .ear deployment? That way you could use local access, or even inject @EJB beans directly into your web controllers, without the need to serialize all calls into binary form for remote EJB, with the benefit of much simpler configuration and better performance.
If your application really is a separate deployment, then the preferred way is to expose your backend functionality via a REST API (JAX-RS). That way it is accessible over HTTP, so you can call it from your web app and load-balance it just as you did with your web UI (you can choose to keep your API's HTTP context private, visible only to services on the same network, or make it public, e.g. for mobile apps).
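The idea of exposing the backend over HTTP so any load balancer can distribute calls can be sketched as follows. In a real Wildfly deployment you would write a JAX-RS resource class; the JDK's built-in com.sun.net.httpserver is used here only so the sketch runs standalone, and the `/api/ping` path is illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class BackendHttpSketch {
    // Starts a minimal HTTP endpoint. Because the protocol is plain HTTP,
    // a round-robin balancer in front of several such instances distributes
    // every request, unlike a long-lived binary EJB connection.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/api/ping", exchange -> {
            byte[] body = "pong".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Each request is a fresh, stateless HTTP exchange, which is exactly why the balancer can send it to any available node.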
Hope that helps
You should be using the standalone-ha.xml or standalone-full-ha.xml profile. While you might not need the HA part to manage the state of stateful beans across your cluster, you need it so the EJB client can discover the other nodes in your cluster automatically.
In effect, the load balancing is done by the EJB client, not by a separate dedicated load balancer.
I am creating a Java application in Eclipse to let different devices communicate together using a publish/subscribe protocol.
I am using JBoss and ActiveMQ, and I want to know whether I should use an ActiveMQ resource adapter to integrate the broker into JBoss in standalone mode, or whether I should just add dependencies to my pom.xml file and use explicit Java code as shown here: http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html.
Here is the documentation I found on integrating ActiveMQ with JBoss in standalone mode: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.1/html/Integrating_with_JBoss_Enterprise_Application_Platform/DeployRar-InstallRar.html
Could someone tell me what is the difference between the two approaches?
Here is the answer to my question:
The first approach starts a broker within your webapp itself. You can use a normal consumer (not a message-driven bean, MDB), but only your webapp can access it, via the VM transport (vm://).
The second approach lets the app server manage both the creation of the broker and the connection to it, so it's probably also within the JVM that runs your webapp and probably only accessible to your webapp, but those details are hidden from you by the app server. You can only consume messages via an MDB, but this provides a uniform interface that doesn't need to change if you switch to another JMS provider in the future.
Since the standard way to integrate a JEE webapp with a JMS broker is via the RA, I'd recommend using that approach simply for consistency and standardization. That should also allow you to switch to a standalone ActiveMQ broker (or another JMS product) in the future with minimal effort.
I have Apache ActiveMQ embedded into my Java 8 server-side project. It's working fine, and I am able to send and consume messages from pre-configured queues. I now need to be able to programmatically remove messages from a queue upon request. After reading some docs I found that Apache ActiveMQ has a sub-project called Artemis that seems to provide the required functionality, but I am a bit confused about how to use it. Is Artemis a sort of plugin on top of ActiveMQ, so that I just need to add the required dependencies and use its tools, or is it a separate product that doesn't work with ActiveMQ at all? If the latter, how do I manage individual messages (in particular, delete a requested message) in ActiveMQ?
First off, 'ActiveMQ Artemis' is a sub-project within the ActiveMQ project that represents an entirely new broker with a radically different underlying architecture than the main ActiveMQ broker. You would run one or the other.
To manage messages in the ActiveMQ broker you would use the JMX Management API and the remove methods exposed by the Queue MBean to remove specific messages. This can be done by message ID, or more broadly using a message selector to capture more than one message if need be. The JMX API is also exposed via Jolokia, so you can manage the broker via simple REST calls instead of JMX if you prefer.
In any case, this sort of message-level management on the broker is a bit of an anti-pattern in the messaging world. If you find yourself needing to treat the broker as a database, ask yourself why you aren't using a database, since a broker is not one. You will often run into many more issues trying to manage your messages this way than by just putting them into a database.
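A sketch of that JMX approach using only the standard javax.management API is below. The object-name pattern is the one used by ActiveMQ 5.8+ (earlier versions used a different layout), `localhost` is the default brokerName, and `removeMessage` is an operation on ActiveMQ's QueueViewMBean; verify all three against your broker's MBean tree before use.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class QueueMessageRemover {
    // Builds the JMX ObjectName for a queue on an ActiveMQ 5.8+ broker.
    // "localhost" is the default brokerName; adjust if yours differs.
    public static ObjectName queueMBean(String brokerName, String queueName) throws Exception {
        return new ObjectName("org.apache.activemq:type=Broker,brokerName=" + brokerName
                + ",destinationType=Queue,destinationName=" + queueName);
    }

    // Invokes the QueueViewMBean removeMessage(String messageId) operation.
    // Requires a live JMX connection to the broker; returns true on removal.
    public static boolean removeMessage(MBeanServerConnection conn, ObjectName queue,
                                        String messageId) throws Exception {
        return (Boolean) conn.invoke(queue, "removeMessage",
                new Object[]{messageId}, new String[]{String.class.getName()});
    }
}
```

The same MBean also exposes selector-based operations (e.g. removing all messages matching a JMS selector) for bulk removal.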
I would like to know if the Servlet specification provides a way to load HTTP sessions into my web application.
The idea is simple: every time a new HTTP client connects, a new session is created... and I will send this session and its values to a database (for the time being this step is easy to do).
If this "master server" dies, another machine will take over its IP address, so HTTP clients will now send their requests to this new machine (let's call it the "slave server").
Here I would like my slave server to retrieve sessions from the old server... but I don't know of any method in the Servlet specification that can "add" a session! Is there a way to do it?
PS: it's for an university project, so I cannot use already existing modules like Tomcat's mod_jk for this homemade load-balancer.
EDIT:
I think that a lot of people think I am crazy not to use already existing tools. It's a university project, and I have to build it with my bare hands in order to show my professors the low-level mechanisms I used. I already know it would be crazy to use this in production; when the project is finished, it will be thrown in the trash.
For the moment I haven't found a standard way to do it with the Servlet specification, but maybe I can do it with Tomcat's native Manager and Session classes... How can I get instances of those interfaces?
This isn't exactly a new idea; it's called session replication. There are a couple of ways to do this. The easiest ones, imho, are (in ascending order of preference):
Jetty's Session clustering with a database
Tomcat's Session clustering. I personally prefer the BackupManager, which makes sure that a session lives on 2 servers in a cluster at any given point in time and forwards clients accordingly. This reduces the network traffic for session replication to a bare minimum.
Session replication with a distributed cache like Hazelcast or Ehcache. There are plugins for both Jetty and Tomcat to do this. Since a cache is most often used anyway, this is the best solution for me. What I tend to do is put 2 round-robin-balanced Varnish servers in front of such a cluster, which serve the dual purpose of load balancing the cluster and serving static content from an in-memory cache.
As for your university project, I'd turn in an embedded Jetty with automatic session replication that connects to other servers via broadcast using Hazelcast. Useful, not overcomplicated (iirc, you need to implement 2 relatively simple interfaces), yet powerful. Put a Varnish in front of your test machines and you should be good to go.
This feature is supported out of the box by all major Java EE application server vendors, so you shouldn't have to implement anything yourself. As Markus wrote, it is referred to as session replication or session persistence. You can take a look at WebSphere Liberty, which is available for free for development. It supports this out of the box; you just need to:
install Liberty (see "Download just the Liberty profile runtime")
configure session replication (see "Configuring session persistence for the Liberty profile")
install and configure IBM HTTP Server for load balancing (see "Configuring a web server plug-in for the Liberty profile")
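For the homemade approach the question describes, the save/restore step itself can be sketched with plain JDK serialization. This assumes every session attribute value implements Serializable; since the Servlet API has no "add session" call, the slave server would create a fresh session with request.getSession(true) and copy the restored attributes into it with setAttribute() (e.g. in a servlet filter):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

public class SessionSnapshot {
    // Serializes a session's attribute map to bytes, suitable for storing
    // in the database keyed by session ID.
    public static byte[] save(Map<String, Object> attributes) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new HashMap<>(attributes));
        }
        return bos.toByteArray();
    }

    // Restores the attribute map on the slave server; the caller then copies
    // each entry into a newly created HttpSession via setAttribute().
    @SuppressWarnings("unchecked")
    public static Map<String, Object> restore(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Map<String, Object>) ois.readObject();
        }
    }
}
```

Note that the session ID itself cannot be forced through the standard API, which is why real implementations (like Tomcat's Manager) work below the Servlet spec level.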
We have a distributed architecture where our application runs on four Tomcat instances. I would like to know the various options available for communicating between these Tomcat instances.
The details : Say a user sends a request to stop listening to the incoming queues, this needs to be communicated with other Tomcat instances so that they stop their listeners as well. How can this communication be made across Tomcats?
Thanks,
Midhun
Looks like you are facing a coordination problem.
I'd recommend using Apache ZooKeeper for this kind of problem.
Consider putting your configuration in ZooKeeper. ZooKeeper allows you to watch for changes: if the configuration is changed in ZooKeeper, each Tomcat instance is notified and you can adjust the behavior of your application on every node.
You can use any kind of external persistent storage to solve this problem, though.
The other possible way is to implement communication between Tomcat nodes yourself, but in that case you'll have a problem managing your deployment topology: every Tomcat node has to know about the other nodes in the cluster.
What lies on the surface is RMI and HTTP requests. As well, IMHO, you could try using MBeans. One more thing: you could use some non-Java mechanisms, like D-Bus, or even flat files... if all the Tomcats run on the same machine. Lots of options...
We use Hazelcast for this kind of scenario. They have a handy HTTP session clustering feature.
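The MBean option mentioned above can be sketched with the standard javax.management API: each Tomcat registers a small control MBean, and whichever instance receives the user's request flips the switch on the others through their JMX remote connectors. The MBean name and interface below are illustrative, and the actual listener shutdown is left as a comment:

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListenerControlDemo {
    // Standard-MBean naming convention: class Control must implement
    // an interface named ControlMBean in the same enclosing scope.
    public interface ControlMBean {
        boolean isListening();
        void stopListening();
    }

    public static class Control implements ControlMBean {
        private volatile boolean listening = true;
        public boolean isListening() { return listening; }
        public void stopListening() {
            // Here the real application would stop its queue listeners.
            listening = false;
        }
    }

    // Registers the control MBean under an illustrative name so it can be
    // invoked locally or through a JMX remote connector from another node.
    public static ObjectName register(MBeanServer server, Control control) throws Exception {
        ObjectName name = new ObjectName("app:type=ListenerControl");
        server.registerMBean(control, name);
        return name;
    }
}
```

Note that this still leaves the topology problem described above: the initiating node must know the JMX addresses of all its peers.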