Application using Akka deployed in a WebLogic cluster - Java

I am currently developing an application that will be deployed in a WebLogic application server cluster. This application consumes JMS messages through an MDB and processes some business logic through Akka actors.
Some of these actors are singletons and others are grouped in a pool and contacted through a round-robin router.
I am trying to figure out how all these things will work in a clustered environment:
Is it possible to create a "unique" Akka system even if the application is deployed on several nodes in the cluster? Will actors created on each server know about each other?
Is it possible to add a new WebLogic node to the cluster and have the Akka framework recognize the new resources?
How do I configure all of this?
From what I see in the Akka documentation about the cluster implementation, it seems that the supported architecture is outside an application server, with Akka nodes started from a shell command.
Sadly, I have not found any valuable information on using Akka in an application server environment.
Thanks for your help

When you say Akka agents, do you mean actors? Also, I assume that your round-robin router is a RoundRobinRouter :)
Akka does not have explicit support for application servers, but you should be able to instantiate an ActorSystem in your code.
As for "uniqueness", if you use clustering, the membership is maintained automatically for you so you can see which nodes are available, and you can add nodes easily. There is currently no naming service implemented on top of that, that is the target of a later version, so you have to take care of finding an actor inside the cluster yourself, or handling singletons global to the cluster.
I recommend reading the relevant sections of the documentation on how to set up and configure your cluster.
http://doc.akka.io/docs/akka/2.1.0/cluster/index.html
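A minimal sketch of how you might bootstrap an ActorSystem inside the web application, assuming a ServletContextListener registered in web.xml; the config keys follow the 2.1 cluster documentation linked above, and the system name, host names, and ports are placeholders:

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

import akka.actor.ActorSystem;

// Hypothetical listener that ties the ActorSystem to the web application's lifecycle.
public class AkkaLifecycleListener implements ServletContextListener {

    private ActorSystem system;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Minimal clustering config; each WebLogic node would use its own
        // host/port and the same list of seed nodes.
        Config config = ConfigFactory.parseString(
                "akka.actor.provider = \"akka.cluster.ClusterActorRefProvider\"\n" +
                "akka.remote.netty.hostname = \"node1.example.com\"\n" +
                "akka.remote.netty.port = 2551\n" +
                "akka.cluster.seed-nodes = [\"akka://MySystem@node1.example.com:2551\"]")
                .withFallback(ConfigFactory.load());

        system = ActorSystem.create("MySystem", config);
        sce.getServletContext().setAttribute("actorSystem", system);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (system != null) {
            system.shutdown(); // ActorSystem.shutdown() in Akka 2.1
        }
    }
}
```

The MDB could then look up the ActorSystem from the servlet context (or a shared holder class) and forward incoming JMS messages to an actor.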

Related

Interest of using activeMQ resource adapter

I am creating a Java application in eclipse to let different devices communicate together using a publish/subscribe protocol.
I am using JBoss and ActiveMQ, and I want to know whether I should use an ActiveMQ resource adapter to integrate the broker in JBoss in standalone mode, or whether I should just add dependencies to my pom.xml file and use explicit Java code as indicated here: http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html.
Here is the documentation I found for integrating ActiveMQ with JBoss in standalone mode: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.1/html/Integrating_with_JBoss_Enterprise_Application_Platform/DeployRar-InstallRar.html
Could someone tell me what is the difference between the two approaches?
Here is the answer to my question:
The first approach starts a broker within your webapp itself. You can use a normal consumer (not a message-driven bean - MDB), but only your webapp can access it, via the VM transport (vm://).

The second approach lets the app server manage both the connection to the broker and the creation of the broker, so it's probably also within the JVM that runs your webapp and probably only accessible to your webapp, but those details are hidden from you by the app server. You can only consume messages via an MDB, but this provides a uniform interface that doesn't need to change if you switch to another JMS provider in the future.

Since the standard way to integrate a JEE webapp with a JMS broker is via the RA, I'd recommend using that approach simply for consistency and standardization. That should also allow you to switch to a standalone ActiveMQ broker (or another JMS product) in the future with minimal effort.
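For illustration, a minimal sketch of the first (embedded) approach, assuming the ActiveMQ and JMS dependencies are on the classpath; the queue name is made up:

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class EmbeddedBrokerExample {

    public static void main(String[] args) throws Exception {
        // The vm:// transport starts a broker inside this JVM on first use;
        // only code running in the same JVM can reach it.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");

        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // A plain JMS consumer (no MDB) on an illustrative queue name.
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("devices.events"));
        Message message = consumer.receive(1000); // wait up to 1s for a message

        connection.close();
    }
}
```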

Java web application calling different other Java applications (workers)

I am looking for a better logical solution for a situation where one core Java EE (web) application will call/execute many other Java applications/workers (which could be plain Java or Java EE (web) applications; I don't know which would be best) at certain times.
Those other Java applications/workers will basically connect (individually) to different data sources (remote DB, REST, SOAP, etc.) and populate/update a local DB at certain intervals.
I have been researching the Java Quartz Scheduler recently. Do you have any good suggestions for this enterprise-level architecture?
Btw, I am using Spring 4 and Java 7.
Thank you as always for all good and professional ideas.
Sample diagram can be as follows:
You can connect your Java application with the others easily using Spring's HTTP invoker or RMI invoker.
More information here: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/remoting.html
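As a rough sketch, the client side of the HTTP invoker approach might look like this in Spring 4 Java config; WorkerService, the URL, and the bean name are placeholders, and the worker application would have to expose the matching HttpInvokerServiceExporter:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

// Hypothetical shared interface implemented by a worker application.
interface WorkerService {
    void refreshLocalDatabase();
}

@Configuration
public class WorkerClientConfig {

    // Client-side proxy in the core web application; the URL is illustrative and
    // must point at an HttpInvokerServiceExporter exposed by the worker.
    @Bean
    public HttpInvokerProxyFactoryBean workerService() {
        HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
        proxy.setServiceUrl("http://worker-host:8080/worker/WorkerService");
        proxy.setServiceInterface(WorkerService.class);
        return proxy;
    }
}
```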
I'm not sure I fully understand, but you could look at a messaging mechanism. Typically, the web app would send a message that is received by all the workers.
Have a look at JMS, which is designed for this kind of use and integrates well with both Java EE (it is part of the Java EE spec) and Spring.
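A minimal sketch of the publishing side, assuming a container-managed JMS ConnectionFactory and Topic; the JNDI names are illustrative:

```java
import javax.annotation.Resource;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

public class WorkerNotifier {

    // Container-managed resources; the JNDI names are illustrative.
    @Resource(mappedName = "jms/ConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/WorkerTopic")
    private Topic workerTopic;

    // Publish a message that every subscribed worker will receive.
    public void notifyWorkers(String command) throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(workerTopic);
            producer.send(session.createTextMessage(command));
        } finally {
            connection.close();
        }
    }
}
```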
There are basically two parts to your question:
How do I schedule jobs on a Java EE server?
How do I invoke remote services from that scheduled job?
Job Scheduling
The trick with job scheduling in a Java EE environment is that you are typically running jobs in a cluster, that is, on more than one server. Thus, only one of the nodes should be running a given job at a time "on behalf of" the cluster; otherwise, you'll get multiple calls to those remote resources for the same thing.
There is a standard out there for this, JSR-237, which covers Timers and WorkManagers. Each Java EE vendor has its own implementation. WebLogic has one, WebSphere has one, and JBoss has one (the JBoss one isn't compliant with the JSR, but it does the same thing).
If you are running one of the servers that only implements the web tier of the Java EE spec (i.e., Tomcat or Geronimo), then Quartz is a good choice.
How to invoke remote services from timed jobs
Echoing #Alexandre Cartapanis' answer, probably what you'll want to do is create a JMS Topic in your Java EE server, and then when the job runs, post a message to the topic. The remote services (whatever Java EE servers) subscribe to this topic, and then you can run your queries.
The big advantage here is that if you ever need to add another service that needs to populate the local DB, all you have to do is have that server subscribe to the topic - no code changes needed. With JSch or remoting, you'll have to make a code change every time a new service comes online. You also have to make code changes if DNS addresses or IP addresses change, etc., whereas the JMS way is just configuration on the server. There's a lot more that you can do with JMS, and the support is much better across the board.
Spring has adapters for Quartz and I think there's one out there for WorkManagers and Timers too.
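For example, a rough sketch of the scheduling side using Spring's Quartz support; the job, cron expression, and bean names are illustrative, and in a cluster you would additionally configure Quartz's JDBC job store so only one node fires the trigger:

```java
import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Trigger;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.CronTriggerFactoryBean;
import org.springframework.scheduling.quartz.JobDetailFactoryBean;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

@Configuration
public class SchedulingConfig {

    // The scheduled job: on each run it would publish a "start work" message to
    // the JMS topic described above (publishing code omitted for brevity).
    public static class NotifyWorkersJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // publish to jms/WorkerTopic here
        }
    }

    @Bean
    public JobDetailFactoryBean notifyWorkersJobDetail() {
        JobDetailFactoryBean jobDetail = new JobDetailFactoryBean();
        jobDetail.setJobClass(NotifyWorkersJob.class);
        jobDetail.setDurability(true);
        return jobDetail;
    }

    @Bean
    public CronTriggerFactoryBean nightlyTrigger(JobDetail notifyWorkersJobDetail) {
        CronTriggerFactoryBean trigger = new CronTriggerFactoryBean();
        trigger.setJobDetail(notifyWorkersJobDetail);
        trigger.setCronExpression("0 0 2 * * ?"); // every night at 02:00
        return trigger;
    }

    @Bean
    public SchedulerFactoryBean scheduler(Trigger nightlyTrigger) {
        SchedulerFactoryBean scheduler = new SchedulerFactoryBean();
        scheduler.setTriggers(nightlyTrigger);
        return scheduler;
    }
}
```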
You can make use of JSch (Java Secure Channel) to trigger remote SSH calls, which can start a JVM and run the worker class.
Here are some examples.
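As a rough sketch of the JSch approach (host, credentials, and the remote command are placeholders; password auth and disabled host-key checking are for illustration only):

```java
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class RemoteWorkerLauncher {

    public static void main(String[] args) throws Exception {
        // Host, user, and credentials are placeholders; in practice prefer key-based auth.
        JSch jsch = new JSch();
        Session session = jsch.getSession("deploy", "worker-host", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // for illustration only
        session.connect();

        // Start a JVM on the remote machine and run the worker class.
        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        channel.setCommand("java -cp /opt/worker/worker.jar com.example.Worker");
        channel.connect();

        // ... wait for completion or fire-and-forget, then clean up.
        channel.disconnect();
        session.disconnect();
    }
}
```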

About the WebSphere Application Server cluster?

Cluster:
A logical grouping of one or more functionally identical application server processes. A cluster provides ease of deployment, configuration, workload balancing, and fallback redundancy. A cluster is a collection of servers working together as a single system to ensure that mission-critical applications and resources remain available to clients.
Clusters provide scalability. For more information, refer to additional documentation that customer support may provide that describes vertical and horizontal clustering in the WebSphere Application Server distributed environment.
Above is the explanation of a WebSphere cluster. In the WebSphere world, a cell can have one or many clusters. I want to know in which cases one application should be deployed to more than one cluster in WebSphere.
You cannot deploy exactly the same application to more than one cluster; if you need more processing power, you just add members to the cluster.
One of the few reasons that comes to mind to deploy to a second cluster would be to use a different application version - check this post for more details and the restrictions on deploying multiple versions of the same application.

Communicate between tomcat instances (Distributed Architecture)

We have a distributed architecture where our application runs on four Tomcat instances. I would like to know the various options available for communicating between these Tomcat instances.
The details: say a user sends a request to stop listening to the incoming queues; this needs to be communicated to the other Tomcat instances so that they stop their listeners as well. How can this communication be made across the Tomcats?
Thanks,
Midhun
It looks like you are facing a coordination problem.
I'd recommend using Apache ZooKeeper for this kind of problem.
Consider putting your configuration in ZooKeeper. ZooKeeper allows you to watch for changes, and if the configuration is changed in ZooKeeper, the Tomcat instance will be notified and you can adjust the behavior of your application on every node.
You can use any kind of external persistent storage to solve this problem, though.
Another possible way is to implement communication between the Tomcat nodes yourself, but in this case you'll have a problem managing your deployment topology: every Tomcat node needs to know about the other nodes in the cluster.
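A minimal sketch of the ZooKeeper approach, assuming a znode such as /config/listeners-enabled holds the flag; the path and connection string are placeholders:

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ListenerFlagWatcher implements Watcher {

    // Path and connection string are illustrative.
    private static final String FLAG_PATH = "/config/listeners-enabled";

    private final ZooKeeper zk;

    public ListenerFlagWatcher(String connectString) throws Exception {
        zk = new ZooKeeper(connectString, 30000, this);
        readFlag(); // register the initial watch
    }

    private void readFlag() throws Exception {
        // Data watches are one-shot, so we re-register on every read.
        byte[] data = zk.getData(FLAG_PATH, this, null);
        boolean enabled = Boolean.parseBoolean(new String(data, "UTF-8"));
        // start or stop the local queue listeners accordingly (app-specific)
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
            try {
                readFlag();
            } catch (Exception e) {
                // handle reconnects/retries in real code
            }
        }
    }
}
```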
What lies on the surface is RMI or HTTP requests. You could also, IMHO, try MBeans. One more thing: you could use non-Java mechanisms like DBus, or even flat files if all the Tomcats run on the same machine. Lots of options...
We use Hazelcast for this kind of scenario. They also have a handy HTTP session clustering feature.
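A rough sketch of how the Hazelcast route might look, using a distributed topic; the topic name and command string are made up:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

public class ControlTopicExample {

    public static void main(String[] args) {
        // Each Tomcat instance joins the same Hazelcast cluster (multicast by default;
        // configure TCP/IP member lists for production).
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Every instance subscribes to a shared control topic.
        ITopic<String> control = hz.getTopic("control");
        control.addMessageListener(new MessageListener<String>() {
            @Override
            public void onMessage(Message<String> message) {
                if ("STOP_LISTENERS".equals(message.getMessageObject())) {
                    // stop the local queue listeners here (app-specific)
                }
            }
        });

        // The instance that received the user's request broadcasts the command.
        control.publish("STOP_LISTENERS");
    }
}
```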

What kind of application would serve as a dedicated application server?

In a very popular ecommerce store, I'd imagine the actual processing of the credit card would be moved to some sort of dedicated application server and made into more of an asynchronous process.
What sort of Java application type would that be? I.e., a service that takes a message off the queue, processes the request, and updates some DB table once finished.
In .NET, I guess one would use a Windows service. What would you use in the Java world?
It is typically a J2EE application that uses a HTTP web service interface or a JMS messaging interface. HTTP interfaces are accessible via a URL, and JMS connects to a queue to pick up messages that are sent to it. The app can run on any one of the major commercial (WebSphere, Weblogic, Oracle) or free (Glassfish, JBoss) servers.
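As an illustration of the JMS variant, a minimal sketch of a message-driven bean; the destination name is a placeholder and the exact activation config properties vary by app server:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical MDB that picks up payment requests from a queue, processes them
// asynchronously, and would update a DB table when finished.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/PaymentRequests")
})
public class PaymentProcessorBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            String payload = ((TextMessage) message).getText();
            // charge the card via the payment gateway, then update the orders table
        } catch (Exception e) {
            // in real code, let the container redeliver or route to a dead-letter queue
        }
    }
}
```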
In Java you already have great open source projects that do all this for you, like GlassFish, Tomcat, etc.
For a mission critical system, you might want something like IBM MQ series as the middleware, and a straight Java application that uses the MQ interface to process the requests.
At a few banks that I know of, this is their architecture. Originally the application servers were written in C, as was the middleware. They were able to switch to Java because the code that was actually doing the critical work (sending and receiving messages, assuring guaranteed delivery, protecting against interruptions if a component went down) was handled by IBM MQ.
In our case we use an application server from Sybase that can house Java components. They are pretty much standard Java classes that have public methods that are exposed for calling via CORBA. Components can also be scheduled to run constantly or on a schedule (like a service) to look for work to do (via items in a database table, an Oracle AQ queue, or a JMS queue). All of this is contained in the app server and the app server provides transaction management, resource management, and database connection pooling for us.
Or use an OSGi environment.
