WildFly remote EJB calls through an outbound connection via a load balancer - java

We have some WildFly servers running in standalone mode.
Each instance provides a set of stateless services that can be accessed via remote EJB calls (http-remoting) from several web applications.
The outbound connection of each web application points to an HTTP load balancer using round robin, no stickiness. The balancer checks the availability of the service applications before connecting.
This works so far; failover does too.
The problem:
The number of standalone servers can vary. Once an outbound connection has been established from one of the web apps, it is never closed, so the same standalone server is reached over and over again until it dies.
The intention was that under heavy load we could simply start another VM running a standalone server and the load balancer would start using it as well, but this does not work, because the web apps never establish a new connection.
Question:
Is this a scenario that can work at all, and if so, is it possible to configure the web apps to open a new connection after some time, a number of requests, or similar?
I have tried disabling TCP and HTTP keep-alives in Undertow and setting a request idle time, but no success so far.
Kind regards
Marcus

There is no easy way to dynamically load balance remote EJB calls due to their binary nature. The JBoss EJB client lets you specify multiple remote connections, which are invoked in round-robin fashion, but the list is still hardcoded in your client configuration.
Example JBoss client config jboss-ejb-client.properties:
endpoint.name=client-endpoint
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
remote.connections=node1,node2
remote.connection.node1.host=192.168.1.105
remote.connection.node1.port=4447
remote.connection.node1.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connection.node1.username=appuser
remote.connection.node1.password=apppassword
remote.connection.node2.host=192.168.1.106
remote.connection.node2.port=4447
remote.connection.node2.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connection.node2.username=appuser
remote.connection.node2.password=apppassword
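With such a properties file on the client classpath, the remote proxy is then typically obtained through JNDI, roughly like this (the bean, module, and interface names below are made up for illustration):

import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteEjbClient {

    public static void main(String[] args) throws Exception {
        // Tell JNDI to use the JBoss EJB client naming extension; hosts, ports and
        // credentials come from jboss-ejb-client.properties on the classpath.
        Properties props = new Properties();
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
        Context ctx = new InitialContext(props);

        // ejb:<app-name>/<module-name>//<bean-name>!<fully-qualified-interface>
        // MyService is a hypothetical remote business interface shared with the server.
        MyService service = (MyService) ctx.lookup(
                "ejb:myapp/myejbs//MyServiceBean!com.example.MyService");
        System.out.println(service.ping());
    }
}

Which of node1/node2 actually serves a given invocation is decided by the EJB client's round-robin selection, not by anything in this code.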
I understand that your web application is also Java based. Is there any reason not to run both the EJB layer and the web layer on the same server within a single .ear deployment? That way you could use local access, or even inject the beans directly into your web controllers with @EJB, without serializing every call into binary form for remote EJB, with the benefit of much simpler configuration and better performance.
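For illustration, a rough sketch of that co-located variant, injecting a hypothetical MyService bean from the same .ear directly into a servlet (no remoting configuration at all):

import java.io.IOException;

import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/hello")
public class HelloController extends HttpServlet {

    // Local view injected by the container - no serialization, no remote connections.
    @EJB
    private MyService myService; // hypothetical stateless bean from the EJB module

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println(myService.ping());
    }
}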
If your application really is a separate deployment, then the preferred way is to expose your backend functionality via a REST API (JAX-RS). That way it is accessible over HTTP, you can call it easily from your web app, and you can load balance it just like you did with your web UI (you can keep the API's HTTP context private, visible only to services on the same network, or make it public, e.g. for mobile apps).
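A sketch of what such a JAX-RS facade over the same hypothetical bean could look like (this assumes a JAX-RS application activator, e.g. an @ApplicationPath("/api") class, is present in the deployment):

import javax.ejb.EJB;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/my-service")
public class MyServiceResource {

    @EJB
    private MyService myService; // hypothetical stateless bean

    // GET /api/my-service/ping - plain HTTP, so any HTTP load balancer
    // (round robin, health checks, dynamically added backends) works out of the box.
    @GET
    @Path("/ping")
    @Produces(MediaType.TEXT_PLAIN)
    public String ping() {
        return myService.ping();
    }
}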
Hope that helps

You should be using the standalone-ha.xml or standalone-full-ha.xml profile. You might not need the HA part to manage the state of stateful beans across your cluster, but you do need it so that the EJB client can discover the other nodes in your cluster automatically.
In effect, the load balancing is done by the EJB client, not by a separate dedicated load balancer.

Related

Interest of using the ActiveMQ resource adapter

I am creating a Java application in Eclipse to let different devices communicate with each other using a publish/subscribe protocol.
I am using JBoss and ActiveMQ, and I want to know whether I should use an ActiveMQ resource adapter to integrate the broker into JBoss in standalone mode, or whether I should just add the dependencies to my pom.xml file and use explicit Java code as described here: http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html.
Here is the documentation I found for integrating ActiveMQ with JBoss in standalone mode: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.1/html/Integrating_with_JBoss_Enterprise_Application_Platform/DeployRar-InstallRar.html
Could someone tell me what the difference between the two approaches is?
Here is the answer to my question:
The first approach starts a broker within your webapp itself. You can use a normal consumer (not a message-driven bean - MDB), but only your webapp can access it, via the VM transport (vm://).
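A minimal sketch of that first approach with the ActiveMQ 5.x client API (the queue name is a placeholder); the vm:// connection factory starts the embedded broker on first use:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class EmbeddedBrokerExample {

    public static void main(String[] args) throws Exception {
        // The vm:// transport starts an in-JVM broker on first use;
        // broker.persistent=false keeps it purely in memory.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // A normal (non-MDB) consumer on a queue only visible inside this JVM.
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("devices.events"));
        TextMessage msg = (TextMessage) consumer.receive(1000);
        System.out.println(msg != null ? msg.getText() : "no message");

        connection.close();
    }
}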
The second approach lets the app server manage both the creation of the broker and the connection to it, so it's probably also within the JVM that runs your webapp and probably only accessible to your webapp, but those details are hidden from you by the app server. You can only consume messages via an MDB, but this provides a uniform interface that doesn't need to change if you switch to another JMS provider in the future.
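For comparison, a consumer with the second approach would look roughly like this MDB sketch (the destination name and the activation properties are placeholders and depend on how the resource adapter is configured):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// The app server connects this MDB to the broker through the resource adapter;
// switching JMS providers later only requires reconfiguring the RA, not this class.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "devices.events")
})
public class DeviceEventListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}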
Since the standard way to integrate a JEE webapp with a JMS broker is via the RA, I'd recommend using that approach simply for consistency and standardization. That should also allow you to switch to a standalone ActiveMQ broker (or another JMS product) in the future with minimal effort.

JBoss failover support without load balancing

I have two JBoss instances installed on different servers. My aim is to handle failover between these JBoss instances. Load balancing is not needed; this is a customer requirement. Normally the client is connected to JBoss 1, and on failover the client should automatically connect to JBoss 2.
As I said, I want only one JBoss instance to serve requests at a time.
Actually I have a solution, but I don't know whether it works. My idea was to write a service that checks whether JBoss is running. When I detect a failure, I want to change the client's jndi.properties so that the other JBoss instance starts serving client requests. But I do not know whether this solution works. Also, does JBoss provide a service (or other property) to check whether a specific instance is running?
Do you think my solution can work? Or how else can I do this? Do I have to install Apache? Apache provides failover without load balancing.
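Regarding the "check if JBoss is running" part of the idea: one simple, hedged sketch is to probe the HTTP or remoting port of the primary instance and fall back to the second one when the probe fails (hosts and ports below are placeholders):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class JBossAvailabilityCheck {

    /** Returns true if something is listening on the given host/port. */
    public static boolean isReachable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder hosts/ports; probe the HTTP or remoting port of your instances.
        String active = isReachable("jboss1.example.com", 8080, 2000)
                ? "jboss1.example.com"
                : "jboss2.example.com";
        System.out.println("Active instance: " + active);
        // Here the client's jndi.properties / provider URL would be rewritten accordingly.
    }
}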

Using GlassFish for a socket server application

I have a Java server application which uses sockets to communicate with clients. The application needs load balancing, session sharing among instances, and database connection pooling. Currently it is a standalone application without load balancing.
Is it possible to use an application server like GlassFish to host this server application? If so, how should I do it?
I need to point out that this is NOT a web application.
My thoughts:
It is possible. Web apps can create their own threads or thread pools. Unless prevented from doing so by a security policy, they can open sockets and listen.
You won't benefit from GlassFish session management, as that is part of the HTTP stack. You'll have to have a servlet, or a servlet context listener, whose only job is to initialize your application on startup. That is a bit weird when there is no web content, but I guess it is OK.
Web apps are often deployed on machines that might only have web ports open to them in the firewall (80, 443, etc.). You could add an HTTP page to the app for administration/monitoring and make it at least partially a web app.
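A minimal sketch of that bootstrap idea: a servlet context listener that starts a plain socket listener when the web app is deployed and shuts it down on undeploy (the port and the handler logic are placeholders):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class SocketServerBootstrap implements ServletContextListener {

    private ServerSocket serverSocket;
    private ExecutorService pool;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        pool = Executors.newFixedThreadPool(10);
        try {
            serverSocket = new ServerSocket(9000); // placeholder port
        } catch (IOException e) {
            throw new IllegalStateException("Cannot open listen socket", e);
        }
        // Accept loop runs on its own thread so deployment is not blocked.
        pool.submit(() -> {
            while (!serverSocket.isClosed()) {
                try {
                    Socket client = serverSocket.accept();
                    pool.submit(() -> handle(client)); // hand off to a worker thread
                } catch (IOException e) {
                    // socket closed during shutdown
                }
            }
        });
    }

    private void handle(Socket client) {
        // application-specific protocol handling goes here
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try {
            serverSocket.close();
        } catch (IOException ignored) {
        }
        pool.shutdownNow();
    }
}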

How to call EJB running on remote machine over internet from standalone client on another machine without JSP/Servlet as intermediate?

Case:
Developing a standalone Java client which will run at different locations on multiple user desktops.
The application server (Oracle WebLogic) will be running at a centralized location somewhere else.
Now I want to access/call an EJB (session bean) running on the central server from the client.
As client and server are at different locations and not connected via an intranet or LAN, the only medium of connection is the internet.
My question is: how can I call the EJBs on the server directly from the client without using a servlet/JSP layer in between?
EJB was devised for remote access, so why a servlet dependency?
I have read that RMI-IIOP can be used to make this type of connection, but I am unable to use RMI-IIOP over the internet!
What is the best architecture/solution for this type of remote communication?
Remember that an EJB is a concise unit of business logic; EJBs are protocol agnostic. Exposing them to the caller is the job of the application server. RMI-IIOP/CORBA is just the default.
The internet routing issue with IIOP is similar to that of many protocols: it is not that they don't route over the internet, it is that they do not have an easy proxy / reverse proxy feature built in, and hence they have trouble going through a DMZ. Compare that with HTTP, which supports reverse proxying, or SMTP, which allows relaying. The firewall port is typically closed as well; you would not normally put an application server in a DMZ.
To give an external network direct access to business logic, I typically switch to a protocol designed for external communication. For example, annotate the EJBs (or use the deployment descriptor) with @WebMethod and they become available as SOAP services automatically, or use @Path (etc.) to expose them as HTTP/JSON/XML services.
CORBA-type protocols have out-of-the-box features for security and XA transactions and are very high performance. I usually use them for enterprise-level, component-oriented systems internally (each EJB component is essentially used as a microservice), while for external integration, especially where the caller does not need to know the interface contract in advance, I typically go with SOAP or HTTP/JSON/XML endpoints for the EJB.
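As a sketch of the @WebMethod route on a hypothetical bean (the @Path/JAX-RS route is analogous), annotating a stateless session bean is enough for the server to publish it as a SOAP endpoint and generate the WSDL:

import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical bean: the container publishes it as a SOAP endpoint over HTTP,
// which is easy to route through a reverse proxy in the DMZ.
@Stateless
@WebService(serviceName = "OrderService")
public class OrderServiceBean {

    @WebMethod
    public String orderStatus(long orderId) {
        // business logic would normally live here
        return "SHIPPED";
    }
}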
There is no servlet dependency. There is a custom client/protocol dependency that is app server specific. Each server has its own way of setting up the connection, manifested through configuring JNDI for the proper providers and protocol handlers.
Why won't RMI-IIOP work over the internet? The only potential issue I can see there is security; I don't know whether there's an encrypted version of RMI-IIOP, but other than that it's a perfectly routable protocol.
You may run into port and firewall issues, but that's not the protocol's fault. If you want to run RMI-IIOP over port 80 (HTTP's port), that's fine (obviously it won't be HTTP, nor work with HTTP proxies, but again, that's not the protocol's issue).
WebLogic also has (had?) its own protocol, T3 I think it was. Can you use that?
I think the key is working out why you don't think you can run RMI-IIOP "over the internet" and solving that problem, not necessarily deciding which protocol to use.
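If T3 turns out to be usable in your setup, the client side is plain JNDI with WebLogic's initial context factory; the host, port, and JNDI name below are placeholders, and a WebLogic client jar (e.g. wlthint3client.jar) has to be on the client classpath:

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.InitialContext;

public class WebLogicT3Client {

    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://appserver.example.com:7001"); // placeholder

        Context ctx = new InitialContext(env);

        // The JNDI name depends on how the bean is bound on the server;
        // this mappedName-style name is just a placeholder.
        Object proxy = ctx.lookup("MyServiceBean#com.example.MyService");
        System.out.println("Looked up remote EJB proxy: " + proxy);
    }
}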
RMI/IIOP is the default provided by the application server. By configuration/annotation, SOAP or HTTP/XML/JSON can be used instead (although these protocols come with some trade-offs regarding security, transactions, etc.).
Well, EJBs don't have any dependency on servlets at all. They can be called directly using RMI/IIOP.
The only problem you have to face is the network structure: RMI/IIOP uses ports that usually aren't open in company firewalls, and it can be quite difficult to get them opened.
So it is usually better to make an HTTP request, because almost all firewalls accept this kind of request.
So if you are in an intranet (client and server in the same intranet) you can use RMI/IIOP, but if your client and server are placed in different networks connected over the internet, then I suggest you use HTTP.
You could use web services and "export" your EJB as a web service.
If you don't want to use web services, then as a last resort you could implement a servlet that receives the HTTP request and calls the EJB. It depends on the type of object you have to return to the client.
If it is a simple string or something not too complex, you could even use a servlet, but if they are full objects, the servlet solution isn't the right choice.

How to run multiple Tomcats against the same database with load balancing

Please suggest different ways of achieving load balancing on the database when more than one Tomcat instance is accessing the same database.
Thanks.
This is a detailed example of using multiple Tomcat instances with an Apache-based load balancer in front.
Note: if you have hardware that can do the load balancing, that is, in my opinion, even preferable (place it in front instead of Apache).
In short it works like this:
A request comes from a client to the Apache web server/hardware load balancer.
The web server determines to which node it wants to redirect the request for further processing.
The web server calls Tomcat, and Tomcat gets the request.
Tomcat processes the request and sends the response back.
Regarding the database:
- Tomcat itself has nothing to do with your database; it is your application that talks to the DB, not Tomcat.
Regardless of your application layer, you can set up a cluster of database servers (for example, Google for Oracle RAC, but that's an entirely different story).
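As for the "several Tomcats, one database" part: each Tomcat instance usually just defines its own pooled DataSource pointing at the (possibly clustered) database, and the application obtains connections through JNDI. A rough sketch, with the resource name as a placeholder:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderDao {

    public int countOrders() throws Exception {
        // "jdbc/AppDS" is a placeholder; it must match the <Resource> defined in
        // each Tomcat instance's context.xml (every instance has its own pool).
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/AppDS");

        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}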
In general, when implementing load balancing at the application layer, make sure that the shared state of the application gets replicated.
The technique called "sticky sessions" partially handles the issue, but in general you should be aware of it.
Hope this helps
