Tomcat cross-service request forwarding? - java

I've had experience forwarding requests between separate webapps by updating each webapp's META-INF/context.xml to contain crossContext="true".
However, I have a situation now where I have webapps deployed within the same running tomcat but in entirely separate areas. To elaborate, in tomcat's server.xml:
app1 uses a Service named "app1Svc" with its own Connectors (so it can run on separate ports), and therefore its own Engine, Realm, and Host.
app2 has a similar setup, with a distinct Service named "app2Svc" with its own Connectors, etc.
If I run these webapps within the same Host, I can dispatch requests between the two by setting crossContext="true" in their context.xml files and obtaining the relevant ServletContext to forward the request to (as per "Tomcat not able to get ServletContext of another webapp").
However, is it possible to dispatch between two webapps that essentially have to run on separate ports (without putting httpd or some such in front of Tomcat)?

Not in a native way, which is probably good.
You can access them by making HTTP requests from one to the other. For that purpose each of them needs to expose some functionality over HTTP (perhaps RESTfully). To make the requests you can use Apache HttpComponents, or simply URL.openConnection(). You will just need to supply each application with the URL (including the port) of the other apps, so that they can make the invocations.
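For example, a minimal sketch of such a call using plain URL.openConnection() might look like this (the host, port, and /app2/api/status path are made-up placeholders for whatever endpoint the other app actually exposes):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CrossServiceCall {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint exposed by app2 on its own connector/port.
            URL url = new URL("http://localhost:8082/app2/api/status");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(5_000);

            int status = conn.getResponseCode();
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            conn.disconnect();

            System.out.println("app2 answered " + status + ": " + body);
        }
    }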

Related

How can I see what Tomcat is doing with respect to Servlets and the network side

Currently, I am learning about Java web development.
A lot of it seems to simply be configuration to me, and I feel that my understanding is superficial because I only see the configuration (e.g. define your servlets and their mappings in the web.xml file, write custom servlets by extending the HttpServlet class, instantiate Tomcat in the main method, etc.).
I want to know a bit more about what is actually going on under the hood, so I am in need of some guidance.
To this end, I have done some cursory reading on Tomcat and servlets from the following links:
What is a servlet
Difference between embedded and not embedded
Tomcat docs
So what I think I understand from this is that the servlets sit inside the Tomcat instance (a servlet container), and Tomcat handles receiving all of the client's requests and relaying them to the servlets. The servlets process the requests and produce responses, which Tomcat then sends back to the client. I suppose that in the local setup I have, my machine would be acting as both the client and the server.
Given the above I want to know:
How can I directly see and monitor the client sending the request to Tomcat, and verify that Tomcat has received it? Essentially, how can I verify that this networking side of things is handled by some implementation inside Tomcat?
How does Tomcat parse the request information and send it to the servlets?
Is Tomcat a servlet container or a web server? Are these the same thing?
In the second link regarding embedded vs. non-embedded, the answer states that an embedded server looks like a regular Java program. Does this mean that for an embedded server, the server is inside the Java application, while in the non-embedded case the web application is inside the server? Is the containment relationship reversed? What does containment even mean here?
Apologies for the numerous questions and thank you for helping to clarify.
2. How does Tomcat parse the request information and send it to the servlets?
The Servlet specification explains that in detail. The spec is surprisingly easy to read; I suggest giving it a go.
As a simplified overview…
The job of a Servlet container is to process the incoming request, which is just a bunch of text. The Servlet container pulls out the various pieces and assembles them into a request object.
Likewise, the response produced by your servlet is packaged up as a response object. The Servlet container's job is to use all the info contained in that object to create a stream of text to be sent back to the client web browser.
The whole point of Servlet containers is to relieve the servlet-writing programmers of the need to know much of the details of HTTP and how to make a server. The Servlet container does all that work. In other words, the great thing about Servlet technology is that you, the programmer, need not ask this #2 question of yours!
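To see what that hand-off looks like from the servlet's side, here is a minimal sketch of a servlet (class name and mapping are made up; it uses the jakarta.servlet packages of Tomcat 10+, while older containers use javax.servlet). The container has already parsed the raw HTTP text into the HttpServletRequest it passes in, and it turns whatever you write to the HttpServletResponse back into HTTP text:

    import java.io.IOException;
    import jakarta.servlet.http.HttpServlet;
    import jakarta.servlet.http.HttpServletRequest;
    import jakarta.servlet.http.HttpServletResponse;

    // Hypothetical servlet; mapped to a URL via web.xml or @WebServlet.
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // The container has already parsed the request line, headers,
            // and query string into this request object.
            String name = req.getParameter("name");

            // The container serializes whatever we put into the response
            // object back into an HTTP response for the client.
            resp.setContentType("text/plain");
            resp.getWriter().println("Hello, " + (name == null ? "world" : name));
        }
    }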
3. Is Tomcat a servlet container or a web server? Are these the same thing?
(a) both, (b) no.
No, servlet containers and web servers are two different kinds of software.
A web server handles:
listening for incoming connections from clients (web browsers, etc.)
sending responses back to clients
A web server handles all the network traffic.
A Servlet container provides an environment in which relatively small chunks of code (servlets) can process a request and formulate a response. The small servlet does not have to handle network traffic, launching & shutting down, security, and all the other responsibilities of a full server. That explains the "-let" in "Servlet".
The servlet you write plugs into a Servlet container. The container communicates with the web server, receiving each request passed on by the web server and handing back the response produced by your servlet. When a request arrives, the container invokes your servlet.
Your servlet remains blissfully ignorant as to what particular Servlet container implementation is running, as long as it complies with the Jakarta Servlet specification. And your servlet remains blissfully ignorant as to the existence of web servers.
Some products, such as Tomcat & Jetty, can be composed of both a web server and a Servlet container.
Tomcat is composed mainly of three components: (1) Catalina, a servlet container, (2) Coyote, a web server, and (3) Jasper, a Jakarta Server Pages processor. See Wikipedia.
For most people's needs, the Coyote web server in Tomcat is a suitable web server. So you can use Tomcat as an all-in-one application server, handling both web traffic and servlets.
[web request] ➜ [Tomcat Coyote] ➜ [Tomcat Catalina] ➜ [your servlet]
Alternatively, some folks choose to use Tomcat only as a Servlet container, sitting behind a separate web server such as Apache HTTP Server. In such a case, Tomcat’s Coyote component goes unused. Instead, the separate web server handles the client browser connections and processes incoming requests. If a request asks for a static resource, the web server serves it out, without any involvement from Tomcat. If the request asks for work that has been assigned to a servlet, then the separate web server passes the request on to Tomcat and its Catalina component. After your servlet produces a response, the response moves from Tomcat back to the external web server, which relays it onwards to the client web browser.
[web request] ➜ [Apache HTTP Server] ➜ [Tomcat Catalina] ➜ [your servlet]
4 … embedded vs. non-embedded …
Non-embedded is the classic situation, as originally envisioned when Servlet technology was first invented.
Back then, servers were few, expensive, and already in place permanently. The goal of Servlet technology was to make it easy for companies to keep those expensive servers busy by having many web applications running alongside each other.
Servlet technology allowed many different servlets to run on one machine without stepping on each other, and without the programmers of each servlet having to know anything about the other servlets being written. The Servlet container can stay up and running as servlets are deployed and undeployed.
Fast forward, and we have cloud technology where servers are many, cheap, and convenient to create and destroy on the fly. So nowadays many people want to run their web applications separately, one web app per virtual machine or virtual service. Thus the need for embedded mode. We need an application that can be launched and shut down on its own, to run one specific servlet (or multiple servlets meant to work together) without any other unrelated web apps.
One way to achieve this new goal is to package a web server and Servlet container into a standalone Java app. A system administrator can launch and quit this standalone app like any other Java app, without knowing anything about how to configure an ongoing web server and Servlet container.
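As a rough sketch of what that can look like with embedded Tomcat (assumes the tomcat-embed-core dependency on the classpath; API details differ slightly between Tomcat versions, and Tomcat 9 uses javax.* instead of jakarta.*):

    import java.io.File;
    import java.io.IOException;

    import jakarta.servlet.http.HttpServlet;
    import jakarta.servlet.http.HttpServletRequest;
    import jakarta.servlet.http.HttpServletResponse;

    import org.apache.catalina.Context;
    import org.apache.catalina.startup.Tomcat;

    public class EmbeddedServer {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setPort(8080);
            // In recent Tomcat versions the default connector is created
            // lazily; calling getConnector() makes sure one is registered.
            tomcat.getConnector();

            // An empty context rooted at "/" backed by the working directory.
            Context ctx = tomcat.addContext("", new File(".").getAbsolutePath());

            // Register a tiny servlet directly, no web.xml or WAR needed.
            Tomcat.addServlet(ctx, "hello", new HttpServlet() {
                @Override
                protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                        throws IOException {
                    resp.setContentType("text/plain");
                    resp.getWriter().println("Hello from embedded Tomcat");
                }
            });
            ctx.addServletMappingDecoded("/hello", "hello");

            tomcat.start();
            tomcat.getServer().await(); // block so the JVM keeps serving requests
        }
    }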

Wildfly remote EJB calls through outbound connection through loadbalancer

We have some Wildfly servers running in standalone mode.
Every single instance provides a bunch of stateless services that can be accessed through EJB remote calls (http-remoting) from some web applications.
The outbound connection of the web application points to an HTTP load balancer using round robin, no stickiness. This balancer checks the availability of the service applications before connecting.
This works so far; failover does too.
The problem:
The number of standalone servers can vary. Once an outbound connection is established from one of the webapps, it is never closed, so the same standalone server is always reached until it dies.
The idea that under heavy load we could simply start another VM running a standalone server, which the load balancer would then also use, does not work, because no new connection is ever established from the webapps.
Question:
Is this a scenario that could work, and if so, is it possible to configure the webapps to start a new connection after some time, request count, or the like?
I tried disabling keep-alives for TCP and in the HTTP headers in Undertow, as well as tuning the request idle time, but with no success so far.
Kind regards
Marcus
There is no easy way to dynamically load balance EJB remote calls due to their binary nature. The JBoss EJB client lets you specify multiple remote connections, which are invoked in round-robin fashion, but the list is still hardcoded in your client config.
Example JBoss client config, jboss-ejb-client.properties:
endpoint.name=client-endpoint
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
remote.connections=node1,node2
remote.connection.node1.host=192.168.1.105
remote.connection.node1.port=4447
remote.connection.node1.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connection.node1.username=appuser
remote.connection.node1.password=apppassword
remote.connection.node2.host=192.168.1.106
remote.connection.node2.port=4447
remote.connection.node2.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.connection.node2.username=appuser
remote.connection.node2.password=apppassword
I understand that your web application is also Java based. Is there any reason not to run both the EJB layer and the web layer on the same server within a single .ear deployment? That way you could use local access, or even inject @EJB beans directly into your web controllers, without the need to serialize all calls into binary form for remote EJB, with the benefit of much simpler configuration and better performance.
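For instance, if the web module and the EJB jar were packaged in the same .ear, a controller could inject the bean locally; a rough sketch, where OrderService and the servlet are made-up names:

    import java.io.IOException;

    import javax.ejb.EJB;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet in the web module of the same .ear as the EJB jar.
    @WebServlet("/orders")
    public class OrderServlet extends HttpServlet {

        // Local (in-VM) injection: no http-remoting, no serialization of the call.
        @EJB
        private OrderService orderService; // hypothetical stateless bean interface

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println(orderService.countOpenOrders());
        }
    }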
If your application really is a separate deployment, then the preferred way is to expose your backend functionality via a REST API (JAX-RS). That way it is accessible over HTTP, so you can call it from your web app and load balance it just like you did with your web UI (you can choose to keep the API's HTTP context private, visible only to services on the same network, or make it public, e.g. for mobile apps).
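A backend endpoint along those lines could be as small as this sketch (resource path and bean are again made up; the deployment also needs a javax.ws.rs.core.Application subclass annotated with @ApplicationPath to activate JAX-RS):

    import javax.ejb.EJB;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Hypothetical resource exposing the EJB layer over plain HTTP,
    // so it can sit behind the same load balancer as the web UI.
    @Path("/orders")
    public class OrderResource {

        @EJB
        private OrderService orderService; // hypothetical stateless bean interface

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public long openOrders() {
            return orderService.countOpenOrders();
        }
    }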
Hope that helps
You should be using the standalone-ha.xml or standalone-full-ha.xml profile. While you might not need the HA part to manage the state of stateful beans across your cluster, you need it for the EJB client to discover the other nodes in your cluster automatically.
In effect, the load balancing is done by the EJB client, not by a separate dedicated load balancer.

How does a web container handle an incoming request and map it to a deployed web application WAR

I want to understand how the web container maps incoming requests to a particular web application (and to a servlet within it afterwards).
To begin with, I believe a web container must be able to listen for incoming HTTP requests (else how would the client reach the web application at all?). I believe this assumption is correct. If it is not, then how does a request ever reach the web container?
Now, assume I wrote a web application (based on plain servlets, i.e. not using any framework like Spring MVC), created the .war file, say firstwebapp.war, and deployed it in Apache Tomcat with the context root /firstapp.
Now, the client makes request to the deployed web-application as:
http://servername:port/firstapp
How does the web container handle this request? Where does the mapping of /firstapp to the web application deployed as firstwebapp.war live?
Does the web container first "see" the incoming request URL before passing control to the respective web application? And based on what criteria is it able to map the request to the proper .war?
Yes, the server sees /firstapp first and knows where to route it. After that it depends on your WAR, e.g. its web.xml.
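For example, inside firstwebapp.war the path-to-servlet mapping can be declared in WEB-INF/web.xml or, since Servlet 3.0, with an annotation; a minimal annotation-based sketch (class name and the /hello path are made up; Tomcat 10+ uses jakarta.servlet, older versions javax.servlet):

    import java.io.IOException;

    import jakarta.servlet.annotation.WebServlet;
    import jakarta.servlet.http.HttpServlet;
    import jakarta.servlet.http.HttpServletRequest;
    import jakarta.servlet.http.HttpServletResponse;

    // Tomcat routes http://servername:port/firstapp/hello to this class:
    // "/firstapp" selects the deployed WAR (the context), and "/hello" is
    // matched against the servlet mappings declared inside that WAR.
    @WebServlet("/hello")
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println("Handled inside firstwebapp.war");
        }
    }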

How do Apache httpd and Tomcat work together?

I am inheriting a project involving a Java web app whose backend is powered by an Apache httpd/Tomcat combo. The web server is being used to serve back JS, static content, and to perform general load balancing, and Tomcat is serving back JSPs via a single WAR file.
I will be receiving access to the code base later on today or tomorrow, but wanted to try and do some research ahead of time.
My question can be summed up as: how do these two work together?
Who first receives HTTP requests?
How does httpd know when to forward JSP requests on to Tomcat, or to just respond to a request itself?
How does httpd "pass" the request to, and "receive" the response from, Tomcat? Does it just "copy-n-paste" the request/response to a port Tomcat is listening on? Is there some sort of OS-level interprocess communication going on? Etc.
These are just general questions about how the technologies collaborate with each other. Thanks in advance!
Who first receives HTTP requests?
Apache, almost certainly. There could be admin processes that talk directly to Tomcat, though.
How does httpd know when to forward JSP requests on to Tomcat, or to just respond to a request itself?
From its configuration. The specifics will vary. It might, for instance, be using mod_jk or mod_jk2, in which case you'll find JkMount directives in the config files, e.g.:
JkMount /*.jsp ajp13_worker
...which tells it to pass on requests at the root of the site for files matching *.jsp to the ajp13_worker, which is defined in the workers.properties file.
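The matching entry in workers.properties would look something like this (host and port are assumptions; 8009 is Tomcat's conventional AJP port):

    worker.list=ajp13_worker

    # Hypothetical worker speaking AJP 1.3 to Tomcat's AJP connector.
    worker.ajp13_worker.type=ajp13
    worker.ajp13_worker.host=localhost
    worker.ajp13_worker.port=8009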
Or it could be set up in a simple HTTP reverse-proxy arrangement. Or something else.
How does httpd "pass" the request to, and "receive" the response from, Tomcat?
It depends on the configuration; it could be HTTP, it could be AJP, or it could be using some other module.
Does it just "copy-n-paste" the request/response to a port Tomcat is listening on?
Sort of. :-) See the reverse-proxy link above.
Is there some sort of OS-level interprocess communication going on?
Yes. AFAIK, it's all socket-based (rather than, say, shared memory stuff), which means (amongst other things) that Tomcat and Apache need not be running on the same machine.

How can I run a servlet program?

How can I run a servlet program? How can I set the classpath?
Please read this http://www.jsptube.com/servlet-tutorials/simple-servlet-example.html for your first steps.
Well, you generally run it in a servlet container such as WebSphere Application Server or Tomcat. And the way you configure the classpath depends on the servlet container you choose.
Just to repeat the basics: a servlet is a piece of Java code that runs inside a servlet container (i.e. a specialized web server). It listens to client requests (typically issued through a web browser) and answers them with a response (e.g. an HTML page).
"Running" a servlet can thus mean two different things:
deploying the servlet to a servlet container (and thereby making it listen to requests), or
actually making it process a client request
(1) is achieved by packaging the servlet code (plus third-party libraries) into a WAR file and deploying this web application archive to the server; details may vary depending on the type of servlet container. (2) is triggered by issuing an HTTP request, e.g. by typing the servlet URL (something like http://localhost:8080/servletexample) into the location bar of your browser.
