Can anyone guide me in working with the X-FORWARDED-PROTO header in a Java web application deployed to Apache Tomcat?
The application is set up so that Tomcat talks to an Apache web server, which in turn talks to a Cisco load balancer, and the balancer finally serves the pages to the client (tomcat -> apache2 -> load balancer -> client).
The SSL certificate is installed on the load balancer, which handles the HTTPS requests. My requirement is for the application to use X-FORWARDED-PROTO and serve pages over HTTP or HTTPS accordingly.
Checking the headers of my web pages I could not find the X-FORWARDED-PROTO header. I don't have access to the load balancer configuration either, and the IT team has suggested that we use X-FORWARDED-PROTO to differentiate between HTTP and HTTPS requests.
Is there any configuration to be done at the Tomcat or Apache level so that the X-FORWARDED-PROTO header is passed through, or should this be handled on the load balancer?
I am pretty sure you have it all figured out by now, but I will add the answer nonetheless.
You can use the class org.apache.catalina.valves.RemoteIpValve inside the <Engine> element in Tomcat's conf/server.xml:
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       internalProxies="192.168.1.XXX"
       remoteIpHeader="x-forwarded-for"
       remoteIpProxiesHeader="x-forwarded-by"
       protocolHeader="x-forwarded-proto" />
One very important thing to note is to set the internalProxies value. If it is not set and you are using a non-standard network setup, Tomcat may not check the x-forwarded headers at all and will default to "http". For security reasons I'd recommend setting it even if it works with the defaults.
Look here for more information.
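With the valve in place, the standard Servlet API should then reflect the forwarded protocol, so application code can stay proxy-agnostic. A minimal sketch (an illustrative servlet, assuming javax.servlet on Tomcat 8/9; the class name is made up):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// With RemoteIpValve configured, getScheme() and isSecure() are derived
// from x-forwarded-proto instead of the local (plain HTTP) socket.
public class SchemeCheckServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String scheme = req.getScheme();  // "https" when the balancer terminated TLS
        boolean secure = req.isSecure();  // true for forwarded HTTPS requests
        resp.getWriter().printf("scheme=%s secure=%s%n", scheme, secure);
    }
}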
Add this to the Apache vhost that manages the connections:
<VirtualHost *:80>
...
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule !/status https://%{SERVER_NAME}%{REQUEST_URI} [L,R]
</VirtualHost>
This assumes your health check endpoint is /status, which doesn't require HTTPS.
Here is the basic architecture I currently use to deliver access to a web application (AngularJS on the front end, JEE/JAX-RS for the back-end REST API):
Client -> Apache -> Application server (Java container - ex. tomcat)
The client browser connects to the application through HTTPS (handled by Apache) and Apache forwards the connection to the Java container (I'm using Oracle WebLogic).
Everything works fine. But now I'd like to use HTTP/2.
Apparently, HTTP/2 will be available only in Java EE 8 (Servlet 4.0), which means it will not be available in solutions like WebLogic for a long time.
I actually have two questions:
Can I just activate Apache mod_http2 and configure my front end (AngularJS) to communicate over HTTP/2, or does my application server also need to be able to handle HTTP/2?
If Apache receives a connection over HTTP/2 and forwards it to the Java container over HTTP/1.1 or AJP, will I still benefit from all the HTTP/2 advantages, even though part of the connection is not HTTP/2?
Apache (and Nginx) do not currently have the capability to work in reverse-proxy mode and communicate with the backend using HTTP/2.
When you have such "mixed" communication (browser to Apache over HTTP/2, and Apache to backend over HTTP/1.1 or AJP) you lose a number of optimizations that HTTP/2 brings, in particular multiplexing and HTTP/2 push, not to mention the overhead of translating the request from HTTP/2 to HTTP/1.1 and vice versa.
HTTP/2 is already available in the Java world: Jetty (I am the Jetty HTTP/2 lead), Undertow and Netty already provide transparent HTTP/2 support, so you just deploy your JEE application, enable HTTP/2, and it's done.
Because of these limitations of Apache and Nginx, we currently recommend using HAProxy in front of Jetty (as explained in detail here).
This configuration will give you the maximum benefit for HTTP/2: fast TLS offloading performed by HAProxy, powerful load balancing, very efficient communication with the backend (no translation to HTTP/1.1), with HTTP/2 everywhere and therefore all its benefits.
Jetty also offers an automatic HTTP/2 push mechanism, which is not available, to my knowledge, in Apache or Nginx.
Specifically for your questions:
You can activate mod_http2 so that the browser and Apache communicate via HTTP/2, but you may lose HTTP/2 push. Communication with the backend will still use HTTP/1.1, however. This will work, but it's not an optimal HTTP/2 deployment.
You will not benefit from any HTTP/2 advantage in the communication between the client and the backend if part of the communication is not HTTP/2.
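To illustrate how little is needed on the Jetty side, here is a minimal embedded-Jetty sketch that enables cleartext HTTP/2 (h2c) alongside HTTP/1.1, assuming the jetty-server and http2-server artifacts are on the classpath and TLS is offloaded in front (e.g. by HAProxy); the class name and port are illustrative:
import org.eclipse.jetty.http2.server.HTTP2CServerConnectionFactory;
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class H2cServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server();
        HttpConfiguration config = new HttpConfiguration();
        // One connector speaking both HTTP/1.1 and cleartext HTTP/2 (h2c),
        // suitable behind a TLS-offloading proxy such as HAProxy.
        ServerConnector connector = new ServerConnector(server,
                new HttpConnectionFactory(config),
                new HTTP2CServerConnectionFactory(config));
        connector.setPort(8080);
        server.addConnector(connector);
        // Deploy your web application / set a handler here.
        server.start();
        server.join();
    }
}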
Yes, you can activate mod_http2 in the httpd.conf file in the Apache24/conf folder. You also need to enable the following modules:
1. mod_log_config
2. mod_setenvif
3. mod_ssl
4. mod_socache_shmcb
You have to include the httpd-ssl.conf file in your httpd.conf by uncommenting the line Include conf/extra/httpd-ssl.conf.
Place the certificate and key in the conf folder and set their paths in the httpd-ssl.conf file.
The above steps will enable HTTP/2 in Apache 2.4.
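For reference, the relevant httpd.conf lines might look something like the sketch below (module paths are the stock ones relative to ServerRoot; the Protocols directive is what actually advertises h2):
LoadModule log_config_module modules/mod_log_config.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
LoadModule http2_module modules/mod_http2.so

Include conf/extra/httpd-ssl.conf
# Advertise HTTP/2 over TLS, falling back to HTTP/1.1
Protocols h2 http/1.1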
You can enable HTTP/2 for your Java application hosted on Tomcat by installing Tomcat 9. Tomcat 9 supports HTTP/2 and server push.
You can forward your requests from Apache 2.4 to Tomcat 9 using the instructions in the link below:
https://www3.ntu.edu.sg/home/ehchua/programming/howto/ApachePlusTomcat_HowTo.html
Using these steps you can enable HTTP/2 between the client browser, Apache and your Java application, and you will get the full benefits of HTTP/2.
I have already implemented all of the above steps in my project and am seeing the performance benefits.
If you have any doubts you can leave your comments here.
HTTP/2 is also available in Tomcat 8.5.
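In Tomcat 8.5/9 it is enabled per connector in conf/server.xml by nesting an UpgradeProtocol element inside an SSL connector; a sketch (port and certificate file names are placeholders):
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true">
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeyFile="conf/localhost-key.pem"
                     certificateFile="conf/localhost-cert.pem"
                     type="RSA" />
    </SSLHostConfig>
</Connector>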
I have built a web application using BIRT (BIRT runtime 4.2) reports. All the reports are accessible properly on my local machine as well as through my IP. After creating the .war file I deployed it on my production server (Tomcat 8), but the bar chart is not visible; it shows a cross mark (screenshot) instead of the chart, whereas if I click on export as PDF the chart appears in the PDF document. Please help me out.
NOTE: The reports are working properly in the application as well as in PDF.
Thanks in advance.
Based on the comments, the easiest setup I could imagine in order to avoid issues (including mixed-content issues):
Set up Tomcat to serve HTTP requests on port 8080 (context /myApp, do not add any HTTPS setup in Tomcat).
Set BIRT's baseURL to https://apacheServerHost/myApp
Add the following to your httpd.conf Apache config:
ProxyPass /myApp http://tomcatServerIp:8080/myApp
ProxyPassReverse /myApp http://tomcatServerIp:8080/myApp
Add the following to your httpd-ssl.conf Apache config:
ProxyPass /myApp http://tomcatServerIp:8080/myApp
ProxyPassReverse /myApp http://tomcatServerIp:8080/myApp
Why this?
The browser knows nothing about the Tomcat server; it will only 'see' your Apache server.
When you connect to the BIRT viewer over HTTP using the apacheServer host URL, all images will be loaded over HTTPS (using the baseURL path); loading HTTPS content from an HTTP base page is OK.
When you connect to the BIRT viewer over HTTPS using the apacheServer host URL, all images will again be loaded over HTTPS (using the baseURL path), which is OK (no mixed content).
I have this complex Java application which is hosted behind a reverse proxy.
What is the best practice for determining the user-facing URL at the Java application level when calling request.getServerName(), request.getServerPort() and friends?
We are using Tomcat (but we might switch to an embedded Jetty) behind Apache mod_proxy (but we'll definitely switch to Amazon Elastic Load Balancer).
I have listed 4 solutions:
Use Apache mod_proxy to rewrite the 303 redirects. This is part of our current solution but is ruled out because it is not available with Elastic Load Balancer.
Let the application server read the Host HTTP header of the request
Hardcode the application location at the application server level (example config in Tomcat)
Stop using the standard ServletRequest API. Instead, keep the fully qualified name of the server in a config file and read this config from our code.
Our current solution:
redirects are rewritten by mod_proxy (first approach)
some other parts of the application use a path that we set in a config file (last approach)
I definitely need to stop using approach 1 and I would like to settle on one of the other three propositions.
EDIT:
This can be summarized as:
Can I trust request.getServerName()?
If so, can I trust the Host HTTP header?
You can trust the Host header that mod_proxy passes on to Tomcat if you configure Apache to preserve the Host from the original request, i.e. by using the directive:
ProxyPreserveHost On
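With that directive in place, the standard API on the Tomcat side reflects the user-facing host, so self-referencing URLs stay correct. A minimal illustration (the /login path is just an example; note that getScheme() additionally needs X-Forwarded-Proto handling, such as RemoteIpValve, when TLS terminates at the proxy):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// With ProxyPreserveHost On, getServerName()/getServerPort() come from the
// Host header the browser sent, not from the backend's own address.
public class RedirectServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        int port = req.getServerPort();
        String portPart = (port == 80 || port == 443) ? "" : ":" + port;
        String target = req.getScheme() + "://" + req.getServerName()
                + portPart + req.getContextPath() + "/login"; // example path
        resp.sendRedirect(target);
    }
}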
I have a Java web service running on Tomcat in our internal environment. Let's say the WSDL is
http://actual:8080/app/temp?wsdl
To provide access to this webservice from outside the network, we created a proxy using Apache on another server and used ProxyPass to do something like
ProxyPass /app/temp http://actual:8080/app/temp
So externally, when we access proxy/app/temp over HTTP, it gets diverted to actual:8080/app/temp just fine. No issues with that, and I can also access the WSDL.
But the WSDL references the "actual" server (and its port) as the web service location. This causes failures when a client actually calls the web service methods.
Any ideas on how this can be fixed, please? Thanks.
Note: The client is generated using Metro. I found a way to force a different endpoint in the client using code like the below, but I am looking for a pure proxy-side solution, instead of developers using our web service having to touch their code.
((BindingProvider)port).getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, "http://proxy/app/temp?wsdl");
You can use the ProxyPreserveHost directive. Quoting from the directive's section in the link:
When enabled, this option will pass the Host: line from the incoming
request to the proxied host, instead of the hostname specified in the
ProxyPass line
Therefore you should have the following in your configuration file:
ProxyPreserveHost On
ProxyPass /app/temp http://actual:8080/app/temp
and then restart the Apache server.
Using this option, you will not need to change anything in the web service code or setup.
I have added the transport-guarantee tag in web.xml, meaning that certain pages can only be accessed over HTTPS. However, this causes an issue in an environment that has a web server and load balancer.
Apparently it does not redirect to the application on the SSL port.
Seems like a firewall restriction.
Any advice, anyone?
Which container are you using? I believe some containers do allow you to specify the "front end" (i.e. web server, load balancer, etc.) SSL port in configuration.
I've done this for WebLogic, but I'm not sure if this requirement is explicitly spelled out in the Java EE specs, or whether all containers support it.
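For Tomcat, for example, the HTTP connector can be told about the front-end host and SSL port so that transport-guarantee redirects are built against the load balancer rather than the local port; a sketch for conf/server.xml (host name and ports are illustrative):
<!-- redirectPort is where CONFIDENTIAL requests get redirected;
     proxyName/proxyPort make Tomcat report the front-end host and port. -->
<Connector port="8080" protocol="HTTP/1.1"
           proxyName="www.example.com" proxyPort="80"
           redirectPort="443" />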