So I am using HAProxy in front of Jetty servlets.
The goal at the moment is just a proof of concept, plus load and stress testing once everything is configured.
However, I have a problem configuring HAProxy. I know it's not a problem with my application, because I have nginx (Tengine) running and everything works properly there. So it has to be either something with the HAProxy configuration, or the way HAProxy works is just not suitable for my needs.
What my client tries to do is connect to HAProxy using two different connections and keep them open:
Connect with a chunked streaming mode for upload.
Connect with a normal mode and establish a download channel.
Here's what my haproxy.conf file looks like:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    # ca-base /etc/ssl/certs
    # crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
    maxconn 2048

defaults
    log global
    mode http
    option forwardfor
    option http-server-close
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics
    stats auth user:password

frontend www-http
    bind *:80
    reqadd X-Forwarded-Proto:\ http
    default_backend www-backend

frontend www-https
    bind *:443 ssl crt /etc/haproxy/server.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend www-backend

backend www-backend
    redirect scheme https if !{ ssl_fc }
    server www-1 localhost:8080 check maxconn 2048
And here's what my logs say when I try to access port 443:
Sep 17 11:10:18 xxxxx-pc haproxy[15993]: 127.0.0.1:32875 [17/Sep/2014:11:10:18.464] www-https~ www-backend/www-1 0/0/0/-1/1 502 212 - - PH-- 0/0/0/0/0 0/0 "GET /test HTTP/1.1"
Any ideas what the problem might be?
Is it an issue with the configuration, or something else?
Thanks.
PH means that haproxy rejected the header from the backend because it was malformed.
http://www.haproxy.org/download/1.4/doc/configuration.txt
PH - The proxy blocked the server's response, because it was invalid,
incomplete, dangerous (cache control), or matched a security filter.
In any case, an HTTP 502 error is sent to the client. One possible
cause for this error is an invalid syntax in an HTTP header name
containing unauthorized characters. It is also possible but quite
rare, that the proxy blocked a chunked-encoding request from the
client due to an invalid syntax, before the server responded. In this
case, an HTTP 400 error is sent to the client and reported in the
logs.
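If you want to see exactly which response HAProxy considered malformed, the admin socket already declared in the global section above can be queried with "show errors"; it dumps the last captured invalid request or response. A sketch, assuming socat is installed and using the socket path from that config:

# Print the last invalid backend response HAProxy captured (hypothetical check)
echo "show errors" | socat stdio unix-connect:/run/haproxy/admin.sock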
Related
While doing integration tests of our system, I have a problem where using a JSF application via https always returns a 400 (forbidden) when coming back from a redirect after a POST, because the redirect is converted to http. This only happens if the request goes through a tunneled connection via a jumphost.
So here's the setup:
The server runs WildFly with the JSF application, plus an nginx that handles incoming https requests on 443 and forwards them to the WildFly port
The server is in a closed network that is accessible via a jumphost, which in turn can be accessed from my machine
An SSH connection establishes a tunnel between a local port on my machine and port 443 on the server
The browser requests https://localhost:myport
Now, whenever I post something, e.g. the login, the redirected answer comes back with the http scheme and thus gives me a 400. If I manually add https in the browser, the request gets answered correctly.
A curl of the same URL gives me this:
curl -i -k https://localhost:8426
HTTP/1.1 302 Found
Server: nginx
Date: Fri, 23 Oct 2020 15:03:52 GMT
Content-Length: 0
Connection: keep-alive
Set-Cookie: INCENTCONTROL_JSESSIONID=[....]; path=/
Location: http://localhost:8426/login.xhtml
If I do the very same directly from a machine within the network of the server, everything is fine.
What does the tunneling have to do with the problem, and does anyone have an idea how to overcome it?
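One possible explanation, offered only as a guess: since nginx terminates TLS and proxies plain http to WildFly, the application may build its redirect from the backend scheme unless the original scheme is forwarded. A sketch of the usual nginx directives for that (the location block and backend port are assumptions, not taken from the question):

location / {
    proxy_pass http://127.0.0.1:8080;
    # Forward the original scheme so the backend builds https redirects
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
}

On the WildFly side, the Undertow listener usually also needs proxy-address-forwarding="true" for these headers to be honoured.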
Consider this HAProxy configuration here:
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy

defaults
    timeout connect 10s
    timeout client 50s
    timeout server 50s

frontend fe_https_tomcat
    mode tcp
    bind *:443 ssl crt /path/cert.pem alpn h2,http/1.1
    default_backend be_tomcat

backend be_tomcat
    mode tcp
    server localhost localhost:8081 check
The issue I have is that WebSockets do not seem to get through. My guess was that in tcp mode everything would pass through. Looks like it doesn't ... :-)
The server responds with an error 403 when the WebSocket connection is getting established.
Note that with the following http-mode setup, the WebSocket just works:
frontend fe_http_8080
    mode http
    bind *:8080
    default_backend be_tomcat_8080

backend be_tomcat_8080
    mode http
    server localhost localhost:8081 check
Note that I need tcp-mode to have http/2 working.
In the end, the issue was not related to HAProxy, but to the WebSocket setup in Spring.
This fixed it:
-registry.addHandler(webSocketHandler, "/ws");
+registry.addHandler(webSocketHandler, "/ws").setAllowedOrigins("*");
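For context, a minimal sketch of the Spring configuration that diff applies to (class name and handler wiring are assumptions, not from the original post):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.WebSocketHandler;
import org.springframework.web.socket.config.annotation.EnableWebSocket;
import org.springframework.web.socket.config.annotation.WebSocketConfigurer;
import org.springframework.web.socket.config.annotation.WebSocketHandlerRegistry;

@Configuration
@EnableWebSocket
public class WebSocketConfig implements WebSocketConfigurer {

    private final WebSocketHandler webSocketHandler;

    public WebSocketConfig(WebSocketHandler webSocketHandler) {
        this.webSocketHandler = webSocketHandler;
    }

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        // Without setAllowedOrigins, Spring only accepts same-origin handshakes,
        // which shows up as a 403 when the handshake arrives through the proxy
        registry.addHandler(webSocketHandler, "/ws").setAllowedOrigins("*");
    }
}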
We have a Java web server which is able to serve content over h2c (HTTP/2 clear text).
We would like to reverse proxy connections established using h2 (i.e. standard HTTP/2 over SSL) to the java server in h2c.
Enabling HTTP/2 on nginx is simple enough and handling incoming h2 connections works fine.
How do we tell nginx to proxy the connection using h2c rather than http/1.1?
Note: a non-nginx solution may be acceptable
server {
    listen 443 ssl http2 default_server;
    server_name localhost;

    ssl_certificate /opt/nginx/certificates/???.pem;
    ssl_certificate_key /opt/nginx/certificates/???.pk8.key.pem;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://localhost:8080/; ## <---- h2c here rather than http/1.1
    }
}
CONCLUSION (June 2016)
This can be done with haproxy using a configuration file as simple as the one below.
Querying (HttpServletRequest) req.getProtocol() clearly returns HTTP/2.0
global
    tune.ssl.default-dh-param 1024

defaults
    timeout connect 10000ms
    timeout client 60000ms
    timeout server 60000ms

frontend fe_http
    mode http
    bind *:80
    # Redirect to https
    redirect scheme https code 301

frontend fe_https
    mode tcp
    bind *:443 ssl no-sslv3 crt mydomain.pem ciphers TLSv1.2 alpn h2,http/1.1
    default_backend be_http

backend be_http
    mode tcp
    server domain 127.0.0.1:8080
HAProxy does support that.
HAProxy can offload TLS and forward to a backend that speaks h2c.
Details on how to set up this configuration are available in this blog post.
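For reference, newer HAProxy releases (1.9 and later; this is an assumption beyond the June 2016 conclusion above) can do the same thing in HTTP mode, which keeps layer-7 features such as header manipulation and logging available. A minimal sketch, not taken from the blog post:

frontend fe_https
    mode http
    bind *:443 ssl crt mydomain.pem alpn h2,http/1.1
    default_backend be_h2c

backend be_h2c
    mode http
    # "proto h2" makes HAProxy speak cleartext HTTP/2 (h2c) to the backend
    server app 127.0.0.1:8080 proto h2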
I am attempting to use the Apache Commons Net library to connect to an FTP server where the initial connection (and the file listings) is plain text, but the authorization and data transfer are SSL. I've verified using CoreFTP that this is the actual behavior of the server. How can I accomplish this with the Apache Commons library?
If I use a plain FTPClient I can get a connection but then I get this message: 503 USER: Server policy requires that all clients be secured.
If I try an FTPSClient this way
FTPSClient l_ftp = new FTPSClient("SSL", true);
l_ftp.setAuthValue("SSL");
l_ftp.connect(l_host, l_port);
I get this error: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
which makes a bit of sense, as the server is expecting a plain text connection and the client is attempting SSL.
If I try this
FTPSClient l_ftp = new FTPSClient("SSL", false);
l_ftp.setAuthValue("SSL");
l_ftp.connect(l_host, l_port);
I get this:
javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
Caused by: java.io.EOFException: SSL peer shut down incorrectly
which I think means about the same thing: the server expecting plain text and the client expecting SSL.
Is this even possible with the Apache Commons library?
Here is the CoreFTP Log
Welcome to Core FTP, release ver 2.2, build 1857 (x64) -- © 2003-2014
WinSock 2.0
Mem -- 2,096,632 KB, Virt -- 8,589,934,464 KB
Started on Monday October 26, 2015 at 14:18:PM
Resolving nnnnnnn.nnnnn.com...
Connect socket #900 to 222.222.222.222, port 21...
220 CONNECT:Enterprise Gateway 2.0.02. S48 FTP Server ready... 15:18:25 10-26-2015
AUTH SSL
234 AUTH: command accepted. Securing command channel ...
TLSv1, cipher TLSv1/SSLv3 (RC4-MD5) - 128 bit
USER omitted
331 Password required for omitted.
PASS **********
230 User omitted logged in. Session Id: 25846.
PBSZ 0
200 PBSZ command accepted.
PROT C
534 PROT Request denied for policy reasons.
PROT cmd failed...
CCC
200 CCC command channel is no longer secured.
SYST
502 Command not implemented.
Keep alive off...
PWD
257 "omitted" is the current working Mailbox ID.
PASV
227 PASV Entering passive mode (209,95,224,76,121,95).
LIST
Connect socket #940 to 209.95.224.76, port 31071...
150 Opening data connection.
226 Transfer complete. 0 Bytes sent.
Transferred 0 bytes in 0.008 seconds
This turned out to be some kind of library conflict or other issue in the application server I was running in. When I pulled my test code out into a standalone project, it worked fine. For posterity's sake, here is the working code.
FTPSClient l_ftp = new FTPSClient("SSL", false); // explicit mode: connect plain, secure later via AUTH SSL
l_ftp.addProtocolCommandListener(new PrintCommandListener(new PrintWriter(System.out)));
l_ftp.setAuthValue("SSL");
l_ftp.connect(l_host, l_port);
if (!l_ftp.login(l_username, l_password)) {
    // BAD!
}
l_ftp.execPBSZ(0L); // PBSZ 0, as in the CoreFTP log
l_ftp.execCCC();    // CCC: stop protecting the command channel again
l_ftp.pwd();
// DO STUFF
l_ftp.logout();
l_ftp.disconnect();
We are building a mass mail sending application in Java. Mail is sent via a third-party SMTP server. After sending 400-500 mails, the tomcat6 service stops. Below is the error.
Proxy Error
The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET
/lin/Campaignn.jsp.
Reason: Error reading from remote server
Additionally, a 502 Bad Gateway error was encountered while trying to use an ErrorDocument to handle the request.
Apache Server at msizzler.com Port 80
But when sending from localhost I did not receive any error; it sent all the mails.
Please help me sort out this problem.
The HTTP 502 "Bad Gateway" response is generated when Apache web server does not receive a valid HTTP response from the upstream server, which in this case is your Tomcat web application.
Some reasons why this might happen:
Tomcat may have crashed
The web application did not respond in time and the request from Apache timed out
The Tomcat threads are timing out
A network device is blocking the request, perhaps as some sort of connection timeout or DoS attack prevention system
If the problem is related to timeout settings, you may be able to resolve it by investigating the following:
ProxyTimeout directive of Apache's mod_proxy
Connector config of Apache Tomcat
Your network device's manual
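As an illustration of the second point above (the Tomcat Connector config), these are the kinds of server.xml settings worth reviewing; the values below are purely illustrative assumptions, not recommendations:

<!-- Illustrative Tomcat HTTP connector: the timeout and thread pool size
     are the usual suspects when the proxy reports an upstream error -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="200"
           redirectPort="8443" />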
Add this into your httpd.conf file
Timeout 2400
ProxyTimeout 2400
ProxyBadHeader Ignore
The Java application takes too long to respond (maybe due to start-up / the JVM being cold), thus you get the proxy error.
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /lin/Campaignn.jsp.
As Albert Maclang said, amending the HTTP timeout configuration may fix the issue.
I suspect the Java application throws a 500+ error, hence the Apache gateway error too. You should look in the logs.
I had this issue once. It turned out to be a database query issue. After re-creating the tables and indexes, it was fixed.
Although it says proxy error, when you look at the server log, it shows a query execution timeout. This is what I had and how I solved it.
I had this problem too.
I was using Apache as a reverse proxy for Tomcat, and my problem was that the backend's responses took longer than the Apache proxy allowed.
I solved it like this:
Open /etc/apache2/apache2.conf and the SSL site config file /etc/apache2/sites-available/000-default-le-ssl.conf and add the following lines:
Timeout 28800
KeepAlive On
maybe this will help you