HTTPS redirects get lost through tunneling - java

I'm running into a problem while doing integration tests of our system: using a JSF application via HTTPS always returns a 400 (Bad Request) when coming back from a redirect after a POST, because the redirect converts the scheme to HTTP. This only happens if the request goes through a tunneled connection via a jumphost.
So here's the setup:
The server runs WildFly with a JSF application, plus an nginx that accepts incoming HTTPS requests on port 443 and forwards them to the WildFly port
The server sits in a closed network that is accessible via a jumphost, which in turn can be reached from my machine
An SSH connection establishes a tunnel between a local port on my machine and port 443 on the server
The browser requests https://localhost:myport
Now, whenever I POST something, e.g. the login, the redirect response comes back with the http scheme and thus gives me a 400. If I manually change the scheme back to https in the browser, the request gets answered correctly.
A curl of the same URL gives me this:
curl -i -k https://localhost:8426
HTTP/1.1 302 Found
Server: nginx
Date: Fri, 23 Oct 2020 15:03:52 GMT
Content-Length: 0
Connection: keep-alive
Set-Cookie: INCENTCONTROL_JSESSIONID=[....]; path=/
Location: http://localhost:8426/login.xhtml
If I do the very same directly from a machine within the network of the server, everything is fine.
What does the tunneling have to do with the problem, and does anyone have an idea how to overcome it?
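Not from the thread, but a sketch of the usual fix for this class of problem, assuming a plain proxy_pass setup (the server name and WildFly port are placeholders): WildFly builds the absolute Location URL from the scheme of its own listener, which is plain HTTP behind nginx's TLS termination, so the original scheme has to be forwarded explicitly.

```nginx
server {
    listen 443 ssl;
    server_name app.internal;                        # placeholder

    location / {
        proxy_pass http://127.0.0.1:8080;            # WildFly HTTP port (placeholder)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;  # preserve https for redirects
    }
}
```

Undertow only honors the X-Forwarded-* headers when proxy-address-forwarding is enabled on the listener, e.g. via the CLI: /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding,value=true).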

Related

Java HttpClient doesn't keep TCP connection alive with HTTP/1.1 version

I configured java.net.http.HttpClient as shown below:
HttpClient client = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build();
Also, I have a simple Spring Boot (Tomcat) HTTP server running on port 8080. For each request, I check the incoming headers in a controller and count the TCP connections using the following command: lsof -i -P | grep "TCP" | grep "8080".
When I make GET requests from the client, exactly one TCP connection is created per request. The incoming headers don't carry any keep-alive information.
When I try to set the keep-alive header directly, I get an exception:
HttpRequest req = HttpRequest.newBuilder()
        .setHeader("Connection", "Keep-Alive")
        .uri(uri)
        .build();
When I make a GET request from a browser (Safari), the browser adds keep-alive headers to each request and only one TCP connection is created for multiple requests (as expected).
When I set the version to HTTP/2 and make the requests from the client, only one TCP connection is created for all requests (as expected):
HttpClient client = HttpClient.newBuilder().version(HttpClient.Version.HTTP_2).build();
As described here, both HTTP/1.1 and HTTP/2 support keep-alive, and it is enabled by default; but as you can see from the examples above, it doesn't work for HTTP/1.1 in my case.
Does anyone know how to configure HttpClient properly? Or maybe, I'm doing something wrong?
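For what it's worth, a minimal sketch of how connection reuse is supposed to work with the JDK client (the endpoint URI is a placeholder and no request is actually sent): HTTP/1.1 keep-alive needs no header at all, but pooling only happens when the same HttpClient instance serves every request.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class KeepAliveSketch {
    public static void main(String[] args) {
        // Keep-alive is implicit in HTTP/1.1: the JDK client neither sends a
        // "Connection: keep-alive" header nor needs one. Connections are only
        // pooled within one client, so build the client once and reuse it.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();

        // "Connection" is a restricted header managed by the client itself;
        // setting it by hand is rejected with an IllegalArgumentException.
        try {
            HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/")) // placeholder
                    .setHeader("Connection", "Keep-Alive")
                    .build();
        } catch (IllegalArgumentException expected) {
            // e.g. restricted header name: "Connection"
        }

        System.out.println(client.version()); // HTTP_1_1
    }
}
```

When counting connections with lsof, note that sockets in TIME_WAIT still show up, which can make one reused connection look like several; the idle pool lifetime can also be tuned with -Djdk.httpclient.keepalive.timeout=<seconds>.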

Spring Boot REST app returns 400 when requested from other docker-compose service by service name

I'm trying to introduce a Spring Boot REST service into our development setup. The development setup uses docker-compose and an API gateway to expose the individual services on the same domain (i.e. localhost).
When I try to make an HTTP request to my service from inside another container, via the service name from the shared docker-compose file, the service returns a 400.
The setup
I've edited our docker-compose file as shown below to introduce the Spring Boot Java service. The service is based on spring-boot-starter-parent (2.0.3.RELEASE) and spring-boot-starter-web. I haven't configured anything related to the web server (except adding the server.server-header property, to assure myself that the request is actually hitting my service).
version: '3'
services:
  ...
  hello_java:
    build:
      context: ../hello-java/
      dockerfile: Dockerfile
    depends_on:
      - postgres
      - castle_black
    ports:
      - "8301:8080"
  castle_black:
    build: ../castle-black/tyk-gateway
    ports:
      - "8191:8080"
    depends_on:
      - redis
The behaviour
If I request the hello service from outside the containers (e.g. in my browser on localhost:8301), it replies correctly. If I'm inside a container but look up the IP that my new service's container gets on the docker network, and use that instead, the new service also responds correctly.
Below I have shown a request from inside the API gateway container to the Java service, first by using the service name and then afterwards with the IP that was resolved. It only replies with a correct response in the IP case.
# curl -v http://hello_java:8080/hello-java/greet?username=Java
* Hostname was NOT found in DNS cache
* Trying 172.19.0.6...
* Connected to hello_java (172.19.0.6) port 8080 (#0)
> GET /hello-java/greet?username=Java HTTP/1.1
> User-Agent: curl/7.35.0
> Host: hello_java:8080
> Accept: */*
>
< HTTP/1.1 400
< Transfer-Encoding: chunked
< Date: Wed, 01 Aug 2018 11:34:34 GMT
< Connection: close
* Server MySpringBootApp is not blacklisted
< Server: MySpringBootApp
<
* Closing connection 0
# curl -v http://172.19.0.6:8080/hello-java/greet?username=Java
* Hostname was NOT found in DNS cache
* Trying 172.19.0.6...
* Connected to 172.19.0.6 (172.19.0.6) port 8080 (#0)
> GET /hello-java/greet?username=Java HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 172.19.0.6:8080
> Accept: */*
>
< HTTP/1.1 200
< Content-Type: text/plain;charset=UTF-8
< Content-Length: 10
< Date: Wed, 01 Aug 2018 11:34:55 GMT
* Server MySpringBootApp is not blacklisted
< Server: MySpringBootApp
<
* Connection #0 to host 172.19.0.6 left intact
Hello Java
The questions
Is there something in the standard spring-boot-starter-web setup that prevents the web server from servicing the request, when the client adds the "Host: hello_java:8080" header? Or why is the web server behaving differently in the two scenarios? And what can I do about it?
After some experimentation it turned out that it was the underscore in the service name that caused the issue. Changing the service name to one without an underscore solved the problem.
RFC 952 stipulates that "name" (Net, Host, Gateway, or Domain name) is a text string up to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus sign (-), and period (.)
It seems that _ is not a valid character in host names. It's a bit confusing, because I had the same problem: when I ping app_server it's fine, but when I wget from app_server I get a bad request.
Changing the underscore to a minus fixed it for me.
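In docker-compose terms the fix is just a rename (a sketch based on the compose file from the question; only the relevant keys are shown): Tomcat validates the Host header against the hostname grammar and rejects names containing an underscore, so the service name must be a legal hostname.

```yaml
services:
  hello-java:              # was hello_java; "_" is not valid in a hostname
    build:
      context: ../hello-java/
      dockerfile: Dockerfile
    ports:
      - "8301:8080"
```

Any depends_on entries and URLs referring to the old hello_java name have to be updated to hello-java as well.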

HttpClient does not send cookie when connecting via hostname

I'm using Apache HttpComponents HttpClient 4.4.1 to communicate with an ASP.Net SOAP service running on IIS (client and server are on the same host as it happens). The SOAP service returns a Set-Cookie header with a session token in response to its AuthenticateUser service. Example:
02 22:16:32.994 DEBUG <localhost-startStop-1> [org.apache.http.headers ] http-outgoing-1 << Set-Cookie: NTSSOCookie=50d2300d-31cf-4187-820f-83b29949f38b; expires=Sat, 02-Jul-2016 21:16:32 GMT; path=/
...
02 22:16:33.002 DEBUG <localhost-startStop-1> [rotocol.ResponseProcessCookies] Cookie accepted [NTSSOCookie="50d2300d-31cf-4187-820f-83b29949f38b", version:0, domain:palab46, path:/, expiry:Sat Jul 02 22:16:32 BST 2016]
Something peculiar I have observed: if I connect to the server using just the hostname, the HttpClient library indicates that it has accepted the cookie returned from the server, but it never attaches the cookie to outgoing requests on the same connection. If, however, I switch to using either the IP address or the FQDN of the server, the cookie is sent just fine.
I've verified the address, port and protocol are consistent across the authentication request and subsequent requests, and also that the cookie path and expiry date values are valid and consistent in all cases.
For kicks, I also checked the hostname and FQDN both resolve to the correct IP address.
What could be causing this odd behaviour?

Haproxy Bad Gateway 502

So I am using HAProxy in front of Jetty servlets.
The goal at the moment is just a proof of concept, plus load and stress testing once everything is configured.
However, I have a problem configuring HAProxy. I know it's not a problem with my application, because everything works properly when I run nginx (tengine) instead. So it has to be something with the HAProxy configuration, or the way HAProxy works is simply not suitable for my needs.
So what my client tries to do is connect to haproxy using two different connections and keep them open:
Connect with a chunked streaming mode for upload.
Connect with a normal mode and establish a download channel.
Here's what my haproxy.conf file looks like:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    # Default SSL material locations
    # ca-base /etc/ssl/certs
    # crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
    maxconn 2048

defaults
    log     global
    mode    http
    option  forwardfor
    option  http-server-close
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics
    stats auth user:password

frontend www-http
    bind *:80
    reqadd X-Forwarded-Proto:\ http
    default_backend www-backend

frontend www-https
    bind *:443 ssl crt /etc/haproxy/server.pem
    reqadd X-Forwarded-Proto:\ https
    default_backend www-backend

backend www-backend
    redirect scheme https if !{ ssl_fc }
    server www-1 localhost:8080 check maxconn 2048
And here's what my logs say when I try to access port 443:
Sep 17 11:10:18 xxxxx-pc haproxy[15993]: 127.0.0.1:32875 [17/Sep/2014:11:10:18.464] www-https~ www-backend/www-1 0/0/0/-1/1 502 212 - - PH-- 0/0/0/0/0 0/0 "GET /test HTTP/1.1"
Any ideas what the problem might be?
An issue with the configuration, or something else?
Thanks.
PH means that haproxy rejected the header from the backend because it was malformed.
http://www.haproxy.org/download/1.4/doc/configuration.txt
PH - The proxy blocked the server's response, because it was invalid,
incomplete, dangerous (cache control), or matched a security filter.
In any case, an HTTP 502 error is sent to the client. One possible
cause for this error is an invalid syntax in an HTTP header name
containing unauthorized characters. It is also possible but quite
rare, that the proxy blocked a chunked-encoding request from the
client due to an invalid syntax, before the server responded. In this
case, an HTTP 400 error is sent to the client and reported in the
logs.
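To confirm which response header HAProxy is choking on, one diagnostic sketch (based on the backend from the question; not something to leave enabled in production) is to relax response parsing temporarily:

```haproxy
backend www-backend
    # Diagnostic only: accept responses HAProxy would normally reject,
    # so the offending header can be inspected. Remove once identified.
    option accept-invalid-http-response
    redirect scheme https if !{ ssl_fc }
    server www-1 localhost:8080 check maxconn 2048
```

The stats socket configured in the global section can also dump the exact rejected response: echo "show errors" | socat stdio /run/haproxy/admin.sock.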

SSO authentication, response is always NTLM

I'm trying to implement SSO on an intranet application we are developing. I am using SPNEGO for this. Now I'm having some trouble configuring the SSO and hope someone here is able to help me.
The setup is like this:
Linux server with tomcat to serve the intranet application
Windows Server 2008 as domain controller (Active Directory)
Windows 7 client with IE9 and Firefox
When I open the intranet application, I see a GET request going from the client to the Tomcat server. The first response from the Tomcat server and the SpnegoFilter is a 401 Unauthorized, which is correct, since the client still needs to be authenticated.
806 6.117724 192.168.65.50 192.168.65.50 HTTP 284 HTTP/1.1 401 Unauthorized
WWW-Authenticate: Negotiate\r\n
The client's response is then a GET request with an NTLMSSP_NEGOTIATE flag. Here it breaks: I expect a Kerberos/SPNEGO response, not an NTLM one. Somehow I just can't figure out how to make the client send the correct response to the Tomcat server.
808 6.123277 192.168.65.50 192.168.65.50 HTTP 637 GET / HTTP/1.1 , NTLMSSP_NEGOTIATE
By default NTLM isn't supported by SPNEGO so I get the following entry in my log:
java.lang.UnsupportedOperationException: NTLM specified. Downgraded to Basic Auth (and/or SSL) but downgrade not supported.
So I'm doing something wrong, but after a day of fiddling with configurations and policies I just can't figure out what it is.
Hoping for some response.
Kerberos does not work on IPs, use fully qualified domain names.
Have you registered the SPN, and is the client domain-joined? The WWW-Authenticate: Negotiate header tells the web browser to try Kerberos. The browser hands that request off to the OS (SSPI) based on the URL in the address bar, and there must be a matching SPN in AD for that URL. As noted above, using an IP in your URL is more complicated, but it can be done. If your client is not domain-joined, there is extra configuration work to get it to contact your AD KDC. Firefox takes extra setup as well. Solve this with IE first, to eliminate that variable, and then come back to Firefox once the issue is resolved.
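For reference, a sketch of the SPN registration step (run with domain admin rights; the host name and service account are placeholders): the SPN has to be HTTP/ plus exactly the host name the browser sees in the address bar.

```powershell
# Register the SPN for the account Tomcat runs under (placeholders)
setspn -S HTTP/intranet.example.com MYDOMAIN\tomcat-svc

# List the SPNs currently registered for that account
setspn -L MYDOMAIN\tomcat-svc
```

The -S variant checks for duplicate SPNs before adding; a duplicate SPN is another common reason the client silently falls back to NTLM.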
