Java HttpServer Basic Authentication problems - java

I have an implementation of Java's HttpServer that I use for testing. It's pretty basic, which means I can adjust what is served up to the client on the fly. My client code uses Apache HttpClient.
I would like to test some authentication using this implementation, but I'm having some issues. My problem is that my code never authenticates: the initial request is sent and the server responds with 401, but the HTTP client never responds. It goes through the list of authentication types but never chooses BASIC.
If I connect to the same URL using my browser I am prompted, and when I submit credentials it logs in. If I change my code so that it attempts to log in to some other server, it is successful, so I know that both ends work!
I have wiresharked the connection on the client and server side and the differences I can see are:
when the connection is successful the subsequent request is sent as a POST not a GET.
when my server responds, the authentication header is Www-authenticate, not WWW-Authenticate (as it is when it works)
EDIT:
Looking through the HTTP client code, the case sensitivity shouldn't be causing any problems. The first exchange is the one that fails and the second is the one that works:
Fails
GET /testing HTTP/1.1
Host: 192.168.30.65:8000
Connection: Keep-Alive
User-Agent: Apache-HttpAsyncClient/4.0-beta1 (java 1.5)
HTTP/1.1 401 Unauthorized
Content-length: 0
Www-authenticate: Basic realm="myRealm"
Works
GET /svn HTTP/1.1
Host: svnserver
Connection: Keep-Alive
User-Agent: Apache-HttpAsyncClient/4.0-beta1 (java 1.5)
HTTP/1.1 401 Authorization Required
Date: Mon, 16 Apr 2012 09:51:58 GMT
Server: Apache/2.2.3 (CentOS)
WWW-Authenticate: Basic realm="Subversion Repository"
Content-Length: 475
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>401 Authorization Required</title>
</head><body>
<h1>Authorization Required</h1>
<p>This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.</p>
<hr>
<address>Apache/2.2.3 (CentOS) Server at svnserver Port 80</address>
</body></html>
GET /svn HTTP/1.1
Host: svnserver
Connection: Keep-Alive
User-Agent: Apache-HttpAsyncClient/4.0-beta1 (java 1.5)
Authorization: Basic YQVkd2Gm3GS6dXNjbMk5
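
For reference, the built-in com.sun.net.httpserver API can issue the Basic challenge itself via a BasicAuthenticator attached to the test context. A minimal sketch of such a server (the port, handler body and credentials are illustrative; the realm matches the trace above):

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.BasicAuthenticator;
import com.sun.net.httpserver.HttpContext;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class TestAuthServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);

        // Handler that serves a fixed body once the request is authenticated.
        HttpContext context = server.createContext("/testing", new HttpHandler() {
            @Override
            public void handle(HttpExchange exchange) throws IOException {
                byte[] body = "hello".getBytes("UTF-8");
                exchange.sendResponseHeaders(200, body.length);
                OutputStream os = exchange.getResponseBody();
                os.write(body);
                os.close();
            }
        });

        // BasicAuthenticator issues the 401 challenge for realm "myRealm" and
        // only invokes the handler once checkCredentials returns true.
        context.setAuthenticator(new BasicAuthenticator("myRealm") {
            @Override
            public boolean checkCredentials(String user, String password) {
                return "user".equals(user) && "pass".equals(password);
            }
        });

        server.start();
    }
}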

Related

RESTEasyClient Request - session, cookie

I'm sending the "same" request (a simple GET request) to a server. With Postman everything works fine, but with RESTEasyClient it doesn't (401 Unauthorized)...
I looked at both requests with Fiddler and saw some differences which might be the cause of the problem (I actually don't know), but at least in my opinion it makes no sense to send these parameters... I have no idea where to turn this off; it seems to be default behavior of RESTEasyClient.
Here is the Postman request:
GET https://xxxx/ping HTTP/1.1
Authorization: Bearer 7e6e4255-0d94-3d29-8527-fb5c8ff8e23b
cache-control: no-cache
Postman-Token: 7d54d38f-ca13-4fb0-8d14-18153f9b2f93
User-Agent: PostmanRuntime/7.3.0
Accept: */*
Host: xxxx
accept-encoding: gzip, deflate
Connection: close
Here is the RESTEasyClient request:
GET https://tapi002-vpn-api.e-bk.m086/t1/msc-grawe/v1/ping HTTP/1.1
Authorization: Bearer 7e6e4255-0d94-3d29-8527-fb5c8ff8e23b
Host: xxxx
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.5.2 (Java/1.8.0_144)
Cookie: ROUTEID=.2
My questions are:
Why does RESTEasyClient send Connection: Keep-Alive? Wouldn't it be better to send Connection: close, because there is no session?!
Why does RESTEasyClient send a Cookie? I don't want or need any cookies...
And by the way: what's the Postman token?!
Update:
The Cookie: ROUTEID=.2 causes the error... so the important question is how to remove the Cookie from the RESTEasyClient request header.
Update 2:
The server requested that the cookie be set in the token response... strange... I will try to remove the cookie...
Set-Cookie: ROUTEID=.1; path=/;Secure;HttpOnly; max-age=1200
Why does RESTEasyClient send Connection: Keep-Alive? Wouldn't it be better to send Connection: close, because there is no session?!
As for Keep-Alive: because RESTEasy uses HTTP/1.1 with connection reuse by default. That does not imply a session.
Thanks to jokster for this answer.
Why does RESTEasyClient send a Cookie? I don't want or need any cookies...
RESTEasyClient does not send any cookies by default! In this case it does so because the server set the cookie in an earlier response...
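
One way to keep the stored cookie out of later requests, sketched here under the assumption of RESTEasy 3.x on top of Apache HttpClient 4.x (the User-Agent in the trace shows Apache-HttpClient/4.5.2; class names differ in other versions), is to hand RESTEasy an engine whose underlying HttpClient has cookie management disabled, so Set-Cookie headers are never stored or replayed:

import javax.ws.rs.core.Response;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.jboss.resteasy.client.jaxrs.ResteasyClient;
import org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder;
import org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine;

public class NoCookieClient {
    public static void main(String[] args) {
        // Underlying Apache client that neither stores nor replays cookies.
        CloseableHttpClient httpClient = HttpClients.custom()
                .disableCookieManagement()
                .build();

        ResteasyClient client = new ResteasyClientBuilder()
                .httpEngine(new ApacheHttpClient4Engine(httpClient))
                .build();

        // Placeholder URL and token taken from the question.
        Response response = client.target("https://xxxx/ping")
                .request()
                .header("Authorization", "Bearer 7e6e4255-0d94-3d29-8527-fb5c8ff8e23b")
                .get();
        System.out.println(response.getStatus());
        client.close();
    }
}

(ROUTEID-style cookies are commonly set by a load balancer for sticky routing, so dropping them can change which backend answers; in this case that appears to be exactly what is wanted.)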
And by the way: What's the postman token?!
Have a look at: What is the postman-token in generated code from Postman?

Http keep-alive protocol "Connection: close" from client but no "connection" header from server

I have a problem with broken connections, and I believe this is due to incorrect behavior with respect to HTTP keep-alive, but I can't tell whether the 'culprit' is on the client or on the server side.
I'm dealing with a scenario where the client sends an HTTP 1.1 request with:
Connection: close
and the server does NOT reply with a Connection header.
The behavior is as follows:
The client sends the request
The server sends it response
The server does NOT close the connection
The client does NOT close the connection (.1)
The client sends another request using the same connection
The server does not do anything and after 30 seconds closes the connection
The components are as follows:
Client: Java HttpConnection (within Dell Boomi iPaaS)
Server: SAP ERP OData Webservice
According to this article: "Should a server adhere to the HTTP Connection: close header sent from a client?" the problem seems to be on the client side. However, the Java HttpURLConnection implementation should be pretty robust (tested with the x64 server VM on Linux, 1.7.0_55-b13, and the x64 server VM on Windows 7, 1.7.0_75-b13).
Here are the complete set of headers from the 1st request from client:
GET /sap/opu/odata/SAP/ZZSALESORDER_SRV/$metadata HTTP/1.1
User-Agent: Boomi Http Transport
Authorization: Basic YmPRIVATESECRETPLEASExNg==
X-CSRF-Token: Fetch
Connection: close
Cache-Control: no-cache
Pragma: no-cache
Host: some.server.behind.firewall.local:8000
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Cookie: sap-usercontext=sap-client=100; SAP_SESSIONID_DEV_100=Cm7LsDSECRETSECRETSECTERFaIMak%3d
And these are the ones from the response from the server:
HTTP/1.1 200 OK
content-type: application/xml
content-length: 79750
x-csrf-token: oolTHISAGAINISASECRET3PA==
last-modified: Fri, 25 Mar 2016 17:55:35 GMT
dataserviceversion: 2.0
After the server has replied the client sends a second request using the same connection:
PUT /sap/opu/odata/SAP/ZZSALESORDER_SRV/SalesOrderItems(NumDocSap='1200001534',PosId='000020') HTTP/1.1
User-Agent: Boomi Http Transport
Content-Type: application/atom+xml
X-CSRF-Token: oolZMYSECRETPA==
Connection: close
Authorization: Basic YmPRIVATESECRETPLEASExNg==
Cache-Control: no-cache
Pragma: no-cache
Host: some.server.behind.firewall.local:8000
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Content-Length: 406
Cookie: sap-usercontext=sap-client=100; SAP_SESSIONID_DEV_100=Cm7PRETTYPRIVATESECRETak%3d
and the server does not reply and closes the connection abruptly after 30 seconds.
The problem can be completely resolved by setting the JVM flag
http.keepAlive=false
on the Java client side (Boomi), but this seems more of a workaround than a solution.
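
For reference, the flag is an ordinary system property read by the java.net HTTP handler, so where the launch command cannot be edited it can also be set programmatically, as long as that happens before the first connection is opened (whether that is possible inside a hosted platform like Boomi is another question):

// Same effect as launching the JVM with -Dhttp.keepAlive=false; set it
// before the first HttpURLConnection is opened so that no pooled
// connections exist yet.
System.setProperty("http.keepAlive", "false");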
Can someone please explain:
is there an HTTP protocol violation on client or server side?
can this be fixed by sending different headers?

is HTTP server header response ignored?

I made a very simple HTTP server in Java. The response sent to the browser is:
HTTP 1.1 200 OK
Server: OneFile 1.0
Content-Type: text/html; charset=utf-8
Content-Length: 202
Transfer-Encoding: chunked
<HTML><HEAD><TITLE>My website</TITLE></HEAD>
<BODY><H1>Document </H1>
</BODY></HTML>
Mozilla Firefox displays it as text/plain although it should be text/html. Why?
I suspect the setup info is ignored... Does it make any difference to the browser if I make the connection on port 8080?
Thanks for any help
The browser will honor your headers. Unfortunately, your response is malformed for several reasons:
the response should start with HTTP/1.1, not HTTP 1.1
you specify Transfer-Encoding: chunked, but your response does not follow the chunked format.
It appears that Firefox, quite sensibly, refuses to interpret such a malformed response and just shows it unchanged.
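
A corrected version of such a hand-rolled server might look like the sketch below (class name and port are illustrative, the body is the one from the question): the status line uses HTTP/1.1, the Transfer-Encoding header is gone, and Content-Length is computed from the actual body bytes.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class OneFileServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                try (Socket socket = server.accept()) {
                    // Read and discard the request line and headers.
                    BufferedReader in = new BufferedReader(new InputStreamReader(
                            socket.getInputStream(), StandardCharsets.ISO_8859_1));
                    String line;
                    while ((line = in.readLine()) != null && !line.isEmpty()) { }

                    byte[] body = ("<HTML><HEAD><TITLE>My website</TITLE></HEAD>\r\n"
                            + "<BODY><H1>Document </H1>\r\n"
                            + "</BODY></HTML>").getBytes(StandardCharsets.UTF_8);

                    String headers = "HTTP/1.1 200 OK\r\n"               // "HTTP/1.1", not "HTTP 1.1"
                            + "Server: OneFile 1.0\r\n"
                            + "Content-Type: text/html; charset=utf-8\r\n"
                            + "Content-Length: " + body.length + "\r\n"  // real length, no chunking
                            + "Connection: close\r\n"
                            + "\r\n";

                    OutputStream out = socket.getOutputStream();
                    out.write(headers.getBytes(StandardCharsets.ISO_8859_1));
                    out.write(body);
                    out.flush();
                }
            }
        }
    }
}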

How to properly handle client "Connection: close" request on HTTP file server?

How do I properly handle a client's Connection: close request field? As of now, if I get this particular field I close the socket, wait for a following request from the client, then reply again and start serving the data.
I don't know why my client/server communication is not working like the Apache server I tested with.
Thanks for any clarifications...
Client/server communication:
CLIENT:
HEAD /stream.mpeg HTTP/1.0
Host: 127.0.0.1
User-Agent: SuperPlayer
Connection: Close
SERVER:
HTTP/1.0 200 OK
Date: Wed, 1 Jun 2011 20:05:13 GMT
Server: HTTP Server
Last-Modified: Mon, 06 Aug 2009 01:02:23 GMT
Accept-Ranges: bytes
Connection: Close
Content-Type: audio/mpeg
CLIENT:
HEAD /stream.mpeg HTTP/1.0
Host: 127.0.0.1
User-Agent: SuperPlayer
Connection: Close
SERVER:
HTTP/1.0 200 OK
Date: Wed, 1 Jun 2011 20:05:13 GMT
Server: HTTP Server
Last-Modified: Mon, 06 Aug 2009 01:02:23 GMT
Accept-Ranges: bytes
Connection: Close
Content-Type: audio/mpeg
231489172304981723409817234981234acvass123412323
21312hjdfaoi8w34yorhadl4hi8rali45mhalo3i,wmotw
345fqw354aoicu43yocq2i3hr
Client/Apache server communication:
CLIENT:
GET /test.mp3 HTTP/1.0
Host: 192.168.1.120
User-Agent: SuperPlayer
Connection: Close
SERVER:
HTTP/1.1 200 OK
Date: Wed, 01 Jun 2011 19:15:11 GMT
Server: Apache/2.2.16 (Win32)
Last-Modified: Thu, 29 Apr 2010 21:06:34 GMT
ETag: "14000000047049-4f75c8-4856680636a80"
Accept-Ranges: bytes
Content-Length: 5207496
Connection: close
Content-Type: audio/mpeg
...d.....<).0.. ..........<.#.. ( .h.$.J...1...i....A. ......c....a.9..!g.N...A. ........ ....>......|.......8....a......|..|N.............'>........?...C.....#..TJt.n .e...r.iL..#..IH...pR|.
Yes closing the socket is the right action to take. If the client is using this header properly, they are closing the socket on their end once they receive your response.
What I'm noticing here is that your server is not returning a Content-Length header. Even though the client is issuing a HEAD request, based on the W3C proposal (sec. 9.4):
The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. This method can be used for obtaining metainformation about the entity implied by the request without transferring the entity-body itself. This method is often used for testing hypertext links for validity, accessibility, and recent modification.
The response to a HEAD request MAY be cacheable in the sense that the information contained in the response MAY be used to update a previously cached entity from that resource. If the new field values indicate that the cached entity differs from the current entity (as would be indicated by a change in Content-Length, Content-MD5, ETag or Last-Modified), then the cache MUST treat the cache entry as stale.
The key here is to make sure you're telling the client the size of the response without actually sending the data.
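
Putting that together, a HEAD reply for the stream above could look like the following sketch (the header values mirror the trace; the accepted socket and the size of stream.mpeg are assumed to be supplied by the surrounding server code):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

class HeadResponder {
    // Answer HEAD with the same headers a GET would carry, including
    // Content-Length, but without the body, then honour Connection: Close
    // by closing the socket.
    static void respondToHead(Socket socket, long fileLength) throws Exception {
        String head = "HTTP/1.0 200 OK\r\n"
                + "Server: HTTP Server\r\n"
                + "Accept-Ranges: bytes\r\n"
                + "Content-Type: audio/mpeg\r\n"
                + "Content-Length: " + fileLength + "\r\n"
                + "Connection: Close\r\n"
                + "\r\n";
        OutputStream out = socket.getOutputStream();
        out.write(head.getBytes(StandardCharsets.ISO_8859_1));
        out.flush();
        socket.close();
    }
}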
The Connection: close header just means that the client is expecting you to close the connection after sending the response. That also absolves you of having to send a Content-Length: header.
May I ask why you are using HTTP 1.0 in the request?
There were no persistent connections in HTTP 1.0, so the server is supposed to terminate the TCP connection after the response, whether you send Connection: close or not.
If you are using HTTP 1.0, there are no persistent connections, as alexrs pointed out; instead, Connection: keep-alive is used with HTTP 1.0. With HTTP 1.1 you do not need that, because connections are persistent by default in HTTP 1.1.
8.1.2 Overall Operation
A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection. That is, unless otherwise indicated, the client SHOULD assume that the server will maintain a persistent connection, even after error responses from the server.
Persistent connections provide a mechanism by which a client and a server can signal the close of a TCP connection. This signaling takes place using the Connection header field (section 14.10). Once a close has been signaled, the client MUST NOT send any more requests on that connection.
You can take a look at the HTTP 1.1 RFC:
RFC for HTTP 1.1

cookie being set for www.example.com instead of example.com

I have a Java application running under Tomcat which is fronted by an Apache web server.
In my code I set the cookie domain as
.example.com
but my cookies still show up under www.example.com instead of under example.com in the client browser. What is strange is that the Google Analytics cookies show up under example.com, yet my own code cannot store cookies under example.com.
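For reference, this is the kind of code the question describes, assuming the standard Servlet API (cookie name and value are placeholders):

import java.io.IOException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CookieServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Domain set explicitly so the cookie should be visible to both
        // example.com and www.example.com.
        Cookie cookie = new Cookie("myCookie", "someValue");
        cookie.setDomain(".example.com");
        cookie.setPath("/");
        resp.addCookie(cookie);
    }
}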
The Apache server is set up such that requests for example.com show up as www.example.com in the client browser address bar, if that is related to the issue. I do need this, otherwise different session IDs are generated for example.com and www.example.com, which is bad for my application.
The Apache server is set up such that requests for example.com show up as www.example.com in the client browser address bar, if that is related to the issue.
I am not 100% sure, but this looks like the root of the problem. How does Apache make the client browser display www.example.com instead of example.com? Most probably, by redirecting each request for example.com to www.example.com. When the browser processes the redirection, it sends a request for www.example.com and from that point on thinks that it is working with www.example.com.
Now, what happens when there is a Set-Cookie in the response header? It will obviously treat it as coming from www.example.com. There is no way a browser would allow such a cookie to set its domain to .example.com, because it would be a security problem. Imagine that mysite.somefreehosting.com sets a cookie for the domain .somefreehosting.com. Then someothersite.somefreehosting.com would receive this cookie, which could lead to a lot of trouble. The standard specifies that such a cookie should be rejected, but I wouldn't be surprised if some browsers are smart enough to handle such cases and treat .example.com as www.example.com.
To be sure, I recommend that you check exactly what your site sends to the browser by making a request with something like the lwp-request script. You'll see what redirections are happening and what headers are actually set in the response, like this:
alqualos#ubuntu:~$ lwp-request -sSed http://google.com/
GET http://google.com/ --> 301 Moved Permanently
GET http://www.google.com/ --> 302 Found
GET http://www.google.co.il/ --> 200 OK
Cache-Control: private, max-age=0
Connection: close
Date: Sat, 18 Dec 2010 18:54:57 GMT
Server: gws
Content-Type: text/html; charset=windows-1255
Expires: -1
Client-Date: Sat, 18 Dec 2010 18:54:57 GMT
Client-Peer: 173.194.37.104:80
Client-Response-Num: 1
Set-Cookie: PREF=ID=368e9cfd56643257:FF=0:TM=1292698497:LM=1292698497:S=s-Jur84NgaNH5Mzx;
expires=Mon, 17-Dec-2012 18:54:57 GMT; path=/; domain=.google.co.il
Set-Cookie: NID=42=bZ6goDV_b2MiWlTMONwiijaON5U_TBGB2_yNheonEwA1GVLU77EhyfUhk9Wvj70xTFrpvGy4s_aBp1UZtvRRnsnYjacjz_UVx0_iSr9R3nYXMyRtwkS5qV98_Egb16pZ;
expires=Sun, 19-Jun-2011 18:54:57 GMT; path=/; domain=.google.co.il; HttpOnly
Title: Google
X-XSS-Protection: 1; mode=block
