If I run a webapp under the URI /myapp, then as soon as the app is accessed via http://example.com/myapp, the URL changes to http://example.com/myapp/. Is there any way to prevent this?
When you see this behaviour, it is because your web (or application) server returns a
301 Moved Permanently
when the URL without the trailing slash is requested.
You can see a similar example when getting http://www.google.es/services
HTTP/1.1 301 Moved Permanently
Location: http://www.google.es/services/
Content-Type: text/html; charset=UTF-8
X-Content-Type-Options: nosniff
Date: Wed, 11 May 2011 15:24:06 GMT
Expires: Fri, 10 Jun 2011 15:24:06 GMT
Cache-Control: public, max-age=2592000
Server: sffe
Content-Length: 227
X-XSS-Protection: 1; mode=block
After this first HTTP GET to http://www.google.es/services
(without the trailing slash), the browser makes a second HTTP GET to http://www.google.es/services/ (with the slash). You can trace the HTTP requests with the Network tab in Firebug, for example.
You can check your web/application server configuration, and maybe you can change this behaviour.
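If you just want to confirm that it is the server issuing the redirect, a small Java sketch like this (using the URL from the question) keeps the 301 visible instead of silently following it:
import java.net.HttpURLConnection;
import java.net.URL;

public class RedirectCheck {
    public static void main(String[] args) throws Exception {
        // Request the context root without the trailing slash.
        URL url = new URL("http://example.com/myapp");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setInstanceFollowRedirects(false); // keep the 301 visible
        conn.setRequestMethod("GET");
        System.out.println(conn.getResponseCode());          // e.g. 301
        System.out.println(conn.getHeaderField("Location")); // e.g. http://example.com/myapp/
        conn.disconnect();
    }
}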
I have encountered something that I have no clue how to solve. Managing to narrow down what the issue was is a win on its own, but still... Basically, I have a class that extends AbstractPdfView and I use it to generate documents. The document is then returned to the client and downloaded. It all works when I run the application locally, but when I deploy it, the request immediately fails with a 500 and (failed) net::ERR_INVALID_RESPONSE. Here is the response as well:
HTTP/1.1 500
Server: nginx/1.14.1
Date: Thu, 02 May 2019 19:18:40 GMT
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Pragma: private
Cache-Control: private, must-revalidate
Content-Disposition: attachment;
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
X-Frame-Options: DENY
I am not sure what else to provide, code-wise... The question is: is there any protection around receiving files as responses? Could it be that the file is downloaded directly rather than through a dialog where you specify where to save it?
Thanks.
I cannot believe it... It turns out I had a font in the PDF file (which, by the way, I was loading from the project directory) and it wasn't being recognized. I switched to a basic font and all is good now... Thanks to @peekay for guiding me to a solution.
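For anyone hitting the same thing, here is a minimal sketch (the view class name, model key and text are hypothetical) of what "switching to a basic font" can look like in a Spring AbstractPdfView, using one of iText's built-in base-14 fonts so no font file has to be resolved on the server:
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.lowagie.text.Document;
import com.lowagie.text.Font;
import com.lowagie.text.FontFactory;
import com.lowagie.text.Paragraph;
import com.lowagie.text.pdf.PdfWriter;
import org.springframework.web.servlet.view.document.AbstractPdfView;

public class ReportPdfView extends AbstractPdfView {
    @Override
    protected void buildPdfDocument(Map<String, Object> model, Document document,
                                    PdfWriter writer, HttpServletRequest request,
                                    HttpServletResponse response) throws Exception {
        // Built-in Helvetica instead of BaseFont.createFont(...) on a file
        // loaded from the project directory.
        Font basic = FontFactory.getFont(FontFactory.HELVETICA, 12);
        document.add(new Paragraph(String.valueOf(model.get("reportText")), basic));
    }
}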
I need to use the "Range" header to continue downloading a partially downloaded file in my Android app.
conn.setRequestProperty("Range", "bytes=" + this.downloadedBytes + "-");
When I log the request properties to logcat with
conn.getRequestProperties().toString()
I see only what I've set:
{Range=[bytes=3129-]}
My server is responding with an HTTP 416, Range Not Satisfiable. I can see this response in the Apache access log. On the Android side I get an IOException (java.io.FileNotFoundException), which I guess is how it deals with a 416, just like it would a 404. That would be a perfectly normal response, except that curl works perfectly against that same file!
curl -I --header "Range: bytes=3129-" ...
I get the expected HTTP 200 response:
Last-Modified: Fri, 27 Mar 2015 15:26:59 GMT
Accept-Ranges: bytes
Content-Length: 1915
Cache-Control: max-age=0
Expires: Fri, 27 Mar 2015 19:17:33 GMT
Vary: Accept-Encoding
Content-Range: bytes 3129-5043/5044
Content-Type: text/html
What am I missing on the Android/Java side here? What about the request is making Apache serve back a 416 when curl works just fine?
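For reference, here is a trimmed-down sketch of the request I am making (the URL, the offset and the file handling are placeholders); checking getResponseCode() before touching the stream at least shows the 416 directly instead of surfacing it as a FileNotFoundException:
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangedDownload {
    public static void main(String[] args) throws Exception {
        long downloadedBytes = 3129; // bytes already on disk
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/file.bin").openConnection();
        conn.setRequestProperty("Range", "bytes=" + downloadedBytes + "-");

        int code = conn.getResponseCode(); // read the status before the body
        if (code == HttpURLConnection.HTTP_PARTIAL) {    // 206: server honoured the range
            try (InputStream in = conn.getInputStream()) {
                // append the remaining bytes to the local file ...
            }
        } else if (code == 416) {                        // range not satisfiable
            // the offset is at or beyond the end of the resource as the server sees it
        }
        conn.disconnect();
    }
}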
I am trying to create a session connecting an Android phone to a Java backend.
For this I call a URL that looks like this: https://sub.domain.com/path-web/rest/user/init and get the following response:
HTTP/1.1 302 Found
Location: https://path.domain.com;jsessionid=SOMEID.frontend2
Content-Type: text/plain; charset=UTF-8
Content-Length: 0
Connection: close
Set-Cookie: JSESSIONID=SOMEID.frontend2;
Path=/; Secure; HttpOnly
Date: Fri, 24 Oct 2014 08:32:17 GMT
This causes my HTTP library, in this case okhttp, to try to follow the redirect to https://path.domain.com;jsessionid=SOMEID.frontend2.
This fails because parsing this URL with java.net.URI produces a URI with a null host.
Chrome also won't open the URL as it is.
Is the URL created incorrectly by the backend, or is java.net.URI parsing it incorrectly?
What can I do to work with URLs like that?
As described in the comments, the URL is not valid and therefore cannot be parsed by java.net.URI.
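To illustrate (the host and session id are the ones from the question; the clean-up at the end is just one possible client-side workaround, not something okhttp does for you):
import java.net.URI;

public class LocationParseDemo {
    public static void main(String[] args) throws Exception {
        String location = "https://path.domain.com;jsessionid=SOMEID.frontend2";
        URI uri = new URI(location);
        System.out.println(uri.getHost());      // null: ';' cannot appear in a hostname
        System.out.println(uri.getAuthority()); // path.domain.com;jsessionid=SOMEID.frontend2
        // uri.parseServerAuthority();          // would throw URISyntaxException

        // Possible workaround: strip the ;jsessionid path parameter before re-parsing.
        int semi = location.indexOf(';');
        URI cleaned = new URI(semi >= 0 ? location.substring(0, semi) : location);
        System.out.println(cleaned.getHost());  // path.domain.com
    }
}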
How do I properly handle a client's Connection: close request header? As of now, if I receive this header I close the socket, wait for a subsequent request from the client, then reply again and start serving the data.
I don't know why my client/server communication is not working the way it does with the Apache server I tested against.
Thanks for any clarifications...
Client/Server communication:
CLIENT:
HEAD /stream.mpeg HTTP/1.0
Host: 127.0.0.1
User-Agent: SuperPlayer
Connection: Close
SERVER:
HTTP/1.0 200 OK
Date: Wed, 1 Jun 2011 20:05:13 GMT
Server: HTTP Server
Last-Modified: Mon, 06 Aug 2009 01:02:23 GMT
Accept-Ranges: bytes
Connection: Close
Content-Type: audio/mpeg
CLIENT:
HEAD /stream.mpeg HTTP/1.0
Host: 127.0.0.1
User-Agent: SuperPlayer
Connection: Close
SERVER:
HTTP/1.0 200 OK
Date: Wed, 1 Jun 2011 20:05:13 GMT
Server: HTTP Server
Last-Modified: Mon, 06 Aug 2009 01:02:23 GMT
Accept-Ranges: bytes
Connection: Close
Content-Type: audio/mpeg
231489172304981723409817234981234acvass123412323
21312hjdfaoi8w34yorhadl4hi8rali45mhalo3i,wmotw
345fqw354aoicu43yocq2i3hr
Client/Apache Server communication:
CLIENT:
GET /test.mp3 HTTP/1.0
Host: 192.168.1.120
User-Agent: SuperPlayer
Connection: Close
SERVER:
HTTP/1.1 200 OK
Date: Wed, 01 Jun 2011 19:15:11 GMT
Server: Apache/2.2.16 (Win32)
Last-Modified: Thu, 29 Apr 2010 21:06:34 GMT
ETag: "14000000047049-4f75c8-4856680636a80"
Accept-Ranges: bytes
Content-Length: 5207496
Connection: close
Content-Type: audio/mpeg
...d.....<).0.. ..........<.#.. ( .h.$.J...1...i....A. ......c....a.9..!g.N...A. ........ ....>......|.......8....a......|..|N.............'>........?...C.....#..TJt.n .e...r.iL..#..IH...pR|.
Yes, closing the socket is the right action to take. If the client is using this header properly, they will close the socket on their end once they receive your response.
What I'm noticing here is that your server is not returning a Content-Length header. Even though the client is issuing a HEAD request, per the HTTP/1.1 specification (RFC 2616, section 9.4):
The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. This method can be used for obtaining metainformation about the entity implied by the request without transferring the entity-body itself. This method is often used for testing hypertext links for validity, accessibility, and recent modification.
The response to a HEAD request MAY be cacheable in the sense that the information contained in the response MAY be used to update a previously cached entity from that resource. If the new field values indicate that the cached entity differs from the current entity (as would be indicated by a change in Content-Length, Content-MD5, ETag or Last-Modified), then the cache MUST treat the cache entry as stale.
The key here is to make sure you're telling the client the size of the response without actually sending the data.
The Connection: close header just means that the client is expecting you to close the connection after sending the response. That also absolves you of having to send a Content-Length: header.
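To make that concrete, here is a minimal, single-threaded sketch (not production code; the payload, port and paths are made up) that answers a HEAD request with the same headers a GET would carry, including Content-Length, and then closes the connection as Connection: close (and plain HTTP/1.0) expects:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TinyHttpServer {
    public static void main(String[] args) throws Exception {
        byte[] body = "dummy audio payload".getBytes(StandardCharsets.UTF_8); // stand-in resource
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                try (Socket client = server.accept()) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(client.getInputStream(), StandardCharsets.UTF_8));
                    String requestLine = in.readLine(); // e.g. "HEAD /stream.mpeg HTTP/1.0"
                    String line;
                    while ((line = in.readLine()) != null && !line.isEmpty()) {
                        // skip the remaining request headers (Host, User-Agent, Connection, ...)
                    }
                    String headers = "HTTP/1.0 200 OK\r\n"
                            + "Content-Type: audio/mpeg\r\n"
                            + "Content-Length: " + body.length + "\r\n" // the size, without the data
                            + "Connection: close\r\n"
                            + "\r\n";
                    OutputStream out = client.getOutputStream();
                    out.write(headers.getBytes(StandardCharsets.UTF_8));
                    if (requestLine != null && !requestLine.startsWith("HEAD")) {
                        out.write(body); // only a GET gets the entity body
                    }
                    out.flush();
                } // the try-with-resources closes the socket here, which is what
                  // Connection: close (and HTTP/1.0 without keep-alive) expects
            }
        }
    }
}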
May I ask why you are using HTTP 1.0 in the request?
There were no persistent connections in HTTP 1.0, so the server is supposed to terminate the TCP connection after the response, whether you send Connection: close or not.
If you are using HTTP 1.0 there are no persistent connections, as alexrs pointed out; instead, Connection: keep-alive is used to get them with HTTP 1.0. With HTTP 1.1 you do not need that, because connections are persistent by default in HTTP 1.1.
8.1.2 Overall Operation
A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection. That is, unless otherwise indicated, the client SHOULD assume that the server will maintain a persistent connection, even after error responses from the server.
Persistent connections provide a mechanism by which a client and a server can signal the close of a TCP connection. This signaling takes place using the Connection header field (section 14.10). Once a close has been signaled, the client MUST NOT send any more requests on that connection.
You can take a look at the HTTP 1.1 RFC:
RFC for HTTP 1.1
I have a Java application running under Tomcat, fronted by an Apache web server.
In my code I set the cookie domain as
.example.com
but my cookies still show up under www.example.com instead of under example.com in the client browser. What is strange is that the Google Analytics cookies show up under example.com, yet my own code cannot store cookies under example.com.
The Apache server is set up such that requests for example.com show up as www.example.com in the client browser's address bar, if that is related to the issue. I do need this, because otherwise different session IDs are generated for example.com and www.example.com, which is bad for my application.
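For reference, this is roughly how the cookie is created on my side (the cookie name and value are simplified):
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class CookieHelper {
    public static void addSiteCookie(HttpServletResponse response) {
        Cookie cookie = new Cookie("mycookie", "somevalue");
        cookie.setDomain(".example.com"); // intended to cover example.com and www.example.com
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}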
Apache server is set up such that requests for example.com show up as www.example.com in the client browser address bar, if that is related to the issue?
I am not 100% sure, but this looks like the root of the problem. How does Apache make the client browser display www.example.com instead of example.com? Most probably by redirecting each request for example.com to www.example.com. When the browser processes the redirection, it sends a request for www.example.com and from that point on thinks that it is working with www.example.com.
Now, what happens when there is a Set-Cookie in the response header? The browser will obviously treat it as coming from www.example.com. There is no way a browser would allow such a cookie to set its domain to .example.com, because that would be a security problem. Imagine that mysite.somefreehosting.com sets a cookie for the domain .somefreehosting.com. Then someothersite.somefreehosting.com would receive this cookie, which could lead to a lot of trouble. The standard specifies that such a cookie should be rejected, but I wouldn't be surprised if some browsers are smart enough to handle such cases and treat .example.com as www.example.com.
To be sure, I recommend that you check exactly what your site sends to the browser by issuing a request with something like the lwp-request script. You'll see which redirections happen and which headers are actually set in the response, like this:
alqualos@ubuntu:~$ lwp-request -sSed http://google.com/
GET http://google.com/ --> 301 Moved Permanently
GET http://www.google.com/ --> 302 Found
GET http://www.google.co.il/ --> 200 OK
Cache-Control: private, max-age=0
Connection: close
Date: Sat, 18 Dec 2010 18:54:57 GMT
Server: gws
Content-Type: text/html; charset=windows-1255
Expires: -1
Client-Date: Sat, 18 Dec 2010 18:54:57 GMT
Client-Peer: 173.194.37.104:80
Client-Response-Num: 1
Set-Cookie: PREF=ID=368e9cfd56643257:FF=0:TM=1292698497:LM=1292698497:S=s-Jur84NgaNH5Mzx;
expires=Mon, 17-Dec-2012 18:54:57 GMT; path=/; domain=.google.co.il
Set-Cookie: NID=42=bZ6goDV_b2MiWlTMONwiijaON5U_TBGB2_yNheonEwA1GVLU77EhyfUhk9Wvj70xTFrpvGy4s_aBp1UZtvRRnsnYjacjz_UVx0_iSr9R3nYXMyRtwkS5qV98_Egb16pZ;
expires=Sun, 19-Jun-2011 18:54:57 GMT; path=/; domain=.google.co.il; HttpOnly
Title: Google
X-XSS-Protection: 1; mode=block