I made a very simple HTTP server in Java. The response sent to the browser is:
HTTP 1.1 200 OK
Server: OneFile 1.0
Content-Type: text/html; charset=utf-8
Content-Length: 202
Transfer-Encoding: chunked
<HTML><HEAD><TITLE>My website</TITLE></HEAD>
<BODY><H1>Document </H1>
</BODY></HTML>
Mozilla Firefox displays it as text/plain although it should be text/html. Why?
I suspect the header info is being ignored... Does it make any difference to the browser that I make the connection on port 8080?
Thanks for any help
The browser will honor your headers. Unfortunately, your response is malformed for several reasons:
the response should start with HTTP/1.1, not HTTP 1.1
you specify Transfer-Encoding: chunked, but your response does not follow the chunked format.
It appears that Firefox, quite sensibly, refuses to interpret such a malformed response and just shows it unchanged.
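For illustration, here is a minimal sketch of a well-formed version of that response, written over a plain socket. The class name and port are illustrative, and a real server would read the request before answering; this is just enough for a browser test:
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class OneFileServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                try (Socket client = server.accept()) {
                    byte[] body = ("<HTML><HEAD><TITLE>My website</TITLE></HEAD>\r\n"
                            + "<BODY><H1>Document</H1>\r\n"
                            + "</BODY></HTML>").getBytes(StandardCharsets.UTF_8);
                    // The status line must say "HTTP/1.1", headers end with a blank
                    // line, and Content-Length must match the body exactly;
                    // no Transfer-Encoding header is needed.
                    String head = "HTTP/1.1 200 OK\r\n"
                            + "Server: OneFile 1.0\r\n"
                            + "Content-Type: text/html; charset=utf-8\r\n"
                            + "Content-Length: " + body.length + "\r\n"
                            + "\r\n";
                    OutputStream out = client.getOutputStream();
                    out.write(head.getBytes(StandardCharsets.US_ASCII));
                    out.write(body);
                    out.flush();
                }
            }
        }
    }
}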
Related
I am currently implementing an HTTP server in Java but have run into a problem with transfer encoding.
While
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Encoding: gzip
Transfer-Encoding: chunked
works properly, using gzip and chunked as transfer codings only, like this:
Transfer-Encoding: gzip, chunked
results in the browser not displaying the response correctly.
While Chrome tries downloading the resource as a .gz file, Firefox tries to display it, which results in garbled output.
The strange thing about this is that the message body generated by the server is exactly the same as when gzip is used as the Content-Encoding instead, because RFC 7230 allows applying multiple transfer codings as long as the last one applied is chunked.
For example,
Transfer-Encoding: gzip, chunked
indicates that the payload body has been compressed using the gzip coding and then chunked using the chunked coding while forming the message body.
This is the original response from the server:
HTTP/1.1 200 OK
Date: Tue, 09 Jul 2019 17:52:41 GMT
Server: jPuzzle
Content-Type: text/plain
Transfer-Encoding: gzip, chunked
1c
òHÍÉÉW(Ï/ÊIQ ÿÿ
a
0
As one can guess, the body is gzipped and then chunked.
I would appreciate any help because I can't see where the specs have been violated.
You should use the Content-Encoding header for end-to-end compression.
Transfer-Encoding is a hop-by-hop header; it applies to the message between two nodes, not to the resource itself. Each segment of a multi-node connection can use different Transfer-Encoding values. If you want to compress data over the whole connection, use the end-to-end Content-Encoding header instead.
Also, send an Accept-Encoding: gzip request header to tell the server what the client expects.
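As a hedged sketch of the end-to-end approach (class name and sample text are made up), the server compresses the body once, labels it with Content-Encoding: gzip, and lets Content-Length describe the compressed bytes:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipResponse {
    // Builds a complete HTTP/1.1 response whose body is gzipped end to end.
    public static byte[] build(String text) throws IOException {
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(compressed)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8)); // compress once
        }
        byte[] body = compressed.toByteArray();
        String head = "HTTP/1.1 200 OK\r\n"
                + "Content-Type: text/plain\r\n"
                + "Content-Encoding: gzip\r\n"   // end-to-end: describes the entity
                + "Content-Length: " + body.length + "\r\n"
                + "\r\n";
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(head.getBytes(StandardCharsets.US_ASCII));
        out.write(body);
        return out.toByteArray();
    }
}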
What is special about API Gateway is that it is apparently not required to include Access-Control-Allow-Headers in the response headers.
This is AWS API Gateway Response Header:
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 152
Connection: keep-alive
Date: Tue, 11 Oct 2016 02:39:40 GMT
Access-Control-Allow-Origin: *
x-amzn-RequestId: f3838f6a-8f5b-11e6-b13a-XXXXXXX
X-Cache: Miss from cloudfront
Via: 1.1 XXXXXXXXXXX.cloudfront.net (CloudFront)
X-Amz-Cf-Id: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
This is My Own Rest Server Response Header:
HTTP/1.1 200
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Content-Type
Content-Type: application/json
Content-Length: 335
Date: Tue, 11 Oct 2016 02:34:31 GMT
The problem with my own REST server is that I need to include Access-Control-Allow-Headers in the response; otherwise I encounter "Request header field content-type is not allowed by Access-Control-Allow-Headers in preflight response".
With AWS API Gateway, I do not encounter that error even though Access-Control-Allow-Headers is not in the response headers.
According to the documentation here, Content-Type should be allowed by default.
"Apart from the headers set automatically by the user agent (e.g. Connection, User-Agent, etc.), the only headers which are allowed to be manually set are:
Accept
Accept-Language
Content-Language
Content-Type
The only allowed values for the Content-Type header are:
application/x-www-form-urlencoded
multipart/form-data
text/plain"
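Note that Access-Control-Allow-Headers only needs to appear on the response to the OPTIONS preflight itself; the actual GET or POST response can omit it, which would explain the API Gateway headers shown above. As a hedged illustration (the class name is made up), a servlet filter that answers the preflight might look like this:
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class CorsFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Access-Control-Allow-Origin", "*");
        // application/json is not a safelisted Content-Type, so the browser
        // sends an OPTIONS preflight that must list the allowed headers.
        if ("OPTIONS".equals(request.getMethod())) {
            response.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
            response.setHeader("Access-Control-Allow-Headers", "Content-Type");
            response.setStatus(HttpServletResponse.SC_OK);
            return;
        }
        chain.doFilter(req, res);
    }
    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}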
Hope that helps, Ritisha.
My team and I have a Tomcat server running a RESTful web service, implemented using RESTEasy:
@POST
@GZIP
@Path("/capture")
@Consumes(MediaType.APPLICATION_JSON)
Response RecieveData(@GZIP RecievingData recievingData);
We need to make compressed post to this service. The problem is we are not finding an implementation that works.
We tried using interceptors:
https://hc.apache.org/httpcomponents-client-4.2.x/httpclient/examples/org/apache/http/examples/client/ClientGZipContentCompression.java. But we were unable to capture the POST request body and compress it.
We tried using the RESTEasy client, but it doesn't seem to be compressing the body of the POST request: www.posttestserver.com/data/2016/01/06/15.33.391016591335
Finally we tried a customized class: https://gist.github.com/takumakei/913067. But we got a 400 error on the request:
HTTP/1.1 400 Bad Request [Content-Encoding: gzip, Content-Type: text/html; charset=UTF-8, Date: Thu, 07 Jan 2016 10:07:05 GMT, Server: Apache-Coyote/1.1, Content-Length: 66, Connection: keep-alive]
We are out of ideas, and this is supposed to be a simple function for an HTTP client. Any ideas?
Note: here is the RESTEasy proxy:
@POST
@GZIP
@Consumes(MediaType.APPLICATION_JSON)
public Response saveData(@GZIP RecievingData customer);
EDIT: After some changes in the firewall, the third method now fails with a 400 error.
If you are using Tomcat, why not add a request Filter that pre-processes received requests containing the header Content-Encoding: gzip and decompresses the body before the rest of the filter chain handles it?
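A hedged sketch of that idea (class name illustrative), wrapping the request so the body is transparently run through GZIPInputStream:
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import javax.servlet.*;
import javax.servlet.http.*;

public class GzipRequestFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        if ("gzip".equalsIgnoreCase(http.getHeader("Content-Encoding"))) {
            final GZIPInputStream gzip = new GZIPInputStream(http.getInputStream());
            req = new HttpServletRequestWrapper(http) {
                @Override
                public ServletInputStream getInputStream() {
                    // Minimal adapter that streams the decompressed bytes.
                    return new ServletInputStream() {
                        @Override public int read() throws IOException { return gzip.read(); }
                        // Servlet 3.1 additions; a blocking stream can stub them.
                        @Override public boolean isFinished() { return false; }
                        @Override public boolean isReady() { return true; }
                        @Override public void setReadListener(ReadListener l) { /* blocking I/O only */ }
                    };
                }
            };
        }
        chain.doFilter(req, res);
    }
    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}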
EDIT:
I'm guessing your third option may actually have worked (snoop the network to verify); the issue was that you got a 403 Forbidden response from the server. That's a problem with authorization, not with the URL, request encoding, or anything else. The GZIP might actually be working for you right now.
EDIT:
Your latest output for HTTP response code 400 Bad Request shows Content-Type: text/html. The controller expects Content-Type: application/json, so the client did not set the Content-Type as required by the controller. Recheck your usage and configuration of the client code.
In the end I used the RESTEasy framework on both server and client to implement the GZIP compression.
Server side:
https://docs.jboss.org/resteasy/docs/2.3.0.GA/userguide/html/gzip.html
Client Side:
https://docs.jboss.org/resteasy/docs/2.2.1.GA/userguide/html/RESTEasy_Client_Framework.html
That worked for me.
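For reference, the client side of that setup reduces to a proxy built from the annotated interface. A hedged sketch, assuming RESTEasy 2.x on the classpath; the interface, payload type, and base URI below are placeholders standing in for the real ones:
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import org.jboss.resteasy.annotations.GZIP;
import org.jboss.resteasy.client.ProxyFactory;
import org.jboss.resteasy.plugins.providers.RegisterBuiltin;
import org.jboss.resteasy.spi.ResteasyProviderFactory;

public class GzipClientExample {
    // Placeholder interface mirroring the proxy shown above.
    public interface DataService {
        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        Response saveData(@GZIP String body); // @GZIP marks the entity for compression
    }

    public static void main(String[] args) {
        // Register the built-in providers, including the GZIP interceptors
        // that honor the @GZIP annotation on the proxy interface.
        RegisterBuiltin.register(ResteasyProviderFactory.getInstance());
        DataService service = ProxyFactory.create(DataService.class,
                "http://localhost:8080/api"); // placeholder base URI
        Response response = service.saveData("{\"key\":\"value\"}");
        System.out.println(response.getStatus());
    }
}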
I have an implementation of Java's HttpServer that I use for testing. It's pretty basic and means I can adjust what is served up to the client on the fly. My client code uses Apache HttpClient.
I would like to test some authentication using this implementation, but I'm having some issues. My problem is that my code never authenticates: the initial request is sent and the server responds with 401, but the HTTP client never responds to the challenge. It goes through the list of authentication types but never chooses BASIC.
If I connect to the same URL using my browser, I am prompted, and when I submit credentials it logs in. If I change my code so that it attempts to log into some other server, it is successful, so I know that both ends work!
I have Wiresharked the connection on both the client and server side, and the differences I can see are:
when the connection is successful, the subsequent request is sent as a POST not a GET
when my server responds, the authentication header is Www not WWW (as it is when it works)
EDIT:
Looking through the HttpClient code, the case sensitivity shouldn't be causing any problems. The first response below is the one that fails and the second is the one that works:
Fails
GET /testing HTTP/1.1
Host: 192.168.30.65:8000
Connection: Keep-Alive
User-Agent: Apache-HttpAsyncClient/4.0-beta1 (java 1.5)
HTTP/1.1 401 Unauthorized
Content-length: 0
Www-authenticate: Basic realm="myRealm"
Works
GET /svn HTTP/1.1
Host: svnserver
Connection: Keep-Alive
User-Agent: Apache-HttpAsyncClient/4.0-beta1 (java 1.5)
HTTP/1.1 401 Authorization Required
Date: Mon, 16 Apr 2012 09:51:58 GMT
Server: Apache/2.2.3 (CentOS)
WWW-Authenticate: Basic realm="Subversion Repository"
Content-Length: 475
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>401 Authorization Required</title>
</head><body>
<h1>Authorization Required</h1>
<p>This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.</p>
<hr>
<address>Apache/2.2.3 (CentOS) Server at svnserver Port 80</address>
</body></html>
GET /svn HTTP/1.1
Host: svnserver
Connection: Keep-Alive
User-Agent: Apache-HttpAsyncClient/4.0-beta1 (java 1.5)
Authorization: Basic YQVkd2Gm3GS6dXNjbMk5
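For context, the client-side setup that should answer such a 401 with Basic credentials looks roughly like this with the blocking HttpClient 4.x API; the host, port, and credentials are placeholders taken from the failing trace:
import org.apache.http.HttpResponse;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;

public class BasicAuthTest {
    public static void main(String[] args) throws Exception {
        DefaultHttpClient client = new DefaultHttpClient();
        // Scope the credentials to the test server; AuthScope.ANY also works in tests.
        client.getCredentialsProvider().setCredentials(
                new AuthScope("192.168.30.65", 8000),
                new UsernamePasswordCredentials("user", "password"));
        HttpResponse response = client.execute(
                new HttpGet("http://192.168.30.65:8000/testing"));
        // Expect 200 after the client retries with an Authorization: Basic header.
        System.out.println(response.getStatusLine());
    }
}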
I want to fetch a web page from an ASP.NET site that is only accessible from within a session. I'm using Apache HttpClient. I first open the main page of the site, then I search for the link to the "goal" page, and then I fire a GET request for the "goal" page. The problem is that the response to the second GET request is always the same (first) page. If I open the site with Firefox or Google Chrome, I get the "goal" page.
From the first response from the server I get the following headers:
HTTP/1.1 200 OK
Date: Sun, 12 Dec 2010 19:03:56 GMT
Server: Microsoft-IIS/6.0
Platform: Mobitel Pla.NET
Node: 4
X-Powered-By: ASP.NET
X-AspNet-Version: 1.1.4322
Set-Cookie: ASP.NET_SessionId=0vpgd055cifko3mnw4nkuimz; path=/
Cache-Control: no-cache, must-revalidate
Content-Type: text/html; charset=utf-8
Content-Length: 7032
I inspected the traffic with Wireshark and all the headers look OK; I send the correct cookie back to the server on the second GET request.
I'm using Apache HttpClient, with a single DefaultHttpClient instance that I reuse for the second request, and the BROWSER_COMPATIBILITY cookie policy.
Any ideas?
You need to send this header back from the client (send back the cookie you received) in all your further requests:
Cookie: ASP.NET_SessionId=0vpgd055cifko3mnw4nkuimz; // and all other cookies
That should do the trick
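A hedged sketch of that with HttpClient 4.x (the URL is a placeholder); reusing one client instance replays the session cookie automatically, or the header can be pinned by hand:
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;

public class SessionCookieExample {
    public static void main(String[] args) throws Exception {
        // One client instance = one cookie store; the Set-Cookie from the
        // first response is replayed automatically on later requests.
        DefaultHttpClient client = new DefaultHttpClient();
        HttpGet goal = new HttpGet("http://example.com/goal.aspx"); // placeholder URL
        // To pin the session cookie explicitly instead:
        goal.setHeader("Cookie", "ASP.NET_SessionId=0vpgd055cifko3mnw4nkuimz");
        HttpResponse response = client.execute(goal);
        System.out.println(response.getStatusLine());
    }
}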
I found my stupid mistake.
The mistake was that I was sending the second GET request to a link without decoding the ampersand character entities.
Ex:
/(0vpgd055cifko3mnw4nkuimz)/Mp.aspx?ni=1482&amp;pi=72&amp;_72_url=925b9749-b7c7-4615-9f1a-9b613c344c82
That is wrong, because I send &amp; instead of &.
The RIGHT way to do it is:
/(0vpgd055cifko3mnw4nkuimz)/Mp.aspx?ni=1482&pi=72&_72_url=925b9749-b7c7-4615-9f1a-9b613c344c82
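In code, the fix is just to decode the HTML entities in any link scraped from the page before requesting it; a minimal sketch (the single replace covers the &amp; case that bit me):
public class LinkDecoder {
    // Decodes the HTML-entity form of a link extracted from a page so that
    // query parameters are separated by a literal '&'.
    static String decode(String href) {
        return href.replace("&amp;", "&");
    }

    public static void main(String[] args) {
        String fromHtml = "/Mp.aspx?ni=1482&amp;pi=72";
        System.out.println(decode(fromHtml)); // -> /Mp.aspx?ni=1482&pi=72
    }
}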