I am trying to send a gzipped multipart POST to a Tomcat server from a Java application using Jersey. When the multipart request is not compressed, it works perfectly fine. Other types of compressed POSTs also work, such as sending a single XML entity. I believe posting compressed data isn't part of the HTTP standard, but Tomcat does seem to support it to some degree.
A working uncompressed multipart POST:
POST /myApp/rest/data HTTP/1.1
Content-Type: multipart/mixed; boundary=Boundary_1_23237284_1331130438482
Cookie: JSESSIONID=XXXXXXXXXXXXXXXXXXXXXXXXX;Version=1;Path=/myApp/
MIME-Version: 1.0
User-Agent: Java/1.6.0_26
Host: localhost:8080
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
Transfer-Encoding: chunked
d3
--Boundary_1_23237284_1331130438482
Content-Type: application/octet-stream
Content-Disposition: form-data; filename="uploadFile.war"; modification-date="Wed, 29 Feb 2012 18:01:38 GMT"; size=25343899; name="file"
{binary data here}
--Boundary_1_25179713_1331128929019
Content-Type: application/xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><myXMLEntity>stuff</myXMLEntity>
--Boundary_1_25179713_1331128929019--
When I compress it using the Jersey GZIPContentEncodingFilter(), the following headers are sent, and I get back an HTTP 400 with the description "incorrect syntax":
POST /myApp/rest/data HTTP/1.1
Content-Type: multipart/mixed
Cookie: JSESSIONID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX;Version=1;Path=/myApp/
Accept-Encoding: gzip
Content-Encoding: gzip
User-Agent: Java/1.6.0_26
Host: localhost:8080
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
Transfer-Encoding: chunked
{binary data here}
Is what I'm trying to do possible? Should the Content-Type actually read multipart/x-gzip? I notice that when it gets compressed, the boundary text gets left off of the Content-Type header - is this also a problem?
I ran into this same issue (or something very similar) and tracked it down to the Content-Type header missing the boundary parameter when using GZIPContentEncodingFilter. I was able to work around it by using MultiPartMediaTypes.createFormData() when setting the type of the entity I was POSTing from the Jersey client. Doing so makes sure the boundary parameter is set earlier than Jersey would automatically set it, which seems to be too late when using the GZIPContentEncodingFilter for compressing the request entity. There is an equivalent method for multipart/mixed.
I don't have an IDE handy but something similar to this:
// client is a Jersey 1.x Client; multipartFormData is the MultiPart entity
client.resource(uri)
      .entity(multipartFormData, MultiPartMediaTypes.createFormData())
      .post(ClientResponse.class);
All that said, this will still only work if your server is able to handle GZIP compressed requests.
IMO you can't do it that way, because the server and the client need to agree on how they communicate (e.g. gzip compression). HTTP is designed as request/response, and the server can only return what the client declares it supports.
The client sends a request to the server saying, "Hey server, I need this resource and I support gzip, so you can return gzip if you can." :)
Imagine a situation where your client sends the server a few megabytes in gzip, but the server doesn't support it.
I am trying to download a file from an application through two frameworks: one on Struts 1 (the older framework) and the other on Spring MVC (migrated from the old one). In the Spring-migrated application, IE 11 shows "File could not be downloaded" when a compressed (gzipped) response is sent to the client. It works fine in Chrome, and also in the older Struts application. If the response is not compressed, the file downloads successfully in IE on Spring MVC as well. I cannot identify the cause here and would appreciate some guidance.
The request headers are:
<code>
Request URL: //edited
Request Method: POST
Status Code: 200 / OK
- Request Headers
Accept: text/html, application/xhtml+xml, image/jxr, */*
Accept-Encoding: gzip, deflate
Accept-Language: en-US
Authorization: Basic YW3Rt2aW46dG4V3zdD5EyMz6Q=
Cache-Control: no-cache
Connection: Keep-Alive
Content-Length: 3521
Content-Type: application/x-www-form-urlencoded
Cookie: JSESSIONID=urVBPpjD3QrP6KhkqCK4r8KSAuvKFSVPdp-UXyz-FYSz4W0cQmV9sh!4524586920
Host: localhost:7001
Referer: //edited
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko
</code>
Below are the response headers:
<code>
Response Headers
Cache-Control: private, no-cache, no-store
Content-Disposition: attachment; filename="Closed DSP01 CRD0037_2019-12-26_133924.csv"
Content-Encoding: gzip
Content-Type: text/csv; charset=UTF-8
Date: Thu, 26 Dec 2019 08:09:23 GMT
Expires: 0
Pragma: no-cache
Transfer-Encoding: chunked
</code>
The particular code where compression is done:
<code>
if (canUseGzip) {
    response.setHeader("Content-Encoding", "gzip");
    GZIPOutputStream out = new GZIPOutputStream(response.getOutputStream());
    pw = new OutputStreamWriter(out, "UTF-8");
} else {
    pw = response.getWriter();
}
</code>
I tried different possibilities, and when I explicitly set the Content-Length header, say,
response.setHeader("Content-Length", String.valueOf(1024));
the file downloaded successfully in IE. When I googled, I found that Transfer-Encoding and Content-Length are mutually exclusive, and the former is already on the response. I don't know why adding Content-Length worked here, or why it was required only in the Spring-migrated code. The file downloads successfully on the older Struts framework without setting the Content-Length header.
Is there anything specific I am missing here? Is there any other permanent solution as well?
Also, how do I correctly set the content length? I tried 1024 at random, and it may fail for other data.
Any help is appreciated
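One way to sidestep the Transfer-Encoding/Content-Length conflict is to compress the whole body into memory first, so the exact byte count is known before any header is sent. This is a sketch, not the original servlet code; the servlet calls appear only as comments:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class BufferedGzipLength {
    public static void main(String[] args) throws Exception {
        String csv = "id,name\n1,alpha\n2,beta\n";

        // Compress fully into a buffer; closing the writer finishes the gzip stream.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (OutputStreamWriter w =
                 new OutputStreamWriter(new GZIPOutputStream(buf), StandardCharsets.UTF_8)) {
            w.write(csv);
        }

        // In the servlet this would become:
        //   response.setHeader("Content-Encoding", "gzip");
        //   response.setContentLength(buf.size());   // exact length, not a guess
        //   buf.writeTo(response.getOutputStream());
        System.out.println("gzipped length: " + buf.size());
    }
}
```

With a known Content-Length, the container no longer needs chunked transfer encoding for that response, which is the combination IE appears to choke on. The trade-off is holding the compressed file in memory, so this only suits bounded downloads.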
I am currently implementing an HTTP server in Java but have run into a problem with transfer encoding.
While
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Encoding: gzip
Transfer-Encoding: chunked
works properly, using gzip and chunked as transfer encoding only like this:
Transfer-Encoding: gzip, chunked
results in the browser not displaying the response correctly.
While Chrome tries to download the resource as a .gz file, Firefox tries to display it and renders garbled text.
The strange thing is that the message body generated by the server is exactly the same as when using gzip as the Content-Encoding instead, because RFC 7230 allows applying multiple transfer codings provided the last one applied is chunked.
For example,
Transfer-Encoding: gzip, chunked
indicates that the payload body has been compressed using the gzip
coding and then chunked using the chunked coding while forming the
message body.
This is the original response from the server:
HTTP/1.1 200 OK
Date: Tue, 09 Jul 2019 17:52:41 GMT
Server: jPuzzle
Content-Type: text/plain
Transfer-Encoding: gzip, chunked
1c
òHÍÉÉW(Ï/ÊIQ ÿÿ
a
0
As one can guess, the body is gzipped and then chunked.
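The gzip-then-chunk framing can be reproduced with a small standalone sketch (class and helper names are mine, not from the server), including a round-trip check that the framing is reversible:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipChunkedDemo {
    // Frame a payload as one HTTP/1.1 chunk followed by the terminating 0-chunk.
    static byte[] chunk(byte[] payload) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write((Integer.toHexString(payload.length) + "\r\n")
                .getBytes(StandardCharsets.US_ASCII));
        out.write(payload);
        out.write("\r\n0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] plain = "hello world".getBytes(StandardCharsets.UTF_8);

        // Step 1: apply the gzip transfer coding.
        ByteArrayOutputStream gzBuf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(gzBuf)) {
            gz.write(plain);
        }

        // Step 2: apply the chunked coding last, as RFC 7230 requires.
        byte[] wire = chunk(gzBuf.toByteArray());

        // Sanity check: strip the chunk framing (the first CRLF ends the
        // hex size line we wrote), then gunzip.
        String ascii = new String(wire, StandardCharsets.US_ASCII);
        int eol = ascii.indexOf("\r\n");
        int size = Integer.parseInt(ascii.substring(0, eol), 16);
        byte[] gzBytes = new byte[size];
        System.arraycopy(wire, eol + 2, gzBytes, 0, size);
        byte[] back = new GZIPInputStream(new ByteArrayInputStream(gzBytes)).readAllBytes();
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```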
I would appreciate any help because I can't see where the specs have been violated.
You should use the Content-Encoding header for end-to-end compression.
Transfer-Encoding is a hop-by-hop header, that is applied to a message between two nodes, not to a resource itself. Each segment of a multi-node connection can use different Transfer-Encoding values. If you want to compress data over the whole connection, use the end-to-end Content-Encoding header instead.
Also, send Accept-Encoding: gzip request header to tell the server what the client expects.
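On the client side that looks roughly like this; the response stream is simulated here with an in-memory gzip buffer so the sketch runs without a server:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ContentEncodingClient {
    public static void main(String[] args) throws Exception {
        // Simulated server: a gzip-compressed body plus its Content-Encoding value.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("hello".getBytes(StandardCharsets.UTF_8));
        }
        String contentEncoding = "gzip";
        InputStream body = new ByteArrayInputStream(buf.toByteArray());

        // Client side: after sending Accept-Encoding: gzip, decode the body
        // only if the server actually answered with Content-Encoding: gzip.
        InputStream decoded =
            "gzip".equals(contentEncoding) ? new GZIPInputStream(body) : body;
        System.out.println(new String(decoded.readAllBytes(), StandardCharsets.UTF_8));
    }
}
```

The key point is that the decompression decision hinges on the response's Content-Encoding header, never on what the client asked for: a server is free to ignore Accept-Encoding and send the identity form.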
I'm trying to debug some GAE code on my local development server but have hit a wall. The code uses Google's Blobstore service to facilitate file uploads. The code works just fine on production but not on my local development server.
I'm using the standard Google pattern: the form action in my JSP is blobstoreService.createUploadUrl("/uploadSurvey"), and my servlet then calls blobstoreService.getUploads(request).
The file is correctly uploaded (I can see it using the local admin console) but the call to getUploads() throws the exception: java.lang.IllegalStateException: Must be called from a blob upload callback request.
Looking at the request in the debugger, the required blobkey attribute isn't found, nor are any of the other input parameters in the form. Looking at the raw request (the one dispatched to /_ah/upload/...), the form parameters are present.
I use the Google Cloud Tools app-engine-plugin, which uses the gcloud Python dev server to run the generated WAR.
I realize blobstore is an older GAE feature, but as this code is "working" on prod, I'd prefer to not have to switch to the newer subsystem.
Anyone able to give me a clue where to look to get this all working on my dev server?
Thanks,
-- Dave
P.S. Below is the request forwarded to my servlet after the uploaded file has been stripped out:
POST /uploadSurvey HTTP/1.1
Accept-Encoding: identity
X-APPENGINE-BACKEND-ID: 8
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
X-APPENGINE-SERVER-NAME: localhost
Cookie: JSESSIONID=5773y31x3eut
X-Appengine-User-Email:
X-APPENGINE-DEFAULT-VERSION-HOSTNAME: localhost:8888
X-APPENGINE-SERVER-PROTOCOL: HTTP/1.1
X-Appengine-User-Organization:
X-APPENGINE-DEV-SCRIPT: unused
ORIGIN: http://localhost:8888
X-Appengine-User-Id:
Accept-Language: en-us
X-APPENGINE-SERVER-SOFTWARE: Development/2.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/602.4.8 (KHTML, like Gecko) Version/10.0.3 Safari/602.4.8
X-Appengine-User-Nickname:
Host: localhost:8888
X-Appengine-Dev-Request-Id: wCTAonUKrB
Content-Type: multipart/form-data; boundary="===============1477989950756010976=="
Content-Length: 1372
X-APPENGINE-REQUEST-LOG-ID: 5e8eaef5aff4add89b774badea1fd3a30da8be
X-Appengine-User-Is-Admin: 0
UPGRADE-INSECURE-REQUESTS: 1
X-APPENGINE-SERVER-PORT: 8888
Referer: http://localhost:8888/settings
X-AppEngine-Country: ZZ
X-APPENGINE-REQUEST-ID-HASH: BFD4FDDA
X-APPENGINE-REMOTE-ADDR: ::1
Update:
I added some debugging to http_proxy.py under the gcloud devserver2 directory and observed this content type being forwarded. I'm even more confused now, as it looks like the content is present...
--===============5516630363169856841==
Content-Type: message/external-body; blob-key="XOQvaKc1cdczcwkIHfRFOw=="; access-type="X-AppEngine-BlobKey"
Content-Disposition: form-data; name="myFile"; filename="Naro Group - SNHU - Sales Readiness Assessment - Dec 2016.xls"
Content-Type: application/vnd.ms-excel
Content-Length: 164352
Content-MD5: NjBiNjI0N2M3MjZiMzc3NWMxZDQxYmM5YTU2YmM5YmM=
content-disposition: form-data; name="myFile"; filename="Naro Group - SNHU - Sales Readiness Assessment - Dec 2016.xls"
X-AppEngine-Upload-Creation: 2017-02-16 20:17:05.729401
--===============5516630363169856841==
Content-Type: text/plain
Content-Disposition: form-data; name="newSurveyId"
10001
--===============5516630363169856841==
Content-Type: text/plain
Content-Disposition: form-data; name="newSurveyName"
N
--===============5516630363169856841==
Content-Type: text/plain
Content-Disposition: form-data; name="newUserMessage"
--===============5516630363169856841==
Content-Type: text/plain
Content-Disposition: form-data; name="selectedClient"
6
--===============5516630363169856841==
Content-Type: text/plain
Content-Disposition: form-data; name="selectedPsf"
3
--===============5516630363169856841==
Content-Type: text/plain
Content-Disposition: form-data; name="selectedSection"
1
--===============5516630363169856841==--
I figured out a workaround, but I do think there is a bug here that the Google folks need to look at. The Cloud Tools Python dev server wasn't putting the all-important X-AppEngine-BlobUpload header into the rewritten request. I modified blob_upload.py and http_proxy.py to add it (in tools/devserver2 under google-cloud-sdk). I then had to steal some code from Google's own production server code base, ParseBlobUploadFilter.java, to process the non-standard request payload, build the missing request attributes, and make the original request parameters accessible. Of course, this code path should only be taken when running on a local dev server; on App Engine itself this code is invoked correctly.
I have a Django REST framework web service that works fine with HTTPie and Firefox: when I request with HTTPie I get a JSON-formatted answer, and when I request with Firefox an HTML-formatted one (HTTPie is an HTTP client).
Now I'm building a Java API to communicate with the services, using the URL class to perform requests.
I can receive HTML-formatted answers from the server if I don't override the Content-Type property, so I looked at how HTTPie overrides this property and did the same:
connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");
connection.setRequestProperty("Accept", "*\\*");
Now the communication ends with an HTTP 406 error, which means the client can't accept the answer.
If I use only the Content-Type property, I get no error but still the HTML-formatted answer.
Does anyone know how to solve it?
EDIT (adding requests' header):
httpie:
GET /match/39.3280114/16.241917599999965/0/5/ HTTP/1.1
Host: 127.0.0.1:8001
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: HTTPie/0.9.3
java-API
GET /match/39.3280114/16.241917599999965/0/5/ HTTP/1.1
Host: 127.0.0.1:8001
Accept-Encoding: gzip, deflate
Accept: *\*
User-Agent: Java-API
Solved: I was using the wrong slash in the Accept property.
Your Accept header is malformed. It should be:
Accept: */*
See RFC 7231 § 5.3.2.
However, */* means “any media type.” If you actually want a specific media type (JSON), you should request it:
Accept: application/json
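A minimal sketch with plain HttpURLConnection (the URL is the one from the question; the connection is configured but never actually opened here):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class AcceptHeaderDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://127.0.0.1:8001/match/39.3280114/16.241917599999965/0/5/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // "*/*" uses a forward slash; the Java escape "*\\*" produces "*\*",
        // a backslash, which is not a valid media range and triggers the 406.
        // Requesting the concrete type you want is safer than a wildcard:
        conn.setRequestProperty("Accept", "application/json");

        System.out.println(conn.getRequestProperty("Accept"));
    }
}
```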
I have a problem with broken connections, and I believe it is due to incorrect behavior with respect to HTTP keep-alive, but I can't tell whether the culprit is on the client or the server side.
I'm dealing with a scenario where the client sends an HTTP/1.1 request with:
Connection: close
and the server does NOT reply with a Connection header.
The behavior is as follows:
The client sends the request
The server sends it response
The server does NOT close the connection
The client does NOT close the connection
The client sends another request using the same connection
The server does not do anything and closes the connection after 30 seconds
The components are as follows:
Client: Java HttpConnection (within Dell Boomi iPaaS)
Server: SAP ERP OData Webservice
According to this article: "Should a server adhere to the HTTP Connection: close header sent from a client?", the problem seems to be on the client side. However, the Java HttpURLConnection implementation should be pretty robust (tested on the x64 server VM on Linux, 1.7.0_55-b13, and on Windows 7, 1.7.0_75-b13).
Here is the complete set of headers from the client's first request:
GET /sap/opu/odata/SAP/ZZSALESORDER_SRV/$metadata HTTP/1.1
User-Agent: Boomi Http Transport
Authorization: Basic YmPRIVATESECRETPLEASExNg==
X-CSRF-Token: Fetch
Connection: close
Cache-Control: no-cache
Pragma: no-cache
Host: some.server.behind.firewall.local:8000
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Cookie: sap-usercontext=sap-client=100; SAP_SESSIONID_DEV_100=Cm7LsDSECRETSECRETSECTERFaIMak%3d
And these are the ones from the server's response:
HTTP/1.1 200 OK
content-type: application/xml
content-length: 79750
x-csrf-token: oolTHISAGAINISASECRET3PA==
last-modified: Fri, 25 Mar 2016 17:55:35 GMT
dataserviceversion: 2.0
After the server has replied the client sends a second request using the same connection:
PUT /sap/opu/odata/SAP/ZZSALESORDER_SRV/SalesOrderItems(NumDocSap='1200001534',PosId='000020') HTTP/1.1
User-Agent: Boomi Http Transport
Content-Type: application/atom+xml
X-CSRF-Token: oolZMYSECRETPA==
Connection: close
Authorization: Basic YmPRIVATESECRETPLEASExNg==
Cache-Control: no-cache
Pragma: no-cache
Host: some.server.behind.firewall.local:8000
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Content-Length: 406
Cookie: sap-usercontext=sap-client=100; SAP_SESSIONID_DEV_100=Cm7PRETTYPRIVATESECRETak%3d
and the server does not reply, closing the connection abruptly after 30 seconds.
The problem can be completely resolved by setting the JVM flag
http.keepAlive=false
on the Java client side (Boomi), but this seems more of a workaround than a solution.
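For reference, a minimal sketch of that workaround set programmatically rather than via -Dhttp.keepAlive=false on the command line. Note the property should be set before the first HttpURLConnection is used, since the underlying HTTP client may read the flag only once:

```java
public class DisableKeepAlive {
    public static void main(String[] args) {
        // Equivalent of launching with -Dhttp.keepAlive=false: every
        // HttpURLConnection then closes its socket after the exchange,
        // matching the Connection: close header the client already sends.
        System.setProperty("http.keepAlive", "false");

        System.out.println(System.getProperty("http.keepAlive"));
    }
}
```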
Can someone please explain:
is there an HTTP protocol violation on client or server side?
can this be fixed by sending different headers?