I need to send an HTTP POST with the body gzipped. The server also accepts non-gzipped data but prefers it gzipped, so I'm trying to convert some existing working code to use gzip.
The data is currently set with
httpMethod.setEntity(new UrlEncodedFormEntity(nameValuePairs));
I've tried subclassing HttpEntityWrapper:
static class GzipWrapper extends HttpEntityWrapper
{
    public GzipWrapper(HttpEntity wrapped)
    {
        super(wrapped);
    }

    public void writeTo(OutputStream outstream) throws IOException
    {
        GZIPOutputStream gzip = new GZIPOutputStream(outstream);
        super.writeTo(gzip);
    }
}
and changed the call to
httpMethod.setEntity(new GzipWrapper(
        new UrlEncodedFormEntity(nameValuePairs)));
and added
if (!httpMethod.containsHeader("Accept-Encoding"))
{
    httpMethod.addHeader("Accept-Encoding", "gzip");
}
but now my request just times out. I think there must be something wrong with my GzipWrapper, but I'm not sure what.
On another note, I looked at the http://hc.apache.org/httpcomponents-client-ga/httpclient/examples/org/apache/http/examples/client/ClientGZipContentCompression.java example. Aside from the fact that I don't like interceptors (because they make program flow difficult to follow), it doesn't make sense to me: the request header tells the server that the client accepts gzipped data, but nowhere does it actually gzip-encode any request data; it only unzips the response.
(1) The GzipWrapper implementation is wrong. It transforms the entity content when writing it out to the output stream, but it still returns the Content-Length of the wrapped entity, causing the server to expect more input than the client actually transmits.
(2) You completely misunderstand the purpose of the Accept-Encoding header: it declares which response encodings the client can accept, not how the request body is encoded. See
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
(3) The ClientGZipContentCompression sample is correct. It does not compress the outgoing request entity because it is not meant to do so; see point (2).
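The core of point (1) has a second, related symptom that can be shown with nothing but the JDK: the writeTo() above wraps the target stream in a GZIPOutputStream but never calls finish() on it, so the deflater's buffered bytes and the gzip trailer are never written, and the receiver sees a truncated stream on top of the Content-Length mismatch. A minimal, stdlib-only sketch (class and method names are made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipFinishDemo {

    // Mimics the buggy writeTo(): the GZIPOutputStream is never finished,
    // so buffered deflater output and the gzip trailer are lost.
    static byte[] compressNoFinish(byte[] data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(out);
        gzip.write(data);
        return out.toByteArray();
    }

    // Correct version: finish() flushes the deflater and writes the trailer.
    static byte[] compressWithFinish(byte[] data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(out);
        gzip.write(data);
        gzip.finish();
        return out.toByteArray();
    }

    static byte[] decompress(byte[] data) throws IOException {
        GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(data));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "name=value&other=pair".getBytes(StandardCharsets.UTF_8);
        System.out.println("finished stream round-trips: "
                + new String(decompress(compressWithFinish(payload)), StandardCharsets.UTF_8)
                        .equals("name=value&other=pair"));
        try {
            decompress(compressNoFinish(payload));
        } catch (EOFException e) {
            System.out.println("unfinished stream is truncated: " + e);
        }
    }
}
```

Applied to the wrapper in the question, the fix would be to call gzip.finish() at the end of writeTo() and to override getContentLength() to return -1, so the client sends the entity chunked instead of advertising the uncompressed length.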
Related
I'm trying to reproduce some behavior provided by my front-end app by using a Java HTTP client.
I'm trying to send (stream) binary data from HttpClient to the server over a PUT request, so the content type is application/octet-stream. I have to send an unknown amount of data as it comes in.
Firstly, I used Apache HttpClient because it can handle digest authentication easily (that is a requirement). With it, I use a ContentProducer, which enables writing directly to the OutputStream.
Below is an example:
HttpPut sendDataReq = new HttpPut(HTTP_URI);
ContentProducer myContentProducer = new ContentProducer() {
    @Override
    public void writeTo(OutputStream out) throws IOException
    {
        out.write("ContentProducer rocks!".getBytes());
        out.write(("Time requested: " + new Date()).getBytes());
    }
};
HttpEntity myEntity = new EntityTemplate(myContentProducer);
sendDataReq.setEntity(myEntity);
HttpResponse response = httpClient.execute(sendDataReq);
I expect this piece of code to stream the request (AND NOT THE RESPONSE) from client to server.
Using Wireshark, I'm able to see my PUT request, but it is sent over the TCP protocol and then nothing. When I listen with my front-end web app, I can see that the PUT request is sent over the HTTP protocol with 0 content length, and the data is then sent byte by byte (packets of some amount of bytes) over HTTP with the log info CONTINUATION.
Also, I tried HttpURLConnection, but it has no digest authentication implementation, so I gave up on using it.
Any hints on what is wrong in my ContentProducer and how to accomplish this? Should I use another Java HTTP client? I can provide Wireshark logs of what is expected and what I get.
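A hedged note on the symptoms described: an entity of unknown length is normally sent with chunked Transfer-Encoding, so Wireshark showing no announced content length followed by CONTINUATION data is consistent with the body being streamed rather than dropped. The same unknown-length streaming can be sketched with the JDK's own java.net.http client (Java 11+); note it has no built-in digest authentication either, and the URI below is a placeholder:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.function.Supplier;

public class StreamingPutSketch {

    // Build a PUT whose body comes from an InputStream of unknown length.
    static HttpRequest buildStreamingPut(String uri, Supplier<InputStream> body) {
        return HttpRequest.newBuilder(URI.create(uri))
                .header("Content-Type", "application/octet-stream")
                .PUT(HttpRequest.BodyPublishers.ofInputStream(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildStreamingPut("http://example.com/data",
                () -> new ByteArrayInputStream("streamed bytes".getBytes()));
        // A negative contentLength() means "unknown", which is what makes
        // the client fall back to chunked Transfer-Encoding on the wire.
        System.out.println(req.bodyPublisher().get().contentLength());
    }
}
```

BodyPublishers.ofInputStream reports a negative (unknown) content length, so no Content-Length header is sent; the data goes out in chunks, much like the CONTINUATION frames observed above.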
First of all, I'm not sure whether this is possible or not. I'm flushing bytes of data as a PDF to the browser. Now the requirement is: I want to generate the PDF and also send one more extra object in the response. Is that possible?
I've written something like this, but the result object is not received in the response:
YBUtil.GeneratePdf(response, documentBytes, "Bureau");
result.setStatus("SUCCESS");
return result; // --> I want to pass this object as well
The GeneratePdf method:
public static void GeneratePdf(HttpServletResponse response, byte[] documentBytes, String fileName) {
    response.setHeader("Content-Disposition", "inline;filename=" + fileName + ".pdf");
    response.setContentType("application/pdf");
    response.setHeader("Expires", "0");
    response.setHeader("Cache-Control", "must-revalidate, post-check=0, pre-check=0");
    response.setHeader("Pragma", "public");
    response.setContentLength(documentBytes.length);
    ServletOutputStream out = null;
    try {
        out = response.getOutputStream();
        out.write(documentBytes);
        out.flush();
        out.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
In principle, this is more about the HTTP protocol than about Java.
HTTP is designed to send requests, with one optional request body sent along, and to receive a response in reaction, with one optional response body sent along. One. Not more than that.
When dealing with typical text stuff, you can send/respond a text-like format such as XML, JSON or web forms, that contain all the stuff you want it to contain. But when you want to receive/send a file, it's binary stuff and it must be sent as-is, alongside metadata that tell the file's type and name.
Now when you want to send/receive more than just a file, it looks like you're stuck. Well, no: look up multipart/form-data and realize you can use something similar for an HTTP response, just like an email does.
Java can be programmed to produce a multipart response. However, it is a bit of work to program, and I haven't found a library that effectively helps with it.
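As a sketch of what that hand-rolled multipart response could look like, the body can be assembled directly; the boundary string and part layout below are made up for illustration, and the servlet would then set the content type to multipart/mixed; boundary=... and write the bytes:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class MultipartResponseSketch {

    // Arbitrary boundary; it must not occur inside any of the parts.
    static final String BOUNDARY = "----pdf-and-json-boundary";

    // Assemble a multipart/mixed body holding a JSON part and a PDF part.
    static byte[] buildBody(String json, byte[] pdfBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeAscii(out, "--" + BOUNDARY + "\r\n");
        writeAscii(out, "Content-Type: application/json\r\n\r\n");
        writeAscii(out, json + "\r\n");
        writeAscii(out, "--" + BOUNDARY + "\r\n");
        writeAscii(out, "Content-Type: application/pdf\r\n");
        writeAscii(out, "Content-Disposition: inline; filename=\"Bureau.pdf\"\r\n\r\n");
        out.write(pdfBytes);
        // closing boundary marks the end of the multipart body
        writeAscii(out, "\r\n--" + BOUNDARY + "--\r\n");
        return out.toByteArray();
    }

    static void writeAscii(ByteArrayOutputStream out, String s) throws IOException {
        out.write(s.getBytes(StandardCharsets.US_ASCII));
    }
}
```

The client then has to split the body on the boundary itself, which is the other half of the work no library seemed to take care of.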
I have done this by sending a DTO object that contains the bytes (for the PDF; parsing of the PDF is done on the client side) along with the other values that are needed.
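A sketch of that DTO approach, assuming the PDF bytes are carried as a base64 string inside the JSON payload (field names are invented, and a real DTO would go through a JSON mapper rather than string concatenation):

```java
import java.util.Base64;

public class PdfDtoSketch {

    // Hypothetical DTO serialization: the PDF travels as base64 text
    // next to the other values, in a single JSON response body.
    static String toJson(String status, byte[] documentBytes) {
        String encoded = Base64.getEncoder().encodeToString(documentBytes);
        return "{\"status\":\"" + status + "\",\"pdf\":\"" + encoded + "\"}";
    }

    // Client side: decode the base64 field back into the raw PDF bytes.
    static byte[] decodePdf(String base64) {
        return Base64.getDecoder().decode(base64);
    }
}
```

The cost of this design is roughly a 33% size increase from base64, in exchange for keeping the response a single ordinary JSON document.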
I'm trying to make a little utility that will synchronise data between two servers. Most of the calls there are REST calls with JSON, so I decided to use Apache HttpClient for this.
There is however a section where I need to upload a file. I'm trying to do this using multipart form data with the MultipartEntityBuilder, but I encounter a "Content too long" problem. (I tried gzipping the contents of the file too, but I'm still going over the limit.)
Here's my java code:
HttpPost request = new HttpPost(baseUrl + URL);
MultipartEntityBuilder builder = MultipartEntityBuilder.create();
// create upload file params
builder.addTextBody("scanName", "Test upload");
builder.addBinaryBody("myfile", f);
HttpEntity params = builder.build();
request.setEntity(params);
request.addHeader("content-type", "multipart/form-data");
HttpResponse response = httpClient.execute(request);
Are there better alternatives that I should be using for the file upload part? I'm also going to download files from one of the servers. Will I hit a similar issue when trying to handle those responses?
Is there something I'm doing wrong?
I tried your code and sent a file of about 33 MB, and it was successful. So I think your problem is one of the following:
The created HTTP client has limitations on request size; in this case you need to change the client's properties or use another client.
Somewhere in your code you call the HttpEntity.getContent() method. For multipart requests this method has a limitation of 25 kB; in that case you need to use writeTo(OutputStream) instead of getContent().
In the comments you mentioned Swagger, but I don't understand what that means here. If you use a Swagger-generated API, the problem may occur in its generated code and you would need to fix the generation logic (or something like that; I have never used Swagger).
I hope my answer helps you.
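One more thing worth checking in the question's snippet: request.addHeader("content-type", "multipart/form-data") overrides the Content-Type that MultipartEntityBuilder generates, and the generated header carries the boundary parameter the server needs to parse the body, so it is usually safer to let the entity supply that header. As for alternatives, the JDK's java.net.http client (Java 11+) has no multipart builder, but the wire format is simple enough to assemble by hand; the URL and field names below mirror the question and are otherwise placeholders:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class JdkMultipartSketch {

    static final String BOUNDARY = "----sync-upload-boundary"; // arbitrary

    // Assemble a multipart/form-data body: one text field, one file field.
    static byte[] buildBody(String scanName, String fileName, byte[] fileBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String head = "--" + BOUNDARY + "\r\n"
                + "Content-Disposition: form-data; name=\"scanName\"\r\n\r\n"
                + scanName + "\r\n"
                + "--" + BOUNDARY + "\r\n"
                + "Content-Disposition: form-data; name=\"myfile\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n";
        out.write(head.getBytes(StandardCharsets.UTF_8));
        out.write(fileBytes);
        out.write(("\r\n--" + BOUNDARY + "--\r\n").getBytes(StandardCharsets.UTF_8));
        return out.toByteArray();
    }

    static HttpRequest buildRequest(String url, byte[] body) {
        return HttpRequest.newBuilder(URI.create(url))
                // the boundary MUST appear in the Content-Type header
                .header("Content-Type", "multipart/form-data; boundary=" + BOUNDARY)
                .POST(HttpRequest.BodyPublishers.ofByteArray(body))
                .build();
    }
}
```

This is only a sketch of the wire format; for large files you would feed the parts through a streaming publisher rather than buffering the whole body in memory.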
I've been having an issue with Jetty processing application/json-formatted request body data. Essentially, when the request body is processed by Jetty, the request data is cut off.
I have a relatively large POST body of around 74,000 bytes. As per some advice I found online, I instantiated a new context handler with the setMaxFormContentSize property set to a sufficiently large size of 500,000 bytes.
ServletContextHandler handler = new ServletContextHandler(server, "/");
handler.setMaxFormContentSize(500000);
However, this did not seem to work correctly. I also read online that this property might only work for form encoded data, not application/json, which is a strict requirement of our application.
Is there any way to circumvent this issue? Is there some special constraint class that I can subclass to allow the processing size to increase to at least 500KB?
Edit #1: I should add that I also tried dropping the limit to 5 bytes to see if it would cut off more of the payload. That also didn't work, which seems to imply it's ignoring the property entirely.
Edit #2: Here is where I read the information from the request stream.
@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    try {
        String json = CharStreams.toString(new InputStreamReader(req.getInputStream()));
        ....
    } catch (Exception e) {
        logger.error("Exception in internal api forwarder", e);
        throw e;
    }
}
This seems to be a standard way of reading from a request stream. I also tried using a BufferedReader from req.getReader(), with the same issue.
Vivek
What is this CharStreams object?
It doesn't seem to know, care about, or honor the request character encoding. (Bad idea.)
I suggest you use the servlet's request.getReader() instead of request.getInputStream() (which is really only designed for binary request body content).
Using request.getReader() will at the very least honor the request character encoding properly.
Another thing you might want to look into is request.getContentLength(): verify that the request headers do indeed contain the size you are expecting.
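A stdlib-only sketch of that suggestion: drain the servlet's request.getReader(), which already applies the request's declared character encoding, instead of wrapping getInputStream() in an InputStreamReader that silently falls back to the platform default charset. The class name here is made up:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class RequestBodyReader {

    // Drain a Reader fully; in the servlet, req.getReader() would be
    // passed in, so the container handles the charset decoding.
    static String readAll(Reader reader) throws IOException {
        StringBuilder sb = new StringBuilder();
        char[] buf = new char[4096];
        int n;
        while ((n = reader.read(buf)) != -1) {
            sb.append(buf, 0, n);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readAll(new StringReader("{\"key\":\"value\"}")));
    }
}
```

This also makes a handy place to log the number of characters read and compare it against request.getContentLength() when diagnosing truncation.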
I'm using Jersey on both the server and client of a web application. On the server I have Interceptors as noted in https://jersey.java.net/documentation/latest/filters-and-interceptors.html to handle GZIP compression going out and coming in. From the server side, it's easy enough to select which resource methods are compressed using the #Compress annotation. However, if I also want to selectively compress entities from the Client to the Server, what's the best way to do that?
I had started adding a Content-Encoding: x-gzip header to the request, but my client-side interceptor does not see that header (presumably because it's not an official client-side header).
Before you point to section 10.6 of the Jersey documentation, note that this works for the Server side. Although I could do something similar on the Client, I don't want to restrict it by URL. I'd rather control the compression flag as close to the request as possible (i.e. Header?).
Here's what I have so far, but it does not work since my header is removed:
class GzipWriterClientInterceptor implements WriterInterceptor {
    private static final Set<String> supportedEncodings =
            new GZipEncoder().getSupportedEncodings(); // supports gzip and x-gzip

    @Override
    public void aroundWriteTo(WriterInterceptorContext context)
            throws IOException, WebApplicationException {
        if (supportedEncodings.contains(context.getHeaders().getFirst(HttpHeaderConstants.CONTENT_ENCODING_HEADER))) {
            System.out.println("ZIPPING DATA");
            final OutputStream outputStream = context.getOutputStream();
            context.setOutputStream(new GZIPOutputStream(outputStream));
        } else {
            // remove it since we won't actually be compressing the data
            context.getHeaders().remove(HttpHeaderConstants.CONTENT_ENCODING_HEADER);
        }
        context.proceed();
    }
}
Sample Request:
Response response = getBaseTarget().path(getBasePath()).path(graphic.uuid.toString())
.request(DEFAULT_MEDIA_TYPE)
.header(HttpHeaderConstants.CONTENT_ENCODING_HEADER, MediaTypeConstants.ENCODING_GZIP)
.put( Entity.entity(graphic, DEFAULT_MEDIA_TYPE))
I also have a logging filter that shows all the request headers. I've simplified the above, but all the other headers I add are logged.